Inside the C.D.C.'s Pandemic Weather Service – The New York Times

Scientists have been modeling infectious-disease outbreaks since at least the early 1900s, when the Nobel laureate Ronald Ross used mosquito-reproduction rates and parasite-incubation periods to predict the spread of malaria. In recent decades, Britain and several other European countries have made forecasting a routine part of their infectious-disease control programs. Why, then, has forecasting remained an afterthought, at best, in the United States? For starters, the quality of any given model, or resulting forecast, depends heavily on the quality of the data that goes into it, and in the United States, good data on infectious-disease outbreaks is hard to come by: poorly collected in the first place; not easily shared among different entities like testing sites, hospitals and health departments; and difficult for academic modelers to access or interpret. "For modeling, it's crucial to understand how the data were generated and what the strengths and weaknesses of any data set are," says Caitlin Rivers, an epidemiologist and the associate director of the C.F.A. Even simple metrics like test-positivity rates or hospitalizations can be loaded with ambiguities. The fuzzier those numbers are, and the less modelers understand about that fuzziness, the weaker their models will be.

Another fundamental problem is that the scientists who make models and the officials who use those models to make decisions are often at odds. Health officials, concerned with protecting their data, can be reluctant to share it with scientists. And scientists, who tend to work in academic centers rather than government offices, often fail to factor the realities that health officials face into their work. Misaligned incentives also prevent the two from collaborating effectively: academia tends to favor advances in research, whereas public-health officials need practical solutions to real-world problems, and they need to implement those solutions on a large scale. "There's a gap between what academics need to succeed, which is to publish, and what's needed to have real impact, which is to build systems and structures," Rosenfeld says.

These shortcomings have hampered every real-world outbreak response so far. During the H1N1 pandemic of 2009, for example, scientists struggled to communicate effectively with decision makers about their work and in many cases failed to access the data they needed to make useful projections about the virus's spread. They still built many models, but almost none of them managed to influence the response effort. Modelers faced similar hurdles with the Ebola outbreak in West Africa five years later. They managed to guide successful vaccine trials by pinpointing the times and places where cases were likely to surge. But they were not able to establish any coherent or enduring system for working with health officials. "The network that exists is very ad hoc," Rivers says. "A lot of the work that gets done is based on personal relationships. And the bridges that you build during any given crisis tend to evaporate as soon as that crisis is resolved."


Scientists and health officials have made many attempts to close these gaps. They've created several programs, collaborations and initiatives in the past two decades, each one meant to improve the science and practice of real-world outbreak modeling. How well those efforts fared depends on whom you ask: one such effort changed course after its founder retired; some ran out of funding; others still exist but are too limited in scope to tackle the challenges at hand. Marc Lipsitch, an infectious-disease epidemiologist at Harvard and the C.F.A.'s director for science, says that, nonetheless, each contributed something to the current initiative: "It's those previous efforts that helped lay the groundwork for what we are doing now."

At the pandemic's outset, for example, modelers relied on the lessons they had learned from FluSight, an annual challenge in which scientists develop real-time flu forecasts that are then gathered on the C.D.C.'s website and compared with one another, to build a Covid-focused system that they called the Covid-19 Forecast Hub. By early April 2020, this new hub was publishing weekly forecasts on the C.D.C.'s website that would eventually include death counts, case counts and hospitalizations at both the state and national levels. "This was the first time modeling was formally incorporated into the agency's response at such a large scale," George, the C.F.A.'s director for operations, told me. "It was a huge deal. Instead of an informal network of individuals, you had somewhere in the realm of 30 to 50 different modeling groups that were helping with Covid in a consistent, systematic way."

But if those projections were painstaking and modest (scientists ultimately decided that any forecasts more than two weeks out were too uncertain to be useful), they were also no match for the demands of the moment. As the coronavirus epidemic turned into a pandemic, scientists of every ilk were flooded with calls. School officials and health officials, mayors and governors, corporate leaders and event organizers all wanted to know how long the pandemic would last, how it would unfold in their specific communities and what measures they should employ to contain it. "People were just freaking out, scouring the internet and calling any name they could find," Rosenfeld told me. Not all of those questions could be answered: data was scant, and the virus was novel. There was only so much that could be modeled with confidence. But when modelers balked at these requests, others stepped into the void.
