Modeling Uncertainty: How to make sense of changing predictions
By Benjamin Jacobson and Stefanie Friedhoff, HGHI
In a pandemic, hard evidence is a precious commodity. At a time when we have a critical need for answers, data and clear evidence to support better decision making are only slowly forthcoming. There is so much we still don’t know about the virus, its features, how it spreads, and who it kills and why.
To fill this information void, researchers try to bring past experiences and their best expert interpretations to the task by modeling potential outcomes. Even when there is no pandemic, key decisions in public health or economics are based on such models. But the more uncertainty there is, the more diverse these predictions become.
Unsurprisingly, this quickly gets confusing for anyone who hasn’t spent their career in mathematical modeling. Estimates of how many people will eventually get sick and how many will die from COVID-19 in the U.S. have ranged from 50,000 to upwards of 2 million. And while some estimates say we need 20 million coronavirus tests every day nationwide, our latest model comes out at around 900,000 daily tests by May 15.
Models provide guideposts, not facts
The key to understanding such discrepancies is that models provide guideposts, not facts. In this pandemic, the estimates scientists generate are like thoughtful contributions to a discussion that over time will bring better answers as we work through how we arrived at different estimates—and incorporate new evidence as it becomes available.
Understanding the assumptions that underlie various models and comparing their outcomes can make it easier to place each model in context and draw useful conclusions.
In this post, we share details on how our HGHI team arrived at the latest estimate of over 900,000 tests needed in the U.S. by May 15, and explain why this estimate differs from the 500,000 daily tests we projected were needed by May 1 (while the method underlying these numbers remains the same). We explore the reasons these numbers have changed and may continue to change, and how to make sense of this variation.
How many cases will there be?
To estimate how much testing a state or country needs two weeks from now, our estimates start by predicting how many active cases they will have at that point in time. A state with only one infected individual clearly does not need as many tests as one with thousands of cases, regardless of how many people live there. Because predicting cases lies at the core of predicting tests, much of the variation seen in estimating tests is a manifestation of the uncertainty inherent in estimating future case counts.
Which model to use?
At this point in the pandemic, there are many COVID-19 models predicting the number of cases and deaths for each state in the weeks to come. They are listed on this CDC page, and there is also now an ensemble model aiming to combine the different approaches.
To get an understanding of how they compare, we applied our strategies for estimating tests to three different models that predict case and death counts: The Youyang Gu model, the Los Alamos model and the MIT model.
These models provide estimates of cases and deaths up to six weeks out, based on the trends in case and death counts at the time of each update. While each model takes a different approach to modeling and arrives at different estimates, even within a single model we have seen estimates change dramatically over time as states release new data and parameters are updated.
Social distancing didn’t reduce cases as much as predicted
Recently, many of these changes have arisen from the realization that while social distancing has been a valuable and necessary step in containing the virus and slowing transmission, it has not brought about the dramatic falloff that many had hoped for. Most early models showed a peak but failed to anticipate the plateau of cases that followed it. These early models predicted no new deaths by the end of May, but with the country still recording over 1,500 deaths per day, that now seems impossible. As a result, models have been increasing their estimates for future cases and deaths across the board. This plays a large role in why our prediction of needed tests had to increase from 500,000 to almost one million: more infections mean more testing.
It is worth noting that both the case and death counts these models are calibrated to are imperfect. Official numbers likely miss COVID-19 cases and COVID-19 deaths in most states. Assuming that death counts are somewhat more accurate than case counts, because deaths are harder to miss, we chose to base our calculations on the estimated number of deaths from these models, and then calculate the number of estimated new cases from them. To do so, we look two weeks past our target estimation date (May 15 in this case) and take the projected deaths for May 29. Why two weeks? Because there is evidence for an approximately two-week infection period in fatal cases of COVID-19. Then we multiply those death estimates for May 29 by 100, based on an assumed 1% Case Fatality Rate. This yields our best guess of how many new cases on May 15 would lead to those deaths on May 29.
Two strategies to predict tests needed
Once we have the predicted case counts in this way, we apply two strategies:
- The test and trace strategy. Here, we determine the number of tests needed to test every symptomatic individual and at least 10 contacts of each individual who tests positive for COVID-19. With the test and trace strategy, we first assume that 75% of COVID cases will show symptoms, based on a CDC report that about 25% of cases are asymptomatic. We then add in additional influenza-like illness (ILI) cases, drawn from CDC ILI surveillance in previous years, since these individuals will also present with symptoms and need testing. Finally, we assume that each symptomatic individual testing positive for COVID-19 will have on average 10 contacts who will need to be tested.
- The “below 10% positive” strategy, in which we determine the number of tests needed to keep the positive rate at or below 10%. For this 10% model we simply multiply the number of expected cases by 10, so that the positive rate stays at or below 10%, the suggested maximum rate from the World Health Organization and others. This is an imperfect target and not a guarantee that a state is doing enough tests, but it is a good indicator to watch and compare against the test and trace strategy outcomes.
These strategies generally result in similar estimates of needed testing, suggesting that we’re in the right ballpark. We move forward with the test and trace strategy as we feel it captures more nuance and better represents the actual strategies that should be implemented with testing.
Different numbers, same direction
If we now apply our test and trace strategy to different models, we can see that the outcomes are not the same, but show the same directionality. This highlights why, even with significant variance and uncertainty, these models can give insight into which states need to ramp up testing, so long as they are treated as guideposts rather than precise targets.
How many tests will New York City need by June 1?
While we were running our models to get a better understanding of what estimates to settle on and publish, the NYT asked us for specific estimates of needed testing in New York City on June 1. Here is what this calculation looks like using data available on May 6:
The Youyang Gu model projects 41.8 new deaths on June 15 in NYC. This means 4,180 new cases on June 1 in NYC based on a 1% Case Fatality Rate (CFR) and a two-week infection period. Using our two strategies for calculating tests we get estimates of 35,415 (test and trace) to 41,800 (10%) tests needed.
The Los Alamos model projects 22.3 new deaths on June 15 in NY State, which implies 2,233 new cases in NYC on June 1. This results in estimates of 19,352 (test and trace) to 22,330 (10%) tests needed on June 1.
The MIT model projects 27.5 new deaths on June 15 in NY, which implies 2,750 new cases in NYC on June 1. This results in estimates of 23,618 (test and trace) to 27,500 (10%) tests needed on June 1.
(Note: Models provide case estimates on the state level, but we extrapolate to NYC based on current data showing that 55% of the cases in NY State come from NYC.)
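As a rough check, the per-model arithmetic above for the “below 10% positive” strategy can be reproduced in a few lines (the test and trace figures additionally fold in an ILI estimate not listed in this post). We take the Los Alamos death figure as 22.33, the value implied by the quoted 2,233 cases; the text rounds it to 22.3.

```python
# Reproducing the "below 10% positive" figures above from each model's
# projected June 15 deaths, already scaled to NYC. The Los Alamos value
# 22.33 is inferred from the quoted 2,233 cases (the text rounds to 22.3).

CASES_PER_DEATH = 100  # inverse of the assumed 1% case fatality rate

nyc_deaths_june_15 = {"Youyang Gu": 41.8, "Los Alamos": 22.33, "MIT": 27.5}

for model, deaths in nyc_deaths_june_15.items():
    cases_june_1 = deaths * CASES_PER_DEATH  # back out cases two weeks earlier
    tests_needed = cases_june_1 * 10         # keep test positivity under 10%
    print(f"{model}: {cases_june_1:,.0f} cases -> {tests_needed:,.0f} tests")
```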
Overall these estimates provide a range of needed tests from 19,352 to 41,800. This seems like a big range, but when you consider that NYC performed only about 10,000 tests a day in early May, it becomes clear that many more tests need to become available and be made accessible regardless of which model you choose.
Why we chose the Gu model for our latest update
In our earlier estimates predicting testing needs for May 1, we used the Los Alamos model to anticipate numbers of cases. Learning from what we have observed since, we switched to using the Gu model death estimates for our latest projections. Here is why:
- Tracking day by day, the Gu model's predictions have come closest to matching what has actually happened
- Many other models have overestimated the impact of social distancing, and thus had to correct more as it became clear that social distancing hasn't reduced infection and death rates as much as anticipated
- The Gu model makes fewer assumptions by using a machine learning approach to infer parameters
- The CDC ensemble model, mentioned above, tracks closest to the Gu model
This switch to a new model plays only a small role in why our numbers have changed from previous iterations—the general shift across all models towards higher numbers of predicted cases remains the primary driver of the increase in our estimates.
Making sense of evolving numbers
With such a quickly changing landscape, it is important to take a step back and remember what these numbers are meant for. Rather than viewing estimates of testing as highly specific targets, we think of them as offering a framework for moving forward. All states must focus on how to use their tests effectively and strategically to limit the spread of COVID-19, but states that fall far short of the required tests must also continue to push to expand testing capacity.
Also, testing thresholds aren’t binary: A state that doubles its testing is in better shape to limit COVID spread whether or not it reaches its estimated need. As numbers continue to change in the future, any progress is valuable. Rather than focusing on specific numbers, it is the directionality of a state that matters. Does it need a lot more testing, a little more, or is it in range?
Testing, as we all know, is at the heart of our collective ability to save lives and bring the economy back. As the virus continues to spread, more or less slowly, estimates of needed tests will continue to change, just as the localized outbreaks do. They can serve as a marker of how well individual communities are doing in their response.