Machine-Readable Prevalence Estimates
2023-05-12
Writing prevalence estimates as prose, as in the previous post, has some downsides:

- The calculations are manual, which makes it harder to catch errors.
- It's hard to tell exactly how sourcing works for inputs that are pulled in from elsewhere.
- You might want multiple estimates based on changing inputs over time.
At the NAO we also started with prose estimates, in Google Docs, but in addition to the issues above we found that the review tooling wasn't a good fit for the kind of deep reviews we wanted. After some initial struggles we switched to Python to represent our estimates; you can see the code on GitHub.
An estimate is a combination of inputs (numbers we get from somewhere else) and calculations (how we combine those inputs). Most of the effort is in the inputs: making sure it's clear where the numbers come from. For example, here's how we represent that the CDC estimates that there were 1.2M people with HIV in the US in 2019:
    us_infected_2019 = PrevalenceAbsolute(
        infections=1.2e6,
        country="United States",
        date="2019",
        active=Active.LATENT,
        source="https://www.cdc.gov/hiv/library/reports/hiv-surveillance/vol-26-no-2/content/national-profile.html#:~:text=Among%20the%20estimated-,1.2%20million%20people,-living%20with%20HIV",
    )
An absolute prevalence isn't useful to us without connecting it to a population. Here's how we could represent the corresponding population:
    us_population_2019 = Population(
        people=328_231_337,
        country="United States",
        date="2019-01-01",
        source="https://www.census.gov/newsroom/press-releases/2019/new-years-population.html",
    )
And we can connect these to get a Prevalence:

    us_prevalence_2019 = us_infected_2019.to_rate(us_population_2019)
The to_rate method checks that the locations and dates are compatible, does the division, and gives us a Prevalence.
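To make that concrete, here's a minimal sketch of how classes like these could fit together. This is not the NAO's actual code (the repository is the place to look for that): the field names, the per-100,000 units, and the year-only date check are assumptions for illustration.

    # Illustrative sketch only, not the NAO's actual classes; field names,
    # units, and checks are assumptions.
    from dataclasses import dataclass
    from enum import Enum

    class Active(Enum):
        ACTIVE = "active"
        LATENT = "latent"

    @dataclass
    class Prevalence:
        infections_per_100k: float
        country: str
        date: str
        active: Active
        source: str

    @dataclass
    class Population:
        people: int
        country: str
        date: str
        source: str

    @dataclass
    class PrevalenceAbsolute:
        infections: float
        country: str
        date: str
        active: Active
        source: str

        def to_rate(self, population: Population) -> Prevalence:
            # Only combine inputs that describe the same place and time;
            # here the date check just compares years.
            assert self.country == population.country
            assert self.date[:4] == population.date[:4]
            return Prevalence(
                infections_per_100k=100_000 * self.infections / population.people,
                country=self.country,
                date=self.date,
                active=self.active,
                source=self.source,
            )

Under this sketch, the to_rate call above would work out to roughly 366 infections per 100,000 people (1.2M divided by 328M).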
For a more complex example, you could look at norovirus.py. This is doing the calculation from the previous blog post, with the addition of estimates for the Norovirus Genogroup I and II subtypes.
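The subtype breakdown can be sketched in the same style. One natural approach, which may or may not match what norovirus.py actually does, is to scale a total prevalence by each genogroup's share of cases; everything below is made-up illustration, not the values from the real estimate.

    # Purely illustrative: split a hypothetical total norovirus prevalence
    # across genogroups using hypothetical case fractions. These numbers are
    # made up and are not the ones in norovirus.py.
    total_infections_per_100k = 1_000.0
    genogroup_share_of_cases = {"GI": 0.2, "GII": 0.8}

    genogroup_infections_per_100k = {
        genogroup: total_infections_per_100k * share
        for genogroup, share in genogroup_share_of_cases.items()
    }
    print(genogroup_infections_per_100k)  # {'GI': 200.0, 'GII': 800.0}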
For each estimate, one team member creates the initial version and then we use GitHub's code review process for a line-by-line review. This includes validating that all inputs match what's listed on the external site and that we're using the data source correctly, in addition to checking that the overall structure of the estimate is reasonable.
I think there's a decent chance this isn't the final form this will take, and as we get further along in the project we'll want to make the details of the prevalence calculation available to the model so it can understand uncertainty. Most of the work is in determining the inputs and reviewing the structure of the estimate, however, so if we do need to make changes like that I expect they won't be too bad.
This post describes in-progress work at the NAO and covers work from a team. The Python-based estimation framework is mostly my work, with help from Simon Grimm and Dan Rice.