4 Questions to Evaluate the Accuracy of COVID-19 Predictive Models

By Erik Kleinsmith, Associate Vice President, Public Sector Outreach, American Military University

In our self-quarantine, every day seems to blend into the next. We do what we can from our homes, taking care of our work and family responsibilities while waiting out this pandemic. Either physically or psychologically, COVID-19 has invaded everyone’s lives. In the midst of this crisis, leaders at the federal, state, and local levels seek both solutions and perspectives on how this pandemic will play out. Predictive analysis shapes the planning, the restrictions, and the allocation of limited resources to combat it.

To help explain everything related to COVID-19, there is suddenly an abundance of predictive models featured on the news and within our social media feeds. These models attempt to show the path the virus has already taken, how many people are expected to come down with the virus, and the percentages of people expected to either succumb to or overcome the novel coronavirus in the coming weeks.

Anyone watching or reading the news or scrolling through their social media accounts has likely already seen dozens of these predictive models with algorithmic curves displayed on a timeline or as red blotches on a map showing the impact of the virus on various parts of the nation. From these models, pundits and experts tell us when this pandemic will peak, recede, and end so we can go back to our former, albeit modified, way of life.

Unfortunately, many of these predictive models are wrong; they have not been critically vetted or challenged. Consequently, leaders from the President to local school boards have had to make radical, life-changing decisions based on these models.

The Need to Critically Evaluate Predictive Models

Models are only as good as the data that populates them, the person who designs them, and the organization that presents and interprets them. Because of this, many predictive models do not consider the most pertinent information, use the wrong information, or present information in a way that is misleading.

As a result, many models don’t stand the test of time. For the most part, this is understandable: the situation is constantly evolving, with new information released hourly from around the world. In addition, there is little past information to compare current data against, because there hasn’t been a widespread pandemic like this in recent history. Prognosticators of hurricanes, earthquakes, and political elections have more empirical data to work with than those combating the virus have today.

Just as the public should be skeptical and take a critical look at news sources, we should also critically evaluate predictive models and not assume that they are accurate just because they were created by a medical expert. Below are some basic questions to consider when evaluating pandemic models and the information they present:

Question 1: Who created the model?
In the intelligence field, analytic products and assessments almost always have named authors. Whether it is an individual intelligence officer providing an intelligence briefing or a group of analysts writing a strategic assessment for a publication, decision makers demand to know who authored them. Many people do not like having to put their name to such a document because they don’t want to be held responsible if the information is inaccurate, but the public should demand to know who created these models. Be skeptical of medical models that don’t cite an author. Every predictive model, no matter how factual it may look on the surface, can carry the underlying bias of the creator who wrote or presented it.

Question 2: What is the purpose of the model?
Every model has a reason for its creation and use, and knowing that reason is important for understanding its results. Is the model designed to assist medical supply chain managers or first responders? Does the source organization have a political or financial agenda that the model supports? The adage that “figures can lie and liars can figure” applies here: ask whether those presenting the model have cherry-picked information to advance a political argument.

Question 3: What data and information is the model using?
Demand to know what data a particular model draws from. Another useful adage is “garbage in, garbage out,” and it is never truer than when applied to some of the models being used to predict the spread of this virus. Skeptics should ask what sources of information a model uses and whether those sources are trustworthy. For example, the ability to test for the virus is not uniform across the world; therefore, reported infection rates are not accurate. Likewise, information from countries with totalitarian governments must be considered unreliable because their data cannot be verified by outside sources. Throwing unconfirmed data into a model without a second thought guarantees that the model’s output is garbage as well.
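
To make the “garbage in, garbage out” point concrete, here is a minimal Python sketch. Every figure in it is hypothetical, chosen only to show the effect of one hidden assumption: the same reported case count implies very different infection rates depending on what fraction of true infections testing actually detects.

```python
# Hypothetical illustration: the same reported case count yields very
# different infection-rate estimates depending on an assumed testing
# detection rate. All numbers are invented for demonstration only.

confirmed_cases = 10_000    # cases a region has reported (hypothetical)
population = 1_000_000      # region's population (hypothetical)

# Assumed fraction of true infections that testing actually catches.
# This is unknown in practice; each value implies a different "true"
# infection rate from the exact same reported data.
for detection_rate in (1.0, 0.5, 0.1):
    estimated_infections = confirmed_cases / detection_rate
    rate = estimated_infections / population
    print(f"detection {detection_rate:>4.0%} -> "
          f"estimated infection rate {rate:.1%}")
```

Running this prints estimated infection rates of 1%, 2%, and 10% from identical case data; a model that never states which detection rate it assumes is hiding the very input that drives its output.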

Question 4: How does the model use data and information to create its results?
The public should also require a clear explanation of the algorithms used to generate a model’s results. When the public started paying attention to COVID-19 and went into a toilet paper-infused panic, the models used to predict its severity were all over the map. Projected death tolls ranged from a few thousand to millions worldwide, but reporters, pundits, celebrities, and other self-declared epidemiologists presented these models without questioning their veracity or accuracy.
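
One reason projections could plausibly span thousands to millions is simple arithmetic: a death projection is the product of two highly uncertain numbers, so modest disagreements in either compound into enormous ranges. The back-of-the-envelope sketch below uses the U.S. population for scale; the attack rates and fatality rates are assumptions invented purely for illustration, not estimates from any actual model.

```python
# Hypothetical illustration of why early death projections varied so
# widely: total deaths = population x attack rate x fatality rate, and
# both rates were deeply uncertain. All rates below are made up.

US_POPULATION = 330_000_000

for attack_rate in (0.05, 0.40):         # fraction ultimately infected
    for fatality_rate in (0.001, 0.01):  # infection-fatality rate
        deaths = US_POPULATION * attack_rate * fatality_rate
        print(f"infected {attack_rate:.0%}, fatality {fatality_rate:.1%}: "
              f"~{deaths:,.0f} deaths projected")
```

These four combinations alone span roughly 16,500 to 1,320,000 projected deaths, which is why a model that won’t disclose its assumed rates cannot be meaningfully compared to any other.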

It is especially important that decision makers ask very critical questions about how the model calculated its results. Did it take into account work stoppages, social distancing measures, and the phenomenal retooling of many American businesses to start manufacturing ventilators and masks? How has the model accounted for the rapid and almost daily successes resulting from increased testing? It’s important to note that the algorithms used in these models should be constantly updated because an algorithm created a few weeks ago can be quickly outdated by more recent events.
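
As one illustration of how sensitive those algorithmic curves are to such assumptions, here is a minimal SIR (Susceptible-Infected-Recovered) sketch. It is not any specific published COVID-19 model; the parameter values are purely illustrative, with a halved transmission rate `beta` serving as a crude stand-in for measures like social distancing.

```python
# Minimal SIR sketch: halving the transmission rate beta (a crude
# stand-in for social distancing) lowers and delays the predicted peak.
# Not any specific published model; all parameters are illustrative.

def sir_peak(beta, gamma=0.1, population=1_000_000, days=730):
    """Simulate a basic daily-step SIR model and return
    (peak number infected, day of peak)."""
    s, i, r = population - 1.0, 1.0, 0.0
    peak, peak_day = i, 0
    for day in range(1, days + 1):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        if i > peak:
            peak, peak_day = i, day
    return peak, peak_day

for beta in (0.3, 0.15):  # before vs. after distancing (hypothetical)
    peak, day = sir_peak(beta)
    print(f"beta={beta}: peak of {peak:,.0f} infected on day {day}")
```

In this toy model, halving `beta` cuts the peak number of simultaneous infections several-fold and pushes it months later, which is exactly why a curve computed before behavior changed can be badly outdated a few weeks afterward.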

By asking these basic questions, the public can more effectively evaluate the veracity and intention of predictive models. Perhaps more importantly, we can also compare and contrast different models to verify that the information presented is accurate and unbiased.

Just because a model was developed by a medical professional or published by a medical organization doesn’t mean it shouldn’t be critically evaluated. Medical predictive models should be subjected to the same scrutiny as intelligence reports so that the most accurate and reliable information is being presented to understand the severity of this virus.

About the Author: Erik Kleinsmith is the Associate Vice President for Business Development in Intelligence, National & Homeland Security, and Cyber for American Military University. He is a former Army Intelligence Officer and the former portfolio manager for Intelligence & Security Training at Lockheed Martin. Erik is one of the subjects of a book entitled The Watchers by Shane Harris, which covered his work on a program called Able Danger, tracking Al-Qaeda prior to 9/11. He is the author of the 2020 book, Intelligence Operations: Understanding Data, Tools, People, and Processes. He currently resides in Virginia with his wife, son, and daughter. To contact the author, email IPSauthor@apus.edu. For more articles featuring insight from industry experts, subscribe to In Public Safety’s bi-monthly newsletter.
