Monday, August 2, 2021

“Follow the Science” Might Not Mean What You Think It Means



Here’s a guest post by ASU’s Richard Hahn, reprinted with his permission.  I suspect he’d be happy to respond to comments!


The problem with punchy slogans is that they are subject to (mis)interpretation. In the wake of the Covid-19 pandemic, it has become common for folks to urge policy-makers to “follow the science”. But what exactly does this mean? There is a version of this slogan that I strongly support, but I worry that many of the people invoking it mean something different, and that this interpretation could actually undermine faith in science. In this longish essay I discuss the inherently approximate nature of science and explore how a scientific mindset can lead to near-sighted policy decisions. At the very least, readers may enjoy the linked articles, which represent a greatest-hits collection of writing that influenced my own thinking about public health policy over the past year.

What even is science?

All empirical knowledge is approximate.

First of all, making sense of “follow the science” demands an understanding of what science is. It is important to remember that science is a process for learning about the world, not merely an established body of knowledge to be consulted. Some areas of science, like Newtonian physics, might give the impression of finality, but that is misleading. Even classical physics is just an approximation; over time we have figured out tasks for which that approximation is adequate (aiming missiles) and those for which it isn’t (GPS, which requires relativistic corrections). For newer science, especially science pertaining to complex subjects like human biochemistry, we often do not have a good sense of how our approximations might fail, nor do we have a more refined theory at the ready.

Scientific inquiry is a process of continual revision and refinement, and the knowledge we uncover by this process is always provisional (subject to revision) and contingent (subject to qualification). For example, we might think we know that a struck match will light, because we understand the basic chemistry of match heads. But if the match is in a chamber with no oxygen, then it won’t light. It also won’t light if it is in a strong magnetic field. The “scientific theory of matches” involves understanding the mechanisms of the match well enough that we can forecast novel scenarios in which the match will or will not light. Sometimes we will be wrong, but that is how we learn, and how our approximations improve.

Some scientifically acquired knowledge is more approximate than others.

The scientific knowledge that underlies jet planes or heart surgery is quite a lot different from that underlying cell biology or genetics, which is in turn quite a lot different from that underlying epidemiology or climate science. Scientific inquiry is an idealized method of establishing how the world works, but some phenomena are simply less well understood than others, despite being investigated using common scientific techniques (experiments, observation, data analysis, etc.). A surgeon’s authority on surgery is qualitatively different from a climatologist’s authority on climate; they are both experts, but their domains of expertise are vastly different. One major difference between areas of inquiry is whether or not they are amenable to repeatable, essentially similar experiments; surgical procedures are, climatology (along with paleontology and economics) is not.

Mathematical/computer models incorporate many assumptions.  

Computational models encapsulate many relationships at the same time. To continue with the match example, a quantitative model of match-lighting might include the size of the match, the proportion of potassium chlorate to sulfur, the force with which it is struck, the amount of oxygen in the environment, the strength of the magnetic field surrounding the match, and so on. Accordingly, models reflect our knowledge of processes that have been observed, but they also extend to situations that have not yet been observed. In this latter case — when a model produces a prediction for a situation where there is no data yet — the model’s prediction is not a demonstration of scientific fact; rather, it is the starting point of a scientific experiment! Moreover, if the inputs to a model are wrong, then its predictions will be wrong, too, even if the model itself is correct. When models contain within them dozens or hundreds of potentially violable assumptions, that means there is lots of science left to do, rather than that there are clear conclusions to be drawn.
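To make the point concrete, here is a deliberately crude toy model of match-lighting. Every threshold and parameter below is invented for illustration; the point is only that the prediction inherits every input assumption, so a wrong input flips the output even when the model itself is unchanged.

```python
def lights(oxygen_fraction, strike_force, head_mass_g, min_force=1.0):
    """Crude go/no-go model: the match lights only if there is enough
    oxygen, a hard enough strike, and a nonzero match head.
    All thresholds are hypothetical, chosen for illustration only."""
    return (
        oxygen_fraction > 0.15        # assumed combustion threshold
        and strike_force >= min_force  # assumed minimum striking force
        and head_mass_g > 0            # some match head must be present
    )

# Plausible inputs: the model predicts the match lights.
print(lights(oxygen_fraction=0.21, strike_force=2.0, head_mass_g=0.05))  # True

# Same model, one wrong input (a sealed, oxygen-free chamber):
# the prediction flips, though the model itself hasn't changed.
print(lights(oxygen_fraction=0.0, strike_force=2.0, head_mass_g=0.05))   # False
```

A real combustion model would have far more inputs, each carrying its own assumption; the garbage-in, garbage-out behavior is the same.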

Science alone cannot provide decisions.

When the correct inputs to a model are unknown, or the model’s appropriateness in a novel setting is questionable, the scientific method itself doesn’t provide guidance for navigating these uncertainties. Science alone cannot settle practical decisions for two fundamental reasons. First, science provides a process for (eventually) resolving uncertainties through experimentation, but decisions often must be made before that process can be undertaken. Pure science has the luxury of remaining agnostic until rigorous investigations are conducted, but insincere skepticism has no place when critical decisions must be made. Second, the scope of practical considerations is usually much broader than what sound scientific method allows. By its very nature, controlled experimentation involves narrowing the scope of the phenomenon under consideration. The broader the domain of impact, the less science can provide clear answers — studying a single match is fundamentally different from studying the economic and human-health ramifications of forest fires.

Appeals to Science during Covid-19

In light of the above, what might “follow the science” mean in 2021? If it simply means that scientists and scientific studies should be consulted when evaluating policy decisions, that is all to the good. To do otherwise would be very foolish! However, if instead it means that scientists should be empowered to make policy decisions unilaterally, or that only scientifically obtained knowledge should be considered, then I would argue that it is unwise. A wait-and-see approach to ambiguity, in conjunction with a narrow purview, can make scientists ineffective policy-makers. This dynamic was apparent on many matters related to Covid-19.

Lockdowns and mathematical epidemiology models. Standard models of infectious disease predict that early-stage epidemics will surge very much like compound interest makes credit card debt balloon: more infected people leads to more infected people, and so on. The steepness of the surge depends on various inputs to the model. One important input that dictates the human toll of an epidemic is the fatality rate. It is well known that fatality rates in the earliest stages of an epidemic are typically inflated — both because mild cases are not included in the denominator and because the numerator is apt to be calculated based on only the sickest patients — but influential models in March 2020 used these over-estimates anyway. Why? I suspect it has to do with the modelers’ narrowly construed objective: to limit deaths due to Covid-19. Based on this narrow mandate, policy recommendations would have been extreme according to any standard model of disease spread; selectively emphasizing the most dire forecast was essentially an act of persuasion. But plugging in worst-case values for key inputs had the effect of prioritizing Covid-19 deaths to the exclusion of the multitude of other factors affecting human welfare. Narrowness of focus is an asset in scientific investigation, but not in public policy.
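The arithmetic behind this can be sketched in a few lines. This is not any specific March-2020 model; it is a minimal compound-growth illustration, with all numbers invented, of why the assumed fatality rate dominates the forecast: projected deaths scale linearly with the assumed infection fatality rate (IFR), so an estimate inflated tenfold inflates the headline death toll tenfold.

```python
def projected_deaths(initial_cases, daily_growth, days, ifr):
    """Toy forecast: unmitigated exponential spread (compound growth,
    like credit card interest), then apply an assumed fatality rate.
    All parameter values passed in below are purely illustrative."""
    infections = initial_cases * (1 + daily_growth) ** days
    return infections * ifr

# Same spread dynamics, three different assumed IFRs: a plausible
# low estimate, a middle value, and an early-epidemic inflated one.
for ifr in (0.003, 0.01, 0.03):
    deaths = projected_deaths(initial_cases=100, daily_growth=0.20,
                              days=60, ifr=ifr)
    print(f"assumed IFR {ifr:.1%}: ~{deaths:,.0f} projected deaths")
```

The spread dynamics are identical in all three runs; only the plugged-in fatality rate differs, yet the forecasts span a factor of ten. Plugging in the worst-case value is therefore a modeling choice with enormous rhetorical consequences, not a neutral act of “following the science.”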

Plumes of virulent effluvia. A similar dynamic played out in the early weeks of the pandemic regarding outdoor exercise: could you catch the virus while jogging on the bike path? Prima facie, it should have seemed doubtful, based on our extensive experience with other viruses. The sheer volume of the outside atmosphere relative to that of human exhalations, in combination with the dispersing effects of wind, should have been reassuring. But then a widely reported simulation came out — based on a fancy computer model — showing that human breath vapor can linger in the air and waft substantial distances. Based on this scientific report, many individuals began to wear masks on their morning jog, if not skipping it altogether. Parks were closed. But here’s the thing: that simulation study didn’t reveal anything we didn’t already know. When someone walks by you wearing perfume, you can smell it, and we knew that before it was “scientized” with a colorful infographic. Meanwhile, the actually relevant question remained unanswered: how much virus must you be exposed to before infection becomes likely, and how does that compare to the amount in a typical jogger’s vapor wake? Again, the narrowness of the scientific approach is a weakness here. By reporting on a necessary — but not sufficient — condition for contracting the disease, the study fell far short of providing actionable information. At the same time, the pretense that this computer simulation proved something new made it seem as if our knowledge that similar viruses were not readily transmissible via brief outdoor contact was inapplicable due to lack of rigor.

Again, affected skepticism is an understandable position if the goal is to publish a science paper, but it has no place in public policy. During Covid-19, this pattern repeated itself over and over: the efficacy of masks, transmission via surfaces, reinfection. In each case, “probably” or “probably not” became “it’s possible; we have to await further studies to venture any opinion”. Coupled with a monofocus on Covid-19 case suppression, no mitigation measure was too extreme, including essentially cancelling elementary school indefinitely.

Vaccine efficacy doubt. As a final example — and probably the most important one — consider the messaging about vaccine efficacy. The basic science underlying vaccines is more solid than the science underlying non-pharmaceutical interventions by many orders of magnitude, because the fundamental mechanisms behind vaccines are narrow, well-studied, and universal in a way that behavioral-level interventions are definitely not. Advice to keep wearing your mask post-vaccine — just to be safe! — while simultaneously keeping national parks closed, ignores gross qualitative differences among scientific fields and reveals the scientists’ asymmetric utility function (which treats mitigation as costless). A scientific explanation should not change based on what an audience is likely to do with it, and when it does, that erodes faith in the scientific process itself.

Protecting faith in science

To summarize, when both uncertainty and stakes are high, principled agnosticism in the name of science is unethical; prior knowledge should not be discarded in a bid for scientific purity. Further, when uncertainty cannot be resolved, a worst-case analysis based on a too-narrow criterion (Covid case counts only) can lead to bad policy — the collateral damage to other facets of welfare may overwhelm the improvements on the narrow metric. To entertain this possibility is not anti-science; it is ethical policy-making.

In context, “follow the science” sometimes means that one should not even raise the question of policy trade-offs. But this is a mistake: just because something is easier to measure, making it more amenable to scientific modeling, does not mean it is more important. It is understandable that the scientists who study a particular phenomenon are inclined to focus narrowly on it, but we should not mistake their professional commitment for society’s broader needs.

The core belief underlying the scientific enterprise is that the workings of our world are knowable in principle, even if that knowledge will always be imperfect in practice. This perspective has given us air travel and artificial hearts and life-saving vaccines. It is a perspective that is worth celebrating and protecting. But, when we

  • fail to acknowledge qualitative differences between different areas of scientific inquiry,
  • conflate science with (narrowly defined) risk aversion, and
  • fail to distinguish between scientific advice and the personal priorities of the scientists providing that advice,

we run the risk of undermining faith in all science, which would be a tragedy. We must not let rhetoric about science turn science into rhetoric.

Bryan Caplan
Bryan Caplan is Professor of Economics at George Mason University and Senior Scholar at the Mercatus Center. He has published in the New York Times, the Washington Post, the Wall Street Journal, the American Economic Review, the Economic Journal, the Journal of Law and Economics, and Intelligence, and has appeared on 20/20, FoxNews, and C-SPAN. Bryan Caplan blogs on EconLog.
