In the next few weeks, I want to discuss some of the broader questions and some of the detailed topics that have come up during the whole coronavirus origins debate.
First, let’s talk about rare events in complex systems being evaluated with little to no direct data.
A common refrain during this debate has been to say that most pandemics in history have been zoonotic.
There are two things wrong with this framing.
First, an investigation after a rare incident has happened is very different from merely assessing the likelihoods of the many potential steps leading up to it—even if some of those steps are quite common by themselves. The difference is crucial because correctly understanding the conceptual problem is the key that opens the path to preventing future incidents.
Second, advanced scientific technology changes such calculations. Arguing from the historical record alone is a bit like saying we’ve always had atoms, so there’s no need to worry about nuclear energy safety or nuclear weapons.
On the first dimension: Of course, it’s true that zoonosis—a virus managing to leap to humans from another species—is a regular enough event. But most of the time, these leaps don’t lead anywhere. Spillovers from other species to humans happen all the time, but most of them are dead ends, not even outbreaks. There is one more crucial trick the virus must master before there is an outbreak: jumping from one human to another. Without that, the victim gets sick, if that, but that’s it. If human-to-human transmission happens, then maybe the virus can go somewhere, but that is not a minor step.
Even then, not all outbreaks become pandemics.
Some viruses have features that fuel global spread—like spreading silently, before symptoms appear, or through the air; SARS-CoV-2 can do both. Ebola, for example, has limited pandemic potential: people are infectious only after they develop symptoms, and it spreads mainly through the bodily fluids of the sick. It’s a deadly, terrible disease, but sitting next to a patient on an airplane poses little risk, especially if they aren’t even sick yet.
All this means that while zoonotic sparks, spillovers, and even outbreaks are many (in descending order of commonality), pandemics are really rare, even though they are an ever-present threat. It’s hard for us to think about such structurally ever-present but rare threats, because they differ greatly from everyday threats.
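This funnel—many sparks, fewer spillovers, fewer outbreaks, very rare pandemics—can be sketched as a chain of conditional probabilities. A minimal illustration, with every number invented purely for the sake of the arithmetic (none is a real estimate from surveillance data or from this piece):

```python
# Toy funnel model: each stage is a conditional probability.
# All numbers below are hypothetical, chosen only to show the mechanics.

spillovers_per_year = 400        # hypothetical: cross-species jumps
p_human_to_human = 0.05          # hypothetical: sustained human transmission
p_outbreak_to_pandemic = 0.01    # hypothetical: an outbreak going global

expected_outbreaks = spillovers_per_year * p_human_to_human
expected_pandemics = expected_outbreaks * p_outbreak_to_pandemic

print(f"Expected outbreaks per year: {expected_outbreaks:.1f}")
print(f"Expected pandemics per year: {expected_pandemics:.2f}")
# Even with hundreds of spillovers, pandemics come out rare,
# because each stage filters out most candidates.
```

The point of the sketch is structural, not numerical: multiply a common event by two filtering steps and the end product is rare, which is exactly why "spillovers are common" and "pandemics are rare" are both true at once.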
Also, if one were assessing potential paths, the correct comparison to zoonosis—the competing explanation of lab accidents or escapes, known and unknown—is also something that happens regularly.
We know lab escapes and screw-ups happen; they are regular enough, not always noticed, and underreported to some degree—how much? Unknown. Still, most of the time, these don’t lead even to outbreaks, for similar reasons: most pathogens aren’t that easily transmissible to begin with—especially without being noticed.
Neither spillovers nor lab incidents lead to pandemics regularly.
This makes post-hoc assessment of likelihood quite difficult, and prone to faux precision.
This is how I put it in my piece on coronavirus origins in the New York Times:
Plus, once a rare event, like a pandemic, has happened, one has to consider all the potential paths to it. It’s like investigating a plane crash. Flying is usually very safe, but when a crash does happen, we don’t just say mechanical errors and pilot mistakes don’t usually lead to catastrophes and that terrorism is rare. Rather, we investigate all possible paths, including unusual ones, so we can figure out how to prevent similar events.
Once a plane crash does occur, we don’t tell the investigators that planes are really, really safe, so let’s not worry about it. In fact, the thoroughness of that investigation is exactly what makes planes really, really safe in the future: we figure out what went wrong and fix it. The most common causes of plane crashes are pilot error, mechanical failure, and weather, but an investigator who didn’t even check for traces of explosives—merely because, indeed, terrorism is very rare—would need to be fired on the spot.
Plus, the lack of evidence makes it even more difficult. In this case, evidence is lacking, and deliberately so, because the power in control—the Chinese government—has decided it should be so.
“There is no evidence of a lab leak” is a refrain one encounters, but in this case it isn’t exculpatory the way it would be if a genuine investigation had been carried out and nothing had turned up. The same goes for “where’s the evidence that it is zoonotic?” Here, the party that would be responsible, the government of China, is actively making sure there is no evidence. One can have different views of what’s being covered up and how, but it’s hard to deny that the cover-up is real, and except for a brief window between the end of January and sometime in the spring of 2020, it has been extensive.
A plane has crashed, the crash site has been declared off-limits by a party with an interest in covering it all up, and we’re trying to guess what happened. “Flying is safe most of the time” is not false, but it doesn’t address any of the issues we face.
That’s why I prefer the epistemological humility that Yale immunologist Akiko Iwasaki brought to the question when she said, “there’s so little evidence for either of these things, that it’s almost like a tossup.”
Here’s another formulation by her, with the same point:
"Even though there is very little evidence for any of these possibilities, that [WHO] report basically said that the lab is extremely unlikely. Whereas the other possibilities are possible to likely. So as a scientist, it feels a bit awkward without any data to conclude the likeliness of these scenarios in this manner."
I think both of those statements are correct in a profound way: prior probabilities over the landscape of events leading up to a rare outcome aren’t, by themselves, strong enough evidence for weighing likelihoods with a lot of confidence (people may have leanings, but such estimates cannot be too precise, for obvious reasons, without being misleading).
Second, there is also the question of the correct historical framework for thinking.
Looking at the post-molecular-biology era, when we were more in a position to be the cause of a pandemic ourselves, the known record is that two out of three pandemics were completely zoonotic—the same odds Nate Silver assigned to Hillary Clinton winning the presidency in 2016. That’s not saying much either—again, given a rare outcome—except that ruling out pathways without evidence isn’t warranted.
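Just how little "two out of three" pins down can be checked directly. A minimal sketch, assuming a uniform prior over the zoonotic fraction: observing 2 zoonotic pandemics out of 3 gives a Beta(3, 2) posterior, and a crude grid integration (no external libraries) shows how wide the resulting 95% credible interval is.

```python
# With a uniform prior, 2 zoonotic out of 3 pandemics yields a
# Beta(2+1, 1+1) = Beta(3, 2) posterior over the zoonotic fraction.
# A brute-force grid approximation of its 95% credible interval:

def beta_pdf_unnorm(p, a, b):
    # Unnormalized Beta density; normalization cancels in the quantiles.
    return p ** (a - 1) * (1 - p) ** (b - 1)

N = 100_000
xs = [(i + 0.5) / N for i in range(N)]
weights = [beta_pdf_unnorm(x, 3, 2) for x in xs]
total = sum(weights)

# Walk the grid to locate the 2.5% and 97.5% quantiles.
cum, lo, hi = 0.0, None, None
for x, w in zip(xs, weights):
    cum += w / total
    if lo is None and cum >= 0.025:
        lo = x
    if hi is None and cum >= 0.975:
        hi = x

print(f"95% credible interval for the zoonotic fraction: ({lo:.2f}, {hi:.2f})")
```

The interval comes out roughly 0.19 to 0.93—in other words, three observations of a rare event are consistent with almost any underlying mix of pathways, which is exactly why "most pandemics have been zoonotic" does so little work as evidence.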
Finally, even under the best of circumstances, a series of unusual and rare events can easily be lost to time and evolution. That’s why the answer may just not have been available even if everything had been done right.
So this has two conclusions wrapped up into it.
First, going forward, we should consider all possibilities as potential pathways and think about what can be done. I don’t understand the evidentiary basis on which people are assigning precise(ish) likelihoods to the paths leading to a rare, perhaps once-in-a-century, event when multiple paths have been demonstrated as viable.
Second, we should try to institute global reforms (which I think are possible! More on that later!) so that we not only make future pandemics less likely but also decrease the odds of finding ourselves in this position again: unable to make judgments because of the depth of the cover-up and the lack of evidence.
As an aside, this question also comes up for terrorism (in the Western context) and for ethnic tensions and violence within a country. The former is a rare event, requiring different investigative methods, while the latter is a structural problem. But they often get conflated in a lot of discussions—for example, over the appropriateness of end-to-end encryption, investigative methods, surveillance, etc. It turns out that different kinds of events share a lot of meta-features that create conceptual tool-kits that can be carried over, even though, on the surface, the events are in different categories. I’ve been trying to find a word for this. Structural isomorphism? Methodological similarity? I’m very open to suggestions!
By the way: subscribers have already seen this list, which came up during our weekly open thread, but here are a few of the things in this context that I’d like to cover going forward:
1-More about that furin cleavage site debate (it ended up being too wonky for the article, but it’s actually an interesting intersection of science, evolution, and academic incentives).
2-Conditional and posterior probability and why it’s so important to know when to use them, and when to stay away from “in general” post-hoc assessments for rare events.
3-How to try to align incentives globally in a way that can also help scientists in authoritarian settings, and also help us all.
4-Safety cultures in complex settings, and what we can learn from examples like the aviation industry.
5-The groupthink! The groupthink! And the motivated reasoning. So much that it hurts, and startling. Humans are humans, even smart ones, part zillion.
6-The striking dearth of biosafety discussions after a year of coverage when it is clearly central to the question. A lot of media coverage of this has been like interviewing only physicists studying fission while completely ignoring nuclear reactor experts to try to understand a potential Chernobyl.
7-Not understanding that all potential pathways are important, even if one judges them to be rare or unusual (though see 3), if the tail risk is catastrophic. How hard can this be, even after more than a year of a miserable, deadly pandemic? Apparently, still very hard.
8-Induction, which gets implicitly invoked a lot in this topic as a methodological tool, and why and how it is not appropriate, especially for rare events.
9-The nonsense around “don’t make China mad, they won’t cooperate.” It was always but a fig-leaf cover for “please don’t talk about things we don’t like.”
Chinese officials could hardly cooperate less; they have been doing everything from claiming the outbreak started in the US, to asserting that it was maybe imported from Japan in frozen fish, to spreading misinformation about Western vaccines. They may cooperate going forward, though, if we can find a reciprocal framework that is mutually beneficial for them as well. Pretending the things that happened didn’t happen—including, crucially, the terrible toll of their initial cover-up—doesn’t advance that, and “don’t talk about the topic” is really not the key to what we may be able to do going forward. If we don’t talk about it, we won’t even figure out what a desirable end-goal might be, despite the obvious difficulties.
10-The idea that recognizing the duress and limits scientists in authoritarian countries are under is somehow “racist” against them. It’s actually the opposite: acknowledging the duress and coercion they face is crucial to not being racist against them. The alternative to acknowledging this very real duress is surrendering to caricatures of villainous scientists cooking up bioweapons (with coronaviruses, to boot)!
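The distinction teased in item 2—conditional and posterior probability versus "in general" base rates—can be made concrete with a toy Bayes-rule calculation. Every number here is invented purely to show the mechanics; none is a real estimate of spillover or lab-incident rates.

```python
# Toy contrast between "in general" base rates and the posterior
# once the rare event has already happened. All numbers hypothetical.

# Made-up yearly rates of each kind of precursor event:
rate = {"spillover": 400.0, "lab_incident": 10.0}
# Made-up chance that one such event snowballs into a pandemic:
p_pandemic_given = {"spillover": 0.0005, "lab_incident": 0.02}

# Unnormalized posterior weight for each path, given that a
# pandemic occurred: rate x P(pandemic | event).
weight = {k: rate[k] * p_pandemic_given[k] for k in rate}
total = sum(weight.values())
posterior = {k: w / total for k, w in weight.items()}

for k, p in posterior.items():
    print(f"P({k} | pandemic) = {p:.2f}")
# Base rates alone (400 vs 10 per year) would suggest ~98% spillover,
# but the posterior depends on the product of rate and per-event risk.
```

With these particular invented numbers, the two paths come out equal after conditioning on the pandemic having occurred, despite a 40-to-1 difference in base rates. That is the whole point: counting precursor events "in general" and asking "given that a pandemic happened, which path produced it?" are different questions with potentially very different answers.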
Epistemology: The theory of knowledge, especially with regard to its methods, validity, and scope. Epistemology is the investigation of what distinguishes justified belief from opinion.