The "Highly Trained Expert" Fallacy - Counterexamples?

Mick West

Staff member

An argument often used by believers in unusual theories runs like this: the more conventional explanations require that a person made a mistake; that person is a highly trained expert, so a mistake is highly unlikely; hence the conventional explanation is highly improbable, and their unusual theory rises to the top of the list.

The classic example here is with UFOs. The "expert" witness is often a pilot, and ideally a military pilot. If a pilot reports they saw some strange flying craft, then, for some UFO fans, the only possible explanation is that it is, in fact, a strange flying craft. This conclusion is reached because the pilot is "highly trained" and hence it is thought to be impossible that they would misidentify Venus, or Mars, or another plane, or a bird, or a balloon, as a strange flying craft.

And yet experts DO make mistakes. Pilots actually misidentify Venus as an oncoming plane relatively frequently. Pilots land on the wrong runway, or at the wrong airport. Pilots think they are upside down when they are not. Pilots misidentify oil rig lights in the ocean as lights in the sky.

The fallacy extends to other conspiracy domains. With 9/11 we have some "highly trained" engineers who can't immediately (or even eventually) wrap their heads around why World Trade Center Building Seven collapsed in the way it did. With "chemtrails" we have some scientists from various fields who have been convinced by specious arguments about contrails. And of course, there are more mainstream areas where it comes into play. There are anti-vaccine "highly trained" experts. There's the "highly trained" Dr. Judy Mikovits who made the glib but error-riddled "Plandemic" video. There are "experts" who think that really low levels of radio waves can have serious health effects.

Experts make mistakes. I think it might be useful to gather examples of expert mistakes. Not to make the argument that experts are idiots - indeed many of the people involved are highly intelligent, highly trained, capable, and experienced. We should also be cautious to not overstate the prevalence of mistakes. In many cases, they are indeed quite rare. The point here is that mistakes are possible no matter how talented and experienced the expert is. The ultimate point is that one should not discount that possibility, especially when the alternatives (aliens, vast hyper-competent conspiracies, chemtrails, etc.) are things that are greatly lacking in evidence (and often with significant counter-evidence).

Such discussions often devolve into intractable subjective assessments of probability: "sure, experts make mistakes, but how likely is it that two experts would make mistakes, or that an expert would make a mistake on the same day that something else odd happened?" These rebuttals are not entirely without merit - often two or more things happening is less likely than one thing happening (unless the events have some causal link.) But any such discussion would really benefit (on all sides) from a deeper understanding of the types of mistakes that experts make, and (where possible) how often they make them.

So I'm starting this thread as a place to gather illustrative examples that will help illuminate the fallacy, and shed some clarity on the "how likely" argument. Let's collect cases where experts got it very wrong.

Mick West

Staff member
The first example I'm going to give is a classic in engineering: the Hyatt Regency Hotel walkway collapse of 1981, in which over a hundred people were killed when a suspended walkway collapsed. The mistake was simple, yet not immediately obvious: the engineer approved a design change without realizing that it doubled the load on one connection.

Of interest here is that this is a problem that seems simple once you understand it, but before you do, it's not intuitively obvious. Engineers can make the same mistake other people do, and think that the design change does not change the applied loads.

The actual issue is very well explained by the above diagram, and more clearly by Grady Hillhouse of Practical Engineering:

It was also not a solitary mistake. Multiple people failed to see the problem. In addition, the original design (which would probably not have failed) was dangerously lacking in redundancy itself. The failure occurred during a dance party with a large number of people moving on and under the walkway. It was a sequence of events, failures by experts that might seem highly unlikely, but actually happened.
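The load-doubling is easier to see in numbers than in words. Here's a minimal sketch of the before-and-after load paths; the load values are purely illustrative, not the real design loads:

```python
# Sketch of why the Hyatt Regency design change doubled one connection's load.
# W = the load carried by one walkway (structure plus occupants).
# Illustrative units only.

W = 1.0  # one walkway's load

# Original design: a single continuous rod runs from the ceiling, through the
# 4th-floor box beam, down to the 2nd-floor walkway. Each beam-to-rod
# connection supports only its own walkway's load; the rod carries the rest.
original_upper_connection = W   # 4th-floor connection holds 4th-floor load only
original_lower_connection = W   # 2nd-floor connection holds 2nd-floor load only

# As-built design: the rod is split in two, with the lower rod hanging the
# 2nd-floor walkway from the 4th-floor box beam. The 4th-floor connection must
# now transfer its own walkway's load PLUS the entire lower walkway's load.
modified_upper_connection = W + W   # doubled
modified_lower_connection = W       # unchanged

print(modified_upper_connection / original_upper_connection)  # 2.0
```

The rods themselves see the same total load either way; it's the single nut-and-beam connection on the upper walkway whose load doubles, which is exactly why the change looked harmless on paper.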

Agent K

Active Member
If anything, conspiracy theorists and cranks like flat earthers, anti-vaxxers, and covidiots have too little respect for expertise. They'll happily call out mistakes made by experts when it suits them, like the CDC's about-face on wearing facemasks.

Speaking of which, here's a look back at a Time article about facemasks from March 3, and its update on April 3.
Its original title was "Health Experts Are Telling Healthy People Not to Wear Face Masks for Coronavirus. So Why Are So Many Doing It?"
On April 3, it was changed to "Public Health Experts Keep Changing Their Guidance on Whether or Not to Wear Face Masks for Coronavirus"
Note: The CDC has updated its guidance to the public around wearing masks during the coronavirus pandemic. On April 3, it advised Americans to wear non-medical cloth face coverings, including homemade coverings fashioned from household items, in public settings like grocery stores and pharmacies. See our latest story for more on the science of face masks.

It’s no surprise that face masks are in short supply—despite the CDC specifically not recommending them for healthy people trying to protect against COVID-19. “It seems kind of intuitively obvious that if you put something—whether it’s a scarf or a mask—in front of your nose and mouth, that will filter out some of these viruses that are floating around out there,” says Dr. William Schaffner, professor of medicine in the division of infectious diseases at Vanderbilt University. The only problem: that’s not likely to be effective against respiratory illnesses like the flu and COVID-19. If it were, “the CDC would have recommended it years ago,” he says. “It doesn’t, because it makes science-based recommendations.”
Lynn Bufka, a clinical psychologist and senior director for practice, research and policy at the American Psychological Association, suspects that people are clinging to masks for the same reason they knock on wood or avoid walking under ladders. “Even if experts are saying it’s really not going to make a difference, a little [part of] people’s brains is thinking, well, it’s not going to hurt. Maybe it’ll cut my risk just a little bit, so it’s worth it to wear a mask,” she says. In that sense, wearing a mask is a “superstitious behavior”: if someone wore a mask when coronavirus or another viral illness was spreading and did not get sick, they may credit the mask for keeping them safe and keep wearing it.

Agent K

Active Member
I could fill the page with COVID-19 related errors and retractions alone.

The Lancet Retracts Hydroxychloroquine Study
The online medical journal The Lancet has apologized to readers after retracting a study that said the anti-malarial drug hydroxychloroquine did not help to curb COVID-19 and might cause death in patients.

Retracted: Study that questioned effectiveness of masks against SARS-CoV-2
Disclaimer: The study that this article discusses — “Effectiveness of surgical and cotton masks in blocking SARS-CoV-2” — was retracted by the authors following recommendations from the editors of Annals of Internal Medicine. The authors have admitted that the data that they analyzed in their study were “unreliable,” making their findings “uninterpretable.”

Research published at the beginning of April — which has since been retracted — casts serious doubts about the effectiveness of both surgical and cloth masks in preventing the spread of infectious SARS-CoV-2 particles.

These experts got it right on the second try.
For the Canadian researchers, the finding that hotter weather doesn't reduce COVID-19 cases was surprising.
"We had conducted a preliminary study that suggested both latitude and temperature could play a role," said study co-author Dr. Peter Jüni, also from the University of Toronto. "But when we repeated the study under much more rigorous conditions, we got the opposite result."
It is such an odd thing to me, because everyone knows someone who should have known better but made a mistake anyhow.

Lots of people know about the Mars Climate Orbiter being lost because of the Imperial-Metric issue, but that's not the actual reason it was lost. The broader issue at work there was making sure that the specs were checked out properly rather than making assumptions about what the specs said. But that's still not the actual reason the probe was destroyed. Even with the thruster output being wrong because of the unit mix-up, the spacecraft was controllable, and predictably controllable. It did make it to Mars, after all.

Here's where "experts were wrong" comes in. Like all accident reports, there are multiple causes. When you read the report, it becomes very clear that the unit of measurement issue was an odd little thing they should have picked up almost immediately, except for, you know, the whole experts-being-the-smartest-guy-in-the-room thing. The MCO had already had four trajectory correction maneuvers planned and executed. Over the course of several months there was evidence that something was wrong with the thruster output, giving them both the data and the time to work out what was wrong and what thruster sequence could put MCO on the correct trajectory. Engineers had already noticed something was amiss when TCM-3 was executed: they were having to fire the thrusters more frequently and for much longer than on previous Mars missions. Mars Global Surveyor, the previous mission, had TCM-3 canceled and only used 1, 2, and 4. MGS's TCM-4 adjusted the vehicle by 0.29 m/s. MCO's TCM-4 was 4.0 m/s. There was ample evidence something was seriously wrong with MCO.

This was enough warning that a fifth course correction was in order. It never happened. Somehow, the idea took hold that the targeting software was the problem. However, when the data was fed into the software used for a previous Mars mission, it came back with the same trajectory, off target by hundreds of kilometers. Staring them in the face, though, was almost a year's worth of tracking data and four thruster sequences that told them where the probe was and what TCM-5 would have to do to put it back on track. TCM-5 was brought up, but it never seemed to gain the traction it needed, because every team seemed equally unsure of what the actual situation was. Which itself should have been a massive warning that something was seriously wrong.

You've got multiple teams of multiple experts all sitting in their own rooms thinking "you know, this isn't quite right, but I'll see where it goes." That's a pretty big mistake for an expert to make. Frankly, I'm sort of surprised that they never worked out the thruster issue. You'd think someone would have been like "have you noticed that our thrusters seem to be underpowered? Everything is like four and a half times what it was on our earlier missions." "Hang on... four and a half? One pound of force is 4.45 newtons. Is there something wrong with the thruster software?"
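That back-of-the-envelope check can be sketched in a few lines. The impulse value below is made up for illustration, but the conversion factor is the real pound-force-to-newton one, and it produces exactly the "four and a half times" discrepancy described above:

```python
# Sketch of the Mars Climate Orbiter unit mismatch: ground software reported
# thruster impulse in pound-force seconds (lbf*s), while the trajectory
# software expected newton-seconds (N*s) and applied no conversion.

LBF_TO_N = 4.44822  # one pound-force expressed in newtons

impulse_lbf_s = 10.0                   # illustrative value from ground software
assumed_impulse_N_s = impulse_lbf_s    # misread as N*s: numbers used unconverted
true_impulse_N_s = impulse_lbf_s * LBF_TO_N

# The modeled thruster effect was therefore low by a constant factor of ~4.45,
# which is the kind of systematic discrepancy the TCM data was showing.
underestimate_factor = true_impulse_N_s / assumed_impulse_N_s
print(round(underestimate_factor, 2))  # 4.45
```

A constant multiplicative error like this is exactly the sort of thing a plot of expected versus observed thruster performance would have made obvious, which is what makes the missed diagnosis so striking in hindsight.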


I guess it would be very easy to write many volumes on how highly trained and experienced professionals do things incorrectly. Pilots crash planes; doctors saw off the wrong leg; engineers build suspension bridges that fall apart at the first gust of wind - the list is endless. The increasing complexity of modern technology, medicine, engineering and so forth only increases the likelihood of human error.

One way to minimise these problems is the checklist. I have always been a big fan of using checklists and was delighted a few years ago when I received a copy of The Checklist Manifesto by Atul Gawande. I think it was meant as an ironic gift but I was happy enough with the joke.

He argues that there are two types of error. The first is the error of ignorance, where we just don't know or understand things. This is sort of understandable, in an unknown-unknowns way (and is more or less how society, science and technology advance). The second is the error of ineptitude, where we don't properly use, implement or review what we actually know. This is less forgivable, and the checklist is designed to minimise it and improve outcomes for all professionals.

Gawande quotes numerous examples where medical outcomes have been significantly improved following the introduction of checklists for certain procedures (he is a surgeon). In a very real sense the checklist had corrected previous ineptitude.

I can't really remember the medical examples but do remember the example of what eventually turned into the B-17 bomber. This was built by Boeing to satisfy a USAAC spec and performed well enough on paper to be selected for evaluation. On its evaluation flight, in front of the assembled brass, it left the ground and promptly crashed, killing some of the crew. The aircraft was extremely complicated and was deemed something like "too much airplane for one man to fly". Sadly that was true: the pilot missed a step in the take-off procedure and died in the crash.

The test pilots got together and designed a pre-flight checklist that fitted on an index card. That solved the complexity problem and there were no more incidents of that nature. The USAAC had kept an interest in the bomber (despite Boeing losing the tender) and eventually nearly 13,000 of them were built.

I suppose it is great to know lots of stuff but not much good if you don't use it all in the correct manner. There is no shame in other people checking your work either (something a few professionals might choose to remember).