The "Highly Trained Expert" Fallacy - Counterexamples?

Mick West

Administrator
Staff member

An argument often used by believers in unusual theories runs like this: the more conventional explanations require that a person made a mistake; that person is a highly trained expert; experts are highly unlikely to make mistakes; therefore the conventional explanation is highly improbable, and the unusual theory moves to the top of the list.

The classic example here is with UFOs. The "expert" witness is often a pilot, and ideally a military pilot. If a pilot reports they saw some strange flying craft, then, for some UFO fans, the only possible explanation is that it is, in fact, a strange flying craft. This conclusion is reached because the pilot is "highly trained" and hence it is thought to be impossible that they would misidentify Venus, or Mars, or another plane, or a bird, or a balloon, as a strange flying craft.

And yet experts DO make mistakes. Pilots actually misidentify Venus as an oncoming plane relatively frequently. Pilots land on the wrong runway, or at the wrong airport. Pilots think they are upside down when they are not. Pilots misidentify oil rig lights in the ocean as lights in the sky.

The fallacy extends to other conspiracy domains. With 9/11 we have some "highly trained" engineers who can't immediately (or even eventually) wrap their heads around why World Trade Center Building Seven collapsed in the way it did. With "chemtrails" we have some scientists from various fields who have been convinced by specious arguments about contrails. And of course, there are more mainstream areas where it comes into play. There are anti-vaccine "highly trained" experts. There's the "highly trained" Dr. Judy Mikovits who made the glib but error-riddled "Plandemic" video. There are "experts" who think that really low levels of radio waves can have serious health effects.

Experts make mistakes. I think it might be useful to gather examples of expert mistakes. Not to make the argument that experts are idiots - indeed many of the people involved are highly intelligent, highly trained, capable, and experienced. We should also be cautious not to overstate the prevalence of mistakes; in many cases they are genuinely rare. The point here is that mistakes are possible no matter how talented and experienced the expert is. The ultimate point is that one should not discount that possibility, especially when the alternatives (aliens, vast hyper-competent conspiracies, chemtrails, etc.) are greatly lacking in evidence (and often face significant counter-evidence).

Such discussions often devolve into intractable subjective assessments of probability: "sure, experts make mistakes, but how likely is it that two experts would make mistakes, or that an expert would make a mistake on the same day that something else odd happened?" These rebuttals are not entirely without merit - two or more things happening together is often less likely than one thing happening (unless the events share some causal link). But any such discussion would really benefit, on all sides, from a deeper understanding of the types of mistakes that experts make, and (where possible) how often they make them.
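To make the "how likely" point concrete, here's a minimal sketch in Python. Every number in it is invented for illustration; nothing here is a measured error rate. It just shows why the joint probability of two expert errors depends heavily on whether the errors are independent or share a common cause:

```python
# Illustrative only: all probabilities below are assumptions, not measured rates.
P_MISTAKE = 0.01  # assumed chance a single expert misjudges a given sighting

# If two experts' errors were independent, a joint error is very rare:
p_both_independent = P_MISTAKE ** 2  # 0.0001

# But if both face the same misleading stimulus (say, Venus low on the
# horizon), their errors are correlated and the joint probability jumps:
P_SHARED_STIMULUS = 0.05    # assumed chance both see the same misleading thing
P_ERR_GIVEN_STIMULUS = 0.5  # assumed chance each errs when it is present
p_both_correlated = P_SHARED_STIMULUS * P_ERR_GIVEN_STIMULUS ** 2  # 0.0125

print(p_both_correlated / p_both_independent)  # roughly 125x more likely
```

The numbers are made up; the structure is the point. A shared cause (the "causal link" mentioned above) can make a double mistake orders of magnitude more likely than naive multiplication of individual error rates would suggest.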

So I'm starting this thread as a place to gather illustrative examples that will help illuminate the fallacy, and shed some clarity on the "how likely" argument. Let's collect cases where experts got it very wrong.
 

Mick West

Administrator
Staff member
The first example I'm going to give is a classic in engineering: the Hyatt Regency Hotel walkway collapse of 1981, in which over a hundred people were killed when a suspended walkway collapsed. The mistake was a simple one, and yet not immediately obvious: a seemingly minor design change was approved by the engineer, who did not realize that it doubled the load on one connection.



Of interest here is that this is a problem that seems simple once you understand it, but before you do, it's not intuitively obvious. Engineers can make the same mistake other people do, and assume that the design change does not alter the applied loads.

The actual issue is very well explained by the above diagram, and even more clearly by Grady Hillhouse of Practical Engineering:
Source: https://www.youtube.com/watch?v=VnvGwFegbC8


It was also not a solitary mistake. Multiple people failed to see the problem. In addition, the original design (which would probably not have failed) was dangerously lacking in redundancy itself. The failure occurred during a dance party with a large number of people moving on and under the walkway. It was a sequence of events, failures by experts that might seem highly unlikely, but actually happened.
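The load doubling can be shown with back-of-the-envelope statics. Here's a rough sketch with a normalized load; the numbers are mine for illustration, not taken from the accident report:

```python
# Hyatt Regency walkway connection, simplified to a single normalized load.
# P = weight of one walkway span transferred at one hanger-rod connection.
P = 1.0

# Original design: one continuous rod from the ceiling through both walkways.
# The nut under the 4th-floor box beam supports only that walkway's load;
# the 2nd-floor walkway hangs from the rod itself, not from the 4th-floor beam.
load_original = P

# As built: two offset rods. The 2nd-floor walkway now hangs from the
# 4th-floor box beam, so the 4th-floor connection must carry its own
# walkway's load plus the load of the walkway below.
load_as_built = P + P

print(load_as_built / load_original)  # 2.0 - the load on the connection doubled
```

The design change looked like a trivial fabrication convenience, which is exactly why a reviewer who doesn't trace the load path can miss it.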
 

Agent K

Active Member
If anything, conspiracy theorists and cranks like flat earthers, anti-vaxxers, and covidiots have too little respect for expertise. They'll happily call out mistakes made by experts when it suits them, like the CDC's about-face on wearing facemasks.

Speaking of which, here's a look back at a Time article about facemasks from March 3, and its update on April 3.
Its original title was "Health Experts Are Telling Healthy People Not to Wear Face Masks for Coronavirus. So Why Are So Many Doing It?"
http://archive.is/qMV7M
On April 3, it was changed to "Public Health Experts Keep Changing Their Guidance on Whether or Not to Wear Face Masks for Coronavirus"
Article:
Note: The CDC has updated its guidance to the public around wearing masks during the coronavirus pandemic. On April 3, it advised Americans to wear non-medical cloth face coverings, including homemade coverings fashioned from household items, in public settings like grocery stores and pharmacies. See our latest story for more on the science of face masks.

...it’s no surprise that face masks are in short supply—despite the CDC specifically not recommending them for healthy people trying to protect against COVID-19. “It seems kind of intuitively obvious that if you put something—whether it’s a scarf or a mask—in front of your nose and mouth, that will filter out some of these viruses that are floating around out there,” says Dr. William Schaffner, professor of medicine in the division of infectious diseases at Vanderbilt University. The only problem: that’s not likely to be effective against respiratory illnesses like the flu and COVID-19. If it were, “the CDC would have recommended it years ago,” he says. “It doesn’t, because it makes science-based recommendations.”
...
Lynn Bufka, a clinical psychologist and senior director for practice, research and policy at the American Psychological Association, suspects that people are clinging to masks for the same reason they knock on wood or avoid walking under ladders. “Even if experts are saying it’s really not going to make a difference, a little [part of] people’s brains is thinking, well, it’s not going to hurt. Maybe it’ll cut my risk just a little bit, so it’s worth it to wear a mask,” she says. In that sense, wearing a mask is a “superstitious behavior”: if someone wore a mask when coronavirus or another viral illness was spreading and did not get sick, they may credit the mask for keeping them safe and keep wearing it.
 

Agent K

Active Member
I could fill the page with COVID-19 related errors and retractions alone.

The Lancet Retracts Hydroxychloroquine Study
Article:
The online medical journal The Lancet has apologized to readers after retracting a study that said the anti-malarial drug hydroxychloroquine did not help to curb COVID-19 and might cause death in patients.

Retracted: Study that questioned effectiveness of masks against SARS-CoV-2
Article:
Disclaimer: The study that this article discusses — “Effectiveness of surgical and cotton masks in blocking SARS-CoV-2” — was retracted by the authors following recommendations from the editors of Annals of Internal Medicine. The authors have admitted that the data that they analyzed in their study were “unreliable,” making their findings “uninterpretable.”

Research published at the beginning of April — which has since been retracted — casts serious doubts about the effectiveness of both surgical and cloth masks in preventing the spread of infectious SARS-CoV-2 particles.

These experts got it right on the second try.
Article:
For the Canadian researchers, the finding that hotter weather doesn't reduce COVID-19 cases was surprising.
"We had conducted a preliminary study that suggested both latitude and temperature could play a role," said study co-author Dr. Peter Jüni, also from the University of Toronto. "But when we repeated the study under much more rigorous conditions, we got the opposite result."
 
It is such an odd thing to me, because everyone knows someone who should have known better but made a mistake anyhow.

Lots of people know about the Mars Climate Orbiter being lost because of the imperial-metric mix-up, but that's not the whole story. The broader issue at work there was making sure that the specs were checked out properly rather than making assumptions about what the specs said. But even that is not the direct reason the probe was destroyed. Even with the thruster output being wrong because of the unit mix-up, the spacecraft was controllable, and predictably controllable. It did make it to Mars, after all.

Here's where "experts were wrong" comes in. Like all accident reports, there are multiple causes. When you read the report, it becomes very clear that the unit-of-measurement issue was an odd little thing they should have picked up immediately, except for, you know, the whole experts-being-the-smartest-guy-in-the-room thing. The MCO had already had four trajectory correction maneuvers planned and executed. That's several months over which the evidence that something was wrong with the thruster output accumulated, giving them both the data and the time to work out what was wrong and what thruster sequence could put MCO on the correct trajectory. Engineers had already noticed something was amiss when TCM-3 was executed: they were having to fire the thrusters more frequently and for much longer than on previous Mars missions. Mars Global Surveyor, the previous mission, had TCM-3 canceled and only used 1, 2, and 4. MGS's TCM-4 adjusted the vehicle by 0.29 m/s; MCO's TCM-4 was 4.0 m/s. There was ample evidence something was seriously wrong with MCO.

This was enough warning, though, that a fifth course correction was in order. It never happened. Somehow, the idea that the targeting software was the problem came into play. However, the data was fed into the software used for a previous Mars mission and it came back with the same trajectory that was off target by hundreds of kilometers. Staring them in the face, though, is almost a year's worth of tracking data and four thruster sequences that told you where the probe was and what TCM-5 would have to do to put it back on track. TCM-5 was brought up, but it never seemed to gain the traction it needed, because every team seemed to be equally unsure of what the actual situation was. Which itself should have been a massive warning that something was seriously wrong.

You've got multiple teams of multiple experts all sitting in their own rooms thinking "you know, this isn't quite right, but I'll see where it goes." That's a pretty big mistake for an expert to make. Frankly, I'm sort of surprised that they never worked out the thruster issue. You'd think someone would have been like "have you noticed that our thrusters seem to be underpowered? Everything is like four and a half times what it was on our earlier missions." "Hang on... four and a half? One pound of force is 4.45 newtons. Is there something wrong with the thruster software?"
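For reference, the mismatch was between pound-force seconds and newton seconds. A quick sketch of the factor, using the standard conversion and a made-up impulse value purely for illustration:

```python
LBF_TO_N = 4.448222  # newtons per pound-force (standard conversion factor)

# Hypothetical small-forces value: software on one side reported impulse
# in lbf*s, while the trajectory software interpreted the number as N*s.
reported_impulse = 10.0             # true impulse, in lbf*s
read_as = reported_impulse          # misread as 10.0 N*s
true_impulse_Ns = reported_impulse * LBF_TO_N  # ~44.5 N*s

# Every modeled thruster event was therefore understated by ~4.45x,
# the same "four and a half times" discrepancy noticeable in the TCM data.
print(true_impulse_Ns / read_as)  # ~4.448
```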
 

Tedsson

Member
I guess it would be very easy to write many volumes on how highly trained and experienced professionals do things incorrectly. Pilots crash planes; doctors saw off the wrong leg; engineers build suspension bridges that fall apart at the first gust of wind - the list is endless. The increasing complexity of modern technology, medicine, engineering and so forth only increases the likelihood of human error.

One way to minimise these problems is the checklist. I have always been a big fan of using checklists and was delighted a few years ago when I received a copy of The Checklist Manifesto by Atul Gawande. I think it was meant as an ironic gift but I was happy enough with the joke.

He argues that there are two types of error. The first is an error of ignorance, where we just don't know or understand things. This is sort of understandable in an unknown/unknowns way (and is more or less how society, science and technology advances). The second is the error of ineptitude where we don't properly use, implement or review what we actually know. This is less forgivable and the checklist is designed to improve outcomes for all professionals and minimise this.

Gawande quotes numerous examples where medical outcomes have been significantly improved following the introduction of checklists for certain procedures (he is a surgeon). In a very real sense the checklist had corrected previous ineptitude.

I can't really remember the medical examples but do remember the example of what eventually turned into the B-17 bomber. This was built by Boeing to satisfy a USAAC spec and performed well enough on paper to be selected for evaluation. On its evaluation flight in front of the assembled brass it left the ground and promptly crashed, killing some of the crew. The aircraft was extremely complicated and was deemed something like "too much for one man to fly". Sadly that was true and the pilot missed out a step in the take-off procedure and died in the crash.

The test pilots got together and designed a pre-flight checklist that fitted on an index card. That solved the complexity problem and there were no more incidents of that nature. The USAAC had kept an interest in the bomber (despite Boeing losing the tender) and eventually went on to purchase 13,000 of them.

I suppose it is great to know lots of stuff but not much good if you don't use it all in the correct manner. There is no shame in other people checking your work either (something a few professionals might choose to remember).
 

jarlrmai

Active Member
Came across this one recently: a trained military pilot makes an error and crashes a B-52.

https://en.wikipedia.org/wiki/1994_Fairchild_Air_Force_Base_B-52_crash
 
The Sally Clark murder trial illustrates the egregious misapplication of statistical data by a supposed expert.
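For anyone unfamiliar with the case: the expert witness took the estimated odds of one SIDS death in a family like the Clarks' and simply squared them, treating two deaths in one family as independent events. A sketch of the arithmetic; the 1-in-100 conditional figure below is my own invention, purely to show the shape of the error:

```python
# The expert's calculation: square the single-death odds, assuming independence.
p_one_sids = 1 / 8543                    # cited odds of one SIDS death in such a family
p_two_if_independent = p_one_sids ** 2
print(round(1 / p_two_if_independent))   # ~73 million - the figure given in court

# But sibling deaths are not independent: genetic and environmental risk
# factors are shared. With a hypothetical conditional probability:
P_SECOND_GIVEN_FIRST = 1 / 100           # illustrative assumption, not a real estimate
p_two_dependent = p_one_sids * P_SECOND_GIVEN_FIRST
print(round(1 / p_two_dependent))        # ~854,300 - far less damning
```

Even a corrected number commits the prosecutor's fallacy if presented alone: the relevant comparison is not with certainty of innocence but with the (also tiny) prior probability of the competing explanation, double infanticide.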

 

FatPhil

Active Member
When I adopted this as an email/usenet .sig nearly two decades back, I didn't know how hard it would be, two decades later, to prove it's a real quote. I think I lifted it from a /The Register/ story, or similar.

"One cannot delete the Web browser from KDE without losing the ability to
manage files on the user's own hard disk."
-- Prof. Stuart E Madnick, MIT, "expert" witness for Microsoft. 2002/05/02

During that bit of the questioning, I believe Madnick was also under the misapprehension that KDE is an operating system, rather than just a graphical environment. The sound of face-palming from Linux nerds worldwide was clear at the time, but alas it seems to have faded into just myth and legend now.
 

SimonC

New Member
Hi, If you are just looking for examples of scientists believing false theories, or not accepting new correct theories, there must be whole books on the subject. A few of my favourites :-
  • Phlogiston theory - it seemed obvious that when we burn something e.g. wood, something hot comes out of the wood (the phlogiston) and just leaves behind a small amount of ash. Hard to believe that actually oxygen is combining with the wood.
  • In 1835, Auguste Comte, a prominent French philosopher, stated that humans would never be able to understand the chemical composition of stars. In 1859, spectroscopy was discovered, which shows what the stars are made of.
  • Studies of geology and evolution showed that the earth must have existed for billions of years, but physicists had no method to keep the sun shining for so long, so it "disproved" the geology and evolution of an ancient earth for a long time, until the discovery of radioactivity.
  • At the end of the 19th century physicists were thinking they had pretty well found everything, it was going to be boring just refining the physical constants. Before finding the structure of the atom, radioactivity, quantum effects etc.
  • Then in the early days of radioactivity, they started finding "N rays" everywhere. https://en.wikipedia.org/wiki/N-ray
  • Plate tectonics - explains pretty much every feature of earth's surface but was not believed, again because no-one could think of a process to make the continents move, until the evidence was overwhelming.
  • Percival Lowell was convinced he could see canals on Mars, due to the usual problems, it is far away and out of focus.
  • In the oil industry, there is a tale of a senior geologist who promised to "drink every drop of oil found in the North Sea". My own manager in the early 1990s said any talk of finding oil around the Falklands was a complete hoax and a fraud, after all the seismic data was shown on TV. One of the companies operating there has now found 1.7 billion barrels of oil in place.
Of course there is another level of confusion: quotes from scientists sounding wrong which are now disputed, e.g. did astronomers really say "space travel is bunk", just before it happened? https://en.wikipedia.org/wiki/Harold_Spencer_Jones#Opinions_about_Space_Travel
 
Came across this one recently, trained military pilot makes error, crashes B52.
This puts me in mind of the Tenerife Disaster. The guy that headed KLM's flight training department, the in-house expert on the 747, and initially selected to head the accident investigation was the very pilot that took off without clearance and caused the accident* in the first place. He may have been one of the most experienced and skilled airline pilots in the world, yet made one of the most basic mistakes that can possibly be made.

* not to completely absolve ATC. They screwed up royally too.
 

SimonC

New Member
Sir Arthur Conan Doyle, the creator of the world's greatest scientific detective Sherlock Holmes, believed that photographs of fairies were real.

The Cottingley Fairies appear in a series of five photographs taken by Elsie Wright (1901–1988) and Frances Griffiths (1907–1986), two young cousins who lived in Cottingley, near Bradford in England. In 1917, when the first two photographs were taken, Elsie was 16 years old and Frances was 9. The pictures came to the attention of writer Sir Arthur Conan Doyle, who used them to illustrate an article on fairies he had been commissioned to write for the Christmas 1920 edition of The Strand Magazine. Doyle, as a spiritualist, was enthusiastic about the photographs, and interpreted them as clear and visible evidence of psychic phenomena. Public reaction was mixed; some accepted the images as genuine, others believed that they had been faked.

Interest in the Cottingley Fairies gradually declined after 1921. Both girls married and lived abroad for a time after they grew up, and yet the photographs continued to hold the public imagination. In 1966 a reporter from the Daily Express newspaper traced Elsie, who had by then returned to the United Kingdom. Elsie left open the possibility that she believed she had photographed her thoughts, and the media once again became interested in the story.

In the early 1980s Elsie and Frances admitted that the photographs were faked, using cardboard cutouts of fairies copied from a popular children's book of the time, but Frances maintained that the fifth and final photograph was genuine. Currently the photographs and two of the cameras used are on display in the National Science and Media Museum in Bradford, England. In December 2019 the third camera used to take the images was acquired and is scheduled to complete the exhibition.[1] https://en.wikipedia.org/wiki/Cottingley_Fairies

My previous post on this was deleted for not enough detail; is this too much?
 

SimonC

New Member
..... and some allegations of child abuse rings in the UK, proved to be false after a huge amount of suffering inflicted on the families.

The father of a family was imprisoned in 1986 shortly after the family's arrival in South Ronaldsay, for child abuse. No formal child protection proceedings were initiated. After an alarm raised by officials in a neighbouring authority, sparked by a girl's claim to social workers and police that ritualistic satanic abuse had taken place,[1] action was taken. Other children were taken in late 1990, and the two youngest were told that their mother was dead. Local people began a campaign for the children to be allowed home. It was repeatedly decided that their welfare could not be assured in the care of their mother. It took six years before the last of the children was returned to their mother.[2]

The case came to court in April, and after a single day the presiding judge, Sheriff David Kelbie, dismissed the case as fatally flawed and the children were allowed to return home. The judge criticised the social workers involved, saying that their handling of the case had been "fundamentally flawed" and he found in summary that "these proceedings are so fatally flawed as to be incompetent" and that the children concerned had been separated and subjected to repeated cross-examinations almost as if the aim was to force confessions rather than to assist in therapy. Where two children made similar statements about abuse this appeared to be the result of "repeated coaching".[4] He added that in his view "There is no lawful authority for that whatsoever". Sheriff Kelbie also said that he was unclear what the supposed evidence provided by the social services proved.[5]

The objects seized during the raids were later returned; they included a videotape of the TV show Blackadder, a detective novel by Ngaio Marsh, and a model aeroplane made by one of the children from two pieces of wood, which was identified by social workers as a "wooden cross". The minister was asked to sign for the return of "three masks, two hoods, one black cloak", but refused to sign until the inventory was altered to "three nativity masks, two academic hoods, one priest's robe".

During the investigation the children received several lengthy interviews. McLean was later described by several of the children as a terrifying figure who was "fixated on finding satanic abuse", and other children described how she urged them to draw circles and faces, presumably as evidence indicating abusive rites.[2] These techniques were strongly criticised by Sheriff Kelbie.

One of the children later said of the interviews:

"In order to get out of a room, after an hour or so of saying, 'No, this never happened', you'd break down."[3]

One of the children later said:

"I would never say that a child's testimony in the company of Liz McLean at the time [is reliable]. She was a very manipulative woman, and she would write what she wanted to write. I would doubt any child supposedly making allegations in that situation."

https://en.wikipedia.org/wiki/Cleveland_child_abuse_scandal https://en.wikipedia.org/wiki/Orkney_child_abuse_scandal
 

Inti

Senior Member.
Isn’t this closely related to the idea of the Nobel disease, where Nobel laureates convince themselves that expertise is universally transferable? There was a good article on this in Skeptical Inquirer:
Some authors have invoked the term Nobel Disease to describe the tendency of many Nobel winners to embrace scientifically questionable ideas (Gorski 2012). We adopt this term with some trepidation given its fraught implications. Some authors (e.g., Berezow 2016) appear to assume that Nobel winners in the sciences are more prone to critical thinking errors than are other scientists. It is unclear, however, whether this is the case, and rigorous data needed to verify this assertion are probably lacking.

In this article, we explore the more circumscribed question of whether and to what extent the Nobel Prize, conceptualized as a partial but imperfect proxy of scientific brilliance, is incompatible with irrationality.

https://skepticalinquirer.org/2020/...gence-fails-to-protect-against-irrationality/

They list eight Nobel recipients who espoused erroneous ideas, such as Pauling’s obsession with vitamin C and the racist theories of both Watson and Shockley. They also briefly mention a number of other sufferers.

To which I would add Freeman Dyson and his denial of the science of global heating.
 

FatPhil

Active Member
how was he [Conan Doyle] a highly trained expert?

He was an expert in being fooled by bunk, completely suckered by the Spiritualist movement:

-- https://www.nytimes.com/interactive...s/archives/arthur-conan-doyle-sherlock-holmes

Clearly there's plenty of time in the afterlife for learning new languages. Alles ist klar!
 

JMartJr

Active Member
A very large percentage of trials involving expert witnesses would provide cases: as experts are called by both sides to support each side's position, it is not unusual for experts to reach opposing opinions based on the same underlying set of facts. In such cases, at least one of them is likely to be wrong.

(Edited to de-typo -- I need to start proof reading better.)
 