The "Highly Trained Expert" Fallacy - Counterexamples?

Mick West


An argument often used by believers in unusual theories runs like this: the more conventional explanations require that a person made a mistake; that person is a highly trained expert; therefore a mistake is highly unlikely, the conventional explanation is highly improbable, and so their unusual theory moves to the top of the list.

The classic example here is with UFOs. The "expert" witness is often a pilot, and ideally a military pilot. If a pilot reports they saw some strange flying craft, then, for some UFO fans, the only possible explanation is that it is, in fact, a strange flying craft. This conclusion is reached because the pilot is "highly trained" and hence it is thought to be impossible that they would misidentify Venus, or Mars, or another plane, or a bird, or a balloon, as a strange flying craft.

And yet experts DO make mistakes. Pilots actually misidentify Venus as an oncoming plane relatively frequently. Pilots land on the wrong runway, or at the wrong airport. Pilots think they are upside down when they are not. Pilots misidentify oil rig lights in the ocean as lights in the sky.

The fallacy extends to other conspiracy domains. With 9/11 we have some "highly trained" engineers who can't immediately (or even eventually) wrap their heads around why World Trade Center Building Seven collapsed in the way it did. With "chemtrails" we have some scientists from various fields who have been convinced by specious arguments about contrails. And of course, there are more mainstream areas where it comes into play. There are anti-vaccine "highly trained" experts. There's the "highly trained" Dr. Judy Mikovits who made the glib but error-riddled "Plandemic" video. There are "experts" who think that really low levels of radio waves can have serious health effects.

Experts make mistakes. I think it might be useful to gather examples of expert mistakes. Not to make the argument that experts are idiots - indeed many of the people involved are highly intelligent, highly trained, capable, and experienced. We should also be cautious not to overstate the prevalence of mistakes; in many cases they are indeed quite rare. The point is that mistakes are possible no matter how talented and experienced the expert is, and one should not discount that possibility, especially when the alternatives (aliens, vast hyper-competent conspiracies, chemtrails, etc.) are greatly lacking in evidence (and often face significant counter-evidence).

Such discussions often devolve into intractable subjective assessments of probability: "sure, experts make mistakes, but how likely is it that two experts would make mistakes, or that an expert would make a mistake on the same day that something else odd happened?" These rebuttals are not entirely without merit - often two or more things happening is less likely than one thing happening (unless the events have some causal link.) But any such discussion would really benefit (on all sides) from a deeper understanding of the types of mistakes that experts make, and (where possible) how often they make them.
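To make that independence point concrete, here is a toy sketch. Both probabilities below are made-up illustrative numbers, not measurements of anything:

```python
# Toy numbers only: how "two experts both erred" depends on independence.
p_error = 0.01  # assumed chance that one expert errs on a given observation

# If the two errors really are independent, both occurring is rare:
p_both_independent = p_error * p_error  # about 1 in 10,000

# But with a common cause (same weather, same odd light, same briefing),
# the second error becomes likely once the first has happened:
p_second_given_first = 0.5  # hypothetical correlated case
p_both_correlated = p_error * p_second_given_first  # 1 in 200

print(p_both_correlated / p_both_independent)  # the correlated case is ~50x likelier
```

The squared figure is only valid when the errors share no common cause, which is exactly the assumption worth questioning in most of these cases.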

So I'm starting this thread as a place to gather illustrative examples that will help illuminate the fallacy, and shed some clarity on the "how likely" argument. Let's collect cases where experts got it very wrong.
 
The first example I'm going to give is a classic in engineering: the Hyatt Regency Hotel walkway collapse of 1981, in which over a hundred people were killed when a suspended walkway collapsed. The mistake made here was a simple one, and yet was not immediately obvious. A seemingly minor design change was approved by the engineer, who did not realize that it doubled the load on one connection.

[Diagram: hanger rod connection detail - the original continuous-rod design vs. the as-built two-rod design]
Of interest here is that this is a problem that seems simple when you understand it, but before you do, it's not intuitively obvious. Engineers can make the same mistake other people do, and think that the design change does not change the applied loads.

The actual issue is very well explained by the above diagram, and even more clearly by Grady Hillhouse of Practical Engineering:

Source: https://www.youtube.com/watch?v=VnvGwFegbC8


It was also not a solitary mistake. Multiple people failed to see the problem. In addition, the original design (which would probably not have failed) was dangerously lacking in redundancy itself. The failure occurred during a dance party with a large number of people moving on and under the walkway. It was a sequence of events, failures by experts that might seem highly unlikely, but actually happened.
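The load-doubling can be shown with arithmetic alone. A minimal sketch with made-up round numbers (the real loads were different; only the ratio matters):

```python
# Hanger-rod load check, illustrative units (W = weight of one loaded walkway).
W = 1.0

# Original design: one continuous rod from the ceiling through both walkways.
# The connection at the 4th-floor box beam carries only that walkway's weight:
load_original = W

# As-built design: two offset rods. The 2nd-floor walkway now hangs from the
# 4th-floor box beam, so that one connection must carry BOTH walkways:
load_as_built = W + W

print(load_as_built / load_original)  # 2.0 - the connection load doubled
```

Nothing about the walkways themselves changed; only the path the loads took to the ceiling did, which is exactly why the change looked harmless on paper.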
 
If anything, conspiracy theorists and cranks like flat earthers, anti-vaxxers, and covidiots have too little respect for expertise. They'll happily call out mistakes made by experts when it suits them, like the CDC's about-face on wearing facemasks.

Speaking of which, here's a look back at a Time article about facemasks from March 3, and its update on April 3.
Its original title was "Health Experts Are Telling Healthy People Not to Wear Face Masks for Coronavirus. So Why Are So Many Doing It?"
http://archive.is/qMV7M
On April 3, it was changed to "Public Health Experts Keep Changing Their Guidance on Whether or Not to Wear Face Masks for Coronavirus"
Article:
Note: The CDC has updated its guidance to the public around wearing masks during the coronavirus pandemic. On April 3, it advised Americans to wear non-medical cloth face coverings, including homemade coverings fashioned from household items, in public settings like grocery stores and pharmacies. See our latest story for more on the science of face masks.

...it’s no surprise that face masks are in short supply—despite the CDC specifically not recommending them for healthy people trying to protect against COVID-19. “It seems kind of intuitively obvious that if you put something—whether it’s a scarf or a mask—in front of your nose and mouth, that will filter out some of these viruses that are floating around out there,” says Dr. William Schaffner, professor of medicine in the division of infectious diseases at Vanderbilt University. The only problem: that’s not likely to be effective against respiratory illnesses like the flu and COVID-19. If it were, “the CDC would have recommended it years ago,” he says. “It doesn’t, because it makes science-based recommendations.”
...
Lynn Bufka, a clinical psychologist and senior director for practice, research and policy at the American Psychological Association, suspects that people are clinging to masks for the same reason they knock on wood or avoid walking under ladders. “Even if experts are saying it’s really not going to make a difference, a little [part of] people’s brains is thinking, well, it’s not going to hurt. Maybe it’ll cut my risk just a little bit, so it’s worth it to wear a mask,” she says. In that sense, wearing a mask is a “superstitious behavior”: if someone wore a mask when coronavirus or another viral illness was spreading and did not get sick, they may credit the mask for keeping them safe and keep wearing it.
 
I could fill the page with COVID-19 related errors and retractions alone.

The Lancet Retracts Hydroxychloroquine Study
Article:
The online medical journal The Lancet has apologized to readers after retracting a study that said the anti-malarial drug hydroxychloroquine did not help to curb COVID-19 and might cause death in patients.

Retracted: Study that questioned effectiveness of masks against SARS-CoV-2
Article:
Disclaimer: The study that this article discusses — “Effectiveness of surgical and cotton masks in blocking SARS-CoV-2” — was retracted by the authors following recommendations from the editors of Annals of Internal Medicine. The authors have admitted that the data that they analyzed in their study were “unreliable,” making their findings “uninterpretable.”

Research published at the beginning of April — which has since been retracted — casts serious doubts about the effectiveness of both surgical and cloth masks in preventing the spread of infectious SARS-CoV-2 particles.

These experts got it right on the second try.
Article:
For the Canadian researchers, the finding that hotter weather doesn't reduce COVID-19 cases was surprising.
"We had conducted a preliminary study that suggested both latitude and temperature could play a role," said study co-author Dr. Peter Jüni, also from the University of Toronto. "But when we repeated the study under much more rigorous conditions, we got the opposite result."
 
It is such an odd thing to me because everyone knows someone that should have known better, but made a mistake anyhow.

Lots of people know the Mars Climate Orbiter was lost because of an imperial-metric mix-up, but that's not the whole story. The broader issue at work was making sure the specs were checked out properly rather than making assumptions about what they said. And even that isn't the direct reason the probe was destroyed. Even with the thruster output being wrong because of the unit mix-up, the spacecraft was controllable, and predictably controllable. It did make it to Mars, after all.

Here's where "experts were wrong" comes in. Like all accident reports, there are multiple causes. When you read the report, it becomes very clear that the unit-of-measurement issue was an odd little thing they should have picked up almost immediately, except for, you know, the whole experts-being-the-smartest-guy-in-the-room thing. The MCO had already had four trajectory correction maneuvers planned and executed. Over the course of several months there was mounting evidence that something was wrong with the thruster output, giving them both the data and the time to work out what was wrong and what thruster sequence could put MCO on the correct trajectory. Engineers had already noticed something was amiss when TCM-3 was executed: they were having to fire the thrusters more frequently and for much longer than on previous Mars missions. Mars Global Surveyor, the previous mission, had TCM-3 canceled and only used 1, 2, and 4. MGS's TCM-4 adjusted the vehicle by 0.29 m/s. MCO's TCM-4 was 4.0 m/s. There was ample evidence something was seriously wrong with MCO.

There was enough warning, then, that a fifth course correction was in order. It never happened. Somehow, the idea took hold that the targeting software was the problem. However, when the data was fed into the software used for a previous Mars mission, it came back with the same trajectory, off target by hundreds of kilometers. Staring them in the face, though, was almost a year's worth of tracking data and four thruster sequences that told them where the probe was and what TCM-5 would have to do to put it back on track. TCM-5 was brought up, but it never gained the traction it needed, because every team seemed equally unsure of what the actual situation was - which itself should have been a massive warning that something was seriously wrong.

You've got multiple teams of multiple experts all sitting in their own rooms thinking "you know, this isn't quite right, but I'll see where it goes." That's a pretty big mistake for an expert to make. Frankly, I'm sort of surprised that they never worked out the thruster issue. You'd think someone would have said "have you noticed that our thrusters seem to be underpowered? Everything is like four and a half times what it was on our earlier missions." "Hang on... four and a half? One pound of force is 4.45 newtons. Is there something wrong with the thruster software?"
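That factor of roughly four and a half is the giveaway. A minimal sketch of the mismatch (illustrative numbers, not the actual MCO telemetry):

```python
# One pound-force is about 4.44822 newtons. The ground software reported
# thruster impulse in lbf*s; the navigation software read the same number
# as N*s, so every modeled impulse was low by that factor.
LBF_TO_N = 4.44822

true_impulse_n_s = 100.0                       # hypothetical burn, in N*s
reported_number = true_impulse_n_s / LBF_TO_N  # the same burn expressed in lbf*s
as_modeled_n_s = reported_number               # ...but read as if it were N*s

print(round(true_impulse_n_s / as_modeled_n_s, 2))  # 4.45 - the telltale ratio
```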
 
I guess it would be very easy to write many volumes on how highly trained and experienced professionals do things incorrectly. Pilots crash planes; doctors saw off the wrong leg; engineers build suspension bridges that fall apart at the first gust of wind - the list is endless. The increasing complexity of modern technology, medicine, engineering, and so forth only increases the likelihood of human error.

One way to minimise these problems is the checklist. I have always been a big fan of using checklists and was delighted a few years ago when I received a copy of The Checklist Manifesto by Atul Gawande. I think it was meant as an ironic gift but I was happy enough with the joke.

He argues that there are two types of error. The first is an error of ignorance, where we just don't know or understand things. This is sort of understandable in an unknown/unknowns way (and is more or less how society, science and technology advances). The second is the error of ineptitude where we don't properly use, implement or review what we actually know. This is less forgivable and the checklist is designed to improve outcomes for all professionals and minimise this.

Gawande quotes numerous examples where medical outcomes have been significantly improved following the introduction of checklists for certain procedures (he is a surgeon). In a very real sense the checklist had corrected previous ineptitude.

I can't really remember the medical examples but do remember the example of what eventually turned into the B-17 bomber. This was built by Boeing to satisfy a USAAC spec and performed well enough on paper to be selected for evaluation. On its evaluation flight in front of the assembled brass it left the ground and promptly crashed, killing some of the crew. The aircraft was extremely complicated and was deemed something like "too much for one man to fly". Sadly that was true and the pilot missed out a step in the take-off procedure and died in the crash.

The test pilots got together and designed a pre-flight checklist that fitted on an index card. That solved the complexity problem and there were no more incidents of that nature. The USAAC had kept an interest in the bomber (despite Boeing losing the tender) and eventually went on to purchase nearly 13,000 of them.

I suppose it is great to know lots of stuff but not much good if you don't use it all in the correct manner. There is no shame in other people checking your work either (something a few professionals might choose to remember).
 
Came across this one recently: trained military pilot makes an error, crashes a B-52.

https://en.wikipedia.org/wiki/1994_Fairchild_Air_Force_Base_B-52_crash

On Friday, 24 June 1994, a United States Air Force (USAF) Boeing B-52 Stratofortress crashed at Fairchild Air Force Base, Washington, United States,[1] after its pilot, Lieutenant Colonel Arthur "Bud" Holland, maneuvered the bomber beyond its operational limits and lost control. The B-52 stalled, fell to the ground and exploded, killing Holland and the three other field-grade officers on board the aircraft.

The subsequent investigation concluded that the crash was attributable primarily to three factors: Holland's personality and behavior; USAF leaders' delayed or inadequate reactions to earlier incidents involving Holland; and the sequence of events during the aircraft's final flight. The crash is now used in military and civilian aviation environments as a case study in teaching crew resource management. It is also often used by the U.S. Armed Forces during aviation safety training as an example of the importance of complying with safety regulations and correcting the behavior of anyone who violates safety procedures.
...
On 24 June 1994, a USAF B-52H bomber crew stationed at Fairchild Air Force Base prepared to practice an aircraft demonstration flight for an air show which was due to take place the following day. The crew consisted of pilots Lt. Col. Arthur "Bud" Holland (aged 46) and Lt. Col. Mark McGeehan (38), Colonel Robert Wolff (46), and weapon systems officer/radar navigator Lt. Col. Ken Huston (41). Holland was the designated aircraft commander for the flight, with McGeehan as the co-pilot and Wolff as a safety observer. Holland was the chief of the 92nd Bomb Wing's Standardization and Evaluation branch, McGeehan was the commander of the 325th Bomb Squadron, Wolff was the vice commander of the 92nd Bomb Wing, and Huston was the 325th Bomb Squadron's operations officer.
 
The Sally Clark murder trial illustrates the egregious misapplication of statistical data by a supposed expert.


Clark's first son died in December 1996 within a few weeks of his birth, and her second son died in similar circumstances in January 1998. A month later, Clark was arrested and tried for both deaths.

Professor Sir Roy Meadow, former Professor of Paediatrics at the University of Leeds ... testified at Clark's trial that the chance of two children from an affluent family suffering cot death was 1 in 73 million. Clark was convicted by a 10–2 majority verdict on 9 November 1999, and given the mandatory sentence of life imprisonment. She was widely reviled in the press as the murderer of her children.

In October 2001, the Royal Statistical Society (RSS) issued a public statement expressing its concern at the "misuse of statistics in the courts". It noted that there was "no statistical basis" for the "1 in 73 million" figure. In January 2002, the RSS wrote to the Lord Chancellor pointing out that "the calculation leading to 1 in 73 million is false".

Meadow's calculation was based on the assumption that two SIDS deaths in the same family are independent. The RSS argued that "there are very strong reasons for supposing that the assumption is false. There may well be unknown genetic or environmental factors that predispose families to SIDS, so that a second case within the family becomes much more likely than would be a case in another, apparently similar, family."
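The arithmetic behind the two figures is simple enough to sketch. The 1-in-100 conditional risk below is purely hypothetical, included only to show the direction of the effect:

```python
# Meadow's calculation: take roughly a 1-in-8,543 chance of a single SIDS
# death in a family like the Clarks', and square it as if the two deaths
# were independent events.
p_one = 1 / 8543
p_two_independent = p_one ** 2
print(round(1 / p_two_independent / 1e6))  # 73, i.e., "1 in 73 million"

# If shared genetic or environmental factors raise the risk of a second
# death - say, hypothetically, to 1 in 100 given a first death - the joint
# probability is far larger than the squared figure:
p_second_given_first = 1 / 100  # illustrative, not a measured value
p_two_dependent = p_one * p_second_given_first
print(round(p_two_dependent / p_two_independent))  # ~85x the independent estimate
```

The headline number falls apart as soon as the independence assumption does, which is exactly the RSS's point.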

Clark's release in January 2003 prompted the Attorney General Lord Goldsmith to order a review of hundreds of other cases. Two other women convicted of murdering their children ... had their convictions overturned and were released from prison. Trupti Patel, who was also accused of murdering her three children, was acquitted in June 2003. In each case, Roy Meadow had testified about the unlikelihood of multiple cot deaths in a single family.
 
When I adopted this as an email/usenet .sig nearly two decades back, I didn't know how hard it would be two decades later to prove it's a real quote. I think I lifted it from a /The Register/ story, or similar.

"One cannot delete the Web browser from KDE without losing the ability to
manage files on the user's own hard disk."
-- Prof. Stuart E Madnick, MIT, "expert" witness for Microsoft. 2002/05/02

During that bit of the questioning, I believe Madnick was also under the misapprehension that KDE is an operating system, rather than just a graphical environment. The sound of face-palming from Linux nerds worldwide was clear at the time, but alas it seems to have faded into just myth and legend now.
 
Hi, If you are just looking for examples of scientists believing false theories, or not accepting new correct theories, there must be whole books on the subject. A few of my favourites :-
  • Phlogiston theory - it seemed obvious that when we burn something e.g. wood, something hot comes out of the wood (the phlogiston) and just leaves behind a small amount of ash. Hard to believe that actually oxygen is combining with the wood.
  • In 1835, Auguste Comte, a prominent French philosopher, stated that humans would never be able to understand the chemical composition of stars. In 1859, spectroscopy was discovered, which shows what the stars are made of.
  • Studies of geology and evolution showed that the earth must have existed for billions of years, but physicists had no mechanism that could keep the sun shining for so long, so for a long time this "disproved" the geology and evolution of an ancient earth - until the discovery of radioactivity.
  • At the end of the 19th century physicists were thinking they had pretty well found everything, it was going to be boring just refining the physical constants. Before finding the structure of the atom, radioactivity, quantum effects etc.
  • Then in the early days of radioactivity, they started finding "N rays" everywhere. https://en.wikipedia.org/wiki/N-ray
  • Plate tectonics - explains pretty much every feature of earth's surface but was not believed, again because no-one could think of a process to make the continents move, until the evidence was overwhelming.
  • Percival Lowell was convinced he could see canals on Mars, due to the usual problems, it is far away and out of focus.
  • In the oil industry, there is a tale of a senior geologist who promised to "drink every drop of oil found in the North Sea". My own manager in the early 1990s said any talk of finding oil around the Falklands was a complete hoax and a fraud, after all the seismic data was shown on TV. One of the companies operating there has now found 1.7 billion barrels of oil in place.
Of course there is another level of confusion, quotes from scientists sounding wrong which are now disputed, e.g. did astronomers really say "space travel is bunk" just before it happened? https://en.wikipedia.org/wiki/Harold_Spencer_Jones#Opinions_about_Space_Travel
 
Came across this one recently: trained military pilot makes an error, crashes a B-52.
This puts me in mind of the Tenerife Disaster. The guy that headed KLM's flight training department, the in-house expert on the 747, and initially selected to head the accident investigation was the very pilot that took off without clearance and caused the accident* in the first place. He may have been one of the most experienced and skilled airline pilots in the world, yet made one of the most basic mistakes that can possibly be made.

* not to completely absolve ATC. They screwed up royally too.
 
Sir Arthur Conan Doyle, the creator of the world's greatest scientific detective Sherlock Holmes, believed that photographs of fairies were real.

The Cottingley Fairies appear in a series of five photographs taken by Elsie Wright (1901–1988) and Frances Griffiths (1907–1986), two young cousins who lived in Cottingley, near Bradford in England. In 1917, when the first two photographs were taken, Elsie was 16 years old and Frances was 9. The pictures came to the attention of writer Sir Arthur Conan Doyle, who used them to illustrate an article on fairies he had been commissioned to write for the Christmas 1920 edition of The Strand Magazine. Doyle, as a spiritualist, was enthusiastic about the photographs, and interpreted them as clear and visible evidence of psychic phenomena. Public reaction was mixed; some accepted the images as genuine, others believed that they had been faked.

Interest in the Cottingley Fairies gradually declined after 1921. Both girls married and lived abroad for a time after they grew up, and yet the photographs continued to hold the public imagination. In 1966 a reporter from the Daily Express newspaper traced Elsie, who had by then returned to the United Kingdom. Elsie left open the possibility that she believed she had photographed her thoughts, and the media once again became interested in the story.

In the early 1980s Elsie and Frances admitted that the photographs were faked, using cardboard cutouts of fairies copied from a popular children's book of the time, but Frances maintained that the fifth and final photograph was genuine. Currently the photographs and two of the cameras used are on display in the National Science and Media Museum in Bradford, England. In December 2019 the third camera used to take the images was acquired and is scheduled to complete the exhibition.[1] https://en.wikipedia.org/wiki/Cottingley_Fairies

My previous post on this was deleted for not enough detail - is this too much?
 
..... and some allegations of child abuse rings in the UK, proved to be false after a huge amount of suffering inflicted on the families.

The father of a family was imprisoned in 1986 shortly after the family's arrival in South Ronaldsay, for child abuse. No formal child protection proceedings were initiated. After an alarm raised by officials in a neighbouring authority, sparked by a girl's claim to social workers and police that ritualistic satanic abuse had taken place,[1] action was taken. Other children were taken in late 1990, and the two youngest were told that their mother was dead. Local people began a campaign for the children to be allowed home. It was repeatedly decided that their welfare could not be assured in the care of their mother. It took six years before the last of the children was returned to their mother.[2]

The case came to court in April, and after a single day the presiding judge, Sheriff David Kelbie, dismissed the case as fatally flawed and the children were allowed to return home. The judge criticised the social workers involved, saying that their handling of the case had been "fundamentally flawed" and he found in summary that "these proceedings are so fatally flawed as to be incompetent" and that the children concerned had been separated and subjected to repeated cross-examinations almost as if the aim was to force confessions rather than to assist in therapy. Where two children made similar statements about abuse this appeared to be the result of "repeated coaching".[4] He added that in his view "There is no lawful authority for that whatsoever". Sheriff Kelbie also said that he was unclear what the supposed evidence provided by the social services proved.[5]

The objects seized during the raids were later returned; they included a videotape of the TV show Blackadder, a detective novel by Ngaio Marsh, and a model aeroplane made by one of the children from two pieces of wood, which was identified by social workers as a "wooden cross". The minister was asked to sign for the return of "three masks, two hoods, one black cloak", but refused to sign until the inventory was altered to "three nativity masks, two academic hoods, one priest's robe".

During the investigation the children received several lengthy interviews. McLean was later described by several of the children as a terrifying figure who was "fixated on finding satanic abuse", and other children described how she urged them to draw circles and faces, presumably as evidence indicating abusive rites.[2] These techniques were strongly criticised by Sheriff Kelbie.

One of the children later said of the interviews:

"In order to get out of a room, after an hour or so of saying, 'No, this never happened', you'd break down."[3]

One of the children later said:

"I would never say that a child's testimony in the company of Liz McLean at the time [is reliable]. She was a very manipulative woman, and she would write what she wanted to write. I would doubt any child supposedly making allegations in that situation."

https://en.wikipedia.org/wiki/Cleveland_child_abuse_scandal https://en.wikipedia.org/wiki/Orkney_child_abuse_scandal
 
Isn’t this closely related to the idea of the Nobel disease, where Nobel laureates convince themselves that expertise is universally transferable? There was a good article on this in Sceptical Inquirer:
Some authors have invoked the term Nobel Disease to describe the tendency of many Nobel winners to embrace scientifically questionable ideas (Gorski 2012). We adopt this term with some trepidation given its fraught implications. Some authors (e.g., Berezow 2016) appear to assume that Nobel winners in the sciences are more prone to critical thinking errors than are other scientists. It is unclear, however, whether this is the case, and rigorous data needed to verify this assertion are probably lacking.

In this article, we explore the more circumscribed question of whether and to what extent the Nobel Prize, conceptualized as a partial but imperfect proxy of scientific brilliance, is incompatible with irrationality.

https://skepticalinquirer.org/2020/...gence-fails-to-protect-against-irrationality/

They list eight Nobel recipients who espoused erroneous ideas, such as Pauling’s obsession with vitamin C and both Watson’s and Shockley’s racist theories. They also briefly mention a number of others who suffered from it.

To which I would add Freeman Dyson and his denial of the science of global heating.
 
how was he [Conan Doyle] a highly trained expert?

He was an expert in being fooled by bunk, completely suckered by the Spiritualist movement:

Conan Doyle attended seances and wrote and lectured on spiritualism. He befriended Harry Houdini, the escape artist and magician, maintaining that Houdini had psychic powers even though Houdini himself denied it.

Their friendship ended soon after Houdini attended a seance at which Conan Doyle’s second wife, the former Jean Leckie, claimed to be channeling the spirit of Houdini’s beloved mother. Leckie produced several pages of automatic writing, in fluent English and signed with a cross. Houdini was highly skeptical: His mother, a Jew, had been a rabbi’s wife and, as an immigrant from Hungary had a limited grasp of English.
-- https://www.nytimes.com/interactive...s/archives/arthur-conan-doyle-sherlock-holmes

Clearly there's plenty of time in the afterlife for learning new languages. Alles ist klar!
 
A very large percentage of trials involving expert witnesses would provide cases -- as experts are called by both sides to support each side's position, it is not unusual to have experts reach opposing opinions based on the same underlying set of facts. In such cases, at least one of them is likely to be wrong.

(Edited to de-typo -- I need to start proof reading better.)
 
I am not sure this is a very helpful topic if we are trying to debunk pseudo-science and hopefully give the public more faith in proper scientists. Anyway .....

The most destructive example of a scientist accepted as an expert in his own country is probably Trofim Lysenko, an agronomist in the Soviet Union under Stalin. He believed that plants' characteristics were acquired from their environment rather than inherited from their parents. So he taught that farmers had to plant their seeds very close together, because plants of one species would not compete against their fellows but would willingly die in sacrifice. Cows treated well would produce lots of milk; it had nothing to do with which parents they were bred from. He claimed that Siberia had been transformed into a land of orchards and gardens. (Sorry, I may not be explaining this very well.) Significantly, he attacked anyone who tried to use statistics to test the theories. Over 3,000 biologists were imprisoned, fired, or executed for attempting to oppose Lysenkoism, and genetics research was effectively destroyed until the death of Stalin in 1953. It is claimed that Lysenko played an active role in the famines that killed millions of Soviet people. The Chinese government also used his science for a while, with similar results.
https://en.wikipedia.org/wiki/Trofim_Lysenko
https://en.wikipedia.org/wiki/Lysenkoism
I guess the moral is to listen to the consensus of experts across all countries, not one maverick in one country. Worryingly these theories are starting to be discussed and approved again.
 
I recently read the Skeptical Inquirer article "Percival Lowell and the Canals of Mars."

https://skepticalinquirer.org/2018/05/percival-lowell-and-the-canals-of-mars/

I wonder if Fravor's Tic-Tac incident and sighting are similar to Lowell's Mars canal observations?
Excerpts:
Speculation. That’s where the problems come in. There is physical reality, and there is interpretation; and it is frequently the interpretation, rather than the reality, that seizes the attention of human beings. Our brains are remarkably predisposed to the interpretation of objective physical reality in psychological, self-referential terms. Unfortunately, these terms are frequently just plain wrong.

Examples of this are legion. In previous articles in SI, my coauthors and I have discussed ordinary objects that have metamorphosed, in the minds of their observers, into nonexistent phenomena ranging from UFOs to Bigfoot, and we have found specific patterns of mental processing that contribute directly to these misinterpretations (e.g., Sharps et al. 2016).

But why does anybody see them in the first place? As mentioned, research in my laboratory, published primarily in SI (e.g., Sharps et al. 2016), has elucidated some of the psychological dynamics of those who think they see Bigfoot, flying saucers, aliens, and ghosts. One of the things we found in that research was that people generally don’t make something out of nothing. In other words, you don’t see Bigfoot on a featureless plain; you see an ape-shaped tree stump or something similar, and your brain makes Bigfoot out of it for you. The same brain-based phenomena can also create a Loch Ness monster out of a school of Scottish salmon, a Death Star out of a helicopter with a broken landing light, and so on. These Gestalt reconfigurations result from our mental misperception and misinterpretation of real things in the real world—or on the real Mars—and these phenomena are governed by specific psychological laws. These laws are suggested to be a major psychological source of the observation of the canals of Mars.

But how does an astronomer such as Lowell or Schiaparelli maintain his beliefs in these canals, to the point at which, in the face of mounting professional opposition, he sees more and more of them? Human beings are social creatures with the ability to develop strong investments in our ideas and beliefs. This is suggested to be another major source of the Canal phenomenon: sociocognitive influences, to be joined with individual differences and Gestalt reconfiguration.

Individual Differences

But expertise aside, the fact is that brevity of observation limits our precision, in astronomy as in criminal eyewitness identification. Brevity can completely change our interpretation of our observations (e.g., Sharps 2017), whether we think we see a criminal suspect with a gun or a canal on the planet Mars. In short, if we have strong individual psychological reasons to see canals, we will see them if the observational conditions permit them at all.

Observations are subject to the psychology of the individual interpreting them; this is a crucial principle that all scientists, in all fields, should consider.

Gestalt Reconfiguration

Gestalt psychology, the venerable theoretical perspective that deals with perceptual and cognitive configuration, provides rather good answers (e.g., King and Wertheimer 2005; Kohler 1947). Consider two of the Gestalt laws of perception, the laws of closure and of good continuation (see Sharps 1993). When we see objects that are close together, we tend to see them as connected; and when they form contours, lines, or curves, we tend to see them as units.

Sociocognitive Factors​

Lowell had invested enormously, in financial and in psychological terms, in the canals of Mars, and as has been demonstrated many times, strong investment leads to strong beliefs that are difficult to sway even in the presence of contrary evidence. The principle of cognitive dissonance (Festinger 1957; Sharps 2017) deals with this rather nicely. Even if a given idea proves to be completely wrong, we tend to hold to it, and even to defend it with relatively incoherent cognitive processing, if we’re sufficiently invested in it (Festinger 1957).

Conclusions

Instead, we find an important lesson for our more modern inquiries. The scientist does not lie outside of the natural world. Rather, the scientist is entirely part of that world and is subject to scientific law; in the present case, to elements of the Gestalt laws of perception and cognition and to the laws of related areas of experimental psychology. It is important for all scientists, in all disciplines, to be aware of these essential facts and to use them to exert caution in the interpretation of what might otherwise be interpreted as purely objective observations.
Content from External Source
 