Pseudoskepticism (and morality)

Mendel

There's not only pseudoscience, but also pseudoskepticism (pseudoscepticism?). The author of the following excerpts believes that we are morally responsible for our pseudoscientific and pseudoskeptical beliefs even if we are sincere.
Article:

The Ethics of Belief, Cognition, and Climate Change Pseudoskepticism: Implications for Public Discourse

Lawrence Torcello

First published: 22 January 2016
https://doi.org/10.1111/tops.12179


[...]

The title “ethics of belief” comes from a 19th-century paper written by British philosopher and mathematician W.K. Clifford. Clifford argues that we are morally responsible for our beliefs because (a) each belief that we form creates the cognitive circumstances for related beliefs to follow, and (b) we inevitably influence each other through those beliefs.

[...]

I therefore want to insist on the term pseudoskepticism as a more accurate label for describing the rejection of scientific consensus based on ideologically driven reasoning, or on the grounds of some formless preference for contrarianism or cynicism parading as scientific skepticism (Torcello, 2011, 2012, 2014b). It is worth emphasizing that pseudoskepticism, as I formulate the concept, is a species of science denialism but not a synonym for science denial. Pseudoskeptics attempt to portray themselves as scientifically objective while they depict mainstream scientists as credulous dogmatists. Pseudoskeptics are unique among science denialists in their attempt to reject science while appropriating the epistemological authority of science and scientific skepticism. In this way pseudoskepticism is akin to pseudoscience in that both run contrary to the very conceptions from which they contrive their justifications (i.e., pseudoscience disavows the methodological constraints of modern science while seeking the approbation of science. Pseudoskepticism seeks to critique scientific consensus as uncritical, or fraudulent, while ignoring the extent to which rigorous methodological skepticism informs modern scientific consensus).

[...]

Perceived tension between the information deficit and cultural cognition models may stem from the fact that participants in the most relevant study (Kahan et al., 2012) who reject climate consensus could more accurately demonstrate knowledge of what climate scientists believe. In point of fact, however, this type of demonstration is not necessarily an appropriate measure of scientific literacy (Holbrook & Rannikmae, 2009). It may merely be a measure of familiarity with general concepts that one can name and identify, without an informed grasp of how those concepts developed. These are the concepts that Kahan et al. subjects believe scientists get wrong, and that those subjects suppose themselves to understand correctly.
A more robust measure of scientific literacy involves the ability to articulate what is methodologically entailed in scientific consensus and how that epistemological process measures up against other attempts to understand the natural world. Scientific literacy should be reflected in the ability to understand the scientific process, to articulate why it has a place of epistemic privilege, and to incorporate such understanding into one's own belief formation in a way that can be identified and accounted for (Holbrook & Rannikmae, 2009).
Knowledge of claims made by scientists (that one believes to be erroneous) fails to capture the deeper, and I argue, more relevant epistemological understandings of why the scientific process works as well as it does. Furthermore, one cannot be expected to know the current claims being made by researchers in every field of science. So any understanding of scientific literacy depending on such knowledge is necessarily incomplete and arbitrary. Scientific literacy, regarding the epistemological merits of the scientific process, should be relevant across scientific domains. The information deficit relevant to science denial is more appropriately understood as a philosophical deficit in understanding the epistemology of science.


Basically, pseudoskeptics can have quite a good grasp of what the established facts and consensus on a certain issue are (better than the average person on the street), but they have only a "cargo cult" knowledge of the social/scientific processes by which experts/scientists arrive at these consensuses. They mimic those misunderstood processes and thereby confidently claim the same expert authority for themselves, while denying it to the actual experts.

I'd be happy to go more in-depth (there's more in this paper we could quote), but this suffices to put pseudoskepticism on Metabunk's map. (Apologies if I chose a bad subforum.)
 
At first I thought you were talking about people who ride on the coat-tails of others - like I do with Mick's work when I pretend to understand why chemtrails aren't real, or like YouTube atheists do when parroting the words of Dawkins and Hitchens - but in reading the original paper it seems that "the rejection of scientific consensus" and "science denialism" are key components.

So I'm guessing a good example of that would be someone like the flat earther Nathan Oakley? He's someone who likes to use scientific-sounding words, believes he is being scientific, and heartily rejects scientific consensus without really understanding it.

(Or at least he did last time I tuned in, a year or so ago).
 
That's a good account of pseudoskepticism. I'd argue it's broader and isn't confined to science denial. I've been planning a video about seven key "symptoms" of pseudoskepticism, objective behaviors that one can self-assess:
1. Skepticism not of complications to an explanation, but of the lack of complications. Example, being skeptical that Gofast is "only" airborne clutter drifting aloft, when it might be a mysterious powered vehicle.
2. Skepticism of expert opinion, without extending the same skepticism to experts in one's own camp. Example, being skeptical of NIST's account of 9/11, but not showing skepticism of David Chandler's claims.
3. Skepticism of the qualifications of anyone who challenges you. Example, Mick is just a video-game designer, what does he know?
4. Skepticism of direct empirical data, in favor of messier "evidence" like human behavior, intrigues, cover-ups, etc., that have no bearing on the empirical data. Example, countering rigid limitations on the size of the Gofast object (based on the camera's field of view) with, "If the object was so small, how did the pilots ever get a visual on it in order to track it?"
5. Lack of interest in possible answers, with a greater emphasis on questions, so as to create the appearance of skepticism. Example: Answering "Can you choose the UAP you find most convincing so we can discuss it?" with, "I'm too busy watching paint dry."
6. Applying the word "dogma" or "faith" to scientific or other expert consensus, as noted in the OP. This is often accompanied by an appeal to a sociological failure of science or peer review, if not belief in an outright cover-up, which goes back to #4.
7. Undue focus on perceived missteps by the party you're being skeptical of, as a way to dismiss everything they say. Example, NIST cut some corners with their WTC simulations, therefore even their broad conclusions must be rejected. Or, a favorite that keeps popping up on Twitter: Mick once analyzed a video before watching it — therefore he is a fraud, and anything he says about unrelated matters is to be dismissed.

Of course, most of these are variations on ad hominem or other fallacies.
 
Skepticism of direct empirical data
I've encountered that in the form of using a study not to answer the questions it was designed for, but to support some other point with its data, out of a desire to "read between the lines" and reach conclusions "they're not telling you".

The catch is that good studies are designed to eliminate sources of error as much as possible: you set the study up specifically so that external factors that could affect your result cancel out. But that cancellation only holds for the question the study was designed to answer. If you ask a different question of the same data, it no longer holds, so you should really look for a study designed for that question. If you come from a position of distrust and do the opposite, your conclusions are always based on confounded data. (Which also explains why you and the experts end up disagreeing.)

Example: you study the performance of "solar sails" on boats. You design experiment A so that all boats experience the same wind, and experiment B so that they all experience the same amount of sun, and you judge sailing performance by A and electrical performance by B. The pseudoskeptic notices your study is funded by people who also invest in "Big Solar", concludes without evidence that your study is fake, uses the raw data from A to judge electrical performance and B to judge sailing performance, and "proves" you wrong "with your own data".
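
To make that swap concrete, here's a toy simulation. Everything in it is invented for illustration (the boats, the numbers, the variable names); it's not from any real study. Ranking the boats' panels with experiment B's controlled data recovers the truth, while ranking them with experiment A's confounded data gives noise:

Code:
import random

random.seed(42)

N = 8  # boats, each with a true sail quality and a true panel efficiency
boats = [{"sail": random.uniform(0.5, 1.5),
          "panel": random.uniform(0.5, 1.5)} for _ in range(N)]

# Experiment A: wind is controlled (identical for every boat);
# sunlight is left uncontrolled and varies boat to boat.
WIND_A = 10.0
exp_a = [{"speed": b["sail"] * WIND_A,                   # fair comparison
          "watts": b["panel"] * random.uniform(2, 12)}   # confounded by sun
         for b in boats]

# Experiment B: sunlight is controlled; wind varies uncontrolled.
SUN_B = 7.0
exp_b = [{"speed": b["sail"] * random.uniform(4, 16),    # confounded by wind
          "watts": b["panel"] * SUN_B}                   # fair comparison
         for b in boats]

def rank(values):
    # Boats ordered best-first by the measured value.
    return sorted(range(N), key=lambda i: values[i], reverse=True)

print("true panel ranking:      ", rank([b["panel"] for b in boats]))
print("ranking from B (fair):   ", rank([r["watts"] for r in exp_b]))  # matches the truth
print("ranking from A (swapped):", rank([r["watts"] for r in exp_a]))  # scrambled

The fair ranking matches the truth by construction; the swapped one is reshuffled by whatever the sun happened to do during experiment A.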

Sometimes this happens to study authors when they don't know what they're looking for when they set up a study. They might, for example, run a 3-month vaccine side-effect study, but exclude pregnant women in the 3rd trimester from the vaccine arm (too risky), collect all side effects, and then notice that the rate of stillbirths (per all births) in the vaccine arm is alarmingly high, while the rate in the control arm is normal. If the study had been designed to look for that effect, it should not have excluded those women, or should have excluded them from both arms, or should actually have observed for 9 months. (If a woman in the 1st or 2nd trimester gives birth in the next 3 months, the birth is almost always premature; the vaccine arm had a high rate of bad births because the study design eliminated the good ones.)
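
Here's a toy model of that selection effect. All the rates and week numbers are invented for illustration, and the "vaccine" in this sketch does literally nothing to the pregnancies:

Code:
import random

random.seed(1)

def stillbirth_rate(exclude_third_trimester, n=100_000):
    # Observed stillbirth rate among births that fall inside
    # the 13-week (~3-month) study window.
    births = stillbirths = 0
    for _ in range(n):
        # True outcome of each pregnancy; the vaccine has ZERO effect here.
        if random.random() < 0.08:               # preterm birth
            birth_week = random.uniform(24, 37)
            p_still = 0.10                       # invented risk
        else:                                    # term birth
            birth_week = random.uniform(38, 41)
            p_still = 0.002                      # invented risk
        # Enroll at a random week while still pregnant.
        enroll_week = random.uniform(1, birth_week)
        # Vaccine arm: 3rd-trimester women (week 28+) are excluded.
        if exclude_third_trimester and enroll_week >= 28:
            continue
        # Only births inside the 13-week observation window are counted.
        if birth_week <= enroll_week + 13:
            births += 1
            stillbirths += random.random() < p_still
    return stillbirths / births

print(f"control arm: {stillbirth_rate(False):.1%}")  # ~1% in this toy model
print(f"vaccine arm: {stillbirth_rate(True):.1%}")   # several times higher

The vaccine arm's rate comes out several times higher even though the "vaccine" changed nothing: excluding 3rd-trimester women means almost the only vaccine-arm births that can land inside the 3-month window are early, high-risk ones.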

So if you don't use the good data because you're "a skeptic", expect bad conclusions, and expect to be held responsible for spreading them.
 
Article:
It just makes me wonder. If you’re a comedian / podcaster / media figure and you’re distrustful of authority, distrustful of authority, distrustful of authority, then how do you decide what authorities you do trust? You don’t believe the Warren Commission which had all that public documentation, but you do trust some bozo with a Ted talk who makes stuff up about sleep . . . because the bozo is a University of California professor? That can’t be it, right? You wouldn’t trust every University of California professor.
Remember the Chestertonian Principle: Extreme skepticism is a form of credulity.

This connects to pseudoskepticism because that "skepticism" is usually directed at certain claims, while others are credulously accepted. True skepticism is an undirected mental stance.
 