Popular Mechanics piece: “Your Consciousness Can Jump Through Time”

Montauk

Active Member
Feel free to delete this if it doesn't fit within the formatting guidelines

This was making the rounds a couple of weeks back:

https://www.popularmechanics.com/science/a65653221/science-of-precognition-explained/

The article presents anecdotes from Julia Mossbridge, Ph.D., and experiments by Dean Radin, Ph.D., as evidence for precognition. The broader theory suggested is that consciousness can perceive the future because time is not linear.

Here are some relevant excerpts:

As far back as the age of seven, Mossbridge has had precognitive dreams, she says. She and her parents were skeptical of them until she began recording the details in a dream journal. While she admits she's misremembered some of her dream visions, she's also been able to foretell events from the future that she would have had no other way of knowing.
Radin created an experiment to prove it. His hypothesis was that if awareness transcended time, responses to an upcoming stimulus would appear before the stimulus itself…This experiment has since been replicated ad nauseam and echoed the original results, which were statistically significant.
In 1995, the CIA even declassified its own precognition research after statisticians were hired to review the work and declare it statistically reliable.
Mossbridge's research has shown that most people are capable of some level of precognition.

But as we all know, anecdotes suffer from confirmation bias, selective recall, vague dream content, and a lack of controls, which makes them very much not evidence. Radin's "presentiment" EEG experiments have not held up under stricter replication (see Wagenmakers et al. (2011), "Why Psychologists Must Change the Way They Analyze Their Data."). The CIA's "Stargate" and related ESP research were terminated in 1995 as useless (see the AIR report). And quantum entanglement does not provide a mechanism for precognition.
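To make the Wagenmakers point concrete, here's a quick toy simulation (my own sketch of the statistical principle, not Radin's actual protocol): even when there is zero real effect, "peeking" at the p-value as data come in and stopping at the first p < .05 inflates the false-positive rate well beyond the nominal 5%.

```python
# Minimal sketch of optional stopping, one of the flexible-analysis
# practices criticized by Wagenmakers et al. (2011). All numbers are
# illustrative; this is not Radin's presentiment design.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def optional_stopping_trial(max_n=200, step=10, alpha=0.05):
    """One simulated experiment under the null: pre-stimulus arousal is
    pure noise for both 'calm' and 'emotional' upcoming stimuli. We add
    subjects in batches and stop as soon as p < alpha ("peeking").
    Returns True if we ever declare significance -- a false positive."""
    calm, emotional = [], []
    for _ in range(max_n // step):
        calm.extend(rng.normal(0, 1, step))       # null: same distribution
        emotional.extend(rng.normal(0, 1, step))  # null: same distribution
        _, p = stats.ttest_ind(calm, emotional)
        if p < alpha:
            return True
    return False

experiments = 2000
false_positives = sum(optional_stopping_trial() for _ in range(experiments))
print("Nominal alpha: 0.05")
print(f"False-positive rate with peeking: {false_positives / experiments:.2f}")
# Typically prints ~0.15-0.20, i.e. three to four times the nominal rate.
```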

This is a clear attempt to further mainstream this long-debunked and thoroughly analyzed theory. It's also another nail in the coffin for the once-great Popular Mechanics.
 
That was in Popular Mechanics? Jeez. When I was a kid they'd run articles on how to build a solar still or how to use your watch as a compass. A far-out article was how to build a jet-powered go-kart. Now it's The Tooth Fairy Gazette?

This article is the print equivalent of Ow! My Balls.
 
Yeah, whenever anyone brings up the CIA documents for Stargate, you can immediately counter by pointing out what they're leaving out: the CIA explicitly closed the program after evaluations found it to be nonsense and wholly without impact.

None of the claimed "success" cases are real either, even going back to the DoD days. They claim "successes" in cases where every other participant and the documentary evidence show they had no impact.

One of the more frequently referenced cases, involving the Red Brigades and the Dozier kidnapping, is pretty funny. We let the Italians take full credit and downplayed our own participation because of the context, which gave these guys room to claim credit; we now know, many years later, that those claims are false (pre-2010s it would've been harder to know this).
So, in 1981, just a few months before, the Army formalized the expansion of a group previously called the Field Operations Group into what is now infamously known as the Intelligence Support Activity (this is the period when that name was in use; I've covered this group elsewhere on here w/r/t Elizondo).
This was in fact their first known mission after being formally stood up, and it was so close to their creation date that it may actually have been their first, period. Regardless, this would be the very mission that 'cemented' their existence as a critical capability still with us today. A team of folks from ISA and CAG, led by a Colonel named Jesse Johnson, provided technical intelligence support to the Italians. They conducted a splendid operation, narrowing SIGINT intercepts and HUMINT derived from the arrests down to a specific area, where they assessed patterns of life for candidate locations via things like electricity usage.
The psy-bros didn't really do anything here besides getting asked to provide offhand advice to an officer; that provably was not what produced the result.

Don't have a digital copy to screenshot, but the "real" story there is told in Relentless Strike by Sean Naylor.
 
For all the mentions of Radin we have on Metabunk, his involvement here had not been on my radar. I did write about one of his experiments, and that fostered much distrust in any other experiments Radin might be conducting.
 
The original article is paywalled for me, but I found an archived version here: https://archive.ph/3e8Kh

Reviews critical of Mossbridge and Radin's various meta-analyses have previously noted several problems with them; one example is "Cross-Examining the Case for Precognition: Comment on Mossbridge and Radin (2018)", Houran et al., 2018 (attached below).

Abstract
External Quote:
Based on a review and meta-analyses of empirical literature in parapsychology, Mossbridge and Radin (2018) argued for anomalous replicable effects that suggest the possibility of precognitive ability or retrocausal phenomena. However, these conclusions are refuted on statistical and theoretical grounds—the touted effects are neither meaningful, interpretable, nor even convincingly replicable. Moreover, contrary to assertions otherwise, the possibility of authentic retrocausation is discredited by modern theories in physics. Accordingly, Mossbridge and Radin's interpretations are discussed in terms of misattribution biases that serve anxiolytic functions when individuals confront ambiguity, with potential reinforcement from perceptual–personality variables such as paranormal belief. Finally, we argue that research in human consciousness should be multidisciplinary, and notably, leverage informed investigators in the physical sciences to advance truly valid and cumulative theory building.

And attempts to replicate the effects of studies described in the meta-analyses have been unsuccessful. A recent example is "The Future Failed: No Evidence for Precognition in a Large Scale Replication Attempt of Bem (2011)", Muhmenthaler et al., 2022.

Abstract
External Quote:
Precognition describes the ability to anticipate information about a future event before this event occurs. The goal of our study was to test the occurrence of precognition by trying to replicate three experiments of the most central study in the field (Bem, 2011, Journal of Personality and Social Psychology). In this study, Bem time-reversed well-established psychological effects so that a "causal" stimulus appeared after the participants gave their response. We conducted two priming experiments and a free recall experiment in the backward "precognition" version and, as a control manipulation, in the classic forward version. More than 2000 participants participated via the Internet; thus, our study had high statistical power. The results showed no precognition effects at all. We further conducted exploratory post hoc analyses on different variables and questionnaire items and found some significant effects. Further studies should validate these potentially interesting findings by using theory-driven hypotheses, preregistrations, and confirmatory data analyses.
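That last caveat is worth making concrete. Here's a toy simulation (my own illustration, not the authors' analysis) of why post hoc hits like those are expected even when nothing real is going on: test enough unrelated questionnaire items and some will cross p < .05 by chance alone.

```python
# Toy demo of multiple comparisons: under the null, roughly 5% of
# uncorrected tests come out "significant". Numbers are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_participants, n_items = 2000, 20

# Null world: the "precognition score" is unrelated to every item.
score = rng.normal(size=n_participants)
items = rng.normal(size=(n_items, n_participants))

p_values = [stats.pearsonr(score, item)[1] for item in items]
n_hits = sum(p < 0.05 for p in p_values)
print(f"'Significant' items out of {n_items}: {n_hits} (expect ~1 by chance)")
```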
 

Attachments: Houran et al. (2018), "Cross-Examining the Case for Precognition" (PDF)

As Sabine Hossenfelder jokingly likes to say, it's at about 100% on my bullsh*t meter.
 
The article presents anecdotes from Julia Mossbridge, Ph.D., and experiments by Dean Radin, Ph.D., as evidence for precognition.

I'll give you spooky...

Because I love to know the reliability of a source, the first thing I did was search for "retraction Julia Mossbridge".
The first hit (from Startpage) was a retraction by Dean Radin that had nothing to do with Mossbridge!

(Specifically, "Prediction of Mortality Based on Facial Characteristics", https://www.frontiersin.org/journals/human-neuroscience/articles/10.3389/fnhum.2016.00173/full , retracted here: https://www.frontiersin.org/journals/human-neuroscience/articles/10.3389/fnhum.2016.00515/full , saying "concerns were raised regarding the scientific validity of the article".)

What are the chances of that!?!?!?

Well, technically, quite high: the Frontiers webpage knows that they're in the same clique, and presents data to the search engine to support that clustering.

Apparently Mossbridge clusters with Jessica Utts:
Mossbridge, J., Tressoldi, P., and Utts, J. (2012). "Predictive physiological anticipation preceding seemingly unpredictable stimuli: a meta-analysis". Front. Psychol. 3:390. doi: 10.3389/fpsyg.2012.00390 - https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2012.00390/full

Utts was the credulous one in the 1995 Project Stargate review: "Hyman argued that Utts' conclusion that ESP had been proven to exist 'is premature, to say the least.'" (quoth https://en.wikipedia.org/wiki/Remote_viewing ) - more here: https://www.metabunk.org/threads/cl...ccess-to-hidden-information.13057/post-294835

Clearly a full-paid-up member of the woo-niverse.

Just waiting for a Mossbridge/Bem collaboration now.
 
Radin's "presentiment" EEG experiments have not held up under stricter replication (see Wagenmakers et al. (2011), "Why Psychologists Must Change the Way They Analyze Their Data.")
I have plenty of priors for "likely gonna be garbage" (Puthoff, Loeb, Utts, ...), but Wagenmakers is one of my "likely gonna be good" priors (along with Gelman and Wiseman in these kinds of fields).

So much so that if it's one of them criticising Sean Carroll, my absolute favourite physics communicator, I doubt Carroll. (This almost always means that he's stepped outside his field of expertise, in which case, he's lost the "authority" badge already.)

These shortcuts change how I read papers. If it's a paper from a "likely gonna be garbage" source, I go in specifically looking for mistakes: unfounded assumptions (citing discredited papers, or simply papers from discredited sources, ...), poor experimental design (lack of controls, experimenter degrees of freedom, ...), invalid statistical analysis (missing or meaningless error bars, unjustified prior indifference, ...), fallacious reasoning (false dichotomy, ...), and sometimes outright fraud (could their N support their p-value, and again the error bars, ...).

And if it's from the likely good, I make myself a nice cup of tea to enjoy with it. (And don't get me wrong, because no paper is perfect, quite often there's a "no but!", however, that's almost certainly one that could be resolved with a little clarification - there might be unstated assumptions that are always in use but that I'm unfamiliar with, say.)
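To make that "could their N support their p-value" check concrete, here's the kind of back-of-the-envelope script I mean (the numbers below are hypothetical, not from any specific paper): back out the effect size a reported p and N would imply, then ask whether an effect that big is remotely plausible for the field.

```python
# Sanity check: what effect size would a reported two-sided p-value
# imply, given the reported sample size? Hypothetical numbers only.
import math
from scipy import stats

def implied_cohens_d(p_two_sided, n_per_group):
    """Cohen's d implied by a two-sided p from an independent-samples
    t-test with n_per_group subjects per arm (equal groups)."""
    df = 2 * n_per_group - 2
    t = stats.t.isf(p_two_sided / 2, df)   # |t| needed to reach this p
    return t * math.sqrt(2 / n_per_group)  # d = t * sqrt(2/n)

# Hypothetical claim: p = .001 from only 15 subjects per group.
d = implied_cohens_d(0.001, 15)
print(f"Implied Cohen's d: {d:.2f}")
# Prints ~1.3, an enormous effect. If the paper claims a subtle psi
# effect (the meta-analyses argue for d around 0.1-0.2), something is
# off: either the stats are wrong or the sample got extremely lucky.
```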
 