Claim: Remote Viewing is a Scientifically Proven Technique that Utilizes a Natural Human Ability to Enable Access to Hidden Information

A here stands for "Anomalous Cognition/ESP has been sufficiently established by the research."

B here stands for "We ought to focus our resources on underlying mechanisms of action research."

So the claim is "If ESP has been sufficiently established by the research, then we should now focus our resources on studying the underlying mechanisms of action."

And since she believes ESP has been sufficiently established by the research, she is calling for the parapsychology community to shift their focus now to mechanisms of action research.

Your disagreement with her is on whether or not A is true, not on whether the conditional A → B is true.
It is highly unusual, and cause for concern, for this logic to be brought into play at this stage.

In general, you might find some effect in a specific experimental situation. Research then focuses on replicating the effect, and on extending the knowledge about the types of situations where it occurs. For your example with SSRIs, you might want to find out which kinds of diagnoses predict that the treatment with SSRIs would be effective (and which wouldn't).

After cursory research, it seems to me that the field of ESP (including remote viewing) moved in the opposite direction: after early claims that free-form remote viewing was possible, it narrowed to an experimental protocol where people are asked to divine a choice between a small number of images/objects, with judges required to determine whether the viewing fits the selection, because the viewings are so unspecific as to be useless by themselves.

It is a protocol that is extremely narrow in scope, compared to the phenomenon it is supposed to be "proving".

It is a protocol that is designed to create a huge number of false positive outcomes.

And it lends itself easily to cheating and manipulation.

For example, if I were operating an RV tournament app, I'd soon have access to a huge database of which kinds of images are very likely to fit random predictions and which aren't. It's obvious how that knowledge could be used to game an RV experiment in a way that isn't easily detected, raising the number of false positives significantly above 50%.
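To make that concrete, here's a minimal simulation of the scheme; the "matchability" scores and the judging model are invented toy assumptions, not data from any real app:

```python
import random

# Toy model: each image has a "matchability" score, i.e. a weight for how
# readily a judge sees an arbitrary vague sketch as fitting it. An operator
# who knows these scores can pair high-matchability targets with
# low-matchability decoys and inflate the hit rate without any ESP.

def trial(target_m, decoy_m, rng):
    """One binary trial: the judge scores how well a random sketch fits each
    image and picks the higher score; a 'hit' means the target won."""
    return rng.random() * target_m > rng.random() * decoy_m

rng = random.Random(42)
n = 100_000

fair = sum(trial(0.5, 0.5, rng) for _ in range(n)) / n
rigged = sum(trial(0.8, 0.3, rng) for _ in range(n)) / n
print(f"fair pairing:   {fair:.3f}")    # ~0.50, i.e. chance
print(f"rigged pairing: {rigged:.3f}")  # ~0.81, no ESP required
```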

The sensible call is to broaden the number and type of protocols that show ESP effects. A call of "let's stick to what works, and explain that" is unscientific.
 
I have, typically in Lit. Reviews and meta-analytic studies. Might vary by field though. I'm mostly familiar with philosophy (where this kind of thing doesn't apply) and behavioral sciences. It'd take me a bit to find examples of it since it's not the kind of thing I mentally flag for future reference, but it's not at all unusual to see. Researchers comment on what avenues are dead ends and what areas are open for fruitful new research all the time in their papers.
My field ranges from behavioral stuff to biomechanics to counseling techniques to case studies to straight-up "how the hell is this body part doing this thing" type stuff, as it's health-allied. Philosophy is probably a bit different, but my field has a ton of variety in the type of literature it uses. There may be some overlap in the behavioral realm, though!

An update: I downloaded the remote viewing app. Using it is pretty fun, and it might make a cute thread. I'm not claiming that it's the most scientific thing ever, but it was fun to use and oddly calming. It's kinda like Wordle for people who believe in ESP in that you try to draw the image you'll see tomorrow, so everyone has the same challenge on the same day (if that makes sense).
 
https://www.ics.uci.edu/~jutts/air.pdf
The paper begins,
Research on psychic functioning, conducted over a two decade period, is examined to determine whether or not the phenomenon has been scientifically established.
Content from External Source
From this outset, I would expect the author to set forth criteria by which to determine whether something has been "scientifically established", and then to apply the criteria. Maybe the author could even develop a scale with "not established" at one end, and "broad scientific consensus" at the other end, and then find a way to figure out how far along that scale a particular effect is.

After the replication crises in various fields, my own criterion for "well established" would be at least two pre-registered, well-designed, large-scale trials by independent researchers.

Another problem for me is that the effect is called "anomalous cognition", but it's not clear to me at all what the claimed nature of this cognition is, and how it differs from intuition. The field of phrenology had all sorts of specific knowledge of what part of the brain did what, but the field of "anomalous cognition" is not specific enough in its claims as to be falsifiable.

What kind of experiment could I do, and what outcome would I need to observe, for ESP researchers to accept that their field is bunk (as phrenology was)? If this question is unanswerable, then the field is not science. The minimum standard for "scientifically established" is "falsifiable", otherwise it's not even scientific.
 
From Utts' paper, we find many phrases that concern me.

1. Using the standards applied to any other area of science, it is concluded that psychic functioning has been well established.
2. That means that it is reliable enough to be replicated in properly conducted experiments, with sufficient trials to achieve the long-run statistical results needed for replicability.
3. For instance, it doesn't appear that a sender is needed. Precognition, in which the answer is known to no one until a future time, appears to work quite well. Recent experiments suggest that if there is a psychic sense then it works much like our other five senses, by detecting change.
4. Given that physicists are currently grappling with an understanding of time, it may be that a psychic sense exists that scans the future for major change,
5. There is little benefit to continuing experiments designed to offer proof, since there is little more to be offered to anyone who does not accept the current collection of data.
Content from External Source
https://www.ics.uci.edu/~jutts/air.pdf
(I've added the numbers for reference.)

1. But her paper (indeed, the entire field of study) is not using the rigorous standards of other sciences. I'm speaking as a chemist, of course, and fully understand that something like medical studies may depend much more upon subjective judgements than do the "hard" sciences; @tinkertailor may have more to say on that. Most scientific work expects to be checked and reproduced by independent researchers, and that can't be done here except in a rather vague and statistical sense.

2. Sufficient trials to see long-run statistics also means sufficient trials to get the statistically expected run of successful hits. Have we any guarantee that poor results were not discarded because "that's not enough trials"? I don't mean just in her own work, of course. (A toy simulation after point 5 illustrates this "quitting while ahead" effect.)

3. No "sender" needed? Is that the reason they call it remote viewing rather than telepathy?

4. (a major red flag for me) Scanning the future, even as a throwaway line in an introduction, brings us right back into the long-discredited centuries-old realm of fortune-telling.

5. This is a highly unprofessional sour-grapes line, saying, in effect, why should we give more proof when they won't believe our data anyway. If people won't believe your data, then you need more and better data, or more rigorous ways of verifying the data. To stop collecting more data is not a way to win over converts.
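
Regarding point 2, here's a toy simulation of that "quitting while ahead" worry, assuming four-choice trials, pure-chance guessing, and an experimenter free to stop early; all parameters are invented:

```python
import random

rng = random.Random(1)
CHANCE, MAX_TRIALS = 0.25, 100  # four-choice trials, chance hit rate 25%

def experiment(stop_when_ahead):
    """Run one ESP-free experiment; optionally stop the moment the
    running hit rate looks impressive ('optional stopping')."""
    hits = 0
    for t in range(1, MAX_TRIALS + 1):
        hits += rng.random() < CHANCE
        if stop_when_ahead and t >= 10 and hits / t >= 0.40:
            return hits / t          # quit while ahead
    return hits / MAX_TRIALS

n = 10_000
print(sum(experiment(False) for _ in range(n)) / n)  # ~0.25: honest design
print(sum(experiment(True) for _ in range(n)) / n)   # noticeably above 0.25
```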
 
What kind of experiment could I do, and what outcome would I need to observe, for ESP researchers to accept that their field is bunk (as phrenology was)? If this question is unanswerable, then the field is not science. The minimum standard for "scientifically established" is "falsifiable", otherwise it's not even scientific.

Regarding falsification: A good theory makes bold predictions about what we should expect to find if the theory is true. Under the falsificationist view, good theories should specify in advance an experiment such that, if its result contradicts the theory, the theory has to be given up.

In the case of ESP, we should expect to find that if ESP exists, the target object or image being remotely viewed by subjects in the lab should be accurately identified and described at rates higher than what we would expect to find due to random chance alone.

This is what ESP proponents claim the data shows. They claim to consistently find a small above-chance effect across studies with varied experimental designs. The effect is a small one, which suggests to them that this ability exists in the general population to varying degrees, just like many other human abilities. They further claim that the measured effect size is consistently bigger when subjects are not chosen at random from the general population but are specifically selected because they've been identified as "gifted".

To falsify the theory that ESP exists, we would expect these studies to consistently fail to reject the null hypothesis. If there is no ESP, we would expect people to correctly identify the target images or objects only at rates consistent with random chance alone.

So if we insist on falsifiability as a necessary condition for something to be deemed "scientific", ESP research safely meets that challenge.
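
To make that chance baseline concrete, here is a minimal sketch of the exact binomial test the prediction implies; the trial counts are made-up illustrations, not figures from any actual study:

```python
from math import comb

def p_value(n, k, p=0.25):
    """P(X >= k) for X ~ Binomial(n, p): the chance of at least k hits in
    n four-choice trials if every guess is pure luck."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# The same 30% hit rate is unremarkable in 40 trials but striking in 400:
print(f"{p_value(40, 12):.3f}")    # 12/40 hits:  p ~ 0.29
print(f"{p_value(400, 120):.4f}")  # 120/400 hits: p ~ 0.01
```

That's why the debate turns so heavily on trial counts, and on whether small biases could mimic a small persistent edge.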

But falsifiability as a demarcation line for science isn't something I accept regardless. It's not at all clear that scientists do or should abandon a theory when it conflicts with observed results. Historically speaking, when an experiment conflicts with the predictions of a theory, the theory is seldom discarded as having been falsified. What scientists historically do is modify the theory's auxiliary assumptions in order to rescue it from falsification. And in fact this is often absolutely the correct thing to do. A theory is never just a claim that stands on its own; it carries with it a host of auxiliary assumptions, and an experiment that does not match the theory's predictions doesn't tell us which of those assumptions needs to be modified, only that something inside that set needs to be.

When Uranus did not turn out to have the orbit predicted by Newtonian physics, scientists did not immediately rush to reject Newtonian physics as having been falsified. Nor should they have. They instead posited that the unusual orbit of Uranus could be explained by a yet-to-be-discovered planet that turned out to be Neptune. A strict falsificationist, if they were to be consistent, would have to insist that since the observation of Uranus' orbit failed to meet the predictions of Newtonian physics, the theory of Newtonian physics should have been abandoned right there. But the astronomers of the day simply made modifications to their theory, a modification which yielded *new* predictions that *now* fit the observable data.

Suppose Neptune had turned out to not exist, would scientists have then dropped Newtonian physics? Should they have?

The history of science is full of such examples that show the way theories come to be accepted and abandoned is not by means of some crucial experiments coming along and falsifying the whole edifice.

So even if we grant that falsification is the hallmark of good science, there's nothing about ESP research that makes it stand out compared to other normal areas of research. It makes predictions about the world that can be verified and/or falsified.

But falsification isn't the standard for good science anyway. As a matter of actual practice falsification only leads to modification of theories, not rejection of them wholesale. *That* only tends to happen when other conditions are in place that people like Kuhn and Imre Lakatos have written about.
 
1. But her paper (indeed, the entire field of study) is not using the rigorous standards of other sciences. I'm speaking as a chemist, of course, and fully understand that something like medical studies may depend much more upon subjective judgements than do the "hard" sciences; @tinkertailor may have more to say on that. Most scientific work expects to be checked and reproduced by independent researchers, and that can't be done here except in a rather vague and statistical sense.

2. Sufficient trials to see long-run statistics also means sufficient trials to get the statistically expected run of successful hits. Have we any guarantee that poor results were not discarded because "that's not enough trials"? I don't mean just in her own work, of course.

Re: 1 & 2, this paper does a pretty good job discussing the history of such concerns being raised and the ways they've been addressed:

https://www.researchgate.net/public...feld_ESP_Debate_A_Basic_Review_and_Assessment

I think an interesting thing I've learned when diving into parapsychology is just how many of the issues behind psychology's replication crisis were first raised in the parapsychological literature, and just how far ahead that literature is, compared to many branches of "mainstream" psychology, in adjusting and adapting its methodologies to those issues.
 
But falsification isn't the standard for good science anyway. As a matter of actual practice falsification only leads to modification of theories, not rejection of them wholesale. *That* only tends to happen when other conditions are in place that people like Kuhn and Imre Lakatos have written about.

To claim that prediction (falsifiability and testability), preferably via truth-preserving formal deductive reasoning (formal logic or mathematics) rather than sloppy reasoning with gaping holes, isn't the foundation of both classical and modern physics cries out for a rational justification. If testability by experiment isn't the foundation of science, what is?

That theories and hypotheses are routinely modified to yield increasingly accurate predictions is not a failure of falsificationism but rather its very fruit.

It seems to me ESP research isn't as rigorous in designing experiments free from design flaws that introduce bias into the results, and thereby account for their deviation from the null hypothesis. Using only four types of shapes as options for the ganzfeld subject to choose from is a case in point, as is the use of judges interpreting vague language to which any data can be made to fit.
 
The prediction of the existence of the rings of Jupiter by Ingo Swann before their actual discovery in 1979 by the Voyager 1 probe:

A pretty safe bet... especially as we now know that all the outer planets have rings of some sort. I'd have been more impressed if he'd said Io is volcanic and looks like a giant pizza. Strange how he missed that one!
 
Your disagreement with her is on whether or not A is true, not on whether the conditional A → B is true.
But please don't tell me what I agree or disagree with. Slightly annoyed at this, but I guess we all have different ways of expressing ourselves.
I cannot remember ever reading an academic paper in any peer-reviewed journal that said anything about author "opinion" or further research being a waste of resources. I read and analyze hundreds of articles a year for school, and it's my understanding that this is the sort of thing papers get rejected for during the peer review process.
:) I agree. In the sciences, a dispassionate tone is the norm for published papers.
You present your results, show the statistical working and determine whether the null hypothesis is rejected or accepted.
The subsequent "abstract" and/or "discussion" sections might include recommendations for future research, or consideration of how the same research might be improved, but I've never seen an academic propose, for a contentious but non-ethically problematic subject, that research should cease because something is proven to their satisfaction.

Utts wasn't saying "Research in my department into the reality of remote viewing will cease, because it's proven",
she was stating that remote viewing is a proven phenomenon, and that no more research to demonstrate this was necessary.
This is an extraordinary position to take.

Whatever their errors, Fleischmann and Pons didn't say "...we've demonstrated cold fusion, there's no need to replicate it".
Whereas the rightfully struck-off Andrew Wakefield was adamant that his abysmal "study" told "the truth" about MMR vaccine, and was noticeably hostile to the tidal wave of evidence against his purported findings.

So the claim is "If ESP has been sufficiently established by the research, then we should now focus our resources on studying the underlying mechanisms of action."
No, it is very clear from Jessica Utts' paper that she is saying ESP has been sufficiently established.
It is clear to this author that anomalous cognition is possible and has been demonstrated.
Content from External Source
An Assessment of the Evidence for Psychic Functioning, Professor Jessica Utts, 1995

It's obvious that if the efficacy of SSRIs has been demonstrated to a high degree of confidence, no more effort should be spent researching the same question.
We've had SSRIs since the late 1980s, but it's easy to find studies (and meta-studies) of their efficacy in treating depression in adults (their primary use, and the use for which they were originally licenced) up to at least 2018 (maybe COVID-19 chilled subsequent primary care studies).
None of the studies I've seen conclude that no more effort should be spent researching the same question.

A pretty safe bet.....especially as we now know that all the outer planets have rings of some sort.
Ah, but Swann's rings are special.
Very high in the atmosphere there are crystals... they glitter. Maybe the stripes are like bands of crystals, maybe like rings of Saturn, though not far out like that. Very close within the atmosphere.
Content from External Source
Jupiter RV session
They're within the atmosphere- so they haven't actually been discovered. Yet. ;)
 
I'm speaking as a chemist, of course, and fully understand that something like medical studies may depend much more upon subjective judgements than do the "hard" sciences; @tinkertailor may have more to say on that.
Tinkertailor always has more to say on anything. :)
 
It seems to me ESP research isn't as rigorous in designing experiments free from design flaws that introduce bias into the results, and thereby account for their deviation from the null hypothesis. Using only four types of shapes as options for the ganzfeld subject to choose from is a case in point, as is the use of judges interpreting vague language to which any data can be made to fit.
Agreed. In a hypothetical case (because I do not know what actual kinds of images the ganzfeld studies used), if the four images chosen were a beach, a roast turkey, an airplane, and a city street, and the subject had a vague impression of a forest, none of them would match. But the experiment calls for a ranking of the four images nonetheless. He might choose as number one a view of a city street because there was a shrub visible on the sidewalk, but how well or how poorly the image matched the impression plays no part in assessing success.

I am, of course, referring to the studies in which images were used, not just the Zener cards. There's a discussion of proper randomization techniques using, for example, a National Geographic database of thousands of photographs. But for any set of photos chosen, the method of determining "hits" was still a ranking among the four.
 
Here's a link to Hyman and Honorton's standards for testing, which also includes commentary of the reasons they think the standards should be more rigorous.
https://parapsych.org/uploaded_file.../09_hyman_and_honorton_a_joint_communiqu_.pdf

The paper is in a format which does not permit me to select and copy portions, but they discuss these topics as ones that should have more well-defined methodology:

Control for sensory leakage
Randomization of target images
Judging and feedback
Multiple analysis
File drawer and retrospective analysis
Statistics
Documentation
The role of meta-analyses.

The "file drawer" comment refers to things like cutting an experiment short just after a run of successes, thus giving an artificially enhanced average, and agree that the number of tests should be specified beforehand, and adhered to. They also comment that 20% of the 28 trials used in a meta-analysis had errors in their use of statistics. I just saw a reference to that 28 trials in another paper referenced here, but I'm not sure whose link that was.
 
They also comment that 20% of the 28 trials used in a meta-analysis had errors in their use of statistics.

This is always a bad sign. Sometimes even in physics they mess up the statistical analysis.

Statistics are sometimes misapplied to give datasets a more 'sciency' facade, despite often involving a lot of sloppy inductive reasoning and interpretation. This includes, but is not limited to, mistaking correlation for causation, the fallacies of affirming the consequent and denying the antecedent, and mistaking a necessary condition for a sufficient one (each an invalid distortion of the valid inference rules modus ponens and modus tollens), plus selection bias and poor raw-data verification and coding. These mistakes and fallacies are chronically perpetrated in the way probability distributions are applied and interpreted, in counterfeit counterfactuals, in the design and interpretation of diff-in-diff analyses, in variance analyses, and in the interpretation of regression analyses.
 
Agreed. In a hypothetical case (because I do not know what actual kinds of images the ganzfeld studies used), if the four images chosen were a beach, a roast turkey, an airplane, and a city street, and the subject had a vague impression of a forest, none of them would match. But the experiment calls for a ranking of the four images nonetheless. He might choose as number one a view of a city street because there was a shrub visible on the sidewalk, but how well or how poorly the image matched the impression plays no part in assessing success.
I don't think the contrived limited choice is problematic. As long as there is proper randomization at all levels, as the autoganzfeld methodology is supposed to ensure, then if the null hypothesis is true (the choices are actually random), the result should match the chance expectation (25 percent in this case). There's no need to measure accuracy if you're still at the point of figuring out whether there is a phenomenon to speak of at all; deviation from the expectation means that the choices made by the subjects were, on average, not random (whatever that means).
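
A minimal sketch of that point, with an invented and wildly uneven image pool: as long as the target is drawn uniformly at random from the four presented images, independently of the judging, direct hits stay at 25% under the null.

```python
import random

rng = random.Random(7)
POOL = [0.9, 0.6, 0.3, 0.1]  # invented 'matchability' weights, very unequal

def trial():
    """One ESP-free autoganzfeld-style trial: the judge scores the four
    images by how well a random sketch fits; hit = target judged best."""
    scores = [rng.random() * m for m in POOL]
    judged_best = scores.index(max(scores))
    target = rng.randrange(4)    # proper, independent target randomization
    return judged_best == target

n = 100_000
print(sum(trial() for _ in range(n)) / n)  # ~0.25 however skewed POOL is
```

Contrast this with the rigged-pairing sketch earlier in the thread: there the pairing depended on the scores, and the protection from randomization was gone.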

I think that the "physics" of esp is where the rubber meets the road, and I'd rather see sensible ideas and  equations being put forth and tested, rather than endless meta analyses of meta analyses of meta......
 
I don't think the contrived limited choice is problematic. As long as there is proper randomization at all levels, as the autoganzfeld methodology is supposed to ensure, then if the null hypothesis is true (the choices are actually random), the result should match the chance expectation (25 percent in this case).
I don't think it's a problem with getting statistics, but it would certainly be a problem if the program as a whole had the objective of evaluating the efficacy of the experiments. And although I'd welcome sensible ideas, adding equations to the mix would just go back to statistics.
 
Inspired by this thread, I dug up from my YouTube history a presentation by Dean Radin on psychokinesis making double-slit results not entirely random.
However, the guy set off my crackpot alarm, so I started looking for critiques of his experiments.

Funny enough, when defending his statistical methods I noticed he was consulting none other than Jessica Utts, who is also his co-author(?!). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7174750/

Not evidence of anything in particular, just a data point on the lack of independent research in those circles. ;)
 
I don't know if this might be of interest,
"Experiment One of the SAIC Remote Viewing Program: A critical re-evaluation",
Dr Richard Wiseman, Dr Julie Milton, 1999, Journal of Parapsychology.

It discusses some of the studies into remote viewing (RV) conducted by SRI/SAIC, and raises methodological doubts.
PDF attached below ("SAIC Exp1 re-evaluation Wiseman Milton").

In September 1995, the American Institutes for Research (AIR), contracted by the Central Intelligence Agency, assembled a ‘blue ribbon’ panel to evaluate this research (Mumford, Rose & Goslin, 1995). The panel included two reviewers chosen for their expertise in parapsychological research, namely Dr Jessica Utts, a professor of statistics at the University of California at Davis and Dr Ray Hyman, a psychologist at the University of Oregon. Dr Michael Mumford and Dr Andrew Rose, both of them senior behavioural scientists and experts in research methods at AIR, together with Dr Lincoln Moses, a professor of statistics at Stanford University and Dr David Goslin, President of AIR and review coordinator, completed the panel. The resulting AIR report contained two major reviews of the research (Utts, 1995a; Hyman, 1995a) and a concluding section that outlined the main points of agreement and disagreement between the two reviews.
Content from External Source
(Wiseman, Milton).

The "Utts, 1995a" reference is the paper metabunker AR318308 linked to (link here); Utts is clearly convinced of the reality of RV.
Aspects of the methodological issues listed by Utts on page 8 of her review are questioned by Wiseman and Milton.

It should be noted that Hyman (see above) markedly disagreed with Utts' interpretation of the evidence, writing

Jessica Utts and I were commissioned to evaluate the research on remote viewing and related phenomena which was carried out at Stanford Research Institute (SRI) and Scientific Applications International Corporation (SAIC) during the years from 1973 through 1994. We focussed on the ten most recent experiments which were conducted at SAIC from 1992 through 1994. These were not only the most recent but also the most methodologically sound. We evaluated these experiments in the context of contemporary parapsychological research. Professor Utts concluded that the SAIC results, taken in conjunction with other parapsychological research, proved the existence of ESP, especially precognition. My report argues that Professor Utts' conclusion is premature, to say the least. The reports of the SAIC experiments have become accessible for public scrutiny too recently for adequate evaluation. Moreover, their findings have yet to be independently replicated. My report also argues that the apparent consistencies between the SAIC results and those of other parapsychological experiments may be illusory. Many important inconsistencies are emphasized. Even if the observed effects can be independently replicated, much more theoretical and empirical investigation would be needed before one could legitimately claim the existence of paranormal functioning.
Content from External Source
"Evaluation of a Program on Anomalous Mental Phenomena", Ray Hyman, 1996, Journal of Scientific Exploration
https://web.archive.org/web/2008060...ificexploration.org/jse/abstracts/v10n1a2.php
(regrettably, no additional text at linked-to source).

Whatever the relative merits of Utts' and Hyman's reviews, the fact is the CIA ceased funding RV research as a result.

Edwin C. May, the judge of the SAIC Experiment One results, replied to Wiseman and Milton's re-evaluation
(abstract viewable here, https://psycnet.apa.org/record/1999-05017-002), Journal of Parapsychology 62 (4), 1998.
Wiseman and Milton responded, stating that their methodological concerns about the SAIC research had not been refuted by May; see https://core.ac.uk/download/1638365.pdf (or PDF, attached below- "SAIC Exp1 Reply to May").

In their reply to May, Wiseman and Milton briefly mention an earlier critical evaluation of an ESP experiment,
"Exploring Possible Sender-to-Experimenter Acoustic Leakage in the PRL AutoGanzfeld",
Richard Wiseman, Matthew Smith, Diana Kornbrot, 1996, Journal of Parapsychology 60 (2), link to abstract
https://psycnet.apa.org/record/1997-07199-001, PDF attached below ("Auditory Leakage AutoGanzfeld").
(Diana Kornbrot supervised my undergraduate project many years ago, nothing to do with ESP; she's a very hard-working, perpetually positive woman.)

Wiseman et al.'s evaluations aren't particularly exciting- not nearly as exciting as claims of remote viewing- but they might be of interest to people who want to understand possible mechanisms underpinning claims of remote viewing.
 

Attachments

  • SAIC Exp1 re-evaluation Wiseman Milton.pdf (195.6 KB)
  • SAIC Exp1 Reply to May.pdf (196.9 KB)
  • Auditory Leakage AutoGanzfeld.pdf (125 KB)
In the case of ESP, we should expect to find that if ESP exists, the target object or image being remotely viewed by subjects in the lab should be accurately identified and described at rates higher than what we would expect to find due to random chance alone.
Yes. But p-hacking is a thing now.
Article:
Data dredging (also known as data snooping or p-hacking)[1][a] is the misuse of data analysis to find patterns in data that can be presented as statistically significant, thus dramatically increasing and understating the risk of false positives. This is done by performing many statistical tests on the data and only reporting those that come back with significant results.[2]


If remote viewing is a purely statistical phenomenon, then it is not clear at all that it is caused by paranormal cognition.

The idea of "talented individuals" suggests that people can be identified who are correctly predicted to perform better at these experiments than random control groups; and that the conditions under which these individuals perform best are also identifiable. I have not heard that they are. (If they were, these individuals would have been eligible for the JRF prize.)
 
I am concerned that we've left Deirdre intently staring at something since post 78...
actually i knew she wouldnt take me up on it

I've been having visions of a better world... a world with sunshine, where Doctor Who fans are treated with respect...
I boycotted the Chibnall years, so i'm the wrong type of Dr. Who fan. :)

Not only fine but true to her MO. ;) @deirdre
But always scolds me for saying too much. :(
you're in a grouchy mood today i see. fighting with the missus?
 
i always thought they stopped because the Russians stopped.
i'm wrong. seems they kept going for a bit longer
Article:
According to the statement of A.V. Bobrow (А.В.Бобров), published in [97], the center 'Vent', in different forms, existed until 2005, with funding and active work up to the end of the 90s. According to publications in [97], the work of the center lasted at least until 1995; more recent publications are related to the 'International Institute of Theoretical and Applied Physics'. Publications, see Fig. 11, show that the peak of research activity associated with instrumental psychotronics continued until 2002-2003. ... After 2000 the group of Kreml' psychics [100] was dissolved; the magazine 'Parapsychology and Psychophysics' was closed; many laboratories, see e.g. [70], were closed in 2004; at the end of 2003 the military unit 10003, established in 1989 to explore the possibilities of military use of paranormal phenomena [101], was eliminated.
 
Fascinating, and terrifying that the US and the USSR should both be sidetracked down such a dead-end track.

The Soviets were investigating black magic, too; but what makes black magic black? If magic could be used for good, then it surely would be white magic.
 
The Soviets were investigating black magic, too; but what makes black magic black? If magic could be used for good, then it surely would be white magic.
what? what makes you think Soviets were investigating black magic?

i'm not sure about the black magic is bad and white magic is good designation, although im sure some people use those definitions. To me black magic is if you harness the power from Satan, white magic is harnessing the power of Nature.
 
Perhaps everything magical the Soviets did (if anything) could be classified as 'black magic'
or anything sorcerers and witches did. i wonder if the term 'white magic' even existed back then. ?? hhmmm.

edit: it was
Article:
Some time in 1977 the Managing Director of Mowbrays, a prestigious religious publishing house was looking in his shaving mirror one morning and was suddenly struck with the thought of publishing a History of White Magic. Goodness knows where that came from! Were inner plane friends of mine beaming psychical suggestions via the looking glass? Was he thus an unconscious medium?
 
The idea of "talented individuals" suggests that people can be identified who are correctly predicted to perform better at these experiments than random control groups; and that the conditions under which these individuals perform best are also identifiable. I have not heard that they are. (If they were, these individuals would have been eligible for the JRF prize.)

And to sort of repeat what I wrote back in post #8 of this thread.

If such "talented individuals" exist what have they done with their talent?
Where are the individuals or companies that make money by performing these services for others, or exploit it themselves?
Academic researchers who devise some new way to make something these days quickly form companies to exploit that new process.
Where are the people exploiting their ability to remote view?
This of course is not proof that such abilities do not exist, but would support Niven's contention that if they do exist they are "nearly useless."
 
Fascinating, and terrifying that the US and the USSR should both be sidetracked down such a dead-end track.

It is. In the case of the Soviets, their science was often in the service of the state and the party, and that meant in the service of the Revolution. So, if someone or some idea was liked by the party leaders, it became the default. The theories of Trofim Lysenko are the classic example. He was from a peasant family, rejected "western" ideas about genetics, and came up with his own ideas based on Leninism:

Lysenkoism (Russian: Лысенковщина, romanized: Lysenkovshchina, IPA: [lɨˈsɛnkəfɕːʲɪnə]; Ukrainian: лисенківщина, romanized: lysenkivščyna, IPA: [lɪˈsɛnkiu̯ʃtʃɪnɐ]) was a political campaign led by Soviet biologist Trofim Lysenko against genetics and science-based agriculture in the mid-20th century, rejecting natural selection in favour of a form of Lamarckism, as well as expanding upon the techniques of vernalization and grafting. In time, the term has come to be identified as any deliberate distortion of scientific facts or theories for purposes that are deemed politically or socially desirable.

More than 3,000 mainstream biologists were dismissed or imprisoned, and numerous scientists were executed in the Soviet campaign to suppress scientific opponents. The president of the Soviet Agriculture Academy, Nikolai Vavilov, who had been Lysenko's mentor, but later denounced him, was sent to prison and died there, while Soviet genetics research was effectively destroyed. Research and teaching in the fields of neurophysiology, cell biology, and many other biological disciplines were harmed or banned.
Content from External Source
https://en.wikipedia.org/wiki/Lysenkoism

So, I would imagine in a similar vein, if an important party official thought there was something to ESP/psi, then that's what would be studied. Successfully. Even if it wasn't.

The US intelligence community got wind of it, had a huge case of FOMO, and we ended up with Puthoff and Targ studying psi for 20+ years with the CIA paying for it.
 
If such "talented individuals" exist what have they done with their talent?

I think that's a strong point; in the real world people with such abilities would be highly valued
(there are films and TV series where the "gifted" are employed, or pursued, by intelligence agencies or shadowy corporations).

The fact that such hypothetical abilities might only score a little above chance is immaterial; panels of "viewers" could be used, and significant results derived from correlating their reports. And almost all human abilities improve with practice.
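
For what it's worth, the panel idea would work startlingly well if even a tiny real edge existed. A back-of-envelope Condorcet-style calculation, with the 52% per-viewer accuracy assumed purely for illustration:

```python
from math import comb

def panel_accuracy(p, n):
    """P(a majority of n independent viewers, each correct with probability
    p on a binary question, gets it right); even splits break 50/50."""
    acc = sum(comb(n, k) * p**k * (1 - p)**(n - k)
              for k in range(n // 2 + 1, n + 1))
    if n % 2 == 0:
        acc += 0.5 * comb(n, n // 2) * (p * (1 - p)) ** (n // 2)
    return acc

for n in (1, 11, 101, 1001):
    print(n, f"{panel_accuracy(0.52, n):.3f}")  # ~0.52, ~0.55, ~0.66, ~0.90
```

Which is rather the point: a panel of a thousand viewers with a genuine 2% edge would be right about 90% of the time on binary questions. An effect of even that size would be operationally useful, and very hard to miss.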

Despite various usually inaccurate stories about psychics helping in police investigations over the years, where are the viewers if (God forbid) a child goes missing in the real world? At the minimum they could perhaps check out wells/ inaccessible areas and reduce the search area for the police. Or maybe viewers could apply their skills to help find missing climbers and cavers.
After a number of searches, any performance above chance would eventually be evident.

Why don't viewers report the numbers and locations of bad guys and their prisoners in hostage situations? Where's the psychic bomb squad or de-mining team, where viewers find the ordnance and PK (psychokinesis) exponents defuse it, at a safe distance?
If a crane can be viewed at a Soviet depot, why can't the viewers see, and report on, enemy emplacements if our troops are required to go into action?

We could have an extraordinary space exploration program at a fraction of the cost of the real thing.
As Ingo Swann's impressions of Jupiter appear to be unaffected by the inverse square law, and, as the SRI researchers themselves commented, Swann started giving his impressions in less time from the start of his session than it would take light to travel from Earth to Jupiter, we could start deep-space exploration, perhaps interstellar exploration, at practically no cost tomorrow.

In fact the "viewers" could have been doing all these things, at (it would seem) little cost or risk to themselves, for decades.
But there's no good evidence that they have. Strangely, Swann never published a guide to the solar system, and can't have returned to Jupiter, considering that he thought (in 1995 IIRC, after the 1994 Shoemaker-Levy 9 comet impact) that craters are visible on Jupiter as seen from space.

As often seen on Metabunk, extraordinary claims, often vague details, and a small minority of enthusiastic supporters who seem reluctant to consider alternative hypotheses (while implying that the majority of their peers have that very flaw).
And nothing of much use to show for it.

I'm not sure any RV reports are distinguishable from imagination, or in some cases (the abducted US General in Italy) the use of general knowledge and common sense. I don't know if the viewers knew that these were the faculties that they were using- perhaps not.
Maybe someone should run experimental trials where people with a broad range of interests, decent general knowledge and an interest in current affairs are asked to simply imagine distant locations, or guess where things of interest might be found.

(At least the SRI/ SAIC "viewers" got to live what must have been reasonably comfortable lives at the American taxpayer's expense for a few years, so someone benefitted).
 
It is. In the case of the Soviets, their science was often in the service of the state and the party and that meant in the service of the Revolution. So, if someone or some idea was liked by the party leaders it became the default. The theories of Trofim Lysenko is the classic example. He was from a peasant family and rejected "western" ideas about genetics and came up with his own ideas based on Leninism:
Lysenko (whose ideas are unfortunately undergoing a bit of a resurgence in Russia at the moment) must bear the blame for widespread famine due to crop failure. That is the disastrous result that may come from permitting policy to take the place of science.

Although it’s impossible to say for sure, Trofim Lysenko probably killed more human beings than any individual scientist in history. Other dubious scientific achievements have cut thousands upon thousands of lives short: dynamite, poison gas, atomic bombs. But Lysenko, a Soviet biologist, condemned perhaps millions of people to starvation through bogus agricultural research—and did so without hesitation. Only guns and gunpowder, the collective product of many researchers over several centuries, can match such carnage.
Content from External Source
https://www.theatlantic.com/science/archive/2017/12/trofim-lysenko-soviet-union-russia/548786/
 
With respect to the issue of continuing to perform Remote Viewing (RV) experiments even though it has been "proved".

Yes, you need to continue to perform experiments. Every time a potential mechanism by which RV might work is proposed, you create a need for further experiments: experiments to prove or disprove that specific mechanism. Has every possible influence that might affect the viewer's ability been accounted for in the previous experiments? Highly unlikely. So the data from old experiments may not have controlled for, or even measured, factors that would influence the proposed mechanism. Cell phone tower signal strength? Atmospheric pressure? What the viewer had for breakfast that day? Their blood type?

Just proposing a mechanism and hoping the old data is sufficient to prove or disprove it is likely to prove disappointing.
 
Perhaps everything magical the Soviets did (if anything) could be classified as 'black magic'
or anything sorcerers and witches did.
Bit off-topic but an interesting coincidence,
in the 1974 novel "Tinker Tailor Soldier Spy" by John Le Carré, himself a former MI6 (Secret Intelligence Service) operative,
valued intelligence from a high-ranking Soviet source is so secret that only a tiny cabal within MI6 manage it.
The "product" from this source is codenamed "Witchcraft".
 