A Tear In The Sky - Nimitz/Tic-Tac/Catalina UFO Documentary

They have finally published a paper on the expedition and the "Tear in the Sky"
Mick included the paper with his post; this is a new forum page, so it bears repeating.

We discussed a quantum random noise generator (QRNG) earlier in the thread:
The actual UAPx report mentions a Quantum Random Noise Generator being used as a baseline:

[attached screenshot of the UAPx report]
The paper does not mention it at all. Maybe it's among the "other tools [that] were less useful, and thus they are not listed here".

I can see some rationale for bringing it, because the UAPx concept of "anomaly" is statistical. This is the "Discussion" section of the paper in full (and it is unusual for that section not to discuss the results presented earlier):
External Quote:
6. Discussion

In light of the possibilities, our most intriguing event appears almost by definition to be ''ambiguous''; changing interpretations change the statistical significance. That has inspired us to recommend a general plan for the field. We suggest (scientific) UAP researchers adopt the following conventions: An ambiguity requiring further study is a coincidence between two or more detectors or data sets at the level of 3 [sigma] or more, with a declaration of genuine anomaly requiring (the HEP-inspired) 5 [sigma], combining Eqs. (5) and (6). (HEP = High-Energy Physics.)
Coincidence here is defined as ''simultaneity'' within the temporal resolution, and spatial when germane. This way, one rigorously quantifies the meaning of extraordinary evidence, in the same way it has been done historically by particle physicists, who have established a very high bar to clear.
The statistical significance must be defined relative to a null hypothesis, in our case accidental coincidence, combined with causally-linked hypotheses, like cosmic rays striking camera pixels.
For cases where significance is difficult to determine, we recommend defining ambiguity based on the number of background events expected, where 1 event is the borderline: e.g., if < 1 event is expected to be near-simultaneous for a particular pair of sensors, but ≥ 1 events are detected, they should each be inspected, as time permits, especially qualitatively ambiguous incidents.
With a concept like this, it makes sense to bring along sources of randomness (such as the Cosmic Watches) that will generate the occasional "anomaly".

The QRNG may have been intended to serve as another source of randomness, or perhaps as a control: if you pick some "events" based on non-UFO-related randomness, they should be qualitatively different from events picked via UFO-related randomness, right?

The whole concept obviously suffers from the fact that we don't know whether the Cosmic Watch events are UFO-related at all. It could be akin to rolling dice, and when you roll the same number 3 times in a row, you go "well, that's anomalous, let's look at my UFO camera".
The dice don't make that moment special, they're completely unrelated.
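
Here's a back-of-the-envelope sketch of what "accidental coincidence" looks like in practice. Every rate, window and duration in it is a made-up illustrative number of mine, not anything taken from the UAPx paper; the point is just that unrelated detectors produce chance coincidences at an entirely predictable rate.
Code:
# Sketch: how many purely accidental coincidences two unrelated detectors
# would produce. All numbers below are hypothetical, chosen for illustration.
from scipy.stats import poisson

rate_a = 0.01        # hypothetical trigger rate of detector A (events/second)
rate_b = 0.01        # hypothetical trigger rate of detector B (events/second)
window = 0.5         # hypothetical coincidence window (+/- seconds around each A event)
duration = 5 * 3600  # hypothetical observing time: one five-hour night

# Expected accidental coincidences, assuming the detectors trigger
# independently (Poisson): each A event has ~ rate_b * 2 * window chance of
# a B event landing inside its window.
expected = rate_a * duration * rate_b * 2 * window
print(f"expected accidental coincidences: {expected:.1f}")

# Probability of seeing at least k coincidences if ALL of them are accidental:
for k in (1, 2, 5):
    print(f"P(>= {k} by chance) = {poisson.sf(k - 1, expected):.3f}")
# With numbers like these you expect a couple of coincidences per night from
# chance alone, so a coincidence by itself is nowhere near a 3 sigma "ambiguity".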

There's a little bit of data on the Cosmic Watch in the appendix, and a promise of "a later paper about Cosmic Watches for both UAP as well non-UAP research, such as solar physics."
 
I'm still internally screaming at this:
External Quote:
For cases where significance is difficult to determine, we recommend defining ambiguity based on the number of background events expected, where 1 event is the borderline: e.g., if < 1 event is expected to be near-simultaneous for a particular pair of sensors, but ≥ 1 events are detected, they should each be inspected, as time permits, especially qualitatively ambiguous incidents.
2 sigma is the usual 95% confidence, i.e. an event that you're expecting to occur once in 20 trials by random chance. An event that you're expecting to occur once in 1 trial is 0 sigma, not anywhere near their proposed 3 sigma standard for ambiguity.

And this is on top of the fact that they don't have any idea what the UAP they're looking for actually are, and hence no statistical model of how they would manifest.

Either I'm missing something fundamental, or this is statistical nonsense.
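
As a quick numerical check of that point (my own sketch, using two-sided Gaussian tails for the familiar sigma levels and a Poisson count for the paper's "expected background" criterion):
Code:
from scipy.stats import norm, poisson

for s in (2, 3, 5):
    p_two_sided = 2 * norm.sf(s)   # chance of a fluctuation at least s sigma from the mean
    print(f"{s} sigma ~ 1 in {1/p_two_sided:,.0f}")
# 2 sigma ~ 1 in 22 (the familiar "once in 20"), 3 sigma ~ 1 in 370,
# 5 sigma ~ 1 in 1.7 million.

# The paper's criterion: expect < 1 background event, observe >= 1.
# With exactly 1 background event expected, seeing at least one is routine:
p_at_least_one = poisson.sf(0, 1.0)          # = 1 - exp(-1) ~ 0.63
sigma_equiv = norm.isf(p_at_least_one / 2)   # two-sided equivalent
print(f"P(>=1 event | 1 expected) = {p_at_least_one:.2f} (~{sigma_equiv:.1f} sigma)")
# ~0.5 sigma: nowhere near the proposed 3 sigma bar for "ambiguity".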

I mean, Elizondo's "five observables" at least make some kind of sense: I can agree to "if it breaks physics, it's anomalous", even if I don't think alien visitors would be able to do that.
But to go, "I won a coin flip, that's anomalous" is just weird to me.
 
But to go, "I won a coin flip, that's anomalous" is just weird to me.
It would seem that LOSING a coin flip would be equally anomalous. So everything is anomalous. (If everything is anomalous, is anything anomalous?)

Also, are they conflating/confusing "ambiguous" with "anomalous"?
 
Also, are they conflating/confusing "ambiguous" with "anomalous"?
They're treating a 3 sigma event as "might be anomalous, might not", and that's the ambiguity.

It's said to be "HEP-inspired"; I tracked that down:
Article:
particle_physicist on March 15, 2021 at 12:48 pm said:

Experimental particle physicist here.
Several comments:

(1) The three sigma ("evidence for") and five sigma ("discovery of") rules are essentially a particle physics convention. I don't think that other fields of physics are too concerned with that.

(2) They are a useful convention to protect against false positives and also against the fact that many of the uncertainties we deal with are what we call "systematic" in nature, e.g. they have to do with how well we understand our detector and other processes that can contaminate our signals. These systematic uncertainties can easily be underestimated. [...]

(4) Discovery of BSM [physics beyond the Standard Model] would be an extraordinary claim, and extraordinary claims require extraordinary evidence (5 sigma).
In particle physics, 3 sigma results have tended to not replicate.

For the non-scientists, many random processes generate results that, when tabulated, look roughly like a bell. This is called a "normal distribution". Every normal distribution is characterized by the bell's center (the mean μ) and its "width" (the standard deviation σ, referred to as "sigma").
Article:
[image: empirical rule histogram]

For an approximately normal data set, the values within one standard deviation of the mean account for about 68% of the set; while within two standard deviations account for about 95%; and within three standard deviations account for about 99.7%.
The article gives the probability for μ ± 5σ as 0.999999426696856, i.e. about 1 - 0.00006%.

For example, if you flipped a fair coin to see how many times you got heads in a row (a run), a run of 5 heads or more would be roughly a 2 sigma event, a run of 9 heads or more roughly a 3 sigma event, and a run of 21 heads or more roughly a 5 sigma event.

When you find a 5 sigma event, you have to make sure you understand the randomness involved: if the coin is crooked, then your calculations are off; that's what the physicist called a "systematic" uncertainty.

Note also that a 5 sigma event is not anomalous per se: if you built a bunch of robots and ran the coin flip experiment 2 million times, you'd expect to see a run of 21 heads; that would be normal, not anomalous.
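
For anyone who wants to check those run lengths, here's a minimal sketch (converting the one-sided tail probability of each run into an equivalent sigma):
Code:
from scipy.stats import norm

# Chance that the next n flips of a fair coin all come up heads, expressed as
# an equivalent (one-sided) sigma.
for n in (5, 9, 21):
    p = 0.5 ** n
    sigma = norm.isf(p)    # z-score whose upper tail probability equals p
    print(f"run of {n} heads: p = {p:.2e}  ~ {sigma:.1f} sigma")
# ~1.9, ~2.9 and ~4.9 sigma respectively: close to the 2 / 3 / 5 sigma quoted above.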
 
I haven't had time to go through the paper, so I hope I'm not raising an irrelevant, tangential point here.

Reading these last few comments, this reminds me of the Prosecutor's fallacy.

People assuming that the probability that something anomalous has happened, given that some evidence was observed, is the same as (1 - the probability that the evidence would be observed "normally").

In high energy physics there are some theoretical models that use statistics to evaluate how the observations support each hypothesis. One can't just take that sort of statistical analysis and use it in another field where no theoretical model exists. This looks like an example of what Feynman called Cargo Cult Science. It has the trappings of science, some outward appearance that reminds us of the form of a scientific investigation, but isn't real science.

The fallacy is in claiming that measurements with low probability are evidence that it is likely that an anomalous thing has happened.

To illustrate the fallacy, consider the Sally Clark case.

A pediatrician came up with the erroneous statistical interpretation that, since the likelihood of a sudden infant death from natural causes is rather low (say, in the 0.01%–0.1% range, depending on the specifics of the case), if two baby deaths happen under the same caregiver, the likelihood of both being natural is the square of the probability of a single death, so in the 1:100,000,000 to 1:1,000,000 range. This has contributed to quite a few apparently wrongful murder convictions in the UK. In Clark's case, she spent 3 years in jail and, after being exonerated, suffered from severe depression and ended up dying from acute alcohol intoxication, aged 42, about 4 years after her release.

I could go on, but I don't want to make this comment too long. I can expound if needed.

Instead of this fallacious use of statistics, it would be more appropriate to use likelihood ratios or just an odds view.

And just as it is far more likely that an infant found dead died from natural causes than that the mother killed them, however low the rate of natural infant death, it is far more likely that some low-probability measurement happened by chance than that it was caused by interstellar spaceships, or ghosts, or whatever paranormal thing people are trying to prove with their instruments, however low the probability of the observation.
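
To make the odds view concrete, here's a toy calculation. Every number in it is invented purely to show the structure of the argument; these are not real SIDS, homicide or UAP statistics.
Code:
p_obs_given_mundane   = 1e-6   # hypothetical: chance of the observation if nothing unusual happened
p_obs_given_anomalous = 0.5    # hypothetical: chance of the observation if the anomalous hypothesis were true
prior_odds            = 1e-8   # hypothetical prior odds on the anomalous hypothesis

likelihood_ratio = p_obs_given_anomalous / p_obs_given_mundane
posterior_odds   = prior_odds * likelihood_ratio
posterior_prob   = posterior_odds / (1 + posterior_odds)

print(f"likelihood ratio: {likelihood_ratio:,.0f}")
print(f"posterior odds  : {posterior_odds:.0e}")
print(f"posterior prob  : {posterior_prob:.4f}")
# Even with a "1 in a million" observation, the anomalous hypothesis ends up
# at roughly 0.5%, because it started out vastly less likely than the mundane
# alternative. The fallacy is reading the 1-in-a-million figure itself as the
# probability that the mundane explanation is true.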
 
In high energy physics there are some theoretical models that use statistics to evaluate how the observations support each hypothesis. One can't just take that sort of statistical analysis and use it in another field where no theoretical model exists. This looks like an example of what Feynman called Cargo Cult Science. It has the trappings of science, some outward appearance that reminds us of the form of a scientific investigation, but isn't real science.

As a very non-math person, I was wondering about this. Is the whole "sigma" thing from particle physics being misused? Or is there a legitimate use of the term outside of that field?

I ask because my first exposure to it was at a winery. Seriously. It was called Six Sigma and was owned by a German guy who used to be a big shot at Deutsche Bank and then GE Capital, before retiring and setting up a winery. He explained the sigma ranges as something like process optimization, IIRC. It came from engineering, and he had maybe borrowed it or learned it in finance. Something about being a "sigma certified" specialist, and using the system to optimize and fine-tune processes to create stuff or make decisions. Attaining 6 Sigma meant the process was as perfect and repeatable as it could be and therefore created something perfect all the time. Or something like that. He had a really good Tempranillo. Whether it was 6 Sigma or not, I'll let others judge.

My wife does remember hearing about it as some sort of management training thing. Maybe it's legit, but I wouldn't be surprised if it's a bit of BS with borrowed terminology to give it a veneer of being sciencey.
 
As a very non-math person, I was wondering about this. Is the whole "sigma" thing from particle physics being misused? Or is there a legitimate use of the term outside of that field?
...
My wife does remember hearing about it as some sort of management training thing. Maybe it's legit, but I wouldn't be surprised if it's a bit of BS with borrowed terminology to give it a veneer of being sciencey.
Yeah, it's a management thing: https://en.wikipedia.org/wiki/Six_Sigma
Six Sigma is a set of techniques and tools for process improvement. It was introduced by American engineer Bill Smith while working at Motorola in 1986.[1][2]

Six Sigma strategies seek to improve manufacturing quality by identifying and removing the causes of defects and minimizing variability in manufacturing and business processes. This is done by using empirical and statistical quality management methods and by hiring people who serve as Six Sigma experts. Each Six Sigma project follows a defined methodology and has specific value targets, such as reducing pollution or increasing customer satisfaction.

The term Six Sigma originates from statistical quality control, a reference to the fraction of a normal curve that lies within six standard deviations of the mean, used to represent a defect rate.
There's a whole certification scheme related to process improvement, there's also Lean Six Sigma, and you can work your way up to Lean Six Sigma blackbelt. :rolleyes: It's mostly basic management skills with an engineering gloss, so it gets pushed on people in non-industrial fields, but it's not like you need to go through a systematic analysis to figure out you need a ticketing system to manage service requests.
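
For the curious, the defect-rate numbers behind the name can be checked in a couple of lines (a sketch of mine, not taken from any Six Sigma material; the headline "3.4 defects per million" figure relies on the convention of letting the process mean drift by 1.5 sigma):
Code:
from scipy.stats import norm

raw_six_sigma = 2 * norm.sf(6)      # fraction outside +/- 6 sigma for a perfectly centred process
shifted       = norm.sf(6 - 1.5)    # one-sided tail after the conventional 1.5-sigma shift

print(f"centred process   : {raw_six_sigma * 1e9:.1f} defects per billion")
print(f"with 1.5s shift   : {shifted * 1e6:.1f} defects per million")
# ~2 per billion vs ~3.4 per million: the latter is the figure usually quoted
# in Six Sigma training material.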
 
As a very non-math person, I was wondering about this. Is the whole "sigma" thing from particle physics being misused? Or is there a legitimate use of the term outside of that field?

I ask because my first exposure to it was at a winery. Seriously. It was called Six Sigma and was owned by a German guy who used to be a big shot at Deutsche Bank and then GE Capital, before retiring and setting up a winery. He explained the sigma ranges as something like process optimization, IIRC. It came from engineering, and he had maybe borrowed it or learned it in finance. Something about being a "sigma certified" specialist, and using the system to optimize and fine-tune processes to create stuff or make decisions. Attaining 6 Sigma meant the process was as perfect and repeatable as it could be and therefore created something perfect all the time. Or something like that. He had a really good Tempranillo. Whether it was 6 Sigma or not, I'll let others judge.

My wife does remember hearing about it as some sort of management training thing. Maybe it's legit, but I wouldn't be surprised if it's a bit of BS with borrowed terminology to give it a veneer of being sciencey.
Six Sigma is a specific "management training thing" (process improvement, more specifically), but the name is borrowed from the sigma used in statistics.

Back to the UFO topic, it's being used properly here. The reason particle physics uses such a high bar (5 sigma) is because they perform so many tests. Assuming a null effect and the typical significance threshold of p = .05, the error rate for 1 test will be... 5%. Running 2 tests, however (again assuming no effect), the probability of incorrectly finding a "significant" effect in one or both tests is 1 - .95^2 = 9.75%. Running 3 tests: 1 - .95^3 = 14.3%. Running n tests, the probability of at least one false discovery (the family-wise error rate) is 1 - .95^n. Particle accelerator experiments perform thousands or millions of tests, so the usual .05 threshold would result in false discoveries in basically every experiment. Hence the use of 5 sigma, which corresponds to a significance threshold of about 3 x 10^-7. You can run LOTS of tests while keeping the overall probability of a false discovery low.
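
A small helper of my own to make that concrete; the thresholds are the standard ones, the test counts are arbitrary:
Code:
from scipy.stats import norm

def p_any_false_positive(alpha, n_tests):
    # Chance of at least one false positive across n independent tests
    # when there is genuinely nothing to find.
    return 1 - (1 - alpha) ** n_tests

# The usual p < .05 threshold degrades quickly as tests accumulate:
for n in (1, 2, 3, 100):
    print(f"alpha=0.05, n={n:>9}: {p_any_false_positive(0.05, n):.3f}")

# A 5-sigma threshold (one-sided p ~ 2.9e-7) stays small for thousands of
# tests and only becomes appreciable around a million:
alpha_5sigma = norm.sf(5)
for n in (1_000, 1_000_000):
    print(f"5 sigma,   n={n:>9,}: {p_any_false_positive(alpha_5sigma, n):.4f}")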

Since the UFO people are running their sensors continuously, they too are performing lots of tests, so they likewise need a high-sigma threshold to avoid false discoveries.

Big disclaimer: using 5 sigma isn't a magic wand either. If their methodology is flawed in some other way, the exact sigma level doesn't matter.
 