These "self-debunks" are included as Appendices A and B in the "Initial results..." paper. I like that.Update from UAPx, they've Identified two anomalous events as prosaic..
These "self-debunks" are included as Appendices A and B in the "Initial results..." paper. I like that.Update from UAPx, they've Identified two anomalous events as prosaic..
They have finally published a paper on the expedition and the "Tear in the Sky".

Mick included the paper with his post; this is a new forum page, so it bears repeating.
The actual UAPx report mentions a Quantum Random Noise Generator being used as a baseline.

The paper does not mention it at all. Maybe it's among the "other tools [that] were less useful, and thus they are not listed here".
With a concept like this, it makes sense to bring sources of randomness (such as the cosmic watches) that provide enough randomness to generate the occasional "anomaly".

External Quote:
6. Discussion
In light of the possibilities, our most intriguing event appears almost by definition to be "ambiguous"; changing interpretations change the statistical significance. That has inspired us to recommend a general plan for the field. We suggest (scientific) UAP researchers adopt the following conventions: An ambiguity requiring further study is a coincidence between two or more detectors or data sets at the level of 3σ or more, with a declaration of genuine anomaly requiring (the HEP-inspired) 5σ, combining Eqs. (5) and (6). (HEP = High-Energy Physics.)
Coincidence here is defined as "simultaneity" within the temporal resolution, and spatial when germane. This way, one rigorously quantifies the meaning of extraordinary evidence, in the same way it has been done historically by particle physicists, who have established a very high bar to clear.
The statistical significance must be defined relative to a null hypothesis, in our case accidental coincidence, combined with causally-linked hypotheses, like cosmic rays striking camera pixels.
For cases where significance is difficult to determine, we recommend defining ambiguity based on the number of background events expected, where 1 event is the borderline: e.g., if < 1 event is expected to be near-simultaneous for a particular pair of sensors, but ≥ 1 events are detected, they should each be inspected, as time permits, especially qualitatively ambiguous incidents.
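To put a number on that borderline, here's a minimal sketch (Python standard library only) of the kind of calculation the paper implies, treating accidental coincidences as Poisson-distributed. The background expectation of 0.5 events and the 3 observed coincidences are invented for illustration.

```python
# Sketch: how many sigma is it to see k coincidences when mu accidental
# coincidences are expected over the whole run? (mu and k are made up.)
from math import exp
from statistics import NormalDist

def poisson_sf(k: int, mu: float) -> float:
    """P(X >= k) for X ~ Poisson(mu): one minus the CDF up to k - 1."""
    term, cdf = exp(-mu), 0.0
    for i in range(k):
        cdf += term
        term *= mu / (i + 1)
    return 1 - cdf

mu = 0.5   # accidental coincidences expected in the run (assumed)
k = 3      # coincidences actually observed (assumed)

p = poisson_sf(k, mu)                 # chance of >= k by accident alone
sigma = NormalDist().inv_cdf(1 - p)   # one-sided Gaussian equivalent
print(f"p = {p:.4f}, roughly {sigma:.1f} sigma")
# -> p = 0.0144, roughly 2.2 sigma
```

Note that three events where half of one is expected comes out around 2.2σ by this arithmetic, so "≥ 1 observed when < 1 expected" is a much weaker bar than the 3σ they propose for ambiguity.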
External Quote:
For cases where significance is difficult to determine, we recommend defining ambiguity based on the number of background events expected, where 1 event is the borderline: e.g., if < 1 event is expected to be near-simultaneous for a particular pair of sensors, but ≥ 1 events are detected, they should each be inspected, as time permits, especially qualitatively ambiguous incidents.

2 sigma is the usual 95% confidence, i.e. an event that you'd expect to occur once in 20 trials by random chance. An event that you'd expect to occur once in 1 trial is 0 sigma, nowhere near their proposed 3 sigma standard for ambiguity.
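For anyone who wants to check those conversions, the standard library's NormalDist does the sigma-to-probability arithmetic directly (two-sided here, counting both tails):

```python
# Convert a sigma threshold to a two-sided tail probability and back,
# using only the Python standard library.
from statistics import NormalDist

nd = NormalDist()  # standard normal: mean 0, sigma 1

def sigma_to_p(sigma: float) -> float:
    """P(|X| > sigma) for a standard normal X (two-sided)."""
    return 2 * (1 - nd.cdf(sigma))

def p_to_sigma(p: float) -> float:
    """Sigma threshold whose two-sided tail probability is p."""
    return nd.inv_cdf(1 - p / 2)

for s in (2, 3, 5):
    p = sigma_to_p(s)
    print(f"{s} sigma: p = {p:.2e}  (~1 in {1 / p:,.0f} trials)")
# 2 sigma: p = 4.55e-02  (~1 in 22 trials, the usual "95%" rule of thumb)
# 3 sigma: p = 2.70e-03  (~1 in 370 trials)
# 5 sigma: p = 5.73e-07  (~1 in 1,744,278 trials)

print(f"{p_to_sigma(0.05):.2f}")  # exactly 95% confidence is 1.96 sigma
```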
But to go, "I won a coin flip, that's anomalous" is just weird to me.

It would seem that LOSING a coin flip would be equally anomalous. So everything is anomalous. (If everything is anomalous, is anything anomalous?)
Also, are they conflating/confusing "ambiguous" with "anomalous"?

They're treating a 3 sigma event as "might be anomalous, might not", and that's the ambiguity.
In particle physics, 3 sigma results have tended not to replicate.

Article: particle_physicist on March 15, 2021 at 12:48 pm said:
Experimental particle physicist here.
Several comments:
(1) The three sigma ("evidence for") and five sigma ("discovery of") rules are essentially a particle physics convention. I don't think that other fields of physics are too concerned with that.
(2) They are a useful convention to protect against false positives and also against the fact that many of the uncertainties we deal with are what we call "systematic" in nature, e.g. they have to do with how well we understand our detector and other processes that can contaminate our signals. These systematic uncertainties can easily be underestimated. [...]
(4) Discovery of BSM [physics beyond the Standard Model] would be an extraordinary claim, and extraordinary claims require extraordinary evidence (5 sigma).
Article:
For an approximately normal data set, the values within one standard deviation of the mean account for about 68% of the set; values within two standard deviations account for about 95%; and values within three standard deviations account for about 99.7%.

The article gives the probability for μ ± 5σ as 0.999999426696856 = 1 - 0.00006%.
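The quoted fractions (and the 5σ figure) can be reproduced with math.erf: for a normal distribution, the fraction within k standard deviations of the mean is erf(k / √2).

```python
# Reproduce the 68-95-99.7 rule and the 5-sigma coverage figure.
from math import erf, sqrt

for k in range(1, 6):
    print(f"within {k} sigma: {erf(k / sqrt(2)):.15f}")
# within 1 sigma: 0.682689492137086   (~68%)
# within 2 sigma: 0.954499736103642   (~95%)
# within 3 sigma: 0.997300203936740   (~99.7%)
# within 4 sigma: 0.999936657516334
# within 5 sigma: 0.999999426696856   (the figure quoted above)
```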
Everything being anomalous would be anomalous.
In high energy physics there are some theoretical models that use statistics to evaluate how the observations support each hypothesis. One can't just take that sort of statistical analysis and use it in another field where no theoretical model exists. This looks like an example of what Feynman called Cargo Cult Science. It has the trappings of science, some outward appearance that reminds us of the form of a scientific investigation, but isn't real science.
As a very non-math person, I was wondering about this. Is the whole "sigma" thing from particle physics being misused? Or is there a legitimate use of the term outside of that field?
...
My wife does remember hearing about it as some sort of management training thing. Maybe it's legit, but I wouldn't be surprised if it's a bit of BS with borrowed terminology to give it a veneer of being sciencey.

Yeah, it's a management thing: https://en.wikipedia.org/wiki/Six_Sigma
There's a whole certification scheme related to process improvement; there's also Lean Six Sigma, and you can work your way up to Lean Six Sigma Black Belt.

External Quote:
Six Sigma (6σ) is a set of techniques and tools for process improvement. It was introduced by American engineer Bill Smith while working at Motorola in 1986.[1][2]
Six Sigma strategies seek to improve manufacturing quality by identifying and removing the causes of defects and minimizing variability in manufacturing and business processes. This is done by using empirical and statistical quality management methods and by hiring people who serve as Six Sigma experts. Each Six Sigma project follows a defined methodology and has specific value targets, such as reducing pollution or increasing customer satisfaction.
The term Six Sigma originates from statistical quality control, a reference to the fraction of a normal curve that lies within six standard deviations of the mean, used to represent a defect rate.
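For what it's worth, the raw normal-curve number behind the name is easy to compute. (Six Sigma practice conventionally applies a 1.5σ shift before quoting defect rates, which is where the famous 3.4 defects per million figure comes from; the unshifted figure is far smaller.)

```python
# Fraction of a normal curve falling outside +/- 6 standard deviations,
# the idealized defect rate the "Six Sigma" name alludes to.
from math import erf, sqrt

outside = 1 - erf(6 / sqrt(2))   # two-sided tail beyond 6 sigma
print(f"{outside:.1e}")          # ~2.0e-09, about 2 defects per billion
```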
Six Sigma is a specific "management training thing" (process improvement, more specifically), but the name is inspired by the sigma referred to in stats.
I ask because my first exposure to it was at a winery. Seriously. It was called Six Sigma and was owned by a German guy who used to be a big shot at Deutsche Bank and then GE Capital before retiring and setting up a winery. He explained the sigma ranges as something like process optimization, IIRC. It came from engineering; he had maybe borrowed it or learned it in finance. Something about being a "sigma certified" specialist, and using the system to optimize and fine-tune processes to create stuff or make decisions. Attaining 6 Sigma meant the process was as perfect and repeatable as it could be and therefore creates something perfect all the time. Or something like that. He had a really good Tempranillo. Whether it was 6 Sigma or not, I'll let others judge.