"The UAP assessment matrix"

jdog

A paper in the journal Acta Astronautica titled "The UAP assessment matrix: A framework for evaluating evidence and understanding regarding Unidentified Anomalous Phenomena" offers "an assessment matrix that allows a rigorous appraisal of any given UAP case."

The lead author is Tim Lomas of the Harvard School of Public Health, whose work on UAPs, the possibility of cryptoterrestrials, and the possibility of extraterrestrial visitors has been mentioned in this forum several times. (Though most of his publications appear to be on global wellbeing and happiness.)
External Quote:
Over recent years the issue of Unidentified Anomalous Phenomena (UAP) has increasingly captivated attention and even concern, as reflected in the US military establishing a UAP Task Force in 2020. By their very definition however, such phenomena present an epistemological challenge to observers and analysts, raising questions such as what does it mean for something to be unidentified or anomalous, and relatedly, what kind of evidence and understanding would it take for the phenomenon to become identified and explained. This paper aims to help address these issues by providing a UAP Assessment Matrix that would allow observers to appraise a given UAP event/case, featuring two main dimensions: evidence (i.e., the quality of the data pertaining to it); and understanding (i.e., the extent to which the data align with various theories and explanations). Moreover, both dimensions feature numerous sub-dimensions (which is what makes the framework a matrix), allowing more nuanced and fine-grained assessments to be made. We also demonstrate the matrix using a little-known but significant UAP case study from 1953. The matrix will ideally provide a foundation for more rigorous and considered analyses of UAP events and stimulate further understanding of this vitally important topic.

The basic 2-dimensional state space underpinning the UAP-AM.

On first skim, the assessment system seems to give great deference to a witness's immediate perceptions and offers a gish-gallopy dismissal of skepticism: "This important point is often overlooked by people who seem to assume that just because they don't have access to the data then it doesn't exist."

Plus this weird line about judging evidence: "While there are of course intellectual and practical overlaps between scientific and criminalistic endeavours, there are important differences, and engaging with the UAP topic seems closer in spirit to the latter."

The matrix is applied to assess a 1953 sighting reported by Kelly Johnson and other Lockheed personnel, in which Johnson reported seeing what looked like a fast-moving "saucer" over the Pacific. "The object, even in the glasses, appeared black and distinct, but I could make out no detail, as I was looking toward the setting sun, which was, of course, below the horizon at the time." (No effort is made in the assessment to determine whether the phenomenon might have had a more prosaic meteorological explanation.)
External Quote:
Although the eye-witness accounts are essentially of the highest quality (indeed, it is hard to conceive of better witnesses), the case received scores of 0 for the other types of evidence, which dragged the mean down considerably. Speaking reflectively and honestly, we are unsure if this "skewing" represents a limitation/flaw with the matrix, or conversely, whether it is actually an appropriate result. On one hand, it seems unfortunate that a case that involves the highest quality eye-witness testimony should be dragged down simply because other types of data are missing. On the other hand, it may well be fair for cases relying on eye-witness testimony alone to be judged relatively poorly, leaving room for cases with multiple lines of evidence to receive far higher scores.
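The "skewing" the authors puzzle over is just arithmetic: one top score averaged with a row of zeros. A minimal sketch (the category names and the 0–7 scale here are hypothetical illustrations, not the paper's exact rubric):

```python
# Hypothetical evidence rubric: assume a 0-7 scale across several
# evidence categories (these categories are illustrative, not the paper's).
def mean_evidence_score(scores):
    """Average the per-category evidence scores; absent data counts as 0."""
    return sum(scores.values()) / len(scores)

# Hypothetical scoring of the 1953 Lockheed case: top marks for
# eyewitness testimony, zero for every other evidence type.
case = {
    "eyewitness": 7,
    "photographic": 0,
    "radar": 0,
    "physical": 0,
    "documentary": 0,
}
print(mean_evidence_score(case))  # the zeros drag the mean down to 1.4
```

With five categories and only one populated, even a perfect eyewitness score yields a mean of 1.4 out of 7, which is the effect the authors describe.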

Of note, the first reference cited is Jacques Vallée's "Dimensions: a casebook of alien contact"; another is an op-ed by Christopher Mellon on the Fermi paradox. Most of the references are to government UAP reports, news stories about them, and classic UFO reports, with only a handful of citations of prior scholarly research.
 

On page 3 they are already misrepresenting some things.

External Quote:
Speaking in May 2023, Dr Sean Kirkpatrick, AARO's Director at the time (who left the post in December 2023), said he suspected most events did have conventional explanations, and only remained unidentified through lack of good data. However, around 2–5 % were "possibly really anomalous" (Wendling, 2023). Similarly, in a briefing at the Pentagon on November 14th, 2024, to accompany the latest report, AARO's new director, Dr Jon Kosloski, said, "we're focusing on the truly anomalous where we don't understand the activity," noting that "there are interesting cases that I — with my physics and engineering background and time in the [intelligence community] — I do not understand and I don't know anybody else who understands" (Vincent, 2024). Providing a little more detail/context, in an interview with PBS NewsHour (2025) in May 2025, Kosloski said that of the "little over 1,800 cases" that AARO current has in its holdings, about 2 % have "sufficient scientific evidence that we can conduct a thorough investigation and after that investigation still remain anomalous." For such cases, he suggested they "send it off to our scientific and intelligence partners for a very thorough analysis that generally takes on the order of months, and sometimes we don't even close those cases, we have to sort of keep them on the shelf always looking for additional data to enrich them because the phenomena are so perplexing." A video of one such case, for example, was publicly shared by (All-domain Anomaly Resolution Office, 2025) on X in May 2025, with the accompanying text: "Eight minutes and fifteen seconds of video footage was captured by an infrared sensor aboard a U.S. platform in the Middle East in 2023 and later reported to AARO as UAP. The report remains unresolved as the available data does not support a conclusive analytic evaluation."
The case referred to at the end (ME 2023) explicitly says that it should not be interpreted as having been assessed to be anomalous:
External Quote:
Viewers should not interpret any part of the video description below as reflecting an analytic judgment, investigative conclusion, or factual determination regarding the described event's validity, nature, or significance. Viewers should not interpret the absence of a formal assessment as suggestive of anomalous characteristics.
Source: https://www.dvidshub.net//video/961723/unresolved-uap-report-middle-east-2023

The paper authors are trying to present yet-unresolved cases as proof of "truly anomalous", unexplainable things. In the earlier passage, Kosloski even says that AARO sends the 2% of cases it can't explain to other experts, and that "sometimes we don't even close those cases", implying that some do get resolved by those external experts. Meaning you cannot interpret AARO merely being unable to conclusively resolve a case as implying that the case is evidence for something truly anomalous and unresolvable in principle.
 
So would the center of the vertical line correspond with "lacking high quality evidence one way or another" -- which is where most cases seem to fall, at least initially? Meaning the "low information zone" cases...

If not, the graph would seem to need one more dimension, I'd think, to differentiate whether or not there even IS evidence from what the evidence points towards.
 
A paper in the journal Acta Astronautica titled "The UAP assessment matrix: A framework for evaluating evidence and understanding regarding Unidentified Anomalous Phenomena" offers "an assessment matrix that allows a rigorous appraisal of any given UAP case."

The lead author is Tim Lomas of the Harvard School of Public Health, whose work on UAPs, the possibility of cryptoterrestrials, and the possibility of extraterrestrial visitors has been mentioned in this forum several times. (Though most of his publications appear to be on global wellbeing and happiness.)
The fact that "lore" is entirely inside the "supported" side of the graph and entirely outside the "refuted" side tells me he has no interest in representing things honestly at all.
 
On page 3 they are already misrepresenting some things.

Indeed. Another example, from page 4:
External Quote:
Similarly, in November 2024, Senator Kirsten Gillibrand, one of just six senators on both the Intelligence and Armed Services committees, said of at least some UAP: "We don't know whose they are. We don't know what propulsion they use. We don't know the tech … It's not off the shelf stuff… This is a body of stuff nobody knows what it is" (Laslo, 2024).
"it's not off the shelf stuff" is a positive assertion (which of course is never supported by evidence that conclusively excludes the mundane) which is incompatible with the weasel wording of "we don't know" or "nobody knows".
 
The fact that "lore" is entirely inside the "supported" side of the graph and entirely outside the "refuted" side tells me he has no interest in representing things honestly at all.

Yeah, I had to go back and look again. At first I thought, if the center point sorta equals zero, then I suppose "lore" and "anonymous sources" would be near zero as relevant to "high quality evidence". But even if that argument is made, note how "expert opinion" is lower than a "credible eyewitness" and "documentary evidence", which might include primary sources, is lower than "circumstantial evidence" and even "unverified data" :confused:.

And do these terms have some sort of reciprocal going down from the center point? Is "lore" complemented by "myth" when moving towards refutation? Does "unverified data" become "verified data" or maybe "statistically relevant data"?




Also, the "rejection" blob falls heavily into the "requires new paradigms" quadrant, while it should completely fill the opposite quad. A claim that is "refuted by high quality evidence" AND "complies with current understanding" is obvious grounds for rejection. Instead, this is a suggestion that using "high quality evidence" to refute a claim is more about rejecting "new paradigms". It's close minded.

I suppose it could be reworked a bit, but what's the point? The paper and this matrix are a quasi-scientific attempt to scientifically reject scientific analysis of UAP in favor of a more judicial or legalistic framework. I think the example is something like glyphosate. There is limited scientific research showing a strong case for glyphosate causing cancer, but a jury of 12 people said it did, so the science is wrong. Or, in the wording of the matrix, it refuses to accept a "new paradigm".

It's ultimately about lowering the scientific bar for the acceptance of aliens.
 
The paper and this matrix are a quasi-scientific attempt to scientifically reject scientific analysis of UAP in favor of a more judicial or legalistic framework.
I agree -- and as such it may not actually matter where things appear on the graph/matrix. It is not intended to convey information, so exact placement of things was not chosen to convey information. It is, I believe, just intended to look scientifical... Reminds me of the famous Venn diagram:

Unintentional-Venn-Diagram-400x250.png

Looks nice in a presentation, unless you actually try to extract information from it, in which case it suggests that trust, partnership, innovation and performance are pretty much excluded from "our values."
 
But even if that argument is made, note how "expert opinion" is lower than a "credible eyewitness" and "documentary evidence", which might include primary sources, is lower than "circumstantial evidence" and even "unverified data" :confused:.
I read "expert opinion" as "what a UFO expert would say", and I'm quite happy with it being rated low.
"Documentary evidence" might something like a police report? Rather than relying on old witness memories, it was documented at the time. The Rendlesham tape would be documentary evidence.

Also, the "rejection" blob falls heavily into the "requires new paradigms" quadrant, while it should completely fill the opposite quad. A claim that is "refuted by high quality evidence" AND "complies with current understanding" is obvious grounds for rejection. Instead, this is a suggestion that using "high quality evidence" to refute a claim is more about rejecting "new paradigms". It's close minded.
Not really. A mirror copy of the spectrum from "Lore" to "Data" should be imagined in the bottom half of the vertical axis. The end point of that axis is "refuted by high quality evidence", and it refutes universally, just like you expect it to.
 
The value of "expert opinion" depends on matching the specifics of the event under review to a relevant set of experts. More that one might need to be consults in the name of due diligence.

"I was on a boat so I asked an expert fishing guide." Not so much.

The fact that this and other elements of the chart are ambiguous leads me to support @NorCal Dave's interpretation.
 
The bottom of the graph says "Refuted by high quality evidence". Doesn't that imply you have proof of what it actually is, and it's not a flying saucer?

Shouldn't the bottom also include "Total lack of any evidence whatsoever"?
 
Also, friendly reminder that millions of paywalled research articles can be accessed through sci-hub. Not linking directly due to murky legality but just google "sci-hub" if you can't find a full paper and paste in the DOI link.

Researchgate also lets you sign up and request full versions from the authors. I have used it before for a stats paper, and the author responded with the requested article and several others that were relevant and more recent!

If the above fails, you can usually find a researcher's public email on their websites and ask directly. This is generally encouraged by researchers themselves (in case anyone is worried about this being socially inappropriate).
 
If "Lore" qualifies as evidence, then no. That's part of the problem here.

I think UFO proponents love the idea of science, but not the requirements. When Einstein put forth his ideas, they were debated and tested. Through numerous experiments over time it was shown that his ideas explained things about how the universe worked in a predictable and repeatable way.

In the UFO world, there are little to no ideas that result in a repeatable and predictable theory. The strongest evidence consists of observations caught on camera that MIGHT show some anomalous behavior, like TIC-TAC and GO FAST, and even then there is nothing definitive. At best it's unknown. After that it's a collection of observations often followed by rumors and claims far removed from the source. I've been watching a bit of the UAP Disclosure Fund presentation (where Elizondo showed the crop circle photos) and so far, it's a lot of "I know these things, because people told me about them". There is no actual, testable evidence presented so far.

So, instead of trying to fit "lore", stories, hearsay and some ambiguous videos into a rigorous scientific model, as they have for years, Lomas is proposing that it's not needed. Simply using the preponderance of evidence for UAP should be enough to convince enough people (the jury as it were), regardless of what science says.
 
If "Lore" qualifies as evidence, then no. That's part of the problem here.
but...Atlantis...

So, instead of trying to fit "lore", stories, hearsay and some ambiguous videos into a rigorous scientific model, as they have for years, Lomas is proposing that it's not needed. Simply using the preponderance of evidence for UAP should be enough to convince enough people (the jury as it were), regardless of what science says.
In the matrix, "what science says" is on the horizontal axis.
The upper right corner, "ontological shock", means "rewrite science", and they want to do that based on "unverified data", "circumstantial evidence" or worse, when the actual science has libraries full of reproducible data and experiments going for it.

That also makes it quite obvious that the "spectrum of evidence" on the vertical is useless, as there's a conflict of evidence for and against that needs to be weighed and compared to establish preponderance. And most of that evidence supports "current understanding", which the diagram shows as a separate axis unrelated to evidence.
 