In experimental psychology it's commonly expected that around 5 percent of the data you collect during an experiment will be bogus, because of the quirky nature of human psychology and the difficulties of collecting data from human subjects.
An important corollary of this is that if you have a large number of studies, each set at a confidence level of, say, 95%, you can reasonably expect about 5% of the studies of a true null hypothesis to accept a false conclusion: the null hypothesis is rejected, and the alternative accepted, when in fact that outcome isn't an accurate description of the objective reality (a Type I error, or false positive).
Why is this important to the average Joe or Julia? Think of all the studies into the efficacy of homeopathy. Loads of 'em.
It is inevitable (or at least highly likely) that some of these studies, even if conducted honestly and competently, will conclude that the null hypothesis can be rejected, i.e. that the homeopathic "remedy" being tested has some beneficial effect, even though this isn't the case.
And it is this minority of positive findings that homeopathy advocates always cite.
The same goes for other "contentious but interesting" fields of study, e.g. ESP testing using Zener cards. Do enough studies and you'll get some false positives, which will get popular media attention (whereas "null" findings are never reported).
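The arithmetic behind this is easy to check for yourself. Here's a minimal sketch (my own illustration, not from any real study data) that simulates many two-group trials of a "remedy" with no real effect, tests each at the 5% significance level, and counts how many look "significant" purely by chance. All the names and parameters (n_subjects, alpha, and so on) are made up for the demonstration.

```python
import random
import statistics

random.seed(42)  # reproducible illustration

def one_study(n_subjects=30):
    """Simulate one two-group study where the 'remedy' truly does nothing.

    Both groups are drawn from the same distribution, so any 'significant'
    difference between them is a false positive (Type I error). We use a
    simple z-test on the difference of sample means.
    """
    control = [random.gauss(0, 1) for _ in range(n_subjects)]
    treated = [random.gauss(0, 1) for _ in range(n_subjects)]  # same distribution!
    diff = statistics.mean(treated) - statistics.mean(control)
    # Standard error of the difference of two means (known unit variance)
    se = (2 / n_subjects) ** 0.5
    z = diff / se
    return abs(z) > 1.96  # two-sided test at the 5% level

n_studies = 1000
false_positives = sum(one_study() for _ in range(n_studies))
print(f"{false_positives} of {n_studies} null studies looked 'significant'")
```

Run this and you'll see somewhere in the neighbourhood of 50 out of 1000 studies "finding" an effect that isn't there — and those are exactly the ones that get cited and reported.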
The scientist's notion (the thing he or she wants to be true) is called the alternative hypothesis.
Ideally, the scientist doesn't want the alternative (or experimental) hypothesis to be true: she or he dispassionately attempts to falsify it without any preconception of the outcome... but human nature being what it is, of course researchers tend to be invested in "their" hypothesis. I guess the whole edifice of Popperian hypothesis testing is there to "keep us honest" and reduce (primarily) Type I errors.
It's descriptive research, not hypothesis testing.
It might reflect a bias (TBH, probably a prejudice) of mine, but I think hypothesis testing, or meta-studies of tested hypotheses, is the "gold standard" of scientific research; it's difficult, though, to see how a quantitative model can be applied in the area of UAP reports.
This is partly due to the massive inconsistency of sightings: after thousands of them, over 76 years (!), we don't know where the next one will be or what its flight pattern will be, and the more detailed the description, the less likely it is to resemble a previous sighting (notwithstanding the generic saucer- or cigar-shaped objects, then triangles, now Tic Tacs).
All we can predict is that it will impart no novel information to the witness (I'm not convinced by Betty Hill's map), it will make no attempt to communicate via radio, it will leave no unambiguous physical evidence of ever having existed, and any "crew" will not be photographed or filmed.
I guess the F-18 FLIR records (and UAV footage of "spheres" in Iraq) are rich in checkable data, but I get the impression that Mick West (with the assistance of Mendel and many others here) has done more to analyse these, and come up with realistic explanations in the public realm, than any government agency, academic body, commercial interest or "Ufology" group that I'm aware of.
Where reports of UAPs are unaccompanied by such data, as with uncorroborated eyewitness reports, we are left with the same old problem: it's a matter of belief that the witness is honest and is accurately reporting what they saw, and that what they saw was a physical object or light source that any other competent observer would describe in a similar way.