Transients in the Palomar Observatory Sky Survey

Mick West



Some Transients in the Palomar Observatory Sky Survey (POSS-I) May Be Associated with Above-Ground Nuclear Testing and Reports of Unidentified Anomalous Phenomena

https://www.researchsquare.com/article/rs-6347224/v1
External Quote:

Transient star-like objects of unknown origin have been identified in the first Palomar Observatory Sky Survey (POSS-I) conducted prior to the first artificial satellite. We tested speculative hypotheses that some transients are related to nuclear weapons testing or unidentified anomalous phenomena (UAP) reports. A dataset comprising daily data (November 19, 1949 - April 28, 1957) regarding identified transients, nuclear testing, and UAP reports was created (n=2,718 days). Results revealed significant (p = .008) associations between nuclear testing and observed transients, with transients 45% more likely on dates within +/- 1 day of nuclear testing. Significant (p<.001) associations were also noted between total number of transients and total independent UAP reports per date, with the largest association observed for dates on which at least one transient was identified (Spearman's rho = 0.14, p = 0.015). For every additional UAP reported on a given date, there was an 8.5% increase in number of transients identified. Small but significant (p = .008) associations between nuclear testing and number of UAP reports were also noted. Findings suggest associations beyond chance between occurrence of transients and both nuclear testing and UAP reports. These findings may help elucidate the nature of POSS-I transients and strengthen empirical support for the UAP phenomenon.

On the Image Profiles of Transients in the Palomar Sky Survey

https://arxiv.org/abs/2507.15896
External Quote:
The VASCO project has discovered groups of short-lived transients on historical photographic plates that lack conventional explanation. Hambly & Blair (2024) examined nine such transients reported by Villarroel (2021) and found that they exhibit narrower, rounder profiles, attributing this to emulsion flaws. However, well-established optical principles and atmospheric physics imply that unresolved flashes lasting less than a second naturally appear sharper and more circular than stellar images, particularly on long-exposure plates where stars are significantly blurred by seeing and tracking errors. Such profiles are an expected consequence of sub-second optical flashes, making their findings consistent with the transient interpretation.
This is quite popular in UFO circles, as supposed evidence of pre-space era satellites or spacecraft that correlate with upticks in UFO sightings.

I'm skeptical.


See also this thread on more detailed analysis.
https://www.metabunk.org/threads/digitized-sky-survey-poss-1.14385/
 

Attachments

I'm skeptical.
I think that negative sign is enough to arouse skepticism:
External Quote:

on dates within +/- 1 day of nuclear testing
The day after nuclear tests is a cautious "maybe". The day before such tests is a "you've got to be kidding me", unless there is a serious question about the date on which the tests took place (or there is significant confusion about the international dateline). But not clarifying that point is sloppy science, at the very least.
 
I am curious what the correlation is supposed to BE.
Aliens visit when nuke tests happen? Where are they on other days?

I would also be curious about correlations between the days when tests happened and the days when the sky survey photos were made. A preference for weekends, or Fridays, or hours of daylight (time of year), or some other linkage?
 
The response to a previous paper claiming the transients represent real objects made some interesting points (it's actually noted in the second paper, "On the Image Profiles of Transients in the Palomar Sky Survey.")

In On the nature of apparent transient sources on the National Geographic Society–Palomar Observatory Sky Survey glass copy plates, Hambly & Blair write that "the putative transients are likely to be spurious artefacts of the photographic emulsion. We suggest a possible cause of the appearance of these images as resulting from the copying procedure employed to disseminate glass copy survey atlas sets in the era before large-scale digitization programmes."

The Villarroel reply snarkily dismisses this critique, saying the results "elud[e] conventional explanation" but are consistent with their theory of fraction-of-a-second flashes from... somethings.

I'm not particularly familiar with film astrophotography, but it looks like their work could benefit from someone with expertise in the process. We're talking about plates that were exposed for 20 or 40 minutes at a time while tracking the motion of the night sky, so which is more likely: that there were flaws in the plates or emulsions (the existence of at least some of which has been verified), or that there were very bright, instantaneous flashes of light from objects in or near Earth orbit in Earth's shadow cone?

The mechanism for the reflections also seems questionable. The theory is that the UAPs were reflective and in geosynchronous orbit, so the sun might have reflected off them; that puts them at ~22,000 miles altitude, but they'd also have to be outside Earth's shadow cone. And a quick skim of the paper doesn't show any effort to determine whether the observed transients would have been in the required parts of the sky to generate reflections, let alone from geosynchronous orbits. Plus, that seems like a less effective orbit for making critical observations.
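For scale, the shadow-cone geometry is easy to sketch with standard textbook values; this is my own back-of-envelope, not a calculation from the paper:

```python
import math

# Rough geometry of Earth's umbra at geosynchronous altitude.
# All distances in km; standard reference values, not taken from the paper.
R_EARTH = 6371.0        # mean Earth radius
R_SUN = 696_000.0       # solar radius
D_SUN = 1.496e8         # mean Earth-Sun distance
R_GEO = 42_164.0        # geosynchronous orbital radius (from Earth's center)

def umbra_radius(L):
    # The umbra is a cone that narrows with distance behind Earth;
    # this is its radius at distance L along the anti-solar axis.
    return R_EARTH - L * (R_SUN - R_EARTH) / D_SUN

r_shadow = umbra_radius(R_GEO)          # umbra radius at GEO distance
half_angle = math.degrees(math.asin(r_shadow / R_GEO))

print(f"umbra radius at GEO distance: {r_shadow:.0f} km")
print(f"angular half-width of the shadow seen from Earth's center: {half_angle:.1f} deg")
# A GEO object is eclipsed only while passing through this roughly
# 8-9 degree cone around the anti-solar point, a small fraction of its orbit.
```

So an object at geosynchronous distance is sunlit most of the time, which is why checking where each transient sat relative to the anti-solar point matters for the reflection hypothesis.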

There's also a lot of hand-waving, without any particular science to justify things like the +/- one day rule. (With 124 above-ground nuclear tests occurring over their chosen 2,718 days, that's one test every ~22 days, or something like one chance in 7 that any particular transient would be associated with a test. A quick skim of the U.S. nuclear test dates shows that most tests were on weekdays, which further de-randomizes when things happened.)
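That arithmetic can be sketched out directly, using the 124 tests mentioned above and the paper's 2,718-day window; the assumption that no two test windows overlap (which makes this an upper bound) is mine:

```python
# Back-of-envelope: how often does a random date fall within +/- 1 day of a
# nuclear test? Assumes 124 above-ground tests over the paper's 2,718-day
# window, with no two 3-day test windows overlapping (an upper bound).
n_tests = 124
n_days = 2718

window = 3                      # test day itself plus one day either side
covered = n_tests * window      # at most this many days count as "associated"
p_chance = covered / n_days

print(f"at most {covered} of {n_days} days covered "
      f"({p_chance:.1%}, roughly 1 in {round(1 / p_chance)})")
```

In other words, about one date in seven is "near a test" even before looking at a single plate.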
 
there were flaws in the plates or emulsions (the existence of at least some of which has been verified)
But a quick skim of the paper doesn't show any effort to determine if the observed transients would have been in the required parts of the sky to generate reflections, let alone from geosynchronous orbits.
They address similar points in a different paper. No idea if they used the same techniques here. You can find that one here. I personally find it more interesting than the other ones posted.

Most notably these lines in the abstract
We also find a highly significant (∼22σ) deficit of transients from Solano et al. 2022 within Earth's shadow, supporting the interpretation that sunlight reflection plays a key role in producing these events. This study should be viewed as an initial exploration into the potential of archival photographic surveys to reveal transient phenomena, and we hope it motivates more systematic searches across historical datasets.
And this excerpt from the conclusion
The origin of the transients remains unknown. One plausible explanation is that they are caused by brief light emissions from artificial objects in orbit or by objects with anomalous movements in Earth's atmosphere—emissions so brief that they appear as point sources rather than streaks, despite the telescope tracking the stars. Alternatively, they could arise from solar reflections off flat, highly reflective surfaces at geosynchronous altitudes. The latter interpretation is further supported by our shadow test in Section 8, which reveals a significant deficit of such events within the Earth's umbra, consistent with a solar reflection origin and difficult to reconcile with many explanations, including photographic plate defects.
 
@jdog

I am also missing more details about what they actually observed, and based their statistics on. Strange that the paper does not mention it.
 
My five cents: first impression, respected affiliations on the author list, the paper looks great, and it's a fun and curious read/skim.
But still, it's dots on old photos of the sky inside a highly chosen framework, i.e. assumptions and formulas, e.g. for arguing against false positives.
So while it argues against noise and towards "something unknown", the uncertainty remains in any case, and it stays more speculative than conclusive IMO. I would recommend you upload the paper to an AI and judge for yourself; one example query:

"Do a scientific review of this paper, the data and methodology used, for example the total sample size, realistic noise and assumptions. Give a brief review and an accept/reject for a scientific conference."

The paper also points to an earlier argument about noise (Hambly+Blair@2024). IMO it's just too (UFO-fan-)biased and sensational; we are talking about old dots and whether they're noise or not, not figuring out the expansion of the universe or "changing how we look at the night sky".
 
So while it argues against noise and towards "something unknown", the uncertainty remains in any case, and it stays more speculative than conclusive IMO.
I'm curious here. Were there any observatories around this time that would have an overlapping view of the sky?

If other images taken at the same time fail to show anything, then it would be safe to assume that what they found is just sensor noise.
 
I'm curious here. Were there any observatories around this time that would have an overlapping view of the sky?

If other images taken at the same time fail to show anything, then it would be safe to assume that what they found is just sensor noise.
I read up a bit on the process the other day; there's a good summary on Wikipedia at https://en.wikipedia.org/wiki/National_Geographic_Society_–_Palomar_Observatory_Sky_Survey, though it's not comprehensive.

The Palomar Observatory Sky Survey (or POSS I, since there was a second survey using the same telescope -- with upgrades -- in the 1980s and 1990s) was the first major comprehensive photographic sky survey, so apparently no one else was doing similar work at the time or had the same technology. The method consisted of shooting two 14-inch photographic plates of each 6-degree section of the sky (visible from Palomar) using the 48-inch Palomar Schmidt telescope; one plate with red-sensitive emulsion, then the second with blue-sensitive emulsion, to get color data. This particular telescope is only a camera, unlike other telescopes at Palomar. (This took place night after night, with interruptions, over 9 years.)

One type of exposure took 20 minutes, the other took 40, so each section of sky took at least an hour, with the telescope continuously adjusting to follow the same patch of sky for the duration. The target areas also overlapped, so some parts of the sky would have been imaged twice within an hour, which is apparently what the researchers were looking at.

Duplicates were made. Prints were distributed in books and became a comprehensive catalog of the sky. Eventually the images were digitally scanned, and those digital scans were the subject of the Villarroel paper(s). Those scans can also be checked against copies of the original plates, which are available in various archives; that is what the first set of critics looked at to raise doubts about the nature of the transients reported in the earlier Villarroel paper. There are apparently fairly distinct differences between the photographic exposures of distant stars and galaxies and other dots on the plates, though the emulsions, the developing, and the reproduction processes are not perfect.
 
One type of exposure took 20 minutes, the other took 40, so each section of sky took at least an hour, with the telescope continuously adjusting to follow the same patch of sky for the duration. The target areas also overlapped, so some parts of the sky would have been imaged twice within an hour, which is apparently what the researchers were looking at.
So were they looking for spots that appeared during one exposure but not the other?
 
"Aligned, Multiple-Transient Events in the First Palomar Sky Survey" could do with its own thread.
@boguesuser linked it here in post #53. Seems on topic here I think?

An obvious issue with it (although I haven't read it yet!) is that the number and capability of telescopes has increased dramatically since the 1950s.
There are powerful space-tracking radars with digital filters that didn't exist in the 1950s.
Radio astronomy is massively more powerful than it was in the 1950s.

So why would anyone use 1950s materials as a data source? Were they unable to get funding (including from the Sol Foundation) to do their own sky survey with modern equipment?
Their selection of source seems designed to foster their argument, something along the lines of: these images are from before there were satellites, so if they can be demonstrated to show objects in space, then it must be aliens. It feels like a plausible, intuitive argument on the surface (perhaps that's the point), but when you start to examine the second premise it gets into a lot of uncertainty. Like, they can't rule out any of the possible mundane things it could be (plate emulsion irregularities, dirt on the lens, airplanes, meteors); they can just say they don't think it's any of those things. But then that leaves them no better off than using modern surveys and saying they don't think it (a hypothetical modern transient) is a satellite. I suppose the advantage of using the old survey is they get to retain the uncertainty to tuck the hopeful "but could be aliens" inside of, whereas with a modern survey we could identify the objects with a high degree of certainty if they are mundane.
 
This paper can only be peer reviewed by the authors' peers; that has not been done yet, nor is it published in a journal.

This kind of thing is very hard for a community like Metabunk to address. When a science paper written by someone who is seemingly a subject matter expert emerges with startling conclusions, dressed in the words and techniques of that science, of course we are sceptical, both because of the conclusions and the apparent lack of scientific consensus, but we lack the deep knowledge needed to provide peer review.

I would generally expect astronomers to have a paper with conclusions like these peer reviewed and checked over by their peers a lot before releasing it via a reputable journal, rather than releasing it on ResearchGate and having it pushed through UFO circles.
 
In the Multi camera array for detection of thingies in orbit thread (I hadn't noticed this thread) I jotted down some stuff that I think is relevant here.

"Aligned, multiple-transient events in the First Palomar Sky Survey",
Beatriz Villarroel, Enrique Solano, Hichem Guergouri, Alina Streblyanska et al., dated as "Preprint July 2025", available via ResearchGate https://www.researchgate.net/public...nsient_events_in_the_First_Palomar_Sky_Survey,
PDF attached.

Beatriz Villarroel is a member of Gary Nolan's Sol Foundation:

https://thesolfoundation.org/people/beatriz-villarroel/
She leads the Vanishing & Appearing Sources during a Century of Observations (VASCO) project (www.vascoproject.org) and the EXOPROBE project. The VASCO project searches for vanishing stars with the help of automated methods as well as a citizen science project. Among the most interesting results from the VASCO project are the findings of anomalous "multiple transients" of unknown origin. EXOPROBE, that is a recently launched project, aims to find and locate an extraterrestrial probe in the Solar System.

The "external quote" below is actually a chunk of my post #7 in the "Multi camera array for detection of thingies in orbit" thread; I hope this is acceptable.

External Quote:

So why would anyone use 1950s materials as a data source?
If I remember correctly, it was because it was before the space race and there was theoretically no human made space junk.

Ah. That sort of makes sense, but we do have radar tracking near-Earth objects down to about 10cm / 4 inches size now.
This series of visualizations illustrates the population of objects orbiting Earth as of February 2024. The data comes from United States Space Command (USSPACECOM), via space-track.org, which maintains a publicly available catalog of trackable objects in space. These include active satellites, defunct spacecraft, rocket bodies, and debris fragments larger than roughly 10 cm in low Earth orbit.
NASA Scientific Visualization Studio, "Tracking Satellites and Space Debris in Earth Orbit (Feb 2024)", https://svs.gsfc.nasa.gov/5258
(Satellite-sized objects out to at least geosynchronous orbit are also tracked).

So we have vastly better means of detecting objects in Earth orbit than existed in the 1950s- and they are not dependent on reflecting sunlight, as Beatriz theorizes her transients are:

A striking clue emerged from their analysis: a significant lack of transients in regions of the sky within the Earth's shadow, suggesting that solar reflection plays a crucial role in these events.
This observation supports the theory that the transients may be caused by short, subsecondary flashes from objects illuminated by the Sun, possibly in orbits similar to those used by modern satellites.
Newsvoice, [Swedish language], "Study: Flashes of light in 1950s sky hint at possible artificial origin", 01 August 2025, https://newsvoice.se/2025/08/ljusblixtar-pa-1950-talet/; the published paper is in English.
[Being pedantic, the authors mean "subsecond", not "subsecondary".]

Our current means- chiefly radar- can detect in all directions.
Most of this information is publicly available (or at least available to serious researchers), meaning that Beatriz and her colleagues could use it to rule out known objects in Earth orbit.

So they could have- funding allowing- conducted a new sky survey, using greatly superior equipment, while still excluding known orbiting objects.
They know this, but instead chose to review 1940s/ 1950s photographic images.
... ...

The authors write

This study should be viewed as an initial exploration into the potential of archival photographic surveys to reveal transient phenomena, and we hope it motivates more systematic searches across historical datasets
Hang on... they claim their research raises the possibility that artefacts, of extraterrestrial intelligence origin, were orbiting Earth (or otherwise in close proximity to Earth) in the late 1940s/ 1950s.
But they don't want to prioritize looking to see if there are any there now?
They "...hope it motivates more systematic searches across historical datasets"?

Why are they not proposing, as a matter of urgency, that the much better telescopes and radar we have now are used in a survey of near-Earth space? Why not ask the Sol Foundation to fund such an enterprise?
Why is promoting examination of historical (i.e. resulting from the use of older technology) datasets their priority?
I think that last question might be semi-rhetorical. Looking at noisy, 65+ year-old photographs might render "findings" in line with the authors' wishes.
It's very hard to prove that there wasn't an alien space probe near Earth in 1956, but now we might find evidence for it...

The authors might be aware that more recent sky surveys, using more modern techniques and superior equipment, might not be such fertile ground for finding "transients".
And they probably have a good reason why radar doesn't detect transient visitors (perhaps they're stealthy- apart from being highly reflective).
 

Attachments

So were they looking for spots that appeared during one exposure but not the other?
This 2025 paper says it uses the data for transients from their 2022 paper, Discovering vanishing objects in POSS I red images using the Virtual Observatory, which "aims at performing an automated search for vanishing object using the digitized plates of the First Palomar Sky Survey," identifying so-called transients using the red plate images.

The conclusions for the 2025 paper are largely associative -- comparing the dates of the plates when the transient images were recorded against
a) above-ground nuclear test dates as reported on three web pages, plus or minus 1 day,
and
b) reports of UFO sightings (the paper uses the modern UAP term, but the database is vintage) from UFOCAT, a database of UFO sightings begun in the 1960s and later augmented by Jacques Vallée. The paper then finds certain associations with certain statistical parameters.

I see a few potential issues with the approach:
  • The method for identifying transients as light sources in near-earth space as opposed to artifacts of the photographic process has been criticized by people with expertise in the field (which does not include me, I just get obsessive about certain things).
  • Using a three-day window for each nuclear test date, as opposed to the actual date and time, dramatically increases the potential coincidences. Also, the dates are only for tests by the United States, Great Britain, and the Soviet Union that were actually carried out; if you think the transients are associated with tests before they're conducted, you'd have to look at scheduled/cancelled test dates as well. (I'm also pondering the mechanism by which these things know when tests are going to occur.)
  • The UFOCAT database is a database of reports of sightings; even if individual sightings are de-duplicated, you still have massively dirty data related to misidentifications of meteors, planets, etc. The dates would also not be independent of media reports spurring people to look at the skies. And on certain dates you're going to have astronomical conditions favoring widespread misidentifications.
  • The dates when the plates were shot, nuclear tests occurred, and sightings were reported are not independent. They are all tied to cycles of human behavior and of climate. You can't photograph the sky in bad weather; people don't spend as much time outside looking at the sky in the winter; and the U.S. nuclear tests, at least, appear to have been mostly conducted on weekdays and mostly in spring and summer. The POSS-I plate log for 1953 shows many weekends and holidays when no plates were shot, such as:
    1754061866388.png
  • 1754061905824.png

So if your nuclear tests were more likely to be conducted on a weekday and your plates were more likely to be shot on a weekday, you're compressing the potential associations to ~71% of the potential dates.
Weather is another factor associating Palomar observations with nuclear test dates. Palomar and the Nevada Test Site, where 100 nuclear tests occurred before 1958, are only 250 miles apart and, while San Diego clearly doesn't have identical weather to the Nevada desert, both are arid regions subject to the same large-scale weather systems, such as tropical storms. And nuclear testing was highly sensitive to weather conditions, to reduce fallout blowing over populated areas.
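A quick toy simulation (my own construction, not anything from the paper) shows how a shared weekday preference alone inflates "within +/- 1 day" coincidences above the uniform-date baseline:

```python
import random

# Toy model: two independent event series (think "test dates" and "plate
# dates") drawn over a 364-day year. With no weekday preference, +/- 1 day
# coincidences happen at the uniform baseline; if BOTH series avoid
# weekends, coincidences become more common with no causal link at all.
random.seed(1)

def random_day(weekend_penalty):
    # Day-of-year 0..363 (52 whole weeks); weekend days are kept with
    # probability (1 - weekend_penalty), weekdays always.
    while True:
        d = random.randrange(364)
        if d % 7 < 5 or random.random() > weekend_penalty:
            return d

def coincidence_rate(penalty, trials=200_000):
    hits = 0
    for _ in range(trials):
        if abs(random_day(penalty) - random_day(penalty)) <= 1:
            hits += 1
    return hits / trials

uniform = coincidence_rate(0.0)  # no weekday preference at all
biased = coincidence_rate(0.9)   # both schedules strongly avoid weekends

print(f"uniform dates:  {uniform:.4f}")
print(f"weekday-biased: {biased:.4f}")  # higher, purely from shared scheduling
```

The effect is modest here, but it is exactly the kind of baseline shift a "significant association" claim needs to correct for.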
 
As an example of the non-randomness of the nuclear test dates, I extracted the United States test dates (the U.S. conducted 79 of the 142 tests by the U.S., UK, and USSR during the period) from their source document, removed the blasts that were below ground (not many before 1958), and totaled the instances of days of the week:
1754161809617.png

I also found a plate log showing the starting dates for plate-recording sessions at the Palomar camera, though I only transcribed the first three months of the 1953-1956 survey log to a Google sheet. The dating gets complicated because the sessions ran overnight: they would begin around 7:30 to 11 p.m. and conclude anywhere from midnight to ~4 a.m. Also, this covers only the session starts; a true weighting would address how many plates were taken on a given day. For example, from 2120 hours on Monday, Sept. 14, 1953 to 0407 on Wednesday, Sept. 16, 1953 they shot 13 plates, when the average for that September was 5.1 plates per night. A more comprehensive review using the actual timestamps of the plates over a long period might unearth different overall patterns; however, I think this illustrates how neither nuclear test dates nor POSS-I plate recording dates are randomly or uniformly distributed. (Coincidentally, the U.S. conducted no above-ground nuclear tests in the second half of 1953.)

1754162567341.png
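For anyone who wants to reproduce the tallying, here's a minimal sketch using Python's standard library and the eleven Operation Upshot-Knothole shot dates as given in public summaries (a small subset, not the full 79-test dataset used above, and worth double-checking against the source document):

```python
from collections import Counter
from datetime import date

# Day-of-week tally for a subset of U.S. above-ground test dates:
# the eleven Operation Upshot-Knothole shots of 1953, per public summaries.
# Illustrates the method, not the full dataset discussed above.
test_dates = [
    date(1953, 3, 17), date(1953, 3, 24), date(1953, 3, 31),
    date(1953, 4, 6),  date(1953, 4, 11), date(1953, 4, 18),
    date(1953, 4, 25), date(1953, 5, 8),  date(1953, 5, 19),
    date(1953, 5, 25), date(1953, 6, 4),
]

weekday_names = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
tally = Counter(weekday_names[d.weekday()] for d in test_dates)

for name in weekday_names:
    print(f"{name}: {tally.get(name, 0)}")
```

Swapping in the full list of dated tests (and doing the same for plate session dates) gives the distributions shown in the images.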
 
Just a heads up, the UFO community is pushing this paper:

1754317834782.png


I probably would have missed this, but a recent illness had me looking at Facebook more in the last month than in the last few years.
 
Has anyone determined what their definition of a transient is? Or the definition of a defect?

Looking at the Fig 7 caption, it says that the images show both transients and defects. How have they determined which is which? I assume from reading the caption that they assume that a number of temporary artefacts (3?) in a line are classed as transients, but a single artefact is a defect. There are multiple (2) defects on that slide which could also be joined by a line.


Edit: the paper confirms this in section 1.
 
Has anyone determined what their definition of a transient is? Or the definition of a defect?
In my head, I would define a transient in photographs as a physical object present in one source but not in another (or measurably different), while a defect is a non-physical artifact from age degradation, radiation, mishandling, etc. But this is my own definition; I would assume astronomy has a more fixed definition with the same goal: capture real astronomical events (supernovae etc.) while leaving out "noise".

Anyway, I found this great write-up, which sheds light on the drama with Hambly+Blair@2024:


Source: https://medium.com/@izabelamelamed/not-seeing-the-star-cloud-for-the-stars-a010af28b7d6


Here is the full critical paper by Hambly/Blair:

https://arxiv.org/abs/2402.00497

Given my own experience with research and drama, my ELI5 is therefore that this latest paper seems like the latest chapter in the drama story, an attempt to "hit back with more data". Now I better understand why they/she spent a whole section arguing they were right and the other researchers were wrong. In any case, I haven't decided whom to believe, and hope Hambly+Blair will add a comment on this latest paper too.
 
Has anyone determined what their definition of a transient is? Or the definition of a defect?

Looking at the Fig 7 caption, it says that the images show both transients and defects. How have they determined which is which? I assume from reading the caption that they assume that a number of temporary artefacts (3?) in a line are classed as transients, but a single artefact is a defect. There are multiple (2) defects on that slide which could also be joined by a line.
My initial reading is that if a sufficient number of impermanent spots (present in one plate but not the prior or next) on a particular long-exposure plate can be placed in a bounding box that is long and thin (linear rather than rectangular), they are interpreted as a single object moving along a path, because the likelihood of such arrangements occurring by random chance is low, assuming spots that result from defects (or artefacts not of interest) are uniformly distributed.

Their reasoning and method are discussed here: A glint in the eye: Photographic plate archive searches for non-terrestrial artefacts (Villarroel et al. 2022) (also attached). For these alignment strips, see the definition and discussion of pmax and dmax in the paper.

Screenshot 2025-08-04 at 12.42.48 PM.png


I made some drawio illustrations of what I'm thinking. e.g. transients of interest (left), and not (right)

transient-Page-1.png
transient-Page-2.png


Example r=4 alignment in a plate with N=10 transient spots. These 4 would be interpreted as the ones of interest.

transient-Page-3.png


In particular this premise below seems key to the statistical argument that sufficiently linearly aligned points is the signal:

Screenshot 2025-08-04 at 12.18.27 PM.png
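One way to make the "long thin bounding box" idea concrete is an elongation measure via principal components: fit a line through the spot positions and compare the spread along it to the spread across it. This is my own toy version, not the authors' actual pmax/dmax algorithm:

```python
import math

# Toy alignment measure: ratio of the minor to major eigenvalue of the
# 2x2 covariance matrix of spot positions. Near 0 => nearly collinear
# (long thin box); near 1 => isotropic blob. My construction, not the
# paper's pmax/dmax statistic.
def elongation(points):
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    # eigenvalues of [[sxx, sxy], [sxy, syy]]
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    lam1, lam2 = tr / 2 + disc, tr / 2 - disc
    return lam2 / lam1 if lam1 > 0 else 0.0

aligned = [(0, 0), (1.0, 1.02), (2.0, 1.97), (3.0, 3.01)]   # near-collinear
scattered = [(0, 0), (1, 3), (2, 0.5), (3, 2.2)]            # not aligned

print(f"aligned:   {elongation(aligned):.4f}")
print(f"scattered: {elongation(scattered):.4f}")
```

The statistical question is then what threshold on such a measure separates "moving object" from chance arrangements of uniformly scattered defects.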
 

Attachments

In the paper, at the very beginning, it is mentioned and I quote:

It is scientifically untenable to assume that all candidates are either authentic transients or all defects. A reasonable working assumption is that both populations are present in some unknown proportion. From this perspective, even a single authentic detection among many contaminants would validate the effort and warrant continued search.

Why is this untenable?
 
In the paper, at the very beginning, it is mentioned and I quote:
External Quote:
It is scientifically untenable to assume that all candidates are either authentic transients or all defects. A reasonable working assumption is that both populations are present in some unknown proportion.
(My emphasis).

The authors presuppose the existence of "transients", their chosen term for hypothesized, previously undetected physical objects in space near the Earth (and it is clear from the general discourse that they are at least amenable, without any other evidence whatsoever, to the idea that these "transients", if they ever existed, might be ETI technological artefacts), and state it is a
External Quote:
...reasonable working assumption
that both photographic defects and "transients" are present.

They do not state why this is a "reasonable" assumption.

A reasonable assumption, based on everything that we know about astronomical observations, both optical and radio, is that there are no technological artefacts of unknown origin near the Earth now, or in the past.
Another reasonable assumption is that the authors have, at best, misinterpreted historical photographic records, made unrealistic assumptions and drawn unrealistic conclusions in line with their pre-existing beliefs and/or wants. At best, a sort of type-1 experimental error.
 
Plate emulsion artefacts being the root cause is quite plausible. I am interested to know whether plates in those days were known to have a lot of emulsion artefacts (likely). I am also skeptical of "the object in the picture is travelling in a straight line": as shown in the paper, the artefacts are never exactly on the same line, which undercuts the idea of a passing satellite (which would travel on an essentially straight line).
 
Since I had to look up "Texas Sharpshooter Fallacy" (I'd never heard of it) I'll drop it here for anybody else similarly positioned:


Wikipedia puts it thus:
External Quote:

The Texas sharpshooter fallacy is an informal fallacy which is committed when differences in data are ignored, but similarities are overemphasized. From this reasoning, a false conclusion is inferred.[1] This fallacy is the philosophical or rhetorical application of the multiple comparisons problem (in statistics) and apophenia (in cognitive psychology). It is related to the clustering illusion, which is the tendency in human cognition to interpret patterns where none actually exist.

The name comes from a metaphor about a person from Texas who fires a gun at the side of a barn, then paints a shooting target centered on the tightest cluster of shots and claims to be a sharpshooter.
https://en.wikipedia.org/wiki/Texas_sharpshooter_fallacy
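The statistical face of the fallacy is the multiple comparisons problem the quote mentions; a toy simulation (my own, unrelated to the paper's actual tests) shows how scanning pure noise produces "significant" results:

```python
import random
import statistics

# Multiple comparisons in miniature: run many significance tests on pure
# noise and about 5% will come out "significant" at p < 0.05 by chance.
random.seed(0)

def fake_study(n=30):
    # Two samples drawn from the SAME distribution: any "effect" is noise.
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(z) > 1.96  # crude two-sided test at roughly p < 0.05

n_studies = 400
false_hits = sum(fake_study() for _ in range(n_studies))
print(f"{false_hits} of {n_studies} null comparisons look 'significant'")
```

Paint the target around whichever of those hits you like, and you have a sharpshooter.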

Edit -- swapped out the definition for one that included the colorful bit and the more technical bit!
 
It looks like little more than cherry picking to me.
It's also vaguely reminiscent of the "Synthetic Aperture Radar Doppler Tomography Reveals Details of Undiscovered High-Resolution Internal Structure of the Great Pyramid of Giza" paper discussed in https://www.metabunk.org/threads/cl...very-of-a-huge-city-under-the-pyramids.14095/, where the results were interpreted as showing a substantial structure underneath the Great Pyramid -- but the method had never been ground-proofed.

In this case the result is "we found not just a few, but many light sources in geosynchronous orbit in the 1950s," but there's not a single validated detection. It's more of an argument from incredulity -- "we don't see how it could be anything other than a shiny object in space" -- where all the detections are algorithmic and all the competing explanations are simply dismissed, though the authors are not experts in the underlying technology (such as 1940s-50s telescope cameras, photographic emulsions, and potential contamination).

(The first geosynchronous satellites weren't orbited until 1963, well after the POSS-I survey was completed, so there isn't even a later confirmed detection made with the same technology to compare against. Later surveys were done with different equipment.)
 
At the very beginning of the paper, the authors state (and I quote):

External Quote:

It is scientifically untenable to assume that all candidates are either authentic transients or all defects. A reasonable working assumption is that both populations are present in some unknown proportion. From this perspective, even a single authentic detection among many contaminants would validate the effort and warrant continued search.
Why is this untenable?
It may be untenable to assume, without good reasons, that it must be all one or all the other -- but then it is equally untenable to assume that it cannot be. Why can't all the candidates in one exposure be defects? Why must the number of authentic transients be greater than zero? This sounds like the flawed logic we see in other cases, such as the waves of misidentified airplane reports. For example, in the Wright-Patterson AFB mass drone sighting incident, every piece of imagery released (so far) was identified as airplanes, and it was officially acknowledged that many of the sightings were misidentified airplanes -- yet it was still insisted that the number of sightings, and the length of time they spanned, was so large that some of them must have been authentic drone sightings.
 
Surely if you are only taking a linear progression from a small part of the overall plate it's a classic Texas Sharpshooter fallacy?
They also only take a very small proportion of the plates, and their statistics need to account for this. If you have a large number of plates and find features of interest on a small proportion of them, you need to scale the expected number of such features by the number of samples in the dataset and calculate the likelihood of finding X plates with such features, assuming you either looked at all the plates or at a representative random sample of them. I could be wrong, but it looked to me like their statistics reasons only from the temporary spots within a single plate. For example, if you look at 100 plates with 10 temporary spots on each, and 2 of the plates have approximately linear arrangements of 5, you didn't find those r=5 arrangements within just 20 spots on 2 N=10 plates -- you actually searched 1,000 temporary spots across 100 plates, and discarded 98 of the plates (980 of the spots) as not having the arrangements of interest.
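The look-elsewhere point above can be made concrete with a back-of-envelope sketch (all numbers hypothetical, treating plates as independent): even if a chance "linear arrangement" is rare on any one plate, finding at least one becomes likely once you search many plates.

```python
# Illustrative sketch (hypothetical numbers, not from the paper):
# if each plate independently has some small chance p of showing an
# approximately linear arrangement of spots by pure chance, the odds
# of finding at least one such "hit" grow quickly with the number of
# plates searched.

def prob_at_least_one_hit(p_per_plate, n_plates):
    """P(at least one chance alignment) when searching n_plates plates."""
    return 1 - (1 - p_per_plate) ** n_plates

# A 2% per-plate chance looks rare on any single plate...
single = prob_at_least_one_hit(0.02, 1)
# ...but across 100 plates a chance alignment becomes very likely (~87%).
many = prob_at_least_one_hit(0.02, 100)
print(single, many)
```

This is why reporting a "hit" without stating how many plates (and spots) were searched makes the significance impossible to assess.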
 
Last edited:
It looks like little more than cherry picking to me.
They defend this stance (kinda) in the beginning I believe.

Page 3:
We speculate that some transients could potentially be UAP in Earth orbit that, if descending into the atmosphere, might provide the stimulus for some UAP sightings.
 
Some high-level comments on the statistics presented. A Spearman's rho of 0.14 is low. Spearman's rho is like the usual correlation you've heard of, except it is calculated between the ranks of the values rather than the values themselves. Say you have two vectors, x and y. Pearson correlation -- the usual statistic people mean when they say "correlation" -- is calculated on the actual values in x and y. Spearman's rho takes the ranks of the values in x and y and calculates the Pearson correlation between those ranks. Example: x = {1, 10, 2.5}, y = {32, 2, 100}. The ranks are x_rank = {1, 3, 2} and y_rank = {2, 1, 3}, and Spearman's rho is the Pearson correlation between x_rank and y_rank. There are legitimate reasons to prefer one over the other; I am not quite sure which is valid here. Either way, a rho of 0.14 is low, meaning there is barely any predictive relationship between the variables. The sample size is large, so we get a statistically significant result. This is a good example of how statistical significance can be misleading: it's statistically significant, but the correlation is so weak that it's not practically significant, imo.
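The rank-vs-value distinction above can be sketched in a few lines of pure Python, using the example values from this post (no SciPy needed; this assumes no tied values for simplicity):

```python
# Minimal sketch of Spearman vs. Pearson, using the example values
# from the post. Assumes no ties, which is enough for illustration.

def ranks(values):
    """Rank of each value (1 = smallest); assumes no ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the ranks
    return pearson(ranks(x), ranks(y))

x = [1, 10, 2.5]
y = [32, 2, 100]
print(ranks(x), ranks(y))  # [1, 3, 2] [2, 1, 3]
print(spearman(x, y))      # -0.5
```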

As @Ann K already mentioned, "on dates within +/- 1 day of nuclear testing" is sus. That allows a huge window for creating spurious correlations.
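A quick way to see why the widened window matters (the study length is from the abstract; the number of test dates is an assumed figure for illustration): a ±1 day window mechanically triples the fraction of the calendar counted as "near" a test, so coincidences with any other daily series become much easier to find.

```python
import random

# Back-of-envelope check (hypothetical test count, not the paper's data):
# scatter "test" dates at random over the study window and measure what
# fraction of all days falls within +/- 1 day of some test.

random.seed(0)
N_DAYS = 2718          # study window length, from the paper's abstract
N_TESTS = 80           # ASSUMED number of test dates, for illustration only

test_days = set(random.sample(range(N_DAYS), N_TESTS))
window = set()
for d in test_days:
    window.update({d - 1, d, d + 1})

coverage = len(window & set(range(N_DAYS))) / N_DAYS
print(f"fraction of all days 'near' a test: {coverage:.3f}")
```

With 80 tests, up to 240 of the 2,718 days (nearly 9%) count as "near" a test before any transient data is even consulted.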

The biggest red flag to me is they trimmed the data. Example:
External Quote:
In the overall sample, the number of transients per date ranged from 0 to 4,528 (across multiple locations on multiple plates), with 5% trimmed mean = 10.09 and median = 0.0. The distribution of number of transients per date was highly right-skewed (skewness = 10.35) and over-dispersed (variance = 28,938.64).
Why did they trim the data? Best I can find is
External Quote:
For characterizing the nature of group differences in these nonparametric tests, we present 5% trimmed means given the highly skewed distributions of these variables and that median values were generally uninformative (e.g., median total transients = 0).
That's not a valid reason to trim your data! Trimming data means you're removing data! Removing data is generally frowned upon unless you have a valid reason to think the data is actually wrong -- recorded incorrectly, a broken sensor, or something like that. In past decades, legitimate sources sometimes recommended removing outliers or trimming data just because they may have a relatively large effect on your measured statistics. That old way of thinking has been rightfully put to rest. Keep all of your data, and only remove it if it's actually bad data.
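A small worked example of what a 5% trimmed mean actually does (the data below is made up, but it echoes the paper's description of a median of 0 and a long right tail): the trim silently discards the most extreme observations from each end, which is exactly where the signal in a heavily skewed count distribution lives.

```python
# Quick illustration (made-up skewed counts, not the paper's data):
# a 5% trimmed mean drops the largest 5% and smallest 5% of values
# before averaging.

def trimmed_mean(values, proportion=0.05):
    """Mean after removing `proportion` of points from each tail."""
    s = sorted(values)
    k = int(len(s) * proportion)
    kept = s[k:len(s) - k] if k else s
    return sum(kept) / len(kept)

# 90 zero-count days plus a long right tail of busy days:
data = [0] * 90 + [5, 8, 12, 40, 75, 120, 300, 800, 2000, 4500]

print(f"mean            = {sum(data) / len(data):.2f}")   # 78.60
print(f"median          = {sorted(data)[len(data) // 2]}")  # 0
print(f"5% trimmed mean = {trimmed_mean(data):.2f}")       # 1.56
```

The five largest values carry almost all of the total, so the trimmed mean (1.56) bears little resemblance to either the mean (78.6) or the median (0).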

There's also discussion of non-normal distributions of data and "fixes" for that. That reflects the common misconception that all of your data has to follow a normal distribution. The actual assumptions are more nuanced and out of scope for this comment.
 
They defend this stance (kinda) in the beginning I believe.
I don't see that as connected. They had access to hundreds of such plates and only report on one tiny sub-region of the plate with the highest density of such artefacts. That's looking for a pattern in noise by choosing the noisiest data to look at. Cf. p-hacking.

Your quote is just them applying a weak CYA veneer, an orthogonal process.
 
At the very beginning of the paper, the authors state (and I quote):

External Quote:
It is scientifically untenable to assume that all candidates are either authentic transients or all defects.
Lmao at that quote. Just assert that a conclusion isn't allowed. Finding evidence for the paranormal is certainly much easier when you simply assert that the observations cannot be entirely explained by normal explanations!
Surely if you are only taking a linear progression from a small part of the overall plate it's a classic Texas Sharpshooter fallacy?
I'm almost certainly missing something obvious, but where do they describe how they select a portion of the plate?

(My emphasis).

A reasonable assumption, based on everything that we know about astronomical observations, both optical and radio, is that there are no technological artefacts of unknown origin near the Earth now, or in the past.
Another reasonable assumption is that the authors have, at best, misinterpreted historical photographic records, made unrealistic assumptions, and drawn unrealistic conclusions in line with their pre-existing beliefs and/or wants: a sort of Type I experimental error.

This study is also another example where Bayesian analysis would be superior. You can build your prior knowledge of the world -- i.e. the points you make here -- into the analysis: the prior should weight prosaic explanations more heavily than UAP explanations. (A frequentist analysis like the one in these studies yields similar results to a Bayesian analysis with "non-informative", aka "flat" or "uniform", priors, where the possibilities are given equal prior probabilities.)
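The point about priors can be shown with a toy odds-form Bayes calculation (all numbers hypothetical): posterior odds = prior odds × Bayes factor, so the same evidence that looks decisive under a flat prior barely moves a prior that heavily favours prosaic explanations.

```python
# Toy sketch of Bayesian updating in odds form (hypothetical numbers):
# posterior_odds = prior_odds * bayes_factor.

def posterior_prob(prior_prob, bayes_factor):
    """Posterior probability from a prior probability and a likelihood ratio."""
    prior_odds = prior_prob / (1 - prior_prob)
    post_odds = prior_odds * bayes_factor
    return post_odds / (1 + post_odds)

# Flat prior (the implicit frequentist stance): the evidence dominates.
flat = posterior_prob(0.5, 20)        # ~0.95
# Informative prior strongly favouring plate defects over UAP:
skeptical = posterior_prob(1e-6, 20)  # still ~0.00002
print(flat, skeptical)
```

A Bayes factor of 20 would be considered strong evidence, yet under the skeptical prior the posterior remains vanishingly small -- which is the intuition behind "extraordinary claims require extraordinary evidence."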
 
"Transients" as a label for hypothetical unidentified geosynchronous/NEO objects is a bit like "orbs"; it's a new bit of woo-ish jargon.

I suggest we call them "Clarke hiccups", because

(1) Arthur C. Clarke pointed out the usefulness of geosynchronous orbits in the 1940s
(2) Hiccups are transient events
(3) Had they been thought of in the 1980s they might have been mentioned in Arthur C. Clarke's Mysterious World, but they're probably science fiction.
 
I'm almost certainly missing something obvious, but where do they describe how they select a portion of the plate?
As far as I can tell they don't and neither could Izabela Melamed


Source: https://medium.com/@izabelamelamed/not-seeing-the-star-cloud-for-the-stars-a010af28b7d6


External Quote:
This suggested her transients were part of a larger phenomenon. But with dozens in this larger slice, the odds of them being rare events — natural or alien — shrank fast. Why downsize the study to 10×10 arcmin? Her paper didn't explain the tight crop, despite DSS defaults starting at 15×15 arcmin. The next step was obvious: analyze the full plate.
 
I've read it a bit more and, correct me if I am wrong, but this seems to be the layman's explanation of what they are saying:

They took two different images made before spaceflight.
They compared them and looked for any differences, and saw some.
Some spots that were present on the first were not there (or not in the same position) on the next one.
So they are moving between images.
Because the image was a long exposure and the telescope was tracking the stars, a continuously reflecting/emitting object would leave a streak against the static star background.
Therefore it must be blinking.
If it's the same object, its appearances should fall on a straight line.
Some of the spots can be fitted to a reasonably straight line.


However the study

Only uses a tiny part of the available plate and thus ignores the many, many spots that look the same but don't fit any pattern. (sharpshooter/cherry pick)
Doesn't trace the 'straight line' over the whole plate, I imagine it would look fairly obvious if it were across the full 60x60 arc minute plate. (cherry pick)
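The "fitted to a reasonably straight line" step above can be sketched as an ordinary least-squares fit (this is our own toy implementation; the paper's actual fitting procedure is not specified). The critique then reduces to: with enough candidate spots and freedom to pick the subset, a low-residual fit like this is easy to find by chance.

```python
# Toy least-squares line fit, y = a*x + b, with an RMS residual as a
# "straightness" score. NOT the paper's method -- just an illustration
# of what fitting spots to a line involves.

def fit_line(points):
    """Least-squares fit; returns (slope, intercept, rms_residual)."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] ** 2 for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    rms = (sum((p[1] - (a * p[0] + b)) ** 2 for p in points) / n) ** 0.5
    return a, b, rms

# Collinear points fit with zero residual, as expected:
a, b, rms = fit_line([(0, 1), (1, 3), (2, 5)])
print(a, b, rms)  # 2.0 1.0 0.0
```

The missing piece in the study, per the criticism above, is the denominator: how many candidate subsets across how many plates could have produced an equally low residual.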
 
This is quite popular in UFO circles, as supposed evidence of pre-space era satellites or spacecraft that correlate with upticks in UFO sightings.

I'm skeptical.

An analysis issue, aside from the technical aspects of the data, is a very basic stats point. When you have a very large sample size (i.e. > 2,000 data points), you have massive statistical power to generate a small p-value. This means you can get low p (and reject the null hypothesis) even with small or modest effect sizes. While technically statistically significant, in real-world terms the effect could be trivial. Look at their Mann-Whitney U tests, for example: huge U scores for a smallish effect size. I would not be surprised if you could pick many random events (e.g. UK bank holidays) and find an association between those and the appearance of transients. P-values need to be interpreted with common sense. This could be a useful exercise for stats teaching if the dataset is released.
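The sample-size point above is easy to see from the standard test statistic for a Pearson-style correlation, t = r·sqrt((n − 2)/(1 − r²)); the paper uses Spearman's rho, whose large-sample significance test behaves the same way with respect to n.

```python
import math

# How sample size drives significance for a fixed, weak correlation.
# t = r * sqrt((n - 2) / (1 - r^2)) is the standard test statistic for
# a Pearson correlation; Spearman's rho uses a similar approximation
# for large n, so the same n-driven effect applies.

def t_stat(r, n):
    return r * math.sqrt((n - 2) / (1 - r * r))

weak_small = t_stat(0.14, 30)    # weak effect, small sample: t ~ 0.75
weak_large = t_stat(0.14, 2718)  # same weak effect, paper-sized n: t ~ 7.4
print(f"t (n=30)   = {weak_small:.2f}")
print(f"t (n=2718) = {weak_large:.2f}")
```

The same rho = 0.14 is nowhere near significant at n = 30 but wildly "significant" at n = 2,718, even though the strength of the relationship is identical in both cases.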

Update...oops @yoshy beat me to it. I agree with @yoshy
 
Last edited:
I was thinking about whether there is another way to determine if the anomalies on the plates are imaged objects or artefacts of the plate. As the optics of the telescopes used were never perfect, perhaps there is a way to determine which of the objects are real by showing which exhibit "normal optical errors" like coma or other aberrations. I think it is wishful thinking, though, because the optical errors are likely to be undetectable.
 
As far as I can tell they don't and neither could Izabela Melamed


Source: https://medium.com/@izabelamelamed/not-seeing-the-star-cloud-for-the-stars-a010af28b7d6


External Quote:
This suggested her transients were part of a larger phenomenon. But with dozens in this larger slice, the odds of them being rare events — natural or alien — shrank fast. Why downsize the study to 10×10 arcmin? Her paper didn't explain the tight crop, despite DSS defaults starting at 15×15 arcmin. The next step was obvious: analyze the full plate.

Very interesting read! Izabela has demonstrated and highlighted the cherry-picking method.
 