Transients in the Palomar Observatory Sky Survey

External Quote:
...the flare reaches around 2nd mag at 01:09:56 and lasts for around 15s.
So a real geostationary glint would last several seconds and move against the fixed stars

The quoted figure of 15 seconds is from a British Astronomical Association forum for amateur astronomers, whose members almost by definition will be using relatively limited equipment (though the thread-starter Martin Lewis has an impressive homebuilt 44.4 cm / 17.5 inch reflector, https://skyinspector.co.uk/me/).
There is no discussion of a minimum duration for glints/flares; the thread doesn't state or imply that all real geostationary glints last for several seconds.

"A high-rate foreground of sub-second flares from geosynchronous satellites", Guy Nir, Eran O. Ofek, Sagi Ben-Ami et al., Monthly Notices of the Royal Astronomical Society 505 (2) 2021 documents glints/flares of much shorter duration:

External Quote:
We present a sample of ~0.1--0.3s duration flares detected in an un-targeted survey for such transients. We show that most, if not all of them, are glints of sunlight reflected off geosynchronous and graveyard orbit satellites. The flares we detect have a typical magnitude of 9--11, which translates to ~14--16th magnitude if diluted by a 30s exposure time.

Sub-second glints/flares from satellites appear to be a known thing, but it would be extraordinary if the "transients" reported by Bruehl and Villarroel from their examination of the POSS plates had similar causation.
 
A subsecond glint 'diluted' (by 5 magnitudes) by a 30 second exposure time is entirely different to a subsecond glint 'diluted' by a 50 minute exposure time.

50 minutes would dilute a subsecond glint into invisibility.
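The dilution arithmetic is easy to check: averaging a flare's light over a longer exposure costs 2.5·log10(t_exposure/t_flare) magnitudes. A quick sketch in Python (the 0.3 s flare duration is taken from the Nir et al. abstract above):

```python
import math

def dilution_mag(flare_s: float, exposure_s: float) -> float:
    """Magnitudes lost when a flare lasting flare_s seconds is
    averaged over an exposure lasting exposure_s seconds."""
    return 2.5 * math.log10(exposure_s / flare_s)

# A ~0.3 s glint diluted by a 30 s exposure (the Nir et al. case):
print(round(dilution_mag(0.3, 30), 1))       # -> 5.0 mag
# The same glint diluted by a 50-minute POSS-I exposure:
print(round(dilution_mag(0.3, 50 * 60), 1))  # -> 10.0 mag
```

So a 9-11 mag glint would come out around 19-21 mag on a 50-minute plate, roughly at or beyond the plate limit, which is the point being made here.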
 
This is soooooo strange. When have serious researchers ever created a website like this? Who wants to bet this gets monetized soon?

It's "science", or in this case "UFOlogy" by press release. The Findings tab states unequivocally that they have found these transients:

External Quote:

Researchers identified 83 groups of aligned transients, with 20 groups showing four or more aligned objects. The most significant case had five aligned objects with a statistical significance of approximately 3.9 sigma, meaning it's highly unlikely to be a random occurrence.
I believe this is despite none of these papers having made it past the preprint stage, and with no independent verification. Don't wait for confirmation, just go public.

But for me, this is the nail in the coffin: all of these papers will likely go nowhere except in the UFOlogical world:


https://ghostsintheglass.com/

One of Villarroel's first stops on the media circuit is with Coulthart. He is the current leading purveyor of UFO BS. Link below to his other episodes, where he blindly accepts the word of a couple of yokels that there is a secret US/alien underground base below a Northern Arizona University/USDA garden a few miles from downtown Sedona, AZ. Others can feel free to look up some of his other unevidenced and outrageous claims, like the crashed UFO so large it's covered by a building. His Reality Check is becoming just a version of Secrets of Skinwalker Ranch or Finding Bigfoot. All hype, little evidence.

Going on with Coulthart makes for great hype in the UFO world, but doesn't bode well for academic acceptance of Villarroel's various papers.

Thread about various Coulthart claims, skip to post #32 for the secret underground UFO base:

https://www.metabunk.org/threads/ne...y-mystery-arizona-us-border-patrol-uap.14187/
 
 
If I remember correctly, Villarroel tried to say she didn't go into this looking for UFOs, but her entire history and connections to people like Coulthart say otherwise. My understanding is she has known him for years, but I could be getting people mixed up.

Oh great we're back to the DUMBs (Deep Underground Military Bases), or as I like to call them, the dumb DUMBs.

To be clear, I'm not accusing Villarroel of having anything to do with DUMBs, other than by association with Coulthart. My point is that Coulthart's show, Reality Check, presents unsubstantiated claims like this on a regular basis. He doesn't appear to make any attempt to vet or investigate these claims; he just shares them. Villarroel probably could have gone on with him and talked about alien Bigfoot portals and received the same amount of scrutiny. It just doesn't strike me as a place where a serious scientist doing serious work would appear.
 
Even serious scientists like to be on TV, especially if somebody pays them! I'd advise them to be real careful about that, if I were advising them, but they haven't come around to ask my opinion!
Serious scientists have been conned many times, into things like giving endorsements or sound bites to unscrupulous people who edit their comments to twist the meaning completely. That's why I'd rather trust a peer-reviewed paper than a TV appearance.

It's been going on throughout history, and sometimes it happens without their knowledge at all; just think of the number of times people have come up with self-serving outright lies such as "Darwin recanted evolution on his deathbed", generally followed by far too many exclamation points.
 
I've been thinking how to approach this technically. One idea was to make a diff between two FITS files in order to see the actual differences between them. I wanted a faster way of detecting potential transients. But I ended up making a visual sanity check using plate XE531 (it contains "candidate 2") and found dozens of new potential transients fairly easily. One is pictured below. Making a diff probably won't help because every plate seems to contain thousands of "transients" and lots of double or triple "candidates".

One plate that has a large number of apparent plate defects is XE187. It's the next one after XE186, the plate that contains "candidate 1". I found dozens of defects on the right side, closer to the edge of the plate, including some really large blobs. I didn't spend much time on this, because it's now clear that using a diff between two POSS-I plates won't help.

I keep coming back to the same conclusion: Villarroel et al. are using a dataset so small that it doesn't make sense. The critical paper by Hambly & Blair is valid regardless of the "Earth's shadow" theory. So is this Medium post, also discussed earlier in this thread:
Source: https://medium.com/@izabelamelamed/not-seeing-the-star-cloud-for-the-stars-a010af28b7d6

It doesn't look like Villarroel et al. are interested in addressing these clear concerns. They just jump directly to the ET/UFO conclusion and try to reinforce it with other theories. Perhaps they didn't jump at all, but actually started with the ET/UFO idea, selected a few suitable (partial) plates, and found some "candidates" to support their hypothesis.

So perhaps there is nothing to debunk? Thousands of "glints" or "transients" can be found in the POSS-I plates. We don't know what they are. A lot of them are probably defects, but perhaps not all. Concluding that they are ET UFOs is an extraordinary hypothesis which is not supported by any actual evidence.

Screenshot from Aladin (left window from DSS2 red, right window is the full plate XE531), potential new "triple candidate" (or some artefacts)
xe531-blobs.png
 

I have trouble making sense of the medium article. They are saying Villarroel only looked at small croppings and not the full plates, but these croppings are actually just the search boxes used for detecting aligned transients. They are only relevant to the search for aligned transients, not for identifying/cataloging transients, or for the shadow study. And using a search box doesn't mean the whole plate wasn't searched, it just means that alignments spanning the full plate (or larger than the search box) weren't looked for.

And also the search boxes they used ranged up to 30 arc-seconds (not 10). In the paper they do discuss 10 arc-second lengths, but this is what they estimate as the length that an object in geosynchronous orbit could travel within the 50 minute exposure time. So they set the search box size to 30 (they say conservatively), to try to make sure they would be able to detect any objects that could potentially be moving in geosynchronous orbit while glinting periodically.

Edit: some errors in my comment crossed out.
 
And also the search boxes they used ranged up to 30 arc-seconds (not 10). In the paper they do discuss 10 arc-second lengths, but this is what they estimate as the length that an object in geosynchronous orbit could travel within the 50 minute exposure time
That's much too small. In 50 minutes a geosynchronous satellite would move ~12.5 degrees against the fixed stars.
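That figure follows directly from the geometry: a geostationary satellite is fixed in the Earth frame, so against the stars it sweeps 360° per sidereal day. A quick check:

```python
SIDEREAL_DAY_MIN = 23 * 60 + 56.1  # sidereal day, ~1436.1 minutes

def drift_deg(exposure_min: float) -> float:
    """Angular drift of a geostationary satellite against the fixed
    stars during an exposure of the given length in minutes."""
    return 360.0 * exposure_min / SIDEREAL_DAY_MIN

print(round(drift_deg(50), 1))  # -> 12.5 degrees over a 50-minute plate
```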
 
That's much too small. In 50 minutes a geosynchronous satellite would move ~12.5 degrees against the fixed stars.
I made a mistake. I don't really understand astronomical units admittedly.


Nevertheless the medium article doesn't make sense does it?
 
More data is generally better in scientific research because it leads to more accurate, robust, and reliable findings by providing more examples of the phenomenon being studied, reducing the risk of overfitting and bias, and allowing for the detection of smaller effects or more complex patterns. And there is clearly more data available. Of course more *bad* data would not be helpful.

Note that the Medium post was written before the Earth's shadow study. The same applies to the Hambly and Blair paper.

Possibly interesting note: I found an earlier study where Villarroel was a co-author: Discovering vanishing objects in POSS I red images using the Virtual Observatory (September 2022): https://academic.oup.com/mnras/article/515/1/1380/6607509?login=false They covered a lot of data in this study. They found 298 165 sources in POSS-I plates, did reduction rounds based on various criteria, and ended up with 5399 sources. When there's a large amount of data, selecting only a few of them as ET UFOs looks weird. They haven't explained why only a few sources out of the 5399 are noteworthy.

Another important consideration is the 103a-E emulsion used in POSS-I images. If the Medium post is factually correct, the artefacts more or less disappeared when new emulsions were taken into use. If true, this should be explored. I did a quick review using just two POSS-II red plates (with the new emulsion), and could not find any transients. With POSS-I images, I could easily find a lot of transients on each plate. But my dataset is just too small for conclusions. They could also expand the research outside Palomar, e.g. to APPLAUSE https://www.plate-archive.org/cms/home/ If several "ET UFOs" can be found in the majority of POSS-I images, then it's likely they will be found in other sky surveys, too.
 
And using a search box doesn't mean the whole plate wasn't searched, it just means that alignments spanning the full plate (or larger than the search box) weren't looked for.
I haven't read the study yet, but that smells of cherry-picking/p-hacking.
Any legitimate sequence of 3 or more glints should continue across the plate, or possibly be found on both the red and blue plate, which I understand were taken in sequence?
But to limit it to a fraction of the full path for no reason makes me think the reason is to make the result look nice?
 
A quarter of a million transients narrowed down to five thousand? That's some hefty data winnowing right there.
A lot of winnowing, including by type of plate and on images collected for a specific purpose.

I keep wondering to what extent they looked at other images collected on other types of plates at the same time, or images collected later after they switched to other types of plates?

All they may be doing is proving that the particular media they are examining is prone to problems.

Before claiming that you have discovered something only found on one type of image you need to check every other type of media in use at the time. Have they done this, and to what extent?

Also, they seem to be claiming that the launching of one satellite makes every image collected afterward useless for their purpose, surely there would be a period early on where the quantity of man-made stuff in orbit was small enough to be worked around? Their focus has been too narrow I think, they found the answer they wanted and stopped looking?
 
A quarter of a million transients narrowed down to five thousand? That's some hefty data winnowing right there.

They didn't narrow it down to 5,399 arbitrarily. They explain right in the abstract,

We found 298 165 sources visible only in POSS I plates, out of which 288 770 had a crossmatch within 5 arcsec in other archives (mainly in the infrared)

In that paper, however, they don't present results claimed to be potential evidence of ET UFOs. It's in the new paper that they do that, and ironically in that paper they don't winnow the data.

My source of skepticism is the polar opposite of yours: did they use several hundred thousand sources that are found in other catalogs, and if so, how can an object found in multiple catalogs be a glint? Or am I getting something wrong?
 
They found 298 165 sources in POSS-I plates, did reduction rounds based on various criteria, and ended up with 5399 sources. When there's a large amount of data, selecting only a few of them as ET UFOs looks weird. They haven't explained why only a few sources out of the 5399 are noteworthy.
They explained in the paper that 288,770 of the 298,165 had cross-matches found in other catalogs (meaning they found sources in other images taken much later that might be the same source). I guess maybe there is some uncertainty about the cross-matches actually being the same source?

But that leaves 9,395 sources detected in POSS-I that were not detected or cross-matched in other catalogs. Of those, 3,592 were confirmed to be artifacts by comparing different digitizations of the same images. So then they were left with 5,803, of which they found some to be asteroids, etc.
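The arithmetic of that reduction can be sanity-checked in a couple of lines (figures as quoted in this thread):

```python
total = 298_165          # POSS-I-only sources (Solano et al. 2022)
cross_matched = 288_770  # had a counterpart within 5 arcsec elsewhere
unmatched = total - cross_matched
print(unmatched)         # -> 9395

artifacts = 3_592        # confirmed artifacts via a second digitization
remaining = unmatched - artifacts
print(remaining)         # -> 5803
```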
 
I wish they had addressed this issue. They're saying they determined that these glints on vintage photographic plates can't be from A, B, C, D, or E, therefore they must be caused by F -- rather than establish that geosynchronous satellite reflections look like F, particularly when recorded on photographic plates.

I know glints and transients have been identified as a source of noise in modern astrophotography, but as far as I can tell no one does long-exposure imaging like that, let alone with less-sensitive photographic emulsion. They seem to use stacking of short-exposure images.
 
Sorry, my comment wasn't clear. When I wrote: "When there's large amount of data, selecting only few of them as ET UFOs looks weird." - I meant the latest paper that's now under peer review, not the older one with much more data. The older paper without ET UFO hypothesis is ok. Of course they could have looked at POSS-II images and other sky surveys, but projects like this don't have unlimited time or resources.
 
One additional generic comment that's perhaps missing: transients are constantly being reported, analyzed, processed, etc. One example: the Zwicky Transient Facility (ZTF) is a public-private partnership aimed at a systematic study of the optical night sky. See: https://www.ztf.caltech.edu/

The latest ZTF transient alerts can be found e.g. here: https://lasair-ztf.lsst.ac.uk/ At a certain point, transient alerts are submitted to the Transient Name Server (TNS); here are their stats: https://www.wis-tns.org/stats-maps Currently TNS contains over 170K transients since 2016. Note that ZTF isn't the only group posting transient alerts there.

I'm not aware of any "ET UFO" transients in TNS, or of any research indicating that transient data shows possible ET activity. There are other transient alert systems, too, but none of them seem to have ET-UFO-type transients. There are also NEO programs that have been monitoring the skies since the 1970s. They have found comets and such, but no ET UFOs.

A new survey, the Rubin Legacy Survey of Space and Time (LSST), will start during this year. LSST's data volume is 20 TB/night. It takes approximately 800-1,000 panoramic images of the Southern Hemisphere sky every night, and the survey is 10 years long. So there will be a huge amount of data, and of transients. I don't expect LSST to find any signs of green men or their saucer-shaped vessels, but it will be a remarkable sky survey.

At least for me, these past, ongoing and new surveys send a clear message: any ET UFO finding is highly unlikely in astrophysics. It would be against science not to be skeptical about the ET UFO hypothesis of Villarroel et al.
 
Here is my current understanding. In 2022, they performed a process to identify transient sources. It began by checking whether a source was in the POSS-I plate but not in either of two major modern image collections (specifically Pan-STARRS DR2 and Gaia EDR3). This resulted in 298,165 sources. They then tried to see if there were matches in other modern catalogs, and they found cross-matches within 5 arcseconds for 288,770 of them. Those cross-matches are produced algorithmically, and for whatever reason there is some uncertainty about whether those cross-matches, or what percentage of them, are real matches. If they are real matches, it means they are normal astronomical bodies that could be of interest for non-UFO-related astronomical research.

They then further reduced the remaining ~9,000 by confirming some subset of them to be digital artifacts, and were able to identify some others as normal things like asteroids. Finally, they had 5,399 sources for which they have no explanation aside from emulsion artifacts, technosignatures, or some unknown phenomenon. That is what they produced in their 2022 paper.

https://arxiv.org/abs/2206.00907

Then recently, they released a new paper that relies on that data to try to test their technosignature hypothesis. They perform two studies in this paper. The first looks for sources that lie in a straight line, and thus could hypothetically be the same object moving while rotating and glinting/flashing multiple times. The second is the shadow study, where they try to determine whether there is a significant deficit in the number of transients that would be in shadow at geosynchronous orbit. Such a deficit would be evidence that sunlight might be involved in producing some of the transients.

Now, for these studies, they don't just use the 5,399 transients. For the alignment study, I didn't see anything about how they reduced the data, although I haven't looked closely, as I have been focusing on the shadow study. But they just start by saying they rely on the 298,165 detections from the 2022 paper.

For the shadow study, they use any of the original 298,165 that didn't have cross-matches specifically in Gaia or Pan-STARRS. This means many of them did have cross-matches in other catalogs. However, some unknown percentage of those cross-matches could be false matches, and those sources could thus still be unconventional transients. But they then restrict their study to only the northern hemisphere, which leaves 106,339.

We use the transient candidates from Solano et al. (2022), but with the additional requirement that they have no counterparts within 5 arcseconds in either Gaia or Pan-STARRS. Furthermore, we restrict our analysis to objects in the northern hemisphere (Dec > 0°). This yields a sample of 106,339 transients, which we use for our study.

https://www.researchgate.net/public...nsient_events_in_the_First_Palomar_Sky_Survey
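As a sketch, that selection amounts to the following (the field names here are hypothetical, for illustration only, and are not the authors' actual data schema):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Candidate:
    dec_deg: float                         # declination in degrees
    gaia_sep_arcsec: Optional[float]       # nearest Gaia counterpart, if any
    panstarrs_sep_arcsec: Optional[float]  # nearest Pan-STARRS counterpart

def keep_for_shadow_study(c: Candidate, radius: float = 5.0) -> bool:
    """Selection described in the quote: no counterpart within 5 arcsec
    in either Gaia or Pan-STARRS, and northern hemisphere only."""
    no_gaia = c.gaia_sep_arcsec is None or c.gaia_sep_arcsec > radius
    no_ps = c.panstarrs_sep_arcsec is None or c.panstarrs_sep_arcsec > radius
    return no_gaia and no_ps and c.dec_deg > 0.0

print(keep_for_shadow_study(Candidate(12.0, None, 7.2)))   # -> True
print(keep_for_shadow_study(Candidate(-3.0, None, None)))  # -> False (southern)
```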

Now, why include ~101,000 sources that have potential matches in some other catalogs and could thus be normal astronomical bodies? Because the 5,399 subset of confirmed transients is too small to get a meaningful statistical result from.

Some large number of the sources can be assumed to be artifacts. And with few expected actual glints (assuming there are any), with only 5,399 (of which only about 40 or so happen to be in shadow if at geo), the noise would dominate too much.

By using the larger set of unconfirmed transients, even if some large percentage are normal astronomical bodies with real matches in other catalogs, then provided enough of them are actual glints, and since those glints cannot occur in shadow, you could potentially get a meaningful result, with enough support to overcome the noise.

Arguably, it would be better to first confirm the cross-matches, so you can see what percentage of those ~288,000 are actually provably just normal astronomical sources. Then you would be left with the task of differentiating emulsion artifacts from something unexplained.

With the 5,399 confirmed transients (which are closer to that: either mysterious or emulsion artifacts), I did a preliminary check and found about 42.6 expected to be in shadow and about 40.3 actually in shadow (so a "deficit" of about 2.3), although I can't claim certainty that I carried that out correctly. Even though that is a very insignificant result, that could just be because at that dataset size the noise dominates.

Another thing I found is that if you naively calculate a p-value from those results the way I did it in the other thread (randomizing the time within the exposure window, and, for what I considered the null distribution, also the position on the plate), you get an extremely small p-value. That's not because it's a significant result, but because the "null distribution" and the "true distribution" are structurally different. However, the authors of the paper did a more appropriate test than that as far as I know, so this isn't something discrediting their work, just mine if you try to look at p-values, but it does demonstrate a pitfall to watch out for. I'm not enough of an expert in statistics to form conclusions about their statistical methods yet, so I wouldn't comment on that.
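For a rough sense of scale, a back-of-envelope Poisson estimate (this is not the authors' statistical test, and the 42.6/40.3 figures are my preliminary numbers above):

```python
import math

expected_in_shadow = 42.6  # rough expectation for the 5,399 set
observed_in_shadow = 40.3

# Treating the count as roughly Poisson, one sigma is ~sqrt(expected):
z = (expected_in_shadow - observed_in_shadow) / math.sqrt(expected_in_shadow)
print(round(z, 2))  # -> 0.35 sigma: a deficit of ~2.3 is deep in the noise
```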

A lot of what I am saying could be confused.
 
Update: the paper has reached the "Editorial decision: Revision requested" - stage today (28-Aug-2025): https://www.researchsquare.com/article/rs-6347224/v1

Basically this means that the editor sees potential in the submission but requires changes before it can be accepted. We can't know the nature of the request (major or minor revisions) at this point. Major revisions often involve a new peer-review round.

It will be interesting to see how this goes, but it's likely the paper will be published sooner or later.
 
Regarding Earth's shadow, I found this:

Source: https://youtube.com/watch?v=J7olrY775jw

The moon is about 10× farther out than geostationary orbit.
Article:
Since Earth's diameter is 3.7 times the Moon's, the length of the planet's umbra is correspondingly 3.7 times the average distance from the Moon to Earth: about 1.4 million km (870,000 mi). The diameter of Earth's shadow at lunar distance is about 9,000 km (5,600 mi), or 2.6 lunar diameters, which allows observation of total lunar eclipses from Earth.


Satellites that orbit far away would be in shadow for a much shorter time, anywhere from several hours to a few minutes.
Earth's diameter is ~12800km.
The moon is about 10× farther out than geostationary orbit.
So the shadow diameter would be approximately 12400 km at that distance.

Geostationary satellites orbit at about 11,070 km/h.
It takes a highly eccentric orbit, like a Molniya orbit, to get a satellite slow enough that it takes 2 hours to cross the shadow.

The moon orbits at ~3500 km/h and takes about 3 hours maximum.
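Those figures can be reproduced from the umbra geometry. The umbra is a cone about 1.4 million km long (per the article quoted above), so its width falls off linearly with distance from Earth. A sketch, numbers rounded:

```python
import math

EARTH_D_KM = 12_742     # mean Earth diameter
UMBRA_LEN_KM = 1.4e6    # length of Earth's umbra
GEO_RADIUS_KM = 42_164  # geostationary orbit radius from Earth's centre

# Width of the cone-shaped shadow at geostationary distance:
shadow_d = EARTH_D_KM * (1 - GEO_RADIUS_KM / UMBRA_LEN_KM)
print(round(shadow_d))  # -> ~12,360 km

# Orbital speed, and time for a satellite to cross the shadow:
speed_kmh = 2 * math.pi * GEO_RADIUS_KM / 23.934  # sidereal day in hours
print(round(speed_kmh))                           # -> ~11,070 km/h
print(round(60 * shadow_d / speed_kmh))           # -> ~67 minutes in shadow
```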
 
Yes, and the shadow is always moving, so many locations in that region of the sky would only be in shadow for part of the 50 minute exposure. This must make it very difficult to determine whether any particular 'transient' is supposedly in shadow at the moment it was recorded.
 
If you track their research back, the initial suggestion for looking at geostationary locations was from someone's thinkpiece on looking for technosignatures in distant star systems. That author theorized that a) a former alien civilization might have had structures in orbit around their planet, b) debris that was in geostationary orbit could potentially remain in stable orbits for thousands or hundreds of thousands of years and c) that belt of debris -- belt, not individual objects -- might have a visual signature.

There's no particular rationale provided for looking for alien objects in geostationary orbits around Earth other than that those are the only orbits where you might still find very old alien structures, since things placed into lower orbits would have fallen to the surface.

Which goes against their idea that objects that were in orbit in the 1950s are no longer discoverable. If they were reflective ancient debris 70 years ago because some self-repair mechanism kept them shiny and metallic over the millennia, and you think the sky survey gives you enough information to figure out their orbital positions (and paths, since the key is repeating glints in a line) and whether they were in Earth's shadow, why would they have escaped detection by all of Earth's militaries since then and remain undetectable now?
 
A bit more maths.
The circumference of geostationary orbit is 264,869 kilometres, and the diameter of the Earth's shadow is 12,400 km at that distance. That means that the Earth's shadow moves its own width across the stars every 67 minutes.

For a 50 minute exposure, only a small patch of sky remains in total shadow, about 26% as large as the full shadow. All the rest of the sky gets some sunlight at different times during this exposure period. Most of the sky is, of course, in full sunlight at all times.
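To put numbers on this, here's a quick back-of-envelope sketch; the 12,400 km shadow width is the rough figure used above (the umbra actually tapers a little with distance):

```python
import math

GEO_RADIUS_KM = 42_164        # geostationary orbital radius
SHADOW_DIAMETER_KM = 12_400   # Earth's shadow width at that distance (rough)
SIDEREAL_DAY_MIN = 1_436      # the shadow circles the GEO belt once per sidereal day

circumference = 2 * math.pi * GEO_RADIUS_KM       # ~264,900 km
drift_speed = circumference / SIDEREAL_DAY_MIN    # km per minute against the stars

# Time for the shadow to move its own width:
crossing_time = SHADOW_DIAMETER_KM / drift_speed  # ~67 minutes

# During a 50-minute exposure the shadow drifts ~9,200 km, so only a
# narrow strip of the belt stays in shadow for the whole plate:
drift = drift_speed * 50
always_dark_fraction = (SHADOW_DIAMETER_KM - drift) / SHADOW_DIAMETER_KM

print(round(crossing_time), f"{always_dark_fraction:.0%}")  # 67 min, ~26%
```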
 
There's no particular rationale provided for looking for alien objects in geostationary orbits around earth other than that those are the only orbits where you might still find very old alien structures, since things placed into lower orbits would have fallen to the surface.
Aren't the Lagrange points even more stable?
 
Just something about the problematic red emulsion used on the original plates from the Medium article:

External Quote:

Villarroel's transients appeared on glass plates from 1949–1956, when 103a-E emulsion was widely used in the Palomar Observatory Sky Survey (POSS-I) for red-sensitive imaging. But this emulsion was notoriously prone to defects. To minimize them, plates were frozen during processing — yet this still introduced microscopic clumps and bubbles.
External Quote:

By 1956, 103a-E was replaced with improved emulsions, and glass copies remained in use until digital photography took over in 2000. Curiously, the vanishing stars finally vanished around the time 103a-E was phased out.

Source: https://medium.com/@izabelamelamed/not-seeing-the-star-cloud-for-the-stars-a010af28b7d6


Villarroel's paper notes something similar:

External Quote:
The last date on which a transient was observed within a nuclear testing window in this dataset was March 17, 1956, despite there being an additional 38 above-ground nuclear tests in the subsequent 13 months of the study period.
https://www.researchsquare.com/article/rs-6347224/v1

Villarroel claims:

External Quote:

Transients were 45% more likely to be observed on dates that were within a nuclear test window than on dates not in a nuclear test window.
If I'm reading the chart from the paper correctly, it seems there were 54 incidents of nuclear testing when a transient was observed in a 3 day window (test day +/- 1 day):

1757801362695.png


But after March of 1956, and the possible changing of the problematic red emulsion, an additional 38 nuclear tests resulted in 0 transients observed.

We know Villarroel and others are pushing an alien hypothesis to account for the transients. Correlating transients with nuclear testing is just using an old UFO trope that aliens are concerned, for positive or nefarious reasons, that humans have gone nuclear, thus they are watching us closely. But if that's the case, did the aliens learn all they needed from the nuclear testing that just happened to have occurred prior to the replacement of the red emulsion, and then didn't bother to check up on 38 more tests? Or are the correlated transients just cherry-picked flaws, including ones from the red emulsion?

EDIT: It appears Villarroel, or possibly others before her, were using ONLY red emulsion plates to identify transients (bold by me):

External Quote:

In brief, transients were defined as distinct star-like point sources present in POSS-I E Red images that were absent both in images taken immediately prior to the POSS-I Red image and in all subsequent images.
 
Last edited:
Help a dullard out here. I've always confessed that I got a useless BA in Communications from a State University, ended up in construction and have a troubled relationship with math, including statistics, but the little chart in my previous post has me scratching my head. Recall:

1757810821772.png


If I'm reading this right, they had a sample size of 2371 (2116 + 255) somethings when adding the numbers in the first line. I assume these are the number of days during which nuclear testing might have taken place. Of those 2371 days, Villarroel's group identified 255 days where a transient was observed, right?

The next line is for days when nuclear tests were being conducted. This is a bit confusing as these are designated as "test windows". I'm assuming this means the 3 total days associated with an actual test (Testing window=Test date +/- 1 day) resulting in 347 "test window" days (293+54). To get the actual number of tests conducted we would need to divide by 3, right? So, around 115-116 actual nuclear tests during the period that was looked at.
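One wrinkle with dividing by 3 (my own illustration, on made-up dates): tests on consecutive days share window days, so the window-day count only gives a lower bound on the number of tests.

```python
from datetime import date, timedelta

# Hypothetical test dates -- two on consecutive days, one isolated:
tests = [date(1953, 3, 17), date(1953, 3, 18), date(1953, 4, 6)]

# Every date within +/- 1 day of any test counts as a "window day":
window_days = set()
for t in tests:
    for offset in (-1, 0, 1):
        window_days.add(t + timedelta(days=offset))

# Overlapping windows: 3 tests produce only 7 window days, not 9.
print(len(tests), len(window_days))
```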

What we don't know, or I haven't found yet, is on which of the 3 test window days were the 54 observed transients found. But just going with what they offer, it seems like sample sizes, 2371 vs 347 creates a bit of an apples to oranges situation to compare the percentages. Maybe? The methods section gives more details for the more mathematically gifted among us (bold by me):

External Quote:

All analyses were carried out using the SPSS for Windows Version 29 statistical package (IBM Corp., Armonk, NY). For testing associations between dichotomous variables [Nuclear Testing Window (Yes/No) versus Transient Observed (Yes/No)], chi-square tests were used. To aid in interpretation of the magnitude of this association between nuclear testing and transients, we adopted a relative risk approach like that commonly used in medical research. That is, we calculated the likelihood of a transient being observed (the "outcome") based on whether its date was within a nuclear weapons testing window (the "exposure"). This relative risk ratio was calculated using an online calculator: https://www.medcalc.org/calc/relative_risk.php.
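For what it's worth, the reported 45% figure checks out if the chart's first row is read as the "no test window" days rather than the total; 2,371 + 347 = 2,718 days then matches the paper's n. A sketch using my reading of the chart (a hypothetical reconstruction, not the authors' actual dataset):

```python
# Counts as I read the chart above (hypothetical reconstruction):
#                      (no transient, transient)
no_window = (2116, 255)   # days outside any nuclear-test window
window = (293, 54)        # days inside a test window (test date +/- 1 day)

def risk(no, yes):
    """Fraction of days in this group with a transient observed."""
    return yes / (no + yes)

relative_risk = risk(*window) / risk(*no_window)
print(f"RR = {relative_risk:.2f}")  # ~1.45, i.e. "45% more likely"
```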
 
If you track their research back, the initial suggestion for looking at geostationary locations was from someone's thinkpiece on looking for technosignatures in distant star systems. That author theorized that a) a former alien civilization might have had structures in orbit around their planet, b) debris that was in geostationary orbit could potentially remain in stable orbits for thousands or hundreds of thousands of years and c) that belt of debris -- belt, not individual objects -- might have a visual signature.

There's no particular rationale provided for looking for alien objects in geostationary orbits around earth other than that those are the only orbits where you might still find very old alien structures, since things placed into lower orbits would have fallen to the surface.

Which goes against their idea that objects that were in orbit in the 1950s are no longer discoverable; if they were reflective ancient debris 70 years ago because some self-repair mechanism kept them shiny and metallic over the millennia -- and you think the sky survey gives you enough information to figure out their orbital positions (and paths, since the key is repeating glints in a line) and whether they were in earth's shadow -- why would they have escaped detection by all of Earth's militaries since then and remain undetectable now?
The story as they seem to be telling is:

Aliens were here and left stuff in geosynch orbit...
When we started launching stuff those alien objects noticed that fact and hid themselves from us?
Because we are not detecting them now??? Or are we???
Or because we can't distinguish them from all of the junk we have launched???
So not just dumb/dead alien stuff in orbit, but still functioning Alien objects aware of earthling activity and reacting to it?

Or is it Aliens are here now/today, were watching us from geosynch before we started launching, but are not doing so now???

They claim to have found objects but don't seem to explain what their function is or was?
 
From the OP:
External Quote:
A dataset comprising daily data (November 19, 1949 – April 28, 1957) regarding identified transients, nuclear testing, and UAP reports was created (n=2,718 days).
Wikipedia says:
Article:
However the Survey was ultimately extended to -30° plate centers, giving irregular coverage to as far south as -34° declination, and utilizing 936 total plate pairs.

The NGS-POSS was published shortly after the Survey was completed as a collection of 1,872 photographic negative prints each measuring 14" x 14".

So there were at best 936 days of photography, not 2718 days? And likely fewer?

External Quote:
The first photographic plate was exposed on November 11, 1949. 99% of the plates were taken by June 20, 1956, but the final 1% was not completed until December 10, 1958.[4]
So if there were no plates from June 20, 1956, to April 28, 1957, why are these days counted?
What we don't know, or I haven't found yet, is on which of the 3 test window days were the 54 observed transients found. But just going with what they offer, it seems like sample sizes, 2371 vs 347 creates a bit of an apples to oranges situation to compare the percentages. Maybe?
These numbers are fine. The sleight-of-hand is elsewhere: the data is not about days at all.

If Wikipedia is correct, they had 936 plates, discarding the blue half of the catalog.

So the proper way to do this analysis is to divide these 936 plates into 2 sets: N=plates taken with a nuclear test in the preceding 24 hours, NN=plates taken with no nuclear test in the preceding 24 hours.

The first sleight of hand was to extend that interval to 3 days, and to go before and after. This decision is not motivated, so we have to assume it's there to make the numbers look good.

Then we'd also divide the plates in 2 other sets: T=plates with linear transients, NT=plates with no linear transients.

From this, the relative risk ratio would be computed per plate.

But they didn't do that, they computed it per day. This decision is not motivated, so we have to assume it's there to make the numbers look good.
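For illustration, the per-plate version proposed here would look something like this, on made-up plate and test dates (the real lists aren't published with the paper):

```python
from datetime import date

# Hypothetical plate log: (date plate was taken, linear transient found?)
plates = [
    (date(1952, 5, 1), True),
    (date(1952, 5, 2), False),
    (date(1952, 6, 10), False),
    (date(1952, 6, 11), True),
]
tests = [date(1952, 4, 30), date(1952, 6, 12)]  # hypothetical test dates

def near_test(d, window_days=1):
    """Was this plate taken within +/- window_days of any test?"""
    return any(abs((d - t).days) <= window_days for t in tests)

# 2x2 table over plates: (near a test?, transient found?)
table = {(n, t): 0 for n in (True, False) for t in (True, False)}
for d, transient in plates:
    table[(near_test(d), transient)] += 1

print(table)
```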

The way these choices were made completely screws the significance of the result.
Ideally, these choices are made before you start the study, you pre-register it, then do the analysis, you get a result or not, and then you need to publish either way to avoid publication bias.
Any other way to go about this screws with the significance of the result.

As an analogy:
If you aim to roll 2 sixes with a pair of dice, your chance is 1/36 < 5%, so if you did that at random, that'd look significant.
But if you roll the dice, and two fours come up, and then you write a paper that says "the chance for 2 fours is 1/36 < 5%, our result is significant", that'd be deceptive, because you changed your analysis method when you didn't get the result you wanted. To roll any pair, the chance is 1/6 ≈ 17%, and that's not significant at all; and if we consider that, had that not panned out, we could've looked for consecutive numbers and published on that, or on numbers that sum to seven, or any other criterion that gives us a good-looking result, then that's worthless.

This technique is called p-hacking, and we talked about it not too long ago. Whenever you see a study with some weird non-obvious decisions, and they don't tell you what they got when they ran the obvious analysis, p-hacking is likely involved.
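The dice version of this is easy to simulate (my own illustration, nothing from the paper): take a handful of post-hoc "patterns", each individually uncommon, and see how often at least one of them fits a random roll.

```python
import random

random.seed(42)

# Each criterion alone is fairly rare for one roll of two dice:
criteria = {
    "doubles": lambda a, b: a == b,                     # 6/36
    "sums to seven": lambda a, b: a + b == 7,           # 6/36
    "consecutive": lambda a, b: abs(a - b) == 1,        # 10/36
    "both at least 5": lambda a, b: a >= 5 and b >= 5,  # 4/36
}

trials = 10_000
hits = 0
for _ in range(trials):
    a, b = random.randint(1, 6), random.randint(1, 6)
    if any(test(a, b) for test in criteria.values()):
        hits += 1

# Exact union probability is 20/36 ~ 56%: "some pattern fits" is the norm.
print(f"some post-hoc pattern fit {hits / trials:.0%} of rolls")
```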

So: if the observatory took more plates on days adjacent to nuclear tests, then, all else being equal, they would also detect more transients on days with nuclear tests.

Again, as an analogy, if you're trying to prove that the full moon is lucky, and you roll a dice every night, but on nights surrounding a full moon you roll two dice, then you're going to be approximately twice as likely to roll a six on a night with a full moon. But that's not because the full moon is lucky, it's because you rolled more. If you publish a paper on this and don't correct for the number of rolls per night, it's worthless.

And we learned above that this paper counts nights when no photos were taken at all! Absolutely incredible.

If you do your full moon luck analysis per dice roll, that'd be proper: but then the effect vanishes, and it turns out full moons are neither lucky nor unlucky.
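Simulating the full-moon analogy makes the exposure confound concrete (again, my own sketch):

```python
import random

random.seed(1)

nights = 10_000
full_moon_sixes = 0   # nights with two rolls
ordinary_sixes = 0    # nights with one roll

for _ in range(nights):
    # Full-moon nights get two rolls; ordinary nights get one.
    if 6 in (random.randint(1, 6), random.randint(1, 6)):
        full_moon_sixes += 1
    if random.randint(1, 6) == 6:
        ordinary_sixes += 1

# Per night the full moon looks "lucky" (P = 11/36 vs 1/6),
# purely because more dice were rolled; per roll, every die is fair.
print(full_moon_sixes / nights, ordinary_sixes / nights)
```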

What would have happened to this study if they had looked at transients per plate, not per day? Who knows.
 
Last edited:
The first sleight of hand was to extend that interval to 3 days, and to go before and after. This decision is not motivated, so we have to assume it's there to make the numbers look good.

Then we'd also divide the plates in 2 other sets: T=plates with linear transients, NT=plates with no linear transients.

From this, the relative risk ratio would be computed per plate.

But they didn't do that, they computed it per day. This decision is not motivated, so we have to assume it's there to make the numbers look good.

You claim it was a sleight of hand. But you have no evidence for that. Here is what they reported.

External Quote:

Because there was no compelling a priori reason to assume that transients would necessarily occur on the day of nuclear testing rather than the day before or after testing, we created a nuclear testing window variable (coded 1/0 for Yes/No) in this dataset to indicate whether a given date fell within a 3-day window surrounding any nuclear test (test date +/- 1 day). This decision to use a 3-day window as the primary nuclear testing outcome was made while the authors were still blinded to the transient data.

https://assets-eu.researchsquare.co...-f881-4b1e-979c-2563322462a3.pdf?c=1753421532
 
https://www.researchsquare.com/article/rs-6347224/v1 (from the OP)
External Quote:
Transient data were available for the period November 19, 1949 – April 28, 1957, inclusive. Of the 2,718 days in this period, transients were observed on 310 days (11.4%). In the overall sample, the number of transients per date ranged from 0 to 4,528 (across multiple locations on multiple plates), with 5% trimmed mean = 10.09 and median = 0.0. The distribution of number of transients per date was highly right-skewed (skewness = 10.35) and over-dispersed (variance = 28,938.64).
Translation: this is a highly unusual distribution. We could show you a plot of it so you could understand it better, but we don't. Median=0 means the majority of dates have no transients, which is a "well, duh" when most of the dates don't have photographs.

In the same vein:
External Quote:
We also note an intriguing incidental finding regarding possible nuclear testing-transients links. The last date on which a transient was observed within a nuclear testing window in this dataset was March 17, 1956, despite there being an additional 38 above-ground nuclear tests in the subsequent 13 months of the study period.
Knowing they stopped taking photos soon after this date, that's no surprise. @NorCal Dave already commented on March 1956 being likely the time when the survey switched to a better emulsion.

My personal guess is that if they shared their datasets, their claims would collapse. It's public data, there are no privacy concerns or NDAs that could prevent sharing it. I want to see the list of nuclear tests they used, and a list of the plates they analysed, with date and number of transients found for each plate.

That data would allow everyone to sample-check the authors' transient-finding work without having to download and process the full dataset, and it would allow everyone to run their own statistics. That would be science.
 
From the OP:

Wikipedia says:
Article:
However the Survey was ultimately extended to -30° plate centers, giving irregular coverage to as far south as -34° declination, and utilizing 936 total plate pairs.

The NGS-POSS was published shortly after the Survey was completed as a collection of 1,872 photographic negative prints each measuring 14" x 14".

So there were at best 936 days of photography, not 2718 days? And likely fewer?

External Quote:
The first photographic plate was exposed on November 11, 1949. 99% of the plates were taken by June 20, 1956, but the final 1% was not completed until December 10, 1958.[4]
So if there were no plates from June 20, 1956, to April 28, 1957, why are these days counted?

These numbers are fine. The sleight-of-hand is elsewhere: the data is not about days at all.

If Wikipedia is correct, they had 936 plates, discarding the blue half of the catalog.

So the proper way to do this analysis is to divide these 936 plates into 2 sets: N=plates taken with a nuclear test in the preceding 24 hours, NN=plates taken with no nuclear test in the preceding 24 hours.

The first sleight of hand was to extend that interval to 3 days, and to go before and after. This decision is not motivated, so we have to assume it's there to make the numbers look good.

Then we'd also divide the plates in 2 other sets: T=plates with linear transients, NT=plates with no linear transients.

From this, the relative risk ratio would be computed per plate.

But they didn't do that, they computed it per day. This decision is not motivated, so we have to assume it's there to make the numbers look good.

The way these choices were made completely screws the significance of the result.
Ideally, these choices are made before you start the study, you pre-register it, then do the analysis, you get a result or not, and then you need to publish either way to avoid publication bias.
Any other way to go about this screws with the significance of the result.

As an analogy:
If you aim to roll 2 sixes with a pair of dice, your chance is 1/36 < 5%, so if you did that at random, that'd look significant.
But if you roll the dice, and two fours come up, and then you write a paper that says "the chance for 2 fours is 1/36 < 5%, our result is significant", that'd be deceptive, because you changed your analysis method when you didn't get the result you wanted. To roll any pair, the chance is 1/6 ≈ 17%, and that's not significant at all; and if we consider that, had that not panned out, we could've looked for consecutive numbers and published on that, or on numbers that sum to seven, or any other criterion that gives us a good-looking result, then that's worthless.

This technique is called p-hacking, and we talked about it not too long ago. Whenever you see a study with some weird non-obvious decisions, and they don't tell you what they got when they ran the obvious analysis, p-hacking is likely involved.

So: if the observatory took more plates on days adjacent to nuclear tests, then, all else being equal, they would also detect more transients on days with nuclear tests.

Again, as an analogy, if you're trying to prove that the full moon is lucky, and you roll a dice every night, but on nights surrounding a full moon you roll two dice, then you're going to be approximately twice as likely to roll a six on a night with a full moon. But that's not because the full moon is lucky, it's because you rolled more. If you publish a paper on this and don't correct for the number of rolls per night, it's worthless.

And we learned above that this paper counts nights when no photos were taken at all! Absolutely incredible.

If you do your full moon luck analysis per dice roll, that'd be proper: but then the effect vanishes, and it turns out full moons are neither lucky nor unlucky.

What would have happened to this study if they had looked at transients per plate, not per day? Who knows.
Some of my points about this from way, way back:
  • Human activity does not occur on independent timelines; the scheduling of both northern hemisphere nuclear tests and astronomical observation schemes are both driven by organizational schedules and things like weekdays and holidays and winter. (The survey plates are not a continuous sample during the survey period; there are long gaps.)
  • Their windows include three days -- day of, day before, and day after testing based on published after-the-fact calendars of conducted tests. But there were many times tests were not conducted on schedule, or were delayed for weather or other reasons, so why would you expect aliens to manifest in advance of only actual detonations and not scheduled detonations?
  • And of course the Soviet nuclear tests were conducted on the opposite side of the globe from Palomar; geostationary objects visible from Palomar would be on the wrong side of the planet, wouldn't they?
 
My personal guess is that if they shared their datasets, their claims would collapse. It's public data, there are no privacy concerns or NDAs that could prevent sharing it. I want to see the list of nuclear tests they used, and a list of the plates they analysed, with date and number of transients found for each plate.

That data would allow everyone to sample-check the authors' transient-finding work without having to download and process the full dataset, and it would allow everyone to run their own statistics. That would be science.

Yeah, even for a math challenged person like myself, I can still look through a list of test dates and plate dates to see if there are any other issues. As @jdog mentioned above, there are obvious things like holidays and weather delays or other events that MIGHT end up grouping some defective plates and test dates closer together.

Then there is the whole 2718 days that they use. It appears that on most of those days there MAY or MAY NOT have been alien transients, but there were no photographs taken so we don't know. A day on which no photograph is taken is useless here I would think.

One question here though:

External Quote:

The NGS-POSS was published shortly after the Survey was completed as a collection of 1,872 photographic negative prints each measuring 14" x 14".
Are these 1872 negatives the result of the red and blue emulsion negatives being combined into a single photographic negative? If so, it would mean that of the seemingly arbitrary 2718 days, over half resulted in photographs. However, as Villarroel says only the red negatives were used, I'm inclined to think the negatives in question were the red and blue ones that could then be combined into a single photo. So, as you say, there could be no more than 936 or so actual days where a photograph was taken.

As @HoaxEye noted upthread, and as I saw when I went to the paper yesterday, it's been kicked back for revisions. Maybe it'll get cleaned up and published, with all the data.
 
Some of my points about this from way, way back:
Yes. The problem is, that's all speculation. If we had the data I listed, we could prove or disprove these points.

Then there is the whole 2718 days that they use. It appears that on most of those days there MAY or MAY NOT have been alien transients, but there were no photographs taken so we don't know. A day on which no photograph is taken is useless here I would think.
That's actually a good point. Counting a date with no photo as "no transient" is not warranted. Another reason why the date maths is completely bogus.

Are these 1872 negatives the result of the red and blue
I haven't checked the Wikipedia sources, but the article itself says "936 total plate pairs".

I expect the project got telescope time for nights at a time, and would take several plates on each project night, and none on nights when other projects were using the telescope.
 
I expect the project got telescope time for nights at a time, and would take several plates on each project night, and none on nights when other projects were using the telescope.
The camera was its own instrument and I've seen it would shoot from 0 to as many as 20 plates per night, though usually on the lower end, per the plate log at https://authors.library.caltech.edu/records/yq9hz-y6423.

Here's a sample with start and stop times and notes -- so you can see they alternated the emulsions -- 12 minutes for O, 45 minutes for E. Some days, er, nights they didn't shoot at all due to weather -- storms, clouds, fog, high wind, etc. On other pages they list stretches where the instrument was out of order and needed repair.
1757871431578.png

1757871398420.png
 
