Transients in the Palomar Observatory Sky Survey

I did this, and got a small deficit.
what was the deficit you got?
how did you determine the shadow?
Since at n=5399 the expected number of transients in shadow is only ~30 or so, you cannot get a statistically powerful result. And since at least some portion of the transient candidates are expected not to be reflective objects, there is some expected amount of noise, and at n=5399 that noise will be too dominant.
not true

The 106,339 data set contains >90% astronomical objects with a nonrandom distribution. The fact that this nonuniform distribution is compared with a uniform distribution leads to the observed deficit in the Villarroel paper. This is the signal.

Solano (2022) does a very good job in cleaning astronomical objects from this data, so the 5399 data set is pretty much all plate defects aka noise, and that's a lot more uniform.

You have found that if stringent criteria are applied to filtering the data set, noise remains.
 
Ok, maybe not exactly. Not sure why the first paper includes filtering with NEOWISE criteria but the second doesn't, and the second doesn't mention hemispheres, and the numbers are a little different. But regardless, you can just follow the descriptions of how they filter in each paper. I put more stock in the VASCO paper being more precise.
I agree on the lack of neoWISE filtering and the hemisphere cut.
But also note that Solano(2022) refers to "Gaia EDR3 and Pan-STARRS DR2", while Villarroel's Nature/nuclear paper references "PanStarrs DR1 or Gaia DR3".
 
@Mendel : In the meantime, walk me through YOUR replication design then - what are the specific steps to test the shadow deficit claim?
I wrote:
A proper proof would:
1) explicitly list the criteria for "in shadow"
2) compute the coverage of shadow per plate area total [*]
3) compute the coverage of shadow per all astronomical objects
4) compute the coverage of shadow on a random distribution of points across all plates that conforms to the actual plate defect distribution (more near the edges etc.)
and then compare that with what was observed in the experimental data used in the study.
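As a rough sketch of how steps 2-4 would feed that comparison (everything here is a placeholder, not the actual VASCO geometry; the only point is that each reference distribution yields its own expected in-shadow count to set against the observed count):

```python
import numpy as np

def expected_in_shadow(shadow_prob_per_point):
    """Expected in-shadow count: sum over each candidate's probability of lying in the
    shadow footprint. Steps 2-4 differ only in how these probabilities are constructed."""
    return float(np.sum(shadow_prob_per_point))

def deficit_ratio(expected, observed):
    """Expected over observed in-shadow counts, e.g. 4.0 means a 4:1 deficit."""
    return expected / max(observed, 1)

# Placeholders standing in for the real plate and shadow geometry
n_candidates = 106_339
observed_in_shadow = 350                                   # hypothetical observed count
references = {
    "uniform over plate area":          np.full(n_candidates, 0.0115),
    "weighted by astronomical objects": np.full(n_candidates, 0.0120),  # made-up number
    "weighted by plate-defect map":     np.full(n_candidates, 0.0100),  # made-up number
}

for name, probs in references.items():
    exp = expected_in_shadow(probs)
    print(f"{name}: expected {exp:.0f}, observed {observed_in_shadow}, "
          f"deficit {deficit_ratio(exp, observed_in_shadow):.1f}:1")
```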

The main problem is that the transient data set is a superposition of
a) astronomical objects,
b) camera artifacts (e.g. diffraction spikes)
c) plate defects (emulsion, handling)
d) scanner artifacts
e) (maybe) unknown sources (could be none)

They're each filtered out by various steps in Solano (2022), which means it is possible to determine the statistical distribution for these "false positive" data sets by looking at what the filter "threw away".

We can apply the shadow analysis to each of these data sets, to get an expected value that we would see if the "real" data set was mostly false positives. I expect we would see that 4-fold deficit in one of these.

Villarroel only compares against a uniform random distribution. If we see the same anomaly when comparing a false positive set against the uniform random distribution, then that invalidates Villarroel's findings.

Note also that >90% of the Villarroel(2025) data set is considered a false positive by Solano(2022).
 
I agree on the lack of neoWISE filtering and the hemisphere cut.
But also note that Solano(2022) refers to "Gaia EDR3 and Pan-STARRS DR2", while Villarroel's Nature/nuclear paper references "PanStarrs DR1 or Gaia DR3".
I guess my memory that the VASCO server hosted the full set of unfiltered candidates was wrong. Should still be reproducible for qualified academics who request the data. The implications of how they filtered are beyond my pay grade. But my guess is that the reviewers of the paper considered this. I am sure there will be published reproduction and replication attempts but we just have to wait and let science take its course. It is not surprising to me that replicating an astronomy paper is beyond my comfort zone, as I am not an astronomer. If I were to sample random astronomy papers, I would bet that a very small percent would be any easier for me to replicate.

Maybe you're right about them being mostly real astronomical bodies. Maybe there is something we're missing as non-experts. Time will tell.
 

References


Villarroel 2025a
Aligned, Multiple-transient Events in the First Palomar Sky Survey
Beatriz Villarroel et al. 2025, PASP, 137, 104504. DOI: 10.1088/1538-3873/ae0afe
https://iopscience.iop.org/article/10.1088/1538-3873/ae0afe was published October 17. It's about 83 claimed (5 demonstrated) motion lines and the shadow effect, and based on a 106,339 transient sample.

Villarroel 2025b or Bruehl 2025
Bruehl, S., Villarroel, B. Transients in the Palomar Observatory Sky Survey (POSS-I) may be associated with nuclear testing and reports of unidentified anomalous phenomena. Sci Rep 15, 34125 (2025). https://doi.org/10.1038/s41598-025-21620-3
https://www.nature.com/articles/s41598-025-21620-3 was published October 20. It's about correlating transient observation dates with UFO reports and nuclear tests, and based on a 107,875 transient sample.

Solano 2022
Enrique Solano, B Villarroel, C Rodrigo, Discovering vanishing objects in POSS I red images using the Virtual Observatory, Monthly Notices of the Royal Astronomical Society, Volume 515, Issue 1, September 2022, Pages 1380–1391, https://doi.org/10.1093/mnras/stac1552
https://academic.oup.com/mnras/article/515/1/1380/6607509?login=false
"Working with three large-area sky surveys (POSS I, Gaia EDR3 and Pan-STARRS DR2) and a workflow based on VO archives and services, we have searched for sources identified in POSS I but not seen either in Gaia or Pan-STARRS finding 298 165 sources. After filtering sources found in other archives (mainly in the infrared), asteroids, high proper motion objects with no information on proper motion in Gaia EDR3, known variables and artefacts, we ended up with a list of 5399 sources."
The list of 5399 transients is available at http://svocats.cab.inta-csic.es/vanish/ .
 
If you have any questions about my post #380, please ask.
I posed the same question many posts prior. You're right, this is one of the most salient questions to ask, although the answer might be more nuanced than you think. For example, maybe there is a risk of false cross-matches. Maybe cross-matches with Gaia, Pan-STARRS and NEOWISE are the most reliable, while the remaining cross-matches remain in doubt. Most of those that were ruled out were ruled out based on cross-matches, apparently not with any of these three but with other catalogues in the infrared. I guess those would be sources that were in the optical spectrum in the 50s and now are only in the infrared (sometimes), if they are real matches.

It's probably a bit more complicated, since you'd have to understand the uncertainty in the cross-matching. Why would these sources have cross-matches in some other catalogue in the infrared, but not in Gaia, Pan-STARRS or NEOWISE (which is in the infrared itself)? Does it make sense for there to have been a few hundred thousand optical-to-infrared-only transitions occurring in that time frame?

And the 2022 paper states:
The adopted radius is a good compromise to ensure that not many high proper motion objects were left out while, at the same time, avoiding an unmanageable number of false positives.

https://academic.oup.com/mnras/article/515/1/1380/6607509?login=false

Meaning some of those filtered out were probably not real cross matches.

A lot of time science is messy and some things remain uncertain and mysterious.

In order to get enough statistical power you have to have more than n=5399, given the relatively small proportion of the sky at GEO in Earth's shadow. If a non-random distribution of true-positive cross-matches to astronomical bodies is what is causing the deficit, then maybe that can be tested with evidence. How likely is it? I don't know. But you've got to go further than just asserting it; you have to test your hypothesis.

How could it be that my test with only the 5399 could only show a small, low-significance deficit while the ~100,000 found a high-significance deficit (assuming I didn't make a mistake)? I don't know exactly. I asked ChatGPT this question several months ago and this is all I got, but of course I don't trust ChatGPT enough to consider it settled.

Me: This was my understanding, but I am trying to understand why the 5,399 transients would have a much smaller percent difference than expected than the larger dataset, when the larger dataset contains mostly objects that are likely (not sure how likely) normal astronomical objects given they have crossmatches (unverified) in other catalogues. Unless, most of them are actually transients. But then this calls into question their previous crossmatching algorithm? It just seems surprising.

ChatGPT: Ah, now I see exactly what you're puzzled about — this is a really subtle point about expectations vs. actual observed shadow deficits, and it hinges on the composition of the dataset and the nature of the events being tested. Let's unpack carefully.

1. The shadow test logic

The Earth shadow test works like this:
  1. Assume a fraction of the candidate events are GEO glints.
  2. GEO glints cannot appear in the Earth's shadow (geosynchronous satellites are in sunlight except during eclipses).
  3. If you count events in shadow vs. outside shadow, you expect a deficit of glints in shadow.
So the size of the observed deficit depends on:
  • Number of true GEO-like events in your sample.
  • Total number of events (the larger the denominator, the smaller a small subset effect looks).

2. Dataset composition matters

a) The 5,399 "orphans"

  • These are very clean: no matches anywhere.
  • The expectation might be: if some fraction are GEO glints, their deficit in shadow should show up clearly.
But here's the catch:
  • 5,399 is tiny relative to the full POSS-I catalog.
  • Only a small fraction of these 5,399 are likely actual glints; the rest could be artifacts or rare astrophysical events.
  • With such a small number, Poisson noise dominates, and even if a deficit exists, the percent difference could be modest just by statistics.

b) The 106k "Villarroel filtered" set

  • This includes many objects with crossmatches in other catalogs but not Gaia/Pan-STARRS.
  • Many of these might be ordinary stars, but because the filter excludes the very complete optical surveys, some could be true unexplained events.
  • Even if a smaller fraction of these 106k are actual glints, the large number of events inflates the absolute deficit — and so the percent difference appears larger and statistically stronger.
Think of it like this:
If you have 10 glints out of 100 events, the deficit in shadow might be 2 events → 2%.
If you have 50 glints out of 1,000 events, the deficit is 10 → 1%, but it's more statistically significant.
Because Villarroel uses a broader dataset, the absolute number of glints is larger, so their deficit appears more dramatic, even if the fraction is smaller.

3. Implications for crossmatching

Yes, it seems counterintuitive:
  • The 106k set includes sources with matches in some catalogs (IR, etc.), so some might be ordinary stars.
  • But the filtering for no Gaia/Pan-STARRS counterpart ensures many are still "optically unexplained" — i.e., the crossmatching algorithm is not invalid; it's just less strict than complete orphan filtering.
  • In other words, the 106k set probably contains a mix of ordinary objects and real transients. That mix is enough to show a significant deficit statistically.
This is consistent with:
  1. The 5,399 being very clean but tiny → weak signal.
  2. The 106k being broader → stronger, more statistically robust signal.

4. The "surprise" factor

It only seems surprising if you assume:
  • All crossmatched objects in other catalogs are "ordinary" → then the deficit should be washed out.
But Villarroel's key assumption:
  • Objects with no Gaia/Pan-STARRS counterpart might still include unexplained transients, even if they appear in some other catalogs.
  • So the actual fraction of "GEO-like glints" is higher than you might naively expect.
Essentially, the 106k dataset is less pure but larger, which paradoxically makes the shadow signal stronger.

✅ Summary intuition
Dataset | Purity of unexplained events | Size | Shadow deficit visibility
5,399 orphans | Very high | Small | Weak percent difference, noisy
106,339 filtered | Medium | Large | Stronger percent difference, statistically robust
The counterintuitive part: larger, less pure samples can give stronger statistical signals than tiny, extremely clean samples.

If you want, I can do a quick back-of-the-envelope calculation showing how the expected shadow deficit scales with dataset size and fraction of real glints, which will make this much more quantitative and intuitive. This often clears up why the 5,399 would have a smaller effect.
Do you want me to do that?

...

And then in follow up, ChatGPT claimed only a pretty small percentage of the ~100,000 cross-matches need be false positives and actually glints to explain the results. Again, I don't trust ChatGPT. But these are the questions you need to address.

What percent of the ruled out cross matches need to be false matches and actual glints to explain the results? How many of the transients need to be glints in total to expect these results? What is the expected false-positive rate in the cross matching, in the 2022 paper resulting only from the tests besides checking those 3 catalogues?
 
...

They're each filtered out by various steps in Solano (2022), which means it is possible to determine the statistical distribution for these "false positive" data sets by looking at what the filter "threw away".

We can apply the shadow analysis to each of these data sets, to get an expected value that we would see if the "real" data set was mostly false positives. I expect we would see that 4-fold deficit in one of these.

Villarroel only compares against a uniform random distribution. If we see the same anomaly when comparing a false positive set against the uniform random distribution, then that invalidates Villarroel's findings.

Note also that >90% of the Villarroel(2025) data set is considered a false positive by Solano(2022).
Thanks for laying that out. One question though:

What result would lead you to conclude the shadow deficit is real rather than artifactual?

You've outlined calculations, but not decision criteria. If the deficit persists even when controlling for artifact distributions, does that change your assessment? Or are there other reasons you'd still dismiss it? Without that, this isn't really a replication design...it's a series of calculations with no predetermined interpretation framework.
 
Solano (2022) does a very good job in cleaning astronomical objects from this data, so the 5399 data set is pretty much all plate defects aka noise, and that's a lot more uniform.
Actually, none of the 24 transients given in Table 3 of Villarroel (2025a) are listed among the 5399 transients that Solano (2022) ended up with.
 


Thanks for laying that out. One question though:

What result would lead you to conclude the shadow deficit is real rather than artifactual?

You've outlined calculations, but not decision criteria. If the deficit persists even when controlling for artifact distributions, does that change your assessment? Or are there other reasons you'd still dismiss it? Without that, this isn't really a replication design...it's a series of calculations with no predetermined interpretation framework.
I did not say the shadow deficit isn't real. Villarroel compares her transient dataset with a uniform distribution and finds the deficit. She uses this to characterize her transients as avoiding the shadow. One way to interpret it is that 80% of her dataset is a group that is never in shadow, while 20% is uniformly distributed.

One of my arguments is that the uniform distribution is not a good comparison, and that finding a distribution with a known cause that is similar to Villarroel's distribution suggests we do know the underlying cause.

Ultimately, though, the distribution of Villarroel's data set needs to be linked to a distribution that reflects orbital objects. I have outlined above, in post #373, a simple test of whether the transients avoid only the shadow at geosynchronous orbital radius (42,000 km), or if they avoid lower, larger shadow areas as well. If Villarroel's assumption is correct, then the deficit must start shrinking as the orbit lowers.
it will shrink anyway, because at altitude 0 the shadow covers 100%, since it is night on the ground. But basically, if you had your telescope trained on the Japanese flag, and zoom out, then there's a period of "all red" until you zoom out far enough so that white becomes visible and ultimately overwhelms the red. This transition point should be visible in the data.
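A toy sketch of that transition, modelling the shadow as a cylinder one Earth radius wide and asking what fraction of a hemisphere it covers for each assumed altitude (a simplification on my part; the real umbra tapers and the paper's geometry may differ):

```python
import math

R_EARTH = 6371.0  # km, mean Earth radius

def shadow_fraction(altitude_km):
    """Fraction of a hemisphere covered by Earth's shadow for objects at a given
    altitude, treating the shadow as a cylinder of one Earth radius."""
    r = R_EARTH + altitude_km
    half_angle = math.asin(R_EARTH / r)   # angular radius of the shadow disc
    return 1.0 - math.cos(half_angle)     # spherical-cap solid angle / hemisphere

for h in (0, 2_000, 10_000, 20_000, 35_786):   # 35,786 km = geostationary altitude
    print(f"{h:>6} km: {shadow_fraction(h):.4f}")
# altitude 0 gives 1.0 (the whole night sky is "in shadow"); GEO gives ~0.0115
```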

Her distribution needs to be better characterized.
 
Isn't NEOWISE far more sensitive in the infrared than POSS-I?
Yes. Doesn't matter.

If you have a sample of numbers between 1 and 20, say {2,5,9,13, 14, 19}, and you wanted to eliminate all primes from that sample, it doesn't matter how large your prime catalogue is, as long as it encompasses the sample.

You're arguing that there's something fishy about removing 2,5,13 and 19 because 79, 83 and 89 are also primes.
 
Yes. Doesn't matter.

If you have a sample of numbers between 1 and 20, say {2,5,9,13, 14, 19}, and you wanted to eliminate all primes from that sample, it doesn't matter how large your prime catalogue is, as long as it encompasses the sample.

You're arguing that there's something fishy about removing 2,5,13 and 19 because 79, 83 and 89 are also primes.

No. NEOWISE took a photo of the object and it wasn't detected. This is like doing a prime number test, and you find a factor. Except in this case, stars can change. It implies that, if it is an astronomical object, then it is a strange one that was visible in the optical spectrum at one time, then at another time not in the optical but in the infrared, and then at another time was not detectable in either the optical or infrared, or perhaps there is some other unknown cause. I don't know how strange that is exactly. But that is really beside the point. A portion of the cross matches were expected to be false positives. To wrap our heads around the implications and sufficient conditions to explain the statistics we need to do the math that tells us what the effects are with respect to the false-positive rate.
 
ChatGPT is terrible at mathematical reasoning, and this is well known.
Think of it like this:
If you have 10 glints out of 100 events, the deficit in shadow might be 2 events → 2%.
If you have 50 glints out of 1,000 events, the deficit is 10 → 1%, but it's more statistically significant.
Because Villarroel uses a broader dataset, the absolute number of glints is larger, so their deficit appears more dramatic, even if the fraction is smaller.
Do you see the contradiction? In the example, the larger number reduces the deficit from 2% to 1%, but ChatGPT claims that it "appears more dramatic".

Instead, assume 75,000 glints out of 100,000 events. With 3% shadow, we would expect 3000 transients, but with only 25000 non-glints, we see 750 transients. 3000:750 is a 4:1 deficit.
If we have 10000 events and a 2:1 deficit, we need 5000 glints to get 300 expected, 150 actual.

This means we removed 70000 glints and 20000 non-glints when we cropped the data set.
90000 has 2700 expected, 600 actual, for a 4.5:1 deficit.
As you see, to lower the deficit, the data set we remove needs to have more deficit than what we had originally.
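The same arithmetic as a small sketch (3% shadow coverage, glints assumed never to land in the shadow):

```python
def shadow_deficit(n_events, n_glints, shadow_frac=0.03):
    """Expected vs. observed in-shadow counts when glints never appear in shadow."""
    expected = n_events * shadow_frac                 # if every event were an ordinary object
    observed = (n_events - n_glints) * shadow_frac    # only the non-glints can land in shadow
    return expected, observed, expected / observed

print(shadow_deficit(100_000, 75_000))   # (3000.0, 750.0, 4.0)  -> the 4:1 case
print(shadow_deficit(10_000, 5_000))     # (300.0, 150.0, 2.0)   -> the 2:1 case
print(shadow_deficit(90_000, 70_000))    # (2700.0, 600.0, 4.5)  -> the removed subset
```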

If you want, I can do a quick back-of-the-envelope calculation showing how the expected shadow deficit scales with dataset size and fraction of real glints, which will make this much more quantitative and intuitive. This often clears up why the 5,399 would have a smaller effect.
Do you want me to do that?
Please do!
 
I did not say the shadow deficit isn't real. Villarroel compares her transient dataset with a uniform distribution and finds the deficit. She uses this to characterize her transients as avoiding the shadow. One way to interpret it is that 80% of her dataset is a group that is never in shadow, while 20% is uniformly distributed.

They claim the shadow behaves approximately randomly in these regions. I don't know enough about distributions of astronomical bodies relative to the Earth to understand the implications. Just noting this.

For these reasons, we focus our main analysis on altitudes where less than 5% of the field is typically shadowed—regions where the shadow behaves approximately randomly, and where reflective glints, if present, would be both detectable and physically plausible.

https://iopscience.iop.org/article/10.1088/1538-3873/ae0afe
 
No. NEOWISE took a photo of the object and it wasn't detected.
It's a cross-match, both detect it.
If an object is in POSS-I, and neoWISE also took a photo of it, then it's probably an astronomical object that is visible in the near infrared.
A portion of the cross matches were expected to be false positives.
But since the stars in the catalogue don't cover the sky completely, the proportion of such false negatives must be low. If the stars only covered 20% of the sky, then 80% of the legitimate transients must remain untouched (numbers are made up).
To wrap our heads around the implications and sufficient conditions to explain the statistics we need to do the math that tells us what the effects are with respect to the false-positive rate.
Yes. I'm explaining that.
 
ChatGPT is terrible at mathematical reasoning, and this is well known.

Do you see the contradiction? In his example, the larger number reduces the deficit from 2% to 1%, but ChatGPT claims that it "appears more dramatic".

Instead, assume 75,000 glints out of 100,000 events. With 3% shadow, we would expect 3000 transients, but with only 25000 non-glints, we see 750 transients. 3000:750 is a 4:1 deficit.
If we have 10000 events and a 2:1 deficit, we need 5000 glints to get 300 expected, 150 actual.

This means we removed 70000 glints and 20000 non-glints when we cropped the data set.
90000 has 2700 expected, 600 actual, for a 4.5:1 deficit.
As you see, to lower the deficit, the data set we remove needs to have more deficit than what we had originally.


Please do!

It's about signal-to-noise ratio and statistical significance, or so ChatGPT claimed (not saying it is correct). But still, we need to do the math ourselves.

 
It's a cross-match, both detect it.
If an object is in POSS-I, and neoWISE also took a photo of it, then it's probably an astronomical object that is visible in the near infrared.

But since the stars in the catalogue don't cover the sky completely, the proportion of such false negatives must be low. If the stars only covered 20% of the sky, then 80% of the legitimate transients must remain untouched (numbers are made up).

Yes. I'm explaining that.
The 100K were detected in POSS-I, but not NEOWISE (and the other two aforementioned ones within 5"). Most of those ruled out in 2022 were found in some other infrared catalogue (but not NEOWISE or optical catalogues). Hence they were detectable in the visible spectrum, then infrared only, then neither. Probably you could form some prior probability for these events, and factor in the uncertainty in the cross-matching algorithm, and get some rough idea how many cross-matches were likely false positives. And then you can try to do what ChatGPT did but make sure it is all correct. That would be a good start.
 
They claim the shadow behaves approximately randomly in these regions.
yeah, but they're wrong

Earth's shadow doesn't go above or below 23.5° from the ecliptic

and note that Villarroel does not limit her search to glints in geostationary positions. Four of her alignments are at >23.5° declination.
 
The 100K were detected in POSS-I, but not NEOWISE (and the other two aforementioned ones within 5"). Since most of those ruled out were found in infrared catalogues, the bulk of the questionable candidates were detected in POSS-I, and some other infrared catalogue but not optical, but then not seen in NEOWISE.
so your argument is that these other catalogs are wrong?
External Quote:
These searches significantly reduced the number of candidates (from 298 165 to 9 395). A significant number (∼59 per cent) of the identified sources were visible in infrared catalogues (Neowise, CatWISE2020, unWISE, and the infrared catalogues included in VOSA) but not in the optical (KIDS, Skymapper, and the optical catalogues included in VOSA) or the ultraviolet (GALEX). The sources detected in the infrared but not in the optical/ultraviolet are available from the online archive (see Appendix).
vanishing objects seen in NEOWISE: 171 753 sources
vanishing objects seen in other infrared archives: 410 sources

The point is that I don't think these IR matches represent astronomical objects that were once visible, and now no longer are. I think that these objects have always been visible in the near infrared, and that's why POSS-I red photographed them.

As you point out, the whole argument does not matter, because the neoWISE matches are not included in the transient data set used in the paper.
 
so your argument is that these other catalogs are wrong?

They said in the original paper that they set the radius high to ensure they don't miss high motion stars, but not too high that they get an unmanageable amount of false positives. So they are saying that 5,399 is from a strict criteria and some "manageable" amount of the candidates ruled out were false positives (could still be glints). I would guess that the three catalogues they cross-matched with for the filtering criteria for the 100k are the most comprehensive and accurate, implying those left were the least likely to have real cross-matches in other catalogues.

Ultimately, what has to be done is try to estimate the expected number of false cross-matches, and then see what effect the false cross-match rate has on the SNR for different hypothetical glint event percentages. Or something of that sort. That is what will allow you to determine if the aforementioned issue is actually a problem or not.

To recap, the argument that it might not be a problem is that the signal grows faster than the noise as a function of N. Thus, the noise introduced as you increase the true-positive cross-matches (and N along with them) is competing against the improved signal-to-noise ratio you get by increasing N.
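One way to see that scaling, as a sketch under the simplest assumptions (glints never in shadow, Poisson noise on the expected in-shadow count): the deficit is g·N·p and the noise is sqrt(N·p), so the SNR goes as g·sqrt(N·p), and the glint fraction g and the sample size N trade off against each other.

```python
import math

def shadow_snr(n, glint_fraction, shadow_frac=0.0115):
    """SNR of the shadow deficit: (g * n * p) / sqrt(n * p) = g * sqrt(n * p)."""
    return glint_fraction * math.sqrt(n * shadow_frac)

for n in (5_399, 50_000, 106_339):
    for g in (0.1, 0.3, 0.7):
        print(f"n={n:>7}  glint fraction={g:.1f}  SNR={shadow_snr(n, g):.1f}")
```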
 
It's about signal-to-noise ratio and statistical significance, or so ChatGPT claimed (not saying it is correct). But still, we need to do the math ourselves.

106339×0.0115=1223 events expected
N=349 events observed => 874 events missing
D=106339×g×0.0115=874 <=> g=0.715
sigma=sqrt(106339×0.0115)=35
SNR=D/sigma=874/35=25

D=5399×0.715×0.0115=44.4
sigma=sqrt(5399×0.0115)=7.9
SNR=5.6

that's plenty of signal
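A minimal Python sketch reproducing that arithmetic (0.0115 and the 349 in-shadow events are the figures quoted above; the 0.715 glint fraction follows from them):

```python
import math

def deficit_and_snr(n, shadow_frac=0.0115, observed_in_shadow=None, glint_fraction=None):
    """Shadow deficit and signal-to-noise, with Poisson noise on the expected count."""
    expected = n * shadow_frac
    if observed_in_shadow is not None:
        deficit = expected - observed_in_shadow
        glint_fraction = deficit / expected
    else:
        deficit = n * glint_fraction * shadow_frac
    return glint_fraction, deficit, deficit / math.sqrt(expected)

# Full 106,339-event sample with 349 events observed in shadow
g, d, snr = deficit_and_snr(106_339, observed_in_shadow=349)
print(f"g={g:.3f}  deficit={d:.0f}  SNR={snr:.0f}")    # g ~ 0.715, deficit ~ 874, SNR ~ 25

# Solano's 5,399-event sample, assuming the same glint fraction
_, d, snr = deficit_and_snr(5_399, glint_fraction=g)
print(f"deficit={d:.1f}  SNR={snr:.1f}")               # deficit ~ 44, SNR ~ 5.6
```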
 
I would guess that the three catalogues they cross-matched with for the filtering criteria for the 100k are the most comprehensive and accurate
I would guess that they were easiest to access.
Solano thought these catalogs were worth including; Villarroel doesn't, but doesn't tell us why not, which means her method is not transparent.
Ultimately, what has to be done is try to estimate the expected number of false cross-matches, and then see what effect the false cross-match rate has on the SNR for different hypothetical glint event percentages. Or something of that sort.
Easier: figure out the chance that a random point on a plate is matched by a catalogue. That gives us an upper bound on the number of false matches.
Solano eliminated 96.8% of the transient candidates in the analysis stage by matching.
Excluding neoWISE, that's (288770-171753)/(298165-171753)=92.6%.
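A back-of-the-envelope sketch of that upper bound, with a made-up catalogue density (the 5-arcsecond match radius is the one used in Solano 2022; the density varies strongly with galactic latitude and would have to be measured on the actual plates):

```python
import math

def chance_match_probability(sources_per_sq_deg, match_radius_arcsec):
    """Probability that a random plate position lies within the match radius of at least
    one catalogue source, assuming sources are Poisson-distributed on the sky."""
    sources_per_sq_arcsec = sources_per_sq_deg / 3600.0**2
    lam = sources_per_sq_arcsec * math.pi * match_radius_arcsec**2
    return 1.0 - math.exp(-lam)

density = 50_000    # sources per square degree -- placeholder, not a measured value
p = chance_match_probability(density, 5.0)
print(f"chance-match probability per candidate: {p:.3f}")
print(f"upper bound on false matches among 298,165 candidates: {298_165 * p:,.0f}")
```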
 
106339×0.0115=1223 events expected
N=349 events observed => 874 events missing
D=106339×g×0.0115=874 <=> g=0.715
sigma=sqrt(106339×0.0115)=35
SNR=D/sigma=874/35=25

D=5399×0.715×0.0115=44.4
sigma=sqrt(5399×0.0115)=7.9
SNR=5.6

that's plenty of signal
I'm beginning to wonder if 0.0115 is even a valid number to use - change that, and everything changes. It's a physical constant: the ratio of the solid angle of the Earth's shadow at GSO radius to that of a hemisphere. But why is the GSO radius an *input*? It's not a nothing-up-my-sleeve number at all. I hope this doesn't boil down to something like "If we assume the things we're interested in are at GSO, then we've found them at 21.9 sigma", because that would be a vacuously true presentation of a false conclusion.
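For what it's worth, 0.0115 does fall out of a simple geometric model: treat the shadow at GEO distance as a disc of one Earth radius and take the spherical cap it subtends as a fraction of the hemisphere. A quick check (my assumption about the derivation, not necessarily the paper's):

```python
import math

R_EARTH = 6371.0   # km
R_GEO = 42164.0    # km, geostationary orbital radius (altitude ~35,786 km)

half_angle = math.asin(R_EARTH / R_GEO)   # angular radius of the shadow at GEO distance
fraction = 1.0 - math.cos(half_angle)     # cap solid angle / hemisphere solid angle
print(f"shadow half-angle {math.degrees(half_angle):.1f} deg, "
      f"fraction of hemisphere {fraction:.4f}")   # ~0.0115
```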
 
The "A Glint in the Eye" Paper explains why geosynchronous orbits are of interest to the study..

https://arxiv.org/abs/2110.15217
A space-faring civilisation in the distant past (> 100,000 years ago) may have sent probes to explore the Earth and under these circumstances some of these probes would have remained in high-altitude orbits around Earth. Of particular interest are the geosynchronous Earth orbits (GEOs) currently populated by communications satellites. GEO satellites nearly always remain over same region on Earth. It is tenable that the GEOs might also be used by another civilisation to study the Earth. A probe or spacecraft no longer in use, could in principle survive for billions of years in GEO, before impact with meteorites and collisions disintegrate eventual probes into very small debris pieces of only a few centimetres size. It has been shown that radiation pressure [15] does not remove debris at GEO orbits, at least not on Myr timescales.

... and notes that they could potentially be seen....
Satellites that are uniformly illuminated at low- or medium altitude orbits leave clear streaks in the long time exposures from old photographic plates as they move at speeds projected as hundreds of arcseconds per second. At higher or GEO altitudes the presence of satellites or space debris can be detected by fast, transient glints caused by surface reflection of the Sun.

However this is quite an assumption.... :eek:
Assuming that another civilisation has left NTAs in geosynchronous orbits, it is useful to know for how long could reflective pieces of e.g. metal or glass (or other reflective materials) remain in orbit? The determining factors are (1) gravity: debris at too low altitude, will fall back to the Earth within a few years. (2) collisions with natural objects will break up the objects into many smaller pieces, and also eject some out of orbit, and (3) radiation pressure from the Sun. Wright (2017) [40] argues that these natural factors will result in the artefacts surviving for less than a few Gyr in orbit. Due to gravitational perturbations their orbits will also drift over long time scales.

So their hypothesis seems to be that alien space probes that were in geostationary orbits for millions of years have disintegrated into tiny bits, and these fragments are the ones producing the glints. If the objects were LEO or MEO they would produce streaks of light, but as the plates show point sources they are more likely to be in GEO.

And from the Discussion section they conclude..
A direct signature of solar reflections from artificial, reflective materials in geosynchronous orbits would be observation of multiple glints along a line in photographic plate images. Finding a single such case clearly merits careful on-site searches, necessary to obtain direct evidence. In view of the very rapid increase in satellite launches and human space debris in place at the GEOs, the time window is extremely short to assemble new datasets. Therefore we encourage SETI researchers to perform searches for glints in photographic plate material and to assist in VASCO's searches for these possible intriguing technosignatures.

... which leads on to the paper that then looks for Transients in the Palomar Observatory Sky Survey - https://www.nature.com/articles/s41598-025-21620-3

I'm still at a loss as to how these pieces of debris in GEO could be seen by people on the Earth's surface and have a statistical influence on UAP reports in the UFOCAT database. Or how they could be linked to Nuclear Testing.
 
Or how they could be linked to Nuclear Testing.
Indeed. The correlation with nuclear tests (*) rather kills the idea that the glints came from satellites (or debris) that had been in orbit for millions of years (or even years).

I interpreted the nuclear test correlation in a different way: it implies the satellites were sent to geosynchronous orbit a little time (~one day) before each test, thus becoming detectable by the POSS-1 survey, and then recovered back some time (~one day) after the test, thus disappearing. I can't see any other way to explain the correlation (given alien satellites). By trying to strengthen their 'aliens' case with the trope of nuclear tests they undermined it instead: they now need bizarre aliens, which are even more improbable than aliens tout court.

(*) granting it's real, which I'm little confident of.
 
Question for somebody who knows more than me about orbits...

So their hypothesis seems to be that alien space probes that were in geostationary orbits for millions of years have disintegrated into tiny bits, and these fragments are the ones producing the glints.

Is this even possible? Disintegrating into tiny bits would surely mean that any ability to maintain the orbit would be gone. While I'd expect that the problems atmospheric drag would cause in LEO satellites to be significantly less, would it be functionally zero over millions of years? And over millions of years, would other effects tend to push/pull stuff away from such an orbit?

(The moon is an obvious example of something that is still in orbit after millions of years -- and then some -- but it is slowly moving away from us, albeit too slowly to make a lot of difference over a million years. But given that the moon is much larger and much further away, and given my lack of more than a very surface knowledge of orbits and how they work, I'm unsure how relevant the moon is as an example...)
 
Random Thoughts

The US has maintained constellations of satellites for monitoring Soviet/Russian territory for missile launches and nuclear tests since the 1960s. The Soviets/Russians eventually followed suit. Some are in geosynchronous orbits. Do we have examples of reflections from known geostationary satellites to compare against the author's 'glints'? Do the patterns of reflection agree with examples of those known to be real objects in geostationary orbits?

Defense Support Program (DSP)

External Quote:
DSP satellites, which are operated by the 460th Space Wing, detect missile or spacecraft launches and nuclear explosions using sensors that detect the infrared emissions from these intense sources of heat. During Desert Storm, for example, DSP was able to detect the launches of Iraqi Scud missiles and provide timely warnings to civilians and military forces in Israel and Saudi Arabia.[1]
 
Disintegrating into tiny bits would surely mean that any ability to maintain the orbit would be gone.
Orbits don't need to be maintained, because the objects in orbit are in freefall. It doesn't matter whether you throw a brick off the roof, or a handful of gravel.

Geostationary orbits don't stay stationary, they drift a little and become geosynchronous if not maintained.

But for an object to leave orbit, it must either acquire energy (e.g. solar radiation) or lose energy (e.g. tides).

Which is why it's very weird that they're no longer there.
Villarroel writes that there are on average 169 of these per plate, i.e. in a 6° by 6° area of sky we should be able to see about 3 flashes per minute if these 200,000 objects were still there.
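A trivial consistency check on that rate, assuming roughly 50-minute POSS-I red plate exposures (my assumption, not a figure from the thread):

```python
transients_per_plate = 169    # average per plate, as quoted above
exposure_minutes = 50         # assumed POSS-I red plate exposure length
print(transients_per_plate / exposure_minutes)   # ~3.4 glints per minute over a 6 deg x 6 deg field
```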
 
Do we have examples of reflections from known geostationary satellites to compare against the author's 'glints'? Do the patterns of reflection agree with examples of those known to be real objects in geostationary orbits?
If they don't match up, isn't this just proof that alien satellites don't behave like human satellites and it was foolish of us to assume that they did? Is it possible to falsify this study?
 
My analogy to claiming a correlation between the transients and nuclear tests – given the available data – is this:
  • You are a contractor in the Los Angeles area and some days you log how long your evening commute home took. You have a vague feeling that traffic was bad and after a year, you look back and see some days took a lot longer than others.
  • A friend suggests that the Dodgers are so popular that fans clog the freeways to get to their games.
  • You check the dates of your commutes against the schedule of Los Angeles Dodgers games and find that, yes, some proportion of your long drives fell on game days, so you infer that Dodger games cause L.A.'s traffic issues.
Unfortunately there are some confounding factors:
  • Only half of the 162 regular season MLB games are even at home.
  • Games start at different times of day during the week, so even the correct days might only be loosely related to your commute time.
  • If you went days or weeks without driving through the city, or drove through different parts of the city, or just didn't log your commute some days, you don't have a very representative sample to work with over time.
  • You can't just look at Dodger games. There are other events that might have been going on, like Anaheim Angels games or other major league sports games.
  • And, of course, not every slowdown is caused by an event. There are a lot of traffic accidents on L.A. freeways. It's the background noise of metropolitan commuting.
 
I interpreted the nuclear test correlation in a different way: it implies the satellites were sent to geosynchronous orbit a little time (~one day) before each test, thus becoming detectable by the POSS-1 survey, and then recovered back some time (~one day) after the test, thus disappearing. I can't see any other way to explain the correlation (given alien satellites).

I thought that too.
Villarroel has kind of opined that the causes of the transients are still there (in GEO), so presumably they have been in continuous GEO for many years (including prior to their being photographed in the 1950s, as per Villarroel et al.'s "Glint in the Eye" paper referenced by @flarkey, here).

From the NewsNation interview, 21 October 2025 (viewable in post #368), approx. 7 mins 29 secs into the video

External Quote:

ROSS COULTHART:
Now, we know that they were there in pre-Sputnik 1950s, what do we know about whether they're still there, can we rule out the possibility or the probability that they are still there in geosynchronous orbit?

BEATRIZ VILLARROEL:
I have no idea because if these are what I think they are, I mean if these are artificial objects which the signatures are pointing towards, I have no idea what they could have done or if they are there still; I would assume they are there still.

If there are tens or hundreds of thousands of ETI-origin satellites in GEO, we might expect confirmation that at least one has been found in the near future. Of course, failure to do this wouldn't invalidate Villarroel's papers; having waited a few thousand years and registering some nuclear tests, the ETI-origin probes might have decided there was nothing of further interest, and moved on. For, uh, reasons.

As well as any astronomical interest in Villarroel's claims, the US and its allies are developing radar systems for monitoring objects in GEO, the Deep Space Advanced Radar Capability (DARC, https://en.wikipedia.org/wiki/Deep_Space_Advanced_Radar_Capability).
External Quote:
According to the company [defence contractor Northrop Grumman], this new radar system will enable Space Force to track "very small objects in geosynchronous orbit (GEO) to protect critical U.S. and allied satellite services."
"US Space Force's new deep space radar tracks multiple satellites 22,000 miles away in key test," Brett Tingley, Space.com 14 August 2025.
External Quote:
The use of three sites for DARC (in the UK, US and Australia) will enable 360 degree coverage of the sky at all times of day and under any weather conditions.
Deep Space Advanced Radar Capability (DARC), GOV.UK, 8 August 2024.

In addition, US Space Command recently demonstrated an ability to manoeuvre a satellite in order to image another in GEO,
"The US military just moved a satellite to inspect a UK spacecraft 22,000 miles above Earth", Space.com 22 September 2025
-albeit a satellite whose orbit and exact position at any one time was known.

If Villarroel's ETI hypothesis is correct, there's a reasonable chance it could be confirmed within the next decade. It would be truly momentous, and she will rightly be regarded as an extraordinary pioneer in astronomy.

But- and admitting I have no special knowledge of the subject- I will make a prediction: There will be no confirmation of an identified ETI-origin satellite by this time next year (29 October 2026). Or before six years' time when DARC becomes fully operational. Or in ten years' time, after 4 years of DARC's 360° coverage. I will happily admit I'm utterly wrong and a blinkered killjoy to boot should events prove otherwise.
 
Aliens with a mole implanted in each of the countries with nukes, no less, to get the exact dates of the tests so they can get into position the day before?

No, I think that's unlikely. Even allowing for a physically similar homeworld and convergent evolution, the probabilities of moles in extraterrestrial biospheres must- in the state of our current knowledge- be small.
I understand there were moles in the upper echelons of the UK intelligence services in the 1950s, but they came from good families and went to Cambridge University, so their former (and as it turned out constant) support for hostile totalitarian regimes was understandably overlooked.
I expect no-one thought they had good enough eyesight to read confidential documents, something like that.

There are reasons to believe this photograph isn't an accurate representation of what was actually present at the time; there may have been miniscule flaws in the emulsion.

 
Aliens can do whatever you need them to do.

And conversely, they cannot do whatever you need them to NOT be able to do.

Like observing nuclear tests from airborne or even ground based sensors placed near the test sites. Their whole story is based on the aliens moving objects into an orbit where they can see and be seen but immediately moving them away where they cannot be seen. Why?

This is part of the "standard" ufologist vision of aliens that are more advanced than humans in some ways, but not TOO advanced in others. Like not using fruit-fly sized drones, looking over the shoulders of the technicians assembling the bombs, to see what humans know about nuclear weapons. Or any of the other technologies that we can anticipate coming along in the next decades that would be far more useful than objects in geosynchronous orbit. No, they have to employ technologies 'primative' enough for us to detect.

Ufologists never want to give the aliens credit for technologies so far beyond ours we would never even notice they were here.
 
No, I think that's unlikely. Even allowing for a physically similar homeworld and convergent evolution, the probabilities of moles in extraterrestrial biospheres must- in the state of our current knowledge- be small.
I understand there were moles in the upper echelons of the UK intelligence services in the 1950s, but they came from good families and went to Cambridge University, so their former (and as it turned out constant) support for hostile totalitarian regimes was understandably overlooked.
I expect no-one thought they had good enough eyesight to read confidential documents, something like that.

There are reasons to believe this photograph isn't an accurate representation of what was actually present at the time; there may have been miniscule flaws in the emulsion.

You have undermined the seriousness of this thread. You will now be known as The Underminer.
 
Aliens with a mole implanted in each of the countries with nukes, no less, to get the exact dates of the tests so they can get into position the day before?

But sometimes a day late. Remember an alien techno-signature can appear anywhere in the sky on a plate the day AFTER a nuclear test anywhere on the planet and still count as a hit.

I interpreted the nuclear test correlation in a different way: it implies the satellites were sent to geosynchronous orbit a little time (~one day) before each test, thus becoming detectable by the POSS-1 survey, and then recovered back some time (~one day) after the test, thus disappearing. I can't see any other way to explain the correlation (given alien satellites). By trying to strengthen their 'aliens' case with the trope of nuclear tests they undermined it instead: they now need bizarre aliens, which are even more improbable than aliens tout court.

Indeed. Having established that there are unexplained transients on the old Palomar red emulsion plates that no longer show up on any modern survey of the sky and then hypothesized that these transients are alien techno-signatures, they back into an old UFO trope to explain the massive amount of possible alien activity at the time.

Depending on which paper we're talking about, the number of transients varies from ~5000 to over ~100,000. Either way, that's a lot of alien probes or flying saucers, sometimes in GSO. What are they all doing and why do they all disappear after 1956? Nuclear testing, obviously. Just like in The Day the Earth Stood Still, the aliens are concerned with humanity becoming nuclear, for reasons. Thus, the alien probes appear within a few days of a nuclear test. Which means they are NOT in continuous GSO, rather they pop in to GSO around the time of a nuke test, then head back to wherever they came from. I guess. Or maybe not all the transients that happen around a nuke test are in GSO, as the authors make no effort to list any of the transients and their location.

There were approximately 121 nuclear tests between 1950 and 1956, the period studied by the authors. If I read it right, they then tried to sync up ~100,000 transients with these 121 tests in 3 day windows and found something they claim is significant. As @jdog noted up thread and I did as well, there were a number of coincidences that occurred that helped group some tests with some photographic plate production. Tuesdays were busy for both groups. October of 1954 was very busy for the Palomar group and the Soviet nuclear testing group. I doubt they were coordinating with each other.

Then after about April of 1956, despite ongoing nuclear tests, the aliens and thus the transients, buggered off for unknown reasons. Nevermind that's also when the red emulsion was changed.

It makes no sense whatsoever. They are just taking something of potential importance, the ~5000 transients from the Solano paper, goosing it up to ~100,000 transients and tying it to a well trod UFO trope because that's what plays in the UFO world. Aliens were visiting Earth in the '50s. Humans were testing nuclear weapons in the '50s. Aliens are concerned with human nuclear testing. Aliens sent probes to gather intell on human nuclear testing. After early 1956, the aliens decided they had what they needed and disappeared.
 