Transients in the Palomar Observatory Sky Survey

There are more stars near the ecliptic plane; therefore, more emulsion flaws would be removed from the full data set, making the distribution non-random.
I agree, and that is the point I was trying to make (although probably in a hamfisted and unclear way). The distribution of emulsion flaws was presumably random before the ones closest to the ecliptic were removed.

I note that Villarroel says that only northern hemisphere plates were considered; if so, that removes the big bulge in Sagittarius, which is in the southern celestial hemisphere. I may have to think again.
 
The distribution of emulsion flaws was presumably random before the ones closest to the ecliptic were removed.
It wasn't. Villarroel's own data strongly suggest that the flaws/"transients" are denser near the plate edges, though depending on the algorithm, this may be exacerbated by plate overlap. Remember, the graphs of the various sets show the plate grid visually.

We also have strong suggestions that emulsion processing was improved to reduce the flaws at some point. If that is the case, we can't even say that each plate ought to have approximately the same amount of flaws.

But if we assume a random distribution of emulsion flaws, then, for a valid null hypothesis, the validation process needs to be applied to that random distribution. Since the validation process does not identify astronomical objects one by one, but rather eliminates all signals within 5 arcseconds of known astronomical objects, more flaws will be eliminated from plates that have more astronomical objects on them. Since the asteroids and the Milky Way are not randomly distributed across the sky, this introduces a nonuniformity in the overall distribution, which we have also observed in the graphs.

The hypothesis is that the nonuniform distribution resulting from this processing would also produce the shadow deficit that Villarroel observed. This seems all the more likely because the mathematical properties that would determine the altitude of the orbits have not been demonstrated in the data by Villarroel, and in fact some remarks in the paper led me to conclude they don't exist.
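The null-hypothesis argument above can be sketched with a toy Monte Carlo. This is purely illustrative, with made-up field sizes and star counts, not the actual VASCO pipeline: uniform random "flaws" become non-uniform after the 5-arcsec catalogue cut, because the removal probability scales with local star density.

```python
# Toy Monte Carlo: uniform random "emulsion flaws" in a square patch,
# then remove any flaw within 5 arcsec of a catalogued star.  More
# flaws are removed in a crowded field than in a sparse one, so the
# surviving distribution is no longer uniform across the sky.
import math
import random

EXCLUSION = 5.0   # arcsec, the catalogue-matching radius
PATCH = 600.0     # arcsec, side of the (hypothetical) test patch

def survivors(n_stars, n_flaws, seed=42):
    """Count flaws that survive the 5-arcsec catalogue cut."""
    rng = random.Random(seed)
    stars = [(rng.uniform(0, PATCH), rng.uniform(0, PATCH))
             for _ in range(n_stars)]
    kept = 0
    for _ in range(n_flaws):
        fx, fy = rng.uniform(0, PATCH), rng.uniform(0, PATCH)
        if all(math.hypot(fx - sx, fy - sy) > EXCLUSION
               for sx, sy in stars):
            kept += 1
    return kept

dense = survivors(n_stars=2000, n_flaws=1000)   # crowded field (e.g. Milky Way)
sparse = survivors(n_stars=100, n_flaws=1000)   # sparse field (e.g. galactic pole)
print(dense, sparse)  # fewer flaws survive in the crowded field
```

The star counts and patch size are arbitrary; the point is only that a validation step applied to a uniform random field leaves behind a non-uniform one.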
 
But if we assume a random distribution of emulsion flaws, then, for a valid null hypothesis, the validation process needs to be applied to that random distribution. Since the validation process does not identify astronomical objects one by one, but rather eliminates all signals within 5 arcseconds of known astronomical objects, more flaws will be eliminated from plates that have more astronomical objects on them. Since the asteroids and the Milky Way are not randomly distributed across the sky, this introduces a nonuniformity in the overall distribution, which we have also observed in the graphs.
This is what I'm trying to work out - where and when is this validation process done, and by whom?

Edit: I think maybe here:

https://arxiv.org/pdf/2009.10813 Launching the VASCO Citizen Science Project
 
@MonkeeSage - beat me to it!

Another paper has been released stating that they have found "evidence of transients similar to those previously reported by the VASCO Project for POSS plates" but in other plates.

https://arxiv.org/abs/2603.20407



There only seems to be one photo of a suspected 'transient'....


Their method only appears to find 'transients', that is, objects that are in one exposure but not the next. It doesn't seem to have the 'three transients in a line' criterion that the Villarroel paper had.

So far, I have found 63 candidate transients out of 50 plates analyzed. Many more are still being worked on, and I don't expect to have a comprehensive data set and analysis for a long time still. As for that one pic, it is for illustration purposes only; there would be no point in publishing all 63 similar pictures right now. They are for now accessible only to collaborators. Note that this is not even a full-fledged paper, but a communication (or "letter") that I put in place basically to explain to collaborators what the method I used is.

The method is designed to find "vanishing" fast transients, that is, something that appears in one image and is not present in another image taken minutes later. It won't find "appearing" transients such as a supernova (unless the processing pipeline is run backwards, with a few tweaks).
 
Hi, I'm Ivo Busko, the author of that preprint. I see so many comments in here, that I can't possibly address everyone. Please feel free to respond to this post in case you have comments or questions, I will do my best to answer. And thanks for the interest in the topic!

As background, I can say that my main interest isn't directly with the UAP side of things, but more with the astronomical and instrumental side: the interpretation of astronomical images recorded on photographic plates. My main emphasis is, for now, to find ways to tell apart real "events" from plate emulsion and scanning flaws. (My background is in astrophysics, and I worked for about three decades developing scientific software and systems for the Hubble and James Webb Space Telescopes.)
 
I find this very sloppily worded:
"While the analysis is ongoing, one notable result is that our findings independently confirm that these transients exhibit systematically narrow full width at half maximum (FWHM) compared to stellar point spread functions."

Stars don't have point spread functions. Point spread functions are a property of the optical system in use. They represent the idealised distribution of how a theoretical point source at infinity would spread light across the sensor. Non-point sources have their signal convolved with the PSF, but convolution is linear: the smearing of a sum of parts is the sum of the smearings of the parts. I say idealised, as there can be deviations away from the centre of the frame.


img link: https://upload.wikimedia.org/wikipedia/commons/c/c2/Convolution_Illustrated_eng.png
via: https://en.wikipedia.org/wiki/Point_spread_function

If he's saying "these things don't distort through the optics how stars distort through the optics" then that might point to "these didn't come through the optics", surely?
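The linearity point above can be checked numerically. A minimal 1-D sketch, with a toy symmetric kernel standing in for the PSF and two invented point sources (none of this is real plate data):

```python
# Linearity of PSF smearing: convolving the sum of two sources with the
# kernel equals the sum of the individually convolved sources.
def convolve(signal, kernel):
    """Plain 1-D 'same'-size convolution with zero padding
    (kernel is symmetric, so correlation == convolution here)."""
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(signal):
                acc += signal[idx] * k
        out.append(acc)
    return out

psf = [0.05, 0.25, 0.4, 0.25, 0.05]        # toy PSF, sums to 1
star_a = [0.0] * 20; star_a[5] = 1.0       # point source A
star_b = [0.0] * 20; star_b[12] = 2.0      # point source B
both = [a + b for a, b in zip(star_a, star_b)]

lhs = convolve(both, psf)
rhs = [x + y for x, y in zip(convolve(star_a, psf), convolve(star_b, psf))]
assert all(abs(x - y) < 1e-9 for x, y in zip(lhs, rhs))  # linearity holds
```

Because the kernel sums to one and both sources sit away from the edges, total flux (3.0 here) is also conserved through the smearing.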
You are right that "stars do not have PSFs", in the sense that starlight gets first smeared by the atmosphere, and only then goes through the telescope optics and is affected by the PSF. Because atmospheric smearing is orders of magnitude larger than the optical PSF (Airy disk, ideally), we astronomers usually do not care about these subtleties, and just call "PSF" the recorded light distribution at the sensor. I guess my wording was affected by the fact I worked for the past 30 years only with space telescopes, where the recorded image of a star *is* the actual point spread function of the optics. I will fix the wording in the next installment of the preprint. Thanks!
 
You can't even evaluate FWHM from photographic plates - they're non-linear because they're clipped - you have no idea how high the max was, so have no idea where half is.
We can always select star images that are not saturated. In the plates I got so far, very few stars are saturated, and these are accounted for by the data analysis software.
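A minimal sketch of what selecting unsaturated stars might look like in practice (the profile data and the rejection rule are invented for illustration; this is not the actual analysis software): measure the FWHM by interpolating to the half-maximum crossings, and reject any profile that hits the saturation level, since its true peak, and hence the half level, is unknown.

```python
import math

def fwhm(profile, saturation):
    """FWHM in samples, via linear interpolation at half maximum.
    Returns None for clipped (saturated) profiles."""
    peak = max(profile)
    if peak >= saturation:      # clipped: true maximum unrecoverable
        return None
    half = peak / 2.0
    i_peak = profile.index(peak)

    def crossing(step):
        # walk from the peak until the profile drops below half,
        # then interpolate linearly between the last two samples
        i = i_peak
        while profile[i + step] > half:
            i += step
        frac = (profile[i] - half) / (profile[i] - profile[i + step])
        return i + step * frac

    return crossing(+1) - crossing(-1)

# Unsaturated Gaussian, sigma = 2 px -> theoretical FWHM = 2.3548 * 2 = 4.71 px
prof = [0.8 * math.exp(-(x - 10) ** 2 / 8.0) for x in range(21)]
width = fwhm(prof, saturation=1.0)

# The same star clipped at the detector's full well: rejected
clipped = [min(2.5 * v, 1.0) for v in prof]
width_clipped = fwhm(clipped, saturation=1.0)
print(width, width_clipped)   # ~4.76 px for the clean star; None when clipped
```

The small overestimate (~4.76 vs 4.71 px) comes from linear interpolation on a coarsely sampled Gaussian; real pipelines typically fit a model profile instead.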
 
It strikes me that the problem with this approach of looking for more sky survey-like material shot in the same way with similar technology at the same time is that the analysis is also likely to repeat any flaws in the original papers.

That is, if the process of coating, shooting and developing photographic plates of the night sky in the 1950s -- and then later digitizing them -- had a tendency to produce tiny artifacts (as argued by the astronomers who have criticized the earlier papers), he's only found a method for replicating the error, not new contemporaneous evidence from another source.
There was no other imaging technology at the time. Our challenge is to work with whatever material there was at the time. That's why I love archival astronomy: nothing is easy.
 
They appear to have used digital scans (again), this time of plates from the Hamburger Sternwarte Großer Schmidtspiegel, a 1.2-m Schmidt camera.



One issue with the Palomar scans was that it was impossible to check the original plates for emulsion artefacts because they had all been destroyed. I wonder if the original plates behind these scans are available, or if they have been destroyed too?

This article says 5323 plates still exist....

https://plate-archive.hs.uni-hamburg.de/index.php/en/plate-archive
I understand the Palomar plates aren't available because of policy issues. They are seen as irreplaceable assets. The Applause plates might be available for physical inspection though.
 
See Solano(2022).
https://academic.oup.com/mnras/article/515/1/1380/6607509?login=false

This is explained upthread.
Villarroel references that paper, but it's unclear which of the steps have been applied to her data set, since it doesn't match and she doesn't say. But the star catalog matching is one of the first steps, so I assume it has been done.
Catalog matching was done, and that can be tested. The three MNRAS 2022 datasets are published here: http://svocats.cab.inta-csic.es/vanish/
Select the list of vanishing objects in POSS I red images (5,399 rows) and perform a cross-match against the Gaia and Pan-STARRS catalogs using the VizieR online service: https://vizier.cds.unistra.fr/viz-bin/VizieR
There should be zero matches within 5 arcsec.
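For anyone who would rather script the positional check than use the VizieR web form, here is a minimal sketch of a 5-arcsec cross-match. The coordinates below are made up for illustration; a real test would load the published tables:

```python
# 5-arcsec positional cross-match using the haversine angular separation.
import math

def ang_sep_arcsec(ra1, dec1, ra2, dec2):
    """Angular separation between two (RA, Dec) positions given in
    degrees, returned in arcseconds (haversine formula)."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    h = (math.sin((dec2 - dec1) / 2) ** 2
         + math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(h))) * 3600

def has_match(candidate, catalogue, radius=5.0):
    """True if the candidate lies within `radius` arcsec of any source."""
    ra, dec = candidate
    return any(ang_sep_arcsec(ra, dec, cra, cdec) <= radius
               for cra, cdec in catalogue)

catalogue = [(150.0000, 2.0000), (150.0100, 2.0050)]  # hypothetical sources
vanished = (150.0000, 2.0030)   # ~10.8 arcsec from the nearest source
print(has_match(vanished, catalogue))   # False -> survives the 5-arcsec cut
```

For full-catalogue checks against Gaia or Pan-STARRS, a library such as astropy (SkyCoord.match_to_catalog_sky) would replace the brute-force loop, but the acceptance criterion is the same.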

However, we already know MNRAS 2022 contains "bugs", and so do the datasets. See:
  • Watters et al. (2026) arXiv:2601.21946
  • Villarroel et al. (2026) response arXiv:2602.15171
I tested 7 candidates from the remainder dataset with 5,399 rows using my version of the pipeline. None of them were selected as candidates. Reason: they failed one or more quality gates described in MNRAS 2022 / 2/v / "Other artefacts' removal". I know this wasn't a large-scale test, but if you have seven candidates in the final remainder dataset which fail your own criteria, it raises concerns.
 
Sorry, but if they were detectable on the original plates, they would be detectable by any similar system used since, and by all of the newer and more sensitive systems that have come into use since. Are they still being detected? If not, then they are not there.

So the aliens either decided to go away and observe us via some other methods or they were never there in the first place.

Calling on some future system to find them "real soon now" is much like those promising Disclosure any time now. Might happen but I would not count on it.
As far as I know, they *are* being detected. They are just thrown away by the automatic data processing pipelines, or by the astronomers themselves, as the standard explanation for them is space junk. Modern survey observatories like Rubin and Roman will routinely find hundreds in each exposure. That's why it's so important to restrict analysis to pre-1957 images.
 
This is where a cynical person would point out that professional astronomers looking for ancient reflective objects in geostationary orbit should have already known this detection capability existed by 2002, the date of that paper, and might have to come up with a reason why the purported objects only manifested during the 1950s.
They *are* routinely detected by large survey telescopes, to this day. And automatically thrown out as space junk. That's why it is important (from my viewpoint) to study them. If nothing else, they have to be removed so the actual astronomical objects that DO vary in those time scales can be effectively found.
 
Perhaps an interesting side note:
The Schmidt telescope is called a "camera" because it was specifically designed as an astrophotographic instrument, intended solely to capture images on photographic plates or film rather than for direct visual observation. POSS-I images also originate from a "Schmidt camera".
Source: https://en.wikipedia.org/wiki/Schmidt_camera
Right, in those days no one even used the term "Schmidt telescope". The focal plane is not accessible from the outside. The Hamburg camera is almost an identical copy of the Palomar Oschin Schmidt camera.
 
I tested 7 candidates from the remainder dataset with 5,399 rows using my version of the pipeline. None of them were selected as candidates. Reason: they failed one or more quality gates described in MNRAS 2022 / 2/v / "Other artefacts' removal". I know this wasn't a large-scale test, but if you have seven candidates in the final remainder dataset which fail your own criteria, it raises concerns.
That is unexpected!
Either there's a bug in your pipeline, or in theirs, or the paper is incomplete/junk.

How did you pick those 7 candidates?
Have the star catalogs you are testing against been changed recently?
 