Transients in the Palomar Observatory Sky Survey

There are more stars near the ecliptic plane, so more emulsion flaws would be removed from the full data set, making the distribution non-random.
I agree, and that is the point I was trying to make (although probably in a hamfisted and unclear way). The distribution of emulsion flaws was presumably random before the ones closest to the ecliptic were removed.

I note that Villarroel says that only northern-hemisphere plates were considered; if so, that removes the big bulge in Sagittarius, which is in the southern celestial hemisphere. I may have to think again.
 
The distribution of emulsion flaws was presumably random before the ones closest to the ecliptic were removed.
It wasn't. Villarroel's own data on plate edges strongly suggest that the flaws/"transients" are denser near the plate edges, though depending on the algorithm this may be exacerbated by plate overlap. Remember, the graphs of the various sets visibly show the plate grid.

We also have strong suggestions that emulsion processing was improved to reduce the flaws at some point. If that is the case, we can't even say that each plate ought to have approximately the same amount of flaws.

But if we assume a random distribution of emulsion flaws, then, for a valid null hypothesis, the validation process needs to be applied to that random distribution. Since the validation process does not identify astronomical objects one by one, but rather eliminates all signals within 5 arcseconds of known astronomical objects, more flaws will be eliminated from plates that have more astronomical objects on them. Since asteroids and the Milky Way are not randomly distributed across the sky, this introduces a nonuniformity in the overall distribution, which we have also observed in the graphs.
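This selection effect is easy to sketch in a toy Monte Carlo. All numbers below are illustrative (not taken from the plates or the paper): flaws start out uniform over a 1x1 degree field, "known objects" are concentrated in one half of it as a crude stand-in for the Milky Way / ecliptic density gradient, and the validation step deletes every flaw within 5 arcsec of an object.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy field, 1 deg x 1 deg: emulsion flaws are uniformly random.
n_flaws = 5_000
flaws = rng.uniform(0.0, 1.0, size=(n_flaws, 2))  # (x, y) in degrees

# "Known objects" are concentrated in the lower half of the field
# (a crude stand-in for a crowded region of sky).
n_objects = 20_000
objects = np.column_stack([
    rng.uniform(0.0, 1.0, n_objects),
    np.abs(rng.normal(0.25, 0.10, n_objects)) % 1.0,
])

# Validation step: drop every flaw within 5 arcsec of any known object
# (flat-sky approximation; brute force in chunks to bound memory).
radius_deg = 5.0 / 3600.0
keep = np.ones(n_flaws, dtype=bool)
for start in range(0, n_flaws, 250):
    chunk = flaws[start:start + 250]
    d2 = ((chunk[:, 0, None] - objects[:, 0]) ** 2
          + (chunk[:, 1, None] - objects[:, 1]) ** 2)
    keep[start:start + 250] = (d2 >= radius_deg ** 2).all(axis=1)
survivors = flaws[keep]

# Although the flaws were uniform, the survivors are not: the crowded
# half of the field loses noticeably more of them.
dense_half = np.count_nonzero(survivors[:, 1] < 0.5)
sparse_half = np.count_nonzero(survivors[:, 1] >= 0.5)
print(dense_half, sparse_half)
```

The point is only qualitative: a uniform flaw population plus a nonuniform rejection mask yields a nonuniform survivor map, with no real transients needed.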

The hypothesis is that the nonuniform distribution resulting from this processing would also produce the shadow deficit that Villarroel observed. This seems all the more likely because the mathematical properties that would determine the altitude of the orbits have not been demonstrated in Villarroel's data, and in fact some remarks in the paper led me to conclude they don't exist.
 
But if we assume a random distribution of emulsion flaws, then, for a valid null hypothesis, the validation process needs to be applied to that random distribution. Since the validation process does not identify astronomical objects one by one, but rather eliminates all signals within 5 arcseconds of known astronomical objects, more flaws will be eliminated from plates that have more astronomical objects on them. Since asteroids and the Milky Way are not randomly distributed across the sky, this introduces a nonuniformity in the overall distribution, which we have also observed in the graphs.
This is what I'm trying to work out - where and when is this validation process done, and by whom?

Edit - I think maybe here:

https://arxiv.org/pdf/2009.10813 Launching the VASCO Citizen Science Project
 
See Solano (2022).
https://academic.oup.com/mnras/article/515/1/1380/6607509?login=false

This is explained upthread.
Villarroel references that paper, but it's unclear which of the steps were applied to her data set, since it doesn't match and she doesn't say. But star-catalogue matching is one of the first steps, so I assume it has been done.
Catalog matching was done, and that can be tested. The three MNRAS 2022 data sets are published here: http://svocats.cab.inta-csic.es/vanish/ Select the list of vanishing objects in POSS I red images (5,399 rows) and cross-match it against the Gaia and Pan-STARRS catalogues using the VizieR online service: https://vizier.cds.unistra.fr/viz-bin/VizieR There should be zero matches within 5 arcsec.
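Once both coordinate lists are downloaded, the 5 arcsec criterion can also be checked locally without VizieR. This is a toy helper, not part of any published pipeline; the function names and sample coordinates are mine.

```python
import numpy as np

def angular_sep_arcsec(ra1, dec1, ra2, dec2):
    """Great-circle separation in arcsec; inputs in degrees (Vincenty formula,
    numerically stable at small separations)."""
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    dra = ra2 - ra1
    num = np.hypot(np.cos(dec2) * np.sin(dra),
                   np.cos(dec1) * np.sin(dec2)
                   - np.sin(dec1) * np.cos(dec2) * np.cos(dra))
    den = np.sin(dec1) * np.sin(dec2) + np.cos(dec1) * np.cos(dec2) * np.cos(dra)
    return np.degrees(np.arctan2(num, den)) * 3600.0

def matched_within(ra, dec, cat_ra, cat_dec, radius_arcsec=5.0):
    """Boolean per candidate: does any catalogue source lie within the radius?"""
    sep = angular_sep_arcsec(ra[:, None], dec[:, None],
                             cat_ra[None, :], cat_dec[None, :])
    return (sep <= radius_arcsec).any(axis=1)

# Toy example: one candidate sits 1 arcsec from a catalogue star, one is isolated.
cand_ra = np.array([10.0, 180.0])
cand_dec = np.array([20.0, -5.0])
cat_ra = np.array([10.0, 55.0])
cat_dec = np.array([20.0 + 1.0 / 3600.0, 30.0])
print(matched_within(cand_ra, cand_dec, cat_ra, cat_dec))  # [ True False]
```

If the star-catalogue rejection step really was applied to the published list, `matched_within` over the 5,399 candidates against Gaia/Pan-STARRS positions should return all False.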

However, we already know MNRAS 2022 contains "bugs", and so do the data sets. See:
  • Watters et al. (2026) arXiv:2601.21946
  • Villarroel et al. (2026) response arXiv:2602.15171
I tested 7 candidates from the remainder data set of 5,399 rows using my version of the pipeline. None of them were selected as candidates. Reason: they failed one or more quality gates described in MNRAS 2022 / 2/v / "Other artefacts' removal". I know this wasn't a large-scale test, but if you have seven candidates in the final remainder data set which fail your own criteria, it raises concerns.
 