Issues with Replicating the Palomar Transients Studies

Here is the quote you requested. It is from N. C. Hambly and A. Blair 2024, "On the nature of apparent transient sources on the National Geographic Society–Palomar Observatory Sky Survey glass copy plates", Section 2.34 (page 3).
The quote adds important context about the possible a priori interpretations of what the transient candidates might be, and how a lack of cross-matches in certain catalogues may inform that interpretation.


https://arxiv.org/pdf/2402.00497
But they use fewer catalogs than any paper we've discussed, and they apply a magnitude cutoff. It's really not comparable.
Thank you for your reply.
 
No, I don't claim that.
Solano 2022 claimed it.
Read it.
I've made it clear what I have been saying. Please stop misattributing me.
But still, for weeks the interpretation that almost all of the candidates are actually stars has been promoted as something that discredits the 2025 papers.
Yes.
In Solano 2022, Villarroel (she's a co-author) cut the data set down to 5399 by using several star catalogs and other means (e.g. identifying asteroids or high proper motion stars) to eliminate transients that could possibly be astronomical objects.

In Villarroel 2025, she cites Solano 2022 and mentions the number "5000", but then doesn't use that data set; she uses a different, much larger one, without explaining why she departs from the method of the paper she cites as describing her method.

She goes back on her earlier work for no apparent reason, and we're left to speculate why, especially as that earlier paper can easily serve as a null hypothesis to destroy her data set. None of the points in the alignments on her shortlist are in the 5399 data set, so we can simply adopt the criteria of Solano 2022 to say that they might all be astronomical objects.

She should have made a case why that older finding is invalid, but she didn't.
 
I don't remember writing this, and I just checked and don't think I wrote it in this thread. Where did you see me say this?
Please quote, don't paraphrase.

Here are at least a few direct quotes.

what was the deficit you got?
how did you determine the shadow?

not true

the 106,339 data set contains >90% astronomical objects with a nonrandom distribution. The fact that this nonuniform distribution is compared with a uniform distribution leads to the observed deficit in the Villarroel paper. This is the signal.

Solano (2022) does a very good job in cleaning astronomical objects from this data, so the 5399 data set is pretty much all plate defects aka noise, and that's a lot more uniform.

You have found that if stringent criteria are applied to filtering the data set, noise remains.

https://www.metabunk.org/threads/transients-in-the-palomar-observatory-sky-survey.14362/post-355581

Solano 2022 uses several star catalogs to pare down its 298,165 sources. Villarroel only uses neoWISE. These 100k stars haven't moved; they're just not in the 3 catalogs that the Villarroel 2025 papers have applied.

https://www.metabunk.org/threads/is...-palomar-transients-studies.14534/post-356426
 
Yes, thank you. Always with direct reference to Solano 2022 and its star catalog pruning of the data set. It's never a statement made about the nature of the data set.
Your interpretation of Solano 2022 is not one-to-one with Solano 2022's results. I can see how you arrived at that interpretation, but I am skeptical of it. This is why I chimed in 2 weeks ago when you claimed that not only are more than 90% of them astronomical bodies, but that this is the root cause of the deficit.

Solano 2022 described the amount of false cross-matches (false identifications as astronomical sources) subjectively and qualitatively, as manageable. The methodology itself doesn't validate or quantify the false positive rate. Therefore, unless one has sufficiently reliable expert knowledge of the cross-matching algorithm, the catalogues, and the characteristics of astronomical sources, one should respect the ambiguity and uncertainty and be wary of misinterpretation.
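
One standard way to actually quantify the false positive rate, rather than settling for "manageable", is an offset re-match: shift every source position by a fixed amount much larger than the match radius and re-run the cross-match; anything that still finds a "counterpart" within 5 arcsec is a chance alignment. Here's a minimal sketch with astropy, assuming positions are held in SkyCoord arrays (the variable names, toy coordinates, and the 1-arcmin offset are my own illustrative choices, not anything from Solano 2022):

Code:
# Sketch: estimate the chance-alignment (false positive) rate of a 5-arcsec
# cross-match by re-matching with deliberately offset coordinates.
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

rng = np.random.default_rng(42)
# Toy stand-ins for the real source list and reference catalogue.
sources = SkyCoord(ra=rng.uniform(0, 10, 2000) * u.deg,
                   dec=rng.uniform(0, 10, 2000) * u.deg)
catalogue = SkyCoord(ra=rng.uniform(0, 10, 50000) * u.deg,
                     dec=rng.uniform(0, 10, 50000) * u.deg)
radius = 5 * u.arcsec

def match_fraction(coords):
    """Fraction of coords with a catalogue neighbour within the match radius."""
    _, sep, _ = coords.match_to_catalog_sky(catalogue)
    return np.mean(sep < radius)

real_rate = match_fraction(sources)
# Offset every position by 1 arcmin due north: any remaining "matches" are chance.
shifted = sources.directional_offset_by(0 * u.deg, 1 * u.arcmin)
false_rate = match_fraction(shifted)
print(f"match rate: {real_rate:.4f}, chance-alignment rate: {false_rate:.4f}")

Run on the real positions instead of the toy ones, this would put an actual number on how many of the 5-arcsec matches are spurious.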
 
Your interpretation of Solano 2022 is not one-to-one with Solano 2022's results. I can see how you arrived at that interpretation, but I am skeptical of it. This is why I chimed in 2 weeks ago when you claimed that not only are more than 90% of them astronomical bodies, but that this is the root cause of the deficit.
You can look at the neoWISE-matching data that Solano removed but Hambly & Blair did not; there's a plot somewhere on these threads. It's not uniform at all. The fact that it was removed from the data set (but the matches from the other catalogs were not) makes the resulting set nonuniform, and any null hypothesis postulating a uniform distribution of the data points will not reflect the statistical distribution of the data set. I'm also guessing that this non-uniformity causes a shadow deficit.
I stand by that argument.
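
To illustrate the statistical point only (a toy simulation with made-up numbers, not the Palomar data): if points are drawn from a nonuniform density but the expected count in a test region is computed from a uniform null, a large "deficit" appears with no physics involved.

Code:
# Toy illustration: an apparent "deficit" in a test region when the data are
# nonuniform but the null hypothesis assumes uniformity.
# All numbers are made up; this is not the Palomar data set.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Nonuniform sky: density rises linearly toward x = 1 (pdf = 2x on [0, 1]),
# so points are scarce near x = 0.
x = rng.power(2, n)
y = rng.uniform(0, 1, n)

# Test region chosen where the true density happens to be low.
observed = np.count_nonzero((x < 0.1) & (y < 0.5))

# Uniform null: expected count is proportional to the region's area.
expected = n * 0.1 * 0.5
print(f"observed {observed}, expected {expected:.0f} under the uniform null, "
      f"deficit {1 - observed / expected:.0%}")
# With pdf = 2x, P(x < 0.1) = 0.01, so the region holds ~1% of the points
# instead of 10%: a ~90% "deficit" produced entirely by the wrong null.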
 
I don't remember writing this, and I just checked and don't think I wrote it in this thread. Where did you see me say this?
Please quote, don't paraphrase.
What's the "them" in b-m's "You're saying most of them are stars" referring to? Your post to which he's replying ( https://www.metabunk.org/threads/is...-palomar-transients-studies.14534/post-356461 ) refers to many things: "any astronomical entry in the catalog", "the remaining transients", "astronomical objects", "all data points", "objects in orbit", "astronomical objects". Standard English rules about the applicability of demonstrative pronouns lead me to the conclusion that "them" refers to the last of these, i.e. that most astronomical objects that can be imaged on photographic plates are stars. Which would be a statement not worth stating.
 
You're saying most of them are stars, Solano 2022 is saying most of them could be stars, and Hambly is saying practically none of them are stars. This is a wide range of interpretations. As a non-expert I can only say that there seems to be a lot of uncertainty.
I think you are misinterpreting Hambly & Blair.
External Quote:
Following standard practice in defining ML training data the star and galaxy sets were sub-sampled down to be the same size as the spurious detection catalogue (this being the smallest of the three)
[attached image: histogram figure from Hambly & Blair]

Note the logarithmic scale.

Solano 2022 described the amount of false cross-matches (false identifications as astronomical sources) subjectively and qualitatively, as manageable.
This is in reference to the 5-arcsecond cross-matching criterion that Hambly & Blair also use:
External Quote:
Sources in the SExtractor catalogue not having counterparts either in Gaia or Pan-STARRS in a 5-arcsec radius were kept. The adopted radius is a good compromise to ensure that not many high proper motion [objects] were left out while, at the same time, avoiding an unmanageable number of false positives.
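
Mechanically, that criterion is just a nearest-neighbour match with a 5-arcsec cut, keeping whatever fails to match. A minimal sketch with astropy, purely to show what the filter does (the variable names and toy coordinates are illustrative, not the papers' actual data or code):

Code:
# Sketch of the "keep if unmatched within 5 arcsec" filter.
import astropy.units as u
from astropy.coordinates import SkyCoord

def unmatched(transients, catalogue, radius=5 * u.arcsec):
    """Return the subset of transients with no catalogue source within radius."""
    _, sep, _ = transients.match_to_catalog_sky(catalogue)
    return transients[sep > radius]

# Toy example: the first source sits ~3.6 arcsec from a Gaia entry and is
# removed; the second has no nearby counterpart and survives.
transients = SkyCoord(ra=[10.001, 20.0] * u.deg, dec=[-5.0, 30.0] * u.deg)
gaia = SkyCoord(ra=[10.0] * u.deg, dec=[-5.0] * u.deg)
print(len(unmatched(transients, gaia)))  # 1

Widening or narrowing the radius trades missed high proper motion objects against chance alignments, which is exactly the compromise the quote describes.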
Solano 2022 also gives this:
External Quote:

Figure 5. Distribution of the Gaia EDR3 (blue) and Pan-STARRS (red) magnitudes of a randomly selected sample of POSS I sources. Limiting magnitudes of POSS I sources at Gaia EDR3 and Pan-STARRS are G = 18.5 and r = 19, respectively.

Figure 10. Distribution of the SuperCOSMOS R magnitudes of the final sample (5399 objects). It peaks at R ∼ 18.5, with 80 per cent of the targets having magnitudes in the range 17 ≤ R ≤ 19.
So what does this tell us about the quality of the cross-matching?

Secondly, we can't even say that the plate defects are not astronomical objects. The emulsion study showed a 40 micrometer displacement on an 11 cm 103a-O plate; POSS I-E uses a 33 cm plate that is curved, so the positions of objects may shift as the emulsion moves, and as the curved surface is projected onto a flat copy plate.
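
A rough back-of-the-envelope check of that, under two assumptions of mine that are not from the papers quoted here: the displacement scales linearly with plate size, and the POSS I Schmidt plate scale is about 67.2 arcsec/mm.

Code:
# Could emulsion displacement alone exceed the 5-arcsec match radius?
# Assumptions (mine, not from the papers): displacement scales linearly with
# plate size; POSS I plate scale ~67.2 arcsec/mm.
shift_um_11cm = 40.0    # measured displacement on the 11 cm plate
scale_factor = 33.0 / 11.0
plate_scale = 67.2      # arcsec per mm, assumed

shift_um_33cm = shift_um_11cm * scale_factor            # 120 um
shift_arcsec = (shift_um_33cm / 1000.0) * plate_scale   # ~8.1 arcsec
print(f"{shift_um_33cm:.0f} um ≈ {shift_arcsec:.1f} arcsec")

If those assumptions hold, the shift alone is larger than the 5-arcsec matching radius, so a real source could fail to cross-match purely because the emulsion moved.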
 
The Posting Guidelines are clear. Follow them.
I believe you mean the link policy? I can only speak for myself here but I read the Posting Guidelines carefully, and I noticed that the Link Policy is only referenced at the bottom rather than explained inline. It might not be immediately obvious to everyone that it's a separate, detailed policy thread, especially since the guidelines mention it but don't summarize what it actually requires (like including excerpts and context).
 
I believe you mean the link policy? I can only speak for myself here but I read the Posting Guidelines carefully, and I noticed that the Link Policy is only referenced at the bottom rather than explained inline. It might not be immediately obvious to everyone that it's a separate, detailed policy thread, especially since the guidelines mention it but don't summarize what it actually requires (like including excerpts and context).
Well...
Article:
General Guidelines (for new threads, and any other post)

Quote from "Links":
Links should not require clicking on in order to understand the post, so extract relevant excerpts and include them in your post. See: https://www.metabunk.org/threads/metabunks-no-click-policy.5158/

Check out the "Info" part of the site menu bar!

@Mick West the guidelines still refer to it as "no click policy" in several places, and the URL has changed, too.
 
I believe you mean the link policy? I can only speak for myself here but I read the Posting Guidelines carefully, and I noticed that the Link Policy is only referenced at the bottom rather than explained inline. It might not be immediately obvious to everyone that it's a separate, detailed policy thread, especially since the guidelines mention it but don't summarize what it actually requires (like including excerpts and context).
The Link Policy is part of the Posting Guidelines.
 
I believe you mean the link policy? I can only speak for myself here but I read the Posting Guidelines carefully, and I noticed that the Link Policy is only referenced at the bottom rather than explained inline. It might not be immediately obvious to everyone that it's a separate, detailed policy thread, especially since the guidelines mention it but don't summarize what it actually requires (like including excerpts and context).

Well, it's kinda right there. First header paragraph from the Posting Guidelines, notes #7 & 8. Videos must have a description of what to look for and where it is. Links must have an excerpt from the link, no telling people to look it up for themselves:

[screenshot of the Posting Guidelines header, notes #7 and #8]


Yes, the above is for new posts, but right below it is the general posting section, note #6. Again, links must include relevant information or quotes from whatever is being linked to:

[screenshot of the general posting section, note #6]

https://www.metabunk.org/threads/posting-guidelines.2064/

Both examples are at least saying members should not just post links and have others go read through them or watch them if it's a video. And both examples include a link to a more detailed explanation, should one need it:

[screenshot of the detailed Link Policy explanation]


Yes, there appear to be 3 different links, but they all lead back to the above description.

For some, it takes some getting used to. One can't just throw out an opinion or even a well-evidenced claim without including the evidence. I occasionally participate on an RV forum, and when contentious topics, like Electric Vehicles, come up, it's a complete shit-show. Everyone has an opinion and everyone has evidence, but there is no requirement to provide any of it. Sometimes people will throw in a link here and there, but even though the site uses the same XenForo setup Metabunk does, it has no EX tags for providing external information. There is a "Quote" tool buried in the options, but it is rarely used. So, discussions just devolve into online shouting matches. There may be some good evidence offered, but it's hard to tell.
 
Well, it's kinda right there. First header paragraph from the Posting Guidelines, note # 7 & 8. Videos must have a description of what to look for and where it is. Links must have an excerpt from the link, no telling people to look it up for themselves:
We should spin this off into a "Link Policy" thread because this is far afield of the topic at hand... :D

In my case, Mendel had asked me for sources, and I replied with academic citations to some well-known papers. Apparently I was reported for NOT providing a link. That isn't obvious at all from reading the posting guidelines.
 