It's fine, in a sense. Of course it's scientifically valid to look for specific anomalies.
@Shen
Where I might differ slightly is in how we interpret the intent and sequencing of the work. Villarroel et al. weren't attempting a full taxonomic survey of all transients (that would indeed be a useful follow-on study) but a narrower, exploratory test: could all Palomar transients be dismissed as known instrumental or astronomical phenomena? In that sense, isolating and inspecting residuals was the experiment. A broader classification effort would be the logical next phase, after confirming that such residuals exist at all.
Likewise, the idea of defining a priori models for possible artificial-object signatures would make sense only once there's a justified basis for doing so. Starting with a fixed model before establishing that any unexplained class exists would pre-bake assumptions into the analysis and risk missing novel signal types altogether. Discovery science often begins with anomaly detection; modeling comes later.
I also don't see evidence that the team "derived satellite-like parameters from the residuals." They reported observed characteristics (brightness, spacing, persistence) and compared them against known phenomena to see what remained unmatched. That's exploratory inference, not circular reasoning.
Where I do agree with you is on the correlation section: the links to UFO reports and nuclear tests were unnecessary and methodologically weak. Those are narrative flourishes, not statistical findings, and the paper would stand better without them.
Finally, I'd push back gently on the "data-mining" charge. Intent should be inferred from design, not presumed simply because of the authors' association with the UFO topic. Their methods were transparent and replicable, and the data are open for others to test. That's the opposite of bad-faith mining; it's the invitation to replication that good science depends on.