From the OP:
Wikipedia says:
However the Survey was ultimately extended to -30° plate centers, giving irregular coverage to as far south as -34° declination, and utilizing 936 total plate pairs.
The NGS-POSS was published shortly after the Survey was completed as a collection of 1,872 photographic negative prints each measuring 14" x 14".
So there were at most 936 days of photography, not 2718, and likely fewer?
External Quote:
The first photographic plate was exposed on November 11, 1949. 99% of the plates were taken by June 20, 1956, but the final 1% was not completed until December 10, 1958.[4]
So if there were no plates from June 20, 1956, to April 28, 1957, why are these days counted?
These numbers are fine. The sleight of hand is elsewhere: the data is not about days at all.
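For scale, here's the raw date arithmetic on the dates quoted above (just calendar math, not anything from the paper's own data):

```python
from datetime import date

# Dates quoted above; pure calendar arithmetic, no data from the paper.
first_plate = date(1949, 11, 11)      # first plate exposed
ninety_nine_pct = date(1956, 6, 20)   # 99% of plates taken by this date
last_plate = date(1958, 12, 10)       # final 1% completed
gap_end = date(1957, 4, 28)           # end of the apparent no-plate gap

print((ninety_nine_pct - first_plate).days)  # 2413 days to cover 99% of the plates
print((last_plate - first_plate).days)       # 3316 days from first plate to last
print((gap_end - ninety_nine_pct).days)      # 312 days in the quoted gap
```

Either way, the survey spans thousands of calendar days but only 936 plates, which is exactly why counting days instead of plates matters.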
If Wikipedia is correct, they had 936 plates to work with once the blue half of each plate pair is discarded.
So the proper way to do this analysis is to divide these 936 plates into two sets: N = plates taken with a nuclear test in the preceding 24 hours, and NN = plates taken with no nuclear test in the preceding 24 hours.
The first sleight of hand was to extend that interval to 3 days, and to count days both before and after a test. This decision is not motivated, so we have to assume it's there to make the numbers look good.
Then we'd also divide the plates into two other sets: T = plates with linear transients, and NT = plates with no linear transients.
From this, the relative risk ratio would be computed per plate.
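As a sketch, that per-plate analysis is just a 2x2 table (the counts below are made-up placeholders, not numbers from the paper):

```python
# Hypothetical per-plate contingency table. All counts are placeholders;
# the real values would come from classifying each of the 936 plates.
#                        transient (T)   no transient (NT)
# nuke in prior 24h (N)      t_n              nt_n
# no nuke (NN)               t_nn             nt_nn
t_n, nt_n = 4, 96      # plates in set N:  4 with a transient, 96 without
t_nn, nt_nn = 20, 816  # plates in set NN: 20 with a transient, 816 without

risk_n = t_n / (t_n + nt_n)        # P(transient | nuclear test in prior 24h)
risk_nn = t_nn / (t_nn + nt_nn)    # P(transient | no nuclear test)
relative_risk = risk_n / risk_nn   # per-plate relative risk ratio

print(f"risk (N)  = {risk_n:.3f}")
print(f"risk (NN) = {risk_nn:.3f}")
print(f"relative risk per plate = {relative_risk:.2f}")
```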
But they didn't do that; they computed it per day. This decision is not motivated either, so we have to assume it's there to make the numbers look good.
The way these choices were made completely screws up the significance of the result.
Ideally, these choices are made before you start the study: you pre-register it, then do the analysis, get a result or not, and publish either way to avoid publication bias.
Any other way to go about this screws with the significance of the result.
As an analogy:
If you aim to roll two sixes with a pair of dice, your chance is 1/36 < 5%, so if you actually rolled them, that would look significant.
But if you roll the dice and two fours come up, and you then write a paper that says "the chance of two fours is 1/36 < 5%, so our result is significant", that's deceptive, because you changed your analysis method when you didn't get the result you wanted. The chance of rolling any matching pair is 1/6 ≈ 17%, and that's not significant at all. And if we consider that, had that not panned out, we could have looked for consecutive numbers and published on that, or on numbers that sum to seven, or any other criterion that gives us a good-looking result, then the result is worthless.
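A quick simulation shows how much picking the criterion after the fact inflates the hit rate (a toy sketch, nothing to do with the paper's data):

```python
import random

random.seed(0)
TRIALS = 100_000

pre_registered_hits = 0   # only "two sixes" counts, chosen before rolling
post_hoc_hits = 0         # any pattern we could write up afterwards

for _ in range(TRIALS):
    a, b = random.randint(1, 6), random.randint(1, 6)
    if (a, b) == (6, 6):
        pre_registered_hits += 1
    # After seeing the dice, pick whichever "significant-looking" story fits:
    # a matching pair, consecutive numbers, or a sum of seven.
    if a == b or abs(a - b) == 1 or a + b == 7:
        post_hoc_hits += 1

print(f"pre-registered criterion hit rate: {pre_registered_hits / TRIALS:.3f}")  # ~0.028
print(f"criterion chosen after the fact:   {post_hoc_hits / TRIALS:.3f}")        # ~0.56
```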
This technique is called p-hacking, and we talked about it not too long ago. Whenever you see a study with some weird non-obvious decisions, and they don't tell you what they got when they ran the obvious analysis, p-hacking is likely involved.
So: if the observatory took more plates on days adjacent to nuclear tests, then, all else being equal, they would also detect more transients on those days.
Again, as an analogy: if you're trying to prove that the full moon is lucky, and you roll a die every night, but on nights surrounding a full moon you roll two dice, then you're going to be approximately twice as likely to roll a six on a night with a full moon. But that's not because the full moon is lucky, it's because you rolled more. If you publish a paper on this and don't correct for the number of rolls per night, it's worthless.
And we learned above that this paper counts nights when no photos were taken at all! Absolutely incredible.
If you do your full moon luck analysis per dice roll, that would be proper, but then the effect vanishes, and it turns out full moons are neither lucky nor unlucky.
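Here's that full moon analogy as a simulation sketch (made-up protocol: one die on ordinary nights, two on full-moon nights):

```python
import random

random.seed(1)
NIGHTS = 100_000
FULL_MOON_FRACTION = 0.1  # fraction of nights "surrounding a full moon" (arbitrary)

stats = {
    "full moon": {"nights_with_six": 0, "nights": 0, "sixes": 0, "rolls": 0},
    "other":     {"nights_with_six": 0, "nights": 0, "sixes": 0, "rolls": 0},
}

for _ in range(NIGHTS):
    full_moon = random.random() < FULL_MOON_FRACTION
    n_rolls = 2 if full_moon else 1   # the biased protocol: roll more near a full moon
    rolls = [random.randint(1, 6) for _ in range(n_rolls)]
    s = stats["full moon" if full_moon else "other"]
    s["nights"] += 1
    s["rolls"] += n_rolls
    s["sixes"] += rolls.count(6)
    s["nights_with_six"] += any(r == 6 for r in rolls)

for key, s in stats.items():
    per_night = s["nights_with_six"] / s["nights"]   # the "per day" analysis
    per_roll = s["sixes"] / s["rolls"]               # the "per plate" analysis
    print(f"{key:10s}  per-night six rate = {per_night:.3f}   per-roll six rate = {per_roll:.3f}")
```

The per-night rate makes the full moon look "lucky" (roughly 0.31 vs 0.17); the per-roll rate is about 0.17 in both groups, so there was never any effect. That's the per-day vs per-plate distinction.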
What would have happened to this study if they had looked at transients per plate, not per day? Who knows.