Transients in the Palomar Observatory Sky Survey

As an analogy:
If you aim to roll two sixes with a pair of dice, your chance is 1/36 < 5%, so if that came up, it would look significant.
But if you roll the dice, two fours come up, and then you write a paper that says "the chance of two fours is 1/36 < 5%, our result is significant", that'd be deceptive, because you changed your analysis method when you didn't get the result you wanted. To roll any pair, the chance is 6/36 = 1/6 ≈ 17%, and that's not significant at all; and if we consider that, had that not panned out, we could have looked for consecutive numbers and published on that, or on numbers that sum to seven, or any other criterion that gives a good-looking result, then the result is worthless.
Or... back in my table top roll playing game misspent youth, we had a zillion different dices* including a fairly massive D100 (a hundred-sided dice!) The chance of rolling, say, a 25 was only 1%. But the chance of rolling SOME number was 100%! And after rolling a 25, the chances that you just rolled a 25 were 100%.

The chance of a particular thing happening next is not the same as the chance of what just happened being the thing that just happened!

* Yeah, I know. So sue me... ^_^
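
To put numbers on that, here's a minimal simulation sketch (Python, purely illustrative):
Code:
import random

random.seed(1)
N = 100_000
two_sixes = any_pair = 0
for _ in range(N):
    a, b = random.randint(1, 6), random.randint(1, 6)
    two_sixes += (a == 6 and b == 6)   # the outcome you committed to in advance
    any_pair  += (a == b)              # "some pair", the criterion picked after the fact

print("two sixes :", two_sixes / N, "(theory 1/36 ~ 0.028)")
print("any pair  :", any_pair / N, "(theory 6/36 ~ 0.167)")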
 
Here is what they reported.
I see two red flags in your quote.
External Quote:
Because there was no compelling a priori reason to assume that transients would necessarily occur on
the day of nuclear testing rather than the day before or after testing,
But that's the premise of the whole thing, isn't it: that the visitors are here for the nuclear test! So why would you negate your assumption and say they might not be there on the day of the test, UNLESS you wanted to create some ambiguity for yourself? That ambiguity then stands in the way of science, which favors exactness. "They're here for the nuclear tests, but they're not" makes no sense, yet we're supposed to accept that as a reasonable hypothesis. (It's not.)

External Quote:
while the authors were still blinded to the transient data.
This phrasing implies that the data already existed, but they hadn't seen any of it. To me, that suggests one of three alternatives:
1) they had a third author who did the analysis, and who distanced themselves from the study to the point of refusing to be named. HUGE red flag!
2) they generated the data themselves without ever checking that their code worked and the output made sense. I'm not buying it.
3) the statement is false.

I'm not happy with that at all.
 
I haven't looked that deep in this paper, but I just noticed some things which seemed odd to me, like the use of in-line Wikipedia links (not even an archived version, just the main page). Maybe that is not uncommon in some fields.

I think this particular study and paper was produced mainly by Stephen Bruehl, an anesthesiologist.

Being blinded to the data in an absolute sense, might be technically impossible in some cases. But what you would do, is try to ensure that your knowledge of the patterns in the data didn't inform your choice of the parameters of the study. It is essentially a claim that this choice wasn't deliberate p-hacking. In most studies like this, you pretty much have to trust the authors are reporting honestly. That's part of the reason one study isn't enough. Independent reproduction of the results can sometimes help corroborate the results. And sometimes limitations in the data itself are the main problem.

In medicine, they pretty much just report everything, no matter how strong the result actually is, and leave it up to others to make of it what they will.

I wouldn't dismiss this paper, but I would be cautious interpreting it. Alone it certainly isn't enough to convince, and in general papers like this in other fields aren't either, but they still get published.

Beatriz seems to ascribe some corroboratory value to this particular result.

Maybe they are falsifying their results, but that could be said about any scientist. I am not going to assume they are, just because of what they are studying and what results they reported.
 
Though some players are more focused on their rolls than on their roles.
Thanks -- I was going to try to get out of it with that dodge, but it seems less self-serving since you did it!

A fascinating thread, by the way, folks. Much of it is well above my pay-grade, but I'm enjoying learning a bit as the rest of you thrash this one out...
 
But what you would do, is try to ensure that your knowledge of the patterns in the data didn't inform your choice of the parameters of the study. It is essentially a claim that this choice wasn't deliberate p-hacking.
Right. And they should have done that by settling on their methods before collecting the data. The fact that they don't say they did that is a red flag.
 
I haven't looked that deep in this paper

As I go back and look at it, there just isn't much there. They use Villarroel and Solano's previous work for a list of transients. Then they compare these transients to nuclear tests and UFO sightings, looking for correlations. They present no data for any of this: no list of transients, no list of nuclear tests, and no list of UFO sightings. They just say that various transients occurred within a few days of a test or UFO sighting, then give some statistics about what they claimed.

Not saying these correlations don't exist, but they present no data to show they do. For example, instead of a list of nuclear tests, the authors in essence say "go find them yourselves":

External Quote:

Nuclear Weapons Testing Data.

An SPSS dataset was created from public sources which included the dates of all above-ground nuclear weapons tests during the study period. Tests conducted by the United States were identified from:

https://nnss.gov/wp-content/uploads/2023/08/DOE_NV-209_Rev16.pdf.

Tests conducted by the Soviet Union were identified from: https://en.wikipedia.org/wiki/List_of_nuclear_weapons_tests_of_the_Soviet_Union.

Tests conducted by Great Britain were identified from: https://chrc4veterans.uk/knowledge-hub/british-nuclear-weapons-testing/.
They say what they did, but don't show it, not that I can find:

External Quote:

The final analyzed dataset began with creation of an SPSS master file with a separate record for every date within the study period, 11/19/49 to 4/28/57 (n = 2,718 days). Then, the transient database, nuclear test database, and the UAP database were merged by date with this master file. Next, dichotomous variables (coded 1/0 for Yes/No) were created to indicate whether each date in the master file was associated with at least one transient and/or with at least one UAP report. Both dichotomous and continuous variables were available for the transient data (any transient Yes/No and total number of transients identified on each date) and for the UAP data (any UAP Yes/No and total number of independent UAP reports on each date). The nuclear testing variable was only available as a dichotomous index, that is, whether each date fell within a nuclear testing window (coded 1/0 for Yes/No).
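
None of that bookkeeping is hard to reproduce once the date lists exist; the problem is that they weren't published. As a rough sketch of what the described master-file construction amounts to (pandas; all file and column names here are hypothetical):
Code:
import pandas as pd

# Hypothetical file and column names; the paper's actual datasets were not released.
transients = pd.read_csv("transients.csv", parse_dates=["date"])      # one row per transient
tests      = pd.read_csv("nuclear_tests.csv", parse_dates=["date"])   # one row per above-ground test
uap        = pd.read_csv("uap_reports.csv", parse_dates=["date"])     # one row per UAP report

# Master file: one record per date in the study period (n = 2,718 days).
master = pd.DataFrame({"date": pd.date_range("1949-11-19", "1957-04-28", freq="D")})

# Continuous counts per date, merged onto the master file.
for name, df in [("transients", transients), ("uap", uap)]:
    counts = df.groupby("date").size().reset_index(name=f"n_{name}")
    master = master.merge(counts, on="date", how="left")
master = master.fillna(0)

# Dichotomous 1/0 indicators, as described in the quoted paragraph.
master["any_transient"] = (master["n_transients"] > 0).astype(int)
master["any_uap"] = (master["n_uap"] > 0).astype(int)

# Nuclear testing window: test date +/- 1 day, coded 1/0.
window = set()
for d in tests["date"]:
    window.update(pd.date_range(d - pd.Timedelta(days=1), d + pd.Timedelta(days=1)))
master["test_window"] = master["date"].isin(window).astype(int)

print(len(master), "days")   # should be 2718 for 11/19/49 through 4/28/57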
I think this particular study and paper was produced mainly by Stephen Bruehl, an anesthesiologist.

Unfortunately, this is another thing seen in UFOlogy, especially when there is an attempt to get something published. The lead author is completely out of his wheelhouse. In this case I don't think it makes much difference, as he's just looking for correlations in 3 data sets, something even I could manage. Assuming I could see the data sets.

According to the Declarations section, both Bruehl and Villarroel worked on this paper, with Villarroel providing the transient data from her paper with Solano:

External Quote:

S.B. helped design the study, compiled the datasets, conducted and interpreted the statistical analyses, and prepared a draft of the initial manuscript.

B.V. helped design the study, prepared and interpreted the transient data, and revised the draft manuscript.
In most studies like this, you pretty much have to trust the authors are reporting honestly. That's part of the reason one study isn't enough. Independent reproduction of the results can sometimes help corroborate the results. And sometimes limitations in the data itself are the main problem.

I'm not sure what "most studies like this" includes. It's basically a search for correlations, and as we all know, correlation is not causation. I would argue that with something simple like transients, tests and UFOs happening on certain, sometimes overlapping, dates, we shouldn't have to take the authors' word for it. It's easily presentable.

In the case of the transients correlating to a few days around a nuclear test, the authors identified 55 "hits". It would be pretty simple to list the corresponding 55 nuclear test dates and the dates of transients related to those 55 tests. Ideally, an entire list of the 347 nuclear tests considered would be included.

Something as simple as the correlation of transients with tests and UFOs cannot be independently reproduced with the data given in the paper. One needs to review Villarroel's previous paper for transients, then the various websites given for the dates of nuclear tests, and then comb through the CUFOS database for UFO sightings.

The authors do note that datasets are available "upon reasonable request":

External Quote:

The final analyzed SPSS dataset will be made available by the authors upon reasonable request to Dr. Stephen Bruehl (stephen.bruehl@vumc.org).
I understand some people may not be reasonable. There is the case of some Biblical creationists demanding entire complex datasets of genetic material they had no hope of understanding, so they could disprove evolution. But in this case, it's just some simple lists of dates. Reminds me a bit of all the publicly available information about the supposed Nazca tridactyl mummies. It was available, as long as they thought you were deserving of it.

Lastly, there is the reference section of the paper. It lists 16 total references; of those, references 1-4 are Villarroel and Solano's papers about transients. References 9 & 11 are for articles by Villarroel. So, 6 of the 16 references are back to the author, Villarroel. I guess that makes sense, as the whole VASCO project is her gig.

While some of the references are by known experts in their field, like E. Loftus' work on the fallibility of human memory (12 & 13), there are also references to things like R. Hastings' self-published book, based largely on Robert Salas' flawed, confabulated and hypnotically retrieved memories regarding nuclear missiles and UFOs. List of references:

External Quote:

  1. Solano, E., Villarroel, B., & Rodrigo, C. Discovering vanishing objects in POSS I red images using the Virtual Observatory. Monthly Notices Royal Astron Soc. 515, 1380–1391 (2022).
  2. Villarroel, B. et al. The Vanishing and Appearing Sources during a Century of Observations Project. I. USNO Objects Missing in Modern Sky Surveys and Follow-up Observations of a "Missing Star". Astronom J 159, 8 (2020).
  3. Villarroel, B. et al. Exploring nine simultaneously occurring transients on April 12th 1950. Sci Rep 11, 12794 (2021).
  4. Solano, E. et al. A bright triple transient that vanished within 50 min. Monthly Notices Royal Astron Soc. 527, 6312–6320 (2024).
  5. Neamtan, S.M. The Čerenkov effect and the dielectric constant. Physical Rev. 92, 1362–1367 (1953).
  6. Hastings, R. Flashing sky, killing wind in UFOs & nukes: extraordinary encounters at nuclear weapons sites (2nd Edition). 67–95 (Self-Published, 2017).
  7. Knuth, K.H. et al. The new science of unidentified aerospace-undersea phenomena (UAP). Preprint at: https://www.arxiv.org/abs/2502.06794 (2025).
  8. Grosvenor, S., Hancock, L., & Porritt, I. UAP Indications Analysis 1945-1975: United States Atomic Warfare Complex. Limina - The Journal of UAP Studies 2, https://doi.org/10.59661/001c.131854 (2025).
  9. Villarroel, B. The vanishing star enigma and the 1952 Washington DC UFO wave. The Debrief. https://thedebrief.org/the-vanishing-star-enigma-and-the-1952-washington-d-c-ufo-wave/ (2024).
  10. Ruppelt, E. The Washington merry-go-round in The report on unidentified flying objects. 156–172 (Doubleday & Company, 1956).
  11. Villarroel, B. et al. A glint in the eye: Photographic plate archive searches for non-terrestrial artefacts. Acta Astronautica 194, 106–113 (2022).
  12. Frenda, S.J., Nichols, R.M., & Loftus, E.F. Current issues and advances in misinformation research. Curr Dir Psychol Sci. 20, 20–23 (2011).
  13. Loftus, E.F. & Palmer, J.C. Reconstruction of automobile destruction: An example of the interaction between language and memory. J Verbal Learn Verbal Behav. 13, 585–589 (1974).
  14. Norman, J.F. et al. The visual perception of long outdoor distances. Sci Rep. 14, 3207 (2024).
  15. Medina, R.M., Brewer, S.C., & Kirkpatrick, S.M. An environmental analysis of public UAP sightings and sky view potential. Sci Rep. 13, 22213 (2023).
  16. Watters, W.A. et al. The scientific investigation of unidentified aerial phenomena (UAP) using multimodal ground-based observatories. J Astronom Instrument 12, 2340006.
 
Lastly, there is the reference section of the paper. It lists 16 total references

Good observation. I don't know what others' experiences might be, but approx. 16 refs. was the minimum expected for undergraduate coursework essays (almost certainly not of publishable quality) when I was a student.

Six of those references are to Villarroel papers (#4, Solano et al. https://academic.oup.com/mnras/article/527/3/6312/7457759 has Villarroel as one of the co-authors).
Some are no doubt useful papers, but their inclusion here smacks of padding out the reference list, e.g.
External Quote:

Loftus, E.F. & Palmer, J.C. Reconstruction of automobile destruction: An example of the interaction between language and memory. J Verbal Learn Verbal Behav. 13, 585–589 (1974).
-Almost certainly there are more generally applicable, and more recent, papers about eyewitness testimony and the fallibility of recall than this 1974 paper.

@NorCal Dave mentioned the use of a self-published book, R. Hastings' UFOs & nukes: extraordinary encounters at nuclear weapons sites, as a reference; not only is this not peer-reviewed in an academic sense, by definition it hasn't been reviewed and approved for publication by an editor at any established publishing house. Which, considering George Adamski, Erich von Däniken and Tibetan man of mystery T. Lobsang Rampa (AKA Cyril Hoskin, English plumber) overcame this hurdle, is a bit of a worry.

External Quote:
Hastings believes the earth is "being visited by beings from another world, who for whatever reason have taken an interest in the nuclear arms race" and claims that "a global conspiracy exists in which all major governments have been covering up evidence of UFOs for decades."
... According to skeptical author Benjamin Radford, Hasting's claims lack "new evidence or real proof" but instead are "merely a rehashing of old, discredited reports that didn't yield any significant evidence when they were originally reported decades ago."
Wikipedia, Robert Hastings (ufologist) https://en.wikipedia.org/wiki/Robert_Hastings_(ufologist)

One reference,
External Quote:
Knuth, K.H. et al. The new science of unidentified aerospace-undersea phenomena (UAP)
is for an as yet unpublished submission. Co-authors include Villarroel (oh, increase the number in my 2nd para. above to "seven"), the Tedesco brothers, Garry Nolan, Jacques Vallee, Ryan Graves, others.
External Quote:
Knuth attributed his beliefs on UFO matters to Robert Hastings.
Wikipedia, Kevin Knuth https://en.wikipedia.org/wiki/Kevin_Knuth (and see above re. Hastings).

The Wikipedia article includes a link for Knuth's PhD dissertation, leading to "Dynamics of the human auditory cortex in response to periodic auditory stimulation", https://www.proquest.com/openview/249a09d5ca5fe4e400290f149a0a8201/1?cbl=18750
I don't know if the linked-to material really is Knuth's dissertation. I sort of hope not.
The linked-to material contains gems like
External Quote:
Processing times on the order of tenths of seconds demonstrate that brain is a massively parallel device.
But serial-processing architectures were achieving processing times orders of magnitude shorter than tenths of a second for many years before Knuth's dissertation, so processing speed in itself does not indicate parallelism (the human brain undoubtedly is a massively parallel architecture, if viewed in computational terms, but we gathered that from other evidence, not from processing times).
External Quote:
The cerebral cortex is a two millimeter thick layer of neurons comprising the entire surface of the cerebrum of the brain.
Well, an approximately 2 - 4 mm thick layer, but who needs accuracy when discussing the brain?
External Quote:
The pinna serves the additional purpose of directing sound waves into the external auditory canal
The pinna is the external ear- the fleshy bit we can see. What we think of as an ear when someone says "ear" in day-to-day conversation.
If its additional purpose is directing sound waves into the auditory canal, what is its main purpose?

Anyway, Villarroel's list of references is short; 7 out of 16 references are to papers which Villarroel authored or co-authored, and at least one ref. is to self-published material whose validity has been established by, erm, the person who wrote it.

All looks a bit shaky.
 
Unfortunately, this is another thing seen in UFOlogy, especially when there is attempt to get something published. The lead author is completely out of his wheel house. In this case I don't think it makes much difference as he's just looking for correlations in 3 data sets, something even I could manage. Assuming I could see the data sets.

The VASCO team is made up mostly of active physicists/astronomers, and they mostly publish papers in reputable physics journals. Calling them UFOlogists just sounds like a defamation attempt, akin to calling SETI researchers alien hunters.

This one paper was produced by someone outside their field, but it is an exception; it probably happened because he had an idea to do this study, reached out to Villarroel, and she agreed to provide the transient data. This is a minor side project, rather than the main body of work that Villarroel et al. have recently published.

Although, from my perspective as someone more familiar with academic standards in computer science and physics, this paper looks a bit lacking.
 
Good observation. I don't know what others experiences might be, but approx. 16 refs. was the minimum expected for undergraduate coursework essays (almost certainly not of publishable quality) when I was a student.
...
Some are no doubt useful papers, but their inclusion here smacks of padding out the reference list, e.g.

In academic research, you don't have to pad out the list. The expectation is that they cite the work that should be cited: enough to understand the work and its contribution, and to give credit where it is due to others who did similar work. If something is cited that isn't very relevant, the reviewers might ask for the citation to be removed, and if something crucial is missing, they'll ask for a citation to be added.
 
External Quote:
The final analyzed dataset began with creation of an SPSS master file with a separate record for every date within the study period, 11/19/49 to 4/28/57 (n = 2,718 days). Then, the transient database, nuclear test database, and the UAP database were merged by date with this master file.
[...]
In the case of the transients correlating to a few days around a nuclear test, the authors identified 55 "hits". It would be pretty simple to list the corresponding 55 nuclear test dates and the dates of transients related to those 55 tests. Ideally, an entire list of the 347 nuclear tests considered would be included.
Ok, I think I can identify the moment when they decided to extend the date range to 3 days per test. It was when they found they had fewer than 20 "hits" if they went by the exact date alone. I learned in high school that, for a random sample to be good, it should have at least 30 samples, as a rule of thumb.
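
Once the date lists are assembled, the comparison itself is a few lines, so it would also be easy to publish. A sketch with placeholder date sets (not the paper's data):
Code:
from datetime import date, timedelta

transient_days = {date(1952, 6, 1), date(1952, 6, 3), date(1953, 8, 20)}   # placeholders
test_days      = {date(1952, 6, 2), date(1953, 8, 23)}                     # placeholders

window = {t + timedelta(days=k) for t in test_days for k in (-1, 0, 1)}

print("hits on the exact test date:    ", len(transient_days & test_days))
print("hits within test date +/- 1 day:", len(transient_days & window))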
 
The more I look at this, the more red flags pop up.

External Quote:
For display purposes in Figure 2, total transients and total UAP reports have both been log10 transformed (after adding a constant [+1] to avoid zero values) in order to optimize scaling in the figure.
External Quote:


Figure 2.

Scatterplot of total number of transients identified by total number of independent UAP reports for dates on which at least one transient occurred (n=310). Both variables have been log10 transformed to enhance scaling for clarity.
• Why are they adding +1 to the number of transients if they only include days with at least 1 transient? (See the quick sketch at the end of this post.)
• The left column should indicate days with 0 UAP reports; why is it included in the trend?
• Why do they log-scale the x-axis when it only goes from 1 to 30?
• Their trend line looks straight when it wouldn't be straight at any other scale.

With a linear x-axis scale, it'd be even more obvious that there are some random dots on the right that don't really support the "more transients means more UFO reports" claim.

• They do not investigate a ratio of (days with X UFO reports and transients) / (days with X UFO reports and no transients), like they do for nuclear tests

[I've already explained above that to set this data up per date, instead of per plate, is an invalid approach.]
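
To illustrate the first bullet above: since every included day already has at least one transient, the +1 isn't needed to avoid log(0) on that axis; it just shifts and compresses the low end. A quick numpy sketch (purely illustrative):
Code:
import numpy as np

counts = np.arange(1, 31)                    # days with 1 to 30 transients
print(np.round(np.log10(counts), 2))         # plain log10: a 1-transient day maps to 0.00
print(np.round(np.log10(counts + 1), 2))     # the paper's log10(x+1): it maps to 0.30 instead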
 
I'm still not over how they assume the probes are in geostationary orbit because they've been observing Earth for centuries, yet can be seen on a day-to-day basis.

It feels like motivated reasoning to overcome the shadow problem. At night, an observatory is in Earth's shadow, and so are the LEO orbits it can reasonably see. It's inside the shadow looking out, and the farther it looks out, the less shadow there is. So if you want to argue that these transient dots are glints, you need to assume they're far out, or it won't work.

(This is the same problem we have with UFOlogists mistaking close bugs for faraway UFOs in the absence of a second photo from an offset position that would establish distance.)

But a craft swinging by for a short visit might be on an elliptical trajectory that brings it closer to the surface, or establish itself in a temporary lower orbit, to "see better".
 
External Quote:
An SPSS dataset was created from public sources which included the dates of all above-ground nuclear weapons tests during the study period. Tests conducted by the United States were identified from:
https://nnss.gov/wp-content/uploads/2023/08/DOE_NV-209_Rev16.pdf.
Tests conducted by the Soviet Union were identified from: https://en.wikipedia.org/wiki/List_of_nuclear_weapons_tests_of_the_Soviet_Union.
Tests conducted by Great Britain were identified from: https://chrc4veterans.uk/knowledge-hub/british-nuclear-weapons-testing/.

As an example of the non-randomness of the nuclear test dates, I extracted the United States test dates (the U.S. conducted 79 of the 142 tests by the U.S., UK, and USSR during the period) from their source document, removed the blasts that were below ground (not many before 1958),
Could you please upload your list? The paper uses 124 dates, we should be able to duplicate that.

For the people who downloaded the plates: do we have digital data for the dates when these were taken (ideally including the emulsion type)?
 
Why do they log-scale the x-axis when it only goes from 1 to 30?
• their trend line looks straight when it wouldn't be straight at any other scale
I think the second line explains the first line. This is a hugely manipulated set of data, in order to prove their preferred point. Their conclusion preceded their data.
 
External Quote:
An SPSS dataset was created from public sources which included the dates of all above-ground nuclear weapons tests during the study period. Tests conducted by the United States were identified from:
https://nnss.gov/wp-content/uploads/2023/08/DOE_NV-209_Rev16.pdf.
Tests conducted by the Soviet Union were identified from: https://en.wikipedia.org/wiki/List_of_nuclear_weapons_tests_of_the_Soviet_Union.
Tests conducted by Great Britain were identified from: https://chrc4veterans.uk/knowledge-hub/british-nuclear-weapons-testing/.
Could you please upload your list? The paper uses 124 dates, we should be able to duplicate that.

For the people who downloaded the plates: do we have digital data for the dates when these were taken (ideally including the emulsion type)?
For the test dates, I've got some of the info in a Google sheet; I have to sort out how to share it without doxing myself and sort out what steps I was taking six weeks ago. The paper specifies above-ground nuclear tests within a certain date range, so you have to remove the below-ground tests. Also, some of the test dates were null tests, like Nov. 1, 1955, so we don't know if those dates got included.


At the time I was just trying to establish that there were behavioral patterns in the data, like the dates when plates were recorded falling preferentially on Tuesdays for the dates I transcribed from the scanned logs. As it turned out, there were no U.S. tests during this date range, though for 1951 to 1957, 25.3% of the test dates were Tuesdays (though I really need to remove the zero and "no yield" days, which were mostly not on Tuesdays, so that percentage should go up).
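
For what it's worth, tallying the day-of-week pattern is a one-liner once the dates are digitized (placeholder dates below, not the real plate log):
Code:
from collections import Counter
from datetime import date

plate_dates = [date(1950, 4, 10), date(1954, 10, 1), date(1954, 10, 2)]   # placeholders

tally = Counter(d.strftime("%A") for d in plate_dates)
for day, n in tally.most_common():
    print(f"{day:10s} {n:3d}  {100 * n / len(plate_dates):5.1f}%")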
 

For the people who downloaded the plates: do we have digital data for the dates when these were taken (ideally including the emulsion type)?
There is an official plate list in text format at https://gsss.stsci.edu/skysurveys/SurveyPlateList.txt, but it's for all the sky surveys and would need some cleanup and filtering.

Palomar plates are in there as "Palomar Schmidt," with the emulsion types distinguished in the Survey column as POSSI-E (for Kodak 103a-E red-sensitive emulsion) and POSSI-O (for Kodak 103a-O blue-sensitive emulsion).

plateID telescope longitude latitude altitude timezone plateLabel Survey Region RightAscension Declination SeeingCode plateGrade Epoch UTobservation hourangle zenithdistance airmass
------- ---------------------------------------- ---------------------- ---------------------- ---------------------- ---------------------- ---------- ------------ -------- ---------------------- ---------------------- ---------- ---------- ------------- ------------------------ ------------- -------------- -------------


2TR Palomar Schmidt 116.862998962402 33.3559989929199 1706 8 ES15 POSSI-E XE935 352.65228 -29.25216 unavailabl 1954.751 1954-10-01T06:32:00. 0.2833333 62.87249 2.184587
A2TS Palomar Schmidt 116.862998962402 33.3559989929199 1706 8 ES17 POSSI-E XE936 359.0975 -29.336161 unavailabl 1954.754 1954-10-02T06:53:00. 0.4 62.99483 2.193645
A2TT Palomar Schmidt 116.862998962402 33.3559989929199 1706 8 O64 POSSI-O XO321 190.31876 29.200712 unavailabl 1950.275 1950-04-10T08:92:00. -1.683333 21.89596 1.077567
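
One wrinkle for the "cleanup and filtering": the telescope column ("Palomar Schmidt") contains spaces, so naive whitespace splitting misaligns the fields; pulling the UT timestamp out with a regex is simpler. A rough sketch (assumes the STScI URL above is reachable):
Code:
import re
from urllib.request import urlopen

URL = "https://gsss.stsci.edu/skysurveys/SurveyPlateList.txt"
iso_stamp = re.compile(r"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}")

dates = []
with urlopen(URL) as fh:
    for raw in fh:
        line = raw.decode("ascii", errors="replace")
        if "POSSI-" not in line:        # keep only POSS I red (POSSI-E) and blue (POSSI-O) plates
            continue
        m = iso_stamp.search(line)
        if m:
            dates.append(m.group(0)[:10])   # UT date of the exposure

print(len(dates), "POSS I plates with a parsable UT timestamp")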
 
Caltech has a repository of most plates. Here is my painfully slow downloader for it. I have a multithreaded version; I'll upload it if anyone wants to download 900 GB of plate data.

This is the MAPS catalog, which is basically the combined red and blue plates, parsed. I believe it would be unsuitable, as it seems to leave out transients.
 
I'll upload it if anyone wants to download 900 GB of plate data.
I know, there's a separate thread for that. My aim is to check the metadata for statistical properties. Like, ideally we'd find out they took more plates near nuclear tests; then that alone would already explain their result.
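
One crude way to check exactly that, once we have plate dates and test dates: count plates falling within the test windows, then compare against many random sets of dates drawn from the same study period. A sketch with placeholder inputs:
Code:
import random
from datetime import date, timedelta

# Placeholder inputs; the real lists would come from the plate metadata and the test sources.
plate_dates = [date(1950, 4, 10), date(1954, 10, 1), date(1954, 10, 2)]
test_dates  = [date(1954, 10, 1)]

study_period = [date(1949, 11, 19) + timedelta(days=i) for i in range(2718)]

def plates_near(tests, plates, window=1):
    near = {t + timedelta(days=k) for t in tests for k in range(-window, window + 1)}
    return sum(d in near for d in plates)

observed = plates_near(test_dates, plate_dates)

random.seed(0)
n_iter = 10_000
as_extreme = 0
for _ in range(n_iter):
    fake_tests = random.sample(study_period, len(test_dates))
    if plates_near(fake_tests, plate_dates) >= observed:
        as_extreme += 1

print("plates within +/- 1 day of a real test:", observed)
print("fraction of random test-date sets doing at least as well:", as_extreme / n_iter)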
 
Well, I'm more of a metadata person myself...
This is my thread about the hunt for plates. It stalled on transient detection as that has proved difficult. I'm like 90% there but I keep overcomplicating things.

I know, there's a separate thread for that. My aim is to check the metadata for statistical properties. Like, ideally we find out they took more plates near nuclear tests, then that already explains their result.
I have snippets of metadata for almost all the red plates. Each CSV is tied to a plate and also contains the timestamp for the exposure.

Edit: I completely forgot JDog took part in the thread. I have the memory of a Goldfish. Also, here is a link to the red data CSV snippets.
 
Just to add some more confusion and show why it's important to include your data, I went to the sources listed in the paper for nuclear tests between November 19, 1949 and April 28, 1957. These dates seemed to have been chosen as they correspond to transients, even though nuclear tests were happening before and after the dates in question:

External Quote:
The initial transient dataset consisted of a list of 107,875 transients identified that occurred between 11/19/49 and 4/28/57.
Again, the paper offers these sources for tests:

External Quote:

Tests conducted by the United States were identified from:

https://nnss.gov/wp-content/uploads/2023/08/DOE_NV-209_Rev16.pdf.

Tests conducted by the Soviet Union were identified from: https://en.wikipedia.org/wiki/List_of_nuclear_weapons_tests_of_the_Soviet_Union.

Tests conducted by Great Britain were identified from: https://chrc4veterans.uk/knowledge-hub/british-nuclear-weapons-testing/.
Unfortunately the UK source is currently down for maintenance.

The wiki site for Soviet tests lists ~50 during the time frame in question:


https://en.wikipedia.org/wiki/List_of_nuclear_weapons_tests_of_the_Soviet_Union

Note the lack of dates in the listed source. There were 16 tests in 1957, but we're only interested in tests prior to April 28 of that year, so how many total tests are we talking about?

Turns out one has to click on the year to get the dates for each actual test. The Soviets' first nuclear test was in August of 1949, so outside the parameters of the paper. The first test to coincide with the transient collection was in September 1951, and of the 16 tests in 1957, there are 7 that fit the time frame:

[attached screenshots: per-year Soviet test date lists from the Wikipedia source]

From the above list of dates, I get 39(?) tests: that's excluding the 9/21/1955 underwater test, and including the 1/19/1957 rocket launch test, as it would still be considered "above ground". I think. And there's an unnumbered test thrown in between 21 and 22. So my logic is we have tests #2-40 that fit our dates. That's 39; minus the underwater test makes 38, but add the unnumbered test back and I get 39. Please correct me as needed.

Using the source listed for US tests is a bit more straightforward with a list of tests and dates. The first test to fit our dates is #7 (Able, Operation Ranger 1/27/1951) and the last test date to fit is #86 (Project 57, Operation Plumbbob 4/24/1957):


[attached screenshots: U.S. test list and dates from the NNSS source document]

To start with, we have 79 tests that fit the dates in question. Of those tests, as @jdog noted above, several are safety experiments or dispersal tests with 0-1 ton yields. These include tests #65, 66, 67, 68 (Operation Project 56) and 86 (Operation Plumbbob). In addition we have test #64 (Wigwam, Operation Wigwam), which was conducted underwater. That should leave us with 73(?) US tests that fit the time frame.

That would give us 112 tests by the Soviet Union and the US that fit into the time frame specified. As for the UK, one has to search through Wikipedia a bit. As the UK tests were carried out in Australia, one can find a list of dates at:

https://en.wikipedia.org/wiki/Nuclear_weapons_tests_in_Australia

I'm pretty sure from reading around that these early tests that fit the time frame of the paper were UK tests:

[attached screenshots: UK test dates from Wikipedia's article on nuclear weapons tests in Australia]


I count that as 9 tests, with Operation Antler being out of the time frame. That would bring our total to 121(?) I believe.

Recall from the chart in the paper there were 347 (293+54) possible days when a transient might correlate with a nuclear test:


[screenshot of the paper's table; the two boxes sum to 293 + 54 = 347 days]


That is, the actual test date +/- 1 day, so 3 total days per test date, right? So we would divide 347 by 3 to get the number of actual test dates being looked at. That gives us 115.666, which doesn't make sense to me: if we had a set number of test dates and multiplied it by 3, we would get a number divisible by 3, right? If there were 100 test dates, each giving 3 possible days for transients to appear, we would get 300 days. The total number of days a transient can appear on should not have a decimal, right?

Using the sources and what I could find for the UK, I get 121 total test dates. Multiply that by 3 and we get 363 days (test date +/- 1 day) on which a transient might show up. Note, 363/3 = 121, not some decimal number. This would be a lot easier if the authors had just included the actual test dates they are using, but at least we have them here to reference in the future.
 
From the introduction:
Article:
From 1951 until the launch of Sputnik in 1957, at least 124 above-ground nuclear tests were conducted by the United States (U.S.), Soviet Union, and Great Britain.

121 is 3 short (but close).

Maybe I miscounted something. Or the UK number is 3 short? I was trying to read about UK tests in one wiki entry that did not have a list and compare it to the lists I gave for tests in Australia.

Even so, where does the 347 number come from in the chart? If that's the test dates +/- 1 day, it's not reflective of 124 test dates. The 2 numbers in the boxes should instead add to 372, right?
 
• Why are they adding +1 to the number of transients if they only include days with at least 1 transient?
• The left column should indicate days with 0 UAP reports, why is it included in the trend?
• Why do they log-scale the x-axis when it only goes from 1 to 30?
• their trend line looks straight when it wouldn't be straight at any other scale
Log scale would potentially make some sense if they had a model for a cause with multiple effects. If today is a cause-filled day, then many effects would be seen because of that (P(you'll see N+1 events given that you've already seen N events) is high when N is high); if today is a cause-sparse day, then any detected effects would be rare (P(you'll see N+1 events given that you've already seen N events) is low when N is low). This nuke connection does seem to be such a nebulous causal relation. I don't like it either, but I'm not statistically sophisticated enough to put into words, or equations, exactly what I don't like about it and why that's wrong.

The single biggest objection is that there are no hyperbolae surrounding the regression line in their graph. I bet their 95% confidence interval flares both up and down at both ends (so it could be a negative correlation or a positive one).
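
That's checkable the moment the per-date counts exist: fit the same regression and look at the slope's 95% interval. A sketch with made-up data of the same shape as Figure 2 (statsmodels, purely illustrative):
Code:
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
transients = rng.integers(1, 31, size=310)    # made-up counts, 1-30 transients per day
uap        = rng.integers(0, 10, size=310)    # made-up counts, 0-9 UAP reports per day

x = np.log10(transients + 1)
y = np.log10(uap + 1)

fit = sm.OLS(y, sm.add_constant(x)).fit()
print(fit.params)                  # [intercept, slope]
print(fit.conf_int(alpha=0.05))    # 95% CIs; if the slope interval spans 0, the "trend" is
                                   # consistent with no relationship at all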
 
Even so, where does the 347 number come from in the chart? If that's the test dates +/- 1 day, it's not reflective of 124 test dates. The 2 numbers in the boxes should instead add to 372, right?
if one test is on the 3rd and the next is on the 5th, then they're counting 5 days (2nd-6th), not 6 days. Every overlap like this removes a day from the ×3 number.
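
A quick check of that overlap effect (toy dates, not the paper's):
Code:
from datetime import date, timedelta

tests = [date(1951, 1, 3), date(1951, 1, 5)]   # two tests two days apart
window = {t + timedelta(days=k) for t in tests for k in (-1, 0, 1)}
print(len(window), sorted(window))             # 5 days (Jan 2-6), not 2 x 3 = 6; the windows share Jan 4

Enough overlaps like that among 124 test dates would bring 124 x 3 = 372 candidate days down to the 347 in the paper's table.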
 
The single biggest objection is that there're no hyperbolae surrounding their regression line in their graph - I bet their 95% confidence interval flares both up and down at both ends (so could be a negative correlation or a positive one).
Well, we see the data plot, so it's obvious the error bars must be enormous.
 
if one test is on the 3rd and the next is on the 5th, then they're counting 5 days (2nd-6th), not 6 days. Every overlap like this removes a day from the ×3 number.

Well shit. Now I have to go back and look for overlaps like that. Not tomorrow though, I'm going to Reno to buy some used wine making equipment. Maybe I'll see a UFO or Bigfoot along the drive over the Sierras. :D
 
Well shit. Now I have to go back and look for overlaps like that. Not tomorrow though, I'm going to Reno to buy some used wine making equipment. Maybe I'll see a UFO or Bigfoot along the drive over the Sierras. :D
Make sure you have a decent camera with you, would make a nice change of pace...
 
Make sure you have a decent camera with you, would make a nice change of pace...
A good camera lowers the chance of seeing a UFO. He needs to bring a webcam from 2002 if he wants to see something.

Remember, when recording a UFO, it is also customary to have sudden and rapid involuntary convulsions while screaming "What the fuck is that thing?" loud enough to drown out any useful noise.
 
External Quote:
A dataset comprising daily data (November 19, 1949 -April 28,1957) regarding identified transients, nuclear testing, and UAP reports was created (n=2,718 days).
This is the timeframe.
Unfortunately the UK source is currently down for maintenance.
But it's on the Wayback Machine.
Article:

Test | Date | Location | Yield (kt)
Hurricane | 3rd October, 1952 | Montebello Island | 25
Totem 1 | 14th October, 1953 | Emu Field | 10
Totem 2 | 26th October, 1953 | Emu Field | 8
Mosaic 1 | 16th May, 1956 | Montebello Islands | 15
Mosaic 2 | 19th June, 1956 | Montebello Islands | 60
Buffalo 1 | 27th September, 1956 | Maralinga | 15
Buffalo 2 | 4th October, 1956 | Maralinga | 1.5
Buffalo 3 | 11th October, 1956 | Maralinga | 3
Buffalo 4 | 21st October, 1956 | Maralinga | 10
Antler 1 | 14th September, 1957 | Maralinga | 1
Antler 2 | 25th September, 1957 | Maralinga | 6
Antler 3 | 9th October, 1957 | Maralinga | 25
Those are the same as the ones you found. The rest is "minor" or outside the time frame.
 
From the above list of dates, I get 39(?) tests: that's excluding the 9/21/1955 underwater test, and including the 1/19/1957 rocket launch test, as it would still be considered "above ground". I think. And there's an unnumbered test thrown in between 21 and 22. So my logic is we have tests #2-40 that fit our dates. That's 39; minus the underwater test makes 38, but add the unnumbered test back and I get 39. Please correct me as needed.
I would not include the unnumbered test, as there was no detonation.
That's 38 tests. Note that number 15 has a yield of under 1 t of TNT.

Name | Date | Delivery | Yield
2 (Joe 2) | 24 September 1951 | tower, weapons development | 38.3 kt
3 (Joe 3) | 18 October 1951 | air drop, weapons development | 42 kt
4 Usilennaya (reinforced?) (Joe 4) | 12 August 1953 | tower shot, weapons development | 400 kt
5 Tatyana (Joe 5) | 23 August 1953 | air drop, weapons development | 28 kt
6 (Joe 6) | 3 September 1953 | air drop, weapons development | 5.8 kt
7 | 8 September 1953 | air drop, weapons development | 1.6 kt
8 (Joe 7) | 10 September 1953 | air drop, weapons development | 4.9 kt
9 (Joe 8) | 14 September 1954 | air drop, military exercise | 40 kt
10 | 29 September 1954 | atmospheric, weapons development | 200 t
11 | 1 October 1954 | atmospheric, weapons development | 30 t
12 (Joe 9) | 3 October 1954 | atmospheric, weapons development | 2 kt
13 (Joe 10) | 5 October 1954 | dry surface, weapons development | 4 kt
14 (Joe 11) | 8 October 1954 | atmospheric, weapons development | 800 t
15 | 19 October 1954 | tower, weapons development | less than 0.001 kt
16 (Joe 12) | 23 October 1954 | atmospheric, weapons development | 62 kt
17 (Joe 13) | 26 October 1954 | atmospheric, weapons development | 2.8 kt
18 (Joe 14) | 30 October 1954 | air drop, weapons development | 10 kt
19 (Joe 15) | 29 July 1955 | dry surface, weapons development | 1.3 kt
20 (Joe 16) | 2 August 1955 | dry surface, weapons development | 12 kt
21 | 5 August 1955 | dry surface, weapons development | 1.2 kt
unnumbered #1 | 21 September 1955 | dry surface | no yield
22 (Joe 17) | 21 September 1955 | underwater, weapon effect | 3.5 kt
23 (Joe 18) | 6 November 1955 | air drop, weapons development | 250 kt
24 Binarnaya (Binary)? (Joe 19) | 22 November 1955 | air drop, weapons development | 1.6 Mt
25 Baykal (Joe 20) | 2 February 1956 | high alt rocket (30–80 km), weapon effect | 300 t
26 (Joe 21) | 16 March 1956 | dry surface, weapons development | 14 kt
27 (Joe 22) | 25 March 1956 | dry surface, weapons development | 5.5 kt
28 (Joe 23) | 24 August 1956 | tower, weapons development | 27 kt
29 (Joe 24) | 30 August 1956 | air drop, weapons development | 900 kt
30 (Joe 25) | 2 September 1956 | air drop, weapons development | 51 kt
31 (Joe 26) | 10 September 1956 | air drop, weapons development | 38 kt
32 (Joe 27) | 17 November 1956 | air drop, weapons development | 900 kt
33 (Joe 28) | 14 December 1956 | air drop, weapons development | 40 kt
34 ZUR-215 (Joe 29) | 19 January 1957 | high alt rocket (30–80 km), weapon effect | 10 kt
35 (Joe 30) | 8 March 1957 | air drop, weapons development | 19 kt
36 (Joe 31) | 3 April 1957 | air drop, weapons development | 42 kt
37 (Joe 32) | 6 April 1957 | air drop, weapons development | 57 kt
38 (Joe 33) | 10 April 1957 | air drop, weapons development | 680 kt
39 (Joe 34) | 12 April 1957 | air drop, weapons development | 22 kt
40 (Joe 35) | 16 April 1957 | air drop, weapons development | 320 kt
 
For the test dates, I've got some of the info in a Google sheet, I have to sort out how to share it without doxing myself
Export as .csv, upload here (might need renaming to .txt).

Or remove all formatting and links, left-align everything, and paste here.
 
The more I tried to parse all the dates and guess at what might a nuclear test date and what might not, the more I was struck by how imprecise the methodology was.

The arbitrary window for correlation was "test date +/- 1 day."

However, both the Soviet and UK (Australia) test sites are across the international date line from California; for half of any given date, those locations are off by one, say Feb. 2 instead of Feb. 1. A bomb detonated in Kazakhstan early on Feb. 2 is occurring on Feb. 1 for Palomar evening plates.

For U.S. tests, the time differences aren't quite as stark, but we're still assigning the value of a whole day to a transient that might have been recorded at 11:59 p.m. or 12:01 a.m. or a bomb that was detonated at 11:59 p.m. or 12:01 a.m. For example, I record a transient at 12:01 a.m. on Feb. 1 and detonate a bomb at 11:59 p.m. on Feb. 2 and they count at +/- 1 day, even though they're almost 48 hours apart.

Ideally the observation times for all the transients and all the detonations would be regularized to something like UTC for comparison. But then what time range do you credit for a correlation? +/- 36 hours? And why?
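
If even approximate shot times could be found, the normalization itself is trivial, e.g. with Python's zoneinfo (the times below are made up, and Asia/Almaty stands in for the Semipalatinsk site):
Code:
from datetime import datetime
from zoneinfo import ZoneInfo

shot  = datetime(1956, 2, 2, 9, 0, tzinfo=ZoneInfo("Asia/Almaty"))            # hypothetical shot time
plate = datetime(1956, 2, 1, 23, 30, tzinfo=ZoneInfo("America/Los_Angeles"))  # hypothetical exposure

print(abs(shot - plate))   # only a few hours apart, even though the calendar dates differ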
 
Measuring it in hours rather than by calendar dates would make more sense when it comes to any global event that might involve the international date line. Of course, their intention might merely have been to make a preconceived point, not to "make more sense".
 
Measuring it in hours rather than by calendar dates would make more sense when it comes to any global event that might involve the international date line. Of course, their intention might merely have been to make a preconceived point, not to "make more sense".
The problem is that getting exact times of nuclear tests requires more work and may not be possible.
 