Transients in the Palomar Observatory Sky Survey

Not really. The 'small subset that didn't' can also be explained by emulsion flaws, and almost certainly this is the correct explanation.
That's a possibility, sure, but it's an assertion, not a demonstrated result. The authors didn't fail to consider emulsion flaws; they examined known instrumental issues and found some residuals that didn't fit those patterns. Saying "almost certainly" without new data is just another prior dressed as a conclusion. The correct next step would be replication or targeted testing, not assumption.
 
But they have not ruled anything out. They admit that there is no difference between the anomalies caused by emulsion flaws and the ones supposedly caused by imaginary satellites.

From here:
https://iopscience.iop.org/article/10.1088/1538-3873/ae0afe

The line you quoted: "profile sharpness alone cannot conclusively distinguish between artifact and astrophysical origin" — doesn't mean "nothing was ruled out." It means that this single metric can't serve as definitive proof on its own. That's transparency, not failure.

The authors systematically ruled out known instrumental and astronomical causes. What remained were residuals that didn't match those profiles. That's how you identify candidate anomalies: not by assuming they're real, but by refusing to assume they're impossible.
 
The authors didn't fail to consider emulsion flaws; they examined known instrumental issues and found some residuals that didn't fit those patterns.
No, they didn't.
The authors systematically ruled out known instrumental and astronomical causes
No, they didn't. What they found were some possible patterns in the flaws, which may or may not be significant. These patterns could be caused by the processing of the plates, which disappeared when they changed the emulsion.
 
Quoting Beku-mant:
Here is my current understanding. In 2022, they performed a process to identify transient sources. It began by checking if a source was in the POSS-I plate, but not in either of two major modern image collections (specifically Pan-STARRS DR2 and Gaia EDR3). This resulted in 298,165 sources. They then tried to see if there are matches in other modern catalogs, and they found cross-matches within 5 arcseconds for 288,770 of them. Those cross-matches are produced algorithmically, and for whatever reasons, there is some uncertainty about whether those cross-matches, or what percentage of them, are real matches. If they are real matches, it means they are normal astronomical bodies that could be of interest for non-UFO-related astronomical research. They then also reduced the remaining ~9,000, by confirming some subset of them to be digital artifacts, and were able to identify some others as normal things like asteroids. Finally, they have 5,399 sources for which they have no explanation aside from emulsion artifacts, technosignatures, or some unknown phenomenon.
So the data has gone from 298165 anomalies to 5,399 anomalies, which they admit could be emulsion artifacts.

For the shadow study, they use any of those original 298,165 that didn't have cross-matches specifically in Gaia or Pan-STARRS. This means some of them did have cross-matches in other catalogs. However, some unknown percentage of those cross-matches could be false matches, and thus could still be unconventional transients. But they then restrict their study to only the northern hemisphere, which leaves 106,339.
So for the Earth shadow study, they used a much larger subset, which may or may not have astronomical cross-matches. This is odd: the 'shadow deficit' effect doesn't play by the rules.

With the 5,399 curated transients (the ones that remain either mysterious or emulsion artifacts), I did a preliminary check, and found about 42.6 expected to be in shadow versus about 40.3 observed in shadow (so a "deficit" of about 2.3), although I can't claim certainty that I carried that out correctly. That is a very insignificant result, but that may just be because at this dataset size, the noise dominates.
So the 5399 anomalies that have been winnowed out from the data do not show a significant effect in the Earth's shadow. Does that mean they cannot be satellites? Or should we look elsewhere for an explanation for the shadow effect?
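For what it's worth, the (in)significance of that deficit can be sanity-checked with a one-line Poisson estimate. A minimal sketch, using the preliminary 42.6 expected / 40.3 observed figures from the post above (those are the poster's own numbers, not the paper's):

```python
import math

# Preliminary in-shadow counts for the 5,399-source sample
# (poster's own estimates, not from the paper).
expected = 42.6
observed = 40.3

# Treat the in-shadow count as Poisson: one-sigma noise is ~sqrt(expected).
z = (observed - expected) / math.sqrt(expected)
print(f"z = {z:.2f}")  # about -0.35, i.e. well under one sigma
```

At this sample size a "deficit" of ~2 events is pure noise; the deficit would have to reach roughly 13 events (two sigma) before it meant anything.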
 
No, they didn't.

No, they didn't. What they found were some possible patterns in the flaws, which may or may not be significant. These patterns could be caused by the processing of the plates, which disappeared when they changed the emulsion.
That's not accurate. They did evaluate instrumental causes, including emulsion issues, and documented those analyses in the methods section.

The claim that "these patterns could be caused by processing changes" is itself a hypothesis, not a demonstration. You're speculating without reproducing the work or offering comparative data.

The authors, by contrast, made their data and methods transparent so others could test exactly that. That's what makes it science -- not certainty, but reproducibility.
 
...the links to UFO reports and nuclear tests were unnecessary and methodologically weak. Those are narrative flourishes, not statistical findings, and the paper would stand better without them.

Villaroel et al. chose to include them.
I'm not sure Villaroel considers those claimed correlations unnecessary. She has been noticeably eager to discuss her ETI hypothesis with relatively high-profile "aliens are here!" proponents whose past claims might be seen as being less than soundly based on reliable evidence.

For example, talking with Ross Coulthart, NewsNation video "Exclusive: Data showing possible nonhuman intelligence passes peer review", posted on YouTube 21 October 2025:

External Quote:

ROSS COULTHART:
This is momentous Beatriz, I know how hard it is to get anything on UAPs past peer review, you've basically got an article which has survived the scrutiny of all of the peer review experts, essentially an article suggesting that there are artificial, constructed objects that were reflective specular [sic?] in Earth orbit before humans sent up objects into space.

BEATRIZ VILLAROEL:
Exactly...
From approx. 8:00 minutes in,

External Quote:

COULTHART
But it's a reasonable conclusion isn't it Doctor that, if these are artificial objects [Villaroel, "Yes,"] some civilisation has constructed them

VILLAROEL
Yes, I would say so. I mean it's not ours...


Source: https://www.youtube.com/watch?v=zKXq-QQ9FUw&t=132s


Another astronomer, Carl Sagan, once said something along the lines of "Extraordinary claims require extraordinary evidence". Villaroel's claims are extraordinary.
 
Another astronomer, Carl Sagan, once said something along the lines of "Extraordinary claims require extraordinary evidence". Villaroel's claims are extraordinary.
Indeed. But sadly even standbys like this are once again being falsely (and one-sidedly) attacked by UAP grifters and their supporters.

This tweet from @disclosureorg exemplifies this unfortunate trend:
The flaw in the argument that "extraordinary claims require extraordinary evidence" is that it replaces scientific reasoning with philosophical bias. In science, all evidence should be judged by the same methodological standards regardless of how unconventional the claim appears. A phenomenon does not become any less real because it challenges existing theories. By demanding an undefined and subjective level of "extraordinary" proof, skeptics create a logical trap where no evidence can ever be sufficient. They assume the phenomenon cannot exist, and then use this assumption to reject any evidence that could prove it does.
 
I find the iopscience paper insufficiently detailed and not easily reproducible, which is remarkable given that the data is public.

External Quote:
To add to the intrigue, Solano et al. (2023) recently reported a bright triple transient event occurring on 1952 July 19, found among a set of ∼5000 short-lived POSS-I transients (Solano et al. 2022). This highly curated data set, in which diagnostics based on photometry and morphometric parameters have been carefully applied to the sample to reduce false positives (e.g., plate defects), suggests that the phenomenon of multiple transients can be found even when stringent diagnostic criteria are applied.
External Quote:
For that reason, we shall use carefully selected transient samples in Solano et al. (2022), which average 167 transients per plate and have been matched to several modern surveys to remove variable stars, asteroids, and comets.
5000/167=30, they used a subset of 30 plates. They don't list them in this paper.
Edit: the paper does not use the 5000 data set at all, instead going for a 100,000 data set.

But later:
External Quote:
We base our analysis on the catalog of 298,165 short-duration transients presented in Solano et al. (2022), detected in red POSS-I plates with typical exposure times of 45–50 minutes. These transients were identified using an automated pipeline developed as part of the VASCO project. For full details on the detection methodology, data characteristics, and vetting steps, we refer readers to Solano et al. (2022).

From this data set, we search for spatial groupings of transients within square boxes of varying sizes, typically ranging from a few arcminutes up to 20′–30′ per side (see typical sizes in Table 2). For each group, we evaluate whether the positions of the transients fall along a straight line (or more precisely a narrow band), within astrometric uncertainties.
Now they're up to 300,000 transients, when they said they used 5000? This is no longer the "carefully selected" sample with 167 transients per plate on average.
 
One more contradiction:
External Quote:
they are located near the plate edge, where defects are known to accumulate
External Quote:
Plate defects, by contrast, are expected to be randomly shaped and distributed.
"accumulate near the plate edge" is not a uniform random distribution!

External Quote:
Furthermore, we note that some objects that initially appear point-like in DSS images may exhibit subtle asymmetries or deviate from a stellar PSF in the higher-resolution SuperCOSMOS images, leading us to reject them. This procedure helps ensure that the remaining candidates are not spurious artifacts introduced during digitization.
This ensures nothing. Defects that are randomly asymmetrical can also be randomly round (or asymmetrical at a scale beyond the resolution of the scanner). Also:
External Quote:
In several cases, transients initially appearing as point sources in DSS were revealed—through SuperCOSMOS images—to be either scanning artifacts or round defects likely caused by emulsion flaws.
So, is round good or bad?

External Quote:
To date, no study has systematically quantified the fraction of single-plate detections that are authentic transient phenomena versus coincidentally star-like emulsion defects.
And neither does this study.
They have no idea about the quality of their data.
They have no idea whether "authentic transient phenomena" actually exist.

They do not know what "normal" is, but conclude "abnormal" based on assumptions they contradict in their own paper: they're using a data set that other research (which they cite!) has indicated contains false positives, and their "normal" is based on a uniform random distribution that does not match the actual defect distribution.
 
A side note on their altitudes:

"42,164 km altitude" — that's not the altitude of a geostationary satellite, it's the radius of its orbit

External Quote:
We compare the expected and observed rates for two different altitudes capable of producing PSF-like transients, namely 42,164 km and 80,000 km.
If you consider only ¼ of the area, you'll only find ¼ of the data points, with statistical error. This shows the "orbit" likely wasn't higher.
Now run the analysis for lower orbits!
Do it automated to give us a continuous graph!
And if that graph has a steady fall-off all the way to the ground (or 100% shadow coverage, whichever comes first), you'll have disproven that the data correlates with an orbit. If it does correlate with an orbit, you would be able to identify the change from that graph.
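To illustrate the fall-off being asked for, here is a minimal geometric sketch: the shadow modelled as a cylinder of Earth's radius pointing anti-sunward, and the satellites as a uniformly populated shell. This ignores umbra taper, the 50-minute exposure, and observer geometry, so it's only a sketch of the shape of the curve, not a reproduction of the paper's calculation:

```python
import math

R_EARTH_KM = 6371.0

def shadow_fraction(altitude_km):
    """Fraction of a uniform shell at altitude_km lying inside Earth's shadow,
    crudely modelled as a cylinder of Earth's radius (sketch only: ignores
    umbra taper, exposure duration, and where the telescope actually points)."""
    r = R_EARTH_KM + altitude_km       # geocentric radius of the shell
    alpha = math.asin(R_EARTH_KM / r)  # half-angle of the shadowed cap
    return (1 - math.cos(alpha)) / 2   # anti-solar cap area / full sphere

# A continuous fall-off from low orbit up past GEO:
for h in (400, 2000, 10000, 35786, 80000):
    print(h, round(shadow_fraction(h), 5))
```

The fraction falls smoothly and monotonically with altitude, so comparing only two altitudes will always show "fewer in shadow" at the higher one; that by itself discriminates nothing.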

Note also that the shadow avoidance calculations are not shown, and are not informed by the actual distribution of defects on the plates.
If they used the "uncurated" dataset for this analysis, which is presumed to contain mostly false positives, then the analysis is a priori meaningless, except to show the nonuniform random distribution of this dataset.

Their definition of "in shadow" is sufficiently unclear to make reproducing this needlessly hard. They don't even explain how they arrive at the "expected fraction": does it cover the shadow path over the 50-minute exposure? Is it computed for the actual plate sample? Using a hemispherical coverage area, when the telescope is unlikely to point low east or low west (and is hence more likely to look into the shadow than expected), already divorces the "expectation" from reality.
 
https://iopscience.iop.org/article/10.1088/1538-3873/ae0afe#paspae0afes4
External Quote:
We base our analysis on the catalog of 298,165 short-duration transients presented in Solano et al. (2022), detected in red POSS-I plates with typical exposure times of 45–50 minutes.
Looking that up:
Article:
We found 298 165 sources visible only in POSS I plates, out of which 288 770 had a cross-match within 5 arcsec in other archives (mainly in the infrared)

Which means 288,770 of these "298,165 short-duration transients" were not transients at all, but likely actual astronomical objects!

If they ran the shadow analysis on this dataset, it's garbage!
In that case, it just shows that stars are not equally distributed in the sky (Milky Way!), and that telescopes like to look up rather than low down where the atmosphere is thicker.
 
Unless you've reproduced the data and ruled out all non-instrumental causes, you're just restating your prior belief.
Reproducing the data might be possible: say, one finds which plate defect caused the unexplained transients (or, who knows, one finds undeniable proof of alien satellites, or of the psychic astronomer). But 'ruling out all non-instrumental causes' is proving a negative, which is impossible to do with inductive reasoning, on first principles. So you're asking an impossible feat of poor @Eburacum :eek: Maybe you wanted to write 'or' instead of 'and'? But you'd have been better off just by not writing the impossible 'ruling out' part.
 
I agree with @orianda that artificial pre-Sputnik satellites are not 'ruled out'. Neither are emulsion flaws, as they admit. The emulsion flaw explanation is much more likely, so remains the default.

It would be interesting to examine the 'shadow effect', however; it apparently only happens in the large dataset, which has already been provisionally eliminated. Either that process of elimination was spurious, or there is some other explanation; perhaps an uneven distribution of emulsion flaws according to the time or date of exposure.
 
Either that process of elimination was spurious, or there is some other explanation; perhaps an uneven distribution of emulsion flaws according to the time or date of exposure.
I am provisionally of the opinion that the 'shadow effect' is caused by methodological error, relating to the sky area the telescope covered that was in shadow vs not in shadow. Their estimate here is crude and certainly incorrect by a large margin, as explained above.
 
One way to test this would be to display all the 298,165 anomalies on a single celestial map, and see if there are any blatant distribution biases. It may be the case that the 'shadow effect' is a consequence of a very uneven or patchy distribution of emulsion flaws.
 
all the 298,165 anomalies
that's not the number, though
[Table 4 from the paper: in-shadow counts N with observed and expected fractions]

If I divide N by the total number, I should get the "observed fraction f_obs of VASCO transients in shadow", so N/f_obs should give me the total.
79/0.00074=106757
349/0.00328=106402
so there were actually only about 106500 samples used in the calculation,
External Quote:
We use the transient candidates from Solano et al. (2022), but with the additional requirement that they have no counterparts within 5″ in Gaia, Pan-STARRS and NeoWise. Furthermore, we restrict our analysis to objects in the northern hemisphere (decl. > 0°). This yields a sample of 106,339 transients, which we use for our study.
and either this is an error by a factor of 3, or it's unclear where this number came from.
79/298165=0.00026 observed, 0.0032 expected (supposedly)
349/298165=0.00117 observed, 0.0115 expected

This is off by a factor of 10. Given that the majority of these 298165 samples are known infrared stars, this should not be off by a factor of 10.
But after Solano(2022) eliminates the stars, there are fewer than 10,000 samples left, and based on that, f_obs would overshoot the expected value by a factor of 3.
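The back-calculation above is trivial to verify (the 79/0.00074 and 349/0.00328 pairs are the table values quoted earlier):

```python
# Back-solve the total sample size from Table 4's in-shadow counts N and
# observed fractions f_obs: if f_obs = N / total, then total = N / f_obs.
pairs = [(79, 0.00074), (349, 0.00328)]  # (N in shadow, f_obs)
totals = [n / f for n, f in pairs]
print([round(t) for t in totals])  # [106757, 106402] -- consistent with 106,339
```

Both back-solved totals land within ~0.4% of the stated 106,339-source sample, so the table was computed on that subset, not on 298,165.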

It's just a dumpster fire, front to back.
 
so there were actually only about 106500 samples used in the calculation
External Quote:
We use the transient candidates from Solano et al. (2022), but with the additional requirement that they have no counterparts within 5″ in Gaia, Pan-STARRS and NeoWise. Furthermore, we restrict our analysis to objects in the northern hemisphere (decl. > 0°). This yields a sample of 106,339 transients, which we use for our study.
I'm internally screaming at this.
Solano(2022) ran its data against Gaia EDR3 and Pan-STARRS DR2 in the "Selection" step, which resulted in the 298165 transients. In the "Analysis" step, they ran this data against more databases:
Article:
These searches significantly reduced the number of candidates (from 298 165 to 9 395). A significant number (∼59 per cent) of the identified sources were visible in infrared catalogues (Neowise, CatWISE2020, unWISE, and the infrared catalogues included in VOSA) but not in the optical (KIDS, Skymapper, and the optical catalogues included in VOSA) or the ultraviolet (GALEX).
WISE (Wide-field Infrared Survey Explorer) is an infrared telescope. So requiring "no counterparts within 5″ in Gaia, Pan-STARRS and NeoWise" bypasses many of the steps performed in Solano(2022) to eliminate false positives. We know Solano(2022) found at least (298165−9395)×41% = 118,395 false positives in the visible+UV spectrum, which means their counterparts are not in "Gaia, Pan-STARRS and NeoWise". The restriction to the northern hemisphere trims this number down, but it still means that the overwhelming majority of data points in the 106,339 set are actual astronomical objects that don't care whether they're in Earth's shadow or not.

This should not be off the expected value by a factor of 4.

When you have 96,000 astronomical objects, you cannot add 10,000 UFOs such that the sample density outside the shadow is 4 times the sample density inside the shadow, if it wasn't like that to begin with. Because the shadow is so small, you'd need to quadruple the number, i.e. find about 300,000 UFOs; you can't get there with 10,000 or fewer.
The "expected value" that Villaroel is using in her shadow calculation must therefore be false.

Using her own sources, her proof is false, the 'shadow effect' does not exist as claimed.


A proper proof would:
1) explicitly list the criteria for "in shadow"
2) compute the coverage of shadow per plate area total [*]
3) compute the coverage of shadow per all astronomical objects
4) compute the coverage of shadow on a random distribution of points across all plates that conforms to the actual plate defect distribution (more near the edges etc.)
and then compare that with what was observed in the experimental data used in the study.

What we have in the study is misleading.

And why does she use 3 different data sub(sets) without any explanation of the choices involved?

Edit: footnote added
[*] They sort of do this, in a Monte Carlo kind of way.
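Point 4 can be sketched in a few lines of Monte Carlo. The edge weighting below is entirely invented for illustration (the real POSS-I defect distribution would have to be measured from the plates); the point is only that a non-uniform defect distribution shifts the "expected" fraction away from the naive uniform value:

```python
import random

random.seed(42)  # deterministic for reproducibility

def edge_weighted_point(edge_bias=2.0):
    """Sample a point on a square plate (coords in [-1, 1]) with density rising
    toward the edges. edge_bias is a made-up parameter for illustration only."""
    while True:  # rejection sampling, weight ~ 1 + edge_bias * max(|x|, |y|)
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if random.random() * (1 + edge_bias) < 1 + edge_bias * max(abs(x), abs(y)):
            return x, y

# How many points land in a central box covering 25% of the plate area?
n = 20_000
hits = 0
for _ in range(n):
    x, y = edge_weighted_point()
    if abs(x) < 0.5 and abs(y) < 0.5:
        hits += 1
frac = hits / n
print(frac)  # noticeably below the 0.25 a uniform distribution would give
```

With this (invented) edge bias, the central quarter of the plate collects only ~18% of the points instead of 25%, so a uniform-distribution "expectation" is already materially off before any satellites enter the picture.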
 
Article:
a set of ∼5000 short-lived POSS-I transients (Solano et al. 2022). This highly curated data set, in which diagnostics based on photometry and morphometric parameters have been carefully applied to the sample to reduce false positives (e.g., plate defects), suggests that the phenomenon of multiple transients can be found even when stringent diagnostic criteria are applied.

tl;dr in this paper, we examine what we can find when we leave the false positives in the data set

seriously, this is what she does

only for her shadow calculation does she use 1 out of the 7 object catalogs used in Solano(2022), missing a huge number of false positives

her table 1 of "aligned groups" uses none of the mitigations in the "Analysis" section of Solano(2022).

[error deleted]
 
External Quote:
For that reason, we shall use carefully selected transient samples in Solano et al. (2022), which average 167 transients per plate and have been matched to several modern surveys to remove variable stars, asteroids, and comets.
5000/167=30, they used a subset of 30 plates. They don't list them in this paper.
106339/167=635 to 638 plates

So it looks like she uses this data set (which is found by using the unvetted data set of Solano(2022) and only validating it against neoWISE) throughout.

It should not have taken me this long to establish that.

The numbers "5000" and "298165" are red herrings and don't apply to this paper at all.
 
Similarly, I do not know if Dr. Villarroel is a space alien. Also similarly, I have no evidence to suggest it is the case.
It is scientifically untenable to assume that all people are either authentic humans or all space aliens. A reasonable working assumption is that both populations are present in some unknown proportion.
;)
(yes, the italics are quoted verbatim)
"space aliens are present" is not a "reasonable assumption" without evidence
and neither is the assumption of pre-1950 satellites in orbit
 
Defects that are randomly asymmetrical can also be randomly round (or asymmetrical at a scale beyond the resolution of the scanner).
I suggest that round defects are probably caused by a bubble in the emulsion, while asymmetric ones are more likely to be the result of damage to the plate, either pre- or post-exposure.
 
Just asking- might there be a difference in sensitivity between the blue and red plates, perhaps because of the different wavelengths of the light they are sensitive to? Or might there be an inherent difference in proneness to emulsion flaws between the two plates?

I ask because Villaroel claims the absence of supposed transients on the blue plates effectively rules out optical ghosting;

External Quote:
So we had two candidates that are highly interesting, that the paper resulted in, ah, and ah, there's still one little caveat, that for these two candidates there's a miniscule chance that it might be su- something called optical ghosting, on the erm, in the instruments, erm, what speaks against it is because we looked, we, at the blue sensitive images and you don't see anything there, if it would be optical ghosting you would also see, with the same set up of the instruments you would see something there but we don't, however we always, always leave that little caveat...
-NewsNation discussion with Ross Coulthart as per post #368

(In passing, interesting that there are two "highly interesting" candidates- presumably the tens of thousands of other potential alien spacecraft Beatriz alludes to in @HoaxEye's post #383 and elsewhere are of a less interesting variety).
 
Just asking- might there be a difference in sensitivity between the blue and red plates, perhaps because of the different wavelengths of the light they are sensitive to? Or might there be an inherent difference in proneness to emulsion flaws between the two plates?

I ask because Villaroel claims the absence of supposed transients on the blue plates effectively rules out optical ghosting;
"optical ghosting", as far as I can tell, is a type of lens flare. So if the lenses are coated to prevent reflections, that might work better in UV than in near IR. But generally, I'd expect that to be more of a camera geometry thing than an emulsion thing. The blue emulsion is more sensitive, but the exposure time is reduced to compensate.
External Quote:
So we had two candidates that are highly interesting, that the paper resulted in, ah, and ah, there's still one little caveat, that for these two candidates there's a miniscule chance that it might be su- something called optical ghosting, on the erm, in the instruments, erm, what speaks against it is because we looked, we, at the blue sensitive images and you don't see anything there, if it would be optical ghosting you would also see, with the same set up of the instruments you would see something there but we don't, however we always, always leave that little caveat...
-NewsNation discussion with Ross Coulthart as per post #368

(In passing, interesting that there are two "highly interesting" candidates- presumably the tens of thousands of other potential alien spacecraft Beatriz alludes to in @HoaxEye's post #383 and elsewhere are of a less interesting variety).
The interesting candidates are the ones where she managed to line up 5 dots. The "line" is such that it requires a zigzag, a formation, or a huge satellite, if it is to be one.
 
106339/167=635 to 638 plates

So it looks like she uses this data set (which is found by using the unvetted data set of Solano(2022) and only validating it against neoWISE) throughout.

It should not have taken me this long to establish that.

The numbers "5000" and "298165" are red herrings and don't apply to this paper at all.
All jokes aside this is impressive work. I read through this study multiple times and couldn't for the life of me figure out what 298165 had to do with anything! Turns out I went in with a prior bias expecting all the data to be relevant to the claims being made.
 
Just asking- might there be a difference in sensitivity between the blue and red plates, perhaps because of the different wavelengths of the light they are sensitive to? Or might there be an inherent difference in proneness to emulsion flaws between the two plates?
There seems to be some evidence that red-sensitive plates (Kodak 103a-E) tended to show more noticeable emulsion-related artifacts compared to the blue-sensitive plates (Kodak 103a-O), but I think the longer exposure time of red plates plays a bigger role.
There are artefacts in blue plate images too, just not as many. I don't recall anyone checking the POSS-I blue plates in detail, though.
 
Sounds very much like 21st Century Ley Lines
Sort of. You can see in the table (where p_max is the width of the path) that the 5-dot options go to 10″ and 15″ (arcseconds), which corresponds to a width of about 2 miles/3 km at geostationary distance. If that's a single object, it's going to be very noticeable as it passes the moon, as it's roughly 30 times bigger than the ISS.

So she says, "it's a formation", because then she can add 2 more dots to a narrower line and advertise "5 on a line" and how significant that is.

If she took just the 5000 points from Solano (2022), I doubt she'd find much.
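The 2-3 km figure follows from the small-angle formula. A quick sketch, using the geostationary geocentric distance of 42,164 km as the range (itself an approximation, since the observer is on the surface):

```python
ARCSEC_PER_RAD = 206265  # arcseconds per radian (small-angle approximation)

def linear_size_m(angle_arcsec, distance_km):
    """Linear size in metres of something subtending angle_arcsec at distance_km."""
    return distance_km * 1000 * angle_arcsec / ARCSEC_PER_RAD

width = linear_size_m(15, 42164)  # a 15-arcsecond band at GEO distance
print(round(width))               # about 3100 m
print(round(width / 109))         # ISS is ~109 m across: roughly 28x wider
```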
 
And five spots on the same plate could have happened at any time, in any order, during the 50 minute exposure.

The chance that they happened at regular intervals in linear progression is very small, even if they are real events rather than emulsion anomalies.
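The intuition that chance alignments of five points are rare can be checked by simulation. This is a generic sketch (uniform points in a unit square, an arbitrary band half-width of 1% of the box side), not the paper's actual selection procedure:

```python
import random
import math

random.seed(1)

def max_perp_deviation(pts):
    """Max perpendicular distance of points from their best-fit (PCA) line."""
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    sxx = sum((p[0] - mx) ** 2 for p in pts)
    syy = sum((p[1] - my) ** 2 for p in pts)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pts)
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)  # principal-axis direction
    nx, ny = -math.sin(theta), math.cos(theta)    # unit normal to that line
    return max(abs((p[0] - mx) * nx + (p[1] - my) * ny) for p in pts)

trials = 100_000
tol = 0.01  # band half-width as a fraction of the box side (made up)
hits = sum(
    max_perp_deviation([(random.random(), random.random()) for _ in range(5)]) < tol
    for _ in range(trials)
)
print(hits / trials)  # chance that 5 random points line up this tightly
```

With these made-up numbers only a handful of trials in 100,000 produce such a tight alignment; but note the paper searches many box positions and sizes across hundreds of plates, so the effective number of trials in the real search is enormous.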

 
Sort of. You can see in the table (where p_max is the width of the path) that the 5-dot options go to 10″ and 15″ (arcseconds), which corresponds to a width of about 2 miles/3 km at geostationary distance. If that's a single object, it's going to be very noticeable as it passes the moon, as it's roughly 30 times bigger than the ISS.

So she says, "it's a formation", because then she can add 2 more dots to a narrower line and advertise "5 on a line" and how significant that is.

If she took just the 5000 points from Solano (2022), I doubt she'd find much.
Is there any reason to think that plate defects could appear preferentially in patterns, like, say, a line of spaced bubbles?

Another thought: if the transients represent glints off satellites, is there any inference we can draw from their spacing along the inferred line?
 
Reproducing the data might be possible: say, one finds which plate defect caused the unexplained transients (or, who knows, one finds undeniable proof of alien satellites, or of the psychic astronomer). But 'ruling out all non-instrumental causes' is proving a negative, which is impossible to do with inductive reasoning, on first principles. So you're asking an impossible feat of poor @Eburacum :eek: Maybe you wanted to write 'or' instead of 'and'? But you'd have been better off just by not writing the impossible 'ruling out' part.
Fair catch - in pure logic, yes, you can't prove a negative. But that's not what "ruling out" means in empirical work. In science we rule out in the provisional sense by exhausting known mechanisms within the resolution of the data. The authors did exactly that: they tested every instrumental and astronomical explanation available to them, found most fit, and a few that didn't.

When I said "rule out all non-instrumental causes," I obviously meant "all known instrumental causes." Otherwise every paper in astronomy would have to end with "we can't rule out gremlins in the optics." ‍:D
 
Is there any reason to think that plate defects could appear preferentially in patterns, like say a line of spaced bubbles.
An emulsion has to be applied in some fashion, I think probably with a roll coater. That could cause lines, but they'd be parallel to edges of the plate. Do we know if their selected portions are sometimes taken from a skewed section of the plate, or not?

Bubbles in an emulsion could be produced by agitating it too much, or by squeegeeing it through an application sponge. Unlike 007's preference, the best advice would be "stirred, not shaken".

Straight lines of asymmetric (non-bubble) emulsion damage could be caused by linear scratching, or perhaps by pressure from the edge of another plate or other object stacked on top.
 
An emulsion has to be applied in some fashion, I think probably with a roll coater. That could cause lines, but they'd be parallel to edges of the plate. Do we know if their selected portions are sometimes taken from a skewed section of the plate, or not?

Bubbles in an emulsion could be produced by agitating it too much, or by squeegeeing it through an application sponge. Unlike 007's preference, the best advice would be "stirred, not shaken".

Straight lines of asymmetric (non-bubble) emulsion damage could be caused by linear scratching, or perhaps by pressure from the edge of another plate or other object stacked on top.
If these were just random bubbles or damage artifacts, wouldn't we expect them to scatter across the plates arbitrarily, with differing densities, morphologies, and scales? Instead, they appear as tightly clustered, symmetric, 'star-like' points that vanish in later surveys.
 
An emulsion has to be applied in some fashion, I think probably with a roll coater. That could cause lines, but they'd be parallel to edges of the plate. Do we know if their selected portions are sometimes taken from a skewed section of the plate, or not?

Bubbles in an emulsion could be produced by agitating it too much, or by squeegeeing it through an application sponge. Unlike 007's preference, the best advice would be "stirred, not shaken".

Straight lines of asymmetric (non-bubble) emulsion damage could be caused by linear scratching, or perhaps by pressure from the edge of another plate or other object stacked on top.
see here:
About the field curvature: the focal surface of a Schmidt telescope is not flat, but curved. During exposure, plates are bent to match this curvature. After exposure, plates are flattened for scanning. This means the glass and emulsion could experience elastic deformation. As a result, there could be more artefacts like small bubbles that would look like (faint) stars.
Villaroel herself cites Hambly & Blair, noting that defects accumulate near the edges.
 
If these were just random bubbles or damage artifacts, wouldn't we expect them to scatter across the plates arbitrarily, with differing densities, morphologies, and scales? Instead, they appear as tightly clustered, symmetric, 'star-like' points that vanish in later surveys.
You would certainly expect random bubbles or damage artifacts to vanish in later surveys.

Yes, you'd expect defects to have different shapes. However, if you removed all the defects that were recognizable as defects (which they did), you'd be left with ambiguous ones that resembled point sources.

So neither of those two things is surprising. I'm not sure about "tightly clustered" - what are the numbers on that?
 