Digitized Sky Survey POSS-1

My last two brain cells finally bumped into each other and I realized how to do this. I can just steal someone else's working code.

It's done, but it's pretty slow, so I'll post the results tomorrow.
I remember why I switched to Rust lmao.

From the 5,399 sources, I get 39 in shadow according to this code.

Python:
import csv
import os
import glob
from astropy.io import fits
from astropy.time import Time
from earthshadow import get_shadow_radius, dist_from_shadow_center
from astropy.coordinates import SkyCoord
import astropy.units as u

# DATE-OBS on these plates is DD/MM/YY; build a 19xx FITS-style timestamp
def fits_time( date_obs, ut ) :
    date_split = date_obs.split( "/" )
    return f"19{date_split[2]}-{date_split[1]}-{date_split[0]}T{ut}"

def in_shadow( ra_value, dec_value, time ) :
    # True when inside the umbra at GEO, with a 2 degree safety margin
    radius = get_shadow_radius( orbit='GEO' )
    dist   = dist_from_shadow_center( ra_value, dec_value, time=time, obs='Palomar', orbit='GEO' )
    return dist < radius - 2*u.deg

def get_metadata():
    output_csv_file = "extracted_data.csv"
    metadata_list = []
 
    fits_files = glob.glob('fits_files/*.fits')
 
    for fits_file in fits_files:

        with fits.open(fits_file) as hdul:

            header = hdul[0].header

            c = SkyCoord( header["OBJCTRA"], header["OBJCTDEC"], unit=(u.hourangle, u.deg), frame='icrs')
        
            ra_deg  = c.ra.degree
            dec_deg = c.dec.degree

            metadata = {
                "file_name" : os.path.basename(fits_file),
                "DATE-OBS"  : header["DATE-OBS"],
                "UT"        : header["UT"],
                "SITELAT"   : header["SITELAT"],
                "SITELONG"  : header["SITELONG"],
                "OBJCTRA"   : header["OBJCTRA"],
                "OBJCTDEC"  : header["OBJCTDEC"],
                "EQUINOX"   : header["EQUINOX"],
                "EPOCH"     : header["EPOCH"],
                "EXPOSURE"  : header["EXPOSURE"],
                "SHADOW"   : in_shadow(
                    ra_deg,
                    dec_deg,
                    time=Time( fits_time( header["DATE-OBS"], header["UT"] ), format='fits') )[0]
            }

            metadata_list.append( metadata )

    with open(output_csv_file, 'w', newline='') as csvfile:
        fieldnames = list(metadata_list[0].keys())
        writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(metadata_list)

if __name__ == "__main__":
    get_metadata()

I am not sure how to figure out how many would be expected, accounting for location bias in the POSS-I data, the fraction of the sky in the shadow, etc.
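
One crude starting point, ignoring plate placement bias entirely, is to scale the total count by an assumed shadowed fraction of the sky and put a Poisson error bar on it. A minimal sketch, assuming the ~0.53% full-sky shadowed fraction at GSO that the paper's own Monte Carlo gives (quoted further down the thread):

Python:
import numpy as np

# Back-of-envelope expectation for the 5,399-source sample, assuming a
# uniform ~0.53% full-sky shadowed fraction at GSO. This ignores plate
# placement bias, so treat it as a sanity check only.
n_total  = 5399
f_shadow = 0.0053          # assumed shadowed fraction of the sky
n_obs    = 39              # shadowed sources found by the code above

n_exp = n_total * f_shadow                # ~28.6 expected
z     = (n_obs - n_exp) / np.sqrt(n_exp)  # Poisson z-score, ~ +1.9

print(f"expected {n_exp:.1f} +/- {np.sqrt(n_exp):.1f}, observed {n_obs} ({z:+.1f} sigma)")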


More importantly: it's not incredibly small relative to LEO in terms of area. It's small as a percentage of the total area of the sphere at that distance.

This graph should show how the area shrinks in absolute terms (orange dashed) vs. relative to hemisphere area (blue). It's ChatGPT, but seems correct.

[Attached graph: shadow area in absolute terms (orange dashed) vs. as a fraction of hemisphere area (blue), by orbit radius]

At GEO it's 1% of the hemisphere, or 0.5% of the total sky.

The 39 out of 5,399 above is 0.7%, so a surplus rather than a deficit.

But they say:

External Quote:
To independently verify the number of transients located within Earth's shadow, we implemented a custom code
(using ChatGPT-assisted scripting) that follows a similar principles to EarthShadow. After validating its performance
on a subset of candidates from Villarroel et al. (2025), we applied it to the full sample. The resulting counts — 374
transients at 42,164 km and 57 at 80,000 km — are in good agreement with the results obtained using EarthShadow,
supporting the robustness of our shadow deficit measurement.

To estimate the statistical significance of the difference in transient detection rates within Earth's umbra at different altitudes, we compute Poisson uncertainties for the observed and expected fractions. At 42,164 km altitude, we expect N = 1223 transients in shadow out of 106,339 total, corresponding to an expected fraction of f_exp = 0.0115 ± 0.00033. However, we observe only N = 349 transients in shadow, yielding f_obs = 0.00328 ± 0.00018. The difference between these fractions is highly significant, with a significance level of 21.9σ, computed by combining the Poisson uncertainties.
1223/106,339 is 1.15%.
349/106,339 is 0.33%.
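
(Sanity check: the quoted 21.9σ does follow from those numbers if the two Poisson errors are combined in quadrature.)

Python:
import numpy as np

# Reproduce the paper's significance from the figures quoted above.
n_total = 106339
f_exp, err_exp = 1223 / n_total, 0.00033   # expected fraction and Poisson error
f_obs, err_obs = 349  / n_total, 0.00018   # observed fraction and Poisson error

sigma = (f_exp - f_obs) / np.hypot(err_exp, err_obs)
print(f"{f_exp:.4f} vs {f_obs:.5f} -> {sigma:.1f} sigma")   # ~21.9 sigma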

Oh, they use geocentric radius (a sphere defined by the radius from the center of the Earth), not altitude (a sphere defined by its height above a nominally spherical Earth). So 42,164 km is GEO (35,786 km of altitude plus 6,378 km of Earth radius).
106,339 detected transients, right? There are about 870 red plates. If spread evenly across all red-plate exposures, that's 122 per plate, or 2.4 per minute. These were either happening in bursts, or no one managed to notice the sky constantly flashing.
I could be wrong but weren't they pulling the transients from only the POSS-1?
 
Here is my most recent attempt at checking whether there might be a deficit in the 5,399 confirmed transients list. A small number of trials seemed to show that the number of transients in shadow at GEO might actually be less than expected, although I have to run many more trials and then think about how to interpret the results. Note that in the paper they also use a much larger dataset and take more factors into consideration.

If anyone is an expert in statistics and/or astronomy feel free to comment on how I'm doing this.

Python:
import glob
import numpy as np
import matplotlib.pyplot as plt
from astropy.io import fits
from astropy.time import Time
from earthshadow import get_shadow_radius, dist_from_shadow_center
from astropy.coordinates import SkyCoord, SkyOffsetFrame
import astropy.units as u
from tqdm import tqdm
from scipy import stats

def time_str(date_obs, ut):
    date_split = date_obs.split("/")
    return f"19{date_split[2]}-{date_split[1]}-{date_split[0]} {ut}"

# See https://github.com/guynir42/earthshadow
def in_shadow( c, time):
    # True when inside the umbra at GEO, with a 2 degree safety margin
    radius = get_shadow_radius(orbit='GEO')
    dist = dist_from_shadow_center( c.ra.degree, c.dec.degree, time=time, obs='Palomar', orbit='GEO')
    return dist < radius - 2 * u.deg

def randomize_time_within_exposure( t, exposure ):
    return t + np.random.uniform( 0.0, exposure ) * u.minute

def randomize_position_on_plate_naive( center ):

    # hardcoded approximate value for Palomar Schmidt 48" plates used for POSSI
    plate_diameter_deg=6.6

    # plate radius
    r_max = plate_diameter_deg / 2.0

    # random radius and angle; sqrt makes the density uniform over the disk
    r = r_max * np.sqrt(np.random.random())
    theta = 2 * np.pi * np.random.random()

    # tangent-plane offsets from the plate center
    xi  = r * np.cos(theta) * u.deg
    eta = r * np.sin(theta) * u.deg

    off = SkyOffsetFrame(origin=center)
    pos = SkyCoord(xi, eta, frame=off ).icrs

    return pos

def extract_data(fits_files) :
    extracted_data = []
    for fits_file in fits_files:
        with fits.open(fits_file) as hdul:

            header = hdul[0].header

            extracted_data.append({
                'c_obj' : SkyCoord( # coordinate of the object (vanishing source)
                    ra=header["OBJCTRA"],
                    dec=header["OBJCTDEC"],
                    unit=(u.hourangle, u.deg),
                    frame='fk5',
                    equinox='J2000.0').transform_to('icrs'),
                'c_plate' : SkyCoord( # coordinate of the center of the plate
                    ra=f"{header['PLTRAH']}h{header['PLTRAM']}m{header['PLTRAS']}s",
                    dec=f"{header['PLTDECSN']}{header['PLTDECD']}d{header['PLTDECM']}m{header['PLTDECS']}s",
                    unit=(u.hourangle, u.deg),
                    frame='fk5',
                    equinox='J2000.0').transform_to('icrs'),
                'time' : Time( #observation time
                    time_str(
                        header["DATE-OBS"],
                        header["UT"]),
                    scale="ut1",
                    format="iso"),
                'exposure' : float( header["EXPOSURE"] ) # length of exposure in minutes
            } )

    return extracted_data

def run_trial( data, randomize_coords=False) :

    results = []
   
    for obs in data:
        t = randomize_time_within_exposure( obs[ 'time' ], obs[ 'exposure' ] )
        c = randomize_position_on_plate_naive( obs[ 'c_plate' ] ) if randomize_coords else obs[ 'c_obj' ]
        results.append( in_shadow( c, t )[ 0 ] )

    return results

if __name__ == "__main__":

    ###################################################################
    # N random trials

    N = 1000

    fits_files = glob.glob('fits_files/*.fits')
    extracted_data = extract_data(fits_files)

    # Random time within exposure window with real coords
    real_percents = []
    for i in tqdm(range(N), desc="Running real coordinate trials"):
        real_results = run_trial(extracted_data, randomize_coords=False)
        real_percents.append(np.mean(real_results) * 100)
    real_percent_mean = np.mean( real_percents )

    # Random time within exposure window with random coords on plate
    rand_percents = []
    for i in tqdm(range(N), desc="Running random coordinate trials"):
        random_results = run_trial(extracted_data, randomize_coords=True)
        rand_percents.append(np.mean(random_results) * 100)
    rand_percent_mean = np.mean(rand_percents)

    ####################################################################
    #Save the results

    # np.save writes a proper .npy file; ndarray.tofile would dump headerless raw bytes
    rand_percents = np.asarray( rand_percents, dtype=np.float64 )
    np.save( "rand_percents.npy", rand_percents )

    real_percents = np.asarray( real_percents, dtype=np.float64 )
    np.save( "real_percents.npy", real_percents )

    ####################################################################
    # significance

    # Welch's t-test
    t_stat, p_value_welch = stats.ttest_ind( real_percents, rand_percents, equal_var=False )

    # Mann-Whitney U test 
    u_stat, p_value_mw = stats.mannwhitneyu(real_percents, rand_percents, alternative='two-sided')

    # Cohen's d effect size
    pooled_std = np.sqrt((np.var(real_percents, ddof=1) + np.var(rand_percents, ddof=1)) / 2)
    cohens_d = (real_percent_mean - rand_percent_mean) / pooled_std

    # Convert p-values to sigma
    def p_to_sigma(p_value):
        return stats.norm.ppf(1 - p_value/2)
   
    sigma_welch = p_to_sigma(p_value_welch)
    sigma_mw = p_to_sigma(p_value_mw)
   
    print(f"Welch's t-test: t={t_stat:.4f}, p={p_value_welch:.6f} ({sigma_welch:.2f}σ)")
    print(f"Mann-Whitney U: U={u_stat:.0f}, p={p_value_mw:.6f} ({sigma_mw:.2f}σ)")
    print(f"Cohen's d: {cohens_d:.4f}")
   
    ####################################################################
    # Plot the results

    N_BINS = int( np.sqrt( 1.5*N ) )

    bins = np.linspace(
        min( np.min( rand_percents ), np.min( real_percents ) ),
        max( np.max( rand_percents ), np.max( real_percents ) ),
        N_BINS )

    fig, ax = plt.subplots(figsize=(12, 8))

    ax.hist(rand_percents, bins, color="seagreen",  alpha=0.5, label='Random Coord Mean Shadow Percent' )
    ax.hist(real_percents, bins, color="royalblue", alpha=0.5, label='Real Coord Mean Shadow Percent'   )

    ax.axvline(x=rand_percent_mean, color='green', linestyle='-', linewidth=2.5, label=f'Random Coord Mean: {rand_percent_mean:.3f}%')
    ax.axvline(x=real_percent_mean, color='blue',  linestyle='-', linewidth=2.5, label=f'Real Coord Mean: {real_percent_mean:.3f}%')

    # Add statistics text box
    stats_text = f"Welch's t-test: t={t_stat:.3f}, p={p_value_welch:.4f} ({sigma_welch:.2f}σ)\n"
    stats_text += f"Mann-Whitney U: U={u_stat:.0f}, p={p_value_mw:.4f} ({sigma_mw:.2f}σ)\n"
    stats_text += f"Cohen's d: {cohens_d:.3f}"

    # anchored at x=0.95, so the text must be right-aligned to stay inside the axes
    ax.text(0.95, 0.95, stats_text, transform=ax.transAxes, fontsize=11,
            verticalalignment='top', horizontalalignment='right',
            bbox=dict(boxstyle='round', facecolor='wheat', alpha=0.8))

    ax.set_title(f'Distribution Comparison for {N} Trials', fontsize=16)
    ax.set_xlabel('Overall Percentage In Earth\'s Shadow (%)', fontsize=12)
    ax.set_ylabel('Number of Trials', fontsize=12)
    ax.set_ylim(0, ax.get_ylim()[1] * 1.2)
    ax.legend(fontsize=10)

    plt.savefig( "random_plate_pos_v_real_plate_pos.png" )
    plt.show()
 
I have a Google Drive link now.

I am starting to add the red images, but I do have all the metadata snippets up.

Edit: I originally intended to compress the images to a tarball. It's been 10 hours since I started that and it didn't even get 1/3 of the way through. I'm just uploading them as-is instead.
I've maxed out the storage on my Google Drive and I haven't finished uploading the red images. I guess I'll just host the metadata.

I'm about half of the way through the blue and have 133 GB of data saved to disk. Yikes.

Outside of that, the shadow code was far too slow. The earthshadow calls are the main slowdown. I tried several different attempts at parallelization and caching, but it still continued to be slow as molasses. If I try manually writing it again, I'm using Rust.

Python:
import csv
import glob
import warnings
from datetime import datetime, timedelta
from concurrent.futures import ProcessPoolExecutor
from multiprocessing import cpu_count

import numpy as np

from earthshadow import get_shadow_radius, dist_from_shadow_center
import astropy.units as u

# Suppress specific ERFA warnings
warnings.filterwarnings("ignore", category=UserWarning, module='erfa.core')


# Suppress the polar motion warning
warnings.filterwarnings("ignore", message="Tried to get polar motions for times before IERS data is valid.*")

# Suppress the FutureWarning about setting the location attribute
warnings.filterwarnings("ignore", category=FutureWarning, message="Setting the location attribute post initialization will be disallowed in a future version of Astropy.*")

# Parameters
viewing_angle_deg = 6.6   # POSS-I plate field diameter, in degrees
num_points = 5000         # random points generated per plate

def in_shadow(ra_value, dec_value, time, orbit):
    # True when inside the umbra at the given orbit, with a 2 degree safety margin
    radius = get_shadow_radius(orbit=orbit)
    dist = dist_from_shadow_center(ra_value, dec_value, time=time, obs='Palomar', orbit=orbit)
    return dist < radius - 2*u.deg

def gen_rand_points(num_rand_points, csv_folder = 'poss_1/red/*.csv'):
    # Find all the csv files in the folder
    csv_files = glob.glob(csv_folder)

    # Make the list for our output values
    list_of_random_points = []

    for csv_file in csv_files:  # Process all files
        try:
            # Open the csv
            with open(csv_file, mode='r') as csvfile:
                # Empty dict to hold all values
                data_dict = {}

                # Make a csv reader
                csv_reader = csv.DictReader(csvfile)

                # Convert the key value pairs into a proper dictionary
                for row in csv_reader:
                    data_dict[row['Key']] = row['Value']

                # Check if required keys exist
                required_keys = ['DATE-OBS', 'PLATERA', 'PLATEDEC']
                missing_keys = [key for key in required_keys if key not in data_dict]
                if missing_keys:
                    raise KeyError(f"Missing required keys: {missing_keys}")

                # Pull the data from the dictionary
                date_time = datetime.fromisoformat(data_dict['DATE-OBS'])
                platera = float(data_dict['PLATERA'])
                platedec = float(data_dict['PLATEDEC'])

                # # Generate random points on the plate
                # random_ra = np.random.uniform(platera - (viewing_angle_deg / 2),
                #                             platera + (viewing_angle_deg / 2),
                #                             num_rand_points)  # RA in degrees

                # random_dec = np.random.uniform(platedec - (viewing_angle_deg / 2),
                #                              platedec + (viewing_angle_deg / 2),
                #                              num_rand_points)  # Dec in degrees

                ## Claude's recommended change
                # Generate uniform points on celestial sphere
                u1 = np.random.uniform(0, 1, num_rand_points)
                u2 = np.random.uniform(0, 1, num_rand_points)

                # Convert to spherical coordinates centered on plate center
                theta_max = np.radians(viewing_angle_deg / 2)  # Angular radius
                theta = np.arccos(1 - u1 * (1 - np.cos(theta_max)))  # Uniform on sphere cap
                phi = 2 * np.pi * u2  # Uniform azimuth

                # Convert to RA/Dec offsets from plate center
                delta_ra = theta * np.cos(phi) * 180/np.pi
                delta_dec = theta * np.sin(phi) * 180/np.pi
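                # NB: these are treated as flat-plane offsets; strictly, the RA
                # offset should be divided by cos(dec) before adding to PLATERA.
                # Fine for a ~6.6 deg field away from the poles, but it will
                # distort plates at high declination.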

                random_ra = platera + delta_ra
                random_dec = platedec + delta_dec

                # Add all points to the list
                for idx in range(num_rand_points):
                    list_of_random_points.append({'time': date_time, 'ra': random_ra[idx], 'dec': random_dec[idx]})

        except (KeyError, ValueError, FileNotFoundError, OSError) as e:
            print(f"Skipping {csv_file}: {e}")
            continue
        except Exception as e:
            print(f"Unexpected error processing {csv_file}: {e}")
            continue

    return list_of_random_points

def calculate_orbit_shadow_percentage(args):
    """Helper function to calculate shadow percentage for a single orbit"""
    orbit_num, list_of_sats, min_orbit, max_orbit, num_orbit_step, exposure_time, num_time_step = args

    sat_total = 0
    shadowed_sat = 0

    # Find the current altitude
    if num_orbit_step > 1:
        orbit_km = ((max_orbit - min_orbit)/(num_orbit_step-1)) * orbit_num + min_orbit
    else:
        orbit_km = min_orbit

    for exposure_step in range(num_time_step):
        # Find the current time of the exposure
        if num_time_step > 1:
            time_exposed = exposure_step * (exposure_time/(num_time_step-1))
        else:
            time_exposed = 0

        # Loop through the random sats
        for sat_dict in list_of_sats:
            # Find the actual time
            act_time = sat_dict['time'] + timedelta(minutes=time_exposed)

            # Find if its shadowed
            if in_shadow(sat_dict['ra'], sat_dict['dec'], act_time.isoformat(), orbit_km):
                shadowed_sat += 1
            sat_total += 1

            if sat_total % 1000 == 0:
                print("Complete: ",sat_total, "Total Sats: ", len(list_of_sats) * num_time_step,"Orbit: ", orbit_km)

    return {'Orbit': orbit_km, 'Percent_in_shadow': round(100*(shadowed_sat/sat_total), 3)}

def find_percentage_in_shadow(list_of_sats, min_orbit, max_orbit, num_orbit_step, exposure_time, num_time_step):
    # Prepare arguments for each orbit calculation
    args_list = [
        (orbit_num, list_of_sats, min_orbit, max_orbit, num_orbit_step, exposure_time, num_time_step)
        for orbit_num in range(num_orbit_step)
    ]

    # Use all available CPU cores
    with ProcessPoolExecutor(max_workers=cpu_count()) as executor:
        output_list = list(executor.map(calculate_orbit_shadow_percentage, args_list))

    return output_list

if __name__ == "__main__":
    print(find_percentage_in_shadow(list_of_sats=gen_rand_points(num_points), min_orbit=40000, max_orbit=80000, num_orbit_step=5, exposure_time=50, num_time_step=3))
 
I think you would normally use institutional resources for something like this, either your local supercomputing cluster or an AWS account. Or a beefy workstation.
 
For whatever reason, it is extremely slow the way I used it. But it is vectorized, so you could try computing it for all of your ra, dec, and time values in one call. Also, astropy can be used in a vectorized way.
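
Something like this might work, assuming dist_from_shadow_center really does broadcast over array inputs (untested sketch; in_shadow_vec is just an illustrative name):

Python:
import astropy.units as u
from astropy.time import Time
from earthshadow import get_shadow_radius, dist_from_shadow_center

# Sketch of one vectorized call instead of a per-source Python loop,
# assuming earthshadow broadcasts over arrays as suggested above.
# ra_deg and dec_deg are float arrays; iso_times is a list of ISO strings.
def in_shadow_vec(ra_deg, dec_deg, iso_times):
    times  = Time(iso_times, format='iso')   # one vector Time object
    dist   = dist_from_shadow_center(ra_deg, dec_deg, time=times,
                                     obs='Palomar', orbit='GEO')
    radius = get_shadow_radius(orbit='GEO')
    return dist < radius - 2 * u.deg         # boolean array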
 
Databricks is a cloud platform provider with a free tier now. Save the data anywhere cheap, like AWS S3, and Databricks can read the data from there. I'm not sure how much compute power the free tier gets on Databricks though. It might be pretty low. @boguesuser

I'm out of town right now and I don't have my personal laptop :(. I don't want to use my work laptop for this, but I want to get involved with your code :(
 
I honestly think the best approach is to do what I was trying to get AI to do before and calculate the amount of shadowed area on each plate.

It would be less computationally expensive and generate a more accurate result.

This ends up being a complex geometry problem. You'd need to determine the shape of the area in the sky that the plate is observing, then find the position and shape of the Earth's shadow at the target orbit. From there, just draw the rest of the owl: calculate the area of overlap.

The issue with this approach is that I've never been good at geometry. The square is the one with three sides, right?
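
For what it's worth, a Monte Carlo version of the per-plate overlap sidesteps the exact spherical geometry entirely. A rough sketch (shadowed_fraction and its defaults are illustrative; the shadow center and radius would come from earthshadow for each plate's epoch):

Python:
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord, SkyOffsetFrame

# Estimate the fraction of one plate's field that lies inside the shadow
# circle by sampling points uniformly over the plate's spherical cap.
def shadowed_fraction(plate_center, shadow_center, shadow_radius_deg,
                      plate_radius_deg=3.3, n=10_000):
    # draw points uniformly over a cap of angular radius plate_radius_deg
    cos_max = np.cos(np.radians(plate_radius_deg))
    theta = np.degrees(np.arccos(1.0 - np.random.random(n) * (1.0 - cos_max)))
    phi = 2.0 * np.pi * np.random.random(n)

    # tangent-plane offsets are accurate enough at a few degrees
    off = SkyOffsetFrame(origin=plate_center)
    pts = SkyCoord((theta * np.cos(phi)) * u.deg,
                   (theta * np.sin(phi)) * u.deg, frame=off).icrs

    # fraction of the sampled plate area inside the shadow circle
    sep = pts.separation(shadow_center)
    return np.mean(sep.deg < shadow_radius_deg)

Weighting each plate's transient count by its shadowed fraction and summing would then give the expected number of shadowed transients under the uniform-random hypothesis, without re-running earthshadow per source.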
 
Please do remember that, over 50 minutes, there is no fixed "shadowed area".

The sidereal year is 365 days 6 hours 9 minutes = 525,969 minutes, and a full circle is 360×60×60 = 1,296,000 arc seconds. During 50 minutes, the shadow moves 50 × 1,296,000/525,969 = 123.2 arc seconds, or approximately 2 arc minutes. For objects in that area, there's a chance that they'd be glinting, depending on the duration of time spent outside the umbra.

Then there's the issue of the penumbra, where an object moving out of the shadow proper (umbra) would enter the penumbra, at which point the sun looks like it's partially eclipsed, and reflections (glints) would be diminished in magnitude depending on where in the penumbra they are.

On top of that, atmospheric refraction means that geometry is not enough to determine the exact shadow area.

This affects the sky catalogue since the plates being compared were not exposed simultaneously, i.e. the shadow moves between exposures; and the red plates would see a smaller shadow than the blue plates.

This is why it would have been very important for the researchers to establish a baseline using modern data and known satellites in order to validate their unorthodox methods. It's also important for anyone trying to replicate their work.
 

@beku-maunt has the good code for finding if an object is within the shadow or not. I'm just trying to figure out what the baseline for random would be. Most of these effects can be ignored if I'm trying to get the baseline for randomness, right?
 
The trouble is that the shadow moves, so a large fraction of these 'transients' may have occurred either when that location was in shadow or when it was out of shadow. If the shadow passage is only 70 minutes long, then it seems to me that only twenty minutes would be free from sunlight for the entire duration of the 50-minute exposure. Has Villarroel taken this into consideration?

Note that I put 'transient' into inverted commas. These so-called 'transients' are much more likely to be faults on the photograph than real external events.
 
On a more serious note, if earthshadow is difficult, what about pursuing their other technosignature claim: "The smoking-gun observation that settles the question unequivocally, is the one of repeating glints with clear PSFs along a straight line in a long-exposure image." (PSF being Point Spread Function.) As described in https://www.sciencedirect.com/science/article/pii/S0094576522000480#b54

Though I am fairly skeptical about satellite reflections showing up in the original plates, if it took 40-50 minutes of emulsion exposure to collect stellar signatures, then a sub-second reflection would have glinted for ~1/10,000 to 1/100,000 of the exposure period.

I also don't see that they've addressed aircraft as a potential source of straight-line repeating glints, which at least in later decades were a recurring issue for Palomar Observatory, according to this 1991 International Astronomical Union Colloquium item on Site Preservation at Palomar Observatory, under "Aircraft":
Aircraft interfere with the observing programs at Palomar, especially those in progress at the wide-field 48-inch Oschin Telescope. The losses at the Oschin Telescope, currently engaged in the Second Palomar Observatory Sky Survey, can be substantial; the value of each ruined plate must certainly approach $1,000. Both high-level commercial and low-level private aircraft operating above Palomar are essentially out of our control. A restricted air zone around the observatory could not be enforced, even if the FAA would agree to create it. Aircraft from the United States Air Force, operating out of March Air Force Base and Norton Air Force Base (both to the north of Palomar Mountain), frequently fly over Palomar at low-level, frequently with landing lights on. In the past, the Air Force agreed to divert their flight paths 5 miles to the east of Palomar, but these ad hoc agreements seem to break down every 5 years or so and need to be renewed.

Granted, the 1950s were less likely to see a lot of high-level commercial air traffic in the middle of the night, but March AFB was flying B-29 Superfortresses, B-47 jet bombers, and KC-97 tankers through the 1950s.

Did military planes actually fly anywhere close to the observatory in that era? As recounted in Remembering Valley Center's air disasters:
"...in December 1957, a fiery bomber crash in fog left three Air Force officers dead at Palomar Mountain. The six-engine B47 Stratojet plowed into a 6,000 foot peak while returning from a training flight to March Air Force Base in Riverside County. The crash site was about one-quarter mile from the telescope at the Observatory, then the world's biggest.

Flames shot 100 feet high and live ammunition went off. Flying debris nicked the dome of a smaller nearby telescope. The victims were identified as Major Tim Esmond, the plane's commander; Colonel Frank W. Ellis, the pilot; and Captain Frank Harradine, flight surgeon."
 
As I've pointed out in the other thread, the Earth's shadow is very small at GEO, and large in LEO; but if these 'satellites' were moving at all, they would be streaks in a 50 second-exposure.

I can't see any physical way for these to be satellites of any description.
I asked Grok how long the plate exposures on the original Palomar Sky Survey were. The reply: "Exposure times for each plate were generally around 45 minutes to 1 hour." So not 50 seconds, but 50 minutes! So the object would have to be in a perfect geosynchronous orbit for this.

But how long would it stay in this position? I asked Grok about that, and the answer was, "A typical GEO satellite needs maneuvers every 1–4 weeks to maintain its assigned position, with north-south corrections being the most propellant-intensive." So the aliens would need to constantly maneuver each satellite to keep it in geosynchronous orbit.
 
Villarroel's earlier paper, A glint in the eye: Photographic plate archive searches for non-terrestrial artefacts, suggests the reason to look in geosynchronous orbits is that, if an extraterrestrial civilization had sent probes to look at Earth at some point in the distant past, those are the only orbits that would be stable over many thousands of years and where the objects could still be found. (This is not backed up by any calculations about the local stability of such orbits, just a reference to an astrophysicist suggesting that one might be able to detect an alien civilization on an exoplanet by the optical effects of an orbiting ring of space junk.)

Though the paper discusses reflectivity, it insists that it can only result from manufactured objects, and that while reflective materials would degrade from micrometeorite impact and radiation, "it is reasonable to assume that an extraterrestrial civilisation that launches a probe to the Earth will have developed materials and systems that could endure space travel of up to millions of years."

Which was immediately followed by the contradictory: "The degrading of material due to micrometeorites and cosmic radiation opens up a window of new possibilities: were one to make a mission to the geosynchronous orbits to collect the debris, it is almost trivial to identify debris that has been there for thousands of years by looking for objects having the most micrometeorite hits and largest loss of reflectivity of its surface." (Though if I found and recovered an obviously non-human piece of technology in Earth orbit I doubt my first question would be about its age in thousands of years.)

This is followed by a long discussion of glints and "how to recognise signs of artificial objects in the pre-satellite images", and the claim "The smoking-gun observation that settles the question unequivocally, is the one of repeating glints with clear PSFs [Point Spread Function shapes] along a straight line in a long-exposure image."
It's not true that geosynchronous orbits would be stable over thousands of years. Far from it. As Grok explains,

"
Stability of Geosynchronous Orbit
  1. Gravitational Perturbations:
    • Non-spherical Earth: Earth's oblate shape and uneven mass distribution (e.g., due to mountains or ocean basins) cause gravitational variations that perturb the satellite's orbit, particularly its inclination and longitude.
    • Lunar and Solar Gravity: The gravitational pull of the Moon and Sun causes long-term drift in the satellite's inclination, typically shifting it by about 0.75–0.95 degrees per year toward the equatorial plane.
  2. Solar Radiation Pressure:
    • Solar wind and radiation exert pressure on the satellite, especially on large solar panels or asymmetrical structures, causing gradual orbital drift.
  3. Atmospheric Drag:
    • At GEO altitude, atmospheric drag is minimal due to the thin exosphere, but it can still have a tiny cumulative effect over long periods.......



    • Typical Maneuver Schedule
    • Average Frequency: A GEO satellite typically performs station-keeping maneuvers every 1–2 weeks, alternating between east-west and north-south corrections, depending on the specific orbit and mission requirements.
 
Another important point: an object that is in a true geostationary orbit must necessarily be above the Earth's equator. And an object that is above the equator only passes into the Earth's shadow around the time of the equinoxes. As Grok explains,

Frequency of Eclipses
  • Twice per year: Geosynchronous satellites encounter eclipse seasons around the equinoxes (approximately March 20–21 and September 22–23). During these periods, the Sun crosses the Earth's equatorial plane, aligning with the satellite's orbital plane.
  • Each eclipse season lasts about 45 days, centered around the equinoxes (roughly 22–23 days before and after each equinox).
  • Within each season, eclipses occur daily when the satellite passes through Earth's shadow, typically near local midnight at the satellite's longitude.
Duration of Each Eclipse
  • The duration of an eclipse depends on the satellite's position relative to the Earth's shadow (umbra and penumbra).
  • Maximum duration: The longest eclipse occurs at the peak of the season (near the equinox), lasting up to 72 minutes (approximately 1 hour and 12 minutes).
  • Variation: The duration is shorter at the start and end of the eclipse season, gradually increasing to the maximum and then decreasing. Near the edges of the season, eclipses may last only a few minutes.
  • The Earth's shadow consists of the umbra (full shadow, no direct sunlight) and penumbra (partial shadow, reduced sunlight). Most of the eclipse duration is spent in the umbra, with brief transitions through the penumbra.
 
I asked Grok how long the plate exposures on the original Palomar Sky Survey were. The reply: "Exposure times for each plate were generally around 45 minutes to 1 hour." So not 50 seconds, but 50 minutes!
Please don't ask chatbots for factual information unless you also ask them for the source of that information *and* you check that the source exists, is reliable, and does indeed contain that information.

And if you're going to do that, just cite the reliable source instead. We don't need to know that it was Elisa that led you there.
 
I assume the shadow calculation stuff indicates they believe the glints were caused by sun reflections and not emitted light?
Somewhere in their chain of papers they talk about identifying transient candidates by them occurring along a line; the plate is exposed for the full duration, so a continuous light would indeed show up as a line of light. So the suggestion is the transient object is reflecting light only intermittently, possibly as a tumbling object turns one shiny face into the sunlight.
Glints along a line. Multiple glints with typical PSFs that lie along a straight line, cannot be caused by any natural object or by any known type of plate defect (Fig. 4). They can arise when one single object, or a fragment of an object, on a particular orbit reflects sunlight as it spins. These glints can be, although they are not required to be, equidistantly placed along the line.
 
Yeah, I understand that; it's just that the source of the light "could" be an alien strobe, in which case the shadow wouldn't matter. I was just clarifying that the paper assumes reflected sunlight.
 
Maybe this has been mentioned or already dismissed but is there any way to challenge the paper's thesis based on a calculated maximum brightness for the transients?

As has been discussed already: since these plates are exposed for the better part of an hour to capture enough starlight, and the telescope is tracked against the rotation of the earth to keep the stars as pinpoints, we know that these transients must only be 'flashing' for a limited period of time, or else they would appear as streaks. I'm not sure if I saw it earlier in this thread, but I believe a sub-second maximum exposure time was mentioned.

Could it be calculated that an object would not be able to appear nearly as bright on the film plate given a known maximum exposure time with the proposed altitude / GEO orbit and assuming 100% reflectivity? Like even a perfect flash of the sun for a sub-second could not result in a certain transient being as bright as a known star with a known magnitude?

I'm guessing the proposed size of the transient would be a variable that would invalidate this challenge.
 
The latest paper says

Data availability
The final analyzed SPSS dataset will be made available by the authors upon reasonable request to Dr. Stephen Bruehl (stephen.bruehl@vumc.org).

Has anyone here requested the data? Or does anyone even have the software available to view and independently analyse it?
 
SPSS is a statistics package; generally, students/faculty will have a license via their organisation.

And 'reasonable request' is obviously subjective.
 
There is a magnitude calculation in one of the papers, assuming a 0.5-second glint, which would be 1/6,000 of the 50-minute exposure time, or a 9.4 magnitude difference in apparent brightness. So, comparing the glint's apparent magnitude to the magnitude of known stars that look the same in the plates, they conclude the actual real-time glint would have been 9.4 magnitudes brighter, from which they calculate what size the glinting object would have had to be.
We can estimate how much, by using the exposure time of the POSS-1 plate – about 50 min (or 3000 s) – and assuming a 0.5 s duration glint from a geosynchronous satellite. This gives a flux dilution factor of 3000s/.5s or ~6000, which corresponds to a reduction of about 9.4 magnitudes for the actual glint. We apply the correction to the simultaneous transients listed in Table 1 of Villarroel et al. (2021) [54]. Only 5 out of 9 transients have their POSS-I magnitudes listed: three were not included as they did not have follow-up observations, and one was found in an overcrowded area. Correcting the magnitudes for the flux-dilution factor make them fall well within expectations of typical apparent magnitudes (about 8–10 mags) for glints arising from debris at the GEO [16].
Provided these are the actual apparent magnitudes of the glints, the sizes of such possible objects must therefore be similar to the sizes of typical space debris fragments described in Nir et al. (2020) [16]. Here, it is deduced that the physical objects are around a few tens of centimetres if the reflective surface is a type of transparent material, or even smaller of cm scale if it is perfectly reflective and mirror-like.
Of course if you assume the glints were of longer duration, say 1 second, the magnitude drops dramatically.

Whether those assumed durations are reasonable I don't know. As noted in other comments, longer duration emissions from objects in orbit not fixed to the stellar background being tracked would at some point result in lines of light rather than point sources during the 50 minutes the plates were exposed.
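
The 9.4 figure drops straight out of the flux-dilution formula, Δm = 2.5·log10(t_exposure/t_glint), and it's easy to see how it moves with the assumed glint duration (the durations other than 0.5 s are just illustrative):

Python:
import numpy as np

# Magnitude correction for a brief glint diluted over a long plate exposure:
# delta_m = 2.5 * log10(t_exposure / t_glint)
t_exposure = 3000.0                    # ~50-minute POSS-I exposure, in seconds
for t_glint in (0.1, 0.5, 1.0, 5.0):   # assumed glint durations, in seconds
    delta_m = 2.5 * np.log10(t_exposure / t_glint)
    print(f"{t_glint:4.1f} s glint -> {delta_m:.1f} mag brighter than the plate suggests")

The 0.5 s case reproduces their 9.4 magnitudes; doubling the duration to 1 s takes off about 0.75 mag.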
 
Yeah, I talked about this a while back: what is the required light to make a glint short enough to show up as a spot, given the exposure time and the plate sensitivity, and is that energy available from sunlight?

And you can compare to known-magnitude stars.
 

1. Did they also estimate the size and geometric area of the assumed reflecting surface? The shorter the 'glint', the larger the surface required to achieve the same visual magnitude. Below a certain albedo, the surface area would have to be pretty significant, possibly visible in transits while photographing the moon.

2. The magnitudes of all background stars and solar system objects have been well established for decades. I would expect glints brighter than those magnitudes to be candidates for alien artifacts. I did not see that addressed anywhere.
 
The problem with the variability of the transient magnitude and duration is that they are both dependent upon the size, shape, velocity and reflectivity of the object that causes them. And when those objects are potentially alien spaceships (which everyone knows are of differing shapes and can move in extraordinary ways) trying to falsify any hypothesis relating to them becomes an impossible task.
 
I agree. I know there are many in the membership who are better mathematicians than myself who can crunch the numbers, and I wonder if one can find a reductio ad absurdum that strikes directly at the unstated assumptions in the original work. I realize such a simple result may not exist, but as I said, I'm not the best mathematician in the classroom.
 
The logic behind this requires no numbers.

Given brightness and exposure time, you can always compute the size of an equivalent solar mirror that would have produced this result. The only obstacle is that the size can't be so large as to escape notice when it's in front of the moon, but other than that, everything is possible since we do not know the size.
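
That back-of-envelope is straightforward if you assume a flat, perfectly reflective mirror: the reflected solar image diverges at the Sun's angular diameter (~9.3 mrad), so at GEO range the beam is spread over a spot a few hundred kilometres across, and the glint magnitude follows from how much of that spot the mirror's area fills. A sketch with those assumptions baked in:

Python:
import numpy as np

# Glint magnitude for a flat, perfectly reflective mirror at GEO range.
# The reflected beam diverges at the Sun's angular diameter, so at range R
# it is spread over a spot ~(THETA_SUN * R) across; the observer sees the
# Sun dimmed by the ratio of mirror area to spot area.
M_SUN     = -26.74          # apparent magnitude of the Sun
THETA_SUN = 9.3e-3          # solar angular diameter, radians
R         = 3.6e7           # mirror-to-observer range, metres (~GEO)

def glint_mag(mirror_area_m2):
    spot_area = (np.pi / 4.0) * (THETA_SUN * R) ** 2
    return M_SUN - 2.5 * np.log10(mirror_area_m2 / spot_area)

for side_cm in (1, 10, 100):
    area = (side_cm / 100.0) ** 2       # square mirror, side in cm
    print(f"{side_cm:>3} cm mirror -> mag {glint_mag(area):.1f}")

A 1 cm mirror comes out around magnitude 10.6, which is at least consistent with the paper's claim that cm-scale mirror-like debris could produce glints of typical GEO-debris brightness.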
 

I think the authors are depending too much on just that ambiguity.
 
Well maybe we can calculate it and see where we end up?
 
Not having read every document mentioned here, I have a question: are they saying that these objects are being seen in every direction, north pole to south pole, or only in a band around the equator where geosynchronous objects would be found? And was the density of objects uniform in all directions?
 
They say both:
External Quote:

At 42,164 km altitude, we expect N = 1223 transients in shadow out of 106,339 total, corresponding to an expected fraction of f_exp = 0.0115 ± 0.00033.
But then a few lines later:
External Quote:

Out of the 114,300 simulated points (180 points per plate), 610 were found to lie within Earth's shadow, implying that approximately 0.53% of the survey area should be shadowed at GSO.
The second one seems to correctly use the percentage of the sky (derived numerically), while the first (with the super high sigma) uses the percentage of a hemisphere (derived analytically). Is this an error?
Mick, a minor but important geometry check: in #35–38 you treat 1.15% 'of a hemisphere' as equivalent to the paper's reported 1.15% 'of the sky.' The paper's Monte Carlo sample covers the whole sky, so halving it again undercounts the expected fraction by 2×. That difference alone can flip a 'deficit' into a match, so the normalization needs to stay consistent.
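
The factor of two is easy to confirm from their own numbers:

Python:
# The paper's two shadow fractions, from the passages quoted above.
f_expected  = 1223 / 106339      # 0.0115 -- used for the 21.9-sigma claim
f_simulated = 610  / 114300      # 0.0053 -- their own full-sky Monte Carlo

print(f_expected / f_simulated)  # ~2.15, consistent with a hemisphere vs.
                                 # full-sky normalization mix-up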
 
Not all geosynchronous satellites remain in a band around the equator.
https://en.wikipedia.org/wiki/Geosynchronous_orbit#Elliptical_and_inclined_geosynchronous_orbits
Satellites that move away from the equator move quite a lot with respect to the ground, tracing a figure-eight track known as an 'analemma'. This makes them less useful for some applications and more useful for others. I have no idea what Villarroel et al. think these alien satellites were being used for.
 
They are fulfilling an old UFO trope of aliens checking up on human nuclear capabilities, as they note in the paper. Now, why the aliens sometimes show up the day before a nuclear test but don't hang around for the actual test, or sometimes show up a day late, isn't really explained.
 
I added an Earth shadow visualization to Sitrec
https://www.metabunk.org/threads/transients-in-the-palomar-observatory-sky-survey.14362/post-355369


I think a useful next step would be to add a FITS viewer, which would take a FITS file and paste it in the correct position on the celestial sphere, with adjustable transparency so you can see the (larger) stars that match the plate. You could then also (based on the plate time) animate the shadow over it.

I think this would be useful for visualizing what is going on and maybe also for validating some things.

Can someone suggest a good FITS file to start with? Do we know what the 635 transient files they used were?

Alternatively, do we just have the metadata for them, so I can display their outlines all at once?
 