Aguadilla Infrared Footage of 'UFOs' - Probably Hot Air Wedding Lanterns

A new analysis, published by 3AF (it also has some brief descriptions of the Chilean and US Navy cases):
https://www.3af.fr/global/gene/link.php?doc_id=4234&fg
The section starts on page 48. The PDF is in French; I attach an auto-translated version.

They consider three hypotheses that match the data - an object close to the ground/ocean (the SCU hypothesis), a slow-moving object close to the airport at around 1,000 feet (the mundane hypothesis), and an object following the plane (the "just for completeness" hypothesis). They narrow it down to the first two.



From a kinematic point of view, two hypotheses emerge. One corresponds to a local path in the vicinity of the airport, in slow descent (2 ft/s) from 1,000 down to 800 ft, compatible with a balloon or a Chinese lantern, or even a micro-drone, drifting at low speed while carried by the wind. But this hypothesis, which would have the merit of corresponding to a simple scenario and classical kinematics, is not consistent with the radiometric data (hot spot, occultation). The other hypothesis would be terrain-following flight at around 100 ft, at least in the second part of the trajectory, which could explain some of the observed phenomena (hot spot, temporary occultation of the signature while grazing over the sea). It could be a micro-drone with extreme high-speed capability (nearly 300 km/h) at the beginning of the trajectory, as some prototypes of that kind exist. However, the usage scenario for such a drone seems very atypical. A flight-level-change hypothesis with rapid descent could potentially alter the initial speed peak, but does not resolve issues such as the duplication.

There is nothing to confirm a case of extraordinary PAN (Phénomène Aérospatial Non-identifié, i.e. UAP), even if we are left with uncertainties in the reconstruction of the trajectories (and therefore of the type of flying object), as well as questions about certain IR phenomena (occultations, duplication).

Both assumptions have advantages and disadvantages.

Content from External Source
 

Attachments

  • lettre_3af_n44.pdf-en.pdf
    3.6 MB · Views: 505
Hard to tell if it's that translation but the Go Fast analysis seems a bit off.

The object is not hot, it's cold, and they seem to ignore parallax and camera rotation when determining that it's a fast-moving object. They do seem to get the altitude correct, though.

Go Fast is a good litmus test of analyses as it has relatively easily determinable factors that are counter to the initial conclusions one might have based on the video alone.
 
I noticed in the SCU report that the Air Force conveniently refused to give them radar data from the airport itself. It seems this case will remain conveniently ambiguous until we can get that radar data (if indeed the unit was capable of resolving such a small object). As usual, the organizations that are supposed to serve us aren't transparent, and they allow the myth to live on in the murky low-information zone created by the classification of information.
 
I noticed on the spec sheet for the MX-15D camera that the thermal imager has several fixed fields of view: 26.7°, 5.4°, 1.1°, and 0.36°.

(Low-res image from the SCU pdf)

The last three settings correspond to the focal length figure at the top centre of the video overlay - which switches between 135, 675 and 2024. Assuming those are horizontal fields-of-view, the maths works out for a 16mm 4:3 sensor (12.8mm x 9.6mm), which is not an uncommon size.
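For anyone who wants to check the arithmetic, here's a minimal sketch of the focal-length-to-FOV conversion. The 12.8 mm × 9.6 mm sensor size is my assumption from above, not a figure from the Wescam spec sheet:

```python
import math

# Assumed sensor dimensions for a 16mm-type 4:3 chip - a guess, not from the spec sheet
SENSOR_W_MM = 12.8
SENSOR_H_MM = 9.6

def fov_deg(focal_mm: float, sensor_mm: float) -> float:
    """Full angle of view for a given focal length and sensor dimension."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

for f in (135, 675, 2024):   # focal lengths shown in the video overlay
    print(f"{f:>5} mm: horizontal {fov_deg(f, SENSOR_W_MM):.2f}°, "
          f"vertical {fov_deg(f, SENSOR_H_MM):.2f}°")
# -> 135 mm ~ 5.43°, 675 mm ~ 1.09°, 2024 mm ~ 0.36° horizontal,
#    in line with the 5.4° / 1.1° / 0.36° figures in the MX-15D brochure.
```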



I was able to verify this by taking a measurement across the "peaked building" in this frame of the video - it's 311px from the left hand edge of the sloping roof, to the right hand edge of the flat roof adjacent:


The image resolution itself is 704x480, which seems to be some old NTSC CCTV format. The YouTube 480p has small black bars.


So the building's width occupies 311/704 of the image's horizontal pixels.

Taking a real-world measurement in Google Earth, the building's width is 34.5m, near enough:


So the horizontal extent of the image at the distance of that building is (704/311) * 34.5m.

Punching in the GPS coords from that video frame shows that the camera was 4054 meters away:


Also, the camera aircraft's altitude is shown as 2208ft, which is 673m, and the building's roof is 68m above sea level.


So the total distance via Pythagoras is 4099m. Dividing the horizontal extent of the image by this distance gives the tan of the angle of view. Here are my workings:


So it looks like the calculated FOVs for a 16mm sensor are bang on, and the ones in the brochure are just rounded to two significant figures.

135 = 5.43°
675 = 1.09°
2024 = 0.362°
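
For anyone who wants to reproduce the check, here's the same working as a short script, using the figures quoted above (311 px across a 34.5 m building, 704 px frame width, 4054 m ground range, 2208 ft aircraft altitude, 68 m roof elevation). Treat it as a sketch of my workings rather than anything definitive:

```python
import math

FRAME_W_PX   = 704               # horizontal resolution of the video
building_px  = 311               # measured width of the peaked building in the frame
building_m   = 34.5              # width of the same building measured in Google Earth
ground_range = 4054              # horizontal camera-to-building distance from the GPS overlay (m)
aircraft_alt = 2208 * 0.3048     # overlay altitude, feet -> metres (~673 m)
roof_alt     = 68                # building roof elevation above sea level (m)

# Slant range via Pythagoras
slant = math.hypot(ground_range, aircraft_alt - roof_alt)    # ~4099 m

# Horizontal extent of the whole frame at the building's distance
extent = building_m * FRAME_W_PX / building_px               # ~78 m

# Full horizontal angle of view
hfov = math.degrees(2 * math.atan(extent / (2 * slant)))
print(f"slant range {slant:.0f} m, frame extent {extent:.1f} m, HFOV {hfov:.2f}°")
# -> ~1.09°, i.e. the 675 mm setting, as expected for this part of the video.
```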

The next step is to pick some candidate frames from the video, compare with Flarkey's KML to calculate the distance from the camera to the hypothesised path, measure the object's width in pixels, and see if these two values stay in agreement. Vertical FOV could be used too.

Notably, the SCU couldn't get this to work:
Calculations of the object's size were done on multiple frames whenever the object was a known distance from the ground which allowed accurate values of the object's distance and its angular size. These values varied significantly from a minimum size of 3.0 feet to a maximum size of 5.2 feet. The variation in size is believed to be due to either varied angular sides of the object as it is tumbling or temperature variations that are reflected in the shape that the object presents to the IR camera.
Or it could be because their distance measurements are wrong due to misunderstanding parallax.

It's a bit hard to say whether this will work, since extremely long lenses flatten out perspective dramatically. There may not be enough discrepancy between the SCU's path and the straight-line path for the numbers to falsify one or the other, especially with such low quality video to take the pixel measurements from.
 
I think the problem we'll find here is that the range to the object and the object's size are the two unknown variables. And the number of pixels that the object appears as is dependent upon (guess what!?!) the object's range and size. The SCU got over this problem because they knew the exact position of the object "when it went in the water"!! This is where they initially went wrong. They stated that the object was 'conclusively shown to be between 3 and 5.2 ft' - not very conclusive!

Am I right in thinking we could check the size and range against my suggested line, and then the size should remain much more constant...?
 
Am I right in thinking we could check the size and range against my suggested line, and then the size should remain much more constant...?

That's the idea, yes. The distance from the camera to the object is the 'adjacent' side on a right-angle triangle where the 'opposite' is the object's unknown width. At the 675mm focal length, the angle theta can be found by counting the object's pixel width and multiplying by (1.09/704). (Since the horizontal FOV is 1.09º, each pixel has an angular size of 1.09º/704 - or use the advertised 1.1º if preferred, we're well into small angle approximation territory anyway).

Then the solution is: opposite = adjacent * tan(θ).


(B in this example is the 3D distance from camera to hypothesised object location, A is the object's resulting width).

Since tan θ ≈ θ very closely for these small angles, this is basically saying that (object width in pixels * distance from camera) should be constant - and the same is true for its height in pixels. The tough part would be consistently measuring a fuzzy blob that's less than 10 pixels wide in a low-grade video. And if the object isn't rotationally symmetric (e.g. those heart-shaped lanterns), the width won't be consistent from all viewing angles anyway (although the height ought to be).
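
As a minimal sketch of that calculation (the 1.09° and 704 px figures are from the posts above; object_width_m is just a name I'm using here, and the pixel count and range are whatever you measure for a given frame):

```python
import math

HFOV_DEG   = 1.09    # horizontal FOV at the 675 mm setting
FRAME_W_PX = 704     # horizontal resolution

def object_width_m(pixel_width: float, range_m: float) -> float:
    """Estimate real-world width from apparent pixel width and camera-to-object range."""
    theta = math.radians(pixel_width * HFOV_DEG / FRAME_W_PX)  # angle subtended by the object
    return range_m * math.tan(theta)   # tan(theta) ~ theta for these tiny angles

# Example: a 13 px blob at 2100 m comes out at roughly 0.74 m wide
print(object_width_m(13, 2100))
```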
 
Quick example using this frame, corresponding to ACFT2 / TGT2 in the KML:



Horizontal distance to intercept the model path is 2102 m:


Height where camera vector intersects the path is 270 m:



Aircraft height is 1807ft or 550.8 m, and object pixel width is... well... 13-ish? Maybe 12?


It's possibly a good idea to measure across 3-4 subsequent frames and take an average pixel count.

Anyway, this gives an object width of 75+/- 5cm depending on measuring 12-14 pixels.


Meanwhile, if we believe the object is skimming the ground further along the camera vector, the size works out as 125+/- 10cm:


That's a decent discrepancy, but I'd have to check the SCU model to see where exactly they believe the object is at that moment. Then repeat for a few frames with the camera at different distances.
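
For reference, here's how the ACFT2 / TGT2 numbers above fall out, reusing the same kind of helper as in the earlier sketch. The 2102 m, 270 m, 1807 ft and 12-14 px figures are the ones quoted above; nothing here is more authoritative than those measurements:

```python
import math

HFOV_DEG, FRAME_W_PX = 1.09, 704

def object_width_m(pixel_width, range_m):
    theta = math.radians(pixel_width * HFOV_DEG / FRAME_W_PX)
    return range_m * math.tan(theta)

# ACFT2 / TGT2 example from above
horiz_range  = 2102            # horizontal distance to the hypothesised drift path (m)
camera_alt   = 1807 * 0.3048   # overlay altitude, feet -> metres (~551 m)
object_alt   = 270             # height where the camera vector crosses the path (m)
slant_range  = math.hypot(horiz_range, camera_alt - object_alt)   # ~2121 m

for px in (12, 13, 14):
    print(px, "px ->", round(object_width_m(px, slant_range), 2), "m")
# -> roughly 0.69-0.80 m, i.e. the 75 +/- 5 cm quoted above. Put the object further
#    away, on a ground-skimming path, and the same pixel count scales up linearly.
```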

EDIT: Here's another for the ACFT 5 point, it's consistent:



Here's the SCU's "Speed of object at known positions" table, which should probably be the next line of enquiry, since all the data is there to reconstruct the calculation for each (video timestamp, putative object coordinates and altitude).


(The asterisks mean either "in water" or "in water and air", so effectively at 0m altitude).
 
This is the best I can do with the SCU's flight path. The KML also includes their radar data and an overlay of their timings at each location. Taking the speed data from this will also give more points of reference.

kml link
 

Attachments

  • SCU Flightpath - 2.kml
    25.6 KB · Views: 294
This is the best I can do with the SCU's flight path. The KML also includes their radar data and an overlay of their timings at each location. Taking the speed data from this will also give more points of reference.

kml link

Yeah I tried a few of their points and found some of their co-ords don't even put the object in the frame at the supposed timestamp. Interestingly the last few are on the 2204 focal length, so a larger number of pixels are covered by the object. Unfortunately all of the plottable SCU points are where the object is relatively close to where the straight-line model predicts, and the camera is far away - meaning the effect of the discrepancy is lessened. The effect would be more pronounced earlier in the video when the camera is nearer to the hypothetical path of drift.

After some quick trials I've found the calculated object size using the SCU's object locations varies by about 12%, while using the straight-line locations it varies by about 4%. But it's far from watertight so far.
 

Attachments

  • Aguidilla Object Analysis Model 2.kml
    79.6 KB · Views: 290
  • Aguadilla Object Analysis Report.pdf
    26.4 MB · Views: 335
I made a stabilized (AI upscaled) version of the Aguadilla video using Blender's tracking system. I know this is old news by now, but I haven't seen anyone post one, and it very clearly shows the movement of the object itself with respect to the ground is exactly what one would expect of a balloon drifting with the wind with a consistent ground speed, given parallax.



Short clip to show stabilization:
aguadilla_01.gif
 
I made a stabilized (AI upscaled) version of the Aguadilla video using Blender's tracking system. I know this is old news by now, but I haven't seen anyone post one, and it very clearly shows the movement of the object itself with respect to the ground is exactly what one would expect of a balloon drifting with the wind with a consistent ground speed, given parallax.

Wow, great work. It really does highlight the benign and ordinary motion of the object.

I'm still not completely convinced that it actually splits in two. It seems more likely that it's either a compression artefact or an infra-red/optical artefact, although it would be hard to demonstrate this.
 
I'm not sold on Lianza's tethered Chinese lantern explanation. It seems designed to answer why two objects seem to appear at times, but I'm satisfied that is just due to resolution problems in the IR imaging, the same shortcomings that make the object seem to vanish at times over land and then over water near the end.

To be honest, many of the disappearances look like they're just an over-active despeckle filter. It seems to happen when they're over noisier backgrounds, and the signal from their dot might fall under the noise floor.
 
Wow, great work. It really does highlight the benign and ordinary motion of the object.

I'm still not completely convinced that it actually splits in two. It seems more likely that it's either a compression artefact or an infra-red/optical artefact, although it would be hard to demonstrate this.

"AI upscaled" means that you can't take any details you see in that vid at face value. The details could have been added by the AI from what it's been trained on. The overview of the situation - the position and velocity over time - are where that vid simplifies the analysis, as we can see from it that the object's movement is utterly mundane. Then again, the blender fly-through above is pretty convincing of that too.
 
I'm still not completely convinced that it actually splits in two. It seems more likely that it's either a compression artefact or an infra-red/optical artefact, although it would be hard to demonstrate this.

I've been trying to find examples of this happening in other IR footage because it seems reasonable. The double image only shows up one time at a high zoom level, and then the anomaly goes away immediately after the camera zooms back out.

(EDIT: air/sea temperatures for mirage hypothesis)
It also happens near the distance and angle from the plane where the object is no longer able to be detected by the camera, so my initial thought was that it could be some kind of superior mirage. But this normally happens when the air above is warmer than the air below. And the video appears to be from ~8pm local time in late April, which would mean the average air temperature (approx 24C-26C) would be cooler than the average sea temperature (approx 27C), so it doesn't exactly make sense that a superior mirage would occur.

Example of Fata Morgana superior mirage [1]:




To be honest, many of the disappearances look like they're just an over-active despeckle filter. It seems to happen when they're over noisier backgrounds,
This is an interesting hypothesis, I might try to test it.


[1]: https://www.amusingplanet.com/2019/12/fata-morgana-mirage.html
 
my initial thought was that it could be some kind of superior mirage. But this normally happens when the air above is warmer than the air below. And the video appears to be from ~8pm local time in late April, which would mean the average air temperature (approx 24C-26C) would be cooler than the average sea temperature (approx 27C), so it doesn't exactly make sense that a superior mirage would occur.

There's physically no way of getting such a mirage at that angle. Plus, it would affect the ocean behind it. It's not a mirage.
 
There's physically no way of getting such a mirage at that angle. Plus, it would affect the ocean behind it. It's not a mirage.
I was just about to post a correction that the angles involved would not allow a mirage like this. Mirages occur only within a couple degrees of the horizon, so I agree it cannot be a mirage. But in my defense, jumping to "maybe a mirage?" is possibly less of a stretch than "maybe fantastical technology from lifeforms that originated from another planet?" :cool:
 
To be honest, many of the disappearances look like they're just an over-active despeckle filter. It seems to happen when they're over noisier backgrounds, and the signal from their dot might fall under the noise floor.
Not sure you even need the despeckle filter. The bitrate on that video is very low, and the object is a handful of relatively static pixels against a background where the full frame is moving - maybe 12-16 pixels out of ~330,000. When there's a lot of entropy in the scene (i.e. lots of detail in the moving background - lots of buildings/trees/waves on the ocean), the codec sacrifices the detail of the small object to spend its bandwidth elsewhere.

I suspect the video has also been recompressed - not including whatever YouTube does to uploads - so one codec is trying to encode the artefacts left by another. (If you look closely at the edges of the video, where the 704x480 original has been matted into a 720x480 frame, there's softness at the edges as if it's not snapped exactly to a pixel boundary. I can't believe it comes off the camera like this, or at such a low bitrate; hence thinking there are multiple generations of compression at work here.)
 
I've been trying to find examples of this happening in other IR footage because it seems reasonable. The double image only shows up one time at a high zoom level, and then the anomaly goes away immediately after the camera zooms back out.
The doubled image is apparent for a few frames at 2:35, before the zoom in:


Also, the upper image has faded from view prior to the switch back to the wider lens at 2:46.

This sequence of frames does seem to show a physical separation, not a suddenly-appearing optical effect:
agua-split.gif
 
Not sure you even need the despeckle filter. The bitrate on that video is very low, and the object is a handful of relatively static pixels against a background where the full frame is moving - maybe 12-16 pixels out of ~330,000. When there's a lot of entropy in the scene (i.e. lots of detail in the moving background - lots of buildings/trees/waves on the ocean), the codec sacrifices the detail of the small object to spend its bandwidth elsewhere.

I suspect the video has also been recompressed - not including whatever YouTube does to uploads - so one codec is trying to encode the artefacts left by another. (If you look closely at the edges of the video, where the 704x480 original has been matted into a 720x480 frame, there's softness at the edges as if it's not snapped exactly to a pixel boundary. I can't believe it comes off the camera like this, or at such a low bitrate; hence thinking there are multiple generations of compression at work here.)

Indeed, this is why we should attempt to get things as close to the source formats as possible. Do we even know what format/bandwidth the systems record their video in, and are the official archives of that raw data, or are they themselves transcoded? However, whilst of course high entropy will drown the signal we're looking for, the properties that were being discussed at the time (diving into the sea, or however it was described) would have been in the source, so nothing to do with YouTube or other recoding that we're seeing now.
 
Not sure you even need the despeckle filter. The bitrate on that video is very low, and the object is a handful of relatively static pixels against a background where the full frame is moving - maybe 12-16 pixels out of ~330,000.
In the SCU upload from the OP, the object sometimes moves much faster than the background due to camera movement. It does also have less relative movement at other times. But the object is closer to 30 pixels around the middle of the video (1:34 for example), and larger when the camera is zoomed in more. More of a nitpick, though.

. . . do we even know what format/bandwidth the systems record their video in, . . .
I don't know how the recording for offline viewing works, but the interface looks like a Wescam MX series, and SCU confirms this is the case (page 5):
Wescam Inc. is a subsidiary of L-3 Communications Holdings, Inc. A Wescam representative
confirmed that it was their state of the art Wescam MX-15D thermal imaging system.
...
Each individual frame is comprised of a set of 345,600 (720 x 480) picture elements (pixels)...
Content from External Source
It is unfortunate that they didn't use the MX-15HD version that appears to have been available 5 years earlier:

wescam_480_vs_1080.png

But, if HD versions of these UFO videos were available, there most likely wouldn't be UFO videos in the first place.
 
I work in this field, and the standard type of video encoding in 2013 was H.264. The encoders usually allow you to set the frame size and frame rate to whatever you want so that it can be tuned to suit either 1) your network link bandwidth or 2) your digital storage capacity.
 
I work in this field, and the standard type of video encoding in 2013 was H.264. The encoders usually allow you to set the frame size and frame rate to whatever you want so that it can be tuned to suit either 1) your network link bandwidth or 2) your digital storage capacity.

I worked in the field in the early-mid 90s (Loughborough Sound Images, which got swallowed up and spat out by Motorola later). It looks like very little has changed over the decades, except back then it was probably something like H.261; the resolution's pretty much unchanged. Back in those days it was a 25MHz 56K doing the hard work, with no dedicated video-encode block (we even designed some of the DSP dev boards for TI at that time; we were a small company, but not nobodies, we had our niche - it's a shame TI hadn't bought us up instead, they had a better sense of direction). It's amazing how little things have moved forward considering how many Moore's Law doublings have been made.
 
I worked in the field in the early-mid 90s (Loughborough Sound Images, got swallowed up and spat out by Motorola later). It looks like very little has changed over the decades, except back then it was probably something like H.261, the resolution's pretty much unchanged.

But it looks like the newer devices are using 1080 vertical lines now instead of 480. Are you saying the CCDs use the same underlying resolution? At least to me, the HD IR systems look much better than the old analog ones from even the mid-late 2000s.
 
But it looks like the newer devices are using 1080 vertical lines now instead of 480. Are you saying the CCDs use the same underlying resolution? At least to me, the HD IR systems look much better than the old analog ones from even the mid-late 2000s.

I wasn't in the military imaging department, and I'm not 100% sure what resolution was in use back in the early 90s, but that department also did industrial imaging, and I remember that both 576- and 704-line sensors were in use there. So 1080 wasn't that great an improvement over 10-15 years, in particular when compared to all the other increases in density that occurred in the IT field over the same period.
 
Been reading through this, very interesting. Quick question that I don't believe has been addressed: What wind speed can a lantern stay aloft at?
 
Been reading through this, very interesting. Quick question that I don't believe has been addressed: What wind speed can a lantern stay aloft at?
It's not the speed, it's the turbulence. A lantern (or a balloon) moves AT wind speed. If it's a smooth air flow, then it can float along at 100mph.

It feels a little counterintuitive, as we think of speed as ground-based creatures. A 10mph wind is very noticeable. But in a hot-air balloon, you're probably not going to feel the wind at all.
 
It's not the speed, it's the turbulence. A lantern (or a balloon) moves AT wind speed. If it's a smooth air flow, then it can float along at 100mph.

It feels a little counterintuitive, as we think of speed as ground-based creatures. A 10mph wind is very noticeable. But in a hot-air balloon, you're probably not going to feel the wind at all.
Interesting. But turbulence is caused by wind speed, yes? And these things are surely less sturdy than a hot air balloon designed to carry a human.
 
Interesting. But turbulence is caused by wind speed, yes? And these things are surely less sturdy than a hot air balloon designed to carry a human.
No, turbulence is not caused by wind speed. You can have very fast-moving air with no turbulence, or relatively slow-moving air that is very turbulent.

The winds on the day the Aguadilla video was taken were between 12 - 16 knots, so not very windy.
 
Wind speed is a factor in turbulence, you can read about it on this website https://www.weather.gov/source/zhu/ZHU_Training_Page/turbulence_stuff/turbulence/turbulence.htm

Though I agree that the wind speeds that day wouldn't seem to contribute to much turbulence.
Oh, I absolutely agree it is a factor, and the turbulence is due to the movement of the air across different boundaries, e.g. wings, land, buildings and other air masses with different temperature or pressure. The point I was trying to make was that faster-moving air does not necessarily mean more-turbulent air. And as Mick has said earlier in the thread, an object floating and moving with the air will experience less buffeting and turbulence compared to a fixed object in the same moving air.
 
Yeah, I can't bring myself to watch this video as I think it would probably trigger me.
We do it so that others don't have to :)

You could skip to the shoot-yourself-in-the-foot moment at 1722s - 1745s (youtube vid ID 001fbB5BMNw):
"You know it's that weird light. It's just going to show up. Even here, look, you see this super-bright light not making any weird reflections in his phone. This thing [pointing at the dot of the UAP] makes this blue light."

Cue facepalming chez FatPhil.

Watch the position of the super-bright light and that blue light as the phone pans around, and notice that they are always diametrically opposite in the frame (except when they coincide in the centre obviously) and moving in lockstep. The superbright light is *precisely* the thing that's causing that blue light due to the reflections in his phone.

It's hard to be more wrong in so short a statement.
 
Hang on a mo, wasn't the wind speed coming from the resort 13km/h? And from what I can tell the recommended wind speed for launching is 5...

1. 13 km/h is 8 mph, and the "5" in the recommendations is in mph, so 13 km/h is about 3 mph above that.
2. The recommended speed is just a coverall recommendation so you don't lose control of it and it drifts across an airfield or into a house or something. Chinese Lanterns are dangerous and have been banned in some places for these reasons.
3. Even with a higher wind speed you could very easily light and release it from a sheltered area into stronger winds.
 
Wind speeds at ground level were recorded as 4 kts, which is about 4.6 mph (albeit in San Juan, which is 100 km away, but close enough).

From page 86 of the SCU report...
Capture11.JPG
 
Not picking on you, Lil, I'm as guilty as anyone, but after an evening of contemplation (carousing with a philosopher) I felt now was a good time to say that we should not be flinging ad homs around; it only reflects badly on us. If you watch the whole vid, you'll see that with one simple assumption that matches the Chinese lantern hypothesis (viz. size), his calculations demonstrate that a Chinese lantern fits the bill perfectly (e.g. w.r.t. speed). He is trying, and I think he's earnest; he's just making the occasional mistake. That's not a good reason to dogpile or denigrate him. I'm sure he's a nice person, so let's keep things nice.
 