Reverse Engineering the ATFLIR to find Range and Temp to/of the Gimbal UAP

Mick West

Travis Taylor (the former chief scientist at the Pentagon's UAP Task Force) made a very specific claim on CBS 8 News Now:
From recent calculations I've done, if this device [i.e. Gimbal], whatever this object is, is further away than 8km, then it has to be at the temperature of the melting point of aluminum, and if it's further than 50km, it's at the melting point of steel. That's how hot it would have to be to show up in this way in this sensor. So, it's not a jet that's 50 miles away, and we are getting glare of it
Content from External Source
This comes from work he showed at a recent UFO conference, where he presented an overview of analysis that he claims demonstrates a limit on how far away the Gimbal object can be.

He bases this around a notion of "saturated" that seems poorly defined. Essentially he's using it as a synonym for "overexposed". That means an individual pixel in an image is brighter than the maximum value the digital encoding supports, so we can't tell exactly how hot it is relative to other pixels (other than it being hotter than all of the unsaturated pixels). There are several problems with the way he uses that, but we can address those later.

The short version of his argument is:
  • In another video of a jet, only the region directly around the engine is saturated. (Mick: correct)
  • The "entire gimbal object" is saturated, so it is very hot. (Mick: I think we are seeing glare, just from the engine region, obscuring the actual shape)
  • With the other video of identifiable jets:
    • We can tell the distance to the jet based on trig (Mick: they actually have the distance on-screen and in the audio, so we don't need to, 5.5NM)
    • We can use the temperature of jet exhaust and the size of the jet exhaust to calculate how much power is arriving at the ATFLIR for this to be saturation (p_atflir_saturated)
  • Given p_atflir_saturated we can calculate how hot the UAP would be at various distances, and make a graph
  • The graph shows it must be close, as longer distances require too hot a temperature.

Here are the relevant slides and transcript
2022-06-25_10-53-12.jpg
So we just want to point out one more thing, that this object is really hot, right. And I want to figure out how hot it is. We now kind of know how big it is, a range. And then we got to figure out how far away it is in the video. See, here's another thing Mick West says it's like 50 miles away. And it's a jet airliner flying and making a certain curve, and they've proven that you can have a jet airliner that gives the motion that's in the video and, and all this kind of stuff. And I looked and I said "really, dude is saturating the camera. A jet airliner gonna saturate the camera at 50 miles away? So I've took the video of the F-18 that we have, we know that it only saturates the camera at the jet engine at the Jet Engine, right?

Well, first thing we need to do is figure out how far away this thing is. So looking at the video, there's information in the video about its magnification of this of the jet of what the magnification of the camera is, we know that the jet is 17.1 meters in one dimension. Now, I can look at the field of view in the camera and say, well, this much is 17 meters. So the whole thing is, says my ruler, right, I can tell you how big the image is in meters. And that tells me how far away it is because I know how far away 17 meters looks like using simple trigonometry. You know, the Pythagoras theorem will tell me all day long, how high that is, if I know this length and that angle, right?

Well, you can look up how hot a jet engine when f 18 gets there, they boast about how great the materials are they've invented that can handle the 1500 degrees Fahrenheit temperature of the combustion engine as it flows out of that thing. And, man, they are so proud of that engine, right? Well, it's pretty cool. Well, so then you can use a thing called the Stefan Boltzmann law.
Content from External Source



2022-06-25_09-22-47.jpg
And you can realize that the heat of this engine is emitting electromagnetic radiation based on this equation on the top right under Stefan Boltzmann law, and it tells you the power that is coming out of the jet in optical power, meaning like, you know, you talked about your lights being 100 Watts, or lumens or whatever. That's exactly what this law tells us is how many watts of power per square meters, well actually is how many watts per power is coming out of the jet engine, when it's saturated at this distance. So I can use that equation and calculate back to my target, knowing the distance now, exactly how much power was on the aperture of my ATFLIR that saturated those pixels. So I know exactly, 1500 degrees at this range saturates the pixels. This tells me this is the threshold of saturation for the FLIR. Well, Raytheon probably didn't want anybody to figure that out, either. So now I can tell you if I know how big an object is, and I know that it's saturating the camera, how far away it is based on this math right here. And so I created this bottom equation is the temperature of the UAP in range, and I did it in kilometers, and I did it in Fahrenheit, so we can speak in Fahrenheit. My math had to use, you know, the Kelvin because that's what the Boltzmann law uses, but so it's converted so that you can understand what you're seeing and so, what we can see from this data alone, just from these videos that I've got that the UAP is somewhere in this range of heat.
Content from External Source


2022-06-25_09-27-42.jpg
And based on distance, so if the far left of the bottom of the screen is one kilometer away, the far right is 100 kilometers away. Well, when it gets to 20, 30, 40, 50, 60, 70 kilometers, if that thing was 70 kilometers away, which is about what, 30 miles or something, if it was that far, no, that's seven kilometers, I'm sorry, seven kilometers. So that's, that's three miles do you three and a half miles, right. So if it's three and a half miles away, it's hot enough to melt aluminum. That's what the blue line is, is the melting point of aluminum. Right now at if you come on out here to 10,20, 30,40,50,60 kilometers away, 60 kilometers away. Now that's about 30 miles or so. All right. It's 60 kilometers away, it's at the melting point of steel. We don't have vehicles that can do that. And they're saying in the video that they the math, they figured out that this was an airliner, you know, 40 miles away, or whatever it was, that means it's in this range somewhere that is so damn hot. That is melted aluminum and or the steel. So guess what? It ain't a damn jet airliner. Right? Right. [applause]

And this is the part that he threw out. He's not going to talk about that part of it. He only wants to talk about well, I know is glare and the gimbal. .... the gimbal jumping into blah, blah, blah, well, whatever the math was good you did you get you tracking it, you know how the gimbal works great. But what you've thrown out is some of the data instead of looking at all the data to figure out the problem. This is a smoking gun, folks. We don't have a drone that can be that hot. Even at you know, seven miles away. It's not flying around being or three miles five miles or whatever it is flying around being as hot as the melting point of aluminum. Those jet engines are only 1500 degrees Fahrenheit. And we're talking about right there. You're already at 1200 degrees Fahrenheit. So what is that? What is this thing and why is it so dadgum hot? So that's that's where I am what with this so far.
Content from External Source

I think this argument fails simply because he's ignoring the effects of exposure settings - or "level" and "gain" - on the ATFLIR.

But it might be interesting to replicate his math and see what else is there.
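To sketch what that replication might look like, here is a minimal Python version of what his procedure appears to be: derive a "saturation irradiance" from the reference jet, then ask what temperature an object of the same assumed size would need at other distances to deliver that same irradiance. The emitting area, the 1500⁰F figure, the 5.5 NM range and the isotropic-spreading assumption are taken from his slides/transcript or simply guessed; the ATFLIR's actual saturation level is not something we know.
Code:
import numpy as np

SIGMA = 5.670374419e-8            # Stefan-Boltzmann constant, W m^-2 K^-4

def f_to_k(t_f):
    """Fahrenheit to Kelvin."""
    return (t_f - 32.0) * 5.0 / 9.0 + 273.15

def k_to_f(t_k):
    """Kelvin to Fahrenheit."""
    return (t_k - 273.15) * 9.0 / 5.0 + 32.0

# --- Calibration point: the reference F-18 video (Taylor's claimed numbers) ---
d_ref = 5.5 * 1852.0              # 5.5 NM slant range, in metres
t_ref = f_to_k(1500.0)            # claimed 1500 degF exhaust temperature
area  = np.pi * 0.5 ** 2          # hypothetical ~1 m diameter emitting disk, m^2

p_ref = SIGMA * area * t_ref ** 4                   # radiated power, emissivity ~1
sat_irradiance = p_ref / (4 * np.pi * d_ref ** 2)   # W/m^2 at the aperture, spread isotropically

# --- Invert: what temperature would an object of the same assumed size need
# --- at distance d to deliver that same "saturation" irradiance?
def required_temp_K(d_m):
    p_needed = sat_irradiance * 4 * np.pi * d_m ** 2
    return (p_needed / (SIGMA * area)) ** 0.25

# With identical area and geometry assumptions this reduces to T_ref * sqrt(d / d_ref),
# so it reproduces the shape of his curve (T proportional to sqrt of distance),
# but the absolute numbers depend on calibration constants we don't actually have.
for d_km in (8, 20, 50):
    t = required_temp_K(d_km * 1000.0)
    print(f"{d_km:3d} km -> {t:6.0f} K ({k_to_f(t):6.0f} degF)")

Even taken at face value, the output won't match his chart exactly, because his calibration constants, emitter areas and geometric factors aren't public; the real problems with the approach are covered in the posts below.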
 
Travis Taylor seems to be ignoring, or perhaps is unaware of, the Chilean Navy case, where a commercial airliner produced a large glare at over 40 nm, according to flight path data. Was that airliner also at the melting point of steel? If challenged with this counter-example, I suspect he would just double down and claim that the Chile object cannot be an airliner either (or at least, not one at over 40 nm away). But it would be worth presenting him with the challenge. For anyone not already committed to an 'extraordinary' hypothesis, the obvious conclusion is that not all FLIR images will look the same. They will be affected not only by the precise nature of the equipment, but by its maintenance, etc. He seems to be basing his argument on a single particularly 'clean' example, and ignoring (or again, being unaware of) others, like that huge diffraction spike around a fire in Afghanistan or Iraq.
 
The temperature comparisons are a bit strange.

First, nobody uses aluminium in a hot part of a jet engine.
Article:
Untitled-1_1.png

Secondly, the fighter jet engine itself:
Well, you can look up how hot a jet engine when f 18 gets there, they boast about how great the materials are they've invented that can handle the 1500 degrees Fahrenheit temperature of the combustion engine as it flows out of that thing.
Content from External Source
I wish he'd source actual data; here's how hot the F100 engine in an F-15 can get:
Article:
SmartSelect_20220626-001839_Samsung Notes.jpg
SmartSelect_20220626-001817_Samsung Notes.jpg
SmartSelect_20220626-001710_Samsung Notes.jpg

Note that it doesn't stop at 1500⁰F.

Modern passenger jet engines save fuel by running their turbines leaner and hotter than a fighter jet engine. These are the specs for the General Electric GE90 used e.g. in Boeing 777 aircraft:
Article:
SmartSelect_20220626-003647_Samsung Notes.jpeg

On Taylor's diagram, the GE90's maximum continuous temperature of 1922⁰F (1.9E3) gets us almost to 30 nm [Edit: x-axis is km, not nm!], where we know a straight and level GIMBAL UAP flight path is:
2022-06-25_09-27-42.jpg
Consider also that these turbines are huge: the GE90 engine diameter is about 150" (4m), while the top of the canopy of an F/A-18 is merely 126" (3.2m) above ground.

So, everything basically checks out if you use the right temperature!
 
Many problems with the analysis. The lack of consideration for gain and exposure is one (and this guy has a doctorate in optical science and engineering!). Another is how he assumes that the power emitted by the jet in the example is _just_ enough to saturate the sensor at that range (rather than, what is far more likely, that it's well in excess of that). Another is how he assumes the IR radiation spreads isotropically over the entire hemisphere of sky behind the jet. In reality, the IR from jet engines is partially collimated, like a flashlight, and that has to be taken into account. When he does the calculation for the UAP, he instead assumes that the radiation is spreading isotropically across the entire sky, in disagreement with the previous assumption, so he's not comparing apples to apples.

Doesn't seem salvageable to me.
 
The short version of his argument is:
  • In another video of a jet, only the region directly around the engine is saturated. (Mick: correct)
  • The "entire gimbal object" is saturated, so it is very hot. (Mick: I think we are seeing glare, just from the engine region, obscuring the actual shape)
  • With the other video of identifiable jets:
    • We can tell the distance to the jet based on trig (Mick: they actually have the distance on-screen and in the audio, so we don't need to, 5.5NM)
    • We can use the temperature of jet exhaust and the size of the jet exhaust to calculate how much power is arriving at the ATFLIR for this to be saturation (p_atflir_saturated)
  • Given p_atflir_saturated we can calculate how hot the UAP would be at various distances, and make a graph
  • The graph shows it must be close, as longer distances require too hot a temperature.

I think this argument fails simply because he's ignoring the effects of exposure settings - or "level" and "gain" - on the ATFLIR.

But it might be interesting to replicate his math and see what else is there.

Camera settings are one important variable. Natural light conditions another. And camera pixel size (light sensitivity) is yet another. Most of us who've gone camera shopping know that a camera with larger pixels is more sensitive than a camera with smaller pixels and produces a noisier picture. The former requires less light to produce an image of similar brightness with the same settings.

Under Travis Taylor's logic, a picture of the same object under the same circumstances taken with a more sensitive camera features a hotter object.

As a mildly redeeming fact for the Pentagon's credibility, an ideologue committed to an alien hypothesis and displaying a strong bias against mundane ones is no longer employed as 'Chief Scientist' at the AOIMSG.
 
And camera pixel size (light sensitivity) is yet another. Most of us who've gone camera shopping know that a camera with larger pixels is more sensitive than a camera with smaller pixels and produces a noisier picture. The former requires less light to produce an image of similar brightness with the same settings.
You say larger pixels produce more noise.
Article:
Noise and high ISO performance: Smaller pixels are worse. Sensor size doesn’t matter.

You say a larger pixel requires less light. The light required to detect a signal is called "noise-equivalent power" (NEP), and it's less for smaller sensor pixels.
Article:
Dividing the power of the laser times the optical density and dividing by the difference in gray levels multiplied by the noise floor (1.5 gray level), we get our approximation to the noise equivalent power (NEP) of one pixel which is 0.1 fWatts (10^-16 Watts!). That’s the very minimum light level my CMOS sensor can detect with the exposure set to maximum; or at least an order of magnitude of that level.

Let us now compare this to the NEP of Thorlabs FDS100 photodiode which is 12 fWatts/√Hz. As the exposure time of the camera was set to 1 second, we can roughly estimate that the CMOS-based spectrophotometer is 120 more sensitive than the photodiode. [...] Clearly, the photodiode is outperformed here by the CMOS sensor.

But how can it be? The answer lies in the size of the detector element. When you compare the datasheets of photodiodes, you can see that the noise levels increase as the size of the photosensitive elements increase. And because one of our CMOS pixel is much smaller than our photodiode element (13mm²), it has a smaller noise floor and thus a better sensitivity.


And, as contradictory as this seems (it's really not), that's entirely immaterial, as I understand Taylor is using the same ATFLIR camera system (and hence the same sensor) for his comparison.
 
I guess this twitter thread Keith Kloor posted about his discussion with Travis in May about the Gimbal object is pertinent.
If this is true, Travis thought, at least at that point, that it might be a spy plane or something, and that the video seemed to be edited.

1656235769974.png
1656235841842.png
 
You say larger pixels produce more noise.
Article:
Noise and high ISO performance: Smaller pixels are worse. Sensor size doesn’t matter.

Yes, smaller pixels have more noise in terms of per-pixel upstream read noise. But not necessarily in terms of whole-image noise, especially in modern equipment. Your cited article, from 2012, discusses per-pixel noise.

You say a larger pixel requires less light.

Is there a reason you cut what I "say" mid-sentence, thereby entirely changing my point? To repeat: a camera with a larger pixel size requires less light to produce an image of similar brightness than a camera with smaller pixels under the same settings. The smaller pixels make the latter camera, of otherwise similar build and settings, less sensitive to light. Your cited article discusses NEP and in no way contradicts this fact.

Article:
When the size of a CMOS imaging sensor array is fixed, the only way to increase sampling density and spatial resolution is to reduce pixel size. But reducing pixel size reduces the light sensitivity. Hence, under these constraints, there is a tradeoff between spatial resolution and light sensitivity.


And, as contradictory as this seems (it's really not), that's entirely immaterial, as I understand Taylor is using the same ATFLIR camera system (and hence the same sensor) for his comparison.

As evidenced by your misreading, you would be in a better position to discuss the 'materiality' of what was written after first reading it correctly.

Pixel size affects light sensitivity. The pixel sizes of the ATFLIR camera equipment factor in when analyzing the brightness (Taylor's "saturation") of an image among other variables discussed in the foregoing.
 
On Taylor's diagram, the GE90's maximum continuous temperature of 1922⁰F (1.9E3) gets us almost to 30 nm, where we know a straight and level GIMBAL UAP flight path is:

So, everything basically checks out if you use the right temperature!

If this diagram is accurate, 30-35 nm is 55-65 km; on the graph this corresponds to 2500-2700⁰F.

Gimbal at 10 nm (~20 km) would be ~1600⁰F.
 
If this diagram is accurate, 30-35 nm is 55-65 km; on the graph this corresponds to 2500-2700⁰F.

Gimbal at 10 nm (~20 km) would be ~1600⁰F.
My mistake, thanks for spotting that.

1900⁰F gets us to 27km≈15nm on the diagram, but then Taylor was probably wrong about the 1500⁰F as well (more likely to be 1000⁰F with the afterburner off?).

30 nm ≈ 55km, corresponding to 2500⁰F, but if his calibration is off that much (plus the issues the other posters have raised)...???

if that thing was 70 kilometers away, which is about what, 30 miles or something
Content from External Source
70 km=43.5 miles
 
I get a little nervous when some try to mix up photo detector behaviour and (absolute) radiometry.
"I once took an overexposed picture of a jet engine (I don't know how hot it actually was) with this IR camera, and now it's calibrated for accurate distance and temperature readings."
 
1- The sensor is NOT measuring the radiance emitted from the object, but the irradiance received at the sensor. Which is an obvious statement, but sometimes it is necessary to make explicit the difference between the two quantities. They are of course related through the distance, solid angles and other factors, so having the latter, the former can be calculated (IF the other factors are known). That's what he is trying to do.

2- The calculation is basically: I have a maximum irradiance the sensor can detect (the saturation), I know a few things about the detector (field of view, aperture) and the object (angular size), and I assume a distance: what temperature must the object have in order to generate a radiance (at least) high enough to create that saturation irradiance on the sensor? Which is a valid approach, IF you really know all the things you need to know.

3- The whole calculation relies on that saturation irradiance. How does he know/deduce that value? Maybe it could be theoretically calculated, but it is more practical to calibrate the sensor to determine it. So you have a relationship between the electric current created by each pixel and the irradiance on the sensor.

That value also depends on the setup of the camera: optics (WFOV, MFOV, NAR), the "gain" and "level" settings, and maybe some other parameters we are not aware of. That means several calibrations depending on the setup.

4- The displayed image is created by mapping the current value of each pixel onto a color scale. So having access to the raw current values gives you access to the irradiance values, IF the calibration is known (which it is not, from the leaked video), and this is surely classified information.

Also, the leaked video shows only 8-bit values in greyscale, after a compression algorithm to create the mpeg format. So there may be a way to relate these mpeg values to the original raw values, but from the video alone this also cannot be deduced in any way!

Even if we knew the calibration and the saturation irradiance for that setup, we still don't know how to go from the mpeg values we can extract from the video, to the values of the raw data.

So then: is Travis Taylor working with raw data or with an mpeg video? How does he know the saturation irradiance value on which the whole calculation depends?

Now, take into account some of the documents The Black Vault was able to get via FOIA (https://www.theblackvault.com/docum...merge-about-uap-encounter-briefing-breakdown/), about the gimbal encounter:
The staffers also asked about the chain of custody of the video of the encounter. The staffers are generally dissapointed that the Navy does not save any video of these encounters and the chain of custody is lacking
Content from External Source
(emphasis added)

Which strongly suggests that the raw data does not exist anymore.


And some other problems I see:
He is using the Stefan-Boltzmann law, which gives the total radiance emitted by the source. However, the ATFLIR is only sensitive to a narrow range of wavelengths (3-5 microns). That is a small fraction of the total radiance, and the fraction varies with temperature.

The 4th-power dependence on temperature is for the total radiance. I have to check, but I think the radiance in a narrow band does not necessarily follow the same law (although it could be an approximation).

He should use Planck's law for black-body emission and integrate over the sensitive range of the ATFLIR instead.
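To put a rough number on that, here is a minimal sketch assuming (purely for illustration) a flat sensor response over 3-5 μm, black-body sources, and a ~300 K ambient background, comparing the in-band radiance ratio with the T^4 ratio for the engine temperatures quoted elsewhere in this thread:
Code:
import numpy as np

# Physical constants (SI)
H = 6.62607015e-34     # Planck constant, J s
C = 2.99792458e8       # speed of light, m/s
KB = 1.380649e-23      # Boltzmann constant, J/K

def planck(lam, t):
    """Black-body spectral radiance, W / (m^2 sr m), at wavelength lam (m), temperature t (K)."""
    return (2.0 * H * C ** 2 / lam ** 5) / np.expm1(H * C / (lam * KB * t))

def band_radiance(t, lam1=3e-6, lam2=5e-6, n=2000):
    """Radiance integrated over an assumed 3-5 micron band (trapezoidal rule), W / (m^2 sr)."""
    lam = np.linspace(lam1, lam2, n)
    vals = planck(lam, t)
    return np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(lam))

t_ambient = 300.0                     # assumed ambient background, K
for t_hot in (811.0, 1323.0):         # ~1000 degF and ~1922 degF engine figures from this thread
    print(f"{t_hot:.0f} K vs ambient:  T^4 ratio = {(t_hot / t_ambient) ** 4:9.1f}"
          f"   3-5 um band ratio = {band_radiance(t_hot) / band_radiance(t_ambient):9.1f}")

The in-band contrast against the ambient background comes out far larger than the T^4 ratio would suggest, which is exactly why the total-radiance law can't simply be carried over to a narrow-band sensor.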
 
4- The displayed image is created by mapping the current value of each pixel onto a color scale. So having access to the raw current values gives you access to the irradiance values, IF the calibration is known (which it is not, from the leaked video), and this is surely classified information.

Also, the leaked video shows only 8-bit values in greyscale, after a compression algorithm to create the mpeg format. So there may be a way to relate these mpeg values to the original raw values, but from the video alone this also cannot be deduced in any way!
Based on the frames of the video that Taylor showed, he was not even using the original WMV released by the military; instead, he seems to have been working off a version downloaded from YouTube. There's another slide where he mistakes interlacing (smoothed somewhat by YouTube) for a kind of optical interference (which he then uses to calculate the distance, but that's another story).

2022-06-26_07-59-35.jpg

But even the "original" video, as you say, is just 8-bit. The raw data is 14-bit.


A1-F18AC-746-100

24. Digital Non-Uniformity Correction. The digital non-uniformity correction (DNUC) channels pixel-to-pixel non-uniformity and converts the analog signal to digital. The low-noise 14-bit DNUC receives the ATFLIR detector video output and applies level and gain corrections to each pixel. A fine level calibration takes less than one second.

25. Six sets of corrections are pre-stored and calibrated for each FOV and detection gain combination. Any correction set can be applied within 200 milliseconds. Also, fine real-time level corrections, derived from term generators on the scene based non-uniformity correction (SBNUC) are also applied.

26. Dead pixel and blinker detection is also a function of squares calculations. SBNUC receives 10-bit digital video from the EOSU and converts EOSU timing for a 1:1 aspect ratio display. The DNUC also processes electro optic sensors. The DNUC has a VME bus interface and a fibre channel interface.

30. Scene Based Non-Uniformity Correction. The scene based non-uniformity correction (SBNUC) provides optimal uniformity over all environments in the rapidly changing background environment of a fighter aircraft.

31. The SBNUC incorporates field and line buffering and filtering and motion detection circuitry to support a real-time NUC algorithm. The SBNUC scene based correction is for both high and low frequency active non-uniformities. For high frequency non-uniformities, the module calculates non-uniformity terms based on video from the DNUC and uses a 3LM non-linear filter based on a multi-stage median architecture. For low frequency non-uniformities, dual bandpass filters are used to filter the DNUC video. The SBNUC interfaces with the DNUC using 14-bit digital video and VME bus interface.

37. Video Processor Module. The video processor module (VPM) receives two 14-bit video inputs from the DNUC. The VPM accepts 640X480 FLIR video and 576X494 visible video. The VPM provides FLIR video using a fiber channel to the signal processor for tracking and does image enhancement, symbol generation and conversion to modified RS-170 for the pilot composite analog video display.
Content from External Source
14-bit means an individual pixel can have a range from 0 to 16,383
10-bit means 0 to 1,023
8-bit means from 0 to 255

So
  • The raw signal is analog (i.e. essentially the waveform of a voltage level).
  • This gets converted to 14-bit and sent to the DNUC
  • The DNUC does some math on this for level and gain, where it will clip values out of range, and sends a 10-bit version to the (internal) SBNUC for more processing
  • The symbology (numbers, etc) is added on top of this digital signal, and it's re-converted to 14-bit
  • This gets converted to modified RS-170. RS-170 is the US standard interlaced monochrome TV signal format in use since 1954. The "modification" is likely to restrict it to 480 lines. But essentially it's black and white NTSC. Interlaced.
  • That mono NTSC signal is what is seen in the cockpit on the DDI screens
  • The DDI's mono NTSC signal is recorded using (I think) TEAC 8mm Hi-8 analog tapes, part of the CVRS (Cockpit Video Recording System)
  • These tapes are then physically transported elsewhere, and digitized from taped NTSC into 8-bit WMV files
  • The WMV file is leaked to NYT and TTSA
  • Unknown conversions and operations are done, and the files are loaded on YouTube
  • Someone downloads from YouTube, resulting in a smoothed, low-information version
  • Taylor uses this.
The degradation via YouTube is highly problematic, but not even the biggest issue. The full information from the sensor has already been lost during the DNUC processing. It's then further degraded by the conversion to interlaced NTSC monochrome - which is what is eventually recorded. And even if it was digitised in-plane onto SSD, it's still digitized NTSC mono (as we can tell by the interlacing).
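As a toy illustration of where the absolute information dies (the level/gain numbers below are invented, not the ATFLIR's actual processing): once a 14-bit signal has been stretched and clipped to the display range, wildly different sensor counts can all land on the same maximum value.
Code:
import numpy as np

def toy_level_gain_to_8bit(raw14, level, gain):
    """Invented level/gain stretch from 14-bit counts to an 8-bit display value.
    The real DNUC/SBNUC pipeline is far more complex; this only shows the clipping."""
    stretched = (raw14.astype(float) - level) * gain
    return np.clip(np.round(stretched * 255.0 / 16383.0), 0, 255).astype(np.uint8)

# Three hypothetical targets with very different 14-bit sensor counts:
raw = np.array([3000, 9000, 16383])        # warm, hot, and truly saturated

# With an aggressive (made-up) level/gain setting they all display as 255:
print(toy_level_gain_to_8bit(raw, level=1000, gain=12.0))    # -> [255 255 255]

Once that has happened (and the signal has then been through NTSC, tape, WMV and YouTube), there is no way to tell which of those three an all-white blob originally was.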

So basically the calculation, even if technically correct, is useless.

There's more. The video he's using for reference is (I think) from the Air Force's version of the ATFLIR, which never actually went into operation as they decided to use the Sniper pod. So it's essentially a different system, a decade earlier, using different algorithms, and possibly different sensors.

So an unfortunate case of GIGO - Garbage In, Garbage Out. Even IF his math and physics are correct in all other ways, which seems highly unlikely.
 
Travis Taylor seems to be ignoring, or perhaps is unaware of, the Chilean Navy case, where a commercial airliner produced a large glare at over 40 nm, according to flight path data. Was that airliner also at the melting point of steel? If challenged with this counter-example, I suspect he would just double down and claim that the Chile object cannot be an airliner either (or at least, not one at over 40 nm away). But it would be worth presenting him with the challenge.
I would be interested in what @DocTravis has to say about this case, as the solution of it being flight IB6830 is pretty much inarguable; even Leslie Kean conceded that. And the glare, at ranges from 40 to over 100 miles, is much bigger than the engine.

Fig 2 - Object size matches timestamped positions.jpg
 
But it might be interesting to replicate his math, see what else there is there.
I feel he is mixing different assumptions: having (π·r^2) as the area of the UAP / jet means he is considering a finite-size source (a flat disk), but on the other hand, having 4πr^2 or 2πr^2 factors for distance seems to be related to the propagation of radiation from a point source, or spherical source.

But despite those factors, the relationship of temperature with distance is correct - I'm not saying the numbers are correct, only that the equation behaves correctly: temperature is proportional to the square root of distance.
The radiance (R) emitted by the source is proportional to the 4th power of temperature (T):
R ~ T^4 (from the Stefan-Boltzmann law)
The radiation propagates, and the irradiance (I) on the detector is proportional to the inverse of the square of the distance (d):
I ~ R / d^2 = T^4 / d^2

For a given irradiance, the relationship between temperature and distance is:
T ~ (I · d^2)^(1/4) = I^(1/4)·d^(1/2)

(the omitted constant factors scale the values, but do not change the relationship of T with d)
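A quick numeric check of that proportionality (in Python; the constant k below absorbs σ, the source area and the geometric factor, so its actual value doesn't matter here):
Code:
# For a fixed received irradiance I, T ~ (I * d^2 / k)^(1/4), i.e. T grows as sqrt(d).
def required_temp(d, irradiance=1.0, k=1.0):
    return (irradiance * d ** 2 / k) ** 0.25

base = required_temp(10.0)
for d in (10.0, 40.0, 90.0):
    print(d, round(required_temp(d) / base, 3))    # ratios come out 1.0, 2.0, 3.0 = sqrt(d / 10)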
 
The radiance (R) emitted by the source is proportional to the 4th power of temperature (T):
R ~ T^4 (from the Stefan-Boltzmann law)
Temperature → radiance comparison
F-15, F100 engine without afterburner at 1000⁰F = 811K
B777, GE90 113B engine maximum continuous power at 1922⁰F = 1323K

Disregarding size,
R(GE90)/R(F100) = (1323/811)^4 = 7.08
This passenger jet engine is 7 times brighter than the fighter jet engine from the temperature difference alone.

Size and thrust comparison
https://en.m.wikipedia.org/wiki/Pratt_&_Whitney_F100
Diameter: 34.8 inches (88 cm) inlet
Maximum thrust: 14,590 pounds-force (64.9 kN) military thrust,

https://en.m.wikipedia.org/wiki/General_Electric_GE90
Fan diameter: 128 in (3.3 m)
Takeoff thrust: 115,540 lbf (513.9 kN)

Area comparison:
A(GE90)/A(F100)=(128/34.8)^2=13.5
The passenger jet's engine cross section is 14 times bigger than the fighter jet's.

The radiation propagates, and the irradiance (I) on the detector is proportional to the inverse of the square of the distance (d):
I ~ R / d^2 = T^4 / d^2
Distance comparison
Passenger jet on GIMBAL straight&level course, at 30 nm
Fighter jet, Taylor's comparison (see post #1 above), at 5.5 nm
I(Fj)/I(Pj) = (30/5.5)^2 = 29.8
Using Taylor's fighter jet as comparison, a passenger jet engine's radiation would be attenuated by a factor of 30 from distance alone.

Overall radiation comparison
7*14/30 = 3.27
Taking temperature, area and distance into consideration, the irradiation from a GE90 passenger jet engine exceeds the fighter jet engine by a factor of 3.
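Putting those three factors together in a few lines (figures as quoted above; treating radiated power as proportional to the quoted engine cross-sections is itself a simplification):
Code:
# Relative irradiance at the sensor: GE90 passenger-jet engine vs F100 fighter engine
t_f100, t_ge90 = 811.0, 1323.0      # K (~1000 degF vs ~1922 degF, figures above)
d_f100, d_ge90 = 34.8, 128.0        # inches (inlet / fan diameter)
r_f100, r_ge90 = 5.5, 30.0          # nautical miles (ranges used in this comparison)

temperature_factor = (t_ge90 / t_f100) ** 4      # ~7
area_factor        = (d_ge90 / d_f100) ** 2      # ~13.5
distance_factor    = (r_f100 / r_ge90) ** 2      # ~1/30

print(round(temperature_factor * area_factor * distance_factor, 2))   # ~3.2, the factor-of-3 above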

Screen image size and pixel brightness
At the same magnification, the apparent area (in pixels) of the engine image on the screen is proportional to the engine cross-section and inversely proportional to the square of the distance. This means that engine size and distance should not affect pixel brightness at all: the irradiation effects of engine cross-section and distance cancel out, and the pixel brightness is only affected by the engine temperature.

The distance simply makes the engine appear smaller or larger, but does not affect the brightness of its video image, unless we consider atmospheric attenuation (which Taylor does not).

Thus, the brightness of the engine image depends on the engine temperature alone, the distance does not matter.

Intuitively, in a photograph, objects that are farther away are not darker than nearby objects; perceived brightness in a photo does not vary with distance. Things that move away get smaller, not darker!

Taylor's reasoning is fallacious.
 
Taking temperature, area and distance into consideration, the irradiation from a GE90 passenger jet engine exceeds the fighter jet engine by a factor of 3.
Makes total sense, since you would like your fighter to have the smallest IR signature possible to minimize the probability of detection. Passenger jets do not take that into account when being designed.

Thus, the brightness of the engine image depends on the engine temperature alone, the distance does not matter.

Intuitively, in a photograph, objects that are farther away are not darker than nearby objects; perceived brightness in a photo does not vary with distance. Things that move away get smaller, not darker!

Taylor's reasoning is fallacious.
I think I am not following you here. Just to clarify, what are you calling "brightness"?
 
Makes total sense, since you would like your fighter to have the smallest IR signature possible to minimize the probability of detection. Passenger jets do not take that into account when being designed.
Passenger jets are also heavier, and their engines develop much more thrust. The GE90 is one of the biggest engines out there, and it runs a lean mixture to burn the fuel as completely as possible. This is good for fuel efficiency and pollution, but also makes it hotter.
I think I am not following you here. Just to clarify, what are you calling "brightness"?
I've edited that section just now for more clarity.

Brightness of a digital pixel is its value, which corresponds in some fashion to the light falling on it via the lens system.

Basically, the radiation equations look at the total radiation received by the camera. But the camera breaks this radiation down into several pixels, we see a shape. This shape is bigger when the jet engine is bigger, and smaller when the jet is farther away.

This increase and decrease in shape area is directly proportional to the radiation we've considered before:
• bigger engine = more pixels and more irradiation;
• more pixels = less irradiation per pixel, in exact proportion
• more distance = fewer pixels and less irradiation;
• fewer pixels = more irradiation per pixel, in exact proportion

So if you pick an engine pixel on the screen, its brightness won't change with distance (as long as the coverage is 100%). The same goes for photographs: if you move away from an object, you don't need to change the exposure, because its brightness doesn't change.

No thermal camera is calibrated by distance. It's an amateur idea to think it should, and it turns out this idea is based on an error.
 
Basically, the radiation equations look at the total radiation received by the camera. But the camera breaks this radiation down into several pixels, we see a shape. This shape is bigger when the jet engine is bigger, and smaller when the jet is farther away.

This increase and decrease in shape area is directly proportional to the radiation we've considered before:
• bigger engine = more pixels and more irradiation;
• more pixels = less irradiation per pixel, in exact proportion
• more distance = fewer pixels and less irradiation;
• fewer pixels = more irradiation per pixel, in exact proportion

So if you pick an engine pixel on the screen, its brightness won't change with distance (as long as the coverage is 100%). The same goes for photographs: if you move away from an object, you don't need to change the exposure, because its brightness doesn't change.
I see. You are saying that considering individual pixels, the effects cancel out (still not sure, I have to think about it)

But if you consider the bunch of pixels that make up the shape of the source, you should see an increase/decrease of the total irradiance with distance.

No thermal camera is calibrated by distance. It's an amateur idea to think it should, and it turns out this idea is based on an error.
The calibration is only the relationship between the received irradiation and the output voltages/currents/whatever. It is independent of the distance to the source. Usually, you would use a black body to uniformly irradiate all the pixels, and change the irradiation (by changing the aperture of the BB) to relate irradiation with voltage.

Then it is up to the user to calculate the emitted radiance by whatever means (that should include knowing the distance to the source).
 
I see. You are saying that considering individual pixels, the effects cancel out (still not sure, I have to think about it)
Yes.
But if you consider the bunch of pixels that make up the shape of the source, you should see an increase/decrease of the total irradiance with distance.
Yes.
A square object at 40m distance might have 8x8=64 pixels.
The same object at 80m distance only has 4x4=16 pixels, all else being equal.
If these individual pixels still receive the same amount of radiation individually, the total irradiance reaching the sensor has shrunk by a factor of 4, because there are now 4x fewer pixels lit up (16 instead of 64).

As long as we don't look at a point source (or an approximation thereof), this always cancels out.
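A sketch of that cancellation with the numbers above (idealised: perfect optics, no atmosphere, and the object fully covering its pixels):
Code:
# The same square object imaged at 40 m and at 80 m.
total_power_40m = 64.0                     # arbitrary units reaching the aperture at 40 m
pixels_40m, pixels_80m = 8 * 8, 4 * 4      # image footprint in pixels at each distance

total_power_80m = total_power_40m * (40.0 / 80.0) ** 2   # inverse-square: a quarter of the power

print(total_power_40m / pixels_40m)    # 1.0 per pixel at 40 m
print(total_power_80m / pixels_80m)    # 1.0 per pixel at 80 m: per-pixel brightness unchanged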
 
As long as we don't look at a point source (or an approximation thereof), this always cancels out.
I think Mendel is right, but it's a tricky point. The exclusion of point sources is important. Stars, for example, are treated as effectively point sources, and their perceived brightness is assumed to follow an inverse-square relation with distance. Without this assumption, most of the estimates of distance in astronomy would be invalid. Of course, the total radiation received from an object onto a given surface area will vary with the distance from the radiating object, and for some purposes this might be what counts. Other things being equal, a fire will warm you to a higher temperature if you are closer to it. It is only if you are measuring the intensity of radiation received from a given angular size of the radiating surface that the cancelling-out effect applies. There could be disagreement about whether this is relevant in the Gimbal case.

It's not an easy problem, and from my history-of-science readings I recall that in the 18th century scientists of the rank of Bouguer, Lambert and even Euler argued about related problems, such as whether the disk of the sun should appear equally bright all over.
 
It is only if you are measuring the intensity of radiation received from a given angular size of the radiating surface that the cancelling-out effect applies.
It's definitely tricky. I'd go along with most of your post except for this.

If you reduce the angular size of the radiating surface (and thus reduce the amount of irradiation), it cancels out when your lens system projects it on a smaller image surface such that the energy density (light/surface) remains constant.
And because energy density on the sensor translates to brightness on the image, the pixels look the same, there are just fewer of them.
 
If you reduce the angular size of the radiating surface (and thus reduce the amount of irradiation), it cancels out when your lens system projects it on a smaller image surface such that the energy density (light/surface) remains constant.
That's not quite what I had in mind, but I can see that my statement was badly worded!

What I had in mind was something like this:

Suppose we are observing a disk-shaped radiation source with an angular diameter, as measured by the observer, of 5 arc minutes. Within that observed size, consider a smaller disk-shaped area with an angular diameter, as measured by the observer, of 1 arc minute, which therefore covers 1/25 of the total observed area. This smaller area will therefore account for 1/25 of the total radiation received from the object.

Now remove the object to twice the distance from the observer, so that the observed angular diameter is reduced to half its original size (i.e. 2.5 arc minutes), and the total observed area is reduced to 1/4 of its original size. If now we consider a disk-shaped area within that total having an angular diameter of 1 arc minute, as before, that area will account for 1/2.5^2 = 1/6.25 of the total radiation received by the observer, instead of 1/25. The proportion of the radiation falling within a 'window' of constant angular size (in this case 1 arc minute) is therefore increased by a factor of 25/6.25 = 4, while the total radiation received is reduced by a factor of 4. The two factors cancel out, so the amount of radiation falling within a 'window' of constant angular size is also constant.
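The same bookkeeping in a few lines of Python (small-angle approximation, uniform disk):
Code:
# Disk-shaped source seen through a fixed 1-arcminute 'window'.
def window_fraction(source_diameter_arcmin, window_arcmin=1.0):
    """Fraction of the radiation received from the source that falls inside the window."""
    return (window_arcmin / source_diameter_arcmin) ** 2

# At distance d: the source subtends 5 arcmin; call the total received radiation 1 unit.
near_total, near_diam = 1.0, 5.0
# At distance 2d: the total received radiation drops by 4, and the angular diameter halves.
far_total, far_diam = near_total / 4.0, near_diam / 2.0

print(near_total * window_fraction(near_diam))   # 1    * 1/25   = 0.04
print(far_total * window_fraction(far_diam))     # 0.25 * 1/6.25 = 0.04, unchanged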

Regardless of the geometrical arguments, the empirical fact seems to be that if we exclude (effective) point sources, and any atmospheric effects, the perceived brightness of an object is independent of its distance. As you said earlier,

Intuitively, in a photograph, objects that are farther away are not darker than nearby objects; perceived brightness in a photo does not vary with distance. Things that move away get smaller, not darker!
Anyone can verify this in a crude way by taking two objects of equal brightness - say, two post-it notes - and holding one at arm's length and the other at half the distance, and positioning them so that the nearer one overlaps the further one. Provided the conditions of illumination are the same, they will still appear equally bright. (In practice, I find it is difficult to get the conditions of illumination exactly the same, but there certainly isn't a factor-of-four difference in brightness, as an inverse-square law would imply!)
 
The two factors cancel out, so the amount of radiation falling within a 'window' of constant angular size is also constant.
Ah, yes, thank you for elaborating.
Your "window of constant angular size" is my "pixel" by another name, so we're agreeing.
 
The "window of constant angular size" works only at larger distances, meaning the ratio (measure) aperture and distance is large. This is because the cosine error is very small. But if you measure objects at shorter proximity, this is no longer correct radiometrically.
 
The "window of constant angular size" works only at larger distances, meaning the ratio (measure) aperture and distance is large. This is because the cosine error is very small.
The cosine error is <1% for angular sizes <8⁰, irrespective of distance.

ATFLIR as used in GIMBAL has a field of view of only 0.35⁰.
 
Anyone can verify this in a crude way by taking two objects of equal brightness - say, two post-it notes - and holding one at arm's length and the other at half the distance, and positioning them so that the nearer one overlaps the further one. Provided the conditions of illumination are the same, they will still appear equally bright. (In practice, I find it is difficult to get the conditions of illumination exactly the same, but there certainly isn't a factor-of four difference in brightness, as an inverse-square law would imply!)
Sunlight, being at the same angle over short distances, solves this, and gives a nice illustration.

I put two post-it pads at the same position on two identical chairs at the same angle (right angles to my house), both in identical direct sunlight. So these two pads are emitting exactly the same amount of light. The far pad is 6x the distance of the near pad. Is it 1/36 the brightness?
2022-06-27_11-50-06.jpg

They are the exact same brightness.
2022-06-27_11-52-25.jpg

Illustrating that an area light source does not get darker with distance.
 
So Taylor's argument seems to be:
  1. An example jet's engines are overexposed (which he calls "saturated") at a certain distance
  2. We would expect that as distance increases, things get darker, due to inverse square law of radiation
  3. So objects that are further away than the example would have to be hotter to be overexposed
  4. They can't be much hotter, or they would melt, so it must be pretty close, like <10 miles or so.
  5. Hence it's not a distant jet
But, as we see with the Post-it example, things don't get darker with distance, they just get smaller. Twice the distance, quarter the size, quarter the radiation, same brightness per pixel. So they don't need to be any hotter to be overexposed. Hence his reasoning, and the graph, are invalid. (As noted by @Mendel, @jplaza, and @DavidB66)

This is in addition to the issues noted with differences in exposure (level/gain) between the two videos.
 
An example jet's engines are overexposed (which he calls "saturated")
To clarify that: obviously an individual pixel sensor can have more light hitting it than it can measure, which would be "irradiance saturation" (irradiance = incoming radiation rate). In the case of the ATFLIR with a 14-bit digitizer, this would give a value of 16383 (i.e. the largest 14-bit number, 2^14-1).

All we know from the video is that the resultant 8-bit value is 255 (2^8-1); this tells us nothing about irradiance saturation, other than that it is brighter than other things in the scene. It's simply overexposed after a variety of digital adjustments, an analog re-encoding to NTSC, possible analog recording, then more digital adjustments.

I suppose even "overexposed" might be the wrong term here, as it suggests some physical process (since it dates back to film days, and just means the film got too much light). However, the concept of post-sensor digital adjustments to exposure is pretty common now.
 
If you max out a digital sensor value, that's it; there's no going back, it's maxed, and you have no idea what the actual value would have been. This is why we often underexpose to avoid blowing highlights.

My main issue with this analysis (as per above) is that it's like trying to use an edited photo to work backwards to how bright the scene was; without knowing any of the settings or editing, there's just no way to know.

The ATFLIR image is processed to give high contrast etc. so the pilots can distinguish the object (we can see this from the white outline), and I just do not see how you can go backwards from that to work out a temperature.
 
If you max out a digital sensor value, that's it; there's no going back, it's maxed, and you have no idea what the actual value would have been. This is why we often underexpose to avoid blowing highlights.

My main issue with this analysis (as per above) is that it's like trying to use an edited photo to work backwards to how bright the scene was; without knowing any of the settings or editing, there's just no way to know.

The ATFLIR image is processed to give high contrast etc. so the pilots can distinguish the object (we can see this from the white outline), and I just do not see how you can go backwards from that to work out a temperature.

For a digital image to be of any use to the viewer it must present a range of pixel brightnesses. If they are all white you can distinguish nothing; if they are all black you can distinguish nothing. The ATFLIR is not a radiometer intended to show absolute temperature values, it is showing relative temperature values: the strongest signals as black and the weaker signals in a range of increasingly bright pixels (assuming it is set to black = hot). By the time the radiation entering the ATFLIR has been processed for viewing by the pilots, the pixels are just showing relative temperatures, because that is information they can use. They have no use for absolute temperature values. People are trying to find absolute data from a system that was never intended to capture or record it.
 
I think the argument laid out above by Mendel and others is broadly correct in geometric optics (which is an approximation that applies wonderfully across most of everyday life, and in particular, it applies in the post-it example), but Gimbal is a touch more subtle. Gimbal, after all, is a glare, and the glare of a point source has a constant angular size that's independent of distance. All the irradiance will be spread over that constant area rather than in a properly resolved image of the source.

This is the relevant distinction between a "point source" and a resolved source, at least in an optics-limited system (which ATFLIR is).
This is how Betelgeuse can be some 12,000 times brighter than Alpha Centauri, but Alpha Centauri appears ~60% brighter in our sky -- Betelgeuse is 125 times farther away, but the glare in our eyes is always about the same size.

Taylor made no consideration of this in his analysis and made no attempt to characterize the point spread function of ATFLIR (which might be something of a fool's errand anyway because if, as some suspect, there was some kind of smudge or scratch on the outer cover, the PSF would be different from all other ATFLIR examples). An F135 engine at a distance of 10 nmi would be just 3 pixels or so wide, so some sort of relation like he described might hold in this regime (modulo the many caveats already pointed out here). There's just little hope in finding it, even in possession of classified information, even if he had correctly accounted for the PSF when analyzing other ATFLIR videos.
 
@Mendel's argument, that apparent brightness is independent of distance for non-pointlike objects, is correct.

I just don't like the "non-pointlike" part: when does a small area start to be a 'point'? I'd add the concept of resolution: if an object is resolved by the optics then it has a (measurable) angular size and what @Mendel says holds. But when an object's angular size becomes so small that it is not resolvable, then the apparent brightness starts to drop off with the square of distance, because the apparent angular size of the object stays constant at the resolution limit whatever it originally was. This is why stars show an inverse-square dependence of apparent brightness vs. distance while post-its (excellent demonstration, @Mick West) don't.

Then in the case of Gimbal, as @markus points out, we're talking about glare, which does indeed have a constant angular size. But this size is totally unrelated to the object; it just depends on the optics, so it's useless for calculating the object's parameters.
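A hedged sketch of that resolved-to-unresolved transition, with made-up numbers: a 1 m source, a notional 10 μrad per-pixel field of view, ideal optics and no real point spread function (which in practice smears the light over more than one pixel, as discussed above).
Code:
IFOV = 10e-6           # notional per-pixel field of view, radians (made up)
SOURCE_SIZE = 1.0      # source width, metres (made up)
SOURCE_POWER = 1.0     # arbitrary units

for d in (1e3, 5e3, 1e5, 2e5, 4e5):                        # distance in metres
    power_at_aperture = SOURCE_POWER / d ** 2              # inverse-square falloff
    angular_size = SOURCE_SIZE / d                         # small-angle approximation
    pixels_covered = max((angular_size / IFOV) ** 2, 1.0)  # an unresolved source still lights ~1 pixel
    print(f"d = {d:8.0f} m   per-pixel brightness ~ {power_at_aperture / pixels_covered:.2e}")

The per-pixel value stays flat while the source is resolved, then starts dropping as 1/d^2 once its angular size falls below one resolution element, which is why stars follow an inverse-square law while post-it notes don't.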
 
@MapperGuy

Good points, this was exactly what I was referring to. Radiometry is pretty cool, but absolute radiometry is something else altogether.
 
For a digital image to be of any use to the viewer it must present a range of pixel brightnesses. If they are all white you can distinguish nothing; if they are all black you can distinguish nothing. The ATFLIR is not a radiometer intended to show absolute temperature values, it is showing relative temperature values: the strongest signals as black and the weaker signals in a range of increasingly bright pixels (assuming it is set to black = hot). By the time the radiation entering the ATFLIR has been processed for viewing by the pilots, the pixels are just showing relative temperatures, because that is information they can use. They have no use for absolute temperature values. People are trying to find absolute data from a system that was never intended to capture or record it.

Spot on. Taylor's fallacy boils down to using a digitally processed IR image as a thermometer from which an object's temperature can be accurately extrapolated while ignoring a range of digital image-production and optical variables.

A related fallacy is the assumption that invisible electromagnetic radiation isn't physically light and hence cannot produce a glare or other optical effects on an IR camera which are similar to the optical effects of visible light in standard cameras.

Article:
Electromagnetic radiation is a type of energy that is commonly known as light. Generally speaking, we say that light travels in waves, and all electromagnetic radiation travels at the same speed which is about 3.0 × 10^8 meters per second through a vacuum. We call this the "speed of light"; nothing can move faster than the speed of light.


Article:
Radiometry is a set of techniques for measuring electromagnetic radiation, including visible light. Radiometric techniques in optics characterize the distribution of the radiation's power in space, as opposed to photometric techniques, which characterize the light's interaction with the human eye. The fundamental difference between radiometry and photometry is that radiometry gives the entire optical radiation spectrum, while photometry is limited to the visible spectrum. Radiometry is distinct from quantum techniques such as photon counting.
 
Article:
Electromagnetic radiation is a type of energy that is commonly known as light. Generally speaking, we say that light travels in waves, and all electromagnetic radiation travels at the same speed which is about 3.0 × 10^8 meters per second through a vacuum. We call this the "speed of light"; nothing can move faster than the speed of light.


Article:
Radiometry is a set of techniques for measuring electromagnetic radiation, including visible light. Radiometric techniques in optics characterize the distribution of the radiation's power in space, as opposed to photometric techniques, which characterize the light's interaction with the human eye. The fundamental difference between radiometry and photometry is that radiometry gives the entire optical radiation spectrum, while photometry is limited to the visible spectrum. Radiometry is distinct from quantum techniques such as photon counting.

Those are terrible quotes/explanations; both, I would say, are *just plain wrong*. EMR is *not* typically known as light, nor should it be, as most of it is not light, as it's not in the visible range. And the "optical" radiation spectrum *is* the *visible* spectrum.
 
Those are terrible quotes/explanations; both, I would say, are *just plain wrong*. EMR is *not* typically known as light, nor should it be, as most of it is not light, as it's not in the visible range. And the "optical" radiation spectrum *is* the *visible* spectrum.

Optics also commonly includes the infrared range (not just the visible), and in physics all EMR is light in the sense that its quanta are photons and its vacuum velocity is the speed of light.

The citations are good enough to convey the point in a truthful manner.

Article:
In physics, the term "light" may refer more broadly to electromagnetic radiation of any wavelength, whether visible or not.[4][5] In this sense, gamma rays, X-rays, microwaves and radio waves are also light. The primary properties of light are intensity, propagation direction, frequency or wavelength spectrum and polarization. Its speed in a vacuum, 299 792 458 metres a second (m/s), is one of the fundamental constants of nature.[6]


Article:
Optics usually describes the behaviour of visible, ultraviolet, and infrared light. Because light is an electromagnetic wave, other forms of electromagnetic radiation such as X-rays, microwaves, and radio waves exhibit similar properties.[1]
 