Calculating and visualizing Gimbal angles.

Yes, the model is based on the values we have on the screen: azimuth, bank angles, elevation (constant -2 deg). What I understand is that the model takes these values and estimates the pitch rotation of Gimbal (based on AoA), then the yaw rotation (based on bank angles), and this allows us to retrieve the roll (which makes Gimbal rotate at the end). What is unclear to me are these lines in the code:

# setting up fictitious cartesian positions at distance r0 = 1 with the
# indicated azimuth and elevation so we can calculate ATFLIR pitch and roll
x = -r0 * np.cos(azimuths)
y = -r0 * np.sin(azimuths)
z = np.full(len(t), r0 * np.sin(np.deg2rad(elevation)))

# pitch rotation (simultaneous update, so the new z is computed from the old x)
x, z = (x * np.cos(aoa_r) - z * np.sin(aoa_r),
        x * np.sin(aoa_r) + z * np.cos(aoa_r))

# bank rotation (likewise, the new z is computed from the old y)
y, z = (y * np.cos(bangles_r) + z * np.sin(bangles_r),
        -y * np.sin(bangles_r) + z * np.cos(bangles_r))

We start from x, y and z, the coordinates of Gimbal as a function of time. These depend on the azimuths only, because the elevation is constant, the angle of attack is assumed constant, and so is the distance to Gimbal. Then the pitch rotation is "applied" to the coordinates, then the yaw rotation (bank angles), which leaves us with the change in position (from the origin) that must be accounted for by the roll. Am I understanding correctly?
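For reference, here is my guess at the final step, which is not shown in the excerpt above (the axis and sign conventions here are my assumption, not necessarily what the script actually uses):

Python:
import numpy as np

# assumed final step: with the forward axis along -x in this setup, the
# roll Gimbal's camera must perform is the angle of the target's
# lateral/vertical components around that axis (signs are a guess)
gimbal_rolls = np.degrees(np.arctan2(y, z))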

My question is: what justifies making the Gimbal adjustments in that order during the calculations? Does it work like that on a pod?
 
I'm playing with the Python script; it'd be good to recognize the uncertainties in this model.

Here I'm simply switching the order of the pitch and bank rotations in the code:
inverted_calculation.png

With this change, the model predicts the Gimbal roll starts at ~0:15 and has a more gradual buildup, but the glare only rotates at 0:24. If there is a justification for the order of calculation, though, never mind.

Going back to the original code, remember that the angle of attack makes a difference (as Markus has already shown):

AoA = 1 (left), AoA = 10 (right):
AoA 1.png
AoA10.png


For AoA=1 (left), the 0:24 rotation of the glare happens before the model predicts a significant Gimbal roll. At AoA=10 (right), the Gimbal roll starts around 0:15, but the glare only rotates at 0:24. AoA=6 maximizes the fit; if that value is justified and not cherry-picked, that's fine, but we don't know the AoA.

Of course the model will always get the big picture right, because the Gimbal camera has to roll when crossing 0 azimuth, and the object rotates at the end of the video when the azimuth is around 0. We've known that for 4 years, since the preliminary work of Mick on the rotating glare hypothesis. Still, there is the possibility that the object rotated around the time when the Gimbal camera had to roll, and this has been the raging debate since 2018, right?

This is why this model is supposed to go further and show that the object rotation perfectly matches the Gimbal camera rotation (i.e. it's not a coincidence).

But given the uncertainties, is that the case? The model has a lot of value in illustrating the rotating glare hypothesis, but it should not be overfitted to look like it explains exactly every rotation movement of the glare (as in the latest Mick animation). It does not, because of the uncertainties, and because we cannot predict exactly how the rotation steps would have occurred without knowing the exact algorithm in the pod.
 
When estimating the roll of Gimbal, based on its pitch (AoA) and yaw (bank angles), you first calculate the pitch rotation, then the yaw, and retrieve the roll. It seems calculating the yaw first (bank angles), then the pitch, makes quite a difference and decreases the fit. What is the justification for doing the calculation in that order?
3D transforms are confusing. I get confused, Markus gets confused, you get confused. But a big benefit of the 3D model is that you can see that the results are as expected. The pod needs to point at the target when asked. To help verify it's doing things correctly, I've added a "physical" pointer, which is basically just an extrusion of the ball.
2022-02-11_09-58-32.jpg

Now you can see that the pod head is pointing exactly at the ball.

My question is: what justifies making the Gimbal adjustments in that order during the calculations? Does it work like that on a pod?
Physical reality does not have a coordinate transform order. You don't get to choose; there's only one way things go. It's determined by how the axes are mounted. If rotating axis A changes the orientation of axis B, then to use simple rotations (like rotating about the X-axis, assuming both A and B are aligned to X, Y or Z) you need to apply the axis B rotation first. You start at the tip and work backwards.
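Here's a quick illustration of that, as a standalone Python sketch (not the sim's code): the same two axis rotations give a different pointing direction depending on which one you apply first.

Python:
import numpy as np

# illustrative only: two simple axis rotations do not commute, which is
# why the physical mounting order of the gimbal axes fixes the transform order
def Rx(deg):
    a = np.deg2rad(deg)
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a), np.cos(a)]])

def Rz(deg):
    a = np.deg2rad(deg)
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a), np.cos(a), 0],
                     [0, 0, 1]])

v = np.array([0.0, 0.0, 1.0])   # "1 unit along FORWARD"
print(Rx(30) @ Rz(40) @ v)      # axis B first, then axis A: [0, -0.5, 0.866]
print(Rz(40) @ Rx(30) @ v)      # opposite order: a different pointing direction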

So to convert Pod Pitch, Pod Roll, and AoA to X,Y,Z on a unit sphere, I take the point that's 1 unit along the FORWARD axis, rotate it about LEFT by the pod pitch, then about FORWARD by the pod roll (adding in the bank angle), then about LEFT by the AoA.

This is all complicated, and made difficult to explain, by differences in coordinate systems. I did my math and initial spreadsheets using a left-handed system (X = right, Y = up, Z = forward). But Three.js uses right-handed (negative Z = forward), and Blender uses right-handed with X = right, Y = forward, and Z = up.

Then there is the frame of reference you use for the cartesian coordinates. Where is the origin? Which way is up?

I'm ignoring the distance to the object, as all we are interested in are angles. So I have my cartesian (X,Y,Z) origin in the middle of the ball, as that's the center of the Az/El spherical coordinate system and of the Pod Pitch, Pod Roll system. I also use it as the center of rotation for AoA.

This shows the axes in Blender
2022-02-11_11-10-53.jpg


The heading of the jet does not change, just the bank angle. I'm simply using the video data to adjust the heading of the target from -54 to +7

Here's the code that transforms from Az/El to XYZ, and from pod head pitch and roll to XYZ (accounting for AoA), as well as the same in reverse (i.e. XYZ to Az/El, and XYZ+AoA to pod pitch and roll).

That allows two more functions to convert between Az/El and Pod Pitch/Roll and back.

JavaScript:
// These were written for Left Handed Coordinates, so returning -z
// as THREE.js is right handed

// pitch = rotation of the "ball", relative to straight ahead
// roll = clockwise roll of the entire system along the forward axis
// aoa = angle of attack, ie tilt about x (right) axis
// These have somewhat nominal orientations, but 0,0 is straight ahead
// and when roll = 0, a positive pitch is vertical.
// The forward axis is -z, vertical is y, right = x.
function PRA2XYZ(pitch, roll, aoa, r) {
    roll -= 180
    if (roll < 0) roll += 360   // normalize to [0, 360)

    var x = r * sin(radians(pitch)) * sin(radians(roll))
    var y = r * sin(radians(pitch)) * cos(radians(roll))
    var z = r * cos(radians(pitch))
    var aoaR = radians(-aoa)
    var za = z * cos(aoaR) + y * sin(aoaR)
    var ya = y * cos(aoaR) - z * sin(aoaR)
    return new THREE.Vector3(x, ya, -za);
//    return new THREE.Vector3(x, y, -z);   // the version without AoA, for reference
}

// el = Elevation is the angle above horizontal
// az = Azimuth is the angle in the horizontal plane relative to the direction of travel

// Calculations here assume positive z is forward, right hand
// but return -ve z (left hand)
// the XYZ result is in the GLOBAL frame of reference

function EA2XYZ(el, az, r) {
    var x = r * cos(radians(el)) * sin(radians(az))
    var y = r * sin(radians(el))
    var z = r * cos(radians(el)) * cos(radians(az))

    return new THREE.Vector3(x,y,-z);
}

// note, this assumes normalized global x,y,z coordinates (on the surface of a sphere centered at 0,0,0)
function XYZ2EA(v) {
    var el = degrees(atan2(v.y, Math.sqrt(v.z*v.z + v.x*v.x)))
    var az = degrees(atan2(v.x,-v.z))
    return [el,az];
}

// convert global X,Y,Z and AoA to Pitch and Roll.
// AoA is angle of attack of the pod
// so first we need to convert to the frame of reference of the pod
// by rotating about x (right) by AoA
// Will always return a positive pitch.
// But there's always a solution with the same negative pitch
// and roll+180 or roll-180
// if you are seeing minimum movement, then the algorithm should consider that.
function XYZA2PR(v,aoa) {

    var aoaR = radians(-aoa)
    var x = v.x
    var y = v.y * cos(aoaR) - v.z * sin(aoaR)
    var z = v.z * cos(aoaR) + v.y * sin(aoaR)

    var pitch = degrees(atan2(Math.sqrt(x*x + y*y),-z))
    var roll = degrees(atan2(x,y))
    roll += 180
    if (roll >180) roll -= 360;
    return [pitch, roll]
}

// Convert El, Az, and AoA to Pitch and Roll
// by converting global El and Az first to global x,y,z
// then converting that global x,y,z, together with AoA to pitch and roll
function EAA2PR(el, az, aoa) {
    return XYZA2PR(EA2XYZ(el,az,1),aoa)
}

// convert a pitch, roll, and aoa to elevation and azimuth
// first convert pitch, roll, and aoa to global xyz, then that to global El and Az
function PRA2EA(pitch, roll, aoa) {
    return XYZ2EA(PRA2XYZ(pitch,roll,aoa,1))
}
 
But given the uncertainties, is that the case? The model has a lot of value in illustrating the rotating glare hypothesis, but it should not be overfitted to look like it explains exactly every rotation movement of the glare (as in the latest Mick animation). It does not, because of the uncertainties, and because we cannot predict exactly how the rotation steps would have occurred without knowing the exact algorithm in the pod.
It's not an animation, it's an interactive tool, although still a work in progress. Check it out.
https://www.metabunk.org/gimbal/

It's not trying to predict the rotation steps, it's using the glare angle changes as the rotation steps. In the default configuration of the Sim, the glare angle is used to control pod roll. Pod pitch can't be determined, so it's just using the constant track to follow the target's Az changes.

This shows up as the green dot. The green dot is where the outside of the pod head is pointing. You can see this a bit better with the new "physical pointer"

The white dot is just the target position

The small white circle is a 5° radius "flex" ring, assuming the internal mirrors can adjust the beam by as much as 5°.

So long as the green dot (the physical pointing direction of the pod head) is inside the white circle, then we are good, the target can be tracked.

Try adjusting the AoA. The magenta graph shows the error across the entire traverse (note the different scale, on the right).


You can adjust it from 0° to 8° and it all stays within the 5° flex.

Probably the best AoA is 5°, if we want the fit that best matches the theory. Here you see corrections consistently initiated at about 3° of deviation.

2022-02-11_11-40-17.jpg


Note the glare does not rotate in the first 20 seconds. I use this to set the "Glare Start Angle" to simply the average of the "ideal" roll over the first 20 seconds. This ignores all the rest of the curve and the steps, so it's not a fit attempt, and it does not always give the most perfect fit overall - but I chose it because it's a fairly neutral way of picking the arbitrary start angle. You can tweak it to see what effect different values would have.
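In pseudo-code (variable names illustrative, not the sim's actual code), the start angle is just:

Python:
import numpy as np

t = np.linspace(0, 34, 341)      # video time in seconds (illustrative)
ideal_roll = np.zeros_like(t)    # stand-in for the computed ideal roll curve

# neutral choice for the arbitrary start angle: average the ideal roll
# over the first 20 seconds, ignoring the rest of the curve and the steps
glare_start_angle = ideal_roll[t < 20].mean()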

Focussing just on this curve might lead to the objection "you fudged the numbers to fit", so I encourage people to try the different numbers, and see just what the effect is on the error graph. I also encourage people to consider this in the context of the other three observables (lack of glare rotation while banking, camera bumps before glare rotation, rotating full-image light patterns in sync with the glare rotation).

The curve matches here (and, more importantly, the error curves) are very much "close enough" across a wide range of values, and become inarguably close with a quite reasonable AoA of 5°. The glare hypothesis is a fit.
 
Thanks @Mick West , but I cannot open https://www.metabunk.org/gimbal/, it gives me a blank page.
Tried in Firefox and Chrome. Maybe it's just me.
It needs a relatively recent browser, but should work. I know it does not work on my old iPhone 6.

Can you enable the debug console in Chrome (View->Developer->Javascript console), refresh the page, and see if there's an error? You should see something like this, if it's working:
2022-02-11_13-05-06.jpg
 
Thanks Mick, did what you said and I had an error. After installing the newer version of Chrome it works. I'll play with your simulation later. And thanks for all the details above, it is really helpful to understand what you're doing.
 

Thanks Mick, did what you said and I had an error.

That was from the usage of "await", which needs Chrome 55 or later. That's pretty old though, so it should work on 99% of browsers out there. I know there are some people who just don't like to upgrade.

2022-02-11_13-27-23.jpg
 
Haha, this is my PC at work; my Ubuntu version gave me trouble when updating. But it works for what I need to do, so I leave it as is. Not against updates in general, though.
 
I also encourage people to consider this in the context of the other three observables (lack of glare rotation while banking, camera bumps before glare rotation, rotating full-image light patterns in sync with the glare rotation).

Maybe it's been discussed before, but what's going on at 0:29? Here the glare seems to rotate with the bank angle and the background. This is the only time I see light patterns in sync with the rotation, but it happens to be when the whole image rotates with the glare.



So to convert Pod Pitch, Pod Roll, and AoA to X,Y,Z on a unit sphere, I take the point that's 1 unit along the FORWARD axis, rotate it about LEFT by the pod pitch, then about FORWARD by the pod roll (adding in the bank angle), then about LEFT by the AoA.

That's the part I'm unsure about: are we sure the pod cannot make the two rotations at once (pitch and roll) to allow for a smoother roll at the singularity? That would avoid having these weird steps.
 
Maybe it's been discussed before, but what's going on at 0:29? Here the glare seems to rotate with the bank angle and the background. This is the only time I see light patterns in sync with the rotation, but it happens to be when the whole image rotates with the glare.
The light patterns start to rotate in sync a second before the entire image rotates. There's a longer rotation of the glare here because the jet's bank angle changes clockwise, which means the pod head has to rotate more counter-clockwise to compensate.

That's the part I'm unsure about: are we sure the pod cannot make the two rotations at once (pitch and roll) to allow for a smoother roll at the singularity? That would avoid having these weird steps.
It can, and it does. The "weird steps" are presumably there to avoid longer periods of rapid roll, during which it's hard to track the target. It's not entirely clear why it does it in steps, but it's fairly clear that it does.
 
Hi Mick. Is this error function simply the calculated roll angle minus the glare angle?
No.

It's the angle between the target (white dot) and where the pod is looking with just the two axes (green dot, "physical pointer")

i.e. it's how much the internal mirrors need to adjust the line of sight.

This might be a point that needs more explanation - so let me know if that does not make sense.
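In code terms it's just the angular separation of two unit vectors, something like this (names and numbers are illustrative, not from the sim):

Python:
import numpy as np

def angle_between_deg(a, b):
    # angular separation of two pointing directions, in degrees
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))

target  = np.array([0.30, 0.10, -0.95])   # white dot (made-up direction)
pointer = np.array([0.25, 0.15, -0.95])   # green dot, two-axis pointing
print(angle_between_deg(target, pointer)) # trackable while under ~5 degrees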
 
No.

It's the angle between the target (white dot) and where the pod is looking with just the two axes (green dot, "physical pointer")

i.e. it's how much the internal mirrors need to adjust the line of sight.

This might be a point that needs more explanation - so let me know if that does not make sense.
Thanks Mick. It all makes perfect sense. Some time ago I analyzed the various angular speeds of the subjects in the video. As the ATFLIR failed to lock onto the target throughout the video, I wanted to analyze the effort it was making to track it, so I calculated the angular velocity of the LOS. The two graphs overlap exactly at the peaks (not in amplitude, but that is certainly due to the incompleteness of the exact data).
1644750472892.png

So I think this too has a physical meaning and a logical explanation.
 
I was wondering if it is possible that the ATFLIR tracking/rotation algorithm works differently in pure auto-track mode versus SLAVE mode with a radar track assisting; maybe it has to make allowances in how it adjusts based on whether it is in auto only, auto with radar, or radar only.
 
I was wondering if it is possible that the ATFLIR tracking/rotation algorithm works differently in pure auto-track mode versus SLAVE mode with a radar track assisting; maybe it has to make allowances in how it adjusts based on whether it is in auto only, auto with radar, or radar only.
I would like to analyze the behavior of the angular velocity of the LOS in the GOFAST video as well (where the target is locked) and compare it. Is there a data sheet similar to Gimbal Extracted Bank Angle.csv for this footage?
 
I would like to analyze the behavior of the angular velocity of the LOS in the GOFAST video as well (where the target is locked) and compare it. Is there a data sheet similar to Gimbal Extracted Bank Angle.csv for this footage?
Both videos show autotrack though, not SLAVE.

I think angles were extracted from Go Fast, but they will not be as detailed/smoothed etc. as the GIMBAL ones, as they were only roughly done to show the rough speed/motion of the Go Fast object, to debunk the "low and fast" TTSA analysis.
 
My question is: what justifies making the Gimbal adjustments in that order during the calculations? Does it work like that on a pod?
You ask some very good questions. I started writing a post explaining why the choices we made were the right ones, realized we had gotten confused along the way, then dug around and learned something about ATFLIR that validates the choice we made after all (with one caveat, which I'll get to). The tl;dr of the story is: we do the rotation in the order pitch, then roll, because that order of rotations doesn't change the aircraft heading. To see why we want this, read on.

First, a small correction, so you won't get confused in what follows: yaw is a rotation about the vertical axis of the aircraft (like the pilot turning his head left and right), roll (not to be confused with pod roll) is a rotation around the longitudinal axis (like a steering wheel), and pitch is a rotation about the transverse axis (up and down from the pilot's perspective).

Take a rigid object in 3D space. It's a fact (that I'll state without demonstration) that you can rotate it into any orientation that you like by composing three rotations. There are different choices and conventions for what rotations are performed and in what order (you can read about them here), but ultimately they're all equivalent. This means that you could do what we're doing here using a combination of pitch, yaw, and roll rotations in whatever order, so long as you choose the rotation angles appropriately.
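A quick way to see this equivalence numerically, using scipy (the conventions chosen here are arbitrary examples):

Python:
import numpy as np
from scipy.spatial.transform import Rotation

# a random orientation decomposes into three rotations in any convention,
# and each decomposition recomposes into the same rotation matrix
R = Rotation.random()
for convention in ("xyz", "zyx", "ZXZ"):
    angles = R.as_euler(convention, degrees=True)
    R2 = Rotation.from_euler(convention, angles, degrees=True)
    print(convention, np.allclose(R.as_matrix(), R2.as_matrix()))  # True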

So our choice is about what we can calculate, or estimate, given what we know about the problem. So what have we got? First, we know this is coordinated level flight. Secondly, we have bank angles, which can be tracked from the image, and we get to estimate the angle of attack (which for the F-18 is the angle between the aircraft longitudinal axis and the flight path) using a simulator. You're right that we don't really know, and without a recording of the HUD or something we can't know for sure because it depends on the aircraft weight and even loadout. Using DCS (a simulator which is generally considered to be accurate, at least in this gentle flight regime) we get an AoA of 6 degrees when heavy and 5 degrees when light. That said, DCS simulates the F-18C legacy hornet, while the gimbal video was shot with the F-18F Super Hornet. MSFS and a video I found of an FSX payware Super Hornet seem to suggest an AoA of about 4 degrees in similar conditions, so around 4 to 6 degrees is our ballpark.

So let's get a little airplane into the correct orientation for a level turn. Start from wings level, pointing the nose straight at the horizon. I want to roll it so it matches the bank angle from the video, and pitch it so it matches our estimated AoA. I claim we have to perform the rotations in the order roll, pitch. Why? Say we do the opposite, pitch then roll: then the aircraft will have an elevation angle matching the AoA, but the elevation has to be less than that because the airplane is in a bank, and performing the roll rotation afterwards will do nothing to the aircraft elevation. Here are some cartoons with exaggerated angles (80 degrees of bank, 50 degrees of pitch) illustrating the difference between the two orders. The starting point:
basec.png

Notice how I'm in "local" rotation mode. This is so the axes get dragged around with the rotations, which makes it do what you intuitively expect when I say "pitch by x degrees, roll by y degrees". Now let's pitch the airplane up by 50 degrees:
rx-50c.png

And then roll by 80 degrees:
rx-50rz-80c.png

That's clearly not the attitude I would expect from an aircraft in level coordinated flight with a bank angle of 80 degrees and an AoA of 50. It's more like an aggressive turning climb at some small AoA.

So let's do the opposite. First roll by 80 degrees,
rz-80.png

and then pitch by 50 degrees:
rz-80rx-50.png

Now that's the expected attitude. Wings are near vertical, taking a big bite out of the air. That's what this cartoon turn would look like.

Now notice that I mentioned that I'm using the "local" rotation mode in Blender. That's what I didn't incorporate correctly; I simply multiplied by rotation matrices in the coordinate axes that initially corresponded to pitch and roll. In order to do it correctly, you simply apply the rotations in the reverse order. So, for example, with y the axis corresponding initially to pitch, and x the axis corresponding to roll, in order to get the airplane properly oriented you do the transformations in the order pitch, roll; that is, y by 50 degrees, then x by 80 degrees.
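As a numerical sanity check of that statement (axis assignments as above; this is just an illustration, not my actual script):

Python:
import numpy as np

def Rx(deg):
    a = np.deg2rad(deg)
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a), np.cos(a)]])

def Ry(deg):
    a = np.deg2rad(deg)
    return np.array([[np.cos(a), 0, np.sin(a)],
                     [0, 1, 0],
                     [-np.sin(a), 0, np.cos(a)]])

pitch, roll = 50, 80   # the exaggerated cartoon angles

# "local" pitch-then-roll equals the world-axis matrices composed in the
# written order, i.e. the extrinsic sequence roll first, then pitch
R = Ry(pitch) @ Rx(roll)

# and the inverse applies the inverted steps in reverse order: -roll, -pitch
print(np.allclose(np.linalg.inv(R), Rx(-roll) @ Ry(-pitch)))  # True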

So that's how you get the aircraft oriented. What I did in my python script is to trace out the path the gimbal object would make in the sky from the perspective of the pilots (and the ATFLIR pod), so you have to invert the above transformations, which means you apply the inverse of each step in reverse order: so x by -80 degrees, then y by -50 degrees, that is, -roll, -pitch!

This was the point when I thought I might've gotten it wrong. And since my intent was to do the transformation above, I did get it wrong. But somehow, I got to an answer that agrees with Mick's, and which I still claim is mostly right. How can that be?

So we've been working off some Navy documents that detail how ATFLIR is operated. In particular, A1-F18AC-746-100 states

External Quote:
Azimuth readout is the ATFLIR pointing angle left or right of aircraft ground track.
Now, to me, this never made any sense, at least for the Air-to-Air mode we're concerned with. It's adding a bunch of workload for the pilot who'll have to work out wind directions and speeds at whatever their altitude is in order to figure out where to go in order to find a target... that's probably subjected to the same wind he is anyway, so simply pointing the airplane towards him would be a much more expedient way to do it. So my assumption was that the manual was using imprecise language, and that the ATFLIR azimuths were referenced with respect to the horizontal projection of the aircraft velocity vector. This certainly makes more sense than it being the ground track, but if you think about it, ATFLIR is not just a thermal telescope, it's a component of a weapon system. The function of a weapon is to shoot things, in this case, missiles. If you want to shoot something like an AIM-9X at a target, you want to know how far off the aircraft boresight the target is to know if the missile can hit them or not. Furthermore, in an aircraft with integrated sensor systems, it'd be silly to have the FLIR pod and the radar work off different reference angles, and the radar azimuths are no doubt zeroed with respect to the boresight. So maybe the manual was right, but incomplete: perhaps the azimuths are referenced to the ground track when in air-to-ground mode, but to the boresight in air-to-air mode.

So I fired up DCS, turned on the ATFLIR, and did some sideslips while keeping an eye on what the pod does. Here it is in a/g (snowplow) mode,


And here it is in a/a (not tracking anything) mode:


It does just that -- in a/g mode, it keeps the pod pointing directly in front of the aircraft's trajectory while indicating zero azimuth, whereas in a/a mode it doesn't move the pod at all, and the azimuth stays locked at 0 regardless of the amount of slip. You can also see this in this video, at 10m41s. During a level turn, the HUD shows the velocity vector pretty much exactly below the target, but the ATFLIR indicates it's pointing 1 degree to the right -- because there's some angle of attack during that level turn, so the aircraft boresight is 1 degree to the left of its target.

Another source I found for this behavior is the manual for Jane's F/A-18 (attached). Of course, we don't know for sure whether the real Hornet works like this, but really it's the only thing that makes sense.

So, even though my initial code was (accidentally) right about how to transform the sky to match what the pilots would see, it started from the incorrect assumption that the indicated azimuths are referenced to the velocity vector, and so the pod is ahead of where it should be. That explains why that model looks better if delayed by a second or so. So to fix that, you run the transformations in an order that doesn't move the F-18's boresight laterally, so the reference remains correct. So that's why my revised code (accidentally) works, and why Mick's simulator also works.

Finally, the caveat: if doing the transformation in the order pitch, roll, the angle of pitch is not the angle of attack, but rather the elevation, the angle of the aircraft's longitudinal axis with respect to the horizon. In this case that's the projection of the angle of attack onto the vertical, that is, AoA * cos(bank). Since the AoA will have to increase with the bank angle to maintain level flight, using a constant elevation that's 85-90% of the real (but unknown) AoA is probably good enough. So that's the caveat -- the pitch we want to use is a touch smaller than the AoA estimated from the simulator.
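To put some numbers on that caveat (values picked for illustration only):

Python:
import numpy as np

aoa = 6.0                     # simulator-estimated angle of attack, degrees
for bank in (25, 30, 35):     # roughly the bank angles seen in the video
    elevation = aoa * np.cos(np.deg2rad(bank))
    print(f"bank {bank} deg: elevation ~ {elevation:.1f} deg"
          f" ({elevation / aoa:.0%} of AoA)")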

(Incidentally, the assumption that the azimuths are referenced to the velocity vector was also made, often tacitly, by just about everyone who tried to work out the gimbal/gofast trajectory using the bank angles and indicated azimuths -- myself, Mick, Edward Current, Chris Lehto, Parabunk, etc. Those analyses might have to be revisited).
 


First of all, thanks @markus for your efforts in explaining your calculations.

So you are saying your original model was using the right order for coordinate transformations, but it is wrong because the azimuths are relative to ground track, and not velocity vector. And the second model has error compensations that make it accurate? This goes over my head at this point, sorry. I hope others chime in on the implications of what you're saying about ground track versus velocity vector; this is interesting.

A thought: are the DCS simulations realistic enough to simulate how the pod head would have behaved in that case? I'm thinking about this reconstruction by @MclachlanM:
https://www.metabunk.org/threads/gi...lines-of-bearing-and-or-dcs.11836/post-254039

I guess it is possible to see how the pod head rotates in the simulator, right? Even if the trajectory is wrong (wrong cloud motion), the azimuths/bank angles are like in the video, and the pod head should react realistically. Aren't the pod motions in these simulators similar to the real deal?
 
First of all, thanks @markus for your efforts in explaining your calculations.

So you are saying your original model was using the right order for coordinate transformations, but it is wrong because the azimuths are relative to ground track, and not velocity vector. And the second model has error compensations that make it accurate? This goes over my head at this point, sorry. I hope others chime in on the implications of what you're saying about ground track versus velocity vector; this is interesting.

A thought: are the DCS simulations realistic enough to simulate how the pod head would have behaved in that case? I'm thinking about this reconstruction by @MclachlanM:
https://www.metabunk.org/threads/gi...lines-of-bearing-and-or-dcs.11836/post-254039

I guess it is possible to see how the pod head rotates in the simulator, right? Even if the trajectory is wrong (wrong cloud motion), the azimuths/bank angles are like in the video, and the pod head should react realistically. Aren't the pod motions in these simulators similar to the real deal?
As you probably already found, DCS does not simulate the pod movements accurately enough to cover the GIMBAL issue, nor does it simulate IR light gathering; it uses an idealised model externally and some shaders for the internal view to provide a "good enough" simulation for virtual pilots, but it does not mimic a lot of the flaws of a real system.

If you search the DCS forums you can find quite a few posts about what it doesn't do and some other issues with it.
 
So you are saying your original model was using the right order for coordinate transformations,
Correct.
but it is wrong because the azimuths are relative to ground track, and not velocity vector.
My working theory based on docs + what DCS does + my idea of what would be the most useful to the operator is that in a/g mode the azimuths are relative to the ground track, and in a/a mode they are relative to the boresight, that is, the direction the aircraft's nose is pointing.
And the second model has error compensations that make it accurate?
The second model used the opposite order from what I initially intended, but because it's an order that doesn't change the aircraft heading, it correctly incorporates the idea that the azimuth is relative to the boresight (as long as we use elevation instead of AoA directly).
I found the answer to my question looking at this other thread (this debate around Gimbal will go down in history):
https://www.metabunk.org/threads/fl...aims-to-refute-micks-claims.11933/post-255706

The pod has a nice smooth rotation that starts earlier and does not show bumps.
Ah, yes, I remember that thread. I believe those posts are what let us conclude DCS uses a simplified two-axis model, which is similar to what our "ideal" reconstructions are doing. They're not modeling the internal mirrors which, we claim, are what allow the pod to track the object smoothly even with the roll movement happening in steps. With DCS you couldn't, for instance, see the effect at 19 seconds where the plane banks and the glare remains stationary. It should otherwise be accurate, provided the trajectory is accurate.

The thing is, if you look at the indicated azimuths, it looks like MclachlanM (who, credit to him, was hand-flying the thing trying to get a close match) turned a little faster than in the real video, particularly there at the end. Or maybe the object was a little too close. Either way, the difference in the indicated azimuths is 5-7 degrees. A better comparison might be to plot both the ATFLIR angle and the glare angle as a function of indicated azimuth. I expect the biggest source of error in that comparison would be the difference in AoA between the simulated F-18C and the actual F-18F (with whatever its fuel state/loadout was during that particular mission at that particular time).
 
My working theory based on docs + what DCS does + my idea of what would be the most useful to the operator is that in a/g mode the azimuths are relative to the ground track, and in a/a mode they are relative to the boresight, that is, the direction the aircraft's nose is pointing.
Thanks @markus.
Your explanations are very clear.
Ever since I started following the ATFLIR topic, I have always had doubts about the real meaning of the azimuth value. As you have rightly assumed, it may well work differently depending on the mode of use (A/G or A/A).
However, I was very perplexed by the behavior of another indicator on the display, namely the LOS cue. This should provide the actual position of the target with respect to the nose of the aircraft on the Situational Awareness display.
And in the GIMBAL video this frame appears:
azimuth_loscue.png

The azimuth indicated is 4°R, while the target is exactly in front of the aircraft. This is very confusing, and the only explanation I have at the moment is that the behavior is much closer to what is described in the official ATFLIR manual.
 
If the dot shows the velocity vector angle, which origin on the screen is this angle relative to? The center of the screen?
 
If the dot shows the velocity vector angle, which origin on the screen is this angle relative to? The center of the screen?
Yes. At the beginning of the footage the azimuth corresponds exactly to the angle between the vertical axis (nose of the plane) and the line joining the center of the display and the dot. This trend is constant up to 20 s. Then it begins to deviate steadily.
 
Yes. At the beginning of the footage the azimuth corresponds exactly to the angle between the vertical axis (nose of the plane) and the line joining the center of the display and the dot. This trend is constant up to 20 s. Then it begins to deviate steadily.
@Leonardo Cuellar, isn't this a sign that the two objects travel in roughly the same direction in the first 20 seconds, and that their directions of flight diverge at the end? Because they are not affected by the wind in the same way?
 
@Leonardo Cuellar, isn't this a sign that the two objects travel in roughly the same direction in the first 20 seconds, and that their directions of flight diverge at the end? Because they are not affected by the wind in the same way?
I had also verified this behavior in GOFAST. The angle of the LOS cue began to diverge as the aircraft began the turn. At first I thought it depended on the inclination of the horizontal plane due to the turn, but in Gimbal this behavior does not happen even though the aircraft is already turning.
I have to analyze whether the trajectory of the target is affected in any way. The LOS cue is an immediate graphic aid in the SA display for providing the real position of the object with respect to the boresight, rather than something to decipher from the pod azimuth angle. So I suppose its trend is very accurate.
 
The difference with GoFast is that the altitudes are very different, so the effect of wind shear is probably greater. If with Gimbal we assume that the wind is about the same for the fighter and the object (which is what has been done so far, because we don't know the wind), then the divergence between the azimuth and the LOS cue is all the more useful, in that it tells us how the two objects' directions align.

I'm trying to visualize this effect; let me know if this looks correct.
Let's start with two objects flying in a similar direction, under a similar wind, blowing from their left in this example.
They are both affected by the wind in a similar way (displaced to the right). Their ground tracks differ from their local velocity vectors, but because the wind affects them both the same, the azimuth stays similar to the angle between their velocity vectors (LOS cue, the little dot on the screen).

1644950460398.png


Now, in situation 2, the plane has banked and their directions no longer align. The saucer is now to the left of the plane. In that configuration, the ground tracks of the saucer and the plane start diverging relative to the angle between their velocity vectors, i.e. the LOS cue starts diverging from the azimuth.

1644951965784.png

Does that make sense?

If yes, this is great information that can be used to constrain the potential trajectories. The fact that the azimuth and LOS cue diverge at the end of the video means that the two objects' directions relative to the wind are different, which contradicts the fighter seeing the rear of a plane.
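Here is a small wind-triangle sketch of the idea (all numbers invented, and I'm conflating heading with the air-relative velocity direction):

Python:
import numpy as np

def track_offset_deg(heading_deg, tas, wind_from_deg, wind_speed):
    # angle between the ground track and the air-relative velocity vector
    # (north = 0 deg, angles clockwise; units arbitrary but consistent)
    h, w = np.deg2rad(heading_deg), np.deg2rad(wind_from_deg)
    vn = tas * np.cos(h) - wind_speed * np.cos(w)   # ground velocity, north
    ve = tas * np.sin(h) - wind_speed * np.sin(w)   # ground velocity, east
    return np.degrees(np.arctan2(ve, vn)) - heading_deg

# similar directions, same wind: similar offsets, so azimuth ~ LOS cue
print(track_offset_deg(90, 240, 0, 60), track_offset_deg(95, 200, 0, 60))
# diverged directions, same wind: different offsets, the two readouts diverge
print(track_offset_deg(90, 240, 0, 60), track_offset_deg(150, 200, 0, 60))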
 

The fact that the azimuth and LOS cue diverge at the end of the video means that the two objects' directions relative to the wind are different, which contradicts the fighter seeing the rear of a plane.
This is a very delicate point that must be thoroughly investigated. Give me some time. I ask you to analyze as many scenarios as possible (converging, intersecting, and diverging trajectories, at various altitudes, both before and beyond the pivot point).
 
@Leonardo Cuellar: Yes, I am going to revisit the potential trajectories with this new information in mind. But this is one more thing that aligns with what Ryan Graves describes, and it's starting to be pretty damning if you ask me.

I think this effect also explains a mysterious aspect of the video: at the end, the clouds do not move anymore, while the plane still banks and the azimuths keep changing. This is because the azimuth change leads the velocity vector by ~4 deg at this point. At the very end, the velocity vector points directly at the object.

Completely unrelated, but I miss your avatar with the soccer player; it looked cool and was very recognizable! Cheers.
 
The problem here is we don't know for sure what the Cue Dot actually is. We are not even 100% sure about the Az angle at the top of the screen, although I'm pretty sure it's relative to boresight, at least in air-to-air.

But then what would the cue dot be? I extracted the angle relative to the center of the screen, and got the grey line here, with the orange line being Az (top of screen number, 54L to 7R)

2022-02-15_16-53-56.jpg


(Spreadsheet attached)

The blue line is the bank angle (the angle of the plane's wings relative to horizontal). The noisy light orange line is a zoomed-in look at the frame-to-frame difference in the cue angle, with the black line being a moving average (it's not noisy at the end because I had to keyframe it over the UI numbers).

Grey (cue dot angle) lags slightly behind Az for the first half, then leads it. There's some noticeable correlation with bank angle changes.

I've added an option to the "Tweaks" menu, "azType", that lets you use the cue dot angle as Az. The result is a slightly less pleasing fit, but it still stays under 4° error.
https://www.metabunk.org/gimbal/
 


I think this effect also explains a mysterious aspect of the video: at the end, the clouds do not move anymore, while the plane still banks and the azimuths keep changing. This is because the azimuth change leads the velocity vector by ~4 deg at this point. At the very end, the velocity vector points directly at the object.
The clouds never stop moving, they just slow down. There's an illusion of stopping at 33.3 seconds when the camera moves right after the jolt to the left.

I think there's a significant risk of confusion here if you use terms like the "velocity vector" when there's some uncertainty as to what the velocity vector is. Are you referring to 0° Az (the on-screen number), which I think is actually the boresight vector?

@Edward Current has done simulation with the cloud movement at the end accounted for by the leftwards motion of the target becoming dominant.
 
The problem here is we don't know for sure what the Cue Dot actually is. We are not even 100% sure about the Az angle at the top of the screen, although I'm pretty sure it's relative to boresight, at least in air-to-air.

But then what would the cue dot be? I extracted the angle relative to the center of the screen, and got the grey line here, with the orange line being Az (top of screen number, 54L to 7R)

View attachment 49617

The blue line is the bank angle (the angle of the plane's wings relative to horizontal). The noisy light orange line is a zoomed-in look at the frame-to-frame difference in the cue angle, with the black line being a moving average (it's not noisy at the end because I had to keyframe it over the UI numbers).

Grey (cue dot angle) lags slightly behind Az for the first half, then leads it. There's some noticeable correlation with bank angle changes.

I think we know; this is specified in the ATFLIR manual, and as @markus noted it makes sense. Azimuth is relative to the ground track; the cue angle is relative to the plane's boresight. I don't think it is a coincidence that when this dot is at 0 deg, the cloud motion almost comes to a stop. The fighter is perfectly in alignment with the object, and we don't see parallax motion anymore.

The fact that the cue angle lags, then leads, the Az is not a detail but rather a very important aspect. I think it reflects how the angle between the two objects' directions evolves in time, which is a great clue for constraining the trajectory of Gimbal. See my schematic above for this effect.

When the angles are equal, the objects travel in the same direction, hence their ground tracks are influenced by the wind in the same way, which means azimuth = cue dot angle. So here I think the lead-lag relationship reflects the following: 1) the plane's direction is closing on the object's direction, 2) they align around the middle of the video, 3) the plane passes beyond the "direction-alignment" point.

Which is exactly what happens in the scenario of an object inside a 10 Nm radius (before the LoS intersection point) that stops mid-air, making the cloud motion almost stop as the fighter is pointing at a fixed object in front of its boresight.

On the other hand, this new information does not align at all with a distant plane beyond the LoS intersection. In that scenario, we would see a gradual decrease in the mismatch between the Az and the cue dot angle as the fighter closes on the plane, until it is aligned behind it. You may argue wind shear could have an impact, but I would imagine it's secondary compared to each object's direction of flight relative to the wind. It's not like the object is super distant or very far below, where the wind might be quite different.

It would be another damn coincidence that this new data aligns so well with what the pilots have described. At this point these are not coincidences anymore, and I am more and more convinced they gave a very accurate report of the event.

I know this is a big problem for the glare hypothesis, so I don't expect much support on this. Happy to reconsider if you can point out where this interpretation is wrong.
 
I think we know; this is specified in the ATFLIR manual, and as @markus noted it makes sense. Azimuth is relative to the ground track; the cue angle is relative to the plane's boresight.
Isn't that the opposite of what Markus said?

emphasis mine

Now, to me, this never made any sense, at least for the Air-to-Air mode we're concerned with. It's adding a bunch of workload for the pilot who'll have to work out wind directions and speeds at whatever their altitude is in order to figure out where to go in order to find a target... that's probably subjected to the same wind he is anyway, so simply pointing the airplane towards him would be a much more expedient way to do it. So my assumption was that the manual was using imprecise language, and that the ATFLIR azimuths were referenced with respect to the horizontal projection of the aircraft velocity vector. This certainly makes more sense than it being the ground track, but if you think about it, ATFLIR is not just a thermal telescope, it's a component of a weapon system. The function of a weapon is to shoot things, in this case, missiles. If you want to shoot something like an AIM-9X at a target, you want to know how far off the aircraft boresight the target is to know if the missile can hit them or not. Furthermore, in an aircraft with integrated sensor systems, it'd be silly to have the FLIR pod and the radar work off different reference angles, and the radar azimuths are no doubt zeroed with respect to the boresight. So maybe the manual was right, but incomplete: perhaps the azimuths are referenced to the ground track when in air-to-ground mode, but to the boresight in air-to-air mode.

The DCS F/A-18C manual says:
External Quote:
FOV Azimuth/Elevation. These fields indicate the ATFLIR field-of-view's angle away from boresight. In the image, the ATFLIR is pointing 1° right of boresight and 12° below boresight.
2022-02-15_21-24-54.jpg


Notably though, this is in Air-to-Ground mode. No difference is noted for Air-to-Air.

I know this is a big problem for the glare hypothesis, so I don't expect much support on this.
I don't think it's right (I think the on-screen Az is relative to boresight), but it's also not a big problem for the glare hypothesis. Using the cue dot angle as the target heading in the sim still has the glare-driven track within 4° of the two-axis track. Even if the curve fit using the ground-track-relative Az is a bit more janky, it still works, and the other three main observables (the lack of rotation in the first 20 seconds even though the plane and the horizon bank, the bumps before rotation, and the full-screen light patterns rotating in sync with the glare) are incredibly difficult to explain if it's not a glare.
 
So we've been working off some Navy documents that detail how ATFLIR is operated. In particular, A1-F18AC-746-100 states

External Quote:
Azimuth readout is the ATFLIR pointing angle left or right of aircraft ground track.
Now, to me, this never made any sense, at least for the Air-to-Air mode we're concerned with. It's adding a bunch of workload for the pilot who'll have to work out wind directions and speeds at whatever their altitude is in order to figure out where to go in order to find a target... that's probably subjected to the same wind he is anyway, so simply pointing the airplane towards him would be a much more expedient way to do it. So my assumption was that the manual was using imprecise language, and that the ATFLIR azimuths were referenced with respect to the horizontal projection of the aircraft velocity vector. This certainly makes more sense than it being the ground track, but if you think about it, ATFLIR is not just a thermal telescope, it's a component of a weapon system. The function of a weapon is to shoot things, in this case, missiles. If you want to shoot something like an AIM-9X at a target, you want to know how far off the aircraft boresight the target is to know if the missile can hit them or not. Furthermore, in an aircraft with integrated sensor systems, it'd be silly to have the FLIR pod and the radar work off different reference angles, and the radar azimuths are no doubt zeroed with respect to the boresight. So maybe the manual was right, but incomplete: perhaps the azimuths are referenced to the ground track when in air-to-ground mode, but to the boresight in air-to-air mode.

I was referring to this comment by Markus. But yes, it is unclear which is which. Even if it were the other way around, with the cue dot relative to the ground track and the azimuth relative to the boresight, the divergence between them is what I want to point out, because I think it may tell us something very interesting. It would be weird for the cue dot to be relative to the ground track, though, as it is more of a direct quick visual of where the target is.
 
The DCS F18 manual also says:

Situational Awareness Cue. This diamond moves left or right from center to indicate that the pod has a left or right azimuth offset from boresight. The diamond moves up or down to indicate that the pod has an up or down elevation offset from boresight. When boresighted, the diamond is centered laterally and close to the top of the display. The extreme edges of the display roughly correspond to the slew limits of the pod. The diamond is centered vertically on the screen when the pod is pointed straight down.

So if they are both relative to the boresight, how can they be different? Or is the cue just not very precise?
 