Calculating and visualizing Gimbal angles.

It's a valid confusion though. Here I'm showing it from the POV of a camera fixed to the jet. It might also be useful to have a view where the target stays fixed.
This is now optional: uncheck "Lock Camera to Jet" (which is on by default).

I've also made a few minor tweaks, and added the ability to drag the timeline across the graph, which makes it easier to look at particular regions:
https://metabunk.org/gimbal/
 
I've also added another thing that's quite revelatory: I made the pod roll track the glare angle. So now, by default, it's not locked onto the "ideal" curve (minus bank angle). The pod pitch angle still comes from there, but the pod roll is now controlled entirely by data from the video.

[Attachment 49260]

This is very interesting, as you can see the corrections in real time. When the pod roll is not changing and the plane is not banking, all the pod can do is change pitch, meaning it follows the blue lines (the green dot shows where the pod is physically pointing). So its natural path is to rise (look at the blue lines).
Indeed. Another thing that's interesting and very apparent in this view is that the system always allows approximately the same amount of angular excursion away from the ideal track before issuing a correction.
To keep on target it needs a counter-clockwise rotation. For the first 22 seconds, this is largely supplied by the jet banking. The jet makes a CCW bank (i.e. a bank more to the left) right at the start. This makes the green dot go down, removing the need for the pod itself to roll. The green dot rises again, but the jet makes two more increases in left bank, at 9 and 19 seconds. None of these rotate the glare.
I have a suspicion that the pilot himself accidentally set it up that way. He was conducting an intercept, so he was probably trying to make the glare look stationary in his display to keep a steady trajectory as he formed up on it... until, of course, the thing rotated too far to follow with the type of small corrections that would have been automatic for someone used to formation flying.
Step 1 is all pod roll.
Step 2 overlaps another left bank, which helps, meaning the rotation is a little smaller.
Step 3, at about 28.5 seconds, overlaps a right bank, which means the pod has to rotate even more to the left (CCW) to compensate for the jet rotating the "wrong" way. That's partly why this rotation is so much larger than the others (but mostly it's because it's close to the singularity; see how steep the white curve is).
Step 4 is also affected a bit by the CW rotation of the jet, but essentially it's just following the white curve in steps.

The video ends with an error of about 2.2°, almost certainly just before another roll correction which would have taken the glare around 30° past vertical (i.e. it would be upside down).
People in UFO circles have often speculated that what happens after the cut at the end of the video is the object taking off at high speed. Since the previous large correction almost resulted in a loss of track, amusingly, that might well have happened!
This is all fascinating confirmation of the rotating glare hypothesis, but the challenge of explaining it remains.
Yeah, explaining much of this will be difficult, with a danger of overwhelming the viewer with too much stuff. But I think it's getting close, especially with the little inset you seem to be working on now (maybe to make a mockup of the video). Probably one key thing there is to make it very clear that the ideal track comes from the numbers on the screen: that you could be given only those and predict almost exactly the observed amount of rotation. The little pip indicating target position in the ATFLIR should help explain that: it matches exactly the position of the target in the animation, after all.

One possible point of doubt to be anticipated is the issue of the internal mirrors, which are guaranteed to be accused of being a fudge (again). I think that's where highlighting when the glare doesn't rotate with the bank (e.g. 19s) is most important.

I just thought of something and I don't know if it's a good idea, but the relationship between the ATFLIR pod head and its internal mirrors is like that between your head and your eyes. If you're tracking an airplane flying overhead, your head will do exactly the same kind of movements we're calculating here, except of course your eyes can swivel a lot more than 5 degrees. But still, the ATFLIR moves almost like a person lying on the wing would. Maybe there's a decent demonstration to be done by looking at an airplane through a paper towel tube or something.

While this is all complicated, it has a virtue compared with some of the other pieces of evidence that directly support the glare hypothesis. Those are no doubt persuasive when looked at objectively (they sure persuaded me), but they can be (and have been) ignored by not seeing them, or pretending not to: "I don't see any bumps matching the rotation", "I don't see any light artifacts rotating together with the object", and so on. This cannot be ignored that way. The main risk is the impression that the viewer is being dazzled with math, so it's great there are interactive visual aids :) The thing those might need the most right now is a background to help make the space more tangible as the user drags the pod around. To me it's very clear as is, but I've been thinking about the problem for some time.

Come to think of it, the evidence seems to align in a natural story: first you have indications that whatever the thing is, it's something in the camera. Then you notice that it looks just like proven examples of glare, and see that its orientation over time matches just what you would expect from the tracking information. Then you reconstruct the F-18 trajectory and see with the aid of cloud parallax that the object is probably some 30+ Nm away, moving right to left and away from the F-18... just as, beautifully self-consistently, you'd expect from a jet engine. I don't know how much of it can be covered in a video of reasonable length, though. That's probably the hard problem in all of this.
 
I have a suspicion that the pilot himself accidentally set it up that way. He was conducting an intercept, so he was probably trying to make the glare look stationary in his display to keep a steady trajectory as he formed up on it... until, of course, the thing rotated too far to follow with the type of small corrections that would have been automatic for someone used to formation flying.
The transformation from azimuth/elevation to roll/pitch already predicts that for large azimuths the roll change will be small, and that small corrections may be done with the internal mirrors. Taking your own script and data:

[Attached plot: gimbal_roll.png]

(where positive azimuth indicates "left". I would have used negative angles for left, but I kept all your numbers. The only correction here was adding an offset to glare angle to fit all lines.)
Even with a constant bank, the roll change is small when we are at an azimuth > 20 degrees. Seems like small corrections in the mirrors would be enough before the rolling is needed.
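To make that concrete, here's a minimal sketch of the transformation (my own JS reconstruction, keeping the positive-azimuth-is-left convention; the sign conventions and function name are mine, not the actual script's):

Code:
// Line-of-sight in the aircraft frame: x forward, y left (positive
// azimuth = left, as above), z up.
function podRollPitch(azDeg, elDeg, bankDeg) {
    const d2r = Math.PI / 180;
    const az = azDeg * d2r, el = elDeg * d2r;
    const x = Math.cos(el) * Math.cos(az);
    const y = Math.cos(el) * Math.sin(az);
    const z = Math.sin(el);
    // Roll the pod about the forward axis until the target lies in its
    // pitch plane, then pitch to it. The jet's bank already supplies part
    // of that roll, so subtract it.
    const roll = Math.atan2(y, z) / d2r - bankDeg;
    const pitch = Math.acos(x) / d2r;
    return { roll, pitch };
}

// With the target ~2° below the horizon, the roll barely moves at high
// azimuth and swings hard near zero:
console.log(podRollPitch(30, -2, 0).roll); // ≈ 94.0°
console.log(podRollPitch(20, -2, 0).roll); // ≈ 95.8° (≈1.8° over 10° of azimuth)
console.log(podRollPitch(2, -2, 0).roll);  // ≈ 135° (steep near the singularity)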


I just thought of something and I don't know if it's a good idea, but the relationship between the ATFLIR pod head and its internal mirrors is like that between your head and your eyes. If you're tracking an airplane flying overhead, your head will do exactly the same kind of movements we're calculating here, except of course your eyes can swivel a lot more than 5 degrees. But still, the ATFLIR moves almost like a person lying on the wing would. Maybe there's a decent demonstration to be done by looking at an airplane through a paper towel tube or something.
I have been thinking it shouldn't be difficult to make a setup using a smartphone in a (roll/pitch) gimbal, and just recreate a tracking from side to side with a small tilt. Track a light that creates a glare, or just put a mark on the camera window. Then, when you de-rotate the resulting video, the glare or mark will rotate, while the background remains level.
 
(where positive azimuth indicates "left". I would have used negative angles for left, but I kept all your numbers.
I chose positive azimuth as left because counterclockwise is the positive sense; this keeps coordinate transformations in their usual right-handed forms with the z axis nice and upright.
Even with a constant bank, the roll change is small when we are at an azimuth > 20 degrees. Seems like small corrections in the mirrors would be enough before the rolling is needed.
That's probably fair.
I have been thinking it shouldn't be difficult to make a setup using a smartphone in a (roll/pitch) gimbal, and just recreate a tracking from side to side with a small tilt. Track a light that creates a glare, or just put a mark on the camera window. Then, when you de-rotate the resulting video, the glare or mark will rotate, while the background remains level.
That might work. To be truly persuasive, though, the orientation over time would have to match that of the video. Would that be easy to do? I also don't know if it's worth worrying about the people who'll immediately pooh-pooh the whole thing because surely the fundamental physical principles at work in a multi-million dollar pod can't possibly be the same as those in a $500 camera phone, right?
 
That might work. To be truly persuasive, though, the orientation over time would have to match that of the video. Would that be easy to do?
I was thinking of something easy, manual... low cost. Just a scan from one side to the other, resembling the actual roll of the pod. A constant bank can be added, I guess. The problem is that doing it manually depends on having a steady hand to get a nice result. (Maybe frame by frame, animation style? o_O )

Then show how the video needs to be de-rotated to maintain the correct orientation. But, in contrast, artifacts created by the front window will appear to rotate. Those are the fundamentals of why this UAP rotates, regardless of the price of the full system. It's not the camera. It's the tracking system.
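To spell the mechanism out in code (a toy sketch with made-up names and a zero reference angle, purely illustrative):

Code:
// The artifact (glare or mark) is fixed to the pod head, so in the raw
// sensor frame it sits at a roughly constant angle. De-rotating the image
// by the pod roll, to keep the horizon level, is exactly what makes the
// artifact appear to rotate by that same amount in the final video.
function apparentArtifactAngle(podRollDeg, artifactSensorAngleDeg = 0) {
    return artifactSensorAngleDeg - podRollDeg;
}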

I also don't know if it's worth worrying about the people who'll immediately pooh-pooh the whole thing because surely the fundamental physical principles at work in a multi-million dollar pod can't possibly be the same as those in a $500 camera phone, right?
Sure, I'm already counting on that kind of handwaving. Actually, my smartphone is even cheaper than that :P
 
Given that one of the most common objections I see is that the pilots say "it's rotating" in the audio, the video almost certainly needs to cover the FOV of the camera, i.e. what the camera sees versus what a person sees. A slight complication with this is that the IR glare likely increases the apparent size of the object in the frame over what it would look like in TV mode.

I had wondered if it could be physically recreated with a zoom lens camera on a motorised astro tracking mount mounted on its side, with the output fed to an external display. But it seems like a complicated setup for something that is likely to be dismissed with a scoff about "million dollar state of the art infallible military hardware". Also, astro tracking mounts are designed for accuracy and may not have the speed to give a good demonstration.

But then there are things like this available now, though they might do too much of what we don't want in order to recreate the ATFLIR:


Source: https://www.youtube.com/watch?v=JtlI3xLVNxs
 
But it seems like a complicated setup for something that is likely to be dismissed with a scoff about "million dollar state of the art infallible military hardware"
A smartphone has orders of magnitude more computing power than a Space Shuttle onboard computer. Technology trickles down.

Also, the smartphone setup is cheapified by:
- not having a zoom to 0.35° FOV
- not having to stabilize that zoom
- not doing automated tracking
- not having IR mode
- not having a HUD
- not being hardened to operate at high-altitude pressures and temperatures
- not having adjustable mirrors
- not having undergone reliability/durability testing
 
A smartphone has orders of magnitude more computing power than a Space Shuttle onboard computer. Technology trickles down.

Also, the smartphone setup is cheapified by:
- not having a zoom to 0.35° FOV
- not having to stabilize that zoom
- not doing automated tracking
- not having IR mode
- not having a HUD
- not being hardened to operate at high-altitude pressures and temperatures
- not having adjustable mirrors
- not having undergone reliability/durability testing
The problem is that the issue is seemingly caused by some of the things a smartphone doesn't do, hence we need to pair it with a gimbal/extra kit to emulate an ATFLIR; but camera gimbals are not designed for the exact same purpose as the ATFLIR gimbal/dero.

The issue with trying to provide 'smoking gun' evidence for these types of advanced debunks is that you generally have to recreate the same thing seen in the video. I've done it twice as well, by going to the same place on Google Earth.

With phones/normal cameras/webcams we have easy access to the same tech the person used; with mil tech like this we are SOL. For the Chilean one we got lucky because we had locations/dates/times and the object was right there on ADS-B.

The Navy ATFLIR videos are unique: there are hardly any full-screen real ATFLIR videos out there for comparison, the tech involved is impossible to acquire, and the information available is sparse and likely older than the videos. The developers of the DCS F/A-18 sim actually were able to learn things they didn't know from the 3 Navy videos, which shows how unique they are.

So demonstrating this complex theory is very hard, because you have to prove it from first principles.
 
The problem is that the issue is seemingly caused by some of the things a smartphone doesn't do, hence we need to pair it with a gimbal/extra kit to emulate an ATFLIR; but camera gimbals are not designed for the exact same purpose as the ATFLIR gimbal/dero.

I remember trying to use a 3D lab robot to recreate ship motion for my thesis, attaching a sensor to it and measuring that. Recreating motion with something that isn't made for recreating those kinds of motions is shit hard. So I would back up that statement... it sounds easy at first, but using hardware to emulate gimbal motion may turn into a lot of work.

Generally:
For the scripts I would use quaternion rotations instead of Euler angles. They're easier to use and can deal with singularities (free libraries for C/MATLAB should be available somewhere on the web). If you want to dig more into the topic it may be worth the effort.
 
I remember trying to use a 3D lab robot to recreate ship motion for my thesis, attaching a sensor to it and measuring that. Recreating motion with something that isn't made for recreating those kinds of motions is shit hard. So I would back up that statement... it sounds easy at first, but using hardware to emulate gimbal motion may turn into a lot of work.

Generally:
For the scripts I would use quaternion rotations instead of Euler angles. They're easier to use and can deal with singularities (free libraries for C/MATLAB should be available somewhere on the web). If you want to dig more into the topic it may be worth the effort.
The thing is, the tracking algorithm operates based on the physical constraints of the mechanism, which is more complicated than we can recreate.

We may even be able to recreate a gimballed tracking camera system that doesn't show the same flaw as we see in GIMBAL, because the flaw in GIMBAL is the result of the algorithm being designed to prevent a worse flaw in a more common scenario that we are unaware of.

Ultimately, gimbal lock is a known potential issue with all 3-axis gimbal systems, so it's up to the designers to work out the most common lock situations and program avoidance algorithms. This is probably something Mick has covered/will cover in his new video, if he has time with all the other things he's gonna have to go over.
 
Yeah, I hear you. For the analysis I've been using the raw data as much as possible, but it does pay to show something nicer. Smoothing this guy is a bit difficult, though, because of the age-old problem that if you discard high-frequency information you also lose edges and structure that you want to keep.
I've tried a different approach: manual keyframes. I tried to keep it as visually accurate as possible frame to frame.




There are only 30 keyframes, and I linearly interpolate them (as After Effects does) after loading.
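For anyone wanting to reproduce it, the interpolation is nothing fancier than this (a sketch; the keyframe values here are made up, not the actual 30):

Code:
const keyframes = [               // [time in seconds, glare angle in degrees]
    [0.0, 0], [2.5, 3], [6.0, 5], // ... 30 entries in the real data
];

function glareAngleAt(t) {
    if (t <= keyframes[0][0]) return keyframes[0][1];
    for (let i = 1; i < keyframes.length; i++) {
        if (t <= keyframes[i][0]) {
            const [t0, a0] = keyframes[i - 1];
            const [t1, a1] = keyframes[i];
            // Plain linear interpolation between the bracketing keyframes,
            // matching After Effects' default behavior.
            return a0 + (a1 - a0) * (t - t0) / (t1 - t0);
        }
    }
    return keyframes[keyframes.length - 1][1]; // hold the last value
}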
 


IMO in some places the keyframed angles don't provide as good a fit as the smoothed track, so I gave it some small tweaks. I also fixed a small problem with the tracking algorithm that was responsible for that small step around 13s.

[Attached plot: smoothed_glangle_s.png]

And here's a comparison of the original and new keyframed angles:
[Attached plot: kfd_glangles.png]

Some videos showing the difference in tracking:
Original kf:

New kf:


Tracked (smoothed):
 


I remember trying to use a 3D lab robot to recreate ship motion for my thesis, attaching a sensor to it and measuring that. Recreating motion with something that isn't made for recreating those kinds of motions is shit hard. So I would back up that statement... it sounds easy at first, but using hardware to emulate gimbal motion may turn into a lot of work.

Generally:
For the scripts I would use quaternion rotations instead of Euler angles. They're easier to use and can deal with singularities (free libraries for C/MATLAB should be available somewhere on the web). If you want to dig more into the topic it may be worth the effort.
I can think of a few reasons why quaternions may not be the best fit for this problem, but probably the two most important ones are:

1. The Euler angles have physical relevance. The bank angle gives you the rate of turn (see the worked relation after point 2), and the pitch angle is whatever it needs to be to keep the airplane at constant altitude. We have a hope of estimating the pitch from simulators, for instance, but I have no idea what quaternion I would use for the rotation, and I would end up converting from Euler angles anyway. And, of course, the singularity in the ATFLIR tracking system is the root cause of the rotation of the glare, so for our own purposes here it's not some undesirable artifact.

2. Quaternions are a rather specialized topic. I personally only encountered them once (in computer graphics class). You'd think that, e.g., because the imaginary quaternions satisfy the same algebra as the Pauli matrices they'd find ample application in physics, but AFAICT nobody really bothered with them much after it was figured out how to write Maxwell's equations in vector calculus notation. In a video aimed at a general audience (possibly most of which aren't even comfortable with complex numbers) quaternions would need a separate explanation, but the audience can be expected to understand at least intuitively how to rotate something about an axis.
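(Re: point 1, the bank-to-turn-rate link is the standard coordinated-turn relation, turn rate = g·tan(bank)/v. As a sanity check with hypothetical numbers, say 240 knots (~123 m/s) and 25° of bank: 9.81 × tan(25°) / 123 ≈ 0.037 rad/s ≈ 2.1°/s, a slow, sustained turn.)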
 
And, of course, the singularity in the ATFLIR tracking system is the root cause of the rotation of the glare, so for our own purposes here it's not some undesirable artifact.
Agreed, and as the paper "APPLICATION OF NONLINEAR GENERALIZED MINIMUM VARIANCE TO THE NADIR PROBLEM IN 2-AXIS GIMBAL POINTING & STABILIZATION" says:

External Quote:
"Sufficient FoR is ensured by a 2-axis gimbal device with one gimbal rotating over the azimuth axis and the other over the elevation. A significant problem exist with this configuration however when the target moves such that the line-of-sight (LoS) vector approaches the azimuth axis, at around -90 degrees in elevation, the system loses one degree of freedom and is therefore unable to maintain accurate track. Often this degree-of-freedom loss is known as one of the following interchangeable terms – "gimbal lock", "keyhole singularity" or "Nadir cone", although we shall use the latter for the remainder of this paper. Practically, the engagement kinematics driving tracking in the neighbourhood of the nadir require significant agility from the outer, azimuth gimbal axis, to the limit that LoS vector tracking through the singularity when the LoS and azimuth axis are collinear results in infinite acceleration & rate demands to the OG axis. Obviously, the acceleration demands within this range are high enough to saturate the servos, leading to large tracking errors that heavily impair the precision of the system
...
alternative transformations, like the quaternion, cannot be employed as the problem itself is a result of the turret's physical attributes rather than just an implication of the modeling kinematics."
Quaternions are a rather specialized topic. I personally only encountered them once (in computer graphics class).


Quaternions are used extensively under the hood of 3D game engines, so I'm familiar with them from game programming, particularly for "slerping" (spherical linear interpolation). They are even used in the engine I'm using here. My code that draws the 5° circle uses the Three.js function applyAxisAngle, which rotates a point around an axis. It is implemented in three.js as:
Code:
applyAxisAngle( axis, angle ) {
    // Build a quaternion representing a rotation of `angle` radians about
    // the (unit) vector `axis`, then apply it to this vector.
    return this.applyQuaternion( _quaternion$4.setFromAxisAngle( axis, angle ) );
}

This essentially avoids the singularity issues of using Euler angles but, as noted, it does nothing about the actual physical issue, so it's no help here.
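For reference, here's roughly how that function gets used for something like the 5° circle (a hypothetical sketch; the variable names are mine, not from the actual drawing code):

Code:
import * as THREE from 'three';

const d2r = Math.PI / 180;
const axis = new THREE.Vector3(1, 0, 0); // line-of-sight direction (unit vector)
const rim = new THREE.Vector3(Math.cos(5 * d2r), 0, Math.sin(5 * d2r)); // 5° off-axis
const points = [];
for (let i = 0; i < 64; i++) {
    // Sweep the off-axis point around the line of sight to trace the circle.
    points.push(rim.clone().applyAxisAngle(axis, (i / 64) * 2 * Math.PI));
}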
 


IMO in some places the keyframed angles don't provide as good a fit as the smoothed track, so I gave it some small tweaks. I also fixed a small problem with the tracking algorithm that was responsible for that small step around 13s.
That's great, thanks. Having it overlay the extracted angle as you did is A) more accurate, and B) important in avoiding accusations of fudging the numbers. I'll incorporate it later.
 
It's fun to think about what happened after the end of the videos, and why the videos end where they do. It's not like the tape ran out. It seems that in the Gimbal case one of the following must have happened: the pilot either continued to pursue the object and made a course correction, or continued banking to the left and abandoned it. If the former, and the aircraft went past 0° again (even minutes later), the 3D animation suggests that the pod would flip back, and so would the glare rotation.

If the pilot flew exactly straight at the object, obviously the pod would not go haywire. Any idea what kind of angular window the aircraft would have to stray out of for the pod to flip back?

Speculating, maybe the crew saw a second or third rotation and noticed the connection. Someone could have said, "It's just the gimbal dude," and that's the genesis of the video title. (But our director's cut was missing the spoiler ending.)
 
This theory implies that the pilots know it's something mundane and are simply lying (Ryan Graves gave a very precise description of the encounter). Is that plausible? What would be your explanation for it?
 
This theory implies that the pilots know it's something mundane and are simply lying (Ryan Graves gave a very precise description of the encounter). Is that plausible? What would be your explanation for it?
I wouldn't rule that out. Generally though, I'm more interested in physical mechanisms than people's memories and stories.
 
Did we talk about the shape changes of the Gimbal "glare/object" in the main thread? Watching Mick's tool loop the video, the shape changes as the video progresses, with the "diffraction spikes" getting more apparent, which seems consistent with the heat source becoming more head-on and thus glaring more, i.e. fitting the profile of the jet turning in to end up more directly behind a jet engine heading away.
 
@Mick West I see on Twitter that you mention somebody named Paul Bradley, a mechanical engineer who made an analysis of Gimbal. Who is he and what was his analysis? Could you please link to it here?
 
@Mick West I see on Twitter that you mention somebody named Paul Bradley, a mechanical engineer who made an analysis of Gimbal. Who is he and what was his analysis? Could you please link to it here?
He was the source of the graph in Chris Lehto's video.

Source: https://www.youtube.com/watch?v=6PYPtjj01Qs&t=971s

He had a whole presentation on the Gimbal video, arguing the rotating glare hypothesis didn't work because the glare did not rotate enough. He says he got tired of the attention and deleted his Twitter account. I'm not sure what he published and what he just shared with Chris.
 
It's interesting (and encouraging) that Chris Lehto now accepts the conclusion that it was glare. Even alpha check seems to have conceded the point (albeit most begrudgingly).
This theory implies that the pilots know it's something mundane and are simply lying (Ryan Graves gave a very precise description of the encounter). Is that plausible? What would be your explanation for it?
Since the pilots in question have not come forward, we have basically two sources of information about their perception of what happened:
1. Ryan Graves' statements
2. Some off-hand comments by Fravor on the Lex Fridman podcast (IIRC) in which he describes the Gimbal pilots as being somewhat close.

Those would seem to suggest the pilots continued to find the incident strange, though it's definitely hearsay, and the Fravor connection is at best suggestive (he could be aware of their relationship and have assumed they were earnest, rather than having direct knowledge). Graves could've been ordered to lie, for instance.

Personally, I don't find this likely: during an intercept the pilots' most likely course of action would be to pull up alongside the object rather than trying to chase directly behind it, and they seemed to be lining up to form up on the object's left. Continuing the hypothetical, they would've rolled wings level and sped up, the object continuing its stepwise rotation to follow the slowly increasing azimuth as they creep closer. Upon catching up to the object it would again be at some high azimuth, this time to the right, though now it'd be easy to see with the naked eye. Assuming no monkey business on the part of the pilots, we can conclude this never happened. I can think of two reasons why:

1. Insufficient fuel (they assumed the object was some 10 Nm away, but we have reason to believe it was farther than that). Fuel is a known constraint of the Hornet, and the fact that the simulation is a better fit with AoAs on the lower side suggests the jet was light.
2. Loss of track following the crossover, essentially repeating the FLIR1 video.

It's worth asking why we don't get to see the rest of the video. If it was a loss of lock, maybe it was cut because the Navy doesn't want people to see a vulnerability of a system still in active use. Yes, FLIR1 shows the same thing, but it was leaked; Gimbal, in contrast, I believe was released deliberately by Elizondo. Could he be lying? Let's just say I wouldn't be shocked.
 
It's interesting (and encouraging) that Chris Lehto now accepts the conclusion that it was glare. Even alpha check seems to have conceded the point (albeit most begrudgingly).
Sorry if I missed it in this thread, but where does Lehto say that?
 
Wow, never thought I'd see that.

I disagree; he never seemed disingenuous, just a bit too certain that his hasty conclusion from limited inputs was solid. With better inputs, he came up with a better conclusion. Not just that, he credits Mick; that's strength of character. It would be easier to just go quiet and pretend the prior conclusions didn't exist.
 
I think they've concluded that it likely doesn't matter by now: the video has been all over the news looking like a rotating flying saucer with "Navy confirms UFOs" as the banner text for long enough that this debate is irrelevant to the wider PR battle. The horse is long gone.
 
I think they've concluded that it likely doesn't matter by now: the video has been all over the news looking like a rotating flying saucer with "Navy confirms UFOs" as the banner text for long enough that this debate is irrelevant to the wider PR battle. The horse is long gone.

The overall response on r/UFOs is "well, okay, it does not rotate... but still it is a UFO!"
It is rather funny that because Lehto now gives this statement, all of a sudden the whole sub agrees. Like "well, Mick is always wrong of course, but Chris is 'with us', so OK, now we will agree". Again, I see similarities to religious group behavior.
 
It's interesting that I've not even made the new explainer video yet, and already lots of people have accepted that it's glare (with the expected rush to dismiss it as irrelevant)
 
It's interesting that I've not even made the new explainer video yet, and already lots of people have accepted that it's glare (with the expected rush to dismiss it as irrelevant)
You need to find the relevant quotes and TV/documentary appearances of those "big names" who say it is a rotating object and have them in your video. You need to point out exactly who said and stood by the rotating claim, specifically Fravor (if he ever did). Point out who was at TTSA when they made the claims (Elizondo).

The names of people are seemingly all that matter in this charade.

And ultimately even then it won't matter too much, because there are going to be no news broadcasts / 60 Minutes segments about this debunk.

You have to go to the Wayback Machine to get the TTSA pages, as they've all been taken down.

https://web.archive.org/web/20210201200405/https://thevault.tothestarsacademy.com/gimbal

Here's the Gimbal page; note all the "seems to" and other qualified statements about the rotation, the aura, etc.
 
The point I am trying to make is that faithful UFO-believing persons do not listen to arguments unless they are communicated by somebody from "their side". Thus, it does not matter how well we debunk stuff; they will not agree unless Luis, or Chris, or other talking UFO heads bless it.

Like I say, similarities with religion...
 
The overall response on r/UFOs is "well, okay, it does not rotate... but still it is a UFO!"
It is rather funny that because Lehto now gives this statement, all of a sudden the whole sub agrees. Like "well, Mick is always wrong of course, but Chris is 'with us', so OK, now we will agree". Again, I see similarities to religious group behavior.
I see "trust" behaviors. IF laymen don't understand the sciency or mathy explanations, it comes down to .. do you "trust" the messenger giving you the explanation.
 
I see "trust" behaviors. IF laymen don't understand the sciency or mathy explanations, it comes down to .. do you "trust" the messenger giving you the explanation.
Agreed. Trust is also a very dangerous thing (imo), especially in the science/engineering field. That's why I like to compare ufology with religion: both are heavily based on trust, and on believing it must be out of this world (or, in religious terms, above this world).
 
The point I am trying to make is that faithful UFO-believing persons do not listen to arguments unless they are communicated by somebody from "their side". Thus, it does not matter how well we debunk stuff; they will not agree unless Luis, or Chris, or other talking UFO heads bless it.
In this case the arguments are quite complicated, so many people prefer to defer to experts. The selection criteria for experts vary.
 
In this case the arguments are quite complicated, so many people prefer to defer to experts. The selection criteria for experts vary.

I'd say the real experts are the ones who can build a simulation convincing enough to get one of the other side's experts to change his view.
 
I'd say the real experts are the ones who can build a simulation convincing enough to get one of the other side's experts to change his view.
I don't think you cease to be a "real" expert just because you're wrong once, or become one by being right once.

That said, Mick has been right about UFOs many times.
 
Catching up with this, I was looking at Markus' code and wondering what the justification is behind some of the calculations.
When estimating the roll of Gimbal, based on its pitch (AoA) and yaw (bank angles), you first calculate the pitch rotation, then yaw, and retrieve the roll. It seems calculating the yaw first (bank angles), then pitch, makes quite a difference and decreases the fit. What is the justification for doing the calculation in that order? I don't understand this part:

Ah, finally, found the error which had been hurting the accuracy of the reconstruction: I was applying the coordinate transformations in the order bank rotation, then pitch rotation, under the rationale that this is the correct rigid body rotation to get the aircraft in the proper "banked" position at a given angle of attack. But I forgot that to back out the pitch and roll angles for the ATFLIR main axes I have to apply the transformations in reverse, since I was starting from spherical coordinates with zero azimuth aligned with the F-18's velocity vector. With this error corrected the fit becomes excellent. Check it out:
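To make the order-dependence concrete, here's a toy sketch (plain JS, my own illustration, not Markus' actual code). Rotations about different axes don't commute, so pitch-then-bank and bank-then-pitch send the same line-of-sight vector to different places, and the roll/pitch you back out of them differ accordingly:

Code:
const d2r = Math.PI / 180;

function rotX(v, a) { // bank (roll) about the forward x axis
    const [x, y, z] = v, c = Math.cos(a), s = Math.sin(a);
    return [x, c * y - s * z, s * y + c * z];
}
function rotY(v, a) { // pitch about the wing y axis
    const [x, y, z] = v, c = Math.cos(a), s = Math.sin(a);
    return [c * x + s * z, y, -s * x + c * z];
}

const los = [Math.cos(30 * d2r), Math.sin(30 * d2r), 0]; // target 30° off the nose
const pitch = 6 * d2r, bank = 25 * d2r;

console.log(rotX(rotY(los, pitch), bank)); // pitch first, then bank
console.log(rotY(rotX(los, bank), pitch)); // bank first, then pitch
// The two outputs differ, which is why the order of the transformations matters.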

A more general question: how do we know for sure that Markus and Mick's Gimbal model is an accurate representation of how this specific pod would behave when avoiding gimbal lock?
 
Catching up with this, I was looking at Markus' code and wondering what the justification is behind some of the calculations.
When estimating the roll of Gimbal, based on its pitch (AoA) and yaw (bank angles), you first calculate the pitch rotation, then yaw, and retrieve the roll. It seems calculating the yaw first (bank angles), then pitch, makes quite a difference and decreases the fit. What is the justification for doing the calculation in that order? I don't understand this part:



A more general question: how do we know for sure that Markus and Mick's Gimbal model is an accurate representation of how this specific pod would behave when avoiding gimbal lock?
As I understand it, this isn't a simulation of the pod's algorithms; it's a simulation that allows you to plug in the values you get from a video and see the relevant rotations from all angles.

I.e. you can't ask it to track any scenario like an ATFLIR would; you have to put in all the values, and it shows you what is going on.
 