GIMBAL Video: Simulating the ATFLIR Tracking and Gimbal Rotation

Let me know what you think of this 3D version of the Gimbal video.
Interesting. Is the movement of the glare image linked to the pod though? Seems like the pod rotation is more continuous.

I've just started playing with Blender again, trying to get some virtual versions of my backyard experiments.
 
Interesting. Is the movement of the glare image linked to the pod though? Seems like the pod rotation is more continuous.

So for this first version I have manually keyframed the pod to look two degrees below the horizon and along the heading angle given in the FLIR video.

I found it pretty interesting how, given just those two inputs, the rotation of the pod lens starts to line up with the video. From here we can add in more variables that could account for the difference between the FLIR video and the 3D model. Some things I can think of that would change the way the gimbal moves are the alignment of the pod to the aircraft, turbulence in the pitch and yaw axes, and the true angle of attack of the aircraft.
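If anyone wants to reproduce that step, the keyframing amounts to something like this in Blender's Python API (the object name and heading values here are placeholders, not the exact numbers from my file):

```python
import bpy
import math

# Hypothetical names/values: "Pod" stands in for the rotating pod head in
# my scene, and the (frame, heading) pairs stand in for readings off the
# video's on-screen display.
pod = bpy.data.objects["Pod"]
pod.rotation_mode = 'XYZ'

elevation = math.radians(-2.0)                       # 2 degrees below the horizon
heading_keys = [(1, 54.0), (300, 30.0), (600, 6.0)]  # placeholder values

for frame, heading_deg in heading_keys:
    # Z carries the heading (azimuth), X the fixed 2-degree depression.
    pod.rotation_euler = (elevation, 0.0, math.radians(-heading_deg))
    pod.keyframe_insert(data_path="rotation_euler", frame=frame)
```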

What I think this demonstrates, at the very least, is how much the rotation rate of the outer lens changes as you approach 0 degrees.
 
What I think this demonstrates, at the very least, is how much the rotation rate of the outer lens changes as you approach 0 degrees.

The system tries to minimize turns of the outer gimbals, as they are heavy, vibration-inducing, and inaccurate. So most of the time it relies on the inner gimbaled mirrors to point the line of sight. It can't avoid some turns close to 0°, but the algorithm it uses to decide when to do this is unknown.
 
The system tries to minimize turns of the outer gimbals, as they are heavy, vibration-inducing, and inaccurate. So most of the time it relies on the inner gimbaled mirrors to point the line of sight. It can't avoid some turns close to 0°, but the algorithm it uses to decide when to do this is unknown.

This behavior actually became pretty clear when playing around in 3D. Here's the output of my tests. I think it shows where the pod is moving the outer gimbals and where it's using internal mirrors. Right now I have the pod automatically tracking an object set to follow the heading shown in the video. The "Simulated Lens Rotation" is the actual change in rotation of the gimbal lens about its normal (the axis facing the object), so that's how the glare would rotate in this model.

[Attached graph: Simulated Lens Rotation vs Observed Object Rotation]
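For anyone curious how that curve can be computed: take the lens's "up" vector, project it into the plane perpendicular to the line of sight, and measure its angle against a horizon-stable reference. This is my reconstruction of the plotted quantity, not necessarily the exact expression driving the graph:

```python
import numpy as np

def lens_rotation_about_los(lens_up, los, world_up=np.array([0.0, 0.0, 1.0])):
    """Signed roll of the lens about the line of sight, measured against a
    horizon-stable reference (my reconstruction of the plotted quantity)."""
    los = los / np.linalg.norm(los)
    # Horizon-stable "up" reference, perpendicular to the line of sight
    ref = world_up - np.dot(world_up, los) * los
    ref /= np.linalg.norm(ref)
    # Lens "up" projected into the plane perpendicular to the line of sight
    up = lens_up - np.dot(lens_up, los) * los
    up /= np.linalg.norm(up)
    # Signed angle from ref to up, about the line of sight
    return np.degrees(np.arctan2(np.dot(np.cross(ref, up), los), np.dot(ref, up)))

# Example: looking along +Y with the lens rolled so its "up" points along +X
print(lens_rotation_about_los(np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])))  # 90.0
```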
 
Here's a render from the simulated gimbal camera's point of view.

The animated elements in this video include:
- The relative position of the object from the aircraft, using the heading value in the video.
- The roll attitude of the aircraft, approximated from the original video.
- The heading of the aircraft, manually keyframed to (attempt to) match the original video.

The rest is rendered as-is. The glare object is an actual light-emitting (emissive) object, and the background is a static skybox.

With this setup we may be able to determine the turn rate of the aircraft, and with that its actual movement, using the onscreen airspeed. This particular render should not be used for analyzing the turn rate, as I have yet to determine the correct field of view for the camera. It's currently set to 6 degrees, and the jet is turning at a rate of about 2 degrees per second, which I took as the standard turn rate from an F/A-18 NATOPS manual for the indicated bank angle at the indicated true airspeed.
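As a sanity check on that number: a coordinated turn's rate is g·tan(bank)/TAS, which lands right around 2 degrees per second for a roughly 30° bank at roughly 350 knots true airspeed (both approximate readings on my part, not exact values from the display):

```python
import math

g = 9.81          # m/s^2
bank_deg = 30     # approximate bank angle shown in the video
tas_kt = 350      # rough true airspeed at altitude (placeholder)

tas_ms = tas_kt * 0.5144                                 # knots -> m/s
omega = g * math.tan(math.radians(bank_deg)) / tas_ms    # rad/s, coordinated turn
print(math.degrees(omega))  # ~1.8 deg/s, consistent with ~2 deg/s from NATOPS
```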

Overall the simulation is proving to be a pretty good resource for working with and cross checking the variables at play in the gimbal video.

I welcome all observations and inputs as I keep working on this. I personally feel all signs point to a glare, but this seems to be a good way to provide concrete data/visualizations that support the hypothesis.


Source: https://www.youtube.com/watch?v=WpgmXTL9VL0
 
This is great! Do you plan to do a side-by-side with an overhead view?

Maybe combine four views: the ATFLIR POV, a focus on the ATFLIR pod rotating, a follow-cam behind the jet, and an overhead view, with some kind of FOV volume indicated.

Also, maybe a simulated HD VIS mode, showing the ATFLIR's POV but with an actual jet model at the target position, and then that overlaid with the simulated IR view.
 
This is great! Do you plan to do a side-by-side with an overhead view?

I've been trying to figure out how to best use this sim to explain the Lens/Glare rotation without getting into anything else. We may be able to show the jet's movement but that's not what's important here.

I think at this point the glare theory is proven beyond a reasonable doubt. But how do we communicate this idea? It's incredibly important to get this right because the real data clearly contradicts the professionals that are being used as witnesses in this case.

I actually believe Fravor saw something. I think he's a good pilot and squad leader. But how do we show people that what he's being led to repeat is inaccurate? I'm just conflicted on how to approach this problem.

------------------------------------
PS
I just need to scrub the Blender file of my name so I can post it. After that I'll put it out there for anyone to use and pick apart. It's not like I need absolute privacy but people seem to get very offended when you throw out alternate ideas.
 
Sorry if I missed something, but how are you actually simulating the glare and the derotation mechanism in your video? I'm interested in how you did that.

It would be interesting to have a "pilot view" camera as well as the pod-view camera with glare.
 
Sorry if I missed something, but how are you actually simulating the glare and the derotation mechanism in your video? I'm interested in how you did that.

It would be interesting to have a "pilot view" camera as well as the pod-view camera with glare.

I have a virtual camera in the targeting pod following an object that emits light. The targeting camera and pod are set to track that "Target" in the same exact way the ATFLIR pod would.

The glare is an effect added directly to the pod camera in post, flaring up the bright pixels exactly as you would see in a normal camera. As with all real-world physical glare artifacts, that post effect follows the orientation of the glass lens of the ATFLIR pod. Finally, the entire video is rotated to match the horizon, as it does in the original "UFO" video.
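Put another way, the on-screen rotation of the glare falls out of just two angles. A trivial sketch of my formulation (illustrative, not code lifted from the .blend file):

```python
def onscreen_glare_angle(lens_roll_deg, horizon_roll_deg):
    """Rotation of the glare in the final, derotated image (illustrative).

    The glare is locked to the pod's outer lens, while the displayed image
    is derotated so the horizon stays level. On screen, the glare therefore
    appears to rotate by however much the lens rolls relative to the horizon.
    """
    return lens_roll_deg - horizon_roll_deg

# e.g. the lens rolls 90 degrees while the horizon (bank) accounts for only
# 30 of those: the glare visibly rotates 60 degrees on the display.
print(onscreen_glare_angle(90, 30))  # -> 60
```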

The more I describe this effect the more I understand why people aren't just "getting it"... It's a geometry problem and there's a bunch going on at once.
 
Are you keyframing the camera to simulate the gimbal restriction as it crossed 0 degrees? I tried to do something similar for GIMBAL in Blender but my skills were lacking. Very interested in playing with your Blender file; I did some Go Fast simulations back when we were demonstrating that the parallax effect was what was causing the apparent motion in that video.

I wonder if it's possible to use Cycles and have the glare actually be created by an object in front of the camera.
 
No, the gimbal isn't keyframed. I have it using constraints to follow the object automatically. The outer gimbal points as best it can about the aircraft's roll axis, and the inner lens then points as best it can on its own axis. Both work together to track the object, which is keyframed to follow the heading given in the original video.
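In Blender terms it's just two tracking constraints stacked together, something like this (object names and axis choices here are illustrative; the real ones depend on how the pod model is oriented):

```python
import bpy

objs = bpy.data.objects
# Hypothetical object names for the rig pieces:
outer = objs["OuterGimbal"]   # rotates only about the pod's roll axis
inner = objs["InnerLens"]     # child of the outer gimbal
target = objs["Target"]       # keyframed to follow the heading in the video

# Outer gimbal: track the target, but locked so it can only roll.
lock = outer.constraints.new('LOCKED_TRACK')
lock.target = target
lock.track_axis = 'TRACK_Y'   # depends on how the rig is oriented
lock.lock_axis = 'LOCK_Z'     # Z standing in for the pod's roll axis

# Inner lens: free to point at the target within the outer gimbal's frame.
track = inner.constraints.new('TRACK_TO')
track.target = target
track.track_axis = 'TRACK_Y'
track.up_axis = 'UP_Z'
```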

I don't know if Cycles creates glares automatically. I was looking at options, and the fastest solution was just doing it through Blender's compositor. The glare follows the lens, so you can just plop it on there before the derotation and you'll get the right output. Camera "up" always equals glare "up", so as long as your order of operations is correct you can present the effect without actually simulating it in the render.
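For anyone who wants to reproduce the compositor setup, a minimal version looks something like this. The node settings are illustrative; the only thing that matters is the order, glare before derotation:

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
nodes = scene.node_tree.nodes
links = scene.node_tree.links
nodes.clear()

rl = nodes.new("CompositorNodeRLayers")        # raw render, still in the lens frame
glare = nodes.new("CompositorNodeGlare")       # flares up the bright pixels
glare.glare_type = 'STREAKS'                   # illustrative setting
derotate = nodes.new("CompositorNodeRotate")   # horizon-leveling derotation
out = nodes.new("CompositorNodeComposite")

# Order matters: glare first (locked to the lens), derotation second.
links.new(rl.outputs["Image"], glare.inputs["Image"])
links.new(glare.outputs["Image"], derotate.inputs["Image"])
links.new(derotate.outputs["Image"], out.inputs["Image"])
# The Rotate node's angle would then be keyframed to the derotation angle.
```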

But yeah, Blender is a mess until you know what you're doing. Even then ... It's a lot. Been messing with that program for over 15 years now...
 
I'm just thinking that if we were to do a breakdown video, using the glare effect might be seen as "cheating".
 
The more I describe this effect the more I understand why people aren't just "getting it"... It's a geometry problem and there's a bunch going on at once.
Yeah, it's perfectly understandable why people don't get it. It's very unintuitive. Even with graphics programming and 3D modelling/rendering experience, it can make my head hurt following the various transforms.
 
You almost need to start by showing how the ATFLIR solves the imaging problem (presenting a stable, tracked, zoomed image of a moving target, filmed from a moving viewpoint, on the MFD) so all the reasons why the system is designed like that are apparent. Mick has done that in part with some of his videos; maybe a 3D recreation using F/A-18 models etc. would help.
 
I'm just thinking that if we were to do a breakdown video, using the glare effect might be seen as "cheating".

Yeah, I'm constantly thinking about how to make sure this comes across as transparent and fair, since a 3D simulation could be dismissed altogether if it seems like I just made a video to look like the Gimbal video, rather than plugging in numbers and getting the results we're seeing here.

I looked at using Cycles to physically simulate a glare but didn't have any luck. I'm sure it can be done, but that's out of my wheelhouse. I think what's most important there is explaining how the 3D process follows the same physical rules, even if it's not a raytraced facsimile.
 
You almost need to start by showing how the ATFLIR solves the imaging problem (presenting a stable, tracked, zoomed image of a moving target, filmed from a moving viewpoint, on the MFD) so all the reasons why the system is designed like that are apparent. Mick has done that in part with some of his videos; maybe a 3D recreation using F/A-18 models etc. would help.

I think you're right on this. One of the things that threw me off while working on this was why the glare rotates counterclockwise. I even had a moment where I thought "oh we might be totally wrong". It took a few minutes but I unwrapped my brain.

So yeah it's probably going to need to be a very efficient A-Z explanation including why the video is presented to the pilots in the way that it is.
 
Great job Vizee. Very nice model.

Does the model align with the actual video in terms of amount of rotation observed?

Your model seems to predict an almost full 180° rotation in a single continuous movement, but that is not what we see.

Is the yellow line in your graph below taken from the observed motion in the video? (Does it account for the aircraft varying its bank angle slightly?) Why does it differ from the red line (prediction) by almost 50%?

Thanks!

This behavior actually became pretty clear when playing around in 3D. Here's the output of my tests. I think it shows where the pod is moving the outer gimbals and where it's using internal mirrors. Right now I have the pod automatically tracking an object set to follow the heading shown in the video. The "Simulated Lens Rotation" is the actual change in rotation of the gimbal lens about its normal (the axis facing the object), so that's how the glare would rotate in this model.

[Attached graph: Simulated Lens Rotation vs Observed Object Rotation]
 
Great job Vizee. Very nice model.

Does the model align with the actual video in terms of amount of rotation observed?

Your model seems to predict an almost full 180° rotation in a single continuous movement, but that is not what we see.

Is the yellow line in your graph below taken from the observed motion in the video? (Does it account for the aircraft varying its bank angle slightly?) Why does it differ from the red line (prediction) by almost 50%?

Thanks!

The yellow line is the amount of rotation of the outer lens about the axis pointing at the object. If the camera were fixed to the outer lens, that's how much the background/horizon would rotate over the course of the video.

We're seeing a difference because my simulation behaves as if the outer gimbal (the parts we can see moving) is used to track the object directly. In reality, the actual targeting pod has a more precise internal mirror that can look up/down and left/right within the field of view of the outer gimbal's window. So what we see in that graph, especially at the end of the video, is the outer gimbal moving in distinct steps to give that mirror a line of sight to the target while limiting the actual amount of time those heavy, vibration-inducing motors are running.
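To make that concrete, here's a toy version of the stepping behavior. The real ATFLIR control law isn't public, so the thresholds here are made up; it just shows why a stepped outer gimbal lags and then jumps compared to my continuous prediction:

```python
def step_outer_gimbal(outer_angle, target_angle, mirror_range=5.0, step=4.0):
    """Toy model of the stepping described above (the real control law is
    not public). The inner mirror absorbs pointing error up to
    `mirror_range` degrees; only when the target drifts outside that range
    does the heavy outer gimbal slew, in a discrete `step`, to bring the
    target back into the mirror's field of regard."""
    error = target_angle - outer_angle
    if abs(error) > mirror_range:
        outer_angle += step if error > 0 else -step
    return outer_angle

# Sweep the target angle and watch the outer gimbal move in steps while
# the inner mirror handles everything in between.
outer = 0.0
for target in range(0, 21):
    outer = step_outer_gimbal(outer, float(target))
    print(target, outer)
```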
 
The yellow line is the amount of rotation of the outer lens about the axis pointing at the object. If the camera were fixed to the outer lens, that's how much the background/horizon would rotate over the course of the video.

We're seeing a difference because my simulation behaves as if the outer gimbal (the parts we can see moving) is used to track the object directly. In reality, the actual targeting pod has a more precise internal mirror that can look up/down and left/right within the field of view of the outer gimbal's window. So what we see in that graph, especially at the end of the video, is the outer gimbal moving in distinct steps to give that mirror a line of sight to the target while limiting the actual amount of time those heavy, vibration-inducing motors are running.

I see. What sources is this theory based on? Do we have proof that this is what actually happens on ATFLIR, or is it just an educated guess?

Thanks
 
I see. What sources is this theory based on? Do we have proof that this is what actually happens on ATFLIR, or is it just an educated guess?
It's described in the patents that you saw in this thread:
https://www.metabunk.org/threads/gimbal-lock-and-derotation-in-flir-atflir-systems.10792/

And specifically where I answered what looks like the same question from you:

You mean within the system there are mirrors that "point" the camera without moving the external window?

Yes. This patent describes them as "coelostat mirrors" and also discusses not wanting to use the main roll axis:
https://patents.google.com/patent/US9121758

Conventional airborne sensor systems generally have the ability to maintain a desired pointing direction as the aircraft rolls and changes forward direction in azimuth. However, conventional systems generally cope poorly with significant changes in aircraft pitch. One approach to compensating for aircraft pitch uses the roll axis. However, as illustrated schematically in FIGS. 1A and 1B, there is typically significant hardware, including the complete afocal telescope 110, mounted on the roll axis. As a result, compensating for aircraft pitch by rotating the roll axis may require significant power to move the large associated mass, and also is not fast (or agile) and may not be particularly accurate. The problem is particularly challenging in the case of a multi-function airborne sensor, such as that discussed in U.S. PG Publication No. 2012/0292482, where alignment and pointing accuracy must be maintained for several different optical sub-systems performing different functions.

Aspects and embodiments are directed to an optical configuration for an airborne sensor that allows for agile compensation of platform pitch while also maintaining all the functionality and advantages of the multi-function airborne sensor disclosed in U.S. PG Publication No. 2012/0292482. In particular, aspects and embodiments include a dual coelostat airborne sensor configuration that enables level horizon pointing when the platform is pitched at large angles. Referring to FIGS. 2A and 2B, a gimbaled optical portion 310 a of an airborne sensor system according to one embodiment includes afocal foreoptics 110 optically coupled to a fold mirror 210, a first coelostat mirror 220, and a second coelostat mirror 230. The first coelostat mirror 220 corresponds to the coelostat mirror 120 discussed above as used in a similar system. The afocal foreoptics 110, fold mirror 210, and first and second coelostat mirrors 220, 230 are mounted on a roll gimbal that rotates about an outermost roll axis 242 (first gimbal axis) that is generally parallel to the beam of electromagnetic radiation output by the afocal foreoptics 110. The first coelostat mirror 220 rotates around a first rotation axis 244 (second gimbal axis) that is parallel to the beam of radiation 250 a reflected by the first coelostat mirror 220, and perpendicular to the roll axis 242, as discussed above. The second coelostat mirror 230 rotates around a second rotation axis 248 (third gimbal axis) that is parallel to the beam of radiation 250 b reflected by the second coelostat mirror 230 and substantially perpendicular to the first rotation axis 244. This rotation of the second coelostat mirror 230 is used to compensate for pitching motion of the platform, thereby allowing the line of sight of the system to be maintained in a desired direction (determined by rotation of the first coelostat mirror 220 to a desired angle) even as the platform pitches over a relatively large angular range, as discussed further below. Rotation of the first coelostat mirror 220 about a fourth gimbal axis 246 is used to compensate for a gimbal singularity, as also discussed above and further below.


The latter, basically, because patents are both not easy to read and often describe various possible embodiments.
 