Some Refinements to the Gimbal Sim

So you "apply that value" and "Use those figures" in some unspecified way and get a panorama that is clearly wrong?

This is not helpful.

Could the panorama stitching be based on the wrong projection? The curving up of the presumed-level clouds is the exact opposite of what happens when you take a flat rectilinear world and map it into a sphere or cylinder. So could these frames in reality come from a sphere (or, given the elevation changes are small compared to the azimuth span, pretty close to a cone), and they're being flattened as if they were cylindrical?

Or in glorious clunkovision:
bent_but_straigh.png

where the red-bordered slice shows the blue part bending up, yet in reality it's perfectly level. So the panorama might be "right", we're just viewing it wrong?
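To make the projection point concrete, here is a toy sketch (all numbers invented for illustration, nothing taken from the Gimbal data) of what a perfectly straight, level line does when flattened into an azimuth/elevation panorama strip: its ends creep toward the horizon, so a level feature does not sit at one height across the strip.

```python
import numpy as np

# Hypothetical geometry: a straight, level cloud edge at constant height
# z0 below the camera, constant perpendicular distance d0, spanning
# left to right. All numbers are made up for illustration.
z0, d0 = -1.0, 10.0
x = np.linspace(-10.0, 10.0, 5)          # lateral positions along the line
r = np.hypot(x, d0)                      # horizontal range to each point
az = np.degrees(np.arctan2(x, d0))       # azimuth of each point
el = np.degrees(np.arctan2(z0, r))       # elevation (negative = below level)

# Flatten into a cylindrical panorama: x-axis = azimuth, y-axis = tan(el).
y_pano = np.tan(np.radians(el))
for a, y in zip(az, y_pano):
    print(f"az {a:6.1f}  pano height {y:7.4f}")
```

The center of the line sits lowest in the strip and the ends bend up toward the horizon, even though the line is perfectly level in 3D, so which projection the stitcher assumes directly shapes how "level" the result looks.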
 
Stitching after leveling the image makes sense (up is up).

Now if the background is slanted, you're not gonna stitch the frames by aligning them along a horizontal line. You have to follow the background to account for the elevation change.

Imagine you're doing this for GoFast. You're going to level the frames, then stitch the frames to the left by progressively going down. Not putting them aligned next to each other. Same here.
 
Now it's not gonna give a nice 3D view of the encounter due to how perspective changes with a moving camera, on a turning plane, progressively looking down. It's not a nice panorama from a fixed point of view.

But I start visualizing how the scene happened and why the cloud layer looks different at the beginning versus end.

@Zaine M. the problem with these stitched panoramas is that we are trying to project a 3D scene with a moving point of view onto a 2D plane, so they will always look weird. However I think doing it after getting things level as you do ("up is up") is the least bad way of doing it (more natural). And what's missing is how the perspective would actually change as the camera gets closer to the thing.

I try to illustrate this with an edit of the stitched panorama:
export(2).png


We are looking sideways at first, with the perspective changing as the plane moves. There is more depth in the clouds at the beginning, and as the plane gets closer to the object the elevation goes down (or the scene rises here), revealing closer clouds, which explains why they look more vertical, bumpy, and less distant at the end versus the beginning.
EDIT: clarify a sentence
 
Just adding that Mick's Sitrec for GoFast doesn't have a straight-line path for the ocean that would match or justify placing the frames in line next to each other.
Screenshot (3953).png
 
Plane bank, and camera tilt due to the plane's pitch.
so you're using your own formula to guide the stitching, which means the resulting shape is a direct consequence of that formula, right?
It's not evidence of anything, then, it just visualizes what your formula does, i.e. curve the clouds. Do I have that right?
 
so you're using your own formula to guide the stitching, which means the resulting shape is a direct consequence of that formula, right?
I wouldn't say my formula; it's accounting for what we know is rotating the image:

1. There's bank incorporated.

This suggests that the discrepancy is caused by the pitch-up attitude of the aircraft, which tilts the wing-mounted camera sideways more and more as the camera turns to the side.

2. There's how much the camera is tilted.

There is still a mismatch between the cloud line and the artificial horizon.

It's not evidence of anything, then, it just visualizes what your formula does, i.e. curve the clouds. Do I have that right?
It's the video in the correct up-is-up position.

Happy to change, but I see background angular motion; I don't understand where this "it's not background angular motion, the camera needs more rotation" comes from.
 
Put another way @Mendel, even accounting for the camera tilt you pointed out, the figures are wrong, because there needs to be something else that isn't being accounted for to have the clouds move through level.

That thing is claimed to be software rotation because the clouds are level.

Are we starting from "the clouds are level" because that looks right?

Using GoFast as an example: if I said "GoFast is going fast because that's how it looks", every person on Metabunk would, deservedly, jump on me and point out that you can't say that, because when we use the data, it isn't.

So how do we know that the angle the clouds move through isn't angular motion, but the camera needing more rotation?
 
So how do we know that the angle the clouds move through isn't angular motion, but the camera needing more rotation?
because angular motion isn't a thing

the apparent motion of the clouds in the footage is caused by the camera moving and panning. the cloud layer does not move upward at that speed.
 
the apparent motion of the clouds in the footage is caused by the camera moving and panning. the cloud layer does not move upward at that speed.

Can you explain how you came to that conclusion, and what did I miss in the rotation calculation?

because angular motion isn't a thing

Are you able to elaborate on that also? Because I don't understand how the Mosul footage isn't angular motion either.

line.jpg
 
the apparent motion of the clouds in the footage is caused by the camera moving and panning. the cloud layer does not move upward at that speed.

The ocean never shoots upward toward the sky, but that's how it looks in GoFast. Why would perspective be absent from Gimbal, with everything 2-D and flat? Based on what?
 
Can you explain how you came to that conclusion, and what did I miss in the rotation calculation?



Are you able to elaborate on that also? Because I don't understand how the Mosul footage isn't angular motion either.

View attachment 87036
please google "angular motion", it's defined differently from how you are using it here, i.e. it relates to rotation. You are using it as "moves at an angle to the horizontal".

please correct me if I'm wrong: what happens is that you rotated the gimbal video frame by frame according to your formula, which you advertise as a "refinement" over Mick's formula. You expect the resultant apparent cloud motion to be horizontal, but there is a vertical component to the motion when the camera is not looking straight ahead. (The tilt of the motion varies with the bearing of the camera.) Your conclusion is that the scene physics are not understood correctly by us.

Our conclusion is that the physics and maths are not understood correctly by you, and that the existing formula is better than yours.
 
you rotated the gimbal video frame by frame according to your formula, which you advertise as a "refinement" over Mick's formula.
There is no formula. @Zaine M. simply levels up the frames by removing bank and the effect of estimated pitch (3.6°).

Mick/LBF have a formula to go one step further (the topic of this thread), based on what they think the pod does as far as derotation, under the assumption that the last visible cloud line is at one and the same elevation angle throughout the video (to 0.05° precision).
 
Mick/LBF have a formula to go one step further (the topic of this thread), based on what they think the pod does as far as derotation, under the assumption that the last visible cloud line is at one and the same elevation angle throughout the video (to 0.05° precision).
It's not based on the "visible cloud line" (which you can't really determine with any great confidence). It's based on the motion of the clouds (i.e., the motion of the background).

It also works for GoFast, adding it in (a few days ago) corrected the difference between the sim and the video.
 
You assume the background should move exactly horizontally throughout the video (after being leveled), and that any deviation is dero doing its thing. Which is equivalent to saying that the cloud layer is perfectly aligned with the horizon.

Questionable when looking at the clouds in their entirety.

Precisely, the "sideways" distance seems to change. Do you feel like the cloud layer at the very beginning (top picture, to the right) looks the same as the cloud layer at the end (bottom picture, to the left), regardless of white-hot vs black-hot? We see a flat and distant-looking layer first, with some perception of depth, versus what gradually becomes a more vertical-looking and bumpy layer of clouds at the end.

What makes you feel sure this is a consistent marker for elevation angle, to the hundredth of a degree?

Screenshot from 2025-12-10 08-44-57.png
Screenshot from 2025-12-10 08-45-08.png



Even more questionable when looking at your "dero without roll" claim for the first 20 sec.
Can you show us the section where dero without pod roll rotates both the clouds and the thing CW?

If thing = glare:
#1 Dero with roll -> the thing rotates relative to the clouds (the final step rotation with your step-rotating pod).
#2 Dero without roll (which you say happens in the first 20 sec) -> the thing and the clouds both rotate (by ~7°, you say).

Where is #2 here?


Source: https://www.youtube.com/watch?v=-KVvebg4cXc
 
There is no formula. @Zaine M. simply levels up the frames by removing bank and the effect of estimated pitch (3.6°).

Mick/LBF have a formula to go one step further (the topic of this thread), based on what they think the pod does as far as derotation, under the assumption that the last visible cloud line is at one and the same elevation angle throughout the video (to 0.05° precision).

Does he not use a formula to remove bank and the effect of estimated pitch? I'm still having trouble reproducing that stitched-together image from post 124 in this thread.
 
Does he not use a formula to remove bank and the effect of estimated pitch? I'm still having trouble reproducing that stitched-together image.

Removing bank is not a formula; it's just measuring an angle and leveling the image accordingly (or, in other words, rotating the image until the artificial horizon is flat).

Retrieving how pitch would tilt the camera is just simple trig, the same as Mick/LBF.

Post #73:
From that we can now calculate how much the camera is tilted with the formula - =DEGREES(ATAN(TAN(RADIANS(I3)) * SIN(RADIANS(A3))))
I3 is the plane's pitch calculated per frame, and A3 is the plane's azimuth per frame
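For anyone trying to reproduce this, the quoted spreadsheet formula transcribes directly to Python. The 3.6° pitch is the estimate mentioned in the thread; the 54° azimuth below is just an illustrative value.

```python
import math

def camera_tilt_deg(pitch_deg, azimuth_deg):
    """Transcription of the quoted spreadsheet formula:
    =DEGREES(ATAN(TAN(RADIANS(I3)) * SIN(RADIANS(A3))))
    where I3 is the plane's pitch and A3 its azimuth for that frame."""
    return math.degrees(math.atan(
        math.tan(math.radians(pitch_deg)) * math.sin(math.radians(azimuth_deg))))

# pitch 3.6 deg, camera 54 deg off the nose (illustrative inputs)
print(camera_tilt_deg(3.6, 54.0))
```

Note the sanity checks built into the formula: at 0° azimuth the tilt is zero, and at 90° off the nose the full pitch shows up as tilt.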

Clouds are slanted after removing bank, and even more after removing the effect of plane pitch (like in Mick's sim, positive pitch has the same effect).

This is hardly a formula, more of a method. And again, it's the same as Mick/LBF before they introduce the influence of dero, which they don't know and instead guess, based on their intuition of what it should do.

 
Once the frames are leveled, they are stitched together, keeping the same orientation, to recreate the panorama.

I think I address your point here.
Now it's not gonna give a nice 3D view of the encounter due to how perspective changes with a moving camera, on a turning plane, progressively looking down. It's not a nice panorama from a fixed point of view.

But I start visualizing how the scene happened and why the cloud layer looks different at the beginning versus end.

What this suggests is that there actually was a change in the elevation angle of the camera greater than 0.05°.
 
Once the frames are leveled, the frames are stitched together to recreate the panorama. Like any other attempt to create a panorama.
What do you use as a base point to stitch together these frames? Is it the horizon or the cloud cover?

When I use the horizon to try to stitch together the frames it doesn't look anything like what you all have produced. Am I doing it wrong?
 
I'll let @Zaine M. reply because he's done it.

The idea is you keep all frames "leveled" (after removing the effect of bank and pitch), and stitch them together to have a continuous cloud layer.
So basically the artificial horizon remains level (minus a small effect of pitch that disappears as Az gradually decreases).
And keeping the cloud cover (the scene) continuous indicates diagonal motion, i.e. a change in elevation angle.

Better illustrated in @Zaine M. 's video:


Source: https://www.youtube.com/watch?v=6dXHWFAXB3Q
 
Okay, then this is more a question for Zaine: is that the only orientation that keeps the image level, or are there multiple possibilities that keep the stitch more flat? How many different stitches have you made that 'fit'?
 
The idea is you keep all frames "leveled" (after removing the effect of bank and pitch), and stitch them together to have a continuous cloud layer.
So basically the artificial horizon remains level (minus a small effect of pitch that disappears as Az gradually decreases).
And keeping the cloud cover (the scene) continuous indicates diagonal motion, i.e. a change in elevation angle.
But as I raised earlier, the resulting image is clearly wrong. It seems more like an artifact of the dero method Zaine chose, plus naive stitching.

@FatPhil raised some issues with panorama stitching earlier, but I think it's even more complicated than that. The camera is moving along a curve towards the clouds while panning to keep the target centered. You can't expect to get a meaningful 2D image that somehow represents the view from the camera, because there are multiple different views.

This whole thing seems a bit like a misunderstanding, but a rather complicated one.
 
There is no formula. @Zaine M. simply levels up the frames by removing bank and the effect of estimated pitch (3.6°).

Mick/LBF have a formula to go one step further (topic of this thread), based on what they think the pod does as far as derotation. Under the assumption that the last visible cloud line is at a same and unique elevation angle throughout the video (at 0.05° precision).
but you don't have all of the motion accounted for with that
imagine the camera platform flies straight and level, no bank, no pitch
(a) at 0° the camera looks straight ahead; the cloud surface below moves down in the image, yes?
(b) at 90° the camera looks left; the cloud surface moves right-to-left, right?
at any camera angle in between, cloud surface movement must appear diagonal, because it shifts from downward to sideways as the camera turns

in Gimbal, this gets added to the parallax from the platform and the target moving
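The (a)/(b) argument above can be put in numbers with a toy model. The speed, altitude, and distance below are invented, and the turn, the target's own motion, and parallax are all ignored; the point is only that the apparent track of a cloud point tilts continuously from vertical at 0° to horizontal at 90°.

```python
import math

def flow_angle_deg(theta_deg, v=120.0, h=3000.0, d=20000.0):
    """Direction of a cloud point's apparent motion, measured from
    horizontal (90 = straight down the image), for a camera on a platform
    in straight-and-level flight at speed v, height h above the point,
    horizontal distance d, camera azimuth theta off the nose.
    All parameter values are illustrative, not Gimbal data."""
    t = math.radians(theta_deg)
    az_rate = v * math.sin(t) / d                      # bearing drift (rad/s)
    el_rate = -h * v * math.cos(t) / (d * d + h * h)   # point sinks in the image
    return math.degrees(math.atan2(-el_rate, az_rate))

for theta in (0, 30, 60, 90):
    print(theta, round(flow_angle_deg(theta), 1))
```

Dead ahead the motion is purely downward, at 90° purely sideways, and at every azimuth in between it is diagonal, with the tilt varying as the camera slews.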
 
But as I raised earlier, the resulting image is clearly wrong. It seems more like an artifact of the dero method Zaine chose, plus naive stitching.

@FatPhil raised some issues with panorama stitching earlier, but I think it's even more complicated than that. The camera is moving along a curve towards the clouds while panning to keep the target centered. You can't expect to get a meaningful 2D image that somehow represents the view from the camera, because there are multiple different views.

This whole thing seems a bit like a misunderstanding, but a rather complicated one.

This applies to thinking the cloud layer is a perfect marker of the horizon when it's clearly changing perspective.

Now it's not gonna give a nice 3D view of the encounter due to how perspective changes with a moving camera, on a turning plane, progressively looking down. It's not a nice panorama from a fixed point of view.

But I start visualizing how the scene happened and why the cloud layer looks different at the beginning versus end.
1765562458227.png

An attempt at visualizing how the scene looked, given the perspective change. Just to illustrate the concept.


We are looking sideways at first, with the perspective changing as the plane moves. There is more depth in the clouds at the beginning, and as the plane gets closer to the object the elevation goes down (or the scene rises here), revealing closer clouds, which explains why they look more vertical, bumpy, and less distant at the end versus the beginning.
 
I think it's exactly what @Zaine M. is pointing at here. Why expect this to come from a complicated dero algorithm, if it is a natural effect?

Because the complicated dero algorithm is a better fit for the actual video.

When I re-create Zaine's process I can get a hundred different angles for descent (including no descent), and I'm still not sure how he chose the one he did as the best fit.
 
We are looking sideways at first, with the perspective changing as the plane moves. There is more depth in the clouds at the beginning, and as the plane gets closer to the object the elevation goes down (or the scene rises here), revealing closer clouds, which explains why they look more vertical, bumpy, and less distant at the end versus the beginning.
And how far away do you think those clouds are? In Sitrec I have them starting about 70 miles away, with the cloud horizon (the blue line) calculated at about 120 miles.

I also have them as a wide, fairly flat layer.

2025-12-12_10-20-53.jpg
 
Because the complicated dero algorithm is a better fit for the actual video.
It's also not actually complicated at all. It's just a matter of calculating exactly how much the horizon gets rotated by pod roll/pitch/yaw (pod horizon), and a function for what it should be rotated to instead (human horizon), to match what pilots might see out the window, or what a Wescam style pod would see if looking at the same target. You rotate the pod horizon to the human horizon and you're done. There's a misunderstanding that stems from the fact that the Gimbal sim needs to do a more complicated reverse operation, but the derotation function that Raytheon would need to implement is still very simple for them.
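A minimal sketch of that "how much the horizon gets rotated" calculation, under axis conventions I am assuming here (x forward, y right, z up; the real ATFLIR geometry may differ in signs and extra terms, so treat this as an illustration of the idea, not Raytheon's implementation). With bank and elevation at zero it reduces exactly to the atan(tan(pitch)·sin(az)) camera-tilt formula quoted earlier in the thread; derotation to the "human horizon" would then rotate the image by minus this angle.

```python
import numpy as np

def Rx(a):  # rotation about the forward (roll) axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(a):  # rotation about the right (pitch) axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(a):  # rotation about the up (yaw) axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def horizon_roll_deg(pitch_deg, bank_deg, az_deg, el_deg):
    """Apparent roll of the world horizon in the camera image, for a camera
    pointed az/el off a platform with the given pitch and bank.
    Sign conventions are assumed, not taken from ATFLIR documentation."""
    p, b, az, el = np.radians([pitch_deg, bank_deg, az_deg, el_deg])
    # camera-to-world rotation: platform pitch and bank, then gimbal az/el
    R = Ry(-p) @ Rx(b) @ Rz(-az) @ Ry(-el)
    u = R.T @ np.array([0.0, 0.0, 1.0])  # world "up" seen in camera axes
    return np.degrees(np.arctan2(u[1], u[2]))

# bank alone rolls the horizon one-for-one when looking straight ahead
print(horizon_roll_deg(0.0, 25.0, 0.0, 0.0))
```

Once you have this angle per frame, "pod horizon to human horizon" is a single image rotation, which is why the forward operation is simple even though the sim has to run it in reverse.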
 
test the prediction from your step-rotating pod that the object and clouds move CW in concert in the first 20 sec.
All I can tell you is that those numbers I pointed to were not predictions. They were not eyeballed subjectively from some unverifiable video. They're measurements. I linked to the full source code for them and all of the results. There were three different numbers for how much the clouds rotate, how much the target rotates, and how much the plane banks. The model was checked against them and it fits.
 
For completeness,

I took both methods of de-rotation (placing the footage in the up-is-up position) and tested them side by side to see the results.


Source: https://www.youtube.com/watch?v=WFhi_Kq-WEk


That is that video

What it demonstrates is that the formula that was used originally, but has since been tweaked, manipulates the footage into an unrealistic result.

We should be seeing variations in how the background moves through the scene, like in this example, where the angle of the background changes based on elevation, azimuth, etc.


Source: https://www.youtube.com/watch?v=PT8_O8-piPw


So we can rule out the formula method as the correct one, as it doesn't change (I tested Mick's new method, TWICE, and it varies by only two degrees, so it is still unrealistic).

First test -
Source: https://www.youtube.com/watch?v=deuuQC2ljKc


Second test -
Source: https://www.youtube.com/watch?v=_sfeVg4FH-M


Put another way, we tested the claims that were made, and evaluated them on the basis of the results they gave.

[edit - clarity]
 
We don't need some full source code
You do actually. There is a bunch of noise in the data. The angle fluctuates. I don't know whether you picked snapshots which were at a minimum or a maximum in the neighborhood of those frames. There's no accounting for the discontinuity between B/H and W/H: the target's angle suddenly jumps when switching if you don't correct for that. This thing needs to be measured, very very carefully, not eyeballed.
And I'm not even sure whether we're talking about the same thing when you say "object has not rotated CW like the clouds". Again, they're three different numbers. The object doesn't follow either the clouds or the bank angle. The total rotation of the clouds includes the contribution from the dero, which also rotates the object.
 
I'm starting to understand why I didn't see any formulas posted and why I can't reproduce the results. There is no systematized process; it's just eyeballing it.
 
The total rotation of the clouds includes the contribution from the dero, which also rotates the object.

You say dero is responsible for realigning the clouds. Where do you see the dero rotating the object in the same way? This just doesn't happen. The object doesn't move in the leveled video; if anything, it rotates slightly the other way.

If an observation contradicts a theory/model, the model needs to be changed. Time to do it. Or take a step back and reconsider some of the very early assumptions, like "the cloud layer is a perfect indicator of the horizon and the elevation angle of the camera".
 
You say dero is responsible for realigning the clouds. Where do you see the dero rotating the object in the same way? This just doesn't happen. The object doesn't move in the leveled video; if anything, it rotates slightly the other way.

If an observation contradicts a theory/model, the model needs to be changed. Time to do it. Or take a step back and reconsider some of the very early assumptions, like "the cloud layer is a perfect indicator of the horizon and the elevation angle of the camera".
Your observations are based on assumptions that may or may not be true, and that you definitely haven't proven to be true. Zaine should scrap this model and try to rebuild it from the ground up using the same process, and I want to see if he gets the same results. Because you can align these images in hundreds of ways that make sense physically, given all the different factors of both the camera and the platform moving while recording.

I would suggest scrapping it, running the numbers again using whatever process you used, and seeing if you get the same results. Because right now your argument hinges on how you stitch together these frames to create a 2-D panorama representation of a three-dimensional scene. I'm not sure why you think it matches better, other than "I looked at it and I thought it matched better." Is that enough proof to overturn the old hypothesis? I'm not so sure.
 