Some Refinements to the Gimbal Sim

@JMartJr, even @Mick West has said there is rotation in the first 20 seconds:


Source: https://x.com/MickWest/status/1691164573074911232


With Twitter user LBF_tweet remarking the same:


Source: https://x.com/lbf_tweet/status/1853613752559579298


I wasn't getting into the mechanics or debate of that, or the glare claim (there is a lot that could be said in that regard). This was more by way of background: to account for the rotation, a computer-generated de-rotation was suggested.

See post #27

https://www.metabunk.org/threads/some-refinements-to-the-gimbal-sim.12590/post-358637

[Edited to include additional reference to rotation ]
 
Last edited:
To @logicbear

I've already made it clear, twice, that this formula is not actually used to account for anything in the Gimbal sim. Therefore any analysis that includes it and tries to infer anything about the Gimbal sim is technically wrong.
What I said was,

Now, position one is the formula (used to account for the rotation from point 1):
Formula: jetPitch = (ObservedCloudAngle - jetRoll*cos(abs(radians(az)))) / sin(abs(radians(az)))

This is used to de-rotate the footage and put it back into the state it was filmed in, i.e. how the camera actually saw the encounter. That is, with the clouds perfectly level.
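For reference, that rearrangement can be sketched in a few lines of JavaScript (variable names are mine; this simply restates the SinCos relation under discussion, not the sim's actual code):

JavaScript:
```javascript
// SinCos relation: observed cloud angle modeled as a mix of jet roll and
// jet pitch, weighted by cos/sin of azimuth. All angles in degrees.
const radians = (deg) => deg * Math.PI / 180;

function observedCloudAngleSinCos(jetPitch, jetRoll, az) {
  return jetRoll * Math.cos(radians(az)) + jetPitch * Math.sin(radians(az));
}

// The quoted rearrangement, solving for jetPitch from a measured cloud angle:
function jetPitchFromCloudAngle(observedCloudAngle, jetRoll, az) {
  return (observedCloudAngle - jetRoll * Math.cos(Math.abs(radians(az))))
       / Math.sin(Math.abs(radians(az)));
}
```

For positive az the two functions round-trip exactly; note the abs() terms mean the sign convention needs care for negative azimuths.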

I made no mention of the sim, only of de-rotation of the footage using that formula, and I have provided examples of how that is incorrect.

If you wish to claim that some other formula, in particular one that does not depend at all on the elevation, can physically represent the horizon, then you need to prove that with more than just words.
Noting I have,
1. provided an example, using GoFast, to demonstrate the result,
2. provided a full breakdown: plane pitch values, the formula for that, an indicative comparison with vectors, a demonstration that the camera is tilted, the formula for the frustum roll calculation, a demonstration of why banking is at 1:1, and visual examples of footage de-rotated with that method.

I really do not understand why you are of the opinion that all I've done is "say words"
 
I really do not understand why you are of the opinion that all I've done is "say words"
The following was just "saying words". Using this formula may or may not happen to be close enough in certain scenarios, but I very much doubt that it is correct in any physical sense.
1. If I am filming with a phone, point it down and pan it up, the orientation to the horizon hasn't changed; same as when I point the camera at 45 degrees left and pan up.
2. Checking in my Excel, I get the same result; I'm not seeing any rotation due to elevation change. And as the FOV orientation is driven by azimuth, can you explain it a different way?
Instead you would need to start from a mathematical model of the horizon angle which does not assume that elevation doesn't matter, e.g. by rotating 3D vectors like my functions do, and then actually test the degree to which the elevation changes the resulting horizon angle. It might still turn out that elevation doesn't change the result *enough* for it to matter in whatever argument you're making, but it's difficult to have any confidence in the rest of the analysis when it starts from a false premise.
 
it's difficult to have any confidence in the rest of the analysis when it starts from a false premise.
Which is exactly my point. Noting the issues I have stated for the SinCos formula: we can account for bank, and we can account for frustum roll (camera tilt due to the plane's pitch), so where are the extra degrees coming from to say that the clouds are passing through level?

Which now brings us to today, where I am simply asking,

1. Where is my methodology incorrect?
2. Where do the additional degrees, to make the clouds pass through level and not at an angle, come from?
Hence, "the clouds are perfectly level to start with" is a false premise.

Where are these extra degrees coming from?

To Mick's reply
The glare rotates. That does not mean the object rotates.

I merely stated there was rotation as an undisputed fact, to give context as to where the claims were coming from.

I dispute that for two reasons.

We are not seeing an object, we are seeing glare from a heat source that obscures whatever is emitting it.

The rotation is not happening out at the object presumably hidden in the glare, it is happening at the camera.

That had been commented on, at which point all I did was provide additional substantiation; I was not making any claims past that.

[edited with reference to the clouds, not horizon]
 
Last edited:
Hence, the clouds are perfectly level to start with, is a false premise.
I still don't know why you keep insisting that it is a premise of the analysis, or what exactly you mean by that. The clouds are obviously tilted, not level, throughout the footage. That's why their angle needed to be measured. But I think you know that.
I made no mention as to the sim, but de-rotation of the footage, using that formula and have provided examples of how that is incorrect,
Getting back to this. I can imagine an infinite number of formulas that are wrong and none of that has any bearing on anything we're discussing. From the beginning of this thread this function was always assumed to be incorrect, and so a different function was used instead. So what's the point of attempting to prove that it is not? You still seem to be implying that this formula is used in some way in some aspect of the Gimbal analysis, when in fact it is not. It is not used to derotate the clouds or anything whatsoever.
 
I still don't know why you keep insisting that it is a premise of the analysis, or what exactly you mean by that.
Trying a different way, using the formula for de-rotation results in the clouds perfectly level start to finish.


Source: https://www.youtube.com/watch?v=Z9O5L2MASOs


Using what I am asserting: measuring angles here, removing plane bank and frustum roll (camera tilt) results in this

fill.jpg


Which now results in measurable elevation changes.

410.jpg


I have laid out the exact methodology I used.

For clarity, I started my posts with this,

as per @Mick West's suggestion, I shall share what I have found in regards to the cloud line, frustum rotation and horizon.

I was seeking feedback on that, because it is dramatically different to what is/has been used. As I mentioned, it now presents a picture more consistent with a camera getting closer to a nearby target (mainly so everyone could understand the implications of what I was saying; I am not trying to "trick" anyone into anything).

With the primary goal of establishing, for 3D recreation work, how the camera is to be orientated during this. (Do I use my method or apply the SinCos formula? You can check with Mick, because that is what I was asking him before I posted here.)

Screenshot (3876).png


I have the camera tilt included for accuracy.

The clouds are obviously tilted, not level, throughout the footage. That's why their angle needed to be measured.

Now you have said the clouds are tilted; tilted in the way I described, with elevation being derived from that tilt?

Looking back, yes, there are a few different points going on (you were asserting a computer de-rotation to account for the rotation that shouldn't be there, as the glass of the pod isn't rotating, etc.), so I am just going to focus on the above part first. (I don't have you blocked on Twitter; feel free to reach out in DM so we can resolve additional items quicker.)

Is my claim more substantiated than the SinCos formula for de-rotating the footage (can we now throw out SinCos)?
 
I sense that there may be some confusion caused by word choice in the current conversation. It may be that not everybody is at home in English as are others? I think it would behoove us all to be VERY exact in word choice going forward here. If we mean that an object rotates, say so, if we do not think the object rotates, but that it only appears to, say that. Similarly, if it is somebody's contention that the clouds are (or should be) level or tilted, please say specifically whether you mean in the Real World or as shown on the screen.

I am not sure, but I suspect some confusion on points like this may be preventing us from communicating clearly with each other.
 
Fair point; allow me to temporarily rephrase, soliciting feedback on it.

Facts NOT in dispute,
1. The object in Gimbal rotates in the first 20 seconds.

1. The thing being targeted, that we are seeing in the Gimbal footage, rotates in the first 20 seconds.

This one has actually stumped me for a rewrite. I didn't intend "object" to mean a physical object, but the object we are seeing, whether physical or artefact: the *thing* we can see.

* I didn't want to get into the glare/not-glare debate (otherwise I would jump into an appropriate thread), but I do want to point out why I am not just calling it "glare": what lbf said in the quote tweet from post #41 was that

"He's just perplexed by a small CW glare rotation in tandem with the background (more than banking) while the pod's not rolling."
The *thing* is rotating when, *allegedly*, the pod isn't.

[Edited to include more context]
 
Last edited:
Just to provide a, hopefully, better context.

The allegation is that there is a computer-generated de-rotation causing rotation of the thing, when it shouldn't have any.

Which is why, as I have repeatedly said, I am trying to work out how a 3D camera will operate and how the SinCos formula integrates into that.

Screenshot (3879).png

*only sharing this as it is what I said*


Which then resulted in me starting off about how de-rotation works with what I have found. We are currently in the process of "what method is used for de-rotation", or, put more plainly, "how do we orientate the camera in 3D recreations?"

I hope that provides more context as to the nature of my posts.
 
Is my claim more substantiated than the SinCos formula for de-rotating the footage (can we now throw out SinCos)?
You can't throw something out that has already been thrown out from the very beginning of the thread. And you don't actually need to go through any of your analysis to see that it needed to be thrown out. It was enough to note that it did not account for elevation and did not produce the same values as a function which does and is more physically based. Your spreadsheets may have other issues since you didn't catch that elevation is needed. That doesn't mean that the function we do use (not SinCos) is necessarily the right answer. Maybe there could be some other function that is also physically based (unlike SinCos) and still matches the data in Gimbal while producing a better fit in other situations. I haven't had time to dive deeper into your other formulas. I wouldn't rule out that further refinement may still be needed. But if you're focusing too much on SinCos it doesn't sound like you're on the right track.
 
Last edited:
You can't throw something out that has already been thrown out from the very beginning of the thread

Well, it's in Sitrec, essentially.

The clouds are perfectly level and the camera is just tilted to match the angle; the exact same thing we get with SinCos.

So I'm not sure where you are going with "it's already been thrown out".

[edited to include essentially]
 
Last edited:
Well, it's in Sitrec.
Again, no, it is not used in Sitrec. It is disabled. It has been thrown out and an option is provided in the sim only for comparison with the old, thrown out values. Anyone who reads the source code can see that. The two functions are not the same. It's not just a floating point noise difference. You can see the difference between them even during Gimbal here (green vs blue) and in some other parts of the parameter space they can even differ by many degrees. Maybe it's similar enough during Gimbal to cause confusion, but it is important to note that they are not identical. Elevation has an effect on one but not the other. One of them is based on actually trying to model a simple camera, while SinCos is not. Arguments against one do not necessarily apply against the other, so it is important not to use the same name for them, even if you believe some argument might apply to both.
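The elevation point can be demonstrated with a small standalone sketch. This is my own condensed reading of the vector approach (y-up convention, signs chosen only for internal consistency), not a copy of the Sitrec source, so treat the exact values as illustrative:

JavaScript:
```javascript
// Compare: SinCos has no elevation input at all, while a vector-based
// horizon angle changes when only the elevation changes.
const rad = d => d * Math.PI / 180;
const deg = r => r * 180 / Math.PI;

// Rodrigues rotation of vector v about a unit axis by angle (radians)
function rotate(v, axis, angle) {
  const [x, y, z] = v, [u, w, k] = axis;
  const c = Math.cos(angle), s = Math.sin(angle);
  const d = (u * x + w * y + k * z) * (1 - c);
  return [
    x * c + (w * z - k * y) * s + u * d,
    y * c + (k * x - u * z) * s + w * d,
    z * c + (u * y - w * x) * s + k * d,
  ];
}

function sinCosHorizon(jetPitch, jetRoll, az) { // note: no 'el' parameter exists
  return jetRoll * Math.cos(rad(az)) + jetPitch * Math.sin(rad(az));
}

function vectorHorizon(jetPitch, jetRoll, az, el) {
  const X = [1, 0, 0], Y = [0, 1, 0], Z = [0, 0, 1];
  // global viewing direction for (el, az); +z forward, az positive to the right
  const dir = [Math.cos(rad(el)) * Math.sin(rad(az)), Math.sin(rad(el)),
               Math.cos(rad(el)) * Math.cos(rad(az))];
  // that direction expressed in the jet's frame (undo pitch, then roll)
  const rel = rotate(rotate(dir, X, rad(jetPitch)), Z, rad(jetRoll));
  const relAz = Math.atan2(rel[0], rel[2]);
  // jet pose in the global frame
  const jetUp = rotate(rotate(Y, Z, -rad(jetRoll)), X, -rad(jetPitch));
  const jetRight = rotate(rotate(X, Z, -rad(jetRoll)), X, -rad(jetPitch));
  // yaw the camera's right vector about jetUp to face the target
  const camH = rotate(jetRight, jetUp, relAz);
  // true horizon: a right-pointing vector perpendicular to the az heading
  const realH = [Math.cos(rad(az)), 0, -Math.sin(rad(az))];
  const dot = camH[0] * realH[0] + camH[1] * realH[1] + camH[2] * realH[2];
  return deg(Math.acos(Math.min(1, Math.max(-1, dot)))); // unsigned angle
}
```

With pitch 0, roll 20° and az -54°, this vector result shifts by a couple of degrees between el = 0° and el = -20°, while SinCos cannot shift because elevation never enters it; at el = 0 the two land close together, which may be why they are easy to conflate during Gimbal.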
 
Having read and re-read what was written, at the start, it appears to come down to


I'm assuming that it's intuitive for pilots to look left/right, up/down, but not to tilt their head, so I try to recreate what the horizon would look like if you had a camera strapped to the jet that can only rotate left/right in the wing plane, or up/down perpendicular to that. First I find how much I need to rotate the camera along those lines to look directly at the object, then I compute the angle between a vector pointing right in the camera plane, and the real horizon given by a vector that is tangential to the global 'az' viewing angle.

Because that is what Mick also quoted.

Screenshot (3880).png


So you are rotating the camera? I want to be clear about this part, because I asked where these extra degrees come into play.

So we have the claim being,

1. A de-rotation mirror that keeps the image in the correct orientation to the camera, no matter the azimuth/elevation.
2. Except when there is a claim the pod isn't rolling, and this is where a second(?) de-rotation mechanism is employed, where the camera/footage is rotated by an amount that is inconsistent with bank and camera tilt, so there are extra degrees included.
3. And all of this is perfectly synced, except for the mismatched degrees mentioned in point 2.

And you were able to rule out that it's just the camera panning down due to elevation how?
 
How did you determine that the pod wasn't affected by elevation change, still kept within a 1-degree range?

Larger/main blue boxes are FOV 0.35.

410.jpg
 
Because I asked


1. Derived from the Gimbal Sitrec, in which the clouds were already levelled? Can you clear that up, @logicbear: did you work forwards, generating a hypothesis and testing it, or backwards, using the levelled clouds to come to a formula that fit? (By your response above, you don't want me to misunderstand what you are saying.)
Did you work backwards off a level-cloud (SinCos) type, or forwards: "hey, we have cloud motion here we can't reconcile, let's see what the options are"?

Two questions, just to be clear,

1. How did you determine that the pod wasn't impacted by elevation changes (which would result in my above stitched-together Gimbal image)?
2. How did you determine that the camera needed rotating, if you weren't starting from a "clouds are perfectly levelled" position?

[edited for clarity]
 
Last edited:
1. A de-rotation mirror that keeps the image in the correct orientation to the camera, no matter the azimuth/elevation.
2. Except when there is a claim the pod isn't rolling, and this is where a second(?) de-rotation mechanism is employed, where the camera/footage is rotated by an amount that is inconsistent with bank and camera tilt, so there are extra degrees included.
3. And all of this is perfectly synced, except for the mismatched degrees mentioned in point 2.
There's no exception for 2. The model effectively has one function for determining how much you need to rotate the image in order for the background to end up at the right angle. Its parameters include podPitch, podRoll, jetPitch, jetRoll as well as extra degrees of freedom for what should instead be podYaw. Sometimes changing az leads to the pod rolling, sometimes roll is avoided and only pod pitch and yaw changes, but that still changes the resulting derotation function a bit. It turns out the latter is enough to account for a few degrees of initial background roll.
But it's still not all perfectly synced. Perfectly instantaneous derotation is nice to have but not mission critical. You still have some bumps where the pod starts to roll or stops rolling and one idea is that it takes some time for the dero to catch up and fix the angle of the background. So at the same time as the image goes off center, the background suddenly rotates back and forth.
 
Last edited:
Many thanks for your reply; can you speak to these questions?

Two questions, just to be clear,

1. How did you determine that the pod wasn't impacted by elevation changes (which would result in my above stitched-together Gimbal image)?
2. How did you determine that the camera needed rotating, if you weren't starting from a "clouds are perfectly levelled" position?

Because I have, respectfully, submitted what I found, with the end result being indicative of pod elevation changes.

I just want to be clear with this part, so anyone else reading this has a better contextual understanding of the thread, so please correct me if I am mistaken.

Reading the reasoning for this "camera needs to be rotated",

I'm assuming that it's intuitive for pilots to look left/right, up/down, but not to tilt their head, so I try to recreate what the horizon would look like if you had a camera strapped to the jet that can only rotate left/right in the wing plane, or up/down perpendicular to that. First I find how much I need to rotate the camera along those lines to look directly at the object, then I compute the angle between a vector pointing right in the camera plane, and the real horizon given by a vector that is tangential to the global 'az' viewing angle.

I believe this needs to be cleared up. On one hand we have what I have said:

If you,
1. remove plane bank
2. calculate the plane's pitch to encompass bank
3. use the calculated pitch to determine how much additional tilt the camera has, and remove that

We have a demonstrable, repeatable method that results in elevation changes within the defined parameters of the elevation figure, to understand the encounter.

As opposed to the method implemented in Sitrec:
1. take the levelled clouds
2. calculate backwards how much we have to rotate the camera, and apply that.

We now get a system that doesn't use a mirror to de-rotate (I am not sure how you are reconciling mirror use with camera rotation, or whether your model has a mirror?), but instead a computer/software-generated rotation that allows the glare to rotate independently of what the glare is orientated on.

Finding common ground: we both agree there is tilt looking sidewards, but it's the amount we are differing on.
- Mine uses the plane's parameters, pitch and azimuth, and no additional rotation.
- Yours uses the plane's parameters plus more rotation; why? (This comes back to the question: how did you determine it was camera roll as opposed to pod elevation changes?)

[edited for more context and clarification]
 
Last edited:
The GoFast camera code is different from the Gimbal camera code. Gimbal was the original app that Sitrec grew from, and there's still a bunch of code in there that's Gimbal-specific, including the horizon adjustment.

The horizon adjustment code is needed because a "camera" in three.js assumes you want up to be up, so before today, the GoFast camera was not rotated, which was incorrect, and made the motion of the ocean not match the video. I've fixed this with basically this bit of code:

JavaScript:
const humanHorizon = get_real_horizon_angle_for_frame(f);
const camera = objectNode.camera;
camera.rotateZ(radians(humanHorizon));

in more depth:

JavaScript:
export function get_real_horizon_angle_for_frame(frame) {
    var jetPitch = jetPitchFromFrame(frame) // this will get scaled pitch
    var jetRoll = jetRollFromFrame(frame)
    var az = Frame2Az(frame)
    var el = Frame2El(frame);

    return getHumanHorizonFromPitchRollAzEl(jetPitch, jetRoll, az, el)
}

export function getHumanHorizonFromPitchRollAzEl(jetPitch, jetRoll, az, el) {

//     if (type == 1) {
//         return jetRoll * cos(radians(az)) + jetPitch * sin(radians(az));
//     } else {
//         // rotate the absolute 3D coordinates of (el, az) into the frame of reference of the jet
//         vec3d relative_AzElHeading = EA2XYZ(el, az, 1)
//             .rotate(vec3d { 1, 0, 0 }, -radians(jetPitch)) // reverse both the order and sign of these rotations
//             .rotate(vec3d { 0, 0, 1 }, radians(jetRoll));
    var AzElHeading = EA2XYZ(el, az, 1)
    var relative_AzElHeading = AzElHeading
        .applyAxisAngle(V3(1, 0, 0), -radians(jetPitch))
        .applyAxisAngle(V3(0, 0, 1), radians(jetRoll))

//         // calculate (el, az) angles relative to the frame of reference of the jet
//         auto [relative_el, relative_az] = XYZ2EA(relative_AzElHeading);
    var relative_el, relative_az;
    [relative_el, relative_az] = XYZ2EA(relative_AzElHeading)

//         // compute the jet's pose in the global frame of reference
//         auto jetUp = vec3d { 0, 1, 0 }
//             .rotate(vec3d { 0, 0, 1 }, -radians(jetRoll))
//             .rotate(vec3d { 1, 0, 0 }, radians(jetPitch));
    var jetUp = V3(0, 1, 0)
        .applyAxisAngle(V3(0, 0, 1), -radians(jetRoll))
        .applyAxisAngle(V3(1, 0, 0), radians(jetPitch))

//         auto jetRight = vec3d { 1, 0, 0 }
//             .rotate(vec3d { 0, 0, 1 }, -radians(jetRoll))
//             .rotate(vec3d { 1, 0, 0 }, radians(jetPitch));
    var jetRight = V3(1, 0, 0)
        .applyAxisAngle(V3(0, 0, 1), -radians(jetRoll))
        .applyAxisAngle(V3(1, 0, 0), radians(jetPitch))

//    DebugArrowV("jetUp", jetUp)

//         // rotate the camera by relative_az in the wing plane so that it's looking at the object
//         // the camera pitching up by relative_el has no effect on a vector pointing right
//         auto camera_horizon = jetRight.rotate(jetUp, -radians(relative_az));
    var camera_horizon = jetRight.applyAxisAngle(jetUp, -radians(relative_az));

//    DebugArrowV("camera_horizon", camera_horizon, 100, 0xff0000) // red

//    pointObject3DAt(gridHelperNod, camera_horizon)

//         // the real horizon is a vector pointing right, perpendicular to the global viewing angle az
//         auto real_horizon = vec3d { 1, 0, 0 }.rotate(vec3d { 0, 1, 0 }, -radians(az));
    var real_horizon = V3(1, 0, 0).applyAxisAngle(V3(0, 1, 0), -radians(az))
//    DebugArrowV("real_horizon", real_horizon, 100, 0x00ff00) // green

//         // it can be shown that the real horizon vector is already in the camera plane
//         // so return the angle between the camera horizon and the real horizon
//         return -degrees(camera_horizon.angleTo(real_horizon));
    var horizon_angle = -degrees(camera_horizon.angleTo(real_horizon))

    var cross = camera_horizon.clone().cross(real_horizon)
    var dot = cross.dot(AzElHeading)
    if (dot < 0)
        return -horizon_angle

    return horizon_angle
//     }
// }
}

Which is @logicbear's function taking pitch, roll, az, and el, and giving the horizon angle. (Originally implemented in C++; that code is there in the comments.)

The fact that it greatly improved the background motion angle would seem to validate its correctness.

Note the old SinCos code
JavaScript:
//     if (type == 1) {
//         return jetRoll * cos(radians(az)) + jetPitch * sin(radians(az));
//     } else {

is commented out (i.e. removed)

I think the lack of this code in GoFast (the full function, not SinCos) might have led to some confusion.
 
Many thanks for your reply; can you speak to these questions?
I don't really follow what the basis for those questions is. Perhaps the previous post will clarify things for you in a way that will help you either explain what you mean, or realize it was wrong (perhaps based on GoFast not having the correct horizon, which is really my fault).
 
This shows GoFast, before (top) and after (bottom) the horizon fix.

 
The estimated elevation change, that can be adjusted in Sitrec (as well as in the model used in our paper), assumes that the last cloud line is a consistent reference for elevation, i.e. elevation change in the pod must follow how the object moves relative to that cloud line. The object/glare/thing gets a bit higher versus the clouds, so the El angle of the pod varies slightly in the sim to match this (~0.05° if I recall).

It's an assumption that is questionable, in my opinion, when looking at the stitched-frames picture. There are multiple cloud lines in view; is the one at the beginning at the same elevation as the one at the end? That's the question.
 
I don't really follow what the basis for those questions is.
How was it determined that camera rotation was required, as opposed to it being the natural effect of the pod's elevation changing downwards?

Put another way, why is my method and result wrong?
410.jpg


And how was it determined that there was another rotation method actually occurring instead?

[edited for clarity]
 
Last edited:
How was it determined that camera rotation was required, as opposed to it being the natural effect of the pod's elevation changing downwards?
That still makes no sense at all to me. I think there's a language issue. These seem like unrelated concepts.
 
When we account for bank, and for the camera tilt that we all agree is present, and then stitch that together, we get this.

fill.jpg


The counter to this is, "the camera needs to be rotated"


What was the reason behind deciding that camera needs to be rotated?

How do we distinguish between "there's a pod elevation change" and "the camera needs more rotation"?

As Cholla said

"There are multiple cloud lines in view, is the one at the beginning at the same elevation as the one in the end? That's the question."

[edited for clarity]
 
Last edited:
This shows GoFast, before (top) and after (bottom) the horizon fix.

View attachment 86800
Sorry, but that is quite difficult to see. I do see some motion items that are a little off, but I am also getting those and working through that. But I would need to check it directly myself. As an aside, are you implementing this change in any other recreations, or just for Gimbal and GoFast?

Just to illustrate what I am doing, and why the correct de-rotation method is crucial (why I brought it up in the first place):


Source: https://www.youtube.com/watch?v=qnRte2G2uxw


The top two video players are the 3D camera and the original footage as-is, and I can overlay the 3D camera on top of it to check.

The bottom two video containers are de-rotated (bank and frustum roll), so I can check that motion (what the camera is doing).

The left one is double the FOV size, with the red inner box representing the 3D camera view at the correct FOV, just to see how the background is passing through in an overview capacity (a "do I have a camera orientation issue, or are the figures not correct" type thing).
 
Last edited:
When we account for bank, and for the camera tilt that we all agree is present, and then stitch that together, we get this.
You do. I don't follow how that sentence leads to that image.

The counter to this is, "the camera needs to be rotated"
Who countered this?

The left one is double the FOV size, with the red inner box representing the 3D camera view at the correct FOV, just to see how the background is passing through in an overview capacity (a "do I have a camera orientation issue, or are the figures not correct" type thing).
Why not implement the horizon correction, which seems intuitively correct, and works in both Gimbal, and now GoFast?

As an aside, are you implementing this change in any other recreations, or just for Gimbal and GoFast?
I tried adding it to FLIR1, but (as I expected) it makes little difference, as the camera plane is in level flight and the Az does not vary much.

It does not apply to every camera situation, even within military settings. Like I'm not sure how it would apply to MQ-9 footage like
 
How do we distinguish between "there's a pod elevation change" and "the camera needs more rotation"?
This may be clear to everybody but me, but can you clarify -- when you say "elevation change," are you referring to the tilt of the camera changing, or the actual altitude of the aircraft changing. Or something else I haven't thought of! ^_^
 
You do. I don't follow how that sentence leads to that image.
The clouds move through at an angle. Stitching it together, to match cloud features, results in that image.


Source: https://www.youtube.com/watch?v=6dXHWFAXB3Q


and by using the FOV of 0.36×0.35, we can measure how much pod elevation change has occurred

410.jpg


Screenshot (3841).png
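The arithmetic behind reading an elevation change off a stitched image is straightforward once the vertical FOV is known. The pixel numbers below are hypothetical placeholders for illustration, not measurements from this thread:

JavaScript:
```javascript
// Convert a vertical pixel offset between matched cloud features in a
// stitched sequence into a change in pod elevation angle.
// Assumes a linear small-angle mapping across the narrow 0.35-degree FOV.
function elevationChangeDeg(pixelShift, frameHeightPx, vFovDeg) {
  return pixelShift * (vFovDeg / frameHeightPx);
}

// Hypothetical example: a cumulative 507 px shift across 480 px tall frames
// with a 0.35-degree vertical FOV would be about 0.37 degrees of elevation change.
```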


This is the same thing we see occur in @Mick West's example of a helicopter getting closer to a balloon. The background moves through at an angle because the pod's elevation is looking further down.

d7dfa0dd-0fa0-4161-ba65-862a4928cb58.png

Who countered this?

You guys who are saying that didn't occur, and that we need to rotate the camera. Which is why I am asking how you are determining whether background objects moving through at an angle are due to
1. elevation change, or
2. camera rotation.

By way of example: in the balloon case, the formula for lbf's camera rotation would make the background go through the camera view perfectly level (removing the natural effect of elevation change).


Why not implement the horizon correction, which seems intuitively correct, and works in both Gimbal, and now GoFast?


It's more of a check value to ensure the camera is set up correctly, i.e. if it is passing through correctly in one view and not in the other, we can determine that.
 
This may be clear to everybody but me, but can you clarify -- when you say "elevation change," are you referring to the tilt of the camera changing, or the actual altitude of the aircraft changing. Or something else I haven't thought of! ^_^
I said "pod elevation change"

Screenshot (3882).png


Meaning the elevation of the pod, or tilting the camera down (as opposed to camera tilt that is a result of the plane's pitch).

Currently everyone uses a -2-degree down angle with little change; referencing Cholla's post, 0.05 degrees of change.

I am saying it is a measurable, significant change of 0.37 degrees.


Screenshot (3841).png
 
Perhaps this is a better example: when Bellingcat worked on the Mosul footage (the other one) and stitched it together, this is what they got, matching ground features:
line.jpg


All I am saying is that when we

1. remove plane bank
2. remove the camera tilt due to the plane's pitch

we get elevation changes.

i.e. the elevation of the cloud line at the start is NOT the same as the elevation of the cloud line at the end.
 
All I am saying is that when we

1. remove plane bank
2. remove the camera tilt due to the plane's pitch

we get elevation changes.

i.e. the elevation of the cloud line at the start is NOT the same as the elevation of the cloud line at the end.
You keep saying you did those things, but don't explain how.

Define "elevation", in this context, exactly.

We know the altitude of the camera is fixed at 25000 feet


I really don't know what you are trying to get at here. Why is the cloud horizon curved?
2025-12-05_14-33-24.jpg


This seems more like a stitching artifact than anything. Like, why not:
2025-12-05_14-36-42.jpg
 
I explained how I did it in my first post; perhaps not clearly enough?

https://www.metabunk.org/threads/some-refinements-to-the-gimbal-sim.12590/post-358343

1. Footage shows clouds are tilted

Screenshot (3883).png

To place it in the global up orientation, we

1. Remove the plane's bank,

Screenshot (3884).png


We still have the clouds at an angle. We know that the camera is tilted (due to how it is mounted on the plane), so we calculate how much tilt there is:

1. Calculate how much additional pitch, over 3.6 degrees for level flight, is required.

Screenshot (3885).png

We now know what the plane's pitch is.

From that, we can now calculate how much the camera is tilted with the formula =DEGREES(ATAN(TAN(RADIANS(I3)) * SIN(RADIANS(A3)))), where I3 is the plane's pitch calculated per frame, and A3 is the plane's azimuth per frame.

Screenshot (3886).png
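For anyone who prefers code to spreadsheet cells, the formula above translates directly to JavaScript (function and parameter names are mine):

JavaScript:
```javascript
// =DEGREES(ATAN(TAN(RADIANS(I3)) * SIN(RADIANS(A3)))) as a function.
// Returns the camera tilt (frustum roll) in degrees implied by the
// plane's pitch at a given azimuth. All angles in degrees.
const rad = d => d * Math.PI / 180;
const deg = r => r * 180 / Math.PI;

function frustumRollDeg(pitchDeg, azDeg) {
  return deg(Math.atan(Math.tan(rad(pitchDeg)) * Math.sin(rad(azDeg))));
}
```

At az = 0 (looking straight ahead) the tilt is zero, and at |az| = 90° it equals the full pitch, matching the intuition that the tilt is the pitch "seen sideways".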


We remove the camera tilt and we still have the clouds at an angle.

Screenshot (3887).png


As the footage is now de-rotated so that up is the global up direction for everyone, when we stitch it together we get footage that shows elevation change.

We see this background motion in
- Mosul footage
- GoFast
- the helicopter and balloon example
- ME24, etc.

and we all say the background features pass through at an angle because there are azimuth and elevation changes, but for some reason, when it happens in Gimbal, the camera is broken and we need to rotate the camera?

Put another way, how do we know that the elevation figure for the pod is -2 degrees for the entire thing, and NOT -2 degrees at the start, dropping to -2.37 degrees before increasing to -2.35 degrees?

410.jpg


Screenshot (3841).png
 
This seems more like a stitching artifact than anything. Like, why not:

That is why I am asking:

1. Where are these extra degrees (for more rotation) coming from to get this?
2. How was it determined that it's NOT elevation changes but the camera needing to be rotated?

How was it determined that the only elevation figure for Gimbal is -2 degrees from start to finish?

Because when I read the rationale for camera rotation,
I'm assuming that it's intuitive for pilots to look left/right, up/down, but not to tilt their head, so I try to recreate what the horizon would look like if you had a camera strapped to the jet that can only rotate left/right in the wing plane, or up/down perpendicular to that. First I find how much I need to rotate the camera along those lines to look directly at the object, then I compute the angle between a vector pointing right in the camera plane, and the real horizon given by a vector that is tangential to the global 'az' viewing angle.

I am only seeing "the clouds are level and, working backwards, if we rotate the camera the clouds pass through level again", with no reason why the clouds are in that orientation to start with.

[edited for more context]
 
I just want to add the following,

When we all worked on this, using ONLY -2 degrees for the entire encounter, it was determined that there is an altitude increase for the close-by trajectory.

Now Ryan Graves made the following tweet, quoting Cholla:
Screenshot (3888).png


He maintained for quite a while that he did NOT see an altitude increase on the SA page. I am looking for the clip and will update when I find it (or if anyone has it, please drop it below), but he was being interviewed and remarked that it was "Mick's website" (Sitrec), or words to that effect, that convinced him of the altitude increase.

Cholla has remarked, because it would, that elevation changes cancel out the object needing to increase altitude:

I will just note that your estimated change in the El angle of the pod (0.35° or about one FOV) would pretty much cancel out the need for gain in altitude of the object in the <10Nm close path (0.35° corresponds to 220-300ft at 6-8Nm, which was our estimate of the object rise in altitude with assumed close-to-constant El angle).
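As a sanity check, the 220-300 ft figure follows from simple triangle geometry (a rough sketch; I'm assuming 1 Nm = 6076 ft and small-angle, flat geometry):

```python
import math

NM_TO_FT = 6076.12  # feet per nautical mile

def rise_for_el_change(el_change_deg, range_nm):
    # Altitude change subtended by a small change in elevation angle
    # at a given slant range (flat-triangle approximation)
    return math.tan(math.radians(el_change_deg)) * range_nm * NM_TO_FT

print(round(rise_for_el_change(0.35, 6)))  # ~223 ft at 6 Nm
print(round(rise_for_el_change(0.35, 8)))  # ~297 ft at 8 Nm
```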

This again would give weight to it being an elevation change as opposed to a camera rotation issue.
 
I am only seeing "the clouds are level and, working backwards, if we rotate the camera the clouds pass through level again", with no reason why the clouds are in that orientation to start with.
You are going round in circles. This thread is about why the clouds don't match the artificial horizon, and refers to the earlier thread where we discussed different methods.

https://www.metabunk.org/threads/gimbal-derotated-video-using-clouds-as-the-horizon.12552/

The artificial horizon is the angle of the horizon when looking forward. It's basically just roll.

The cloud horizon (since cloud layers are generally flat) is the angle of the horizon when looking to the side. It involves roll, pitch, az, and el.

And it's not just working backwards. You have to handle cases like Az=45°, roll=0, pitch=6°. As az increases in magnitude towards 90°, you have to move from a roll-dominated horizon to a pitch-dominated one. Or, like I said 3 years ago:
The artificial horizon angle is a measure of the bank angle of the plane; it would match the real horizon when looking forward, in level flight.

If the pilot is banked 45° left, and looks 90° to the left, they would not expect the horizon to be tilted 45°, but if they look forward they would.

It's an interesting issue, but I'm a bit busy right now.

Since roll is bigger than pitch, and Az didn't get that close to 90, this was glossed over in initial sims. Then approximated with Sin/Cos. Now it's fixed.
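For reference, the Sin/Cos approximation mentioned here can be sketched like this (an illustrative form only, not the actual Sitrec code):

```python
import math

def horizon_angle_approx(roll_deg, pitch_deg, az_deg):
    # Sin/Cos blend: roll dominates looking forward (az = 0),
    # pitch dominates looking to the side (az = 90)
    a = math.radians(abs(az_deg))
    return roll_deg * math.cos(a) + pitch_deg * math.sin(a)

# The case above: Az = 45, roll = 0, pitch = 6 -> the horizon is still
# tilted even with zero roll:
print(round(horizon_angle_approx(0, 6, 45), 2))  # 4.24
```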
 
You are going round in circles. This thread is about why the clouds don't match the artificial horizon, and refers to the earlier thread where we discussed different methods.
The mentioned thread has no discussion about the mismatch being a function of a change in elevation.

The artificial horizon is the angle of the horizon when looking forward. It's basically just roll.

Fully agree, with the caveat for frustum roll (camera tilt due to pitch).
The artificial horizon angle is a measure of the bank angle of the plane; it would match the real horizon when looking forward, in level flight.

If the pilot is banked 45° left, and looks 90° to the left, they would not expect the horizon to be tilted 45°, but if they look forward they would.

It's an interesting issue, but I'm a bit busy right now.
The dero mirror keeps the image in the correct orientation (up is up); you even said so in your Gimbal analysis video. So with a 45-degree bank, looking 90 degrees left, the horizon will be tilted according to frustum roll (camera tilt due to the plane's pitch) plus the plane's bank.

We see this in Gimbal specifically: when the plane banks from 22 degrees to 35 degrees, the angle of the cloud line changes by the amount of bank. So I am not sure where you are going with "they wouldn't expect the horizon to be tilted 45 degrees". Gimbal demonstrates that bank angle is added at a rate of 1:1.

1 degree bank = 1 degree of horizon change
30 degrees of bank = 30 degrees of horizon change


Source: https://www.youtube.com/watch?v=ubHceIFDATE


That is levelled on the artificial horizon: when the plane increases its bank, the "box" rotates and there is no change to the cloud line, hence bank is reflected at 1:1.

The difference between the plane's artificial horizon and the cloud line is calculated via frustum roll (how much the camera is tilted).
Since roll is bigger than pitch, and Az didn't get that close to 90, this was glossed over in initial sims. Then approximated with Sin/Cos. Now it's fixed.
Correct me if I am wrong, but you are now talking about de-rotating the image, correct?

I.e., the pod-head eye and the actual camera, when looking left (90 degrees), are actually oriented at 90 degrees compared to looking straight ahead; that's the pyramid on the right. So it's using the pitch axis in the pod head less than what roll? Pod roll or plane bank (roll)?

Screenshot (3889).png


But there is a de-rotation mirror between the pod-head eye and the camera, meaning that, even though it's at 90 degrees, the mirror corrects the image to be in the up direction as if it were looking straight ahead. So camera orientation is cancelled out, except for "tilt" due to the plane's pitch.

So I'm not understanding where "roll is bigger than pitch" comes into this. Can you reword that part, please?

Screenshot (3889) - Copy.png
 
Perhaps it would just be easier if you could point out why

1. remove plane bank
2. remove camera tilt

is the incorrect method for putting the footage in the correct orientation, and

1. Remove bank
2. Use a formula that adds extra degrees of rotation in

is the correct method for putting the footage in the correct orientation.
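To make the comparison concrete, here is a rough sketch of the two methods as I read them (illustrative only, not anyone's actual sim code; the Sin/Cos form is the approximation mentioned elsewhere in the thread):

```python
import math

def method_a(roll_deg, pitch_deg, az_deg):
    # 1) remove plane bank at 1:1, 2) remove frustum roll (camera tilt
    # from pitch) -- the method argued for above
    tilt = math.degrees(math.atan(
        math.tan(math.radians(pitch_deg)) * math.sin(math.radians(az_deg))))
    return roll_deg + tilt

def method_b(roll_deg, pitch_deg, az_deg):
    # Sin/Cos horizon blend: roll weighted by cos(az), pitch by sin(az)
    a = math.radians(abs(az_deg))
    return roll_deg * math.cos(a) + pitch_deg * math.sin(a)

# The two agree looking forward (az = 0) but diverge toward az = 90:
for az in (0, 45, 90):
    print(az, round(method_a(30, 3.6, az), 2), round(method_b(30, 3.6, az), 2))
```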

I am not trying to be difficult, and I understand there is no obligation for anyone to do anything, but I can't reconcile that background angular motion isn't actually background angular motion; it's a camera that, even though it's fixed in place, is actually rotating, or some software that is rotating its image for some unknown reason.

4eed383d-b7fc-4ff9-8e03-381ed4a65151.png


Noting that when Sin/Cos was used for Go Fast, it removed background motion changes due to elevation, so that the background only moves through in one direction.


Source: https://www.youtube.com/watch?v=WFhi_Kq-WEk


We would not expect the background to move through in only one direction; it should be dynamic, like in this example of a Reaper being overtaken by the camera:


Source: https://www.youtube.com/watch?v=PT8_O8-piPw
 
I now understand your line of reasoning, @Zaine M.

Do you think you could demonstrate the perspective effect you describe in a 3D recreation?

i.e. showing that in the Gimbal configuration, the F-18 getting closer to an object in the <10Nm range would result in that angle in the clouds, which would gradually disappear with Az->0. Versus some weird dero algorithm, for pilot comfort (?).
 