Some Refinements to the Gimbal Sim

The dero mirror keeps the image in the correct orientation, up is up; you even said so in your gimbal analysis video. So when we have a 45-degree bank, looking left 90 degrees, the horizon will be tilted according to frustum roll (camera tilt, due to the plane's pitch) plus the plane's bank.

We see this in Gimbal specifically: when the plane banks from 22 degrees to 35 degrees, the angle of the cloud line changes by the amount of bank. So I am not sure where you are going with "they wouldn't expect the horizon to be tilted 45 degrees". Gimbal demonstrates that bank angle is added at a rate of 1:1:

1 degree bank = 1 degree of horizon change
30 degrees of bank = 30 degrees of horizon change
That's true for the artificial horizon, because that just shows the bank. The issue here is what the actual horizon looks like, when looking sideways. We see that the more sideways we look (i.e. the larger absolute values of az), the more the actual horizon deviates from the artificial horizon.

If you are in a plane that is banked 45° and you look forward, then the real horizon (and the artificial one) would be at 45°

But if you looked due right (or left), why would you see a 45° tilt?
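A quick numeric check of this geometry (my own sketch, assuming an idealized body-fixed camera with no derotation at all; NED axes and simplified sign conventions):

```python
import math

def rx(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def ry(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rz(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def horizon_tilt(bank_deg, pitch_deg, az_deg):
    """Apparent tilt of the true horizon for a camera slaved to the airframe
    (NED axes; body pitch about y, bank about x, then camera azimuth about
    the body z axis). No derotation applied."""
    b, p, a = (math.radians(x) for x in (bank_deg, pitch_deg, az_deg))
    R = mul(mul(ry(p), rx(b)), rz(a))  # pitch, then bank, then camera az
    # Camera roll relative to the world = tilt of the horizon in the image.
    return math.degrees(math.atan2(R[2][1], R[2][2]))

# Looking forward in a 45-degree bank: horizon tilted ~45 degrees.
print(horizon_tilt(45, 0, 0))    # ~45.0
# Looking due right (az 90) in the same pure bank: tilt ~0 --
# the bank shows up as the camera looking down, not as horizon tilt.
print(horizon_tilt(45, 0, 90))   # ~0.0
# Looking due right with 3.6 degrees of pitch, no bank: tilt = pitch.
print(horizon_tilt(0, 3.6, 90))  # ~3.6
```

So on raw geometry, looking forward the bank appears fully as horizon tilt, while looking 90° to the side a pure bank shows up as the camera pitching down rather than tilting, and what tilt remains comes from the plane's pitch.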
 
I appreciate your request for clarification, as I do feel bad that I have tried to explain everything multiple different ways. @TheCholla's suggestion to demonstrate rather than explain repeatedly is taken on board, so I will pull back from replying.

But to your question, I hope this footage assists, as the plane goes from level to a 90-degree bank with the ATFLIR footage visible in the bottom right. (I clipped this while working out how much delay there was between input on the stick and the camera.)


Source: https://www.youtube.com/watch?v=DPrvuhWt16A



It certainly appears to reflect bank directly, just like in Gimbal. (Does that clear up your question? To answer this part of your post directly, "why would you see a 45° tilt?": IDK, all the footage shows it's there.)

Just adding,

We see that the more sideways we look (i.e. the larger absolute values of az), the more the actual horizon deviates from the artificial horizon.

100 percent agree. I raised the camera being tilted to account for that, because that's what the calculations show. More azimuth, more tilt due to the plane's pitch.

So at 90 degrees az, the mismatch between the plane's artificial horizon and the actual horizon = the plane's pitch.

Which is why it is really odd to me that the formula calls for 7-8 degrees of rotation, which exceeds what the plane's pitch is.

Screenshot (3886).png


[edited to agree with mick, the more az the more mismatch and provide an example of planes pitch impacting mismatched horizon]
 
Last edited:
But if you looked due right (or left), why would you see a 45° tilt?
The question may be reversed: why would you not see a tilt?
Tracking at full left or full right while banking aggressively must not be a situation that 1) often happens, or 2) is one where the aircrew care how the true horizon looks in the display.

With that much aggressive bank the Az will quickly decrease and end that situation anyway. Unless you're in a dogfight, but again I don't think that's a situation where you have time to care about how the background horizon is tilted. So in the end we simply don't know, and the point that "we should not see banking in the real horizon when looking full side" remains an assumption. One that seems far in the weeds, but one that is actually critical to the complicated dero algorithm you have for Gimbal.
 
I just put together a comparison, using my figures, to determine how much rotation there is of the thing we are looking at.

Screenshot (3893).png


There are two rotation amounts, top and bottom, for each of the isolated object views. The top blue line has no rotation and a fixed angle; the bottom is dynamic, going from 19 to 16 degrees, so three degrees of rotation.

I can see three degrees of rotation from the start to the first major rotation of the target, but at this stage I can NOT say that is definitive; I will have to work on a more stable version.


Source: https://www.youtube.com/watch?v=-KVvebg4cXc
 
Can you clarify why you show this? Is the point you're making that, after stabilization, the background rotates by a certain amount due to perspective, while the object doesn't?
 
Sorry for that posting error.

Here is the vid I intended to post.

This shows a camera mounted on a "leaning" tripod, to simulate a banking plane. The horizontal porch railing here stands in for the horizon. Perhaps it will help visualize what happens when looking straight ahead as opposed to off to the side. NOTE: There is no "derotation" software here, this is just showing what the camera sees "as is."


While attached to the same mount, while looking "straight ahead" the camera is effectively rolled to one side and the horizon crosses the screen on a diagonal. When the camera pans 90 degrees, it is no longer rolled but is tilted downward (or upward if panned the other direction); the horizon now crosses the screen horizontally. This does not account for any other changes in the roll, yaw or pitch of a plane in flight, or any software manipulation of the image, such as derotation. (Edit to add a note: there is a small "adjustment" in camera position after the pan; as I let go of the camera it shifted a bit, as the rig was smashed down a bit more by the weight of my hand on the tripod!)

To make sure we're on the same page, terminology-wise: in the above description, pan is what is happening in blue, tilt is what is happening in green, and roll is what is happening in red in this diagram. In terms of plane movement, different terminology may be used, such as yaw (blue), pitch (green) and roll (red).

delme.png


Hopefully this will be useful. If not, forget it's here! ^_^
 
Can you clarify why you show this? Is the point you're making that, after stabilization, the background rotates by a certain amount due to perspective, while the object doesn't?
I made these remarks for context on the computer/software-generated rotation.

Facts NOT in dispute,
1. The object in Gimbal rotates in the first 20 seconds.
2. The camera's orientation is responsive to the plane's pitch due to how it is mounted.

The undeniable rotation was based on the Sin Cos formula, and as I have only posted "wide angle" images and video, I posted that version, based on my method for de-rotation.
 
This shows a camera mounted on a "leaning" tripod, to simulate a banking plane.
Nice little video, thank you; my takeaway is the same as what I mentioned here.

So at 90 degrees az, the mismatch between the plane's artificial horizon and the actual horizon = the plane's pitch.

Which is why it is really odd to me that the formula calls for 7-8 degrees of rotation, which exceeds what the plane's pitch is.
the horizon tilt = camera tilt.

At 90 degrees to direction of tripod tilt
 
The clouds move through at an angle.

I am asking how you are determining whether background objects moving through at an angle are due to
1. elevation change, or
2. camera rotation.

Can't this be answered by searching for the axis of rotation of the image, in the stabilized version when we only see the clouds realigning?

#1 is your suggestion, i.e. elevation change (in the pod, El angle) is changing the angle by which the clouds appear in the image.
This is different from a rotation around the center of the image: the cloud line enters in the same spot of the FOV but leaves from a different one.

#2 is rotation around the center of the image -> the clouds move up on the left as much as they move down on the right

Have you ever looked into this?
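A sketch of how that axis-of-rotation search could work numerically (synthetic numbers, not measurements from the footage): fit the cloud line in two stabilized frames, then intersect the two lines; the intersection is the instantaneous pivot of the apparent motion.

```python
import math

def pivot(p1, ang1_deg, p2, ang2_deg):
    """Intersection of two lines, each given by a point on it and an angle.
    For a cloud line imaged in two frames, this is the instantaneous pivot
    of its apparent motion: near the image centre it suggests a rotation
    about the boresight (#2); near the edge where the clouds enter the FOV
    it suggests perspective / El-change shear (#1)."""
    a1, a2 = math.radians(ang1_deg), math.radians(ang2_deg)
    d1 = (math.cos(a1), math.sin(a1))
    d2 = (math.cos(a2), math.sin(a2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]       # cross(d1, d2)
    if abs(denom) < 1e-12:
        raise ValueError("lines are parallel; no unique pivot")
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t = (dx * d2[1] - dy * d2[0]) / denom       # solve p1 + t*d1 = p2 + s*d2
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Synthetic check 1: a line rotating about the image centre (0, 0),
# each frame's line sampled at a point other than the pivot.
pA = (0.5 * math.cos(math.radians(19)), 0.5 * math.sin(math.radians(19)))
pB = (0.5 * math.cos(math.radians(16)), 0.5 * math.sin(math.radians(16)))
print(pivot(pA, 19, pB, 16))        # ~ (0.0, 0.0): boresight rotation

# Synthetic check 2: the line pivots about its entry point at the left edge.
qA = (-1 + math.cos(math.radians(19)), 0.2 + math.sin(math.radians(19)))
qB = (-1 + math.cos(math.radians(16)), 0.2 + math.sin(math.radians(16)))
print(pivot(qA, 19, qB, 16))        # ~ (-1.0, 0.2): edge pivot, shear-like
```

With real tracked points the recovered pivot would be noisy, but whether it clusters near the centre or near the entry edge is exactly the #1-versus-#2 question.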

EDIT: typo
 
Can't this be answered by searching for the axis of rotation of the image, in the stabilized version when we only see the clouds realigning?
As lbf is essentially the architect of the formula being used (check the first post, Mick credits them with it), these aren't questions that can be answered by going to the video.

I was only asking how they were able to determine that it's not angular background motion (elevation change), but a computer/software rotation/de-rotation that is occurring.

I can't reconcile that the background angular motion isn't actually background angular motion, but instead a camera that, even though it's fixed in place, is actually rotating, or some software that is rotating its image for some unknown reason.

At this point I am not going to pursue explanation of that formula any further, but I do ask that it is noted that I demonstrated,
- taking the original format of the footage, (the DOD release)
- applying the corrections i mentioned,
- for the reasons i gave,
- results in background angular motion.
 
#1 is your suggestion, i.e. elevation change (in the pod, El angle) is changing the angle by which the clouds appear in the image.
This is different from a rotation around the center of the image: the cloud line enters in the same spot of the FOV but leaves from a different one.
Simplest way,

the plane's bank is dynamic, and frustum roll (camera tilt) is dynamic

by removing those two elements to de-rotate the footage, up is up for everyone (if you look out the window you see this): the cloud starts in the bottom left and moves higher and to the right.

The only way that cloud reference point can move up and to the right is if the camera is panning left and looking down (elevation angle decreases to look at the ground).

Same thing we see in Go Fast: the wave starts bottom left and moves to the top right, which happens because the camera is looking left and panning down.
 
So in other words you're saying we expect this effect, because in Gimbal the camera looks left and down (like in GoFast but much less pronounced tilt because El angle is much smaller).

But why would this effect not be in Sitrec then, such that it needs an additional derotation term to get it?
 
Yes, and I apologise, I mis-read your initial message and will address it more appropriately,
#1 is your suggestion, i.e. elevation change (in the pod, El angle) is changing the angle by which the clouds appear in the image.
This is different from a rotation around the center of the image: the cloud line enters in the same spot of the FOV but leaves from a different one.
Yes. (the camera is in the upright position throughout)
#2 is rotation around the center of the image -> the clouds move up on the left as much as they move down on the right

This is best illustrated visually: the Go Fast footage is de-rotated by both methods side by side. Left, my method; right, the Sin Cos method.


Source: https://www.youtube.com/watch?v=WFhi_Kq-WEk


Left Side
We see that even if a wave has the same entry point in the field of view, as the video progresses the exit point from the FOV changes.

Right Side
No matter what, the start and exit points will always be the same, so the formula is rotating the footage to make that happen.

To determine which is the correct method, we have to compare to other examples of footage for how background motion behaves in similar circumstances.


Source: https://www.youtube.com/watch?v=PT8_O8-piPw


This is comparable, and only illustrative, for how the background moves when we have a camera panning left and looking down, like we see in Go Fast (although the camera view here is straight, not curved).

And what stands out is that the entry and exit points vary (as well as the way the background moves through the scene), so it's confirmation that my method is correct.

The only way we would be able to produce a result that matches the formula, is to start rotating the camera in a non standard way to keep the background always moving in one direction.
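For what it's worth, the varying background-motion behaviour under pan-plus-tilt can be sketched with a simple pinhole projection; the camera path and the world point below are made-up numbers, not GoFast values:

```python
import math

def project(point, yaw_deg, pitch_deg):
    """Project a fixed world point into a pinhole camera at the origin.
    Axes: x forward, y right, z down; yaw (pan) about world z, then
    pitch (tilt) about the camera y axis. Returns normalized (u, v)."""
    y, p = math.radians(yaw_deg), math.radians(pitch_deg)
    # world -> camera: undo yaw, then undo pitch
    cx = math.cos(y) * point[0] + math.sin(y) * point[1]
    cy = -math.sin(y) * point[0] + math.cos(y) * point[1]
    cz = point[2]
    fx = math.cos(p) * cx - math.sin(p) * cz
    fz = math.sin(p) * cx + math.cos(p) * cz
    return (cy / fx, fz / fx)   # u right, v down

# Camera pans left and tilts down over "time"; track one background point.
pt = (10.0, -2.0, 1.5)          # ahead, slightly left, below (made up)
traj = [project(pt, -8 * t, -3 * t) for t in range(5)]
dirs = [math.degrees(math.atan2(b[1] - a[1], b[0] - a[0]))
        for a, b in zip(traj, traj[1:])]
print(dirs)   # the direction of image motion drifts from frame to frame
```

Under a pure image-plane rotation the motion direction of a point at a fixed radius would change in lockstep with the rotation; under pan-plus-tilt the direction drifts with perspective, which is the distinction being argued over here.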
 
Is this still the case with Mick's updated method? Hard to see when bank isn't removed.
GENERAL AND NOT TO HAVE DEFINITIVE CONCLUSIONS DRAWN FROM
Because of the format, in 2 x 2 squares, it's not "easy" to work with this, so take the results with a BIG grain of salt. I did this over only 20-30 minutes to give a general idea, and it is NOT definitive.

After effects to track artificial horizon on bottom left panel, rotation, Premiere for overlays and values.

Noting I did NOT do it for the whole thing, but roughly the same timecoded part, as mine above was what I was focussed on.

Screenshot (3896).png


Rotation values, and only working on the updated code box, bottom left, that Mick said he implemented.

While it says a 1-degree linear progression, it doesn't match halfway through, so from what I can tell it's "close" to staying the same angle. Personal opinion and not an authoritative take, but that could just be me.

Video below.


Source: https://www.youtube.com/watch?v=deuuQC2ljKc


Again, BIG grain of salt: not a great video to work with, and I was time constrained.

Maybe Mick can swap out the GF footage and have a side-by-side of the two methods in the recreation: removing bank plus the formula on one side, and removing bank plus frustum roll on the other panel? Similar to my two video containers?

Screenshot (3897).png

Mine currently removes bank, and frustum roll dependent on the plane's pitch.

(my bigger thing is the actual motion, where I can see that mick is getting the same type of "pull" in the background I get)

[edited for clarity and for suggestion on resolution]
 
I'm sorry, I don't have time for this.
All good, we will keep chugging away at this and will ping you directly when we get some results of more interest.


I did have another run at determining the background motion angle in the updated 3D camera view.


Source: https://www.youtube.com/watch?v=_sfeVg4FH-M


I only got 2 degrees; if anyone wants to check, just play the footage on YouTube at a slower speed.

But I did have someone reach out and ask me to do a proof of concept for my method of determining elevation change, on Go Fast, and I am posting it here. Spoiler: it doesn't actually work, broadly, but it fails for identifiable reasons.

Screenshot (3900).png


This is my Go Fast FOV work, and napkin math says there are roughly 7 degrees of FOV elevation, yet the actual figures from the pod indicate 15-odd degrees, -21 to -36 degrees.

So, on the face of it, I failed. But the initial elevation angle and the horizontal forward motion of the plane need to be taken into account, as that motion is converted into vertical angle change; the steeper the elevation angle, the more it is impacted.

So illustratively,
Screenshot (3901).png

Just using these figures,
FOV 4.27 degrees

ACTUAL elevation,
point 1 - -35.54
point 2 - -78.69

BUT we only have one FOV (4.27 degrees) of motion. So the method is off by:

point 2 - point 1 = -43.15 degrees
-43.15 / 4.27 = 10.1

The method is off by a factor of ten in that example. (The Go Fast example is off by a factor of 2: calculated 7 degrees, actual 15; rough napkin math, but close enough.)

This is because the ground reference points are really close, 8-ish nautical miles away, coupled with a steep elevation to start with; hence horizontal motion is converted into vertical elevation change.
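That near-versus-far principle can be sketched with flat-earth geometry; the altitude and distances below are illustrative stand-ins, not the actual GoFast values:

```python
import math

def depression_deg(alt_nm, ground_dist_nm):
    """Depression angle to a ground point, flat-earth approximation."""
    return math.degrees(math.atan2(alt_nm, ground_dist_nm))

def angle_change(alt_nm, dist_nm, closure_nm):
    """How much the depression angle steepens after moving closure_nm of
    horizontal distance straight toward the reference point."""
    return (depression_deg(alt_nm, dist_nm - closure_nm)
            - depression_deg(alt_nm, dist_nm))

ALT = 4.1   # ~25,000 ft expressed in NM, illustrative only
# Near reference (GoFast-like, ~8 NM) vs far reference (Gimbal-like clouds).
near = angle_change(ALT, 8.0, 1.0)
far = angle_change(ALT, 120.0, 1.0)
print(near, far)
# The same 1 NM of forward motion steepens the angle to the near point
# by degrees, but to the distant point by only hundredths of a degree.
```

This is the mechanism the post describes: the closer and steeper the reference, the more the plane's forward motion masquerades as elevation change.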

But in Gimbal, we are looking between -1.5 and -2.49 (it says -2 degrees), which is really close to level, PLUS the clouds are, according to Metabunk, roughly 120-150 miles away (I'm just going to use 120 nautical miles for simplicity).

Screenshot (3902).png


So both second positions are the same distance away from the start position, and we have the same FOV.

New example is 2 degrees down initially, with 4.27 degree FOV.

There are ONLY 2 differences between them:
1. Initial elevation angle
2. Target reference point changes from 8 nautical miles to 120 nautical miles

Screenshot (3903).png


The FOVs are lined up the same as in the steeper-angle case.

Stepping through this, CASE 2, shallow angle.

Position 1: initial angle down is 2 degrees.
Position 2: elevation is now 6.49 degrees.

Screenshot (3904).png

link to geogebra
https://www.geogebra.org/classic/fuws4f3g

My method would say: starting at -2 elevation, with one FOV of change (4.27 degrees), the elevation would be -6.27 degrees.

The actual elevation of 6.49 degrees minus my result of 6.27 degrees equals a 0.22-degree difference.
My method would be off by 5.15 PERCENT of the FOV size.

But this is all dependent on how much closer the jet gets to the clouds; noting that in Gimbal the jet ISN'T flying towards the clouds initially (-56 azimuth), I believe that distance closure is negligible.

Relating this back to Gimbal, the FOV is 0.35 degrees and I am asserting an elevation change of 0.37 degrees. So even if I were incorrect by 5 percent (5 percent of 0.37 is 0.0185 degrees), there is still just over 0.35 degrees of elevation change after accounting for the horizontal motion of the jet (camera) being converted into vertical elevation change by getting closer to the reference point.
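The arithmetic in the last two paragraphs, written out (figures taken from the posts above):

```python
# Worked-example figures from the geogebra comparison above.
fov = 4.27              # degrees
actual_el = 6.49        # degrees, shallow-angle case
method_el = 6.27        # degrees, -2 start plus one FOV of motion
err_pct_of_fov = (actual_el - method_el) / fov * 100
print(round(err_pct_of_fov, 2))   # -> 5.15

# Applying the same relative error to the Gimbal claim:
claimed_change = 0.37   # degrees of elevation change asserted
worst_case = claimed_change * (1 - 0.05)
print(round(worst_case, 4))       # -> 0.3515, still above the 0.35-degree FOV
```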

Which is a long winded way to say that we can use my method with a degree of certainty because,

1. the cloud reference points in Gimbal are soooo far away from the camera,
2. as opposed to Go Fast, where the ground reference points are so much closer, plus it has a steep angle that results in the plane's forward motion being converted into actual pod elevation change.

Any questions?
[edited punctuation, grammar]
 
Illustrating what i am saying with google street view,

Screenshot (3898).png


We have a close by water tank and a distant tree highlighted. In this second image

Screenshot (3899).png


While the distant tree hasn't really moved in relation to the horizon, the water tank now has a significant elevation increase, because the car's forward movement is converted into elevation change.

% decrease in distance = increase/decrease in elevation amount.

Further-away things (smaller % distance change) are more reliable sources of reference points than close-by reference points.

Gimbal: the plane is flying at 6 NM per minute, so 3 NM in the 30 seconds. Even if it were to be flying directly at clouds 120 NM away, it's only closing the distance by 2.5 percent; factoring in that the majority of it is at an angle, we are talking less than 2 percent of the distance closed to the clouds. So they are reliable reference points.
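The closure arithmetic, written out; the small-angle drift estimate at the end is my own rough bound, not a figure from the post:

```python
import math

cloud_dist_nm = 120.0
speed_nm_per_min = 6.0
closure_nm = speed_nm_per_min * 0.5       # 30 seconds of flight
closure_pct = closure_nm / cloud_dist_nm * 100
print(closure_pct)                        # ~2.5 percent, head-on upper bound

# Rough bound (my own estimate): for a near-level line of sight at el
# degrees, closing a fraction f of the range shifts the apparent
# elevation by roughly tan(el) * f radians.
el_deg = 2.0
drift_deg = math.degrees(
    math.tan(math.radians(el_deg)) * (closure_nm / cloud_dist_nm))
print(round(drift_deg, 3))                # hundredths of a degree
```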

[edited to include gimbal reference for distance change]
 
I decided to test my claim in the above Gimbal-specific setup: clouds 120 NM away, with the plane being able to get 3 NM closer to the clouds.

https://www.geogebra.org/classic/mpqjbamf

Link to the geogebra

(Geogebra doesn't like 0.175 degrees, so I have used 0.18 degrees for each half of the FOV in relation to the LOS)

Screenshot (3906).png


Screenshot (3907).png


And the results are actually interesting, because at the full 3 NM towards the clouds, the elevation figure is actually steeper.

Using 2 degrees down with an FOV of 0.36 (for the above reason I couldn't use the 0.35 that it actually is),

a full 3 NM towards the clouds results in an elevation of 2.42 degrees (which would be above my 2.36 degrees).

Screenshot (3908).png


Which means my results appear to be UNDER-REPORTING the elevation amount, but I will stick with them for now and be conservative.
 
Relating this back to Gimbal, the FOV is 0.35 degrees and I am asserting an elevation change of 0.37 degrees. So even if I were incorrect by 5 percent (5 percent of 0.37 is 0.0185 degrees), there is still just over 0.35 degrees of elevation change after accounting for the horizontal motion of the jet (camera) being converted into vertical elevation change by getting closer to the reference point.
Implications of this are: 1) the camera is tracking an object that is getting closer (no distant plane); 2) Graves' scenario is there without an altitude rise; 3) no need for a weird background dero when the pod isn't rolling, to explain away CW rotation of the object in the absence of roll (the thing being a glare is greatly questioned).

Pretty massive for the little world of Gimbal analyses. Doesn't mean it's aliens; nobody really cares, and this will probably remain in the abyss of this forum and X. But personally I just need to look at the stitched images to see that there is a problem with assuming that the highest cloud line at the end has the exact same elevation as the one at the beginning. Said differently, that the El angle did not change by more than a few hundredths of a degree (versus tenths). From which years and years of analyses and arguments have ensued. Which again, all depend on "the highest cloud line at the end has the exact same elevation as the one at the beginning". It's key.
 
Implications of this are: 1) the camera is tracking an object that is getting closer (no distant plane); 2) Graves' scenario is there without an altitude rise; 3) no need for a weird background dero when the pod isn't rolling, to explain away CW rotation of the object in the absence of roll (the thing being a glare is greatly questioned).
Yes, although a distant plane needs to be porpoising, decreasing altitude by 1,200-odd feet before starting to climb again. I am of the opinion that contextual discussion about glare needs to involve the first step, how it's being generated in the first place.

This thread is about refinements to Sitrec; my endeavours in identifying my own 3D camera orientation generated my initial claims as to what I discovered about the F-18's camera and de-rotation of the footage, so it stayed on topic. Now, noting the implications as @TheCholla laid out, and that there is a lot of discussion about glare, I respectfully take this opportunity to ask the mods, or @Mick West if he sees this:

Am I able to raise in this thread discussion about the glare? Or if I can be directed to the thread where it would be more appropriate please.

Noting that in this thread we have substantive identification of how the horizon is orientated, I am asking to keep the two primary contentious issues "together". More than happy to follow the directions of the mods and Mick for where they feel it's appropriate.

*(It's just the inverse square law applied to IR energy, with examples)

[edited to add with examples]
 
Implications of this are: 1) the camera is tracking an object that is getting closer (no distant plane); 2) Graves' scenario is there without an altitude rise; 3) no need for a weird background dero when the pod isn't rolling, to explain away CW rotation of the object in the absence of roll (the thing being a glare is greatly questioned).
4/ algorithm still not correct

Is the target getting demonstrably closer to the clouds?
Or has Zaine falsely constructed cloud movement, such that the target movement is a self-induced consequence of that?

Neither of you has succeeded in explaining what Zaine is doing, and we can't critique it from the maths/code itself, because it hasn't been posted. All we have is under-explained diagrams.

At this point, it hasn't been demonstrated that Zaine's work is correct, nor that the previous work is wrong.
It's too soon to be drawing these kinds of conclusions.
 
It's not a conclusion, it's an observation that a lot depends on a very early assumption, that the cloud line has a constant elevation throughout.

This assumption: "The highest cloud line at the end has the exact same elevation as the one at the beginning."

@Zaine M. adds to a line of evidence that there is a problem with that assumption, looking into the mismatch between the clouds' angle and the artificial horizon. It's not only about derotating the image with some new dero term to fix it; the clouds enter in the same spot but with a fluctuating angle as Az decreases, so there is something else going on here.

On top of that cloud angle problem, there are some cues in that image showing that the assumption is risky, like the different perceived depth of the clouds at the beginning versus end, how the cloud depth seems to flatten progressively. We don't even know where the clouds line stops exactly, as they get progressively washed out in the distance (remember the ATFLIR can only resolve features to a certain distance).
1765234105127.png


We are only looking at about 3° of clouds in total (the amount of background scanned by the camera). Clouds are not always flat, especially over such a small portion of the sky. And we are looking at them from the side and then from the front of a plane in a bank, with pitch influence that gradually disappears in the image; from a different perspective, in other words. They are not a flat 2D image in the background.

What @Zaine M. is doing is relatively straightforward: remove bank, remove the effect of (estimated) pitch, and look at what's left.
1765234159610.png
 
Is the target getting demonstrably closer to the clouds?
Or has Zaine falsely constructed cloud movement, such that the target movement is a self-induced consequence of that?
Great question @Mendel
Two questions, just to be clear,

1. How did you determine that the pod wasn't impacted by elevation changes (which would result in my stitched-together Gimbal image above)?
2. How did you determine that the camera needed rotating, if you weren't starting from a "clouds are perfectly level" position?
Currently there are no responses/substantiation to those questions.

Noting that Mick has made this remark, that these are parameters that need to be addressed:

Screenshot (3919).png


asking why the stitched version is in the orientation that I have it in, and followed up with this

Screenshot (3918).png


From what I can tell, "but the horizon looks better like this" is the only reason given to orientate it that way.

As an example, Go Fast looks like it's going fast, but when we get into how it's generated we discover it's not moving fast. When we get into how the horizon is generated, there is no reason given to account for rotating the image more, other than that it matches a formula which removes natural camera rotation and background motion, with me providing an example of side-by-side comparisons on other footage.


Source: https://www.youtube.com/watch?v=WFhi_Kq-WEk


At this point, it hasn't been demonstrated that Zaine's work is correct, nor that the previous work is wrong.
It's too soon to be drawing these kinds of conclusions.

I appreciate your response. I provided the formulas, methodology, excels etc, noting that even @JMartJr provided an example that the camera tilt doesn't exceed the tilt of the tripod (the plane's pitch), whereas the formula calls for more rotation than that (plane pitch/camera tilt).

https://www.metabunk.org/threads/some-refinements-to-the-gimbal-sim.12590/post-358855

I have also taken the direction and commented about the glare here,

https://www.metabunk.org/threads/mick-vs-marik-rotation-glare-gimbal.13739/post-358971

[edited for clarity - background motion reference and link to new discussion part]
 
Great question @Mendel
Currently there are no responses/substantiation to those questions.
The response to this is actually known: it's been assumed the clouds are perfectly flat, that the last cloud line we see at the end is at the exact same elevation as at the beginning.

I look at this and I think this is what it is: a strong, risky assumption. Because the way we perceive the cloud depth is quite different, imo. As is the angle by which they enter the FOV, as you've shown and explained with the perspective effect of looking down and to the side (aka a less dramatic GoFast effect).

Screenshot from 2025-12-08 15-08-25.png
Screenshot from 2025-12-08 15-08-50.png
 
The response to this is actually known: it's been assumed the clouds are perfectly flat, that the last cloud line we see at the end is at the exact same elevation as at the beginning.

I look at this and I think this is what it is: a strong, risky assumption. Because the way we perceive the cloud depth is quite different, imo. As is the angle by which they enter the FOV, as you've shown and explained with the perspective effect of looking down and to the side (aka a less dramatic GoFast effect).

View attachment 86950, View attachment 86951
I think your screenshots are at the same magnification, NAR Z 2.0? Then the apparent distance of the object to the clouds has increased.


We are only looking at about 3° of clouds in total (amount of background scanned by the camera).
Yes. Your comparison photo does not, and it shows a different type of cloud layer.
 
The dero mirror
As I recall you had a rather unique take on what kind of dero might be used. How exactly do you think that works? Does it even rotate at all? If it does, then if you rotate it by 1 degree, how much does the resulting image rotate? What rotates it, and how much exactly? Either way, how much does the resulting image get rotated as a function of the current pod/jet roll/pitch or whatever? Can you show examples of such a thing?

I think what we've assumed is that it is something at least functionally similar to this:

Source: https://www.youtube.com/watch?v=lMyuHy9WZGM
So: a device where the image goes in, always along the same central axis, and comes out as an image that is rotated by an angle proportional (2x here) to the angle by which the device was rotated about its axis, possibly by a motor which is then controlled by software.
 
As I recall you had a rather unique take on what kind of dero might be used. How exactly do you think that works?

Short version
1. Remove bank at 1:1. (I say this because when we level on JUST the artificial horizon, any banking is cancelled out and we don't see any appreciable change in the cloud line, i.e. it would be derotated at 1:1, not as per the sin or cos of bank.)
2. Account for the plane's pitch increase due to bank, over the standard 3.6-degree pitch for level flight, and apply that value to determine how much camera tilt there is. (As we all agree, the camera is parallel to the plane's fuselage, and any pitch the plane has is reflected in how much the horizon is offset from the plane's artificial horizon.)
3. Use those figures to derotate the footage, to put it into a global state where up is up for everyone.
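A minimal sketch of those three steps as I read them; the sin(az) scaling of the pitch term in step 2 is an assumption on my part, since the post doesn't spell out the exact function:

```python
import math

def camera_tilt(pitch_deg, az_deg):
    """Step 2: estimated camera tilt from the plane's pitch. ASSUMPTION:
    scaled by sin(az), so it is 0 looking forward and equals the full
    pitch at az = 90 (matching 'at 90 degrees, plane's pitch = camera tilt')."""
    return pitch_deg * math.sin(math.radians(az_deg))

def derotation_deg(bank_deg, pitch_deg, az_deg):
    """Steps 1 + 3: remove bank at 1:1 plus the pitch-induced camera tilt,
    putting the frame into a global 'up is up' orientation."""
    return -(bank_deg + camera_tilt(pitch_deg, az_deg))

print(derotation_deg(30, 3.6, 90))  # -> -33.6: bank plus full pitch at az 90
print(derotation_deg(30, 3.6, 0))   # -> -30.0: looking forward, pitch term drops out
```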

Illustrated version

https://www.metabunk.org/threads/some-refinements-to-the-gimbal-sim.12590/post-358791
 
How exactly does it do any of the things you think it does? By what physical principle?

I listed them out: account for bank in the footage, account for camera tilt, and the footage results in the clouds moving through at an angle.

Screenshot (3935).png


Would you like to chat in DM?
 
I listed them out: account for bank in the footage, account for camera tilt, and the footage results in the clouds moving through at an angle.
That's a list of what you think it does. It is not a list of *how* it does any of those things. You seem to think it is a mirror. What kind of mirror or mirrors? Can you show an example? Can you draw a diagram of how the incoming IR gets deflected in just the right way for the resulting image to have all of the properties you ascribe to this device? It is critical to your argument.
 
All I'm saying is the mirror does what Mick says in here.
In the video Mick says "we want the horizon in the display to look like the horizon out the window", and that the dero rotates it to the correct angle. The implementation at the time had a simpler function for the dero which didn't quite achieve that goal. The current refinement does.
 
"we want the horizon in the display to look like the horizon out the window", and that the dero rotates it to the correct angle.
Which may be why they include bank in the footage.

Said this for a while

Which now brings us to today, where I am simply asking:

1. Where is my methodology incorrect?
2. Where do the additional degrees, that make the clouds pass through level and not at an angle, come from?

And I do note that you never replied to these,

Two questions, just to be clear,

1. How did you determine that the pod wasn't impacted by elevation changes (which would result in my stitched-together Gimbal image above)?
2. How did you determine that the camera needed rotating, if you weren't starting from a "clouds are perfectly level" position?

It would really assist me if you can let me know how you were able to determine that it's not background angular motion, but the camera needing to be rotated.
 
Which may be why they include bank in the footage.
But you know that the angle you see out the window is not just bank. If it's looking straight ahead then yes, but if it's looking 90 degrees to the side then the horizon should reflect pitch to a greater degree. Unless you wish to propose some novel dero method, you need software to rotate that according to the function we currently have. And if you do that, then your proposed solution is falsified, because the degrees of initial CW rotation are already accounted for.
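One way to express that out-the-window horizon angle in closed form (my own derivation from composing bank, pitch, and camera-azimuth rotations; signs simplified, so treat it as illustrative rather than as the actual Sitrec function):

```python
import math

def window_horizon_tilt(bank_deg, pitch_deg, az_deg):
    """Tilt of the true horizon seen by a body-fixed camera at azimuth az,
    for a plane with the given bank and pitch (no derotation). Reduces to
    bank at az = 0 and to approximately pitch at az = 90."""
    b, p, a = (math.radians(x) for x in (bank_deg, pitch_deg, az_deg))
    return math.degrees(math.atan2(
        math.sin(p) * math.sin(a) + math.cos(p) * math.sin(b) * math.cos(a),
        math.cos(p) * math.cos(b)))

for az in (0, 30, 60, 90):
    print(az, round(window_horizon_tilt(30, 3.6, az), 2))
# The bank contribution fades out and the pitch contribution grows as az
# increases, which is why a bank-only term can't be what the dero removes.
```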
 
1 degree bank = 1 degree of horizon change
30 degrees of bank = 30 degrees of horizon change


Source: https://www.youtube.com/watch?v=ubHceIFDATE


That is levelled on the artificial horizon; when the plane increases bank, the "box" rotates and there is no change to the cloud line, hence bank is reflected at 1:1.


But you know that the angle you see out the window is not just bank

Yes


but if it's looking 90 degrees to the side then the horizon should reflect pitch to a greater degree.
Yes, another Metabunk user provided footage of a tripod that demonstrates it:

https://www.metabunk.org/threads/some-refinements-to-the-gimbal-sim.12590/post-358855

here I agree with them.

Nice little video, thank you; my takeaway is the same as what I mentioned here.

the horizon tilt = camera tilt.

At 90 degrees to direction of tripod tilt


at 90 degrees, the plane's pitch equals the camera's tilt.
 
Can you elaborate on that, and on where these values are coming from?
If you run the numbers on how much the dero needs to rotate the image in order to produce a horizon angle that exactly matches what we would see out the window, you get that initial CW rotation. This is a robust finding. The current model assumes that the pod doesn't roll initially, but I also tried it without that assumption. It doesn't make any significant difference whether it is avoiding roll or not, or whether El is off by tenths of a degree: you *still* get around the same initial degrees of CW rotation if you just assume that Raytheon didn't do a half-assed job here, that they implemented a dead simple function to get the horizon exactly right.
 
If you run the numbers on how much the dero needs to rotate the image in order to produce a horizon angle that exactly matches what we would see out the window, you get that initial CW rotation.
How did you determine that it's not background angular motion but rather the camera needing rotation?
 