Some Refinements to the Gimbal Sim

Your observations are based on assumptions that may or may not be true, and that you definitely haven't proven to be true. Zaine should scrap this model and rebuild it from the ground up using the same process; I want to see if he gets the same results. You can align these images in hundreds of ways that make sense physically, because of all the different factors of both the camera moving and the platform moving while recording.

I would suggest scrapping it, running the numbers again using whatever process you did, and seeing if you get the same results. Right now your argument hinges on how you stitch these frames together to create a 2-D panorama of a three-dimensional space. I'm not sure why you think it matches better, other than "I looked at it and I thought it matched better." Is that enough proof to overturn the old hypothesis? I'm not so sure.

The problem that I point out with Mick/LBF's model is independent of the stitching; these are two different things.
Their model makes a prediction that doesn't happen. The stitching problem is something else entirely. Their model is not a stitched image.
But stitching reveals why the "refinement to the Gimbal sim" that was done in their model seems unjustified, especially given the contradiction it leads to.

Going back to stitching: following a simple method (leveling the frames, then recreating the by-design distorted scene) points to a change in elevation angle. What would be a good formula for stitching? Stitches have all been done by trying to get a continuous cloud line, but they all end up with some warping of some sort, because it's a moving camera in a 3-D scene. Here there is a clear method and justification behind what's been done (level up, then follow the clouds).
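To make the "level up, then follow the clouds" recipe concrete, here is a minimal numpy sketch of the leveling step, applied to measured cloud-feature coordinates rather than to pixels. The sign convention for bank and the synthetic slanted cloud line are my assumptions for illustration, not Zaine's actual pipeline:

```python
import numpy as np

def level_points(points, bank_deg):
    # rotate screen coordinates by -bank so the artificial horizon is level
    # (points: (N, 2) array of (x, y), y up; bank_deg: jet bank in degrees)
    a = np.radians(-bank_deg)
    rot = np.array([[np.cos(a), -np.sin(a)],
                    [np.sin(a),  np.cos(a)]])
    return points @ rot.T

def cloud_slope_deg(points):
    # slope of a roughly linear cloud line, in degrees from horizontal
    m = np.polyfit(points[:, 0], points[:, 1], 1)[0]
    return np.degrees(np.arctan(m))

# synthetic check: a cloud line slanted purely by a 10-degree bank
# should come out level after de-rotation
x = np.linspace(-100, 100, 21)
a = np.radians(10.0)
slanted = np.column_stack([x * np.cos(a), x * np.sin(a)])
leveled = level_points(slanted, 10.0)
print(round(cloud_slope_deg(slanted), 2))  # ~10.0
print(round(cloud_slope_deg(leveled), 2))  # ~0.0
```

In the real footage, whatever residual slope survives this step is exactly the quantity under debate (elevation change, perspective, or measurement error).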

I have a hard time seeing how you get horizontal clouds after leveling up the frames like Zaine did, with slanted clouds in there. Can you show an example?
 
Last edited:
The problem that I point out with Mick/LBF's model is independent of the stitching; these are two different things.
Their model makes a prediction that doesn't happen. The stitching problem is something else entirely. Their model is not a stitched image.
But stitching reveals why the "refinement to the Gimbal sim" that was done in their model seems unjustified, especially given the contradiction it leads to.

Going back to stitching: following a simple method (leveling the frames, then recreating the by-design distorted scene) points to a change in elevation angle. What would be a good formula for stitching? Stitches have all been done by trying to get a continuous cloud line, but they all end up with some warping of some sort, because it's a moving camera in a 3-D scene. Here there is a clear method and justification behind what's been done (level up, then follow the clouds).

I have a hard time seeing how you get horizontal clouds after leveling up the frames like Zaine did, with slanted clouds in there. Can you show an example?

What contradiction does it lead to and why are you confident that the stitching demonstrates it? Can you stitch them again starting from the bottom and get the same results? If not, why not?

You are following the clouds with no explanation of why they would remain at the same observational level on a moving platform with a rotating camera. This is the crux of the matter: I don't understand the justification for eyeballing the level of the clouds without even giving a reason why your intuition about it is correct.

When did I say I got horizontal clouds after leveling up the frames like Zaine?
 
If an objective careful measurement contradicts your attempt to subjectively eyeball a change of a couple degrees in a very noisy inconsistent video, then the former automatically wins. If you disagree, propose a better methodology for measuring it frame by frame. Patches welcome.

Hey, it's Mick and others like you who have said there are "observables". It should not take some Python code for folks to see them.
 
Why don't you explain why it did not pass? There have been many posts in this thread already asking you to show your work, and I have never had so much trouble following a thread here as I have following this one. I keep seeing a lot of claims and far fewer falsifiable hypotheses.

Here? No descent means horizontal, no? Or going up?

Because the complicated dero algorithm is a better fit for the actual video.

When I re-create Zaine's process I can get a hundred different angles for descent (including no descent), and I'm still not sure how he chose the one he did as the best fit.

Can you show an example of your stitching attempts? It would be easier to discuss.
 
Hey, it's Mick and others like you who have said there are "observables". It should not take some Python code for folks to see them.
I'm having trouble believing this argument is in good faith if that is how you took what logicbear is saying.

There should be a backbone of reproducibility involving math when we're mapping a three-dimensional event into a two-dimensional panorama. If you're just basing it on intuition and what looks correct to you, then it will never be reproduced, because there is no process.

I'm still not convinced that the refined formula does not pass muster and I haven't seen anything in this thread that looks like proof that it is not sufficient to explain what is seen on the screen.
 
Here? No descent means horizontal, no? Or going up?



Can you show an example of your stitching attempts? It would be easier to discuss.
Unfortunately I am posting from my phone right now, but I have no problem showing my attempts when I'm back at my computer if necessary.

Before I go through the trouble of uploading them though, can Zaine reproduce Zaine's results if he scraps these results and starts again using the same process?
 
I'm having trouble believing this argument is in good faith if that is how you took what logicbear is saying.
You quoted the link to the explanation
Screenshot (3965).png


Why don't you explain why it did not pass?
And then you post that, and complain that Cholla isn't "in good faith"?


@Mick West is this Metabunk or Twitter?

[edit - to include image]
 
There should be a backbone of reproducibility involving math when we're mapping a three-dimensional event into a two-dimensional panorama. If you're just basing it on intuition and what looks correct to you, then it will never be reproduced, because there is no process.
What would be a good math formula to recreate the stitching, then, if there is anything like a math formula to do so?

I'm still not convinced that the refined formula does not pass muster and I haven't seen anything in this thread that looks like proof that it is not sufficient to explain what is seen on the screen.

Then you think that the object has rotated like the clouds in the top versus bottom image here, or in the first 20 seconds of the leveled video.

1765576663110.png


1765576689295.png


One can say it's subjective or noisy, but Zaine has measured the angles (it's somewhere in the thread): the object rotates counterclockwise by a few degrees, versus the clouds by 8-10 degrees clockwise. Seems pretty clear to me.

I understand it's not clear to everyone why this matters. But it's a direct test of the hypothesis that the dero is responsible for the clouds' realignment in the first 20 seconds (cf. the 1st post of the thread). The object and clouds are supposed to rotate in concert, because dero without roll is supposed to rotate the whole image. Dero alone does not make a glare or other artifact rotate one way and the rest of the image the other way. Dero with pod roll does, because pod roll changes the orientation of the background versus the glare, and dero rotating the whole image (glare + background) creates this decoupling between glare and background. But here it's supposed to be without pod roll (because the pod is supposed to roll in steps; that's the basis of Mick's step-pod theory).
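The bookkeeping in this paragraph can be made explicit with a toy model. This is purely illustrative angle arithmetic under my own sign conventions (CW positive), not a claim about the actual ATFLIR internals:

```python
# toy model: angles (degrees) as they end up in the recorded, post-dero image
def image_angles(glare_cam, background_world, pod_roll, dero):
    # glare_cam: glare orientation, fixed in the camera/glass frame
    # background_world: cloud-line orientation in the world
    # pod_roll: camera rotation about the line of sight
    # dero: derotation applied to the whole image afterwards
    glare_img = glare_cam + dero                         # glare rides with the sensor
    background_img = background_world - pod_roll + dero  # world turns against the pod
    return glare_img, background_img

# dero without pod roll: glare and clouds rotate together (no decoupling)
g1, b1 = image_angles(0, 0, pod_roll=0, dero=7)
# dero compensating an equal pod roll: glare rotates, clouds stay put (decoupling)
g2, b2 = image_angles(0, 0, pod_roll=7, dero=7)
print(g1 - b1, g2 - b2)  # 0 7
```

So in this toy model, a glare that rotates differently from the clouds requires pod roll plus dero; dero alone moves both by the same amount, which is the contradiction being pointed at.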
 
One can say it's subjective or noisy, but Zaine has measured the angles,

Fully agree with @TheCholla. Even I cannot say whether there is or isn't rotation; I can see three degrees, but on the other side of the coin, I can also see how there is no rotation, just an adjustment by the object at 18 seconds.

Screenshot (3969).png


As to where my measurements come from,

Everyone has measured the angles, even Mick. The video should be cued up.


Source: https://youtu.be/qsEjV8DdSbs?t=421


The only thing I have done is calculate the plane's pitch, using Mick's 3.6-degree pitch for level flight as the base value, then use that to calculate the frustum roll (the angle the camera is tilted due to the plane's pitch), and use those values to put the footage in the correct orientation.
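That frustum-roll step can be sketched with a rotation matrix. The helper below and its sign conventions are my own illustrative reconstruction (body axes: x forward, y right, z up), not ATFLIR documentation:

```python
import numpy as np

def frustum_roll_deg(pitch_deg, az_deg):
    # apparent roll of the true horizon for a body-fixed camera pointed
    # az_deg off the nose of a jet pitched up by pitch_deg
    p, a = np.radians(pitch_deg), np.radians(az_deg)
    # pitch-up rotation about the body y (right-wing) axis
    R = np.array([[np.cos(p), 0.0, -np.sin(p)],
                  [0.0,       1.0,  0.0      ],
                  [np.sin(p), 0.0,  np.cos(p)]])
    fwd = R @ np.array([np.cos(a), np.sin(a), 0.0])   # line of sight in world axes
    rgt = R @ np.array([-np.sin(a), np.cos(a), 0.0])  # camera-right in world axes
    hor = np.cross([0.0, 0.0, 1.0], fwd)              # true horizon direction
    hor /= np.linalg.norm(hor)
    # signed angle from camera-right to the horizon, about the line of sight
    return np.degrees(np.arctan2(np.dot(np.cross(rgt, hor), fwd), np.dot(rgt, hor)))

# with the 3.6-degree level-flight pitch: no tilt looking dead ahead,
# and the full 3.6 degrees looking 90 degrees off the nose
print(round(frustum_roll_deg(3.6, 0), 3), round(frustum_roll_deg(3.6, 90), 3))
```

The azimuth dependence is the point: a single fixed tilt correction would only be exact at one azimuth.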

Screenshot (3966).png



[edited - previous response was rushed]
 
You quoted the link to the explanation
View attachment 87054


And then you post that, and complain that Cholla isn't "in good faith"?


@Mick West is this Metabunk or Twitter?

[edit - to include image]

That explanation tells me the argument you are making, but it does not show it. This is the difference I'm highlighting.

I don't understand your Twitter comparison. I try to keep my comments focused on the idea and not the person. If you feel like I've crossed a boundary DM me and we can talk about it.
 
What would be a good math formula to recreate the stitching, then, if there is anything like a math formula to do so?



Then you think that the object has rotated like the clouds in the top versus bottom image here, or in the first 20 seconds of the leveled video.

View attachment 87055

View attachment 87056

One can say it's subjective or noisy, but Zaine has measured the angles (it's somewhere in the thread): the object rotates counterclockwise by a few degrees, versus the clouds by 8-10 degrees clockwise. Seems pretty clear to me.

I understand it's not clear to everyone why this matters. But it's a direct test of the hypothesis that the dero is responsible for the clouds' realignment in the first 20 seconds (cf. the 1st post of the thread). The object and clouds are supposed to rotate in concert, because dero without roll is supposed to rotate the whole image. Dero alone does not make a glare or other artifact rotate one way and the rest of the image the other way. Dero with pod roll does, because pod roll changes the orientation of the glass (where the glare resides) versus the sensor, and dero rotating the whole image creates this decoupling between glare and the background image. But here it's supposed to be without pod roll (because the pod is supposed to roll in steps; that's the basis of Mick's step-pod theory).

I appreciate this post; it provides a lot of clarity for the argument being made. I can't meaningfully reply yet, but I just wanted to express appreciation for having it all concisely in one place. Trying to track down all the parts of this argument through the various threads has been tough.
 
That explanation tells me the argument you are making, but it does not show it. This is the difference I'm highlighting.

I don't understand your Twitter comparison. I try to keep my comments focused on the idea and not the person.
There are side-by-side comparisons of the initial formula being used compared to mine, there are two videos demonstrating the updated formula, and there is comparison footage (a 3D recreation of a Reaper being overtaken) demonstrating background motion. You just have to press play; not sure what else I can do to "show you".
 
If you are about to complain (a guess based on reading your posts) that it's a 3D recreation and not real footage,


Source: https://www.youtube.com/watch?v=oA8oa9xZnV4


I had that recreated in 3D tracking the Reaper, as opposed to that footage where it's not tracked. What you can see in this footage is that there is elevation, and the background motion responds to the left and right swaying of the camera, changing the angle at which the background moves through the scene.

But perhaps you would like a longer example of how background motion changes?


Source: https://www.youtube.com/watch?v=FfIFbP94xho&t=155s


A stabilized version tracking the object; note how the background motion changes throughout the footage.

Background motion does NOT stay in one orientation: the formula does not work.

[edit - to save me putting in another post, and for clarity]
 
I will address one more thing,

"the clouds dont look right",

I did NOT start from the position that the clouds need to look a certain way. I outlined the two factors that place the footage into the correct up orientation, and worked from the result: there is clear angular motion, because the clouds do NOT move through level.


Source: https://www.youtube.com/watch?app=desktop&v=PpJ6ppruH70


As I have said before,

Using Go Fast as an example: if I said "Go Fast is going fast because that's how it looks", every person on Metabunk would, deservedly, jump on me and point out that you can't say that because, when we use the data, it isn't.

So please allow me to ask again,


Hence, "the clouds are perfectly level to start with" is a false premise.

Where are these extra degrees coming from?
 
the object has not rotated CW like the clouds
Yes. That. is. the. point.
Because the camera is in a gimbal, it rotates independently of the aircraft (in intervals!), and the fact that the target does not rotate with the clouds proves it. You've been following this analysis forever; how do you not know this? Why do you think Raytheon engineered derotation?

And the fact that there's a camera bump before each rotation that affects the whole image, and that it matches the rotations Mick's simulated camera algorithm does, proves that it's 100% in-camera, i.e. the rotating shape is an in-camera artifact ("glare", for short) that looks nothing like the actual IR source 30 nm distant.

I had thought the only thing we disagreed on was whether the distant IR source followed a straight constant-velocity path, or whether it followed a J-hook vertical turn, which results from the curve the fighter is flying when you project that constant-velocity flight path to a shorter distance (based on witness recollection of what the radar looked like, and assuming the radar target and the IR target are the same).
 
I do have a question, @Mick West

In your "Gimbal, a new analysis" video, you said there's no rotation in the first 20 seconds, but you also say that bank is compensating for pod roll. The plane banks, which changes the pod head orientation, but the thing being filmed didn't rotate? Was that brought up?

Glare is relative to the glass/camera sensor.

Should be cued up


Source: https://youtu.be/qsEjV8DdSbs?list=PL-4ZqTjKmhn5Qr0tCHkCVnqTx_c0P3O2t&t=444


And I checked your sim; it shows clear signs of the pod head orienting differently, so why wasn't it rotating?
 
The "leveling" that has been done to it causes the apparent motion vector of the clouds to rotate.
See my post #175.
The leveling doesn't change how things move in relation to each other in the image. That realignment is there in the original video. This is just stabilized to remove the effect of bank/plane pitch here.

This thread was created to propose that this realignment of the clouds is caused by the dero mechanism in the pod. Can we agree on that? It's the whole point of the thread.
 
The leveling doesn't change how things move in relation to each other in the image. That realignment is there in the original video. This is just stabilized to remove the effect of bank/plane pitch here.
In relation to what do the clouds change their motion vector?
The target is essentially a point (without a direction), and the rest of it is HUD elements.

This thread was created to propose that this realignment of the clouds is caused by the dero mechanism in the pod. Can we agree on that? It's the whole point of the thread.
Post #1 of this thread explains that @logicbear 's way of describing the derotation is more accurate than the method Mick previously used in the gimbal simulator, and that this addresses the "difference between the cloud horizon and the artificial horizon". It links to the actual code.
 
OK, so the idea is that dero is responsible for the clouds realigning versus the artificial horizon (a long way to say yes!).

Now can you explain how, in that segment when they realign (the first 20 sec), the clouds rotate CW but not the object? How does the dero decouple the object from the clouds?
 
In your "Gimbal, a new analysis" video, you said there's no rotation in the first 20 seconds, but you also say that bank is compensating for pod roll. The plane banks, which changes the pod head orientation, but the thing being filmed didn't rotate? Was that brought up?
I'm sorry, I don't follow. Can you quote exactly what I said?
 
@Mendel I take the dislike as a "no".

Nobody can explain it, because if the dero truly were the cause of the clouds realigning, the object would rotate with them. It doesn't, and what we see is consistent with perspective and a change in elevation angle of the camera, as shown by @Zaine M. in this thread.

In other words, we are looking at an object getting closer (which, btw, explains why it gets bigger), an object that takes a path aligned with aircrew testimony: fly against the wind (westward), reverse with a minimal radius of turn, and fly to the east. With much less altitude rise than was found before in the close-path reconstructions.
 
I had a thought,

if they want to use the formula and run with that, let them.

If they want to think some software/hardware resolver is generating additional rotation, in addition to the mirror, meaning glare can do anything, let them.

If they want glare to get bigger no matter what the tail aspect is doing (it decreases and it gets bigger; it increases and it gets bigger), let them.

The dislike is just a way of saying "pull your head in and think like me", in my opinion.
 
what else i can do to "show you"
You could provide source code that downloads the original Gimbal video, runs all of the analyses, and generates the videos that you're showing. That should resolve any ambiguity in your description of what you're showing in your videos. Ideally all of the results should be reproducible from scratch by anyone with just a few clicks. That's the bar that I've tried to set with the open source automated measurements I've provided. Now you can still make some useful contributions without always meeting that high bar, but I think that's what we should all aim for, particularly when the feedback is that people don't understand or are unable to verify what you're doing.

the dero is responsible for the clouds' realignment in the first 20 seconds (cf. the 1st post of the thread). The object and clouds are supposed to rotate in concert
That's inaccurate. The jet's bank angle changing by 12 degrees causes most of the rotation of the glare relative to the background/clouds over the first 20 seconds. So the bank angle rotates the clouds by around 12 degrees while not changing the glare angle much, but then the whole image is rotated by the dero by another 6-7 degrees. So the glare ends up rotating by around 6-7 degrees, while the background rotates by a total of ~19.
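In round numbers, the decomposition described here is just two sums (the 12 and 6.5 below are the approximate figures from this post, not new measurements):

```python
# rotation decomposition over the first 20 seconds, in degrees
bank_change = 12      # rotates the clouds relative to the glare
dero_change = 6.5     # rotates the whole image (clouds and glare together)

glare_rotation = dero_change                     # what the glare does: ~6-7
background_rotation = bank_change + dero_change  # what the clouds do: ~19
print(glare_rotation, background_rotation)  # 6.5 18.5
```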

Here's the data for the bank angle, glare angle and cloud motion angle (for frame_diff = 5):
Python:
#added after getting the data in https://github.com/0x1beef/uap_nb/blob/main/src/gimbal_adjust_clouds.ipynb
import matplotlib.pyplot as plt
fig, axs = plt.subplots(3, figsize=(8, 15))
def plot(series, title, ax):
  series[0:650].plot(ax=ax)
  ax.set_title(title)
  ax.set_ylabel('degrees')
  ax.set_xlabel('frames')

plot(object_data.jet_roll, 'jet roll', axs[0])

df = common.gimbal_fix_wh_to_bh(object_data, ['glare_angle'])
plot(df.glare_angle, 'glare angle', axs[1])

df2 = cloud_data.query('frame_diff == 5').angle
df2 = df2.reset_index(level=[0,1])
df2 = common.gimbal_fix_wh_to_bh(df2, ['angle'])
plot(df2.angle, 'cloud motion angle', axs[2])
figures.png

There's some chance the glare is rotating less, as the 6-7 degrees assumed by the model is near the edge of what you could interpret from the data, but the idea that it's rotating much more, 12 degrees to match the bank angle as some have claimed, just doesn't make any sense. I don't know what kind of illusions might be responsible for some people thinking that.

The changing pod pitch also has a small effect on the pod horizon (the angle of the front-facing pod components relative to the background): something like a degree over the course of 20 seconds. To illustrate this I calculated the difference between the change in pod horizon and the change in bank angle relative to the first frame.
Python:
#added after getting the data in https://github.com/0x1beef/uap_nb/blob/main/src/gimbal_adjust_clouds.ipynb
from scipy.spatial.transform import Rotation
import numpy as np
import math

def normalize(v):
    return v / np.linalg.norm(v)

# rotate a vector by a certain angle around an axis.
def rotate(vec, axis, angle_degrees):
    angle = math.radians(angle_degrees)
    rot = Rotation.from_rotvec(angle * normalize(axis))
    return rot.apply(vec)

# get the angle between vectors 'a' and 'b'.
# the sign is relative to the vector 'c' which is orthogonal to 'a' and 'b'.
def signed_angle(a, b, c):
    # using https://stackoverflow.com/a/33920320
    return math.degrees(math.atan2(np.dot(np.cross(a, b), c), np.dot(a, b)))

def apply_jet_roll_pitch(vec, jet_roll, jet_pitch):
    vec = rotate(vec, [0, 0, 1], -jet_roll)
    vec = rotate(vec, [1, 0, 0], jet_pitch)
    return vec

# get the angle of the horizon in the pod's eye view without dero
def get_pod_horizon(jet_roll, jet_pitch, pod_pitch, pod_roll):
    # a vector pointing forward along the jet's boresight:
    jet_forward = apply_jet_roll_pitch([0, 0, -1], jet_roll, jet_pitch)
    # a vector pointing right in the jet's wing plane:
    jet_right = apply_jet_roll_pitch([1, 0, 0], jet_roll, jet_pitch)
    # the pod's horizon: a vector initially pointing towards jet right, rotated by pod roll
    pod_right = rotate(jet_right, -jet_forward, -pod_roll)
    # a vector pointing at the target, rotated according to pod roll/pitch:
    pod_forward = rotate(jet_forward, pod_right, -pod_pitch)
    # pod_forward, the global 'az' viewing angle and a vector pointing up are coplanar.
    # the global horizon is a vector pointing right, orthogonal to that plane.
    right = np.cross(pod_forward, [0, 1, 0])
    # a signed angle between the global horizon and the pod horizon:
    return signed_angle(right, pod_right, pod_forward)

#df is a pandas dataframe containing the gimbal data extracted from Sitrec
start_pod_roll = df.loc[0].pod_roll_glare # the pod roll calculated based on the glare angle
start_horizon = get_pod_horizon(df.loc[0].jet_roll, df.loc[0].jet_pitch, df.loc[0].pod_pitch, start_pod_roll)

def pod_horizon_const(d):
    return get_pod_horizon(d.jet_roll, d.jet_pitch, d.pod_pitch, d.pod_roll_glare) - start_horizon - \
        (d.jet_roll - df.loc[0].jet_roll)
 
df.apply(pod_horizon_const, axis=1)[0:650].plot()
1765713447990-png.87079

The precise shape of that graph is uncertain, since it is sensitive to pod roll, which in the current code is off by ~2 degrees relative to the intended flat profile. Also, this is using a simplified pod horizon function without yaw. But the point is just that even while the pod is not rolling, the glare rotates relative to the background by something like a degree more than just the bank angle during the first 20 seconds, which could also be slightly complicating efforts to see that rotation visually.
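As an aside on what "automated measurement" of the cloud motion angle can look like: the core of a frame-by-frame measurement can be sketched with plain numpy phase correlation. This is an illustrative stand-in for the notebook's actual method; `estimate_shift` and the synthetic frames are mine:

```python
import numpy as np

def estimate_shift(frame_a, frame_b):
    # phase correlation: the integer (dy, dx) shift that maps frame_b onto frame_a
    cross = np.fft.fft2(frame_a) * np.conj(np.fft.fft2(frame_b))
    cross /= np.abs(cross) + 1e-12  # keep only phase information
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = frame_a.shape
    if dy > h // 2:  # map wrap-around peaks to signed shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

def motion_angle_deg(frame_a, frame_b):
    # angle of the apparent motion vector, degrees from horizontal (screen y grows down)
    dy, dx = estimate_shift(frame_a, frame_b)
    return np.degrees(np.arctan2(-dy, dx))

# synthetic check: a random texture rolled 3 px down and 4 px right is recovered exactly
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(np.roll(img, 3, axis=0), 4, axis=1)
print(estimate_shift(shifted, img))  # (3, 4)
```

Tracking that angle across frame pairs gives a cloud-motion-angle curve without anyone eyeballing slopes, which is the spirit of the graphs above.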
 

@Mendel I take the dislike as a "no".

The dislike is just a way of saying "pull your head in and think like me", in my opinion.
I use the X reaction for disagreement; if I don't explain it, it should be obvious from my previous posts, and we're probably going in circles.

I use the thumbs down reaction for posts that set the discussion back. Peings paraphrased me badly, and then asked a question that had already been answered, all the while maintaining the imprecise language that impedes following the thread and clear thinking.
 
That's inaccurate. The jet's bank angle changing by 12 degrees causes most of the rotation of the glare relative to the background/clouds over the first 20 seconds. So the bank angle rotates the clouds by around 12 degrees while not changing the glare angle much, but then the whole image is rotated by the dero by another 6-7 degrees. So the glare ends up rotating by around 6-7 degrees, while the background rotates by a total of ~19.

The stabilized video, as well as your graphs, proves that things do not happen the way you describe them. In the leveled video we would see the object rotate CCW with the edges during bank, and then progressively rotate CW with the clouds (dero). This doesn't happen, as the video shows, and as your graphs actually show too. Your measured glare angle is not a progressive CW rotation of 6-7°, especially between frames 1-100, where there is a step of 4° change that closely aligns with bank.

That's the bar that I've tried to set with the open source automated measurements I've provided.
Coding is great, but it's also important to make sure the initial assumptions are correct. Here you treat everything as if we were looking at a 2-D image, measuring angles and interpreting them as if everything had no depth/perspective, when we are looking at a video of a complicated 3-D scene with a lot of depth and a moving point of view.

I'm sorry, but what we see in the video, and your angle measurements too, don't support your conclusions about a glare fixed in the camera frame and dero realigning the clouds with the artificial horizon.
 
Your measured glare angle is not a progressive CW rotation of 6-7°, especially between frames 1-100, where there is a step of 4° change that closely aligns with bank.
Coincidentally there's also some banking at the start, but that's not really an explanation, since we don't see these steps at later points in the video with the same change in bank angle. One thing that's unique about the start of the video is that it's the part where the shape of the target changes the most. So one could argue that the initial step might just be an artifact of that, and maybe the reality is a progressive change of 6-7 overall. Or one could argue that it's real, and there might be a way to tweak the model and posit some initial rotation step that accounts for it. I've proposed modifications like that in my original study of the glare angle results. Or I don't know whether it could end up being a better fit if there's a degree or two of contribution from el or perspective or whatever. I did say that I wouldn't be surprised if more refinements are needed.
But you know what's even more of a stretch than any of those things? Your proposal that it's actually a change of 12 degrees or more, to match or exceed the bank angle. So you can't have it both ways. If you argue that there's an initial step but then it actually changes much less, that means your perception of how much it rotates in your stabilized video is still off by over 8 degrees! If you throw the measurements out entirely in favor of your perception, then it's all subjective. A lot of people originally saw the glare as being static for the first 20 seconds. Maybe the truth lies in the middle, e.g. 6-7.
 
You should stop torturing these poor angles to find a "better fit". That's working backwards to find the solution you want.

We see no hint of the progressive clockwise rotation in the first 20 sec that would follow dero. So there is no glare that the dero is progressively realigning with the clouds; this doesn't work. After the first rotation that follows bank (4°!) there is at best 1° of CW rotation in your measurement, far from what the clouds do. And we just need to look at @Zaine M.'s leveled video to see there is really nothing there.


Source: https://www.youtube.com/watch?v=-KVvebg4cXc&t=34s


Then it's a mixed bag: there is the bank at 18 s when the object seems to move in the stabilized video, but look at the clouds and tell me they don't also move a bit. The clouds don't even follow bank closely (your graph). The whole thing is a mess, and again I think it's because we are looking at a complex 3-D scene with a lot of depth and perspective (and in a very narrow FOV). I've always said I think we are looking at a diffuse, "glare-like" object of which we don't see a clear outline. But a real glare doesn't care about what's behind it in the image and in 3-D; it should just remain fixed in the camera frame or follow the hypothetical dero. But it doesn't.

You completely ignore the change in perspective in the clouds; it's pretty clear, though. Take that into consideration, look at the change in elevation angle that goes with it, and we retrieve the described path, without the progressive rise in altitude that everyone here treated as if it weren't an artifact of keeping the El angle almost exactly constant.

No need for wild guesses about tail angle: the object is getting bigger because it's getting closer. No need for pilots locking onto the wrong target, radar errors, or all the other added speculations and assumptions; the list is long.

Meanwhile the described event is right there in the lines of sight: an object getting closer and doing what the aircrew said it did, in the 120-kt wind. And nothing breaking the laws of physics as a reason to absolutely refuse to consider it.
 
You could provide source code that downloads the original Gimbal video, runs all of the analyses, and generates the videos that you're showing. That should resolve any ambiguity in your description of what you're showing in your videos.
I'm sorry, I missed the part where your method is demonstrated de-rotating, placing the footage into the correct orientation, in another video. (I don't have source code; I just rotate the footage to make the artificial horizon level, then remove the camera tilt value. It works on the Go Fast footage as well as the Gimbal footage.)

That's the bar that I've tried to set with the open source automated measurements I've provided. Now you can still make some useful contributions without always meeting that high bar,

I know you are claiming to set the bar high, with a jab at me for "making some contributions without meeting that high bar". I'm not saying you haven't done a lot of work (you have), but proving your method works should be high on the list.

Here's an example: if my neighbour tells me a 3000-pound pink elephant was in his lounge room, and I go over, have a look, and can't see any evidence for it, I'm not going to spend time finding out:
1. the elephant's name
2. what the exact shade of pink is
3. what the elephant was wearing
So without demonstrating your method can be used on other videos, and just assuming it works, what shade of pink did the neighbour tell you the elephant was?

Also, isn't your justification for this to make it more intuitive for the pilot? Something about looking out the window, etc.?

It doesn't say exactly how it controls the dero, but if you already have a software controller for the derotation angle, then you might as well make it as intuitive to the pilots as possible, even if the benefit in doing so were marginal.

So to be clear, you are of the belief that Raytheon is going to make it intuitive to the pilots by deliberately mismatching the horizon and the artificial horizon? You can't see how that would be a problem? You are aware of cockpit confusion leading to accidents/incidents, right?

But back to your method of de-rotation: I wouldn't ask you to do something I haven't done, so here's the post where I evaluated your method, tested side by side with mine, with comparable examples for comparison.

https://www.metabunk.org/threads/some-refinements-to-the-gimbal-sim.12590/post-359223

A lot of people originally saw the glare as being static for the first 20 seconds. Maybe the truth lies in the middle, e.g. 6-7.
Yes, we saw the gimbal new analysis video, but 6-7 degrees of glare rotation??? Really?

Now, looking at Mick's Gimbal roll sim, there are 11.5 degrees of camera rotation over the first 20 seconds, due to banking and not pod roll. I couldn't find that angle reported anywhere, so I measured it; can anyone else confirm? The azimuth value is in the top right. So we are expecting 11.5 degrees of rotation due to bank compensating for pod roll, as opposed to 14 degrees if it rolled as normal.

Glare rotation = Camera Rotation

LBF's 6-7 degrees of glare rotation = 11.5 degrees of camera rotation?

But the difference is because Raytheon is mismatching horizons on purpose to make it more intuitive for pilots, right? Do the pilots trust their eyes, or trust what the plane is telling them? Because in this example the plane is deliberately lying to them about which horizon is the actual horizon, and what the plane is doing (bank angle) isn't with respect to the real horizon; it's the horizon from the front of the plane, so it's not reflective of any horizon in the video footage.

If you can clear that up, I'd appreciate it.

Screenshot (4015).png

Screenshot (4016).png

Screenshot (4017).png

One last thing: I did say to you on Twitter a couple of years ago that we should all meet up for dinner sometime. The offer still stands, and I do think we would have a great night, LBF. Let me know if you are up for it.

[edited to include invitation for dinner with lbf]
 
Last edited:
We see no hint of the progressive clockwise rotation in the first 20 seconds that would follow dero.
To illustrate this: if the object were truly a glare fixed in the camera frame, here is how we would expect it to rotate over the first 23 seconds, before the step rotation.
expected_rotation_with_moving_average.png

The frame of reference is the leveled video (up is up). Negative values are CCW rotation, positive CW. The initial glare angle is set to 0, so this shows the expected variation in glare angle.

I took the values of bank and cloud angle I have and calculated the mismatch; that's the amount of progressive CW rotation we expect for the glare, due to dero rotating both the clouds and the object CW.

Then I take the frustum roll values calculated by @Zaine M.; this is the rotation of the frame edges in the leveled video (mostly CCW). It essentially reflects the change in bank, with a small contribution from pitch tilt that decreases with Az.

I combine the two; this is what we would expect if the object were a glare fixed in the camera frame. It should move in steps CCW with the frame edges, while being progressively rotated CW by dero.

These changes are not minor (up to 4 degrees both ways), so we should see them. We don't see this at all.

Note: these angles could be overlaid with the object in the leveled video, to illustrate the problem further. Maybe later.
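The combination described above can be sketched as simple per-frame bookkeeping (illustrative numbers only; the function name and sign convention are mine, and the real bank/cloud/frustum series come from LBF's and Zaine's measurements):

```python
# Hedged sketch of the combination: the bank/cloud mismatch is the progressive
# CW rotation applied by dero, and the frustum roll carries the stepwise CCW
# motion of the frame edges. Convention assumed: positive = CW, negative = CCW,
# initial glare angle = 0.

def expected_glare_angles(bank, cloud, frustum_roll):
    """Expected glare angle per frame if the object were fixed in the camera frame."""
    return [(b - c) + f for b, c, f in zip(bank, cloud, frustum_roll)]

# Illustrative numbers only, not the measured series:
bank  = [30.0, 31.0, 33.0]
cloud = [30.0, 29.5, 30.5]   # clouds progressively realign with the artificial horizon
roll  = [ 0.0, -1.0, -3.0]   # stepwise CCW frame-edge (frustum) rotation
print(expected_glare_angles(bank, cloud, roll))  # -> [0.0, 0.5, -0.5]
```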
 
And how far away do you think those clouds are? In Sitrec I have them starting about 70 miles away, with the cloud horizon (the blue line) calculated at about 120 miles.

I am only getting them at a max of 65 NM, using @TheCholla's initial 305° heading for the F-18 (that's using the raw figures, not smoothed).

Screenshot (4040).png

But looking around, I like this one also; the clouds are only 40 NM away, but the figures are interesting.

Screenshot (4044).png
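As a rough sanity check on those distances (my own back-of-envelope geometry, not Sitrec's actual computation): on a spherical Earth, a line of sight from the aircraft grazes a cloud layer at a tangent distance of about sqrt(2·R·Δh). The 25,000 ft aircraft altitude is from the video display; the cloud-top altitude below is an assumption, not a measurement.

```python
import math

# Back-of-envelope cloud-horizon geometry (a sketch, not Sitrec's code).
# Line of sight from aircraft altitude h_ac grazing a cloud layer at h_cloud:
# tangent distance ~= sqrt(2 * R * (h_ac - h_cloud)), ignoring refraction.

R_FT = 3959 * 5280  # mean Earth radius in feet (statute miles * ft/mile)

def cloud_horizon_miles(h_ac_ft: float, h_cloud_ft: float) -> float:
    """Approximate distance (statute miles) to where the cloud tops meet the horizon."""
    return math.sqrt(2 * R_FT * (h_ac_ft - h_cloud_ft)) / 5280

# Aircraft at 25,000 ft (from the video); cloud-top altitude is an assumption.
# Tops around 15,000 ft put the cloud horizon near 120 statute miles.
print(round(cloud_horizon_miles(25000, 15000)))  # -> 122
```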

[edit - context]
 
This section about the 1st observable (at least) should be amended.


Source: https://youtu.be/qsEjV8DdSbs?si=r-6vLdzCDfbK3ezW&t=875


"Instead, it's fixed over the 20 seconds" -> False; hence the refinement to the sim

"The horizons, both the artificial horizon and the real one indicated by the clouds, rotate 12° in 3 steps, over the first 20 seconds"
-> False; the clouds rotate much more, and about half of the cloud rotation is progressive realignment with the artificial horizon, not bank.

• "This is probably the most significant of the 4 observables" -> then it's a major red flag for the rotating glare theory

And the "refinement" to the Gimbal sim should also be amended, unless you can show any sign of dero in the object's attitude. It should wobble in the leveled video (ironic, given what Graves said happens after the cut) if dero were responsible for the cloud realignment.

I have no dog in this race; it's just what I think is an objective statement of the facts, given what we see in the video, and also LBF's and Zaine's angle measurements. Can the rotating glare theory survive without its head (the 1st observable)?
 
"This is probably the most significant of the 4 observables" -> then it's a major red flag for the rotating glare theory
I think it was put well, just without any supporting evidence; it obviously rotated. But this line resonated with me:

"if this was a real saucer shaped craft, then it should also rotate with the horizon"

For the record, Mick, I advocated that you should join the lads in an announcement that it's not glare, but a real craft.

"Due to recent developments, the weight of evidence has swayed my position to endorse Ryan Graves et al.'s recollection of the encounter. It doesn't mean it's "aliens", just that some 15-foot "drone" is pumping out enough IR energy to completely overwhelm the camera, to the point that it generates light patterns that rotate when the craft does (hot knife against the wall, as an example)."

Then Mick and Marik can do interviews or whatever you guys want.

[edit- remove Cholla from interviews, more media time for the other two.]
 
"if this was a real saucer shaped craft, then it should also rotate with the horizon"

This part is true: it does not follow the artificial horizon as you would expect from a well-defined object outline. I'm not saying the 1st observable came out of nowhere; there is a real aspect to it. But there are also contradictions, because the object does not behave like a glare fixed in the frame.

As for continuing to say it's 99% a rotating glare: I'm not sure where that comes from, but it's certainly not based on the 1st observable, because the object does not behave like a fixed-in-the-frame artifact 99% of the time.

But the pod roll reconstruction in the sim is based on 1° of dero = 1° of glare rotation ("derotation = glare angle"), so the premise is that the glare exactly follows roll/dero, or dero alone (the refinement), and that it stays fixed when there is no roll/dero.
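That premise can be stated in a couple of lines (my naming and sign convention, not the sim's source):

```python
# Sketch of the "derotation = glare angle" premise: under the rotating-glare
# model, the glare angle tracks the cumulative dero 1:1, and stays fixed
# whenever no dero is applied.

def glare_angle_track(dero_steps, initial=0.0):
    """Glare angle after each dero step, under the 1 deg dero = 1 deg glare premise."""
    angles, a = [], initial
    for step in dero_steps:
        a += step          # glare rotates exactly with the applied dero
        angles.append(a)   # zero-dero steps leave the glare fixed
    return angles

print(glare_angle_track([0.0, 0.0, 5.0, 0.0, 3.0]))  # -> [0.0, 0.0, 5.0, 5.0, 8.0]
```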
 