Some Refinements to the Gimbal Sim

Assuming the distant-plane idea, how are the clouds going to rotate, instead of just passing through level or at an angle?
This sentence makes no sense in isolation; you are going to confuse people.

The hypothesis for why the clouds rotate slightly at an angle that is different to the artificial horizon is explained in the very first post of this thread, years ago.

So what are you asking?
 
The hypothesis for why the clouds rotate slightly at an angle that is different to the artificial horizon is explained in the very first post of this thread, years ago.
Yes, it's a hypothesis.

The short version is that by measuring the direction the clouds moved in (parallel to the horizon), it was clear that this was at a shallower angle than the artificial horizon, most noticeably when looking left.
And that's why I asked how you were able to determine that the clouds' angle mismatch with the plane's artificial horizon was due to camera rotation and NOT a function of elevation. The simplest explanation is that when we track the motion of the clouds, we still get what looks like a pod elevation change.

(As an update: it's repeatable.)

panorama1_full (2) - Copy.jpg


and with the FOV overlay

panorama1_full_borders (1) - Copy.jpg


I am not just saying words, or "I think"; I built a side-by-side comparison tool comparing both de-rotation methods.

Screenshot (4156) - Copy.png

The user can track each background feature for stitching purposes, and it even gives the pixel displacement in FOV terms: degrees relative to the inputted FOV settings (0.35° × 0.35°).

Screenshot (4135) - Copy.png


Time is the time code, angle is the angle of motion, d_v is the degrees of vertical motion, and d_h is the degrees of horizontal motion.

To be clear,
1. The original release of the Gimbal video shows that there is rotation (it is not static) for the first 20 seconds, which would indicate that it is a real object and not glare.
2. There are now two competing methods to de-rotate the footage, to place it in the global up-is-up orientation:
a. remove the plane's bank and camera tilt;
b. use the Metabunk formula by LBF, which hasn't been demonstrated to de-rotate any other footage successfully, nor backed by a statement saying "This is how we KNOW it's not a camera elevation change, but is actually computer-generated artificial rotation."
3. There is some rotation in both methods, though not enough to match the 11.5 to 14 degrees required for it to be glare. Yet Sitrec is using the formula. Why?
 
Is this an automatic detection algorithm?

Unfortunately I wasn't able to get OpenCV to work correctly.


Source: https://www.youtube.com/watch?v=SSBAbrpCX-M


Here I briefly show (as it's easier to demonstrate how it works than to write a novel) that I did work with OpenCV but have gone with user-generated feature tracking, and how that is implemented:

1. click a feature,
2. click that feature a second time to determine its angle and new position at a later frame,
3. showing that it generates individual pairs of frames overlaid,
4. showing, when completed, that it provides two copies of the panorama: one that is stitched, another that is also stitched on the features with a field-of-view overlay,
5. showing that it exports the angles, the horizontal/vertical displacement of the pixels (in degrees; the Excel file has FOV settings to define those values, as sketched below), as well as the time stamps for them.
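
For illustration, here is a minimal sketch of the pixel-to-degrees conversion described in step 5, assuming a square frame and the 0.35° × 0.35° FOV mentioned above (the frame size and function name are placeholders, not the tool's actual code):

Python:
import math

FOV_DEG = 0.35    # assumed horizontal and vertical field of view, in degrees
FRAME_PX = 480    # assumed (square) frame size in pixels; a placeholder value

def feature_motion(x1, y1, x2, y2):
    """Angle and per-axis displacement, in degrees, between two clicks on a feature."""
    deg_per_px = FOV_DEG / FRAME_PX
    d_h = (x2 - x1) * deg_per_px                 # horizontal motion, degrees
    d_v = (y1 - y2) * deg_per_px                 # vertical motion, degrees (pixel y grows downward)
    angle = math.degrees(math.atan2(d_v, d_h))   # angle of motion relative to horizontal
    return angle, d_v, d_h

# e.g. a feature that moved 40 px right and 3 px up between the two clicks:
print(feature_motion(100, 200, 140, 197))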
 
I know there have been many discussions in the different Gimbal threads about the cloud-sky line, what the faint horizon line we discern above the clouds is (some even proposed the ocean), etc.

But I wonder if it has been considered that we do not see the cloud-sky line, but only the visible horizon, up to where the ATFLIR can resolve cloud features.

In other words, there may be more clouds beyond the clouds, which just fade out and become indiscernible. That of course changes everything about judging changes in elevation angle from the apparent horizon or cloud-sky line.

This does not affect your method and estimate because it is based on cloud motion angle.
 
But I wonder if it has been considered that we do not see the cloud-sky line, but only the visible horizon, up to where the ATFLIR can resolve cloud features.
If that wasn't considered as an option I am not surprised, as the angle of the clouds being a function of pod elevation change doesn't appear to have been considered in this instance either.

But what is interesting is that, when we use the 305-degree initial heading from the Gimbal paper, I am resolving the clouds to be only 3 NM further away than the distant plane is alleged to be (32 NM). Watching the bottom-right intercept-angle box, though, we can see that the red dot for motion slows down, which suggests the distant plane is either changing its heading or slowing down to stall speed. And factoring in that a distant plane would need to reduce its altitude by some 1,200 feet before starting to climb again isn't making sense to me.

With LBF now apparently unavailable to defend their claim of whole-image rotation, but mirror-like de-rotation at the major orientation changes (which makes no sense to me), I have a rough draft explainer and am happy to take feedback on it. I don't mention LBF by name because I honestly wish nothing but the best for them.

There are some placeholders in there, and the script needs updating to make it clearer for folks.

The TL;DR is: there was rotation in the first 20 seconds; I tested both de-rotation methods to place the footage in the global up-is-up orientation; and it's a real object, punctuated at the end with the crew that night saying "It's rotating."


Source: https://www.youtube.com/watch?v=Bj5fz8irCuk
 
@Zaine M. have you used your tool to stitch GoFast and could we see the end result?

Along with the numbers it retrieves in terms of El change (which we can compare to the video).
 
I have done a quick stitch, primarily to track the angle of the background motion that both methods produce; the Metabunk formula kept it pretty close to one direction.

But I will do a more comprehensive panorama later and post up the results.

What I am struggling to come to terms with is this: despite continual investigation leading to further data and clarity about UAP videos (taking note of this from another thread),


In a delightful surprise, I found LA330 in the video!

It came up on Twitter, with Marik and @TheCholla having some questions about the angles of the contrails and suchlike (things that were probably answered back in early 2017). It occurred to me I never put LA330's KML into Sitrec, so I did.

why does there appear to be reluctance in accepting, or in the alternative rebutting with the formula, the correct method of de-rotating the footage, and the results from that?
 
These changes are not minor, up to 4 degrees both ways, so we should see them.
You might personally not see steps in those stabilized videos, but the algorithm does. The following compares jet roll, glare angle, Zaine's total rotation numbers, and pod horizon in the current model. The graph shows the change in these values relative to the first frame.
Python:
# added after getting the data in https://github.com/0x1beef/uap_nb/blob/main/src/gimbal_adjust_clouds.ipynb
import numpy as np
from scipy.spatial.transform import Rotation
import math

def normalize(v):
    return v / np.linalg.norm(v)

# rotate a vector by a certain angle around an axis.
def rotate(vec, axis, angle_degrees):
    angle = math.radians(angle_degrees)
    rot = Rotation.from_rotvec(angle * normalize(axis))
    return rot.apply(vec)

# get the angle between vectors 'a' and 'b'.
# the sign is relative to the vector 'c' which is orthogonal to 'a' and 'b'.
def signed_angle(a, b, c):
    # using https://stackoverflow.com/a/33920320
    return math.degrees(math.atan2(np.dot(np.cross(a, b), c), np.dot(a, b)))

def apply_jet_roll_pitch(vec, jet_roll, jet_pitch):
    vec = rotate(vec, [0, 0, 1], -jet_roll)
    vec = rotate(vec, [1, 0, 0], jet_pitch)
    return vec

# get the angle of the horizon in the pod's eye view without dero
def get_pod_horizon(jet_roll, jet_pitch, pod_pitch, pod_roll):
    # a vector point forward along the jet's boreline:
    jet_forward = apply_jet_roll_pitch([0, 0, -1], jet_roll, jet_pitch)
    # a vector pointing right in the jet's wing plane:
    jet_right = apply_jet_roll_pitch([1, 0, 0], jet_roll, jet_pitch)
    # the pod's horizon: a vector initially pointing towards jet right, rotated by pod roll
    pod_right = rotate(jet_right, -jet_forward, -pod_roll)
    # a vector pointing at the target, rotated according to pod roll/pitch:
    pod_forward = rotate(jet_forward, pod_right, -pod_pitch)
    # pod_forward, the global 'az' viewing angle, and a vector pointing up are coplanar.
    # the global horizon is a vector pointing right, orthogonal to that plane.
    right = np.cross(pod_forward, [0, 1, 0])
    # a signed angle between the global horizon and the pod horizon:
    return signed_angle(right, pod_right, pod_forward)

import matplotlib.pyplot as plt
def plot_diff(series, title):
  abs(series[0:650] - series[0]).plot(label = title)

plot_diff(object_data.jet_roll, 'jet roll')

df = common.gimbal_fix_wh_to_bh(object_data, ['glare_angle'])
plot_diff(df.glare_angle, 'glare angle')

# frustum_roll_df is a dataframe containing Zaine's spreadsheet data
plot_diff(frustum_roll_df['Total rotation'], 'total rotation with "frustum roll"')

def pod_horizon_for_frame(d):
  return get_pod_horizon(d.jet_roll, d.jet_pitch, d.pod_pitch, d.pod_roll_glare)
plot_diff(df.apply(pod_horizon_for_frame, axis=1), 'pod horizon')

plt.legend()
plt.ylabel('degrees')
plt.xlabel('frames')
plt.plot()
1767309556883.png

The glare angle data doesn't have large steps besides perhaps one at the start. All the other quantities have multiple large steps. So when they're combined to produce a stabilized video of any kind, the resulting glare angle should change in steps. If you don't see those glare angle steps in your stabilized video, then this data suggests there might be an issue with either your perception or the video you've produced. I would stay away from stabilized videos, which require interpolation and might involve some filtering. Who knows whether those might introduce some illusions.

Then I take the frustum roll values calculated by @Zaine M.; this is the rotation of the frame edges in the leveled video (mostly CCW). It essentially reflects the change in bank, but with a small contribution of pitch tilt that decreases with Az.
You include a different set of assumptions than what the model in Sitrec is making. Those assumptions were advertised from the beginning as producing different results, and there's no consensus with regards to them being correct. So technically you're not arguing against the model in Sitrec. You're providing your own different model and testing its predictions. So it needs to be highlighted that your results might just be showing that your model is wrong while Sitrec could be right.
I would like to see the frustum roll formula expressed as a series of 3D vector operations instead of trig. For example, take 3D vectors representing the edges of the imager (FPA), rotate them around the right axes to account for jet pitch/roll and pod pitch/roll, generate a vector that represents the horizon in the distance, project it onto the imager, then calculate the angle between the edge of the imager and that projected horizon. That would involve operations that can be done with library functions instead of doing the math by hand. I find it much easier to see what a formula is actually doing, and to verify its correctness, that way. It would not be the first time in this thread that some formula involving simple trig is claimed to do X, and then that doesn't turn out to be the case mathematically in general 3D.
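
To illustrate, here is a minimal sketch of that recipe (an illustration of the suggested approach, not Zaine's formula). It reuses rotate(), normalize(), signed_angle() and apply_jet_roll_pitch() from the code block above, assumes the same axis conventions with global up = [0, 1, 0], and mirrors get_pod_horizon() while making the projection step explicit:

Python:
def horizon_angle_on_imager(jet_roll, jet_pitch, pod_pitch, pod_roll):
    # the boresight and the imager's horizontal edge direction, after jet and pod rotations:
    jet_forward = apply_jet_roll_pitch([0, 0, -1], jet_roll, jet_pitch)
    jet_right = apply_jet_roll_pitch([1, 0, 0], jet_roll, jet_pitch)
    imager_right = rotate(jet_right, -jet_forward, -pod_roll)
    boresight = rotate(jet_forward, imager_right, -pod_pitch)
    # a vector along the distant horizon as seen from the boresight direction:
    horizon = np.cross(boresight, [0, 1, 0])
    # project the horizon onto the image plane (here it is already orthogonal to the
    # boresight, so this is a no-op, but the explicit projection generalizes to any edge vector):
    b = normalize(boresight)
    horizon_proj = horizon - np.dot(horizon, b) * b
    # the signed angle between the imager's horizontal edge and the projected horizon:
    return signed_angle(imager_right, normalize(horizon_proj), boresight)

Written this way, each step is a single library call whose correctness can be checked in isolation.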

with a jab at me for "making some contributions without meeting that high bar",
The intent was the opposite of a jab. I was actually trying not to be too discouraging with my remarks. I said some contributions can still be useful even if they don't always meet that high bar, and I meant in terms of transparency and reproducibility. Of course transparency and reproducibility do not, on their own, prove that something is correct, but they help others quickly review your argument, which is particularly important when people don't have a lot of time to spend on this. You asked what more you can do to show what you mean. I just answered the question.
For example there's a video of yours that has been repeatedly cited recently and implausible claims have been made based on it. I'm not even sure what you've rotated the video to. At first I thought it was the artificial horizon but then I saw that still rotates a bit. You compare rotating vs not rotating. I'm not sure what those refer to exactly. The video just doesn't convey much information to me. You could elaborate, but I'd probably still be left wondering what you really mean and whether some mistakes might've been made during the creation of the video. It was repeatedly pointed out to you that you're not actually using the function I wrote, the one that's used in Sitrec, and yet you're still claiming the contrary, so without reproducing everything from scratch it's impossible to trust anything here, even seemingly simple things. That leaves me with two options. Either I wait for some other people to take a closer look and reproduce it, or I wait until I find the time and inclination to guess what you might've done exactly and try to write the code to reproduce it myself.
If instead you could provide source code that generates the video then there would be no ambiguity and I could more quickly check it myself and tinker with it to see what's really going on. Ideally there should be no manual work involved in reproducing the analysis. For example I posted the code to generate all of the videos in this thread. I know it takes some effort to learn how to do so, but it really pays off in the long run. AI can help you with that code as well these days.
 
The intent was the opposite of a jab. I was actually trying not to be too discouraging with my remarks.
I do appreciate your input, LBF; I am acutely aware that there is a real person on the other side of the keyboard. Thank you for coming back into the conversation.

If instead you could provide source code that generates the video then there would be no ambiguity and I could more quickly check it myself and tinker with it to see what's really going on.
I had sent the code through to @Mick West for review,

Screenshot (4179).png


I am not using Sin Cos, but humanhorizon; the code, though not the formula, has been updated since.

I provided a screen capture of what it does and how it works, plus the flow-on effects, using triangulation to range the clouds.
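
For reference, here is a minimal sketch of what ranging by triangulation involves, assuming two known jet positions and two lines of sight to the same cloud feature (a flat 2D approximation; the positions and bearings are illustrative only, not values from the video):

Python:
import numpy as np

def triangulate(p1, bearing1_deg, p2, bearing2_deg):
    """Intersect two lines of sight; positions in NM, bearings in degrees from north."""
    d1 = np.array([np.sin(np.radians(bearing1_deg)), np.cos(np.radians(bearing1_deg))])
    d2 = np.array([np.sin(np.radians(bearing2_deg)), np.cos(np.radians(bearing2_deg))])
    # solve p1 + t1*d1 = p2 + t2*d2 for the distances t1, t2 along each sightline:
    t = np.linalg.solve(np.column_stack([d1, -d2]), np.array(p2) - np.array(p1))
    return np.array(p1) + t[0] * d1   # the intersection point, in NM

# e.g. two sightlines from positions 5 NM apart, converging at (2.5, 2.5):
print(triangulate([0, 0], 45, [5, 0], 315))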


Source: https://www.youtube.com/watch?v=SSBAbrpCX-M

[Edit - Context]
 
You include a different set of assumptions than what the model in Sitrec is making. Those assumptions were advertised from the beginning as producing different results, and there's no consensus with regards to them being correct. So technically you're not arguing against the model in Sitrec. You're providing your own different model and testing its predictions. So it needs to be highlighted that your results might just be showing that your model is wrong while Sitrec could be right.

The assumptions are below.

This is a direct test for the glare hypothesis, based on two simple things (see the sketch after these two points):
- glare should rotate with frame edges in a leveled video (because glare is said to be fixed in the original video)
- the whole image is derotated by the derotation mechanism (including glare)
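
A minimal sketch of what the first of these points predicts, assuming per-frame values for the rotation of the frame edges in the leveled video (the numbers here are illustrative only):

Python:
import numpy as np

# rotation of the frame edges in the leveled video, per frame (illustrative values):
frame_roll_deg = np.array([0.0, -0.5, -1.2, -2.0])

# if the glare is fixed in the original frame, leveling rotates it with the edges:
predicted_glare_angle = frame_roll_deg - frame_roll_deg[0]

# a real, non-rotating object keeps a constant angle in the leveled video:
predicted_object_angle = np.zeros_like(frame_roll_deg)

# the test: compare the measured object angle in the leveled video to these two.
print(predicted_glare_angle, predicted_object_angle)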

You overcomplicate things, but there is nothing complicated about the glare theory, or your refinement to it.

The glare should be an artifact fixed in the frame (1° dero or bank w/o roll = 1° rotation, or non-rotation vs the background), and if the clouds realign because of dero, in the absence of pod roll, the object should follow exactly. Again, glare 101.

A better argument than drowning us in 3D vectors would be to say that 1° dero or bank w/o roll isn't always 1° rotation of the glare (or non-rotation vs background).
Then explain why and show some examples, maybe using Mick's glare examples (like the sun through the Ziploc bag) to show that it doesn't always follow dero, or that it sometimes rotates with bank.

Because you're asking us to accept a theory (the "refinement") for which evidence cannot be seen with our eyes, and which is also not measured by your algorithms.

"You can't see it, I can't measure it, but believe me it's there". Based on your intuition of what dero should do for the pilots. That doesn't cut it.

Especially with @Zaine M. showing that, at the first test, your hypothetical dero algorithm fails to reproduce the real deal, i.e. what is observed, for the best test possible: GoFast (same pod, same flight).
 
If you don't see those glare angle steps in your stabilized video, then this data suggests there might be an issue with either your perception or the video you've produced.
If glare, and dero explains clouds misalignment with artificial horizon, your glare angle curve (orange) should show a progressive CW rotation of about 8°, like the clouds. And not a 4° step during the 1st bank, followed by what looks like 1° rotation.
1767330551576.png


This has nothing to do with stabilizing the video; we see the problem in the original video, it's just easier to see when leveling it up. And your angle measurement confirms it.
 
I wait for some other people to take a closer look and reproduce it
That's where I'm at with this. The old Gimbal code is too crufty to do much more with, so I'd have to re-implement it as a modern custom sitch. This is something I'll do eventually, but it's not a high priority, as I think the glare case is perfectly sound for several reasons, and this quibble over the early angle isn't likely to be consequential. All the claims being made are very unclear, and I'm not willing to waste days of my time on something so specific that nobody actually fully understands.

I am working on fun stuff that will eventually be helpful, like built-in motion tracking, and more general input tracks, graphs, and editors. But it all takes time. I prefer to work on making Sitrec a useful tool rather than waste time tweaking old code that won't get used again.

Work in progress: OpenCV.js integrated into Sitrec.
2026-01-01_22-43-58.jpg
 
this quibble over the early angle isn't likely to be consequential
It's just your 1st observable, a pillar of the glare theory, on which you patched the dero refinement as a quick fix that doesn't hold water.

You don't have evidence-based arguments, just that it's a quibble. Understood, and good to know.
 
The quibble is actually what's expected from a slight decrease in El angle while closing on a close object. It completely flattens the close path after accounting for the angle in cloud motion, the observation at the origin of the quibble.

So maybe it's not a "quibble" but an important piece of the puzzle.
 
It's just your 1st observable, a pillar of the glare theory, on which you patched the dero refinement as a quick fix that doesn't hold water.

You don't have evidence-based arguments, just that it's a quibble. Understood, and good to know.
No, it's not. As I explained in the very first post, that observable remains. The clouds rotate with banking when the glare does not. That's it. The quibble is about the difference between the cloud motion angle (i.e. the horizon) and the artificial horizon.

Back in 2022:
And just to reiterate, this does not really change any of the observables. The initial slight continuous rotation does not change the fact that when the jet banks, the horizon rotates, but the object does not.
Those are two different things.
 
The quibble is about the difference between the cloud motion angle (i.e. the horizon) and the artificial horizon.
Which involves dero to explain it, dero that should rotate the entire image, i.e. the glare too (slow realignment with the clouds).

It's two very related things (what a glare should do).

The object rotates with the clouds during the 1st bank (see LBF's graph above), so what, it's sometimes an observable, sometimes not?
 
Dero being out of the equation points to a close object (again!) to explain cloud misalignment.

This changes a lot for the case, including glare.
Also, this cannot explain the initial CW rotation of the object (measured, see LBF's graph), at odds with glare.
 
I should specify: at odds with a "rotating glare", because the object is diffuse and glare-like, yes.

But whether it rotates like a glare fixed in the camera frame is unclear, because it doesn't always behave like that (the 1st bank especially).
 
How does "glare, and dero explains clouds misalignment with artificial horizon"

I don't understand what this means. Why does glare explain it? Ignore the glare, there's a difference in angles. Glare does not explain this.
 
Who said glare explained cloud misalignment?
Sounds to me like you're feigning not to understand the problem to avoid addressing it.

What explains cloud misalignment according to you? It's the topic of this thread.
 
Last attempt: what are your arguments against these very simple claims/observations/facts that make up the "refinement to the sim"?

1. The mismatch between cloud and artificial horizon is due to the derotation mechanism doing a slow realignment of the image.

2. Derotation without roll rotates the whole image, so the object, glare or not, should show a progressive CW rotation of about 8°, like the clouds do, in the original video.

3. It can be neither seen nor measured with objective angle-measurement methods (LBF's and Zaine's measurements). Whether you look at the original or stabilized versions of the video doesn't matter; it's just easier to check in the leveled vid.

4. This means that dero cannot explain cloud realignment with the artificial horizon; the "refinement" is unsupported by evidence.

5. There is still a slight CW rotation of the object over the first 20 sec that cannot be explained by dero and is at odds with the glare hypothesis (the 1st observable). How do you explain it?

Very simple: you can say which point you disagree with and why, with evidence. This discussion/investigation should be evidence-based, no?
 
Who said glare explained cloud misalignment?
If glare, and dero explains clouds misalignment with artificial horizon, your glare angle curve (orange) should show a progressive CW rotation of about 8°, like the clouds.
Oh, it seems I misread you. "If glare", there, means: "if the shape and the rotation of the apparent object is a result of glare"?

So your question is: if dero explains cloud misalignment, then why isn't the glare continually rotating with the clouds (because dero presumably rotates the entire scene, clouds and glare together)?

You also say:
The object rotates with the clouds during the 1st bank (see LBF's graph above), so what, it's sometimes an observable, sometimes not?
That is really talking about the first observable. Anyone can see, scrubbing through the 8 s and 18 s banks, that the horizon rotates a lot more than the glare, which does not seem to rotate much, if at all. That would not happen with a real object. The 0 s glare rotation on the graph seems more like an artifact of the initial jittery state of the video, and of the changing shape of the glare with the increase in gain.

Regretfully, I've already wasted too much time on this. I'm certainly willing to revisit it down the road, but perhaps you could firm up and clarify your arguments for a more general audience. I know Marik's fans will accept anything you put out, but that's ultimately not who you need to be communicating with.
 
I know Marik's fans will accept anything you put out, but that's ultimately not who you need to be communicating with.

That's why @Zaine M. has submitted his recent work here; the goal is to get reviews and feedback from people who know this stuff.

For the following feedback:

- the glare rotating when it should not is an artifact of initial jitteriness in the vid (?)

- the dero explains the clouds' misalignment, even though this cannot be measured in the object's angle (the 8° progressive CW rotation with the clouds is nowhere to be seen)

This would not pass peer review, but OK, that's your answer.
 
My answer is that there are some unresolved issues, but the basic glare hypothesis seems sound on the weight of the evidence. I do not, as I have said too many times, want to waste time on another deep dive. But when you publish your paper, I might revisit it.
 
There is no need for a deep dive, it's all pretty straightforward.

The alternative hypothesis that's going to be presented is this, so if you object you may as well do it now, rather than months from now on X; it will save some time.

1. The initial cloud mismatch is not the derotation mechanism inducing additional dero for the pilot's comfort, because the evidence for this is lacking. Rather, it's a marker of a change in the camera's elevation angle, as seen in other footage.

2. Leveling up the footage after correcting for plane bank and pitch allows us to estimate El change, after stitching the image accordingly. The method works for GoFast, for which we know the El angle, and we can verify it.

3. This confirms the close path, as it corresponds to the F-18 getting closer to the object. It also makes the path flat, suppressing the rise in altitude that everyone here said was unphysical and then impossible. No need for pilot error, radar glitches, etc.

4. You'll say "but the clouds should rise in the FOV" if the El angle decreases, based on Sitrec.
To which there are two answers: 1/ Sitrec has an infinite and flat cloud layer; 2/ Sitrec sees/resolves cloud features to infinity, while both may not be true in reality. Sitrec does not capture the angle of motion of the clouds without forcing some additional dero, so it is unrealistic anyway; it's missing something.

Bottom line: the close path seen by the crew on radar is retrieved. If it is a rotating glare, it's not from a hot engine in the distance. And there are good arguments for it not being rotating glare, given the flight path, and also the object not always behaving like a rotating glare.
 
it's missing something.
Presumably, the same something you are missing. Your explanation seems to rely on some rather specific clouds that curve upwards when stitched. If I were you, I'd wait until you make a full 3D model of what you think is going on.

And in your model, why doesn't the "object" rotate during banks?

Are you planning a full model, or just pointing out a few uncertainties?
 
No, it could also be that we are not seeing the cloud-sky line, just the "visible" horizon with more clouds fading out in the distance. It's rather odd that the object, close object or distant plane, would happen to fly right where the cloud-sky line is, and stay there the whole time. Unless this isn't the cloud-sky line, just the limit where cloud features are resolved by the ATFLIR.

The angle of motion of the clouds points to a decrease in El angle; I can verify it in your sim too. Without it, the clouds have too much of an arching motion.

And yes, details matter for estimating pod elevation angle to the tenth of a degree; this is not a quibble.
 
And in your model, why doesn't the "object" rotate during banks?
I have already addressed this.

First, the object is diffuse and we don't see a clear outline. It's glare-like, but not a rotating glare fixed in the frame, or it would behave like one all the time (why not?).

Second, have you measured the clouds' angle, and do they exactly follow bank? It's all mixed up in the progressive realignment of the clouds, so when you speed it up it looks more compelling than it is (your "Gimbal: a new analysis" video).

Third, if it's a real object, there is also a change in perspective as it gets closer, and possibly a change in attitude too. But again, we don't see a clear outline and we progressively see more of it, so it's difficult to judge.
 
I have no great urgency to get back into it
This strikes me as odd, namely because your "Gimbal: a new analysis" video was solely directed at demonstrating glare; I believe the quote is

"Here I'm only going to make the case that what we are looking at is a rotating glare that hides the actual object" @ 2min mark of video

and it excluded the additional context of flight-path recreation, the audio, the wider context of military planes in a training range, etc.

"Just the rotating glare part of the puzzle"

This is in direct opposition to that position, so it is odd to me that you would be reactive, waiting for the paper to be released, instead of proactive: let's get this sorted out. (Just odd to me; I fully understand that you are busy, with a life outside of this plus all the other work you are doing on the topic.)

We have provided, fairly and objectively, side-by-side comparisons demonstrating the results, as well as acknowledging that the formula keeps the clouds more level than the bank-tilt method. But saying "I like the look of the clouds oriented like this" is more subjective than testing de-rotation on an appropriate/comparable video.

When we test for background motion using the formula, we would expect to see a non-linear progression, easiest to see via Bellingcat's stitch of the Mosul sphere.

actual v projected 04.jpg

For clarity: as the object is reported to be halfway between the plane and the water in GoFast, the plane going from fairly level, straight flight and then banking needs to have that banking reflected in the path on the ground, assuming a straight-line trajectory for the target.

Yet we get very little by way of curvature using the formula on GoFast.

Metabunk Go Fast Stitch - Copy.jpg


Using the bank-tilt method, we see a more pronounced curved path.

Bank tilt stitch Go Fast - Copy.jpg



To this,

why doesn't the "object" rotate during banks
When the footage is in the correct, up-is-up global orientation, we can more easily see the orientation changes.

Also, as per the ranging of the clouds, it is remarkable that at the point the object goes level, preliminary results from the lines of sight indicate a motion change on the part of the target.

Many thanks for your time and consideration

[edit - clarity]
 
When we test for background motion using the formula, we would expect to see a non-linear progression, easiest to see via Bellingcat's stitch of the Mosul sphere.

actual v projected 04.jpg
What does this have to do with anything? What can you easily see here?

I don't think even @TheCholla understands what you think you mean.
 
We wouldn't expect straight-line motion, like in the GoFast stitch using the formula.


For clarity: as the object is reported to be halfway between the plane and the water in GoFast, the plane going from fairly level, straight flight and then banking needs to have that banking reflected in the path on the ground, assuming a straight-line trajectory for the target.

Bellingcat: does the object follow the red line?

actual v projected  (1) - Copy.png


actual v projected 01 - Copy.png



1767401433893.jpg


Qualifying my remarks: with regards to background motion angle, the formula is restrictive compared to bank tilt for GoFast, roughly 7 degrees of variance for the formula versus roughly 19 degrees for bank tilt.

[edit- clarity]
 
Bellingcat: does the object follow the red line?
Sorry, I've got no idea what you are trying to say with the Bellingcat example. It's a video shot from an MQ-9 of a windblown balloon, with manual following, so the field of view moves around in various ways.
 
A plane in a bank, tracking either a stationary or slow-moving target, will follow a curved path, and as a result that curved path is reflected in the track along the ground; it wouldn't be a straight line.

Simplified example: assuming a plane at 25k feet, in a bank (changing heading), and a stationary object (halfway between the plane and the ground), the camera will see the ground trace a curve.

Screenshot (4186).png

For clarity, I asked Grok to concisely summarize the point I am making:

"as the plane banks and turns toward the target, the direction the camera is pointing (still locked on the target) starts swinging in an arc. Because the plane is curving its path through the air while closing in, the spot on the ground that the camera is "looking past" the target traces a curved path."

Is that more understandable?

[Edit- additional context]
 
I think what's important is to check if Sitrec has the right motion angle, because it includes the additional dero formula, for both GoFast and Gimbal. It's off in Gimbal (too slow, too much arching motion) and last time I checked it was also off in GoFast.

Given the sensitivity to small variations in angles, getting the background motion with this kind of method should not be about just loosely matching it, but very precisely matching it.
 