It strikes me that an aircraft on a track that is not straight, attempting to track and range a distant object which may or may not be on a straight-line path, is exactly the situation we find in the GIMBAL clip.
Yes. And if you assume the target is on fixed-speed straight track (aka traveling at constant velocity), then its range is ~30 nm.
Since we have more than 3 observations, we may even be able to confirm the assumption, within the limits of the available accuracy.
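To sketch why (in terms of the least-squares formulation that appears later in this thread): each sighting beyond the second adds three cross-product equations but no new unknowns, so the system becomes overdetermined and the leftover residual itself measures how well a constant-velocity path fits.
LaTeX:
\[\text{Unknowns: } d, V_x, V_y, V_z \; (4); \quad \text{equations: } 3(N-1) \text{ for } N \text{ sightings.} \\
\text{For } N \ge 3,\; 3(N-1) > 4, \text{ so the residual } \lVert A\hat{X}-B \rVert \text{ is a consistency check on the assumption.}\]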
For Gimbal, the assumptions for estimating the range are:
- a straight path at constant speed, in a context where the anomalous objects being tracked were changing direction (see the range fouler report and anything else about the context). A good way to miss the target.
- a known elevation angle, which we don't have beyond knowing it's between -1° and -3°. Small variations in El angle make a huge difference to the lines of sight at that distance.
Comparing this to passive-ranging that happens in targeting pods is a bit of a stretch.
The main evidence for the range not being from radar is that the WSO slews the pod and catches the object on the spot, when he could just have slaved the FLIR to the radar track. Plus, there are the aviators who assume it's from a passive algorithm.
Because the crew saw a formation of objects, it's possible the objects were observed on the SA (situational awareness) display, not their own radar, and the WSO decided to check one out, slewing the pod and being pleasantly surprised he caught it.
It is probably worth keeping in the back of our minds that "a whole fleet of them" might refer to a group that this UAP is part of, or to another group of targets altogether, which may or may not be the same sort of object.
This is also worth keeping in mind -- thanks for reminding me! It has always sounded to me (subjectively, I admit) like a young man having fun: excited that he pulled off a tricky bit of capturing, not somebody confronted with an unknown and possibly hostile something-or-other that seemed to be flying at impossible speeds.
If the GOFAST objects were birds, perhaps there were several of them flying in some sort of formation or migration. 1.55 metres is the wingspan of a large gull (such as the great black-backed gull, at 165 cm) or maybe, dare I say it, a pelican.
Air-to-air passive ranging is HARD! I worked alongside the passive ranging folks on a project in ~1985. I was the stabilization guy. My job was to make a gimbaled system point where it was told to point, and to stabilize it so that aircraft maneuvers, turbulence, etc. had a minimal impact on the data coming from the sensor. Pointing information came from 4 sources: 1) radar (usually pointing where the radar was pointed, but not always), 2) human input (joysticks or equivalent), 3) tracking error from the sensor (essentially the difference between the center of the field of view and the target), and 4) a designated search location and pattern provided by a human or another system on board the aircraft. Stabilization was provided by a number of methods, the primary being a really well-balanced platform system with low friction and gyroscopes in each rotational dimension (oh yeah, and a lot of good electronics). Part of our project was to develop passive ranging algorithms so that an active system wouldn't give away stealth, or tip off the target that it was being tracked.
As has been pointed out in previous posts, there are a number of approaches to this problem.
Air-to-ground ranging was theoretically easy if you knew where you were, knew where the aircraft was pointed, knew where the sensor was pointed relative to the aircraft, had accurate 3D mapping of the terrain and a computer to crunch all the data. In 1985, this was not practical for lots of reasons. For example, the inertial navigation systems (INS) in aircraft at that time were pretty good at telling you where the aircraft was at any given moment, but not quite so good at telling exactly the orientation of the aircraft. The airborne computers of the day were very basic, and the 3D mapping either didn't exist or wouldn't fit in all the airborne storage in the world at that time. All of these problems have been pretty much solved, with the exception of maybe the accurate orientation of the aircraft. The orientation problem gets eliminated if you have image recognition match 3D maps to the sensor images.
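As a toy illustration of that geometric idea (a minimal sketch under modern assumptions, not how any fielded system does it; terrain_height_m stands in for the 3D map lookup):
Python:
import numpy as np

# March along the line of sight until it drops below the terrain elevation.
def ground_range(pos, los, terrain_height_m, step=50.0, max_range=100e3):
    # pos: aircraft (east, north, up) in metres; los: unit LOS vector
    r = step
    while r < max_range:
        p = pos + r * los
        if p[2] <= terrain_height_m(p[0], p[1]):  # ray has hit the terrain
            return r  # slant range to the intersection
        r += step
    return None  # the line of sight never intersects the map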
The approach that we focused on was the "trig" approach, since our operating scenarios and available technology ruled out most others. We were tasked with air-to-air tracking and ranging at long distances (>> 30 NM). At these ranges, we were lucky to have the equivalent of 1 pixel. Trying to count pixels was a non-starter when mostly you had a noisy signal varying between 1 and 2 pixels; tracking could not do better than this until the target got closer. Also, looking at brightness and brightness variation was no help at these ranges, as noise overwhelmed variations due to the distance changing. What we were left with was LOS (line-of-sight) data, which, by the way, was quite good. What is never shown in any of these UAP videos is the sensor platform pointing data at any useful resolution. For obvious reasons, much of this information is confidential, and I don't know if it is even available to pilots. We could tell where we were pointed at any given point in time, to a highly accurate degree, relative to the aircraft. If the aircraft was flown in a modest flight profile, we could tell where we were pointed to a pretty high accuracy over time. Violent aircraft maneuvers would degrade this accuracy considerably due to INS issues. Our ace in the hole was that we also had excellent rate data on the target LOS. Since the sensor platform is inertially stabilized and tracking the target, any change in the LOS rate was due to aircraft rate change (minimal if not maneuvering) and target movement. This rate of movement, plus a preplanned aircraft flight profile and lots of trig, made the problem solvable under a set of assumptions.
In 1985, the math itself was almost too much to handle. Digital control systems were lucky to run at 20 Hz, and anything executed on a computer in this small amount of time had to be simple. At that time, airborne computer systems did not have the speed necessary to execute complex trig (and inverse trig) functions. The math was well understood, and I could do it on my hand calculator, eventually. The problem just got very long because of multiple coordinate transformations. Consider this: the platform I was working with had an inner gimbal set (AZ, EL) and an outer gimbal set (AZ, EL, ROLL), housed within a pod that also de-rolled the aircraft roll. Add in that the aircraft is also maneuvering, and the trig moves from a couple of sin/cos calculations to 2 to 3 pages of handwritten trig and inverse trig. Now do this on a simple computer (an Intel 8086 chip seemed fast compared to what we had to use) and there was no way we could do the calculations fast enough. We rewrote a lot of compiled code in machine language and used extensive lookup tables to speed things up, and at the end of the day the passive ranging experts said: "Some day we'll be able to do this."
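To give a feel for why the chain gets so long, here is a rough sketch of the kind of rotation composition involved; the gimbal ordering, axes and sign conventions here are my guesses for illustration, not the actual system's:
Python:
import numpy as np

def rot(axis, a):
    # elementary rotation matrix about the given body axis
    c, s = np.cos(a), np.sin(a)
    return {'x': np.array([[1, 0, 0], [0, c, -s], [0, s, c]]),
            'y': np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]]),
            'z': np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])}[axis]

def los_in_earth_frame(inner_az, inner_el, outer_az, outer_el, outer_roll,
                       pod_deroll, ac_yaw, ac_pitch, ac_roll):
    # boresight unit vector in the innermost gimbal frame
    b = np.array([1.0, 0.0, 0.0])
    # aircraft attitude, then pod de-roll, then outer and inner gimbal sets
    R = (rot('z', ac_yaw) @ rot('y', ac_pitch) @ rot('x', ac_roll) @
         rot('x', pod_deroll) @
         rot('z', outer_az) @ rot('y', outer_el) @ rot('x', outer_roll) @
         rot('z', inner_az) @ rot('y', inner_el))
    return R @ b

Each of those nine angles is time-varying, and in 1985 every sin/cos in that product had to come from lookup tables.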
In the past almost 40 years, technology has improved dramatically. The calculation burden should no longer be an issue. We have good GPS. Sensor resolution is much higher. Maybe we have improved INS and higher-resolution positioning on the sensor platform. We should be able to do passive ranging, under a set of assumptions. The assumptions are the killer to accurate passive ranging for the objects tracked in the videos under discussion. It boils down to the same assumptions pointed out numerous times in this and many other discussions on Metabunk. Is it a large target at a great distance, or a small target nearby? Is the target rapidly maneuvering, or is it on a steady flight path? The passive ranging makes assumptions about these factors, and if fed the wrong assumptions, the output will likely be misleading.
All this is to say, if the ATFLIR range is from passive ranging, and the target was truly an unknown, then I don't believe the range estimates to be at all accurate and certainly not reliable (it could be close by accident, but probably not by design).
I'm confused by how things separated by a mile can be "abreast". At least without it only appearing like they are abreast from one particular viewpoint.
The crew and objects being separated by a mile makes more sense but that's not how it comes across.
If these objects were four seabirds, then they could easily be separated by a mile or so, and all travelling in a similar direction for migration purposes (or some other reason). Did anyone see these objects visually, with their unaided eyeballs?
I would tend to doubt it, based on the Gimbal footage taken at around the same time on the same system. Shown here is the Gimbal target superimposed on the full moon:
I made it small intentionally; the moon as seen in the sky is 1/2 a degree across, about the size of an aspirin tablet at arm's length. Set your computer or phone at a distance where that image of the moon is about that apparent size, and the gimbal "UAP" is not going to be particularly visible! Now, if I am reading the numbers on the screen correctly, "Gimbal" is 2X zoom, "Go Fast" is only 1X. So move your screen to where the moon is about twice the diameter of an aspirin. To my eyes, the UFO is a speck, visible but not very easy to see (and the "Gimbal" UAP is bigger in the frame, and highlighted by the edge-enhancement glow to make it "pop" visually!) -- and if there is a mile of spacing between four of them during "Go Fast," seeing them all by naked eye is going to be tricky, as some of them are much further away.
This afternoon, I'll make up a pic similar to this one for "Go Fast," unless somebody beats me to it. So if you are reading this and want to correct any points I got wrong before I do that, you have a window of opportunity! ^_^
The important thing about observing a small object at night visually is its luminosity. The Gimbal object seems to be roughly the same size as Saturn in the sky, which can be seen quite easily on a dark night. A jet plane (without afterburners or lights) would be much less luminous, and a bird or balloon would be invisible.
GIMBAL definitely happened at night; I don't know about GOFAST.
I'd forgotten that, so yeah, brightness may trump visual size, if it was really bright. So this may not be as useful as I'd hoped. But since I made it, might as well post it... here:
With your monitor at the distance where the moon is about an aspirin tablet held at arm's length, that's how big GoFast would be. Unless it was pretty bright, I don't think you'd see it with the naked eye -- I certainly can't, anyway.
But those birds a mile apart would not be considered abreast.
It's hard to put a mile between anything and consider them to be side by side. ESPECIALLY if you can't identify what the things are, which is why they're UAPs.
Fairly certain no two people could illustrate the description and have the same picture. Personally, I can't even draw anything from it. Would love to see other people's attempts.
I've tried to look into the question of whether the range values can be trusted in the specific circumstances of GoFast IF some passive ranging method is used which assumes a constant target velocity and IF the object did indeed take the straight line / constant velocity path currently reconstructed in Sitrec.
I came up with a relatively simple ranging solution using some linear algebra and made some assumptions about the accuracy of their tracking. As mentioned above, this is a hard problem, but also a problem that people have been working on for over half a century. So if I can't get an accurate range, it's still possible I've missed something, or that they used something more sophisticated that gets a better result. But if I do get an accurate range, then it's more plausible that their implementation could also be accurate in this case.
LaTeX:
\[\text{At time $t_i$ we record the position of the camera, $C_i$,}\\
\text{and the LOS towards the target, the unit vector $L_i$.} \\
\text{If at time $t_i$ the position of the target is $P_i$} \\
\text{and the distance to the target at $t_0$ is $d$ then:} \\
P_0 = C_0 + d \, L_0 \: \text{(1)} \\
\text{If $V$ is the constant velocity vector then:} \\
P_i = P_0 + V \, (t_i-t_0) \: \text{(2)} \\
\text{Since $P_i$ must be along the line of sight $L_i$ from $C_i$:} \\
(P_i - C_i) \times L_i = 0 \: \text{(3)} \\
\text{Substituting (1),(2) into (3) and rearranging we get:} \\
d \, L_0 \times L_i + V \times L_i \, (t_i-t_0) = (C_i-C_0) \times L_i \\
\text{This is a system of linear equations we represent as:} \\
A X = B \, \text{where} \, X = \begin{bmatrix} d \\ V_x \\ V_y \\ V_z \end{bmatrix} \\
\text{We solve for $X$ using linear least squares.}\]
Python:
import numpy as np

# return the speed and distance to the target at t[0]
def constant_velocity_ranging(t, C, L):
    n = len(t)
    # We have 3*(n-1) linear equations and 4 variables.
    (eqs, vars) = (3*(n-1), 4)
    A = np.zeros(shape=[eqs, vars])
    B = np.zeros(shape=[eqs, 1])
    for i in range(1, n):
        eq = 3*(i-1)
        L0xLi = np.cross(L[0], L[i])
        dt = t[i] - t[0]
        # V x L = (Vy Lz - Vz Ly, Vz Lx - Vx Lz, Vx Ly - Vy Lx)
        A[eq:eq+3] = np.array([
            [ L0xLi[0],          0,  dt*L[i,2], -dt*L[i,1] ],
            [ L0xLi[1], -dt*L[i,2],          0,  dt*L[i,0] ],
            [ L0xLi[2],  dt*L[i,1], -dt*L[i,0],          0 ]
        ])
        B[eq:eq+3, 0] = np.cross(C[i] - C[0], L[i])
    ret = np.linalg.lstsq(A, B, rcond=None)
    X = ret[0]
    d = X[0,0]
    V = X[1:]
    v = np.linalg.norm(V)
    return (v, d)
If the lines of sight are perfectly accurate, then this method perfectly reproduces the distance between the jet's path and Sitrec's straight-line reconstruction. In reality the visual tracking of the object will be off by some fraction of a pixel, so I apply random Gaussian noise to the tracking angles corresponding to a given tracking error σ, the standard deviation of the tracking in pixels.
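For concreteness, here is one way such noise could be applied to the unit LOS vectors (an illustrative sketch, not the exact code in my notebook; px_fov_rad is the assumed angular size of one pixel):
Python:
import numpy as np

# Perturb each unit line-of-sight vector by Gaussian angular noise
# equivalent to sigma_px pixels of tracking error.
def perturb_los(L, sigma_px, px_fov_rad, rng=None):
    rng = rng or np.random.default_rng(0)
    sigma = sigma_px * px_fov_rad  # tracking error in radians
    noisy = []
    for v in L:
        # two unit vectors orthogonal to v span the perturbation plane
        a = np.array([1.0, 0, 0]) if abs(v[0]) < 0.9 else np.array([0, 1.0, 0])
        u1 = np.cross(v, a); u1 /= np.linalg.norm(u1)
        u2 = np.cross(v, u1)
        w = v + rng.normal(0, sigma) * u1 + rng.normal(0, sigma) * u2
        noisy.append(w / np.linalg.norm(w))
    return np.array(noisy)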
For the ATFLIR this tracking error is unknown and probably classified. But its manual says it uses a 14 bit sensor. The following article argues that tracking should be more precise on higher bit depth sensors, although in practice other sources of noise will dominate.
Article:
for the most typical case of 8 bit cameras, the minimum movement that is detected is δp = 1∕256 ≅ 0.004 pixel. For cameras of 10 or 12 bits, the detection limit is of the order of 1e−3 and 2e−4, respectively
...
Nevertheless ... experimental methods rarely reached absolute shift detection smaller than 0.01 pixel, which is far off the theoretical limits. Noise and inaccuracies in the sensor can degrade the signal and thus reduce the optimal accuracy in 1 or 2 orders of magnitude
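Extending the article's formula to the ATFLIR's 14-bit sensor would give, in theory:
LaTeX:
\[\delta p = \frac{1}{2^{14}} \approx 6.1\times10^{-5} \text{ pixel,}\]
though, as the article says, noise typically degrades practical accuracy by 1 to 2 orders of magnitude.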
A while back I also implemented a simple sub-pixel tracker for GoFast, and the results seemed to show a tracking error under half a pixel, and that's with the video quality severely degraded in many ways. So in the end I used σ=0.25 pixels for the results presented later.
With σ=0 it is enough to use just 3 consecutive frames, but as the tracking error grows, more data is needed to keep the range error reasonably low. So I include data from n frames in the system of equations the method above solves, with n−1 of those prior to the current frame, roughly as in the sliding-window sketch below. The results shown afterwards are for n=60 frames, with the "real range" from Sitrec's current reconstruction compared to the passive range estimate.
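Concretely, the per-frame estimate is a sliding window over the function above (a sketch; t, C, L are the full per-frame arrays of time, camera position and noisy LOS):
Python:
# Sliding-window range estimate: the current frame plus the n-1 prior frames.
n = 60
ranges = []
for i in range(n - 1, len(t)):
    v, d = constant_velocity_ranging(t[i-n+1:i+1], C[i-n+1:i+1], L[i-n+1:i+1])
    ranges.append(d)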
The first frame where the square tracking box reacts to and starts tracking the object is ~338, and the first frame where the range is displayed is 370, so for the first second after that fewer than 60 frames are used for the range estimate. But that's not the main issue, because the results are completely off until around frame 550-600. I think this might be due to the bank angle being too close to 0 in that range. As mentioned before, these methods require some acceleration of the ownship's velocity; without enough banking, the jet's turn rate is insufficient (for this particular tracking error, frame count and algorithm).
The results after frame 550 are still fairly noisy compared to the video, but I suspect a well implemented Kalman filter might smooth the results out enough there. If somehow it could get a roughly correct initial state, then a Kalman filter could also help with maintaining that state and ignoring the largest errors while the measurements are known to be unreliable due to the bank angle. But it's unclear how you'd get that correct initial state other than by pure chance. At least in my simulation I have to decrease σ to 1/1000 of a pixel before I start getting reasonable values during that first second.
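For what it's worth, a minimal 1-D constant-velocity Kalman filter over the noisy range outputs might look like the sketch below; the process/measurement variances q and r are made-up tuning values, purely for illustration:
Python:
import numpy as np

def kalman_smooth(ranges, dt=1/30, q=1e2, r=1e6):
    # state: [range, range-rate]; we only measure range
    x = np.array([ranges[0], 0.0])
    P = np.eye(2) * r                                 # initial covariance
    F = np.array([[1.0, dt], [0.0, 1.0]])             # constant-velocity model
    H = np.array([[1.0, 0.0]])
    Q = q * np.array([[dt**4/4, dt**3/2], [dt**3/2, dt**2]])
    out = []
    for z in ranges:
        x = F @ x; P = F @ P @ F.T + Q                # predict
        S = H @ P @ H.T + r                           # innovation covariance
        K = (P @ H.T) / S                             # Kalman gain
        x = x + (K * (z - H @ x)).ravel()             # update state
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return out

The smoothing only helps once the raw estimates are in the right ballpark, which is exactly the initial-state problem mentioned above.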
The full source code is available here with links at the top to open it with either Google Colab or Kaggle, both of which allow running and tweaking the code online (Kaggle just needs the internet access to be enabled in the side pane to be able to download Sitrec).
So it seems that if the range did come from a roughly correct passive range measurement of a constant-velocity object, then at least until frame 550-600 that's more likely to be a lucky coincidence, while after those frames it's more plausible that a well implemented range tracker could've yielded reliable results. AARO claimed high confidence in the object's altitude. Unless they made a mistake, the range either came from some onboard/offboard radar, or perhaps the latter part of the video provided a sufficiently accurate passive estimate.
Cool, this indeed seems to support that passive ranging is achievable even air-to-air, and that it may be considered unreliable by fighter pilots, who usually don't chase straight-line / constant-velocity objects. But if this is indeed an object drifting with the wind, then the range may be quite accurate.
I'm suspicious about AARO getting confirmation of accurate passive-ranging, because Kirkpatrick has said they were cross- and double-checking results with NASA, and NASA assumed the range was from laser-firing.
Can you try your method for Gimbal? Or explain why it would not work for it?
I'm kinda leaning towards: what is rude is taking someone's mistake, broadcasting it to the world as though it's not a mistake, then not owning up to the fact that you made a mistake in thinking it wasn't one.
One thing to note is that the 3-D reconstructions do not find a straight line at the range before ~frame 520. Before the plane banks, the path is curvy. I think this points to the range being dependent on plane bank, i.e. being from passive-ranging.
This is from Sitrec; I find something similar in my reconstruction. There is a significant change in the estimated path when the plane starts banking. There is also a hint of a similar change in path at the very end, when the plane increases bank.
I've explained before (and it is also explained on the maths site I cited) that a constant-velocity target cannot be ranged trigonometrically from a constant-velocity observer. Once the camera aircraft banks, it no longer moves straight, and only then can a valid range be derived.
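A compact way to see the degeneracy: if the observer's velocity W is also constant, then a whole family of target solutions produces identical lines of sight.
LaTeX:
\[\text{If } C_i = C_0 + W\,(t_i-t_0), \text{ then } P_i - C_i = d\,L_0 + (V-W)(t_i-t_0). \\
\text{For any } \lambda>0, \text{ the solution } d'=\lambda d,\; V'=W+\lambda(V-W) \text{ gives } \\
P'_i - C_i = \lambda\,(P_i - C_i), \text{ i.e. the same lines of sight, so } d \text{ is unobservable until } W \text{ changes.}\]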
There is a better match than this in your sim. I had tried to set the parameters to get as close as possible to the speed and angle of the background motion. I think it's a closer match than your setup above: metabunk.org/u/zE4IrM.html
The angle of motion is more accurate; the speed is hard to tell because judging the match comes down to eyeballing, but I think it's closer too. This is for a local wind of 225°, versus 270° in your setup.
The speed of the object is more around 90-100 kts in that config. Apart from not deviating too much from the speed range given in your YT video (20-40 kts), I don't see a clear justification for not trying to get the best fit in the sim and going with what the numbers say.
We don't know the exact wind speed, because we don't know the exact location/time of the event. There were areas where the wind was ~90-100 kts at 13k ft on Jan 24 2015 (I had posted the wind maps), though that starts to be quite far from shore.
And the range is likely approximate, as it seems to come from passive-ranging. So the speed could be 60 kts, or who knows what, somewhere else along the lines of sight.
All that can be said is "it could be a balloon in the wind", but that needs to be said after recognizing all the limitations and uncertainties in the data.
It's what has hurt the NASA and AARO analyses of GoFast, throwing numbers out there without taking into account all the unknowns. And even worse imo not trying to contact the crew for clarification.
Yeah, we won't be able to confirm this relatively simple math until we consult people's memories from at least a decade ago for verification. After all, we know the best form of evidence is eyewitness testimony from 15 years ago.
GoFast was in 2015. It's not a matter of having the crew confirm the math, but of asking why/how they locked onto the object, whether they recall the object(s) going with the wind, its approximate speed and behavior, what the link with Gimbal is, what the source of the RNG on screen is, whether they think it could be a balloon or a bird, and if not, why not, etc.
A lot of questions have been raised by people who have analyzed this video, and I don't see how talking with the crew would not be interesting. Even if the conclusion is that they don't recall anything, at least we'd know.
This would be no more than a matter of conjecture. The GOFAST object was several thousand feet below them, and they had no instruments that could inform them about the speed or direction of wind at that altitude, even if they could detect the wind at their own altitude.
So it would not be a matter of 'recalling' whether the object was going with the wind; instead it would be little more than a guess.
I'd agree, it would be interesting. But I doubt it would be particularly useful, given how much human memory distorts over time, and how often initial impressions are just wrong even before time goes to work on our memories.
After years of seeing their video(s) as the Poster Children of UFOlogy (and seeing their videos attributed to the Nimitz incident pilots, which must get really irritating for them after a while), and seeing all the speculation and analysis of what they experienced that day, their memories are going to be "refined" and "enhanced," and as we see when online UFO fans "enhance" videos and pictures in Photoshop, that doesn't make for better accuracy; it introduces additional faulty data and errors.
FWIW -- I think the object is most likely a balloon, but am also pretty comfortable with the "bird" hypothesis. Can't PROVE that, but if I had to bet, that's where my money would go. But IF the crew came forward today and said "Yeah, we thought it was a balloon," that would not count for much in my view, as there has been too much intervening time for them to decide they maybe NOW think it might have been a balloon and for that to distort their memories. The statement would not have much if any evidentiary value at this point.
But yeah, it would be interesting, I'd read it because why not? ^_^
What's interesting is that no range popping up in Gimbal means it cannot find a straight and steady path along the lines of sight (i.e. a plane just casually flying away).
it would only mean that if ATFLIR actually tried to do passive ranging there (and there's no maximum distance in effect), which we have by no means ascertained
That's why I was asking if @logicbear had tried his method on Gimbal. If he can approximate what the passive-ranging algorithm does for GoFast, what would make it unusable for Gimbal?
Probably the distance. You can see even in the GoFast plot that after frame 550 the error grows with distance. I generated a perfectly straight / constant velocity traversal for Gimbal from two points and ran the same passive ranging method. With σ=0, n = 3 it perfectly estimates the range, but with the more realistic σ=0.25, n = 60 it doesn't work and I get bogus results with small negative ranges. Even with σ=0.01 it can be off by up to 15 nm. Some of the ATFLIR modes the pilot selected could also affect whether the pod attempts to show a range or not, but it seems entirely possible that some passive ranging system could sometimes work at 3-4 nm range but never at 32 nm range, not even with a good turn rate and perfectly constant velocity target.
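Roughly what that synthetic test looks like, using the functions from earlier in the thread (all the geometry here is placeholder, for illustration only, not actual Gimbal data):
Python:
import numpy as np

NM = 1852.0
frames, fps = 900, 30
t = np.arange(frames) / fps

# Placeholder jet path: a gentle turn at an assumed ~25,000 ft
omega = np.deg2rad(1.0)                       # assumed ~1 deg/s turn rate
R = 120.0 / omega                             # turn radius for ~120 m/s ground speed
C = np.stack([R*np.sin(omega*t), R*(1 - np.cos(omega*t)),
              np.full_like(t, 7600.0)], axis=1)

# Placeholder target: perfectly straight, constant velocity, ~32 NM out
P0 = np.array([32*NM, 0.0, 6400.0])
V = np.array([-120.0, 30.0, 0.0])             # assumed constant velocity (m/s)
P = P0 + V * t[:, None]

L = P - C                                     # true lines of sight
L = L / np.linalg.norm(L, axis=1, keepdims=True)
Ln = perturb_los(L, sigma_px=0.25, px_fov_rad=np.deg2rad(0.35)/640)  # assumed pixel IFOV
v, d = constant_velocity_ranging(t[:60], C[:60], Ln[:60])
print(d / NM)  # estimated range in NM at t[0]; at 32 NM expect it to be way off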
I wouldn't be surprised if the error bars grow prodigiously with range, much like in trilateration/triangulation problems. That growth might be in some ways similar to the reciprocal relation around the hyperfocal distance when considering focus: eventually, the algorithm will be unable to reliably tell the difference between some finite distance and infinity. Where that threshold lies will of course depend on the setup.
While writing that, I was initially thinking "but trilateration is tan(π/2 - ε) related, and focusing is reciprocal related, so it can't be similar to both", but if you view the tan from the opposite direction, it drops out immediately:
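LaTeX:
\[\tan\!\left(\tfrac{\pi}{2}-\varepsilon\right)=\cot\varepsilon=\frac{1}{\varepsilon}-\frac{\varepsilon}{3}-\frac{\varepsilon^{3}}{45}-\cdots\]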
so the tan() is reciprocal plus irrelevant terms that go to zero.
Of course, it could be that neither relation applies, but I just feel it in my water that it ought to. The "reality" might well be the outcome of some Fourier analysis, but in that domain a Δx·ΔF(x) > const relation appears (just like the Heisenberg uncertainty principle), which is again trivially rearrangeable into a reciprocal form. If all three approaches point to the same kind of relationship, and they're all wrong, then I'm calling a mathematical conspiracy!