What are the timestamps of both screencaps? Maybe in the first snapshot the FLIR was in NAR, which may have magnified the object's appearance, whereas the second screen is not in narrow field but normal? Or different zoom levels, 2x vs. 4x. If the target ranging is accurate, the MQ-9 is tracking a <5m object at 22nm from the sensor. Given the poor footage quality and flickering, it doesn't strike me as too unusual. Another key point is that what we see seems to be a kind of glare. Kind of unusual from a cold object, but did the object really change size? Both boxes are 5m wide.
View attachment 88370
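For a rough sense of scale, here's a back-of-the-envelope sketch in Python; the FOV and frame-width figures are placeholder assumptions, not actual MTS-B specs:

```python
import math

size_m = 5.0                       # box width from the OSD
range_m = 22 * 1852.0              # 22 nautical miles in metres

ang_rad = size_m / range_m         # small-angle approximation
ang_deg = math.degrees(ang_rad)
print(f"angular size ~ {ang_deg:.4f} deg ({ang_rad * 1e3:.2f} mrad)")

# How many pixels that would span for a couple of assumed FOV settings
# (these FOV values are placeholders, not actual sensor specifications).
for label, fov_deg in [("wide, assumed 10 deg", 10.0), ("narrow, assumed 1 deg", 1.0)]:
    px = ang_deg / fov_deg * 640   # assumed 640-pixel-wide frame
    print(f"{label}: ~ {px:.1f} px across")
```

With those assumed numbers a 5m object is well under a pixel in a wide field and only a handful of pixels in a narrow one, so large apparent size changes from FOV/zoom switches alone wouldn't be surprising.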
This is what I was stating on X.
Looking at that exchange on X, you're talking past each other with Marik. You're saying there is no variation in the rate of change of the MGRS coordinates, which is also what he's saying in order to hypothesize that the zip-off is from the object.
@Mick West your example above is for different FOVs (ULTN vs 4X); it doesn't shrink between these two segments.
But to your point, the object does shrink at the end, and I haven't seen a good explanation as to why.
To expand on that:
So basically, it happens in the camera of the person recording it off the screen. For unknown reasons, but maybe from recording off a CRT.
Same reason the word "ACFT" shrinks, it's a localized change in gain that happened AFTER the actual recording.
I see what you're saying. But that dip in brightness happens at other times without that much change in the object. Still super bright here, for example. So I guess you're saying it's a broader region being affected at the end, but this is hard to tell.
But that dip in brightness happens at other times without that much change in the object.
It happens in other places; that one's just the greatest.
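One way to pin those dips down would be to plot the mean brightness of a fixed patch of the screen over the frames. A minimal sketch, assuming a local copy of the video (the filename and patch coordinates are placeholders):

```python
import cv2
import numpy as np
import matplotlib.pyplot as plt

VIDEO = "mq9_screen_recording.mp4"    # placeholder path to a local copy
Y0, Y1, X0, X1 = 200, 260, 300, 380   # placeholder patch; adjust to the region of interest

cap = cv2.VideoCapture(VIDEO)
means = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    means.append(gray[Y0:Y1, X0:X1].mean())   # average brightness of the patch

cap.release()

plt.plot(means)
plt.xlabel("frame")
plt.ylabel("mean patch brightness")
plt.title("Fixed patch brightness vs frame (dips = localized gain/contrast changes?)")
plt.show()
```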
If it is a recording from a CRT, is it possible the recording camera shutter (even if at the correct exposure time) is not sync-locked to the CRT scan, so as to cause a phasing effect? i.e. shutter speed drift/CRT line scan drift. Also, CRTs have some inherent brightness persistence, ~1 ms to 6 ms typically.
I'm not sure this is what is happening here, as normally this would cause flickering, but clearly SOMETHING is making some regions of the screen darker, and that seems to be what is making the object smaller.
Do you notice a lag between camera motion and the change in MGRS coordinates? Like one frame or so? Or is it instantaneous?
It seems to lag by a very small amount sometimes, not the 15 frames of the fly-off.
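For anyone who wants to put a number on that lag, cross-correlating a per-frame camera-motion signal against the per-frame change in the extracted coordinates would give a frame offset. A rough sketch, assuming the OSD values have already been exported to a CSV (the file and column names here are made up):

```python
import cv2
import numpy as np
import pandas as pd

VIDEO = "mq9_screen_recording.mp4"     # placeholder path to a local copy
osd = pd.read_csv("osd_extract.csv")   # assumed columns: "easting", "northing" per frame

# Per-frame camera motion: magnitude of the global shift between consecutive frames.
cap = cv2.VideoCapture(VIDEO)
prev, motion = None, []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    if prev is not None:
        (dx, dy), _ = cv2.phaseCorrelate(prev, gray)
        motion.append(np.hypot(dx, dy))
    prev = gray
cap.release()

# Per-frame change in the extracted coordinates.
coord_rate = np.hypot(osd["easting"].diff(), osd["northing"].diff()).fillna(0).to_numpy()

n = min(len(motion), len(coord_rate))
a = np.asarray(motion[:n]) - np.mean(motion[:n])
b = coord_rate[:n] - coord_rate[:n].mean()
lag = np.argmax(np.correlate(a, b, mode="full")) - (n - 1)
print(f"estimated offset: {lag} frames (verify the sign convention against a known case)")
```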
Marik seems to misunderstand this, saying the 5 meters is "to the ground", which I'm charitably assuming means "on the ground", which is correct but, since the ground hasn't really moved, irrelevant. And I'm not talking about distance. I'm really a bit bemused as to how he's misunderstanding this.
You know this, I know this, but clearly not everyone in the world knows this: if there's a discontinuity between the ratio of the size at the distance of the near-ground object and the size at the distance of the background, and you're tracking an object on a continuous path, then the plane capturing the images must have travelled on a discontinuous path. Congratulations, Marik, you've just added space-warping planes to the menu. I bet the pilot didn't even know he was on one of those. Gotta love the "looks really bad" part of his post; false confidence is hilarious.
Here is a plot of "Easting" and "Northing" (MGRS values) versus frame number, using Mick's cleaner version from post #51, that starts a bit before the loss of lock.
Sterling work!
Can you upload the raw data that you have? I don't mind trying out a few ideas. Plain text/CSV would probably be best, as my LibreOffice is ancient and probably won't handle modern spreadsheets.
Sure, here it is in the attachment.
In particular, the plateaus and jumps can't be real physical data.
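A quick way to see that is to plot the per-frame differences of the extracted values: genuine motion should give a roughly steady rate, while OCR/display plateaus show up as runs of zeros followed by jumps. A minimal sketch (the CSV name and columns are assumptions):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

osd = pd.read_csv("osd_extract.csv")   # assumed columns: "frame", "easting", "northing"

d_e = osd["easting"].diff()
d_n = osd["northing"].diff()

# Flag plateaus (no change at all) and jumps (steps much bigger than the typical step).
plateau = (d_e == 0) & (d_n == 0)
step = np.hypot(d_e, d_n)
jumps = step > 5 * np.nanmedian(step[step > 0])
print(f"{plateau.sum()} plateau frames, {jumps.sum()} jump frames out of {len(osd)}")

plt.plot(osd["frame"], d_e, label="delta easting / frame")
plt.plot(osd["frame"], d_n, label="delta northing / frame")
plt.legend()
plt.xlabel("frame")
plt.ylabel("metres per frame")
plt.show()
```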
It's even hard to say what a "frame" is here. We've got a 60fps video of something that looks like it was replaying at some point at 10 or 15 fps, and it seems to have been videoed off a screen, with the aforementioned weird contrast changes.
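One crude way to estimate the underlying replay rate is to count how many of the 60fps container frames are near-duplicates of their predecessor. A sketch, assuming a local copy of the video:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("mq9_screen_recording.mp4")   # placeholder path to a local copy
prev, diffs = None, []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    if prev is not None:
        diffs.append(float(np.abs(gray - prev).mean()))
    prev = gray
cap.release()

diffs = np.array(diffs)
threshold = 0.1 * diffs.mean()        # "near-duplicate" cutoff; tune by inspecting the values
distinct = int((diffs > threshold).sum()) + 1
total = len(diffs) + 1
print(f"{total} container frames, roughly {distinct} distinct images")
print(f"effective replay rate ~ {60 * distinct / total:.1f} fps")
```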
Why didn't you include your analysis about this? Omission, or because it's uncertain?
I just didn't think about it. I think maybe the next step is to get a simulation with an object in the right position, try to match the FOV, and see how it zips off. A bit fiddly with the noisy data. I coded all the OSD data extraction rather quickly, and there are a few issues I'd like to resolve, and I have other work to do, so it might be a while.
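Not Sitrec, but the basic zip-off geometry can be sketched in a few lines as a sanity check: a camera flying past while staying locked on a distant ground point, with a nearer stationary object drifting across the frame purely from parallax. All the numbers below are made-up placeholders:

```python
import numpy as np

# Made-up geometry, for illustration only.
speed = 90.0            # aircraft speed, m/s (assumed)
fps = 10.0              # assumed effective replay rate
lock_range = 40_000.0   # range to the locked (background) point, m
obj_range = 4_000.0     # range to a hypothetical nearer object, m
fov_deg = 1.0           # assumed narrow field of view
width_px = 640          # assumed frame width

for frame in range(0, 31, 5):
    x = speed * frame / fps                    # along-track displacement of the camera
    # Bearing to each point; the camera keeps re-pointing at the locked point,
    # so the near object's frame position is the difference of the two bearings.
    bearing_lock = np.arctan2(x, lock_range)
    bearing_obj = np.arctan2(x, obj_range)
    offset_deg = np.degrees(bearing_obj - bearing_lock)
    px = offset_deg / fov_deg * width_px
    print(f"t={frame/fps:4.1f}s  object offset {offset_deg:6.3f} deg  ~ {px:7.1f} px")
```

With those assumed numbers the near object leaves a 1-degree field within a couple of seconds, which is the flavour of "zip-off" a proper Sitrec run with the real track and FOV would test.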
Now I can accept that it might be normal distortion from imperfect elevation models, and that river beds can change a lot. But this is on the back slope of the valley, and that should make it more stretched vertically, not compressed. It makes me wonder if it's even the right place. The features seem to match.
I have figured it out! It was NOT the right place; this is.
Still need to code in the FOV changes.
Fixed!
I think this is an artifact of recording off a CRT. The top half of the screen is showing two individual frames one at a time, but the bottom half is showing both frames combined. Notice how the last digits of the lat/long (in the top half of the screen) are clear (if rather dark), but the non-changing MGRS in the lower half are indecipherable. This is because it's two or more frames blended together.
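That blending idea is easy to test: average two consecutive frames and compare the OSD text in the result against the individual frames. A quick sketch (the path and frame index are placeholders):

```python
import cv2

cap = cv2.VideoCapture("mq9_screen_recording.mp4")   # placeholder path to a local copy
cap.set(cv2.CAP_PROP_POS_FRAMES, 1000)               # placeholder frame index
ok1, f1 = cap.read()
ok2, f2 = cap.read()
cap.release()

if ok1 and ok2:
    # 50/50 blend approximates an exposure (or CRT persistence) spanning two refreshes.
    blended = cv2.addWeighted(f1, 0.5, f2, 0.5, 0)
    cv2.imwrite("blend_demo.png", blended)           # compare the OSD digits with f1 and f2
```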
So, going back to post #46, why do we not see the classic raster lines that result from recording a CRT? Were they using various frame rates on the recording device until the usual raster lines resolved or got replaced by these faint echoes of them? Back in my '80s video days, the trick was to sync the recording device with the display to eliminate the raster lines, assuming one had the ability to do so.
I don't think syncing the frame rate eliminates raster lines, unless you mean interlacing.
https://www.smecc.org/ikegami.htm
External Quote:
System Connection Features
· A built-in genlock feature (standard) makes external sync (VBS/BBS signals) operation possible by a coaxial cable.
https://www.tvtechnology.com/miscellaneous/analog-video-synchronization
External Quote:
The process of combining video signals originating from different local sources requires perfect synchronism and relative timing of all the signals present at the input of a production switcher. The synchronization is obtained by locking all video signal sources to a common reference black burst signal generated by the master sync generator. Modern equipment provides adjustments to meet the specs.
More importantly for more modern situations, I did find this guy talking about basically doubling the frames and overlaying the doubled frames slightly offset to create a crude comb filter that removes the scan lines when recording or using old CRT images:
External Quote:
Recording CRT screens with old analog cameras involved matching the camera's frame rate and shutter speed to the television's scanning rate (typically 60Hz/29.97fps in NTSC or 50Hz/25fps in PAL) to avoid flickering or rolling scan lines.
https://creativecow.net/forums/thread/remove-scan-lines-from-a-filmed-crt/
External Quote:
I did something like that ages ago. If you're stuck with that footage, here's what you can try. Double the video layer. Now offset the bottom layer down by one scan line, if possible. You can try making the top layer transparent, or compositing as an add or a screen. Here's an example:
This is actually a photo with simulated scan lines… I don't have any analog televisions hooked up these days. So here's a snap-shot from Vegas, with the two layers offset, using 50% transparency on the top layer:
And here's the two set up as a composite set, in "screen" mode:
You could probably do a better job lining things up; this was a quickie just to demonstrate the technique. And of course, if you have more than just the screen in your shot, you may have to mask out the TV and do this in a few separate layers.
This is actually a crude attempt at building a simple comb filter, which is what modern televisions, particularly digital televisions, do to de-interlace, remove scan lines, and filter out chroma, all in one operation.
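For reference, that offset-and-screen trick from the quote is only a few lines in Python; the input filename is a placeholder, and the one-row offset assumes the captured scan lines are one pixel apart:

```python
import cv2
import numpy as np

img = cv2.imread("crt_frame.png").astype(np.float32) / 255.0   # placeholder input frame
shifted = np.roll(img, 1, axis=0)                              # copy offset down by one scan line

# "Screen" composite: result = 1 - (1 - a) * (1 - b), i.e. the layers brighten each other,
# so a dark scan line in one layer is filled in by the bright line in the other.
screened = 1.0 - (1.0 - img) * (1.0 - shifted)

cv2.imwrite("crt_frame_descanned.png", (screened * 255).clip(0, 255).astype(np.uint8))
```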
Fixed!
Source: https://www.youtube.com/watch?v=sQAIVXGsgnI
Early track isn't perfect, but it all looks a lot better than before. I added an HD overlay from Apple Maps at the end.
Would you have the link to the Sitrec stitch with the correct elevation/terrain?
Sorry, yes, posted that before dinner time.