Subpixel Motion Tracking: Methods, Accuracy, and Application to Video of Collapsing Buildings

First run uses single-color selection (only the flat top) with equal pixel weighting.
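(For anyone wanting to reproduce this: "single-color selection, equal pixel weighting" amounts to taking the plain mean of the coordinates of every pixel matching the chosen color. A minimal Python sketch; the color tolerance is an assumption, not the actual setting used.)

Code:
import numpy as np

def track_centroid(frame, target_rgb, tol=10):
    """Equal-weight centroid of all pixels within `tol` of the target color.

    Every selected pixel counts the same, so the subpixel position is
    just the mean of the selected pixel coordinates.
    """
    img = frame[..., :3].astype(int)
    # Boolean mask: pixels within `tol` of the target on every channel
    mask = np.all(np.abs(img - np.array(target_rgb)) <= tol, axis=-1)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        raise ValueError("no pixels matched the target color")
    return xs.mean(), ys.mean()  # subpixel (x, y)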

Deviation in x maxes out at under 0.00045 pixel:


Deviation of y (only moving portion shown, last 7 stationary frames omitted):


I have some wiggle, too. I assume it's moving at a constant rate, no? I put a linear trendline on top, which is not necessarily the same as the actual constant rate; it's the equivalent constant rate of my data.
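("Equivalent constant rate" here is just the slope of a least-squares line through the positions; the deviation plots are the residuals about that line. A sketch, assuming the per-frame positions are already extracted:)

Code:
import numpy as np

def trendline_residuals(frames, positions):
    """Fit y = a*frame + b by least squares; return the slope (the
    equivalent constant rate, pixels/frame) and the residual wiggle."""
    a, b = np.polyfit(frames, positions, 1)
    residuals = np.asarray(positions) - (a * np.asarray(frames) + b)
    return a, residuals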

Zooming in on frame ranges... 0-6:


Why it deviates so much here, I don't know.


10-15:



25-34 (where some of the highest deviation is):



It's never more than a pixel away from the mean straight line, but I don't know what the actual line is supposed to be. Nevertheless, it seems the performance is comparable to AE. Certainly better in the x dimension, which should stay flat.

Mick, AE did okay in my book. You can't get something for nothing, even with sophisticated algorithms.

Chances are I can do better than this. This is the simplest selection filter and weighting there is.

Perhaps you can, too, by more careful placement of the tracking hints?
 
Perhaps you can, too, by more careful placement of the tracking hints?

Not without removing the transparent band. Something I had not really intended when I contrived the example. There's too much going on there, effectively changing the shape of the object. You get around that by selecting the middle color only.

 
Not without removing the transparent band. Something I had not really intended when I contrived the example.
Really? I knew right away it would affect my results significantly, even in the case of selecting only one color with even weights. I was initially surprised the first few frames would be that bad. Now I know the first three frames actually form a much straighter line; they were thrown off the average by the bars, which made them look that bad.

There's too much going on there, effectively changing the shape of the object.
Yes, but isn't that the case most of the time in the real world? This was one of your objections to my method, hence the contrived example, which you didn't believe I could track effectively.

Non-spherical targets change their apparent shape with rotation, sometimes drastically: jets, cars, boats, people, tumbling debris. Sometimes they deform, like a surface bowing or crinkling. The background is always blended in with objects in video, effectively changing their shape significantly, either when the contrast is great - like your example - or when the object is small. Small objects are also subject to noise.

If they're small enough, they end up looking like femr2's example. Like the motorcycle in one of my previous examples. These are real world targets, too. If you want to track a big section of rigid wall moving, great. I can do that, but it's more work. Not necessarily less accuracy, from what I'm seeing. But if all you have to track is a small point, and many times this will be the case, it will be blended with the background and noise and will change shape a lot.

I can't imagine that tracking moving buildings represented a large proportion of tracking problems before 9/11. I would say that's still the case.

You get around that by selecting the middle color only.
True. Mainly because I have some parameters to adjust in my algorithm. AE also does -- two boxes, is that all? Moreover, the act of selecting a single color and weighting evenly is the second simplest strategy, next to averaging the entire image.

Edit: and I don't entirely get around it; it still affects the measurement directly, because you can see the change in shape of the mask as it eats into the solid color. I could get around it with one simple additional rule: shrink the selection radially by 10 pixels. I know from examining the mask frames that it's even less of a bite and, in my method, I get to do that (examine frames, set numbers to tell it what to look for, etc.).
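(That "shrink selection radially by 10 pixels" rule is, in effect, a morphological erosion of the mask. A sketch using scipy; the 10 px radius is the figure quoted above, not a tuned value:)

Code:
import numpy as np
from scipy.ndimage import binary_erosion

def shrink_mask(mask, radius=10):
    """Erode a boolean selection mask by `radius` pixels in every
    direction, so blended edge pixels (where the bars bite into the
    circle) drop out before the centroid is taken."""
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disk = xx**2 + yy**2 <= radius**2  # circular structuring element
    return binary_erosion(mask, structure=disk)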
 
Really? I knew right away it would affect my results significantly.

I knew it would affect your results; that's what I set it up to illustrate. However, I did not realize there was so much blending going on. I just blurred the circle, thinking of close-ups of videos. But I did not notice the black bars would cut so much into the circle. An unintended side effect.
 
And it does. But you assumed AE would do better?
Yes. But it didn't. I thought it took color into account. Apparently not.

Here's a subpixel moving ball over a bar. AE fails on this.

Should be exactly 45-degree motion, with steps only accurate to 1/10th of a pixel.
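(For reference, one way to build such a test clip, though not necessarily how this one was made: render the ball supersampled and box-downscale, so each frame's position lands exactly on a 1/10th-pixel grid. A Python sketch; the size, radius, and start position are arbitrary:)

Code:
import numpy as np
from PIL import Image

SS = 10  # supersampling factor: positions step in 1/10th-pixel units

def render_ball(frame_idx, size=64, r=8.0, step=0.1):
    """Ball moving on an exact 45-degree diagonal, `step` pixels/frame."""
    pos = 10.0 + step * frame_idx  # same offset in x and y
    hi = np.zeros((size * SS, size * SS), np.uint8)
    yy, xx = np.mgrid[0:size * SS, 0:size * SS]
    hi[(xx - pos * SS) ** 2 + (yy - pos * SS) ** 2 <= (r * SS) ** 2] = 255
    # Box-downscale: the antialiased edge encodes the subpixel position
    return np.asarray(Image.fromarray(hi).resize((size, size), Image.BOX))

A static bar can then be composited over each downscaled frame to reproduce the occlusion.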
 
Yes. But it didn't. I thought it took color into account. Apparently not.
I did think it could do better; I didn't know if it would.

Here's a subpixel moving ball over a bar. AE fails on this.

Should be exactly 45-degree motion, with steps only accurate to 1/10th of a pixel.

I wouldn't call that a failure, just the cost of doing business.
 
After Effects did what looks like a slightly closer fit:





Data attached if you are interested in a more direct comparison.
 

Attachments

  • AE-track-subpixel-building-test.xlsx (81.8 KB)
Blimey! I just discovered that After Effects CC comes with a free version of Mocha, a professional motion tracking program.
https://www.imagineersystems.com/products/what-is-mocha/

In AE CC it's limited to 2D (planar tracking), but that's what we are doing here. Without knowing how to use it at all, it gave what seems to be subpixel-perfect tracking of the building test, although it's possible it's just really smoothing it.



It's a different approach, where you select regions with enclosed splines.


That was just some random shape I made.
 
Just looked into the Mocha platform, and it seems like its planar tracking algorithm could potentially help remove, from measurements of vertical displacement, the errors introduced by lateral movements and distortions. It would be very interesting to see that applied to some definable portion of the face of WTC 7. Mick, I'm assuming your version doesn't have that capability? $500 is quite a steep price tag for something that may not generate significantly more accurate measurements, but it looks like Imagineer offers a free trial version that, while limiting actual video output, may enable measurement output. I probably won't be able to try my hand at it until the weekend, but it'd be interesting to see the results that our resident experts can generate using this software.
 
Just looked into the Mocha platform, and it seems like its planar tracking algorithm could potentially help remove, from measurements of vertical displacement, the errors introduced by lateral movements and distortions. It would be very interesting to see that applied to some definable portion of the face of WTC 7. Mick, I'm assuming your version doesn't have that capability?

This is the version I have:
https://www.imagineersystems.com/products/partner-products/mocha-ae-cc/
mocha AE CC is licensed by Adobe and ships free inside After Effects Creative Cloud. This free version is launched from within AE and features mocha's planar tracking and masking functions, limited to support for After Effects Creative Cloud.
Content from External Source
Although I barely understand what I'm doing with it at this point. Here's a quick attempt at tracking the WTC1 antenna dishes.
 
Very nice. Excellent performance from both apps. Mocha has a lot of flexibility in setup, from what I can see on the page @benthamitemetric linked.

I'm a believer in the ability of both of these to track little blobs as well as complex things (always have been). I'll even take back the claim about my methods being the reference standard for little blobs, since AE did a bang-up job on the light in the window.

I think you see that my methods can be both accurate and useful in the real world. Just not practical when there are tools like these around. Which makes them the natural choice for any further investigation into over-g. I'm inclined to go back to playing guitar now. Cool with that?
 
I'm inclined to go back to playing guitar now. Cool with that?

Works for me. I'm still a little interested in whether your method is more accurate in some cases, but I suspect that with tweaking, AE or Mocha could duplicate the accuracy in those cases with a bit of filtering.

I think all this effort would probably have been far more usefully applied to accurately determining the feet/pixel in the videos :)
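(The feet/pixel determination itself is a simple ratio from a feature of known size; the hard parts are choosing the feature and handling perspective. A trivial sketch; the numbers are placeholders, not measured values:)

Code:
def feet_per_pixel(known_feet, measured_pixels):
    """Scale factor from a feature of known real-world size."""
    return known_feet / measured_pixels

# Hypothetical example: a facade segment known to be 100 ft,
# spanning 180 px in the frame, gives ~0.56 ft/pixel.
scale = feet_per_pixel(100.0, 180.0)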
 
I'm still a little interested in whether your method is more accurate in some cases...
We can hit it a little from time to time; it is interesting. I just need to attend to other things for a while.

...but I suspect that with tweaking, AE or Mocha could duplicate the accuracy in those cases with a bit of filtering.
Most likely.

I think all this effort would probably have been far more usefully applied to accurately determining the feet/pixel in the videos :)
Not a shred of doubt. But it was fun, wasn't it?
 
The cameras were quite far away... there was hot air from the fires... wouldn't this introduce distortion even in static images, like a mirage, for example?
 
The cameras were quite far away... there was hot air from the fires... wouldn't this introduce distortion even in static images, like a mirage, for example?
It does. You can see some distortion of the antenna from time to time, and it will show up in the data when it happens. The positive is that it's noise; the magnitude is limited and the deviations are about the mean. Unfortunately, it's low frequency, too, and right in the band of interest.

Practically speaking, though, it hasn't been much of a problem.
 
I think you see that my methods can be both accurate and useful in the real world. Just not practical when there are tools like these around. Which makes them the natural choice for any further investigation into over-g.

But you couldn't have known that without testing and comparing the tools. There is clearly a learning experience going on from page 1 of this thread. This was a process of discovery and testing.

Great exchange with good, probing examples. Thanks to the both of you for doing this.



I think all this effort would probably have been far more usefully applied to accurately determining the feet/pixel in the videos :)

I disagree here. That is the easier part and the available info on WTC7 is limited to 'trust' in the NIST. The tracking tools already introduced, when applied, can easily demonstrate that trusting the NIST blindly is not a wise thing to do.

From my point of view this effort was well spent doing exactly what you did: in probing and comparing your tracking tools for limits. Now they can be put to use in an open, transparent way.
 
Thought I'd drop by after spotting this on my bi-annual google check on femr2 :)

An interesting discussion, and wonderfully balanced. Great job.

If there are any outstanding requests, I'm happy to try and deliver (though turnaround time may not be as of old). From memory, the little blob test was not rigorously generated and tested; it was intended simply as a proof that sub-pixel tracing accuracy is possible, rather than a proper check of accuracy limits.

The blob was created in 3ds Max, probably as the end of a cylinder set on a circular path. I don't recall exact details of output and rescale, but VirtualDub was likely involved. I'll have a dig into the filing system if there's call.

It would perhaps be interesting to re-evaluate tracing accuracy using shared Inkscape image collections with known motion (known only to whoever generates the shared image set) in blind tests across the OWE custom, SE, and AE platforms.
 
Greetings femr2 - long time no see.

I've been a bit active on another forum where the sky is often not blue.

 
How can we argue, with straight faces, the "accuracy" of subpixel tracking in this scenario, when the data obtained by this method is so ridiculously noisy?

Surely the fact that the data is so unrepresentative of reality, having the building accelerating upwards and downwards, proves that the method is flawed?
 
Hey femr2, good to see your words again! What a coincidence. I don't check here too frequently but I caught you same day.

How can we argue, with straight faces, the "accuracy" of subpixel tracking in this scenario, when the data obtained by this method is so ridiculously noisy?

Surely the fact that the data is so unrepresentative of reality, having the building accelerating upwards and downwards, proves that the method is flawed?
All I can say is that noise, in and of itself, is not a deal-killer, generally. It might be in this case - I don't know; it certainly is on the margin. This is one of the reasons I can't place a lot of confidence in the result.

Many years ago, when I test-fired rocket motors for a living, I was exposed to data from various types of instrumentation. The first exposure to accelerometer data, for a wet-behind-the-ears engineer, was humbling. I delivered the raw data package (by hand; no servers then) to the design engineer's desk, with an accelerometer plot on top of the stack. I asked how he got anything useful out of that "noise". I paraphrase his response:

"This? This is raw data. You deal in raw data, and your boss makes you hand me this stack of stuff as part of your job. But this pile of shit is useless to me. You want to know how to get anything meaningful out of this? FILTERS."
 
Having said that, I think I'm smarter than that guy, and now having more experience in general engineering than he had at the time, it's quite possible he was too trusting of his cookie-cutter filter workflow.
 
It would perhaps be interesting to re-evaluate tracing accuracy using shared Inkscape image collections with known motion (known only to whoever generates the shared image set) in blind tests across the OWE custom, SE, and AE platforms.
It would be interesting, but you do want me to live, don't you?
 
It would perhaps be interesting to re-evaluate tracing accuracy using shared Inkscape image collections with known motion (known only to whoever generates the shared image set) in blind tests across the OWE custom, SE, and AE platforms.


I'd like to see that. Thanks for checking in.


I extracted your comments on sub-pixel measuring from the JREF/ISF forum and spliced them together at this link.

A bit later I'll be extracting the work presented by both Mick and OWE on this subject so it can be compared with what you have done, placing them side by side for anyone who takes an interest in this subject in the future.

The thread and the link to your earlier comments already give a number of ways the tracing techniques can be compared head to head.




After that, I will re-examine the traces already compiled on WTC1, WTC2, and WTC7. That will be a great way to test each method head to head in real-world applications. The case of WTC1 has already been introduced in this thread and was moved to a new thread linked here. Don't worry about the name of the thread or the arguments, which I will address shortly. I do not see anyone objecting to a single WTC1 measurement, and anyone participating is encouraged to produce parallel or even better measurements if they wish.


The case of WTC2 will be introduced after that, and then, finally, the infamous case of WTC7, over which a few people have already expressed objections.


That is pretty much all I am looking for. No hurry with the results, as these measurements are already the best on the planet and they can only get better, clearer, and easier to read from this point onward.

I am looking for quality over speed of production, and I want people to express any reservations they have in this calmer, more cooperative environment, so they can be addressed once and for all and put together in FAQ form for future readers.
 
A wish list for Femr2 or anyone who wants to try...


I'd like to see the displacement and velocity profiles of the NW corner of WTC7 and the NW corner of the penthouse compared. The magenta and dark blue points on this graph:


[graph omitted: magenta and dark blue tracking points on WTC7]


This is connected to the WTC7 collapse mechanism, with respect to the claim that the NW corner of the building is capable of reaching freefall or over-freefall acceleration.


The previous effort by Achimspok looks like this:


[graph omitted: Achimspok's displacement and velocity profiles]

Note the comparison between the dark blue and magenta velocity profiles, which shows a coupling action between core and perimeter.
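(For anyone attempting the wish list: once the two displacement traces exist, the velocity comparison is mechanical. A sketch; the smoothing window, frame rate, and scale are assumptions to be filled in from the actual video:)

Code:
import numpy as np

def velocity_profile(y_pixels, fps, ft_per_pixel, smooth=5):
    """Velocity (ft/s) from a displacement trace (pixels): a moving
    average to knock down tracking noise, then a central difference."""
    y_ft = np.asarray(y_pixels, float) * ft_per_pixel
    kernel = np.ones(smooth) / smooth
    y_smooth = np.convolve(y_ft, kernel, mode="valid")
    return np.gradient(y_smooth) * fps  # ft/frame -> ft/s

# e.g. velocity_profile(corner_trace, 29.97, scale) vs
#      velocity_profile(penthouse_trace, 29.97, scale)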
 
While I too would love to see such data (or any data or methodology related to this graph), I am angered that you felt the need to "slip in" the following:
Note the comparison between the dark blue and magenta velocity profiles, which shows a coupling action between core and perimeter.
Please can you explain why this graph shows "a coupling action between core and perimeter"?
 