Subpixel Motion Tracking: Methods, Accuracy, and Application to Video of Collapsing Buildings

benthamitemetric

Senior Member
[Thread split from: https://www.metabunk.org/acceleration-greater-than-g.t5635/page-8 ]


Dimension would be ft/s^2. That translates to between 9.72109539648 m/s^2 and 9.90558620352 m/s^2.
Waaaaay too many decimals!
To my best determination, g is 9.805 m/s^2 at NYC latitude, sea level. So those values are 99.1% to 101.0% of g. That's the average for the 1.75 (?) seconds of Phase 2 of the north wall descent, right?


Don't underestimate the ability to measure position to an accuracy way below pixel size. I have yet to see a solid mathematical approach to estimating the systematic and random errors of the methods used, given the individual video quality. However, I remember mention of a 1/200th pixel resolution, and there seems to be confidence that accuracy is better than 1/10th of a pixel (oz and M_T - feel free to correct me, although exact numbers aren't important at this point to get the gist across).

Frame-to-frame there is of course noise. But when NIST can claim an R² = 0.9906 for their coarse method over a 1.75 (?) s time interval, why shouldn't it be possible to get the same or a better confidence level for a shorter interval (say, 0.5 s) when using a finer method - and a lower bound for a > g?

I'm not sure how it could be possible to determine so-called subpixel accuracy without first knowing more precisely about the motion of the object being videotaped. If you knew--independent of the pixel movement measurement means--that an object was moving with a relatively constant acceleration, then I can imagine how you could do a bit of calculus and extrapolate subpixel accuracy for movement of the object across multiple pixels. But if you don't know the motion is smooth, I don't think you can get there without injecting a good amount of potential error into the calculations by way of assumption re the smoothness of the acceleration.

In between every frame, there is an infinite number of paths whatever point of measurement you are tracking could take from one pixel to the next. You can see this problem, I think, pretty clearly presented in the cookie cutter graphs produced from the raw data generated by what are being touted as the best measurement techniques in this thread. If your best data set needs to be corrected by a smoothing algorithm to even seem remotely plausible, it seems reasonable to me to assume there is a significant margin of error inherent in the measuring technique (error that the measurers are trying to correct with the smoothing).

Just eyeballing the amount of smoothing applied to the data sets in the graphs, it seems to me there is a margin of error far in excess of 1% at play here, though I readily admit I do not know statistically the best way to conceptualize and approach precisely calculating that error given the particular measurement technique used here. Has anyone more familiar with the subpixel measurement techniques tried to calculate the margin of error?
 
I'm not sure how it could be possible to determine so-called subpixel accuracy without first knowing more precisely about the motion of the object being videotaped.
There are assumptions of continuity and invariance of projected appearance - that is, if the object being tracked changes size, shape, color and brightness at random, and disappears and reappears all over the image, it will be hard to track. But, even then, sometimes it can be. Here, we are dealing with features on a building. Granted, the buildings are collapsing and disintegrating, but there still tends to be continuity from frame to frame.

If you knew--independent of the pixel movement measurement means--that an object was moving with a relatively constant acceleration, then I can imagine how you could do a bit of calculus and extrapolate subpixel accuracy for movement of the object across multiple pixels.
This is not extrapolation or interpolation, it's direct measurement. Here, do it with your eyes:



A 4x3 pixel area is enlarged 100x to show a bright white object in a black background. The object itself is of apparent size one pixel or less, and it's slowly moving to the right. See it move?

Pixels are discrete spatial units but they're not binary bits of information, all the way on or all the way off. They contain three channels of color/luminosity info as well. Originally, the object is fully contained in one pixel but as it moves it overlaps the pixel to the right and thus begins to show up there as a dim square. Correspondingly, the brightness reduces on the original pixel as it no longer fully contains the object.

Motion in this fashion is discerned with a very fine resolution. Of course, real world video is noisy, shaky, etc. But this is the basis for subpixel tracking.
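
For anyone who wants numbers to go with the animation, here is a toy sketch of the same idea (plain Ruby, invented values, not anyone's actual tracker): a one-pixel bright object sits at sub-pixel offset p along a row of dark pixels, each pixel records brightness in proportion to how much of the object it contains, and the intensity-weighted centroid recovers p.
Code:
# Toy illustration only: pixel 0 catches (1-p) of the object's light,
# pixel 1 catches p, and the weighted centroid returns p.
def centroid(intensities)
  total = intensities.sum.to_f
  intensities.each_with_index.sum { |i, x| i * x } / total
end

[0.0, 0.25, 0.5, 0.73].each do |p|
  row = [255 * (1.0 - p), 255 * p, 0, 0]
  printf("true offset %.2f  ->  measured centroid %.3f\n", p, centroid(row))
end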
 
But if you don't know the motion is smooth, I don't think you can get there without injecting a good amount of potential error into the calculations by way of assumption re the smoothness of the acceleration.
Hopefully you can see now that the motion extraction process will just see whatever is there, same as eyes only much, much better. No need for there to be smooth motion, it's just easier.

In between every frame, there is an infinite number of paths whatever point of measurement you are tracking could take from one pixel to the next.
And hopefully you can see there's only one path taken by the one pixel white dot above, and you know what it is just from looking at it. Had it taken an excursion upward during its drift right, you'd see that, too.

You can see this problem, I think, pretty clearly presented in the cookie cutter graphs produced from the raw data generated by what are being touted as the best measurement techniques in this thread.
There is plenty of noise and even systematic error in all the videos being measured, but this error exists for other reasons.

If your best data set needs to be corrected by a smoothing algorithm to even seem remotely plausible, it seems reasonable to me to assume there is a significant margin of error inherent in the measuring technique (error that the measurers are trying to correct with the smoothing).
Except for the qualifier "remotely plausible" (because it's WAY more than plausible, let alone remotely), this is correct. There's a lot of noise and generally people like to see a smoothed graph. Not me, but most people. So it has to be dealt with, and smoothing techniques like filtering and time-averaging are the normal engineering tools for this task. There's nothing unusual or flaky in that. In the past, I've dealt with data from accelerometers and, for the most part, you can't do anything with the data until it's been filtered to the band of interest.

Just eyeballing the amount of smoothing applied to the data sets in the graphs, it seems to me there is a margin of error far in excess of 1% at play here, though I readily admit I do not know statistically the best way to conceptualize and approach precisely calculating that error given the particular measurement technique used here. Has anyone more familiar with the subpixel measurement techniques tried to calculate the margin of error?
This is another subject, kind of big and much harder to explain than the process itself, which usually is adequately explained with animation like from the previous post. I may give it a try if you're interested.

The aspect of interest here is - yes, there's a lot of noise. This poses problems for measurements of medium to fast motion (multiple pixels delta per frame) and is one of the reasons I still have to take the over-g finding with a grain or two of salt. For very slow motion (quasi-static), this method can reliably reveal motion as small as a few cm from a mile distance. And has.
 
The one pixel white dot animation is to demonstrate the validity of subpixel displacement derived from video, but never is the target as neat and tidy as that, obviously. There are times when the target is a small group of pixels which is color or brightness isolated from the surroundings, and this makes for a very nice target. But I don't want to give the impression the tidy example above is best case, because it's not. It's best case for an example.

While isolation of the target is beneficial in most cases, smaller is not better. The more pixels which are subtended by the object in question, the more information is available and the stronger the correlation possible between frames. There's a limit to the benefits in size as the object should not occupy a large viewing angle (this leads to projected shape variation as the object moves), with 'large' being subjective. Therefore, there's a sweet spot for size, as well as a host of other criteria which makes target selection something of an art. A brainless art after you do it a few times, but art all the same.

Maybe not so much an art with SynthEyes or AfterEffects, but when you do your own algorithms, it definitely is. There are many ways to infer differential motion in video. It's stunning. I understand your skepticism, @benthamitemetric, but I assure you the stuff we've been looking at IS the gold standard for these measurements. However, I still have reservations. I think it's on the fringe of what can be done with the information available.

The reason there's so much jumpiness is that this is the double difference of the raw data, taken to give acceleration. That is... I don't know... legitimate, but questionable. As such, I don't seek to quash doubt, merely to redirect it to the proper channels.

It would be interesting to first interpolate the position data with Lagrange polynomials, then twice differentiate to obtain acceleration. Honestly, I think it would give a better result in terms of being more easily justified. This was a really good attempt to make a platinum ring out of a silk purse, and it might be right on the money, but it's definitely fringe.
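
For anyone wondering what taking double differences of the raw positions does to noise, here is a rough, self-contained sketch with invented numbers (not femr2's or achimspok's actual pipeline):
Code:
# Positions follow 0.5*g*t^2 plus a tiny measurement error; the second
# central difference recovers g on average but multiplies that error by
# roughly 4/dt^2, which is where the jumpiness comes from.
G  = 9.81          # m/s^2
DT = 1.0 / 30.0    # assumed frame interval, purely for illustration

pos = (0..60).map do |k|
  t = k * DT
  0.5 * G * t**2 + (rand - 0.5) * 0.002   # +/- 1 mm of position noise
end

accel = (1...pos.length - 1).map do |k|
  (pos[k + 1] - 2.0 * pos[k] + pos[k - 1]) / DT**2
end

puts "true acceleration: #{G} m/s^2"
puts "double-difference estimates: " +
     accel.first(5).map { |a| format('%.2f', a) }.join(', ')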
 
@benthamitemetric
I understand your concerns over smoothing and the unknowns of frame-to-frame motion.
Forget that.

NIST determined there was an average acceleration of g +/- ~1% over a period of 1.75 (?) s.
Average acceleration is computed as (v(1.75s)-v(0)) / 1.75 s.
You need to hedge velocity at the end points. NIST had to do it with a cruder method of obtaining h, and over a time interval of 1.75 s (?) they were confident they had nailed a to within +/- 1%. Or do you doubt their confidence interval? Now same procedure, same rationale, only this time your accuracy measuring h improves by a factor of, say, 4. Shouldn't you then be able to reach the same level of confidence in 1/4 the length of the time interval?
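
A back-of-the-envelope version of that scaling argument, with made-up numbers purely to show the mechanics (the endpoint-velocity uncertainties below are invented, not NIST's figures):
Code:
# a = (v_end - v_start) / T. If each endpoint velocity carries an
# uncertainty sigma_v, then roughly sigma_a = sqrt(2) * sigma_v / T.
# Improve the position (hence velocity) accuracy 4x and shorten T by 4x,
# and sigma_a stays the same.
def sigma_a(sigma_v, t)
  Math.sqrt(2) * sigma_v / t
end

puts format('coarse method, T = 1.75 s : +/- %.3f m/s^2', sigma_a(0.06,  1.75))
puts format('4x finer,      T = 0.44 s : +/- %.3f m/s^2', sigma_a(0.015, 0.4375))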
 
...cookie cutter graphs...
And, on that: I don't know if Major_Tom has pulled out the "successive smoothing degree" animation in this thread but it's just one aspect of the painstaking and meticulous work femr2 applied to this task. When (e.g.) critics questioned a 50 degree poly fit - a perfectly reasonable objection I'd voice myself - he prepared a gif showing fits going from degree 1(?) to 50, which also showed a stable convergence to the higher degree fits.

In other words, there was due diligence throughout all aspects of the process. That may not be entirely evident from the information that's presented here, but it is so. I'm not sure what "cookie cutter" means in this context, but it sounds a bit disparaging, and certainly isn't an accurate characterization. A lot of work went into producing each of the graphs. I don't believe achimspok went to the lengths that femr2 did, but the work is comparable. All of us have replicated NIST's fine motion measurements, the stuff works like a champ.
 
Major_Tom did link a couple of times to a pretty good exposition of the techniques, here:

http://www.sharpprintinginc.com/911...op=view_page&PAGE_id=275&MMN_position=736:736

I reiterate because it seems there are criticisms and objections which could be put to rest if the linked references were viewed. Edit: I would explain it differently, and I disagree that an interpolation filter needs to be part of the process, and maybe differ on a few other points, but it's still good. Edit2: a little disjointed unless you already know what you're looking at. It's a concatenation of posts.
 
And, on that: I don't know if Major_Tom has pulled out the "successive smoothing degree" animation in this thread but it's just one aspect of the painstaking and meticulous work femr2 applied to this task. When (e.g.) critics questioned a 50 degree poly fit - a perfectly reasonable objection I'd voice myself - he prepared a gif showing fits going from degree 1(?) to 50, which also showed a stable convergence to the higher degree fits.





Femr2: "The animation shows the effect of increase in order of the poly fit (steps of 2 if I recall)..."





Femr2: "I think that the animation shows that significant increase in order does not significantly change the shape of the graph."
 
Major_Tom did link a couple of times to a pretty good exposition of the techniques, here:

http://www.sharpprintinginc.com/911...op=view_page&PAGE_id=275&MMN_position=736:736

I reiterate because it seems there are criticisms and objections which could be put to rest if the linked references were viewed. Edit: I would explain it differently, and I disagree that an interpolation filter needs to be part of the process, and maybe differ on a few other points, but it's still good. Edit2: a little disjointed unless you already know what you're looking at. It's a concatenation of posts.


The technique can be compared to one which skips the filter process, but this is the best layout of a structured method I've seen on this subject. Details can be experimented with.

I can understand why he uses the filter he does. From the link:


I generated the following animation of a simple circle moving in a circle itself...





I then downscaled it until the circle is a small blob...





If we look at a blow-up of the shrunken image, such that we can see the individual pixels, it looks like this...






Sub-pixel feature tracking essentially works by enlarging the area around the feature you want to track, and applying an interpolation filter to it. Lots of different filters can be used with varying results.

Applying a Lanczos3 filter to the previous GIF, to smooth the colour information between each pixel, results in the following...





I think you will see that there will be no problem for a computer to locate the centre of the circle quite accurately in that smoothed GIF, even though the circle in the original tiny image was simply a random looking collection of pixels. This process of upscaling and filtering generates arguably more accurate results than simply looking at inter-pixel intensities.

>>>>>>>>>>>>
Content from External Source

When one looks at the resolution obtained using this method, to 3 decimal places of a (freaking) pixel, it isn't hard to see he is moving in the right direction.
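
For anyone who wants to play with the upscale-and-filter step described above, a minimal sketch using RMagick (the same library as the script further down the thread) might look like this; the filename and scale factor are placeholders, and this is not femr2's tool chain, just the idea in a few lines:
Code:
require 'RMagick'
include Magick

frame = Magick::ImageList.new("blob_frame_0001.png")[0]   # placeholder file
scale = 8

# enlarge with an explicit Lanczos filter so intensity is spread smoothly
# across the finer pixel grid
big = frame.resize(frame.columns * scale, frame.rows * scale,
                   Magick::LanczosFilter, 1.0)

# intensity-weighted centroid of the enlarged frame, reported back
# in original-pixel units
pixels = big.get_pixels(0, 0, big.columns, big.rows)
x_sum = y_sum = i_sum = 0.0
(0...big.rows).each do |row|
  (0...big.columns).each do |col|
    i = pixels[big.columns * row + col].intensity.to_f
    x_sum += i * col
    y_sum += i * row
    i_sum += i
  end
end
puts format('centre: x = %.3f, y = %.3f',
            x_sum / i_sum / scale, y_sum / i_sum / scale)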

The language of classical mechanics is calculus. When compared to, say, the mathematics of fractals, it becomes obvious that calculus requires smoothness and continuity. No continuity, no calculus.

These images show how one is returning from pixelated visual representations back to the smoothness that classical mechanics requires. First and second derivatives would be a nightmare without the concept of smoothness and continuity within a function.

One generally doesn't question the underlying assumptions of classical mechanics and the language of calculus, but it is interesting to think about. This level of detail from pixelated visuals really goes into that grey area.

It is experimental and pushes boundaries. Beautiful stuff.
 
It would be interesting to first interpolate the position data with Lagrange polynomials, then twice differentiate to obtain acceleration. Honestly, I think it would give a better result in terms of being more easily justified.


Sure. Why not? It is yet another treatment of the same displacement data.


The reason there's so much jumpiness is that this is the double difference of the raw data, taken to give acceleration. That is... I don't know... legitimate, but questionable. As such, I don't seek to quash doubt, merely to redirect it to the proper channels.

As for the Achimspok graph, I use it to show the coupling between core and perimeter. There will always be jumpiness if one doesn't fit the displacement data to some type of function, such as a polynomial. One can spot some of the limitations in the Achimspok method just by reading the steps Femr2 took.

I use the information I do because it is the best available. You have experience with how this information was extracted, where, and under what conditions. The average reader does not. To them it is a whole new world and people tend not to like new worlds. They are naturally uncomfortable at first, and sometimes perceived as a threat to the old one.

If anyone produces better, I'll use theirs. If you produce better, I'll use yours.
 
Wow, I didn't know I was replying to a joke...



Anyway -



These images show how one is returning from pixelated visual representations back to the smoothness that classical mechanics requires.
My take is that smoothness is artificial, in the sense that no information was added to produce it and some may be lost in obtaining it. And, while smoothness is desired, it can be introduced anywhere in the process. I don't think it makes much difference one way or the other, but I would definitely acquire a target like this raw and see what comes out. It looks jiggly as hell, but I bet the trace would not be like that. Reason being, it's the raw material that lets a "dumb" smoothing routine produce the nice circle. If the smoothing process was applied to something that was really and truly jumpy - as in following a jagged path - and we wanted to see those jaggies, at best the smoothing would not transform away those jags and at worst it would. As it is, we know the original target is moving smoothly; what if we don't know what the target is actually doing? What if the image above is all you have (which is usually the case)?

I need to drag out the scripts I have and give the image above a run. I could be wrong, but I think the trace will come out fairly smooth. But one thing I'm sure of is it will be honest and true to what's in the raw data.
 
I look at this more from a signals and information perspective than mechanics.

Can't find the words at the moment to express why an irreversible transformation, at least early in the process, doesn't sit well with me.
 
...As it is, we know the original target is moving smoothly...
I think this was about M_T's artificial circle.

But it is also true for any edge we might be tracing on the surface of the towers!
The pixels tracked in those videos don't show atoms colliding with pollen. They contain fat chunks of steel and granite. There are practical limits to the accelerations that can be imparted on those assemblies, due to their deforming when forces are applied. I have no clear idea on how to determine or hedge these limits, but I have a hunch that it can be done. Not from observation alone, obviously, but from considering the materials and how they are assembled in the vicinity of the tracked feature.
 
And I had to write the code from scratch since the prior post as it was on an old computer and too much of a pain to access. ;)
 
The pixels tracked in those videos don't show atoms colliding with pollen. They contain fat chunks of steel and granite. There are practical limits to the accelerations that can be imparted on those assemblies, due to their deforming when forces are applied.
This is also exactly why there's no concern about continuity, as we were discussing yesterday.
 
For good measure, I ran the little circle in case anyone thinks I got a smooth result because I was working with an enlargement.





The major gridlines are pixels, so this should put to bed any lingering doubt about subpixel accuracy, at least in theory. This result is every bit as good as femr2's:



To be honest, I think it's better. I don't seem to have the same deep divots he does. I think the point about filtering not being necessary and possibly counterproductive - at least in a case like this - holds.

It's embarrassing how little code it takes to do this.
 
This is also exactly why there's no concern about continuity, as we were discussing yesterday.
Sorry if I missed that.
But is there a lead on how to apply this to improve derivation of acceleration? Can an engineering argument/model be constructed that gives rise to upper limits of feasible a? And a "smoothing" algorithm that heeds such limits?
 
But is there a lead on how to apply this to improve derivation of acceleration?
I believe if one is willing to put in an ungodly amount of work, the status quo could be improved. Until about 10 minutes ago, I wouldn't have said it would be much of an improvement, but now I'm not so sure. The next post explains more on that.

Can an engineering argument/model be constructed that gives rise to upper limits of feasible a?
That is a tough question. I would think so, but I can't elaborate it. Simple models are wonderful things but I suspect establishing a bound on realistic aggregate acceleration of a region would require an FEA much better than NIST's physics simulation. Think about it; the damn thing was twisting, heaving, falling apart. When internal impulse is generated, who can really say what happens without detailed modeling of that specific situation? The discussion in the "Multi-ton sections" thread ties in to this; some pieces get quite a kick. As I like to say... try modeling a bookshelf falling over and tell me (e.g.) which book goes the farthest or even what the farthest expected distance would be.

And a "smoothing" algorithm that heeds such limits?
Again, out of my pay grade. But I like how you're thinking.
 
If you produce better, I'll use yours.
There's a huge difference between the trite measurement of a white disk on a black background and tracking the corner of WTC7 during descent, but...

Here's femr2's trace against a perfect circle:




Here's mine:




Now, the trajectory is generated originally via a discrete method, so is NOT a perfect circle and should not be expected to be tracked as such. However, the generation is very good so should be quite close to a circle. femr2's trace is decidedly lumpier and I have a strong degree of confidence that this is inaccurate. I would even go so far as to say that, with this extremely simple contrived example, my method is very nearly as accurate as the drawing of the circle itself was!

See, no smoothing.
 
For good measure, I ran the little circle in case anyone thinks I got a smooth result because I was working with an enlargement.





The major gridlines are pixels, so this should put to bed any lingering doubt about subpixel accuracy, at least in theory. This result is every bit as good as femr2's:



To be honest, I think it's better. I don't seem to have the same deep divots he does. I think the point about filtering not being necessary and possibly counterproductive - at least in a case like this - holds.

It's embarrassing how little code it takes to do this.

Here's what happens if you run the small gif through After Effects' default motion tracker:


Considerably noisier. The question is whether this is because:
  1. It's a GIF, and you were tracking a video, with higher color depth?
  2. After Effects is terrible at tracking motion?
  3. The sample case is artificially simplistic, and favors a simplistic approach ("embarrassing little code", "huge difference between the trite measurement of a white disk on a black background and tracking the corner of WTC7 during descent")?
What was the "little code" exactly? Just summing the pixels?
 
Considerably noisier. The question is whether this is because:
  1. It's a GIF, and you were tracking a video, with higher color depth?
  2. After Effects is terrible at tracking motion?
  3. The sample case is artificially simplistic, and favors a simplistic approach ("embarrassing little code", "huge difference between the trite measurement of a white disk on a black background and tracking the corner of WTC7 during descent")?

Or option #4, crappy setup on my part. Here's what happens after I focussed the tracking region to be centered on the dot.


This illustrates the problem of motion tracking being a bit of an art rather than an exact science.
 
Oystein & OneWhiteEye--

Thank you for the master class. I believe I now understand the tracking techniques much better and see they can be far more accurate than I would have supposed. I guess my only major reservations about them in this case would be the inability to differentiate between purely vertical movement and movement in other directions. Maybe it isn't that big a deal here as it looks to be mostly vertical movement involved in the descent of the facade, but, again, since we are really trying to determine the significance of very small variations in measured acceleration, it seems even slight non-vertical movement could be a significant source of measurement error.
 
Oystein & OneWhiteEye--

Thank you for the master class ...
Uhhh actually I am more on the pupil's bench with you. I took part of the course before, but really thank OneWhiteEye and Master_Tom, and also Mick West for repeating some of the exercises in our stead.
 
Considerably noisier. The question is whether this is because:
  1. It's a GIF, and you were tracking a video, with higher color depth?
  2. After Effects is terrible at tracking motion?
  3. The sample case is artificially simplistic, and favors a simplistic approach ("embarrassing little code", "huge difference between the trite measurement of a white disk on a black background and tracking the corner of WTC7 during descent")?
1) No, tracked the frames of the gif.
2) No, as we saw in your next post.
3) Yes, definitely.

What was the "little code" exactly? Just summing the pixels?
You got it. Sum of pixel locations weighted by intensity.

Ruby code:
Code:
require 'RMagick'
include Magick
 
input_folder = "C:\\Documents and Settings\\General\\Desktop\\yadayada\\subpixel\\tiny circle"
basename = "moving circle"
num_frame_digits = 4
frame_range = (0..100)
 
fmt = "%0" + num_frame_digits.to_s + "i";
 
frame_range.each { |frame|
  frame_number = format(fmt, frame);
  input_filename = input_folder + "\\" + basename + frame_number + ".png"
  image = Magick::ImageList.new(input_filename)[0];
  rows = image.rows
  columns = image.columns
  pixels = image.get_pixels(0, 0, columns, rows);
  x_sum = 0.0
  y_sum = 0.0
  i_sum = 0.0
  # accumulate intensity-weighted sums of pixel coordinates
  (0...columns).each { |column|
	(0...rows).each { |row|
	  pixel = pixels[columns*row + column]
	  i = pixel.intensity.to_f
	  x_sum += i * column.to_f
	  y_sum += i * row.to_f
	  i_sum += i
	}
  }
  # weighted mean position = brightness centroid of the frame
  x_weighted = x_sum / i_sum
  y_weighted = y_sum / i_sum
 
  puts([frame,x_weighted,y_weighted].join(","))
}

Most of that is overhead to make filenames, alias variables, loop, etc. The meat is just five lines. It is only suitable for something this simple; it sums over the whole image, for instance, because in this case that's okay. The same principle can be extended with thresholding, masks, operating on gradients (edges), and all manner of heuristics. But, at its heart, it's entirely analogous to finding the center of mass of an object.
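
As a rough sketch of what one of those extensions might look like (the region size, threshold and names are arbitrary, and this is not code from the thread):
Code:
# Restrict attention to a small window around the last known location and
# drop pixels below an intensity threshold before taking the centroid.
def thresholded_centroid(image, x0, y0, half, threshold)
  size   = 2 * half + 1
  pixels = image.get_pixels(x0 - half, y0 - half, size, size)
  x_sum = y_sum = i_sum = 0.0
  (0...size).each do |row|
    (0...size).each do |col|
      i = pixels[size * row + col].intensity.to_f
      next if i < threshold            # ignore background / noise pixels
      x_sum += i * (x0 - half + col)
      y_sum += i * (y0 - half + row)
      i_sum += i
    end
  end
  [x_sum / i_sum, y_sum / i_sum]
end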
 
Oystein & OneWhiteEye--

Thank you for the master class.
You're welcome! Your initial reaction is not atypical, it seems like a funky process at first but it's actually quite solid.

I guess my only major reservations about them in this case would be the inability to differentiate between purely vertical movement and movement in other directions.
That is a legitimate concern. Usually the easiest part is the subpixel tracking, whereas converting into (real) meters of displacement in each axis runs from difficult to impossible.

Maybe it isn't that big a deal here as it looks to be mostly vertical movement involved in the descent of the facade, but, again, since we are really trying to determine the significance of very small variations in measured acceleration, it seems even slight non-vertical movement could be a significant source of measurement error.
In this case, we know the initial descent of the NW corner is primarily vertical, less so over time. At the early small angle of tip, this won't be a significant problem but will become more so later in the trace. It's still one of the many things that add up to give substantial uncertainty.
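
For a feel of how quickly the tip angle starts to matter, here is a deliberately crude back-of-the-envelope (a straight-line descent tilted theta from vertical, perspective ignored):
Code:
# The vertical component of a descent tilted theta from vertical differs
# from the full path length by a factor of cos(theta).
[1, 2, 5, 10].each do |deg|
  err = 1.0 - Math.cos(deg * Math::PI / 180.0)
  puts format('tip %2d deg  ->  ~%.2f%% difference in the vertical read', deg, 100 * err)
end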
 
There was a time when femr2 wanted to put SynthEyes up against my methods. Methods plural because I never relied on one routine for everything.

The simple code above will work in a surprising number of situations, if only extended by cropping of the original frame, but some tracing was quite elaborate and generally followed the complexity of what was to be tracked. By contrast, femr2 could just tell SynthEyes what was to be tracked, sit back, and let the canned software do all the work.

That comparison challenge fell by the wayside because it was too much work for me when I didn't have a lot of time. It would have been interesting to carry it through. I've always felt that the pro of working with SynthEyes (or AfterEffects or Tracker) was the ease of getting the job done - which counts for a lot. If the job doesn't get done, what good is it? The downside, however, is that you're a slave to the software. If it can't do it, you've got nothing. When you roll your own routines, the scope of possible measurements is greater.

I recall that femr2 could not get SE to latch onto the roofline of WTC1, being indistinct to begin with and the problem exacerbated by a lot of smoke. The window washer, the antenna, yes... but not the roofline. The roofline was considered a worthy goal because the antenna's motion with the building was not rigid as the collapse progressed, and there were similar suspicions about the washer. The roofline also could give a good indication of projected tip angle.

Nothing I had at the time could work with the roofline, so I pressed until coming up with a routine which could reliably pick out a large number of points on the roofline. It required heuristic assistance in that the location obtained from a given frame had to be used to narrow the search space for the next frame, specifically by deriving a line segment with error bound and an expected location. This was a hassle. More hassle than manually drawing lines to tell it where to look, and that was too much of a hassle, and this is when I stopped caring so much about how these buildings were moving.
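
A loose sketch of that "use the last fix to narrow the search" idea, with invented names and a plain constant-velocity predictor standing in for the line-segment-with-error-bound machinery described above:
Code:
Fix = Struct.new(:x, :y)

# predict the next location from the last two fixes (constant velocity),
# then only search a small window around that prediction
def predict(latest, previous)
  Fix.new(2 * latest.x - previous.x, 2 * latest.y - previous.y)
end

def search_window(prediction, error_bound)
  { x_min: prediction.x - error_bound, x_max: prediction.x + error_bound,
    y_min: prediction.y - error_bound, y_max: prediction.y + error_bound }
end

previous, latest = Fix.new(120.4, 310.2), Fix.new(120.5, 311.9)
p search_window(predict(latest, previous), 3.0)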
 
An interesting extension for the obsessed might be, instead of tracking a few select points, track ALL possible features automatically.
 
I've always felt that the pro of working with SynthEyes (or AfterEffects or Tracker) was the ease of getting the job done - which counts for a lot. If the job doesn't get done, what good is it?
Which is very much the reason that appealed to me - pragmatic engineer looking for answers. So I took the "Master Class" as femr confronted and patiently won ground of credibility against the antagonistic "truthers are always wrong" claque on that other forum. Including two very hostile specialists in personal credibility attack - one an engineer, one an academic. My respect for femr's objective rationality grew as I followed his progress in the face of such a backlash. I cannot recall him ever claiming more than he had "proved" to date - even if it was "his way". Objectivity exemplary.

BUT, though I understand it and am convinced because I had been through said "Master Class", there was no way that I could myself, from my own expertise, justify it from zero base when the issues were raised here. My frustration is evident if you read the series of posts.

From a "pragmatic engineering" POV I only needed "methods at least an order better than NIST or Chandler". I'm persuaded that the work is better than that but "an order better" was all I needed - and I have rested comfortably on that. My reasons for accepting "over G" in the WTC7 specific example based on "argument stronger than the opposite case" rather than any commitment to absolute measurement. I'm open to criticism on that which I can address if it ever comes up in discussion.

Oystein was also a member of that same Master Class but he, like you OWE, is far more oriented to general science research than I am.

THEN - the software. Anywhere between 40 and 45 years ago I could have written those sorts of routines off the top of my head in several of the then-current programming or scripting languages. (Punched them into cards and got the batch job run overnight. <<< Just to make sure we all remember what era of technology it was.)

I've enjoyed observing this sequence of posts. Thanks everyone.
 
Mick and OWE, love those traces.


But hold your horses, guys, before drawing more general conclusions just yet, since that is not the whole test of the circular blob.

This is a continuation of the same test in post #275, quoted from the same external source:

Another test using exactly the same small blob rescale, but extended such that it takes 1000 frames to perform the circular movement. This results in the amount of movement being much much smaller between frames. This will give you an idea of how accurate the method can be...






Would you believe it eh.

Here's the first few samples...


0 0
-0.01 0
-0.024 -0.001
-0.039 -0.001
-0.057 -0.001
-0.08 -0.002
-0.106 -0.002
-0.136 -0.002
-0.167 -0.002
-0.194 -0.004
-0.214 -0.005
-0.234 -0.005
-0.251 -0.007
-0.269 -0.008
-0.289 -0.009
-0.31 -0.009
-0.337 -0.012
-0.365 -0.014
-0.402 -0.015
-0.431 -0.018
-0.455 -0.019
-0.48 -0.02


For this example, I'll quite confidently state that the 3rd decimal place is required, as accuracy under 0.01 pixels is clear. There are other sources of distortion, such as the little wobbles in the trace, which are caused by side-effects of the smoothing and upscaling when pixels cross certain boundaries. This reduces the *effective* accuracy. Can be quantified, by graphing the difference between *perfect* and the trace location, but not sure how much it matters.

Now, obviously this level of accuracy does not directly apply to the video traces, as they contain all manner of other noise and distortion sources.




Get SynthEyes (or equivalent)
Get the video.
Unfold it.
Trace the NW corner in both frames (fields if you will)
Export the traces.
Open in excel.
Save.
Upload.



Okay, new blob test variance results...




Quite interesting. Shows the error across pixel boundaries (which makes sense), and the *drift* given circular movement (which is slightly surprising, but about a third of the pixel boundary scale, so may also make sense). Also shows variation in the oscillating frequency dependant upon rotation angle (which makes sense), and flattened, non-oscillating regions at 180 degree intervals (which again makes sense).

A useful image, and should assist in defining trace accuracy considerably.

Will look at the same thing with a square object, and then again with linear movement, rather than circular.



Behaviour for square and linear movement is very similar to that of circular movement.

So, from simple observation of the variance graph, I would suggest...

a) The highest accuracy is attained when movement is parallel to the axis being traced.
b) The highest accuracy is maintained when on-axis movement is < 1/4 perpendicular-to-axis movement. < 1/4 gradient. Within this margin for the example equates to within +/- 0.01 pixel accuracy.
c) The highest *drift* is attained when movement is at maximum velocity.
d) *drift* is recovered when velocity reduces.
e) On such small regions (49 pixels) inter-pixel transitions can result in oscillating positional error of up to 0.06 pixels. It is expected that this will reduce as region size increases (and will be tested)
f) Pixel transition error oscillation period is obviously related to movement velocity.
g) Error does not appear to favour an axis.
h) For pure on-axis movement, for a 7*7 region, minimum positional error lies within +/- 0.005 pixels.
i) Interestin' :)



Please consider mapping a more drawn-out version of the same movement to see the limits of accuracy of your trace tools.


Femr 2 then tests tracing the corner of a box in perfect freefall. This is a nice test because one can see how changes in velocity affect accuracy of the tracing tool.
 
Femr 2 then tests tracing the corner of a box in perfect freefall. This is a nice test because one can see how changes in velocity affect accuracy of the tracing tool.

I'm not sure what the point of this incredible "accuracy" is. What difference will it make?
 
I still expect comparable results. That is, mine is as good or better. No reason small displacement should alter that.
 
If we are doing a sub-pixel-trackoff maybe some practical source reference video might be more useful, rather than an artificially accurate source.

What you are doing is extracting information. The information is there in the moving blob animations, but is it actually there in the WTC collapse videos?
 
The circumferential path for the small blob above is about 22 pixels in length. When it takes a hundred frames to move the distance, one frame shows about 1/5th of a pixel's travel. I KNOW my method can discern better than 50x greater resolution, so I KNOW I can track the motion if it took 5000 frames to go the full circle. All the same, I'll make a blob animation tonight with 1000 frames.
 
The circumferential path for the small blob above is about 22 pixels in length. When it takes a hundred frames to move the distance, one frame shows about 1/5th of a pixel's travel. I KNOW my method can discern better than 50x greater resolution, so I KNOW I can track the motion if it took 5000 frames to go the full circle. All the same, I'll make a blob animation tonight with 1000 frames.

You could figure out the limits of your method by figuring the number of discrete possible states between two positions one pixel apart. For a thought experiment, reduce it to one spatial dimension, so you've got a row of cells, each of which can have a value between 0 and 255 (assuming byte encoding, and monochrome). For a purely cell-sized object (i.e. a single pixel capable of moving subpixally) there's only 256 possible states, so a theoretical limit of 1/256th of a pixel. I do not think tracking a larger object, or a more blurred small object, will significantly alter this. i.e. if something moved 1/1000th of a pixel at a time, it's only going to change the state every four frames or so.

Again though, this is a very artificial case, and real world limits may differ.
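
That 1/256th figure is easy to check numerically with the same toy setup (one bright object sliding across two 8-bit monochrome cells):
Code:
# Quantising the split to 0..255 leaves only ~256 distinct states, so
# 1/1000-pixel steps only change the recovered centroid every ~4 steps.
recovered = (0..1000).map do |k|
  p = k / 1000.0
  left  = (255 * (1.0 - p)).round
  right = (255 * p).round
  right.to_f / (left + right)          # intensity-weighted centroid
end

puts "distinct recovered positions: #{recovered.uniq.length}"
puts recovered.first(8).map { |v| format('%.4f', v) }.join(', ')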
 