> So it could track this: (which AE can track fine)

Yes, easy.
> But not this (which AE also cannot track off the bottom)

Correct. It will give you the center of what's left on the screen.
> Or this: (which AE can track fine)

It can track this. There will be some distortion if I let it grab the blended parts, as you would expect. Here's what the color contours look like as it passes over the bars:

> Or this:
> Yes, there are lots of nice features, but how do you track them over multiple frames using the weighted sum of an area to get the center of the area?

Roughly, I use the weighted-sum method one piece at a time to get the locations of all the small pieces, then use a composite of those pieces to get the motion of the whole face. It's complicated by rotation, but not unworkable.
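The weighted-sum position estimate described above can be sketched as a center-of-mass calculation over pixel intensities. This is a minimal illustration, not the author's actual script; the function name and test values are made up.

```python
# Intensity-weighted "center of mass" of a 2-D patch: each pixel's
# coordinates are weighted by its brightness, giving a sub-pixel center.
import numpy as np

def weighted_centroid(patch):
    """Return the intensity-weighted (x, y) center of a 2-D array."""
    patch = np.asarray(patch, dtype=float)
    total = patch.sum()
    ys, xs = np.indices(patch.shape)
    x = (xs * patch).sum() / total
    y = (ys * patch).sum() / total
    return float(x), float(y)

# A 5x5 frame with a uniform 2x2 blob; its center falls between pixels.
frame = np.zeros((5, 5))
frame[1:3, 2:4] = 1.0
print(weighted_centroid(frame))  # → (2.5, 1.5)
```

Running each small piece through this and compositing the results gives the whole-face motion described above.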
> I can see how in the small blob on black example, the average gives you the center of the object

Yes. Subject to interference from the background.
> And in your above examples, you take advantage of unique colors to isolate an object, which then reduces it to "blob on black".

Correct. If an object can be distinguished from the background in any way so as to be called a feature, it's by way of color information and that alone. If you can see it with your eyes, software can, too. I can do anything Photoshop (plus operator, to some extent) can do selection-wise, programmatically. Even more, probably, since I can introduce any constraint desired for growing or shrinking the selection space. It only has to be good for the problem at hand. It's best to keep it as simple as possible, for a variety of reasons.
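One way to do the color isolation described above is a per-channel range filter: pixels inside the chosen RGB range are kept, everything else is zeroed, reducing the scene to "blob on black." The ranges below are made-up numbers for illustration, not the author's actual filter specs.

```python
# Keep only pixels whose RGB values fall inside [lo, hi] per channel,
# zeroing the rest of the image ("blob on black").
import numpy as np

def color_isolate(rgb, lo, hi):
    """Zero every pixel outside the [lo, hi] per-channel range."""
    rgb = np.asarray(rgb, dtype=float)
    inside = np.all((rgb >= lo) & (rgb <= hi), axis=-1)
    return rgb * inside[..., None]

img = np.zeros((4, 4, 3))
img[1, 1] = (200, 50, 50)   # reddish feature pixel
img[2, 3] = (50, 200, 50)   # green background pixel
out = color_isolate(img, lo=(150, 0, 0), hi=(255, 100, 100))
print(out[1, 1], out[2, 3])  # feature kept, background zeroed
```

Any further constraint for growing or shrinking the selection space can be layered on top of the boolean `inside` mask.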
> But with the building, you come up with something to extract a feature, then you have to determine how far it moves from frame 1 to frame 2. How do you know what to extract from frame 2 unless you already know how far it has moved?

Uh... I tell it? Man, I'm over here writing it all from scratch.
> ... a single light in a window as blob will provide an x-y trace of the building motion ...
> It would seem to me that it is limited to tracking a feature over a background only if:
>
> 1) The total weight of the background is unchanged (realistically meaning it's flat, or finely textured)

You likely see now that the background matters only to the extent it happens to be co-mingled with the object. The vast majority of the background, save for a minority of pixels, can be eliminated. We need only consider what it does to disrupt the object at its edges.

> 2) The total weight of the feature is unchanged (meaning it's always within the analyzed region)

It must stay within the region, for sure. The total weight, however, can change. The higher it is, the better the potential resolution, as more distinct position states can be resolved. Say the sun goes behind a cloud, or someone turns off a light. Ignoring specular highlight change (which can be a problem) and the possibility of it now being too dim, consider only the global illumination change: it will still calculate to the same location even though intensity decreased.
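The global-illumination point above is easy to verify numerically: scaling every pixel by the same factor changes the total weight but cancels out of the centroid, so the computed location is unchanged. A quick sketch (illustrative values only):

```python
# Scaling all intensities by a constant leaves the weighted centroid
# unchanged: the factor appears in both numerator and denominator.
import numpy as np

def centroid(patch):
    patch = np.asarray(patch, dtype=float)
    ys, xs = np.indices(patch.shape)
    t = patch.sum()
    return float((xs * patch).sum() / t), float((ys * patch).sum() / t)

rng = np.random.default_rng(0)
patch = rng.random((8, 8))
dimmed = patch * 0.4          # e.g. the sun goes behind a cloud
print(np.allclose(centroid(patch), centroid(dimmed)))  # → True
```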
> But what about frame #2? Here you've got a mask for a feature in a single frame. You've found the bounds and center of that mask. You could cookie-cut the pixels under the mask to get a different center. But then what? The mask itself is not sub-pixel.

The masks are generated by my script in some fraction of a second. Running frame #2 is as simple as letting it happen after frame #1.
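On the sub-pixel question: the mask is indeed whole-pixel, but the weighted sum of the intensities under it is not, because the brightness distribution inside the mask pulls the centroid off the pixel grid. A small sketch (illustrative, not the author's code):

```python
# A binary, whole-pixel mask still yields a sub-pixel position when the
# pixel intensities under it are used as weights.
import numpy as np

def masked_centroid(frame, mask):
    w = np.asarray(frame, dtype=float) * mask
    ys, xs = np.indices(w.shape)
    t = w.sum()
    return float((xs * w).sum() / t), float((ys * w).sum() / t)

frame = np.zeros((5, 5))
frame[2, 1:4] = (0.2, 1.0, 0.6)    # blob brighter toward the right
mask = frame > 0                   # binary, pixel-resolution mask
x, y = masked_centroid(frame, mask)
print(round(x, 3), y)  # x lands between pixel centers
```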
> What do you do for frame #2?

Maybe I don't understand your question. I don't make the mask for the program; the program makes the mask for me. The mask is not used. It simply represents the pixels chosen by the algorithm so I can see what it's using.
> Frame #2, you repeat, possibly moving the crop rectangle, but using the same filter to generate a new mask. The mask might be different to the mask in step #1?

It usually will be different from frame to frame. It's not necessary to change the filter specs unless the conditions change. Most times they don't. If they do, say for example the object enters shadow and is now dimmer, okay. I may wish to exclude the few frames of transition, then re-calibrate for the new condition.
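The frame-to-frame loop described here can be sketched as: re-center the crop rectangle on the previous frame's result, apply the same filter, and take the weighted centroid again. Everything below (function names, the threshold filter, the synthetic clip) is illustrative, not the author's script.

```python
# Track a bright blob across frames: same filter every frame, crop
# rectangle re-centered on the previous frame's centroid.
import numpy as np

def track(frames, start_xy, half=4, thresh=0.5):
    """Follow a blob through a list of 2-D frames; one (x, y) per frame."""
    path, (cx, cy) = [], start_xy
    for frame in frames:
        x0, y0 = int(cx) - half, int(cy) - half
        crop = frame[y0:y0 + 2 * half, x0:x0 + 2 * half]
        w = np.where(crop > thresh, crop, 0.0)   # same filter, new mask
        ys, xs = np.indices(w.shape)
        t = w.sum()
        cx = float(x0 + (xs * w).sum() / t)
        cy = float(y0 + (ys * w).sum() / t)
        path.append((cx, cy))
    return path

# Synthetic clip: a 2x2 blob drifting one pixel right per frame.
frames = []
for i in range(3):
    f = np.zeros((20, 20))
    f[10:12, 5 + i:7 + i] = 1.0
    frames.append(f)
print(track(frames, start_xy=(6, 11)))
# → [(5.5, 10.5), (6.5, 10.5), (7.5, 10.5)]
```

The one-pixel-per-frame drift shows up directly in the x trace; a real pipeline would also handle the blob leaving the crop, which is the "what if it doesn't find it" case discussed below.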
> I just let AE run. It's essentially searching frame #2 for the positioning of the feature area that best matches frame #1

Okay, I can't resist. What if it doesn't find it? This is how the question is coming off to me.
It sounds at this point like you're grasping at picayune details.
> No, what I'm trying to determine is how accurate it would be for measuring the fall rates of parts of falling buildings. So I was trying to figure out how it would work beyond the simple "sum the entire image" case. You have provided explanations, but they seem theoretical so far?

I haven't demonstrated the full-up version yet here because it isn't done. But I will, and I know (how well) it works.
> In terms of actual tracking?

Yes, it works in the real world; I've already done it plenty of times. Works great. It was 6-8 years ago, and it's a tossup whether it's easier to find and sort out my old code or write new from scratch. I chose the latter, and you're seeing it in progress. But I've done this before.
> If it does not find it, it will stop tracking. It gives a "confidence" measure for each frame, and you can stop when it gets below a certain level of confidence. You can also manually adjust the tracking regions at any time to get past such problems.

Sounds pretty much like the same situation I have!
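A homemade analogue of that confidence stop, for the weighted-sum approach, might compare the blob weight found in each frame against the weight found in frame #1 and halt when the ratio drops below a cutoff. This is a sketch of the idea only; it is not AE's confidence metric, and the numbers are invented.

```python
# Stop tracking when the per-frame blob weight falls well below the
# reference weight from frame #1 (feature occluded, off-screen, too dim).
def track_with_confidence(weights, cutoff=0.6):
    """weights: per-frame summed blob intensity. Returns indices kept."""
    ref, kept = weights[0], []
    for i, w in enumerate(weights):
        confidence = w / ref
        if confidence < cutoff:
            break           # feature lost; hand back to the operator
        kept.append(i)
    return kept

print(track_with_confidence([100.0, 98.0, 95.0, 40.0, 97.0]))  # → [0, 1, 2]
```

Like AE, this leaves room to re-calibrate manually and resume past the bad frames.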
> Except AE does not have to isolate a "blob" in the scene from a background.

Well, yeah, that goes without saying (even though it's been said many times already). AE is easy, general purpose, accepts a wide variety of target types, has the most sophisticated feature matching money can buy; no one's quibbling with any of that.
> If passing over the bars does influence the quality of its measurement, how much error does it introduce?

I shall check ...