WTC7: Determining the Accelerations involved - Methods and Accuracy

NIST 'admits' free fall acceleration for 2.25 seconds, but the way they arrived at this conclusion, as explained in Section 12.5.3 of the report, didn't really convince me. The division into three stages seemed a bit arbitrary and it annoyed me that they don't give any attention to the margin of error.
The issue popped up a couple of times in discussions I had with truthers, again this week. So I decided to have a closer look at the calculation by NIST and I found it even more questionable than before.



NIST did a linear regression on the 10 velocity data points in Stage 2. However, these velocity data points are derived from 11 data points (time/height) that are the actual measurements made on the video. So NIST defines Stage 2 as running from 1.75 s till 4 s, while the velocity data points they use in the regression are actually based on data points that lie in a wider interval. And the way they calculated these velocity data points assumes that the speed between two data points is constant, which is not correct.

I also wondered why they didn't just do a second-degree polynomial fit on the actual data points, because that seems more straightforward. So I tried this myself.
I read off the values from Figure 12-76 as best as I could and then fitted a polynomial through the 11 points running from 1.6 s till 4.1 s. This gives an acceleration of 9.75 m/s^2. Then I noticed that the first point is quite a bit off the curve, so I repeated the calculation leaving this point out, which gives an acceleration of 9.46 m/s^2.
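For anyone who wants to reproduce this kind of fit, here is a minimal sketch in Python (numpy). The time/height arrays below are made-up placeholders standing in for values read off Figure 12-76, not my actual readings; substitute your own.

    import numpy as np

    # Placeholder (time, height) values standing in for readings from Figure 12-76.
    # The first point is deliberately offset from the parabola, as in the post.
    t = np.array([1.60, 1.85, 2.10, 2.35, 2.60, 2.85, 3.10, 3.35, 3.60, 3.85, 4.10])  # s
    h = np.array([242.80, 240.69, 238.77, 236.24, 233.10, 229.34, 224.96, 219.98,
                  214.38, 208.17, 201.34])                                            # m

    # Fit h(t) = a*t^2 + b*t + c; the (downward) acceleration is -2*a.
    a, b, c = np.polyfit(t, h, 2)
    print("acceleration, all 11 points: %.2f m/s^2" % (-2 * a))

    # Same fit with the first point left out:
    a2, b2, c2 = np.polyfit(t[1:], h[1:], 2)
    print("acceleration, first point dropped: %.2f m/s^2" % (-2 * a2))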


You can read it in more detail on my website: link.
 
This issue has been the subject of far better research than that performed by either NIST or Chandler. I can source the data if needed. Probably the most important aspect, at the level of finesse you are discussing, is that neither NIST's nor Chandler's data points are based on measurement of a specific point. They crudely reference zones on the façade, and neither the start nor finish nor intermediate points actually refer to the same point. Given that, and several other issues about the accuracy of the raw data, there is little benefit in superimposing mathematics that is not supported by the crude source data.

But I doubt that the issues of 'finesse' of measurement are relevant in most discussions with truthers. Debate usually involves fundamental false premises, such as the assumption that the whole building fell with FFA, or that the whole façade fell at FFA, plus a common assumption that FFA means a state of free fall. And in the WTC 7 scenario it near certainly does not.

My recommendation would be that you review and check what your objective is. If it is for debate with truthers, is precision critical to the argument? If for debate or for your own interest, I can point you to the other research... there is far too much to simply bring raw material to this forum.
 
...it annoyed me that they don't give any attention to the margin of error.
...
NIST did a linear regression on the 10 velocity data points in Stage 2. However, these velocity data points are derived from 11 data points (time/height) that are the actual measurements made on the video. ...
I also wondered why they didn't just do a second-degree polynomial fit on the actual data points, because that seems more straightforward. So I tried this myself.
I read off the values from Figure 12-76 as best as I could and then fitted a polynomial through the 11 points running from 1.6 s till 4.1 s. This gives an acceleration of 9.75 m/s^2. Then I noticed that the first point is quite a bit off the curve, so I repeated the calculation leaving this point out, which gives an acceleration of 9.46 m/s^2.

You can read it in more detail on my website: link.
You note that there must be a margin of error - and there certainly is a considerable one - and you also note that there are only 11 data points (each with error margins). You correctly point out that NIST did a linear fit.

And then you repeat all the same errors or weak methods, by assuming that a 2nd degree polynomial would be justified, and by giving your derived accelerations with no fewer than 3 significant digits.

Why not do 3rd, 4th, 5th, 10th order polynomials?
Why do polynomials at all?
Why not state your acceleration with 12 significant digits?

The research that econ41 refers to (I'll drop screen names: femr2, achimspok, Major_Tom)
a) has refined the measuring techniques to derive far more, and far more precise (key-word: sub-pixel precision), data points from video
b) has thought a great deal more about fitting and smoothing algorithms - key-word: Savitzky–Golay filter
c) found short intervals of >g acceleration
d) and yet it still failed to determine error margins sufficiently to make a robust statement of whether or not this >g episode was real, or a potential artefact of the algorithms used
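As a rough illustration of the Savitzky-Golay approach mentioned in (b), here is a minimal sketch (Python/scipy) on a synthetic noisy free-fall trace. The window length, polynomial order, frame rate and noise level are illustrative choices of mine, not the settings used in that research.

    import numpy as np
    from scipy.signal import savgol_filter

    fps = 29.97                          # assumed frame rate, for illustration
    dt = 1.0 / fps
    t = np.arange(0.0, 4.0, dt)

    # Synthetic free-fall position trace with noise standing in for tracing error.
    rng = np.random.default_rng(0)
    y = -0.5 * 9.81 * t**2 + rng.normal(0.0, 0.05, t.size)

    # Savitzky-Golay smoothing; deriv=2 returns the smoothed second derivative,
    # i.e. an acceleration estimate, directly.
    accel = savgol_filter(y, window_length=31, polyorder=3, deriv=2, delta=dt)
    print("mean acceleration away from the edges: %.2f m/s^2" % accel[30:-30].mean())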

What can be said with some confidence is that some point(s) on a part of the building (the north wall roofline) descended at an average acceleration that is equivalent to g during a brief interval that may have been in the vicinity of 2 seconds, during which a vertical descent of roughly 8 stories occurred.
 
^^^ And those three - all of them starting from a pro-truther standpoint - did some of the best detailed research work available. I still rely on bits of their work.

Then - Oystein is right about the status of the over-G measurement. I'm persuaded it is likely correct. THEN - think about this aspect: whether it is "over G" OR "averaging G", it almost certainly proves that that portion of the façade was NOT in free fall.

And the other issue is the relationship of high-precision measurements of one location (whether a point (femr2) or a zone (NIST/Chandler)) to the façade as a whole. High precision and the assumption that one point represents the lot don't sit well together in an argument.

The 8-storey gross-motion aspect, however, can stand on its own, apart from the higher-precision work.
 
And then you repeat all the same errors or weak methods, by assuming that a 2nd degree polynomial would be justified, and by giving your derived accelerations with no fewer than 3 significant digits.

Why not do 3rd, 4th, 5th, 10th order polynomials?
NIST did a linear regression on those velocity data points, which is the same as assuming that the acceleration was constant for Stage 2. That's why I compared it with a 2nd degree polynomial fit on the actual data points NIST measured. Of course, it's just an assumption that the acceleration was constant over this interval. I'm not claiming that the acceleration was 9.46 m/s^2, only that that seems more reasonable if you take the measurements by NIST and their assumption that the acceleration was constant for a certain Stage for granted.
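To make the difference between the two procedures concrete, here is a small sketch on synthetic data (not the Figure 12-76 readings): the NIST-style route of finite-difference velocities followed by a linear fit, versus a direct quadratic fit on the position points. Assigning each derived velocity to the midpoint time is my own choice for this sketch.

    import numpy as np

    # Synthetic positions on an exact constant-acceleration curve (11 points,
    # 1.6 s to 4.1 s), standing in for the Figure 12-76 readings.
    # Downward is negative in this sketch.
    t = np.linspace(1.6, 4.1, 11)
    y = 240.0 - 3.0 * (t - 1.6) - 0.5 * 9.81 * (t - 1.6) ** 2

    # (1) NIST-style: velocities from consecutive position points (this assumes
    # constant speed between samples), then a linear fit of v against time.
    v = np.diff(y) / np.diff(t)
    t_mid = 0.5 * (t[:-1] + t[1:])
    a_from_velocities = np.polyfit(t_mid, v, 1)[0]

    # (2) Direct quadratic fit on the position points themselves.
    a_from_positions = 2 * np.polyfit(t, y, 2)[0]

    print("linear fit on derived velocities: %.2f m/s^2" % a_from_velocities)
    print("quadratic fit on positions:       %.2f m/s^2" % a_from_positions)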

The main reason I looked at the NIST calculation (apart from curiosity) is that in the discussion I had with this particular truther the argument was made that because NIST has 'admitted' that the acceleration was equal to FFA, this should be considered a fact.
 
...The main reason I looked at the NIST calculation (apart from curiosity) is that in the discussion I had with this particular truther the argument was made that because NIST has 'admitted' that the acceleration was equal to FFA, this should be considered a fact.
That is the situation I am advising about. It is accepted fact that acceleration was equal to FFA as explained by NIST and more fully explained by Oystein and me in recent posts.

And the 'admitted' is truther mendacity, as I think you realise given the 'scare quotes'. The NIST reports were 'put out' for public comment - a more rigorous process than 'peer review' and common for Government statutory reports. Chandler asked questions. NIST adapted the draft report to address the issues... and there is a whole topic in those aspects, including the public-policy issue of 'how far do you go' in responding to the conspiracy-biased fringe of the community. (A topic in its own right, and one I am familiar with from my career experience.)

So the starting point is that FFA was near enough to true fact. That is not the issue with truthers. Recall my caution that FFA does NOT prove free fall - the unstated issue here is the truther meme - the truther false belief - that free fall PROVES CD. It doesn't. CD is a means of initiating, triggering or assisting a collapse. FFA - IF it happens - is a feature of the collapse, not what started it. A (very) rough analogy: the starting gate does not win the horse race - it merely releases the horses to run.

FFA may follow from a CD initiation, but it may also result from non-CD triggering. So the truther you are debating is following the false truther lore that FFA proves CD. AND he will almost certainly be assuming that FFA means it must be free fall, which is also not necessarily true.

Overall I am suggesting: don't fall for the "trees versus forests" trap - focusing on details whilst missing the fatal errors in the bigger picture. The flaws in the truther argument are more fundamental than the precision of the maths.

That will always be my preferred starting point. I'm sure Oystein will discuss the rigour of the mathematics and such details... I doubt that the problem with your truther discussion partner is in the details. In my experience it rarely is.
 
NIST did a linear regression on those velocity data points, which is the same as assuming that the acceleration was constant for Stage 2. That's why I compared it with a 2nd degree polynomial fit on the actual data points NIST measured. Of course, it's just an assumption that the acceleration was constant over this interval. I'm not claiming that the acceleration was 9.46 m/s^2, only that that seems more reasonable if you take the measurements by NIST and their assumption that the acceleration was constant for a certain Stage for granted.

The main reason I looked at the NIST calculation (apart from curiosity) is that in the discussion I had with this particular truther the argument was made that because NIST has 'admitted' that the acceleration was equal to FFA, this should be considered a fact.
Acceleration of course was not constant.

I don't think NIST meant to imply that it was.

Merely that one can identify a time interval during which acceleration was "about g", as in "g would be within any reasonable confidence interval, given the imprecise data, during all of that time interval".

As you are interested in margins of error, your analysis ought not to have come up with a single value with 3 significant digits, but instead with a confidence interval: "there is an X% likelihood that acceleration averaged 9.46 m/s^2 +/- Y m/s^2 in the time interval from ... seconds to ... seconds".
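For what it's worth, here is a sketch of how such a confidence statement could be produced from numpy's polyfit covariance output. The data are synthetic placeholders, and the quoted number is only the 1-sigma uncertainty from the fit, not a full error analysis.

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(1.6, 4.1, 11)
    # Synthetic constant-acceleration positions plus noise (placeholder data).
    y = 240.0 - 3.0 * (t - 1.6) - 0.5 * 9.81 * (t - 1.6) ** 2 + rng.normal(0.0, 0.3, t.size)

    coeffs, cov = np.polyfit(t, y, 2, cov=True)
    accel = 2 * coeffs[0]                    # leading coefficient is a/2 (downward = negative)
    accel_sigma = 2 * np.sqrt(cov[0, 0])     # 1-sigma uncertainty of that estimate

    # A full confidence interval would use the t-distribution with the
    # appropriate degrees of freedom; this is just the 1-sigma figure.
    print("acceleration = %.1f +/- %.1f m/s^2 (1 sigma)" % (accel, accel_sigma))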
 
As you are interested in margins of error, your analysis ought not to have come up with a single value with 3 significant digits, but instead with a confidence interval: "there is an X% likelihood that acceleration averaged 9.46 m/s^2 +/- Y m/s^2 in the time interval from ... seconds to ... seconds".
Such a statement would be preferable, but without the raw data that NIST used for Figure 12-76 (and the exact methods they used to find those), I don't think it makes much sense to look into the margins of error. I am merely looking at the methodology used by NIST, taking the data points as they've provided them.
 
Why are you "...looking at the methodology used by NIST ..."?

IF your purpose is to critique the methodology, you have already been advised that their measurements were crude - and that they were arguably "good enough" for the purpose of explaining the gross mechanics and responding to the D. Chandler inquiry. I have identified the two key problems with "the exact methods they used to find those".

So if you accept that the NIST methods were 'crude but good enough' THEN any analysis based on the NIST data points will inherit the same level of error, inaccuracy or lack of predictability for any extrapolations.

If you want to pursue greater accuracy - then say so and we can help you access the higher precision research. This forum has a strict policy re links and there is no point me quoting the full list of shortcomings with NIST methodology if you are not interested.

If you want to stay within the limitations of NIST (and Chandler), no problem. But there is little point discussing issues of mathematical finesse.
 
What can be said with some confidence is that some point(s) on a part of the building (the north wall roofline) descended at an average acceleration that is equivalent to g during a brief interval that may have been in the vicinity of 2 seconds, during which a vertical descent of roughly 8 stories occurred.
This cannot be said with any serious confidence.
 
This cannot be said with any serious confidence.
Why do you think so?

It obviously can be said with serious confidence. See: I say with some confidence that "some point(s) on a part of the building (the north wall roofline) descended at an average acceleration that is equivalent to g during a brief interval that may have been in the vicinity of 2 seconds, during which a vertical descent of roughly 8 stories occurred".

See? I said it with some confidence.

My confidence is derived from the fact that I have seen data, won by much more precise methods than NIST or Chandler used, that does in fact show such an average acceleration of g (and even a brief excursion into >g).
 
I have also followed and reviewed the research and am confident of the average acceleration near enough to G and the brief excursion into >G.
 
^^^ don't overlook the issues of system/sub-system boundaries. For one part to go over G, that part could not have been in free fall. So it's the same issue I have identified several times... SOME aspects of local velocity or acceleration are not directly linked to gross movement of the whole façade. It cannot be legitimately claimed that the "over G" measured at the one point identified by the femr2 analysis must apply anywhere other than that one point.
 
My confidence is derived from the fact that I have seen data, won by much more precise methods than NIST or Chandler used, that does in fact show such an average acceleration of g (and even a brief excursion into >g).
What data?

I think we are talking about the same data (https://www.metabunk.org/posts/161384/), and it is bad data badly analysed. Not the kind of results one could publish in science or engineering journal.
 
The motion profile was analyzed from videos with high precision, and you can read about the methodology on another 911 forum. Where did you learn about the approximate G acceleration? What was the methodology, qed?
 
What data?

I think we are talking about the same data (https://www.metabunk.org/posts/161384/), and it is bad data badly analysed. Not the kind of results one could publish in science or engineering journal.
Yes, femr2's data.

I clicked the link, read a few posts before and after, and found that your analysis of femr2's analysis seems to be oblivious to femr2's actual final analysis. I think so because the key-words "Savitzky-Golay" do not feature in any of your posts back then. Your talk about the behaviour of nth-degree polynomials away from a local region under consideration was a bit odd and did femr2's work no justice.

I have myself, back in the day, criticized femr2's methods sharply - in particular I pointed out that, to my knowledge, he never evaluated error margins of positional data, and how they translate to error margins of derived velocity and acceleration. But positional data over long enough intervals is good enough to establish that, with some confidence, there was an average of g over some suitably picked time interval - and it follows straightforwardly that instantaneous acceleration must have momentarily exceeded g.

"With some confidence" means I realize there is a real chance that positional data is insufficient to be sure of =g or >g, but that we can't decide either way. Still, I am rather confident that there was acceration around g.
 
"With some confidence" means I realize there is a real chance that positional data is insufficient to be sure of =g or >g,

It is called Occam's razor.

<g = normal, no extra explanation needed
>g = abnormal, extra explanation needed (explosives, special pulling, etc)
 
@Oystein And please give me the precise period of >g. It was from this time to this time.

And you do realize that you three (@econ41 @Jeffrey Orling @Oystein ) are the only people in the whole world pushing this bunk. You three. Not even femr2 (the pod man) any more.
 
If I currently google for "911 collapse faster than g" I am led to Metabunk, where I learn that indeed the building collapsed >g! :oops:

[attached screenshot of the Google search results]
 
I am not pushing any theory. If one is to accept the data that femr2 gathered from his analysis, there was some period during which the points he measured moved at greater than free fall. I don't recall when this is said to have occurred during the collapse. However, there is no reason why the frame, at some place and at some time as it was collapsing, could not have acted like a spring, with the spring-like action adding energy in a downward direction and resulting in the >FF. The collapse was reasonably straight down... but it was not perfectly straight down. So there were torsional movements etc. which distorted the rigid prismatic form. This distortion would seem likely to be driven by gravity, which we know acts vertically. So there were mechanisms in the structure which would enable lateral motions during a gravity-driven collapse. This also applies to the long vertical kink in the façade... which was a manifestation of horizontal movement.

So it would seem to me that unless a collapse is a sort of complete crushing of the structure at the bottom, with every point in the structure moving literally straight down, there would be non-vertical forces in play.
 
It is called Occam's razor.

<g = normal, no extra explanation needed
>g = abnormal, extra explanation needed (explosives, special pulling, etc)
O geeze.

A) This is a serious misapplication of Occam's razor. When something has been observed, the observation requires no explanation. Occam does not apply.
B) >g is not abnormal at all.
C) Neither explosives nor "special pulling" would provide the explanation. Simple free body physics, and rotation, do.
 
@Oystein

Which is more probable?

[1] There is an error in femr2 results and the building collapsed close to but <g

or

[2] femr2 is correct that the building collapsed >g via rotation, etc. And at exactly the one and only point examined.

[... The mentioning of your three avatars is not an argument. It is a shaming. I am pointing out that if just you three stopped mentioning this bunk, no one in the world will be talking about it. The bunk only continues to exist because of you three. This is Metabunk, and you must stop bringing it up. Ok? ...]
 
@Oystein

Which is more probable?

[1] There is an error in femr2 results and the building collapsed close to but <g

or

[2] femr2 is correct that the building collapsed >g via rotation, etc. And at exactly the one and only point examined.
The two propositions are not mutually exclusive. Femr2 could have an error in his results AND there was a brief episode of >g.

I note that everybody who ever measured that period of collapse frame by frame found an average of g for some period of around 2 seconds. From this it follows that, most likely, acceleration also exceeded g briefly.

You have zero measurements to support your contrary claim "no instantaneous acceleration = g at any point in time". So your bare assertion is rejected based on the extant observations.

[... The mentioning of your three avatars is not an argument. It is a shaming.
I am glad you admit to not presenting an argument.
Of course, your shaming tells nothing about us - only something about you.

I am pointing out that if just you three stopped mentioning this bunk, no one in the world will be talking about it. The bunk only continues to exist because of you three. This is Metabunk, and you must stop bringing it up. Ok? ...]
You have not, in fact, shown it to be bunk.
 
Once you debunk the proposition that there is a significant, even good chance, that the theory is correct.
How good?

You obviously established error margins and had them checked. What are they? And give us a reference please (and to a particular post, not a collection of 64 pages) to the calculation.
 
@Oystein

How many different building points was your experiment run on? With reference please.
Please explain the relevance of this question! You are not going to make me jump through random and irrelevant hoops.

Present your own argument, if you have any.
 
Please explain the relevance of this question! You are not going to make me jump through random and irrelevant hoops.

Present your own argument, if you have any.

You have not presented any evidence whatsoever, yet ask us to believe (I have had to trace through hundreds of pages of blogs to get to the theory, and even then you say I don't have it all, but don't yourself present the theory). So I am asking you questions to try and find out. Surely you are prepared to tell us how your results were obtained?

So far you have said that you have not calculated error ranges, yet are somehow still confident in the result, and you ask us to put aside normal scepticism and accept it on the strength of your belief.

So now I am asking how many points on the building you ran this test on, and how many of them give the same result. The more points, the more we can compare and contrast. Too few, and I will just say you are being scientifically irresponsible.

  • How many different building points was your experiment run on? With reference please.

[... Questions such as (1) did you calculate error ranges and (2) how many experiments were run, are not random nor irrelevant. They are exactly the kind of questions one has to ask when interrogating such scientific claims. Surely you know this? ...]
 
I note that everybody who ever measured that period of collapse frame by frame found an average of g for some period of around 2 seconds. From this it follows that, most likely, acceleration also exceeded g briefly.
No, that does not follow.

If you have free fall, and measure height with some inaccuracy, then, due to that inaccuracy, the average measured acceleration will be g, but short intervals will be seen to vary in either direction. However, that is an artefact of the measuring error: if, in a series of measurements that ideally should be 1, 2, 3, one measurement is off due to inaccuracy (say 1, 2.05, 3), then one interval appears longer and another appears shorter, without there being an underlying change in the acceleration.

2 seconds of video means 60 frames? so 60 data points that can show >g or <g randomly. The probability that 6 consecutive frames show >g is 1/2^6=1/64, we have 54 consecutive random experiments in those 60 data points (which are not strictly independent, but it mostly cancels out, I think?), so p=1-(63/64)^54=57% chance of seeing this 0.2s "excursion" in the data.

But that is already a strong formulation, as you could now sprinkle some additional <g frames in and still have the short-term average exceed g. So basically, what we want to compute is the probability for a subsequence of given length (how long?) to exceed g by (how much?) on average, given the measurement error (how much?) of the individual data points for an underlying freefall acceleration. If this probability is below 5%, you might be justified in rejecting the hypothesis that free fall caused the observed excess.

If it isn't, you need additional evidence.
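Taking the coin-flip model above at face value, a short script can check both the 1-(63/64)^54 approximation and, via Monte Carlo, how much the ignored overlap between windows matters. The model itself is of course a simplification.

    import numpy as np

    n_frames, run_len = 60, 6
    # Independence approximation used above: 1 - (63/64)^54
    approx = 1.0 - (1.0 - 0.5 ** run_len) ** (n_frames - run_len)
    print("independence approximation: %.2f" % approx)

    # Monte Carlo: probability of at least one run of 6 consecutive ">g" signs
    # among 60 fair coin-flip frames (windows overlap, so they are not independent).
    rng = np.random.default_rng(1)
    trials = 100_000
    hits = 0
    for _ in range(trials):
        run = 0
        for s in rng.integers(0, 2, n_frames):
            run = run + 1 if s else 0
            if run >= run_len:
                hits += 1
                break
    print("Monte Carlo estimate:       %.2f" % (hits / trials))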
 
...
So now I am asking how many points on the building you ran this test on, ...
This question is loaded nonsense on at least two counts, and that makes me think you are not here to play an honest game.
a) I ran no "tests" whatsoever - and gave you no reason to believe I ever did. I followed, back in the day, the discussions about femr2's data and his smoothing algorithms and the derived velocity and acceleration profiles. There is no need whatsoever for ME to do ANY testing at all.
b) It doesn't matter at all how many points on the building anyone ever ran any "tests" on! It suffices entirely that somebody once tested 1 point - if that one point exhibited >g acceleration, then my original claim that I am confident that SOME point of the structure exhibited >g for SOME time interval is justified.

True, there exists no error analysis, I admitted as much, and clearly indicated the problems this brings.
And still, it is possible to walk away from all this confident that SOME >g episode is real. This stems from informed judgement, not from rigorous, precise proof. Call it an opinion - it is. It's quite ok if you have a different opinion based on your informed judgement.

You claimed, or implied, earlier, that some >g measurements would be unlikely and would require some special, even extraordinary, explanation - but that would be wrong. Just wrong. Very simple, ordinary mechanics, where things are very simply falling under gravity, with simple constraints, can easily lead to excursions of SOME points of an assembly into >g territory. I am sure you know a couple of examples: The top end of a straight vertical beam that's pinned at and rotating around the base WILL show >g. A horizontal beam that's free-falling, and then hitting an obstacle at one end, WILL show >g at the other end. Any object that falls freely and simply rotates at a steady rate while it falls WILL show >g on one side.
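To put numbers on the first of those examples, here is a small sketch using the standard rigid-body results for a uniform rod pinned at its base and falling from the vertical under gravity alone. It is purely illustrative, not a model of the building.

    import numpy as np

    g, L = 9.81, 10.0                                  # rod length is arbitrary here
    theta = np.radians(np.arange(0.0, 91.0, 10.0))     # angle from the vertical

    # Uniform rod pinned at the base, released from the vertical:
    #   angular acceleration: theta_ddot = (3g / 2L) * sin(theta)
    #   angular speed (from energy): theta_dot^2 = (3g / L) * (1 - cos(theta))
    theta_ddot = 1.5 * g / L * np.sin(theta)
    theta_dot2 = 3.0 * g / L * (1.0 - np.cos(theta))

    # Tip height is y = L*cos(theta), so its downward acceleration is
    #   a_down = L * (theta_ddot*sin(theta) + theta_dot^2*cos(theta))
    a_down = L * (theta_ddot * np.sin(theta) + theta_dot2 * np.cos(theta))

    for deg, a in zip(np.degrees(theta), a_down):
        print("theta = %3.0f deg   tip downward acceleration = %4.2f g" % (deg, a / g))
    # The tip exceeds 1 g well before the rod reaches horizontal, peaks near 2 g,
    # and is still at 1.5 g at 90 degrees - no explosives or "special pulling" involved.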
 
No, that does not follow.

If you have free fall, and measure height with some inaccuracy, then, due to that inaccuracy, the average measured acceleration will be g, but short intervals will be seen to vary in either direction. However, that is an artefact of the measuring error: if, in a series of measurements that ideally should be 1, 2, 3, one measurement is off due to inaccuracy (say 1, 2.05, 3), then one interval appears longer and another appears shorter, without there being an underlying change in the acceleration.

2 seconds of video means 60 frames? so 60 data points that can show >g or <g randomly. The probability that 6 consecutive frames show >g is 1/2^6=1/64, we have 54 consecutive random experiments in those 60 data points (which are not strictly independent, but it mostly cancels out, I think?), so p=1-(63/64)^54=57% chance of seeing this 0.2s "excursion" in the data.

But that is already a strong formulation, as you could now sprinkle some additional <g frames in and still have the short-term average exceed g. So basically, what we want to compute is the probability for a subsequence of given length (how long?) to exceed g by (how much?) on average, given the measurement error (how much?) of the individual data points for an underlying freefall acceleration. If this probability is below 5%, you might be justified in rejecting the hypothesis that free fall caused the observed excess.

If it isn't, you need additional evidence.

Far too complicated.

I said that everybody who ever measured the fall arrived at the conclusion that there was an average acceleration =g for some time interval, and that this time interval was something like 2 seconds (60 frames).

I said that this makes me confident that for SOME point, an average of =g for SOME time interval is real.

That's then my starting assumption for the next step of the argument - I assume that over SOME time interval there was an average acceleration =g.

With me so far?

Ok, here comes the next step:

We know that the observed point never was in actual free fall - it was always connected to a solid assembly and thus subject to numerous forces up and down and left and right in addition to gravity. Right?
We know that acceleration changed before and after the time interval in question. Right?
It is thus unlikely that acceleration was constant during the time interval that averaged g. Agreed?
Now, if, during that time interval, acceleration was <g for some finite sub-interval, it follows arithmetically that it must have been >g during at least one other finite sub-interval.
Therefore, it is unlikely that >g did not occur - assuming that an average =g was real, as ALL who have measured the descent agree upon.
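To put illustrative numbers on that step: if the 2-second average is exactly g (about 9.81 m/s^2) and the acceleration was 0.9 g (about 8.8 m/s^2) during the first second, then the second second must have averaged 1.1 g (about 10.8 m/s^2), since 0.5 x (0.9 g + 1.1 g) = g. The numbers are made up; only the arithmetic is the point.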
 
So it was only one point that you guys measured.:oops:

Now comes the moment I ask you for a null-hypothesis.

  • What can I show that would disprove that the building collapsed >g for this period (which you still haven't given)?

And you know that "show where the hole is in my theory" is not a null-hypothesis.

  • Please can you also present your theory in a single post, so that we can properly understand it and repeat it.
Here is how I have understood it, trawling through the pages and pages of blog posts. You say there is something that I am missing that makes you far more confident. Perhaps you can take this as a starting point and correct me.
It is my understanding that femr2 establishes a period of super g acceleration as follows.
  1. Begin with the time/pixel-position data set femr2 obtained from the Dan Rather footage.
  2. Use this raw or normalized against the trace of a static point (and/or dejittered in various ways).
  3. Obtain the time/position-meter (or time/position-foot) data by applying a scale-metric.
  4. Choose a subset of the data for a particular segment of time 10s-17s.
  5. Obtain a degree n position polynomial P by least-squares smoothing.
  6. Differentiate P twice to obtain a degree n-2 acceleration polynomial A=P''.
  7. Plot A.
  8. Show that A dips below -g (i.e. that the downward acceleration exceeds g in magnitude).
  9. Deduce that for period 10s-17s WTC7 NW corner collapsed at super g acceleration.
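For concreteness, here is a minimal sketch of steps 1-8 as I have reconstructed them, on synthetic data. The polynomial degree, the time window, the pixel-to-metre scale and the use of numpy's Polynomial.fit as the "least-squares smoothing" are placeholders of mine; this follows my reconstruction above, not necessarily femr2's actual procedure.

    import numpy as np

    # 1-2. time / pixel-position trace (synthetic stand-in for the traced values)
    fps = 29.97
    t = np.arange(10.0, 17.0, 1.0 / fps)
    pixels = 900.0 - 0.5 * 9.81 * (t - 10.0) ** 2 / 0.29        # fake trace

    # 3. apply a scale metric (placeholder: 0.29 m per pixel)
    y = pixels * 0.29

    # 4-5. least-squares fit of a degree-n position polynomial over the chosen window
    n = 6
    P = np.polynomial.Polynomial.fit(t, y, n)

    # 6. differentiate twice to get the acceleration polynomial A = P''
    A = P.deriv(2)

    # 7-8. evaluate A over the window and compare against -9.81 m/s^2; values below
    # that line would be the claimed ">g" excursions.
    tt = np.linspace(t[0], t[-1], 200)
    print("most negative value of A over the window: %.2f m/s^2" % A(tt).min())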
 