The elephant in the room for Truthers is that this type of forensic analysis is *not* simply a matter of programming a computer, running the model, and then mindlessly publishing whatever appears at the bottom of the printout.

While ANSYS and similar tools have revolutionized both design and forensic engineering, they are not the be-all and end-all of reasoning about failures. Especially in a forensic context, NIST and others are well aware that any computer simulation model is based on a number of assumptions and estimates for values that cannot be known, and mechanisms that cannot have been observed. It is an effort to understand the various things that *may* have happened in as objective a manner as possible.

As such, the result of a simulation is never trusted as the inevitable determination of what must have happened. Conversely, minute discrepancies between what the model determines and what is theorized to have occurred do not doom the theory. Simulation is one of many tools brought into play.

Why add a stiffener here that isn't in the design? Because structural joints can fail in several ways, and conscientious forensic engineering examines how probable each failure mode may be in isolation as well as in combination. To determine the likelihood of failure by walk-off alone, you gimmick the model so that the seat off of which the girder might walk cannot itself fail and thus undermine your experiment. Similarly, to investigate the possibility of seat failure, you might artificially "fasten" the girder to its seat, either by upping the yield strength of the designed fasteners to some magically infrangible value or by telling the model that the girder and its seat are the same piece of metal. The point in either case is to keep the girder loads coupled to the seat no matter what as the simulation progresses over time.
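The logic of that kind of gimmicking can be sketched in a few lines. Everything below is a toy stand-in, not NIST's model or ANSYS code; the loads, seat widths, and yield values are invented purely for illustration.

```python
# Hypothetical sketch: isolating one failure mode in a parameter study.
# All names and numbers here are illustrative, not from any real analysis.

INFRANGIBLE = 1e30  # effectively infinite yield strength (Pa)

def run_case(fastener_yield_pa, seat_width_m, thermal_walk_m):
    """Toy stand-in for a simulation run: reports which failure
    mode (if any) this simplified model predicts for one set of inputs."""
    fastener_load_pa = 2.5e8  # invented peak load on the fasteners
    if fastener_load_pa > fastener_yield_pa:
        return "seat/fastener failure"
    if thermal_walk_m >= seat_width_m:
        return "walk-off"
    return "no failure"

# Baseline: both failure modes compete, and fastener failure masks walk-off.
print(run_case(fastener_yield_pa=2.0e8, seat_width_m=0.28, thermal_walk_m=0.30))

# Gimmicked: the fasteners can never fail, so any failure must be walk-off.
print(run_case(fastener_yield_pa=INFRANGIBLE, seat_width_m=0.28, thermal_walk_m=0.30))
```

The gimmicked run doesn't claim the fasteners are really that strong; it removes one failure path so the other can be observed in isolation.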

If you do this, it's not to doctor the study to reach a predetermined conclusion. It's because you're using the *simulation* tool the way it's meant to be used -- to investigate a range of possibilities and determine which is most likely and which are patently unlikely, by changing variables and characteristics of the problem to build up a picture of overall behavior. We use fine-grained simulation as the tool because it combines very small and simple effects into a macro-scale observation in a way that avoids any bias that might arise from assumptions at the macro scale.

In another way, it's the same philosophy the Mythbusters use to determine likelihood of outcomes: first they try to replicate the purported event using reported or reasonable values, then they achieve the observed outcome at any cost to determine what would be required to replicate the event. We use simulation at times to "bound" problems to reasonable safety and performance envelopes using both reasonable and unreasonable values for unknowns.
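That bounding approach can be sketched as a parameter sweep: run the same model across both reasonable and deliberately unreasonable inputs and record the envelope of outcomes. The function and values below are invented for illustration.

```python
# Hypothetical sketch of "bounding" unknowns: sweep each uncertain
# parameter across reasonable and deliberately extreme values and
# record the envelope of responses. All values are invented.
import itertools

def toy_response(load_factor, damping):
    """Illustrative stand-in for a full simulation run."""
    return load_factor * (1.0 - damping)

load_factors = [0.8, 1.0, 1.2, 3.0]   # 3.0 is deliberately unreasonable
dampings = [0.02, 0.05, 0.10]

results = [toy_response(lf, d)
           for lf, d in itertools.product(load_factors, dampings)]
print(f"response envelope: {min(results):.3f} .. {max(results):.3f}")
```

If even the unreasonable corner of the sweep can't produce the purported outcome, that tells you something; if a reasonable corner produces it easily, that tells you something else.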

This is why the inch or so of discrepancy isn't significant. The purpose of the model is to determine whether the higher-level theory of failure mode is plausible, not whether it must have happened according to every detail the computer is able to spit out. Not all those spat-out details matter, nor are all of them confident values. The purpose of the model with the gimmicked seat is to isolate the hypothesized walk-off behavior and put some sort of qualitative validation and quantitative ballpark value to it. That the beam walked to within an inch or so of falling off is validation enough for the conclusion that was drawn from it -- which was not a supremely confident conclusion. Why? Because the model deals only in constitutive relationships and simply doesn't involve, embody, or capture all the physical effects that may come into play in the real world. You can easily attribute the final inch to variables that exist in the real world but that can't be modeled easily. Yes, it's a crap-shoot, but that's why this sort of analysis is still partially an art.

Further, as we discussed, models are frequently optimized toward a certain kind of observable outcome. This is because Time-To-Solution is a cost factor in this type of analysis. I dealt with that at length, so I'll take it as read and move on to an illustration of the qualitative-limitation argument made above.

In our work for Boeing in the early 2000s developing the wing for the 787 Dreamliner, we were hobbled by limitations in the existing algorithms for structural analysis and fluid dynamics. Basically, at that time the two were independent regimes of simulation that did not work well together.

Fluid dynamics is the body of physics that describes the motion of fluids through environments composed of solid objects, incorporating all the classic fluid effects -- especially, for gases, the effects of varying temperature and pressure. Computational fluid dynamics is the investigation of those effects by simulation according to finite-element methods: little "packets" of air that interact with each other and with solid surfaces according to constitutive relationships derived from classic gas laws and from the Navier-Stokes models.

Structural dynamics has been discussed here. Computational structural dynamics is the application of finite-element methods to problems in indeterminate mechanics with constitutive relationships defined according to classic mechanics of materials.

Both are required in wing design. CSD determines the behavior of the wing structure under flight loads. CFD determines the aerodynamic viability of an airfoil of a given shape. But in the real world these regimes interact: aerodynamic loads affect wing shape, and wing shape in turn determines airflow. Boeing has proprietary CFD methods that achieve more accurate results using less computer time by means of non-uniform elements. Our job was to integrate classic CSD methods with the proprietary CFD methods and derive integrated constitutive relationships at the wing-slipstream boundary. This was not new math, but very few practical solutions to the problem existed at the time.

Prevailing computer methods produced structural solutions that were valid in the structural regime, and airflow models that were valid in the fluid dynamics regime. But in order to get usable fluid-dynamics answers for our previous Boeing work, we had to manually "flex" the wing into its in-flight shape, i.e., manually simulate the effect of aerodynamic loads on the wing. This is because the wing in the early CFD models is not a flexible structure; it is assumed to be rigid and immovable, and we had to make it "rigid and immovable" at a certain pre-flexed position.
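The coupling between the two regimes can be pictured as a fixed-point iteration: load determines shape, and shape determines load, until the two agree. The sketch below is a toy with invented coefficients, not Boeing's method; it only shows why iterating the two models converges on a self-consistent flexed shape.

```python
# Toy fixed-point iteration between a "CFD" model and a "CSD" model.
# All coefficients are invented for illustration.

def aero_load(deflection):
    """Toy CFD stand-in: aerodynamic load eases as the wing flexes."""
    return 100.0 - 20.0 * deflection

def structural_deflection(load):
    """Toy CSD stand-in: linear-elastic response to the applied load."""
    return load / 250.0

deflection = 0.0  # start from the rigid, unflexed ("jig") shape
for i in range(50):
    load = aero_load(deflection)
    new_deflection = structural_deflection(load)
    if abs(new_deflection - deflection) < 1e-9:
        break
    deflection = new_deflection

print(f"self-consistent deflection: {deflection:.6f}")
```

The manual pre-flexing described above amounts to guessing this fixed point once by hand instead of letting the coupled solver find it.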

That was suitable for the design goals of the 1990s and early 2000s. We accepted that the computer simulations would produce only approximations of the actual behavior, that other analytical tools would be needed to round out the design and projected performance of the airfoils, and that design and operational envelopes would necessitate margins for safety at the expense of performance.

The Dreamliner aspired to a more efficient design where the structure of the wing could safely accept a narrower design margin, and the aerodynamic behavior could be known confidently across a wide range of structural responses. More importantly, the interactive behavior of the wing system in flight was more likely to be revealed in the coupled model.

The lesson here is that any computer simulation is only ever based on a subset of the variables at play, arising from a select few of the bodies of physical law (e.g., materials, acoustics, thermal, mechanics, chemistry) that naturally apply. The models implement constitutive relationships only -- a sort of theater-scenery mock-up of the actual underlying behaviors. These break down in some situations, and are predictive only within a certain epsilon even within their useful range. And combining models from different regimes, as discussed above, is problematic for reasons of the underlying mechanics, and vastly consumptive of computing resources -- roughly O(N(N-1)/2) over single-regime models. (I deployed about $20 million worth of supercomputer clusters to perform the above analysis.)

Because those who are adept in the use of these tools know the nature and quantity of the limitations imposed, and because they are used to employing them in what-if scenarios that depart here and there from ground-truth reality, they generally don't look at NIST's conclusions as necessarily wrong or necessarily based on "flawed" models.

What is happening here is that the Truth movement is counting on its followers adopting the layman's "crystal ball" view of computer analysis (which is instead a highly specialized art and science) and buying into the hooey about meaningless and irrelevant "flaws" in the models, and into completely overblown assertions about the role of such analysis in the overall findings. It is understandable when laymen look at the computer analysis and see what they believe to be suspicious behavior. It's inexcusable when people in the engineering profession deliberately misrepresent the characteristics of computer modeling and its role in an overall engineering decision-making process.

But hopefully this sheds some light on why we use these models and how the NIST use of them does not lie outside ordinary engineering practice.