9/11 WTC1 & WTC2
09.02.2009 at 19:41
A key method of the report, it seems, for making its theory appear more realistic, is to mix observable data with pure speculation without saying so. Just as one example, the report places its estimates of the number of damaged core columns (which are highly speculative, since there is no evidence about them) right next to estimates of the damaged exterior-wall columns, which can be derived from examining the photographs.
In the case of WTC 1 the lesser alternative predicted only one severed core column, the moderate alternative predicted three, while the extreme alternative predicted five to six. In the case of WTC 2 the disparity was even greater: The lesser alternative predicted three severed columns, the moderate five, and the extreme case no less than ten.[41]
***
So:
WTC 1, North Tower:
a.) Low-damage scenario: 1 damaged core column
b.) Expected "normal" damage scenario: 3 damaged core columns
c.) Severe-damage scenario: five or six damaged core columns
Out of a total of 56.
That is:
a.) 1.8%
b.) 5.4%
c.) 8.9% to 10.7% of all columns damaged
WTC 2, South Tower:
a.) Low-damage scenario: 3 damaged core columns
b.) Expected "normal" damage scenario: 5 damaged core columns
c.) Severe-damage scenario: 10 damaged core columns
a.) 5.4%
b.) 8.9%
c.) 17.9%
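The percentages above can be checked with a few lines of arithmetic. This is just a sketch that recomputes them, assuming (as stated above) a total of 56 core columns per tower; the scenario labels are mine:

```python
# Recompute the damaged-core-column percentages for each NIST impact scenario,
# assuming 56 core columns per tower (the figure given above).
TOTAL_CORE_COLUMNS = 56

# Scenario name -> predicted number(s) of damaged core columns.
scenarios = {
    "WTC 1 low":      [1],
    "WTC 1 moderate": [3],
    "WTC 1 extreme":  [5, 6],
    "WTC 2 low":      [3],
    "WTC 2 moderate": [5],
    "WTC 2 extreme":  [10],
}

for name, counts in scenarios.items():
    pcts = " to ".join(f"{100 * n / TOTAL_CORE_COLUMNS:.1f}%" for n in counts)
    print(f"{name}: {pcts} of core columns")
```

Running this reproduces the figures in the lists above (1.8%, 5.4%, 8.9% to 10.7% for WTC 1; 5.4%, 8.9%, 17.9% for WTC 2).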
If those are the expected damages within the normal range, why always assume the worst-case pre-damage? On what grounds? Especially when, according to NIST's own analysis, in none of these cases did debris exit on the far side of the WTC, which is what happened in real life.
And how credible is it that there was more damage in the South Tower than in the North Tower, when the columns in the South Tower were twice as thick at the impact floor (something NIST never mentions; it was only discovered through the blueprints that were leaked to the genuine investigators) and the plane's direct path swept right past the core?
Why did NIST assume such a thing at all? No, not because it had anything to do with reality. Nor because the engines or anything else really made worse damage likely. The only reason can have been to somehow rationalize why WTC 2, despite all these mitigating circumstances, collapsed after a shorter fire and a near-miss impact in only half the time the North Tower took.
Although the NIST never satisfactorily resolved these differences, it immediately threw out the less severe alternatives, citing two reasons in the summary report: first, because they failed to predict observable damage to the far exterior walls; and second, because they did not lead to a global collapse.[42]
On 9/11 the first tower sustained visible damage to its opposite, i.e., south, wall, caused by an errant landing gear and by a piece of the fuselage, which were later recovered from below. Also, at the time of the second impact a jet engine was seen exiting WTC 2’s opposite wall at high speed, after passing straight through the building. It was later found on Murray Street, several blocks northeast of the WTC. In its summary report the NIST leads us to believe that it used the observable damage to the far walls caused by these ejected jet plane parts to validate its simulations. Yet, in one of its supplementary documents the NIST admits that “because of [computer] model size constraints, the panels on the south side of WTC 1 were modeled with a coarse resolution...[and for this reason] The model....underestimates the damage to the tower on this face.”[43] But, notice, this means that none of the alternatives accurately predicted the exit damage.[44]
As the supplementary documentation states, “None of the three WTC 2 global impact simulations resulted in a large engine fragment exiting the tower.”[45] Yet, here again, the NIST rejected the lesser alternative. We can thank researcher Eric Douglas for digging deeper than the summary report. Otherwise, this flaw, tantamount to the devil lurking in the fine print, might never have come to light.
But the NIST was not deterred by its own biased reasoning. Later, it also tossed out the moderate (base) alternatives, ultimately adopting the most extreme scenarios in its subsequent global collapse analysis, even though, as noted, the moderate alternatives were no less accurate, from a predictive standpoint, than the extreme cases. In fact, with regard to predicting the entry damage to WTC 1, as noted, the moderate alternative was actually a better match. The NIST report offers no scientific rationale for this decision, only the pithy comment that the moderate alternatives “were discarded after the structural response analysis of major subsystems were compared with observed events.”[46] Here, of course, “observed events” refers to the ultimate collapse of the tower. The NIST, though oblique, is at least more forthright than in the case of the lesser alternatives. Things get worse.
As it happened, even the extreme alternatives required further tinkering to be acceptable. The report informs us that “Complete sets of simulations were then performed for cases B and D [the extreme alternatives]. To the extent that the simulations deviated from the photographic evidence or eyewitness reports, the investigators adjusted the input, but only within the range of physical reality.” [my emphasis][47] In other words, NIST scientists worked backwards from the collapse, tweaking the extreme alternatives until their computer model spat out the desired result consistent with their assumption, which never wavered, that the 767 impacts ultimately were at the root of everything on 9/11. Of course, the NIST report never tells us what the “additional inputs” were.
That the NIST’s impact study and subsequent global collapse analysis were biased, hence, unscientific, ought to be obvious. But I will go even further: The impact simulations were very nearly a waste of time, since the NIST had almost no information about the actual conditions at the WTC core. Had the computer model been robust enough to properly characterize the far walls, things might have been very different. In that case investigators could have used the observable damage to the exterior of those walls to discriminate between the three alternatives, hence to select the best choice, validating the model. As it was, the NIST had no sound basis for rejecting the lesser and moderate impact alternatives. Both were at least as plausible as the extreme alternative. Why were they not given equal weight? The reason is obvious: That would have compelled NIST investigators to entertain the unthinkable, i.e., the possibility that some other causative agent was responsible for the WTC collapse. Still, one has to admire, in a perverse sort of way, the NIST’s triumph of circular reasoning.
With excerpts from:
Dead On Arrival
The NIST 9/11 Report on the WTC Collapse
By Mark H. Gaffney