In 1967, I left pure research and, after some years in the aerospace industry and in college teaching, I spent most of my working career in the computer industry. Six years after retirement, I once again found myself in the field of scientific research, this time examining the events of 9/11. Here I have chosen this field for a peer review case study, not only because it has been my field of research for the past 10 years but also because it illustrates many of the current problems in the peer review process and their possible solutions.

Since the 1960s there has been a vast increase in the literature on peer review, together with large international conferences on the issue. For example, Lawrence Souder in his 2011 paper, “The ethics of scholarly peer review: a review of the literature,” summarizes a subset of the available literature, namely that on the ethics of peer review [9]. At the same time, the peer review process has become more formalized and technologically driven, and more the subject of study and criticism.

Contrary to my own recollections, peer review processes with outside reviewers did exist in the 1950s, and many of the main journals had formal systems in place. Eugene N. Parker wrote a ground-breaking theoretical paper in 1958 that predicted an outflow in interplanetary space of plasma from the sun—the solar wind [6]. The paper was strongly opposed and was rejected by its two reviewers, but saved by the editor, S. Chandrasekhar, who published it anyway in the Astrophysical Journal. The presence of the solar wind in interplanetary space was later confirmed by radio astronomers A. Hewish and J.D. Wyndham, as well as by satellite observations [7,8]. Hewish’s continuation of this line of research led a few years later to the discovery of pulsars, for which he shared the 1974 Nobel Prize in Physics with Martin Ryle.

The submission process for publications, such as Monthly Notices of the Royal Astronomical Society, Nature, the Astronomical Journal, and the Astrophysical Journal, was to send in the paper and wait to be notified when it was accepted and then published. Or so it seemed to a young researcher in his twenties. Although the idea of peer review dates back to the mid-18th century, it was not until the mid-20th century that a journal-oriented formal process using outside reviewers began to take shape [3]. For example, in 1967, Nature, under the editorship of John Maddox, established a formal peer review process. The increasing volume of papers necessitated this development. Prior to that, Jack Brimble, the previous editor of Nature, would often hand out papers at the Athenæum Club in London for informal review by other scientific members [4,5].

A half century has elapsed between my first scientific publications and the present ones. My early research and publication of papers in the field of radio astronomy occurred over 50 years ago, in the period 1959 through 1967. At that time I was not aware of any rigorous review process, and I cannot recall receiving any significant reviewer comments, objections, changes, or corrections for the nine papers that I authored or co-authored during that period [1]. I do remember receiving a sharp rebuke from S. Chandrasekhar, editor of the Astrophysical Journal, over a multi-author paper on radio sources and their optical identification leading to the discovery of new quasars [2]. However, this was for some infraction of the submission process, and not connected to a peer review of the paper’s contents.

2. A Peer Review Case Study—The Events of 9/11

In 2006 I was handed a DVD that questioned the official account of 9/11. I examined it reluctantly with a view to debunking the claim that the official story was false. To my great surprise and consternation, I found that the physical evidence for the controlled demolitions of the World Trade Center (WTC) buildings in New York City—the Twin Towers (WTC1/2) and Building 7 (WTC7)—was compelling [10]. Controlled demolition may be defined as the “intentional destruction of a building by placing explosives in strategic areas” [11].

In 2009, I became a founding member of Scientists for 9/11 Truth and have served as Coordinator of that organization for the past eight years [12]. In 2014, I co-authored with Wayne H. Coste and Michael R. Smith a paper on the ethics, or lack thereof, of the official reports on the WTC building destructions [13]. These official reports were produced by scientists and engineers at NIST (National Institute of Standards and Technology) [14]. For these reports, there was no independent peer review process whatsoever [15,16]. Dr. James G. Quintiere, a fire protection expert, confirmed this: “I know of no peer review of the NIST work on WTC. They had a[n] Advisory Committee, and even some of them did not agree with the NIST work and conclusions.” In a paper on the WTC investigation, Quintiere ends with this statement: “I would recommend that all records of the investigation be archived, that the NIST study be subject to a peer review, and that consideration be given to reopening this investigation to assure no lost fire safety issues.”

The absence of peer review for the NIST reports is alarming, especially in view of the consequences of 9/11. These consequences include the preemptive wars, which have led to a devastating loss of life and property and to a substantial refugee problem, as well as the imposition of mass surveillance, the erosion of civil liberties, and unresolved fire safety and building code issues. Yet, contrary to NIST’s claims for the Twin Towers and WTC7, no steel-framed structure before or since 9/11 has ever collapsed so completely from damage and/or fire alone.

While it is true that reports are often not peer-reviewed, and that the military actions and some of the other consequences mentioned above occurred before the NIST reports were available, the widespread public questioning of the official story, launched within two days of 9/11, together with the many omissions and distortions found in the 9/11 Commission Report of 2004, demanded an investigation of unimpeachable integrity, with an independent peer review of the NIST work [17,18]. The 9/11 Commission Report never mentioned the destruction of WTC7, a 47-story building, and NIST never examined the actual fall and aftermath of the Twin Towers’ destruction. Also at stake were the lives of many in the ongoing wars and the treatment and care of the thousands who had breathed the lethal dust or powder in New York City [13].

Ironically, the very seriousness of NIST’s ethical failure in omitting meaningful peer review has resulted in a thorough, but non-official, independent peer review of the NIST reports. At present, over 2800 highly qualified scientists, engineers, and architects, as well as many other scholars, have examined the official account, including the NIST reports, and have found it to be in violation of the scientific method and the norms of genuine scientific research [12,19]. For example, as stated previously, the NIST investigation never examined the actual fall of the Twin Towers, nor did it examine the building remains for explosives, as required by the NFPA (National Fire Protection Association) guidelines in cases of catastrophic collapse [20]. Independent analysis of the physical evidence, including the WTC debris powder or dust, points to the controlled demolition of the three buildings cited above [21]. For example, the very high percentage of iron-rich micro-spheres found in the powder by the R.J. Lee Group, the USGS (U.S. Geological Survey), and others indicated the use of thermite, a substance that can have both incendiary and explosive properties [22,13]. These independent findings show the great value of the peer review process and point to the need for a more advanced and open form of peer review.

Such an open process occurred during one of the public input sessions held by NIST in 2008, with startling results. NIST had invited public comments on its preliminary findings on why WTC7 collapsed. In responding to a comment by David Chandler, Shyam Sunder, the lead NIST investigator, claimed that in NIST’s structural model the visible portion of WTC7 fell a distance equivalent to 17 floors in 5.4 s, which is 1.5 s, or about 40%, longer than the 3.9 s that free fall would require [23]. NIST had stated previously that this was “consistent with physical principles.” In his comment, David Chandler, a high school physics teacher, pointed out that measurements of the motion of the building’s top NE corner, made by a variety of methods, showed that there was in fact free fall. Chandler’s measurements indicated free fall for the first 2.5 s, equivalent to a distance of 8 floors, or about 30 m [24]. However, NIST did not acknowledge this fact. As Shyam Sunder had previously stated: “[A] free fall time would be an object that has no structural components below it. […] [T]here was structural resistance […] in this particular case.” Later, NIST simply incorporated a value of 2.25 s of free fall, based on its own measurement, into its final report without comment and quietly removed the statement about its analysis being “consistent with physical principles.” By failing to address the implications of free fall, NIST’s final report, in this context, has all the earmarks of attempted scientific fraud [25].
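The figures quoted here can be checked with elementary kinematics: a body falling from rest under gravity alone covers d = ½gt². The following is a minimal sketch, assuming g = 9.81 m/s² and an idealized drop from rest; the floor-count conversions are only as precise as NIST’s and Chandler’s stated numbers:

```python
import math

G = 9.81  # standard gravitational acceleration, m/s^2

def free_fall_distance(t):
    """Distance (m) fallen from rest in t seconds under gravity alone."""
    return 0.5 * G * t ** 2

def free_fall_time(d):
    """Time (s) to fall d metres from rest under gravity alone."""
    return math.sqrt(2 * d / G)

# NIST's figures: the visible portion fell a distance equivalent to
# 17 floors in 5.4 s, versus 3.9 s for unimpeded free fall.
drop_17_floors = free_fall_distance(3.9)   # ~74.6 m implied by the 3.9 s figure
excess_time = 5.4 - 3.9                    # 1.5 s longer than free fall
excess_pct = 100 * (5.4 / 3.9 - 1)         # ~38%, i.e. roughly "40% longer"

# Chandler's measurement: free fall for the first 2.5 s.
chandler_drop = free_fall_distance(2.5)    # ~30.7 m, about 8 floors

print(f"Implied 17-floor drop: {drop_17_floors:.1f} m")
print(f"Excess over free fall: {excess_time:.1f} s ({excess_pct:.0f}%)")
print(f"First 2.5 s of free fall: {chandler_drop:.1f} m")
```

The 2.5 s figure does indeed correspond to about 30 m of pure free fall, and 5.4 s exceeds 3.9 s by roughly 40%, consistent with the numbers cited in the exchange.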

Two days after 9/11, on 13 September 2001, Professor Zdeněk P. Bažant of Northwestern University submitted for peer review a paper with one of his students, Yong Zhou, as co-author [26]. The paper was a theoretical analysis of the WTC Towers’ collapses. It argued that, “if prolonged heating caused the majority of columns of a single floor to lose their load carrying capacity, the whole tower was doomed.” The paper was submitted to the ASCE (American Society of Civil Engineers) Journal of Engineering Mechanics and was, after peer review and some modifications, published in 2002. Later, NIST cited this paper as support for its own conclusions [27]. However, Bažant and Zhou’s paper never attempted to explain the many different physical observations, such as the lateral high-velocity ejections of materials for hundreds of meters, and the fact that there was no pile driver to crush each tower, since all materials were blown outside the buildings’ footprints [28]. The acceptance of Bažant and Zhou’s paper by the ASCE and its use by NIST are therefore highly questionable. In this important instance, the peer review process allowed publication of a theoretical paper purporting to explain an event with serious and ongoing consequences for society, while ignoring the major physical observations that disproved the paper’s theory. Moreover, Bažant’s critics have had difficulty getting the ASCE to publish their significant criticisms. See, for example, the experience of James Gourley [29].

The highly charged political environment surrounding 9/11 has greatly impeded the acceptance and publication of research papers that question or contradict the official account of that event. A glaring example of bias on the part of the ASCE editors is provided by the experience of Tony Szamboti and Richard Johns, who submitted a critique of a subsequent paper by Jia-Liang Le and Zdeněk Bažant entitled “Why the Observed Motion History of the World Trade Center Towers Is Smooth” [30]. The latter paper appears to be a response to an earlier critique of Bažant by Graeme MacQueen and Tony Szamboti, which predicted a “jolt” if the top 12 stories of WTC1 had indeed fallen on the lower, undamaged portion of the building [31]. The Szamboti and Johns paper was rejected by the ASCE editors as being “out of scope.” As Szamboti and Johns have since noted, “It is not possible for a Discussion paper, one that simply corrects errors in a paper that is already published, to be out of scope for a journal” [32]. Independent researchers see this as clear proof that the editors were unwilling to allow Le and Bažant’s paper to be corrected.

Despite the difficulties encountered by independent scholars in their study of the events of 9/11, a number of important papers have survived the submission and peer review process in mainstream journals, though sometimes with attendant controversy. A particularly important peer-reviewed paper by Harrit, Farrer et al. analyzed red-gray chips found in the WTC debris powder and showed them to contain nano-thermite, an advanced form of thermite that has both incendiary and explosive properties and is manufactured in military facilities [33]. This finding by independent scientists, made after the NIST investigators had neglected to examine the WTC dust, has thus far not been acknowledged or contested by NIST. The paper was published in the Bentham Open Chemical Physics Journal, whose editor-in-chief, Marie-Paule Pileni, subsequently resigned, claiming she had not been informed of the paper’s publication [34]. This incident further illustrates the high degree of tension and politicization surrounding this very important field of research. It is no wonder, under these conditions, that the peer review process appears to be broken, as illustrated by the examples cited in this paper and elsewhere [35].

In a recent paper published in Europhysics News (EPN), Steven Jones, Robert Korol, Anthony Szamboti, and Ted Walter conclude that the physical evidence points to controlled demolition as the real cause of the three total building destructions in New York City [21]. The editors of EPN stated that they “considered that the correct scientific way to settle this debate was to publish the manuscript and possibly trigger an open discussion leading to an undisputable truth based on solid arguments.” They then received comments both for and against the paper’s arguments, including a letter from an NIST spokesman and a letter from a former NIST employee urging NIST to “blow the whistle on itself now” before awareness of the “disconnect between the NIST WTC reports and logical reasoning” grows exponentially. After receiving these comments, the editors declared in a letter that they themselves “do not endorse or support these (the paper’s) views.” This premature conclusion, issued while the debate is still in progress, clearly illustrates the pressures on editors and institutions that can lead to the suppression of research [36]. However, in this case, the editors have allowed the paper to stand, and it now has over half a million views or downloads, a fact that EPS (European Physical Society) President Christophe Rossel declared to be a “good thing.” The editors of EPN are to be commended for allowing light to shine on this issue.

The universities, once bastions of independent thinking and research, appear to be increasingly controlled by corporate interests, on which they depend for funding in a race for survival or competitive growth. This development not only influences the choice of research topics but also affects the peer review process. It is naïve to suppose that papers unfriendly to powerful corporate interests will always be treated fairly. For example, the venerable University of Cambridge, U.K., now includes the BP (British Petroleum) Institute [37]. According to the Institute’s website, “The University of Cambridge BP Institute was established in 2000 by a generous endowment from BP, which has funded faculty positions, support staff, and the Institute Building, in perpetuity. The Institute research focuses on fundamental problems … spanning six University Departments.” Prominent individuals who have pointed to the pursuit of oil as the primary reason for the Iraq War include former Federal Reserve Chairman Alan Greenspan, former Senator and Secretary of Defense Chuck Hagel, and General John Abizaid, former head of U.S. Central Command and Military Operations in Iraq [38]. Since it is openly admitted that the Middle East wars spawned by 9/11 were driven by oil interests, can a university funded partly by oil interests now deal with the events of 9/11 both scientifically and with intellectual integrity?

Like most other universities, Cambridge, with one exception (a theoretical paper by K. A. Seffen that, like Bažant’s, ignores the physical evidence), has not yet dealt scientifically with the events of 9/11 at all [39,40]. Instead of scientists analyzing 9/11 using the available observations and physical evidence, the university sponsors a Leverhulme-funded project, Conspiracy and Democracy, that examines 9/11 as one of many “conspiracy theories” [41]. “Conspiracy theory” is a pejorative term coined and promoted by the CIA (Central Intelligence Agency) since the 1960s to denigrate the views of anyone who questions the official accounts of Deep State events such as the John F. Kennedy assassination [42].

One notable exception to the universities’ failure to deal with 9/11 is the work of Professor Leroy Hulsey at the University of Alaska [43]. His work models the destruction of Building 7 (WTC7). According to Hulsey, his research is a “completely open and transparent investigation” that invites input from other technical experts and the public. Preliminary results of this research cast serious doubt on the NIST reports for WTC7 [44]. When it is completed, Hulsey plans to submit his work for peer review by engineering journals. Hulsey’s open approach is in striking contrast to that of NIST, which clearly and unaccountably rejected the peer review process by stating that it would not release details of its WTC7 collapse-initiation model because doing so “might jeopardize public safety” [45].