An iceberg melts in Kulusuk Bay, eastern Greenland. (AP Photo/John McConnico)

In the steady stream of published articles on climate change, there seems to be a consistent omission of a critical element. This was brought to my attention once again when, as an MIT alum, I read the January/February issue of the MIT Technology Review [TR] dedicated to climate change.

Unlike many environmental pop pieces, the TR editors, thankfully, were careful to present realistic views of addressing climate change technologically and economically. Wind and solar power were treated with uncommon candor: the editors admitted that, despite declining costs, these intermittent energy sources cannot supply a modern industrial society at affordable prices.

With a mature appreciation of the uncertainty in climate forecasts, one article described a policy alternative to the so-called “precautionary principle”—the politically popular idea that policy makers should focus on avoiding an imagined worst-case scenario at all costs. Instead, it described a pragmatic, far lower-cost trial-and-error approach that deals with specific consequences of climate change as they actually occur, not as imagined by computer models.

However, these articles omit any discussion of climate change in the pre-industrial past, particularly in the last two millennia, for which there is significant source material. This neglect is critical because it is commonly assumed that atmospheric CO2 is the only meaningful variable behind climate change. Since pre-industrial levels of CO2 appear to have stayed consistently below 300 ppm with little variation, while levels since the mid-twentieth century have risen steadily to about 400 ppm today, atmospheric CO2 is an obvious anthropogenic variable. And, the reasoning goes, since both CO2 levels and global temperature averages have increased together, man’s fossil fuel usage must be causing global warming. The science is settled. We can move on.

Well, let’s continue this line of reasoning and see where it leads. If atmospheric CO2 levels and global temperature are truly correlated, then they ought to be as correlated before the industrial revolution as they are after it. Pre-industrial CO2 levels remained relatively constant. Did pre-industrial global temperatures also remain relatively constant? No, they did not. Over the last thousand years, for example, there was the well-attested Medieval Warm Period from about 900 to 1300, followed by the equally well-attested Little Ice Age from about 1350 to 1800.

Why does the “settled science” of the CO2–temperature relationship fail to hold prior to the age of heavy industry? Obviously, global temperature must be affected by at least one other variable. What is it, or what are they? Until this missing element is known, no climate change mitigation policy based solely on the single-variable CO2–temperature correlation can be assured of success. And until this element is openly discussed and debated, published articles on climate change, however well intentioned, deceptively leave readers with tunnel vision on a very important topic.

Charles A. Clough, M.S. (Atmospheric Science), Th.M. (Old Testament and Semitics), Retired Chief, U.S. Army Atmospheric Effects Team, Aberdeen Proving Ground, and Adjunct Professor, Chafer Theological Seminary, Bel Air, MD, is a Contributing Writer for The Cornwall Alliance for the Stewardship of Creation.