The following essay is adapted from the book Back on the Road to Serfdom: The Resurgence of Statism, edited by Thomas E. Woods Jr. (ISI Books, 2011).

We are perhaps apt to forget that during the Cold War, it was generally conceded that the Soviet Union had a higher rate of economic growth than the United States. Given that the United States accounted for nearly half of world output in 1945, the logic held that it did not have room to grow like the other nations of the world, which collectively accounted for the other half. Starting from a much lower base—and having gained an empire—the USSR surely could expect greater economic expansion than the United States.

There was no more conﬁdent advocate of this position than the postwar world’s premier economist, Paul A. Samuelson. Samuelson touted the growth record of the USSR in his book Economics: An Introductory Analysis, the leading economics textbook of the era, and he said the same thing as adviser to those in power. When John F. Kennedy was running for president in 1960, Samuelson wrote to the Democratic candidate,

“America has deﬁnitely been falling behind not only with respect to the USSR, but with respect to most of the other advanced countries of the world.

For years, our production has been growing more slowly than that of Russia, Western Germany, Japan, [and a host of other countries]” (emphasis in the original).1

JFK offered no resistance to this point, and few others in Washington did either. By the 1980s, the CIA’s national estimates held that the USSR’s economy, which had been at mass famine levels four decades prior, was now half the size of that of the United States. The Soviet Union’s rates of growth had been so much higher than those of the United States, according to U.S. intelligence, that the two economies were possibly on a path of convergence.2

Then, in 1989, an ofﬁcial in the USSR’s national accounts bureau named Yuri Maltsev defected to the United States and revealed that by good standards of measurement, the Soviet economy stood at only 4 percent of the U.S. total. After the Soviet state collapsed two years later, investigations by the World Bank, the International Monetary Fund, and the Organization for Economic Cooperation and Development concluded that the Soviet economy had been only half as big as the CIA reckoning, reaching about a fourth or a ﬁfth the size of the U.S. economy. Maltsev stuck with his number, and soon he was joined by dissenters from within the Western statistical bureaucracies, such as William Easterly at the World Bank. An old rule of thumb in the face of two clusters of professional estimates is to split the difference. Applying the rule in this case, we can say that the Soviet economy peaked at about one-eighth the size of the American economy.3
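The split-the-difference arithmetic can be made explicit. The following minimal sketch treats the figures above as given: Maltsev's 4 percent against the Western agencies' fourth-to-fifth range, with that range reduced to its midpoint as an illustrative assumption.

```python
# Split-the-difference arithmetic for the two clusters of estimates.
# Both figures are the essay's; collapsing the Western range to its
# midpoint is an illustrative assumption.
maltsev_estimate = 0.04                  # Maltsev: 4 percent of U.S. output
western_estimate = (1 / 4 + 1 / 5) / 2   # World Bank/IMF/OECD: a fourth to a fifth

midpoint = (maltsev_estimate + western_estimate) / 2
# midpoint comes out near 0.13, close to one-eighth (0.125)
```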

Although the economic failures of the centrally planned Soviet state are now well documented, no less a champion of the free market than F. A. Hayek expressed doubts that free-market capitalism was superior to planning when it came to total output and standards of living. In The Road to Serfdom Hayek wrote:

Which kind of values ﬁgure less prominently in the picture of the future held out to us by the popular writers and speakers . . . ? It is certainly not material comfort, certainly not a rise in our standard of living or the assurance of a certain status in society which ranks lower. Is there a popular writer or speaker who dares to suggest to the masses that they might have to make sacriﬁces of their material prospects for the enhancement of an ideal end? Is it not, in fact, entirely the other way round?4

The Road to Serfdom was a warning that collectivism is a temptation of the most serious sort, in that it has the ring of a good trade. In exchange for civil liberties, which is to say a high degree of personal, familial, and community autonomy, submission to a centralized state promised both to eliminate social inequality and to bring material well-being, if not affluence.

This is one of the great overlooked aspects of The Road to Serfdom: Hayek is careful to argue for the market not on the grounds of what it may produce in terms of standards of living. Rather, he urges that yielding to the market will make us better persons, though it may make us economically poorer. Under “individualism,” we will develop good values and habits “which are less esteemed and practiced now—independence, self-reliance, and the willingness to bear risks, the readiness to back one’s own conviction against a majority, and the willingness to voluntary cooperation with one’s neighbors.”5

This defense of individualist over collectivist values is the book's strongest suit. But history has shown that the actual road to serfdom not only leads to the uncivilized value structure of which Hayek wrote so eloquently; it also debilitates living standards, contrary to Hayek's fear that there were legitimate material reasons to be tempted by collectivism.

We should be careful not to fall into the common trap of holding that economics is inevitably the science of trade-offs—a trap that snared even Hayek. He felt compelled to write The Road to Serfdom “to the socialists of all parties” because he believed that material well-being and social equality were plausible results of collectivism. Hayek's conclusion was perhaps not unreasonable at the time, given that the ascendant Nazi Germany was achieving a higher rate of economic growth than the less collectivist Britain to which he had fled. But in fact, such benefits are not plausible. More importantly, preparing the ground for collectivism at all may well set us on the slippery slope toward impoverishment more quickly than we think.6

Today, more than sixty-ﬁve years after the publication of The Road to Serfdom, the United States seems to be taking alarming steps in the collectivist direction. To understand where this path leads, we need not look at something so manifestly disastrous as the Soviet economy, whose history is one of privation, supply-demand disconnect, constant rescues by foreign capital, and unsustainability tantamount to simple preposterousness. America’s own history, while blessedly bereft of analogues to the Soviet experience, is itself quite clear about what happens when nods are made in the collectivist direction. For an investigation of the course of American economic history since the Civil War reveals a remarkable truth: all periods of prosperity in the United States have coincided with decided efforts to keep collectivist inclinations at bay, and all periods of economic weakness have occurred in the context of dalliances with collectivism—that is, with efforts to impose governmental management on the economy.

The frightening truth is that if America’s leaders do not understand this history, our government may only double down on economic policies that have caused trouble in the past.

The American Economy: Potency and Act

The most significant fact about the past century and a half, treated as a statistical run, is that it had an inﬂection point. This was the one-third mark, 1913. Before that year, the macroeconomic performance of the United States, by the main measurements, was regular and strong. After that point, however, extended contractions and bouts of new, unfamiliar negative side effects—namely, unemployment and inﬂation—emerged rather out of the ether.

The most impressive half century in American—arguably world—economic history was that which followed the Civil War: the nearly fifty years from 1865 to 1913. The American economy expanded at a yearly rate of 3.62 percent from 1865 to 1913. By way of comparison, from 1913 to 2008 (also a peak-to-peak period), the American economy grew at 3.26 percent per year. The difference of about four-tenths of a percent per year proved enormous. Had the United States maintained the trend that held in the half century after the Civil War, it would now, in the second decade of the twenty-first century, be about half again richer than it is.
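The compounding behind this comparison is easy to check. A minimal sketch, using the growth rates and the 95-year span (1913 to 2008) given above:

```python
# Compound the two growth rates over the 95 years from 1913 to 2008
# and compare the resulting output levels.
pre_1913_rate = 0.0362    # 1865-1913 annual growth (essay's figure)
post_1913_rate = 0.0326   # 1913-2008 annual growth (essay's figure)
years = 95                # 1913 to 2008

ratio = ((1 + pre_1913_rate) / (1 + post_1913_rate)) ** years
# ratio lands around 1.4: a four-tenths-of-a-point gap, compounded for
# nearly a century, produces the "half again richer" counterfactual.
```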

Macroeconomic performance is generally judged on two criteria: growth and “variation.” Variation refers to the degree of steadiness of growth and of macroeconomic ill-effects, above all unemployment and price instability. Here again, the era of the Robber Barons is the shining one. The greatest decades of economic growth in American history were the 1870s and 1880s, when the economy expanded by roughly two-thirds in each decade. There was one significant recession in this period, in 1873. It was overwhelmed so soon and so comprehensively that the 70 percent real growth gained in the 1870s amounts to the largest of any decade in the peacetime history of the United States.

As for the “panic of 1873” of textbook lore, that year brought a big drop in output, with people thrown out of work. The episode was a function of the incredible depreciation of the dollar that had been undertaken in the Civil War, when (following decades of price stability) the Union government printed greenbacks so quickly that the dollar suddenly lost half its value. After 1865, the U.S. government pledged to restore the value of the dollar against gold (and consumer prices), but doubts about this led to speculative investments to hedge the uncertainty and ultimately produced the asset crash of 1873.

In the wake of the 1873 bust, however, the dollar slowly reclaimed its value, just as the U.S. government had pledged. The price level declined by 1.4 percent per year on average for the next two decades, such that by the 1890s, a dollar saved before 1860 achieved its original purchasing power. As for unemployment, the term was not coined until the tail end of the century for a reason. The United States was importing tens of millions of immigrant workers because the growth boom had produced chronic labor shortages.

President Barack Obama's first chair of the Council of Economic Advisers, Christina Romer, owes her professional reputation to bringing these realities to light in her doctoral dissertation at MIT in the 1980s. Romer found that the era of the American industrial revolution (and by her analysis the trend held until 1930) was so superior in terms of growth and variation—growth was high; recessions were rare, shallow, and short; prices changed little as employment boomed—that it effectively defined the kind of results that governmental macroeconomic management should aspire to. The irony was that there was precious little macroeconomic management at all for most of this era. We can say with statistical precision that there has never been a golden era in American macroeconomic history like the 1870s and 1880s.7

There were two other significant recessions in the half century after the Civil War. These occurred in 1893 and 1907. Both cases correlated to governmental overtures to introduce macroeconomic policy. In 1890, the United States signaled that, despite having attained the very price level that had held for decades before the Civil War, as well as having watched growth cruise at more than 5 percent per year for the long term, it was now going to monetize a new asset, silver. The prospect was of too much currency in the economy (1873 redux), and the markets quickly swelled and crashed. The recovery from 1893 stayed tepid while President Grover Cleveland spent his term trying to end the silver lark. Aggregate output was ﬂat from 1892 until the next election year, 1896; in the latter year, free-silverite William Jennings Bryan succeeded Cleveland as Democratic nominee for president. The strong recovery began only when, with the election of Republican William McKinley in 1896, the United States committed to dropping the program for the extra silver money. Overall, growth was slower in the 1890s than it had been in preceding decades—33 percent for the decade, a typical twentieth-century number. But from the year McKinley was elected until 1907, growth came in at 4.6 percent per year, approaching the 1870s–1880s standard of 5.2 percent annually. This is tantamount to saying that the real trend of yearly growth in the post–Civil War period was not 3.62 percent but something like 5 percent per year—because 5 percent held as long as the government stayed out of the way.

In 1907, there was another market crash and recession, only this time a strong and sustained recovery did not follow. The recovery, such as it was (3.3 percent growth per year until the 1913 peak), was haunted by a new prospect: that comprehensive new tools allowing governmental intervention into the economy would be put in place. Immediately in the wake of J. P. Morgan’s famous settling of the markets in the fall of 1907, measures were introduced in Congress to create a federal reserve (or central banking) system that would be the ﬁrst line of defense in any future crisis. In addition, the push for a federal income tax, which had died in the courts in recent years, gained renewed momentum.

Both of these massive means of governmental intrusion in the economy, the Federal Reserve and the income tax, were ﬁnally established in the same year: 1913—our inﬂection point.

Even though the mild recovery after 1907 occurred before 1913, its character may actually have owed something to 1913. Capital is known for looking to the future to take a gander at prospective returns. Had there been no prospect of the Fed and the income tax in the wake of the economic events of the fall of 1907, there may not have been a recession at all, let alone a weak recovery. For if 1908 had brought a recovery on the order of nearly 5 percent annual growth as had been initiated in 1896, we would not even call the 1907 event a recession. There were episodes in the 1880s where growth dipped and assets were sold, but the recoveries were so quick and so big that the down periods do not register to the naked eye. It is not out of the question that this fate was in store for the economy had the 1907 crash not been met with calls for a Fed and an income tax. This, of course, is a hypothetical point, but there is no shortage of historical evidence that is consistent with it. Christina Romer calculated that the recovery in industrial production from the 1907 event proceeded according to recent precedent until 1912.

The Variation Era

Perhaps the most forgotten period in American economic history is the eight years that followed the creation of the Fed and the income tax in 1913. From 1913 to 1921 the growth rate came in at just 1.4 percent per year. The period included two long recessions: one beginning in 1913, in which that year's level of production was equaled only two years later (and with the assistance of military production that did nothing for living standards), and another from 1919 to 1921 that was simply the worst depression the nation would ever suffer outside of the 1930s. “Unemployment” quickly joined the parlance; people scrambled to measure the phenomenon, and the consensus was that it stayed in double digits in the latter recession. And then this novelty: the price level went up by 110 percent from 1913 to 1920, and then swerved down in the year following by 25 percent. Strikes swept the land, since wages had no hope of keeping up with the unprecedented inflation, and the new income tax system hit persons making as little as $1,000 a year ($11,000 in today's terms).8

Before 1913, there had been at most only shadows of government ﬁscal and monetary policy, and the United States had cruised at its 5 percent per-annum rate of expansion, with the price level making small oscillations around the antebellum number. But after 1913, the government used its new macroeconomic policy tools to the hilt. Immediately after its creation, the Fed arranged for a doubling of the money supply—this in the face of a manifest recession. The inevitable result was the doubling of the price level. As for income taxes, the ﬁrst top rate, upon passage of the Sixteenth Amendment in 1913, was 7 percent. In four years’ time, it was up elevenfold, to 77 percent. Meanwhile, someone whose income merely kept up with the inﬂation engineered by the Fed—that is, someone who saw no actual gain in income—could be pushed into the stratospheric top tax bracket, since the progressive tax brackets were not adjusted for inﬂation. (This is the phenomenon known as bracket creep.) The investor class soon adjusted away from entrepreneurship and into tax shelters.
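The mechanics of bracket creep can be sketched with a toy progressive schedule. The two brackets and the incomes below are hypothetical, chosen only to echo the 7 and 77 percent rates mentioned above; the mechanism, not the particular numbers, is the point.

```python
def tax_owed(income, brackets):
    """Tax under a progressive schedule.

    brackets: list of (upper_threshold, marginal_rate), thresholds ascending.
    """
    owed, lower = 0.0, 0.0
    for upper, rate in brackets:
        if income <= lower:
            break
        owed += (min(income, upper) - lower) * rate
        lower = upper
    return owed

# Hypothetical two-bracket schedule: 7% up to $20,000, 77% above.
brackets = [(20_000, 0.07), (float("inf"), 0.77)]

income_before = 15_000            # taxed entirely in the 7 percent bracket
income_after = income_before * 2  # nominal pay doubles with the price level

real_net_before = income_before - tax_owed(income_before, brackets)
real_net_after = (income_after - tax_owed(income_after, brackets)) / 2

# Real pre-tax income is unchanged, yet real after-tax income falls,
# because the fixed nominal brackets now tax a third of the pay at 77%.
```

With these numbers, real after-tax income drops from $13,950 to $10,450 even though real pre-tax income never moved: the inflation alone pushed the worker into the confiscatory bracket.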

Then there was the recovery—perhaps the most famous recovery in American history. The Roaring ’20s that followed 1921 aped the bygone era very well: 4.7 percent yearly growth through 1929, unemployment gone, and a price level that barely moved. The government’s macroeconomic policy posture during this period is unmistakable: the Fed expressly got out of the business of trying to undo the 1913–20 inﬂation via a commensurate massive deﬂation, and the marginal rate of the income tax was cut by 52 points. In other words, ﬁscal and monetary policy retreated.

How we have ever associated the onset of the Great Depression with a “crisis of capitalism” is anyone's guess. In fact, the years 1929–33 brought historic governmental intrusions in the economy. In late 1929, the Fed resumed its 1920–21 efforts to reclaim the 1913 price level by appreciating the value of the dollar. Deflation held at 9 percent per year from late 1929 to early 1932, blowing away the gentle deflation standard of the post-1873 years that had seen constant, rapid growth. Over the same interval, the marginal income tax rate jumped two-and-a-half-fold, to 63 percent. Severe deflation and confiscatory taxes led to a capital strike, with savage unemployment being the inevitable result. And this is not to mention the Smoot-Hawley Act of 1930, which raised tariffs to record levels, cut foreign trade in half, and convinced the world that convertible currencies—and indeed international economic cooperation—were no longer useful.

In other words, ﬁscal and monetary policy extended their scope and sway as never before. In turn, real conditions in the United States became as horrendous as any developed country had experienced since the dawn of the industrial age.

All of this macroeconomic intervention occurred during the Herbert Hoover administration, before Franklin Roosevelt took ofﬁce and instituted his New Deal. Under FDR, the Fed and the U.S. Treasury actually dropped the misguided deflationist policy. By raising the gold redemption price 75 percent, to $35 per ounce, the government effectively announced that the United States would never strive to appreciate the dollar again. It remained an open question whether the U.S. government would strive to depreciate the currency, but in point of fact it did not. The consumer price index from 1934 to 1940 mimicked the band of oscillation that had prevailed in the era of the Robber Barons: small moves around par.

But while the Roosevelt administration reversed course on monetary policy, it only built on Hoover’s ﬁscal policy. FDR increased the marginal tax rate even more, sending it up to 73 percent—nearly triple the rate that had supervised the Roaring ’20s.

This mixed record on monetary and ﬁscal policy produced a mixed recovery at best. Output did go up slightly during this period, and by 1939 it ﬁnally returned to the 1929 level (adjusting for population, which grew at a tiny rate). But instead of posting a peak-to-peak growth rate in output of 4–5 percent per year, as had been usual before 1913, the New Deal recovery—not the mot juste—was nil peak to peak.

From 1940 to 1944, gross domestic product (GDP) boomed in the United States as living standards collapsed. We should not be detained by the aggregate output, or even the employment, statistics of the World War II years if the topic under consideration is economic recovery. The amount of goods and services produced for the real sector hit bottom with the war. Government/military goods, which are not real goods, became the exclusive specialization of the American economy in this period. Calling the 1940–44 run a recovery (let alone a great one), as is so often done, is one of the great misnomers of modern economic history.

Consider two pertinent questions about this period. First, given that employment rebounded massively during the war, but that pay for those employed had to be saved on account of the shortages, did that saved pay retain its value after the war? And second, was the GDP boom of 1940–44 consolidated and built on as the economy cycled into real production?

The answer to the ﬁrst question is that the saved pay did not retain its value, meaning that one cannot really hold that there had been a true return to full employment during the war. From 1944 to 1948, the United States experienced inﬂation of 42 percent (the Fed had been expansionist again), devaluing savings accrued before that time. Moreover, redemptions of U.S. war bonds (where so much of workers’ pay had gone during World War II) were taxed at one’s marginal income tax rate, and rates were jacked up across the board, the top one reaching 91 percent. Therefore, when World War II employees redeemed the bonds after the war, the World War II employer—the government—recovered much of what it had laid out in pay to its workers. A conservative estimate is that given inﬂation and taxes, the average World War II worker lost half of his or her pay to the government. In economic terms, this means that World War II solved the unemployment problem of the 1930s only half as much as is commonly supposed.
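The “lost half of his or her pay” estimate can be reconstructed from the figures above. In this minimal sketch, the 42 percent inflation is the essay's number, while the 25 percent marginal rate on bond redemptions is an illustrative middle-bracket assumption (the essay gives only the 91 percent top rate).

```python
# One saved wartime dollar, run through the postwar numbers.
saved = 1.00
marginal_rate = 0.25          # illustrative middle-bracket assumption
inflation_1944_48 = 0.42      # essay's figure for 1944-48

after_tax = saved * (1 - marginal_rate)           # redemption taxed as income
real_value = after_tax / (1 + inflation_1944_48)  # deflate by 42 percent
# real_value comes out near 0.53: roughly half of wartime pay, in real
# terms, flowed back to the government via taxes and inflation.
```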

As for the second question, GDP fell precipitously from 1944 to 1947, by 13 percent, as prices soared. This was a clear indication that the growth of the war years was artificial. Nonetheless, living standards improved, as the real sector made huge inroads into the government's share of economic production. Then a transition hit: the postwar inflation stopped. This occurred because the U.S. government honored the commitment it had made at the 1944 Bretton Woods conference not to overproduce the dollar and jeopardize the $35 gold price. And when Republicans won control of Congress in 1946, they insisted on getting a tax cut; they finally passed it over President Harry Truman's veto in April 1948. The institutions of 1913 had signaled a posture of retreat.

That is when postwar prosperity got going. From 1947 to 1953, growth rolled in at the old familiar rate of 4.6 percent per annum, as unemployment dived and prices stayed at par except for a strange 8 percent burst just as the Korean War started.

Taxes were still high, however, with rates that started at 20 percent and peaked at 91 percent. When recession hit in 1953, a chorus rose that they be hacked away. But for the eight years of his presidency, Dwight D. Eisenhower resisted these calls for tax relief. Despite the common myth of “Eisenhower prosperity,” the years 1953 to 1960 saw economic growth far below the old par, at only 2.4 percent, and there were three recessions during this period. Monetary policy, for its part, was unremarkable. Once again the coincidence held: unremarkable monetary policy and aggressive tax policy led to a half-baked result.

Much ink has been spilled on how the JFK tax cuts of 1962 and 1964 were “Keynesian” and “demand-side.” Whatever we want to call the policy mix of the day, in the JFK and early Lyndon B. Johnson years, ﬁscal and monetary policy clearly retreated. Income taxes got cut across the board, with every rate in the Eisenhower structure going down, the top from 91 percent to 70 percent, the bottom from 20 percent to 14 percent. And monetary policy zeroed in (at least through 1965) on a stable value of the dollar, with the gold price and the price level sticking at par after making startling moves up with the ﬁnal Eisenhower recessions. The results: from 1961 to 1968, real U.S. growth was 5.1 percent yearly; unemployment hit peacetime lows; and inﬂation held in the heroic 1 percent range before the latter third of the period, when it began creeping up by a point a year. The real effects inspired slogans. If four decades prior had been the “Roaring ’20s,” these were the “Swingin’ ’60s” and “The Go-Go Years.”

At the end of the decade, however, the government loudly signaled a reversal in fiscal and monetary policy. The Fed volunteered that it would finance budget deficits, and LBJ pleaded for and got an income tax surcharge, soon accompanied (under Richard M. Nixon) by an increase in the capital-gains rate on the order of 100 percent. This two-front reassertion of fiscal and monetary policy held for a dozen years. The nickname eventually given to that period, in view of the real effects, was the “stagflation era” (for stagnation plus inflation). From 1969 to 1982, real GDP growth fell to half the rate of the Go-Go Years, to 2.46 percent; the price level tripled (with gold going up twentyfold); average unemployment roughly doubled, to 7.5 percent; three double-dip recessions occurred; and stocks and bonds suffered a 75 percent real loss. It was the worst stretch of American macroeconomic history save the 1930s, and it inspired Christina Romer to write a dissertation.

Paul Volcker took over the Fed chairmanship in 1979. He was determined to stabilize the dollar (given the recent 200 percent inflation) at least against prices, if not against gold and foreign exchange. He ultimately did this well enough with the support of the Ronald Reagan administration. The average inflation rate for the period after 1982, and beginning strongly in that year, was about a third of what had prevailed in the 1970s—3 percent as opposed to 9 percent. The monetary authorities even came to announce that they were pursuing “inflation targeting.” This retreat in monetary policy was once again coupled with Kennedyesque tax policy, with all rates getting reduced substantially, and most of the brackets eliminated in the bargain. “The Great Moderation” became the term coined to describe the 1982–2007 period, in which annual growth came in at 3.3 percent, with seven-year runs at 4.3 percent in the 1980s and 1990s. There were only two recessions in this period, both mild. GDP growth settled into the tightest band ever recorded since quarterly statistics began in 1947. Average unemployment fell to half the stagflation level.

Finally, with the “Great Recession” of 2008–10—which even with its five quarters of negative GDP growth and 10 percent unemployment does not equal the extent of the 1980–82 double-dip recession—monetary policy has declared its everlasting intention to be relevant again. Taxes are set to rise by statute in 2011, and by commission after that, so as to cover federal spending 50 percent larger than we are accustomed to. Once again the pattern holds: a growth stoppage, accompanied by variation, coincides with fiscal and monetary policy rearing their heads.

Business versus Busy-ness

The post-1913 period of American economic history is a world of fits and starts, at least until the Great Moderation, which dissipated with the government bailouts of 2008–9. In contrast, the pre-1913 era has an integrity, a statics, with patterns that hold for a long time. Its story is easier to relate. Variation, when it came in that bygone time, coincided with the weird appearance of a shadow, that of an overseer seeking power to bend things to a different course.

The era of the Robber Barons was one of business, perhaps the most supreme there ever was. The post-1913 era—the macroeconomic era, the era of policy—was rather one of busy-ness. Economic performance shed its regularity and constant peak nature in favor of previously unheard-of growth swings, so much so that a clamor started to measure that very thing, and to do so quarterly.

In the canons of macroeconomics, fiscal and monetary policy are supposed to bring “stabilization” to an economy. That is, policy will smooth out the cycle of boom and bust and keep inflation and unemployment within bounds. Advocates of macroeconomic policy have long conceded that there will have to be trade-offs in exchange for these benefits: lower growth will be the price of smoothness, and some unemployment will have to be tolerated to contain inflation, or some inflation to contain unemployment.

And yet from a simple statistical perspective, it is clear that the macroeconomic era gave evidence not so much of trade-offs as of diminutions across the board. Growth was both smoother and higher in the pre-1913 era. Unemployment and inflation not only did not exist inversely to each other; they did not exist at all.

What have been the costs of having macroeconomic policy? Recall that the real growth trend of the pre-1913 era was something like 5 percent per annum, not the recorded 3.62 percent. The unusual breakdowns in the long peak-growth runs of that era occurred when the government attempted to introduce macroeconomic policy. This means that the output forgone since 1913 is not on the order of half of what we now produce, but several times it. Had we grown at 5 percent annually since 1913, instead of at the 3.26 percent that in fact happened, we would be five times better off today.
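The fivefold figure follows directly from compounding. A minimal sketch, assuming a span of 97 years (1913 to roughly 2010, on the eve of this essay's publication):

```python
# Counterfactual 5 percent trend versus the recorded 3.26 percent.
trend_rate = 0.05     # pre-1913 trend, per the essay
actual_rate = 0.0326  # recorded growth since 1913
years = 97            # 1913 to roughly 2010 (assumed span)

multiple = ((1 + trend_rate) / (1 + actual_rate)) ** years
# multiple comes out near 5, matching "five times better off"
```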

We may object that correlation is not causation. Perhaps fiscal and monetary policy had nothing to do with the sub–Gilded Age performance of the economy since 1913. Perhaps their absence had nothing to do with the impressiveness of economic performance before then. After all, other things were at work. Maybe so. But we can say one thing for certain: the correlation is fact. Every period of sustained peak economic activity in the United States since 1865 has correlated with the nonexistence, or the blanket retreat, of fiscal and monetary policy.

Although correlation is not causation, the United States would be foolish and reckless to maintain current policy in the face of its unambiguous economic record. Macroeconomic policy, as much as any outright push toward collectivism, is on the record as putting us on the road to serfdom. And if we think there is a high bottom that will always catch us in our mistakes, we are indulging an optimism not based on the lessons of history.

This article appears in the Spring 2011 edition of the Intercollegiate Review and is published here with their gracious permission.

Notes