Originally, the Supreme Court of the United States met in a drafty room on the second floor of an old stone building called the Merchants’ Exchange, at the corner of Broad and Water Streets, in New York. The ground floor, an arcade, was a stock exchange. Lectures and concerts were held upstairs. There weren’t many other places to meet. Much of the city had burned to the ground during the Revolutionary War; nevertheless, New York became the nation’s capital in 1785. After George Washington was inaugurated in 1789, he appointed six Supreme Court Justices—the Constitution doesn’t say how many there ought to be—but on February 1, 1790, the first day the Court was called to session, upstairs in the Exchange, only three Justices showed up and so, lacking a quorum, court was adjourned.


Months later, when the nation’s capital moved to Philadelphia, the Supreme Court met in City Hall, where it shared quarters with the mayor’s court. Not long after, the Chief Justice, John Jay, wrote to the President to let him know that he was going to skip the next session because his wife was having a baby (“I cannot prevail on myself to be then at a Distance from her,” Jay wrote to Washington), and because there wasn’t much on the docket, anyway.

This spring, the Supreme Court—now housed in a building so ostentatious that Justice Louis Brandeis, who, before he was appointed to the bench, in 1916, was known as “the people’s attorney,” refused to move into his office—is debating whether the Affordable Care Act violates the Constitution, especially with regard to the word “commerce.” Arguments were heard in March. The Court’s decision will be final. It is expected by the end of the month.

Under the Constitution, the power of the Supreme Court is quite limited. The executive branch holds the sword, Alexander Hamilton wrote in the Federalist No. 78, and the legislative branch the purse. “The judiciary, on the contrary, has no influence over either the sword or the purse; no direction either of the strength or of the wealth of the society; and can take no active resolution whatever.” All judges can do is judge. “The judiciary is beyond comparison the weakest of the three departments of power,” Hamilton concluded, citing, in a footnote, Montesquieu: “Of the three powers above mentioned, the judiciary is next to nothing.”

The Supreme Court used to be not only an appellate court but also a trial court. People also thought it was a good idea for the Justices to ride circuit, so that they’d know the citizenry better. That meant more time away from their families, and, besides, getting around the country was a slog. Justice James Iredell, who said he felt like a “travelling postboy,” nearly broke his leg when his horse bolted. Usually, he had to stay at inns, where you shared rooms with strangers. The Justices hated riding circuit and, in 1792, petitioned the President to relieve them of the duty, writing, “We cannot reconcile ourselves to the idea of existing in exile from our families.” Washington, who was childless, was unmoved.

In 1795, when John Jay resigned from the office of Chief Justice to become governor of New York, Washington asked Alexander Hamilton to take his place; Hamilton said no. So did Patrick Henry. Anyone who wanted the job had to be a little nutty. The Senate rejected Washington’s next nominee for Jay’s replacement, the South Carolinian John Rutledge, whereupon Rutledge tried to drown himself near Charleston, crying out to his rescuers that he had been a judge for a long time and “knew of no Law that forbid a man to take away his own Life.”

In 1800, the capital moved to Washington, D.C., and the following year John Adams nominated his Secretary of State, the arch-Federalist Virginian John Marshall, to the office of Chief Justice. Adams lived in the White House. Congress met at the Capitol. Marshall took his oath of office in a “meanly furnished, very inconvenient” room in the Capitol Building, where the Justices, who did not have clerks, had no room to put on their robes (this they did in the courtroom, in front of gawking spectators), or to deliberate (this they did in the hall, as quietly as they could). Cleverly, Marshall made sure that all the Justices rented rooms at the same boarding house, so that they could at least have someplace to talk together, unobserved.

Marshall was gangly and quirky and such an avid listener that Daniel Webster once said that, on the bench, he took in counsel’s argument the way “a baby takes in its mother’s milk.” He became Chief Justice just months before Thomas Jefferson became President. Marshall was Jefferson’s cousin and also his fiercest political rival, if you don’t count Adams. Nearly the last thing Adams did before leaving office was to persuade the lame-duck Federalist Congress to pass the 1801 Judiciary Act, reducing the number of Supreme Court Justices to five—which would have prevented Jefferson from naming a Justice to the bench until two Justices left. The newly elected Republican Congress turned right around and repealed that act and suspended the Supreme Court for more than a year.

In February, 1803, when the Marshall Court finally met, it did something really interesting. In Marbury v. Madison, a suit against Jefferson’s Secretary of State, James Madison, Marshall granted to the Supreme Court a power it had not been explicitly granted in the Constitution: the right to decide whether laws passed by Congress are constitutional. This was such an astonishing thing to do that the Court didn’t declare another federal law unconstitutional for fifty-four years.

The Supreme Court’s decision about the constitutionality of the Affordable Care Act will turn on Article I, Section 8, of the Constitution, the commerce clause: “Congress shall have power . . . to regulate Commerce with foreign Nations, and among the several States, and with the Indian Tribes.” In Gibbons v. Ogden, Marshall interpreted this clause broadly: “Commerce, undoubtedly, is traffic, but it is something more: it is intercourse.” (“Intercourse” encompassed all manner of dealings and exchanges: trade, conversation, letter-writing, and even—if plainly outside the scope of Marshall’s meaning—sex.) Not much came of this until the Gilded Age, when the commerce clause was invoked to justify trust-busting legislation, which was generally upheld. Then, during the New Deal, the “power to regulate commerce,” along with the definition of “commerce” itself, became the chief means by which Congress passed legislation protecting people against an unbridled market; the Court complied only after a protracted battle. In 1964, the commerce clause formed part of the basis for the Civil Rights Act, and the Court upheld the argument that the clause grants Congress the power to prohibit racial discrimination in hotels and restaurants.

In 1995, in U.S. v. Lopez, the Court limited that power for the first time since the battle over the New Deal, when Chief Justice William Rehnquist, writing for the majority, overturned a federal law prohibiting the carrying of guns in a school zone: the argument was that gun ownership is not commerce, because it “is in no sense an economic activity.” (In a concurring opinion, Justice Clarence Thomas cited Samuel Johnson’s Dictionary of the English Language.) Five years later, in U.S. v. Morrison, Rehnquist, again writing for the majority, declared parts of the federal Violence Against Women Act unconstitutional, arguing, again, that no economic activity was involved.

However the Court rules on health care, the commerce clause appears unlikely, in the long run, to be able to bear the burdens that have been placed upon it. So long as conservatives hold sway on the Court, the definition of “commerce” will get narrower and narrower, despite the fact that this will require, and already has required, overturning decades of precedent. Unfortunately, Article I, Section 8, may turn out to have been a poor perch on which to build a nest for rights.

There is more at stake, too. This Court has not been hesitant about exercising judicial review. In Marshall’s thirty-four years as Chief Justice, the Court struck down only one act of Congress. In the seven years since John G. Roberts, Jr., became Chief Justice, in 2005, the Court has struck down a sizable number of federal laws, including one reforming the funding of political campaigns. It also happens to be the most conservative court in modern times. According to a rating system used by political scientists, decisions issued by the Warren Court were conservative thirty-four per cent of the time; the Burger and the Rehnquist Courts issued conservative decisions fifty-five per cent of the time. So far, the rulings of the Roberts Court have been conservative about sixty per cent of the time.

What people think about judicial review usually depends on what they think about the composition of the Court. When the Court is liberal, liberals think judicial review is good, and conservatives think it’s bad. This is also true the other way around. Between 1962 and 1969, the Warren Court struck down seventeen acts of Congress. (“With five votes, you can do anything around here,” Justice William Brennan said at the time.) Liberals didn’t mind; the Warren Court advanced civil rights. Conservatives argued that the behavior of the Warren Court was unconstitutional, and, helped along by that argument, gained control of the Republican Party and, eventually, the Supreme Court, only to engage in what looks like the very same behavior. Except that it isn’t quite the same, not least because a conservative court exercising judicial review in the name of originalism suggests, at best, a rather uneven application of the principle.

The commerce clause has one history, judicial review another. They do, however, crisscross. Historically, the struggle over judicial review has been part of a larger struggle over judicial independence: the freedom of the judiciary from the other branches of government, from political influence, and, especially, from moneyed interests, which is why the Court’s role in deciding whether Congress has the power to regulate the economy is so woefully vexed.

Early American colonists inherited from England a tradition in which the courts, like the legislature, were extensions of the crown. In most colonies, as the Harvard Law professor Jed Shugerman points out in “The People’s Courts: Pursuing Judicial Independence in America” (Harvard), judges and legislators were the same people and, in many, the legislature served as the court of last resort. (A nomenclatural vestige of this arrangement remains in Massachusetts, where the state legislature is still called the General Court.)

In 1733, William Cosby, the royally appointed governor of New York, sued his predecessor, and the case was heard by the colony’s Supreme Court, headed by Lewis Morris, who ruled against Cosby, whereupon the Governor removed Morris from the bench and appointed James DeLancey. When essays critical of the Governor appeared in a city newspaper, Cosby arranged to have the newspaper’s printer, John Peter Zenger, tried for sedition. At the trial, Zenger’s attorneys objected to the Justices’ authority, arguing that justice cannot be served by “the mere will of a governor.” Then DeLancey simply ordered Zenger’s attorneys disbarred.

Already in England, a defiant Parliament had been challenging the royal prerogative, demanding that judicial appointments be made not “at the king’s pleasure” but “during good behavior” (effectively, for life). Yet reform was slow to reach the colonies, and a corrupt judiciary was one of the abuses that led to the Revolution. In 1768, Benjamin Franklin listed it in an essay called “Causes of the American Discontents,” and, in the Declaration of Independence, Jefferson included on his list of grievances the king’s having “made Judges dependent on his Will alone.”

The principle of judicial independence is related to another principle that emerged during these decades, much influenced by Montesquieu’s 1748 “Spirit of Laws”: the separation of powers. “The judicial power ought to be distinct from both the legislative and executive, and independent,” Adams argued in 1776, “so that it may be a check upon both.” There is, nevertheless, a tension between judicial independence and the separation of powers. Appointing judges to serve for life would seem to establish judicial independence, but what power then checks the judiciary? One idea was to have the judges elected by the people; the people then check the judiciary.

At the Constitutional Convention, no one argued that the Supreme Court Justices ought to be popularly elected, not because the delegates were unconcerned about judicial independence but because there wasn’t a great deal of support for the popular election of anyone, including the President (hence, the electoral college). The delegates quickly decided that the President should appoint Justices, and the Senate confirm them, and that these Justices ought to hold their appointments “during good behavior.”

Amid the debate over ratification, this proved controversial. In a 1788 essay called “The Supreme Court: They Will Mould the Government into Almost Any Shape They Please,” one anti-Federalist pointed out that the power granted to the Court was “unprecedented in any free country,” because its Justices are, finally, answerable to no one: “No errors they may commit can be corrected by any power above them, if any such power there be, nor can they be removed from office for making ever so many erroneous adjudications.” This is among the reasons that Hamilton found it expedient, in the Federalist No. 78, to emphasize the weakness of the judicial branch.

Jefferson, after his battle with Marshall, came to believe that “in a government founded on the public will, this principle operates . . . against that will.” In much the same spirit, a great many states began instituting judicial elections in place of judicial appointment. You might think that elected judges would be less independent, more subject to political forces, than appointed ones. But timeless political truths are seldom true and rarely timeless. During the decades that reformers were lobbying for judicial elections, the secret ballot was thought to be more subject to political corruption than voting openly. Similarly, the popular vote was considered markedly less partisan than the spoils system: the lesser, by far, of two evils.

Nor was the nature of the Supreme Court set in stone. In the nineteenth century, the Court was, if not as weak as Hamilton suggested, nowhere near as powerful as it later became. In 1810, the Court moved into a different room in the Capitol, where a figure of Justice, decorating the chamber, had no blindfold but, as the joke went, the room was too dark for her to see anything anyway. It was also dank. “The deaths of some of our most talented jurists have been attributed to the location of this Courtroom,” one architect remarked. It was in that dimly lit room, in 1857, that the Supreme Court overturned a federal law for the first time since Marbury v. Madison. In Dred Scott v. Sandford, Chief Justice Roger B. Taney, writing for the majority, voided the Missouri Compromise by arguing that Congress could not prohibit slavery in the territories.


In 1860, the Court moved once more, into the Old Senate Chamber. When Abraham Lincoln was inaugurated, on the East Portico of the Capitol, Taney administered the oath, and Lincoln, in his address, confronted the crisis of constitutional authority. “I do not forget the position, assumed by some, that constitutional questions are to be decided by the Supreme Court,” he said, but “if the policy of the government, upon vital questions affecting the whole people, is to be irrevocably fixed by the decisions of the Supreme Court, the instant they are made . . . the people will have ceased to be their own rulers, having to that extent, practically resigned their government into the hands of that eminent tribunal.” Five weeks later, shots were fired at Fort Sumter.

In the decades following the Civil War, an increasingly activist Court took up not only matters relating to Reconstruction, and especially to the Fourteenth Amendment, but also questions involving the regulation of business, not least because the Court ruled that corporations could file suits, as if they were people. And then, beginning in the eighteen-nineties, the Supreme Court struck down an entire docket of Progressive legislation, including child-labor laws, unionization laws, minimum-wage laws, and the progressive income tax. In Lochner v. New York (1905), in a 5–4 decision, the Court voided a state law establishing that bakers could work no longer than ten hours a day, six days a week, on the ground that the law violated a “liberty of contract,” protected under the Fourteenth Amendment. In a dissenting opinion, Justice Oliver Wendell Holmes accused the Court of wildly overreaching its authority. “A Constitution is not intended to embody a particular economic theory,” he wrote.