“What are you going to do with that -- teach?” Uttered with disdain, it’s a question history majors have been asked many times. Clio’s defenders have a response. The head of the American Historical Association says that the study of history creates critical thinkers who can “sift through substantial amounts of information, organize it, and make sense of it.” A university president asserts that the liberal arts endow students with the “features of the enlightened citizen” who possesses “informed convictions … and the capacity for courageous debate on the real issues.” Historians pride themselves on the evidence for their claims.

So, what’s the evidence?

Not much, actually. Historians aren’t great at tracking what students learn. Sometimes they even resent being asked. Recently, however, the winner of the Bancroft Prize, one of history’s most distinguished awards, washed the profession’s dirty laundry in public. The article’s title: “Five Reasons History Professors Suck at Assessment.”

Anne Hyde described what happened when accreditors asked her colleagues to document what students learned. They paid little heed to the requests -- that is, until Colorado College’s history department flunked its review. Committed teachers all, her colleagues “had never conducted assessment in any conscious way beyond reporting departmental enrollment numbers and student grade point averages.”

Among many college history departments, this is routine. To address the issue of assessment, the American Historical Association in 2011 set out on a multiyear initiative to define what students should “be able to do at the end of the major.” Eight years, dozens of meetings and hundreds of disposable cups later, the Tuning Project produced a set of ambitious targets for student learning. But when it came to assessing these goals, it left a big question mark.

That gap is one of the reasons we were convinced of the need to create new assessments. With support from the Library of Congress, we came up with short tasks in which history students interpreted sources from the library’s collection and wrote a few sentences justifying their response. For example, one assessment, “The First Thanksgiving,” presented students with a painting from the beginning of the 20th century and asked if the image of lace-aproned Pilgrim women serving turkey to bare-chested Indians would help historians reconstruct what may have transpired in 1621 at the supposed feast between the Wampanoag and English settlers.

In the March issue of the Journal of American History, we describe what happened when we gave our assessments to students at two large state universities. On one campus, we quizzed mostly first-year students satisfying a distribution requirement. All but two of 57 ignored the 300-year time gap between the Thanksgiving painting and the event it depicts. Instead, they judged the painting on whether it matched their preconceptions, or simply took its contents at face value -- an answer we dubbed the “picture’s worth a thousand words” response.

We weren’t terribly surprised. When we tested high school students on these tasks, they struggled, too, and many of these college students were in high school only months earlier. But what would happen, we wondered, if we gave our tasks to college juniors and seniors, the majority of whom were history majors and all of whom had taken five or more history courses? Would seasoned college students breeze through tasks originally designed for high school?

What we found shocked us. Only two in 49 juniors and seniors explained why it might be a problem to use a 20th-century painting to understand an event from the 17th century. Another one of our assessments presented students with excerpts from a soldier’s testimony before the 1902 Senate Committee investigating the war in the Philippines. We asked how the source provided evidence that “many Americans objected to the war.” Rather than considering what might prompt a congressional hearing, students mostly focused on the document’s content at the expense of its context. Rare were responses -- only 7 percent -- that tied the testimony to the circumstances of its delivery. As one student explained, “If there hadn’t been such a huge opposition by Americans to this war, I don’t believe that the investigation would have occurred.”

We suffer no illusions that our short exercises exhaust the range of critical thinking in history. What they do is provide a check on stirring pronouncements about the promised benefits of historical study. In an age of declining enrollments in history classes, soaring college debt and increased questions about what’s actually learned in college, feel-good bromides about critical thinking and enlightened citizenship won’t cut it. Historians offer evidence when they make claims about the past. Why should it be different when they make claims about what’s learned in their classrooms?