In 2006, the Commission on the Future of Higher Education, convened by Margaret Spellings, the secretary of education at the time, issued a scathing critique of universities. “Employers report repeatedly that many new graduates they hire are not prepared to work, lacking the critical thinking, writing and problem-solving skills needed in today’s workplaces,” the commission’s report complained.

Educators scrambled to ensure that students graduate with these skills — and to prove it with data. The obsession with testing that dominates primary education invaded universities, bringing with it a large support staff. Here is the first irony of learning assessment: Faced with outrage over the high cost of higher education, universities responded by encouraging expensive administrative bloat.

Many of the professionals who work in learning assessment are former faculty members who care deeply about access to quality education. Pat Hutchings, a senior scholar at the National Institute for Learning Outcomes Assessment (and former English professor), told me: “Good assessment begins with real, genuine questions that educators have about their students, and right now for many educators those are questions about equity. We’re doing pretty well with 18- to 22-year-olds from upper-middle-class families, but what about — well, fill in the blank.”

It seems that the pressure to assess student learning outcomes has grown most quickly at poorly funded regional universities that have absorbed a large proportion of financially disadvantaged students, whose achievement is hampered by profound deficits in preparation and resources. Research indicates that the more selective a university, the less likely it is to embrace assessment. Learning outcomes assessment has become one way to answer the question, “If you get unprepared students in your class and they don’t do well, how does that get explained?” Mr. Eubanks of Furman University told me.

When Erik Gilbert, a professor of history at Arkansas State University, reached the end of his World Civilization course last fall, he dutifully administered the required assessment: an extra question on the final exam that asked students to read a document about samurai culture and answer questions using knowledge of Japanese history. Yet his course focused on “cross-cultural connections, trade, travel, empire, migration and bigger-scale questions, rather than area studies,” Mr. Gilbert told me. His students had not studied Japanese domestic history. “We do it this way because it satisfies what the assessment office wants, not because it addresses concerns that we as a department have.”

Mr. Gilbert became an outspoken assessment skeptic after years of watching the process fail to capture what happens in his classes — and seeing it miss the real reasons students struggle. “Maybe all your students have full-time jobs, but that’s something you can’t fix, even though that’s really the core problem,” he said. “Instead, you’re expected to find some small problem, like students don’t understand historical chronology, so you might add a reading to address that. You’re supposed to make something up every semester, then write up a narrative” explaining your solution to administrators.