“The problem is we don’t know what we’re trying to measure,” said Ellen Hazelkorn, dean of the Graduate Research School at the Dublin Institute of Technology and author of “Rankings and the Reshaping of Higher Education: The Battle for World-Class Excellence,” coming out this March. “We need cross-national comparative data that is meaningful. But we also need to know whether the way the data are collected makes it more useful — or easier to game the system.”

Dr. Hazelkorn also questioned whether the widespread emphasis on bibliometrics — using counts of academic publications, or of how often faculty members are cited in scholarly journals, as proxies for the quality or influence of a university department — made any sense. “I understand that bibliometrics is attractive because it looks objective. But as Einstein used to say, ‘Not everything that can be counted counts, and not everything that counts can be counted.’”

Unlike the Times Higher Education rankings, in which surveys of academic reputation make up 34.5 percent of the total, the Shanghai Jiao Tong University rankings rely heavily on faculty publication rates; weight is also given to the number of Nobel Prizes or Fields Medals won by alumni or current faculty. The results, critics say, tilt toward science and mathematics rather than the arts or humanities, while the tally of prizewinners favors rich institutions able to hire faculty members whose best work may be long behind them.

“The big rap on rankings, which has a great deal of truth to it, is that they’re excessively focused on inputs,” said Ben Wildavsky, author of “The Great Brain Race,” who argued that measuring faculty size or publications, or counting the books in the university library, as some rankings do, tells you more about a university’s resources than about how those resources affect students. Nevertheless, Mr. Wildavsky, who edited U.S. News and World Report’s Best Colleges list from 2006 to 2008, described himself as “a qualified defender” of the process.

“Just because you can’t measure everything doesn’t mean you shouldn’t measure anything,” said Mr. Wildavsky, adding that when U.S. News published its first college guide in 1987, a delegation of college presidents met with the magazine’s editors to ask that the whole exercise be stopped.

Today there are more than 40 different rankings. Some, like U.S. News, focus on a single country or a single academic field, such as business administration, medicine or law; others attempt to compare universities on a global scale.

Mr. Wildavsky freely admitted that the system is subject to all kinds of bias. “A lot of ratings use graduation rates as a measure of student success,” he said. “An urban-setting university is probably not going to have the same graduation rate as Dartmouth.”