Now, each new attempt at grading colleges appears to raise even more doubts about the still-influential U.S. News rankings, a system The Atlantic’s Gillian White recently questioned for its failure to tell prospective students “what they most need to know.” The federal government’s new College Scorecard—which doesn’t rank schools but allows users to filter and sort institutions based on factors including academic program, location, attendance cost, graduation rate, and salary expectations—reinforces that trend. U.S. News “had too much influence that was starting to negatively affect the behavior of colleges and students,” Rothwell said, in part because it grades colleges based on things like alumni giving, faculty pay, selectivity, and reputation. Eventually, colleges started “gaming the system, trying to get more people to apply to their schools even if [those people] had no chance.”

The question is: Will that competition actually help refine the country’s higher-education landscape and improve students’ college outcomes—or will it only add to the hodgepodge of misinformation and political conflict, making the college-exploration process even more confusing for the students who need the most support?

In 2013, The Atlantic’s John Tierney gave readers their “annual reminder” to ignore the U.S. News college rankings. “The list’s real purpose,” he argued, is “to ‘exacerbate the status anxiety’ of prospective students and parents.” But he concluded by acknowledging that few readers would likely heed his warning. And as the former Atlantic editor Eleanor Barkhorn reported a few months later, Tierney was right: She cited a report out of the American Educational Research Association finding that both the U.S. News and Princeton Review lists actually have a huge impact on where students apply to college. Inclusion in U.S. News’s top-25 list—regardless of whether a school lands in the No. 1 spot or No. 25—boosted the number of applications a college received by between 6 and 10 percent. The study’s authors attributed the rankings’ influence to their ability to simplify the college-application process at a time when prospective students are overwhelmed and suffering from major information overload.

And as much as people love to hate the U.S. News “Best Colleges” list, they probably hated (or would’ve hated) the era before it even more—and for similar reasons. In an article last year about Northeastern University’s notorious gaming of the rankings, Boston magazine explained that in creating a formula to grade colleges, the U.S. News editors “quantified something previously thought to be intangible”:

For generations, colleges and universities had generally relied on a mysterious brew of prestige and reputation. Suddenly, legacies and tradition—qualities that had taken decades, and sometimes centuries, for schools to cultivate—were less important than cold, hard data. Schools that once relied on children of alumni and word of mouth were exposed by their own stats, including graduation and retention rates, admissions data (acceptance rate, average SAT score), academics (class size, number of full-time faculty), and reputation (peer reviews). Needless to say, U.S. News’s college rankings landed on the world of higher education with a thud.

Many of today’s myriad college rankings share certain priorities, but each has its own unique algorithm for weighting the criteria and calculating the data. In a 2011 New Yorker critique of such lists, Malcolm Gladwell highlighted a challenge faced by U.S. News and all the other organizations that have since sought to grade schools: “There’s no direct way to measure the quality of an institution—how well a college manages to inform, inspire, and challenge its students. So the U.S. News algorithm relies instead on proxies for quality—and the proxies for educational quality turn out to be flimsy at best.”