On Sunday, the NCAA basketball selection committee will reveal the bracket for this year’s NCAA Tournament. The linchpin of this process is the Ratings Percentage Index, a ranking tool that sorts college hoops teams based on wins, losses, and strength of schedule. As Selection Sunday approaches, commentators on CBS and ESPN always discuss teams’ tournament worthiness in terms of RPI—how many wins they have over top-50 teams, losses against teams below 100 in the RPI, and so forth.

Tournament poobahs have always insisted that the RPI—which has been used by the committee for 30 years—is just one tool of many, both objective and subjective, that go into picking which teams make the Big Dance. Amateur bracketologists, however, have been able to simulate the selection process fairly precisely using RPI data alone. No matter what the NCAA says, then, the RPI is a significant factor in the bracketing process. That wouldn’t be a problem, except that the RPI works against the committee’s stated procedures. The NCAA Tournament selectors are charged with selecting the “37 best at-large teams” after the tourney’s automatic qualifiers have been decided. The RPI, however, is a primitive tool that doesn’t do a good job of accomplishing this task.

RPI is made up of three components: 25 percent comes from a team’s own winning percentage, 50 percent from its opponents’ winning percentage, and 25 percent from its opponents’ opponents’ winning percentage. Kansas leads this season’s RPI rankings, followed by Ohio State, San Diego State, BYU, and Duke. It’s not an unreasonable top five.
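Those weights reduce to a one-line formula. Here’s a minimal sketch (simplified: the modern formula also adjusts the winning-percentage component for home and road games, a wrinkle ignored below):

```python
def rpi(wp, owp, oowp):
    """Weighted RPI: 25% own winning percentage (wp), 50% opponents'
    winning percentage (owp), 25% opponents' opponents' (oowp).
    Simplified sketch; the real formula also weights home/road results."""
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp

# A team that wins 90 percent of its games against a weak slate...
weak_slate = rpi(0.90, 0.45, 0.50)    # 0.575
# ...rates below a .550 team that played a brutal schedule.
strong_slate = rpi(0.55, 0.65, 0.60)  # 0.6125
```

Note that the second team rates higher despite winning barely half its games, which previews the schedule-strength problem discussed below.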

Not every team’s RPI ranking is that sensible. The biggest problem with the metric is how it uses strength of schedule. Theoretically, the best team in the country could play the weakest possible slate of opponents. Playing bad opponents shouldn’t imply that you’re a bad team, yet three-quarters of the RPI is determined by strength-of-schedule components. That means who you play is often more important than whether you win or lose.

It’s difficult for a team to have a highly rated schedule and not also have a high RPI ranking. Georgetown, which has played the nation’s toughest schedule according to the RPI, is ranked 12th despite a 21-10 record, which probably overrates the Hoyas by 10 to 20 spots. They could even have suffered a few more losses and still have had a very nice RPI, simply due to the boost they receive from playing good teams.

Because strength of schedule is so important, a team can drop in the RPI by playing an opponent with a poor record, regardless of the outcome. Some coaches, most notably Gonzaga’s Mark Few, have gotten wise to this. Instead of scheduling the dregs of Division I, they play teams that are much, much worse—Division II squads that are invisible to the RPI, which counts only games against D-I opponents. (It seems these games are ignored by the selection committee as well. In 2009, Utah got a five-seed despite having lost at home to Division II Southwest Baptist.)

The RPI also does not account for context. Under the formula, a loss to a great team can be worth more than a win over a poor one, regardless of how either game unfolds. And teams that beat quality opponents by big margins tend, on average, to be better than those that win close games. Yet the RPI, like college football’s BCS, does not take into account margin of victory, seemingly because the NCAA’s administrators don’t want to encourage teams to run up the score.

If you want to create a fair bracket, you need to account for how a team wins. Going into last year’s NCAA Tournament, New Mexico was 10-1 in games decided by five points or fewer—one of the best records a college basketball team has ever produced in close games. The RPI formula, though, counted those tight victories just the same as if they were 50-point wins. New Mexico went to the tournament as a three-seed, thanks in large part to a top-10 RPI ranking. New Mexico lost in the second round to 11th-seeded Washington. While using one game as proof of anything is dangerous, it’s telling that oddsmakers actually listed Washington as the favorite in the game. The RPI gave New Mexico full credit for its gaudy win total, but Vegas knew it was the result of good fortune.

A ranking system that doesn’t account for margin of victory isn’t particularly useful as a predictor of future results. It also hurts teams that play weaker opponents. Since conference play makes up the bulk of a team’s schedule, the teams that play the weakest schedules tend to be from the weakest conferences. The most obvious example this season is Belmont, currently 53rd in the RPI. Playing in the Atlantic Sun, the Bruins had just three games this season against respectable opponents. All of these games—two against Tennessee and one against Vanderbilt—were on the road, and while Belmont was competitive in each, they lost all three. In their other 31 games, Belmont played as a tournament team should, winning 30, with 25 of those wins coming by double digits.

It’s the kind of success that any team currently projected as an eight- or nine-seed would be expected to have against a similar schedule. Yet because of the RPI, Belmont wouldn’t have been considered for an at-large bid due to its lack of quality wins. Fortunately, Belmont won its conference tournament title game by 41 points to receive an automatic bid. Still, most observers are projecting them as a 12- or 13-seed—quite a bit lower than they’d deserve if the NCAA seeded teams on the basis of how good they were rather than who they played.

There’s a reasonable debate to be had about how much to reward a team, like Belmont, that beats up on inferior competition. However, there are smart ways to include margin of victory in a ranking system while continuing to keep sacred the ultimate value of winning the game. A system could be devised, for instance, that ignores how a team performs during the parts of a game that had no impact on the outcome. If a team is up 30 at the half, it should be irrelevant whether they ultimately won by 10 or 50. In this way, a system would treat a game the way players do—winning is the goal, but building a big lead is preferable to needing heroics in the final seconds.
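One crude way to sketch that idea: cap the credited margin at some threshold, so a decisive win earns full credit whether the final gap is 10 points or 50. (The 15-point cap below is an arbitrary illustration, not a parameter from any real rating system.)

```python
def credited_margin(final_margin, cap=15):
    """Credit margin of victory only up to `cap` points; anything past
    a decisive lead earns nothing extra. Losses are capped symmetrically,
    so a 40-point blowout loss hurts no more than a clear defeat."""
    return max(-cap, min(final_margin, cap))

credited_margin(10)   # 10: a comfortable win gets full credit
credited_margin(50)   # 15: a blowout earns no more than a decisive win
credited_margin(-3)   # -3: a narrow loss is only a small penalty
```

A system built on a capped margin like this preserves the primacy of winning while still distinguishing a team that controls games from one that survives them.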

Even if the NCAA opposes the inclusion of any context in their ratings system of choice, there are still better systems out there than the RPI. For starters, I’d direct them to the work of Jeff Sagarin or Kenneth Massey. Both Sagarin and Massey look exclusively at outcome and location and, unlike the RPI, are rooted in solid mathematical theory. Massey, for one, has an elegant way of incorporating strength of schedule. Not only does he include the location of the game, but he rewards a team more for playing a few elite teams and a few poor teams than for playing a series of mediocre teams. It’s easier for a bubble team to rack up a good record against a bunch of middling squads, and Massey’s system recognizes this, unlike the RPI. (And for what it’s worth, Sagarin and Massey both give Belmont its due, ranking the Bruins 35th and 34th respectively.)
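The point about mixed slates can be made with a toy model. Assume, purely for illustration, that a bubble team’s chance of winning follows a logistic curve in the rating gap (a generic modeling assumption, not Massey’s actual formula). Two schedules with the same average opponent strength then yield different expected records:

```python
import math

def win_prob(team, opp):
    # Logistic win probability from the rating gap (illustrative only)
    return 1.0 / (1.0 + math.exp(-(team - opp)))

bubble = 0.4                        # a bubble team, a bit above average
flat = [0.0, 0.0, 0.0, 0.0]         # four mediocre opponents
mixed = [2.0, 2.0, -2.0, -2.0]      # two elite, two poor; same average

exp_flat = sum(win_prob(bubble, o) for o in flat)    # ~2.39 expected wins
exp_mixed = sum(win_prob(bubble, o) for o in mixed)  # ~2.17 expected wins
```

The bubble team piles up more expected wins against the all-mediocre slate, which is exactly why a system that looks only at record and average schedule strength overrates that path, and why giving extra credit for mixed slates makes sense.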

The RPI may be one of many tools that the committee uses to do its job, but it’s clear that it’s a very important tool. It’s also one that was invented in the age of punch cards. Since then, we’ve learned a lot more about what makes a good team, and more sophisticated methods have been developed to quantify that goodness. Perhaps someday the NCAA will take advantage of this information.