Over the long run, R.P.I. has predicted the outcome of N.C.A.A. games more poorly than almost any other system. And it shows some especially implausible results this season. Southern Mississippi, for instance, was somehow ranked ahead of Missouri, even though it has endured seven losses to Missouri’s four (some of them against middling teams like Houston, Texas-El Paso, Alabama-Birmingham and Denver).

The committee’s use of R.P.I. is not quite as obsessive as you might think: more advanced systems like those developed by Ken Pomeroy and Jeff Sagarin were just a mouse click away, we were told, and it was perfectly within the rules to consult them. The discussion of each team, moreover, was exceptionally thorough. It was clear from the officials we met that the committee has plenty of basketball knowledge and cares passionately about getting things right.

But R.P.I.’s fingerprints were all over the process. When a computer monitor displayed the teams that we were considering for the bubble, the R.P.I. ranking was listed suggestively alongside them. The color-coded “nitty gritty” worksheets that the committee has developed, and which often frame the discussion about the bubble teams, use the R.P.I. rankings to sort out the good wins and the bad losses.

In truth, almost any other computer ranking system would do a better job of accomplishing what the N.C.A.A. wants. These systems tend to produce more intuitive results and to give teams more credit for quality wins, especially away from home.
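For readers unfamiliar with how R.P.I. is computed, the published formula is simple: a quarter of the rating is a team’s own winning percentage, fully half is its opponents’ winning percentage, and the remaining quarter is its opponents’ opponents’ winning percentage. A minimal sketch (the weights are the N.C.A.A.’s published ones; the two teams and their numbers are hypothetical) shows how heavily schedule strength counts relative to whom a team actually beat:

```python
# Illustrative sketch of the published R.P.I. formula -- not the committee's code.
# R.P.I. = 0.25 * WP + 0.50 * OWP + 0.25 * OOWP, where WP is a team's winning
# percentage, OWP its opponents' winning percentage, and OOWP its opponents'
# opponents' winning percentage. Note that nothing here looks at *which*
# opponents a team beat, or where the games were played.

def rpi(wp: float, owp: float, oowp: float) -> float:
    """Rating Percentage Index from its three winning-percentage components."""
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp

# Two hypothetical teams: half the rating is the opponents' record, so a team
# can climb the R.P.I. largely by playing a strong schedule, while a gaudier
# record against weak opposition earns less.
team_a = rpi(wp=0.75, owp=0.60, oowp=0.55)   # good record, strong schedule
team_b = rpi(wp=0.85, owp=0.45, oowp=0.50)   # better record, weak schedule

print(round(team_a, 4))  # 0.625
print(round(team_b, 4))  # 0.5625
```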

I developed a quick statistical procedure to identify teams that have a conspicuously poor R.P.I. ranking relative to the number of quality wins they have secured. Five teams stood out by this test. Missouri was the most obvious case: despite having gone an exceptional 9-3 against the R.P.I. top 50, the system ranks the Tigers 16th. Kansas, Notre Dame, Kansas State and Cincinnati also had seemingly low R.P.I. rankings.
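The screen itself is easy to illustrate. What follows is a minimal sketch of the idea, not the procedure used for this article: rank teams by winning percentage against the R.P.I. top 50, then flag any team whose R.P.I. rank trails that quality-win rank by a wide margin. The Missouri figures are the ones quoted above; the other teams and the threshold are hypothetical.

```python
# Minimal sketch of a quality-wins screen -- hypothetical data except for the
# Missouri figures cited in the text (9-3 against the R.P.I. top 50, ranked 16th).

def flag_underrated(teams, threshold=5):
    """Return teams whose R.P.I. rank is conspicuously worse than their
    rank by winning percentage against the R.P.I. top 50."""
    # Rank teams by top-50 winning percentage, best first.
    by_quality = sorted(
        teams,
        key=lambda t: t["top50_wins"] / (t["top50_wins"] + t["top50_losses"]),
        reverse=True,
    )
    quality_rank = {t["name"]: i + 1 for i, t in enumerate(by_quality)}
    # Flag a team when its R.P.I. rank trails its quality-win rank badly.
    return [t["name"] for t in teams
            if t["rpi_rank"] - quality_rank[t["name"]] >= threshold]

teams = [
    {"name": "Missouri", "top50_wins": 9, "top50_losses": 3, "rpi_rank": 16},
    {"name": "Team B",   "top50_wins": 7, "top50_losses": 4, "rpi_rank": 4},
    {"name": "Team C",   "top50_wins": 3, "top50_losses": 6, "rpi_rank": 2},
]

# Missouri has the best top-50 record (quality rank 1) but sits 16th in
# R.P.I., a 15-place gap -- exactly the kind of discrepancy the screen flags.
print(flag_underrated(teams))  # ['Missouri']
```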

The problem, it turns out, is not that computer rankings in general are incapable of perceiving these teams’ talent: it’s that R.P.I. is the wrong system. Missouri was rated anywhere from fifth to ninth by more reliable systems. Cincinnati was rated as high as 20th in one system, much better than its R.P.I. rank of 58. And the other systems see Kansas as a No. 1 seed, as most analysts do; the R.P.I. rankings do not.

In the end, the committee will probably get most of its decisions right. In fact, my analysis has shown that a team’s seed line has some power to predict the outcome of tournament games even once its power ratings are accounted for. That suggests the committee’s sausage-making process pays some dividends, and that it has some good insight into who would win the shirts-and-skins test.

But to a large extent, the N.C.A.A. process is a case of two wrongs making a right: the committee counterbalances the R.P.I.’s flaws by making a series of mental adjustments to it. Using a more reliable system as the point of departure would give the committee a better path through the unenviable task of sorting through the bubble.