One of my absolute favorite early “sabermetric” football studies was conducted in early 2005 by Doug Drinen, the founder of Pro-Football-Reference.com. At the time, Drinen was spending a week guest-writing football posts at the blog of a fellow professor, the economist J.C. Bradbury, and was performing innovative statistical research using the first iteration of the Pro-Football-Reference database. In those days, digitized historical football data was very difficult to come by, and Drinen’s collection — mind you, a small fraction of the current PFR database’s size — was the best on the web.

As one of his experiments, Drinen made an unorthodox attempt to rank modern wide receivers relative to one another. (For what it’s worth, the conundrum of how best to rate receivers is still a problem nine years later.) His unique twist? He rated receivers the same way a power-rating system, such as those used by the Bowl Championship Series (BCS), rates college football teams.

Drinen’s rationale was as follows:

Wide Receiver is the only position where even small groups of players are actually competing against each other under nearly identical circumstances… [Two receivers] are working in the same system with the same quarterback, the same offensive line, even the same game conditions. Raw numbers probably are a good way to determine to what extent [one is better than the other]… Every season, every team has a group of 3 to 5 guys that can, for the most part, be rank-ordered by their numbers. This situation is unique to wide receivers. But how does this help us compare [receivers]? Think college football. USC didn’t play Auburn [in 2004]. So who was better? Well, you know USC was good because, among other reasons, they crushed Oklahoma, who we suspect was pretty good; they beat Texas, for example. We know Auburn was good, in part, because they beat Tennessee, Georgia, and LSU, all solid teams. While there is unfortunately no direct evidence to help us settle the Auburn/USC debate, there are piles and piles of indirect evidence. Every game played by either team, or the opponents of either team, or the opponents of those teams, serves as a tiny sliver of indirect evidence about how good USC and Auburn were. And many very intelligent people have devoted lots of their time and talent to convincing computers to assimilate all this information. So why not put this technology to work ranking wide receivers?

Drinen went on to describe his system. In it, each receiver competes against his fellow teammates for receiving yardage; the degree to which one beats the other is how much he outgains him statistically (after adjusting for aging effects). When receivers change teams, they face a different set of opponents in these matchups, which tells us not only about the receiver’s own quality but also about the relative quality of his old and new teammates. Do this for every season in NFL history, and we have a rough way to gauge how much each receiver would outgain (or be outgained by) the average NFL pass-catcher, adjusted for his strength of schedule (i.e., his teammates).
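Drinen didn’t publish his solver, but one standard way to turn pairwise comparisons like these into a single rating — the approach behind several BCS-style computer rankings — is a least-squares system. The sketch below is my own illustration, not Drinen’s code: each teammate matchup becomes one equation saying the two players’ ratings should differ by the observed (age-adjusted) yardage margin, and the whole system is solved at once.

```python
import numpy as np

def rate_receivers(comparisons, n_players):
    """Least-squares power rating from pairwise teammate comparisons.

    comparisons: list of (i, j, margin) tuples, where margin is how many
    (age-adjusted) yards player i outgained teammate j by.
    Returns one rating per player, centered so the ratings sum to zero.
    """
    rows, margins = [], []
    for i, j, margin in comparisons:
        # One equation per matchup: rating_i - rating_j ≈ margin
        row = np.zeros(n_players)
        row[i], row[j] = 1.0, -1.0
        rows.append(row)
        margins.append(margin)
    # Ratings are only identified up to an additive constant, so anchor
    # the system with one extra equation forcing them to sum to zero.
    rows.append(np.ones(n_players))
    margins.append(0.0)
    A, b = np.vstack(rows), np.array(margins)
    ratings, *_ = np.linalg.lstsq(A, b, rcond=None)
    return ratings

# Toy example: three teammates, A outgains B by 20 yards/game,
# B outgains C by 10, A outgains C by 30.
r = rate_receivers([(0, 1, 20.0), (1, 2, 10.0), (0, 2, 30.0)], 3)
```

Because the comparisons in the toy example are perfectly consistent, the solver recovers them exactly (roughly +16.7, −3.3, −13.3); with real, noisy data it finds the ratings that best reconcile all the conflicting evidence, which is exactly the “piles of indirect evidence” idea from the quote above.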

While doing research for my article about receiving stats and the Pro Football Hall of Fame, I replicated Drinen’s “BCS rating” — right down to the aging curve — but applied it to True Receiving Yards per game (Drinen used total yards). Here were the leaders among pass catchers who started their careers in 1950 or later:

You can find the full results for this rating alongside each player’s career True Receiving Yards and With or Without You (WOWY) scores, as well as the data for the TRY per game aging curve, on GitHub.