You can now find full, updated 2008 S&P+ rankings at Football Outsiders — overall, offense, defense. (Actually, all of the S&P+ rankings have been updated, but we’re going to keep focusing on one year at a time. It’s a long offseason, right?)

What changes have I made?

I’m including this blurb in each of these posts, so if you read it previously, feel free to scroll down to the rankings.

First, with a better chance to analyze which statistical factors are most consistent from the beginning of the season to the end, I made some slight tweaks in the weighting of each statistical factor (the short version: efficiency carries even more weight now). I also worked marginal efficiency and marginal explosiveness into the equation.
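In code, the marginal efficiency idea boils down to something like this. It's a sketch under my own simplifications: the real baseline is fitted from down, distance, and field position, and the function and field names here are placeholders, not the actual S&P+ internals.

```python
# Sketch: marginal efficiency is a play's success (1 or 0) minus the
# expected success rate for its situation, averaged over all plays.
# `expected_success_rate` stands in for a fitted baseline and is a
# placeholder, not the actual S&P+ baseline. Marginal explosiveness
# works the same way with equivalent point values instead of success,
# so it is omitted here.

def marginal_efficiency(plays, expected_success_rate):
    """Average (success - expected success) across a team's plays."""
    total = sum(
        (1.0 if play["success"] else 0.0) - expected_success_rate(play)
        for play in plays
    )
    return total / len(plays)
```

A team that converts 50 percent of its plays in situations where the average team converts 42 percent would grade out at +0.08 per play.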

Then, I implemented the changes I made during 2018 for previous years. From each week’s rankings post:

1. I changed the garbage time definition. S&P+ stops counting the major stats once the game has entered garbage time. Previously, that was when a game ceased to be within 27 points in the first quarter, 24 in the second, 21 in the third, and 16 in the fourth. Now I have expanded it: garbage time adjustments don’t begin until a game is outside of 43 points in the first quarter, 37 in the second, 27 in the third, and 21 in the fourth. That change came because of a piece I wrote about game states at Football Study Hall.

2. Preseason projections will remain in the formulas all season. Fans hate this — it’s the biggest complaint I’ve heard regarding ESPN’s FPI formulas. Instinctively, I hate it, too. But here’s the thing: it makes projections more accurate. Our sample size for determining quality in a given season is tiny, and incorporating projection factors found in the preseason rankings decreases the overall error in projections. (For previous years, from before I actually released any sort of preseason projections, I found the most predictive success by keeping a layer of five-year history within the ratings. It’s a small percentage, but it’s in there.)

3. To counteract this conservative change, I’m also making S&P+ more reactive to results, especially early in the season. If I’m admitting that S&P+ needs previous-year performances to make it better, I’m also going to admit that S&P+ doesn’t know everything it needs to early in a season, and it’s going to react a bit more to actual results. Basically, I’ve added a step to the rankings process: after the rankings are determined, I go back and project previous games based on those ratings, and I adjust the ratings based on how much the ratings fit (or don’t fit) those results. The adjustment isn’t enormous, and it diminishes dramatically as the season unfolds.

One more recent change had the most impact, however: I made S&P+ more reactive to conferences as well. It’s similar to step 3: after the ratings are determined, I project previous games based on those ratings, and I track each conference’s average performance versus projection. For the top conference, I found that by the end of the season S&P+ was aiming low by two or three points per game per team. For the bottom conference, it was the reverse.

Shifting each team’s rating based on this conference average, with the weight of the adjustment increasing as the season progresses, improves against-the-spread performance by about one percentage point per season and cuts the average absolute error by somewhere between 0.2 and 0.3 points per game. That doesn’t seem like much, but look at the Prediction Tracker results and note how much of a difference 1% and 0.3 points per game can make to a system’s ranking there. It’s pretty big.
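Here is a rough sketch of that adjustment step. The names and the exact blending weight are illustrative assumptions, not the real formula; the point is the shape of the process (project played games, average each conference's error, shift its teams):

```python
from collections import defaultdict

def conference_adjustment(ratings, conferences, games, weeks_played,
                          full_weight_week=14):
    """Shift each team's rating by its conference's mean error vs. projection.

    games: (team, opponent, actual_margin) tuples; a game's projection
    here is simply ratings[team] - ratings[opponent].
    """
    errors = defaultdict(list)
    for team, opp, actual_margin in games:
        error = actual_margin - (ratings[team] - ratings[opp])
        errors[conferences[team]].append(error)    # team beat projection by `error`
        errors[conferences[opp]].append(-error)    # opponent fell short by the same

    # The adjustment carries more weight as the season progresses.
    weight = min(1.0, weeks_played / full_weight_week)
    adjusted = {}
    for team, rating in ratings.items():
        conf_errors = errors.get(conferences[team], [])
        shift = sum(conf_errors) / len(conf_errors) if conf_errors else 0.0
        adjusted[team] = rating + weight * shift
    return adjusted
```

Note that intra-conference games contribute offsetting errors, so only non-conference results (plus cross-conference bowls) actually move a conference's average, which is what you want.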

It does, however, mean a fundamental shift in how mid-major teams are judged. Not to spoil the suspense, but look at the difference this adjustment made in some 2018 rankings:

Fresno State: originally ninth, now 16th

UCF: originally eighth, now 18th

Utah State: originally 19th, now 21st

Appalachian State: originally 11th, now 29th

It’s a pretty harsh adjustment, though it both makes the numbers better and perhaps passes the eye test a bit more. So we’re going with it.

Wait, so you’re including previous years of history in each season’s ratings? How could that possibly be right?

Back when I mentioned at the beginning of 2018 that I’d be using priors in the rankings all year, I had an interesting conversation with some readers on Twitter about what happens at the end of the year. Would I be removing the priors for the year-end ratings so that a) we’d be evaluating teams’ performance based only on what happened in that year, and b) the “recent history” prior wouldn’t then carry influence right into the next year’s projections (since they would include the priors that were included in the previous year’s rankings)?

It was a legitimate question, one to which I don’t think there’s a right answer. To me, it simply comes down to this question: when you’re looking back at previous seasons’ ratings, what are you looking for?

To me, it’s usually to get a sense of who might beat whom, right? I understand the draw of a “this year only” evaluative look, but S&P+ is intended to be a predictive measure, and I’ve decided to include these priors because they make the predictions better. In that sense, removing those priors at the end of the season makes it less of a predictive measure (even though obviously any “predictions” made of a team from 2006 in 2019 are theoretical only).

(How will this impact the preseason projections moving forward? The weights of each of the primary projection factors — recent performance, recruiting, and returning production — are always based on updated correlations; each weight depends on how strongly that factor predicts the next season’s performance. I’m dumping the new S&P+ ratings into the old engine, and, well, we’ll find out. Maybe the recent performance numbers end up with a lower correlation. We’ll see.)
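For the curious, correlation-based weighting boils down to something like this. It's a simplified sketch (the real engine derives its correlations from actual historical data, and the factor names are just labels):

```python
# Sketch: turn each factor's correlation with next-season performance
# into a weight proportional to that correlation's strength. The
# correlations passed in here are illustrative, not real S&P+ values.

def correlation_weights(correlations):
    """Normalize factor-vs-next-season correlations into weights summing to 1."""
    total = sum(abs(r) for r in correlations.values())
    return {factor: abs(r) / total for factor, r in correlations.items()}
```

If recent performance correlated at 0.6 and recruiting and returning production at 0.3 apiece, recent performance would get half the projection weight.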

On with the rankings.

2008 S&P+ rankings

| Team | Rec | S&P+ | Rk | OFFENSE | Rk | DEFENSE | Rk | ST | Rk |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| USC | 12-1 | 37.0 | 1 | 44.4 | 5 | 7.2 | 2 | -0.3 | 64 |
| Texas | 12-1 | 36.0 | 2 | 50.3 | 2 | 15.8 | 16 | 1.5 | 17 |
| Florida | 13-1 | 34.7 | 3 | 43.7 | 7 | 11.2 | 6 | 2.2 | 8 |
| Oklahoma | 12-2 | 32.4 | 4 | 52.3 | 1 | 18.1 | 23 | -1.8 | 109 |
| Penn State | 11-2 | 29.5 | 5 | 41.1 | 10 | 13.5 | 10 | 2.0 | 10 |
| Iowa | 9-4 | 25.5 | 6 | 35.4 | 19 | 10.9 | 4 | 1.0 | 29 |
| Missouri | 10-4 | 24.0 | 7 | 43.8 | 6 | 22.3 | 49 | 2.5 | 3 |
| Oklahoma State | 9-4 | 23.4 | 8 | 46.5 | 4 | 24.5 | 56 | 1.4 | 21 |
| Texas Tech | 11-2 | 23.0 | 9 | 49.8 | 3 | 25.5 | 66 | -1.2 | 102 |
| Alabama | 12-2 | 23.0 | 10 | 34.2 | 23 | 10.9 | 5 | -0.3 | 68 |
| Ohio State | 10-3 | 22.7 | 11 | 32.7 | 31 | 12.2 | 8 | 2.3 | 7 |
| Georgia | 10-3 | 20.9 | 12 | 39.2 | 13 | 18.3 | 26 | 0.1 | 55 |
| LSU | 8-5 | 20.8 | 13 | 35.7 | 18 | 17.5 | 20 | 2.6 | 2 |
| Boise State | 12-1 | 19.8 | 14 | 33.8 | 25 | 15.0 | 14 | 1.0 | 30 |
| Oregon | 10-3 | 19.6 | 15 | 38.4 | 16 | 18.3 | 25 | -0.5 | 80 |
| TCU | 11-2 | 19.2 | 16 | 30.5 | 37 | 11.8 | 7 | 0.5 | 45 |
| Nebraska | 9-4 | 18.9 | 17 | 42.0 | 8 | 23.8 | 55 | 0.7 | 36 |
| Florida State | 9-4 | 18.7 | 18 | 33.8 | 27 | 17.7 | 21 | 2.6 | 1 |
| Utah | 13-0 | 18.2 | 19 | 31.5 | 34 | 15.6 | 15 | 2.3 | 6 |
| California | 9-4 | 18.0 | 20 | 32.7 | 30 | 14.3 | 13 | -0.4 | 69 |
| Ole Miss | 9-4 | 17.1 | 21 | 35.1 | 21 | 19.6 | 36 | 1.6 | 15 |
| Arizona | 8-5 | 16.3 | 22 | 34.9 | 22 | 20.5 | 43 | 1.9 | 13 |
| Kansas | 8-5 | 16.0 | 23 | 41.9 | 9 | 25.4 | 64 | -0.5 | 73 |
| Ball State | 12-2 | 14.3 | 24 | 38.9 | 15 | 26.1 | 69 | 1.5 | 16 |
| Oregon State | 9-4 | 14.2 | 25 | 33.7 | 28 | 19.1 | 31 | -0.4 | 71 |
| Clemson | 7-6 | 12.8 | 26 | 27.0 | 57 | 14.2 | 12 | 0.0 | 58 |
| BYU | 10-3 | 12.2 | 27 | 38.3 | 17 | 26.3 | 70 | 0.1 | 53 |
| Tennessee | 5-7 | 12.0 | 28 | 19.3 | 97 | 6.8 | 1 | -0.5 | 75 |
| Boston College | 9-5 | 11.0 | 29 | 23.0 | 79 | 10.0 | 3 | -2.0 | 112 |
| Miami-FL | 7-6 | 10.9 | 30 | 26.8 | 59 | 17.9 | 22 | 2.0 | 9 |
| Wisconsin | 7-6 | 10.5 | 31 | 31.3 | 35 | 22.0 | 48 | 1.2 | 24 |
| North Carolina | 8-5 | 10.2 | 32 | 27.1 | 56 | 17.5 | 19 | 0.6 | 38 |
| Pittsburgh | 9-4 | 9.0 | 33 | 26.2 | 63 | 19.2 | 33 | 1.9 | 11 |
| Illinois | 5-7 | 8.9 | 34 | 33.2 | 29 | 23.8 | 54 | -0.5 | 78 |
| Cincinnati | 11-3 | 8.6 | 35 | 25.9 | 64 | 19.7 | 38 | 2.4 | 4 |
| Georgia Tech | 8-4 | 8.6 | 36 | 29.5 | 41 | 18.9 | 29 | -2.0 | 111 |
| Virginia Tech | 10-4 | 8.1 | 37 | 20.6 | 89 | 13.1 | 9 | 0.6 | 39 |
| South Florida | 8-5 | 8.0 | 38 | 28.8 | 49 | 20.9 | 45 | 0.1 | 52 |
| South Carolina | 7-6 | 7.8 | 39 | 23.7 | 75 | 16.3 | 17 | 0.4 | 48 |
| Maryland | 8-5 | 7.6 | 40 | 28.8 | 48 | 21.6 | 47 | 0.3 | 49 |
| Northwestern | 9-4 | 7.3 | 41 | 26.7 | 60 | 19.3 | 34 | -0.1 | 62 |
| Tulsa | 11-2 | 7.0 | 42 | 39.3 | 12 | 31.8 | 89 | -0.5 | 74 |
| Notre Dame | 7-6 | 6.8 | 43 | 26.7 | 61 | 19.0 | 30 | -1.0 | 92 |
| Michigan State | 9-4 | 6.6 | 44 | 24.9 | 68 | 19.8 | 40 | 1.5 | 19 |
| Houston | 8-5 | 6.4 | 45 | 39.2 | 14 | 31.5 | 85 | -1.2 | 103 |
| Wake Forest | 8-5 | 6.4 | 46 | 20.3 | 91 | 13.6 | 11 | -0.2 | 63 |
| West Virginia | 9-4 | 6.2 | 47 | 25.5 | 65 | 21.1 | 46 | 1.7 | 14 |
| Kansas State | 5-7 | 6.0 | 48 | 35.3 | 20 | 30.3 | 82 | 1.0 | 28 |
| Troy | 8-5 | 5.0 | 49 | 27.8 | 52 | 22.6 | 52 | -0.3 | 65 |
| Connecticut | 8-5 | 4.9 | 50 | 22.3 | 84 | 18.3 | 24 | 0.8 | 35 |
| Arkansas | 4-8 | 4.8 | 51 | 32.3 | 32 | 25.8 | 68 | -1.7 | 107 |
| Rice | 10-3 | 4.8 | 52 | 40.5 | 11 | 35.1 | 104 | -0.6 | 82 |
| Baylor | 4-8 | 4.0 | 53 | 32.3 | 33 | 27.4 | 74 | -0.9 | 90 |
| Arizona State | 5-7 | 3.4 | 54 | 22.3 | 85 | 19.8 | 39 | 0.8 | 33 |
| Southern Miss | 7-6 | 3.1 | 55 | 31.0 | 36 | 27.0 | 72 | -1.0 | 91 |
| Minnesota | 7-6 | 3.0 | 56 | 26.8 | 58 | 25.0 | 60 | 1.2 | 25 |
| Rutgers | 9-4 | 2.4 | 57 | 28.8 | 50 | 25.2 | 61 | -1.1 | 99 |
| Purdue | 4-8 | 2.4 | 58 | 24.6 | 69 | 20.6 | 44 | -1.6 | 106 |
| Bowling Green | 6-6 | 2.4 | 59 | 29.2 | 45 | 25.7 | 67 | -1.0 | 96 |
| Stanford | 5-7 | 2.2 | 60 | 30.2 | 38 | 29.5 | 76 | 1.5 | 18 |
| NC State | 6-7 | 1.8 | 61 | 27.4 | 55 | 26.8 | 71 | 1.2 | 26 |
| Michigan | 3-9 | 1.6 | 62 | 20.5 | 90 | 19.2 | 32 | 0.2 | 50 |
| Auburn | 4-8 | 1.4 | 63 | 19.7 | 94 | 17.4 | 18 | -0.8 | 86 |
| East Carolina | 9-5 | 1.3 | 64 | 21.9 | 86 | 20.2 | 41 | -0.3 | 67 |
| Nevada | 7-6 | 0.9 | 65 | 34.0 | 24 | 33.0 | 92 | 0.0 | 59 |
| UCLA | 4-8 | -0.1 | 66 | 18.7 | 101 | 19.6 | 37 | 0.8 | 32 |
| Virginia | 5-7 | -0.6 | 67 | 19.0 | 99 | 18.6 | 28 | -1.0 | 94 |
| Air Force | 8-5 | -0.7 | 68 | 23.0 | 80 | 24.6 | 58 | 0.9 | 31 |
| Northern Illinois | 6-7 | -1.6 | 69 | 23.3 | 76 | 25.5 | 65 | 0.6 | 43 |
| Kentucky | 7-6 | -1.8 | 70 | 18.1 | 103 | 19.5 | 35 | -0.4 | 72 |
| Vanderbilt | 7-6 | -2.0 | 71 | 15.9 | 113 | 18.5 | 27 | 0.6 | 40 |
| New Mexico | 4-8 | -2.0 | 72 | 18.2 | 102 | 20.2 | 42 | 0.0 | 60 |
| Colorado | 5-7 | -2.7 | 73 | 23.0 | 78 | 23.2 | 53 | -2.5 | 118 |
| UTEP | 5-7 | -2.7 | 74 | 33.8 | 26 | 38.8 | 113 | 2.3 | 5 |
| Iowa State | 2-10 | -2.8 | 75 | 29.2 | 44 | 33.2 | 95 | 1.2 | 23 |
| Louisville | 5-7 | -3.0 | 76 | 24.6 | 70 | 25.3 | 62 | -2.3 | 116 |
| Western Michigan | 9-4 | -3.1 | 77 | 28.0 | 51 | 29.8 | 80 | -1.3 | 104 |
| Colorado State | 7-6 | -3.3 | 78 | 29.8 | 40 | 33.8 | 99 | 0.7 | 37 |
| Memphis | 6-7 | -3.4 | 79 | 28.9 | 47 | 31.5 | 86 | -0.8 | 85 |
| Navy | 8-5 | -3.5 | 80 | 25.3 | 67 | 30.2 | 81 | 1.3 | 22 |
| Fresno State | 6-7 | -3.5 | 81 | 29.4 | 42 | 33.4 | 96 | 0.5 | 46 |
| Marshall | 4-8 | -4.2 | 82 | 23.2 | 77 | 27.4 | 73 | 0.0 | 57 |
| Central Michigan | 8-5 | -5.6 | 83 | 30.1 | 39 | 36.3 | 107 | 0.6 | 42 |
| Texas A&M | 4-8 | -5.6 | 84 | 27.4 | 54 | 33.1 | 94 | 0.2 | 51 |
| Duke | 4-8 | -6.7 | 85 | 17.8 | 104 | 24.9 | 59 | 0.4 | 47 |
| Akron | 5-7 | -6.7 | 86 | 29.0 | 46 | 34.7 | 103 | -1.0 | 95 |
| Hawaii | 7-7 | -7.6 | 87 | 23.8 | 74 | 29.7 | 79 | -1.7 | 108 |
| Indiana | 3-9 | -8.3 | 88 | 24.3 | 72 | 31.6 | 88 | -1.0 | 93 |
| Florida Atlantic | 7-6 | -8.5 | 89 | 25.5 | 66 | 33.6 | 97 | -0.4 | 70 |
| Kent State | 4-8 | -8.5 | 90 | 27.6 | 53 | 34.6 | 101 | -1.5 | 105 |
| Mississippi State | 4-8 | -8.8 | 91 | 16.5 | 110 | 22.5 | 51 | -2.9 | 120 |
| Ohio | 4-8 | -9.3 | 92 | 21.6 | 87 | 29.7 | 78 | -1.2 | 101 |
| Temple | 5-7 | -9.4 | 93 | 19.0 | 98 | 27.9 | 75 | -0.5 | 79 |
| Arkansas State | 6-6 | -9.5 | 94 | 22.8 | 82 | 32.3 | 90 | 0.1 | 54 |
| Louisiana Tech | 8-5 | -9.7 | 95 | 20.2 | 92 | 29.6 | 77 | -0.3 | 66 |
| UAB | 4-8 | -10.3 | 96 | 24.5 | 71 | 35.4 | 106 | 0.6 | 44 |
| UL-Lafayette | 6-6 | -11.1 | 97 | 29.3 | 43 | 39.9 | 115 | -0.5 | 77 |
| Toledo | 3-9 | -11.4 | 98 | 22.6 | 83 | 33.1 | 93 | -0.9 | 89 |
| UNLV | 5-7 | -12.4 | 99 | 24.1 | 73 | 36.5 | 109 | 0.0 | 56 |
| San Jose State | 6-6 | -13.2 | 100 | 12.9 | 116 | 25.4 | 63 | -0.7 | 83 |
| Buffalo | 8-6 | -14.4 | 101 | 26.4 | 62 | 39.7 | 114 | -1.1 | 98 |
| Wyoming | 4-8 | -14.5 | 102 | 12.1 | 118 | 24.5 | 57 | -2.2 | 114 |
| Syracuse | 3-9 | -14.6 | 103 | 19.3 | 95 | 35.4 | 105 | 1.4 | 20 |
| Middle Tennessee | 5-7 | -15.0 | 104 | 17.2 | 107 | 31.6 | 87 | -0.6 | 81 |
| Miami-OH | 2-10 | -15.8 | 105 | 16.1 | 111 | 33.8 | 98 | 1.9 | 12 |
| Central Florida | 4-8 | -15.8 | 106 | 7.3 | 120 | 22.3 | 50 | -0.8 | 88 |
| Florida International | 5-7 | -17.3 | 107 | 15.0 | 114 | 31.2 | 84 | -1.1 | 97 |
| Tulane | 2-10 | -17.4 | 108 | 19.8 | 93 | 34.7 | 102 | -2.5 | 119 |
| Washington State | 2-11 | -17.5 | 109 | 16.0 | 112 | 32.4 | 91 | -1.1 | 100 |
| San Diego State | 3-9 | -17.9 | 110 | 19.3 | 96 | 36.7 | 110 | -0.5 | 76 |
| SMU | 1-11 | -18.5 | 111 | 23.0 | 81 | 42.1 | 118 | 0.6 | 41 |
| Utah State | 3-9 | -18.9 | 112 | 17.3 | 106 | 37.2 | 112 | 1.0 | 27 |
| Army | 3-9 | -20.3 | 113 | 10.2 | 119 | 30.4 | 83 | -0.1 | 61 |
| Washington | 0-12 | -21.3 | 114 | 17.4 | 105 | 36.3 | 108 | -2.4 | 117 |
| New Mexico State | 3-9 | -21.7 | 115 | 17.1 | 108 | 36.9 | 111 | -1.9 | 110 |
| Eastern Michigan | 3-9 | -22.0 | 116 | 21.3 | 88 | 41.2 | 117 | -2.1 | 113 |
| Western Kentucky | 2-10 | -22.1 | 117 | 12.8 | 117 | 34.2 | 100 | -0.7 | 84 |
| UL-Monroe | 4-8 | -26.7 | 118 | 16.6 | 109 | 41.1 | 116 | -2.2 | 115 |
| Idaho | 2-10 | -28.8 | 119 | 14.4 | 115 | 44.0 | 119 | 0.8 | 34 |
| North Texas | 1-11 | -30.9 | 120 | 18.9 | 100 | 48.9 | 120 | -0.8 | 87 |

The almost-dynasty, continued

From my 2007 write-up:

Under Pete Carroll, USC shared the 2003 national title with LSU and won it outright in 2004. Even though we’re supposed to pretend some of those now-vacated wins didn’t happen, we saw them — they happened. And they kept happening for a few more seasons. And while 1.5 titles will forever be impressive, it was close to so much more. USC ranked second in S&P+ in 2005, second in 2006, first in 2007, and, [spoilers], first in 2008. They were the best or second-best team in the country every year for six straight seasons. But they managed to lose by a combined six points at Oregon State and UCLA in 2006, by a combined eight to Stanford and Oregon in 2007, and by six at Oregon State in 2008. Erase two or three of those losses, and you’ve got a three-, four-, or maybe five-time national champion.

You could make the case that Pete Carroll’s 2008 USC squad, his last truly great one, might have been his greatest one. This was a vengeful team — the Trojans beat Penn State (fifth in S&P+) by 14, Ohio State (11th) by 32, and Oregon (15th) by 34. They won only one game by single digits, and it was on the road against a good Arizona team (22nd).

And yet...

For the second time in three years, a trip to Corvallis quite possibly kept USC out of the BCS Championship. (It might have in 2006; it definitely did in 2008.) The Rodgers brothers and their orange-clad cohorts might, almost by themselves, have turned a potential four-time national champ into a two-time champ.

Peak Big 12

From my 2008 advanced box scores piece:

The Big 12 experienced a perfect storm of innovation and quarterback experience, with OU’s Sam Bradford winning the Heisman and nearly every team starting either a senior (Mizzou’s Chase Daniel, Texas Tech’s Graham Harrell, Nebraska’s Joe Ganz) or junior quarterback (Texas’ Colt McCoy, OSU’s Zac Robinson, Kansas’ Todd Reesing, Kansas State’s Josh Freeman). Big 12 teams nabbed seven of the top nine spots in Off. S&P+, plus an eighth in the top 20, and Baylor, with a freshman named Robert Griffin III, surged to 33rd.

The conference-level adjustments I added to S&P+ did very, very happy things to the Big 12’s 2008 S&P+ ratings. Oklahoma State, Texas Tech, Kansas State, Colorado, and Iowa State all saw their rankings rise by at least 10 spots, and not only were there seven league teams in the Off. S&P+ top 10, there were five in the overall top 10.

I’ve maintained for a while that Missouri’s 2008 team, which went 10-4, was quite possibly/likely better than the 2007 edition that went 12-2 and finished fourth in the AP poll. The numbers back me up. But while the Tigers improved a little, much of the rest of the conference improved a lot. OSU was suddenly awesome, and unlike in 2007, Missouri had to play Texas. That made quite a difference.

By the next year, Mizzou had lost Daniel and Jeremy Maclin, Tech had lost Harrell and Michael Crabtree, OU had lost Bradford to injury, etc. But 2008 was indeed a perfect convergence of innovation and experience, and if a 4-team CFP had been in place that season, the conference almost certainly would have had two teams in it. (It’s possible we’d have had an OU-Texas rematch in the semifinals, too. That wouldn’t have sucked.)

Other notes: