23 Dec 2011
by Bill Connelly
As each season progresses, I phase out preseason projections and raw data in favor of the opponent-adjusted S&P+ you've come to know and love (or despise). This year, however, I noticed something: even seven or eight weeks into the season, leaving a little more of the raw data in, and diluting the opponent adjustments a bit, seemed to improve both the correlation between S&P+ and win percentage and, with it, the general predictive success of S&P+ as a whole.
The former improvement makes sense, of course -- raw data will correlate well with win percentage as long as major conference teams are playing major conference teams and mid-majors are playing mid-majors. The whole point of using opponent adjustments is to see through the ridiculous raw statistics a team may post while playing nobody in particular. But it was the latter that surprised me. (Honestly, I would have assumed more adjustment was needed.) I'll dig into this further in the offseason, but it appears that diluting the opponent adjustment a bit improves S&P+ overall. And if it improves S&P+, then obviously it improves F/+ as well.
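To make the dilution idea concrete, here's a minimal sketch of what blending raw and opponent-adjusted ratings might look like. The function name, the 0.8 weight, and the sample numbers are purely illustrative assumptions, not the actual S&P+ formula:

```python
def dilute_adjustment(raw, adjusted, weight=0.8):
    """Blend an opponent-adjusted rating back toward the raw rating.

    `weight` is the share given to the opponent-adjusted figure; the
    remainder comes from unadjusted raw data. The 0.8 default is an
    illustrative guess, not the real S&P+ dilution weight.
    """
    return weight * adjusted + (1 - weight) * raw

# Hypothetical team: gaudy raw numbers against a weak schedule get
# pulled down by the adjustment, then partially restored by dilution.
raw_rating = 1.35
adjusted_rating = 1.10
print(dilute_adjustment(raw_rating, adjusted_rating))
```

With `weight=1.0` you get the fully adjusted rating back; with `weight=0.0` you get the raw rating, so the knob cleanly spans the two extremes.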
So with time between the regular season and the chaos of next week's bowl previews (catch up on my bowl work for SB Nation here), I set about updating previous years of data to see if the same improvement took shape. Again, there will be a more in-depth investigation into general predictive abilities this offseason, but for now, the changes have absolutely improved the general correlations of S&P+ and F/+ to wins and losses.
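The correlation check described above is, at bottom, just a Pearson r computation between ratings and win percentage, repeated for each blend. The sketch below uses invented ratings and records, not real S&P+ output:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented ratings for six teams under two blends, plus win percentage;
# none of these numbers are real S&P+ figures.
heavy_adjust = [0.30, 0.22, 0.15, 0.05, -0.10, -0.25]
diluted      = [0.28, 0.24, 0.12, 0.07, -0.08, -0.22]
win_pct      = [0.92, 0.83, 0.67, 0.58, 0.42, 0.25]

print(pearson_r(heavy_adjust, win_pct))
print(pearson_r(diluted, win_pct))
```

Whichever blend produces the higher r against win percentage (and, separately, better game-by-game predictions) wins the comparison.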
When combined with Brian Fremeau's recent 2007-10 FEI changes, which allow us to make the same special teams adjustments to F/+ that we are using in 2011, that gives us a whole new set of better, more accurate rankings.
(National champions shaded in yellow below.)
[Table -- columns: F/+ Rk, Team, Record, F/+, S&P+, S&P+ Rk, FEI, FEI Rk]
S&P+ and FEI mostly agreed on the top teams this season. And really, as long as Texas is No. 1 and USC is No. 2, then everything else is secondary.
[Table -- columns: F/+ Rk, Team, Record, F/+, S&P+, S&P+ Rk, FEI, FEI Rk]
Ohio State built enough of a lead in S&P+ that they maintained their No. 1 ranking despite the dreadful BCS Championship result, but FEI pushed Florida over the top overall.
This season has the weakest overall correlations, which makes sense, really, as this season was one of the craziest on record. Basically every team with a good offense had a porous defense, and every team with a good defense couldn't move the ball very well. That's how you get a two-loss national champion (and a four-loss team in the top four).
Florida was easily the most well-rounded, balanced team in the country, but USC almost overtook them on the power of a ridiculous defense, and TCU squeezed into the top six because of the same.
The interesting team here is Texas, which actually graded out worse than they had in 2008, but they capitalized on a weaker Big 12 (and some luck in the Big 12 title game) to make the BCS Championship. All of their luck was apparently used up in the process, as they watched their quarterback get hurt on their opening drive of the title game.
This almost certainly produces the oddest result at the No. 1 spot. Auburn, which skated through close game after close game but still finished undefeated in the SEC, falls one percent behind Boise State thanks to the S&P+ adjustments. Honestly, though? I can see it. Boise State's one loss (on the road to No. 24 Nevada) was almost as fluky as a couple of Auburn's wins. Auburn earned the crown, but I'd have loved to see the Broncos get a shot (and they wouldn't have, even if they had beaten Nevada).
Some general conclusions to this process:
The new S&P+ adjustment helps mid-major teams, which is somewhat counter-intuitive to a lot of us. But if it results in both stronger correlations to wins and better predictive potential, then that's the way it's going to be. And let's be honest: the two most consistently great mid-major teams in recent years -- Boise State and TCU -- did little to disprove the notion that they were both very good. From 2008 to 2010, Boise State went 4-0 versus major conference teams, and TCU went 6-1 (the lone loss: to 2008 Oklahoma). We can talk about a major conference "grind" if we want, and there's no way to prove or disprove the grind's existence, but facts are facts, and the few results we have favor Boise State and TCU tremendously.
One of the major projects moving forward will be the predictive side of a measure like this. It is going to be very difficult for the same measure to both evaluate and predict at a high level, and it is quite possible that there will be two different F/+ or S&P+ types of measures -- one for reflecting on the year to date, and one for making predictions. And honestly, F/+'s recent success (knock on wood) in bowl projections might be justification for this. Once you remove week-to-week momentum from the equation, the predictive ability of F/+ seems to improve. Last year, it was awful in the bowls; this year, it has started out 5-1.
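A rough way to score that kind of bowl performance is simply to count how often the higher-rated team wins. The sketch below uses invented teams, ratings, and results rather than the real F/+ bowl picks:

```python
def pick_accuracy(games, ratings):
    """Fraction of games in which the higher-rated team won.

    `games` holds (team_a, team_b, winner) tuples; `ratings` maps
    team name to rating. All data below is invented for illustration,
    not an actual bowl slate or actual F/+ values.
    """
    correct = sum(
        winner == (a if ratings[a] >= ratings[b] else b)
        for a, b, winner in games
    )
    return correct / len(games)

ratings = {"Team A": 0.31, "Team B": 0.12, "Team C": -0.05, "Team D": -0.20}
bowls = [
    ("Team A", "Team C", "Team A"),  # favorite wins
    ("Team B", "Team D", "Team D"),  # upset
]
print(pick_accuracy(bowls, ratings))  # 0.5 on this toy slate
```

Run across full bowl seasons, a tally like this is one simple way to compare an evaluative blend against a predictive one.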
Just for fun, here are the top ten teams of the last seven years (including 2011 teams and their pre-bowl ratings):
1. 2011 LSU (+38.1%)
2. 2005 Texas (+35.9%)
3. 2008 Florida (+35.4%)
4. 2009 Alabama (+33.9%)
5. 2008 USC (+33.8%)
6. 2005 USC (+33.3%)
7. 2009 Florida (+32.8%)
8. 2009 TCU (+30.7%)
9. 2011 Alabama (+30.5%)
10. 2010 Boise State (+30.2%)
We'll see what changes after the bowls.