Final 2008 FEI Ratings

Photo: USA Today Sports Images

by Brian Fremeau

The Fremeau Efficiency Index principles and methodology can be found here. Like DVOA, FEI rewards playing well against good teams, win or lose, and punishes losing to poor teams more harshly than it rewards defeating poor teams. Unlike DVOA, it is drive-based, not play-by-play based, and it is specifically engineered to measure the college game.

FEI is the opponent-adjusted value of Game Efficiency, a measurement of the success rate of a team scoring and preventing opponent scoring throughout the non-garbage-time possessions of a game. Like DVOA, it represents a team's efficiency value over average. Strength of Schedule is calculated from a privileged perspective (explained here) and represents the likelihood that an elite team (top 5) would post an undefeated record against the given team's opponents to date.
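
The strength of schedule definition lends itself to a simple illustration: if we knew the probability that a generic top-five team would beat each opponent on a schedule, the SOS figure would be the product of those single-game win probabilities, so lower numbers indicate tougher schedules. The sketch below is not the actual FEI implementation; the logistic win-probability function and the ratings it consumes are assumptions for illustration only.

```python
from math import exp

def win_probability(elite_rating, opponent_rating, scale=0.25):
    """Hypothetical single-game win probability for a generic elite team.

    A logistic function of the rating gap -- an assumption for illustration,
    not the published FEI formula.
    """
    return 1.0 / (1.0 + exp(-(elite_rating - opponent_rating) / scale))

def strength_of_schedule(elite_rating, opponent_ratings):
    """Probability an elite team would go undefeated against this schedule.

    Lower values mean a tougher schedule (compare Oklahoma's 0.063 with
    Boise State's 0.553 in the final 2008 table).
    """
    sos = 1.0
    for rating in opponent_ratings:
        sos *= win_probability(elite_rating, rating)
    return sos

# Example: a 12-game schedule with a few strong opponents drags SOS down quickly.
opponents = [0.30, 0.25, 0.20, 0.10, 0.05, 0.00,
             -0.05, -0.10, -0.15, -0.20, -0.25, -0.30]
print(round(strength_of_schedule(0.35, opponents), 3))
```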

The following ratings are calculated based on data from all FBS games played through Thursday, January 8. Only games between FBS teams are considered.

Rank Team Record FEI Last Week vs. Top 10 vs. Top 40 GE GE Rank SOS SOS Rank
1 Florida 12-1 0.363 1 3-1 6-1 0.422 1 0.091 12
2 USC 12-1 0.287 5 1-0 5-1 0.383 2 0.231 50
3 Oklahoma 11-2 0.279 2 0-2 5-2 0.324 4 0.063 4
4 Texas 12-1 0.276 4 1-0 4-1 0.378 3 0.219 47
5 Penn State 10-2 0.253 3 0-1 2-2 0.304 5 0.216 44
6 North Carolina 7-5 0.229 6 0-1 5-3 0.103 23 0.132 25
7 Mississippi 8-4 0.218 15 1-1 3-2 0.168 15 0.087 11
8 Alabama 12-2 0.216 8 1-1 4-2 0.225 9 0.080 10
9 Virginia Tech 9-4 0.207 9 1-1 4-4 0.075 32 0.104 15
10 Florida State 7-4 0.205 10 1-1 4-4 0.084 28 0.054 2
11 Boston College 8-5 0.184 11 2-2 4-4 0.056 39 0.077 9
12 Utah 12-0 0.182 23 1-0 3-0 0.262 7 0.411 75
13 Pittsburgh 9-4 0.179 13 0-0 5-3 0.086 26 0.205 40
14 Georgia Tech 7-4 0.172 7 1-2 5-3 0.033 47 0.112 17
15 Wake Forest 8-5 0.171 18 2-0 4-4 0.045 44 0.136 26
16 Texas Tech 9-2 0.169 14 1-2 2-2 0.187 12 0.162 33
17 Iowa 8-4 0.167 17 1-0 1-1 0.187 13 0.318 65
18 Ohio State 9-3 0.166 12 0-3 0-3 0.150 17 0.131 24
19 West Virginia 8-4 0.156 24 1-0 4-3 0.076 31 0.206 41
20 Rutgers 7-5 0.150 21 0-1 4-4 0.105 22 0.188 37
21 Clemson 5-6 0.150 16 0-2 2-4 0.031 48 0.136 27
22 TCU 10-2 0.150 25 0-1 1-2 0.242 8 0.278 61
23 Boise State 11-1 0.143 22 0-0 1-1 0.299 6 0.553 97
24 Georgia 9-3 0.141 26 0-2 1-3 0.060 38 0.101 14
25 Cincinnati 10-3 0.131 20 0-2 4-3 0.068 35 0.128 22

Adjusted Offensive Efficiency and Adjusted Defensive Efficiency are the opponent-adjusted values of Offensive Efficiency and Defensive Efficiency, explained here. Like FEI, the multiple-order adjustments are weighted according to both the strength of the opponent and the relative significance of the result; efficiency against a team's best competition faced is given more relevance weight. AOE and ADE represent a team's value over/under average. Positive AOE and negative ADE are the most valuable.
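
A stylized version of that adjustment: credit each game's raw offensive efficiency relative to the opposing defense's baseline, then weight games by both opponent quality and the relevance of the matchup, with a team's best competition faced counting most. The specific weighting scheme below is a hypothetical stand-in for illustration, not the published AOE calculation.

```python
def adjusted_offensive_efficiency(games):
    """Illustrative opponent adjustment for offensive efficiency.

    `games` is a list of (raw_offensive_efficiency, opponent_defensive_rating)
    pairs, where a more negative defensive rating means a better defense.
    The weights here are assumptions for illustration only.
    """
    weighted_sum = 0.0
    weight_total = 0.0
    for raw_oe, opp_def in games:
        # Value over expectation: outgaining a good defense counts for more
        # than piling up the same raw number on a bad one.
        value_over_expected = raw_oe - opp_def
        # Relevance weight: heavier for the best competition faced.
        relevance = 1.0 + max(0.0, -opp_def)
        weighted_sum += relevance * value_over_expected
        weight_total += relevance
    return weighted_sum / weight_total if weight_total else 0.0

# Example: the same raw output against a top defense earns more credit
# (and more weight) than against a poor defense.
print(round(adjusted_offensive_efficiency([(0.40, -0.45), (0.40, 0.20)]), 3))
```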

Rank Team Record AOE AOE Rank ADE ADE Rank OE OE Rank DE DE Rank
1 Florida 12-1 0.580 2 -0.567 1 0.587 8 -0.587 2
2 USC 12-1 0.470 6 -0.483 5 0.500 11 -0.601 1
3 Oklahoma 11-2 0.581 1 -0.312 24 0.843 3 -0.061 55
4 Texas 12-1 0.521 4 -0.421 9 0.934 1 -0.243 26
5 Penn State 10-2 0.514 5 -0.257 27 0.500 12 -0.389 11
6 North Carolina 7-5 0.214 25 -0.443 7 -0.086 67 -0.285 18
7 Mississippi 8-4 0.339 13 -0.405 11 0.181 31 -0.328 15
8 Alabama 12-2 0.281 18 -0.329 15 0.075 43 -0.483 5
9 Virginia Tech 9-4 0.138 40 -0.406 10 -0.216 88 -0.310 16
10 Florida State 7-4 0.298 15 -0.320 20 0.004 54 -0.189 31
11 Boston College 8-5 0.138 39 -0.547 2 -0.178 79 -0.481 6
12 Utah 12-0 0.194 28 -0.330 14 0.211 27 -0.356 14
13 Pittsburgh 9-4 0.234 22 -0.314 23 0.061 46 -0.168 35
14 Georgia Tech 7-4 0.318 14 -0.249 28 0.008 53 -0.125 44
15 Wake Forest 8-5 -0.033 67 -0.511 3 -0.267 99 -0.278 20
16 Texas Tech 9-2 0.524 3 -0.211 32 0.921 2 0.153 83
17 Iowa 8-4 0.143 37 -0.393 12 0.070 44 -0.479 7
18 Ohio State 9-3 0.165 34 -0.322 19 0.031 50 -0.377 13
19 West Virginia 8-4 0.182 32 -0.307 25 -0.082 65 -0.307 17
20 Rutgers 7-5 0.228 23 -0.210 33 0.191 29 -0.165 36
21 Clemson 5-6 0.052 53 -0.491 4 -0.239 94 -0.387 12
22 TCU 10-2 0.093 47 -0.437 8 0.228 26 -0.559 3
23 Boise State 11-1 0.116 44 -0.328 16 0.370 15 -0.536 4
24 Georgia 9-3 0.421 8 -0.088 55 0.191 30 -0.105 47
25 Cincinnati 10-3 0.023 58 -0.315 22 -0.121 72 -0.275 22

The Final FEI Ratings for all 120 FBS teams can be found here. Expanded FEI Ratings data can be found here.

Defense Wins Championships

The BCS Championship game had all the elements of a classic, but none of the polish: goal-line defensive stands, massive momentum swings, aggressive (sometimes too aggressive) tackling, several superstars with solid performances, and several did-you-see-that game-defining plays. Unfortunately, the broadcast team wasn't paying attention, the referee crew was rusty, and the game never reached its full potential. Nevertheless, the Florida Gators were the best college football team for one night and for all of 2008, and their success can mostly be attributed to their all-world, life-affirming, inspirational, transcendent, radiant star: the Gators defense.

In the Football Outsiders BCS Championship Preview, I noted that short and long field position drives might be the key to the game's outcome, tipping the scales in favor of Florida. I was partially right, but not for the particular reasons I had envisioned. Unlike much of their 2008 season, Florida played the entire first half against Oklahoma at a significant field position disadvantage. Disregarding a kneel-down possession from their own 3-yard line, the Gators' average starting field position on their four first-half drives was their own 12-yard line; Oklahoma began its five first-half possessions on average from its own 40-yard line. That field position advantage and extra possession should be worth an extra touchdown according to national efficiency averages (6.53 points to be exact), but the explosive Oklahoma offense only managed a 7-7 tie at halftime.
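
The 6.53-point figure comes from national efficiency averages of expected points by starting field position. The sketch below reproduces the style of that calculation with a hypothetical expected-points table; the placeholder values are assumptions, not the actual FEI national baselines.

```python
# Hypothetical expected points per possession by starting field position
# (yards from the offense's own goal line). These are illustrative
# placeholders, not the actual national averages behind the 6.53 figure.
EXPECTED_POINTS = {12: 1.3, 40: 2.3}

def field_position_edge(starts_a, starts_b):
    """Expected scoring edge for team A given each side's drive starts."""
    ep_a = sum(EXPECTED_POINTS[yard] for yard in starts_a)
    ep_b = sum(EXPECTED_POINTS[yard] for yard in starts_b)
    return ep_a - ep_b

# Oklahoma: five first-half drives from (on average) their own 40.
# Florida: four first-half drives from (on average) their own 12.
print(field_position_edge([40] * 5, [12] * 4))  # ~6.3 points with these placeholders
```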

Florida thwarted the highest-scoring offense in college football history on their shortest fields of the night, an area the Gators excelled at protecting all season long. On 32 opponent possessions begun 60 or fewer yards from the goal line, Florida gave up only four touchdowns, forced 11 punts, collected three interceptions, and turned the opponent over on downs three times. (The much-revered USC defense gave up seven touchdowns on 22 such drives in 2008.) In the championship, in fact, Oklahoma came away with zero points on two golden opportunities following egregious Tim Tebow first-half interceptions. Tebow led the late game-clinching fourth-quarter drives with precision passing and strong running, but the defense's ability to protect short fields after his first-half miscues put him in position to be a hero in the end.
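
To put those short-field numbers on a common scale, a quick rate comparison using only the counts cited above looks like this:

```python
def touchdown_rate(touchdowns, possessions):
    """Touchdowns allowed per short-field possession (drives starting 60 or
    fewer yards from the goal line)."""
    return touchdowns / possessions

# Counts cited above: Florida allowed 4 TDs on 32 such drives, USC 7 on 22.
print(f"Florida: {touchdown_rate(4, 32):.1%}")  # 12.5%
print(f"USC:     {touchdown_rate(7, 22):.1%}")  # 31.8%
```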

Oklahoma's offensive frustrations were also tied to the flow (or lack thereof) of the game. Injury timeouts, officiating delays, and even a bizarre two-timeout-then-punt sequence all contributed to the game's stunted rhythm. Florida's ball control had something to do with it as well, limiting the Sooners to only ten total drives in the game, two fewer than in any other Oklahoma game this season and well below their average pace. There were fewer total possessions in the BCS Championship than in any other bowl game in 2008, but it didn't faze the Gators. Florida was victorious in all four games this season in which it possessed the ball ten times or fewer.

Great defense not only won Florida its championship; it also prevailed over top offenses more often than not this season. Eight games were contested in 2008 between FEI top-10 offenses and defenses, and the top defenses won six of those showdowns, holding the offenses under 30 points five times and under 20 points three times.


Top-10 Defense versus Top-10 Offense in 2008
Date Top-10 Defense Top-10 Offense Game Winner
Sept. 20 Texas (No. 9 ADE) Rice (No. 10 AOE) Texas (52-10)
Sept. 27 TCU (No. 8 ADE) Oklahoma (No. 1 AOE) Oklahoma (35-10)
Oct. 11 Texas (No. 9 ADE) Oklahoma (No. 1 AOE) Texas (45-35)
Oct. 25 Texas (No. 9 ADE) Oklahoma State (No. 7 AOE) Texas (28-24)
Nov. 1 Texas (No. 9 ADE) Texas Tech (No. 3 AOE) Texas Tech (39-33)
Nov. 1 Florida (No. 1 ADE) Georgia (No. 8 AOE) Florida (49-10)
Jan. 1 USC (No. 5 ADE) Penn State (No. 5 AOE) USC (38-24)
Jan. 8 Florida (No. 1 ADE) Oklahoma (No. 1 AOE) Florida (24-14)

Several top-10 FEI defenses populated the ACC, a conference that befuddled FEI all season long and throughout bowl season. The league exceeded national expectations in the postseason (7-3 against the spread in bowls), but did little to boost its actual credibility (4-6 straight-up). An FEI offseason project is required to fully investigate the system's love affair with a particular conference, but the simple explanation is that, as developed, FEI rewards tightly contested wins and losses. Five of the conference's six bowl losses were by a touchdown or less, just like virtually every league game in the regular season. North Carolina tallied four losses on the year by a combined nine points, played 11 of 12 games against FEI top-50 teams, and stomped No. 14 Georgia Tech, No. 20 Rutgers, and No. 26 Connecticut. Is the Tar Heels' profile disproportionately boosted by their early-season domination of teams that turned their seasons around? FEI doesn't factor in when the games were played, perhaps to a fault. But since many interconference games are played early in the year, is there a sound, viable alternative?

The other major storyline of 2008 was the Utah Utes, a team that has received well-deserved attention and recognition for its undefeated season and dismantling of Alabama in the Sugar Bowl. The win boosted the Utes' FEI rating as much as any team's in the final standings, but they still don't move into the conversation of elite teams according to this system. FEI, like any computer ranking system, is cold and calculating, but that doesn't make it necessarily any more "right" about Utah or the other top teams in 2008 than any other individual's opinion. A human opinion (mine) created the algorithm that produced the FEI rating, and the ratings for Utah, North Carolina, and even Florida are simply observations of data processed by the algorithm.


Updated FEI Top 25 Non-BCS Conference Teams 2003-2008
Year Team Rank That Year Record FEI
2004 Utah 5 12-0 0.243
2006 BYU 12 11-2 0.206
2006 Boise State 16 12-0 0.198
2008 Utah 12 12-0 0.182
2007 BYU 14 10-2 0.177
2003 Miami(OH) 12 13-1 0.176
2005 TCU 20 11-1 0.167
2003 Utah 15 10-2 0.162
2004 Boise State 21 11-1 0.155
2008 TCU 22 10-2 0.150
2008 Boise State 23 11-1 0.143
2004 Fresno State 25 8-3 0.128

Does a formula, any formula, reveal something about the teams that we might not observe as fans of the game? I created FEI not as a mathematician seeking a thesis subject, but as a fan looking for a more objective and true evaluation of data from the game I love. As a fan of college football, I saw an ultra-motivated Utah team in the Sugar Bowl that would give fits to any team in college football this year, including the Gators, Trojans, and Longhorns. As an analyst, I'm trying to reach more tangible conclusions about not just that particular game, but every game played by Utah and the other 119 FBS teams in 2008. As I illustrated with "The Cloud" in the final pre-bowl FEI ratings column, college football teams are perhaps not best categorized linearly, despite our inclinations as analysts, writers, and fans to rank them that way. We want to argue over "Who's Number One?" even when, or especially when, the answer is complicated. And so before we rev up next year's debate over "Who's Number One?" Football Outsiders will continue to develop new statistical strategies to best answer that question. See you next season.

Comments

6 comments, Last at 16 Jan 2009, 11:20am

#1 by BK (not verified) // Jan 14, 2009 - 5:39pm

Excellent explanation of the model failure on Utah. I'm anxious to read about the re-jiggering to account for the analytics not matching up to the reality - not just Utah, but Big 10 teams as well.

Nice work!!!

#2 by Parker W. (not verified) // Jan 14, 2009 - 6:59pm

To BK's point:

I disagree. This is exactly the type of process Bill James was describing when he wrote about how the BCS is constantly tweaked in order to more closely reflect the human polls.

Tweak the FEI ratings to better approximate the strength of teams but don't do it so that it better matches up with human opinion (in the sense of AP polls and the like). If a computer system is perfectly refined to the point that it reflects the opinions of human observers, then what is the point in having that computer system in place?

Rather, tweak based on some stream of logic.

#3 by BK (not verified) // Jan 15, 2009 - 10:28am

I'm absolutely not talking about tweaking to match the polls. I'm referring to tweaking to reflect reality. I only watched the first half of the Bama Utah game because it really wasn't much of a game, as Utah was clearly better than Alabama. Yet the final results have Utah four places behind Alabama. And based on reality, Penn State and the rest of the Big Ten teams seem too high as a result of the teams being outclassed.

I am absolutely in love with a model that does a good job of measuring a team's success rather than, say, ranking Notre Dame high because they are historically a good team. But a good model should be predictive, and based on the final results I would wager against predictions that this model implies.

#4 by parker (not verified) // Jan 15, 2009 - 10:46am

It's easy to rank any system. Track its record of wins and losses. Maybe do something like take each team's best win and worst loss and then compute the wins and losses. For the most part they should match up with what the system said. If not, then it's time to tweak. My personal opinion (not that it matters) is that the system would do better to tweak the ADE.

#5 by young curmudgeon (not verified) // Jan 15, 2009 - 8:21pm

"A human opinion (mine) created the algorithm that produced the FEI rating, and the ratings for Utah, North Carolina, and even Florida are simply observations of data processed by the algorithm."

This is a great (and very honest) point to make, and one that often gets lost when people start talking about "the polls" vs. "the computers."

#6 by Parker W. (not verified) // Jan 16, 2009 - 11:20am

I can't believe two parkers posted on this thread.
