
31 Oct 2007

Fremeau Efficiency Index: Week 9

by Brian Fremeau

The Fremeau Efficiency Index (FEI) principles and methodology can be found here. Like DVOA, FEI rewards playing well against good teams, win or lose, and punishes losing to poor teams more harshly than it rewards defeating poor teams. Unlike DVOA, it is drive-based, not play-by-play based, and it is specifically engineered to measure the college game.

FEI is the opponent-adjusted value of Game Efficiency, a measurement of the success rate of a team scoring and preventing opponent scoring throughout the non-garbage-time possessions of a game. Like DVOA, it represents a team's efficiency value over average.
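
The full opponent-adjustment procedure lives in the methodology linked above; as a rough illustration of the general idea only (this is not Fremeau's actual formula, and the team names and efficiency values below are invented), a simple iterative adjustment in Python might look like this:

# Illustrative only: a generic iterative opponent adjustment, NOT the
# published FEI formula. Every team starts at average (0.0), and each
# rating is repeatedly re-estimated as the team's average game efficiency
# credited by the current rating of each opponent faced.
from collections import defaultdict

def adjust_for_opponents(games, iterations=20):
    # games: list of (team, opponent, game_efficiency) tuples from the
    # team's perspective, with game_efficiency expressed as value over average.
    ratings = defaultdict(float)
    for _ in range(iterations):
        performances = defaultdict(list)
        for team, opp, ge in games:
            # A performance counts for more against a strong opponent and
            # for less against a weak one.
            performances[team].append(ge + ratings[opp])
        new = {t: sum(v) / len(v) for t, v in performances.items()}
        # Damp the update so the iteration settles down.
        ratings = defaultdict(float, {t: 0.5 * ratings[t] + 0.5 * r
                                      for t, r in new.items()})
    return dict(ratings)

# Hypothetical schedule with made-up efficiency values:
sample = [("LSU", "Auburn", 0.15), ("Auburn", "LSU", -0.15),
          ("LSU", "Tulane", 0.40), ("Tulane", "LSU", -0.40),
          ("Auburn", "Vanderbilt", 0.10), ("Vanderbilt", "Auburn", -0.10)]
print(adjust_for_opponents(sample))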

Only games between FBS (Division 1-A) teams are considered. The Week 9 Ratings represent the results of games played through Sunday, October 28, 2007.


Rank Team W-L FEI Last Wk GE GE Rank
1 LSU (7-1) 0.298 1 0.261 7
2 South Florida (5-2) 0.242 2 0.128 22
3 Oregon (7-1) 0.240 4 0.256 8
4 West Virginia (7-1) 0.236 6 0.357 2
5 Arizona State (8-0) 0.220 10 0.274 6
6 Auburn (6-3) 0.215 5 0.128 21
7 Ohio State (8-0) 0.213 8 0.377 1
8 Oklahoma (7-1) 0.209 7 0.299 4
9 Boston College (7-0) 0.205 9 0.225 9
10 Florida (4-3) 0.202 3 0.093 28
11 Connecticut (6-1) 0.189 21 0.152 18
12 Alabama (5-2) 0.182 18 0.086 32
13 Georgia (5-2) 0.172 35 0.046 46
14 Kansas (7-0) 0.170 12 0.308 3
15 California (5-3) 0.167 15 0.076 37
16 Clemson (5-2) 0.167 31 0.188 13
17 Florida State (5-3) 0.161 24 0.084 33
18 Kansas State (4-3) 0.147 19 0.110 25
19 Cincinnati (5-2) 0.147 16 0.179 15
20 Georgia Tech (4-3) 0.144 20 0.090 29
21 Missouri (6-1) 0.144 13 0.192 12
22 BYU (4-2) 0.144 25 0.137 20
23 South Carolina (5-3) 0.141 23 0.034 50
24 Kentucky (5-3) 0.139 11 0.032 52
25 Wake Forest (6-2) 0.134 28 0.107 26

Click here for rankings of all 119 teams.

Happy Halloween everyone, and welcome to a trick-or-treat edition of the FEI ratings. After an eventful, though relatively by-the-books weekend in college football, I was prepared to dish out a bit of candy to the undefeated teams, all of which emerged from the weekend unscathed. I didn't expect to see South Florida on my doorstep still dressed up like the No. 2 team in the land even after dropping their second straight game.

Before the egging of these ratings commences, allow me to try to defend USF for a third straight week. The reason for the Bulls' FEI ranking ultimately boils down to their 2-1 record against top 11 teams (West Virginia, Auburn, Connecticut). LSU is 2-0 against the top 11. West Virginia is 0-1, Auburn is 1-2, Florida is 0-2, and Connecticut is 1-0. The rest of the top ten has a combined record of 2-1 against the FEI top 20. South Florida hasn't done anything to add to their resume the last two weeks, but no other team has done enough to stake a claim to the spot. It is important to note, too, that though USF's ranking hasn't budged, their FEI rating itself has dipped. The difference between then-No. 2 South Florida and then-No. 3 Auburn in week seven is now about equivalent to the current difference between USF and No. 12 Alabama.

As surprised as I was by this week's output, I'm going to go out on a limb and say that Saturday's marquee match-up between No. 3 Oregon and No. 5 Arizona State will likely disrupt the top-two standings regardless of its outcome. The winner, in addition to taking the driver's seat in the Pac-10 standings, will likely vault into the top two, and the loser likely won't fall far, if at all.

No. 1 LSU travels to No. 12 Alabama in the other monster game of the weekend, one that could ultimately decide the SEC West title. The game will probably also further bolster the profile of the SEC as a whole in FEI, even if that doesn't seem possible. Nine SEC teams currently rank in the FEI top 31 and outside of that group, No. 47 Mississippi State has defeated two of those top nine. If these ratings scream SEC bias to you, consider that only one team out of twelve in the conference (Mississippi) has a losing record in games against other FBS teams. The ACC (four out of 12), Big 12 (three out of 12), Big East (two out of eight), Big Ten (two out of 11), and Pac-10 (four out of 10) all have more bottom-feeders than does the SEC. The SEC also boasts the best out-of-conference record (32-5) among all conferences, with all five losses coming against teams rated in the FEI top 21.

Offensive/Defensive Efficiency

As is explained in the process and methodology, FEI ratings are created from raw Game Efficiency data, the measurement of the success rate of a team both scoring on its own possessions and preventing scores on its opponent's possessions. Though I believe this combined information is valuable for FEI, it may also be useful to isolate and examine each team's offensive and defensive efficiency rates in order to identify unique strengths and weaknesses.

To produce the following offense/defense split ratings, I first calculated each team's raw offensive efficiency (OE) as a simple function of the modified scoring outcome of each individual drive -- the total points from touchdowns (six points each) and field goal attempts (2.13 points each, representative of the national college FG success rate) divided by the total number of competitive possessions played. Raw defensive efficiency (DE) was calculated the same way for every competitive possession faced by each team's defense.
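
For readers who want to follow the arithmetic, here is a minimal sketch of that raw calculation in Python. The drive counts are invented, and the published OE/DE values appear to sit on a smaller scale, so there is presumably an additional normalization not spelled out in this article:

# Minimal sketch of the raw efficiency calculation described above:
# touchdowns count 6 points, field goal attempts 2.13 points (the stated
# national FG expectation), divided by competitive possessions.
TD_POINTS = 6.0
FGA_POINTS = 2.13

def raw_efficiency(touchdowns, fg_attempts, competitive_possessions):
    # Modified points per competitive possession.
    points = touchdowns * TD_POINTS + fg_attempts * FGA_POINTS
    return points / competitive_possessions

# Hypothetical team: 28 TDs and 14 FG attempts on 95 offensive possessions,
# 15 TDs and 12 FG attempts allowed on 90 defensive possessions.
oe = raw_efficiency(28, 14, 95)   # offensive efficiency (higher is better)
de = raw_efficiency(15, 12, 90)   # defensive efficiency (lower is better)
print(round(oe, 3), round(de, 3))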

Based on national offensive efficiency averages from each yard line on the field, each team's actual OE was then measured against a combination of what it could expect to score against an average opponent from each starting field position and the DE of the opposition it actually faced.

For example, on the dramatic game-winning drive in the Boston College vs. Virginia Tech game last Thursday, the Eagles began the possession at their own 34-yard line. An average offense versus an average defense will score a touchdown from that field position approximately 29 percent of the time, and the Hokie defense is about twice as stout as average. The previous BC scoring drive from its own eight-yard line, adjusted for both field position and Virginia Tech's defense, had a touchdown expectation of only 9 percent. Each of BC's actual drive successes (and failures) for the season is likewise measured against each situation's field position and opponent strength to produce the Adjusted Offensive Efficiency (AOE) according to the following equations:

AOE = (OE / Expected OE) * National OE
ADE = (DE / Expected DE) * National DE
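
As a sketch of how those equations might be applied drive by drive, the Python below uses the Boston College example above. The exact form of "Expected OE" is an assumption here (a national field-position expectation scaled by opponent defensive strength), and the national average and drive data are placeholders, not real figures:

# Sketch of the AOE equation above, under an assumed form for Expected OE.
# The units cancel in the (actual / expected) ratio as long as both sides
# use the same scale; over a full season of drives the ratio sits much
# closer to 1 than in this two-drive example.
NATIONAL_OE = 0.35   # assumed national average offensive efficiency

def adjusted_offensive_efficiency(drives):
    # drives: list of (actual_points, expected_points, defense_factor),
    # where defense_factor = 0.5 means a defense about twice as stout as
    # average, per the Virginia Tech description above.
    actual = sum(pts for pts, _, _ in drives)
    expected = sum(exp * factor for _, exp, factor in drives)
    return (actual / expected) * NATIONAL_OE

# The two Boston College scoring drives described above, both touchdowns:
# from the BC 8 (9 percent expectation, already adjusted for Virginia Tech)
# and from the BC 34 (29 percent expectation vs. an average defense, halved
# here for the Hokies' defense).
# ADE would be computed the same way from the drives a defense faces.
bc_drives = [(6.0, 0.09 * 6, 1.0),
             (6.0, 0.29 * 6, 0.5)]
print(adjusted_offensive_efficiency(bc_drives))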

The table below lists the FEI top 25 teams along with their OE/AOE and DE/ADE ratings and rankings. The complete list of 119 teams can be found here, along with the 2006 ratings.


FEI Rank Team W-L OE OE Rank AOE AOE Rank DE DE Rank ADE ADE Rank
1 LSU (7-1) 0.423 9 0.459 5 0.166 9 0.165 7
2 South Florida (5-2) 0.310 48 0.328 39 0.189 18 0.175 12
3 Oregon (7-1) 0.491 1 0.479 2 0.240 39 0.213 22
4 West Virginia (7-1) 0.476 2 0.465 4 0.174 11 0.192 15
5 Arizona State (8-0) 0.376 19 0.358 25 0.165 8 0.159 6
6 Auburn (6-3) 0.287 57 0.335 35 0.183 13 0.150 3
7 Ohio State (8-0) 0.385 16 0.326 41 0.050 1 0.060 1
8 Oklahoma (7-1) 0.464 5 0.429 9 0.179 12 0.168 10
9 Boston College (7-0) 0.320 39 0.390 15 0.116 3 0.168 9
10 Florida (4-3) 0.464 6 0.531 1 0.385 104 0.316 81
11 Connecticut (6-1) 0.276 64 0.263 68 0.157 5 0.171 11
12 Alabama (5-2) 0.315 45 0.342 32 0.254 42 0.236 34
13 Georgia (5-2) 0.346 32 0.385 18 0.276 49 0.238 37
14 Kansas (7-0) 0.378 18 0.326 42 0.106 2 0.111 2
15 California (5-3) 0.361 26 0.402 13 0.296 56 0.286 60
16 Clemson (5-2) 0.383 17 0.389 16 0.160 6 0.185 13
17 Florida State (5-3) 0.238 86 0.250 79 0.189 17 0.199 17
18 Kansas State (4-3) 0.316 44 0.372 22 0.214 28 0.218 26
19 Cincinnati (5-2) 0.360 27 0.359 24 0.188 15 0.209 19
20 Georgia Tech (4-3) 0.261 70 0.309 46 0.193 20 0.242 41
21 Missouri (6-1) 0.431 8 0.475 3 0.278 52 0.264 47
22 BYU (4-2) 0.317 43 0.330 37 0.206 22 0.214 23
23 South Carolina (5-3) 0.230 88 0.246 81 0.226 35 0.191 14
24 Kentucky (5-3) 0.409 10 0.442 8 0.332 79 0.294 67
25 Wake Forest (6-2) 0.250 77 0.233 89 0.227 36 0.238 36

Of note, only two teams this season -- LSU and Oklahoma -- currently boast top-10 ratings in both AOE and ADE. At the end of last season, only Florida, USC, and LSU held that distinction, and they finished the year ranked 1-2-3 in FEI.

Also of note, Ohio State's ADE rating is almost twice as good as that of the second-best defensive team, Kansas, and almost three times as good as that of the best defensive team of 2006, Wisconsin.

These ratings will be updated each week along with FEI for the remainder of the season.

Underrated

Florida State (5-3; NR BCS, NR AP, No. 17 FEI)
FSU hasn't looked very flashy this season, and as I mentioned in the season preview, I expected the Seminoles to fall short of their preseason poll position. But wins over No. 12 Alabama and No. 32 Colorado and single-score losses to three ACC foes, along with a top-20 defense, have kept them afloat. Florida State also rates as one of the absolute best teams in the country at defending a short field, surrendering a grand total of only 20 points on opponent drives begun at or inside the FSU 45-yard line this season. Boston College, Virginia Tech, Maryland, and Florida all remain on Florida State's schedule, three of them on the road. I do not expect them to run the table, but a 3-1 finish heading into a bowl game is possible.

Cincinnati (5-2; NR BCS, NR AP, No. 19 FEI)
Like their upcoming foe South Florida, the Bearcats jumped out of the gate strong this season, defeating both Oregon State and Rutgers to move into the national poll picture before falling flat the last two weeks. In spite of those losses, Cincinnati remains one of only nine teams in the country to boast a top-25 AOE and ADE. They will need both to contend with South Florida, Connecticut, and West Virginia in the next three weeks, but do not be shocked if Cincinnati, like FSU, pulls out a win or two.

Overrated

Wisconsin (6-2; No. 21 BCS, Receiving votes AP, No. 46 FEI)
The Badgers have played a virtual Who's-Not-Who thus far this season, and have lost to the only two teams of consequence they have faced, No. 33 Illinois and No. 44 Penn State. Both their AOE and ADE ratings are modest, but nothing they have done so far on the field suggests they can even compete with Ohio State this weekend. Michigan and Minnesota round out their Big Yawn schedule, and the Badgers will be off to a bowl game of little significance.

Hawaii (6-0; No. 14 BCS, No. 12 AP, No. 50 FEI)
Last season, Boise State crashed the BCS party in a big way, executing one of the most exciting (and creative) come-from-behind victories in college football history in their Fiesta Bowl win over Oklahoma. If that bid was supposed to usher in a new era of second-tier conferences earning BCS respect, this Hawaii team, if given the same opportunity, will usher that new era right out. The Warriors have played absolutely no one this season, and haven't even looked spectacular doing it, eking out victories over No. 82 Louisiana Tech and No. 118 San Jose State in overtime. Last season's 10-3 Hawaii team would probably trounce this season's undefeated one. Look for Hawaii to drop their November 23 matchup with Boise State, and don't be surprised if they trip up at least once more.

Posted by: Brian Fremeau on 31 Oct 2007

28 comments, Last at 03 Nov 2007, 7:29pm by Dwayne

Comments

1
by Jake (not verified) :: Wed, 10/31/2007 - 4:16pm

I still haven't seen any evidence these ratings are in any way worthwhile. I'm not trying to be unnecessarily mean or negative, but why should we pay any attention to this formula? DVOA is worthwhile because, aside from methods that base rankings directly on scores (Pythagorean etc.), it best predicts actual future results while correlating with past outcomes AND allows for evaluating components of a team.
It does not appear that FEI does well in evaluating future games, at least not as well as many simpler and more open methods. It also doesn't do a good job of reflecting past performance, given how many times we see above that Team A, with a better record and a win over Team B, is actually ranked below Team B. And I still haven't heard a reasonable explanation of what a "good team" is in college football.

2
by mmm... sacrilicious (not verified) :: Wed, 10/31/2007 - 4:20pm

I am in no way sure about this, but it seems offhand like the FEI might reward teams for splitting their schedule into "very hard" and "very easy" games. If a team is extremely efficient against creampuffs, but schedules a few top teams, it will have high efficiency and high SOS no matter how it does against the good teams.

I hope somebody can debunk this, but it seems like it could be a flaw in the system.

3
by Will Allen (not verified) :: Wed, 10/31/2007 - 4:25pm

Even with three losses Wisconsin will get a decent bowl game, given how their fans travel. Significance is in the eye of the beholder, I guess.

A question to you folks who watch a lot more college ball than I; do you think bowl games give a reliable gauge of conference strength, or does the layoff mask conference strength? How have you reached that conclusion?

I guess a question directed at Brian would be whether the index has been around long enough to get a handle on how strongly the rankings correlate with bowl game outcomes.

4
by mmm... sacrilicious (not verified) :: Wed, 10/31/2007 - 4:36pm

Will - I think bowl games are bogus as a measure of conference strength simply because of small sample size. You're really attempting to judge a conference based on the result of 2-3 games, since many of the lower-tier bowl games are set up to be mismatches from the start.

5
by NewsToTom (not verified) :: Wed, 10/31/2007 - 4:44pm

Re #3
My personal opinion is that bowl games are indicative of how well those two teams played that day, and almost nothing else. Subjectively, college teams seem to have a much higher variability than pro teams, even when playing on a weekly or every-other-week basis. Throw in another game 3.5 to 6 weeks after a team has already played its last game, in weather conditions not necessarily seen in three months, where there are obligations other than football, and the game result doesn't necessarily matter very much to anyone other than the alumni, and you have a recipe for something with as much inherent value as your average NFL preseason game.

Alternatively, bowl games are totally irrelevant if your preferred conference and/or team don't do well and are the only real way to judge comparative rankings if your team/conference does well.

As always with the college game, YMMV.

6
by Will Allen (not verified) :: Wed, 10/31/2007 - 4:46pm

I guess a better question would then be whether bowl games are so different from regular interconference games that the bowl games should not be used at all for measuring conference strength. One reason I ask is that I've heard some conference homers argue that their conference's record in bowl games should not be used for evaluating total conference strength, when the losing team is one that had national title aspirations that were ended at the end of the season. The reasoning seems to be that once the chance for a national title was lost, the bowl game didn't provide enough motivation to get the team's best effort. This has always struck me as a ridiculous argument, but like I said, I don't watch a ton of college ball.

7
by Dave (not verified) :: Wed, 10/31/2007 - 4:49pm

As a fan of a Big East team (and thus of a conference that went 5-0 in bowls last season, though my Orange, of course, stayed home), I'd like to think that the bowls are a reliable gauge of conference strength. But I really don't think they are; some conferences have much better bowl deals than others, which creates a lot of inherent mismatches. And motivation levels for teams in bowl games can vary wildly. Big East teams have had something of a chip on their shoulder lately (particularly vs. the ACC), intent on justifying the conference's place in the BCS. Small schools and non-traditional football powers typically care far more about lower-tier bowls than traditional powers. Etc. So while bowl performance says something, I don't know that it says a lot.

8
by Kal (not verified) :: Wed, 10/31/2007 - 5:01pm

I think bowl games by themselves are not a great indicator, but they are a good indicator when combined with second-order evaluations and the rest of the nonconference schedules to determine relative conference strengths. Sometimes teams are just not equipped to deal with a certain player or a certain scheme (Michigan vs. Oregon comes to mind), sometimes it's bad injury luck, and sometimes the refs just hand the game over. You don't want to just use win-loss record either, because big wins over teams should count as a better sign than close wins, especially in bowl games.

9
by Pat (not verified) :: Wed, 10/31/2007 - 5:30pm

It is important to note, too, that though USF’s ranking hasn’t budged, their FEI rating itself has dipped. The difference between then-No. 2 South Florida and then-No. 3 Auburn in week seven is now about equivalent to the current difference between USF and No. 12 Alabama.

This is basically akin to what I said in a previous week: if you made a ranking that showed the likelihood distributions (the "error bars") for each team this year, they'd be huuuge. Everyone is pretty shaky, except LSU.

10
by Seth Burn (not verified) :: Wed, 10/31/2007 - 5:34pm

#1: Wow! Tough question. You are correct in that the value of data lies in its predictive ability. If FEI cannot predict future results it is noise. While FEI can predict that LSU would beat Hawaii, so could pretty much anyone who watches college football. What would you consider a fair test of FEI to be? What objective standard does DVOA pass that FEI fails?

I feel it is appropriate to put FEI on trial to prove its worth but we need to come up with the test in advance.

11
by Will Allen (not verified) :: Wed, 10/31/2007 - 5:43pm

Start with matching FEI against Sagarin, perhaps?

12
by Brooklyn Buckeye (not verified) :: Wed, 10/31/2007 - 5:52pm

I really like the addition of the FEI ratings as a regular feature here on FO.

However, the algorithm needs work. Yes, I'm a Buckeyes homer, but I'm a Bearcat Alum. I'm quite familiar with their football program both past and present, and if your stats tell you that the Bearcat offense is better than the Buckeye offense, comparatively, then your stats are flat out wrong.

OSU behind LSU, WV and Oregon? Yeah, maybe.

Behind Auburn? Arguable. Auburn is talented but WILDLY inconsistent.

Behind ASU and SF? Come on now...tell me honestly that SF can beat OSU on ANY field.

I know the Buckeyes underperformed last year in the title game. I know the SEC is stronger than the other conferences.

But maybe, just maybe the SEC isn't THAT much stronger. Maybe, just maybe, the SEC is filled with talented but INCONSISTENT teams, which is why there is so much parity...not because they are all GREAT, but because they are all INCONSISTENT. It's like a conference full of the Jacksonville Jaguars.

And, given that, USF's win over Auburn wouldn't skew the data quite so much and the Big East wouldn't have Cinci in the top 25 while Illinois and Mich are left out. Do you really think Mich is worse than Kansas State?

Subjectively, this list is interesting but really needs some tweaking. We'll see how the rest of the season plays out...

13
by NewsToTom (not verified) :: Wed, 10/31/2007 - 6:03pm

Re #6 (and others)
The biggest problem I have with the use of bowls is the next year. More than one SEC fan has used the argument that Florida's win over tOSU in the BCS CG in January is proof positive that the SEC in the fall of 2007 is superior to the B10. In its maximum form, this means the SEC champion and/or best team (if LSU doesn't win the SEC CG) would deserve to play in the BCS CG over an unbeaten tOSU team.

IMO, bowls occupy a tremendously outsized place in people's perceptions of how good a team will be the next year. To use one recent example, Florida State scoring 44 points in last year's Emerald Bowl against a UCLA team that in their previous game had allowed 9 to USC did not mean that FSU's offense would be good in 2007. Instead, one should look at every game a team played and evaluate the performance holistically, taking into account additions, losses, and expected growth, to develop a projection, one that would say that FSU's offense wouldn't be that great. To the extent bowls say anything, it's about the year the bowls signify the end of, not the next one.

14
by Jake (not verified) :: Wed, 10/31/2007 - 6:21pm

#10 - I'm not sure what a good test is... I guess that's part of the problem. There are 119 D1A teams, and conferences and regions mean there can be even more degrees of separation than that number of teams suggests. Is the SEC a powerhouse? Most years the answer is yes. This year? I'm not so sure. The problem is... we don't really know.
I would guess the best way is what is suggested by #11. Compare FEI's performance as a predictor against the established computer rankings (Sagarin is the most famous) that purport to be predictive. See how different the FEI method is from the established rankings and see if those aberrant teams turn out to be better (or worse) than people suspected (using the same rankings, I guess).

15
by Kal (not verified) :: Wed, 10/31/2007 - 7:30pm

FEI vs. Sagarin on games they disagree on would be a good idea for a test.

Testing how accurate the scores are relative to the actual would be another interesting variant. You would expect FEI to predict close games against well-matched teams well; if you can show that overall value, that would be good.

Also, the Buckeye offense isn't that great, or at least it wasn't early on. Perhaps like DVOA, we need a weighted FEI that takes current performance into account more than past performance.

16
by BillWallace (not verified) :: Wed, 10/31/2007 - 7:52pm

re: #1

Small sample size, but the FEI is so far better than .500 using my simple methodology of picking 5-10 games a week against the spread. I'm not betting money, but just keeping track to see how it does.
Notable wins this last week were Oregon -3, Idaho +17.5, Ohio +3, and Colorado +13.5; losses were Maryland +3 and Rutgers ML. Kentucky the last couple weeks has been a letdown.

Note that I know very very little about College ball, so I'm only using FEI to pick, no bias.

Again, the results so far are that FEI is easily ahead of .500 by more than enough to beat the vig, but not by a ton.

17
by Pat (not verified) :: Wed, 10/31/2007 - 10:19pm

You are correct in that the value of data lies in its predictive ability.

No. No, it isn't. Why is this the most frequent wrong statement I've seen on this website?

Football is a game. It contains, inherently, non-predictive elements. You can easily show that - it is impossible to regularly order teams in such a way that no upsets occur, and an upset implies that the game is non-predictable. (This is less true in college football, but only because there are more degrees of freedom due to the ridiculous number of teams.)

Pushing for more and more predictivity is a wild goose chase. Knowing that USF has a "66% chance to beat OSU" or something stupid like that is silly. They don't play 100 times, so you won't know if USF will win 66 times out of 100. They play once. What you want to know is "what will happen in that one game", and statistics can tell you a lot more than just "USF might win" or "OSU might win."

Data is valuable if it contains information - if it helps describe the teams. DVOA is useful not solely because it's predictive - it's useful because it allows you to compare performance of a team in situations using a common baseline - a team that has a 10% DVOA on offense on certain plays scores 10% more points in the game on average than a team that has a 0% DVOA on offense.

I'm not sold on FEI yet, because I still don't understand fully how it's calculated, but at a minimum it seems to be a pace-free measure of scoring ability (and score-preventing ability). That's an advantage over simple margin of victory, which is pace-dependent (winning 34-10 in 10 drives is much much different than winning 34-10 in 15 drives).

I don't think the offense and defense efficiency rankings are all that interesting. I mean, it's better than points alone, but saying "Oregon's offense is better than WVU's offense because they score points more often given their average starting field position than WVU does" neglects all of the non-scoring drives, which are what, roughly two thirds of the drives in a game?

I guess the problem I have with it is that you're normalizing offense for starting field position: but starting field position is not independent of offense or defense. If a team starts on their 2 yard line, gains nothing, punts, the defense holds the other team, they punt, and the offense gets the ball back on the 2 yard line again... it was the offense's fault they started there again.

Not really a true criticism, since you're obviously limited by the data available. But there must be a correction that can be made for average yardage/drive in competitive drives. A team which goes 0 yards, 80 yards and a TD, 0 yards, 0 yards is not the same as one which goes 50 yards, punt, 80 yards and a TD, 50 yards, 50 yards. And since you're normalizing away the defense's starting field position (which, obviously, the second team's offense provides the defense with much better field position!) you're basically treating those two offenses as equal, as far as I can tell.

18
by David (not verified) :: Wed, 10/31/2007 - 11:18pm

Any system that continually keeps a team at #2 when it loses to teams with a lower rating is clearly flawed. If the rating system is not going to reflect who is going to win when the teams play, what is the point? Of course they also have wins against Elon, Florida Atlantic, and UCF...

19
by tlt (not verified) :: Wed, 10/31/2007 - 11:34pm

on #17, good stuff, pat. though, information needs also to promote predictivity, or it's useless to dangerous information, nicht wahr?

tlt

20
by Pat (not verified) :: Thu, 11/01/2007 - 12:11am

#19: or it’s useless to dangerous information, nicht wahr?

System 1: "USF has a 55% chance to beat Ohio State if they meet in the BCS Championship Game this year."

System 2: "Ohio State can score explosively and quickly - if the game is kept short, Ohio State has the edge - our models show that in a game of 8-10 drives, Ohio State has a 60% chance of winning. If the game's 10-12 drives long, USF has a 55% chance of winning, and as the game gets longer, their margins grow, up to 65% for 12-15 drives."

If the average game is 10-12 drives long, if USF and Ohio State played 100 games, these two models would have the same predictivity.

Yet the second model is adding information which is certainly not useless nor dangerous.

21
by Yinka Double Dare (not verified) :: Thu, 11/01/2007 - 11:42am

I second the idea that some sort of weighting for more recent performance would make some sense. Teams are not necessarily what they were at the beginning of the year. Not to mention these ratings seem to have a 'sticking' problem -- once you're in a ranking it's tough to move far away from it, which presumably is self-propagating in the case of the Big East and SEC this year. Most weeks they're playing another team rated well by these ratings, so a loss doesn't hurt, but a win helps. Meanwhile Ohio State runs about as efficient an offensive game as you'll see against a non-cupcake this year (one non-EOH/EOG drive that didn't end in a score) and they move up a whopping one spot. Something's fishy.

The one thing any statistical system will never be able to do is account for teams playing without their starting skill players, though. Cal didn't look the same w/o Longshore (and he still doesn't look right). Michigan is an entirely different team with Mallett at QB, one almost totally dependent on the bomb in the passing game. There are numerous other examples. That, of course, is not fixable in this system or any other if you want to not be the Billingsley ratings; you just have to remember these things when using the statistical systems to evaluate an individual game matchup.

22
by Eddo (not verified) :: Thu, 11/01/2007 - 12:14pm

20: Pat, that's kind of a strawman argument (although I agree with you, a system needs to be more than just predictive). Number 19 never said a system should not include extra, descriptive information, but that it should promote predictive information. Your example in #20 does both, and therefore is actually in support of his point.

23
by Brooklyn Buckeye (not verified) :: Thu, 11/01/2007 - 4:10pm

Pat, I'd argue that predictive ability is very important...though not the only important element.

Take, for example, these FEI ratings. USF remains #2 on the basis of their record against teams the FEI rates as "top 10," completely ignoring the fact that USF lost two straight games to inferior opponents. Rather than adjust the rating system to match the actual game output, Mr. Fremeau spends time arguing that his rankings are good because his #2 ranked team beat other teams ranked highly by his system. Uh...circular logic, anybody?

Therefore, because the FEI has poor predictive ability, I am skeptical. Because I've watched USF play, I'm confident in asserting that the FEI is very, very wrong.

So I certainly agree with you that other information must be considered. But I do think predictive ability shows some sort of objectivity in the data collection and evaluation process.

24
by Brooklyn Buckeye (not verified) :: Thu, 11/01/2007 - 4:12pm

...which is not to say I don't appreciate Mr. Fremeau's efforts. I suck at math, so this is doubly impressive to me.

But I still think it needs work before it can be highly regarded.

25
by JKL (not verified) :: Thu, 11/01/2007 - 8:32pm

"Nine SEC teams currently rank in the FEI top 31 and outside of that group, No. 47 Mississippi State has defeated two of those top nine. If these ratings scream SEC bias to you, consider that only one team out of twelve in the conference (Mississippi) has a losing record in games against other FBS teams."

I think a big problem with these ratings is home field advantage, and how to account for it. I assume there is no accounting for it in the adjusted numbers, based on the SEC's rankings.

Here are the number of home/neutral/road games against non-conference FBS opponents, by each of the major conferences, to date:

SEC: 23/1/6
BIG TEN: 22/3/8
PAC 10: 17/1/9
BIG XII: 23/2/14
BIG EAST: 19/0/14
ACC: 20/1/15

Those 6 "road" games include LSU and Miss State at Tulane, and Ole Miss at Memphis. Only two times has the SEC traveled a substantial distance for a non-conference game: Tennessee at Cal and Mississippi State at West Virginia.

With DVOA and the NFL, the ratings can be made without adjusting home field, because every team plays 8 road and 8 home (in most cases). This does not mean that home field does not exist, or that teams are not on average more efficient at home than on the road. However, if we are trying to compare apples to oranges in college, the fact that one conference has scheduled a higher percentage of home games than others is going to skew the rankings.

_______ (insert name of team that played all games at home) is, to some extent, overvalued because they played their games at home. Then the ratings perpetuate this by valuing that team for playing another team that is overvalued for the same reason.

Brian, do you have enough data to say what the efficiency difference is for home vs road games on average, in games between BCS opponents?

26
by beargoggles (not verified) :: Thu, 11/01/2007 - 8:39pm

Re: 17. Pushing for more and more predictivity is a wild goose chase. Knowing that USF has a "66% chance to beat OSU" or something stupid like that is silly. They don't play 100 times, so you won't know if USF will win 66 times out of 100. They play once. What you want to know is "what will happen in that one game", and statistics can tell you a lot more than just "USF might win" or "OSU might win."

Fair enough. But although any system is limited in its predictivity for one game, you could compare systems over a multitude of similar games. For example, according to somebody else's suggestion, you could choose 2 systems (Sagarin and this), and choose only games where the 2 disagree on the outcome. Once 20, 30, 50 games like this happen, you may get an idea of which one has higher predictive value overall, even if it misses on the subtleties of different types of teams.

27
by BC (not verified) :: Fri, 11/02/2007 - 11:01pm

I'm going to lean on the side that no bowl game effectively determines anything other than money and awkward watercooler conversation. They are played 4-6 weeks after the last game. Almost no bowl game has a team playing at its best.

I'm not a big stat terminology guy, but is predictivity the same thing as predictability?

28
by Dwayne (not verified) :: Sat, 11/03/2007 - 7:29pm

Ridiculous, you cannot leave South Florida at #2; fix the system. The defense that they only lost to the #11 team does not hold up, because Connecticut is not the 11th-best team in the nation, which is another flaw with these rankings.