
15 Apr 2009

Varsity Numbers: A '+' look back at 2008

by Bill Connelly

As spring football culminates with school-color-versus-other-school-color games throughout April, this seems like a pretty good time to revisit some 2008 statistics, put one final stamp on the season, and in coming weeks, start looking toward 2009.

The inaugural season of Varsity Numbers saw quite a few numbers and concepts thrown against the wall. Some did not stick, but one that did was the "+" concept, explored here in its introductory form and here with 2008 conference numbers.

Here's a quick refresher on the "+" approach:

[T]he "+" concept [is] a primitive attempt at a college-level look at DVOA-type comparisons discussed here. The formulas behind the "+" concept have been strengthened since the writing of that column (instead of looking at game totals and comparing them to opponents' per-game averages, it actually looks at per-play totals, eliminating the general per-game problem of small sample sizes), but the idea remains the same: Figure out a way to factor a team's strength of schedule into statistics to truly find the best offense, defense, and possibly overall team. You read "+" ratings the same way you would read OPS+ or something similar in baseball: 100 is exactly average, under 100 is bad, and over 100 is good.

(If you want to look at this as a DVOA figure of sorts, just subtract 100 and view it as a percentage. So a "+" figure of 122 could also be viewed as +22%. A "+" figure of 90 would be -10%. The Overall "+" figure below is the same way, only the base is 200 instead of 100.)
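In code form, that conversion is just a subtraction. A minimal Python sketch, using the example figures above (the function name is ours, purely for illustration):

```python
def plus_to_percent(plus_figure, base=100.0):
    """Convert a '+' figure to a DVOA-style percentage.

    The base is 100 for a single unit (offense or defense); the
    Overall figure uses a base of 200 because it sums two units.
    """
    return plus_figure - base

print(plus_to_percent(122))  # 22.0, i.e. +22%
print(plus_to_percent(90))   # -10.0, i.e. -10%
```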

The "+" formula has been tweaked to include the strength of schedule of a team's opponents (there is now a component of a team's performance, a team's opponents' performance, and a team's opponents' opponents' performance, not unlike the RPI in college basketball), so you now get a truer read of how a team performed versus how an average team could have been expected to perform. In future weeks, we can break these stats down into more focused categories (non-passing downs vs passing downs, et al.), but we will keep it simple at first.
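Since the column doesn't spell out the formula itself, here is one hypothetical sketch of the idea in Python: compare a team's actual per-play output to what an average team would be expected to manage against the same schedule, scaled so that 100 is exactly average. This illustrates the concept only -- it is not Connelly's actual math, and it omits the opponents'-opponents' layer for brevity:

```python
def plus_rating(actual_per_play, opponents_allowed_per_play):
    """Hypothetical sketch of a '+' rating (not the published formula).

    actual_per_play: the team's per-play output (e.g., equivalent
        points per play).
    opponents_allowed_per_play: per-play figures the team's opponents
        allow; a fuller version would first correct each of these for
        that opponent's own schedule (the opponents'-opponents'
        component described above).
    """
    # What an average team could have been expected to do per play
    # against this particular slate of defenses.
    expected = sum(opponents_allowed_per_play) / len(opponents_allowed_per_play)
    return 100.0 * actual_per_play / expected

# A team gaining 5.5 per play against defenses allowing 5.0 on
# average rates 110 -- i.e., +10% in DVOA-style terms.
print(plus_rating(5.5, [4.5, 5.0, 5.5]))
```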

One other thing to remember about the "+" Rankings below: Stats are derived only from plays taking place in "close-game" scenarios, defined previously as follows:

  • First quarter: If the scoring margin is 24 points or fewer, the game is "close."
  • Second quarter: If the scoring margin is 21 points or fewer, the game is "close."
  • Third and fourth quarters: If the scoring margin is 16 points or fewer (i.e., two possessions), the game is "close."
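As a quick sketch, the close-game filter is just a quarter-dependent threshold. In Python (function and argument names are ours, for illustration):

```python
def is_close(quarter, score_margin):
    """Return True if a play counts as 'close-game' under the
    definition above. score_margin is the score difference at the
    time of the play (sign doesn't matter)."""
    margin = abs(score_margin)
    if quarter == 1:
        return margin <= 24
    if quarter == 2:
        return margin <= 21
    # Third and fourth quarters: within two possessions.
    return margin <= 16

print(is_close(1, 21))  # True: a 21-point game in the first quarter
print(is_close(4, 17))  # False: down 17 late is not 'close'
```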

So without further ado, here are the final 2008 "+" rankings. Teams are ranked by combining their close-game offensive and defensive S&P+ ratings.

2008 Final "+" Rankings
Rank Team Record Off. S&P+ (Rank) Def. S&P+ (Rank) Total S&P+ Final AP Rank
1 USC 12-1 146.3 (3) 141.7 (3) 288.0 3
2 Florida 13-1 152.1 (1) 135.1 (4) 287.2 1
3 Oklahoma 12-2 151.7 (2) 129.3 (7) 280.9 5
4 TCU 11-2 103.5 (52) 170.3 (1) 273.8 7
5 Texas 12-1 124.6 (10) 146.4 (2) 270.9 4
6 Penn State 11-2 145.7 (4) 120.5 (15) 266.2 8
7 Boise State 12-1 120.5 (15) 133.0 (5) 253.4 11
8 Ohio State 10-3 120.5 (16) 132.3 (6) 252.7 9
9 Alabama 12-2 126.6 (9) 123.5 (8) 250.1 6
10 Missouri 10-4 124.4 (11) 116.7 (20) 241.1 19
11 Oregon 10-3 127.9 (8) 112.5 (29) 240.4 10
12 Georgia 10-3 135.4 (5) 102.5 (57) 237.9 13
13 Iowa 9-4 115.3 (25) 122.2 (11) 237.5 20
14 Oklahoma State 9-4 123.5 (13) 113.2 (27) 236.7 16
15 Texas Tech 11-2 128.0 (7) 105.7 (45) 233.7 12
16 Utah 13-0 116.7 (22) 116.5 (21) 233.1 2
17 Oregon State 9-4 115.1 (26) 117.7 (19) 232.8 18
18 Mississippi 9-4 122.8 (14) 109.6 (36) 232.4 14
19 Cincinnati 11-3 110.7 (34) 120.7 (14) 231.4 17
20 Kansas 8-5 118.1 (19) 111.9 (30) 230.2 --
21 Nebraska 9-4 119.9 (18) 108.9 (41) 228.8 --
22 Pittsburgh 9-4 110.3 (38) 117.9 (16) 228.3 --
23 South Florida 8-5 117.4 (21) 109.1 (38) 226.5 --
24 LSU 8-5 116.4 (23) 109.7 (35) 226.1 --
25 Illinois 5-7 112.8 (31) 111.8 (31) 224.6 --
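One note on reading the table: the Total S&P+ column is simply the sum of the offensive and defensive ratings, which is why the Overall base is 200 rather than 100. A quick Python sanity check against the USC and TCU rows above:

```python
def total_splus(off_splus, def_splus):
    # Total S&P+ is the sum of the two unit ratings, so 200 is
    # exactly average overall.
    return off_splus + def_splus

print(round(total_splus(146.3, 141.7), 1))  # 288.0 -- USC's row
print(round(total_splus(103.5, 170.3), 1))  # 273.8 -- TCU's row
```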

Only one team in the S&P+ top nine was not in the AP's final top ten, but there is still plenty of variation, especially after the first nine names. Let's look at some of the oddities.

Oddities

Illinois: First things first: There was a team with a losing record in the S&P+ top 25, and it wasn't a team from the SEC or Big 12. How does this happen? Primarily because of turnovers and poor special teams, both of which prevented Illinois from making another serious run at the Rose Bowl. They were -6 in turnover margin (No. 89 in the country), plus they were No. 113 in net punting and No. 94 in punt returns.

[Why weren't turnovers factored into the "+" number formula (beyond just having a point value of zero where applicable)? Because turnovers are, to an extent, arbitrary. Interceptions are somewhat predictable and within a team's control, but fumbles bounce wherever they bounce, and sometimes a team just gets lucky and recovers more of them. Teams that benefit greatly one year from turnover margin tend to regress in the category the next year (case in point: despite little turnover in personnel, Missouri went from +13 in 2007 to -4 in 2008), so leaving turnovers mostly out of the S&P equation A) keeps things simple, and B) gives you a solid idea of who may be getting a bit lucky or unlucky.]

The final scores of eight of their 12 games were within two possessions, and Illinois went 2-6 in those games. They lost to No. 6 Penn State by 14 (it was 24-17 heading into the fourth quarter). They lost to No. 8 Ohio State by 10. They lost to No. 10 Missouri by 10. They were close to being a really good team, but they could not put it together. There are a number of potential reasons for this, but their down-to-down performance shows they could be dangerous in 2009.

Virginia Tech: You will not find Virginia Tech on the above list because the Orange Bowl champions finished No. 41 in S&P+ rankings. They were the anti-Illinois -- average performance (No. 79 offense, No. 22 defense, not a ton of credit given for shutting down a series of average ACC offenses), but a great turnover margin (+14). That, plus a "Beamer Ball" special teams unit, was good enough to get them to the ACC title game. It helped, too, that then-freshman running back Darren Evans surged down the stretch of the season (last six games: 157 carries, 778 yards), and the Hokies won four of their final five conference games (including the conference title game) to make the Orange Bowl.

Kansas: How does one become the highest-ranked five-loss team in the country? By beating the No. 10 team while losing to the No. 3, No. 5, No. 15, No. 21 and No. 23 teams. As much as their schedule helped them in their 11-0 start to the 2007 season, it killed them last year, and it will probably be the biggest obstacle between the Jayhawks and their first Big 12 North title in 2009.

Missouri: While we're at it, how does a four-loss Missouri team slip into the Top 10? In much the same fashion, actually. They lost to No. 3 and No. 5, and their other losses were statistically-even, last-second heartbreaks to the No. 14 and No. 20 teams. They also destroyed No. 21 Nebraska, which helped. Otherwise, as mentioned above, turnovers were not their friend, and that cost them dearly in the two tight losses.

TCU: How does a two-loss Mountain West team finish in the top four? By losing on the road to the No. 3 team (while holding them to their lowest regular-season point total), and losing to Utah in a statistically even road game (while giving up only 13 points to a good offense). They had by far the No. 1 defense in the country -- their 170.3 Defensive S&P+ was as close to No. 2 Texas' as No. 2 Texas' was to No. 10 Tennessee's. They held seven teams to single digits. Meanwhile...

Utah: The Utes went undefeated by winning five games by a possession or less and posting a +13 turnover margin. Utah was good, and while the Utes were better in the one statistic that mattered most when they played TCU -- the scoreboard -- the Horned Frogs were overall better statistically.

Other Interesting Rankings

Tennessee, No. 32: Phil Fulmer's last Vol squad had one of the biggest ranges of rankings. They had the No. 102 passing offense and the No. 3 rushing defense. Overall their defense ranked No. 10, but their No. 80 offense very predictably held them back. They were the second-highest ranked team with a losing record.

Tulsa, No. 48: The Golden Hurricane went 12-2, but their ridiculously weak schedule held them back. That, and a weak defense (No. 86). They sure had a fun offense (No. 20), though.

Wisconsin, No. 64: The S&P+ formula may have overrated Illinois, but that wasn't the case with the Badgers, who surprisingly came in with the No. 108 defense (No. 103 rushing, No. 102 passing). Not very Wisconsin-like.

West Virginia, No. 75: No ranking is more surprising than this one. Their No. 27 offense was balanced by their No. 116 defense (No. 114 rushing, No. 110 passing). We'll just say that West Virginia is to the "+" rankings what North Carolina was to the FEI last year -- a somewhat inexplicable outlier.

Washington State, No. 118: Interesting only because they're the lowest-ranked major-conference team, ranked higher than only No. 119 Idaho and No. 120 Western Kentucky.

"+" vs FEI

This will be fodder for a future column, but for now we'll take a quick look at the major differences between the "+" ranking system and Brian Fremeau's FEI system.

West Virginia: The differences between FEI's No. 19 ranking of the Mountaineers and the No. 75 of the "+" could be the basis for its own entire column.

WAC Teams: The "+" likes the WAC a lot more than does FEI. Whereas the FEI had Hawaii ranked No. 108, Nevada No. 81 and Fresno State No. 83, the "+" rankings had them at No. 53, No. 36 and No. 61 respectively, differences of 55, 45 and 22 spots. In all, WAC teams averaged a ranking of 18 spots higher in the "+."

Big 12 Teams: The "+" also liked the Big 12 more, giving those teams an average ranking of 15 spots higher. Kansas was No. 20 (as opposed to No. 60 in FEI), Nebraska No. 21 (No. 59 FEI), Baylor No. 40 (FEI No. 75), Oklahoma State No. 14 (FEI No. 36) and Missouri No. 10 (FEI No. 29).

ACC Teams: On the flipside, the FEI was much, much higher on the ACC. North Carolina ranked only No. 46 in the "+" rankings, as opposed to No. 6 in the FEI rankings. Additionally, Miami (No. 67 vs No. 31), Georgia Tech (No. 49 vs No. 14), N.C. State (No. 62 vs No. 28), Virginia Tech (No. 41 vs No. 9), Duke (No. 83 vs No. 52), Florida State (No. 37 vs No. 10), Wake Forest (No. 39 vs No. 15) and Boston College (No. 35 vs No. 11) were all significantly different. In all, ACC teams averaged an FEI ranking 26 spots higher.

MAC Teams: The MAC was the mid-major version of the ACC, coming in much more favorably with the FEI (an average of 17 spots). Northern Illinois (No. 100 vs No. 61), Western Michigan (No. 106 vs No. 74), Temple (No. 95 vs No. 63) and Buffalo (No. 81 vs No. 51) had the highest variations.

The "+" rankings also ranked Big Ten teams an average of 10 spots higher, while the FEI slightly favored Big East and Conference USA teams (an average of five spots higher each).

What causes these differences? Special teams likely play a small role (the "+" rankings do not yet account for them much); the difference in measurement methods plays a larger one ("+" rankings look at play-level data, FEI at possessions); and the differing means of calculating strength-of-schedule adjustments plays a larger one still.

In the next offseason Varsity Numbers, we will begin to take a look at unit rankings and other interesting tidbits that could signify encouraging news for some teams and signs of foreboding for others.

Posted by: Bill Connelly on 15 Apr 2009

23 comments, Last at 28 May 2009, 5:58pm by pokes

Comments

1
by parker (not verified) :: Thu, 04/16/2009 - 12:47am

This is good stuff. Have you gone deeper in detail to try to figure out how it is that a team like Utah could destroy a team like Alabama, which was better by almost every measure? I have a hunch that you might have a more informative ranking system by giving two measures: one for all the major categories which tend to be consistent, and then a second rating for the stuff that matters to the scoreboard but isn't necessarily consistent game to game.

2
by Will :: Thu, 04/16/2009 - 1:05am

Turnovers are a big reason why things like that happen. Also, don't discount the effect of the immaturity of players and the timing of bowl games (i.e. looong after the season has ended, esp. for Big Ten and Pac Ten schools). More so than in the NFL, college football does have that "any given Saturday" vibe to it. Every once in a while, Appalachian State beats Michigan on their home turf - the same Michigan team that goes on to beat the defending (and future) champion Florida Gators in the Capital One Bowl.

Will

4
by parker (not verified) :: Thu, 04/16/2009 - 11:46am

Will,

I totally agree with you. However, my gambling eyes have shown me that you can find that kind of stuff "in the numbers". I don't think that these are just random occurrences where if you switched to a different date, or a different neutral location, the result would be different.

23
by pokes (not verified) :: Thu, 05/28/2009 - 5:58pm

Parker: Actually if you had a strong college betting background you would not be asking how Utah could kill Alabama. Bubble Burst teams like Alabama struggle late in the season or in the bowls immediately after their bubble is burst. Alabama's heart was set on playing for the national title and when that went out the window they no showed the Sugar Bowl. In cases like this, the numbers go out the window. Once Georgia's SEC shot went out the window versus Florida they almost got beat by Kentucky the next week. The emotional situation can counter the box score statistics.

3
by oljb (not verified) :: Thu, 04/16/2009 - 10:05am

The Mountaineers this past year were appallingly bad on defense as measured by yards, but not nearly as bad in scoring defense. I was discussing this phenomenon with some other WVU fans, and there was some speculation that it had to do with the defensive scheme (this is not the first time that Casteel's unit has over-performed in scoring defense relative to other statistics) and its tendency to force turnovers in the defensive red zone.

Another thought is that the offense had an unusually high number of failed red zone efforts resulting in turnovers on downs. That factor, combined with the generally poor defense wound up increasing opponent yardage statistics compared to situations in which they might receive a kickoff and have some yards classified as return yards.

5
by Big Johnson (not verified) :: Thu, 04/16/2009 - 8:21pm

Just another reason why USC is underrated. Every year they lose to some "crappy team on the west coast" and finish with 1 loss. Every year they get screwed out of the championship game. Every year they have people talking about how they lose their talent in the draft and they wont be that good. Every year those people are wrong. 9 losses since the start of the 2002 season?! 1 2/7 losses per season. And almost all of those losses came to pac 10 teams. oregon st. ucla cal twice oregon. Matter of fact I can only think of 1 non pac 10 team they lost to since 2002 and thats texas in the championship game. 9 losses since 2002 and they only have 2 championship games under their belts. Imagine what would happen if the west coast wasn't completely ignored by the other 46 states.

7
by Kibbles :: Thu, 04/16/2009 - 10:15pm

Wait, wait, wait, wait, waaaaaaaaaaait a minute here. USC is *UNDERRATED*? We're talking about USC, right? As in "University of Southern California"? We're not secretly talking about South Carolina, are we? The University of Southern California is underrated?

This is USC we're talking about, right? The team that caused ESPN to run a feature declaring them the greatest team in the history of college football... TWO WEEKS BEFORE THEY LOST THAT YEAR'S CHAMPIONSHIP GAME? The team whose fabled "threepeat" consisted of precisely ONE BCS national championship? And don't blame that on them being overrated by the east coast voters- those east coast voters are the reason why they got the AP National Champ, while the *WHOLLY UNBIASED COMPUTERS* are the ones that put LSU in the championship game over them. I guess that's just an example of Silicon Valley bias. You know, since Silicon Valley is so far away from Southern California and everything. Those computer programmers just have an axe to grind.

Every year they lose their talent to the draft, but what people are ever talking about how USC won't be that good? From 2003-2008, the preseason AP poll has ranked USC 3rd, 1st, 6th, 1st, 1st, and 8th. They've been preseason #1 50% of the time. From the looks of it, USC is facing the prospect of yet another preseason top-10 ranking, possibly even another preseason top-5 ranking.

Any supposed anti-USC bias is clearly nothing more than the overactive imagination of a delusional fan-base. You want to know why USC doesn't have more titles? Start with the second sentence of your post- because *EVERY YEAR*, USC loses to some crappy team it has no business losing to late in the season at a time when it controls its own destiny in the National Championship race. Basically, it chokes its chances away. Maybe next year when it's a top-2 team with 3 weeks to go, they won't lose to a clearly inferior team and we won't have this discussion.

9
by DoubleB (not verified) :: Fri, 04/17/2009 - 12:49pm

"You want to know why USC doesn't have more titles? Start with the second sentence of your post- because *EVERY YEAR*, USC loses to some crappy team it has no business losing to late in the season at a time when it controls its own destiny in the National Championship race. Basically, it chokes its chances away."

I don't know where to begin with this statement, which couldn't be less true. They lost this year on the last weekend in SEPTEMBER to an Oregon State team that went 9-4. They, like every other BCS team in the country this year, had a bad half (1st half of this game) and it cost them. Unfortunately for USC, losing first didn't help as every BCS contender lost after them (Florida, Texas, Bama, Tech, Oklahoma, and Penn State). How come those teams didn't "choke their chances away," particularly Florida?

In 2007 they lost their second game to Oregon the last weekend in October. Here is the list of 1 and 2-loss contenders that lost games later than that: LSU, Missouri, West Virginia, Kansas, and Ohio State. Apparently LSU and Ohio State didn't choke their chances away.

USC is 26-1 in the last 6 seasons playing teams out of the conference (the loss is to Texas). Here is the list of BCS teams they have beaten in that time span: at Auburn, at Notre Dame (3 times), Notre Dame (3 times), Michigan (2 times), at Virginia Tech, Arkansas, at Arkansas, Nebraska, at Nebraska, at Virginia, Ohio State, Illinois, and Penn State.

USC doesn't choke its chances away. It loses a conference game (hey, just like everyone else) and then gets no respect from the rest of the nation for its perennially difficult non-conference schedule and the 9-game Pac-10 schedule.

10
by Big Johnson (not verified) :: Fri, 04/17/2009 - 1:10pm

But thats the bias that im talking about! If USC loses any game its chances of a national championship are out the window. If any other team loses a game it always somehow wins the tie breaker. PAC 10 football is underrated every year. They do exceptionally well in bowl games when they get to play against weaker other conferences. 5-0 this past year. 4-2 the year before. 3-3 the next year. 3-2 in 2005. 3-2 in 2004. 18-9. I want someone to find me any conference with a better winning percentage since 2004. 5 game minimum. It probably isnt gonna happen. Assuming bowl games pin even teams against each other (i.e. the #1 vs the #2, the #3 vs the #4, etc.) this winning percentage is a decent indicator of the PAC 10 being the most underrated conference since 2004 because every game should be roughly even. Oh and USC is also +103 points in those 5 bowl games since 2004. When they play against "similar outside of conference games" they do better than when they play against "crappy pac 10 opponents" yet when they have that 1 loss next to their name everyone assumes USC doesnt deserve a title shot because they lost to a PAC 10 team. Cal twice, oregon, oregon st., UCLA, Stanford, Texas, two others. They have lost 1 game to a non PAC 10 team in 5 years? 6 years?! And in that texas game they lost by 3 in a very close nail biter to a great team in Texas yet I am pretty sure USC still would have had a higher dvoa. Yet every year when it comes voting time they will be outkicked by a "similar team" that will get thwomped in the championship while they dominate their opponent by an average of 20+ points. Its just sad how easy it is to see

6
by parker (not verified) :: Thu, 04/16/2009 - 9:15pm

Any team other than USC and I would completely see your point. The espn guys spend more time making sure Pete Carroll's manhood is properly salived than there is time in a day. I would argue that USC is too overhyped each year so that when they do have a bad game it looks like a worse loss than it actually is.

12
by Rich Conley (not verified) :: Fri, 04/17/2009 - 3:30pm

Any team other than USC, and they would have been in the championship instead of Utah.

Thats the point.

14
by Big Johnson (not verified) :: Fri, 04/17/2009 - 7:43pm

Oklahoma and florida both had 1 loss and they were in the championship. Why not USC? 2007 LSU had 2 losses and USC had two losses. Why not USC? Both 2008 and 2007 USC beat their bowl game opponents into a pulp. Im not buying the argument that they choked their chance away because if all things are fair (which apparently they arent) then in 2008 florida and oklahoma also choked their chance away by losing their game. Oh wait that only counts when its USC.... uh um yeah um duh? PAC 10 is 18-9 in bowl games since 2004! Best conference in the country! West Coast are people too. Spread the news that west coast are people too

15
by Rocco :: Sat, 04/18/2009 - 9:13pm

"2007 LSU had 2 losses and USC had two losses. Why not USC?"

Cause they lost at home to Stanford.

16
by Big Johnson (not verified) :: Sun, 04/19/2009 - 3:03pm

Like i said, the Pac 10 is underrated. When the Pac 10 plays other teams outside of conference they do exceptionally well. They actually travel around the country unlike SEC teams. Every game the SEC plays outside of conference is against toilet scum at home. Can you even name the last time they traveled anywhere far for a road game? trivia question for anyone that reads it. And even if your argument is valid that still only stands tall for one of the two years and doesnt shed light on whether or not USC deserved a championship spot or not. Seems like in depth analysis like "they lost at home to stanford" isnt the best indicator. HELL baltimore lost to 0-14 miami two years ago. Shows how terrible they were this last year. You are probably the same person that argues that "any given sunday". well that was any given sunday. if anything it shows that atleast USC loss was a fluke. Please dont respond with 1 sentence. That in depth analysis does nothing for the argument

17
by parker (not verified) :: Sun, 04/19/2009 - 7:33pm

Off the top of my head Georgia went to Arizona St. last year. Tennessee went to UCLA last year. Arkansas had a home and home with USC a few years ago.

18
by Big Johnson (not verified) :: Sun, 04/19/2009 - 9:10pm

And the year before TEN played UCLA they lost to CAL. I dont know about the travel but i know that CAL won. so the PAC 10 went 2-0 against tennessee? Can we get any other juggernauts from the SEC that lost an "upset" to a PAC 10 team in recent years? Terrible excuses! So the Pac 10 since 2004 has had a better bowl win percentage than the SEC but gets no credit. Its just ridiculous. AND they beat them in head to head games and still dont get credit? Seems like the east coast will always have its head up their asses and will never give West coast the credit they deserve! Oh well ill have to be content with the conference of champions..... One of these years USC wont get screwed by voters OR there will be a playoff and they will be unleashed. After all, every college football fan rejoices the day that USC loses because they know they are the biggest threat to every other teams title hopes.

19
by Rocco :: Sun, 04/19/2009 - 10:20pm

The Pac-10 was underrated in 2007? They had two teams lose to Notre Dame. USC lost at home to a terrible Stanford team when they were 40 point favorites. Ohio State and LSU lost to bowl teams; both of LSU's losses were in overtime. USC lost at home to a 4-win team, and then dropped another game. They have no complaint about 2007. Last year they had a great argument about belonging in the title game as everyone else lost. In 2007 though? None at all.

20
by Big Johnson (not verified) :: Mon, 04/20/2009 - 3:56am

The same UCLA team that beat CAL who beat tennessee? Thats fine im content with that. Notre Dame > UCLA > CAL > Tennessee. That argument doesnt seem to work very well for you. If the pac 10 isnt good cause they had two teams lose to notre dame then the SEC must not be good either for having a team lose to a team that lost to a team that lost to notre dame! Seems like a very hypocritical stance that your taking. AND LSU lost to Arkansas. Arkansas finished 8-5 with 4 of their wins at home against Troy, north texas, chattanooga and florida international. They basically were 3-5 with that big win against LSU. Go ahead and try to hype up those victories. Arkansas got a bowl game for winning 1 big game in 2007 and got slaughtered in the bowl game because retard voters bought into the hype. What if stanford would have gotten a bowl game for beating USC? Only seems fair. Florida played mighty teams like Western kentucky at home, troy at home, and florida atlantic at home. Nothing like the SEC running up their record against terrible teams. They lost to a michigan wolverine team that lost to Oregon by 32 points at home. Every SEC team that gets linked to the PAC 10 is far inferior. So every SEC team goes undefeated outside of conference (unless if they play a pac 10 team) and its just assumed that they are better so whichever team wins the big game against each other gets the bid for the championship game. thats ridiculous!! Next years championship bowl game will just hand a spot to whoever wins between florida and LSU? Its so sad that people buy into this crap. West coast are people too!

8
by Kibbles :: Thu, 04/16/2009 - 10:48pm

Bill, I really like what you're doing with Varsity numbers, and I think it's great to see two competing formulas for evaluating college football (I know they aren't technically "competing", but they're definitely taking different approaches to the same problem and trying to appeal to the same target audience). I think competition just brings out the best in everyone, and I can see FEI and Varsity Numbers each introducing new concepts and pushing each other to become better.

With that said, I have a couple questions/comments/concerns/what-have-you. First off, are your numbers designed to be predictive or descriptive? Neither? Both? If descriptive, are you trying to describe the season as it happened, or are you trying to describe the "true quality" of a certain team? The reason I ask is because I really question throwing out turnover margin based on a few anecdotal examples. If you mean for your statistic to be a descriptive tool... then throwing out turnovers is a huge mistake. Turnovers are the most game-changing plays this side of touchdowns- you can't tell the story of what happened during a season or during a game without including them. If you're trying to say that turnovers are luck-based and therefore shouldn't factor into discussions of a team's quality... well, okay, but first you're going to have to do a much better job convincing me that interceptions and forced fumbles are merely dumb luck.

If you mean for your statistics to be more predictive in nature, then I'd like to see some more reasoning behind throwing out turnovers, still. Maybe some data that shows no correlation between turnovers in year N and turnovers in year N+1. Maybe some regressions showing that including turnovers doesn't increase the predictive ability of the model. Something. I just feel that throwing out turnovers, regardless of the reasons, is an extreme measure tantamount to just throwing out all touchdowns. If it improves the metric, then sure... but how do we know it improves the metric?

I'd also like to see a similar analysis done for the data from blowouts. It seems like removing blowout data might improve the metric, but FO's studies have found that discarding blowout data actually LOWERS the predictive value of DVOA (and obviously throwing out ANYTHING has to be viewed as a bad idea for a measure that was designed to be descriptive). If anything, it seems like completely discarding data from when the game is out of hand severely penalizes dominant teams. A team that was just a bit better over the course of an entire game and ultimately won by 14 will probably show better than a team that absolutely dominated another team and was winning by 28+ before the end of the first quarter, just because all of that extra dominance just gets discarded. Maybe giving teams bonuses depending on when they hit the "blowout" stage? Maybe weighting plays based on how much of the game counted (i.e. if only 75% of the game is considered "close", weight all of the plays 33% higher to compensate)? I'm afraid I have more questions than answers, but I suppose that's why you get to call yourself an analyst while I'm just an armchair QB.

I will say that I thought your conference-only analysis before the bowl games was some of the best college football analysis I've seen. I didn't know how I felt about it at first, but after some reflection, I think it's some of the most elegant analysis given the limitations of the game. The way the schedule is set up, there's so little connectivity between, say, the SEC and the Big 10 that it's almost impossible to accurately compare teams from one to teams from the other (which was the problem before 2006- everyone assumed that the Big 10 was great and Ohio/Michigan were therefore amazing teams, when in reality, the Big 10 was pretty mediocre and Ohio/Michigan were just very good teams). In addition, most non-conference games come in the first three to four weeks of the season, but thanks to heavy roster turnover, most teams in week 12 barely resemble how they looked in week 1. We can, however, very effectively compare teams within the same conference and then add an asterisk that says "and after the bowls, we can probably compare the conferences to each other pretty well, too".

22
by Bill Connelly :: Mon, 04/20/2009 - 11:28am

Great comment and questions...I don't know where to start, so I'll just give it the item-by-item treatment...

First off, are your numbers designed to be predictive or descriptive? Neither? Both?

I would say more descriptive. Anything descriptive is going to be at least somewhat predictive too, but there are other factors to take into account if making predictions from this.

If descriptive, are you trying to describe the season as it happened, or are you trying to describe the "true quality" of a certain team? The reason I ask is because I really question throwing out turnover margin based on a few anecdotal examples. If you mean for your statistic to be a descriptive tool... then throwing out turnovers is a huge mistake. Turnovers are the most game-changing plays this side of touchdowns- you can't tell the story of what happened during a season or during a game without including them. If you're trying to say that turnovers are luck-based and therefore shouldn't factor into discussions of a team's quality... well, okay, but first you're going to have to do a much better job convincing me that interceptions and forced fumbles are merely dumb luck.

First of all, my non-inclusion of turnovers rests on a lot more than a few anecdotal examples. Phil Steele has done wonderful work over the last several years showing that the teams that benefited most significantly from turnover margin in a given year almost always see a downturn the next (and vice versa). When evaluating teams at the end of the year (from the "true quality" standpoint), I do think it is fair to withhold turnovers from the conversation. That said, fumbles are far more random than interceptions, so as I continue to tinker with the numbers, I might try to include INTs while withholding much fumble impact. Do remember that the "+" concept has only been in development for the last year or so, and I need both further tinkering and more years of data to figure out the absolute best numbers. Plus, Aaron still tweaks the DVOA formulas every offseason, so it will always be an ongoing process...

If you mean for your statistics to be more predictive in nature, then I'd still like to see some more reasoning behind throwing out turnovers. Maybe some data that shows no correlation between turnovers in year N and turnovers in year N+1. Maybe some regressions showing that including turnovers doesn't increase the predictive ability of the model. Something. I just feel that throwing out turnovers, regardless of the reasons, is an extreme measure tantamount to just throwing out all touchdowns. If it improves the metric, then sure... but how do we know it improves the metric?

Turnovers would certainly need to be taken more into account from a predictive standpoint. While they are somewhat random in nature, it does seem that (and this is as far from a stat-wonky statement as possible) there's a rhythm to it...that teams benefiting from turnover margin halfway through the season continue to benefit the rest of the year (and then see the tables turned the next year). Then again, maybe not. Minnesota started 7-1 in large part because of a healthy turnover margin. The well dried up after that, however, and they finished the season 0-5. Granted, they didn't really lose the turnover margin the rest of the year (they were dead even in four of the remaining five games), but the well dried up for them, and a predictive system that ignored turnovers would have seen that coming better than one that did not. Who knows...a lot of work still to do here.
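The year-to-year regression claim above (big turnover-margin beneficiaries falling back the next season) is exactly the kind of thing a simple correlation check can test. A minimal sketch, using made-up turnover margins for eight hypothetical teams (the numbers and function name are illustrative, not from the column's actual data):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical per-team turnover margins in year N and year N+1.
margin_year_n = [14, 9, 6, 2, 0, -3, -7, -12]
margin_year_n1 = [-2, 1, 3, 0, 2, 1, 4, 6]

r = pearson_r(margin_year_n, margin_year_n1)
# A strongly negative r across many real seasons would support the
# regression-to-the-mean claim; r near zero would suggest pure randomness.
```

Run over several seasons of real data, this is essentially the "turnovers in year N vs. year N+1" test the commenter asked for.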

I'd also like to see a similar analysis done for the data from blowouts. It seems like removing blowout data might improve the metric, but FO's studies have found that discarding blowout data actually LOWERS the predictive value of DVOA (and obviously throwing out ANYTHING has to be viewed as a bad idea for a measure that was designed to be descriptive).

In terms of correlations to win percentage, S&P+ and Close S&P+ have almost identical correlations on offense, while Close S&P+ is slightly more correlated to winning on defense. Here's where analysis of college and pro data would come off very differently. The difference between the best and worst pro team, while significant enough to make a large difference in wins and losses, is not nearly as significant as the difference between the best and worst college team. Almost any good college team is going to play a couple of teams that are just completely overmatched, and they're probably going to spend one-fourth to one-half of the game trying not to score, or at least trying not to score quickly. And considering that college teams play a shorter schedule than the pros, those games end up occupying a decent percentage of their slate. We should not glean too much from those games.

Things that take place in the fourth quarter of a blowout, when the second- or third-string is in, really don't have much place in a predictive system, as what happens then only matters if a team's first string was significantly better than the opponent's first string.

If anything, it seems like completely discarding data from when the game is out of hand severely penalizes dominant teams. A team that was just a bit better over the course of an entire game and ultimately won by 14 will probably show better than a team that absolutely dominated another team and was winning by 28+ before the end of the first quarter, just because all of that extra dominance gets discarded. Maybe give teams bonuses depending on when they hit the "blowout" stage? Maybe weight plays based on how much of the game counted (i.e., if only 75% of the game is considered "close", weight all of the plays 33% higher to compensate)?

I've tossed around this idea because, if each "close-game" play is getting measured, then a game in which you run 30 "close" plays before crushing your opponent ends up with less statistical significance than a game that was close throughout, where you ran 75 "close" plays. The alternative to this, though, is giving each individual game a "+" score and figuring out the averages from there (that's actually where I started with the "+" concept). Then, however, you run into the issue of creating averages and making judgments from a sample size of N=12. Plus, you run the risk of one amazingly great or terrible game (with a "+" score of, say, 300...or 3) very much skewing the averages. There are drawbacks to both approaches, but it seems that counting all "close" plays at least has fewer drawbacks at this moment.
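The reweighting idea floated in the question can be sketched directly: if only a fraction of a game's snaps were "close," scale each close play by the inverse of that fraction so every game contributes roughly equal total weight. This is a hypothetical illustration of that one suggestion, not the actual "+" formula:

```python
def close_play_weight(total_plays, close_plays):
    """Per-play weight for 'close' plays so that the game's total weight
    equals total_plays, compensating for discarded garbage-time snaps.
    E.g., if only 75% of plays were close, each close play counts 1/0.75x."""
    if close_plays == 0:
        return 0.0
    return total_plays / close_plays

w_blowout = close_play_weight(80, 60)  # 75% close: each play weighted ~1.33
w_normal = close_play_weight(80, 80)   # close throughout: each play weighted 1.0
```

The tradeoff mentioned above still applies either way: a 30-play "close" sample from a crushing win carries more variance than a 75-play sample from a tight game, whether or not you scale the weights.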

I will say that I thought your conference-only analysis before the bowl games was some of the best college football analysis I've seen. I didn't know how I felt about it at first, but after some reflection, I think it's some of the most elegant analysis given the limitations of the game. The way the schedule is set up, there's so little connectivity between, say, the SEC and the Big 10 that it's almost impossible to accurately compare teams from one to teams to the other (which was the problem before 2006- everyone assumed that the Big 10 was great and Ohio/Michigan were therefore amazing teams, when in reality, the Big 10 was pretty mediocre and Ohio/Michigan were just very good teams). In addition, most non-conference games come in the first three to four weeks of the season, but thanks to heavy roster turnover, most teams in week 12 barely resemble how they looked in week 1. We can, however, very effectively compare teams within the same conference and then add an asterisk that says "and after the bowls, we can probably compare the conferences to each other pretty well, too".

Thank you much. Conference games are much more important and telling than non-conference games (even making judgments off of bowls is dicey considering the six-to-eight-week break teams have, and the infinite number of rhythm/discipline problems that emerge from that break), but each game played still has a story to tell, and these overall rankings are a first stab at leveling the playing field for all 800+ games on the FBS docket.

11
by witless chum :: Fri, 04/17/2009 - 1:28pm

Is there a reason you didn't post the full list anywhere? I'd like to see where the system ranks my Spartans and then complain bitterly about them being disrespected.

21
by Bill Connelly :: Mon, 04/20/2009 - 10:44am

1. USC (288.03)
2. Florida (287.2)
3. Oklahoma (280.92)
4. TCU (273.79)
5. Texas (270.94)
6. Penn State (266.18)
7. Boise State (253.42)
8. Ohio State (252.74)
9. Alabama (250.09)
10. Missouri (241.07)
11. Oregon (240.37)
12. Georgia (237.93)
13. Iowa (237.48)
14. Oklahoma State (236.69)
15. Texas Tech (233.71)
16. Utah (233.12)
17. Oregon State (232.81)
18. Ole Miss (232.36)
19. Cincinnati (231.42)
20. Kansas (230.02)
21. Nebraska (228.75)
22. Pittsburgh (228.27)
23. South Florida (226.51)
24. LSU (226.08)
25. Illinois (224.59)
26. Connecticut (222.52)
27. California (222.28)
28. Arizona (220.39)
29. Rutgers (219.28)
30. Clemson (219.13)
31. Michigan State (218.55)
32. Tennessee (218.14)
33. BYU (217.93)
34. South Carolina (215.92)
35. Boston College (215.13)
36. Nevada (215.11)
37. Florida State (214.95)
38. Arkansas (214.69)
39. Wake Forest (212.85)
40. Baylor (212.41)
41. Virginia Tech (212.38)
42. Troy (210.97)
43. Stanford (210.87)
44. Northwestern (210.52)
45. Ball State (209.83)
46. North Carolina (209.7)
47. Rice (209.14)
48. Tulsa (209.11)
49. Georgia Tech (207.07)
50. Southern Miss (206.5)
51. Houston (206.11)
52. Maryland (205.24)
53. Hawaii (203.83)
54. Notre Dame (202.1)
55. Bowling Green (201.98)
56. East Carolina (201.63)
57. Vanderbilt (201.01)
58. New Mexico (200.93)
59. Purdue (200.91)
60. Virginia (200.1)
61. Fresno State (199.83)
62. N.C. State (199.75)
63. Minnesota (199.68)
64. Wisconsin (199.05)
65. Louisville (198.96)
66. UCLA (198.31)
67. Miami-FL (198.26)
68. Michigan (197.82)
69. Arizona State (197.23)
70. Colorado State (197.14)
71. Indiana (196.91)
72. Air Force (196.63)
73. Auburn (195.79)
74. Kentucky (195.32)
75. West Virginia (194.52)
76. Marshall (193.57)
77. Navy (192.95)
78. Akron (192.83)
79. Kansas State (192.25)
80. Central Michigan (192.24)
81. Buffalo (192.12)
82. Colorado (192.04)
83. Duke (190.63)
84. Memphis (190.52)
85. UL-Lafayette (189.69)
86. Louisiana Tech (189.2)
87. Syracuse (188.92)
88. UNLV (188.86)
89. Ohio (188.65)
90. Florida Atlantic (188.11)
91. SMU (188.1)
92. Mississippi State (187.8)
93. Florida International (187.34)
94. Central Florida (187.23)
95. Temple (187.22)
96. UTEP (186.85)
97. Arkansas State (186.51)
98. Texas A&M (186.01)
99. Utah State (184.76)
100. Northern Illinois (184.76)
101. San Jose State (183.09)
102. Toledo (182.46)
103. UAB (177.81)
104. Iowa State (177.67)
105. Middle Tennessee (177.25)
106. Western Michigan (176.33)
107. Kent State (175.1)
108. Tulane (173.45)
109. Eastern Michigan (171.83)
110. New Mexico State (169.61)
111. Washington (169.11)
112. Army (168.94)
113. UL-Monroe (168.52)
114. San Diego State (167.22)
115. Wyoming (166.31)
116. Miami-OH (162.96)
117. North Texas (160.9)
118. Washington State (154.37)
119. Idaho (152.27)
120. Western Kentucky (150.81)
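As the column notes, the Overall "+" figure uses a base of 200 rather than 100, so these ratings convert to a DVOA-style percentage by subtracting 200. A quick sketch of the conversion:

```python
def overall_plus_to_pct(overall_plus, base=200.0):
    """Convert an Overall '+' rating to a DVOA-style percentage
    above (positive) or below (negative) average."""
    return overall_plus - base

usc = overall_plus_to_pct(288.03)  # No. 1 USC: +88.03%
wku = overall_plus_to_pct(150.81)  # No. 120 Western Kentucky: -49.19%
```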

13
by Bowl Game Anomaly :: Fri, 04/17/2009 - 4:03pm

Am I correct in assuming from your comments that special teams are not included in the "+" system? It seems to me that, whether you are trying to be predictive or descriptive, special teams definitely should be included (at least when measuring within a given year; if special teams do not correlate year-to-year, which I doubt, then perhaps they should be left out of attempts to predict next year's performance).

(Formerly "The McNabb Bowl Game Anomaly")