16 Sep 2009

Week 2 FEI Ratings

By Brian Fremeau

The comparisons were inevitable. They were two Southern California kids, true freshmen, in their first starts against major opponents. In front of national TV audiences, they each quarterbacked one of the most storied programs in college football history down by a score in the waning minutes of the fourth quarter. Touchdown, Michigan. Touchdown, Southern Cal. Welcome to the big time, Tate Forcier and Matt Barkley.

The comeback victories by the Wolverines and Trojans were made for headline writers and highlight montage editors. It’s obviously way too silly to start talking about the 2011 Heisman race (though some will try), and who knows if Michigan and USC are destined for a Rose Bowl showdown at any point in the next three or four years. But does it matter? Forcier vs. Barkley: Who was better? We demand to know!

That question is absurd, of course. They have only played two games apiece. They have completely different skills and they lead completely different offenses. Their surrounding casts and coaching staffs are nothing alike. On top of all that, the circumstances they faced at the end of their respective games were entirely different. Barkley's team was on the road facing a long field against a stout defense needing a touchdown to win. Forcier's team was at home facing a much less formidable defense on a shorter field and needing only a field goal to send the game to overtime.

We don’t have to do much heavy lifting statistically to determine which drive was more likely to result in a touchdown. But how specific can we be? Let's try to determine the relative probability of each go-ahead score by using possession efficiency data.

For the purposes of this exercise, the game clock is ignored. Michigan started its drive with only 2:13 left in the fourth quarter and two timeouts in hand, but based on the pace of Michigan's other four scoring drives on the day, the Wolverines did not need to hurry in order to reach the end zone before time expired.

Unadjusted offensive and defensive efficiency ratings start with national drive efficiency data. Based on a regression of thousands of drives, we have determined the scoring expectations of an average offense facing an average defense from every possible starting yard line. Those expectations are the essential baseline for FEI: How well did your team maximize its own opportunities and minimize those of the opponent?

The Wolverines began their final drive at their own 42-yard line -- an average team would score 2.2 points per possession from that starting field position. The Trojans began their final drive at their own 18-yard line, an average value of 1.1 points per possession.
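
To make that baseline concrete, here is a minimal sketch in Python. Only the two anchor values quoted above (2.2 points per possession from the Michigan 42, 1.1 from the USC 18) come from the article; the straight line drawn between them is an illustrative stand-in, not the actual regression over thousands of drives.

```python
# Hypothetical sketch of the field-position baseline described above. Only the
# two anchor values from the article (own 42 -> 2.2 points, own 18 -> 1.1) are
# real; the straight line drawn between them is an illustrative stand-in for
# the actual regression.

def expected_points(own_yard_line: int) -> float:
    """Points an average offense scores per possession when its drive
    starts own_yard_line yards from its own goal line."""
    slope = (2.2 - 1.1) / (42 - 18)  # ~0.046 points per yard of field position
    return round(1.1 + slope * (own_yard_line - 18), 2)

print(expected_points(42))  # 2.2 -- Michigan's starting point against Notre Dame
print(expected_points(18))  # 1.1 -- USC's starting point against Ohio State
```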

But can we consider the Michigan and USC offenses to be simply “average”? Were the defenses of Notre Dame and Ohio State “average”? Not likely in either case, though without data from many more games, we cannot accurately calculate the opponent-adjusted Offensive and Defensive FEI ratings that can frame those questions more definitively. Instead, we can use unadjusted offensive efficiency data directly from the Michigan/Notre Dame and USC/Ohio State games.

Up until its final drive, the Forcier-led Michigan offense had scored three touchdowns on 11 possessions and had successfully advanced into field-goal range twice. The Wolverines had an unadjusted offensive efficiency of .300 -- or, 30 percent more efficient than an average team would have performed with the same field position. Barkley’s USC offense had scored only one touchdown -- on a very short field -- and had advanced into field-goal range twice on 10 possessions prior to the final drive. The Trojans’ unadjusted offensive efficiency was -.432, or 43 percent less efficient than average.
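
For readers who want to see the arithmetic, here is a hedged sketch of how an in-game unadjusted efficiency figure like the .300 and -.432 above could be computed, assuming (per the "percent more efficient than average" framing) that it is simply actual points scored relative to the points an average offense would be expected to score from the same starting field positions. The drive list below is a placeholder, not the real Michigan drive chart.

```python
# Hypothetical sketch: unadjusted offensive efficiency as actual points scored
# divided by the points expected from the same starting field positions, minus 1.
# The drives below are made-up placeholders, not real drive-chart data.

def unadjusted_efficiency(drives):
    """drives: list of (points_scored, expected_points_from_start) tuples."""
    actual = sum(points for points, _ in drives)
    expected = sum(exp for _, exp in drives)
    return actual / expected - 1.0  # +0.30 would mean 30 percent better than average

# Eleven illustrative possessions: three touchdowns, one field goal, seven empty
michigan_like = [(7, 2.2), (7, 1.8), (7, 2.0), (3, 1.5)] + [(0, 1.6)] * 7
print(round(unadjusted_efficiency(michigan_like), 3))  # positive: better than average
```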

The in-game unadjusted efficiency rates represent the actual effectiveness of each team’s offense against the opponent’s defense at that point in the match-up. USC’s offense wasn’t necessarily “below average” -- rather, USC’s offense against Ohio State’s defense was below average. And vice versa for Michigan. A sample size of fewer than a dozen possessions is small, but it is the best data we have for projecting the likelihood of a touchdown on each team’s final drive.

Figure 1 illustrates the projected likelihood of each game-winning drive. The chart was created based on a regression analysis of field position scoring rates by teams over the past two seasons with in-game unadjusted efficiency rates similar to those of the Wolverines (blue) and Trojans (red).

By this methodology, Michigan had approximately a 33 percent likelihood of scoring a touchdown on its final possession against Notre Dame. USC had only a 7 percent chance of scoring a touchdown against the Buckeyes. And what about the likelihood that both drives would have been executed by true freshmen with reservoirs of poise and exponential quantities of moxie? We’ll have to leave that calculation for the Downtown Athletic Club to solve.
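
To show the shape of the calculation behind those 33 percent and 7 percent figures, here is a very rough sketch: take a baseline touchdown probability for the starting field position and shade it up or down by the team's in-game unadjusted efficiency. The baseline probabilities below are invented, chosen only so the outputs land near the article's numbers; the actual figure comes from a regression over two seasons of drives, which is not reproduced here.

```python
# Rough illustration only: shade an average-team touchdown probability up or
# down by the team's in-game unadjusted efficiency. Not the actual regression.

def projected_td_probability(baseline_td_prob: float, efficiency: float) -> float:
    """Scale an average-team TD probability by (1 + efficiency), clamped to [0, 1]."""
    return min(1.0, max(0.0, baseline_td_prob * (1.0 + efficiency)))

# Baselines are assumptions picked for illustration, not values from the article
print(projected_td_probability(0.25, 0.300))   # Michigan-like drive from its own 42
print(projected_td_probability(0.12, -0.432))  # USC-like drive from its own 18
```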

Week 2 FEI Top 25

The principles of the Fremeau Efficiency Index (FEI) can be found here. Like DVOA, FEI rewards playing well against good teams, win or lose, and punishes losing to poor teams more harshly than it rewards defeating poor teams. Unlike DVOA, it is drive-based, not play-by-play based, and it is specifically engineered to measure the college game.

FEI is the opponent-adjusted value of Game Efficiency (GE), a measurement of the success rate of a team scoring and preventing opponent scoring throughout the non-garbage-time possessions of a game. Like DVOA, it represents a team's efficiency value over average. Strength of Schedule (SOS) is calculated as the likelihood that an elite team would win every game on the given team's schedule.
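
As a concrete reading of that SOS definition, here is a minimal sketch: treat the schedule as a list of per-game win probabilities for a hypothetical elite team and take their product. The probabilities below are invented for illustration; FEI derives its own from team ratings.

```python
# Minimal sketch of SOS as defined above: the probability that an elite team
# wins every game on the schedule, i.e. the product of per-game win chances.
# The per-game probabilities here are made up for illustration.

from math import prod

def strength_of_schedule(elite_win_probs):
    """Lower values indicate tougher schedules (winning out is less likely)."""
    return prod(elite_win_probs)

# A hypothetical 12-game slate: eight easy games and four harder ones
print(round(strength_of_schedule([0.95] * 8 + [0.80, 0.75, 0.70, 0.65]), 3))  # ~0.181
```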

Only games between FBS teams are considered in the FEI calculations. Since limited data is available at the beginning of the season, the ratings to date are a function of both actual games played and projected outcomes based on the 2009 Projected FEI Ratings. The weight given to projected outcomes will be reduced each week until mid-October, at which point the projections will be eliminated entirely.
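
The week-to-week blending can be sketched as a simple weighted average in which the projection's weight decays to zero; the actual weighting schedule is not specified here, so the linear seven-week fade below is an assumption.

```python
# Hedged sketch of the early-season blend: a weighted mix of preseason
# projection and results to date, with the projection's weight shrinking
# each week. The linear seven-week fade is an assumption, not FEI's schedule.

def blended_rating(projected: float, observed: float, weeks_played: int,
                   projection_gone_after: int = 7) -> float:
    """Linearly fade the projection's weight to zero by mid-October."""
    w_proj = max(0.0, 1.0 - weeks_played / projection_gone_after)
    return w_proj * projected + (1.0 - w_proj) * observed

# Example: a team projected at 0.250 that has earned 0.150 through two weeks
print(round(blended_rating(0.250, 0.150, weeks_played=2), 3))  # ~0.221
```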

Rank  Team            FBS Record  FEI    Last Week  GE      GE Rank  SOS    SOS Rank
1     Florida         1-0         0.287  1          0.571   3        0.289  46
2     USC             2-0         0.243  3          0.242   22       0.266  32
3     Texas           2-0         0.224  2          0.350   15       0.319  53
4     Ohio State      1-1         0.202  6          0.009   62       0.211  12
5     Alabama         2-0         0.189  4          0.177   29       0.266  31
6     West Virginia   1-0         0.188  10         0.159   32       0.446  80
7     Oklahoma        0-1         0.188  7          -0.011  65       0.214  13
8     Virginia Tech   1-1         0.187  5          0.216   25       0.281  41
9     LSU             2-0         0.179  9          0.198   27       0.149  8
10    BYU             2-0         0.164  18         0.358   13       0.414  74
11    Georgia Tech    1-0         0.158  8          0.030   54       0.244  24
12    Michigan        2-0         0.154  19         0.255   21       0.270  35
13    Penn State      2-0         0.154  13         0.378   10       0.345  59
14    Clemson         1-1         0.150  16         0.103   40       0.294  48
15    Notre Dame      1-1         0.146  17         0.363   12       0.231  17
16    Auburn          2-0         0.145  11         0.291   18       0.223  15
17    Georgia         1-1         0.145  15         -0.067  74       0.102  1
18    Oklahoma State  1-1         0.132  14         0.038   52       0.238  21
19    Iowa            1-0         0.128  22         0.357   14       0.276  39
20    Boston College  1-0         0.127  12         0.442   7        0.273  37
21    California      1-0         0.122  20         0.435   8        0.323  54
22    Kansas          1-0         0.118  37         0.511   5        0.289  45
23    Arkansas        0-0         0.117  21         n/a     n/a      0.127  4
24    South Carolina  1-1         0.108  34         0.003   63       0.137  5
25    Texas Tech      1-0         0.104  25         0.603   2        0.255  29

Ratings for all 120 FBS teams can be found here.

Posted by: Brian Fremeau on 16 Sep 2009

12 comments, Last at 17 Sep 2009, 3:54pm by The Ninjalectual

Comments

1
by chappy (not verified) :: Wed, 09/16/2009 - 12:29pm

Thanks for the Barkley/Forcier analysis! I must admit I find the curve you use a little shocking. Would Ohio State really only allow a touchdown 55% of the time? It seems to me that differences in defensive quality would be less pronounced the closer to the end zone a team starts its drive, but the chart shows a fairly constant relationship.

I guess in economics we'd say this analysis suffers from an identification problem. Usually that means asking whether supply or demand is the dominating effect, but in this case the question is: Is OSU's defense really THAT good, or is it possible that USC's offense isn't THAT good? Anyway, it seems like a reasonable job given the limitations. Thanks again.

2
by EorrFU :: Wed, 09/16/2009 - 1:49pm

I think the answer would be that Ohio State had been THAT GOOD so far. I doubt they are THAT GOOD on an absolute basis, but given the extremely small sample size, that is the best analysis he could give.

3
by Will :: Wed, 09/16/2009 - 2:02pm

Your question is answered in detail in the article:

"USC’s offense wasn’t necessarily “below average” -- rather, USC’s offense against Ohio State’s defense was below average."

"But can we consider the Michigan and USC offenses to be simply “average"? Were the defenses of Notre Dame and Ohio State “average"? Not likely in either case, though without data from many more games, we cannot accurately calculate the opponent-adjusted Offensive and Defensive FEI ratings that can frame those questions more definitively."

Will

6
by chappy (not verified) :: Wed, 09/16/2009 - 3:57pm

Well, actually I'm making a different point. My point is about the SHAPE of the curve. To me it looks like the author uses a simple proportional average, which seems reasonable, but I'm saying (quite speculatively) that I suspect that uniform/proportional relationship doesn't hold in real life.

7
by Will :: Wed, 09/16/2009 - 8:08pm

Ah, I see what you are saying. For some reason I didn't realize you were alluding to the graph.

Will

4
by Muldrake (not verified) :: Wed, 09/16/2009 - 3:30pm

I think that there's too much weight given to the projections at this point. How can Iowa be ranked ahead of California? Or better yet, how can Colorado, who is 0-2 -- and a bad-looking 0-2 at that -- be rated ahead of Colorado State, given that Colorado State actually beat them? While I'm sure the numbers will make sense in time, it seems to me that the formula is pretty much useless as a predictive tool at this point.

8
by Will :: Wed, 09/16/2009 - 8:15pm

All computer rankings are either non-existent or useless this early in the season. The difference between them and the human rankings is that the computers will throw out all of their preseason prejudices once they get enough information, whereas the humans never do. I doubt many computer rankings will have Florida #1 early, but the human polls will have them #1 until they lose.

Will

5
by navin :: Wed, 09/16/2009 - 3:43pm

South Carolina moves up and Georgia moves down despite a Georgia win.

I'm not surprised as a biased (to USC) fan. SC really controlled the game and would have sent it to overtime if not for a blocked XP. Kudos to Georgia for excellent red zone offense and red zone defense.

Maybe the next step for FEI is to incorporate some sort of red zone adjustment?

9
by oldmancoyote (not verified) :: Wed, 09/16/2009 - 11:13pm

The biggest problem with your analysis, in my opinion, is conflating the USC offense with Matt Barkley and the Michigan offense with Tate Forcier. Each quarterback played a big role in his offense's performance, but the roles were not equivalent.

For example, on the last drive, USC was able to use its running game, McKnight in particular, which introduced a higher level of uncertainty for the defense and made the USC offense harder to stop. Notre Dame, on the other hand, did not need to worry about Michigan's running game on that drive. They were able to call defensive plays and blitzes knowing the ball would be in Tate's hands. In terms of trying to quantify a quarterback's importance to his offense's performance, that's a huge distinction.

Tate was the focal point of the defense because the offense was centered on his ability to make the play. Matt played a significant but lesser role. In my opinion, just based on watching that drive in real time, the MVP of that drive was Joe McKnight running and receiving, not Matt Barkley.

Barkley was impressive, to be sure. But the statistical analysis you employ presumes an equivalence that wasn't actually there.

11
by Brian Fremeau :: Thu, 09/17/2009 - 12:35pm

I guess I could have been clearer. I agree completely that the drive analysis here should not be credited exclusively to the quarterbacks. I was using the Barkley/Forcier TV debate as a jumping off point for the drive analysis, which was about the teams, not the players.

10
by tcj (not verified) :: Thu, 09/17/2009 - 5:04am

I agree with oldmancoyote. Your analysis is great FOR WHAT IT DOES, but it most certainly doesn't answer the question you posed at the beginning: "Forcier vs. Barkley: Who was better?"

What the analysis shows is that USC's drive was more impressive than Michigan's--at least from a statistical perspective. But it says absolutely nothing about how the quarterbacks performed on those drives. And you should know that.

12
by The Ninjalectual :: Thu, 09/17/2009 - 3:54pm

I don't watch much college football, and I am a Colorado State homer to the extent that I realize I have no hope of objectivity. But CSU is ranked 104 in the nation? Give me a break. That's not even close.

"Just look at that pumpkin."
-John Madden, looking at the moon.