25 Aug 2010
by Brian Fremeau
My colleague Bill Connelly's excellent summer series on the Top 100 teams of the last century took a unique approach to a question that has been at the forefront of my mind lately: How do we best quantify and compare historical data in college football? Record books are made to be re-written, of course. The dramatic changes in player development, recruiting, offensive and defensive styles, and more over the years make it especially challenging to compare teams across eras.
On the other hand, the past can tell us so much about the future. As has been the case in previous seasons, our projections for 2010 are heavily based on recent five-year performance. It is not as though teams can't ascend or descend dramatically from year to year. But overall, college football teams tend to play within a general range of historical program expectations. That can be reassuring for some and frustrating for many others. As this year's offseason conference shuffle demonstrated, a few programs retain and exert a great deal of power over the masses, controlling conferences, television deals, and program wealth. In turn, those select few dominate on the field.
We introduced "Program FEI" in Pro Football Prospectus 2008, and it is calculated in a manner similar to FEI. Game Efficiency (GE) is the drive-by-drive measure of how well a team maximizes the value of its own possessions and minimizes those of its opponent. GE data is then adjusted for the strength of the opposition faced, with more relevance placed on strong performances, win or lose, against strong opponents. Program FEI includes five years of GE data instead of just one, weighted in favor of more recent performances -- all games are included, but last season's games count a bit more than those played five years ago.
Last year, I created a metric to approximate Program FEI for years in which we did not have drive data. Prior to 2003, drive and play-by-play data is especially scarce, and final scores may be the only consistently reliable data points for all games. During five-year, 60-game spans, the score-based Approximated Program Power (APP) ratings very closely mirror Program FEI (.98 correlation). APP data was collected and processed back to 1984, providing a 25-year data set of rolling program power ratings.
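The five-year weighted averaging behind these rolling ratings can be sketched in a few lines. The recency weights below are illustrative assumptions -- the article does not publish the exact APP weighting scheme, only that recent seasons count more.

```python
# Hypothetical sketch of a five-year rolling program rating in the
# spirit of APP/Program FEI. The yearly ratings and recency weights
# are illustrative assumptions, not the published formula.

def five_year_weighted(ratings, weights=(1.0, 1.2, 1.4, 1.6, 1.8)):
    """Combine five single-season ratings (oldest first) into one
    rolling rating, weighting recent seasons more heavily."""
    if len(ratings) != len(weights):
        raise ValueError("need exactly five seasons of ratings")
    return sum(w * r for w, r in zip(weights, ratings)) / sum(weights)

# e.g. a program trending upward over a five-season span
print(round(five_year_weighted([0.05, 0.08, 0.12, 0.18, 0.22]), 3))  # -> 0.143
```

Recomputing this each season over a 26-year window produces the rolling series of program ratings the rest of the article works with.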
It wasn't an arbitrary decision to stop at 1984 for this project. After Division I split into I-A and I-AA (now FBS and FCS, respectively) in 1978, it took a few years for scheduling across the country to settle into the divisional connectedness we recognize today. A few teams straddled the line for several seasons, playing half their games against teams in each division. Neither including those games nor disregarding them produced a consistent APP rating for those seasons. By 1980, the I-A/I-AA split was more cleanly defined, and games between divisions were limited to only one or two per team, so the project's game data begins there. As with all FEI data, games between divisions are ignored entirely. The five-year APP ratings for this project therefore begin in 1984 (representing the 1980 through 1984 seasons).
APP data and analysis made its first appearance in an essay I wrote for the 2009 Notre Dame Maple Street Press annual, "Here Come the Irish." At the end of last season, I used APP in another program status check on Notre Dame for ESPN. Each piece evaluated where the Fighting Irish stood relative to Notre Dame's historical program expectations, with a comparison to other elite programs that may have undergone a swoon before recovering their former glory.
The approach I took in those previous essays focused on the individual APP ratings themselves. I divided all teams into power buckets to illustrate their five-year strength at any given time since 1984. An APP rating of .220 or better -- the "Elite" threshold -- represented the top five percent of all APP ratings over the last 25 years, two standard deviations better than average. "Very Good" programs rated between .160 and .220, "Above Average" programs between .055 and .160, and so forth. Twenty-one programs have achieved an Elite rating at least once since 1984, a group I nicknamed the "Elite-21".
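The power buckets amount to a simple threshold classifier. The cutoffs below come straight from the text; the function name is our own.

```python
# Map a five-year APP rating to its descriptive power bucket.
# Thresholds are the article's: Elite >= .220 (roughly two standard
# deviations above average), Very Good .160-.220, Above Average
# .055-.160, everything else "Other".

def app_bucket(rating):
    """Return the power-bucket label for a five-year APP rating."""
    if rating >= 0.220:
        return "Elite"
    elif rating >= 0.160:
        return "Very Good"
    elif rating >= 0.055:
        return "Above Average"
    else:
        return "Other"

print(app_bucket(0.294))  # Florida's 2009 rating -> "Elite"
print(app_bucket(0.102))  # BYU's 2009 rating -> "Above Average"
```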
|Elite-21 Programs: APP Category Appearances, 1984-2009|
|Team||Elite||Very Good||Above Average||Other|
The Elite-21 might not all be consensus picks for the top programs in the country over the last 25 years, but this list serves as a reasonably reliable "who's who" in college football since the split of Division I. Sixteen of the Elite-21 have won a national championship in the last 26 years. The only teams not appearing among the Elite-21 that claimed a title were BYU in 1984 and the split champions Georgia Tech and Colorado in 1990.
One major observation from digging into the Elite-21 data has been identifying a minimum APP threshold required of a team in the year preceding its national championship run. The only out-of-nowhere national champions in the last 25 years were the 2000 Oklahoma Sooners. Bob Stoops inherited a program that hadn't had a winning season in half a decade and led the Sooners to a 7-5 debut before winning the BCS title in only his second year at the helm. Every other champion (including the Cougars, Yellow Jackets, and Buffaloes) carried an APP rating in at least the upper part of the Above Average range into their title season. Fourteen champions entered the season already armed with Elite status.
The mean previous-year APP rating for an eventual national champion since 1984 was .200 (approximately 1.9 standard deviations above average). The only teams entering the 2010 season above that threshold are Florida, USC, Alabama, Virginia Tech, Texas, Ohio State, LSU, and Oklahoma. Our projections like all of those teams to be strong this year, of course, but we also project outstanding records for TCU and Boise State. The Horned Frogs enter 2010 with the nation's 16th-best APP rating; the Broncos rank 24th. Each is comparable to the previous-year APP ratings of the 2009 Alabama and 2003 LSU teams. Whether non-AQ teams will contend for a BCS title will be up to the whims of poll voters, but they are already receiving unprecedented preseason praise. More likely, the 2010 champion will once again come from the roster of Elite-21 teams, just as it has in each of the last 19 seasons.
|2009 APP Ratings (2005-2009 Five-Year Weighted Average)|
|1||Florida||.294||41||Notre Dame||.084||81||Bowling Green||-.084|
|6||Ohio State||.221||46||Navy||.056||86||New Mexico||-.104|
|10||Penn State||.189||50||Virginia||.050||90||Middle Tennessee||-.112|
|14||Oregon||.170||54||North Carolina State||.030||94||Wyoming||-.123|
|20||Texas Tech||.130||60||Northwestern||.011||100||Kent State||-.146|
|23||Cincinnati||.125||63||Mississippi State||-.007||103||Louisiana Tech||-.150|
|25||Tennessee||.116||65||Southern Mississippi||-.009||105||San Diego State||-.157|
|27||California||.109||67||Texas A&M||-.027||107||Washington State||-.169|
|28||Wake Forest||.107||68||Colorado||-.029||108||Louisiana Monroe||-.173|
|30||South Florida||.102||70||Washington||-.034||110||Eastern Michigan||-.177|
|31||BYU||.102||71||Kansas State||-.042||111||Utah State||-.189|
|32||North Carolina||.102||72||Duke||-.046||112||Florida Atlantic||-.193|
|33||South Carolina||.101||73||Northern Illinois||-.060||113||Louisiana Lafayette||-.203|
|34||Arkansas||.099||74||Central Florida||-.060||114||San Jose State||-.203|
|35||Oregon State||.098||75||Fresno State||-.061||115||Tulane||-.205|
|36||Utah||.095||76||Iowa State||-.064||116||Florida International||-.208|
|39||Nebraska||.087||79||Indiana||-.082||119||New Mexico State||-.232|
|40||Mississippi||.085||80||Ball State||-.083||120||North Texas||-.239|
I was interested in consolidating the 25-year APP data into a set of overall program ratings for the period, in order to rank the best programs since the split of Division I. Some programs clearly remained at or near the top of the college football world for the whole period, while others experienced periodic or lengthy setbacks. I borrowed from our SOS technique to calculate the strength of the programs overall. If a hypothetical opponent squared off against each of the 26 five-year rolling instances of a program's APP ratings, how likely would victory be for all 26 games?
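The SOS-style aggregation described above can be sketched as follows: estimate the chance a hypothetical opponent beats each of the 26 rolling APP instances, then multiply across all 26 games. The logistic win-probability function and its scale parameter below are assumptions for illustration, not the published FEI formula.

```python
import math

# Illustrative sketch of an SOS-style overall program rating: the
# probability a hypothetical opponent wins all 26 games against a
# program's rolling five-year APP ratings. The logistic model and
# scale=0.1 are assumptions, not the article's actual method.

def win_prob(opponent_rating, program_rating, scale=0.1):
    """Probability the hypothetical opponent wins one game."""
    return 1.0 / (1.0 + math.exp((program_rating - opponent_rating) / scale))

def unbeaten_prob(opponent_rating, app_instances):
    """Probability of sweeping every rolling APP instance."""
    p = 1.0
    for r in app_instances:
        p *= win_prob(opponent_rating, r)
    return p

# A consistently stronger program leaves the opponent a smaller
# chance of running the table, i.e. a tougher overall rating.
average_program = [0.00] * 26
elite_program = [0.20] * 26
print(unbeaten_prob(0.0, average_program) > unbeaten_prob(0.0, elite_program))  # -> True
```

Under this framing, a lower sweep probability for the hypothetical opponent corresponds to a stronger overall program rating.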
Unsurprisingly, this methodology surfaced the Elite-21 again as the best 21 programs in college football over the last 26 years. I ran the formula for the last 20 years and found the same Elite-21 programs at the top (in a slightly different order, of course). Over the last 15 years of APP ratings, the same teams appeared with only one exception -- Clemson slipped out of the top 21 to No. 29, but the other 20 remained at the top. Only when I examined 10-year and five-year selections of the APP data sets were the Elite-21 finally broken from their stranglehold on college football's upper crust.
|Overall APP Ratings Summary, 1984-2009|
|Rank||Team||APP Overall||20 Yr
This might indicate that the Elite-21 tag is arbitrary, especially for teams years removed from national championship contention. On the other hand, notice the column at the far right. On a hunch, I collected the Rivals recruiting ratings of each program's enrolled classes from the past four years and combined them into a weighted average ranking for each program, with senior talent receiving more weight than freshman talent.
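The class-weighted recruiting average might look like the sketch below. The 4/3/2/1 weights (seniors heaviest) are an illustrative assumption; the article only notes that senior classes count more than freshman classes.

```python
# Hypothetical sketch of a four-year recruiting ranking weighted
# toward older classes. The 4/3/2/1 weights are our assumption.

def weighted_recruiting_rank(class_ranks, weights=(4, 3, 2, 1)):
    """class_ranks: Rivals class rankings, oldest class (seniors) first."""
    return sum(w * r for w, r in zip(weights, class_ranks)) / sum(weights)

# A program whose classes ranked 5th, 8th, 12th, and 10th over four years
print(weighted_recruiting_rank([5, 8, 12, 10]))  # -> 7.8
```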
Note that 18 of the Elite-21 have finished among the top 20 in recruiting over the last four seasons. Is this not a strikingly similar cohort of programs to the Elite-21 itself? The similarity is apparent even though several Elite-21 teams have had little on-field success in recent seasons. What might account for this relationship? I can dream up a few possible reasons.
Maybe recruiting services such as Rivals artificially inflate the ratings of Elite-21 programs (knowingly or unknowingly), guaranteeing that these programs consistently finish atop the recruiting rankings. Maybe recruits self-select Elite-21 programs, naturally drawn to the historical success and the potential for reclaimed glory even in down periods. Or is it merely a coincidence?
Nah. Above everything else, the Elite-21 list and recruiting rankings are both products of the enormous investment of finances and other resources poured into these programs relative to other schools not appearing here. That investment leads to better facilities that attract top coaches and athletes. It reinforces and consolidates the power among members of the group, preventing others from breaking through the recruiting monopoly. And as we have seen throughout the last 25 years, the investment pays off on the field, which in turn brings in more wealth.
What, then, can we really expect over the next few years? Nebraska, Notre Dame, and Washington are all good candidates to reclaim their status among the top programs in college football at some point, simply because they have the resources at their disposal to do so, though it is hard to pinpoint an exact timeline for their resurgence. It will probably come at the expense of a few names we've come to count on lately -- Michigan, USC, and Tennessee are already experiencing some form of trouble that may impact their short- or long-term futures. Others, like Florida, Florida State, and Ohio State, may continue without any ghastly spills down the APP ratings. We'll be paying close attention over the next five, 10, and 25 years, and beyond.