25 Aug 2010

FEI: Rating the Programs

by Brian Fremeau

My colleague Bill Connelly's excellent summer series on the Top 100 teams of the last century took a unique approach to a question that has been at the forefront of my mind lately: How do we best quantify and compare historical data in college football? Record books are made to be rewritten, of course. The dramatic changes in player development, recruiting, offensive and defensive styles, and more over the years make it especially challenging to compare teams across eras.

On the other hand, the past can tell us so much about the future. As has been the case in previous seasons, our projections for 2010 are heavily based on recent five-year performance. It is not as though teams can't ascend or descend dramatically from year to year. But overall, college football teams tend to play within a general range of historical program expectations. That can be reassuring for some and frustrating for many others. As this year's offseason conference shuffle demonstrated, a few programs retain and exert a great deal of power over the masses, controlling conferences, television deals, and program wealth. In turn, those select few dominate on the field.

We introduced "Program FEI" in Pro Football Prospectus 2008. Program FEI is calculated in a similar manner to FEI. Game Efficiency (GE) is a drive-by-drive summary of a team's success at maximizing its own possessions and minimizing those of its opponent. GE data is then adjusted according to the strength of the opposition faced, with more relevance placed on strong performances, win or lose, against strong opponents. With Program FEI, five years of GE data are included instead of just one, and the data is weighted in favor of more recent performances -- all games are included, but last year's games are a bit more relevant than games played five years ago.
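
As a rough illustration of how that recency weighting works, here is a minimal sketch in Python. The actual Program FEI weights are not published in this piece, so the year_weights values below are purely illustrative assumptions; the only property they share with the real method is that recent seasons count more.

```python
def program_fei(seasonal_ge, year_weights=(1.0, 1.2, 1.4, 1.7, 2.0)):
    """Blend five seasons of opponent-adjusted Game Efficiency (GE).

    seasonal_ge  -- five per-season GE ratings, ordered oldest to newest
    year_weights -- relative weights, rising so recent seasons count more
                    (illustrative values, not the published FEI weights)
    """
    if len(seasonal_ge) != len(year_weights):
        raise ValueError("expected one GE value per weighted season")
    total = sum(w * ge for w, ge in zip(year_weights, seasonal_ge))
    return total / sum(year_weights)

# A program trending upward over five seasons: the weighted rating (~0.10)
# sits above the unweighted mean (0.08) because recent seasons count more.
print(program_fei([-0.05, 0.00, 0.08, 0.15, 0.22]))
```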

Last year, I created a metric to approximate Program FEI for years in which we did not have drive data. Prior to 2003, drive and play-by-play data is especially scarce, and final scores may be the only consistently reliable data points for all games. During five-year, 60-game spans, the score-based Approximated Program Power (APP) ratings very closely mirror Program FEI (.98 correlation). APP data was collected and processed back to 1984, providing a 25-year data set of rolling program power ratings.
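
For seasons in which both metrics exist, a figure like the .98 correlation quoted above can be verified directly. A minimal sketch, using placeholder arrays rather than the actual ratings:

```python
import numpy as np

# Placeholder series for seasons where both ratings exist (2003 onward);
# these numbers are illustrative, not the actual data.
program_fei = np.array([0.21, 0.15, 0.02, -0.08, 0.11])  # drive-based
app         = np.array([0.19, 0.16, 0.01, -0.07, 0.10])  # score-based

r = np.corrcoef(program_fei, app)[0, 1]  # Pearson correlation coefficient
print(f"correlation: {r:.2f}")
```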

The decision to stop at 1984 for this project wasn't arbitrary. After the split of Division I in 1978 into I-A and I-AA (now FBS and FCS, respectively), it took a few years for scheduling across the country to settle into the divisional connectedness we recognize today. A few teams straddled that line for a few seasons, playing half their games against teams in each division. Whether those games were included or disregarded, it was difficult to identify a consistent APP rating for those seasons. By 1980, the I-A/I-AA split was more cleanly defined, and games between divisions were limited to only one or two per team, so the game data for this project starts there. As with all FEI data, games between divisions are ignored entirely. The five-year APP ratings data for this project therefore begins in 1984 (representing the 1980 through 1984 seasons).

APP data and analysis made their first appearance in an essay I wrote for the Notre Dame Maple Street Press annual "Here Come the Irish" in 2009. At the end of last season, I used APP in association with another program status check on Notre Dame for ESPN. Each piece included an evaluation of where the Fighting Irish stood in relation to Notre Dame's historical program expectations, along with a comparison to other elite programs that may also have undergone a swoon before recovering their former glory.

The approach I took in those previous essays focused on the individual APP ratings themselves. I divided all teams into power buckets to illustrate their five-year strength at any given time since 1984. An "Elite" APP rating of 0.220 or better represented the top five percent of all APP ratings in the last 25 years, two standard deviations better than average. "Very Good" programs rated between .160 and .220. "Above Average" programs had a rating between .055 and .160, and so forth. Twenty-one programs achieved an APP rating above the Elite threshold at least once since 1984, a group I nicknamed the "Elite-21".
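
Those cutoffs translate directly into a simple lookup. A small Python sketch using the thresholds quoted above, with everything below the Above Average floor lumped into the "Other" column of the table that follows:

```python
def app_category(app):
    """Map a five-year APP rating to the power buckets defined above."""
    if app >= 0.220:
        return "Elite"          # top 5 percent, ~2 SD above average
    elif app >= 0.160:
        return "Very Good"
    elif app >= 0.055:
        return "Above Average"
    return "Other"              # everything below the Above Average floor

# Ratings taken from the 2009 APP table later in this article.
print(app_category(0.294))  # Florida -> "Elite"
print(app_category(0.116))  # Boise State -> "Above Average"
```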

Elite-21 Programs: APP Category Appearances, 1984-2009
Team Elite Very Good Above Average Other
Florida State 17 7 2 0
Florida 15 9 2 0
Michigan 14 9 3 0
Miami 14 6 6 0
Nebraska 10 7 7 2
USC 9 14 3 0
Ohio State 9 7 10 0
Notre Dame 8 6 12 0
Washington 8 5 8 5
Georgia 6 9 11 0
Penn State 5 15 6 0
Tennessee 5 10 11 0
Auburn 5 9 12 0
Texas 4 4 11 7
LSU 3 7 11 5
Alabama 2 12 12 0
Oklahoma 2 12 7 5
Virginia Tech 2 9 6 9
UCLA 1 18 6 1
Clemson 1 10 8 7
Oregon 1 8 12 5

The Elite-21 might not all be consensus picks for the top programs in the country over the last 25 years, but this list serves as a reasonably reliable "who's who" in college football since the split of Division I. Sixteen of the Elite-21 have won a national championship in the last 26 years. The only teams not appearing among the Elite-21 that claimed a title were BYU in 1984 and the split champions Georgia Tech and Colorado in 1990.

One major observation from digging into the Elite-21 data is that there appears to be a minimum APP threshold required of a team in the year preceding its national championship run. The only out-of-nowhere national champions in the last 25 years were the 2000 Oklahoma Sooners. Bob Stoops inherited a program that hadn't had a winning season in half a decade and led the Sooners to a 7-5 debut before winning the BCS title in only his second year at the helm. Every other champion (including the Cougars, Yellow Jackets, and Buffaloes) carried an APP rating in at least the upper part of the Above Average range into their title season. Fourteen champions entered the season already armed with Elite status.

The mean previous-year APP rating for an eventual national champion since 1984 was .200 (approximately 1.9 standard deviations above average). The only teams entering the 2010 season above that threshold are Florida, USC, Alabama, Virginia Tech, Texas, Ohio State, LSU, and Oklahoma. Our projections like all of those teams to be strong this year, of course, but we also project outstanding records for TCU and Boise State. The Horned Frogs enter 2010 with the nation's 16th-best APP rating; the Broncos rank 24th. Each is comparable to the previous-year APP ratings of the 2009 Alabama and 2003 LSU teams. Whether non-AQ teams will contend for a BCS title will be up to the whims of poll voters, but they are already receiving unprecedented preseason praise. More likely, the 2010 champion will once again come from the roster of Elite-21 teams, just as it has for each of the last 19 seasons.

2009 APP Ratings (2005-2009 Five-Year Weighted Average)
Rank Team APP Rank Team APP Rank Team APP
1 Florida .294 41 Notre Dame .084 81 Bowling Green -.084
2 USC .234 42 Connecticut .072 82 Colorado State -.088
3 Alabama .233 43 Michigan State .069 83 Baylor -.093
4 Virginia Tech .229 44 Michigan .067 84 Ohio -.095
5 Texas .228 45 Kentucky .058 85 Western Michigan -.100
6 Ohio State .221 46 Navy .056 86 New Mexico -.104
7 LSU .209 47 Oklahoma State .054 87 Troy -.105
8 Oklahoma .200 48 Louisville .054 88 Hawaii -.105
9 Georgia .195 49 Maryland .054 89 Marshall -.106
10 Penn State .189 50 Virginia .050 90 Middle Tennessee -.112
11 West Virginia .188 51 Kansas .042 91 SMU -.114
12 Clemson .182 52 Arizona State .040 92 UTEP -.114
13 Georgia Tech .179 53 Purdue .033 93 Memphis -.120
14 Oregon .170 54 North Carolina State .030 94 Wyoming -.123
15 Boston College .159 55 UCLA .029 95 Buffalo -.125
16 TCU .152 56 Stanford .027 96 UAB -.129
17 Florida State .150 57 Vanderbilt .022 97 Akron -.138
18 Auburn .148 58 Air Force .018 98 Toledo -.138
19 Pittsburgh .132 59 Houston .012 99 Temple -.141
20 Texas Tech .130 60 Northwestern .011 100 Kent State -.146
21 Iowa .130 61 East Carolina .008 101 Army -.147
22 Miami .128 62 Minnesota .004 102 Miami (OH) -.150
23 Cincinnati .125 63 Mississippi State -.007 103 Louisiana Tech -.150
24 Boise State .116 64 Illinois -.008 104 UNLV -.150
25 Tennessee .116 65 Southern Mississippi -.009 105 San Diego State -.157
26 Rutgers .115 66 Tulsa -.015 106 Arkansas State -.169
27 California .109 67 Texas A&M -.027 107 Washington State -.169
28 Wake Forest .107 68 Colorado -.029 108 Louisiana Monroe -.173
29 Wisconsin .105 69 Central Michigan -.031 109 Rice -.174
30 South Florida .102 70 Washington -.034 110 Eastern Michigan -.177
31 BYU .102 71 Kansas State -.042 111 Utah State -.189
32 North Carolina .102 72 Duke -.046 112 Florida Atlantic -.193
33 South Carolina .101 73 Northern Illinois -.060 113 Louisiana Lafayette -.203
34 Arkansas .099 74 Central Florida -.060 114 San Jose State -.203
35 Oregon State .098 75 Fresno State -.061 115 Tulane -.205
36 Utah .095 76 Iowa State -.064 116 Florida International -.208
37 Arizona .091 77 Nevada -.075 117 Western Kentucky -.212
38 Missouri .090 78 Syracuse -.079 118 Idaho -.231
39 Nebraska .087 79 Indiana -.082 119 New Mexico State -.232
40 Mississippi .085 80 Ball State -.083 120 North Texas -.239

I was interested in consolidating the 25-year APP data into a set of overall program ratings for the period, in order to rank the best programs since the split of Division I. Some programs clearly remained at or near the top of the college football world for the whole period, while others experienced periodic or lengthy setbacks. I borrowed from our strength of schedule (SOS) technique to calculate the overall strength of the programs: If a hypothetical opponent squared off against each of the 26 five-year rolling instances of a program's APP ratings, how likely would it be to win all 26 games?
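
A minimal sketch of that calculation: treat the 26 rolling APP ratings as 26 "games" for a hypothetical opponent and multiply the per-game win probabilities together. The logistic win-probability function and its scale below are assumptions -- the actual FEI conversion from rating gaps to win likelihood isn't spelled out here -- so the outputs are directional rather than comparable to the table values.

```python
import math

def overall_app(instances, opponent_rating=0.0, scale=0.10):
    """Probability a hypothetical opponent beats every five-year instance.

    instances       -- a program's rolling APP ratings, one per season
    opponent_rating -- strength of the hypothetical opponent (0 = average)
    scale           -- assumed sensitivity of win probability to rating gaps

    Lower output means a stronger program, matching the table below,
    where the smallest APP Overall values rank highest.
    """
    p_win_all = 1.0
    for rating in instances:
        gap = opponent_rating - rating
        p_win_all *= 1.0 / (1.0 + math.exp(-gap / scale))  # logistic win probability
    return p_win_all

# Example: a program that held a "Very Good" rating for all 26 windows.
print(overall_app([0.18] * 26))  # vanishingly small -> elite overall standing
```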

Unsurprisingly, the Elite-21 surfaced again by this methodology as the best 21 programs in college football over the last 26 years. I ran the formula for the last 20 years and found the same Elite-21 programs at the top (in a slightly different order, of course). I ran the formula over the last 15 years of APP ratings and found the same Elite-21 teams again with only one exception -- Clemson slipped out of the top 21 to No. 29, but the others remained in the top 20. Only when I examined 10-year and five-year selections of the APP data sets was the Elite-21's stranglehold on college football's upper crust finally broken.

Overall APP Ratings Summary, 1984-2009
Rank Team APP Overall 20-Yr Rank 15-Yr Rank 10-Yr Rank 5-Yr Rank 4-Yr Recruit Rank
1 Florida State .00005 1 2 4 11 7
2 Florida .00012 2 1 2 2 1
3 Miami .00016 4 7 3 12 12
4 USC .00039 3 3 1 1 2
5 Michigan .00058 5 5 8 14 11
6 Nebraska .00089 7 6 18 44 19
7 Ohio State .00177 6 4 5 6 9
8 Notre Dame .00231 9 16 27 27 5
9 Penn State .00263 11 11 15 15 16
10 Tennessee .00313 8 8 12 20 15
11 Washington .00459 10 18 33 76 34
12 Georgia .00476 12 10 6 5 6
13 Auburn .00508 17 19 13 9 13
14 Alabama .00541 13 20 16 17 8
15 UCLA .00774 15 17 25 41 18
16 Oklahoma .00919 16 15 9 8 10
17 LSU .01557 20 13 11 3 3
18 Virginia Tech .02697 14 9 7 7 30
19 Clemson .02736 21 29 19 16 14
20 Texas .02864 19 12 10 4 4
21 Oregon .03877 18 14 14 23 29

This might indicate that the Elite-21 tag is somewhat arbitrary, especially for teams that are years removed from national championship contention. On the other hand, notice the column at the far right. On a hunch, I collected the Rivals enrolled recruiting ratings from the past four years and combined them into a weighted average ranking for each program. (Senior talent received more weight than freshman talent.)
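
A sketch of how such a class-weighted average might be computed. The exact weights are not given, so the values below are assumptions; the only constraint from the text is that the senior class (recruited four years ago) counts more than the incoming freshman class.

```python
def weighted_recruit_rating(class_ratings, weights=(0.4, 0.3, 0.2, 0.1)):
    """Combine four years of Rivals class ratings into one number.

    class_ratings -- per-class ratings, ordered seniors first
    weights       -- assumed class weights, seniors heaviest (hypothetical)
    """
    return sum(w * r for w, r in zip(weights, class_ratings)) / sum(weights)

# Example: strong senior and junior classes, weaker recent hauls.
print(weighted_recruit_rating([95.0, 92.0, 85.0, 80.0]))  # 90.6
```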

Note that 18 of the Elite-21 have finished among the top 20 in recruiting in the last four seasons. Is this not a strikingly similar cohort of programs to the Elite-21 itself? The similarity is apparent despite the fact that several Elite-21 teams have not had significant success on the field in recent seasons. What might account for this relationship? I can dream up a few possible reasons.

Either recruiting services such as Rivals artificially inflate the ratings of members of this Elite-21 cohort (either knowingly or unknowingly), thereby guaranteeing that these programs consistently finish atop the recruiting rankings. Or, the recruits self-select Elite-21 programs, naturally drawn to the historical program success and potential for reclaimed glory even in down periods. Or is it merely a coincidence?

Nah. Above everything else, the Elite-21 list and recruiting rankings are both products of the enormous investment of finances and other resources poured into these programs relative to other schools not appearing here. That investment leads to better facilities that attract top coaches and athletes. It reinforces and consolidates the power among members of the group, preventing others from breaking through the recruiting monopoly. And as we have seen throughout the last 25 years, the investment pays off on the field, which in turn brings in more wealth.

What, then, can we really expect over the next few years? Nebraska, Notre Dame, and Washington are all good candidates to reclaim their status among the top programs in college football at some point, simply because they have the resources at their disposal to do so, though it is hard to pinpoint an exact timeline for their resurgence. It will probably come at the expense of a few names we've come to count on lately -- Michigan, USC, and Tennessee are already experiencing some form of trouble that may impact their short- or long-term futures. Others, like Florida, Florida State, and Ohio State, may continue without any ghastly spills down the APP ratings. We'll be paying close attention over the next five, 10, and 25 years, and beyond.

Posted by: Brian Fremeau on 25 Aug 2010

16 comments, Last at 04 Feb 2011, 2:47pm by Kirt O

Comments

1
by BGNoMore (not verified) :: Wed, 08/25/2010 - 8:59pm

Either recruiting services such as Rivals artificially inflate the ratings of members of this Elite-21 cohort (either knowingly or unknowingly), thereby guaranteeing that these programs consistently finish atop the recruiting rankings.

This is, actually, a documented phenomenon. A player will have either no rating or a relatively poor rating, then is offered by a big-name program and suddenly has a two-star jump in rating.

Rivals is not a scouting service; it is a crude reputation reporter. Rivals ratings make a poor independent variable because they are anything but independent.

2
by Bill Connelly :: Wed, 08/25/2010 - 9:15pm

For a poor independent variable, Rivals rankings sure do give you a pretty good indication of program strength. Five-year recruiting rankings had almost as strong a correlation to on-field success as recent on-field history did. Recruiting rankings are far from perfect, but they're accurate enough to use in the projections.

9
by BGNoMore (not verified) :: Thu, 08/26/2010 - 5:43pm

Bill:

You are right; my last statement was off-base. As long as you are consistent about the point in time at which you pull the rating, how Rivals gets to its final conclusion isn't terribly important (technically, there is potentially a problem with Rivals ratings and recent performance being too correlated).

My main point was that the phenomenon you suggested -- that Rivals is influenced by which schools are recruiting the player -- is real.

10
by Bill Connelly :: Thu, 08/26/2010 - 7:49pm

There's no doubting that there's a lot of noise involved in Rivals' ratings, in exactly the way you described and in many others. But I do think the correlations are strong enough to use to some degree. With all the biases and inconsistent patterns involved in creating their ratings, a 5-star is still more likely to succeed than a 4-star, who is more likely to succeed than a 3-star, etc.

11
by Brian Fremeau :: Thu, 08/26/2010 - 11:35pm

If a two-star athlete receives offers from Elite-21 teams, thereby elevating his rating to three or four stars, is that necessarily wrong? Is Rivals essentially quantifying the wisdom of the (elite) crowd?

12
by BGNoMore (not verified) :: Sun, 08/29/2010 - 6:08pm

Brian:

I think you are making the same point I was trying to make in my second post: if Rivals gets it right eventually, who cares how it got there? Whether the value of the Rivals rating is genuinely developed by Rivals or not does not impact the value of the rating.

Maybe you are reacting to my parenthetical comment? I was trying to point out, as an aside, the possibility that the Rivals-ratings and the Elite21ism variables might be collinear; if so, the explanatory value of one can only be measured in the absence of the other.

If I'm just not grasping your point, let me know.

15
by Kirt O. (not verified) :: Fri, 02/04/2011 - 2:41pm

Brian: "For a poor independent variable, using Rivals rankings sure do give you a pretty good indication of program strength."

His point is dead-on though. Recruits offered by elite college programs get more stars automatically by Rivals. Rivals' recruiting rankings therefore are strongly determined by past program success. When recruiting rankings strongly correlate with program success, they are acting as a surrogate for past program success, and therefore a pretty poor independent variable. While this kind of stats hackery works ok in sports stats where any shoddy analysis flies, it would be ripped apart in any peer-reviewed literature.

16
by Kirt O (not verified) :: Fri, 02/04/2011 - 2:47pm

Ok... Maybe a bit harsh there...sorry. Point is, just because there is a strong tie there doesn't make it a good independent variable.

3
by dbostedo :: Thu, 08/26/2010 - 9:16am

Rivals is not a scouting service, but if they summarize the results of lots and lots of scouting operations (all the college programs' internal scouting), then they can still look like one. And perhaps they derive some extra accuracy from being the sum of a lot of universities' evaluations.

So it seems like the real problem with using them is that they have good correlation with program strength, but that's partly because of the historical program strength. So the correlation is probably self-reinforcing, and doesn't show as much causation as it may seem.

13
by NJ Irish (not verified) :: Mon, 08/30/2010 - 9:33pm

Excellent point by dbostedo....

Is it the winning that brings in recruits, or is it the players? While an R-squared of 35 percent or so is descriptive, it is no better than last year's record as a forecasting tool.

Commercial recruitment services are inherently biased because of their dubious methods of collecting data. At least at the 2-3 star level.

4
by brandon (not verified) :: Thu, 08/26/2010 - 10:22am

It would be interesting to see how athletic directors factor into a program's rating. Florida's Jeremy Foley has done an outstanding job and has been there since 1992, which coincides with Florida's success on the field.

5
by bird jam :: Thu, 08/26/2010 - 11:03am

Your chart needs a little anvil symbol to put next to my Volunteers. *sigh*

6
by Jetspete :: Thu, 08/26/2010 - 11:16am

You don't have all the SEC teams ranked 1-12, so expect a litany of morons from Red State Country to cry foul over this list and wish death upon you and your family for your obvious ineptitude in disrespecting the SEC

7
by Joseph :: Thu, 08/26/2010 - 12:10pm

Um, I'm from SEC country -- and anyone who thinks Kentucky, Vandy, South Carolina, & the Mississippi schools belong in the top-20/25 of ANY ranking is on some illegal substance.
Although, if you said that one of those teams is in the top 20/25 of "programs who have no chance at a big-name bowl", I guess I could agree.

8
by Eddo :: Thu, 08/26/2010 - 12:31pm

The initial comment was snark, methinks.

And Ole Miss has played in the last two Cotton Bowls.

14
by Silver Surfer (not verified) :: Tue, 09/14/2010 - 8:56pm

Not to diminish the volume of effort that went into producing this report, but I do believe it would be appropriate to adjust the on-the-field results for those programs that were found guilty of severe infractions of the rules of the game. If a program has violated the rules so egregiously that it had to forfeit victories, striking all references to the on-the-field results from its media guides and record books, then this study should reflect that those games were not won, but lost.

Alabama, Florida State, and USC would fall in the rankings. Maybe some other programs, too.

Unless it is somehow okay to cheat...