17 Sep 2008
by Brian Fremeau
The Fremeau Efficiency Index principles and methodology can be found here. Like DVOA, FEI rewards playing well against good teams, win or lose, and punishes losing to poor teams more harshly than it rewards defeating poor teams. Unlike DVOA, it is drive-based, not play-by-play based, and it is specifically engineered to measure the college game.
FEI is the opponent-adjusted value of Game Efficiency, a measurement of the success rate of a team scoring and preventing opponent scoring throughout the non-garbage-time possessions of a game. Like DVOA, it represents a team's efficiency value over average.
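As a rough illustration of the drive-based idea, here is a minimal sketch of a Game Efficiency-style number, assuming a simplified score-rate-per-possession definition and an invented league-average rate; the actual GE formula is not reproduced in this column, so treat this strictly as a conceptual stand-in.

```python
# A conceptual stand-in for Game Efficiency, not Fremeau's published
# formula: score rate per non-garbage-time possession, for and against,
# expressed as value over an assumed national-average rate.
def game_efficiency(own_scores, own_possessions,
                    opp_scores, opp_possessions,
                    avg_rate=0.30):  # assumed average score rate per possession
    offense = own_scores / own_possessions - avg_rate   # scoring above average
    defense = avg_rate - opp_scores / opp_possessions   # allowing below average
    return offense + defense

# Example: score on 6 of 11 possessions, allow 2 scores on 10 possessions.
print(game_efficiency(6, 11, 2, 10))  # roughly +0.35 over average
```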
Only games between FBS teams are considered. Since limited data is available at the beginning of the season, the ratings to date are a function of both actual games played and projected outcomes based on the 2008 Projected FEI Ratings. The weight given to projected outcomes will be reduced each week until mid-October, at which point the projections will be eliminated entirely.
As it has so often over the past six years, USC walked onto a live prime-time soundstage Saturday night and lit up the screen with a blockbuster performance. Across the line of scrimmage, the supporting-cast Buckeyes read their lines timidly from cue cards. The game so hotly anticipated all summer as the intersectional regular-season showdown of the year suffered from a predictable plot and a yawn-inducing second act. In the end, USC reasserted itself at the very top of the college football world and recorded its sixth victory in seven games since 2003 against Program FEI top-ten teams. Each of those six marquee victories has come by multiple scores, and four have come by at least 21 points. The Trojans, meanwhile, haven't lost a game by multiple scores since 2001.
Dominating one of the projected contenders for the national title is a major statement. An even more declarative statement may have been made by Ohio State, which officially withdrew itself from the 2008 national title chase. The Buckeyes might still win the Big Ten and get a shot at big bowl game redemption, but their ceiling is poured concrete. According to Massey Consensus Ratings and Game Efficiency data, top-40 teams since 2003 have suffered a defeat worse than Ohio State's only 34 times (out of 681 total losses), and top-15 teams have lost a game as badly as OSU only ten times (out of 237 total losses). The future isn't entirely bleak, but this one result was severe enough to knock the Buckeyes to No. 31, well behind the new leaders of the Big Ten, No. 6 Penn State and No. 9 Wisconsin.
The Mountain West Conference took its own big step forward over the weekend, posting a 4-0 record against the previously daunting Pac-10, including No. 7 BYU's obliteration of UCLA. The MWC now boasts at least as many top-20 FEI teams as the Pac-10, Big 12, Big Ten, ACC, and Big East. Thus far, the MWC is a national-best 6-2 against BCS-conference opponents, and it ranks fourth among all conferences in overall FBS winning percentage (65 percent). Mark the TCU vs. Oklahoma game (September 27) on your calendar now and call it a dress rehearsal for a potential MWC end-of-year BCS bowl game.
When projected data is combined with actual game data in the early-season FEI ratings, the results can be somewhat turbulent from week to week. The process is brand new this year, and though it was test-driven on previous seasons' data, it has certainly provided some surprises. Since I received a few questions from readers about the weighting given to projected and actual results, it's worth explaining the calculation process in a little more detail.
I do not simply calculate and combine two independent ratings, projected and actual. Instead, I combine projected results with actual results in the single FEI formula, recalibrating and modifying the results through multiple-order washes of opponent-adjusted data to stabilize the ratings. The overall weight given to the projected data is reduced from week to week, and it is calculated independently for each team. Why? Because the number of FBS games played varies from team to team. Alabama's three games against FBS competition to date carry more weight relative to the Crimson Tide's projected data than LSU's single FBS game carries relative to its projected data.
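To make the weighting idea concrete, here is a hypothetical sketch, assuming a simple linear blend in which the weight on actual results grows with FBS games played; the real calculation folds projected results directly into the single opponent-adjusted formula rather than averaging two independent ratings, so this illustrates only why Alabama's three FBS games move its rating further off its projection than LSU's single game does.

```python
# A hypothetical projected/actual blend; the linear per-team weighting and
# the full_weight_games value are assumptions for illustration only.
def blended_rating(projected, actual_game_values, full_weight_games=7):
    """projected: preseason projected rating.
    actual_game_values: single-game rating values vs. FBS opponents so far.
    full_weight_games: point at which projections would drop out entirely."""
    n = len(actual_game_values)
    w_actual = min(n / full_weight_games, 1.0)
    actual = sum(actual_game_values) / n if n else 0.0
    return w_actual * actual + (1.0 - w_actual) * projected

# Alabama (three FBS games) leans harder on actual results than LSU (one):
print(blended_rating(0.20, [0.35, 0.40, 0.30]))  # 3/7 of the weight on actual
print(blended_rating(0.25, [0.45]))              # 1/7 of the weight on actual
```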
As was explained in an FEI column last season, a data-relevance factor is included in the FEI formula in order to place premium value on strong performances against good teams and to punish poor performances against bad teams more severely. That factor has created the most interesting side effects in the early-season ratings. Notre Dame and Penn State moved in opposite directions from Week 1 to Week 2 even though neither had played an FBS game; the movement was entirely attributable to the data-relevance factors applied to their projected results, which shift with the actual results of future Irish and Nittany Lions opponents. Now that all but one team has played at least one FBS game, the early-season ratings should be a bit more stable going forward.
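The column doesn't publish the data-relevance formula, but a hypothetical sketch of the asymmetry described above might look like the following; the functional form, thresholds, and scale are all invented for illustration.

```python
# A hypothetical data-relevance multiplier; the actual FEI factor is not
# published here, so this shape and scale are assumptions.
def relevance_factor(game_value, opponent_rating, scale=1.5):
    """game_value: single-game efficiency value (positive = played well).
    opponent_rating: opponent strength (positive = good team)."""
    if game_value > 0 and opponent_rating > 0:
        return scale   # premium on strong performances against good teams
    if game_value < 0 and opponent_rating < 0:
        return scale   # harsher punishment for poor games against bad teams
    return 1.0         # all other combinations weighted normally

# Each game (projected or actual) would then contribute
# relevance_factor(value, opp) * value to the rating, which is how a team's
# number can move when only its future opponents' results change.
```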
It hasn't gone unnoticed by several FO readers that FEI is trailing Russell in his man-versus-machine weekly pick showdown in the Seventh Day Adventure column. FEI certainly wasn't designed to expertly make picks against the spread, and the forecasts thus far rely on as much projected data as actual data. Should it matter that the results to this point are underwhelming? The betting public might measure success exclusively against the Vegas lines, but is that the best way to judge FEI? If FEI predicts Michigan (-3.5) to defeat Utah by four points, should that outcome simply be judged as wrong? Would FEI have been significantly more right to predict Michigan to win by three points?
Another way to measure FEI pick accuracy is by tracking PWE data. This season, in addition to forecasting a score for every game, FEI determines the Projected Win Expectation of the forecasted game winner, the likelihood of victory for that team in that game. How accurate has the PWE data been so far?
2008 Weekly PWE Accuracy (Week | PWE | Actual Win Pct.)
2008 PWE Accuracy Splits

| PWE Range | Actual W-L | Actual Win Pct. |
|-----------|------------|-----------------|
| 50 to 55% | 3-1 | 75.0% |
| 55 to 65% | 9-6 | 60.0% |
| 65 to 75% | 19-9 | 67.9% |
| 75 to 85% | 21-4 | 84.0% |
| 85 to 95% | 22-8 | 73.3% |
| 95 to 100% | 26-0 | 100.0% |
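For anyone who wants to run this check themselves, here is a minimal sketch of the splits calculation, assuming each pick is recorded as the forecasted winner's PWE plus the actual outcome; the bin edges mirror the table above.

```python
# A minimal sketch of the PWE calibration check behind the splits table;
# picks is assumed to be a list of (pwe, won) pairs for the forecasted
# winner of each game.
def pwe_splits(picks):
    edges = [0.50, 0.55, 0.65, 0.75, 0.85, 0.95, 1.0001]  # top bin inclusive
    rows = []
    for lo, hi in zip(edges, edges[1:]):
        outcomes = [won for pwe, won in picks if lo <= pwe < hi]
        wins, games = sum(outcomes), len(outcomes)
        pct = 100.0 * wins / games if games else float("nan")
        rows.append((f"{lo:.0%} to {min(hi, 1.0):.0%}",
                     f"{wins}-{games - wins}", f"{pct:.1f}%"))
    return rows

# Example with a handful of picks:
picks = [(0.97, True), (0.80, True), (0.80, False), (0.58, True)]
for row in pwe_splits(picks):
    print(*row)
```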
What do these tables reveal? Since actual win-loss outcomes are mostly consistent with PWE, and PWE is a direct function of FEI, is it fair to consider FEI to be a reasonably sound prediction tool? Or is it simply good at assessing its own accuracy? I'll keep tracking this throughout the season, and since I am not yet convinced of the best way to judge the "machine," I'm definitely open to suggestions.