06 Nov 2004
By Ryan Wilson
Last week in ESPN.com's Snap Judgment column, Alan Grant made the observation that with two weeks to prepare, the Steelers had a decided edge over the Patriots, and it was really no surprise that New England was upset.
"With a bye and some spare time to game plan, Steelers defensive coordinator Dick Lebeau applied his extensive knowledge. Lebeau has seen a few things in his day. In fact he's seen about as much as Bill Belichick has. And with an extra week, he got to see twice as much. Streak over."
Historically, the bye week is a time for teams to regroup from slow starts, recover from injuries, and get in some extra game-planning for the next opponent. Through nine weeks of the 2004 season, however, teams coming off a bye are sporting a 9-11 record. On the surface this may seem curious, but a closer look reveals that sometimes an extra week of preparation doesn't level the playing field against a superior opponent.
To date, 20 teams have had their bye weeks and have played at least one game since. Of those teams, 11 had winning records entering the bye and nine had records at or below .500. In aggregate, this works out to a combined record of 44-36 (a .550 winning percentage). How is it that there was such a precipitous drop-off in winning percentage for games immediately following the bye?
For starters, the drop-off may be a bit overstated. The question we're interested in is: shouldn't teams perform better after the bye week? Specifically, given that teams have an extra week to prepare, shouldn't this increase the probability they win? To answer these questions, it's important to consider that this is really a question about trade-offs. Does an extra week of preparation equalize an opponent's strengths? At the extremes, I would tend to think that it doesn't. If the Dolphins had two weeks to prepare for the Eagles over a 16-game schedule, I'd be inclined to think the Eagles would beat them about as often as the rules of probability would suggest. At the margins, however, there's sure to be some benefit to the team that can more fully prepare. Last week's Steelers-Patriots game is an example (for the time being I'm disregarding the role injuries play in all of this, but with full knowledge of their importance).
So knowing all this, what should we expect from teams coming off bye weeks? I first separated these teams into four groups: WL (winning teams before the bye who lost their first game back), LL (losing teams before the bye who lost their first game back), WW (winning teams before the bye who won their first game back), LW (losing teams before the bye who won their first game back). I then estimated the probabilities each of these groups should win their post-bye games given the winning percentage of their opponents.
I used Bill James' log5 method, which answers the question, "How often should team A be expected to beat team B?" The results should give some insight into how teams should fare coming off their bye week. Here's the equation:
WPct = (A - A * B) / (A + B - 2 * A * B)
Where A is team A's winning percentage and B is team B's winning percentage.
Table 1:

| GROUP | PRE-BYE WINPCT | OPP. WINPCT | EST. WINPCT (LOG5) | POST-BYE RECORD |
| WL | .813 | .632 | .717 | 0-4 |
| LL | .250 | .455 | .286 | 0-7 |
| WW | .800 | .500 | .800 | 7-0 |
| LW | .364 | .583 | .290 | 2-0 |
Looking at table 1 we see that teams classified as WL sported winning percentages of .813 going into their bye weeks. James' model predicts that they should beat their post-bye opponents (who collectively had a .632 winning percentage) 71.7% of the time. So it might come as some surprise that these teams went 0-4.
Teams classified as LL won 25% of their games heading into the bye week and the model predicts they should win 28.6% of their games against a post-bye opponent that sports a .455 winning percentage. These teams went 0-7.
WW teams went 20-5 before the bye and the model suggests they should win 80% of the time against a post-bye opponent having a .500 winning percentage. These teams went 7-0.
Finally, LW teams won 36.4% of their games before the bye. The model predicts they'll win 29% of their games against post-bye opponents having a .583 winning percentage. These teams went 2-0.
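The four group estimates above can be reproduced directly from the quoted winning percentages. Small rounding differences are possible, since the published percentages are themselves rounded (the LL estimate comes out .285 here versus the .286 in the text):

```python
def log5(a, b):
    """Expected probability a team with winning pct `a` beats one with pct `b`."""
    return (a - a * b) / (a + b - 2 * a * b)

# (pre-bye winning pct, post-bye opponents' winning pct), as quoted above
groups = {
    "WL": (0.813, 0.632),
    "LL": (0.250, 0.455),
    "WW": (0.800, 0.500),
    "LW": (0.364, 0.583),
}
for name, (own, opp) in groups.items():
    print(name, round(log5(own, opp), 3))
# WL 0.717, LL 0.285, WW 0.8, LW 0.29
```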
So what does all this mean? WL teams were hugely disappointing, while LL teams performed slightly worse than expected. On the other hand, WW teams were a little better than the model predicted while LW teams were a pleasant surprise.
But James' model was originally intended to be used to estimate the probability one team beats another team in a one game series. The data above are aggregated, which may result in some loss of precision. Consequently, I've included table 2, which uses the log5 model on a game-by-game basis for each of the 20 post-bye week contests. This should give a little more insight into wins and losses as they relate to the bye week.
Table 2:

| TEAM | WINS | LOSSES | LOG5 WINPCT | WINPCT | OPPNT | WINS | LOSSES | LOG5 WINPCT | WINPCT | WIN/LOSS |
Of the 11 teams that lost their first game after the bye, only two (Dallas and Carolina) had winning records and were predicted to lose. Four teams had losing records and were predicted to lose (Washington, Cincinnati, Kansas City, Buffalo); three teams had winning records and were predicted to win (Seattle, Indianapolis, and the New York Giants); and two teams had losing records and were predicted to win (or at least had a 50/50 chance of winning -- San Francisco, Chicago).
Of the nine teams that won their first game after the bye, two teams had winning records and were predicted to lose (Pittsburgh, Detroit). Two teams had records at .500 or below and were predicted to lose (Houston, Arizona); five teams had winning records and were predicted to win (New England, the New York Jets, Philadelphia, Minnesota, and Baltimore); and no team had a losing record and was expected to win.
OK, so again, I'll ask the question (for the last time!): what does this all mean? Well, the cool thing about James' log5 model is that we can scale a team's probability of winning a particular game by (a) that team's winning percentage, and (b) that team's opponents' winning percentage. The results give us a clearer picture of the relationship between bye weeks and post-bye wins. The winning percentage of the 20 teams heading into the bye week was a combined .550; these teams then went 9-11 in games immediately following the bye (.450). What's interesting is that the average log5 winning percentage of each of these teams against their post-bye opponent is .494, which equates to roughly 9.9 wins.
So not only should we not expect teams to do exceptionally better after bye weeks, we've also learned that the 2004 performances to date are just slightly below what the log5 model predicts (.450 vs. .494), but certainly not statistically different from what we might expect. Furthermore, more time to prepare for an opponent does not necessarily make up for inferior talent, game-planning, or coaching.
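As a rough illustration of why 9 wins isn't surprising (my back-of-the-envelope check, not from the article): if we treat the 20 post-bye games as independent coin flips sharing the average win probability of .494, winning 9 or fewer happens a large fraction of the time.

```python
from math import comb

n, p = 20, 0.494           # 20 post-bye games, average log5 win probability
expected_wins = n * p      # about 9.9, as in the text

# Binomial tail: probability of 9 or fewer wins out of 20 at p = .494.
# (Simplifying assumption: independent games with one shared p.)
p_at_most_9 = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(10))
print(round(expected_wins, 1))   # 9.9
print(round(p_at_most_9, 2))     # well above any conventional significance cutoff
```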
So even though Alan Grant suggests that the Steelers' victory wasn't much of a surprise because they had an extra week to prepare, the log5 model predicted otherwise. Of course this model doesn't account for injuries, and not having Corey Dillon, Ty Law, and Tyrone Poole may have played a significant role in the outcome. Either way, bye weeks are a great time to get players healthy, but they don't significantly improve the chances of winning.