07 Aug 2006
Guest Column by Patrick Allison
It's often said that the NFL preseason is pointless. The players themselves often say they would rather just have the season start, and coaches constantly worry about someone getting injured. People complain it's not real football: coaches don't gameplan the same way, starters don't play the entire game, and, probably most importantly, the teams mostly don't care about winning.
But on the other side, it's not like offensive linemen protect the quarterback any less just because it's preseason. If a tackle misses a blocking assignment, and the QB goes down, that could doom the entire season. They have to play as if it's real. And no QB will actively try to throw an interception, even in preseason, unless you're Chad Hutchinson.
As for the fact that some of the players aren't the same, well, to be honest, the regular season faces those problems as well. The Buffalo Bills that you face in Week 1 are probably not the same Bills you face in Week 17. Plus, over the season, every team needs to draw from its backups as players get injured. The quality of your backups eventually becomes the quality of your team. Depth is every bit as important as the ability of your first string.
That being said, can we find an actual correlation between preseason and regular season performance? Well, it's difficult; the preseason is only four games. Does the number of wins in the first four games of the regular season correlate with the number of wins in the remaining 12? Not well. The Patriots and Eagles of 2003 know this from pleasant experience. Plus, what if your preseason contests in 2004 were against four very bad teams? It's easy to see that this approach can be very heavily biased.
Thankfully, there is a way we can deal with this bias. Occasionally, though not often, NFL teams play a team in preseason and then face them again in the regular season. So we can try looking for a correlation in the point difference in the preseason game, and the point difference in the regular season game. Let's say Denver beats San Francisco in the preseason by 30 points. What happens during the regular season if they meet again?
First, let's look at the distribution of point differentials from all NFL games from 2000-2005:
It's a nice, smooth distribution, with a width (RMS) of about 15 points. That is, about 70 percent of all games in the NFL are decided by 15 points or less. That means that if there were no correlation between preseason games and regular season performance, and Denver beat San Francisco by 30 points in the preseason, we would not expect Denver to beat San Francisco by 30 points again -- a 30-point margin is too uncommon an occurrence in the NFL for it to happen twice if the first occurrence didn't mean anything. So, by looking at the correlation between the point differentials, we can see if the preseason game influences the regular season game's results.
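If you want to check a claim like that yourself, the arithmetic is simple. Here's a minimal Python sketch; the scores in `games` are made up for illustration, standing in for the real sample of every NFL game from 2000-2005.

```python
import math

# Made-up final scores as (home, away) tuples -- stand-ins for the
# real sample, which is every NFL game from 2000 through 2005.
games = [(24, 17), (13, 31), (20, 17), (10, 38), (27, 24)]

diffs = [home - away for home, away in games]

# RMS width of the point-differential distribution (about 15 on real data).
rms = math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Fraction of games decided by 15 points or less (about 70% on real data).
close = sum(1 for d in diffs if abs(d) <= 15) / len(diffs)

print(f"RMS width: {rms:.1f} points; decided by 15 or less: {close:.0%}")
```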
This is going to be a sloppy, sloppy method. Scoring in football is quantized in weird increments; it may be easier to get a 7-point differential than a 1-point differential. But we still might be able to see something, though obviously smoother measures like FO's DVOA stats would be better here.
So we know that this is going to be sloppy. But can we learn anything from it? Well, the first question to ask is what does this look like in a situation where we know that the games mean something? That we can answer easily. Teams in the same division meet each other twice in the regular season, so we just have to look at those games. If Philadelphia beats Washington by 30 points, what does that mean for the next game? So here's a plot of first game regular season point differential vs. second game regular season point differential for the past six years.
The grey line is just to guide the eye -- it's a correlation of 1; that is, the first game point differential equals the second game point differential. The pink line is a linear regression fit. While the fit looks pretty flat -- it's a correlation coefficient of 0.20 -- given the number of points, the trend is quite significant (P-value of 0.008, for those who care). Note how few teams lost their first game by 14 points or more and won the second game by 14 points or more -- only four. In fact, only once did a team lose by 20 or more points in their first meeting and win by 20 or more (for real) in their second: the New England Patriots, who lost 31-0 to the Buffalo Bills in 2003, and then beat them 31-0 at the end of the season. (The other point was the Vikings beating the Bears in a meaningless Week 17 game.) That's what those two far outliers are (one for the Patriots, one for the Bills). So if you thought that result was weird, you were right to. In contrast, there were 17 situations where a team won their first game by 14 or more points and won their second game by 14 or more points.
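For anyone who wants to reproduce the fit, it's a garden-variety linear regression. Here's a minimal Python sketch using scipy; the paired margins below are invented stand-ins for the real intradivisional rematch data.

```python
from scipy import stats

# Invented paired margins: (first meeting, second meeting) from one
# team's perspective. The real data is every intradivisional rematch
# from 2000 through 2005.
first = [14, -3, 7, -21, 10, 3, -7, 24, -10, 6]
second = [7, -10, 3, -14, 0, 6, -3, 10, -6, 13]

# Ordinary least-squares regression of second-game margin on first-game
# margin. On the real sample this gives r = 0.20 with a P-value of 0.008.
fit = stats.linregress(first, second)
print(f"r = {fit.rvalue:.2f}, P = {fit.pvalue:.3f}, slope = {fit.slope:.2f}")
```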
This result is exactly what we expect. That's good. It means that people who try to figure out what game results mean (like, anyone at FO) aren't out of their minds -- if a better team beats a worse team once, they're likely to do it again. Game results aren't random. To illustrate this a little more clearly, let's look at that distribution of point differentials -- but now, let's look at the distribution of point differentials from the second game, after the first game has been won by more than 14 points.
It looks similar to the first distribution, with a spread of about 15 points, but the mean is now shifted by about 5 points. So if a team wins their first game by 14 or more points, the average outcome of the second game is a win by 5 points. Unless you're the Buffalo Bills.
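Computing that conditional mean takes one more line once you have the paired margins. A sketch, continuing with the invented numbers from above:

```python
# Invented paired margins again: (first meeting, second meeting).
first = [14, -3, 7, -21, 10, 3, -7, 24, -10, 6]
second = [7, -10, 3, -14, 0, 6, -3, 10, -6, 13]

# Keep only rematches where the first meeting was won by 14 or more
# points, then average the second-game margins. On the real sample
# this conditional mean is about +5 points.
after_blowout = [s for f, s in zip(first, second) if f >= 14]
mean_margin = sum(after_blowout) / len(after_blowout)
print(f"mean rematch margin after a 14+ point win: {mean_margin:+.1f}")
```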
Now that we know what the regular season games look like (where we all agree they mean something), let's go to the preseason since 2000. Let's be a little more intelligent about it, though. Let's look at only the first-half differential in the preseason versus the full differential in the season (or postseason -- both are included). The correlation is significantly reduced if we use the full game, which is a good thing -- this is what our intuition would expect, since the players on the field in the second half are sometimes on different teams by the regular season.
This time around, the correlation is actually stronger (correlation coefficient of 0.31), though the significance is weaker (P-value of 0.04) -- still significant, just less so. There are no teams who were losing by more than 20 points at the half who won their regular-season (or postseason) games by more than 20 points.
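To be concrete about the "first half only" part, here's a sketch of how a halftime margin falls out of quarter-by-quarter scoring (the numbers are invented). Note how this particular made-up team led at the half but lost the game -- exactly the second-half noise we're trying to exclude.

```python
# Invented quarter-by-quarter scoring for one preseason game, as
# (home, away) points per quarter.
quarters = [(7, 0), (3, 7), (0, 14), (7, 3)]

# Starters mostly sit after halftime, so only the first two quarters
# count toward the margin we correlate with the regular-season rematch
# (using the same regression as before).
half_margin = sum(h - a for h, a in quarters[:2])
final_margin = sum(h - a for h, a in quarters)
print(f"halftime margin: {half_margin:+d}, final margin: {final_margin:+d}")
```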
Thanks to Michael David Smith for pointing out why the correlation could be stronger: in the regular season, all intradivisional games are played once at home, once away, whereas that's not the case for preseason rematches. Since we know that home field advantage exists, we actually expect the regular season correlation to be weaker. As a stupid example, imagine that home field advantage gives a team 5 points. If the Packers play at Chicago, and lose 10-25, when Chicago plays at Lambeau, you might expect them to lose 15-20. This would put a point on the previous chart at (-15, -5), which would tend to flatten the correlation.
There aren't enough data points to make the same distribution as before, but we can lower the threshold a little and look at all games where the point differential at the half was more than 7 points. This distribution's got a bit more of a tail than the first one.
However, it's still shifted significantly positive. So if a team is more than 7 points ahead of an opponent at the half, on average, it beats them by about 6 points in the regular season. To put things in perspective, note that of the 44 teams in this sample, 28 beat their opponents a second time. Only 16 teams lost, and only six by more than 10 points. The tail here is Tennessee, San Diego, and Washington, who must've ticked off Oakland, San Francisco, and Pittsburgh, losing by 27, 28, and 21 points respectively. (It took teams a while to remember that Ryan Leaf is god-awful.)
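The same conditioning trick summarizes this sample, too. Another sketch with invented margins; on the real 44-team sample the mean comes out around +6 with a 28-16 record.

```python
# Invented paired margins: (preseason halftime, regular-season rematch).
half = [10, -8, 3, 14, -3, 9, 7, -10, 12, 8]
rematch = [6, -3, 7, 10, -7, 3, 14, -6, -2, 9]

# Condition on a halftime lead of more than 7 points in the preseason
# meeting, then summarize the rematch margins and the win-loss record.
leads = [r for h, r in zip(half, rematch) if h > 7]
mean = sum(leads) / len(leads)
wins = sum(1 for r in leads if r > 0)
print(f"mean rematch margin: {mean:+.1f}; record: {wins}-{len(leads) - wins}")
```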
So what exactly does this mean? Well, it's difficult to say. The sample size is certainly small, but the effect is pretty significant. It looks like preseason first halves do have predictive power for the regular season.
These are the common preseason games this year, and there are an unusual number of them:
(Ed. note: The number of preseason/regular season rematches is high this year because the AFC West and NFC West tend to schedule each other for preseason games to cut down on travel costs -- and this year, they are also scheduled for interconference play.)
Certainly the preseason is less important than the regular season -- after all, these games don't actually count. Plus, coaches don't usually put backups in when the game is close, and we know they gameplan differently in the regular season. But it seems like the data are trying to indicate that preseason does, in fact, mean something -- at the very least, fans should be concerned when teams are blown out in the first half. Dismissing the games outright is probably a little irrational.
For another take on this which shows that preseason does in fact matter, check out TwoMinuteWarning.com. This is an updated version of an article TMW first did in 2004, and it looks at what the preseason can tell us about total wins and losses in the upcoming regular season.
One final point: there's not enough data in four years of the preseason to see if the preseason/regular season correlation gets weaker as the number of weeks separating the games increases, but we know from weighted DVOA that the predictive power of older games drops significantly after about 13 weeks. So take the common games where the second game is much later in the season with a grain of salt.
Patrick Allison is a graduate student in physics at Ohio State, who is thankful for free time on airplanes to work with random football statistics. If you are interested in writing a guest column for Football Outsiders, something with a unique take on the NFL, please e-mail info-at-footballoutsiders.com.