09 Sep 2005
by Mike Tanier
Welcome to the Too Deep Zone. I hope to provide a little of everything in this weekly feature: some scouting, some stats, some history, a few X's and O's, even a little humor. My only goal is to avoid the obvious: if I don't have a unique slant on a story, then I won't tell it.
My two-year-old son can pick division winners.
Give him a paper bag filled with mini-helmets from any division and have him pick one out, and he'll pick this year's division champ 25% of the time. Repeat the process for all eight divisions, and he's likely to pick two future champions correctly. Not bad for a kid.
Of course, we all know that's probability at work. With four teams in each division, any random selection method will work an average of 25% of the time. In fact, probability theory states that my son (or my dog, or the randomizer on my calculator) has a 31.15% chance of getting exactly two division champions right purely by chance, and roughly a one-in-three chance of doing even better.
A method called binomial expansion tells us how likely my son is to guess any given number of winners from four-team divisions, purely by chance. He has a 20.76% chance of guessing three division champs correctly, an 8.65% chance of four, a 2.31% chance of five. His chances of getting all eight right are 0.0015%, or fifteen out of a million, but no one expects perfection from a toddler. He also has just over a 10% chance of striking out and getting none of the division winners right.
Think about it: pick this year's teams out of a hat, and you have a better than 10% chance of getting half or more of the division champs right.
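The binomial figures above are easy to check with a few lines of Python. This is just a verification sketch of the math in the article, not anything the author ran:

```python
from math import comb

def binom_pmf(n, k, p):
    """Probability of exactly k successes in n independent trials,
    each succeeding with probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 8, 0.25  # eight divisions, four teams each, picked blindly
for k in range(n + 1):
    print(f"{k} correct: {binom_pmf(n, k, p):.2%}")

# Chance of getting half or more of the eight champs right
at_least_four = sum(binom_pmf(n, k, p) for k in range(4, n + 1))
print(f"4 or more correct: {at_least_four:.2%}")  # about 11.4%
```

Running it reproduces every number in the paragraph above: 31.15% for two, 20.76% for three, 8.65% for four, 2.31% for five, 10.01% for zero, and a shade over 11% for four-or-more.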
Of course, no writer, reader, or fan makes his preseason prognostications entirely at random (I hope). Let's move from my son to my wife. She knows enough about football to make better than random decisions. If forced to make predictions, she would eliminate teams like the Niners and Browns. She would automatically select powerhouses like the Eagles, Colts, and Patriots: these teams aren't guaranteed to win anything, but they would certainly be sound selections.
By adding some very basic knowledge to the equation, my wife's chances of guessing any one division correctly would probably be closer to 1-in-3 (33.3%) than 1-in-4. Running another expansion, my wife would have about a 27.3% chance of getting three winners correct, a 17.1% chance of getting four correct, and a certainly possible 0.24% chance of picking seven out of eight correctly.
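Rerunning the same expansion with a success probability of 1/3 instead of 1/4 reproduces my wife's numbers. Again, this is only a sketch to confirm the arithmetic:

```python
from math import comb
from fractions import Fraction

def binom_pmf(n, k, p):
    """Probability of exactly k successes in n trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

p = Fraction(1, 3)  # a slightly informed guesser
for k in (3, 4, 7):
    print(f"{k} correct: {float(binom_pmf(8, k, p)):.2%}")
```

The output matches the text: 27.31% for three, 17.07% for four, and 0.24% for seven of eight.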
Once we move above my wife's knowledge level, we enter the wide gray area of expertise. There's a huge continuum of avid fans, writers, sports-talk hosts, handicappers, and other interested observers who want to be able to pick winners in advance. We all use whatever resources we have at our disposal to make the best possible selections. A guy who wants to win a barroom argument or a $50 futures bet may spend an hour poring over Street & Smith's before making his choices. A writer for a nationally-recognized website may spend days scrutinizing scouting reports and stats before making his. Both individuals could end up with the exact same selections.
Anything can happen in one year -- my stuffy-nosed kid could beat Vegas -- but over the course of many years, expertise should win out. A full-time NFL scout should be able to out-pick me. I should be able to out-pick a casual fan. That fan should out-pick my son. But we are all just using various degrees of expertise, none of which is absolute and infallible.
The goal -- the Holy Grail -- is to obtain the kind of expertise that allows you to pick winners 60% of the time, or 70%, or 90%. Not only could you clean up in Vegas, but you could attain fame and fortune as the world's foremost football expert.
But reaching those high percentages isn't easy. Take a nationally-recognized group of football experts: the writers of Pro Football Weekly. These guys study the game year-round. They make divisional predictions in their annual season preview. Over the last five years, they've had to predict the winners of 36 divisions, eight per year from 2002-2004, six per year before that. They were right 14 times: a 38.9% success rate.
Not bad. But it has to be possible to do better. After all, I suggested at the start of the article that my wife would probably have a 33% success rate; you probably agreed with that. Surely a team of experts should be worth more than six percentage points.
Of course, picking division winners is just a small part of football speculation. There's picking Super Bowl teams, Wild Card teams, determining actual records, and so on. The folks at Pro Football Weekly, or the Sporting News, or Football Outsiders, would blow away my wife or a barstool blowhard if we considered all of these predictions. My son has a 25% chance of picking the winner of a division out of a hat, but only a 1-in-24 shot (4.2%) of getting the exact order of a division correct. Smart, dedicated football experts have much better odds.
But picking division winners is fairly elemental. If we cannot say for sure who will top the NFC South, we cannot say much of anything for sure. If you cannot predict that the Steelers or Chargers will have a great year, no one will be impressed that you pegged the Jaguars for a 9-7 season.
Those PFW analysts, and the writers of other preseason annuals, may be guilty of a little over-conservatism: they seem to play it safe and pick lots of teams to repeat for division crowns. In 1999, the Colts, Jaguars, Seahawks, Redskins, Buccaneers, and Rams all won their divisions. For 2000, Pro Football Weekly picked all but the Seahawks to repeat, selecting the Broncos to win the old AFC West. In fact, not a single one of those picks won, a sign that photocopying last year's results isn't the way to determine this year's winners.
But of course, that's what all of us do to a certain extent: we build this year's predictions from last year's standings. If my wife were pressed into a football pool, she would probably just write down last year's division champions and turn them in. And it wouldn't be a bad gambit. Since 1994, and excluding the year when the NFL went from six to eight divisions (three teams did repeat that year), division champs have repeated 16 times in 46 opportunities, or 34.8% of the time.
On the one hand, the preseason publications beat the photocopier by about four percentage points. On the other hand, hundreds of man-hours and thousands of pages of text should add up to more than a four-point advantage.
Some experts do little more than take last year's results and tweak them a little, making obvious changes: moving a weak division champion down to clear room for a team on the rise, for instance. Maybe we can avoid all of the hard work of analysis by creating a crude formula that does the tweaking for us. For example, we'll start with last year's champs, bump off any team that won ten or fewer games, and replace them with the second place team in that division. That should eliminate some lucky title winners, replacing them with the next most logical choice to win that division.
Applying that method for every season since 1994 (excluding 2001, the division-shift year), we find that some tweaks result in better predictions (taking the Jets over the Patriots in 1998), some worse (taking the Jaguars over the Steelers in 1997) and some have no effect at all. The results: the tweak method picked winners 18 times out of 46 chances, or 39.1% of the time.
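The tweak rule is mechanical enough to write down as code. Here's a minimal sketch; the standings below are made-up illustrations, not actual season data:

```python
def tweak_picks(last_year):
    """Start with last year's champs; replace any champ with ten or
    fewer wins by that division's second-place team."""
    picks = {}
    for division, standings in last_year.items():
        (champ, wins), runner_up = standings[0], standings[1]
        picks[division] = champ if wins > 10 else runner_up[0]
    return picks

# Hypothetical standings: (team, wins), ordered by finish
last_year = {
    "AFC East": [("Patriots", 10), ("Jets", 9)],
    "NFC South": [("Falcons", 11), ("Panthers", 7)],
}
print(tweak_picks(last_year))
# {'AFC East': 'Jets', 'NFC South': 'Falcons'}
```

The 10-win Patriots get bumped for the runner-up Jets; the 11-win Falcons survive. Feed in real standings and you get the 18-of-46 record described above.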
The tweaks are slightly better than just photocopying last year's results. In fact, the simple tweaks have a slightly higher success rate than the Pro Football Weekly experts, albeit in different sample sizes. That's not to disrespect the gang at PFW, but it does show how far years of experience and months of research can get you in the world of football prognostication.
Of course, if all you do is take last year's winners and make slight adjustments, people will catch on. You'll be accused of making only "obvious" picks. Those who want to be perceived as an expert must go out on a limb once in a while.
Real experts go out on a limb because their methods, research, and analysis have led them to odd-but-inescapable conclusions. Fake experts just avoid an obvious team or pick an oddball team in a quest for "genius points". When the genius pick succeeds, the fake expert rubs everyone's face in it. When it fails, which it usually does, he distances himself from it quickly.
We can add this genius element to our tweak system by factoring in the Merrill Hoge Adjustment. Hoge figured that the 2004 Eagles, a three-time division champion that had just added Terrell Owens and Jevon Kearse to the roster, would finish 8-8. That's a classic genius pick: Hoge wasn't looking at the product on the field, he was reading tea leaves.
We'll perform a Hoge-like tweak by never picking a two-time champion to repeat again: we're going out on a limb here, baby, by saying that such-and-such team will not three-peat. And if they three-peat, we sure as heck won't pick them to four-peat, or whatever. We'll stack this adjustment with the one we made before, eliminating 10-win champions.
The Merrill Hoge Adjustment is counterintuitive, removing successful teams from consideration, so you would expect it to hurt the forecasts. And it does, but not by much: pick against any three-peats, and you will correctly identify division champs 31.25% of the time. It's wrong a lot, but it's right every couple of years. It's not a bad gamble: lose seven or eight percentage points of accuracy, but gain a big "I told you so" when some dynasty crumbles. And no one accuses you of playing it safe.
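Stacking the Hoge Adjustment onto the tweak rule is one extra condition. Again a sketch with hypothetical inputs (the streak counts and standings are illustrative):

```python
def hoge_tweak(last_year, champ_streak):
    """Tweak rule plus the Merrill Hoge Adjustment: never pick a team
    that has already won its division two or more years running."""
    picks = {}
    for division, standings in last_year.items():
        (champ, wins), runner_up = standings[0], standings[1]
        if wins <= 10 or champ_streak.get(champ, 0) >= 2:
            picks[division] = runner_up[0]
        else:
            picks[division] = champ
    return picks

# Hypothetical: a 12-win, two-time defending champ gets bumped anyway
last_year = {"NFC East": [("Eagles", 12), ("Cowboys", 10)]}
print(hoge_tweak(last_year, {"Eagles": 2}))
# {'NFC East': 'Cowboys'}
```

Without the streak condition the 12-win Eagles would be an automatic keep; with it, the genius pick takes over.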
We can do tweaks like these until the cows come home. Leaving Hoge aside, we can come up with all sorts of wacky methods. Start with last year's winners. Take out any team with 10 or fewer wins and any team with a starting quarterback over 35. Replace them with the second-place team if that team had a winning record. If the second-place team didn't have a winning record, select the team with the best record over the last six games of the season, with the tiebreaker being the age of that team's starting quarterback (the youngest wins).
You can work that all out if you want to; I didn't. My guess is that it would land you between 35% and 40%. My point isn't to come up with some foolproof method based on last year's standings. My point is that the current standards of expertise in pro football don't get us very far when it comes to predicting winners and losers. My son can pick division champions 25% of the time. My wife can probably do it about 33% of the time. The writers of the premier football magazines in the nation can do it about 40% of the time, about the same percentage you would attain by cutting away the deadwood from last year's standings.
At Football Outsiders, we're striving for something better, using everything at our disposal to make the most accurate picks possible: Pythagorean analysis, DVOA projections, historical research, old-fashioned scouting. In the first year, 2003, FO consensus correctly chose four of eight division champs; last year, it correctly chose five of seven with a split vote on the AFC South. It's a process, not a product: last year's picks were great, this year's should be better, next year's better still. No one in their right mind thinks 100% or even 75% accuracy is attainable, but we're taking some of the guesswork out.
And I'm glad to be aboard. When I worked for a different sports service, I made the "official" predictions for the 2002, 2003, 2004 seasons. That's a total of 24 division winners. Laboring hard through the spring and summer, I picked eight correctly.
8-of-24. That's 33 percent.
Should have just let my wife pick them.