07 Jul 2006
by Aaron Schatz
Today we go back, back, back into some very old questions in the Football Outsiders mailbag. We get a lot of e-mail, and there are a lot of comments on the discussion threads, which adds up to too many good questions that require well-thought-out answers. There gets to be a huge buildup at the bottom of my e-mail. With the book finished, I've been going through, trying to answer some of these old questions. A good number of them provide excellent examples of why we value certain teams and players more highly than others.
The backlog is also another reason to remind folks of two things. First, the best way to get your question answered at this point is to use the contact form, not to ask a question in a discussion thread. Second, if it is a question not related to the DVOA stats, it is more likely to be answered if you send it to one of the other writers, not me.
Be aware that we reference plenty of our innovative FO stats here, not to mention their unfamiliar terminology, so if you are a recent addition to the readership you might want to read this first.
Let me start by answering some frequently asked questions about Pro Football Prospectus 2006 and the upcoming season. You can read more about the book on this page, and it will be out in late July. We wanted it to be earlier than last year, but that proved difficult, in particular because the NFL moved the draft back a week. The book is available from Amazon right now as a pre-order, and it will be in your local bookstore as well. I've received a lot of questions about overseas shipping, but that's not really our department -- it all depends on who you buy the book from.
Yes, there will again be a downloadable KUBIAK fantasy projections spreadsheet this year. We're taking some time to really improve it for 2006. Unlike last year, it will be customizable with your league's rules, and it includes team defense. We're also planning things out a little better, so you won't have to send a donation and then wait for us to send you the sheet manually -- instead, there will be a more structured system so that after you pay a fee, you can download as many updates as you want as depth charts change throughout the preseason. We know some people are already asking for the projections, so we're working on putting this out there as soon as possible. The book also includes the projections, of course, ranked for eight different league setups including the much-requested point-per-reception leagues.
On to the questions...
Alan: Judging by conventional statistics, LaMont Jordan's primary strength in 2005 was his receiving. While he posted a rather paltry 3.8 yards per carry and barely topped 1000 yards, he caught 70 passes for over 500 yards, which are high numbers for a halfback. However, in rushing DVOA, Jordan is ranked 17th while his receiving DVOA is 27th. What is the reason for this apparent conflict in statistics?
Aaron: Jordan's ranks aside, he basically came out as average in rushing (1.1% DVOA) and receiving (-0.7% DVOA). So the question is, why isn't his receiving DVOA higher?
First of all, Jordan caught 68% of intended passes. The average for RB is 72%. Remember, we're talking about screens and dumpoffs and a lot of open stuff here.
More importantly, look at the breakdown of Jordan's passes by down.
1st down: 27 catches in 37 passes, 8.6 yards per catch, 2.4% DVOA. Fine.
2nd down: 33 catches in 43 passes, 7.2 yards per catch, 15.2% DVOA. Good.
3rd/4th down: 10 catches in 23 passes, 9.3 yards per catch, -44.3% DVOA. Yikes.
The Raiders threw to Jordan 23 times on 3rd/4th down and he had a grand total of 3 first downs. Enjoy such passes as:
20 yards on third-and-21
15 yards on third-and-18
11 yards on third-and-12
10 yards on fourth-and-14
5 yards on third-and-6
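Why do those plays grade out so badly? In the spirit of the FO success-rate baselines, a play only "succeeds" if it gains roughly 45% of needed yards on first down, 60% on second, and 100% (a conversion) on third or fourth down. A simplified sketch -- this is only the success-rate cutoff, not the full DVOA formula:

```python
# Simplified success test (hedged: real DVOA uses full situational
# baselines; these cutoffs are only the success-rate rule of thumb).
def is_success(down, to_go, gained):
    fraction_needed = {1: 0.45, 2: 0.60, 3: 1.0, 4: 1.0}[down]
    return gained >= fraction_needed * to_go

# Jordan's third/fourth-down catches listed above: (down, to go, gained)
plays = [(3, 21, 20), (3, 18, 15), (3, 12, 11), (4, 14, 10), (3, 6, 5)]
for down, to_go, gained in plays:
    print(down, to_go, gained, is_success(down, to_go, gained))
# every one of these gains falls short of the sticks, so all fail
```

Healthy-looking yardage totals, zero successes -- which is exactly how a back piles up receiving yards while his receiving DVOA craters.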
I actually wrote about something similar in Pro Football Prospectus 2006 regarding Chris Perry of Cincinnati, who is thought of as a great receiver but has a horrible receiving DVOA. He has this same issue with pointless third-down catches, as well as a high number of catches for lost yardage.
Andrew Jones: I am trying to find out the number of dropped passes by the Oakland Raiders and how it compares to the rest of the NFL. I know there were quite a few, but where do I get the exact figures?
Aaron: I'll turn our second Raiders-related question over to Bill Moore, coordinator of the Football Outsiders game charting project.
Bill Moore: Ah, the dropped pass, the bane of every NFL fan. Is anything more frustrating, or even infuriating? Drops are a fairly subjective statistic. What Andrew might label a drop, I might consider an unreachable overthrow or an excellent defensive move by the cornerback. The NFL doesn't officially track dropped passes. The statistics behemoth STATS, Inc. does track dropped passes, and while they release very little of their proprietary database of football data, one stat they do make available on a player-by-player basis is drops for wide receivers. To the best of my knowledge, they do not provide the information in sortable tables. To get league-wide averages, one would have to look up each player individually and do the math.
Fortunately for FO readers, we began tracking a number of non-official football statistics, including drops, in the inaugural year of our charting project. A wide array of volunteers recorded each play of the first 16 weeks of the 2005 season. Each volunteer's interpretation of a drop is likely to be different, and therefore unlikely to exactly match that of STATS; however, as you can see in the table below, we are close.
[Table: Oakland Raiders Dropped Passes, 2005]
League-wide, we estimate the drop rate to be about 5.7% of all passes, or 14% of all incompletions. The Raiders' receivers dropped 29 passes in the first 16 weeks. Among all pass plays, that is almost exactly the league average. Here are the teams with the most and fewest drops as a percentage of charted pass attempts. We've removed passes marked "Thrown Away," "Tipped at Line," or "Hit in Motion," and our count of drops includes passes that are juggled by a receiver and then intercepted.
[Table: Most Drops / Fewest Drops, as a percentage of charted pass attempts]
Yes, that really does say "Seattle." Look for more drop data in the upcoming Pro Football Prospectus 2006.
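The two league-wide rates quoted above are easy to reproduce from raw charting counts. A quick sketch -- note the attempt and completion totals here are illustrative placeholders, not Oakland's actual numbers:

```python
# Compute drop rate two ways from charting counts.
# The attempt/completion totals below are made up for illustration.
def drop_rates(drops, attempts, completions):
    incompletions = attempts - completions
    return drops / attempts, drops / incompletions

# e.g. a team with 500 charted attempts, 290 completions, 29 drops
per_attempt, per_incompletion = drop_rates(29, 500, 290)
print(f"{per_attempt:.1%} of passes, {per_incompletion:.1%} of incompletions")
# with these totals: roughly 5.8% of passes, 13.8% of incompletions
```

With plausible totals, 29 drops lands right around the league averages given above, which is why Oakland grades out as ordinary despite the raw count sounding high.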
David Brude: I'm just curious to understand how Tatum Bell can average over a yard more per carry and yet have a DVOA that is only half of Mike Anderson's? This is a general question I always have about teams that use two running backs. What kinds of situations allow the back with seemingly better stats to be rated lower? I mean, I can see where backs that run out the clock a lot will have a low YPA, but I don't think that is the case here ... or is it?
Are TDs worth that much relative to yards per carry to make that big of a difference? What kinds of situations is Tatum Bell in that keep his DVOA low with such a high yards per rush? Or is it just that most of his runs go for little yardage but he breaks off very long runs that keep his YPA total very high? When he's averaging a yard more than his closest competitor, something must be going on.
Aaron: First of all, here's a look at the stats of the three Denver running backs for 2005:
Why is Anderson's DVOA so much higher than Bell's? The correct answer is definitely "most of his runs go for little yardage but he breaks off very long runs that keep his YPA total very high," as you can see from this graph of how often each back ran for a given amount of yardage:
You can see that Bell has more losses and more runs for no gain, but Anderson has more runs in every category from 3-4 yards to 9-10 yards. Bell had seven runs over 25 yards, and that explains his high yards per attempt. Anderson only had one.
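The effect is easy to reproduce with made-up carry logs (these are toy numbers, not the actual Denver data): a boom-or-bust back can post a higher yards per carry than a steady back even though he is stuffed far more often.

```python
# Toy carry logs, invented for illustration (not the real Denver data).
steady    = [3, 4, 4, 5, 3, 6, 4, 5, 3, 4]      # never loses ground
boom_bust = [-2, 0, 1, -1, 0, 2, 1, 0, -2, 51]  # one long run

def ypc(runs):
    """Yards per carry."""
    return sum(runs) / len(runs)

def stuffed(runs):
    """Carries that gained nothing or lost yardage."""
    return sum(1 for r in runs if r <= 0)

print(ypc(steady), stuffed(steady))        # 4.1 YPC, never stuffed
print(ypc(boom_bust), stuffed(boom_bust))  # 5.0 YPC, stuffed 6 of 10 times
```

A per-play system like DVOA grades the boom-or-bust log much lower: six of those ten carries put the offense behind schedule, and one 51-yard run doesn't cancel that out the way it does in a simple average.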
Dave had a second question that he asked months and months ago. You might not think the 2004 Arizona Cardinals are that interesting, but this does a good job of showing why DVOA and the official NFL ratings can be so different.
Dave Brude: I happened to be looking at the team defense ratings for 2004 and was wondering how Arizona's non-schedule-adjusted yards per rush allowed could rank dead last while their non-adjusted DVOA for rush defense is slightly above average. What is going on that allows them to jump that much? Did they face a lot of running QBs?
Aaron: Reasons why the Arizona Cardinals rush defense in 2004 scores so much higher in DVOA than in actual yards per carry allowed:
1) The Cardinals caused 12 fumbles on running plays, more than twice the league average. (7 recovered by Arizona, 5 by the offense)
2) The Cardinals run defense that year stiffened in the red zone. They allowed just 2.49 yards per carry in the red zone, 10th in the league.
3) The Cardinals run defense that year stiffened on third down. Only 45% of running plays against Arizona on third or fourth down converted for a new first down, sixth-best in the league.
Now, interestingly, in 2005, the Cardinals allowed just 4.2 yards per carry, middle of the league, but their DVOA hardly budged. Why?
1) They caused only 4 rushing fumbles.
2) They allowed 2.88 yards per carry in the red zone, 21st in the league.
3) They were still very good on third down, however, allowing success on 43% of plays.
Richie Wohlers: When you give the strength of schedule ratings for a team for 2005, you just average all opponents' DVOA. Have you put much thought into this? Wouldn't it be possible for a team's schedule to look tough because they happened to play one hard division foe twice, and that one opponent really skews their schedule rating?
In an extreme example, a team could play 5 opponents with DVOA's of: 76%, -5%, -3%, -1%, -4%. These average out to +12.6%, seemingly a tough schedule. When in reality, they are playing 4 opponents that are below average, and one really tough opponent. They would likely go 4-1 over that 5-game stretch. "You can only lose each game one time." (The same thing would go for calculating strength of schedule using W-L of opponents.) Or, in reality, does it just not matter, and there isn't that much of a difference between the best and worst teams?
Aaron: To answer this question, I went and looked at average opponent done two ways: mean, which is how we do things right now, and median, which will lessen the effect of extremes on both ends.
For all but three teams in 2005, the mean opponent DVOA and median opponent DVOA were within 5% (meaning five percentage points of DVOA, not five percent of the value). Here are the three exceptions and one team very close to 5%:
[Table: TEAM | AVG | MED | AVG RK | MED RK]
Everyone pretty much knows about Jacksonville's season -- they played the Colts twice, plus the Seahawks, Steelers, Bengals, and Broncos. Baltimore had a DVOA of -5.2% (new upgraded DVOA) and every other opponent was below -10%, and that's why the median is so low.
The NFC East played the AFC East (good) and the NFC West (bad). Add Washington and New York, and Philly and Dallas end up with much higher median opponent DVOA than mean opponent DVOA.
I suppose I could run both numbers on the site. But since DVOA includes every single play of the season, the mean opponent DVOA version of schedule strength is a much better indicator of teams which will have a big difference between VOA and DVOA. Jacksonville is a good example. The Jags' DVOA (17.3%) isn't much lower than their VOA (20.0%) because in some games they get a big bonus for a super-hard opponent, and in some games they get a big penalty for a super-easy opponent, and it mostly cancels out.
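For the record, Richie's hypothetical five-game slate behaves exactly as he describes. A quick check of his numbers, run both ways:

```python
# Richie's toy schedule from the question (his numbers, not a real
# team's slate), with opponent DVOA expressed in percentage points.
import statistics

opponent_dvoa = [76, -5, -3, -1, -4]

print(statistics.mean(opponent_dvoa))    # 12.6: looks like a brutal slate
print(statistics.median(opponent_dvoa))  # -3: four of the five foes are soft
```

The mean says "tough schedule," the median says "four cupcakes and one monster," and both are telling the truth about different things.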
(By the way, these strength of schedule numbers are slightly different from those on the 2005 team efficiency page because they are calculated from the new, improved DVOA v5.0 introduced in the book. We'll have those numbers all up on the site in the next few weeks.)
John Fessenden: Loyal FO reader here to ask you about defensive DVOA ratings. First I'll say that I'm a huge fan of this site and its brand of analysis, but I have one quibble with how defensive ratings, as I understand them, are done. Do you think it's possible to interpret interceptions as "lucky" events for defenses, insofar as they rely on the opposing quarterback putting the ball in a spot where it can be intercepted?
I'll use a baseball analogy: a defense is like a pitcher in baseball, and a pitcher can have all the right peripherals (K:BB, GB/FB, etc.) and still be undone by bad luck on his BABIP (batting average on balls in play), simply because he has no control over where the batter hits the ball. Similarly, a defense can do all the right things -- stuff the running game, hold down completion percentage, generate a good pass rush -- but to produce interceptions it relies on the opposing QB throwing the ball somewhere it could possibly be intercepted. I certainly think a defense can be more prone to getting interceptions when it has a good pass rush, good coverage, and so on, but shouldn't those be the areas measured instead of interceptions, since those are the areas a defense can control? An interception requires luck, simply because if the QB throws an errant ball, the defense has no control over whether it is interceptable or not.
I also noted that as DVOA stands now, defensive rankings are much more unpredictable from year to year, and I wonder if that would still be true if you treated an interception simply as another incomplete pass. As a fan of the Ravens, my eyes told me that their defense last year was about as efficient as it was the year before; they simply weren't able to intercept as many balls, hence the lower ranking. I'm not sure whether skill or luck produced that lower ranking, but my intuition tells me that their "peripherals" would be pretty close over the past two years. Sorry for the rambling query, but this has been on my mind for a while, and I was hoping you could clear it up.
Aaron: Interceptions do in fact correlate from year to year, from a defensive standpoint. Not as well as other things, but much better than, say, fumbles. The newer version of DVOA does control the defensive unpredictability by lowering the penalty for interceptions and treating different interceptions differently -- i.e., picking off a pass at the line is better than picking one off 40 yards downfield. But the interceptions have to be in there to make the system more accurate.
Having the same thought as you, I once tried to stick interceptions into the team projection system, thinking that teams with lots of interceptions would tend to regress in defense the next year, and teams with very few interceptions would improve. It doesn't work. (This is a part of the new team fantasy defense projections in PFP 2006, as teams will regress towards the mean in interceptions, but it doesn't help in projecting actual team defensive quality as judged by DVOA rather than fantasy points.)
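The regression-toward-the-mean idea behind those fantasy defense projections is simple to sketch. Note that the league mean and the year-to-year correlation below are placeholder assumptions for illustration, not FO's actual fitted values:

```python
# Hedged sketch of "regress interceptions toward the league mean."
# Both parameter defaults are assumed placeholders, not FO's numbers.
def project_ints(last_year, league_mean=16.0, year_to_year_r=0.35):
    # Keep only the correlated fraction of last year's deviation
    # from the league mean; the rest is treated as noise.
    return league_mean + year_to_year_r * (last_year - league_mean)

print(project_ints(24))  # a 24-INT defense projects well below 24
print(project_ints(8))   # an 8-INT defense projects well above 8
```

With these placeholder parameters, a 24-interception defense projects to about 18.8 and an 8-interception defense to about 13.2 -- both pulled most of the way back to the mean, which is why last year's interception total tells you so little about next year's defensive quality.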
People interested in these issues should read this article from two summers ago.