26 Mar 2013
by Danny Tuccitto
I'm going to come right out and say up front that if you're an avid member of the NFL stats community, you're likely to find the content in this column very familiar. People have quite literally been attempting to measure value and efficiency in the NFL draft for decades. For instance, Tony Villiotti of Draftmetrics.com published a study back in 1989, updated it in 2010, and now writes about it on both his site and National Football Post.
There's also, of course, the Jimmy Johnson draft value chart, which has produced an entire cottage industry of statheads debunking it. (Seriously, check out these search results.) One thing that's often missed, or at best glossed over, in most of these "The Jimmy Johnson chart is wrong!" posts is that the chart was based on a historical analysis of draft-day trades. Rather than summarizing the actual value of players taken at a specific pick, its purpose from the outset was to summarize the trade market. The whole point was for Dallas to use trading tendencies exhibited by the rest of the NFL to their advantage. And you know what? Not only did it work (see "Herschel Walker trade"), but a study by Cade Massey and Richard Thaler in 2005 found that it remained to that day an "extraordinarily" accurate representation of the trade market.
Two related internet posts that thoughtfully explored differing notions of "trade value" and "player value" appeared on the now-defunct Pro Football Reference (PFR) blog: In 2007, Doug Drinen did a series responding to the Massey-Thaler study, and Chase Stuart made his own performance-based chart in 2008. (He's recently done an updated version on his new Football Perspective site.) Separately, they came to the conclusion that both types of value can be right, or close to right, without ripping a hole in NFL space-time.
Bottom line: People have spent enough time and energy trying to establish "true" draft pick values, and ultimately they've found very similar results. So, it's high time that we leave to general managers whether they want to use the Jimmy Johnson trade chart, one of the myriad "true" value charts out there, both, or neither.
At this point, I'd rather use a draft efficiency model for entertainment purposes, catering to FO readers more than NFL general managers. So what I hope to accomplish over the course of the next month is a series that, as objectively as possible, answers interesting questions related to NFL draft history. For instance, from 1970-2007, which franchise has made the most efficient draft picks? What was the most efficient draft class for any franchise? Which university has produced the most efficient picks? Which draft added the most value at quarterback? Which draft had the least-efficient first round? Which team added the most value in the 1989 draft? And so on.
With all of that said, even if we're now beyond establishing a "true" value for draft picks, we still need a model worthy of answering all those interesting questions about draft history. For reasons I'll detail shortly, the main thing we need to improve is the model's generalizability. And in order to understand why that's the case, I do need to provide a quick refresher on (or introduction to) how all these types of models work.
The basic idea is that you use PFR's career approximate value (AV) metric* (which weighs more valuable seasons more heavily) to find the average career values of players taken at each draft slot, and then have a computer find the logarithmic curve that best fits the data.
At that point, you have an actual value (i.e., career AV) and an expected value (i.e., what the curve formula says career AV should have been), which means you can then calculate whatever kind of draft efficiency measure you like based on an "actual minus expected" framework. In that 2011 piece on Niners Nation, I unimaginatively called my version of such a metric "value above expectation (VAE)." Equally unimaginatively, I called a separate metric "return on investment (ROI)," a term any econometricians out there should recognize. The formula I used for ROI was (actual AV - expected AV) / expected AV. Essentially, the difference between VAE and ROI is analogous to the difference between DYAR and DVOA: like DYAR, VAE is a measure of total value added, whereas ROI, like DVOA, is a measure of percentage value added.
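To make the framework concrete, here's a minimal sketch of the curve-fit-then-compare approach. The per-pick averages below are hypothetical stand-ins (the real models use 1970-2007 data), and the helper names `expected_av`, `vae`, and `roi` are my own labels, not code from the original analysis:

```python
import numpy as np

# Hypothetical average career AV by overall pick, for illustration only.
picks = np.arange(1, 223)
avg_car_av = 60 * np.exp(-picks / 80) + np.random.default_rng(0).normal(0, 1, picks.size)

# Fit career AV = a * ln(pick) + b -- the logarithmic curve described above.
a, b = np.polyfit(np.log(picks), avg_car_av, 1)

def expected_av(pick):
    # What the curve says a pick at this slot "should" produce.
    return a * np.log(pick) + b

def vae(actual_av, pick):
    # Value above expectation: total value added relative to the curve (like DYAR).
    return actual_av - expected_av(pick)

def roi(actual_av, pick):
    # Return on investment: percentage value added relative to the curve (like DVOA).
    return (actual_av - expected_av(pick)) / expected_av(pick)
```

The "actual minus expected" framework then reduces every pick evaluation to two numbers from the same curve.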
My last model differed from Stuart's in only one fundamental way. Instead of using a player's career AV, I divided career AV by the number of years he was active in the league (i.e., CarAV/Yr). The main reason for this was ease of interpretation. At its core, AV describes a player season, and we know some benchmarks for the value of that season (e.g., Pro-Bowlers are around 10, MVP candidates are closer to 20, benchwarmers are closer to zero, etc.), so I wanted to keep the numbers on that scale. If I say that Terrell Davis added 62 points worth of career AV above what was expected from the 196th pick, even I'm not sure what that means without proper context. However, if I say Davis added nine points worth of career AV per year, we can readily interpret that to mean he was better than the average 196th pick by about one Pro-Bowler. Likewise, if I say that the Cincinnati Bengals added about 15 points of career AV per year with their 1971 draft, it means they added the equivalent of one MVP candidate above expectation.
For this latest iteration, though, I've made two necessary methodological improvements, the story of which starts with the following chart:
What you're looking at is the trajectory of actual CarAV/Yr for draft classes from 1970 to 2007. Because the number of picks in a given draft has changed over the years, it's more precisely the trajectory of actual CarAV/Yr among the first 222 picks. As you can see, the average draft pick has produced more value over time: The first 222 picks in 1972 produced about 380 CarAV/Yr in the aggregate, whereas the first 222 in 2006 totaled over 600 CarAV/Yr.
Similar to many other situations in sports analytics, this upward trend is a problem for a draft efficiency application that seeks to make comparisons across eras. Without era-adjusting** CarAV/Yr, picks in later years will appear to have higher VAEs and ROIs when in reality it's just an illusion created by the above trend.
Therefore, the first improvement I made in model version 3.0 was to create an index with CarAV/Yr in 1970 equal to 100, and adjust each player's CarAV/Yr based on the index associated with his draft year. For instance, the index for 1985 was 89.9 (i.e., lower than 100), so Bruce Smith had his CarAV/Yr adjusted upward from 7.7 to 8.6. In contrast, the index for 1993 was 120.1 (i.e., higher than 100), so Drew Bledsoe had his CarAV/Yr adjusted downward from 7.4 to 6.1. Essentially, Smith becomes a more valuable pick for having been a good player in a bad draft, while Bledsoe becomes a less valuable pick for having been a good player in a great draft.
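A minimal sketch of that index adjustment, using the two index values cited above and assuming the arithmetic is adjusted = raw × 100 / index (which reproduces the Bruce Smith figure to within rounding):

```python
# Era index: a draft year's CarAV/Yr relative to 1970 (= 100).
# The two index values below are the ones cited in the text.
index_by_year = {1985: 89.9, 1993: 120.1}

def era_adjusted(car_av_per_yr, draft_year):
    # Dividing by (index / 100) inflates values from weak draft years
    # and deflates values from strong draft years.
    return car_av_per_yr * 100 / index_by_year[draft_year]

smith = era_adjusted(7.7, 1985)    # Bruce Smith: adjusted upward
bledsoe = era_adjusted(7.4, 1993)  # Drew Bledsoe: adjusted downward
```

The same one-line transformation applies to every player once the full 1970-2007 index is built.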
The second improvement I made was to randomly split the 38-draft sample (i.e., 1970-2007) in half, and visually check whether there was a large difference in fit between the two 19-draft models. Again, for the purposes of being able to generalize across eras, I wanted to remove the (at this point very mild) concern that VAEs and ROIs were related to differences in draft years.
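The split-half check can be sketched as follows. The per-pick values here are fabricated for illustration, and `log_fit_r2` is a hypothetical helper, not the original code; the point is only the mechanics of randomly halving the 38 draft years and comparing fits:

```python
import numpy as np

rng = np.random.default_rng(42)

# Randomly split the 38 draft years (1970-2007) into two 19-draft halves.
years = np.arange(1970, 2008)
shuffled = rng.permutation(years)
half_a, half_b = shuffled[:19], shuffled[19:]

def log_fit_r2(picks, values):
    """Fit value = a*ln(pick) + b and return the R^2 of that fit."""
    a, b = np.polyfit(np.log(picks), values, 1)
    predicted = a * np.log(picks) + b
    ss_res = np.sum((values - predicted) ** 2)
    ss_tot = np.sum((values - values.mean()) ** 2)
    return 1 - ss_res / ss_tot

# Hypothetical per-pick averages for each half-sample
# (the real models use actual adjusted CarAV/Yr for those years).
picks = np.arange(1, 223)
values_a = 8 * np.exp(-picks / 90) + rng.normal(0, 0.3, picks.size)
values_b = 8 * np.exp(-picks / 90) + rng.normal(0, 0.3, picks.size)

# A large gap between the two half-sample R^2 values would signal that
# model fit depends on which draft years it was trained on.
r2_a, r2_b = log_fit_r2(picks, values_a), log_fit_r2(picks, values_b)
```

If the two half-sample fits land close together, the full-sample curve can be trusted to generalize across draft years.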
In the end, I tested six different models: two half-sample models and one full-sample model using PFR's raw CarAV metric, and three analogous models using the new adjusted CarAV/Yr metric I just described. Here are the R^2 values for these six models:
| Sample | Career AV | Adjusted Career AV Per Year |
Looking at the row related to the full models, we see excellent fit using both value metrics. As I alluded to in the intro, pretty much any efficiency model you'll find out there explains around 90% of the variance in average draft pick values (including earlier versions of my own model). The more important findings for our purposes are that (a) all three models using adjusted CarAV/Yr performed better than their raw CarAV counterparts, and (b) the three adjusted CarAV/Yr models were more similar to each other than were their raw CarAV counterparts. Essentially, with these two results, we've achieved a level of generalizability across draft years that's more to my liking.
For the sake of transparency, here's the graph and equation for the full model using adjusted CarAV/Yr:
Just to recap, we now have an equation to give us era-adjusted expected player values, and we have era-adjusted actual player values, so we can now calculate era-adjusted VAEs and ROIs to compare draft picks from 1970 to 2007 in an era-neutral way.
In the next installment of this series, I'll exhaust a few thousand words on draft efficiency by franchise, so consider this example somewhat of a tease. Here are the franchises with the eight worst era-neutral ROIs (i.e., percentage value added) over the course of their picks from 1970 to 2007:
Right out of the box, our model passes the smell test. Come back next time for all the pungent details.
*I've deliberately side-stepped the entire debate about the validity of AV as a measure of player-season value. My view in the context of the present analysis is analogous to Churchill's famous quote about democracy: It's the worst measure of value across football positions and NFL eras ... except for all the others.
**I fully realize I'm technically "year-adjusting," not "era-adjusting." Just using the latter phrase because it's accepted shorthand.