NFL Drafting Efficiency, 2010-2019
Guest column by Benjamin Ellinger
Every year, NFL fans look back on the draft and immediately wonder: was it a good draft for my team? Pundits compare who was drafted to the consensus draft boards and are impressed when general managers make the obvious picks. Teams that have lots of high draft picks get excellent grades (unless they deviate too much from the conventional wisdom). The Seattle Seahawks get poor draft grades because nobody understands why they made the picks they did.
To make clear what my biases are: I'm a Seattle Seahawks fan. I would not trade Pete Carroll and John Schneider for Bill Belichick. Russell Wilson is the best quarterback in the NFL. Bobby Wagner is the best linebacker since Ray Lewis. Throwing the ball on the last play of Super Bowl XLIX was the right call. The Beastquake is the greatest run in NFL history. If my life depends on one catch, I'm praying that Doug Baldwin or Steve Largent is the target. So, if I tell you that the Seahawks are better at drafting players than all other teams over the last 10 years, you would be right to be skeptical. The data says I'm right, though.
But how can we really measure, objectively, how good a team is at drafting players? First, we must consider how many draft picks a team has and how good those picks are. This is the team's "draft capital." A stockpile of high picks may be the work of a clever GM who is good at making trades, but a GM is only good at drafting if he gets more out of those picks than an average GM would. To see how much a team gets out of a draft, we need a measure that assigns a value to every player, regardless of position, based on actual on-field performance (not potential, not talent, not a 40-yard dash time). We can then compare how much total value a team got out of a particular draft to how much we would expect an average team with the same amount of draft capital to get.
So here's the method. First, to calculate draft capital, let's use Chase Stuart's draft value chart (generally considered superior to the original Jimmy Johnson chart). Each draft pick, from the No. 1 pick down to the last pick of the draft, is given a value based on the average amount of career approximate value (CarAV) that pick has generated. This is a solid approach, as the entire point of approximate value is to have a cross-position single number to compare any player's value to the team to any other player's value.
However, since we will want to compare drafts from different years to each other, we need to normalize these values so we can compare different years fairly. So each draft position is converted from an expected CarAV into a percentage of the total expected CarAV of the entire draft. We'll use this trick throughout this method, calculating the percentage share of an entire draft class (or multiple classes when aggregating multiple years).
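To make the normalization concrete, here is a minimal Python sketch. The chart values and the `stuart_values` name are illustrative placeholders, not Chase Stuart's actual numbers:

```python
# Illustrative expected-CarAV values by overall pick (NOT the real chart,
# which runs down to the last pick of the draft).
stuart_values = {1: 34.6, 2: 30.2, 3: 27.6, 4: 25.7}  # ... down to pick 256

total_expected = sum(stuart_values.values())

# Each pick's value becomes its share of the whole draft's expected CarAV,
# so drafts from different years can be compared on the same scale.
pick_share = {pick: av / total_expected for pick, av in stuart_values.items()}
```

The shares always sum to 100% of a draft class, which is what makes cross-year comparisons fair.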
The following table shows each team's percentage of draft capital in each of the last 10 years, as well as the totals of the last decade and the last five years. In short, the teams at the top of the table should have found the most talent in these drafts. Values shown in gold are in the top 1%, while those in green are in the top 10%. Those in red are in the bottom 10% and those in grey are in the bottom 1%.
Table 1: Total Draft Capital, 2010-2019
Percentage of CarAV expected to be generated based on draft picks used (using Chase Stuart's expected AV draft chart).
Cleveland utterly dominates the competition for draft capital, with the two top drafts overall (6.05% in 2017 and 7.18% in 2018, which are actually the top two for the last 20 years) and six drafts in the top 10%, while only having two below-average drafts. (The average is 3.125%, which is exactly 1/32nd of 100%). That means that from 2015 to 2018, Cleveland effectively had three extra entire drafts.
It helps to be bad for a long time if you want a lot of draft capital, but how you finish the season doesn't determine everything. Teams that trade away picks for players will lose draft capital, but that's not a bad thing if those players are worth it. Teams like Seattle and New England still got a moderate amount of draft capital (23rd and 24th), despite never getting high (or even middle) picks because they continually traded down for multiple lower picks to increase their total draft capital. Chicago's 2019 draft is the lowest amount of draft capital in the last 20 years, while the Raiders' 2019 draft was the third-highest in the last decade. I'm sure these facts are related somehow.
The Browns are followed in total draft share by the 49ers and Buccaneers. If you only look at the last five years, the top three teams stay the same, and Cleveland becomes even more dominant. They had as much draft capital as the bottom two teams (Philadelphia and Kansas City) combined.
How much return has each team gotten from its draft capital, though? We can answer that by calculating the percentage of total CarAV in each draft class that was produced by each player. Add up all those values for the players a team drafted, and then you have the total draft return for that team (and year). Note that this is not relative to how much draft capital the team had (that step comes next), but is relative to the overall quality of the year being looked at. This is necessary, because not very much CarAV has been generated by players from the last couple of years yet, and we don't want to think that drafts from years ago were better just because the players have been around longer.
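That calculation can be sketched the same way; the helper name `draft_return_share` and the three-player class below are made up for illustration:

```python
def draft_return_share(class_carav, team_players):
    """class_carav: CarAV for every player in one draft class.
    team_players: the subset of those players drafted by one team.
    Returns the team's share of the whole class's production."""
    class_total = sum(class_carav.values())
    return sum(class_carav[p] for p in team_players) / class_total

# Toy class worth 100 CarAV in total; one team drafted players A and C.
carav = {"A": 40, "B": 25, "C": 35}
share = draft_return_share(carav, ["A", "C"])  # 0.75 of the class
```

Because the result is a share of its own class, a 2012 haul and a 2018 haul can sit side by side even though the older class has produced far more raw CarAV.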
Of course, the numbers for recent seasons are going to change a lot in a year or two. Kansas City's 2017 draft is going to keep looking better and better as Patrick Mahomes keeps racking up approximate value and catching up for his "missing" rookie season. Numbers from years ago are a lot more stable and unlikely to change much. These are the teams that actually did find the most talent in each draft.
Table 2: Total Draft Return, 2010-2019
Percentage of CarAV generated by drafted players (relative to total CarAV in the draft).
Who has the top season by draft return in the last 10 years? The Seattle Seahawks, of course! The Seahawks came out of the 2012 draft with the best player in the class (Russell Wilson) … as well as the second-best player in the class (Bobby Wagner). You can see the string of great drafts from 2010, 2011, and 2012 for the Seahawks that propelled them to two Super Bowls shortly thereafter, making Seattle the only team with more than one year above 5%. After Seattle in 2012, Indianapolis and Baltimore have the next two best drafts, both coming in 2018 when they combined to draft Quenton Nelson, Darius Leonard, Lamar Jackson, and Orlando Brown.
Seattle has the top average draft return over the past decade, followed by Baltimore and Cleveland. Baltimore has a good reputation for drafting, but remember that Cleveland has had a huge amount of draft capital, so it would be embarrassing if they were not in the top three. Chicago, the Chargers, and the Jets are at the bottom, which certainly fits with the struggles they've been having. It also shows that Philip Rivers hasn't really had a lot of help in the latter part of his career.
Over the last five years, Baltimore and Cleveland remain at the top, with Indianapolis coming in right behind. Seattle has dropped off a bit, but is still above average. Cincinnati and Philadelphia haven't found a lot of new talent recently, as they drop to the bottom three. But as sure as the sun rises, the Jets remain at the bottom for the last five years as well, without a single year even approaching average.
The most surprising numbers here (although maybe not to Tom Brady) are the last three years for New England. Their 2017 return is the worst for the whole decade at just 0.71%, but their results in 2018 (at 2.07%) and 2019 (at 1.54%) have been terrible as well. That puts New England at sixth-worst in the last five years (they would easily be the worst for the last three years). Has the master lost his touch? Or are there a bunch of young players in New England about to become valuable starters?
Return vs. Capital
So now we come to the true test of drafting ability. How much return did each team get relative to the draft capital it had? We can find out by dividing each team's draft return by its draft capital in each year, then expressing that as a percentage. A score of 100% means that teams got the talent they were expected to get given how much draft capital they had. That's a league-average GM in drafting ability.
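The division itself is trivial; as a sketch, a hypothetical team that produced 4.0% of its class's CarAV from a league-average 3.125% share of draft capital scores 128%:

```python
def return_vs_capital(return_share, capital_share):
    """Draft return share divided by draft capital share, as a
    percentage. 100% is a league-average drafting result."""
    return 100.0 * return_share / capital_share

# 4.0% of the class's CarAV on 3.125% of the capital -> 128%.
score = return_vs_capital(0.040, 0.03125)
```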
Table 3: Draft Return vs. Draft Capital, 2010-2019
Percentage of CarAV generated by drafted players (relative to total CarAV in the draft), divided by percentage of expected CarAV from draft picks used.
After adjusting for the amount of draft capital used, the Seahawks remain in the No. 1 position. By a mile. At a 135% average over the last decade, they are 15 percentage points higher than the next two teams, Green Bay and Dallas. Schneider wins! (Although Carroll deserves some credit for developing those players, of course.) The Seahawks have the two best drafts of the last decade (2011 and, of course, 2012). Even if you only look at the last five years, they are still doing well at tenth in the league (Kansas City and Minnesota are now on top, while Dallas remains in third). No, 2019 wasn't good at 74%, but when Marquise Blair, L.J. Collier, and Travis Homer all become starters, it will start looking much better, right?
If you look at the top teams on this list, you see a lot of well-run organizations with a lot of stability: Seattle, Green Bay, Dallas, Pittsburgh, New Orleans, Baltimore, Kansas City, New England. It seems like Dallas has done a great job of drafting, but they haven't really had the success these other teams had. Maybe their coaching wasn't that good? We can also see how badly New England has drafted in the last three years. It doesn't look quite as bad as the raw numbers, but it's still very bad, easily the worst in that time. But over the last five years, they are right about at average, so maybe things aren't that bad?
At the bottom of the list you have Tampa Bay (86%), Cleveland (78%), and the Jets (74%). Jets fans are not surprised, I'm sure, with only two above-average drafts in the last decade but three in the bottom 10%. They are at the bottom of the list for the last five years as well. Tampa Bay has struggled in the draft, not getting what they should have out of their third-best draft capital, while Cleveland has squandered the best draft capital of the decade. At least the 2019 draft is looking good for Cleveland, but that's only because they had very little draft capital (very unusual for them). The actual raw return is still below average. The factory of sadness is still open for business.
Consistency and Variance
But does this show that particular teams are actually better at drafting over time than others? At first glance, the results still look awfully random. The year-to-year correlation for each team's return vs. capital results in Table 3 is just 0.081 -- there is practically zero relation between any team's results from one season to the next. This would imply that good or bad drafting results can be attributed almost entirely to luck. However, things change ever so slightly if we look at the big picture. If we compare any team's draft results in any given season to their average draft results of their other nine seasons, we get a correlation of 0.122. If we use the median of the other nine years instead of the average -- which should limit the impact of outliers like Seattle in 2012 -- we get an even stronger correlation of 0.145. This suggests that while the data is still mostly random noise, there is some faint evidence that some teams really are better (or worse) at drafting talented players than their competition. (A future expansion of this analysis might look at correlations by administration, rather than by team, to account for teams that have changed front offices over the past decade.)
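Both correlations can be computed as follows; the random toy matrix here merely stands in for the real 32-team-by-10-season values from Table 3:

```python
import random
import statistics

def pearson(xs, ys):
    # Plain Pearson correlation coefficient, enough for this sketch.
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy stand-in for Table 3: 32 teams x 10 seasons of return-vs-capital.
random.seed(0)
scores = [[random.gauss(100, 25) for _ in range(10)] for _ in range(32)]

# Year-to-year: pair each season with the same team's next season.
prev = [row[j] for row in scores for j in range(9)]
nxt = [row[j + 1] for row in scores for j in range(9)]
r_year = pearson(prev, nxt)

# Each season vs. the median of that team's other nine seasons.
vals, medians = [], []
for row in scores:
    for j, v in enumerate(row):
        vals.append(v)
        medians.append(statistics.median(row[:j] + row[j + 1:]))
r_other = pearson(vals, medians)
```

On pure noise like this toy data, both correlations hover near zero; the article's point is that the real numbers sit slightly but consistently above that.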
We can also analyze variance using an ANOVA test. The shorthand explanation of an ANOVA test is that we compare how much variation we see between teams to how much variation we see for each individual team over the ten years in the sample. The math is a little complicated, but in the end it gives us a p-value of 0.034 (based on a generic F-distribution) for the chance that the variation between teams in this data set is due to randomness (about a 1-in-30 chance). This passes the generic 0.05 threshold for statistical significance, but not by a lot. Still, this is moderately strong evidence that drafting is not completely random.
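The core of the one-way ANOVA math is simple enough to sketch by hand; this toy example uses two fake "teams" rather than the real table, and the p-value would then come from the F-distribution with (k-1, n-k) degrees of freedom:

```python
import statistics

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA: between-group mean square
    divided by within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - statistics.mean(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Two toy "teams" at clearly different levels produce a large F.
f = one_way_anova_f([[90, 95, 100], [120, 125, 130]])
```

A large F means the spread between teams dwarfs the spread within each team's own seasons, which is exactly the signature of skill rather than luck.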
If you are familiar with ANOVA tests, though, you might immediately be worried because using a generic F-distribution to get p-values can be a problem if your distribution is not fairly normal (or if the standard deviations within each group are too different, but that's not the case here). So what does our distribution look like? If we look at a histogram of the Z-scores, we can see how things get skewed by the top 20 or so drafts. The purple bar is close to a 0.0 Z-score, while the different shades of blue represent about a standard deviation each. You can see the 2011 and 2012 Seahawks drafts on the far right, along with the 2013 Green Bay draft. There is definitely a strong tail to the high end, but overall the distribution is moderately normal in shape.
To be safe, we can run the ANOVA test using medians (of each team, and of the overall data set) instead of means when calculating the variances at each step. Again, doing this somewhat neutralizes the effects of outliers, particularly when the outliers are biased in a particular direction. When we do this, we get a p-value of 0.000072, which corresponds to over a 1-in-13,000 chance that the variation between teams is entirely due to randomness. This is about as close as we could ever expect to get to a smoking gun telling us that there is real skill in drafting.
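The exact median-based construction isn't spelled out beyond that description, but one version consistent with it simply replaces every mean in the sums of squares with a median:

```python
import statistics

def median_anova_f(groups):
    """A median-based variant of the ANOVA F statistic: the usual sums
    of squares, but with each mean replaced by a median to blunt the
    influence of outlier drafts. One plausible reading of the method
    described in the text, not a standard named test."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = statistics.median([x for g in groups for x in g])
    ss_between = sum(len(g) * (statistics.median(g) - grand) ** 2
                     for g in groups)
    ss_within = sum(sum((x - statistics.median(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```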
A reasonable interpretation of all this is that while there is a good amount of randomness in how well teams draft, over the course of a decade, the skill of the drafting GM is fairly important in how well a team does. Of course, one could also make the case that the dominant factor here is the coaching staff's ability to get the most out of the players who are drafted. It's likely to be a combination of the two.
So clearly, the Seahawks are just vastly more skilled than other teams at drafting over the last ten years, right? As much as I would like to believe that, there is a reason the p-value is only a little under 0.05 when you don't attempt to neutralize unusual outliers. The fact that the p-value drops so much when those outliers are neutralized likely means that those outliers have a lot of luck in them. But even then, how could the Seahawks get the top two drafts of the decade, back-to-back, without that being evidence of incredible skill in drafting (rather than just good skill and a heavy dose of luck)?
Well, when you have a skewed distribution, there is usually a reason why, and that reason is right in front of us. When we use CarAV as our measurement of value, it inherently means that the best players have more upside than the worst players have downside. If you get a Russell Wilson or a Tom Brady, their value is massive due to positional value, longevity, lower risk of injury (especially in the modern NFL), etc. But if you draft a JaMarcus Russell, there is only so much damage he can do. If the Raiders had been forced to start Russell for every game for a decade, then he might be a Tom Brady-level outlier. Instead, he gets benched and the damage is limited to just wasting a draft pick and having a bad season or two.
What this all tells me is that drafting well is a lot of luck, mixed with some skill and an extra layer of a random "jackpot" on top (the one or two later-round picks each draft that become unexpected Hall of Famers). This would explain the data we see (including the outliers) pretty well. The Seahawks are probably pretty good at drafting, but also had some crazy luck in hitting three jackpots in a row (Wilson, Wagner, and Richard Sherman). What this should tell NFL teams is that you need to roll the dice as many times as you can (trading down for additional value whenever possible), get the best GM you can possibly find, and get the top coaches in the league to develop the talent you draft -- which is what we already see consistently good teams generally do.
Benjamin Ellinger is a program director at the DigiPen Institute of Technology. He teaches programming and game design following a long career as a game developer.
#1 by theslothook // Jun 11, 2020 - 11:45am
In graduate school, I adapted Eugene Fama and Ken French's "Luck vs. Skill" paper to the NFL draft. I found very little evidence to suggest the results of drafting were due to skill. If there was skill, it was very minimal. And this period covered 2012 all the way back to 1978.
To quote Fama, "Over a wide enough sample, there will be outliers just by chance. But we remember them as winners"
#2 by mehllageman56 // Jun 11, 2020 - 12:08pm
In general, I agree with you. But the Jets being at the bottom is not due to luck. The last two general managers have been atrocious drafters, with Maccagnan a little better than Idzik, who whiffed on a draft with 12 picks, with only Quincy Enunwa being considered worthy of a second contract IN THE NFL. You read that right, 11 of those 12 picks are out of the NFL right now. One of them, Ik Enemkpali, helped destroy the 2013 draft by breaking Geno Smith's jaw in the locker room. Just so you don't gloat that much, Mr. Ellinger, Idzik came over to ruin the Jets from your favorite team, the Seahawks. He went on to work for the Jaguars, presumably as the cap specialist for the team that gave Nick Foles 88 million dollars. The last decent Jets draft was 2011, Mike Tannenbaum's last with the team. As much as people here gripe about Tannenbaum, he is still way better than what the Jets have had the last five years, until Douglas took over.
#3 by mehllageman56 // Jun 11, 2020 - 12:18pm
I have to admit I was wrong; 2012 was Tannenbaum's last draft, and it ended up being not that great. Only Demario Davis remains in the NFL from that draft. Quinton Coples was OK for a couple of years until the Jets decided he needed to play outside linebacker since they kept drafting interior D-linemen in the first round -- thank god that's over.
#5 by theslothook // Jun 11, 2020 - 2:26pm
I would respond by saying that just as there are great winners we remember, there are also extreme losers we forget.
This subject always ruffles feathers, both in finance and the NFL draft.
There's also something people miss. I'm not saying there is no skill in drafting. Evaluators spend hours and hours poring over metrics for a reason. However, the inescapable conclusion is that calling someone a good drafter means better than average, and by definition not everyone can be above average. It's the relative skill that matters, not the absolute skill.
I do want to be clear that I do think there is some skill. I think Ozzie Newsome is a skilled drafter. But it shows up only on the margins, getting a little more here and there rather than one giant perpetual home-run machine.
#44 by coremill // Jun 12, 2020 - 10:31am
I think this is a crucial point. That no NFL GM can consistently exceed the average performance of NFL GMs doesn't mean there is no skill involved in drafting and that returns are all luck. It could be that there's a lot of skill involved, but that everyone at the top level is nearly equally skilled, or at least that the differences in skill are relatively small compared to the huge amount of variance. If you pulled a random Joe off the street and had him start drafting, it's likely he would do very poorly.
#45 by sbond101 // Jun 12, 2020 - 10:42am
"or at least that the differences in skill are relatively small when compared to the huge amount of variance."
This is correct -- and it's exactly the same as the stock-picking problem. To actually judge someone positively or negatively would take a very large sample to tune out the noise. Very few GMs draft enough players to begin to look at this and get an answer that overcomes the noise.
The right description of the situation is that we don't know very much about GM draft skill, and that within the parameters of general competence it doesn't much matter -- rather than that skill does not exist.
#60 by PirateFreedom // Jun 12, 2020 - 5:29pm
the people involved with scouting and evaluating all talk with each other, so there is probably some consensus building that makes drafting efficiency more equal than it would be if 32 teams worked in true isolation.
#12 by Led // Jun 11, 2020 - 5:24pm
The other confounding variable is coaching/player development. Teams that are better at coaching up players and taking advantage of their strengths will, all things being equal, get better returns on their draft picks. Also, teams with better rosters are better able to put individual players in defined roles that maximize their value rather than having to ask them to do things that expose their weaknesses. So I think there's a chicken-and-egg problem in evaluating drafting that is difficult to resolve.
#21 by SeaRhino // Jun 11, 2020 - 10:07pm
I'm the author (if you didn't guess from the admission of Seahawks bias). I think my analysis shows that there is some skill (but tons of luck) where other analyses did not because of the method of using "ratio of percentage of draft AV to draft capital" to measure how good a draft is. If you know of an analysis that did something like this, I would love to know about it.
#23 by SeaRhino // Jun 11, 2020 - 10:13pm
The Seahawks are still #10 for the last five years, so they are still doing quite well. The Chiefs are just absolutely crushing it for the last five years (like the Seahawks did 6-10 years ago). Wouldn't shock me if they win 2-3 more Super Bowls, although the cap will get them eventually and perhaps prevent that.
#54 by Sixknots // Jun 12, 2020 - 1:25pm
Almost all of the Seahawks' advantage comes in the first three years of the study.
Interesting that those years coincide with the tenure of Scot McCloughan as a senior personnel executive. If memory serves, the 49ers had previously drafted well with him as GM as did the Redskins in a later GM role.
Edit: Maybe not so good with the Skins.
#57 by Bowl Game Anomaly // Jun 12, 2020 - 3:37pm
Maybe McCloughan didn't draft well with the Skins, but he was an excellent GM overall. Jay Gruden was the coach for 6 years -- in the 2 years with McC, the team was over .500 both years. All 4 years without McC, the team was below .500.
Put another way, Gruden with McC went 17-14-1, .546. Gruden without McC went 18-35, .340.
#25 by SeaRhino // Jun 11, 2020 - 10:16pm
I wanted to include UDFAs in this (they are 20%+ of the AV generated in the league), but it's much harder to get complete data on them, especially figuring out which team actually signed them initially and things like that.
#9 by Joseph // Jun 11, 2020 - 2:53pm
The other interesting thing about this study:
On one hand, it's striking how much one draft pick that hits (Wilson, J.J. Watt, Donald, Sherman, Mahomes, etc.) can impact the score. But how about the rest of the picks? In other words, if a GM selects a player who ends up a HOFer or even considered for the HOF, then that's a great pick. But if you blow the other 6 (on average, for a 7-round draft), you won't be able to build a good team around that superstar. Contrast that to a class like the 2017 Saints with 2 Pro Bowlers, plus another All-Pro, plus two other starters, plus two other part-time players.
Obviously either result is really, really good. But I would be interested in Mr. Ellinger running his analysis again, with either a harmonic mean, median, etc. for the value in each team's draft class, at least for 2010-2014. This would show whether a superstar is influencing the score for his whole class, or if there are several good players who are influencing the grade.
The latter part of the article shows how much outlier classes seem to indicate that drafting results are practically random, versus median results showing some level of skill/cohesion/coaching coupled with some luck. I would say that there is some skill in choosing players who fill needs, fit your team and its systems, etc., coupled with good injury luck, plus a player who everyone thinks is good working himself into a player who is great.
All in all, great article.
#11 by DraftMan // Jun 11, 2020 - 4:34pm
Bill James had a methodology for compiling lists of great baseball families so that they weren't simply overshadowed by singularly great players who put up more value than any of them combined. His idea was to count the production of the most valuable family member (by his "Win Shares" construct, or in this case AV) once, the second-most-productive player counts double, then if there's a third player who put up production, it counts triple, and so on.
Something similar to that approach would make sense as the starting point for a depth-focused evaluation of a draft class. You might have to work out snags like: if a team drafted 12 players one year, five of them never played a down in the NFL, and one put up -2 AV, would the negative guy count as the 7th-best player in the class (for -14 weighted points), the 12th-best (for -24), or would you simply disregard negative contributions altogether? Unlike the harmonic mean, it also doesn't produce strange results with small sample sizes: the harmonic mean of a draft class consisting of Ricky Williams and nobody else is going to compare quite favorably to one that's spread across several serviceable depth guys, contrary to what you might have wanted a method like that to prefer.
#13 by Joseph // Jun 11, 2020 - 6:10pm
Well, of course small sample size biases a lot of things. And, for the methodology that I proposed, more than 8 or 9 draft picks might also bias the numbers in a negative way, simply b/c some of them would have almost no shot to make the roster.
The way that I understand AV, negative numbers are not possible.
Maybe a better way to evaluate any class that is in the top ~25% overall is to take the median or harmonic mean of the top 5 guys in the class, or something like that. Because any class that nets several Pro Bowl level players is almost always going to benefit the team more than a class with one superstar, except if the superstar is a QB.
#30 by SeaRhino // Jun 11, 2020 - 10:35pm
To do an analysis like this at all, you have to have a cross-position number that captures the value of the player to the team. AV is the best attempt at this that I know of, despite its flaws (and I certainly don't know how to improve it). Do you know of some other such stat that you think would work better? I've basically used it out of necessity...
#32 by theslothook // Jun 11, 2020 - 11:13pm
When I did my analysis, I had to invent a new metric. It basically did this...correlate variables to predict hall of fame probability. That probability became the metric of choice and it was standardized across positions and time.
Look, I admire the heck out of what you are doing. This is a hard problem. There is a reason I chose it for my graduate thesis.
#34 by theslothook // Jun 11, 2020 - 11:52pm
I took the set of draft picks from 1978 to 2007 and marked them 1 (if they were Hall of Famers) and 0 (if they were not). Later on I distinguished between first ballot and not, but that's not important here.
Then my set of variables were all position-agnostic variables like games started, games active, Pro Bowls, All-Pros, Sports Illustrated All-Pros, etc. I then added the position of the player and tried to create a model from that. I tested a few different kinds of algorithms to predict the probability (i.e., logit, random forest, SVM, and even some deep learning). It didn't really matter; you got a pretty strong prediction model. That became the statistic: how likely was this player to be a Hall of Famer.
#36 by SeaRhino // Jun 12, 2020 - 1:27am
That's an interesting approach. I can see a lot of benefits to it, but it does equate value to what Hall of Fame voters think is important, right? Which is a big downside (not that AV doesn't have downsides as well). Its biggest downside for me, of course, is that I don't have that data, whereas I can just get CarAV from Pro-Football Reference. :-)
#16 by Bryan Knowles // Jun 11, 2020 - 6:35pm
Negative numbers are ultra-rare, but not impossible; David Blough picked up -1 AV last season. The single-season record is -2, shared by 2007 Trent Dilfer, 2007 Gus Frerotte and 1960 Zeke Bratkowski.
#26 by SeaRhino // Jun 11, 2020 - 10:22pm
Doing the ANOVA test with medians instead of means hints at how useful something like this might be. It's clear the big outliers are making it seem more random than it really is for non-outliers, but it doesn't tell us in exactly what way that is true. But when you start eliminating or discounting outliers in a situation like this, you have to be really careful that the way you choose to do that doesn't bias the result. There is also the issue that those outliers are really important to how well the team does, it's not like they are just random data that was probably instrument error or something.
I think putting some kind of cap on the amount of AV we give the GM/coach credit for is the way to go, but how that cap works has to be carefully considered.
#61 by SeaRhino // Jun 12, 2020 - 6:25pm
I've considered doing something like that. The trick is to figure out what that really means, though. It could very well overvalue finding 4th- to 6th-round picks that become marginal to average starters and devalue finding first-round picks who become Pro Bowlers. That wouldn't get at what I want to measure. I think a cap on AV of some kind or a more "success rate"-like stat (as was mentioned above) might do the job better. For example, I could cap the amount of AV the drafting team gets credit for at 1% above the expected amount for their draft position (2.4% is the expected amount for the #1 pick, so you could not get credit for more than 3.4% for the #1 pick).
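For what it's worth, that cap would be a one-liner; shares here are fractions of the class total (0.024 = 2.4%, the expected share for the #1 pick), and the function name is just for illustration:

```python
def capped_credit(return_share, expected_share, bonus_cap=0.01):
    """Credit the drafting team with at most the pick's expected share
    plus one percentage point, per the idea above."""
    return min(return_share, expected_share + bonus_cap)

# A #1 pick who produced 5.0% of his class's CarAV is credited with 3.4%.
credit = capped_credit(0.050, 0.024)
```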
#28 by SeaRhino // Jun 11, 2020 - 10:24pm
So, to be completely honest, I used those because they are the default colors in Excel when selecting conditional formatting for top 10% and bottom 10%. Do you have some specific colors you would recommend instead?
#49 by putnamp // Jun 12, 2020 - 11:36am
People who deal with color-blindness tend to have issues with red/green, and very rarely some have issues with blue/yellow. So anything that isn't adjacent to those two is good. I'm red/green color-blind and I have a hard time with some adjacent shades of purple and blue (mostly darker ones) because the red that is meant to distinguish them doesn't stand out very well. Likewise bright shades of orange and yellow. Green and brown can sometimes be difficult to distinguish too.
So red and blue or red and yellow would be good choices. Green and blue usually works too. Red and blue is the most common one I see.
#17 by Lost Ti-Cats Fan // Jun 11, 2020 - 7:37pm
Love this analysis! Thank you for sharing.
I'll echo the comment from Joseph: I'd be curious to see the results if -- pick by pick -- you simply capture that pick as generating above-expected CarAV (count as 1) or below-expected CarAV (count as -1) and then tally for all picks across the years. I suspect it'll tell a similar story, but I'd be interested to see if the results for any GM/HC combos change.
I'd also love to see this type of analysis by GM/HC rather than team, even though small sample size issues would get worse.
Finally, not that this really matters, but when evaluating a GM/HC over time, I would think a weighting modifier for the most recent three years would be in order - say weight Year -1 at 25%, Year -2 at 50%, Year -3 at 75%, then Years -4 and beyond at 100%. Just to reflect that drafting success for a given year wouldn't necessarily be evident until all players have had a chance to establish themselves.
#29 by SeaRhino // Jun 11, 2020 - 10:29pm
That's a really interesting idea (the +1/-1 method). It's kind of like a "success rate" stat instead of a "yards gained" stat.
I considered weighting the years (by total AV generated in that year relative to other years), but the problem was that anything like that made the Seahawks look like the gods of drafting for the last 10 years, rather than just "only" being #1. That's just a quirk of the exact time window being looked at, and the "last 5 years" column is there to help see more recent trends.
#52 by Joseph // Jun 12, 2020 - 11:56am
Now that is a good idea, and probably gets to the heart of what I was aiming for. Maybe also tweak for a +2 if the player hits certain metrics like say 5+ Pro Bowls, 2+ AP 1st team All-Pro, or 2x their expected CarAV--in other words, a way to give more credit for a superstar/HOF type. Possibly also a -2 for players who earn like 20-25 points less than their expected CarAV. (IIRC, pick #32 is expected to generate about 25 points--so only huge 1st-round busts could qualify.)
#65 by SeaRhino // Jun 12, 2020 - 11:50pm
So here's what I've been looking into this evening. I started with giving a +1 for being above expectations and a -1 for being below. That had issues because it means late-round picks swing things a lot, just because a player started a few games here and there. So I added a middle tier. I took the rough expectations for a late fourth/early fifth round pick and used that as a break point. So if a player exceeded his draft position's expectations by that amount (i.e., he was worth his draft position plus about the 130th pick in the draft), he gets a +1. If he underperforms by that amount, he gets a -1. Anything in between gets a +0. This means that if the pick is 130th or later, it can't be a bad pick (which I think is basically correct). Add these all up for each year and you get a "success rate" like measure.
For the last 10 years, you get Dallas at the top at +10, Atlanta and Kansas City next at +8, Seattle and Washington at +6, Green Bay at +5, Miami at +4, then Chicago, Pittsburgh, and Baltimore at +3 to round out the top 10. Interestingly, New England is #20 at +0. At the bottom is Cleveland at -12, Tennessee at -11, and the Giants and Jets both at -9.
This is interesting, but I haven't decided whether I think that 130th pick break point is correct or not. I definitely don't think it should be lower, but I could see moving it up a round or two. You can also calculate a straight-up success percentage out of this, either with all +0s and all +1s counting as a success, or only +1s counting as a success. Doing this with only the +1s counting doesn't change any team's position that much, except that Pittsburgh drops to #16 and New England shoots all the way up to #2. Does that mean Pittsburgh plays it safe while New England rolls the dice more? I'm not sure. Note that the success rates with this method range from 12% to 29% with an average of about 20%.
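A minimal sketch of the three-tier scoring described above (Python; `band` would be whatever CarAV you expect from roughly the 130th pick, and the function names are just illustrative):

```python
def pick_score(actual_av, expected_av, band):
    """Score one pick: +1 if it beat its slot's expected CarAV by at
    least `band`, -1 if it fell short by at least `band`, else 0.
    Since expected_av - band <= 0 for picks around 130 and later,
    those picks can never score -1 (AV can't go below zero)."""
    if actual_av >= expected_av + band:
        return 1
    if actual_av <= expected_av - band:
        return -1
    return 0

def team_score(picks, band):
    """Sum pick scores over a list of (actual_av, expected_av) pairs;
    this is the per-team tally (e.g. Dallas at +10 over ten years)."""
    return sum(pick_score(a, e, band) for a, e in picks)

def success_rate(picks, band):
    """Fraction of picks scoring +1 (the strict 'only +1s count' rate)."""
    scores = [pick_score(a, e, band) for a, e in picks]
    return scores.count(1) / len(scores)
```

Counting +0s as successes too would just change the last line to count both 1 and 0.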
#66 by Lost Ti-Cats Fan // Jun 13, 2020 - 3:49pm
Thanks for doing this, SeaRhino!
I understand your rationale for treating late round drafts as either successes or non-factors, and never "busts". It may, however, tilt the playing field in favour of GMs that tend to trade down. You could also make an argument that finding a player in the late rounds that starts a few games is a good pick, and picking a player that never sees the field is a wasted pick no matter when it happens. But I do agree with the idea of treating a pick falling into a band around expected CarAV as 0.
When you say you counted +1s and +0s as successes, did you drop the +0s arising from picks 131 onward, i.e. the ones that couldn't be any lower than 0? Otherwise, wouldn't that further tilt the calculations in favour of GMs who assemble a lot of late picks?
Anyway, this is fascinating stuff, and I think hits home on the idea that GMs are good at their job, and it's tough to outperform on the actual drafting. Over a 10 year period, you're top notch if you make 5 more good picks than you make bad picks, i.e. if every 2 years you make 1 good pick more than bad picks after accounting for when you're drafting.
#68 by SeaRhino // Jun 13, 2020 - 11:12pm
I calculated the percentages only using the +1's as a percentage of all picks (+1, +0, or -1). Using the +0's and the +1's as "success" is valid, but it just doesn't seem right to me because it gives you an awful lot of credit for low-round picks.
This really does show how much randomness there is. Bad is one over-performing pick a year and great is two. Of course over five years, that might mean the bad team has a solid offense with a good QB, a good WR, a good TE, and 2 good OL. But the great team has that, and a good DE, DT, MLB, CB, and SS. The difference in team quality can really add up from these advantages. That "factor of two" difference between the top and the bottom is also what we see (very roughly) when doing the calculations for draft return and/or draft efficiency.
#18 by Scott P. // Jun 11, 2020 - 7:53pm
In graduate school, I adapted Eugene Fama and Ken French's "Luck versus Skill" paper to the NFL draft. I found very little evidence to suggest the results of drafting were due to skill. If there was skill, it was very minimal. And this period covered 2012 all the way back to 1978.
This wouldn't be surprising, since few GMs have 34-year careers. Unless you assume that hiring a good GM once means the next GM is expected to be good. This is a bit like saying winning games is all luck because teams swing from bad to good and then bad again.
#19 by theslothook // Jun 11, 2020 - 8:26pm
Not exactly. The lifecycle of successful money managers is shorter than you think and both are plagued with survivorship bias, even among your middle tier above average funds.
In fact, plot a histogram of the investment firms by years in the market and it will look surprisingly similar to NFL regimes (here I looked at the longest tenure of coach or GM of the team). So Ozzie gets counted throughout his tenure with the Ravens as 1, while BB gets counted throughout his tenure as the Pats HC, despite both of these regimes switching multiple HCs and GMs respectively.
Also, the theory of efficient markets would seem to be applicable in the NFL draft as well.
Look. Think about it. Bill Polian had the highest draft results of any GM in the 2000s. His first round track record was sterling, even without having Peyton Manning as the jewel of his drafts. Yet, cut the period in half and his record drops substantially. Same with Pete Carroll, I suspect. Carmen Policy had the same issue. So did Vinny Cerrato.
We only remember the winners; that was the point Fama was making.
#35 by BigRichie // Jun 12, 2020 - 1:00am
First, how does 'Return' change as the years go by? That is, how much return is typical of the draft class year, how much Y+1, Y+2, etc?
And have you considered discounting that return as the years go by? A huge advantage of draftees is that they provide cheap labor, which advantage takes a big hit come Year 5 (6?) when you have to start paying the player. For example, Russell Wilson was far more valuable to the Hawks while they got to underpay him. It's sorta like come Free Agency, every player now has the same draft capital. Still some value in that it's definitely easier to sign your own. But conceptually seems like there ought to be a discount rate in there, devaluing very slowly at first of course, but then starting seriously downward about the time you're thinking of locking up your best draftees who've proved worthy of such.
#37 by SeaRhino // Jun 12, 2020 - 1:57am
Here are the total CarAVs generated by a given year's draft class (UFDAs not included).
- 2019: 585
- 2018: 1355
- 2017: 1984
- 2016: 2629
- 2015: 2831
- 2014: 3393
- 2013: 3464
- 2012: 4217
- 2011: 4198
- 2010: 4348
Roughly speaking, we can say that about half the value comes from the first three years, distributed fairly evenly. Then the rest of the value comes mainly in years four to eight, also distributed fairly evenly. After that, it's just the elite veteran outliers adding a bit to the totals. Of course, this is CarAV, which already has a time discount built in (10% per year from best year to worst), but I don't think things would change all that much if this were just done with straight AV.
Your second paragraph gets at the question: what, exactly, are we trying to measure? This is not a simple question. How much value a player generated for their team relative to how much they got paid is a very interesting question, but doesn't really get at "how well did your draft class perform?" And both of those are different questions than "how skilled is your team at drafting?" Obviously, I'm focusing on the drafting-related questions.
But it might be a good idea to only use AV from the first five years of a player's career for drafting-related analysis, since that's the maximum length of any rookie contract. But, of course, that makes Aaron Rodgers look like a weak draft pick, which is a bit odd. Of course, you could argue that Aaron Rodgers' HOF career shouldn't count that much towards drafting skill, because if they were so sure he was that good, they would have played him earlier, right? Not like they already had a HOFer playing for them, right? :-)
While I thought of things like this, it was an easy choice to just use CarAV because, well, that's what I could easily get from Pro Football Reference. If you know of a way I can get AV for the first five years of every player's career, I would like to know about it. (Right now, the only way I can think of that isn't doing it by hand is to write a program that looks at the career page of every single player, processes the yearly data for that player to parse out the year-by-year AV, and totals all that up. That's not an impossible thing to do, but maybe they have a paid service that gives you complete access to their database? I've never looked into that.)
#39 by Bright Blue Shorts // Jun 12, 2020 - 8:12am
I read the article but struggled to find the following question answered.
How does the study control for successful teams having fewer spaces for draft picks to make the roster? Or terrible teams having lots of opportunities?
- 2010-12 Seahawks are coming off three consecutive losing seasons and draft really well.
- 2017-19 Patriots are coming off three consecutive SB appearances and draft badly.
#47 by takeleavebelieve // Jun 12, 2020 - 11:25am
I assume because the values are anchored to career AV, not just AV accumulated while playing for that team. So for example, the Patriots would still get credit for Garoppolo's AV and Washington gets credit for Cousins' AV, regardless of the fact that they're playing for the 49ers and Vikings respectively. If a player is "NFL caliber" but just happens to find himself on the short end of a roster crunch, he should theoretically still have opportunities to catch on with another team.
This does expose a bit of a flaw, however, in that QBs generate the most career AV but a team that already has a good QB generally won’t try to draft another QB even if they have the opportunity to do so.
#50 by Joseph // Jun 12, 2020 - 11:45am
IMO, every coach and GM know who they have, who may leave/has left in FA, whose cap # is a lot more than their production, what holes need to be filled, etc. For some teams, they could use realistic upgrades on 20 players--on others 5-10. Sometimes, "upgrades" may be just somebody equally skilled, but younger; or equally skilled, but having potential to get better; or still mostly playing special teams, but at a different position group (DB vs. WR, for example).
If team A has lots of holes, they should generally be drafting best player available; team B who does not have many holes may target a specific player, position, or skill. Either way, every team should be drafting players who are in theory better than somebody currently on the roster. Obviously, that doesn't always happen, especially with players drafted in the 5th round or lower. Sometimes draft picks deal with injuries; sometimes they just don't transition to the pros as well as expected; sometimes the player that they were expected to replace gets better, etc.
So, while BBS's 2 examples are quite correct, if the Patriots don't begin to get big contributions from those 2017-2019 draft classes starting this year, they aren't going to look any better in hindsight. Yes, "flags fly forever," and MLB's Marlins have TWO of them. But does anybody believe that the franchise is better for it? (Not trying to turn this into a MLB thread.) [Are there any Jets fans who would argue, "Yeah, we're horrible now, but we won that SB back in '69!"?]
#64 by SeaRhino // Jun 12, 2020 - 11:25pm
It doesn't control for that at all (other than slightly by using CarAV instead of DrAV). There isn't much of a way to do it that I can see. How do you account for Aaron Rodgers sitting for years, looking like a bust by CarAV, but only because the team already had a HOF QB? If you want an objective system that doesn't adjust things on a case-by-case basis, that's going to be tough, if not impossible, to do. It's generally better to not try to adjust for things like that and just take that into account when you are looking at the results and trying to determine the cause of those results.
#67 by OldFox // Jun 13, 2020 - 6:54pm
Agree that this is an excellent article. As a Browns fan, though, it was certainly not news to me that the Browns have had lots of draft capital which they have badly squandered. Very frustrating as a fan to watch these guys blow such a golden opportunity. I assume a large part of the reason is the constant turnover at the top of the organization. We had a chance to develop a powerhouse, and we blew it. That opportunity may not come around again for many years.
#69 by SeaRhino // Jun 13, 2020 - 11:21pm
The quality and continuity of an organization is critical. Look at the Steelers, the Ravens, the Patriots, the Packers, the Saints, the Seahawks, hell, even the Cowboys (their problem is that their owner has a blind spot about coaches, but otherwise they do very well). If you want to be consistently good in the NFL, you need your whole organization to be solid and on the same page. Constant turnover makes that impossible.
#48 by Pat // Jun 12, 2020 - 11:33am
I've made this point before elsewhere, but it's worth repeating: I think the vast majority of the reason people think drafting looks like luck is that they're evaluating drafts wrong. Almost all the evaluations I see base it on something like career AV versus expected AV at that draft position, and that's a big problem, because it rewards teams when they're *wrong* and punishes them when they're *right*.
So it never surprises me when the results say "oh, it's all luck" - because you're folding noise straight into the metric. What matters is whether teams are right or wrong *relative to their own expectations*, and how that compares to the rest of the league. Of course we don't *know* what teams' expectations were, so the best you can do is something like pre-draft reports.
The Chiefs drafting Patrick Mahomes is a win, but it's not a massive win. They drafted him early relative to expectation, but not a ton. He's exceeding even the Chiefs' expectations. They didn't see him becoming the best quarterback in the league. If they had, they would've worked to get him higher than they did. Think about it this way: suppose that instead of the Chiefs drafting Mahomes at 10, the Bears take him at 3, and Mahomes works out exactly the same. Are you saying the Bears did a *worse* job drafting Mahomes than the Chiefs? No way - the Chiefs got lucky. They evaluated Mahomes better than the rest of the league (they were right) but *worse* than he actually was (they were wrong).
So in fact you're giving the Chiefs a *boost* relative to "pretend Bears" because they were wrong - Mahomes should've been valued much higher, and he wasn't, and you're *docking* "pretend Bears" because they were even *more* right - Mahomes *was* worth that much.
It gets even worse when you start thinking about late round picks, because rewarding, say, Seattle, for all of Russell Wilson's success is nuts. Seattle didn't think Wilson would turn out to be a starter. They likely thought he could have a chance to be something special if his height wasn't an issue. Maybe they thought (as I've argued before) that with all the rules changes protecting quarterbacks, small quarterbacks would be able to last. They still deserve a boost, just like the Chiefs - most draft reports/etc. had Wilson as a ~4th round pick, and they valued him higher. They were right. But they didn't think he'd be a starter - in that, they were wrong.
I don't know how you'd boil this into a metric, because it'd still be fairly noisy as you only really have a few picks that you can say "right" or "wrong" anyway. On average teams get something like ~2-3 ish starters a year. And then of course the pre-draft reports/mock drafts aren't really what the league thinks either, so you've got that problem.
But what I'm saying isn't crazy. If you read stories about Brady - probably the biggest draft steal in history - being drafted, the Patriots didn't think he was a first-round pick. They thought he was worth maybe a 3rd or 4th round pick. By the time they got to the 6th round, Brady was just such a high value that they felt that they *had* to take him. And they were right - but those guys don't look at Brady and say "I'm such a genius." They look at the guys they drafted *before* Brady and say "dear God, I was an idiot."
#53 by Pat // Jun 12, 2020 - 12:14pm
I should point out that while it would be difficult for fans/media to boil this into a metric, for a *team* it'd be trivial, as they've got orders of magnitude more data than we do - they know where they valued *every* player, including ones they didn't take (and they can use where other teams drafted players as a reasonable proxy for their valuation). So it's not terribly surprising if the actual front offices look at fan stats geeks and roll their eyes when they say "drafting's random," because if they're even reasonably competent they can probably pretty rapidly figure out which teams are better than others at drafting.
#56 by dbostedo // Jun 12, 2020 - 2:28pm
"They didn't see him becoming the best quarterback in the league. If they had, they would've worked to get him higher than when they did."
I don't think that's correct. They only would have done that if they thought that other teams had him ranked highly enough to take him ahead of them. Likewise with Brady, you're not thinking you're an idiot for taking others ahead of him - that's applying hindsight that you couldn't know at the time. You're thinking you probably had him overrated and are glad to have gotten him in the 6th round when you could have "messed up" and taken him too high.
#59 by Bowl Game Anomaly // Jun 12, 2020 - 4:00pm
These are good points, but I think you're missing the fact that a lot of drafting comes down to assessing and valuing probabilities. If a team takes a high-floor, low-ceiling guy in the 3rd, and a low-floor, high-ceiling guy in the 4th, and then the 4th rounder has a better career than the 3rd rounder, that does not mean that the GM was wrong to draft them in the order he did. It just means the more variable guy ended up on the higher end of his range. The 3rd round guy was a safer pick, and the 4th round guy was more of a gamble. They were drafted in the correct order (presuming that was the consensus of the scouts). To say anything else is hindsight bias.
Now consider the fact that basically every pick after the 3rd round has a low floor. Every one of those picks is a gamble. Some have higher ceilings than others, some are more likely to become useful players than others, which is not the same thing. It's absurd to say that a 6th or 7th round pick who has a good career was a mistake because he should have been taken higher, or that it was foolish to take some other guy before him who didn't pan out (except Marques Colston, maybe. Just kidding, that's more hindsight bias). They are all gambles. None of them, individually, are likely to pan out. Some may hit, most will miss. You can only judge a GM on his ability to find useful players, not on the order that he finds them.
#63 by SeaRhino // Jun 12, 2020 - 11:21pm
All the data I've seen points to really high performing players being impossible to predict with any degree of accuracy. You can tell they will likely be good, but more than that doesn't really seem to be possible with any regularity. Let's look at the #1 picks for the last 20 years. The best #1 pick is Eli Manning (by percentage of CarAV from his draft class). The 2nd best is Cam Newton, followed by Matthew Stafford. Andrew Luck would likely have been way out in front if not for the injuries, but injuries count. The only two who were true busts (not even worth a 1st round pick) were Courtney Brown (all the way back in 2000) and, of course, the legendary JaMarcus Russell. So there is a decent floor, but getting a HOF player with the #1 pick is rare. Jared Goff or Baker Mayfield are your median #1 picks.
This is why smart teams trade down so much and why some GMs think Khalil Mack is worth two first round picks (he isn't though, unless those are very late picks; nobody outside of a top 5 QB is, except maybe Julius Peppers in his prime). I just see no evidence that outlier performers are predictable.
Brady is definitely the ultimate draft pick success story and points to how random predicting super-high performance is. But to say the Patriots made a mistake in drafting him too low is not correct. They valued him (by their actions) higher than all the other teams. In the "how did you handle drafting Brady" contest, they are ranked #1. You can only evaluate these things relative to other teams (unless you think you know the secret to drafting better than the top NFL teams). Thinking otherwise is like saying you made a mistake by not raising an opponent in Texas Hold'em on the turn, because you filled an inside straight on the river. You can only assess decision-making based on the information you have when the decision is made.
#55 by tjb // Jun 12, 2020 - 2:19pm
I am a little surprised at how well the Bills 2017 draft grades out considering they completely whiffed their 2nd round pick (Zay Jones) and also drafted quite possibly the worst QB to ever start an NFL game (you know who). I guess Tre White and Matt Milano make up for a ton of mistakes.
#58 by Joseph // Jun 12, 2020 - 3:43pm
White does well, as he has already earned his CarAV for his slot. Jones is below average, but Dawkins + Jones almost balance each other out as 2nd rounders. Peterman and their 6th rounder aren't great picks, but about average for their draft slot. Milano definitely puts them over the top.
#62 by SeaRhino // Jun 12, 2020 - 10:48pm
Matt Milano was the 163rd pick, so he is a fantastic pick. He's performing like the 17th pick or so. Tre White was the 27th pick, but is performing like the 8th pick or so, which is very good. Dion Dawkins (63rd pick) is performing like a 30th pick, so that's solid. Tanner Vallejo probably isn't even on your radar, but he slightly outperformed his 195th pick position (of course the expectation for that is just barely above zero). Nathan Peterman, the worst QB ever? Maybe, but he was picked 171st, which expects about 2 AV at this point for him. Which he has, so he has produced exactly according to expectations (those expectations being almost zero). Zay Jones is actually the only "bust" of the draft, being picked 37th and only performing like the 74th pick. But that's actually not terrible, even if it is disappointing.
Of course, it may not feel like a great draft, because they only had 2.68% of the draft capital that year (average is 3.125%), but they did an excellent job with what they had. Milano and White alone would meet the expectations of that amount of draft capital, even if the rest of the draft failed to make the team at all. I don't know why their draft capital was so low that year, which might be the GM's fault, but that's not what these tables analyze.
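The "performing like the Nth pick" comparisons above amount to inverting the draft value chart: find the slot whose expected CarAV is closest to what the player actually produced. A sketch (Python, using made-up toy chart values rather than the real Chase Stuart numbers):

```python
def equivalent_pick(car_av, chart):
    """Return the 1-based pick number whose expected CarAV is closest
    to `car_av`. `chart` lists expected CarAV by pick in descending
    order (index 0 = pick #1); these are toy values, not the real chart."""
    return 1 + min(range(len(chart)), key=lambda i: abs(chart[i] - car_av))

toy_chart = [30.0, 25.0, 21.0, 18.0, 15.0, 13.0, 11.0, 9.0]
print(equivalent_pick(14.5, toy_chart))  # 5 (expected 15.0 is the closest)
```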
#70 by GwillyGecko // Jun 15, 2020 - 12:00pm
I'm not a big fan of the Chase Stuart draft value chart. Just for two examples, it considers the difference between the 10th pick and the 20th pick to be a fourth rounder, and the difference between the 20th and 30th to be a 5th rounder. No team would ever make such a trade, because if a team is trading up ten spots in the first round it's for a specific player who won't be there at their current pick.
#71 by Bowl Game Anomaly // Jun 16, 2020 - 8:53am
The Chase Stuart chart is not intended to be a measure of trade value. It's a measure of production value. The Johnson chart is a more accurate measure of trade value; however, the point of the Stuart chart is that NFL teams' valuation of picks (as measured by trades) does not reflect their actual value in terms of probable production.
#73 by zenbitz // Jun 16, 2020 - 8:03pm
I'd be curious how it would change if you just used 3 years (or rookie contract?) of value, instead of career value. That might help dampen the effect of drafting a HOFer. Plus - certainly when you are picking a 22 yo out of college, his injury future from age 26-30 is not something you could have possibly predicted, and injured players generate zero value...
Not sure if you would have to adjust the draft capital calculation as well.
#75 by davekenya // Jun 18, 2020 - 12:11pm
I could not find an aggregating source for career-ending injuries but I think this is legit to factor into the analysis if possible to do so. Would Green Bay's draft ratings be higher, for example, if Nick Collins and Jermichael Finley had not had career ending injuries (CEIs) but blossomed into pro-bowl players? Or are these CEIs on par with what other teams have experienced in the past 10 years and of no relevance?
#77 by SeaRhino // Jun 20, 2020 - 12:45am
But that's the problem, isn't it? What do you mean by "best"? Do you mean best total performance over an entire career, regardless of the amount of draft capital you have? Do you mean relative to the amount of draft capital? Do you mean only for the five years after the draft? Do you only count years spent with the team that drafted them? Do you try to account for injuries? What about Aaron Rodgers sitting for three years behind Brett Favre? Do you only count performance above that of a replacement player? How do you even separate drafting ability from the coaching staff making the players better?
Even just defining what you are really measuring is very tricky. That's why I laid out my methods as clearly as possible, as it is very reasonable to disagree with those methods or to say "that doesn't really measure who drafts the best properly". It's very tempting to eliminate major outliers in an effort to be more "accurate", but those outliers are really important to a team's success, so you can't just remove them if your definition of "drafts the best" includes the impact that draft had on the team's performance. But there is some good logic behind removing them if your definition of "drafts the best" is about how much skill a GM has relative to other GMs in making picks that out-perform their draft position.
#78 by Thomas427 // Jan 10, 2022 - 3:29pm
This relates to player evaluation only.
I think the math has to simply reflect the difference in value between the player selected and the player that could have been selected. I don't think NE should get extra credit for passing on Brady 5 times and then him turning into the GOAT. I know everyone missed on Brady but NE should not get extra credit: that's the epitome of luck. Same with Russell Wilson. Or Stephon Diggs. If the GM was so good they wouldn't have risked letting other teams draft potential future HOF'ers.
Best score = 0. If you select the Best Player Available (BPA), you did your job. Your pick's points = BPA points. If you passed on the BPA, you lose points equal to the difference between the BPA's draft value (you'd need to rerank the draft class and assign the 'right' value points to each player) and your player's draft value. Missing in earlier rounds costs you more points. As it should.
GMs are compared to each other by how well they made the best pick they could each time it was their turn.
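A sketch of this scoring scheme (Python; the player values would come from the hindsight re-ranking of the class, and the function names are just illustrative):

```python
def pick_penalty(bpa_value, picked_value):
    """0 if the GM took the best player available (by hindsight
    re-ranked value); otherwise a negative penalty equal to the value
    gap between the BPA and the player actually taken. Early-round
    misses cost more because early-round value gaps are bigger."""
    return min(0.0, picked_value - bpa_value)

def gm_score(picks):
    """Sum penalties over a list of (bpa_value, picked_value) pairs;
    a perfect draft scores 0, and anything below is points lost."""
    return sum(pick_penalty(b, p) for b, p in picks)

# Took the BPA once (0 penalty), then passed on a 30-point player
# for a 20-point one (-10 penalty).
print(gm_score([(30.0, 30.0), (30.0, 20.0)]))  # -10.0
```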
Trades before and during the draft are a completely separate metric. Why? Because it's a different GM skill being measured. That's guessing correctly how other teams are or aren't going to draft. It's choosing whether to value experienced players over draft picks. Both are extremely important skills. But they're different skills and need to be evaluated on their own.