17 Apr 2014

Game of Inches: The Talent Gap by the Numbers

What if you could see the difference between the average college player, NFL prospect, NFL veteran, and NFL star through the lens of errors per season? Matt Waldman explores this idea with a hypothetical look at error rates based on the percentiles of players who reach each level of achievement. Read on at the link above on the RSP blog.

Posted by: Matt Waldman on 17 Apr 2014

8 comments, Last at 19 Apr 2014, 6:10pm by LionInAZ

Comments

1
by turbohappy :: Thu, 04/17/2014 - 11:59am

I guess I couldn't suspend my disbelief enough; I just don't think there's that much validity to the idea that being in the Xth percentile means you commit (100-X)% egregiously bad plays.

2
by nlitwinetz :: Thu, 04/17/2014 - 1:48pm

I concur. Look at an elite player like Dez Bryant. He probably runs the wrong route at least 10% of the time. Being consistent is important at some positions (DB, LB, OLine, etc.), but being an explosive athlete capable of big plays is more important at others (DE, WR, RB, etc.).

4
by Dr. Mooch :: Thu, 04/17/2014 - 5:45pm

No, there's only a tiny amount of validity to it, since it vastly oversimplifies a much more complex idea:

Error rates won't go from 0-100% in reality. They will occur in a distribution, and the center of that distribution will probably not be 50%. We can guess that it might be a normal distribution rather than the completely uniform distribution in the example. The shape will also probably differ a lot by type of error (running the wrong route vs. a false start penalty vs. blowing a blocking assignment).

We can presume some correlation between a player's percentile in the grand winnowing of recruitment and the player's percentile in the error-frequency distribution for a given type of error. That correlation will be less than 1.00.

We can postulate that all of the relevant numbers are real and could, in principle, be known, but they'd be very difficult to figure out in practice. So Matt's really just playing with the idea by stipulating totally arbitrary (and unrealistic) values for the error distribution, and also assuming a perfect correlation between recruitment rank and position in the error distribution.
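(A minimal illustration of the imperfect-correlation point, not anything from the article: the correlation of 0.7 below is purely an assumed value, and the simulation only shows how far a player's recruitment percentile and error percentile can drift apart once the correlation is less than 1.00.)

```python
# Rough Monte Carlo sketch: recruitment percentile vs. error percentile
# when the two ranks are correlated at an assumed rho = 0.7.
import random
from statistics import NormalDist, mean

rho = 0.7            # assumed correlation between recruitment rank and error rank
n = 100_000
std_normal = NormalDist()

gaps = []
for _ in range(n):
    z_recruit = random.gauss(0, 1)
    z_error = rho * z_recruit + (1 - rho ** 2) ** 0.5 * random.gauss(0, 1)
    # convert both z-scores to percentiles and record how far apart they land
    gaps.append(abs(std_normal.cdf(z_recruit) - std_normal.cdf(z_error)))

print(f"average gap between recruitment and error percentiles: {mean(gaps):.1%}")
```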

You could certainly make it more interesting by using a normal distribution and standard deviations, but it would still be highly inventive. E.g., pretend that if you gathered up all of the people who played high school ball, they'd commit, on average, 50 truly game-altering errors over the course of a season. (A little more than 4 game-changers per game is surely too low a number, considering how ludicrous it would be for the vast majority of these folks to go anywhere near a pro football practice.) Perhaps the standard deviation would be, say, 20 errors.

Well, then your NFL prospect, better than 98.4% of people, would be expected to sit 2.14 standard deviations below the mean on a normal curve: committing only about 7 game-changers a season. The 93.1st-percentile player who makes a college roster would be putting up about 20 game-changers a season. The 99.06th-percentile group who get a second contract: about 3 a season. Still totally made-up numbers, but look how using a normal distribution changes the way the numbers lie: 20 --> 7 --> 3, as opposed to 42 --> 10 --> 6.
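(For anyone who wants to check the arithmetic, here is a minimal sketch of the calculation in the comment above, using the commenter's made-up mean of 50 errors and standard deviation of 20, and the percentile cutoffs quoted in the comment.)

```python
# Expected game-changing errors per season under the assumed normal model:
# mean 50, standard deviation 20 (both made-up numbers from the comment).
from statistics import NormalDist

errors = NormalDist(mu=50, sigma=20)

tiers = [
    ("plays in college", 0.931),          # better than 93.1% of high-school players
    ("NFL prospect", 0.984),              # better than 98.4%
    ("gets a second contract", 0.9906),   # better than 99.06%
]

for label, pct in tiers:
    # a player better than pct of the pool sits at the (1 - pct) quantile of errors
    expected = errors.inv_cdf(1 - pct)
    print(f"{label:<25} ~{expected:.0f} game-changers per season")
```

Running it reproduces the 20 --> 7 --> 3 progression.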

8
by LionInAZ :: Sat, 04/19/2014 - 6:10pm

In simpler terms: this article should never have been written. The connection between the data and conclusions is much too speculative.

3
by TimTheEnchanter :: Thu, 04/17/2014 - 4:55pm

There's a circular logic to this. It starts with percentiles of people selected, but then transfers those to percentiles of successful execution (non-error plays) for no apparent reason.

But then, lo and behold, the conclusion is that these small differences in execution rate explain the difference in success. So of course execution explains the difference in success when you transfer the success numbers to a measure of execution.

You could have just as easily transferred those percentile numbers to some other measure, like... how many parties they go to where they get laid. The pros get laid 99% of the time, whereas your average college player gets laid only 93% of the time. (That correlation sounds about right.) Therefore it's not hard to think that how well the guys close the deal at parties determines who will make a living at football.

So maybe it does come down to "Inches".

5
by Pen :: Thu, 04/17/2014 - 7:40pm

All I got out of this article is that college players should get paid while in college for the money they make for the school. There's little chance they'll get a career out of the NFL. They are paying their way through school, and like any employee, they have a right to negotiate the terms of that employment. Just getting paid tuition is too low a price for some of them.

6
by BDC :: Fri, 04/18/2014 - 5:55pm

But that means less money for rich executives. Won't anyone please think of the rich executives?

7
by burgmeister :: Sat, 04/19/2014 - 7:57am

I'm curious about the distribution of players making it to the NFL and their status as two-, three-, or four-star recruits in high school. I see lots of students getting all googly-eyed at the idea that they are going to play college ball and then go to the NFL. How many kids who are not dominant in high school eventually make it to the NFL? I would think those kids would be the kings of technique rather than loose cannons like Dez Bryant.