30 Jun 2005
by Ryan Wilson
As Jamal Lewis approached Eric Dickerson's all-time single-season rushing record two seasons ago, I got to thinking about how impressive his performance was, both in a historical sense and in the context of the 2003 season. When having the "greatest running back of all time" discussion, is there a difference between a player who dominates the field (like Dickerson in 1984) and one who leads the league in rushing in a relatively tight race (like Priest Holmes in 2001)? Of course, I didn't get around to answering this question until now, but I figure better late than never.
If you go to NFL.com and check out the rushing statistics, you'll have no trouble finding which players lead the league in categories like total rushing yards and yards per carry. But what you won't find at NFL.com are statistics that tell you how much better a back was than his competitors in a given season (or even over the course of their career). Let me explain what I mean.
In 2001, Priest Holmes led the league in rushing yards with 1,555. Now 1,555 yards falls well short of the single-season record of 2,105, but how much better is 1,555 yards in 2001 when compared to how the other running backs did that year? Likewise, how much better is 2,066 yards in 2003 when compared to how the other running backs did that year? Or in broader terms, what backs, over the course of their careers, were marginally more productive than their peers? The methods I describe below allow you to compare players across both seasons and careers.
Historically, football statistics have focused primarily on averages (average yards per game, average yards per carry) and sums (total rushing yards, total rushing TDs). Who is tenth on the all-time rushing list? Look it up and you'll find that it's Franco Harris (12,120 yards). Who was tenth for the 2003 season? It was Ricky Williams (he rushed for 1,372 yards).
But here's the question I wanted to answer: How much better is Jamal Lewis than Ricky Williams in 2003, or Jerome Bettis than Thurman Thomas over the course of their careers, when compared to all other running backs? Now this required a little more work than opening the football almanac, but thanks to our good friends at Pro-Football-Reference.com, I was able to comb through the data.
(Disclaimer: This article looks only at total rushing yards over seasons and careers; this is not to diminish the importance of receiving yards by running backs. Instead, in this first iteration, I'm interested in looking solely at pure rushers -- remember, Jamal Lewis's 2,000-yard season was the impetus for this research, and he's not known for his ability to catch passes coming out of the backfield. In the future I'll take a look at how including receiving yards changes the list.)
Looking at all the rushing data from 1957 to 2003, I normalized rushing yards to something called z-scores. What is a z-score? It's a statistical measure that tells how a single observation (Jamal Lewis rushing for 2,066 yards in 2003, for example) compares to all other observations (all running backs for the 2003 season). A z-score says not only if the observation is above or below the average value, but also how unusual it is (and it's no surprise that Lewis' rushing total would be considered unusual using z-scores).
In general, a z-score of 1.0 means that a player is in the top 16% of all rushers, a z-score of 2.0 means that a player is in the top 2.5% of all rushers, and a z-score greater than 3.0 means that a player is in the top 0.15% of all rushers.
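The calculation itself is simple: subtract the league mean from a player's total and divide by the standard deviation. A minimal sketch in Python, using made-up rushing totals (not real 2003 data) to illustrate the idea:

```python
import math

def z_score(x, values):
    """Z-score of observation x relative to the population of values."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return (x - mean) / sd

def share_above(z):
    """Approximate fraction of a normal population above z (one-sided)."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Hypothetical season: one dominant back, a pack bunched behind him
yards = [2066, 1883, 1645, 1434, 1420, 1372, 1216, 1110, 987, 854]
z = z_score(2066, yards)
print(round(z, 2))                    # standard deviations above the league mean
print(round(share_above(z) * 100, 1))  # rough "top X%" if yards were normal
```

Note that `share_above` assumes rushing totals are roughly normally distributed, which is only an approximation; it's what gives the 16% / 2.5% / 0.15% figures above.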
In addition to looking at how running backs compared by season, I also looked at how running backs compared over their careers. This lets us answer questions like: "How did Jim Brown, who played nine seasons, compare to Barry Sanders, who played ten seasons?"
According to standard rushing yards, here are the top single-season rushing performances since 1957:
[Table 1: top single-season rushing performances since 1957 -- Rank, Last Name, First Name, Year, Team, Season Yards]
No surprises here. Now what happens if we revisit the single-season rushing leaders, but instead of measuring success by total yards gained, we instead measure success by how many more yards a player gains when compared to other players for a given year? The top three rushing yardage seasons no longer rank one, two, and three. Looking at the z-score table, Dickerson is now 9th, Lewis drops to 28th, and Sanders is 8th.
[Table 2: top single-season rushing performances by z-score -- Rank, Last Name, First Name, Year, Team, Season Yards]
Yes, O.J. Simpson now has the top rushing season of all time -- and the second-best season as well. How did seasons that ranked 5th and 14th in raw yards end up on top?
Well, remember, z-scores look at how well a player does in relation to other players during a particular season -- not just total yards gained. And while Dickerson holds the all-time rushing record of 2,105, Walter Payton that same year finished second with 1,684 yards and James Wilder was third with 1,544 yards (somewhere Carl Prine is smiling).
On the other hand, when O.J. Simpson rushed for 2,003 yards in 1973, John Brockington (remember him?) was second with 1,144 yards and Calvin Hill was third with 1,142 yards. So the gap between Dickerson and Payton (421 yards) was much smaller than the gap between Simpson and Brockington (859 yards). It's also important to remember that before 1978, the NFL had a 14-game regular season schedule. This means that performances by guys like Jim Brown and O.J. Simpson are even more impressive when compared to backs who had two more games to rack up yards. But that's the beauty of z-scores -- they allow you to compare a player's performance to that of every other player during a particular season, no matter how long that season was.
Compare O.J. Simpson's 1975 performance (1,817 yards) to Jamal Anderson's 1998 performance (1,846 yards). We find that even though both players rushed for over 1,800 yards, Simpson ranks 2nd using z-scores and Anderson ranks 36th; using single-season rushing yards, Simpson ranks only 14th while Anderson is 12th.
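The Simpson/Anderson split is easy to reproduce. In this sketch the two "league" lists are invented stand-ins (not the actual 1975 and 1998 totals): one has a lone leader far ahead of the field, the other a crowded pack near the top. Nearly identical raw totals produce very different z-scores:

```python
import math

def z_score(x, values):
    """Z-score of x relative to the list of values."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return (x - mean) / sd

# Illustrative top-ten totals, not real data
lone_leader = [1817, 1144, 1142, 1100, 1060, 1020, 990, 950, 910, 870]
tight_pack = [1846, 1760, 1720, 1680, 1640, 1600, 1560, 1520, 1480, 1440]

print(round(z_score(1817, lone_leader), 2))  # big gap to the field: high z
print(round(z_score(1846, tight_pack), 2))   # more yards, crowded field: lower z
```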
(And yes, sitting at #10 on the list is that Charles White -- the guy who dominated the scabs during the strike year. I thought about actually removing him from the list, but when was the last time White got some pub?)
Seeing these lists, I know the question you are asking: "Jamal Lewis ran for 2,066 yards in 2003, and he's not even in the top 20 when using z-scores. What gives?"
Walter Payton, who in 1977 rushed for fewer than 2,000 yards (1,852), ranks 3rd using z-scores but only 11th using single-season rushing yards. That's because Payton was the only runner in 1977 to gain more than 1,300 yards. In comparison, Lewis was one of twelve players to clear 1,300 yards rushing in 2003.
Using this same reasoning, we can see why Terrell Davis fell from 4th on the single-season rushing list to 15th on the z-score list; and why Edgerrin James fell from 21st to 66th -- there were many more players closer to Davis' 2,000 rushing yards in 1998 (and a lot more players closer to James' 1,709 total in 2000) and as a result their respective z-scores are lower (2.62 and 1.84, respectively) than they would have been in 1977.
All of these z-scores are extremely high, but that's to be expected because we're looking at the top of the list. If we looked at every player who ever ran the football in an NFL game, we'd see plenty of z-scores around 0.0, along with some negative ones.
Next, let's look at the all-time career rushing leaders (as of February 2005):
[Table 3: all-time career rushing leaders -- Rank, Last Name, First Name, Career Yards]
This also looks pretty familiar. But what happens if we use z-scores for career rushing leaders like we did for single-season rushing leaders above? To accomplish this, I summed each player's z-score for every season he averaged at least 10 carries per game (I'll call it the aggregated z-score). The aggregated z-score gives you an idea of how dominant a player was when compared to other players during his career.
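The aggregation rule can be sketched in a few lines. The season data below is entirely made up; the point is the mechanics: sum a player's per-season z-scores, skipping any season under 10 carries per game:

```python
import math

def z_score(x, values):
    """Z-score of x relative to the list of values."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return (x - mean) / sd

def aggregated_z(player_seasons, league_by_year, min_carries_per_game=10):
    """Sum of per-season z-scores for seasons meeting the usage cutoff.

    player_seasons: list of (year, rush_yards, carries, games) tuples
    league_by_year: dict mapping year -> list of league rushing totals
    """
    total = 0.0
    for year, yards, carries, games in player_seasons:
        if carries / games < min_carries_per_game:
            continue  # dropped, per the 10-carries-per-game rule
        total += z_score(yards, league_by_year[year])
    return total

# Toy data for illustration only
league = {
    1996: [1500, 1200, 1100, 1000, 900, 800],
    1997: [1600, 1300, 1150, 1050, 950, 850],
}
seasons = [
    (1996, 1500, 320, 16),  # 20 carries/game: qualifies
    (1997, 900, 120, 16),   # 7.5 carries/game: excluded
]
print(round(aggregated_z(seasons, league), 2))
```

Because below-average qualifying seasons contribute negative z-scores, a long mediocre tail can drag a career total down -- which is exactly what happens to Marcus Allen below.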
Here's what the z-scored table looks like:
[Table 4: career rushing leaders by aggregated z-score -- Last Name, First Name, Career Yards, Rush Rank*, Z-score Rank]
Note: The rush rank in Table 4 may vary slightly from the all-time rushing list because seasons where players averaged fewer than 10 carries per game were dropped from the z-score analysis.
What immediately stands out is that players like O.J. Simpson and Earl Campbell rank very high when using z-scores, but only rank 14th and 20th, respectively, when looking at all-time rushing yards. There's a straightforward explanation for these results. Both running backs played roughly a decade (Simpson played 11 seasons, Campbell played nine), and for most of their careers they dominated (both had five All-Pro type seasons). So even though both Simpson and Campbell declined as they approached retirement, the fact that they were so much better than their counterparts boosted their aggregated z-score.
On the other hand, a player like Franco Harris, who ranks 10th all-time in rushing yards, only ranks 11th when aggregating z-scores. Why? Primarily because Harris played for 13 seasons and he was always just above average in terms of yards gained per season. In fact, Harris never gained more than 1,250 yards or fewer than 600 yards in any season where he appeared in at least 12 games.
An even more glaring example is Marcus Allen. You'll notice that he's 9th on the all-time rushing list, but he's missing from the aggregated z-score list. Actually, his aggregated z-score is an abysmal 314th! In fact, his aggregated z-score over a 16-year career is -2.07 (yep, that's a negative sign preceding the 2.07). This is such an extreme case that it almost defies logic. Allen made a very long career out of being one of the most consistent running backs ever, but when it came to running the ball he was consistently just below average (Allen is hurt, of course, because we're not counting receiving here). In 1985, Allen rushed for 1,759 yards -- far and away his best season. The other fifteen seasons he averaged just under 700 yards per season. In Allen's case, his longevity actually worked against him in terms of his aggregated z-score. Because his rushing totals were below average almost every season, his z-scores were largely negative.
Obviously, this is one weakness of using z-scores -- at least as a measure of career success. When people mention Marcus Allen, it usually conjures images of that unbelievable run he had against the Redskins in the Super Bowl, not images of his final years in Kansas City. And that's the point -- new statistics are good for clarifying strategies and uncovering the underlying value in measures like total rushing yards. Still, statistics shouldn't obscure mind-boggling feats or great careers; they should supplement these experiences instead of replacing them.
That said, there is a fix. As Mike Tanier pointed out to me, "The career rankings suffer from Linear Weights syndrome: hang around and be slightly below average for 4 years, and suddenly your standing drops. You might consider using just the z-scores of the RBs six best seasons, for example." Yet another option might be to weight a running back's z-scores such that the early and late seasons in their career receive less weight than the middle, more productive seasons. This will give value to seasons where the running back was most effective and lessen the impact mediocre performances have on aggregated z-scores. This is something to be addressed in version 2.0.
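Both fixes are easy to prototype. Here's a sketch of Tanier's best-six-seasons idea alongside a career-shape weighting; the career z-scores and the ramp-in/ramp-out weights are arbitrary illustrations, not a scheme from the article:

```python
def best_n_aggregate(z_scores, n=6):
    """Tanier-style fix: sum only a back's n best single-season z-scores."""
    return sum(sorted(z_scores, reverse=True)[:n])

def weighted_aggregate(z_scores):
    """Alternative fix: down-weight the first and last seasons of a career.

    The weights ramp from 1/3 up to 1 over the first three seasons and
    back down over the last three -- an arbitrary choice for illustration.
    """
    n = len(z_scores)
    weights = [min(i + 1, n - i, 3) / 3 for i in range(n)]
    return sum(w * z for w, z in zip(weights, z_scores))

# Hypothetical career: five strong years, then a long below-average tail
career = [2.8, 2.5, 2.3, 2.1, 1.9, 0.4, -0.3, -0.5, -0.6, -0.8]
print(round(sum(career), 2))               # plain aggregate, dragged down
print(round(best_n_aggregate(career), 2))  # best-six version ignores the tail
print(round(weighted_aggregate(career), 2))
```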
Anyway, using z-scores lets us compare players not only to their peers, but also across different seasons. One of the benefits of this method is that it dispels the myth that milestones like 2,000 rushing yards in a season are the ultimate measure of dominance; as I've hopefully shown here, it also depends on the competition. Maybe run defenses are particularly weak in a season when running backs are particularly strong. Is having five running backs break 1,500 rushing yards in a season less impressive than having one do it? I think so, and that's one of the good things about using z-scores.
As always, there are sure to be better measures of running back success. Michael David Smith has done a lot of work on the subject. Last spring, Michael wrote an article (Introducing Leader Ratio) that compared running backs based on the difference in yards gained between the leading rusher and the runner-up. He followed that up with his Similarity Scores piece that used a classic baseball analysis tool to draw comparisons between running backs with similar numbers.
This article is by no means definitive, but hopefully, it sheds some light on how a running back's rushing performance shouldn't simply be taken as a raw number, but instead should be considered relative to how other running backs performed during a given season (or a career).