03 Apr 2017
Guest column by Zachary O. Binney
I saw a remarkable stat reported last week at Pro Football Talk: that starting quarterbacks missed only 35 games in 2016, compared to 76, 77, and 59 in 2013, 2014, and 2015, respectively. The claim is sourced to Peter King's MMQB column from March 27, but unfortunately I haven't been able to find a source document for the actual numbers. King sources this to "NFL injury research" but is non-specific beyond that. What's more, per King, the NFL's Competition Committee views this data as evidence that the new quarterback protection rules are having their intended effect.
Now that seemed like a huge drop to me, and the numbers for 2016 seemed implausibly low. I did some quick calculations, and just three injuries -- Teddy Bridgewater (16 games) plus Tony Romo (nine) plus Jay Cutler (11) -- are enough to get us to 36 games missed by starting quarterbacks in 2016. That's already more than the number reported by the NFL and King.
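That back-of-the-envelope check is simple enough to script. Here is a minimal sketch in Python; the games-missed figures come from the examples above, and everything else is purely illustrative:

```python
# Quick smell test: do three well-known 2016 injuries alone already
# exceed the NFL's reported total of 35 starter games missed?
known_2016_injuries = {
    "Teddy Bridgewater": 16,  # knee injury in training camp
    "Tony Romo": 9,           # preseason back injury
    "Jay Cutler": 11,
}

nfl_reported_total = 35
subtotal = sum(known_2016_injuries.values())

print(f"Three injuries alone: {subtotal} games missed")
print(f"Exceeds NFL figure of {nfl_reported_total}: {subtotal > nfl_reported_total}")
```

Three injuries already sum to 36 games, one more than the NFL's reported season total for all 32 teams.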
Epidemiologists spend a large chunk of our time just counting things. As it turns out, that's not grade-school math. It's really, really hard, and a correct count relies on a) counting the right things, and b) doing so in a consistent manner. So I wanted to go into the FO injury database and see if I could replicate the NFL's numbers. Spoiler alert: I couldn't really.
To figure out whether a stat is counting the right thing, the first step is to make sure we know what, exactly, we are trying to count.
MMQB says the following about the numbers: "The NFL defines this statistical category as being games missed by the declared starting quarterback of a team. So even though, for example, Cody Kessler did not open 2016 as the starting quarterback, he was knocked out of two games that he started (concussions) and missed a total of four games because of them. Those count on this list."
Well, "declared starting quarterback" is a little ambiguous, but one thing is clear: you don't have to be the Week 1 starter to count in these stats. I'm missing some information, but I'm going to now define what I think the goal of this statistic is:
A measure (count) of the toll of games missed due to injury by a quarterback who, but for his injury, would have started the game that he is counted as missing.
Note that this is different from a statistic that would help us assess whether the quarterback protection rules worked. More on that below.
Assuming the above is our goal -- and the goal of the NFL stat reported in MMQB -- we should count both in-season injuries to starters and certain preseason injuries to major quarterbacks. For example, I would count both Tony Romo (preseason back injury that made him unable to play for nine games) and Teddy Bridgewater (shredded his knee in training camp, knocking him out for 16 games) this year.
It's possible the NFL had another goal in mind with these starting quarterback weeks-missed-due-to-injury counts, but that is not made explicit in the definition from MMQB. For example, they could be counting only in-season injuries. That would lead us to exclude Bridgewater and Romo this year, as well as past injured players such as Sam Bradford from 2014.
They could also only be counting in-game injuries, which would actually make sense if you want to assess the effectiveness of quarterback protection rules. That would make us exclude Bridgewater, but not Romo or Bradford in 2014.
In fact, if you're trying to assess the effectiveness of quarterback protection rules, what would make the most sense is to look at just in-game contact injuries. That complicates things even further.
Can you see how counting the right things gets complex very quickly? It's all goal-dependent. But whatever your goal, one thing is a constant: you have to count the same things consistently year-over-year.
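To make that goal-dependence concrete, here is a hedged sketch using hypothetical injury records (not FO's actual database; the flags on each injury are my reading of the examples above, and Romo's contact flag is an assumption for illustration). The same three injuries yield different totals under different inclusion criteria:

```python
# Sketch: the same underlying injuries produce different "games missed"
# totals depending on which inclusion criteria you adopt.
from dataclasses import dataclass

@dataclass
class Injury:
    player: str
    games_missed: int
    in_season: bool   # occurred after the season began?
    in_game: bool     # occurred during a game?
    contact: bool     # caused by contact? (assumed for illustration)

injuries = [
    Injury("Bridgewater", 16, in_season=False, in_game=False, contact=False),
    Injury("Romo", 9, in_season=False, in_game=True, contact=True),
    Injury("Cutler", 11, in_season=True, in_game=True, contact=True),
]

def total(criteria):
    """Sum games missed over injuries satisfying every criterion."""
    return sum(i.games_missed for i in injuries
               if all(c(i) for c in criteria))

print("All injuries:        ", total([]))                            # 36
print("In-season only:      ", total([lambda i: i.in_season]))       # 11
print("In-game only:        ", total([lambda i: i.in_game]))         # 20
print("In-game contact only:", total([lambda i: i.in_game and i.contact]))
```

Depending on the definition, the exact same season produces a count of 36, 20, or 11 games missed from these three injuries alone.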
There's another wrinkle in our simple counts, though. Every week there have to be 32 starting quarterbacks in the NFL (not counting byes). So consider the Vikings this year: we're already dinging Bridgewater with 16 missed games, but other Vikings starting quarterbacks (e.g. Sam Bradford, who is unironically all over this quarterback injury post) were still at risk of missing games due to injury every week. If Bradford had torn his ACL in Week 5, we would be dinging the Vikings for 27 weeks missed already, with another quarterback at risk for injury the next 11 weeks.
Meanwhile, a team like the Titans that only lost Marcus Mariota late in the year didn't have the ability to rack up that many weeks missed.
In layman's terms, there's the potential for a sort of multiplier effect on games missed when there are big injuries like Bridgewater and Romo, especially when they occur before the season begins. How do we deal with that?
Well, I'm actually going to ignore this issue for our analysis. There are two reasons, and they get back to the goal of our analysis. I want to compare our data to that reported by MMQB, so I need a count of injuries. And as I've interpreted it, the goal of the data reported in MMQB is to get a metric for the toll in games missed that injuries took on starting quarterbacks each year -- that's a simple count.
Let's dig into our database now. I calculated the number of games missed for starting quarterbacks from 2013 to 2016. Fortunately the FO staff already tracks whether an injured player would have been a starter on a weekly basis, so it was no problem to identify games missed according to our definition above. For example, in 2014 Matt Cassel missed 13 games with a broken foot, but he was only likely to start two of them before Teddy Bridgewater returned in Week 6. Thus he only counts for two games in our total below.
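The "would-have-started" rule from the Cassel example boils down to a set intersection: count only the missed weeks in which the player was projected to start. A sketch, with week numbers chosen purely for illustration (not taken from the FO database):

```python
# Sketch of the "would-have-started" counting rule.
def games_counted(missed_weeks, likely_starter_weeks):
    """Count only missed games in weeks the player would have started."""
    return len(set(missed_weeks) & set(likely_starter_weeks))

# Illustrative week numbers: Cassel misses 13 games, but is projected
# as the starter for only two of them before Bridgewater's return.
cassel_missed = list(range(4, 17))   # 13 games missed
cassel_likely_starter = [4, 5]       # projected starts before Week 6

print(games_counted(cassel_missed, cassel_likely_starter))  # 2
```

Only the two overlapping weeks count toward the season total, matching the treatment of Cassel's 2014 injury above.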
Here's what I found:
[Table: Starting QB Games Missed Due to Injury, 2013-2016]
Using our data, we simply don't see the dramatic decline reported by MMQB and PFT. What variation there is looks random, with no discernible time trend.
Because the data source isn't public, I don't know exactly how the NFL's numbers were calculated. There must be some set of exclusion or starting-classification criteria that conspired to artificially depress this year's stats. But even if you excluded Bridgewater (injured in a training camp practice), Romo (preseason), Robert Griffin (maybe would have been benched for some games he missed), and Geno Smith (same as RGIII), that would still only drop our figure to 47, substantially above the other figure of 35.
So did starting quarterback injuries really plummet in 2016? No. Our data doesn't show any meaningful drop. I'm convinced that there's some quirk in how the NFL is tracking this stat that makes the 2016 drop look bigger than it was. Perhaps they had some different criteria for defining a starting quarterback? Unfortunately, we don't know. But there's just no way that 35-game figure is an accurate representation of time missed by starting quarterbacks this year. It fails even the weakest smell test.
As for whether the protection rules are working, the jury is still out. We'd really want to count in-game contact injuries (or resulting games missed) to figure this out, and that would require a little more finessing our data. I don't find the NFL's numbers to be compelling evidence for the rules' effectiveness, though. For one thing, we don't know what on earth they actually counted.
For another, why would a decline start in 2016? I'll admit it could be that low hits to the quarterback were a "point of emphasis" for referees this year, but that indicates we don't need new rules, just proper enforcement.
Third, there's just a ton of randomness (noise) in this data. We would need to see a sustained drop over multiple years in the right metric before making any final judgements on these rules.
That's not to say these rules are not working or that I'm against the rules; I don't know and I'm not, respectively. But I am against extrapolating beyond what the data can tell us (or, in this case, concluding anything from potentially fatally flawed data).
On the subject of noise, PFT brought up this point: even if we take the NFL's data at face value, a count like this is very sensitive to luck (not the Andrew variety) with respect to catastrophic injuries and their timing. A couple of severe freak injuries earlier rather than later in the season could cause a massive swing in this count. What if Derek Carr's or Marcus Mariota's freak injuries had come in Week 2 instead of Week 16? How would these numbers look then?
However, while this argument is certainly correct, I don't think it undermines the assertion that the quarterback protection rules are working. There's a reasonable case to be made that increased quarterback safety both delays and prevents quarterback injuries, so couldn't injuries coming later in the season be a result of the rules? There are certainly more fundamental concerns with the NFL's numbers.
I don't know what the numbers reported in MMQB/PFT are counting such that they arrive at 35 games missed in 2016 but 70-plus in 2013 and 2014. The 2016 figure is implausibly low as a measure of starting quarterback injury toll.
I'm even less convinced that they're counting the right things to be able to make any statement about the effectiveness of quarterback protection rules. It's possible I'm wrong, but figuring that out would require more details and transparency from the NFL on their injury research.
Zach is a freelance injury analyst and a Ph.D. student in Epidemiology focusing on predictive modeling. He consults for an NFL team and loves Minor League Baseball. He lives in Atlanta.
(Ed. Note: Discussing 2016 injury numbers naturally brings up the question of when we'll be running 2016 Adjusted Games Lost numbers, and the answer is "next week." -- Aaron Schatz)