by Zach Binney
Less than a month before the 2016 season began, the NFL announced a few substantial changes to how it handled injuries. The biggest one -- at least from a fan(tasy football) perspective -- was a modification to the game status report component of the NFL injury report: eliminating the "Probable" designation for how likely players are to play in their upcoming game.
I wasn't sure how this change would affect NFL injury reports, so I have been eagerly waiting to amass enough data to examine this rule change. Now that the FO Injury Database has been updated and cleaned through the first half of 2016, let's take a look at the data.
An Injury Report Intro
To start, let me make sure everyone is on the same page about a few intricacies of the NFL's injury report. The "NFL Injury Report" actually comprises three separate documents:
1. Practice Reports: These are reports given by teams on Wednesday, Thursday, and Friday (for teams with Sunday games; the timing varies for other game days) that list the practice status of all players with "significant or noteworthy injuries." This language does give teams some wiggle room on exactly whom they put on their reports. Some teams, but not all (as far as I can tell), even regularly list players (mostly veterans) who just miss a practice for scheduled rest. Each injured player gets one of the following designations each day:
- Did Not Participate
- Limited Participation
- Full Participation
Players could previously also be listed as "Out," but that designation was also eliminated in 2016 to avoid confusion with the Game Status Report.
2. Game Status Reports: These are reports given by teams on Friday for Sunday games (or Wednesday/Saturday for Thursday/Monday games). They list a projection for how likely an injured player is to play in the team's upcoming game. Of note, a player listed on the practice report does not have to appear on the game status report if they are certain to play. The game status designations are:
- Out
- Doubtful
- Questionable
As noted above, through 2015 "Probable" had been a fourth option, but the NFL eliminated that this year.
3. In-Game Injury Report: Exactly what it sounds like. We won't waste more time on it here as it's not pertinent to our questions.
Impact of Removing "Probable"
There were a lot of prognostications about the effects of this change. I was... uncertain. The players previously named as "Probable" could have followed one of two paths: "Questionable" or off the game status report entirely.
Path 1: The new rule could result in a lot more "Questionable" tags, since the NFL can get a bit investigate-y if a player not on the game status report unexpectedly doesn't play.
Path 2: On the other hand, 90 percent or more of players marked "Probable" in a given week did in fact play in the team's next game, so maybe they'd fall off the injury report entirely without a "Probable" designation.
I really wasn't sure what to expect, and I thought different teams would probably take different tacks.
Alright, let's dig into the results. Most data here comes from the FO Injury Database, collected prospectively by dedicated interns since 2007 (though I have elected to only report data from 2009 onwards here to better reflect the current era). Daily 2016 practice report statuses came from weekly CBS practice report pages such as this one. To maintain parity across years, data for only Weeks 1 to 8 are reported.
Obviously, the probable bar went to zero in 2016 as the designation was eliminated. In the previous five years, the total number of probable and questionable player-weeks ranged from 1,409 to 1,666. On average, these years saw 987 probable and 518 questionable player-weeks. In 2015 there were 928 probable and 486 questionable player-weeks.
In 2016, meanwhile, we saw just 809 questionable player-weeks. If we assume that we would have seen the same data in 2016 as the average from 2011 to 2015 if "Probable" had not been removed (likely a pretty reasonable assumption looking at the chart), it looks like about 30 percent of "Probable" player-weeks went into the "Questionable" category and the remainder fell off the report [(809-518)/987]. Neat.
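That back-of-envelope arithmetic, using the player-week counts quoted above, looks like this:

```python
# Weeks 1-8 player-week counts quoted in this article
avg_probable = 987       # average Probable player-weeks, 2011-2015
avg_questionable = 518   # average Questionable player-weeks, 2011-2015
questionable_2016 = 809  # Questionable player-weeks, 2016

# Excess Questionables in 2016, expressed as a share of the old Probable pool
moved = questionable_2016 - avg_questionable  # 291 player-weeks
share_moved = moved / avg_probable            # ~0.295

print(f"{share_moved:.0%} of Probable player-weeks became Questionable")
```

This assumes, as noted, that 2016 would otherwise have looked like the 2011-2015 average.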
How Often do Questionable Players Play Now?
The official League definition of "Questionable" now is "uncertain that the player will play." Well, that's... vague. Previously the League said 75 percent of Probable, 50 percent of Questionable, and 25 percent of Doubtful players should play, but these percentages -- which weren't all that accurate, anyway, to be fair -- are now gone. So what does Questionable mean in 2016?
Turns out a "Questionable" designation used to mean that, on average, you had about a 50 to 60 percent chance of playing that week, though this varies a lot by team. So far in 2016, meanwhile, Questionable players have played 73 percent of the time. That's a substantial difference. Given the relative numbers of Probable and Questionable designations in the past few seasons, this is just about in line with what we would expect to see if 30 to 35 percent of Probables were now Questionable -- maybe a touch lower since many of the Probables who became Questionables were probably "more hurt" Probables (i.e., their risk of missing a game was likely above the overall Probable average).
Regardless, the Questionable designation was already a tough-to-predict hodgepodge before this year, and it has only gotten more heterogeneous. This led me to wonder: is there anything we can look at to better predict which players are going to suit up on game day? I'm going to stratify by team, injury type, and practice status to try and find out.
Predicting Questionable Players in the Post-Probable Era
Let's check on the percent of Questionables that play each week for a few of the major injury types. Our interns collect all the data they can from injury and more detailed press reports, but the categories are still going to be a very broad mix of many different kinds of injuries that may not belong together. My apologies for that.
Most injuries seem to be clustered right around the overall average of 73 percent. Players with injuries to the quadriceps/thigh (82 percent active) or head (concussions) (80 percent active) were the most likely to play. Concussions in particular make sense to me since the return-to-play portion of the NFL's concussion protocol, while it has no set timeline, is often hard to complete in five days but easier to complete in seven, leaving a lot of players as "Questionable" on the game status report but ready to suit up on game day. I'm not sure what to make of quad/thigh injuries -- if any trainers or physical therapists or physicians want to weigh in in the comments, please do.
On the other end, players with hip (69 percent active), groin (67 percent active), and calf (65 percent active) injuries were the least likely to be active. I'm not sure I have a particular explanation for that but, again, I would value any input trainers or medical folks can provide down in the comments.
I have also included error bars, which represent 95 percent confidence intervals (approximate binomial for the stats nerds). The intervals all overlap and most of them include the overall average of 73 percent, so it's possible there are no true differences among injuries and all we're seeing is random noise. But if I had to guess, I think we're seeing a real higher active proportion among concussions and maybe quad/thigh injuries, too. I'm not so sure about the lower ones.
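For the stats nerds who want to check my error bars: the "approximate binomial" intervals are just normal-approximation (Wald) intervals on a proportion. A minimal sketch, using the 73 percent overall figure from this article with an illustrative sample size of 100 player-weeks:

```python
import math

def binom_ci(successes, n, z=1.96):
    """Approximate (Wald) 95% confidence interval for a binomial proportion."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
    return p - z * se, p + z * se

# Illustrative example: 73 of 100 Questionable player-weeks active
lo, hi = binom_ci(73, 100)
print(f"73% active, 95% CI ({lo:.1%}, {hi:.1%})")
```

The intervals shrink as the number of player-weeks grows, which is why the team-level extremes discussed later (with only seven to nine player-weeks) should be taken with a grain of salt.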
As I noted above, teams issue three "practice reports" during the week (or only two if they play on Thursday) that tell us if injured players had "Full Participation" (FP) or "Limited Participation" (LP), or that they "Did Not Participate" (DNP). If a player only had one or two of the three practice statuses we would expect for a non-Thursday game (for example, if a guy first strains his hamstring in a Thursday practice), we assumed the other practices were "Full Participation."
I wanted to stratify these statuses in a couple ways:
1. Last practice status before a game (typically Friday for a Sunday game).
2. Practice status trajectory. This could be "Flat," "Up," or "Down." "Up" means a player improved between any earlier practice and his final practice. "Down" means a player's final practice status was lower than all previous practice statuses. All others were "Flat." You can certainly debate the merits of this categorization scheme, but I think it correctly captures most cases and is good enough for our purposes.
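Here is one way to code up that trajectory scheme. This is my reading of the rules above, not the exact script used to build the database; statuses are ordered DNP < LP < FP:

```python
# Ordered practice statuses: higher number = more participation
ORDER = {"DNP": 0, "LP": 1, "FP": 2}

def trajectory(statuses):
    """Classify a week's practice statuses as 'Up', 'Down', or 'Flat'.

    'Up'   -> final practice better than at least one earlier practice
    'Down' -> final practice worse than every earlier practice
    'Flat' -> everything else (including a single lone practice report)
    """
    *earlier, final = [ORDER[s] for s in statuses]
    if any(final > e for e in earlier):
        return "Up"
    if earlier and all(final < e for e in earlier):
        return "Down"
    return "Flat"

print(trajectory(["DNP", "LP", "FP"]))  # -> Up
print(trajectory(["FP", "FP", "DNP"]))  # -> Down
print(trajectory(["LP", "LP", "LP"]))   # -> Flat
```

Note that "Up" wins ties here: a player whose final status beats any earlier status counts as trending up even if it also dipped below another one, which matches my reading of the definitions above.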
So, let's take a look at the data.
Hey, now we're cookin'! Overall, if your final practice status was FP/LP/DNP, you played 87/73/37 percent of the time. We often hear fantasy experts saying that they are concerned if a player doesn't practice late in the week; this data bears that out, but it's also not a death sentence.
As far as the trajectories go there's a bit less variation. Overall, Down/Flat/Up trajectories meant you played 71/67/82 percent of the time. So if you're trending up (as I've defined it) you're more likely to play, whereas flat and down trajectories are both a bit below the overall average.
When we stratify by both, we see that among player-weeks with full participation in their last practice, those who were trending up (93 percent active) were way more likely to suit up than full participants overall (87 percent).
Among players with limited participation at their last practice report, there wasn't a whole lot of variation by trajectory. Player-weeks with down/flat/up trajectories played 73/75/70 percent of the time. So it looks like if a player is limited at practice late in the week, they're basically of average likelihood to play.
Among players who did not participate in their final practice, those who had declined from midweek actually had a decent chance to still play: 69 percent. That's kind of surprising to me -- it may be that semi-injured guys are just being given Friday off. Meanwhile, if a guy doesn't practice all week his chances of playing are pretty low at 24 percent. Honestly, that's still higher than I was expecting.
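Pulling the stratified point estimates from this section together into one rough lookup (these are the percentages quoted above for Weeks 1-8 of 2016; treat them as rough priors, not precise probabilities -- and note that a final FP can't trend down, and a final DNP can't trend up, under the scheme I defined):

```python
# Percent of Questionable player-weeks that were active, by
# (last practice status, trajectory) -- point estimates quoted in the article
ACTIVE_PCT = {
    ("FP", "Up"): 93,
    ("LP", "Down"): 73,
    ("LP", "Flat"): 75,
    ("LP", "Up"): 70,
    ("DNP", "Down"): 69,  # declined late in the week but still often played
    ("DNP", "Flat"): 24,  # did not practice all week
}

def rough_play_chance(last_status, trend):
    """Look up a rough active percentage; None if the combo wasn't quoted."""
    return ACTIVE_PCT.get((last_status, trend))

print(rough_play_chance("DNP", "Flat"))  # -> 24
```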
Teams have historically exhibited quite a bit of variation in what proportion of their "Questionable" players suit up on Sunday. Did this continue through the first eight weeks of 2016?
Apologies for the small text, but I thought it was still easiest to look at this information on one chart. Overall you see quite a bit of variation by team -- from Cincinnati with 100 percent of questionable players suiting up, to Tennessee with just 33 percent. Not surprisingly, both of these extremes have extremely small sample sizes (seven and nine player-weeks, respectively), so I imagine we can expect some regression to the mean as the season drags on.
Interestingly, we see a pretty even distribution of proportions across teams -- they follow a gradation rather than falling into "high" or "low" clusters. The teams don't break into camps where some have 90 percent of their questionable players play and others 40 percent. They may very well all have a common understanding of "Questionable" but just be experiencing random variation. So I would caution against using this data -- at least just this eight-week set -- to make any reliable predictions about what teams will do next with their Questionable guys.
As a sidenote, New England -- where Bill Belichick has a reputation for... strategic use of the injury report -- has listed the second-most (64) player-weeks as Questionable through eight weeks (Chicago has 67). However, those Questionable Patriots have played at a near-average clip, suiting up 67 percent of the time. I'm a Dolphins fan, and even I have to admit that this doesn't support the shady Belichick narrative.
Conclusions and Limitations
Our data suggests about 30 percent of Probable players from previous years became Questionable in 2016, with the remainder falling off the injury report. Questionable players jumped from a 50 to 60 percent chance to play to a 73 percent chance to play. This probability doesn't vary a lot based on injury type, but there is quite a bit of variation by team, and especially by practice status. Practice status, in particular, can give you a good idea of who is likely to play before game day, which is, I guess, useful for anyone still in their fantasy playoffs. (*Cries softly.*)
All that said, your most reliable source of information is always going to be individual reports on game day about whether a guy is playing or not. Also, my data says nothing about whether a guy plays his regular number of snaps or not. But this data helps provide some context that can inform rough guesses earlier in the week about whether a player is going to play, and that's not not useful.
Zach is a freelance injury analyst and a PhD student in epidemiology. He writes about injuries at NFLInjuryAnalytics.com; you can also follow him on Twitter @zbinney_nflinj. He loves Minor League Baseball and lives in Atlanta.