Writers of Pro Football Prospectus 2008

02 Mar 2010

On Dunta Robinson And Charting

I talked with our friend Steph Stradley from the Houston Chronicle yesterday regarding Dunta Robinson's charting numbers and how difficult it is to actually chart games.

Posted by: Bill Barnwell on 02 Mar 2010

30 comments, Last at 05 Mar 2010, 7:10pm by Just Another Falcons Fan

Comments

1
by SteveNC (not verified) :: Wed, 03/03/2010 - 5:52am

Very informative article, thanks. I didn't know about Pro Football Focus.

2
by DeltaWhiskey :: Wed, 03/03/2010 - 6:30am

"As for Pro Football Focus -- here's the thing. I love the idea of what they do. Love it. The problem is that you can't do anything they're suggesting they can with any sort of comfortable reliability. If anyone could, we'd have started doing it when we started the charting project four years ago."

"If you make 100 guesses about what's happening on a play, and you get 15 of them right and 85 of them wrong, even if you mean well, you're doing the concept of charting a disservice."

But what if 2009 DYAR correlates with PFF overall scores?

For QB r = 0.889 n = 40 p < 0.0000001
For WR r = 0.786 n = 89 p < 0.0000001
For RB r = 0.700 n = 46 p < 0.0000001
For TE r = 0.465 n = 47 p < 0.0005

And what if DVOA correlates with PFF Team Overall Scores

For OFF DVOA and OFF OVR r = 0.806 n = 32 p < 0.0000001
For DEF DVOA and DEF OVR r = -0.538 n = 32 p < 0.001
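(If anyone wants to replicate these numbers, here is a minimal sketch of how each r/n/p can be computed, assuming two lists of scores paired by player; the values below are made-up placeholders, not the actual data.)

```python
from scipy.stats import pearsonr

# Hypothetical paired scores for the same QBs: DYAR values and
# PFF overall grades, aligned player-by-player.
dyar = [1200, 950, 870, 500, 310, -50]
pff_overall = [35.1, 28.4, 22.0, 10.5, 4.2, -8.3]

r, p = pearsonr(dyar, pff_overall)  # Pearson's r and two-tailed p-value
print(f"r = {r:.3f}, n = {len(dyar)}, p = {p:.7f}")
```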

The decreasing magnitude of the correlation coefficients from QB to TE may be due to the fact that DYAR requires a player to show up in the game log to be counted, while PFF attempts to measure all facets of a player's game (like blocking) that don't show up in the box score. For example, Matt Spaeth appears in the PFF data, but not in FO data. Almost everything of importance a QB does gets captured in the game log, while perhaps only half or less of what a TE does is. Therefore, QB DYAR and the PFF QB score are measuring similar constructs, while TE DYAR is more restricted in its measurement of what a TE does.

The correlation between OFF DVOA and PFF OFF OVR certainly suggests that both measures are capturing similar constructs.

Regarding the defensive correlation coefficient: while statistically significant, it certainly is not as robust a correlation as the OFF correlations. Several explanations exist. First, DEF DVOA in general, when compared to OFF DVOA, correlates less well with other measures. For example, the correlation of OFF DVOA with points scored is 0.822, while the correlation of DEF DVOA with points allowed is 0.766. However, the magnitude of the difference between the FO and PFF offensive and defensive numbers (absolute value 0.268 - remember the DEF correlation is negative b/c DEF DVOA goes in a negative direction while PFF is positive for good) is greater than the typical difference between DEF DVOA and other statistics. This suggests that either PFF defense scores measure a slightly different construct than DEF DVOA, and/or this is where some of Bill's criticism of PFF methodology has validity.

RE: Not being able to tell who is on the field. FO uses the full population to construct their measures - every NFL play goes into the measures generated; however, if PFF only scores the players they can clearly identify as being involved in a play, then they are taking measurements from a "sample." Sampling is a technique used in a lot of scientific research. Now regrettably, PFF's sample is not random but a convenience sample, which comes with a variety of limitations and potential biases. However, this alone does not invalidate PFF methodology. It would be nice if PFF provided data on how many players were unaccounted for.

"The grading thing is even more spurious." Just b/c NFL teams don't do it, doesn't make it plausible but false, or wrong or invalid. PFF does not claim to be trying to provide a service to NFL teams. Obviously, PFF doesn't attempt to score someone by what they're supposed to do (as this would add a whole different type of error and bias), they score what they observe. This methodology has limitations (what should have happened v. what happenned = measurement error), but this does not necessarily invalidate the measures outright. I'm sure Mike Tanier doesn't grade his students on what they intended to answer on exam, or what they were supposed to answer on an exam either.

Bill, you certainly raise some valid concerns about PFF methodology; however, to summarily dismiss PFF's work in the manner you outlined is premature. The correlations of PFF measures with FO measures provide preliminary evidence of the validity of their methodology. This is only one year of data (2009) I've examined; it may be a fluke finding, and somebody else can explore this further. If the PFF data is valid, the real question for me is: what use is it? Does it tell us anything above and beyond what traditional and/or advanced metrics such as FO stats tell us?

3
by Vince Verhei :: Wed, 03/03/2010 - 7:35am

FO uses the full population to construct their measures - every NFL play goes into the measures generated; however, if PFF only scores the players they can clearly identify as being involved in a play, then they are taking measurements from a "sample."

From PFF's front page (and this is their emphasis, not mine):

"For every game we analyze and grade every player on every play to provide you with the most in-depth statistics you can find anywhere outside the team's film room."

So they claim to be evaluating everything, and not sampling. And as Bill noted, that suggests a lot of guesswork.

4
by DeltaWhiskey :: Wed, 03/03/2010 - 7:55am

However, if you move beyond the eye-catching front page, you find more detail about the process.

"If you're not 80% sure what's gone on then don't grade the play. The grades should stand up to scrutiny and criticism. It's far better to say you're not sure than be wrong. However, this is not an excuse for chickening out on making a judgment. What we definitely DO NOT do is raise or lower the grading because we're not sure. Giving -0.5 rather than -1 or -1.5 because you can't be certain what went on is wrong. The correct score is 0."

"As with any type of analysis we are always at the whim of the TV companies who seem to think showing a QB or HBs face right up until the snap somehow makes good coverage. We do the best we can but can't guarantee to cover every play."

http://www.profootballfocus.com/about.php?tab=about#grad3

5
by drobviousso :: Wed, 03/03/2010 - 11:13am

The grading thing is even more spurious. NFL teams don't chart other team's stuff like the way they're suggesting they do, because you can't do it with anything close to reliability.
I don't have a dog in the PFF spat, but that seems like a stunningly bad argument from a site that is generating new stats not used by NFL teams.

8
by dmb :: Wed, 03/03/2010 - 12:19pm

Great idea, DeltaWhiskey, and some interesting results. As you suggested, I think it lends some evidence to the belief that PFF's charting is probably accurate enough to be useful, but not enough to be an end-all, be-all source. (Then again, what is?) I think the fact that the correlations drop with each position that gets less face time is probably indicative of some limitations at PFF. It would be interesting to see if the variance is higher for those positions over time.

As for Bill's diatribe, I just don't understand it. Yes, PFF has some very obvious limitations due to the material that's available to them, and sure, they could do a better job of making that explicit. But here's the question he was asked: "Do you have any numbers on how Robinson did relative to the rest of the Texans' corners in 2009? There's numbers floating on the interwebs (Pro Football Focus) that make Robinson look very poor."

Bill then gives his numbers and spends one paragraph explaining them. He then gives four paragraphs trying to discredit PFF -- and not just trying to explain how the two methodologies relate to each other in the case of Robinson, but trying to discredit their entire approach. It just seemed petty, especially considering drobviousso's very good point. Instead of dismissing PFF out of hand, perhaps FO could consider some collaboration. It gets tiring to see my favorite site simply ignore the possibility that anyone else might be doing something useful.

9
by DeltaWhiskey :: Wed, 03/03/2010 - 12:45pm

Since you asked

Standard Deviation

for PFF QB = 21.90
for PFF WR = 7.96
for PFF RB = 7.79
for PFF TE = 10.17

for DYAR QB = 833.36
for DYAR WR = 139.90
for DYAR RB = 103.88
for DYAR TE = 100.32

" It gets tiring to see my favorite site simply ignore the possibility than anyone else might be doing something useful."

Seconded... FWIW, the correlation of the PFF QB scores with Brian Burke's recently posted WPA and EPA numbers is r = 0.840 and r = 0.912, respectively. For DYAR it's r = 0.891 and r = 0.947. It also gets tiresome seeing these sorts of statements not supported by any evidence, statistical or otherwise. These computations took me less than an hour grand total to compile and calculate.

10
by dmb :: Wed, 03/03/2010 - 12:54pm

More awesome/interesting info, though I actually meant looking at the year-to-year variance for each player, then finding the mean by position. Obviously, it'll be a little while before we can do that. :)

11
by Aaron Schatz :: Wed, 03/03/2010 - 1:05pm

Hi. We're not ignoring the possibility that other people are doing something useful. I have, in fact, talked a bit with PFF about the idea of joining forces. (For example, in case you are wondering how they manage to chart things so much faster than FO, I can tell you that the main four guys are apparently independently wealthy or something and don't have to work real jobs.) The problem is not with their math; it is with the issue of how much it is possible to chart through television camera angles. These correlations are sort of meaningless because the issues with television camera angles primarily relate to their attempt to judge non-skill positions.

You haven't seen criticism by FO writers of Brian Burke's WPA and EPA, right? That's because those ratings are based on standard play-by-play, just like ours. At that point it simply becomes a question of what statistical methodology you prefer, rather than a question of whether the underlying data is trustworthy.

The best people to comment on the limits of accuracy with game charting are probably the FO game charting volunteers. I would ask people not to be uber-critical of Bill's comments unless they have tried charting games themselves.

15
by Dan :: Wed, 03/03/2010 - 2:36pm

I've run some numbers on other positions, and found close relationships between the PFF grades and the objective statistics that are tracked by PFF (which tend to be similar to the objective statistics tracked by FO game charters). For cornerbacks, their yards allowed, interceptions, and passes defensed explain around 65% of the variation in their coverage grades. For the defensive line, their QB sacks, hits, and pressures, along with the number of snaps and their position, explain about 80% of the variation in their pass rush grades.
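For clarity, here is a minimal sketch of that kind of variance-explained check, assuming a per-player table of objective stats and the corresponding grade (the column choices and numbers are invented for illustration, not PFF's actual data):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical cornerback rows: yards allowed, interceptions, and
# passes defensed; y holds the same players' coverage grades.
X = np.array([
    [820, 1,  5],
    [610, 4, 12],
    [950, 0,  3],
    [700, 2,  8],
    [560, 5, 14],
])
y = np.array([-6.5, 8.2, -11.0, 1.4, 12.3])

model = LinearRegression().fit(X, y)
print(f"R^2 = {model.score(X, y):.2f}")  # share of grade variance explained
```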

17
by dmb :: Wed, 03/03/2010 - 4:16pm

Thanks for the reply, Aaron. It's encouraging to hear that you've been open to some sort of collaboration with them. I think I went a bit far in suggesting that FO doesn't take anyone else seriously, but I do think a whole lot of the readers here would love to see more open dialogue about some of the other work that's being done. On a site dedicated to "intelligent analysis," it seems odd that there's a link to Peter King once a week, but the only time I'm sent to Burke or PFF or beatpaths (among others) is if a reader mentions it. To be fair, I know there have been links to (among others) the blog at pro-football-reference before, but I do think there's some legitimacy to the feeling that this site isn't always the most open to others' work. So I haven't seen any criticism of Burke's WPA or EPA, but I haven't really seen any sort of acknowledgment of it, either. (That's not to say that there hasn't been any; I may have missed it.)

Also, I wanted to clarify that I do recognize that the limitations of TV broadcasts are very real; that wasn't my issue with Bill's comments in the article. My issue is that Bill was asked a question about FO's numbers pertaining to a particular player, and how they differed from PFF's. Rather than focus most of his explanation on the FO numbers, or even why PFF's methodology might yield different (or inaccurate) results in the case of that particular player, he decided to talk about PFF's methodology in general. He only brought up one particular issue that really pertained to Robinson -- that receivers' cuts are often out of view -- and brought up some other valid but irrelevant examples. (Trying to infer blocking assignments certainly isn't likely to produce perfect results, but it also has nothing to do with PFF's assessment of Dunta Robinson.) The situation just didn't call for it. If you're going to look at their overall ability to assess player participation and effectiveness accurately, find a way to actually evaluate it -- for example, compare the charting of a few games to see how they match up -- and post it as an XP or article.

6
by drobviousso :: Wed, 03/03/2010 - 11:13am

The league needs 31 more Stephanies.

7
by Mr Shush :: Wed, 03/03/2010 - 11:29am

Yeah, between her, Zierlein and Burge I think we Texans fans are truly spoiled. It's not like anyone forces us to read Justice.

12
by The Guy You Don't Want to Hear (not verified) :: Wed, 03/03/2010 - 1:23pm

Agreed. Every time they link to her blog, I think, "If Kubiak weren't there and Shanahan were still in Denver, I would become a Texans fan just for this."

21
by Steph :: Thu, 03/04/2010 - 1:27am

Thank you for the very kind words. For an additional follow-up to this thread, I suggest reading this item I put together at FanHouse:

Pro Football Focus: How Do They Put Their Numbers Together

I find this subject fascinating, though there is no amount of potential future money in this world that would make me spend 70 hours a week charting games. I admire those who can do that, though it almost gives me a headache just to contemplate it.

13
by Temo :: Wed, 03/03/2010 - 1:41pm

Some points, since I've looked at PFF's stuff a lot in the past:

- PFF's player participation is very accurate, and it's because they don't get their info by looking at TV angles. They use a different method, and all my attempts to verify their information have come up solidly in their favor, as far as accuracy goes. (Thanks to Travis for pulling PFF's participation data for me, by the way)

- Delta, your correlations are a bit off; there are better ways to do them for RBs, WRs, and especially TEs. Rest assured that the correlations for these individuals actually increase. (For one thing, you want to use VOA, not DVOA, here)

- I used regression analysis and found that you can determine unit VOA (like run offense, run defense, pass offense, pass defense, etc.) with a fairly low error and fairly significant coefficients on individual player ratings.

- However, rather than the above giving me confidence in PFF's ratings, it makes me suspicious. Really, a human can grade players so well that you can determine unit VOA from individual player ratings? That seems far-fetched.

- I think, though I cannot be sure, that PFF is regressing their player ratings to VOA. If that seems like it defeats the purpose of the whole exercise, well, you'd be right.

- I still need to do more work on this.

14
by Bill Barnwell :: Wed, 03/03/2010 - 2:27pm

We went over the correlation of PFF data to DYAR in a different thread. Just because two pieces of data are correlated does not mean that the underlying methodology is valid. Again, 2H kneels have a great correlation to winning. I'm not surprised that their data correlates somewhat to ours, because it's impossible to watch a game and not have the results bear some resemblance to what's being measured by any metric. I mean, our stats say the Redskins' offense wasn't very good and that the offensive line really sucked. PFF's does too, because it's not that hard to tell when a team blows a block or when a guy comes free on a blitz. Assigning the blame for that to an individual player, though, is a serious problem, and while the outcome still might be a net negative for the line, it's very easy to get the one player whose fault it was wrong. On the other hand, if you're constantly giving them 0 grades when they "truly" deserve a -1 or a -2 or whatever, you're watering down the data without even realizing it.

The 80% idea is, again, a step in the right direction, but again, you can think that you're 100% sure of what a player's responsibility was on a play and still be totally wrong. This is where the offensive lineman example in my "diatribe" comes into play.

The only time I've ever seen anyone compare their player participation data to real player participation data is when someone in the comments compared Donald Lee's data on their site to what Tom Pelissero noted in his column. They were off by something like 15%. And that's a TE, which isn't that hard to see. For teams that rotate linemen in and out, that would be really difficult. For DBs? Even more difficult.

Will you get MOST of the data right? Sure, because it's not hard to tell the majority of the players on the field. Take the Chargers -- Vincent Jackson is tall. Malcolm Floyd is less tall. Darren Sproles doesn't look like LaDainian Tomlinson, who doesn't look like Mike Tolbert, and none of them are fan favorites like Jacob Hester. The offensive linemen don't really rotate out unless they get hurt, so you can pay attention and pencil them in for the same moves on every play. It would take a while, but you could do that with a reasonable amount of accuracy on offense. It would be harder on defense.

18
by dmb :: Wed, 03/03/2010 - 4:38pm

If you collaborate with them to see what they do differently from FO charters, then there's the possibility of a truly informed evaluation. (Temo suggested that PFF gets participation data from sources other than TV broadcasts ... wouldn't it be worth exploring what they might be doing, since that could be of use to FO charters?) Better yet, compare the results of charting specific player/games, and you could find out what they're inferring and why, and how likely it is to be accurate.

As for the correlations between PFF grades and metrics, you're right that they don't prove anything. But if we take it as a "given" that FO numbers are a fairly strong way to evaluate player performance in the context of their team, then another methodology attempting to measure the same thing should find pretty similar results. Does it guarantee that they're doing well? No. But it shows that it's possible, and without much evidence to the contrary, it seems there should at least be some interest in investigating it further.

I do agree that assigning individual responsibility gets really problematic, but there's only so much you can do about that. I've seen it done on here plenty of times -- diagramming a play generally requires some inference about what each player is supposed to be doing -- and it's definitely easier to acknowledge possible error when you're focusing on a few specific plays, and can give some written context for it.

22
by DeltaWhiskey :: Thu, 03/04/2010 - 5:30am

My concern was not and has not been whether PFF was right or wrong, but the manner in which Bill has dismissed PFF's work. Of greatest concern, in the other thread mentioned and in this one, he failed to understand the point that dmb clarified and said much better than I could - correlation suggests a possible relationship.

Next concern: " I'm not surprised that their data correlates somewhat to ours"
None of the correlations I presented suggested that FO and PFF data correlate "somewhat"; the magnitude of the correlations is quite large. Moreover, there are multiple correlations; therefore, it is not as simple to dismiss as kneel-to-win, which was a ridiculous argument. FWIW, the correlation between TE DVOA and PFF TE Pass is r = 0.861. Finally, it is interesting that when FO numbers correlate with things that FO believes in, the correlation is meaningful.

Finally, if PFF is constantly giving zeros in the manner Bill describes, then there is clearly a problem in the data; alternatively, if they are only grading players' performance they can clearly judge (what a reasonable person would do, but might not think to clearly spell out), then they are still grading a sample. We don't know perfectly what they're doing, but we have a better idea of what they're doing than of what FO is doing to create their metrics.

23
by Temo :: Thu, 03/04/2010 - 11:40am

It's quite inevitable that individual grades are well correlated with objective statistics. It's easy to sit down and say "hey, the TE caught a 20 yard pass, let me give him a good grade on it". Of course this type of grading would result in a strong positive correlation. Now you have to ask yourself what it means.

For positions like QBs, RBs, WRs, and TEs, 2008 PFF ratings do no better at predicting 2009 PFF ratings than do conventional statistics, or FO's advanced statistics.

For positions that are harder to grade, like CBs, it's even worse. My problem is that their player ratings don't seem to correlate much from 2008 to 2009 (r = 0.09, ICC = -0.0108 for CB ratings). In other words, there is no evidence that player ratings are correlated at all from 2008 to 2009. It is only a two-year sample (77 CBs qualified -- at least 250 snaps in each season), but in general, if the grading is picking up inherent CB talent, it's failing.
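A minimal sketch of that kind of stability check, assuming paired grades for the same corners in consecutive seasons (the values are invented); the ICC here is a simple one-way ICC(1) computed from the ANOVA decomposition:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical grades for the same CBs in back-to-back seasons.
g2008 = np.array([10.2, -3.5, 6.1, 0.4, -8.0, 12.7])
g2009 = np.array([-1.3, 4.0, -5.2, 7.8, 2.1, -0.6])

r, p = pearsonr(g2008, g2009)  # year-to-year correlation

# One-way ICC(1), treating the two seasons as k = 2 "raters."
scores = np.column_stack([g2008, g2009])
n, k = scores.shape
grand = scores.mean()
msb = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)
msw = ((scores - scores.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
icc = (msb - msw) / (msb + (k - 1) * msw)
print(f"r = {r:.3f} (p = {p:.3f}), ICC(1) = {icc:.4f}")
```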

Look at Pittsburgh: According to PFF, Deshea Townsend, William Gay, and Ike Taylor supposedly went from being 3 good corners in 2008 to average (Townsend) and two of the worst (Gay and Taylor). I know their pass defense fell a lot last year, but wouldn't a more rational explanation be that they no longer had Polamalu? Oh, and Bryant McFadden also went from being average on PIT in 2008 to being shitty on ARI in 2009, according to the player ratings.

It could be because CB quality is due more to scheme and teammates or something (i.e., missing a great cover safety as an insurance blanket), but in that case player grading just isn't telling you much anyway about specific player strengths.

Either way, I think most rational people would agree that other than as a way of vaguely quantifying things like "Darrelle Revis is really good" and "Orlando Pace sucked this season", the grading system is pretty dubious, barring further investigation.

26
by dmb :: Thu, 03/04/2010 - 1:00pm

Interception rates in year y-1 have little to no correlation to interception rates in year y, but they're still a very precise way of measuring which quarterbacks throw the most or fewest interceptions. I don't think anyone is looking at PFF for predictive value, and if they are, then they need to have their head examined.

As for measuring "inherent CB talent," I think you have good points that it's going to be very context-dependent (scheme and teammates matter a lot), and that just looking at grades won't tell you about that. But that criticism can be made of any quantitative measure in football.

Finally, major variation in CB metrics isn't exclusive to PFF. Two of the most famous FO "binkies" were cornerbacks who have had some very up-and-down seasons: Leigh Bodden and Fred Barnett. Heck, one of your examples -- Ike Taylor -- went from 67th to 25th to 65th in Stop Rate from 2006-2008. So I think the issue is mostly that any metric examining secondary performance has a fair amount to do with the scheme and personnel context. The best way to even try to get around that is to watch film to try to figure out who's doing their job, which is what PFF is trying to do. Is it perfect? Certainly not. Is there anything else available to us that's clearly superior? If so, I'd love to learn about it.

27
by Temo :: Thu, 03/04/2010 - 1:21pm

Interception rates in year y-1 have little to no correlation to interception rates in year y, but they're still a very precise way of measuring which quarterbacks throw the most or fewest interceptions.

I find that hard to believe. Perhaps Brett Favre will suddenly stop throwing interceptions in one season, like he did last season, but in most seasons he's had a steady interception rate. McNabb has steadily posted one of the lower interception rates in the league.

But that criticism can be made of any quantitative measure in football.

Yes, but no other quantitative measure attempts to assign individual player worth on such a granular scale. I suppose something like yards per carry would be the closest comp, but even there you have more year-to-year correlation than these player ratings.

Finally, major variation in CB metrics isn't exclusive to PFF.

There's something I'd like to test. Let's compare more objective statistics like CB success rate, YPA, and target rate to player scores. My hypothesis would be that those statistics would show greater stability and accuracy than player grading. I've got several ideas on how to attack this (and have had, for quite some time), but haven't had the chance to try them yet.

28
by dmb :: Thu, 03/04/2010 - 2:10pm

Interception rates: Fair enough, I should have included some links. Here you go. For more, check out this post at the Pro Football Reference blog.

I think you make a good point that PFF needs to be much more explicit about what their approach can tell you and what it can't, since what they're doing will almost certainly lead people to believe that they're isolating individual performance more than they (probably) actually can.

I also agree that it would be interesting to see how volatile defensive back metrics are, and if you end up putting something together, I'll be very eager to see it.

24
by Aaron Schatz :: Thu, 03/04/2010 - 12:15pm

Given how hard I have worked to explain the FO game charting project, I am really disturbed by the complaint that "we have a better idea of what they're doing than what FO is doing to create their metrics." Please feel free to email all questions on the game charting project and I will answer them fully. Feel free to ask questions about DVOA and DYAR as well. I have never, ever hidden what we do, except that I won't give out the exact numerical baselines that we use to determine the "average" that plays are compared to in DVOA.

In addition, I really feel that "Delta Whiskey" is giving a big middle finger to all the people who work hard on the FO game charting project, volunteers like Nate Eagan and Dan Haverkamp and Dave DuPlantis and Navin Sharma and Rivers McCown... and I can name 25 other guys. Those are the guys who do the game charting and if you think we're hiding something about how we create our metrics, you should really ask them, because they're the ones who are creating the game charting.

25
by dmb :: Thu, 03/04/2010 - 12:42pm

I may be wrong about this, but I think Delta Whiskey was referring to the formula for DVOA and DYAR. I'd certainly agree with you that game charting explanations have always been transparent -- something I really appreciate -- and the explanations of the proprietary metrics are as clear as possible without giving away the proprietary formulas. Of course, not having total transparency can make some discussions about DVOA/DYAR more difficult unless one of the writers chimes in, but I do think FO has been reasonably open about its methods. (As a "transparency" freak, I'd always love to see more, but given the goal of keeping DVOA/DYAR proprietary, I think a pretty good balance has been struck. Given the info provided in the "Stats Explained" section, I think one could make a pretty close replication of DVOA, but would probably struggle to duplicate it exactly.)

29
by DeltaWhiskey :: Fri, 03/05/2010 - 5:04am

Aaron, thanks for responding, and thanks to all of the FO staff and others that have responded to this thread.

Please let me clarify my provocative comments. First, dmb grasped the nature of my concern. There is a marked lack of transparency in the DVOA/DYAR measures that, if clarified, would not threaten FO's proprietary control. Aaron, I honestly don't expect the numerical baselines to be revealed, or any other "proprietary" information of that nature re: these measures. I appreciate how much effort you and your staff have put into this over the years, and you deserve all the money you can make from this. What I am referring to is the absence of some basic information regarding the process. For example, this year DVOA was updated. I assume (I shouldn't have to assume) that the DVOA formula is similar to a regression equation (i.e., certain numbers are weighted and are added or subtracted to yield the final number). You told us various correlations between DVOA and key metrics (wins?) improved with these changes and updates; however, there was no evidence that the change in these correlation coefficients was a statistically significant improvement (i.e., that the delta of the R-square was statistically significant). Moreover, and I think I've b*tched about this in other threads from time to time, when correlations are cited on this site, they rarely have their "p" values cited, not to mention other "standard" pieces of information that should be reported when citing a statistic. What this means to me as a reader of this site is that I either have to push the "I Believe" button or have to do the calculations myself. I've done the former, and I've done the latter. As a result, I believe DVOA to be a useful and valid, although sometimes awkward, metric.
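For concreteness, this is the sort of test I mean; a minimal sketch using Fisher's r-to-z transformation to compare two independent correlations (the r and n values are placeholders, not actual FO numbers):

```python
import math
from scipy.stats import norm

def compare_correlations(r1, n1, r2, n2):
    """Two-tailed test of whether two independent Pearson correlations
    differ, via Fisher's r-to-z transformation."""
    z1, z2 = math.atanh(r1), math.atanh(r2)      # Fisher transforms
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))  # SE of the difference
    z = (z1 - z2) / se
    return 2 * (1 - norm.cdf(abs(z)))            # p-value

# Placeholder: old vs. updated DVOA-to-wins correlation, 32 teams each.
print(compare_correlations(0.80, 32, 0.85, 32))
```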

Regarding the game charters, I'm certainly not giving them the finger, and I appreciate and envy anyone who is able to devote such a significant chunk of time to watching and studying football. I do have concerns about the project, though. I worry about inter-rater and intra-rater reliability, rater drift, halo effects, rater bias, etc. I wonder if having raters rate only halves of games introduces error to the data or improves the ratings. I wonder if there are ways to control for the bias in the data that is due to the limitations of TV angles. Essentially, I fear that the charters may be wasting (some of) their time because factors like these aren't adequately or appropriately addressed (FWIW, these caveats and concerns apply to PFF as well).
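Inter-rater reliability, at least, is straightforward to check with standard tools; a minimal sketch, assuming two charters assigned categorical labels to the same set of plays (the label scheme is invented, not FO's actual charting categories):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical play labels from two charters watching the same plays.
charter_a = ["blitz", "zone", "man", "zone", "blitz", "man", "zone"]
charter_b = ["blitz", "zone", "man", "man", "blitz", "zone", "zone"]

kappa = cohen_kappa_score(charter_a, charter_b)  # chance-corrected agreement
print(f"Cohen's kappa = {kappa:.2f}")
```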

Essentially, the game charting project is analogous to observational studies in the social sciences - human behaviors are examined and measured. Without understanding how to properly assess, measure, and control for the many problems I've noted (and others), the value of the data obtained may be as dubious as Bill's impression of the PFF data. Perhaps FO controls, measures, and evaluates these factors appropriately, but it's not clear to me that they do (i.e., missing transparency). Finally, validation studies of the game charting data need to be conducted. Preliminary studies could be as simple as what I did with the PFF data and DYAR - that is, does the information on QBs, RBs, WRs, and TEs have any relationship to DYAR?

Again, thanks for responding.

16
by Karl Cuba :: Wed, 03/03/2010 - 2:58pm

I propose a fight between the FO principal writers and the PFF team in some sort of dome with weapons scattered around the ring.

The PFF lot would look at the efficacy of the various weapons as they are used by different individuals while Barnwell and Farrar try to hold them off as Aaron compiles a database that includes all the weapons and combatants.

19
by Arren (not verified) :: Wed, 03/03/2010 - 6:52pm

Karl Cuba wins the thread.

Mr. Cuba, please proceed to the giftorium, where you shall be rewarded accordingly.

20
by Ole Miss Fan (not verified) :: Wed, 03/03/2010 - 7:00pm

It's a trap!

Good things don't end in ium. They end in mania. Or teria.

30
by Just Another Falcons Fan (not verified) :: Fri, 03/05/2010 - 7:10pm

FYI, Robinson has signed a 6-year deal with the Falcons.