Easterbrook on the Lancet

Lots of people are jumping on Gregg Easterbrook for his remarks on the Lancet study of deaths in Iraq. In particular, fellow ScienceBlogger Tim Lambert blasts him for saying:

The latest silly estimate comes from a new study in the British medical journal Lancet, which absurdly estimates that since March 2003 exactly 654,965 Iraqis have died as a consequence of American action. The study uses extremely loose methods of estimation, including attributing about half its total to “unknown causes.” The study also commits the logical offense of multiplying a series of estimates, then treating the result as precise.

Lambert points out that the study actually gives a range: “654,965 (392,979-942,636).”

In tepid defense of Easterbrook, let me just note that my first impression of that range is also “Oh, that’s stupid.” If I had a student hand in a paper reporting a value as “654,965 +287,671/-261,986,” I’d dock them the full “significant figures” points. If your uncertainty is greater than 40%, you have no business reporting your values to six significant figures.

I’m willing to accept that this is just different practice in the statistics community, but my first impression is that those numbers look deeply silly to me. They’re claiming way more precision than seems reasonable, at least by the standards used for uncertainty handling in physics.
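
Just to make that convention concrete, here’s a minimal sketch (in Python, using only the numbers quoted above) of the rounding I have in mind. The helper arithmetic is mine, purely for illustration, not anything from the study:

```python
# Symmetrize the Lancet interval and round it the way a physicist would:
# trim the uncertainty to two significant figures (one would arguably be
# stricter still), then round the central value to the same decimal place.
from math import floor, log10

estimate, low, high = 654_965, 392_979, 942_636

sigma = (high - low) / 2              # 274,828.5, the symmetrized half-width
place = 1 - floor(log10(sigma))       # decimal place for two significant figures
sigma_r = int(round(sigma, place))    # 270,000
estimate_r = int(round(estimate, place))  # 650,000

print(f"{estimate_r:,} +/- {sigma_r:,}")  # -> 650,000 +/- 270,000
```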

This doesn’t mean that Easterbrook isn’t an idiot, by the way; as a general rule, if the subject is anything other than football, he can safely be assumed to be talking nonsense. He said nice things about Tiki Barber, though, so I’ll defend him just a little bit.

14 thoughts on “Easterbrook on the Lancet”

  1. Would you prefer 650,000? It’s just as precise. It’s still an integer. The only difference is that your training tells you that such a number is probably not intended to be read as precise. But that’s covered by the use of a confidence interval anyway.

    …oh, and I doubt Tim Lambert will thank you for confusing him with Blair 😛

  2. Would you prefer 650,000?

    Yes.
    I would probably write that as something like “650,000 +/- 270,000.”

    It’s just as precise. It’s still an integer.

    No, it’s not.
    Trailing zeros are not significant.

    And, given that we’re talking about deaths, I would generally expect the answer to be an integer, unless some of those people are only mostly dead.

    I doubt Tim Lambert will thank you for confusing him with Blair

    Shit.
    I don’t know how I did that. Fixed, now.

  3. Chad, even on football Easterbrook’s losing credibility these days. He’s currently World Champion of the “Hindsight is 20/20” League for all his nonsense about preposterous punts and blitzes… I agreed with him sometimes, but most times I just wanted to hand him a stick and point to the dead horse over yonder. I can’t even read him anymore, it’s just too much to take, especially when he expounds on science and technology in his (poorly edited) column.

  4. You’re 100% right about the sig figs; they’re including lots of digits with no meaning. The overall study is sound, though. And interestingly, it’s the same technique used to determine the extent of Hussein’s atrocities, numbers that the same people who are dismissing this study quote with glee.

    And there’s still a “Blair” reference in there…

  5. (Full disclosure: I am an economist. Feel free to assign whatever discounting factor you choose to my intuitions on the matter.)

    I agree with you on this application but not, I think, on the general significant digits principle. I’d make a distinction between situations where the uncertainty is purely a matter of imprecision of measurement and one where it comes from sampling from a truly heterogeneous population. If you are measuring the length of a pencil with a ruler that has precision 1mm, no matter how many measurements you take there is no point in reporting the average to the 100th of a millimeter. But in many social science measurements, each of the individual measurements has precision down to the individual person; uncertainty about the average arises because there is so much heterogeneity across people. In a situation like that, I’d be inclined to report the sample mean at least to the precision of the individual measurement, using the confidence interval to report the uncertainty that comes from the potential unrepresentativeness of the sample.

    So, suppose I surveyed 1000 people about their gender, and 518 were male. I’d report my estimate of the number of males per 1000 people as 518, not 520, even though the standard error of that estimate is 16 and the last digit is therefore not significant.

    Like you, I’m willing to accept that there are differences in standards across disciplines. But this may give a reason for them: in physics, uncertainty typically (though of course not always) derives from imprecision in measurement, while in the social sciences population heterogeneity is more frequently the relevant factor.

    The reason I agree with you about this application, though, is that the estimate-per-1000 was then scaled up to correspond to the population of Iraq. And there is no way that the estimated population used for that scaling is precise down to the person.
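
    A quick sanity check of the gender-survey example above, assuming a simple binomial model (the arithmetic here is just for illustration):

    ```python
    # Standard error of a binomial count: sqrt(n * p * (1 - p)).
    from math import sqrt

    n, males = 1000, 518
    p = males / n                 # sample proportion, 0.518
    se = sqrt(n * p * (1 - p))    # ~15.8, which rounds to 16

    print(f"{males} males per {n} surveyed, standard error ~ {se:.0f}")
    # -> 518 males per 1000 surveyed, standard error ~ 16
    ```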

  6. And there’s still a “Blair” reference in there…

    Damn it.
    This is what I get for posting when I’m not completely awake…

    Now it’s fixed.

  7. Why does anybody care about enemy dead? Vietnam body counts were arbitrary and meaningless, ditto Northern Ireland. The only bottom line is to kill them all. How to win a war:

    1) Locate an enemy.
    2) Point the Marines in that general direction.
    3) Shut up and look away for 90 days.
    4) Welcome the Marines home.
    5) Supply reloads.

    Everything else is politics. Everything and its opposite (contrapositive if you are a stickler) are true in the social sciences. Post-WWII Nuremberg was inevitable. It made no difference at all who was judging and judged. No doubt the Nazis would have done a much better job with the docket were they the victors (and Europe would have had one currency, one set of electric plugs, and one set of phone noises 50 years sooner).

  8. … and the trains would have run on time. I hope you’re being ironic, Uncle Al.

    Actually, killing people is obviously not the way to win a war when winning means leaving a country that is favorably disposed towards oneself. Given only the choice between having 650,000 dead Americans and 650,000 dead Iraqis, I, being American, would choose the latter. However, given a third choice of not being responsible for any Iraqi or American dead, I would choose that. We removed that possibility when we invaded Iraq. Now all that’s left appears to be counting the dead and deciding when there are enough of them that we can bring the troops home.

  9. If I measure the levels of a protein in a cell culture, and after a bunch of measurements I get 2.87 +/- 0.13, I report exactly 2.87 +/- 0.13. I don’t find reporting 3.0 +/- 0.1 any more informative at all. In fact, if you look at the actual numbers the Lancet article provides for the Iraq survey, the figures they report might help you understand exactly what statistics they used. Now if you say that it might be more convenient to round numbers for the media so there’s no idiot saying the idiotic things Easterbrook said, then I can understand that, but the truth is that nearly every reporter in the media got it right, and this ironically pedantic illiterate got it wrong.

  10. If I measure the levels of a protein in a cell culture, and after a bunch of measurements I get 2.87 +/- 0.13, I report exactly 2.87 +/- 0.13. I don’t find reporting 3.0 +/- 0.1 any more informative at all.

    I wouldn’t either, because that’s wrong in the other direction. I would round 2.87 +/- 0.13 to 2.9 +/- 0.1, but leaving it alone is also probably ok. My general rule is that you round the uncertainty to one significant figure, and the mean value to the same number of decimal places, but I’ll go two significant figures if the first digit is a “1.”

    I don’t think there’s ever a case where it makes sense to report three significant figures in the uncertainty, though, let alone six. If you say your figure is 2.875 +/- 0.132, that 0.002 is so completely dwarfed by the 0.1 that it’s not even worth reporting. Going past the first digit or two of the uncertainty implies a level of precision that you just can’t honestly claim.
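
    In code form, a minimal sketch of that rule (the helper is just my illustration, not any standard library routine):

    ```python
    # Round the uncertainty to one significant figure -- two if its
    # leading digit is a 1 -- then round the mean to the same place.
    from math import floor, log10

    def round_measurement(mean, sigma):
        exp = floor(log10(abs(sigma)))          # order of magnitude of sigma
        leading = int(abs(sigma) / 10 ** exp)   # first digit of sigma
        sig = 2 if leading == 1 else 1
        place = sig - 1 - exp
        return round(mean, place), round(sigma, place)

    print(round_measurement(2.87, 0.13))      # (2.87, 0.13) -- leading 1, two figures
    print(round_measurement(2.875, 0.132))    # (2.88, 0.13)
    print(round_measurement(654965, 287671))  # (700000, 300000)
    ```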

  11. I am not mathematically competent, by any means, and I’ve not read the paper in question, or read all the reporting on the issue. But I remember enough math to see that your post is just a bit strange.

    What was the precision of their p-value or level of confidence? Those p-values are the “results” they are reporting. That’s what’s important, in a practical sense.

    I’m not sure of the details of how to calculate what this precision is. But I’m certain you can. It probably has to do with the sample size and distribution and second and third moments about the mean and things I’ve forgotten about because I’ve never used them. You could figure out that X people is the minimum number of people that would represent a change of 0.005 in your p-value, and then call this figure X a “people-unit.” We can use this unit to measure our confidence interval. It could represent the “granularity” of our measures. It may be that the people-unit measure would differ for the lower and upper bounds of the confidence interval, which nags at my simplifications.


    The comments you’ve made in your post are not particularly helpful, because they do not illuminate the nature of the problem. I think it is OK to report the mean and confidence interval as they did, even with all those ‘figures’. They do know what those figures mean, even if the public doesn’t appreciate it. What the public should desire is information on the precision of the level of confidence.

    What you’ve written so far is going to confuse people, though! Talking about measuring with rulers and things like that muddles the issue with what I think are less-than-appropriate analogies. Counting (sampling) and measuring with rulers introduce different kinds of errors.

    I don’t doubt your argument that the mean and confidence interval carry some number of significant figures. It is just not clear to me that adjusting the mean and confidence interval to the nearest number of significant people-units sheds any more light on the issue than simply reporting the significant figures in the level of confidence.

  12. Remember that this is a survey-based study. They polled some number of people (40 households in each of 47 regions, as I read it) and were told how many were lost from each. They then extrapolate up to the number of people in Iraq, and boom, there’s your number. This means that each response represents many dead; they calculated 654,965, but one fewer yes response would have gotten (say: I’m making this up, but the order of magnitude is roughly right) 653,128. Note that the true value is roughly equally likely to be any of the integers in between, but that that number will never be reported due to the granularity of the sample.

    Thus it is clear that the trailing digits have no real meaning; 654,965 is not statistically distinguishable from 654,118, say.

    Significant figures are hard to get a handle on, it’s true, but they’re important. And Chad is 100% right.

    In terms of p-value and such, remember that all statistics are based on assumed, idealized distributions for the true answer. We make guesses as to what these distributions are, but both the assumptions and the distributions are inherently incorrect so again, reporting results down to one part in a million is a bad thing to do.
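
    To put rough numbers on the granularity point above (every input here is a made-up round figure, just like the 653,128 in the comment):

    ```python
    # How much does a single yes/no survey response move the national
    # estimate? All inputs are illustrative assumptions, not study data.
    households = 40 * 47            # survey size as read in the comment
    persons_per_household = 7       # assumed household size
    surveyed = households * persons_per_household   # ~13,160 people
    iraq_population = 27_000_000    # rough mid-2000s figure, assumed

    deaths_per_response = iraq_population / surveyed
    print(f"one response ~ {deaths_per_response:,.0f} deaths in the estimate")
    # -> one response ~ 2,052 deaths, the granularity of the extrapolation
    ```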

  13. I hammer my students over and over regarding precision and significant digits so I understand the issue here, but really folks, you’re missing the forest for the trees. We’re talking in excess of one-half million people killed and no one has punched sizable holes in the methodology. THAT’S what matters. Everything else is new doilies for a burning building.
