Lancet Iraq Unspeak

Polarised reactions to the Lancet Iraq study, and to criticisms of it...

Criticism of the Lancet survey of Iraqi deaths (2006) tends to polarise opinion according to whether one opposed or supported the war. Rational dialogue quickly gives way to suspicion-mongering and accusations of "bad faith". Examples of this can be seen in reactions, from both "sides", to AAPOR's criticism that the study's lead author, Gilbert Burnham, violated "fundamental standards of science".

Prior to AAPOR's involvement, Burnham had revealed that the survey used a sampling methodology which differed from the published account [1]. When researchers requested details, all requests were refused – making it impossible to assess the study's claim of random sampling (an important matter for a study which estimated 601,000 violent deaths from 300 recorded deaths in the sample surveyed) [2].
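
To see why the random-sampling claim carries so much weight, here's a minimal back-of-envelope sketch, in Python, of how a few hundred recorded deaths scale up to a six-figure national estimate. This is not the study's actual estimation procedure (which worked with mortality rates, person-time and cluster-based confidence intervals), and the sample-size and population figures are round numbers assumed purely for illustration.

    # Back-of-envelope illustration only - not the study's estimation method.
    # The figures below are assumptions for illustration, not values taken
    # from the study report.

    recorded_violent_deaths = 300         # violent deaths recorded in the sample
    surveyed_individuals = 12_800         # assumed approximate number of people surveyed
    national_population = 27_000_000      # assumed approximate population of Iraq

    # Crude scaling: apply the sample's violent-death proportion to the nation.
    sample_proportion = recorded_violent_deaths / surveyed_individuals
    national_estimate = sample_proportion * national_population

    print(f"scaling factor: {national_population / surveyed_individuals:,.0f}")
    print(f"crude national estimate: {national_estimate:,.0f}")

The point of the sketch is the scaling factor of roughly two thousand: any bias in which 300 deaths the sample happened to capture is multiplied by that factor in the final estimate, which is why the undisclosed details of the sampling scheme matter so much.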

After receiving a related complaint regarding the survey, AAPOR asked Burnham for "basic methodological details" (including "sampling information", "protocols regarding household selection", etc) but was refused [3]. As a result, AAPOR criticised Burnham for not answering "even basic questions about how their research was conducted".

"His data and methods" - New Scientist

A brief piece about this appeared on New Scientist's website. The author, Debora MacKenzie, writes that Burnham "did not send" the information requested by AAPOR, but that, "According to New Scientist's investigation, however, Burnham has sent his data and methods to other researchers, who found it sufficient."

Has Burnham really "sent his methods" to researchers? No, he hasn't made details of the sampling methodology available (see comments above, and footnotes 1 & 2). And since AAPOR's complaint was largely about this important aspect of the study, MacKenzie's choice of words here seems misleading, to say the least. Other assorted information on the study's methods has indeed been available, and data has been released to some researchers (some of whom, incidentally, have not found it "sufficient" – presumably MacKenzie's "investigation" didn't stretch to talking to them). But that is irrelevant to AAPOR's criticism, which concerned what specifically hasn't been made available to anyone.

In response to a complaint about her New Scientist website piece, MacKenzie replied with the following characterisation of AAPOR's emailed requests for information from Burnham:

The similarity between [AAPOR's] procedure and the anonymous, unspecified charges and secret deliberations characteristic of jurisprudence in totalitarian states should be painfully obvious, and not really worthy of further comment. [4]

"Totalitarian" seems a bizarre term to use in this context – it's not "obvious" (let alone "painfully obvious") why a request for information would be characterised in this way. AAPOR has a code of ethics which is apparently widely recognised by survey/poll professionals, but Burnham wasn't bound by this (he's not a member of AAPOR). That, of course, shouldn't stop AAPOR (or anyone) requesting information from Burnham or criticising him for non-disclosure of important details (eg sampling methodology).

Behind MacKenzie's piece there appears to be one of those polarised opinions that I mentioned above, resulting in suspicion-mongering. The title, "What is behind criticism of Iraq deaths estimate?", seems suggestive of something sinister, not least because she provides no answer. Instead she writes: "There is no direct evidence that the latest attack on Burnham is politically motivated...". Why, then, put the thought of such motivations into the readers' minds?

"Bona fide" researchers?

The following sentence from MacKenzie's piece is interesting, not least because it contains at least two mistakes:

Burnham's complete data, including details of households, is available to bona fide researchers on request.

The data isn't available to all researchers – the "Main Street Bias" researchers were refused it, for example. But perhaps they're not "bona fide" researchers (even though they've published peer-reviewed papers on conflict research - one of which made the cover of the prestigious Nature journal)? So, who is making the rules which decide whether a researcher is "bona fide"? Clue: it's not AAPOR.

Burnham did not release "complete" data (as researchers who received the incomplete data would inform MacKenzie, if she'd bothered to ask them). As for "details of households", that could mean anything (eg household-level data). In the context of AAPOR's request for household selection data (for assessing sampling methods) it's misleading, since this data wasn't made available.

"Authority to judge"

Meanwhile, what was the reason, if any, that Burnham gave for refusing AAPOR's request for information about the study? MacKenzie claims that:

A spokesman for the Bloomberg School of Public Health at Johns Hopkins, where Burnham works, says the school advised him not to send his data to AAPOR, as the group has no authority to judge the research. The "correct forum", it says, is the scientific literature.

This, again, seems odd. What "authority" does AAPOR (or anyone else) need in order to "judge" (ie evaluate) information about a study? Did Burnham refuse the requests of other researchers because they didn't have the correct "authority"? What does "authority" have to do with it? Note also that the comment about the scientific literature being the "correct forum" is disingenuous, as some of the writers appearing in "the scientific literature" were the very people being refused basic information by Burnham in the first place. One can't discuss aspects of a study in the "scientific literature" unless information about those aspects is made available.

So, the issue is framed in terms of a totalitarian-state bully demanding "all the data" but with no "authority" to "judge" it. What doesn't appear in this frame is the fact that researchers have been unable to assess (for example) the survey's sampling scheme because Burnham, to date, hasn't made it available, and that AAPOR also requested this, without success. How "totalitarian" is that?

Meanwhile, Steven Poole, author of Unspeak, appears to put the sinister conspiratorial framing into absurd perspective:

"With my good friend Senator Inhofe, I have recently founded an important public body called the Agglomeration of Truthiness in Scientist Harrassment. Burnham is not a member, and nor is his institution, and his institution is indeed at this moment telling him that our organization does not have the authority to judge his work, but I am going to write to Burnham anyway demanding that he send me cloned copies of all his hard drives, plus receipts for any food he has eaten over the past three years..."

When I read in the newspaper of some survey claiming "30% of British children carry a knife" (or whatever), my first thoughts are: how did they get a representative sample, and what questions did they ask? Would it be over-demanding of anyone to want to know these things? On the Lancet study, AAPOR asked for (and were refused) "the wording of questions and other basic [ie sampling] methodological details" [5]. They also asked (reasonably, I think) for sampling-related data (eg "a summary of the outcomes" for household selection) which had been refused to other researchers. I don't think they were interested in cloned hard drives or food receipts.

Burnham suspended - is it "relevant" to the science?

Burnham's school conducted its own investigation (after AAPOR's criticisms were published). It suspended Burnham for violations of the approved protocol: use of the wrong data collection form and inclusion of respondents' names.

Some commentators argued that this wasn't relevant to the science (ie to the study's estimates). Burnham, however, reportedly said the investigation "verified his results" – a surreal claim, since the only thing "verified" was the transcription of data to computer; the school stated that it "did not evaluate aspects of the sampling methodology or statistical approach".

While it's obvious that inclusion of names wouldn't itself affect the study's results, it seems relevant to earlier criticism of the study (over sampling methods) that the field team were carrying respondents' names around (eg through checkpoints). Here's Gilbert Burnham's description (2007) of the "main street" aspect of the sampling (which came under criticism):

The interviewers wrote the principal streets in a cluster on pieces of paper and randomly selected one. They walked down that street, wrote down the surrounding residential streets and randomly picked one. [...] The team took care to destroy the pieces of paper which could have identified households if interviewers were searched at checkpoints. [My emphasis]
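
To make the quoted description concrete, here's a minimal sketch, in Python, of the two-stage street selection as Burnham describes it above. It illustrates only that quoted account, not the actual field protocol, which (as noted in footnote 1) has never been released; the street lists it takes as input are hypothetical.

    import random

    def select_start_street(principal_streets, cross_streets):
        """Sketch of the street selection quoted above (illustration only).

        principal_streets: hypothetical list of the main streets in a cluster.
        cross_streets: hypothetical mapping from each principal street to the
            residential streets surrounding/crossing it.
        """
        main_street = random.choice(principal_streets)            # "randomly selected one"
        residential = random.choice(cross_streets[main_street])   # "randomly picked one"
        return main_street, residential

Put this way, the concern raised by Peter Lynn and the "Main Street Bias" researchers (see footnote 2) is easy to state: a residential street that doesn't adjoin any principal street never appears in the second list, and so has zero chance of selection under the procedure as quoted. Whether the unreleased field protocol handled such streets differently (as Burnham's later comment in footnote 1 suggests) is exactly what can't be assessed.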

The reason why the Lancet study's authors haven't released their lists of "principal streets" (which would be crucial to assessing their sampling scheme) was, apparently, to protect respondents' identities. And yet the list of main streets (including both sampled and unsampled streets) would in itself be somewhat less likely to reveal those identities than forms which contained their names.

At least one of the Lancet study's authors (Riyadh Lafta) presumably knew that names were being carried through checkpoints (contrary to the above claim, within the context of sampling, that care was taken to "destroy" any identifiers). Lafta was part of the Iraq-based team and one of the authors (along with Burnham) of the official companion document to the Lancet study, which states:

The survey was explained to the head of household or spouse, and their consent to participate was obtained. For ethical reasons, no names were written down, and no incentives were provided to participate.

But the relevant issue is, again, non-disclosure of essential information needed to assess the study, and the reasons given for it. The sampling methodology hasn't been released, and neither has the basic data needed to assess it (eg the list of principal streets). Following his school's investigation, Gilbert Burnham said he "takes responsibility" for the identified lapses (over the data form and identifiers), but prior to the investigation which effectively forced him to take responsibility (over two years after the study) he had failed to disclose that the wrong data collection form had been used. Previously, researchers had requested (without success) copies of the survey questionnaire [6]. (A report in the National Journal published a survey data-entry form which contained a space for "name of householder" – this was reportedly obtained from Lafta by a United Nations official, but the study's authors wouldn't confirm whether or not it was the form used.)

Not "necessarily" wrong

Of course, none of this "necessarily" means the survey's estimate of deaths was incorrect. In fact, nobody (with the possible exception of a few relatively ignorant pro-war commentators) seems to be arguing that. Even if no information at all had been published and we had nothing but the team's assertions to go on, it wouldn't mean that their estimate was "necessarily" wrong. It would just make claims of an impeccably conducted, intensely vetted survey look questionable, and we might, as a result, prefer other studies which published more information with which to assess the results.

From Unspeak to Unspeakable

One tangentially related thing I came across while looking into the above was a statement by Les Roberts, co-author of the Lancet 2006 study:

"Our data suggests that the (March 2003) shock-and-awe campaign was very careful, that a lot of the targets were genuine military targets. So, I think it is correct that in 2006, probably in almost any month, there were more civilians dying than during shock-and-awe."

Shock-and-awe was "very careful"? This statement from Roberts was in response to a question asking why the Lancet study didn't suggest the spike of deaths that can be seen on Iraq Body Count's graph (see below) for the March 2003 period.

I can understand why Les Roberts would say his study suggested more civilian deaths per month in 2006 than in March 2003, even though his study didn't actually make a distinction between civilian and combatant deaths [7]. What's difficult to understand is the logic which leads to his conclusion that the shock-and-awe campaign was "very careful". It's one thing to record 300 violent deaths and to perceive something about the distribution of those deaths over the period 2003-2006. It's something else to infer that part of this outcome resulted from "care" on the part of those who were doing the bombing. Or perhaps, for some people, "care" is a synonym for "clinical efficiency" or "collateral-damage management effectiveness"?

[Figure: IBC's graph, as at Nov 2009, showing the spike of violent civilian deaths over the shock-and-awe period (March 2003).]

Footnotes/sources

1. Gilbert Burnham writes: “As far as selection of the start houses, in areas where there were residential streets that did not cross the main avenues in the area selected, these were included in the random street selection process, in an effort to reduce the selection bias that more busy streets would have.” (http://tinyurl.com/yltzr8) This refers to a sampling method which was not included in the published account of the study. To date, Burnham has not made details of this sampling method available (in other words the actual procedures used to achieve random sampling "in areas where there were residential streets that did not cross the main avenues in the area selected" have not been released), despite requests from researchers and journalists.

See also: http://tinyurl.com/4xsjtl [p10]
http://www.sciencemag.org/cgi/content/full/319/5861/273

2. For example: "Peter Lynn, Professor of Survey Methodology at the Institute of Social and Economic Research, University of Essex, has been quietly investigating and, despite several e-mails to the [Lancet team] researchers, has been unable to get answers". One of the aspects of the study that Lynn was concerned about was sampling methods: “The researchers made a list of all the roads intersecting the main road, and took one of those at random. They then went to 40 adjacent addresses going up one side. But these were all near the main road, so streets away from the main road may not have been represented.” http://www.timesonline.co.uk/tol/comment/article659020.ece

See also: http://tinyurl.com/4xsjtl
http://www.zcommunications.org/znet/viewArticle/20890
http://news.nationaljournal.com/articles/databomb/

3. http://articles.baltimoresun.com/[...]burnham-conduct
http://tinyurl.com/yct4p23
http://www.pollster.com/blogs/aapor_censures_lancet_iraq_cas.php

4. Email from Debora MacKenzie, 9 Feb, 2009, copied by her to AAPOR and others.

5. MacKenzie claims, misleadingly, that "The wording of the questions was also provided (pdf format) to a non-scientific magazine article critical of the study". She links to an "Iraq Mortality Survey Template" which was provided to the National Journal by a third party. But the Lancet authors have declined to confirm or deny if these were the questions used (see also note 6 below). The National Journal also supplies what it calls "Iraq Mortality Survey Questionnaire (actual)", however this is in fact a data entry form (also received from a third party and not confirmed or denied as used by the Lancet authors) and contains no wording of questions. For more on the confusion (and non-confirmation/disclosure) regarding these forms and the survey question wording, see http://tinyurl.com/4xsjtl [p9]

6. "The L2 authors have not publicly released their questionnaire in any language: English, Arabic or Kurdish (III2). It is not clear at this stage that there was a formal questionnaire for L2 and there is no way to know how questions were worded in the field. Various researchers, such as Fritz Scheuren of NORC and Madelyn Hsiao-Rei Hicks of the Institute of Psychiatry in London, have requested copies of the L2 questionnaire and have been refused by the L2 authors (personal communications). Scheuren was also told that the questionnaire exists only in English and that L2 interviewers, said to be fluent in both Arabic and English, translated the questionnaire into Arabic in the field. Several problems ensue." [L2 = Lancet Iraq study, 2006] http://tinyurl.com/4xsjtl [p7]

7. "Separation of combatant from non-combatant deaths during interviews was not attempted". (Lancet 2006; 368: page 1,422)
