21 June 2011

Unhealthful News 172 - Reviews of expert analyses are not better than expert analyses

Ben Goldacre is a blogger/columnist for the Guardian, covering much of the same ground as Unhealthful News.  He writes some interesting stuff, and often makes a point I overlooked when writing about the same topic, though I often disagree with some of his points, usually because he falls into a trap of incorrect conventional wisdom about what something means.  Recently, he posted about a research paper that he and colleagues wrote to address the question of how often health claims in newspapers are wrong.

Ironically, though, the study has at least one serious weakness.

He reports:
Here's what we found: 111 health claims [about food that could be interpreted as advice] were made in [the 10 leading] UK newspapers over one week. The vast majority of these claims were only supported by evidence categorised as "insufficient" (62% under the WCRF system). After that, 10% were "possible", 12% were "probable", and in only 15% was the evidence "convincing". Fewer low quality claims ("insufficient" or "possible") were made in broadsheet newspapers, but there wasn't much in it.

Sounds impressive, until you ask "what could that possibly mean?" (remember to always ask that!)  He actually explains this much better than news stories usually do, and the explanation points out a certain contradiction in the reasoning.

I have a minor quibble with the characterization of the target population of stories.  I think it is a bit misleading to claim that you can clearly define such statements, separating them cleanly from statements about food that are so obvious or so obscure that they do not count as advice.  But so long as they had a clear idea of what they were looking for, and worked hard to avoid including something just because it made their results look more impressive, then that is fine.  A category can be systematic without being a clear epistemic object.  Another minor quibble is their choice to take every paper for a single week, rather than gathering the same number of issues of each newspaper from across a wider time period.  Spreading the sample out would help reduce random sampling error, since health news stories tend to cluster; the simulation sketch below illustrates the point.
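
To see why the clustering matters, here is a minimal Monte Carlo sketch under an invented toy model (the Beta parameters, the claim count, and the clustering mechanism are all my assumptions, not anything from the paper).  Each week gets its own "topic wave" that shifts the share of poorly supported claims, so a sample drawn entirely from one week inherits that whole wave:

# Toy model: each week's true share of poorly supported claims is drawn
# from a Beta distribution, and every claim that week shares it.
import random

random.seed(1)

WEEKS = 52
CLAIMS = 111          # roughly the paper's one-week haul
TRIALS = 10000

def simulate_year():
    # Beta(3, 1.3) has mean ~0.70, in the vicinity of the 62% + 10% figures.
    return [random.betavariate(3, 1.3) for _ in range(WEEKS)]

def one_week_sample(year):
    # Strategy A: take every claim from one randomly chosen week.
    p = random.choice(year)
    return sum(random.random() < p for _ in range(CLAIMS)) / CLAIMS

def spread_sample(year):
    # Strategy B: same number of claims, each drawn from a random week.
    return sum(random.random() < random.choice(year) for _ in range(CLAIMS)) / CLAIMS

def sd(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

week_est, spread_est = [], []
for _ in range(TRIALS):
    year = simulate_year()
    week_est.append(one_week_sample(year))
    spread_est.append(spread_sample(year))

print("SD of estimate, single-week sampling:", round(sd(week_est), 3))
print("SD of estimate, spread-out sampling: ", round(sd(spread_est), 3))

Under these made-up numbers the single-week estimate comes out several times noisier than the spread-out one, purely because of the clustering; the point survives any reasonable choice of parameters.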

The important concern, however, is how they decided what category to put something into:
a heroic medical student called Ben Cooper completed this epic task, researching the evidence behind every claim using the best currently available evidence on PubMed, the searchable archive of academic papers, and current systematic reviews on the relationships between food and health.
But this method assumes that the published literature, as interpreted by someone who is semi-expert, represents the best expert knowledge on the subject.  It is remarkable how often that is not the case.  I can think of numerous examples where someone reviewing the literature would come away with a conclusion that is very different from that of genuine experts.  To name just three examples that I have worked on, have written about here, and that come immediately to mind: someone naively reviewing the literature is likely to conclude that harm reduction using smokeless tobacco is not proven to be beneficial, that H. pylori infections never go away without treatment, and that routine screening mammograms at age 45 are a good idea.

In fairness to Goldacre et al., none of these examples concerns dietary choices, which tend to involve rather simpler claims, and where most of what is claimed is based on a single data-fishing study (and thus is not well supported).  So they kind of wired the result by choosing that particular topic.  But there are still specific subject-matter experts who know more than a simple literature review could tell you, and cases where they recognize something is true even though the literature has not caught up.  Sometimes they are the ones making the statements to the press that get reported but that, on a naive reading, do not appear to be supported by the literature.  Moreover, the researchers who wrote the papers that Cooper reviewed are often the very ones making the claims to the press that are fodder for Goldacre's criticism.

There is no easy answer here.  You have to figure out who to trust, and you cannot trust that the literature is accurate if you are not going to trust the authors of that literature.  But if you are going to trust the literature and are really trying to figure out whether a claim is supported, it is probably worth asking a few of the people you are trusting as experts for their opinion.  Many systematic review papers are synthetic meta-analyses, which I have pointed out are highly flawed (a minimal sketch of that mechanical recipe appears after this paragraph).  But the others, those that do not blindly follow a bad recipe, rely heavily on the expertise of their author, in both the subject matter and scientific epistemology, and there is no rule that prevents someone who is far from a top expert from writing the review (indeed, that is far more common than not).  Many reviews just take sketchy information and repackage it so that it looks authoritative.  Is this review of 111 claims such a case?  Doing that well for 111 disparate claims seems even harder.
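
To make concrete what "blindly following a recipe" looks like, here is a minimal sketch of the standard fixed-effect, inverse-variance pooling step that sits at the core of such meta-analyses (the study numbers are invented, and this is the generic textbook recipe, not any particular review's method):

import math

def fixed_effect_pool(estimates, std_errors):
    # Pool study estimates (e.g., log relative risks), weighting each
    # by the inverse of its squared standard error.
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three hypothetical studies reporting log relative risks:
log_rrs = [0.10, 0.25, -0.05]
ses = [0.08, 0.15, 0.12]

pooled, se = fixed_effect_pool(log_rrs, ses)
low, high = pooled - 1.96 * se, pooled + 1.96 * se
print("pooled RR = %.2f (95%% CI %.2f-%.2f)"
      % (math.exp(pooled), math.exp(low), math.exp(high)))

Notice that nothing in the recipe asks whether the pooled studies are comparable, well designed, or even measuring the same thing; that judgment is exactly what requires a real expert, and it is what gets skipped when the recipe is followed mechanically.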

1 comment:

  1. UPDATE:
    Today, one of Goldacre's colleagues at the Guardian, James Randerson, posted an even more thorough takedown of Goldacre et al.'s article than mine.
    http://www.guardian.co.uk/science/2011/jul/04/ben-goldacre-study-dietary-news

    He had the advantage over me of having seen the actual data, such as it is. With that, he was able to show that the article was complete junk science, far worse than the majority of what such critical pundits like to criticize. Basically, it was worthless. I will not try to summarize it here, since the whole thing is worth reading if you are interested.

    It also serves as a reminder of why peer review is fairly close to useless in cases like this. As Randerson points out, the reviewers should have had access to the data, which is instantly damning, but presumably did not.

