10 April 2011

Unhealthful News 100 - A couple of observations from 100 days of Unhealthful News

I had thought about putting together a nice summary of topics or lessons for day 100, but jetlag, a deadline for some final edits, and a few other matters conspired to make that impossible.  I did not even have time to read through a third of what I had written.  Perhaps that is just as well, since it is not quite time for that yet (kind of like releasing a "best of" collection as your third album).  So I will just make a few observations about what I have discovered so far.

I had in mind some particular errors, fallacies, and points of confusion when I started this, but I chose to let events dictate what I wrote about.  As I expected, a very common theme was that randomized trials are not really a gold standard, and that all types of studies can let us infer causation if they support the conclusion.  But I have been surprised by how much this extends into the theme of studies often being merely a proxy for what we really want to know.  I had always thought the broader point regarding randomized trials was that there is no hierarchy of study types, but it might have more to do with the observation that a study gives you data about what you studied, not about what you wanted to have studied.  This observation applies to topics as diverse as proxy measures and subgroup analyses.  (I hope that last sentence makes sense to anyone who has read the series.)

Another point I predicted would be common was about apparent data dredging, including intentional misleading of readers, and the resulting publication bias.  I think this is probably the biggest problem in the epidemiologic science literature, and so I tried to emphasize it in looking at what was reported.  But it is actually quite difficult to recognize cases other than those that are incredibly blatant propaganda.  This is part of what makes it such a problem: it is something that we should keep in the back of our minds as a common source of bias, but it is usually not backed by a very clear story.  It is much easier to pick up on less critical points like reporting too much precision or mistaking statistical significance for an effect measure.

I have seen nothing that changes my view that almost all health reporters know very little about what they are reporting on, and that those with some expertise have often "gone native" (identifying too much with those they are reporting on, and so transcribing rather than interpreting, and showing little journalistic skepticism).  I noticed that a common metaphor about political reporting translates to health reporting: reporters try to oversimplify every controversy as if it were a sports match.  However, most health reporting does not even mention that there is a second team on the field.  I also learned that you need really serious editing to avoid typos and perhaps impossibly good editing to make sarcasm clear.

In Unhealthful News 18, I remarked, "Researchers, publicists, reporters, editors, high school science teachers, and others pursue their own goals and figure that it is someone else's job to keep readers of the health news from being misinformed, confused, unnecessarily scared, or generally resigned that it is all nonsense."  From the feedback I have gotten, apparently I am succeeding at that to some modest extent.  It is a good thing too, since there are rumors in the press (just to get in a bit of today's health news) that Obama, having already gotten rolled on plans to reduce medical costs, is now just going to reduce government spending on medical programs (which is to say, is going to start further rationing health care).  This means that, for those of you who depend on American government programs for your or your relatives' health care, the practical importance of being able to effectively read the published health information yourself just became a little bit greater.
