31 January 2011

Unhealthful News 31 - Will anyone figure out it is not about the caffeine?

Those of you who read this blog before the start of Unhealthful News will recall me writing about the stupidity that surrounded the regulation of alcoholic energy drinks like Four Loko, which appeal to young people seeking uppers and cheap alcohol that tastes like bad soda.  The upshot was (a) it was fairly stupid to think that banning these drinks was worthwhile (it is trivial to make them yourself: energy drink + vodka) and (b) all of the discussions, and the ultimate bans, completely missed what might be worth regulating.  They all focused on how these drinks contain caffeine, a well-studied, approximately harmless drug that is mixed with alcohol in any number of ways.  Both the news stories and the regulations ignored that what is different about these drinks, and might have made them more dangerous to mix with alcohol, is the various other less-well-studied stimulants they contain.  The bans ended up forbidding the caffeine-alcohol combination (except in drinks that wealthier people buy, like Kahlua) but allowing the energy drinks with alcohol to continue to exist with the removal of just the caffeine (so you need to drink some coffee to complete the cocktail).

A new proposed regulation in a large county in New York has the merit of addressing this problem, by banning sales of energy drinks to young people, though in a way that seems to hold little promise (even if you think it is a good idea).  But it was the critics of the proposal who managed to get the issue even more wrong, which is quite disappointing to me.  I generally oppose regulations that restrict people's right to make informed autonomous choices about their health.  But that does not mean we should misrepresent efforts that, despite being bad in their details, are based on concerns that people are not actually informed about a particular set of risks (the first sentence of the statement proposing the law emphasizes the goal, "to alert consumers to the health risks associated with energy drinks", not a restriction on autonomy, though a restriction is, unfortunately, where the law's teeth lie).

The county government released this statement (pdf) about the proposed ban – sorry for the pdf link but it was not exactly big news and I only heard of it because critics were touting this critique of it.  The proposed ban would require merchants to post a warning about energy drinks, which you can read in the county's statement.  The warning suffers from the common problem of leaving the consumer thinking "might" means "is", and has a few badly chosen emphases, but is really not a terrible bit of advice.  The proposed law would also prohibit sales to anyone under age 20, which is what makes it a target of criticism.  As always, the devil is in the details, especially the definition, in this case:

an Energy Drink is defined as "a soft drink that is classified as a dietary supplement not regulated by the Food and Drug Administration and that contains 80 or more milligrams of caffeine per 8 fluid ounces and generally includes a combination of methylxanthines, B vitamins and herbal ingredients and other ingredients which are advertised as being specifically designed to provide or increase energy."
Let's unpack that a bit.  The emphasis on caffeine is just as misguided as it was with Four Loko.  If such a rule were widely adopted (beyond just a few local governments), manufacturers would presumably lower the caffeine content of energy drinks to below that threshold (few are actually much above it) or produce a "lite" version, differing only in the lower caffeine content, that would now appear as though the government had declared it safe for the kiddies.  If they did that, it is not clear that the drink would meet the conditions of the law no matter what else it contained, if I am reading the word "and" correctly.  The rest of it is even more of a mess.  Methylxanthines are a class of chemicals that includes caffeine and some other stimulants, so just adding this word to the already-listed caffeine seems content-free, whether or not some of the other methylxanthines are included, as they are in many energy drinks.  Merely having B vitamins (good for you in moderation, perhaps of concern in the megadoses found in some energy drinks) and herbal ingredients (like orange juice?) is a pretty useless way of creating a legal definition of energy drinks, though not as useless as "other ingredients".  As for how they are advertised, perhaps one or two of them actually say "designed to provide energy", but even if they say they provide energy, they seldom say anything about how they were designed.  And does it count the ads that say "Red Bull gives you wings"?

I think the best thing we can conclude from this is that county governments are probably in over their heads a bit when trying to regulate food and drugs.  Health regulators at higher levels of government generally know better how to define substances, and advertising regulators at those levels are better at characterizing statements.

That said, I have to admit some sympathy with the underlying goals.  There are a few stimulants we know a lot about, enough to be confident that they pose some minor risks but no substantial threat to users who are teens or older.  There are others that might pose greater risk, especially in combination, enough that better regulation is warranted (at least some warnings and perhaps limits on concentrations).  Caffeine is one of the well-understood stimulants, which is a huge problem with the way the legislation was written.  Nicotine is another, which makes this proposed ban much more sensible than New York's proposed ban on e-cigarettes: the former is at least an attempt to cover stimulants we know disquietingly little about, while the latter targets a well-understood drug.

Damn, I guess that blows my thought that states will make better health policy than counties.  Perhaps the rule should simply be to not let an individual legislator propose line-item health regulations, especially when she basically admits (in the case of New York and e-cigarettes) that her attempt to deprive tens of millions of people of a potentially life-saving choice is based on the extensive expertise that comes from having once done something herself.

Returning to the energy drink law, it is unfortunate that the critics seem to completely miss the point of the regulation.  They first focus on the caffeine, probably because it is the only concrete part of the proposed definition.  But they suggest that the concern about energy drinks is about caffeine (this is clearly not the intent of the law, though its author did leave herself open to that reading by emphasizing caffeine) and use that as a source of ridicule by comparing it to the (higher) caffeine content of a decent cup of coffee.  When they briefly address the other ingredients that are the greater source of concern, they emphasize that one ingredient common in energy drinks is a required ingredient in baby formula.  With this, they not only ignore the fact that the dose makes the poison, but they employ the same anti-chemical tactic often used by those calling for unjustified regulation (e.g., the "it is in anti-freeze!!!" rhetoric frequently used to imply that the major inactive ingredient in electronic cigarettes is unhealthy).  Someone reading the critique would think that the new proposal is as absurd as the U.S. FDA's choice of how to respond to the Four Loko hype, which it is not.  Whatever you might think about its actual net costs and benefits, the proposed law is not without merit.

The critics go on to express worry about convenience stores having to pay the cost of putting up a warning sign, though by my reading of the proposal they could print out the specified paragraph on a half-sheet of paper and tack it to the wall somewhere (and I am sure that the local Red Bull distributor would supply signs if color printing, cardstock paper, lamination, or some other great expense is required).  They also seem to worry about the burden of c-store clerks having to check the age of their patrons, perhaps not realizing that c-stores make most of their money selling products that already require carding (cigarettes alone are how those stores make a profit, and in New York they also sell alcohol).  Finally they argue at some length that this will damage teenagers by depriving them of the opportunity to make their own decisions:
One of the worst things about this bill — and other laws like it — is the disservice it does to youngsters. Young adults need to learn how to make their own decisions about nutrition and moderation. Almost any substance or product can be abused — even water can be consumed in fatal quantities. 
Really?  That is what we are using for anti-regulation arguments these days?  We should be just as worried about someone drinking gallons of water as we are about downing the pint of concentrated stimulants that might lead to cardiac arrhythmia?  And should we do nothing to reduce the chance of the latter just because the former might happen and because kids need to be responsible enough to do their own epidemiology before getting amped to study or party?  It is one thing to argue that the science shows that there is no reason to worry about these drinks (and then actually tell us what the science says), but responding with trumped-up concerns about implementation costs and the loss of personal responsibility just undermines legitimate criticisms of regulations.  Just because one county's ban of Happy Meals is simplistic and infantilizing does not mean the same is true of an attempt by a county on the other side of the country to regulate drugs that have not been well studied and seem to have some bad effects.  One-size-fits-all works no better for anti-regulation than it does for regulation.

I agree with the critics' main conclusion, that this would be an ill-formed law.  If for no other reason, banning 19-year-olds from consuming something seems to be among the least sensible ways to act on concerns about these minimally-regulated stimulant products.  However, assessing whether warnings and quantity limitations, and maybe age limits, are warranted seems worthwhile.  Unfortunately, without regulating specific stimulants by name, and without the involvement of scientifically-expert regulators who can understand the research on the chemicals and demand more if needed, regulation is unlikely to be effective.  County lawmakers are not in a good position to do this, and it is very frustrating that we cannot trust the FDA to play that role sensibly, so it is tempting to condemn all attempts at regulation.

But the debate over ill-formed health proposals is not improved by responding to legitimate concerns with aggressive condemnation.  This is especially true when the same essay could be used to condemn most any regulation with the substitution of only a few words.  The reader of the critique will be misled, both about the fact that the proposal has some reasonable motivation and, if he thinks about it, about the nature of opposition to public health interventions.  Some readers will conclude they should resist any such regulation because they are all a threat to autonomy and, besides, the water will kill you.  But others might go another direction:  The tone of the critique plays into the hands of those who would declare all opponents of regulation to be mere pro-industry spokesmen who just object to any restrictions on commerce.  When authors employ blanket condemnation, readers cannot recognize the difference between indefensible regulation (e.g., New York e-cigarette ban) and sloppy and perhaps-unwise, but sensibly motivated regulation.   As a reader you can be wary:  If it sounds like almost the same argument could be used to, say, argue that anyone should be able to sell any drug they want, without a warning label, until it is proven to be deadly, then perhaps you should doubt the authors' claims.  Their criticism might be correct, but you will not know that from what you are reading.

30 January 2011

Unhealthful News 30 - Figuring out who to believe (part 1)

The challenge that probably interests me more than anything else in my intellectual career is how to recognize when someone in a debate is clearly right without having to become expert in the subject matter, and the closely related problem of how to make it clear that you are right to outside observers.  The specific situation that most interests me is one where the observer of the debate is intelligent, generally well-versed in similar subject matter (science, politics), and genuinely interested in figuring out the truth but does not have any particular expertise in the specific subject matter and is not likely to acquire it.  I know this does not describe most situations where you might be trying to persuade someone, but it is a particularly important case (it describes many cases of trying to win over opinion leaders) and one that seems to present a surmountable challenge.

It is not so easy though.  I have tried doing it on several topics over the years, most recently tobacco harm reduction.  It is clear to me -- as someone who has made an extensive study of the epidemiology, economics (that is, what people like), politics, and ethics of the matter -- that there is no legitimate case to be made against encouraging THR unless someone accepts some very odd goals.  I am fairly certain I have identified the motives of those who oppose THR, and it is clear to me that if they openly admitted their real goals and preferences they would face opposition from the vast majority of the population.  They apparently agree with that assessment, since they hide their real motives behind pseudo-scientific claims and rhetoric.

That is what is clear to me.  But I know that to most observers it is not clear that the opponents of THR are trafficking in dishonest nonsense and misdirection.  They know how to use the vocabulary of science and make "sciencey" arguments (i.e., things that sound like they ought to be scientific claims, but really are not, in the spirit of Colbert's "truthiness").  To the completely uninitiated, it sounds like there is a scientific argument going on about health risks, when there is no legitimate debate on those points whatsoever.  To those who know a bit more, it seems like there is a legitimate debate going on about ethics and behavior, though there is barely more of a case against THR from those quarters than there is from the health science.  I know from experience that if I can sit down and talk with a member of my target audience (in particular, someone who is genuinely interested in learning the truth), I can almost always convince him or her of the truth. 

On the other hand, in such circumstances I generally have the advantage that my listener knows me, and thus knows that there is no chance I am the one spouting utter nonsense and simply lying about the science when I point out that the other side is doing just that.  So perhaps I have not quite achieved my goal of figuring out how to communicate the material to someone who wants to know the truth but does not know, going in, that I am the one that should be believed.  I think I have some insight into the topic, and would like to try to communicate some pointers, and at the same time try to better figure out how to do it myself.

I will explore that theme and goal periodically (maybe most every week) in this series because it is critical to what I am trying to do.  Eventually I will challenge a health news claim that you (some particular one of you) were inclined to believe.  Perhaps you will believe me because I have built up enough credibility through my other analyses, but maybe you will want me to make a case for why I am right that does not require you to start by assuming I am right.  For example, I suspect some readers must be asking (if they have read this series, particularly what I wrote yesterday), "why should we believe you, the iconoclast, rather than the icons of epidemiology in academia and government; if your calls for methodologic reform are right, why is almost no one adopting them?"

In short: what can I write, and what can you realize, that would lead you to believe me?

To start exploring "why should you believe me?", I would like to invoke the work of someone who I consider to be very talented at making a good case for why we should believe him.  Many of my readers follow Chris Snowdon's Velvet Glove Iron Fist blog, but may not be as familiar with his other book and blog The Spirit Level Delusion.  (If you are somewhat familiar you might want to check back, he added a lot of new material last week.)  This is his response to the book "The Spirit Level", by two epidemiologists, Richard Wilkinson and Kate Pickett (W&P), which claims that wealth or income inequality in a society (not the well-known problems of poverty, but inequality per se) causes all manner of health and social problems.  W&P's book apparently has a big following among British lefty pundits – those who are predisposed to support the policies that would be recommended were the book correct.  It has received much less attention in the U.S. (perhaps due to a dumb choice of title, which sounds New Age-ish to the ears of those of us who refer to that tool as just a "level", "bubble level", or perhaps "carpenter's level" and had never before heard the term "spirit level"), though it has been picked up by a few lefty pundits like Nicholas Kristof (which I commented on with dismay since I like Kristof's non-naive analyses). 

Snowdon's book (and associated interviews and blog posts) does a thorough job of debunking W&P and showing that their work is utter junk science.  I am confident that no serious reader who was genuinely interested in learning the truth could read what he wrote and still believe that W&P's analysis was legitimate.  He could not easily win such a fight by simply presenting his own assertions counter to theirs, hoping readers would choose to believe him.  Why would they choose to believe a journalist who is not backed by a major publisher over two university researchers?  (Most readers of this blog perhaps realize that a sharp, scientifically literate journalist is probably a better scientific thinker than most people who publish epidemiology, but the average reader would not know this.)  There is a lot to mine from his presentation, and I can only touch on the answer today (more later).

The key to Snowdon's method is pointing out, in ways that any sensible reader can see without expertise in the subject matter, fundamental flaws in W&P's arguments.  The reader is then forced to either believe the critique or believe that Snowdon is fabricating gross out-and-out lies.  For example, in the first of his recent posts, Snowdon addresses W&P's implication that the many previous studies on the subject of inequality all supported their claim.  He first points out that if you read carefully, W&P only state that there were 200 papers that tested the relationship between income inequality and health.  They neglect to mention that quite a few of those papers conducted that test and concluded that there was nothing there.

(I am reminded of a Colbert episode from last week where he was joking at length about Taco Bell being accused of putting "beef" in its food that did not actually meet the U.S. Department of Agriculture legal definition of beef.  A Taco Bell spokesman responded to the accusations by pointing out that all of their beef was USDA inspected.  Colbert noted that "inspected" is not the same thing as "approved".  This further reminds me of a word that you may see in epidemiologic survey research, "validated", which basically is more like "inspected" though authors try to make it seem more like "approved".  I expect I will take up that point sometime in this series.)

Snowdon then went on to produce a series of quotes from previous researchers about findings that disagree with W&P's claim.  His key observation here is not that the evidence that W&P were wrong is more compelling than the evidence they were right.  That argument would require the reader to have expertise in the field to sort out the conflicting claims, to know whether all relevant studies were being cited, to know what exactly the quoted study results mean, etc.  But Snowdon's key point was a different one:
Those with a healthy scepticism will have noticed that I have only quoted studies that support one side of the debate. It’s a slippery and misleading trick and it is exactly what Wilkinson and Pickett do throughout The Spirit Level. The difference is that I made it clear from the outset of this book that there are many conflicting studies. Readers of The Spirit Level would be hard-pressed to guess that there was any debate at all.
So Snowdon has successfully pointed out to the reader that whatever the weight of the evidence might show, the evidence does not resemble what W&P claim it is.  To doubt that point would require believing that Snowdon was making up the quotes he wrote, something that would undoubtedly be picked up on by those on the other side and that would destroy his credibility, and thus is vanishingly unlikely.  (Also the interested reader could check it himself.)  He then redoubles the point by showing that a study that W&P cited as being the exemplary support for their thesis was actually quite equivocal.  Roughly speaking that one translates into, "if that's all you got, why did you even show up?"

(Aside:  This also supports a criticism I make about the way reference citations are used in health science.  Far too many authors, reviewers, and editors seem to think that it is appropriate to make a sweeping statement and then cite a single supporting study following it.  But all this does is create an illusion of increased credibility – finding a single quote or citation to support a particular claim is almost completely uninformative, because there is some support for all but the most hopeless claims.  Authors need to either implicitly say "this broad claim is true; we assert this based on our expertise about the entire body of evidence and you will have to trust us", provide a complete review of the evidence, or direct the reader to further analysis of the point (a legitimate use of a citation, and one that should be used more often).  Citing a single piece of support as if it justifies a sweeping claim is just a way of trying to mislead readers.)

While pointing out that W&P are trying to misrepresent the weight of the evidence is not sufficient to deny any particular claim they make (Snowdon debunks many of their points in detail using other arguments), it should be enough to make the open-minded reader seriously doubt everything that W&P claimed.  The general lesson is:  If authors can be shown to be denying the existence of opposing evidence and conclusions – not disagreeing, challenging its validity, or saying that it is overwhelmed by the evidence on the other side, but simply pretending it does not exist – this is pretty good evidence that they are not honest analysts and, moreover, do not think their case can stand on its merits.

Of course, W&P made it easy for Snowdon to shatter their credibility by making it so brittle.  They put the reader in the position of either believing they have unequivocal evidence for a "new theory of everything" (to quote from Snowdon's snarky subtitle), or concluding that they were just pulling a sales-job on the reader.  If they had behaved like scientists – recognizing the best contrary evidence and being properly equivocal – rather than peddlers or evangelists, it would have been necessary to explore the merits of their argument to challenge their claims and credibility.

Still, it is useful to figure out how to debunk as easy a target as is The Spirit Level.  We need to start with the challenge of winning one-sided debates before we can take on arguments that have some credibility.

29 January 2011

Unhealthful News 29 - Um, yeah, we already knew that: smokeless tobacco does not cause pancreatic cancer

A recent paper has been touted as showing that smokeless tobacco (ST, which mainly refers to oral snuff, which is sometimes called snus, and chewing tobacco) does not cause pancreatic cancer (PC), which is contrary to what some people believe.  This is of little practical consequence, since even the highest plausible risk claimed for PC was only a fraction of 1% of the risk from smoking, and thus the claim had no effect on the value of ST in tobacco harm reduction.  But there are several angles on this that are worth exploring here.  (For those of you not familiar with my work on tobacco harm reduction – substitution of low-risk sources of nicotine like ST for smoking, which could have enormous public health benefits – you can find more background in our book, blog, and other resources at TobaccoHarmReduction.org.) 

As a first observation, since this is a series about health news, I should point out that, as far as I know, the new article did not make the news.  Since I cannot point to a news report for background reading, I recommend instead a good blog post by Chris Snowdon that summarizes it (and touches on a few of the themes I explore here).

It would be one thing if it did not make the news because it was not actually news (see below).  But I doubt that most reporters would have realized that, so the obvious explanation does not speak well of the press.  News that contradicts conventional wisdom is likely to be highlighted because it is more entertaining, but not if it is an inconvenient truth for those who control the discussion, in which case it stands a good chance of being buried.  Since the anti-tobacco activists who dominate the discourse in these areas want to discourage smokers from switching to low-risk alternatives (yes I know that sounds crazy, but it is true – it is beyond the present scope, but I cover it elsewhere), they prefer people to believe that ST is riskier than it really is.

Second is the "um, yeah, we already knew that" point.  Those of us who follow and create the science in this area have always known that the evidence never supported the claim of any substantial risk of PC from ST.  (An important subpoint here is that an empirical claim of "does not cause" should be interpreted as meaning "does not cause so much that we can detect it".  For an outcome with many causes, like cancer, and an exposure that affects the body in many ways, it is inevitable that if enough people are exposed at least one will get the disease because of the exposure.  It is also inevitable that at least one person will be prevented from getting the disease because of the exposure.  So what we are really interested in is whether the net extra cases are common enough that we can detect them.)
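The detectability point above can be made concrete with a back-of-envelope calculation (entirely my own illustration; the cohort size and baseline rate below are made-up round numbers, not figures from any study).  Treating case counts as Poisson, an excess is roughly detectable only if it exceeds about two standard deviations of the expected baseline count:

```python
import math

def min_detectable_rr(person_years, baseline_rate):
    """Crude sketch: smallest relative risk whose excess cases would
    exceed ~2 Poisson standard deviations of the expected baseline
    count -- a stand-in for 'common enough that we can detect them'."""
    expected_cases = person_years * baseline_rate    # cases if the exposure has no effect
    excess_needed = 2.0 * math.sqrt(expected_cases)  # ~2 SD of Poisson noise
    return 1.0 + excess_needed / expected_cases

# Hypothetical numbers: 100,000 person-years of ST users, with pancreatic
# cancer occurring at 1 case per 10,000 person-years in the comparison group
rr = min_detectable_rr(100_000, 1 / 10_000)
print(round(rr, 2))  # → 1.63
```

The exact threshold is arbitrary; the point is only that with a rare outcome and a study of modest size, any true relative risk much below that figure is lost in the noise, so "no detectable risk" is compatible with a small real effect in either direction, which is the honest reading of "does not cause".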

There have been three or four studies whose authors claimed to have found an association between ST use and PC.  Other studies found nothing of interest, and there must be dozens or perhaps hundreds of datasets that include the necessary data, so the lack of further publications suggests that no association was found in these.  There was never a time when a knowledgeable and honest researcher reviewing the available information would have been confident in saying there was a substantial risk.  One of the studies that claimed to find an association, published by U.S. government employees, was a near-perfect example of intentionally biased analysis; they actually found that ST users had lower risk for PC but figured out how to twist how they presented the results to imply the opposite.  Two somewhat more honest studies each hinted at a possible risk, but each provided very weak evidence and they actually contradicted each other.  Only by using an intentionally biased comparison (basically cherrypicking a high number from a different analysis of each dataset, because when similar methods were used they got very different results) could activists claim that these studies could be taken together as evidence of a risk.  Several of us had been pointing this out ever since the second of these studies was published; see the introduction (by me) and main content (by Peter Lee) of Chapter 9 of our book (free download) for more details.

The worst-case-scenario honest interpretation of the data is that there are a few hints that perhaps there is some small risk, but it is clearly quite small and when all we know is considered the evidence suggests there is no measurable risk.  In other words, if the new report had made the news, it would have been portrayed as a new discovery that contradicted old beliefs.  But only people who did not understand the evidence (or pretended to not understand the evidence) ever held those old beliefs.

One clue about why this would be is that the study was a meta-analysis, which refers to methods of combining the results from previous studies.  While some people try to portray such studies as definitive new knowledge, a meta-analysis cannot tell anyone who already understood the existing evidence anything they did not already know.  It is just a particular way of doing a review of existing knowledge, usually summarizing our collected previous knowledge with a single statistic.  In some cases, such as when the body of evidence is really complicated and fragmented (e.g., there are hundreds of small studies), this can be useful.  That might be a case where no one actually could understand all the existing evidence because it was too big to get your head around.  But doing a meta-analysis is not fundamentally different from graphing your results differently or presenting a table a different way – it might reveal something you overlooked because of the complexity of the information, but it cannot create new information.
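For readers who have not seen one, that "single summary statistic" can be illustrated with the simplest common approach, fixed-effect inverse-variance pooling.  This is a minimal sketch with made-up study numbers; real meta-analyses involve many more judgment calls, such as which studies to include:

```python
import math

def fixed_effect_meta(estimates, std_errors):
    """Pool study estimates (e.g., log relative risks) into one summary
    statistic by weighting each study by the inverse of its variance."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))  # precision of the pooled estimate
    return pooled, pooled_se

# Hypothetical log relative risks and standard errors from three small studies
log_rrs = [0.10, -0.05, 0.02]
ses = [0.20, 0.15, 0.25]
pooled, pooled_se = fixed_effect_meta(log_rrs, ses)
```

Note that the output is entirely determined by the inputs: the pooled number always lies within the range of the study estimates, and the pooled standard error is smaller than any single study's, but no information appears in the result that was not already in the studies — which is exactly the point made above.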

So when the information we already have is rather limited and simple, as it is for the ST-PC relationship, there is no way this meta-analysis of a handful of studies could have told us anything new.  Anyone who learned anything from the new study must have not known the evidence.  This makes the new paper a potentially useful convenient summary, but many of those already existed, so there was no value added.

[There are other problems that make meta-analyses much less definitive than they are made out to be, including some serious conceptual problems with the most common approach.  That single summary statistic has some big problems.  But I will save elaboration on these points for later posts.]

Third, given that, you might wonder why some people think this was news.  I have already pointed out that activists wanted to portray ST as more harmful than it really is. 

A few years ago, those anti-ST activists who wanted to maintain a modicum of credibility realized they could no longer claim that ST caused oral cancer (they came around to this conclusion about ten years after the science made it clear that there was no measurable risk).  While clueless activists, and those who do not care about even pretending to be honest, still make that claim about oral cancer, their smarter colleagues went searching for other claims where the evidence was not so well known.

But a quick web search reveals that the claims about pancreatic cancer risk from ST are stated as fact by anti-tobacco activists, as expected, and by electronic cigarette merchants, which I suppose is understandable marketing dishonesty, but also by some companies that make smokeless tobacco.  The latter are apparently motivated by a fear of ever denying that their products cause health effects, even health effects that their products do not actually cause.  It does escape me why, exactly, they felt compelled to overstate the support for the claim that ST causes PC, rather than perhaps just acknowledging that it has been claimed, not attempting to dispute the claim but also not bolstering it.  I know they had the expertise to know the truth, and urged some of them to stop actively supporting the disinformation, but it had little effect.  Maybe they thought they benefitted from the incorrect beliefs in a way that was too subtle for my politically-naive brain.

The more general observation from this is that accurate science per se does not have much of a constituency.  If someone has a political motive to misrepresent the science, like the anti-tobacco extremists do in this case, they will do so.  Perhaps there will be a political competitor who will stand up for scientific accuracy by voicing the opposite view.  But if there are no political actors on one side of the fight, or they are intimidated into not standing up to the junk science as in the present case, then we are left only with those of us who want to defend scientific accuracy for its own sake.  Needless to say, we do not have the press offices that wealthy activist groups, governments, and companies have, so we have little impact on the news.  This is especially true because most health news reporters have no idea who to ask for an expert opinion about the accuracy of a claim, so they usually just find the political spokesmen (some of whom are cleverly disguised as scientists).

Fourth, and most important for the general lessons of this series, is that the new paper exemplifies the fact that there is basically no accountability in health science publishing.  This is a particularly destructive aspect of the preceding observation about accurate science not having a constituency.  In many arenas, adamantly making a claim that turns out to be wrong is bad for your reputation and career.  This is obviously not true everywhere – American right-wing political rhetoric is the example that currently leaps to mind – though you might expect it to be so in science.  Unfortunately, it is not in public health science.

The senior author of the new paper (considered ultimately responsible for oversight; that is what being listed last of the several dozen "authors" of a paper usually means) is Paolo Boffetta.  Boffetta is personally responsible for much of the junk science and disinformation about ST and PC.  He was the lead author of one of the two not-really-agreeing studies mentioned above, a major player in the International Agency for Research on Cancer (IARC) report that constructed misleading evidence of cancer risk from ST, and author of a completely junk meta-analysis that engaged in the dishonest cherrypicking I mentioned above.  I would love to go through the entire indictment of him, but I have been urged to keep my word count down a bit, so I will refer you to the above links, the post by Snowdon and Lee's article that is reprinted in the book chapter, as well as this recent post by Brad Rodu.

Instead I will focus on the point that since publishing in public health science is treated purely as a matter of counting-up scorekeeping by many, no one pays any attention to whether someone is producing junk science or even utter nonsense.  If you are someone like Boffetta who "authors" more papers than anyone could seriously analyze, let alone write, no one cares that you could not possibly be doing any thinking about their content – they just say "wow, look at that big number", since assessing quality is beyond the abilities of the non-scientists who occupy most senior positions in public health academia and government.  They do not even care (or notice) that someone's publication record for the last few years contains flat-out contradictions, like the various reports by Boffetta listed here (and it gets even better – during the same period he was also first author of a polemic that called for more honest research in epidemiology and condemned science-by-committee of the type he engaged in regarding ST).

If you are thinking that things cannot really be that bad, I have to tell you that they are even worse.

The above describes what is typical for most of the best known (I did not say best respected) researchers in public health science, like those closely associated with the Nurses' Health Study I mentioned a few days ago.  They crank out far more papers than they could possibly hope to do well or even think through, and these are what you read about in the news.  Indeed, you are more likely to read about these mass-produced studies in the news because the authors are more famous – famous for cranking out a zillion often quite lame studies.

Down in the less-rarefied end of the field, it can get just as ugly.  I have observed ambitious (in the bad sense of the term) colleagues in public health, trying to climb the ladder, explicitly making deals to put each other's names on their papers as authors, even though the other person contributed nothing to the paper and had no idea whether it was accurate.  Slipping a sixth author into a list of five does not dilute anyone's credit (though it obviously should), but it lets someone boost his numbers knowing no one would ever ask him to defend the content of the paper.  On a few occasions I or one of my colleagues who actually cares about science have asked a guest lecturer (often someone who was applying for a faculty job in our department) to explain or justify an analysis in one of their recent papers that we disagreed with, and were later told that actually challenging someone's claims was considered impolite.  (These people would have never survived graduate school in the fields I studied!)

A lot of critics who do not really understand the field call epidemiology junk science, but typically their condemnations are based on ignorance.  The truth is worse.

I wish I could conclude this point with some optimistic note of "so what you need to do as a reader is…", but I do not have one.  The one bright spot that occurs to me is that when I work as an expert witness, the health science "experts" on the other side are seldom people who have really worked in the area, since, given the quality of typical public health articles, anyone who had written much would probably have published and stood by numerous errors that would undermine their claims of expertise.

Bringing this back to a few take-away points:  If someone claims to have discovered an existing belief is wrong, particularly if this is based on a simple review of the evidence, chances are that either (a) the new claim is wrong, or (b) the real experts did not actually have the incorrect belief.  For a politicized issue (one where any significant constituency cares about the scientific claim for worldly reasons), you are unlikely to get an accurate view of the science unless you hear from a scientific expert who supports the opposition view.  If such a person says "I do not like this, but I cannot dispute the claim", you have learned a lot; if they are merely given a meaningless soundbite in a news story then you have only learned about the bias of the reporter and have not heard the counter-argument.  If you hear a counter-argument, that is where the tough part begins – for both your analysis and my attempts to empower you.  I start on that tomorrow.

28 January 2011

Unhealthful News 28 - coffee, olive pits, and liability as regulation

An interesting confluence of two events seems to have been overlooked, and I suspect that neither one is being reported outside the U.S.  The movie "Hot Coffee", which tries to counter some of the ridicule that is the conventional wisdom about personal injury lawsuits, debuted at the Sundance Film Festival, and U.S. member of congress Dennis Kucinich filed suit against the food service company in his congressional office building for the dental injury he suffered as a result of a hidden olive pit.

Though print stories I saw about Kucinich were short and matter-of-fact, the television clips included open ridicule.  Taking a shot at Kucinich is undoubtedly tempting for the corporate media since he is a huge outlier among high-office elected officials in America – there is pretty much no other major official who is close to him on the populist left (maybe Bernie Sanders).  Thus he has pushed hard for some very unpopular positions, like "do not start a land war in Asia" (you might recall that opposing the wars was a very unpopular position before everyone else caught on).  [Disclosure: I campaigned for him, a rare fellow libertarian-left vegan (I used to be one) from Ohio, and back in the days when I was at the higher end of my widely changing income I was at the "have cocktails with the candidate and hand-signed mementos" level.]

What you would not learn from the giggling talking heads was that biting into the olive pit, hidden in a wrap sandwich where it was easy to bite down on without warning, caused so much damage that Kucinich had to endure multiple surgeries and suffered a lot of pain and loss of functionality.  I suspect most of us who are not impoverished would pay a year's salary to avoid what he went through, and the amount of the suit ($150,000) was less than what he earns in a year, though it sounds like a large number when described without the context of the injury's severity.

Similarly, you have probably heard of the seven figure judgment awarded against McDonalds for someone getting burned by a cup of their coffee.  The new movie sets the record straight on that one, in the context of a polemic intended to push back against the conventional wisdom about such lawsuits, which its creators characterize as being a concerted campaign by corporations to create scorn and thus increase support for protecting them from further lawsuits.  (For more about the movie from that perspective, there is a series of stories from the anti-corporate media here (scroll down past the video window to find transcripts).)  To briefly correct the story about the coffee:  It was not a case of a driver taking the cup and spilling it on herself, as widely reported.  Rather she was a passenger in a parked vehicle, holding the cup between her legs, and the claim was that the styrofoam cup just collapsed (the last part is hard to verify, of course).  The injuries were so bad that they required surgery and were reported to have substantially ruined her life. 

Yet the family merely asked McDonalds to cover their out-of-pocket medical expenses (recall that this is in medical-financing-backward America), just like a homeowner's insurance policy might provide such coverage if this happened at a private home.  When McDonalds refused, the family went to court, still seeking a fairly modest sum.  At trial it came out that McDonalds intentionally keeps its coffee at an extremely high temperature, far hotter than coffee would normally be served, because it saves them money, and that hundreds of people had suffered medical-treatment-level injuries from it.  The judge and jury were so incensed by what they learned that they awarded over $2 million in punitive damages, though the plaintiff probably only collected a small fraction of this in the final (secret) settlement.  That final outcome is typical for lawsuits like this; even if the consumers win a big award that makes the news, they usually have to negotiate for a much more modest sum in exchange for the company not tying up the finalization of the award with further legal action that can last longer than the person might live.

Another case I am reminded of, which was a favorite example 20 years ago, was someone who successfully sued the owner of a phone booth when he was struck by a car while using it.  (For my younger readers, a phone booth is like a mobile phone, except that it is bolted to a particular piece of tarmac and a lot cheaper to use.)  It sounds utterly absurd until you learn the fact – not mentioned by those delighting in the example, of course – that this was the second time such a thing had happened in that particular phone booth, which suffered from both dangerous placement and a door that was difficult to open to get away from oncoming traffic.

Why are the Kucinich and Hot Coffee stories important health news?  Because they reflect an important part of the U.S. regulatory system for health risks.  I believe almost half of my readers are from the E.U., and many of you grouse about the increasing morass of regulations there.  In the U.S. we have fewer command-and-control regulations and depend on the threat of lawsuits to give companies the incentive to police themselves.  This is explicitly recognized as an important part of the regulatory regime by those who study law and economics, though probably not by most people.  In theory it has big advantages:  Companies are theoretically in a better position to keep track of possible hazards and, because they are creating the hazards, to figure out the best way to reduce them.  It is flexible, forcing companies to worry about creating new hazards that hurt people even if the regulators have not caught up with the situation.  It is also accepted, as part of this theory, that the optimal number of bad outcomes is not zero, and sometimes it is more efficient to compensate the occasional victim rather than to engage in overly-expensive interventions to reduce the risk. 

There is an endemic debate about doing something to stop "frivolous" lawsuits in the U.S., including Obama's promise to reduce medical malpractice lawsuits in his State of the Union speech this week.  It is important to realize that such lawsuits are an inherent part of consumer protection, and so this is really a call to reduce consumer protection regulation.  There is no obvious way to get rid of genuinely frivolous suits without creating barriers to other suits that are useful contributions to regulation, especially since some of the apparently frivolous examples are really being misrepresented. 

This does not mean that there are not frivolous lawsuits, and it certainly does not mean that the current system works as well as it might.  The fact that some suits even need to be defended is indefensible (I have worked on several of those – for the defense, I would like to note).  It is quite possible, for example, for someone to win a lawsuit even when ample science shows that there is almost no chance that the exposure in question caused the disease that is being attributed to it.  There is also an inherent arbitrariness to it (e.g., how should the blame be shared between the company that serves dangerously hot coffee and the consumer who takes the inadvisable step of holding it between her legs?).  As for medical malpractice, there is a lot of damage done by bad medical practices, but it seems that consumer lawsuits do almost nothing to reduce most of the real errors and frequently punish providers for outcomes that were unfortunate but not caused by error.  So the incentivization to do better work and get rid of practitioners who are incompetent is minimal.

A lot of news stories about consumer health lawsuits, like many news stories, focus on extreme cases and look for ways to make the story entertaining.  Thus, the casual reader might think that a large fraction of lawsuits are silly.  It is true that pretty much no one would have come up with the American liability system if tasked with creating a regulatory system, but that is what evolved and as with most evolved systems, mutation (radical change) is more likely to make things worse than better.  But when you read a story about how we need to do something about the claimed excess of costly lawsuits, keep in mind that this is really saying that we should reduce companies' expenses at the cost of having less consumer protection.  You may or may not agree that we should pursue such a change, but if you just read the news you probably would not even know that is what you were being asked to agree with.

[Update:  After writing this, I learned that Kucinich's lawsuit was settled (and he provided a lot more detail about it at that link) which is what typically happens.  An incentive for food providers to take greater care with hidden olive pits has been created, and instead of this cost of imposing such incentives being paid to the government or being a deadweight loss, it goes to compensate someone who was injured.  Again, there is plenty wrong with the current system, but this is an example of what is right about it.]

27 January 2011

Unhealthful News 27 - breast cancer noise

Any "news" about breast cancer generates press coverage, and it is especially fun for the reporters if they can blame the victims' own unsavory behavior, like smoking.  The latest such story (e.g., here) has so many interesting aspects that it is difficult to cover all of them.

First, why?  The study reported a slightly elevated risk of breast cancer for some smokers.  I can understand why a student might want to crunch the numbers on this as a project, but why is it news?  We already know that smoking is bad for you, and even if the press release about this were accurate, it would not change that much.  Who, exactly, is on the fence about smoking, thinking "if it were a bit less healthy I would quit, but I am fine with it now."  Moreover, such a discovery would not really tell us it is a bit less healthy than we thought:  We already know the overall longevity and health outcomes for smokers vs. nonsmokers, so all this would do is explain some of the total impact.

On the same note, this result should not be touted as news because overall the studies of smoking and breast cancer have found no measurable effect.  Smoking is such an intense and complicated exposure that there are few health outcomes that it does not affect (most of the effects are bad, a few are good), so it seems reasonable to think that breast cancer is affected to some extent.  But we have a lot of evidence that the effect is small and not really worth mentioning compared to the other risks from smoking and other causes of breast cancer.  As with any well-studied small effect, there are some studies that show a positive association like this one (that is, it looks like the exposure causes the disease) and some that show a negative association (that is, it looks like the exposure might protect against the disease).

Second, this study, in itself, fits that description.  That is, it finds that smoking is sometimes associated with breast cancer, but sometimes protective against it.  Did you miss that bit in news stories?  Unless you live in Los Angeles, where the paper sort of pointed that out, that is not too surprising.  It turns out that the study estimated a positive association with smoking in one's youth, but a protective association with smoking post-menopause.  The two estimated associations are about the same size, and they come from the same study, so if you are going to believe one, you are obliged to believe the other.  That is, if you are going to say, "here is a good reason for young women to not smoke," you need to also say, "here is a reason post-menopausal women might want to start smoking" (or, perhaps more realistically, "if you are still smoking at menopause, here is a reason to not quit").  It is difficult to imagine a more patent form of intellectual dishonesty than picking only the result that supports your political views and ignoring a comparable point from the same source just because you do not like it. 
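For readers who want to see what "about the same size" means here: association sizes are usually compared on the log scale, where a harmful relative risk of, say, 1.2 and a protective one of about 0.83 sit equally far from the null value of 1.0.  A quick sketch with hypothetical numbers (these are illustrative, not the study's actual estimates):

```python
import math

# Hypothetical relative risks, chosen only to illustrate symmetry;
# these are not the study's actual estimates.
rr_harmful = 1.2                     # positive association (youth smoking)
rr_protective = 1.0 / rr_harmful     # about 0.83 (post-menopausal smoking)

# "Same size" means equal distance from 1.0 on the log scale:
same_size = math.isclose(abs(math.log(rr_harmful)),
                         abs(math.log(rr_protective)))
```

So whichever of the two numbers you find more politically convenient, the other one is exactly as big a departure from "no effect".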

Of course, it would be better to say "this does not imply it is good to smoke post-menopause because the other risks outweigh any benefit this result implies", but that would require admitting, "this really does not change the arguments against smoking either, since any effect is quite small compared to the known risks."  The accurate interpretation is that both estimated effects are small, and are really just curiosities.  Moreover, in the greater context of what is known about smoking and breast cancer, this one study should change our beliefs very little anyway.  The study should have been treated like a new study in astrophysics – something for technical experts to use, and perhaps to be published in the news for people who like to read science news for amusement, but not run as a headline as if we should really care about it.

Third, learned readers should notice that this result came from the Nurses' Health Study.  That does not make it wrong, but it should make you suspicious.  That long-running cohort study is largely responsible for the opinion that many experts in epidemiology have, as I noted earlier, that nutritional epidemiology is mostly junk science.  That study collects zillions of variables about its participants every year, so it is possible to study countless topics – or dredge for associations among the data and publish whatever randomly appears.  A few researchers who have violated the code of omerta that surrounds the study have told me how the researchers who control it assign junior researchers particular topics and forbid publication of results that contradict claims made by previous researchers in the group.  These factors, plus the fact that the data is a closely guarded secret, make this study the epitome of how epidemiology often violates the fundamental norms of proper science.

Just to mention a second story which is not actually about breast cancer, though you would probably not know that from the headlines, it was discovered that breast implants apparently increase the risk of a rare cancer.  This was only discovered because the cancer is so extremely rare that just a few cases of it – there are about 60 cases reported among the perhaps 10 million women who have had implants – is enough of an increase to be noticeable.  It turns out that this is not even breast cancer, but a lymphoma (a cancer of the immune system) that occurs in the scar tissue that is created.
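The arithmetic behind "extremely rare" is worth a quick sketch, using the rough figures in the reports (about 60 cases among perhaps 10 million women with implants; these are the reported approximations, not precise counts):

```python
# Rough figures from the news reports, not precise counts:
cases = 60
women_with_implants = 10_000_000

# Crude rate among women with implants:
rate_per_million = cases / women_with_implants * 1_000_000  # 6 per million
```

Even if every one of those cases were caused by the implants, the excess risk is on the order of six in a million, which is why it sits down in the range of risks like the car trips to the clinic.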

To their credit, neither the U.S. FDA who reported this, nor any other reputable actor, is suggesting that this ought to affect anyone's decision.  This cancer is easy to treat, and the risk from the anesthesia for the surgery completely swamps it.  Indeed, it is down in the range of the risk from the car trips required to arrange and get the surgery. 

The major thing that these stories do have in common is that they are hyped beyond the minor technical curiosities that they are, and that neither one represents a new reason to avoid the activity that causes the risk.  There are plenty of arguments against those activities, of course, and those downsides must be balanced against the advantages.  Not surprisingly, the news acknowledged this for cosmetic surgery, but was steadfast in avoiding any mention of why people choose to smoke.  The image of a woman with a cigarette used to be considered sexy, but this was changed as a result of a social engineering campaign.  Perhaps if anti-obesity campaigns decide to try to create a similar backlash against double-D-cups being considered sexy, future stories about minor health risks from breast implants will be spun as definitive reasons why we should stamp out that practice, regardless of individual preferences.

26 January 2011

Unhealthful News 26 - the State of the Union is uninspired

I was hoping that President Obama would give me something good to write about today, but there was no mention of public health (only a bit about medical financing) and pretty much no reference to science (vague calls to improve America's competitiveness, education, and immigration policy do not count).  But since I do not have much time to blog today, I will just write a bit about what he did say.

The one claim about public health that I noticed was wrong, that wind energy (implicitly: wind energy using our current technologies) is "clean".  He did not make a big deal about the claim, and obviously he personally does not have time to know anything about it, so I will not harp on it.  But I wonder why no one in his administration has figured out that it might be a good idea not to be too closely associated with that particular technology.  Even those who insist on believing, against all the evidence, that the current industrial wind turbines do not cause serious health problems for local residents, create aesthetic damage, lower property values, and tear apart communities must surely realize that a lot of us are of the studied opinion that they do cause these problems.  Oh, and they are very inefficient (read: expensive and unable to provide the reliable generation that the power grid needs).   (If you are curious, there is a lot written about it; start here.)

Thus, we have to wonder why his energy advisors, who must know about the controversy, are letting him be directly associated with the issue.  It is not like tax policy, the wars, or even abortion rights, where it is difficult to avoid taking a stand; it is easy to just never mention it.  It is difficult to believe that anyone is a strong enough proponent that this would win him votes, but I know a lot of people who are strong enough opponents to consider it a voting issue.  Perhaps this is an example of people thinking they understand an issue because they only listened to the proponents of a particular belief.  Let's see… this is day 26 of Unhealthful News, so I have probably offered more than 10 previous examples of someone making the same error, and will probably provide more than 100 more before the year is over.

More relevant to scientific analysis of health, though you probably would not realize it, is that Obama pledged to end the terrible Bush education policy called "No Child Left Behind" (NCLB).  That Bush administration policy imposed brainless standardized tests and other restrictions on the nation's school systems, resulting in years where millions of children spent enormous amounts of their time drilling to do better on the test rather than actually learning.  The policy was widely condemned as terrible for the education of the kids, insulting to teachers' status as trained professionals, and even corrupt (cronies of those who put it in place, including one of Bush's brothers, were suppliers of the test-related materials to the schools).

The bit that makes this a health science story (other than the fact that focusing teaching on what can be included on a standardized test pretty much ensures that no one will learn to think scientifically) is that many proponents sowed confusion by insisting that NCLB was better because it followed a "scientific medical model".  It was a complete misrepresentation of how health science really works, but few of the opponents of the policy had the expertise to argue this point.  (I toyed with writing some papers to help out with this, but never managed to organize the necessary collaboration.) 

It is not entirely clear whether the proponents who were using this rhetoric knew that they were misleading people or whether they actually thought they understood how health science is done.  I am guessing that they mostly committed the standard intellectual crime of hearing some claim that they vaguely understand and translating that into definitive (false) statements.  Other examples include things like the first topic of this post, and my personal favorite, "we need to run this university more like a business."  The latter is quite similar to the "scientific medical model" rhetoric, since it usually comes from someone who either understands almost nothing about running a business or almost nothing about what a university is (first, we lay off everyone at all the unprofitable departments, like liberal arts; then, we cut back needless frills like student community building, seminars, and non-contract research; then we abandon rules that might impede our marketing efforts and IP, like free and open inquiry – hmm, come to think of it….)

The mythical "scientific medical model" that the NCLB proponents invoked was that only randomized trials are informative.  Since that has been the theme for much of January (I will write less about it in the future), I will not add to what I have already written about how wrong that is in general.  It was a wonderfully convenient claim for NCLB proponents because only very rigid interventions that anyone can do, like imposing simple uncreative classroom methods, are amenable to trials.  Moreover, trials tend to be very simplistic research and require a simple endpoint – like the scores on a standardized test.  The observational research, which can examine the value of various teaching methods that cannot be easily imposed as part of an experiment, is far more difficult to do, but has the advantage that we can try to assess what we actually care about.

Funny how almost everyone who pretends to think that only RCTs are informative stands to profit from either doing the RCTs or claiming that the lack of them means we have no evidence.  Going back to the first example, the consultants hired by the wind power industry are fond of trying to mislead people into believing that there is "no scientific evidence" of health effects, when what they are really pointing out is that the evidence has not been published in journals.  Even this is not entirely true, but it is true that the majority of the evidence we have takes the form of adverse event reports of specific problems.  But within another year, I expect there will be a fair bit of evidence published in journals, and at that point I expect the industry consultants to start claiming "we cannot really know anything because there are no randomized trials" (indeed, I went up against someone who was claiming that in one hearing already).

This ignorance (or pretend ignorance in the case of many of those profiting from it) is indeed a medical model.  For decades physicians drew bad conclusions as a result of conducting very bad observational research.  The solution was to teach them that they should do (or only trust) randomized trials to overcome the problems of their bad observational research (and, occasionally, problems of good observational research).  Many of them mis-learned the lesson (the problem is not that observational research is useless in general, just that physicians were really bad at it and that, for some areas of inquiry like medical treatments, trials are almost always better) and somehow their misunderstanding spilled over to other non-scientists.

The result is indeed model medical behavior: someone having the kind of practiced arrogance that makes him certain about everything, no matter how little he knows and how many mistakes he and his colleagues have made before.  Not too different from the political model of macroeconomics right now – I kind of shrugged when Obama made the mistake about wind power, but I yelled at the television when he got things so wrong about what economic policy we should pursue (and was not at all surprised when the Republicans responding to him took these same errors a lot further).  But others are much more expert at explaining those points, so I will leave it to them.

25 January 2011

Unhealthful News 25 - cynicism, like acetaminophen, can come in too large a dose

In a column this morning, Columbia University medicine professor Richard Sloan declares that there is absolutely no evidence to support the widely held belief that having the right mindset or a "fighting spirit" can benefit your health.  The trouble with having a burning urge to debunk is that it is sometimes hard to know when to stop.

Most of what Sloan has to say is reasonable.  As much as someone might like to think that having a positive attitude can slow the growth of cancer cells or even bolster our immune system, the evidence shows that the effect is minimal.  The belief that there is a strong effect is easily attributable to selective memory and reporting.  No one ever says, "he did not have a spontaneous remission because he is such a lazy heathen."  Sloan also recounts stories of a few of the many movements and traditions that have put far too much faith in the power of thinking and belief (strategically avoiding mentioning any currently influential religion).

But his thesis statement is simply wrong.  He claims: 
But there’s no evidence to back up the idea that an upbeat attitude can prevent any illness or help someone recover from one more readily.
This is certainly not true for the many illnesses that have psychological distress at their core.  For depression and the other psychological diseases that arguably account for more of the total disease burden than physical conditions, Sloan's claim comes close to being wrong by definition.  Those conditions are not, of course, merely attitude, but many of them are obviously closely tied to it.  So perhaps Sloan had in mind the caveat "any physical illness" but was just so caught up in his point that he forgot that he needed to clarify. 

But many physical diseases also have causes in mood, attitude, or mindset.  He tries to dismiss this by pointing out old myths about breast cancer being caused by sexual inhibition and other bad attitudes (but when is it not possible to pull some absurd belief from the dark ages of medicine – by which I mean now or any earlier point in time – to illustrate that a particular pattern of belief led to some absurd claims?).  He also notes the belief that stomach ulcers are caused by "unresolved fear and resentment" (I would have gone with "stress and anger"), which was rather less absurd in its day.  But he includes in that list hypertension being caused by "inability to deal with hostile impulses", which is not a very strong argument.  While we now know that stomach ulcers are caused by H. pylori infection, it is not entirely clear that stress has no effect.  As for hypertension, while most causes have little to do with our thought patterns, some of them do.

It is on the recovery side that this blanket dismissal falls down completely.  Sloan uses the news about Gabrielle Giffords's recovering from the effects of our American culture of political violence as the hook to publish this column.  (Yes, I know, it is guns that shoot people.  Or violence-prone individuals.  It is not because of the cultural norms that make guns common, or the rhetoric that triggers the inclination some people have to pick up a gun and shoot people.  Oh, wait, yes it is.  In the spirit of yesterday's post, all of these are the cause of her injury.  Come to think of it, this is a much simpler example than I used yesterday.)

Anyway, Giffords's case is a particularly bad choice for Sloan.  It is true that her much cited "fighting spirit" probably did not help her survive the bleeding and brain swelling that could have killed her.  But recovery from a traumatic brain injury can depend hugely on someone's spirit or mindset.  Re-learning, in middle age (when we do not have the brain plasticity of young children), how to do the things that the missing bits of brain used to control, dealing with frustration and a sense of loss, and overcoming the mood disorders that often result from such injuries require an enormous amount of work.  Such work is aided by having a fighting spirit.  I have personally witnessed the recovery from severe brain injury of a young highly-driven person who fought back hard and an older person who made some efforts but just did not have as much spirit.  The differences were enormous.  Obviously my two observations do not constitute a very good study, and I understand that many myths come from inadequate studies like this.  But I am not arguing the statistics; this is the type of causal pattern that you can observe without statistics.  Having less fighting spirit can obviously cause someone to not push hard enough through the difficult effort that improves recovery.

I can provide even more direct insight.  My shoulder was injured to the point that last summer I could not use my arm very effectively (we are talking not being able to wear most shirts due to loss of mobility).  After getting treatment and learning the right strategy for recovery, I was stoked to fight back, and endured serious pain to recover about 80% of what I had lost.  But when the point of diminishing returns coincided with other events that left me with rather less fighting spirit, I stopped pushing hard enough to finish the recovery and, surprise!, stopped recovering.  (Though failing to recover from an injury that ends one's rock climbing career might actually result in better health in the long run.  Causation is complicated.)

I understand that Sloan was probably thinking something like "a fighting spirit will not fix blood vessels or make chemotherapy agents work better" when he went overboard and made a much broader statement.  But even then, the claim is not entirely correct.  Even if fighting spirit, praying, anger, resignation, and boredom have no effect on a treatment, complete resignation can cause someone to give up on the medical efforts and just die.

As a broader lesson, this is a case of how easy it is to deny any broad phenomenon by picking its weakest claims and pointing out that they are wrong.  This is the counterpart of advocates of a theory cherrypicking evidence that seems to support it.  Instead of addressing the evidence about how willpower affects recovery from serious trauma, it is possible to observe that many people believe that cancer can be driven to remission by joyous prayer.  It is then easy to point out that studies have failed to show that this is true.  That does not mean that every claim of attitude affecting health is wrong, of course, but it is easy to write an essay that implies that it does. 

The lesson, then, is one of weak science and powerful rhetoric.  The fact that someone can attack weak claims and thereby get away with publishing statements that any editor or reader should realize are overly general (who among us has not observed someone taking actions to recover from a serious injury or illness make better progress when in a good mood?) is a good cautionary tale:  If someone wants to "debunk" what you are trying to support, there is a good chance they will find the least defensible claim that you make and use it to imply that the entire category of claims is wrong (a little hint to my colleagues in THR).

24 January 2011

Unhealthful News 24 - Trying hard to avoid stooping to the phrase "food fight"

The major American public health policy story right now is Walmart announcing, with a publicity boost and endorsement from the President's wife (making it something with a whiff of official government policy, but not really such), a plan to push healthier food.  Reports about it range from the mainstream media reports (e.g., this one), where numerous reporters performed their stenography duties and reported that this must be a good thing because those announcing it say it is good, to extremely skeptical critiques (e.g., this good one from an old friend of mine which links to others).  The plan includes reducing salt and sugar in Walmart store brands, pressuring their suppliers to do the same, and lowering the price of produce where they are a major source of it, implementing this all sometime over the next five years.

Debates about this plan involve questions of whether this is an important initiative or too little, a great use of the first-family's bully pulpit or an inappropriate support by a government for a weak corporation-specific plan, a bold private move to fill in a social policy gap or preemptive weak action to avoid more useful regulation, and from a different direction, whether any corporate or government intervention is unreasonable interference with what is ultimately an individual choice.

I will bet that this issue annoys those who cling to the myth that there are no such things as good and bad foods.  While there is no bright line between good and bad, and quantities often matter more than items for overweight, we obviously can distinguish broccoli from cola on a healthy-unhealthy spectrum.  The "no bad foods" myth – which traces largely to the U.S. Department of Agriculture's rhetoric, USDA being a dominant source of nutrition advice, but also a marketing agency for American-made food products (all food products) – is often embraced by the same people who like the idea of markets solving our problems.  So it is amusing watching them decide whether private efforts are good even when they are grounded in the good-vs-bad food message.

My contribution to the discussion today does not involve any conclusions about whether the new initiative is a good thing.  I am just going to clarify two issues that muddle the debate about the topic.

The arguments that have been written about this often focus on the question of whether health problems from food choice (overweight, inadequate micronutrients) are the fault of public policy, corporate decisions, or individual choice.  The answer is that all of these cause poor food choice.  Indeed, they mostly cause it via a particular causal pathway, wherein public policy affects corporate actions, and those actions affect individual choices, and the individual choices cause nutrition problems.  Of course, there are other paths – e.g., public policy directly affects individual choice via education – but this linear pathway is responsible for a lot.  As just one example, farm subsidies and free market rules that treat food as little different from other goods cause companies to aggressively market junk food, and that marketing persuades people to make unhealthy choices.  Of course, some individuals still choose good foods despite the marketing and some companies escape the allure of cheap corn, so the upstream influences are not sufficient causes.  That is, they do not make the outcome happen all by themselves, regardless of everything else.  Often when someone claims something is not a cause of some outcome, they are really trying to say that it is not a sufficient cause.  So it is simply false to say "it is not the agricultural subsidies that cause companies to market junk food so aggressively, it is the fact that shareholders are considered to be the only important stakeholders"; it is true that capitalism creates the profit motive, but the subsidies create the huge opportunity to profit from particular actions, so they are both the cause.

Sorting out all the apparently contradictory arguments is simply a matter of ignoring it when someone says "it is caused by X, not Y" and realizing that whenever someone says it is caused by government, corporations, or individuals they are right.  Everything always has multiple causes.  (They do not always follow a pathway like this, sometimes X and Y both cause something independently of each other.)

Once we get past the erroneous language about what is the cause of this problem, we run into the more legitimate language about what is the right spot in the causal pathway at which to intervene.  Educate consumers?  Encourage companies like Walmart to expand the supply of vegetables?  Impose rules like Los Angeles's ban on new fast food restaurants?  Buried in this normative language is a confusion between what would have a desired effect and what is proper to do.  The first is a scientific question that is simply about predicting or empirically measuring consequences.  The second invokes ethical claims about paternalism, freedom, protecting the weak, and what is the public interest.  Of course, everyone can agree on the answers to the empirical scientific questions and then we can debate the ethical questions, informed by knowledge of what a particular intervention will actually do.

Ha! just kidding.  That is what should happen, of course.  But this being public health, there is no shortage of people with a particular political agenda distorting the science to show that their preferred approach is most effective.  There is a lot of dishonesty about true motives in this business – no one ever seems to say, "I do not much care what is most effective at improving diets, because I believe that the proper intervention is…." (Not to pick on those interested in diet – the same is true for most policies that involve people's behavior.)  Those who want to blame corporate behavior embrace studies that show that kids are literally captivated (and by "literally" I mean literally, unlike most uses of that word) by Happy Meal and cereal advertising.  Those who favor education and individual choice point to people like me who have periodically lived in poor neighborhoods but have always eaten fairly healthfully because I knew how to do it.  Such points are somewhat accurate, but also overly simplistic and so misleading.  Parents do not have to surrender to kids' wishes, the libertarians point out, but it is also true that people who have periods of being financially poor but who are well-educated are not very similar to most poor people in terms of personal empowerment (though such examples are also used to create the myth of American economic mobility – just because my income has shifted by more than 60 percentage points within the population distribution quite a few times in my life does not mean that America has high income mobility, or that when my income is low I am vulnerable to fast food marketing).

Anyway, to summarize:  Oversimplification of causal pathways is a tactic that is often used to elevate one particular intervention over other options; by declaring that something has one particular cause you can create the illusion that all interventions must start there.  Beware of statements about what we "should" do which muddle claims of "this would work better than alternatives" with "this is the morally proper way to do things".  Do not trust news reporters' "objective" reporting of "the" "facts" about deals between the government and corporations (be it subsidizing Wall Street or Walmart's new line of low-salt kale chips) – governments lie, and so do corporations, but you will not be able to determine that if you only read what they dictate to the mainstream media. 

As for food politics, I am guessing that some particular sub-faction in this fight is honest about what the research shows overall, and I would tend to guess it is those who believe that pursuit of corporate profits is not optimal for nutrition but is better than not having a free market, and who believe in intervention but recognize that government intervention frequently hurts the public.  But I am not completely sure, even with the expertise I have, and would welcome anyone who says "read this and you will be convinced" (so long as "this" is not just a monologue that does not even acknowledge there are opposing claims and counts on people just believing what they are told).  Even when you are right and backed by overwhelming empirical and ethical arguments, it is tough to make this clear to people who are interested, well-read, and even technically skilled, and who want to understand which arguments are genuinely stronger, but are not expert in your particular political fight.  I am willing to be your test subject if you are willing to try to show me how it is done.

23 January 2011

Unhealthful News 23 - Thou shall not enjoy being healthier

I will continue my practice of making Sunday posts a little different.  For this sabbath, two observations about the religious sides of public health.

I watched a few documentaries about beer brewing over the last few days, and decided to look for a health news story about beer.  WebMD favored my search with this story, which is currently on their front page, though there is nothing new in it.  What is special about the story is that it is called "The Truth About Beer" and, much to my surprise, it was almost the truth about beer.  Credit is shared between the author, Kathleen M. Zelman, and the researcher who she seemed to get most of her information from, Eric Rimm, who has been writing about this topic since I was in school.

Here was the best part:
It might seem unlikely, but beer (just like any wine, spirits, or other alcohol), when consumed in moderate amounts, has health benefits.
I hope the anti-alcohol disinformation has not been so effective that this actually still seems unlikely to most readers, but at least the message was accurate, which it usually is not.  For some reason, a myth persists that red wine, alone among alcoholic beverages, reduces heart attack risk and has other benefits, but there is no benefit from other drinks.  The evidence that all sources of alcohol have similar effects has been clear for decades.  There was originally a hypothesis, ages ago, that the French Paradox (the fact that the French have better cardiovascular health than would have been predicted from naive models from many decades ago that over-emphasized the badness of dietary fats) might be explained by a benefit from red wine.  Red wine indeed turned out to be beneficial, but no more so than the more plebeian brown liquids that most of us prefer.

I said "for some reason", but I suppose the persistence of the myth is neither a mystery nor an accident.  Similar to the myth that all tobacco/nicotine use causes large health risks, the myth about red wine is used by moralizing activists, who pretend to be motivated by their health "sciencey" claims but are really using the language of science to support their purification campaigns without admitting their political motives.  In this case, studies of isoflavones and such, and biased reviews of the data that cherry-pick any statistics that favor wine over beer, are used to try to convince people that the benefits are from a few stray molecules left over from the grape skin, rather than the ethanol.  Since relatively few of the unwashed masses that these activists are trying to manipulate have red wine as their drink of choice, the activists can continue to pretend that almost all drinking is bad.

I really do think there is a class issue at work here, and it affects why this news story (which also waxes about the quality of current American brewing) admits the truth.  When I came of age, there were only 60 breweries operating in all of America, and 99.9% of what they produced was little more than cheap slightly-alcoholic bubbly water.  Most anyone reading about the benefits of alcohol for a decade and a half after that saw nothing but the red wine myth, and those of us who knew enough to know the science and that there was such a thing as good beer did our own brewing (I was pretty good at it) and research (I published one paper about this ages ago).  Eric Rimm has been communicating the same message contained in the WebMD story for at least 20 years.  But now, with more than 1000 craft breweries and dozens of fairly large high-quality operations in the U.S., people who would have been wine snobs a generation ago now drink fancy beer, so the health news is no longer entirely anti-beer. 

The article also points out that beer provides the health benefit of hydration because it contains plenty of "fluids", by which they actually mean "water"; the other fluids in it – alcohol and carbon dioxide (fyi, the word "fluid", despite its misuse by medics and health reporters, means "any liquid or gas") – do not aid hydration.  This is a rare response to the myth that any drink that contains alcohol is magically dehydrating no matter how much water it contains.  Wine and undiluted liquor probably do increase your need for more water to rinse out your system, and certainly do not offer the hydration benefits of beer.  They also seem to have the downside of increasing the risk of oral cancer due to topical exposure to the concentrated alcohol, though this was not mentioned in the article.

The article has a few gaffes (e.g., the alcohol content of beers cannot rise to 40%, a level that is only possible – at least with any yeast that has been invented to date – via distilling; even Sam Adams's Utopias only makes it up to the mid-20s).  But the biggest problem with even accurate summaries of the health effects is this:
Health experts don't recommend that anyone start drinking beer, or any other alcoholic beverage, for health benefits.
One of my readers pointed out this oddity in a Facebook post a couple of weeks ago.  Of course, as he noted, it can be easily explained by the moralizing.  It is perfectly acceptable to recommend a pharmaceutical that also has unfortunate side effects, but we cannot recommend something that people actually enjoy.  Doing so might ruin the reputation of public health.  (The moralizers probably actually think that, and they probably do not recognize the bitterly ironic truth of it.)

On a completely random and unrelated note, I ran across this from the Journal of Policy Practice and just had to post something about it:
Deconstructing Social Constructionist Theory in Tobacco Policy: The Case of the Less Hazardous Cigarette
Author: Michael S. Givela
Abstract: Scholars in tobacco control have utilized a social construction approach to test and explain tobacco control policy and advocacy. Some recent tobacco control policy research has contended that Philip Morris's support of the U.S. Food and Drug Administration (FDA) regulation of tobacco (including purportedly reducing the harm of cigarettes) is to obtain the social construction goal of a socially responsible company. However, the primary motivation for Philip Morris's support of proposed FDA regulation and harm reduction for cigarettes was to maintain the company's market stability and profitability implemented by U.S. political process and institutions. In tandem with this, Philip Morris also sought political stability, a new company image, and federal preemption of conflicting and costly state requirements for harm reduction and tobacco ingredients. Social construction theory did not explain Philip Morris's motivation for seeking FDA regulation of tobacco. Only by reducing tobacco industry markets and customer use will there be a significant reduction in tobacco consumption.
As my readers know, sometimes I am embarrassed to be associated with epidemiology or particular subfields.  But when things seem to be at their darkest, I can always look on the bright side and realize that no matter how bad the literature in my areas is, at least it is usually caused by researchers not understanding and making a hash of what is an inherently legitimate science.  It is not actually a parody of science.

I honestly could not stop laughing after reading that abstract out loud to someone.  Try it, it is fun.  It is like trying to recite the dialogue from Star Wars – until you try to say it out loud it is not obvious how funny it is.  (In fairness, having heard my blog posts being read out loud by my "editor", what I write sounds pretty funny too, at least when read to the baby in a Winnie-the-Pooh storytime voice.)

It is interesting to see that last sentence of the abstract in something that is not standard anti-tobacco health science (or health pseudo-science); it is apparently a cross-disciplinary part of anti-tobacco as a religion.  It obviously does not follow from the analysis.  Instead, it is a version of the mandatory "amen" that you have to attach to the end of any paper to maintain membership in the anti-tobacco activist congregation.   Still, it is a pretty funny version of the amen:  It says that reducing consumption depends on reducing sales and consumption.  I guess it is difficult to argue with that.

To add a little bit of substance to this analysis, I suspect this is an example of the Mythbusters fallacy that I explained in Unhealthful News 4.  Saying that "social construction theory" cannot explain PM's behavior was based on looking at a particular possible interpretation of the theory and its implications.  I am quite confident that social construction theory can produce a just-so story for anything someone wants it to explain (read: it seems to be pretty much just made up on the fly).  Thus, declaring that it does not explain a particular action seems rather, well, convenient.  Of course, any serious analysis of PM's actions in this matter would note that pretty much every one of the motives mentioned probably played some role, so the conclusion "their actions cannot be entirely explained by X" is undoubtedly true.

Still, I think my class-discrimination-based theory of public health disinformation about alcohol stands up pretty well as a sociologic theory.  So, I will conclude with:  Only by reducing the elite's monopoly on information will we be able to increase the democratization of information. 

And seeing how it is now after 5:00pm somewhere, may I recommend that if you like a medium-hoppy amber that tends a little bit toward scotch ale, that you reduce your heart attack risk with Gordon Imperial Red from Oskar Blues Brewery in Colorado (which is SKU 8 19942 00008 1, not that I have one in my hand right now).  Also, if you have ever actually spent the three-figures per bottle price to try Sam Adams Utopias could you drop me a note and tell me how it was; I am really curious.

22 January 2011

Unhealthful News 22 - I will continue to Countdown disgraces in the news

Today I decided to check MSNBC's health news for a story, as a tribute to Keith Olbermann's work there and to take a minuscule poke at MSNBC for driving him away.  I was rewarded with this story, right at the top of the page, about the "discovery" that eating a larger breakfast is not helpful for losing weight.  It is a great example of confused health reporting resulting from trying to hype nothing, as well as (what you might consider a surprise, based on a few of my recent posts) putting faith in an observational study of a subject that can only be effectively studied with an experiment.

The article begins, "For years, dieters have been told that the way to lose weight was to start the day with a hearty breakfast."  I cannot say that I study current leading weight loss advice, and I suppose that for any behavior relating to food or exercise, someone recommends it for weight loss.  But I am pretty sure that the advice about breakfast is merely to not skip it (because the backlash from feeling starved will drive you to eat more later), and sometimes a recommendation of protein and fat over carbohydrates.  Also there was some recent intriguing advice to exercise before you first eat in the morning.  But to just eat a lot?  It is not clear why anyone would think that is a good idea.  The reporter fashioned her story so that the exciting new conclusion was to eat something, but not a lot, which I suspect is exactly what most of the current advice says.

It is bad enough when news reports identify a current conventional wisdom, stating it like fact even though it is fairly uncertain, and then declare it to be overturned based on a single new study, ignoring flaws in the new study, to say nothing of the fact that scientific inference is not based on a "whatever is newest is right" rule.  That is probably responsible for the majority of public confusion and annoyance with health reporting.  But it is even worse when the reporter just makes up a fake conventional wisdom and then claims to be reporting on the "news" that we "now" "know" something else to be true.

As for the study itself, it tells us almost nothing because of confounding.  Readers of this series will recall some posts where I criticize the naive notion that experiments on people (usually called RCTs: "randomized clinical/controlled trials"), where the researchers assign people to particular exposures, always provide better information about health effects than observational studies, where people choose or experience exposures as they would in everyday life.  As I noted at greater length before, RCTs eliminate systematic confounding but at the expense of creating a very artificial situation: the odd sort of people who would volunteer to have an exposure assigned to them, exposures which may not represent a realistic range of what people actually experience, and people forced to do something they might never have chosen.  Figuring out whether the upside or downside matters more requires some scientific common sense.

If something is purely biological (rather than behavioral or psychological) and normally occurs in an artificial controlled setting, then most (not all) of the downsides go away.  This is why RCTs are good for comparing the effectiveness of medical procedures.  But if you are interested in the behavior of free-living people, the downsides become quite large.  That is why the vogue of doing RCTs and implying that they tell us whether smokers will switch to smokeless alternatives is just bad science.  What is of interest is whether many typical smokers can be informed or persuaded so they choose to switch, using mass communication, as a behavioral choice in their lives.  The RCTs start with the odd subgroup who are inclined to volunteer for a cessation intervention and educate them in a particular way, one which may not be effective and is certainly not natural or based on normal educational methods.  Perhaps the results tell us a little bit about what we might really want to know, but they tell us far less than, say, observing the actual substitution choices made by would-be-smokers at the level of personal anecdote, to say nothing of systematic observational studies.  For somewhat different reasons that I noted in the previous posts, RCTs of how long to exclusively breastfeed also end up measuring something we do not really want to know. 

Back on the other side, though, are cases where the confounding is obviously such a huge problem that if it cannot be eliminated then we really cannot possibly hope to sort out the actual causal relationship.  This is the case with the study that triggered today's news story.

I have colleagues who might suggest that – in the spirit of the cliche "how can you tell if X is lying?" "his lips are moving." – you can tell if epidemiologic studies of diet and nutrition are junk science by observing that they are epidemiologic studies of diet and nutrition.  There is a lot to that – most of what is done in those areas is a complete joke.  The methods for measuring the exposure (i.e., what people eat), which consist of subjects keeping diaries of that information, have been shown to be terrible, and the statistical analyses are often – perhaps even usually – so bad as to be unethical.  But more specifically in this case, the observation was that when someone reported eating more for breakfast it was not associated with them reporting eating less for the rest of the day.  That is, whatever the extra food intake at breakfast, at the end of the day the intake is elevated above the average by about the extra breakfast calories.  But does this mean that eating more at breakfast does not cause a compensating reduction later in the day?  Absolutely not.  There is confounding, both across the population and across different days for the same person.  Some people just eat more than others, obviously, even among study subjects who are trying to lose weight.  They eat more for breakfast, and also a lot the rest of the day.  This effect can be controlled for in the study design, by using someone as his own comparison group (i.e., see if he eats less or more on a particular day, as a function of breakfast, compared to what he himself eats on average).  (There is also the more subtle problem of biased measurement error:  Someone who misreports what he ate for breakfast may also misreport about lunch that day, but I suspect I will have better examples of this point later.)

But though the interpersonal differences can be pretty well controlled, that just leads to another level of confounding that cannot be controlled.  Some days any given person eats more than on other days, due to activity, mood, opportunity, social pressure, simple swings in appetite, or whatever.  So if someone eats a large breakfast, he may do so for reasons that also cause him to eat more than he might otherwise the rest of the day.  So it might well be that eating more at breakfast causes someone to eat less than he would have the rest of the day, but this effect is swamped by whatever caused the eating of the big breakfast.  The claim is still not really plausible (it was never particularly plausible that eating more for breakfast would cause someone to eat less overall), but the point is that the study does not really inform us; there is a problem with confounding that is so bad that it renders the study useless.  So what would be a better way to address this question?  To assign the size of each person's breakfast each day and see what else they eat – in other words, do an experiment.  Then how much someone eats for breakfast is unrelated to their activity, mood, etc. because it is random, and so the association, or the lack of association, cannot be explained by the obvious confounding.
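The confounding problem can be made concrete with a toy simulation (every number here is invented for illustration, including an assumed truth in which each extra breakfast calorie reduces later intake by half a calorie): a day-level appetite effect drives both breakfast size and the rest of the day's eating, so the observational slope comes out positive even though the built-in causal effect is negative, while randomly assigning breakfast size recovers the true effect.

```python
import random

random.seed(0)

# Hypothetical numbers for illustration only; nothing here comes from the
# study discussed above.  The assumed truth: each extra breakfast calorie
# causes a 0.5-calorie reduction in what is eaten the rest of the day.
COMPENSATION = 0.5
DAYS = 10000

def rest_of_day(breakfast, day_effect):
    """Later intake: a 1500 kcal baseline, pushed up by the day's appetite,
    pulled down by compensation for whatever was eaten at breakfast."""
    return 1500 + day_effect - COMPENSATION * (breakfast - 500) + random.gauss(0, 50)

def slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Observational data: appetite (the confounder) drives breakfast AND later intake.
obs_breakfast, obs_rest = [], []
for _ in range(DAYS):
    appetite = random.gauss(0, 300)
    b = 500 + appetite + random.gauss(0, 50)   # hungry days -> bigger breakfast
    obs_breakfast.append(b)
    obs_rest.append(rest_of_day(b, appetite))

# Experimental data: breakfast size is assigned at random, so it is
# unrelated to that day's appetite.
exp_breakfast, exp_rest = [], []
for _ in range(DAYS):
    appetite = random.gauss(0, 300)
    b = random.choice([300, 500, 700])         # randomized assignment
    exp_breakfast.append(b)
    exp_rest.append(rest_of_day(b, appetite))

print(round(slope(obs_breakfast, obs_rest), 2))  # positive: confounding masks the compensation
print(round(slope(exp_breakfast, exp_rest), 2))  # near -0.5: randomization recovers the truth
```

In the observational data the big-breakfast days are mostly big-appetite days, so total intake rises with breakfast size – exactly the pattern the study reported – even though compensation is built into the data generator; only the randomized version reveals it.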

Why is the artificiality of that not such a problem in this case?  Because what we are really interested in is not what results when people happen to choose to eat a big breakfast, but rather what would happen if they forced themselves to eat (or avoid) a big breakfast.  We want to know if it is a good tool to achieve a particular goal, just like we want to know that about a drug or surgical procedure.  Thus, using a method that works well for testing drugs and surgical procedures seems like a good idea.  We still have the problem that, since this is behavioral rather than biological, people might react differently to being assigned a meal size as compared to forcing it upon themselves, so the RCT approach still has problems, but it is certainly better than completely fatal confounding.

I expect, however, that we are never going to see that trial.  The researchers might have tricked the MSNBC reporter into believing that this represented some important new knowledge, but I suspect no researcher would be interested enough in the question to do the RCT.  Rather, these dietary studies are almost always fishing expeditions, collecting a lot of data about hundreds of things and then sifting through it for associations that might be used to impress someone.  So, with the exception of the anti-tobacco extremists and a few other political actors, who are not really even scientists anymore, I will declare nutritional researchers, particularly the "health promotion" types and dietitians, to be today's Worst Epidemiologists In The World.