31 March 2011

Unhealthful News 90 - Two little observations about little humans

(1) In UN81 I commented on the American Academy of Pediatrics' new recommendation that babies should ride in backward-facing car seats until well past one year of age (which is good advice), in which they tried to claim that there is no benefit to having the kid facing forward (which is typical AAP behavior:  pretend that a policy has no costs rather than go to the trouble of arguing that the benefits warrant the costs).  Well, being a good scientist, I gathered some additional empirical evidence and can report that at least some babies under one year of age love to ride facing forward.  It does not change the fact that the recommendation seems good, but it is further evidence that AAP is completely incompetent at policy analysis (and ignorant about parents and children) even when they get the policy recommendation right.

(2) According to an email I got from another baby advocacy organization, planningfamily.com (which is not actually a nutcase group, though it kind of sounds like it, doesn't it?),
On average, children two to five years old spend 32 hours a week in front of a TV and they will have watched an average of 8,000 murders before they finish elementary school.
The first problem with that is, of course, the implication that young children watch TV for more than 4.5 hours per day.  But setting aside that madness, I have to wonder why no one is out there claiming that these portrayals of murder are causing half of the horrible number of murders in America.  After all, there must be some anti-violence-in-entertainment activist who is as nutty as Stanton Glantz, and I would bet that the evidence comes closer to supporting this claim than Glantz's claim that smoking portrayed in movies causes half of all smoking.

Oh, and I have to say that the 8000 number has got to be more-or-less made up and obviously arbitrary (though at least I give them credit for rounding their fake number rather than acting like most health activists and reporters and saying 8132.5 – you have to like them for that).  Does it count the Coyote (of Road Runner fame)?  If so, how many times per episode?  How about if they just find the body on CSI?  How about the 1.97 billion people on Alderaan?  (That is a good piece of bar trivia, by the way.  The movie with among the largest number of homicides – surpassed only by a few "aliens invade Earth c.2000 and kill almost everyone" flicks – is the kid-friendly Star Wars that most of those kids will see at least once, pretty much burying that 8000 number.)

30 March 2011

Unhealthful News 89 - Oily e-cigarettes?

I thought it was bad when health reporters seemed to do nothing more sophisticated than a Google search about the topic they were writing about.  Perhaps asking for even that standard would be a good start.

As discussed in more detail by Chris Snowdon, the BBC reported the "news" about an e-cigarette user who died (it happens) of lung disease and his medic claimed that it was due to the e-cigarettes rather than, say, his former heavy smoking.  It is, of course, possible that e-cigarettes could cause lung disease (though not very likely – not only are the chemicals in e-cigarette liquid unlikely to cause lung disease based on what we know, but not much actually reaches the lungs).  But when a reporter says the following, it is pretty clear he did not even do the Google search: 
[E-cigarettes] contain a cartridge of liquid nicotine
(that would indeed be bad – it would be a contact poison!)

and the product the decedent used
 seemed to involve a mixture of nicotine and some oil
(there are very tiny bits of flavoring oils in some products, but a basic knowledge of e-cigarettes or chemistry would reveal that nicotine+oil could not have been the recipe)

and,
there's no systematic research assessing the overall safety of inhaling these chemicals deep into the lungs over an extended period
(which is fairly obviously true, since people do not actually inhale the chemicals deep into the lungs – not the ones that are actually in e-cigarettes, let alone the made up oil that appeared in this story).

If the reporter had decided to spend a second five minutes learning about his topic (after the first five minutes it would have taken to gain a basic literacy about e-cigarettes), he would have discovered what Snowdon, in his role as unpaid investigative reporter not employed by the BBC, reported:  That the medic in question is an activist anti-nicotine extremist who was presumably looking for an excuse to attack tobacco harm reduction.  Hmm, should that have caused a reporter to question what he was reporting, or at least be worth mentioning?  Nah, if you do something other than just transcribe allegations, it takes work and then you would not be able to spend most of the day, after submitting your story, working on your lousy novel on the company's time.

I honestly do not have anything more to add to what Chris and others have already mentioned about how dumb the analysis in this report was.  I just thought this deserved to be highlighted not just as anti-tobacco/nicotine propaganda disguised as science, but also as Unhealthful News.

29 March 2011

Unhealthful News 88 - More conflict over conflicts of interest

There is a lot to say about how lame most statements about "conflict of interest" are.  To summarize most of what needs to be said:  (1) COIs among researchers are real and matter.  (2) Most (not all) of the time when someone screams "he has a conflict of interest!" it represents a political ploy to try to advance a particular agenda rather than a genuine concern about COI.  (3) The conflicts of interest generally identified are not really the ones that matter; often they are not really even COIs, and typically they ignore most of the most important COIs.   I have written a lot more about these points in the past and will likely do so in the future.  For today, I will just comment specifically on the problem definition in an Archives of Internal Medicine article that was reported in the medical news today.

The abstract reports that the study totaled up the number of what they called conflicts of interest:
Using disclosure lists, we cataloged COIs for each participant as receiving a research grant, being on a speaker's bureau and/or receiving honoraria, owning stock, or being a consultant or member of an advisory board.
They looked at cardiology guidelines advisory boards, though what amounts to a COI there is basically the same as what is a COI for reporting research results.  They concluded that "Fifty-six percent of the 498 individuals reported a COI". 

Let's think about that list.

Owning a substantial amount of stock in a company with an interest in a study's result or board's recommendation is obviously a glaring financial conflict of interest, perhaps second only to owning intellectual property whose value will change dramatically based on those results.  Being on a speaker's bureau tends to mean a continuing flow of outrageous fees for repeating some party line, which would tend to have the same pull as owning a lot of stock.  Someone's belief in that party line may have been arrived at honestly, but the financial incentive to not waver is a strong interest.  However, "receiving honoraria" can mean anything from basically the same as the speaker's bureau arrangement to having gotten reasonable compensation for taking the time to go to a meeting and present one's honest views.  The latter will have little or no influence on someone's opinion and create little or no incentive.

Being a consultant or advisor is also pretty tricky to interpret.  It can mean anything from working as a corporate hired hack, writing whatever the corporation asks to support filings, litigation, etc., to being the best in your field and scrupulously honest, such that corporations want your honest opinion.  Or anything in between. 

Similarly, receiving a research grant can mean doing funder-directed research (typical for government grants, for example) with an implicit promise about what the results will be (often offered in grant proposals for those funder-directed projects), or giving the funder the right to suppress the result if they do not like it (common for some industries' funding).  Those actually do not cause a conflict of interest:  They are dishonest and/or bad scientific conduct, but only when there is a hope of future funding is there a COI (and that is regardless of whether there was past funding).  Otherwise there is no remaining interest once the funding is in place (unless the funder has the right to take it away if they do not like what the researcher is doing; that is a much worse, but slightly different, problem).  One of my colleagues insists that out of consideration for this, he will take grant money from anyone, but never more than once.  That way, there is no way that hoping to get more funding from the particular organization can influence the ongoing research.

Alternatively, a research grant can be largely free of COI, consisting of designing investigator-initiated projects and then asking for money to support them, not changing anything to please the funder.  Or anything in between.

These observations barely scratch the surface of the topic, but they should be sufficient to show that simply identifying one of these phenomena tells us little, let alone arbitrarily adding them up.  The authors of the article seem to want to make a big deal about how some members of the boards do not have any of these COIs, implying that the boards should be made entirely of such individuals.  But their list of types of COIs ignores the ones (political preferences) that are next most important behind a chance to make a personal fortune.  Moreover, ensuring that a board has no apparent COI with respect to an issue is an easy matter of picking members who have never said anything important about the subject or had any significant involvement.  After all, do you really want your experts to have done so little work on a topic that they have never received a grant or consulting fee?  Not too expert, I suspect.

Oh, wait.  It turns out that the authors are interested only in grants, honoraria, etc. that come from industry sources.  They do not bother to mention this in the abstract, apparently believing that only industry money can create COI.  That is an incredibly naive notion of COI.  They do not tell us "we are only concerned here with the COIs that result from industry ties, not those resulting from government grants, advocacy group funding or involvement or even being employed by an interest group, personal inventions, pet theories, and many other sources of COI".  That would be a very partial analysis, but at least it would be an honest portrayal of what it was.  Rather than admitting that, however, the authors try to imply that one specific source of COI is all of COI.

The people with the most expertise and interest in a topic inevitably have some of the listed COIs and others too, unless they are extremely anti-social.  Indeed, for many medical or health matters, the greatest experts, who could contribute the most to an advisory board, work for an interested industry.  Why not include them as panelists if they are willing?  Better still, include several, whose employers have differing interests to get the crucible of scientific inquiry boiling.  They might tell us something useful, and if they simply demand actions that are in their employer's interest without justification, there is no obligation to pay attention to what they say.  A good open fight among the most invested parties is generally a good way to figure things out, so long as those evaluating the results have the competence to recognize which arguments are more compelling (and if they do not, they should start reading my Sunday posts).

28 March 2011

Unhealthful News 87 - all I have time for before takeoff

Air France has seatbelt extenders for infants. Much smarter than what Pediatrics recommends (full car seats filling up the plane). Viva.

I promise to have a better post tomorrow.


[Anyone wonder what happened to UN87?  I really did send it out and it said it posted, but it does not appear in the blog reader -- you need to tell me things, friends.  So here it is (again?).  I still claim credit for posting it on time!  As long as I am at it, I will add an update:  AF is great (they even give out baby toys), but the remote controls for their televisions etc. are really lame.  After spilling just one little whiskey and water over one of them in a groggy state in the middle of the night, it kept resetting my movie and, worse, turning the overhead light on and off and calling the flight attendant.  The French just do not have good waterproof electronics.  U-S-A, U-S-A.]

27 March 2011

Unhealthful News 86 - If you cannot figure out how they could possibly measure that, they probably didn't really

Survey research involves no magic opinio-meters that plug into people's heads.  Everyone knows that.  All of us have been survey subjects.  And yet we seem all too willing to believe claims about survey results that could not possibly have been measured well by asking questions, even in the best case.  The best you can hope for is to measure actions and characteristics of people that they are capable of meaningfully and accurately reporting.  Moreover, since almost all surveys are based on checking a box, what is measured has to be measurable that way (i.e., it cannot require the conversation or free-text description that it takes to really communicate the details of someone's preferences, experiences, and motives).

GlaxoSmithKline released the results of a survey of smokers they commissioned regarding the potential ban of menthol in cigarettes.  I am not going to address the underlying policy discussion, because I have already covered that.  Rather, I would like to point out what readers (and those news outlets that basically just printed the content of that press release as news) should have noticed about some of the claims.  Some observers might suggest that the main reason to question the study results is that it was paid for by someone who had a stake in the matter at hand (if a mandatory reduction in the quality of cigarettes causes people to try to quit, some of them will buy GSK's products that many people believe aid quitting).  It is certainly true that the fact that they released the results (which they could have chosen to not do, unlike with, say, all scholarly research funded by tobacco companies that I know of, where the funder cannot suppress the result if they do not like it) tells us something about the results:  GSK does not think that the results offer any competitive advantage as marketing information, but does think that they could influence the political debate in a direction they prefer.

The real reasons for doubting the results are not a lazy ad hominem criticism, though, but a scientific criticism of the claimed results, one that anyone can understand.  The most reported result was, "if the FDA were to ban menthol cigarettes, four out of five menthol smokers (82 percent) say they are likely to try quitting."  I saw this reported as "82 said they would quit", which obviously misinterprets the press release.  But just consider the actual claim:  What question was asked to get those responses?  We do not know.  But I suspect that if you designed the right series of questions, you could get 3/4 of smokers to say they are likely to try quitting next month if the month contains a Thursday.  "Try to quit" is a phrase that can mean very little effort or volition, but elicits the impression of something aggressive and likely to succeed.

Consider also, "almost 40 percent [of menthol smokers] say that menthol flavoring is the only reason they smoke."  Again, by asking the right questions, it is possible to get many smokers to attribute their behavior entirely to social factors, daily patterns, the aesthetics, etc., rather than the drug delivery.  We did some focus group research with smokers, and quite often no one would mention nicotine as part of their motivation.  And, yes, it is practically mandatory to acknowledge that it is not all about nicotine.  But it does not play any role in the motives of almost 40% of this subpopulation of smokers?  Come on!

But, gee, maybe it is true: "The survey shows that menthol smokers feel "twice-addicted" – both to the menthol and to the tobacco – and most are attracted by the taste and feel of menthol cigarettes." Um, but wait a minute, how does a survey show that someone is addicted to menthol?  It is pretty sketchy to even claim that someone is addicted to smoking at all, since addiction is not well-defined, so you either ask about addiction and get an answer based on each individual's idiosyncratic interpretation of the term, or you ask well-posed questions and idiosyncratically decide for yourself which of those represent addiction.  But how can you possibly figure out whether menthol smokers are addicted to tobacco (presumably that means nicotine) and menthol independently? 

Only those rare individuals who had been stuck buying only non-menthol cigarettes for a while could realistically assess how they would feel about smoking non-menthols, while measuring an independent "addiction" to smoking menthol, apart from tobacco, would require that someone had experienced… well, I have no idea what.  Maybe vaping nicotine-free menthol e-cigarettes, which has probably been experienced by approximately zero of the respondents.  And that says nothing about how it can be that 40% of the respondents smoke only for the menthol, but they are apparently also addicted to tobacco.  Go figure.

The point is that you should go figure.  These results are so absurd that no one should take them seriously.  And everyone should learn enough from the most absurd claims that they do not take any of the other survey results seriously either.  Whatever you might think of GSK and the ridiculously self-serving balance of the press release – about how wonderful their barely-functional products are and how lousy other options for quitting smoking are – it is not difficult to see the absurdity of their conclusions about the survey results.  Perhaps if they told us what they actually asked we could make something useful from the data, though I suspect that any survey that attributes 40% of smoking entirely to something other than nicotine is pretty much doomed. 

Oh, but good news, the reader has no idea what the survey questions actually were and how they reached their conclusions, but they do make the effort to report, "For analysis, sample data were statistically weighted by race, gender, income, and menthol versus non-menthol smoking to accurately reflect the current population of adult smokers on each of these dimensions."  The humor of that might be lost on many readers, so to offer an analogy, imagine someone making a salad of Miracle Whip and iceberg lettuce, and serving it over Jello, but making sure to sprinkle it only with organic sea salt -- or, if you want a less colorful metaphor, call it polishing the brass on the Titanic. 
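(For readers curious what that weighting step actually does, here is a minimal sketch of post-stratification weighting.  The categories, counts, and target shares are all made up for illustration and are not from GSK's survey; the thing to notice is that the arithmetic only rebalances who got asked, not what they were asked.)

```python
# Minimal sketch of post-stratification survey weighting.
# All categories, counts, and target shares below are hypothetical.

sample_counts = {"menthol": 200, "non_menthol": 800}        # who happened to answer
population_shares = {"menthol": 0.30, "non_menthol": 0.70}  # assumed shares of all adult smokers

n_sample = sum(sample_counts.values())

# Weight for each group = (target share in the population) / (share in the sample)
weights = {
    group: population_shares[group] / (sample_counts[group] / n_sample)
    for group in sample_counts
}
print(weights)  # {'menthol': 1.5, 'non_menthol': 0.875}

# A weighted estimate is then just a weighted average of the answers.
# Hypothetical example: fraction answering "likely to try quitting" in each group.
answers = {"menthol": 0.82, "non_menthol": 0.60}
weighted_estimate = sum(
    weights[g] * sample_counts[g] * answers[g] for g in sample_counts
) / sum(weights[g] * sample_counts[g] for g in sample_counts)
print(round(weighted_estimate, 3))
# The reweighting corrects for groups being over- or under-represented in the sample;
# it cannot fix a question that did not measure anything meaningful in the first place.
```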

26 March 2011

Unhealthful News 85 - Overly-conclusive health science articles, and never having to say you are sorry

One of the reasons that there is so much junk published in health science, particularly of the kind that makes for junk news, is that there are no repercussions for declaring a dramatic, telegenic conclusion that turns out to be absurdly wrong.  Every real science allows for the possibility that a particular study, done a particular way on a particular day, might produce a result that is "wrong" in the sense of running contrary to what is (or later becomes) agreed upon wisdom on the point.  That is the nature of science, of random error, of unexpected effects of methodologic choices, etc.  What makes a mess of it is if someone doing a little faulty study, whose results are what they are, thinks he is writing Principia Mathematica, or at least "The mortality of doctors in relation to their smoking habits".  In most fields, one of the key lessons taught in graduate school is that you – each individual student – know very little compared to the extent of human knowledge in and around your subject of study.  This produces an epistemic modesty that makes for better science.

This message seems to be lost on health researchers.  Part of the problem is that a lot of researchers are trained only as physicians, not as scientists, and clinical training usually includes the message that you are supposed to act like a god and never admit to your ignorance.  Actually, that is not entirely fair – it is perhaps more a matter that medical training causes people to become unaware of their ignorance, a trick of mind control that is utterly baffling and might be of interest to the psychologists working at Guantanamo.  When this god complex spills over into research, it creates a tendency to say "my little lame study showed X, and therefore X is True and the world should be changed in the following way based on that…."

Of course, this does not explain the behavior of health researchers who studied science and got research degrees rather than professional degrees, though maybe some of it is a spillover effect.  Also, epidemiologists, toxicologists, etc., get to play god sometimes, influencing or even controlling decisions that are important to how people live their lives.

But the bigger problem, I believe, is that no one in health science is ever asked to say they are sorry for a faulty conclusion they adamantly declared.  Changing your beliefs based on evidence is the mark of a scientific or otherwise intelligent mind (though political pundits like to call it "flip flopping").  But failing to recant the old conclusions, refusing to admit that you made them, and never explaining what made you wrong before and makes you correct now are dishonest behaviors that warrant embarrassment and public criticism.  In a world of such ethics, having to change your conclusions means either admitting you were wrong or being justly criticized for failing to do so.  In that world you have a lot of incentive to not over-conclude.  Again, this does not mean there is any embarrassment in saying "my research did Y and the result was X", even though this turns out to contradict better evidence, so long as you stop there.  But if you make a press release and adamant declarations about X being true, you deserve a reputation as a bad scientist and someone whose opinions should not be trusted.

That is not the world of health research and publishing.

I was thinking about this because here at Vapefest this weekend, several people have mentioned to me the new research by Thomas Eissenberg and his research group, who notoriously reported – and aggressively touted to the media – that e-cigarettes deliver no nicotine to users.  Basically, he did a badly designed study (a minor error) and then implied he had created a Great Work for the Ages (a very major error).  That group's new study (described here) discovered what, oh, maybe a million people already knew from personal experience:  E-cigarettes do deliver nicotine after all.

Surprise!

From what has been reported, the study might well exhibit the same kind of naivety that got the researchers into trouble in the first place:  Doing one tiny study of an extremely heterogeneous phenomenon and making a big deal about the quantitative results.  At least this time the results are rather less absurd than "zero", but they are still quantitatively meaningless.  If you told me what levels of nicotine absorption you wanted to get from a study of three (yes, just three) vapers, I am sure I could design a study of three people that within a few tries would give you those numbers.  What remains of the scientific integrity of research on nicotine and tobacco can only hope that they do not imply the specific quantitative results matter when they publish their results, let alone that they make policy recommendations based on them.  We should not be too optimistic, though, based on this.
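(To illustrate just how little a three-person study pins down, here is a quick simulation; every number in it is invented.  It draws repeated "studies" of three people from a heterogeneous population and looks at the spread of the resulting averages.)

```python
# Quick illustration (invented numbers): how variable the average of n=3 can be
# when the underlying population is highly heterogeneous.
import random

random.seed(1)

# Hypothetical population of per-user nicotine absorption values (arbitrary units),
# spanning near-zero to heavy absorption -- the heterogeneity is the point, not the values.
population = [random.lognormvariate(1.0, 1.0) for _ in range(10_000)]

study_means = []
for _ in range(1_000):
    study = random.sample(population, 3)   # a "study" of just three vapers
    study_means.append(sum(study) / 3)

study_means.sort()
print("5th percentile of study averages: ", round(study_means[50], 2))
print("95th percentile of study averages:", round(study_means[950], 2))
# With only three subjects, the plausible range of "results" spans severalfold,
# so the specific quantitative estimate from any one such study means very little.
```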

The rumor here (though I could not find any documentation of this) is that Eissenberg's message then switched from "these things are bad because they do not give smokers the nicotine they need" to "these things are bad because they deliver so much nicotine that users might get too much".  This might not be true – it might be that he is becoming a "friend of the cause" of vaping, as some users speculated.

But either way, the major crime against science and common sense was the original conclusion.  If you "discover" something based on one lousy study, you should not do an interview about it on CNN.  And if your "discovery" is contrary to what lots of people with better evidence than your own are pretty sure is true, then you are an utter fool for making declarations on national television.

Oh, but wait, maybe not.  You would be an idiot if there were any repercussions for staking your scientific reputation on such a claim.  But if you publish in public health, and especially if you want to be an influential pundit in public health, then making an incorrect over-the-top claim to the press makes you a shrewd politician not a bad scientist.  Much like it is in Hollywood, not so much like it is in science, any publicity is good publicity.  Consider what Eissenberg and company have contributed to the discussion and science about e-cigarettes.  Then realize that his name comes up in discussions of experts on the subject (and he self-represents as one).  Welcome to a world where Charlie Sheen is on television giving advice about relationships and healthy living thanks to his widely-reported contributions to these areas.

25 March 2011

Unhealthful News 84 - If you cannot basically explain it in a couple of minutes, you probably do not understand it

This is not really news, but it was in the newspaper and I thought it was too good to pass up.  Krugman wrote in his blog (still free to access for a couple more days) yesterday about what is called "lead time bias" in epidemiology.  (Too bad he called it "over-diagnosis" which is actually the name of a different epidemiologic challenge, but he can be excused for not knowing our jargon.)  It was an aside in a point he was making about U.S. cancer treatment not being particularly outstanding, a retort to a dumb op-ed by a U.S. politician, which you can follow in his thread if you are interested.

He wrote:
Here’s how I understand the over-diagnosis [sic] issue, in terms of an extreme example: suppose that there’s a form of cancer that kills people 7 years after it starts, and that there is in fact nothing you can do about it. Suppose that country A screens for cancer very aggressively, and always catches this cancer in year 1, while country B chooses to invest its medical resources differently, and never catches the cancer until year 4. In that case, country A will have a 100% 5-year survival rate, while country B will have a 0% 5-year survival rate — because survival is measured from the time the cancer is diagnosed. Yet treatment in country B is no worse than in country A.  Real life isn’t that simple, but you get the point: a society that tests for cancer a lot may have higher survival rates simply because it tends to catch cancer early on, even if it doesn’t treat cancer any better.
This is a great one-minute lesson in the concept of lead-time bias (again, make sure to note that he got the label wrong), which I think any reader would immediately understand and never forget.  Epidemiology classes sometimes spend hours trying to explain this.  I get the feeling that most of the physicians and "health promotion" types never really understand it.  This is why I think trained economists make the best epidemiologists (not that I am biased or anything :-).
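(For anyone who wants to see the arithmetic, here is Krugman's extreme example reduced to a few lines of code; the numbers are his hypotheticals, not data.)

```python
# Lead-time bias in Krugman's extreme example: the cancer always kills at year 7
# after onset, treatment changes nothing, and only the time of diagnosis differs.

def five_year_survival(diagnosis_year, death_year=7):
    """Fraction of patients alive five years after diagnosis."""
    years_survived_after_diagnosis = death_year - diagnosis_year
    return 1.0 if years_survived_after_diagnosis >= 5 else 0.0

country_a = five_year_survival(diagnosis_year=1)  # aggressive screening, caught in year 1
country_b = five_year_survival(diagnosis_year=4)  # caught in year 4

print(f"Country A 5-year survival: {country_a:.0%}")  # 100%
print(f"Country B 5-year survival: {country_b:.0%}")  # 0%
# Everyone dies at year 7 in both countries; the "survival" gap is pure lead time.
```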

By the way, the jargon "over-diagnosis", whose natural-language meaning is obviously quite broad, refers to another complication created by aggressive diagnosis wherein something is diagnosed only because of screening – i.e., it never would have manifested in any bad outcomes.  This tends to be about cancer specifically because something can be officially biologically cancer but not destined to ever cause a health problem (note: if something is a "false positive" then there is not really any disease after all, which is different from over-diagnosis).  Not too many other conditions are officially a disease until they actually cause a problem.  Unlike lead-time bias, which is a statistical complication in trying to measure survival time and thus assess treatment quality, over-diagnosis has immediate real health implications:  Almost all the diagnosed cases of the condition get treatment, even the ones that are destined to never cause harm, since we do not know which those are.  There is also a problem with the statistics and assessment of treatment, since the over-diagnosed cases all end up being chalked up as "successfully treated" when the patient emerges without health problems (unless the treatment causes them).

To make this more concrete, somewhere between 1/3 and 1/2 of the cases of breast cancer detected by screening mammography are over-diagnoses, biological cancers that never would have caused detectable harm.  Note that this is separate from the false positives, where a biopsy is done to examine something that showed up on the mammogram and it turns out to not be cancer – there are about ten times as many of those.  So it looks like mammography is doing a great job, since it typically gets credit for saving the almost half of detected cancer victims who would never have had any problem if those cells had just been ignored.  And that lead-time bias has to be dealt with too, to avoid crediting the early treatment with the extra period before the cases would have been noticed otherwise.  Last I studied and wrote about mammography, there were efforts to account for the lead-time bias, but the over-diagnosis was conveniently ignored; things may have improved since then.
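(Here is some back-of-the-envelope arithmetic, using round hypothetical numbers consistent with that 1/3-to-1/2 figure, showing how the over-diagnosed cases flatter the apparent treatment success rate.)

```python
# Back-of-the-envelope illustration (hypothetical round numbers) of how
# over-diagnosis inflates the apparent success of screening and treatment.

screen_detected_cancers = 1000
overdiagnosed = 400            # would never have caused harm (between 1/3 and 1/2)
real_cancers = screen_detected_cancers - overdiagnosed
cured_real_cancers = 420       # assume treatment genuinely saves 70% of the real cases

apparent_success = (overdiagnosed + cured_real_cancers) / screen_detected_cancers
true_success = cured_real_cancers / real_cancers

print(f"Apparent 'successfully treated' rate: {apparent_success:.0%}")   # 82%
print(f"Success rate among cancers that needed treatment: {true_success:.0%}")  # 70%
# The over-diagnosed cases are all chalked up as treatment successes,
# so screening-plus-treatment looks better than it really is.
```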


I actually wrote a longer blog post today that is less theory and more topical substance (about a totally different topic: the politics of e-cigarettes).  It is here.

24 March 2011

Unhealthful News 83 - Hot dogs better for you than television health news

[Housekeeping note:  This is the first of what will be some short posts for the next two or three weeks.  Too much going on.]

Most of the news reports were not quite as absurd as MSNBC's, which said, "Hot dogs for better health? Actually, yes".  The study actually just showed that the concentration of one specific set of carcinogens, heterocyclic amines, in one particular set of samples studied, was comparatively low in hot dogs and a few other prepared meats and high in rotisserie chicken.  Some slightly – but only slightly – less dumb headlines were "Hot dogs healthier than previously thought" from ABC and "Hot dogs may be healthier than chicken" from Fox. 

It should be obvious that no study of the chemistry of a food is definitive about how healthful it is, let alone a study of a single chemical.  That study cannot even support the claim that was reported in the news that "rotisserie chicken contains more carcinogens than hot dogs"; that distinction probably goes to whichever has more chemicals of the kind that are measured in grams rather than micrograms and that cause colon cancer.  Presumably what the reporters want to be able to say is that one causes a greater risk for cancer than the other, but they are going to have to study a little harder before they can figure that out.

As for the conclusions that hot dogs are healthy because there is a lot of that one chemical in something else, well, good news:  A new study of Twinkies, cheesecake, Four Loko, deep-fried brownies, Coke, bourbon, and tuna revealed that tuna had much higher level of mercury than any of the others, so enjoy the others! 

The MSNBC story actually reported the concentrations of the chemicals in the various foods, which mean nothing to the audience, and I am quite confident meant nothing to the reporter who wrote the story.  The researcher who did the study, and who was interviewed for that report, might have known the health importance of those quantities, but he apparently kept it a secret (and, frankly, I would bet that he has no idea either).  So the audience and the reporter, at least, have no idea whether those quantities matter and thus whether the differences found in the new study matter.  The story does not tell us whether science even knows what level of difference matters in terms of health risk.  My bet, based on experience with related matters, would be that such quantification is really not known, though I do not know about this specific topic.  But I am pretty sure I could find out the conventional wisdom (which might be wrong) in about twenty minutes, and dig down a bit within a few hours.  Perhaps with such effort a reporter could have turned this into a story that actually contained information.  But it is so much more fun to mislead people into thinking that hot dogs have been shown to be healthy. 

23 March 2011

Unhealthful News 82 - Tobacco, mint, and tea

There are a couple of interesting current news stories about tobacco policy from the U.S. Food and Drug Administration, and they warrant an Unhealthful News type analysis, so I am covering them here instead of at the Tobacco Harm Reduction blog.  The first, which I already wrote a bit about at the THR blog, was the release of the report about menthol cigarettes by FDA's tobacco science advisory committee.  The report basically said that having menthol cigarettes available increases the total public health impact of smoking.  This is undoubtedly true, since it is true for anything that increases the net benefits from cigarettes for some people: the higher the quality of the product (measured in the only way that matters, how much people like it), the more consumption there is, and so lowering quality will lower consumption.

However the report produced much sound and fury for a couple of reasons.  First, the committee chose not to explicitly say "we recommend that menthol be banned from cigarettes", which some commentators insisted violated the law that empowered them to write that report and called for a recommendation.  This failure to recommend a ban has also been interpreted as the reason that the market capitalization of menthol cigarette makers went up substantially after the report was released, though my hypothesis was that there was insider information about FDA's plans that was more solid than noticing the lack of the phrase "we recommend".  On the other hand, the chair of the committee commented to the NYT to the effect that too much was read into the lack of the phrase, perhaps suggesting that my hypothesis was wrong, the lack of the phrase was not significant, and regulatory action is coming.

The second bit of sound and fury has to do with the concern about a restriction on menthol increasing black market sales, which it inevitably will do to some extent.  Again, this is simply economic inevitability:  Once a black market is viable, as it already is, then anything that lowers the net quality of the legal product (removing flavors that people like, increasing taxes, putting grotesque pictures on the packages) will increase black market sales at the expense of legal sales.  For reasons that I explained in my previous posts, unless a menthol restriction triggers particular qualitative changes in the market, which is possible but would be unexpected, the net result will still be a reduction in consumption.  However, the science committee quite reasonably punted on this point, saying they were not capable of analyzing it (they clearly do not have the necessary expertise on the committee and it is quite refreshing to see them admit that).  But this was glommed onto by some commentators, some saying the black market was sufficient reason to predict that a ban would not have public health benefits (though this seems unlikely for the reasons I note) and others complaining about the fact that they punted on the point.

(I feel I need a conflict of interest disclosure here:  As someone who advocates tobacco harm reduction, I have a bit of bias toward favoring reductions in the quality of cigarettes, since they encourage more switching to low-risk alternatives.  On the other hand, I subscribe to the underlying harm reduction ethics that tend to support individual liberty, encouraging harm reduction but not wanting to criminalize the decision to not engage in it, or in abstinence.  So I have a conflict within my own interests, and certainly both sides of it leak into my analysis.)

The second current story relates more directly to THR, and that is the press release by Star Scientific (a company that functions in the market primarily as a patent holding company and spec biotech company but that is known in THR for its consumer goods).  Star reports that FDA has ruled that their dissolvable tobacco products – lozenges of powdered tobacco, mint flavors, and binders that the user holds in his mouth like other smokeless tobacco or some pharmaceutical nicotine products – are not subject to the FDA regulation as tobacco products.  One possible implication of this is that Star does not have to go through what many of us predict will be an incredibly difficult process required for low-risk tobacco products, which is built into the legislation that authorizes FDA to regulate tobacco, before they can claim that they are a low-risk alternative to cigarettes.  Since meeting the standard in the legislation is arguably scientifically impossible, this would be good news.  Allowing manufacturers to communicate "this product, while not harmless, is estimated to be about 99% less harmful than smoking and is undoubtedly at least 95% less harmful" would be one of the biggest low-cost victories for public health that can be imagined.

On the other hand, commercial speech that makes scientific claims is almost always regulated by someone.  For years before FDA's recently granted jurisdiction, the U.S. Federal Trade Commission regulated such claims about low-risk tobacco products, and the manufacturers apparently believed they would not allow such claims, as evidenced by the fact that they never made them.  It has been clear to anyone who honestly evaluates the science, for at least five years and arguably for ten, that mass market Western smoke-free nicotine products fit the description in that "99%" statement.  That is true for all such products – Star's lozenges, Nicorette gum, Nicoderm patches, Skoal Bandits, Copenhagen, Red Man, Camel Snus, Marlboro Snus, e-cigarettes, etc.  Star wants everyone to believe that their products are particularly low risk compared to others in that list – and more power to 'em if they can convince consumers to switch from cigarettes based on this rhetoric – but there is actually no scientific support for that claim. 

Nevertheless, if the press release is to be believed, Star's products are for the moment in a unique position, not regulated by the FDA as pharmaceuticals that are not approved for long-term use (like Nicorette), nor as tobacco products (like Skoal or Camel Snus), nor in a weird limbo that seems to allow the product to be marketed as a substitute so long as there are no claims about risk reduction (like e-cigarettes).  It is difficult to believe that uniqueness can persist since, e.g., the Camel line of dissolvable smokeless tobacco products is functionally the same as the products Star sells (indeed, there is an ongoing patent fight).  But it is even more difficult to believe that a regulatory vacuum will be created.  Government abhors a vacuum.  (There is my libertarian side – look it up if you think that word means what certain current American politicians would have you believe it means – popping in).  It is difficult to imagine that no one in the US government will assert the authority to regulate claims about THR.  Someone will undoubtedly decide how those products will be taxed (and if they actually start to sell much, you can bet that states will want to get their taste).

But an over-arching theme in all of this is an Unhealthful News point:  Many people, myself included, are excited about these developments and their implications for public health (and science, and law, and policy, and freedom).  But excitement creates the urge to try to divine meaning from each new bit of information, and for scientists or health policy advocates to try to become jailhouse lawyers or political tea leaf readers.  But what we are dealing with is the messy business of policy-in-progress.  The reason that e-cigarettes are in limbo is that a court ruled that FDA had to treat them as tobacco products (which they can regulate in specific limited ways) rather than drug delivery devices (which FDA is authorized to micromanage and ban outright), though e-cigarettes use only the nicotine from tobacco and deliver it in a novel way, and FDA has no policy within their tobacco regulation for regulating them.  Meanwhile, apparently FDA has decided that the smokeless tobacco products that Star produces are not tobacco products.  I will let you draw your own conclusions about these observations, and will resist the temptation to pretend I am qualified to interpret the law and just stick to the science and normative sides of the situation where I do claim expertise.  (I was invited to sign onto a filing in the e-cigarette court case, but demurred because it was being decided based on legalistic points rather than the scientific or public health claims, and I could not claim relevant expertise.)

It is always tempting to treat today's news as if it resolved outstanding conundrums.  But just as I have noted in this series that a new scientific study is typically not more informative than the existing scientific beliefs, the news of the day on tobacco policy is far from definitive.  It would be nice to think that the ruling Star just reported means that anything in their category – the category that is often attacked by anti-tobacco extremists as "tobacco candy" – will be legally marketed as a low-risk alternative to smoking.  It would be wonderful if scientific truth were allowed to carry the day.  But anti-harm-reduction activists simply have too much power to be very optimistic about that.  FDA is foundering through the process of figuring out how to regulate tobacco, and is not issuing careful oracular pronouncements like we might expect from the Fed (the US central bank, which controls certain interest rates and such).  Tea leaf reading about the Fed is perhaps warranted, but FDA does not necessarily have the manpower, skill, or mindset to be quite so careful and oracular.  Legalistic analyses of some government policies are warranted, but it should be pretty clear that there are a lot of contradictions to work out in this case, and the courts are being just as ad hoc as the FDA. 

The various relevant corners of the US government have a long way to go before they rough out what they think of the mint leaves in the tobacco – deadly? candy? attractive to children? innocuous? not even tobacco? – and until they do, we should focus on the evolving scientific and normative analyses (and, for those who must, which way the big-picture political winds are blowing) rather than believing there is much to be found in the tea leaves.

22 March 2011

Unhealthful News 81 - Pediatrics gets one right; the world moves backwards

The journal Pediatrics is notorious for publishing alarmist junk-science claims in the form of pseudo-research papers.  They are also the mouthpiece for the American Academy of Pediatrics, an organization that is notorious for making policy declarations without actually doing any policy analysis.  But their latest recommendation that has been all over the headlines seems to be reasonable, though that is not for lack of trying to do a bad analysis.

The latest dictate from the AAP is that babies should ride in rear-facing car seats until they exceed the height or weight limit for those seats, rather than switching them to forward-facing as soon as it is possible or legal, often at their first birthday.  They based this recommendation on evidence that being in a rear-facing seat remains considerably safer for these somewhat older children.  This should really come as no surprise, and if anything this observation is disturbingly late in coming, since (a) rear-facing is fairly obviously safer, all else equal (and assuming you are not operating the vehicle), for anyone in any vehicle due to the physics of transport, since dangerous deceleration happens much more often than dangerous acceleration, (b) there has been epidemiologic evidence for a very long time, and (c) this was already the standard in other countries. 

So the advice turns out to be good.  Not so good is the message that AAP apparently included in their press packet (it was claimed in too many news reports to be coincidence) that the reason that parents turn their children around so early is that it is some kind of first-birthday rite of passage.  While perhaps a few parents feel strongly about this, the car-seat-reversal-as-circumcision seems like a pretty dumb story.  The alternative explanation offered in the news is that parents feel the kids are bored facing backward.  Apparently the physicians who dictate AAP's policy statements are clueless about the experiences of parents of only-children who take care of the kid alone and really hate having to leave the little guy alone in the back, where you cannot see if he is distressed, breathing, etc., for substantial periods of time for a year – and now more – of his life.  And it is not just boring, it is alone.  If the authors had made an effort to understand this it should not have changed the recommendation, but perhaps they would have done something useful, like suggest one of those cute mirrors that let the driver see the baby, and play music for him at the touch of the button.  (But only glance back occasionally or next year's headlines will be:  Is watching your baby in the mirror more dangerous than texting while driving?)

The point is that they are pretending-away the costs.  This is typical AAP behavior and the classic error made by those who presume to offer policy prescriptions but do not understand what policy prescriptions should be based on:  They see that the policy has one benefit and so they recommend it.  Perhaps you noticed that I wrote they based the recommendation on the evidence about greater safety, full stop.  But every policy has one benefit; the question is whether the total benefits outweigh the costs.  To figure that out, it is necessary to quantify the benefits (they actually made an estimate in this case, which is better than they usually do) and costs, and see if the former outweigh the latter.  They pretty clearly do in this case, but that does not excuse them for making a joke of the cost (which is also pretty common behavior for AAP – not only do they ignore the costs, but they dismiss, or worse, anyone who might worry about them).  This is, after all, the same AAP that has such a long list of activities that a pediatrician is supposed to perform in a well-child visit that it would take roughly all day to do all of them.  Not only are they unaware of the basics of policy analysis and the experiences of parents, but they apparently cannot even figure out that those in the group they represent have about six minutes per patient, not six hours.

This is what I mean when I say that they got this one right, but not for lack of trying to get it wrong.  A similar "analysis" would also lead to the conclusion that all babies should have to be in backward facing seats in planes, which is a terrible idea.

But the gang from Pediatrics did manage to find something dumb to say, even while offering good lifesaving advice.  They insisted that the infant car seats should not be used outside the car because they are often set somewhere such that they fall, and thereby injure 8000 kids a year.  Yes, that's right, if you need to pop into a store for a minute or give your infant a place to sit in a restaurant you should just… um… well, leave him at home.  (That is actually not bad advice – it is much safer to leave the kid at home while you run an errand for 20 minutes rather than exposing her to the much more dangerous experience of being in the car.  The problem is that if something happens to the kid at home, you will go to prison, while if you cause her to be injured by taking her for a ride, well, tragic accidents happen.)

So instead of using this nice solid car seat as a temporary carrier and storage place, you should juggle the baby in and out of it extra times, and carry her in your arms while shopping or eating, or just walking into the house.  Yeah, what could go wrong there?  It is classic Pediatrics:  The policy has a benefit (avoiding injuries from falling while in a car seat), and so long as you ignore the costs (substantial hassle in many cases, and presumably even more falls, but now without armor), it looks great.  I suspect a closer look would reveal that most of those 8000 involve serious bad judgment that is probably not caused or exacerbated by the car seat being used as a hand carrier.

But I do not want to distract from the real error here, which might have been lost in the above.  This was not based on any new information (they cited a study from 2007 and Swedish population data that has existed for a while).  The basis for making this recommendation has existed for years, and babies have died because of the delay in communicating it.  It is bad enough when people who make behavioral recommendations pay attention only to safety and ignore all other human concerns, but at least they should try to keep all their errors in the direction of being too aggressive rather than dropping the ball for years.

21 March 2011

Unhealthful News 80 - Gone fishin' (ACS study of alcohol and pancreatic cancer)

A study was reported last week in various news sources with the claim that heavy drinking causes pancreatic cancer.  The study results do tend to suggest that conclusion, though there are a handful of issues worth mentioning.

The article was written by employees of the American Cancer Society.  While sometimes perceived as a scientific organization, they are primarily a political activist group.  For example, they have engaged in efforts to prevent people from learning about tobacco harm reduction, disguising their political goals with sciencey claims.  I am not sure whether they deserve similar distrust on the topic of alcohol as they have earned regarding tobacco, but I would not assume they are being honest.  They are certainly not doing great epidemiology.  For example, they controlled for whether someone had a history of gallstones, but alcohol consumption has a causal relationship with gallstones (in particular, it protects against them) so, as anyone who read this blog for the last week should know, it should not be controlled for as a confounder.  It probably does not matter much, but it might.  They also controlled for body mass index, which has the same problem and might matter more.  Additionally they controlled for such variables as marital status, which are not plausibly confounders, though at least they are not caused by the exposure.
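(For anyone who wants to see why controlling for a factor that the exposure itself causes is a mistake, here is a small simulation.  Every effect size in it is invented; it is a sketch of the general problem, not of the actual alcohol-gallstone-pancreas biology.)

```python
# Small simulation (all effect sizes invented) of why you should not "control for"
# a variable that the exposure itself causes: here drinking protects against
# gallstones, and gallstones independently raise disease risk.
import random

random.seed(0)
n = 500_000
counts = {}  # (drinker, gallstones) -> (cases, people)

for _ in range(n):
    drinker = random.random() < 0.5
    p_gallstones = 0.05 if drinker else 0.30          # drinking is protective here
    gallstones = random.random() < p_gallstones
    p_disease = 0.010 + (0.004 if drinker else 0.0) + (0.010 if gallstones else 0.0)
    disease = random.random() < p_disease
    key = (drinker, gallstones)
    cases, people = counts.get(key, (0, 0))
    counts[key] = (cases + disease, people + 1)

def risk(drinker, gallstones=None):
    keys = [k for k in counts if k[0] == drinker and (gallstones is None or k[1] == gallstones)]
    cases = sum(counts[k][0] for k in keys)
    people = sum(counts[k][1] for k in keys)
    return cases / people

print("Crude risk ratio (the total effect we actually want):",
      round(risk(True) / risk(False), 2))
print("Risk ratio within the no-gallstones stratum        :",
      round(risk(True, False) / risk(False, False), 2))
# "Adjusting" for gallstones strips out the pathway that runs through them,
# so it no longer answers the question of what drinking does to disease risk overall.
```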

In short, they just threw in whatever variables they happened to have, which is standard ACS practice.  At least it might be innocent ignorance of proper methodology, without any biased fishing for which confounders yield the "best" answer, though we cannot be sure.  The datasets they use for most of their analyses are old cohorts for whom they gathered exposure data a long time ago (in this case, 1982) and have been watching for health outcomes ever since.  Since they cannot collect the optimal information for each exposure-disease combination they want to study, they generally just throw in what they have.  That brings up another difficulty with interpreting the results of these studies, though one that is an inherent challenge, not an error:  We only know the exposure data from 1982, and then it is limited by the quality of a 1982 survey (there are also some sampling issues with that survey that I will not go into).  We also have evidence there is measurement error in the alcohol variable (e.g., when they studied smokeless tobacco using this data they found there was still an association with alcohol-caused liver disease even after controlling for alcohol consumption, which means they apparently did not measure alcohol very accurately).  Indeed, most surveys underestimate alcohol consumption because people under-report it.

All that said, the results seem to be legitimately interpretable as supporting the claim.  But let's not get quite as excited about it as those reporting it did.  The authors were clearly fishing a bit for the result, so their exact claims should not be taken seriously.  First, the increase in risk was only about 20%, so even if the result were exactly right, it is a fairly modest contribution to the harm from heavy drinking.  The result suggested that the harmful effects start at 3 drinks per day (or more precisely, a reported 3 drinks per day, which probably means more), but we know that if they had seen the results starting at 2 drinks per day, they would have reached a similar conclusion.  This matters because it means that the results are biased upwards (that is, they tend to exaggerate the effect) because the hypothesis – drinking at least 3 drinks per day increases pancreatic cancer risk – was designed to fit the data.  (See this for more about that concept if you are interested and it is not clear why this would be.)  Consider this to help explain:  Assume that other studies had suggested that the unhealthy effects start at 2 drinks per day; then this study would have somewhat contradicted the previous belief, but it would have been described as if it supported it, biasing the evidence in favor of the claim. 

The finer you fit the worldly conclusions to the specific data, the more bias you get, both in the sense of overstating the support for the generically-phrased worldly hypothesis (E causes D) and exaggerating the strength of the association.  So when some commentators pointed out that women only had the effect at 4 or more drinks per day (a level of mining that the study authors actually chose not to include in their abstract, though obviously it was in the results), they increased this "over-fitting" problem, as it is sometimes called.  The worst example of this, though, which the study authors are guilty of and some reports fell for, is separating out the effects of liquor, beer, and wine, and claiming that the latter two did not show an association, and (obviously) that liquor showed a stronger association than the average of the three.  This is not actually true in the first place – beer and wine showed the association, it was just smaller – but the real problem is chopping up the association to see if it can be made stronger if only some of the dataset is considered and the rest is ignored.  It pretty much always can, and so doing such fishing and concluding that whatever it generates is right is pretty much a guarantee that the result is biased.  It was very likely guaranteed that either beer, wine, or liquor would show a substantially stronger association than the other two by chance alone, so picking it and highlighting it tells us very little.  It would not be fishing if there were a good theoretical reason to expect the result, but it would be rather odd to expect that women are able to safely drink more than men, or for different sources of alcohol to matter much to the pancreas (they plausibly matter for oral cancer, where the highly concentrated alcohol in liquor touches the mucosa).
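(The problem is easy to demonstrate.  Here is a quick simulation, with purely made-up data in which drinking has no effect on anything, that slices each dataset by sex, beverage type, and dose cutoff the way the fishing described above does.)

```python
# Fishing illustration (purely simulated, no real effect anywhere):
# slice one null dataset by sex, beverage type, and dose cutoff, and see how
# often at least one slice turns up an impressive-looking association.
import random

random.seed(42)

def fake_study(n=2000, baseline_risk=0.02):
    """One simulated cohort in which drinking has NO effect on disease."""
    return [{
        "drinks": random.choice([0, 1, 2, 3, 4, 5]),
        "sex": random.choice(["M", "F"]),
        "beverage": random.choice(["beer", "wine", "liquor"]),
        "disease": random.random() < baseline_risk,   # independent of everything else
    } for _ in range(n)]

def looks_impressive(people, subset, cutoff):
    """Crude screen: exposed/unexposed risk ratio beyond 2.0 or below 0.5."""
    group = [p for p in people if subset(p)]
    exposed = [p for p in group if p["drinks"] >= cutoff]
    unexposed = [p for p in group if p["drinks"] < cutoff]
    if len(exposed) < 50 or len(unexposed) < 50:
        return False
    r1 = sum(p["disease"] for p in exposed) / len(exposed)
    r0 = sum(p["disease"] for p in unexposed) / len(unexposed)
    return r0 > 0 and (r1 / r0 > 2 or r1 / r0 < 0.5)

subsets = [("everyone", lambda p: True),
           ("men", lambda p: p["sex"] == "M"),
           ("women", lambda p: p["sex"] == "F"),
           ("beer", lambda p: p["beverage"] == "beer"),
           ("wine", lambda p: p["beverage"] == "wine"),
           ("liquor", lambda p: p["beverage"] == "liquor")]

hits = 0
for _ in range(200):                      # 200 independent null studies
    people = fake_study()
    hits += any(looks_impressive(people, f, cutoff)
                for _, f in subsets for cutoff in (2, 3, 4))

print(f"Null studies in which some slice 'found' an association: {hits}/200")
# Drinking does nothing in this simulated world, but looking at each dataset
# 18 different ways gives plenty of chances for a spurious "finding".
```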

This reminded me of another aspect of the wind turbine case I wrote about a few times over the last few days (sorry to flog it, but I can only tell the stories that I have).  Without going into confidential details, consultants working for the industry were trying to claim that one epidemiology study's results should be dismissed because the authors did not use a Bonferroni correction.  You are excused for not knowing what that is, even if you read a lot of epidemiology studies, because it is basically never done.  It is a simplistic statistical trick for "penalizing" your statistical tests when you are doing multiple analyses of the same data (and interpreting the results a particular way), making it harder to call something statistically significant.  It did not actually apply at all in that case, but to some extent the concept addresses the same problem that I describe as creating bias resulting from fishing in the data.  Unfortunately, no simple rule can correct for the bias, so it is not really very useful in addressing the main problem, and it is based on very simplistic assumptions that do not actually describe the way fishing is typically done.  Nevertheless, my recent experience explaining why a particular claim was nonsense was a reminder that there is a simple thought exercise, taught in first-year statistics-for-epidemiologists, that reminds people there is a problem if we are allowed to look at the same data too many different ways (e.g., looking at 2 drinks, and 3, and 4, and men, and women, and beer, and liquor), and they remember hearing about that Bonferroni thing.  Anyone who would try to use the specific Bonferroni arithmetic is generally making a host of errors (it is a baby step toward understanding something, not a method that works well in the complicated real world), but everyone who took a year of epidemiology did learn that fishing is a problem.
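(For what it is worth, the Bonferroni arithmetic itself is a one-liner – divide the significance threshold by the number of tests – as in this sketch with hypothetical numbers.)

```python
# The Bonferroni correction is nothing more than dividing the significance
# threshold by the number of tests performed (hypothetical numbers below).

alpha = 0.05
tests_performed = 18          # e.g., 6 subgroups x 3 dose cutoffs, as in the fishing sketch above
bonferroni_threshold = alpha / tests_performed

p_value = 0.012               # a hypothetical "significant looking" subgroup result
print(f"Corrected threshold: {bonferroni_threshold:.4f}")            # 0.0028
print("Still 'significant' after correction?", p_value < bonferroni_threshold)
# This penalizes multiple looks mechanically, but it assumes a fixed, countable set of
# pre-planned tests; it does not describe (or fix) real-world fishing, where the number
# of looks is open-ended and the reporting is selective.
```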

As the ACS study authors acknowledge in their introduction, there is controversy about the alcohol-pancreatic cancer relationship, and their study obviously does not resolve it.  I ran into this controversy a few years ago at a conference talk in which I was insisting, in the context of one study that the authors claimed supported the (never actually supported) claim that smokeless tobacco causes pancreatic cancer, that they should have controlled for alcohol consumption (we were showing an exercise about how the reader could do that even though the original authors did not).  Someone who I believe was from the U.S. National Cancer Institute pointed out that NCI's position was that alcohol does not cause pancreatic cancer, though she agreed that in the particular study in question it was strongly associated, so a case could be made for what I was claiming.  It was kind of interesting that the U.S. government position was that, in this particular controversy, this particular evil vice is not a problem after all.  I will be interested to see how that evolves.  In any case, there were strongly contradictory results before the new study, and following the study… well, obviously there still are.

This reflects what might be rule number one about interpreting the health news (one that reporters should learn):  The new study is not necessarily better than an old study, and it is rare that a new study should move our beliefs very far from where they were based on all previously existing knowledge.

20 March 2011

Unhealthful News 79 - Knowing when to say why

My Sunday sub-series about how to know who to believe when faced with contradictory claims from apparent experts seems to often circle back to a similar theme: paying attention to who is actually addressing others' arguments.  Obviously this is not always possible; in most forums we have to provide background points or even make our main points without responding to the arguments against those points, or even acknowledging them.  For example, in my post yesterday I alluded to synthetic meta-analyses often being terribly flawed without defending the claim, even though it is contrary to what most people think they know.  It was not central to my point, and I had explained it elsewhere.  Sometimes you cannot say why, but other times you need to do so.

Notice that the phrasing of today's title is grammatically analogous to the anti-binge-drinking message it derives from:  It is not when to ask "why?" but when to provide an explanation for your claims, because if you do not then any astute reader should infer that you are trying to trick them or do not know what you are talking about.

I was struck by the point this week when responding to the "rebuttals" to some of my recent testimony about wind turbines written by the other side's consultants.  Never were scare-quotes so appropriate, given that one of the three basically agreed with what I had to say (merely positing a specific variation on the causal mechanism) while the other two rebutted nothing.  It seems that after I spent many pages carefully explaining in detail why particular types of evidence are compelling, taking it down to basic principles so that the reader could follow the analysis rather than have to just accept my assertions, they just declared that mine is different from most analysis so it should be ignored.  That would have been a really good time for them to say why.

The problem was that they did not have a "why".  If they had, they surely would have provided it.  Lacking one, they both made lots of empty comments about the analysis not being of the standard type.  Again, the rather important logical step, "why that is a bad thing", was missing.  I suppose that hired-gun reports like this work when the decision makers are just looking for an excuse to take their side, as often happens in this area.  I trust that any serious reader can see right through the vacuousness of it.  (As an aside, I have to mention that the worst and best bits of those criticisms of me were one of them describing my report as "commentary" and calling it "bombastic".  It never fails to distress me how often reviewers of any epidemiologic analysis that is beyond the first-year-class-level mistake it for commentary; as for "bombastic", I pointed out that it is one of those interesting words that is self-referential, like "sesquipedalian" or "misspeling".)

This example, of an obvious need to make one's argument rather than just asserting "my view is right", is so clear that it would seem like a cartoon if I made it up:  Someone carefully explained why, in that particular case, unusual evidence was available and why such evidence is particularly compelling; those tasked with refuting this claim replied "your evidence is unusual".  Um, yeah, I actually led with that.  For some reason, my memory flashed to the climactic scene in the movie 8 Mile where Eminem leaves his rap-battle opponent speechless by presenting everything the opponent was going to say about him, pointing out why it represented strength rather than being an embarrassment.  Except in this case, imagine that the opponent just went ahead and said what he planned anyway.  If the contestants had gone in the other order, the taunts directed at the protagonist might have sounded damning to the audience, having some sting at least until his response.  But when the protagonist led with the response to those very taunts, the opponent realized it would be pathetic to just recite the taunts.

A less one-sided example I have been thinking about is what the crisis at Japan's damaged Fukushima Daiichi nuclear power plant should say to us about the future of nuclear power.  Opponents of nuclear power obviously have gained a powerful bit of evidence for their cause.  But, wait:  Those who support continued and expanded use of nuclear power can, and occasionally do, point out that the rate of about one crisis per decade across all the world's nuke plants needs to be compared to the human health and ecological damage that would be done over the same period by substituting fossil fuel plants or industrial wind turbines.  (Ok, I just threw that last bit in myself – most people have still not caught on.)  In the face of that claim, it is no longer enough for someone to just say "look at how bad that was, and it probably will happen again".  They need to explain why that risk is greater than the toll being compared, or why the comparison is not fair. 

I continue to look for an example where an advocate on either side of this issue acknowledges the opposing claims and explains why they are not compelling.  But I almost always find myself thinking, "do you, perhaps, know that there are many people who disagree with your position and have arguments of their own; how can you possibly think your arguments are compelling without telling me why they refute or trump the opposing arguments."

Another example comes from my own writing this week: in response to the question of whether the US FDA would ban menthol in cigarettes, people have said that this would cause more people to buy from the black market (undoubtedly true).  But some comments have suggested that this could cause an increase in total consumption.  This desperately calls for a "why", since, as I posted, the claim is basically that making something costlier to acquire (which is what a ban does) will increase consumption.  Making an assertion that runs contrary to existing evidence, established theory, or conventional wisdom cries out for saying "this is why the previous belief is wrong".

I guess what I come away with is:  If you are making a point that is contrary to the conventional wisdom, you need to recognize the conventional belief and explain why you are right and it is wrong, or why it does not apply in the particular case.  Why is menthol so unusual that a ban could actually lead to increased consumption?  Providing that "why" is what I do in most of my overview writings about tobacco harm reduction or wind turbines (yes, I know – I have detected the trend in my intellectual life).  I try to explain why the conventional wisdom is misleading, and if possible to do so in enough detail, starting with first principles, so that readers can see why they should believe it (apparently doing so is bombastic, but it is just the price I have to pay).

But the need is even more compelling when someone is part of a debate where an opponent has explicitly made a claim about why they are right, perhaps even acknowledging what might seem weak about their position in order to make their case about it.  It cannot be useful, in that case, to just recite one's stump speech again.  If that is all someone has to say, the reader really should conclude that they have nothing to say.

Kinda makes me want to watch 8 Mile again.  Cue a beat.  Get seriously bombastic.  "Tell these people somethin' they don't know about me" (the protagonist's final line of the last battle) is a great way of saying, "if your only arguments are ones that I already presented and refuted, perhaps you might want to just be quiet."

19 March 2011

Unhealthful News 78 - Controlling for other variables, answering one question well, but not another

Today's New York Times reported on a study that is a nice addition to my recent examination of confounding.  I have to admit that I am just running with the description in the news story rather than reviewing the original study, because it is such a good example based on what was reported that I am hesitant to learn more and mess up the story.

There has long been a conventional wisdom that carrying a bit of extra weight in an "apple shape" (around your midriff) puts you at higher risk of heart attack than carrying the weight in a "pear shape" (in your hips and butt).  Euphemisms aside, this claim had implications that are both diagnostic (be more worried if your extra weight is around your middle) and advisory (work harder to lose the weight if it is around your middle because you have more to gain by getting rid of it).  The new study pooled the data from 58 studies (yes, it was one of those synthetic meta-analyses that I have pointed out are often terribly flawed, but as I said, I like the story as told) and found that after you take into account the effects of other causes of heart attack (the news article mentions blood pressure, cholesterol, diabetes and smoking, and implies that there are others), the pear-vs.-apple distinction has no further explanatory power.  That is, by looking at your blood pressure, etc., you can assess your heart attack risk, and having done that, you learn nothing more by looking at where you carry your extra weight.

This is a potentially useful thing to know (assuming it is right, and there are a lot of ways it could have gone wrong).  It is also one legitimate way to analyze data for asking a specific question:  Does the apple factor have extra diagnostic power beyond what we already know?   Notice how the study is described:
Conventional risk factors like blood pressure, cholesterol, diabetes and smoking were accurate predictors of a heart attack or stroke, but additional information about weight or body shape (ascertained by measuring waist circumference or waist-to-hip ratio) did not improve the ability to predict risk.
It is important to not mistake this for controlling for confounding.  This is one of the places where epidemiology gets really complicated – far too complicated for most of the people who do it.  If we were asking the question "does apple shape cause heart attack risk?", we would want to control for confounders.  But as I have pointed out this week, this does not mean just controlling for every variable you might have, because some of them are not confounders.  To take the simplest case, it could be that having fat around your middle does more to cause higher blood pressure or unhealthy cholesterol levels than the same amount of weight elsewhere.  Thus, naively controlling for blood pressure or cholesterol is not proper controlling for confounders; because they are on the causal pathway from exposure to disease, controlling for them hides the real effect of the exposure.  (As an aside, it really gets messy here because inevitably at least some of the blood pressure differences between the exposed (apple shaped) and unexposed groups represent confounding, but some might be on the causal pathway or otherwise not a confounder.  Thus it becomes necessary to figure out how to control for the right part of the effect of blood pressure, something that is quite challenging.)
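If it helps to see it with numbers, here is a minimal simulation (Python, invented numbers – this is not the pooled study's data) in which the exposure acts entirely through blood pressure, so "adjusting" for blood pressure makes a genuinely harmful exposure look harmless:

import numpy as np

rng = np.random.default_rng(1)
n = 100_000

apple = rng.binomial(1, 0.5, n)                            # exposure
blood_pressure = 120 + 15 * apple + rng.normal(0, 10, n)   # mediator
# risk works only through blood pressure: about 0.03 extra risk for apple shape
risk = 0.02 + 0.002 * (blood_pressure - 120) + rng.normal(0, 0.01, n)

def apple_coefficient(extra_covariates):
    """Least-squares coefficient on the apple indicator."""
    X = np.column_stack([np.ones(n), apple] + extra_covariates)
    beta, *_ = np.linalg.lstsq(X, risk, rcond=None)
    return beta[1]

print(f"apple effect, unadjusted:         {apple_coefficient([]):+.4f}")
print(f"apple effect, 'adjusted' for BP:  {apple_coefficient([blood_pressure]):+.4f}")

In this deliberately simple setup the unadjusted number recovers the real causal effect (about 0.03), while the "adjusted" one is roughly zero; the latter answers the diagnostic question, not the causal one.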

So, it is critical not to make the mistake, based on information like this study (which controls for every variable whether or not it is a confounder), of claiming that a particular exposure causes, or in this case does not cause, an outcome.  That would be incorrect, and doing so would reflect a failure to understand how to do epidemiology.  The study, as described, lets us know whether there is any diagnostic value in the apple shape, given complete other information, but it does not actually answer the advisory question, "should I make an extra effort to lose this weight because it is around my middle and that causes higher risk?"

As I have tried to drill into the head of every student I have ever had:  The quality of an answer depends rather crucially on what question you are asking.

Regular readers will guess where I am going with this.  Apparently the study researchers did not understand the limits of what they had found:
 “Whatever your shape is doesn’t really matter,” said Dr. Emanuele Di Angelantonio, a lecturer at the University of Cambridge and a member of the Emerging Risk Factors Collaboration, which carried out the study.  He emphasized that being overweight or obese is one of the main modifiable risk factors for cardiovascular disease, and is often an early sign of future risk. But he said, “Whatever form of obesity or overweight you have is all the same.”
But this is not what the study actually found.  All it found was that other variables screened any effect of where you carry your fat ("screen" being a term that is sometimes used to describe the logic, "if you know these other things then you learn nothing more from knowing this" or in an intervention study, "if you fix these other values then you block -- i.e., screen off -- the effect of the one").  That does not mean that shape was not causing bad outcomes, just that it was doing so via pathways that were measured, so the other variables picked up its effect.

Actually controlling for confounding, in order to figure out the effects of where you carry your fat, is much more difficult.  The fact that various studies in the past have suggested that the apple-shaped are at higher risk means that we should still consider it possible, and those with that shape should still be more worried about losing the extra weight (and, thus, those with pear weight can remain a bit less worried).  A study that controls for obvious pathways via which being fat around the middle rather than in the butt (I am sick of the euphemisms) might cause risk does not actually contradict that claim.  It cannot. 

If these mistaken claims of the study's authors, via credulous news reporters, become conventional wisdom (keep in mind that most people evaluating the claims do not know as much as you learned about proper controlling for confounding reading UN this week), the official word will be to not worry about your fat middle.  But this will only apply if you have a complete catalog of all the other screening information and its implications, based on medical testing from your physician and having that information parsed by your consulting epidemiologist (what, you don't have one of those?).  The public health advice, however, should not change because, as far as we know, losing that fat middle is still more important than losing fat hips.


[A note on New York Times references:  As those of you who are avid internet news and blog readers probably already know, the NYT just announced that it is going to a semi-paywall model, wherein you have a limited number of article accesses a month unless you pay to subscribe, or link in from twitter or a few other sources that do not include this blog host as far as I understand.  In consideration of this, I will stop including links to NYT stories unless citing that particular version of the story is crucial to my analysis, which it usually is not, and will make sure to note when a link is to the NYT so you do not click and spend one of your monthly allotment without knowing you are doing so.]

18 March 2011

Unhealthful News 77 - Important scientific discovery: RJR products that are made from tobacco and confectionary are found to contain nicotine and confectionary

Brad Rodu got the scoop on this story at his Tobacco Truth blog a few days ago, but now it has been reported in the popular press, so it is time for an Unhealthful News analysis.  Some scientists (I refer to their job descriptions, not their behavior or way of thinking) analyzed some of RJR's Camel-branded dissolvable tobacco products (Orbs, Sticks, Strips) and reported that they contained exactly what would have been expected.  The researchers from Indiana University-Purdue University Indianapolis (I bother to include that institution name just to mention that those of us who, back in the day, were in the same leagues referred to them, using a rough phonetic interpretation of their acronym IUPUI, as oo-ee-poo-ee) reported that the products contained nicotine (gasp!) and simple food ingredients like you could find in most confectionaries.

If they had just stopped there, it would have been a somewhat useful, albeit fairly boring, scientific study of chemical content that might have been used appropriately as data for other analyses sometime.  Or it might have been pretty useless, since the same information probably could have been obtained from the manufacturer. 

But the researchers decided to tout that information to the popular press.  This is actually a bit of a good-news story about the press (indicting only those further upstream), because only a few obscure news outlets have picked up the press release.  In that press release the authors try to turn their boring workaday chem lab study into a warning about:

(a) Child Poisoning ("The packaging and design of the dissolvables may also appeal to children….")

Huh?  What the hell does that have to do with their chemistry analysis?

(b) Mysterious Effects of Nicotine ("They note abundant scientific evidence about the potential adverse health effects of nicotine, including those involving the teeth and gums.")

Would those be the adverse effects like the relief from psychological disorders and prevention of degenerative brain disease, or some other adverse health effects?  And I would love to hear how they decided that nicotine is bad for the teeth and gums, and I suspect the folks who make Nicorette gum might be interested too.

(c) The Use of Deadly Toxins as Ingredients ("Other ingredients in dissolvables have ...[properties]… and one, coumarin, has been banned as a flavoring agent in food because of its link to a risk of liver damage.")

At least this one has something to do with their chemistry study, unlike the rest of their random comments, and might cause some concern.  So it is a good thing that Brad Rodu addressed it several days ago.  Too bad the authors did not bother to read what people had written about their study before they put out their press release.  Brad pointed out that coumarin can be found in some kinds of cinnamon, and the traces of it were indeed only found in the cinnamon flavor.  Mystery of that "ingredient" solved.

(d) A Mysterious Dramatic Mutation of Tobacco ("The researchers' analysis found that the products contain mainly nicotine and a variety of flavoring ingredients, sweeteners, and binders.")

From this we can infer that the finely ground tobacco in these products, since it is not a flavoring ingredient, sweetener, or binder, must consist of "mainly nicotine".  Funny, I had not heard of that breakthrough in plant engineering.

(e) And, of course, Politics Disguised as Science ("Health officials are concerned about whether the products, which dissolve inside the mouth near the lips and gums, are in fact a safer alternative to cigarette smoking.")

Um, no.  Anyone who deserves to have the word "health" as part of their title knows the truth.  Anti-tobacco extremists are concerned about the fact that these are a low-risk alternative, and thus might undermine their dishonest messages about all tobacco products having the same high risk.  Fortunately, they have the folks from oo-ee-poo-ee to help them keep people confused.


I must say, this is a surprising level of hype about nothing, even for anti-tobacco propaganda.  I am not even sure Stephen Hecht could have exaggerated the ramifications of a meaningless chemistry study this much.  It is somewhat forgivable when news reporters try to conjure up some simplistic practical implication of a new purely technical scientific study.  ("The discovery of an Earth-sized planet orbiting the distant star may aid in the development of tastier breakfast cereals.")  They are probably under orders to make such stories appeal to people as stupid as they think we are.  It is not forgivable when people who are pretending to be legitimate scientists do this.

The oo-ee-poo-ee researchers [yes, I know that is cheap and bordering on childish, but it takes me back to college days, so bordering on child is about right] conclude their press release with:
"The results presented here are the first to reveal the complexity of dissolvable tobacco products and may be used to assess potential health effects," said Goodpaster [what would be really cheap and childish would be to riff on the lead investigator's name, so I won't], noting that it is "therefore important to understand some of the potential toxicological effects of some of the ingredients of these products." Nicotine in particular, he noted, is a toxic substance linked to the development of oral cancers and gum disease.
I trust that my readers, unlike these non-health scientists who should really stick to their chemistry sets, know enough to know that nicotine has never been shown to cause any cancer or any other disease (there is concern about pregnancy effects and speculation about the effects of mild blood pressure increases).  And, yes, most of that really is just a repeat of much of what I already commented on (this was entirely separate from all of the bits I quoted above) – they had so little to say that they had to make the same false statements multiple times in the press release.  I thought about adding some final assessment of the lengths of dishonesty that anti-tobacco researchers will go to in trying to discourage tobacco harm reduction, but I do not think I can put it better than they did themselves.

17 March 2011

Unhealthful News 76 - Fatal confounding is worse than badly controlled confounding

Two days ago, I wrote a primer on confounding.  A couple of times this week I discussed how controlling for confounding can be done wrong.  But that might have been somewhat of a misleading emphasis.  Sometimes when you are too deep into something you forget that the real problems are more basic than the ones you usually bother to struggle with.  In the case of epidemiology, the bigger problem than controlling for confounders badly is pretending that confounders do not exist at all.  It is really a huge problem when there is literally no way to control for the worst of them.

This week a study was reported – as naively as can be imagined – that purported to show that old people suffer much more impairment when trying to multitask, in particular to talk on the phone while crossing the street, as compared to college kids.  This was based on some simulator studies and a wonderful example of uncontrolled confounding.  As an aside, I am greatly amused that the reports all specified that the task was "talking on a cell phone while crossing the street" as if (a) a cell phone is some novel technology that needs to be specified, (b) "cell" is the standard phrasing (elsewhere in the anglophone world it is a "mobile"), and most important (c) there is any other kind of phone you can talk on while crossing the street.  (Look out for that cord running across the crosswalk up ahead!)

As is the case with so many psychology studies, the main conclusion is almost certainly correct, the problem being that it does not really follow from the study.  The biggest problem is something that epidemiologists are quite familiar with, but that psychologists may not be, the age-period-cohort issue.  That is, if we study current 70-year-olds (a particular age), right now (a particular period), then you inevitably have people who are entirely from one cohort (born about 1941).  If it is conceivable that an outcome is caused by either age (older people have trouble multitasking compared to people with young intensely-alert brains) or cohort (people born in 1941 only learned to walk while talking on the phone at age fifty-something, if ever, while current college students have been walking and phoning since childhood), or both, then the study makes it impossible to sort out the effects.  Assuming we are interested in the effect of being old, and not the effect of having come of age without phones that can cross the street, we are stuck.  Since there is perfect collinearity of the two, there is no variable that can be used to control for the effect of the second because there is nothing that can distinguish them.  We would need a study that separated age from cohort (i.e., we would have to wait until we could study people of the same age from a relevantly different cohort, one that learned to cross streets when kids already had cell phones), which will require waiting about 50 years.
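The collinearity point can be made concrete in a few lines of code (Python, toy data of my own invention, not the study's): because everyone is observed in the same year, the cohort column is just a constant minus the age column, and the regression machinery literally cannot tell the two apart.

import numpy as np

study_year = 2011
age = np.array([20, 20, 20, 70, 70, 70])        # two age groups, one period
cohort = study_year - age                        # perfectly determined by age

X = np.column_stack([np.ones_like(age), age, cohort])
rank = np.linalg.matrix_rank(X)
print(f"Columns in design matrix: {X.shape[1]}, rank: {rank}")
# rank < number of columns: infinitely many combinations of an "age effect"
# and a "cohort effect" fit the data equally well, so no amount of modeling
# can say whether being old or being born in 1941 drives the outcome.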

There were other related problems.  The participants were not actually crossing a street, but were walking on a treadmill with display screens around them creating a virtual reality.  In other words, they were playing a first-person video game (with their bodies as the controller), something that college kids are almost all comfortable with, but that those aged 60-80 are much less familiar with than they are with actually crossing the street.  Thus, part of the measured effect was neither the age nor the cohort effect associated with being on the phone, but the cohort effect of not being used to video games.  The researchers tried to make a big deal about how the study shows that walking is not as natural as we think it is; they are no doubt right that it is not natural when "walking outside" involves a treadmill and synchronized video screens in a dark room.

If we really wanted to solve some of the study problems sooner, we would not really have to wait 50 years or get old people run over.  There are some clever tricks, like finding some young people who were not used to having a mobile phone, or comparing older people who have more and less experience with them (and even with video games).  That introduces other more subtle confounding, but at least it gets some traction on the biggest confounding problem.  Notice the subtext lesson here:  One way to get rid of confounding is to control for covariates but an even better way is to design a study that avoids it. 

Of course, we do not really need to do that at all, because we already are quite sure that old people have a harder time crossing the street while talking on the phone.  The study told us nothing, but the news reporters just did not get that.  Unfortunately, the way they reported it actually could create a public health danger.  Another way of saying "older people have more trouble", one that even the average health news reporter ought to be able to figure out, is "younger people have less trouble".  I am sure that is not lost on younger readers.  But 70-year-olds are seldom found talking on the phone while crossing the street, while 20-year-olds do it all the time, and thus are at much higher total risk.  But if they read the headlines that the authors of this study created (via use of their press release and the credulous health press) young people will think they have been told they are good to go.  Look out at the crosswalk!, indeed.
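A back-of-the-envelope calculation makes the point (the numbers below are invented for illustration, not estimates from the study or anywhere else):

# Even if each phone-distracted crossing is riskier for a 70-year-old,
# the 20-year-old who does it far more often can carry far more total risk.
risk_per_crossing = {"70-year-old": 1e-5, "20-year-old": 4e-6}   # assumed
crossings_per_year = {"70-year-old": 10, "20-year-old": 500}     # assumed

for person, per_event in risk_per_crossing.items():
    total = per_event * crossings_per_year[person]
    print(f"{person}: per-crossing risk {per_event:.0e}, "
          f"annual total {total:.1e}")

With those made-up inputs the 20-year-old's annual risk comes out many times higher, which is exactly the message the headlines managed to invert.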

This reminds me of another study that told us nothing despite having a clearly correct conclusion, and that is a rather telling indictment of the quality of epidemiology (not just pickin' on psych today).  Some researchers in public health at University of Alberta did a study which they claimed showed that allowing body checking in youth hockey increases injuries.  It seems that a rule change moved 11-year-olds in youth league hockey from playing with the 10-year-olds in a league that did not allow checking to the league with 12-year-olds that did allow checking.  So the researchers compared the injury rate from the year before the change to the year after and, sure enough, injuries increased.  They got so breathlessly excited about the fact that they had a "natural experiment" that they seem to have overlooked the fact that it was a very badly designed experiment, another one with perfect (and thus non-controllable) confounding.

Before reading on, see if you can see the problem and prove yourself smarter than a University of Alberta public health faculty member (a lower standard than you can imagine).

Got it?  If you said "wait a minute, the 11-year-olds went from being the bigger kids, surrounded by smaller and thus less damaging bodies, to being the smaller kids among a group of larger and stronger bodies," you win.  There was perfect collinearity between becoming exposed to checking and becoming exposed to older opponents.  It is safe to say, without any serious doubt, that each of those increases injuries.  But when we are only measuring their combined effect, we learn very little.  One thing that we do not learn, despite the claim of the authors, is the effect of exposure to checking alone.

But I have not even gotten to the worst part yet.  One of the authors presented their results and conclusions at a departmental seminar when I was there (back when it was much more difficult to be smarter than a University of Alberta public health faculty member ;-), and I pointed out this fatal flaw that they had not even recognized.  What did they do?  They put a few-word caveat in the discussion that it was maybe possible that there might be a wee confounding problem because of this, but implied that it does not really matter; they did not change their interpretation of their results or their conclusions at all.  And a journal published it anyway and, what is worse, the editors and reviewers no longer had the excuse that they might not have figured out the fatal problem because it was mentioned (though downplayed) in the paper.

There is always a bright side, though.  It is fun to have a perfect teaching example of horribly bad epidemiology that you can use in classes at the same institution that produced the article.  Well, fun for the teacher and students anyway.

16 March 2011

Unhealthful News 75 - Coffee, please, and leave room for confounding

When I read a report of the recent study that suggested that coffee consumption protects against stroke, it mentioned that the result controlled for several variables, including blood pressure.  I immediately wondered if this was like the example of controlling for measures of blood lipids when studying alcohol and heart attacks, as I talked about in yesterday's post about confounding.  It struck me that the beneficial effect might be an artifact of taking out the detrimental effect of higher blood pressure (which is caused by caffeine use), leaving an apparent benefit once that pathway was removed.  There are a couple of ways this could play out – e.g., it could be that the negative effect of blood pressure was overestimated, leaving coffee drinkers looking healthier than they "should be" based on that, resulting in an apparent benefit.  As I noted yesterday, you should not control for something on the pathway from the exposure of interest to the effect, and blood pressure is clearly on that pathway.

I should say that I was certainly not bothered by the possibility that lots of coffee is healthy.  That is a delightful notion.  But I had to wonder. 

It turns out that the "blood pressure" measure was actually just a "yes or no" to some question about hypertension (presumably it was whether that had ever been diagnosed, though this was never clarified in the journal article).  Also, the confounders did not seem to matter too much.  To their credit, the authors reported what their effect estimate was for coffee adjusting only for age.  (This is not quite adjusting for nothing but pretty close and since it did not seem to vary much among the exposure groups – those who consumed none or almost none, and three categories of increasing consumption – it probably does not matter.)  That showed that their model that was adjusted for a bunch of factors got basically the same results.  Therefore, they could not have been trying to mislead people as with the U.S. National Cancer Institute group I mentioned a few days ago.  There were some things that made me a bit uneasy about the methods and claims, but obvious fraud was not on the list.  (It seems that most researchers at the Karolinska Institute, where this study came from, are average or above-average; it may only be the ones researching snus that engage in unethical and then illegal practices.)

Mind you, I am not suggesting this study looked at confounders correctly, just that this seemed like it probably did not matter.  After all, they still wrote, "we cannot rule out the possibility that our findings may be due to unmeasured or residual confounding" which, since it is always true, is kind of a silly thing to say.  The authors still seem to have blindly thrown in whatever variables they happened to have, without regard to whether they should have been controlled as confounders or whether their relationship to the exposure and disease was such that they should not have been controlled for.  As suggested in yesterday's post, properly controlling for covariates requires a lot more than this, even at a minimum.  For example, I noticed that in this study population, body mass index and exercise averaged the same for each exposure group (as did most everything else measured), but calorie intake increased quite a lot as coffee consumption increased.  Is coffee keeping heavy eaters from having health problems by burning off excess calories?  Do coffee drinkers naturally have different metabolism?  That is probably not an issue for the outcome in question, but the point is that if those variables should have been included in the model then the authors should have tried to figure out what was up with them.  It is also possible that throwing in all the variables they had did not change the effect estimate, but controlling for only the variables that they should have controlled for might have changed it – we cannot tell because they did not report such models.
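To be clear about what that crude-versus-adjusted comparison does and does not tell you, here is a minimal sketch (Python, invented data, not the Karolinska study): if adding the measured covariates barely moves the coffee coefficient, then those covariates were not doing much confounding, which says nothing about unmeasured confounders or about whether they belonged in the model at all.

import numpy as np

rng = np.random.default_rng(2)
n = 50_000
age = rng.normal(60, 8, n)
bmi = rng.normal(25, 4, n)        # stand-in for the other measured covariates
coffee = rng.poisson(3, n)        # cups per day, unrelated to age or bmi here
stroke_risk = 0.05 + 0.001 * (age - 60) - 0.002 * coffee + rng.normal(0, 0.02, n)

def coffee_coefficient(covariates):
    """Least-squares coefficient on coffee, with the given adjustment set."""
    X = np.column_stack([np.ones(n), coffee] + covariates)
    beta, *_ = np.linalg.lstsq(X, stroke_risk, rcond=None)
    return beta[1]

print(f"coffee effect, age-adjusted only: {coffee_coefficient([age]):+.4f}")
print(f"coffee effect, 'fully' adjusted:  {coffee_coefficient([age, bmi]):+.4f}")
# Both come out near the true -0.002 because, in this toy setup, the extra
# covariate is not associated with coffee drinking, i.e., it is not a confounder.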

Of course, as with any study of this type, the important questions do not end there.  There are also other explanations for the result like: the healthy worker effect (people with jobs tend to be healthier than those who lack them, often in ways that are unmeasured, and workers might drink more coffee – they do in a lot of populations, though I am not sure about older Swedish women); the unhealthy abstainer effect (some people who completely avoid common behaviors that might be unhealthy in some way do so because of a health problem that is not captured in the data); the Coke or tea that non-coffee drinkers consume causes stroke; and even the healthy survivor effect (the story would be that coffee actually triggers strokes at an age earlier than the study age in anyone who is prone to them, and so those coffee drinkers who survived without stroke to be in the study population were the ones who were not at risk).  I am not suggesting that any of these is right, just that they are possible and could be tested.

Again, I think it would be great if they are indeed right and drinking six cups of coffee a day lowered your stroke risk by 20%.  But I am a bit leery.  Also, since other stimulants (decongestant medicines, ephedra-like drugs, nicotine) are widely believed (rightly or wrongly) to cause stroke, it is interesting that everyone is so quick to believe it and report the new result without skepticism.  Hundreds of news sources covered it and I did not notice any of them expressing doubt, though obviously I did not review them all.  Perhaps it is a plot by Big Coffee to mislead us.  Or should that be Venti Coffee?  After all, they spend millions on advertising their addictive product, add flavors to attract children who get hooked and then graduate to the hard stuff, and use sexually explicit cartoon images of ship-crashing temptresses – what more do you need to know?