09 February 2011

Unhealthful News 40 - The unhealthful news about e-cigarettes could fill a book (hmm...)

Today's news offered several good candidates for this blog, but I am going to go with the one in my primary field of research, a new study (pdf) about the use of electronic cigarettes as a substitute for smoking.  The WebMD news article about the study contains enough material for a month's UN blogs.  (I am not trying to pick on WebMD after the coverage in UN37 – they just seem to be the only news service that reported the story.)  Tomorrow we will post something about this at the tobacco harm reduction blog, so today I will focus on some points about the nature of scientific inference, leave the THR points for the THR blog, and try to pick up some of the other problems with the news story somewhere.

Writing about this is a bit tricky since I am an advocate in the area and the author of the study, Michael Siegel, and I share many views and goals, as well as professional circles and correspondence (though we have only met once).  But since I presume to write this series, I had better be able to critically analyze claims on "my side" of an issue (though most of the criticism is about the news reporting).  I would hate to have those of you who follow this series eventually come back and point out that today's post did not adhere to my own standards.

For those who may not know, e-cigarettes are vaporizers (usually in the size and shape of a cigarette) that deliver a nicotine-containing vapor, providing nicotine and an experience much like smoking.  They are designed as a substitute for smoking and are estimated to be roughly 99% less harmful than smoking, though we have no direct evidence about their health effects (they are too new) and limited evidence about how many smokers consider them a satisfying substitute.  The 99% estimate is therefore necessarily based on extrapolating the evidence about the most similar product, smokeless tobacco; my and others' calculations put the risk of smokeless tobacco at about 1/100th that of smoking.  It is possible that e-cigarettes are a bit more hazardous, but probably not much.  They are almost certainly not measurably less hazardous than smokeless tobacco, but they are more appealing to many smokers, and thus are very promising for public health.

But just because we do not know nearly as much as we want to does not mean we do not know anything, as the WebMD story implied.  The headline reads "Survey: E-Cigarettes May Help Smokers Quit", which we obviously already knew.  Indeed, anyone aware of e-cigarettes before the first one was even produced knew that they may help smokers quit.  What we now know, based on ample evidence, is that e-cigarettes do help smokers quit.  Not only are these products something that may replace smoking, thus helping smokers quit, but many smokers have substituted them, so they do what was hoped.  This "do" claim, not the "may", is the claim that the survey tends to support.  However, what the survey actually found about that was a bit disappointing for me, as someone who hopes that THR will succeed and who follows the success of product substitution closely (my research group, in collaboration with an e-cigarette merchant, published the first study like Siegel's in November 2009 – you can find it in our 2010 THR yearbook).  I will try to expand on that disappointment in a later post.

The author of the news story even recognizes the evidence that e-cigarettes help smokers quit:
Many e-cigarette users say the devices have helped them quit smoking, or at least cut back. 
But then he demonstrates that he should probably not be writing for a health news service:
That's what scientists call "anecdotal evidence," i.e., not a proven fact. To remedy the gap in scientific evidence, Siegel is currently studying a group of e-cigarette users to see whether they're quitting or cutting back on real cigarettes.
Oh where to begin with that?  Shall I observe that no worldly claim is ever proven fact, but rather we have evidence to support the claim to varying degrees?  Should I explain that an anecdote is as close to a "proven fact" as you can get, since either the information is true or the person is a liar?  Or should I explain why "anecdotal evidence", since it is evidence, is not a gap in scientific evidence but part of it?  That one is too big to fully cover today, so I will expand on it with at least one other example in the near future, but I will start on it now.  More subtle is the point that Siegel's research, which is the type of research that can be done given the limited available resources, is basically a way of collecting and systematizing "anecdotal evidence", not something fundamentally different from it.

Let's back up a step.  Something that we would really like to know is what portion of all smokers would switch to e-cigarettes (or smokeless tobacco, or pharmaceutical nicotine products) if they were widely available, properly recognized as being low-risk, socially acceptable, priced competitively, and actively promoted as a low-risk substitute, or some combination thereof.  Unfortunately we will only be able to learn that when circumstances change to fit that description and we observe what people do.  A comparably interesting question is how many smokers would switch to a low-risk alternative upon learning that it is low risk and that members of their social circle have already switched.  That is something that could be studied without waiting for society-level changes, but it would entail substantial costs, and no one is supporting much research on e-cigarettes (the industry cannot afford it yet, and anti-tobacco people are even more anti-harm-reduction than they are anti-smoking, for reasons I have discussed extensively elsewhere).  We have evidence about one or the other of these (depending on exactly which story you tell) because so many Swedes have substituted smokeless tobacco for smoking, but nothing about e-cigarettes.

So what else might we want to know?  Next best might be knowing how many smokers who decided to try e-cigarettes ended up switching.  That is what Siegel apparently wanted to study, but could not (see below).  If we cannot know that, next best might just be to confirm that people quit smoking by switching to e-cigarettes.  To show the latter – which is quite useful to know, since it shows that e-cigarettes are an effective smoking cessation aid – what we need are those much maligned anecdotes.  An organized collection of them is most useful, which is what we and one other research group did when we surveyed "convenience samples" of e-cigarette buyers (i.e., we got whoever was most convenient to survey – specifically anyone who responded to an email sent out to people who ordered e-cigarettes, and to postings at e-cigarette aficionado discussion groups, asking them to do a survey).  The limitation of that is not that the respondents are convenient, but that they are self-selected: people who volunteer to do a survey when a call for participants is broadcast.  Since such volunteers are often very unlike the average person in important ways, you cannot generalize percentages and such to the rest of the population.  You can observe that many people reported that e-cigarettes let them quit smoking when nothing else they tried did (as we observed) and that almost every respondent was a former smoker (as we observed), but you cannot be sure that this is true for the average person who tries e-cigarettes.

Anyway, I trust you can see the pattern here:  Answering different questions requires different evidence.  Conversely, whatever evidence you have tells you something, even if it is not what you most want to know.  You might think that health news reporters would know that, so that they could pass the insight on to the reader.  But it is pretty much up to the reader to figure it out.

The ultimate epistemic failing of the news reporter (which judging from the wording might be the fault of one of the people he interviewed, though he did not attribute it) was:
However, only an expensive clinical trial could really determine how safe and effective e-cigarettes are for smokers who want to quit.
Too bad he did not ask himself how, exactly, a clinical trial could determine how safe e-cigarettes are.  As I have noted previously in this series, a clinical trial is not some magic spell for creating scientific knowledge.  It consists of causing volunteers to take a particular action for a short period of time and seeing what happens.  It is perfect for figuring out which drug or surgical procedure cures a specific disease.  It is useless for figuring out the risks of e-cigarettes, which will only emerge over a lifetime of use.  Some people like to try to explain that a trial is not possible because we are not allowed to assign people to use e-cigarettes their whole life, but they miss the much bigger problem:  No one will ever use today's e-cigarettes for their whole life, whether assigned or not.  The current e-cigarette technology will probably be outmoded in a few years, so we will never have a population of long-term users of this product (like we do for smoking or smokeless tobacco) whose health outcomes we can measure.  The best we can do is assume that all smokeless nicotine products have about the same very low risk, as we already do.  (We can monitor e-cigarette users for acute adverse outcomes, as might result from the serious contamination problems that many of us do worry about given the limited quality control, but that monitoring comes from free use of the products, not from a small clinical trial.)

How about an "expensive clinical trial" about using e-cigarettes to quit?  This would actually tell us something, though nothing we really want to know.  If we gathered a group of smokers who signed up for a study on cessation methods, assigned half of them to use e-cigarettes, and educated them about the low risk, we would likely see that many of them ended up switching.  This would, unfortunately, not tell us much about whether smokers who are not the rare type to volunteer for a cessation trial would, if given information under normal circumstances and not told they are supposed to use e-cigarettes, choose e-cigarettes.  The good news is, however, that such a trial would not actually be all that expensive.  It just would not be very informative.

So what do we know about our real subject of interest, smokers who are open to the idea of quitting, are free to choose what they do, and have a normal level of knowledge?  It would be interesting to know how many smokers who tried an e-cigarette a few times decided to switch.  To figure this out we would want to get survey responses from everyone in a random group of people who tried e-cigarettes.  This is what Siegel tried to do, but failed.  He contacted what was basically a random group of people who tried e-cigarettes by using a merchant's email list of customers, as we did.  But fewer than 5% of them responded.  This makes it another highly self-selected sample – he sent out an email asking people to join a survey and a few of them volunteered to do so, exactly what happened in the previous studies.  This does not mean that the study was not useful, but it needs to be recognized as providing the evidence that it provides (an organized collection of those "anecdotes") and nothing more.

But I have to say I was rather disappointed to see Siegel, in his article, try to spin his study as fundamentally better than the two that came before because he had a systematic sample while we used a convenience sample.  This is not because I am defensive about our study, but because it invites a damning criticism:  If someone does a study where they were following a group of people (a cohort study, to use the jargon) to see what happens to them (presumably seeing if their characteristics, or what they were assigned to do if it was a trial, affected the outcomes) and they lose more than 95% of the participants before they can evaluate them (the jargon is "lost to follow-up"), it is a complete disaster.  Since the 5% who were not lost were undoubtedly different from average in any number of ways, it would be absurd to assume that they were a random sample of the whole study population and just ignore the loss to follow-up.

Imagine that you were studying a smoking cessation intervention with 1000 people, but when you went to see who was still smoking at the end of six months you could get responses from only 45 of them.  The good news is that 40 of them had quit smoking.  But should you claim that your method is nearly 90% successful (40 of 45)?  Of course not, because it seems fairly likely that those who were favorably disposed to you, because you helped them quit smoking, are the ones who stayed in touch.  So you would likely just abandon the study as a failure and try again, just as if you were doing a laboratory study and your machine exploded halfway through.  If you did submit a result with 95% loss to follow-up to a journal and suggested that the loss did not matter, I really hope it would be rejected – this is a rare case when reporting the data could be worse than not doing so.
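To see how badly differential response distorts the estimate, here is a minimal simulation (the numbers are entirely hypothetical, not from Siegel's study or ours): if quitters are far more likely than continuing smokers to answer the follow-up survey, the quit rate among responders can approach 90% even when the true rate is closer to 10%.

```python
import random

random.seed(1)

N = 1000
TRUE_QUIT_RATE = 0.10          # assume only 10% of participants actually quit
P_RESPOND_QUITTER = 0.40       # quitters, favorably disposed, often respond
P_RESPOND_NONQUITTER = 0.005   # almost no continuing smokers respond

# Simulate each participant's true outcome and whether they respond at follow-up
quit = [random.random() < TRUE_QUIT_RATE for _ in range(N)]
responded = [random.random() < (P_RESPOND_QUITTER if q else P_RESPOND_NONQUITTER)
             for q in quit]

# The quit rate you observe is computed only among the few who responded
responders = [q for q, r in zip(quit, responded) if r]
observed_rate = sum(responders) / len(responders)

print(f"true quit rate:     {sum(quit) / N:.0%}")
print(f"responders:         {len(responders)} of {N}")
print(f"observed quit rate: {observed_rate:.0%}")
```

With roughly 95% of the cohort lost, the observed quit rate among responders is several times the true rate, which is exactly why such a result should not be generalized.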

This criticism does not apply to what Siegel's study really was.  He collected a self-selected convenience sample and did what he could with it.  He was not following a cohort of which he lost 95%.  But by claiming that this was a systematic cohort he invites the following criticism:  "Those tobacco harm reduction people draw their conclusions based on studies that have 95% loss to follow-up – what a bunch of junk science!"  I kind of expect that will happen sometime, and I think it is a real shame.

Also, I have a theory about why the response rate was quite so low.  I learned (not from the methods reported in the paper – a problem I might take up tomorrow – but from someone who participated and volunteered the information on a blog) that the subject line of the email that went out to recruit participants was "Enter for a Chance to Win a Free iPod, iPod Shuffle, or one of two $100 Amazon.com gift cards for Completing a Quick Survey".  No doubt you have already arrived at the same theory that I and the person who posted that information did:  Survey respondents were limited to people whose email system does not have spam filters.

I am going to stop there and pick this up in the THR blog and maybe in tomorrow's ep-ology, but this is enough for now, and I have to finish a report that is due in Australia tomorrow – which I think is already now over there.  Timezones confuse me, but the date line really baffles me.

1 comment:

  1. This journal obviously does not think that the study methods are all that important, as they are printed in a font size that is noticeably smaller than the rest of the article. Why don't they just require that the section heading be "Here are the methods; they aren't very important and it is not worth your time to read this"?

    In addition, the instructions for authors for the methods section say little more than to include the year of data collection, IRB approval and informed consent, and a description of the statistical methods. However, from this article, it is clear that "we analyzed the data using standard methods" suffices for a description of the statistical methods.

