23 February 2011

Unhealthful News 54 - Exercising your brain is good, microwaving it perhaps not

If you glanced at the health news today, you undoubtedly learned of a new study that found, based on brain scans, that talking on a mobile phone has some effect on the brain, though it is not known whether that effect is unhealthy (here is a version with still images of what the scans look like, which is of course utterly meaningless to the reader, but the colors are pretty).  There has long been speculation about whether the radiation (i.e., signals) from phones, transmitted from a point close to the brain when the phone is held to the ear, might cause cancer or some other disease.  The new study found increased brain activity at the point nearest the transmitting phone.  I am not going to take on the subject as a whole, but I thought I would point out some specific observations that struck me about the stories.

First, it was remarkable how many stories observed that the radiation from cell phones is non-ionizing (that is, it cannot break molecular bonds, which is what makes some radiation carcinogenic) but did not mention that the frequency of the radiation is in the microwave range.  You might recognize that term as describing something that makes water molecules hotter, which could alter the brain through a minor heating effect (as could the direct thermal effect of the waste heat from the phone pressed against the head, or sunlight, or just being warm).  I am not saying I believe there is some effect from this – I have almost no idea about the biophysics here – but it was very odd that no report bothered to tell us whether this heating was the likely explanation for the observed results, was probably not the explanation for some reason, or whether the experts simply have no idea.
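
To put rough numbers on the "non-ionizing" point (my own back-of-envelope arithmetic, not anything from the news coverage): a photon at mobile phone frequencies carries roughly a millionth of the energy needed to break a chemical bond, which is why it cannot do the kind of damage ionizing radiation does, leaving heating as the mechanism to wonder about.

```python
# Back-of-envelope arithmetic (my illustrative numbers, not from the
# stories): a ~1.9 GHz phone-band photon versus a chemical bond.
h = 6.626e-34    # Planck's constant, J*s
f = 1.9e9        # a typical mobile phone frequency, Hz (assumed)
eV = 1.602e-19   # joules per electron-volt

photon_energy = h * f / eV   # energy of one photon, in eV
bond_energy = 3.6            # rough carbon-carbon bond energy, eV

print(f"photon energy: {photon_energy:.1e} eV")                      # ~7.9e-06 eV
print(f"fraction of a C-C bond: {photon_energy / bond_energy:.1e}")  # ~2e-06
```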

Also, I noticed that a lot of the stories placed great stock in the fact that the observed metabolic change was "highly statistically significant".  This seemed intended to make the reader believe the change was of important magnitude, even though the change itself, a 7% increase in activity, seemed modest (though I have no idea whether that is truly small in context).  But all that "statistically significant" means is that the observed result was unlikely to occur by chance alone; here, even though the effect seems small, either random fluctuations in metabolism are rare enough, or the experiment was repeated enough times, that a clear signal stood out above the noise.  It does not mean that the result matters or is even impressive, though presumably that is what the news reader is supposed to be tricked into believing.  (Also, as a more technical point, the phrase "highly statistically significant" is nonsense and indicates a lack of understanding of statistics on the part of the researchers.  Statistical significance is, by construction, a "yes or no" proposition; there are no degrees of "yes", nor is there an "almost" category.  There are other related statistics that have magnitude, but statistical significance does not.)  Note: I wrote more about the technical meaning of statistical significance in UN16.
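
A quick sketch of why "statistically significant" says nothing about magnitude (invented numbers, not the study's data or analysis): the very same modest 7% effect goes from "not significant" to a tiny p-value simply because more measurements are collected.

```python
# Illustration with invented numbers: a fixed 7% effect becomes
# "significant" once the sample is large enough.  Significance
# measures detectability above noise, not importance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
baseline = 100.0           # metabolic activity, arbitrary units (assumed)
effect = 0.07 * baseline   # the modest 7% increase
noise = 10.0               # measurement noise (assumed)

for n in (5, 20, 200):     # measurements per group
    control = rng.normal(baseline, noise, n)
    exposed = rng.normal(baseline + effect, noise, n)
    t, p = stats.ttest_ind(exposed, control)
    print(f"n = {n:3d}   p = {p:.4f}")
# The effect size never changes; only our ability to separate it
# from the noise does.
```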

On a disappointing related note, one of my all-time favorite news clippings for teaching was from sometime in the 1990s, when an early epidemiologic study reported no statistically significant increase in brain cancer among mobile phone users.  But, the story reported, when researchers looked individually at each of the 20 different brain cancers studied, they did find a statistically significant result for one of them, which was portrayed as worrisome.  The beauty of this, if you do not recognize it, is that the concept of "statistically significant at the .05 level" (which is what is usually meant by "statistically significant") is often explained by saying that if you repeated a study multiple times and there was really no correlation between the exposure and the outcome, you would get a statistically significant result due to bad luck only 5% of the time (1 time in 20).  Thus, we would expect to see 1 out of the 20 different cancers show up as statistically significant.
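
To see that arithmetic in action, here is a small simulation (my sketch, not anything from the study, and it assumes the 20 cancer types are tested independently).  It relies on the fact that when the null hypothesis is true, a p-value from a continuous test is uniformly distributed, so each simulated "study" is just 20 uniform draws.

```python
# Simulate many 20-outcome "studies" in which NO real effect exists.
import numpy as np

rng = np.random.default_rng(1)
studies, tests = 100_000, 20

p = rng.uniform(size=(studies, tests))   # null p-values are uniform
hits = (p < 0.05).sum(axis=1)            # false positives per study

print(f"average false positives per study: {hits.mean():.2f}")  # ~ 20 * 0.05 = 1.0
print(f"chance of at least one: {(hits >= 1).mean():.3f}")      # ~ 1 - 0.95**20 = 0.64
```

The expected count is indeed 1 in 20, though the chance of at least one false positive is about 64% rather than a sure thing, which is part of why the simple "1 in 20" story is only an approximation.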

That explanation is not actually quite correct, but it works in spirit, fitting the usual simplification story, so the fact that there were exactly 20 different brain cancers examined made it such a great example, kind of an inside joke for students learning this material.  Unfortunately, this was back in the days before digital copies of everything, and I apparently lost every copy of it.  I thought I had found it again a couple of days ago in an old file, pulled it out, and thought that everything had come together perfectly when the stories about the new study ran today.  Alas, the clipping I found was a far less interesting random story about the same topic from about the same era.  My perfect example remains lost.

So as not to finish on that note of minor tragedy, one last observation about the news stories.  One story caught my eye because its lead included the promise to explain how "Many variables have prevented scientists from getting good epidemiological evidence about the potential health risks of cell phones."  That sounded interesting, since only the epidemiology can tell us whether there is any actual health problem, and so far it has not supported the fears that there is.  However, it is far from definitive.  After all, with an exposure this common, a tiny increase in probability among those exposed could still be a lot of cases, and with brain problems – not restricted to cancer – being as complicated as they are, figuring out what to look for is not easy.  So it was disappointing that the article only included the above sentence and, "Radiation levels also change depending on the phone type, the distance to the nearest cell phone tower and the number of people using phones in the same area."

The claim was that so much heterogeneity of exposure prevents us from getting good epidemiologic evidence.  But it is actually in cases of great heterogeneity that observational epidemiology of the real-world variety of exposures is particularly important.  The experiment reported today, like most experiments, looked at only one very specific exposure (and, in fact, one that was not very realistic), but it served as a "proof of concept" – a demonstration that the phones can have some effect.  Other experiments or narrow studies might have missed this effect if they had looked at a different very specific exposure.  Epidemiology that measures any health problems associated with a varying collection of different but closely related exposures (e.g., all mobile phone use) can provide a proof of concept that does not run so much risk of missing the effect.  With a study of the right type and sufficient quality, observational epidemiology can show whether at least some variations on the exposure are causing a problem, even if not all of them are.  The same data can then be mined to suggest which specific exposures seem to be most strongly associated.

Oh, and just for the record, I try to use a plug-in earphone/microphone when I have a long conversation on a mobile phone.  I would not be surprised if no important health risk is ever found, and it seems that any risk must be small or we would have noticed it already.  On the other hand, why be part of the experiment if you do not have to?  Besides, I just do not like the feeling of the side of my head heating up.
