A recent widely reported study claimed to show that the risk of autism among second children is dramatically higher when the interval since the birth of the first child is shorter. The result itself might be perfectly accurate. (However: some of the numbers reported in the news articles did not seem to add up. The outcome, autism, is sufficiently rare that the entire result could be explained by a minuscule increase in the probability of diagnosing this notoriously difficult-to-define condition — a probability that undoubtedly varies among people and may be associated with having another very young child, or with being the type of family that has two children very close together. And it was published in the journal Pediatrics, which does not necessarily mean it is wrong, but it is certainly not a good sign.) But what was clearly misleading was the comparison with the claims about autism and vaccines.
Coming out the same week as a new chapter in the vaccines controversy, reporters and even the author of the birth-timing study could not resist making comparisons. Most of them took the form "unlike that junk science about vaccines, this is good research that helps us better understand the incidence of autism." But the vaccine hypothesis, in spite of being so disdained that for me to even mention it here without a denouncement probably invites attack, had the great advantage that it could explain what people really want to know: Why has autism incidence apparently increased by an order of magnitude, from incredibly rare to still rare but sufficiently common that you probably know someone with an autistic child? A particular vaccine formulation could cause cases of a disease, and thus changing the formulation could cause a change in incidence rate. (For those who may not read very carefully: "could cause" does not mean "did cause" or "would cause"; the point I am making is mathematical, not empirical.) A higher risk from shorter between-birth intervals could not cause a major change in the rate unless there were a huge surge in short-interval births, which there has not been.
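The mathematical point is simple arithmetic, and a sketch makes it concrete. The numbers below are purely hypothetical, chosen for illustration — they are not estimates from the study or from any surveillance data. The point is structural: if neither the prevalence of the risk factor nor its effect changes over time, the overall incidence cannot change either.

```python
# Hypothetical illustrative numbers only (not real estimates).
baseline_risk = 0.001          # assumed risk of diagnosis with a long birth interval
relative_risk = 3.0            # assumed elevated relative risk for short intervals
short_interval_share = 0.10    # assumed share of second births with short intervals

def population_incidence(share_short):
    """Overall incidence among second children, a weighted mix of the two groups."""
    return (share_short * relative_risk * baseline_risk
            + (1 - share_short) * baseline_risk)

# If the share of short-interval births is the same in two eras,
# a fixed risk factor produces the same overall rate in both:
incidence_then = population_incidence(short_interval_share)
incidence_now = population_incidence(short_interval_share)
assert incidence_then == incidence_now  # no change in rate

# Only a change in the exposure (a surge in short-interval births)
# or in its effect could move the overall rate:
incidence_after_surge = population_incidence(0.30)
assert incidence_after_surge > incidence_now
```

Swapping in any other values for the risks and shares changes the magnitudes but not the conclusion: a stable risk factor, however real, cannot by itself explain a rising rate.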
Thus, whatever the merits of the new study, it could not fill the same niche as the vaccine hypothesis: explaining the change in incidence. As I noted a few days ago, other sciences recognize the importance of looking at the causes of changes in rates of outcomes in their own right, rather than trying to understand changes based only on studies of the causes of the outcomes themselves. Health scientists and health reporters are generally rather bad at understanding the distinction. (Random speculation about the cause: most science education includes a lot of math, so thinking about derivatives of values becomes second nature. Clinical health, public health, and even almost all epidemiology education include only cursory attention to the math.) Failure to recognize the difference between causes of events and causes of changes in the rate of events leads to such absurdities as looking for explanations for changes in disease rates within our biology (genetics and such). Social and economic factors and environmental exposures can change quite a lot and quite rapidly, while our biology is basically a constant.
On an unrelated note (though it shares the ad journalem snarkiness), last week I pointed out that the phrase "health promotion" (in a job title or, in that case, a journal title) is a red flag that you are likely dealing with the widely disdained anti-liberal (and very often junk-science-producing) political wing of public health, the one that creates the unfortunate impression that "public health" means "campaigns to tell people how they should live their lives". We happened to run across another article from the American Journal of Health Promotion, which reported a study of the effects of Florida cutting back on anti-tobacco advertising because of budget cuts. The study reported that after the advertising was cut, teens there saw fewer anti-tobacco ads on average.
It is the perfect health promotion study: how would we have known, without this study, that cutting back on advertising leads to people seeing less advertising? Presumably everyone saw fewer ads, but by focusing on teens, the authors were able to work in a "won't somebody please think of the children!" angle. Apparently no attempt was made to assess whether any of this actually matters for health, though. Perhaps that is just because pursuing such useful information would be much more difficult, but the more likely explanation is that "health promotion" types tend to care more about the "promotion" side of things, where they make their living, than about the actual "health" side. The authors make clear that they consider the reduction in advertising to be a bad thing in itself, without any concern for why the government thought it might be better to save the money. I wonder how many more ads could have been run for the cost of doing the study?