This is not really news, but it was in the newspaper and I thought it was too good to pass up. Krugman wrote in his blog (still free to access for a couple more days) yesterday about what is called "lead-time bias" in epidemiology. (Too bad he called it "over-diagnosis", which is actually the name of a different epidemiologic challenge, but he can be excused for not knowing our jargon.) It was an aside in a point he was making about U.S. cancer treatment not being particularly outstanding, a retort to a dumb op-ed by a U.S. politician, which you can follow in his thread if you are interested.
He wrote:
Here’s how I understand the over-diagnosis [sic] issue, in terms of an extreme example: suppose that there’s a form of cancer that kills people 7 years after it starts, and that there is in fact nothing you can do about it. Suppose that country A screens for cancer very aggressively, and always catches this cancer in year 1, while country B chooses to invest its medical resources differently, and never catches the cancer until year 4. In that case, country A will have a 100% 5-year survival rate, while country B will have a 0% 5-year survival rate — because survival is measured from the time the cancer is diagnosed. Yet treatment in country B is no worse than in country A. Real life isn’t that simple, but you get the point: a society that tests for cancer a lot may have higher survival rates simply because it tends to catch cancer early on, even if it doesn’t treat cancer any better.

This is a great one-minute lesson in the concept of lead-time bias (again, make sure to note that he got the label wrong), which I think any reader would immediately understand and never forget. Epidemiology classes sometimes spend hours trying to explain this. I get the feeling that most of the physicians and "health promotion" types never really understand it. This is why I think trained economists make the best epidemiologists (not that I am biased or anything :-).
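To put the arithmetic in one place, here is a minimal sketch of that extreme example, my own illustration using only the numbers from the quote (death at year 7, diagnosis at year 1 versus year 4, survival measured for 5 years from diagnosis):

```python
# Krugman's extreme hypothetical, using only the numbers from his quote.
DEATH_YEAR = 7  # everyone dies 7 years after onset, treated or not

def five_year_survival_rate(diagnosis_year):
    """Fraction alive 5 years after diagnosis (here: all or nothing)."""
    return 1.0 if (DEATH_YEAR - diagnosis_year) >= 5 else 0.0

print(five_year_survival_rate(diagnosis_year=1))  # country A: 1.0 -> "100% survival"
print(five_year_survival_rate(diagnosis_year=4))  # country B: 0.0 -> "0% survival"
# Same disease, same (useless) treatment; the statistic differs only because
# country A starts the survival clock 3 years earlier.
```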
By the way, the jargon "over-diagnosis", whose natural-language meaning is obviously much broader, refers to a different complication created by aggressive diagnosis: something is diagnosed only because of screening, and it never would have manifested in any bad outcomes. This tends to be about cancer specifically, because something can be officially, biologically cancer but not destined to ever cause a health problem. (Note that if something is a "false positive" then there is not really any disease after all, which is different from over-diagnosis.) Not many other conditions are officially a disease before they actually cause a problem. Unlike lead-time bias, which is a statistical complication in trying to measure survival time and thus assess treatment quality, over-diagnosis has immediate real health implications: almost all diagnosed cases of the condition get treatment, even the ones that are destined never to cause harm, since we do not know which those are. There is also a problem with the statistics and assessment of treatment, since the over-diagnosed cases all end up being chalked up as "successfully treated" when the patient emerges without health problems (unless the treatment causes them).
To make this more concrete, somewhere between 1/3 and 1/2 of the cases of breast cancer detected by screening mammography are over-diagnoses, biological cancers that never would have caused detectable harm. Note that this is separate from the false positives, where a biopsy is done to examine something that showed up on the mammogram and it turns out not to be cancer – there are about ten times as many of those. So it looks like mammography is doing a great job, since it typically gets credit for "saving" the almost-half of detected cancer victims who never would have had any problem if those cells had just been ignored. And that lead-time bias has to be dealt with too, to avoid crediting the early treatment with the extra survival time before the cases would otherwise have been noticed. Last I studied and wrote about mammography, there were efforts to account for the lead-time bias, but the over-diagnosis was conveniently ignored; things may have improved since then.
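Here is a tiny sketch of how that inflates the "successfully treated" statistic. The 1/3-to-1/2 over-diagnosis fraction is from above; the 60% survival among the genuinely dangerous cancers is a made-up placeholder, not a real estimate:

```python
# Hypothetical illustration: over-diagnosed cases cannot die of the cancer,
# so they all count as treatment successes and inflate apparent survival.
TRUE_SURVIVAL = 0.60  # placeholder survival among genuinely dangerous cancers

def apparent_survival(overdiagnosed_fraction, true_survival=TRUE_SURVIVAL):
    # Weighted average: over-diagnosed cases "survive" by definition.
    return overdiagnosed_fraction * 1.0 + (1 - overdiagnosed_fraction) * true_survival

for f in (1 / 3, 1 / 2):  # the range quoted above for screening mammography
    print(f"over-diagnosed fraction {f:.2f}: apparent survival "
          f"{apparent_survival(f):.0%}, true survival {TRUE_SURVIVAL:.0%}")
# -> at 1/3 over-diagnosis, apparent survival is 73% vs. a true 60%;
#    at 1/2, it is 80% -- without treatment helping a single extra patient.
```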
I actually wrote a longer blog today that is less theory and more topical substance (about a totally different topic: the politics of e-cigarettes). It is here.