Over the last few months I have realized that a strategy I employ for identifying good science is to look for common oversimplifications that no real expert would present (at least without noting that they are simplifications). These represent bits of advice that are better than believing the opposite and better than complete ignorance about what to believe, but I just realized what they are akin to. They are analogous to the advice we give children who are learning to cross the street or to drive – something that is a good start on what to think about in those situations, but that would make us rather uncomfortable if repeated by an experienced adult as if it were still how she thinks about the challenge.
I am thinking about such things as the advice to look left, then right, then left again (or is it the other way around?) before crossing the street. That is really good advice for a child who is just learning to cross the street and might only look in one direction if not advised otherwise. But it is neither necessary nor useful to think in such mechanistic terms once you are skilled at recognizing traffic patterns as you approach the street, taking in what you need to know subconsciously. Adults also know that it is not sufficient in some cases, such as in London or someplace like Bangalore, where my strategy is to furiously scan every bit of pavement that could fit a car, just to make sure one is not attacking from that direction. A similar bit of advice is "slow down when it is snowing", which is good advice to a new driver and remains true for an experienced driver. But it would be a mistake to interpret that as "all you have to know about driving in the snow is to slow down".
Encountering a writer or researcher who believes that a randomized trial is always more informative than other sources of information (I have written a lot about this error in the UN series, which I will not repeat now) is like walking down the street with a forty-year-old who stops you at the corner and talks himself through "look left, then right…." Yes, it is better than him walking straight into traffic, just as fixating on trial results is better than physicians saying "based on my professional experience, the answer is…." The latter is the common non-scientific nonsense that forced medical educators to hammer the value of randomized trials into the heads of physicians, so they did not get hit by cars. Or something like that – I am getting a bit lost in my metaphors. Anyway, the point is that you would conclude that your forty-year-old companion was perhaps not up to a forty-year-old's level of sophistication in his dealings with the world.
Other such errors peg the author at a different point in the spectrum of understanding. Yesterday I pointed out that anyone who writes "we applied the Bradford Hill criteria", or some equivalent, is sending the message that they do not really understand how to think scientifically and assess whether an observed association is causal. They seem to recognize that it is necessary to think about how to interpret their results, but they just do not know how to do it. They certainly should think about much of what was on Hill's list, but if they think it can be used as a checklist, their understanding seems to be at the level of "all you need to know is to drive slower". That puts them a bit ahead of residents of the American Sunbelt, who do not seem to understand the "slower" bit and have thousands of crashes when they get two centimeters of snow. You have to start somewhere.
Perhaps if it were forty-five years ago, when Hill wrote his list, their approach would be a bit more defensible. As I put it about Hill's list of considerations about causation in one of the papers I wrote about his ideas:
"Hill's list seems to have been a useful contribution to a young science that surely needed systematic thinking, but it long since should have been relegated to part of the historical foundation, as an early rough cut."

I would like to be able to say that those who make this mistake are solidly a step above those who think that there is some rigid hierarchy of study types, with experiments at the top. However, the authors of the paper appealing to Hill's "criteria" that I discussed yesterday also wrote, "Clearly, observational studies cannot establish causation." As I have previously explained, no study can prove causation, but any useful study contributes to establishing (or denying) it to some degree. The glaringly obvious response is that observational studies of smoking and disease – those that were on everyone's mind when Hill and some of his contemporaries wrote lists of considerations – clearly established causation. (I love the "Clearly" they started that sentence with, because I know I am clearly guilty of overusing words like that. But I certainly would like to think, of course, that I obviously only use them when making an indubitably true statement.)
More generally, it is always an error to claim that there is some rigid hierarchy of information, like the claim that a meta-analysis is more informative than its component parts. As I wrote yesterday, not only are synthetic meta-analyses rather sketchy at best, but this particular one included a dubious narrowing of which results were considered. The best study type to carry out to answer a question depends on what you want to know. And assessing which already-existing source of information is most informative is more complicated still, since the optimality of the study design has to be balanced against how close it comes to the question of interest and the quality of the study apart from its design.
When authors make an oversimplification that is akin to advice we give children, it is a good clue that they do not know they are in over their heads. That is, I suspect that most people who repeat one of these errors not only do not know it is an error (obviously), but were not even of the mindset, as we all are at some point, of saying "uh oh, I have to say something about this, but it is really beyond my expertise, so I had better look up the right equation/background/whatever and try to be careful not to claim more than I can really learn by looking it up." Rather, I suspect they thought they really understood how to engage in scientific inference at a deep level, but they are actually so far from understanding that they do not even know they do not understand. It is kind of like, "what do you mean, complicated? Everyone knows how a car works; you just turn this key and it goes."
These errors are a good clue that the authors thought they understood the rest of their analysis too, but might have been just as far in over their heads there. I may not be able to recognize where else they were wrong or naive, either because I am not an expert on the subject matter or simply because they did not explain how they did the analysis, as is usually the case. But the generic sign that they know only enough to be dangerous is there. This is why I am engaging in anger-management self-therapy about these errors, telling myself "when I read things like that, I should not feel like my head is exploding with frustration yet again; rather, I should thank the authors for generously letting me know that I should not take anything they say too seriously."
If someone writes about a hierarchy of study designs or the Bradford Hill criteria, it probably means they are following a recipe from a bad introductory epidemiology textbook or teacher, perhaps the only one they ever had. It probably also means that the rest of their methods follow a simplistic recipe. That certainly does not mean that they did a bad study; the recipes exist because they are passable ways to do simple studies of simple topics, after all. But if they are trying to do something more complicated than crank out a field study, like do an analytic literature review or sort through a scientific controversy, the recipe-followers are definitely in over their heads.
These errors serve as a shibboleth, or more precisely a shibboleth failure. Anyone who makes one of those statements is volunteering that he cannot pronounce the tricky test word correctly (i.e., is not really expert in the language of scientific analysis and is just trying to fake it). We cannot count on everyone to volunteer this signal, of course, and we cannot stop them at the river and quiz them. This approach is not useful for typical health news reporting, where a reporter basically just transcribes a press release about a research study, because such reporters do not even attempt to make such analyses and so cannot make the error. But researchers and news-analysis authors (and people giving "expert witness" testimony in legal matters) volunteer information about their limited understanding often enough that we can make use of it. What is more, though a shibboleth is normally thought of as a way to recognize whether someone is "one of us", it can be used just as effectively to recognize when someone is pretending to have expertise even if you yourself do not have that expertise. You can train your ear to recognize a few correct pronunciations even if you cannot lose your own accent.