The most promising approach to persuading someone that something they strongly believe is just not so is definitely not to present contrary evidence. It has been demonstrated clearly and repeatedly that for most people (the vast majority who do not think like scientists or philosophers, including most "scientists" in health, and presumably in lots of other fields), presenting contrary evidence provokes a non-intellectual, gut-level defensive reaction that tends to just harden their belief. Expecting a rational reaction to evidence is usually a nonstarter. Sigh! But it is not much more useful to simply ask people to explain the basis of their claims -- they will just do a biased search for confirmatory evidence (or, quite likely, mere assertions by others who agree with them) and, again, become more hardened in their position. Rather, the solution is to ask them to explain the mechanism that supports their view of the world.
The abstract reads:

People often hold extreme political attitudes about complex policies. We hypothesized that people typically know less about such policies than they think they do (the illusion of explanatory depth; Rozenblit & Keil, 2002) and that polarized attitudes are enabled by simplistic causal models. We find that asking people to explain policies in detail both undermines the illusion of explanatory depth and leads to more moderate attitudes (Experiments 1 and 2). We also demonstrate that although these effects occur when people are asked to generate a mechanistic explanation, they do not occur when people are instead asked to enumerate reasons for their policy preferences (Experiment 2). Finally, we show that generating mechanistic explanations reduces donations to relevant political advocacy groups (Experiment 3). The evidence suggests that people’s mistaken sense that they understand the causal processes underlying policies contributes to polarization.

As motivating examples, the authors note that most people will express confidence that they understand how such familiar mechanisms as toilets and combination locks work, but when asked to explain the mechanism, they change their minds and recognize that they do not really understand after all. To the extent that extreme political positions often result from similar overconfidence (as the authors claim), a similar tactic can be used to show someone that his beliefs are based on overconfidence. Prompting that recognition, by asking for a mechanistic explanation, goes a long way toward lowering misplaced confidence. This then might(!) lead to a softening of malformed extreme positions (I am not so sure that the authors' conclusion that this does happen, based on their artificial experiment, is completely convincing).
In my mind, the more obvious uses of this observation relate not to policies at the big-picture level, but to specific individual claims.
An obvious application is one I always thought was a good idea (and, indeed, embedded in some of my analysis on the topic): "So you think that snus or e-cigarettes might be as harmful as smoking? Can you tell me what particular diseases you think might be caused, and at what rates, that would add up to the total risk from smoking?" Of course, someone can still retreat into a nihilistic "we just don't know, and therefore anything is possible", and those who are just generating rhetoric in support of some hidden financial or "moralizing" interest will not be persuaded because they never really cared whether it was true or not. But those who actually believe the claim is true, and care whether that is really the case, tend to rapidly realize it is absolutely implausible.
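To make concrete what that question demands, here is a back-of-envelope sketch of the accounting the believer would have to fill in: name the diseases, assign excess rates, and see whether they come anywhere near the total risk from smoking. Every number in it is a made-up placeholder for illustration, not real epidemiology.

```python
# Back-of-envelope accounting for "what diseases, at what rates, would add
# up to the total risk from smoking?"
# All numbers are ILLUSTRATIVE PLACEHOLDERS, not real epidemiology.

# Hypothetical total excess mortality from smoking, per 100,000 user-years.
SMOKING_EXCESS_DEATHS = 600  # placeholder figure

# Someone claiming snus/e-cigarettes "might be as harmful as smoking" has
# to fill in a table like this with plausible disease-specific excess rates.
claimed_excess = {
    "oral cancer": 2,        # placeholder
    "pancreatic cancer": 1,  # placeholder
    "heart disease": 5,      # placeholder
}

total = sum(claimed_excess.values())
print(f"Claimed excess deaths per 100,000 user-years: {total}")
print(f"Needed to match smoking: {SMOKING_EXCESS_DEATHS}")
print(f"Shortfall: {SMOKING_EXCESS_DEATHS - total} "
      "(the gap the claimant has to explain away)")
```

The point of the exercise is that any halfway-plausible entries in that table sum to a tiny fraction of the smoking figure, which is exactly the realization the mechanistic question is designed to trigger.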
A related example is the claim that low-risk tobacco/nicotine products are a "gateway" to smoking. Just ask someone to explain the mechanism by which a consumer who would not otherwise choose to smoke would choose to smoke after learning about a low-risk alternative and trying it. Among those who actually believe the myth and are motivated by that belief (not those reciting it to support some hidden goal), lightbulbs appear over their heads.
Of course, sometimes this step alone gets you nowhere because someone is way too far from understanding for one question to get them there. For example, if you ask someone who thinks that installing a lot of industrial wind turbines is a good idea to explain the mechanism by which benefit is created, he will probably assert that they reduce the awful pollution from coal burning and produce electricity with no emissions, and feel not the least bit less confident of his knowledge. The naive belief is simply so far away from the actual mechanism in this case that the believer does not even understand that there is an ultimate mechanistic process. The situation is less like the case of "should we impose unilateral sanctions on Iran?" (one of the questions in the Fernbach study, which lends itself to simple "how might that accomplish what you think it accomplishes" thinking) and more like "should we be fighting a war in Afghanistan?" (which is several layers away from the goals someone might support).
This still might open the door for better conversation. You could then explain that the electricity from IWTs displaces the relatively benign burning of gas, not coal, and that the manufacture and installation of IWTs, along with the extra gas burning needed to stabilize the power grid against their intermittent output, are obviously not emissions-free. But at that point you are back to relying on someone being open to hearing evidence and actually learning something, because if you try to continue the proposed tactic, it will fail: If you ask, "so how can IWTs substantially reduce coal burning or the installation of fossil fuel plants when they always need to be backed up by dispatchable [can be turned on immediately] gas-burning turbines", you are depending on their willingness to recognize the truths implicit in the question, not their mere inability to answer it.
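A toy calculation can make the displacement point concrete. This is a deliberately crude sketch with invented parameters (capacity factor, emissions intensity, backup penalty), not a real grid model; it only illustrates why backup gas burning eats into the savings the nameplate capacity seems to promise.

```python
# Toy illustration: intermittent wind plus dispatchable gas backup saves
# less emissions than the nameplate numbers suggest.
# All parameters are INVENTED for illustration, not real grid data.

DEMAND_MWH = 100.0           # fixed demand over some period
WIND_NAMEPLATE_MWH = 40.0    # what the turbines could produce at full output
WIND_CAPACITY_FACTOR = 0.30  # placeholder: fraction actually delivered
GAS_CO2_PER_MWH = 0.4        # placeholder: tonnes CO2 per MWh of gas power
BACKUP_PENALTY = 0.10        # placeholder: extra gas burned per MWh of wind,
                             # from spinning reserve and inefficient ramping

wind_delivered = WIND_NAMEPLATE_MWH * WIND_CAPACITY_FACTOR
gas_needed = DEMAND_MWH - wind_delivered + BACKUP_PENALTY * wind_delivered

baseline_emissions = DEMAND_MWH * GAS_CO2_PER_MWH   # all-gas grid
with_wind_emissions = gas_needed * GAS_CO2_PER_MWH

print(f"Wind actually delivered: {wind_delivered:.1f} MWh "
      f"(vs {WIND_NAMEPLATE_MWH:.1f} nameplate)")
print(f"Emissions saved: {baseline_emissions - with_wind_emissions:.1f} t CO2, "
      f"not the {WIND_NAMEPLATE_MWH * GAS_CO2_PER_MWH:.1f} t nameplate implies")
```

With these made-up numbers the actual saving is roughly a quarter of what the nameplate capacity suggests, and note that the comparison here is against gas, the fuel actually displaced, not coal.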
But with such caveats in mind, this is still a very promising tactic. I suppose I have always recognized that and used it, but this study is a great reminder to do it more -- and a reminder that other approaches that seem similar really are not, and seldom work.