05 January 2011

Unhealthful News 5 – Scanning pets part 2, over-concluding from proxy measures

Yesterday I started writing about a column in which the author, Western Carolina University psychology professor Hal Herzog, sought to minimize the evidence that having pets can be beneficial to your health.  Today I address a few additional lessons that analysis offers.

Herzog comes from a psychology background.  The field of population psychology research (as opposed to patient-based research, which I am more hesitant to judge) suffers from two problems that are not quite so bad in other population research.  The first is that researchers in the field still treat measures of statistical significance as measures of effect, something that even moderately qualified epidemiologists stopped doing many years ago.  I suspect I will have plenty of chances to address that point in this series, but it is not relevant to the article in question.  The second is putting far too much faith in proxy measures and artificial situations, which I address today.
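To see why the first problem is a problem, here is a quick hypothetical sketch (my own construction in Python, not drawn from Herzog's column or any study it cites): a p-value is not a measure of how big an effect is, since a trivially small difference in a huge sample comes out "statistically significant" while a substantively large difference in a small sample often does not.

```python
# Illustrative only: statistical significance is not a measure of effect size.
# All numbers here are made up for the sketch.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# A trivially small difference (0.02 SD) measured in a huge sample...
a = rng.normal(0.00, 1.0, 200_000)
b = rng.normal(0.02, 1.0, 200_000)
t, p = stats.ttest_ind(a, b)
print(f"tiny effect, n=200k per group: p = {p:.4f}")   # typically "significant"

# ...versus a substantial difference (0.5 SD) measured in a small sample.
c = rng.normal(0.0, 1.0, 20)
d = rng.normal(0.5, 1.0, 20)
t, p = stats.ttest_ind(c, d)
print(f"large effect, n=20 per group:  p = {p:.4f}")   # often not "significant"
```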

When the health news reports on a study that showed, say, that people who felt a high degree of fear of physical risk were more financially conservative, it is often based on a study following a protocol like:  Recruit a group of highly-educated, relatively wealthy 19-year-old Americans (i.e., use the convenient population, undergraduates at your university) and misinform them that they are part of a study on how blood pressure is related to the ability to do simple calculations.  Show half of them a scary scene from a slasher movie and the other half exciting but non-scary sports footage, and then let them play a gambling game (to keep up the ruse, take their blood pressure and use a game that involves simple calculations).  Observe the result that those seeing the scary movie were much less willing to wager their day's pay on a gambling game that had the odds slightly in their favor than were the others.  Draw sweeping conclusions about important everyday decisions, generalized to all people.

Ok, I made up that particular study, but it would not shock me if it had been done.  I have little doubt that if it were done, the conclusions piped out to the media would be what I suggested.  If you were assigned the task of measuring whether physical fear made people more financially conservative, and were told you had to do an experiment on a $5000 budget to find out, perhaps you would do exactly what I described.  I hope, however, you would recognize that it was an extremely rough measure, quite possibly entirely useless, and be very conservative in your conclusions.  The conclusion itself seems plausible, and I would not hesitate to guess it is true, but the experiment adds almost nothing to the general knowledge about people that already tells us it is probably true.

Seeing a dramatized homicide might be a source of fear of physical risk, but it might have just repulsed the subjects, making them want to wind up their assignment and leave quickly.  The gambling game might be working as a proxy for someone’s actual attitudes toward personal finance, but it might be little more than a way to get a good story out of the afternoon’s work, so those who did not already have a good story (“they made me watch a scene from Texas Chainsaw Massacre – have you ever seen that? it is hilarious”) might be more inclined to gamble.  And 19-year-old college students might be a good proxy for most adult decision makers, but… no, that is not even true – they are obviously a terrible proxy.

[As an aside:  I had a professor who told the story of how he picked up spending money as an undergraduate by doing as many psych studies as he could.  He said he quickly came to realize that when the experimenters told a room full of students “you are all participating in a study of X” they were always lying once, and often twice:  X was never the true purpose of the study, so it was interesting to try to guess what was.  Moreover, chances were that not everyone in the room was really a subject: one student, or even half of them, might be part of the experiment, playing a role while pretending to be subjects.  This was about 1970, but it appears that nothing has changed.  So not only are the experiments extremely artificial, but many of the participants have figured out most of the subterfuge, and are probably acting on that knowledge to some extent or just having a little fun, out of boredom if nothing else.  Thus the experiments might not even correctly measure the proxy, let alone what it proxies for.]

Bringing this back to Herzog’s column, he wrote:
This pattern of mixed results also holds true for the widely heralded notion that animals can cure various physical afflictions. For example, a study of people with chronic fatigue syndrome found that while pet owners believed that interacting with their pets relieved their symptoms, objective analysis revealed that they were just as tired, stressed, worried and unhappy as sufferers in a control group who had no pets. Similarly, a clinical trial of cancer patients undergoing radiation therapy found that interacting with therapy dogs did no more to enhance the participants’ morale than reading a book did.

Objective analysis of whether people are worried or unhappy??  “Objective” refers to something that is measured without relying on someone’s feelings and perception.  But worry and unhappiness exist only in someone’s perception.  There is no “objective”.  If someone feels them then they have them, and the only way to find out is to ask them.  If someone honestly says “I feel happier” and your “objective” measurement method says otherwise, the problem is with the measurement method.

Sure, there are objective measures of conditions that might be associated with these feelings, but those are better described as what they are (e.g., not getting out of bed) rather than what they proxy for (unhappiness).  The psychologists get so caught up in their proxy measures and fancy surveys that they seem to forget that they are not actually observing what they draw conclusions about.  Willingness to gamble $20 tells us little about someone’s overall financial responsibility.
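To make the proxy problem concrete, here is a small hypothetical simulation (again my own construction, with made-up numbers, not a model of any real study): when the lab measure captures only a sliver of the trait it is supposed to stand in for, the relationship you observe through it can badly understate the real one.

```python
# Illustrative only: a noisy proxy can badly understate a real relationship.
# The variable names and coefficients are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Suppose there is a real underlying trait ("financial conservatism")...
trait = rng.normal(0, 1, n)

# ...that genuinely drives some outcome of interest.
outcome = 0.5 * trait + rng.normal(0, 1, n)

# A one-shot lab measure (willingness to wager $20) is mostly noise plus a little trait.
proxy = 0.3 * trait + rng.normal(0, 1, n)

print("correlation of outcome with the trait itself:", round(np.corrcoef(trait, outcome)[0, 1], 2))
print("correlation of outcome with the noisy proxy: ", round(np.corrcoef(proxy, outcome)[0, 1], 2))
```

Under these made-up numbers the proxy shows only a fraction of the association that the trait itself does, and that is before worrying about whether the proxy measures the trait at all rather than something else entirely.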

Even worse is the final sentence of that quote, in which Herzog tries to suggest that pets (the lifelong close companions that readers would be inclined to believe are useful) are not useful because therapy dogs are not.  If we interpret this analysis as applying to pets, as most readers probably will, it is like saying having friends is of no value because assigning a nurse to talk to someone did not have much effect – too much faith in a convenient proxy once again.  The research undoubtedly also suffers from the Mythbusters fallacy I addressed yesterday: at best they only failed to show a particular effect for one particular implementation of the intervention, perhaps one that was not very good.  And that is to say nothing of the fact that the reported evidence says the dogs did have an effect.  So what if it was no better than reading?  Chances are that some people get more benefit from reading and some from animals, and if people could self-select (instead of being assigned one or the other) they would be better off.

Of course, psych researchers are not the only ones who put too much faith in artificial situations.  Among people who half-understand epidemiology there is a myth that randomized clinical trials always provide better information than observational studies.  Sometimes RCTs are, indeed, superior in pretty much every way.  But this is mostly for cases of choosing a particular clinical treatment for a disease.  When dealing with more complicated exposures or outcomes, and especially social or economic phenomena, the degree of artificiality necessary to be able to control the trial often causes problems that exceed the benefits of randomization.  I have written a bit about this recently, and it will reappear in this series.

Out of fairness, I should conclude this two-part analysis by pointing out that Herzog probably wrote his column to try to dissuade people from putting too much magical faith in pet therapy, and he may well be able to make a good case for that.  He erred, however, by not convincing the reader that the belief he is rebutting, that faith in Dog protects people from disease, is one that many people actually hold; maybe there is someone out there who thinks that pets can cure the proposed viral cause of chronic fatigue syndrome, and not just make sufferers feel better, but it seems more like a strawman.  He further erred by going overboard, trying to deny even the obvious psychological benefits (which may or may not help cure physical ailments).  The quote above illustrates the problem:  The substance of the paragraph is evidence about feelings and other mostly- or entirely-psychological states, but the first sentence claims that people believe pets cure the physical disease.  If he had just stuck to documenting and challenging that claim, he probably could have found some firmer ground to stand on.
