Every time the concept of the amount to spend to save one statistical life makes the news there is much confusion and kerfuffle about it. Usually this is when the government adjusts the number that is used (at least in the U.S. for the U.S. government – I assume it is similar elsewhere), as just happened in the U.S. Perhaps you have not heard of the concept of "amount to spend to save one statistical life", and that is probably because the news stories about it usually use the shorthand "value of a life". Therein lies much of the problem.
In fairness to the news reporters, that phrase is the shorthand used by those of us who teach, write about, and make use of such numbers. But that is clearly not actually an accurate descriptor of the concept, and so the reporters are not off the hook: It is the job of the press to translate our technical jargon when writing a news story.
The concept of a statistical life refers to a situation where lots of people face a small risk, such that you have no idea who might die from an exposure, but you can predict that someone will (though you might not even know who it was after they die). For example, a government might choose to save some money by leaving a known-dangerous highway interchange without an upgrade. It is possible to predict that over the next ten years, one person will die as a result of foregoing the upgrade who would not have died otherwise. You do not know which of the millions of people who use the interchange it will be.
A value for that statistical life is chosen because the government must choose a figure such that any expenditure below that to save a statistical life is deemed worthwhile and anything above it is not. I used the word "must" literally: This is not a case of deciding whether we ought to make such a choice. It is always possible to spend more to reduce risk or spend less and allow more risk, so a decision must be made, and "eliminate all risk" is not one of the options. Any government decision about spending resources to reduce risk or improve health implicitly invokes such a quantitative decision. Every decision to spend to reduce risk creates a floor (that is, it implicitly declares that it is worth at least that much to save a life) and every decision to forego spending creates a ceiling (however much it is worth, it is not worth what that would have cost). Yes, it is possible to avoid setting a common number, letting those decisions be made ad hoc, but that just leads to a lot of such numbers (or ranges) which probably are mutually contradictory. So we either have to make a rational decision to pick a common number or default into decisions that are based on some number anyway, but are not rational.
The reason that a common number is needed for rationality is easy to see. Imagine that the government decides to spend $10 million to save a statistical life on the highway but only $1 million per life for food safety. Or, even worse, imagine that range of numbers within traffic safety alone: we were willing to pay for very expensive policies to fix major intersections, but only willing to spend a tenth as much (per life saved) on signage and enforcement to protect residential neighborhoods. Obviously we could shift some resources from the first expenditure in each example to the second, and thereby save more lives, and could even do that while spending less money. This does not tell us what the number should be, but it makes it pretty clear that it should be fairly similar across different policies.
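The arithmetic behind that reallocation argument can be made concrete with a small sketch. All the budgets and costs per life below are invented for illustration, not real policy figures:

```python
# Hypothetical budgets and costs per statistical life for two programs.
# All numbers are invented for illustration.

def lives_saved(budget, cost_per_life):
    """Statistical lives saved by spending a budget at a given cost per life."""
    return budget / cost_per_life

highway_budget, highway_cost = 100e6, 10e6  # $10 million per life saved
food_budget, food_cost = 10e6, 1e6          # $1 million per life saved

before = lives_saved(highway_budget, highway_cost) + lives_saved(food_budget, food_cost)

# Shift $20 million from the expensive program to the cheap one:
shift = 20e6
after = (lives_saved(highway_budget - shift, highway_cost)
         + lives_saved(food_budget + shift, food_cost))

print(before, after)  # 20.0 38.0 -- same total spending, 18 more lives saved
```

The mutually contradictory numbers leave lives on the table: the same total budget saves nearly twice as many statistical lives once the implied values are brought into line.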
As for the choice of number, it has some grounding in empiricism, though it need not have much. In theory, those in charge of such numbers try to base them on what people are willing to spend to (statistically) save their own lives. That is, how much money does someone demand to face a 0.1% chance of dying, or how much will they spend to avoid a 0.01% chance? We look at such things as how much more someone gets paid for a dangerous job compared to one that is equally difficult but less dangerous, or how much extra people will spend for a safer car. These involve lots of tricky statistics that try to separate the premium demanded for risk, or the amount paid to avoid risk, from the other features of the job/car/etc. This does not, of course, reflect what someone would spend to save himself from a high probability of death, like 100% or even 10%; in such cases a major wealth constraint binds: most people would spend all they have, and would be willing to spend more if they had it, but most people do not have very much. The wealth constraint is not binding for smaller risks.
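At its core, backing out a value from those trade-offs is a single division: the payment accepted or offered, divided by the change in the probability of death. A minimal sketch, with made-up wage and price figures:

```python
# Back out an implied value of a statistical life (VSL) from a
# risk/money trade-off. All dollar figures are hypothetical.

def implied_vsl(payment, risk_change):
    """Payment demanded (or offered) divided by the change in death risk."""
    return payment / risk_change

# A worker demands an extra $9,000 per year to take a job with an
# added 0.1% (1-in-1,000) annual chance of death:
wage_premium_vsl = implied_vsl(9_000, 0.001)

# A buyer pays $900 extra for a car feature that removes a 0.01% risk:
car_safety_vsl = implied_vsl(900, 0.0001)

print(wage_premium_vsl, car_safety_vsl)  # both roughly $9 million
```

The tricky statistics mentioned above are all about estimating the numerator: isolating how much of a wage or price difference is really a risk premium rather than payment for some other feature.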
However, saying that the number is based on these estimates is partially a convenient fiction. Those estimates are rough and, furthermore, it is not necessary for government to make the same choice that people do for themselves. A government can choose any number it thinks appropriate, though to deviate too much from the empirical estimates of what people spend themselves would create some problems kind of like the inconsistencies described above.
The new figure is $9.1 million, up from about $5 million a few years ago. Most of us considered the old figure to be too low.
Notice how different all this is from a case where an identified person is in peril and we can expend resources to save them. There is a concept known as the "duty to rescue" that says if we know the specific person we are trying to save, the statistical calculation no longer counts – we have a moral obligation to do whatever we can. We will spend a limited amount on mine safety, but if some miners are lucky enough to survive a collapse and be trapped underground, we will spare no expense to get them out (though generally it will cost a lot less to rescue someone in that situation than the accepted values of a statistical life – even the Chilean rescue cost a small fraction of $9 million per person). It is actually hard to imagine spending $9 million to save someone. But it is possible to spend more than that per statistical life to rescue identified cancer patients, giving a treatment that is very expensive and has only a 1% chance of saving them. (That opens up the very similar question of rational restrictions on medical spending, which I will not go into here.)
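The cancer-treatment arithmetic is worth spelling out: when a treatment has only a small chance of saving a patient, its cost per statistical life saved is the treatment cost divided by that probability. The price below is an invented example figure:

```python
# Cost per statistical life saved by a long-shot treatment.
# The $150,000 price is hypothetical, chosen only for illustration.

def cost_per_statistical_life(treatment_cost, survival_probability):
    """Treatment cost divided by its probability of saving the patient."""
    return treatment_cost / survival_probability

# A $150,000 treatment with a 1% chance of saving the patient works
# out to $15 million per statistical life saved:
print(cost_per_statistical_life(150_000, 0.01))
```

On those assumed numbers, the implied figure exceeds the new $9.1 million value, which is the sense in which we already spend more than the official number on identified patients.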
The concept obviously has nothing to do with the existential question, "what is a human life really worth?" But confusing the two is what careless readers (aided by careless reporters) often seem to do. This generates no end of silly complaints about the whole concept. No one presumes to offer an official answer to the question of value. But we must provide an answer to the statistical life question. Answering the existential question with "priceless" seems to cause people who are not familiar with the material world to suggest that there be no limit on spending to save every statistical life, an idea that I trust readers of this blog will see the problems with.
That is not to say that there are not flaws in the concept. The biggest is that not all saved lives are equal, which is a fatal(!) flaw for setting a single number. At the extreme we obviously want to spend less to save frail, lonely 97-year-olds than to save healthy, productive, 32-year-old mothers of small children. Right? If you do not agree, think of it this way, which is exactly equivalent, but does not bait you into objecting: Figure out how much you would spend to save the (statistical) life of a 97-year-old. However much that is, would you not want to spend more to save 32-year-old mothers?
A partial solution is to replace "lives saved" (a rather odd concept if you think about it) with "life years saved". Better still conceptually (though almost impossible to legitimately calculate, despite implicit claims you might see to the contrary) are "quality adjusted life years". Even that is not quite right, though, because many people would probably agree that it is worth greater expenditure to save 17-year-olds than newborns, even though newborns have more life expectancy. A death of either would be tragic, obviously, but the 17-year-old is more a part of social networks and generally has greater value to more people, and, to be blunt about it, has consumed a lot more of society's resources and is on the verge of being productive. (Again, if you think that is a terrible thought, go through the exercise above, picking a number for the infant and then asking if you should not pick an even bigger number for the teen.)
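To see how the "life years saved" substitution changes the picture, divide the same per-life figure by a rough estimate of years gained. The life expectancies below are illustrative guesses, not actuarial values:

```python
# Same cost per life saved, very different cost per life year saved.
# The years-gained figures are rough illustrations only.

def cost_per_life_year(cost_per_life_saved, expected_years_gained):
    """Spread a per-life figure over the expected years of life gained."""
    return cost_per_life_saved / expected_years_gained

elderly = cost_per_life_year(9.1e6, 5)   # frail 97-year-old: ~5 years left
young = cost_per_life_year(9.1e6, 50)    # 32-year-old: ~50 years left

print(round(elderly), round(young))  # 1820000 182000
```

A single $9.1 million per-life number thus implies a tenfold difference in what we pay per life year, which is the sense in which "lives saved" hides the inequality that "life years saved" exposes.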
Another complication that gets overlooked is that the same number does not have to be used for all sources of expenditure. I simplified what I wrote above by talking about direct government spending, but the main role of the figure is to decide when a life-saving regulation is worth the resources it will cost. But there is no reason why the government might not decide to make those two numbers different. A cash-strapped government (perhaps one under attack by oligarchs who have tricked people into believing naive anti-government propaganda that demands cuts to childhood nutrition programs while cutting taxes on the rich) might decide it can only spend a few million per statistical life saved by government programs directly. But it could still demand that profit-making companies that are creating risks for people spend a lot more than that to reduce those risks.
Further differences are possible and, indeed, seem very appropriate, and they tend to sneak through though they are seldom formally proposed. Perhaps a polluter should be required to spend more to reduce the risk to innocent bystanders than a food company should to protect its customers. Perhaps an auto maker should be required to spend more to protect innocent bystanders (from pollution or hazards a vehicle creates for other drivers) than to protect the driver. In theory, of course, drivers or food buyers could choose their own level of risk, paying a bit more or less based on their own willingness to spend to save their own statistical life, but for obvious reasons this is not practical.
Another problem with the concept is that it still gives some harm away for free, as it were. That is, if a company makes a product that kills a few people, but it is allowed to do that because to reduce the risk any more would be more expensive than the guidelines call for, then the company saves the money it would have cost to save those lives, but the people at risk do not get the money (except in the sense that resources are not consumed so all of society is a bit richer – that wealth accrues to the company and its customers). This is not so bothersome when the person at risk is the customer, such as the widely reported "how strong should the roofs of cars be made" example. The customer is the one at risk from not spending more on safety features and is also the one who gets a cheaper vehicle. It is more bothersome when the hazard is air pollution and those who are put at risk get nothing in exchange for regulators only demanding so much expenditure to reduce the risk. Something more is demanded there, and it is seldom offered.
In short (and I write this to head off certain comments that this topic inevitably generates), there are limits to how much we can spend to reduce risk. There are limits to how much regulators can demand to force companies to reduce risk (if there is no limit, no one will make anything). It is good to have a number for those limits that is consistent within types of expenditure and fairly similar across types. It is good to base it on rough empirical estimates of what people spend to reduce their own risk. It is not so good to call it "value of a life", but we are kind of stuck with that.