Should we want the greatest good for the greatest number? (And, incidentally, should “we” mean a numerical majority?) The trolley problem in philosophy raises this question. I was reminded of it by an interesting article by the economist and political scientist Michael Munger, “Adam Smith Discovered (and Solved!) the Trolley Problem” (June 28, 2023), as well as by a follow-up EconTalk podcast.
The precise form of the trolley problem was formulated by the British philosopher Philippa Foot in a 1967 paper. Imagine that you see a trolley hurtling down a steep street, about to hit and kill five men working on the track. But you are standing near a switch that can divert the trolley to another track where only one man is working. None of the men sees the trolley coming. You are certain that if you throw the switch, only one person will die instead of five. Should you do so, as a utilitarian would?
If you answered yes, consider an equivalent dilemma (quoting Munger’s article):
Five people in hospital will die tomorrow if they do not receive respectively: (a) a heart transplant; (b) a liver transplant; (c) and (d) a kidney transplant; and (e) a blood transfusion of a rare blood type. There is a sixth person in hospital who, by an astonishing coincidence, is a perfect match for all five. If the chief surgeon does nothing, five people will die tonight, with no hope of living until tomorrow.
Assuming there is no legal risk (the government is run by utilitarians who want the greatest good for the greatest number and are fond of cost-benefit analysis), should the chief surgeon kill the providential donor to harvest his organs and save five lives? Faced with this question, most people would probably change their minds and reject the crude utilitarianism they adopted in the original trolley problem. Why?
Munger argues that Adam Smith formulated another version of the trolley problem in his 1759 book The Theory of Moral Sentiments and discovered the principle needed to solve it. Smith did not put it that way, but his solution rests on the distinction between intentionally killing an innocent person, which is clearly immoral, and letting them die of unrelated causes, which is not necessarily immoral. Drowning someone to kill them is immoral, but failing to save a drowning person may not be. Intentionally shooting an African child is murder; failing to give $100 to a charity that would save their life is certainly not criminal.
A more recent argument by Philippa Foot (see Chapter 5 of her book Moral Dilemmas and Other Topics in Moral Philosophy, Oxford University Press, 2002) explains that the fundamental distinction at work is “between setting off a harmful sequence of events and failing to intervene to prevent it” (this concise formulation of her full argument is taken from the chapter’s abstract). Specifically, she writes:
The question that concerns us has been dramatically posed by asking whether we are as responsible for letting people in Third World countries starve as we would be for killing them by sending them poisoned food.
With an emphasis on moral action, the basic principle is that
It is sometimes permissible to allow a certain harm to befall someone, even though it would have been wrong to bring that harm about through one’s own agency, that is, by originating or sustaining the sequence that causes the harm.
In his 2021 book Knowledge, Reality, and Value, the libertarian anarchist philosopher Michael Huemer also addresses the trolley problem and arrives at a similar solution, albeit a more nuanced one for extreme cases. His philosophical approach is “intuitionism,” as the book’s subtitle suggests: A Mostly Common Sense Guide to Philosophy. (My double review in Regulation, “A libertarian philosopher with varied ideas, reasonable and radical,” gives the flavor of this book and of his The Problem of Political Authority: An Examination of the Right to Coerce and the Duty to Obey, 2013.)
Anthony de Jasay’s condemnation of utilitarianism as a justification for (coercive) government intervention is based on the simple economic observation that there is no scientific basis for comparing utility between individuals; for example, it is absurd to claim that saving five men preserves “more utility” than is lost by killing one. Statements about utility, he writes, “are unfalsifiable, and will forever remain my word against yours.” (See my Econlib piece on his Against Politics.)
What is certain is that utilitarianism, especially “act utilitarianism” (as opposed to “rule utilitarianism”), does not work, except perhaps in the most extreme and uninteresting cases, such as “stealing $20 from Elon Musk without him noticing and transferring the money to a homeless person would create net utility,” meaning that Musk would lose less utility than the poor man would gain. While this statement seems plausible, we cannot predict the behavior of a single individual, only general categories of events: perhaps the homeless man will use the $20 to buy cheap booze, get drunk, and kill a mother and her baby, who would have been a second Beethoven. He might even be a utility monster, deriving “more utility” from the harm he causes others than the utility his victims lose. Even if the homeless man uses his $20 to buy a used copy of John Hicks’s A Theory of Economic History, the story of his “gift” could spread and lead a billion greedy people to demand the same transfer from Elon Musk. Or they could campaign for the $20 billion to be expropriated directly by the state to fund subsidies for them.
****************************************
I did my best to make DALL-E (the most recent version) represent the simplest version of Philippa Foot’s trolley problem. Despite my detailed descriptions, “he” just couldn’t get it, which is not really surprising after all. He couldn’t even represent the idea of a fork in the trolley track with five workers on one branch and one on the other. I finally asked him to draw a runaway trolley on a track with five workers in the middle of it. The images he produced were among the most surreal, as you can see in the featured image of this post. Given his poor performance, I mentally apologized to Philippa Foot (who died at age 90 in 2010) and asked DALL-E to add to the image “a dignified old woman (the philosopher Philippa Foot) deep in thought and watching the trolley arrive.” At this simpler task, the robot performed quite well.