Here’s Part 2 of Huemer’s final word on our recent Book Club.
9) On consent and my paradox for moderate deontology:
BC: How is this different from a person who foolishly refuses to consent to a vaccination, even though he admits that the benefit of the vaccine greatly exceeds the pain of the needle? As you explain in The Problem of Political Authority, we have no right to benefit him given his explicit refusal to consent.
I’m on board with the idea that it’s wrong to harm one person without his consent in order to benefit someone else. But I balk at the idea that it’s wrong to stop harming a person without his consent, in order to benefit someone else.
In particular, if someone is being unjustly tortured (which they also didn’t consent to!), I think you can just turn off the torture device. It would be weird to think that it’s wrong to stop torturing someone unless they consent to the non-torture. Similarly, if the best you can do is to turn down the torture device, you can also do that without consent.
Given that, in my example, no consent is required to reduce both people’s torture. But consent would be required (on common deontological views) to increase one person’s torture while decreasing the other’s. So that’s how you can get a case where A and B are each impermissible, but (A&B) is permissible.
10) On an immoral pool project
BC: … the mice lose their home due to your pool construction, then slowly die of starvation and exposure while they hunt for another home. … And unless you’re doing a “No True Vegan” thing, I really doubt that even many vegans would actually consider this a morally strong reason not to build a pool.
You must not know very many vegans. I think almost no one initially sympathetic to veganism would find your example persuasive. First, I think you’re underestimating how horrible factory farms are. After leaving their current home, the mice would be in a pretty normal position for wild animals. Sometimes, indeed, wild animals starve; nevertheless, factory farm life is, in general, much worse than a normal situation in the wild. At least, that’s what all or nearly all vegans believe.
We could debate how bad life in the wild is, but that’s unproductive. This argumentative strategy in general can’t help, because either you give an example that vegans will think is less bad than factory farming, or you give an example that they see as just as bad as factory farming. In the first case, the example will be irrelevant. In the second case, pretty much every vegan will have exactly the same reaction to your example that they have to factory farming.
Here’s an analogy to explain how I and other ethical vegetarians will see your arguments. Imagine arguing with a Trump-supporter about immigration. You claim to be concerned about the welfare of potential migrants. The Trumpster can’t believe that you care about foreigners, and he tries to prove to you that you don’t. So he gives thought experiments like this: “Obviously, you’d agree that it was fine to drop atomic bombs on Hiroshima and Nagasaki in World War II. But this was more harmful to those foreigners than merely denying people entry to the U.S. today.”
Why would that not be a productive line of discussion for the immigration debate? First, it’s not obvious on its face that the atomic bombing and the immigration restrictions are analogous actions. You’d have to have a lengthy debate about that. But that debate would be a time-waster because, second: if the atomic bombing was analogous, then of course you would also be opposed to the atomic bombing. There is no way that thinking about the atomic bombing will help to resolve the immigration issue.
The best I can make of it is that you (Bryan) can’t believe that anyone really cares about other species; you think everyone is like you on the fundamental level, but maybe some just got confused when thinking about a few kinds of cases. Similarly, the hypothetical Trumpster assumes everyone is like himself. Both are mistaken. Some people are in fact different from the Trumpster and the meat-eater. Some people do in fact care.
11) Does potential/species intelligence make your pain bad?
BC: Does it even slightly reduce your confidence to learn that only 10% of respondents to this survey say that I’m definitely wrong?
"The suffering of beings who will normally develop intelligence is much more morally important than the suffering of beings who will never develop intelligence, though probably not as important as the suffering of beings who are already intelligent."
— Bryan Caplan (@bryan_caplan) September 30, 2021
Does it reduce your confidence to learn that only 22% of respondents think you’re definitely right?
This survey does not significantly change my opinions. I note a few points:
- This isn’t a random sample; respondents are people who follow you on Twitter, which means they are disproportionately likely to agree with you.
- I already knew that the overwhelming majority of people are meat-eaters. That’s most likely driving their responses. Respondents may have said that potential intelligence is morally significant because they know that they’re going to have to say this in order to defend their meat-eating. They may even have actually read your earlier post. In other words, they’re rationalizing.
- The survey gives no information about why (other than the rationalization theory) 52% of respondents picked “probably”. I can’t think of any other reason why someone would pick that.
- I’m also not sure that respondents understood the sentence. It’s pretty complex. Moreover, when asked to judge the goodness or badness of something, people generally start thinking of possible instrumental reasons why the thing would be good or bad; they almost never assume you’re asking about intrinsic value. But the relevant interpretation of the quote is that a given painful experience is more intrinsically bad, all other things being equal, if the subject is a type of being that would normally become intelligent later. Besides the intrinsic/instrumental distinction, one would have to emphasize that the being in question is not in fact intelligent and never will be; then ask, with that understood, whether the intelligence of other members of its species makes this individual’s pain worse.
- Once properly understood, the proposition strikes me as absurd. To me, it’s like doing a survey about whether the shortest path between two points is a curve. If 52% of people said “Probably yes”, I would suspect that respondents didn’t understand the question or made some other error. But even if I couldn’t figure out why they said what they did, I would not give significant credence to the claim that curves are shorter than straight lines, since I can just see that that’s false.
For a closer analogy, the proposition strikes me as sort of like someone saying that painting A is more beautiful than painting B, because there are some other paintings that cost a lot of money that were painted by a member of the same race as the painter who painted A. There’s no further explanation; the person claims this is just a self-evident axiom of aesthetics. I would have ~0 credence in that.
12) On plant interests:
BC: Imagine a conversation between you and someone who believes in the rights of plants. You tell him, “Plants don’t feel pain,” and he says, “That’s an arbitrary difference. Plants are still alive. They have interests, and we shouldn’t do immense harm to their interests to slightly advance our own.” You probably consider this an obtuse position – and I agree.
There is in fact an argument in the ethics literature like that – that all living things have interests (something can be “bad for a plant”, “good for a plant”, etc.), and interests are what really matters.
I would try two lines of argument against this. One line of argument would be to compare a single cell in your body to a single-celled organism. They are intrinsically very similar. If the person agrees that the cell in your body doesn’t merit intrinsic moral consideration, then it’s plausible to generalize to single-celled organisms.
The other line of argument would start by thinking about what counts as in your interests or against your interests (for us people). One can argue plausibly that this is determined by one’s mental states (such as enjoyment/suffering, or desires), rather than by pure biological functioning. E.g., it’s good to frustrate normal biological functioning when this clearly satisfies a person’s desires, causes overall happiness, etc.
About interests: I think talk about what is “good for” plants is analogous to talk about what is good for your car (like frequent oil changes). It’s a non-moral use of “good”.
Having said that, of course if someone has sufficiently strong and different intuitions, then one can’t convince them.
13) On the Argument from Conscience:
BC: Attacks on sincerity seem more futile, but how about a direct appeal to sincerity, a la my Argument from Conscience?
(The link goes to a post about how Bill Dickens claims to be a utilitarian, and is generally highly conscientious, yet he fails to come close to maximizing utility in his actions.)
I don’t find this persuasive, even though I’m not a utilitarian. The overwhelming majority of people have strong emotional motivations apart from their ethical beliefs. There is good evidence that these emotions are the main motivators for most seemingly ethical behavior. One particularly strong source of motivation has to do with the practices and norms of one’s own society, which have a powerful effect on people’s emotions and desires. But there are probably other emotional mechanisms, possibly genetically programmed, which lead people to act in accordance with deontological rules.
With that in mind, it’s entirely plausible that Bill Dickens’ emotions and desires would mostly conform to the non-utilitarian norms of his society in most circumstances, despite his belief in utilitarianism. It is then plausible that his actions would mostly be non-utilitarian. (I say “mostly”, because I expect utilitarian beliefs to have some influence. For instance, they probably make him give more to charity than most non-utilitarians do.)
It’s not weird that Dickens’ behavior appears highly conscientious by conventional standards. That just means that he is naturally high in the sort of emotional dispositions that produce that type of behavior, especially cooperative and respectful behavior.
What is the alternative hypothesis? That he’s lying – he doesn’t think utilitarianism is true, and he’s been playing a weird hoax for the past 25 years?
Or perhaps he thinks that he believes utilitarianism, yet he doesn’t believe it? Try to fill that in a little more. When he thinks about arguments for utilitarianism, what happens? Those arguments seem right to him, or they don’t? If they do, then it’s kind of bizarre that these arguments didn’t convince him of utilitarianism (which they support) but merely convinced him of the false proposition that he believes utilitarianism (which they don’t support). If the arguments don’t seem right to him, then it’s kind of weird that he would affirm them, and that he would think he agrees with them.
Another hypothesis might be that there are two different kinds of beliefs, or belief-like states. Maybe he consciously believes utilitarianism, but unconsciously believes deontology? (The reverse wouldn’t make sense.) But then, how would this give you an argument against utilitarianism? Suppose Dickens has an unconscious belief that conflicts with his conscious belief. What’s the argument that the unconscious belief is more likely to be true? Surely in most such cases, the conscious belief is more likely to be correct.
. . .
I’m going to conclude with this YouTube video link. This is a five-year-old child who recently learned that meat is made from animals: https://www.youtube.com/watch?v=5Npv2Mpbd3w
I include this because it is extremely difficult to doubt this child’s sincerity. She hasn’t had time to come up with the sorts of rationalizations that adults come up with, nor is she repeating propaganda from other people. It’s just the natural reaction of an innocent, compassionate person.