By Alfred R. Mele

On Self-Deception Unmasked (Princeton and Oxford: Princeton UP, 2001)

Alfred R. Mele, Florida State University

I am grateful to my three friendly commentators for presentations that are bound to promote lively discussion. In the interest of leaving ample time for that, I will keep my reply brief. I will proceed in reverse order and start with Risto Hilpinen's comments. Incidentally, I agree with Risto that, in recent work on self-deception, relatively little attention has been paid to major historical literature on the topic, and it may be fruitful for people to look more closely at this literature.

Risto asked about the confirmation bias, something I got mileage out of in the book. There is considerable evidence that it is a very common phenomenon. Here is a simple example. In one set of experiments, two different groups of people are asked to examine the same set of photographs of facial expressions. One group is asked to test the hypothesis, “Are these happy people?” The other group is asked to test the hypothesis, “Are these faces angry?” Most of the people asked the first question say “yes.” Most of the people asked the second question also say “yes.” Why is that? It turns out that, much more often than not, people testing a hypothesis are much more sensitive and receptive to confirming data than to disconfirming data. In Self-Deception Unmasked, I review empirical evidence for the confirmation bias. I also argue that the bias can be triggered and sustained by desires—for example, the desire that one’s son is not using drugs or that one’s spouse is faithful—and that this helps to explain how it is that we sometimes believe what we would like to be true when we have stronger evidence that it is false.

If Risto’s story about Adam and Eve is a story about self-deception, it describes an extreme instance of the phenomenon. Years ago, after I described a more typical case of self-deception about spousal infidelity, a student asked what sort of evidence of this kind of behavior would render self-deception about it impossible. My first thought was that catching one’s spouse in the act would do the trick, but I immediately started conjuring up a story in which even this might leave room for the belief that one’s spouse is faithful and for self-deception about this. Later, I found a much better story—Isaac Bashevis Singer’s “Gimpel the Fool.” I summarize it in Self-Deception Unmasked. Here is a shorter summary.

One night, Gimpel, a gullible man, enters his house after work and sees “a man’s form” next to his wife in bed. He immediately leaves—in order to avoid creating an uproar that would wake his child, or so he says. The next day, his wife, Elka, denies everything, implying that Gimpel was dreaming. Their rabbi orders Gimpel to move out of the house, and he obeys. In time, Gimpel begins to long for his wife and child. His longing apparently motivates the following reasoning: “Since she denies it is so, maybe I was only seeing things? Hallucinations do happen. You see a figure or a mannequin or something, but when you come up closer it’s nothing, there’s not a thing there. And if that’s so, I’m doing her an injustice.” Gimpel bursts out in tears. The next morning he tells his rabbi that he was wrong about Elka.

After nearly a year’s deliberation, a council of rabbis allows Gimpel to return to his home. He is ecstatic, but wanting not to awaken his family, he walks in quietly after his evening’s work. Predictably, he sees someone in bed with Elka, a certain young apprentice, and he accidentally awakens Elka. Pretending that nothing is amiss, Elka asks Gimpel why he has been allowed to visit and then sends him out to check on the goat, giving her lover a chance to escape. When Gimpel returns from the yard, he inquires about the absent lad. “What lad?” Elka asks. Gimpel explains, and Elka insists that he was hallucinating. Elka’s brother then knocks Gimpel unconscious with a violent blow to the head. When Gimpel awakes in the morning, he confronts the apprentice, who stares at him in apparent amazement and advises him to seek a cure for his hallucinations.

Gimpel comes to believe that he has again been mistaken. He moves in with Elka and lives happily with her for twenty years, during which time she gives birth to many children. On her deathbed, Elka confesses that she has deceived Gimpel and that the children are not his. Gimpel the narrator reports: “If I had been clouted on the head with a piece of wood, it couldn’t have bewildered me more.” “Whose are they?” Gimpel asks, utterly confused. “I don’t know,” Elka replies. “There were a lot . . . but they’re not yours.” Gimpel sees the light.

This may be a case of self-deception. If it is, it is an extreme one. My point about it in the book is that even a case of self-deception as extreme as this does not require the machinery of a traditional conception of self-deception—that is, simultaneously believing that p and believing that ~p and intentionally bringing it about that one acquires the belief one favors. Singer never tells us that while Gimpel believes that Elka has had an affair he also believes—at the same time—that she has not. Nor does he describe Gimpel as intentionally bringing it about that he believes in her fidelity. And there is no need to suppose that any of this is so in order to make sense of the story.

This leads me to Crystal Thorpe’s commentary. Crystal’s thesis is that I’m right about self-deception and my opponents are wrong, and she has an argument for this thesis that I don’t advance. Her argument is that the traditional view of self-deception that I just mentioned makes self-deceived people seem much weirder than we take them to be.

I certainly won’t disagree with this. The argument offers more support for my view. But what would happen if Crystal were to give this talk to an audience of Freudians and proponents of the traditional model of self-deception? They would say that self-deception requires believing that p while also believing that ~p, intending to deceive oneself, and successfully executing that intention. And they have at least two options in responding to Crystal’s argument. They can grant that, on their model, agents who deceive themselves are indeed weird and argue that the common-sense view of self-deceived people seriously underestimates their weirdness. Alternatively, they can argue that partitioned selves or whatever mechanisms they deem to be required for successfully executing intentions to deceive oneself and for simultaneously believing that p and believing that ~p really aren’t so weird.

A promising response to Crystal’s imagined critics, it seems to me, is to argue for the thesis that there is no need to appeal to these mechanisms in explaining self-deception. And the best arguments for that thesis that I know of are mine. In any case, without an argument for this thesis Crystal runs the risk of begging the question against her opponents.

Finally, I turn to Peter Dalton’s comments. In Self-Deception Unmasked, I offer a set of sufficient conditions for a person’s entering self-deception in acquiring the belief that p. These conditions, as Peter indicates, are as follows:

  1. The belief that p which S acquires is false.
  2. S treats data relevant, or at least seemingly relevant, to the truth-value of p in a motivationally biased way.
  3. This biased treatment is a nondeviant cause of S’s acquiring the belief that p.
  4. The body of data possessed by S at the time provides greater warrant for ~p than for p.

Peter suggests that a necessary condition of being self-deceived in acquiring the belief that p is that the person not believe that he reasoned incorrectly. Does Peter have the makings of a counterexample to my claim that my four conditions are sufficient for self-deception? If so, he should be able to produce a case in which my four conditions are satisfied, and even so, the person is not self-deceived because he doesn’t satisfy Peter’s condition.

Such a case would feature a person—call him Al—who does believe that he reasoned incorrectly. More precisely, Al believes that the reasoning on the basis of which he believes that p is incorrect. Now, for obvious reasons, Al’s believing this would seem to make it hard for him to believe the conclusion of that line of reasoning—that is, p. There are two possibilities: (1) Al, who satisfies my four conditions, can believe that p even though he believes that the reasoning on which this belief is based is incorrect; (2) Al cannot believe that p in the circumstances at issue. Suppose that (1) is true. On that supposition, I challenge Peter to produce an instance of this possibility in which Al is not self-deceived in acquiring the belief that p! Given that in addition to satisfying my four conditions, Al believes that p despite believing that the reasoning on the basis of which he believes this is incorrect, we would seem to have a particularly perplexing, extreme case of self-deception on our hands, and Peter would not have produced a counterexample. So suppose instead that (2) is true. Then Peter’s necessary condition does not add anything substantive to my set of sufficient conditions. If (2) is true, no agent who satisfies my conditions fails to satisfy Peter’s condition. That is, no agent who satisfies my conditions believes that the reasoning on which his belief that p is based is incorrect.

I’d like to make one last comment about my four conditions. I claim that being self-deceived in acquiring the belief that p requires that p be false. For me, then, the falsity of p is a necessary condition for such self-deception. Also, I deny that condition (4) is a necessary condition for self-deception. I argue that some cases of self-deception importantly involve a kind of blindness to evidence that is readily available. In some such cases, the evidence one actually possesses might favor p. Consider a zealous campaign worker for a presidential candidate. People might tell her that the candidate is corrupt because he’s done x, y, and z. Instead of looking for documents that might give her evidence that he has done x, y, and z, however, the campaign worker reads campaign literature in favor of her own candidate that takes a strong positive line on his moral character. Given further details, this may be a case of self-deception, even if the evidence the campaign worker possesses favors p over ~p.

*This is an edited version of a transcription of an oral presentation to the FPA. I am grateful to the editors and their staff for their transcription.

Alfred R. Mele

Alfred R. Mele, the William H. and Lucyle T. Werkmeister Professor of Philosophy at Florida State University, is the author of Irrationality (Oxford, 1987), Springs of Action (Oxford, 1992), Autonomous Agents (Oxford, 1995), Self-Deception Unmasked (Princeton, 2001), and Motivation and Agency (Oxford, forthcoming). He is the editor of The Philosophy of Action (Oxford, 1997) and coeditor of Mental Causation (Oxford, 1993) and Handbook of Rationality (Oxford, forthcoming). His primary research interests are in philosophy of action, philosophy of mind, and metaphysics.