By Risto Hilpinen

Some Remarks on Self-Deception: Mele, Moore, and Lakatos1

Critical Commentary on Alfred Mele’s Self-Deception Unmasked (Princeton and Oxford: Princeton UP, 2001)

Risto Hilpinen, University of Miami 

Sartre and Moore on Contradicting Oneself 

Much of the recent philosophical discussion of the problems and paradoxes of self-deception or self-delusion goes back to Jean-Paul Sartre’s analysis of bad faith in Being and Nothingness: many recent papers on the subject introduce the concept of self-deception by reference to Sartre. However, people have been writing about the puzzles of self-deception for centuries. The earliest book on the subject that I found in the library of my university, Daniel Dyke’s The Mystery of Selfe-Deceiuing. Or A Discourse and Discouery of the Deceitfulnesse of Mans Heart, was published in 1614.

According to Sartre, bad faith consists essentially in lying to oneself.2 The possibility of lying depends on the ontological and epistemic duality between the deceiver and the deceived, but how can this duality be preserved if the two parties are in the same consciousness, that is, if the deceiver is trying to hide the truth from himself? This is a good conceptual puzzle for philosophers to write about. In Sartre’s formulation, the philosophical problem of self-deception seems to be the question: How is it possible to lie to oneself, that is, qua the deceiver accept a proposition, and qua the deceived party not accept the same proposition, or even accept the contradictory proposition?3 According to this model, self-deception involves the acceptance of mutually contradictory propositions, and consequently the problem of self-deception is a special case of the more general question about the possibility of having jointly inconsistent beliefs.

Makinson’s paradox of the preface4 and the lottery paradox are standard examples of inconsistent belief sets. I accept all my beliefs, but on the basis of our general fallibility, I also believe that some of my beliefs are false.5 This is a logically inconsistent system of beliefs. In the same way, in the case of a fair lottery in which just one of a large number of tickets will win a prize, it seems reasonable to believe about each ticket x that x will not win, and also to believe that one of the tickets will win.6 These examples are not examples of self-deception and seem to have very little to do with self-deception.

If self-deception is analogous to lying to another person, it involves inconsistency in a particularly acute form, namely, the acceptance of two mutually contradictory propositions and not just the acceptance of an inconsistent system of beliefs. But not all inconsistencies are instances of self-deception. Even if it were possible to believe explicitly contradictory propositions, such a situation could not be regarded as an example of self-deception unless one of the contradictory beliefs were in some way hidden from the believer. According to Harold Sackeim and Ruben Gur, this means that the individual is not aware of holding one of the beliefs; in other words, a person who is deceiving himself about p accepts (believes) both p and ~p, but does not believe that he believes that p or does not believe that he believes that ~p.7 In his new book, Self-Deception Unmasked (as well as in his earlier publications), Alfred Mele rejects this view.8 The view that self-deception must involve contradictory beliefs involves a common philosophical misconception, namely, that one can contradict oneself only by accepting contradictory propositions or at least a set of propositions which are jointly inconsistent. There are many well-known counterexamples to this view, even though they have not always been recognized as such. One such example is the paradoxical assertion (type) considered by G. E. Moore:

It is raining, but I do not believe that it is raining. 9

This assertion has the form

(AsMoore) R & not Bel(I, R).
This proposition is not self-contradictory, but by uttering it the speaker contradicts himself. The assertion of a conjunctive proposition is a conjunctive assertion. Thus

(AsM) As(R & not Bel(I, R)), where ‘AsP’ means that the speaker asserts that P, entails

(1.1) As(R)

and

(1.2) As(not Bel(I, R))

A sincere assertion is an expression of belief: by asserting a proposition p the utterer states that p and expresses the belief that p (conveys the information that she believes that p); thus the former assertion (1.1) conveys the information that I believe that R, and in the latter assertion (1.2) I assert that I do not believe that R.10 The two assertions cannot be “correct” at the same time: either the first is not sincere or the second is false. This can be regarded as an instance of “contradicting oneself.” In this sense anyone who utters a Moore sentence contradicts himself. The indefensibility (or inconsistency) of an assertive utterance of a Moore sentence can be explained without assuming that the speaker utters a contradictory proposition.11 Even though an utterer of a Moore sentence does not assert contradictory propositions, she is not “unanimous,” as Mrs. Slocombe would put it.
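The structure of the point can be compressed into a short schema in the notation used above; the labels (1.3) and (1.4) are introduced here only for summary and are not part of the original discussion:

(1.3) As(R) is sincere only if Bel(I, R).

(1.4) The content of As(not Bel(I, R)) is true only if not Bel(I, R).

Since the conjunctive assertion (AsM) yields both (1.1) and (1.2), the two component assertions cannot both be in order: if Bel(I, R) holds, assertion (1.2) is false, and if Bel(I, R) does not hold, assertion (1.1) is not sincere.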

In the same way, a person who believes that R, but thinks (believes) that she does not believe that R, does not necessarily accept contradictory propositions. (I assume here that believing that R does not entail and is not entailed by believing that one believes that R.) However, such a person could not express or articulate these beliefs without contradicting herself in the sense described above. If she were to make the conjunctive assertion that (i) she believes that R, and (ii) she believes that she does not believe that R, she would express by means of the first conjunct that she believes that she believes that R, and simultaneously state by means of (ii) that she believes that she does not believe that R. A person who is in this way mistaken or deceived about her beliefs is subject to an interesting form of self-deception: she is deceived, even though no one else is deceiving her. This conception of self-deception differs from Sackeim and Gur’s conception: according to Sackeim and Gur, a self-deceiver believes p and ~p, but does not believe that he believes (for example) p, whereas a Moorean self-deceiver simply believes that p and also believes that he does not believe that p.
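The contrast between the two conceptions can be stated schematically in the Bel-notation used above; the labels (SG) and (M) are introduced here merely for comparison and are not the author’s:

(SG) Bel(S, p) & Bel(S, ~p) & (not Bel(S, Bel(S, p)) or not Bel(S, Bel(S, ~p)))

(M) Bel(S, p) & Bel(S, not Bel(S, p))

(SG) requires a pair of contradictory first-order beliefs, one of which is hidden at the second order, whereas (M) requires only a single first-order belief together with a mistaken second-order belief about it; (M) is consistent provided that believing that p does not entail believing that one believes that p.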

Self-Deception and Motivational Bias 

Many philosophers have approached the phenomenon of self-deception by starting from certain representative examples and not from an abstract model. Such examples are not hard to come by. Here is one: The Miami Herald, a Miami newspaper, reported on October 26, 2001, that Senator Bob Graham had stated:

I am confident that we will eventually achieve our objective of not only taking down bin Laden but other global terrorists, because it is too important for us not to be successful. 12

According to a newspaper report published on October 25, Defense Secretary Donald Rumsfeld had expressed a less optimistic view. He had said: “I just don’t know whether we will be successful” at tracking bin Laden down.13 In view of Secretary Rumsfeld’s observation, Senator Graham’s statement may look like an instance of self-deception, an expression of a self-deceptive belief. However, this depends on how the statement is understood. It can be taken to mean that Senator Graham’s confidence (and belief) that bin Laden will eventually be caught is directly caused by (and dependent on) his view that it is important to be able to do it. If the statement is understood in this way, it can be regarded as an example of self-deception. Alternatively, the statement can be taken to mean that the importance of catching the terrorists is evidence that they will be caught: the importance of the goal is evidence that the attempt to catch the malefactors will not be given up until they have been caught. According to this (more charitable) interpretation, the statement does not involve any self-deception.

One of the standard examples discussed in the philosophical literature is the following. A husband (let us call him Adam) believes that his wife Eve has always been faithful to him despite strong evidence to the contrary. For example, let us assume the couple has never had any sexual relations (because the husband is impotent or for some religious reason), but Eve is about five months pregnant. When Adam questions Eve about the situation, she assures him that she has been completely faithful and that her pregnancy is a miracle. The fact that the scarcity of hotel rooms forced her to share a room with a male co-worker during a business trip about five months ago has nothing to do with the matter. Adam accepts this explanation and believes her. This example fits Alfred Mele’s analysis of self-deception (given the obvious assumption that Eve has in fact been unfaithful). According to Mele, the following conditions are jointly sufficient for S’s self-deception in acquiring a belief that p14

(2.1) The belief that p which S acquires is false.

(2.2) S treats data relevant, or at least seemingly relevant, to the truth-value of p in a motivationally biased way.

(2.3) The biased treatment is a nondeviant cause of S’s acquiring the belief that p.

(2.4) The body of data possessed by S at the time provides greater warrant for ~p than for p.

Mele’s conditions concern the acquisition of self-deceptive beliefs. They can be applied to belief retention by replacing the expression “acquiring” in condition (2.3) by the expression “sustaining”: 15

(2.3’) The biased treatment is a nondeviant cause of S’s sustaining the belief that p.

Conditions (2.3) and (2.3’) are causal conditions. A cause-effect relation between two events or states involves causal dependence or a chain of causal dependence relations between them.16 In ordinary cases of self-deception, S’s belief that p is caused by and causally dependent on his biased treatment of the evidential data; thus (2.3) entails the following dependence condition:

(2.5) If S did not treat the data in an evidentially biased way, S would not believe (or would not have acquired the belief) that p.

Moreover, in ordinary cases of self-deception, for example, in the case of Adam and Eve, the evidential bias as well as the belief that p are dependent on S’s interest in the truth of p (S’s preference for p over ~p). If S did not treat the data in a biased manner, S would not be able to sustain the self-deceptive belief, and S treats the data in a biased manner because S prefers the truth of p to its falsity (in the sense of wishing p rather than ~p to be true). If Eve’s faithfulness were not important to Adam, he would not treat the evidence in a biased way and would not continue to believe that Eve is faithful despite the apparently conclusive counter-evidence. These dependence relations are expressed by the following conditionals:

(2.6) If S did not prefer p to ~p, S would not treat the evidential data in a biased way, and

(2.7) If S did not prefer p to ~p, S would not believe that p.

(2.6) expresses part of Mele’s condition that S treats the evidence in a “motivationally” biased way.

Mele’s condition (2.3) contains the poorly understood expression “nondeviant cause” often encountered in the literature on the philosophy of action; this qualification distinguishes (2.3) from the simple dependence condition (2.5). Mele gives the following example (adapted from Robert Audi) of a situation in which a false belief is caused by the believer’s evidential bias and condition (2.5) holds, but no self-deception occurs because S’s belief does not depend on his evidential bias in the right way.17 Bob is investigating an airplane crash, and hopes that it was the result of a mechanical failure rather than a terrorist attack. He consults Eva, who has usually rejected terrorist hypotheses in the past, but in this instance Eva believes (on the basis of her data) that the terrorists were at work, and is able to convince Bob that the terrorist hypothesis is true. However, the crash was not caused by any terrorists, but by a mechanical failure, and this is clearly shown by the evidence possessed by the investigators (other than Eva), and readily accessible to Bob. We can assume that without his bias in favor of the mechanical failure explanation Bob would not have limited his inquiries to Eva, and Eva’s erroneous opinion would not have led Bob to accept the false terrorist hypothesis.

For the purpose of this example, the warrant condition (2.4) must be (re)interpreted in such a way that the evidence which provides greater warrant for ~p than for p is evidence “readily accessible” or available to S rather than the evidence actually possessed (accepted or known) by S. Bob could hardly accept the terrorist hypothesis rather than the mechanical failure explanation (which he prefers) if he were fully aware of the evidence which supports the latter hypothesis. According to Mele, this is a permissible interpretation of the warrant condition. 18

This example has the following form: Assume that Bob wishes p to be false, and this leads him to search for evidence relevant to p in a biased way (to prove the falsity of p), but to his great surprise most of the evidence turns out to be favorable to p. On the basis of this evidence, Bob cannot but believe that p. Even if p turns out to be false, Bob is not subject to self-deception. However, Bob’s believing that p is dependent on his biased evidence gathering procedures: we can assume that without his bias he would have conducted the investigation into the truth of p in an impartial manner, and would have formed his opinion in accordance with the readily available evidence against p. Thus condition (2.5) can hold (together with (2.1)-(2.2) and (2.4)) also in cases in which no self-deception occurs.

According to Mele, a selective approach to gathering evidence for a proposition p owing to a desire that p can contribute to self-deception by “leading one to overlook relatively easily obtainable evidence for ~p while finding less accessible evidence for p, thereby leading [one] to believe that p.” 19 This does not hold in the present example: Bob’s approach,

an instance of selectively gathering evidence for P motivated by a desire that P—is of a kind that leads to self-deception by increasing the subjective probability of the proposition that the agent desires to be true, not by increasing the subjective probability of the negation of that proposition. 20

Bob’s attempt to find evidence for the hypothesis he wishes to be true causes him to find evidence against the hypothesis and leads him to conclude that the hypothesis is false. For this reason, Mele does not regard this example as an example of self-deception, and observes that “S enters self-deception in acquiring the belief that p if and only if p is false and S acquires the belief in ‘a suitably biased way’.” 21 The purpose of the condition of non-deviance in (2.3) is to register that the evidential bias must be “suitable” for self-deception. Bob’s belief that the crash was caused by a terrorist attack is not an instance of self-deception because Bob initially preferred the explanation by mechanical failure, and tried to find evidence supporting the latter explanation, but ended up accepting the terrorist hypothesis. Bob’s belief was not consonant with his interests or desires. This means that the dependence condition (2.7) does not hold, and it might be suggested that the “deviance” of the dependence of Bob’s belief on his evidential bias is due to the failure of (2.7). In proper cases of self-deception, an agent’s belief that p should depend not only on his evidential bias in favor of p, but also on his preference for p over ~p (and not on his preference for ~p over p). It is clear that Mele makes this assumption in his discussion of the case.

However, the dependence condition (2.7) cannot be regarded as necessary for self-deception if we accept the possibility of “twisted” instances of self-deception in which a person’s false belief that p is dependent on his desire that ~p, that is, on his preference for ~p over p. An “irrational” (i.e., unfounded) false belief that p can be caused by the fear that p rather than the desire that p; in such a case an emotion (for example, jealousy or fear) leads a person to form or retain “an intrinsically unpleasant belief against the promptings of reason.” 22 In “twisted” cases, we seem to have, instead of (2.7),

(2.8) If S did not prefer ~p to p, S would not believe that p.
If a jealous husband did not wish his wife to be faithful to him, he would not believe that his wife is unfaithful. The following simple formula covers both forms of dependence:

(2.9) S’s belief that p depends on the desirability of p (for S).

However, as the example about Bob and Eva shows, this condition is not always sufficient. As Mele observes, a conceptually satisfactory account of self-deception must say more about the “routes” to self-deception, that is, about the way in which an agent’s belief depends on his motivation and his evidential bias. 23

According to Mele’s first condition, self-deception involves a false belief. He presents conditions (2.1)-(2.4) as jointly sufficient for self-deception, but does not regard them as necessary conditions. However, he takes (2.1) to be a necessary condition of self-deception. He says that the first condition “captures a purely lexical point: a person is, by definition, deceived in believing that p only if p is false; the same is true of being self-deceived in believing that p.” 24 Mele is considering self-deception in believing something, and condition (2.1) holds for this concept by definition, but it is interesting to observe that withholding judgment (agnosticism) about some proposition can look very much like self-deception and can be motivated in the same way. For example, if Adam simply refuses to believe that Eve is unfaithful (without claiming that she is faithful), and continues to insist (against overwhelming evidence) that he has no idea whether Eve has been unfaithful or not, he seems to be engaged in a form of self-deception. Adam acts like a jury which fails to convict a defendant despite practically conclusive DNA evidence against him. (A “not guilty” verdict is not an assertion that the defendant is innocent, only that the evidence is not regarded as sufficient to prove that he is guilty.) In a situation of this kind, Adam’s self-deception need not involve the acceptance of any false belief, but consists in having an incorrect and self-deceptive attitude towards a proposition. This is self-deception in refusing to believe what should be (and perhaps is) obvious to any reasonable person.

Lakatos, Confirmation Bias, and Self-Deception 

The example about Adam and Eve reminds me of Imre Lakatos’s conception of the methodology of scientific research programs. According to Lakatos, a scientific research program has three components: (i) a “hard core” of theoretical laws, together with a “protective belt” of auxiliary hypotheses which can be used to explain away apparent counter-evidence to the theory; (ii) the negative heuristic, that is, methodological rules which prohibit the application of modus tollens to the hard core of the program; and (iii) the positive heuristic of the program which gives directions for future development and for possibly fruitful auxiliary (protective) hypotheses. 25 According to Lakatos, research programs can be either “progressive” or “degenerative.” A progressive program is capable of using its positive heuristic successfully to predict novel phenomena, whereas a degenerative program can account for anomalous phenomena only by inventing auxiliary hypotheses after such phenomena have been discovered.

In the example given above, Adam’s conviction that Eve is faithful is part of the hard core of his conception of their marriage, and he explains apparent counter-evidence by introducing auxiliary hypotheses (such as the miracle hypothesis) for its protection. The “research program” by which Adam sustains his belief seems degenerative insofar as the auxiliary hypotheses introduced for the purpose of protecting the core belief (the miracle hypothesis or, for example, the hypothesis that when Eve had her annual checkup about five months ago, she was artificially inseminated by mistake) do not lead to successful predictions, but must be protected by additional auxiliary hypotheses. From a Lakatosian perspective, self-deception means clinging to a degenerative belief revision program built around a false core hypothesis. We might say that scientists and philosophers who cling to degenerative research programs in an obsessive manner are engaged in a form of self-deception.

The Lakatosian model may also throw some light on what has sometimes been called “confirmation bias” or “verification bias,” people’s tendency to focus on evidential information that confirms their current beliefs and hypotheses and to avoid or overlook evidence which disconfirms them.26 Thus, according to the confirmation bias thesis, a methodologically unsophisticated person who is testing a hypothesis tends to focus on the confirming evidence rather than the disconfirming evidence. People are usually not good Popperians.

According to Mele, the confirmation bias contributes to the evidential bias which is one of the conceptual ingredients of self-deception:

Given the tendency that this bias [the confirmation bias] constitutes, a desire that p—for example, that one’s child is not experimenting with drugs—may, depending on one’s desires at the time and the quality of one’s evidence, promote the acquisition or retention of a biased belief that p by leading one to test the hypothesis that p, as opposed to the hypothesis that ~p, and sustaining such a test. 27

This is puzzling, because from the logical point of view there is no difference between testing p and testing ~p: any test of a hypothesis is simultaneously a test of its negation. A test of a hypothesis is an attempt to find an answer to the question whether p is true, and the following three sentences express the same question:

(i) Is p true (or false)?

(ii) Is ~p true (or false)?

(iii) Is p true or is ~p true?

It should not make any difference whether a test is described as a test of p or as a test of ~p. In the discussion and interpretation of psychological experiments, it is important to distinguish an investigator’s (a psychologist’s) theoretical language or “system language” from the language of the subjects who are being investigated. The instructions given to the subject at the beginning of an experiment belong to the latter. It is perfectly possible that in an experiment about reasoning, a subject’s interpretation of the evidential data depends on the way in which the instructions are formulated. If a mother were asked to “test” the hypothesis that her daughter is experimenting with drugs, her test procedures and conclusion might differ from those prompted by the instruction to determine whether her daughter is not experimenting with drugs. This is possible, but it would be surprising: if the mother has a tendency to deceive herself, that is, if her interpretation of the evidence depends on what she wishes to be true, she is in both situations likely to overlook evidence which would be positively relevant to the drug hypothesis. If the mother is thought to be testing the drug hypothesis, she is likely to show a “disconfirmation bias.” This is, of course, an empirical issue, to be decided by means of experiments.

The alleged confirmation bias is related to, and difficult to distinguish from, a number of other “biases of rationality” studied by psychologists.28 According to Fischhoff and Beyth-Marom,

Confirmation bias has proven to be a catch-all phrase incorporating biases in both information search and interpretations. Because of its excess and conflicting meanings the term might be retired.29

However, the term has not disappeared from the psychological literature. The clearest instances of confirmation bias can be found in situations in which a subject is looking for evidence relevant to a proposition he already believes (or thinks he knows to be true). Evans and Over have distinguished confirmation bias—the tendency to seek evidence that supports a prior belief—from belief bias, a “biased evaluation of the evidence that is encountered.” 30 It is clear that both are involved in the ordinary cases of self-deception. In rational belief revision, the evaluation and interpretation of new evidence usually depends on the believer’s prior beliefs and on their degree of (epistemic) entrenchment in her belief system.31 It might be suggested that, in cases of self-deception, the dependence of the evaluation and interpretation of new evidence on the agent’s prior beliefs depends on her interests and desires; thus self-deception seems to involve a second-order dependence relation. Another form of bias is “positivity bias,” the tendency to favor and find confirmation for hypotheses expressed in positive terms (instead of negative terms). 32 This bias seems to have been shown by the subjects in an experiment reported by Trope, Gervey and Liberman:

Subjects who tested the hypothesis that a person was angry interpreted that person’s facial expression as conveying anger, whereas subjects who tested the hypothesis that the person was happy interpreted the same expression as conveying happiness.33

“X is angry” and “X is happy” are not negations of each other, but the experimenters presumably regarded them as incompatible descriptions. The first-mentioned subjects were trying to answer the question whether a person shown to them was angry or not, whereas the subjects of the second experiment were trying to find out whether the person shown was happy or not. The results of the experiment illustrate the positivity bias rather than the belief bias or the confirmation bias.

The direction of the “confirmation bias” seems to depend on how a given test is described. Yaacov Trope and Akiva Liberman provide an answer to this puzzle: not every proposition counts as a hypothesis. The negation of a hypothesis need not be a hypothesis (of the proper kind). Trope and Liberman observe:

Any given hypothesis is usually more specific than its alternatives. A hypothesis often refers to a single possibility (e.g., the target is a lawyer), whereas the alternatives may include a large number of possibilities (e.g., the target has some other occupation).34

When Trope and Liberman refer to the “alternatives” of a given hypothesis, they seem to mean its negation, that is, the disjunction of all its alternatives (in their example, the disjunction of the occupations other than a lawyer). A hypothesis that refers to a “single possibility” is obviously more informative and has more explanatory value than its negation. For example, the hypothesis “Dr. Kafka is a professor” is a good and informative answer to a question about Dr. Kafka’s profession, but its negation is almost worthless. A specific and informative hypothesis is attractive not only if a person wishes it to be true, but also on the basis of its informational value. Both the acceptance and the rejection (the acceptance of the negation) of such a hypothesis add information to a person’s belief system, but the acceptance of the hypothesis adds more information than its rejection. This may lead to a form of “rational” confirmation bias (or positivity bias), based on the believer’s epistemic interest in having informative beliefs, and should not be confused with other forms of self-deception. A highly informative hypothesis can function in the same way as the “hard core” of a Lakatosian research program: once it has been accepted, an investigator is reluctant to give it up (let alone reject it) unless it can be replaced by an equally informative alternative. According to Lakatos, the hard core of a research program is protected by maximal “confirmation bias”: the negative heuristic of the program instructs the investigator to protect it under all circumstances by suitable auxiliary hypotheses, and never to abandon it.
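The point about informational value can be given a rough quantitative illustration, using the familiar measure of the content of a hypothesis as its improbability, cont(h) = 1 - P(h); the measure and the numbers are introduced here only as an illustration and are not part of the author’s argument. If the prior probability that Dr. Kafka is a professor is, say, 0.1, then the content of “Dr. Kafka is a professor” is 0.9, while the content of its negation is only 0.1: accepting the specific hypothesis adds far more information to one’s belief system than accepting its negation, which is one way of seeing why an informative hypothesis can attract a “rational” confirmation bias.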

Concluding Remarks 

To conclude, I would like to suggest in what sense a self-deceiver can be said to contradict himself without accepting contradictory propositions. In philosophical discussion believing something (or the belief that p) is often construed simply as an attitude towards a proposition (the acceptance of a proposition) or as having a proposition in one’s mental “belief box.” I think belief (or believing) is more complex and some puzzles of self-deception are due to this complexity. Believing something seems to involve several conceptual constituents. I would like to suggest that a full-fledged belief (for example, the belief that it is raining) involves the following:

(B1) Assent to the proposition and a disposition to assert (utter) the proposition in appropriate circumstances. The assent may be external (linguistic) assent, or merely internal, mental assent.

(B2) Disposition to act in a way that would be optimal (given the believer’s interests) if the belief were true.

(B3) A conception of what the world would be like if the belief were true, which involves knowing how to find out whether the belief is true and how to defend the belief against objections.

The first condition may be termed the assent condition (As-condition), and (B2) the action condition (Ac-condition). According to (B3), belief requires understanding: to believe that p, one has to understand what p’s being the case amounts to. This condition may be termed the evidence condition (E-condition).

In the example about Adam and Eve, the standard procedures for determining whether Eve is faithful support the conclusion that she is not: on the basis of the available information this is evident to everyone except Adam. The evidence condition makes one wonder whether Adam can “really” believe that his wife is faithful or whether he is just pretending. Nevertheless, Adam assents to the proposition that Eve has always been faithful to him, and is willing to defend this proposition against objections by constructing a protective barrier of auxiliary hypotheses around it. As to the second (action) component of belief, Adam may act as if Eve were faithful to him. Given Adam’s interest in preserving his marriage, such action may in his case be (in most situations) optimal regardless of whether she is faithful or not. But this need not be the case: Adam’s behavior towards Eve may change, even though he does not waver in his assent to the proposition that she is a good and faithful wife. Adam does not accept contradictory propositions, but he is not quite “unanimous” about Eve’s faithfulness. The incoherence of Adam’s beliefs is “hidden” at least in the sense that it does not involve the conscious assent to jointly inconsistent propositions.

Some of the examples which have been presented as evidence for the possibility of self-deception involving contradictory beliefs are based on the assumption that the presence of a belief can be detected by several indicators or criteria which can possibly conflict with each other. Mele reports Sackeim and Gur’s experiment in which subjects denied that a tape-recorded voice was their own, but various physiological responses, for example, their GSR (galvanic skin response), indicated that they recognized the voice they heard.35 The experimenters used the verbal report to determine that the subjects accepted a certain belief (viz., that the voice they heard was not their own) and regarded the behavioral indices as evidence that the subjects also held the contradictory belief.36 The latter belief was thought to be “hidden,” that is, the belief of which the subjects were unaware. The verbal criterion is the same as the assent condition (B1) above; the latter criterion is not the same as the action condition (B2) above, but a behavioral criterion of a different sort. In effect, Sackeim and Gur assume that the truth-values of belief sentences are determined by the following condition:

(BSG1) S believes that p if and only if some belief-indicator shows the presence of the belief that p.

If there are several logically independent indicators or criteria for the belief that p, it can of course happen that (BSG1) justifies contradictory belief ascriptions. But as Mele argues, this does not show that the subjects accept contradictory propositions. 37 We should conclude instead that (BSG1) is inconsistent with the construal of belief as a simple propositional attitude, expressible by “S believes that p” or “S believes that ~p.” In reality, belief is a more complex phenomenon. From the standpoint of the propositional attitude theory of belief (the view that belief is an attitude towards a proposition), the phenomena of self-deception can be regarded as theoretical anomalies.

 

Works Cited 

Audi, Robert. “Self-Deception vs. Self-Caused Deception: A Comment on Professor Mele.” Behavioral and Brain Sciences 20 (1997): 104.

Dyke, Daniel. The Mystery of Selfe-Deceiuing. Or A Discourse and Discouery of the Deceitfulnesse of Mans Heart. London: Griffin and Mab, 1614.

Evans, Jonathan St. B. T. “Bias and Rationality.” Rationality: Psychological and Philosophical Perspectives. Eds. K. I. Manktelow and D. E. Over. London and New York: Routledge, 1993. 6-30.

Evans, Jonathan St. B. T. and David E. Over. Rationality and Reasoning. East Sussex: Psychology Press, 1996.

Fischhoff, B. and R. Beyth-Marom. “Hypothesis Evaluation from a Bayesian Perspective.” Psychological Review 90 (1983): 239-260.

Gärdenfors, Peter. Knowledge in Flux: Modeling the Dynamics of Epistemic States. Cambridge, Mass.: MIT Press, 1988.

Kyburg, Henry E. Probability and the Logic of Rational Belief. Middletown, Conn.: Wesleyan UP, 1961.

Lakatos, Imre. “Falsification and the Methodology of Scientific Research Programmes.” Criticism and the Growth of Knowledge. Eds. I. Lakatos and A. Musgrave. Cambridge, U.K: Cambridge UP, 1970. 91-196.

Lewis, David. “Causation.” The Journal of Philosophy 70 (1973): 556-67.

Lewis, David. “Postscripts to ‘Causation’.” Philosophical Papers. Vol. 2. By Lewis. New York and Oxford: Oxford UP, 1986. 172-213.

Makinson, David. “The Paradox of the Preface.” Analysis 25 (1965): 205-207.

Mele, Alfred. Irrationality. New York: Oxford UP, 1987.

Mele, Alfred. Self-Deception Unmasked. Princeton and Oxford: Princeton UP, 2001.

Moore, G. E. Ethics. London: Williams & Norgate; New York: H. Holt, 1912.

Moore, G. E. “A Reply to My Critics.” The Philosophy of G. E. Moore. Ed. P. A. Schilpp. La Salle: Open Court, 1942. 535-677.

Pears, David. Motivated Irrationality. Oxford: Clarendon Press, 1984.
Ramsey, Frank P. 1929. “Knowledge.” Philosophical Papers. By Ramsey. Ed. D. H. Mellor. Cambridge: Cambridge UP, 1990. 110-111.

“Rumsfeld: Bin Laden Hard to Catch.” The Miami Herald. 26 Oct. 2001, final ed.: 1A.

Russell, Bertrand. 1912. The Problems of Philosophy. Reset ed. London and New York: Oxford UP, 1946.

Sackeim, H. and R. Gur. “Self-Deception, Self-Confrontation, and Consciousness.” Consciousness and Self-Regulation: Advances in Research and Theory. Vol. 2. Eds. G. Schwartz and D. Shapiro. New York: Plenum Press, 1978. 139-197.

Sartre, Jean Paul. 1943. Being and Nothingness. Trans. Hazel E. Barnes. New York: Philosophical Library, 1956.

Sorensen, Roy A. Blindspots. Oxford: Clarendon Press, 1988.

Trope, Y., Gervey, B. and N. Liberman. “Wishful Thinking from a Pragmatic Hypothesis-Testing Perspective.” The Mythomanias: The Nature of Deception and Self-Deception. Ed. M. Myslobodsky. Mahwah, N.J.: Lawrence Erlbaum, 1997. 105-131.

Trope, Y. and A. Liberman. “Social Hypothesis Testing: Cognitive and Motivational Mechanisms.” Social Psychology: Handbook of Basic Principles. Eds. E. Higgins and A. Kruglanski. New York: Guilford Press, 1996. 239-270.

Wood, Allen W. “Self-Deception and Bad Faith.” Perspectives on Self-Deception. Eds. B. P. McLaughlin and A. Oksenberg Rorty. Berkeley: U of California Press, 1988. 207-227.

  1. This paper is an expanded version of comments delivered at the Florida Philosophical Association 2001 meeting.
  2. Jean Paul Sartre, Being and Nothingness (1943; New York: Philosophical Library, 1956) 49.
  3. Cf. Allen W. Wood, “Self-Deception and Bad Faith,” Perspectives on Self-Deception, ed. B.P. McLaughlin and A. Oksenberg Rorty (Berkeley: U of California Press, 1988) 207.
  4. David Makinson, “The Paradox of the Preface,” Analysis 25 (1965): 205-207.
  5. Bertrand Russell, The Problems of Philosophy (1912; London and New York: Oxford UP, 1946) 131; Frank P. Ramsey, “Knowledge,” Philosophical Papers, by Ramsey, ed. D. H. Mellor (1929; Cambridge: Cambridge UP, 1990): 110-111.
  6. Henry E. Kyburg, Probability and the Logic of Rational Belief (Middletown, Conn.: Wesleyan UP, 1961) 70.
  7. H. Sackeim and R. Gur, “Self-Deception, Self-Confrontation, and Consciousness,” Consciousness and Self-Regulation: Advances in Research and Theory, eds. G. Schwartz and D. Shapiro, vol. 2 (New York: Plenum Press, 1978) 150. Cf. Alfred Mele, Self-Deception Unmasked (Princeton and Oxford: Princeton UP, 2001) 81.
  8. Mele 2001. See also Alfred Mele, Irrationality (New York: Oxford UP, 1987).
  9. Moore’s problem; see Roy A. Sorensen, Blindspots (Oxford: Clarendon Press, 1988) 1.
  10. Cf. G.E. Moore, Ethics (London: Williams & Norgate; New York: H. Holt, 1912) 125; G.E. Moore, “A Reply to My Critics,” The Philosophy of G.E. Moore, ed. P.A. Schilpp (La Salle: Open Court, 1942) 540-543.
  11. Cf. Sorensen 55-56.
  12. “Rumsfeld: Bin Laden Hard to Catch,” The Miami Herald, 26 Oct. 2001, final ed.: 1A.
  13. Rumsfeld quote from an (unidentified) article published in USA Today on 25 Oct. 2001, cited in “Rumsfeld: Bin Laden Hard to Catch,” The Miami Herald, 26 Oct., 2001, final ed.: 1A.
  14. Mele 2001, 50-51.
  15. Cf. Mele 1987, 131-132.
  16. David Lewis, “Causation,” Journal of Philosophy 70 (1973); David Lewis, “Postscripts to ‘Causation’,” Philosophical Papers, vol. 2, by Lewis (New York and Oxford: Oxford UP, 1986).
  17. Mele 2001, 122-123; Robert Audi, “Self-Deception vs. Self-Caused Deception: A Comment on Professor Mele,” Behavioral and Brain Sciences 20 (1997): 104.
  18. Mele 2001, 51-52.
  19. Mele 2001, 123.
  20. Mele 2001, 123.
  21. Mele 2001, 123.
  22. David Pears, Motivated Irrationality (Oxford: Clarendon Press, 1984) 42.
  23. Mele 2001, 123.
  24. Mele 2001, 51.
  25. Imre Lakatos, “Falsification and the Methodology of Scientific Research Programmes,” Criticism and the Growth of Knowledge, eds. I. Lakatos and A. Musgrave (Cambridge, U.K.: Cambridge UP, 1970) 132-138.
  26. Jonathan Evans, “Bias and Rationality,” Rationality: Psychological and Philosophical Perspectives, eds. K.I. Manktelow and D.E. Over (London and New York: Routledge, 1993) 15; Jonathan Evans and David E. Over, Rationality and Reasoning (East Sussex: Psychology Press, 1996) 103.
  27. Mele 2001, 32.
  28. Cf. Evans; Evans and Over, ch. 5.
  29. B. Fischhoff and R. Beyth-Marom, “Hypothesis Evaluation from a Bayesian Perspective,” Psychological Review 90 (1983): 257.
  30. Evans and Over, 109.
  31. Peter Gärdenfors, Knowledge in Flux: Modeling the Dynamics of Epistemic States (Cambridge, Mass.: MIT Press, 1988) 17-18, 86-94.
  32. Cf. Evans; Evans and Over, 106.
  33. Y. Trope, B. Gervey and N. Liberman, “Wishful Thinking from a Pragmatic Hypothesis-Testing Perspective,” The Mythomanias: The Nature of Deception and Self-Deception, ed. M. Myslobodsky (Mahwah, N.J.: Lawrence Erlbaum, 1997) 115; cf. Mele 2001, 29.
  34. Y. Trope and A. Liberman, “Social Hypothesis Testing: Cognitive and Motivational Mechanisms,” Social Psychology: Handbook of Basic Principles, eds. E. Higgins and A. Kruglanski (New York: Guilford Press, 1996) 247.
  35. Mele 2001, 82.
  36. Mele 2001, 82; Sackeim and Gur, 173.
  37. Mele 2001, 82-83.

Risto Hilpinen

Risto Hilpinen is Professor of Philosophy at the University of Miami. He has published about 100 papers and edited several books in philosophical logic, epistemology, philosophy of science, and the philosophy of C. S. Peirce.