By Jason Turner

Graduate Essay Prize Winning Paper of the 49th Annual Meeting of the Florida Philosophical Association

Jason Turner, Florida State University

The Consequence Argument has long been a staple in the defense of libertarianism, the view that free will is incompatible with causal determinism and that humans have free will. It is generally (but not universally) held that libertarianism is consistent with a certain naturalistic view of the world— that is, that (given quantum indeterminacy) libertarian free will can be accommodated without the postulation of entities or events which neither are identical to nor supervene on something physical. In this paper, I argue that libertarians who support their view with the Consequence Argument are forced to reject this naturalistic worldview, since the Consequence Argument has a sister argument, which I call the Supervenience Argument, that cannot be rejected without threatening either the Consequence Argument or the naturalistic worldview in question.

The Consequence Argument

The Consequence Argument purports to show that free will is incompatible with causal determinism, where the latter thesis is understood as the claim that the laws of nature, conjoined with any proposition accurately describing the entire state of the world at some given time, entail any other true proposition. An informal version of the argument runs as follows:

If determinism is true, then our acts are the consequences of the laws of nature and events in the remote past. But it is not up to us what went on before we were born, and neither is it up to us what the laws of nature are. Therefore, the consequences of these things (including our present acts) are not up to us.1

If the argument is sound, determinism is incompatible with free will.

This argument can be clothed in formal garb. This garb makes use of a modal operator, ‘N’, where ‘Nφ’ is to be read, ‘φ, and no one has, or ever had, any choice about whether φ’.2 I will follow Alicia Finch and Ted A. Warfield in understanding ‘someone has a choice about φ’ to mean ‘someone could have acted so as to ensure the falsity of φ.’3

The argument also makes use of three propositional symbols and one inference rule. The symbol ‘P’ stands for a proposition that expresses the state of a world at a remotely early time (before there were any human agents, say), ‘L,’ a conjunction of all the laws of nature, and ‘F,’ any true proposition. The inference rule is often called the Transfer Principle, or just Transfer:

(T) From Nφ and □(φ → ψ), deduce Nψ, where ‘□’ represents broad logical necessity.

The argument follows:

The Consequence Argument

(1) N(P & L)                                       Premise

(2) □((P & L) → F)                              Assumption of Determinism

(3) NF                                               T: 1, 2

Recall that F could be any true proposition whatsoever. Thus, if determinism is true (and if no one has, or ever had, a choice about the truth of the conjunction of the laws of nature with a proposition expressing the state of the world in the remote past), then no one has ever had a choice about anything.4


What of the first premise? It is highly intuitive that we cannot do anything to change the laws of nature—i.e., we cannot do anything that would ensure the falsity of the laws (and hence we ‘have no choice’ about them)—and it is likewise intuitive that we cannot do anything to change the past.5 It seems intuitive that, as a result, we have no choice about the conjunction of these two propositions.

We must not be too hasty. Thomas McKay and David Johnson have shown that the N-operator is not agglomerative—we cannot infer ‘N(φ & ψ)’ from ‘Nφ’ and ‘Nψ.’6 In their example, we consider an agent who does not flip a coin, but could have. In this case, ‘N(the coin does not land heads)’ is true, and ‘N(the coin does not land tails)’ is true—to falsify either of these claims, one would have to ensure that the coin land heads or tails. Yet ‘N(the coin does not land heads & the coin does not land tails)’ is false. The agent could have falsified the embedded conjunction by flipping the coin. Thus, we cannot infer ‘N(P & L)’ directly from ‘NP’ and ‘NL.’
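The structure of the counterexample can be set out schematically (a sketch; ‘H’ and ‘T’ are labels introduced here for ‘the coin lands heads’ and ‘the coin lands tails’):

```latex
% McKay/Johnson coin case: failure of N-agglomeration (sketch).
% H = `the coin lands heads'; T = `the coin lands tails'.
\[
\begin{array}{ll}
N\lnot H & \text{nothing the agent does \emph{would} ensure heads} \\
N\lnot T & \text{nothing the agent does \emph{would} ensure tails} \\
\lnot N(\lnot H \land \lnot T) & \text{flipping \emph{would} ensure heads-or-tails,} \\
 & \text{i.e., would falsify } \lnot H \land \lnot T
\end{array}
\]
```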

While this is formally correct, it may not be much of an obstacle; (1) does not seem to be a plausible candidate for rejection, even given the general invalidity of N-agglomeration. Finch and Warfield describe it this way:

[T]he core intuition [described above] motivates the acceptance of [the first] premise. This core intuition is, we maintain, the intuition that the past is fixed and beyond the power of human agents to affect in any way. P describes the state of the world at some time in the distant past (before any human agents existed). L is a conjunction of the laws of nature which, we presume, in addition to being inalterable by human agents, do not change over time. Thus the conjunction (P & L) offers a description of what might be called the “broad past”—the complete state of the world at a time in the distant past including the laws of nature. We maintain, in asserting our premise, that the broad past is fixed [in a way that justifies N(P & L)].7

Thus we need not appeal to agglomeration to justify ‘N(P & L)’ given our intuitions that ‘NP’ and ‘NL’ are true, because those intuitions directly support ‘N(P & L)’ without any formal mediation.

The Supervenience Argument

There is a view about the nature of reality, which I will tag with the over-worked name of Naturalism, which in rough form holds that everything around us eventually boils down to fundamental physics. This is not necessarily a reductionistic view (although global reductionism is one variant of Naturalism), but rather a supervenience thesis that holds that every event supervenes on the microphysical. We can distinguish two versions of the thesis—the Strong and the Weak. The former holds that the supervenience relation is one of logical supervenience—that is, any two possible worlds that differ with respect to which events occur in them differ with respect to which microphysical events occur as well. The latter requires only that the supervenience relation be nomic, so that any two possible worlds with divergent events have either divergent microphysical events or divergent laws of nature.

The Supervenience Argument is designed to show that, if Weak Naturalism is true, the class of actions about which someone has or ever had a choice is empty. As with the Consequence Argument, there is an informal version of the Supervenience Argument:

If weak naturalism is true, then our acts are the consequences of the laws of nature, events in the remote past, and the outcomes of undetermined microphysical events. But it is not up to us what went on before we were born, what the laws of nature are, or how undetermined microphysical events turn out. Therefore, the consequences of these things (including our present acts) are not up to us.

The next step is to clothe the Supervenience Argument in the same formal robes worn by the Consequence Argument. For stylistic reasons, throughout this paper italicized lower-case letters will refer to particular events (including actions) and their upper-case counterparts will refer to propositions that express the events’ occurrences. For instance, if a is an event, then ‘A’ is the proposition expressing the occurrence of a. Likewise, if there is a group of events designated as ‘the bs,’ ‘B’ will be the proposition that all of the bs occurred. (Of course, not all propositions have corresponding lower-case events—‘L,’ for instance, does not.) All events are to be understood as particular event tokens, unless otherwise indicated.

Choosy Actions

Call an event a choosy if and only if ‘A & ~NA’ is true. If there are any choosy events, there is a first one. Furthermore, this event should be an action. It seems as though the only way a non-action event could have been the first choosy event would be if some omission allowed the event to occur, and the agent had a choice about the omission. But, plausibly, even if omissions are not actions (I do not wish to commit myself either way on this issue here), there would have been some other action the agent did perform which she would not have performed had she not allowed the omission. Suppose, for instance, that Jane failed to press a button that, had she pressed it, would have kept thousands of gallons of toxic waste from being spilled in the ocean. It seems likely that there is some action she performed which she would not have performed if she had pressed the button instead. Perhaps this action is an overtly physical action, such as walking past the button instead of turning toward it. More likely, it is a mental action like deciding not to press the button. Either way, the first choosy event is an action.

There are two minor wrinkles, both of which deal with ties. First, on one popular account of action-individuation, a single bodily movement may comprise a large number of actions, and an individual may perform multiple actions simultaneously. Thus there would be no first choosy action, because the first time someone acts in a choosy manner, there will be many actions that all begin simultaneously. We may accommodate this fine-grained account of action-individuation by a little formal maneuvering. Suppose that, at a time t, an agent S’s behavior counts as multiple actions on a fine-grained account of action but as a single action on a coarse-grained account. Clearly, even for the fine-grained theorist, the actions S performed at t bear some sort of similarity to each other that they do not bear to actions performed by S at other times, or to actions performed by other agents (whether at t or not). So we can shift our discussion from that of actions to that of equivalence classes of actions (using this similarity relation), and argue that the set of equivalence classes of choosy actions must have a first member. For stylistic reasons, the coarse-grained way of speaking will be used for the balance of this paper; for my purposes, talking of equivalence classes of actions would add complication without enlightenment.

Considerations of action-individuation aside, there is still the potential for genuine ties for the first choosy action. If two choosy actions a and b are performed simultaneously, and there are no choosy actions that occur before a and b, then the class of choosy actions will not have a first element. For our purposes, however, we will be able to get by with an artificially restricted notion of ‘first.’ For instance, we can say that whichever of a or b is highest and closest from the northeast to the intersection of the international date line, the equator, and sea level, is the ‘first’ element of the class—and it is not possible that there be any ties in this competition.

Our purpose in locating a ‘first’ element of the class of choosy actions is to allow us to argue that the class of choosy actions is empty. We do this by showing that the first member of the class of choosy actions is not choosy. (The argument has the form of a reductio: suppose the class is non-empty. Then it has a first element, which is choosy. But that element is not choosy. Thus the class is empty.) In order to make the argument work, we need only an ordering with the following properties: (a) for every set S of choosy actions, S has a first element, and (b) if a occurs ‘before’ b, then b cannot be causally relevant to a. Clearly, our artificial ordering satisfies both of these properties, so it is suited to do the work we need it to. (Likewise, for those worried that general relativity will throw a spanner into the works, we can arbitrarily pick a frame of reference for our ordering to operate within without violating either of the needed conditions.)
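Given the artificial ordering, the reductio strategy can be laid out step by step (a sketch; ‘C’ is a label introduced here for the class of choosy actions):

```latex
% Reductio against the non-emptiness of the class C of choosy actions.
\[
\begin{array}{lll}
1. & C \neq \emptyset & \text{assumption, for reductio} \\
2. & C \text{ has a first element, } r & \text{1, property (a) of the ordering} \\
3. & r \text{ is choosy} & \text{2, since } r \in C \\
4. & r \text{ is not choosy} & \text{the Supervenience Argument} \\
5. & C = \emptyset & \text{1--4, reductio}
\end{array}
\]
```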

The Argument—A First Pass

If there are some free actions, there are some choosy ones; and if there are some choosy ones, there is a first choosy one. Call it r, and suppose it was performed by an agent S. For illustration, suppose that the causal theory of action is true. Then r, by virtue of being an action, will have been caused by some particular pair of desires and beliefs, which I will call db. But db probably will not encompass all of the causes of r—causal theorists seldom think that a belief/desire pair alone is nomically sufficient for an action. Other inner states of the agent, as well as external, environmental factors, etc., may figure into the causal story. So let db+ represent the sum total of what we would call the causes of r if we knew enough about r’s production.

Now, libertarians will hasten at this point to remind us that, if we accept the causal theory of action and are talking about free (or choosy) actions, the causal chain between an action and its causes had better be indeterministic. So we shall suppose it is. By the Weak Naturalistic thesis, though, this indeterminism is only going to get into the picture from the ‘ground up’—via microphysical undetermined events.

For the sake of illustration, suppose that somewhere in the causal chain between db+ and r a particle gained the property e+ but was not determined to do so. Then r was only indeterministically caused by db+. But if in is the undetermined event in question—the particle’s gaining of the property e+—then r will supervene on db+ and in. In general, there may be a large number of such undetermined events in the causal chain; we will let ‘in’ stand for the entire collection of these events.

It is clear that r will supervene (nomically) on db+ and in. In other words, it is not possible that db+ and in occur, the laws of nature remain the same, and r not occur. Recalling that DB+, IN, and R express the respective occurrences of db+, in, and r, we note the following formal equivalences of the supervenience thesis:

~◊((DB+ & IN & L) & ~R)                           Supervenience Thesis

□~((DB+ & IN & L) & ~R)                           Df. ◊

□((DB+ & IN & L) → R)                             Truth-functional Equivalence

This final version of nomic supervenience will serve as the second premise in our argument.

The first premise is that no one has, or ever had, a choice about whether DB+ & IN & L. This seems to follow from the ‘broad past’ principle appealed to with respect to the Consequence Argument. In that instance, the intuitions supporting ‘N(P & L)’ were that both ‘P’ and ‘L’ were true long before there were any humans around and that the past is fixed. Apparently, the idea is that, since ‘P & L’ was true before anyone could have done anything to falsify it, and since we cannot now do anything to falsify what has gone on before, nothing we can now do could falsify ‘P & L.’

Similar reasoning lends support to ‘N(DB+ & IN & L).’ The proposition ‘DB+ & IN & L’ is made true before r occurs, and r is the first choosy act. Thus, nobody could have done anything to falsify it at the time it was made true (since if they could have, r would not have been the first choosy act), and by the time r comes around, DB+ & IN & L is already a fixed part of the past.

Of course, one may object that DB+ & IN & L is not part of the remote past, since it occurs very soon before r. This appeal to the remoteness of the past is a red herring. It is not as though we think the recent past is only somewhat fixed, and we can change it a bit, whereas as time goes on it ‘solidifies’ until it is eventually unchangeable. Rather, the past—remote or not—cannot be changed by anything we can do now. The only reason to appeal to a ‘remote’ past in defense of the Consequence Argument is to make sure that we do not appeal to a time at which people (not necessarily we) were going around performing choosy actions. If our proposition is made true before the first choosy action, though, we are in the clear. We are now ready for the argument.

The Supervenience Argument

(1) N(DB+ & IN & L)                               Premise
(2) □((DB+ & IN & L) → R)                         Premise
(3) NR                                            T: 1, 2

Thus, r is not a choosy act; so there are no choosy acts at all. No one has, or ever had, a choice about anything.

The Argument for Trickier Cases

My claim that ‘DB+ & IN & L’ is made true entirely before r occurs may raise a few eyebrows. There is one way in which it could be false. Notice that there are two ways in which an event e may be caused only indeterministically by a cause c. On the one hand, there may be a causal chain between c and e that contains indeterministic links ‘somewhere in the middle,’ as it were. But there may instead be a causal chain from c up to but not including e in which e itself is the undetermined link. This means that there could be two possible worlds (with the same laws of nature) in which the entire causal chain strictly between c and e occurred, but e only occurs in one of them. I have assumed that the causal chain between db+ and r is of the first sort, with the indeterminism cropping up in the middle. If the link between db+ and r is of the second sort, then I face a dilemma. Either r is not one of the ins, in which case premise (2) is false, or r is one of the ins, in which case premise (1) begs the question by including ‘R’ as a hidden conjunct of a proposition bounded by the ‘N’ operator.

It is implausible that r is a microphysical event. Such events are, in general, too small to be actions. Since the occurrence or non-occurrence of r will have to supervene on something microphysical (by our Naturalistic hypothesis), and since r was undetermined by everything that went on before it, there must be some undetermined microphysical event x, concurrent with r, that r supervenes on. In other words, x is the microphysical event that ‘makes the difference’ between the occurrence and non-occurrence of r. (There may be more than one such event; call them the xs, collectively.)

Of course, the xs will not be r’s entire supervenience base. There will probably be other events that r supervenes on that were determined by events preceding r. Of these events, we will call the ones that were caused by db+ the ys, and the ones that were not, the zs.

Let the es be the collection of events that were nomically sufficient for the zs. (Some of the es themselves may have been undetermined, but this will not make any difference, since the es all occurred before r.) Also, stipulate that the xs are not among the ins. Now consider the proposition ‘DB+ & IN & L & E.’ Once again, it should be clear that nobody has, or ever had, a choice about this proposition, for it was made true by the laws of nature and events that occurred in the ‘broad past,’ before any choosy actions. Thus, N(DB+ & IN & L & E).

Likewise, it appears as though nobody has, or ever had, any choice about X, the proposition that expresses the occurrence of the xs. This is trickier, but it does seem that nobody could have done anything such that, had they done it, X would have been false. How could anyone exercise such control over the truly objective chance happenings of particle physics? What could I do, for instance, to ensure that an electron will have a certain property at a certain time, if it is objectively undetermined whether or not it will gain said property?

As far as I can see, there is nothing I (or anyone) could do that would determine the outcome of an undetermined event. What I would like to do is combine N(DB+ & IN & L & E) with NX, which would allow me to offer the following argument.

The Tricky Supervenience Argument
(1) N(DB+ & IN & L & E & X)                       Premise

(2) □((DB+ & IN & L & E & X) → (L & X & Y & Z))   Premise

(3) N(L & X & Y & Z)                              T: 1, 2

(4) □((L & X & Y & Z) → R)                        Premise (Supervenience of R)

(5) NR                                            T: 3, 4

The second premise is unproblematic: DB+ & L & IN entails Y, since the ys are caused by db+; L & E entails Z, since the es deterministically cause the zs; and L & X trivially entails L & X. The problem is that I cannot simply obtain the first premise by agglomeration, and X does not lie in the ‘broad past’ of r.
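The entailments behind the second premise can be displayed conjunct by conjunct (a sketch in the paper’s own symbols, with ‘&’ rendered as ‘∧’):

```latex
% Premise (2): each conjunct of the consequent is necessitated by
% part of the antecedent.
\[
\begin{array}{ll}
\Box\bigl((DB^{+} \land IN \land L) \rightarrow Y\bigr) & \text{the $y$s are caused by } db^{+} \\
\Box\bigl((E \land L) \rightarrow Z\bigr) & \text{the $e$s deterministically cause the $z$s} \\
\Box\bigl((L \land X) \rightarrow (L \land X)\bigr) & \text{trivial} \\[2pt]
\multicolumn{2}{l}{\therefore\; \Box\bigl((DB^{+} \land IN \land L \land E \land X) \rightarrow (L \land X \land Y \land Z)\bigr)}
\end{array}
\]
```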

Nonetheless, I claim that N(DB+ & IN & L & E & X) is true. According to Finch and Warfield,

[I]t is important to be clear that the McKay and Johnson argument [against agglomeration] shows only that the inference from Np and Nq to N(p & q) is invalid. This does not, by itself, provide any reason at all for thinking that [in the case of NP and NL] NP and NL are true, while N(P & L) is not. An inspection of the difference [between the two cases] shows that the McKay/Johnson case seems to cast no doubt on the truth of N(P & L). In the McKay/Johnson case, one has no choice about either conjunct of a conjunction but does have control over the conjunction because although there is nothing one can do that would falsify either particular conjunct there is something one can do that might falsify either conjunct and would falsify the conjunction. . . . [I]t is not at all plausible that though one cannot, for example, do anything that would falsify… the laws of nature, one might somehow do so.8

Similar remarks apply here. There is nothing one could do that even might falsify the occurrence of the xs, the truly undetermined events,9 nor is there anything one could do that even might falsify the past.10 Thus, there is nothing one could do that would falsify their conjunction. Premise (1) is vindicated and the argument follows.

Implications

Naturally, libertarians will want to reject the conclusion of the argument, for they believe in the existence of free will. The most obvious candidate for rejection is the supervenience thesis. Since this thesis follows from Weak Naturalism, a libertarian who escapes the argument by this route concedes that free will is incompatible with Weak Naturalism after all.

Some libertarians already think free will and Naturalism are incompatible; such theorists will undoubtedly welcome this argument.11 Others12 seek to locate the indeterminism needed for free will at the level of indeterminacies in nature; for them, the argument is not so welcome, for it directly concerns their theories of free will.

If rejection of Weak Naturalism is required in order to make libertarian free will work, some may feel it is not worth the effort. Weak Naturalism, for better or worse, is a widely held view. In the arena of philosophical debate, positions are evaluated both on their logical merits and their plausibility. For those who are committed to naturalism, the implausibility of the denial of Weak Naturalism is likely to outweigh whatever intuitive support the libertarian position may have. When faced with the Supervenience Argument, those initially drawn towards libertarianism by the Consequence Argument may be inclined to either find fault with that argument (and thus lean towards compatibilism) or give up on free will altogether.

The point can be put another way. Many who accept the libertarian position do so only because of the plausibility available to it via naturalistic means. Some libertarians are skeptical of attempts by their colleagues to “look for those additional factors [for grounding free will] in mysterious sources outside of the natural order or to postulate unusual forms of agency or causation.”13 If a libertarian version of free will is to be found in the natural order, though, then the Supervenience Argument must be avoided without the rejection of the Naturalistic Thesis.

What other options does the Naturalistic libertarian have? Premises (1) and (3) (of the Tricky argument) are supported by just those considerations used to invoke the first premise of the Consequence Argument. Rejecting either of these, then, licenses a rejection of the Consequence Argument. The same is true of the Transfer Principle. These two arguments, it seems, stand or fall together. Of course, the Naturalistic libertarian may not be motivated by the Consequence Argument in the first place, in which case she may reject them both and be done with it. In the absence of the Consequence Argument, however, it is not clear how she will be able to support her position against compatibilists who insist that indeterminism is not a necessary component of free will.

Objections and Replies

Naturalistic libertarians who are motivated by the Consequence Argument are in something of a bind, for they must find a way to reject the Supervenience Argument that does not in turn license a rejection of the Consequence Argument. In this section, I will consider potential objections to the Supervenience Argument and, in each case, argue that either the objection fails or that, if it is successful, a parallel objection can be used against the Consequence Argument.

  1. Suppose that the universe is such that, for every time, there is an earlier time at which someone performed a choosy action. Then a set of all choosy actions would not have a first element, but it clearly would not be empty.

This is correct, and it is the only way a non-empty, linearly-ordered class of choosy actions could fail to have a first element. The first thing to note is that this universe is not, it seems, our universe, and so we can view the Supervenience Argument as an argument to the effect that, if the Free Will Thesis is true in our universe, then our universe is not one in which Naturalism is true.

One might find this reply uncompelling on the grounds that the argument is supposed to show the conceptual incompatibility of Naturalism and the Free Will Thesis. This demand is too strong, though, for if the possibility of the sort of universe in question undermines the Supervenience Argument, it also undermines the Consequence Argument in exactly the same way. In a universe with an infinite backwards cascade of choosy actions, there is no time t such that a proposition expressing the state of affairs of that universe at t is one about which no one has or ever had a choice—i.e., there is no P such that NP. Thus, this objection faces a dilemma. Either the existence of (non-actual) Naturalistic universes with infinite backwards cascades of choosy actions does not pose a problem for the Supervenience Argument, or if it does, similar deterministic universes pose a problem for the Consequence Argument.

  2. Your argument begins from the premise that some beliefs and desires (or their realizers) caused r and proceeds from there. Actions, though, are not caused by beliefs and desires, in which case they do not nomically supervene on beliefs and desires (even in part), in which case you are not entitled to your supervenience premise.

I mention this objection mainly for completeness. If the objection is just about what causes the action, we can easily replace beliefs and desires (or their realizers) with whatever one thinks did cause the action. If the objection is that actions are not caused at all, there is still no threat, for if the objection holds that the action does not even nomically supervene on some physical base, it will have rejected the very assumption the argument was designed to bring into tension with free will. Since the action will supervene on the microphysical, we can (in the extreme case) replace db+, in, and e with rlc, the contents of r’s reverse light cone. Then r supervenes nomically on rlc and the argument proceeds as before.

A last-ditch effort to save this objection might appeal to backwards causation, or, at least, backwards supervenience. That is, r might supervene on future microphysical events. Arguing in this manner puts the Consequence Argument at risk, though, for if r supervenes on future events, why not think that some event whose occurrence is recorded in P is one which supervenes on some future act? In this case, the future act could be one which, if it had not occurred, P would have been false, which would cast doubt on NP (and, hence, N(P & L)).

  3. Pick some action, a, which occurred before r. Then a is clearly something the agent could have done, since it is something the agent did do. The xs are undetermined, so X might have been false. Then a is an action that might have falsified X. Furthermore, the ins are undetermined, so a might have falsified them as well. Thus, you are not entitled to the first premise of the ‘tricky’ version of your argument, because there is an action, a, which the agent could have done and which might have falsified either of the two conjuncts of which that premise is a conjunction.

There is an ambiguity in the expression ‘event e falsified φ,’ which will be dealt with in a later objection. I will set this aside here and simply suppose that it is correct to say that a is an action which our agent could have done (because our agent did do it) and which might have falsified X. Does it follow that my defense of the first premise of the ‘tricky’ argument, N(DB+ & IN & L & E & X), is flawed?

No. My defense was that there is no action which one could perform that might falsify DB+ & IN & L & E, that might falsify X, and that would, by virtue of this fact, falsify DB+ & IN & L & E & X. The fact that one could act in a way that might falsify φ and might falsify ψ is not sufficient, in the face of Nφ and Nψ, to demonstrate that this act would falsify φ & ψ. Suppose Herbert rolls a die; his die rolling might falsify ‘the die does not land one’ and ‘the die does not land six,’ but it is simply wrong to say it would falsify ‘the die does not land one and the die does not land six.’

The defense of the tricky supervenience premise is more like this die-rolling case than McKay and Johnson’s coin-tossing case. The easiest way to see this is by noting that our agent did a, but DB+ & IN & L & E & X wasn’t falsified. Thus it simply cannot be right to say that if a had occurred, DB+ & IN & L & E & X would have been false. The objection has not located an action that undermines the premise.

  4. Although a does not undermine the tricky supervenience premise, there is an action which does. If S had not r-ed, something in the supervenience base would have been false. Since the first premise states, in essence, that nobody had a choice about anything in r’s supervenience base, if S had not r-ed, that would have rendered DB+ & IN & L & E & X false; thus, S had a choice about DB+ & IN & L & E & X.

As noted above, I do not wish to commit myself to the view that not-doings are actions, but I am happy to allow that they are for the sake of this objection. Besides, I have already argued that, even if they are not, if S had not r-ed then there would have been some other action, r′, that S would have performed instead.

There are a couple of things to note about this objection. It is true that, had S not r-ed, DB+ & IN & L & E & X would have been false. In order for the objection to succeed, though, it must be true that S could have not r-ed. On the face of it, this sort of objection begs the question. I have offered an argument with the conclusion NR; the objection rejects one of the argument’s premises on the basis of the claim that ~NR. On the other hand, some may insist that I beg the question if I do not allow my opponents this claim, for I then (they may say) unfairly shield my premises from any objections that do not presuppose my argument’s conclusion. I do not wish to embroil myself in sticky issues about question begging and burdens of proof, so I will take another tack. I will allow my opponents that ~NR is true and examine what follows from it (if it is held consistently with the Consequence Argument).

So, suppose that ~NR is true. Then, goes the objection, N(DB+ & IN & L & E & X) is false. Why is this? Because S is able to not-r, and if S had not r-ed, this not r-ing would have rendered DB+ & IN & L & E & X false.

Libertarians walk a thin line with this objection, though. It is part of the defense of the first premise of the Consequence Argument that people cannot now do anything that would, or even might, render false propositions entirely about the past.14 Thus, libertarians cannot say that if S had not r-ed, that would have rendered false any propositions entirely in the past of r—including DB+ & IN & L & E. What is left? It must be the case that S’s not r-ing renders X false, which means this objection, if it is to work, must collapse into the next.

  2. The premise N(DB+ & IN & L & E & X) is false because ~NX is true. Specifically, S could have not r-ed, and if S had not r-ed, this not-r-ing would have rendered X false.

Again, let us grant that S could have not r-ed. Then, it would seem, S could have done something such that, had S done it, it would have ensured the falsity of X. That is, ~NX is true.

This, as I see it, would clearly undermine the argument. If someone has, or ever had, a choice about the occurrence of the microphysical undetermined events upon which an action supervenes, then there is good reason to think that person has, or had, a choice about that action itself.

The problem is that this reply begins to look more like a reductio of the claim ~NR. The only way, it appears, to maintain the falsity of NR without jeopardizing the Consequence Argument is by postulating that we have the ability to falsify propositions about microphysical undetermined events.

This seems, on the face of it, every bit as implausible as the claim that we can falsify propositions about the past or the laws of nature. The fundamental idea behind the sort of ‘objective indeterminism’ that Naturalistic libertarians think is needed for free will is that nothing else determines these events. They just happen—full stop.

Can agent causal accounts get around this problem? They can if they are able to find a way to make agent causation consistent with Weak Naturalism. A few hurdles stand in the way: (for instance) if ‘S’s agent-causing of e’ is itself an event, then it will have to either be a microphysical event or nomically supervene on a collection of them, either of which appears problematic for the agent causal account.

Even if agent causation can be made to square with the letter of Weak Naturalism, it is not clear it can ever be fully in the spirit of the naturalistic thesis. As already noted, the rhetorical purpose of the Supervenience Argument is, in large part, to raise the costs of accepting the Consequence Argument. Since determinism is not generally thought to be true, many people are willing to accept the Consequence Argument so long as they are able to get free will by simply appealing to the claims of our best physics (including quantum physics). However, if the Consequence Argument means free will comes only with the acceptance of ‘spookier’ elements of the world—whether they are events that do not nomically supervene on the microphysical or relations of immanent, agent causation—then Consequence-Argument-motivated libertarianism is left in a less appealing state than the one in which it was found.15

 

 

Works Cited

Carlson, Erik. “Incompatibilism and the Transfer of Power Necessity.” Noûs 34 (2000): 227-290.

Crisp, Thomas M. and Ted A. Warfield. “The Irrelevance of Indeterministic Counterexamples to Principle Beta.” Philosophy and Phenomenological Research 61(1) (2000): 173-184.

Ekstrom, Laura Waddell. Free Will: A Philosophical Study. Boulder, Co.: Westview Press. 2000.

Finch, Alicia and Ted A. Warfield. “The Mind Argument and Libertarianism.” Mind 107 (1998): 515-528.

Kane, Robert. The Significance of Free Will. New York: Oxford UP. 1996.

McKay, Thomas and David O. Johnson. “A Reconsideration of an Argument Against Compatibilism.” Philosophical Topics 24 (1996): 113-122.

O’Connor, Timothy. Persons and Causes: The Metaphysics of Free Will. New York: Oxford UP. 2000.

Unger, Peter. “Free Will and Scientiphicalism.” Philosophy and Phenomenological Research 65(1) (2002): 1-25.

Van Inwagen, Peter. An Essay on Free Will. Oxford: Oxford UP. 1983.

Widerker, David. “On an Argument for Incompatibilism.” Analysis 47 (1987): 37-41.

  1. Peter Van Inwagen, An Essay on Free Will (Oxford: Oxford UP, 1983): 56.
  2. Van Inwagen 93. In this and following quotes, I have taken the liberty of replacing the propositional variables ‘p’ and ‘q’ with ‘φ’ and ‘ψ’ in order to avoid confusion later in the paper.
  3. Alicia Finch and Ted A. Warfield, “The Mind Argument and Libertarianism,” Mind 107 (1998): 516.
  4. This discussion skips a lot of history. This version of the Consequence Argument is a derivative of van Inwagen’s ‘Third Argument’ (van Inwagen 93-95). The version of the transfer principle used in the Third Argument, called ‘Beta,’ has since been shown invalid (David Widerker, “On an Argument for Incompatibilism,” Analysis 47 (1987): 37-41, and Thomas McKay and David O. Johnson, “A Reconsideration of an Argument Against Compatibilism,” Philosophical Topics 24 (1996): 113-122). See also Erik Carlson, “Incompatibilism and the Transfer of Power Necessity,” Noûs 34 (2000): 227-290; Thomas M. Crisp and Ted A. Warfield, “The Irrelevance of Indeterministic Counterexamples to Principle Beta,” Philosophy and Phenomenological Research 61(1) (2000): 173-184; and Finch and Warfield 520-521 for further discussion of these counterexamples. The transfer principle used here, which escapes those counterexamples, was first suggested by Widerker, and this rendition of the Consequence Argument has appeared in Finch and Warfield 522.
  5. Van Inwagen 96.
  6. McKay and Johnson 115.
  7. Finch and Warfield 523.
  8. Finch and Warfield 523-524; emphasis added.
  9. See, e.g., van Inwagen 142-143.
  10. This is different from the claim that there is nothing one could have done which might have falsified the past, which is in general false (since one might have falsified it back before it was the past) but in this case true by virtue of the fact that r is the first choosy act.
  11. See Timothy O’Connor, Persons and Causes: The Metaphysics of Free Will (New York: Oxford UP, 2000) chap. 6 (I take it that the supervenience present in the Weak Naturalistic thesis is incompatible with O’Connor’s emergentism) and Peter Unger, “Free Will and Scientiphicalism,” Philosophy and Phenomenological Research 65(1) (2002): 1-25. Van Inwagen considers—and endorses, if necessary—the rejection of something very much like Naturalism in order to preserve free will (213-217).
  12. See, e.g., Robert Kane, The Significance of Free Will (New York: Oxford UP, 1996), esp. chap. 8, and Laura Waddell Ekstrom, Free Will: A Philosophical Study (Boulder, Co.: Westview Press, 2000) 81-129.
  13. Kane 115.
  14. Finch and Warfield 523.
  15. I am grateful to Tom Crisp, Zac Ernst, Al Mele, and Eddy Nahmias for helpful comments on various drafts of this paper.

Jason Turner

Jason Turner received his Bachelor of Arts in Philosophy from Washington State University in 2002. He is currently finishing a Master of Arts at Florida State University, after which he will enter Ph.D. candidacy at Rutgers, the State University of New Jersey. He is the author of “Strong and Weak Possibility,” Philosophical Studies (forthcoming) and the co-author (with Eddy Nahmias, Stephen Morris and Thomas Nadelhoffer) of “The Phenomenology of Free Will,” Journal of Consciousness Studies (forthcoming).