Wednesday, September 17, 2014

Chalmers vs Pigliucci on the Philosophy of Mind-Uploading (1): Chalmers's Optimism


The brain is the engine of reason and the seat of the soul. It is the substrate in which our minds reside. The problem is that this substrate is prone to decay. Eventually, our brains will cease to function and along with them so too will our minds. This will result in our deaths. Little wonder then that the prospect of transferring (or uploading) our minds to a more robust, technologically advanced, substrate has proved so attractive to futurists and transhumanists.

But is it really feasible? This is a question I’ve looked at many times before, but the recent book Intelligence Unbound: The Future of Uploaded and Machine Minds offers perhaps the most detailed, sophisticated and thoughtful treatment of the topic. It is a collection of essays, from a diverse array of authors, probing the key issues from several different perspectives. I highly recommend it.

Within its pages you will find a pair of essays debating the philosophical aspects of mind-uploading (you’ll find others too, but I want to zone-in on this pair because one is a direct response to the other). The first of those essays comes from David Chalmers and is broadly optimistic about the prospect of mind-uploading. The second of them comes from Massimo Pigliucci and is much less enthusiastic. In this two-part series of posts, I want to examine the debate between Chalmers and Pigliucci. I start by looking at Chalmers’s contribution.

1. Methods of Mind-Uploading and the Issues for Debate
Chalmers starts his essay by considering the different possible methods of mind-uploading. This is useful because it helps to clarify — to some extent — exactly what we are debating. He identifies three different methods (note: in a previous post I looked at work from Seth Bamford suggesting that there were more methods of uploading, but we can ignore those other possibilities for now):

Destructive Uploading: As the name suggests, this is a method of mind-uploading that involves the destruction of the original (biological) mind. An example would be uploading via serial sectioning. The brain is frozen and its structure is analyzed layer by layer. From this analysis, one builds up a detailed map of the connections between neurons (and glial cells, if necessary). This information is then used to build a functional computational model of the brain.

Gradual Uploading: This is a method of mind-uploading in which the original copy is gradually replaced by functionally equivalent components. One example of this would be nanotransfer. Nanotechnology devices could be inserted into the brain and attached to individual neurons (and other relevant cells if necessary). They could then learn how those cells work and use this information to simulate the behaviour of the neuron. This would lead to the construction of a functional analogue of the original neuron. Once the construction is complete, the original neuron can be destroyed and the functional analogue can take its place. This process can be repeated for every neuron, until a complete copy of the original brain is constructed.

Nondestructive Uploading: This is a method of mind-uploading in which the original copy is retained. Some form of nanotechnology brain-scanning would be needed for this. This would build up a dynamical map of current brain function — without disrupting or destroying it — and use that dynamical map to construct a functional analogue.

Whether these forms of uploading are actually technologically feasible is anyone’s guess. They are certainly not completely implausible. I can imagine a model of the brain being built from a highly detailed scan and analysis. It might take a huge amount of computational power and technical resources, but it seems within the realm of technological possibility. The deeper question is whether our minds would really survive the process. This is where the philosophical debate kicks in.

There are, in fact, two philosophical issues to debate:

The Consciousness Issue: Would the uploaded mind be conscious? Would it experience the world in a roughly similar manner to how we now experience the world?

The Identity/Survival Issue: Assuming it is conscious, would it be our consciousness (our identity) that survives the uploading process? Would our identities be preserved?

The two issues are connected. Consciousness is valuable to us. Indeed, it is arguably the most valuable thing of all: it is what allows us to enjoy our interactions with the world, and it is what confers moral status upon us. If consciousness was not preserved by the mind-uploading process, it is difficult to see why we would care. So consciousness is a necessary condition for a valuable form of mind-uploading. That does not, however, make it a sufficient condition. After all, two beings can be conscious without sharing any important connection (you are conscious, and I am conscious, but your consciousness is not valuable to me in the same way that it is valuable to you). What we really want to preserve through uploading is our individual consciousnesses. That is to say: the stream of conscious experiences that constitutes our identity. But would this be preserved?

These two issues form the heart of the Chalmers-Pigliucci debate.

2. Would consciousness survive the uploading process?
So let’s start by looking at Chalmers’s take on the consciousness issue. Chalmers is famously one of the New Mysterians, a group of philosophers who doubt our ability to have a fully scientific theory of consciousness. Indeed, he coined the term “the Hard Problem” of consciousness to describe the difficulty we have in accounting for the first-personal quality of conscious experience. Given his scepticism, one might have thought he’d have doubts about the possibility of creating a conscious upload. But he actually thinks we have reason to be optimistic.

He notes that there are two leading contemporary views about the nature of consciousness (setting non-naturalist theories to the side). The first — which he calls the biological view — holds that consciousness is only instantiated in a particular kind of biological system: no nonbiological system is likely to be conscious. The second — which he (and everyone else) calls the functionalist view — holds that consciousness is instantiated in any system with the right causal structure and causal roles. The important thing is that the functionalist view allows for consciousness to be substrate independent, whereas the biological view does not. Substrate independence is necessary if an upload is going to be conscious.

So which of these views is correct? Chalmers favours the functionalist view and he has a somewhat elaborate argument for this. The argument starts with a thought experiment. The thought experiment comes in two stages. The first stage asks us to imagine a “perfect upload of a brain inside a computer” (p. 105), by which is meant a model of the brain in which every relevant component of a biological brain has a functional analogue within the computer. This computer-brain is also hooked up to the external world through the same kinds of sensory input-output channels. The result is a computer model that is a functional isomorph of a real brain. Would we doubt that such a system was conscious if the real brain was conscious?

Maybe. That brings us to the second stage of the thought experiment. Now, we are asked to imagine the construction of a functional isomorph through gradual uploading:

Here we upload different components of the brain one by one, over time. This might involve gradual replacement of entire brain areas with computational circuits, or it might involve uploading neurons one at a time. The components might be replaced with silicon circuits in their original location…It might take place over months or years or over hours.

If a gradual uploading process is executed correctly, each new component will perfectly emulate the component it replaces, and will interact with both biological and nonbiological components around it in just the same way that the previous component did. So the system will behave in exactly the same way that it would have without the uploading. 
(Intelligence Unbound pp. 105-106)

Critical to this exercise in imagination is the fact that the process results in a functional isomorph and that you can make the process exceptionally gradual, both in terms of the time taken and the size of the units being replaced.

With the building blocks in place, we now ask ourselves the critical question: if we were undergoing this process of gradual replacement, what would happen to our conscious experience? There are three possibilities: it would suddenly stop, it would gradually fade out, or it would be retained. The first two possibilities are consistent with the biological view of consciousness; the last is not. It is only consistent with the functionalist view. Chalmers’s argument is that the last possibility is the most plausible.

In other words, he defends the following argument:

  • (1) If the parts of our brain are gradually replaced by functionally isomorphic components, our conscious experience will either: (a) be suddenly lost; (b) gradually fade out; or (c) be retained throughout.
  • (2) Sudden loss and gradual fadeout are not plausible; retention is.
  • (3) Therefore, our conscious experience is likely to be retained throughout the process of gradual replacement.
  • (4) Retention of conscious experience is only compatible with the functionalist view.
  • (5) Therefore, the functionalist view is likely to be correct; and preservation of consciousness via mind-uploading is plausible.

Chalmers adds some detail to the conclusion, which we’ll talk about in a minute. The crucial thing for now is to focus on the key premise, number (2). What reason do we have for thinking that retention is the only plausible option?

With regard to sudden loss, Chalmers makes a simple argument. If we were to suppose, say, that the replacement of the 50,000th neuron led to the sudden loss of consciousness, we could break down the transition point into ever more gradual steps. So instead of replacing the 50,000th neuron in one go, we could divide the neuron itself into ten sub-components and replace them gradually and individually. Are we to suppose that consciousness would suddenly be lost in this process? If so, then break down those sub-components into other sub-components and start replacing them gradually. The point is that eventually we will reach some limit (e.g. when we are replacing the neuron molecule by molecule) where it is implausible to suppose that there will be a sudden loss of consciousness (unless you believe that one molecule makes a difference to consciousness: a belief that is refuted by reality since we lose brain cells all the time without thereby losing consciousness). This casts the whole notion of sudden loss into doubt.

With regard to gradual fadeout, the argument is more subtle. Remember, it is critical to Chalmers’s thought experiment that the upload is functionally isomorphic to the original brain: for every brain state that used to be associated with conscious experience there will be a functionally equivalent state in the uploaded version. If we accept gradual fadeout, we would have to suppose that, despite this equivalence, there is a gradual loss of certain conscious experiences (e.g. the ability to experience black and white, or certain high-pitched sounds). Chalmers argues that this is implausible because it asks us to imagine a system that is deeply out of touch with its own conscious experiences. I find this slightly unsatisfactory insofar as it may presuppose the functionalist view that Chalmers is trying to defend.

But, in any event, Chalmers suggests that the process of partial uploading will convince people that retention of consciousness is likely. Once we have friends and family who have had parts of their brains replaced, and who seem to retain conscious experience (or, at least, all outward signs of having conscious experience), we are likely to accept that consciousness is preserved. After all, I don’t doubt that people with cochlear or retinal implants have some sort of aural or visual experiences. Why should I doubt it if other parts of the brain are replaced by functional equivalents?

Chalmers concludes with the suggestion that all of this points to the likelihood of consciousness being an organizational invariant. What he means by this is that systems with the exact same patterns of causal organization are likely to have the same states of consciousness, no matter what those systems are made of.

I’ll hold off on the major criticisms until part two, since this is the part of the argument about which Pigliucci has the most to say. Nevertheless, I will make one comment. I’m inclined towards functionalism myself, but it seems to me that in crafting the thought experiment that supports his argument, Chalmers helps himself to a pretty colossal assumption. He assumes that we know (or can imagine) what it takes to create a “perfect” functional analogue of a conscious system like the brain. But, of course, we don’t really know what it takes. Any functional model is likely to simplify and abstract away from the messy biological details. The problem is knowing which of those details are critical for ensuring functional equivalence. We can create functional models of the heart because all the critical elements of the heart are determinable from a third-person perspective (i.e. we know what it takes to make blood pump). That doesn’t seem to be the case with consciousness. In fact, that’s what Chalmers’s Hard Problem is supposed to highlight.

3. Will our identities be preserved? Will we survive the process?
Let’s assume Chalmers is right to be optimistic about consciousness. Does that mean he is right to be optimistic about identity/survival? Will the uploaded mind be the same as we are? Will it share our identity? Chalmers has more doubts about this, but again he sees some reason to be optimistic.

He starts by noting that there are three different philosophical approaches to personal identity. The first is biologism (or animalism), which holds that preservation of one’s identity depends on the preservation of the biological organism that one is. The second is psychological continuity, which holds that preservation of one’s identity depends on maintaining threads of overlapping psychological states (memories, beliefs, desires etc.). The third, slightly more unusual, is Robert Nozick’s “closest continuer” theory, which holds that preservation of identity depends on the existence of a closely-related subsequent entity (where “closeness” is defined in various ways).

Chalmers then defends two different arguments. The first gives some reason to be pessimistic about survival, at least in the case of destructive and nondestructive forms of uploading. The second gives some reason to be optimistic, at least in the case of gradual uploading. The end result is a qualified optimism about gradual uploading.

Let’s start with the pessimistic argument. Again, it involves a thought experiment. Imagine a man named Dave. Suppose that one day Dave undergoes a nondestructive uploading process. A copy of his brain is made and uploaded to a computer, but the biological brain continues to exist. There are, thus, two Daves: BioDave and DigiDave. It seems natural to suppose that BioDave is the original, and his identity is preserved in this original biological form; and it is equally natural to suppose that DigiDave is simply a branchline copy. In other words, it seems natural to suppose that BioDave and DigiDave have separate identities.

But now suppose we imagine the same scenario, only this time the original biological copy is destroyed. Do we have any reason to change our view about identity and survival? Surely not. The only difference this time round is that BioDave is destroyed. DigiDave is the same as he was in the original thought experiment. That suggests the following argument (numbering follows on from the previous argument diagram):

  • (9) In nondestructive uploading, DigiDave is not identical to Dave.
  • (10) If in nondestructive uploading, DigiDave is not identical to Dave, then in destructive uploading, DigiDave is not identical to Dave.
  • (11) In destructive uploading, DigiDave is not identical to Dave.

This looks pretty sound to me. And as we shall see in part two, Pigliucci takes a similar view. Nevertheless, there are two possible ways to escape the conclusion. The first would be to deny premise (10) by adopting the closest continuer theory of personal identity. The idea then would be that in destructive (but not nondestructive) uploading DigiDave is the closest continuer and hence the vessel in which identity is preserved. I think this simply reveals how odd the closest continuer theory really is.

The other option would be to argue that this is a fission case. It is a scenario in which one original identity fissions into two subsequent identities. The concept of fissioning identities was originally discussed by Derek Parfit in the case of severing and transplanting of brain hemispheres. In the brain hemisphere case, some part of the original person lives on in two separate forms. Neither is strictly identical to the original, but they do stand in “relation R” to the original, and that relation might be what is critical to survival. It is more difficult to say that nondestructive uploading involves fissioning. But it might be the best bet for the optimist. The argument then would be that the original Dave survives in two separate forms (BioDave and DigiDave), each of which stands in relation R to him. But I’d have to say this is quite a stretch, given that BioDave isn’t really some new entity. He’s simply the original Dave with a new name. The new name is unlikely to make an ontological difference.

Let’s now turn our attention to the optimistic argument. This one requires us to imagine a gradual uploading process. Fortunately, we’ve done this already so you know the drill: imagine that the subcomponents of the brain are replaced gradually (say 1% at a time), over a period of several years. It seems highly likely that each step in the replacement process preserves identity with the previous step, which in turn suggests that identity is preserved once the process is complete.

To state this in more formal terms:

  • (14) For all n < 100, Dave(n+1) is identical to Dave(n).
  • (15) If, for all n < 100, Dave(n+1) is identical to Dave(n), then Dave(100) is identical to Dave.
  • (16) Therefore, Dave(100) is identical to Dave.

If you’re not convinced by this 1%-at-a-time version of the argument, you can adjust it until it becomes more persuasive. In other words, setting aside certain extreme physical and temporal limits, you can make the process of gradual replacement as slow as you like. Surely there is some point at which the degree of change between the steps becomes so minimal that identity is clearly being preserved? If not, then how do you explain the fact that our identities are being preserved as our body cells replace themselves over time? Maybe you explain it by appealing to the biological nature of the replacement.  But if we have functionally equivalent technological analogues it’s difficult to see where the problem is.

Chalmers adds other versions of this argument. These involve speeding up the process of replacement. His intuition is that if identity is preserved over the course of a really gradual replacement, then it may well be preserved over a much shorter period of replacement too, for example one that takes a few hours or a few minutes. That said, there may be important differences when the process is sped up. It may be that too much change takes place too quickly and the new components fail to smoothly integrate with the old ones. The result is a break in the strands of continuity that are necessary for identity-preservation. I have to say I would certainly be less enthusiastic about a fast replacement. I would like the time to see whether my identity is being preserved following each replacement.

4. Conclusion
That brings us to the end of Chalmers’s contribution to the debate. He says more in his essay, particularly about cryopreservation and the possible legal and social implications of uploading. But there is no sense in addressing those topics here. Chalmers doesn’t develop his thoughts at any great length and Pigliucci wisely ignores them in his reply. We’ll be discussing Pigliucci’s reply in part two.

Sunday, September 14, 2014

Are hierarchical theories of freedom and responsibility plausible?

In order to be responsible for your actions, you must be free. Or so it is commonly believed. But what exactly does it mean to be free? One popular view holds that freedom consists in the ability to do otherwise. That is to say: the ability to choose among alternative possible futures. This popular view runs into a host of problems, the most obvious being that it seems inconsistent with causal determinism.

This has led several authors to propose alternative hierarchical theories of freedom. According to these theories, an action is free when it is consistent with an agent’s higher-order, reflective desires. The idea is that sometimes we have impulsive, non-reflective desires that are not consistent with the kinds of people we really want (or believe) ourselves to be. I, for example, currently desire a piece of cake. But I have also, in my more reflective moments, committed to losing weight because I want to be a skinny person (note: this is just a hypothetical). Consequently, acting on my impulsive desire for cake would be inconsistent with my higher-order preference for being skinny. That would be the essence of unfreedom.

Hierarchical theories of freedom have many attractive features. They are consistent with determinism, and they speak to the core belief that in order for an action to be free it must belong to us in some respect. But do they provide a compelling account of responsibility? In his book Against Moral Responsibility, Bruce Waller argues that they don’t. Indeed, he argues that the overwhelming belief that freedom and moral responsibility are connected has led people to propose deeply flawed theories of freedom and responsibility. The hierarchical theory is just one particularly good example of this.

In this post, I want to review Waller’s main arguments. I do so largely as an attempt to better understand his critique. I have, in the past, endorsed hierarchical theories, but have recently become more sceptical. Waller’s critique proceeds in three stages, each one looking at a variant of the hierarchical theory from a different theorist — Harry Frankfurt, Gerald Dworkin and Susan Wolf, respectively. I’ll sketch each of these three stages in what follows.

1. Frankfurt’s Theory and the Implausibility of the Hierarchical Approach
Harry Frankfurt’s 1971 article, “Freedom of the will and the concept of a person”, is perhaps the classic work on hierarchical theories of freedom. It proposes the simplest, and arguably most compelling, of the hierarchical theories. This is the one I laid out in the introduction. It claims that freedom consists simply in doing whatever is consistent with one’s second-order preferences. And what exactly is the difference between a first and second (or higher)-order preference? The answer is roughly as follows:

First-Order Desire: Is expressible in the form “A wants to X”, where “X” is some particular action (like eating cake).
Second-Order Desire: Is expressible in the form “A wants to want to X”, where the object of the wanting is some particular first-order desire (like wanting to want to eat cake, or, in my case, wanting not to want to eat cake).

So any particular action is free when the desire motivating the action is endorsed by a higher order preference to want to have that desire.

The problem with this simple theory is that it appears to have troubling implications. It implies that certain people who we would not ordinarily classify as free are in fact free. In particular, it implies that people who are gripped by compulsive desires are, sometimes, free. Frankfurt embraces this implication when he distinguishes between two kinds of drug addict. Ordinarily, we would be inclined to say that the drug addict is not free: she is controlled by her first-order desire to take a drug. But Frankfurt says this is not always true. There are wanton addicts and willing addicts. Wanton addicts simply follow their first-order desires, without taking any reflective stance on whether those are the desires they want to have. Willing addicts fully endorse their first-order desire for drugs: they want to want them. They are truly free (and, by implication, responsible for what they do).

Waller argues that this is absurd, particularly when we bear in mind the typical history of the willing addict. Consider three counterexamples:

Willing Addict: Peter starts taking drugs in college. He initially believes himself to be in control of his desire, saying “I can quit anytime”. Later, he finds himself trapped in a drug addiction he despises. He tries to get out of it but instead he slides deeper and deeper into difficulties. He loses his family and friends, destroys his career, and suffers from numerous psychological and physical problems. In the end, nothing of his old life is left. At this stage, he has an epiphany: since nothing of that old life is left, he has no reason to despise what he has become. He then embraces his addiction. He wants to want the drugs. He becomes a willing addict.

Willing Slave: Jamal is a fierce, independent warrior. He is captured by slavers and transported to a plantation in the Caribbean. While there, he is “whipped, branded, and abused”. He is forced to work against his will. In the beginning, he maintains his commitment to freedom, striking back at his slave masters whenever he gets the chance. But, after many years, he gives up. His spirit is broken. He embraces his conditions. He becomes a willing and happy slave.

Willing Convert: Eve is a strong, independent young woman. She longs for an education and career of her own. Unfortunately, she has been born into a strict, religious community. In that community, women are expected to be meek and compliant, to accept male authority, to remain uneducated, and maintain a subservient societal role. Eve rejects those values and “insists that she be respected as fully equal to anyone else”. But after years of “failure, condemnation, and psychological and physical abuse”, she breaks down. She starts to accept the subservient role. She becomes a willing convert.

In each of these cases, the individuals in question meet the conditions set down by Frankfurt. In the end, each of them reflectively approves of their first-order desires. But surely we would not say that any of them are free? Indeed, they arguably epitomise unfreedom. This suggests that Frankfurt’s simplistic version of hierarchical freedom is deeply flawed. The question is whether the hierarchical approach can then be salvaged.

2. Dworkin and the Right Causal Pathway Account
If you look at the three counterexamples just given, you’ll notice a common theme. They each involve people who come to embrace their position in life via a certain kind of causal pathway. Namely, a causal pathway involving deprivation, abuse or coercion. They want what they now have, but only because circumstances left them with no other viable options. This cannot be freedom.

But this directs our attention to a possible escape route for proponents of the hierarchical approach to freedom. Instead of arguing that freedom is simply about wanting what you want, couldn’t they argue that it is about that and coming to that realisation through the right causal pathway?

This is exactly what Gerald Dworkin claims in his 1988 book The Theory and Practice of Autonomy. He argues that one’s higher-order evaluations need to meet the condition of procedural independence. Very roughly, this means that one’s higher-order evaluations are free from manipulation and coercion, and are arrived at through appropriate education and access to the right kinds of information. They do not arise simply because one has been beaten, abused and cajoled into accepting one’s lot in life.

This is an intuitively attractive idea. We all have the sense that certain desires are arrived at via improper causal pathways, and certain others are not. If Eve decided she wanted to live the life of subservience after having received an education and being exposed to the mainstream, secular way of life, then we might view her differently. The fact that she didn’t and was never even given that opportunity is crucial to her lack of freedom.

But Dworkin’s solution raises new problems. Waller highlights two of them. The first is that it effectively does away with the hierarchical component of the theory. If what matters is that you arrive at your desires through the right kind of deliberation, then that’s all that matters: consistency with higher-order desires doesn’t seem like a necessary addition. The second is that it is difficult to know what counts as the right causal pathway. Manipulation and coercion by others is one thing, but what about more subtle forms of manipulation? We are all “manipulated” by our genes, culture, education and social setting. Countless studies confirm this fact. Do these count, and if not, why not? Waller gives the example of a willing gambler, who came by his addiction due to a fortunate run of luck the first time he visited a casino. Did he arrive at his compulsion through the right causal pathway? If not, then any number of “fortuitous contingencies” would seem to undermine freedom. The number of truly free actions would be vastly diminished. Maybe that’s something we are willing to accept, but we should acknowledge it as a potential consequence of Dworkin’s theory nonetheless.

Dworkin proposes a test of his own. He says that one way we can know if a desire is arrived at in the right way is if the individual in question would reflectively approve of the process whereby they arrived at that desire. Thus, I can say that I arrived at my desire to be skinny through careful deliberation about the person I would like to be; and I approve of that process of arriving at that desire. I’ve come to this position in the right way. The danger with this test is that many people who have been manipulated or coerced into a state of acceptance are likely to reflectively endorse the process whereby they arrived at that state. So, for example, Eve may well approve of her frustrations and denials once she has “come to see the light”. She may thank her community for helping her to see the errors of her ways. That doesn’t make her free.

3. Wolf’s Perfect Rationality Account of Freedom
There may, however, be one causal pathway that leads to freedom. This is the one advocated by Susan Wolf. According to Wolf, the only way to be truly free — i.e. free from the kinds of manipulations and coercions we worried about above — is to track the True and the Good. In other words, to desire what is right for the right reasons.

This view has its origins in religious, predominantly Christian, philosophy (though the Christians adopted it from the Greeks). The idea, as Waller describes it, is that:

True freedom is living in accordance with one’s true nature (as a rational being); genuine freedom can be realized only through accurate pursuit of the True; real freedom means living in accordance with the way God designed you; true freedom is found in perfect obedience to God. 
(Waller, 2011, pp 66-67)

Wolf adapts this classic ideal by replacing obedience to God with obedience to reason. One behaves freely when one believes what is true, desires what is right, and does so because one has access to the right information and can process it appropriately. In that case, you are not being surreptitiously manipulated into your desires, and there are no subtle, undetectable, genetic or environmental quirks influencing what you do. You are simply being guided by the light of reason.

There is some irony to all this. If Wolf is right, then freedom does not consist in the ability to jump tracks and to do otherwise; instead, it consists in the ability to follow the right track (note: if there are many things that are “right”, there may still be several tracks that one can follow; nevertheless, there is a much narrower set of right tracks than is typically thought).

I have to say, I find Wolf’s account somewhat attractive. I don’t know if I would call it a theory of responsibility or freedom, per se, but I do think it addresses the worries we have about manipulations and other causal influences. The obvious problem with Wolf’s account is that humans routinely and systematically fall short of such perfect rationality. In addition, it may challenge the traditional conception of responsibility for one’s actions. This doesn’t mean she’s wrong, of course; it just means this sort of freedom is alien to human beings. (Waller, I should note, also thinks that the ability to jump tracks is valuable and neglected in Wolf’s account.)

So what should we conclude? I’m not sure. There is much more to the literature on freedom and responsibility. There are several other variants on the hierarchical/right causal pathway theme, and there is a veritable cottage-industry of academic work on manipulations and how they may, or may not, undermine freedom. Nevertheless, I like Waller’s simple criticisms. I think they embody a robust common sense. I think he is right to say that Frankfurt’s approach is flawed, and that identifying the right causal pathway is extremely difficult (unless we resort to Wolf’s extreme). Philosophical sophistication is all well and good, but sometimes you need that kind of common-sense critique.

Thursday, September 11, 2014

Steven Pinker's Guide to Classic Style

I try to be a decent writer. I try to convey complex ideas to a broader audience. I try to write in a straightforward, conversational style. But I know I often fail in this. I know I sometimes lean too heavily on technical philosophical vocabulary, hoping that the reader will be able to follow along. I know I sometimes rush to complete blog posts, never getting a chance to polish or rewrite them. Still, I strive for clarity and would like to improve.

That’s why I have been keen to read Steven Pinker’s new book, The Sense of Style. Pinker is, of course, a well-known linguist, cognitive scientist and public intellectual. And this latest book is his attempt to provide a style guide for the 21st Century. Those of you who are familiar with style guides will know the usual drill: a list of principles and dos and don’ts, often supplied without reason and subject to any number of qualifications and exceptions. Some are good, some are bad, some are merely infuriating. Pinker’s book is different. It has some of the traditional lists of dos and don’ts, but with an added helping of psychology and linguistic theory. Furthermore, it’s written in an engaging style (always encouraging in a style guide), and may be the first manual of its sort that you would actually want to read from start to finish.

But I don’t intend for this post to be a fawning review. Instead, I want to share one of the key ideas from the book. In particular, I want to share the basic theory of communication that Pinker relies on, and some of his main dos and don’ts.

1. The Classic Style of Communication
Let’s start with the theory. One of the infuriating aspects of traditional style guides — according to Pinker anyway — is that they lack an underlying theory of communication. When their authors are busy doling out advice, they do so in an intuitive, somewhat haphazard manner. That’s why their rules are often so odd, and why the best prose stylists often break them. If the style-gurus had some theory of communication in place, they could explain why they adopt a certain set of rules and why it is okay to occasionally break them.

So that’s what Pinker does. His preferred theory of communication is that of classic style. This is not something he came up with himself. It was originally presented by two literary theorists — Francis-Noel Thomas and Mark Turner — in a book called Clear and Simple as the Truth. The essence of classic style is that writing should be viewed as a conversation between the writer and the reader, in which the writer explains some object of joint attention to the reader. As Pinker puts it:

The guiding metaphor of classic style is seeing the world. The writer can see something that the reader has not yet noticed, and he orients the reader’s gaze so that she can see it for herself. The purpose of writing is presentation, and its motive is disinterested truth. It succeeds when it aligns with the truth, the proof of success being clarity and simplicity.
(Pinker, 2014, pp. 28-29)

The simplest example of classic style in action would be where the writer literally describes an object or event in the real world to the reader. Suppose I just witnessed an accident on my way home, and I’m trying to describe it to you in a letter. Here, the accident is the object of joint attention; the goal of the written communication is to “orient your gaze” toward that accident; and the communication succeeds when I manage to describe it accurately.

But don’t get too hung up on this example. The object of joint attention need not be so mundane. It could be much more abstract. For example, it could be a scientific theory, or a philosophical concept, or an academic or scholarly debate. Indeed, I think of most academic writing as an attempt to orient the reader toward some kind of abstract “object”. In my case, it is usually an argument, one that I almost literally want the reader to be able to see: I want them to see the premises, how they connect with one another, and how they support one or more conclusions. The visual element of this metaphor is driven home by the use of argument diagrams.

As Pinker sees it, classic style is an ideal model of communication for academic and expository writing. In academic writing, you are usually trying to explain or justify something to a reader: you have seen something they have not, and you want to bring it into the spotlight. It may be less well-suited to other kinds of writing. Poetry and fiction, for example, are not always about describing and explaining some object of joint attention (though they often are, albeit in novel and interesting ways).

Classic style is the antithesis of the postmodern style. This is for good reason. Postmodernists are usually sceptical about the “Truth”. They don’t think it exists, at least not apart from the concepts and theories we use to describe it. Classic style seems naturally opposed to this since it assumes that the goal of writing is to convey some truth to the reader. However, the tension between postmodern and classic style may be more apparent than real. Postmodernists often do have important truths to convey. It is true that knowledge is sometimes socially constructed; it is true that our concepts and theories are sometimes biased. It is sometimes a good idea to draw the reader’s attention to these things.

In other words, the postmodernists have no excuse.

2. Pinker’s Dos and Don’ts for Classic Stylists
With the theory in place, Pinker proceeds to give some dos and don’ts to would-be writers. These dos and don’ts are not absolute, hard-and-fast rules. There are exceptions to them. But these exceptions make sense in light of the underlying theory of communication. And that’s the important thing. A good classic stylist will tend to follow the rules that Pinker outlines, but they won’t always do so. This is because the good writer never loses sight of the overarching goal of communication. So long as you never lose sight of this, you too can occasionally flout the rules.

Pinker offers a lot of advice, but I’ve tried to reduce it down to eight basic principles, along with a few qualifications:

1. Eliminate Metadiscourse - Metadiscourse is writing about the writing. Signposting is a famous example (“in the first section we will do x”... “in this section we will do y”). Sometimes this is necessary, but it should be kept to a minimum and should be conversational in nature (“as we have just seen”... “let’s start by looking at this”).

2. Don’t confuse the subject matter of the communication with your line of work - You are trying to explain some important subject matter to the reader, don’t get bogged down in debates only relevant to those in your line of work, and don’t constantly harp on about how difficult or controversial what you are trying to say really is. This is a major problem in academic writing. For example, philosophers often talk about what other philosophers say and do, rather than about actual arguments and theories. 

Exception: Sometimes the object of joint attention really is what others in your field of work say, e.g. you want to talk about a debate between two famous academics.

3. Minimise Compulsive Hedging - As Pinker says, “Many writers cushion their prose with wads of fluff that imply they are not willing to stand behind what they say”. Thus we have the persistent use of the adverbial qualifiers “seemingly”, “apparently”, “nearly”, “partially”. To be sure, some of this is necessary. But it is tedious if overdone, and much of the time readers will supply the necessary qualifications themselves. Save the qualifications for the claims that really need to be qualified.

Complementary rule: Avoid excessive use of intensifiers: they often detract from the impact of what you are saying.

4. Avoid cliches like the plague - Cliches were originally effective and punchy ways of conveying ideas - they brought to mind powerful sensory metaphors and analogies. But overuse has robbed them of this value. Try to come up with new, punchy metaphors instead. If you must resort to a cliche, don’t mix the metaphors. For example, say “fall through the cracks” rather than “fall between the cracks”.

5. By all means discuss abstract ideas, but avoid unnecessary abstract nouns - This one takes a little explaining. It’s perfectly okay to discuss abstract concepts and ideas, but you should avoid unnecessary abstraction. So avoid using verbal coffins like “issues”, “models”, “levels”, “perspectives” to convey abstract ideas. Example: “Individuals with mental health issues can become dangerous” becomes “People who are mentally ill can become dangerous”.

6. Remember: Nominalization is a dangerous weapon - This is the process of turning a verb into a noun -- e.g. affirm into affirmation. Academics and bureaucrats tend to overuse nominalizations. Not only do they strip your prose of agents and actors, they are often used to avoid accountability, e.g. the classic politician’s defence “Mistakes were made”.

7. Adopt an active, conversational style - Use the first and second person pronouns, and don’t talk about the article or book as though it were an agent independent of you (“this article will argue that...”). Use the active voice if possible, especially if you are giving important instructions to someone (Pinker gives the example of instructions for a dangerous product: “X can result in accumulated damage over time” vs “Never do X: it can kill you in minutes”).

8. But it’s okay to use the passive voice (sometimes) - the passive voice is much maligned in writing guides, but it’s okay to use it sometimes. Just remember the guiding principle: you are trying to direct the reader’s attention to something in the world. The active voice directs their attention to the doer of the action; the passive voice directs their attention to the person or object to whom the action is being done. Sometimes it’s the latter to which you want to direct attention. For example “See that mime? He’s being pelted with zucchini by the lady with the shopping bag”. The passive construction in the second sentence makes sense because you are trying to draw the reader’s attention to the mime, not the lady pelting him with zucchinis.

So there you have it: Steven Pinker’s guide to classic style. I don’t want you to go away with the impression that this is all there is in the book. There is much more, including some very interesting chapters on grammar and the psychology of correct usage. I would encourage everyone to read the full thing. Still, I hope this summary gives a flavour of its contents, and is of some use to all.

Tuesday, September 9, 2014

Teaching Documents Online


Since I have just started a new job, I have decided to put some of my old teaching documents online. It's just a small sample, but some people might be interested. They are handouts from classes I taught over the past three years, while employed at Keele University. I've just put up the ones dealing with ethics and the philosophy of law for the time being. I may add more in the future.

Bear in mind that these are intended for teaching purposes. I don't defend any particular views in them. I just try to explain key concepts and arguments, and give students some suggestions for how to evaluate and analyse those concepts and ideas.

Here's what I have up so far:

  • Rationality and Efficiency - An introduction to rational choice theory for law students. Also looks at the concept of economic efficiency.
  • Scientific Evidence and Torture - Uses economic concepts to evaluate the worth of scientific evidence and information gained from interrogational torture.

Sunday, September 7, 2014

Next Journal Club: Finitism and the Beginning of the Universe


The next journal club will be dealing with the following paper:

The paper is an interesting contribution to the debate about the Kalam Cosmological Argument. As many of you will know, William Lane Craig (and others) use this argument to support the existence of God. The basic thrust of it is that the universe must have had a cause of its existence. Central to this is the claim that the universe must have begun to exist. This claim is defended on the grounds that an infinite past is an impossibility. One of the main arguments in defence of this impossibility is one that focuses on the impossibility of traversing an actually infinite sequence of past events.

Puryear's paper casts doubt on the plausibility of this argument. Although he accepts that it may be impossible to traverse an actually infinite sequence of events, he argues that the past may not consist of an actually infinite sequence of events, even if the universe had no beginning. (The argument is more nuanced than this, but that's the basic gist of it).

Anyway, we'll be discussing this paper at the end of September/start of October. (The paper is available via the link above).

Friday, September 5, 2014

Do Cognitive Enhancing Drugs Actually Work?

I’ve been writing about the ethics of human enhancement for some time. In the process, I’ve looked at many of the fascinating ethical and philosophical issues that are raised by the use of enhancing drugs. But throughout all this writing, there is one topic that I have studiously avoided. This is surprising given that, in many ways, it is the most fundamental topic of all: do the alleged cognitive enhancing drugs actually work?

One reason for avoiding this topic is that philosophers like to pursue hypotheticals: to imagine possible worlds and trace out their logical implications. And this can be all well and good, but as I have written elsewhere, there is a danger that it leads one to commit the “vice of in-principlism”. That is: the vice of talking about enhancement purely in terms of “well if, in principle, cognitive enhancing drugs worked, then the following would be true…”. This is a vice because there are many real-world substances that are alleged to have an enhancing effect. And it’s important that in all our philosophising we don’t ignore the real-world.

So, anyway, to make up for my historical failure, I am going to try to answer the question now. I do so by summarising three studies on the effects of cognitive enhancing drugs. Two of these are systematic reviews (one including a meta-analysis) of the available experimental literature; the other is a “phenomenological study” that I happen to find interesting. The studies focus on three drugs (or drug types) — Adderall (a mix of amphetamine and dextroamphetamine salts); Ritalin (methylphenidate); and Provigil (modafinil) — all of which are alleged to have enhancing effects, and are frequently used by students to improve their educational performance.

Are all these students wasting their time? Let’s see.

1. Repantis et al 2010: Systematic Review of Methylphenidate and Modafinil
The first study I am going to look at is a systematic review and meta-analysis by Dimitris Repantis and his colleagues. Their analysis was concerned solely with the cognitive enhancing effects of methylphenidate and modafinil. The authors found 46 studies on methylphenidate and 45 on modafinil that met their inclusion criteria. All of these studies were reviewed, but some did not have sufficient data to be extracted for their statistical analyses.

Their review focused on a variety of enhancing effects, specifically on: (a) mood; (b) motivation; (c) wakefulness; (d) attention and vigilance; (e) memory and learning; and (f) executive functions and information processing. It also focused on different kinds of experimental trial and different classes of experimental subject. In the first instance, the focus was on healthy individuals, i.e. not individuals who were taking these drugs for some illness or disorder (e.g. ADHD). These individuals were then divided up into two further subclasses — non-sleep deprived and sleep-deprived. The reviewers looked at the effects on such individuals in two scenarios: (i) single-dose trials — in which the experimental subjects were given a single dose of the relevant drug; and (ii) repeated-dose trials — in which the experimental subjects were given more than one dose over a period of time.

The summary findings in relation to methylphenidate were as follows:

Single Dose Trials (Non-sleep deprived): A single dose of methylphenidate had a strong enhancing effect in relation to one outcome only: memory. No statistically significant effect was found in relation to attention, mood and executive function. A lack of appropriate baseline measures made it impossible to derive a statistical conclusion in relation to the effect on wakefulness. Only one study looked at the effects on motivation and found some subjectively-reported improvement in willingness to engage in mathematical tasks.

Repeated Dose Trials (Non-sleep deprived): Only two of the included studies looked at repeated usage. Consequently, no statistical analysis could be performed. One of these studies actually looked at two drugs and so the effect of methylphenidate was difficult to determine. The other study involved six weeks of usage by elderly healthy individuals and found some positive effect in relation to fatigue, but nothing in relation to the other parameters of enhancement.

Trials in Sleep-Deprived Individuals: Five of the included studies involved sleep-deprived individuals, two of them involved repeated doses. This wasn’t sufficient to perform a statistical analysis. Still, the results of the studies are interesting. No cognitive enhancing effect was found for single-dosage after one night of sleep deprivation, and in fact it was found that use of the drug may give rise to an overconfidence effect. No positive effects of repeated dosage were found for wakefulness in cases of long-term sleep deprivation (greater than 36 hours) and minimal effects were found in short-term cases (around 4 hours).

The authors conclude that no firm conclusion can be reached about the enhancing effect of methylphenidate at this stage (remember, they were writing in 2010), though they accept that there may be some positive effect in relation to memory. They also note that the popular belief that methylphenidate enhances attention is not confirmed by the meta-analysis.

Turning then to modafinil, the summary findings are as follows:

Single Dose Trials (Non-sleep deprived): A positive effect was found in relation to attention and wakefulness for single-dose trials (the latter is not particularly surprising given how modafinil works), though a negative effect on wakefulness was also found as more time elapsed after drug administration. In other words, modafinil may keep you awake for longer, but it may make you more tired at a later point in time. No significant effect was found in relation to mood, memory and motivation. And no analysis could be performed in relation to executive functioning.

Repeated Dose Trials (Non-sleep deprived): Only two of the included studies involved repeated drug administrations. The first found no effect on attentional tasks after an evening and morning administration. The second involved administrations over the course of three days and found both a positive and negative effect on mood (i.e. it made people happier, but also more anxious).

Single Dose Trials (Sleep-deprived): Statistical analysis found a strong-to-moderate positive effect of modafinil in relation to executive function, memory and wakefulness in sleep-deprived individuals. The effect generally declined the longer the period of deprivation continued. No effects were found in relation to mood and attention, and none of the studies looked at motivation.

Repeated Dose Trials (Sleep-deprived): Statistical analysis suggested a strong positive effect of repeated drug administrations on wakefulness (again, not hugely surprising), but no effect in relation to executive functioning and attention. None of the included studies looked at memory, mood or motivation.

The authors conclude that there is evidence of an enhancing effect for modafinil, primarily on attention in non-sleep deprived individuals, and on wakefulness, executive function and memory in sleep deprived individuals. They caution that, with the exception of wakefulness, these positive effects do not seem to be sustained in the long-term (i.e. across repeated drug administrations), and, furthermore, that modafinil may also give rise to an overconfidence effect in sleep-deprived individuals. This is important insofar as modafinil is often touted for use among sleep-deprived professionals, e.g. doctors.

2. Smith and Farah 2011: Are Prescription Stimulants Smart Pills?
The next study I am going to look at is by Elizabeth Smith and Martha Farah. Again, this one is a systematic review of the literature. It is concerned with two major issues: (i) how many people are using these drugs? and (ii) do they actually work? I’ll only be looking at the latter issue in this summary.

Smith and Farah’s review focused on two drug types: methylphenidate (Ritalin) and dextroamphetamine (on its own or in Adderall). They were concerned with alleged enhancing effects on healthy adults, and they reviewed placebo-controlled trials involving oral administration of the relevant drugs. They divided the available literature into four groups, each concerned with the effects of these drugs on different cognitive processes: (i) memory and learning; (ii) working memory; (iii) cognitive control; and (iv) other executive functions.

If you are at all interested in this topic, I would highly recommend reading Smith and Farah’s review. It really is a comprehensive and detailed summary of the available literature (though, bear in mind, there is overlap between this and the Repantis et al review). I cannot do justice to everything they say in this brief summary. I will, however, try my best:

Memory and Learning: The authors looked at 22 studies on the effects of methylphenidate and d-AMP on learning and memory. The studies covered 24 different tasks, some involving declarative memory (i.e. recall of facts), others involving non-declarative (procedural) memory (i.e. skills). They found some weak-to-strong evidence for an enhancing effect on declarative memory tasks, with that effect more pronounced over the long-term. Indeed, many studies suggest that these drugs have little-to-no effect in the short term, but significant effects in the long-term. The findings in relation to non-declarative memory tasks were more mixed, though more inclined to be positive for methylphenidate than for d-AMP.

Working Memory: Working memory is like the brain’s “scratch pad” - a space in which various bits of information can be kept “online” while performing a given cognitive task. The authors looked at 23 studies on this, involving 27 different tasks. The results are far too complex to summarise fairly because they vary depending on the task the experimental subjects were asked to perform. But in general, the studies are mixed, with some showing positive effects, some showing no effect, and none showing a negative effect. The suggestion from the studies seems to be that the enhancing effect is greater on those who are less able to perform the given tasks in the first place.

Cognitive Control: Cognitive control is, in essence, the ability to override and control the brain’s more automatic, learned responses. The authors looked at 13 studies involving 16 control tasks. Again, the results are complex, but overall there were more null results than positive ones, and one finding of impairment. Careful analysis of the results once again suggests that the positive effects are greatest for those who have poor cognitive control in the first place.

Other Executive Functions: This is just a catch-all for other possible enhancing effects covered by the available studies. The authors looked at five such studies, covering eight different tasks, including verbal fluency and grammatical reasoning, as well as Raven’s Progressive Matrices and variants on the Tower of London task. Overall, there were only two positive results, but the small number of tests makes it difficult to draw any firm conclusions.

So what is the upshot of all this? Once again there seems to be some reasonable evidence for a positive effect on long-term memory recall, and some indication that these drugs work better for people with weaker cognitive performance. Still, the authors are cautious. Over a third of the studies reviewed showed no enhancing effect, and there is the danger that negative results are not being published (due to publication bias).

3. Vrecko 2013: Just how “cognitive” is cognitive enhancement?
The final study I want to look at is a bit different from the preceding two. It is not a systematic review. It is a single study, conducted on 24 students at an unnamed, elite university on the East Coast of the United States. The study is interesting to me because it focuses exclusively on what the students taking these drugs think they do to their ability to study (and engage in other types of academic work), and because it argues that the emotional effects of these drugs may be just as important as (if not more important than) the cognitive effects. (I would, however, note that disentangling the emotional from the cognitive can be a tricky business.)

I think the most valuable part of this study is the quotations it provides from actual student users, and I want to share some of those quotes here. Before I do that, I need to share something of Vrecko’s analytical framework. Using fairly standard methodological protocols, Vrecko noted that his interviewees’ responses suggested that drugs such as Adderall, Ritalin and Modafinil had four mood-enhancing effects. I’ll cover all four here, and include student quotes that illustrate them along the way.

The first mood-enhancing effect was “feeling up”. By this is meant an increased feeling of energy and well-being. With this improvement, the study participants felt more willing to do the kinds of academic work they needed to get done. For example, one student (Sarah) said:

Everything seems better, and more doable. Sometimes, a lot of the time actually, I'll feel kind of, it's hard to do anything. When I'm walking to the library I'll think, if I didn't have it [Adderall], there's no way I'd get anything done. I'd just sit there in front of my computer, and be not doing anything.

The second mood-enhancing effect, closely related to the first, was “drivenness”. Students who took the drugs felt more driven to do things. As Vrecko puts it, the energy they had “would build up to a point at which there was a surplus or excess that needed to be discharged through activity”. One student described the feeling like this:

I didn't want to stop what I was doing until it was completed up to a certain level of my satisfaction. So I wouldn't ever have to do something and just be, oh, I'm tired, I'll finish it in the morning. I would just finish it.

Ironically, this could have a negative effect too, particularly when the students became driven to do something other than what they should be doing. As one participant put it:

When I take it, I might feel like, “oh, I'm going to start cleaning my room,” or something else. So when it's kicking in, I have to make sure I start telling myself, “ok, it's work time. This is what you've got to do, this is why you're doing it.”

The third mood-enhancing effect was “interestedness”. Several of the participants in Vrecko’s study reported a general inability to get interested in the academic subjects they were taking. But when they took the drugs, things suddenly seemed more interesting:

It just got to where I felt like if I was staring at something I just couldn't take my eyes away from it—it made studying more interesting.

Another positive effect of this was an increased ability to avoid distractions like e-mail, facebook and chatting with friends in the library.

The fourth and final mood-enhancing effect was “enjoyment”. Students who took the drugs found that they were able to enjoy subjects that had previously seemed dull. (This is, obviously, closely allied to the “interestedness”-effect). One student described a particularly remarkable experience while doing an assignment:

I had this paper to write, for a class on art and Romanticism— pretty much the most boring topic I can imagine. Even just finding books in the library annoyed me, like, “why in the hell am I doing this?” But when I started reading [after having taken 20 mg of Adderall], I remember getting just completely absorbed in one book, and then another, and as I was writing I was making connections between them … And I was like, this is really cool, actually enjoying the process of putting ideas together. I hadn't had that before.

In summary, Vrecko’s study is an interesting one. It suggests that one of the main effects of these so-called cognitive enhancing drugs may not be on cognition as such, but, rather, on removing the psychological barriers to doing cognitive work. The study is, however, a small one. The sample of students being interviewed may not be representative (e.g. they may be the “weaker” students who, based on the studies discussed above, seem to get the most positive effect). And there is no placebo control: it’s possible that the drugs themselves are little more than a psychological crutch for the students in question.

4. Conclusion
Let’s now return to the opening question: do the alleged cognitive-enhancing drugs actually work? It’s a difficult question to answer in the abstract. We would need to specify the drug we are interested in and what it would mean for it to “work”. Still, at a general level, I’m slightly more persuaded of their enhancing effects than I was before I read these studies. That may, however, be attributable to my hyper-scepticism prior to doing so.

For me, there are four big takeaways from these studies. The first is that methylphenidate (in particular) seems to have a decent enhancing effect on memory over the long term. The second is that modafinil seems to have a decent enhancing effect on attention for single doses but not for repeated doses. The third is that the extent of the enhancing effect is likely to be greater for those who struggle more in any given cognitive task. And the fourth is that, for students who persist in using them, the major benefit of these drugs may simply be their ability to remove the psychological barriers to getting things done.

Tuesday, September 2, 2014

The Journal Club ♯3 - Street on Why the Price of Theism is Normative Skepticism

This is the third edition of the Philosophical Disquisitions Journal Club. The goal of the journal club is to encourage people to read, reflect upon, and debate some of the latest works in philosophy. The club focuses on work being done in the philosophy of religion (broadly defined). This month we’re looking at the following paper:

Sharon Street “If everything happens for a reason, then we don’t know what reasons are: why the price of theism is normative skepticism” in Bergman and Kain (eds) Challenges to Religious and Moral Belief: Disagreement and Evolution (Oxford: OUP, forthcoming)

Longtime readers of this blog will know that I’m a big fan of Sharon Street’s work in metaethics. Street defends a form of constructivist antirealism, which I find quite attractive. I was thus pleasantly surprised to find that she had also recently written a paper dealing with one of my favourite topics in the philosophy of religion: the problem of evil and its moral implications. It’s a very good paper too, one that I’m sure will provide plenty of fodder for discussion.

In brief, the paper offers a twist on the traditional problem of evil. It argues that theistic belief, coupled with the existence of immense suffering, implies a thoroughgoing normative scepticism. That is: scepticism about what we are (or are not) morally obliged to do. I’ll offer a general summary of the argument below.

1. What is Street’s Argument?
The argument Street wishes to defend is simply stated:

  • (1) If theism is true, then everything happens for a reason.
  • (2) If everything happens for a reason, then we are hopeless judges of what reasons are.
  • (3) But we are not hopeless judges of what reasons are.
  • (4) Therefore, theism is false.

For those of you who might be interested in this kind of thing, the logical structure of this argument involves chained conditionals and the negation of their consequents (if p, then q; if q, then r; not-r; therefore, not-q; therefore, not-p). Street dedicates her attention to premise (2) in the article, but she does say something about the other premises and so we may as well start with them.
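For those who like to see this kind of structure made fully explicit, the double modus tollens can be written out as a short formal proof. The following is a minimal Lean sketch of mine (the proposition letters p, q, r and the theorem name are my own labels, not Street's):

```lean
-- p : theism is true
-- q : everything happens for a reason
-- r : we are hopeless judges of what reasons are
theorem street_argument (p q r : Prop)
    (h1 : p → q)   -- premise (1)
    (h2 : q → r)   -- premise (2)
    (h3 : ¬r)      -- premise (3)
    : ¬p :=        -- conclusion (4): theism is false
  -- Assume p; chain the conditionals to get r; contradict h3.
  fun hp => h3 (h2 (h1 hp))
```

The proof term simply composes the two conditionals and feeds the result to the negation of r, which is exactly the informal chain described above.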

Premise (1) can be defended by appealing to the orthodox conception of a monotheistic god. If god is maximally-powerful and maximally-knowing, then it would indeed seem to follow that everything that happens in our world happens for a reason. Why? Because everything that happens is either a direct or indirect consequence of God’s action or inaction. Either God has deliberately caused that event to occur (or he set up the necessary original conditions); or God allows that event to occur (by not intervening). What follows from this? Well, since God, presumably, must act (or omit) with sufficient moral reason (he is the supreme moral agent after all), it would follow that everything happens for a (moral) reason of some kind. I think this is a plausible argument. Some qualification may need to be added in relation to the “everything”-claim, but I still think a sufficient amount of what happens in the everyday world — including those events at the heart of Street’s argument — would fall within the scope of the qualified version.

Premise (3) can be defended on three grounds. The first is that thoroughgoing normative scepticism is deeply implausible: it clashes with some of our most foundational epistemic commitments. This is something Street pushes quite forcefully in the paper. The second is that it is practically paralysing. This is something I looked into a little in my paper on this topic last year, though I was less bullish than Street on the grounds that certain considerations revolving around moral risk/uncertainty could resolve the paralysis problem. The third is that it would ultimately defeat theism itself, since theism is premised on the belief that there exists a supreme morally good being and Street’s argument, if correct, undermines any claim we might like to make about God’s supposed goodness.

That leaves premise (2) as the main bone of contention. We’ll discuss this in more depth below. Before that, I should say something about how Street understands her argument. It is, as she herself notes, just another version of the well-worn problem of evil, one that gets into some very similar issues as does the contemporary debate about sceptical theism. But there are some differences too. First, her argument takes the form of neither the classic logical version nor the evidential version of the problem; it is, instead, a reductio. It is claiming that theism has practically and theoretically absurd implications, not that there is some logical inconsistency between the existence of evil and the existence of god, nor that the sheer volume of evil in the world provides evidence against his existence. (For what it’s worth, I’m not entirely persuaded by this distinction: it seems to me like traditional versions of the problem of evil have those absurd implications as well). Second, her argument is not intended as a reply to sceptical theism. It is a freestanding problem for theists, irrespective of their commitment to sceptical theism.

2. The Practical Dilemma and the Absurdity of Theism
Street’s argument presents the theist with a practical epistemic dilemma. To appreciate the dilemma, we need to appreciate some beliefs that all (or virtually all) theists are likely to share. First, there is the commitment to some sort of moral truth. That is: something about the nature of reality that allows us to meaningfully and appropriately apply labels like “good”, “bad”, “right”, and “wrong” to objects, events and states of affairs. Indeed, god is one of the things to which theists are most inclined to apply such labels. He is a supremely good being who acts in a morally justified manner. Second, there is the (prima facie) commitment to certain standard moral obligations. Street gives the example of the obligation to intervene to prevent horrific suffering (which she illustrates with a particular case in the paper). To be clear, this obligation is not absolute. Street emphasises that it is always possible for overriding conditions to arise that defeat the prima facie obligation. But, in the absence of such conditions, the obligation is something we generally agree upon.

The question then is what happens when we are confronted with cases in which god allows horrific suffering to occur. Given the standard conception of god, he could have intervened to prevent such suffering. The fact that he allows this to happen (over and over and over again) has consequences for our epistemic commitments. Street uses the analogy of a man who lives down the road from you and whom you believe to be “good”. Suppose one day you learn that the man sat back impassively while he watched his children drown in his swimming pool. Would you still be committed to the belief that he is “good”? Presumably not. Presumably, learning this fact would force you to revise your belief.

Similar revisions are forced upon the theist every time they learn of an instance of horrific suffering that is not prevented by god. Only their revisions are not as straightforward. Prior to learning of the instance they have two epistemic commitments: (a) that god exists and he is good; and (b) there is a moral obligation to prevent horrific suffering. After learning of the instance of evil they will have to revise one of those commitments. Theists will probably seek to revise (b), clinging to the belief that god is good, but Street argues that this is the more implausible view. Revising our commitment to (b) does much more violence to our overall worldview. This is what is being defended in premise (2).

Of course, the argument isn’t fully persuasive yet. The typical theist will try to avoid the horns of this alleged dilemma. They will say that God’s permission of suffering need not impact upon their everyday moral commitments. This is because special circumstances arise (call these “C”) that provide sufficient moral reason for God’s permission of suffering, but that do not affect our moral obligations. Is this reply plausible? That’s the question at the heart of this debate.

3. Why the Dilemma is Significant
Here’s where Street’s paper gets really interesting. She carefully dissects the typical theist reply and argues that, no matter how you slice it, the dilemma remains. To do this effectively, she introduces a number of conceptual distinctions. First, she distinguishes between agent-neutral and agent-relative reasons for action:

Agent-Neutral Reasons: These are reasons for action that apply to all agents, irrespective of who or where they are.
Agent-Relative Reasons: These are reasons for action that apply to specific agents and/or in specific circumstances.

Our moral reasons come in both forms. Sometimes all agents have the same moral obligation, e.g. do not kill another person. Sometimes obligations are limited to a specific subset of agents, e.g. parents must look after their children. God’s reasons for action could also come in both forms. It could be that his reasons for allowing suffering apply to all agents, or it could be that his reasons for action only apply to him (given his unique properties). Since we don’t know which scenario obtains, we have to consider both (actually, we have to consider a third as well: the possibility that his reasons for action are sometimes agent-neutral and sometimes agent-relative).

In addition to this conceptual distinction, Street identifies another that bears directly on our (as opposed to God’s) reasons for action. This is the distinction between fact-relative and evidence-relative reasons for action:

Fact-Relative Reasons: These are reasons for doing or forbearing from some action, in some particular case, that arise because of the facts of the case. For example, if it is the case that tripping a pedestrian has the consequence of preventing him/her from being killed by an oncoming vehicle, then there is a fact-relative reason for tripping the pedestrian. 
Evidence-Relative Reasons: These are reasons for doing or forbearing from some action, in some particular case, that arise because of the evidence you have at your disposal. For example, in the case of the pedestrian, you presumably could not know that tripping him/her would prevent a fatal accident. All the evidence at your disposal would suggest that tripping a pedestrian is wrong. Consequently, you would have no evidence-relative reason for tripping him/her.

The theist could, perhaps, avoid Street’s dilemma by arguing that god’s permission of great suffering either (a) provides them with no fact-relative reason for questioning their ordinary moral obligations (because they are agent-relative in nature); or (b) even if it does provide them with fact-relative reasons, it provides them with no evidence-relative reason for questioning their ordinary moral obligations. It is this line of reasoning that Street calls into question. She does so by going through the three scenarios mentioned above.

Scenario 1: God’s reasons for action are agent-neutral - In this scenario, god’s permission of suffering does indeed provide us with a fact-relative reason for disregarding our prima facie moral obligations. Nevertheless, the theist will argue that this fact-relative reason doesn’t spill over into the evidence-relative domain. Street says this is wrong. If God’s reasons for action are agent-neutral, then everything that happens is a piece of evidence that bears upon our moral obligations. Using the example of fatal road collisions, she argues that the fact that one such incident occurs (on average) every 52 minutes in the US gives us a reason to think it permissible to permit such incidents. We don’t know exactly why they are permissible, but the fact that they occur — combined with the fact that God’s reasons for action are agent-neutral — is enough to give us an evidence-relative reason for questioning our ordinary moral beliefs. As she puts it herself:

On scenario 1, all of history is converted to a source of evidence about our fact-relative reasons with respect to evils. And as soon as one has evidence that one has a fact-relative reason of a certain kind, one has information of direct practical relevance; in other words, one now has an evidence-relative reason too. On the assumption of scenario 1, we have indisputable evidence that there is, on a regular basis, fact-relative reason for us to permit evils. The only rational response to this evidence is to increase one’s credence, in the case of any given unfolding potential evil, that there is good reason to permit the evil to occur, even though one won’t have any idea in virtue of what.
(p. 11, online version)
So much for that. What about the other possibilities?

Scenario 2: God’s reasons for action are agent-relative - This scenario seems much more promising for the theist. It claims that God’s permitting evil provides us with no fact-relative reason for questioning our ordinary moral obligations. This in turn suggests that we have no evidence-relative reason for questioning those obligations. But this view is problematic. The problems emerge when we ask where we are supposed to learn of our moral reasons. There are two possibilities: either there is some plausible secular moral epistemology, or there is some theistic moral epistemology. We’ll consider the former possibility later on since it is one that most theists will wish to avoid (as it would effectively concede that morality is not grounded, ontologically or epistemically, in god). Let’s focus on the latter for the time being.

If moral knowledge must ultimately be grounded in god, how are we to come by knowledge of our moral reasons? It cannot be done by observing what happens in nature (i.e. what god permits) as that only has relevance for god’s reasons for action, not ours. So we must come by it through some form of divine communication (where this could take a number of forms, e.g. Bible, personal revelation or voice of conscience). But this suggestion is itself highly problematic. For it to work, we would have to have some way of reliably distinguishing false communications from real ones. We would also have to have some way of working out exactly what those communications demanded of us in particular circumstances. I won’t get into the full details here, but Street argues that we have neither of these things. All alleged communications from God are of doubtful provenance or of ambiguous and vague scope. This impacts upon our evidence-relative reasons for action. For if we have reason to question all alleged communications from god, we also have reason to doubt whether we have access to moral reasons at all.

So what about relying on a secular moral epistemology? Could a theist successfully pair up his/her belief in god with a secular approach to moral epistemology? Uneasy bedfellows though they may be, Street agrees that this is possible. The only problem is that it too leads to normative scepticism. Unfortunately, this part of the paper is pretty sketchy. This is because Street points to other work she has done to complete the argument. First, through her articulation of the “Darwinian Dilemma” she has argued that non-natural normative realism leads to normative scepticism. Second, she argues that normative antirealism isn’t a good fit with theism. The antirealist thinks that our normative commitments are constructed out of certain foundational attributes of our agency. On this account, a normative judgement is true if it withstands scrutiny from our own practical point of view (from our own foundational evaluative attitudes). The problem is that this doesn’t sit well with the notion that God has a moral reason for permitting suffering. To accept this, the antirealist would have to accept that they don’t really understand their own foundational evaluative attitudes. And accepting this would land them in a state of profound normative scepticism. They would no longer be able to trust their own judgment about what is right or wrong. So we end up in the same place.

Scenario 3: God’s reasons are sometimes agent-neutral and sometimes agent-relative: We no longer need to consider this scenario. Since Street has argued that normative scepticism arises no matter what the nature of god’s reasons are, it follows that normative scepticism would arise on scenario 3 as well.

This means that Street thinks her basic contention is correct: believing in theism is not compatible with our ordinary moral beliefs.

So what do people think? Is this a good/interesting argument? What are its weaknesses/flaws?

Comments are welcome below.