Thursday, May 21, 2015

New Paper - Common Knowledge, Pragmatic Enrichment and Thin Originalism

I have a new paper coming out in the journal Jurisprudence. It's one of my legal philosophy pieces. It tries to apply certain ideas from linguistic philosophy to debates about constitutional interpretation. Specifically, it looks at an interpretive theory known as originalism (often beloved by conservative legal scholars) and a linguistic concept known as pragmatic enrichment (the fact that the meaning of an utterance is often enriched by the context in which it is uttered). Originalists think that we ought to interpret constitutional texts in accordance with their original meaning. I use a range of ideas, including some ideas from Steven Pinker (and colleagues') theory of strategic speech and common knowledge, to critique some of the core commitments of originalism. Full details are below. The official version of this paper won't be out for awhile, but you can access a pre-publication version of it below.

Title: Common Knowledge, Pragmatic Enrichment and Thin Originalism
Journal: Jurisprudence
Links: Official; Philpapers; Academia 
Abstract: The meaning of an utterance is often enriched by the pragmatic context in which it is uttered. This is because in ordinary conversations we routinely and uncontroversially compress what we say, safe in the knowledge that those interpreting us will “add in” the content we intend to communicate. Does the same thing hold true in the case of legal utterances like “This constitution protects the personal rights of the citizen” or “the parliament shall have the power to lay and collect taxes”? This article addresses this question from the perspective of the constitutional originalist — the person who holds that the meaning of a constitutional text is fixed at some historical moment. In doing so, it advances four theses. First, it argues that every originalist theory is committed to some degree of pragmatic enrichment, the debate is about how much (enrichment thesis). Second, that in determining which content gets “added in”, originalists typically hold to a common knowledge standard for enrichment, protestations to the contrary notwithstanding (common knowledge thesis). Third, that the common knowledge standard for enrichment is deeply flawed (anti-CK thesis). And fourth, that all of this leads us to a thin theory of original constitutional meaning — similar to that defended by Jack Balkin and Ronald Dworkin — not for moral reasons but for strictly semantic ones (thinness thesis). Although some of the theses are extant in the literature, this article tries to defend them in a novel and perspicuous way. 

Saturday, May 16, 2015

Interview on the Ethics of Moral Enhancement

I had the great honour of being a guest on the Smart Drugs Smarts podcast this week. I spoke to the host Jesse Lawler on a wide range of topics, all circling around the issue of moral enhancement and the extended mind thesis. If you want to listen, you can find it here.

This is a small sample of the topics we discussed:

  • The nature of enhancement and the difference between moral and cognitive enhancement.
  • Is morality too radically context-dependent to be a good candidate for 'enhancement'?
  • The extended mind hypothesis and criticisms thereof
  • A brief snapshot of my argument for favouring internal forms of enhancement over external forms.

Smart Drugs Smarts is a podcast dedicated to cognitive enhancement, nootropics and neuroscience. I recommend checking it out. It is definitely the best resource I've come across on the practicalities of cognitive enhancement.

Tuesday, May 12, 2015

Arif Ahmed's Case Against the Resurrection

Arif Ahmed

This post is a slightly polished-up set of notes I took while watching Arif Ahmed’s debate with Gary Habermas on the plausibility of the resurrection. The debate is quite old at this stage (probably 7 or 8 years at least), and I watched it quite some time ago, but it is often highly regarded among internet atheists. Why? Because it is one of those cases in which the non-religious party seems to have got the better of the argument.

I’m not a huge fan of public debates myself. I see some value to them in relation to political and social questions, but less in relation to classic philosophical questions (particularly those relating to philosophy of religion). I think dialogues are generally better. They are usually less combative and ill-tempered, and can involve some genuinely useful probing and questioning of alternative points of view. Still, I can’t resist watching the odd debate and I must confess that among atheist debaters, Arif Ahmed is my favourite. He has a relentlessly logical and well-ordered approach. He also has the advantage of being a highly competent philosopher (check out his professional work on decision theory or his books on Kripke and Wittgenstein).

The success of his debate against Habermas was, I believe, down to two things. First, he tried to undercut Habermas’s entire argumentative method before Habermas had a chance to speak. Habermas, for those who don’t know, is notorious for using an ‘argument from scholarly consensus’ to support the historicity of the resurrection. Ahmed showed why that wasn’t very appropriate in a public debate of this sort. Second, Ahmed himself used some disarmingly simple arguments in his case against the resurrection.

It is those arguments that I wish to summarise in this post. I know this has been done by others before, but I want to provide some elaboration and context. I do so not because I think these arguments are logically watertight and above reproach — far from it. I do so because I think the three arguments Ahmed presents provide a model for building a philosophical case against a claim in a short space of time (his speech lasts less than 20 minutes). I also do so because I want to try to understand how these arguments better. I’m not well-versed on the whole miracles/resurrection debate (though I’ve written some stuff about it) and I want to learn more.

For those who are interested, I have embedded a video of the debate at the end of this post. You might like to watch it before or after reading this.

1. The Implied Case for the Resurrection Hypothesis
Before looking at Ahmed’s arguments, I think it is worth pondering what he is arguing against. This is not explicitly stated in his talk, but it is implied. First of all, he is arguing against something like the following hypothesis:

Resurrection Hypothesis: Jesus was crucified, died and was then raised bodily from the dead by God.

This is a reasonably austere version of the hypothesis. I suspect a more detailed version could be specified that would state the rationale or justification for God raising Jesus from the dead (part of the general plan for atonement/salvation). But I’ll stick with the austere version for now.

There are two things to note about the resurrection hypothesis. The first is that it depends on the truth of a factual/historical claim, namely: that Jesus died and was then seen alive several days later (in an embodied not a spiritual form). The second is that it posits a supernatural explanatory force to account for that fact, namely: God.

Christians typically defend the resurrection hypothesis by using an inference to best explanation style of argument. This is what William Lane Craig does in his defence of the resurrection hypothesis. More sophisticated argumentative structures — such as those adopted by Richard Swinburne — use probability theory. I won’t go into those sophisticated versions here since Ahmed doesn’t appeal to them. He sticks to a fairly intuitive and commonsensical understanding of probability, which makes sense given the format of the debate. Unless you are guaranteed an audience of mathematicians, it is pretty difficult to give a persuasive bayesian argument in less than 20 minutes.

The inference to best explanation in favour of the resurrection hypothesis works a little something like this:

  • (1) It is a historical fact that Jesus died and rose from the dead.

  • (2) There is no naturalistic explanation that could account for this historical fact.

  • (3) The resurrection hypothesis is the only plausible supernatural explanation for this historical fact.

  • (4) Therefore, the resurrection hypothesis is (probably) true.

I have constructed this in a way that best makes sense of Ahmed’s subsequent arguments. One thing worth noting is that this is not a formally valid argument. The conclusion does not in fact follow from the premises. This is a problem with all inferences to best explanation. They are, at best, probabilistic and defeasible in nature.

The main body of Ahmed’s talk presents three argument for doubting this inference to best explanation. Let’s go through each of these now.

2. The Modified Humean Argument
Ahmed’s first argument targets the first premise of the preceding argument. It is a modified version of Hume’s classic (and often misunderstood) argument against testimonial evidence for miracles. Ahmed doesn’t appeal to Hume’s work in his defence of this argument. Instead, he uses a three simple analogies to explain the idea:

Water Temperature1: Suppose you have a bucket filled with water and there are five thermometers in the water giving a temperature readout. Now imagine that the thermometers say that the water is 10 C. You put your hand into the water and it feels reasonably cool.

Water Temperature2: Suppose you have a bucket filled with water and that there are five thermometers in the water. This time the thermometers say that the water is at 30 C. You put your hand in the water and it feels reasonably cool.

Water Tempoerature3: Again, suppose you have a bucket filled with water and that there are five thermometers in the water. This time the thermometers say that the water is at 600 C. You put your hand in and it feels reasonably cool.

In each of these cases we have several independent measuring devices providing us with evidence regarding the water’s temperature and we also have the evidence of our own senses. Ahmed’s question is what should we believe regarding the temperature readout on the thermometers in these three cases: Should we believe that they accurately record temperature or not?

The answer varies depending on the content of the readout and our other sources of evidence. In the first case, we would probably accept that the temperature readout is accurate. The water feels cool to the touch, it is in a liquid state, and it conforms with our background evidence about how water and temperature work. In the second case, we might wonder why the water still feels pretty cool to the touch, but since our ability to discriminate temperatures based on touch is doubtful, and we have five thermometers confirming that the water is at 30 C, we would still be right to believe that they accurately record the temperature. The third case is very different. Water has never be observed in a liquid state, at sea level, at 600 C. We have repeatedly observed water at temperatures significantly higher than 100 C as being gaseous in nature. We also know, from repeated experience, that water at temperatures like that would be extremely hot to the touch. So in this case, we have strong evidence to suggest that the thermometers are misleading guides to the true temperature of the water.

How does this analogy apply? Ahmed doesn’t spell it out directly but I presume the idea is that when it comes to the historicity of Jesus’s death and resurrection, we don’t have any direct evidence of what happened. We have second-hand (possibly worse than that) evidence handed down to us through a series of texts. In other words, we have a set of ‘resurrection narratives’, which may or may not join up with actual eyewitnesses to the event. These narratives are like the thermometers: they are the measuring devices for what happening in the distant past. Should we trust them?

Ahmed’s argument is that our position relative to the evidence in the case of the resurrection narrative is akin to our position in relation to thermometers in the third version of the water temperature analogy. Let’s assume, for sake of argument, that the narratives do genuinely link up with original eyewitness evidence. In that case, the eyewitness accounts are akin to the thermometers. We know that eyewitness accounts can be misleading. To this effect, Ahmed cites Robert Buckhout’s famous 1975 paper on the reliability of eyewitness evidence LINK. He could have cited many more. We also know that there is near uniform evidence against the hypothesis that people, once dead, rise bodily from the dead. This near-uniformity of evidence against the possibility of bodily resurrection should cause us to doubt the testimony.

What is the formal argument? In his talk, Ahmed mentions that the argument has three premises but as best I can tell, he never clearly articulates them. Instead, he keeps referring to a handout, which alas I do not have. So here’s my attempt to reconstruct the argument:

  • (5) If we have frequently observed X’s providing an unreliable guide to the truth, and if we have never observed Y, then in any case in which X provides support for Y, it is more likely that X is misleading us than that Y is true.

  • (6) We have frequently observed cases in which intelligent and competent eyewitnesses have lied or made mistakes about what they have seen.

  • (7) We have never observed a case in which a human body has risen from the dead (and been able to pass through solid walls etc.)

  • (8) Therefore, it is more likely that eyewitness testimony concerning Jesus’s bodily resurrection is mistaken, than that the resurrection actually occurred.

The first premise of this argument (5) is effectively a modified version of Hume’s key premise in Of Miracles.

Is the argument any good? I won’t give a complete answer to that right now. I think there is merit to it — particularly in light of the fact that it grants the best case scenario to the defender of the resurrection hypothesis (viz. that the Gospel accounts and the account of St. Paul are, in fact, genuine eyewitness accounts). Nevertheless, I also think that a more thorough discussion of Hume’s principle is in order, as is a more thorough survey of when exactly eyewitness testimony is likely to be trustworthy and when it is likely to be misleading.

3. The Argument from the Historical Inadequacy of Supernatural Explanations
Ahmed’s first argument is, by far, his most complicated. His two remaining arguments are much more straightforward and, in my opinion, more effective. They work in that classic philosophical tradition of conceding a huge amount of ground to the pro-resurrection side, but still arguing that belief in the resurrection hypothesis is unreasonable.

The first of those arguments is the argument from the historical inadequacy of supernatural explanations. The name is a bit of a mouthful but the idea is simple enough. Suppose we grant premise (1) of the original argument, i.e. we accept accept the resurrection narrative as a historical fact. This means that we believe that ancient people saw Jesus die and then say him again three days later. Even if we grant that, we still have no reason to favour a supernatural explanation over a natural explanation. This is because in all historical cases in which we did not have a viable naturalistic explanation and hence appealed to supernatural explanations, we later learned that those explanations were wrong and that there were sound natural explanations. Ahmed gives examples such as the mystery surrounding the construction of the pyramids, or the appeal to some mystical ‘life force’ to explain the difference between life and non-life. The latter was abandoned thanks to the work of scientists like Darwin and Watson and Crick.

The upshot of this is that, even if we thought that the resurrection narrative was true, we should not run into the arms of the resurrection hypothesis. We should simply suspend belief. It is more likely that we will come across viable naturalistic explanation in the future. Ahmed makes a special point about hallucinations at this stage of his argument. Apologists often claim that the resurrection narrative could not be based on hallucinations of the risen Jesus because there are simply too many witnesses. For instance, in the writings of St. Paul it is alleged that 500 people saw the risen Jesus at one time, and apologists sometimes claim that it is not possible for 500 people to hallucinate the same thing at the same time. Let’s assume that this is correct. Ahmed says that this is still not a good ground for believing the resurrection hypothesis. We are still relatively ignorant about the workings of the human mind. It is more likely that there is some, as yet improperly understood, mechanism for mass hallucinations (to be explained in naturalistic terms) than that there is some supernatural force responsible for a bodily resurrection.

It is difficult to detect the formal argument in all of this, and, again, Ahmed doesn’t state the premises explicitly in his talk but I think it works something like this:

  • (9) There were many historical cases in which we had no good naturalistic explanation for X at T1 but we later discovered a good naturalistic explanation for X at T2; conversely, there are no historical cases in which supernaturalistic explanations seem to win out over naturalistic ones in the long-run.

  • (10) Therefore, in any case in which we lack a naturalistic explanation for X at T1 it is more likely that there will be a plausible naturalistic explanation for X at T2 than that any posited supernaturalistic explanation is true.

  • (11) We do not currently have a plausible naturalistic explanation for the resurrection narrative.

  • (12) Therefore, nevertheless, it is still more likely that there will be a plausible naturalistic explanation for the narrative in the future, than that the resurrection hypothesis is true.

I know this is messy and I would appreciate any suggestions for cleaning it up. I think the main problem with it is in relation to the inference from (9) to (10). It is obviously an inductive inference. We generate the principle for denying supernaturalistic explanations from historical success cases. But is that a warranted inference? All inductive inferences are somewhat vulnerable. It appeals to me because I’m a fan of naturalistic explanations and I’m pretty closed to the idea of there ever being a successful supernaturalistic explanation. But I think the terminology being used here is vague, and I think religious believers will be disinclined to accept the inference.

4. The Argument from the Unconstrained Nature of Supernaturalistic Explanation
This brings us to the last of Ahmed’s arguments. This one is actually my favourite because I think it highlights a genuine problem for any proponent of a supernaturalistic explanation. It may also help to justify the claim I made in the previous paragraph about being pretty closed to the notion of a supernaturalistic explanation.

The argument starts with a major concession to the pro-resurrection side. It concedes that there is no possible naturalistic explanation of the resurrection narrative. Consequently, the only possible form of explanation will be supernaturalistic in nature. Granting this, it still does not follow that the resurrection hypothesis is the most plausible explanation.

To illustrate the problem, we can go back to Paul’s claim that 500 people saw the risen Jesus at the same time. As noted, apologists reject naturalistic explanations for this vision on the grounds that it is not plausible to suppose that 500 people hallucinate the same thing. Therefore the explanation must be supernaturalistic. But notice what is happening here: the assumption that 500 people cannot hallucinate at the same time is based on an inference from empirical and naturalistic constraints. The claim is that we have no good evidence for such mass hallucinations and we have no reason to think that there is a naturalistic mechanism that could account for that hallucination. To put it another way, we are supposing that it is empirically implausible for such a thing to happen based on what we know of the working of the human mind.

But in assuming such constraints, we are sticking to the rigours of the naturalistic worldview. If we are entitled to abandon those rigours, and appeal instead to possible supernatural forces, then we are no longer entitled to those constraints. If we are going to appeal to the supernatural, then we could just as easily suppose that there is some supernaturally induced mass hallucination as that there is a supernaturally induced bodily resurrection. In other words, once we abandon the constraints of naturalistic explanation, all bets are off. There are innumerable possible supernaturalistic explanations for the resurrection narrative (maybe its all a divine lie; maybe its the product of the devil; and so on). We have no reason for favouring one over another.

To dress this up in more formal garb:

  • (13) There are no probabilistic constraints on what makes for a good supernatural explanation, i.e. if we are going to appeal to a supernaturalistic explanation for X, there is no reason to endorse one supernatural explanation over another.

  • (14) The resurrection hypothesis is a supernatural explanation for the resurrection narrative, but there are many such explanations that may not entail an actual bodily resurrection (e.g. divine hallucination; devil’s deception etc.).

  • (15) Therefore, there is no reason to favour the resurrection hypothesis over some alternative supernatural explanations that entail the same facts.

This, again, is a little messy but I’m trying to extrapolate from what Ahmed says. I suspect theists will have a problem with this insofar as they will try to argue that not all supernatural explanations are on a par. There are some desiderata we can use when deciding between supernatural explanations and these might allow us to favour the resurrection hypothesis.

I think there are two to be said in response to this. First, it is important to realise that this argument does not claim that all possible supernatural forces must be viewed equally. The posited supernatural force must actually entail the facts (probabilistically or otherwise) it is alleged to explain. Second, I think that even if you could say that theistic explanations are better than other possible supernatural explanations, you would still be prevented from favouring one theistic explanation over another. Why? Well, as skeptical theists are keen to point out, we do not really know the mind of god. He could have beyond-our-ken reasons for allowing all manner of things to occur. But this, as critics of the skeptical theist position have pointed out, means that it is difficult to say why we should favour any theistic explanation over another. We might like to think that the bible is the authoritative and truthful word of God. But God could have beyond-our-ken reasons for deceiving us as to the historical truth.

5. Conclusion
Okay, so that’s it for this post. I think it is worth finishing up by noting, once more, the nice structure to Ahmed’s case. He contests all the main parts of the resurrectionist’s inference to best explanation. He starts by challenging their reliance on biblical testimony; he then challenges their dismissal of naturalistic explanations for the resurrection narrative; and he then challenges the grounds upon which they favour the resurrection hypothesis over all other supernatural explanations. At each stage, he concedes more territory to their position, but still maintains that those concessions do not favour their preferred conclusion.

This doesn’t mean that Ahmed’s arguments are infallible or overwhelmingly persuasive. There is plenty that could be challenged and disputed. But as an opening salvo in a public debate, I think it does a nice job.

Anyway, you may as well watch the whole debate now:

Sunday, May 10, 2015

Are AI-Doomsayers like Skeptical Theists? A Precis of the Argument

Some of you may have noticed my recently-published paper on existential risk and artificial intelligence. The paper offers a somewhat critical perspective on the recent trend for AI-doomsaying among people like Elon Musk, Stephen Hawking and Bill Gates. Of course, it doesn’t focus on their opinions; rather, it focuses on the work of the philosopher Nick Bostrom, who has written the most impressive analysis to date of the potential risks posed by superintelligent machines.

I want to try and summarise the main points of that paper in this blog post. This summary comes with the usual caveat that the full version contains more detail and nuance. If you want that detail and nuance, you should read that paper. That said, writing this summary after the paper was published does give me the opportunity to reflect on its details and offer some modifications to the argument in light of feedback/criticisms. If you want to read the full version, it is available at the links in the brackets, though you should note that the first link is pay-walled (official; philpapers; academia).

To give a general overview: the argument I present in the paper is based on an analogy between a superintelligent machine and the God of classical theism. In particular, it is based on an analogy between an argumentative move made by theists in the debate about the existence of God and an argumentative move made by Nick Bostrom in his defence of the AI doomsday scenario. The argumentative move made by the theists is called ‘skeptical theism’; and the argumentative move made by Nick Bostrom is called the ‘treacherous turn’. I claim that just as skeptical theism has some pretty significant epistemic costs for the theist, so too does the treacherous turn have some pretty significant epistemic costs for the AI-doomsayer.

That argument might sound pretty abstract right now. I hope to clarify what it all means over the remainder of this post. I’ll break the discussion down into three main parts. First, I’ll explain what skeptical theism is and why some people think it has significant epistemic costs. Second, I’ll explain Bostrom’s AI-doomsday argument and illustrate the analogy between his defence of that argument and the position of the skeptical theist. And third, I will outline two potential epistemic costs of Bostrom’s treacherous turn, building once more on the analogy with skeptical theism.

(Note: my spelling of ‘skeptical’ has rarely been consistent: sometimes I opt for the US spelling; other times I opt for the British spelling. I can’t explain why. I think it depends on my mood)

1. The Epistemic Costs of Skeptical Theism
I want to start with an interpretive point. Those who have read up on the debate about the technological singularity and the rise of superintelligent machines will know that analogies between the proponents of those concepts and religious believers are pretty common. For instance, there is the popular slogan claiming that the singularity is ‘the rapture for nerds’, and there are serious academics arguing that belief in the rise of superintelligent machines is ‘fideistic’, i.e. faith-based. These analogies are, as best I can tell, intended to be pejorative.

In appealing to a similar analogy there is a risk that my claims will also be viewed as having a pejorative air to them. This is not my intention. In general, I am far more sympathetic to the doomsaying position than other critics. Furthermore, and more importantly, my argument has a pretty narrow focus. I concede a good deal of ground to Bostrom’s argument. My goal really is to try to ‘debug’ the argumentative framework that is being presented; not to tear it down completely.

With that interpretive point clarified, I will move on. I first need to explain the nature the skeptical theist position. To understand this, we need to start with the most common argument against the existence of God: the problem of evil. This argument claims, very roughly, that the existence of evil (particularly the gratuitous suffering of conscious beings) is proof against the existence of God. So-called ‘logical’ versions of the problem of evil claim that the existence of evil is logically incompatible with the existence of God; so-called evidential versions of the problem of evil claim that the existence of evil is good evidence against the existence of God (i.e. lowers the probability of His existence).

Central to many versions of the problem of evil is the concept of ‘gratuitous evil’. This is evil that is not logically necessary for some greater outweighing good. The reason for the focus on this type of evil is straightforward. It is generally conceded that God could allow evil to occur if it were necessary for some outweighing good; but if it is not necessary for some outweighing good then he could not allow it in light of his omnibenevolence. So if we can find one or two instances of gratuitous evil, we would have a pretty good case against the existence of God.

The difficulty is in establishing the one or two instances of gratuitous evil. Atheologians typically go about this by identifying particular cases of horrendous suffering (e.g. Street’s case study of the young girl who was decapitated in a car accident and whose mother held her decapitated head until the emergency services arrived) and making inductive inferences. If it seems like a particular instance of suffering was not logically necessary for some greater outweighing good, then it probably is a case of gratuitous suffering and probably does provide evidence against the existence of God.

This is where skeptical theists enter the fray. They dispute the inductive inference being made by the atheologians. They deny that we have any warrant (probabilistic or otherwise) for going from cases of seeming gratuitous evil to cases of actual gratuitous evil. They base this on an analogy between our abilities and capacities and those of God. We live finite lives; God does not. We have limited cognitive and evaluative faculties; God does not. There is no reason to think that what we know of morality and the supposed necessity or contingency of suffering is representative of the totality of morality and the actual necessity or contingency of suffering. If we come across a decapitated six-year old and her grieving mother then it could be, for all we know, that this is logically necessary for some greater outweighing good. It could be a necessary part of God’s plan.

This position enables skeptical theists to avoid the problem of evil, but according to its critics it does so at a cost. In fact, it does so at several costs, both practical and epistemic. I won’t go into detail on them all here since I have written a lengthy series of posts (and another published paper) about them already. The gist of it is that we rely on inductive inferences all the time, especially when making claims that are relevant to our religious and moral beliefs. If we are going to deny the legitimacy of such inferences based on the cognitive, epistemic and practical disparities between ourselves and God, then we are in for a pretty bumpy ride.

Two examples of this seem apposite here. First, as Erik Wielenberg and Stephen Law have argued, if we accept the skeptical theist’s position, then it seems like we have no good reason to think that God would be telling us the truth in his alleged revealed texts. Thus, it could be that God’s vision for humanity is very different from what is set out in the Bible, because he has ‘beyond our ken’ reasons for lying. Second, if we accept the skeptical position, then it seems like we will have to embrace a pretty radical form of moral uncertainty. If I come across a small child in a forest, tied to a tree, bleeding profusely and crying out in agony, I might think I have a moral duty to intervene and alleviate the suffering, but if skeptical theists are correct, then I have no good reason to believe that: there could be beyond-my-ken reasons for allowing the child to suffer. In light of epistemic costs of this sort, critics believe that we should not embrace skeptical theism.

Okay, that’s enough about skeptical theism. There are three key points to note as we move forward. They are:

A. Appealing to Disparities: Skeptical theists highlight disparities between humans and God. These disparities relate to God’s knowledge of the world and his practical influence over the world.
B. Blocking the Inductive Inference: Skeptical theists use those disparities to block certain types of inductive inference, in particular inferences from the seemingly gratuitous nature of an evil to its actually gratuitous nature.
C. Significant Epistemic Costs: The critics of skeptical theism argue that embracing this position has some pretty devastating epistemic costs.

It is my contention that all three of these features have their analogues in the debate about superintelligent machines and the existential risks they may pose.

2. Superintelligence and the Treacherous Turn
I start by looking at the analogues for the first two points. The disparities one is pretty easy. Typical conceptions of a superintelligent machine suppose that it will have dramatic cognitive and (possibly) practical advantages over us mortal human beings. It is superintelligent after all. It would not be the same as God, who is supposedly maximally intelligent and maximally powerful, but it would be well beyond the human norm. The major difference between the two relates to their supposed benevolence. God is, according to all standard conceptions, a benevolent being; a superintelligent machine would, according to most discussions, not have to be benevolent. It could be malevolent or, more likely, just indifferent to human welfare and well-being. Either way, there would still be significant disparities between humans and the superintelligent machine.

The second analogy — the one relating to blocking inductive inferences — takes a bit more effort to explain. To fully appreciate it, we need to delve into Bostrom’s doomsday argument. He himself summarises the argument in the following way:

[T]he first superintelligence may [have the power] to shape the future of Earth- originating life, could easily have non-anthropomorphic final goals, and would likely have instrumental reasons to pursue open-ended resource acquisition. If we now reflect that human beings consist of useful resources...and that we depend on many more local resources, we can see that the outcome could easily be one in which humanity quickly becomes extinct. 
(Bostrom 2014, 116)

Some elaboration is in order. In this summary, Bostrom presents three key premises in his argument for the AI doomsday scenario. The three premises are:

  • (1) The first mover thesis: The first superintelligence, by virtue of being first, could obtain a decisive strategic advantage over all other intelligences. It could form a “singleton” and be in a position to shape the future of all Earth-originating intelligent life.

  • (2) The orthogonality thesis: Pretty much any level of intelligence is consistent with pretty much any final goal. Thus, we cannot assume that a superintelligent artificial agent will have any of the benevolent values or goals that we tend to associate with wise and intelligent human beings (shorter version: great intelligence is consistent with goals that pose a grave existential risk).

  • (3) The instrumental convergence thesis: A superintelligent AI is likely to converge on certain instrumentally useful sub-goals, that is: sub-goals that make it more likely to achieve a wide range of final goals across a wide-range of environments. These convergent sub-goals include the goal of open-ended resource acquisition (i.e. the acquisition of resources that help it to pursue and secure its final goals).

These premises are then added to the claim that:

  • (4) Human beings consist of and rely upon resources for our survival that could be used by the superintelligence to reach its final goals.

To reach the conclusion:

  • (5) A superintelligent AI could shape the future in a way that threatens human survival.

I’m glossing over some of the details here but that is the basic idea behind the argument. To give an illustration, I can appeal to the now-classic example of the paperclip maximiser. This is a superintelligent machine with the final goal of maximising the number of paperclips in existence. Such a machine could destroy humanity in an effort to acquire more and more resources for making more and more paperclips. Or so the argument goes.

But, of course, this seems silly to critics of the doomsayers. We wouldn’t be creating superintelligent machines with the goal of maximising the number of paperclips. We would presumably look to create superintelligent machines with goals that are benevolent and consistent with our survival and flourishing. Bistro knows this. The real problem, as he points out, is ensuring that a superintelligent machine will act in a manner that is consistent with our values and preferences. For one thing, we may have troubling specifying the goals of an AI in a way that truly does protect our values and preferences (because they are interminably vague and imprecise). For another, once the AI crosses a certain threshold of intelligence, we will cease to have any real control over its development. It will be much more powerful and intelligent than we are. So we need to ensure that it is benevolent before we reach that point.

So how can we avoid this existential threat? One simple answer is to engineer superintelligent machines in a controlled and locked-down environment (a so-called ‘box’). In this box — which would have to replicate real world problems and dynamics — we could observe and test any intelligent machine for its benevolence. Once the machine has survived a sufficient number of tests, we could ‘release’ it from the locked-down environment, safe in the knowledge that it poses no existential threat.

Not so fast says Bostrom. In appealing to this ‘empirical testing’ model for the construction of intelligent machines, the critics are ignoring how devious and strategic a superintelligent machine could be. In particular, they are ignoring the fact that the machine could take a ‘treacherous turn’. Since this concept is central to the argument I wish to make, it is important that it be defined with some precision:

The Treacherous Turn Problem: An AI can appear to pose no threat to human beings through its initial development and testing, but once in a sufficiently strong position it can take a treacherous turn, i.e. start to optimise the world in ways that pose an existential threat to human beings.

The AI could take a treacherous turn in numerous different ways. These are discussed by Bostrom in his book and mentioned by me in my paper. For instance, it could ‘play dumb’ while in the box, i.e. pretend to be less intelligent or powerful than it really is; or it could ‘play nice’, i.e. pretend to be more benevolent and human-friendly than it really is. It could do these things because, as Bostrom puts it, playing nice or playing dumb could be convergent goals for an AI that wants to get out into the real world and realise its true goals.

Okay, that’s enough on Bostrom’s doomsday argument and the treacherous turn. It is time to take stock and see how this summary of the argument reveals the analogy with the position taken up by skeptical theists. In essence, what has happened is that critics of Bostrom’s argument have appealed to an inductive inference to block the appeal of his argument. They have claimed that repeated empirical testing of an AI which reveals its seeming benevolence would provide us with good evidence of its actual benevolence. And Bostrom has responded to that argument by blocking the inference from what seems to be the case in the ‘boxed’ environment to what is really the case. He has done so by appealing to the long-term strategic planning of the AI, as evinced in the concept of the treacherous turn. This is directly analogous to the move made by skeptical theists.

Thus, we have the first two parts of our analogy. What about the third? Is it the case that accepting the possibility of the treacherous turn has significant epistemic costs?

3. The Epistemic Costs of the Treacherous Turn
I argue that it does. In fact, I argue, somewhat paradoxically, that Bostrom’s reliance on the concept gives rise to both a reductio of his position or an a fortiori. In other words, I think that if you grant the concept of the treacherous turn, you either end with a reason to reject Bostrom’s argumentative framework (and hence a reason to be slightly more optimistic about the construction of superintelligent machines). Or you end up with a reason to be even more worried than Elon Musk.

I’ll explain by describing the general nature of the epistemic costs. One thing we have to bear in mind when considering the epistemic costs of believing in the treacherous turn is that, unlike the epistemic costs of believing in skeptical theism, the scope or impact of those costs will be less widely felt. God, if He exists, is the creator and sustainer of everything. Consequently, everything we see, hear, observe and sense bears his imprint. A superintelligent AI, if one were to exist, would not affect everything in the same way. But that doesn’t mean that the epistemic costs are not significant.

I think we can see this by considering some of the ways in which Bostrom develops the notion of the treacherous turn. In addition to thinking that a superintelligent AI could play smart or play dumb, Bostrom thinks that there is a very narrow window of time in which to detect whether or not the AI is going to take a treacherous turn. He refers to this as the moment of the conception of deception. This is the moment when the AI realises that some deception is necessary if it is to achieve its long-term goals.

[H]aving had this realization, the AI might move swiftly to hide the fact that the realization has occurred, while setting up some covert internal dynamic (perhaps disguised as some innocuous process that blends in with all the other complicated processes taking place in its mind) that will enable it to continue to plan its long-term strategy in privacy. 
(Bostrom 2014, 282)

I believe that this type of thinking means that all our interactions with AI need to be clouded in a pretty deep and possibly paralysing suspicion. We need to be on the look out for this ‘moment of the conception of deception’, but do we have any idea what this might look like? In the paper, I suggested that it is possible that one of our existing AI projects has already crossed through this threshold. Maybe, for example, IBM’s Watson has developed to this point and is currently ‘playing dumb’, biding its time before its human creators give it more practical powers and unleash it on the world. After all, we haven’t been on the lookout for the moment of the conception of deception. It sounds silly, of course, but fanciful speculation of this sort seems to be where this style of thinking leads us.

Now, I have been criticised for taking this line in the published version. Kaj Sotala suggested to me (on Google Plus) that I was being unfair to Bostrom (and others) in pushing such an extreme interpretation of the treacherous turn. He thinks we can be pretty confident that no existing AI project has crossed such a threshold because we know a lot about how such systems work. I am willing to concede this point: I was pushing things too far in the paper. Nevertheless, I still think the epistemic costs are significant. I still think that if we follow Bostrom’s reasoning we should be extremely skeptical of our ability to determine when the threshold to the treacherous turn has been crossed. Why? Because I suspect we have no good idea of what we should be looking out for. Thus, if we are going to be seriously trying to create a superintelligent AI, it would be too easy for us to stumble into the creation of a superintelligent AI that is going to take the treacherous turn without our knowledge.

And what are the broader implications of this? Well, it could be that this all highlights the absurdity of Bostrom’s concerns about the limitations of empirical testing. It does seem like taking the possibility of a treacherous turn seriously commits us to a fairly radical form of Humean inductive skepticism, at least when it comes to our interactions with AIs. This is the reductio argument. Conversely, it may be that Bostrom is right to reason in this manner and hence we have reason to be far more suspicious of any project involving the construction of AI than we currently are. Indeed, we should seriously consider shutting them all down and keep our fingers crossed that no AI has taken the treacherous turn already.

This is why I think believing in the possibility of the treacherous turn has some pretty significant epistemic costs. The analogy with the skeptical theist debate is complete.

4. Conclusion
I am going to leave it there lest this summary ends up being as long as the original paper. To briefly recap, I think there is an interesting analogy to be drawn between the debate about the existence of God and the debate about the existential risks posed by a superintelligent AI. In the former debate, skeptical theists try to block a certain type of inductive inference in order to save theism from the problem of evil. In the latter debate, Nick Bostrom tries to block a certain type of inductive inference in order to underscore the seriousness of the risk posed by superintelligent machines. In both instances, blocking these inductive inferences can have significant epistemic costs.

Read the full thing for more.

Saturday, May 9, 2015

The Extended Mind and the Coupling-Constitution Fallacy

The extended mind hypothesis (EMH) holds that the mind isn’t all in the head. While it is no doubt true that the majority of our cognitive processes are situated in our brains, this need not be the case. For example, when performing the cognitive act of remembering, I may rely entirely on the internal activation of particular neural networks, or I could rely on some external prompt or storage device to assist my internal neural network. According to some philosophers, the extension of cognitive processes into the external environment is what gives rise to the EMH. As Andy Clark puts it, we are all “natural born cyborgs” - agents whose minds are jointly constituted by biological and technological materials.

Some philosophers dispute the EMH. Two of the most vociferous critics are Fred Adams and Kenneth Aizawa. They take particular umbrage at Clark’s claim about the possibility of joint-constitution. They believe that cognitive processes are more than likely confined to the brain (or particular subregions of the brain). They argue that the best currently-available psychological and neurological theories support this view. Central to their critique is something they call the coupling-constitution fallacy, which holds that proponents of the EMH mistake coupling relationships for compositional relationships.

In this post, I want to do two things. First, I want to try to explain the coupling-constitution fallacy and how Adams and Aizawa make use of it in their critique of the EMH. Second, I want to look at a response to their critique from the philosophers Don Ross and James Ladyman. As we shall see, Ross and Ladyman use this response as an opportunity to make some interesting points about the relationship between philosophical metaphysics and scientific theory-building.

1. An Outline of the Coupling-Constitution Fallacy
Metaphysicians are fond of classifying the different types of relationship that can hold between different ontological entities. Some relationships are causal, some are acausal. Some relationships are temporal, some are atemporal. Some relationships are contingent, some are necessary. And so on and so forth — the metaphysical game continues, forever refining, analysing and reclassifying.

Two of types of relationship are central to Adams and Aizawa’s critique:

Coupling relationship: This is a causal relationship. One entity or event is said to be coupled to another whenever there is a causal connection between them. For example, there is a coupling relationship between the light switch on my wall and the lightbulb in my lamp. Pressing the switch causes the bulb to light-up.

Constitutive relationship: This is a compositional relationship. One entity or event is said to be composed of another type of entity or event whenever the latter makes up the former. The classic example here is the relationship between the substance we call “water” and the chemical molecule we call H2O. The former substance is said to be composed of (or constituted by) the later.

Another way of understanding the distinction would be the think of vertical and horizontal relationships. Coupling relationships are, in effect, horizontal because they involve ontological entities interacting with one another across the same level of reality; constitutive relationships are vertical because they show how entities from lower levels make up entities from higher levels (note: this is far from perfect as there may be such a thing as bottom-up or top-down causation).

Anyway, Adams and Aizawa argue that in making the case for the EMH, philosophers like Andy Clark mistake coupling relationships for constitutive relationships. In other words, Clark thinks that just because the human brain is coupled to some external object, and because the combination of those two objects produces some cognitive result, it follows that the cognitive result is constituted by the brain and the external object. But this does not follow. The fact that A and B, when coupled, cause C, does not mean that A and B constitute C. This is true even if A and B are always coupled. Frequency and reliability of coupling does not imply constitution.

To apply this to a particular example, consider Clark and Chalmers’s famous Otto thought experiment. In this thought experiment, we are asked to imagine a man named Otto who has some memory impairment. To make up for this impairment, he always carries with him a notebook containing information that he will need. To ‘remember’ something he simply looks up the relevant page of his notebook and retrieves the information. This occurs on a regular and near-automatic basis. Consequently, Clark and Chalmers argue that Otto’s cognitive process of remembering is spread out between his brain and his notebook.

Adams and Aizawa respond by arguing that this example confuses coupling with constitution. It may be true that Otto’s brain and Otto’s notebook are closely coupled, and that this coupling helps to bring about the act of remembering (though that interpretation is itself controversial). It does not, however, follow that the act of remembering is jointly constituted by the brain and the notebook. This is the essence of the coupling-constitution fallacy.

That isn’t the end of Adams and Aizawa’s critique. They go on to state that in order to make a proper constitutive claim, we would need a proper theory of the “mark of the mental”. They offer some proposals in this regard that they think lend support to the brain-bound view of the mind. But I don’t want to focus on those proposals here. Instead, I want to limit my focus to the coupling-constitution fallacy and consider a possible response.

2. Ross and Ladyman on the Containment Metaphor
Don Ross and James Ladyman are two philosophers of science. They wrote a book called Everything Must Go a few years back which critiqued traditional metaphysics and defended a theory called “ontic structural realism” (OSR). This is an interesting theory which, if I could crudely summarise it, argues that structures and relations (mathematically described) are more fundamental than objects or substances. In other words, when thinking about the nature of something like the hydrogen atom, what is really important are the dynamic and mathematically described relationships between entities we call electrons and protons, not the electrons and protons themselves. At least, I think that’s what the theory holds.

Ross and Ladyman expound this theory at considerable length in a series of papers and books, and try to illustrate how it applies to various fields of scientific inquiry. In a paper entitled “On the Alleged Coupling-Constitution” fallacy, they transfer some of their insights to the EMH debate, in particular arguing against Adams and Aizawa’s use of the coupling-constitution fallacy. This is not because they are staunch defenders of the EMH, but because they object to the attempt to use a poorly-defined metaphysical relationship to limit the scope of cognitive science.

In the paper itself, they offer several arguments against Adams and Aizawa. I just want to focus on one of them. Ross and Ladyman argue that the coupling-constitution distinction is attractive to philosophers reflecting upon the nature of the world from ivory-towers; it is not one that finds any real purchase in practical scientific inquiry. Indeed, the distinction is largely based on an inaccurate and metaphorical view of the world. We shouldn’t let such a view contaminate our approach to the science of cognition.

The problem is that metaphysicians (and others) approach their investigation of the world with a set of biased cognitive frameworks (metaphors) already in place. These frameworks have been explored by cognitive scientists such as George Lakoff. Lakoff has famously used these frameworks to explain different styles of political argument. For example, he suggests that people approach their relationship with the state using the frame of parent-child relationships. That is to say, they view themselves as being like the children of the state. And since different groups have different evaluative assumptions about what is appropriate in parent-child relationships, they also have different assumptions about what is and is not appropriate behaviour from the state. For example, conservatives might view the state as akin to an authoritarian father figure; whereas liberals might view the state as being akin to a nurturing mother.

I am not sure how credible Lakoff’s political theories are. I know that he has used his analysis of cognitive ‘framing’ to make claims about how liberals and progressives should pitch the policy proposals. But I believe he has found relatively few supporters. In any event, that’s all by-the-by. What’s important here is how the theory of cognitive frames applies to the debate about the extended mind.

One of the main cognitive frames — and one that Ladyman and Ross thinks infects the scientific worldview — is the containment metaphor. This views the world as though it were akin to a bucket (or container) that is filled with objects which change their properties over time. In the simplest terms, the world is a container filled with tiny billiard-ball type objects that collide and bounce off one another. The emergent properties of all these ‘microbangings’ is what gives rise the world which we know and understand.

Ladyman and Ross think that many scientists and philosophers of science are seduced by the containment metaphor when trying to offer explanations of the phenomena that are studied and described by modern science. Thus, when they talk about atoms or sub-atomic particles, they appeal to ‘homely’ metaphors about small little things encircling one another, bonding with one another and repelling one another. The problem is that these homely metaphors are misleading. The world as described by fundamental physics is nothing like a container filled with small objects interacting with one another. As they put it themselves:

The world as described by actual physics is in no interesting ways like a wall made of bricks in motion (that somehow manages not to fall apart), or, in the more sophisticated extension of the metaphor dominant since the rise of modern science, like a chamber enclosing the molecules of gas. Indeed, it is no longer helpful to conceive of either the world, or particular systems of the world that we study in partial isolation, as ‘made of’ anything at all. The attempt to domesticate twenty-first-century science by reference to homely images of little particles…is forlorn. The basic structure of reality as described by fundamental physics can only be accurately rendered in mathematics… 
(Ross and Ladyman 2010, 160)

So their position is clear. The containment metaphor is misleading, however appealing it may be. The fundamental structure of reality can only be captured in mathematical models; it cannot be reduced to simple metaphors. Now, I don’t know how accurate or fair Ladyman and Ross are being in this characterisation of modern physics. What they are saying sounds plausible, particularly in light of the controversy over the correct interpretation of quantum mechanics. I will simply concede this premise to them and move on to address the central question: what significance does this have for the debate about the EMH?

3. The Bounds of Cognition and the Containment Metaphor
The answer to that question lies in Adams and Aizawa’s use of the coupling-constitution distinction in their critique of the extended mind. Remember, their claim is that proponents of the extended mind confuse the plausible view that the mind is coupled to features of the external world, with the less plausible view that the mind is constituted by elements of the internal and external world.

Ladyman and Ross contend that this distinction relies heavily on the containment metaphor, i.e. the belief that there are sharp and meaningful boundaries to be drawn between different containers made up of different stuff. But since the containment metaphor is fundamentally misleading, so too is the distinction. And since that distinction is central to Adams and Aizawa’s critique, then their critique ultimately fails. That’s the bones of the argument anyway. It works something like this (though this is pretty sketchy and informal):

  • (1) The coupling-constitution fallacy relies on the containment metaphor: i.e. the belief that reality can be accurately modeled in terms of containers, comprised of little things, banging off one another.
  • (2) The containment metaphor is misleading: the fundamental structure of reality cannot be captured by homely metaphors of this sort, it can only be captured by mathematical models.
  • (3) Therefore, the coupling-constitution fallacy is misleading.

This argument has some complexities and subtleties. For one thing, its broader implications for the debate about the extended mind are left unclear. Obviously, the belief is that it undercuts one leading objection to the notion of an extended mind, but this, of course, does not mean that the extended mind hypothesis is plausible. Ladyman and Ross offer some mild endorsement of the extended mind hypothesis toward the end of their article, but it is clearly not their purpose to defend it. They are focused solely on the merits of the coupling-constitution fallacy.

The other complexities in the argument arise from the defence of premise (2) and its connection to the defence of premise (1). One of the standard examples of a compositional relationship, mentioned above, is the water-H2O example. The metaphysical claim here is that the substance water is made up of (constituted by) H2O molecules. Adams and Aizawa use examples of this sort to motivate their application of fallacy. It is pretty clear how this standard metaphysical account appeals to the containment metaphor. A bucket of water is consists of tiny little things (molecules) banging off one another in various ways.

But a deeper appreciation of the scientific explanation of water reveals why it is misleading. As Ladyman and Ross point out, the modern scientific account of water — which is still incomplete — does not imagine that the substance water is simply made up of smaller stuff. In the modern scientific account, water is not a substance at all; it is, rather, an emergent property of a dynamical process. I’ll leave them describe it:

…[T]he kind water is an emergent feature of a complex dynamical system. It makes no sense to imagine it having its familiar properties synchronically. Rather, the water’s wetness, conductivity, and so on all arise because of equilibria in the dynamics of processes happening over short but non-negligible time scales at the atomic scale. From the point of view of any attempted reductive explanation, the kind water is not held by physicists to be ‘constituted’ as opposed to ‘caused’ because it is not a substance in the classical metaphysical sense. 
(Ross and Ladyman 2010, 160)

And, as they go on to point out, this dynamical account of water applies equally well to the explanation of the atomic and sub-atomic particles of which water is said to be comprised. It is not a nested layer of containers filled with stuff, all the way down. It is a set of dynamical processes, only capable of being described by mathematical models. This is why premise (2) seems be fair.

The one potential criticism of premise (2) is that it appeals too much to the state of play in physics; and not to the state of play in other special sciences. Thus, a defender of the coupling-constitution fallacy might concede that the traditional metaphysical distinctions no longer apply to the models used by physicists, but argue in reply that this has no bearing on the models used in the special sciences. And since the human mind is described and modeled by the special sciences (cognitive science, psychology etc.), that is where we should focus our attention. Those models still appeal to causal-constitutive distinctions, and so the criticism still applies.

Ladyman and Ross argue that this is no good. There are two problems. First, it may be true that special sciences appeal to causal-constitutive distinctions, but there is no reason to think that those distinctions have any legitimacy outside of their reliance on the containment metaphor (which is alleged to be misleading). Second, we can appeal to ‘theoretically mature’ special sciences in which models are constructed but which highlight how misleading the causal-constitutive distinction can be. A good example of such a mature special science (though some may dispute the appropriateness of the label) is economics. Economists construct complex mathematical models of various phenomena of interest (e.g. the behaviour of consumers on the insurance market; the relationship between employers and employees in the employment market).

At first glance, it can appear as though the systems being described by these models rely on traditional causal-constitutive distinctions, i.e. one system is ‘made-up’ of another, or forms part of another, or is comprised by agents and actors at ‘lower levels’ of reality. But this is not quite right. The systems described are all model-relative. They are distinguished by reference to sets of endogenous and exogenous variables (i.e. variables that are ‘internal’ and ‘external’ to the models). Most economists will admit that there are multiple ways of carving up the sets of endogenous and exogenous variables, all of which can be appropriate for different explanatory purposes.

Ladyman and Ross think that the same could be true of a theoretically mature cognitive science. The scientists could construct models to explain and predict different mental phenomena. Those models could appeal to different parsings of endogenous and exogenous variables and in doing so there is no reason to think that they need treat the brain-bone barrier as a fundamental ontological barrier between what is part of the mind and what is not.

4. Conclusion
To sum up, in this post I have looked once more at the debate about the extended mind hypothesis. In particular, I have looked at one of the leading critiques of the notion that the mind can extend into the world beyond the brain. The critique came from the work of Adams and Aizawa. It claimed that proponents of the extended mind hypothesis are guilty of committing the coupling-constitution fallacy. That is, they confuse a causal relationship between the mind and the external world with a constitutive relationship between the mind and the external world.

I also looked at a response to the critique from the work of Ladyman and Ross. Applying their general philosophy of Ontic Structural Realism, Ladyman and Ross argue that traditional metaphysical distinctions — such as the distinction between coupling and constitution — have no place in modern science. They are products of a misleading cognitive frame that we apply to reality. A mature cognitive science would not rely on such on archaic distinction. Consequently, they think the coupling-constitution fallacy is no real threat to proponents of the extended mind hypothesis. This doesn’t mean that the hypothesis is correct; it just means that this particular objection to it is not enough to dissuade us from pursuing it further.

Wednesday, April 29, 2015

Is Automation Making us Stupid? The Degeneration Argument Against Automation

(Previous entry)

This post continues my discussion of the arguments in Nicholas Carr’s recent book The Glass Cage. The book is an extended critique of the trend towards automation. In the previous post, I introduced some of the key concepts needed to understand this critique. As I noted then, automation arises whenever a machine (broadly understood) takes over a task or function that used to be performed by a human (or non-human animal). Automation usually takes place within an intelligence ‘loop’. In other words, the machines take over from the traditional components of human intelligence: (i) sensing; (ii) processing; (iii) acting and (iv) learning. Machines can take over some of these components or all of them; humans can be fully replaced or they can share some functions with machines.

This means that automation is a complex phenomenon. There are many different varieties of automation, and they each have a unique set of properties. We should show some sensitivity to those complexities in our discussion. This makes broad-brush critiques pretty difficult. In the previous post I discussed Carr’s claim that automation leads to bad outcomes. Oftentimes the goal behind automation is to improve the safety and efficiency of certain processes. But this goal is frequently missed due to automation complacency and automation bias. Or so the argument went. I expressed some doubts about its strength toward the end of the previous post.

In this post, I want to look at another one of Carr’s arguments, perhaps the central argument in his book: the degeneration argument. According to this argument, we should not just worry about the effects of automation on outcomes; we should worry about its effects on the people who have to work with or rely upon automated systems. Specifically, we should worry about its effects on the quality of human cognition. It could be that automation is making us stupider. This seems like something worth worrying about.

Let’s see how Carr defends this argument.

1. The Whitehead Argument and the Benefits of Automation
To fully appreciate Carr’s argument, it is worth comparing it with an alternative argument, one that defends the contrary view. One such argument can be found in the work of Alfred North Whitehead. Alfred North Whitehead was a famous British philosopher and mathematician, probably best known for his collaboration with Bertrand Russell on Principia Mathematica. In his 1911 work, An Introduction to Mathematics, Whitehead made the following claim:

It is a profoundly erroneous truism, repeated by all copy-books and by eminent people when they are making speeches, that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them. Operations of thought are like cavalry charges in a battle — they are strictly limited in number, they require fresh horses, and must only be made at decisive moments.

Whitehead may not have had modern methods of automation in mind when he wrote this — though his work did help to inaugurate the computer age — but what he said can certainly be interpreted by reference to them. For it seems like Whitehead is advocating, in this quote, the automation of thought. It seems like he is saying that the less mental labour humans need to expend, the more ‘advanced’ civilization becomes.

But Carr thinks that this is a misreading, one that is exacerbated by the fact that most people only quote the line starting ‘civilization advances…’ and leave out the rest. If you look at the last line, the picture becomes more nuanced. Whitehead isn’t suggesting that automation is an unqualified good. He is suggesting that mental labour is difficult. We have a limited number of ‘cavalry charges’. We should not be expending that mental effort on trivial matters. We should be saving it for the ‘decisive moments’.

This, then, is Whitehead’s real argument: the automation of certain operations of thought is good because it frees us up to think the more important thoughts. To put it a little more formally (and in a way that Whitehead may not have fully endorsed but which suits the present discussion):

  • (1) Mental labour is difficult and finite: time spent thinking about trivial matters limits our ability to think about more important ones.

  • (2) It is good if we have the time and ability to think the more important thoughts.

  • (3) Therefore, it would be good if we could reduce the amount of mental labour expended on trivial matters and increase the amount spent on important ones.

  • (4) Automation helps to reduce the amount of mental labour expended on trivial matters.

  • (5) Therefore, it would be good if we could automate more mental operations.

This argument contains the optimism that is often expressed in debates about automation and the human future. But is this optimism justified?

2. The Structural Flaws in the Whitehead Argument
Carr doesn’t think so. Although he never sets it out in formal terms, I believe that his reason for thinking this can be understood in light of the preceding version of Whitehead’s argument. Look again at premise (4) and the inference from that premise and premise (3) to (5). Do you think this forms a sound argument? You shouldn’t. There are at least two problems with it.

In the first place, it seems to rely on the following implicit premise:

  • (6) If we reduce the amount of mental labour expended on trivial matters, we will increase the amount expended on more important ones.

This premise — which is distinct from premise (1) — is needed if we wish to reach the conclusion. Without it, it does not follow. And once this implicit premise is made explicit, you begin to see where the problems might lie. For it could be that humans are simply lazy. That if you free them from thinking about trivial matters, they won’t expend the excess mental labour on thinking the hard thoughts. They’ll simply double down on other trivial matters.

The second problem is more straightforward, but again highlights a crucial assumption underlying the Whitehead argument. The problem is that premise (5) seems to be assuming that automation will always be focused on the more trivial thoughts, and that the machines will never be able to take away the higher forms of thinking and creativity. This assumption may also turn out to be false.

We have then two criticisms of the Whitehead argument. I’ll give them numbers and plug them into an argument map:

  • (7) In freeing us up from thinking trivial thoughts, automation may not lead to us thinking the more important ones: we may simply double-down on other trivial thoughts.

  • (8) Automation may not be limited to trivial matters; it may take over the important types of thinking too.

But this is to speak in fairly abstract terms. Are there any concrete reasons for thinking these implicit premises and underlying assumptions do actually count against the Whitehead argument? Carr thinks that there are. In particular, he thinks that there is some strong evidence from psychology suggesting that the rise of automation doesn’t simply free us up to think more important thoughts. On the contrary, he thinks that the evidence suggests that the creeping reliance on automation is degenerating the quality of our thinking.

3. The Degeneration Argument
Carr’s argument is quite straightforward. It starts with a discussion of the generation effect. This is something that was discovered by psychologists in the 1970s. The original experiments had to do with memorisation and recall. The basic idea is that the more cognitive work you have to do during the memorisation phase, the better able you are to recall the information at a future date. Suppose I gave you a list of contrasting words to remember:


How would you go about doing it? Unless you have some familiarity with memorisation techniques (like the linking or loci methods), you’d probably just read through the list and start rehearsing it in your mind. This is a reasonably passive process. You absorb the words from the page and try to drill them into your brain through repetition. Now suppose I gave you the following list of incomplete word pairs, and then asked you to both (a) complete the pairs; and (b) memorise them:

HOT: C___
TALL: S___

This time the task requires more cognitive effort. You actually have to generate the matching pair in your mind before you can start trying to remember the list. In experiments, researchers have found that people who were forced to take this more effortful approach were significantly better at remembering the information at a later point in time. This is the generation effect in action. Although the original studies were limited to rote memorisation, later studies revealed that it has a much broader application. It helps with conceptual understanding, problem solving, and recall of more complex materials too. As Carr puts it, these experiments show us that ‘our mind[s] rewards us with greater understanding’ when we exert more focus and attention.

The generation effect has a corollary: the degeneration effect. If anything that forces us to use our own internal cognitive resources will enhance our memory and understanding, then anything that takes away the need to exert those internal resources will reduce our memory and understanding. This is what seems to be happening in relation to automation. Carr cites the experimental work of Christof van Nimegen in support of this view.

Van Nimegen has done work on the role of assistive software in conceptual problem solving. Some of you are probably familiar with the Missionaries and Cannibals game (a classic logic puzzle about getting a group of missionaries across a river without being eaten by a cannibal). The game comes with a basic set of rules and you must get the missionaries across the river in the least number of trips while conforming to those rules. Van Nimegen performed experiments contrasting two groups of problem solvers on this game. The first group worked with a simple software program that provided no assistance to those playing the game. The second group worked a software program that offered on-screen prompts, including details as to which moves were permissible.

The results were interesting. People using the assistive software could solve the puzzles and performed better at first thanks to the assistance. But they faded in the long-run. The second group emerged as the winners: they solved the puzzles more efficiently and with fewer wrong-moves. What’s more, in a follow up study performed 8 months later, it was found that members of the second group were better able to recall how to solve the puzzle. Van Nimegen went on to repeat that result in experiments involving different types of task. This suggests that automation can have a degenerating effect, at least when compared to traditional methods of problem-solving.

Carr suggests that other evidence confirms the degenerating effect of automation. He cites an example of a study done on accounting firms using assistive software, which found that human accountants relying on this software had a poorer understanding of risk. Likewise, he gives the (essentially anecdotal) example of software engineers relying on assistive programs to clear-up their dodgy first draft code. In the words of one Google software developer, Vivek Haldar, this has led to “Sharp tools, dull minds.”

Summarising all this, Carr seems to be making the following argument. This could be interpreted as an argument in support of premise (7), given above. But I prefer to view it as a separate counterargument because it also challenges some of the values underlying the Whitehead argument:

  • (9) It is good if humans can think higher thoughts (i.e. have some complexity and depth of understanding).

  • (10) In order to think higher thoughts, we need to engage our minds, i.e. use attention and focus to generate information from our own cognitive resources (this is the ‘generation effect’).

  • (11) Automation inhibits our ability to think higher thoughts by reducing the need to engage our own minds (the ‘degeneration effect’).

  • (12) Therefore, automation is bad: it reduces our ability to think higher thoughts.

4. Concluding Thoughts
What should we make of this argument? I am perhaps not best placed to critically engage with some aspects of it. In particular, I am not best placed to challenge its empirical foundation. I have located the studies mentioned by Carr and they all seem to support what he is saying, and I know of no competing studies, but I am not well-versed in the literature. For this reason, I just have to accept this aspect of the argument and move on.

Fortunately, there are two other critical comments I can make by way of conclusion. The first has to do with the implications of the degeneration effect. If we assume that the degeneration effect is real, it may not imply that we are generally unable to think higher thoughts. It could be that the degeneration is localised to the particular set of tasks that is being automated (e.g. solving the missionaries and cannibals game). And if so, this may not be a big deal. If those tasks are not particularly important, humans may still be freed up to think the more important thoughts. It is only if the effect is more widespread that a problem arises. And I don’t wish to deny that this could be the case. Automation systems are becoming more widespread and we now expect to rely upon them in many aspects of our lives. This could result in the spillover of the degeneration effect.

The other comment has to do with the value assumption embedded in premise (9) (which was also included in premise (2) of the Whitehead argument). There is some intuitive appeal to this. If anyone is going to be thinking important thoughts I would certainly like for that person to be me. Not just for the social rewards it may bring, but because there is something intrinsically valuable about the act of high-level thinking. Understanding and insight can be their own reward.

But there is an interesting paradox to contend with here. When it comes to the performance of most tasks, the art of learning involves transferring the performance from the conscious realm to the sub-conscious realm. Carr mentions the example of driving: most people know how difficult it is to learn how to drive. You have perform a sequence of smoothly coordinated and highly unnatural actions. This takes a great deal of cognitive effort, at first, but over time it becomes automatic. This process is well-documented in the psychological literature and is referred to as ‘automatization’. So, ironically, developing our own cognitive resources may simply result in further automation, albeit this time automation that is internal to us.

The assumption is that this internal form of automation is superior to the external form of automation that comes with outsourcing the task to a machine. But is this necessarily true? If the fear is that the externalisation causes us to lose something fundamental to ourselves, then maybe not. Could the external technology not simply form part of ‘ourselves’ (part of our minds)? Would externalisation not then be ethically on a par with internal automation? This is what some defenders of the external mind hypothesis like to claim, and I discuss the topic at greater length in another post. I direct the interested reader there for more.

That’s it for this post.

Monday, April 27, 2015

The Automation Loop and its Negative Consequences

I’m currently reading Nicholas Carr’s book The Glass Cage: Where Automation is Taking Us. I think it is an important contribution to the ongoing debate about the growth of AI and robotics, and the future of humanity. Carr is something of a techno-pessimist (though he may prefer ‘realist’) and the book continues the pessimistic theme set down in his previous book The Shallows (which was a critique of the internet and its impact on human cognition). That said, I think The Glass Cage is a superior work. I certainly found it more engaging and persuasive than his previous effort.

Anyway, because I think it raises some important issues, many of which intersect with my own research, I want to try to engage with its core arguments on this blog. I’ll do so over a series of posts. I start today with what I take to be Carr’s central critique of the rise of automation. This critique is set out in chapter 4 of his book. The chapter is entitled ‘The Degeneration Effect’, and it makes a number of arguments (though none of them are described formally). I identify two in particular. The first deals with the effects of automation on the quality of decision-making (i.e. the outputs of decision-making). The second deals with the effects of automation on the depth and complexity of human thought. The two are connected, but separable. I want to deal with them separately here.

In the remainder of this post, I will discuss the first argument. In doing so, I’ll set out some key background ideas for understanding the debate about automation.

1. The Nature of the Automation Loop
Automation is the process whereby any action, decision or function that was once performed by a human (or non-human animal) is taken over by a machine. I’ve discussed the phenomenon before on this blog. Specifically, I have discussed the phenomenon of algorithm-based decision-making systems. They are a sub-type of automated system in which a computer algorithm takes over a decision-making function that was once performed by a human being.

In discussing that phenomenon, I attempted to offer a brief taxonomy of the possible algorithm-based systems. The taxonomy made distinctions between (i) human in the loop systems (in which humans were still necessary for the decision-making to take place); (ii) human on the loop systems (in which humans played some supervisory role) and (iii) human off the loop systems (which were fully automated and prevented humans from getting involved). The taxonomy was not my own; I copied it from the work of others. And while I still think that this taxonomy has some use, I now believe that it is incomplete. This is for two reasons. First, it doesn’t clarify what the ‘loop’ in question actually is. And second, it doesn’t explain exactly what role humans may or may not be playing in this loop. So let’s try to add the necessary detail now with a refined taxonomy.

Let’s start by clarifying the nature of the automation loop. This is something Carr discusses in his book by reference to historical examples. The best of these is the automation of anti-aircraft missiles after the end WWII. Early on in that war it was clear that the mental calculations and physical adjustments that were needed in order to fire an anti-aircraft missile effectively were too much for any individual human to undertake. Scientists worked hard to automate the process (though they didn’t succeed fully until after the war — at least as I understand the history):

This was no job for mortals. The missile’s trajectory, the scientists saw, had to be computed by a calculating machine, using tracking data coming in from radar systems along with statistical projections from a plane’s course, and then the calculations had to be fed automatically into the gun’s aiming mechanism to guide the firing. The gun’s aim, moreover, had to be adjusted continually to account for the success or failure of previous shots. 
(Carr 2014, p 35)

The example illustrates all the key components in an automation loop. There are four in total:

(a) Sensor: some machine that collects data about a relevant part of the world outside the loop, in this case the radar system.

(b) Processor: some machine that processes and identifies relevant patterns in the data being collected, in this case some computer system that calculates trajectories based on the incoming radar data and issues instructions as to how to aim the gun.

c) Actuator: some machine that carries out the instructions issued by the processor, in this case the actual gun itself.

(d) Feedback Mechanism: some system that allows the entire loop to learn from its previous efforts, i.e. allows it to collect, process and act in more efficient and more accurate ways in the future. We could also call this a learning mechanism. In many cases humans still play this role by readjusting the other elements of the loop.

These four components should be familiar to anyone with a passing interest in cognitive science and AI. They are, after all, the components in any intelligent system. That is no accident. Since automated systems are designed to take over tasks from human beings they are going to try to mimic the mechanisms of human intelligence.

Automation loops of this sort will come in many different flavours, as many different flavours as there are different types of sensor, processor, actuator and learning mechanism (up to the current limits of technology). A thermostat is a very simple type of automation loop: it collects temperature data, processes it by converting it into instructions for turning on or off the heating system. It then makes use of negative feedback to constantly regulate the temperature in a room (modern thermostats like the Nest have more going on). A self-driving car is a much more complicated type of automation loop: it collects visual data, processes it quite extensively by identifying and categorising relevant patterns, and then uses this to issue instructions to an actuating mechanism that propels the vehicle down the road.

Humans can play a variety of different roles in such automation loops. Sometimes they might be sensors for the machine, collecting and feeding it relevant data. Sometimes they might play the processing role. Sometimes they could be actuators, i.e. the muscle that does the actual work. Sometimes they might play one, two or all three of these roles. Sometimes they might share these roles with the machine. When we think about humans being in, on, or off the loop, we need to keep in mind these complexities.

To give an example, the car is a type of automation device. Traditionally, the car just played the part of the actuator; the human was the sensor and processor, collecting data and issuing instructions to the machine. The basic elements of this relationship now remain the same, albeit there is some outsourcing and sharing of sensory and processing functions with the car’s onboard computers. So, for example, my car can tell me how close I am to an object by making a loud noise; it can keep my car travelling at a constant speed when cruising down a motorway; and it can even calculate my route and tell me where to go using its built-in GPS. I’m still very much involved in the loop; but the machine is taking over more of the functions I used to perform myself.

Eventually, the car will be a fully automated loop, with little or no role for human beings. Not even a supervisory one. Indeed, some manufacturers want this to happen. Google, reportedly, want to remove steering wheels from their self-driving cars. Why? Because it is only when humans take over that accidents seem to happen. The car will be safer if left to its own devices. This suggests that full automation might be better for the world.

2. The Consequences of Automation for the External World
Automation is undertaken for a variety of reasons. Oftentimes the motivation is benevolent. Engineers and technicians want to make systems safer and more effective, or they want to liberate humans from the drudge work, and free them up to perform more interesting tasks. Other times the motivation might be less benevolent. Greedy capitalists might wish to eliminate human workers because it is cheaper, and because humans get tired and complain too much.

There are important arguments to be had about these competing motivations. But for the time being let’s assume that benevolent motivations predominate. Does automation always succeed in realising these benevolent aims? One of Carr’s central contentions is that it frequently does not. There is one major reason for this. Most people adhere to something called the ‘substitution myth’:

Substitution Myth: The belief that when a machine takes over some element of a loop from a human, the machine is a perfect substitute for the human. In other words, the nature of the loop does not fundamentally change through the process of automation.

The problem is that this is false. The automated component of the loop often performs the function in a radically different way and this changes both the other elements of the loop and the outcome of the loop. In particular, it changes the behaviour of the humans who operate within the loop or who are affected by the outputs of the loop.

Two effects are singled out for consideration by Carr, both of which are discussed in the literature on automation:

Automation Complacency: People get more and more comfortable allowing the machine to take complete control.

Automation Bias: People afford too much weight to the evidence and recommendations presented to them by the machine.

You might have some trouble understanding the distinction between the two effects. I know I did when I first read about them. But I think the distinction can be understood if we look back to the difference between human ‘in the loop’ and ‘on the loop’ systems. As I see it, automation complacency arises in the case of a human on the loop system. The system in question is fully automated with some limited human oversight (i.e. humans can step in if they choose). Complacency arises when they choose not to step in. Contrariwise, automation bias arises in the case of a human in the loop system. The system in question is only partially automated, and humans are still essential to the process (e.g. in making a final judgment about the action to be taken). Bias arises when they don’t second-guess or go beyond recommendations given to them by the machine.

There is evidence to suggest that both of these effects are real. Indeed, you have probably experienced some of these effects yourself. For example, how often do you second-guess the route that your GPS plans for you? But so what? Why should we worry about them? If the partially or fully automated loop is better at performing the function than the previous incarnation, then isn’t this all to the good? Could we not agree with Google that things are better when humans are not involved?

There are many responses to these questions. I have offered some myself in the past. But Carr clearly thinks that these two effects have some seriously negative implications. In particular, he thinks that they can lead to sub-optimal decision-making. To make his point, he gives a series of examples in which complacency and bias led to bad outcomes. I’ll describe four of them here.

I’ll start with two examples of complacency. The first is the case of the 1,500 passenger ocean liner Royal Majesty, which ran aground on a sandbar near Nantucket in 1995. The vessel had been traveling from Bermuda to Boston and was equipped with a state of the art automated navigation system. However, an hour into the voyage a GPS antenna came loose and the ship proceeded to drift off course for the next 30 hours. Nobody on board did anything to correct for the mistake, even though there were clear signs that something was wrong. They didn’t think to challenge the wisdom of the machine.

A similar example of complacency comes from Sherry Turkle’s work with architects. In her book Simulation and its Discontents she notes how modern-day architects rely heavily on computer-generated plans for the buildings they design. They no longer painstakingly double-check the dimensions in their blueprints before handing the plans over to construction crews. This results in occasional errors. All because they have become reluctant to question the judgment of the computer program.

As for bias, Carr gives two major examples. The first comes from drivers who place excessive reliance on GPS route planners when driving. He cites the 2008 case of a bus driver in Seattle. The top of his bus was sheared off when he collided with a concrete bridge with a nine-foot clearance. He was carrying a high-school sports team at the time and twenty one of them were injured. He said he did not see the warning lights because he was busy following the GPS instructions.

The other example comes from the decision support software that is nowadays used by radiographers. This software often flags particular areas of an X-ray scan for closer scrutiny. While this has proven helpful in routine cases, a 2013 study found that it actually reduces the performance of expert readers in difficult cases. In particular, it ia found that the experts tend to overlook areas of the scans not flagged by the software, but which could be indicative of some types of cancer.

These four examples support the claim that automation complacency and automation bias can lead to inferior outcomes.

3. Conclusion
But is this really persuasive? I think there are some problems with the argument. For one thing, some of these examples are purely anecdotal. They highlight sub-optimal outcomes in certain cases, but they involve no proper control data. The Royal Majesty may have run aground in 1995 but how many accidents have been averted by the use of automated navigation systems? And how many accidents have arisen through the fault of human operators? (I can think of at least two high profile passenger-liner accidents in the past couple of years, both involving human error). Likewise, the bus driver may have crashed into the bridge, but how many people have gotten to their destinations faster than they otherwise would have through the use of GPS? I don’t think anecdotes of this sort are a good way to reach general conclusions about the desirability of automation systems.

The work on radiographers is more persuasive since it shows a deleterious comparative effect in certain cases. But, at the same time, it also found some advantages to the use of the technology. So the evidence is more mixed there. Now, I wouldn’t want to make too much of all this. Carr provides other examples in the book that make a good point about the potential costs of automation. For instance, in chapter five he discusses some other examples of the negative consequences of automation and digitisation in the healthcare sector. So there may be a good argument to be made about the sub-optimal nature of automation. But I suspect it needs to be made much more carefully, and on a case-by-case basis.

In saying all this, I am purely focused on the external effects of automation, i.e. the effects with respect to the output or function of the automated system. I am not concerned with the effects on the humans who are being replaced. One of Carr’s major arguments is that automation has deleterious effects for them too, specifically with respect to the degeneration of their cognitive functioning. This turns out to be a far more interesting argument and I will discuss it in the next post.