Wednesday, October 22, 2014

One Million Pageviews


According to Google's stats, this blog finally crossed the 1,000,000 page view threshold yesterday. As I understand it, Google's stats are not always reliable, so this is probably an overestimate. Nevertheless, it felt like a moment worth marking in some way.

A big thanks to everyone who reads on a regular basis, checks in from time to time, and shares my work online. It is much appreciated. If I can ever reciprocate, just let me know.


Friday, October 17, 2014

Algocracy and other Problems with Big Data (Series Index)




What kind of society are we creating? With the advent of the internet-of-things, advanced data-mining and predictive analytics, and improvements in artificial intelligence and automation, we are on the verge of creating a global "neural network": a constantly updated, massively interconnected control system for the world. Imagine what it will be like when every "thing" in your home, place of work, school, city, state and country is monitored by or integrated into a smart device. And when all the data from those devices is analysed and organised by search algorithms. And when this in turn feeds into some automated control system.

What kind of world do you see? Should we be optimistic or pessimistic? I've addressed this question in several posts over the past year. I thought it might be useful to collect the links to all those posts in one place. So that's what I'm doing here.

As you'll see, most of those posts have been concerned with the risks associated with such technologies. For instance, the threat they may pose to transparency, democratic legitimacy and traditional forms of employment. But just to be clear, I am not a technophobe -- quite the contrary in fact. I'm interested in the arguments people make about technology. I like to analyse them, break them down into their key components, and see how they stand up to close, critical scrutiny. Sometimes I end up agreeing that there are serious risks; sometimes I don't.

Anyway, I hope you enjoy reading these entries. This is a topic that continues to fascinate me and I will write about it more in the future.

(Note: I had no idea what to call this series of posts, so I just went with whatever came into my head. The title might be somewhat misleading insofar as "Big Data" isn't explicitly mentioned in all of these posts, though it does feature in many of them.)


1. Rule by Algorithm? Big Data and the Threat of Algocracy
This was the post that kicked everything off. Drawing upon some work done by Evgeny Morozov, I argued that increasing reliance on algorithm-based decision-making processes may pose a threat to democratic legitimacy. I'm currently working on a longer paper that develops this argument and assesses a variety of possible solutions.


2. Big Data, Predictive Algorithms and the Virtues of Transparency (Part One, Part Two)
These two posts looked at the arguments from Tal Zarsky's paper "Transparent Predictions". Zarsky assesses arguments in favour of increased transparency in relation to data-mining and predictive analytics.


3. What's the case for sousveillance? (Part One, Part Two)
This was my attempt to carefully assess Steve Mann's case for sousveillance technologies (i.e. technologies that allow us to monitor social authorities). I suggested that some of Mann's arguments are naive, and that it is unlikely that sousveillance technologies will resolve problems of technocracy and social inequality.


4. Big Data and the Vices of Transparency
This followed up on my earlier series of posts about Tal Zarsky's "Transparent Predictions". In this one I looked at what Zarsky had to say about the vices of increased transparency.


5. Equality, Fairness and the Threat of Algocracy
I was going through a bit of a Tal Zarsky phase back in April, so this was another post assessing some of his arguments. In fact, this one looked at his most interesting argument (in my opinion, anyway). In it, Zarsky claimed that automated decision-making processes should be welcomed because they could reduce implicit bias.


6. Will Sex Workers be Replaced by Robots? (A Precis)
This was an overview of the arguments contained in my academic article "Sex Work, Technological Unemployment and the Basic Income Guarantee". That article looked at whether advances in robotics and artificial intelligence threaten to displace human sex workers. Although I conceded that this is possible, I argued that sex work may be one of the few areas that is resilient to technological displacement.


7. Is Modern Technology Creating a Borg-Like Society?
This post looked at a recent paper by Lipschutz and Hester entitled "We are the Borg! Human Assimilation into the Cellular Society". The paper argued that recent technological developments are pushing us in the direction of a Borg-like society. I tried to clarify those arguments and then asked the important follow-up question: is this something we should worry about? I identified three concerns one ought to have about the drive toward Borg-likeness.


8. Are we heading for technological unemployment? An Argument
This was my attempt to present the clearest and most powerful argument for technological unemployment. The argument drew upon the work of Andrew McAfee and Erik Brynjolfsson in The Second Machine Age. Although I admit that the argument has flaws -- as do all arguments about future trends -- I think it is sufficient to warrant serious critical reflection.


9. Sousveillance and Surveillance: What kind of future do we want?
This was a short post on surveillance technologies. It looked specifically at Steve Mann's attempt to map out four possible future societies: the univeillant society (one that rejects surveillance and embraces sousveillance); the equiveillant society (one that embraces surveillance and sousveillance); the counter-veillance society (one that rejects all types of veillance); and the McVeillance society (one that embraces surveillance but rejects sousveillance).

Wednesday, October 15, 2014

The Journal Club #4: Puryear on Finitism and the Beginning of the Universe



Welcome to the fourth edition of the Philosophical Disquisitions Journal Club. Apologies for the delay with the club this month — real life got in the way — but I’m here now and ready to go. The purpose of the journal club is to facilitate discussion and debate about a recent paper in the philosophy of religion. This month’s paper is:

Puryear, Stephen “Finitism and the Beginning of the Universe” (2014) Australasian Journal of Philosophy, forthcoming.

The paper introduces a novel critique of the Kalam Cosmological argument. Or rather, a novel critique of a specific sub-component of the argument in favour of the Kalam. As you may be aware, the Kalam argument makes three key claims: (i) that the universe must have begun to exist; (ii) that anything that begins to exist must have a cause of its existence; and (iii) that in the case of the universe, the cause must be God.

There is no need to get into the intricacies of the argument today. To understand Puryear’s paper we only need to focus on the first of those three key claims. That claim is typically defended by arguing that the universe could not be infinitely old because actual infinities cannot exist in the real world. Puryear argues that this defence creates problems for proponents of the Kalam, particularly when they try to reinforce it and render it less vulnerable to objections.

Is he right? Well, that’s what is up for debate. As per usual, I’ll try to kickstart the debate by providing a brief overview of Puryear’s main arguments.


1. Why can’t there be an actual infinite?

To start, we need to consider why proponents of the Kalam think that the past cannot be an actual infinite. William Lane Craig — the foremost defender of the argument — does so by highlighting absurdities either in the concept of an actual infinite (e.g. the absurdity of Hilbert’s Hotel) or in the concept of an actual infinite being formed by successive addition (e.g. the reverse countdown argument). We will focus on the latter absurdity here.

One of the main ideas put forward by Craig is that the past is made up of a series of events (E1, E2…En). Indeed, what we refer to as “the past” is simply the set of all these events added together. But if that’s what the past is, then it cannot be an actual infinite. If you start with one event, and add another event, and another, and so on ad infinitum, then you never get an actually infinite number of events. Instead, you get a set whose number of constituents is getting ever larger, tending towards infinity, but never actually reaching it. In the literature, this is known as a “potential infinite”. And if the set of past events cannot be actually infinite, it must have had a beginning (i.e. a first event).
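To put the same point in slightly more formal terms (a gloss of my own, not Craig's notation): however many events have been successively added, the collection formed at any given stage is always finite.

$$\left|\{E_1, E_2, \ldots, E_n\}\right| = n < \aleph_0 \quad \text{for every natural number } n.$$

The collection grows without bound as events are added, but at no stage does its cardinality actually reach $\aleph_0$. That is what calling it merely "potentially" infinite amounts to.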

This line of reasoning can be summarised as follows:



  • (1) If the universe did not have a beginning, then the past would consist in an infinite temporal sequence of events.
  • (2) An infinite temporal sequence of past events would be actually and not merely potentially infinite.
  • (3) It is impossible for a sequence formed by successive addition to be actually infinite.
  • (4) The temporal sequence of past events was formed by successive addition.
  • (5) Therefore, the universe had a beginning.



There are a variety of criticisms one could launch against this argument. I’ve considered some of them in the past, but for now we’re just interested in one possible response. Premise (3) claims that actual infinities cannot be formed by successively adding more and more elements to a set. Another way of putting it would be to say that we cannot traverse an actually infinite sequence in a stepwise fashion (i.e. go from E1 to E2 to E3 and so on until we reach an actual infinite).

The problem with this is that it runs afoul of the possibility that we traverse actually infinite sequences all the time. This is a notion first introduced to us by Zeno and his famous paradoxes. A simple version of Zeno’s argument goes something like this. In order for me to get from one side of the road to the other, I first have to traverse half the distance. And in order to traverse half the distance, I first have to traverse a quarter of the distance. And before I do that I have to traverse an eighth of the distance. And before that a sixteenth. And so on ad infinitum. The space between me and the other side of the road is made up of an actually infinite sequence of sub-distances. Nevertheless, I can traverse it in a stepwise fashion. Where’s the problem?
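As an aside (this worked sum is my own illustration, not something drawn from Puryear's paper), the arithmetic behind Zeno's case shows why the traversal is not obviously absurd. Assuming the road has unit length, the sub-distances form a convergent geometric series:

$$\sum_{k=1}^{\infty} \frac{1}{2^k} = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 1.$$

Infinitely many parts, yet a finite total distance (and so, at a constant walking speed, a finite time). That is why Zeno-style cases put pressure on the claim that an actually infinite sequence can never be traversed in a stepwise fashion.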

Well, there could be several. One is that maybe space cannot be infinitely sub-divided, as Zeno’s paradox assumes. We’ll return to that possibility later on. Another possibility is that when it comes to segments of space and time, the whole is prior to the parts. What does this mean? Take a line drawn on a piece of paper. You could argue that the line is made up of smaller sub-units, or you could argue, perhaps more plausibly, that the whole line is prior to the sub-units. In other words, that the full length of the line exists first, and then we simply sub-divide it into units thereafter. This sub-division is, however, purely conceptual in nature: it exists in thought only, not in reality. This means that the sub-division is only potentially infinite, not actually infinite. Why? Because we cannot mentally sub-divide something into an actually infinite number of sub-units; we can only add more and more sub-divisions, thereby tending towards infinity but never reaching it.

William Lane Craig has advocated this “priority of the whole” response himself, arguing that “the past” exists as an undivided whole first, and is only broken down into sub-units afterwards by our minds. This means it could only ever consist in a potentially infinite number of sub-units. Puryear argues that, in embracing this response, Craig creates problems for the Kalam as a whole. Let’s see what those problems are.


2. Why the Priority of the Whole view is Problematic
Puryear’s basic contention is this: if it is true that the whole can be prior to the parts, then it is possible that the past is simply an indefinitely extended temporal whole. In other words, it is possible that the past consists of one metaphysically indivisible whole, which is then conceptually sub-divided into temporal units (minutes, seconds, lunar cycles, whatever). Those conceptual sub-divisions would be imposed upon the metaphysical reality; they would not be actual features of that reality.

Why is this a problem? Because it would defeat one of the original assumptions underlying the Kalam. Proponents of the Kalam believe that their critics cling steadfastly to the notion of an actually infinite past sequence of events because they are committed to a beginningless past. But if the past can simply be one whole, which extends indefinitely in the reverse temporal direction, then it is possible to argue both that the universe did not begin to exist and that the past does not consist of an actually infinite number of events. This is because the sub-division of the past into events would be conceptual only, i.e. a potential infinite not an actual infinite, much like the division of a line into sub-units after it is drawn on the page.

That’s the gist of Puryear’s argument. One possible objection would be to argue that time and the events which take place in time are metaphysically distinct. In other words, although the past could be one whole temporal unit, the events which take place in the past may not be. This would imply that even if the past were an indefinitely extended whole, it would still contain an actually infinite number of events. And if it does, then the absurdities beloved by Craig and others would still apply.

To rebut this objection, Puryear needs to argue that the priority of the whole with respect to time (PWT) entails the priority of the whole with respect to events (PWE): if the past is just one big, indefinitely extended thing upon which we impose conceptual sub-divisions, then the same is true for events. That is to say, the past can simply be viewed as one big event (one “happening”) that we conceptually sub-divide into other events. To illustrate, Puryear gives the example of a moon orbiting a planet for an indefinitely extended period of time. Clearly, in such a case, the number of past events (i.e. number of “orbits”) coincides with the number of temporal intervals (i.e. lunar years). But if the latter are purely conceptual in nature, then so too could the former be purely conceptual. This could be true for all “events”.

If this is right, then the attempt to defend the Kalam by reference to the priority of the whole view fails.


3. Conclusion and Thoughts
This could have two significant implications. First, it could mean that the Zeno-paradox argument is open to the critic of the Kalam once more: if the division of time into sub-units isn’t purely conceptual, then, as Wes Morriston has argued, the fact that we can specify a rule that would divide it into an actually infinite sequence of sub-units gives us some reason to think that it really does consist of an actually infinite sequence of sub-units. This, again, reopens the possibility that we traverse actually infinite sequences in a stepwise fashion all the time. Alternatively, it could mean that proponents of the Kalam are forced to defend the view that time and space are quantised, i.e. that there is some minimum unit of sub-division.

Anyway, that’s a brief overview of Puryear’s article. I think it opens up an interesting avenue for debate, one that isn’t typically explored in conversations about the Kalam. Instead of plumbing the depths of our intuitions about infinity — which is never that fruitful given that infinity is such a counter-intuitive idea — it plumbs the depths of our intuitions about composition. But it also raises some questions. Is the priority of the whole view plausible? Does Puryear successfully argue for the equivalency between the past sequence of events and the past sequence of temporal intervals? Is the “quantised” view of space and time workable?

What do others think?

Monday, October 13, 2014

How can you make your writing more coherent? Four Tips




I’m currently teaching a course on research and writing. The goal of the course is to teach students how to better research, plan and write an academic essay. As a student, this was the kind of course I tended to dislike — usually because the advice offered was either completely banal (“write in a clear, straightforward manner”) or fussily prescriptive (“judgment should be spelled without an ‘e’ when it refers to legal judgment, but with an ‘e’ when it does not”*). Teaching such a course has changed my attitude. I’ve realised that although most of what I was taught was indeed banal and fussy, there are nevertheless some interesting things to be said about the craft of writing.

One of these is the importance of coherence in essay-writing. Incoherence is one of the biggest flaws I see in student essays. Such essays can often be made up of well-formed sentences, yet still be difficult to decipher. I cannot count the number of times I’ve waded through page after page of carefully worded prose, only to be left in the dark as to what the student was trying to say. The missing ingredient was coherence: the connective tissue that knits carefully worded prose together.

Although I’ve long been aware that this was the missing ingredient, I have never had much in the way of concrete advice to offer. I’m not that self-conscious about what I’m doing when I’m writing, so I’m typically unable to break the process down into a series of rules. Getting all the elements of an essay to fit together seems to come pretty naturally to me (though I’m not claiming to be a good or coherent writer). Fortunately, there are other people who can break things down into rules. Indeed, this was one of the joys of reading Steven Pinker’s recent book The Sense of Style. In one of my favourite chapters, he sets out exactly what it takes to write coherently. In this post, I want to share the four main “tips” that emerge from that chapter. In doing so, I’ll focus on their application to the kinds of academic writing that I engage in.


1. Adopt a sensible overarching structure
There are different “levels” to an academic paper. At the lowest level are the words that make up the sentences. One level up are the sentences that make up the paragraphs. Then come the paragraphs that make up sections and subsections. And so on. You get the idea. “Coherence” is something that can be assessed at each of these levels. Before you start writing, it’s worth thinking about it at the most general level: that of the paper itself. What are you trying to say? What order should you say it in?

The answer is that you should adopt a sensible overarching structure (often referred to as an “essay plan”). Admittedly, this is pretty banal advice. But it can be rendered less banal with some concrete examples. Suppose I want to write an essay about the nature of love in Shakespeare’s plays. How should I go about it? There are a number of sensible structures I could adopt. I could just open a complete collection of Shakespeare’s work and take it play-by-play, discussing all the different forms of love that appear in each play. Alternatively, I could group the plays into their sub-genres (comedies, tragedies and histories) and explain the similarities and differences across the genres.

Another possibility would be to group the types of love into different categories (romantic love, friendship, tragic love, unrequited love etc.) and discuss how they arise in different plays. Or I could take the plays in the order in which Shakespeare wrote them and see how his thinking about love evolved over time. Some of these might be more appropriate in different contexts. The important point is that each of them is sensible: if someone read an essay with one of those structures, at no point would they feel lost or disoriented by the discussion.

Pinker gives some examples of sensible structures from his own writing. First, he talks about a time when he had to write about the vast and unruly literature on the neuroscience and genetics of language. How could he bring order to this chaos?

It dawned on me that a clearer trajectory through this morass would consist of zooming in [on the brain] from a bird’s-eye view to increasingly microscopic components. From the highest vantage point you can make out only the brain’s two big hemispheres, so I began with studies of split-brain patients and other discoveries that locate language in the left hemisphere. Zooming in on that hemisphere, one can see a big cleft dividing the temporal lobe from the rest of the brain, and the territory on the banks of that cleft repeatedly turns up as crucial for language in studies of stroke patients and brain scans of intact subjects. Moving in closer, one can distinguish various regions — Broca’s area, Wernicke’s area, and so on — and the discussion can turn to the more specific language skills, such as recognising words and parsing them into a tree, that have been tied to each area. 
(Sense of Style, p. 144)


That definitely makes sense. In fact, it sounds like an exciting tour of different brain regions. Another example he gives relates to something he wrote on different languages: English, French, Hebrew, German, Chinese, Dutch, Hungarian, and Arapesh (spoken in New Guinea). He decided to write about them from a chronological perspective, starting with the most recent language and working his way back to the oldest. This allowed readers to see how human language had changed over time.

I tend to structure my papers around the arguments I want to make. Typically, I make one general argument in each paper, which is supported by a number of premises and defended against counter-arguments and objections. I think of the argument I wish to defend as having a structure, one that can literally be mapped out and visualised using an argument-mapping technique. I then view the paper as my attempt to illuminate that structure for the reader. Thinking about it in this way helps me plan out the structure. I always start with the conclusion — I don’t want to keep the reader in suspense: I want them to know where the discussion is going. This is usually followed by a section setting out the key concepts and ideas (just to make sure everyone has what they need to understand the structure of the argument). Thereafter, there are a number of different orderings available to me. Sometimes, I will start by looking at objections to my position, usually grouped by author or theme. A good example of this would be my paper “Hyperagency and the Good Life”. In it, I defended the notion that extreme forms of human enhancement might make life more meaningful, and I did so by first looking at four authors who disagreed with me. Other times, I will start with a basic defence of my own position and follow it up with an assessment of the various counter-arguments and objections (I did that in a more recent, as yet unpublished paper).

This probably sounds pretty dull and uninteresting — certainly when compared to Pinker’s tour of the brain — but I think it works well for academic writing, which often needs to be quite functional.


2. Make sure you introduce the reader to the topic and the point
A reader needs to know what it is you are writing about (the topic) and why (the point). Again, this seems like pretty banal and uninteresting advice, but it’s super-important and really interesting to see why. Read the following passage (taken from a study by the psychologists John Bransford and Marcia Johnson):

The procedure is actually quite simple. First you arrange things into different groups depending on their makeup. Of course, one pile may be sufficient depending on how much there is to do. If you have to go somewhere else due to lack of facilities that is the next step, otherwise you are pretty well set. It is important not to overdo any particular endeavour. That is, it is better to do too few things at once than too many. In the short run this may not seem important, but complications from doing too many can easily arise. A mistake can be expensive as well. The manipulation of the appropriate mechanisms should be self-explanatory, and we need not dwell on it here. At first the whole procedure will seem complicated. Soon, however, it will become just another facet of life. It is difficult to foresee any end to the necessity for this task in the immediate future, but then one never can tell.

Didn’t make much sense, did it? Now read it again, only this time add the following topic sentence at the very start: “We need to talk about washing clothes”.

Isn’t it amazing how this one little sentence can transform an incoherent mess of words into something that actually makes sense? It is still not a paragon of clear writing, but it is vastly different. If that doesn’t convince you of the importance of telling the reader what you are writing about, then I’m not sure what will. The same goes for telling them why you are writing about it. In other words, telling them what it is you want them to get out of reading your paper. Do you want to educate them? Convince them of some conclusion? Illuminate some obscure area of research? Get them to do something different with their lives? It’s important that they know as soon as possible. Otherwise they won’t be able to see how everything you say fits together.

To be sure, there is some judgment to be exercised here. You don’t want to bludgeon the reader to death with topic sentences and constant reminders of where it is all going. They’ll be able to keep a certain amount of this detail in their heads as they read through. In a short piece (e.g. one that will take less than 20 minutes to read), one mention of the topic and the point will usually suffice (with the proviso that sometimes you might change topics and you’ll need to inform the reader of this). In longer pieces, you might want to add a few reminders so that they keep on track. As a general guide, I find that I end up taking reminders out of what I’ve written rather than adding them in. This is because it takes longer to write than it does to read, and I often need to remind myself of the topic and the point as I write. But many of these reminders are unnecessary from the reader’s perspective.


3. Keep the focus on your protagonists
Everything you write will have one or more protagonists. The protagonist could be an actual person, or group of persons; or it could be an abstract concept or idea. Whatever the case may be, it is essential that you keep the reader’s focus on that protagonist throughout your discussion. They need to know what the protagonist is up to. If you constantly switch focus — without proper foreshadowing — you end up with something that is disjointed and incoherent.

Again, some concrete examples might help. Suppose I’m writing an essay about Charles Darwin and what he thought about evolution. In that case, Darwin — or, more precisely, his thinking — is my protagonist. I must keep the reader’s focus on what he thought throughout the essay. So I might start by talking about his days in Cambridge, what he was taught, and how this might have influenced his thinking. I would then move on to discuss his time on board the HMS Beagle, how he collected fossils throughout South America, and the importance of his observations in the Galapagos Islands. I would then talk about his return to England, his taking up residence in Down House in Kent, the slow maturation of his ideas, and the eventual publication of his work. As I write, I might occasionally switch focus. For instance, to fully understand his observations on the Galapagos Islands, I might need to take a paragraph explaining some of the unusual geographical features of those islands. Or, when I write about the eventual publication of his ideas, I might need to talk about Alfred Russel Wallace and his independent discovery of the principle of natural selection. These divagations would be perfectly acceptable; the important thing would be to bring the focus back to Darwin soon afterwards.

A more abstract example might be an essay on the concept of justice. In this case, justice itself is my protagonist. I must keep the reader focussed on its meaning, importance and implications. So I could start with a basic definition, talking about the role of justice in shaping political and social institutions. I could then divide justice up into different sub-concepts (distributive justice/corrective justice) and talk about them for a while. I might occasionally switch focus to a particular thinker and what he or she thought about justice. For example, I might talk about John Rawls and his concept of “justice as fairness”. This could involve a couple of paragraphs about Rawls as a person, how he developed his concept, and its influence on contemporary political thinking. This switch to a different protagonist would be fine, so long as it was foreshadowed (e.g. “The 20th century philosopher John Rawls had some interesting ideas about justice. Let’s talk about him for a bit”), and so long as the focus switches back to the concept itself once the discussion of Rawls reaches its natural endpoint.

Keeping the protagonists front and centre in your prose is essential to coherent writing. To do it effectively, you must have some consistent way of referring to them. One of the worst mistakes you can make is to indulge in the sin of elegant variation, i.e. constantly coming up with new ways of referring to an old protagonist. For example, referring to Rawls as “the Harvard sage” or the “bespectacled justice-fetishist” or whatever. Some, occasional, variation is nice, but too much of it is confusing. You don’t want the reader pausing every few minutes to figure out if you are still talking about the same thing.

I have to say, elegant variation is one of the biggest flaws I see among student essays, particularly those written by better students. They are often taught that variation is the hallmark of sophisticated prose; that repetitive use of the same word evinces an underdeveloped vocabulary. This is wrong. The goal of written communication is not to impress the reader with your verbosity; it is to be understood.


4. Understand how coherence relations work
The final tip is the most technical. As David Hume noted, there are a few basic types of relationship that can exist between different ideas (resemblance, contiguity and cause-and-effect). We can call these coherence relations. When writing, it is important to use these basic types of relationship to knit your ideas together. The easiest way to do this is to use connectives, particular words or strings of words that explicitly signal which type of relationship exists.

Pinker identifies four types of coherence relation: the three Humean ones, plus an additional type he calls attribution. In one of the most useful sections of his book, he goes through each of these relations, giving examples and explaining how they work. I’ll do the same now.

Let’s start with resemblance relations. The name is a little bit misleading because it doesn’t merely cover situations in which one idea resembles another; it also covers situations in which one idea differs from another, or clarifies or generalises another. Here’s a list of the most common types of resemblance relation:

Similarity: Shows how one idea is similar to another, e.g. “Darwin’s theory of evolution was like that of Alfred Russel Wallace.” A similarity relation is commonly signalled by the use of and, similarly, likewise and too.
Contrast: Shows how one idea differs from another, e.g. “Hobbes conceived of the state of nature as a war of all against all. Rousseau had a much rosier view.” A contrast relation is commonly signalled by the use of but, in contrast, on the other hand, and alternatively.
Elaboration: Describes something in a generic way first, and then in specific detail, e.g. “Justice is about fairness. It is about making sure that everybody gets an equal share of public resources.” Elaboration is commonly signalled by the use of a colon (:), that is, in other words, which is to say, also, furthermore, in addition, notice that, and which.
Exemplification: Starts with a generalisation and then gives one or more examples, e.g. “Free will is a deeply contested concept. There are as many different theories of free will as there are days of the week: agent causalist theories, event-causal libertarianist theories, compatibilist and semi-compatibilist theories, illusionist theories, hard-determinist theories and so on.” Exemplification is commonly signalled by the use of for example, for instance, such as, including and a colon (:).
Generalisation: Starts with a specific example and then gives a general rule, e.g. “There are as many different theories of free will as there are days of the week: agent causalist theories, event-causal libertarianist theories, compatibilist and semi-compatibilist theories, illusionist theories, hard-determinist theories and so on. This shows that free will is a deeply contested concept.” Generalisation is commonly signalled by in general, and more generally.
Exception - exception first: Gives an exception first and then gives the general rule, e.g. “David Hume was good-natured and witty. But philosophers are usually a sour bunch.” This is commonly signalled by however, on the other hand, and then there is.
Exception - generalisation first: Gives the generalisation first and then gives the exception, e.g. “Philosophers are usually a sour bunch. But David Hume was good-natured and witty.” This is commonly signalled by nonetheless, nevertheless, and still.

In my experience, resemblance relations are most common in academic writing. This is because academic writing typically talks about the relationships between abstract concepts and ideas, or between conclusions and premises and so on. That said, it sometimes talks about real people and real events. When it does, the other kinds of coherence relation are relevant.

Contiguity relations show how different events are related to one another in space and time. There are really only two forms this can take:

Sequence - before-and-after: Says that one thing happened and then another thing happened afterwards, e.g. “Darwin went on a five-year voyage on the HMS Beagle. He then came home and developed his theory of evolution.” This type of sequence is commonly signalled by and, before, and then.
Sequence - after-and-before: Says that one thing happened and, before that, another thing happened, e.g. “Darwin developed his theory of evolution while living in Down House in Kent. Before that, he had been on a five-year voyage on the HMS Beagle.” This type of sequence is commonly signalled by after, once, while and when.

Although both of these sequences are acceptable, human beings tend to follow things better if they are written in their natural sequence (i.e. if you describe them in the order in which they happened). That’s not to say that reverse-ordering should be avoided — sometimes it can cast an interesting light on a topic — but it should be used with discernment.

Then, we have relations of cause and effect. These are common in scientific and historical discussions where you are trying to explain why things happened the way they did. There are four types of this relation:


Result (cause-effect): Introduces an explanatory principle or rule, then says what follows from that rule, e.g. “David Hume was living in an era of religious intolerance; that’s why he never published his Dialogues Concerning Natural Religion during his lifetime.” This type of relation is commonly signalled by and, as a result, therefore, and so.
Explanation (effect-cause): States what happened first, then introduces the explanation, e.g. “The Soviet Union collapsed in 1991. This was because of internal corruption and decay.” This type of relation is commonly signalled by because, since, and owing to.
Violated expectation (preventer-effect): Used when the cause prevents something from happening that would otherwise have happened, e.g. “Darwin would never have published his theory were it not for Huxley’s intervention.” This is commonly signalled by but, while, however, nonetheless, and yet.
Failed prevention (effect-preventer): Used when the cause fails to prevent something from happening, e.g. “Darwin published his theory, despite his concerns about the religious backlash.” This is commonly signalled by despite and even though.


This brings us to the final category of coherence relation, which has only one member:

Attribution: Used when you want to attribute an idea or action or belief (or whatever) to a particular agent or individual, e.g. “Hume thought that there was no logical connection between the fact that the sun rose yesterday, and the fact that it would rise again tomorrow.” This is commonly signalled by according to, or X stated that.



Attribution is important when one wants to distinguish between who believes what and who did what. It is particularly useful when you want to distinguish between what you, as the writer, believe and what someone else believes.

These coherence relations are summarised in the lists above. One thing should be stated before concluding: you don’t always have to use connectives to signal the existence of a coherence relation. Indeed, too much signalling can make your writing seem awkward and laboured. You need to exercise some judgment. When is the relationship between two sentences or paragraphs clear and when is it not? Put in the connectives whenever it seems unclear. This, incidentally, is why re-reading and re-drafting are essential to good writing. If you don’t put yourself in the shoes of the reader — or get others to play this role for you — you won’t be able to get the mix of explicit and implicit signalling right.


5. Conclusion
So that’s it. Four tips for improving the coherence of one’s writing. To briefly recap:

1. Adopt a sensible overarching structure: Make your point in a logical, easy-to-follow fashion. Adopting spatial or temporal metaphors can help you to do this, e.g. imagining your argument as something with a visible structure.
2. Introduce the reader to the topic and the point: Make sure they know what you are talking about and why you are talking about it.
3. Help the reader keep track of the protagonists: Always be mindful of the person, concept or argument you are discussing. Make sure you keep the reader focused on that person, concept or argument. Avoid elegant variation.
4. Understand how coherence relations work: Be aware of how the ideas, concepts, agents, or events you are discussing relate to one another. Make sure the reader can follow those relations, either explicitly (through connective phrases) or implicitly (by good paragraph and sentence structuring).

* This is “fussily prescriptive” because it is a pseudo-rule. From my limited research, it seems that no one knows where the “rule” came from, and it is silly to insist on it because breaking it doesn’t hinder one’s ability to communicate.

Monday, October 6, 2014

Sousveillance and Surveillance: What kind of future do we want?



Jeremy Bentham’s panopticon is the classic symbol of authoritarianism. Bentham, a revolutionary philosopher and social theorist, adapted the idea from his brother Samuel. The panopticon was a design for a prison. It would be a single watchtower, surrounded by a circumference of cells. From the watchtower a guard could surveil every prisoner, whilst at the same time being concealed from their view. The guard could be on duty or not. Either way, the prisoners would never know if they were being watched. This uncertainty would keep them in check:

The building circular—A cage, glazed—a glass lantern about the Size of Ranelagh—The prisoners in their cells, occupying the circumference—The officers in the centre. By blinds and other contrivances, the inspectors concealed […] from the observation of the prisoners: hence the sentiment of a sort of omnipresence—The whole circuit reviewable with little, or if necessary without any, change of place. One station in the inspection part affording the most perfect view of every cell. 
(Bentham, Proposal for a New and Less Expensive mode of Employing and Reforming Convicts, 1798)

Bentham’s panopticon was never built, though certain real-life prisons got pretty close (e.g. the Presidio Modelo in Cuba). But many see echoes of the panopticon in the modern surveillance state. We are like the prisoners. Our world has become flooded with devices capable of recording and monitoring our personal data and feeding it to various authorities (governments and corporations). And although nobody may care about our personal data at any given time, we can never be sure we are not being watched.

This is all pretty banal and obvious if you’ve been paying attention over the past few years. But as many futurists and technophiles point out, there is one critical difference between the panopticon and our current predicament. In the panopticon, the information flows in one direction only: from the watched to the watchers. In the modern world, the information can flow in many directions: we too can be the watchers.

But is this a good thing? Should this ability to surveil and monitor everything be embraced or resisted? Those are the questions I wish to pursue in this post. I do so by focusing, in particular, on the writings of sousveillance advocate Steve Mann. On a previous occasion, I analysed and evaluated Mann’s case for the sousveillant society. Today, I want to do something slightly less ambitious: I want to review the possible future societies that are open to us depending on our attitude toward surveillance technologies. Following Mann, I’ll identify four possibilities and briefly comment on their desirability.


1. The Surveillance-Sousveillance Distinction
The four possible futures arise from the intersection of two competing approaches to “veillance” technologies. These are the surveillant and sousveillant approaches, respectively. You may be familiar with the distinction already, or have a decent enough grasp of etymology to get the gist of it (“sur” means “from above”; “sous” means “from below”). Nevertheless, Mann offers a couple of competing sets of definitions in his work, and it’s worth talking about them both.

The first set of definitions focuses on the role of authority in the use of veillance technologies. It defines surveillance as any monitoring that is undertaken by a person or entity in some position of authority (i.e. to whom we are inclined/obliged to defer). The authority could be legal or social or personal in nature. This is the kind of monitoring undertaken by governments, intelligence agencies and big corporations like Facebook and Google (since they both have a kind of “social” authority). By way of contrast, sousveillance is any monitoring undertaken by persons and entities that are not in a position of authority. This is the kind of citizen-to-authority or peer-to-peer monitoring that is now becoming more common.

The second set of definitions shifts the focus away from “authorities” and onto activities and their participants. It defines surveillance as the monitoring of an activity by a person that is not a participant in that activity. This, once again, is the kind of monitoring undertaken by governments or businesses: they monitor protests or shopping activities without themselves engaging in those behaviours (though, of course, people employed in governments and businesses could be participants in other activities in which they themselves are surveilled). In contrast, sousveillance is monitoring of an activity that is undertaken by actual participants in the activity.

I’m not sure which of these sets is preferable, though I incline toward the first. The problem with the first one is that it relies on the slightly nebulous and contested concept of an “authority”. Is a small, local shop-owner with a CCTV camera in a position of authority over the rich businessperson who buys cigarettes in his store? Or does the power disparity turn what might seem in the first instance to be a case of surveillance into one of sousveillance? Maybe this is a silly example but it does help to illustrate some of the problems involved with identifying who the authorities are.

The second set of definitions has the advantage of moving away from the concept of authority and focusing on the less controversial concepts of “activities” and their “participants”. Still, I wonder whether that advantage is outweighed by other costs. If we stuck strictly to the participant/non-participant distinction, then citizen-to-authority monitoring would seem to count as surveillance, not sousveillance. For example, protestors who record the behaviour of security forces would be surveilling them, not sousveilling them. You might think that’s fine — they’re just words, after all — but I worry that it misses something of the true value of the sousveillance concept.

That’s why I tend to prefer the first set of definitions.


2. Four Types of Veillance Society
And the definitions matter because, as noted above, the surveillance-sousveillance distinction is critical to understanding the possible futures that are open to us. You have to imagine that surveillance and sousveillance represent two different dimensions along which future societies can vary. A society can have competing attitudes toward both surveillance and sousveillance. That is: it can reject both, embrace both, or embrace one and reject the other. The result is four possible futures, which can be represented by the following two-by-two matrix:

                          Embraces sousveillance    Rejects sousveillance
Embraces surveillance     Equiveillance             McVeillance
Rejects surveillance      Univeillance              Counterveillance

(Note: Mann adopts a slightly more complicated model in his work. He imagines this more like a coordinate plane with the coordinates representing the number of people or entities engaging in different types of veillance. There may be some value to this model, but it is needlessly complex for my purposes. Hence the more straightforward two-by-two matrix).

Let’s consider these four possible futures in more detail:


The Equiveillance Society: This is a society which embraces both surveillance and sousveillance. The authorities can watch over us with their machines of loving grace and we can watch over them with our smartphones, smartwatches and other smart devices (though there are questions to be asked about who really controls those technologies).
The Univeillance Society: This is a society which embraces sousveillance but resists surveillance. It’s not quite clear why it is called univeillance (except for the fact that it embraces one kind of veillance only, but then that would imply that a society that embraced surveillance only should have the same name, which it doesn’t). But the basic idea is that we accept all forms of peer-to-peer monitoring, but try to cut out monitoring by authorities.
The McVeillance Society: This is a society that embraces surveillance but resists sousveillance. Interestingly enough, this is happening already. There are a number of businesses that use surveillance technologies but try to prevent their customers or other ordinary citizens from using sousveillance technologies (like smartphone cameras). For example, in Ireland, the Dublin Docklands Development Authority tries to prevent photographs being taken in the streets of the little enclave of the city that it controls (if you are ever there, it seems like the streets are just part of the ordinary public highways, but in reality they are privately owned). The name “McVeillance” comes from Mann’s own experiences with McDonalds (which you can read about here).
The Counterveillance Society: This is a society that resists both types of veillance technology. Again, we see signs of this in the modern world. People try to avoid being caught by police speed cameras (and there are websites set up to assist this), to avoid having their important life events recognised by big data, or to avoid having their photographs taken on nights out.


The modern world is in a state of flux. It is only recently that surveillance and sousveillance technologies have become cheap and readily available. As a result, we are lurching between these possibilities. Still, it is worth asking: what do we want the future to look like?


3. So what future should we aim for?
The answer to that question is difficult. Each society appeals to different interests and values. And it is possible that we could subdivide society into different “regions”, with different possibilities prevailing in different regions. Also, it might be more worthwhile to ask which of the four is really possible. Though one can certainly imagine circumstances in which each becomes a genuine reality, that’s not saying a whole lot. Imagination is often unconstrained by probability. In considering what is desirable, we might be better off constraining ourselves to what is probable.

I suspect no one really wants to live in the McVeillant society. Even those businesses and authorities that want to stamp out sousveillance probably wouldn’t tolerate the same policy being imposed on them. But I think it is a genuine possibility and one we should guard against. I suspect some people would like to live in the univeillant society, but it’s not a real possibility. There will always be some kinds of social authority and they will always try to monitor and control behaviour. Similarly, the counterveillant society will hold appeal for some, but I’m not sure about its likelihood. There are some resistive technologies out there, but can they cope with everything? Will we all have to walk around in movable Faraday cages to block out all EM-fields?

That, of course, leaves us with the equiveillant society. I tend to think this has the best combination of desirability and probability, but I certainly wouldn’t like to give the impression that it would be a panacea. As I noted in a previous series of posts, widespread availability of sousveillant technologies is unlikely to solve issues of power imbalance or social injustice. Still, it could help and focusing on carefully engineering that possible future would probably be better than sleepwalking into the McVeillant society. What do you think?

Friday, October 3, 2014

Should we abolish work?




I seem to work a lot. At least, I think I work a lot. Like many in the modern world, I find it pretty hard to tell the difference between work and the rest of my life. Apart from when I’m sleeping, I’m usually reading, writing or thinking (or doing some combination of the three). And since that is essentially what I get paid to do, it is difficult to distinguish between work and leisure. Of course, reading, writing and thinking are features of many jobs. The difference is that, as an academic, I have the luxury of deciding what I should be reading, writing and thinking about. This luxury has, perhaps, given me an overly positive view of work. But I confess, there are times when I find parts of my job frustrating and overbearing. The thing is: maybe that’s the attitude we should all have towards work? Maybe work is something we should be trying to abolish?

That, at any rate, is the issue I want to consider in this post. In doing so, I’m driven by one of my current research projects. For the past few months, I’ve been looking into the issue of technological unemployment and the possible implications it might have for human society. If you’ve been reading the blog on a regular basis, you will have seen this crop up a number of times. As I noted in one of my earlier posts, there are basically two general questions one can ask about technological unemployment:

The Factual Question: Will advances in technology actually lead to technological unemployment?
The Value Question: Would long-term technological unemployment be a bad thing (for us as individuals, for society etc.)?

It’s the value question that I’m interested in here. Suppose we could replace the vast majority of the human workforce with robots or their equivalents. Would this be a good thing? If we ignore possible effects on income distribution — admittedly a big omission, but let’s do it for the sake of this post — then maybe it would be. That would seem to be the implication of the abolish-work arguments I outline below.

Those arguments are inspired by a range of sources, mainly left-wing anti-capitalist writers (e.g. David Graeber, Bob Black, Kathi Weeks and, classically, Bertrand Russell), but they do not purport to accurately reflect or represent the views of any one of these authors. They are just my attempt to simplify a diverse set of arguments. I do so by dividing them into two main types: (i) “Work is bad”-arguments; and (ii) opportunity cost arguments. I’ll discuss both below, along with various criticisms.


1. What is work anyway?
If we are going to abolish work, it would be helpful to have some idea of what it is we are abolishing. After all, as I just noted, it can sometimes be hard to tell the difference between work and other parts of your life. In crafting a definition we need to guard against the sins of over- and under-inclusiveness, and against the risk of a value-laden definition. An under-inclusive definition will exclude things that really should count as work; an over-inclusive definition will risk turning “work” into a meaningless category; and a value-laden definition will simply beg the question. For example, if we define work as everything we do that is unpleasant, then we are being under-inclusive (since many people don’t find all aspects of their work unpleasant) and begging the question (since if we assume work is unpleasant, we naturally imply that it is the kind of thing we ought to abolish).

Consider Bertrand Russell’s famous, and oft-quoted, definition of work:

Work is of two kinds: first, altering the position of matter at or near the earth’s surface relatively to other such matter; second, telling other people to do so. The first kind is unpleasant and ill paid; the second is pleasant and highly paid. The second kind is capable of indefinite extension: there are not only those who give orders, but those who give advice as to what orders should be given. 
(Russell, In Praise of Idleness)

This is pithy, clever and no doubt captures something of the truth. It certainly corresponds to the definition I first learned from my school physics textbook, and it also conjures up the arresting image of the hard-working labourer and the pampered, over-paid manager. Nevertheless, it is over-inclusive and value-laden. If we were to take Russell seriously, then every time I lifted my teacup to my lips, I would be “working” and I would be doing something “unpleasant”. But neither of these things seems right.

How might we go about avoiding the sins to which I just alluded? I suggest we adopt the following definition:

Work: The performance of some skill (cognitive, emotional, physical etc.) in return for economic reward, or in the ultimate hope of receiving some such reward.

This definition is quite broad. It covers a range of potential activities: from the hard labour of the farm worker, to the pencil-pushing of the accountant and everything in between. It also covers a wide range of potential rewards: from traditional wages and salaries to any other benefit which can be commodified and exchanged on a market. It also, explicitly, includes what is sometimes referred to as “unpaid employment”. Thus, for example, unpaid internships or apprenticeships are included within my definition because, although they are not done in return for economic reward, they are done in the hope of ultimately receiving some such reward.

Despite this broadness, I think the definition avoids being overly-inclusive because it links the performance of the skill to the receipt of some sort of economic reward. Thus, it avoids classifying everything we do as work. In this respect, it does seem to capture the core phenomenon of interest in the anti-work literature. Furthermore, the definition doesn’t beg the question by simply assuming that work is, by definition, “bad”. The definition is completely silent on this issue.

That said, definitions are undoubtedly tricky, and philosophers love to pull them apart. I have no doubt my proposed definition has some flaws that I can’t see myself right now (we are often blind to the flaws in our own position). I’ll be happy to hear about them from commenters.


2. The “Work is Bad” Arguments
If we can accept my proposed definition of work, we can proceed to the arguments themselves. The first class of arguments proposes that we ought to abolish work because work is “bad”. In other words, the arguments in this class fit the following template:


  • (1) If something is bad, we ought to abolish it.
  • (2) Work is bad.
  • (3) Therefore, we ought to abolish work.


Premise (1) is dubious in its current form. Just because something is bad does not mean we should abolish it. If we can reform or ameliorate its badness, then we might be able to avoid complete abolition. This is especially so if the thing in question has good qualities in addition to the bad ones. We wouldn’t want to throw the proverbial baby out with the bathwater. It is only really if something is intrinsically and overwhelmingly bad that it ought to be abolished. For in that case, its good qualities will be minimal and its bad qualities will be ineradicable without complete abolition. This suggests the following revision to premise (1) and the remainder of the argument:


  • (1*) If something is intrinsically and overwhelmingly bad, we ought to abolish it.
  • (2*) Work is intrinsically and overwhelmingly bad.
  • (3) Therefore, we ought to abolish work.


This raises the bar considerably for proponents of abolition, but it seems to chime pretty well with many of the traditional critiques. For instance, Bob Black issues the following indictment of work:

Work is the source of nearly all the misery in the world. Almost any evil you’d care to name comes from working or from living in a world designed for work. In order to stop suffering, we have to stop working.
(Black, The Abolition of Work)

And Bertrand Russell chimes in:

I want to say, in all seriousness, that a great deal of harm is being done in the modern world by belief in the virtuousness of work, and that the road to happiness and prosperity lies in an organized diminution of work.
(Russell, In Praise of Idleness)

More recently, Kathi Weeks argued that there is something mysterious about our willingness to do something so unpleasant:

Why do we work so long and so hard? The mystery here is not that we are expected to work or that we devote so much time and energy to its pursuit, but rather that there is not more active resistance to this state of affairs. The problems with work today…have to do with both its quantity and its quality and are not limited to the travails of any one group. Those problems include the low wages in so many sectors of the economy; the unemployment, underemployment, and precarious employment suffered by many workers; and the overwork that often characterizes even the most privileged forms of employment; after all, even the best job is a problem when it monopolizes so much of life. 
(Weeks, The Problem with Work, p. 1)

To be sure, not all of these authors claim that work ought to be abolished. Some merely call for a reduction or diminution. Nevertheless, they seem agreed that there is something pretty bad about work. What could that be?

There are many candidate accounts of work’s badness. Some focus on how work compromises autonomy and freedom. The classic Marxist critique would hold that work is bad because it involves a form of alienation and subordination: workers are alienated from the true value of their labour and subordinated to the will of another. There is also the complaint that work is a form of coercion or duress: because we need access to economic rewards to survive and thrive, we are effectively forced into work. We are, to put it bluntly, “wage slaves”. Finally, there is Levine’s worry that the necessity of work compromises a particular conception of the good life: the life of leisure and gratuitous pursuit.

Moving beyond the effects of work on autonomy and freedom, there are other accounts of work’s badness. There are those who argue that work is stultifying and boring: it forces people into routines and limits their creativity and personal development. It is often humiliating, degrading and tiring: think of shift workers forced to spend long hours cleaning up other people’s dirt. This cannot be a consistently rewarding experience. In addition to this, some people cite the effect that work has on health and well-being, as well as its colonising potential. As Weeks points out, one of the remarkable features of modern work is how it seems to completely dominate our lives. This certainly seems to be true of my working life, as I suggested in the intro.

This is far from an exhaustive list of reasons why work is bad, but already we can see some problems with the argument. I’ll mention two here. The first, and most obvious, is that these accounts of work’s badness seem to be insufficiently general. At best, they might apply to specific workers and specific forms of work. Thus, for example, it is not true that all workers are coerced into work. Some people are independently wealthy and have no need for the economic rewards that work brings, and some countries have sufficiently generous welfare provisions to take work out of the “coercion” bracket (as noted previously, the basic income guarantee could be a game-changer in this regard). Similarly, while it is true that some forms of work are humiliating, stultifying, degrading, tiring, and deleterious to one’s health and well-being, this isn’t true of all forms of work. That’s not to say we should do nothing about the forms of work that share these negative qualities; but it is to say that the complete abolition or diminution of work goes too far. We should just focus on the bad forms of work (which, of course, requires a revised argument).

A second problem with the argument is that it seems to fly in the face of what many people think about their work. Many people actually seem to enjoy work, and actively seek it out. They attach a huge amount of self-worth and self-belief to success in their working lives. From their perspective, work doesn’t seem all that bad. How does the argument account for them? There is a pretty standard reply. People who derive such pleasure and self-worth from work are victims of a kind of false-consciousness. The virtuousness of the work ethic is an ideology that has been foisted upon them from youth. Consequently, they’ve been trained to associate hard work with all manner of positive traits, and unemployment with negative ones. But there is nothing essential to these associations. Work is only contingently associated with positive traits. For example, it is only because society places such value in the work ethic that our sense of self-worth and confidence gets wrapped up in it. We could easily break down these learned associations.

Is this response persuasive? It’s a tricky philosophical issue. I think there is some truth to the false-consciousness line. There are at least some strictly contingent relationships between work and positive outcomes. A restructuring or reordering of societal values could dissolve those relationships. For example, during the wave of unemployment that followed the 2008 financial crisis, it certainly seemed to me like unemployment carried less of a social stigma. Many of my friends lost their jobs or found it difficult to get work, but no one thought less of them as a result. Nevertheless, I can’t completely discount the pleasure or enjoyment that people claim to get from work. The question is whether this could be disassociated from the pursuit of economic reward, and whether greater pleasures could be found elsewhere. That’s what the next argument contends.


3. Opportunity Cost Arguments
Opportunity cost arguments are simple. They argue that work ought to be abolished because there are better uses of our time. In other words, they do not claim that work is overwhelmingly and necessarily bad; they simply claim that it is worse than the alternatives. The arguments fit the following template:


  • (4) If engaging in activity X prevents us from engaging in a more valuable activity, then X ought to be abolished.
  • (5) Working prevents us from engaging in more valuable activities.
  • (6) Therefore, work ought to be abolished.


Let’s go through the premises of this one. Premise (4) may, once again, go too far in arguing that an activity that denies us access to another must be abolished. It may be possible to reform or revise the activity so that it doesn’t prevent us from engaging in the other activity. So, for example, shortening the working week dramatically might reduce the obstacle work poses to engaging in other activities. This may be why the likes of Bertrand Russell and Kathi Weeks argue for such reductions (to a four-hour day in Russell’s case and a thirty-hour week in Weeks’s). Another problem with premise (4) is that it ignores the possible need for the less desirable activity. Cleaning my kitchen certainly prevents me from engaging in other more desirable activities, but it is probably necessary if I wish to avoid creating a health hazard. This is something many people argue in relation to work: it may be unpleasant but it is necessary. Without it we wouldn’t generate the wealth needed to bring us longer lives, better education, improved healthcare and so on.

That suggests the following revision is in order:


  • (4*) If engaging in activity X prevents us from engaging in a more valuable activity, and if X is not necessary for some greater good, then X ought to be abolished.
  • (5*) Working prevents us from engaging in more valuable activities, and it is not necessary for some greater good.
  • (6) Therefore, work ought to be abolished.


This revision makes it harder to defend premise (5*), but let’s see what can be said on its behalf. In his effort to praise idleness, Russell makes the point that leisure and idleness are a better use of our time. To back this up he points out that the leisure classes have historically been responsible for the creation of civilization. They did so at the expense of others, to be sure, but that doesn’t defeat the point:

In the past, there was a small leisure class and a larger working class. The leisure class enjoyed advantages for which there was no basis in social justice; this necessarily made it oppressive…but in spite of this drawback it contributed nearly the whole of what we call civilization. It cultivated the arts and discovered the sciences; it wrote the books, invented the philosophies and refined social relations. 
(Russell, In Praise of Idleness)

Bob Black, likewise, points out that work denies us access to a more valuable activity, play:

[Abolishing work] does mean creating a new way of life based on play; in other words, a ludic revolution. By “play” I mean also festivity, creativity, conviviality, commensality, and maybe even art. There is more to play than child’s play, as worthy as that is. I call for a collective adventure in generalized joy and freely interdependent exuberance…The ludic life is totally incompatible with existing reality.
(Black, The Abolition of Work)

The suggestion from both authors is that non-work is better, all things considered, than work. Russell bases this on an instrumentalist argument: we get more things of value from non-work (arts, sciences, political organisation etc.). Black bases it on an intrinsic argument: the playful life is, in and of itself, better than the working life. I think there is something to be said for both arguments. Although work undoubtedly has benefits and can be intrinsically rewarding to some, there is reason to think a life of non-work would be better than a life of work. Why? Well, one obvious problem with work is that one’s skills and talents are directed at providing things that are of value on an economic market. And there is reason to think that markets won’t always value the things that are best for society, or best for the individuals who work to satisfy market demands. David Graeber puts it rather bluntly:

[I]f 1% of the population controls most of the disposable wealth, what we call “the market” reflects what they think is useful or important, not anybody else. 
(Graeber, On the Phenomenon of Bullshit Jobs)

Indeed, freedom from market pressures is one of the great luxuries of my own line of work. I am able — for now anyway — to pursue the research that I find interesting and rewarding. It may not always be this way. Many of my academic colleagues are forced to produce research that has economic benefits or impacts. But I think that is genuinely inferior to being able to captain one’s own ship. In addition to this, I like the opportunity cost argument because it doesn’t force one to make unrealistic claims about the badness of all forms of work. It just says that whatever the benefits of work, non-work is slightly better.

Still, there are criticisms to be made of the argument. I’ll discuss three here. The first one is the “necessity” objection. This links into the revised form of the argument. A critic might concede that non-work is better, all things considered, than work, but argue that work is, unfortunately, necessary for some greater good. After all, we need those tax dollars to support education, healthcare and the self-directed research interests of academics. People wouldn’t produce food or houses or other basic necessities without financial reward, would they? This is a fair point, but it is worth noting that far fewer people are employed meeting basic human needs now than there were a hundred years ago. Why? Technology has allowed us to automate most agricultural and manufacturing jobs. Machines can now be used to meet our basic needs. Maybe machines could take over all the other socially valuable aspects of economic activity, and free us up to live the ludic life? One can always dream.

The second objection might be termed the “idleness” objection. Proponents of this will say that the opportunity cost argument presumes a far too rosy picture of human motivation. It presumes that, if left to their own devices, people will pursue projects of great worth to both themselves and others. But this is mere fantasy. If freed from the disciplining (invisible) hand of the market, people will simply fall idle and succumb to vice. We know this to be true because people suffer from weakness of the will: it is only the necessity of meeting their economic needs that allows them to overcome this weakness. I find this objection unpersuasive. One reason for this is that it is difficult to determine what is so bad about so-called “vice” and “idleness”. But suppose we could determine this. In that case, I have no doubt that in the absence of work many will succumb to “vice”, but I’m pretty sure they do that in the presence of work anyway. It’s not clear to me that things would be any worse in a world without work. People have basic psychological needs — e.g. for autonomy, competence and relatedness — that will drive them to do things in the absence of economic reward. Ironically, the major driver of vice and idleness might be advances in automation and artificial intelligence. If AIs don’t just take over the world of work, but also the world of moral projects (e.g. the alleviation of suffering), scientific discovery and artistic creation, then there might be nothing left for us humans to do. I suspect we are a long way from that reality, but it is something to consider nonetheless.

The final objection is the “efficiency” objection. The idea here is that even though the market does force us to cater to specific kinds of demands, it does have the virtue of forcing us to do things in an efficient manner. We all know the historical mistakes of communism and socialism: central planning and state-directed projects bred (and continue to breed) bloated and inefficient bureaucracies. Wouldn’t a world without work lead us to commit the same mistakes? I’m not sure about this. I agree that markets can be efficient (though sometimes they aren’t) but, as pointed out above, it’s not clear that humans need to be the ones working to meet market demands. Also, calling for the abolition or diminution of work does not commit one to calling for the reinstallation of centrally planned government.


4. Conclusion
So what’s the takeaway? Should work be abolished or, at the very least, diminished? It’s too difficult to answer that question in a blog post — or maybe in any venue — but we can reach some general conclusions. First, it’s probably wrong to say that all forms of work are sufficiently bad to warrant wholesale abolition. At best, we can say that certain types of work are bad, and that their badness is of sufficient magnitude to warrant abolition. That argument needs to be developed at a much more job-specific level. Second, if we are going to make the case for the abolition of work, it’s probably best to do so on the basis of the opportunity cost argument. The advantage of that argument is that it doesn’t commit us to proving that work is irredeemably awful; it just commits us to proving that the alternatives are better. And I think there is some reason to think that freedom from the demands of economic markets would be better for many people. To make the case fully persuasive, however, we would need to show that work is not necessary for greater goods. This is something that technological unemployment may actually help to prove: if we can use technology to meet our basic needs, the necessity of work may slowly erode.

None of this addresses the elephant in the room: the effects of technological unemployment on wealth and income inequality. A life without work is no good if the economic rewards that work currently brings are necessary for our survival and flourishing. It is only by reorganising the system of wealth distribution that this problem can be overcome. Whether that is desirable or feasible is a topic for another day.

Thursday, September 25, 2014

Dawkins and the "We are going to die"-Argument



(I originally thought this would be a more interesting blog post, but I think the final product is slightly underwhelming. Indeed, I thought about not posting it at all. In the end, I felt there might be some value to it, particularly since there might be those who disagree with my analysis. If you are one of them, I'd love to hear from you in the comments.)

Consider the following passage from Richard Dawkins’s book Unweaving the Rainbow:

We are going to die, and that makes us the lucky ones. Most people are never going to die because they are never going to be born. The potential people who could have been here in my place but who will in fact never see the light of day outnumber the sand grains of Arabia. Certainly those unborn ghosts include greater poets than Keats, scientists greater than Newton. We know this because the set of possible people allowed by our DNA so massively exceeds the set of actual people. In the teeth of these stupefying odds it is you and I, in our ordinariness, that are here. We privileged few, who won the lottery of birth against all odds, how dare we whine at our inevitable return to that prior state from which the vast majority have never stirred?

As Steven Pinker points out in his recent book, this is a rhetorically powerful passage. It is robust, punchy and replete with evocative and dramatic imagery (“the sand grains of Arabia”, “unborn ghosts”, “teeth of these stupefying odds”). Indeed, so powerful is it that many non-religious people — Dawkins included — have asked for it to be read at their funerals (click here to see Dawkins read the passage at a public lecture to rapturous applause).

While I can certainly appreciate the quality of the writing, I am, alas, somewhat prone to “unweaving the rainbow” myself. If we stripped away the lyrical writing, what would we be left with? To be more precise, what kind of argument would we be left with? It seems to me that Dawkins is indeed trying to present some kind of argument: he has conclusions that he wants us to accept. Specifically, he wants us to be consoled by the fact that we are going to die; to stop whining about our deaths; to stop fearing our ultimate demise. And this is all because we are lucky to be alive. In this respect, I think that what Dawkins is doing is analogous to what the classic Epicurean writers did when they tried to soothe our death-related anxieties. But is his argument any good? That’s the question I will try to answer.

I’ll start by looking at the classic Epicurean arguments and draw out the analogy between them and what Dawkins is trying to do. Once that task is complete, I’ll try to formulate and evaluate Dawkins’s argument.


1. The Epicurean Tradition
There are two classic Epicurean arguments about death. The first comes from Epicurus himself; the second comes from Lucretius, who was a follower of Epicureanism. Epicurus’s argument is contained in the following passage:

Foolish, therefore, is the man who says that he fears death, not because it will pain when it comes, but because it pains in the prospect. Whatever causes no annoyance when it is present, causes only a groundless pain in the expectation. Death, therefore, the most awful of evils, is nothing to us, seeing that, when we are, death is not come, and, when death is come, we are not. It is nothing, then, either to the living or to the dead, for with the living it is not and the dead exist no longer 
(Epicurus, Letter to Menoeceus)

The argument is all about our attitude towards death (that is: the state of being dead, not the process of dying). Most people fear death. They think it among the greatest of the evils that befall us. But Epicurus is telling us they are wrong. The only things that are good or bad are conscious pleasure and pain. Death entails the absence of both. Therefore, death is not bad and we should stop worrying about it. I’ve discussed a more complicated version of this argument before, in case you are interested, but that’s the gist of it.

Let’s turn then to Lucretius’s argument. This one comes from a passage of De Rerum Natura, which I believe is the only piece of writing we have from Lucretius:

In days of old, we felt no disquiet... So, when we shall be no more — when the union of body and spirit that engenders us has been disrupted — to us, who shall then be nothing, nothing by any hazard will happen any more at all. Look back at the eternity that passed before we were born, and mark how utterly it counts to us as nothing. This is a mirror that Nature holds up to us, in which we may see the time that shall be after we are dead.

This argument builds upon that of Epicurus by adding a supporting analogy. The analogy asks us to compare the state of non-existence prior to our births with the state of non-existence after our deaths. Since the former is not something we worry about, the latter too should “count to us as nothing”. This is sometimes referred to as the symmetry argument, because it claims that we should have a symmetrical attitude toward pre-natal and post-mortem non-existence. Some people think that Lucretius adds little to what Epicurus originally argued; others think Lucretius’s argument has its own merits. Again, this is something I have discussed in more detail before.

I won’t assess the merits of either argument here. Instead, I’ll just highlight some general features. Note how both arguments try to call our attention to some “surprising fact”: the centrality of pain and pleasure to our well-being, in Epicurus’s case (this might be less radical now than it was in his day); and our attitude to pre-natal non-existence, in Lucretius’s case. Then note how they both use this surprising fact to reorient our perspective on death. They both claim that this surprising fact has the implication that we should not join the masses in fearing our deaths; instead, we should treat our deaths with equanimity.


2. Dawkins and the Argument from Genetic Luck
My feeling is that Dawkins is trying to do the same thing in his “We are going to die” passage. Only in Dawkins’s case the “surprising fact” has nothing to do with conscious experience or our attitudes towards non-existence prior to birth; it has to do with the improbability of our existence in the first place.

So how should we interpret this argument? Look first to the wording. Dawkins seems to be concerned with those who spend their lives ‘whining’ about death. He thinks they don’t fully appreciate the rare ‘privilege’ they have in being alive at all, particularly when they compare their ordinariness to the set of possible people who could have existed. He tells them (actually all of us) that they are the “lucky ones” because they are going to die, not in spite of it.

This suggests that we could interpret Dawkins’s argument in something like the following form:


  • (1) If we are lucky to be alive, then we should not be upset by the fact that we are going to die.
  • (2) We are lucky to be alive.
  • (3) Therefore, we should not be upset by the fact that we are going to die.


How do the premises of this argument work? Let’s start with premise (1). The implication contained in the premise is that we should be grateful for the opportunity of being alive, even if that entails our deaths. This suggests that the argument is an argument from gratitude. He is telling us to be grateful for the rare privilege of dying. The problem I have with this is that gratitude has a somewhat uncertain place in a non-religious worldview. Gratitude is typically something we experience in our relationships with others. I am grateful to my parents for supporting me and paying for my education; I am grateful to my friends for buying me an expensive gift; and so on. If we think of our lives as being gifts from a benevolent creator, then being grateful, arguably, makes sense. But Dawkins is, famously, an atheist. So he must be relying on a different notion of gratitude. He must be saying that we should be grateful to the causal contingency of the natural order for allowing us to exist. But this seems perverse. The natural order is impersonal and uncaring: it just rolls along in accordance with certain causal laws. Why should we feel grateful to it? This same natural order is, after all, responsible for untold human suffering, e.g. suffering from natural disasters, viral infections, cancer and other unpleasantries. These are facets of the natural order that we tend not to accept. In fact, they are things we generally try to overcome. Why should we feel grateful for being plunged into a life filled with suffering of this sort? Couldn’t it be that death is one of the facets of life that we should use our ingenuity to overcome?

Now, I don’t want to be entirely dismissive of this line of argument. Michael Sandel and Michael Hauskeller have tried to articulate a secular, non-religious sense of gratitude that might fit with Dawkins’s argument (though I have my doubts). And I also don’t think that rejecting gratitude should lead us to resentment. I don’t think resentment toward the natural order is any more appropriate than gratitude. Indeed, I suspect it may even be counter-productive. For example, if we take up the suggestion at the end of the previous paragraph — and think that we should use our ingenuity to overcome death — I suspect we will end up being pretty disappointed. That’s not to say that efforts to achieve life extension are to be rejected. It’s just to say that it’s probably unwise to make them the hinge upon which you hang all your hopes and aspirations. I tend to favour a more stoic attitude to the natural order, one which involves adjusting one’s hopes and desires so that they are reconciled with the likelihood of death.

I think these criticisms point toward the untenability of Dawkins’s argument — at least insofar as it attempts to console us about our deaths. But for the sake of completeness, let’s also consider the second premise. Is it true to say that we are lucky to be alive? Dawkins probably spends more time on this issue than on any other in the passage. He uses an argument from genetic luck: the set of possible combinations of DNA is vastly larger than the set of actual people (yourself included). Your particular combination of DNA is just a tiny, tiny slice of that probability space.
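
To get a feel for the scale involved, here is a rough back-of-the-envelope sketch in Python. The figures are my own assumptions for illustration (a genome of roughly three billion base pairs, and a common rough estimate of around one hundred billion humans ever born); they are not numbers Dawkins himself supplies, and the point survives even if they are off by quite a bit.

import math

# Rough, assumed figures for illustration (not Dawkins's own numbers):
GENOME_LENGTH = 3_000_000_000   # approximate number of base pairs in a human genome
BASES = 4                       # A, C, G, T
HUMANS_EVER_BORN = 1e11         # rough estimate of humans who have ever lived

# The number of possible DNA sequences is BASES ** GENOME_LENGTH, far too large
# to compute directly, so we compare orders of magnitude instead.
possible_sequences_log10 = GENOME_LENGTH * math.log10(BASES)
actual_people_log10 = math.log10(HUMANS_EVER_BORN)

print(f"Possible genome sequences: ~10^{possible_sequences_log10:,.0f}")
print(f"Humans who have ever lived: ~10^{actual_people_log10:.0f}")

On these assumptions the space of possible genomes is around ten to the power of 1.8 billion, against a mere ten to the power of eleven actual people, which is the kind of disproportion Dawkins is gesturing at.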

I would be inclined to accept this argument. I don’t doubt that the set of possible people is much larger than the set of actual people. The question, of course, is whether all members of that set are equiprobable. Dawkins seems to think that they are. Indeed, he seems to adopt something akin to the principle of indifference when it comes to assessing the probability of the members of that set. Is this the right principle to adopt? I’m not sure. If one accepts causal determinism, then maybe my existence wasn’t lucky at all: it was causally predetermined by previous events. It could never have been any other way. Still, I don’t let the fact (if it is a fact) of causal determinism affect my probability judgments in relation to other, potentially causally determined, phenomena, such as national or state lotteries. So it probably shouldn’t affect my judgment in this case either.

In other words, I think premise (2) is okay. The real issue is with premise (1) and whether luck entails some change in our attitude toward death. As I said above, I don’t see why this has to be the case.