Monday, May 23, 2016

Vacancy - Research Assistant on the Algocracy and Transhumanism Project





I'm hiring a research assistant as part of my Algocracy and Transhumanism project. It's a short-term contract (5 months only) and available from July onwards. The candidate would have to be able to relocate to Galway for the period. Details below. Please share this with anyone you think might be interested.

Algocracy and the Transhumanist Project, IRC New Horizons NUI Galway
Whitaker Institute, NUI Galway
Ref. No. NUIG 067-16
Applications are invited from suitably qualified candidates for a full time, fixed term position as a Research Assistant with the Algocracy and Transhumanism Project at the Whitaker Institute, National University of Ireland, Galway. This position is funded by the Irish Research Council and is available for a five month period from July 2016.
 The project critically evaluates the interaction between humans and artificially intelligent, algorithm-based systems of governance.  It focuses on the role of algorithms in public decision-making processes and the increased integration between humans and technology. It examines how technology creates new governance structures and new governance subjects and the effect this has on core political values such as liberty and equality. Further information about the project can be found on the project webpage http://algocracy.wordpress.com
Job Description: The post holder will perform a variety of duties associated with the project. They will participate in research, preparation and editing of interviews with leading experts in the areas of algorithmic governance and human enhancement. They will prepare literature reviews. They will review and edit manuscripts for publication. They will assist in the organisation of research seminars and one major workshop. They will contribute to the project webpage and provide general assistance in disseminating project results. The post holder will report to Dr John Danaher.
Qualifications: Candidates should have completed a degree in a relevant field of study. Given the broad, interdisciplinary nature of the project, this includes (but is not limited to) law, philosophy, politics, sociology, psychology and information systems. Ideally, the candidate will have some experience in analytical and philosophical modes of research. Candidates should have a strong academic record and good IT skills. Ideal candidates will be professional, highly motivated, able to work effectively in a team environment, creative, and enthusiastic about research. Strong analytical, writing, and organisational skills are important prerequisites. Support/training will be provided to the successful candidate interested in furthering their own academic/research career.
Salary: €21,850 per annum, pro rata for this five-month contract. Start date: July 2016.
NB: Work permit restrictions apply to this category of post.
Further information on research and working at NUI Galway is available at http://www.nuigalway.ie/our-research/. Further information on the Whitaker Institute is available at www.whitakerinstitute.ie.
Informal enquiries concerning the post may be made to Dr John Danaher – john.danaher@nuigalway.ie
To Apply: Applications, to include a covering letter, CV, and the contact details of three referees, should be sent via e-mail (in Word or PDF only) to Gwen Ryan at gwen.ryan@nuigalway.ie
Please state reference number NUIG 067-16 in the subject line of your e-mail application.
Closing date for receipt of applications is 5.00 pm on Wednesday, 15th June 2016.
National University of Ireland, Galway is an equal opportunities employer.

Friday, May 13, 2016

Episode #3 - Sven Nyholm on Love Enhancement, Deep Brain Stimulation and the Ethics of Self-Driving Cars


This is the third episode in the Algocracy and Transhumanism project podcast. In this episode I talk to Sven Nyholm, who is an Assistant Professor of Philosophy at the Eindhoven University of Technology. Sven has a background in Kantian philosophy and currently does a lot of work on the ethics of technology. We have a wide-ranging conversation, circling around three main themes: (i) how technology changes what we value (using the specific example of love enhancement technologies); (ii) how technology might affect the true self (using the example of deep brain stimulation technologies); and (iii) how to design ethical decision-making algorithms (using the example of self-driving cars).

The work discussed in this podcast on deep brain stimulation and the design of ethical algorithms is being undertaken by Sven in collaboration with two co-authors: Elizabeth O'Neill (in the case of DBS) and Jilles Smids (in the case of self-driving cars). Unfortunately we neglected to mention this during our conversation. I have provided links to their work above and below.

Anyway, you can download the podcast here, listen below or subscribe on Stitcher or iTunes.



 

Show Notes


0:00 - 1:30 - Introduction to Sven

1:30 - 7:30 - The idea of love enhancement

7:30 - 10:30 - Objections to love enhancement

10:30 - 12:30 - The medicalisation objection to love enhancement

12:30 - 21:10 - Medicalisation as an evaluative category mistake

21:10 - 24:00 - Can you favour love enhancement and still value love in the right way?

24:00 - 28:10 - Evaluative category mistakes in other debates about technology

28:10 - 30:50 - The use of deep brain stimulation (DBS) technology

30:50 - 35:20 - Reported effects of DBS on personal identity

35:20 - 41:20 - Narrative Identity vs True Self in debates about DBS

41:20 - 46:25 - Is the true self an expression of values? Can DBS help in its expression?

46:25 - 50:30 - Use of DBS to treat patients with Anorexia Nervosa

50:30 - 55:20 - Ethical algorithms in the design of self-driving cars

55:20 - 1:02:40 - Is the trolley problem a useful starting point?

1:02:40 - 1:06:30 - The importance of legal and moral responsibility in the design of ethical algorithms

1:06:30 - 1:09:00 - The importance of uncertainty and risk in the design of ethical algorithms

1:09:00 - end - Should moral uncertainty be factored into the design?  


Links

  • Jilles Smids (Sven's Co-author on ethical algorithms for self-driving cars)

Wednesday, May 11, 2016

New Paper - Robots, Law and the Retribution Gap




Apologies for the dearth of posts lately; I'll be back to more regular blogging soon enough. To fill the gap, here's a new paper I have coming out in the journal Ethics and Information Technology. In case you are interested, the idea for this paper originated in this blogpost from late 2014. I was somewhat ignorant of the literature back then; I know more now.

Title: Robots, Law and the Retribution Gap
Journal: Ethics and Information Technology
Links: Philpapers; Academia; Official
Abstract: We are living through an era of increased robotisation. Some authors have already begun to explore the impact of this robotisation on legal rules and practice. In doing so, many highlight potential liability gaps that might arise through robot misbehaviour. Although these gaps are interesting and socially significant, they do not exhaust the possible gaps that might be created by increased robotisation. In this article, I make the case for one of those alternative gaps: the retribution gap. This gap arises from a mismatch between the human desire for retribution and the absence of appropriate subjects of retributive blame. I argue for the potential existence of this gap in an era of increased robotisation; suggest that it is much harder to plug this gap than it is to plug those thus far explored in the literature; and then highlight three important social implications of this gap.

Tuesday, May 3, 2016

Episode #2: James Hughes on the Transhumanist Political Project



This is the second episode in the Algocracy and Transhumanism project podcast. In this episode I interview Dr. James Hughes, executive director of the Institute for Ethics and Emerging Technologies and current Associate Provost for Institutional Research, Assessment and Planning for the University of Massachusetts Boston. James is a leading figure in both transhumanist thought and political activism. He is the author of Citizen Cyborg: Why Democratic Societies Must Respond to the Redesigned Human of the Future. I spoke to James about the origins of the transhumanist project, the political values currently motivating transhumanist activists, and some of the more esoteric and philosophical ideas associated with transhumanism. You can download the podcast here, listen below, or subscribe on Stitcher and iTunes.




Show Notes

0:00 - 1:00 - Introduction to James  
1:00 - 11:00 - The History of Transhumanist Thought (Religious and Mythical Origins) 
11:00 - 17:00 - Transhumanism and the Enlightenment Project  
17:00 - 25:30 - Transhumanism and Disability Rights Movement  
25:30 - 34:30 - The Political Values for Hiveminds and Cyborgs  
34:30 - 41:00 - The Dark Side of Transhumanist Politics  
41:00 - 43:00 - Technological Unemployment and Technoprogressivism  
43:00 - 51:00 - Building Better Citizens through Human Enhancement  
51:00 - 1:01:55 - The Threat of Algocracy?  
1:01:55 - 1:07:55 - Internal and External Moral Enhancement   


Saturday, April 23, 2016

The Ethics of Intimate Surveillance (2): A Landscape of Objections



(Part One)

This is the second in a two-part series looking at the ethics of intimate surveillance. In part one, I explained what was meant by the term ‘intimate surveillance’, gave some examples of digital technologies that facilitate intimate surveillance, and looked at what I take to be the major argument in favour of this practice (the argument from autonomy).

To briefly recap, intimate surveillance is the practice of gathering and tracking data about one’s intimate life, i.e. information about prospective intimate partners, information about sexual and romantic behaviours, information about fertility and pregnancy, and information about what your intimate partner is up to. There are a plethora of apps allowing for such intimate surveillance. What’s interesting about them is how they not only facilitate top-down surveillance (i.e. surveillance by app makers, corporate agents and governments) but also interpersonal and self-surveillance. This suggests that a major reason why people make use of these services is to attain more control and mastery over their intimate lives.

I introduced some criticisms of intimate surveillance at the end of the previous post. In this post, I want to continue that critical mode by reviewing several arguments against their use. The plausibility of these arguments will vary depending on the nature of the app or service being used. I’m not going to go into the nitty-gritty here. I want to survey the landscape of arguments, offering some formalisations of commonly-voiced objections along with some critical evaluation. I’m hoping that this exercise will prove useful to others who are researching in the area. Again, the main source and inspiration for this post is Karen Levy’s article ‘Intimate Surveillance’.


1. Arguments from Biased Data
All forms of intimate surveillance depend on the existence of data that can be captured, measured and tracked. Is it possible to know the ages and sexual preferences of all the women/men within a 2 mile radius? Services like Tinder and Grindr make this possible. But what if you wanted to know what they ate today or how many steps they have walked? Technically this data could be gathered and shared via the same services, but at present it is not.

The dependency of these services on data that can be (and is) captured, measured and tracked creates problems. What if the data that is being gathered is not particularly useful? What if it is biased in some way? What if it contributes to some form of social oppression? There are at least three objections to intimate surveillance that play upon this theme.

The first rests on a version of the old adage ‘what gets measured gets managed’. If data is being gathered and tracked, it becomes more salient to people and they start to manage their behaviour so as to optimise the measurements. But if the measurements being provided are no good (or biased) then this may thwart preferred outcomes. For example, mutual satisfaction is a key part of any intimate relationship: it’s not all about you and what you want; it’s about working together with someone else to achieve a mutually satisfactory outcome. One danger of intimate surveillance is that it could get one of the partners to focus on behaviours that do not contribute to mutually satisfactory outcomes. In general terms:

  • (1) What gets measured gets managed, i.e. if people can gather and track certain forms of data they will tend to act so as to optimise patterns in that data.
  • (2) In the case of intimate surveillance, if the optimisation of the data being gathered does not contribute to mutual satisfaction, it will not improve our intimate lives.
  • (3) The optimisation of the data being gathered by some intimate surveillance apps does not contribute to mutual satisfaction.
  • (4) Therefore, use of those intimate surveillance apps will not improve our intimate lives.

Premise (1) here is an assumption about how humans behave. Premise (2) is the ethical principle. It says that mutual satisfaction is key to a healthy intimate life and anything that thwarts that should be avoided (assuming we want a healthy intimate life). Premise (3) is the empirical claim, one that will vary depending on the service in question. (4) is the conclusion.

Is the argument any good? There are some intimate surveillance apps that would seem to match the requirements of premise (3). Levy gives the example of Spreadsheets — the sex tracker app that I mentioned in part one. This app allows users to collect data about the frequency, duration, number of thrusts and decibel level reached during sexual activity. Presumably, with the data gathered, users are likely to optimise these metrics, i.e. have more frequent, longer-lasting, more thrusting and decibel-raising sexual encounters. While this might do it for some people, the optimisation of these metrics is unlikely to be a good way to ensure mutual satisfaction. The app might get people to focus on the wrong thing.

I think the argument in the case of Spreadsheets might be persuasive, but I would make two comments about this style of argument more generally. First, I’m not sure that the behavioural assumption always holds. Some people are motivated to optimise their metrics; some aren’t. I have lots of devices that track the number of steps I walk, or miles I run. I have experimented with them occasionally, but I’ve never become consumed with the goal of optimising the metrics they provide. In other words, how successful these apps actually are at changing behaviour is up for debate. Second, premise (3) tends to presume incomplete or imperfect data. Some people think that as the network of data gathering devices grows, and as they become more sensitive to different types of information, the problem of biased or incomplete data will disappear. But this might not happen anytime soon and even if it does there remains the problem of finding some way to optimise across the full range of relevant data.



Another argument against intimate surveillance focuses on gender-based inequality and oppression. Many intimate surveillance apps collect and track information about women (e.g. the dating apps that locate women in a geographical region, the spying apps that focus on cheating wives, and the various fertility trackers that provide information about women’s menstrual cycles and associated moods). These apps may contribute to social oppression in at least two ways. First, the data being gathered may be premised upon and contribute to harmful, stereotypical views of women and how they relate to men (e.g. the ‘slutty’ college girl, the moody hormonal woman, the cheating wife and her cuckolded husband etc.). Second, and more generally, they may contribute to the view that women are subjects that can be (and should be) monitored and controlled through surveillance technologies. To put it more formally:

  • (5) If something contributes to or reinforces harmful gender stereotypes, or contributes to or reinforces the view that women can be and should be monitored and controlled, it is bad.
  • (6) Some intimate surveillance apps contribute to or reinforce harmful gender stereotypes and support the view that women can and should be monitored and controlled.
  • (7) Therefore, some intimate surveillance apps are bad.

This is a deliberately vague argument. It is similar to many arguments about gender-based oppression insofar as it draws attention to the symbolic properties of a particular practice and then suggests that these properties contribute to or reinforce gender-based oppression. I’ve looked at similar arguments in relation to prostitution, sex robots and surrogacy in the past. One tricky aspect of any such argument is proving the causal link between the symbolic practice (in this case the data being gathered and organised about women) and gender-based oppression more generally. Empirical evidence is often difficult to gather or inconclusive. This leads people to fall back on purely symbolic arguments or to offer revised views of what causation might mean in this context. A final problem with the argument is that even if it is successful, it’s not clear what its implications are. Could the badness of the oppression be offset by other gains (e.g. what if the fertility apps really do enhance women’s reproductive autonomy)?



The third argument in this particular group is a little bit more esoteric. Levy points in its direction with a quote from Deborah Lupton:

These technologies configure a certain type of approach to understanding and experiencing one’s body, an algorithmic subjectivity, in which the body and its health states, functions and activities are portrayed and understood predominantly via quantified calculations, predictions and comparisons.
(Lupton 2015, 449)
 
The objection that derives from this stems from a concern about algorithmic subjectivity. I have seen it expressed by several others. The concern is always that the apps encourage us to view ourselves as aggregates of data (to be optimised etc). Why this is problematic is never fully spelled out. I think it is because this form of algorithmic subjectivity is dehumanising and misses out on something essential to the well-lived human life (the unmeasurable, unpredictable, unquantifiable):

  • (8) Algorithmic subjectivity is bad: it encourages us to view ourselves as aggregates of data to be quantified, tracked and optimised; it ignores essential aspects of a well-lived life.
  • (9) Intimate surveillance apps contribute to algorithmic subjectivity.
  • (10) Therefore, intimate surveillance apps are bad.



This strikes me as a potentially very rich argument — one worthy of deeper reflection and consideration. I have mixed feelings about it. It seems plausible to suggest that intimate surveillance contributes to algorithmic subjectivity (though how much and in what ways will require empirical investigation). I’m less sure about whether algorithmic subjectivity is a bad thing. It might be bad if the data being gathered is biased or distorting. But I’m also inclined to think that there are many ways to live a good and fulfilling life. Algorithmic subjectivity might just be different; not bad.


2. Arguments from Core Relationship Values
Another group of objections to intimate surveillance are concerned with its impact on relationships. The idea is that there are certain core values associated with any healthy relationship and that intimate surveillance tends to corrupt or undermine those values. I’ll look at two such objections here: the argument from mutual trust; and the argument from informal reciprocal altruism (or solidarity).

Before I do so, however, I would like to voice a general concern about this style of argument. I’m sceptical of essentialistic approaches to healthy relationships, i.e. approaches to healthy relationships that assume they must have certain core features. There are a few reasons for this, but most of them flow from my sense that the contours of a healthy relationship are largely shaped by the individuals that are party to that relationship. I certainly think it is important for the parties to the relationship to respect one another’s autonomy and to ensure that there is informed consent, but beyond that I think people can make all sorts of different relationships work. The other major issue I have is that I’m not sure what a healthy relationship really is. Is it one that lasts indefinitely? Can you have a healthy on-again off-again relationship? Abuse and maltreatment are definite no-gos, but beyond that I’m not sure what makes things work.

Setting that general concern to the side, let’s look at the argument from mutual trust. It works something like this:

  • (11) A central virtue of any healthy relationship is mutual trust, i.e. a willingness to trust that your partner will act in a way that is consistent with your interests and needs without having to monitor and control them.
  • (12) Intimate surveillance undermines mutual trust.
  • (13) Therefore, intimate surveillance prevents you from having a healthy relationship.

The support for (12) is straightforward enough. There are certain apps that allow you to spy on your partner’s smartphone: see who they have been texting/calling, where they have been, and so on. If you use these apps, you are clearly demonstrating that you are unwilling to trust your partner without monitoring and control. So you are clearly undermining mutual trust.

I agree with this argument up to a point. If I spy on my partner’s phone without her consent, then I’m definitely doing something wrong: I’m failing to respect her autonomy and privacy and I’m not being mature, open and transparent. But it strikes me that there is a deeper issue here: what if she is willing to consent to my use of the spying app as a gesture of her commitment? Would it still be a bad idea to use it? I’m less convinced. To argue the affirmative you would need to show that having (blind?) faith in your partner is essential to a healthy relationship. You would also have to contend with the fact that mutual trust may be too demanding, and that petty jealousy is all too common. Maybe it would be good to have a ‘lesser evil’ option?



The other argument against intimate surveillance is the argument from informal reciprocal altruism (or solidarity). This is a bit of a mouthful. The idea is that relationships are partly about sharing and distributing resources. At the centre of any relationship there are two (or more) people who get together and share income, time, manual labour, emotional labour and so on. But what principle do people use to share these resources? Based on my own anecdotal experience, I reckon people adopt a type of informal reciprocal altruism. They effectively agree that if one of them does something for the other, then the other will do something else in return, but no one really keeps score to make sure that every altruistic gesture is matched with an equal and opposing altruistic gesture. They know that it is part of their commitment to one another that it will all pretty much balance out in the end. They think: “we are in this together and we’ve got each other’s backs”.

This provides the basis for the following argument:

  • (14) A central virtue of any healthy relationship is that resources are shared between the partners on the basis of informal reciprocal altruism (i.e. the partners do things for one another but don’t keep score as to who owes what to whom)
  • (15) Intimate surveillance undermines informal reciprocal altruism.
  • (16) Therefore, intimate surveillance prevents you from having a healthy relationship.

The support for (15) comes from the example of apps that try to gamify relationships by tracking data about who did what for whom, assigning points to these actions, and then creating an exchange system whereby one partner can cash in these points for favours from the other partner. The concern is that this creates a formal exchange mentality within a relationship. Every time you do the laundry for your partner you expect them to do something equivalently generous and burdensome in return. If they don’t, you will feel aggrieved and will try to enforce their obligation to reciprocate.
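To make the mechanism concrete, here is a minimal illustrative sketch (in Python, using purely hypothetical names and point values; it is not modelled on any particular app’s actual code) of the kind of points ledger such gamification apps seem to rely on: actions are logged, assigned point values, and accumulated points can be cashed in for favours.

```python
from collections import defaultdict

# Hypothetical point values such an app might assign to everyday gestures.
POINT_VALUES = {"did_laundry": 10, "cooked_dinner": 8, "planned_date": 15}

class RelationshipLedger:
    """Keeps score of who did what for whom and turns gestures into redeemable points."""

    def __init__(self):
        self.points = defaultdict(int)

    def log_action(self, partner, action):
        # Every logged gesture is scored, converting informal altruism into a running tally.
        self.points[partner] += POINT_VALUES.get(action, 5)

    def redeem(self, partner, cost, favour):
        # Accumulated points can be cashed in for a favour, creating a formal exchange system.
        if self.points[partner] < cost:
            raise ValueError(f"{partner} has not earned enough points for '{favour}'")
        self.points[partner] -= cost
        return f"{partner} redeems {cost} points for: {favour}"

ledger = RelationshipLedger()
ledger.log_action("Alex", "did_laundry")
ledger.log_action("Alex", "cooked_dinner")
print(ledger.redeem("Alex", 15, "choose the next film"))  # 18 points earned, 15 spent
```

The worry captured in premise (15) is precisely that once gestures are scored in this way, the informal ‘it will all balance out in the end’ attitude gives way to enforced reciprocation.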

I find this objection somewhat appealing. I certainly don’t like the idea of keeping track of who owes what to whom in a relationship. If I pay for the cinema tickets, I don’t automatically expect my partner to pay for the popcorn (though we may often end up doing this). But there are some countervailing considerations. Many relationships are characterised by inequalities of bargaining power (typically gendered): one party ends up doing the lion’s share of care work (say). Formal tracking and measuring of actions might help to redress this inequality. It could also save people from emotional anguish and feelings of injustice. Furthermore, some people seem to make formal exchanges of this sort work. The creators of the Beeminder app, for instance, appear to have a fascinating approach to their relationship.




3. Privacy-related Objections

The final set of objections returns the debate to familiar territory: privacy. Intimate surveillance may involve both top-down and horizontal privacy harms. That is to say, privacy harms due to the fact that corporations (and maybe governments) have access to the data being captured by the relevant technologies; and privacy harms due to the fact that one’s potential and actual partners have access to the data.

I don’t have too much to say about privacy-related objections. This is because they are widely debated in the literature on surveillance and I’m not sure that they are all that different in the debate about intimate surveillance. They all boil down to the same thing: the claim that the use of these apps violates somebody’s privacy. This is because the data is gathered and used either without the consent of the person whose data it is (e.g. Facebook stalking), or with imperfect consent (i.e. not fully informed). It is no doubt true that this is often the case. App makers frequently package and sell the data they mine from their users: it is intrinsic to their business model. And certain apps — like the ones that allow you to spy on your partner’s phone — seem to encourage their users to violate their partner’s privacy.

The critical question then becomes: why should we be so protective of privacy? I think there are two main ways to answer this:

Privacy is intrinsic to autonomy: The idea here is that we have a right to control how we present ourselves to others (what bits get shared etc) and how others use information about us; this right is tied into autonomy more generally; and these apps routinely violate this right. This argument works no matter how the information is used (i.e. even if it is used for good). The right may need to be counterbalanced against other considerations and rights, but it is a moral harm to violate it no matter what.

Privacy is a bulwark against the moral imperfection of others: The idea here is that privacy is instrumentally useful. People often argue that if you are a morally good person you should have nothing to hide. This might be true, but it forgets that other people are not morally perfect. They may use information about you to further some morally corrupt enterprise or goal. Consequently, it’s good if we can protect people from at least some unwanted disclosures of personal information. The ‘outing’ of homosexuals is a good example of this problem. There is nothing morally wrong about being a homosexual. In a morally perfect world you should have nothing to fear from the disclosure of your sexuality. But the world isn’t morally perfect: some people in some communities persecute homosexuals. In those communities, homosexuals clearly should have the right to hide their sexuality from others. The same could apply to the data being gathered through intimate surveillance technology. While you might not be doing anything morally wrong, others could use the information gathered for morally corrupt ends.

I think both of these arguments have merit. I’m less inclined toward the view that privacy is an intrinsic good and necessarily connected to autonomy, but I do think that it provides protection against the moral imperfection of others. We should work hard to protect the users of intimate surveillance technology from the unwanted and undesirable disclosure of their personal data.

Okay, that brings me to the end of this series. I won’t summarise everything I have just said. I think the diagrams given above summarise the landscape of objections already. But have I missed something? Are there other objections to the practice of intimate surveillance? Please add suggestions in the comments section.

Friday, April 22, 2016

New Podcast - Ep 1 Tal Zarsky on the Ethics of Big Data and Predictive Analytics





I've started a new podcast as part of my Algocracy and Transhumanism project. The aim of the project is to ask three questions:

  • How does technology create new governance structures, particularly algorithmic governance structures?
  • How does technology create new governance subjects, particularly through the augmentation and enhancement of the human body?
  • What implications does this have for our core political values, such as liberty, equality, privacy, transparency, accountability and so on?

The first episode is now available. I interview Professor Tal Zarsky about the ethics of big data and predictive analytics. You can download it here or listen below. I will add iTunes and Stitcher subscription information once I have received approval from both.


Show Notes

  • 0:00-2:00 - Introduction 
  • 2:00-12:00 - Defining Big Data, Data-Mining and Predictive Analytics 
  • 12:00-17:00 - Understanding a predictive analytics system 
  • 17:00 - 21:30 - Could we ever have an intelligent, automated decision-making system? 
  • 21:30 - 29:30 - Evaluating algorithmic governance systems: efficiency and fairness 
  • 29:30 - 36:00 - Could algocratic systems be less biased? 
  • 36:00 - 42:00 - Wouldn't algocratic systems inherit the biases of programmers/society? 
  • 42:00 - 54:30 - The value of transparency in algocratic systems
  • 54:30 - 1:00:1 - The gaming the system objection   



Thursday, April 21, 2016

The Ethics of Intimate Surveillance (1)



'Intimate Surveillance’ is the title of an article by Karen Levy - a legal and sociological scholar currently based at NYU. It shines a light on an interesting and under-explored aspect of surveillance in the digital era. The forms of surveillance that capture most attention are those undertaken by governments in the interests of national security or by corporations in the interests of profit.

But ‘smart’ technology facilitates other forms of surveillance. One particularly interesting form of surveillance is that relating to our intimate lives, i.e. activities associated with dating and mating. There are (or have been) a plethora of apps developed to allow us to track and quantify data associated with our intimate activities. Although many of these apps have a commercial dimension — and we shouldn’t ignore that dimension — users are primarily drawn to them for personal and interpersonal reasons. They think that accessing and mining intimate data will enhance the quality of their intimate lives. But are they right to think this?

That’s the question I want to answer over the next two posts. Levy’s article does a good job sketching out the terrain in which the conversation must take place, and so I will follow her presentation closely in what follows, but I want to add a layer of philosophical formalism to her analysis. I start, in this post, by sketching out the different forms of surveillance and explaining in more detail what is interesting and significant about intimate surveillance. I will follow this with some examples of intimate surveillance apps. And I will close with what I take to be the core argument in favour of their use. I’ll postpone the more critical arguments to part two.


1. The Forms of Intimate Surveillance
I have thrashed out the concept of surveillance many times before on this blog. In particular, I’ve looked at the frameworks developed by David Brin and Steve Mann to distinguish surveillance from sousveillance. Here, I want to develop a slightly different framework. It starts with a simple and intuitive definition of surveillance as the practice of observing and gathering data about human beings and their activities. I guess, technically, the concept could be expanded to include gathering data about other subjects, and if you wanted you could insist that data analysis and mining is part and parcel of surveillance, but I won’t insist on those things here. I don’t think we need to be overly formal or precise.

What’s more important are the forms of surveillance. What I mean by this is: who exactly is gathering the data? About whom? And for what purpose? Steve Mann might insist that the word ‘surveillance’ has a particular form built into its etymology: ‘sur’-veillance is monitoring and observation from above, i.e. from the top-down. As such, it is to be contrasted with other forms of ‘veillance’, such as ‘sous’-veillance, which is monitoring from below, i.e. from the bottom-up. This can be a useful distinction, but it does not exhaust the possibilities. In fact, we can distinguish between at least four different forms of ‘veillance’:

Top-down Veillance: This is where data is being gathered by socially powerful organisations about their subjects. The most common practitioners of top-down monitoring are governments and corporations. They gather information about their citizens and customers, usually in an effort to control and manipulate their behaviour in desired directions.

Bottom-up Veillance: This is where data is being gathered about socially powerful organisations by their subjects. For example, the citizens in a state could gather information about police abuse of minority populations by recording such abuse on their smartphones. Brin and Mann believe that bottom-up monitoring of this sort is the key to creating a transparent and fair society in the digital age.

Horizontal Veillance: This is where data is being gathered by individuals about other individuals (at roughly the same scale in a social hierarchy). Humans do this all the time through simple observation and gossip. We seem to have a strong desire to know more about our social peers. Technology fuels this desire by providing additional windows into their lives.

Self-veillance: This is where data is being gathered by individuals about themselves. It is common enough for us to monitor our own activities. But modern technologies allow us to gather more precisely quantified data about our own lives, e.g. number of steps walked, average heartbeat, hours of deep sleep, daily work-related productivity (emails answered, words written, sales made etc.).


So where does intimate surveillance fit into this schema? Intimate surveillance involves the gathering of data about our romantic and sexual lives. Technically, intimate surveillance could span all four categories, but what is particularly interesting about it is that it often takes the form of horizontal or self-veillance. People want to know more about their actual and potential intimate partners. And they want to know more about their performance/productivity in their intimate lives. This is not to discount the fact that the digital tools that enable horizontal and self-veillance also enable top-down veillance, but it is to suggest that the impact of intimate surveillance on how we relate to our intimate partners and how we understand our own intimate lives is possibly the most significant impact of this technology. At least, that’s how I feel about it.


2. Technologies of Intimate Surveillance
So how does intimate surveillance work? What kinds of information can we gather about our intimate lives? What apps are available to do this? Levy suggests that we think about this in relation to the ‘life-cycle’ of the typical relationship. Of course, to suggest that there is a typical life-cycle to a relationship is a dangerous thing — relationships come in many flavours and people can make different patterns work — nevertheless there do seem to be three general stages to relationships: (i) searching; (ii) connecting and (iii) committing (with breakdown/dissolution being common in many instances too).

Different kinds of data are important at the different stages in the life-cycle of a relationship, and different digital services facilitate the gathering of that data. In what follows, I want to give more detailed characterisations of the three main stages in a relationship and explain the forms of surveillance that take place at those stages. Levy’s paper is filled with examples of the many apps that have been developed to assist with intimate surveillance. Some of these apps were short-lived; some are still with us; others have, no doubt, been created since she published her article. I won’t review the full set here. I’ll just give some choice examples.

Searching: This is when we are looking for someone with whom to form an intimate connection. We usually don’t want to do this in a reckless fashion. We want to find someone who is suitable, shares our interests, to whom we are attracted, is geographically proximate, doesn’t pose a risk to us and so on. This requires some data gathering. Various apps assist with this. Two examples stick out from Levy’s article:
Tinder/Grindr: These apps allow you to find people in your geographical locale. You set the parameters on what you are looking for (age range, how close, etc.) and then you can search through profiles matching those criteria and ‘like’ them. If the other person likes you too, you can make a connection. Note how this is unlike traditional online dating services like Match.com or eHarmony. Those services tried to do the searching for you by using a complex algorithm to match you to other people. Tinder/Grindr are much more self-controlled: you set the parameters and surveil the data (see the sketch after this list).
Lulu: This is an app that allows female users to evaluate male users. It works kind of like a tripadvisor for men where women are the reviewers. They rate the men on the basis of romantic, personal and sexual appeal. This allows for women to gather and share information about prospective intimate partners. It is mainly targeted at undergraduate college students.
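To illustrate what ‘setting the parameters and surveilling the data’ amounts to, here is a minimal sketch (in Python, with hypothetical names and made-up data; it is not any real service’s code or API): the user configures the filter themselves, and a connection is only made when both parties ‘like’ each other.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    name: str
    age: int
    distance_km: float  # distance from the searching user

def search(profiles, min_age, max_age, max_distance_km):
    """User-set parameters do the filtering; the user, not a matchmaking algorithm, surveys the results."""
    return [p for p in profiles
            if min_age <= p.age <= max_age and p.distance_km <= max_distance_km]

def mutual_match(likes, a, b):
    """A connection is made only when both users have 'liked' each other."""
    return b in likes.get(a, set()) and a in likes.get(b, set())

profiles = [Profile("Sam", 29, 1.2), Profile("Jo", 34, 4.8), Profile("Ash", 27, 0.6)]
nearby = search(profiles, min_age=25, max_age=32, max_distance_km=3.0)
likes = {"me": {"Sam"}, "Sam": {"me"}}
print([p.name for p in nearby])          # ['Sam', 'Ash']
print(mutual_match(likes, "me", "Sam"))  # True
```

The contrast with Match.com-style services is that, in a sketch like this, nothing stands between the user and the filtered data: the complex matching algorithm is replaced by parameters the user sets and inspects for themselves.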

Connecting: This is when we actually make an intimate connection. Obviously, intimate connections can take a variety of forms. Two main ones are of interest here: (i) sex and (ii) romance. A variety of apps are available that allow you to track and gamify your sexual and romantic performance. Again, I’ll use two examples:
Spreadsheets: This bills itself as a ‘sex improvement’ app. It enables you to record how frequently you and your partner have sex. It also records how long each sexual encounter lasted, the number of ‘thrusts’ that took place, and the moans and groans (decibel level reached). The dubious assumption here being that these metrics are useful tools for optimising sexual performance.
Kahnoodle: This (defunct) app tried to gamify relationships. It allowed partners to rank ‘love signs’ from one another that would then earn them kudos points. Once enough points were accumulated, they could be redeemed for ‘koupons’ and other rewards.
With the rise of wearable tech and the development of new more sophisticated sensors, the number of apps that try to gamify our sexual and romantic lives is likely to increase. Apps of this sort explicitly or implicitly include behaviour change dimensions, i.e. they try to prompt you to alter your romantic and sexual behaviours in various ways.

Committing: This is when we have made a connection and then try to commit to our partner(s). Again, commitment can take different forms and partners often determine the parameters of commitment for themselves (e.g. some are comfortable with open relationships or polyamorous relationships). For many, though, commitment comes with two main concerns: (i) fertility (i.e. having or not having children) and (ii) fidelity (i.e. staying loyal to your partner). Various apps are available to assist people in ensuring fertility (or lack thereof) and fidelity:
Glow: This is an app that tries to assist women in getting pregnant. It does this by allowing them to track various bits of data, including menstruation, position and firmness of the cervix, mood, and position during sexual intercourse. The related app Glow Nurture is focused on women who are actually pregnant and allows them to track pregnancy symptoms. Both apps have an interpersonal dimension to them: women are encouraged to share data with their partners; the partners are encouraged to provide additional data, and are then prompted to behave in different ways. The app makers have also partnered with pharmacies to enable refilling of prescriptions for birth control etc. (There were also a bunch of menstrual cycle apps targeted at men that were supposed to enable them to organise their lives around their partner’s menstrual cycle - most of these seem to be defunct, e.g. PMSBuddy and iAmaMan)
Flexispy: This is one of a range of apps that allow you to spy on other people’s phones and smart devices. Though this could be used for many purposes, it explicitly states that one of its potential uses is to spy on ‘cheating’ spouses. The app allows you to see pictures/videos, messages, location data and calendars, and to listen to phone calls and ‘ambient’ audio. As Levy puts it, with these kinds of apps we enter a much darker world of intimate surveillance.

I have tried to illustrate all these examples in the image below.



3. The Argument from Autonomy
By now you should have a reasonable understanding of how intimate surveillance works. What about its consequences? Is it a good or bad thing? It’s difficult to answer this in the abstract. The different apps outlined above have different properties and features. Some of these properties might be positive; some might be negative. To truly evaluate their impact on our lives, we would have to go through them individually. That said, there are some general arguments to be made. I’ll start with an argument in favour of intimate surveillance.
 
The argument in favour of intimate surveillance is based on the value of individual autonomy. Autonomy is a contested concept but it refers, roughly, to the ability to make choices for oneself, be the author of one’s own destiny, and perform actions that are consistent with one’s higher order goals and preferences. I suspect that the attraction of these surveillance apps lies predominantly in their perceived ability to enhance autonomy associated with intimate behaviour. They give us the information we need to make better decisions at the searching, connecting and committing phases. Through tracking and gamification they help us to avoid problems associated with weakness of the will and ensure that we act in accordance with our higher order goals and preferences.

Think about an analogous case: exercise-related surveillance. Many people want to be fitter and healthier. They want to make better decisions about their health and well-being. But they find it hard to choose the right diet and exercise programmes and stick to them in the long run. There is a huge number of apps dedicated to assisting people in doing this — apps that allow them to track their workouts, set targets, achieve goals, and share with their peers in order to stay motivated. The net result (at least in principle) is that they acquire greater control or mastery over their health-related destinies. I think the goal is similar in the case of intimate surveillance: the data, the tracking, the gamification allows people to achieve greater control and mastery over their intimate lives. And since autonomy is a highly prized value in modern society, you could argue that intimate surveillance is a good thing.

To set this out more formally:

  • (1) Anything that allows people to enhance their autonomy (i.e. make better choices, control their own destiny, act in accordance with higher-order preferences and desires) is, ceteris paribus, good.
  • (2) Intimate surveillance apps allow people to enhance their autonomy.
  • (3) Therefore, intimate surveillance apps are, ceteris paribus, good.

There are two main ways to attack this argument. The first is to focus on the ‘ceteris paribus’ (all else being equal) clause in premise (1). You might accept that autonomy is an important value, insist that it must be balanced against other important values (e.g. mutual consent, trust, privacy etc), and then show how intimate surveillance apps compromise those other values. I’ll be looking at arguments along those lines in part 2.

The other way to attack the argument is to take issue with premise (2). Here everything turns on the properties of the individual app and the dispositions of the person using it. I suspect the biggest problem in this area is with the surveillance apps that include some element of behaviour change, e.g. the sex and romance tracking apps described above. Two specific problems would seem to arise. First, the apps might make dubious assumptions about what is optimal or desirable behaviour in this aspect of one’s intimate life. The assumptions might be flawed and might encourage behaviour that is not consistent with your higher order goals and preferences. Second, and more philosophically, by including behaviour prompts the apps would seem to take away a degree of autonomy. This is because they shift the locus of control away from the user to the behaviour-change algorithm developed by the app-makers. Now, to be clear, we often need some external motivational scaffolding to help us achieve greater autonomy. For instance, I need an alarm clock to help me wake up in the morning. But if our goal is greater autonomy, I would be sceptical of any motivational scaffolding that makes our choices for us. I think it is best (from an autonomy perspective) if we can set the parameters for preferred choices and then set up the external scaffolding that helps us satisfy those preferences. I worry that some apps try to do both of these things.
 
Okay, I’ll leave it there for today. In part two, I’ll consider a variety of objections to the practice of intimate surveillance.