Saturday, August 27, 2016

Episode #10 - David Gunkel on Robots and Cyborgs



This is the tenth episode in the Algocracy and Transhumanism Podcast. In this episode I talk to David Gunkel. David is a professor of communication studies at Northern Illinois University. He specialises in the philosophy and ethics of technology. He is the author of several books, including Hacking Cyberspace, The Machine Question and Of Remixology. I talk to David about two main topics: (i) robot rights and responsibilities and (ii) the cyborgification of society.

You can download the episode at this link. You can listen below. You can also subscribe on Stitcher and iTunes (via RSS - click on 'add to iTunes').


Show Notes

  • 0:00 - 1:50 - Introduction
  • 1:50 - 4:23 - Robots in the News
  • 4:23 - 10:46 - How to think about robots: agency vs patiency
  • 10:46 - 13:20 - The problem of distributed agency
  • 13:20 - 18:00 - Robots as tools, machines and agents
  • 18:00 - 24:25 - The spectrum of robot autonomy
  • 24:25 - 28:04 - Machine learning: is it different this time?
  • 28:04 - 39:40 - Should robots have rights and responsibilities?
  • 39:40 - 43:55 - New moral patients and emotional manipulation
  • 43:55 - 57:14 - Understanding the three types of cyborg
  • 57:14 - 1:02:26 - The Borg and the Hivemind Society
  • 1:02:26 - End - Cyborgification as a threat to Enlightenment values
   

Tuesday, August 23, 2016

Who knows best? Personal Happiness and the Search for a Good Life




I would like to be happier. I would like to live a good life. But I often get it wrong. Once upon a time I thought that getting a PhD would make me happy. It didn’t. It made me painfully aware of my own ignorance and more anxious about the future. Another time I thought that going on holidays to Spain for a week would make me happy: what could be better than a week relaxing in the sunshine, without a care in the world? Surely it would be just the balm that my overactive mind needed? But it didn’t make me happy either. It was too hot and I quickly got bored. By the end of the week I was itching to get home.

It turns out that my unfortunate tangles with happiness are not that uncommon. Daniel Gilbert wrote an entire book about these problems called Stumbling on Happiness. In it, he argued that various psychological biases mean that humans are not good predictors of what will make them happy. They often stumble: intentionally doing things that make them miserable while accidentally falling into things that make them happy. He suggested that we shouldn’t rely on our own judgments about what will make us happy. Instead, we should rely on others: we should learn from their mistakes and successes.

Some scientists think they can come up with better tools for judging what will make us happy. Allen McConnell and his colleagues, for example, have developed a modified version of the implicit association test (IAT) that can use our implicit preferences to predict what will make us happy. Similarly, Robb Rutledge and his colleagues have developed a computational model of the brain that can predict subjective well-being during certain tasks. These tests and models are in their relative infancy, but they suggest ways in which careful scientific scrutiny of our minds could assist in the search for happiness.* All of which provokes the following question:

Who knows best? - When it comes to figuring out what makes us happy, who knows best: (i) ourselves or (ii) scientists who have carefully studied the neural and cognitive markers of happiness?

That’s the question asked in Stephanie Hare and Nicole Vincent’s article “Happiness, Cerebroscopes and Incorrigibility: Prospects for Neuroeudaimonia”. They pose an interesting thought experiment. Imagine if some future scientific discoveries allow us to construct a cerebroscope, i.e. a device for looking at the activities and networks within our own brains and identifying the patterns that are correlated with happiness. Should we rely on the cerebroscope in lieu of our own subjective judgment?

They make three arguments in response. First, they suggest that you have to distinguish between two different versions of the ‘who knows best?’ question. Second, they suggest that on the first interpretation of the question, we will always be more reliable judges of our own happiness than any scientist or prospective cerebroscope might be. And third, on the second interpretation of the question, they suggest that scientists and prospective cerebroscopes might be able to offer some useful assistance, but we probably shouldn’t overestimate their contribution.

I want to look at all three arguments in what follows. While I agree with much of what Hare and Vincent have to say, I think their second argument is less important than they seem to believe and that their third argument underestimates the prospects for future technological happiness-assistance.


1. Two different versions of the ‘Who Knows Best?’ Question
Hare and Vincent’s first argument is not really an argument so much as it is an observation. There is little to disagree with in it. But it is important as it sets the stage for their two other arguments.

We must start by noting that ‘happiness’ is a complex concept. It’s one of those terms that is bandied about a lot in philosophy and psychology and is open to different interpretations. Hare and Vincent focus largely on mental understandings of happiness (i.e. understandings that suppose happiness to be a mental phenomenon). They do this because they are interested in the potential for the mind sciences to contribute to our search for happiness. But they acknowledge that happiness may not be a strictly mental phenomenon.

Furthermore, even if it is a mental phenomenon, there is room for disagreement about the nature and character of that mental phenomenon. Happiness might be raw conscious pleasure or euphoria, or it might be a less extreme feeling more akin to satisfaction. Alternatively, happiness could be a diffuse mood or temperament that affects how you perceive and understand the world. At the very least, we can say that it is a positive mental feeling/state/mood and that it can be ephemeral or long-lived.

Humans are interested in happiness because of its positive attributes and because they believe happiness is an important part of a well-lived life. Most people want to live happy lives. But then they must confront the two different aspects of happiness that could be relevant to their quest for happiness:

In the moment happiness: This is the occurrent mental state/feeling/mood of being happy.
Dispositional happiness: These are properties or attributes of the individual that make them likely to experience ‘in the moment’ happiness in particular contexts.

Suppose I am an opera-lover. I am endlessly fascinated by the opera; I like to attend operatic-performances as often as I can. This suggests that I am disposed to experience ‘in the moment’ happiness when I attend an operatic performance. But, of course, just because I am disposed to be happy while attending the opera does not mean I will actually experience happiness while there. I could attend a particularly bad performance which upsets my sophisticated tastes.

These two different aspects of happiness have knock-on effects on the ‘who knows best’ question. Indeed, they suggest that there are two parallel versions of that question:

Who knows best?1 - Who knows best about my occurrent, in the moment, feelings of happiness?

Who knows best?2 - Who knows best about my dispositions toward happiness and the contexts and experiences that are likely to make me experience in the moment happiness?

Hare and Vincent suggest a different answer to each of these questions.


2. Who knows best about ‘in the moment’ happiness?
The first version of the question has a simple answer: we do, and always will, know best the current state of our own happiness. Hare and Vincent defend this answer by reference to Richard Rorty’s idea of incorrigibility. Rorty was interested in finding one of the distinctive marks of the mental. Many have been proposed over the years: intentionality, consciousness, privacy and so on. He thought incorrigibility was a mark of the mental.

By ‘incorrigibility’ he meant that certain self-reports about mental activity are incapable of being corrected. For example, when you say that you are experiencing the colour red in your visual field right now, your self-report is incapable of being corrected. I might say to you ‘but there is nothing red in your visual field right now’, and objectively speaking I might be right, but that doesn’t mean that you are wrong about your subjective experience. Or, to take another example, suppose I claim that right now I am thinking about the movie I watched last night and how bad it was. Would you be able to jump in and say ‘No, you are not thinking about that’? Of course not. I am the only real authority on the current contents of my conscious thought. Claims about occurrent subjective experience are, in this sense, incorrigible.

Something similar must be true for in the moment happiness. If someone says that they are currently experiencing a state of happiness, who are we to correct them? If my friend turns to me at his mother’s funeral and says that he is feeling happy, I might proffer the view that it is inappropriate for him to feel happy at this time, or predict that the happiness is a strange psychological reaction to trauma and that it will soon pass, but I cannot doubt that he is genuinely experiencing happiness. Judgments of in the moment happiness are incorrigible. The subject of the experiences always knows best. No amount of scientific discovery could disrupt this.

This seems right but it is not beyond criticism. Rorty himself suggested that the incorrigibility of ‘in the moment’ judgments was contingent on current technology and that if a cerebroscope was invented that allowed us to see the current state of our brain activity we might have reason to doubt our own judgment. But Hare and Vincent argue that this is wrong. No matter how sophisticated and precise the cerebroscope becomes, your judgment of your own in the moment happiness would always be incorrigible.

They offer three arguments in support of this. First, any hypothetical cerebroscope would have to be built upon a foundation of self-reported judgments of happiness. A scientist would get a subject to report on their current feelings and then correlate these self-reports with brain states. This would enable them to build a model of the subject’s brain that would offer meaningful predictions about whether the subject is currently experiencing happiness. It would not enable them to question the epistemic authority of the subject’s self-report. Their entire scientific project presumes that the self-report of happiness is correct. The model could not be built without that presumption.
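The structure of this first argument can be made concrete with a deliberately toy sketch (the brain-state features, the model and all the numbers below are invented purely for illustration). The point is visible in the code itself: the model’s only ground truth is the subject’s self-reports, so its predictions inherit, rather than outrank, the subject’s epistemic authority.

```python
import random

# Toy "cerebroscope": a model that predicts happiness from brain-state
# features. Crucially, its training labels ARE the subject's self-reports,
# so the model can never be more authoritative than those reports.

random.seed(0)

def simulated_brain_state(happy):
    # Hypothetical 3-region activation vector; "happy" states skew higher.
    base = 0.7 if happy else 0.3
    return [base + random.uniform(-0.1, 0.1) for _ in range(3)]

# Training data: (brain_state, self_report) pairs. The self-report is the
# only available ground truth.
training = [(simulated_brain_state(h), h) for h in [True, False] * 50]

def centroid(states):
    return [sum(s[i] for s in states) / len(states) for i in range(3)]

happy_c = centroid([s for s, h in training if h])
unhappy_c = centroid([s for s, h in training if not h])

def predict(state):
    # Nearest-centroid prediction: is this brain state closer to the
    # "happy" prototype than to the "unhappy" one?
    dist = lambda c: sum((state[i] - c[i]) ** 2 for i in range(3))
    return dist(happy_c) < dist(unhappy_c)

print(predict(simulated_brain_state(True)))  # → True
```

If the subject insisted they were happy while `predict` said otherwise, the only coherent response within this setup would be to retrain the model on the new self-report, which is exactly Hare and Vincent’s point.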

This feeds into a second argument. Our knowledge of the human brain is always going to be incomplete. And what we currently know suggests that the brain is remarkably adaptive and flexible. Brain regions that we think are correlated with one particular mental function can be co-opted and used for another function (particularly in cases of disease or damage). So if we did end up in a situation where our cerebroscopic model told us that the subject was unhappy, but the subject insisted that they were happy, this would really be an opportunity to adjust the model to take account of new data, not to question the judgment of the subject.

This brings us to a final argument, which is slightly more philosophical in nature. It is a variant on Frank Jackson’s classic ‘Knowledge Argument’. That argument is based around the famous ‘Mary in the Black and White Room’ thought experiment. I’ve written about it at greater length before. Jackson asked us to imagine Mary, a scientist of human visual perception, who spent her entire life in a black and white room, dressed in black and white clothes, with black gloves and no mirrors. She knew everything a scientist could possibly know about the visual experience of the colour red. One day, she was released from her black and white lab and went out into the real world. She saw a red apple. Here’s the question: would she learn something new from her experience or would she already know it all thanks to her scientific discoveries? Jackson insisted that she would learn something new because no amount of third party data about human visual experience could allow her to know what it would be like to experience the colour red. Scientific inquiry is simply incapable of telling us anything about qualia (i.e. the quality of experience). Assuming Jackson is right, no cerebroscope could tell us anything about our in the moment experience of happiness.

Hare and Vincent consider a more nuanced objection to this way of thinking about in the moment happiness. According to some theorists, occurrent happiness is a broader, more diffuse and more complicated thing. We often suppress or ignore aspects of our current affective experiences or moods that might call into question our claims to happiness. For example, somebody who suffers from anxiety might, on occasion, not realise how their anxiety affects their feelings over time. Hare and Vincent think this is plausible insofar as it goes but that it doesn’t affect the incorrigibility of occurrent in the moment judgments. Those judgments are still correct even if they fail to factor in things that might affect happiness over a longer timeframe. Think back to my interaction with my friend at his mother’s funeral. I might argue that he is ignoring certain emotions that might affect his happiness in the near future, but I still cannot question his occurrent judgment as to his own happiness.

For my part, I think Hare and Vincent are right to say that one’s own judgments of one’s own occurrent happiness are incorrigible, but this is quite a narrow claim and isn’t as practically significant as you might think. There are two reasons for this. First, I don’t think anyone doubts their own judgments of their own occurrent experiences. So I don’t think anyone will be looking for help from scientists or others to figure out whether they are occurrently happy. What they will really be interested in is figuring out what kinds of activities or states of being are likely to generate and sustain that occurrent sensation. That’s where they will need help and where scientists might be able to provide it (see below).

Second, the only cases in which we might call into question occurrent judgments are when we are interested in what other people are feeling and we doubt the verisimilitude of their reported judgments. Thus, I will never question my own judgment about my occurrent happiness, but I might question my friend’s. If he says he is happy at his mother’s funeral I might be inclined to think he is lying or putting a brave face on it -- that his self-report is not a true reflection of his internal experience. People often deceive and mislead others. In those cases, philosophical argumentation in favour of the incorrigibility of one’s own judgments is unlikely to make a practical difference. Third parties will look for ways to figure out what a person is truly thinking. That’s why there is so much interest in things like brain-based lie detection devices (and other deception detection techniques).


3. Who knows best about dispositional happiness?
The second version of the who knows best question focuses on dispositional happiness. That is to say: it focuses on figuring out what kinds of states and activities are likely to induce occurrent feelings of happiness given our dispositions and traits. It is a predictive question. When answering this question, we want to know where to turn to when making decisions about our lives. For example, I want to know whether writing more blog posts or articles is likely to make me happy or whether I should do something else with my precious time. Should I trust my own judgment on this matter or should I turn to others for help?

Hare and Vincent agree that scientists could have important advice to offer when it comes to this matter. To illustrate, they sketch a story of a young woman who is unlucky in love. She always chooses the ‘wrong type of guy’. They suggest that something akin to McConnell’s modified implicit association test could help her out. She could find out what her implicit preferences are when it comes to romantic partners and use this data to chart a better course through the rocky waters of romance.

But they then suggest that there are three limitations to the advice that scientists can offer.
First, what actually makes us happy is likely to be a multi-factorial phenomenon. Many factors will combine to make you happy and some of those factors will be ignored or downplayed by both yourself and your putative scientific advisor. It’s worth quoting from them in full on this point:

Everyone has blind spots when it comes to predicting the future; this holds true for scientists as well. You can only base your predictions and choices on your incomplete and maybe even incorrect, self-knowledge and past experiences. But, neuroscientists cannot do much better, and arguably the decontextualized brains that they tend to study might even mean that their chances of making the right prediction are lower. After all, they only base their predictions and advice on what they know about your brain. 
(Hare and Vincent 2016, 81)

I am sympathetic to this point of view. Scientific insight has its blindspots and limitations. We should always keep that in mind. But I wonder whether this criticism doesn’t underestimate the potential for technological assistance when it comes to predicting our own happiness. I appreciate this may be an instance of someone with a hammer seeing a nail everywhere he looks, but my current research on big data, surveillance and algorithmic governance makes me wonder whether those technologies could be leveraged to create highly-individualised, constantly updated, predictive models of an individual’s moods. These models could be based on thousands (millions) of datapoints, helping to address the problem of blindspots and multifactorial causes. I then wonder whether we could each have personalised ‘happiness oracles’ who provide ongoing advice on what we should do to make us happy. These oracles would not be individual scientists or people; they would be sophisticated AI assistants, perpetually mining our personal mood data for important correlations and using this to issue recommendations. Interestingly, Hare and Vincent hint at this possibility earlier in their article when they discuss the phenomenon of mood-tracking apps, but they seem to retreat to a narrower conception of scientific assistance later on when highlighting the limitations of third party advice.
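To be clearer about what I have in mind, here is a minimal sketch of such a ‘happiness oracle’, assuming nothing more than a hypothetical log of (activity, self-reported mood) datapoints; the activities and scores are invented for illustration. A real system would mine vastly richer data, but even this crude version shows the basic idea: let the record of your own moods, rather than your faulty predictions, do the ranking.

```python
from collections import defaultdict

# Hypothetical mood log: (activity, self-reported mood score out of 10).
# In a real system these datapoints might come from a mood-tracking app.
mood_log = [
    ("writing", 8), ("writing", 7), ("commuting", 3),
    ("opera", 9), ("opera", 4), ("commuting", 2),
    ("writing", 9), ("opera", 8),
]

def recommend(log):
    """Rank activities by their average logged mood, highest first."""
    scores = defaultdict(list)
    for activity, score in log:
        scores[activity].append(score)
    means = {a: sum(s) / len(s) for a, s in scores.items()}
    return sorted(means, key=means.get, reverse=True)

print(recommend(mood_log))  # → ['writing', 'opera', 'commuting']
```

With thousands of datapoints and many more features (time of day, company, sleep, weather), the same logic could start to address the multifactorial and blindspot worries that Hare and Vincent raise.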

They are on firmer ground with their two other limitations. The second is that we should not confuse predictions of future happiness with normative guidance. This is important. For instance, a modified IAT might reveal an implicit preference for sexual relationships with children. This does not, of course, mean that you should have a sexual relationship with a child in order to be happy. What you should or should not do in a particular context depends on more than just a prediction of your likely happiness. This feeds into their third limitation which is that happiness is only one component of human flourishing.

Those points seem unobjectionable to me and should warrant some caution when it comes to weighing the predictive powers of technology in our quest for the good life.



* (note: I stole the examples of McConnell and Rutledge’s tests/models from Hare and Vincent’s article).

Saturday, August 20, 2016

Phenomenological Coupling, Augmented Reality and the Extended Mind




Contrast these two scenarios. First, I’m in the supermarket. I want to remember what I need to buy but I’m not the kind of guy who writes things down in lists. I just keep the information stored in my head and then jog my memory when I arrive at the store. If I'm lucky, the list of items immediately presents itself to my conscious mind. I remember what I need to buy. Second, I’m in the supermarket. I want to remember what I need to buy. But I’m hopelessly forgetful so I have to write things down in a list. I take the list from my pocket and look at the items. Now, I remember what I needed to buy.

Is there any difference between these two scenarios? Proponents of the extended mind thesis (EMT) would argue that there isn’t any significant difference between them. Both involve functionally equivalent acts of remembering. In the first scenario, the functional mechanism is intra-cranial. No external props are used to access the content of the list: it is just immediately present in the conscious mind. In the second instance, the functional mechanism is partly extra-cranial. An external prop (the list) is used to access the content. The information is then present in the conscious mind. The mechanisms are slightly different; but the overall functional elements and effects are the same.
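The functional-equivalence claim can be made vivid with a deliberately simple sketch (the cue, the list contents and the function names are all my own invention): two acts of ‘remembering’ with the same input and the same output, differing only in whether the content is retrieved from an internal store or an external prop.

```python
# Two functionally equivalent acts of "remembering" a shopping list:
# same cue in, same items out; only the retrieval mechanism differs.

# Intra-cranial version: the content is "just there" in internal memory.
internal_memory = {"shopping": ["milk", "bread", "eggs"]}

def remember_internally(cue):
    return internal_memory[cue]

# Extra-cranial version: the content lives on an external prop (a written
# list) that must be located and read before it is available.
paper_lists = {"shopping": "milk\nbread\neggs"}

def remember_externally(cue):
    return paper_lists[cue].split("\n")

# Functionally identical, whatever the phenomenological difference:
print(remember_internally("shopping") == remember_externally("shopping"))  # → True
```

For the EMT proponent, both routines realise the same functional role, which is why the location of the store is supposed not to matter.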

But think about it again. There does seem to be something phenomenologically different about the two scenarios. That is to say, the two acts of remembering have a different conscious representation and texture. The first scenario involves immediate and direct access to mental content. The second scenario is more indirect. There is an interface (namely: the list) that you have to locate and perceptually represent before you can access the content.

This raises a question: could we ever have external mental aids that are phenomenologically equivalent to intra-cranial mental mechanisms? And if so, would this provide support for the extended mind thesis? I want to consider an argument from Tadeusz Zawidzki about this very matter. It comes from a paper he wrote a few years back called ‘Transhuman cognitive enhancement, phenomenal consciousness and the extended mind’. He claims that future technology could result in external mental aids that are phenomenologically equivalent to intra-cranial mental mechanisms. And that this does provide some support for the EMT.


1. The Basic Argument: The Need for Frictionless Access to Mental Content
To understand Zawidzki’s argument we have to start by formalising the argument for phenomenological difference that I sketched in the introduction and turning it into an objection to the EMT. The argument would go something like this:


  • (1) The phenomenology of intra-cranial remembering is characterised by a frictionless and transparent access to the relevant mental content (the list of shopping items)
  • (2) The phenomenology of extra-cranial remembering is characterised by a frictionful and non-transparent access to the relevant mental content: you have to engage with the physical list first.
  • (3) Truly mental acts of remembering (or mental cognitive acts more generally) are characterised by frictionless and transparent access to mental content.
  • (4) Therefore, extra-cranial remembering is not truly mental in nature.



This is rough and ready, to be sure. The focus on ‘friction’ and ‘transparency’ is intended to capture some distinctive mark of the mental. The idea is that mental acts are noteworthy because the semantic or intentional content (e.g. beliefs, desires etc) that features in those acts is just immediately present in our minds. We don’t have to think about where it came from or how we gain access to it. It is just there. There is nothing between us and the mental content. This is formulated into a general principle (premise 3) and this then determines the rest of the argument.

Of course, these are really characteristics of conscious mental activity — something that the original proponents of the EMT (Clark and Chalmers) studiously avoided in their defence of an extended mind — not subconscious mental activity. So you could dispute the reliance on premise (3) in this argument. You could argue that frictionless and transparent access to mental content is not a necessary mark of the mental: mental activity can and does take place without those properties. And hence mentality can extend beyond the cranium without those properties.

That’s fine, insofar as it goes. But it doesn’t render this argument completely pointless. You can take this argument (and the remainder of this post) to be about the extension of conscious mental activity only. Indeed, focusing only on this type of mental activity arguably sets the bar higher for the proponent of the EMT. One of the frequent critiques of the EMT is that it doesn’t account for the distinctive nature of conscious mental activity that is mediated through intra-cranial mechanisms (I discussed this in a previous post). If you can show some phenomenological equivalence between mental content accessed intra-cranially and mental content accessed extra-cranially, then you will show something significant.

And that’s exactly what Zawidzki tries to do. He tries to show how content that is accessed with the help of extra-cranial props can be both frictionless and transparent.


2. The Possibility of a Technological Metacortex
Zawidzki uses examples drawn from Charles Stross’s novel Accelerando to illustrate the possibility. The novel is about the social and personal consequences of rapidly evolving technologies. It is infused with the singularitarian ethos: i.e. the belief that technological progress is accelerating and will have radical consequences for humanity. One example of this is how our interactions with the world will be affected by the combination of augmented reality tech and artificial intelligence. In the novel, individual humans are equipped with augmented reality technology that displays constant streams of information to them about their perceived environments. This information is updated and presented to their conscious minds with the help of artificially intelligent assistants.

Zawidzki describes the future depicted in Stross’s novel in the following terms:

Swarms of on-line, virtual agents constantly and automatically “whisper” pertinent information in one’s ear in real time, or display it in one’s visual field, as one experiences a passing scene. For example, the microscopic video cameras mounted on one’s spectacles provide a constant video stream of one’s point of view as one walks about. This information is constantly processed by the swarm of on-line agents — appropriately called one’s “metacortex” by Stross — which search the Internet for information relevant to what one is visually experiencing and provide continuous updates on it. All of this happens automatically: users need not deliberately initiate searches about the persons with whom they are currently interacting, or the environs they are currently exploring. The information is displayed for them, through earphones or on virtual screens projected by the spectacles, as though it were being unconsciously retrieved from memory. 
(Zawidzki 2012, 218)

I have underlined the last part because I think it is critical to Zawidzki’s argument. His point is that the metacortical technologies depicted in the novel involve truly frictionless and transparent access to mental content. There is no separation or disjunction between you and the presentation of the information in your mind. You don’t have to follow a series of steps or program instructions into some user-interface. The information is just there: immediately present in your conscious mind.

To make the point more intuitive consider the following. Last night I was watching a film. There was an actor in it who I knew I had seen in some other films but whose name I could not recall. So I took out my smartphone and looked up the name of the film on IMDB. I then scrolled down through the cast list until I came across the actor I was interested in. I then clicked on her profile to see what else she had been in. In this manner, knowledge of her past triumphs and failures as an actor made its way into my conscious mind.

I’m sure many people have had a similar experience. They are doing something — watching a film, having a conversation — and they want to know something else that is either critical to, or would improve the quality of, that activity. They don’t have the information in their own heads. They have to go elsewhere for it. Smartphones and the internet have made this much easier to do. But they haven’t made the process frictionless and transparent. To get the information displayed on your phone, you have to follow a series of steps. Furthermore, when you are following those steps you are acutely aware of the fact that the information is presented to you via a user-interface. There is considerable phenomenological distance between you and the information you desire.

Now imagine if instead of having to look up the information on my smartphone, I had something akin to the metacortex depicted in Stross’s novel. As I was watching the film with my AR glasses, a facial recognition algorithm would automatically identify the actor and display in my visual field information about them. There would no longer be any friction. The information would just be there. And although it would be displayed to me on a user-interface, the likelihood is that as I became more used to the device, my awareness of that interface would fade away. There would be no separation between me and the cognitive information. Something analogous already happens to elite musicians: as they improve in their musical abilities the phenomenological distance between themselves and the instrument they are playing evaporates. What is to stop something similar happening between us and the metacortex?
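Purely as an illustration of what ‘frictionless’ means here (every name and data structure below is a hypothetical stand-in for a real component: a facial-recognition model, a web lookup, an AR display), the metacortex’s perceive-recognise-overlay cycle might be sketched as follows. Notice that no user-initiated search appears anywhere; the information is simply pushed into the ‘visual field’.

```python
# Schematic metacortex loop: perceive -> recognise -> overlay. All data
# and function names here are hypothetical stand-ins for real components.

FACE_DB = {"face_hash_42": "Jane Doe"}             # recogniser stand-in
FILMOGRAPHY = {"Jane Doe": ["Film A", "Film B"]}   # web-lookup stand-in

def recognise(frame):
    # Placeholder for a facial-recognition model run on the video stream.
    return FACE_DB.get(frame.get("face"))

def annotate(frame):
    """One cycle of the metacortex: push relevant info into the frame."""
    actor = recognise(frame)
    if actor is None:
        return frame  # nothing recognised; the scene passes through untouched
    overlay = f"{actor}: {', '.join(FILMOGRAPHY[actor])}"
    return {**frame, "overlay": overlay}

print(annotate({"face": "face_hash_42"})["overlay"])  # → Jane Doe: Film A, Film B
```

The phenomenological claim is that, with practice, the `annotate` step would become as invisible to the user as the rest of their perceptual processing.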

The reason this is an interesting question to ask is because the kinds of technologies needed to create a Strossian metacortex don’t seem all that far-fetched. Indeed, they seem eminently feasible. Sophisticated AR technologies are being created, and the advances in AI in the past decade have been quite impressive. It seems like it is really a matter of when, not if, we will build a metacortex.


3. The Phenomenologically Extended Mind
And when we do, what will have happened? Will we have proven the extended mind thesis to be correct? Will we have established once and for all that the human mind is not confined within the skull?

Not so fast. There are many critics of this view. Rupert (2009) argues that phenomenological arguments of this sort fail because they leap from this sort of claim:


  • (P) The phenomenology of interacting with the extra-cranial world reveals no cognitively relevant boundary between the organism and the extra-organismic world.


To this sort of claim:


  • (C) There is no cognitively relevant boundary between the organism and the extra-organismic world.


That’s obviously an illogical leap. Just because something seems (phenomenologically) to be one way does not mean that it actually is that way. Our perceptions of the world can be misleading.

Consider the rubber hand illusion (wherein stroking a rubber hand repeatedly while looking at it results in the phenomenological feeling of having one’s hand stroked). Does this prove that the rubber hand is actually ours? That there is no relevant difference between the rubber hand and our own? Of course not. The same could be true of the metacortex: it might seem to be part of our minds, but that doesn’t mean it actually is.

Zawidzki has responses to this. He says that his phenomenological argument is only intended to complement other arguments for the extended mind; not replace them. When you add all the lines of argument together, you end up with a more robust case for extended cognition. Furthermore, he insists he is talking about a particularly advanced form of technology that has not been invented yet.

But, in some ways, the technical debate about the extended mind is beside the point. What really matters is what will happen once we do create metacortical technologies with frictionless and transparent phenomenological integration. Suddenly we will all start to feel as though our minds are intimately integrated into, or dependent upon, our metacortices. But presumably the technologies underlying those metacortices will not belong solely to us? The machine learning algorithms upon which we rely will probably be used by many others too. What will happen to our sense of individuality and identity? Will it just be like everyone reading the same book? Or will a more radical assault on individuality result? Philosophers may caution us that there is an important philosophical difference between the intra-cranial and the extra-cranial. But it won't feel that way to many of us. It will all feel like part of a seamless cognitive whole.

It's worth thinking about the consequences of such a reality.

Wednesday, August 17, 2016

Episode #9 - Rachel O'Dwyer on Bitcoin, Blockchains and the Digital Commons



This is the ninth episode in the Algocracy and Transhumanism Podcast. In this episode I talk to Rachel O'Dwyer, who is currently a postdoc at Maynooth University. We have a wide-ranging conversation about the digital commons, money, bitcoin and blockchain governance. We look at the historical origins of the commons, the role of money in human society, the problems with bitcoin and the creation of blockchain governance systems.

You can download the podcast at this link. You can also listen below, or subscribe on Stitcher and iTunes (via RSS feed - just click 'add to iTunes').


Show Notes

  • 0:00 - 0:40 - Introduction
  • 0:40 - 9:00 - The history of the digital commons
  • 9:00 - 17:20 - What is money? What role does it play in society?
  • 17:20 - 29:20 - The value of transactional data and how it gets tracked
  • 29:20 - 34:25 - The centralisation of transactional data tracking and its role in algorithmic governance
  • 34:25 - 37:50 - Resisting transactional data-tracking
  • 37:50 - 46:00 - What is bitcoin? What is a cryptocurrency?
  • 46:00 - 54:25 - Can bitcoin be a currency of the digital commons?
  • 54:25 - 1:04:47 - The promise of blockchain governance: smart contracts and smart property
  • 1:04:47 - End - Criticisms of blockchain governance - the creation of an ultra-neo-liberal governance subject?
 

Relevant Links:

Friday, August 12, 2016

New paper: Why Internal Moral Enhancement Might be Politically Better than External Moral Enhancement




I have a new paper coming out in the journal Neuroethics. This one argues that directly augmenting the brain might be the most politically appropriate method of moral enhancement. This paper brings together my work on enhancement, the extended mind, and the political consequences of advanced algorithmic governance. Details below:

Title: Why Internal Moral Enhancement Might be Politically Better than External Moral Enhancement
Journal: Neuroethics
Links: Official; Philpapers; Academia
Abstract: Technology could be used to improve morality but it could do so in different ways. Some technologies could augment and enhance moral behaviour externally by using external cues and signals to push and pull us towards morally appropriate behaviours. Other technologies could enhance moral behaviour internally by directly altering the way in which the brain captures and processes morally salient information or initiates moral action. The question is whether there is any reason to prefer one method over the other. In this article, I argue that there is. Specifically, I argue that internal moral enhancement is likely to be preferable to external moral enhancement when it comes to the legitimacy of political decision-making processes. In fact, I go further than this and argue that the increasingly dominant forms of external moral enhancement (algorithm-assisted enhancement) may already be posing a significant threat to political legitimacy, one that we should try to address. Consequently, research and development of internal moral enhancements should be prioritised as a political project.