Rebecca Saxe and Clockwork Orange 2.0

http://embed.ted.com/talks/rebecca_saxe_how_brains_make_moral_judgments.html


In this excellent presentation Saxe claims that Transcranial Magnetic Stimulation (TMS) applied to the temporo-parietal junction (TPJ) – a region specialized for mentalizing in human adults – can improve the effectiveness of moral reasoning by improving our capacity to understand other human minds.

This suggests an interesting conundrum for moral philosophers working in the Kantian tradition, where recognizing the rationality and personhood of offenders is held to be a sine qua non for justifications of punishment. We can imagine a Philip K. Dick style world in which miscreants are equipped with surgically implanted TMS devices which zap them whenever an automated surveillance system judges them to be in a morally tricky situation calling for rapid and reliable judgements about others’ mental states. Assuming that such devices would be effective, would this still constitute a violation of the offender’s personhood – treating the offender as a refractory animal who must be conditioned to behave in conformity with societal norms, like Alex in A Clockwork Orange? Or would the enhancement give that status its due by helping the offender become a better deliberator?
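Purely to make the thought experiment concrete, here is a minimal sketch of the control loop such an implant would have to run. Everything in it is hypothetical – the risk classifier, the stimulation interface and the threshold are invented for illustration, and are not drawn from Saxe’s work or from any real TMS system.

    # Hypothetical closed-loop "moral enhancement" implant – a Python sketch
    # of the Philip K. Dick scenario above, not a real TMS protocol.
    import random
    import time

    MORAL_RISK_THRESHOLD = 0.8  # invented cut-off for a "morally tricky" situation

    def assess_situation() -> float:
        """Stand-in for the automated surveillance system: a score in [0, 1]
        estimating how urgently the situation calls for rapid, reliable
        judgements about others' mental states."""
        return random.random()  # placeholder for a real classifier

    def stimulate_tpj(duration_ms: int = 500) -> None:
        """Stand-in for the implanted device: deliver a pulse train to the
        temporo-parietal junction. The parameter is a placeholder."""
        print(f"TMS pulse to TPJ ({duration_ms} ms)")

    def control_loop(polls: int = 20) -> None:
        """Poll the surveillance feed; zap whenever risk crosses the threshold."""
        for _ in range(polls):
            if assess_situation() > MORAL_RISK_THRESHOLD:
                # The crux of the post: the zap is meant to *enhance* the
                # offender's capacity to model other minds, not to condition
                # behaviour directly.
                stimulate_tpj()
            time.sleep(0.1)

    if __name__ == "__main__":
        control_loop()

Even this toy loop makes the dependence visible: whatever moral capacity the implant adds stands or falls with the classifier behind the surveillance system and the technical network that maintains it.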


Assuming the TMS devices could achieve their aim of improving moral cognition, it seems odd to say that this would be a case of “tiger training” which bypasses the offender’s capacity for moral reasoning, since it would presumably increase that very capacity. It is even conceivable that an effective moral enhancement could be co-opted by savvy Lex Luthor types to enhance the criminal capacities of their roughnecks, making them more effective at manipulating others and sizing up complex situations. At the same time, it would be quite different from punishment practices that appeal to the rational capacities of the offender. Having one’s TPJ zapped is not the same as being asked to understand the point of view of one’s victim – though it might enhance one’s ability to do so.

So an effective moral enhancement that increases the capacity for moral reasoning in the cognitively challenged would be neither a violation of nor an appeal to their reason. It would not be like education or a talking therapy, but neither would it be like the cruder forms of chemical or psychological manipulation. It could enhance the moral capacities of people, but it would do so by tying them into technical networks that, as we know, can be co-opted for ends that their creators never anticipated. It might enhance the capacity for moral agency while also increasing its dependence on the vagaries of wider technical systems. Some would no doubt see such a development as posthuman biopower at its most insidious. They would be right, I think, but technology is insidious precisely because our florid agency depends on a passivity before cultural and technical networks that extend it without expressing a self-present and original human subjectivity.

17 thoughts on “Rebecca Saxe and Clockwork Orange 2.0”

  1. The questionable speculative leaps she is making here aside, is this really so different from present-day cases of forced psychiatric treatment in the name of health/conformity-to-social-norms?

  2. I suppose that’s the question which was nagging me after seeing this talk. For sure, such treatment would only be instituted if it could result in people conforming to social norms of behaviour. But by increasing an agent’s capacity for social cognition we would also increase their ability to negotiate social situations, and thus increase the worthwhile options open to them. So while conformity might be one of the intended effects, it would not be imposed through conditioning or by altering the offender’s desires or moods. So moral enhancement of this kind seems to be importantly different from medical interventions. If Saxe is right, it does not seem to require an appeal to the agent to conform to some set of social values either.

  3. Thanks Arran. As you may infer from my post, I’m not convinced that it is any more threatening than any other technology with unpredictable long-run iterations and hacks, but there’s some value to exercising the hermeneutics of suspicion here and seeing what exactly might be wrong with it. This seems to be a classic case where we can envisage enhancing agency by increasing our dependence on extra-biological systems of various kinds. But I’ve yet to see a case showing why this is necessarily a bad thing. Our brains are only susceptible to this technique because they are physical systems that can be interfaced with other physical systems. There is no theologically inscribed boundary to the human body or mind that can tell us where some extension is a “bad” thing. That needs to be addressed on a case-by-case basis, and this is only possible if we develop the technology and see where it goes.

  4. If social cognition is just a sense of understanding/calculating the dynamics and interests at play that allows one to interact in ways which further one’s own interests, then sociopaths would fit the bill. But for neurotypical able-bodied folks, our deeply embodied and socialized sense of what we should desire or act on, and of how others are to be treated/manipulated, is norms all the way down…
    http://www.academia.edu/598411/The_cradle_of_language_making_sense_of_bodily_connexions

    1. @DMF Well, whatever underlies our capacity for social cognition must be bodily, because we need a body and brain to do it – I don’t think anyone would seriously contest that. Social cognition may, for all I know, be embedded insofar as much of it depends on constant interaction with the social world. Though this needs to be qualified, since even five-year-olds are pretty good at thinking through fictional scenarios, such as false belief tests, for which this is not true. So embodied yes, and embedded up to a point.

      While being adept at social cognition is probably rather good for us, I don’t think it was suggested here that it is purely predicated on self-interest.

  5. I agree that our bodies enjoy an extended existence and that we are “prosthetic Gods”, in Freud’s turn of phrase, but nonetheless I’ve become convinced that it’s not just a hermeneutics of suspicion but an active atheism that is required when confronted with new technologies. An atheism because I refuse to believe in these technologies – I see them, I use them, I am excited about many of them (3D printing and newer forms of automation, for example), but I refuse to invest any kind of faith in them. I’m a complete technological atheist – which is a position neither of optimism nor pessimism.

    For this reason, I’d be concerned about allowing the development of technology to drive our normative assessment of that technology. This may already be out of step with technological developments, but nevertheless unchecked techno-acceleration that establishes its own normative criteria is highly ambiguous. What criteria does applied tech demand of us? Pretty much only those of efficacy and efficiency: does it do the job and does it do it well?

    In the end these are probably separable problems.

    1. @Arran OK, but there are two problems with your position: 1) For any particular technology, it is an irrational phobia devoid of evidence. 2) Techno-acceleration is, I’ve argued, a disposition due to the highly replicable nature of modern tech. Any invention can be replicated in multiple contexts. We are no longer in a situation where a device like Hero’s aeolipile can simply be lost. So if I’m right, the only way to scotch techno-acceleration is by disinventing modernity! Good luck with that.

  6. In the broader context of this thread it really depends on what you mean by “understanding” others. I can understand how to interact with people in ways which suit/further my interests without caring for (or even really ‘getting’) what they want, let alone caring to meet, rather than circumvent/abuse, those wants.

  7. On point 1:

    I don’t think it is irrational. It is critical. Such a position simply comes from being aware, in Virilio’s terms, that every technology has an accident lurking inside it. There is nothing irrational in that, and nothing that prevents adoption of the technology. It’s asking for a critical relationship to it – one that suspends belief in techno-salvation, and hence an atheism. I’m not so irrational as to declare a moratorium on tech – I love tech… I just don’t believe in it.

    That said, if it is irrational to ask that technology be submitted to ethico-political evaluation then I’m irrational.

    On point 2:

    I don’t have time atm to read your paper, so this can only be a partial reply.

    Firstly, the question of replication is a good one. Yet what we get in a techno-acceleration is an acceleration of the same… the endless recontextualisation of old technologies in new forms. This allows for a kind of “algal bloom” of technologies that is a qualitative acceleration and which effects a temporal acceleration in durée, whilst actually being a kind of cultural nonsense. The infinite repetition of this same is a bad infinite.

    A case in point would be the production of Apple products, where the replication is only ever tweaked with minor modifications. Another example might be automation. In the high-GDP nations we’ve managed to automate most manufacturing and industry without having had a coextensive reorganisation of the production-labour relationship. The effects of this are clear.

    Some might argue that our “modernist” injunction for the new, as it is deployed in technology, is really an injunction for the simulacrum of the new; a novelty without novelty; innovations without invention. We have never been modern: when it comes to technology, how many of us are really a certain kind of animist, enthralled by the living spirit of the internet or what have you? So in one way, disinventing modernity might not be the problem – completing it might be.

    Yet from the other side, from the side of the question of modernity’s disinvention, aren’t we already on that path? Wouldn’t ecocide, the re-emergence of a Feudalist “style” of capitalism, and other phenomena suggest that we’re undoing modernity, or its illusion?

  8. For me this kind of stuff illustrates the degree to which ‘moral reason,’ whatever it is, requires ignorance to function in its historically adaptive modes. It can be seen as an example of ‘moral habitat destruction’: our base moral trouble-shooting systems (which we can now track developmentally back to infancy) are adapted to problem-ecologies that can be solved absent information pertaining to our bio-mechanical nature. The more pervasive this information becomes, the more unreliable our moral intuitions become.

    So consider if we made TMS conditioning of the TPJ a *secret,* something that prison officials administered without the knowledge of inmates or the public. “Seeing the light” is something our moral heuristics seem to handle quite well!

    None of this is to say that we can’t jury-rig any number of johnny-on-the-spot fixes, only that they will all seem glaringly arbitrary – simply because the mechanical contingency of the entire system has been revealed. There’s no going back to simple ‘good’ or ‘bad.’ You’re right to say that we can’t determine the good and the bad of a technology in advance, David, but to the degree that a technology outruns the problem ecology of those determinations, you have to wonder whether any such determination can ever be made.

  9. Arran, Scott,

    Mea culpa – I’ve not been checking up on my comments of late! Sorry, lots of real-world stuff happening at the moment. @Arran very interesting points regarding the good and bad replications. I’m not sure that the replication of the same is necessarily bad or that genuinely transformative tech is necessarily good. If your atheism is a denial of techno-Utopianism, then I’m some sort of techno-atheist too. That’s why I’ve consistently argued that the posthumanist/transhumanist debate needs to bracket the term “enhancement” in favour of an ontology of difference.

    @Scott your comment ties in nicely with some comments by Claire Colebrook on ethics and anti-ethics, which, again, transpose Derrida’s early paper on Levinas. If ethical reasoning is, as you say, dependent on a topos or environment then there can only be an anti-ethics of transformative technology, an attempt to think the violent infraction of an unknown possibility space.

  10. I’ll definitely have a look at the Colebrook. I read Derrida as limning the form of our metacognitive impasse from inside the semantic bottle, so I find this take very interesting. The ABC Research Group has gone some distance successfully operationalizing heuristics, and there’s no way of conceiving them short of ‘problem ecologies.’ The interesting thing is that this, wedded with BBT, actually provides a way to naturalize context, as well as to understand first-order ‘moral reasoning’ in second-order causal terms, and to hazard hypotheses about the future of these kinds of debates. Interpretative ambiguities will be gamed, solution after solution will be proposed, but nothing will stick simply because our moral cognitive toolbox is ‘imperial’ (intentional) rather than ‘metric’ (causal).

    What’s the pub date on the book, David?
