Eric Schwitzgebel has a typically clear-eyed, challenging post on the implications of (real) artificial intelligence for our moral systems over at the Splintered Mind. The take-home idea is that our moral systems (consequentialist, deontological, virtue-ethical, whatever) are adapted for creatures like us. The weird artificial agents that might result from future iterations of AI technology might be so strange that human moral systems would simply not apply to them.
Scott Bakker follows this argument through in his excellent Artificial Intelligence as Socio-Cognitive Pollution, arguing that blowback from such posthuman encounters might literally vitiate those moral systems, rendering them inapplicable even to us. As he puts it:
The question of assimilating AI to human moral cognition is misplaced. We want to think the development of artificial intelligence is a development that raises machines to the penultimate (and perennially controversial) level of the human, when it could just as easily lower humans to the ubiquitous (and factual) level of machines.
As any reader of Posthuman Life might expect, I think Eric and Scott are asking all the right questions here.
Some (not me) might object that our conception of a rational agent is maximally substrate neutral. It’s the idea of a creature we can only understand “voluminously” by treating it as responsive to reasons. According to some (Davidson/Brandom) this requires the agent to be social and linguistic – placing such serious constraints on “posthuman possibility space” as to render this discourse moot.
Even if we demur on this, it could be argued that the idea of a rational subject as such gives us a moral handle on any agent – no matter how grotesque or squishy. This seems true of the genus “utility monster”. We can acknowledge that UMs have goods and that consequentialism allows us to cavil about the merits of sacrificing our welfare for them. Likewise, agents with nebulous boundaries will still be agents and, so the story goes, rational subjects whose ideas of the good can be addressed by any other rational subject.
So according to this Kantian/interpretationist line, there is a universal moral framework that can grok any conceivable agent, even if we have to settle details about specific values via radical interpretation or telepathy. And this just flows from the idea of a rational being.
I think the Kantian/interpretationist response is wrong-headed. But showing why is pretty hard. A line of attack I pursue concedes to Brandom-Davidson that we have the craft to understand the agents we know about. But we have no non-normative understanding of the conditions something must satisfy to be an interpreting intentional system or an apt subject of interpretation (beyond commonplaces like heads not being full of sawdust).
So all we are left with is a suite of interpretative tricks whose limits of applicability are unknown. Far from being a transcendental condition on agency as such, it’s just a hack that might work for posthumans or aliens, or might not.
And if this is right, then there is no future-proof moral framework for dealing with feral robots, Cthulhoid monsters or the like. Following First Contact, we would be forced to revise our frameworks in ways that we cannot possibly have a handle on now. Posthuman ethics must proceed by way of experiment.
Or they might eat our brainz first.