Contemporary transhumanists argue that humans can be technologically re-engineered to free them from limitations that have hampered their life chances throughout history: ageing, disease, restricted cognitive capacities, underdeveloped social virtues and scarcity-based economic rationing.
This ethic is premised on prospective developments in the so-called ‘NBIC’ suite of technologies – Nanotechnology, Biotechnology, Information Technology and Cognitive Science – supplying the means to make the requisite modifications. In the area of human cognitive enhancement, drugs like amphetamine and modafinil are already used to increase the efficiency of learning and working memory (Bostrom and Sandberg 2006). More speculatively, micro-electric neuroprostheses might eventually be used to interface the brain directly with non-biological cognitive or robotic systems (Kurzweil 2005, 317). Such developments might bring forward the day when all humans are more intellectually (and physically) capable, whether through enhancements to their native biological machinery or through interfacing with supplemental cognitive technologies such as immersive virtual realities or artificial intelligences.
Some believe that a convergence of NBIC technologies will not only increase intelligence within the current human range but, beyond a critical point, contribute to a discontinuously rapid change in the nature of mentation on this planet. Vernor Vinge refers to this point as ‘the technological singularity’ (Vinge 1993). Vinge, along with Ray Kurzweil and Hans Moravec, argues that were a single super-intelligent machine created, it could create still more intelligent machines, resulting in a recursive growth in cognitive capacity to levels that (lacking this capacity) we cannot imagine.
Since such a situation is unprecedented, the best we can do to understand the post-singularity dispensation, Vinge claims, is to draw parallels with the emergence of an earlier transformative intelligence: ‘And what happens a month or two (or a day or two) after that? I have only analogies to point to: The rise of humankind.’ (Vinge 1993) If this analogy between the emergence of the human and the emergence of the post-human holds, we could no more expect to understand a post-singularity entity than a rat or non-human primate – lacking the capacity for refined propositional attitudes – could be expected to understand human conceptions like justice, number theory or public transportation.
Vinge’s position nicely exemplifies a generic posthumanist philosophy which I will refer to as ‘speculative posthumanism’.
Speculative posthumanists claim that descendants of current humans could cease to be human by virtue of a history of technical alteration. The notion of descent here is ‘wide’ insofar as the entities that might qualify include our biological descendants as well as beings resulting from purely technological activities (e.g. artificial intelligences, synthetic life-forms or uploaded minds).
In his presentation at Humanity+ UK 2010, the philosopher and futurist Max More used the metaphor of an ‘event horizon’ to convey this vision of our wide descendants receding beyond the point at which the sense or meaning of their lives could be accessible to an observer situated outside the singularity. He expressed some scepticism about this prospect, suggesting that ‘our’ capacity to understand recursively generated technological change would increase so as to bridge the incommensurability. However, this assumes that the singularity would be evenly distributed (not everybody may elect to be augmented, or be capable of it), that the development of cognition would be smooth rather than discontinuous (involving transliteration into radically different formats), and that it would be unitary rather than multiple (why not many singularities?). Moreover, there remains the philosophical problem of how we mildly augmented primates are to envisage the prospect of being radically transcended by entities that, for want of a better word, we must describe as ‘posthuman’.
Speculative posthumanism claims that an augmentation history of this kind is metaphysically and technically possible. It does not imply that the posthuman would improve upon the human state or that there would be a scale of values by which human and posthuman lives could be compared – for example, in terms of their levels of happiness, autonomy or virtue. If radically posthuman lives were very non-human indeed, we cannot assume that they would be prospectively evaluable.
Speculative posthumanism is thus logically independent of transhumanism and (besides) generates an interesting ethical ‘aporia’ (an ancient Greek term denoting an impasse or irresolvable difficulty). After all, the kinds of policies with which transhumanists hope to re-engineer our cognitive architecture (widespread use of cognitive enhancements, the development of Artificial General Intelligences (AGIs), brain-computer interfaces (BCIs) and synthetic biology) may also be precursors to singularities. Since transhumanists want to improve the human lot, they are bound to assess the risks as well as the ethical benefits of pursuing any specific technology. On the other hand, it is ex hypothesi impossible to prospectively evaluate the conditions of life beyond the event horizon of a technological singularity. Hence the aporia: the conceivability of the event horizon implies that the transhumanist is bound to evaluate that which in principle transcends anthropocentric philosophical values such as utility, welfare, autonomy, virtue or flourishing.
Bostrom, N. and Sandberg, A. (2006), ‘Converging Cognitive Enhancements’, Annals of the New York Academy of Sciences 1093: 201–227.
Kurzweil, Ray (2005), The Singularity is Near (New York: Viking).
Vinge, Vernor (1993), ‘The Coming Technological Singularity: How to Survive in the Post-Human Era’, http://www.rohan.sdsu.edu/faculty/vinge/misc/singularity.html. Accessed 24 April 2008.