Many philosophers argue that humans are a distinctive kind of creature and that some capacities that distinguish humans from nonhumans give us a moral dignity denied to nonhumans. This status supposedly merits special protections that are not extended to nonhumans and special claims on the resources to cultivate those capacities reserved for humans alone.
However, I will argue that if we are committed to developing human capacities and welfare using advanced (NBIC) technologies (see below), our commitment to other humans and our interest in remaining human cannot be overriding. This is because such policies could engender posthumans, and the prospect of a posthuman dispensation should be properly evaluated rather than discounted. I will argue that this evaluation (accounting) is unlikely to be achievable without posthumans. Thus transhumanists – who justify the technological enhancement and redesigning of humans on humanist grounds – have a moral interest in making posthumans or becoming posthuman that is not reconcilable with the priority humanists have traditionally attached to human welfare and the cultivation of human capacities.
To motivate this claim, I need to distinguish three related philosophical positions – Humanism, Transhumanism and Posthumanism – and explain how they are related.
For the purposes of this argument, a philosophical humanist is anyone who believes that humans are importantly distinct from non-humans.
For example, many humanists have claimed that humans are distinguished by their reasoning prowess from nonhuman animals. One traditional view common to Plato, Aristotle, Descartes, Rousseau, Kant and others is that humans are responsive to reasons while animals respond only to sensory stimuli and feeling. Being rational allows humans to bypass or suppress emotions such as fear or anger and (for better or worse) cultivate normatively sanctioned forms of action and affection.
Responsiveness to reasons is both a cognitive and a moral capacity. The fact that I can distinguish between principles like equality and freedom, for example, allows me to see these as alternative principles of conduct: “The power to set an end – any end whatsoever — is the characteristic of humanity (as distinguished from animality)” (Kant 1948, 51).
Most humanists claim that the capacities – such as rationality or sociability – that distinguish us from cats, dogs and chimps also single us out for special treatment.
For Kant, this capacity to choose the reasons for our actions – to form a will, as he puts it – is the only thing that is good in an unqualified way (Kant 1948, 62).
Even thinkers who allow that the human capacity for self-shaping is just one good among a plurality of equivalent but competing goods claim that “autonomy” confers a dignity on humans that should be protected by laws and cultivated by providing the means to exercise it.
Thus most humanists hold some conception of what makes a distinctively human life a valuable one and have developed precepts and methods for protecting and developing these valuable attributes.
At the risk of oversimplifying, the generic humanist techniques for achieving this are politics and education.
For example, in Politics 1 Aristotle claimed that virtues like justice, courage or generosity need a political organization to provide the leisure, training, opportunities and resources to develop and exercise these valuable traits:
Hence it is evident that the state is a creation of nature, and that man is by nature a political animal. And he who by nature and not by mere accident is without a state, is either a bad man or above humanity; he is like the
“Tribeless, lawless, hearthless one,”
whom Homer denounces – the natural outcast is forthwith a lover of war; he may be compared to an isolated piece at draughts…
Rousseau and Marx likewise see the political as the setting in which humans become fully human. Liberal political philosophers may be warier of attributing intrinsic value to politics, but most see the social goods secured by it as the sine qua non of a decent existence.
Transhumanists share core humanist values and aspirations. They think that human-distinctive attributes like rationality and autonomy are good, as are human social emotions and human aesthetic sensibilities.
They also think that these capacities should be cultivated where possible and protected: e.g. by ensuring basic liberties and providing the resources for their fullest possible development.
However, they believe that the traditional methods that humanists have used to develop human capacities are limited in their scope by the material constraints of human biology and that of nature more generally.
Our biological and material substrate was not a political issue until relatively recently because we lacked the technological means to alter it. Although philosophers like Aristotle, Hume and Kant proposed theories of human nature, this nature was essentially an encapsulated black box. One could know what it did and why it did it, but not how it did it. Thus a basic cognitive function, such as imagination is described by Kant as a “hidden art in the depths of the human soul, whose true operations we can divine from nature and lay unveiled before our eyes only with difficulty” (Kant 1978, A141–2/B180–1).
Transhumanists believe that prospective developments in a suite of technologies called the NBIC technologies and sciences will at last allow humans unprecedented control over their own nature and morphology.
NBIC stands for Nanotechnology, Biotechnology, Information Technology, and Cognitive Science.
- Nanotechnology – very fast and precise atom-scale manufacturing, (Programmable Matter, New Materials, Post-Scarcity Economics).
- Biotechnology – manipulating life and living systems at the genetic/sub-cellular level, synthetic life (Genetic Enhancement, Ageing Cures)
- Information Technology – computing, cybernetics (Artificial Intelligence, Brain Machine Interfaces)
- Cognitive Science – understanding the architecture and implementation details of human and nonhuman minds (Cognitive Enhancement, Mind-Uploading)
The smarter we are, the more effectively we can devise techniques for developing human capacities: e.g. by eliminating starvation or scarcity with new agricultural and manufacturing techniques, by finding cures for diseases, or by becoming better democratic deliberators.
Thus if advancing human welfare is a moral priority, and extending human cognitive capacities is the best way of achieving this, we should extend our cognitive capacities using NBIC technologies, all other things being equal. (A supplementary argument for a transhuman politics assumes that certain capacities are necessarily characterized in terms of some end or fulfilment; thus they are exercised appropriately when their possessor strives to refine and improve them – see Mulhall 1998.)
The exercise of rationality requires many cognitive aptitudes: perception, working and long-term memory, general intelligence and the capacity to acquire cultural tools such as languages and reasoning methods. There appear to have been significant increases in the level of general intelligence in industrialized countries during the twentieth century – particularly at the lower end of the scale. These may be explained by public health initiatives such as the removal of organic lead from paints and petrol, improved nutrition, and free public education.
These increases, if real, are a clear social good. However, there seems to be a limit to the effect of environmental factors upon cognition because the efficiency of our brains is constrained by the speed, interconnectedness, noisiness and density of the neurons packed into our skulls.
Thus the best scientists, philosophers or artists currently alive are no more intelligent or creative than Aristotle, Descartes, Leibniz or Kant. There are far more thinkers on the planet than in Aristotle’s time and they are better equipped than ever before, but their minds, it seems, are no more able than those of previous artists, scientists and philosophers.
For transhumanist thinkers like Nick Bostrom and Ray Kurzweil, this suggests that many major improvements of intelligence will require us to escape our biology by outsourcing our thinking to non-biological platforms such as computing devices. The components of the fastest computers operate tens of millions of times faster than the spiking frequency of the fastest human nerve cells (neurons), so this suggests an obvious way in which humans might transcend the biological limitations of their brains.
Many early 21st-century humans offload tedious tasks like arithmetic, memorizing character strings such as phone numbers, or searching for the local 24-hour dry cleaner to computing devices. Transhumanists claim that the process of outsourcing biologically based cognition onto non-biological platforms is liable to accelerate as our artificially intelligent devices get more intelligent and as we devise smarter ways of integrating computing hardware into our neurocomputational wetware. Here the convergence of nanotechnology, information technology and biotechnology is liable to be key.
Brain Computer Interfaces like the BrainGate BCI show that it is possible to directly interface computer operated systems with neural tissue, allowing tetraplegic patients to control devices such as robotic arms with their thoughts.
Transhumanists see future humans becoming ever more intimate with responsive computer systems that can extend physical functions, using robotic limbs or arms, as well as cognitive functions such as perception or working memory.
Thus it seems quite possible that future humans or transhumans will be increasingly indistinguishable from their technology. Humans will become “cyborgs” or cybernetic organisms like the Borg in the TV series Star Trek with many of the functions associated with thinking, perception and even consciousness subserved by increasingly fast and subtle computing devices.
As Star Trek aficionados will be aware, the Borg do not seem to represent an attractive ideal for the humanist who values individual autonomy and reason. The Borg are a technological swarm intelligence – like an ant or termite colony – whose individual members are slaved to the goals of the Collective.
Collectively the Borg possess great cognitive powers and considerable technical prowess – though these powers emerge from the interactions of highly networked “drones”, each of which has its human rationality, agency and sociability violently suppressed.
However, many argue that it is naïve to associate the status of the cyborg with that of dehumanized machines.
The cognitive scientist and philosopher Andy Clark has argued that the integration of technology into biology is a historical process that has defined human beings since the development of flint tools, writing and architecture. We are, in Clark’s words, “Natural Born Cyborgs” whose mental life has always extruded into culturally constructed niches such as languages and archives:
The promised, or perhaps threatened, transition to a world of wired humans and semi-intelligent gadgets is just one more move in an ancient game. . . We are already masters at incorporating nonbiological stuff and structure deep into our physical and cognitive routines. To appreciate this is to cease to believe in any post-human future and to resist the temptation to define ourselves in brutal opposition to the very worlds in which so many of us now live, love and work (Clark 2003, 142).
If this is the case, then perhaps the wired, transhuman future that I am sketching here will still be inhabited by beings whose aspirations and values will be recognizable to humanists like Aristotle, Rousseau and Kant.
These transhuman descendants might still value autonomy, sociability and artistic expression. They will just be much better at being rational, sensitive and expressive. Perhaps, also, these skills will repose in bodies that are technologically modified by advanced biotechnologies to be healthier and far more resistant to ageing or damage than ours. But the capacities that define the humanist tradition are not obviously dependent on a particular kind of physical form.
For this reason transhumanists believe that we should add morphological freedom – the freedom of physical form – to the traditional liberal rights of freedom of movement and freedom of expression. We should be free to discover new forms of embodiment – e.g. new ways of integrating ourselves with cognitive technologies – in order to improve on the results of traditional humanizing techniques like liberal arts education or public health legislation.
As someone who shares many of the humanist values and aspirations that I’ve described, I’ll admit to finding the transhuman itinerary for our future attractive. Perhaps some version of it will also be an ecological and economic necessity as we assume responsibility for a planetary ecosystem populated by nine billion humans.
However, there is a catch. While the technological prospectus I’ve given may result in beings that are recognizably like us – only immeasurably smarter, nicer, healthier and more capable – it might also produce beings that are not human at all in some salient respect.
Such technologically engendered nonhumans – or posthumans – may not be the kinds of beings to which humanist values apply. They may still be immeasurably smarter and more robust than we are, but also alien in ways that we cannot easily understand.
I call the position according to which there might be posthumans “Speculative Posthumanism” to distinguish it from posthuman philosophies not directly relevant to this discussion.
The speculative posthumanist is committed to the following claim:
(SP) Descendants of current humans could cease to be human by virtue of a history of technical alteration.
Clearly, this is a very schematic statement and needs some unpacking.
For example, it does not explain what “ceasing to be human” could involve. If Clark and the transhumanists are right, then ceasing to be human is not just a matter of altering one’s hardware or wetware. A human cyborg modified to live in hostile environments like the depths of the sea or space might look strange to us but might use a natural language whose morphology and syntax is learnable by unmodified humans, value her autonomy, and have characteristically human social emotions such as exclusive feelings towards other family members or amour-propre. Thus many of the traits by which we pick out humans from nonhumans could well generalize beyond morphology.
Some argue that the self-shaping, reflective rationality that Kant thought distinguished humanity from animality is an obvious constituent of a “human essence”. An essential property of a kind is a property that no member of that kind can lack. If this is right, then losing the capacity for practical rationality by some technological process (as with the Borg) is a decisive, if unappealing, path to posthumanity.
It can be objected, of course, that some members of the human species (very young children) lack the capacity to exercise reflective rationality while others (individuals with severe mental disabilities) are not able to acquire it. Thus it cannot be a necessary condition for humanity. Being rational might better be described as a qualification for moral personhood, where a person is simply a rational agent capable of shaping its own life and living on fair terms with other persons.
If posthumans were to qualify as moral persons by this or some other criterion, we would appear to have a basis for a posthuman republicanism. The fact that other beings may be differently embodied from regular humans – intelligent robots, cyborgs or cognitively enhanced animals – does not prevent us from living with them as equals.
However, it is possible to conceive of technological alterations producing life forms or worlds so alien that they are not recognizably human lives or worlds.
In a 1993 article, “The Coming Technological Singularity: How to Survive in the Post-Human Era”, the computer scientist Vernor Vinge argued that the invention of a technology for creating entities with greater than human intelligence would lead to the end of human dominion of the planet and the beginning of a posthuman era dominated by intelligences vastly greater than ours (Vinge 1993).
For Vinge, this point could be reached via recursive improvements in the technology. If humans or human-equivalent intelligences could use the technology to create superhuman intelligences the resultant entities could make even more intelligent entities, and so on.
Thus a technology for intelligence creation or intelligence amplification would constitute a singular point or “singularity” beyond which the level of mentation on this planet might increase exponentially and without limit.
The form of this technology is unimportant for Vinge’s argument. It could be a powerful cognitive enhancement technique, a revolution in machine intelligence or synthetic life, or some as yet unenvisaged process. The technology needs to be “extendible” in that improving it yields corresponding increases in the intelligence produced. Our only current means of producing human-equivalent intelligence is non-extendible: “If we have better sex . . . it does not follow that our babies will be geniuses” (Chalmers 2010: 18).
The “posthuman” minds that would result from this “intelligence explosion” could be so vast, according to Vinge, that we have no models for their transformative potential. The best we can do to grasp the significance of this “transcendental event” is to draw analogies with an earlier revolution in intelligence: the emergence of posthuman minds would be as much a step-change in the development of life on earth as “the rise of humankind”.
Vinge’s singularity hypothesis – the claim that intelligence-making technology would generate posthuman intelligence by recursive improvement – is practically and philosophically important. If it is true and its preconditions feasible, its importance may outweigh other political and environmental concerns for these are predicated on human invariants such as biological embodiment, which may not obtain following a singularity.
However, even if a singularity is not technically possible – or not imminent – the Singularity Hypothesis (SH) still raises a troubling issue concerning our capacity to evaluate the long-run consequences of our technical activity in areas such as the NBIC technologies. This is because Vinge’s prognosis presupposes a weaker, more general claim to the effect that activity in NBIC areas or similar might generate forms of life which might be significantly alien or “other” to ours.
If we assume Speculative Posthumanism it seems we can adopt either of two policies towards the posthuman prospect.
Firstly, we can account for it: that is, assess the ethical implications of contributing to the creation of posthumans through our current technological activities.
Vinge’s scenario gives us reasons for thinking that the differences between humans and posthumans could be so great as to render accounting impossible or problematic in the cases that matter. The differences stressed in Vinge’s essay are cognitive: posthumans might be so much smarter than humans that we could not understand their thoughts or anticipate the transformative effects of posthuman technology. There might be other very radical differences. Posthumans might have experiences so different from ours that we cannot envisage what living a posthuman life would be like, let alone whether it would be a worthwhile or a worthless one. Finally, the structure of posthuman minds might be very different from our kind of subjectivity.
Moral personhood presumably has threshold cognitive and affective preconditions, such as the capacity to evaluate actions, beliefs and desires (practical rationality) and a capacity for the emotions and affiliations informing these evaluations. However, human-style practical reason might not be accessible to a being with nonsubjective phenomenology. Such an entity could be incapable of experiencing itself as a bounded individual with a life that might go better or worse for it.
We might not be able to coherently imagine what these impersonal phenomenologies are like (e.g. to say of them that they are “impersonal” is not to commit ourselves regarding the kinds of experiences they might furnish). This failure may simply reflect the centrality of human phenomenological invariants to the ways humans understand the relationship between mind and world, rather than any insight into the necessary structure of experience (Metzinger 2004: 213). Thomas Metzinger has argued that our kind of subjectivity comes in a spatio-temporal pocket of an embodied self and a dynamic present whose structure depends on the fact that our sensory receptors and motor effectors are “physically integrated within the body of a single organism”. Other kinds of life – e.g. “conscious interstellar gas clouds” or (somewhat more saliently) posthuman swarm intelligences composed of many mobile processing units – might have experiences of a radically impersonal nature (Metzinger 2004: 161).
For this reason, we may just opt to discount the possibility of posthumanity when considering the implications of our technological activity, considering only its implications for humans or for their souped-up transhuman cousins.
But surely humans and transhumans have a duty to evaluate the outcomes of their technical activities in the light of these differences, with a view to maximizing the chances of good posthuman outcomes or minimizing the chances of bad ones (Principle of Accounting).
From the human/transhuman point of view, some posthuman worlds might be transcendently good. But others could lead to a very rapid extinction of all humans, or to something even more hellish.
Charles Stross’ brilliant futurist novel Accelerando envisages human social systems being superseded by Economics 2.0: a resource allocation system in which supply and demand relationships are computed too rapidly for those burdened by a “narrative chain” of personal consciousness to keep up. Under Economics 2.0 first person subjectivity is replaced “with a journal file of bid/request transactions” between autonomous software agents. Whole inhabited planets are pulverized and converted to more “productive” ends (Stross 2006: 177).
This post-singularity scenario is depicted as comically dreadful in Stross’ novel. It is bad for humans and transhumans, who prove incapable of keeping up with the massively accelerated intelligences implementing Economics 2.0.
As the world-builder of Accelerando’s future, Stross is able to stipulate the moral character of Economics 2.0. If we were confronted with posthumans, things might not be so easy. We cannot assume, though, that a posthuman world lacking humans would be worse than one with humans but no posthumans. If posthumans were as unlike humans as humans are unlike non-human primates, a fair evaluation of their lives might be beyond us.
Thus accounting for our contribution to making posthumans seems obligatory but may also be impossible with radically alien posthumans, while discounting our contribution is irresponsible. We can call this double bind “the posthuman impasse”.
If the impasse is real rather than apparent, then there may be no principles by which to assess the most significant and disruptive long-term outcomes of current developments in NBIC (and related) technologies.
One might try to circumvent the impasse by casting doubt on Speculative Posthumanism. It is conceivable that further developments in technology, on this planet at least, will never contribute to the emergence of significantly nonhuman forms of life.
However, Speculative Posthumanism is a weaker claim than SH and thus more plausible. Vinge’s essay specifies one recipe for generating posthumans, but there might be posthuman difference-makers that do not require recursive self-improvement. Moreover, we know that Darwinian natural selection has generated novel forms of life in the evolutionary past, since humans are one such. Since there seems to be nothing special about the period of terrestrial history in which we live, it seems hard to credit that comparable novelty, resulting from some combination of biological and technological factors, might not occur in the future.
Is there any way round the impasse that is compatible with Speculative Posthumanism? I will argue that there is, though some ethicists may still prefer more venerable methods like hoping for the best.
3. Becoming Posthuman
I’ve suggested that the alienness of posthumans presents us with an ethical difficulty because they might be so different from humans that we cannot understand them sufficiently to figure out whether their lives are worth living.
However, it can be objected that I may be overstating the difficulty here. Yes, posthumans might behave in ways that defy comprehension. But there are plenty of things that are initially difficult to understand that we can understand if we put ourselves in the right situation.
Some areas of science are difficult to understand in non-ideal circumstances – e.g. if we don’t know the math – but become much easier when we have the math.
Moreover, the very idea of a form of life that is humanly incomprehensible in principle seems questionable. It implies that there is a human “glassy essence” whose laws determine that certain things are incomprehensible to us.
To be sure, there are some things we know that we cannot know – e.g. whether an arbitrary computer program will halt or go on forever. But where something is incomprehensible to us, we cannot know that we do not know it – since knowing this would require us at least to comprehend it. Thus the claim that posthumans could be so radically weird as to be beyond our ken in principle could never be demonstrated.
If transhumanists are right, then plenty of things that are incomprehensible for us now might not be if we upgraded ourselves with the right cognitive enhancements or extensions, or simply made careful observations and interpretations. So characterizing the posthuman as just intrinsically weird is suspect. Maybe nothing is that weird!
The dated non-existence of posthumans is a bigger impediment to knowledge than their hypothetical strangeness. There are, after all, no posthumans yet. The emergence of posthumans would be unprecedented. Thus there are no empirical regularities to appeal to when predicting how they will emerge or what they will be like.
Vinge’s singularity scenario is a conceivable recipe for posthumans, but we don’t yet know whether it is a feasible one. Even if a singularity is possible, the nature of what comes out the other side of it is unpredictable. That’s what makes it singular.
It follows that if there are ideal or best situations for coming to understand posthumans, they will be situations in which posthumans exist – and to date there are none.
Now, recall the principle of accounting:
(Accounting) Humans and transhumans have a duty to evaluate the outcomes of their technical activities in the light of these differences, with a view to maximizing the chances of good posthuman outcomes or minimizing the chances of bad ones.
So if we add the following strong epistemic obligation principle:

(EOAS) If we are obliged to understand something, we are obliged to bring about the best conditions for understanding it (Strong Epistemic Obligation Principle).

then we have a) a plausible route to satisfying Accounting and b) an obligation under EOAS to take it. The argument runs:
1. Understanding posthumans is not possible only if there is a human cognitive essence.
2. There is no human cognitive essence (assumption).
3. Understanding posthumans is possible (1, 2).
4. Given their dated non-existence, the best conditions for understanding posthumans involve us making posthumans or becoming posthuman (true for any non-existent technological artefact).
5. We are obliged to attempt to understand posthumans (Accounting).
6. If we are obliged to understand something, we are obliged to bring about the best conditions for understanding it (Strong Epistemic Obligation Principle).
7. We are obliged to bring about the best conditions for understanding posthumans (5, 6).

Conclusion: We are obliged to make posthumans or become posthuman (4, 7).
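The skeleton of this argument can be set out in propositional form. Writing $E$ for “there is a human cognitive essence”, $U$ for “understanding posthumans is possible”, $M$ for “we make or become posthuman”, $O(\varphi)$ for “we are obliged to bring it about that $\varphi$”, and $\mathrm{Best}(\varphi)$ for “the best conditions for $\varphi$ obtain”, a rough sketch (a schematic rendering, not a worked-out deontic logic) is:

```latex
\begin{align*}
&(1)\quad \neg U \rightarrow E \\
&(2)\quad \neg E \\
&(3)\quad U && \text{from (1), (2) by modus tollens} \\
&(4)\quad O(\mathrm{Best}(U)) \rightarrow O(M) && \text{best conditions require making posthumans} \\
&(5)\quad O(U) && \text{Accounting} \\
&(6)\quad O(U) \rightarrow O(\mathrm{Best}(U)) && \text{instance of EOAS} \\
&(7)\quad O(\mathrm{Best}(U)) && \text{from (5), (6)} \\
&\therefore\; O(M) && \text{from (4), (7)}
\end{align*}
```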
EOAS does seem excessively strong. There might well be cases where we are obliged to understand something but have no overriding duty to choose the best method of doing this because it would be dangerous, cruel or otherwise deleterious to do so.
We can, though, weaken the principle to:
(EOAm) If we are obliged to understand something, we are obliged to bring about only the necessary conditions for understanding it (Moderate Epistemic Obligation Principle).
Then we can still generate our original conclusion by placing stronger constraints on understanding rather than stronger obligations to understand. We can do this by assuming that posthuman nature is a “diachronically emergent” phenomenon.
A diachronically emergent behaviour or property occurs as a result of a temporally extended process, but cannot be inferred from the initial state of that process. It can only be derived by allowing the process to run its course (Bedau 1997).
If posthumans are diachronically emergent phenomena, their morally salient characteristics and effects will not be predictable prior to their occurrence. While this constrains our ability to prognosticate about posthuman-makers, it leaves other aspects of their epistemology quite open. As Paul Humphreys reminds us, diachronic emergence is a one-time event: once we have observed a formerly diachronically emergent event, we are in a position to predict tokens of the same type of emergent property from causal antecedents that have been observed to generate it in the past (Humphreys 2008).
The diachronic emergence assumption seems to follow from the claim that the emergence of posthumans – whatever or whoever they turn out to be – would be akin in many ways to the emergence of an entirely new biological species like Homo sapiens. There are cases of species, such as the naked mole rat (a mammal that lives in hives organized around a single fertile female), whose nature was predicted with some accuracy before their discovery. But the eusociality of mole rats – while unusual among mammals – is common among social insects like ants, termites and wasps. Posthumans generated by an unprecedented technogenetic process might exhibit properties that are not exhibited by any historical kind that humans have encountered up to now. Thus the claim that posthumans would be diachronically emergent seems supportable.
If this is right, then we have a very strong interest in producing or becoming posthumans. This is not to deny that we could have countervailing interests. For example, given the radical uncertainty surrounding a posthuman emergence or “disconnection” from human life and society (see Roden 2012), some argue that we should observe the precautionary principle when considering how to develop the NBIC suite. However, even allowing for countervailing reasons, the argument for an interest in becoming posthuman remains compelling for transhumanists, who claim that we have an overriding interest in cultivating human capacities with NBIC technologies. Thus the transhumanist commitment to humanism is ethically unsustainable.
Bedau, Mark A. 1997. “Weak Emergence”. Philosophical Perspectives 11:375-399.
Chalmers, David J. 2010. “The Singularity: A Philosophical Analysis”. Journal of Consciousness Studies 17 (9-10): 7-65.
Clark, Andy. 2003. Natural Born Cyborgs. Oxford: OUP.
Cranor, Carl F. 2004. “Toward Understanding Aspects of the Precautionary Principle”. Journal of Medicine and Philosophy 29 (3): 259 – 279.
Humphreys, Paul. 2008. “Computational and Conceptual Emergence”. Philosophy of Science 75 (5): 584-594.
Kant, Immanuel. 1978. Critique of Pure Reason. Trans. Norman Kemp Smith. London: Macmillan.
Kant, Immanuel. 1948. Groundwork of the Metaphysic of Morals. Trans. H.J. Paton under the title The Moral Law. London: Hutchinson.
Metzinger, Thomas. 2004. Being No One: The Self-Model Theory of Subjectivity. Cambridge, MA: MIT Press.
Mulhall, Stephen. 1998. “Species-being, Teleology and Individuality Part II: Kant on Human Nature”. Angelaki: Journal of the Theoretical Humanities 3 (1): 49-58.
Roden, David. 2012. “The Disconnection Thesis”. In The Singularity Hypothesis: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, Jim Moor, and Eric Steinhart. Berlin: Springer (Frontiers Collection).
Stross, Charles. 2006. Accelerando. London: Orbit.
Vinge, Vernor. 1992. A Fire Upon the Deep. New York: Tor.
Vinge, Vernor. 1993. “The Coming Technological Singularity: How to Survive in the Post-Human Era”. Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace. Accessed 8 December 2007.
 Although bland, this description has simplicity and generality going for it.
For example, it includes philosophers who have very different conceptions of what the humanly distinctive features are and of their relation to other kinds of things in the world.
- It includes Hedonists who argue that pleasure is the only valuable thing and anti-hedonists who hold that pleasure has no value or is just one valuable state among others.
- It includes naturalists, who hold that humans are just bits of the natural world, along with anti-naturalists, who hold that humans transcend or “rise above” nature somehow – whether because they possess a supernatural part such as an immortal soul or because, unlike hydrogen atoms or cats, they are capable of adding attributes to nature that can only be understood from a human standpoint – e.g. meaning or truth.
- It includes theists – who believe that humans exist in a world containing gods or a God, as well as anti-theists who reject God and gods or have no interest in them.
Current estimates of the brain’s raw processing power run to about 100 teraflops (100 trillion operations per second). The world’s fastest supercomputers currently exceed this by a factor of ten. The fastest neurons in our heads have a maximum spike frequency of about 200 Hz, while the fastest transistors currently operate ten million times faster, at about 2 GHz. Moreover, since the 1950s the increase in the processing power of computer components – integrated circuits – has obeyed Moore’s law: approximately doubling every two years. If this trend continues over the next couple of decades, the artificial processing power on this planet is likely to significantly exceed that of biological systems.
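The arithmetic behind this footnote can be made explicit with a toy projection. All the figures below are the footnote’s rough assumptions (100 teraflops for the brain, a tenfold machine advantage today, doubling every two years), not measurements, and the function name is mine:

```python
# Toy extrapolation of the footnote's Moore's-law arithmetic.
# All starting figures are the footnote's rough assumptions, not data.
BRAIN_FLOPS = 100e12               # ~100 teraflops attributed to the brain
MACHINE_FLOPS = 10 * BRAIN_FLOPS   # supercomputers "exceed this by a factor of ten"

def project(flops: float, years: float, doubling_period: float = 2.0) -> float:
    """Extrapolate processing power, assuming it doubles every `doubling_period` years."""
    return flops * 2 ** (years / doubling_period)

# Over two decades the machine/brain ratio grows from 10x to 10 * 2^10 = 10240x.
ratio = project(MACHINE_FLOPS, 20) / BRAIN_FLOPS
print(f"{ratio:.0f}x")  # prints 10240x
```

The point of the sketch is only that exponential doubling dominates: whatever the exact starting estimates, twenty years of doubling every two years multiplies the gap by roughly a thousand.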
 Rousseau distinguishes amour-propre (self-love) from amour de soi (love of self). The former is an attitude towards an individual’s status or relationships to others.
Even if we suppose that there is a human essence (i.e. a set of necessary and jointly sufficient conditions for humanity), it does not follow that each individual is necessarily human. Thus I may not be able to understand posthumans qua human while being able to understand them qua nonhuman.
Although there is no canonical formulation, most versions of the precautionary principle state that we should place a greater burden of proof on arguments for an activity alleged to have the potential for causing extensive public or environmental harm than on arguments against it (Cranor 2004). In Roden 2012 I argue that a reasonable interpretation of the precautionary principle would be compatible with pursuing a path towards posthumanity and disconnection.