A DEFENCE OF PRE-CRITICAL POSTHUMANISM

Transcript of a paper given at Nottingham University’s Psychoanalysis and the Posthuman Conference, Sept 7, 2010.

Mankind’s a dead issue now, cousin. There are no more souls. Only states of mind.[1]

 

Since emerging in nineties critical theory, transhumanism and cyberpunk literature, the term ‘posthuman’ has been used to mark a historical juncture at which the status of the human is radically in doubt. Two main usages or, if you will, two distinct posthumanisms can be discerned over this period.

 

Transhumanists, futurists and science fiction authors regularly concatenate or hyphenate ‘post’ and ‘human’ when speculating about the long-run influence of advanced technologies on the future shape of life and mind.

 

By contrast, for cultural theorists and philosophers in the ‘continental’ tradition the posthuman is a condition in which the foundational status of humanism has been undermined. The causes or symptoms of this supposed crisis of humanism are as various as the bio-engineered ‘clades’ ramifying through the post-anthropoform solar system of Bruce Sterling’s novel Schismatrix (Sterling 1996). Posthumanism, in this diagnostic or critical sense, is expressed in the postmodern incredulity towards Enlightenment narratives of emancipation and material progress; the deconstruction of transcendental or liberal subjectivities; the end of patriarchy; the emergence of contrary humanisms in postcolonial cultures; the reduction of living entities to resources for a burgeoning technoscience; or, if some theorists are to be believed, all of the above.[2]

 

In this paper, I will argue that these two usages do not only reflect divergent understandings of the posthuman – the speculative and the critical – but also reflect a foreclosure of radical technogenetic change on the part of critical posthumanists. This gesture can be discerned in four arguments which occur in various forms within the extant literature of critical posthumanism:

 

  • the anti-humanist argument
  • the technogenesis argument
  • the materiality argument
  • and the anti-essentialist argument

 

All four, as I hope to show, are unsound.

 

Analysing why these arguments fail has a dual benefit: it prevents us from being distracted by the anti-humanist hyperbole accruing to theoretical frameworks employed within critical posthumanism – such as deconstruction and cognitive science – and, more importantly, it contributes to the development of a rigorous, philosophically self-aware speculative posthumanism.

 

*    *    *

Contemporary transhumanists argue that human nature is an unsatisfactory ‘work in progress’ that should be modified through technological means where the instrumental benefits for individuals outweigh the technological risks. This ethic of improvement is premised on prospective developments in four areas: Nanotechnology, Biotechnology, Information Technology and Cognitive Science – the so-called ‘NBIC’ suite. For example, improved bionic neural interfaces may allow the incorporation of a wide range of technical devices within an enhanced ‘cyborg’ body or ‘exo-self’, while genetic treatments may increase the efficiency of learning or memory (Bostrom and Sandberg 2006) or be used to increase the size of the cerebral cortex. The wired and gene-modified denizens of the transhuman future could be sensitive to a wider range of stimuli, faster, more durable, more intellectually capable and morphologically more varied than their unmodified forebears.

 

Just how unrestricted and capable transhuman minds and bodies can become is contested, since the scope for enhancement depends both on often hypothetical technologies and on hotly contested metaphysical claims. Among the prospective technologies which excite radical transhumanists like Ray Kurzweil are ‘micro-electric neuroprostheses’ which might non-invasively stimulate or probe the brain’s native neural networks, allowing it to jack directly into immersive cognitive technologies or to map its ‘state vector’ prior to uploading an entire personality (Kurzweil 2005, 317);[3] the elusive goal of ‘artificial general intelligence’ – the creation of robots or software systems which approximate or exceed the flexibility of human belief-fixation and comportment; or, perhaps less speculatively, improvements in processor technology sufficient to emulate the computational capacity of human and other mammalian brains (Ibid. 124-125).

 

Among the metaphysical issues that trouble all but the most facile of transhumanist itineraries is the scope of functionalist accounts of mental states and processes. Functionalist philosophers of mind claim that mental state types such as beliefs or pains are constituted by the ‘causal role’ of token states within a ‘containing system’ rather than by the stuff from which the system is made. The causal role of a token state is defined by the set of states that can bring it about (its inputs) and the set of states that it causes in turn (its outputs). The substrate on which that state is realized is irrelevant to its functional role.[4] Some philosophers of mind – David Chalmers, say – are functionalists with regard to representational states like beliefs or desires, but not with regard to phenomenal states, like having a toothache or seeing pink. If Chalmers is right, then we can never produce artificial consciousness purely in virtue of emulating the kinematics of brain states. However, if we accept the accounts of philosophers who offer (however divergent) functional analyses of the property of state consciousness, like Daniel Dennett and Michael Tye, the prospects for artificial consciousness seem somewhat brighter (Dennett 1991). Given a sufficiently global functionalism, a simulation of an embodied nervous system in which these constitutive relationships were actually instantiated would also be a replication, lacking none of the preconditions for intentionality or conscious experience, regardless of whether it was implemented in biological material as this is currently understood. For radical transhumanists influenced by functionalist and computationalist approaches in the philosophy of cognitive science, then, neural replication opens up the possibility of copying the patterns that constitute a given mind onto non-biological platforms that will be inconceivably faster, more flexible and more robust than evolved biological bodies (Kurzweil 2005).
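To make the substrate-neutrality claim vivid, here is a minimal sketch – purely illustrative, and not drawn from any of the authors discussed – in which two systems built from different ‘stuff’ count as tokening the same state type just in case they share an input–output profile. The class names and probe inputs are invented for the example.

```python
# Illustrative sketch only: a toy rendering of the functionalist claim that a
# state type is fixed by its causal role (its input-output profile), not by
# the substrate on which that role is realized.

from abc import ABC, abstractmethod

class PainRealizer(ABC):
    """Anything that plays the 'pain' role: damage-signalling input in,
    avoidance-and-report output out."""
    @abstractmethod
    def respond(self, stimulus: str) -> str: ...

class BiologicalSystem(PainRealizer):
    def respond(self, stimulus: str) -> str:
        # The evolved realizer (C-fibre firing, in the textbook example).
        return "withdraw; report 'ouch'" if stimulus == "tissue damage" else "carry on"

class SiliconSystem(PainRealizer):
    def respond(self, stimulus: str) -> str:
        # A different substrate with the same input-output profile.
        return "withdraw; report 'ouch'" if stimulus == "tissue damage" else "carry on"

def same_functional_state(a: PainRealizer, b: PainRealizer, probes) -> bool:
    """On a functionalist reading, sameness of state type is sameness of
    causal role across the relevant inputs."""
    return all(a.respond(p) == b.respond(p) for p in probes)

if __name__ == "__main__":
    probes = ["tissue damage", "gentle breeze"]
    print(same_functional_state(BiologicalSystem(), SiliconSystem(), probes))  # True
```

The sketch deliberately leaves the Chalmers-style worry untouched: sharing a causal role settles nothing, by itself, about whether phenomenal states come along for the ride.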

 

These radical augmentation scenarios indicate to some that a future convergence of NBIC technologies could lead to a new ‘posthuman’ form of existence. Following an influential paper by the computer scientist Vernor Vinge, this ontological step change is sometimes referred to as ‘the technological singularity’ (Vinge 1993): an epochal ‘discontinuity’ resulting from positive feedback exerted by technical change upon itself (Bostrom 2005, 8). Characteristically the scenario is painted in terms of the creation of artificial super-intelligence – intelligence being the variable considered most liable to affect the rate of technical growth. Vinge claims that were a single super-intelligent machine created, it could create still more intelligent machines, resulting in a growth in mentation to plateaux far exceeding our current capacities. Lacking this intellectual prowess, we cannot envisage some of the ways post-singularity intelligences might re-order the world. A post-singularity world would be constituted in ways that cannot be humanly conceived. If it could be humanly conceived, it would not be the genuine article. The idea of the singularity, then, is that of a principled limit on human cognition, and predictive power in particular. It is homologous, in many respects, to Immanuel Kant’s idea of the thing-in-itself, which, lacking any mode of presentation in the phenomenal world of space and time, must necessarily elude systematic empirical knowledge.
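The feedback intuition can be caricatured in a toy growth model – a gloss of my own, not Vinge’s or Kurzweil’s formalism – on which nothing in the argument hangs: if the growth rate of machine intelligence I is itself an increasing function of I, then for suitable parameters the curve runs away in finite time.

```latex
% Toy model only: intelligence whose growth rate increases with its level.
\[
  \frac{dI}{dt} = k\,I^{\alpha}, \qquad k > 0 .
\]
% For \alpha = 1 this yields mere exponential growth; for \alpha > 1 the solution
\[
  I(t) = \bigl[\,I_0^{\,1-\alpha} - k(\alpha - 1)\,t\,\bigr]^{-\frac{1}{\alpha - 1}}
\]
% diverges at the finite time t^{*} = I_0^{\,1-\alpha}/\bigl(k(\alpha - 1)\bigr):
% one crude picture of a 'singularity' as a horizon beyond which
% extrapolation from present trends breaks down.
```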

 

Commitment to the possibility of a singularity nicely exemplifies the philosophical position of speculative posthumanists. Posthumans in this sense are hypothetical ‘descendants’ of current humans that are no longer human in consequence of some augmentation history.

 

For speculative (or pre-critical) posthumanism, a technically mediated transcendence of the human constitutes a significant ontological possibility.

 

Speculative posthumanism is logically independent of the normative thesis of transhumanism: one can be consistently transhumanist while denying the ontological possibility of posthuman transcendence. Similarly, speculative posthumanism is consistent with the rejection of transhumanism. One could hold that a posthuman divergence is a significant ontological possibility but not a desirable one.[10]

 

Critical posthumanists such as Katherine Hayles, Andy Clark, Don Ihde and Neil Badmington do not contest the potential of NBIC technologies or advance principled arguments against enhancement (Clark is a warm-blooded, moderate transhumanist according to my taxonomy) but argue that speculative or pre-critical posthumanism reflects a philosophically naïve conception of the human, one on which the posthuman would constitute a radical break with it. This position is clearly implied in the title of Katherine Hayles’ seminal work of cultural history How We Became Posthuman. For Hayles, the posthuman is not a hypothetical state which could follow some prospective singularity event, say, but a work in progress: a complex and contested re-conception of the human subject in terms drawn from the modern ‘sciences of the artificial’: information theory, cybernetics, Artificial Intelligence and Artificial Life (Hayles 1999, 286).

 

One example of the intellectual tendencies that inform this new cultural moment is so-called ‘Nouvelle AI’ (NAI). Where the manipulation of syntactically structured representations is the paradigm of intelligence in traditional AI, NAI draws inspiration from the computational prowess exhibited in biological phenomena involving no symbolization, such as swarm intelligence, insect locomotion or cortical feature maps. The guiding insight of NAI is that the preconditions of intelligence – such as error-reduction strategies, pattern recognition or categorization – can emerge in biological systems from local interactions between dumb specialized agents (like ants or termites) without a central planner to choreograph their activities.
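A minimal sketch of this ‘no central planner’ point (illustrative only; the ring of voting agents is an invented toy, not an NAI system): each agent applies a purely local rule, yet a global pattern of agreement settles out of the initial noise.

```python
# Illustrative sketch only: 'dumb' agents arranged on a ring, each applying a
# purely local rule (adopt the majority state of itself and its two
# neighbours), with no central planner. Contiguous blocks of agreement -- a
# global pattern -- emerge from repeated local interaction.

import random

def step(states):
    """One synchronous update: every agent takes the local majority."""
    n = len(states)
    new = []
    for i in range(n):
        trio = [states[(i - 1) % n], states[i], states[(i + 1) % n]]
        new.append(1 if sum(trio) >= 2 else 0)  # local majority vote
    return new

if __name__ == "__main__":
    random.seed(0)
    states = [random.randint(0, 1) for _ in range(60)]  # random initial noise
    for _ in range(15):
        print("".join("#" if s else "." for s in states))
        states = step(states)
```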

 

If human mentation ‘emerges’ likewise from millions of asynchronous, parallel interactions between dumb components, Hayles argues, there is no classically self-present ‘human’ subjectivity for the posthuman to transcend. Mental powers of deliberation, inference, consciousness, etc. are already distributed between biological neural networks, actively-sensing bodies and artefacts (Hayles 1999, 239).

 

I have christened this ‘the anti-humanist objection to posthumanism’ given its striking similarities to the deconstruction of subjectivist philosophy and phenomenology undertaken in post-war French anti-humanisms – Derrida’s in particular (Ibid. 146). Hayles’ proximate target, here, is the putatively autonomous subject of modern liberal theory. The ‘autonomous liberal subject’, she argues, is unproblematically present to itself and distinct from the conceptually-ordered world in which it works out its plans for the good (Ibid. 286). The posthuman subject, by contrast, is problematically individuated, because its agency is constituted by an increasingly ‘smart’ extra-bodily environment on which its cognitive functioning depends and because of the open, ungrounded materiality – or ‘iterability’ – of language which is both arrested by the context of embodied action and infected by its opacity (Derrida 1988, 152; Hayles 1999, 264-5). The decentered or distributed posthuman subject is no longer sufficiently distinct from the world to order it autonomously as the subject of liberal theory is required to do.

 

But is this right?

 

Let’s suppose, along with Hayles and other proponents of embodied and distributed cognition, that the skin-bag is an ontologically permeable boundary between self and non-self (or exo-self). Proponents of the extended mind thesis like Andy Clark and David Chalmers argue from a principle of ‘parity’ between processes that go on in the head and any functionally equivalent process in the world beyond.[5] The parity principle implies that mental processes need not occur only in biological nervous systems but may also occur in the environments and tools of embodied thinkers. If I have to make marks on paper to keep in mind the steps of a lengthy logical proof, the parity principle states that my mental activity is constituted by these inscriptional events as well as by the knowledge and habits reposing in my acculturated neural networks.
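A toy sketch of the parity idea (my illustration, not Clark and Chalmers’ own example): the routine that works through a proof is indifferent to whether the memory role is played by an ‘inner’ store or by marks on paper, here modelled as a scratch file.

```python
# Illustrative sketch only: the same memory role realized 'inside' (an
# in-memory list) or 'outside' (a scratch file standing in for marks on
# paper). The proof-tracking routine does not care where the role is played.

import tempfile

class InnerStore:
    """Stands in for biological working memory."""
    def __init__(self):
        self._steps = []
    def record(self, step: str):
        self._steps.append(step)
    def recall(self):
        return list(self._steps)

class PaperStore:
    """Stands in for marks on paper: an external, non-biological realizer."""
    def __init__(self):
        self._file = tempfile.TemporaryFile(mode="w+")
    def record(self, step: str):
        self._file.write(step + "\n")
        self._file.flush()
    def recall(self):
        self._file.seek(0)
        return [line.strip() for line in self._file]

def work_through_proof(store, steps):
    """The 'cognitive' routine: records and recalls steps via whichever
    store plays the memory role."""
    for s in steps:
        store.record(s)
    return store.recall()

if __name__ == "__main__":
    steps = ["assume p -> q", "assume p", "infer q (modus ponens)"]
    assert work_through_proof(InnerStore(), steps) == work_through_proof(PaperStore(), steps)
    print("same functional contribution, different realizers")
```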

 

However, given the parity between bodily and extra-bodily processes, this cannot make the activity less evaluable in terms of the rationality standards we apply to deliberative acts. Even if the humanist subject emerges from the summed activities of biological and non-biological agents, this metaphysical dependence (or supervenience) need not impair its capacity to subtend the powers of deliberation or reasoning that liberal theory requires of it.[6] Derrida’s more systematic deconstruction of the semantically constitutive subject nuances this picture by entailing limits on the scope of practical reason in the face of the ‘outside’ or exception which infects any rule-governed system (Derrida 1988, 152). The rule or desire is always precipitate, in this way, but there is a difference between being ahead of oneself and being be-headed. The posthuman, in Hayles’ critical sense of the term, is not less human for confronting the fragile, constitutively precipitate character of cognition and desire.

 

This is not to say, of course, that there is no merit in the model of the hybrid self that Hayles presents as ‘posthuman’ or that it has no implications for pre-critical or speculative posthumanism. On the contrary, a ‘deconstruction’ of the classically constitutive subject of post-Cartesian thought is, I have argued, a useful prophylactic against immaterialist fancies or transcendentally inspired objections to the naturalizing project of cognitive science (Roden 2006). However, the naturalization of subjectivity and mind is at best a conceptual precondition for envisaging certain transcendent posthumanist itineraries involving the emergence of artificial minds from new technological configurations of matter. It does not represent their culmination.

 

There are two other objections that may potentially survive this analysis. Firstly, it could be objected that critical posthumanism – like the extended mind thesis – shows that the human is “always already” technically constituted. In her contribution to a recent Templeton Research Seminar on transhumanism, Hayles argues that transhumanists are wedded to a technogenetic anthropology for which humans and technologies have existed and co-evolved in symbiotic partnership. Not only would future transhuman enhancement be a technogenetic process, but so, according to this story, were comparable transformations in the deep past. Human technical activity has, for example, equipped some with lactose tolerance or differential calculus without monstering the beneficiaries into posthumans. One of the proponents of the extended mind thesis, Andy Clark, has framed the technogenesis argument against posthumanism particularly clearly in his book Natural Born Cyborgs:

 

The promise, or perhaps threatened, transition to a world of wired humans and semi-intelligent gadgets is just one more move in an ancient game. . . We are already masters at incorporating nonbiological stuff and structure deep into our physical and cognitive routines. To appreciate this is to cease to believe in any post-human future and to resist the temptation to define ourselves in brutal opposition to the very worlds in which so many of us now live, love and work (Clark 2003, p. 142).

 

Natural born cyborgs, as suggested, are already dealers in hybrid mental representations which exploit both a linguistically mapped environment and our multifariously talented brains. This is significant because our capacity to ascribe structured propositional attitudes to others arguably presupposes the capacity to use language to represent their contents. Representing the contents of beliefs is necessary for evaluating them, and it is independently plausible to suppose that, as Donald Davidson argues in his essay ‘Thought and Talk’, having the capacity to evaluate beliefs is part of what is required of a believer (Davidson 1984).

 

Clearly, if we restrict the evidence base to cases where augmentation has not resulted in a species divergence or something very like it, then we will induce that this is not liable to happen in the future. However, some pre-human divergence had to have happened in our evolutionary past, and it is at least plausible – given the ‘natural born cyborgs’ thesis – that technologies such as public symbol systems were a factor in the hominization process. Given that a pre-human divergence has occurred in the past, perhaps due to evolutionary pressures brought about by the development of simpler symbolization techniques, why preclude the possibility that convergent NBIC technologies might prompt a similar step change in the future?

 

I have argued elsewhere that a cognitive augmentation that replaced public language with a non-symbolic vehicle of cognition and communication might – assuming Clark’s account of hybrid representations – lead to the instrumental elimination of propositional attitude psychology through the elimination of its public vehicles of content. Post-folk folk might, arguably, be opaque to the practices of intentional interpretation we bring to bear in ‘our’ – i.e. ‘human’ – social intercourse and thus might well form initially discrete social and reproductive enclaves that might later seed entirely posthuman republics.

 

Another of Hayles’ objections to standard posthumanist visions of transcendence is their supposed elision of the materiality of human embodiment and cognition: the materiality argument. The fact that computer simulations can help us understand the self-organizing capacities of biological systems does not entail that these capacities can be fully replicated by some system in virtue of its implementing a sufficiently fine-grained software representation of their functional structure.

 

It is true that some posthumanist scenarios presuppose that minds or organisms can be fully replicated on speculative non-biological substrates like the computronium or ‘smart matter’ imagined in Ken MacLeod’s Fall Revolution novels. However, this objection applies to a fairly restricted class of posthuman transcendence itineraries: namely, those involving the replication of existing minds and organisms in computational form. Although Hayles provides no arguments against pan-computationalism or global functionalism, it might well be the case that synthetic life forms or robots, being differently embodied, will be differently minded as well (who knows?). In this case, the materiality of embodiment argument works in favour of the pre-critical posthumanist account, not against it. On the other hand, she may be wrong and the pan-computationalists right. Mental properties of things may, for all we know, supervene on their computational properties because every other property supervenes on them as well.

 

I turn, finally, to an objection that is perhaps implicit rather than explicit in the arguments of critical posthumanists to date but is worth considering on its own, if only for its speculative payoff. I refer to this as the anti-essentialist argument.

 

The anti-essentialist objection to posthumanism starts from a particular interpretation of the disjointness of the human and the posthuman: namely, that the only thing that could distinguish the set of posthumans from the set of humans is that all posthumans would lack some essential property of humanness in virtue of their augmentation history. It follows that if there is no human essence – no properties that humans possess in all possible worlds – there can be no posthuman divergence or transcendence.
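Schematically, and regimenting the objection in my own terms rather than the critics’, the assumed disjointness requires an essential property E such that:

```latex
% My schematisation of the objection, not a formula from the literature cited here.
\[
  \exists E \,\bigl[\, \Box\,\forall x\,(\mathrm{Human}(x) \rightarrow E(x))
  \;\wedge\; \forall y\,(\mathrm{Posthuman}(y) \rightarrow \neg E(y)) \,\bigr].
\]
% If anti-essentialism denies the first conjunct for every candidate E, then,
% on this reading, the human/posthuman disjunction cannot obtain.
```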

 

This is a potentially serious objection to speculative posthumanism because there seem to be plausible grounds for rejecting essentialism in the sciences of complexity or self-organization that underwrite many posthumanist prognostications. Some philosophers of biology hold that the interpretation of biological taxa most consonant with Darwinian evolution is that they are not kinds (i.e. properties) but individuals. Evolution by natural selection is a form of self-organisation involving feedback relationships between the distribution of genetic traits across populations and their phenotypic consequences in particular environments. An individual or proto-individual can undergo a self-organizing process, but an abstract kind or universal cannot. Thus, the argument goes, evolution happens to species qua individuals (or proto-individuals) not species qua kinds.  To be biologically ‘human’ on this view is not to exemplify some set of necessary and sufficient properties, but to be genealogically related to earlier members of the population of humans (Hull 1988).

 

Clearly, if biological categories are not kinds and posthuman transcendence requires the technically mediated loss of properties essential to membership of some biological kind, then the posthuman transcendence envisaged by pre-critical posthumanism is metaphysically impossible.[7]

 

Underlying the anti-essentialist objection is the assumption that the only significant differences are differences in the essential properties demarcating natural kinds. But why adhere to this philosophy of difference?[8] The view that nature is articulated by differences in the instantiation of abstract universals sits poorly with the idea of an actively self-organizing nature underlying the leading-edge cognitive and life sciences. A view of difference consistent with self-organization would locate the engines of differentiation in those micro-components and structural properties whose cumulative activity generates the emergent regularities of complex systems.

 

For example, we might adopt an immanent ontology of difference for which individuating boundaries are generated by local states of matter, such as differences in pressure, temperature, miscibility or chemical concentration (DeLanda 2004). For immanent ontologies of difference – that of Gilles Deleuze, say – the conceptual differences articulated in natural-language kind-lexicons are asymmetrically dependent upon active individuating differences (Ibid. 10). A Deleuzean ontology is obviously not the only option here: any ontology which reconciles the existence of real or radical differences with the lack of transcendent or transcendental organizing principles would do.

 

In short: we can be anti-essentialists and anti-Platonists while holding that the world is profoundly differentiated in a way that owes nothing to the transcendental causality of abstract universals, subjectivity or language.

 

Conclusion:

 

I have argued that critical posthumanists provide few convincing reasons for abandoning pre-critical or speculative posthumanism. The anti-essentialist argument presupposes a model of difference that is ill-adapted to the sciences that critical posthumanists cite in favour of their naturalized deconstruction of the human subject. The deconstruction of the humanist subject implied in the anti-humanist objection may itself be a useful prolegomenon to a posthuman-engendering cognitive science; but it complicates rather than corrodes the philosophical humanism that critical posthumanism problematizes while leaving open the possibility of a radical differentiation of the human and the posthuman. The technogenesis objection is weak, if conceptually productive. The elision of materiality argument is based on problematic assumptions and, even if sound, would preclude only some scenarios for posthuman divergence.

 

Of these, the anti-essentialist objection seems the strongest and most wide-ranging in its implications. Our response to it suggested that it might be circumvented with an immanent ontology of emergent differences such as Deleuze’s ontology of the virtual. However, a consequence of embracing locally emergent differences in this way is that there can be no adequate concept of posthuman difference without posthumans. For it is surely a consequence of any such account that a science of the different cannot precede its historical emergence or morphogenesis, even if only in simulated form. This implies that the posthuman is at best a placeholder signifying a possibility that we cannot adequately conceptualize ahead of its actualization. However, this does not preclude a theoretical development of the implications of the posthuman insofar as we can conceptualize it.

 

Moreover, the emptiness of the signifier ‘posthuman’ has an ethical or, perhaps, ‘anti-ethical’[9] consequence that arguably should be considered more fully in the light of Derrida’s remarks about the precipitate character of thought. If the speculative idea of the posthuman is a placeholder for differences that are determinable only via some synthetic process – such as the creation of actual posthumans, modified transhumans, or a range of simulations or aesthetic models (as in cybernetic art) – these differences can be determined only by progressive actualization. Thus posthumanist philosophy is locked into a dialectically unstable preterition, falling between speculative and synthetic activity. To understand what is as yet undetermined, it must attempt – however incrementally – to bring it into being and to give it shape.

 

BIBLIOGRAPHY:

Bostrom, Nick (2005), ‘A History of Transhumanist Thought’, Journal of Evolution and Technology 14(1).

____ (2005b), ‘In Defence of Posthuman Dignity’, Bioethics 19(3), pp. 203-214.

____ (2008), ‘Why I Want to Be Posthuman When I Grow Up’, in B. Gordijn and R. Chadwick (eds.), Medical Enhancement and Posthumanity, Springer.

Bostrom, Nick and Sandberg, Anders (2006), ‘Converging Cognitive Enhancements’, Ann. N.Y. Acad. Sci. 1093, pp. 201–227.

Clark, Andy (2003), Natural Born Cyborgs (Oxford: OUP).

____ (2006), ‘Language, Embodiment and the Cognitive Niche’, Trends in Cognitive Sciences 10(8), pp. 370-374.

____ (1993), Associative Engines (Cambridge, MA: MIT Press/Bradford).

____ (2006), ‘Material Symbols’, Philosophical Psychology 19(3), pp. 291–307.

Clark, Andy and Chalmers, David (1998), ‘The Extended Mind’, Analysis 58(1), pp. 7-19.

Churchland, Paul (1998), ‘Conceptual Similarity Across Sensory and Neural Diversity: The Fodor/LePore Challenge Answered’, Journal of Philosophy XCV(1), pp. 5-32.

_____ (1995), The Engine of Reason, The Seat of the Soul (Cambridge, MA: MIT Press).

_____(1989)‘Folk Psychology and the Explanation of Human Behaviour,’ Philosophical Perspectives 3, pp. 225–241.

____(1981), ‘Eliminative Materialism and the Propositional Attitudes’, Journal of Philosophy 78(2), 67-90.

Cilliers, Paul (1998), Complexity and Postmodernism. London: Routledge.

Davidson, Donald (1984), ‘Thought and Talk’, in Inquiries into Truth and Interpretation (Oxford: Clarendon Press), pp. 155-170.

Deacon, Terrence (1997), The Symbolic Species: The Co-evolution of Language and the Human Brain (London: Penguin).

DeLanda, Manuel (1997), ‘Immanence and Transcendence in the Genesis of Form’, South Atlantic Quarterly 96(3), Summer 1997, pp. 499-514.

_____(2004), Intensive Science & Virtual Philosophy, London: Continuum.

Deleuze, Gilles and Guattari, Felix (1992), A Thousand Plateaus, Brian Massumi (trans.). London: Athlone.

Derrida, Jacques (1986), Margins of Philosophy, Alan Bass (trans.) (Brighton: Harvester Press), pp. 209-271.

___(1988), Limited Inc. Samuel Weber (trans.). Northwestern University Press.

___(2002), Acts of Religion, Gil Anidjar (ed.). New York: Routledge.

Dennett, Daniel (1991), Consciousness Explained. London: Penguin.

Fukuyama, Francis (2002), Our Posthuman Future: Consequences of the Biotechnology Revolution (London: Profile Books).

Hayles, N. Katherine (1999), How We Became Posthuman: Virtual Bodies in Cybernetics, Literature and Informatics (Chicago: University of Chicago Press).

Hull, David (1988), ‘On Human Nature’, in PSA 1986, vol. 2, A. Fine and P. Machamer (eds.), East Lansing, MI: Philosophy of Science Association, pp. 3-13; reprinted in Hull (1989) and Hull and Ruse (eds.), Philosophy of Biology (1998).

Jones, Richard (2009), ‘Brain Interfacing with Kurzweil’, http://www.softmachines.org/wordpress/?p=450, accessed 08.09.2009.

Kurzweil, Ray (2005), The Singularity is Near (New York: Viking).

LaPorte, Joseph (2004), Natural Kinds and Conceptual Change (Cambridge: CUP).

Lisewski, Andreas Martin (2006), ‘The concept of strong and weak virtual reality’, Minds and Machines 16, pp. 201–219.

Lycan, William G. (1999), ‘The Continuity of Levels of Nature’, in William Lycan (ed.), Mind and Cognition (Oxford: Blackwell), pp. 49-63.

MacLennan, B.J. (2002), ‘Transcending Turing Computability’, Minds and Machines 13, pp. 3–22.

Marx, Karl and Engels, Frederick (1994), The German Ideology, C.J. Arthur (Ed.). London: Lawrence and Wishart.

Mackenzie, Adrian (2002), Transductions: bodies and machines at speed (London: Continuum).

Patton, Paul (2007), ‘Utopian Political Philosophy: Deleuze and Rawls’, Deleuze Studies 1, pp. 41-59.

Rawls, John (1999), A Theory of Justice (Harvard University Press).

Simondon, Gilbert (1989), Du mode d’existence des objets techniques (Editions Aubier).

Shagrir, Oron (2006), ‘Why We View the Brain as a Computer’, Synthese 153, pp. 393-416.

Soper, Kate (1986), Humanism and Anti-humanism. London: HarperCollins.

Sterling, Bruce (1996), Schismatrix Plus (New York: Berkley).

Vinge, Vernor (1993), ‘The Coming Technological Singularity: How to Survive in the Post-Human Era’ [online], http://www.rohan.sdsu.edu/faculty/vinge/misc/singularity.html, accessed 24.04.2008.

 

 

 


[1] Sterling (1996), p. 59.

[2] This appears to be the position of Rosi Braidotti in her recent plenary address to the 2009 Society for European Philosophy and Forum for European Philosophy Conference in Cardiff.

[3] For a rather less sanguine commentary on the state of the art in non-invasive scanning see Jones 2009.

[4] By analogy, any system could count as being in the state White Wash Cycle if inputting dirty whites at some earlier time resulted in it outputting clean whites at some later time.

[5] Parity Principle: ‘If, as we confront some task, a part of the world functions as a process which, were it to go on in the head, we would have no hesitation in accepting as part of the cognitive process, then that part of the world is (for that time) part of the cognitive process’ (Clark and Chalmers 1998, p. XX).

 

[6] The notion of supervenience is frequently used by non-reductive materialists to express the dependence of mental properties on physical properties without entailing their reducibility to the latter. Informally: M properties supervene on P properties if a thing’s P properties determine its M properties. For example, if aesthetic properties supervene on physical properties, then if x is physically identical to y and x is beautiful, y must be beautiful. Supervenience accounts vary with the modal force of the entailments involved. ‘Natural’ or ‘nomological’ supervenience holds in worlds whose physical laws are like our own. ‘Metaphysical’ supervenience, on the other hand, is often claimed to hold with logical or conceptual necessity.
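Schematically (a standard textbook rendering of the informal gloss above, not a formula drawn from the works cited here):

```latex
% P-indiscernibility entails M-indiscernibility; the strength of the box
% marks the difference between 'natural' and 'metaphysical' supervenience.
\[
  \Box\,\forall x\,\forall y\,
  \Bigl[\, \forall P\,\bigl(Px \leftrightarrow Py\bigr)
  \;\rightarrow\;
  \forall M\,\bigl(Mx \leftrightarrow My\bigr) \,\Bigr].
\]
```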

[7] This objection is overdetermined, further, by the fact that the possibility of successfully implementing radical transhumanist policies seems incompatible with a stable human nature. If there are few cognitive or bodily invariants that could not – in principle – be modified with the help of some hypothetical NBIC technology, then transhumanism arguably presupposes that there are no such essential properties for humanness. Transhumanism might still be consistent with an etiolated historical essentialism which holds that any being descended from a member of some hypothetical ancestor population is human.

[8] David Hull points out that the genealogical boundaries between species can be considerably sharper than boundaries in ‘character space’ (Hull 1988, 4). The fact that nectar-feeding hummingbird hawk moths and nectar-feeding hummingbirds look and behave in similar ways does not invalidate the claim that they have utterly distinct lines of evolutionary descent (Laporte 2004, 44).

[9] In her address to the Cardiff SEP-FEP conference, ‘The Ethics of Extinction’, Claire Colebrook argued that while ethos implies habit, place and environment, situations of catastrophic change (e.g. climate change) imply the need to overcome these rooted modes of action and affect. Hence the prospect of humanity being superseded by non-humans requires an anti-ethics which imagines or simulates the radically non-human.


[10] Although some hold that the singularity is ‘beyond good or evil’, one might hold that certain posthumans would be worse off than even the most miserable human; a possibility that could warrant anti-transhumanist policies such as technological relinquishment or pre-emptive species suicide.

 

 
