Politics as Zombie Warfare: Against Steve Fuller's Transhumanism

Justin Novak -- from 'Disfigurines' series

Steve Fuller has a wildly provocative article over at IEET entitled “We May Look Crazy to Them, But They Look Like Zombies to Us: Transhumanism as a Political Challenge”.

As the title suggests, the article seeks to portray the political challenge of transhumanism as an existential conflict between transhumanists (who are committed to indefinite life extension) and a bioconservative hoi polloi who believe:

  1. that they will live no more than 100 years and quite possibly much less.
  2. that this limited longevity is not only natural but also desirable, both for themselves and everyone else.
  3. that the bigger the change, the more likely the resulting harms will outweigh the benefits.

Fuller’s argument goes as follows:

i) Biocons are comprehensively wrong: 1, 2 and 3 are false (the transhumanist assumption).

ii) The Biocons are thus programmed for destruction – not only their own but ours.

iii) The Biocons are thus relevantly similar to zombies.

Or to employ Fuller’s overlit prose:

These are people who live in the space of their largely self-imposed limitations, which function as a self-fulfilling prophecy. They are programmed for destruction – not genetically but intellectually. Someone of a more dramatic turn of mind would say that they are suicide bombers trying to manufacture a climate of terror in humanity’s existential horizons. They roam the Earth as death-waiting-to-happen.

 This much is clear: If you’re a transhumanist, ordinary people are zombies.

It follows that, for transhumanists, the zombie apocalypse is an ongoing political reality and a substantial proportion of those reading this are its benighted vectors. Fuller derives only three political options from extant zombie survival guides:

a) You kill [the zombies] once and for all

b) You avoid them.

c) You enable them to be fully alive.

All three have their costs, but a) is, in many ways, the most attractive. After all, b) may be just too resource intensive, while c) is similarly problematic. As Fuller concludes:

Here there is a serious public relations problem, one not so different from development aid workers trying to persuade ‘underdeveloped’ peoples that their lives would be appreciably improved by allowing their societies to be radically re-structured so as to double their life expectancy from 40 to 80. While such societies are by no means perfect and may require significant change to deliver what they promise their members, nevertheless the doubling of life expectancy would mean a radical shift in the rhythm of their individual and collective life cycles – which could prove quite threatening to their sense of identity.

Of course, the existential costs suggested here may be overstated, especially in a world where even poor people have decent access to more global trends. Nevertheless the chequered history of development aid since the formal end of Imperialism suggests that there is little political will – at least on the part of Western nations — to invest the human and financial capital needed to persuade people in developing countries that greater longevity is in their own long-term interest, and not simply a pretext to have them work longer for someone else.

I think there’s scope for a transhumanist critique of the Zombie Argument and a posthumanist critique. I’ll say more about the former than the latter in what follows since Fuller’s piece is largely directed at a transhumanist constituency rather than a posthumanist one.

Suppose we understand transhumanism (H+) as a kind of humanism with added gizmos (or control knobs). Then (as I’ve argued in Posthuman Life) H+ is minimally committed to traditional humanist values: in particular, the cultivation of autonomy and rationality. We may construe autonomy as a matter of degree. A person is more autonomous, the more their range of worthwhile choices increases.

A commitment to autonomy seems like a good way to support H+ since increasing our powers to modify nature and ourselves will plausibly increase the ambit of our worthwhile choices. It will make us more autonomous. (We may even add a rider that the cultivation of any power implies a commitment on the part of rational beings to its open-ended extension.)

Now I take it that a commitment to rationality includes a commitment to some form of public reason and accountability. I’m not excluding the possibility of emancipatory political violence here, but the rationale for violence must be genuinely emancipatory and framed in terms that could enlist the support of reasonable interlocutors in the game of “giving and asking for reasons”. A commitment to public reason implies a commitment to the politics of recognition: treating others as rational subjects capable of being swayed by the better argument while being reciprocally committed to abandoning one’s claims in the light of persuasive counter-arguments. To use Rawlsian terminology, transhumanism has a political as well as a comprehensive component. The political component provides a side constraint on the way in which its comprehensive aims (life extension, intelligence augmentation, etc.) can be promulgated.

I don’t think Fuller’s zombie argument can pass through this political filter. Not only does it assume that other interlocutors are comprehensively wrong, it portrays them as essentially wrong. Non-transhumanists are not fellow people whose reason one can appeal to, but zombies, a plague on the Earth.

Casting one’s political opponents in this way isn’t humanism with control knobs; it’s anti-humanist zealotry with control knobs. For a humanist, it constitutes a political betrayal of the project of humanism that transhumanists hope to continue.

This holds even for those who accept the conclusion of the zombie argument but opt for persuasion. If we fail to engage with others as rational beings, we’re betraying the core commitments of humanism and lapsing into irrational violence. So the zombie argument not only begs the question in favour of transhumanism; it is pragmatically self-vitiating because it fails the public reason test.

Having set out the bones of a transhumanist rebuttal of Fuller, I’ll content myself with a brief sketch of a posthumanist one. The Speculative Posthumanism that I’ve espoused in Posthuman Life is characterised by a position I call Anthropologically Unbounded Posthumanism (AUP). AUP holds that the space of possible agents is not bound (a priori) by conditions of human agency or society. Since we lack future-proof knowledge of possible agents, AUP allows that the results of techno-political interventions could be weird in ways that we are not in a position to imagine (Roden 2014: Chapters 3-4; 2013; forthcoming). Note that AUP is an epistemic position, one consonant with some of the claims of critical posthumanists, but also with forms of naturalism and speculative realism.

The ethical predicament of the Speculative Posthumanist is (as I’ve emphasised elsewhere) more complex than that of the Transhumanist or their Promethean and Accelerationist cousins (Roden 2014, Chapters 1-2; Brassier 2014). Given AUP there need be no structure constitutive of all subjectivity or agency. Thus she cannot appeal to an unbounded theory of rational subjectivity to support an ethics of becoming posthuman.

It doesn’t follow that SP implies the rejection of the transhumanist objection. The objection holds locally, for beings of a kind for which the politics of recognition makes sense (e.g. as long as we’re not Jupiter Brains or swarm intelligences). But whether or not this is true, AUP seems to go with a far more pluralist value theory than H+. If we have no a priori grip on the kind of agents that might result from some iteration of future technical activity, we have no grip on what will be important to them. Would life extension make sense to a being that lacked a conception of itself as a persistent agent? We might think that such a being could not be a candidate for properly posthuman status, but I’ve adduced plenty of arguments in PHL and elsewhere to undermine this intuition. In addition, AUP is consistent with multiple posthuman becomings, some of which may involve quite subtle adjustments to gender identity, sexuality, embodiment, and phenomenology. These may or may not involve life extension. In fact it does not seem irrational to adopt certain forms of posthumanist alteration in the knowledge that one’s life might be shortened by so doing (space colonisation, anyone?).

So AUP tells against the claim that only one position regarding life-extension is the right one. It doesn’t preclude the project of life-extension either, but provides strong supplementary grounds for not portraying our Biocon friends as zombies.

References

Brassier, Ray. 2014. “Prometheanism and Its Critics”. In R. Mackay and A. Avanessian (eds.), #Accelerate: The Accelerationist Reader (Falmouth: Urbanomic), 467–488.

Roden, David. 2013. “Nature’s Dark Domain: An Argument for a Naturalised Phenomenology”. Royal Institute of Philosophy Supplements 72: 169–88.

Roden, David. 2014. Posthuman Life: Philosophy at the Edge of the Human. London: Routledge.

Roden, David. Forthcoming. “On Reason and Spectral Machines: An Anti-Normativist Response to Bounded Posthumanism”. In Rosi Braidotti and Rick Dolphijn (eds.), Philosophy After Nature.

 

26 thoughts on “Politics as Zombie Warfare: Against Steve Fuller's Transhumanism”

  1. well notice also that, short-term, what the transhumanists ought to be working on is 1) buffing our life extension techniques so that folks can more regularly live more than 100 years, 2) improving our life extension techniques so that living more than 100 years looks desirable, rather than like merely extending an unpleasant dying process, and 3) trying out various “big changes” to see which ones really do have benefits that outweigh the harms.

    That is, short-term, the transhumanists, following their own goals for their own reasons, are basically ALSO building the elements of a persuasion case. Even if there are staunch biocons who will never be convinced, there are surely plenty of bioskeptics who CAN be convinced by well-functioning actual techniques, field tests showing that the unpleasant side effects can be controlled well, etc. Eventually we may get to the point of having to choose between strategies a, b, and c for dealing with biocons, but right now we have to do c anyway for our own reasons, so this whole discussion is premature. And even then, we’ll probably want to do more of c to separate out the convincible from the die-hards.

    And to be very frank, it’s the bad life-extension that looks zombie-like to me. If you promised me several more decades in a 90-year-old body, with a badly senile mind, I doubt I’d bite, even before looking at the rest of the price tag. And I’d have zombie-related nightmares of being stuck in a state like that. Maybe liches or fossils are other tempting views of the recipients or advocates of imperfect early deep life extension techniques. Uploads that capture some, but not enough, of what the person once was, or do so with deep flaws, can look like ghosts or echoes or shadows of the folks that once were, until maybe we get good enough at the techniques for them to seem like genuine continuations. The whole field of necromancy is rife with metaphors for flawed and imperfect stages of transhumanism. And since I’m more on the “self-alteration” than “life-extension” side of transhumanism, I’ll say the Biocons and life-extension can BOTH look like they are afraid of change, or overly static, or unwilling to morph into new forms of being, like statues rather than living things or larvae unwilling to enter their chrysalis. What is a zombie but something that has a partial semblance of life without having the full robust reality of life?

  2. Seemed to me that Steve Fuller’s intent in that article was not to put forward a real political argument, but simply to hold up an ironic mirror to a biocon position on life-extension. I.e. if biocon is the current prevailing norm of our culture, from that perspective, transhumanists look like mad scientists and deranged hubristic tech entrepreneurs. I took it that the idea was meant to show how such biocon attitudes might look from the perspective of a future normal.

    It is entirely possible (probable, even) that this line of thought will be taken up as a serious form of political rhetoric at some point. Rawlsian overlapping consensus looks a sensible liberal solution to handling the various political orientations that will emerge around various post/transhumanisms. But I take it the thrust of Fuller’s transhumanism focuses on life extension as an economic, and not just political, problem. Those who are alive today who want life extension are faced with a stronger economic imperative that makes overlapping consensus look like a sticking point.

    1. Hi Leah, Thanks for commenting!

      It’s possible that I’ve missed the rhetorical tenor of Steve’s article. His subsequent defence of the argument on the IEET site suggests to me that he wasn’t operating in an ironic mode. But maybe he was ironising himself without realising it. There are no reliable subscripts for marking or limiting irony.

      My use of Rawls was strategic, since it is embedded in a posthumanist theory for which the politics of recognition only has local validity. I can certainly envisage situations where questions of posthuman or transhuman alteration simply can’t be resolved in a democratic way, if only because it is no longer clear who or what has the right to be counted as a citizen. Posthuman politics runs deeper than liberal politics in this way. I think B P Morton admits that, and I try to give a clear justification for that position in the final chapter of PHL. However, while the considerations adduced in Fuller’s argument are largely economic, their implications are surely political.

  3. Hi BP

    I’m at an age where a substantial number of my friends and family are either very old, facing terminal disease or chronic age-related disease. I don’t like it and find it extremely hard to come to terms with. So the prospect of deferring this has great gut appeal. I’ve also discovered that I like running comparatively late in life, but I accept that I won’t always be able to run – because of joint problems or respiratory problems or whatever – and that pisses me off. I’m happier with some idiosyncratic aspects of my sexuality in middle age than I ever was as a young person, but my capacity to explore these will probably recede at some point. The list goes on….

    So there are plenty of subjectively appealing reasons in favour of modest life extension. And they may also scale up as reasons for radical life extension, for all I know. But, as you say, we need to be able to do it in a way that’s compatible with a life that folks want to live. We don’t want to survive as ghosts or echoes of ourselves, whatever that might entail in practice. As you say, achieving effective life extension is the best way of promulgating the transhumanist position here. But I can’t see how this could be politically achievable by following Fuller’s prescription and painting non-transhumanists as hordes of orcs or zombies. It not only violates the values that I take to be core to transhumanism, it’s dumb – it replays Bush’s war on terror in the biopolitical sphere.

    And, yes, I think value pluralism should incline us to value morphological freedom and exploration, not some laundry list of H+ commitments.

  4. Oh yeah, don’t get me wrong. Moderate life extension has done a lot of good in the 20th century! It’s a good thing, and I’m not trying to bad-mouth it. But it isn’t a cure-all: it never got to the extremes many techno-optimists hoped, it wasn’t equitably distributed, it came with an elaborate medical-industrial complex, and it tended to give a mix of extra good years and extra bad years. I have good hope that life extension techniques will continue to improve in the 21st century. But they will probably continue to be modest in actual accomplishment, expensive, inequitably distributed, come with huge social/political/economic/environmental downsides, and give a frustrating mix of extra robust life, and extra barely-life.

    A common, probably even the normal, experience for well-off folks is to go slowly from “further life extension is definitely worth it at this point” to “we could try to extend life even further in this case, but the trade-offs are just too dire”. And there is a lot of painful moral ambiguity, and personal decision making, and soul searching in the grey area there. I believe it is possible and indeed not uncommon to cling to life too long, and that will probably become even more tempting and complex in the future. But the majority of the world living past 100 by the end of the 21st century? And doing so in overall desirable ways? That seems like techno-fairy pollyannism to me. But hey, perfect it and get it to work and I’ll probably become a cheerleader for it too …

  5. Roden’s response draws attention to the sense in which transhumanism’s commitment to rationality enables it to lay claim to humanism’s intellectual legacy. I actually agree with much of his characterization of what it means to be rational oneself and to ascribe rationality to others. However, when Kant – or Hegel, Rawls, Habermas, whoever – talk about, say, the mutual accountability of reason-givers, they are literally referring to fully self-realized rational agents. To be sure, this is something within the capacity of any ‘human’ to be, but it is not necessarily something that human beings actually are. This is why for Kant and his heirs rationality never happens in real discourses but only in counterfactual ones that take place in, e.g., the ‘kingdom of ends’, ‘ideal speech situation’ or the ‘original position’. Thus, the question that always lurks for people in this tradition is the exact distance between the ideal world of rational agents and the sub-ideal world of sub-rational agents – and how, if at all, the gap can be bridged.

    This is not a completely hopeless task, and people in this tradition tend to believe that something called ‘education’ can work miracles at bridging the gap. But let’s stick to politics. The various artifices of democratic politics – from parliamentary procedure to constitutional law – can be seen as setting up ‘smart environments’ for humans to live up to their full rational potential by being forced or nudged to make claims using certain sorts of arguments in certain sorts of ways. The result is to approximate in concrete terms the various Kantian fantasies of rationality. Moreover, the sorts of conflicts made visible in these environments effectively sublimate what might otherwise spill over into violence in ordinary life. Thus, political opponents learn to take ‘no’ for an answer – at least until they get an opportunity to pose the question again. But there are limits to the magic that can be worked by the artifices of democracy. Then what?

    Here Roden doesn’t take the full measure of my appeal to imperialism and development policy in the latter half of the piece. If I, as a (British, American, Soviet, etc.) imperialist, believe that you are in the Kantian sense just as ‘rational’ as I am, yet I cannot win you over to my ideology (which admittedly would require a world-view change on your part), what should I conclude? Specifically, should I conclude that you are just as fully self-realized as a rational agent as I am? Or rather, that you are not yet fully self-realized? If I believe the former, I’ll leave you alone to live as you wish, regardless of the consequences (even though I believe that this will amount to suicide, which may also do some collateral damage to me). But if I believe the latter, I may start to adopt more coercive tactics that culminate in serious violence – once again all for your own good, until you are able to assent in your own voice to what reason dictates.

    I would call that ‘genuinely emancipatory political violence’. Maybe Roden would as well – and perhaps our disagreement simply rests on his suspicion that I would not have exhausted every other means of persuasion before reaching for my revolver. But as someone who has always thought of himself as playing a long game, I understand patience.

    Finally, I’m still surprised people can’t see that my original piece is ironic AND making a serious point about the transhumanist mindset. Like the old imperialists, transhumanists are firmly convinced of the rationality of their own position and equally baffled by others’ resistance to it. My point, once again, is that the very theory of rationality that all of us appear to be defending (the Kantian one) does keep the door open to violence as the ultimate means of resolution.

  6. “To be sure, this is something within the capacity of any ‘human’ to be”
    hmm maybe any future AI-driven cyborg could achieve this, but any cognitively biased human being in our age, not a chance; and since the machines to date haven’t overcome their dependencies on our own buggy bodies/systems, the odds are very long indeed, especially as we are wrecking the biosphere and our built infrastructures at an ever-escalating pace…

  7. Fuller’s justification above for political violence, on the grounds that it is a way to force others to assent to reason, I find both strange and historically suspect.

    Wasn’t the whole struggle over modern rights the effort to establish space for a plurality of values where adults were free to choose for themselves? And wasn’t modernity characterized by the move away from “guardian” classes based on arbitrary qualities or beliefs?

    Adoption of a coercive policy when it comes to transhumanism (even if it was politically feasible, which I see no evidence that it is) would return us to the same tragic mistake we’ve been repeating since the Roman Emperor Theodosius destroyed the pagan temples. More than one form of truth can exist side by side in the world as long as they give each other space, a lesson we should have learned from the Wars of Religion, the republican imperialism of the French which led to the stillbirth of democracy and nationalist fanaticism in Europe, the age of Western imperialism, the ideological wars and totalizing brutalities of the 20th century, Al Qaeda, Bush’s War on Terror, and now ISIS.

    I think Fuller is getting his ideas on transhumanism and its relationship to the coercive power of the state via a false analogy with industrial modernization under the Communist totalitarian regimes. But I am not sure, even if one could argue that Stalin’s USSR or Mao’s China successfully modernized those countries in a way that was better than a more organic form of development, that the analogy would apply to our situation. Industrial modernization was a real thing, not a suite of technologies as we have today. Its forward motion was often prevented by entrenched classes that clung to the old ways because they were the source of their power.

    Can one really say that elites today are not empowered by technology and are trying to actively hold back modernization? Or that we even know towards what type of society and technologies we are ultimately moving?

  8. Thanks for your response, Steve. I too have worries about bridging the relationship between normative and factual discourse that I’ve pursued elsewhere when considering claims about the space of possible minds or agents; so there’s an implicit irony in using these considerations in my response. My “transhumanist critique” is explicitly intended to draw on assumptions that I take most transhumanists to be committed to, merely insofar as they are humanists.

    Whether or not intended ironically, your piece sets out an argument based on the analogy between Biocons and zombies that has the content it has and the conclusions it has. Given the humanist background it is hard to see how it can be defended in its current form. Your point about the sub-optimal rationality of actual agents is well made, but you’d need to show that our sub-optimal behaviour merits a relaxation of some kind in our normative commitments. In this case the revision would be so great as to imply a wholesale rejection of the universalism underwriting the politics of recognition. Maybe there are some transhumanists who are willing to go this far, but I’m not sure that they can really claim to be humanists any longer, raising questions about their commitment to transhumanism (as opposed to some more reactionary form of modernist politics).

    Abstract principles of rationality underdetermine what is good in the way of belief. So in the case of your imperialist, it is quite consistent to hold both that he and the anti-modernist are equally rational and principled. Isn’t this just a case of Rawls’ burdens of reason – the possibility of principled disagreements between reasonable persons?

    It doesn’t follow that there won’t be practical incompatibilities between their positions, but I’m not sure that the argument has been made here. As Rick points out, coercive modernisation seems not to have worked all that well in the past. Well, maybe there’s a detailed case to be made here. I’ve also argued for limits on the capacity of public reason to arbitrate transitions to posthumanity, but it isn’t obvious to me that this is a case of that kind.

  9. Awesome! It’s the debate from my “Crash Space” story come to life.

    “Finally, I’m still surprised people can’t see that my original piece is ironic AND making a serious point about the transhumanist mindset. Like the old imperialists, transhumanists are firmly convinced of the rationality of their own position and equally baffled by others’ resistance to it. My point, once again, is that the very theory of rationality that all of us appear to be defending (the Kantian one) does keep the door open to violence as the ultimate means of resolution.”

    I’m with Steve on this one. The point of the piece isn’t to consider how transhumanists should reason, the point is to consider the way they will reason. So, David, when you argue,

    “Given the humanist background it is hard to see how it can be defended in its current form. Your point about the sub-optimal rationality of actual agents is well made, but you’d need to show that our sub-optimal behaviour merits a relaxation of some kind in our normative commitments. In this case the revision would be so great as to imply a wholesale rejection of the universalism underwriting the politics of recognition. Maybe there are some transhumanists who are willing to go this far, but I’m not sure that they can really claim to be humanists any longer, raising questions about their commitment to transhumanism (as opposed to some more reactionary form of modernist politics).”

    You’re assuming that things like definitional consensus and inferential consistency are anything more than *inputs* that may or may not constrain the behaviour of the disputants. The same way humanists once found stuffing primates a moral credit, transhumans need only convince themselves that baseline humans are primates to carry out whatever atrocity they wish while keeping their humanistic scruples intact. Inferences are far more apt to cover for the optics of self-interest than otherwise, particularly when it comes to moral argumentation. This is an empirical fact.

    1. I was hoping you’d weigh in Scott 🙂 Yes, I think we can adopt this radically anti-normativist route. And maybe Steve really is disposed to do this – irony notwithstanding. But then, as I’ve argued, we must relinquish the humanist kernel of transhumanism. What remains then is *transhumanism* as a kind of nomadic war machine – which calls for some refined political physics, I think. Here, the political project of H+ recedes in importance compared with the distributed tendencies of vast technical systems. I’m not averse to shifting the discussion this way, but then we’re not in Kansas any more…

  10. The creation of “nomadic war machines” and “the distributed tendencies of vast technical systems” seems spot on as a take on what I think Steve Fuller is ultimately up to. He said it best himself in Humanity 2.0 (which, come to think of it, is a horrible white-bread title for a very radical book):

    “The history of eugenics is relevant to the project of human enhancement because it establishes the point-of-view from which one is to regard human-beings: namely, not as ends in themselves but as a means for the production of benefits… (142)”

    “Once the modes of legitimate succession started to be forged along artificial rather than natural lines with the advent of the corporation… the path to the noosphere had been set. (205)”

    “… with nature-inspired technologies we might think more imaginatively (aka divinely) about the terms on which ‘the greatest good’ can be secured for ‘the greatest number’, especially how parts of individuals might be subsumed under this rubric.” (228)

    If Steve Fuller were starting a political party, its slogan should be “We take the human out of Trans-humanism”. I think Pink Floyd has the copyright on “Welcome to the machine.”

  11. That seems like candid, sincere speech to me, Rick. Though this leaves premise 1 in the zombie argument somewhat bereft – y’know, the bit about folks being *wrong* concerning the benefits of indefinite life extension. Why favour this ethics rather than the pluralist one I sketched out? In ironical moments I’ve suggested that PHL be viewed as an *argument for human extinction*. It’s more complicated than that *obviously*, but if we take the scare quotes off there is an argument.

  12. I believe the possibility for some sort of rupture with the human becomes increasingly more likely with the passage of time.

    I think many trans-humanists think this rupture will take the form of something like indefinite life extension and uploading, but I think things are likely to be much weirder, and even should we achieve traditional trans-humanist goals, those stranger things are likely to prove much more important because they’ll revolve around whole new ways in which consciousness and sentience can be made manifest.

    The question I think is what actions do we take in light of this potential rupture? Here I part ways with Scott in his suggestion that Fuller isn’t ultimately making a normative argument. Fuller’s views aren’t predictions; rather, they are political tracts, for his fear is that the kinds of rupture I am talking about are not happening quickly enough or proving radical enough as we currently see them unfold. He would like us to seize the reins and accelerate their pace whatever the moral risks and costs.

    My hope is that we will find some way to preserve what most of us find beautiful and noble about being human across this rupture; my worry is that Fuller’s approach instead carries over those features I would prefer we leave stranded on the “merely” human side.

    In regards to human extinction, I see no reason why humans in something like their current form couldn’t exist (and for quite some time) alongside whatever new forms of sentience make their appearance. I was hoping we had left this kind of monism behind us, but it keeps coming back.

  13. I haven’t read Humanity 2.0 but I certainly damn well will now. I love the ‘nomadic war machine’ analogy, and I think there’s good reason for seeing it as a paradigmatic metaphor. I’m also inclined (given yours and Rick’s comments) to reverse my position, er, somewhat…

    Could it be that the inconsistency isn’t so much between humanism and transhumanism (as you frame it, David) as between Steve’s own commitments? If we look at what Steve’s doing as exploring the ‘crash space’ that follows from human augmentation, then there’s a sense in which he’s directly undermining his own techno-optimism.

    I hesitate to frame the issue in terms of the overarching commitments conjoining humanism and transhumanism across a narrative, or historical axis, because 1) they are so overdetermined to begin with; and 2) because there’s no fact of the matter regarding the ‘fidelity’ at issue. We should expect transhuman fascists to have their commitments VERY well rationalized, to have numerous ways of showing how their commitment to humanism surpasses that of mere, puny humans (like us).

    In other words, it could just as easily be argued that humanists are the inconsistent ones. (And this is just to say it’s nomadic war machines all the way down.)

    I actually sent my ‘Augmentation Paradox’ to him the other day, angling to suggest that he was grappling with a crash space issue. The formulation runs:

    The ‘improvement’ of any ancestral cognitive capacity amounts to the degradation of those ancestral cognitive capacities that depend on the ancestral form of the ancestral capacity ‘improved.’

    Have either of you guys come across anything like this anywhere? Improving human capacities seems easy enough in principle, but only so long as you overlook the way their heuristic nature differentially entangles those capacities with various ecologies–social ecologies in particular. Steve’s zombie argument provides a great way to anchor thinking through the kinds of dysfunctions we might expect.

  14. First, you all owe me a debt of gratitude for providing you with an opportunity to air your worst fears about the future! I have responded to Scott Bakker’s augmentation paradox privately, since I think it raises a valid point about transhumanist thinking – though whether it eventuates in fascism is another matter.

    One thing I really don’t get is why you guys would prefer to call yourselves ‘posthumanists’ rather than ‘transhumanists’, yet at the same time you also think you’re somehow defending the ‘human’ in a way that I or other transhumanists do not. For example, what David calls ‘coercive modernization’ is, like it or not, just a pejorative name for humanism. Moreover, it has worked to such an extent that there really is no competing view of the human condition that can claim similar universality. From this standpoint, capitalists and socialists are only splitting hairs. Even the Abrahamic religions have largely accommodated themselves to — if not outright contributed to — this hegemony. When we talk about our geological epoch as the ‘anthropocene’, we are referring to coercive modernization. Moreover, it is coercive modernization that continues to make us feel guilty when people in Africa live only half as long as Europeans (why don’t we just tolerate the spread in longevities as ‘pluralism’ in action?). And it also makes us want to integrate rather than segregate humanity (the latter sometimes done in the name of ‘pluralism’).

    It’s times like this that I see the force of Zoltan Istvan running for US president as a transhumanist. He may be a bit simplistic for this crowd but he clearly knows how to tie humanism to transhumanism in ways that actual human beings can relate to. In contrast, I don’t believe that ‘posthumanism’ of the sort upheld here has any coherent political identity whatsoever. (Here’s an exercise: You might ask what a Posthumanist Zoltan would use as his/her/its tagline to attract voters.) All you guys seem to agree on is that the future is likely to be ‘weird’ — and ‘weird’ is better than what we’ve got now and the transhumanists have on offer. I don’t see why you don’t just declare yourselves ‘anti-humanist’, perhaps in the name of some sort of pan-vitalism, which many radical environmentalists espouse. Why do you feel the need to defend the ‘human’ when you are so quick to disown the trajectory of human history?

    1. “First, you all owe me a debt of gratitude for providing you with an opportunity to air your worst fears about the future! I have responded to Scott Bakker’s augmentation paradox privately, since I think it raises a valid point about transhumanist thinking – though whether it eventuates in fascism is another matter.”

      This has been an exceptionally instructive discourse, for sure. But, withholding the enemyindustry chutzpah rosette for now, I’m not sure anyone can steal a march on Scott when it comes to grimdark. It’s his profession, after all! If his fictive and speculative discussion of the semantic apocalypse in Neuropath and elsewhere is well founded, then we don’t just face the worst, we face the end of the possibility of the worst, the end of the end (sorry if it’s too early for late Derridianese). I’ve tried to flesh some related ideas out independently in PHL and in briefer stuff like this – though my position is more affirmative, I think.

      Why don’t I call myself a transhumanist? As Rick says, my philosophical interests are broader, and I’m of the opinion that we need to get the metaphysics and epistemology of post/transhumanism on track, then see how this constrains our politics. Actually existing transhumanism is far too uncritical of its metaphysical foundations to provide a workable political trajectory. The Speculative Posthumanist position I set out in PHL and here is also critical of soi-disant posthumanisms that reject humanism on the grounds that we’re all cyborgs (failure of local supervenience, etc.). Its main target is transcendental humanism: the idea that anthropological invariants constrain posthuman design space a priori. So I’m willing to allow that something like a politics of recognition is applicable in our neck of possibility space, while being leery about whether this provides a workable normative framework for the technological future. With that in mind, I’m not sure that you’ve explained why you’re a transhumanist! You seem to reserve the term “humanism” for a process of technological modernisation whose long-term dynamics are incalculable.

      ” All you guys seem to agree on is that the future is likely to be ‘weird’ — and ‘weird’ is better than what we’ve got now and the transhumanists have on offer. I don’t see why you don’t just declare yourselves ‘anti-humanist’, perhaps in the name of some sort of pan-vitalism, which many radical environmentalists espouse. Why do you feel the need to defend the ‘human’ when you are so quick to disown the trajectory of human history?”

      I hope my position is somewhat more nuanced than “weirder is better”! The claims regarding weirdness are epistemological not normative. I don’t see any good reason to embrace vitalism (or panpsychism) either. I’ve argued for a kind of hyper-modernism on the basis of a philosophy of technology developed in Ch7 of PHL. This position coincides with Transhumanism in some respects but it’s framed in terms of a psychology-free conception of autonomy rather than Kantian moral autonomy, so it leaves the nature of what exercises autonomy and how somewhat more open. Here’s a brief summary from a recent paper:

      …. I argue, to the contrary, that the systemic complexity of modern technique precludes binding technologies to norms. Modern self-augmenting technical systems are so complex as to be both out of control and characterised by massive functional indeterminacy – rendering them independent of any rules of use.
      As the world is re-made by this vast planetary substance any agent located in the system needs to maximize its own ability to acquire new ends and purposes or bet (against the odds) on stable environments or ontological quiescence. Any technology liable to increase our ability to accrue new values and couplings in anomalous environments, then, is of local ecological value (Roden 2014: Ch. 7). This is not because such technologies make us better or happier, but because the only viable response to this deracinative modernity is more of the same.

  15. @Steve:

    “One thing I really don’t get is why you guys would prefer to call yourselves ‘posthumanists’ rather than ‘transhumanists’, yet at the same time you also think you’re somehow defending the ‘human’ in a way that I or other transhumanists do not. For example what David calls ‘coercive modernization’ is, like it or not, just a pejorative name for humanism.”

    I can’t claim to speak for David, but I’ll take a crack at that. For me at least, posthumanism connotes a broader area of concern than transhumanism, one that especially includes other and new forms of sentience. I personally don’t see any contradiction in wanting to preserve characteristics many of us deeply value in human beings in any new forms of sentience that might emerge from our efforts, or even in trying to bend things so that these values are safe in any future technological/social order.

    “Moreover, it is coercive modernization that continues to make us feel guilty when people in Africa live only half as long as Europeans (why don’t we just tolerate the spread in longevities as ‘pluralism’ in action?). And it also makes us want to integrate rather than segregate humanity (the latter sometimes done in the name of ‘pluralism’).”

    It certainly isn’t coercive modernization that makes me feel guilty about this and other forms of inequality; it’s the structural asymmetries in the global system that deny individuals and collectivities the power to choose, and instead have reality imposed on them by others. Asymmetries that cause very real human suffering.

    As for modernization vs pluralism: I live in Amish country, surrounded by people who live what amounts to the life of 150 years ago. Good for those with the courage to leave these communities and good for those with the courage to stay. As long as individuals are free to leave small societies like this (and they often do), then having this kind of chronological diversity is a very good thing. Should the whole thing ever come crashing down, such diversity will make society more resilient than otherwise.

    As far as a political agenda for Zoltan goes: approaching the question in transhumanist terms, I think what he needs to stress is liberty over one’s own body, in order to open up space for self-directed forms of alteration and augmentation. Also, given the aging population, I don’t think securing more funding for longevity research should be a difficult case to make. But I’m all out of bumper stickers.

  16. @Scott:

    The ‘improvement’ of any ancestral cognitive capacity amounts to the degradation of those ancestral cognitive capacities that depend on the ancestral form of the ancestral capacity ‘improved.’

    Have either of you guys come across anything like this anywhere? Improving human capacities seems easy enough in principle, but only so long as you overlook the way their heuristic nature differentially entangles those capacities with various ecologies–social ecologies in particular. Steve’s zombie argument provides a great way to anchor thinking through the kinds of dysfunctions we might expect.

    I am probably way oversimplifying your point, but if I understand you, I find the augmentation paradox you propose to be pretty widespread on the level of individual cognitive capacity, though I’ve never read any hard studies. Very few of us walk around with a memory palace in our heads now that we can just look the answer up on our phones. I am sure any educated person 100 years ago could kick my ass when it comes to mental calculation etc. Though I think there’s a danger that we would go too far with examples like these and project a future filled with Wells’s Morlocks. We’d also be missing the fact that it seems we’ve gotten much better at fitting people into narrowly defined niches. For example, any semi-professional chess player today would probably cream the grandmasters of the 19th century.

    Again, if I understand your meaning, I think the augmentation paradox is more important when it comes to social ecology. In a certain light it almost appears that transformation along the lines of technological requirements (modernization) demands the destruction of old forms – the creation of what I think you mean by “crash space” – and once you start looking for it you find it everywhere: Schumpeter’s neo-liberal “creative destruction”, “disaster capitalism”, National Socialism’s goal of being a Leviathan that lived off the corpses of other states, the kinds of destruction of traditional social structures, beliefs, and classes pursued by revolutionary states in the French Revolution and under Stalin and Mao.

    Do you think, Scott, that we will ever free ourselves from the crash space, or is it even possible to establish islands of safety and stability while the ground beneath us keeps moving?

  17. This has been a fantastic discussion: I especially would like to thank Steve for helping me think through the form and consequences of the Augmentation Paradox, and the kinds of crash spaces that might arise as a result.

    Rick: “Do you think, Scott, that we will ever free ourselves from the crash space, or is it even possible to establish islands of safety and stability while the ground beneath us keeps moving?”

    As you note, social ecology is the target, the domain where the paradox has the most bite. The problem it poses for transhumanism seems to be about as radical as can be. And the argument falling out of it is dreadfully simple:

    1) Heuristic cognition depends on stable, taken-for-granted backgrounds.
    2) Social cognition is heuristic cognition.
    /3) Social cognition depends on stable, taken-for-granted backgrounds.
    4) Transhumanism entails the transformation of stable, taken-for-granted backgrounds.
    /5) Transhumanism entails the collapse of social cognition.
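
    For what it’s worth, the skeleton of the argument can be checked mechanically. Here is a minimal propositional sketch in Lean – the predicate names are my own placeholders for the informal claims, and the `bridge` premise spells out the tacit step that gets from 3) and 4) to 5):

```lean
-- A propositional sketch of the five-step argument above.
-- All names are placeholders; `bridge` is an added premise: cognition
-- that needs a stable background collapses once that background goes.
example
    (Heuristic Social NeedsStable Stable Transhumanism Collapse : Prop)
    (p1 : Heuristic → NeedsStable)       -- 1) heuristics need stable backgrounds
    (p2 : Social → Heuristic)            -- 2) social cognition is heuristic
    (p4 : Transhumanism → ¬Stable)       -- 4) transhumanism destabilizes backgrounds
    (bridge : NeedsStable → ¬Stable → Collapse) :
    Transhumanism → Social → Collapse := -- 5) social cognition collapses
  fun ht hs => bridge (p1 (p2 hs)) (p4 ht)
```

    Step 3) appears inside the proof term as `p1 (p2 hs)`. The formalization also makes visible where a critic could push back: on `bridge`, or on the universal scope of `p4`.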

    What makes this so devilish, I think, is that it’s so bloody easy to understand heuristics in *mechanical terms,* and thus incredibly difficult to take flight in underdetermined abstractions. Reliable heuristic cognition turns on cues possessing reliable differential relations to the systems cognized–end of story. Once what David calls ‘hyperplasticity’ is upon us, then the differential relations between cues and systems become endlessly variable, and the cues become useless, and *heuristic cognition becomes impossible.*
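
    The point about cues can be made concrete with a toy simulation (illustrative only; all names are mine): a fast-and-frugal heuristic that trusts a cue performs well while the cue’s differential relation to the cognized system holds, and falls to chance once that relation becomes arbitrary.

```python
import random

def heuristic_guess(cue):
    # Fast-and-frugal rule: take the cue at face value.
    return cue

def accuracy(cue_reliability, trials=10_000, seed=0):
    """Fraction of correct guesses when the cue tracks the hidden
    state with probability `cue_reliability`."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        state = rng.randint(0, 1)
        # The cue mirrors the state only with the given reliability.
        cue = state if rng.random() < cue_reliability else 1 - state
        correct += heuristic_guess(cue) == state
    return correct / trials

# Stable background: the cue-state relation is dependable, so the
# heuristic performs near its ceiling (~0.95 here).
print(accuracy(0.95))
# 'Hyperplastic' background: the relation is arbitrary, so the same
# heuristic falls to chance (~0.5).
print(accuracy(0.5))
```

    Nothing in the heuristic changes between the two runs; only the background does. That is the sense in which cues “become useless” without stable differential relations.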

    Since intentional cognition is (indisputably, I think) heuristic cognition, this means intentional cognition will become impossible. Transhumanism basically means the death of meaning. And so to answer your question, Rick, I think crash space just is our future so long as we require that meaning play some cognitive role. Causal cognition is the only way forward I can see.

    This is another reason why I love Steve’s zombie metaphor, although I think it would be more ‘accurate’ to see *both* parties as zombies: the transhumanists, who embrace their biomechanical nature, and the bioconservatives, who are deluded by the limits of their biomechanical nature (their need to rely on heuristics) into thinking they are more than zombies, or ‘human.’

    For years I’ve been searching for a ‘master argument’ for the Semantic Apocalypse, and I think this might be it.

  18. Thing is, I fully expect fringe-human/post-human/trans-human types to be persecuted minorities before they are persecuting majority power holders. They’ll “seem different” before they “seem powerful”; either way they’ll make good bogeymen, and thus are likely to be cast into that role. To escape, they are going to need to try to be targets of empathy for the majority power holders (probably both center-humans and elite-human power holders). Every minority goes through this tension of how much to normalize to the over-culture and how much of its own distinctiveness to retain, but attempts to emphasize why center-humans should sympathize with their plight are surely going to be part of the mix.

    Now maybe eventually the script will flip and the fringe-humans/post-humans will wind up with so much power that they’ll become an oppressing majority in turn. But when that happens the tone won’t be “these center-humans are curs” or “let’s get them before they get us,” but rather “these are the people who were oppressing us all those years – remember (incident 1, incident 2, etc.).” The zombie metaphor, or the stuffing primates metaphor, or the Borg cube encountering humans, or the European explorers encountering a new island of natives – those work when one side is long accustomed to the arrogance of power, not when someone has felt downtrodden and then finally gotten the upper hand. That looks more like the French Revolution, or the Hutus and Tutsis, or First Emperor Qin and the philosophers…

    As for the augmentation paradox, I’m skeptical (big surprise). Humans do a LOT of non-monotonic reasoning. We use heuristic cognition that relies on assumed stable, taken for granted backgrounds, but we ALSO reason on the assumption that backgrounds are never stable, and the taken-for-granted needs to be foregrounded and re-examined periodically. We’ve always reasoned in shifting worlds, and shifting social worlds. This has always proved challenging and provoked various kinds of anxiety and cognitive and social dysfunction – we’re not GOOD at coping with shifting backgrounds, but we can do it. We need more redundancies and error-tolerance in our reasoning processes; it’s slower, less efficient, more divisive, but we can do it. We have all sorts of inferential back-up systems and fallbacks.

    Now maybe the rate of background change is getting worse and outpacing our ability to cope, and we’re back to all the 70s futurologists’ stuff about future shock and frame shift and whatnot. When elaborate systems of meaning collapse, we fall back on more primitive but robust forms of meaning. When the snarkosphere collapses we can fall back on mythology as (less detailed but more robust and error-tolerant) social cognition. Maybe we can still be pushed to a cognitive apocalypse where we can’t make enough assumptions about anything to get us through even using our most error-tolerant systems, but we still have a long way to go here, I think. I expect trans-humans to have mythologies and stereotypes and ideologies and workable fictions of their own. Maybe the post-humans will get past that and just have a variety of different error-correcting code strategies for different environments, but again we have a while.

    “but we ALSO reason on the assumption that backgrounds are never stable, and the taken-for-granted needs to be foregrounded and re-examined periodically.” yeah, so ideally this might be so, but in practice not so much: it is very hard to change our individual habits (give it a try) and much harder to bring about and sustain institutional changes (never mind trying to add an extra layer/level of continuing reflexivity). also, our reaches (extended thru tech) have become immensely more powerful (in effect and range) and the feedback loops more intense, complex, and unpredictable, such that past adaptations aren’t much of a measure for our current and future trials of affordances and resistances:
    http://syntheticzero.net/2015/09/18/working-well-with-wickedness-john-law-on-wicked-problems/

  20. Morton: “As for the augmentation paradox, I’m skeptical (big surprise). Humans do a LOT of non-monotonic reasoning. We use heuristic cognition that relies on assumed stable, taken for granted backgrounds, but we ALSO reason on the assumption that backgrounds are never stable, and the taken-for-granted needs to be foregrounded and re-examined periodically. We’ve always reasoned in shifting worlds, and shifting social worlds.”

    This is Steve’s response as well, what I call the ‘optimistic induction.’ We’ve adapted in the past, so it stands to reason we’ll adapt in the future. The problem is that hyperplasticity represents an entirely different problem–so much so, in fact, you could argue that it’s actually a shift in evolutionary, as opposed to ecological, paradigms. We have never reasoned in worlds where *our biology* is what’s doing the shifting. So the optimistic induction does not hold.

    Meanwhile the problem stands. The question is whether our *existing forms* of heuristic cognition could reliably function in a transhumanistic future, not whether we’ll find some (inevitably ‘weird’) way to cope. We have all kinds of optimizing approaches we could use in lieu of intentional cognition. The *only* way to argue that intentional cognition will remain reliable, as far as I can tell, is to argue for certain minimal kinds of background stability, since heuristic cognition is quite simply impossible without it. There’s just no consistent way that a transhumanist can argue this, given their existing commitments.

  21. What’s funny about this whole discussion is not the distinction between the terms trans- and post-human… but that it reinstalls the old mythos of theological immortality, as extension of life, into a very concrete ideology. If anything, evolution shows that 99% of all species that ever existed are now extinct, and even now we’re in the midst of a Sixth Extinction. If bioinformatic intervention into our own genome is part of the new influx of ideas, it seems whatever we are doing is outside norms and normative reasoning altogether. Once the cat is out of the bag, it’s only a matter of time before it becomes an engineering problem rather than a philosophical or religious issue.

    Yet, I tend toward the speculative posthumanist thesis of disconnection, in which our wide descendants will probably evolve through various bifurcations of machinic and bioengineered forms. Selection and adaptation, whether artificial or natural, is still part of the conflictual progression of a combative struggle through time. Who will win in the end? My bet is on the machinic phylum rather than the organic. Why? What stands a better chance of extending its systems beyond the planetary home base? If you think on that, I think you’ll know the answer.

    1. Hi Steve, yes, I suppose the focus on life extension here seems anthropocentric. Though, as you imply, if we take an immanent view of transhumanism as “War Machine”, we need to think about the dominant trajectories of technical systems (of which we form a part) rather than reading fragile ethical agendas onto iffy and unpredictable processes. These may, as you say, favour some form of machinic intelligence, though it’s probably too early to tell in either case. For example, machinic intelligence might not be the right kind of intelligence to exercise functional autonomy – or only when embodied in something like a biological system.
