From Posthuman Life, pp. 178-9
‘[differences in phenomenology] can be significant obstructions to our understanding without being impassable barriers. It follows that the situation following phenomenological speciation might not preclude interpretation where it is technically possible to adapt human phenomenology to cope with posthuman phenomenology (See sections 5.5, 5.7). In extreme cases, no available technology may suffice to overcome the phenomenological obstruction confronting human interpreters so long as they remain empathically attuned to the human life-world. It may be that all idioms are interpretable in principle, but it is always an open question whether they are interpretable in practice (for some variable “we”). But these phenomenological fixes might necessitate derangements that modify or extinguish the values brought to the encounter by interpreters.
A further thought experiment will concretize the ethical problems this would present within a technical system that has become disconnection-potent. Suppose your world is poised for disconnection, following which a proportion of formerly human individuals will acquire some disruptive cognitive capacity caused by a new Intelligence Amplification technology. The IA technology has been tested on rats and consenting human subjects in deregulated corporate interzones. There is, consequently, some fragmentary information about its functioning and its effects on human and nonhuman cognition and subjectivity. Perhaps it assimilates users into a Churchlandish centipede or eliminates their folk-understanding of human psychology by patching them into non-symbolic workspaces (See Introduction; section 4.1).
The IA technique turns out to be incompatible with human social relations for reasons we can fill in, giving rise to a phenomenological speciation event. It might be because its users are cognitively far more adept than unaugmented humans, cease to think in propositions, or have a phenomenology that makes no sense to us.
However it works its derangements, the technology irreversibly disconnects its users from the human life-world, giving rise to deep social divisions between users and non-users. As a result, the users devise their own social arrangements and proprietary infrastructure. Increasingly autonomous posthuman enclaves flower across the globe.
As the IA technology disseminates over the planetary network (section 7.4) you would have no choice about whether to deal with the personal and social consequences of disconnection. You would have to decide whether to become posthuman or to remain human. Suppose some of your friends have adopted the IA. Their wayward and cryptic emails imply they are exploring “caverns measureless to man”; but they seem indifferent to Italian food, sex with other humans, REM, or landscape painting. You cannot yet understand their idiosyncratic joys, but risk losing much that made existence meaningful to the human you will have been if you adopt the IA.
The IA example illustrates Meacham’s claim that the phenomenological unity of the human species has an ethical pull. For although dialogue with posthumans is not precluded, it might require partially abandoning the human phenomenological purview for an altogether different one. This process might be incremental and reversible, as in Ramberg’s example of radical interpretation, or it might involve a transformative and irreversible change in embodiment or phenomenology, as in the IA thought experiment, where any benefits of posthumanity would only be apparent on leaving humanity.
The relative abruptness and irreversibility of its effects would presumably discourage all but the least risk-averse humans from adopting the IA technology and leaving their phenomenological moorings.
Most importantly, for our purposes, there would be no milieu in which shared moral norms could be easily discussed or adjudicated by members of both groups. Even if radical interpretation of posthumans is always possible in principle, neither group can enter into a unitary moral community without extirpating the other. Transitional models – like the vestigially propositional cognizers discussed in section 5.5 – might facilitate communication between the two groups, but, given the derangements involved, interpretation could remain patchy and unsatisfactory. Both groups might be composed of beings worthy of moral consideration. But this might never be expressible in a shared moral experience or democratic dialogue (Habermas 2005: 40).
It could be objected that modern political thought has eschewed grounding in metaphysical ideas such as human nature. Rousseau and successors like Rawls argued that there can be no metaphysical justification for a given political order outside that order (for example, in the Will of God). A political structure is legitimate if its governing principles could enlist the consent of its members.
If this “postmetaphysical” view of legitimacy is applicable to human societies, perhaps it is applicable in some hybrid human-posthuman dispensation in which not all the prospective members of the polity are human. There are many different models of justification which fit with this approach. However, the approach is inherently democratic. As Michael Walzer puts it, a political structure must be acceptable to those who live under it because of “who they are”, not because of what they know or what they can do (Walzer 2003: 365).
However, purging political language of metaphysical elements is tenable only where the nature of the participants is not at odds with the communicative demands of democracy and shared governance. This implies, as Meacham writes: “[That] species recognition and disconnection are relevant to the understanding of intersubjectivity in general.” Establishing and arbitrating intersubjective norms require a community of beings sufficiently alike that dialogue among them is not significantly burdensome or risky. Thus a disconnection could undermine the intersubjective unity of the human community if the burdens of interpretation or radical interpretation became significant.
This has serious consequences for a democratic conception of Accounting. We can envisage a select band of intrepid Posthuman Accountants who act like technological food tasters. Their role would be to test the effects of disconnection-potent technologies and summarise the results for the public, allowing them to make informed decisions about potentially disruptive technologies.
Given the principled difficulties involved in leak-proof testing and the limitations on control implied by New Substantivism, the democratic model of Accounting sketched here is self-undermining. Any posthuman experimentation that could contribute to an understanding of emergent behaviours or modes of life post-disconnection would also increase the disconnection-potential of the technological system as the associated technologies iterated across its communication networks (sections 5.6 and 7.4).
Democratic Accounting would override the democratic process it is intended to inform. Even allowing for interpretability in principle, the results might fail the publicity test since only posthumans or near-posthumans might be in positions to understand them. Thus the composition of the community that deliberates on the posthuman would be put in doubt by the very attempt to deliberate upon it. There can be posthuman Accounting only if, pace Walzer, we do not know who can participate in it.’
Habermas, J. 2005. The Future of Human Nature, Rehg, W., Pensky, M. & Beister, H. (trans). London: Polity.
Meacham, D., 2014. Empathy and alteration: The ethical relevance of a phenomenological species concept. Journal of Medicine and Philosophy, 39(5), pp.543-564.
Ramberg, B. T. 1989. Donald Davidson’s Philosophy of Language: An Introduction. Basil Blackwell.
Walzer, M. 2003. “Philosophy and Democracy.” Debates in Contemporary Political Philosophy: 361.
Warren, M. A. 1973. On the moral and legal status of abortion. The Monist, 57(1), 43-61.