On the Very Idea of a Super-Swarm

The best we can do to understand a post-singularity dispensation, Vernor Vinge argues, is to draw parallels with the emergence of an earlier transformative intelligence: “[What] happens a month or two (or a day or two) after that? I have only analogies to point to: The rise of humankind” (Vinge 1993).

Vinge’s analogy implies that we could no more expect to understand a post-singularity mind than a rat or non-human primate – lacking refined propositional attitudes – could understand justice, number theory, and public transportation.

While this does not provide a rigorous theory of the human-posthuman difference (indeed, I would argue that no such theory is possible a priori), it captures a central ethical worry about the implications of radical enhancement.

This is that prospective technically modified successors to humans may not be recognizable as potential members of a communal ‘we’, or disposed to recognize humans as ethically significant others. It is not the prospect of technical modification to humanity per se (wide or narrow) that concerns us, but changes engendering beings so ‘alien’ that there would no longer be a basis for affinity, citizenship or shared concern with humans.

One reason why this might occur is suggested in Vinge’s worry that a posthuman reality could be “Too different to fit into the classical frame of good and evil” (Vinge 1993). Otherwise put, a post-singularity dispensation might include alien minds, phenomenologies and values so different from those supervening either on narrow human biology or on what I refer to as our ‘wide human’ systems (e.g. enculturation into propositionally structured languages) that human-posthuman communication, co-operation or co-evaluation would be impossible or pointless.

For example, public ethical frameworks in the secular democracies and beyond presuppose that candidates for our moral regard have similar phenomenologies, if only in sentient capacities for pain, fear, or enjoyment. However, most of these frameworks impose more demanding conditions. Liberals, for example, place great emphasis on the human capacity for moral autonomy, which allows us, in Rawls’s words, ‘to form, to revise, and rationally to pursue a conception of the good’ (Rawls 1980, 525).

While theories of autonomy vary hugely in their metaphysical commitments, most require that candidates for moral personhood be capable of reflecting upon their lives and projects and thereby on the values expressed in the actions, lives and projects of their fellow persons. Arguably, this capacity has cognitive, affective and phenomenological preconditions. Cognitively, it presupposes the capacity for higher-order representation (to represent one’s own or others’ beliefs, desires, etc.). Affectively, it presupposes the capacity for feelings, emotions, and affiliations that form a basis for evaluating a life. Phenomenologically, it presupposes that persons experience the world as (or as if they were) a persistent subject or ‘self’.

Without the cognitive preconditions, no rational evaluation of values or social life would be possible. Without the affective and phenomenological preconditions, these evaluations would lack point or salience. A cognitive system incapable of experiencing itself as a persistent subject might have a purely formal self-representation (like my current thought about a region on my big toe), but could not experience humiliation, resentment or satisfaction that its life is going well because these attitudes require a rich apperceptive experience of oneself as a persistent subject.

This claim does not, it should be noted, entail metaphysical commitment to a substantial or metaphysically real self, but only to a subjective phenomenology: the experience of being a self. It is possible and even likely, as Thomas Metzinger has argued, that our first-person phenomenology is a ‘functionally adequate but representationally unjustified fiction’ resulting from the fact that the neural processes that generate our sense of embodied and temporally situated selfhood are phenomenally (if not cognitively) inaccessible to the components of the system responsible for meta-representing its internal states (Metzinger 2004, 58, 279). If Metzinger is right, the self around whom my egocentric fears and ambitions revolve does not exist. As with Nietzsche, this implies that our experience of the self as a source of agency is likewise illusory, since it merely reflects our unawareness of the chains of causation leading to our decisions and actions (Nietzsche 1992, 218-219, cited in Sommers 2007).

So the issue here is not the metaphysical adequacy of public ethical frameworks like liberalism or virtue ethics but their applicability in a posthuman future. Allowing for arguable exceptions (such as Buddhism), they may all rest on a metaphysical error. However, the propensities for self-evaluation, for feeling ‘reactive attitudes’ towards the quality of others’ attitudes, for attributing responsibility and praise, and so on, all presuppose first-person phenomenology and, arguably, are necessary for human social forms. Thus the ‘user illusion’ of persistent selfhood may be functionally necessary for human life because it is necessary for any culturally mediated experience of moral personhood.

However, Vinge argues that a super-intelligent AI++ (to use Chalmers’s (2009) term for an intelligence far beyond human-level AI) might lack awareness of itself as a persistent “subject”.

Some philosophers might regard this prospect with scepticism. After all, if having subjectivity, or Dasein, etc., is a condition for general intelligence, a subjectless posthuman could not be regarded as generally intelligent. However, the validity of such objections would hinge a) on the scope of any purported deduction of the subjective conditions of experience or objective knowledge and b) on the legitimacy of transcendental methodology as opposed, say, to naturalistic accounts of subjectivity and cognition.

If, as writers such as Metzinger, Daniel Dennett or Michael Tye argue, we can naturalize subjectivity by analysing it in terms of the causal-functional role of representational states in actual brains, then it is legitimate to speculate on the scope for other role-fillers. Even if all intelligences need Dasein, it doesn’t follow that all modes of Being-in-the-world are equivalent or mutually comprehensible. Our Dasein, Metzinger emphasizes, comes in a spatio-temporal pocket (an embodied self and a living, dynamic present):

[The] experiential centeredness of our conscious model of reality has its mirror image in the centeredness of the behavioral space, which human beings and their biological ancestors had to control and navigate during their endless fight for survival. This functional constraint is so general and obvious that it is frequently ignored: in human beings, and in all conscious systems we currently know, sensory and motor systems are physically integrated within the body of a single organism. This singular “embodiment constraint” closely locates all our sensors and effectors in a very small region of physical space, simultaneously establishing dense causal coupling (see section 3.2.3). It is important to note how things could have been otherwise—for instance, if we were conscious interstellar gas clouds that developed phenomenal properties (Metzinger 2004, 161).

A post-human swarm intelligence composed of many mobile units might distribute its embodiment or presence to accommodate multiple processing threads in multiple presents. We might not be able to coherently imagine or describe this phenomenology, but our incapacity to imagine X is, as Dennett emphasizes, not an insight into the necessity of not-X (Dennett 1991, 401; Metzinger 2004, 213).

The inaccessibility of the posthuman and the posthuman impasse

If artificial intelligences or other potential entities of the kind grouped under the ‘posthuman’ rubric could have non-subjective phenomenologies, then there are prima facie grounds for arguing that they would be both hermeneutically and evaluatively inaccessible to contemporaneous humans or to modestly augmented transhumans – we might refer to both variants of humans using the neologism ‘MOSH’ (Mostly Original Substrate Human) that Nicholas Agar adopts from Ray Kurzweil (Agar 2010, 41-2).

The alienness and inaccessibility of such beings would not be due to weird body plans or, directly, to superhuman intelligence. There are numerous coherent SF speculations in which humans, intelligent extra-terrestrials, cyborgs and smart, loquacious AIs communicate, co-operate, manipulate one another, argue about value systems, fight wars, and engage in exotic sex. However, these democratic transhumanist utopias or galactic empires are predicated on narrow humans and narrow non-humans (whether ETs or droids) sharing the functional requirements for subjective phenomenology and moral personhood. The kind of beings that might result from Vinge’s transcendental event, however, could lack the phenomenological self-presentation that grounds human autonomy while having phenomenologies and metarepresentational capacities that would elude human comprehension.

As I have suggested elsewhere, this prospect represents a possible impasse for contemporary transhumanism, rooted as it is in public ethical frameworks grounded in conceptions of autonomy and personhood. How should transhumanists respond to the possibility that their policies might engender beings whose phenomenology and thought exceed both our hermeneutic and our evaluative grasp?

On the Very Idea of an Impasse: A Davidsonian objection

Donald Davidson’s objections to the intelligibility of radically incommensurate or alien conceptual schemes or languages might give us grounds to be suspicious of the very idea of radically alien intelligences. In ‘On the Very Idea of a Conceptual Scheme’, Davidson suggests that theories of incommensurability must construe conceptual schemes either in terms of a Kantian scheme/content dualism (a scheme ‘organizing’ experience or the world) or in terms of a relation of ‘fitting’ or ‘matching’ between language and world. Davidson claims that the Kantian trope presupposes that the thing organized is composite, affording comparison with our conceptual scheme after all (Davidson 2001a, 192). Since incommensurability implies the absence of such a common point of comparison, the propositional trope – fitting the facts or the totality of experience, or whatever – is all that is left. For Davidson, this just means that an acceptable conceptual scheme is one that is mostly true (Ibid. 194). So an alien conceptual scheme or language would, by these lights, be largely true but uninterpretable (Ibid.).

For Davidson’s interpretation-based semantics, this is equivalent to a language recalcitrant to radical interpretation. But the assumption that alien linguistic behaviour generates largely true ‘sentences’ is just the principle of charity that the radical interpreter must adopt when testing a theory of meaning for that language.

To re-state this in terms of the current problematic: if alien posthumans had minds, they would have a publicly accessible medium that tracks truths, allowing us to test a semantics for ‘alienese’.
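
The point can be made concrete with a toy sketch (mine, not Davidson’s own machinery: the alien ‘sentences’, the candidate manuals and the scoring function are all invented for illustration). It treats charity as a constraint on theory choice: among rival translation manuals, prefer the one that makes the most alien utterances come out true in the situations that prompted them.

```python
# A toy model (not Davidson's machinery): score rival translation manuals by
# how many observed alien utterances they render true - a crude rendering of
# the principle of charity as a constraint on theory choice.

# Observed alien utterances paired with the facts obtaining when they were made.
observations = [
    ("glork", {"raining": True}),
    ("glork", {"raining": True}),
    ("blee",  {"raining": False}),
    ("glork", {"raining": False}),  # an apparent error (or lie) on the alien's part
]

# Candidate manuals: each maps an alien 'sentence' to a truth condition,
# i.e. a function from a state of the world to a truth value.
manuals = {
    "manual_A": {"glork": lambda w: w["raining"],
                 "blee":  lambda w: not w["raining"]},
    "manual_B": {"glork": lambda w: not w["raining"],
                 "blee":  lambda w: w["raining"]},
}

def charity_score(manual):
    """Fraction of observed utterances that the manual renders true."""
    hits = sum(1 for sentence, world in observations if manual[sentence](world))
    return hits / len(observations)

best = max(manuals, key=lambda name: charity_score(manuals[name]))
print(best, charity_score(manuals[best]))  # manual_A 0.75
```

On this picture, charity is not a hypothesis about alien psychology but a methodological constraint on any semantic theory we could test at all.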

Davidson holds that knowledge of an empirical theory specifying the truth conditions of arbitrary sentences of a language would suffice for interpreting the utterances of its speakers (given knowledge that the theory in question was interpretative for it). If we allow this (ignoring, for now, the standard objections to the claim that a truth theory for L would, in effect, be a theory of meaning for it), then posthumans’ having minds at all would entail their interpretability in principle by beings with different kinds of minds.

So does Davidson’s hermeneutics of radical interpretation rescue transhumanism from aporia by deflating the idea of the radical alien?

I think not. First, we have to relinquish the idea that our interpretative knowledge of the radical alien must consist in some explicit formal device such as a Tarskian truth theory. The role of formal semantics in Davidson’s work is to explicate our informal comprehension of language. An interpretative theory can be implicit in an interpreter’s pre-reflective grasp of the inference relationships of a language and in her ability to match truth conditions with true utterances (Davidson 1990, 312).

Now suppose a human radical interpreter is required to interpret a really ‘weird’ posthuman such as an ultra-intelligent swarm. Davidson’s semantics provides grounds for believing that the swarm-mind would not be a cognitive thing-in-itself, inaccessible in principle to minds of a different stamp. However, it entails only that if we could learn to follow whatever passes for inference for the swarm and track the recondite facts that it affirms and denies, we would understand swarmese. Contingencies might still hinder attempts by any MOSHs in the area to understand the swarm medium of thought, even given interpretability in principle.

Even if we suspend the assumption that interpretative knowledge must consist in a formal theoretical model, it is not clear that we can suspend the constraint that it consists in beliefs about, or issues in sentences stating, the truth conditions of sentences or of sententially structured attitudes.

However, the public medium employed by a swarm could be non-propositional in nature and thus not straightforwardly expressible in sentential terms. For example, it might be a non-symbolic system lacking discrete expressions. Simulacra – as the computer scientist Bruce MacLennan refers to these continuous formal systems – would, by hypothesis, be richer and more nuanced than any discrete language (MacLennan 1995; Roden Forthcoming). Their semantics as well as their syntax would be continuous in nature. The formal syntax and semantics of a simulacrum can be represented symbolically in continuous mathematics, but an interpretation of a non-discrete representational system by a discrete one could be massively partial, since it would have to map discrete symbols onto points of a continuum. Thus, whereas a discrete system might distinguish the proposition P from its negation using the binary operator ‘Not’ via a semantic mapping onto one of two semantic values ({true, false}), a non-discrete equivalent could admit any number of shadings between P and its negation.
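
To see in miniature why such an interpretation would be lossy, consider the following sketch (my own toy contrast, not MacLennan’s formalism; the functions and the 0.5 threshold are illustrative only): a two-valued negation set against a continuous analogue whose values are points in [0, 1], where any discretizing interpretation collapses distinctions the continuous system can draw.

```python
# A toy contrast (not MacLennan's formalism) between a discrete, two-valued
# semantics and a continuous one whose 'truth values' are points in [0, 1].

def discrete_negation(p: bool) -> bool:
    """Classical negation: exactly two semantic values, True and False."""
    return not p

def continuous_negation(x: float) -> float:
    """One possible continuous analogue: map a 'shading' in [0, 1] to its
    complement. Infinitely many values lie strictly between 0 and 1."""
    return 1.0 - x

def discretize(x: float) -> bool:
    """A crude discrete 'interpretation' of the continuous system: every
    shading is forced onto one of two values, discarding information."""
    return x >= 0.5

# Three distinct continuous 'attitudes' toward the same content...
shadings = [0.51, 0.73, 0.99]

# ...become indistinguishable once mapped into the discrete scheme,
# although the continuous system treats them (and their negations) differently.
print([discretize(s) for s in shadings])                     # [True, True, True]
print([round(continuous_negation(s), 2) for s in shadings])  # [0.49, 0.27, 0.01]
```

Whether any of the collapsed shadings matter to the system being interpreted is a further question, taken up below.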

The effectiveness of any propositional interpretation of a simulacrum would hinge on the dynamical salience of these shadings within the cognitive dynamics of the system under interpretation. Most of the shadings between ‘Snow is white’ and ‘Snow is not white’ might be differences that make no difference for the swarm. On the other hand, the continuum could contain a rich dynamic structure whose cognitive implications could not be conveyed in discrete form at all.

We do not know whether sophisticated thought could function without using a syntax and semantics along the lines of our recursively structured languages and formal systems – at least as a component of the hybrid mental representations discussed by active externalists (see Clark 2006). However, my response to the Davidsonian objection makes a case for the conceivability of sophisticated cognitive systems surpassing Wide Human interpretative capacities – i.e. those mediated by public symbol systems. If our imaginary swarm intelligence were a system of this type, then swarm thinking could be as practically inaccessible to humans as human thinking is to cats or dogs, even if it were not inaccessible in principle to systems with the right computational resources.

These considerations support the speculative claim that posthuman lives might be interpretable in principle, but not by us. Moreover, even if this claim exaggerates the cognitive inaccessibility of posthumans, we have noted grounds for thinking that they could be so phenomenologically unlike us that public ethical frameworks based on personal autonomy, the good or virtue could not be applied to them.

References

Agar, N. (2010), Humanity’s End: Why We Should Reject Radical Enhancement (Cambridge, MA: MIT Press).

Chalmers, D. (2009), ‘The Singularity: A Philosophical Analysis’, http://consc.net/papers/singularity.pdf, accessed 4 July, 2010.

Clark, A. (2003), Natural-Born Cyborgs (Oxford: Oxford University Press).

Clark, A. (2006), ‘Material Symbols’, Philosophical Psychology 19(3), 291–307.

Clark, A. and Chalmers, D. (1998), ‘The Extended Mind’, Analysis 58(1), 7-19.

Davidson, D. (2001), ‘On the Very Idea of a Conceptual Scheme’, in Inquiries into Truth and Interpretation, 2nd edn (Oxford: Clarendon Press), pp. 183-198.

____ (1990), ‘The Structure and Content of Truth’, Journal of Philosophy 87(6), pp. 279-328.

Dennett, D. (1991), Consciousness Explained (Boston: Little, Brown and Company).

MacLennan, B.J. (1995), ‘Continuous Formal Systems: A Unifying Model in Language and Cognition’, in Proceedings of the IEEE Workshop on Architectures for Semiotic Modeling and Situation Analysis in Large Complex Systems, Monterey, CA.

Metzinger, T. (2004), Being No One: The Self-Model Theory of Subjectivity (Cambridge, MA: MIT Press).

Nietzsche, F. (1992), Beyond Good and Evil, in The Basic Writings of Nietzsche, edited by Walter Kaufmann (New York: The Modern Library).

Rawls, J. (1980), ‘Kantian Constructivism in Moral Theory’, The Journal of Philosophy 77(9), pp. 515-572.

Sommers, T. (2007), ‘The Illusion of Freedom Evolves’, in D. Spurrett, H. Kincaid, D. Ross and L. Stephens (eds), Distributed Cognition and the Will (Cambridge, MA: MIT Press).

Vinge, V. (1993), ‘The Coming Technological Singularity: How to Survive in the Post-Human Era’, http://www.rohan.sdsu.edu/faculty/vinge/misc/singularity.html. Accessed 24 April 2008.
