Objective Ecological Value


In order to construct an anthropologically unbounded account of posthumans, we need a psychology-free account of value. There may, after all, be many possible posthuman psychologies, but we do not know about any of them to date. However, the theory requires posthumans to be autonomous systems of a special kind: Functionally Autonomous Systems (see below). I understand “autonomy” here as a biological capacity for active self-maintenance. The idea of a system that intervenes in the boundary conditions required for its existence can be used to formulate an Autonomous Systems Account (ASA) of function which avoids some of the metaphysical problems associated with the more standard etiological theory. The version of ASA developed by Wayne Christensen and Mark Bickhard defines the functions of an entity in terms of its contribution to the persistence of an autonomous system, which they conceive as a group of interdependent processes (Christensen and Bickhard 2002: 3). Functions are process dependence relations within actively self-maintaining systems.

Ecological values are constituted by functions. This conception, in turn, allows us to formulate an account of “enlistment”, which then lets us define what it is to be a Functionally Autonomous System (FAS).

1)      (ASA) Each autonomous system has functions belonging to it at some point in its history. Its functions are the interdependent processes it requires to remain autonomous at that point.

2)      (Value) If a process, thing or state is required for a function to occur, then that thing or process is a value for that function. Any entity, state or resource can be a value. For example, the proper functioning of a function can be a value for the functions that require it to work.[1]

3)      (Enlistment) When an autonomous system produces a function, then any value of that function is enlisted by that system.

4)      (Accrual) An FAS actively accrues functions by producing functions that are also values for other FAS’s.

5)      (Functional Autonomy) A functionally autonomous system (FAS) is any autonomous system that can enlist values and accrue functions.
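The five definitions above can be sketched as a small dependency structure. The following is a minimal, purely illustrative Python model, not part of the original account: the class name, method names, and the aerobic-organism example are all my assumptions, chosen only to show how ASA, Value, Enlistment, and Accrual hang together as graph relations.

```python
# Hypothetical sketch of Definitions 1-5. All names are illustrative.

class AutonomousSystem:
    """An autonomous system conceived as a group of interdependent processes."""

    def __init__(self, name):
        self.name = name
        self.functions = {}  # maps each function to the set of values it requires

    def add_function(self, function, requires):
        # (ASA) Register a process the system requires to remain autonomous,
        # together with the things that process depends on.
        self.functions[function] = set(requires)

    def values(self):
        # (Value) Anything required by some function of this system.
        # Note that functions themselves can appear here (Definition 2).
        if not self.functions:
            return set()
        return set().union(*self.functions.values())

    def enlists(self, entity):
        # (Enlistment) The system enlists any value of any function it produces.
        return entity in self.values()


def accrues_function_for(producer, consumer):
    # (Accrual) producer accrues a function when one of its functions
    # is also a value for some function of consumer.
    return any(f in consumer.values() for f in producer.functions)


# Illustrative example: an aerobic organism whose respiration requires oxygen,
# and whose foraging requires both respiration and food.
organism = AutonomousSystem("aerobic organism")
organism.add_function("respiration", requires={"oxygen"})
organism.add_function("foraging", requires={"respiration", "food"})

print(organism.enlists("oxygen"))       # oxygen is an ecological value here
print(organism.enlists("respiration"))  # a function can itself be a value
```

On this toy model, Definition 2's observation that "the proper functioning of a function can be a value" falls out directly: `"respiration"` is both a key in `functions` and a member of `values()`.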

People are presumably FAS’s on this account, but so are nonhuman organisms and (perhaps) lineages of organisms. Likewise social systems (Collier and Hooker 1999) and (conceivably) posthumans. To date, technical entities are not FAS’s because they are non-autonomous. Historical technologies are mechanisms of enlistment, however. For example, without mining technology, certain ores would not be values for human activities. Social entities, such as corporations, are autonomous in the relevant sense and thus can have functions (process interdependency relations) and constitute values of their own. However, while not narrowly human, current social systems are wide humans, not posthumans. As per the Disconnection Thesis, posthumans would be FAS’s no longer belonging to WH (the Wide Human socio-technical assemblage; see Roden 2012).

This is an ecological account in the strict sense of specifying values in terms of environmental relations between functions and their prerequisites (though “environment” should be interpreted broadly to include endogenous as well as exogenous entities or states). It is also an objective rather than subjective account, which has no truck with the spirit (meaning, culture, subjectivity, etc.). Values are just things which enter into constitutive relations with functions (Definition 2 could be expanded and qualified by introducing degrees of dependency). Oxygen was an ecological value for aerobic organisms long before Lavoisier. We can be ignorant of our values, mistake non-values for values, and so on. It is also arguable that some ecological values are pathological in that they support some functions while hindering others.[2]

The theory is partial because it only provides a sufficient condition for value. Some values – Opera, cigarettes, incest prohibitions and sunsets – are arguably things of the spirit, constituted as values by desires or cultural meanings.


Christensen, W. D., and M. H. Bickhard. 2002. “The Process Dynamics of Normative Function.” The Monist 85 (1): 3–28.

Collier, J. D., and C. A. Hooker. 1999. “Complexly Organised Dynamical Systems.” Open Systems & Information Dynamics 6 (3): 241–302.

Roden, D. 2012. “The Disconnection Thesis.” In The Singularity Hypothesis: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, Jim Moor, and Eric Steinhart. Springer Frontiers Collection.


[1] An issue I do not have time to consider is that ecological dependency is transitive: if a function depends on a thing whose existence depends on another thing, then it depends on that other thing. Ecological dependencies thus overlap.
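The transitivity noted in this footnote amounts to taking the transitive closure of the dependency relation. As a minimal sketch (the function name and the example dependencies, including the oxygen–photosynthesis link, are my illustrative assumptions):

```python
# Illustrative sketch: transitive closure of a dependency relation, so that a
# function depending on a thing also depends on whatever that thing depends on.

def transitive_dependencies(deps, start):
    """Return everything `start` depends on, directly or indirectly."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for d in deps.get(node, ()):
            if d not in seen:
                seen.add(d)
                stack.append(d)
    return seen


# Hypothetical example dependencies:
deps = {
    "foraging": {"respiration"},
    "respiration": {"oxygen"},
    "oxygen": {"photosynthesis"},  # atmospheric oxygen depends on plant life
}

print(transitive_dependencies(deps, "foraging"))
# includes "photosynthesis": the dependencies ramify and overlap
```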

[2] Addictive substances may fall into this class.


6 thoughts on “Objective Ecological Value”

  1. This is fascinating stuff, David. Since I think this is the inevitable future of philosophy, I actually think the points of debate lurking behind the scenes of this post will at some point structure the whole of philosophy.

    I think the problem you face is the problem that Christensen and Bickhard face as well: because you have no way of explaining intentionality, you’re forced to rely on what might be called ‘analogue analysis,’ to root around for naturalistic parallels to what seems to be going on when we refer to things such as ‘value.’ The obvious problem, however, is that interpretative underdetermination is built into this way of proceeding. Short of knowing what value is naturalistically, your attempts to deploy naturalistic versions of the concept are just going to be shots in the dark.

    So consider the way ‘value’ is defined here: as any process/entity prerequisite for any function understood as process dependence relations within auto-maintaining systems. An ‘objective value,’ in effect, becomes *any necessary condition* for any function of any FAS – which means that everything is a value! Especially given that ‘systematic causal dependency relations’ are just another way of saying ‘mechanism.’ The question then becomes one of why bother using the term at all, save for certain associative linkages that are largely, if not entirely, rhetorical.

    What you want is a conceptual apparatus that allows you to pick out the same dependency relations as value does in its intentional incarnations, only without the intentionality; otherwise you can never be certain that simple equivocation is doing all the work, or that you’re saying anything substantial. The threat is that you’re just talking machinery (‘Oxygen is an environmental condition’) in an intuition-friendly manner (‘Oxygen is an ecological value’).


  3. Hi Scott,

    Your commentary is on the button. I shouldn’t dignify this with the descriptor “theory”. It’s a segue from the metaphysics of agency required by the Disconnection Thesis to certain ethical (not normative!) conclusions about our posthuman predicament.

    The point about indeterminacy is very well made! I’m frankly not sure whether I need to rein this in to get me where I need to be now. But a serious theory of ecological value presumably would. If dependency is just a formal transitive relation, then the indeterminacy ramifies. So we probably need to give it an ordinal measure of intensity or whatever. Not sure how I’d do this just now, but it could be something to keep me off the streets for a bit next year.

    This is a psychology-free account. Active self-maintenance does not require mental intentionality in any substantive sense. It requires plastic dispositions for functional behaviour. Ant colonies don’t have minds, but they have a plastic disposition to search for nearer food sources arising from the way individual ant foraging behaviour is recruited by pheromone trails. One could give this a minimalist intentional stance interpretation, but I’m dubious of an IS construal of intentionality for pretty much the same reasons you are. If we’re speculating about entities within Posthuman Possibility Space, then not only do we lack a theory of mind-in-general; we lack a theory of an interpreter-in-general (I’m assuming that we junk transcendental theories of this ilk, natch). Thus we have no general conception of what a real intentionality pattern is that we can extend throughout PPS. But we do have a conception of plastic self-maintaining behaviour, and we do have a working metaphysics of the kind of systems-complexity this requires. Or so I hope 🙂

  4. And your final point cuts many different ways, I think. The idea is to always recognize that our attempts to cognize always turn on the *hardware we actually got* (and not the ‘stances’ we can ‘take’). Natural scientific cognition as it stands remains trapped by various complexity thresholds, subject to who knows how many ‘refresh rate’ illusions. It’s the highest-dimensional frame of reference we’ve got, so you really have no choice but to advert to its idioms, but one of the things that makes your terrain so compelling, David, is the way it pitches ALL forms of human cognition against their limits. It’s a virus that can only infect so many mysteries. Who’s to say what kinds of ‘intelligences’ might be immune?

    An additional worry I have with the importation of intentional concepts turns on our blindness to our own problem-solving machinery. As soon as you use words like ‘autonomy,’ no matter how carefully you define it in natural terms, you introduce the problem of intentional intuitions, potential misapplications of social heuristics. My money is on variable kinds and degrees of *componency,* and other concepts that all, I would argue at least, obviously water at the etiological well.

  5. I’ve attempted to acknowledge this issue in the book by treating FAS as an internal requirement of the Disconnection Thesis rather than a proposal for a general account of living systems. It’s not clear that we can have that – there may be no hidden essence common to all life, and in any case we are only acquainted with life in a particular corner of space-time. The starting point, as you suggest, is that we have no future-proof knowledge here, but only an almost unanswerable speculative need arising from the apparent contingency of our own morphology and cognition. We have no grasp, as yet, of how far that contingency extends.

  6. We should come up with a neologism or phrase for this… Ekkremis, maybe? Or the ‘Cognitive Contingency Problem’? Or have you already written about this somewhere?

    The reason I think this is so important is simply that ALL problems of cognition can be viewed in terms of the contingency of resources relative to some set of problems that can or cannot be specified (BBT is just one example of this). With Disconnection you’re tackling this issue in its most general – and most significant – form. It forces you to engage the singularity as the problem ecology of problem ecologies. No small thing that.
