Autopoiesis and closure Q&A

Dark Chemistry has an interesting and thought-provoking post about autopoiesis and objects here. This is not an area I know well and I’m happy to be put right, but I have a few newbie thoughts and questions about some of the concepts applied by fans of Luhmann, Varela, etc.

I accept that some processes can have the appearance of formal closure insofar as each stage requires a result of some earlier or later stage of the process (chicken-egg-chicken). Social processes and some biochemical processes exhibit this property (Collier 2006). Moreover, they depend on the maintenance of boundaries between inside and outside while also, in certain cases, contributing to that maintenance. However, there seem to be obvious cases where system maintenance depends on boundary permeability as well. For example, Merleau-Ponty’s blind man has extended the boundaries of his self when he uses his cane (self seeps into exo-self). So complex systems need boundary maintenance and process closure up to a point, but they also need to be open enough to exploit resources and couple with other systems. I’m therefore not quite ready to buy the idea that process closure is the only ontologically salient fact when considering whether object-hood implies absolute closure and separation or something messier and a matter of degree, like Collier’s notion of ‘cohesion’ (ibid.).

DC’s references to object-oriented programming are interesting – I think Robert Jackson has also discussed this from time to time over at Algorithm and Contingency. My Java is a bit rusty, but isn’t computational encapsulation an artifact at the level of code – a convenient way of thinking about computational structures by allocating proprietary methods to objects? Any algorithm that can be written OO style can be written procedurally (more messily and less debuggably) without encapsulating objects and their methods. So the unity of computational objects is notional and practical, not ontological.
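To make that point concrete, here is a minimal sketch (my own toy example, not from DC’s post) of the same computation written once with an encapsulating object and once procedurally – the result is identical, only the packaging differs:

```java
// Same computation twice: once with an encapsulating object,
// once as a bare procedure passing state around by hand.
class Counter {
    private int value;              // state hidden behind the object boundary
    void increment() { value++; }   // proprietary method allocated to the object
    int get() { return value; }
}

public class Equivalence {
    // Procedural version: the "object" is just an int handed around explicitly.
    static int incrementProcedurally(int value) { return value + 1; }

    public static void main(String[] args) {
        Counter c = new Counter();
        c.increment();
        c.increment();
        int v = 0;
        v = incrementProcedurally(v);
        v = incrementProcedurally(v);
        System.out.println(c.get() == v);  // both routes agree: prints true
    }
}
```

The object-oriented version is easier to reason about and debug, but nothing computable hangs on the difference.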

Finally, I’m not clear about the free use of terms like ‘recursion’ and ‘self-reference’ in discussions of autopoiesis. If I define something recursively using a successor relation – e.g.:

1) If fx, then f(s(x))

2) f(0)

This tells us that 0 is f and that any successor of an f is an f. By mathematical induction, then, we can show that all entities – 0, s(0), s(s(0)), s(s(s(0))), etc. –  in the domain of quantification for which the successor relation is defined are f.

There is obviously a sense in which the recursive part of the definition defines fx in terms of itself. But this is not a case of self-reference: no part of the definition mentions the definition itself or any of its parts.

True, I could append a clause using quotes saying that this is a definition of ‘f’ – “Df ‘f'” – but this would be a way of informing someone that it is a definition of a predicate f, and it is not part of the recursion.

Self-reference and recursion are different: the former is a semantic relation between a thing and itself, while the latter is a syntactic relation whereby a composite contains instances of itself (self-similarity).
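Transcribed into Java (my own sketch, taking s as the ordinary successor operation on the non-negative integers), the definition in 1) and 2) becomes a recursive method:

```java
public class Recursion {
    // Clause 2): f(0) holds. Clause 1): f(s(x)) holds whenever f(x) does.
    // The body applies the method to a smaller argument; it never quotes
    // or mentions 'isF' as an object, so there is syntactic recursion here
    // but no semantic self-reference in the sense discussed above.
    static boolean isF(int n) {
        if (n == 0) return true;  // base clause: f(0)
        return isF(n - 1);        // recursive clause: f(s(x)) if f(x)
    }

    public static void main(String[] args) {
        System.out.println(isF(3));  // 3 = s(s(s(0))): prints true
    }
}
```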

I suppose the obvious rejoinder to this is that I should get round to reading Luhmann, et al. at some point!

Collier, John (2006), ‘Autonomy and Process Closure as the Basis for Functionality’, Annals of the New York Academy of Sciences, Vol. 901, 280–290.



3 thoughts on “Autopoiesis and closure Q&A”

  1. Thanks for the follow-up! Yes, I think his use of recursion and self-reference is after the fact in the autopoietic event. In the passage I quote below he is marking out the truth that we are always already enmeshed in the environment. For him at least (i.e., he affirms the epistemological naturalist mode in a weak sense), it is through an actual interference or operation that the distinction between, or abstracting out of, the environment takes place as a temporal event in the ongoing process. As he says, this splits reality through an artificial insertion of the distinction between – not inside/outside (he doesn’t like such terms, and sees them as part of an outmoded epistemology) – what he terms utterance (execution: as algorithm) and information (self-reference: memory functions, et al.). As he states:

    “Communication comes about by splitting reality through a highly artificial distinction between utterance and information, both taken as contingent events within an ongoing process that recursively uses the results of previous steps and anticipates further ones” (OC, 1424).

    This distinction between utterance/information, or self-reference/external reference, is central to this dualistic process, which is both contingent and open to temporal forms of difference. The most difficult question, he tells us, is “how to define the operation that differentiates the system and organizes the difference between system and environment while maintaining reciprocity between dependence and independence” (OC, 1426).

    As he states it, the problem is how we do this and still maintain the reciprocal interaction of object (assemblage: system) and environment as it applies this artificial splitting distinction as an ongoing event that never ends. In fact, one of his key criticisms of Maturana is that the notion of autopoiesis leaves out this temporal splitting event of utterance and information, self-reference/external reference.

    As you say, and I affirm, we both need to study his works a little more to tease out the underlying aspects. In the interview he seemed to be in agreement with N. Katherine Hayles’s epistemological naturalism and how she was using his ideas back in 1995, so I’ll have to dig a little deeper in a thorough investigation of his books before I can say whether his ideas have anything to offer OOO or not… in this I’m only following up on some interesting posts of late by Levi on Larval Subjects in relation to his upcoming book and ideas on society et al.

    Anyway… Cheers! thanks again…. food for thought.

  2. Hey David,

    “My Java is a bit rusty, but isn’t computational encapsulation an artifact at the level of code – a convenient way of thinking about computational structures by allocating proprietary methods to objects? Any algorithm that can be written OO style can be written procedurally (more messily and less debuggably) without encapsulating objects and their methods. So the unity of computational objects is notional and practical, not ontological.”

    By all means encapsulation allocates proprietary methods to objects. As Bogost recognised, object domains like video game engines cannot be analysed as texts, nor deconstructed – they only work technically and economically, in the sense that they are a set of discrete protocols sold or given away as private property.

    But I’d ask: what is the difference between the way code is encapsulated from human eyes – practically, technologically, notionally, economically – and the way object-oriented programmed objects are encapsulated from each other? Note that OOO systems aren’t wholly steeped in relations; you can delete one thing and it does not cause the entire system to disappear.

    In addition, in computer lingo, encapsulation does not just refer to the way certain protocols and methods are concealed from view. It also refers to indirect communication from one object to another. Simply put, encapsulation is an indirect way of telling an object certain chunks of information without telling it what that information actually is. Not just with human users, but within methods themselves.

    By the way, I haven’t read much Luhmann, so, like you, I can’t offer much of a detailed commentary; but I’m not totally sure about operational closure. I envisage systems (cellular automata specifically) as being far more determined than this, whilst at the same time more contingent.

  3. Yeah in a sense encapsulation is a language construct that facilitates the bundling of data with the methods operating on that data. Information hiding is a design principle that strives to shield client classes from the internal workings of a class. Encapsulation facilitates, but does not guarantee, information hiding. Smearing the two into one concept prevents a clear understanding of either.

    The Java language’s manifestation of encapsulation doesn’t even ensure basic object-oriented design. The argument is not necessarily that it should, just that it doesn’t. Java developers can blithely create bags of data in one class and place utility functions operating on that data in a separate class. This type of thing drives us system analysts and architects crazy.
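    The distinction between encapsulation and information hiding can be shown in a few lines of Java (the class names here are mine, purely illustrative):

```java
// Encapsulation WITHOUT information hiding: data and method are bundled
// in one class, but the internals are fully exposed to clients.
class ExposedPoint {
    public double x, y;                         // any client can reach in
    public double norm() { return Math.hypot(x, y); }
}

// Encapsulation WITH information hiding: the same bundle, but the
// representation is shielded; clients see behaviour only.
class HiddenPoint {
    private final double x, y;                  // representation is private
    HiddenPoint(double x, double y) { this.x = x; this.y = y; }
    double norm() { return Math.hypot(x, y); }
}
```

    Both classes bundle data with the methods operating on it; only the second shields clients from the internal workings.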

    The first rule of thumb is to place data and the operations performed on that data in the same class. In the example I used in my post on Dark Chemistry I was actually talking about a sort of specialized communication object: the ESB or Enterprise Service Bus, which operates as a black box that allows many types of objects to communicate with each other using the notion of loose coupling. Loose coupling allows a system based on one language – let’s say COBOL – to speak to another system that speaks a totally different language – let’s say C++. Neither system could communicate directly with the other, but using this mediator they can do just that.

    What is unique is that this mediator, as you say, encapsulates all the necessary rules and operations that allow it to communicate with both of these systems without either of the systems knowing just what is going on within the black box. Think of it as a warehouse for objects: a place where differing objects truck right up and unload their data to be transformed, massaged, and repackaged for the consumer object on the other side. But what is interesting is that along with this translation of the object into alternating languages, one can also apply different rules-based procedures to the data being transferred as well. Let’s say that the information sent isn’t needed until after the weekend by the company president for company X, while company president Y sent the data on Saturday just because he arbitrarily decided to. We can have a set of rules that apply filters to the objects as they move through the system.

    I won’t bore you with more of this, but I will agree with you that objects themselves typically provide both the data and the operations needed to apply to that data in one class. But one can also encapsulate classes within classes, exposing sets of data and operations that allow for intricate complexity while only exposing to view the methods that a client or customer needs – whether that client is another machine or a human. The older procedural languages just do not have this ability to provide a producer/consumer with the methods it really needs to get its work done. They were, as you say, convoluted spaghetti code that hid nothing and exposed everything. Object-based programming hides what is not needed by a client, and exposes only the methods a client needs to access for execution.
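    None of this is a real ESB API, but a toy mediator along the lines sketched above can be written in a few lines of Java (all names here are mine, and a string transform stands in for real message translation):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// A toy message bus: a producer hands data over in its own format, the bus
// applies whatever translation rule is registered for the destination, and
// the consumer receives it in its format. Neither side sees inside the box.
public class ToyBus {
    private final Map<String, Function<String, String>> rules = new HashMap<>();

    // Register a translation rule for messages bound for a given consumer.
    public void route(String consumer, Function<String, String> rule) {
        rules.put(consumer, rule);
    }

    // Deliver a payload; the producer never learns what the rule does.
    public String send(String consumer, String payload) {
        return rules.get(consumer).apply(payload);
    }

    public static void main(String[] args) {
        ToyBus bus = new ToyBus();
        bus.route("cpp-system", s -> s.toUpperCase());   // stand-in "translation"
        System.out.println(bus.send("cpp-system", "order#42"));  // prints ORDER#42
    }
}
```

    The filtering rules mentioned above (hold until Monday, etc.) would just be further functions composed into the registered rule; the producer and consumer are none the wiser either way.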
