Dark Chemistry has an interesting and thought-provoking post about autopoiesis and objects here. This is not an area I know well and I’m happy to be put right, but I’ve a few newbie thoughts and questions about some of the concepts applied by fans of Luhmann, Varela, etc.
I accept that some processes can have the appearance of formal closure insofar as each stage requires a result of some earlier or later stage of the process (chicken-egg-chicken). Social processes and some biochemical processes exhibit this property (Collier 2006). Moreover, they depend on the maintenance of boundaries between inside and outside while also contributing to that maintenance in certain cases. However, there seem to be obvious cases where system maintenance depends on boundary permeability as well. E.g. Merleau-Ponty’s blind man has extended the boundaries of his self when he uses his cane (self seeps into exo-self). So complex systems need boundary maintenance and process closure up to a point, but they also need to be open enough to exploit resources and couple with other systems. So I’m not quite ready to buy that process closure is the only ontologically salient fact here. The open question is whether object-hood implies absolute closure and separation, or something messier and a matter of degree, like Collier’s notion of ‘cohesion’ (Ibid.).
DC’s reference to object-oriented programming is interesting – I think Robert Jackson has also discussed this from time to time over at Algorithm and Contingency. My Java is a bit rusty, but isn’t computational encapsulation an artifact at the level of code – a convenient way of thinking about computational structures by allocating proprietary methods to objects? Any algorithm that can be written OO style can be written procedurally (more messily and less debuggably) without encapsulating objects and their methods. So the unity of computational objects is notional and practical, not ontological.
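The point can be made concrete with a toy sketch in Java (all names here are hypothetical, chosen just for illustration): the same trivial algorithm written once with an encapsulating object and once procedurally. Both compute exactly the same thing; the encapsulation exists only as an organising convention in the source code.

```java
public class EncapsulationDemo {

    // OO style: state and behaviour bundled into an object,
    // with the state hidden behind "proprietary" methods.
    static class Counter {
        private int value;               // encapsulated state
        void increment() { value++; }    // method allocated to the object
        int get() { return value; }
    }

    // Procedural style: the same algorithm with the state
    // passed around openly, no object and no encapsulation.
    static int increment(int value) {
        return value + 1;
    }

    public static void main(String[] args) {
        Counter c = new Counter();
        c.increment();
        c.increment();

        int v = 0;
        v = increment(v);
        v = increment(v);

        // Both routes yield 2: the object boundary is a convenience
        // for the programmer, not a difference in what is computed.
        System.out.println(c.get() + " " + v);  // prints "2 2"
    }
}
```

Nothing in the executed computation marks the `Counter` version out as containing a unified object; the boundary is drawn by the source text, which is the sense in which the unity is notional rather than ontological.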
Finally, I’m not clear about the free use of terms like ‘recursion’ and ‘self-reference’ in discussions of autopoiesis. If I define something recursively using a successor relation – e.g.:
0) f0
1) If fx, then f(s(x))
This tells us that 0 is f and that any successor of an f is an f. By mathematical induction, then, we can show that all entities – 0, s(0), s(s(0)), s(s(s(0))), etc. – in the domain of quantification for which the successor relation is defined are f.
There is obviously a sense in which the recursive clause of the definition defines fx in terms of itself. But this is not a case of self-reference: no part of the definition mentions the definition, or any part of it, as a symbol.
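The same distinction shows up in code. Here is a minimal Java sketch of the recursive definition above (names hypothetical): the function contains a call to itself – syntactic self-similarity – but it nowhere mentions its own name as a quoted symbol, the way a metalinguistic clause like “this defines ‘f’” would.

```java
public class RecursionDemo {

    // Base clause: f holds of 0.
    // Recursive clause: if f holds of x, then f holds of s(x),
    // modelled here as x + 1.
    static boolean isF(int n) {
        if (n == 0) return true;   // f0
        return isF(n - 1);         // f(s(x)) reduces to fx
    }

    public static void main(String[] args) {
        // By induction, isF is true of every natural number.
        System.out.println(isF(4));  // prints "true"
    }
}
```

The recursive call is a use of the function, not a mention of it: the definition is self-similar without being self-referential.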
True, I could append a clause using quotes saying that this is a definition of ‘f’ – “Df ‘f’” – but this would be a way of informing someone that this is a definition of the predicate f, and is not part of the recursion.
Self-reference and recursion are different. The former is a semantic relation between a thing and itself, while the latter is a syntactic relation whereby a composite contains instances of itself (self-similarity).
I suppose the obvious rejoinder to this is that I should get round to reading Luhmann, et al. at some point!
Collier, John (2006), ‘Autonomy and Process Closure as the Basis for Functionality’, Annals of the New York Academy of Sciences, Vol. 901, pp. 280–290.