Levi has a neat post here which cogently sets out the Object-Oriented approach to epistemology and its relationship to cybernetic notions of operational closure and structural openness. Levi unpacks the notion of object withdrawal in terms of operational closure, which he describes as the thesis that events of a given type only ever ‘refer’ to events of the same type. I quote:
Second-order cybernetics argues that all systems (what I call “objects”) are 1) operationally closed and structurally open, and 2) self-referential. Operational closure is the thesis that operations taking place within a system only ever refer to one another, not an outside or external world. Neurological events only ever refer to other neurological events. Immune system events only ever refer to other immune system events. Thoughts only ever refer to other thoughts. Communications only ever refer to other communications (not the minds or brains of those that communicate). And so on. Structural openness, by contrast, means that systems are nonetheless open to perturbations from the world around them.
I think we need to tweak this formulation somewhat. Presumably operational closure implies that operations within a given system S only ‘refer’ to operations of the same type that are located in the system S.
If we don’t make this restriction then operational closure would be consistent with events in my cortex, retina or cochlea (etc.) ‘referring’ to events in your cortex, retina or cochlea and vice versa.
For an industrial-strength representational realist like Jerry Fodor there is nothing problematic about this at all, since he holds that intelligent creatures like us have compositionally structured mental representations, including a basic lexicon referring to properties in their environment. These referential relations are secured by facts about robust causal relationships between head and world: e.g. the fact that ‘blue’-sensitive cone cells respond preferentially to light wavelengths around 450 nanometres, or that the different resonance frequencies of hair cells in our inner ear allow the cochlea to contribute a ‘frequency domain analysis’ of complex sounds.
Given some complicated upstream processing this kind of causal covariance relation supposedly allows our brains to track surface reflectance properties of objects and distinguish periodic sounds like the tone of a tuning fork from aperiodic sounds like white noise.
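As a toy illustration of this kind of frequency-domain analysis (a minimal Python sketch, not a model of the cochlea; the sample rate, tone frequency and threshold are invented for the example), a simple spectral-concentration measure already separates a pure tone from white noise:

```python
import numpy as np

def spectral_concentration(signal):
    """Fraction of spectral power in the single strongest frequency bin.
    High for periodic signals (energy piles up at one frequency),
    low for aperiodic noise (energy is spread across the band)."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    return power.max() / power.sum()

sr = 8000                                  # samples per second
t = np.arange(sr) / sr                     # one second of time points
tone = np.sin(2 * np.pi * 440 * t)         # a 440 Hz 'tuning fork'
noise = np.random.default_rng(0).standard_normal(sr)  # white noise

print(spectral_concentration(tone))        # near 1.0: periodic
print(spectral_concentration(noise))       # near 0.0: aperiodic
```

The point of the sketch is only that a downstream process with access to the frequency decomposition can reliably discriminate periodic from aperiodic input, which is the covariance relation the representationalist story trades on.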
So if we can track object properties beyond our heads, why not hold that our mental representations can track mental events in others’ heads? We seem to be at least as good at discerning changes in one another’s mental lives as we are at limning the inanimate world. If we allow that all these events are, or are constituted by, neural events, and that our representations are themselves neural events, then the standard representationalist picture has little problem accommodating the claim that patterns of neural events in one head can track patterns of neural events in another. However, note that the plausibility of this position derives from the claim that our brains are informationally cued to changes in our wider environment, neural or otherwise. Head-head reference is just a special case of head-world reference and presents no greater problems than our capacity to refer to bumble bees, galaxies and cumuli.
Of course, it is notoriously hard to get from simple indicator-style causal relations like this to something more recognizably ‘semantic’ or ‘representational’. For example, our blue cones might be fooled into firing by direct electrical stimulation rather than by EM waves in the 400-500 nanometre range – as might our medium- and long-wavelength cones. Alternatively, we might get our visual information via some kind of sensory substitution system tickling skin receptors on our back.
So any mentalistic semantics needs to tell a story which explains how certain property-brain covariances are constitutive of the meaning of the correlative mental states while others are not (it has to get round what Fodor calls the ‘disjunction problem’). There is a wealth of competing theories of this kind: Fodor has asymmetric causal dependence; Dretske and Millikan have teleosemantics; then there are radical interpretationists like Haugeland and Davidson, etc.
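The disjunction problem can be put schematically. In this sketch (all names, wavelengths and thresholds are invented for illustration) a detector's firing covaries with two quite different causes, so pure covariance can only assign it the disjunctive content 'blue-or-zapped':

```python
def cone_fires(stimulus):
    """A toy 'blue cone': fires for blue light (~400-500 nm) OR for
    direct electrical stimulation. The firing event itself does not
    discriminate between the two causes."""
    if stimulus["kind"] == "light":
        return 400 <= stimulus["wavelength_nm"] <= 500
    if stimulus["kind"] == "electrode":
        return stimulus["current_uA"] > 10
    return False

blue = {"kind": "light", "wavelength_nm": 450}
zap = {"kind": "electrode", "current_uA": 50}
red = {"kind": "light", "wavelength_nm": 650}

# Both the 'proper' and the deviant cause produce the same event:
print(cone_fires(blue), cone_fires(zap), cone_fires(red))  # True True False
```

Each of the theories just listed is, in effect, a proposal for privileging one branch of the disjunction as meaning-constitutive and demoting the other to mere malfunction or misfire.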
However, even if we acknowledge that telling a coherent naturalistic story about mental representation is hard, it is not obvious why the only story that can be told is one of neurological events referring exclusively to other neurological events.
Admittedly, certain classes of neurological event might be exclusively cued to other neurological events. If some kind of second-order representationalist theory of consciousness is correct, then every conscious state involves a second-order representation of some first-order representational event.
But second-order representationalism is motivated by intuitions about the attentional and systemic availability of mental states: consciousness seems to have a built-in reflexivity. On the other hand, given the acknowledged difficulty of eliminating deviant causal links from the set of reference-constitutive properties of neural events (splendours and miseries of representationalism), it is problematic to claim that only neuron-neuron relations are semantically constitutive. We know that peripheral neural excitations are differentially sensitive to differences in the environment of the neural system (e.g. differences in EM wavelength, periodicity vs aperiodicity, etc.), so the restriction seems arbitrary and unmotivated.
Of course, Levi acknowledges that cognitive systems are open to the environment – in the ways that we have considered. However, self-referentiality is preserved, he thinks, because the preferential responses which provide the input for cognitive processing are on the system’s own terms. I quote:
Every system faces an environment that is far more complex than the system itself. If a system is to endure or exist it must reduce this complexity. It is here that we encounter the self-referentiality of systems. Systems are self-referential in two senses: First, in the sense that their internal operations only ever refer to other internal operations. Second, in the sense that their openness to their environment is a product of the system’s own distinctions. There’s a very real sense in which every system or object creates it’s own environment.
Think of how this plays out with respect to colour vision. As Brother Cavil in Battlestar Galactica observes, while bemoaning his incarnation in a human body, our cones are only sensitive to a narrow portion of the EM band (roughly 400 to 700 nanometres). We can’t see in UV or IR without special goggles or suchlike. But it does not follow from this that the 400-700 nanometre EM environment is a creation of our brains. Of course a sophisticate might observe that perceptions of individual colours do not correlate with individual cone firings but with higher-dimensional activations across short-, medium-, long- and black/white-sensitive retinal cells, which furnish sensitivity to holistic relationships such as differences in the saturation of hues or tone. We might interpret this as a sensitivity to complex environmental properties or relations as opposed to simple physical properties like wavelength or frequency.
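The sophisticate's point can be sketched numerically. With toy Gaussian tuning curves (the peak wavelengths are rough textbook values for the three cone types; the shared width is invented), a single wavelength maps to a whole pattern of activations rather than to any one cell's firing:

```python
import numpy as np

# Approximate peak sensitivities of the three cone types, in nanometres.
CONE_PEAKS_NM = {"S": 440.0, "M": 535.0, "L": 565.0}

def cone_activations(wavelength_nm, width=50.0):
    """Map a single wavelength to a 3-vector of cone responses using
    toy Gaussian tuning curves. A colour percept correlates with the
    whole activation pattern, not with any single cone's firing."""
    return {name: float(np.exp(-((wavelength_nm - peak) / width) ** 2))
            for name, peak in CONE_PEAKS_NM.items()}

print(cone_activations(450))  # S dominates: a 'blue' pattern
print(cone_activations(560))  # M and L both high: the ratio carries the information
```

Note that even in this crude sketch the discriminable quantity is a relationship across cells, which is just the holism the sophisticate appeals to.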
Alternatively, we might want to say that this shows colour to really be a proximal property of our brains – an artifact of a particular neural coding strategy – rather than an objective feature of the world. Much the same observation could be made of auditory parameters like pitch, which does not vary with frequency in a simple arithmetical way. However, if the fact that a neural layer codes the complexity of its input only indirectly warrants internalism of this kind, then this argument must apply to every stage of neural processing, since the dimensionality of the layers is bound to vary (as with artificial neural networks, where the hidden layer is generally of lower dimensionality than the input or output layers). But if this argument works against neurological events referring to the world, it works also against neurological events at different stages of processing referring to other neurological events: it’s a universal acid.
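The dimensionality point can be made concrete with a minimal feed-forward pass (random, untrained weights; the layer sizes are arbitrary choices for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

# A 100-dimensional input is re-coded through a 10-dimensional 'hidden'
# bottleneck and then re-expanded, as in a simple autoencoder layout.
W_in = rng.standard_normal((10, 100))    # input -> hidden (compresses)
W_out = rng.standard_normal((100, 10))   # hidden -> output (re-expands)

x = rng.standard_normal(100)
hidden = np.tanh(W_in @ x)
output = np.tanh(W_out @ hidden)

# Every stage has its own dimensionality, so every stage codes the
# complexity of its input 'only indirectly'.
print(x.shape, hidden.shape, output.shape)  # (100,) (10,) (100,)
```

If indirect coding at one layer were enough to make the coded content a mere artifact of the coder, the same verdict would apply at each arrow in this pipeline, which is the universal-acid worry.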
However, the motivation for Levi’s position is more obscure. For example, he claims not only that neurological events refer exclusively to neurological events, but that immune system events refer only to immune system events. By the same logic one must claim that linguistic events refer only to other linguistic events! All reference – if this argument goes through – is quotation: the use-mention distinction collapses. But apart from being entirely unmotivated – if words can refer to other words, then what, exactly, stops them referring to non-verbal entities? – this position ultimately vitiates the very typologies that Levi wishes to install. For his claim that entities of a given type refer only to entities of the same type attributes a rich objective type-structure to the world. If that type-structure isn’t there, then the claim is false. So Levi can only deny representational (epistemological) realism by making an industrial-strength commitment to objective type-structure. However, he must also be committed to the claim that this type-structure is epistemologically and semantically inaccessible. Therefore he is also committed to the unjustifiability or meaninglessness of his own position. Words which seem to refer to objective kinds like ‘neuron’ only refer to words and not to neurons. Thus Levi’s admirable clarity on this score only serves to indicate the profound incoherence of OOO epistemology.