Levi and the Paradoxes of OOO Epistemology

Levi has a neat post here which cogently sets out the Object-Oriented approach to epistemology and its relationship to the cybernetic notions of operational closure and structural openness. Levi unpacks the notion of object withdrawal in terms of operational closure, which he describes as the thesis that events of a given type only ever ‘refer’ to events of the same type. I quote:

Second-order cybernetics argues that all systems (what I call “objects”) are 1) operationally closed and structurally open, and 2) self-referential. Operational closure is the thesis that operations taking place within a system only ever refer to one another, not an outside or external world. Neurological events only ever refer to other neurological events. Immune system events only ever refer to other immune system events. Thoughts only ever refer to other thoughts. Communications only ever refer to other communications (not the minds or brains of those that communicate). And so on. Structural openness, by contrast, means that systems are nonetheless open to perturbations from the world around them.

I think we need to tweak this formulation somewhat. Presumably operational closure implies that operations within a given system S only ‘refer’ to operations of the same type that are located in the system S.

If we don’t make this restriction then operational closure would be consistent with events in my cortex, retina or cochlea (etc.) ‘referring’ to events in your cortex, retina or cochlea and vice versa.

For an industrial-strength representational realist like Jerry Fodor there is nothing problematic about this at all, since he holds that intelligent creatures like us have compositionally structured mental representations which include a basic lexicon referring to properties in their environment. These referential relations are secured by facts about robust causal relationships between head and world: e.g. the fact that ‘blue’-sensitive cone cells respond preferentially to light wavelengths around 450 nanometres, or that the different resonance frequencies of hair cells in our inner ear allow the cochlea to contribute a ‘frequency domain analysis’ of complex sounds.

Given some complicated upstream processing, this kind of causal covariance relation supposedly allows our brains to track the surface reflectance properties of objects and to distinguish periodic sounds like the tone of a tuning fork from aperiodic sounds like white noise.
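
To make the ‘frequency domain analysis’ point concrete, here is a minimal sketch (Python with numpy; an illustrative analogy of my own, not a model of cochlear mechanics) of how a frequency-domain representation separates a periodic tone from aperiodic noise:

```python
import numpy as np

fs = 8000                               # sampling rate (Hz)
t = np.arange(fs) / fs                  # one second of samples
tone = np.sin(2 * np.pi * 440 * t)      # periodic: a 440 Hz 'tuning fork'
noise = np.random.randn(fs)             # aperiodic: white noise

def spectral_concentration(signal):
    """Fraction of total spectral power landing in the strongest frequency bin."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    return power.max() / power.sum()

# A periodic signal concentrates its power in a single bin; white noise
# spreads it roughly evenly across the whole spectrum.
print(spectral_concentration(tone))   # close to 1.0
print(spectral_concentration(noise))  # tiny, roughly 1/number-of-bins
```

The difference between the two outputs is just the periodic/aperiodic distinction drawn above, read off in the frequency domain.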

So if we can track object properties beyond our heads, why not hold that our mental representations can track mental events in others’ heads? We seem to be at least as good at discerning changes in one another’s mental lives as we are at limning the inanimate world. Allowing that all these events are, or are constituted by, neural events, and that our representations are themselves neural events, the standard representationalist picture has little problem accommodating the claim that patterns of neural events in one head can track patterns of neural events in another. However, note that the plausibility of this position derives from the claim that our brains are informationally cued to changes in our wider environment, neural or otherwise. Head-head reference is just a special case of head-world reference and presents no greater problems than our capacity to refer to bumble bees, galaxies and cumuli.

Of course, it is notoriously hard to get from simple indicator-style causal relations like this to something more recognizably ‘semantic’ or ‘representational’. For example, our blue cones might be fooled into firing by direct electrical stimulation rather than by EM waves in the 400-500 nanometre range – as might our medium- and long-wavelength cones. Alternatively, we might get our visual information via some kind of sensory substitution system tickling skin receptors on our back.

So any mentalistic semantics needs to tell a story which explains how certain property-brain covariances are constitutive of the meaning of the correlative mental states while others are not (it has to get round what Fodor calls the ‘disjunction problem’). There is a wealth of competing theories of this kind: Fodor has asymmetric causal dependence, Dretske and Millikan have teleosemantics, and then there are radical interpretationists like Haugeland and Davidson.

However, even if we acknowledge that telling a coherent naturalistic story about mental representation is hard, it is not obvious why the only story that can be told is one of neurological events referring exclusively to other neurological events.

Admittedly, certain classes of neurological event might be exclusively cued to other neurological events. If some kind of second-order representationalist theory of consciousness is correct, then every conscious state involves a second-order representation of some first-order representational event.

But second-order representationalism is motivated by intuitions about the attentional and systemic availability of mental states: consciousness seems to have a built-in reflexivity. On the other hand, given the acknowledged difficulty of eliminating deviant causal links from the set of reference-constitutive properties of neural events (the splendours and miseries of representationalism), it is problematic to claim that only neuron-neuron relations are semantically constitutive. We know that peripheral neural excitations are differentially sensitive to differences in the environment of the neural system (e.g. differences in EM wavelength, periodicity vs aperiodicity, etc.), so the restriction seems arbitrary and unmotivated.

Of course, Levi acknowledges that cognitive systems are open to the environment – in the ways that we have considered. However, self-referentiality is preserved, he thinks, because the preferential responses which provide the input for cognitive processing are on the system’s own terms. I quote:

Every system faces an environment that is far more complex than the system itself. If a system is to endure or exist it must reduce this complexity. It is here that we encounter the self-referentiality of systems. Systems are self-referential in two senses: First, in the sense that their internal operations only ever refer to other internal operations. Second, in the sense that their openness to their environment is a product of the system’s own distinctions. There’s a very real sense in which every system or object creates it’s own environment.

Think of how this plays out with respect to colour vision. As Brother Cavil in Battlestar Galactica observes, while bemoaning his incarnation in a human body, our cones are only sensitive to a narrow portion of the EM band (roughly 400 to 700 nanometres). We can’t see in UV or IR without special goggles or suchlike. But it does not follow from this that the 400-700 nanometre EM environment is a creation of our brains. Of course, a sophisticate might observe that perceptions of individual colours do not correlate with individual cone firings but with higher-dimensional activations across short-, medium-, long-wavelength and black/white-sensitive retinal cells, which furnish sensitivity to holistic relationships such as differences in the saturation of hues or in tone. We might interpret this as a sensitivity to complex environmental properties or relations as opposed to simple physical properties like wavelength or frequency.
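
To illustrate the sophisticate’s point, here is a toy opponent-coding sketch (Python; the numbers and the linear recoding are my own simplification, not retinal physiology): the quantities that correlate with colour percepts are relations among cone activations, not any single cell’s firing rate:

```python
# Hypothetical normalized responses of short-, medium- and long-wavelength
# cones to a single stimulus (illustrative values only).
S, M, L = 0.2, 0.7, 0.9

# A schematic opponent-process recoding into three 'higher-order' channels:
luminance   = L + M              # achromatic (black/white) channel
red_green   = L - M              # chromatic opponent channel 1
blue_yellow = S - (L + M) / 2    # chromatic opponent channel 2

# Saturation-like and tone-like properties fall out of these relations,
# not out of the raw activity of any one receptor type.
print(luminance, red_green, blue_yellow)
```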

Alternatively, we might want to say that this shows colour really to be a proximal property of our brains – an artifact of a particular neural coding strategy – rather than an objective feature of the world. Much the same observation could be made of auditory parameters like pitch, which does not vary with frequency in a simple arithmetical way. However, if the fact that a neural layer codes the complexity of its input only indirectly warrants internalism of this kind, then the argument must apply to every stage of neural processing, since the dimensionality of the layers is bound to vary (as with artificial neural networks, where the hidden layer is generally of lower dimensionality than the input or output layers – see the sketch below). But if this argument works against neurological events referring to the world, it works equally for neurological events at one stage of processing referring to neurological events at another: it’s a universal acid.
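
A minimal numerical sketch of the dimensionality point (plain numpy, with arbitrary layer sizes of my choosing): every layer re-codes its input at a different dimensionality, so ‘indirect coding’ is the norm at each stage, not just at the world-to-brain boundary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary sizes: the hidden layer has far lower dimensionality than
# the input and output layers.
n_in, n_hidden, n_out = 100, 10, 100
W1 = rng.standard_normal((n_hidden, n_in)) * 0.1
W2 = rng.standard_normal((n_out, n_hidden)) * 0.1

x = rng.standard_normal(n_in)   # 'sensory' input vector
h = np.tanh(W1 @ x)             # 10-d hidden code: a lossy recoding of x
y = np.tanh(W2 @ h)             # 100-d output, reconstructed from the code

# h codes x no less 'indirectly' than any later layer codes h; if indirect
# coding licensed internalism anywhere, it would license it everywhere.
print(x.shape, h.shape, y.shape)   # (100,) (10,) (100,)
```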

However, the motivation of Levi’s position is more obscure. For example, he claims not only that neurological events refer exclusively to neurological events, but that immune system events refer only to immune system events. By the same logic one must claim that linguistic events refer only to other linguistic events! All reference – if this argument goes through – is quotation: the use-mention distinction collapses. But apart from being entirely unmotivated – if words can refer to other words, then what, exactly, stops them referring to non-verbal entities? – this position ultimately vitiates the very typologies that Levi wishes to install. For his claim that entities of a given type refer only to entities of the same type attributes a rich objective type structure to the world. If that type structure isn’t there, then the claim is false. So Levi can only deny representational (epistemological) realism by making an industrial-strength commitment to objective type-structure. However, he must also be committed to the claim that this type-structure is epistemologically and semantically inaccessible. Therefore he is also committed to the unjustifiability or meaninglessness of his own position. Words which seem to refer to objective kinds like ‘neuron’ only refer to words and not to neurons. Thus Levi’s admirable clarity on this score only serves to indicate the profound incoherence of OOO epistemology.


9 thoughts on “Levi and the Paradoxes of OOO Epistemology”

  1. Thanks for the post, David. Reading your criticism, I get the sense that we’re using the term “reference” in two different ways. When I talk about reference, I am not talking about reference to the world. Of course language, for example, can refer to the world. The point is that neurological events only ever produce other neurological events and are only ever produced by other neurological events and that language only ever produces other signifiers and is produced by other signifiers. This is a perfectly banal and, I think, obvious point. Minds do not communicate (there’s no way you and I can share the same neurological events and those neurological events never enter the order of language), but rather only communication can communicate. Does that mean that one system can’t refer to other systems? Absolutely not. It just means that it will only relate to those systems in terms of its own operational closure. Does this change your criticism at all?

  2. Hi Levi,

    It’s true that I was using the word ‘reference’ in the semantic sense usual in analytic philosophy. The scare quotes around the term indicated that this semantic relationship is problematic in certain ways – in particular, there are competing naturalistic construals of the relationship, none of which seems entirely satisfactory.

    But I’m not convinced that your qualification gets around the issues I’ve posed here. If by ‘reference’ we simply understand productive causal relations between tokens of a given type (neurons, etc.) then we just have a placeholder term for a subset of causal relationships. If that’s all reference means then it’s not clear why we’re arguing over epistemology or semantics at all here.

    In any case, why grant these proximal relations epistemological/semantic significance over long-arm causal relationships such as the relationships of covariance between the surface reflectance properties of objects and firing patterns in our rods and cones? Neural events cause and are caused by things other than neural events – otherwise we wouldn’t be able to see or hear anything or perform actions. I gave the example of retinal cone cells and cochlear hair cells in my post. A similar point could be made regarding language – surely linguistic utterances cause and are caused by non-linguistic events, even if the causal regularities are not robust! Why give orders otherwise?

    We can always define terms which privilege causal relations of a specific kind, but there needs to be some supporting motivation for treating these as epistemically and semantically privileged.

  3. Hi David,

    Well first, the two uses of reference are entirely different. If I’m not rejecting semantic reference – and I’m not – then your argument doesn’t follow. Second, systems are operationally closed and structurally open. They can be and are perturbed by their environment. That addresses your points about reflective surfaces, rods, and cones. There’s no claim being made that interaction doesn’t take place with an environment and therefore I can’t be guilty of the criticism you’re leveling against me. Third, because systems are structurally open to their environment, they evolve and develop in response to that environment. A system that wasn’t responsive to its environment would, quite simply, 1) end up being destroyed, and 2) not evolve or develop in any way because it wouldn’t be perturbed in any way (i.e., no operations would take place in it at all). This, I think, is part of what makes cybernetic systems theory so superior to the old structuralism in the social sciences. Structuralism can’t explain why structures evolve and change because for it you only have operations taking place internal to structure. Cybernetic systems theory, by contrast, begins from the distinction between system and environment, self-reference and hetero- or other-reference, and the interaction between those two domains. Language certainly refers to things that aren’t linguistic, but always and only does so through things that are linguistic.

  4. OK – I’m happy to accept that I’m missing something here.

    We agree that interactions take place within cybernetic systems and between cybernetic systems and their environments which, somehow, suffice for reference. From what you say, this is enough to secure representational relationships between cybernetic systems and their environments – again, I agree. I also accept your criticism of structuralism and completely acknowledge your account of the superiority of cybernetic models.

    So everything seems to hang on this notion of operational closure. If, as you say, this amounts to there being a special class of intra-typical causal relationships, you need to explain why the intratypical-extratypical difference grounds withdrawal. I’ve argued that intra-typical causal relations are just a subset of causal relations and that there is no reason not to think that these are every bit as mediated and iffy as extra-typical relations.

  5. David,

    This might get at the nub of the issue:

    So everything seems to hang on this notion of operational closure. If, as you say, this amounts to there being a special class of intra-typical causal relationships, you need to explain why the intratypical-extratypical difference grounds withdrawal. I’ve argued that intra-typical causal relations are just a subset of causal relations and that there is no reason not to think that these are every bit as mediated and iffy as extra-typical relations.

    I’m not entirely sure what you’re claiming here, but are you equating my concept of withdrawal with Graham’s? For Harman objects cannot causally interact because they are withdrawn from one another. This is not a position I share. For me all withdrawal means is that systems are operationally closed or that they relate to the world through their own specific mode of organization. I don’t see causality as some deep mystery but as a perfectly ubiquitous phenomenon in the world and do not advocate any sort of occasionalism.

  6. I accept that your concept of withdrawal is different from Graham’s. But, as I’ve said, if the operational closure thesis just comes down to the observation that there are in-type causal relations, then I don’t see how it motivates anything analogous to an OOO conception of withdrawal. They are just a subset of causal relationships with no special semantic or epistemic efficacy per se. In order to motivate a semantic or epistemic position you need to explain why causal relations between the same types of thing are more epistemically or semantically important than causal relations between different kinds of thing.

    There’s also the problem of objective structure: For operational closure to occur there have to be types and (presumably) these have to be kinds which can figure in the scientific truth claims to which both of us appeal but whose truth is a mind-independent affair. If these claims are not claims about the world as it is in itself, then in what does their truth consist? If they are, then in what sense are they withdrawn? It would be circular to appeal to operational closure here, since it remains to be seen why this is epistemically or semantically significant.

  7. I’m pretty much in agreement with you, David, although I think we draw the boundary between issues to do with information processing and semantics (and thus the distinction between information and representational content) in different ways, me being part of the interpretationalist tradition of Davidson, Dennett, and Brandom.

    Nonetheless, I think some sense can be made of Levi’s notion of ‘operational closure’, if it’s reworked in different terms. This involves getting clear about the information/representation distinction just mentioned. From a Deleuzian perspective, Ideas are concrete universals, which is to say that they’re n-dimensional manifolds of qualitative variation plus the spatial (but not temporal) dimensions within which these qualities are expressed (insofar as the actual world is treated as a surface that curves through those dimensions, i.e., as a graph that does possess a temporal dimension), which are then situated in a higher dimension in which the manifold curves (i.e., a phase space + vector field). This curvature thus stores the information about the ways in which the various dimensions of qualitative variation tend to vary in relation to one another, or the differential relations between the variables these dimensions correspond to. We thus have a distinction between the information about actual states-of-affairs encoded in graph form, and information about the virtual tendencies that produce these states-of-affairs encoded in phase space form.
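
    In more standard mathematical dress (my gloss on the dynamical-systems imagery, not Pete’s own formalism), the ‘phase space + vector field’ picture is just a state space of n dimensions of variation together with the differential relations among them:

```latex
% State space: n dimensions of qualitative variation, x = (x_1, ..., x_n).
% The vector field f encodes the 'tendencies': how each dimension varies
% with respect to the others.
\[
  \dot{x}_i = f_i(x_1, \dots, x_n), \qquad i = 1, \dots, n
\]
% An actualized state-of-affairs is one trajectory x(t) through this space;
% the field f stores information about all possible trajectories at once.
```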

    This is all a bit complicated, but the point is that the Idea is essentially a schema for encoding information, but it is a concrete universal insofar as it incorporates the absolute spatial dimensions of the world (there can, following Leibniz, potentially be different forms of emergent space, but we’ll leave that). This means that the Idea is not only a schema, but genuinely encodes the information about the tendencies of everything in space with respect to those dimensions of qualitative variation. The complication/perplication of all Ideas together thus contains all the modal features of the whole world at any pure instant, out of which local stable actual states-of-affairs are produced within local metric space-times. To put it crudely, this is the metaphysics of information in-itself.

    The stuff about ‘operational closure’ or its equivalent is essentially an account of information for-us. The analogs of the Kantian forms of sensibility are schemas for encoding information, which are to be understood as including both spatio-temporal dimensions and qualitative dimensions. A system grasps an Idea insofar as it has evolved some truncated schema for encoding information that enables it to be sensitive to the information which is objectively encoded in the Idea. Thus, when actual states-of-affairs are produced in its local space-time, i.e., when events occur within its environment, it can respond to them within the limits set by its encoding schema. This is just to say that causal interaction between entities (or processes) is information transmission; it’s just that this information transmission can be looked at from two perspectives, i.e., that of the things themselves, and that of the Ideas within which the whole interaction is encoded. We needn’t think that it’s possible to achieve such a God’s-eye view of the world in order to talk about its metaphysical structure.

    The real issue then is what it is to talk about information transmission from the perspective of systems themselves. I’ve written a bit about the Deleuzian perspective on this before (though in a fairly provisional way): http://deontologistics.wordpress.com/2009/08/26/deleuze-the-song-of-sufficient-reason-part-2/ – though I think that my interpretation of Deleuze differs from Levi’s, and from his systems-theoretic/object-oriented appropriation of it. The crucial issue is precisely how we understand the ‘sensitivity’ of a system to information. This is because a system can be causally affected by information transmission that it is not itself able to encode and respond to. These are Deleuze’s intensities, or shocks to thought, which force the systems that receive them to either adapt or die. Understanding these, and the fuzzy (or confused) border between information a system is sensitive to and information it isn’t sensitive to, is essentially a matter of understanding the way in which systems receive and process information from their subsystems.

    In short, we can make sense of the idea of a system only being able to process certain kinds of input, and thus having an ‘umwelt’ of sorts, but the trick is to properly balance these notions of closure and openness, or perhaps simply to abandon that metaphor entirely in favour of something more precise. I certainly don’t see how the notion of ‘reference’ can contribute to this, however it is understood, because even though systems can only be sensitive to (and thus respond to) certain kinds of events that occur within the environment as they encode it, they nonetheless sense/respond to those very events, and not some indirect representation of them. This is not because they are intentional agents (I’d argue this involves a very specific kind of information processing), but precisely because they aren’t intentional at all. There is no representation here, so there can be no representational mediation. All there is is cause and effect, albeit understood in informational terms.

  8. Hi Pete,

    I accept that we can treat Deleuzean Ideas as vector fields representing differentials (tendencies) for a field of scalars. Such a field can encode abstract properties of the space, such as the presence of attracting, repelling or stationary points. So what we are talking about are operators which transform the raw array of information about scalar quantities (e.g. an array of pixel activity) and output something more abstract (such as a vector field which maps intensity changes in the array). Does this give us a handle on the notion of operational closure or Umwelt?
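
    Here is a minimal sketch of such an operator (Python/numpy; treating ‘pixel activity’ as a 2-D scalar array, all details my own): the gradient operator takes the scalar field to a vector field from whose structure more abstract properties – stationary points, sharp intensity changes – can be read off:

```python
import numpy as np

# A toy 'pixel activity' array: a bright Gaussian blob on a dark background.
n = 64
ys, xs = np.mgrid[0:n, 0:n]
scalar_field = np.exp(-((xs - n / 2) ** 2 + (ys - n / 2) ** 2) / 50.0)

# The gradient operator transforms the scalar array into a vector field:
# at each pixel, a vector pointing up the local intensity slope.
gy, gx = np.gradient(scalar_field)
magnitude = np.hypot(gx, gy)

# Abstract properties of the space fall out of the field's structure:
# stationary points (vanishing gradient: the blob's peak and the flat
# background) and edge-like regions (large gradient).
stationary = magnitude < 1e-8
edges = magnitude > magnitude.mean() + 2 * magnitude.std()
print(stationary.sum(), edges.sum())
```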

    It can certainly give us a way of describing the computational strategy employed within a particular system or sub-system. However, where does the notion of ‘closure’ enter here? In any complex computational system any such operation is liable to occur as part of a multistage process (some stages of which may well occur beyond the spatial boundary of the organism). To obtain closure we need to isolate a set of entities and operations such that those operations always generate entities within the relevant set. Some sets are closed under some operations but not others: e.g. the natural numbers are closed under addition but not under subtraction (which can generate negative integers, etc.). In the case of cognition it seems to me to be an empirical and not an ontological matter whether there is an interesting class of computational operations and entities which exhibit such formal properties.
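
    A quick sketch of the formal notion (Python; the finite sample and the membership predicate are my own devices for illustration): a set is closed under an operation just in case applying the operation never leads outside the set:

```python
from itertools import product

def is_closed(sample, op, member):
    """Test closure over a finite sample: does op ever leave the set
    picked out by the membership predicate?"""
    return all(member(op(a, b)) for a, b in product(sample, repeat=2))

sample_naturals = range(10)                    # a finite sample of the naturals
is_natural = lambda x: isinstance(x, int) and x >= 0

print(is_closed(sample_naturals, lambda a, b: a + b, is_natural))  # True
print(is_closed(sample_naturals, lambda a, b: a - b, is_natural))  # False: 0 - 1 = -1
```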

    We could, I suppose, construct a set by legislation (it wouldn’t be hard). For example, it could be the set V of all possible vector states {v1, …, vn} of all the neural networks in a particular brain, and O could be the set of all transformations {o1, …, on} of all those vectors. V is closed under O by definition, since transformations can only be vector-vector. But this is just a trivial mathematical construction. It doesn’t tell us anything about how that brain represents the world.
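
    The triviality of such legislated closure is easy to exhibit (a toy construction of my own, echoing the V and O above): matrices map n-vectors to n-vectors by their very type, so closure holds by definition while telling us nothing about representation:

```python
import numpy as np

n = 4
rng = np.random.default_rng(1)

O = [rng.standard_normal((n, n)) for _ in range(3)]  # some 'transformations'
v = rng.standard_normal(n)                           # some vector state in V

for o in O:
    v = o @ v
    assert v.shape == (n,)  # never leaves V: closure by construction

# Nothing in this closure tells us what, if anything, v represents
# about the world outside the system.
print(v)
```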

    Of course, it could be objected at this point that operational closure in the second-order cybernetics sense is a semantic notion: it asserts that operations carried out by the cognitive system only refer to themselves or to internal system states. This certainly seems to be a more interesting and provocative idea. But, as I was arguing in response to Levi, this implies something about the semantic properties of cognitive systems which cannot be discerned from the nature of their material substrates. Neither can it be deduced from their computational properties alone. After all, there might be a good argument to the effect that, by virtue of encoding abstract properties of a pixel array, a system represents properties in the environment that provides the input to that array – such as the presence of edges.

  9. Pingback: A Quickie on the KK Principle « Deontologistics
