Nature Unbound: Brassier on Churchland’s Realism


A number of writers in the Speculative Realist blogosphere have cited Ray Brassier’s discussion of Paul Churchland’s attempt to reconcile scientific realism and a Prototype Vector Activation (PVA) theory of content in Chapter 1 of Nihil Unbound (Brassier 2007). Though I am reasonably familiar with the work of Paul and Patricia Churchland, I recall finding the argument in this section tough to disentangle first and second time round. But enough people out there seem convinced by Ray’s position to warrant another look.

This is my first attempt at a reconstruction and evaluation of Ray’s position in Nihil (it does not yet take account of any subsequent changes in his position – I suspect that others will be better placed than me to incorporate these into the discussion). In what follows I’ll briefly summarize the PVA theory in the form familiar to Ray at the time of Nihil’s publication. The second section will then attempt to reconstruct his critique of Churchland’s attempt to reconcile his theory of content with a properly realist epistemology.

1. The Prototype Vector Activation (PVA) Theory of Content

Firstly, what is the PVA theory of content? As many will already be aware, the term comes from the argot of neural network modeling. Artificial Neural Networks (ANNs) are a technique for modeling the behaviour of biological nervous systems using software representations of neurons and their interconnections. Like actual neurons, the software neurons in ANNs respond to summed inputs from other neurons or from the ‘world’ by producing an output. Many ANNs consist of three layers: an input layer which is given some initial data, a hidden layer that transforms it, and an output layer which presents the network’s ‘response’ to the input.
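For readers who like to see the machinery, here is a minimal sketch of such a three-layer network in Python/NumPy. The layer sizes, the sigmoid squashing function and the random initial weights are illustrative assumptions on my part, not details fixed by Churchland’s discussion.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

n_input, n_hidden, n_output = 4, 6, 2         # hypothetical layer sizes
W_ih = rng.normal(size=(n_hidden, n_input))   # input-to-hidden connection weights
W_ho = rng.normal(size=(n_output, n_hidden))  # hidden-to-output connection weights

def forward(x):
    """Propagate an input vector through the three layers."""
    hidden = sigmoid(W_ih @ x)        # hidden layer transforms the input
    output = sigmoid(W_ho @ hidden)   # output layer gives the network's 'response'
    return output

print(forward(np.array([1.0, 0.0, 0.5, 0.2])))   # prior to training this is just noise
```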

Learning in neural nets usually involves reducing the error between the actual output of the network (initialized randomly) and the desired output, which might well be the allocation of an input pattern to some category like ‘true’ or ‘false’, ‘male’ or ‘female’, ‘combatant’ or ‘non-combatant’ or ‘affiliation unknown’, represented by activation values at the output.

Among the key properties adjusted during the training of ANNs are the ‘weights’ or connection strengths between neurons, since these determine whether a given input generates random noise (always the case prior to training) or useful output. Supervised learning algorithms tweak the network’s weights until the error between the actual output and that desired by the trainers is minimized. Some ANNs (for example, Kohonen Self-Organizing Feature Maps) use more biologically plausible unsupervised learning algorithms to generate useful output, such as pattern identification, without that pattern having to be pre-identified by a trainer. One example is the “Hebb Rule”, which adjusts connection weights according to the timing of neuron activations (neurons that fire together, wire together). So ANNs don’t have to be spoon-fed. They can latch onto real structure in a data set for themselves.
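As a toy illustration of the unsupervised case, here is the simplest rate-based form of the Hebb rule, in which a weight grows in proportion to the product of pre- and post-synaptic activity. This is my own minimal sketch; spike-timing-dependent variants and Kohonen maps are considerably more involved.

```python
import numpy as np

def hebbian_update(W, pre, post, lr=0.01):
    """One Hebbian step: dW[i, j] = lr * post[i] * pre[j].
    Connections between co-active neurons are strengthened; no 'teacher' signal is used."""
    return W + lr * np.outer(post, pre)

rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(3, 5))        # hypothetical 5-input, 3-output weight matrix
pre = np.array([1.0, 0.0, 1.0, 0.0, 1.0])     # presynaptic activations
post = np.tanh(W @ pre)                       # postsynaptic activations
W = hebbian_update(W, pre, post)              # co-active pairs end up more strongly connected
```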

Learning in ANNs, then, can be thought of as a matter of rolling down the network’s “error surface” – a curve graphing the relationship of error to weights – to an adequate minimum error. An error surface represents the numerical difference between desired and actual output, plotted against relevant variables like the interneuron weights generating the output.
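The “rolling down the error surface” picture can be made concrete with a toy supervised example: a single sigmoid unit trained by gradient descent on a squared-error surface. The task (logical AND), the learning rate and the iteration count are arbitrary choices for illustration only.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])   # input patterns
t = np.array([0., 0., 0., 1.])                           # desired outputs (logical AND)

rng = np.random.default_rng(2)
w, b, lr = rng.normal(size=2), 0.0, 0.5

for _ in range(2000):
    y = sigmoid(X @ w + b)             # actual outputs
    delta = (y - t) * y * (1 - y)      # error signal for E = 0.5 * sum((y - t) ** 2)
    w -= lr * (X.T @ delta)            # step downhill: w <- w - lr * dE/dw
    b -= lr * delta.sum()              # likewise for the bias

print(np.round(sigmoid(X @ w + b), 2))   # outputs drift towards the desired [0, 0, 0, 1]
```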

Categories acquired through training are represented as prototype regions within the “activation space” (the space of all possible activation values of its neurons) of the network, where the activations representing the items falling under a corresponding category are clustered. For Churchland, prototypes represent a structure-preserving mapping or “homomorphism” from uncategorized input onto conceptual neighborhoods within the n-dimensional spaces of neural layers downstream from the input layer (Churchland 2012, viii, 77, 81, 103, 105). In effect, the neural network learns concepts by “squishing” families of points or trajectories in a high-dimensional input space onto points or trajectories clustered in a lower-dimensional similarity space. Two trained-up neural nets, then, can be thought of as having acquired similar concepts if the prototypes in the first net form the vertices of a geometrical hypersolid similar to that formed by the prototypes in the second net. The Euclidean distances between the prototypes do not need to have the same magnitude, but they need to be proportionate between corresponding or nearly-corresponding points. It’s important for Churchland that the distance-similarity metric is insensitive to dimensionality, for this, he argues, allows conceptual similarity to be measured across networks that have different connectivities and numbers of neurons (Churchland 1998).
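Here is a rough sketch of how such a dimension-insensitive comparison might go, assuming we simply compare the relative inter-prototype distances of two networks after discarding overall scale. Churchland’s own proposal in the 1998 paper is more refined, so treat this as a toy reconstruction.

```python
import numpy as np
from scipy.spatial.distance import pdist

def shape_similarity(protos_a, protos_b):
    """Compare the relative inter-prototype distances of two activation spaces,
    ignoring absolute scale and the dimensionality of each space.
    Returns 1.0 for congruent 'hypersolids', lower values for less congruence."""
    d_a = pdist(protos_a) / pdist(protos_a).mean()   # scale-free distances, net A
    d_b = pdist(protos_b) / pdist(protos_b).mean()   # scale-free distances, net B
    return 1.0 - np.abs(d_a - d_b).mean()

# Hypothetical prototype points for three concepts, in a 3-D and a 5-D activation space
net_a = np.array([[0., 0., 0.], [1., 0., 0.], [0., 2., 0.]])
net_b = np.hstack([net_a * 2.0, np.zeros((3, 2))])   # same 'shape', rescaled, more dimensions
print(shape_similarity(net_a, net_b))                # 1.0: similar concepts despite neural diversity
```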

The resultant theory of content is geometrical rather than propositional and, according to Churchland, internalist rather than externalist; it is also holist rather than atomist. It is geometric insofar as conceptual similarity is a matter of structural conformity between sculpted activation spaces. Such representations can capture propositional structure, but need not represent it propositionally. For one thing, the stored information in the inter-neural weights of the network need not exhibit the modularity that we would expect if that information were stored in discrete sentences. In most neural net architectures all the inter-neural weightings of the trained-up network are involved in generating its discrepant outputs (Ramsey, Stich and Garon 1991).

Churchland’s internalism is a little more equivocal, arguably, than his anti-sententialism. The account is internalist insofar as it is syntactic, where the relevant syntactic elements are held to reside inside our skulls. Information about the real-world features or structures tracked by prototypes plays no role in measures of conceptual similarity at all. Theoretically, conceptually identical prototypes could track entirely disparate environmental features so long as they exhibited the relevant structural conformity. Thus conceptual content for Churchland is a species of narrow content. However, Churchland regards conceptual narrow content as but one component of the “full” semantic content in PVA. The other components are the abstract but physically embodied universals tracked by sculpted activation spaces:

A point in activation space acquires a specific semantic content not as a function of its position relative to the constituting axes of that space, but rather as a function of (1) its spatial position relative to all the other contentful points within that space; and (2) its causal relations to stable and objective macro-features of the external environment (Churchland 1998, 8).

Fans of active-externalist or embodied models of cognition might argue that this syntactic viewpoint on conceptual similarity needs to be subsumed within a wide, process-externalist conception to allow for cases in ethology and robotics where the online prowess of a neural representation depends on the presence of enabling factors in an organism or robot’s environment (Wheeler 2005). However, I will not consider this possibility further since it is not directly relevant to Brassier’s discussion.

2. Brassier’s Critique

Brassier argues for two important claims. The first, B1, concerns the capacity of Churchland’s naturalism to express the epistemic norms that might distinguish between competing theories – most relevantly, here, different theories of mental content or processing such as PVA, on the one hand, or folk psychology (FP), on the other.

Brassier claims that Churchland’s attempt to express superempirical criteria for theoretical virtue – “ontological simplicity, conceptual coherence and explanatory power” (Brassier 2007, 18) – in neurocomputational terms leaves his account vacillating between competing theories or ontologies. This is because his revisionary account of the superempirical virtues is either 1) essentially pragmatic, concerned only with the functional effectiveness of the organisms that instantiate these prototype frameworks in their nervous systems, or 2) a metaphysical account whose claims go beyond mere pragmatic efficacy.

The second, B2, is the more programmatic and general. B2 is the claim that naturalism and empiricism are each unable to provide a normative foundation for the scientific destruction of the “manifest image”. B1 supports B2, according to Brassier, because Churchland – whom Brassier regards as one of the most brilliant, radical and revisionary of naturalist metaphysicians – is unable to support his vaulting ontological ambitions without sacrificing his pragmatic scruples. Brassier thus sees Churchland’s philosophy as “symptomatic of a wider problem concerning the way in which philosophical naturalism frames its own relation to science”.

Much of Brassier’s argument in section 1.6 of Nihil Unbound – “From the Superempirical to the Metaphysical” – centers on a relatively short text by Churchland on Bas van Fraassen’s constructive empiricism (Churchland 1985). According to Brassier, Churchland uses this text to propose replacing the “normative aegis of truth-as-correspondence” with the “‘superempirical’ virtues of ontological simplicity, conceptual coherence, and explanatory power” (Brassier 2007, 18).

In the context of our familiar folk-distinction between epistemic criteria for belief-selection and semantic relationships between beliefs and things, Brassier’s gloss might seem to confuse epistemology and semantics. Superempirical truth is a putative aim of scientific enquiry, not a criterion by which we may independently estimate its success (albeit an aim that is questioned both by Churchland and by van Fraassen). This also seems to be Churchland’s position in the van Fraassen essay. The superempirical virtues are, he writes, “some of the brain’s criteria for recognizing information, for distinguishing information from noise” (Churchland 1985; Brassier 2007, 23).

Churchland’s claim in context is not that these are better criteria for theory choice than truth but that they are preferable to the goal of empirical adequacy favoured by van Fraassen’s constructive empiricism, since the latter is committed to an ultimately unprincipled distinction between modal claims about observables and unobservables. From this we might infer that the superempirical virtues are not alternatives to truth but ways of estimating either truth or the relevant alternatives to truth that could be adopted by post-sententialist realisms.

Churchland questions the status of scientific truth not (as in van Fraassen) to restrict sentential truth claims to correlations with their “empirical sub-structures” but because truth is a property of sentences or a property of what sentences express (propositions or statements) and he questions whether sentences are the basic elements of cognitive significance in human and non-human cognizers.

If we are to reconsider truth as the aim or product of cognitive activity, I think we must reconsider its applicability across the board, and not just in some arbitrarily or idiosyncratically segregated domain of ‘unobservables.’ That is, if we are to move away from the more naive formulations of scientific realism, we should move in the direction of pragmatism rather than in the direction of positivistic instrumentalism (Churchland 1985, 45).

Churchland’s claim that sentential or linguaformal representations are not basic to animal cognition is supported by two claims: 1) that natural selection favours neural constructions attuned to the dynamical organization of adaptive behaviour and 2) that this role is not best understood in sententialist terms.

When we consider the great variety of cognitively active creatures on this planet – sea slugs and octopi, bats, dolphins and humans – and when we consider the ceaseless reconfiguration in which their brains or central ganglia engage – adjustments in the response potentials of single neurons made in the microsecond range, changes in the response characteristics of large systems of neurons made in the seconds-to-hours range, dendritic growth and new synaptic connections and the selective atrophy of old connections effected in the day-upwards range – then van Fraassen’s term “construction” begins to seem highly appropriate. . . . Natural selection does not care whether a brain has or tends towards true beliefs, so long as the organism reliably exhibits reproductively advantageous behaviour. Plainly there is going to be some connection between the faithfulness of the brain’s ‘world model’ and the propriety of the organism’s behaviour, but just as plainly the connection is not going to be direct.

When we are considering cognitive activity in biological terms and in all branches of the phylogenetic tree, we should note that it is far from obvious that sentences and propositions or anything remotely like them constitute the basic elements of cognition in creatures generally. Indeed . . . it is highly unlikely that the sentential kinematics embraced by folk psychology and orthodox epistemology represents or captures the basic elements of cognition and learning even in humans . . . If we are ever to understand the dynamics of cognitive activity, therefore, we may have to reconceive our basic unit of cognition as something other than the sentence or the proposition, and reconceive its virtue as something other than truth (Churchland 1985, 45-6).

There is nary a mention of concepts derived from theories of neurocomputation in the 1985 text, but it is pretty easy to see that the PVA model is at least a candidate for Churchland’s notional alternative to the semantics, epistemology and psychology of folk. Prototype points or trajectories are cases of dynamical entities called attractors. An attractor is a limit towards which orbits within a region of a phase space tend as some function (an iterative map or differential equation) is applied to them. When a neural network is trained up, orbits whose vectors include a large variety of input states will evolve towards some preferred prototypical point – that is just how the network extracts categories from complex data sets. This allows trained-up networks to engage in a process that Churchland calls ‘vector completion’: embodying expectations about the organization and category of the input data set which may tend towards a correct assay even when that data set is somehow degraded (Churchland 2007, 102). Since attractors also reflect a flexible, dynamical response to varying input, they are also potential controllers for an organism’s behaviour – with vector completion offering the benefits of graceful degradation in a noisy, glitch-ridden world.
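To make vector completion concrete, here is a toy Hopfield-style sketch in which stored patterns behave as attractors and a degraded input relaxes back onto the nearest prototype. This is a stand-in for the recurrent prototype dynamics Churchland describes, not his actual model.

```python
import numpy as np

# Two stored prototype patterns (+/-1 coding), each of which will act as an attractor
patterns = np.array([
    [ 1,  1,  1, -1, -1, -1,  1, -1],
    [-1, -1,  1,  1,  1, -1, -1,  1],
])

W = sum(np.outer(p, p) for p in patterns).astype(float)   # Hebbian storage of the prototypes
np.fill_diagonal(W, 0.0)                                   # no self-connections

def complete(x, sweeps=5):
    """Relax a (possibly degraded) state vector onto a stored attractor."""
    x = x.astype(float)
    for _ in range(sweeps):
        for i in range(len(x)):                  # asynchronous, unit-by-unit update
            x[i] = 1.0 if W[i] @ x >= 0 else -1.0
    return x

degraded = np.array([1, 1, -1, -1, -1, -1, 1, -1])   # first pattern with one element flipped
print(complete(degraded))                            # recovers the stored prototype
```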

This suggests a potential cognitive and cybernetic advantage over sententialist models. Humans and higher nonhuman animals regularly make skillful, and occasionally very fast, abductive inferences about the state of their world. For example,

  • Smoke is coming out of the kitchen – the toast is burning!
  • There are voices coming from the empty basement – the DVD has come off pause!
  • Artificial selection of horses, pigeons, pigs, etc. can produce new varieties of creature – Evolution is natural selection!

But is our capacity for fast and fairly reliable abduction consistent with the claim that beliefs are “sentences in the head” or functionally independent representations of some other kind? Jerry Fodor, for one, concedes that this makes abduction hard to explain, because it requires our brains to put a “frame” around the representations relevant to making the abduction – information about the Highway Code or the diameter of the Sun probably won’t be relevant to figuring out that burning toast is causing the smoke in the kitchen. But within the FP framework, relevance is a holistic property beliefs have in virtue of their relations to lots of other beliefs. But which ones? How do our brains know where to affix the frame in any given situation without first making a costly, unbounded search through all our beliefs, inspecting each for its relevance to the problem?

Churchland thinks that the PVA model can obviate the need for epistemically unbounded search, because the holistic and parallel character of neural representation means that all the information stored in a given network is active in the relaxation to a specific prototype (Churchland 2012, 68-9). It’s possible that Churchland is being massively over-optimistic here. For example, can PVA theory convincingly account for the kind of analogical reasoning that is being employed in the case of Darwin’s inference to the best explanation? Churchland thinks it can. He argues, reasonably, that prototype frameworks are the kind of capacious cognitive structure that can be routinely redeployed from the narrow domain in which they are acquired, so as to reorganize some new cognitive domain. The details of this account are thin as things stand, but the basic idea seems worth pursuing. Children and adults regularly misapply concepts – e.g. when seeing a dolphin as a fish – with the result that other prototypes (e.g. mammal) end up having to be rectified and adjusted (Churchland 2012, 188-9).

Moreover, according to Churchland, the PVA system provides a semantic substitute for truth in the form of the aforementioned homomorphism or structural conformity between prototype neighborhoods and the structure of some relevant parts of the world.

So the take-home moral of the excursion into the biology of neural adaptation, for Churchland, is that truth is not a necessary condition for the adaptive organization of behaviour, and that if we are to understand the relationship between cognitive kinematics and the organization of behaviour we may need to posit units of cognitive significance other than sentential/propositional ones. This new conception of cognitive significance, he thinks, is liable to be constructive, because it will make possible a closer understanding of the connection between the morphogenesis of neuronal systems, the dynamics of representation and the dynamical organization of behaviour.

Strangely, Brassier seems to read Churchland as making a quite different claim in the quoted passage: namely that the superempirical criteria of theory or prototype-framework choice are reducible (somehow) to the adaptive value of trained networks in guiding behaviour:

On the one hand, since ‘folk-semantical’ notions as ‘truth’ and ‘reference’ no longer function as guarantors of adequation between ‘representation’ and ‘reality’, as they did in the predominantly folk psychological acceptation of theoretical adequation – which sees the latter as consisting in a set of word-world correspondences – there is an important sense in which all theoretical paradigms are neurocomputationally equal. They are equal insofar as there is nothing in a partitioning of vector space per se which could serve to explain why one theory is ‘better’ than another. All are to be gauged exclusively in terms of what Churchland calls their ‘superempirical’ virtues; viz. according to the greater or lesser degree of efficiency with which they enable the organism to adapt successfully to its environment. (Brassier 2007, 19)

It is implicit in Churchland’s account that the superempirical virtues must be virtues applicable to neural representational strategies – since these are the more basic elements of cognition to which he alludes in his discussion of van Fraassen. However, it does not remotely follow that these virtues should be identified with “the greater or lesser degree of efficiency with which they enable the organism to adapt successfully to its environment”, since, as Churchland emphasizes even here, there is only an indirect relation between “the faithfulness of the brain’s ‘world model’” and its organizational efficacy. For example, the functional value of a prototype scheme for an organism is only indirectly related to its representational prowess or accuracy – factors like speed, ease of acquisition and energy consumption would also need to be factored into any ethological assessment of competing schemes’ costs and benefits. As work in artificial intelligence shows, fast and dirty representational schemes which work in reliably present-at-hand environmental contexts, while lacking rich representational or conceptual content, seem to be evolutionarily favoured in many instances (see Wheeler 2005).

In fact, there is nothing in this passage that suggests that Churchland thinks that the superempirical virtues must be reduced to evolutionary-functional terms at all – evolutionary theory just does not play this constitutive role in his theory of content or his epistemology.

Of course, it does not follow that Churchland precludes a neurocomputation-friendly understanding of the superempirical virtues. He claims that they need to be as applicable to the understanding of epistemological systems that do not incorporate cultural or linguistic components as to those that do. He also implies, as we have seen, that these systems should be understood as engaged in a constructive activity evaluable according to criteria that can be generalized well beyond the parochial sphere of propositional attitude psychology. Churchland states as much when he claims that they are the brain’s “criteria” for distinguishing information from noise: simplicity, coherence and explanatory power need to be interpreted in a generalized manner consilient with the PVA theory of content (see also Brassier 2007, 23).

Churchland thus needs a generalized, PVA-friendly account of the superempirical virtues. Brassier agrees, but thinks that this requires Churchland either to embrace a neurocomputational version of idealism – which, as a realist, he would not want – or to posit a “pre-constituted physical reality” and thus to “forsake his neurocentric perspective” by adopting a metaphysics which cannot be secured from within a naturalistic framework (Brassier 2007, 20-1).

Well, for sure, no realist worth her salt will want to commit to the claim that reality is constituted by its being a possible representatum of a neurological process. The nearest any contemporary realist comes to this idea is the claim on the part of Ontic Structural Realists that to be is to be a pattern, and that a pattern ‘is real’ if the compression algorithm required to encode it requires a smaller number of bits than a ‘bit string’ representation of the entire data set in which the pattern resides (Dennett 1991, 34; Ladyman and Ross 2007, 202). But a) this is a far more general constraint on existence than Brassier’s touted neurological variant and should in no way be confused with a commitment to a kind of transcendental subjectivity; b) there is no reason why Churchland has to embrace anything like it (though he might for all I know). From the claim that the superempirical virtues are ascribable, in some form, to neurocomputational structures it does not follow that every constituent of reality must necessarily be accessible to neural coding strategies.
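As an aside, the compression criterion is easy to illustrate. In the toy sketch below, zlib stands in (imperfectly) for the ideal encoder: the patterned string compresses to far fewer bytes than its verbatim representation, while the pseudo-random string does not. None of this is in Dennett or Ladyman and Ross; it is just an illustration of the idea.

```python
import random
import zlib

patterned = b"01" * 500                                              # obvious repeating pattern
noise = bytes(random.Random(0).getrandbits(8) for _ in range(1000))  # patternless-looking data

print(len(zlib.compress(patterned)), "of", len(patterned))   # compresses dramatically: a real pattern
print(len(zlib.compress(noise)), "of", len(noise))           # little or no compression
```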

Now, clearly, in order to frame the realist thought that reality may outrun our neural coding strategies, the cognizer must have a concept of reality and a concept of what it is to represent it (e.g. a partial mapping or homomorphism from abstract prototype structure onto abstract world-structure), and these must be embodied in the thinker’s neural states, somehow. If we are dualists, or if we believe that conceptual content is not a property of neural states, then we will deny that this is possible. However, Brassier does not explain in his critical discussion of Churchland why one should reject the claim that conceptual content is a property of neural states. Indeed, he specifically disclaims this critical option earlier on when rejecting Lynne Rudder Baker’s criticism that the Churchland-style eliminativist rejection of propositional attitudes involves a self-vitiating performative contradiction (Brassier 2007, 17).

Does Brassier have any other arrows in his quiver? Well, he argues that if the superempirical virtues are “among the brain’s most basic criteria for recognizing information” then all conceptual frameworks that fail to maximize representational adequacy – like FP – would have been eliminated. Thus if simplicity, coherence and explanatory power are constitutive of representational success: “all theories are neurocomputationally equal inasmuch as all display greater or lesser degrees of superempirical distinction” (Brassier 2007, 23). This seems wrong for at least two reasons. Quite obviously, if superempirical distinction is an ordinal concept (as Brassier concedes in this passage), some theories can have more of it than others and will not be neurocomputationally equal. This is a recurrent trope in Churchland’s work: some conceptual frameworks mesh the ontology of natural science with our experience better than others. Learning to discriminate temperatures according to the Kelvin scale, for example, allows us to map our experience more directly onto the regularities expressed in the ideal gas laws. Thus the Kelvin scale has greater superempirical distinction than the Fahrenheit and Celsius scales, though, as Churchland amusingly recounts in Plato’s Camera, somewhat less cultural heft in the common rooms of the University of Winnipeg (Churchland 2012, 227).
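A small worked example of the point about temperature scales, assuming nothing beyond the textbook ideal gas law P = nRT/V and some made-up quantities: at fixed volume and amount of gas, doubling the Kelvin reading doubles the pressure, whereas ‘doubling’ a Celsius reading barely changes it.

```python
R = 8.314                  # gas constant, J / (mol K)
n, V = 1.0, 0.0224         # 1 mol of gas in 22.4 litres (arbitrary illustrative values)

def pressure(T_kelvin):
    """Ideal gas law at fixed n and V: P = nRT/V, with T in Kelvin."""
    return n * R * T_kelvin / V

# 'Doubling' on the Celsius scale (20 C -> 40 C) raises the pressure by only ~7%...
print(round(pressure(40 + 273.15) / pressure(20 + 273.15), 3))   # ~1.068
# ...whereas doubling on the Kelvin scale doubles the pressure.
print(pressure(2 * 300.0) / pressure(300.0))                     # 2.0
```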

Of course, it is always possible that the empirical and structural virtues of theories might underdetermine theory choice, and thus choice of ontology, in certain situations. There could, in principle, be theories with disparate ontologies that are equally good by way of whatever variants of simplicity, coherence and explanatory power are applicable to the PVA model. This seems right, but it is not the same as all theories being on equal terms. Nor does it obviously preclude the naturalist from framing an ontology that is constrained by these virtues in some way. I conclude that Brassier fails to establish B1. The PVA model does not leave Churchland unable to say why some theories are better than others. And it does not preclude Churchland, or the fan of the PVA model, from having a naturalistically constrained ontology. But if B1 is not established then B2 – the claim that naturalism is unable to provide a satisfactory account of science – is not established on this reading.

References:

Brassier, Ray (2007), Nihil Unbound: Enlightenment and Extinction, Palgrave Macmillan.

Churchland, Paul (1985), “The Anti-Realist Epistemology of van Fraassen’s The Scientific Image”, in Images of Science, edited by P. M. Churchland and C. A. Hooker, Chicago: University of Chicago Press.

Churchland, Paul (1998), ‘Conceptual similarity across sensory and neural diversity: The Fodor/Lepore challenge answered’, Journal of Philosophy 95 (1), 5-32.

Churchland, Paul (2007), Neurophilosophy at Work, Cambridge: Cambridge University Press.

Churchland, Paul (2012), Plato’s Camera: How the Physical Brain Captures a Landscape of Abstract Universals, Cambridge, MA: MIT Press.

Dennett, Daniel (1991), ‘Real Patterns’, Journal of Philosophy 88: 27-51.

Ladyman, James and Ross, Don (2007), Every Thing Must Go: Metaphysics Naturalized, Oxford: Oxford University Press.

Ramsey, William, Stich, Stephen and Garon, Joseph (1991), ‘Connectionism, eliminativism, and the future of folk psychology’, in William Ramsey, Stephen P. Stich and D. Rumelhart (eds.), Philosophy and Connectionist Theory, Lawrence Erlbaum.

Wheeler, Michael (2005), Reconstructing the Cognitive World: The Next Step, Cambridge, MA: MIT Press.


3 thoughts on “Nature Unbound: Brassier on Churchland’s Realism”

  1. Sounds like Plato’s Camera is well worth the read! This post certainly was.

    Fantastic recapitulation of the details and the overarching stakes of Churchland’s position, David. Just a quick question regarding the following:

    “In fact, there is nothing in this passage that suggests that Churchland thinks that the superempirical virtues must be reduced to evolutionary-functional terms at all – evolutionary theory just does not play this constitutive role in his theory of content or his epistemology.”

    Doesn’t this cut against the requirement that PVA apply to nonhuman species as well? How could this constitute a SE virtue short of some prior commitment to the evolutionary continuity of humans and nonhumans, and the presumption that linguaformal cognition is raised upon something more basic, archaic, and prelinguistic?

    I took the larger point of Brassier’s critique of Churchland to be one of demonstrating the ineliminability of the normative from an exemplary, *thoroughgoing* mechanistic account of epistemic content. Churchland doesn’t go far enough (just as I think Brassier doesn’t go far enough!), and as a result is *threatened* with interpretations that drag him back into the mire of semantic cognition.

    So while I agree with the specifics of your critique, there’s a sense in which Brassier’s question remains hanging above your defence of Churchland. Saying, for instance, that super-empirical virtue is ordinal is well and fine, but the task of explaining why, short of question-begging or some transcendental skyhook, this particular conceptual framework is ‘better’ than that, remains incomplete.

    In other words, I’m wondering if Brassier doesn’t frame his critique of Churchland in such a way that he doesn’t need any of his arguments to stick, only to *apply.* Perhaps his ultimate conclusion – that philosophy has yet to think through the nihilistic implications of science without blinking – could very well survive your critique on this modified reading.

    For my own part, I’m convinced that these debates are in for some pretty exhaustive renovation once people come around to the fact that we are only secondarily a universal problem solving device (if at all), that we are primarily the assemblage of heuristics cognitive psychology is in the process of unearthing, a box of specialized tools, each of them *matched* (in the ecological rationality sense) to specific tasks, and all of them invisible to metacognition as well as blind to one another. This is what Dennett is trending toward with his ‘stances’ approach, the notion that certain stances are inappropriate to certain problematics. But the picture is about to get far more complicated (and the intentional idiom of ‘stances’ needs to be ejected altogether).

    If you consider the *selectivity* necessarily involved in Churchland’s account of how PVAs extract and compress environmental information you can quickly see the plausibility, if not inevitability of a ‘heuristics all the way down’ approach. On this view Churchland is simply suffering from ‘normative sentimentalism,’ an unwillingness to surrender the universal presumption of a certain pragmatic sensibility (the family of ‘norm heuristics,’ or a ‘normative stance’), one that connects his views, as radical as they are, to the normative assumptions of his audience and peers.

    Seeing the logic of the Enlightenment through, as Brassier argues, is no easy feat!

    • Hi Scott,

      Really glad you enjoyed the post. I should admit that I found Nihil a very difficult, if fantastically provoking and original work of philosophy. I still don’t understand the Chapter on Laruelle and probably never will! As I understand it, Ray’s current position would probably be that Churchland’s theory of content is just wrong and that conceptual thinking is socially constituted by norms of public language. This is fine, if you are willing to buy the idea that non-linguistic creatures don’t represent, know or infer (or do so in an “as if” handwavy manner). Maybe that’s right and language users are the only “sapients”. But the argument in 1.6 of NU is an immanent critique which seeks to demonstrate that Churchland’s naturalism fails to support a realism worthy of the name, not that it starts from false semantic premises.

      Regarding the biological stuff. You’re right that Churchland ceaselessly emphasizes our cognitive kinship and continuity with other creatures. Cognitive capacities are adaptations, but that’s not the same as saying that being adaptive and having one or another cognitive virtue are the same thing.

      Your wider point is very well made. Even in the early article I quote, Churchland doubts whether there is a complete or final theory to be had – or whether the idea even makes much sense (Churchland 1985, 46). But while this may be epistemically problematic, it seems to be compatible with a robust metaphysical realism. In that text this amounts to three commitments: 1) that claims about theoretical entities have no less claim to semantic virtue than claims about observables; 2) that “there exists a world, independent of our cognition, with which we interact and of which we construct representations” (ibid.); and 3) that global theoretical excellence is the only rational measure of ontological excellence. The only justification for 3 that I can think of is that if there is consistent ontological structure in the world, then the globally more eminent theories will capture it better than the less eminent ones. If there are Azathoths – cosmic sumps of hyperchaos – in the world, these will only show up as gaps or noise. Likewise, if there are noumenal properties that are perfectly inert and never make any observable difference to anything, our heuristics will be perfectly useless to explicate them. But admitting such possibilities seems to strengthen realism rather than weaken it.

  2. For what it’s worth I wrote this satirical piece

    http://rsbakker.wordpress.com/speculative-musings/adventures-in-speculative-realism/rhapsophy-a-prolegomena-to-the-next-whacked-out-problematic-assumption/

    after reading the Laruelle chapter – an attempt to see what Laruelle is up to through a Derridean lens. Don’t let the tone fool you: I actually think Laruelle is up to something quite profound with non-philosophy. Though (like Laruelle himself) I’m not sure about its utility.

    I’m not convinced that language-using creatures represent! I too think that Nihil Unbound is a very profound and important work, but I think Brassier actually runs afoul of the very criticism he levels (or at least I think he levels) at Churchland, and Nietzsche, ultimately, at the end of the book: grabbing the last intentional rope (inferentialism) hanging rather than seeing the nihilistic consequences of the Enlightenment (old and new) through to the bitter end (which is to say, the posthuman).

    “but that’s not the same as saying that being adaptive and having one or another cognitive virtue are the same thing”

    I agree. And I think you’re right that this represents an unwarranted assumptive leap on Brassier’s part. But it’s still an *easy* assumption to make as well, especially since his critique, like most of his critiques throughout the book, turns on holding his targets’ own theoretical practice accountable to their first-order theoretical claims. (His critique of Meillassoux was absolutely devastating in this respect).

    Reading eliminativists like Dennett and Churchland I often get the same feeling I get with my own approach, namely that their pragmatism smacks of theoretical convenience (read: a way to avoid being pinned down by my critical interlocutors) the way I worry my skepticism does. In a strange sense they almost force you to make the kind of assumptive move Brassier makes. Or am I misreading you?

    Regarding ‘final theory,’ the thing I always try to remember is that the picture emerging out of cognitive science is so strange that it’s anybody’s guess. What if it turns out, for instance, that the intuitions underwriting ‘metaphysical realism’ are *heuristically* anchored? This is where the second-order implications of a first-order commitment to evolution become so important, simply because heuristic problem solving pretty clearly seems to be the way evolution rolls. This makes the issue of ecological MATCHING absolutely crucial. The ancient question, What are the limits of cognition? which assumes a (probably mythical) universal capacity for problem solving, is replaced with, What are the limits of heuristic X?

    This is why I’m always nattering on about the blackboard being wiped clean for philosophy: If we come to realize that our metaphysical intuitions are anchored in a heuristic that possesses a broad, but ultimately constrained scope of application, that this is why philosophy seems to continually (and fruitlessly) recapitulate the same conceptual tropes, then the possibility of finally thinking beyond ‘realisms’ and ‘idealisms,’ ‘subjectivity’ and ‘objectivity’ becomes very real.

    But because it forces us to think around our heuristic cognitive defaults instead of through, it likely won’t be easy. I read Churchland and Brassier as making mighty attempts, only to ultimately shrug their shoulders in exhaustion and seek refuge in the intuitive comforts of intentional cognition.
