Predictive Coding and Brassier’s Freedom

I’m currently revisiting earlier work on Brassier’s (2013a) short text on improvisation, with a view to using the Predictive Coding (PC) account of cognition and agency as a framework for understanding the role of improvisation and similar performance in our cognitive economy. The key take-home of that paper, I think, is its picture of a not-necessarily-human (posthuman?) freedom as a coupling of noumenal entities: generative systems whose operations are strictly inaccessible within our phenomenology or the manifest image, though not cognitively inaccessible as such:

The ideal of ‘free improvisation’ is paradoxical: in order for improvisation to be free in the requisite sense, it must be a self-determining act, but this requires the involution of a series of mechanisms. It is this involutive process that is the agent of the act—one that is not necessarily human. It should not be confused for the improviser’s self, which is rather the greatest obstacle to the emergence of the act. The improviser must be prepared to act as an agent—in the sense in which one acts as a covert operative—on behalf of whatever mechanisms can effect the acceleration or confrontation required for releasing the act.

My goal here is speculative rather than explanatory. It is not to suggest that PC is an unassailable or final account of cognition or agency (though it is compelling and rich) but to hint at the functional complexity that is productive of the improviser’s manifest image. Additionally, an account predicated on the idea that brains are prediction machines ironically helps to foreground the insurgently ‘unpredictable’ and open character of improvisation and, thus, its pertinence to a posthuman conception of agency.

PC understands perception and action as hierarchically ordered cycles of prediction-error minimization operating at multiple temporal scales and levels of processing throughout animal nervous systems. The predictions are made by generative models (neural networks) in the form of modulatory feedback that gets compared to ‘bottom-up’ driving signals from ‘input’ neuronal layers lower in the processing hierarchy. Where the model fails to predict the driving signal, its hypothesis is updated until it issues predictions that match it – thereby retuning the model to better predict the agent’s perceptual transactions with its environment.
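
To make the loop concrete, here is a deliberately toy sketch in Python (mine, not drawn from any published PC model): a single unit revises its estimate in proportion to the error between its top-down prediction and the incoming driving signal.

    def update_hypothesis(hypothesis, driving_signal, learning_rate=0.1, steps=50):
        """Revise a hypothesis until its top-down prediction matches the driving signal."""
        for _ in range(steps):
            prediction = hypothesis                          # top-down (feedback) prediction
            error = driving_signal - prediction              # bottom-up prediction error
            hypothesis = hypothesis + learning_rate * error  # retune the model on the error
        return hypothesis

    # A model that initially expects 0.0 converges on a driving signal of 1.0.
    print(update_hypothesis(0.0, 1.0))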

From the purview of Bayesian epistemology, this updating process is weighted both by the ‘likelihood’ – how well a hypothesis predicts the evidence (input) – and by ‘priors’ encoding background expectations, which exert their influence from further up the hierarchy (Feldman 2013, 18). In effect, each prior functions as a conditional likelihood with respect to models further up the hierarchy.
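
In the Gaussian case this weighting can be made explicit: the posterior estimate is a precision-weighted compromise between the prior handed down from above and the likelihood of the input. The function and the numbers below are mine, purely for illustration, not Feldman’s.

    def posterior_gaussian(prior_mean, prior_var, obs_mean, obs_var):
        """Combine a prior from higher in the hierarchy with evidence from below."""
        prior_precision = 1.0 / prior_var
        obs_precision = 1.0 / obs_var
        post_var = 1.0 / (prior_precision + obs_precision)
        post_mean = post_var * (prior_precision * prior_mean + obs_precision * obs_mean)
        return post_mean, post_var

    # A confident prior (variance 0.5) holds the estimate near 0.0 against a
    # noisier observation (variance 2.0) centred on 1.0: posterior mean = 0.2.
    print(posterior_gaussian(0.0, 0.5, 1.0, 2.0))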

Some top-down predictions may code relatively abstract properties of the world in terms of the prior probabilities of coincident features in the environment or umwelt of the agent – such as that changes in objects are typically caused by changes in other objects. As Andy Clark observes in his surveys of the predictive coding literature, these abstract ‘hyperpriors’ have organizing features analogous to those Kant attributes to transcendental synthesis (Clark 2013, 196, §3.3; Clark 2015, 174-5). However, if it makes sense to talk of ‘synthesis’ here, it is better unpacked in terms of the fluent organization of agency and self-maintenance than as a conceptual operation whereby a sensory manifold becomes united in one cognition.

One of the most fascinating aspects of the PC account is the way it complicates our folk distinction between perception, inference and intention or will, since, from this perspective, intention and perception share the same satisfaction conditions. In the PC model, actions are predictions embodied as specific motor patterns at sub-cortical levels.[1] In action cycles, error reduction will involve the realization of more abstract ‘goals’ through the minimization of proprioceptive errors, thereby moving the organism into a predicted configuration (Clark 2015, 131; Adams, Shipp & Friston 2013).[2]
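
Schematically, and on my own simplifying assumptions rather than anything in the cited papers, the difference between perceptual and active inference is which side of the comparison gives way: in action, the prediction stays put and the body moves to fulfil it.

    def act_to_fulfil_prediction(current_posture, predicted_posture, gain=0.2, steps=30):
        """Reduce proprioceptive prediction error by changing the body, not the model."""
        for _ in range(steps):
            error = predicted_posture - current_posture  # proprioceptive prediction error
            current_posture += gain * error              # motor adjustment toward the predicted configuration
        return current_posture

    # A hand at 0.0 is 'predicted' to be on the key at 1.0; acting on the error
    # moves the organism into the predicted configuration.
    print(act_to_fulfil_prediction(0.0, 1.0))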

Thus, when an improvising pianist explores – say – possibilities for sharing rhythmic or melodic lines between hands, she is augmenting her freedom (or functional autonomy) by sculpting ‘dark’ generative mechanisms. These are outside her phenomenologically manifest world, although they are generative conditions for it.

Only their output is phenomenologically available and, as per the PC account, even these embodied processes are subject to the suppression of sensory awareness of the consequences of fluent action. On the PC model this is because active inference can only operate if the ‘gain’ on prediction error from sensory input is attenuated, thereby according functional primacy to the motor system in the reduction of error (Clark 2015, 213-217). J. Limanowski suggests this may explain the standard phenomenological distinction between the lived and objective body.[3]
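
The point about gain can be caricatured with a toy simulation (mine, with made-up numbers): if the goal prediction and the body both respond to the same proprioceptive error, each weighted by its own gain, then high sensory gain erodes the goal before the movement completes, while attenuated gain lets the body reach it.

    def simulate_reach(sensory_gain, motor_gain=1.0, steps=200, rate=0.05):
        """Goal prediction and posture both respond to the same prediction error."""
        goal, posture = 1.0, 0.0
        for _ in range(steps):
            error = posture - goal                 # proprioceptive prediction error
            goal += rate * sensory_gain * error    # perceptual update: the goal is revised toward the felt posture
            posture -= rate * motor_gain * error   # motor update: the body is drawn toward the goal
        return round(goal, 2), round(posture, 2)

    print(simulate_reach(sensory_gain=2.0))   # high gain: the goal collapses and the reach stalls (~0.33)
    print(simulate_reach(sensory_gain=0.1))   # attenuated gain: the posture reaches the goal (~0.91)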

When things ‘click’ in a group improvisation we feel an affective state or groove that seems shared – ‘pressing the same buttons in each of us’ – perhaps because such states are multifunctional elements which can also be used to perceive others’ affective states (See Limanowski 2017, 6).

However, such responses would be of little value to the ongoing improvisation if they were reducible to an intellectualist model according to which each performer ‘types’ or ‘ascribes’ the relevant affective state and infers her partner’s propensities for action. Firstly, such affects are not paradigmatic emotions or ‘affect programs’ evoking standard responses – e.g. fear and fight-or-flight – but subtle indicators of potentials or inflexions in a unique improvisational process. Thus, their influence on the coherence of the performance is plausibly due to their capacity to modulate performance (constrained by the affordances of the player’s instrument and idiomatic choice of materials) rather than to some folk-theoretical inference.

Secondly, interaction here does not only or even primarily issue in predictions but in complex dynamical patterns in which there may occur spontaneous islands of coherence as well as considerable divergence. Walton et al. describe statistical analyses of patterns of coherence between the bodily (forearm and head) movements and playing behaviour of pianists improvising jointly against ostinato patterns, swing backing and drones. For example, the analysis of right-forearm movements over the ostinato pattern, with the two pianists improvising together freely, displays regularly spaced pockets of coordination at multiple temporal scales reflecting the durations over which the ostinato was repeated. This contrasts with the far more homogeneous stretches of coherence when the pianists were asked to play in unison, but also with the patchier dynamics that occurred against the drone (Walton et al. 2015). Finally, they also uncovered some surprising multiscale coherences between up-and-down head movements against a swing track, suggesting that the interaction of performers extends beyond explicit musical gesture to bodily movements that do not enact specific intentions, rendering them inaccessible from a folk-theoretical perspective.
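
A crude way to see what such analyses measure – far simpler than Walton et al.’s cross-wavelet method, and with invented signals and a hypothetical sampling rate – is ordinary magnitude-squared coherence between two movement time series that share one slow periodic component.

    import numpy as np
    from scipy.signal import coherence

    fs = 100                          # hypothetical motion-capture sampling rate (Hz)
    t = np.arange(0, 60, 1 / fs)
    rng = np.random.default_rng(1)

    # Two invented 'forearm' signals sharing a slow ~0.5 Hz component (the ostinato
    # cycle) but with independent faster movement and noise.
    shared = np.sin(2 * np.pi * 0.5 * t)
    pianist_a = shared + 0.8 * np.sin(2 * np.pi * 3.0 * t) + rng.normal(0, 0.5, t.size)
    pianist_b = shared + 0.8 * np.sin(2 * np.pi * 4.5 * t) + rng.normal(0, 0.5, t.size)

    freqs, cxy = coherence(pianist_a, pianist_b, fs=fs, nperseg=1024)
    print(f"peak coherence {cxy.max():.2f} at {freqs[np.argmax(cxy)]:.2f} Hz")  # peaks near 0.5 Hz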

The preceding examples also indicate that the generative mechanisms or models entrained in improvisation are not primarily predictive but differentially productive, spilling out novel and singular (unrepeatable) sonic and bodily events. This is compatible with the PC account if it is construed less in internalist/representationalist terms than as a mechanism for implementing fluent embodied behaviour. As Clark (2015) and Feldman (2013) point out, the mechanisms posited by PC do not operate in a stable, changeless environment that could be characterized by a single true ‘prior’ (the ‘Lord’s Prior’ – see Feldman 2013) but in a profligate, alterable reality. Models that overfit data sets drawn from a time-slice of a mutable environment may be prone to ‘overtuning’ to noise and less adaptable to change. It follows that improvisation may be a special case of culturally transmissible tools of auto-disruption that carry the added benefit of freeing agents to explore potentially stable alternative modes of action in a complex, ever-shifting affordance landscape.
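
The ‘overtuning’ worry is the familiar statistical one about overfitting; a quick sketch (mine, with an arbitrary sinusoidal ‘world’ and invented noise levels) of a high-order polynomial fitted to one noisy slice of data makes the point.

    import numpy as np

    rng = np.random.default_rng(0)
    x_train = np.linspace(0, 1, 10)
    y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.3, x_train.size)  # one noisy 'time-slice'

    x_test = np.linspace(0, 1, 100)
    y_true = np.sin(2 * np.pi * x_test)                                       # the environment's actual regularity

    for degree in (3, 8):
        coeffs = np.polyfit(x_train, y_train, degree)
        test_error = np.mean((np.polyval(coeffs, x_test) - y_true) ** 2)
        print(degree, round(test_error, 3))   # the higher-order fit tracks the noise and typically generalises worse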

References:

Adams, R. A., Shipp, S., & Friston, K. J. 2013. ‘Predictions not commands: Active inference in the motor system’. Brain Struct. Funct., 218(3), 611–643.

Brassier, R. 2013a. ‘Unfree Improvisation/Compulsive Freedom’. http://www.mattin.org/essays/unfree_improvisation-compulsive_freedom.htm (Accessed March 2015).

Bruineberg, J. 2017. ‘Active Inference and the Primacy of the “I Can”’. In T. Metzinger & W. Wiese (eds.), Philosophy and Predictive Processing: 5. Frankfurt am Main: MIND Group. doi: 10.15502/9783958573062.

Clark, A. 2013. ‘Whatever Next? Predictive Brains, Situated Agents, and the Future of Cognitive Science’. Behavioral and Brain Sciences 36(3): 181–204.

Clark, A. 2015. Surfing Uncertainty: Prediction, Action, and the Embodied Mind. Oxford University Press.

Debruille, J. B., Brodeur, M. B., & Porras, C. F. 2012. ‘N300 and social affordances: A study with a real person and a dummy as stimuli’. PLoS ONE, 7(10), e47922.

Feldman, J., 2013. ‘Tuning Your Priors to the World’. Topics in Cognitive Science, 5(1), pp.13–34.

Gallagher, S. and Allen, M., 2018. ‘Active inference, enactivism and the hermeneutics of social cognition’. Synthese, 195(6), pp.2627-2648.

Hohwy, J. 2013. The Predictive Mind. Oxford University Press.

Walton, A.E., Richardson, M.J., Langland-Hassan, P. and Chemero, A. 2015. ‘Improvisation and the self-organization of multiple musical bodies’. Frontiers in Psychology, 6, p.313.

Walton, A.E., Washburn, A., Langland-Hassan, P., Chemero, A., Kloos, H. and Richardson, M.J. 2018. ‘Creating Time: Social Collaboration in Music Improvisation’. Topics in Cognitive Science, 10(1), pp.95–119.

[1] Reducing prediction error by changing specific body trajectories or relative positions of body parts.

[2] Thus, constraining the improbability (or more accurately the self-information or ‘surprisal’) of an environment relative to a probability distribution of environments corresponding to the nature of an encoding agent (see Hohwy 2013, 51-58).

[3] To do so, he appeals to Thomas Metzinger’s claim that our phenomenology is generated by a dynamic phenomenal self-model (PSM) representing the modeler as a distinct and always present (‘untranscendable’) part of its world (Limanowski 2017, 10). The phenomenal world model thus includes a phenomenal self-model, but neither sub-model represents the processes that implement them – for example, error-reduction processes or the transient attenuations of input for the reallocation of attention or functional role.

5 thoughts on “Predictive Coding and Brassier’s Freedom”

  1. David,

    Where might these “dark” generative mechanisms reside?

    I think that Deleuze might have had something like this in mind when he describes memories as always being (re)created anew, that is, with no single true “prior”.

    Cheers, Tom.

    1. Hi Tom, they reside literally in the agent and in the agent’s environment. So they include neural circuits, but also every thing or process that could become entrained in an action loop in which these circuits play some mediating role. So they can also include instruments, real and virtual (like C-C-combine https://enemyindustry.wordpress.com/2015/03/24/pete-furniss-improvising-with-c-c-combine/), other musicians, drones, sequencers, and so on. Finally, the assemblies of these mechanisms are themselves mechanisms coupling bodily, social and technological systems.

      Yeah, I think this is open to a Deleuzean-materialist reading. In some ways Deleuze is a better fit here than Brassier, since the normative dimension of agency that Brassier covers in his Mattin paper turns out to be largely irrelevant to what he wants to say.

      1. That’s a good question, if a little off topic. A boring answer to that question is that a social system is a theoretical posit that explains the social phenomena – namely, the regularities to which you allude. Most methodological individualists would acknowledge that it is not enough to allude only to individual capacities of human beings, since their interactions fall out of social relationships of various kinds. Co-operative behaviour presupposes someone to cooperate with about something – e.g. whether to take strike action. I’m agnostic about whether we need to posit Durkheimian collective representations, but social relations (which could include class relations, obviously) seem like a bare minimum for explanantia here. Also, it seems important to take into account the way social relations are mediated – e.g. the fact that technologies extend the reach of social relations, etc. Background expectations and values seem important too – both in cooperative activity and in explaining conflict.

      2. I think it matters for all of the reasons that the “frame” problem (see Bert Dreyfus et al.) remains a major barrier in the development of AI. One of the major reasons that autonomous vehicles, for example, are so limited currently is that they can’t cope with the irregularities of human behavior (driving, walking, etc.), and if they are ever going to further mesh with the world we have shaped and still populate, this will be a central concern.
        See how Facebook’s algorithms come up short in dealing with questions of context/mattering:
        https://motherboard.vice.com/en_us/article/xwk9zd/how-facebook-content-moderation-works
