Laurie Anderson’s elegant proposal for a sound installation in which the audience’s own body serves simultaneously as conductive medium and speaker nicely illustrates a problem confronting the Located Event theory of sound (LET – see Roden 2010 – web-published version here). LET comes in two flavours. The first, due to Roberto Casati and Jérôme Dokic, holds that sounds are resonance events in objects. The other, due to Casey O’Callaghan, holds that sounds are disturbances in a medium caused by vibrating objects. On the first theory, spaceships really make sounds in a vacuum, since the sounds just are the vibrations induced in them by their propulsion systems. On the second, they don’t, since there is no medium in which auditory pressure waves can occur.
The fact that both theories cohere more or less equally with folk psychoacoustics is a nice case of epistemic underdetermination. Casati and Dokic’s view implies that there is a sound located in a vibrating tuning fork contained in an evacuated jar; O’Callaghan’s implies that there is none. Most folk would likely judge that there is no sound in the evacuated jar, which seems to favour O’Callaghan. However, were the air in a jar containing a vibrating tuning fork to be alternately evacuated and replenished, they would probably perceive this as an alteration in the conditions of audition of a continuous sound, rather than as an alternation of discrete sound events, which seems to favour Casati and Dokic.
However, LET is also subject to metaphysical indeterminacy or ‘slack’. The causal influence that eventually produces an auditory experience propagates through various stages of processing and transduction. In a digital audio system, the information which eventually determines the vibratory behaviour may be an array of sample values stored in an mp3 audio file. These need to be converted into an analog signal by a digital-to-analog converter (DAC), which in turn drives a speaker diaphragm, generating pressure waves in the air. A speaker diaphragm doesn’t resonate on its own – it needs electrical input.
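The division of labour in that chain can be sketched in a few lines of Python. This is a toy model, not real audio engineering: the sample rate is the CD-audio convention, while the voltage and displacement constants are purely illustrative.

```python
import math

SAMPLE_RATE = 44100  # samples per second, as in CD-quality audio

def synthesize_samples(freq_hz, duration_s, sample_rate=SAMPLE_RATE):
    """Stage 1: the 'computer'. Nothing here vibrates; this is just
    arithmetic producing a stream of numbers, like the array of
    sample values decoded from an mp3 file."""
    n = int(duration_s * sample_rate)
    return [math.sin(2 * math.pi * freq_hz * i / sample_rate) for i in range(n)]

def dac_output(samples, full_scale_volts=1.0):
    """Stage 2: the DAC. Maps each numeric sample to an analog voltage
    level (a real DAC holds each level for one sample period)."""
    return [s * full_scale_volts for s in samples]

def diaphragm_displacement(voltages, volts_per_mm=0.5):
    """Stage 3: the speaker. The only stage that physically moves air.
    Toy model: displacement proportional to drive voltage; volts_per_mm
    is an illustrative constant, not a real driver spec."""
    return [v / volts_per_mm for v in voltages]

samples = synthesize_samples(440.0, 0.01)   # 10 ms of a 440 Hz tone
voltages = dac_output(samples)
motion = diaphragm_displacement(voltages)
```

The point of separating the stages is that only the last one disturbs any air; the earlier ones trade purely in numbers and voltages, which is what makes the location of the sound event slippery.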
Thus it might seem that a sound produced in such a setup is located in the whole system (computer–DAC–speaker) and not in the speaker diaphragm alone. So where is the sound event? If you cut off the digital stream the sound stops, so it is tempting to view the computer–DAC–speaker system as a single resonating system, since it is the whole shebang that produces and maintains the sound. On the other hand, the computer doesn’t vibrate so as to output the sound stored in the mp3 – its activity simply consists in producing a stream of numerical values which tell the DAC what to do – and it certainly doesn’t disturb the air; only the speaker diaphragm does that. So there are reasons for locating the sound in the speaker if you are Casati and Dokic, or at the interface between diaphragm and air if you are O’Callaghan.
In Anderson’s setup, however, the sound is produced by a tape source in the table (one candidate for the event’s location), but what you hear is also due to resonance in your own cranial cavities. So have we one sound located in the system tape–table–screws–elbows–skull represented in the diagram, or a series of sonic events (including the one under the table and the one in your head)? Need there be any metaphysical fact of the matter about where the sound is? I think the claim that there need not be is quite supportable. The sound event occurs, but certain facts about its extent and location are inherently vague.
Interestingly, this does not imply that the sound event is some weird noumenal pulsion welling up beyond our representational capacities. Clearly, we do (in some sense) locate sounds, run Fourier transforms on them, record them, sample them, and so on. Representing sounds is what our auditory systems are designed to do and what studio technicians are paid to do.
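To make the point about representability concrete, here is a minimal, naive discrete Fourier transform in pure Python that recovers the pitch of a synthesized tone. The 440 Hz tone and 8 kHz sample rate are illustrative choices, and real software would use an FFT rather than this O(n²) loop.

```python
import cmath
import math

def dft_magnitudes(samples):
    """Naive discrete Fourier transform: the magnitude of each
    frequency bin up to the Nyquist limit."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

SAMPLE_RATE = 8000
# 400 samples of a 440 Hz sine: exactly 22 cycles, so no spectral leakage
signal = [math.sin(2 * math.pi * 440 * t / SAMPLE_RATE) for t in range(400)]

mags = dft_magnitudes(signal)
peak_bin = max(range(len(mags)), key=mags.__getitem__)
peak_hz = peak_bin * SAMPLE_RATE / len(signal)   # recovers 440.0 Hz
```

However vague the sound event’s spatial boundaries may be, its frequency content is perfectly determinate and recoverable from the samples – representation and metaphysical slack coexist happily.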
Roden, David (2010), ‘Sonic Art and the Nature of Sonic Events’, in Bullot, N.J. & Egré, P. (eds.), Objects and Sound Perception, special issue of Review of Philosophy and Psychology 1(1), pp. 141–156.