
ORIGINAL ARTICLE

Now? Towards a phenomenology of real time sonification

Stuart Jones

Received: 28 October 2010 / Accepted: 1 August 2011 / Published online: 17 September 2011

© Springer-Verlag London Limited 2011

S. Jones, 20 Cleveleys Road, London E5 9JN, UK
e-mail: [email protected]

Abstract  The author examines concepts of real time and real-time in relation to notions of perception and processes of sonification. He explores these relationships in three case studies and suggests that sonification can offer a form of reconciliation between ontology and phenomenology, and between ourselves and the flux we are part of.

Keywords  Sonification · Real time · Real-time · Perception · Attention · Ontology · Phenomenology

No man is an island, entire of itself.

John Donne, Meditation XVII, 1624

I can indeed say that my representations follow one another; but this is only to say that we are conscious of them as a time sequence, that is, in conformity with inner sense.

Immanuel Kant, Critique of Pure Reason, 1781

You can never directly know the outer world. Instead, you are conscious of the results of some of the computations performed by your nervous system on one or more representations of the world. In a similar way, you can't know your innermost thoughts. Instead, you are aware only of the sensory representations associated with these mental activities.

Christof Koch, The Quest for Consciousness, 2004

1 Introduction

To start, some basic thoughts about the key objects of scrutiny: sonification, perception and time.

1.1 Sonification

Sonification is, after all, through its representations, connecting your understanding to something else (the data) by means of your capacity to connect with and understand (represent to yourself) sound.

And often this sonification is of something that has happened or is happening in a traversing of time; in this essay, I will be focused on situations where our sense is that the sonification that we are hearing is being generated from the originating data synchronously.

Whether or not the data are, most surely the sonification itself is traversing time, sound being the sensory domain that is most clearly time dependent, and thus sonification is a time-based practice, whether in mechanical, utilitarian dispositions, such as in medical use, or in more aesthetics-based application in sound art. In all, the representations and our representations of them in consciousness follow one another, and that is how we make sense of them.

And because it is sound, and we are trying to make sense of it, it is very likely that we will have some 'musical' sense of it. That is, musical in the way of being a sequence of aural events that makes conceptual sense as it traverses a span of time. This can be said to be true even of the mechanical sonifications heard in medical situations, that they may have this non-musical 'musicality'.

The rise of interest in sonification is supported by a growing understanding of how our apprehension of the world, as well as our perception of it, is multi-sensorial. This reflects a move away from a culture dominated by sight. Henri Bergson (1911) said, speaking about our consciousness of time when we apprehend a melody:

Doubtless we have a tendency to divide it and to represent it to ourselves as a linking together of distinct notes instead of the uninterrupted continuity of the melody. But why? Simply because our auditive perception has assumed the habit of saturating itself with visual images.

Bergson was talking about the phenomenology of time—our phenomenal consciousness of time as a continuum (which in fact he exampled through musical melody, as did James and Husserl), as opposed to an ontology of time, in which any instant is distinct, separable and measurable. He presents a model where sight cuts time into bits, while sound binds it together.

In an overall, and very important sense, Bergson talked about our perception of time through our perception of change (e.g. the changing pitches in a melody), and how this was not a perception of discrete events, but a perception of flow and sequence, the arch of relationships unfolding in time, which, in music, enables us to apprehend a melody or phrase as a whole.

1.2 Sound and vision

The doctor in the operating theatre does not listen to the sonifications of vital functions. Their conscious attention is focused on the relationship between hands, eye and flesh. The moment there is a change in the sound, however, it abruptly commands their conscious attention.

For most of our evolutionary history we were all hunter-gatherers, either on the lookout for food, or for that which might feed on us. This has had important implications for the way our senses work together. The eyes are in constant movement (saccades) as they scan the visual field for information; in doing so they focus on one thing at a time, while keeping the entire visual field in the background. We are not aware of this process: when we are examining a picture, we may deliberately scan its surface; when we are looking at it, our eyes are unconsciously saccading over the surface, putting together a composite image of the whole. However, while our eyes are engaged in this process, and in particular when the visual attention is focused on a particular object, our hearing is continually attending to, assessing and decoding sound coming from every direction. Of course if an arresting sound event occurs, the aural attention focuses on it and usually brings the visual apparatus round to that region of the aural field. However, in our normal alert state, we are either looking for or looking at a particular thing, while listening to everything. There seem to be three implications of this difference: firstly, we can attend to distinct events with our sight and our hearing simultaneously (Spelke et al. 1976; Muller et al. 2003); secondly, we are much better at handling, and keeping discrete, multiple strands of information with our hearing than with our sight: we can listen to the totality and understand it (Bregman 1990), or we can focus on a single strand (someone talking to us) (Cherry 1953), or several unrelated (someone talking, the traffic behind us, music coming out of a shop door to our left) (Song et al. 2010) or related (the parts in a fugue) (Bigand et al. 2000). Thirdly, considering seeing and hearing in a 'Bergsonian' way, our sight puts us at a distance from what it attends to, it separates us; our hearing puts us at the centre of what it attends to, we are inside.

1.3 Real-time and real time

The term(s) comes from computer science, and here are some definitions/statements:

1. A system is said to be real-time if the total correctness of an operation depends not only upon its logical correctness, but also upon the time in which it is performed. http://en.wikipedia.org/wiki/Real-time_computing

2. Occurring immediately. The term is used to describe a number of different computer features. For example, real-time operating systems are systems that respond to input immediately [my italics]. http://www.webopedia.com/TERM/R/real_time.html;

however,

3. Real-time applications have operational deadlines between some triggering event and the application's response to that event. To meet these operational deadlines, programmers use real-time operating systems (RTOS) on which the maximum response time can be calculated or measured reliably for the given application and environment. A typical RTOS uses priorities. The highest priority task wanting the CPU always gets the CPU within a fixed amount of time after the event waking the task has taken place. On such an RTOS, the latency [my italics] of a task only depends on the tasks running at equal or higher priorities, all other tasks can be ignored. https://rt.wiki.kernel.org/index.php/Frequently_Asked_Questions

also

4. Real time can also refer to events simulated by a computer at the same speed that they would occur in real life. In graphics animation, for example, a real-time program would display objects moving across the screen at the same speed that they would actually move [my italics]. http://www.webopedia.com/TERM/R/real_time.html

and

5. Real time is a level of computer responsiveness that a user senses as sufficiently immediate or that enables the computer to keep up with some external process (for example, to present visualizations of the weather as it constantly changes). Real-time is an adjective pertaining to computers or processes that operate in real time. Real time describes a human rather than a machine [my italics] sense of time. http://searchciomidmarket.techtarget.com/sDefinition/0,sid183_gci214344,00.html

The term, in its computer science origins, is ontological. It is concerned with time as it can be recognised as a thing, quantified and subdivided. Creating and using ontologies of time is a necessary part of computer science, as exampled in this from the DAML (DARPA Agent Markup Language) project for the Semantic Web (Hobbs and Pan 2004):

The time ontology links to other things in the world through four predicates: atTime, during, holds, and timeSpan. We assume that another ontology provides for the description of events, either a general ontology of event structure abstractly conceived, or specific, domain-dependent ontologies for specific domains.
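To give a concrete, if deliberately toy, flavour of what 'linking through predicates' means here, the sketch below (mine, in Python rather than the OWL/RDF in which such ontologies are actually expressed; the event and interval names are invented) stores a few temporal assertions using the four predicates named above and queries them:

# A toy triple store using the four temporal predicates named in the quote:
# atTime, during, holds, timeSpan. The identifiers are invented for
# illustration; the real time ontology is expressed in OWL/RDF, not Python.
triples = [
    ("event:reading_0041", "atTime", "instant:2010-10-28T09:15:03Z"),
    ("event:operation_12", "during", "interval:morning_list"),
    ("state:alarm_active", "holds", "interval:t1_t2"),
    ("event:operation_12", "timeSpan", "interval:0900_1130"),
]

def objects_of(subject, predicate):
    """Return every object linked to `subject` by `predicate`."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects_of("event:operation_12", "timeSpan"))  # -> ['interval:0900_1130']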

This ontological basis is clear from definitions (1), (2) and (3). However, as we scrutinise these statements, things start to get a bit fuzzy. Although (2) uses 'immediately' to describe real-time iterations, (3) talks about latency (delay). This throws into question what 'immediately' could possibly mean. Does it describe an acceptable (within the computer process as defined in an RTOS) level of latency? Or does it describe the human experience of a very short latency? If it does then it has strayed into the world of phenomenology.

In (4), we start to go seriously adrift in relation to ontology: we can have no idea what 'the same speed' might mean, or via which (scaling) parameters this 'same speed' might be measured. Does the statement imply that, again, we are in the realm of perception?

In (5), we have definitely moved into another conceptual domain, where the user's sensory apparatus (? or, in fact, their interpretation?) is invoked, measured with the word 'sufficiently', whatever that might delineate, and where, in contra-distinction to 'real-time', presumably as defined in (1), (2) and (3), we are asked to understand 'real time' as a human rather than a machine sense (?) of time, whatever either of those might be.

Furthering the confusion at the level of definition is the way the term has been appropriated, as exampled by (5), to mean 'something being represented or iterated in the computer environment at the (approximately) same time that it is happening in the external world'. I should make it clear that it is this quotidian 'folk' use of the term that I will be looking at.

(I write the above not to make nit-picking criticisms of use of language, but because something that emerges from examining these texts is an apparent confusion between ontology and phenomenology. The fact that real-time is a concept that lies within the taxonomy of computer usage can lead to it appearing to be an ontological term in situations where in fact it is being used as a phenomenological one. This might be important when considering sonification, where the data would reside in the realm of ontology, while our apprehension and understanding of the sonic representation of it is within the realm of phenomenology. In this way sonification provides a bridge between ontology and phenomenology.)

So (1) through (3) are definitely in the realm of ontology: they are about what time actually is in a particular instance—something that is measured by the computer's clock. (4) is ambiguous: it seems to be ontology: 'same speed' is surely measurable in centimetres per second; however, in order to arrive at a semblance of 'same speed', the simulation relies on the viewer's capacity to decode the (small) image of, for example, a horse travelling at not many centimetres per second across a small screen, and to extrapolate that back to a mental image of a full size horse moving at metres per second at a certain distance from the eye. This is phenomenology, and, most definitely and clearly, so is (5).
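One way to make that implicit scaling explicit (my gloss, with invented figures, not a calculation from this article) is to note that judging 'the same speed' requires the viewer to invert the linear reduction of the image:

v_{\mathrm{world}} \approx v_{\mathrm{screen}} \times \frac{L_{\mathrm{world}}}{L_{\mathrm{screen}}}

so that, say, a 1.5 cm horse image crossing the screen at 4 cm/s stands for a 2.4 m horse moving at roughly 6.4 m/s. The only quantity measurable on the screen is v_screen; v_world exists in the viewer's extrapolation, which is exactly where the ontological reading gives way to the phenomenological one.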

It is, in fact, (5) that most people are talking about when they use, more or less loosely, the terms 'real-time', or 'realtime' or 'real time' and they mean something like: 'the computer response was fast enough (the latency was low enough) that it felt in synch with the real world'.
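By way of a minimal sketch only (mine, not drawn from any of the systems discussed here; the 20 ms threshold is an assumed, illustrative figure rather than a measured perceptual constant), the two senses of the term can be put side by side in code: the latency is measured by the computer's clock, an ontological quantity, while the verdict 'felt in synch' is a claim about the listener:

import time

FELT_SYNCH_THRESHOLD_S = 0.020  # assumed illustrative limit, not a measured constant

def sonify(sample):
    """Stand-in for a real data-to-sound mapping; it only does a little arithmetic."""
    return sum(sample) / len(sample)

def worst_latency(samples):
    """Measure each call with the computer's clock (the ontological side)."""
    latencies = []
    for sample in samples:
        start = time.perf_counter()
        sonify(sample)
        latencies.append(time.perf_counter() - start)
    return max(latencies)

if __name__ == "__main__":
    worst = worst_latency([[0.1, 0.2, 0.3]] * 1000)
    # Whether this "feels in synch" remains the listener's call; the comparison
    # below only encodes an assumption about that judgement.
    print(worst, worst < FELT_SYNCH_THRESHOLD_S)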

If we take the phenomenological statement: 'It feels as if the results of the sonification process I'm listening to are in synch with the data they represent' as a starting point for exploration then we need to uncover what 'in synch' means for different persons in different situations and engagements, what levels and sorts of consciousness might be engaged, and how the various strands above might relate to each other in a phenomenological timespan where some data are being sonified in what is perceived as real time. While doing so I will attempt to keep sight of that ontological time that is measured and quantified.

2 Three case studies

2.1 Christina Kubisch: Electrical Walks

Christina Kubisch uses induction loops in headphones worn by participants to make audible the electromagnetic waves that surround us in both urban and rural locations. There is no question about the synchronicity of the experience, as the sound is produced directly by the waves. Indeed, it is not a representation as one would usually think of it, more a mediation—inasmuch as the electronic oscillations that are experienced as sounds are produced by the interaction of the waves with the induction loops, they are not directly the sound of the waves themselves nor are they Kubisch's rendering of data into sound.

A most important aspect of these walks is that Kubisch stands outside the process of the listener's relationship with where they go and what they hear: she provides the means, gives some guidance, and in some way locates or bounds the physical area that is being explored. As she herself says on her website:

The basic idea of these sound spaces is to provide the viewer/listener access to his own individual spaces of time and motion. The musical sequences are experienceable in ever-new variations through the listener's motion. The visitor becomes a "mixer" who can put his piece together individually and determine the time frame for himself.

The relationship of her work to the dérive is clear and has been noted by others (Cox 2006; Young 2005). What is of interest to me is what real time means here, and how it is, in fact, constructed.

The electromagnetic landscape that an electrical walk reveals is pre-existent from the piece's point of view. The listener is invited to explore, not intervene. The electromagnetic vibrations will be proceeding in ontological time, whether they are heard or not. However, the hearing of them, and more importantly the listening to them, is played out in phenomenological time. That is, the hearing, in time, of these vibrations, is something that is constructed by the explorative and aesthetic decisions of the listener, how they choose to conduct themselves on the walk, and is a function of how their hearing elicits listening, and how their attention is engaged, held and diverted in a landscape that contains both the vibrations and all the other components that are apprehensible with the unaided senses. Further, the experience of time—whether it seems to pass quickly or slowly, whether its passing is recognised or not—is a function of this attention which the exploration and hearing bring about. This phenomenological time, the 'melody' (continuum) that Bergson speaks about (1911), is in constant flux, stretching and compressing, yet it at all times remains completely in synch with the ontological time of the vibrations, and the whole experience is charged with this real time sense, that one is hearing these electrical events as they happen, and that they are part of the 'landscape'. This sense of real time and location is central to the experience and irreducible—that one is revealing these vibrations in this moment in this place by performing these actions. As Kubisch says, again on her website:

The perception of everyday reality changes when one listens to the electrical fields; what is accustomed appears in a different context. Nothing looks the way it sounds. And nothing sounds the way it looks.

And one might note that part of that 'everyday reality', the perception of which changes, is the time. I would suggest that the apprehension of phenomenological time is, in part, a function of attention. I will return to this in Sect. 3.

2.2 Peter Sinclair: Road Music

In British law, there is a motoring offence called 'Driving without due care and attention'. The neurobiologist Christof Koch, talking about attention, cites an example of inattentional blindness in a test where 25% of pilots on a flight simulator did not notice a small aircraft unexpectedly superimposed on the runway (Koch 2004). Simons and Chabris (1999) report how subjects having to track two balls in a game do not see someone in a gorilla suit walking through the players (it's true, I've done the test). Aristotle himself says, in On Sense and Sensible Objects:

There is a further question about sensation, whether it is possible to perceive two things in one and the same indivisible time or not, if we assume that the stronger always overrides the weaker stimulus; which is why we do not see things presented to our eyes, if we happen to be engrossed in thought, or in a state of fear, or listening to a loud noise.

In spite of this rather off-putting context, Peter Sinclair's Road Music (discussed further in his Artist Statement in this issue) invites you to do exactly that—attend to two things at once. In most sonifications, we are not presented with the unfolding source data at the same time that we hear their representation in sound. In Road Music the sounds are generated in real time with minimal latency, from various aspects of the driving experience: elements of the car's motion and elements of the visual scene that the car is moving through. In some sense, therefore, the driver is the performer in this piece: her/his decisions and actions affect the route and the motion of the car, and thence the output of the system. The resulting sound is clearly musical, both in intention and effect, which supports its eliciting of your attention at exactly the time that British law says you should be attending to the road ('Keep your mind on your driving, Keep your hands on the wheel, Keep your snoopy eyes on the road ahead', from the song 'Seven Little Girls Sitting in the Backseat' by Lee Pockriss and Bob Hilliard). But, having experienced Road Music both as a driver and a passenger, I can affirm that it is possible to attend to both the road and the sonification, pace Aristotle.
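To indicate the kind of mapping such a piece implies, here is a generic sketch of low-latency parameter mapping (mine, not Sinclair's implementation; the sensor readings and mapping choices are invented for illustration): motion data are turned into synthesis parameters as soon as they arrive, with no buffering long enough to open a perceptible gap between action and sound.

def read_acceleration():
    """Placeholder for a real in-car sensor read; the values are invented."""
    return {"lateral": 0.4, "longitudinal": -1.2}  # m/s^2

def map_to_synth(accel):
    """Map motion data straight to sound parameters, keeping processing minimal
    so that the result can feel synchronous with the drive."""
    pitch_hz = 220.0 * 2 ** (accel["lateral"] / 2.0)        # swerving bends the pitch
    loudness = min(1.0, abs(accel["longitudinal"]) / 9.81)  # braking or accelerating raises the level
    return pitch_hz, loudness

if __name__ == "__main__":
    print(map_to_synth(read_acceleration()))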

So why is this so? Koch (2004) notes two sorts of mental function that are relevant. One he describes as 'zombie agents'—these are specialised sensory-motor processes that take care of very complex sequences of action while consciousness gets on with something else: most of us can recognise having driven from A to B and on arrival at B having no memory at all of having done so, because consciousness was taken up with, say, thinking about a complex problem. Yet we clearly have driven perfectly competently and in fact executed a series of controls of motion that have played out in a span of ontological time of which we had no phenomenological experience [6]. He also refers to 'gist perception', where we are conscious of components of the visual scene with only marginal perception of them. Gist perception in hearing has also recently been researched (Harding et al. 2007) [7].

However, I don't think that evoking these functions is sufficient to explain the multi-laterality of the experience in Road Music, though they may have bearing on it [8]. It seemed to me in my own experiences of the work that my attention was fully on both the music and the visual and physical scene, and certainly my consciousness at the time was that these attentions, of different sorts, were both proceeding uninterrupted. If there was an alternation of attention then it was too rapid for me to notice. This coherence of attentions is, I think, due to two factors: the real-time relationship of the two and the fact that this relationship is evident and perceptible; and the fact that the audio is experienced as musical (at the very minimum in the sense that I use in 1.1 paragraph 4), and, acting in the Bergsonian sense, draws the other attention into its 'melody', creating a sort of flux ubermusik, i.e. that the totality of the phenomenal experience is drawn and held together by the 'melodic' continuum of the music; one could even perhaps describe Road Music as a sort of Gesamtkunstwerk (Wagner's term for his music dramas, which were predicated on an enmeshed text, music and staging), though I'm not sure if Sinclair would be pleased. This accord is in contradistinction to the kind of situation for which the UK law was predicated, one where the attention to the scene needed to drive the car is in conflict with something else that demands or seizes the driver's attention (see Strayer and Johnston (2001) for an account of experiments combining simulated driving and mobile phone use). The coherence of the experience is one of the most engaging aspects of Road Music, and it is dependent on the sense of real time synchrony that the relatively low latency of the computer process delivering the sonification permits, the sense that this is all happening together now.

[6] Musicians are dependent on 'zombie agents'. The reason they have to practice so much is that, in order for them to be able to execute the extremely complex physical tasks involved, these tasks have to be learned to the point of automation and unconsciousness. This is not just so that the mind can focus on the creative act that directs this unconscious activity, it is actually a cognitive necessity: Castiello et al. (1991) estimated that more than 250 ms intervened between action and conscious percept. Just think how many notes Jimi Hendrix or Franz Liszt routinely played in a quarter of a second!

[7] Their paper is also of interest in relation to my remarks in 1.2 above. See also the POP project (Horaud et al. 2006–2008).

[8] One could perhaps imagine several people in the car: one is engrossed in both the scene and the music; one is conscious of neither because they're thinking about chess; one is listening intently to the music and oblivious to the scene; one is vice versa; one has the music in attention and the scene in gist consciousness; and so on. Hmmm, there's a lot of people in this car…

2.3 Stuart Jones: Meter

Meter dates from 1970. Performers are asked to track one environmental variable (e.g. temperature, light level, atmospheric pressure, humidity and noise level) and one own body variable (e.g. heart rate, breathing rate, temperature and blood pressure) and use them as the 'score' for performance; they are also asked to start by tracking the parameters using meters of whatever sort, but to aim to be tracking using their own unaided senses by the end; the piece ends when all performers consider they have achieved this. There are no other instructions—instrumentation and the way of re-presenting the data (the word 'sonifying' did not exist then) is up to individual performers. The performers used a variety of means to gather the data, which were usually 'lo-tech'—barometers, thermometers both atmospheric and clinical, fairly crude heart monitors, stethoscopes or contact microphones strapped to the chest, stopwatches and other timing devices. In some instances, performers would have to put down their musical instruments in order to gather data. The data gathering was part of the performance, as was the move from instruments to senses as the data-gathering technique. All in all, I would say that it was the process of gathering the data and representing it that interested me, rather than the sheer accuracy of the data gleaned.

The piece differs from Alvin Lucier's Music for Solo Performer (1965) in that performers are expressly asked not to attempt to control their body variables, merely to observe them and use the data. I was interested in different ways of generating material for use in performance and in different approaches to performance, and in this piece I was particularly interested in the performers' sensitivity to the flux they were in and how their own actions changed that flux: these performers, creating this, in this place, at this time, in these circumstances, under these conditions; and how these factors affect one another.

As they do. The piece attempts to make this explicit, inasmuch as variations in the data chart (they can do no more) the variation of factors that are synchronous. It is also explicit that the performance (the product) is part of this, that the piece is reflexive in that the performers' heart rates, etc., are likely to be affected by the performance itself as well as other factors.

The piece has some historical interest in its relation to various artistic currents of the time, in particular Fluxus. It was also at the least unusual in its use of sonification as a basis for performance. It operated in a sort of real time, though with, of course, very high latency (humans are much slower than computers at converting data into sounds). However, it is fundamentally rooted in a notion of real time as an explicit concurrence between what is happening and what is experienced, where the relationship between ontological time and phenomenological time is fundamental to the experience, where what would be classified as part of ontology (the variation of measured temperature in measured time, the heart rate in bpm) is clearly incorporated into the 'Bergsonian Melody'.

The audience were outside of this. They were intellectually aware of what was going on as the programme note consisted of the score, but not privy to the actions of the performers except as spectators. My interest was with performance, and whether the kind of concentration the piece was intended to elicit would work to generate a viable spectacle—that the engagement of the performers and the resultant sounds could be musically compelling. In this respect the piece differs from the works of Kubisch and Sinclair, where the 'audience' is the performer. However, inasmuch as all three examples are performed in some way, and that they all in some way depend on particular and high level attention, I would now like to consider real time in relation to these factors.

3 Attention, real time and performance

In Sect. 1.2, I suggested that humans can simultaneously attend to both the visual and aural scenes and discriminate several simultaneous audio strands. In their different ways, and to different degrees of explicitness, these three examples of real time sonification seem to evoke and support this capacity to attend to simultaneous but distinct sequences of events. I would suggest that they support it, in the case of Electrical Walks by the understood categorical relationship between the headphone sounds and the location; in the case of Road Music by the apprehensible relationship between the musical sonification and the parameters that cause it; in the case of Meter by the obligation to actively read data and render it into sounds.

Although the 'zombie agents' and gist perception referred to above in 2.2 (paragraph 3) can help us understand how sequences in consciousness can be supported by underlying unconscious activity, or how attention can be diverted or enlarged to draw elements at its edge into full consciousness, they do not account for the kind of attention that these pieces seem to demand: it is clear that to succeed to their full potential, all three require very high level and 'divided' or 'dispersed' attention in order that simultaneous sequences and their relationship may be fully available to conscious discrimination.

The capacity to divide attention does exist. Experiments have shown that people can simultaneously focus their visual attention (very briefly—Muller et al. 2003) or their aural attention (Shinn-Cunningham and Ihlefeld 2004) on two objects, or attend to a visual and an aural object (though with some identification deficit—Bonnel and Hafter 1998). At a high cognitive level, in a classic experiment people learned how to simultaneously read and write different texts (Spelke et al. 1976); it has also been shown that presenting material in a mix of visual and aural modes can reduce cognitive load (Mousavi et al. 1995). However, dividing high level attention and sustaining the resultant consciousness does not seem to be an everyday habit. Far from it—no doubt we've all heard 'Pay attention!' at some point in our youth (Broadbent et seq).

In sum, I would say that real time sonification requires us to attend to separate streams of information which we perceive as having a real time relationship (the data; its representation) in such a way that we can correlate them and contextualise them not only in understanding but in 'here and now' experience; I will call this 'dispersed attention'. I would suggest that this dispersed attention demands of us a cognitive state that requires expectation, alertness and engagement.

John Cage was someone who understood this expectation–alertness–engagement very well, recognising it as a condition that is necessary in the span of attention that makes his work into art (4′33″: the composer and performer frame; the listener attends; the art is made). I well remember him performing Cheap Imitation, and using his (then) very real pain from arthritis and gout, and his (always) nervousness as a performer, to generate an excruciating 10-min pantomime of distress and disability that screwed the audience to an almost unbearable pitch of sympathetic anxiety which he released with a smile as he started to play, and which left them in an exalted state of engagement that ensured the attention the piece required.

It is my own belief that Cage's success as a composer stems, in part, from his understanding that attention–perception–consciousness is not unitary, but that it is (a) divisible and (b) the divided parts are sustainable within an apprehended whole. It is easy enough to understand this if one listens to contrapuntal music such as Javanese Gamelan or Bach. We can perfectly well distinguish each part, listen to all of them simultaneously and hear them together as a whole. If we couldn't, the makers of such music would be wasting their time; in fact, they would be unable to make such music except as an abstract, mechanical exercise. Cage himself was an advocate of inclusion, asserting that to enjoy this we do not have to exclude that; that we can include all we are perceiving in the act (the creative act) of enjoyment. He thus extends outwards from the enclosed world of Bach to bring in to the accounted everything that is there at the time to be perceived (this is quite different from the projects of Wagner and Scriabin, which, although multi-sensory, were restricted to those events that the composer had prescribed). Cage achieved this by the seemingly simple trick of giving the listener permission to include. I still remember my own experience of the European Premiere of HPSCHD [11] (John Cage and Lejaren Hiller 1967–1969), in which I participated. As audience one could be with the piece in any way one wished [12]: one could focus on one thing, take in the totality, ignore it all, have a conversation while still paying attention, etc.

[11] In 1972 at the Philharmonie, Berlin. A full performance of HPSCHD entails 7 harpsichords, 52 tape recorders, 6,400 slides and 40 movies. In this performance only one harpsichord (playing the originating Mozart material) was in the auditorium; everything else was dispersed throughout the multi-level foyer, with virtually every square inch of ceiling and wall projected on. One could imagine bedlam but in fact the experience was gloriously coherent. The subsequent British Premiere at the Roundhouse as part of the Proms was a disastrous travesty.

[12] Cage himself said after the premiere in America: "When I produce a happening, I try my best to remove intention in order that what is done will not oblige the listener in any one way. I don't think we're really interested in the validity of compositions any more. We're interested in the experiences of things." (quoted in Time Magazine, Friday, May 30, 1969)

I would like to underline that it seems to me that the separate and, to consciousness, distinct parts (in real time sonification, the data on one hand and the representation on the other) are brought together in consciousness as parts of an apprehended whole that is continuous and indivisible within the span of the attention that holds it together. In this sense, it adheres to Bergson's concept of duration.

In HPSCHD, the 'attender' is within the space of the piece but outside the action of it. No matter how engaged they may be, their position is fundamentally passive. On the other hand, the case studies in Sect. 2 all have an element of performance: rather than passively listening to the sonification, participants engage with it on another level by in some way performing it. This is not a question of intensity of engagement but of experiential and psychological position of engagement: the performer is de facto inside the action of the work and in various ways responsible for its unfolding. In real time data sonification that involves performance, this unfolding is not just of the sonification itself, but of the data also (this is particularly clearly the case in Road Music). The 'performer' is both witness to and part of the scene—the unfolding of which their actions contribute to—which is played out in the data and thence played, in real time, in the sonification. This performativity places the attender at the centre of a complete scene, as both originator and validator of their own experience—which is, of course, the assertion of phenomenology (Merleau-Ponty 1945, preface). I contend that this is only possible because of the quality and positioning of the attention we bring to this span of time, a state of attention that is evoked by our involvement as performer. I would also contend that the faculty to attend in this way is completely natural to us who evolved to hunt and be hunted simultaneously. If it seems in any way unfamiliar, it is probably because everyday modern (domesticated) life not only no longer requires it of us but also favours the sense of sight above the sum of senses as decoder of the scene (the data).

4 Conclusion

If we conceive of a span of time when we are attending to all that is presented to us by the sum of our senses, it is hard to conceive of this without including the ontology of the scene (indeed, part of Bergson's project was to reconcile the phenomenological experience of duration with, as argued by Deleuze and Guattari (1991), an ontological conception of it as 'the variable essence of things'). We can only conceive of or read the ontology, but perhaps one of the things that real time sonification of data does is to offer a form of 'reconciliation' of its own: (some of) what we perceive forms at the same time a data set, ontological, measured, synchronously represented to our perception. The ontology remains unperceived, but perhaps appreciated in understanding, as we ourselves reconcile in consciousness the originating phenomena and their representation.

There seems to be a 'recipe' of requirements for the sort of consciousness I have been describing in this essay:

the understanding that we need to attend in a certain way, and the willingness to do so; alertness; active engagement; multi-sensory perception; dispersed attention; and in the case of real time sonification, apprehension of the relationship between the data and the representation.

I suggest that this capacity of consciousness derives from our human origins, not out of any romantic prelapsarian nostalgia, but because it seems clear that it exists as an innate faculty that is valuable but now underused. I suggest also that engaging in performative real time sonification can arouse it and the connection that it delivers [13].

[13] One might say that the Songlines of the indigenous Australians are the oldest existing form of sonification. It would seem that, to the indigenous Australians, the songline traces a vast repository of spatial, temporal and mythic data, represented in its singing. For the most part, the song acts as a mapping and storing of knowledge, but it seems also to be in some way a representation of the duration of life, and can be sung in situ, in real time (Chatwin 1987).

If phenomenology, from Bergson and Husserl to Heidegger and Merleau-Ponty, has a metaphysical 'mission', it is to alert us to the scope and capabilities of our consciousness, and I would like to end with three quotes. The first is from Merleau-Ponty, in the preface to Phenomenology of Perception (1945):

Phenomenology's task was to reveal the mystery of the world and of reason. […] It is as painstaking as the works of Balzac, Proust, Valéry, or Cézanne—by reason of the same kind of attentiveness and wonder, the same demand for awareness, the same will to seize the meaning of the world or of history as that meaning comes into being [my italics].

The second is from Bruce Chatwin's book The Songlines (1987) and is part of an account of a journey with an Aboriginal who wanted to visit a part of his Dreaming (Native Cat) to which he had never been:

This lesser stream was the route of the Tjilpa [Native Cat] Men, and we were joining it at right angles.

As Arkady turned the wheel to the left, Limpy bounced back into action. Again he shoved his head through both windows. His eyes rolled wildly over the rocks, the cliffs, the palms, the water. His lips moved at the speed of a ventriloquist's and, through them, came a rustle: the sound of wind through branches.

Arkady knew at once what was happening. Limpy had learnt his Native Cat couplets for walking pace, at 4 miles an hour, and we were travelling at twenty-five.

Arkady shifted into bottom gear, and we crawled along no faster than a walker. Instantly, Limpy matched his tempo to the new speed. He was smiling. His head swayed to and fro. The sound became a lovely melodious swishing; and you knew that, as far as he was concerned, he was the Native Cat.

And the third, once more Bergson (1911):

There is simply the continuous melody of our inner life—a melody which is going on and will go on, indivisible, from the beginning to the end of our conscious existence.

Acknowledgments  The excerpt from The Songlines by Bruce Chatwin, published by Jonathan Cape, is reprinted by permission of The Random House Group Ltd.

References

Bergson H (1911) La Perception du Changement (The Perception of Change), collected in La Pensée et le mouvant (1934), translated as The creative mind: an introduction to metaphysics. Citadel Press, New York

Bigand E, McAdams S, Foret S (2000) Divided attention in music. Int J Psychol 35:270–278

Bonnel A-M, Hafter ER (1998) Divided attention between simultaneous auditory and visual signals. Percept Psychophys 60(2):179–190

Bregman AS (1990) Auditory scene analysis: the perceptual organization of sound. MIT Press, Cambridge

Castiello U et al (1991) Temporal dissociation of motor responses and subjective awareness. A study in normal subjects. Brain 114:2639–2655

Chatwin B (1987) The songlines. Jonathan Cape, London

Cherry EC (1953) Some experiments on the recognition of speech, with one and with two ears. J Acoust Soc Am 25:975

Cox C (2006) Invisible cities: an interview with Christina Kubisch. "Electricity", Cabinet Magazine, Issue 21, spring

Deleuze G, Guattari F (1991) Qu'est-ce que la philosophie? Editions de Minuit, Paris. Translated by Tomlinson J, Burchell III G as What is philosophy? Columbia University Press, New York

Harding S, Cooke M, Konig P (2007) Auditory gist perception: an alternative to attentional selection of auditory streams? Lecture Notes in Computer Science 4840:399–416. doi:10.1007/978-3-540-77343-6_26

Hobbs JR, Pan F (2004) An ontology of time for the semantic web. ACM TALIP 3(1):66–85

Horaud R et al (2006–2008) POP project. http://perception.inrialpes.fr/POP/. Accessed 14 Sep 2010

Koch C (2004) The quest for consciousness. Roberts and Company Publishers, Englewood, Colorado

Kubisch C. http://www.christinakubisch.de/english/install_induktion.htm. Accessed 14 Sep 2010

Merleau-Ponty M (1945) Phénoménologie de la perception. Gallimard, Paris. (1962) Phenomenology of perception (trans: Smith C). Routledge & Kegan Paul, London

Mousavi SY, Low R, Sweller J (1995) Reducing cognitive load by mixing auditory and visual presentation modes. J Educ Psychol 87(2):319–334

Muller MM, Malinowski P, Gruber T, Hillyard SA (2003) Sustained division of the attentional spotlight. Nature 424:309–312

Shinn-Cunningham B, Ihlefeld A (2004) Selective and divided attention: extracting information from simultaneous sound sources. In: Proceedings of ICAD 04, the tenth meeting of the International Conference on Auditory Display, Sydney, Australia, July 6–9

Simons DJ, Chabris CF (1999) Gorillas in our midst: sustained inattentional blindness for dynamic events. Perception 28:1059–1074

Song J, Skoe E, Banai K, Kraus N (2010) Perception of speech in noise: neural correlates. J Cogn Neurosci. doi:10.1162/jocn.2010.21556

Spelke E, Hirst W, Neisser U (1976) Skills of divided attention. Cognition 4(3):215–230

Strayer DL, Johnston WA (2001) Driven to distraction: dual-task studies of simulated driving and conversing on a cellular phone. Psychol Sci 12:462–466

Time Magazine Archive. http://www.time.com/time/magazine/article/0,9171,840155,00.html. Accessed 14 Sep 2010

Young R (2005) Christina Kubisch Electrical Walks. Derivative sounds. In: Neset AH, Dzuverovic L (eds) Her noise. Forma Arts and Media, Newcastle upon Tyne, pp 35–36