conceptual metaphor in the practice of computer music
Description: Given the breadth of possibility in using the computer as a musical instrument, one should consider the design of novel interfaces as informed by research in conceptual metaphor.
CONCEPTUAL METAPHOR IN THE PRACTICE OF COMPUTER MUSIC

Thesis submitted in partial fulfillment of the requirements for the Degree of Master of Fine Arts in Electronic Music and Recording Media

Mills College, Spring 2011

by Peter Ho-Kin Wong
Approved by: Reading Committee
_________________________ Chris Brown Director of Thesis
_________________________ James Fei Reader of Thesis

_________________________ Chris Brown Head of the Music Department
_________________________ Dr. Sandra C. Greer Provost and Dean of the Faculty
CONTENTS

1. Introduction
2. Background
   2.1. The mapping question
   2.2. Conceptual metaphor
3. Interface strategies
   3.1. Manifested metaphors: simple yet telling examples
      3.1.1. A thought experiment: a prototype conforming to an alternate pitch metaphor
      3.1.2. Our prototype as a methodological proposal: what really happened here?
   3.2. Existing interfaces and conceptual coherence
      3.2.1. Tangible user interfaces (TUIs)
      3.2.2. The CHOAM fiducial ball controller
   3.3. Proposed future work
      3.3.1. Input: gestural imaging and marked objects
      3.3.2. Mapping image schemas to low- or high-level outputs
4. Informing the conceptual sphere
   4.1. Choices: just pitch space, the conduit metaphor
5. Conclusion
6. Appendix: contents of accompanying media
7. Bibliography
1. Introduction

Arguably the quintessential musical instrument of the early 21st century, the computer as a performance instrument has received generous attention. Practitioners of computer music have devoted their energy to detailed study of very narrow aspects of its use; by small changes in key places in the chain of causation, it can be made into nearly any kind of instrument. How then, within a nearly infinite realm of possibility with regard both to generable sounds and to input mechanisms, can one decide what kind of instrument to build into it?

After some background on the mapping question as it pertains to computer music, and on some particularly pertinent aspects of human cognition, I will examine a crude controller prototype that illustrates the fundamentals of a design procedure maintaining coherence between mappings of gestural controls to sonic outputs and metaphorically based cognitive structures. After a discussion of how this method pertains to existing interfaces, both of others' construction and of my own, I will propose a direction for future work consistent with this methodology. Finally, I will examine ways in which, beyond the more technical correspondences of the previous sections, the cognitive structure discussed can inform the conceptual and compositional grounding of musical works, using a piece of my own as an example.
The ideas which I will bring up in this paper are incredibly simple. However, their simplicity belies a subtlety which should not be discounted; the structure of the language we must use inadvertently encourages eliding some important distinctions.1
2. Background

There are two areas with which the reader will need to be familiar before going further: the question of mapping as it pertains to computer music, and the contemporary theory of conceptual metaphor.
2.1. The mapping question

The issue of input-to-output mapping through the computer as a musical instrument is a vexing problem. Jon Drummond defines mapping as "connecting gestures to processing and processing to response."2 Thus at its most general, it is little more than connecting what goes into the black box with what comes out (see Figure 1).3 If one considers the case of traditional acoustic instruments, the relationship is straightforward: a performer's physical gestures, breath and movement, for example, go into the box, directly causing some kind of resonance, which then comes out of the box as sound in response to the gestures of that performer.

Figure 1: The computer as a musical instrument is essentially a black box.

    2. Jon Drummond, "Understanding Interaction in Contemporary Digital Music: from instruments to behavioral objects," Organised Sound 14/2 (2009): 131.
    3. Drummond's paper goes into much further detail about the different ways to conceive of this metaphor, with varying degrees of complexity. This simpler conception will provide better clarity here.
    1. Michael J. Reddy, "The conduit metaphor: A case of frame conflict in our language about language," in Metaphor and Thought, 2nd ed., ed. Andrew Ortony (Cambridge: Cambridge University Press, 1993).

The mapping here is direct physical coupling and cannot be altered without an alteration of the instrument itself, which may or may not be possible, but will always have limitations. This relationship is further simplified in that there is a direct correlation between the processing and the response; the resonance of the instrument's body is, in a real sense, both these things. In addition, as Bown et al. point out, for traditional acoustic instruments "a musician is adaptive towards an instrument,"4 meaning that the instrument, by its synchronic inalterability, forces change on the musician's part during the interaction. In order for the instrument to change, an instrument builder would need to incorporate any modifications in the next iteration. Thus the mapping is largely constant for a given instrument and performer pair.

    4. Oliver Bown, Alice Eldridge, and Jon McCormack, "Understanding Interactive Systems," Organised Sound 14/2 (2009): 191.

The computer's mapping is in no way as rigidly coupled. Because the processing is, in contrast to acoustic instruments, something very open, its connection to the sound output is arbitrary and can become so complicated that it is often considered an integral part of the compositional process. Further, especially with the current arsenal of human interface devices (HIDs),5 the kinds of gestures that can be captured as input to the processing are even more open-ended than the already huge realm of possibility in translating a single gesture to a single datum for processing. Several strategies have surfaced to deal in particular with how to map the input half of the black box metaphor: one-to-one, one-to-many, many-to-one, and many-to-many.6 A one-to-one mapping is the most transparent, but as a proliferation of such mappings can affect either system performance or the instrument's performability, a one-to-many strategy is often employed to reduce processing load on input mappings and mental load on the performer. A possible manifestation of this strategy would be a single control updating several synthesizer parameters, each of which uses a differently scaled value from the control. To reduce output mappings while keeping a greater number of input mappings, a many-to-one strategy is used. This could be useful if, as one example, several performers each have separate controls for the same parameter. Many-to-many combines the two in any number of ways, and is probably the most commonly used in practice.7

    5. Commonly found on laptops at the time of writing are joysticks, trackballs, trackpads (many multi-touch), keyboards, cameras, accelerometers, photosensors, fingerprint readers, infrared sensors, bluetooth modems, wireless ethernet, and microphones, to name a few. This listing excludes any attachable peripherals, which only increase the possibilities.
    6. Drummond, "Understanding Interaction," 131.
    7. Ibid.

Notably, with this freedom in gesture/processing/response mapping, and the clear notion that the computer user is a programmer,8 the adaptive relationship of the musician towards the instrument breaks down, and one need not wait for the next iteration of the instrument or for the builder's whims to allow the adaptation of the instrument toward the musician.9 Furthermore, this adaptation can take place immediately or even dynamically.

    8. A programmer for these purposes can be anyone who causes a change in a computer's behavior through intentional manipulation of that behavior. This manipulation can be accomplished through writing original software or by manipulating pre-written software.
    9. Bown et al., "Understanding Interactive Systems."

Given the degree of complexity in dealing with the mapping question, electronic musicians have been known to approach the design of systems and algorithms from a compositional standpoint. Even before the widespread use of computers, for Gordon Mumma, his designing and building of circuits was really composing, and his instruments are inseparable from the compositions themselves.10 In light of this attitude, which is by no means Mumma's alone,11 it seems unsurprising that a culture of "behavioral objects"12 has arisen in the internet-connected virtual community that is inseparable from the computer music world. Interface systems and other components of the sound production process, written by programmer/musicians for their own purposes, have been and are being shared as code snippets and modular patches. These objects (Bown et al. use the term in both its material and its programmatic sense) can take a number of different forms, and have varying degrees of utility. They may be nearly whole programs that could almost be considered pieces in their own right, or modules that require some manipulation to be usable at all. That these objects can be shared, modified and repurposed and are the currency and building blocks both func

    10. Gordon Mumma, "Creative Aspects of Live-Performance Electronic Music Technology," Papers of 33rd National Convention (1967): 1.
    11. Among proponents are David Tudor (reported in John D.S. Adams, "Giant Oscillators," Musicworks 69 (1996)), Chris Brown, and John Bischoff (Chris Brown and John Bischoff, "Indigenous to the Net: Early Network Music Bands in the San Francisco Bay Area" (2002) (15 April 2010)).
    12. Bown et al., "Understanding Interactive Systems."
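To make the four mapping strategies from section 2.1 concrete, here is a minimal sketch in Python. The thesis itself specifies no code, so everything below is invented for illustration: the control values are assumed to be normalized to the range 0 to 1, and the synthesizer parameter names (cutoff, resonance, vibrato rate, master gain) and their scaling ranges are hypothetical.

```python
# Sketch of the four input-mapping strategies. Controls are floats in 0..1;
# each function returns a dictionary of (hypothetical) synth parameters.

def one_to_one(ctl):
    # One control drives one parameter directly: most transparent,
    # but many such mappings burden performer and system alike.
    return {"cutoff": ctl}

def one_to_many(ctl):
    # One control fans out to several parameters, each using a
    # differently scaled value from the same control.
    return {
        "cutoff": 200 + ctl * 8000,   # Hz
        "resonance": ctl * 0.9,       # 0..1
        "vibrato_rate": 1 + ctl * 7,  # Hz
    }

def many_to_one(ctls):
    # Several controls (e.g. one per performer) merge into a single
    # parameter; here, a simple average.
    return {"master_gain": sum(ctls) / len(ctls)}

def many_to_many(ctls):
    # Any combination of the above: the first control fans out,
    # while the remaining controls jointly set the gain.
    params = one_to_many(ctls[0])
    params.update(many_to_one(ctls[1:]))
    return params
```

The one-to-many fan-out is where the scaling choices live: a single physical gesture stays simple for the performer while the mapping layer distributes it across the synthesis engine, which is exactly the reduction of mental load described above.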