IEE Colloquium on Audio and Music Technology: the Challenge of Creative DSP - London, UK (18 Nov. 1998)


SYNTHESIS FROM MUSICAL INSTRUMENT CHARACTER MAPS

Paul Masri, Nishan Canagarajah

Abstract

A new method is introduced for encapsulating the properties of a musical instrument for synthesis that is realistic both in terms of sound quality and responsiveness. Sound quality is achieved using analysis-synthesis techniques for capturing and reproducing the sounds of an instrument from actual recordings. The concept of the Musical Instrument Character Map (MICMap) is introduced as the basis for achieving responsiveness.

The MICMap relates parameters about how an instrument is played to the sounds that the instrument creates. For example, the MICMap of a cello might relate the playing parameters of bowing force and bowing speed to the sound properties of harmonic magnitude. The MICMap has been implemented with neural networks, using a combination of supervised and unsupervised learning methods. Results are presented for an instrument model that accepts initial excitation only (e.g. plucked and struck instruments) and progress to date is described for making the transition to instruments which receive continuous excitation (e.g. bowed and blown instruments).

1 Introduction

In this paper, a new paradigm for musical instrument synthesis is described. The basis of the method is a combination of analysis-synthesis and a nonlinear mapping function.

The power of analysis-synthesis is in the representation of sound after analysis, prior to transformation or synthesis. This representation is complete (in that no other data is needed to synthesize a reproduction) but it is also musically relevant. This latter property makes it possible to perform highly non-linear but intuitively simple transformations, such as time-stretch, in a straightforward manner. For the same reason, it also makes analysis-synthesis desirable for the rendering engine of a musical instrument synthesizer.

Normally, analysis-synthesis operates on sounds without reference to how they were originally created; see Figure 1a. To integrate it into a musical instrument synthesizer, the nonlinear mapping function provides a link between the musician's playing controls and the sound description. Also, analysis-synthesis normally requires a source sound in order to generate a synthesized sound. Again using the mapping function, the analysis and synthesis sections are separated: the analysis section is used in the creation of the mapping function (Figure 1b); at synthesis, the mapping function replaces the analysis section, generating the synthesis parameters itself (Figure 1c).

Figure 1. The Mapping Function links Playing Parameters to Synthesis Parameters: (a) standard analysis-synthesis tool; (b) analysis (extraction) and training; (c) mapping and synthesis (rendering)

The authors are affiliated with the Digital Music Research Group, University of Bristol, 5.01 Merchant Venturers Building, Woodland Road, Bristol BS8 1UB, United Kingdom. Tel: +44 (0)117 954-5192; Fax: +44 (0)117 954-5206; email: Paul.Masri@bristol.ac.uk; URL: http://www.fen.bris.ac.uk/elec/dmr


Since analysis-synthesis is well established as a powerful tool for representing and transforming sound [1][2][3], the focus of this paper is the design and implementation of the non-linear mapping function. The following section introduces the concept of the Musical Instrument Character Map (MICMap), which implements the nonlinear mapping function, and describes how it is interfaced to the analysis-synthesis engine. Section three details the implementation of a model for instruments that take an initial excitation only, with the example of a plucked string. This is expanded upon in section four as the case of continuous excitation is investigated.

2 The Musical Instrument Character Map

The nonlinear mapping function associates the Playing Parameters (PP), which the musician affects, and the Synthesis Parameters (SP), from which the sound is generated. It therefore implicitly contains all the sounds that the target instrument can create (within the domain of PP) and the conditions under which a particular sound is created. Hence, the PP-to-SP association has been termed the Musical Instrument Character Map (MICMap).

Unlike physical models, which capture the character of an instrument through the physics of the resonating structure, the MICMap captures the character through the sound of the instrument directly. The MICMap is created using learning algorithms, which are applied to sound examples that have been analysed (using the first half of an analysis-synthesis tool). These sound examples are actual recordings of the target instrument; therefore the sonic realism of the synthesizer is already assured. The challenge in designing a framework for the MICMap is to incorporate responsiveness.

2.1 Possibilities for the User Interface

The musician interfaces with the instrument by controlling the PP data-set. The set could comprise physical controls, such as the bowing force and speed, string tension and fingering on a stringed instrument; in this case the MICMap synthesizer would emulate a physical modelling synthesizer (without needing to know the physics). Alternatively, the data-set could comprise signal processing controls, such as oscillator frequencies, modulation indices and so forth, so that the MICMap synthesizer emulates an analogue or FM synthesizer. Totally new interfaces are also possible, including for example a set of perceptual controls comprising brightness, richness, stability, etc.

In all these cases, the role of the MICMap is to provide an association, based on examples for which the individual PP and SP are available. Through the choice of PP and SP, the MICMap becomes a virtual synthesis architecture, producing outputs in response to inputs that indirectly implement the chosen architecture. In this sense, the MICMap has the potential for unifying current synthesis methods within a single framework, or for extending the range with new types of virtual synthesizer.

2.2 Importance of the Sound Description

The principle of learning an association by example is that the 'general rule' can be deduced by the algorithm from the specific examples. If each PP and SP data pair is considered as a pair of multidimensional vectors in their respective spaces (where the dimension of each is the number of parameters for each), then the complete description of the target instrument is the association from a surface in the PP space to a surface in the SP space; see Figure 2. Each training example represents a point on each surface. (Note: Figure 2 shows three-dimensional surfaces for easy visualisation. In practice both would have higher dimensionality, with SP usually much higher than PP.)

Figure 2. A mapping function from a surface in one space to a surface in another space

The greatest challenge in defining the MICMap has been to find a sound description format that is meaningful, flexible and compact. The format is ‘meaningful’ if the mapping function is smooth. This enables a low order solution (similar to the case of finding a polynomial that approximates a curve to example points). To the end-user, a smooth mapping function will make control easy, since small control changes will result in small timbre changes whilst large control changes will result in large timbre changes. Furthermore, it will not require many examples.
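The polynomial analogy can be made concrete with a minimal numerical sketch (Python/NumPy; the data and names below are purely illustrative, not from this work): when the underlying mapping is smooth, a low-order fit through a few example points interpolates well between them.

    import numpy as np

    # Hypothetical one-dimensional illustration of the smoothness argument:
    # a smooth control-to-timbre mapping sampled at a few example points
    # is well approximated by a low-order polynomial between those points.
    playing_param = np.linspace(0.0, 1.0, 8)                          # example playing-parameter values
    timbre_value = np.sin(2.0 * playing_param) + 0.3 * playing_param  # stand-in for a smooth mapping

    coeffs = np.polyfit(playing_param, timbre_value, deg=3)           # the 'low order solution'
    print(np.polyval(coeffs, 0.45))                                   # interpolate between the examples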

A flexible sound description would not be instrument-specific and would also be able to contain the evolution of sound from an instrument that does not have a deterministic duration. For example, a bowed or blown instrument responds constantly to the excitation generated by the musician, so the duration of a note cannot be predetermined, as it often can for plucked or struck instruments.

A compact format requires few parameters and therefore SP is of low dimension. This makes training easier and faster and synthesis more efficient.

The most direct way to specify the Synthesis Parameters would be to store the entire sound of a note directly, as the output from the analysis tool. This would comprise, at a minimum, the frequency and amplitude of each sinusoidal partial for each time frame. Taking the example of a plucked string and using a conservative description for only ten harmonics, a three second sound at 86 frames per second (= 44.1 kHz ÷ 512-sample frame-hop) would contain more than 5000 data values. Although simple to generate, this sound description is unfortunately not meaningful, flexible or data-compact, using the definitions above.
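A quick check of the figures quoted above (an illustrative calculation, not part of the original):

    sample_rate = 44100                            # Hz
    frame_hop = 512                                # samples between analysis frames
    frames_per_second = sample_rate / frame_hop    # ~86.1 frames per second

    duration = 3.0                                 # seconds
    harmonics = 10
    values_per_partial = 2                         # frequency and amplitude per frame

    total = duration * frames_per_second * harmonics * values_per_partial
    print(total)                                   # ~5168, i.e. 'more than 5000 data' values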

Investigations revealed that the problems of flexibility and data-size were both derived from the challenge of describing time. In the plucked string example, there are only ten harmonics but the data size of the whole description is huge because it is necessary to describe the sound over a long period of time. Similarly, time is the key to the challenge of making the description flexible, so that a sustained oboe note, for example, would stay ‘alive’ (not static) when the Playing Parameters were constant and would respond naturally to changing Playing Parameters.

A solution was found by separating the sound description into two parts: the Timbre Map and the Evolution Map. The Timbre Map contains the instantaneous sound description; through training, it comprises all the possible instantaneous sound ‘states’ of the target instrument. The Evolution Map describes the navigation around the Timbre Map. Using a state-space approach, this can retain movement when the Playing Parameters are static and it can respond when they change.
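The split can be pictured as two cooperating functions (a purely conceptual sketch; the class names, signatures and toy transition below are illustrative assumptions): the Timbre Map holds the instantaneous sound states and the Evolution Map steps between them under the current Playing Parameters, so the sound keeps moving even when the controls are held still.

    import numpy as np

    class TimbreMap:
        """Holds the instantaneous sound 'states' of the instrument (illustrative)."""
        def __init__(self, states):
            self.states = np.asarray(states)        # each row: one instantaneous spectrum

        def spectrum(self, state_index):
            return self.states[state_index]

    class EvolutionMap:
        """Navigates the Timbre Map: the next state follows from the current state and PP."""
        def __init__(self, transition_fn):
            self.transition_fn = transition_fn

        def next_state(self, state_index, playing_params):
            return self.transition_fn(state_index, playing_params)

    # Toy example: four timbre states and a transition that keeps moving even
    # when the playing parameters are held constant (a real map would depend on them).
    timbre_map = TimbreMap(np.random.rand(4, 10))            # 4 states x 10 harmonic magnitudes
    evolution_map = EvolutionMap(lambda s, pp: (s + 1) % 4)

    state, pp = 0, [0.5]
    for frame in range(5):
        spectrum = timbre_map.spectrum(state)                # instantaneous sound description
        state = evolution_map.next_state(state, pp)          # sound stays 'alive' under static PP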

3 Implementation of a Plucked String

As a first step towards the goal, a simpler Evolution Map was implemented that accepts only an initial set of Playing Parameters. This is sufficient for implementing plucked and struck instruments (assuming that each note is allowed to fully decay before the next note is initiated).

The plucked string was chosen as the first instrument to model because a) a physical model implementation had already been created within the Digital Music Research Group - this made it easy to generate an arbitrary number of sound examples and also made extraction of the PP data-set straightforward, and b) once plucked, the decay of the harmonics is deterministic and constant. Of itself, the emulation of an already efficient physical model serves little purpose; within the context of this paper, it serves to demonstrate the efficacy of the MICMap approach to instrument synthesis.

The PP data set included four parameters from the physically modelled string:

F - Target frequency (a combination of string length and tensioning),

L - Loss coefficient (a filter coefficient for modelling the internal viscous friction of the string and air resistance),

D - Dispersion coefficient (a filter coefficient for modelling the string's stiffness),

P - Pluck coefficient (a filter coefficient for modelling the stiffness of the plectrum).

The Timbre Map implemented the PP-to-SP_INITIAL association, where SP_INITIAL was the instantaneous sound description during the first period, including:

T_0 - Pitch period,

A_r - Initial magnitude of the rth harmonic; r = 0, 1, 2, ..., 9.

The Evolution Map implemented the PP-to-SP_DECAY association, where SP_DECAY comprised:

dA_r - Decay per period of the rth harmonic.

See Figure 3. Each of the mapping functions was implemented using a feedforward Neural Network (NN) with a single hidden layer. Since only ten harmonics were modelled, the synthesized sounds were effectively low-pass filtered versions of their physically modelled counterparts. For aural comparison, the MICMap synthesized sounds were compared with analysis-synthesis versions of the sound where only ten harmonics were synthesized. Both training and test examples were compared. Even though phase had not been preserved during analysis, the subjective comparison was good.
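The rendering stage of this static model can be sketched as follows (an illustrative Python/NumPy sketch; the function names are hypothetical and the multiplicative interpretation of the per-period decay dA_r is an assumption, not a detail taken from this work):

    import numpy as np

    def render_plucked_note(playing_params, timbre_map_nn, evolution_map_nn,
                            sample_rate=44100, duration=3.0):
        """Additive-synthesis rendering from the two trained mappings (sketch).

        timbre_map_nn(pp)    -> [T_0, A_0 .. A_9]  pitch period plus initial magnitudes
        evolution_map_nn(pp) -> [dA_0 .. dA_9]     per-period decays (assumed multiplicative)
        """
        sp_initial = np.asarray(timbre_map_nn(playing_params), dtype=float)
        sp_decay = np.asarray(evolution_map_nn(playing_params), dtype=float)

        period = sp_initial[0]                      # T_0 in seconds
        amps = sp_initial[1:11]                     # A_r for r = 0..9
        decays = sp_decay[:10]                      # dA_r for r = 0..9
        f0 = 1.0 / period

        t = np.arange(int(sample_rate * duration)) / sample_rate
        periods_elapsed = t / period                # pitch periods since the pluck
        out = np.zeros_like(t)
        for r in range(10):
            env = amps[r] * decays[r] ** periods_elapsed          # exponential per-period decay
            out += env * np.sin(2 * np.pi * (r + 1) * f0 * t)     # (r+1)th harmonic of f0
        return out

Stub mappings returning fixed vectors (taken, say, from one analysed recording) are enough to exercise this rendering path before any network is trained.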


Figure 3. Plucked string model; (a) physical model implementation (nut, bridge, junction and sound output); (b) static MICMap implementation

4 Implementation of Continuously Responsive Instruments

The plucked string implementation was intentionally simple, in order to demonstrate the concept of capturing and synthesizing a complete instrument using an analysis-synthesis based MICMap. Having completed this successfully, the aim of current work has been extended to investigating a dynamic framework that will allow for less predictable decay profiles and dynamic control by the musician. Once again, because the aim is to demonstrate the concept, the timbre is constrained to the first ten (harmonic) partials.

In general, it is not possible to derive all the sounds an instrument can make from the initial sound after excitation. Therefore the Timbre Map must be more sophisticated. In place of a feedforward NN, a Self-Organising Map (SOM) [4] has been created. The neurons, or cells, of the SOM are notionally organised in a lattice of a predetermined dimension, often 2-D. Instead of associating one data set to another, the SOM associates a single data set - in this case the instantaneous timbre - with a grid location on the lattice. Hence, each cell effectively stores a timbre definition.

As training proceeds, data vectors that are similar to one another become associated with cells that are close to one another on the SOM lattice; see Figure 4. Since sounds with similar spectra are likely to be produced by similar Playing Parameters, this localisation is meaningful. (For the present implementation, the timbre description is simply the instantaneous spectrum - the magnitudes of the first ten partials.)
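A Self-Organising Map over such ten-partial spectrum vectors can be sketched in a few lines (plain NumPy; the lattice size, learning rate and neighbourhood schedule below are arbitrary illustrative choices, not values from this work):

    import numpy as np

    def train_som(spectra, grid_h=8, grid_w=8, epochs=50, lr0=0.5, sigma0=3.0, seed=0):
        """Train a 2-D Self-Organising Map on instantaneous spectrum vectors (sketch)."""
        spectra = np.asarray(spectra, dtype=float)
        rng = np.random.default_rng(seed)
        dim = spectra.shape[1]                          # e.g. 10 partial magnitudes
        weights = rng.random((grid_h, grid_w, dim))     # one stored 'spectrum vector' per cell
        gy, gx = np.mgrid[0:grid_h, 0:grid_w]           # lattice coordinates of every cell

        n_steps = epochs * len(spectra)
        step = 0
        for _ in range(epochs):
            for x in spectra[rng.permutation(len(spectra))]:
                frac = 1.0 - step / n_steps
                lr = lr0 * frac                         # decaying learning rate
                sigma = 0.5 + sigma0 * frac             # shrinking neighbourhood radius
                # Best-matching unit: the cell whose stored spectrum is closest to x.
                dists = np.linalg.norm(weights - x, axis=2)
                by, bx = np.unravel_index(np.argmin(dists), dists.shape)
                # Pull the winner and its lattice neighbours towards x, so that
                # similar spectra end up associated with nearby cells.
                h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2.0 * sigma ** 2))
                weights += lr * h[..., None] * (x - weights)
                step += 1
        return weights

Here 'spectra' would be the analysed ten-partial magnitude frames of the recorded notes, stacked into an (N, 10) array.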

Figure 4. A small 2-D SOM (hexagonal mesh). Each cell contains a ’spectrum vector’ from the sound examples of the plucked string instrument

The evolution of a particular sound produced by an instrument can be traced as a path through the SOM lattice; see Figure 5. Therefore the Evolution Map must associate the Playing Parameters to a trajectory in SOM space. For a fully responsive instrument, it is anticipated that the trajectory should be dependent on both the Playing Parameters and the current state (current trajectory) of the virtual instrument. As a step towards this goal, the current implementation uses a feedforward NN to associate the Playing Parameters (only) with a trajectory.
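Given a trained lattice, the trajectory of a note is simply the sequence of best-matching cells for its analysed frames; a feedforward regressor can then be fitted from the Playing Parameters to that coordinate sequence (again an illustrative sketch; the fixed-length, flattened trajectory target is an assumption):

    import numpy as np

    def note_trajectory(weights, note_spectra):
        """Trace one note as a path of best-matching SOM cells, frame by frame (sketch)."""
        path = []
        for x in note_spectra:
            dists = np.linalg.norm(weights - x, axis=2)
            path.append(np.unravel_index(np.argmin(dists), dists.shape))
        return np.array(path)                 # sequence of (row, col) lattice coordinates

    # Evolution Map training target (current implementation, PP only): resample each
    # note's path to a fixed number of points, flatten it into one output vector, and
    # fit a feedforward regressor pp -> flattened_trajectory over the example set.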

Figure 5. The evolution of a note is mapped by the trajectory through the SOM lattice

The progression from the static model of the previous section involves radical changes to both the Timbre Map and the Evolution Map. This transition will be made in several steps: the first implements the Timbre Map based on the SP_INITIAL data used before, and the Evolution Map is a feedforward NN that associates PP with a SOM grid location in addition to SP_DECAY.


5 Applications

5.1 Intuitive Control and a Broad Sound Palette

Analysis-synthesis is traditionally not efficient enough to be implemented in real-time (or close to real-time) and its control parameters have to date necessitated significant expertise from the user in sound composition and signal processing. It is not surprising, therefore, that commercial products have not appeared (outside the research community) using this technology. However, by decoupling the computationally intense analysis from the computationally light synthesis, and by providing a custom-designed user interface of Playing Parameters, the MICMap overcomes both of these obstacles. The computationally heavy analysis and training calculations are done off-line during the creation of the virtual instrument; using a DSP device to implement the mapping and synthesis, it becomes feasible to create a responsive real-time instrument.

The instrument examples used in this paper have centred on emulation of physical models. This was because the plucked string model readily provided examples on demand for training, validation and testing, and the Playing Parameters were predetermined. Therefore, the investigation could be solely focused upon designing, implementing and evaluating the MICMap. Although MICMap synthesizers could be created as rivals to physical models, they were conceived of as a complementary technology. The physics-based and the sounds-based instrument modelling techniques each have their own strengths in terms of the processes of instrument construction and the quality of the final synthesizer.

Perhaps the most exciting applications are:

synthesizers that can be programmed (by the musician) and played using perceptual indices - this would overcome the traditional obstacle to synthesizer programming that the controls are not intuitive;

instruments that 'morph' between timbres because they have been trained using sounds from more than one real instrument.

5.2 Beyond Synthesis...

So far, the focus of the Musical Instrument Character Map has been the PP-to-SP association, which, in concert with a sound rendering engine, emulates synthesis. Work is also in progress 'wiring the circuit the wrong way round', so that the MICMap implements the SP-to-PP association. Connected after an analysis tool, it promises the capability to recognise not just a sound from an instrument, but the way it was played. Depending on the choice of PP and SP, this could, in time, help with traditional instrument tuition and speech therapy.
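Running the map 'the wrong way round' amounts to swapping inputs and targets when training: the same example pairs can train an SP-to-PP regressor that estimates how a recorded note was played. A minimal sketch (illustrative only; the scikit-learn dependency and the placeholder data shapes are assumptions, not details from this work):

    import numpy as np
    from sklearn.neural_network import MLPRegressor   # assumed dependency, illustration only

    # Placeholder shapes only: rows of analysed sound descriptions and the playing
    # parameters that produced them (e.g. [T_0, A_0..A_9, dA_0..dA_9] and [F, L, D, P]).
    sp_examples = np.random.rand(200, 21)
    pp_examples = np.random.rand(200, 4)

    # Same example pairs as the synthesis MICMap, with inputs and targets swapped.
    recogniser = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    recogniser.fit(sp_examples, pp_examples)

    analysed_note = sp_examples[:1]                          # an analysed recording
    estimated_playing = recogniser.predict(analysed_note)    # estimate of how it was played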

6 Conclusions

The authors have set out to investigate a musical instrument synthesizer that can model real instruments and other synthesis architectures purely on the basis of the sounds that they create. The combination of a sound tool (using analysis-synthesis methodology) and a Musical Instrument Character Map that associates the musician's playing controls to the sound was proposed as the solution.

For practical implementation, it was found that the sound representation needs to be meaningful, flexible and compact. This has been achieved by splitting the MICMap into two parts based on time dependency: the Timbre Map and the Evolution Map. A simple implementation has been presented, demonstrating that this synthesis approach is viable and sonically accurate. Further work is currently ongoing (detailed in section 4) with the aim of extending the capabilities for dynamic control, so that the synthesizer is responsive.

7 Acknowledgements

The authors wish to acknowledge the support of EPSRC, through whose funding this research was made possible (project code ). The support from Texas Instruments, in providing the hardware and software resources for this project, is also gratefully welcomed. Personal thanks are also due to Joel Laird for his physical model of the plucked string.

References

[1] P. Masri. 1996. Computer Modelling of Sound for Transformation and Synthesis of Musical Signals. Ph.D. thesis, University of Bristol.

[2] X. Rodet. 1997. "Musical Sound Signals Analysis/Synthesis: Sinusoidal + Residual and Elementary Waveform Models" in Proc. IEEE Time-Frequency and Time-Scale Workshop (TFTS'97).

[3] X. Serra, J. O. Smith. 1989. "Spectral Modeling Synthesis" in Proc. International Computer Music Conference (ICMC), pp. 281-284.

[4] T. Kohonen. 1997. Self-Organizing Maps (2nd Ed.), #30 in series: Information Sciences. Springer, Berlin.
