


Proceedings of the 2nd AES Workshop on Intelligent Music Production, London, UK, 13 September 2016

ADEPT: A FRAMEWORK FOR ADAPTIVE DIGITAL AUDIO EFFECTS

Owen Campbell1, Curtis Roads1, Andrés Cabrera1, Matthew Wright3, and Yon Visell2

1 Media Arts and Technology, University of California, Santa Barbara

2 Media Arts and Technology and Electrical and Computer Engineering, University of California, Santa Barbara

3 Center for Computer Research in Music and Acoustics, Stanford University

[email protected]

ABSTRACT

The field of music information retrieval (MIR) has enabled research in 'intelligent' audio processing. Emerging applications of MIR techniques in music production might provide mixing engineers, musicians, and composers control over extant effects processing plugins based on the audio content of the recorded music. In adaptive digital effects (ADAFx) processing, mapping functions are used to modulate algorithm parameters via features that are extracted from the input audio signal or signals. We present an audio software plugin that is designed to facilitate feature-parameter mappings within digital audio workstations. We refer to it as the Adaptive Digital Effects Processing Tool (ADEPT).

1. BACKGROUND

Adaptive effects, such as dynamic range processors, gates, and pitch correctors, have long been mainstays of music production. Previous research has investigated organizational methods for adaptive digital audio effects [1, 2]. [1] proposed a general taxonomy of adaptive effects, describing auto-adaptive, external-adaptive, feedback-adaptive, and cross-adaptive techniques. [2] went beyond purely signal-level models of content with a comprehensive treatment of 'content-based transformations'. Related research has investigated adaptive processing for automated mixing tasks, for example [3, 4]. [5] and [6] focused on implementing flexible frameworks for user-defined adaptive effects routing using audio plugins as target effects.

2. SYSTEM DESIGN AND IMPLEMENTATION

The ADEPT adaptive audio processing software utilizes existing audio plugins and implements feature extraction and a mapping algorithm in order to facilitate ADAFx processing. As a result, it can control any processing parameters that are made available as automatable controls within the digital audio workstation (DAW). Developed using JUCE, a C++ framework which enables cross-platform deployment of audio plugins, it communicates with Ableton Live using Open Sound Control [7] and the LiveOSC MIDI Remote Script. ADEPT works with the DAW and existing plugins, providing immediate benefits due to the large variety and quality of available effects processing and synthesis algorithms, and integration with production and performance practices.

Each instance of the ADEPT plugin computes a stream of audio features using Essentia [8], a C++ audio feature extraction library. Audio features from a given source may be mapped to any plugin parameter in the DAW session, including those belonging to plugins residing on other tracks. At present, the feature-parameter mappings are one-to-many; a single feature stream may feed multiple mappings controlling different parameters. The software makes it easy to realize evocative real-time effects (Fig. 1) via feature-driven effects automation (Fig. 2) controlled through the graphical user interface of a software plugin (Fig. 3).
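The paper does not specify the exact form of the mapping functions. The following C++ sketch shows one plausible scheme for a feature-parameter mapping stage, assuming a per-frame scalar feature: clamp the feature into an expected range, normalize to [0, 1], scale into the target parameter's range, and smooth with a one-pole filter so parameter updates do not produce zipper noise. The struct and field names are illustrative, not ADEPT's actual API.

```cpp
#include <algorithm>

// Hypothetical feature-to-parameter mapping stage (illustrative only).
struct FeatureMapping {
    float featMin, featMax;   // expected range of the extracted feature
    float paramMin, paramMax; // range of the automatable target parameter
    float smoothing;          // one-pole coefficient in [0, 1); 0 = no smoothing
    float state;              // previous smoothed output

    // Map one per-frame feature value to a parameter value.
    float process(float feature) {
        // Clamp and normalize the feature to [0, 1].
        float x = std::clamp(feature, featMin, featMax);
        float norm = (x - featMin) / (featMax - featMin);
        // Scale into the parameter's range.
        float target = paramMin + norm * (paramMax - paramMin);
        // One-pole lowpass so per-frame updates change the parameter gradually.
        state = smoothing * state + (1.0f - smoothing) * target;
        return state;
    }
};
```

For the pitch-to-reverb mapping of Fig. 1, for example, one might map extracted pitch in an assumed 80–1000 Hz range onto a decay time of 0.5–5 seconds.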

[Figure 1 image: four stacked plots — input audio (samples), extracted pitch values (Hz, per analysis frame), decay time parameter values (seconds, per analysis frame), and output audio (samples)]

Figure 1: Example time series of audio feature and effect parameter data with mapping from pitch to reverb decay time

3. EVALUATION

Two pilot studies were conducted to explore potential evaluation methods and to gain insight into the strengths and weaknesses of the software prototype. Qualitative data was collected regarding the utility of the proposed ADAFx routing framework, the user experience, and the aesthetic quality of the audio effects produced by participants in the study. Part I was based on music production tasks to be performed



[Figure 2 image: block diagram — input audio passes through ADEPT unaltered into a reverb in the DAW; ADEPT sends OSC messages carrying decay time updates to the reverb, which produces the output audio]

Figure 2: Block diagram of pitch to reverb decay time mapping
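The decay time updates in Fig. 2 travel as Open Sound Control messages [7]. As a rough illustration of what such a packet looks like on the wire, here is a minimal OSC 1.0 encoder for a message with a single float argument: the address and type tag string are NUL-padded to 4-byte boundaries and the float is written big-endian. The address `/param` used in the usage note is a placeholder, not LiveOSC's actual address space.

```cpp
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Append a string padded with NULs to a 4-byte boundary, per the OSC 1.0 spec
// (strings are NUL-terminated, so at least one NUL is always appended).
static void appendPadded(std::vector<uint8_t>& buf, const std::string& s) {
    buf.insert(buf.end(), s.begin(), s.end());
    size_t pad = 4 - (s.size() % 4);
    buf.insert(buf.end(), pad, 0);
}

// Encode an OSC message carrying one float: <address> <",f"> <big-endian float>.
std::vector<uint8_t> encodeOscFloat(const std::string& address, float value) {
    std::vector<uint8_t> buf;
    appendPadded(buf, address);
    appendPadded(buf, ",f"); // type tag string: one float argument
    uint32_t bits;
    std::memcpy(&bits, &value, sizeof bits); // reinterpret float as raw bits
    for (int shift = 24; shift >= 0; shift -= 8)
        buf.push_back(static_cast<uint8_t>((bits >> shift) & 0xFF));
    return buf;
}
```

A plugin would send such a buffer over UDP each time the mapped feature produces a new parameter value, e.g. `encodeOscFloat("/param", 2.5f)` for a 2.5 s decay time (hypothetical address).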

Figure 3: View of the ADEPT plugin GUI

with the ADEPT plugin or with conventional methods. In Part II, expert listeners were asked to evaluate the audio output from Part I. Due to the small number of participants we cannot draw strong conclusions, but the results suggest that such a system may be a valuable addition to conventional mixing and performance workflows for effects processing and synthesis.

4. CONCLUSION

We developed a conceptual framework for flexible adaptive effects routing in the DAW environment and investigated potential use cases for such a system. A software implementation of this framework was created in the form of an audio plugin, where we explored the utility of a number of audio features and an interface for defining control mappings. Finally, we conducted a small two-part pilot study to evaluate the software's utility and integration with existing workflows.

5. REFERENCES

[1] V. Verfaille, U. Zölzer, and D. Arfib, "Adaptive digital audio effects (A-DAFx): A new class of sound transformations," IEEE Transactions on Audio, Speech, and Language Processing, vol. 14, no. 5, pp. 1817–1831, 2006.

[2] X. Amatriain, J. Bonada, A. Loscos, J. L. Arcos, and V. Verfaille, "Content-based transformations," Journal of New Music Research, vol. 32, no. 1, pp. 95–114, 2003.

[3] J. D. Reiss, "Intelligent systems for mixing multichannel audio," in 2011 17th International Conference on Digital Signal Processing (DSP), pp. 1–6, IEEE, 2011.

[4] E. Perez-Gonzalez and J. Reiss, "Automatic equalization of multichannel audio using cross-adaptive methods," in Audio Engineering Society Convention 127, Audio Engineering Society, 2009.

[5] M. E. Stabile, C. Roads, S. T. Pope, M. Turk, and J. Kuchera-Morin, "ADAPT: A networkable plug-in host for dynamic creation of real-time adaptive digital audio effects," 2010.

[6] Ø. Brandtsegg, "A toolkit for experimentation with signal interaction," in Proceedings of the 18th International Conference on Digital Audio Effects (DAFx-15), pp. 42–48, 2015.

[7] M. Wright, A. Freed, and A. Momeni, "Open sound control: State of the art 2003," in Proceedings of the 2003 Conference on New Interfaces for Musical Expression, pp. 153–160, National University of Singapore, 2003.

[8] D. Bogdanov, N. Wack, E. Gómez, S. Gulati, P. Herrera, O. Mayor, G. Roma, J. Salamon, J. R. Zapata, and X. Serra, "Essentia: An audio analysis library for music information retrieval," in ISMIR, pp. 493–498, Citeseer, 2013.

Licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).