
Technical Documentation of some Real-Time Embedded Acoustic DSP Projects

April 2013


TABLE OF CONTENTS

ACRONYMS AND OTHER TERMS

1.0 OBJECTIVE AND TECHNOLOGIES APPLIED
    1.1 OBJECTIVE
    1.2 TECHNOLOGIES APPLIED

2.0 THEORY AND SYSTEM FUNCTIONS
    2.1 BUILDING BLOCKS AND FILTER DESIGN
        2.1.1 Single Echo Effect (FIR Comb Filter)
        2.1.2 Multiple Echo Effect (IIR Comb Filter)
        2.1.3 All-Pass Filter
        2.1.4 Notch IIR Filter
        2.1.5 Flanger Effect
        2.1.6 Chorus Effect
        2.1.7 Phasing Effect
        2.1.8 Reverb Effect
        2.1.9 Tremolo Effect
        2.1.10 Ring Modulation Sound Effect
        2.1.11 Fuzz Effect

3.0 REAL-TIME EMBEDDED DSP APPLICATION
    3.1 REAL-TIME DSP
    3.2 TMS320C6713 DSP ARCHITECTURE
    3.3 EMBEDDED SOFTWARE DEVELOPMENT CONSIDERATIONS
    3.4 SAMPLE CODE SNAPSHOT
    3.5 REFERENCES


ACRONYMS AND OTHER TERMS

1 ADC Analog-to-Digital Converter

2 CPLD Complex Programmable Logic Device

3 CPU Central Processing Unit

4 DAC Digital-to-Analog Converter

5 DIP Dual In-line Package

6 DMA Direct Memory Access

7 DSK DSP Starter Kit

8 DSP Digital Signal Processor/Processing

9 EDMA Enhanced Direct Memory Access

10 FFT Fast Fourier Transform

11 FIR Finite-length Impulse Response

12 IDE Integrated Development Environment

13 IIR Infinite-length Impulse Response

14 ISR Interrupt Service Routine

15 JTAG Joint Test Action Group

16 MIC Microphone

17 PC Personal Computer

18 RISC Reduced Instruction Set Computer

19 SDRAM Synchronous Dynamic Random Access Memory

20 USB Universal Serial Bus

21 VLIW Very Long Instruction Word


Section 1.0

OBJECTIVE AND TECHNOLOGIES APPLIED


1.1 OBJECTIVE

Audio special effects are pleasant sound variations that are created with fairly simple DSP algorithms. Some of these effects are echo, chorus, flanging, phasing, reverb, tremolo, frequency translation, ring modulation, and fuzz.

When DSP was first being taught in engineering schools, some of these audio special-effects algorithms were quickly developed for musicians. However, the cost of acquiring the hardware remained high for many years. In the 1990s, digital and embedded electronic implementations of the special effects quickly replaced analog implementations. Today, a single DSP-based effects "box" can produce many variations of audio special effects with improved signal-to-noise ratio.

While demonstrations using MATLAB® are extremely valuable, they typically use previously stored data/signal files and therefore cannot be considered "real-time" applications. Some MATLAB® programs, using a PC sound card or data-acquisition card, have a fairly limited ability to do real-time processing on the general-purpose CPU of the PC.

The limitations of general-purpose microprocessors stem from the fact that the performance demands and power constraints of real-time systems often mandate specialized hardware. This may include microprocessors optimized for signal processing (digital signal processors, or DSPs), programmable logic devices, application-specific integrated circuits (ASICs), or any combination of these as required to meet system constraints. This project aims to reinforce MATLAB® simulations by emphasizing implementations on a real-time, embedded DSP platform.

Lastly, it must be emphasized that, due to the scarcity of resources (particularly memory) on embedded DSP platforms and the high volume of "number crunching" they perform, the code must be optimized until it meets real-time constraints. Optimization is desirable because it makes the code run faster; however, it tends to turn a simple, straightforward DSP algorithm into a fairly complex one. For instance, consider the Haar Wavelet Transform algorithm (available on demand) I developed during the course of implementing the embedded DSP projects. Ordinarily, this implementation would require the allocation of a scratch array of the same size as the frame that stores the audio samples. However, the transform in that algorithm was implemented in place, without the use of any scratch array.
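For illustration only (this is not the author's algorithm, which is available on demand), one well-known way to compute one level of the Haar transform in place is the lifting formulation: each sample pair is overwritten with its detail and average, so no scratch array is needed.

/* One level of the Haar wavelet transform computed in place via lifting.
 * After the call, detail coefficients sit at odd indices and averages at
 * even indices (interleaved layout), so no scratch array of the frame
 * size is required.  Generic illustration only.                          */
void haar_inplace(float *x, int n)      /* n must be even */
{
    int i;
    for (i = 0; i < n; i += 2) {
        x[i + 1] -= x[i];               /* predict: detail  d = b - a      */
        x[i]     += 0.5f * x[i + 1];    /* update:  average s = a + d/2    */
    }
}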

1.2 TECHNOLOGIES APPLIED

1.2.1 Hardware

1) MP3 player (a cellular phone was used)

2) A pair of 3 W speakers.

3) TMS320C6713 DSK featuring:

a) A TMS320C6713 DSP operating at 225 MHz.

b) An AIC23 stereo codec with Line In, Line Out, MIC, and headphone stereo jacks.


c) 16 Mbytes of synchronous DRAM.

d) 512 Kbytes of non-volatile Flash memory (256 Kbytes usable in default

configuration).

e) 4 user accessible LEDs and DIP switches.

f) Software board configuration through registers implemented in CPLD.

g) Configurable boot options.

h) Expansion connectors for daughter cards.

i) JTAG emulation through on-board JTAG emulator with USB host interface or

external emulator.

1.2.2 Software

Code Composer Studio version 5.2.1.00018 and MATLAB® version 7.7.0.471 IDEs.

Figure 1.1: DSK connected to audio source and speakers


Figure 1.2: TMS320C6713 DSK Block Diagram

Figure 1.3: Workbench


Section 2.0

THEORY AND SYSTEM FUNCTIONS


2.1 BUILDING BLOCKS AND FILTER DESIGN

The building blocks for the audio special effects are the comb, all-pass and notch filters.

Other special audio effects can be constructed by appropriately cascading any of these in

series and/or parallel.

1) Single Echo Effect (FIR Comb Filter)

Y(z) = (1 + α·z^(-R))·X(z), or H(z) = 1 + α·z^(-R)

Therefore, y(n) = x(n) + α·x(n-R).

In the block diagram there are only feed-forward paths, so only a single echo results. The sampling frequency and the value of R determine the length of the delay, and increasing α increases the volume of the echo. The delay time is constant.
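As a concrete illustration, a minimal C sketch of this difference equation using a circular delay buffer is shown below; the delay length, gain, and use of floating-point samples are illustrative assumptions rather than values taken from the project code.

#define ECHO_R     8000      /* delay in samples (about 167 ms at 48 kHz), assumed */
#define ECHO_ALPHA 0.6f      /* echo gain, assumed */

static float delay_line[ECHO_R];   /* holds the last R input samples */
static int   delay_idx = 0;

/* Single echo (FIR comb): y(n) = x(n) + alpha * x(n - R) */
float fir_comb_echo(float x)
{
    float delayed = delay_line[delay_idx];        /* x(n - R) */
    float y = x + ECHO_ALPHA * delayed;

    delay_line[delay_idx] = x;                    /* keep x(n) for R samples */
    delay_idx = (delay_idx + 1) % ECHO_R;
    return y;
}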

2) Multiple Echo Effect (IIR Comb Filter)

Y(z)·(1 - α·z^(-R)) = X(z), or H(z) = 1/(1 - α·z^(-R)), |α| < 1

Therefore, y(n) = x(n) + α·y(n-R).

Using a feedback path to achieve multiple echoes implies an IIR filter. The echoes in principle repeat forever, but the volume of each delayed repetition decreases because of the stability condition |α| < 1. The delay time is constant.
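A matching sketch of the multiple-echo recursion follows; the only structural change from the single echo is that the delay buffer now stores past outputs rather than past inputs. The constants are again placeholders.

#define MCOMB_R     8000     /* feedback delay in samples, assumed */
#define MCOMB_ALPHA 0.5f     /* feedback gain, |alpha| < 1 for stability */

static float fb_line[MCOMB_R];     /* holds the last R output samples */
static int   fb_idx = 0;

/* Multiple echo (IIR comb): y(n) = x(n) + alpha * y(n - R) */
float iir_comb_echo(float x)
{
    float y = x + MCOMB_ALPHA * fb_line[fb_idx];  /* feed back y(n - R) */

    fb_line[fb_idx] = y;                          /* store y(n) */
    fb_idx = (fb_idx + 1) % MCOMB_R;
    return y;
}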

3) All-Pass Filter

Y(z)·(1 + α·z^(-R)) = X(z)·(α + z^(-R)), or H(z) = (α + z^(-R))/(1 + α·z^(-R)), where |α| < 1 is the condition for stability.

Therefore, y(n) = -α·y(n-R) + α·x(n) + x(n-R).

An all-pass filter has a frequency response with a constant magnitude. It uses a combination of feed-forward and feedback paths with complementary gain values. It is useful for group-delay equalization to compensate for phase nonlinearities. When cascaded in series with other systems, a change in phase can be achieved while the magnitude of the frequency response is maintained.
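A sketch of the all-pass recursion, which needs both an input history and an output history of length R; the delay and gain values are assumed.

#define AP_R     1024     /* delay in samples, assumed */
#define AP_ALPHA 0.7f     /* gain, |alpha| < 1 */

static float ap_x[AP_R];  /* last R inputs,  x(n - R) */
static float ap_y[AP_R];  /* last R outputs, y(n - R) */
static int   ap_idx = 0;

/* All-pass: y(n) = -alpha * y(n - R) + alpha * x(n) + x(n - R) */
float allpass(float x)
{
    float y = -AP_ALPHA * ap_y[ap_idx] + AP_ALPHA * x + ap_x[ap_idx];

    ap_x[ap_idx] = x;
    ap_y[ap_idx] = y;
    ap_idx = (ap_idx + 1) % AP_R;
    return y;
}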

(Block diagrams of the FIR comb, IIR comb, and all-pass filters appear here.)



4) Notch IIR Filter

Y(z)·[1 - β(1 + α)·z^(-1) + α·z^(-2)] = X(z)·[(1 + α)/2]·[1 - 2β·z^(-1) + z^(-2)]

or H(z) = [(1 + α)/2]·[1 - 2β·z^(-1) + z^(-2)] / [1 - β(1 + α)·z^(-1) + α·z^(-2)],

where 0 < α < 1 and -1 < β < 1 are the conditions for stability.

Therefore, y(n) = β(1 + α)·y(n-1) - α·y(n-2) + [(1 + α)/2]·[x(n) - 2β·x(n-1) + x(n-2)].

A notch filter is similar to the comb filter; however, it has a single stop band, unlike the latter, which has multiple, evenly spaced stop bands. The width of the stop band and the location of the stop band (the notch frequency) are adjusted by varying α and β respectively. To place the notch at a frequency fNotch in the range 0 to fs/2 (fs being the sampling frequency), set β = cos(2π·fNotch/fs). The closer α approaches 1.0, the narrower the notch becomes.
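A sketch of the notch recursion with β computed from a target notch frequency as described above; the 1 kHz notch, α = 0.9, and the 48 kHz rate are illustrative choices (β could of course be computed once rather than per sample).

#include <math.h>

#define FS      48000.0f   /* sampling rate used in this project */
#define F_NOTCH  1000.0f   /* notch frequency in Hz, assumed for illustration */
#define NOTCH_A      0.9f  /* alpha: closer to 1.0 gives a narrower notch */

/* y(n) = beta*(1+a)*y(n-1) - a*y(n-2)
 *        + ((1+a)/2) * (x(n) - 2*beta*x(n-1) + x(n-2))                   */
float notch(float x)
{
    static float x1, x2, y1, y2;                      /* previous samples */
    const float beta = cosf(6.2831853f * F_NOTCH / FS);
    const float g    = (1.0f + NOTCH_A) / 2.0f;
    float y;

    y = beta * (1.0f + NOTCH_A) * y1 - NOTCH_A * y2
      + g * (x - 2.0f * beta * x1 + x2);

    x2 = x1;  x1 = x;
    y2 = y1;  y1 = y;
    return y;
}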

The building blocks above were presented with a constant delay R and a constant notch frequency determined by β. In order to derive some of the other special effects, these parameters may be varied over time. Furthermore, while some effects need just one filter stage, others require multiple filter stages.

5) Flanger Effect

The block diagram of the flanging effect is similar to that of the single echo. As in the latter, α is the gain on one of the feed-forward paths. However, instead of a constant delay R, a periodically varying delay β(n) is used to sweep the delay sinusoidally: β(n) = (R/2)·[1 - cos(2π·fo·n/fs)], where fo is a low frequency, usually below 1 Hz, and fs is the sampling frequency. To produce a different sound, β(n) can also be a periodic sawtooth or triangle signal.
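A sketch of a flanger along these lines, using a sinusoidally swept integer delay; the maximum delay, sweep rate, and gain are assumed values, and the fractional part of β(n) is simply truncated to keep the example short.

#include <math.h>

#define FLG_R     480       /* maximum delay in samples (~10 ms at 48 kHz), assumed */
#define FLG_ALPHA 0.7f      /* feed-forward gain, assumed */
#define FLG_FO    0.5f      /* sweep frequency in Hz, assumed */
#define FS        48000.0f

static float flg_line[FLG_R + 1];
static int   flg_write = 0;

/* y(n) = x(n) + alpha * x(n - beta(n)),
 * beta(n) = (R/2) * (1 - cos(2*pi*fo*n/fs))                               */
float flanger(float x, unsigned long n)
{
    int   beta, read;
    float y;

    beta = (int)((FLG_R / 2.0f) *
                 (1.0f - cosf(6.2831853f * FLG_FO * n / FS)));

    flg_line[flg_write] = x;                /* store x(n) first            */
    read = flg_write - beta;                /* index of x(n - beta(n))     */
    if (read < 0) read += FLG_R + 1;

    y = x + FLG_ALPHA * flg_line[read];

    flg_write = (flg_write + 1) % (FLG_R + 1);
    return y;
}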

(Block diagrams of the notch filter and the flanger appear here.)


6) Chorus Effect

The block diagram for the chorus effect is arranged to make one musician sound like four musicians playing the same notes. Here, three separate chorus signals (identical to flanging except having longer delay times) are summed with the original sound.

7) Phasing Effect

The phasing effect can be achieved in various ways. One method adds the output of an all-pass filter back to the original signal, where the "depth" is a slowly changing delay applied with a gain, just as in the flanger. Due to the phase shift, some frequencies cancel out while others reinforce, creating notches and thus the special effect. Another approach, which is easier to fine-tune, adds back the output of a notch filter whose notch frequency (the depth) varies slowly. The two block diagrams are shown.

(Block diagrams of the chorus effect and the two phasing arrangements appear here.)


8) Reverb Effect

The reverb effect is caused by the multitude of reflections coming from the walls of a concert room. In a large room, one can hear these reflections arriving at different times. Therefore, to simulate the effect realistically, multiple delay effects have to be used. One possible block diagram is shown, in which four IIR comb filters (arranged in parallel) are cascaded in series with two all-pass filters (arranged in series).
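A compact sketch of the structure just described: four parallel IIR combs summed and fed through two series all-pass stages. The delay lengths and gains are illustrative assumptions, and each all-pass stage is realized with a single internal state buffer rather than separate input and output histories.

#define NCOMB   4
#define MAXCOMB 2048
#define NAP     2
#define MAXAP   512

static const int   combR[NCOMB] = { 1553, 1613, 1739, 1823 }; /* assumed */
static const float combG[NCOMB] = { 0.81f, 0.79f, 0.77f, 0.75f };
static float combBuf[NCOMB][MAXCOMB];
static int   combIdx[NCOMB];

static const int   apR[NAP] = { 223, 443 };     /* all-pass delays, assumed */
static const float apG[NAP] = { 0.7f, 0.7f };
static float apBuf[NAP][MAXAP];                 /* internal states w(n - R) */
static int   apIdx[NAP];

float reverb(float x)
{
    float s = 0.0f;
    int i;

    /* Four IIR comb filters in parallel: y(n) = x(n) + g * y(n - R) */
    for (i = 0; i < NCOMB; i++) {
        float y = x + combG[i] * combBuf[i][combIdx[i]];
        combBuf[i][combIdx[i]] = y;
        combIdx[i] = (combIdx[i] + 1) % combR[i];
        s += 0.25f * y;                         /* average the comb outputs */
    }

    /* Two all-pass stages in series, H(z) = (g + z^-R)/(1 + g*z^-R),
     * realized as w(n) = x(n) - g*w(n-R), y(n) = g*w(n) + w(n-R)           */
    for (i = 0; i < NAP; i++) {
        float wR = apBuf[i][apIdx[i]];
        float w  = s - apG[i] * wR;
        s = apG[i] * w + wR;
        apBuf[i][apIdx[i]] = w;
        apIdx[i] = (apIdx[i] + 1) % apR[i];
    }
    return s;
}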

Other sound effects such as tremolo, ring modulation, fuzz, and compression/expansion are created by altering the amplitude, rather than the phase, of the signal. In general, an intentional variation in the amplitude of a signal is called Amplitude Modulation (AM). AM is a modulation technique used in radio communications as well as in audio special effects.

9) Tremolo Effect

Tremolo is a repetitive up/down variation in the volume of a signal. In the block diagram, the rate of the variation in volume is determined by β(n), while the depth relative to the original sound is controlled by α (0 < α < 1). Typically, β(n) = (1/2)·[1 - cos(2π·fo·n/fs)], where fo is the frequency of the variation and fs is the sampling frequency. The tremolo effect is actually a form of AM called double-sideband large-carrier (DSB-LC) modulation.
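A sketch of one plausible reading of the tremolo block diagram, y(n) = x(n)·[(1 - α) + α·β(n)]; the 5 Hz rate and α = 0.5 are assumed values.

#include <math.h>

#define TRM_FO    5.0f     /* tremolo rate in Hz, assumed */
#define TRM_DEPTH 0.5f     /* alpha, 0 < alpha < 1, assumed */
#define TRM_FS    48000.0f

/* Tremolo: y(n) = x(n) * ((1 - alpha) + alpha * beta(n)) */
float tremolo(float x, unsigned long n)
{
    float beta = 0.5f * (1.0f - cosf(6.2831853f * TRM_FO * n / TRM_FS));
    return x * ((1.0f - TRM_DEPTH) + TRM_DEPTH * beta);
}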

10) Ring Modulation Sound Effect

Ring modulation is a special effect in which the audio signal is multiplied by another signal, usually an internally generated constant-frequency sinusoid such as β(n) = cos(2π·fo·n/fs). Ring modulation results in frequency translation of the original audio signal. The ring-modulation effect is actually a form of AM called double-sideband suppressed-carrier (DSB-SC) modulation.
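The corresponding ring-modulation sketch; the 440 Hz carrier frequency is an arbitrary illustrative choice.

#include <math.h>

#define RM_FO 440.0f       /* carrier frequency in Hz, assumed */
#define RM_FS 48000.0f     /* sampling rate */

/* Ring modulation: y(n) = x(n) * cos(2*pi*fo*n/fs) */
float ring_mod(float x, unsigned long n)
{
    return x * cosf(6.2831853f * RM_FO * n / RM_FS);
}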

(Block diagrams of the reverb, tremolo, and ring-modulation effects appear here.)


11) Fuzz Effect

Fuzz is an intentionally introduced distortion of the signal, typically produced by "clipping," or limiting, the amplitude variations of the signal. The easiest way to implement clipping is to increase the gain (cascaded in series with the audio signal) to the point where the maximum rated ADC voltage is exceeded. The resulting amplitude clipping produces harmonic distortion, which can be added back to the original signal.
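A software sketch of the same idea: apply a large gain, hard-clip the result, and mix some of the clipped signal back with the original. The gain, clipping threshold, and mix ratio are assumed values.

#define FUZZ_GAIN 10.0f    /* pre-gain, assumed */
#define FUZZ_CLIP  0.8f    /* clipping threshold (full scale = 1.0), assumed */
#define FUZZ_MIX   0.5f    /* wet/dry mix, assumed */

/* Fuzz: hard-clip a heavily amplified copy and mix it with the original. */
float fuzz(float x)
{
    float z = FUZZ_GAIN * x;
    if (z >  FUZZ_CLIP) z =  FUZZ_CLIP;
    if (z < -FUZZ_CLIP) z = -FUZZ_CLIP;
    return (1.0f - FUZZ_MIX) * x + FUZZ_MIX * z;
}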


Section 3.0

REAL-TIME EMBEDDED DSP APPLICATION


3.1 REAL-TIME DSP

DSP operations begin with the acquisition of the sampled signal – the digital signal – that is to be processed. If the digital signal is stored for later retrieval and processing, this cannot be called real-time processing. However, if the digital samples are processed as soon as they are acquired, this is real-time processing.

Real-time processing implies that the processing of a particular sample must occur within a

given time period or the system will not operate properly. In a hard real-time system, the

system will fail if the processing is not done in a timely manner; whereas, in a soft real-time

system, the system will tolerate some failures to meet real-time targets and still continue to

operate, but with some degradation.

3.2 TMS320C6713 DSP ARCHITECTURE

Figure 3.1: Block Diagram of the TMS320C67xx DSP

The TMS320C67xx DSP is an eight-way VLIW implementation of a RISC load-store architecture. The CPU core contains 32 general-purpose registers (A0 to A15, B0 to B15) and eight functional units split into two clusters, as shown in Figure 3.1. The statically scheduled VLIW architecture fetches eight instructions in parallel (a fetch packet) and passes them simultaneously to its eight functional units. If a functional unit is not used, it is passed a no-operation (NOP). Each functional unit has a primary specialization (as shown in the table below), but most are capable of multiple operations.

The A and B register banks each have data buses for transferring data to and from their associated functional units, as well as for loading and storing operands. There are two cross paths that permit a single A-side register to be used by a B-side functional unit, and vice versa.


Unit  Integer Operations                           Floating-Point Operations
.L    Logical; arithmetic/compare                  Arithmetic; integer/floating-point conversions
.S    Shifts and bit fields; logical; arithmetic;  Compare; reciprocal; reciprocal square root;
      branches; constant generation                absolute value; single/double-precision conversions
.M    Multiply                                     Multiply
.D    Load and store; address calculation;         Load and store
      addition/subtraction

3.3 EMBEDDED SOFTWARE DEVELOPMENT CONSIDERATIONS

Real-time processing can be accomplished on a sample-by-sample basis. That is, an analog input sample x(t) is converted to digital form x(n) by the DSK's codec and transferred to the DSK's CPU for whatever processing is desired. The processed sample y(n) is then transferred to the DAC section of the codec, converted back to analog form y(t), and sent to an output device (a speaker in this case). Processing this way (sample by sample) has the advantage that the system's latency is minimized, since each sample is acted upon as soon as it arrives. However, sample-by-sample processing has serious drawbacks.

One implication of real-time sample-based DSP is that all processing must be completed within the time between samples. For fast sampling rates (48 kHz was used in this project, leaving only about 20.8 µs between samples), this quickly becomes impractical. Another implication is that only one sample is available for processing at any given time; this method obviously cannot be used for FFT implementations, as those operations require a contiguous block of sampled data to be available at once.

A further implication of real-time sample-based DSP is that the processor must respond to every interrupt from the devices (typically codecs) that act as data sources and sinks in order to perform the required data transfers. Each time this happens, the current processing is interrupted, the state of the processor is preserved, and control is transferred to the relevant ISR, which is then executed. This process is called context switching. Frequent context switching introduces additional inefficiencies such as pipeline flushes and cache misses. This overhead represents lost processing time and can significantly reduce the overall performance of the DSP.

It is especially for this last reason that frame-based DSP, rather than sample-based DSP, was used. Frame-based DSP works in conjunction with the DMA controllers. Once the DMA controller is programmed to respond to the codec (which sources and sinks the data), it automatically performs the required transfers to and from a memory buffer without the intervention of the processor. When a buffer has been filled or emptied, the DMA controller interrupts the processor. This frees the processor from the mundane task of repetitive data transfers and allows its resources to be focused on the computationally intensive processing once a full buffer of data is available, as sketched below.
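A minimal sketch of this frame-based scheme: the EDMA controller (configured elsewhere) fills one half of a ping-pong buffer while the CPU processes the other, and the completion interrupt merely hands over the filled half. The frame size, the ISR hookup, and the process_frame() placeholder are illustrative assumptions, not the project's actual code.

#define FRAME 256                        /* samples per frame, assumed */

/* Ping-pong buffers filled/emptied by the EDMA controller. */
static short rx_buf[2][FRAME], tx_buf[2][FRAME];
static volatile int ready_half = -1;     /* half that is ready for processing */

/* Called from the EDMA completion ISR each time a receive half is full. */
void edma_frame_isr(int half)
{
    ready_half = half;                   /* hand the filled half to the main loop */
}

/* Placeholder for the actual effect (echo, flanger, reverb, ...). */
static void process_frame(const short *in, short *out, int n)
{
    int i;
    for (i = 0; i < n; i++)
        out[i] = in[i];                  /* pass-through by default */
}

void main_loop(void)
{
    for (;;) {
        if (ready_half >= 0) {           /* a full frame is available */
            int h = ready_half;
            ready_half = -1;
            process_frame(rx_buf[h], tx_buf[h], FRAME);
        }
        /* otherwise the CPU is free for non-time-critical work */
    }
}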


3.4 SAMPLE CODE SNAPSHOT

Figure 3.2: Code Snapshot

The code snapshot in Figure 3.2 shows how TIMER0 is set up to synchronize the EDMA transfers.
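Since the figure itself is not reproduced here, the sketch below illustrates the general idea: write the timer period, then start the timer through its control register so that the timer event can serve as the synchronization event for the EDMA channel servicing the codec. The register addresses and bit positions are those of Timer 0 as given in SPRU190D but should be verified against it; the period value is arbitrary, and the EDMA channel configuration is assumed to be done elsewhere.

/* Timer 0 memory-mapped registers (C621x/C671x memory map per SPRU190D --
 * verify against the reference guide before use).                         */
#define TIMER0_CTL (*(volatile unsigned int *)0x01940000)
#define TIMER0_PRD (*(volatile unsigned int *)0x01940004)
#define TIMER0_CNT (*(volatile unsigned int *)0x01940008)

/* Control-register bits used here (positions per SPRU190D). */
#define TCTL_GO     (1u << 6)   /* reset and start the counter             */
#define TCTL_HLD    (1u << 7)   /* 1 = counter not held                    */
#define TCTL_CP     (1u << 8)   /* clock mode rather than pulse mode       */
#define TCTL_CLKSRC (1u << 9)   /* use the internal clock as timer input   */

void timer0_init(unsigned int period)
{
    TIMER0_CTL = 0;                                        /* hold timer   */
    TIMER0_CNT = 0;                                        /* clear count  */
    TIMER0_PRD = period;                                   /* set period   */
    TIMER0_CTL = TCTL_CLKSRC | TCTL_CP | TCTL_HLD | TCTL_GO;  /* start     */
    /* The timer event can now be selected as the synchronization event
     * for the EDMA channel that services the codec transfers.             */
}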

3.5 REFERENCES

• Texas Instruments, SPRU190D: TMS320C6000 Peripherals Reference Guide.

• Monson H. Hayes, Digital Signal Processing, 2012.

• James S. Walker, A Primer on Wavelets and Their Scientific Applications, 2008.

• Thad B. Welch, Cameron H. G. Wright and Michael G. Morrow, Real-Time Digital Signal Processing from MATLAB® to C with the TMS320C6x DSPs, 2012.

• Rulph Chassaing, Digital Signal Processing and Applications with the C6713 and C6416 DSK, 2005.

• The MathWorks, Inc., MATLAB®: The Language of Technical Computing.