

    CHAMELI DEVI INSTITUTE OF TECHNOLOGY

    & MANAGEMENT, INDORE

SYNOPSIS

    ON

    ACOUSTIC NOISE CANCELLATION

Submitted to: Mr. Shafi Patel

Submitted by:
Punita Sharma (0832EC071041)
Ruchi Dubey (0832EC071048)
Surabhi Sharma (0832EC071055)
Surbhi Jain (0832EC071056)


    CONTENTS

    Abstract

    Background Theory

    Introduction (Adaptive Filter)

    Block diagram

Algorithm:

o LMS
o NLMS

Tool Used (MATLAB)

References


ABSTRACT

Acoustic echo is a common occurrence in today's telecommunication systems. It occurs when an audio source and sink operate in full duplex mode; an example of this is a hands-free loudspeaker telephone. In this situation the received signal is output through the telephone loudspeaker (audio source), and this audio signal is then reverberated through the physical environment and picked up by the system's microphone (audio sink). The effect is the return to the distant user of time-delayed and attenuated images of their original speech signal. The signal interference caused by acoustic echo is distracting to both users and causes a reduction in the quality of the communication. This thesis focuses on the use of adaptive filtering techniques to reduce this unwanted echo, thus increasing communication quality. Adaptive filters are a class of filters that iteratively alter their parameters in order to minimise a function of the difference between a desired target output and their actual output. In the case of acoustic echo in telecommunications, the optimal output is an echoed signal that accurately emulates the unwanted echo signal; this is then used to negate the echo in the return signal. The better the adaptive filter emulates this echo, the more successful the cancellation will be. This thesis examines various techniques and algorithms of adaptive filtering, employing discrete signal processing in MATLAB. A real-time implementation of an adaptive echo cancellation system has also been developed.

The topic of this thesis is acoustic echo cancellation using digital signal processing. Acoustic echo occurs when an audio signal is reverberated in a real environment, resulting in the original intended signal plus attenuated, time-delayed images of this signal. This report will focus on the occurrence of acoustic echo in telecommunication systems. Such a system consists of coupled acoustic input and output devices, both of which are active concurrently. An example of this is a hands-free telephony system. In this scenario the system has both an active loudspeaker and an active microphone operating simultaneously, so the system acts as both a receiver and a transmitter in full duplex mode. When a signal is received by the system, it is output through the loudspeaker into an acoustic environment. This signal is reverberated within the environment and returned to the system via the microphone input. These reverberated signals contain time-delayed images of the original signal, which are then returned to the original sender. The occurrence of acoustic echo in speech transmission causes signal interference and reduced quality of communication (figure 1.1, where ak is the attenuation and tk is the time delay).

The method used to cancel the echo signal is known as adaptive filtering. Adaptive filters are dynamic filters which iteratively alter their characteristics in order to achieve an optimal desired output. An adaptive filter algorithmically alters its parameters in order to minimise a function of the difference between the desired output d(n) and its actual output y(n). This function is known as the cost function of the adaptive algorithm. Figure 1.2 shows a block diagram of the adaptive echo cancellation system implemented throughout this thesis. Here the filter h(n) represents the impulse response of the acoustic environment and w(n) represents the adaptive filter used to cancel the echo signal. The adaptive filter aims to equate its output y(n) to the desired output d(n) (the signal reverberated within the acoustic environment). At each iteration the error signal, e(n) = d(n) - y(n), is fed back into the filter, where the filter characteristics are altered accordingly.
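To make this structure concrete, the following MATLAB sketch generates an echoed (desired) signal d(n) by filtering a far-end signal x(n) through an assumed room impulse response h, and forms the error e(n) = d(n) - y(n) that the adaptive filter tries to drive to zero. The impulse response, sampling rate and input signal here are synthetic stand-ins, not measurements from the synopsis.

    fs = 8000;                        % assumed sampling rate in Hz
    x  = randn(fs, 1);                % stand-in for one second of far-end speech
    h  = zeros(400, 1);               % assumed room impulse response h(n):
    h(1)   = 1.0;                     %   direct path
    h(120) = 0.5;                     %   first reflection (attenuation a1, delay t1)
    h(320) = 0.3;                     %   second reflection (attenuation a2, delay t2)

    d = filter(h, 1, x);              % echoed signal picked up by the microphone

    y = zeros(size(d));               % adaptive filter output before any adaptation
    e = d - y;                        % error fed back each iteration: e(n) = d(n) - y(n)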

This report deals with acoustic echo as it applies to telecommunication systems, although the techniques will be applicable to a variety of other disciplines.

The goals of this project are as follows:

To examine adaptive filtering techniques as they apply to acoustic echo cancellation.

To simulate these adaptive filtering algorithms using MATLAB.

To develop a real-time acoustic echo cancellation system.


BACKGROUND THEORY

In order to understand the content presented in this thesis it is first necessary to provide some background information regarding digital signal theory. It will start out rather elementary and then progress to more complicated matters. Later chapters will build upon this theoretical basis in the derivation and implementation of the adaptive filtering techniques used in acoustic echo cancellation.

Acoustics

Acoustics is the interdisciplinary science that deals with the study of all mechanical waves in gases, liquids, and solids, including vibration, sound, ultrasound and infrasound. A scientist who works in the field of acoustics is an acoustician, while someone working in the field of acoustics technology may be called an acoustical or audio engineer. The application of acoustics can be seen in almost all aspects of modern society, with the most obvious being the audio and noise control industries.

Hearing is one of the most crucial means of survival in the animal world, and speech is one of the most distinctive characteristics of human development and culture. So it is no surprise that the science of acoustics spreads across so many facets of our society: music, medicine, architecture, industrial production, warfare and more. Art, craft, science and technology have provoked one another to advance the whole, as in many other fields of knowledge. Lindsay's wheel of acoustics is a well-accepted overview of the different fields in acoustics.

The word "acoustic" is derived from the Greek word akoustikos, meaning "of or for hearing, ready to hear", and that from akoustos, "heard, audible", which in turn derives from the verb akouo, "I hear". The Latin synonym is "sonic", after which the term "sonics" used to be a synonym for acoustics and later a branch of acoustics. After acousticians had extended their studies to frequencies above and below the audible range, it became conventional to identify these frequency ranges as "ultrasonic" and "infrasonic" respectively, while letting the word "acoustic" refer to the entire frequency range without limit.


ADAPTIVE FILTERS

Introduction

Figure 3.1: Adaptive filter block diagram (Farhang-Boroujeny 1999, p. 120).

Here w represents the coefficients of the FIR filter tap weight vector, x(n) is the input vector of samples, z^-1 is a delay of one sample period, y(n) is the adaptive filter output, d(n) is the desired echoed signal and e(n) is the estimation error at time n. The aim of an adaptive filter is to calculate the difference between the desired signal and the adaptive filter output, e(n). This error signal is fed back into the adaptive filter and its coefficients are changed algorithmically in order to minimise a function of this difference, known as the cost function. In the case of acoustic echo cancellation, the optimal output of the adaptive filter is equal in value to the unwanted echoed signal. When the adaptive filter output is equal to the desired signal, the error signal goes to zero. In this situation the echoed signal would be completely cancelled and the far user would not hear any of their original speech returned to them.

This section examines adaptive filters and the various algorithms utilised. The various methods used in this thesis can be divided into two groups based on their cost functions. The first class are known as mean square error (MSE) adaptive filters; they aim to minimise a cost function equal to the expectation of the square of the difference between the desired signal d(n) and the actual output of the adaptive filter y(n) (equation 3.1):

    ξ(n) = E[e^2(n)] = E[(d(n) - y(n))^2]

The second class are known as recursive least squares (RLS) adaptive filters; they aim to minimise a cost function equal to the weighted sum of the squares of the difference between the desired and the actual output of the adaptive filter for different time instances. The cost function is recursive in the sense that, unlike the MSE cost function, weighted previous values of the estimation error are also considered. The cost function is shown in equation 3.2; the parameter λ is in the range 0 < λ ≤ 1, and values of λ less than one cause previous values to have an increasingly negligible effect on the updating of the filter tap weights (Diniz 1997, p. 184). The value of 1/(1-λ) is a measure of the memory of the algorithm (Farhang-Boroujeny 1999, p. 420); this thesis will primarily deal with infinite memory, i.e. λ = 1. The cost function for the RLS algorithm, ζ(n), is stated in equation 3.2.
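Equation 3.2 itself does not survive in this transcript; the standard exponentially weighted least-squares cost function that the description above refers to is assumed to have the form

    ζ(n) = Σ λ^(n-k) e^2(k),   summed over k = 1, ..., n.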

Here k = 1 corresponds to the time at which the RLS algorithm commences. Later we will see that in practice not all previous values are considered; rather, only the previous N (corresponding to the filter order) error signals are considered. As stated previously, considering that the number of processes in our ensemble averages is equal to one, the expectation of an input or output value is equal to the actual value at a unique time instance. However, for the purposes of deriving these algorithms, the expectation notation shall still be used.

    Block Diagram of generic echo cancellation


Least mean square (LMS) algorithm

The least mean square (LMS) algorithm was first developed by Widrow and Hoff in 1959 through their studies of pattern recognition (Haykin 1991, p. 67). From there it has become one of the most widely used algorithms in adaptive filtering. The LMS algorithm is a type of adaptive filter known as a stochastic gradient-based algorithm, as it utilises the gradient vector of the filter tap weights to converge on the optimal Wiener solution. It is well known and widely used due to its computational simplicity. It is this simplicity that has made it the benchmark against which all other adaptive filtering algorithms are judged (Haykin 1991, p. 299). With each iteration of the LMS algorithm, the filter tap weights of the adaptive filter are updated according to the following formula (Farhang-Boroujeny 1999, p. 141):

    w(n+1) = w(n) + 2μ e(n) x(n)    (eq 3.7)

Here x(n) is the input vector of time-delayed input values,

    x(n) = [x(n) x(n-1) x(n-2) ... x(n-N+1)]^T

and the vector

    w(n) = [w0(n) w1(n) w2(n) ... wN-1(n)]^T

represents the coefficients of the adaptive FIR filter tap weight vector at time n. The parameter μ is known as the step size parameter and is a small positive constant. This step size parameter controls the influence of the updating factor. Selection of a suitable value for μ is imperative to the performance of the LMS algorithm: if the value is too small, the time the adaptive filter takes to converge on the optimal solution will be too long; if μ is too large, the adaptive filter becomes unstable and its output diverges.

Derivation of the LMS algorithm

The derivation of the LMS algorithm builds upon the theory of the Wiener solution for the optimal filter tap weights, wo, as outlined in section 3.2. It also depends on the steepest-descent algorithm as stated in equation 3.8; this is a formula which updates the filter coefficients using the current tap weight vector and the current gradient of the cost function with respect to the filter tap weight coefficient vector.


As the negative gradient vector points in the direction of steepest descent for the N-dimensional quadratic cost function, each recursion shifts the value of the filter coefficients closer toward their optimum value, which corresponds to the minimum achievable value of the cost function, ξ(n). This derivation is based on Diniz 1997, pp. 71-3 and Farhang-Boroujeny 1999, pp. 139-41. The LMS algorithm is a random-process implementation of the steepest descent algorithm from equation 3.8. Here the expectation of the error signal is not known, so the instantaneous value is used as an estimate. The steepest descent algorithm then becomes equation 3.9.

The gradient of the cost function, ξ(n), can alternatively be expressed in terms of the error and the input vector. Substituting this into the steepest descent algorithm of equation 3.8, we arrive at the recursion for the LMS adaptive algorithm.
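The equations referenced in this derivation (3.8-3.11) are not reproduced in the transcript; their standard forms, consistent with the recursion already given in equation 3.7, are assumed to be:

    steepest descent (eq 3.8):        w(n+1) = w(n) - μ ∇ξ(n)
    instantaneous gradient estimate:  ∇ξ(n) ≈ ∇e^2(n) = -2 e(n) x(n)
    resulting LMS recursion:          w(n+1) = w(n) + 2μ e(n) x(n)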

Implementation of the LMS algorithm

Each iteration of the LMS algorithm requires three distinct steps, in this order:

1. The output of the FIR filter, y(n), is calculated using equation 3.12.


2. The value of the error estimate e(n) is calculated using equation 3.13.

3. The tap weights of the FIR vector are updated in preparation for the next iteration, using equation 3.14.

The main reason for the LMS algorithm's popularity in adaptive filtering is its computational simplicity, which makes it easier to implement than all other commonly used adaptive algorithms. For each iteration the LMS algorithm requires 2N additions and 2N+1 multiplications (N for calculating the output y(n), one for 2μe(n), and an additional N for the scalar-by-vector multiplication) (Farhang-Boroujeny 1999, p. 141). A minimal sketch of one LMS iteration loop is given below.
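Since equations 3.12-3.14 are not reproduced in this transcript, the following MATLAB sketch shows one possible realisation of the three steps, built from the LMS recursion w(n+1) = w(n) + 2μe(n)x(n) stated earlier. The function name lms_cancel and the suggested argument values are illustrative assumptions, not taken from the synopsis.

    function [e, y, w] = lms_cancel(x, d, N, mu)
    % x  - far-end (loudspeaker) signal
    % d  - microphone signal containing the echo
    % N  - number of adaptive filter taps
    % mu - fixed step size
    w = zeros(N, 1);                      % adaptive filter tap weights
    x = x(:);  d = d(:);
    e = zeros(size(d));  y = zeros(size(d));
    xbuf = zeros(N, 1);                   % input vector x(n) of time-delayed samples
    for n = 1:length(d)
        xbuf = [x(n); xbuf(1:N-1)];       % shift in the newest input sample
        y(n) = w' * xbuf;                 % step 1: filter output y(n)
        e(n) = d(n) - y(n);               % step 2: estimation error e(n)
        w = w + 2 * mu * e(n) * xbuf;     % step 3: tap-weight update
    end
    end

A call such as [e, y, w] = lms_cancel(x, d, 256, 0.005) would adapt a 256-tap filter; the step size typically needs tuning to the input signal power, and too large a value makes the output diverge, as noted above.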


Normalised least mean square (NLMS) algorithm

One of the primary disadvantages of the LMS algorithm is having a fixed step size parameter for every iteration. This requires an understanding of the statistics of the input signal prior to commencing the adaptive filtering operation; in practice this is rarely achievable. Even if we assume the only signal to be input to the adaptive echo cancellation system is speech, there are still many factors, such as signal input power and amplitude, which will affect its performance.

The normalised least mean square (NLMS) algorithm is an extension of the LMS algorithm which bypasses this issue by selecting a different step size value, μ(n), for each iteration of the algorithm. This step size is proportional to the inverse of the total expected energy of the instantaneous values of the coefficients of the input vector x(n) (Farhang-Boroujeny 1999, p. 172). This sum of the expected energies of the input samples is also equivalent to the dot product of the input vector with itself, and to the trace of the input vector's autocorrelation matrix, R (Farhang-Boroujeny 1999, p. 173).

The recursion formula for the NLMS algorithm is stated in equation 3.16.
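Equation 3.16 is not reproduced in this transcript; the standard NLMS recursion that the surrounding description matches is assumed to be

    w(n+1) = w(n) + (α / (x^T(n) x(n))) e(n) x(n),

where α here stands for the constant step size parameter discussed at the end of this section (the original symbol does not survive in the transcript).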

Derivation of the NLMS algorithm

This derivation of the normalised least mean square algorithm is based on Farhang-Boroujeny 1999, pp. 172-175, and Diniz 1997, pp. 150-3. To derive the NLMS algorithm we consider the standard LMS recursion, for which we select a variable step size parameter, μ(n). This parameter is selected so that the error value, e+(n), will be minimised using the updated filter tap weights, w(n+1), and the current input vector, x(n).


Next we minimise (e+(n))^2 with respect to μ(n). Using this we can then find a value for μ(n) which forces e+(n) to zero.
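The intermediate result is not shown in this transcript; with the LMS recursion written as w(n+1) = w(n) + 2μ(n) e(n) x(n), the standard derivation gives the step size that forces e+(n) to zero as

    μ(n) = 1 / (2 x^T(n) x(n)).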

This μ(n) is then substituted into the standard LMS recursion, replacing the fixed step size μ and resulting in the NLMS recursion stated above.

Often the NLMS algorithm is expressed as equation 3.20, which is a slight modification of the standard NLMS algorithm detailed above: a small positive constant is added to the denominator in order to avoid division by zero when the values of the input vector are zero.

This modification was not implemented in the real-time implementation, as in practice the input signal is never allowed to reach zero due to noise from the microphone and from the A/D codec on the Texas Instruments DSK. The constant step size parameter is used to alter the convergence rate of the NLMS algorithm; it lies within the range 0 to 2.


Practical implementation is very similar to that of the LMS algorithm. Each iteration of the NLMS algorithm requires these steps in the following order (Farhang-Boroujeny 1999, p. 175); a minimal MATLAB sketch follows the list.

1. The output of the adaptive filter is calculated.

2. An error signal is calculated as the difference between the desired signal and the filter output.

    3. The step size value for the input vector is calculated.

    4. The filter tap weights are updated in preparation for the next iteration.
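A minimal MATLAB sketch of these four steps, assuming the standard NLMS recursion given earlier. The function name nlms_cancel, the regularising constant eps_reg and the parameter name alpha are illustrative assumptions, not taken from the synopsis.

    function [e, y, w] = nlms_cancel(x, d, N, alpha)
    % x     - far-end (loudspeaker) signal
    % d     - microphone signal containing the echo
    % N     - number of adaptive filter taps
    % alpha - constant step size, in the range 0 to 2
    eps_reg = 1e-6;                       % small constant to avoid division by zero
    w = zeros(N, 1);
    x = x(:);  d = d(:);
    e = zeros(size(d));  y = zeros(size(d));
    xbuf = zeros(N, 1);                   % input vector x(n) of time-delayed samples
    for n = 1:length(d)
        xbuf = [x(n); xbuf(1:N-1)];
        y(n) = w' * xbuf;                           % step 1: filter output
        e(n) = d(n) - y(n);                         % step 2: error signal
        mu_n = alpha / (eps_reg + xbuf' * xbuf);    % step 3: normalised step size
        w = w + mu_n * e(n) * xbuf;                 % step 4: tap-weight update
    end
    end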

Tool used (MATLAB)

MATLAB is a powerful, general-purpose mathematical software package. MATLAB possesses excellent graphics and matrix-handling capabilities. It integrates mathematical computing in a powerful language to provide a flexible environment for technical computing. The salient features of MATLAB are its in-built mathematical toolboxes and graphics functions. Additionally, external routines written in other languages such as C, C++, Fortran and Java can be integrated with MATLAB applications. MATLAB also supports importing data from files and other external devices. Most of the functions in MATLAB are matrix-oriented and can act on arrays of any appropriate dimension. MATLAB also has a separate toolbox for signal processing applications, which provided simpler solutions for many of the problems encountered in this research.

The MATLAB software environment suited the needs of this research for the following reasons:

1) The input signals (far-end and near-end talker signals) were voices. These voices were stored as wav files, and the wav files were easily imported into the code. The intermediate signals (echo signals) and output signals (the error signal and the signals obtained after echo cancellation) were also obtained as wav files. Thus the voice signals could literally be heard, which greatly aided judgments about the results obtained.

2) The signal processing toolbox has in-built functions for almost all signal processing applications. The toolbox improved the efficiency of the code, since these functions could be called wherever necessary instead of writing separate sub-routines.

3) Since MATLAB supports graphics, the results of a simulation could be represented in a graphical format with ease.

An example of this workflow is sketched below.
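The following MATLAB sketch imports two recordings, runs the NLMS canceller sketched above, and reviews the result audibly and graphically. The file names, filter order and step size are illustrative assumptions, not values from the synopsis.

    % Import the far-end and near-end recordings (illustrative file names).
    [x, fs] = audioread('far_end.wav');      % far-end talker / loudspeaker signal
    [d, ~]  = audioread('microphone.wav');   % microphone signal containing the echo
    x = x(:,1);  d = d(:,1);                 % use a single channel
    L = min(length(x), length(d));           % align the two recordings
    x = x(1:L);  d = d(1:L);

    % Run the adaptive echo canceller sketched earlier.
    [e, y, w] = nlms_cancel(x, d, 256, 0.5);

    % Listen to and plot the echo-cancelled (error) signal.
    sound(e, fs);
    t = (0:L-1)/fs;
    plot(t, d, t, e);
    xlabel('Time (s)'); ylabel('Amplitude');
    legend('Microphone signal', 'Echo-cancelled signal');
    audiowrite('echo_cancelled.wav', e, fs);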


References:

http://en.wikipedia.org/wiki/Adaptive_filter
http://cnx.org/content/m11909/latest/
http://ieeexplore.ieee.org/Xplore/login.jsp?url=http://ieeexplore.ieee.org/iel5/10241/32659/01529581.pdf&authDecision=-203
http://en.wikipedia.org/wiki/Echo_cancellation
http://www.seminarprojects.com/tag/echo-cancellation-using-matlab
http://ylikaaki.homeip.net/Papers/AEC/ACOUSTIC%20ECHO%20CANCELLATION%20USING%20DIGITAL%20SIGNAL%20PROCESSING.pdf