ELG5377 Illustration of Performance of Steepest Descent
Haykin 4e, Section 4.4
Eric Dubois
School of Electrical Engineering and Computer Science, University of Ottawa
November 2012



  • Second-order predictor for an AR(2) process

    $$u(n) = -a_1 u(n-1) - a_2 u(n-2) + v(n)$$

    We assume that $\sigma_u^2 = 1 = r(0)$.

    From the first Yule-Walker equation, $r(0)(-a_1) + r(1)(-a_2) = r(1)$, we find
    $$r(1) = -\frac{a_1}{1+a_2} = \rho.$$

    Thus the correlation matrix is
    $$\mathbf{R} = \begin{bmatrix} r(0) & r(1) \\ r(1) & r(0) \end{bmatrix} = \begin{bmatrix} 1 & \rho \\ \rho & 1 \end{bmatrix}.$$

    The optimal predictor is given by
    $$\mathbf{w}_0 = \begin{bmatrix} -a_1 \\ -a_2 \end{bmatrix}.$$

    The variance of the white noise is given by
    $$J_{\min} = \sigma_v^2 = \frac{(1-a_2)\left((1+a_2)^2 - a_1^2\right)}{1+a_2}.$$
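These quantities are easy to check numerically. The sketch below (plain Python; the coefficients $a_1 = -0.75$, $a_2 = 0.5$ are an arbitrary stable example, not taken from the slides) computes $\rho$, $\mathbf{R}$, $\mathbf{w}_0$, and verifies the $J_{\min}$ formula against the independent expression $J_{\min} = r(0) - \mathbf{p}^T\mathbf{w}_0$ with $\mathbf{p} = [r(1), r(2)]^T$ and $r(2)$ taken from the second Yule-Walker equation.

```python
# AR(2) predictor statistics, normalized so r(0) = sigma_u^2 = 1.
# Example coefficients (hypothetical, chosen only to satisfy stationarity):
a1, a2 = -0.75, 0.5

r0 = 1.0
rho = -a1 / (1 + a2)          # r(1), from the first Yule-Walker equation
r1 = rho
r2 = -a1 * r1 - a2 * r0       # second Yule-Walker equation

R = [[r0, r1], [r1, r0]]      # correlation matrix
w0 = [-a1, -a2]               # optimal predictor

# Slide formula for the white-noise variance:
J_min = (1 - a2) * ((1 + a2) ** 2 - a1 ** 2) / (1 + a2)

# Independent check: J_min = r(0) - p^T w0, with p = [r(1), r(2)]:
J_check = r0 - (r1 * w0[0] + r2 * w0[1])

print(rho, J_min, J_check)
```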

  • Eigenvalues and eigenvectors

    We find the eigenvalues by solving
    $$\det(\mathbf{R} - \lambda\mathbf{I}) = 1 - 2\lambda + \lambda^2 - \rho^2 = 0.$$
    $\lambda_1 = 1 + \rho = \lambda_{\max}$ and $\lambda_2 = 1 - \rho = \lambda_{\min}$.

    The orthonormal eigenvectors are found to be
    $$\mathbf{q}_1 = \frac{1}{\sqrt{2}}\begin{bmatrix}1\\1\end{bmatrix}, \qquad \mathbf{q}_2 = \frac{1}{\sqrt{2}}\begin{bmatrix}1\\-1\end{bmatrix}.$$

    Thus
    $$\mathbf{Q} = \frac{1}{\sqrt{2}}\begin{bmatrix}1 & 1\\ 1 & -1\end{bmatrix}, \qquad \mathbf{Q}^T\mathbf{R}\mathbf{Q} = \begin{bmatrix}1+\rho & 0\\ 0 & 1-\rho\end{bmatrix}.$$

    Eigenvalue spread is $\chi = \dfrac{\lambda_{\max}}{\lambda_{\min}} = \dfrac{1+\rho}{1-\rho}$.
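The diagonalization can be verified directly. A minimal sketch in plain Python ($\rho = 0.5$ is an arbitrary test value, not from the slides): it forms $\mathbf{Q}^T\mathbf{R}\mathbf{Q}$ and checks that it equals $\mathrm{diag}(1+\rho,\ 1-\rho)$.

```python
import math

rho = 0.5                      # arbitrary test value, 0 < rho < 1
R = [[1.0, rho], [rho, 1.0]]

# Eigenvalues from the characteristic equation 1 - 2*lam + lam^2 - rho^2 = 0:
lam_max, lam_min = 1 + rho, 1 - rho

# Q has the orthonormal eigenvectors q1, q2 as its columns.
s = 1 / math.sqrt(2)
Q = [[s, s], [s, -s]]

def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

D = matmul(transpose(Q), matmul(R, Q))   # should be diag(1+rho, 1-rho)
chi = lam_max / lam_min                  # eigenvalue spread
print(D, chi)
```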

  • Evolution of steepest descent

    From the definition,
    $$\mathbf{v}(k) = -\frac{1}{\sqrt{2}}\begin{bmatrix}1 & 1\\ 1 & -1\end{bmatrix}\begin{bmatrix}w_1(k) + a_1\\ w_2(k) + a_2\end{bmatrix}.$$

    Thus, if $\mathbf{w}(0) = \mathbf{0}$, then
    $$\mathbf{v}(0) = -\frac{1}{\sqrt{2}}\begin{bmatrix}a_1 + a_2\\ a_1 - a_2\end{bmatrix}.$$

    The evolution of the transformed tap vector is
    $$\mathbf{v}(k) = \begin{bmatrix}(1-\mu(1+\rho))^k\, v_1(0)\\ (1-\mu(1-\rho))^k\, v_2(0)\end{bmatrix}.$$

    For stability, we need
    $$0 < \mu < \frac{2}{1+\rho}.$$
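The convergence implied by these mode factors can be seen by iterating steepest descent directly, $\mathbf{w}(k+1) = \mathbf{w}(k) + \mu(\mathbf{p} - \mathbf{R}\mathbf{w}(k))$ with $\mathbf{p} = [r(1), r(2)]^T$, starting from $\mathbf{w}(0) = \mathbf{0}$. A minimal sketch (plain Python; $a_1 = -0.75$, $a_2 = 0.5$, $\mu = 0.3$ are illustrative choices, not slide values):

```python
# Steepest descent for the AR(2) predictor: w(k+1) = w(k) + mu*(p - R w(k)).
# a1, a2, mu are illustrative choices; mu satisfies 0 < mu < 2/(1+rho).
a1, a2, mu = -0.75, 0.5, 0.3

rho = -a1 / (1 + a2)                 # r(1), with r(0) = 1
r1, r2 = rho, -a1 * rho - a2         # Yule-Walker values r(1), r(2)
R = [[1.0, r1], [r1, 1.0]]
p = [r1, r2]

w = [0.0, 0.0]                       # w(0) = 0
for _ in range(500):
    grad = [p[0] - (R[0][0] * w[0] + R[0][1] * w[1]),
            p[1] - (R[1][0] * w[0] + R[1][1] * w[1])]
    w = [w[0] + mu * grad[0], w[1] + mu * grad[1]]

# Both modes contract (factors 1 - mu*(1±rho)), so w converges to
# the optimal predictor w0 = [-a1, -a2].
print(w)
```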

  • Test case parameters

    χ        ρ
    1.22     0.1
    3        0.5
    10       0.818
    100      0.980
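The parameter values above (0.1, 0.5, 0.818, 0.980) appear to be the correlation coefficients ρ corresponding to the eigenvalue spreads χ = 1.22, 3, 10, 100 used in the plots that follow, via the inverse of the spread formula: $\rho = (\chi-1)/(\chi+1)$. A quick numerical check (plain Python):

```python
# Invert the eigenvalue-spread formula chi = (1+rho)/(1-rho):
#   rho = (chi - 1)/(chi + 1)
for chi in (1.22, 3, 10, 100):
    rho = (chi - 1) / (chi + 1)
    print(chi, round(rho, 4))
```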

  • Test case parameters (with correction)

    χ        ρ
    1.22     0.1
    3        0.5
    10       0.818
    100      0.9802

  • Loci of $v_1(n)$ versus $v_2(n)$, with $\mu = 0.3$, $\chi = 1.22$

  • Loci of $v_1(n)$ versus $v_2(n)$, with $\mu = 0.3$, $\chi = 3$

  • Loci of $v_1(n)$ versus $v_2(n)$, with $\mu = 0.3$, $\chi = 10$

  • Loci of $v_1(n)$ versus $v_2(n)$, with $\mu = 0.3$, $\chi = 100$

  • Loci of $w_1(n)$ versus $w_2(n)$, with $\mu = 0.3$, $\chi = 1.22$

  • Loci of $w_1(n)$ versus $w_2(n)$, with $\mu = 0.3$, $\chi = 3$

  • Loci of $w_1(n)$ versus $w_2(n)$, with $\mu = 0.3$, $\chi = 10$

  • Loci of $w_1(n)$ versus $w_2(n)$, with $\mu = 0.3$, $\chi = 100$

  • Learning curve with $\mu = 0.3$ and four values of eigenvalue spread

    From Haykin, third edition

  • Learning curve with $\mu = 0.3$ and four values of eigenvalue spread

    Generated with MATLAB

    χ        τ1      τ2
    1.22     1.25    1.59
    3        0.84    3.08
    10       0.63    8.92
    100      0.55    83.95
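The tabulated values appear consistent (to within rounding) with the learning-curve time constant of each mode: assuming the excess MSE contribution of mode $k$ decays as $(1-\mu\lambda_k)^{2n}$, the time constant is $\tau_k = -1/\big(2\ln(1-\mu\lambda_k)\big)$, with $\lambda_1 = 1+\rho$, $\lambda_2 = 1-\rho$. A sketch reproducing the table (plain Python; the ρ values are those of the test cases):

```python
import math

mu = 0.3
# rho corresponding to chi = 1.22, 3, 10, 100 via rho = (chi-1)/(chi+1):
cases = {1.22: 0.1, 3: 0.5, 10: 0.8182, 100: 0.9802}

for chi, rho in cases.items():
    lam1, lam2 = 1 + rho, 1 - rho
    # Learning-curve time constant of mode k: tau_k = -1 / (2 ln(1 - mu*lam_k))
    tau1 = -1 / (2 * math.log(1 - mu * lam1))
    tau2 = -1 / (2 * math.log(1 - mu * lam2))
    print(chi, round(tau1, 2), round(tau2, 2))
```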

  • Learning curve with $\mu = 0.3$ and four values of eigenvalue spread

    Generated with MATLAB, first 40 iterations

  • Loci of $v_1(n)$ versus $v_2(n)$, with $\mu = 0.3$, $\chi = 10$

  • Loci of $v_1(n)$ versus $v_2(n)$, with $\mu = 1.0$, $\chi = 10$

  • Loci of $w_1(n)$ versus $w_2(n)$, with $\mu = 0.3$, $\chi = 10$

  • Loci of $w_1(n)$ versus $w_2(n)$, with $\mu = 1.0$, $\chi = 10$

  • Learning curve with $\mu = 1.0$ and four values of eigenvalue spread

    Generated with MATLAB

    χ        τ1      τ2
    1.22     0.22    0.22
    3        0.72    0.72
    10       2.49    2.49
    100      25.01   25.01
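The equal time constants at $\mu = 1.0$ are no accident: for this $\mathbf{R}$ the two mode factors have equal magnitude, since $1 - \mu(1+\rho) = -\rho$ and $1 - \mu(1-\rho) = \rho$, so both modes decay as $\rho^n$ and share the single time constant $\tau = -1/(2\ln\rho)$. A quick check (plain Python; same ρ values as the test cases):

```python
import math

mu = 1.0
for chi, rho in {1.22: 0.1, 3: 0.5, 10: 0.8182, 100: 0.9802}.items():
    # At mu = 1: |1 - mu*(1+rho)| = |1 - mu*(1-rho)| = rho, so both modes
    # share the single learning-curve time constant tau = -1/(2 ln rho).
    f1 = abs(1 - mu * (1 + rho))
    f2 = abs(1 - mu * (1 - rho))
    tau = -1 / (2 * math.log(rho))
    print(chi, f1, f2, round(tau, 2))
```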