TRANSCRIPT
TMA 4180 OPTIMERINGSTEORI
REVISION SPRING 2012
THE CONJUGATE GRADIENT METHOD
- TESTS AND CONVERGENCE
Harald E. Krogstad, IMF
CONVERGENCE TESTS
Generating the A matrix
% Define matrix
ndim = 100;
R = randn(ndim);              % Random matrix
npot = .5                     % Steers the eigenvalue spread
A = (R'*R)^npot;              % A matrix, > 0
lamb = eig(A);                % Eigenvalues
kappa = max(lamb)/min(lamb)   % The condition number
xsol = rand(ndim,1);          % The solution
b = A*xsol;                   % The b-vector
Norm2 = sqrt(xsol'*xsol);
NormA = sqrt(xsol'*A*xsol);
The C-G algorithm
x = zeros(size(b)); g = -b; p = -g;
%
for loop = 1:ndim
    Ap = A*p;
    alfa = -(p'*g)./(p'*Ap);
    x = x + alfa*p;
    g = g + alfa*Ap;                               % g = A*x - b;
    beta = (g'*Ap)./(p'*Ap);
    p = -g + beta*p;
    err2(loop) = sqrt((x-xsol)'*(x-xsol))/Norm2;
    errA(loop) = sqrt((x-xsol)'*A*(x-xsol))/NormA;
end;
Note: Every iteration requires only one matrix/vector operation!
Well-conditioned matrix
[Figure: eigenvalues of A (ndim = 100, npot = 0.1, kappa = 3.6247) and relative error (A-norm and 2-norm) versus iteration number.]
Medium-conditioned matrix
[Figure: eigenvalues of A (ndim = 100, npot = 0.5, kappa = 625.7325) and relative error (A-norm and 2-norm) versus iteration number.]
Ill-conditioned matrix
[Figure: eigenvalues of A (ndim = 100, npot = 1.3, kappa = 1.86x10^7) and relative error (A-norm and 2-norm) versus iteration number.]
ERROR ANALYSIS

Krylov sequence:

    x_k \in x_0 + \mathrm{span}\{ g_0, A g_0, A^2 g_0, \dots, A^{k-1} g_0 \}

so that

    x_k = x_0 + \sum_{j=0}^{k-1} \gamma_j A^j g_0 = x_0 + P_k(A) g_0 = x_0 + P_k(A) (A x_0 - b),

where P_k is a matrix polynomial of degree k - 1. Since b = A x^*, we have g_0 = A (x_0 - x^*), and therefore

    x_k - x^* = (x_0 - x^*) + P_k(A) A (x_0 - x^*) = Q_k(A) (x_0 - x^*),

where Q_k(\lambda) = 1 + \lambda P_k(\lambda) is a polynomial of degree k with Q_k(0) = 1.

Let \{\lambda_1, \dots, \lambda_n\} be the eigenvalues of A with orthonormal eigenvectors e_i, and expand x_0 - x^* = \sum_{i=1}^n \xi_i e_i. Then

    \| x_k - x^* \|_A^2 = \sum_{i=1}^n \lambda_i Q_k(\lambda_i)^2 \xi_i^2, \qquad
    \| x_0 - x^* \|_A^2 = \sum_{i=1}^n \lambda_i \xi_i^2.
THE EXACT ERROR BOUND

    \| x_k - x^* \|_A^2 = \sum_{j=1}^n \lambda_j Q_k(\lambda_j)^2 \xi_j^2, \qquad x_0 - x^* = \sum_{j=1}^n \xi_j e_j,

where, because the CG iterate makes this A-norm as small as possible,

    Q_k = \arg\min_{Q \text{ of order } k,\ Q(0) = 1} \ \sum_{j=1}^n \lambda_j Q(\lambda_j)^2 \xi_j^2.
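This optimality property can be checked directly for small k: the CG iterate should coincide with the A-norm-optimal point of the Krylov subspace. A hypothetical NumPy sketch (the test matrix, seed, and the normal-equations route are my own, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 30, 5
R = rng.standard_normal((n, n))
A = R.T @ R + n * np.eye(n)      # SPD and well conditioned
xsol = rng.random(n)
b = A @ xsol

# k steps of CG from x0 = 0 (same formulas as the MATLAB loop).
x = np.zeros(n)
g, p = -b, b.copy()
for _ in range(k):
    Ap = A @ p
    alfa = -(p @ g) / (p @ Ap)
    x = x + alfa * p
    g = g + alfa * Ap
    beta = (g @ Ap) / (p @ Ap)
    p = -g + beta * p

# Direct minimization of ||x - x*||_A over x0 + span{g0, A g0, ..., A^{k-1} g0}:
# with x0 = 0 this means solving K'AK c = K'b (normal equations in the
# A-inner product) and setting x_opt = K c.
g0 = -b
K = np.column_stack([np.linalg.matrix_power(A, j) @ g0 for j in range(k)])
K = K / np.linalg.norm(K, axis=0)        # scale columns for conditioning
c = np.linalg.solve(K.T @ A @ K, K.T @ b)
x_opt = K @ c

assert np.allclose(x, x_opt, atol=1e-6)  # CG iterate = optimal Krylov point
```

For larger k the monomial Krylov basis becomes too ill-conditioned for the normal equations, which is exactly why CG builds conjugate directions instead.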
A more convenient form: for any polynomial Q of order k with Q(0) = 1,

    \| x_k - x^* \|_A \le \max_i | Q(\lambda_i) | \ \| x_0 - x^* \|_A,

where \lambda_1, \dots, \lambda_n are the n eigenvalues of A (some may be equal).
For the best universal bound on the convergence of the CG method (if we only know the extreme eigenvalues \lambda_1 and \lambda_n), we need to solve

    \arg\min_{Q \text{ of order } k,\ Q(0) = 1} \ \max_{\lambda_1 \le \lambda \le \lambda_n} | Q(\lambda) |.
Solution:

    Q_k(\lambda) = T_k\!\left( \frac{\lambda_n + \lambda_1 - 2\lambda}{\lambda_n - \lambda_1} \right) \Big/ \, T_k\!\left( \frac{\lambda_n + \lambda_1}{\lambda_n - \lambda_1} \right)
The Chebyshev Polynomials - the flattest polynomials in the universe!

    T_k(x) = \cos( k \arccos x ) \quad \text{(interval } [-1, 1] \text{)}

    T_0(x) = 1
    T_1(x) = x
    T_2(x) = 2x^2 - 1
    T_3(x) = 4x^3 - 3x
    T_4(x) = 8x^4 - 8x^2 + 1
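The explicit forms above can be verified numerically; a small sketch (hypothetical, not from the slides) using the three-term recurrence T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x):

```python
import numpy as np

def cheb(k, x):
    """Chebyshev polynomial T_k(x) via T_{k+1} = 2x T_k - T_{k-1}."""
    t_prev, t = np.ones_like(x), x
    if k == 0:
        return t_prev
    for _ in range(k - 1):
        t_prev, t = t, 2 * x * t - t_prev
    return t

x = np.linspace(-1, 1, 201)
# On [-1, 1], T_k(x) = cos(k arccos x), which is why |T_k| <= 1 there.
for k in range(5):
    assert np.allclose(cheb(k, x), np.cos(k * np.arccos(x)))
# Explicit forms from the slide:
assert np.allclose(cheb(3, x), 4 * x**3 - 3 * x)
assert np.allclose(cheb(4, x), 8 * x**4 - 8 * x**2 + 1)
```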
[Figure: the optimal Chebyshev polynomials on the interval [0.3, 2.0].]
On the interval [\lambda_1, \lambda_n], with the condition number \kappa = \lambda_n / \lambda_1,

    \max_{\lambda_1 \le \lambda \le \lambda_n} | Q_k(\lambda) | = 1 \Big/ \, T_k\!\left( \frac{\lambda_n + \lambda_1}{\lambda_n - \lambda_1} \right) \le 2 \left( \frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1} \right)^k,

and hence

    \| x_k - x^* \|_A \le 2 \left( \frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1} \right)^k \| x_0 - x^* \|_A.
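As a numerical sanity check (a hypothetical NumPy sketch, not from the slides; the SPD test matrix and seed are my own), one can run CG and confirm that the A-norm error stays below this Chebyshev bound at every step:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
R = rng.standard_normal((n, n))
A = R.T @ R + n * np.eye(n)              # SPD test matrix
xsol = rng.random(n)
b = A @ xsol

lam = np.linalg.eigvalsh(A)
kappa = lam[-1] / lam[0]                 # condition number
rho = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)

def a_norm(v):
    return np.sqrt(v @ A @ v)

x = np.zeros(n)
g, p = -b, b.copy()
e0 = a_norm(x - xsol)                    # ||x_0 - x*||_A
for k in range(1, n + 1):
    Ap = A @ p
    alfa = -(p @ g) / (p @ Ap)
    x = x + alfa * p
    g = g + alfa * Ap
    beta = (g @ Ap) / (p @ Ap)
    p = -g + beta * p
    # The bound must hold at every step (small slack for rounding):
    assert a_norm(x - xsol) <= 2 * rho**k * e0 + 1e-8 * e0
```

In practice CG is usually much faster than this bound suggests, since it exploits the whole eigenvalue distribution, not just the extremes.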
(more info in the note on the web)