Kasetsart University Workshop
Multigrid methods: An introduction
Dr. Anand Pardhanani
Mathematics Department
Earlham College
Richmond, Indiana
USA
A copy of these slides is available at
http://www.earlham.edu/~pardhan/kaset/workshop/
Topics to be covered
• Outline of PDE solution approach
• Algebraic system solvers
• Basic iterative methods
• Motivation for multigrid
• Basic multigrid ingredients
• Cycling strategies
• Generalization to nonlinear problems
General notation
Scalars → a, b, c, · · ·
Vectors → a, b, c, · · ·
Matrices → A,B,C, · · ·
PDE Solution Outline
General PDE problem:
Lu = f in Ω
Bu = g on ∂Ω
where L = PDE operator, B = boundary operator, Ω = domain,
∂Ω = domain boundary, and u = unknown solution
Specific example:

∂²u/∂x² = 625 e^{25x} / (e^{25} − 1)   in 0 < x < 1
u(0) = 0, u(1) = 1

The analytical solution to this problem is:

u(x) = (e^{25x} − 1) / (e^{25} − 1)
Numerical solution method:
1. Discretize domain: Ω→ Ωh
=⇒ Construct suitable mesh (grid)
2. Discretize PDE on Ωh: Lu = f → A_h u_h = b_h
=⇒ Use finite-difference, finite-element, finite-volume ...
=⇒ PDE becomes algebraic system
3. Solve algebraic system for uh
4. Interpolate discrete solution uh
1-D example:
1. Grid generation (uniform spacing h with N nodes)
•—•—•—•—•—•—•—•—•   Ωh, uh
1  2  · · ·  i−1  i  i+1  · · ·  N
2. PDE discretization: Use finite difference
At any node i, from Taylor series
u_{i+1} = u_i + h u′_i + (h²/2) u″_i + (h³/6) u‴_i + (h⁴/24) u⁗_i + · · ·
u_{i−1} = u_i − h u′_i + (h²/2) u″_i − (h³/6) u‴_i + (h⁴/24) u⁗_i + · · ·

Add the two and get the approximation

∂²u/∂x² ≈ (u_{i−1} − 2u_i + u_{i+1}) / h²
Therefore, our PDE problem

∂²u/∂x² = f(x)   becomes   (u_{i−1} − 2u_i + u_{i+1}) / h² ≈ f_i

Assemble over entire grid and get algebraic system

A_h u_h = b_h

where

u_h = [u_1, u_2, · · · , u_i, · · · , u_N]^T
b_h = h² [0, f_2, f_3, · · · , f_i, · · · , 1/h²]^T

A_h =
[ 1    0    0    0   · · ·                ]
[ 1   −2    1    0   · · ·                ]
[ 0    1   −2    1   · · ·                ]
[               . . .                     ]
[ 0    0   · · ·      1   −2    1         ]
[ 0    0   · · ·      0    0    1         ]

and

f_i = 625 e^{25x_i} / (e^{25} − 1)
3. Solve N ×N algebraic system for uh
4. Interpolate uh between nodes if necessary
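The four steps above can be sketched in code; a minimal illustration for the example problem (the grid size N = 65 and the dense direct solver in step 3 are our choices, not from the slides):

```python
import numpy as np

# Steps 1-4 for u'' = 625 e^{25x}/(e^{25}-1), u(0)=0, u(1)=1,
# with second-order central differences on a uniform grid.
N = 65                          # number of nodes, boundaries included (our choice)
x = np.linspace(0.0, 1.0, N)    # step 1: uniform grid, spacing h
h = x[1] - x[0]
f = 625.0 * np.exp(25.0 * x) / np.expm1(25.0)

# Step 2: assemble A_h and b_h (first and last rows enforce the boundary values)
A = np.zeros((N, N))
b = np.zeros(N)
A[0, 0] = 1.0;   b[0] = 0.0
A[-1, -1] = 1.0; b[-1] = 1.0
for i in range(1, N - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
    b[i] = h**2 * f[i]

# Step 3: solve the algebraic system (a dense direct solve, for reference)
u = np.linalg.solve(A, b)

# Compare with the analytical solution u(x) = (e^{25x} - 1)/(e^{25} - 1)
u_exact = np.expm1(25.0 * x) / np.expm1(25.0)
print(np.max(np.abs(u - u_exact)))
```

The solution has a sharp boundary layer near x = 1, so the error on this modest grid is visible but small; refining h shrinks it at second order.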
Taylor series centered at x_i for deriving other finite difference approximations

Assume the following uniform mesh structure (spacing h):

i−4  i−3  i−2  i−1  i  i+1  i+2  i+3  i+4

Taylor series expansions of function u(x) about node i:

u_{i−4} = u_i − 4h u′_i + 16 (h²/2) u″_i − 64 (h³/6) u‴_i + 256 (h⁴/24) u⁗_i − 1024 (h⁵/120) u⁽⁵⁾_i + · · ·   (1)
u_{i−3} = u_i − 3h u′_i + 9 (h²/2) u″_i − 27 (h³/6) u‴_i + 81 (h⁴/24) u⁗_i − 243 (h⁵/120) u⁽⁵⁾_i + · · ·   (2)
u_{i−2} = u_i − 2h u′_i + 4 (h²/2) u″_i − 8 (h³/6) u‴_i + 16 (h⁴/24) u⁗_i − 32 (h⁵/120) u⁽⁵⁾_i + · · ·   (3)
u_{i−1} = u_i − h u′_i + (h²/2) u″_i − (h³/6) u‴_i + (h⁴/24) u⁗_i − (h⁵/120) u⁽⁵⁾_i + · · ·   (4)
u_{i+1} = u_i + h u′_i + (h²/2) u″_i + (h³/6) u‴_i + (h⁴/24) u⁗_i + (h⁵/120) u⁽⁵⁾_i + · · ·   (5)
u_{i+2} = u_i + 2h u′_i + 4 (h²/2) u″_i + 8 (h³/6) u‴_i + 16 (h⁴/24) u⁗_i + 32 (h⁵/120) u⁽⁵⁾_i + · · ·   (6)
u_{i+3} = u_i + 3h u′_i + 9 (h²/2) u″_i + 27 (h³/6) u‴_i + 81 (h⁴/24) u⁗_i + 243 (h⁵/120) u⁽⁵⁾_i + · · ·   (7)
u_{i+4} = u_i + 4h u′_i + 16 (h²/2) u″_i + 64 (h³/6) u‴_i + 256 (h⁴/24) u⁗_i + 1024 (h⁵/120) u⁽⁵⁾_i + · · ·   (8)
Remarks:
• PDE transformed to system of algebraic equations
• With N grid points:
– Size of algebraic system = N × N
– Number of unknowns = O(N)
• For large N , algebraic solution dominates computational effort
(CPU time)
• Efficient algebraic solvers are very important
• Observe that Ah is very sparse; this is typical in PDE problems
Key point: Efficient algebraic solvers are crucial for solving PDEs efficiently
Multigrid methods are basically very efficient algebraic system solvers
Linear Algebraic System Solvers
• Algebraic problem
Au = b
Given A (N ×N matrix) and b (N -vector), solve for u.
• Two main approaches:
(1) Direct methods (Gauss elimination & variants)
(2) Iterative methods
• For large, sparse, “well-behaved” matrices, iterative methods are
much faster
• Multigrid methods are a special form of iterative methods
• We’ll focus on iterative methods first
Basic Iterative Methods: An illustration
Suppose we want to solve

[ 1  −2   1 ] [ u1 ]   [ 1 ]
[−1   1   0 ] [ u2 ] = [ 2 ]
[ 2   1   1 ] [ u3 ]   [ 3 ]

• Our system can be written as

u1 − 2u2 + u3 = 1
−u1 + u2 = 2
2u1 + u2 + u3 = 3

=⇒  u1 = 1 + 2u2 − u3,   u2 = 2 + u1,   u3 = 3 − 2u1 − u2

• Now consider the iteration scheme

u1^{k+1} = 1 + 2u2^k − u3^k
u2^{k+1} = 2 + u1^k
u3^{k+1} = 3 − 2u1^k − u2^k

where k denotes the iteration number

• Suppose the initial guess is u⁰ = [0 0 0]^T. Then we get

u¹ = [1 2 3]^T
u² = [2 3 −1]^T
u³ = [8 4 −4]^T
etc.
• This is the classic Jacobi method
• If it converges, we should eventually get the exact solution
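The iterates above are easy to check in code; a small sketch (the scheme and the three iterates come straight from the slide):

```python
# One Jacobi update for the 3x3 example system: each new component is
# computed from the previous iterate only.
def jacobi_step(u):
    u1, u2, u3 = u
    return [1 + 2 * u2 - u3, 2 + u1, 3 - 2 * u1 - u2]

u = [0, 0, 0]                 # initial guess u^0
for k in range(3):
    u = jacobi_step(u)
    print(u)
# prints [1, 2, 3], then [2, 3, -1], then [8, 4, -4]
```

The iterates are growing rather than settling down, which previews the convergence question addressed below: this particular matrix gives a divergent Jacobi iteration.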
Iterative Methods: More general form
• Recap problem statement:
Au = b (9)
Given A and b, solve for u
• Iterative procedure:
1. Pick arbitrary initial guess u0
2. Recursively update using the formula
uk+1 = [I −Q−1A]uk + Q−1b (10)
where k = 0, 1, 2, · · · is the iteration index, and Q is a matrix
that depends on the specific iterative method
• Example:
– Jacobi method =⇒ Q = D
– Gauss-Seidel method =⇒ Q = D + L
where D and L are the diagonal and lower-triangular parts
of A
• The form (10) is useful for analysis, but
– in practice Q is not explicitly computed
– we often rewrite (10) in the form
uk+1 = Ruk + g (11)
where R is the “iteration matrix” and g is a constant vector
Practical Implementation:
• Rewrite algebraic system (9) as:

Σ_{j=1}^{N} a_ij u_j = b_i,   i = 1, 2, · · · , N

• Split LHS into 3 parts

Σ_{j=1}^{i−1} a_ij u_j + a_ii u_i + Σ_{j=i+1}^{N} a_ij u_j = b_i

=⇒ a_ii u_i = b_i − [ Σ_{j=1}^{i−1} a_ij u_j + Σ_{j=i+1}^{N} a_ij u_j ]   (12)

• Iteration formulas:

– Jacobi

a_ii u_i^{new} = b_i − [ Σ_{j=1}^{i−1} a_ij u_j^{old} + Σ_{j=i+1}^{N} a_ij u_j^{old} ]   (13)

– Gauss-Seidel

a_ii u_i^{new} = b_i − [ Σ_{j=1}^{i−1} a_ij u_j^{new} + Σ_{j=i+1}^{N} a_ij u_j^{old} ]   (14)
• Recap of iterative procedure:

1. Pick initial guess u⁰ = [u⁰_1, u⁰_2, · · · , u⁰_N]
2. Recursively apply iterative formula:

for k=1:kend    % (Iteration loop)
  for i=1:N     % (Loop over rows of algebraic system)
    u_i^{k+1} = [ b_i − Σ_{j=1}^{i−1} a_ij u_j^ρ − Σ_{j=i+1}^{N} a_ij u_j^k ] / a_ii
  end
end

where ρ = k for Jacobi, and ρ = k + 1 for Gauss-Seidel
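The double loop above translates directly into code; a sketch in Python rather than the slides' MATLAB-style pseudocode (the small diagonally dominant test system is our choice):

```python
import numpy as np

# Row-wise relaxation sweep, following the loop above. "jacobi" reads only
# the previous iterate; "gauss-seidel" reuses components already updated
# in the current sweep (rho = k+1 for j < i).
def relax(A, b, u0, kend, method="jacobi"):
    u = np.array(u0, dtype=float)
    N = len(b)
    for k in range(kend):
        u_old = u.copy()
        src = u_old if method == "jacobi" else u   # u already holds new values
        for i in range(N):
            s = sum(A[i, j] * src[j] for j in range(i)) \
              + sum(A[i, j] * u_old[j] for j in range(i + 1, N))
            u[i] = (b[i] - s) / A[i, i]
    return u

# Small diagonally dominant test: both methods converge to A^{-1} b
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(relax(A, b, [0, 0], 50, "gauss-seidel"))
```

For this 2×2 system the exact solution is [1/11, 7/11]; Gauss-Seidel reaches it in far fewer sweeps than Jacobi, as the convergence discussion below suggests.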
Variants of Jacobi & Gauss-Seidel:
• Damped Jacobi:
At each iteration k + 1, do the following:
(1) Compute Jacobi iterate, say uk+1, as before
(2) Set uk+1 = ω uk+1 + (1− ω) uk, (0 < ω ≤ 1)
• SOR (Successive Over-Relaxation):
At each iteration k + 1, do the following:
(1) Compute Gauss-Seidel iterate uk+1
(2) Set uk+1 = ω uk+1 + (1− ω) uk, (0 < ω < 2)
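A minimal sketch of the damped variant, written matrix-wise as u_half = u + D⁻¹(b − Au) followed by the ω-blend in step (2) (the test system and ω = 2/3 are our choices):

```python
import numpy as np

def weighted_jacobi(A, b, u, omega):
    D = np.diag(A)                              # diagonal of A
    u_half = u + (b - A @ u) / D                # (1) plain Jacobi iterate
    return omega * u_half + (1 - omega) * u     # (2) blend with previous iterate

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
u = np.zeros(2)
for _ in range(100):
    u = weighted_jacobi(A, b, u, omega=2/3)
print(u)   # approaches the exact solution [1/11, 7/11]
```

SOR is the same two-step pattern with a Gauss-Seidel iterate in step (1) and 0 < ω < 2.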
Convergence of Iterative Methods
• Depends on properties of A
• Assume A arises from discretizing elliptic PDE
• Typical behavior
– Fast convergence in 1st few iterations
– Slow convergence thereafter
Reason for such convergence behavior:
• Let initial error = e⁰
• e⁰ can be viewed as superposition of discrete Fourier modes

e⁰ = Σ_{n=0}^{N−1} α_n w_n,   e.g., w_n = {cos(nπx_i)}_{i=1}^{N}

Low n =⇒ low frequency
High n =⇒ high frequency
• Iter. methods not equally effective on all error frequencies
(1) More effective on high frequencies (HF)
(see matlab animation)
(2) Less effective on low frequencies (LF)
• Quick elimination of HF errors leads to rapid convergence initially; lingering LF errors slow convergence thereafter
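This frequency-dependent behavior can be demonstrated numerically; a sketch on A u = 0 for the 1-D model problem, where the exact solution is zero and the iterate itself is the error (N, ω = 2/3, the sweep count, and the two mode numbers are our choices):

```python
import numpy as np

# Weighted Jacobi on A u = 0, starting from a single Fourier sine mode.
# A is the 1-D model matrix tridiag(1, -2, 1) on N interior points.
N = 63
h = 1.0 / (N + 1)
x = np.arange(1, N + 1) * h
A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1))

def sweep(u, omega=2/3):             # one weighted Jacobi sweep (b = 0)
    return u + omega * (0.0 - A @ u) / np.diag(A)

final = {}
for k_mode in (3, 48):               # one low- and one high-frequency mode
    e = np.sin(k_mode * np.pi * x)
    for _ in range(10):
        e = sweep(e)
    final[k_mode] = np.max(np.abs(e))
    print(k_mode, final[k_mode])
```

After only 10 sweeps the high-frequency error (k = 48) is reduced to roundoff-level noise, while the low-frequency error (k = 3) is barely touched.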
Remarks:
• “High” frequency is defined relative to N (i.e., no. of grid points)
• Highest visible frequency (i.e., wN−1) depends on N
• HF on coarse grid looks like LF on fine grid
• Example: Ω1 → N points and Ω2 → 2N points
Highest frequency on Ω2 = 2× HF on Ω1
Motivation for Multigrid
• Iterative methods preferentially damp HF errors
– This is called “smoothing property”
• Overall convergence rate is slowed by LF errors
• HF and LF are defined relative to grid spacing
• To improve convergence rate, LF must be disguised as HF
• This is precisely what multigrid does
• Multigrid =⇒ use progressively coarser grids to attenuate LF
error modes
Convergence analysis in more detail
• Consider the algebraic system and the iteration algorithm
Au = b
u(k+1) = Ru(k) + g
• Clearly, the exact solution u is a fixed point, since
u = Ru + g ⇒ u = [I −Q−1A]u + Q−1b
which is always true
• Subtract [u(k+1) = Ru(k) + g] − [u = Ru + g].
• We get e(k+1) = Re(k)
where e(k) is the error at the kth iterate.
• Let e(0) be the initial error. Then we have
e(k) = Rk e(0)
• For convergence, we want Rk → 0 as k →∞
• From linear algebra we know that
Rk → 0 iff ρ(R) < 1
where ρ(R) is the spectral radius (eigenvalue of largest magnitude) of R
• ρ(R) < 1 guarantees the method converges, and ρ(R) is the
convergence rate
Rate of convergence
• Suppose we want to reduce the initial error by 1/10. Then

|e^{(M)}| ≤ |e^{(0)}| / 10
⇒ |R^M| |e^{(0)}| ≤ |e^{(0)}| / 10
⇒ ρ(R)^M ≤ 1/10
∴ M ≈ ln(1/10) / ln(ρ(R))
• M is the number of iterations needed to reduce error by 1/10
• Note that ρ(R)→ 0 ⇒ M → 0
and ρ(R)→ 1 ⇒ M →∞
• Thus ρ(R) is a key indicator of the convergence rate
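A worked instance of the estimate above, for a few spectral radii:

```python
import math

# M = ln(1/10) / ln(rho): sweeps needed to cut the error by a factor of 10
def iters_for_tenfold(rho):
    return math.log(1 / 10) / math.log(rho)

for rho in (0.5, 0.9, 0.99):
    print(rho, iters_for_tenfold(rho))
# rho = 0.99 already needs roughly 229 sweeps
```

Spectral radii of basic iterative methods for elliptic problems approach 1 as the grid is refined, which is exactly why multigrid acceleration pays off.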
Illustration using damped Jacobi method
As before, consider the PDE problem
∂2u
∂x2= f (x)
which gives the algebraic system
Au = b
where
A =
[ −2    1    0    0   · · ·               ]
[  1   −2    1    0   · · ·               ]
[  0    1   −2    1   · · ·               ]
[                . . .                    ]
[  0    0   · · ·      1   −2    1        ]
[  0    0   · · ·      0    1   −2        ]
Damped Jacobi iterations can be written as
u(k+1) = Ru(k) + g
where R = I − ωD−1A, and D = diag(A), 0 < ω ≤ 1.
The next few slides are from the multigrid tutorial by Briggs, which contains very clear and complete visuals for this case.
Convergence analysis for weighted Jacobi on 1D model

R_ω = (1 − ω) I + ω R_J = I − ω D^{−1} A

= I − (ω/2) · tridiag(−1, 2, −1)

λ(R_ω) = 1 − (ω/2) λ(A)

For the 1D model problem, the eigenvectors of the weighted Jacobi iteration matrix and the eigenvectors of the matrix A are the same! The eigenvalues are related as well.
Good exercise: Find the eigenvalues & eigenvectors of A

• Show that the eigenvectors of A are Fourier modes!

λ_k(A) = 4 sin²(kπ/2N),   w_{k,j} = sin(jkπ/N)

[Plots: the Fourier-mode eigenvectors sin(jkπ/N) for several wavenumbers k.]
Eigenvectors of R_ω and A are the same, the eigenvalues related

• Expand the initial error in terms of the eigenvectors:

e^{(0)} = Σ_{k=1}^{N−1} c_k w_k

• After M iterations,

R^M e^{(0)} = Σ_{k=1}^{N−1} c_k R^M w_k = Σ_{k=1}^{N−1} c_k λ_k^M(R) w_k

• The kth mode of the error is reduced by λ_k at each iteration

λ_k(R_ω) = 1 − 2ω sin²(kπ/2N)
Relaxation suppresses eigenmodes unevenly

[Plot: eigenvalues λ_k(R_ω) versus wavenumber k for ω = 1/3, 1/2, 2/3, and 1, with k = N/2 marked.]

• Look carefully at

λ_k(R_ω) = 1 − 2ω sin²(kπ/2N)

Note that if 0 < ω ≤ 1, then |λ_k(R_ω)| < 1 for k = 1, 2, ..., N − 1

For k = 1,

λ_1 = 1 − 2ω sin²(π/2N) = 1 − 2ω sin²(πh/2) ≈ 1 − O(h²)
Low frequencies are undamped

[Plot: eigenvalues λ_k(R_ω) versus wavenumber k for ω = 1/3, 1/2, 2/3, and 1.]

• Notice that no value of ω will damp out the long (i.e., low frequency) waves.

What value of ω gives the best damping of the short waves N/2 ≤ k ≤ N?

Choose ω such that λ_{N/2}(R_ω) = −λ_N(R_ω)  ⇒  ω = 2/3
The Smoothing factor

• The smoothing factor is the largest absolute value among the eigenvalues in the upper half of the spectrum of the iteration matrix:

smoothing factor = max |λ_k(R)|  for  N/2 ≤ k ≤ N

• For R_ω with ω = 2/3, the smoothing factor is 1/3, since λ_{N/2} = −λ_N = 1/3, and |λ_k| < 1/3 for N/2 < k < N.
• But, for long waves k ≪ N/2, λ_k ≈ 1 − (1/3) k²π²h².
Convergence of Jacobi on Au = 0

• Jacobi method on Au = 0 with N = 64. Number of iterations required to reduce to ||e||∞ < .01
• Initial guess: v_j^k = sin(jkπ/N)

[Plots: iteration count versus wavenumber k for unweighted Jacobi and weighted Jacobi.]
Weighted Jacobi Relaxation Smooths the Error

• Initial error: a superposition of three Fourier sine modes of increasing frequency
• Error after 35 iteration sweeps: only the smooth component remains

[Plots: error profiles before and after 35 sweeps.]

Many relaxation schemes have the smoothing property, where oscillatory modes of the error are eliminated effectively, but smooth modes are damped very slowly.
Multigrid Methods
Main ingredients
• Error smoother
• Nested iteration
• Coarse Grid Correction (CGC)
Multigrid notation: All quantities have subscripts to denote grid
on which they are defined
Error smoother
• Also called “relaxation” techniques
• Must efficiently eliminate oscillatory (HF) component of error
• Certain iterative methods are ideal
• Gauss-Seidel is very effective for elliptic operators
Nested iteration
• Improve initial guess by solving on coarser grid and interpolating
• Can generalize to include several “nested” coarser grids
• Example: Ω_4h ⊂ Ω_2h ⊂ Ω_h

On Ω_4h : A_4h u_4h = f_4h
On Ω_2h : A_2h u_2h = f_2h, use u⁰_2h = I[u_4h]
On Ω_h : A_h u_h = f_h, use u⁰_h = I[u_2h]
Coarse Grid Correction
• Mechanism for attenuating LF errors
• Some preliminaries:
– We have grid Ωh and linear system Ahuh = bh
– If u*_h is an approximate solution, define

error: e_h = u_h − u*_h
residual: r_h = b_h − A_h u*_h

– Therefore, we have

A_h u_h − A_h u*_h = b_h − A_h u*_h
⇒ A_h e_h = r_h   (15)

Note that (15) is equivalent to the original system
Two-grid CGC Cycle:
• Original grid is Ωh and linear system
Ahuh = bh (16)
• Introduce nested coarser grid Ω2h ⊂ Ωh
• CGC cycle:
1. Perform ν1 smoothing iterations on (16) with some initial
guess on grid Ωh;
obtain approximate solution u∗h.
2. Compute residual rh = bh −Ahu∗h;
– restrict rh and Ah to the coarse grid (Ω2h):
Ah → A2h, rh → r2h
– Construct the following coarse grid problem
A2he2h = r2h [recall equation (15)]
3. Solve for e2h.
4. Interpolate e_2h to fine grid & improve fine grid approximation:

e_2h −→ e*_h,   u_h^{new} = u*_h + e*_h

5. Perform ν2 smoothing iterations on (16) using u_h^{new} as initial guess.
• Remarks:
– Step (1) is called “pre-smoothing”
– Step (5) is called “post-smoothing”
– Typical values of ν1 and ν2 range from 1 to 4
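The five steps above can be sketched as a runnable two-grid cycle for the 1-D model problem −u″ = f, u(0) = u(1) = 0, on interior nodes (the grid sizes, ν1 = ν2 = 2, the weighted-Jacobi smoother, and the exact coarse solve are our choices):

```python
import numpy as np

def poisson_matrix(n, h):
    """Interior-node discretization of -u'' on (0,1) with spacing h."""
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def smooth(A, b, u, nu, omega=2/3):
    """nu sweeps of weighted Jacobi (the error smoother)."""
    D = np.diag(A)
    for _ in range(nu):
        u = u + omega * (b - A @ u) / D
    return u

def restrict(r):
    """Full weighting: fine grid -> coarse grid (sizes 2m+1 -> m)."""
    return 0.25 * (r[0:-2:2] + 2.0 * r[1:-1:2] + r[2::2])

def prolong(e, n_fine):
    """Linear interpolation: coarse grid -> fine grid (m -> 2m+1)."""
    u = np.zeros(n_fine)
    u[1::2] = e                              # coarse nodes coincide with odd fine nodes
    u[2:-1:2] = 0.5 * (e[:-1] + e[1:])       # interior midpoints
    u[0], u[-1] = 0.5 * e[0], 0.5 * e[-1]    # midpoints next to the boundary
    return u

def cgc_cycle(A_h, A_2h, b, u, nu1=2, nu2=2):
    u = smooth(A_h, b, u, nu1)               # 1. pre-smoothing
    r2 = restrict(b - A_h @ u)               # 2. residual, restricted to coarse grid
    e2 = np.linalg.solve(A_2h, r2)           # 3. solve A_2h e_2h = r_2h
    u = u + prolong(e2, len(b))              # 4. interpolate & correct
    return smooth(A_h, b, u, nu2)            # 5. post-smoothing

# Demo on -u'' = pi^2 sin(pi x); compare against the direct discrete solution
n_f, n_c = 63, 31
A_h = poisson_matrix(n_f, 1.0 / (n_f + 1))
A_2h = poisson_matrix(n_c, 1.0 / (n_c + 1))
x = np.arange(1, n_f + 1) / (n_f + 1)
b = np.pi**2 * np.sin(np.pi * x)
u_direct = np.linalg.solve(A_h, b)

u = np.zeros(n_f)
for _ in range(10):
    u = cgc_cycle(A_h, A_2h, b, u)
print(np.max(np.abs(u - u_direct)))
```

Each cycle cuts the error by a roughly grid-independent factor, so a handful of cycles suffices; the smoother kills HF error, and the coarse solve handles the LF remainder.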
Restriction & Prolongation
• CGC involves inter-grid transfer operations
– Fine-to-coarse transfer is called “restriction”
– Coarse-to-fine transfer is called “prolongation”
• Notation:

Restriction operator = I_h^{2h}   (e.g., r_2h = I_h^{2h} [r_h])
Prolongation operator = I_{2h}^{h}   (e.g., u_h = I_{2h}^{h} [u_2h])

• Prolongation methods:
– Use interpolation formulas
– Linear interpolation is most common
– 1-D example

Ω_h, u_h :   nodes 0, 1, · · · , (2i−1), 2i, (2i+1), · · · , 2M
Ω_2h, u_2h : nodes 0, · · · , i−1, i, i+1, · · · , M

u_h^{2i} = u_2h^{i}
u_h^{2i+1} = (1/2)(u_2h^{i} + u_2h^{i+1}),   (0 ≤ i ≤ M)

Thus

I_{2h}^{h} =
[ . . .                       ]
[ · · ·  0   1   0  · · ·     ]
[ · · ·  0  1/2 1/2  0 · · ·  ]
[ · · ·  ·   0   1   0 · · ·  ]
[ · · ·  ·   0  1/2 1/2 0 · · ]
[                   . . .     ]
= [2M × M] matrix
Restriction methods

• Want to restrict matrix A_h and vector r_h from Ω_h to Ω_2h:

I_h^{2h}[A_h] = A_2h   and   I_h^{2h}[r_h] = r_2h

• Can discretize original PDE on Ω_2h to get A_2h
• For vectors r_h, we can use “injection” or “full-weighting”

Ω_h, r_h :   nodes 0, 1, · · · , (2i−1), 2i, (2i+1), · · · , 2M
Ω_2h, r_2h : nodes 0, · · · , i−1, i, i+1, · · · , M

Injection: r_2h^{i} = r_h^{2i}
Full-weighting: r_2h^{i} = (1/4)(r_h^{2i−1} + 2 r_h^{2i} + r_h^{2i+1})   (0 < i < M)

• This yields the operator

I_h^{2h} =
[ . . .                       ]
[ · · ·  0   1   0  · · ·     ]
[ · · ·  · · ·  0   1   0 · · ]
[                   . . .     ]
= [M × 2M] matrix

OR

I_h^{2h} =
[ . . .                               ]
[ · · ·  1/4  1/2  1/4  · · ·         ]
[ · · ·  · · ·  1/4  1/2  1/4  · · ·  ]
[                          . . .      ]
= [M × 2M] matrix

• Notice that 2 I_h^{2h} = [I_{2h}^{h}]^T (if we use full-weighting)
Some remarks on restriction/prolongation

• It is very useful to have a relation of the form I_{2h}^{h} = c [I_h^{2h}]^T (for c ∈ R)
• It has some important theoretical implications
• Furthermore, we can define

A_2h = I_h^{2h} A_h I_{2h}^{h}

This provides an alternative way to define the coarse grid matrix, without rediscretizing the underlying PDE system.
• This leads to the following natural strategy for intergrid transfers

1. Define I_{2h}^{h} in some suitable way
2. Define I_h^{2h} = c [I_{2h}^{h}]^T (e.g., c = 1)
3. Define A_2h = I_h^{2h} A_h I_{2h}^{h}
• These ideas apply in higher dimensions as well
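Both relations can be checked numerically with explicitly assembled 1-D transfer matrices; a sketch on interior-node grids of our choosing (for this 1-D model problem the Galerkin product happens to reproduce the rediscretized coarse operator exactly):

```python
import numpy as np

def prolong_matrix(m):
    """Linear interpolation I_{2h}^h as an explicit (2m+1) x m matrix."""
    P = np.zeros((2 * m + 1, m))
    for j in range(m):
        P[2 * j + 1, j] = 1.0          # coarse node j sits at fine node 2j+1
        P[2 * j, j] += 0.5             # neighbouring fine midpoints
        P[2 * j + 2, j] += 0.5
    return P

def restrict_matrix(m):
    """Full weighting I_h^{2h} as an explicit m x (2m+1) matrix."""
    R = np.zeros((m, 2 * m + 1))
    for j in range(m):
        R[j, 2 * j : 2 * j + 3] = [0.25, 0.5, 0.25]
    return R

def poisson_matrix(n, h):
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

m = 7
P, R = prolong_matrix(m), restrict_matrix(m)

# 2 I_h^{2h} = [I_{2h}^h]^T for full weighting
print(np.allclose(2.0 * R, P.T))

# Galerkin coarse operator I_h^{2h} A_h I_{2h}^h versus rediscretization
A_h = poisson_matrix(2 * m + 1, 1.0 / (2 * m + 2))
A_2h = poisson_matrix(m, 1.0 / (m + 1))
print(np.allclose(R @ A_h @ P, A_2h))
```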
Generalization to multiple grids
• Recall CGC cycle:

[Sketch: fine grid Ω_h (spacing h) above nested coarse grid Ω_2h (spacing 2h).]

On Ω_h : A_h u_h = b_h
On Ω_2h : A_2h e_2h = r_2h
• Problem on Ω2h has same form as that on Ωh
• Therefore, we can compute e2h by introducing an even coarser
grid Ω4h, and using CGC again
• The problem on Ω4h again has the same form
• Recursive application of CGC leads to MG cycles involving several grid levels
Multigrid Cycling Strategies
• Obtained by using CGC recursively between multiple grid levels
• Common MG cycles: V -cycle, W -cycle and FMG (Full Multi-
Grid)
V -cycle
• Uses one recursive CGC between pairs of grid levels
• Sketch of resulting grid sequence has “V” shape
[Sketch: example of four nested grid levels Ω_h, Ω_2h, Ω_4h, Ω_8h in 1-D.]

[Sketch: grid sequence for V-cycle on 4 nested grid levels — descend Ω_h → Ω_2h → Ω_4h → Ω_8h, then ascend back to Ω_h.]
Recursive definition of V -cycle (with initial iterate vh)
vh ←MVh(vh,fh, ν1, ν2)
It consists of the following steps:
1. If h = hc (coarsest grid), go to step 5. Else, perform ν1 smoothing iterations on Ahuh = fh with initial guess vh;
obtain v∗h, and assign vh ← v∗h.
2. Set rh = fh −Ahvh, f 2h = I2hh rh; define coarse grid prob-
lem A2hu2h = f 2h.
3. Set v2h ← 0, and perform v2h ← MV2h(v2h,f 2h, ν1, ν2) on
coarse grid problem.
4. Set vh ← vh + Ih2hv2h.
5. If h = hc solve Ahuh = fh exactly, and set vh ← uh. Else,
perform ν2 smoothing iterations with initial guess vh; obtain
v∗h, and assign vh ← v∗h.
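The recursive definition above can be sketched directly; a minimal V-cycle for the 1-D model problem −u″ = f on nested interior-node grids of size 2^L − 1, so repeated halving reaches a one-point coarsest grid that is solved exactly (the smoother, transfers, and sizes are our choices):

```python
import numpy as np

def poisson_matrix(n):
    """Rediscretized operator on the grid with n interior points."""
    h = 1.0 / (n + 1)
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def smooth(A, b, u, nu, omega=2/3):
    for _ in range(nu):
        u = u + omega * (b - A @ u) / np.diag(A)
    return u

def restrict(r):                             # full weighting, 2m+1 -> m
    return 0.25 * (r[0:-2:2] + 2.0 * r[1:-1:2] + r[2::2])

def prolong(e, n_fine):                      # linear interpolation, m -> 2m+1
    u = np.zeros(n_fine)
    u[1::2] = e
    u[2:-1:2] = 0.5 * (e[:-1] + e[1:])
    u[0], u[-1] = 0.5 * e[0], 0.5 * e[-1]
    return u

def mv(b, v, nu1=2, nu2=2):
    n = len(b)
    A = poisson_matrix(n)
    if n == 1:                               # step 5, coarsest grid: solve exactly
        return b / A[0, 0]
    v = smooth(A, b, v, nu1)                 # step 1: pre-smoothing
    f2 = restrict(b - A @ v)                 # step 2: restricted residual
    v2 = mv(f2, np.zeros((n - 1) // 2), nu1, nu2)   # step 3: recur, v_2h = 0
    v = v + prolong(v2, n)                   # step 4: coarse-grid correction
    return smooth(A, b, v, nu2)              # step 5: post-smoothing

n = 63
x = np.arange(1, n + 1) / (n + 1)
b = np.pi**2 * np.sin(np.pi * x)             # so that u ~ sin(pi x)
v = np.zeros(n)
for _ in range(10):
    v = mv(b, v)
print(np.max(np.abs(v - np.linalg.solve(poisson_matrix(n), b))))
```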
W -cycle
Uses 2 recursive CGC cycles between pairs of levels
[Sketch: grid sequence for W-cycle on 4 nested grid levels Ω_h, Ω_2h, Ω_4h, Ω_8h.]
Both V - and W -cycle are a special case of the µ-cycle.
It uses µ recursive CGC cycles between pairs of levels
vh ←MGµh(vh,fh, ν1, ν2)
1. If h = hc (coarsest grid), go to step 5. Else, perform ν1 smoothing iterations on Ahuh = fh with initial guess vh; obtain v∗h, and assign vh ← v∗h.
2. Set rh = fh −Ahvh, f 2h = I2hh rh, and construct coarse grid
problem A2hu2h = f 2h.
3. Set v2h ← 0, and perform µ times: v2h ←MGµ2h(v2h,f 2h, ν1, ν2).
4. Set vh ← vh + Ih2hv2h.
5. If h = hc solve Ahuh = fh exactly, and set vh ← uh. Else,
perform ν2 smoothing iterations with initial guess vh; obtain v∗h,
and assign vh ← v∗h.
FMG-cycle
• FMG stands for Full MultiGrid
• Generalization of the nested iteration concept
• Use MG cycles in place of ordinary iterations
• Start at coarsest level & perform nested V -, W - or µ-cycles till
finest level
[Sketch: FMG grid sequence — start on the coarsest grid Ω_8h and work up through Ω_4h and Ω_2h to Ω_h, cycling at each level.]
Nonlinear PDE Problems
• General form:
N (u) = f in Ω
B(u) = g on ∂Ω
where N is nonlinear differential operator, B is boundary oper-
ator, u is unknown, f and g are given functions
• Upon discretizing, we get nonlinear algebraic system:

N(u) = f
• Example:
uxx + λeu = 0, 0 < x < 1
u(0) = u(1) = 0
– Discretize on grid of spacing h
u_1 = 0
u_{i−1} − (2u_i − h²λe^{u_i}) + u_{i+1} = 0,   (i = 2, 3, · · · , M − 1)
u_M = 0
• Nonlinear algebraic systems more difficult to solve than linear
systems
Nonlinear System Solution
• Basic solution approaches:
1. Iterative global linearization
– Successive approximation
– Newton-type iteration
2. Nonlinear relaxation
– Nonlinear Jacobi, Gauss-Seidel, SOR, etc.
• Usual difficulties:
– Iterations may not converge
– Convergence may be very slow
– Initial guess affects convergence
Global Linearization
Successive Approximation
• Basic idea:
– Iteratively “lag” some unknowns to linearize globally
• Mathematical form (drop vector notation for convenience):
– Original algebraic system: N(u) = f
– Rewrite as: A(u) u = b(u)
where A and b are matrix and vector functions of u
– Successive approximation algorithm:
1. Pick initial guess, u0
2. Set k = 1
3. Construct linear system: A(u^k) u^{k+1} = b(u^k)
4. Solve for u^{k+1}
5. If solution is converged, stop; else, set k = k + 1 and go
to step (3)
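Steps 1–5 can be sketched as a short generic loop. Here `A_of` and `b_of` are assumed user-supplied functions returning A(u) and b(u); the names, tolerances, and the dense `np.linalg.solve` call are illustrative choices:

```python
import numpy as np

def successive_approximation(A_of, b_of, u0, tol=1e-10, maxit=100):
    """Steps 1-5 above: freeze u in A(u) and b(u), then solve the
    resulting *linear* system for the next iterate."""
    u = np.asarray(u0, dtype=float)          # step 1: initial guess u^0
    for _ in range(maxit):                   # step 2: k = 1, 2, ...
        u_new = np.linalg.solve(A_of(u), b_of(u))   # steps 3-4
        if np.linalg.norm(u_new - u, np.inf) < tol: # step 5
            return u_new
        u = u_new
    return u
```

For instance, the made-up scalar problem 3u = cos(u) can be fed in as a 1×1 system with A(u) = [3] and b(u) = [cos(u)], and the loop converges to its fixed point.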
Global Linearization
Successive Approximation
• Example:
uxx + λeu = 0, 0 < x < 1
u(0) = u(1) = 0
– Original algebraic system
u_{i−1} − (2u_i − h²λe^{u_i}) + u_{i+1} = 0
– Rewrite as
u_{i−1} − 2u_i + u_{i+1} = −h²λe^{u_i}
which corresponds to
A =
    | 1    0    0    0   ···   0    0 |
    | 1   −2    1    0   ···   0    0 |
    | 0    1   −2    1   ···   0    0 |
    | ⋮                   ⋱         ⋮ |
    | 0    0   ···        1   −2    1 |
    | 0    0   ···        0    0    1 |

b = [0, ···, −h²λe^{u_i}, ···, 0]^T
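Putting these pieces together gives a complete successive approximation solve for this example. The grid size M, the value of λ, the tolerances, and the dense linear algebra below are illustrative choices (a practical code would use a tridiagonal solver):

```python
import numpy as np

def solve_bratu_sa(M=33, lam=1.0, tol=1e-10, maxit=200):
    """Successive approximation for u'' + lam*e^u = 0, u(0)=u(1)=0,
    iterating A u^{k+1} = b(u^k) with the matrix A shown above."""
    h = 1.0 / (M - 1)
    A = np.zeros((M, M))
    A[0, 0] = A[-1, -1] = 1.0          # boundary rows: u_1 = u_M = 0
    for i in range(1, M - 1):          # interior rows: [1, -2, 1] stencil
        A[i, i-1], A[i, i], A[i, i+1] = 1.0, -2.0, 1.0
    u = np.zeros(M)                    # initial guess u^0
    for _ in range(maxit):
        b = np.zeros(M)                # b(u^k); boundary entries stay 0
        b[1:-1] = -h**2 * lam * np.exp(u[1:-1])
        u_new = np.linalg.solve(A, b)  # linear solve for u^{k+1}
        if np.linalg.norm(u_new - u, np.inf) < tol:
            return u_new
        u = u_new
    return u
```

For moderate λ the "lagged" right-hand side changes less and less between iterations, so the sequence u^0, u^1, … settles down to a solution of the nonlinear system.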
Global Linearization
Newton iteration
• Given algebraic system: N(u) = f
• Use a Taylor series expansion about the iterate u^k:

N(u^k) + (∂N/∂u)(u^{k+1} − u^k) + O(Δu²) = f

• Neglect the higher-order terms to get the Newton iteration formula:

(∂N/∂u)(u^{k+1} − u^k) = f − N(u^k)

where ∂N/∂u is the Jacobian of N evaluated at u^k
• Iterative solution algorithm analogous to the successive approximation case
Remarks
– Both Newton iteration and successive approximation require repeated solution of linear algebraic systems
– Standard linear multigrid can be used for this
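A sketch of Newton iteration for the same example, with f = 0: the Jacobian ∂N/∂u is the [1, −2, 1] stencil plus h²λe^{u_i} on the interior diagonal, with identity rows for the boundary equations. Function name and parameter values are illustrative:

```python
import numpy as np

def solve_bratu_newton(M=33, lam=1.0, tol=1e-12, maxit=50):
    """Newton iteration for the discretized example, N(u) = 0."""
    h = 1.0 / (M - 1)
    u = np.zeros(M)                                # initial guess u^0
    for _ in range(maxit):
        # right-hand side f - N(u^k), here with f = 0
        N = np.zeros(M)
        N[0], N[-1] = u[0], u[-1]
        N[1:-1] = u[:-2] - 2*u[1:-1] + u[2:] + h**2*lam*np.exp(u[1:-1])
        if np.linalg.norm(N, np.inf) < tol:
            return u
        # Jacobian dN/du at u^k
        J = np.zeros((M, M))
        J[0, 0] = J[-1, -1] = 1.0
        for i in range(1, M - 1):
            J[i, i-1] = J[i, i+1] = 1.0
            J[i, i] = -2.0 + h**2*lam*np.exp(u[i])
        # Newton formula: (dN/du)(u^{k+1} - u^k) = f - N(u^k)
        u = u + np.linalg.solve(J, -N)
    return u
```

Compared with successive approximation, each step is more expensive (a fresh Jacobian and linear solve), but convergence is quadratic once the iterate is close to the solution.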
Nonlinear Relaxation
• Given algebraic system: N(u) = f
• Rewrite as:
N_i(u_1, u_2, ···, u_i, ···, u_M) = f_i,   i = 1, 2, ···, M
• Nonlinear relaxation ⇒
– Iteratively decouple the system
– Solve ith equation for ui assuming all other u’s are known
– Jacobi variant:
1. Pick initial guess, u^0
2. Set k = 1
3. for i=1:M
Solve N_i(u^k_1, u^k_2, ···, u^{k+1}_i, ···, u^k_M) = f_i for u^{k+1}_i
end
4. If solution is converged, stop; else, set k = k + 1 and go
to step (3)
Nonlinear Relaxation
Remarks
• Can construct analogs of other linear relaxation schemes in a
similar way
– e.g., for Gauss-Seidel, simply replace the equation in step (3)
with
N_i(u^{k+1}_1, u^{k+1}_2, ···, u^{k+1}_i, ···, u^k_M) = f_i
• In general, step (3) involves a nonlinear solve for u^{k+1}_i; one can use successive approximation or Newton iteration
• Thus, we again have an inner iteration embedded within an outer iteration
• Nonlinear relaxation schemes also often exhibit the smoothing property, like their linear counterparts
Nonlinear Multigrid
• Consider nonlinear PDE, as before
N(u) = f in Ω
• Discretize on grid Ω_h and get the nonlinear algebraic system
N_h(u_h) = f_h    (17)
where N_h is a nonlinear algebraic operator
• Suppose we apply a nonlinear relaxation scheme, and get the approximate solution u*_h. Define
error:    e_h = u_h − u*_h
residual: r_h = f_h − N_h(u*_h)
Error-to-residual relation:
N_h(u_h) − N_h(u*_h) = r_h
• Note that e_h cannot be explicitly computed in the nonlinear case
• However, we assume e_h behaves as in the linear case
– quick decay of high-frequency (HF) error modes
– much slower decay of low-frequency (LF) modes
Nonlinear Multigrid
Full Approximation Scheme (FAS)
• FAS is a nonlinear generalization of CGC (coarse grid correction)
• Assume nonlinear relaxation/smoothing method is available
• Basic steps very similar to CGC:
1. On fine grid: N_h(u_h) = f_h;
perform ν1 nonlinear smoothing iterations, compute u*_h and
r_h = f_h − N_h(u*_h)
2. On 2h grid: construct N_2h(u_2h) = N_2h(I_h^{2h} u*_h) + I_h^{2h} r_h;
solve for u_2h, and compute e_2h = u_2h − I_h^{2h} u*_h
3. Interpolate and correct: u_h^new = u*_h + I_{2h}^h e_2h
4. Perform ν2 nonlinear post-smoothing iterations
• Notice that step (2) computes e_2h indirectly
– the full solution u_2h is computed first
– after that we compute e_2h = u_2h − I_h^{2h} u*_h
• In linear CGC, step (2) is simpler:
2. On 2h grid: A_2h e_2h = I_h^{2h} r_h;
solve for e_2h directly
Recap key ideas from last week
• Numerical solution of PDEs leads to algebraic systems of equations
• Algebraic systems typically very large and sparse
• Efficient solvers crucial for computational efficiency
• Iterative solvers are attractive for such large linear systems
• Typical behavior of iterative methods:
– Fast convergence in 1st few iterations
– Slow convergence thereafter
• Reason for this:
– Iterative methods preferentially damp HF error modes.
This is called “smoothing property”
– Overall convergence rate is slowed by LF error modes
• Motivation for multigrid:
– Take advantage of smoothing property
– HF and LF are defined relative to grid spacing
– Multigrid ⇒ use smoothing on progressively coarser grids to handle LF error modes