-
Model reduction of large-scale systems
An overview and some new results
Thanos Antoulas
Rice University
[email protected]: www-ece.rice.edu/˜aca
University of Minnesota, April 2006
Model reduction of large-scale systems – p.1
-
Support: NSF + NSF ITR grants

Collaborators:
• Dan Sorensen (CAAM), TA (ECE)
• Paul van Dooren (Belgium)
• Ahmed Sameh (Purdue)
• Mete Sozen (Purdue)
• Ananth Grama (Purdue)
• Chris Hoffmann (Purdue)
• Kyle Gallivan (Florida State)
• Serkan Gugercin (Virginia Tech)
-
Contents
1. Introduction and problem statement
2. Overview of approximation methods
SVD – Krylov – Krylov/SVD
3. Choice of projection points in Krylov methods
4. Krylov/SVD methods – Solution of large Sylvester equations
5. Open problems/Concluding remarks
-
The big picture
[Diagram: Physical System + Data → (Modeling) → PDEs → (discretization) → ODEs → (Model reduction) → reduced # of ODEs → Simulation / Control]
-
Dynamical systems
[Diagram: a system with inputs u1(·), …, um(·), internal state variables x1(·), …, xn(·), and outputs y1(·), …, yp(·).]
We consider explicit state equations
Σ : ẋ(t) = f(x(t), u(t)), y(t) = h(x(t), u(t))
with state x(·) of dimension n ≫ m, p.
-
Problem statement
Given: dynamical system Σ = (f, h) with u(t) ∈ Rm, x(t) ∈ Rn, y(t) ∈ Rp.
Problem: Approximate Σ with
Σ̂ = (f̂, ĥ), u(t) ∈ Rm, x̂(t) ∈ Rk, y(t) ∈ Rp, k ≪ n, such that:
(1) Approximation error small - global error bound
(2) Preservation of stability, passivity
(3) Procedure computationally efficient
-
Approximation by projection
Unifying feature of approximation methods: projections.
Let V, W ∈ Rn×k, such that W∗V = Ik ⇒ Π = VW∗ is a projection.
Define x̂ = W∗x. Then
Σ̂ :  d/dt x̂(t) = W∗f(Vx̂(t), u(t)),  y(t) = h(Vx̂(t), u(t))

Thus Σ̂ is a "good" approximation of Σ if x − Πx is "small".
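In code, the reduction-by-projection recipe above can be sketched as follows (a minimal NumPy illustration: the nonlinear maps f and h and the choice of V are made up for the example, and W is constructed so that W∗V = Ik):

```python
import numpy as np

n, k, m = 6, 2, 1
rng = np.random.default_rng(0)

# Hypothetical nonlinear system: x' = f(x, u), y = h(x, u)
A = -np.eye(n) + 0.1 * np.ones((n, n))
def f(x, u):
    return A @ x + 0.1 * np.tanh(x) + np.ones((n, m)) @ u
def h(x, u):
    return x[:1]

# Projection matrices with W* V = I_k, so Pi = V W* is an (oblique) projector
V = rng.standard_normal((n, k))
W = V @ np.linalg.inv(V.T @ V)        # here W = V (V*V)^{-1}, hence W.T @ V = I_k
Pi = V @ W.T

# Reduced system: xhat' = W* f(V xhat, u),  y = h(V xhat, u)
def f_hat(xhat, u):
    return W.T @ f(V @ xhat, u)
def h_hat(xhat, u):
    return h(V @ xhat, u)

assert np.allclose(Pi @ Pi, Pi)           # Pi is indeed a projection
assert np.allclose(W.T @ V, np.eye(k))
```

The reduced state x̂ lives in Rk, while all evaluations of f and h still happen in the full space through V x̂.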
-
Special case: linear dynamical systems
Σ: ẋ(t) = Ax(t) + Bu(t), y(t) = Cx(t) + Du(t)
Σ = [ A B ; C D ] ∈ R(n+p)×(n+m)
Problem: Approximate Σ by projection: Π = VW∗, Π² = Π,

Σ̂ = [ Â B̂ ; Ĉ D̂ ] = [ W∗AV W∗B ; CV D ] ∈ R(k+p)×(k+m), k ≪ n
Norms:
• H∞ norm: worst output error ‖y(t) − ŷ(t)‖ for ‖u(t)‖ = 1.
• H2 norm: ‖h(t) − ĥ(t)‖ (error of the impulse response h).
-
Motivating Examples: Simulation/Control

1. Passive devices: VLSI circuits, thermal issues
2. Data assimilation: North Sea forecast, air quality forecast, sensor placement
3. Biological/molecular systems: honeycomb vibrations, MD simulations, heat capacity
4. CVD reactor: bifurcations
5. Mechanical systems: windscreen vibrations, buildings
6. Optimal cooling: steel profile
7. MEMS (Micro-Electro-Mechanical Systems): Elk sensor
-
Passive devices : VLSI circuits
1960's: the integrated circuit.

1971: Intel 4004: 10μ details, 2300 components, 64 KHz speed.
2001: Intel Pentium IV: 0.18μ details, 42M components, 2 GHz speed, 2 km of interconnect in 7 layers.
-
Passive devices : VLSI circuits
Conclusion: Simulations are required to verify that internalelectromagnetic fields do not significantly delay or distort circuitsignals. Therefore interconnections must be modeled.
⇒ Electromagnetic modeling of packages and interconnects ⇒ resulting models are very complex:
using PEEC methods (discretization of Maxwell's equations): n ≈ 10^5 … 10^6 ⇒ SPICE: inadequate
• Source: van der Meijs (Delft)
Model reduction of large-scale systems – p.11
-
Passive devices : Thermal management of semiconductor devices
Semiconductor package ANSYS grid Temperature distribution
Semiconductor manufacturers provide customers with models of heat dissipation.

Procedure: from finite element models generated using ANSYS, a reduced-order model is derived, which is provided to the customer.
• Source: Korvink (Freiburg)
-
Mechanical systems: cars
Car windscreen simulation subject to acceleration load.
Problem: compute noise at points away from the window.
PDE: describes deformation of a structure of a specific material. FE discretization: 7564 nodes (3 layers of 60 by 30 elements). Material: glass with Young's modulus 7·10^10 N/m², density 2490 kg/m³, Poisson ratio 0.23 ⇒ coefficients of the FE model determined experimentally.
The discretized problem has dimension 22,692.
Notice: this problem yields 2nd order equations:Mẍ(t) + Cẋ(t) + Kx(t) = f(t).
• Source: Meerbergen (Free Field Technologies)
-
Mechanical Systems: Buildings
Earthquake prevention
Yokohama Landmark Tower: two active tuned mass dampers damp the dominant vibration mode of 0.185 Hz.
• Source: S. Williams
-
MEMS: Elk sensor
Mercedes A Class semiconductor rotation sensor
• Source: Laur (Bremen)
-
Approximation methods: Overview
Krylov methods:
• Realization
• Interpolation
• Lanczos
• Arnoldi

SVD methods:
• Nonlinear systems: POD methods, empirical Gramians
• Linear systems: balanced truncation, Hankel approximation

Both branches lead to Krylov/SVD methods.
-
SVD Approximation methods
SVD: A = UΣV ∗. A prototype problem.
[Figure: singular values of the "clown" and "supernova" images on a log-log scale (green: clown, red: supernova), together with the original pictures and their rank 6, 12, and 20 approximations.]
Singular values provide trade-off between accuracy and complexity
-
POD: Proper Orthogonal Decomposition
Consider ẋ(t) = f(x(t), u(t)), y(t) = h(x(t), u(t)). Snapshots of the state:
X = [x(t1) x(t2) · · · x(tN )] ∈ Rn×N
SVD: X = UΣV∗ ≈ UkΣkVk∗, k ≪ n. Approximate the state:
ξ(t) = U∗kx(t) ⇒ x(t) ≈ Ukξ(t), ξ(t) ∈ Rk
Project state and output equations. Reduced order system:
ξ̇(t) = U∗k f(Ukξ(t), u(t)), y(t) = h(Ukξ(t), u(t))
⇒ ξ(t) evolves in a low-dimensional space.
Issues with POD: (a) choice of snapshots; (b) singular values are not I/O invariants.
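A minimal sketch of the POD compression step in NumPy (the snapshot matrix here is synthetic, constructed to lie near a k-dimensional subspace; in practice the snapshots come from a simulation of the full system):

```python
import numpy as np

rng = np.random.default_rng(1)
n, N, k = 50, 200, 5

# Synthetic snapshot matrix X = [x(t1) ... x(tN)] lying near a k-dim subspace
basis = np.linalg.qr(rng.standard_normal((n, k)))[0]
X = basis @ rng.standard_normal((k, N)) + 1e-6 * rng.standard_normal((n, N))

# POD basis: leading k left singular vectors of X
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Uk = U[:, :k]

# Compress and lift a snapshot: xi = Uk* x, x ≈ Uk xi
x = X[:, 0]
xi = Uk.T @ x
err = np.linalg.norm(x - Uk @ xi) / np.linalg.norm(x)
print(err)   # tiny: the state is well captured by k POD modes
```

The reduced dynamics are then obtained exactly as on the slide, by replacing x with Uk ξ and premultiplying by Uk∗.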
-
SVD methods: the Hankel singular values
Trade-off between accuracy and complexity for linear dynamicalsystems is provided by the Hankel Singular Values, which arecomputed as follows: Gramians
P = ∫₀^∞ e^{At} BB∗ e^{A∗t} dt,  Q = ∫₀^∞ e^{A∗t} C∗C e^{At} dt

To compute the Gramians, solve 2 Lyapunov equations:

AP + PA∗ + BB∗ = 0, P > 0
A∗Q + QA + C∗C = 0, Q > 0

⇒ σi = √λi(PQ)
σi: Hankel singular values of system Σ
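With SciPy, the two Lyapunov equations and the Hankel singular values can be computed directly for a small (hypothetical) stable system; note that `scipy.linalg.solve_continuous_lyapunov(A, Q)` solves A X + X A^H = Q, so the right-hand sides carry a minus sign:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Small hypothetical stable system
A = np.array([[-1.0, 0.5],
              [ 0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Gramians: A P + P A* + B B* = 0 and A* Q + Q A + C* C = 0
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# Hankel singular values: sigma_i = sqrt(lambda_i(P Q)), sorted decreasingly
hsv = np.sort(np.sqrt(np.linalg.eigvals(P @ Q).real))[::-1]
print(hsv)
```

Since (A, B) is controllable and (C, A) observable here, both Gramians are positive definite and the σi are well defined.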
-
SVD methods: approximation by balanced truncation
There exists a basis where P = Q = S = diag(σ1, · · · , σn). This is the balanced basis of the system.
In this basis partition:
A = [ A11 A12 ; A21 A22 ],  B = [ B1 ; B2 ],  C = (C1 | C2),  S = [ Σ1 0 ; 0 Σ2 ]
ROM obtained by balanced truncation: Σ̂ = [ A11 B1 ; C1 ]
Σ2 contains the small Hankel singular values.
Projector: Π = VW∗ where V = W = [ Ik ; 0 ].
-
Properties of balanced reduction
1. Stability is preserved
2. Global error bound :
σk+1 ≤‖ Σ− Σ̂ ‖∞≤ 2(σk+1 + · · · + σn)
Drawbacks

1. Dense computations, matrix factorizations and inversions ⇒ may be ill-conditioned
2. Need the whole transformed system in order to truncate ⇒ number of operations O(n³)
-
Approximation methods: Krylov
[Overview diagram repeated, with the Krylov branch (realization, interpolation, Lanczos, Arnoldi) highlighted.]
-
Krylov approximation methods
Given Σ = [ A B ; C D ], expand the transfer function around s0:
G(s) = η0 + η1 (s− s0) + η2 (s− s0)2 + η3 (s− s0)3 + · · ·
Moments at s0: ηj. Find Σ̂ = [ Â B̂ ; Ĉ D̂ ], with
Ĝ(s) = η̂0 + η̂1 (s− s0) + η̂2 (s− s0)2 + η̂3 (s− s0)3 + · · ·
such that for appropriate ℓ:

ηj = η̂j, j = 1, 2, · · · , ℓ
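For G(s) = D + C(sI − A)⁻¹B the moments are the Taylor coefficients at s0: η0 = D + C(s0I − A)⁻¹B and ηj = (−1)^j C(s0I − A)^{−(j+1)}B. A quick numerical check on a made-up 2×2 system:

```python
import numpy as np

# Hypothetical stable SISO system and expansion point s0
A = np.array([[-2.0, 1.0],
              [ 0.0, -3.0]])
B = np.array([[1.0], [2.0]])
C = np.array([[1.0, 1.0]])
D = 0.0
s0 = 1.0

def G(s):
    return (D + C @ np.linalg.solve(s * np.eye(2) - A, B)).item()

# Moments around s0, with M = (s0 I - A)^{-1}
M = np.linalg.inv(s0 * np.eye(2) - A)
eta = [D + (C @ M @ B).item()]
for j in (1, 2):
    eta.append(((-1) ** j * C @ np.linalg.matrix_power(M, j + 1) @ B).item())

# Check against the Taylor expansion G(s) = sum_j eta_j (s - s0)^j
ds = 1e-3
taylor = eta[0] + eta[1] * ds + eta[2] * ds ** 2
print(abs(G(s0 + ds) - taylor))   # ~ eta_3 * ds^3, i.e. tiny
```

This explicit computation is exactly what the slide warns against doing at scale; it is shown only to make the definition of the ηj concrete.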
-
Key fact for numerical reliability
But: computation of moments is numerically problematic
Solution: given Σ = [ A B ; C ], moment matching methods can be implemented in a numerically efficient way:
• moments can be matched without moment computation
• iterative implementation : ARNOLDI – LANCZOS
Special cases:
• Expansion around infinity: ηj: Markov parameters. Problem: partial realization.
• Expansion around zero: ηj: moments. Problem: Padé interpolation.
• In general, arbitrary s0 ∈ C. Problem: rational Krylov ⇔ rational interpolation.
-
Properties of Krylov methods
(a) Number of operations: O(kn²) or O(k²n) vs. O(n³) ⇒ efficiency
(b) Only matrix-vector multiplications are required. No matrix factorizations and/or inversions. No need to compute the transformed model and then truncate.
(c) Drawbacks:
• global error bound?
• Σ̂ may not be stable.
• Lanczos breaks down if det Hi = 0.
-
Choice of projection points in Krylov methods
1. Positive real interpolation and model reduction.
2. Optimal H2 model reduction.
3. Decay rate of Hankel singular values
4. Reduction of 2nd order systems with proportional damping.
-
Passive model reduction
Passive systems:
Re ∫_{−∞}^t u(τ)∗ y(τ) dτ ≥ 0, ∀ t ∈ R, ∀ u ∈ L2(R).
Positive real rational functions:
(1) G(s) = D + C(sI −A)−1B is analytic for Re(s) > 0,(2) Re G(s) ≥ 0 for Re(s) ≥ 0, s not a pole of G(s).
Theorem: Σ = [ A B ; C D ] is passive ⇔ G(s) is positive real.
-
The new approach: Ingredients
• Positive realness of G(s) implies the existence of a spectral factorization G(s) + G∗(−s) = W(s)W∗(−s), where W(s) is stable rational and W(s)⁻¹ is also stable. The spectral zeros λi of the system are the zeros of the spectral factor: W(λi) = 0, i = 1, · · · , n.
• Rational Lanczos. Skelton–de Villemagne and Grimme–Van Dooren have shown (assuming for simplicity m = p = 1) the following. Let
V = [(λ1I − A)⁻¹B · · · (λkI − A)⁻¹B] ∈ Rn×k

W̄∗ = [C(λk+1I − A)⁻¹ ; · · · ; C(λ2kI − A)⁻¹] ∈ Rk×n
-
where λi ≠ λk+j, i, j = 1, · · · , k; let the k × k matrix Γ = W̄∗V be nonsingular: det Γ ≠ 0; and set W∗ = Γ⁻¹W̄∗. The transfer function of Σ̂:
 = W ∗AV, B̂ = W ∗B, Ĉ = CV, D̂ = D
interpolates the transfer function of the original system Σ at the interpolation points λi:
G(λi) = D + C(λiIn −A)−1B = D̂ + Ĉ(λiIk − Â)−1B̂ = Ĝ(λi)
for i = 1, · · · , 2k.
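The interpolation property can be verified numerically. The sketch below uses random hypothetical data and generic interpolation points λ1, …, λ2k (not spectral zeros, so it only illustrates interpolation, not passivity preservation):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 8, 3

# Hypothetical stable SISO system (shift the spectrum into the left half plane)
A = rng.standard_normal((n, n))
A -= (np.abs(np.linalg.eigvals(A).real).max() + 1.0) * np.eye(n)
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

def G(s, A, B, C):
    return (C @ np.linalg.solve(s * np.eye(A.shape[0]) - A, B)).item()

# 2k generic interpolation points: lam[:k] build V, lam[k:] build Wbar
lam = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
V = np.hstack([np.linalg.solve(l * np.eye(n) - A, B) for l in lam[:k]])
Wbar = np.vstack([C @ np.linalg.inv(l * np.eye(n) - A) for l in lam[k:]])
Ws = np.linalg.solve(Wbar @ V, Wbar)   # W* = Gamma^{-1} Wbar*, so W* V = I_k

Ah, Bh, Ch = Ws @ A @ V, Ws @ B, C @ V
errs = [abs(G(l, A, B, C) - G(l, Ah, Bh, Ch)) for l in lam]
print(max(errs))   # near machine precision: Ghat matches G at all 2k points
```

Choosing the λi as spectral zeros with the mirror condition λk+i = −λi∗, as in the main result below, additionally preserves stability and passivity.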
-
Main result
• Method: Rational Lanczos
• Difficulty: Data (A, B, C, D) ⇒ all interpolation points must be samples of the same rational function: G(s) = D + C(sI − A)⁻¹B.
• Solution: interpolation points = spectral zeros
Main result. Given a stable and passive system Σ: if V, W are defined as above, where λ1, · · · , λk are spectral zeros, and in addition λk+i = −λ∗i, then the reduced system Σ̂ satisfies:
(i) the interpolation constraints, (ii) it is stable, and (iii) it is passive.
-
Spectral zeros as generalized eigenvalues
If D + D∗ is singular, the spectral zeros are the finite generalized eigenvalues λ ∈ σ(A, E): rank(A − λE) < 2n + p, where

A = [ A 0 B ; 0 −A∗ −C∗ ; C B∗ D + D∗ ]  and  E = [ I 0 0 ; 0 I 0 ; 0 0 0 ]
Due to the structure, the following reflection property is satisfied:
λ ∈ σ(A, E) ⇐⇒ −λ∗ ∈ σ(A, E).
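Setting up this pencil and checking the reflection property numerically is straightforward; the system below is hypothetical random data with D + D∗ invertible:

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(3)
n = 4

# Hypothetical stable SISO system with D + D* invertible
A = rng.standard_normal((n, n))
A -= (np.abs(np.linalg.eigvals(A).real).max() + 1.0) * np.eye(n)
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
D = np.array([[2.0]])

Z = np.zeros
calA = np.block([[A,         Z((n, n)), B       ],
                 [Z((n, n)), -A.T,      -C.T    ],
                 [C,         B.T,       D + D.T ]])
calE = np.block([[np.eye(n), Z((n, n)), Z((n, 1))],
                 [Z((n, n)), np.eye(n), Z((n, 1))],
                 [Z((1, n)), Z((1, n)), Z((1, 1))]])

# Finite generalized eigenvalues of (calA, calE) = spectral zeros
w = eig(calA, calE, right=False)
zeros = w[np.isfinite(w) & (np.abs(w) < 1e6)]

# Reflection property: lambda is a spectral zero <=> -conj(lambda) is one too
for z in zeros:
    assert np.min(np.abs(zeros + np.conj(z))) < 1e-6
print(np.sort_complex(zeros))
```

Because E is singular (its last p rows vanish), the pencil has infinite eigenvalues as well; only the 2n finite ones are the spectral zeros.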
-
Interpolation via invariant subspaces: Sorensen
The pencil applied to a k-dimensional invariant subspace:

[ A 0 B ; 0 −A∗ −C∗ ; C B∗ D + D∗ ] [ X ; Y ; Z ] = [ X ; Y ; 0 ] R, X, Y ∈ Rn×k
Lemma. X and Y are both full rank with X∗Y = Y ∗X.
Construction of projectors:
V = X, W̄ = Y
Theorem. The transfer function Ĝ of the reduced system (Â, B̂, Ĉ, D) is
positive real and thus the reduced system is both stable and passive.
-
Optimal H2 model reduction
Krylov projection methods. Given Π = V W ∗, V, W ∈ Rn×r , W∗V = Ir :
Ar = W∗AV, Br = W ∗B, Cr = CV
The H2 norm of the system is:
‖Σ‖H2 = ( ∫_{−∞}^{+∞} h²(t) dt )^{1/2}

Goal: construct a Krylov projector such that

Σr = arg min_{deg(Σ̂) = r, Σ̂ stable} ‖Σ − Σ̂‖H2
-
Facts on moment matching and rational Krylov
• Recall
V = [(σ1I − A)⁻¹B · · · (σrI − A)⁻¹B], σk ∈ C, k = 1, · · · , r
and W such that W ∗V = Ir ; then
G(s)|s=σk = Gr(s)|s=σk , k = 1, · · · , r.
• If we choose
W̄ = [(σ1I − A∗)⁻¹C∗ · · · (σrI − A∗)⁻¹C∗]
and W ∗ = (W̄ ∗V )−1W̄ , then in addition
dG(s)/ds |_{s=σk} = dGr(s)/ds |_{s=σk}, for k = 1, . . . , r.
-
First-order necessary optimality conditions
Let Gr(s) = [ Ar Br ; Cr 0 ] solve the optimal H2 problem and let λ̂i denote the eigenvalues of Ar (assumed to have multiplicity one). The necessary conditions become
d^k/ds^k G(s) |_{s=−λ̂∗i} = d^k/ds^k Gr(s) |_{s=−λ̂∗i}, k = 0, 1.
Thus the reduced model has to match the first two moments of the original system at the mirror images of the eigenvalues of Ar.
-
An iterative algorithm
Let the system obtained after the (j − 1)st step be (Cj−1, Aj−1, Bj−1), where Aj−1 ∈ Rk×k, Bj−1, C∗j−1 ∈ Rk. At the jth step the system is obtained as follows:
Aj = (W∗j Vj)⁻¹ W∗j A Vj,  Bj = (W∗j Vj)⁻¹ W∗j B,  Cj = C Vj,

where

Vj = [(λ1I − A)⁻¹B, · · · , (λkI − A)⁻¹B],
W∗j = [C(λ1I − A)⁻¹ ; · · · ; C(λkI − A)⁻¹],

and −λ1, · · · , −λk ∈ σ(Aj−1), i.e., the −λi are the eigenvalues of the (j − 1)st iterate Aj−1.
-
The Newton step
The Newton step for our iterative procedure can be computed explicitly:
(λ1^(k), λ2^(k), · · ·)ᵀ ←− (λ1^(k), λ2^(k), · · ·)ᵀ − J⁻¹ (λ1^(k−1), λ2^(k−1), · · ·)ᵀ
which guarantees local convergence.
-
An Iterative Rational Krylov Algorithm (IRKA)
The proposed algorithm produces a reduced-order model Gr(s) that satisfies the interpolation-based conditions, i.e.
d^k/ds^k G(s) |_{s=−λ̂i} = d^k/ds^k Gr(s) |_{s=−λ̂i}, k = 0, 1.
1. Make an initial selection of σi, for i = 1, . . . , r.
2. W̄ = [(σ1I − A∗)⁻¹C∗, · · · , (σrI − A∗)⁻¹C∗].
3. V = [(σ1I − A)⁻¹B, · · · , (σrI − A)⁻¹B], and let W∗ = (W̄∗V)⁻¹W̄∗.
4. while (not converged)
   (a) Ar = W∗AV,
   (b) σi ←− −λi(Ar) + Newton correction, i = 1, . . . , r,
   (c) W̄ = [(σ1I − A∗)⁻¹C∗, · · · , (σrI − A∗)⁻¹C∗],
   (d) V = [(σ1I − A)⁻¹B, · · · , (σrI − A)⁻¹B], W∗ = (W̄∗V)⁻¹W̄∗.
5. Ar = W∗AV, Br = W∗B, Cr = CV.
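A simplified IRKA sketch (the plain fixed-point update σ ← −λ(Ar), without the Newton correction of step 4(b); to keep all shifts real, the demo uses a symmetric system with C = B∗, an assumption made only for this illustration):

```python
import numpy as np

def irka(A, B, C, sigma, tol=1e-8, maxit=200):
    """Fixed-point IRKA: iterate sigma <- -eig(Ar) until the shifts settle."""
    n = A.shape[0]
    sigma = np.sort(np.asarray(sigma, dtype=float))
    for _ in range(maxit):
        V = np.hstack([np.linalg.solve(s * np.eye(n) - A, B) for s in sigma])
        Wb = np.hstack([np.linalg.solve(s * np.eye(n) - A.T, C.T) for s in sigma])
        Ws = np.linalg.solve(Wb.T @ V, Wb.T)          # W* = (Wbar* V)^{-1} Wbar*
        Ar, Br, Cr = Ws @ A @ V, Ws @ B, C @ V
        new = np.sort(-np.linalg.eigvals(Ar).real)    # fixed point: shifts = mirrored poles
        if np.max(np.abs(new - sigma)) < tol:
            break
        sigma = new
    return Ar, Br, Cr, sigma

# Symmetric demo system: A = A^T stable, C = B^T (keeps shifts real)
n, r = 30, 4
rng = np.random.default_rng(4)
A = -np.diag(np.arange(1.0, n + 1.0))
B = rng.standard_normal((n, 1))
C = B.T
Ar, Br, Cr, sigma = irka(A, B, C, sigma=[1.0, 2.0, 3.0, 4.0])

def G(s, A, B, C):
    return (C @ np.linalg.solve(s * np.eye(A.shape[0]) - A, B)).item()

# At convergence, Gr interpolates G at the final (mirrored-pole) shifts
errs = [abs(G(s, A, B, C) - G(s, Ar, Br, Cr)) for s in sigma]
print(max(errs))
```

At the fixed point the shifts equal the mirror images of the reduced poles, so the necessary H2 optimality conditions above are satisfied.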
-
Numerical examples: the CD Player
• CD player model: n = 120, m = p = 1.
• The Hankel singular values do not decay rapidly ⇒ the model is hard to reduce.
• The performance of the proposed method is compared with balanced truncation for r = 2, · · · , 40.
[Figure: two plots; legend (left): BT, Large φi, Random.]
Relative H2 norm of the error system vs. r (left) and the cases r = 8, r = 10 vs. the number of iterations (right).
-
Numerical examples: heat transfer
The problem of cooling steel profiles is modeled as boundary control of a two-dimensional heat equation. A finite element discretization with mesh width 1.382 × 10⁻² results in a system with state dimension n = 20209, m = 7 inputs and p = 6 outputs.
[Figure: amplitude Bode plots of H(jω) and Hr(jω) vs. frequency (rad/sec).]
We consider the full-order SISO system that associates the sixth input of this system with the second output. The reduced order is r = 6. IRKA converged in 7 iteration steps, and the relative H∞ error is 7.85 × 10⁻³.
-
Hankel approximation: Decay rate of HSVs
A new set of system invariants. Let the transfer function be
G(s) = C(sI − A)⁻¹B + D = p(s)/q(s), λi(A) distinct

Hankel singular values: σi(Σ) = √λi(PQ), i = 1, · · · , n, σi ≥ σi+1. With q∗(s) := q(−s), define:

γi = p(λi)/q∗(λi) = p(s)/q∗(s) |_{s=λi}, |γi| ≥ |γi+1|, i = 1, · · · , n − 1.
Decay rate of the Hankel singular values. The vector of Hankel singular values σ majorizes the absolute value of the vector of new invariants γ multiplicatively:
|γ| ≺μ σ
-
Reduction of 2nd order systems with proportional damping
Krylov-based model reduction of second-order systems where proportional damping is used to model energy dissipation. The distribution of system poles contains a circle in the complex plane. Through a connection with potential theory, the structure of these poles is exploited to obtain an optimal single-shift strategy used in rational Krylov model reduction. (Gugercin, Beattie)
-
Application: LA Administration Building
2nd order model: Mẍ + Dẋ + Kx = Bu, y = Cx, x ∈ Rn, n = 26394.
Proportional damping: D = αM + βK.
-
[Figures: damped eigenvalues and frequency response |G(jω)| of the original vs. reduced model, for α = 0.67, β = 0.0033.]
Projection matrices: W = V, where span V = span of the eigenvectors of the 200 dominant eigenvalues.
M̂ = V ∗MV , D̂ = V ∗DV , K̂ = V ∗KV , B̂ = V ∗B, Ĉ = CV .
Optimal shift: √(α/β).
-
Choice of points: summary
• Positive real interpolation and model reduction: mirror images of the spectral zeros
• Optimal H2 model reduction: mirror images of the reduced system poles
• Hankel norm model reduction: mirror images of the system poles
• Systems with proportional damping: mirror image of the 'focal' point
-
Approximation methods: Krylov/SVD
[Overview diagram repeated, with the Krylov/SVD branch highlighted.]
-
Iterative solution of large sparse Lyapunov equations
Consider AP + PA∗ + BB∗ = 0. For large A this equation cannot be solved exactly. However, iterative methods can be applied to compute low-rank approximations to the square-root factor of the solution: P ≈ VV∗, where V has k ≪ n columns.
Methods: ADI, approximate power iteration; these also exploit parallelism.
Players in this endeavor: Benner, Sorensen, Embree, ...
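A toy single-shift low-rank Smith iteration in this spirit (the test matrix and shift are made up; one real shift μ turns the Lyapunov equation into a Stein equation P = AcPAc∗ + BcBc∗ via a Cayley transform, and μ ≈ −√(ab) is a near-optimal single shift for a spectrum contained in [−a, −b]):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(5)
n = 100

# Hypothetical stable test problem A P + P A^T + B B^T = 0
A = -np.diag(np.linspace(1.0, 50.0, n)) + 0.01 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 2))

# Cayley transform with one shift mu < 0: P = Ac P Ac^T + Bc Bc^T
mu = -np.sqrt(50.0)                    # ~ optimal single shift for spectrum in [-50, -1]
I = np.eye(n)
Ac = np.linalg.solve(A + mu * I, A - mu * I)
Bc = np.sqrt(-2.0 * mu) * np.linalg.solve(A + mu * I, B)

# Low-rank Smith: Z = [Bc, Ac Bc, Ac^2 Bc, ...], so that P ~ Z Z^T
cols = [Bc]
for _ in range(40):
    cols.append(Ac @ cols[-1])
Z = np.hstack(cols)

# Compare against a dense reference solution (only feasible for small n)
P_exact = solve_continuous_lyapunov(A, -B @ B.T)
err = np.linalg.norm(Z @ Z.T - P_exact) / np.linalg.norm(P_exact)
print(Z.shape[1], err)   # low-rank factor, small relative error
```

Since ρ(Ac) < 1, the columns added per step shrink geometrically, which is what makes the low-rank factor accurate long before it reaches full rank.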
Example. Power grid delivery network
Eẋ(t) = Ax(t) + Bu(t)
• x an N−vector of node voltages and inductor currents,• N ≈ millions;• A is an N ×N conductance matrix,• E (diagonal) represents capacitance and inductance terms,
• u(t) includes the loads and voltage sources.
-
Power grid: N = 1,048,340
• No inductance
• Pf is rank k = 17
• Computational time = 77.52 min = 1.29 hrs
• Residual norm ≈ 9.5e-06
• Symmetric problem: A = A∗
• Modified Smith with multi-shift strategy
• Optimal shifts
• Run on a PC
-
Power grid: N = 1821, with inductance
Problem is non-symmetric; 4 real shifts used, 30 iterations per shift; Pf is rank k = 121, Comptime(Pf) = 11.4 secs; ‖P − Pf‖/‖P‖ = 2.5e−6, Comptime(P) = 370 secs.
[Figure: computed eigenvalues of A vs. H, k = 20, n = 1821.]
-
Convergence History
[Figures: history of the dominant eigenvalues of P (shifts s1–s8); eigenvalues of P for N = 1821, with reference levels σ1·10⁻⁸, σ1·10⁻⁴, σ1·10⁻².]
Evals of Pj – Decay rate for evals of P
-
Convergence history – power grid: N = 48,892
Iter | ‖P+ − P‖/‖P+‖ | ‖Bj‖    | ‖B̂j‖
 4   | 2.3e-01       | 8.5e+00 | 8.2e+01
 5   | 5.6e-02       | 2.2e+00 | 8.2e+00
 6   | 8.0e-02       | 1.1e+00 | 2.2e+00
 7   | 6.4e-02       | 2.4e-03 | 1.1e+00
 8   | 7.4e-05       | 3.0e-06 | 2.3e-03
 9   | 5.8e-08       | 1.9e-08 | 2.4e-06
Run on a PC. Pf is rank k = 307; Comptime(Pf) = 28 mins.
At N = 196K we run out of memory due to the rank of P.
-
Approximation methods: Properties
[Overview diagram repeated, annotated with properties. Krylov: numerical efficiency, n ≫ 500. SVD: stability preserved, error bound, n ≈ 500.]
-
Open problems
• Decay rate of the Hankel singular values
• Choice of expansion points for rational Krylov
• Choice of spectral zeros in passive model reduction
• Iterative SVD model reduction methods
• Model reduction of uncertain systems
• Model reduction of second-order systems
• Model reduction of descriptor (DAE) systems
• Model reduction for e.g. parabolic PDE systems
• Parallel algorithms for sparse computations in model reduction
• Development/validation of control algorithms based on reduced models
• In general: model reduction of structured systems