
Page 1:

An overview of model reduction methods for large-scale systems

Thanos Antoulas

Rice University

aca@rice.edu, URL: www-ece.rice.edu/~aca

Cluster Symposium, Numerical Methods for Model Order Reduction, Eindhoven University of Technology, 6 December 2006

Page 2:

Support: • two regular NSF grants

• NSF ITR grant

Collaborators: • Dan Sorensen (CAAM), TA (ECE)

• Paul van Dooren (Belgium)

• Ahmed Sameh (Purdue)

• Mete Sozen (Purdue)

• Ananth Grama (Purdue)

• Chris Hoffmann (Purdue)

• Kyle Gallivan (Florida State)

• Serkan Gugercin (Virginia Tech)

Page 3:

Contents

1. Introduction and problem statement

2. Overview of approximation methods

SVD – Krylov – Krylov/SVD

3. Choice of projection points in Krylov methods

4. Krylov/SVD methods – Solution of large Lyapunov equations

5. Future challenges/Concluding remarks

Page 4:

The big picture

[Flow diagram: Physical System + Data → Modeling → PDEs → discretization → ODEs → Model reduction → reduced # of ODEs → Simulation / Control]

Page 5:

Dynamical systems

[Block diagram: system Σ with inputs u1(·), . . . , um(·), internal states x1(·), . . . , xn(·), and outputs y1(·), . . . , yp(·)]

We consider explicit state equations

Σ : ẋ(t) = f(x(t), u(t)), y(t) = h(x(t), u(t))

with state x(·) of dimension n ≫ m, p.

Page 6:

Problem statement

Given: dynamical system Σ = (f, h) with u(t) ∈ R^m, x(t) ∈ R^n, y(t) ∈ R^p.

Problem: Approximate Σ with Σ̂ = (f̂, ĥ), u(t) ∈ R^m, x̂(t) ∈ R^k, ŷ(t) ∈ R^p, k ≪ n, such that:

(1) Approximation error small - global error bound

(2) Preservation of stability, passivity

(3) Procedure computationally efficient

Page 7:

Approximation by projection

Unifying feature of approximation methods: projections.

Let V, W ∈ R^{n×k} such that W∗V = I_k ⇒ Π = V W∗ is a projection.

Define x̂ = W∗x. Then

Σ̂ :  (d/dt) x̂(t) = W∗ f(V x̂(t), u(t)),   ŷ(t) = h(V x̂(t), u(t))

Thus Σ̂ is a "good" approximation of Σ if x − Πx is "small".

Page 8:

Special case: linear dynamical systems

Σ : ẋ(t) = Ax(t) + Bu(t), y(t) = Cx(t) + Du(t), written compactly as

Σ = ( A  B ; C  D )

Problem: Approximate Σ by projection Π = V W∗, Π² = Π:

Σ̂ = ( Â  B̂ ; Ĉ  D̂ ) = ( W∗AV  W∗B ; CV  D ),  k ≪ n

[Schematic: the n×n matrix A, together with B, C, D, is compressed to the k×k matrix Â = W∗AV (with B̂ = W∗B, Ĉ = CV, D̂ = D) via the projection matrices V ∈ R^{n×k} and W∗ ∈ R^{k×n}.]
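The following is a minimal numerical sketch of this projection step for a standard linear system (illustrative code, not from the slides; the function name project_lti and the random test data are assumptions):

```python
import numpy as np

def project_lti(A, B, C, D, V, W):
    """Petrov-Galerkin projection of (A, B, C, D): returns (W*AV, W*B, CV, D)
    after normalising W so that W^T V = I_k (real matrices assumed)."""
    M = W.T @ V
    W = W @ np.linalg.inv(M).T          # now W.T @ V = I_k
    return W.T @ A @ V, W.T @ B, C @ V, D

# tiny usage example with random data (n = 6, k = 2)
rng = np.random.default_rng(0)
A = -np.eye(6) + 0.1 * rng.standard_normal((6, 6))
B, C, D = rng.standard_normal((6, 1)), rng.standard_normal((1, 6)), np.zeros((1, 1))
V, W = rng.standard_normal((6, 2)), rng.standard_normal((6, 2))
Ar, Br, Cr, Dr = project_lti(A, B, C, D, V, W)
print(Ar.shape)                          # (2, 2)
```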

Page 9:

Special case: linear dynamical systems


Norms:

• H∞ norm: worst output error ‖y(t) − ŷ(t)‖ over all inputs with ‖u(t)‖ = 1.

• H2 norm: ‖h(t) − ĥ(t)‖, the L2 norm of the impulse-response error.
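As a small illustration (a sketch, not part of the slides), the H2 norm can be evaluated from a Gramian; applied to the difference system it yields the H2 error. The helper below assumes D = 0 and a stable A, and uses scipy's Lyapunov solver:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_norm(A, B, C):
    # ||G||_H2^2 = trace(C P C^T), with  A P + P A^T + B B^T = 0  (D = 0 assumed)
    P = solve_continuous_lyapunov(A, -B @ B.T)
    return float(np.sqrt(np.trace(C @ P @ C.T)))

# the H2 error ||G - G_hat||_H2 is h2_norm applied to the difference system:
#   A_err = blkdiag(A, A_hat), B_err = [B; B_hat], C_err = [C, -C_hat]
A = np.diag([-1.0, -2.0]); B = np.ones((2, 1)); C = np.ones((1, 2))
print(h2_norm(A, B, C))
```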

Page 10:

Motivating Examples: Simulation/Control

1. Passive devices • VLSI circuits

• Thermal issues

• Power delivery networks

2. Data assimilation • North sea forecast

• Air quality forecast

3. Molecular systems • MD simulations

• Heat capacity

4. CVD reactor • Bifurcations

5. Mechanical systems: • Windscreen vibrations

• Buildings

6. Optimal cooling • Steel profile

7. MEMS: Micro Electro-Mechanical Systems • Elk sensor

Page 11:

Passive devices: VLSI circuits

1960s: IC → 1971: Intel 4004 → 2001: Intel Pentium IV

                 Intel 4004 (1971)    Pentium IV (2001)
feature size     10μ                  0.18μ
components       2300                 42M
clock speed      64 KHz               2 GHz
interconnect     —                    2 km, 7 layers

64nm technology: gate delay < interconnect delay!

Page 12:

Passive devices: VLSI circuits

Conclusion: Simulations are required to verify that internal electromagnetic fields do not significantly delay or distort circuit signals. Therefore interconnections must be modeled.

⇒ Electromagnetic modeling of packages and interconnects ⇒ resulting models very complex: using PEEC methods (discretization of Maxwell's equations), n ≈ 10^5 … 10^6 ⇒ SPICE: inadequate

• Source: van der Meijs (Delft)

Page 13:

Passive devices: Thermal management of semiconductor devices

[Figures: semiconductor package — ANSYS grid — temperature distribution]

Semiconductor manufacturers provide their customers with models of heat dissipation. Procedure: from finite element models generated using ANSYS, a reduced order model is derived, which is provided to the customer.

• Source: Korvink (Freiburg)

Page 14:

Power delivery network for VLSI chips

[Figure: RLC model of the on-chip power delivery grid (VDD supply; R, L and C elements)]

#states ≈ 8 · 10^6

#inputs/outputs ≈ 1 · 10^6

Page 15:

Mechanical systems: cars

Car windscreen simulation subject to acceleration load.

Problem: compute noise at points away from the window.

PDE: describes deformation of a structure of a specific material; FE discretization: 7564 nodes (3 layers of 60 by 30 elements). Material: glass with Young modulus 7·10^10 N/m^2, density 2490 kg/m^3, Poisson ratio 0.23 ⇒ coefficients of the FE model determined experimentally.

The discretized problem has dimension 22,692.

Notice: this problem yields 2nd-order equations: M ẍ(t) + C ẋ(t) + K x(t) = f(t).

• Source: Meerbergen (Free Field Technologies)

Page 16:

Mechanical Systems: Buildings

Earthquake prevention

Yokohama Landmark Tower: two active tuned mass dampers damp the dominant vibration mode of 0.185 Hz.

• Source: S. Williams

Page 17:

MEMS: Elk sensor

Mercedes A Class semiconductor rotation sensor

• Source: Laur (Bremen)

Page 18:

Approximation methods: Overview

[Diagram: overview of approximation methods]

SVD-based methods
— Nonlinear systems: • POD methods • Empirical Gramians
— Linear systems: • Balanced truncation • Hankel approximation

Krylov-based methods: • Realization • Interpolation • Lanczos • Arnoldi

Krylov/SVD methods

Page 19:

SVD Approximation methods

SVD: A = UΣV ∗. A prototype problem.

[Figures: singular values of the Clown and Supernova images (log-log scale; green: clown, red: supernova); original pictures and rank 6, 12 and 20 approximations of each]

Singular values provide trade-off between accuracy and complexity
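A minimal sketch of this prototype problem (illustrative code, not from the slides): the best rank-k approximation is obtained by truncating the SVD, and its 2-norm error equals the (k+1)st singular value:

```python
import numpy as np

def rank_k_approx(A, k):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

A = np.random.default_rng(1).standard_normal((200, 320))   # stand-in for an image
A6 = rank_k_approx(A, 6)
s = np.linalg.svd(A, compute_uv=False)
print(np.linalg.norm(A - A6, 2), s[6])                      # the two numbers coincide
```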

Page 20:

POD: Proper Orthogonal Decomposition

Consider ẋ(t) = f(x(t), u(t)), y(t) = h(x(t), u(t)). Snapshots of the state:

X = [x(t1) x(t2) · · · x(tN)] ∈ R^{n×N}

SVD: X = UΣV∗ ≈ U_k Σ_k V_k∗, k ≪ n. Approximate the state:

ξ(t) = U_k∗ x(t) ⇒ x(t) ≈ U_k ξ(t), ξ(t) ∈ R^k

Project the state and output equations. Reduced order system:

ξ̇(t) = U_k∗ f(U_k ξ(t), u(t)), y(t) = h(U_k ξ(t), u(t))

⇒ ξ(t) evolves in a low-dimensional space.

Issues with POD: (a) choice of snapshots, (b) singular values are not I/O invariants.
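A short POD sketch on synthetic snapshot data (illustrative only; the toy linear system below is used merely to generate the snapshot matrix X):

```python
import numpy as np

def pod_basis(X, k):
    """Leading k left singular vectors of the snapshot matrix X (columns = x(t_i))."""
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :k], s

# generate snapshots from a toy linear system x' = A x (explicit Euler steps)
rng = np.random.default_rng(0)
n, N, k, dt = 100, 60, 5, 0.01
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
x = rng.standard_normal(n)
snaps = []
for _ in range(N):
    snaps.append(x.copy())
    x = x + dt * (A @ x)
X = np.column_stack(snaps)

Uk, s = pod_basis(X, k)
A_red = Uk.T @ A @ Uk        # projected dynamics: xi' = Uk^T f(Uk xi, u)
print(A_red.shape, (s[:k] / s[0]).round(3))
```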

Page 21:

SVD methods: the Hankel singular values

Trade-off between accuracy and complexity for linear dynamical systems is provided by the Hankel singular values, which are computed as follows:

Gramians:

P = ∫_0^∞ e^{At} B B∗ e^{A∗t} dt,   Q = ∫_0^∞ e^{A∗t} C∗ C e^{At} dt

To compute the Gramians, solve two Lyapunov equations:

A P + P A∗ + B B∗ = 0, P > 0
A∗ Q + Q A + C∗ C = 0, Q > 0

⇒ σi = √λi(PQ)

σi: Hankel singular values of the system Σ.
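A direct sketch of this computation for a small dense system (illustrative code; for large-scale problems the Lyapunov equations would be solved iteratively, as discussed later):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def hankel_singular_values(A, B, C):
    P = solve_continuous_lyapunov(A, -B @ B.T)      # reachability Gramian
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)    # observability Gramian
    # sigma_i = sqrt(lambda_i(P Q)); the eigenvalues of P Q are real and >= 0
    return np.sort(np.sqrt(np.abs(np.linalg.eigvals(P @ Q))))[::-1]

A = np.diag([-1.0, -2.0, -5.0, -10.0])
B = np.ones((4, 1)); C = np.ones((1, 4))
print(hankel_singular_values(A, B, C))
```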

Page 22:

SVD methods: approximation by balanced truncation

There exists a basis in which P = Q = S = diag(σ1, · · · , σn). This is the balanced basis of the system.

In this basis partition:

A = ( A11  A12 ; A21  A22 ),   B = ( B1 ; B2 ),   C = ( C1 | C2 ),   S = ( Σ1  0 ; 0  Σ2 )

ROM obtained by balanced truncation: Σ̂ = ( A11  B1 ; C1 )

Σ2 contains the small Hankel singular values.

Projector: Π = V W∗ where V = W = ( I_k ; 0 ).
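A sketch of balanced truncation in its square-root form, which avoids forming the balancing transformation explicitly (illustrative code, not from the slides; a stable A and nonsingular Gramians are assumed):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, k):
    P = solve_continuous_lyapunov(A, -B @ B.T)
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    R = cholesky(P, lower=True)                 # P = R R^T
    L = cholesky(Q, lower=True)                 # Q = L L^T
    U, s, Zt = svd(L.T @ R)                     # singular values = Hankel singular values
    S1 = np.diag(s[:k] ** -0.5)
    V = R @ Zt[:k, :].T @ S1                    # projection matrices with W^T V = I_k
    W = L @ U[:, :k] @ S1
    return W.T @ A @ V, W.T @ B, C @ V, s       # reduced (A, B, C) and all HSVs

A = np.diag([-1.0, -2.0, -5.0, -10.0]); B = np.ones((4, 1)); C = np.ones((1, 4))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, 2)
print(hsv)
```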

Page 23:

Properties of balanced reduction

1. Stability is preserved

2. Global error bound :

σ_{k+1} ≤ ‖Σ − Σ̂‖_∞ ≤ 2(σ_{k+1} + · · · + σ_n)

Drawbacks

1. Dense computations, matrix factorizations and inversions ⇒ may be ill-conditioned

2. Need the whole transformed system in order to truncate ⇒ number of operations O(n³)

3. Bottleneck : solution of two Lyapunov equations

Page 24:

Approximation methods: Krylov

[Overview diagram as on Page 18, now highlighting the Krylov branch: • Realization • Interpolation • Lanczos • Arnoldi]

Page 25:

Krylov approximation methods

Given E ẋ = Ax + Bu, y = Cx + Du, expand the transfer function around s0:

G(s) = η0 + η1 (s − s0) + η2 (s − s0)² + η3 (s − s0)³ + · · ·

Moments at s0: ηj. Find Ê (d/dt)x̂ = Â x̂ + B̂ u, ŷ = Ĉ x̂ + D̂ u, with

Ĝ(s) = η̂0 + η̂1 (s − s0) + η̂2 (s − s0)² + η̂3 (s − s0)³ + · · ·

such that for appropriate s0 and ℓ:

η̂j = ηj, j = 1, 2, · · · , ℓ

Page 26:

Key fact for numerical reliability

But: computation of moments is numerically problematic.

Solution: given the original system matrices, moment matching methods can be implemented in a numerically efficient way: moments can be matched without computing them explicitly.

Iterative implementation: ARNOLDI – LANCZOS

Special cases:

Expansion around infinity: ηj : Markov parameters. Problem: partial realization.

Expansion around zero: ηj: moments. Problem: Padé interpolation.

In general: arbitrary s0 ∈ C: Problem: rational Krylov ⇔ rational interpolation.

Page 27:

Rational Lanczos: matching moments

Consider the system E ẋ = Ax + Bu, y = Cx + Du, and the projection Π = V (W∗V)⁻¹ W∗, where

V = [ (λ1 E − A)⁻¹B · · · (λk E − A)⁻¹B ] ∈ R^{n×k},

W∗ = ( C(λ_{k+1} E − A)⁻¹ ; . . . ; C(λ_{2k} E − A)⁻¹ ) ∈ R^{k×n}.

The projected system Ê = W∗EV, Â = W∗AV, B̂ = W∗B, Ĉ = CV, D̂ = D satisfies:

• If λi ≠ λ_{k+j}, i, j = 1, · · · , k:

η0 = G(λi) = D + C(λi E − A)⁻¹B = D̂ + Ĉ(λi Ê − Â)⁻¹B̂ = Ĝ(λi) = η̂0,  i = 1, ..., 2k

• If λi = λ_{k+i}, i = 1, · · · , k, the above conditions hold for i = 1, · · · , k, and in addition

η1 = (dG/ds)(λi) = −C(λi E − A)⁻²B = −Ĉ(λi Ê − Â)⁻²B̂ = (dĜ/ds)(λi) = η̂1,  i = 1, ..., k
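A small numerical sketch of this construction (illustrative only): V and W are built from the chosen points, the projected descriptor system is formed, and the interpolation is checked. Using the same points on both sides gives Hermite (value and derivative) interpolation, as stated above.

```python
import numpy as np

def rational_projection(A, B, C, D, right_pts, left_pts, E=None):
    """Build V from (lambda E - A)^{-1} B and W from C (lambda E - A)^{-1};
    return the projected descriptor system (E_hat, A_hat, B_hat, C_hat, D)."""
    n = A.shape[0]
    E = np.eye(n) if E is None else E
    V = np.column_stack([np.linalg.solve(s * E - A, B).ravel() for s in right_pts])
    W = np.column_stack([np.linalg.solve((s * E - A).T, C.ravel()) for s in left_pts])
    return W.T @ E @ V, W.T @ A @ V, W.T @ B, C @ V, D

def tf(E, A, B, C, D, s):
    return (D + C @ np.linalg.solve(s * E - A, B)).item()

A = np.diag([-1.0, -3.0, -6.0, -10.0]); B = np.ones((4, 1)); C = np.ones((1, 4)); D = np.zeros((1, 1))
pts = [0.5, 2.0]                          # same points used as left and right points
Er, Ar, Br, Cr, Dr = rational_projection(A, B, C, D, pts, pts)
I4 = np.eye(4)
for s in pts:
    print(tf(I4, A, B, C, D, s), tf(Er, Ar, Br, Cr, Dr, s))   # values agree at each point
```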

Page 28:

Properties of Krylov methods

(a) Number of operations: O(kn²) or O(k²n) vs. O(n³) ⇒ efficiency

(b) Only matrix-vector multiplications are required. No matrix factorizations and/or inversions. No need to compute the transformed model and then truncate.

(c) Drawbacks

• global error bound?

• Σ̂ may not be stable.

Q : How to choose the projection points?

Page 29:

Choice of projection points: passive model reduction

Passive systems: Re ∫_{−∞}^{t} u(τ)∗ y(τ) dτ ≥ 0, ∀ t ∈ R, ∀ u ∈ L2(R).

Positive real rational functions:
(1) G(s) = D + C(sE − A)⁻¹B is analytic for Re(s) > 0,
(2) Re G(s) ≥ 0 for Re(s) ≥ 0, s not a pole of G(s).

Theorem: Σ = (E, A, B, C, D) is passive ⇔ G(s) is positive real.

Positive realness of G(s) implies the existence of a spectral factorization G(s) + G∗(−s) = W(s) W∗(−s), where W(s) is stable rational and W(s)⁻¹ is also stable. The spectral zeros λi of the system are the zeros of the stable spectral factor: W(λi) = 0, i = 1, · · · , n.

Page 30:

Main result

Method: Rational Lanczos

Solution: projection points = spectral zeros

Main result. If V, W are defined as above, where λ1, · · · , λk are spectral zeros, and in addition λ_{k+i} = −λi∗, the reduced system satisfies:

(i) the interpolation constraints, (ii) it is stable, and (iii) it is passive.

Page 31:

Projectors as generalized eigenspaces

The spectral zeros are the generalized eigenvalues of the pencil (𝒜, ℰ), where

𝒜 = ( A, 0, B ; 0, −A∗, −C∗ ; C, B∗, D + D∗ )   and   ℰ = ( E, 0, 0 ; 0, E∗, 0 ; 0, 0, 0 )

Page 32:

Projectors as generalized eigenspaces

Passivity preservation via invariant subspaces:

𝒜 ( X ; Y ; Z ) = ℰ ( X ; Y ; Z ) R,   R ∈ R^{k×k},   𝒜, ℰ ∈ R^{ℓ×ℓ},  ℓ = 2n + m,

i.e. the columns of (X; Y; Z) span the deflating subspace of the pencil (𝒜, ℰ) associated with the k selected spectral zeros.

Construction of projectors: V = X, W = Y.

Conclusion: preservation of passivity = eigenvalue problem
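A sketch of this eigenvalue computation (illustrative code; E = I and real spectral zeros are assumed for simplicity — complex pairs would need a real basis of their two-dimensional subspace):

```python
import numpy as np
from scipy.linalg import eig

def spectral_zero_projectors(A, B, C, D, k):
    """Spectral zeros as generalized eigenvalues of the pencil above (E = I assumed),
    and V = X, W = Y from the eigenvectors of k selected stable, real spectral zeros."""
    n, m = B.shape
    Z = np.zeros
    Acal = np.block([[A,          Z((n, n)), B],
                     [Z((n, n)), -A.T,      -C.T],
                     [C,          B.T,       D + D.T]])
    Ecal = np.block([[np.eye(n), Z((n, n)), Z((n, m))],
                     [Z((n, n)), np.eye(n), Z((n, m))],
                     [Z((m, n)), Z((m, n)), Z((m, m))]])
    lam, vec = eig(Acal, Ecal)
    ok = np.isfinite(lam) & (lam.real < 0) & (np.abs(lam.imag) < 1e-8)
    sel = np.where(ok)[0][np.argsort(-lam[ok].real)[:k]]   # k stable zeros closest to the jw-axis
    V = vec[:n, sel].real          # X block
    W = vec[n:2 * n, sel].real     # Y block
    return lam[sel].real, V, W
```

Applied to the RLC ladder example on the following slides, this should recover the real stable spectral zeros −0.7943, −1.3018, −1.8355 listed there.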

Page 33:

Example: simple RLC ladder circuit

[Circuit: RLC ladder with input u and output y; elements R2, L2, L1, C3, C2, C1, R1]

A = ( −2   1   0   0   0
      −1   0   1   0   0
       0  −1   0   1   0
       0   0  −1   0   1
       0   0   0  −1  −5 ),   B = ( 0 ; 0 ; 0 ; 0 ; 2 ),   C = [0 0 0 0 −2],   D = 1

G(s) = (s⁵ + 3s⁴ + 6s³ + 9s² + 7s + 3) / (s⁵ + 7s⁴ + 14s³ + 21s² + 23s + 7)

Page 34:

G(s) + G(−s) = 2(s¹⁰ − s⁸ − 12s⁶ + 5s⁴ + 35s² − 21) / [(s⁵ + 7s⁴ + 14s³ + 21s² + 23s + 7)(s⁵ − 7s⁴ + 14s³ − 21s² + 23s − 7)].

The zeros of the stable spectral factor are:

−0.1833 ± 1.5430i,  −0.7943,  −1.3018,  −1.8355

As interpolation points we choose: λ1 = 1.8355, λ2 = −1.8355, λ3 = 1.3018, λ4 = −1.3018.

V = [ (λ1 I5 − A)⁻¹B   (λ2 I5 − A)⁻¹B ],   W∗ = ( C(λ3 I5 − A)⁻¹ ; C(λ4 I5 − A)⁻¹ )

The reduced system is

Â = W∗AV = −( 3.2923  5.0620 ; 0.9261  2.5874 ),   B̂ = W∗B = −( 1.4161 ; 0.2560 ),

Ĉ = CV = [1.9905  5.0620], and D̂ = D.  ⇒ Ĝ(λi) = G(λi), i = 1, 2, 3, 4.
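A short numerical check of this example (illustrative, not part of the slides). The reduced descriptor matrix Ê = W∗V is kept explicitly here, which is equivalent, up to a change of reduced-state coordinates, to the reduced matrices printed above:

```python
import numpy as np

A = np.array([[-2., 1., 0., 0., 0.],
              [-1., 0., 1., 0., 0.],
              [ 0.,-1., 0., 1., 0.],
              [ 0., 0.,-1., 0., 1.],
              [ 0., 0., 0.,-1.,-5.]])
B = np.array([[0.], [0.], [0.], [0.], [2.]])
C = np.array([[0., 0., 0., 0., -2.]])
D = np.array([[1.]])
I5 = np.eye(5)

lam = [1.8355, -1.8355, 1.3018, -1.3018]
V = np.column_stack([np.linalg.solve(l * I5 - A, B).ravel() for l in lam[:2]])
Wt = np.vstack([C @ np.linalg.inv(l * I5 - A) for l in lam[2:]])      # rows C(lam*I - A)^{-1}

Er, Ar, Br, Cr = Wt @ V, Wt @ A @ V, Wt @ B, C @ V                    # reduced descriptor system
G  = lambda s: (D + C  @ np.linalg.solve(s * I5 - A,  B)).item()
Gr = lambda s: (D + Cr @ np.linalg.solve(s * Er - Ar, Br)).item()
for l in lam:
    print(l, G(l), Gr(l))      # the two values agree at all four interpolation points
```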

Page 35:

[Circuit: second-order RLC realization of the reduced model, with input u and output y; elements R2, L, C, R1]

The reduced order model can be realized by means of the second-order RLC circuit shown, with the following values of the parameters: R1 = 4.8258, R2 = 1.2907, C = 0.1459, L = 8.4796.

The corresponding transfer function turns out to be

Ĝ(s) = (0.7784 s² + 0.4409 s + 0.6263) / (s² + 5.8797 s + 3.8306).

Large-dimensional example: power grid delivery network

Page 36:

Choice of points: optimal H2 model reduction

Krylov projection methods. Given Π = V W∗, V, W ∈ R^{n×k}, W∗V = I_k:

Â = W∗AV,  B̂ = W∗B,  Ĉ = CV

The H2 norm of the system is:

‖Σ‖_{H2} = ( ∫_{−∞}^{+∞} h²(t) dt )^{1/2}

Goal: construct a Krylov projector such that

Σ̂_k = arg min { ‖Σ − Σ̂‖_{H2} : deg(Σ̂) = k, Σ̂ stable }.

Page 37:

First-order necessary optimality conditions

Let (Â, B̂, Ĉ) solve the optimal H2 problem and let λ̂i denote the eigenvalues of Â. The necessary conditions are

G(−λ̂i∗) = Ĝ(−λ̂i∗)   and   (dG/ds)(−λ̂i∗) = (dĜ/ds)(−λ̂i∗)

Thus the reduced system has to match the first two moments of the original system at the mirror images of the eigenvalues of Â.

The H2 norm: if G(s) = Σ_{k=1}^{n} φ_k / (s − λ_k)  ⇒  ‖G‖²_{H2} = Σ_{k=1}^{n} φ_k G(−λ_k)

Corollary. Let in addition Ĝ(s) = Σ_{k=1}^{r} φ̂_k / (s − λ̂_k). The H2 norm of the error system is

J = ‖G − Ĝ‖²_{H2} = Σ_{i=1}^{n} φ_i [ G(−λ_i) − Ĝ(−λ_i) ] + Σ_{j=1}^{r} φ̂_j [ Ĝ(−λ̂_j) − G(−λ̂_j) ]

Conclusion. The H2 error is due to the mismatch of the transfer functions G − Ĝ at the mirror images of the full-order and reduced system poles λ_i, λ̂_j.

Page 38:

An iterative algorithm

Let the system obtained after the (j − 1)st step be (C_{j−1}, A_{j−1}, B_{j−1}), where A_{j−1} ∈ R^{k×k}, B_{j−1}, C∗_{j−1} ∈ R^k. At the jth step the system is obtained as follows:

A_j = (W_j∗ V_j)⁻¹ W_j∗ A V_j,   B_j = (W_j∗ V_j)⁻¹ W_j∗ B,   C_j = C V_j,

where

V_j = [ (λ1 I − A)⁻¹B, · · · , (λk I − A)⁻¹B ],
W_j∗ = ( C(λ1 I − A)⁻¹ ; · · · ; C(λk I − A)⁻¹ ),

and −λ1, · · · , −λk ∈ σ(A_{j−1}), i.e. the −λi are the eigenvalues of the (j − 1)st iterate A_{j−1}.

The Newton step can be computed explicitly:

( λ1^{(k)} ; λ2^{(k)} ; . . . ) ←− ( λ1^{(k)} ; λ2^{(k)} ; . . . ) − J⁻¹ ( λ1^{(k−1)} ; λ2^{(k−1)} ; . . . )

⇒ local convergence guaranteed.

Page 39:

An iterative rational Krylov algorithm (IRKA)

The proposed algorithm produces a reduced order model Ĝ(s) that satisfies the interpolation-based conditions, i.e.

G(−λ̂i∗) = Ĝ(−λ̂i∗)   and   (dG/ds)(−λ̂i∗) = (dĜ/ds)(−λ̂i∗)

1. Make an initial selection of σi, for i = 1, . . . , k

2. W = [ (σ1 I − A∗)⁻¹C∗, · · · , (σk I − A∗)⁻¹C∗ ]

3. V = [ (σ1 I − A)⁻¹B, · · · , (σk I − A)⁻¹B ]

4. while (not converged)

(a) Â = (W∗V)⁻¹ W∗AV,

(b) σi ←− −λi(Â) + Newton correction, i = 1, . . . , k

(c) W = [ (σ1 I − A∗)⁻¹C∗, · · · , (σk I − A∗)⁻¹C∗ ]

(d) V = [ (σ1 I − A)⁻¹B, · · · , (σk I − A)⁻¹B ]

5. Â = (W∗V)⁻¹ W∗AV,  B̂ = (W∗V)⁻¹ W∗B,  Ĉ = CV
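A compact sketch of the fixed-point form of this iteration, without the Newton correction of step (b) and with real shifts assumed for simplicity (illustrative code, not the authors' implementation):

```python
import numpy as np

def irka(A, B, C, k, n_iter=100, tol=1e-8):
    """Fixed-point IRKA sketch (no Newton correction); real shifts assumed."""
    n = A.shape[0]
    sigma = np.linspace(0.1, 1.0, k)                 # initial shift selection
    for _ in range(n_iter):
        V = np.column_stack([np.linalg.solve(s * np.eye(n) - A, B).ravel() for s in sigma])
        W = np.column_stack([np.linalg.solve(s * np.eye(n) - A.T, C.ravel()) for s in sigma])
        Ar = np.linalg.solve(W.T @ V, W.T @ A @ V)   # (W*V)^{-1} W* A V
        new_sigma = np.sort(-np.linalg.eigvals(Ar).real)   # mirror images of the reduced poles
        if np.max(np.abs(new_sigma - sigma)) < tol:
            sigma = new_sigma
            break
        sigma = new_sigma
    Br = np.linalg.solve(W.T @ V, W.T @ B)
    Cr = C @ V
    return Ar, Br, Cr, sigma

A = np.diag([-1.0, -2.0, -4.0, -8.0, -16.0]); B = np.ones((5, 1)); C = np.ones((1, 5))
Ar, Br, Cr, shifts = irka(A, B, C, 2)
print(shifts)        # converged interpolation points = mirror images of the reduced poles
```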

Page 40:

Choice of points: summary

• Positive real interpolation and model reduction: mirror images of the spectral zeros

• Optimal H2 model reduction: mirror images of the reduced system poles

Page 41:

Approximation methods: Krylov/SVD

[Overview diagram as on Page 18, now highlighting the Krylov/SVD branch]

Page 42:

Iterative solution of large sparse Lyapunov equations

Consider AP + PA∗ + BB∗ = 0. Solution method: since P > 0, compute a square-root factor P = LL∗.

For large A this equation cannot be solved exactly. Iterative methods can be applied to compute low-rank approximations to L: P̂ = V V∗, where V has rank k ≪ n:

P = L L∗ ≈ V V∗ = P̂

Methods: ADI, approximate power iteration; guaranteed convergence.

Example. Power grid delivery network: E ẋ(t) = Ax(t) + Bu(t), where x is the n-vector of node voltages and inductor currents, n ≈ millions, A is the n×n conductance matrix, E contains the capacitance and inductance terms (diagonal), and u(t) includes the loads and voltage sources.
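As a simple illustration of this idea (not the multi-shift modified Smith method used in the experiments below), here is a single-shift low-rank Smith iteration; in practice the accumulated factor would also be compressed, e.g. by a rank-revealing QR, to keep it low-rank:

```python
import numpy as np

def lowrank_smith(A, B, mu=1.0, n_iter=60):
    """Single-shift low-rank Smith iteration for A P + P A^T + B B^T = 0 (A stable, mu > 0).
    Returns Z with P ~= Z Z^T, built from  Ad = (A - mu I)^{-1}(A + mu I),
    Bd = sqrt(2 mu) (A - mu I)^{-1} B,  and  P = sum_j Ad^j Bd Bd^T (Ad^T)^j."""
    n = A.shape[0]
    F = np.linalg.inv(A - mu * np.eye(n))    # in practice a sparse LU factorization
    Ad = F @ (A + mu * np.eye(n))
    Vj = np.sqrt(2.0 * mu) * (F @ B)
    cols = [Vj]
    for _ in range(n_iter):
        Vj = Ad @ Vj
        cols.append(Vj)
    return np.hstack(cols)

A = np.diag([-1.0, -2.0, -5.0, -10.0]); B = np.ones((4, 1))
Z = lowrank_smith(A, B)
P = Z @ Z.T
print(np.linalg.norm(A @ P + P @ A.T + B @ B.T))   # Lyapunov residual, should be small
```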

Page 43:

Power grid: RLC – N = 1,821

P̂ has rank k = 121; computational time for P̂: 11 secs. ‖P − P̂‖/‖P‖ = 10⁻⁶; computational time for the full P: 370 secs.

[Plots: computed eigenvalues of A vs. H (k = 20, n = 1821); convergence history of the dominant eigenvalues of P̂ over the iterations]

Problem is non-symmetric; 4 real shifts used, 30 iterations per shift.

Page 44:

Power grid: further examples

                         N = 48,892 (RLC)    N = 1,048,340 (RC)
rank of P̂                k = 307              k = 17
computational time        28 minutes           77 minutes
residual norm             10⁻⁸                 10⁻⁵

Run on a PC; modified Smith method with multi-shift strategy.

Page 45:

Comparison of model reduction methods

Two coupled transmission lines:

[Figure: two coupled transmission lines, each modeled as an RLC ladder (repeated R, L, C sections) with mutual coupling M; input u, output y]

Page 46:

Reduction methods

We consider N = 161 sections, i.e. n = 642. The reduced order systems have k = 51.

We will reduce using the following methods:

1. Balanced truncation: diagonalization of two gramians P , Q, where

AP + PA∗ + BB∗ = 0, A∗Q+QA + C∗C = 0.

2. OptimalH2 method: IRKA (Iterative Rational Krylov Algorithm).

3. Truncation by diagonalization of one gramian P (PMTBR).

4. Positive real balancing: diagonalization of solutions to PR Riccati equations

A∗P + PA + C∗C + (PB + C∗D)(I −D∗D)−1(PB + C∗D)∗ = 0,

AQ+QA∗ + BB∗ + (QC∗ + BD∗)(I −DD∗)−1(QC∗ + BD∗)∗ = 0.

5. PRIMA (Arnoldi method with expansion point s0 = 0).

6. Spectral zero method (Lanczos method with matching of spectral zeros).

Page 47:

Plots

[Plots (n = 642, k = 51; frequency × 10⁹ rad/sec, singular values in dB): frequency response of the original system; poles (blue dots) and spectral zeros (red dots), with the chosen spectral zeros circled in green; Hankel singular values, eigenvalues of P and Q, and positive-real HSVs; frequency responses of the original and all reduced systems (Orig, BalTrunc, OneGram, PR BalTrunc, PRIMA, SpecZer).]

Page 48:

Plots

[Error plots (n = 642, k = 51; frequency × 10⁹ rad/sec, singular values in dB): balanced truncation vs. one-gramian (PMTBR); balanced truncation vs. positive-real balancing; balanced truncation vs. PRIMA; balanced truncation vs. spectral zero method.]

Page 49:

Plots

[Plots (n = 642, k = 51; frequency × 10⁹ rad/sec, singular values in dB): error plots for PMTBR vs. spectral zeros; PRIMA vs. spectral zeros; balanced truncation vs. optimal H2; and frequency responses of the original system, balanced truncation, and optimal H2.]

Page 50:

Additional method: MOR from I/O Data

We take 2000 frequency response measurements in [−10¹², −10⁷] and [10⁷, 10¹²]. Using the Loewner matrix identification method we construct a reduced order model directly from the data.

[Plots (n = 642, k = 51; frequency × 10⁹ rad/sec, singular values in dB): frequency response of the original system vs. the I/O-data (Loewner) model; error plots for balanced truncation, the I/O-data model, and optimal H2.]

Page 51:

Comparison of the seven methods

Reduction method (n = 642, k = 51)   H∞ norm   H2 norm   Stability   Passivity
Balanced truncation                  0.539     0.236     Y           –
Optimal H2                           0.421     0.129     Y           –
One gramian (PMTBR)                  0.752     0.414     –           –
Positive real BT                     0.368     0.201     Y           Y
PRIMA                                1.096     0.657     MNA         MNA
Spectral zero method                 0.861     0.567     Y           Y
I/O Data method                      0.546     0.215     –           –

Relative errors: ‖Σ − Σ_red‖ / ‖Σ‖

Page 52:

Approximation methods: Properties

[Overview diagram as on Page 18, annotated with the properties of each family:]

SVD-based methods (POD, empirical Gramians; balanced truncation, Hankel approximation):
• Stability
• Error bound
• n ≈ 10³

Krylov-based methods (Realization, Interpolation, Lanczos, Arnoldi):
• numerical efficiency
• n ≫ 10³

Krylov/SVD methods

Page 53:

(Some) Future challenges in complexity reduction

Model reduction of uncertain systems

Model reduction of differential-algebraic (DAE) systems

Domain decomposition methods

Parallel algorithms for sparse computations in model reduction

Development/validation of control algorithms based on reduced models

Model reduction and data assimilation (weather prediction)

Active control of high-rise buildings

MEMS and multi-physics problems

VLSI design

Molecular Dynamics (MD) simulations

Nanostructure simulation

Page 54:

Reference
