ECE504: Lecture 5
D. Richard Brown III
Worcester Polytechnic Institute
30-Sep-2008
Worcester Polytechnic Institute D. Richard Brown III 30-Sep-2008 1 / 43
Lecture 5 Major Topics
We are still in Part II of ECE504: Quantitative and qualitative
analysis of systems
mathematical description → results about behavior of system
Today:
1. Existence and uniqueness of solutions to state equations for continuous-time systems
2. Linear algebraic tools that we are going to need for analysis of A^k and exp{At}:
   ◮ Subspaces
   ◮ Nullspace and range
   ◮ Rank
   ◮ Matrix invertibility

You should be reading Chen Chapter 4 now (and referring back to Chapter 3 for the necessary linear algebra).
Continuous-Time Linear Systems
    ẋ(t) = A(t)x(t) + B(t)u(t)   (1)
    y(t) = C(t)x(t) + D(t)u(t)   (2)
Theorem
For any t0 ∈ R, any x(t0) ∈ R^n, and any u(t) ∈ R^p for all t ≥ t0, there exists a unique solution x(t) for all t ∈ R to the state-update differential equation (1). It is given as

    x(t) = Φ(t, t0)x(t0) + ∫_{t0}^{t} Φ(t, τ)B(τ)u(τ) dτ,   t ∈ R

where Φ(t, s) : R² → R^{n×n} is the unique function satisfying

    (d/dt)Φ(t, s) = A(t)Φ(t, s)   with   Φ(s, s) = I_n.
Theorem Remarks
◮ Note that this theorem claims two things:
  1. A solution to the state-update equation always exists.
  2. The solution is unique.
◮ Why is this important?
◮ Not every differential equation has a solution, e.g.

      ẋ(t) = 1/t   with x(0) = 5.

◮ Not every differential equation has a unique solution, e.g.

      ẋ(t) = 3(x(t))^{2/3}   with x(0) = 0.

◮ We have already established uniqueness for the vector/matrix state-update differential equation ẋ(t) = A(t)x(t) + B(t)u(t) with initial condition x(t0).
◮ We still need to establish existence.
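The non-uniqueness example can be checked directly: both x(t) = 0 and x(t) = t³ satisfy ẋ(t) = 3(x(t))^{2/3} with x(0) = 0. A small numerical sketch (mine, not from the slides):

```python
import numpy as np

def rhs(x):
    # right-hand side of the scalar DE: x_dot = 3 * x^(2/3)
    return 3.0 * np.cbrt(x) ** 2

t = np.linspace(0.0, 2.0, 201)

# candidate solution 1: x(t) = 0 for all t
x_zero = np.zeros_like(t)
# candidate solution 2: x(t) = t^3
x_cubic = t ** 3

# both satisfy the DE: d/dt(t^3) = 3 t^2 = 3 (t^3)^(2/3)
lhs_cubic = 3.0 * t ** 2              # exact derivative of t^3
assert np.allclose(lhs_cubic, rhs(x_cubic))
assert np.allclose(np.zeros_like(t), rhs(x_zero))
# and both satisfy the same initial condition x(0) = 0
assert x_zero[0] == 0.0 and x_cubic[0] == 0.0
```

Two different trajectories through the same initial condition: this is exactly the pathology the theorem rules out for equation (1).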
Theorem: Existence Proof Warmup #1
To develop some intuition, let's first assume that everything is scalar, i.e. p = q = n = 1. Our state-update equation becomes

    ẋ(t) = a(t)x(t) + b(t)u(t).

Let

    φ(t, s) := exp{∫_s^t a(τ) dτ}.

What is φ(s, s)? What is (d/dt)φ(t, s)?
Theorem: Existence Proof Warmup #1
Note that φ(t, s) = exp{∫_s^t a(τ) dτ} always exists and satisfies its own differential equation:

    (d/dt)φ(t, s) = a(t)φ(t, s)   with   φ(s, s) = 1.

Now let's try the following solution to the scalar state-update differential equation with initial state condition x(t0):

    x(t) = φ(t, t0)x(t0) + ∫_{t0}^{t} φ(t, τ)b(τ)u(τ) dτ,   ∀t ∈ R.

To see that this is indeed a solution, we need to confirm two things:
1. Does our solution satisfy the initial condition requirement of the scalar state-update DE?
2. Does our solution really solve the scalar state-update DE?
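Both checks can be carried out numerically for a concrete choice of coefficients. As a sketch (my example, not from the slides), take a(t) ≡ −1, b(t) ≡ 1, u(t) ≡ 1, so that φ(t, s) = e^{−(t−s)} and the integral evaluates in closed form:

```python
import numpy as np

# hypothetical scalar example: a(t) = -1, b(t) = 1, u(t) = 1, t0 = 0
a, x0 = -1.0, 5.0

def x(t):
    # proposed solution: x(t) = phi(t,0) x0 + integral_0^t phi(t,tau) b u dtau
    # with phi(t, s) = exp(a (t - s)), the integral is (1 - exp(a t)) / (-a)
    return np.exp(a * t) * x0 + (1.0 - np.exp(a * t)) / (-a)

# 1) initial condition check: x(t0) = x0
assert np.isclose(x(0.0), x0)

# 2) DE check: x_dot(t) should equal a x(t) + b u(t) (central difference)
t, h = np.linspace(0.1, 3.0, 50), 1e-6
x_dot = (x(t + h) - x(t - h)) / (2 * h)
assert np.allclose(x_dot, a * x(t) + 1.0, atol=1e-6)
```

The same two checks, done symbolically instead of numerically, are exactly what the proof requires.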
Theorem: Existence Proof Warmup #2
To develop additional intuition, let's now assume that everything is time-invariant, i.e. A(t) ≡ A and B(t) ≡ B. Our state-update equation becomes

    ẋ(t) = Ax(t) + Bu(t).

Let

    Φ(t, s) := Σ_{k=0}^{∞} A^k (t − s)^k / k!.

What is Φ(s, s)? What is (d/dt)Φ(t, s)?
Theorem: Existence Proof Warmup #2
Note that Φ(t, s) = Σ_{k=0}^{∞} A^k (t − s)^k / k! exists for any A ∈ R^{n×n}, t ∈ R, and s ∈ R. Moreover, Φ(t, s) satisfies its own differential equation:

    (d/dt)Φ(t, s) = AΦ(t, s)   with   Φ(s, s) = I_n.

Now let's try the following solution to the time-invariant state-update differential equation with initial state condition x(t0):

    x(t) = Φ(t, t0)x(t0) + ∫_{t0}^{t} Φ(t, τ)Bu(τ) dτ,   ∀t ∈ R.

To see that this is indeed a solution, we need to confirm two things:
1. Does our solution satisfy the initial condition requirement of the state-update DE?
2. Does our solution really solve the state-update DE?
Theorem: Existence Proof for General Case
For the general (non-scalar, time-varying) case, we propose the solution

    x(t) = Φ(t, t0)x(t0) + ∫_{t0}^{t} Φ(t, τ)B(τ)u(τ) dτ   (3)

where the state transition matrix satisfies the matrix differential equation

    (d/dt)Φ(t, s) = A(t)Φ(t, s)   with   Φ(s, s) = I_n.   (4)

Note that (4) is consistent with our two warmup cases.

To complete the existence proof, we need to:
1. Show that (3) with Φ(t, s) defined according to (4) satisfies the initial condition requirement of the state-update DE.
2. Show that (3) with Φ(t, s) defined according to (4) is indeed a solution to the state-update DE.
3. Show that there always exists a solution to the matrix DE (4).
Theorem: Existence Proof for General Case: Part 1
Show that (3) with Φ(t, s) defined according to (4) satisfies the initial condition requirement of the state-update DE.
Theorem: Existence Proof for General Case: Part 2
Show that (3) with Φ(t, s) defined according to (4) is indeed a solution to the state-update DE.
Theorem: Existence Proof for General Case: Part 3
Show that there always exists a solution to the matrix DE (4).
Peano-Baker Series Example
    A(t) = [ 0  0
             t  0 ]
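A worked version of this example (my sketch; the slide leaves the computation blank): since A(τ₁)A(τ₂) = 0 for all τ₁, τ₂, every Peano-Baker term with k ≥ 2 vanishes and the series terminates after the k = 1 term:

```latex
\Phi(t,s) = I + \int_s^t A(\tau)\, d\tau
          = I + \int_s^t \begin{bmatrix} 0 & 0 \\ \tau & 0 \end{bmatrix} d\tau
          = \begin{bmatrix} 1 & 0 \\ \tfrac{t^2 - s^2}{2} & 1 \end{bmatrix}
```

Note that Φ(s, s) = I and (d/dt)Φ(t, s) = [0 0; t 0]Φ(t, s), as required by (4).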
Fundamental Matrix Method
While the Peano-Baker series establishes existence (and thus concludes the proof of the existence and uniqueness theorem), it is sometimes easier to find Φ(t, s) via the "fundamental matrix method" (Chen Section 4.5).
Basic idea:
1. Consider the continuous-time DE with x(t) ∈ R^n:

       ẋ(t) = A(t)x(t)   (5)

2. Choose n different initial conditions x1(t0), ..., xn(t0). These n initial condition vectors must be linearly independent.
3. These n different initial conditions lead to n different solutions to (5). Call these solutions x1(t), ..., xn(t) and put them into a matrix X(t) = [x1(t), ..., xn(t)] ∈ R^{n×n}.
4. Note that Ẋ(t) = A(t)X(t). The quantity X(t) is called a fundamental matrix of (5). Is the fundamental matrix unique?
Fundamental Matrix Method
Let X(t) be any fundamental matrix of (5). Note that X(t) is invertible for all t (see Chen p. 107). The state transition matrix Φ(t, s) can then be computed as

    Φ(t, s) = X(t)X^{-1}(s).

Check:

    Φ(s, s) = X(s)X^{-1}(s) = I_n
    (d/dt)Φ(t, s) = Ẋ(t)X^{-1}(s) = A(t)X(t)X^{-1}(s) = A(t)Φ(t, s)
Fundamental Matrix Example
    A(t) = [ 0  0
             t  0 ]
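One possible worked version (my sketch; the slide leaves room for in-class work): with t0 = 0 and the linearly independent initial conditions x1(0) = e1, x2(0) = e2, the component equations ẋ1 = 0 and ẋ2 = t·x1 integrate to give

```latex
X(t) = \begin{bmatrix} 1 & 0 \\ \tfrac{t^2}{2} & 1 \end{bmatrix}, \qquad
X^{-1}(s) = \begin{bmatrix} 1 & 0 \\ -\tfrac{s^2}{2} & 1 \end{bmatrix}, \qquad
\Phi(t,s) = X(t)X^{-1}(s) = \begin{bmatrix} 1 & 0 \\ \tfrac{t^2 - s^2}{2} & 1 \end{bmatrix}
```

which agrees with what the Peano-Baker series gives for this same A(t).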
Remarks on the CT State-Transition Matrix Φ(t, s)
1. There are many ways to compute Φ(t, s). Some are easier than others, but computing Φ(t, s) is almost always difficult.
2. Do different methods for computing Φ(t, s) lead to different solutions?
3. Unlike the DT-STM Φ[k, j], the CT-STM Φ(t, s) is defined for any (t, s) ∈ R². This means that we can specify an initial state x(t0) and compute the system response at times prior to t0.
4. It is easy to show that Φ(t, s) possesses the semigroup property, i.e.

       Φ(t, τ) = Φ(t, s)Φ(s, τ)

   for any (t, τ, s) ∈ R³, from the fundamental matrix formulation:

       Φ(t, s)Φ(s, τ) = X(t)X^{-1}(s)X(s)X^{-1}(τ) = X(t)X^{-1}(τ) = Φ(t, τ)
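The semigroup property is easy to check numerically. As a sketch (my example, not from the slides), take the constant nilpotent matrix A = [0 0; 1 0]; since A² = 0, the state transition matrix has the closed form Φ(t, s) = I + (t − s)A:

```python
import numpy as np

A = np.array([[0.0, 0.0],
              [1.0, 0.0]])  # A @ A == 0, so the exponential series terminates

def Phi(t, s):
    # state transition matrix for this constant A with A^2 = 0:
    # exp{(t-s)A} = I + (t-s)A
    return np.eye(2) + (t - s) * A

t, s, tau = 2.0, 0.7, -1.3

# semigroup property: Phi(t, tau) = Phi(t, s) Phi(s, tau) for ANY intermediate s
assert np.allclose(Phi(t, tau), Phi(t, s) @ Phi(s, tau))
# consistency checks: Phi(s, s) = I and d/dt Phi = A Phi (finite difference)
assert np.allclose(Phi(s, s), np.eye(2))
h = 1e-6
dPhi = (Phi(t + h, s) - Phi(t - h, s)) / (2 * h)
assert np.allclose(dPhi, A @ Phi(t, s), atol=1e-6)
```

Note that s and τ can be chosen freely, including values before and after t, consistent with remark 3 above.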
Important Special Case: A(t) ≡ A
When A(t) ≡ A, the state-transition matrix Peano-Baker series becomes

    Φ(t, s) = Σ_{k=0}^{∞} M_k(t, s)

            = Σ_{k=0}^{∞} ∫_s^t ∫_s^{τ1} ··· ∫_s^{τ_{k−1}} A·A···A dτ_k ··· dτ_1   (k-fold product of A)

            = Σ_{k=0}^{∞} A^k ∫_s^t ∫_s^{τ1} ··· ∫_s^{τ_{k−1}} dτ_k ··· dτ_1

To compute M_k(t, s), let's look at k = 0, 1, 2, ... to see the pattern...
Important Special Case: A(t) ≡ A
By induction, we can show that

    M_k(t, s) = A^k (t − s)^k / k!

hence

    Φ(t, s) = Σ_{k=0}^{∞} A^k (t − s)^k / k!

which is consistent with our earlier result (warmup #2).

Suppose, for x ∈ C, we have

    f(x) = Σ_{k=0}^{∞} x^k / k!

What is f(x)?
Matrix Exponential
Definition (Matrix Exponential)
Given W ∈ C^{n×n}, the matrix exponential is defined as

    exp(W) = Σ_{k=0}^{∞} W^k / k!

Note that the matrix exponential is not performed element-by-element, i.e.

    exp([ w11 w12       [ e^{w11} e^{w12}
          w21 w22 ])  ≠   e^{w21} e^{w22} ]

Matlab has a special function (expm) that computes matrix exponentials. Calling exp(W) will not give the same results as expm(W).
Important Special Case: A(t) ≡ A
Putting it all together, when A(t) ≡ A, we can say that

    Φ(t, s) = Σ_{k=0}^{∞} A^k (t − s)^k / k! = exp{(t − s)A}.

Then the solution to the LTI continuous-time state-update DE is

    x(t) = exp{(t − t0)A} x(t0) + ∫_{t0}^{t} exp{(t − τ)A} B(τ)u(τ) dτ

and the output equation is

    y(t) = C(t) exp{(t − t0)A} x(t0) + C(t) ∫_{t0}^{t} exp{(t − τ)A} B(τ)u(τ) dτ + D(t)u(t)
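As a sanity check on the zero-input part of this solution (a sketch with a made-up A and u ≡ 0): for A = [0 1; −1 0] the matrix exponential is a rotation, exp{tA} = [cos t, sin t; −sin t, cos t], and x(t) = exp{(t − t0)A} x(t0) should satisfy ẋ = Ax:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def Phi(t):
    # closed-form exp{tA} for this particular A (a rotation matrix)
    return np.array([[np.cos(t), np.sin(t)],
                     [-np.sin(t), np.cos(t)]])

x0, t0 = np.array([2.0, -1.0]), 0.5

def x(t):
    # zero-input part of the LTI solution: x(t) = exp{(t - t0) A} x(t0)
    return Phi(t - t0) @ x0

# initial condition is met...
assert np.allclose(x(t0), x0)
# ...and x_dot(t) = A x(t) along the whole trajectory (central difference)
h = 1e-6
for t in np.linspace(-2.0, 4.0, 25):
    x_dot = (x(t + h) - x(t - h)) / (2 * h)
    assert np.allclose(x_dot, A @ x(t), atol=1e-5)
```

Note the check runs over t < t0 as well: the CT solution is defined at times before the initial time.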
Contrast/Comparison Between CT and DT Solutions
Similarities
◮ CT and DT solutions have the same "look".
◮ CT and DT solutions have state transition matrices with the same intuitive properties, e.g. the semigroup property.
Differences
◮ In DT systems, x[k] is only defined for k ≥ k0 because the DT-STM Φ[k, k0] is only defined for k ≥ k0.
◮ In CT systems, x(t) is defined for all t ∈ R because the CT-STM Φ(t, t0) is defined for all (t, t0) ∈ R².
◮ We didn't prove this, but the CT-STM Φ(t, t0) is always invertible. This is not true of the DT-STM Φ[k, k0].
What We Know
◮ We know how to solve discrete-time LTV and LTI systems. "Solve" means "write an analytical expression for x[k] and y[k] given A[k], B[k], C[k], D[k], and x[k0]".
◮ We know that solutions must exist and must be unique.
◮ We know how to solve continuous-time LTV and LTI systems. "Solve" means "write an analytical expression for x(t) and y(t) given A(t), B(t), C(t), D(t), and x(t0)".
◮ We know that solutions must exist and must be unique.
◮ We also know two ways to compute the state transition matrix.
◮ We know some of the properties of state transition matrices.
◮ We know differences between the DT-STM and the CT-STM.
Where We Are Heading
Our focus is going to shift primarily to LTI systems for a little while.
Recall that, when A is not a function of time, the state transition matrices become

    Φ[k, j] = A^{k−j}        (discrete time)
    Φ(t, s) = exp{(t − s)A}  (continuous time)

We would like to be able to better analyze these matrix functions in order to, for example, efficiently compute A^{k−j}.
We are going to need to learn some more linear algebra first...
Sets and Subspaces
Let A and B be sets.
◮ A ⊂ B means that all elements of the set A are also in the set B.
◮ x ∈ A means that x is an element of the set A.
◮ A ⊂ B and x ∈ A implies that x ∈ B.
Definition
S ⊂ R^n is a subspace if and only if S is closed under addition and scalar multiplication, i.e.

    x ∈ S and y ∈ S ⇒ x + y ∈ S

and

    x ∈ S and α ∈ R ⇒ αx ∈ S.
Note that subspaces must always include the zero vector.
Spanning Set of a Subspace
Definition
A spanning set for the subspace S ⊂ R^n is a set of vectors s1, ..., sp, each in S, such that every element of S can be expressed as a linear combination of the vectors s1, ..., sp, i.e.

    x ∈ S ⇒ there exist α1, ..., αp such that x = α1·s1 + ··· + αp·sp

where αi ∈ R for i = 1, ..., p.

Example: Suppose S is the xy-plane in R^3. Which of the following are spanning sets?

    { [1 0 0]^⊤ }
    or { [1 0 0]^⊤, [0 1 0]^⊤ }
    or { [1 0 0]^⊤, [0 1 0]^⊤, [0 0 1]^⊤ }
    or { [1 0 0]^⊤, [1 1 0]^⊤ }
    or { [1 0 0]^⊤, [1 1 0]^⊤, [0 1 0]^⊤ }
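One way to answer these numerically (a sketch using numpy, not part of the slides): a set spans the xy-plane iff every vector lies in the plane (third coordinate zero, so each vector is in S) and the stacked vectors have rank 2:

```python
import numpy as np

def spans_xy_plane(vectors):
    """Check whether `vectors` (a list of 3-vectors) is a spanning set for the
    xy-plane in R^3: all vectors must lie in the plane, and their span must
    be 2-dimensional."""
    V = np.array(vectors, dtype=float)
    in_plane = np.all(V[:, 2] == 0)           # each vector must be in S
    full = np.linalg.matrix_rank(V) == 2      # span must be the whole plane
    return bool(in_plane and full)

e1, e2, e3, v = [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]

assert spans_xy_plane([e1]) is False          # too few directions
assert spans_xy_plane([e1, e2]) is True
assert spans_xy_plane([e1, e2, e3]) is False  # e3 is not in the plane
assert spans_xy_plane([e1, v]) is True
assert spans_xy_plane([e1, v, e2]) is True    # redundant but still spanning
```

The third set fails because a spanning set's vectors must themselves belong to S, even though e1, e2, e3 together span all of R³.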
Some Facts About Subspaces, Spanning Sets, and Bases
1. Every subspace S ⊂ R^n possesses a linearly independent spanning set. Such a set is called a basis for S. This basis is not unique, of course.
2. The number of vectors in any basis for S is the same. This number is called the dimension of S. We use the notation dim(S) to denote the dimension of a subspace.
3. If S is a subspace of R^n, then dim(S) ≤ n with equality if and only if S = R^n.
4. Any spanning set for S contains at least dim(S) vectors.
5. Any set with elements from S containing more than dim(S) vectors is linearly dependent.
6. A basis is a minimally-sized spanning set of S.
7. A basis is a maximally-sized linearly independent set of vectors in S.
Nullspace and Range
Given W ∈ R^{m×n} (not necessarily square), there are two important subspaces related to this matrix.

Definition
The nullspace of W is defined as the set of all x ∈ R^n such that Wx = 0. We denote this subspace of R^n as null(W).

Definition
The range of W is defined as the set of all y ∈ R^m such that there exists an x satisfying Wx = y. We denote this subspace of R^m as range(W).

The range is also sometimes called the "column space" because it is the subspace generated by linear combinations of the columns of W.

Note that both subspaces always include the zero vector.
Nullspace
The matrix W ∈ R^{m×n} maps vectors from R^n to R^m. The nullspace of W is a subspace of R^n.

[Figure: W mapping null(W) ⊂ R^n to the zero vector in R^m]
Range
The matrix W ∈ R^{m×n} maps vectors from R^n to R^m. The range of W is a subspace of R^m.

[Figure: W mapping R^n onto range(W) ⊂ R^m]
Nullspace and Range Examples
Suppose

    W = [ 1  1
          1  1
          1  1 ]   (6)

    W = [ 1  1  1
          1  1  1 ]   (7)

    W = [ 1  1
          0  1 ]   (8)

What is the nullspace and range of W in each case?
Existence and Uniqueness of Solutions to Ax = b
For any A ∈ R^{m×n} and b ∈ R^m, a solution to Ax = b exists if and only if b ∈ range(A).

For any A ∈ R^{m×n} and b ∈ R^m, a solution to Ax = b is unique if and only if dim(null(A)) = 0.
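Both conditions can be tested numerically with ranks (a sketch, my example matrices): b ∈ range(A) iff appending b as a column does not increase the rank, and nullity is n − rank(A):

```python
import numpy as np

def solution_status(A, b):
    # existence: b is in range(A) iff rank([A | b]) == rank(A)
    # uniqueness: dim(null(A)) = n - rank(A) must equal 0
    A = np.atleast_2d(A).astype(float)
    b = np.asarray(b, dtype=float)
    rank_A = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
    exists = bool(rank_Ab == rank_A)
    unique = bool(A.shape[1] - rank_A == 0)
    return exists, unique

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])        # rank 2, nullity 0

assert solution_status(A, [1.0, 2.0, 3.0]) == (True, True)   # b = col1 + 2 col2
assert solution_status(A, [1.0, 2.0, 0.0]) == (False, True)  # b not in range(A)
```

Here "unique" reports whether a solution would be unique if one exists, which is exactly the nullity-zero condition.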
Gaussian Elimination and Echelon Form
◮ GE is an algorithm for reducing a matrix to echelon form.
◮ Once you have a matrix in echelon form, you can easily determine its range and the dimension of its nullspace.
◮ This allows you to easily answer questions about the existence and uniqueness of solutions to Ax = b.

1. Form the "augmented matrix" U = [A | b] ∈ R^{m×(n+1)}.
2. Notation: U(k, :) is the kth row of U and U(k, j) is the (k, j)th element of U.
3. Force U(2, 1) = 0 by forming an appropriate combination of other rows and subtracting this combination from U(2, :).
4. Force U(3, 1) = U(3, 2) = 0 using the same technique.
5. Keep doing this until you have an upper triangular matrix.
6. You can now solve the last row since it has only one unknown.
7. Back-substitute your answer and solve the second-to-last row.
8. Keep doing this until you solve the top row.
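The steps above can be sketched in code for the square invertible case (a simplified forward elimination with partial pivoting plus back substitution; illustrative only, since in practice you would call numpy.linalg.solve):

```python
import numpy as np

def gaussian_elimination(A, b):
    """Solve Ax = b for square invertible A by reduction to upper-triangular
    (echelon) form followed by back substitution."""
    U = np.column_stack([np.asarray(A, float), np.asarray(b, float)])  # [A | b]
    n = U.shape[0]
    # forward elimination: zero out everything below each pivot
    for col in range(n):
        pivot = col + np.argmax(np.abs(U[col:, col]))  # partial pivoting
        U[[col, pivot]] = U[[pivot, col]]
        for row in range(col + 1, n):
            U[row] -= (U[row, col] / U[col, col]) * U[col]
    # back substitution: the last row has one unknown, then work upward
    x = np.zeros(n)
    for row in range(n - 1, -1, -1):
        x[row] = (U[row, -1] - U[row, row + 1:n] @ x[row + 1:]) / U[row, row]
    return x

A = [[2.0, 1.0], [1.0, 3.0]]
b = [3.0, 5.0]
x = gaussian_elimination(A, b)
assert np.allclose(x, np.linalg.solve(A, b))
```

Partial pivoting (the row swap) is a numerical-stability refinement of step 3; the slide's plain elimination corresponds to taking pivot = col.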
Gaussian Elimination and Echelon Form Examples
Using the Echelon Form to Determine a Basis for range(A)
The pivot columns of A form a basis for the range of A. Note that the echelon form matrix tells you which columns to pick from A. Example:

    A = [  1  2 −1  3  0
          −1 −2  2 −2 −1
           1  2  0  4  0
           0  0  2  2 −1 ]   and   b = [ 1
                                         1
                                         6
                                         7 ]   (9)

After reduction of U = [A | b] to echelon form, we have

    [ 1  2 −1  3  0 | 1
      0  0  1  1 −1 | 2
      0  0  0  0  1 | 3
      0  0  0  0  0 | 0 ]   (10)

The pivot columns here are the first, third, and fifth. Continued...
Using the Echelon Form to Determine a Basis for range(A)
Hence a basis for the range of A is

    [  1      [ −1      [  0
      −1        2        −1
       1        0         0
       0 ],     2 ],     −1 ]

You should be able to verify that these vectors are linearly independent.

The range of A is the subspace formed by all linear combinations of these vectors. Solutions to Ax = b exist only when b ∈ range(A).
Using the Echelon Form to Determine dim(null(A))
The dimension of the nullspace of A (also called the nullity) is simply the number of non-pivot columns in the echelon form.

You can also use the echelon form to find a basis for null(A) (see any good linear algebra textbook for the details). In our example, a basis for the nullspace is

    [ −2      [ −4
       1         0
       0        −1
       0         1
       0 ],      0 ]

You should be able to verify that these vectors are linearly independent and that Ax = 0 if x is any linear combination of these basis vectors.

Most importantly, solutions to Ax = b are unique only when dim(null(A)) = 0.
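Both claims from this example are easy to verify numerically (a sketch checking the slide's matrices with numpy):

```python
import numpy as np

A = np.array([[1, 2, -1, 3, 0],
              [-1, -2, 2, -2, -1],
              [1, 2, 0, 4, 0],
              [0, 0, 2, 2, -1]], dtype=float)

# the claimed nullspace basis vectors, stacked as columns
N = np.array([[-2, -4],
              [1, 0],
              [0, -1],
              [0, 1],
              [0, 0]], dtype=float)

# A maps every nullspace basis vector (hence any combination) to zero
assert np.allclose(A @ N, 0)
# the two basis vectors are linearly independent
assert np.linalg.matrix_rank(N) == 2
# rank-nullity: pivot columns (rank 3) + non-pivot columns (nullity 2) = n = 5
assert np.linalg.matrix_rank(A) == 3
assert np.linalg.matrix_rank(A) + 2 == A.shape[1]
```

The rank-plus-nullity identity here is exactly the pivot/non-pivot column count from the echelon form.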
Rank
Definition
The rank of W is defined as the dimension of the range of W, i.e.

    rank(W) := dim(range(W)).

Some useful facts:
◮ For W ∈ R^{m×n}, 0 ≤ rank(W) ≤ min{m, n}.
◮ rank(W) is equal to the number of pivot columns in the echelon form of W.
◮ Since dim(null(W)) is equal to the number of non-pivot columns in the echelon form, rank + nullity must equal n.
◮ 0 ≤ rank(UW) ≤ min{rank(U), rank(W)}. In other words, matrix multiplication cannot increase rank.
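The last fact is easy to see numerically (a sketch with made-up matrices, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

# U is 3x5 with one dependent row, so rank(U) = 2; W = I has rank 5
U = rng.standard_normal((3, 5))
U[2] = U[0] + U[1]              # force a dependent row
W = np.eye(5)

assert np.linalg.matrix_rank(U) == 2
assert np.linalg.matrix_rank(U @ W) <= min(np.linalg.matrix_rank(U),
                                           np.linalg.matrix_rank(W))

# multiplying by a rank-1 matrix crushes the rank even further
W1 = np.outer(rng.standard_normal(5), rng.standard_normal(5))  # rank 1
assert np.linalg.matrix_rank(U @ W1) <= 1
```

Multiplying by the identity leaves the rank at 2; multiplying by the rank-1 matrix drops it to at most 1, never above min{rank(U), rank(W)}.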
Matrix Transpose
Definition
Given

    W = [ w11 w12 ... w1n
          w21 w22 ... w2n
           .   .       .
          wm1 wm2 ... wmn ] ∈ R^{m×n}

the transpose of W is given as

    W^⊤ = [ w11 w21 ... wm1
            w12 w22 ... wm2
             .   .       .
            w1n w2n ... wmn ] ∈ R^{n×m}.
An Important Property of the Matrix Transpose
For any A ∈ R^{m×p} and B ∈ R^{p×n}, the product C = AB is an m × n real-valued matrix. The transpose of C is

    C^⊤ = (AB)^⊤ = B^⊤ A^⊤ ∈ R^{n×m}.

Note that the order of the matrix product has been changed by the transpose. Do the matrix dimensions agree?
Invertibility of Square Matrices
Definition
Given W ∈ R^{n×n}, we say that W is invertible if there exists V ∈ R^{n×n} such that VW = WV = I_n. The quantity V is called the matrix inverse of W and we use the notation V = W^{-1}.

The matrix inverse does not always exist, but when it does, it is unique.

Fact: If W is invertible, then W^⊤ is also invertible. To see this, just use what you know about the matrix inverse and the matrix transpose:

    W W^{-1} = I_n
    (W W^{-1})^⊤ = I_n^⊤
    (W^{-1})^⊤ W^⊤ = I_n

and, by the definition, (W^{-1})^⊤ = (W^⊤)^{-1}.
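Both transpose facts are easy to confirm numerically (a sketch with made-up matrices):

```python
import numpy as np

W = np.array([[2.0, 1.0],
              [1.0, 3.0]])      # invertible: det = 5 != 0

W_inv = np.linalg.inv(W)

# the inverse really is a two-sided inverse
assert np.allclose(W @ W_inv, np.eye(2))
assert np.allclose(W_inv @ W, np.eye(2))

# (W^{-1})^T equals (W^T)^{-1}
assert np.allclose(W_inv.T, np.linalg.inv(W.T))

# and the product rule for transposes: (AB)^T = B^T A^T
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 4.0]])          # 2x3
B = np.array([[1.0], [2.0], [3.0]])      # 3x1
assert np.allclose((A @ B).T, B.T @ A.T)
```

Note the dimension check in the last line: AB is 2×1, so (AB)^⊤ is 1×2, matching B^⊤A^⊤ = (1×3)(3×2).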
Invertibility of Square Matrices: Equivalences
The following statements are equivalent:
1. W is invertible.
2. The only x ∈ R^n satisfying Wx = 0 is x = 0.
3. For every b ∈ R^n, there exists a unique x ∈ R^n solving Wx = b.
4. The echelon form of W has no rows composed of all zeros.
5. det(W) ≠ 0.
6. rank(W) = n.
Proofs...
Conclusions
1. Existence and uniqueness of solutions to CT and DT systems.
2. Proofs were constructive: you can now "solve" these systems.
3. Linear algebra tools to lay the foundation for analysis of A^k and exp{At}:
   ◮ Subspaces
   ◮ Nullspace and range
   ◮ Rank
   ◮ Matrix invertibility equivalences