
MASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Civil and Environmental Engineering

1.731 Water Resource Systems

Lecture 2, Linear Algebra Review, Sept. 12, 2006

The notation and some of the basic concepts of linear algebra are needed in optimization theory, which is concerned with large systems of equations in many variables. You should learn or review the following topics. See any introductory linear algebra text for details.

Indicial notation:

Vector: x = [x_1, x_2, …, x_n] → x_i

Matrix:

A = \begin{bmatrix} A_{11} & \cdots & A_{1n} \\ \vdots & \ddots & \vdots \\ A_{m1} & \cdots & A_{mn} \end{bmatrix} → A_{ij}

Matrix transpose and symmetry: MATLAB operator '

A^T → A^T_{ij} = A_{ji}

A is symmetric if A^T = A (no change if rows and columns are interchanged).

Vector & matrix operations and products: MATLAB operators + and *

Vector sum: z = x + y → z_i = x_i + y_i
Matrix sum: C = A + B → C_{ij} = A_{ij} + B_{ij}
Scalar multiplication: z = ax → z_i = a x_i ;  B = aA → B_{ij} = a A_{ij}
Scalar product: z = x^T y → z = x_j y_j ;  x and y are orthogonal if x^T y = 0
Matrix-vector product (implied sum over repeated index j): y = Ax → y_i = A_{ij} x_j
Matrix product (implied sum over repeated index j): C = AB → C_{ik} = A_{ij} B_{jk}
Quadratic form (implied sum over repeated indices i, j): q = x^T A x → q = x_i A_{ij} x_j
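As an illustration (not part of the original notes), the MATLAB operators cited above can be exercised on small, arbitrary arrays:

```matlab
% Minimal sketch of the vector/matrix operators listed above (arbitrary example data).
x = [1; 2; 3];            % column vector
y = [4; 5; 6];
A = [1 2 3; 4 5 6];       % 2 x 3 matrix

z  = x + y;               % vector sum:        z_i = x_i + y_i
B  = 2*A;                 % scalar multiple:   B_ij = a*A_ij
s  = x'*y;                % scalar product:    s = x_j*y_j (x and y orthogonal if s == 0)
v  = A*x;                 % matrix-vector:     v_i = A_ij*x_j
C  = A*A';                % matrix product:    C_ik = A_ij*(A')_jk
q  = x'*(A'*A)*x;         % quadratic form q = x^T (A^T A) x, a scalar
At = A';                  % transpose, MATLAB operator '
```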



Systems of linear equations: m equations, n unknowns

Ax = b → A_{ij} x_j = b_i   (i = 1,…,m ; j = 1,…,n)

Row echelon form of a matrix

This form is convenient for analyzing and solving systems of linear equations. A is in row echelon form if:

• All rows with non-zero entries are above rows containing only zeros
• The leading (non-zero) entry of each row is to the right of the leading entry in the row above it
• All entries below a leading coefficient are zero

A is row echelon:

A = \begin{bmatrix} 1 & 2 & 5 \\ 0 & 1 & 9/2 \end{bmatrix}

A is not row echelon:

A = \begin{bmatrix} 1 & 2 & 5 \\ 3 & 4 & 6 \end{bmatrix}

Reduced row echelon form of a matrix: MATLAB function rref(A)

Add the requirements that:

• All leading entries = 1
• All entries above and below leading entries = 0

A is reduced row echelon:

A = \begin{bmatrix} 1 & 0 & -4 \\ 0 & 1 & 9/2 \end{bmatrix}

Using elementary row operations to derive row and reduced row echelon forms:

• Elementary row operations → replace a given row with a weighted sum of any two rows.

• Carry out elementary row operations starting from the top row down to meet the requirements for row echelon form.

A = \begin{bmatrix} 1 & 2 & 5 \\ 3 & 4 & 6 \end{bmatrix} → \begin{bmatrix} 1 & 2 & 5 \\ 0 & 1 & 9/2 \end{bmatrix}        (3/2) r_1 + (-1/2) r_2 → r_2

• Carry out additional elementary row operations starting from the top row down to meet the requirements for reduced row echelon form.

A = \begin{bmatrix} 1 & 2 & 5 \\ 0 & 1 & 9/2 \end{bmatrix} → \begin{bmatrix} 1 & 0 & -4 \\ 0 & 1 & 9/2 \end{bmatrix}        r_1 + (-2) r_2 → r_1
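As a check, the rref function cited above reproduces the reduced row echelon form of this example (a minimal, illustrative sketch):

```matlab
% Reduced row echelon form of the example matrix via MATLAB's rref.
A = [1 2 5; 3 4 6];
R = rref(A)
% Expected result (matches the hand calculation above):
% R = [1 0 -4.0;
%      0 1  4.5]
```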

Gaussian elimination: MATLAB operator x = A\b

This is a procedure for solving systems of linear equations by applying a series of elementary row operations:

• Augment A by appending b as the last column to obtain [A | b]
• Put [A | b] in reduced row echelon form



• The solution x is the last column of the reduced row echelon matrix

Ax = b ,   A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} ,   b = \begin{bmatrix} 5 \\ 6 \end{bmatrix}

[A | b] = \begin{bmatrix} 1 & 2 & 5 \\ 3 & 4 & 6 \end{bmatrix} → \begin{bmatrix} 1 & 0 & -4 \\ 0 & 1 & 9/2 \end{bmatrix}   →   x = \begin{bmatrix} -4 \\ 9/2 \end{bmatrix}
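A short MATLAB check of this worked example using the backslash operator named in the heading (illustrative only):

```matlab
% Solve Ax = b for the worked example using the backslash operator.
A = [1 2; 3 4];
b = [5; 6];
x = A\b
% Expected: x = [-4.0; 4.5], i.e. [-4; 9/2], matching the hand solution.
```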

Matrix rank: MATLAB function rank(A)

The rank of a matrix is the number of non-zero rows in its row echelon form:

A = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \end{bmatrix} → \begin{bmatrix} 1 & 2 & 3 \\ 0 & 0 & 0 \end{bmatrix}        Rank = number of non-zero rows = 1
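The rank function named above confirms this example (illustrative):

```matlab
% Rank of the example matrix; the second row is twice the first, so rank is 1.
A = [1 2 3; 2 4 6];
r = rank(A)      % expected: r = 1
```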

Consistency and uniqueness

• A system of linear equations Ax = b is consistent if Rank(A) = Rank([A | b]) (see the MATLAB sketch after this list).
• The m by n homogeneous system Ax = 0 always has the trivial solution x = 0.
• An m by n consistent non-homogeneous system Ax = b has a unique solution if Rank(A) = n = number of unknowns.
• An m by n consistent non-homogeneous system Ax = b has a non-trivial, non-unique solution if Rank(A) = r < n = number of unknowns. The number of free parameters in the solution is n - r.
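A minimal MATLAB sketch of these consistency and uniqueness checks; the matrix and right-hand side below are arbitrary examples, not from the notes:

```matlab
% Consistency: compare rank(A) with rank of the augmented matrix [A b].
A = [1 2 3; 2 4 6];          % rank 1 (rows are dependent)
b = [1; 2];                  % consistent right-hand side (b2 = 2*b1)
consistent = (rank(A) == rank([A b]))   % true: system is consistent
unique_sol = (rank(A) == size(A, 2))    % false: rank 1 < 3 unknowns,
                                        % so solutions have 3 - 1 = 2 free parameters
```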

Determinant of a square matrix: MATLAB function det(A)

The determinant |A| of A is a scalar matrix property useful for solving eigen problems. The determinant can be evaluated from the row echelon form (which is upper triangular for a square matrix). Apply the following rules:

• If an elementary row operation that transforms A to B has the form c_i r_i + c_j r_j → r_j, then |A| = |B| / c_j.
• |A| = product of the leading (diagonal) terms of the final row echelon form.

In this example the row echelon form is produced by an elementary row operation that replaces row 2 using c_2 = 1:

A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} → \begin{bmatrix} 1 & 2 \\ 0 & -2 \end{bmatrix}        (-3) r_1 + (1) r_2 → r_2 ,  c_2 = 1

|A| = (1)(-2) / c_2 = (-2)/1 = -2

In this example the row echelon form is produced by an elementary row operation that replaces row 2 using c_2 = -1/2:

A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} → \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}        (3/2) r_1 + (-1/2) r_2 → r_2 ,  c_2 = -1/2

|A| = (1)(1) / c_2 = (1)/(-1/2) = -2
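The det function named above confirms this value (illustrative):

```matlab
% Determinant of the 2x2 example matrix.
A = [1 2; 3 4];
d = det(A)    % expected: d = -2, matching both hand calculations above
```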

Inverse of a square matrix: MATLAB function inv(A)

A A^{-1} = A^{-1} A = I → A_{ij} [A^{-1}]_{jk} = δ_{ik}

Find A^{-1} by solving A[A^{-1}] = I using Gaussian elimination (with A given and each column of A^{-1} considered to be an unknown vector). If Rank(A) = r < n then |A| = 0 and the matrix is singular and has no inverse.
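An illustrative MATLAB sketch of both routes to the inverse, the Gaussian-elimination procedure described above and the inv function named in the heading:

```matlab
% Inverse of the 2x2 example matrix, two ways.
A   = [1 2; 3 4];
Ai1 = inv(A);                    % direct inverse
R   = rref([A eye(2)]);          % Gaussian elimination on [A | I]
Ai2 = R(:, 3:4);                 % right block of the reduced form is A^-1
% Both give [-2 1; 1.5 -0.5]; A*Ai1 recovers the identity matrix.
```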

Linear Dependence/Independence

A linear combination of a set of m n-dimensional vectors x_{i1}, …, x_{im} is:

a_j x_{ij} = \sum_{j=1}^{m} a_j x_{ij}

where x_{ij} is an n by m matrix whose columns are the x_{i1}, …, x_{im} vectors.

The vectors x_{i1}, …, x_{im} are linearly independent if a_j x_{ij} = 0 has only the trivial solution a_j = 0. If the a_j's can be non-zero, the vectors are linearly dependent.

Linear Vector Spaces, Subspaces, and Projections

The set of all possible n-dimensional vectors [x_1, x_2, …, x_n] forms a linear vector space V_n since it is closed under vector addition and scalar multiplication (i.e. any vector a_1 x_{i1} + a_2 x_{i2} is in V_n if the vectors x_{i1} and x_{i2} are in V_n).

Consider the n by m matrix A_{ij} with columns consisting of the m vectors A_{i1}, …, A_{im} from V_n. The set of vectors that are linear combinations of these m column vectors forms a linear vector space V_m which is a subspace (subset) of V_n. The subspace V_m is spanned by the A_{i1}, …, A_{im}. If the m spanning vectors are linearly independent they form a basis for V_m. The dimension of V_m is the rank of A_{ij}, which will be equal to m if the spanning vectors are linearly independent and form a basis. If m = n = Rank(A) then V_m = V_n.

If the m columns of A_{ij} are linearly independent, the n - m linearly independent solutions of the system A_{ij} x_i = 0 (implied sum over i) form a basis for an (n - m)-dimensional subspace V_{n-m} of V_n. V_{n-m} is the null space of V_m and vice versa. Every vector in V_{n-m} is orthogonal to every vector in V_m. Each basis vector of V_m may be viewed as the normal vector to an (n - 1)-dimensional hyperplane. The intersection of all m such hyperplanes is an (n - m)-dimensional hyperplane that contains all vectors in the null space V_{n-m}.
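A minimal MATLAB illustration of spanning columns, rank, and the null space; the matrix below is an arbitrary example (null and rank are standard MATLAB functions):

```matlab
% Columns of A span a 2-dimensional subspace V_m of V_n with n = 3, m = 2.
A = [1 0; 0 1; 1 1];              % 3 x 2, columns are linearly independent
m = rank(A)                        % expected: 2, so the columns form a basis for V_m
N = null(A')                       % basis for the (n - m)-dimensional null space:
                                   % solves A_ij*x_i = 0, i.e. A'*x = 0
check = A'*N                       % expected: zeros, every null-space vector is
                                   % orthogonal to every column of A
```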



The projection P_{ij} x_j of any vector x_i in V_n onto the subspace V_m is a vector that 1) lies in V_m and 2) obeys the property x_i = P_{ij} x_j + x_i^⊥, where x_i^⊥ lies in the null space V_{n-m} (i.e. x_i - P_{ij} x_j is orthogonal to all the vectors in V_m). It follows from these properties that the n by n projection matrix P_{ij} is:

P = A [A^T A]^{-1} A^T

The m by m matrix [A^T A] is invertible since A has rank m.
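A small MATLAB sketch of the projection formula above; the matrix and vector are arbitrary examples:

```matlab
% Projection of a vector in V_n (n = 3) onto the subspace spanned by the columns of A.
A  = [1 0; 0 1; 1 1];              % n x m = 3 x 2, full column rank
P  = A*((A'*A)\A');                % projection matrix P = A*inv(A'*A)*A'
x  = [1; 2; 3];
px = P*x;                          % component of x lying in V_m
xp = x - px;                       % remainder lies in the null space V_(n-m)
check = A'*xp                      % expected: (numerically) zero, so xp is
                                   % orthogonal to every vector in V_m
```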

Eigen problems

The eigen problem seeks a set of scalar eigenvalues λ_k and eigenvectors u^k, for k = 1,…,n, associated with the n by n matrix A. The eigenvalues and eigenvectors satisfy:

A_{ij} u_j^k = λ_k u_i^k ,   k = 1,…,n ,   with λ_k and u_i^k unknown

The n eigenvalues are found by solving an nth order polynomial in λ for its n roots:

|A - λI| = 0 → λ_1, λ_2, …, λ_n

The corresponding n eigenvectors are found by substituting each λ_k into A_{ij} u_j^k = λ_k u_i^k and solving for the corresponding u_i^k.

Example:

A = \begin{bmatrix} 1 & 0 \\ 3 & 4 \end{bmatrix}

Eigenvalues:

|A - λI| = \begin{vmatrix} 1-λ & 0 \\ 3 & 4-λ \end{vmatrix} = (1-λ)(4-λ) = 0 → λ_1 = 1, λ_2 = 4

Eigenvector 1 (λ_1 = 1):

\begin{bmatrix} 1 & 0 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} u_1^1 \\ u_2^1 \end{bmatrix} = (1) \begin{bmatrix} u_1^1 \\ u_2^1 \end{bmatrix} → 3 u_1^1 + 3 u_2^1 = 0 → \begin{bmatrix} u_1^1 \\ u_2^1 \end{bmatrix} = a \begin{bmatrix} 1 \\ -1 \end{bmatrix}   for any a

Eigenvector 2 (λ_2 = 4):

\begin{bmatrix} 1 & 0 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} u_1^2 \\ u_2^2 \end{bmatrix} = (4) \begin{bmatrix} u_1^2 \\ u_2^2 \end{bmatrix} → 3 u_1^2 = 0 → \begin{bmatrix} u_1^2 \\ u_2^2 \end{bmatrix} = a \begin{bmatrix} 0 \\ 1 \end{bmatrix}   for any a
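The eig function confirms this example (illustrative; eig is the standard MATLAB eigenvalue routine):

```matlab
% Eigenvalues and eigenvectors of the example matrix.
A = [1 0; 3 4];
[U, D] = eig(A);
diag(D)            % expected eigenvalues: 1 and 4 (order may vary)
U                  % columns are eigenvectors, proportional to [1; -1] and [0; 1]
```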

Quadratic Forms/Definiteness

A quadratic form (or the matrix A that it depends upon) can be classified as follows (a MATLAB check is sketched after the list):

q(x) = x^T A x → q = x_i A_{ij} x_j

• q(x), A positive definite if q(x) > 0 for all x ≠ 0 - if A symmetric, all eigenvalues of A > 0
• q(x), A positive semidefinite if q(x) ≥ 0 for all x - if A symmetric, all eigenvalues of A ≥ 0
• q(x), A negative definite if q(x) < 0 for all x ≠ 0 - if A symmetric, all eigenvalues of A < 0
• q(x), A negative semidefinite if q(x) ≤ 0 for all x - if A symmetric, all eigenvalues of A ≤ 0
• otherwise q(x) and A are indefinite
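An illustrative MATLAB check of the eigenvalue criterion for definiteness; the symmetric matrix below is an arbitrary example:

```matlab
% Classify a symmetric matrix by the signs of its eigenvalues.
A = [2 -1; -1 2];               % symmetric example
lam = eig(A);                    % eigenvalues: 1 and 3
pos_def = all(lam > 0)           % true: A (and q(x) = x'*A*x) is positive definite
x = [1; -2];
q = x'*A*x                       % expected: a positive scalar (here 14)
```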
