
Linear Algebra Review
September 6, 2005

By Tim K. Marks
UCSD

Borrows heavily from:
Jana Kosecka, http://cs.gmu.edu/~kosecka/cs682.html
Virginia de Sa (UCSD), Cogsci 108F Linear Algebra review

Vectors

The length of x, a.k.a. the norm or 2-norm of x, is

    \|x\| = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}

e.g., for x = (3, 2, 5),

    \|x\| = \sqrt{3^2 + 2^2 + 5^2} = \sqrt{38}
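For example, in Matlab (using the vector from the example above):

>> x = [3; 2; 5];
>> norm(x)            % built-in 2-norm
ans =
    6.1644
>> sqrt(sum(x.^2))    % the same value, computed straight from the definition
ans =
    6.1644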


Good Review Materials

http://www.imageprocessingbook.com/DIP2E/dip2e_downloads/review_material_downloads.htm

(Gonzalez & Woods review materials)

Chapt. 1: Linear Algebra Review

Chapt. 2: Probability, Random Variables, Random Vectors

Online vector addition demo:

http://www.pa.uky.edu/~phy211/VecArith/index.html


Vector Addition

[Figure: vectors u and v placed head to tail, giving their sum u + v]

Vector Subtraction

[Figure: vectors u and v drawn from a common origin, giving their difference u - v]


Example (on board)

Inner product (dot product) of two vectors

    a = \begin{bmatrix} 6 \\ 2 \\ -3 \end{bmatrix},
    b = \begin{bmatrix} 4 \\ 1 \\ 5 \end{bmatrix}

    a \cdot b = a^T b
              = \begin{bmatrix} 6 & 2 & -3 \end{bmatrix} \begin{bmatrix} 4 \\ 1 \\ 5 \end{bmatrix}
              = 6 \cdot 4 + 2 \cdot 1 + (-3) \cdot 5
              = 11
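The same computation in Matlab, either with dot or with a transpose and a matrix product:

>> a = [6; 2; -3]; b = [4; 1; 5];
>> dot(a, b)    % inner product
ans =
    11
>> a' * b       % equivalent: a-transpose times b
ans =
    11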


Inner (dot) Product

[Figure: two vectors u and v]

The inner product is a SCALAR.

    u^T v = 0 \iff u \perp v
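For example, in Matlab (u and v here are just a made-up perpendicular pair):

>> u = [1; 2]; v = [-2; 1];   % made-up 2-D vectors
>> u' * v                     % inner product is 0, so u is perpendicular to v
ans =
     0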


Transpose of a Matrix

Transpose:

    C_{m \times n} = (A_{n \times m})^T,  where  c_{ij} = a_{ji}

    (A + B)^T = A^T + B^T
    (AB)^T = B^T A^T

Examples:

    \begin{bmatrix} 6 & 2 \\ 1 & 5 \end{bmatrix}^T = \begin{bmatrix} 6 & 1 \\ 2 & 5 \end{bmatrix}

    \begin{bmatrix} 6 & 2 \\ 1 & 5 \\ 3 & 8 \end{bmatrix}^T = \begin{bmatrix} 6 & 1 & 3 \\ 2 & 5 & 8 \end{bmatrix}

If A^T = A, we say A is symmetric (example of a symmetric matrix in the Matlab sketch below).
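In Matlab, ' is the transpose operator (for complex matrices it is the conjugate transpose; .' is the plain transpose). The matrix S below is just a made-up symmetric example:

>> A = [6 2; 1 5];
>> A'                    % transpose of the 2x2 example above
ans =
     6     1
     2     5
>> [6 2; 1 5; 3 8]'      % transpose of the 3x2 example above
ans =
     6     1     3
     2     5     8
>> S = [1 2 7; 2 5 0; 7 0 3];   % made-up symmetric matrix
>> isequal(S', S)               % true: S equals its own transpose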


Matrix Product

Product:

    C_{n \times p} = A_{n \times m} B_{m \times p},   c_{ij} = \sum_{k=1}^{m} a_{ik} b_{kj}

A and B must have compatible dimensions.

In Matlab: >> A*B

Examples:

    \begin{bmatrix} 6 & 2 \\ 1 & 5 \end{bmatrix} \begin{bmatrix} 2 & 5 \\ 3 & 1 \end{bmatrix} = \begin{bmatrix} 18 & 32 \\ 17 & 10 \end{bmatrix}

    \begin{bmatrix} 2 & 5 \\ 3 & 1 \end{bmatrix} \begin{bmatrix} 6 & 2 \\ 1 & 5 \end{bmatrix} = \begin{bmatrix} 17 & 29 \\ 19 & 11 \end{bmatrix}

Matrix multiplication is not commutative: in general,

    A_{n \times n} B_{n \times n} \neq B_{n \times n} A_{n \times n}
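A quick Matlab check of the two example products (the order matters):

>> A = [6 2; 1 5]; B = [2 5; 3 1];
>> A * B
ans =
    18    32
    17    10
>> B * A
ans =
    17    29
    19    11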


Matrix Sum

Sum:

    C_{n \times m} = A_{n \times m} + B_{n \times m},   c_{ij} = a_{ij} + b_{ij}

A and B must have the same dimensions.

Example:

    \begin{bmatrix} 2 & 5 \\ 3 & 1 \end{bmatrix} + \begin{bmatrix} 6 & 2 \\ 1 & 5 \end{bmatrix} = \begin{bmatrix} 8 & 7 \\ 4 & 6 \end{bmatrix}
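The corresponding elementwise sum in Matlab (same matrices as the example):

>> [2 5; 3 1] + [6 2; 1 5]
ans =
     8     7
     4     6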

Determinant of a Matrix

Determinant:

    \det \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}
      = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}
      = a_{11} a_{22} - a_{21} a_{12}

    \det \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}
      = a_{11} \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix}
      - a_{12} \begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix}
      + a_{13} \begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}

A must be square.

Example:

    \det \begin{bmatrix} 2 & 5 \\ 3 & 1 \end{bmatrix} = 2 - 15 = -13


Determinant in Matlab
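For example (using the matrix from the determinant example above):

>> A = [2 5; 3 1];
>> det(A)       % -13, up to floating-point roundoff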

Inverse of a Matrix

If A is a square matrix, the inverse of A, called A^{-1}, satisfies

    A A^{-1} = I   and   A^{-1} A = I,

where I, the identity matrix, is a diagonal matrix with all 1's on the diagonal:

    I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},
    I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}


Inverse of a 2D Matrix

Example:

    \begin{bmatrix} 6 & 2 \\ 1 & 5 \end{bmatrix}^{-1} = \frac{1}{28} \begin{bmatrix} 5 & -2 \\ -1 & 6 \end{bmatrix}

    \begin{bmatrix} 6 & 2 \\ 1 & 5 \end{bmatrix}^{-1} \begin{bmatrix} 6 & 2 \\ 1 & 5 \end{bmatrix}
      = \frac{1}{28} \begin{bmatrix} 5 & -2 \\ -1 & 6 \end{bmatrix} \begin{bmatrix} 6 & 2 \\ 1 & 5 \end{bmatrix}
      = \frac{1}{28} \begin{bmatrix} 28 & 0 \\ 0 & 28 \end{bmatrix}
      = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}

Inverses in Matlab
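For example (same 2 x 2 matrix as above; note that for solving a linear system Ax = b, A\b is usually preferred over inv(A)*b):

>> A = [6 2; 1 5];
>> inv(A)
ans =
    0.1786   -0.0714
   -0.0357    0.2143
>> A * inv(A)    % the identity, up to floating-point roundoff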


Other Terms


Matrix Transformation: Scale

A square diagonal matrix scales each dimension by the corresponding diagonal element.

Example:

    \begin{bmatrix} 2 & 0 & 0 \\ 0 & 0.5 & 0 \\ 0 & 0 & 3 \end{bmatrix}
    \begin{bmatrix} 6 \\ 8 \\ 10 \end{bmatrix}
    = \begin{bmatrix} 12 \\ 4 \\ 30 \end{bmatrix}
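In Matlab, diag builds such a scaling matrix from its diagonal elements (same numbers as the example):

>> S = diag([2 0.5 3]);
>> S * [6; 8; 10]
ans =
    12
     4
    30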


Linear Independence

• A set of vectors is linearly dependent if one of the vectors can be expressed as a linear combination of the other vectors.

  Example (the third vector is 2 times the first plus 1 times the second):

    \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix},
    \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix},
    \begin{bmatrix} 2 \\ 1 \\ 0 \end{bmatrix}

• A set of vectors is linearly independent if none of the vectors can be expressed as a linear combination of the other vectors.

  Example:

    \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix},
    \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix},
    \begin{bmatrix} 2 \\ 1 \\ 3 \end{bmatrix}

Rank of a Matrix

• The rank of a matrix is the number of linearly independent columns of the matrix.

  Examples:

    \begin{bmatrix} 1 & 0 & 2 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{bmatrix}  has rank 2

    \begin{bmatrix} 1 & 0 & 2 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}  has rank 3

• Note: the rank of a matrix is also the number of linearly independent rows of the matrix.
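Matlab's rank function gives the same answers, and stacking a set of vectors as columns and checking the rank is a convenient numerical test of linear independence:

>> rank([1 0 2; 0 1 1; 0 0 0])
ans =
     2
>> rank([1 0 2; 0 1 0; 0 0 1])
ans =
     3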


Singular Matrix

All of the following conditions are equivalent. We say a square (n × n) matrix is singular if any one of these conditions (and hence all of them) is satisfied.

– The columns are linearly dependent
– The rows are linearly dependent
– The determinant = 0
– The matrix is not invertible
– The matrix is not full rank (i.e., rank < n)
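For example, the rank-2 matrix from the previous slide is singular, and the conditions can be checked in Matlab:

>> A = [1 0 2; 0 1 1; 0 0 0];   % third column = 2*(first column) + (second column)
>> det(A)     % 0: the determinant vanishes
>> rank(A)    % 2, which is less than n = 3: not full rank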

Linear Spaces

A linear space is the set of all vectors that can be expressed as a linear combination of a set of basis vectors. We say this space is the span of the basis vectors.

– Example: R^3, 3-dimensional Euclidean space, is spanned by each of the following two bases:

    \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix},
    \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix},
    \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}
      and
    \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix},
    \begin{bmatrix} 0 \\ 1 \\ 2 \end{bmatrix},
    \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}


Linear Subspaces

A linear subspace is the space spanned by a subset of the vectors in a linear space.

– The space spanned by the following vectors is a two-dimensional subspace of R^3 (what does it look like?):

    \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix},
    \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}

– The space spanned by the following vectors is also a two-dimensional subspace of R^3 (what does it look like?):

    \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix},
    \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}

Orthogonal and Orthonormal Bases

• n linearly independent real vectors span R^n, n-dimensional Euclidean space. They form a basis for the space.

– An orthogonal basis a_1, …, a_n satisfies

    a_i \cdot a_j = 0  if i ≠ j

– An orthonormal basis a_1, …, a_n satisfies

    a_i \cdot a_j = 0  if i ≠ j
    a_i \cdot a_j = 1  if i = j

– Examples.


Orthonormal Matrices

A square matrix is orthonormal (also called unitary) if its columns are orthonormal vectors.

– A matrix A is orthonormal iff A A^T = I.

• If A is orthonormal, A^{-1} = A^T, so A A^T = A^T A = I.

– A rotation matrix is an orthonormal matrix with determinant = 1.

• It is also possible for an orthonormal matrix to have determinant = -1. This is a rotation plus a flip (reflection).

http://www.math.ubc.ca/~cass/courses/m309-8a/java/m309gfx/eigen.html
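For example, a 2-D rotation matrix (the 30-degree angle below is just a made-up choice) satisfies all of these properties:

>> th = pi/6;                               % made-up 30-degree rotation angle
>> R = [cos(th) -sin(th); sin(th) cos(th)];
>> R' * R        % the identity, up to floating-point roundoff: columns are orthonormal
>> inv(R) - R'   % essentially zero: the inverse is just the transpose
>> det(R)        % 1 (a pure rotation, no reflection), up to roundoff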


Some Properties of Eigenvalues and Eigenvectors

– If λ_1, …, λ_n are distinct eigenvalues of a matrix, then the corresponding eigenvectors e_1, …, e_n are linearly independent.

– If e_1 is an eigenvector of a matrix with corresponding eigenvalue λ_1, then any nonzero scalar multiple of e_1 is also an eigenvector with eigenvalue λ_1.

– A real, symmetric square matrix has real eigenvalues, with orthogonal eigenvectors (which can be chosen to be orthonormal).
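Recall that an eigenvector e of A with eigenvalue λ satisfies A e = λ e. In Matlab, eig returns both; A below is just a made-up real symmetric matrix, so its eigenvalues (1 and 3) are real and its eigenvectors orthonormal, as stated above:

>> A = [2 1; 1 2];      % made-up real symmetric matrix
>> [V, L] = eig(A);     % columns of V are eigenvectors; diag(L) holds the eigenvalues
>> A*V - V*L            % essentially zero: A e = lambda e for each eigenvector column
>> V' * V               % the identity, up to roundoff: the eigenvectors are orthonormal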


SVD: Singular Value Decomposition

Any matrix A (m × n) can be written as the product of three matrices:

    A = U D V^T

where
– U is an m × m orthonormal matrix
– D is an m × n diagonal matrix. Its diagonal elements σ_1, σ_2, …, are called the singular values of A, and satisfy σ_1 ≥ σ_2 ≥ … ≥ 0.
– V is an n × n orthonormal matrix

Example: if m > n,

    A = \underbrace{\begin{bmatrix} u_1 & u_2 & u_3 & \cdots & u_m \end{bmatrix}}_{U\ (m \times m)}
        \underbrace{\begin{bmatrix} \sigma_1 & & \\ & \ddots & \\ & & \sigma_n \\ & 0 & \end{bmatrix}}_{D\ (m \times n)}
        \underbrace{\begin{bmatrix} v_1^T \\ \vdots \\ v_n^T \end{bmatrix}}_{V^T\ (n \times n)}

where the u_i are the columns of U, the v_j^T are the rows of V^T, and the last m − n rows of D are all zeros.

SVD in Matlab

>> a = [1 2 3; 2 7 4; -3 0 6; 2 4 9; 5 -8 0]

a =

1 2 3

2 7 4

-3 0 6

2 4 9

5 -8 0

>> [u,d,v] = svd(a)

u =

-0.24538 0.11781 -0.11291 -0.47421 -0.82963

-0.53253 -0.11684 -0.52806 -0.45036 0.4702

-0.30668 0.24939 0.79767 -0.38766 0.23915

-0.64223 0.44212 -0.057905 0.61667 -0.091874

0.38691 0.84546 -0.26226 -0.20428 0.15809

d =

14.412 0 0

0 8.8258 0

0 0 5.6928

0 0 0

0 0 0

v =

0.01802 0.48126 -0.87639

-0.68573 -0.63195 -0.36112

-0.72763 0.60748 0.31863


Some Properties of SVD

– The rank of matrix A is equal to the number of nonzero singular values σ_i.

– A square (n × n) matrix A is singular if and only if at least one of its singular values σ_1, …, σ_n is zero.
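For example, the singular 3 × 3 matrix from the rank example has exactly two nonzero singular values (matching its rank of 2) and one zero singular value:

>> svd([1 0 2; 0 1 1; 0 0 0])   % singular values: sqrt(6), 1, and (numerically) 0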

Geometric Interpretation of SVD

If A is a square (n × n) matrix,
– U is a unitary matrix: rotation (possibly plus a flip)
– D is a scale matrix
– V (and thus V^T) is a unitary matrix

Punchline: An arbitrary n-D linear transformation is equivalent to a rotation (plus perhaps a flip), followed by a scale transformation, followed by a rotation.

Advanced: y = Ax = U D V^T x
– V^T expresses x in terms of the basis V.
– D rescales each coordinate (each dimension).
– The new coordinates are the coordinates of y in terms of the basis U.

For a square (n × n) matrix:

    A = \underbrace{\begin{bmatrix} u_1 & \cdots & u_n \end{bmatrix}}_{U}
        \underbrace{\begin{bmatrix} \sigma_1 & & \\ & \ddots & \\ & & \sigma_n \end{bmatrix}}_{D}
        \underbrace{\begin{bmatrix} v_1^T \\ \vdots \\ v_n^T \end{bmatrix}}_{V^T}
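Continuing the Matlab example from the SVD slide, the three factors multiply back to the original matrix, and applying them one at a time shows the rotate, then scale, then rotate picture (x below is just a made-up input vector):

>> [u, d, v] = svd(a);    % a is the 5x3 matrix from the "SVD in Matlab" slide
>> u * d * v' - a         % essentially zero: the factors reproduce a
>> x = [1; 2; 3];         % made-up input vector
>> a*x - u*(d*(v'*x))     % essentially zero: applying V', then D, then U gives A*x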