Matrices I
DESCRIPTION
First part of a description of Matrix Calculus at the undergraduate science (Math, Physics, Engineering) level. Please send comments and suggestions to [email protected]. For more presentations please visit my website at http://www.solohermelin.com.
1
Matrices I
SOLO HERMELIN
Updated: 30.03.11
http://www.solohermelin.com
2
SOLO Matrices I
Table of Content
Introduction to Algebra
Matrices
Vectors and Vector Spaces
Matrix
Operations with Matrices
Domain and Codomain of a Matrix A
Transpose A^T of a Matrix A
Conjugate A* and Conjugate Transpose A^H = (A*)^T of a Matrix A
Sum and Difference of Matrices A and B
Multiplication of a Matrix by a Scalar
Multiplication of a Matrix by a Matrix
Kronecker Multiplication of a Matrix by a Matrix
Partition of a Matrix
Elementary Operations with a Matrix
Rank of a Matrix
Equivalence of Two Matrices
3
SOLO Matrices I
Table of Content (continue – 1)
Matrices – Square Matrices
Trace of a Square Matrix, Diagonal Square Matrix
Identity Matrix, Null Matrix, Triangular Matrices
Hessenberg Matrix
Toeplitz Matrix, Hankel Matrix
Householder Matrix
Vandermonde Matrix
Hermitian Matrix, Skew-Hermitian Matrix, Unitary Matrix
Matrices & Determinants History
L, U Factorization of a Square Matrix A by Elementary Operations
Invertible Matrices
Diagonalization of a Square Matrix A by Elementary Operations
4
SOLO Matrices I
Table of Content (continue – 2)
Matrices – Square Matrices
Determinant of a Square Matrix – det A or |A|
Eigenvalues and Eigenvectors of Square Matrices A (n×n)
Jordan Normal (Canonical) Form
Cayley-Hamilton Theorem
Matrix Decompositions
Companion Matrix
References
5
SOLO Algebra
Set and Set Operations
A collection of objects sharing a common property is called a Set. We use the notation

1.  S = {x : x has property P}

We write x ∈ S: x is an element of S.

S1 is a subset of S if every element of S1 is an element of S:

  S1 ⊆ S : {x ∈ S1 ⇒ x ∈ S}

2.  Null (Empty) set ∅: a set with no elements
3.  Union of sets: S1 ∪ S2 = {x : x ∈ S1 or x ∈ S2}
4.  Intersection of sets: S1 ∩ S2 = {x : x ∈ S1 and x ∈ S2}
5.  Difference of sets: S1 \ S2 = {x : x ∈ S1 and x ∉ S2}
6.  Complement of S relative to Ω: S̄ = {x : x ∈ Ω and x ∉ S}, for S ⊆ Ω

(Venn diagrams of S1 ∪ S2, S1 ∩ S2, S1 \ S2, and the complement S̄ relative to Ω.)
7
SOLO Algebra
Group
A nonempty set G is said to be a group if in G there is defined an operation * such that:

1. Closure: a, b ∈ G ⇒ a * b ∈ G
2. Associativity: a, b, c ∈ G ⇒ a * (b * c) = (a * b) * c
3. Identity element: ∃ e ∈ G s.t. e * a = a * e = a, ∀ a ∈ G
4. Inverse element: ∀ a ∈ G, ∃ b ∈ G s.t. a * b = b * a = e; b = a^(-1)

Lemma 1: A group G has exactly one identity element.
Proof: If e and f are both identity elements, then

  e * f = f (e is an identity) and e * f = e (f is an identity), hence e = f.

Lemma 2: Every element in G has exactly one inverse element.
Proof: If b and c are both inverse elements of x, then b * x = e and x * c = e, so

  b = b * e = b * (x * c) = (b * x) * c = e * c = c
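The group axioms above can be checked mechanically on a small finite example. The following sketch (not from the slides; the set and operation are chosen for illustration) verifies all four axioms for the integers {0, …, 4} under addition modulo 5:

```python
# Verify the group axioms for (Z_5, +): the set {0,1,2,3,4} with
# a * b := (a + b) mod 5, identity element 0.
G = [0, 1, 2, 3, 4]

def op(a, b):
    return (a + b) % 5

closure = all(op(a, b) in G for a in G for b in G)
assoc = all(op(a, op(b, c)) == op(op(a, b), c)
            for a in G for b in G for c in G)
identity = all(op(0, a) == a == op(a, 0) for a in G)
# inverse of a is (5 - a) mod 5, so some b with a * b = e exists for every a
inverse = all(any(op(a, b) == 0 for b in G) for a in G)
```

All four flags come out True, confirming that this operation turns the set into a group.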
8
SOLO Algebra
Ring
A Ring is a set R equipped with two binary operations +: R × R → R (called addition) and ·: R × R → R (called multiplication), such that:

(R, +) is an Abelian Group with identity element 0 (Group properties plus the Abelian property):
- Closure: a, b ∈ R ⇒ a + b ∈ R
- Associativity: (a + b) + c = a + (b + c)
- Identity element: a + 0 = 0 + a = a
- Inverse element: ∀ a ∈ R, ∃ (−a) ∈ R s.t. a + (−a) = 0
- Abelian Group property: a + b = b + a

(R, ·) is associative: (a · b) · c = a · (b · c)

Multiplication distributes over addition:
- a · (b + c) = a · b + a · c
- (a + b) · c = a · c + b · c
9
SOLO Algebra
Field
A Field is a Ring satisfying two additional conditions:
(1) There also exists an identity element 1 with respect to multiplication, i.e.:

  a · 1 = 1 · a = a

(2) All but the zero element have an inverse with respect to multiplication:

  ∀ a ∈ R, a ≠ 0, ∃ a^(-1) ∈ R s.t. a · a^(-1) = a^(-1) · a = 1
10
Algebras History SOLO

Synthetic Geometry: Euclid, 300 BC (first printing 1482)
Syncopated Algebra: Diophantus, 250 AD
Analytic Geometry: Descartes, 1637
Complex Algebra: Wessel, Gauss, 1798
Quaternions: Hamilton, 1843
Extensive Algebra: Grassmann, 1844
Binary Algebra: Boole, 1854
Matrix Algebra: Cayley, 1854
Determinants: Sylvester, 1878
Clifford Algebra: Clifford, 1878
Vector Calculus: Gibbs, 1881
Tensor Calculus: Ricci, 1890
Differential Forms: E. Cartan, 1908
Spin Algebra: Pauli, Dirac, 1928
Geometric Algebra and Calculus: Hestenes, 1966

http://modelingnts.la.asu.edu/html/evolution.html
Table of Content
11
SOLO Matrices
Definitions:
Vectors and Vector Spaces
Vector: An n-dimensional Vector (n-Vector) is an ordered set of n elements x1, x2, …, xn over a field F. Equivalently it can be defined as a Row Matrix or a Column Matrix:

  c = [x1, x2, …, xn]^T (column),  r = [x1, x2, …, xn] (row)

and we have r = c^T and c = r^T, where T is the Transpose operation.

Scalar: A one-dimensional Vector whose element is a real or a complex number.

Null Vector: An n-dimensional Vector with all elements equal to zero:

  0_c = [0, 0, …, 0]^T,  0_r = [0, 0, …, 0]

Equality of two Vectors:

  x = y  ⟺  xi = yi for i = 1, …, n
12
SOLO Matrices
VECTOR SPACE
Given the complex numbers C, a Vector Space V (Linear Affine Space) over C is a set whose elements x, y, z ∈ V satisfy, for all α, β ∈ C, the following conditions:

I. There exists an operation of Addition with the following properties:
1. Commutative (Abelian) Law for Addition: x + y = y + x
2. Associative Law for Addition: (x + y) + z = x + (y + z)
3. There exists a unique vector 0 s.t. x + 0 = x
4. Inverse: ∀ x ∈ V, ∃ y ∈ V s.t. x + y = 0

II. There exists an operation of Multiplication by a Scalar with the following properties:
5. 1 · x = x
6. Associative Law for Multiplication: α (β x) = (α β) x
7. Distributive Law for Multiplication: α (x + y) = α x + α y
8. Distributive Law over scalar addition: (α + β) x = α x + β x

From these properties we can write, for example, 0 · x = 0 and (−1) x = −x.
13
SOLO Matrices
Linear Dependence and Independence
Vectors and Vector Spaces
Vectors v1, v2, …, vm are said to be Linearly Independent if:

  α1 v1 + α2 v2 + … + αm vm = 0  if and only if  α1 = α2 = … = αm = 0

Vectors v1, v2, …, vm are said to be Linearly Dependent if:

  α1 v1 + α2 v2 + … + αm vm = 0  with some αi ≠ 0

If the vectors v1, v2, …, vm are Linearly Dependent, each vector vk whose coefficient αk ≠ 0 can be obtained as a Linear Combination of the other Vectors:

  vk = −(1/αk) Σ_{i=1, i≠k}^{m} αi vi
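As a small numeric sketch (the vectors and coefficients below are made up for illustration): if v3 is built as 2·v1 − v2, then {v1, v2, v3} are linearly dependent with coefficients α = (2, −1, −1), and v3 is recovered from the other two exactly as in the formula above:

```python
from fractions import Fraction as F

# v3 := 2*v1 - v2, so 2*v1 - v2 - v3 = 0 with alpha = (2, -1, -1).
v1 = [F(1), F(0), F(1)]
v2 = [F(0), F(1), F(1)]
v3 = [2 * a - b for a, b in zip(v1, v2)]
alpha = [F(2), F(-1), F(-1)]

# the dependence relation sums to the zero vector
combo = [alpha[0] * a + alpha[1] * b + alpha[2] * c
         for a, b, c in zip(v1, v2, v3)]

# recover v3 = -(1/alpha_3) * (alpha_1*v1 + alpha_2*v2)
recovered = [-(alpha[0] * a + alpha[1] * b) / alpha[2]
             for a, b in zip(v1, v2)]
```

Exact `Fraction` arithmetic is used so the equalities hold without floating-point tolerance.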
14
SOLO Matrices
Linear Dependence and Independence
Vectors and Vector Spaces
Theorem
If the vectors v1, v2, …, vm are Linearly Independent and the vectors v1, v2, …, vm, v(m+1) are Linearly Dependent, then v(m+1) can be expressed as a Unique Linear Combination of v1, v2, …, vm.

Proof
v1, v2, …, vm, v(m+1) Linearly Dependent implies that there exist some αi, not all zero, s.t.

  α1 v1 + α2 v2 + … + αm vm + α(m+1) v(m+1) = 0,  α(m+1) ≠ 0

since α(m+1) = 0 would imply that v1, v2, …, vm are Linearly Dependent, and this is a contradiction.

Therefore:

  v(m+1) = −(α1 v1 + α2 v2 + … + αm vm) / α(m+1)

To prove Uniqueness, suppose that there are two expressions:

  v(m+1) = Σ_{i=1}^{m} βi vi = Σ_{i=1}^{m} γi vi  ⇒  Σ_{i=1}^{m} (βi − γi) vi = 0

Since v1, …, vm are Linearly Independent, βi − γi = 0 for i = 1, …, m, so the two expressions coincide.
q.e.d.
15
SOLO Matrices
Basis of a Vector Space V
Vectors and Vector Spaces
A set of Vectors v1, v2, …, vn of an n-Vector Space V is called a Basis of V if these n Vectors are Linearly Independent and every Vector y can be Uniquely expressed as a Linear Combination of those Vectors:

  y = Σ_{i=1}^{n} ηi vi
16
SOLO Matrices
Vectors and Vector Spaces
Relation Between Two Bases of a Vector Space V
If we have two Bases of Vectors, v1, …, vn and w1, …, wn, we can write:

  w1 = a11 v1 + a12 v2 + … + a1n vn
  w2 = a21 v1 + a22 v2 + … + a2n vn
  ⋮
  wn = an1 v1 + an2 v2 + … + ann vn

or

  [w1, w2, …, wn]^T = A (n×n) [v1, v2, …, vn]^T,  A = [aij]

In the same way

  [v1, v2, …, vn]^T = B (n×n) [w1, w2, …, wn]^T,  B = [bij]

Therefore

  [v1, …, vn]^T = B A [v1, …, vn]^T  and  [w1, …, wn]^T = A B [w1, …, wn]^T

so that

  A B = B A = I (n×n)

B is called the Inverse of the Square Matrix A and is written as A^(-1).
17
SOLO
Inner Product
If V is a complex Vector Space, the Inner Product (a scalar) <x, y> between the elements x, y, z ∈ V is defined by:

1. Commutative law: <x, y> = <y, x>*
2. Distributive law: <x + y, z> = <x, z> + <y, z>
3. <α x, y> = α <x, y>, α ∈ C
4. <x, x> ≥ 0 and <x, x> = 0 ⟺ x = 0

Using 1 to 4 we can show that:

1. <x, α1 y1 + α2 y2> = α1* <x, y1> + α2* <x, y2>
   in particular <x, α y> = α* <x, y>
2. <x, 0> = <0, x> = 0
Matrices Vectors and Vector Spaces
18
SOLO
Inner Product
We can define the Inner Product in a Vector Space of n-dimensional column vectors as

  <x, y> := y^H x = (y*)^T x = (x^H y)*

therefore, for x = [x1, x2, …, xn]^T and y = [y1, y2, …, yn]^T:

  <x, y> = x1 y1* + x2 y2* + … + xn yn* = Σ_{i=1}^{n} xi yi*

Outer Product

  x y^H := [x1; x2; …; xn] [y1*, y2*, …, yn*] =

    [ x1 y1*  x1 y2*  …  x1 yn*
      x2 y1*  x2 y2*  …  x2 yn*
      ⋮
      xn y1*  xn y2*  …  xn yn* ]
Vectors and Vector Spaces
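The component formulas above translate directly into Python. A small sketch (the sample vectors are made up), using the convention <x, y> = Σ xi·yi* and the outer-product entries xi·yj*:

```python
# Inner product <x, y> = sum_i x_i * conj(y_i) and outer product x y^H
# for complex vectors represented as plain Python lists.
x = [1 + 2j, 3 - 1j]
y = [2 + 0j, 1 + 1j]

inner = sum(xi * yi.conjugate() for xi, yi in zip(x, y))
outer = [[xi * yj.conjugate() for yj in y] for xi in x]

# conjugate symmetry: <x, y> = <y, x>*
inner_swapped = sum(yi * xi.conjugate() for xi, yi in zip(x, y))
```

For these vectors the inner product happens to be real, and `inner == inner_swapped.conjugate()` confirms the commutative law of the previous slide.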
19
SOLO
Norm of a Vector x
The Norm ||x|| of a Vector is defined by the following relations:

1. ||x|| ≥ 0 ∀ x ∈ V (Non-negativity)
2. ||x|| = 0 ⟺ x = 0 (Identity)
3. ||x + y|| ≤ ||x|| + ||y|| ∀ x, y ∈ V (Triangle Inequality)
4. ||α x|| = |α| ||x||

If V is an Inner Product space, then we can induce the norm:

  ||x|| = <x, x>^(1/2)

We can see that

  ||x|| = (Σ_{i=1}^{n} xi xi*)^(1/2) = (Σ_{i=1}^{n} |xi|^2)^(1/2) ≥ 0

and

  ||x|| = 0 ⟺ xi = 0, i = 1, …, n ⟺ x = 0
Vectors and Vector Spaces
20
SOLO
Inner Product
Cauchy, Bunyakovsky, Schwarz Inequality, known as the Schwarz Inequality
Let x, y be the elements of an Inner Product space V; then:

  |<x, y>| ≤ ||x|| ||y||

(Figure: the projection (<x, y>/<y, y>) y of x on y and the residual x − (<x, y>/<y, y>) y.)

Proof: for any scalar λ:

  0 ≤ <x − λ y, x − λ y> = <x, x> − λ <y, x> − λ* <x, y> + λ λ* <y, y>

Assuming y ≠ 0 (for y = 0 the equality holds) we choose:

  λ = <x, y> / <y, y>

we have:

  0 ≤ <x, x> − <x, y><y, x>/<y, y> − <x, y>*<x, y>/<y, y> + |<x, y>|^2/<y, y>

which reduces to:

  0 ≤ <x, x> − |<x, y>|^2 / <y, y>

or:

  |<x, y>|^2 ≤ <x, x> <y, y> = ||x||^2 ||y||^2  ⇒  |<x, y>| ≤ ||x|| ||y||

q.e.d.

Augustin Louis Cauchy (1789 – 1857)
Viktor Yakovlevich Bunyakovsky (1804 – 1889)
Hermann Amandus Schwarz (1843 – 1921)

Matrices Vectors and Vector Spaces
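The inequality can be spot-checked numerically. A sketch (random real vectors, made up for illustration) verifying |<x, y>| ≤ ||x|| ||y|| over many trials:

```python
import math
import random

# Check the Schwarz inequality |<x,y>| <= ||x|| ||y|| on random real
# vectors, with <x,y> = sum x_i*y_i and the induced norm <x,x>**0.5.
random.seed(0)

def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

ok = True
for _ in range(100):
    x = [random.uniform(-1, 1) for _ in range(5)]
    y = [random.uniform(-1, 1) for _ in range(5)]
    lhs = abs(inner(x, y))
    rhs = math.sqrt(inner(x, x)) * math.sqrt(inner(y, y))
    ok = ok and (lhs <= rhs + 1e-12)  # small tolerance for rounding
```

Every trial satisfies the bound, as the proof above guarantees.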
21
SOLO
Inner Product
Cauchy Inequality
Let ai, bi (i = 1, …, n) be complex numbers; then:

  |Σ_{i=1}^{n} ai bi|^2 ≤ (Σ_{i=1}^{n} |ai|^2)(Σ_{i=1}^{n} |bi|^2)

Buniakowsky-Schwarz Inequality

  (∫ f(t) g(t) dt)^2 ≤ ∫ f^2(t) dt · ∫ g^2(t) dt

Buniakowsky, V., "Sur quelques inéqualités concernant les intégrales ordinaires et les intégrales aux différences finies", Mémoires de l'Acad. de St. Pétersbourg (VII), (1859)
Schwarz, H.A., "Über ein die Flächen kleinsten Flächeninhalts betreffendes Problem der Variationsrechnung", Acta Soc. Scient. Fen., 15, 315-362, (1885)
Matrices Vectors and Vector Spaces
22
SOLO
Inner Product
Parallelogram law
Given an Inner Product space V, ||x|| = <x, x>^(1/2) is a norm on V. Moreover, for any x, y ∈ V the parallelogram law

  ||x + y||^2 + ||x − y||^2 = 2 ||x||^2 + 2 ||y||^2

is valid.

Proof

  ||x + y||^2 + ||x − y||^2 = <x + y, x + y> + <x − y, x − y>
  = <x, x> + <x, y> + <y, x> + <y, y> + <x, x> − <x, y> − <y, x> + <y, y>
  = 2 <x, x> + 2 <y, y> = 2 ||x||^2 + 2 ||y||^2

q.e.d.

(Figure: parallelogram with sides x, y and diagonals x + y, x − y.)
Matrices
Vectors and Vector Spaces
23
SOLO
Inner Product
Let us compute:

  ||x + y||^2 − ||x − y||^2 = <x + y, x + y> − <x − y, x − y> = 2 <x, y> + 2 <y, x>

  ||x + iy||^2 − ||x − iy||^2 = <x + iy, x + iy> − <x − iy, x − iy> = −2i <x, y> + 2i <y, x>

From this we can see that

  ||x + y||^2 − ||x − y||^2 + i ||x + iy||^2 − i ||x − iy||^2 = 4 <x, y> = 4 <y, x>*

(Figure: the vectors x, y, x ± y, x ± iy.)
Matrices Vectors and Vector Spaces
24
SOLO
Norm of a Vector x
Let us use the Norm definition to develop the following relations:

  ||x + y||^2 = <x + y, x + y> = <x, x> + <x, y> + <y, x> + <y, y> = ||x||^2 + 2 Re<x, y> + ||y||^2

Use the fact that Re<x, y> ≤ |<x, y>| to obtain:

  ||x + y||^2 ≤ ||x||^2 + 2 |<x, y>| + ||y||^2

Use the Schwarz Inequality |<x, y>| ≤ ||x|| ||y|| to obtain:

  ||x + y||^2 ≤ ||x||^2 + 2 ||x|| ||y|| + ||y||^2 = (||x|| + ||y||)^2

or:

  ||x + y|| ≤ ||x|| + ||y||

We obtain the Triangle Inequality.

Vectors and Vector Spaces
(Figure: triangle with sides x, y and x + y.)
25
SOLO
Norm of a Vector x
Other Definitions of Vector Norms
The following definitions satisfy the Vector Norm Properties:

1.  ||x||_1 = Σ_{i=1}^{n} |xi|

2.  ||x||_inf = max_i |xi|

3.  ||x||_T = ||T x|| = ((T x)^H (T x))^(1/2) = (x^H T^H T x)^(1/2) = (x^H Q x)^(1/2),  Q := T^H T

Vectors and Vector Spaces
Return toTable of Content
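The three norms listed on the last slide are one-liners in Python. A sketch for a sample vector (the vector is made up for illustration):

```python
# 1-norm (sum of absolute values), infinity-norm (max absolute value),
# and the Euclidean 2-norm induced by the inner product <x, x>**0.5.
x = [3, -4, 0]

norm1 = sum(abs(xi) for xi in x)         # ||x||_1
norm_inf = max(abs(xi) for xi in x)      # ||x||_inf
norm2 = sum(xi * xi for xi in x) ** 0.5  # ||x||_2
```

For x = (3, −4, 0) these give 7, 4, and 5 respectively, and indeed ||x||_inf ≤ ||x||_2 ≤ ||x||_1, as always holds for these norms.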
26
SOLO Matrices
Matrix
A Matrix A over a field F is a rectangular array of elements in F.
If A is over a field of real numbers, A is called a Real Matrix.
If A is over a field of complex numbers, A is called a Complex Matrix.
An n rows by m columns Matrix A (an n × m Matrix) is defined as:

  A (n×m) = [ a11 a12 … a1m     ← r1
              a21 a22 … a2m     ← r2
              ⋮
              an1 an2 … anm ]   ← rn
              (columns c1, c2, …, cm)

aij (i = 1, …, n; j = 1, …, m) are called the elements of A, and we also use the notation:

  A (n×m) = [aij]
Return to
Table of Content
27
SOLO Matrices
Definitions:
Any complex matrix A with n rows (r1, r2, …, rn) and m columns (c1, c2, …, cm),

  A (n×m) = [r1; r2; …; rn] = [c1, c2, …, cm]

can be considered as a linear function (or mapping, or transformation) from an m-dimensional domain to an n-dimensional codomain:

  A: x (m×1) → y (n×1) = A x;  x ∈ dom A, y ∈ codom A

In the same way its conjugate transpose:

  A^H (m×n) = [c1^H; c2^H; …; cm^H] = [r1^H, r2^H, …, rn^H]

is a linear function (or mapping, or transformation) from the n-dimensional codomain to the m-dimensional domain:

  A^H: y1 (n×1) → x1 (m×1) = A^H y1;  y1 ∈ codom A, x1 ∈ dom A
Operations with Matrices
28
SOLO Matrices
Domain and Codomain of a Matrix A
The domain of A can be decomposed into orthogonal subspaces:

  dom A = R(A^H) ⊕ N(A)

R(A^H) – is the row space of A (dimension r)
N(A) – is the null-space of A (x ∈ N(A) ⟺ A x = 0), or the kernel of A, ker(A) (dimension m − r)

The codomain of A (domain of A^H) can be decomposed into orthogonal subspaces:

  codom A = R(A) ⊕ N(A^H)

R(A) – is the column space of A (dimension r)
N(A^H) – is the null-space of A^H (dimension n − r)

(Figure: the mappings y = A x and x1 = A^H y1 between dom A (m×1 vectors) and codom A (n×1 vectors).)
Operations with Matrices
Return toTable of Content
29
SOLO Matrices
Operations with Matrices
The Transpose AT of a Matrix A is obtained by interchanging the rows with the columns.
Transpose A^T of a Matrix A
For

  A (n×m) = [ a11 a12 … a1m
              a21 a22 … a2m
              ⋮
              an1 an2 … anm ]

the transpose is

  (A (n×m))^T = A^T (m×n) = [ a11 a21 … an1
                              a12 a22 … an2
                              ⋮
                              a1m a2m … anm ]

i.e. (A^T)ij = aji. From the definition it is obvious that (A^T)^T = A.
Return to Table of Content
30
SOLO Matrices
Operations with Matrices
Conjugate A* of a Matrix A
The Conjugate A* of a Matrix A is obtained by taking the complex conjugate of each of the elements of A:

  A* (n×m) = [aij*] = [ a11* a12* … a1m*
                        a21* a22* … a2m*
                        ⋮
                        an1* an2* … anm* ]

Conjugate Transpose A^H = (A*)^T of a Matrix A
The conjugate transpose is

  A^H (m×n) = (A*)^T = [ a11* a21* … an1*
                         a12* a22* … an2*
                         ⋮
                         a1m* a2m* … anm* ]
Return toTable of Content
31
SOLO Matrices
Operations with Matrices
The sum/difference of two matrices A and B of the same dimensions n × m is obtained by adding/subtracting the elements bij to/from the elements aij:

  A (n×m) ± B (n×m) = [aij ± bij]

Given the following transformations

  y (n×1) = A (n×m) x (m×1),  z (n×1) = B (n×m) x (m×1),  x ∈ dom A = dom B

then

  y ± z = A x ± B x = (A (n×m) ± B (n×m)) x
Return toTable of Content
32
SOLO Matrices
Operations with Matrices
Multiplication of a Matrix by a Scalar
The product of a Matrix by a Scalar α is a Matrix in which each element is multiplied by the Scalar:

  α A (n×m) = [α aij]

Given the transformation y (n×1) = A (n×m) x (m×1), the scaled operation is

  z = (α A (n×m)) x = α (A (n×m) x) = α y,  z ∈ codom A
Return toTable of Content
33
SOLO Matrices
Operations with Matrices
Multiplication of a Matrix by a Matrix
Consider the two consecutive transformations:

  x (m×1) = B (m×p) z (p×1)  and  y (n×1) = A (n×m) x (m×1)

then

  y = A (n×m) (B (m×p) z) = (A (n×m) B (m×p)) z = C (n×p) z

where

  C (n×p) = A (n×m) B (m×p)
n
34
SOLO Matrices
Operations with Matrices
Multiplication of a Matrix by a Matrix (continue -1)
The Multiplication of a Matrix by a Matrix is possible between Matrices in which the number of columns in the first Matrix is equal to the number of rows in the second Matrix:

  A (n×m) B (m×p) = C (n×p)

where

  cik := Σ_{j=1}^{m} aij bjk,  i = 1, …, n; k = 1, …, p
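The element-by-element rule cik = Σj aij·bjk can be written directly with nested lists. A sketch on a 2×3 by 3×2 product (the numbers are made up for illustration):

```python
# C = A B via c_ik = sum_j a_ij * b_jk, A is n x m, B is m x p.
A = [[1, 2, 3],
     [4, 5, 6]]
B = [[7, 8],
     [9, 10],
     [11, 12]]

n, m, p = len(A), len(B), len(B[0])
C = [[sum(A[i][j] * B[j][k] for j in range(m)) for k in range(p)]
     for i in range(n)]
```

The result is a 2×2 matrix, illustrating that an (n×m) times (m×p) product has dimensions n×p.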
35
SOLO Matrices
Operations with Matrices
Multiplication of a Matrix by a Matrix (continue - 2)
Matrix multiplication is associative: A (B C) = (A B) C
Transpose of Matrix Multiplication: (A B)^T = B^T A^T
The matrix product is compatible with scalar multiplication: α (A B) = (α A) B = A (α B)
Matrix multiplication is distributive over matrix addition: (A + B) C = A C + B C,  A (B + C) = A B + A C
In general, Matrix Multiplication is not Commutative: A B ≠ B A
Return toTable of Content
36
SOLO Matrices
Operations with Matrices
Kronecker Multiplication of a Matrix by a Matrix

  A (n×m) ⊗ B (r×p) := [ a11 B  a12 B  …  a1m B
                         a21 B  a22 B  …  a2m B
                         ⋮
                         an1 B  an2 B  …  anm B ]  (an nr × mp Matrix)

Leopold Kronecker (1823 – 1891)

Properties

  (A ⊗ B) ⊗ C = A ⊗ (B ⊗ C)
  α (A ⊗ B) = (α A) ⊗ B = A ⊗ (α B)
  (A + B) ⊗ C = A ⊗ C + B ⊗ C
  A ⊗ (B + C) = A ⊗ B + A ⊗ C
Return toTable of Content
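The block structure of A ⊗ B (each aij replaced by the block aij·B) is easy to index directly: entry (i, j) of the product is A[i div r][j div p] · B[i mod r][j mod p]. A sketch with made-up 2×2 matrices:

```python
# Kronecker product A (n x m) kron B (r x p) -> (n*r) x (m*p) matrix.
def kron(A, B):
    n, m = len(A), len(A[0])
    r, p = len(B), len(B[0])
    return [[A[i // r][j // p] * B[i % r][j % p] for j in range(m * p)]
            for i in range(n * r)]

K = kron([[1, 2], [3, 4]], [[0, 1], [1, 0]])
```

Here each entry of the first matrix is replaced by a scaled copy of the 2×2 "swap" matrix, giving a 4×4 result.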
37
SOLO Matrices Operations with Matrices
Partition of a Matrix
A Matrix A (n×m) can be partitioned into submatrices:

  A (n×m) = [ A11  A12
              A21  A22 ]

where

  A11 (q×p) := [ a11 … a1p ; … ; aq1 … aqp ]
  A12 (q×(m−p)) := [ a1,p+1 … a1m ; … ; aq,p+1 … aqm ]
  A21 ((n−q)×p) := [ aq+1,1 … aq+1,p ; … ; an1 … anp ]
  A22 ((n−q)×(m−p)) := [ aq+1,p+1 … aq+1,m ; … ; an,p+1 … anm ]
38
SOLO Matrices Operations with Matrices
Partition of a Matrix (continue)
When the partitions are conformable (the column partition of A matches the row partition of B), the product of two partitioned matrices is

  A (n×m) B (m×r) = [ A11  A12 ] [ B11  B12 ]
                    [ A21  A22 ] [ B21  B22 ]

    = [ A11 B11 + A12 B21    A11 B12 + A12 B22
        A21 B11 + A22 B21    A21 B12 + A22 B22 ]
Return toTable of Content
39
SOLO Matrices Operations with Matrices
Elementary Operations with a Matrix
The Elementary Operations on the rows/columns of a Matrix A (n×m) are reversible (invertible).

1. Multiply the elements of a row/column by a nonzero scalar α.
A row operation is a left multiplication by the elementary matrix E(α ri) (the identity matrix with α in position (i, i)); a column operation is a right multiplication by E(α cj):

  A → E(α ri) A,  A → A E(α cj)

The reverse operation is to multiply the row/column elements by the scalar inverse 1/α:

  E((1/α) ri) E(α ri) = I (n),  E(α cj) E((1/α) cj) = I (m)

so that E((1/α) ri) (E(α ri) A) = A and (A E(α cj)) E((1/α) cj) = A.

The reverse operations are written as:

  (E(α ri))^(-1) = E((1/α) ri)  &  (E(α cj))^(-1) = E((1/α) cj)
40
SOLO Matrices Operations with Matrices
Elementary Operations with a Matrix (continue – 1)
The Elementary Operations on the rows/columns of a Matrix A (n×m) are reversible (invertible).

2.a Multiply each element of row i by the scalar α and add to the elements of row j.
This is a left multiplication by the elementary matrix E(α ri + rj): the identity matrix with the extra entry α in position (j, i). In the product E(α ri + rj) A, row j becomes

  [ aj1 + α ai1,  aj2 + α ai2,  …,  ajm + α aim ]

while all other rows are unchanged.
The reverse operation is to multiply each element of row i by the scalar (−α) and add to the elements of row j:

  E(−α ri + rj) E(α ri + rj) = I (n),  E(−α ri + rj) (E(α ri + rj) A) = A
41
SOLO Matrices Operations with Matrices
Elementary Operations with a Matrix (continue – 2)
The Elementary Operations on the rows/columns of a Matrix A (n×m) are reversible (invertible).

2.b Multiply each element of column i by the scalar α and add to the elements of column j.
This is a right multiplication by the elementary matrix E(α ci + cj): the identity matrix with the extra entry α in position (i, j). In the product A E(α ci + cj), column j becomes

  [ a1j + α a1i,  a2j + α a2i,  …,  anj + α ani ]^T

while all other columns are unchanged.
The reverse operation is to multiply each element of column i by the scalar (−α) and add to the elements of column j:

  E(α ci + cj) E(−α ci + cj) = I (m),  (A E(α ci + cj)) E(−α ci + cj) = A
2.b Multiply each element of column i by the scalar α and add to elements of column j
42
SOLO Matrices Operations with Matrices
Elementary Operations with a Matrix (continue – 3)
The Elementary Operations on the rows/columns of a Matrix A (n×m) are reversible (invertible).

3.a Interchange row i with row j.
This is a left multiplication by the permutation matrix E(ri ↔ rj): the identity matrix with rows i and j interchanged. The reverse operation is again to interchange row j with row i:

  E(ri ↔ rj) E(ri ↔ rj) = I (n),  E(ri ↔ rj) (E(ri ↔ rj) A) = A

so the interchange matrix is its own inverse:

  (E(ri ↔ rj))^(-1) = E(ri ↔ rj)
43
SOLO Matrices Operations with Matrices
Elementary Operations with a Matrix (continue – 4)
The Elementary Operations on the rows/columns of a Matrix A (n×m) are reversible (invertible).

3.b Interchange column i with column j.
This is a right multiplication by the permutation matrix E(ci ↔ cj): the identity matrix with columns i and j interchanged. The reverse operation is again to interchange column j with column i:

  E(ci ↔ cj) E(ci ↔ cj) = I (m),  (A E(ci ↔ cj)) E(ci ↔ cj) = A

so the interchange matrix is its own inverse:

  (E(ci ↔ cj))^(-1) = E(ci ↔ cj)
Return toTable of Content
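All three elementary operations can be realized as multiplications by elementary matrices, and composing each one with its stated inverse restores the original matrix. A sketch (the 2×2 matrix and scalars are made up for illustration):

```python
# Elementary row operations as left-multiplication by elementary matrices.
def eye(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def E_scale(n, i, alpha):       # multiply row i by alpha
    E = eye(n)
    E[i][i] = alpha
    return E

def E_add(n, i, j, alpha):      # add alpha * (row i) to row j
    E = eye(n)
    E[j][i] = alpha
    return E

def E_swap(n, i, j):            # interchange rows i and j
    E = eye(n)
    E[i][i] = E[j][j] = 0
    E[i][j] = E[j][i] = 1
    return E

A = [[1, 2], [3, 4]]
# each operation followed by its inverse operation restores A
A1 = matmul(E_scale(2, 0, 0.5), matmul(E_scale(2, 0, 2.0), A))
A2 = matmul(E_add(2, 0, 1, -3), matmul(E_add(2, 0, 1, 3), A))
A3 = matmul(E_swap(2, 0, 1), matmul(E_swap(2, 0, 1), A))
```

The scaling inverse uses 1/α, the addition inverse uses −α, and the swap is its own inverse, exactly as on the slides.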
44
SOLO Matrices Operations with Matrices
Rank of a Matrix
Given a Matrix A (n×m) we want, by using Elementary (reversible) Operations, to reduce it to a Matrix with a Main Diagonal of units and zeros in all other positions.

  A (n×m) = [ a11 a12 … a1m ; a21 a22 … a2m ; … ; an1 an2 … anm ]

Assume that a11 ≠ 0. If this is not the case, interchange the first row/column with another (an Elementary Operation) until this is satisfied. Divide the elements of the first row by a11. For i = 2, …, n multiply the first row by (−ai1/a11) and add to row i (an Elementary Operation) to obtain:

  E(row 1) A (n×m) = [ 1    a12/a11            …  a1m/a11
                       0    a22 − a21 a12/a11  …  a2m − a21 a1m/a11
                       ⋮
                       0    an2 − an1 a12/a11  …  anm − an1 a1m/a11 ]
45
SOLO Matrices Operations with Matrices
Rank of a Matrix (continue – 1)
Repeat this procedure for second column (starting at the new a22), third column (starting at the new a33), and so on, as long as we ca obtain non-zero elements on the main diagonal,using the rows bellow. At the end we obtain:
r
a
aa
aaa
AEEE nm
mr
mr
rowrowrrow nxm
0000
0000
'100
''10
'''1
22
1112
1_2__
←r
Define the multiplications of Elementary Operations as:1_2__: rowrowrrow EEEP
Those Elementary Operations can be reversed in opposite order to obtain:
1_
12_
11_
1 : rrowrowrow EEEP nIPP 1
46
SOLO Matrices Operations with Matrices
Rank of a Matrix (continue – 2)
Now use column operations, starting with the first column, in order to nullify all the elements above the Main Unit Diagonal:

  E(row r) … E(row 2) E(row 1) A (n×m) E(col 1) E(col 2) … E(col r) = [ I (r)  0 (r×(m−r))
                                                                        0 ((n−r)×r)  0 ((n−r)×(m−r)) ]

Define the product of the Elementary Operations as:

  Q := E(col 1) E(col 2) … E(col r)

Those Elementary Operations can be reversed in opposite order to obtain:

  Q^(-1) := E(col r)^(-1) … E(col 2)^(-1) E(col 1)^(-1),  Q Q^(-1) = I (m)
47
SOLO Matrices Operations with Matrices
Rank of a Matrix (continue – 3)
We obtained:

  P A (n×m) Q = [ I (r) 0 ; 0 0 ]  and  A (n×m) = P^(-1) [ I (r) 0 ; 0 0 ] Q^(-1)

From the relation P A Q = [ I (r) 0 ; 0 0 ] we can see that the maximum number of Linearly Independent Rows and the maximum number of Linearly Independent Columns of the Matrix P A Q is r.

Since P A = [ I (r) 0 ; 0 0 ] Q^(-1) has at most r nonzero rows, the maximum number of Linearly Independent Rows of the Matrix P A is also r. But the Elementary Operations P do not change the number of Linearly Independent Rows of A, therefore:

The maximum number of Linearly Independent Rows of A = r

Since A Q = P^(-1) [ I (r) 0 ; 0 0 ] has at most r nonzero columns, the maximum number of Linearly Independent Columns of the Matrix A Q is also r. But the Elementary Operations Q do not change the number of Linearly Independent Columns of A, therefore:

The maximum number of Linearly Independent Columns of A = r
48
SOLO Matrices Operations with Matrices
Rank of a Matrix (continue – 4)
We obtained:

  P A (n×m) Q = [ I (r) 0 ; 0 0 ],  A (n×m) = P^(-1) [ I (r) 0 ; 0 0 ] Q^(-1)

The maximum number of Linearly Independent Rows of A (n×m) = the maximum number of Linearly Independent Columns of A (n×m) = r ≤ min (m, n) := Rank of the Matrix A (n×m)

Since in the Transpose of A we interchange the columns with the rows of A:

  A^T (m×n) = (Q^T)^(-1) [ I (r) 0 ; 0 0 ] (P^T)^(-1)

  Rank (A^T (m×n)) = Rank (A (n×m))
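The rank can be computed exactly as the slides describe: apply elementary row operations (forward Gaussian elimination) and count the nonzero rows that remain. A sketch using exact rational arithmetic (the test matrices are made up for illustration):

```python
from fractions import Fraction as F

# Rank via elementary row operations: eliminate below each pivot and
# count the pivot rows; elementary operations do not change the rank.
def rank(A):
    M = [[F(x) for x in row] for row in A]
    n, m = len(M), len(M[0])
    r = 0
    for col in range(m):
        pivot = next((i for i in range(r, n) if M[i][col] != 0), None)
        if pivot is None:
            continue                      # no pivot in this column
        M[r], M[pivot] = M[pivot], M[r]   # row interchange (op 3.a)
        for i in range(r + 1, n):
            f = M[i][col] / M[r][col]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]  # op 2.a
        r += 1
    return r

r1 = rank([[1, 2], [2, 4]])   # second row is twice the first
r2 = rank([[1, 0], [0, 1]])
r3 = rank([[1, 2], [2, 4]][0:2])  # same matrix, transposed below
r3 = rank([[1, 2], [2, 4]])
r4 = rank([[1, 2], [2, 4]])
```

For the first matrix the second row is a multiple of the first, so only one independent row survives; the identity has full rank 2.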
49
SOLO Matrices Operations with Matrices
Rank of a Matrix (continue – 5)
Rank of A B:

  Rank (A (n×m) B (m×p)) ≤ Rank (A (n×m))
  Rank (A (n×m) B (m×p)) ≤ Rank (B (m×p))

Proof
Assume: Rank (A (n×m)) = r ≤ min (n, m). Then

  P A (n×m) Q = [ I (r) 0 ; 0 0 ],  A (n×m) = P^(-1) [ I (r) 0 ; 0 0 ] Q^(-1)

and

  P A (n×m) B (m×p) = [ I (r) 0 ; 0 0 ] Q^(-1) B (m×p)

Therefore (P A B) has at most r nonzero rows:

  P Nonsingular ⇒ Rank (A B) = Rank (P A B) ≤ r = Rank (A)

Since (A B)^T = B^T A^T:

  Rank (A B) = Rank ((A B)^T) = Rank (B^T A^T) ≤ Rank (B^T) = Rank (B)

q.e.d.
50
SOLO Matrices Operations with Matrices
Rank of a Matrix (continue – 6)
If A and B are Square n×n Matrices then:

  Rank (A (n×n)) + Rank (B (n×n)) − n ≤ Rank (A (n×n) B (n×n)) ≤ min (Rank (A (n×n)), Rank (B (n×n)))

[3] K. Ogata, "State Space Analysis of Control Systems", Prentice-Hall, Inc., 1967, p. 104

Sylvester's Inequality:

  Rank (A (m×n)) + Rank (B (n×p)) − n ≤ Rank (A (m×n) B (n×p)) ≤ min (Rank (A (m×n)), Rank (B (n×p)))

James Joseph Sylvester (1814 – 1887)
[4] T. Kailath, "Linear Systems", Prentice-Hall, Inc., 1980, p. 654
Return toTable of Content
51
SOLO Matrices Operations with Matrices
Equivalence of Two Matrices
Two Matrices A (n×m) and B (n×m) are said to be Equivalent if and only if there exist a Nonsingular Matrix P (n×n) and a Nonsingular Matrix Q (m×m) such that A = P B Q.
This is the same as saying that A and B are Equivalent if and only if they have the same rank.

Proof
Since A and B have the same rank r, we can write:

  A = G [ I (r) 0 ; 0 0 ] H,  B = S [ I (r) 0 ; 0 0 ] T

where G, H, S, T are square invertible matrices. Then

  A = G [ I (r) 0 ; 0 0 ] H = (G S^(-1)) S [ I (r) 0 ; 0 0 ] T (T^(-1) H) = (G S^(-1)) B (T^(-1) H) = P B Q

P and Q are square invertible matrices since

  P := G S^(-1), P^(-1) = S G^(-1)  &  Q := T^(-1) H, Q^(-1) = H^(-1) T

q.e.d.
Return toTable of Content
52
SOLO Matrices Square Matrices
In a Square Matrix, the Number of Rows = the Number of Columns = n:

  A (n×n) = [ a11 a12 … a1n ; a21 a22 … a2n ; … ; an1 an2 … ann ]

Trace of a Square Matrix

  trace of A (n×n) = tr A (n×n) = Σ_{i=1}^{n} aii

Diagonal Square Matrix

  D (n×n) = [ a11 0 … 0
              0 a22 … 0
              ⋮
              0 0 … ann ],  dij = aii δij
Return toTable of Content
53
SOLO Matrices Square Matrices
Identity Matrix

  I (n×n) = [δij] = [ 1 0 … 0 ; 0 1 … 0 ; … ; 0 0 … 1 ]

  I (n×n) A (n×n) = A (n×n) I (n×n) = A (n×n)

Null Matrix

  O (n×n) = 0,  O (n×n) I (n×n) = I (n×n) O (n×n) = O (n×n)

Triangular Matrices
A Matrix whose elements below or above the main diagonal are all zero is called a Triangular Matrix.

Lower Triangular Matrix

  L (n×n) = [ a11 0 … 0 ; a21 a22 … 0 ; … ; an1 an2 … ann ]

Upper Triangular Matrix

  U (n×n) = [ a11 a12 … a1n ; 0 a22 … a2n ; … ; 0 0 … ann ]
Return toTable of Content
54
SOLO Matrices Square Matrices
Hessenberg Matrix
An Upper Hessenberg Matrix has zero entries below the first subdiagonal (hij = 0 for i > j + 1):

  U_H (n×n) = [ a11 a12 a13 … a1n
                a21 a22 a23 … a2n
                0   a32 a33 … a3n
                ⋮        ⋱
                0   …  0  an,n-1 ann ]

A Lower Hessenberg Matrix has zero entries above the first superdiagonal (hij = 0 for j > i + 1):

  L_H (n×n) = [ a11 a12 0   …  0
                a21 a22 a23 …  0
                a31 a32 a33 ⋱  ⋮
                ⋮              an-1,n
                an1 an2 …      ann ]

A Hessenberg Matrix is an "almost" Triangular Matrix.
Return toTable of Content
55
SOLO Matrices Square Matrices
Toeplitz Matrix
A Toeplitz Matrix, or "Diagonal-constant Matrix", named after Otto Toeplitz, is a Matrix in which each descending diagonal from left to right is constant (tij = a(i−j)):

  T (n×n) = [ a0    a-1  a-2  …  a-(n-1)
              a1    a0   a-1  …
              a2    a1   a0   …
              ⋮               ⋱
              an-1  …    a2   a1   a0 ]

Otto Toeplitz (1881 – 1940)

Hankel Matrix
A Hankel Matrix, named after Hermann Hankel, is closely related to a Toeplitz Matrix (a Hankel Matrix is an upside-down Toeplitz Matrix); it is a Matrix in which each ascending skew-diagonal from left to right is constant (hij = a(i+j)):

  H (n×n) = [ a0    a1  a2  …  an-1
              a1    a2      …  an
              a2            …
              ⋮           ⋱
              an-1  an  …      a2n-2 ]

Hermann Hankel (1839 – 1873)
Return toTable of Content
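Both structures are determined by a single sequence of values: a Toeplitz entry depends only on i − j, a Hankel entry only on i + j. A sketch (the value sequences are made up for illustration):

```python
# Toeplitz: t_ij = first_col[i-j] for i >= j, first_row[j-i] otherwise.
def toeplitz(first_col, first_row):
    n = len(first_col)
    return [[first_col[i - j] if i >= j else first_row[j - i]
             for j in range(n)] for i in range(n)]

# Hankel: h_ij = values[i + j]; values must have 2n - 1 entries.
def hankel(values, n):
    return [[values[i + j] for j in range(n)] for i in range(n)]

T = toeplitz([1, 2, 3], [1, 4, 5])
H = hankel([0, 1, 2, 3, 4], 3)
```

In T every descending diagonal is constant, and in H every ascending skew-diagonal is constant, matching the definitions above.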
56
SOLO Matrices
Householder Matrix
We want to compute the reflection x' of a vector x over a plane through the origin O defined by the unit normal n̂ (n̂^T n̂ = 1).
From the Figure we can see that:

  x' = x − 2 (n̂^T x) n̂ = (I − 2 n̂ n̂^T) x = H x

  H := I − 2 n̂ n̂^T,  n̂^T n̂ = 1

We can see that H is symmetric:

  H^T = (I − 2 n̂ n̂^T)^T = I − 2 n̂ n̂^T = H

Since the reflection preserves the length of x, H must also be orthogonal, i.e. H^T H = H H^T = I:

  H^T H = H H = (I − 2 n̂ n̂^T)(I − 2 n̂ n̂^T) = I − 2 n̂ n̂^T − 2 n̂ n̂^T + 4 n̂ (n̂^T n̂) n̂^T = I

Alston Scott Householder (1904 – 1993)
Return toTable of Content
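The Householder matrix and its properties can be verified numerically. A sketch with a made-up unit normal n̂ = (1/√2, 1/√2, 0): H = I − 2 n̂ n̂^T should be symmetric, length-preserving, and its own inverse:

```python
import math

# Householder reflection H = I - 2 n n^T for a unit normal n.
n_hat = [1 / math.sqrt(2), 1 / math.sqrt(2), 0.0]
H = [[(1 if i == j else 0) - 2 * n_hat[i] * n_hat[j] for j in range(3)]
     for i in range(3)]

x = [1.0, 0.0, 2.0]
Hx = [sum(H[i][j] * x[j] for j in range(3)) for i in range(3)]   # reflect
HHx = [sum(H[i][j] * Hx[j] for j in range(3)) for i in range(3)] # reflect back
```

Reflecting twice returns the original vector (H H = I), and the reflected vector has the same length as x (H is orthogonal).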
57
SOLO Matrices Square Matrices
Vandermonde Matrix
A Vandermonde Matrix V (n×n) (x1, x2, …, xn) is an n×n Matrix that has in its j-th row the entries x1^(j−1), x2^(j−1), …, xn^(j−1):

  V (n×n) (x1, x2, …, xn) = [ 1        1        …  1
                              x1       x2       …  xn
                              x1^2     x2^2     …  xn^2
                              ⋮
                              x1^(n−1) x2^(n−1) …  xn^(n−1) ]

Alexandre-Théophile Vandermonde (1735 – 1796)
Return toTable of Content
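A Vandermonde matrix is built directly from the powers of its nodes. A sketch with made-up nodes x = (1, 2, 3), following the row convention above (row j holds the (j−1)-th powers):

```python
# Vandermonde matrix: row j (0-based) holds x_i ** j for each node x_i.
xs = [1, 2, 3]
n = len(xs)
V = [[x ** j for x in xs] for j in range(n)]
```

The first row is all ones, the second row is the nodes themselves, and the third row their squares; these matrices arise naturally in polynomial interpolation.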
58
SOLO
Definitions:
Adjoint Operation (H):
A^H = (A*)^T (* is the complex conjugate and T is the transpose of the matrix)

Hermitian Matrix: A^H = A; Symmetric Matrix: A^T = A.
Hermitian = Symmetric if A has real components.

Skew-Hermitian Matrix: A^H = −A; Anti-Symmetric Matrix: A^T = −A.
Skew-Hermitian = Anti-Symmetric if A has real components.

Unitary Matrix: U^H = U^(-1); Orthonormal Matrix: O^T = O^(-1).
Unitary = Orthonormal if A has real components.

Pease, "Methods of Matrix Algebra", Mathematics in Science and Engineering Vol. 16, Academic Press, 1965
Charles Hermite (1822 – 1901)
Square Matrices
Hermitian Matrix, Skew-Hermitian Matrix, Unitary Matrix
Return toTable of Content
59
SOLO Matrices Square Matrices
Singular, Non-singular and Inverse of a Non-singular Square Matrix Anxn
We obtained:

  P A (n×n) Q = [ I (r) 0 ; 0 0 ],  A (n×n) = P^(-1) [ I (r) 0 ; 0 0 ] Q^(-1)

Singular Square Matrix A (n×n): r < n. Only r rows/columns of A are Linearly Independent.
Non-singular Square Matrix A (n×n): r = n. The n rows/columns of A are Linearly Independent.

For a Non-singular Matrix (r = n): P A Q = I (n), therefore A = P^(-1) Q^(-1), and:

  (Q P) A (n×n) = Q P P^(-1) Q^(-1) = I (n)
  A (n×n) (Q P) = P^(-1) Q^(-1) Q P = I (n)

The Matrix (Q P) is the Inverse of the Non-singular Matrix A:

  A (n×n)^(-1) = Q P

This result explains the Gauss–Jordan elimination algorithm that can be used to determine whether a given square matrix is invertible and to find the inverse.
Return to Table of Content
60
SOLO Matrices
Invertible Matrices
Matrix Inversion
• Gauss–Jordan elimination is an algorithm that can be used to determine whether a given matrix is invertible and to find the inverse.
• An alternative is the LU decomposition, which generates an upper and a lower triangular matrix, each of which is easier to invert.
• For special purposes, it may be convenient to invert matrices by treating mn-by-mn matrices as m-by-m matrices of n-by-n matrices, and applying one or another formula recursively (other sized matrices can be padded out with dummy rows and columns).
• For other purposes, a variant of Newton's method may be convenient (particularly when dealing with families of related matrices, so inverses of earlier matrices can be used to seed generating inverses of later matrices).
Square Matrices
61
SOLO Matrices
Invertible Matrices
Square Matrices
Gaussian elimination, which first appeared in the text Nine Chapters on the Mathematical Art written in 200 BC, was used by Gauss in his work which studied the orbit of the asteroid Pallas. Using observations of Pallas taken between 1803 and 1809, Gauss obtained a system of six linear equations in six unknowns. Gauss gave a systematic method for solving such equations which is precisely Gaussian elimination on the coefficient matrix.
Sketch of the orbits of Ceres and Pallas, by Gauss
http://www.math.rutgers.edu/~cherlin/History/Papers1999/weiss.html
Gauss published his methods in 1809 as "Theoria motus corporum coelestium in sectionibus conicus solem ambientium," or, "Theory of the motion of heavenly bodies moving about the sun in conic sections."
62
SOLO Matrices
Invertible Matrices
Gauss-Jordan elimination
In Linear Algebra, Gauss–Jordan elimination is an algorithm for getting matrices into reduced row echelon form using elementary row operations. It is a variation of Gaussian elimination. Gaussian elimination places zeros below each pivot in the matrix, starting with the top row and working downwards. Matrices containing zeros below each pivot are said to be in row echelon form. Gauss–Jordan elimination goes a step further by placing zeros above and below each pivot; such matrices are said to be in reduced row echelon form. Every matrix has a reduced row echelon form, and Gauss–Jordan elimination is guaranteed to find it.
Carl Friedrich Gauss (1777–1855)
Wilhelm Jordan ( 1842–1899)
See example
Square Matrices
63
SOLO Matrices
Invertible Matrices
Gauss-Jordan elimination
If the original square matrix, A, is given by the following expression:

  A (3×3) = [  2 −1  0
              −1  2 −1
               0 −1  2 ]

then, after augmenting the Matrix A by the Identity Matrix, the following is obtained:

  [A | I] = [  2 −1  0 | 1 0 0
              −1  2 −1 | 0 1 0
               0 −1  2 | 0 0 1 ]

Perform the following:
1. row1 + row2 → row1, equivalent to left multiplication by

  E(r1+r2→r1) = [ 1 1 0
                  0 1 0
                  0 0 1 ]

  E(r1+r2→r1) [A | I] = [  1  1 −1 | 1 1 0
                          −1  2 −1 | 0 1 0
                           0 −1  2 | 0 0 1 ]
Square Matrices
64
SOLO Matrices
Invertible Matrices
Gauss-Jordan elimination
2. row1 + row2 → row2, equivalent to left multiplication by

  E(r1+r2→r2) = [ 1 0 0
                  1 1 0
                  0 0 1 ]

  E(r1+r2→r2) E(r1+r2→r1) [A | I] = [ 1  1 −1 | 1 1 0
                                      0  3 −2 | 1 2 0
                                      0 −1  2 | 0 0 1 ]

3. (1/3) row2 → row2, equivalent to left multiplication by

  E((1/3)r2) = [ 1  0   0
                 0  1/3 0
                 0  0   1 ]

  E((1/3)r2) E(r1+r2→r2) E(r1+r2→r1) [A | I] = [ 1  1 −1   | 1   1   0
                                                 0  1 −2/3 | 1/3 2/3 0
                                                 0 −1  2   | 0   0   1 ]
Square Matrices
65
SOLO Matrices
Invertible Matrices
Gauss-Jordan elimination
4. row2 + row3 → row3, equivalent to left multiplication by

  E(r2+r3→r3) = [ 1 0 0
                  0 1 0
                  0 1 1 ]

  result: [ 1 1 −1   | 1   1   0
            0 1 −2/3 | 1/3 2/3 0
            0 0  4/3 | 1/3 2/3 1 ]

5. row1 − row2 → row1, equivalent to left multiplication by

  E(r1−r2→r1) = [ 1 −1 0
                  0  1 0
                  0  0 1 ]

  result: [ 1 0 −1/3 | 2/3 1/3 0
            0 1 −2/3 | 1/3 2/3 0
            0 0  4/3 | 1/3 2/3 1 ]
Square Matrices
66
SOLO Matrices
Invertible Matrices
Gauss-Jordan elimination
6. (3/4) row3 → row3, equivalent to left multiplication by

  E((3/4)r3) = [ 1 0 0
                 0 1 0
                 0 0 3/4 ]

  result: [ 1 0 −1/3 | 2/3 1/3 0
            0 1 −2/3 | 1/3 2/3 0
            0 0  1   | 1/4 1/2 3/4 ]

7. (1/3) row3 + row1 → row1, equivalent to left multiplication by

  E((1/3)r3+r1→r1) = [ 1 0 1/3
                       0 1 0
                       0 0 1 ]

  result: [ 1 0  0   | 3/4 1/2 1/4
            0 1 −2/3 | 1/3 2/3 0
            0 0  1   | 1/4 1/2 3/4 ]
Square Matrices
67
SOLO Matrices
Invertible Matrices
Gauss-Jordan elimination
8. (2/3) row3 + row2 → row2, equivalent to left multiplication by

  E((2/3)r3+r2→r2) = [ 1 0 0
                       0 1 2/3
                       0 0 1 ]

  result:

  B [A | I] = [I | B] = [ 1 0 0 | 3/4 1/2 1/4
                          0 1 0 | 1/2  1  1/2
                          0 0 1 | 1/4 1/2 3/4 ]

where

  B := E((2/3)r3+r2→r2) E((1/3)r3+r1→r1) E((3/4)r3) E(r1−r2→r1) E(r2+r3→r3) E((1/3)r2) E(r1+r2→r2) E(r1+r2→r1)

We found B [A | I] = [B A | B] = [I | B], i.e. B A = I, therefore B = A^(-1):

  A^(-1) = [ 3/4 1/2 1/4
             1/2  1  1/2
             1/4 1/2 3/4 ]

Therefore Gauss-Jordan elimination transforms [A | I] into [I | A^(-1)].
Square Matrices
Return toTable of Content
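The elimination above can be sketched in code. This is a minimal Python sketch (not part of the original slides) that runs Gauss-Jordan elimination on the augmented matrix [A | I], using exact rational arithmetic so the entries 3/4, -1/2, 1/4 come out exactly:

```python
from fractions import Fraction

def gauss_jordan_inverse(a):
    """Invert a square matrix by Gauss-Jordan elimination on [A | I]."""
    n = len(a)
    # Build the augmented matrix [A | I] with exact rational entries.
    m = [[Fraction(a[i][j]) for j in range(n)] +
         [Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for col in range(n):
        # Find a row with a nonzero pivot and swap it into place.
        pivot = next(r for r in range(col, n) if m[r][col] != 0)
        m[col], m[pivot] = m[pivot], m[col]
        # Scale the pivot row so the pivot becomes 1.
        p = m[col][col]
        m[col] = [x / p for x in m[col]]
        # Eliminate the pivot column from every other row.
        for r in range(n):
            if r != col and m[r][col] != 0:
                f = m[r][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    # The right half of [I | B] is the inverse.
    return [row[n:] for row in m]

A = [[2, 1, 0], [1, 2, 1], [0, 1, 2]]
Ainv = gauss_jordan_inverse(A)
# Ainv == [[3/4, -1/2, 1/4], [-1/2, 1, -1/2], [1/4, -1/2, 3/4]]
```

The exact sequence of elementary operations differs from the slides' hand计算 order, but the end state [ I | A^-1 ] is the same.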
68
The first to use the term 'matrix' was Sylvester in 1850. Sylvester defined a matrix to be an oblong arrangement of terms and saw it as something which led to various determinants from square arrays contained within it. After leaving America and returning to England in 1851, Sylvester became a lawyer and met Cayley, a fellow lawyer who shared his interest in mathematics. Cayley quickly saw the significance of the matrix concept and by 1853 Cayley had published a note giving, for the first time, the inverse of a matrix.
Arthur Cayley (1821 - 1895)
Cayley in 1858 published “Memoir on the Theory of Matrices”, which is remarkable for containing the first abstract definition of a matrix. He shows that the coefficient arrays studied earlier for quadratic forms and for linear transformations are special cases of his general concept. Cayley gave a matrix algebra defining addition, multiplication, scalar multiplication and inverses. He gave an explicit construction of the inverse of a matrix in terms of the determinant of the matrix. Cayley also proved that, in the case of 2x2 matrices, a matrix satisfies its own characteristic equation.
James Joseph Sylvester
1814 - 1897
http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/Matrices_and_determinants.html#86
Return toTable of Content
69
SOLO Matrices Square Matrices
L, U Factorization of a Square Matrix A by Elementary Operations
Given a Square Matrix (Number of Rows = Number of Columns = n)

Anxn = [ a11 a12 ... a1n ; a21 a22 ... a2n ; ... ; an1 an2 ... ann ]

Consider the following Simple Operations on the rows/columns of A to obtain a U Triangular Matrix (all elements below the Main Diagonal are 0):

1. Multiply the elements of a row/column by a nonzero scalar c. This is equivalent to multiplication by the elementary matrix E(c ri) (or E(c ci)), which is the Identity Matrix with c in position (i, i): E(c ri) A scales row i, and A E(c ci) scales column i.

2. Multiply each element of row i by the scalar α and add to the elements of row j. This is equivalent to left multiplication by E(α ri + rj → rj), which is the Identity Matrix with α in position (j, i):

E(α ri + rj → rj) A
L,U factorization was proposed by Heinz Rutishauser in 1955.
70
SOLO Matrices Square Matrices
L, U Factorization of a Matrix A by Elementary Operations
Given a Square Matrix (Number of Rows = Number of Columns = n), for example:

A3x3 = [ 2 1 0 ; 1 2 1 ; 0 1 2 ]

Consider the following Simple Operations on the rows of A to obtain a U Triangular Matrix (all elements below the Main Diagonal are 0):

1. (-1/2) row1 + row2 → row2, equivalent to left multiplication by

E((-1/2)r1+r2→r2) = [ 1 0 0 ; -1/2 1 0 ; 0 0 1 ]

E((-1/2)r1+r2→r2) A = [ 2 1 0 ; 0 3/2 1 ; 0 1 2 ]

2. (-2/3) row2 + row3 → row3, equivalent to left multiplication by

E((-2/3)r2+r3→r3) = [ 1 0 0 ; 0 1 0 ; 0 -2/3 1 ]

E((-2/3)r2+r3→r3) E((-1/2)r1+r2→r2) A = [ 2 1 0 ; 0 3/2 1 ; 0 0 4/3 ] = U
71
SOLO Matrices Square Matrices
L, U Factorization of a Matrix A by Elementary Operations
we found:

E((-2/3)r2+r3→r3) E((-1/2)r1+r2→r2) A = U = [ 2 1 0 ; 0 3/2 1 ; 0 0 4/3 ]

To Undo the Simple Operations and obtain A again, perform:

1. (2/3) row2 + row3 → row3, equivalent to left multiplication by

E((2/3)r2+r3→r3) = [ 1 0 0 ; 0 1 0 ; 0 2/3 1 ]

We can see that:

E((2/3)r2+r3→r3) E((-2/3)r2+r3→r3) = [ 1 0 0 ; 0 1 0 ; 0 0 1 ]

so E((2/3)r2+r3→r3) is the Inverse Operation, and we write E((2/3)r2+r3→r3) = [E((-2/3)r2+r3→r3)]^-1.

2. (1/2) row1 + row2 → row2, equivalent to left multiplication by

E((1/2)r1+r2→r2) = [ 1 0 0 ; 1/2 1 0 ; 0 0 1 ]

and likewise

E((1/2)r1+r2→r2) E((-1/2)r1+r2→r2) = [ 1 0 0 ; 0 1 0 ; 0 0 1 ]

so E((1/2)r1+r2→r2) = [E((-1/2)r1+r2→r2)]^-1.
72
SOLO Matrices Square Matrices
L, U Factorization of a Matrix A by Elementary Operations
we found:

L := [E((-1/2)r1+r2→r2)]^-1 [E((-2/3)r2+r3→r3)]^-1 = [ 1 0 0 ; 1/2 1 0 ; 0 0 1 ] [ 1 0 0 ; 0 1 0 ; 0 2/3 1 ] = [ 1 0 0 ; 1/2 1 0 ; 0 2/3 1 ]

and therefore

A = [E((-1/2)r1+r2→r2)]^-1 [E((-2/3)r2+r3→r3)]^-1 { E((-2/3)r2+r3→r3) E((-1/2)r1+r2→r2) A } = L U

Therefore we obtained an L U factorization of the Square Matrix A:

L U = [ 1 0 0 ; 1/2 1 0 ; 0 2/3 1 ] [ 2 1 0 ; 0 3/2 1 ; 0 0 4/3 ] = [ 2 1 0 ; 1 2 1 ; 0 1 2 ] = A

We can have 1 on the diagonal of the U Matrix by introducing the Diagonal Matrix D:

A = L D U = [ 1 0 0 ; 1/2 1 0 ; 0 2/3 1 ] [ 2 0 0 ; 0 3/2 0 ; 0 0 4/3 ] [ 1 1/2 0 ; 0 1 2/3 ; 0 0 1 ] = [ 2 1 0 ; 1 2 1 ; 0 1 2 ]

Return to Table of Content
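The L U (and L D U) factorization above can be reproduced with a short sketch. This is a minimal Python implementation of the elimination-based (Doolittle-style) factorization, assuming no pivoting is needed (true for this example); the helper name is mine:

```python
from fractions import Fraction

def lu_ldu(a):
    """A = L*U with unit lower-triangular L, then A = L*D*U1 with unit diagonal U1."""
    n = len(a)
    u = [[Fraction(x) for x in row] for row in a]
    l = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(k + 1, n):
            # Row operation r_i <- r_i - mult * r_k zeroes entry (i, k);
            # the multiplier goes into L (it encodes the inverse operation).
            mult = u[i][k] / u[k][k]
            l[i][k] = mult
            u[i] = [x - mult * y for x, y in zip(u[i], u[k])]
    d = [u[i][i] for i in range(n)]
    u1 = [[u[i][j] / d[i] for j in range(n)] for i in range(n)]
    return l, u, d, u1

L, U, D, U1 = lu_ldu([[2, 1, 0], [1, 2, 1], [0, 1, 2]])
# L = [1 0 0; 1/2 1 0; 0 2/3 1], U = [2 1 0; 0 3/2 1; 0 0 4/3], D = [2, 3/2, 4/3]
```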
73
SOLO Matrices Square Matrices
Diagonalization of a Square Matrix A by Elementary Operations
we found:

E((-2/3)r2+r3→r3) E((-1/2)r1+r2→r2) A = U = [ 2 1 0 ; 0 3/2 1 ; 0 0 4/3 ]

Continue with elementary operations that zero the elements above the Main Diagonal:

1. (-2/3) row2 + row1 → row1, equivalent to left multiplication by

E((-2/3)r2+r1→r1) = [ 1 -2/3 0 ; 0 1 0 ; 0 0 1 ]

E((-2/3)r2+r1→r1) U = [ 2 0 -2/3 ; 0 3/2 1 ; 0 0 4/3 ]

2. (-3/4) row3 + row2 → row2 and (1/2) row3 + row1 → row1, equivalent to left multiplication by

E = [ 1 0 1/2 ; 0 1 -3/4 ; 0 0 1 ]

which gives the Diagonal Matrix

D = [ 2 0 0 ; 0 3/2 0 ; 0 0 4/3 ]

Return to Table of Content
74
Let A be any square n by n matrix over a field F
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
Write A in terms of its rows r1, ..., rn or its columns c1, ..., cn:

Anxn = [ a11 a12 ... a1k ... a1n ; a21 a22 ... a2k ... a2n ; ... ; an1 an2 ... ank ... ann ] = [ r1 ; r2 ; ... ; rn ] = [ c1 c2 ... cn ]

To each Matrix A we associate a scalar called the Determinant, i.e. det A or |A|, defined by the following 4 properties:

1  The Determinant of the Identity Matrix In is 1:

det In = det [ 1 0 ... 0 ; 0 1 ... 0 ; ... ; 0 0 ... 1 ] = 1

2  If the Matrix A has two identical rows/columns, the Determinant of A is zero:

det [ r1 ; ... ; r ; ... ; r ; ... ; rn ] = 0,   det [ c1 ... c ... c ... cn ] = 0
75
Let A be any square n by n matrix over a field F
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
To each Matrix A we associate a scalar called Determinant; i.e. det A or |A| defined by the following 4 properties:
3 If each element of a row/column of the Matrix A is the sum of two terms, the Determinant of A is the sum of the two Determinants formed by the separation of the terms
det [ r1 ; ... ; rk + r'k ; ... ; rn ] = det [ r1 ; ... ; rk ; ... ; rn ] + det [ r1 ; ... ; r'k ; ... ; rn ]

det [ c1 ... ck + c'k ... cn ] = det [ c1 ... ck ... cn ] + det [ c1 ... c'k ... cn ]
76
Let A be any square n by n matrix over a field F
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
4  If the elements of a row/column of the Matrix A have a common factor λ, then the Determinant of A is equal to the product of λ and the Determinant of the Matrix obtained by dividing that row/column by λ:

det [ a11 ... a1n ; ... ; λ ak1 ... λ akn ; ... ; an1 ... ann ] = λ det [ a11 ... a1n ; ... ; ak1 ... akn ; ... ; an1 ... ann ]
77
SOLO Matrices & Determinants History
http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/Matrices_and_determinants.html#86
The idea of a determinant appeared in Japan and Europe at almost exactly the same time although Seki in Japan certainly published first. In 1683 Seki wrote “Method of solving the dissimulated problems “ which contains matrix methods written as tables. Without having any word which corresponds to 'determinant' Seki still introduced determinants and gave general methods for calculating them based on examples. Using his 'determinants' Seki was able to find determinants of 2x2, 3x3, 4x4 and 5x5 matrices and applied them to solving equations but not systems of linear equations.
Takakazu Shinsuke Seki (1642 - 1708)

Rather remarkably, the first appearance of a determinant in Europe occurred in exactly the same year, 1683. In that year Leibniz wrote to de l'Hôpital. He explained that the system of equations

10 + 11x + 12y = 0
20 + 21x + 22y = 0
30 + 31x + 32y = 0

had a solution because

10·21·32 + 11·22·30 + 12·20·31 = 10·22·31 + 11·20·32 + 12·21·30

which is exactly the condition that the coefficient matrix has determinant 0.
Gottfried Wilhelm von Leibniz (1646 - 1716)
78
SOLO Matrices & Determinants History
http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/Matrices_and_determinants.html#86
Leibniz used the word 'resultant' for certain combinatorial sums of terms of a determinant. He proved various results on resultants including what is essentially Cramer's rule. He also knew that a determinant could be expanded using any column - what is now called the Laplace expansion. As well as studying coefficient systems of equations which led him to determinants, Leibniz also studied coefficient systems of quadratic forms which led naturally towards matrix theory.
Gottfried Wilhelm von Leibniz
1646 - 1716
Gabriel Cramer (1704-1752)
In the 1730's Maclaurin wrote Treatise of Algebra, although it was not published until 1748, two years after his death. It contains the first published results on determinants, proving Cramer's rule for 2x2 and 3x3 systems and indicating how the 4x4 case would work. Cramer gave the general rule for nxn systems in a paper Introduction to the analysis of algebraic curves (1750). It arose out of a desire to find the equation of a plane curve passing through a number of given points.
Cramer does go on to explain precisely how one calculates these terms as products of certain coefficients in the equations and how one determines the sign. He also says how the n numerators of the fractions can be found by replacing certain coefficients in this calculation by constant terms of the system.
Colin Maclaurin (1698 - 1746)
79
An axiomatic definition of a determinant was used by Weierstrass in his lectures and, after his death, it was published in 1903 in the note ‘On Determinant Theory‘. In the same year Kronecker's lectures on determinants were also published, again after his death. With these two publications the modern theory of determinants was in place but matrix theory took slightly longer to become a fully accepted theory.
Karl Theodor Wilhelm Weierstrass (1815 - 1897)
Leopold Kronecker (1823 - 1891)
Determinant
Weierstrass' Definition of the Determinant of an nxn Matrix A:
(1) det (A) is linear in the rows of A
(2) Interchanging two rows changes the sign of det (A)
(3) det (In) = 1
For each positive integer n, there is exactly one function with these three properties.
http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/Matrices_and_determinants.html#86
http://www.sandgquinn.org/stonehill/MA251/notes/Weierstrass.pdf
80
Let A be any square n by n matrix over a field F
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
Using the 4 properties that define the Determinant of a Square Matrix more properties can be derived
5 If in a Matrix Determinant we interchange two rows/columns the sign of the Determinant will change.
Proof: given det [ c1 ... ci ... cj ... cn ], consider the matrix in which both the i-th and the j-th columns are replaced by ci + cj. By Property (2) its determinant is zero; expanding by Property (3):

0 = det [ c1 ... ci+cj ... ci+cj ... cn ]
  = det [ ... ci ... ci ... ] + det [ ... ci ... cj ... ] + det [ ... cj ... ci ... ] + det [ ... cj ... cj ... ]
  = 0 + det [ ... ci ... cj ... ] + det [ ... cj ... ci ... ] + 0

therefore

det [ c1 ... ci ... cj ... cn ] = - det [ c1 ... cj ... ci ... cn ]    q.e.d.
81
Let A be any square n by n matrix over a field F
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
Using the 4 properties that define the Determinant of a Square Matrix more properties can be derived
6 The Matrix Determinant is unchanged if we add to a row/column any linear combination of the other rows/columns.
Proof: given det [ c1 ... ci ... cn ], add to the column ci the linear combination Σ_{j≠i} αj cj. By Properties (3) and (4):

det [ c1 ... ci + Σ_{j≠i} αj cj ... cn ] = det [ c1 ... ci ... cn ] + Σ_{j≠i} αj det [ c1 ... cj ... cn ]   (cj placed in position i)

Each determinant inside the sum has the column cj appearing twice, so by Property (2) it is zero, leaving det [ c1 ... ci ... cn ].    q.e.d.
82
Let A be any square n by n matrix over a field F
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
Using the 4 properties that define the Determinant of a Square Matrix more properties can be derived
7 If a row/column is a Linear Combination of other rows/columns the Determinant is zero.
Proof: if ci = Σ_{j≠i} αj cj, then by Properties (3) and (4)

det A = Σ_{j≠i} αj det [ c1 ... cj ... cn ]   (cj placed in position i)

and every determinant in the sum has two identical columns, hence is zero by Property (2); therefore det A = 0.    q.e.d.
83
Let A be any square n by n matrix over a field F
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
Using the 4 properties that define the Determinant of a Square Matrix more properties can be derived
8 Leibniz formula for determinants
det A = Σ over all Permutations (i1, i2, ..., in) of (1, 2, ..., n) of (-1)^L a_{1 i1} a_{2 i2} ... a_{n in}

The meaning of this equation is that in each product there are no two elements of the same row or the same column, and the sign of the product is a function of the position of each element in the Matrix: (-1)^L, where L is the number of interchanges needed to bring (i1, i2, ..., in) to the natural order (1, 2, ..., n).

Gottfried Wilhelm Leibniz (1646 - 1716)
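As a check on the formula, a direct (if exponentially slow, O(n·n!)) implementation simply sums over all permutations; this sketch is illustrative, not from the slides:

```python
from itertools import permutations

def det_leibniz(a):
    """Determinant via the Leibniz sum over all permutations of column indices."""
    n = len(a)
    total = 0
    for perm in permutations(range(n)):
        # L = number of inversions of the permutation; sign = (-1)^L.
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if perm[i] > perm[j])
        prod = 1
        for row, col in enumerate(perm):
            prod *= a[row][col]  # one factor per row, no column repeated
        total += (-1) ** inversions * prod
    return total

print(det_leibniz([[2, 1, 0], [1, 2, 1], [0, 1, 2]]))  # 4
```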
84
Let A be any square n by n matrix over a field F
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
Proof:
From Properties (3) and (4) of the Determinant, write the first row as r1 = Σ_{i1=1}^{n} a_{1 i1} e_{i1}, where e_i := (0 ... 0 1 0 ... 0) is the unit row vector with the 1 in column i. Then

det A = det [ r1 ; r2 ; ... ; rn ] = Σ_{i1=1}^{n} a_{1 i1} det [ e_{i1} ; r2 ; ... ; rn ]

Repeating for the second row:

det A = Σ_{i1} Σ_{i2} a_{1 i1} a_{2 i2} det [ e_{i1} ; e_{i2} ; r3 ; ... ; rn ]

From Property (2), if two rows are identical the determinant is zero; therefore, in the summation over i2 we can delete the case i2 = i1.
85
Let A be any square n by n matrix over a field F
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
Proof (continue 1):
From Properties (2), (3) and (4) of the Determinant, continuing row by row through all n rows:

det A = Σ_{i1} Σ_{i2≠i1} ... Σ_{in≠i1,...,i(n-1)} a_{1 i1} a_{2 i2} ... a_{n in} det [ e_{i1} ; e_{i2} ; ... ; e_{in} ]

The surviving index sets (i1, i2, ..., in) are exactly the Permutations of (1, 2, ..., n).
86
Let A be any square n by n matrix over a field F
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
Proof (continue 2):
Let us interchange the rows of [ e_{i1} ; e_{i2} ; ... ; e_{in} ] to obtain the Unit Matrix; according to Property (5), each interchange causes a change of sign of the determinant. We also use Property (1), that the determinant of the Unit Matrix is 1:

det [ e_{i1} ; e_{i2} ; ... ; e_{in} ] = (-1)^L det [ e_1 ; e_2 ; ... ; e_n ] = (-1)^L

where L is the Number of Permutations (interchanges) necessary to go from (i1, i2, ..., in) to (1, 2, ..., n). Therefore

det A = Σ_{Permutations} (-1)^L a_{1 i1} a_{2 i2} ... a_{n in}    q.e.d.
87
Let A be any square n by n matrix over a field F
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
Using the 4 properties that define the Determinant of a Square Matrix more properties can be derived
9 A Determinant can be expanded along a row or column using Laplace's Formula:
det A = Σ_{k=1}^{n} a_{ik} C_{i,k} = Σ_{k=1}^{n} a_{ik} (-1)^{i+k} M_{i,k}

where C_{i,k} represents the i,k element of the matrix of cofactors; i.e. C_{i,k} is (-1)^{i+k} times the minor M_{i,k}, which is the determinant of the matrix that results from A by removing the i-th row and the k-th column, and n is the order of the matrix.
Pierre-Simon, marquis de Laplace
1749 - 1827
M_{i,k} = det [ a11 ... a1(k-1) a1(k+1) ... a1n ; ... ; a(i-1)1 ... a(i-1)(k-1) a(i-1)(k+1) ... a(i-1)n ; a(i+1)1 ... a(i+1)(k-1) a(i+1)(k+1) ... a(i+1)n ; ... ; an1 ... an(k-1) an(k+1) ... ann ]
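A minimal recursive sketch of the cofactor (Laplace) expansion along the first row — illustrative only, and O(n!) in cost:

```python
def det_laplace(a):
    """Determinant by Laplace cofactor expansion along row 0."""
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for k in range(n):
        # Minor M[0][k]: delete row 0 and column k.
        minor = [row[:k] + row[k + 1:] for row in a[1:]]
        total += (-1) ** k * a[0][k] * det_laplace(minor)
    return total

print(det_laplace([[2, 1, 0], [1, 2, 1], [0, 1, 2]]))  # 4
```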
88
Let A be any square n by n matrix over a field F
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
9  Laplace's Formula:

det A = Σ_{k=1}^{n} a_{ik} C_{i,k} = Σ_{k=1}^{n} a_{ik} (-1)^{i+k} M_{i,k}

Proof:
From Properties (3) and (4) of the Determinant, using Row summation, write row i as Σ_{k} a_{ik} e_k:

det A = Σ_{k=1}^{n} a_{ik} det [ r1 ; ... ; r(i-1) ; e_k ; r(i+1) ; ... ; rn ]

From Properties (3) and (5): move the row e_k to the top by i-1 successive row interchanges, and move column k to the front by k-1 successive column interchanges. Each interchange multiplies the determinant by (-1), giving the overall factor (-1)^{(i-1)+(k-1)} = (-1)^{i+k}, and the determinant takes the form

(-1)^{i+k} det [ 1 0 ... 0 ; * (A with row i and column k removed) ]
89
Let A be any square n by n matrix over a field F
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
9  Laplace's Formula:

det A = Σ_{k=1}^{n} a_{ik} C_{i,k} = Σ_{k=1}^{n} a_{ik} (-1)^{i+k} M_{i,k}

Proof (continue 1):
Expanding that determinant along its first row leaves exactly the minor M_{i,k}, which is the determinant of the matrix that results from A by removing the i-th row and the k-th column. We obtain

C_{i,k} := (-1)^{i+k} M_{i,k},   det A = Σ_{k=1}^{n} a_{ik} C_{i,k}    q.e.d.

In the same way we can use Column summation to obtain

det A = Σ_{i=1}^{n} a_{ik} C_{i,k}   (expansion along column k)
90
Let A be any square n by n matrix over a field F
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
10  A^-1, the Inverse of Matrix A with det A ≠ 0, is unique and given by:

A^-1 = adj A / det A,   where adj A := [ C_{1,1} C_{2,1} ... C_{n,1} ; C_{1,2} C_{2,2} ... C_{n,2} ; ... ; C_{1,n} C_{2,n} ... C_{n,n} ]

adj A is the adjugate of A (the transpose of the matrix of cofactors).

Proof:

A (adj A) = [ det A 0 ... 0 ; 0 det A ... 0 ; ... ; 0 0 ... det A ] = det A · In

since

Σ_{j=1}^{n} a_{ij} C_{k,j} = det A if k = i, and 0 if k ≠ i

(for k = i this is the Laplace expansion along row i; for k ≠ i it is the Laplace expansion of a matrix whose k-th row has been replaced by row i, i.e. a matrix with two identical rows). Therefore, dividing by det A:

A (adj A / det A) = In,   so   A^-1 = adj A / det A

A^-1 exists if and only if det A ≠ 0, i.e. the n rows/columns of Anxn are Linearly Independent.    q.e.d.

Return to Characteristic Polynomial
Return to Cayley-Hamilton
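The adjugate construction can be checked directly. This sketch (helper names are mine) builds A^-1 = adj A / det A with exact fractions; note the transpose in the adjugate — entry (i, k) of the inverse uses the cofactor C_{k,i}:

```python
from fractions import Fraction

def det(a):
    """Determinant by cofactor expansion along row 0."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** k * a[0][k] * det([r[:k] + r[k + 1:] for r in a[1:]])
               for k in range(len(a)))

def adjugate_inverse(a):
    """A^-1 = adj(A) / det(A); adj(A) is the transpose of the cofactor matrix."""
    n = len(a)
    d = Fraction(det(a))
    if d == 0:
        raise ValueError("matrix is singular")
    def cof(i, k):
        minor = [r[:k] + r[k + 1:] for j, r in enumerate(a) if j != i]
        return (-1) ** (i + k) * det(minor)
    # Entry (i, k) of the inverse is C_{k,i} / det A.
    return [[Fraction(cof(k, i)) / d for k in range(n)] for i in range(n)]

Ainv = adjugate_inverse([[2, 1, 0], [1, 2, 1], [0, 1, 2]])
# [[3/4, -1/2, 1/4], [-1/2, 1, -1/2], [1/4, -1/2, 3/4]]
```

This reproduces the same inverse found earlier by Gauss-Jordan elimination.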
91
Let A be any square n by n matrix over a field F
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
10  A^-1, the Inverse of Matrix A with det A ≠ 0, is unique and given by A^-1 = adj A / det A (adj A is the adjugate of A).

Proof (continue – 1):
Uniqueness: Assume that there exists a second Matrix B such that B A = In. Multiplying A A^-1 = In from the left by B:

B = B In = B (A A^-1) = (B A) A^-1 = In A^-1 = A^-1    q.e.d.

A^-1 exists if and only if det A ≠ 0, i.e. the n rows/columns of Anxn are Linearly Independent.
92
SOLO Matrices
Gabriel Cramer (1704-1752)
Cramer's rule is a theorem, which gives an expression for the solution of a system of linear equations with as many equations as unknowns, valid in those cases where there is a unique solution. The solution is expressed in terms of the determinants of the (square) coefficient matrix and of matrices obtained from it by replacing one column by the vector of right hand sides of the equations.
Given n linear equations with n variables x1, x2, ..., xn:

a11 x1 + a12 x2 + ... + a1k xk + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2k xk + ... + a2n xn = b2
...
an1 x1 + an2 x2 + ... + ank xk + ... + ann xn = bn

Cramer's Rule states that the solution of this system is

x_k = det ( A with its k-th column replaced by the column b = (b1, b2, ..., bn) ) / det A,   k = 1, 2, ..., n

if the determinant that we divide by is not equal to zero.

Determinant of a Square Matrix – det A or |A|
11  Cramer's Rule
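A small sketch of the rule as stated — column k of A is replaced by b and the two determinants are divided (names illustrative, exact arithmetic via fractions):

```python
from fractions import Fraction

def det(a):
    """Determinant by cofactor expansion along row 0."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** k * a[0][k] * det([r[:k] + r[k + 1:] for r in a[1:]])
               for k in range(len(a)))

def cramer_solve(a, b):
    """Solve A x = b via x_k = det(A with column k replaced by b) / det A."""
    d = Fraction(det(a))
    if d == 0:
        raise ValueError("det A = 0; Cramer's rule does not apply")
    x = []
    for k in range(len(a)):
        ak = [row[:k] + [b[i]] + row[k + 1:] for i, row in enumerate(a)]
        x.append(Fraction(det(ak)) / d)
    return x

sol = cramer_solve([[2, 1, 0], [1, 2, 1], [0, 1, 2]], [1, 0, 1])
# sol = [1, -1, 1]
```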
93
SOLO Matrices
Proof of Cramer's Rule
To prove Cramer's Rule we use just two properties of Determinants:
1. adding to one column a multiple of another column does not change the value of the determinant
2. multiplying every element of one column by a factor will multiply the value of the determinant by the same factor

In the numerator determinant, replace b1, b2, ..., bn by their expressions from the equations:

det ( A with column k replaced by b ) = det ( A with column k replaced by (Σ_j a1j xj, Σ_j a2j xj, ..., Σ_j anj xj) )

By subtracting from the k-th column the first column multiplied by x1, the second column multiplied by x2, and so on (for every column except the k-th itself), the value of the determinant will not change (Rule 1 above), and the k-th column is found to be equal to (a1k xk, a2k xk, ..., ank xk). By Rule 2 the common factor xk comes out:

det ( A with column k replaced by b ) = xk det A    q.e.d.

Determinant of a Square Matrix – det A or |A|
11  Cramer's Rule
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
Therefore, Cramer's Rule can be rewritten as

x_k = (1 / det A) Σ_{j=1}^{n} C_{j,k} b_j,   k = 1, 2, ..., n

or, collecting all components,

x = (x1, x2, ..., xn) = (1 / det A) [ C_{1,1} C_{2,1} ... C_{n,1} ; ... ; C_{1,n} C_{2,n} ... C_{n,n} ] b = (1 / det A) (adj A) b = A^-1 b

This result can be derived directly: writing the system as A x = b and multiplying from the left by A^-1:

A^-1 A x = In x = x = A^-1 b = (adj A) b / det A

Proof of Cramer's Rule (continue – 1)
11  Cramer's Rule
95
Let A be any square n by n matrix over a field F
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
12  The Determinant of a Triangular Matrix is given by the product of the elements on the Main Diagonal.

Proof: Use Laplace's Formula repeatedly, expanding along the first column of an upper triangular matrix:

det [ a11 a12 ... a1n ; 0 a22 ... a2n ; ... ; 0 0 ... ann ] = a11 det [ a22 a23 ... a2n ; 0 a33 ... a3n ; ... ; 0 0 ... ann ] = a11 a22 det ( ... ) = a11 a22 ... ann    q.e.d.

(The lower triangular case follows in the same way, expanding along the first row.)
96
Let A be any square n by n matrix over a field F
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
13  The Determinant of a Matrix Multiplication is equal to the Product of the Determinants: det (A B) = det A det B

Proof: Start with the Multiplication of a Diagonal Matrix D = diag (d11, d22, ..., dnn) and any Matrix B. The rows of D B are d11 r1B, d22 r2B, ..., dnn rnB, where riB are the rows of B. In computing the Determinant use Property No. 4 to take each factor dii out of its row:

det (D B) = d11 d22 ... dnn det B = det D det B
97
Let A be any square n by n matrix over a field F
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
Proof (continue – 1):
13  det (A B) = det A det B

We have shown that by Invertible Elementary Operations a Matrix A can be transformed to a Diagonal Matrix D. Each operation adds to a given row one other row multiplied by a scalar (rj + α ri → rj). According to Property (6), the value of the Determinant is unchanged by those operations:

D = E A  ⇒  det D = det (E A) = det A

Therefore, by doing the same Elementary Operations on the (A B) Matrix, we have:

det (A B) = det (E (A B)) = det ((E A) B) = det (D B) = det D det B = det A det B    q.e.d.

Diagonalization of A
98
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
14  Block Matrices Determinants:

det [ Anxn 0nxm ; Cmxn Bmxm ] = det Anxn det Bmxm

Proof (assuming B invertible; the general case follows by continuity):

[ A 0 ; C B ] = [ In 0 ; 0 B ] [ A 0 ; 0 Im ] [ In 0 ; B^-1 C Im ]

By Laplace expansion, det [ In 0 ; 0 B ] = det B and det [ A 0 ; 0 Im ] = det A; the last factor is a Triangular Matrix with 1 on the Main Diagonal, so its determinant is 1. Therefore

det [ A 0 ; C B ] = det A det B    q.e.d.
99
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
15  Block Matrices Determinants:

det [ Anxn Dnxm ; Cmxn Bmxm ] = det A det (B - C A^-1 D)   (if A^-1 exists)
                              = det B det (A - D B^-1 C)   (if B^-1 exists)

Proof: if A^-1 exists,

[ A D ; C B ] = [ A 0 ; C Im ] [ In A^-1 D ; 0 B - C A^-1 D ]

By Property (14), det [ A 0 ; C Im ] = det A, and the second factor is block upper triangular, so its determinant is det (B - C A^-1 D). The second formula follows in the same way using B^-1.    q.e.d.
100
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
16  Sylvester's Determinant Theorem:

det (In + Bnxm Amxn) = det (Im + Amxn Bnxm)

Proof: apply Property (15) in its two forms to the block matrix [ In -B ; A Im ]:

det [ In -B ; A Im ] = det In det (Im + A B) = det Im det (In + B A)

hence det (In + B A) = det (Im + A B).    q.e.d.

James Joseph Sylvester (1814 - 1897)
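The identity is easy to test numerically. The sketch below uses a small 2x3 / 3x2 pair (my own choice of values) and checks det(I3 + B A) = det(I2 + A B):

```python
def det(a):
    """Determinant by cofactor expansion along row 0."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** k * a[0][k] * det([r[:k] + r[k + 1:] for r in a[1:]])
               for k in range(len(a)))

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

def plus_identity(m):
    """Return I + M for a square matrix M."""
    return [[x + (i == j) for j, x in enumerate(row)] for i, row in enumerate(m)]

A = [[1, 2, 1], [0, 1, 2]]        # m x n = 2 x 3
B = [[-2, -1], [1, 1], [3, 0]]    # n x m = 3 x 2
lhs = det(plus_identity(matmul(B, A)))  # det(I_3 + B A), a 3x3 determinant
rhs = det(plus_identity(matmul(A, B)))  # det(I_2 + A B), a 2x2 determinant
print(lhs, rhs)  # 1 1
```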
101
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
17  Cauchy - Binet Formula

Jacques Philippe Marie Binet (1786 - 1856)
Augustin-Louis Cauchy (1789 - 1857)

Let A be an m×n matrix and B an n×m matrix (m ≤ n). Write [n] for the set { 1, ..., n }, and C([n], m) for the set of m-combinations of [n] (i.e., subsets of size m; there are "n choose m" of them). For S in C([n], m), write A[m],S for the m×m matrix whose columns are the columns of A at indices from S, and BS,[m] for the m×m matrix whose rows are the rows of B at indices from S. The Cauchy–Binet formula then states:

det (Amxn Bnxm) = Σ_{S in C([n],m)} det (A[m],S) det (BS,[m])

If m = n, then S = [n] and we recover det (A B) = det A det B.
102
It was Cauchy in 1812 who used 'determinant' in its modern sense. Cauchy's work is the most complete of the early works on determinants. He reproved the earlier results and gave new results of his own on minors and adjoints. In the 1812 paper the multiplication theorem for determinants is proved for the first time although, at the same meeting of the Institut de France, Binet also read a paper which contained a proof of the multiplication theorem but it was less satisfactory than that given by Cauchy.
http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/Matrices_and_determinants.html#86
103
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
Example:

A = [ 1 2 1 ; 0 1 2 ]  (2×3),   B = [ -2 -1 ; 1 1 ; 3 0 ]  (3×2),   m = 2, n = 3, "3 choose 2" = 3 subsets S = {1,2}, {1,3}, {2,3}

Using the Cauchy - Binet Formula we obtain:

det (A B) = det [ 1 2 ; 0 1 ] det [ -2 -1 ; 1 1 ] + det [ 1 1 ; 0 2 ] det [ -2 -1 ; 3 0 ] + det [ 2 1 ; 1 2 ] det [ 1 1 ; 3 0 ]
          = (1)(-1) + (2)(3) + (3)(-3) = -4

By multiplying the matrices A and B and computing det (A B) directly, we obtain:

A B = [ 3 1 ; 7 1 ],   det (A B) = 3 - 7 = -4

17  Cauchy - Binet Formula
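The example can be verified in a few lines; the signs in B below follow my reconstruction of the slide's garbled example, so treat the specific values as an assumption (the identity itself holds for any A, B):

```python
from itertools import combinations

def det2(m):
    """Determinant of a 2x2 matrix."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

A = [[1, 2, 1], [0, 1, 2]]        # 2 x 3
B = [[-2, -1], [1, 1], [3, 0]]    # 3 x 2
# Cauchy-Binet: sum over the 2-element column subsets S of {0, 1, 2}.
cb = sum(det2([[A[i][s] for s in S] for i in range(2)]) * det2([B[s] for s in S])
         for S in combinations(range(3), 2))
direct = det2(matmul(A, B))
print(cb, direct)  # -4 -4
```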
104
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
18  det (A^-1) = 1 / det A

Proof: use A A^-1 = In:

1 = det In = det (A A^-1) = det A det (A^-1)  ⇒  det (A^-1) = 1 / det A    q.e.d.

19  det (A^T) = det A

Proof: use the factorization A = L D U, where L and U are Triangular Matrices with 1 on the Main Diagonal and D is diagonal:

det A = det (L D U) = det L det D det U = d11 d22 ... dnn

det (A^T) = det (U^T D^T L^T) = det U^T det D det L^T = d11 d22 ... dnn = det A    q.e.d.
105
SOLO Matrices
Determinant of the Vandermonde Matrix
The Vandermonde Matrix is an nxn Matrix that has in its j-th row the entries x1^(j-1), x2^(j-1), ..., xn^(j-1):

Vnxn (x1, x2, ..., xn) = [ 1 1 ... 1 ; x1 x2 ... xn ; x1^2 x2^2 ... xn^2 ; ... ; x1^(n-1) x2^(n-1) ... xn^(n-1) ]

20  det Vnxn (x1, x2, ..., xn) = Π_{1 ≤ i < j ≤ n} (xj - xi)

Proof:
Using elementary operations, multiply row (j-1) by -x1 and add to row j, starting with j = n, then j = n-1, down to j = 2. Each operation is a left multiplication by

E(-x1 r(j-1) + rj → rj) = Identity Matrix with -x1 in position (j, j-1)

and leaves the determinant unchanged (Property 6). The entry in row j, column k becomes xk^(j-1) - x1 xk^(j-2) = xk^(j-2) (xk - x1), so the first column becomes (1, 0, ..., 0).
106
SOLO Matrices
Determinant of the Vandermonde Matrix
The Vandermonde Matrix is an nxn Matrix that has in its j-th row the entries x1^(j-1), x2^(j-1), ..., xn^(j-1).

Determinant of a Square Matrix – det A or |A|
Proof (continue – 1):
After these operations the determinant is

det Vnxn (x1, ..., xn) = det [ 1 1 ... 1 ; 0 (x2-x1) ... (xn-x1) ; 0 x2(x2-x1) ... xn(xn-x1) ; ... ; 0 x2^(n-2)(x2-x1) ... xn^(n-2)(xn-x1) ]

Using fact (13), that the determinant of a product of Matrices is the product of their determinants, and det E = 1 for each elementary matrix, the value is unchanged; expanding along the first column leaves the (n-1)x(n-1) determinant of the lower-right block.
107
SOLO Matrices
Determinant of the Vandermonde Matrix
The Vandermonde Matrix is an nxn Matrix that has in its j-th row the entries x1^(j-1), x2^(j-1), ..., xn^(j-1).

Determinant of a Square Matrix – det A or |A|
Proof (continue – 2):
Use Property (4): if the elements of a row/column of a Matrix have a common factor λ, then the Determinant is λ times the Determinant of the Matrix obtained by dividing that row/column by λ. Each column k of the remaining block has the common factor (xk - x1), so

det Vnxn (x1, x2, ..., xn) = (x2 - x1)(x3 - x1) ... (xn - x1) det V(n-1)x(n-1) (x2, ..., xn)

We obtained a recursive relation between the nxn Vandermonde Matrix V (x1, x2, ..., xn) and the (n-1)x(n-1) Matrix V (x2, ..., xn); by continuing the procedure, and because det V2x2 (x(n-1), xn) = (xn - x(n-1)), we obtain

det Vnxn (x1, x2, ..., xn) = Π_{1 ≤ i < j ≤ n} (xj - xi)    q.e.d.
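A quick numeric check of the product formula against a brute-force determinant (the sample points are chosen arbitrarily):

```python
from itertools import combinations

def det(a):
    """Determinant by cofactor expansion along row 0."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** k * a[0][k] * det([r[:k] + r[k + 1:] for r in a[1:]])
               for k in range(len(a)))

def vandermonde(xs):
    """Row j holds x_i ** j, matching the convention on the slides."""
    n = len(xs)
    return [[x ** j for x in xs] for j in range(n)]

xs = [1, 2, 4, 7]
lhs = det(vandermonde(xs))
rhs = 1
for i, j in combinations(range(len(xs)), 2):
    rhs *= xs[j] - xs[i]  # product of (x_j - x_i) over all i < j
print(lhs, rhs)  # 540 540
```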
108
SOLO Matrices
Eigenvalues and Eigenvectors of Square Matrices Anxn

The relation ynx1 = Anxn xnx1 represents a Linear Transformation of the vector xnx1 ∈ dom A to the vector ynx1.

For the Square Matrix Anxn, a nonzero Vector vnx1 is an Eigenvector if there is a Scalar λ (called the Eigenvalue) such that:

Anxn vnx1 = λ vnx1

To find the Eigenvalues and Eigenvectors we see that

(Anxn - λ In) vnx1 = 0nx1

This equation has a solution vnx1 ≠ 0 iff the Matrix (Anxn - λ In) is singular, or

det (Anxn - λ In) = 0

This equation may be used to find the Eigenvalues λ.
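For the running example matrix A = [ 2 1 0 ; 1 2 1 ; 0 1 2 ], solving det (A - λ I) = 0 gives λ = 2, 2 ± √2 (my computation, not stated on the slide); the sketch below verifies A v = λ v for one eigenpair:

```python
import math

A = [[2, 1, 0], [1, 2, 1], [0, 1, 2]]
# det(A - lam*I) = (2 - lam) * ((2 - lam)**2 - 2) = 0
# => lam = 2, 2 - sqrt(2), 2 + sqrt(2)
lam = 2 + math.sqrt(2)
v = [1.0, math.sqrt(2), 1.0]  # eigenvector for lam = 2 + sqrt(2)
Av = [sum(a * x for a, x in zip(row, v)) for row in A]
# Check A v = lam v componentwise, up to floating-point rounding.
ok = all(abs(av - lam * x) < 1e-12 for av, x in zip(Av, v))
print(ok)  # True
```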
109
SOLO Matrices
Eigenvalues and Eigenvectors of Square Matrices Anxn
The equation that may be used to find the Eigenvalues λ can be written, with the help of Leibniz' Rule, as:

det (Anxn - λ In) = det [ a11-λ a12 ... a1n ; a21 a22-λ ... a2n ; ... ; an1 an2 ... ann-λ ] = (-1)^n [ λ^n + c(n-1) λ^(n-1) + ... + c1 λ + c0 ] = 0

The polynomial:

p(λ) := λ^n + c(n-1) λ^(n-1) + ... + c1 λ + c0

is called the Characteristic Polynomial of the Square Matrix Anxn; it has degree n and therefore n Eigenvalues λ1, λ2, ..., λn. However, the Characteristic Equation need not have distinct solutions: there may be fewer than n distinct eigenvalues.

If the matrix has real entries, the coefficients of the characteristic polynomial are all real. However, the roots are not necessarily real; they may include complex numbers with a non-zero imaginary component. There is always at least one complex number λ solving the characteristic equation, even if the entries of the matrix A are complex numbers to begin with (the existence of such a solution is guaranteed by the Fundamental Theorem of Algebra). For a complex eigenvalue, the corresponding eigenvectors also have complex components.
By Abel’s Theorem (1824) there are no algebraic formulae for the roots of a general polynomial with n > 4, therefore we need an iterative algorithm to find the roots.
110
SOLO Matrices
Eigenvalues and Eigenvectors of Square Matrices Anxn
Theorem: The n Eigenvectors of a Square Matrix Anxn that has distinct Eigenvalues are Linearly Independent.

Proof:
Assume that k (2 ≤ k ≤ n) of the Eigenvectors are Linearly Dependent. Then there exist constants αi, not all zero (say αk ≠ 0), such that:

α1 v1 + α2 v2 + ... + αk vk = 0,   where Anxn vi = λi vi, vi ≠ 0, i = 1, ..., k

Multiply by (Anxn - λ1 In); since (Anxn - λ1 In) vi = (λi - λ1) vi:

α2 (λ2 - λ1) v2 + α3 (λ3 - λ1) v3 + ... + αk (λk - λ1) vk = 0

In the same way, multiply the result by (Anxn - λ2 In), and continue the procedure until, at the end, we multiply by (Anxn - λ(k-1) In) to obtain:

αk (λk - λ1)(λk - λ2) ... (λk - λ(k-1)) vk = 0

Since the Eigenvalues are distinct and vk ≠ 0, this forces αk = 0. This contradicts the assumption that αk ≠ 0; therefore the k Eigenvectors are Linearly Independent.
111
SOLO Matrices
Eigenvalues and Eigenvectors of Square Matrices Anxn
Theorem: If the n Eigenvectors v1, v2, ..., vn of a Square Matrix Anxn corresponding to the n Eigenvalues (not necessarily distinct) are Linearly Independent, then we can write

P^-1 A P = Λ = diag (λ1, λ2, ..., λn)

Proof:
Using the n Eigenvectors of the Square Matrix Anxn we can write

A [ v1 v2 ... vn ] = [ λ1 v1 λ2 v2 ... λn vn ] = [ v1 v2 ... vn ] diag (λ1, λ2, ..., λn)

or A P = P Λ with P := [ v1 v2 ... vn ]. Since the n Eigenvectors of Anxn are Linearly Independent, P is nonsingular, and we have

P^-1 A P = Λ    q.e.d.

In this case we say that the Square Matrix Anxn is Diagonalizable. Two Square Matrices A and B that are related by A = S^-1 B S are called Similar Matrices.

Return to Matrix Decomposition
112
SOLO Matrices
Eigenvalues and Eigenvectors of Square Matrices Anxn
Definition: The n Eigenvectors v1, v2, ..., vn of a Square Matrix Anxn satisfying

Anxn vnx1 = λR vnx1,   with det (Anxn - λR In) = 0

are called the Right Eigenvectors of the Square Matrix Anxn.

Definition: The n Left Eigenvectors w1^H, w2^H, ..., wn^H of a Square Matrix Anxn are given by

w1xn^H Anxn = λL w1xn^H

Theorem: (1) The n Left Eigenvalues λL of a Square Matrix Anxn are equal to the Right Eigenvalues λR. (2) For λi ≠ λj we have wi^H vj = 0.

Proof:
(1) w^H (Anxn - λL In) = 0 has a nonzero solution w iff det (Anxn - λL In) = 0, which is the same Characteristic Equation as det (Anxn - λR In) = 0.

(2) Compute wi^H A vj in two ways: wi^H (A vj) = λj wi^H vj and (wi^H A) vj = λi wi^H vj. Subtracting, (λi - λj) wi^H vj = 0, so wi^H vj = 0 whenever λi ≠ λj.    q.e.d.
113
SOLO Matrices
Eigenvalues and Eigenvectors of Square Matrices Anxn
Theorem: If the n Right Eigenvectors of a Square Matrix Anxn corresponding to the n Eigenvalues (not necessarily distinct) are Linearly Independent, then we can write

A = P Λ P^-1 = [ v1 v2 ... vn ] diag (λ1, λ2, ..., λn) [ w1^H ; w2^H ; ... ; wn^H ]

Proof:
As before, A P = P Λ with P = [ v1 v2 ... vn ]. By choosing the Left Eigenvectors wj, normalized such that

wi^H vj = δij = 1 if i = j, 0 if i ≠ j,   i, j = 1, ..., n

we obtain

[ w1^H ; w2^H ; ... ; wn^H ] [ v1 v2 ... vn ] = In

so P^-1 = [ w1^H ; w2^H ; ... ; wn^H ], and

A = P Λ P^-1 = Σ_{i=1}^{n} λi vi wi^H    q.e.d.

w1^H, w2^H, ..., wn^H and v1, v2, ..., vn with wi^H vj = δij are called Reciprocal Vectors.
In 1826 Cauchy, in the context of quadratic forms in n variables, used the term 'tableau' for the matrix of coefficients. He found the eigenvalues and gave results on diagonalisation of a matrix in the context of converting a form to the sum of squares. Cauchy also introduced the idea of similar matrices (but not the term) and showed that if two matrices are similar they have the same characteristic equation. He also, again in the context of quadratic forms, proved that every real symmetric matrix is diagonalisable.
http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/Matrices_and_determinants.html#86
115
SOLO Matrices
Eigenvalues and Eigenvectors of Square Matrices Anxn

If the n Eigenvalues of a Square Matrix Anxn are not distinct, the Characteristic Polynomial will be

  det (Anxn - λ In) = (-1)^n (λ - λ1)^n1 (λ - λ2)^n2 ... (λ - λk)^nk,   n1 + n2 + ... + nk = n

ni is the Algebraic Multiplicity of the Eigenvalue λi, i.e. the multiplicity of the corresponding root of the Characteristic Equation.

In this case we may have more than one Eigenvector for one Eigenvalue λi.

The Geometric Multiplicity of an Eigenvalue is defined as the Dimension of the associated Eigenspace, i.e. the Number of Independent Eigenvectors with that Eigenvalue.

Example 1: The Identity Matrix In.

In has the Algebraic Multiplicity n since all n Eigenvalues are λi = 1.

Each vector in the Vn space is an Eigenvector, since In v = 1 v, so we can define n Linearly Independent vectors that will form a Basis (eigenbasis) for this space.

  Algebraic Multiplicity = n,   Geometric Multiplicity = n

A Matrix Anxn that has Geometric Multiplicity = Algebraic Multiplicity, i.e. it has n Linearly Independent Eigenvectors, is called Semisimple.
116
SOLO Matrices
Eigenvalues and Eigenvectors of Square Matrices Anxn

Example 2:

  A3x3 = [ 1 1 0 ]        det (A3x3 - λ I3) = (1 - λ)^3
         [ 0 1 0 ]
         [ 0 0 1 ]

λ1,2,3 = 1, Algebraic Multiplicity = 3.

  (A3x3 - 1 I3) v = [ 0 1 0 ] v = 0   =>   v1 = [1]   v2 = [0]   v3 = [1]
                    [ 0 0 0 ]                  [0]        [0]        [0]
                    [ 0 0 0 ]                  [0]        [1]        [1]

Every pair of Eigenvectors is Linearly Independent, but all three are Linearly Dependent, since: v1 + v2 - v3 = 0.

The Geometric Multiplicity of an Eigenvalue is defined as the Dimension of the associated Eigenspace, i.e. the Number of Independent Eigenvectors with that Eigenvalue. Here λ1,2,3 = 1, Geometric Multiplicity = 2.

If the Geometric Multiplicity is less than the Algebraic Multiplicity, the Matrix A cannot be Diagonalized and is called Defective.
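A quick NumPy sketch of the two multiplicities for a 3x3 matrix containing a single Jordan block of size 2 (the same multiplicities as Example 2; the specific entries are my own illustrative assumption):

```python
import numpy as np

# One Jordan block of size 2 for eigenvalue 1, plus a 1x1 block: defective.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
lam = 1.0
n = A.shape[0]

# Algebraic multiplicity: how many eigenvalues equal lam.
algebraic = int(np.sum(np.isclose(np.linalg.eigvals(A).real, lam)))

# Geometric multiplicity: dim of eigenspace = n - rank(A - lam*I).
geometric = n - np.linalg.matrix_rank(A - lam * np.eye(n))
```

Since geometric (2) < algebraic (3), this matrix is defective and cannot be diagonalized.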
117
SOLO Matrices
Eigenvalues and Eigenvectors of Square Matrices Anxn

If there exist k Linearly Independent Eigenvectors of Anxn with the same Eigenvalue λ, we say that A has a k-fold degeneracy for this Eigenvalue. The presence of the degeneracy creates an ambiguity in the specification of Eigenvectors.

Theorem: If v1, v2, ..., vk is a set of Eigenvectors of a Square Matrix Anxn with the same Eigenvalue λ, then any Linear Combination of v1, v2, ..., vk is an Eigenvector of Anxn with Eigenvalue λ.

Proof: Define:

  w = Σ_{i=1}^{k} αi vi

Then

  A w = Σ_{i=1}^{k} αi A vi = Σ_{i=1}^{k} αi λ vi = λ w        q.e.d.

We will use the Jordan Normal (Canonical) Form to describe a Degenerate (Non-Semisimple) Matrix.
118
In 1870 the Jordan canonical form appeared in “Treatise on Substitutions and Algebraic Equations” by Jordan. It appears in the context of a canonical form for linear substitutions over the finite field of order a prime
Jordan Canonical Form
Marie Ennemond Camille Jordan
1838 - 1922
Eigenvalues and Eigenvectors of Square Matrices Anxn
119
SOLO Matrices
Eigenvalues and Eigenvectors of Square Matrices Anxn

Jordan Normal (Canonical) Form

Jordan Normal (Canonical) Form is named after Camille Jordan.

Marie Ennemond Camille Jordan
1838 - 1922

If there exist ki Linearly Independent Eigenvectors of Anxn with the same Eigenvalue λi, we say that A has a ki-fold degeneracy for this Eigenvalue. An Eigenvector of A satisfies:

  (Anxn - λi In) vi,1 = 0

The Eigenvector vi,1 is annihilated by (Anxn - λi In). Let us try to find the Generalized Eigenvectors annihilated by (Anxn - λi In)^2, (Anxn - λi In)^3, ..., (Anxn - λi In)^ki. For this let us use a chain of length ki:

  (Anxn - λi In) vi,1 = 0
  (Anxn - λi In) vi,2 = vi,1
  ...
  (Anxn - λi In) vi,ki = vi,ki-1

Multiplying repeatedly by (Anxn - λi In):

  (Anxn - λi In) vi,1 = 0
  (Anxn - λi In)^2 vi,2 = 0
  ...
  (Anxn - λi In)^ki vi,ki = 0

vi,1 is the Eigenvector, vi,2 is the Generalized Eigenvector of rank 2, ..., vi,ki is the Generalized Eigenvector of rank ki.
120
SOLO Matrices
Eigenvalues and Eigenvectors of Square Matrices Anxn

Jordan Normal (Canonical) Form

vi,1 is the Eigenvector, vi,2 is the Generalized Eigenvector of rank 2, ..., vi,ki is the Generalized Eigenvector of rank ki. The Generalized Eigenvector vi,ki of highest rank is obtained by finding the vector for which

  (Anxn - λi In)^ki vi,ki = 0   but   (Anxn - λi In)^(ki-1) vi,ki ≠ 0

The other Generalized Eigenvectors, in descending order, are obtained using

  vi,ki-1 = (Anxn - λi In) vi,ki
  vi,ki-2 = (Anxn - λi In) vi,ki-1
  ...
  vi,1    = (Anxn - λi In) vi,2

so that the chain satisfies

  (Anxn - λi In) vi,1 = 0,   (Anxn - λi In) vi,2 = vi,1,  ...,  (Anxn - λi In) vi,ki = vi,ki-1
121
SOLO Matrices
Eigenvalues and Eigenvectors of Square Matrices Anxn

Jordan Normal (Canonical) Form

Since the Generalized Eigenvectors of each chain are Linearly Independent, we can collect all the Generalized Eigenvectors corresponding to all the Eigenvalues to obtain a Nonsingular Matrix:

  Snxn := [v1,1 ... v1,k1 | ... | vl,1 ... vl,kl],   k1 + k2 + ... + kl = n

The Inverse of this Matrix, expressed as row vectors, is defined as:

  Snxn^-1 := [w1,1^T; ...; w1,k1^T; ...; wl,1^T; ...; wl,kl^T]

The identity S^-1 S = In, written entry by entry, gives

  wi,m^T vj,n = δij δmn

i.e. each row of S^-1 annihilates every column of S except its own.
122
SOLO Matrices
Eigenvalues and Eigenvectors of Square Matrices Anxn

Jordan Normal (Canonical) Form

We have:

  A S = [A v1,1  A v1,2  ...  A v1,k1  ...  A vl,1  ...  A vl,kl]
      = [λ1 v1,1  (v1,1 + λ1 v1,2)  ...  (v1,k1-1 + λ1 v1,k1)  ...  λl vl,1  ...  (vl,kl-1 + λl vl,kl)]

Therefore, using wi,m^T vj,n = δij δmn, we obtain:

  S^-1 A S = [w1,1^T; ...; wl,kl^T] A [v1,1 ... vl,kl] = diag (J1, J2, ..., Jl)
123
SOLO Matrices
Eigenvalues and Eigenvectors of Square Matrices Anxn

Jordan Normal (Canonical) Form

We have the Similarity Transformation:

  S^-1 A S = diag (J1, J2, ..., Jl)

where:

  Ji = [ λi  1   0  ...  0 ]
       [ 0   λi  1  ...  0 ]
       [ ...          ...  ]   (ki x ki),   i = 1, 2, ..., l
       [ 0   0   0  ...  1 ]
       [ 0   0   0  ... λi ]

is the Jordan Canonical Form Matrix, having λi on the main diagonal; the non-diagonal entries that are non-zero must be equal to 1 and are situated immediately above the main diagonal.
124
SOLO Matrices
Eigenvalues and Eigenvectors of Square Matrices Anxn

Jordan Normal (Canonical) Form

Each subsequent multiplication by (Ji - λi I) pushes the diagonal of 1s further toward the top-right corner. After ki - 1 steps there is a single 1 in the top-right, and after ki steps we have 0. Therefore:

  (Ji - λi I)        has 1s on the first superdiagonal,
  (Ji - λi I)^2      has 1s on the second superdiagonal,
  ...
  (Ji - λi I)^(ki-1) has a single 1 in the top-right corner,
  (Ji - λi I)^ki = 0        (ki x ki)
125
SOLO Matrices
Eigenvalues and Eigenvectors of Square Matrices Anxn

Power and Polynomials of a Matrix

Theorem 1: If we are given two Square Matrices Anxn and Bnxn, and change the basis of both by a given Similarity Transformation to A' and B', then A' B' is the Similarity Transform of A B.

Proof:

  A' = S^-1 A S,   B' = S^-1 B S

  A' B' = S^-1 A (S S^-1) B S = S^-1 (A B) S        (S S^-1 = In)
126
SOLO Matrices
Eigenvalues and Eigenvectors of Square Matrices Anxn

Power and Polynomials of a Matrix

Theorem 2: For A = S J S^-1 we have A^m = S J^m S^-1.

Proof: Straightforward, using Theorem 1 and Matrix Multiplication.

where:

  J = diag (J1, J2, ..., Jl)   =>   J^m = diag (J1^m, J2^m, ..., Jl^m)

and, for a single ki x ki Jordan block Ji (λi on the diagonal, 1s immediately above it),

  Ji^m = [ λi^m  m λi^(m-1)  m(m-1)/2! λi^(m-2)  *  ...  *         ]
         [ 0     λi^m        m λi^(m-1)             ...  *         ]
         [ ...                          ...              m λi^(m-1) ]
         [ 0     0           ...                    0    λi^m      ]

Ji^m is not a Jordan Form; the * values depend on the relation between m and ki (the entry j places above the diagonal is the binomial term (m choose j) λi^(m-j)).

Return to Matrix Decomposition
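Theorem 2 can be illustrated numerically for a semisimple matrix, where J is plain diagonal and J^m just raises each eigenvalue to the m-th power (the example matrix is my own choice):

```python
import numpy as np

# Semisimple example: J is diagonal, so A^m = S J^m S^-1.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
m = 5

eigvals, S = np.linalg.eig(A)
Am_via_jordan = S @ np.diag(eigvals**m) @ np.linalg.inv(S)
Am_direct = np.linalg.matrix_power(A, m)
match = np.allclose(Am_via_jordan, Am_direct)
```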
127
SOLO Matrices
Eigenvalues and Eigenvectors of Square Matrices Anxn

Power and Polynomials of a Matrix

Suppose we have a polynomial of degree m

  p (x) = a0 + a1 x + ... + am x^m

We will define the polynomial of the Matrix A as:

  p (A) = a0 In + a1 A + ... + am A^m

We have A = S J S^-1, therefore:

  p (A) = a0 In + a1 S J S^-1 + ... + am S J^m S^-1 = S (a0 In + a1 J + ... + am J^m) S^-1

If A is Semisimple with Eigenvalues λ1, λ2, ..., λn, then J is Diagonal and we have:

  p (A) = S diag (p (λ1), p (λ2), ..., p (λn)) S^-1
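A minimal numerical sketch of evaluating a polynomial of a semisimple matrix both directly and through the eigenvalues (polynomial and matrix are arbitrary illustrative choices):

```python
import numpy as np

# p(x) = a0 + a1 x + a2 x^2 evaluated on a semisimple matrix two ways.
a0, a1, a2 = 1.0, 2.0, 3.0
A = np.array([[2.0, 0.0],
              [1.0, 3.0]])      # triangular, eigenvalues 2 and 3 (distinct)

eigvals, S = np.linalg.eig(A)

p_direct = a0 * np.eye(2) + a1 * A + a2 * (A @ A)
p_on_eigs = a0 + a1 * eigvals + a2 * eigvals**2
p_via_eig = S @ np.diag(p_on_eigs) @ np.linalg.inv(S)
match = np.allclose(p_direct, p_via_eig)
```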
128
SOLO Matrices
Eigenvalues and Eigenvectors of Square Matrices Anxn

Power and Polynomials of a Matrix

Suppose we have a polynomial of degree m: p (x) = a0 + a1 x + ... + am x^m.

For a single ki x ki Jordan block Ji:

  p (Ji) = a0 I + a1 Ji + ... + am Ji^m

Collecting the powers of Ji computed above, each superdiagonal of p (Ji) carries a successive derivative of p:

  p (Ji) = [ p(λi)  p'(λi)/1!  p''(λi)/2!  ...  p^(ki-1)(λi)/(ki-1)! ]
           [ 0      p(λi)      p'(λi)/1!        ...                  ]
           [ ...                      ...       p'(λi)/1!            ]
           [ 0      0          ...         0    p(λi)                ]   (ki x ki)
129
SOLO Matrices
Eigenvalues and Eigenvectors of Square Matrices Anxn

The Characteristic Polynomial of the Square Matrix Anxn is given by:

  p (λ) := det (Anxn - λ In) = (-1)^n (λ^n - c1 λ^(n-1) - c2 λ^(n-2) - ... - cn)

From this equation we have:

  c1 = λ1 + λ2 + ... + λn = trace A
  c2 = -Σ_{i<j} λi λj
  ...
  cn = (-1)^(n+1) λ1 λ2 ... λn = (-1)^(n+1) det Anxn,   p (0) = det Anxn

We used the facts that:

  trace (A B) = trace (B A)
  A = S J S^-1  =>  trace A = trace (S J S^-1) = trace (J S^-1 S) = trace J = Σ_{i=1}^{n} λi

det Anxn = 0 if at least one Eigenvalue of Anxn is 0.
130
SOLO Matrices
Eigenvalues and Eigenvectors of Square Matrices Anxn

Theorem: The Eigenvalues μi of the inverse A^-1 of a Nonsingular Square Matrix A are related to the Eigenvalues λi of A by:

  μi (A^-1) = 1 / λi (A),   i = 1, 2, ..., n

Proof: Since A is Nonsingular: det A = λ1 λ2 ... λn ≠ 0, hence λi ≠ 0, i = 1, 2, ..., n.

  0 = det (A - λ In) = det (A (In - λ A^-1)) = det A det (In - λ A^-1)
    = det A (-λ)^n det (A^-1 - (1/λ) In)

Therefore:

  det (A - λ In) = 0   <=>   det (A^-1 - (1/λ) In) = 0

i.e. μ = 1/λ is an Eigenvalue of A^-1 whenever λ is an Eigenvalue of A.        q.e.d.
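A one-line numerical check of the reciprocal-eigenvalue relation (the example matrix is an arbitrary nonsingular choice):

```python
import numpy as np

# Eigenvalues of A^-1 should be reciprocals of the eigenvalues of A.
A = np.array([[2.0, 1.0],
              [0.0, 4.0]])      # triangular: eigenvalues 2 and 4

lam = np.sort(np.linalg.eigvals(A).real)
mu = np.sort(np.linalg.eigvals(np.linalg.inv(A)).real)
reciprocal = np.allclose(np.sort(1.0 / lam), mu)
```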
131
SOLO Matrices
Eigenvalues and Eigenvectors of Square Matrices Anxn

Theorem: Two Similar Matrices A and B (A = S^-1 B S) have the same Eigenvalues and the same Characteristic Polynomial.

Proof:

  pA (λ) := det (A - λ In) = det (S^-1 B S - λ S^-1 S) = det (S^-1 (B - λ In) S)
          = det S^-1 det (B - λ In) det S = det (B - λ In) = pB (λ)

since det S^-1 det S = det (S^-1 S) = det In = 1.        q.e.d.
132
SOLO Matrices
Eigenvalues and Eigenvectors of Square Matrices Anxn

Theorem: The Eigenvalues of a Triangular Matrix are the elements on the Main Diagonal.

  Upper Triangular Matrix:            Lower Triangular Matrix:
  Unxn = [ a11 a12 ... a1n ]          Lnxn = [ a11  0  ...  0  ]
         [ 0   a22 ... a2n ]                 [ a21 a22 ...  0  ]
         [ ...         ... ]                 [ ...          ...]
         [ 0   0   ... ann ]                 [ an1 an2 ... ann ]

Proof:

  det (Unxn - λ In) = det [ a11-λ  a12   ...  a1n   ]
                          [ 0      a22-λ ...  a2n   ]
                          [ ...               ...   ]
                          [ 0      0     ...  ann-λ ]
                    = (a11 - λ)(a22 - λ) ... (ann - λ) = 0   =>   λi = aii, i = 1, ..., n

The same proof holds for Lnxn.        q.e.d.
133
Arthur Cayley
1821 - 1895

Cayley in 1858 published “Memoir on the Theory of Matrices”. Cayley also proved, in the case of 2x2 matrices, that a matrix satisfies its own characteristic equation. He stated that he had checked the result for 3x3 matrices, indicating its proof, but says:-

I have not thought it necessary to undertake the labour of a formal proof of the theorem in the general case of a matrix of any degree.

That a matrix satisfies its own characteristic equation is called the Cayley-Hamilton theorem, so it is reasonable to ask what it has to do with Hamilton. In fact Hamilton also proved a special case of the theorem, the 4x4 case, in the course of his investigations into quaternions.

Cayley-Hamilton Theorem

Sir William Rowan Hamilton
1805 - 1865

In 1896 Frobenius became aware of Cayley's 1858 “Memoir on the Theory of Matrices” and after this started to use the term matrix. Although Cayley only proved the Cayley-Hamilton theorem for 2x2 and 3x3 matrices, Frobenius generously attributed the result to Cayley, despite the fact that Frobenius had been the first to prove the general theorem.

Ferdinand Georg Frobenius
1849 - 1917
Eigenvalues and Eigenvectors of Square Matrices Anxn
134
SOLO Matrices
Eigenvalues and Eigenvectors of Square Matrices Anxn

Cayley-Hamilton Theorem

Cayley-Hamilton Theorem: Any Square Matrix Anxn satisfies its own Characteristic Equation.

The Characteristic Equation of Anxn is given by

  p (λ) = det (Anxn - λ In) = (-1)^n (λ^n - c1 λ^(n-1) - ... - cn)

The Theorem states that

  p (A) = (-1)^n (A^n - c1 A^(n-1) - ... - cn In) = 0

Proof: We have:

  (Anxn - λ In) adj (Anxn - λ In) = det (Anxn - λ In) In = p (λ) In

Since p (λ) In is a Matrix Polynomial of degree n and (Anxn - λ In) is a Matrix Polynomial of degree 1, adj (Anxn - λ In) is a Matrix Polynomial of degree (n - 1), i.e.:

  adj (Anxn - λ In) = B0 λ^(n-1) + B1 λ^(n-2) + ... + Bn-1 = Σ_{i=0}^{n-1} Bi λ^(n-1-i)
135
SOLO Matrices
Eigenvalues and Eigenvectors of Square Matrices Anxn

Cayley-Hamilton Theorem

Proof (continue): We have:

  p (λ) In = (Anxn - λ In) adj (Anxn - λ In) = (Anxn - λ In) Σ_{i=0}^{n-1} Bi λ^(n-1-i)
           = Σ_{i=0}^{n-1} A Bi λ^(n-1-i) - Σ_{i=0}^{n-1} Bi λ^(n-i)

Equating the terms of both sides with the same λ power we have:

  λ^n:             (-1)^n In = -B0,   i.e.  B0 = (-1)^(n+1) In
  λ^(n-i):   (-1)^(n+1) ci In = A Bi-1 - Bi,   i = 1, ..., n-1
  λ^0:       (-1)^(n+1) cn In = A Bn-1

Let us multiply the second equation by A^(n-i) from the left and sum from i = 1 to i = n-1:

  Σ_{i=1}^{n-1} (A^(n-i+1) Bi-1 - A^(n-i) Bi) = (-1)^(n+1) Σ_{i=1}^{n-1} ci A^(n-i)
136
SOLO Matrices
Eigenvalues and Eigenvectors of Square Matrices Anxn

Cayley-Hamilton Theorem

Proof (continue): We have:

  Σ_{i=1}^{n-1} (A^(n-i+1) Bi-1 - A^(n-i) Bi) = (-1)^(n+1) Σ_{i=1}^{n-1} ci A^(n-i)

Developing the left side, the sum telescopes:

  (A^n B0 - A^(n-1) B1) + (A^(n-1) B1 - A^(n-2) B2) + ... + (A^2 Bn-2 - A Bn-1) = A^n B0 - A Bn-1

Using B0 = (-1)^(n+1) In and A Bn-1 = (-1)^(n+1) cn In from the previous slide:

  (-1)^(n+1) (A^n - cn In) = (-1)^(n+1) Σ_{i=1}^{n-1} ci A^(n-i)

or:

  A^n - c1 A^(n-1) - ... - cn-1 A - cn In = 0   =>   p (A) = 0        q.e.d.
137
SOLO Matrices
Eigenvalues and Eigenvectors of Square Matrices Anxn

We obtained:

  p (A) = (-1)^n (A^n - c1 A^(n-1) - ... - cn In) = 0

Let us rewrite the first equation, using the second:

  A (A^(n-1) - c1 A^(n-2) - ... - cn-1 In) = cn In = (-1)^(n+1) det A In

If A is Nonsingular, det A ≠ 0 and

  A^-1 = (-1)^(n-1) (A^(n-1) - c1 A^(n-2) - ... - cn-1 In) / det A

But we also found that A adj A = det A In, therefore:

  adj A = (-1)^(n-1) (A^(n-1) - c1 A^(n-2) - ... - cn-1 In)

and:  cn = (-1)^(n+1) λ1 λ2 ... λn = (-1)^(n+1) det Anxn
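The inverse-via-Cayley-Hamilton formula above can be sketched numerically. Minimal NumPy version, with the characteristic-polynomial coefficients taken from `np.poly` and converted to the sign convention of the slides (the example matrix is my own choice):

```python
import numpy as np

# A^-1 = (-1)^(n-1) (A^(n-1) - c1 A^(n-2) - ... - c_{n-1} I) / det A,
# with det(A - x I) = (-1)^n (x^n - c1 x^(n-1) - ... - cn).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
n = A.shape[0]

# np.poly gives the monic polynomial x^n + p1 x^(n-1) + ... + pn, so ci = -pi.
p = np.poly(A)
c = -p[1:]

# Horner evaluation of A^(n-1) - c1 A^(n-2) - ... - c_{n-1} I.
acc = np.eye(n)
for i in range(n - 1):
    acc = A @ acc - c[i] * np.eye(n)

A_inv = (-1)**(n - 1) * acc / np.linalg.det(A)
ok = np.allclose(A_inv @ A, np.eye(n))
```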
138
Let A be a square n by n matrix over a field F (for example the field R of real numbers). Then the following statements are equivalent:

• A is invertible.
• A is row-equivalent to the n-by-n identity matrix In.
• A is column-equivalent to the n-by-n identity matrix In.
• A has n pivot positions.
• det A ≠ 0. In general, a square matrix over a commutative ring is invertible if and only if its determinant is a unit in that ring.
• rank A = n.
• The equation Ax = 0 has only the trivial solution x = 0 (i.e., Null A = {0}).
• The equation Ax = b has exactly one solution for each b in Fn.
• The columns of A are linearly independent.
• The columns of A span Fn (i.e. Col A = Fn).
• The columns of A form a basis of Fn.
• The linear transformation mapping x to Ax is a bijection from Fn to Fn.
• There is an n by n matrix B such that AB = In = BA.
• The transpose AT is an invertible matrix (hence rows of A are linearly independent, span Fn, and form a basis of Fn).
• The number 0 is not an eigenvalue of A.
• The matrix A can be expressed as a finite product of elementary matrices.
SOLO Matrices
Invertible Matrix Summary
http://en.wikipedia.org/wiki/Matrix_inversion#Methods_of_matrix_inversion
139
SOLO
Matrices

Matrix Decompositions

Decompositions Related to Solving Systems of Linear Equations:
  LU Decomposition A = L U (L = Lower Triangular, U = Upper Triangular)
  Block LU Decomposition
  Rank Factorization A = P [ Ir 0; 0 0 ] Q
  Cholesky Decomposition
  QR Decomposition

Decompositions Based on Eigenvalues and Related Concepts:
  Eigendecomposition A = P D P^-1
  Jordan Decomposition A = S J S^-1
  Schur Decomposition
  QZ Decomposition
  Takagi’s Factorization
140
LU Factorization

LU decomposition
• Applicable to: square matrix A
• Decomposition: A = LU, where L is lower triangular and U is upper triangular
• Related: the LDU decomposition is A = LDU, where L is lower triangular with ones on the diagonal, U is upper triangular with ones on the diagonal, and D is a diagonal matrix.
• Related: the LUP decomposition is A = LUP, where L is lower triangular, U is upper triangular, and P is a permutation matrix.
• Existence: An LUP decomposition exists for any square matrix A. When P is an identity matrix, the LUP decomposition reduces to the LU decomposition. If the LU decomposition exists, the LDU decomposition does too.
• Comments: The LUP and LU decompositions are useful in solving an n-by-n system of linear equations Ax = b. These decompositions summarize the process of Gaussian elimination in matrix form. Matrix P represents any row interchanges carried out in the process of Gaussian elimination. If Gaussian elimination produces the row echelon form without requiring any row interchanges, then P = I, so an LU decomposition exists.

SOLO
Matrices
Matrix Decompositions
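A minimal sketch of solving Ax = b through an LU factorization, assuming SciPy is available (the system is an arbitrary illustrative choice):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# LU with partial pivoting (P A = L U), then forward/back substitution.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

lu, piv = lu_factor(A)          # compact L,U storage plus pivot indices
x = lu_solve((lu, piv), b)      # no explicit inverse is formed
```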
141
SOLO
Matrices

Matrix Decompositions

Block LU Decomposition

Let us write (assuming A square and nonsingular):

  [ A B ]   [ I      0 ] [ A  B            ]
  [ C D ] = [ C A^-1 I ] [ 0  D - C A^-1 B ]

where D - C A^-1 B is the Schur complement of A. Separating the block-diagonal part:

  [ A B ]   [ I      0 ] [ A  0            ] [ I  A^-1 B ]
  [ C D ] = [ C A^-1 I ] [ 0  D - C A^-1 B ] [ 0  I      ]

Finally:

  [ A B ]   [ A^1/2        0                  ] [ A^1/2  A^-1/2 B           ]
  [ C D ] = [ C A^-1/2  (D - C A^-1 B)^1/2    ] [ 0      (D - C A^-1 B)^1/2 ] = L U
142
SOLO
Matrices

Matrix Decompositions

Block UL Decomposition

Let us write (assuming D square and nonsingular):

  [ A B ]   [ I  B D^-1 ] [ A - B D^-1 C  0 ]
  [ C D ] = [ 0  I      ] [ C             D ]

Separating the block-diagonal part:

  [ A B ]   [ I  B D^-1 ] [ A - B D^-1 C  0 ] [ I       0 ]
  [ C D ] = [ 0  I      ] [ 0             D ] [ D^-1 C  I ]

Finally:

  [ A B ]   [ (A - B D^-1 C)^1/2  B D^-1/2 ] [ (A - B D^-1 C)^1/2  0     ]
  [ C D ] = [ 0                   D^1/2    ] [ D^-1/2 C            D^1/2 ] = U L

Return to Matrix Decomposition
143
SOLO
Matrices

Matrix Decompositions

Cholesky decomposition
• Applicable to: square, symmetric, positive definite matrix A
• Decomposition: A = U^T U, where U is upper triangular with positive diagonal entries
• Comment: the Cholesky decomposition is a special case of the symmetric LU decomposition, with L = U^T.
• Comment: the Cholesky decomposition is unique
• Comment: the Cholesky decomposition is also applicable for complex hermitian positive definite matrices
• Comment: An alternative is the LDL decomposition which can avoid extracting square roots.
Return to Matrix Decomposition
Andre – LouisCholesky
1875 - 1918
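A minimal NumPy sketch of the Cholesky factorization (NumPy returns the lower-triangular factor L with A = L L^T, which is the same as A = U^T U with U = L^T; the matrix is an arbitrary SPD choice):

```python
import numpy as np

# Symmetric positive definite example.
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

L = np.linalg.cholesky(A)       # lower-triangular factor
U = L.T                         # upper-triangular factor of the slides
reconstructed = np.allclose(U.T @ U, A)
lower_triangular = np.allclose(L, np.tril(L))
```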
144
SOLO
Matrices

Matrix Decompositions

QR decomposition
• Applicable to: m-by-n matrix A
• Decomposition: A = QR where Q is an orthogonal matrix of size m-by-m, and R is an upper triangular matrix of size m-by-n
• Comment: The QR decomposition provides an alternative way of solving the system of equations Ax = b without inverting the matrix A. The fact that Q is orthogonal means that Q^T Q = I, so that Ax = b is equivalent to Rx = Q^T b, which is easier to solve since R is triangular.

Return to Matrix Decomposition
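The comment above can be sketched in a few lines of NumPy (the system is an arbitrary illustrative choice):

```python
import numpy as np

# Solve Ax = b via QR: A = QR, then the triangular system R x = Q^T b.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

Q, R = np.linalg.qr(A)
x = np.linalg.solve(R, Q.T @ b)   # R is triangular, so this is cheap
ok = np.allclose(A @ x, b)
```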
145
SOLO Matrices

Householder Transformation

We want to use the Householder Transformation to annihilate the elements of a column vector x below the first term.

Theorem: Define

  v := x + ||x|| e1,   e1 = [1 0 ... 0]^T

  H := In - 2 v v^T / (v^T v)        (Householder Matrix)

Then

  H x = -||x|| e1 = [-||x|| 0 ... 0]^T

Proof:

  v^T v = (x + ||x|| e1)^T (x + ||x|| e1) = x^T x + 2 ||x|| x1 + ||x||^2 = 2 (||x||^2 + ||x|| x1)

  v^T x = x^T x + ||x|| x1 = ||x||^2 + ||x|| x1 = v^T v / 2

Therefore

  H x = x - 2 v (v^T x) / (v^T v) = x - v = -||x|| e1        q.e.d.
Alston Scott Householder 1904 - 1993
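A minimal NumPy sketch of the reflection above; the sign(x1) factor in v is a standard numerical refinement (it avoids cancellation) rather than something stated on the slide:

```python
import numpy as np

# Householder reflection that zeroes x below its first entry.
x = np.array([3.0, 4.0, 0.0])

e1 = np.zeros_like(x); e1[0] = 1.0
v = x + np.copysign(np.linalg.norm(x), x[0]) * e1
H = np.eye(len(x)) - 2.0 * np.outer(v, v) / (v @ v)

Hx = H @ x                       # = [-||x||, 0, 0] here
annihilated = np.allclose(Hx[1:], 0.0)
```

Note that H is symmetric and involutory (H H = I), as any reflection must be.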
146
SOLO Matrices

Factorization A = Q R Using Householder Transformation

Given the Square Matrix

  Anxn = [ a11 a12 ... a1n ]
         [ a21 a22 ... a2n ] = [c1 c2 ... cn]        (ci = columns of A)
         [ ...          ...]
         [ an1 an2 ... ann ]

use the Householder Transformation to work on the first column c1, with a11 as pivot, and annihilate all the elements below it:

  v1 := c1 + ||c1|| e1,   e1 = [1 0 ... 0]^T

  U1 = H1 = In - 2 v1 v1^T / (v1^T v1),   U1^T U1 = In

  U1 A = [ -||c1||  a'12 ... a'1n ]
         [ 0        a'22 ... a'2n ]
         [ ...                 ...]
         [ 0        a'n2 ... a'nn ]
147
SOLO Matrices

Factorization A = Q R Using Householder Transformation

We obtained

  U1 A = [ -||c1||  a'12 ... a'1n ]
         [ 0        a'22 ... a'2n ]
         [ ...                 ...]
         [ 0        a'n2 ... a'nn ]

Use the Householder Transformation again to work on the second column, with a'22 as pivot, and annihilate all the elements below it. Working on the trailing (n-1) x (n-1) block, with c'2 = [a'22 ... a'n2]^T:

  v2 := c'2 + ||c'2|| e1        ((n-1) x 1)

  U2 = [ 1  0  ]        H2 = In-1 - 2 v2 v2^T / (v2^T v2),   U2^T U2 = In
       [ 0  H2 ]

  U2 U1 A = [ -||c1||  *        * ... * ]
            [ 0        -||c'2|| * ... * ]
            [ 0        0        * ... * ]
            [ ...                    ...]
148
SOLO Matrices

Factorization A = Q R Using Householder Transformation

By continuing this procedure until the n-th column we obtain:

  Un-1 ... U2 U1 A = [ -||c1||  *        ...  * ]
                     [ 0        -||c'2|| ...  * ]
                     [ ...               ...  * ]
                     [ 0        0        ...  * ] = R

with Q^H := Un-1 ... U2 U1 and * denoting elements not necessarily zero; therefore

  Anxn = Qnxn Rnxn

Return to Matrix Decomposition
149
SOLO Matrices

Vector Space

Orthonormal Vectors

Let x1, x2, ..., xn denote a set of elements of the vector space V. Define the Gram Matrix of the set:

  G (x1, x2, ..., xn) := [ (x1,x1) (x1,x2) ... (x1,xn) ]
                         [ (x2,x1) (x2,x2) ... (x2,xn) ]
                         [ ...                      ...]
                         [ (xn,x1) (xn,x2) ... (xn,xn) ]

Jorgen Gram
1850 - 1916

Theorem: A set of vectors x1, x2, ..., xn of the vector space V is linearly dependent if and only if the Gram determinant of the set is zero.

Proof: The set is linearly dependent iff

  α1 x1 + α2 x2 + ... + αn xn = 0,   αi not all equal zero

Multiplying (inner product) this equation consecutively by x1, x2, ..., xn we obtain a homogeneous linear system whose matrix is G (x1, x2, ..., xn); a nontrivial solution (α1, ..., αn) exists iff

  det G (x1, x2, ..., xn) = 0        q.e.d.
150
SOLO
Matrices

Vector Space

Orthonormal Sets (continue – 2)

Theorem: A set of vectors x1, x2, ..., xn of the vector space V is linearly dependent if and only if the Gram determinant of the set is zero.

Corollary: The rank of the Gram matrix G (x1, x2, ..., xn) equals the dimension of the linear manifold L (x1, x2, ..., xn). If the Gram determinant is nonzero, the Gram determinant of any subset is also nonzero.

Definition 1: Two elements x, y of a vector space V are said to be orthogonal if (x, y) = 0.

Definition 2: Let S be a nonempty subset of a vector space V. S is called an orthogonal set if (x, y) = 0 for every pair x, y ∈ S, x ≠ y. If in addition ||x|| = 1 for every x ∈ S, then S is called an orthonormal set.

Lemma: Every orthogonal set is linearly independent. If x is orthogonal to every element of the set x1, x2, ..., xn, then x is orthogonal to the manifold L (x1, x2, ..., xn).

Proof: The Gram matrix of an orthogonal set has only a nonzero diagonal; therefore the determinant

  det G (x1, x2, ..., xn) = ||x1||^2 ||x2||^2 ... ||xn||^2 ≠ 0

and the set is linearly independent.

If (x, xi) = 0, i = 1, 2, ..., n then for every y = Σ_{i=1}^{n} ηi xi ∈ L (x1, ..., xn) we have:

  (x, y) = Σ_{i=1}^{n} ηi (x, xi) = 0        q.e.d.
151
SOLO
Matrices

Vector Space

Orthonormal Sets (continue – 3)

Gram-Schmidt Orthogonalization Process

Jorgen Gram 1850 - 1916
Erhard Schmidt 1876 - 1959

Let X = {x1, x2, ..., xn} be any finite set of linearly independent vectors and L (x1, x2, ..., xn) the manifold spanned by the set X.

The Gram-Schmidt orthogonalization process derives a set E = {e1, e2, ..., en} of orthonormal elements from the set X. Requiring each new yi to be orthogonal to all previous ones fixes the coefficients:

  y1 = x1

  0 = (y2, y1)   =>   y2 = x2 - ((x2, y1) / (y1, y1)) y1

  0 = (yi, yk), k < i   =>   yi = xi - Σ_{j=1}^{i-1} ((xi, yj) / (yj, yj)) yj
SOLO
Vector Space
Orthonormal Sets (continue – 4)
Gram-Schmidt Orthogonalization Process (continue)
Jorgen Gram1850 - 1916
Erhard Schmidt1876 - 1959
11 xy
111
2122 ,
,y
yy
xyxy
1
1 ,
,i
jj
ii
jiii y
yy
yxxy
2/1
11
11
,
yy
ye
2/1
22
22
,
yy
ye
2/1,
ii
ii
yy
ye
1
1 ,
,n
jj
ii
jnnn y
yy
yxxy
2/1,
nn
nn
yy
ye
Orthogonalization Normalization
Matrices
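The two-phase process above (orthogonalize, then normalize) can be sketched directly in NumPy; this is the classical Gram-Schmidt of the slides, not the modified variant used in production code:

```python
import numpy as np

def gram_schmidt(X):
    """Columns of X -> orthonormal columns spanning the same space."""
    Y = []
    for x in X.T:
        y = x.astype(float).copy()
        for q in Y:                      # subtract projections on previous y's
            y -= (x @ q) / (q @ q) * q
        Y.append(y)
    # Normalization step: e_i = y_i / ||y_i||.
    return np.column_stack([y / np.linalg.norm(y) for y in Y])

X = np.array([[1.0, 1.0],
              [0.0, 1.0]])
E = gram_schmidt(X)
orthonormal = np.allclose(E.T @ E, np.eye(2))
```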
153
SOLO
Matrices

QR Decomposition Using Gram-Schmidt Procedure

Given a Nonsingular Matrix Anxn = [x1 x2 ... xn], i.e. the columns x1, x2, ..., xn are linearly independent, we can use the Gram-Schmidt Procedure to obtain an Orthonormal basis e1, e2, ..., en:

  x1 = y1 = (y1, y1)^1/2 e1 = r11 e1

  x2 = y2 + ((x2, y1) / (y1, y1)) y1 = r22 e2 + r12 e1

  xi = yi + Σ_{j=1}^{i-1} ((xi, yj) / (yj, yj)) yj = rii ei + Σ_{j=1}^{i-1} rji ej

  xn = yn + Σ_{j=1}^{n-1} ((xn, yj) / (yj, yj)) yj = rnn en + Σ_{j=1}^{n-1} rjn ej

with rii = (yi, yi)^1/2 and rji = (xi, yj) / (yj, yj)^1/2.
154
SOLO Matrices

QR Decomposition Using Gram-Schmidt Procedure

Given a Nonsingular Matrix Anxn, i.e. the columns x1, x2, ..., xn are linearly independent, the Gram-Schmidt Procedure gives an Orthonormal basis e1, e2, ..., en with

  x1 = r11 e1
  x2 = r12 e1 + r22 e2
  ...
  xi = r1i e1 + ... + rii ei
  ...
  xn = r1n e1 + ... + rnn en

or

  A = [x1 x2 ... xn] = [e1 e2 ... en] [ r11 r12 ... r1n ]
                        (= Q)         [ 0   r22 ... r2n ]
                                      [ ...          ...]
                                      [ 0   0   ... rnn ]   (= R)

We obtained a Q R decomposition (Q orthonormal: Q^H Q = In, R Upper Triangular) of a Nonsingular Square Matrix Anxn:

  Anxn = Qnxn Rnxn

The Householder QR Factorization is used more, since it has greater numerical stability than the Gram-Schmidt Method.

Return to Matrix Decomposition
155
SOLO Matrices

Iterative QR Decomposition Algorithm – Francis, Kublanovskaya (1961)

One of the most popular methods today, the QR algorithm, was proposed independently by John G.F. Francis and Vera Kublanovskaya in 1961.

John Guy Figgis Francis 1934 -
Vera Nikolaevna Kublanovskaya 1920 -

QR Algorithm:

  A1 = A
  Ai = Qi Ri          (QR factorization)        i = 1, 2, ...
  Ai+1 = Ri Qi

where Qi is unitary and Ri is upper triangular.

Since

  Ai+1 = Ri Qi = Qi^H (Qi Ri) Qi = Qi^H Ai Qi

each stage corresponds to a Unitary Transformation, and the Eigenvalues are Invariant.

The decomposition is performed by a sequence of Householder Matrices. We have

  Ai+1 = Qi^H Ai Qi = Qi^H Qi-1^H Ai-1 Qi-1 Qi = ... = Qi^H ... Q1^H A Q1 ... Qi = Ti^H A Ti
156
SOLO Matrices

Iterative QR Decomposition Algorithm – Francis, Kublanovskaya (1961)

QR Algorithm (continue – 1):

We have

  Ai+1 = Ti^H A Ti,   Ti := Q1 Q2 ... Qi,   Ti^H Ti = In

This leads to A Ti = Ti Ai+1.

Define an Upper Triangular Matrix Ui := Ri Ri-1 ... R1. Then

  Ti Ui = Q1 ... Qi Ri ... R1 = Q1 ... Qi-1 Ai Ri-1 ... R1
        = Q1 ... Qi-1 (Ti-1^H A Ti-1) Ri-1 ... R1 = A Ti-1 Ui-1

Let us continue this recurrence:

  Ti Ui = A Ti-1 Ui-1 = A^2 Ti-2 Ui-2 = ... = A^(i-1) T1 U1 = A^(i-1) Q1 R1 = A^i
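A few unshifted sweeps of the basic iteration Ai+1 = Ri Qi can be run directly in NumPy; practical implementations add shifts and a Hessenberg reduction, which this sketch omits (the matrix is an arbitrary symmetric choice so convergence is easy to see):

```python
import numpy as np

# Unshifted QR algorithm: A_{i+1} = R_i Q_i drives A toward triangular form,
# with the eigenvalues appearing on the diagonal.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

Ai = A.copy()
for _ in range(50):
    Q, R = np.linalg.qr(Ai)
    Ai = R @ Q

diag_sorted = np.sort(np.diag(Ai))
eig_sorted = np.sort(np.linalg.eigvals(A).real)
converged = np.allclose(diag_sorted, eig_sorted, atol=1e-6)
subdiag_small = abs(Ai[1, 0]) < 1e-6
```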
157
SOLO Matrices

Iterative QR Decomposition Algorithm – Francis, Kublanovskaya (1961)

QR Algorithm (continue – 2):

Assume that A has Eigenvalues satisfying |λ1| > |λ2| > ... > |λn| and that we can write A = X Λ X^-1, where Λ is diagonal whose entries are the Eigenvalues in descending order.

Assume also that we can write

  X = Q R          (Q unitary, Q^H Q = In, R Upper Triangular)
  X^-1 = L U       (L Lower Triangular with lkk = 1, lkj = 0 for k < j, U Upper Triangular)

then

  Ti Ui = A^i = X Λ^i X^-1 = Q R Λ^i L U = Q R (Λ^i L Λ^-i) Λ^i U
158
SOLO Matrices

Iterative QR Decomposition Algorithm – Francis, Kublanovskaya (1961)

QR Algorithm (continue – 3):

  Ti Ui = A^i = X Λ^i X^-1 = Q R (Λ^i L Λ^-i) Λ^i U,   where Λ = diag (λ1, λ2, ..., λn)

We have Λ^i L Λ^-i → In as i → ∞, since its (k, j) entry is (λk/λj)^i lkj, with lkj = 0 for k < j, lkk = 1, and (λk/λj)^i → 0 for k > j as i → ∞.

Hence we may write Λ^i L Λ^-i = In + Ei, Ei → 0, where Ei is strictly lower triangular, so

  Ti Ui = Q R (In + Ei) Λ^i U = Q (In + R Ei R^-1) R Λ^i U = Q (In + Fi) R Λ^i U,   Fi := R Ei R^-1 → 0

Ti and Q are both unitary, Ui and R Λ^i U are both upper triangular, and In + Fi → In. The uniqueness of the QR factorization tells us that Ti → Q D, where D is a diagonal matrix having ±1 elements, i.e. D^2 = In.
159
SOLO Matrices

Iterative QR Decomposition Algorithm – Francis, Kublanovskaya (1961)

QR Algorithm (continue – 4):

Finally, Ti → Q D, so

  Ai+1 = Ti^H A Ti → (Q D)^H A (Q D) = D^H Q^H (X Λ X^-1) Q D
       = D^H Q^H (Q R Λ R^-1 Q^H) Q D        (using X = Q R, X^-1 = R^-1 Q^H)

or

  Ai+1 → D^H R Λ R^-1 D

D^H R Λ R^-1 D is upper triangular with the Eigenvalues of A ordered down the diagonal.

Although the QR Factorization is more computationally rigorous than the LU Factorization, the method offers superior stability properties.

Return to Matrix Decomposition
160
SOLO Matrices

Matrix Decompositions

Eigendecomposition

Also called spectral decomposition
• Applicable to: square matrix A.
• Decomposition: A = S D S^-1, where D is a diagonal matrix formed from the eigenvalues of A, and the columns of S are the corresponding eigenvectors of A.
• Existence: An n-by-n matrix A always has n eigenvalues, which can be ordered (in more than one way) to form an n-by-n diagonal matrix D and a corresponding matrix of nonzero columns S that satisfies the eigenvalue equation AS = SD. If the n eigenvalues are distinct (that is, none is equal to any of the others), then S is invertible, implying the decomposition A = SDS^-1.
• Comment: The eigendecomposition is useful for understanding the solution of a system of linear ordinary differential equations or linear difference equations. For example, the difference equation xt+1 = A xt starting from the initial condition x0 = c is solved by xt = A^t c, which is equivalent to xt = S D^t S^-1 c, where S and D are the matrices formed from the eigenvectors and eigenvalues of A. Since D is diagonal, raising it to the power t just involves raising each element on the diagonal to the power t. This is much easier to do and to understand than raising A to the power t, since A is usually not diagonal.

Return to Matrix Decomposition
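The difference-equation comment can be sketched numerically; the matrix and initial condition are arbitrary illustrative choices:

```python
import numpy as np

# x_{t+1} = A x_t solved as x_t = S D^t S^-1 c.
A = np.array([[0.5, 0.25],
              [0.25, 0.5]])
c = np.array([1.0, 0.0])
t = 10

eigvals, S = np.linalg.eig(A)
x_t = S @ np.diag(eigvals**t) @ np.linalg.inv(S) @ c

# Compare with direct repeated multiplication.
x_direct = c.copy()
for _ in range(t):
    x_direct = A @ x_direct
match = np.allclose(x_t, x_direct)
```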
161
SOLO Matrices

Matrix Decompositions

Jordan decomposition
• The Jordan normal form and the Jordan–Chevalley decomposition
• Applicable to: square matrix A
• Comment: the Jordan normal form generalizes the eigendecomposition to cases where there are repeated eigenvalues and A cannot be diagonalized; the Jordan–Chevalley decomposition does this without choosing a basis.

  A = S J S^-1

where:

  J = diag (J1, J2, ..., Jl),   Ji = [ λi  1   0  ...  0 ]
                                     [ 0   λi  1  ...  0 ]
                                     [ ...           ... ]   (ki x ki)
                                     [ 0   0   0  ...  1 ]
                                     [ 0   0   0  ... λi ]

Return to Matrix Decomposition
162
SOLO
Matrices

Schur Decomposition

Theorem: An nxn Matrix A, whose Eigenvalues are λ1, λ2, ..., λn, not necessarily distinct, can always be reduced to a triangular form:

  Z^-1 Anxn Z = [ λ1  *   ...  * ]
                [ 0   λ2  ...  * ]
                [ ...        ...]
                [ 0   0   ... λn ]

where Z is a Nonsingular Matrix and * denotes elements not necessarily zero.

Proof: Let us prove first that there exists a Nonsingular Matrix X1 such that

  X1^-1 Anxn X1 = [ λ1 | a1^T ]        a1^T  (1 x (n-1))
                  [ 0  | A1   ]        A1    ((n-1) x (n-1))
163
SOLO
Matrices

Schur Decomposition

Proof (continue – 1): Let us prove first that there exists a Nonsingular Matrix X1 such that

  X1^-1 Anxn X1 = [ λ1 | a1^T ]
                  [ 0  | A1   ]

Define by v1 the Eigenvector corresponding to the Eigenvalue λ1, given by:

  A v1 = λ1 v1

Construct a Nonsingular Matrix X1 having v1 as its first column.

Let us denote the n columns of A X1 as P1, P2, ..., Pn, i.e. A X1 = [P1 P2 ... Pn], and the n columns of X1^-1 A X1 as Q1, Q2, ..., Qn, i.e. X1^-1 A X1 = [Q1 Q2 ... Qn].

We have:

  P1 = A v1 = λ1 v1   &   Q1 = X1^-1 P1 = λ1 X1^-1 v1

Since v1 is the first column of X1, X1^-1 v1 is the first column of X1^-1 X1 = In, i.e. [1 0 ... 0]^T, so

  Q1 = λ1 [1 0 ... 0]^T

which is exactly the first column of the claimed form.
164
SOLO
Matrices

Schur Decomposition

Proof (continue – 2): We proved that there exists a Nonsingular Matrix X1 such that

  X1^-1 Anxn X1 = [ λ1 | a1^T ]
                  [ 0  | A1   ]

The (n-1) x (n-1) Matrix A1 has the n-1 Eigenvalues λ2, λ3, ..., λn. Let us define v2 as the solution of A1 v2 = λ2 v2. By choosing v2 as the first column of a Nonsingular (n-1) x (n-1) Matrix Y, we obtain as before

  Y^-1 A1 Y = [ λ2 | a2^T ]
              [ 0  | A2   ]

Let us define

  X2 := [ 1 | 0 ]        (n x n)
        [ 0 | Y ]
165
SOLO
Matrices

Schur Decomposition

Proof (continue – 3): We found Nonsingular X1 and Y such that

  X1^-1 Anxn X1 = [ λ1 | a1^T ]        Y^-1 A1 Y = [ λ2 | a2^T ]
                  [ 0  | A1   ]                    [ 0  | A2   ]

We also defined

  X2 := [ 1 | 0 ]
        [ 0 | Y ]

Then

  X2^-1 X1^-1 Anxn X1 X2 = [ 1 | 0    ] [ λ1 | a1^T ] [ 1 | 0 ]   [ λ1  *  | *  ]
                           [ 0 | Y^-1 ] [ 0  | A1   ] [ 0 | Y ] = [ 0   λ2 | *  ]
                                                                  [ 0   0  | A2 ]

Repeating the same process we finally obtain:

  Z^-1 Anxn Z = [ λ1  *   ...  * ]
                [ 0   λ2  ...  * ]
                [ ...        ...]
                [ 0   0   ... λn ]

where Z := X1 X2 ... Xn-1.        q.e.d.
166
SOLO
Matrices

Schur Decomposition

During the procedure of choosing the Nonsingular Matrices Xi we could use the Gram-Schmidt procedure to obtain Unitary Matrices Ui, i.e. Ui^H Ui = In, and finally Z = U = U1 U2 ... Un.

We obtain the Schur Decomposition (Schur Form, Schur Factorization, Schur Triangulation):

  Anxn = U [ λ1  *   ...  * ] U^H
           [ 0   λ2  ...  * ]
           [ ...        ...]
           [ 0   0   ... λn ]

* denotes elements not necessarily zero.

Return to Matrix Decomposition
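A minimal sketch of the Schur decomposition, assuming SciPy is available (`scipy.linalg.schur` returns the triangular factor T and the unitary factor; the example matrix is my own choice, with real eigenvalues so the real Schur form is truly triangular):

```python
import numpy as np
from scipy.linalg import schur

# A = U T U^H with U unitary, T upper triangular, eigenvalues on diag(T).
A = np.array([[4.0, 1.0, 0.0],
              [2.0, 3.0, 0.0],
              [0.0, 0.0, 5.0]])

T, U = schur(A)
triangular = np.allclose(T, np.triu(T))
reconstructs = np.allclose(U @ T @ U.T, A)
eigs_match = np.allclose(np.sort(np.diag(T)),
                         np.sort(np.linalg.eigvals(A).real))
```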
167
SOLO Matrices

Companion Matrix

Given the Linear Differential Equation with Constant Coefficients:

  d^n x/dt^n + a1 d^(n-1) x/dt^(n-1) + ... + an-1 dx/dt + an x = f

Let us rewrite it in Matrix form, using x = yn and:

  d yn/dt = yn-1,   d yn-1/dt = yn-2,  ...,  d y2/dt = y1
  d y1/dt = -a1 y1 - a2 y2 - ... - an yn + f

  d/dt [ y1   ]   [ -a1 -a2 ... -an-1 -an ] [ y1   ]   [ f ]
       [ y2   ]   [  1   0  ...   0    0  ] [ y2   ]   [ 0 ]
       [ ...  ] = [  0   1  ...   0    0  ] [ ...  ] + [...]
       [ yn-1 ]   [ ...                   ] [ yn-1 ]   [ 0 ]
       [ yn   ]   [  0   0  ...   1    0  ] [ yn   ]   [ 0 ]

Top Companion Matrix of the Linear Differential Equation with Constant Coefficients:

  ACT := [ -a1 -a2 ... -an-1 -an ]
         [  1   0  ...   0    0  ]
         [  0   1  ...   0    0  ]
         [ ...                   ]
         [  0   0  ...   1    0  ]
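The defining property of a companion matrix, that its eigenvalues are the roots of the characteristic polynomial of the differential equation, can be sketched in NumPy (the coefficients are an arbitrary illustrative choice):

```python
import numpy as np

# Top companion matrix of x''' + a1 x'' + a2 x' + a3 x = 0; its eigenvalues
# are the roots of s^3 + a1 s^2 + a2 s + a3.
a = [6.0, 11.0, 6.0]            # (s+1)(s+2)(s+3) = s^3 + 6 s^2 + 11 s + 6
n = len(a)

A_ct = np.zeros((n, n))
A_ct[0, :] = [-c for c in a]    # first row: -a1, -a2, ..., -an
A_ct[1:, :-1] = np.eye(n - 1)   # ones on the subdiagonal

eigs = np.sort(np.linalg.eigvals(A_ct).real)
roots = np.sort(np.roots([1.0] + a).real)
match = np.allclose(eigs, roots)
```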
168
SOLO Matrices

Companion Matrix (continue – 1)

Given the same Linear Differential Equation with Constant Coefficients, let us rewrite it in Matrix form, using x = z1 and:

  d z1/dt = z2,   d z2/dt = z3,  ...,  d zn-1/dt = zn
  d zn/dt = -an z1 - an-1 z2 - ... - a1 zn + f

  d/dt [ z1  ]   [  0    1    0  ...  0  ] [ z1  ]   [ 0 ]
       [ z2  ]   [  0    0    1  ...  0  ] [ z2  ]   [...]
       [ ... ] = [ ...               ... ] [ ... ] + [ 0 ]
       [ zn  ]   [ -an -an-1    ... -a1  ] [ zn  ]   [ f ]

Bottom Companion Matrix of the Linear Differential Equation with Constant Coefficients:

  ACB := [  0    1    0  ...  0 ]
         [  0    0    1  ...  0 ]
         [ ...              ... ]
         [  0    0    0  ...  1 ]
         [ -an -an-1    ... -a1 ]
169
SOLO Matrices

Companion Matrix (continue – 2)

Relations between ACT, ACB: with the reversal (exchange) matrix

  S = S^-1 = [ 0 ... 0 1 ]
             [ 0 ... 1 0 ]
             [ ...      ]
             [ 1 ... 0 0 ]

the Change of Coordinates y = S z (y1 = zn, y2 = zn-1, ..., yn = z1) maps the bottom-companion system

  d z1/dt = z2,  ...,  d zn/dt = -an z1 - an-1 z2 - ... - a1 zn + f

into the top-companion system

  d y1/dt = -a1 y1 - a2 y2 - ... - an yn + f,  ...,  d yn/dt = yn-1

and the two Companion Matrices are Similar:

  ACB = S^-1 ACT S
170
SOLO Matrices

Companion Matrix (continue – 3)

Given the same Linear Differential Equation with Constant Coefficients, let us rewrite it in Matrix form, using x = wn and:

  d w1/dt = -an wn + f
  d w2/dt = w1 - an-1 wn
  ...
  d wn/dt = wn-1 - a1 wn

  d/dt [ w1  ]   [ 0 0 ... 0 -an   ] [ w1  ]   [ f ]
       [ w2  ]   [ 1 0 ... 0 -an-1 ] [ w2  ]   [ 0 ]
       [ ... ] = [ 0 1 ... 0 -an-2 ] [ ... ] + [...]
       [ wn  ]   [ 0 0 ... 1 -a1   ] [ wn  ]   [ 0 ]

Right Companion Matrix of the Linear Differential Equation with Constant Coefficients:

  ACR := [ 0 0 ... 0 -an   ]
         [ 1 0 ... 0 -an-1 ]
         [ 0 1 ... 0 -an-2 ]
         [ ...         ... ]
         [ 0 0 ... 1 -a1   ]
171
SOLO Matrices
Companion Matrix (continue – 4)
Given the Linear Differential Equation with Constant Coefficients:

\[ \frac{d^n x}{dt^n} + a_1 \frac{d^{n-1} x}{dt^{n-1}} + \cdots + a_{n-1} \frac{dx}{dt} + a_n x = f \]

Let us rewrite it in matrix form, using $x = v_1$ and:

\[ \frac{dv_1}{dt} = -a_1 v_1 + v_2, \quad \frac{dv_2}{dt} = -a_2 v_1 + v_3, \quad \ldots, \quad \frac{dv_{n-1}}{dt} = -a_{n-1} v_1 + v_n, \quad \frac{dv_n}{dt} = -a_n v_1 + f \]

\[
\frac{d}{dt}\begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_{n-1} \\ v_n \end{bmatrix}
= \begin{bmatrix}
-a_1 & 1 & 0 & \cdots & 0 \\
-a_2 & 0 & 1 & \cdots & 0 \\
\vdots & & & \ddots & \vdots \\
-a_{n-1} & 0 & 0 & \cdots & 1 \\
-a_n & 0 & 0 & \cdots & 0
\end{bmatrix}
\begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_{n-1} \\ v_n \end{bmatrix}
+ \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ f \end{bmatrix}
\]

Left Companion Matrix of the Linear Differential Equation with Constant Coefficients:

\[
A_{CL} := \begin{bmatrix}
-a_1 & 1 & 0 & \cdots & 0 \\
-a_2 & 0 & 1 & \cdots & 0 \\
\vdots & & & \ddots & \vdots \\
-a_{n-1} & 0 & 0 & \cdots & 1 \\
-a_n & 0 & 0 & \cdots & 0
\end{bmatrix}
\]
172
SOLO Matrices
Companion Matrix (continue – 5)
Relations between $A_{CR}$ and $A_{CL}$:

\[
A_{CL} = S^{-1} A_{CR}\,S, \qquad
S := \begin{bmatrix}
0 & \cdots & 0 & 1 \\
0 & \cdots & 1 & 0 \\
\vdots & & & \vdots \\
1 & \cdots & 0 & 0
\end{bmatrix} = S^{-1} = S^T
\]

Change of coordinates:

\[ \underline w = S\,\underline v \qquad (w_1 = v_n,\ w_2 = v_{n-1},\ \ldots,\ w_n = v_1) \]

carries the Left Companion system

\[ \frac{dv_1}{dt} = -a_1 v_1 + v_2, \quad \ldots, \quad \frac{dv_{n-1}}{dt} = -a_{n-1} v_1 + v_n, \quad \frac{dv_n}{dt} = -a_n v_1 + f \]

into the Right Companion system

\[ \frac{dw_1}{dt} = -a_n w_n + f, \quad \frac{dw_2}{dt} = w_1 - a_{n-1} w_n, \quad \ldots, \quad \frac{dw_n}{dt} = w_{n-1} - a_1 w_n \]
173
SOLO Matrices
Companion Matrix (continue – 6)
Relations between $A_{CL}$ and $A_{CB}$: change of coordinates

\[
\underline v = B\,\underline z, \qquad
B := \begin{bmatrix}
1 & 0 & \cdots & 0 & 0 \\
a_1 & 1 & \cdots & 0 & 0 \\
a_2 & a_1 & \ddots & & \vdots \\
\vdots & & \ddots & 1 & 0 \\
a_{n-1} & a_{n-2} & \cdots & a_1 & 1
\end{bmatrix}
\]

$B$ is a lower triangular Toeplitz matrix. Indeed, with $z_k = d^{k-1}x/dt^{k-1}$ and $v_k = z_k + a_1 z_{k-1} + \cdots + a_{k-1} z_1$, each row of $B$ reproduces the definition of $v_k$. Substituting $\underline v = B\,\underline z$ into the two systems

\[ \frac{dz_1}{dt} = z_2,\ \ldots,\ \frac{dz_n}{dt} = -a_n z_1 - \cdots - a_1 z_n + f \qquad \text{and} \qquad \frac{dv_1}{dt} = -a_1 v_1 + v_2,\ \ldots,\ \frac{dv_n}{dt} = -a_n v_1 + f \]

gives

\[ A_{CL}\,B = B\,A_{CB} \quad\Longrightarrow\quad A_{CL} = B\,A_{CB}\,B^{-1} \]
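The Toeplitz similarity can be verified numerically (NumPy/SciPy sketch with illustrative coefficients, not from the original slides):

```python
import numpy as np
from scipy.linalg import toeplitz

a = np.array([2.0, -3.0, 1.0, 5.0])   # illustrative coefficients a1..a4
n = len(a)

A_CB = np.zeros((n, n)); A_CB[:-1, 1:] = np.eye(n - 1); A_CB[-1, :] = -a[::-1]

# Left companion matrix: negated coefficients in the first column,
# ones on the superdiagonal
A_CL = np.zeros((n, n)); A_CL[:, 0] = -a; A_CL[:-1, 1:] += np.eye(n - 1)

# Lower triangular Toeplitz matrix B with first column [1, a1, ..., a_{n-1}]
B = toeplitz(np.concatenate(([1.0], a[:-1])), np.zeros(n))

residual = np.abs(A_CL @ B - B @ A_CB).max()   # A_CL B = B A_CB
```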
174
SOLO Matrices
Companion Matrix (continue – 7)
Theorem:

\[ \det(\lambda I_n - A_{CT}) = \det(\lambda I_n - A_{CB}) = \det(\lambda I_n - A_{CR}) = \det(\lambda I_n - A_{CL}) = \lambda^n + a_1 \lambda^{n-1} + \cdots + a_{n-1}\lambda + a_n \]

Proof:

Let us prove this for the Left Companion Matrix $A_{CL}$:

\[
\det(\lambda I_n - A_{CL}) = \det \begin{bmatrix}
\lambda + a_1 & -1 & 0 & \cdots & 0 \\
a_2 & \lambda & -1 & \cdots & 0 \\
\vdots & & & \ddots & \vdots \\
a_{n-1} & 0 & \cdots & \lambda & -1 \\
a_n & 0 & \cdots & 0 & \lambda
\end{bmatrix}
\]

Expand along the first column. The entry $\lambda + a_1$ multiplies the minor $\lambda^{n-1}$; for $k \ge 2$, the entry $a_k$ in row $k$ carries the sign $(-1)^{k+1}$ and a block-triangular minor whose determinant is $(-1)^{k-1}\lambda^{n-k}$, so each coefficient contributes $a_k \lambda^{n-k}$:

\[ \det(\lambda I_n - A_{CL}) = (\lambda + a_1)\lambda^{n-1} + a_2 \lambda^{n-2} + \cdots + a_{n-1}\lambda + a_n = \lambda^n + a_1\lambda^{n-1} + \cdots + a_{n-1}\lambda + a_n \]

q.e.d.
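A numerical spot-check of the theorem (NumPy sketch; coefficients and sample points are illustrative): $\det(\lambda I - A_{CL})$ evaluated at a few values of $\lambda$ agrees with the polynomial $\lambda^n + a_1\lambda^{n-1} + \cdots + a_n$.

```python
import numpy as np

a = np.array([2.0, -3.0, 1.0, 5.0])   # illustrative coefficients a1..a4
n = len(a)
p = np.concatenate(([1.0], a))        # [1, a1, ..., an]

# Left companion matrix
A_CL = np.zeros((n, n)); A_CL[:, 0] = -a; A_CL[:-1, 1:] += np.eye(n - 1)

# det(lambda I - A_CL) at sample points vs. polynomial evaluation
max_err = 0.0
for lam in (0.7, -1.3, 2.5):
    d = np.linalg.det(lam * np.eye(n) - A_CL)
    max_err = max(max_err, abs(d - np.polyval(p, lam)))
```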
175
SOLO Matrices
Companion Matrix (continue – 8)
Relations between $A_{CT}, A_{CB}, A_{CR}, A_{CL}$ — we found:

\[ A_{CB} = S^{-1} A_{CT}\,S, \qquad A_{CL} = B\,A_{CB}\,B^{-1}, \qquad A_{CL} = S^{-1} A_{CR}\,S, \qquad A_{CR} = S\,B\,A_{CB}\,B^{-1} S^{-1} \]

with the exchange matrix $S = S^{-1} = S^T$ and the lower triangular Toeplitz matrix

\[
B := \begin{bmatrix}
1 & & & \\
a_1 & 1 & & \\
\vdots & \ddots & \ddots & \\
a_{n-1} & \cdots & a_1 & 1
\end{bmatrix}
\]

Proof (continue – 1):

Since $S$ and $B$ are nonsingular, similar matrices have equal characteristic determinants, e.g.

\[ \det(\lambda I_n - A_{CT}) = \det\big(S\,(\lambda I_n - A_{CB})\,S^{-1}\big) = \det S \,\det(\lambda I_n - A_{CB})\,\det S^{-1} = \det(\lambda I_n - A_{CB}) \]

and likewise for $A_{CL}$ and $A_{CR}$; therefore:

\[ \det(\lambda I_n - A_{CT}) = \det(\lambda I_n - A_{CB}) = \det(\lambda I_n - A_{CR}) = \det(\lambda I_n - A_{CL}) = \lambda^n + a_1\lambda^{n-1} + \cdots + a_{n-1}\lambda + a_n \]

q.e.d.
176
SOLO Matrices
Companion Matrix (continue – 9)
Theorem:

\[ \det(\lambda I_n - A_{CB}) = \lambda^n + a_1\lambda^{n-1} + \cdots + a_{n-1}\lambda + a_n \]

Assume that the characteristic equation of the Bottom Companion Matrix $A_{CB}$ has $n$ eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$; then an eigenvector corresponding to the eigenvalue $\lambda_i$ is $[1\ \lambda_i\ \lambda_i^2\ \cdots\ \lambda_i^{n-1}]^T$.

If the eigenvalues are distinct, the Bottom Companion Matrix $A_{CB}$ is diagonalized by a Vandermonde matrix:

\[ A_{CB} = V(\lambda_1,\ldots,\lambda_n)\ \mathrm{Diag}(\lambda_1,\ldots,\lambda_n)\ V^{-1}(\lambda_1,\ldots,\lambda_n) \]

\[
V(\lambda_1,\ldots,\lambda_n) = \begin{bmatrix}
1 & 1 & \cdots & 1 \\
\lambda_1 & \lambda_2 & \cdots & \lambda_n \\
\lambda_1^2 & \lambda_2^2 & \cdots & \lambda_n^2 \\
\vdots & & & \vdots \\
\lambda_1^{n-1} & \lambda_2^{n-1} & \cdots & \lambda_n^{n-1}
\end{bmatrix}
\]
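The Vandermonde diagonalization can be checked numerically (NumPy sketch; the eigenvalues below are arbitrary distinct illustrative values, not from the original slides):

```python
import numpy as np

lam = np.array([1.0, 2.0, 3.0, 4.0])   # distinct illustrative eigenvalues
n = len(lam)

# Monic polynomial [1, a1, ..., an] with these roots
p = np.poly(lam)

# Bottom companion matrix built from a1..an
A_CB = np.zeros((n, n))
A_CB[:-1, 1:] = np.eye(n - 1)
A_CB[-1, :] = -p[1:][::-1]

# Vandermonde matrix with columns [1, lam_i, lam_i^2, lam_i^3]^T
V = np.vander(lam, increasing=True).T

residual = np.abs(A_CB @ V - V @ np.diag(lam)).max()   # A_CB V = V Diag(lam)
```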
177
SOLO Matrices
Companion Matrix (continue – 10)
Theorem:

\[ \det(\lambda I_n - A_{CB}) = \lambda^n + a_1\lambda^{n-1} + \cdots + a_{n-1}\lambda + a_n \]

Proof:

Assume that the characteristic equation of the Bottom Companion Matrix $A_{CB}$ has $n$ distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$. For an eigenvector $\underline u = [u_1\ u_2\ \cdots\ u_n]^T$, the equation $(\lambda_i I_n - A_{CB})\,\underline u = 0$ gives, row by row:

\[ u_2 = \lambda_i u_1, \quad u_3 = \lambda_i u_2, \quad \ldots, \quad u_n = \lambda_i u_{n-1} \]
\[ -a_n u_1 - a_{n-1} u_2 - \cdots - a_1 u_n = \lambda_i u_n \]

Choosing $u_1 = 1$ gives $u_k = \lambda_i^{k-1}$, so the eigenvector is $[1\ \lambda_i\ \lambda_i^2\ \cdots\ \lambda_i^{n-1}]^T$, and the last row becomes

\[ \lambda_i^n + a_1 \lambda_i^{n-1} + \cdots + a_{n-1}\lambda_i + a_n = 0, \qquad i = 1,\ldots,n \]

which holds because $\lambda_i$ is a root of the characteristic equation. q.e.d.
178
SOLO Matrices
Companion Matrix (continue – 11)
Relations between $A_{CT}, A_{CB}, A_{CR}, A_{CL}$ — we found:

\[ A_{CL} = B\,A_{CB}\,B^{-1}, \qquad A_{CL} = S^{-1} A_{CR}\,S, \qquad A_{CB} = S^{-1} A_{CT}\,S, \qquad A_{CR} = S\,B\,A_{CB}\,B^{-1} S^{-1} \]

\[
S := \begin{bmatrix}
0 & \cdots & 0 & 1 \\
\vdots & & 1 & 0 \\
0 & \cdots & & \vdots \\
1 & 0 & \cdots & 0
\end{bmatrix} = S^{-1}, \qquad
B := \begin{bmatrix}
1 & & & \\
a_1 & 1 & & \\
\vdots & \ddots & \ddots & \\
a_{n-1} & \cdots & a_1 & 1
\end{bmatrix}
\]

\[
A_{CT} := \begin{bmatrix}
-a_1 & \cdots & -a_n \\
1 & & \\
 & \ddots & \\
 & 1 & 0
\end{bmatrix}, \quad
A_{CB} := \begin{bmatrix}
0 & 1 & & \\
 & & \ddots & \\
 & & & 1 \\
-a_n & \cdots & & -a_1
\end{bmatrix}, \quad
A_{CR} := \begin{bmatrix}
0 & \cdots & 0 & -a_n \\
1 & & & \vdots \\
 & \ddots & & -a_2 \\
 & & 1 & -a_1
\end{bmatrix}, \quad
A_{CL} := \begin{bmatrix}
-a_1 & 1 & & \\
\vdots & & \ddots & \\
-a_{n-1} & & & 1 \\
-a_n & 0 & \cdots & 0
\end{bmatrix}
\]

Comparing the four forms shows directly:

\[ A_{CL} = A_{CT}^T, \qquad A_{CR} = A_{CB}^T \]
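The two transposition relations can be verified directly (NumPy sketch, illustrative coefficients, not from the original slides):

```python
import numpy as np

a = np.array([2.0, -3.0, 1.0, 5.0])   # illustrative coefficients a1..a4
n = len(a)

A_CT = np.zeros((n, n)); A_CT[0, :] = -a; A_CT[1:, :-1] = np.eye(n - 1)
A_CB = np.zeros((n, n)); A_CB[:-1, 1:] = np.eye(n - 1); A_CB[-1, :] = -a[::-1]
A_CL = np.zeros((n, n)); A_CL[:, 0] = -a; A_CL[:-1, 1:] += np.eye(n - 1)
A_CR = np.zeros((n, n)); A_CR[:, -1] = -a[::-1]; A_CR[1:, :-1] += np.eye(n - 1)

t1 = np.abs(A_CL - A_CT.T).max()   # A_CL = A_CT^T
t2 = np.abs(A_CR - A_CB.T).max()   # A_CR = A_CB^T
```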
179
SOLO Matrices
Companion Matrix (continue – 12)
Relations between $A_{CT}, A_{CB}, A_{CR}, A_{CL}$ — we also found:

\[ A_{CL} = B\,A_{CB}\,B^{-1}, \qquad A_{CL} = S^{-1} A_{CR}\,S, \qquad A_{CB} = S^{-1} A_{CT}\,S, \qquad A_{CL} = A_{CT}^T, \qquad A_{CR} = A_{CB}^T \]

with $S$ (exchange) and $B$ (lower triangular Toeplitz) as before. Then from $A_{CB} = V\,\mathrm{Diag}(\lambda_1,\ldots,\lambda_n)\,V^{-1}$:

\[ A_{CT} = S\,A_{CB}\,S^{-1} = (S V)\,\mathrm{Diag}(\lambda_i)\,(S V)^{-1} \]
\[ A_{CL} = B\,A_{CB}\,B^{-1} = (B V)\,\mathrm{Diag}(\lambda_i)\,(B V)^{-1} \]
\[ A_{CR} = S\,A_{CL}\,S^{-1} = (S B V)\,\mathrm{Diag}(\lambda_i)\,(S B V)^{-1} \]

and, using the transposition relations:

\[ A_{CL} = A_{CT}^T = \big[(S V)\,\mathrm{Diag}(\lambda_i)\,(S V)^{-1}\big]^T = (S V)^{-T}\,\mathrm{Diag}(\lambda_i)\,(S V)^{T} \]
\[ A_{CR} = A_{CB}^T = \big[V\,\mathrm{Diag}(\lambda_i)\,V^{-1}\big]^T = V^{-T}\,\mathrm{Diag}(\lambda_i)\,V^{T} \]
180
SOLO Matrices
Companion Matrix (continue – 13)
Comparing the two diagonalizations of $A_{CL}$,

\[ A_{CL} = (B V)\,\mathrm{Diag}(\lambda_i)\,(B V)^{-1} = (S V)^{-T}\,\mathrm{Diag}(\lambda_i)\,(S V)^{T} = A_{CT}^T \]

the identity can be verified by writing out the products $B V$, $S V$ and $V^T B^T$ elementwise, expressing the coefficients $a_k$ through the eigenvalues $\lambda_i$ via the characteristic polynomial; both sides exhibit the same Vandermonde-type structure.
181
SOLO Matrices
The Inverse of a Vandermonde Matrix.
The Vandermonde matrix admits the LU factorization $V(x_1,\ldots,x_n) = L\,U$:

\[
V_{n\times n} = \begin{bmatrix}
1 & 1 & \cdots & 1 \\
x_1 & x_2 & \cdots & x_n \\
\vdots & & & \vdots \\
x_1^{n-1} & x_2^{n-1} & \cdots & x_n^{n-1}
\end{bmatrix} = L\,U
\]

with the upper triangular factor

\[ U = [u_{ij}], \qquad u_{ij} = \begin{cases} \prod_{k=1}^{i-1}(x_j - x_k) & j \ge i \\ 0 & \text{otherwise} \end{cases} \]

and the unit lower triangular factor

\[ L = [l_{ij}], \qquad l_{ii} = 1, \quad l_{ij} = 0 \ (j > i), \quad l_{ij} = l_{i-1,j-1} + x_j\,l_{i-1,j} \ (j < i,\ \ l_{i,0} := 0) \]

For example, for $n = 4$:

\[
L = \begin{bmatrix}
1 & 0 & 0 & 0 \\
x_1 & 1 & 0 & 0 \\
x_1^2 & x_1 + x_2 & 1 & 0 \\
x_1^3 & x_1^2 + x_1 x_2 + x_2^2 & x_1 + x_2 + x_3 & 1
\end{bmatrix}, \qquad
U = \begin{bmatrix}
1 & 1 & 1 & 1 \\
0 & x_2 - x_1 & x_3 - x_1 & x_4 - x_1 \\
0 & 0 & (x_3 - x_1)(x_3 - x_2) & (x_4 - x_1)(x_4 - x_2) \\
0 & 0 & 0 & (x_4 - x_1)(x_4 - x_2)(x_4 - x_3)
\end{bmatrix}
\]

The inverse then follows as $V^{-1} = U^{-1} L^{-1}$, where $U^{-1}$ is upper triangular and $L^{-1}$ is unit lower triangular.
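The closed-form factors can be checked numerically. A NumPy sketch (the node values $x_k$ are arbitrary illustrative choices, not from the original slides) builds $L$ from the recursion and $U$ from the products of differences, and verifies $L U = V$:

```python
import numpy as np

x = np.array([2.0, 3.0, 5.0, 7.0])   # distinct illustrative nodes
n = len(x)

# Vandermonde matrix with column j equal to [1, x_j, x_j^2, x_j^3]^T
V = np.vander(x, increasing=True).T

# Unit lower triangular factor via l[i,j] = l[i-1,j-1] + x_j * l[i-1,j]
L = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1):
        if i == j:
            L[i, j] = 1.0
        else:
            prev = L[i - 1, j - 1] if j > 0 else 0.0
            L[i, j] = prev + x[j] * L[i - 1, j]

# Upper triangular factor: u[i,j] = prod_{k<i} (x_j - x_k) for j >= i
U = np.zeros((n, n))
for i in range(n):
    for j in range(i, n):
        U[i, j] = np.prod(x[j] - x[:i])   # empty product = 1 for i = 0

Vinv = np.linalg.inv(U) @ np.linalg.inv(L)
```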
182
SOLO Matrices
The Inverse of a Vandermonde Matrix.

Proof:

Let us compute the product $L\,U$ (shown for $n = 4$). Each column of $L U$ telescopes to the corresponding column of $V$; for example, column 2 gives

\[ 1, \qquad x_1 + (x_2 - x_1) = x_2, \qquad x_1^2 + (x_1 + x_2)(x_2 - x_1) = x_2^2, \qquad x_1^3 + (x_1^2 + x_1 x_2 + x_2^2)(x_2 - x_1) = x_2^3 \]

so $L\,U = V(x_1,\ldots,x_n)$. q.e.d.
183
SOLO Matrices
Matrix Analysis (Ogata Ch.5)
184
SOLO Matrices
Examples: eigenvalues, multiplicities and eigenvectors of 2×2 transformation matrices (algebraic multiplicity $n_i$, geometric multiplicity $m_i$; illustrations omitted):

- Counterclockwise rotation by φ: matrix $\begin{bmatrix}\cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi\end{bmatrix}$; characteristic equation $\lambda^2 - 2\lambda\cos\varphi + 1 = 0$; eigenvalues $\lambda_{1,2} = \cos\varphi \pm j\sin\varphi = e^{\pm j\varphi}$; $n_1 = n_2 = 1$, $m_1 = m_2 = 1$; eigenvectors $\underline v_{1,2} = [1\ \ \mp j]^T$.
- Horizontal shear: matrix $\begin{bmatrix}1 & k \\ 0 & 1\end{bmatrix}$; characteristic equation $(\lambda - 1)^2 = 0$; eigenvalue $\lambda_1 = 1$; $n_1 = 2$, $m_1 = 1$; eigenvector $\underline v_1 = [1\ \ 0]^T$.
- Scaling: matrix $\begin{bmatrix}k & 0 \\ 0 & k\end{bmatrix}$; characteristic equation $(\lambda - k)^2 = 0$; eigenvalue $\lambda_1 = k$; $n_1 = 2$, $m_1 = 2$; every nonzero vector is an eigenvector, e.g. $\underline v_1 = [1\ \ 0]^T$, $\underline v_2 = [0\ \ 1]^T$.
- Unequal scaling: matrix $\begin{bmatrix}k_1 & 0 \\ 0 & k_2\end{bmatrix}$; characteristic equation $(\lambda - k_1)(\lambda - k_2) = 0$; eigenvalues $\lambda_1 = k_1$, $\lambda_2 = k_2$; $n_1 = n_2 = 1$, $m_1 = m_2 = 1$; eigenvectors $\underline v_1 = [1\ \ 0]^T$, $\underline v_2 = [0\ \ 1]^T$.
http://en.wikipedia.org/wiki/Geometric_multiplicity#Algebraic_and_geometric_multiplicities
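The defective case (the horizontal shear) can be checked numerically; a NumPy sketch with an arbitrary illustrative $k$: the eigenvalue 1 has algebraic multiplicity 2 but only a one-dimensional eigenspace.

```python
import numpy as np

k = 2.0   # illustrative shear factor

# Horizontal shear: algebraic multiplicity 2, geometric multiplicity 1
A = np.array([[1.0, k],
              [0.0, 1.0]])

eigvals = np.linalg.eigvals(A)                        # both equal 1
geom_mult = 2 - np.linalg.matrix_rank(A - np.eye(2))  # dim of eigenspace of lambda = 1
```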
185
SOLO
References
[1] I. Creangã, T. Lucian, “Introduction to Tensorial Calculus” (Romanian), Editura Didacticã si Pedagogicã, Bucharest, 1963
[2] Pease, “Methods of Matrix Algebra”, Mathematics in Science and Engineering, Vol. 16, Academic Press, 1965
[3] K. Ogata, “State Space Analysis of Control Systems”, Prentice-Hall, Inc., 1967
[4] T. Kailath, “Linear Systems”, Prentice-Hall, Inc., 1980, Appendix, pp. 645–670
[5] G. Strang, “Linear Algebra and its Applications”, Academic Press, 2nd Ed., 1980
http://en.wikipedia.org/wiki
April 13, 2023
186
SOLO
Technion – Israeli Institute of Technology
1964 – 1968 BSc EE
1968 – 1971 MSc EE
Israeli Air Force
1970 – 1974
RAFAEL – Israeli Armament Development Authority
1974 –
Stanford University
1983 – 1986 PhD AA
187
SOLO
Square Block Matrix Inverse
Matrices
We have (assuming $A$ is $r \times r$, square and nonsingular):

\[
\begin{bmatrix} A & B \\ C & D \end{bmatrix}
= \begin{bmatrix} I_r & 0 \\ C A^{-1} & I_{n-r} \end{bmatrix}
\begin{bmatrix} A & 0 \\ 0 & D - C A^{-1} B \end{bmatrix}
\begin{bmatrix} I_r & A^{-1} B \\ 0 & I_{n-r} \end{bmatrix}
\]

or:

\[
\begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1}
= \begin{bmatrix} I_r & A^{-1} B \\ 0 & I_{n-r} \end{bmatrix}^{-1}
\begin{bmatrix} A & 0 \\ 0 & D - C A^{-1} B \end{bmatrix}^{-1}
\begin{bmatrix} I_r & 0 \\ C A^{-1} & I_{n-r} \end{bmatrix}^{-1}
\]

From this we obtain:

\[
\begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1}
= \begin{bmatrix} I_r & -A^{-1} B \\ 0 & I_{n-r} \end{bmatrix}
\begin{bmatrix} A^{-1} & 0 \\ 0 & (D - C A^{-1} B)^{-1} \end{bmatrix}
\begin{bmatrix} I_r & 0 \\ -C A^{-1} & I_{n-r} \end{bmatrix}
\]

Finally:

\[
\begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1}
= \begin{bmatrix}
A^{-1} + A^{-1} B (D - C A^{-1} B)^{-1} C A^{-1} & -A^{-1} B (D - C A^{-1} B)^{-1} \\
-(D - C A^{-1} B)^{-1} C A^{-1} & (D - C A^{-1} B)^{-1}
\end{bmatrix}
\]
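The block-inverse formula can be verified numerically (NumPy sketch; the block values are arbitrary illustrative choices with $A$ nonsingular, not from the original slides):

```python
import numpy as np

# Illustrative blocks: A is 2x2 and nonsingular, D is 1x1
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
B = np.array([[1.0],
              [2.0]])
C = np.array([[3.0, 1.0]])
D = np.array([[5.0]])

M = np.block([[A, B], [C, D]])

Ai = np.linalg.inv(A)
Si = np.linalg.inv(D - C @ Ai @ B)   # inverse of the Schur complement of A

# Block inverse formula (A nonsingular)
Minv = np.block([
    [Ai + Ai @ B @ Si @ C @ Ai, -Ai @ B @ Si],
    [-Si @ C @ Ai,               Si],
])

residual = np.abs(M @ Minv - np.eye(3)).max()
```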
188
SOLO Matrices
Square Block Matrix Inverse

We have (assuming $D$ square and nonsingular):

\[
\begin{bmatrix} A & B \\ C & D \end{bmatrix}
= \begin{bmatrix} I_r & B D^{-1} \\ 0 & I_{n-r} \end{bmatrix}
\begin{bmatrix} A - B D^{-1} C & 0 \\ 0 & D \end{bmatrix}
\begin{bmatrix} I_r & 0 \\ D^{-1} C & I_{n-r} \end{bmatrix}
\]

or:

\[
\begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1}
= \begin{bmatrix} I_r & 0 \\ D^{-1} C & I_{n-r} \end{bmatrix}^{-1}
\begin{bmatrix} A - B D^{-1} C & 0 \\ 0 & D \end{bmatrix}^{-1}
\begin{bmatrix} I_r & B D^{-1} \\ 0 & I_{n-r} \end{bmatrix}^{-1}
\]

From this we obtain:

\[
\begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1}
= \begin{bmatrix} I_r & 0 \\ -D^{-1} C & I_{n-r} \end{bmatrix}
\begin{bmatrix} (A - B D^{-1} C)^{-1} & 0 \\ 0 & D^{-1} \end{bmatrix}
\begin{bmatrix} I_r & -B D^{-1} \\ 0 & I_{n-r} \end{bmatrix}
\]

Finally:

\[
\begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1}
= \begin{bmatrix}
(A - B D^{-1} C)^{-1} & -(A - B D^{-1} C)^{-1} B D^{-1} \\
-D^{-1} C (A - B D^{-1} C)^{-1} & D^{-1} + D^{-1} C (A - B D^{-1} C)^{-1} B D^{-1}
\end{bmatrix}
\]
189
SOLO Matrices
Square Block Matrix Inverse

We have: if both $A$ and $D$ are square and nonsingular, then:

\[
\begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1}
= \begin{bmatrix}
A^{-1} + A^{-1} B (D - C A^{-1} B)^{-1} C A^{-1} & -A^{-1} B (D - C A^{-1} B)^{-1} \\
-(D - C A^{-1} B)^{-1} C A^{-1} & (D - C A^{-1} B)^{-1}
\end{bmatrix}
= \begin{bmatrix}
(A - B D^{-1} C)^{-1} & -(A - B D^{-1} C)^{-1} B D^{-1} \\
-D^{-1} C (A - B D^{-1} C)^{-1} & D^{-1} + D^{-1} C (A - B D^{-1} C)^{-1} B D^{-1}
\end{bmatrix}
\]

Equating blocks:

\[ (A - B D^{-1} C)^{-1} = A^{-1} + A^{-1} B (D - C A^{-1} B)^{-1} C A^{-1} \]
\[ (A - B D^{-1} C)^{-1} B D^{-1} = A^{-1} B (D - C A^{-1} B)^{-1} \]
\[ D^{-1} C (A - B D^{-1} C)^{-1} = (D - C A^{-1} B)^{-1} C A^{-1} \]
\[ (D - C A^{-1} B)^{-1} = D^{-1} + D^{-1} C (A - B D^{-1} C)^{-1} B D^{-1} \]

The last two equations can be obtained from the first two by changing $A \leftrightarrow D$ and $C \leftrightarrow B$.
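The first of these block identities (the matrix inversion lemma) can be spot-checked numerically (NumPy sketch; the block values are arbitrary illustrative choices, not from the original slides):

```python
import numpy as np

# Illustrative nonsingular blocks
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
B = np.array([[1.0],
              [2.0]])
C = np.array([[3.0, 1.0]])
D = np.array([[5.0]])

Ai = np.linalg.inv(A)
Di = np.linalg.inv(D)

# (A - B D^{-1} C)^{-1} = A^{-1} + A^{-1} B (D - C A^{-1} B)^{-1} C A^{-1}
lhs = np.linalg.inv(A - B @ Di @ C)
rhs = Ai + Ai @ B @ np.linalg.inv(D - C @ Ai @ B) @ C @ Ai

residual = np.abs(lhs - rhs).max()
```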
190
SOLO Matrices
Theorem: The inverse of a nonsingular upper/lower triangular square matrix is an upper/lower triangular square matrix.

Proof:

Start with the identity ($A$, $D$ square and nonsingular):

\[ \begin{bmatrix} A & B \\ 0 & D \end{bmatrix}^{-1} = \begin{bmatrix} A^{-1} & -A^{-1} B D^{-1} \\ 0 & D^{-1} \end{bmatrix} \]

Given the upper triangular matrix

\[
U_{n\times n} = \begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
0 & a_{22} & \cdots & a_{2n} \\
\vdots & & \ddots & \vdots \\
0 & 0 & \cdots & a_{nn}
\end{bmatrix}
\]

partition it as

\[
U_{n\times n} = \begin{bmatrix}
a_{11} & \begin{bmatrix} a_{12} & \cdots & a_{1n} \end{bmatrix} \\
0 & U_{(n-1)\times(n-1)}
\end{bmatrix}
\quad\Longrightarrow\quad
U_{n\times n}^{-1} = \begin{bmatrix}
a_{11}^{-1} & -a_{11}^{-1} \begin{bmatrix} a_{12} & \cdots & a_{1n} \end{bmatrix} U_{(n-1)\times(n-1)}^{-1} \\
0 & U_{(n-1)\times(n-1)}^{-1}
\end{bmatrix}
\]

Repeating the same partition on $U_{(n-1)\times(n-1)}$ (with leading entry $a_{22}$), and so on. In this way:

\[
U_{n\times n}^{-1} = \begin{bmatrix}
a_{11}^{-1} & * & \cdots & * \\
0 & a_{22}^{-1} & \cdots & * \\
\vdots & & \ddots & \vdots \\
0 & 0 & \cdots & a_{nn}^{-1}
\end{bmatrix}
\]

which is upper triangular. q.e.d. The same way for a lower triangular matrix.
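A quick numerical confirmation of the theorem (NumPy sketch; the matrix entries are arbitrary illustrative values): the inverse of a nonsingular upper triangular matrix stays upper triangular, with reciprocal diagonal entries.

```python
import numpy as np

# Illustrative nonsingular upper triangular matrix
U = np.array([[2.0, 1.0, 4.0],
              [0.0, 3.0, 5.0],
              [0.0, 0.0, 7.0]])

Uinv = np.linalg.inv(U)

strictly_lower = np.abs(np.tril(Uinv, -1)).max()    # stays 0: inverse is triangular
diag_ok = np.allclose(np.diag(Uinv), 1.0 / np.diag(U))
```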