Linear Algebra
Santanu Dey
February 18, 2011
Spectral Theorem
Similarity does not necessarily preserve distances. In terms of matrices, this may be noticed in the fact that an arbitrary conjugate C^{-1}AC of a Hermitian matrix may not be Hermitian. Thus the diagonalization problem for special matrices such as Hermitian matrices needs a special treatment, viz., we need to restrict C to those matrices which preserve the inner product. In this section we shall first establish an important result that says that a Hermitian matrix can be diagonalized using unitary matrices. We shall then see some applications of this to geometry.
ARS (IITB) MA106-Linear Algebra February 14, 2011 68 / 99
Triangularization
Closely related to the problem of diagonalization is the problem of triangularization. We shall use this concept as a stepping stone toward the solution of the diagonalization problem.
Definition
Two n × n matrices A and B are said to be congruent if there exists a unitary matrix C such that C∗AC = B.

Definition
We say A is triangularizable if there exists an invertible matrix C such that C^{-1}AC is upper triangular.
Remark
Obviously all diagonalizable matrices are triangularizable. The following result says that triangularizability causes the least problems:
Triangularization
Proposition
Over the complex numbers, every square matrix is congruent to an upper triangular matrix.
Proof: Let A be an n × n matrix with complex entries. We have to find a unitary matrix C such that C∗AC is upper triangular. We shall prove this by induction. For n = 1 there is nothing to prove. Assume the result for n − 1. Let µ be an eigenvalue of A and let v1 ∈ Cn \ {0} be such that Av1 = µv1. (Here we need to work with complex numbers; real numbers won't do. Why?)
Triangularization
We can choose v1 to be of norm 1.
We can then complete it to an orthonormal basis {v1, . . . , vn}. (Here we use Gram–Schmidt.) We then take C1 to be the matrix whose i-th column is vi, 1 ≤ i ≤ n. Then, as seen earlier, C1 is a unitary matrix. Put A1 = C1^{-1}AC1. Then

A1e1 = C1^{-1}AC1e1 = C1^{-1}Av1 = µC1^{-1}v1 = µe1.

This shows that the first column of A1 has all entries zero except the first one, which is equal to µ. Let B be the matrix obtained from A1 by deleting the first row and the first column, so that A1 is a block matrix of the form

A1 = [ µ        * ]
     [ 0_{n-1}  B ]

where 0_{n-1} is the column of size n − 1 consisting of zeros.
Triangularization
By induction there exists an (n − 1) × (n − 1) unitary matrix M such that M^{-1}BM is an upper triangular matrix. Put

M1 = [ 1        0^t_{n-1} ]
     [ 0_{n-1}  M         ].

Then M1 is unitary and hence C = C1M1 is also unitary. Clearly C^{-1}AC = M1^{-1}C1^{-1}AC1M1 = M1^{-1}A1M1, which is of the form (writing 0 for 0_{n-1})

[ 1  0^t    ] [ µ  * ] [ 1  0^t ]   [ µ  *        ]
[ 0  M^{-1} ] [ 0  B ] [ 0  M   ] = [ 0  M^{-1}BM ]

and hence is upper triangular. ♠
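The inductive construction in this proof (the Schur decomposition) translates almost line by line into code. Below is a minimal numpy sketch, not part of the original lecture; `triangularize` is a name chosen here for illustration, and QR factorization plays the role of the Gram–Schmidt completion step.

```python
import numpy as np

def triangularize(A):
    """Return a unitary C such that C* A C is upper triangular,
    mirroring the inductive proof: pick an eigenvector v1, complete
    it to an orthonormal basis, and recurse on the (n-1) block."""
    n = A.shape[0]
    if n == 1:
        return np.eye(1, dtype=complex)
    # An eigenvector v1 with A v1 = mu v1 (complex entries guarantee one exists).
    _, vecs = np.linalg.eig(A.astype(complex))
    v1 = vecs[:, [0]]  # eig returns unit eigenvectors as columns
    # Complete v1 to an orthonormal basis; QR performs the Gram-Schmidt step,
    # so the first column of C1 spans v1.
    C1, _ = np.linalg.qr(np.hstack([v1, np.eye(n)]))
    A1 = C1.conj().T @ A @ C1   # first column of A1 is (mu, 0, ..., 0)^t
    B = A1[1:, 1:]
    M = triangularize(B)        # induction on the (n-1) x (n-1) block
    M1 = np.eye(n, dtype=complex)
    M1[1:, 1:] = M              # M1 = [1, 0^t; 0, M]
    return C1 @ M1

# Example: triangularize a 3 x 3 matrix and check the result.
A = np.array([[1., 2., 0.], [3., 4., 5.], [0., 1., 2.]])
C = triangularize(A)
T = C.conj().T @ A @ C
assert np.allclose(np.tril(T, -1), 0)          # T is upper triangular
assert np.allclose(C.conj().T @ C, np.eye(3))  # C is unitary
```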
Triangularization
Remark
Assume now that A is a real matrix with all its eigenvalues real.
Then from Lemma 13 it follows that we can choose the eigenvector v1 to be a real vector and then complete this into an orthonormal basis for Rn. Thus the matrix C1 corresponding to this basis will have real entries. By induction M will have real entries and hence the product C = C1M1 will also have real entries. Thus we have proved:
Proposition
For a real square matrix A with all its eigenvalues real, there exists an orthogonal matrix C such that C^tAC is upper triangular.
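A quick numerical illustration of this proposition, a sketch assuming scipy is available (`scipy.linalg.schur` with `output="real"` computes exactly such an orthogonal C); the example matrix is chosen to have real eigenvalues (2 and 5):

```python
import numpy as np
from scipy.linalg import schur

A = np.array([[4., 1.], [2., 3.]])  # real matrix, eigenvalues 2 and 5 (both real)
T, C = schur(A, output="real")      # C orthogonal, T = C^t A C
assert np.allclose(C @ C.T, np.eye(2))   # C is orthogonal
assert np.allclose(C.T @ A @ C, T)
assert np.allclose(np.tril(T, -1), 0)    # truly triangular, as all eigenvalues are real
```

With complex eigenvalues the real Schur form is only block triangular (2 × 2 blocks on the diagonal), which is why the proposition needs the hypothesis that all eigenvalues are real.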
Normal Matrices
Definition
A square matrix A is called normal if A∗A = AA∗.
Remark
(i) Normality is congruence invariant. This means that if C is unitary and A is normal, then C^{-1}AC is also normal. This is easy to verify.
(ii) Any diagonal matrix is normal. Therefore it follows that in order for a matrix A to be congruent to a diagonal matrix, it is necessary that A be normal. Amazingly, this condition turns out to be sufficient as well. That is one simple reason to define this concept.
(iii) Observe that the product of two normal matrices may not be normal. For example, take

A = [ 0  1 ]    B = [ 1  0 ]
    [ 1  0 ],       [ 0  2 ].

(iv) Certainly Hermitian matrices are normal. Of course, there are normal matrices which are not Hermitian. For example, take

A = [ i   0 ]
    [ 0  -i ].
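Remarks (iii) and (iv) can be checked directly with the matrices from the slide; a minimal numpy sketch, with `is_normal` a helper name introduced here:

```python
import numpy as np

def is_normal(A):
    """A is normal iff A* A = A A*."""
    return np.allclose(A.conj().T @ A, A @ A.conj().T)

A = np.array([[0, 1], [1, 0]], dtype=complex)   # normal (it is Hermitian)
B = np.array([[1, 0], [0, 2]], dtype=complex)   # normal (it is diagonal)
assert is_normal(A) and is_normal(B)
assert not is_normal(A @ B)   # the product of two normal matrices need not be normal

D = np.array([[1j, 0], [0, -1j]])               # normal but not Hermitian
assert is_normal(D) and not np.allclose(D, D.conj().T)
```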
Normal Matrices
Lemma
For a normal matrix A we have ‖Ax‖^2 = ‖A∗x‖^2 for all x ∈ Cn; in particular, for each i, the norm of the i-th row of A is equal to the norm of the i-th column.
Proof: ‖Ax‖^2 = (Ax)∗(Ax) = x∗A∗Ax = x∗AA∗x = (A∗x)∗(A∗x) = ‖A∗x‖^2. Taking x = ei, we get that the norm of the i-th column of A is equal to the norm of the i-th column of A∗, which is the norm of the conjugate of the i-th row of A, i.e., the norm of the i-th row of A. ♠
Lemma
If A is normal, then v is an eigenvector of A with eigenvalue µ if and only if v is an eigenvector of A∗ with eigenvalue µ̄ (the complex conjugate).
Proof: Observe that if A is normal then A − µI is also normal. Now (A − µI)v = 0 iff ‖(A − µI)v‖ = 0 iff ‖(A − µI)∗v‖ = 0 iff (A∗ − µ̄I)v = 0. ♠
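Both lemmas can be sanity-checked numerically. A sketch assuming numpy; the normal matrix is built here as A = U D U∗ with U unitary and D diagonal, which makes an eigenpair available by construction:

```python
import numpy as np

rng = np.random.default_rng(0)
# Build a normal matrix A = U D U* (U unitary via QR, D diagonal complex).
U, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
D = np.diag(rng.normal(size=4) + 1j * rng.normal(size=4))
A = U @ D @ U.conj().T
assert np.allclose(A.conj().T @ A, A @ A.conj().T)  # A is normal

# First lemma: ||Ax|| = ||A*x|| for every x.
x = rng.normal(size=4) + 1j * rng.normal(size=4)
assert np.isclose(np.linalg.norm(A @ x), np.linalg.norm(A.conj().T @ x))

# Second lemma: an eigenvector v of A with eigenvalue mu is an
# eigenvector of A* with eigenvalue conj(mu).
mu, v = D[0, 0], U[:, 0]
assert np.allclose(A @ v, mu * v)
assert np.allclose(A.conj().T @ v, np.conj(mu) * v)
```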
Normal Matrices
Proposition
An upper triangular normal matrix is diagonal.
Proof: Let A be an upper triangular normal matrix. Inductively, we shall show that aij = 0 for j > i. We have Ae1 = a11e1, hence ‖Ae1‖^2 = |a11|^2. On the other hand, by the first lemma above this is equal to ‖A∗e1‖^2 = |a11|^2 + |a12|^2 + · · · + |a1n|^2. Hence a12 = a13 = · · · = a1n = 0. Inductively, suppose we have shown aij = 0 for j > i for all 1 ≤ i ≤ k − 1. Then it follows that Aek = akkek. Exactly as in the first case, this implies that ‖A∗ek‖^2 = |ak,k|^2 + |ak,k+1|^2 + · · · + |ak,n|^2 = |ak,k|^2. Hence ak,k+1 = · · · = ak,n = 0. ♠
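Combined with the earlier triangularization proposition, this gives the diagonalization of normal (in particular Hermitian) matrices that this section is heading toward: triangularize A unitarily, and normality forces the triangular form to be diagonal. A sketch assuming scipy is available:

```python
import numpy as np
from scipy.linalg import schur

# A Hermitian (hence normal) matrix.
A = np.array([[2, 1 - 1j], [1 + 1j, 3]])
T, C = schur(A, output="complex")   # C unitary, T = C* A C upper triangular
# Since A is normal, the upper triangular T must in fact be diagonal:
assert np.allclose(T, np.diag(np.diag(T)))
assert np.allclose(np.diag(T).imag, 0)   # Hermitian => real eigenvalues
```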
ARS (IITB) MA106-Linear Algebra February 14, 2011 77 / 99
Normal Matrices

Why all this fuss? Well, for one thing, we now have a big theorem. All we have to do is to combine Propositions 13 and 15.

Theorem (Spectral Theorem)
Given any normal matrix A ∈ M(n, C), there exists a unitary matrix C such that C∗AC is a diagonal matrix.

Corollary
(a) Every Hermitian matrix A is congruent to a diagonal matrix.
(b) A real symmetric matrix is real-congruent to a diagonal matrix.

Proof: (a) We simply observe that a Hermitian matrix is normal and apply the above theorem.
(b) We first recall that for a real symmetric matrix all eigenvalues are real. Hence Proposition 14 is applicable, and so we can choose C to be an orthogonal matrix. Along with Proposition 15, this gives the result. ♠
ARS (IITB) MA106-Linear Algebra February 14, 2011 78 / 99
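The spectral theorem can be illustrated numerically (a sketch using NumPy; for a normal matrix with distinct eigenvalues the eigenvectors are automatically orthogonal, so `np.linalg.eig` already yields a unitary C here):

```python
import numpy as np

# A rotation by 90 degrees: normal (AA* = A*A) but not Hermitian,
# with eigenvalues +i and -i.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]], dtype=complex)
assert np.allclose(A @ A.conj().T, A.conj().T @ A)

eigvals, C = np.linalg.eig(A)
assert np.allclose(C.conj().T @ C, np.eye(2))             # C is unitary
assert np.allclose(C.conj().T @ A @ C, np.diag(eigvals))  # C*AC is diagonal
```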
Quadratic forms and their diagonalization

Definition
Let A = (aij) be an n × n real matrix. The function QA : Rⁿ → R defined by

QA(x) := QA((x1, x2, . . . , xn)^t) := Σ_{i=1}^n Σ_{j=1}^n aij xi xj

is called the quadratic form associated with A. If A = diag(λ1, λ2, . . . , λn), then Q(x) = λ1x1² + λ2x2² + · · · + λnxn² is called a diagonal form.
ARS (IITB) MA106-Linear Algebra February 14, 2011 79 / 99
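The definition translates directly into a double sum (a minimal sketch, assuming NumPy; the function name `quadratic_form` is ours):

```python
import numpy as np

def quadratic_form(A, x):
    """Q_A(x) = sum over i, j of a_ij * x_i * x_j, straight from the definition."""
    n = len(x)
    return sum(A[i, j] * x[i] * x[j] for i in range(n) for j in range(n))

# A diagonal A gives a diagonal form: 1*x1^2 + 6*x2^2.
A = np.diag([1.0, 6.0])
x = np.array([2.0, 3.0])
print(quadratic_form(A, x))  # 1*4 + 6*9 = 58.0
```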
Quadratic forms

Proposition
Q(x) = x^t A x, where x = (x1, x2, . . . , xn)^t.

Proof:

x^t A x = [x1, x2, . . . , xn] ( Σ_{j=1}^n a1j xj , . . . , Σ_{j=1}^n anj xj )^t
        = ( Σ_{j=1}^n a1j xj ) x1 + · · · + ( Σ_{j=1}^n anj xj ) xn
        = Σ_{j=1}^n Σ_{i=1}^n aij xi xj = Q(x). ♠
ARS (IITB) MA106-Linear Algebra February 14, 2011 80 / 99
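The identity in the proposition is easy to check numerically (a sketch, assuming NumPy): the double sum of the definition agrees with the matrix product x^t A x.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
x = rng.standard_normal(4)

# The definition: a double sum over all entries of A.
double_sum = sum(A[i, j] * x[i] * x[j] for i in range(4) for j in range(4))
# The matrix form from the proposition: x^t A x.
assert np.isclose(x @ A @ x, double_sum)
```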
Quadratic forms

Example
(1) A = [ 1 1 ; 3 5 ], X = [ x, y ]^t. Then

X^t A X = [x, y] [ 1 1 ; 3 5 ] [ x, y ]^t = [x, y] [ x + y, 3x + 5y ]^t = x² + 4xy + 5y².

(2) Let B = [ 1 2 ; 2 5 ], X = [ x, y ]^t. Then

X^t B X = [x, y] [ 1 2 ; 2 5 ] [ x, y ]^t = [x, y] [ x + 2y, 2x + 5y ]^t = x² + 4xy + 5y².

Notice that A and B give rise to the same Q(x), and that B = (1/2)(A + A^t) is a symmetric matrix.
ARS (IITB) MA106-Linear Algebra February 14, 2011 81 / 99
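The two matrices of the example can be checked at a sample point (a sketch, assuming NumPy; the test point (2, −1) is arbitrary):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [3.0, 5.0]])
B = np.array([[1.0, 2.0],
              [2.0, 5.0]])
x, y = 2.0, -1.0                      # an arbitrary test point
X = np.array([x, y])
poly = x**2 + 4*x*y + 5*y**2          # the common quadratic form
assert np.isclose(X @ A @ X, poly)    # A gives Q
assert np.isclose(X @ B @ X, poly)    # B gives the same Q
assert np.allclose(B, 0.5 * (A + A.T))  # B is the symmetric part of A
```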
Proposition
For any n × n matrix A and column vector x = (x1, x2, . . . , xn)^t,

x^t A x = x^t B x, where B = (1/2)(A + A^t).

Hence every quadratic form is associated with a symmetric matrix.

Proof: x^t A x is a 1 × 1 matrix, hence x^t A x = (x^t A x)^t = x^t A^t x. Hence

x^t A x = (1/2) x^t A x + (1/2) x^t A^t x = x^t ((1/2)(A + A^t)) x = x^t B x. □
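The symmetrization identity holds for any square matrix, which a few random trials make plausible (a sketch, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))     # arbitrary, generally not symmetric
B = 0.5 * (A + A.T)                 # its symmetric part
assert np.allclose(B, B.T)

for _ in range(5):
    x = rng.standard_normal(3)
    assert np.isclose(x @ A @ x, x @ B @ x)  # identical quadratic forms
```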
Quadratic forms and their diagonalization

We now show how the spectral theorem helps us in converting a quadratic form into a diagonal form.

Theorem
Let x^t A x be a quadratic form associated with a real symmetric matrix A. Let U be an orthogonal matrix such that U^t A U = diag(λ1, λ2, . . . , λn). Then

x^t A x = λ1y1² + λ2y2² + · · · + λnyn²,

where y = (y1, y2, . . . , yn)^t is defined by x = (x1, x2, . . . , xn)^t = Uy.
ARS (IITB) MA106-Linear Algebra February 14, 2011 83 / 99
Quadratic forms and their diagonalization

Proof: Since x = Uy,

x^t A x = (Uy)^t A (Uy) = y^t (U^t A U) y.

Since U^t A U = diag(λ1, λ2, . . . , λn), we get

x^t A x = [y1, y2, . . . , yn] diag(λ1, λ2, . . . , λn) [y1, y2, . . . , yn]^t = λ1y1² + λ2y2² + · · · + λnyn². ♠
ARS (IITB) MA106-Linear Algebra February 14, 2011 84 / 99
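The whole theorem can be exercised on a small symmetric matrix (a sketch, assuming NumPy; `np.linalg.eigh` returns an orthogonal eigenvector matrix for a real symmetric input):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 5.0]])               # real symmetric
lam, U = np.linalg.eigh(A)               # orthogonal U, eigenvalues ascending
assert np.allclose(U.T @ U, np.eye(2))
assert np.allclose(U.T @ A @ U, np.diag(lam))

rng = np.random.default_rng(2)
x = rng.standard_normal(2)
y = U.T @ x                              # x = U y  <=>  y = U^t x
assert np.isclose(x @ A @ x, lam[0] * y[0]**2 + lam[1] * y[1]**2)
```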
Quadratic forms: An example

Example
Let us determine the orthogonal matrix U which reduces the quadratic form Q(x) = 2x² + 4xy + 5y² to a diagonal form. We write

Q(x) = [x, y] [ 2 2 ; 2 5 ] [ x, y ]^t = x^t A x.

The symmetric matrix A can be diagonalized. The eigenvalues of A are λ1 = 1 and λ2 = 6. An orthonormal set of eigenvectors for λ1 and λ2 is

v1 = (1/√5) [ 2, −1 ]^t   and   v2 = (1/√5) [ 1, 2 ]^t.

Hence U = (1/√5) [ 2 1 ; −1 2 ]. The change of variables equations are [ x, y ]^t = U [ u, v ]^t. The diagonal form is

[u, v] [ 1 0 ; 0 6 ] [ u, v ]^t = u² + 6v².

Check that U^t A U = diag(1, 6).
ARS (IITB) MA106-Linear Algebra February 14, 2011 85 / 99
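The suggested check U^t A U = diag(1, 6) takes a few lines (a sketch, assuming NumPy):

```python
import numpy as np

A = np.array([[2.0, 2.0],
              [2.0, 5.0]])
U = (1 / np.sqrt(5)) * np.array([[2.0, 1.0],
                                 [-1.0, 2.0]])
assert np.allclose(U.T @ U, np.eye(2))                # U is orthogonal
assert np.allclose(U.T @ A @ U, np.diag([1.0, 6.0]))  # U^t A U = diag(1, 6)
```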
Conic Sections

A conic section is the locus in the Cartesian plane R² of an equation of the form

ax² + bxy + cy² + dx + ey + f = 0.     (10)

It can be proved that this equation represents one of the following:
(i) the empty set
(ii) a single point
(iii) one or two straight lines
(iv) an ellipse
(v) a hyperbola, or
(vi) a parabola.

The second degree part of (10),

Q(x, y) = ax² + bxy + cy²,

is a quadratic form. This determines the type of the conic.
ARS (IITB) MA106-Linear Algebra February 14, 2011 86 / 99
Conic Sections

We can write equation (10) in matrix form:

[x, y] [ a b/2 ; b/2 c ] [ x, y ]^t + [d, e] [ x, y ]^t + f = 0.     (11)

Write A = [ a b/2 ; b/2 c ]. Let U = [v1, v2] be an orthogonal matrix whose column vectors v1 and v2 are eigenvectors of A with eigenvalues λ1 and λ2. Apply the change of variables

x = [ x, y ]^t = U [ u, v ]^t

to diagonalize the quadratic form Q(x, y) to the diagonal form λ1u² + λ2v². The orthonormal basis {v1, v2} determines a new set of coordinate axes with respect to which the locus of the equation [x, y]A[x, y]^t + B[x, y]^t + f = 0 with B = [d, e] is the same as the locus of the equation

0 = [u, v] diag(λ1, λ2) [u, v]^t + (BU)[u, v]^t + f
  = λ1u² + λ2v² + [d, e][v1, v2][u, v]^t + f.     (12)
ARS (IITB) MA106-Linear Algebra February 14, 2011 87 / 99
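The matrix form (11) can be verified against the scalar form (10) at a couple of points (a sketch, assuming NumPy; the coefficients are those of the worked example below):

```python
import numpy as np

a, b, c, d, e, f = 2.0, 4.0, 5.0, 4.0, 13.0, -0.25
A = np.array([[a, b / 2],
              [b / 2, c]])
Bv = np.array([d, e])

def general(x, y):
    """The scalar form (10): ax^2 + bxy + cy^2 + dx + ey + f."""
    return a*x**2 + b*x*y + c*y**2 + d*x + e*y + f

for x, y in [(1.0, 2.0), (-3.0, 0.5)]:
    X = np.array([x, y])
    # The matrix form (11): X^t A X + [d, e] X + f.
    assert np.isclose(X @ A @ X + Bv @ X + f, general(x, y))
```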
Conic Sections

If the conic determined by (12) is not degenerate, i.e., not the empty set, a point, nor line(s), then the signs of λ1 and λ2 determine whether it is a parabola, a hyperbola, or an ellipse. Equation (10) will represent
(1) an ellipse if λ1λ2 > 0,
(2) a hyperbola if λ1λ2 < 0,
(3) a parabola if λ1λ2 = 0.
ARS (IITB) MA106-Linear Algebra February 14, 2011 88 / 99
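Since λ1λ2 = det A = ac − b²/4, the classification needs no eigenvalue computation at all (a sketch; the function name `conic_type` is ours, and it assumes the conic is non-degenerate):

```python
def conic_type(a, b, c):
    """Type of the non-degenerate conic ax^2 + bxy + cy^2 + dx + ey + f = 0.

    det A = ac - b^2/4 equals the product lambda_1 * lambda_2 of the
    eigenvalues of A = [[a, b/2], [b/2, c]], so its sign decides the type."""
    prod = a * c - b * b / 4.0
    if prod > 0:
        return "ellipse"
    if prod < 0:
        return "hyperbola"
    return "parabola"

print(conic_type(2, 4, 5))   # eigenvalues 1 and 6: an ellipse
```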
Conic Sections: Examples

Example
(1) 2x² + 4xy + 5y² + 4x + 13y − 1/4 = 0.
We have earlier diagonalized the quadratic form 2x² + 4xy + 5y². The associated symmetric matrix, the eigenvectors and eigenvalues are displayed in the equation of diagonalization:

U^t A U = (1/√5) [ 2 −1 ; 1 2 ] [ 2 2 ; 2 5 ] (1/√5) [ 2 1 ; −1 2 ] = [ 1 0 ; 0 6 ].

Put t = 1/√5 for convenience. Then the change of coordinates equations are

[ x, y ]^t = [ 2t t ; −t 2t ] [ u, v ]^t,

i.e., x = t(2u + v) and y = t(−u + 2v). Substitute these into the original equation to get

u² + 6v² − √5 u + 6√5 v − 1/4 = 0.

Complete the square to write this as

(u − √5/2)² + 6(v + √5/2)² = 9.

This is an equation of an ellipse with center (√5/2, −√5/2) in the uv-plane.
ARS (IITB) MA106-Linear Algebra February 14, 2011 89 / 99
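The substitution and the completed square can be confirmed at random points (a sketch, assuming NumPy): the original equation in (x, y) and the reduced ellipse equation in (u, v) should vanish together.

```python
import numpy as np

t = 1 / np.sqrt(5)

def original(x, y):
    return 2*x**2 + 4*x*y + 5*y**2 + 4*x + 13*y - 0.25

def reduced(u, v):
    # (u - sqrt(5)/2)^2 + 6(v + sqrt(5)/2)^2 - 9, shifted to compare with 0.
    return (u - np.sqrt(5)/2)**2 + 6*(v + np.sqrt(5)/2)**2 - 9

rng = np.random.default_rng(3)
for _ in range(5):
    u, v = rng.standard_normal(2)
    x, y = t*(2*u + v), t*(-u + 2*v)   # the change of coordinates
    assert np.isclose(original(x, y), reduced(u, v))
```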
Conic Sections: Examples

The $u$-axis and $v$-axis are determined by the eigenvectors $v_1$ and $v_2$, as indicated in the following figure:

[Figure: the $u$- and $v$-axes drawn in the $xy$-plane along $v_1$ and $v_2$.]
Conic Sections: Examples

Example

(2) $2x^2 - 4xy - y^2 - 4x + 10y - 13 = 0$.

Here the matrix
\[
A = \begin{bmatrix} 2 & -2 \\ -2 & -1 \end{bmatrix}
\]
gives the quadratic part of the equation. We write the equation in matrix form as
\[
[x,\, y]\begin{bmatrix} 2 & -2 \\ -2 & -1 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} + [-4,\, 10]\begin{bmatrix} x \\ y \end{bmatrix} - 13 = 0.
\]
Let $t = 1/\sqrt{5}$. The eigenvalues of $A$ are $\lambda_1 = 3$, $\lambda_2 = -2$. An orthonormal set of eigenvectors is $v_1 = t(2, -1)^t$ and $v_2 = t(1, 2)^t$. Put
\[
U = t\begin{bmatrix} 2 & 1 \\ -1 & 2 \end{bmatrix} \quad\text{and}\quad \begin{bmatrix} x \\ y \end{bmatrix} = U\begin{bmatrix} u \\ v \end{bmatrix}.
\]
The transformed equation becomes
\[
3u^2 - 2v^2 - 4t(2u + v) + 10t(-u + 2v) - 13 = 0,
\]
Conic Sections: Examples

which is the same as
\[
3u^2 - 2v^2 - 18tu + 16tv - 13 = 0.
\]
Complete the square in $u$ and $v$ to get
\[
3(u - 3t)^2 - 2(v - 4t)^2 = 12,
\]
or
\[
\frac{(u - 3t)^2}{4} - \frac{(v - 4t)^2}{6} = 1.
\]
This represents a hyperbola with center $(3t, 4t)$ in the $uv$-plane. The eigenvectors $v_1$ and $v_2$ determine the directions of the positive $u$- and $v$-axes.

[Figure: the hyperbola and the $u$- and $v$-axes drawn in the $xy$-plane.]
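The identity between the original equation and its standard form can again be checked numerically. This sketch is not from the original slides; the names `f` and `h` are illustrative.

```python
import math

t = 1 / math.sqrt(5)

def f(x, y):
    """Left-hand side of the original conic equation."""
    return 2*x**2 - 4*x*y - y**2 - 4*x + 10*y - 13

def h(u, v):
    """3(u - 3t)^2 - 2(v - 4t)^2 - 12: zero exactly on the hyperbola."""
    return 3*(u - 3*t)**2 - 2*(v - 4*t)**2 - 12

# f and h agree at every point of the uv-plane; spot-check a few.
for u, v in [(0.0, 0.0), (2.0, 1.0), (-1.5, 0.4)]:
    x, y = t*(2*u + v), t*(-u + 2*v)   # columns of U are v1, v2
    assert math.isclose(f(x, y), h(u, v), abs_tol=1e-9)

# A point on the hyperbola: u = 3t + 2, v = 4t gives 3*4 - 0 - 12 = 0,
# so the corresponding (x, y) satisfies the original equation.
u, v = 3*t + 2, 4*t
x, y = t*(2*u + v), t*(-u + 2*v)
assert math.isclose(f(x, y), 0.0, abs_tol=1e-9)
```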
Conic Sections: Examples

Example

(3) $9x^2 + 24xy + 16y^2 - 20x + 15y = 0$.

The symmetric matrix for the quadratic part is
\[
A = \begin{bmatrix} 9 & 12 \\ 12 & 16 \end{bmatrix}.
\]
The eigenvalues are $\lambda_1 = 25$, $\lambda_2 = 0$. An orthonormal set of eigenvectors is $v_1 = a(3, 4)^t$, $v_2 = a(-4, 3)^t$, where $a = 1/5$. An orthogonal diagonalizing matrix is
\[
U = a\begin{bmatrix} 3 & -4 \\ 4 & 3 \end{bmatrix}.
\]
The equations of change of coordinates are
\[
\begin{bmatrix} x \\ y \end{bmatrix} = U\begin{bmatrix} u \\ v \end{bmatrix}, \quad\text{i.e.,}\quad x = a(3u - 4v),\quad y = a(4u + 3v).
\]
The equation in the $uv$-plane is $u^2 + v = 0$. This is the equation of a parabola with its vertex at the origin.
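One can verify the degenerate eigenvalue case numerically as well: substituting the change of coordinates into the original equation should give $25(u^2 + v)$, which vanishes exactly when $u^2 + v = 0$. This sketch is not from the original slides; the names `f` and `p` are illustrative.

```python
import math

a = 1 / 5

def f(x, y):
    """Left-hand side of the original conic equation."""
    return 9*x**2 + 24*x*y + 16*y**2 - 20*x + 15*y

def p(u, v):
    """25 * (u^2 + v): the transformed equation, up to the factor 25."""
    return 25 * (u**2 + v)

# f and p agree at every point of the uv-plane; spot-check a few.
for u, v in [(0.0, 0.0), (1.0, -1.0), (0.5, 2.0)]:
    x, y = a*(3*u - 4*v), a*(4*u + 3*v)
    assert math.isclose(f(x, y), p(u, v), abs_tol=1e-9)

# Points on the parabola satisfy v = -u^2; their images satisfy f = 0.
u = 1.0
v = -u**2
x, y = a*(3*u - 4*v), a*(4*u + 3*v)
assert math.isclose(f(x, y), 0.0, abs_tol=1e-9)
```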