
Linear Algebra and Matrix Theory

Part 2 - Vector Spaces

1. References

(1) S. Friedberg, A. Insel and L. Spence, Linear Algebra, Prentice-Hall.

(2) M. Golubitsky and M. Dellnitz, Linear Algebra and Differential Equa-

tions Using Matlab, Brooks-Cole.

(3) K. Hoffman and R. Kunze, Linear Algebra, Prentice-Hall.

(4) P. Lancaster and M. Tismenetsky, The Theory of Matrices, Aca-

demic Press.

2. Definition and Examples

Briefly, a vector space consists of a set of objects called vectors along

with a set of objects called scalars. The vector space axioms concern the

algebraic relationships among the vectors and scalars. They may be found

in any of the above references. Informally, they are as follows. Vectors can

be added to form new vectors. Vectors can also be multiplied by scalars to

form new vectors. There is one vector, called the zero vector, that acts as

an identity element with respect to addition of vectors. Each vector has a

negative associated with it. The sum of a vector and its negative is the zero

vector. Addition of vectors is associative and commutative. In summary,

the set of vectors with the operation of vector addition is an abelian group.


The set of scalars is an algebraic field, such as the field of real numbers or

the field of complex numbers. Multiplication of scalars by vectors distributes

over addition of vectors and also over addition of scalars. The product of

the unit element 1 of the field with any vector is that vector. Finally,

multiplication of vectors by scalars is associative.

Generically, we denote the set of vectors by V and the field of scalars by F.

It is common to say that V is a vector space over F. Individual vectors will

be denoted by lower case Latin letters and individual scalars by lower case

Greek letters. We use the same symbol + to denote both addition of scalars

and addition of vectors. Multiplication of scalars by scalars or scalars by

vectors is indicated simply by conjoining the symbols for the multiplicands.

Example 2.1. Let F be either R or C and let V = Fm×n. Two vectors (ma-

trices) are added by adding corresponding entries and a matrix is multiplied

by a scalar by multiplying each entry by that scalar. The zero matrix is the

matrix all of whose entries are 0. The negative of a matrix is obtained by

multiplying it by -1. If m = 1 the vectors of this space are called row vectors

and if n = 1 they are called column vectors.
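The axioms in Example 2.1 can be checked numerically. The following sketch (Python with numpy, a tool of our choosing; the concrete sizes m = 2, n = 3 and the entries are illustrative) shows entrywise addition, scalar multiplication, the zero matrix, and negatives.

```python
import numpy as np

# Example 2.1 with F = R, m = 2, n = 3 (concrete sizes chosen for illustration).
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
B = np.array([[0.0, 1.0, 1.0],
              [2.0, 0.0, 3.0]])
zero = np.zeros((2, 3))

S = A + B          # vector addition: entrywise
C = 2.0 * A        # scalar multiplication: entrywise

# The negative of A is (-1)A, and A + (-A) is the zero matrix.
assert np.array_equal(A + (-1.0) * A, zero)
# Addition of vectors is commutative.
assert np.array_equal(A + B, B + A)
```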

Example 2.2. Two directed line segments in the Euclidean plane are equiv-

alent if they have the same length and the same direction. Let V be the set

of equivalence classes. If u and v are vectors (i.e., elements of V), choose

representative line segments such that the representative of v begins where

the representative of u ends. Then u + v is represented by the line segment


from the initial point of u to the terminal point of v. To multiply a vector u

by a real number α, let αu be represented by a line segment whose length is

|α| times the length of u. Let αu have the same direction as u if α > 0 and

the opposite direction if α < 0. These are the familiar geometric vectors of

elementary mathematics. We leave it to you to explain how the zero vector

and negatives of vectors are defined and to verify the other properties of a

vector space.

Example 2.3. Let F = R and let V = Ck(0, 1), the set of all functions u :

(0, 1) → R with at least k continuous derivatives on (0, 1). Vector addition

and scalar multiplication are defined pointwise: (u + v)(t) = u(t) + v(t) and

(αu)(t) = αu(t) for each t ∈ (0, 1).
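The pointwise operations of Example 2.3 can be sketched directly, with Python functions standing in for elements of Ck(0, 1); the particular functions u and v below are illustrative choices of ours, not from the notes.

```python
# Pointwise vector addition and scalar multiplication on functions (0,1) -> R.
def add(u, v):
    return lambda t: u(t) + v(t)

def scale(alpha, u):
    return lambda t: alpha * u(t)

u = lambda t: t * t        # u(t) = t^2
v = lambda t: 1.0 - t      # v(t) = 1 - t
w = add(u, scale(2.0, v))  # w(t) = t^2 + 2(1 - t)

# Evaluate at t = 0.5: 0.25 + 2(0.5) = 1.25.
assert abs(w(0.5) - 1.25) < 1e-12
```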

3. Subspaces

Let V be a vector space over F and let W be a nonempty subset of V. We

say that W is a subspace of V if it is a vector space over F with the operations

of vector addition and scalar multiplication inherited from V. This means

that W is closed under vector addition and scalar multiplication and that it

contains the zero vector of V. There is a simple test for when a subset of V

is a subspace.

Theorem 3.1. W is a subspace of V if and only if αu + βv ∈ W for all

u, v ∈ W and all α, β ∈ F.

Example 3.1. In Example 2.3, let W = Cr(0, 1), where r > k.


Definition 3.1. Let V be a vector space over F and let S be a nonempty

subset of V. The span of S is the subspace

sp(S) = {α1u1 + · · · + αkuk | k ∈ N; α1, · · · , αk ∈ F; u1, · · · , uk ∈ S}.

We also say that sp(S) is the subspace spanned by the elements of S.

Definition 3.2. Let A ∈ Fm×n. The row space of A is the subspace of F1×n

spanned by the rows of A. The column space of A is the subspace of Fm×1

spanned by the columns of A. The null space of A is the set of all solutions

x ∈ Fn×1 of the homogeneous system Ax = 0.

There are short ways of denoting these subspaces. The row space is

{yA|y ∈ F1×m}, the column space is {Ax|x ∈ Fn×1}, and the null space is

{x ∈ Fn×1|Ax = 0}.
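For a concrete matrix, a computer algebra system can produce spanning sets for all three subspaces. A sketch with sympy (our choice of tool, not part of the notes), using the matrix that reappears in the solved problems below:

```python
from sympy import Matrix

A = Matrix([[1, 2, 0, 1],
            [0, 1, 1, 0],
            [1, 2, 0, 1]])

row_basis = A.rowspace()     # spanning set for {yA | y in F^(1x3)}
col_basis = A.columnspace()  # spanning set for {Ax | x in F^(4x1)}
null_basis = A.nullspace()   # spanning set for {x | Ax = 0}

# Every null-space vector is annihilated by A.
for x in null_basis:
    assert A * x == Matrix.zeros(3, 1)
```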

Theorem 3.2. Row equivalent matrices have the same row space and null

space.

Subspaces can be combined to form new subspaces. One way is by taking

their intersection. Another is by forming their sum, defined as

Definition 3.3. Let U and W be subspaces of a vector space V. Their sum

is

U +W = {u + w|u ∈ U , w ∈ W}.

Theorem 3.3. U ∩W and U +W are subspaces of V.


Solved Problems:

1. Tell whether the following sets S are subspaces of V or not. If the answer

is no, explain.

(1) V = R1×2, S = {(x1, x2)|x1 = 0 or x2 = 0}.

(2) V = Fn×1, S = {x|Ax = b}, where A ∈ Fm×n and b ∈ Fm×1 are

given, b ≠ 0.

(3) V = C2(0, 1), S = all real solutions y on (0, 1) of the homogeneous

differential equation y′′ − 2y′ + y = 0.

(4) V = C4×4, F = C, S = {H ∈ V|H∗ = H}, where H∗ is the conjugate

of Ht.

(5) Same V and S as in the preceding problem, but F = R.

Solution:

(1) Not a subspace because it is not closed under vector addition. For

example, (1, 0) and (0, 1) are both in S but (1, 1) is not.

(2) Not a subspace: since b ≠ 0, the set does not contain the zero vector,

and it is closed under neither vector addition nor scalar multiplication.

(3) S is a subspace of V.

(4) Not a subspace because it is not closed under scalar multiplication

by elements of C.

(5) S is a subspace. Incidentally, a complex matrix satisfying H∗ = H

is called hermitian.


2. Describe in the simplest possible terms the row space of the matrix

1 2 0 1
0 1 1 0
1 2 0 1

Solution: The reduced row echelon form has the same row space as the given matrix. It is

1 0 −2 1
0 1 1 0
0 0 0 0

The row space is the span of the rows of this matrix, i.e., the set of row

vectors of the form

(α1, α2,−2α1 + α2, α1)

where α1 and α2 are arbitrary elements of F.
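The reduction above can be verified with sympy's rref (an illustrative check, not part of the original notes):

```python
from sympy import Matrix

A = Matrix([[1, 2, 0, 1],
            [0, 1, 1, 0],
            [1, 2, 0, 1]])
R, pivots = A.rref()

# Reduced row echelon form and pivot columns computed in the solution.
assert R == Matrix([[1, 0, -2, 1],
                    [0, 1, 1, 0],
                    [0, 0, 0, 0]])
assert pivots == (0, 1)
```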

3. Describe in simplest possible terms the null space of the same matrix.

Solution: The reduced row echelon form has the same null space. A vector

x = (x1, x2, x3, x4)t is in the null space if and only if

x1 − 2x3 + x4 = 0

x2 + x3 = 0

You may think of two of the variables, say x1 and x2, as having arbitrary

values and the other two variables as being determined by them according


to the last set of equations. After substituting, the null space is the set of

all vectors of the form

(x1, x2,−x2,−x1 − 2x2)

where x1 and x2 are arbitrary elements of F.
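A quick symbolic check that the parametric form above really lies in the null space (sympy, our choice of tool):

```python
from sympy import Matrix, symbols

A = Matrix([[1, 2, 0, 1],
            [0, 1, 1, 0],
            [1, 2, 0, 1]])
x1, x2 = symbols('x1 x2')
# Parametric description of the null space derived above.
x = Matrix([x1, x2, -x2, -x1 - 2*x2])

# A x = 0 identically in x1 and x2.
assert (A * x).expand() == Matrix.zeros(3, 1)
```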

4. Describe in simplest terms the column space of the same matrix.

Solution: A vector y is in the column space of A if and only if there is a

solution of the linear system Ax = y. In the present case the augmented

matrix is

1 2 0 1 | y1
0 1 1 0 | y2
1 2 0 1 | y3

Now row-reduce the augmented matrix to get the coefficient part in row

echelon form, with symbolic calculations in the last column. The result is

1 0 −2 1 | y1 − 2y2
0 1 1 0 | y2
0 0 0 0 | y3 − y1

Obviously, this system has a solution if and only if y3− y1 = 0. The column

space is the set of all vectors y = (y1, y2, y3)t with y3 = y1.
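The membership condition y3 = y1 can be probed numerically with a least-squares solve (numpy; the specific vectors below are illustrative choices of ours):

```python
import numpy as np

A = np.array([[1, 2, 0, 1],
              [0, 1, 1, 0],
              [1, 2, 0, 1]], dtype=float)

# y with y3 = y1 lies in the column space, so Ax = y is solvable.
y_in = np.array([2.0, 5.0, 2.0])
x, *_ = np.linalg.lstsq(A, y_in, rcond=None)
assert np.allclose(A @ x, y_in)

# y with y3 != y1 is not in the column space: no x reproduces it.
y_out = np.array([1.0, 0.0, 0.0])
x_fail, *_ = np.linalg.lstsq(A, y_out, rcond=None)
assert not np.allclose(A @ x_fail, y_out)
```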

Unsolved Problems:


1. Describe the row space, the null space and the column space of the following matrix.

1 2 1
1 0 1
1 1 1

2. Show that the intersection of two subspaces of a vector space is a sub-

space.

3. Let S1 and S2 be nonempty subsets of a vector space V. Show that

sp(S1) + sp(S2) = sp(S1 ∪ S2).

4. Let A ∈ Fm×n. Show that if y ∈ F1×n is in the row space of A and

x ∈ Fn×1 is in the null space of A, then yx = 0.

4. Linear Independence, Bases and Coordinates

Definition 4.1. Vectors v1, · · · , vm in a vector space V over F are linearly

dependent if there are scalars α1, · · · , αm, not all zero, such that α1v1 +

· · ·+ αmvm = 0. Otherwise, v1, · · · , vm are linearly independent.

An informal way of expressing linear dependence is to say that there is

a non-trivial linear combination of the given vectors which is equal to the

zero vector.

Definition 4.2. A vector space V is finite-dimensional if there is a finite

linearly independent set of vectors in V which spans V. Such a set of vectors

is called a basis for V.


Example 4.1. In F1×n, let ei denote the vector whose entries are all 0

except the ith, which is 1. Then {e1, · · · , en} is a basis for F1×n called the

standard basis. In Fm×n, let Ei,j denote the matrix all of whose entries are

0 except the i, jth, which is 1. Then {Ei,j |1 ≤ i ≤ m; 1 ≤ j ≤ n} is a basis

for Fm×n.

There is never just one basis for a finite-dimensional vector space. If

v1, · · · , vm is a basis then so, for example, is 2v1, · · · , 2vm. However, we

have the following theorem.

Theorem 4.1. Any two bases for a finite-dimensional vector space have

the same number of elements. The set {v1, · · · , vm} is a basis for V if and

only if for each vector u ∈ V there are unique scalars α1, · · · , αm such that

u = α1v1 + · · ·+ αmvm.

Definition 4.3. The dimension of a finite-dimensional vector space is the

number of elements in a basis. The dimension of V is denoted by dim(V).

If dim(V) = m and v1, · · · , vk are linearly independent vectors in V, then

k ≤ m and if k < m we may extend this set of vectors to form a basis

v1, · · · , vk, vk+1, · · · , vm. Similarly, if u1, · · · , ur spans V then r ≥ m and we

may select a subset of m of the given vectors which forms a basis for V.

Definition 4.4. Let B = {v1, · · · , vm} be an ordered basis for V and let

u ∈ V. The unique m-tuple of scalars (α1, · · · , αm) such that α1v1 + · · · + αmvm = u


is called the coordinate vector of u relative to the basis B. It is usually

represented as a column vector in Fm×1

[u]B = (α1, · · · , αm)t

Solved Problems:

1. Show that any set of more than m vectors in Fm×1 is linearly dependent.

Solution: Suppose n > m vectors are given. Let A ∈ Fm×n be a matrix

whose columns are the given vectors. These vectors are linearly dependent

if and only if the homogeneous linear system Ax = 0 has a solution other

than x = 0. Since row equivalent matrices have the same null space, we

may assume that A is in reduced row echelon form. There are at least

n − m columns of A that do not contain leading 1’s. The variables corre-

sponding to these columns may be assigned arbitrary values. The variables

corresponding to the columns with leading 1’s are determined once these are

specified.
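The argument can be illustrated with a concrete set of n = 4 vectors in F3×1 (the vectors below are our own example): a nonzero null-space vector of the matrix with those columns is exactly a nontrivial dependence relation.

```python
from sympy import Matrix

# Four vectors in F^(3x1), as columns of a 3x4 matrix (n = 4 > m = 3).
A = Matrix([[1, 0, 1, 2],
            [0, 1, 1, 0],
            [0, 0, 1, 1]])
ns = A.nullspace()

# n > m guarantees a free variable, hence a nonzero solution of Ax = 0.
assert len(ns) >= 1
c = ns[0]
assert A * c == Matrix.zeros(3, 1)   # the dependence relation
assert c != Matrix.zeros(4, 1)       # and it is nontrivial
```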

2. Show that any set of fewer than m vectors in Fm×1 does not span Fm×1.

Solution: Again, let A be an m × n matrix whose columns are the given

vectors and n < m. These vectors fail to span Fm×1 if and only if there is

a vector y ∈ Fm×1 such that the system Ax = y has no solution. For such

a vector y and for any invertible matrix Q, the system QAx = Qy has no

solution. Thus, we may suppose that A is in reduced row echelon form to


begin with. Since A has more rows than columns, its last row must be all

zeros. Therefore, if we choose ym ≠ 0 the system has no solution.

3. In F3×1, let v1 = (1, 0, 0)t, v2 = (1,−1, 0)t, and v3 = (1, 0,−1)t. Show

that B = {v1, v2, v3} is a basis and find the coordinate vector of u = (0, 1, 0)t

relative to this basis.

Solution: Let A be a matrix with columns v1, v2, and v3.

A =
1 1 1
0 −1 0
0 0 −1

B is a basis if and only if it is a linearly independent spanning set of vectors,

which is equivalent to invertibility of A. Clearly, A is invertible because its

determinant is 1.

To find the coordinates of u, we must find scalars α1, α2, and α3 such

that u = α1v1 + α2v2 + α3v3. Then [u]B = (α1, α2, α3)t is the solution of

A[u]B = u. The augmented matrix is

1 1 1 | 0
0 −1 0 | 1
0 0 −1 | 0

The solution is [u]B = (1,−1, 0)t.
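Numerically, [u]B is just the solution of the square system A[u]B = u; a numpy check of the answer above:

```python
import numpy as np

# Columns of A are v1, v2, v3 from the problem.
A = np.array([[1.0, 1.0, 1.0],
              [0.0, -1.0, 0.0],
              [0.0, 0.0, -1.0]])
u = np.array([0.0, 1.0, 0.0])

coords = np.linalg.solve(A, u)   # [u]_B
assert np.allclose(coords, [1.0, -1.0, 0.0])
```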

Unsolved Problems:


1. Let V be the set of all polynomials of degree ≤ 3 with coefficients in

F. Constant functions are included as polynomials of degree 0. With the

obvious definitions of addition and multiplication by elements of F, V is a

vector space over F of dimension 4. Find a basis for this vector space. With

respect to this basis, find the coordinates of the vector v(z) = −1 + 2z2.

2. Let V be the real vector space of all 3×3 hermitian matrices with complex

entries. Find a basis for V.

5. Change of Coordinates Under Change of Basis

Let B1 = {v1, · · · , vn} be a basis for V. Let Q = (qi,j) ∈ Fn×n and define

a new set of vectors B2 = {u1, · · · , un} by the equations

uj = q1,jv1 + · · · + qn,jvn

for j = 1, · · · , n.

Theorem 5.1. B2 is a basis if and only if the matrix Q is nonsingular. If

so, then the coordinate vectors [w]1 and [w]2 with respect to the bases B1 and

B2 of a vector w are related by

[w]1 = Q[w]2

The matrix Q is called the transition matrix from one basis to the other. If

two bases are given, then the transition matrix is completely determined by

expressing the elements of one basis as linear combinations of the elements

of the other basis.
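A minimal numerical sketch of Theorem 5.1 (the 2 × 2 example is our own illustration): take B1 to be the standard basis of R2×1 and B2 = {u1, u2} with u1 = v1 and u2 = v1 + v2, so the columns of Q are the B1-coordinates of u1 and u2.

```python
import numpy as np

# Transition matrix: column j holds the B1-coordinates of u_j.
Q = np.array([[1.0, 1.0],
              [0.0, 1.0]])

w2 = np.array([3.0, 2.0])   # [w]_2, coordinates relative to B2
w1 = Q @ w2                 # Theorem 5.1: [w]_1 = Q [w]_2

# Direct check: w = 3u1 + 2u2 = 3v1 + 2(v1 + v2) = 5v1 + 2v2.
assert np.allclose(w1, [5.0, 2.0])
```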


Linear Algebra and Matrix Theory

Part 3 - Linear Transformations

1. References

(1) S. Friedberg, A. Insel and L. Spence, Linear Algebra, Prentice-Hall.

(2) M. Golubitsky and M. Dellnitz, Linear Algebra and Differential Equa-

tions Using Matlab, Brooks-Cole.

(3) K. Hoffman and R. Kunze, Linear Algebra, Prentice-Hall.

(4) P. Lancaster and M. Tismenetsky, The Theory of Matrices, Aca-

demic Press.

(5) G. Strang, Linear Algebra and Its Applications, Academic Press.

2. Solved Problems

1. Let Vn be the real vector space of all polynomials of degree ≤ n with real

coefficients. For each space Vn, let Bn = {v1, · · · , vn+1} be the usual basis of

polynomial functions vi(t) = ti−1. Let D : V3 −→ V3 be the differentiation

operator. Find the matrix of D with respect to the basis B3.

Solution: We have Dv1 = 0 and Dvi = (i − 1)vi−1 for i ≥ 2. Let M be

the matrix of D and let [u] denote the coordinate vector of u with respect

to the given basis B3. The rule relating D to M is [Du] = M [u]. Hence,

[Dvi] = M [vi] for i = 1, · · · , 4. If {e1, · · · , e4} is the standard basis for R4×1,

[vi] = ei. From Dv1 = 0 we get Me1 = 0, from Dv2 = v1 we get Me2 = e1.

Similarly, Me3 = 2e2 and Me4 = 3e3. Partitioned by columns, the matrix M is (Me1|Me2|Me3|Me4). Thus,

M =
0 1 0 0
0 0 2 0
0 0 0 3
0 0 0 0
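The matrix M can be rebuilt column by column with sympy's symbolic differentiation (a verification sketch, not part of the original notes): column j holds the B3-coordinates of Dvj.

```python
from sympy import symbols, diff, Matrix, Poly

t = symbols('t')
basis = [1, t, t**2, t**3]   # B3: v_i(t) = t^(i-1)

cols = []
for v in basis:
    dv = diff(v, t)
    # Coefficients of dv relative to 1, t, t^2, t^3 (lowest degree first).
    coeffs = Poly(dv, t).all_coeffs()[::-1] if dv != 0 else []
    cols.append([coeffs[i] if i < len(coeffs) else 0 for i in range(4)])

M = Matrix(4, 4, lambda i, j: cols[j][i])
assert M == Matrix([[0, 1, 0, 0],
                    [0, 0, 2, 0],
                    [0, 0, 0, 3],
                    [0, 0, 0, 0]])
```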

2. Let V be the same space as in problem no. 1. Instead of the usual basis B3 = {v1, · · · , v4}, consider the basis B̂3 = {v̂1, · · · , v̂4}, where v̂1 = v1 and v̂i(t) = vi(t) − 1 for i ≥ 2. Find the transition matrix from B3 to B̂3 and use it to find the matrix M̂ of D with respect to B̂3.

Solution: The same technique used in problem no. 1 may also be used

here, with the result that

M̂ =
0 1 2 3
0 0 2 0
0 0 0 3
0 0 0 0

You should carry out the details of this derivation. The transition matrix

Q = (qi,j) is defined by the equations

v̂j = q1,jv1 + · · · + q4,jv4.


One can see immediately that v̂1 = v1 and v̂j = vj − v1 for j ≥ 2. From this,

the transition matrix is

Q =
1 −1 −1 −1
0 1 0 0
0 0 1 0
0 0 0 1

Its inverse is

Q−1 =
1 1 1 1
0 1 0 0
0 0 1 0
0 0 0 1

The relationship between the two matrix representations of D is M̂ =

Q−1MQ. You should carry out the matrix multiplication to verify the an-

swer given previously.
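The matrix multiplication the notes ask you to carry out can also be done numerically (numpy):

```python
import numpy as np

M = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3],
              [0, 0, 0, 0]], dtype=float)
Q = np.array([[1, -1, -1, -1],
              [0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)

# Change of basis for the matrix of a linear operator.
M_hat = np.linalg.inv(Q) @ M @ Q
expected = np.array([[0, 1, 2, 3],
                     [0, 0, 2, 0],
                     [0, 0, 0, 3],
                     [0, 0, 0, 0]], dtype=float)
assert np.allclose(M_hat, expected)
```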

3. Show that the dimension of the column space of a matrix is its rank.

Solution: Recall that the rank of a matrix was defined as the number of

nonzero rows in its reduced row echelon form. If v1, · · · , vk are linearly inde-

pendent vectors in Fm×1 and Q ∈ Fm×m is nonsingular, then Qv1, · · · , Qvk

are linearly independent. This follows almost immediately from the defi-

nition of linear independence. Thus, although row equivalent matrices do

not necessarily have the same column space, the dimensions of their column

spaces are the same. Therefore, the dimension of the column space of a ma-

trix M is the same as that of its reduced row echelon form. In the reduced


row echelon form, the columns with the leading 1’s are linearly independent

and each column without a leading 1 is a linear combination of the preceding

columns. Hence, the dimension of the column space is the number of leading

1’s, i.e., the number of nonzero rows. This shows that the dimension of the

row space of a matrix and the dimension of its column space are the same.

It also shows why we use the term "rank" for the dimension of the range

space of a linear transformation.

4. Verify the theorem on rank and nullity for the linear transformation D

in problem no. 1.

Solution: Let V be a vector space of dimension n over a field F and let W

be another vector space over F. Let T : V −→ W be a linear transformation.

The nullity of T is defined as the dimension of its null space (or kernel)

ker(T ) = {v ∈ V|Tv = 0}

and its rank is defined as the dimension of its range space

T (V) = {w ∈ W|w = Tv for some v ∈ V}.

The theorem on rank and nullity asserts that the nullity of T plus the rank

of T is equal to dim(V). In problem 1, dim(V) = 4. The range of D is

the subspace of all polynomials of degree ≤ 2, so it has dimension 3. In

other words, the rank of D is 3. The null space of D is the subspace of

all polynomials v such that Dv = 0, i.e., all constant polynomials. This


subspace is spanned by the first basis vector v1(t) ≡ 1, so the nullity of D

is 1.
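The count can be confirmed on the matrix representation of D from Solved Problem no. 1 (a sympy check; the rank and nullity of a linear transformation equal those of its matrix in any basis):

```python
from sympy import Matrix

# Matrix of D with respect to B3, from Solved Problem no. 1.
M = Matrix([[0, 1, 0, 0],
            [0, 0, 2, 0],
            [0, 0, 0, 3],
            [0, 0, 0, 0]])

assert M.rank() == 3                        # rank of D
assert len(M.nullspace()) == 1              # nullity of D
assert M.rank() + len(M.nullspace()) == 4   # = dim(V)
```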

3. Unsolved Problems

1. Define T : R3×1 −→ R3×1 by

T (x1, x2, x3)t = (x1, x1 + x2, x1 + x2 + x3)t.

Show that T is a linear transformation. Find [T ], the matrix of T relative

to the standard basis B of R3×1. Find the matrix [T ]′ of T relative to the

basis

B′ = {(1, 0, 0)t, (1, 1, 0)t, (1, 1, 1)t}

2. In problem 1, find the transition matrix Q from B to B′. Verify that

[T ]′ = Q−1[T ]Q.

3. Let Vn be the vector space of real polynomials of degree ≤ n and let Bn

be the usual basis of vi(t) = ti−1. Define T : V2 −→ V3 by

(Tv)(x) = ∫₀ˣ v(t) dt

Let D : V3 −→ V2 be the differentiation operator. (Note that this is slightly

different from the definition of D in Solved Problem no. 1). Find the

matrices [T ] and [D] with respect to the given bases. Find TD and DT and

their matrices. Verify that [DT ] = [D][T ] and [TD] = [T ][D].

4. Verify the theorem on rank and nullity for the linear transformation T

in problem 3.
