Lecture 1 - Linear Systems: Linear Algebra Concepts



    Linear Systems - August 26-28, 2009

    Topic - Linear Algebra Concepts

Linear systems theory is about the mathematical models we use to represent dynamical systems. In this respect, this course is a math course about the mathematical tools we use to model the class of so-called linear dynamical systems.

Mathematically, we think of a system as a mapping between a space of input functions and a space of output functions. So let Li denote the set of input functions and Lo denote the set of output functions. We assume that these sets form a linear space in that we can algebraically add signals together. The system, G, is then a map G : Li → Lo.

We say this system is linear if the system map G satisfies the principle of superposition. In particular, this means that for any w1 in Li and w2 in Li the outputs generated by the system satisfy

    G[w1 + w2] = G[w1] + G[w2]        (1)

    where we use G[w] to denote the function in Lo that is generated by system G using input w.
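As a concrete check, superposition can be verified numerically for any map given by a matrix. The 2-by-2 matrix and input signals below are purely illustrative (not from the lecture):

```python
# Verify superposition for a map G represented by a (hypothetical) matrix.
# For a linear system, G[w1 + w2] must equal G[w1] + G[w2] for every pair
# of inputs.

def G(w):
    # G represented by the matrix [[2, 1], [0, 3]], chosen for illustration
    A = [[2, 1], [0, 3]]
    return [sum(A[i][j] * w[j] for j in range(2)) for i in range(2)]

w1 = [1.0, -2.0]
w2 = [4.0, 0.5]

lhs = G([w1[i] + w2[i] for i in range(2)])     # G[w1 + w2]
rhs = [G(w1)[i] + G(w2)[i] for i in range(2)]  # G[w1] + G[w2]
assert lhs == rhs
```

Any matrix passes this check; a map with, say, a squaring nonlinearity would fail it.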

It will be seen later that such linear systems can always be represented in matrix-vector form. This means that for each w in Li and y in Lo we can identify vectors w and y (possibly infinite dimensional) and a matrix G such that y = Gw. This means, of course, that we can use our understanding of matrix-vector analysis (or rather linear algebra) to explore the structure of linear systems. This application of linear algebra to linear systems is precisely what this course is about.

Since matrix-vector computations represent such an important part of our future work, this lecture will review a topic that should be familiar to most of you from your earlier studies in high school. In particular, this lecture will talk about Gaussian elimination as a method to solve systems of linear algebraic equations.

    The familiar problem of solving linear algebraic equations considers equations of the form

    b = Ax        (2)

where x ∈ R^n, b ∈ R^m, and A ∈ R^(m×n). The problem is, given the vector b and matrix A, to determine the vector x.

With regard to this problem there are three questions we can consider:

Does a solution, x, exist?
If a solution does exist, is the solution unique?
If solutions exist, how do we find or characterize all such solutions?

In beginning to answer these questions, we start by considering a method for constructing such solutions. That construction is called Gaussian elimination.

    Homogeneous Systems of Equations

We'll start from an example. Let's consider the problem

        [ 1]   [ 2  1  1] [x1]
    b = [-2] = [ 4  1  0] [x2] = Ax        (3)
        [ 7]   [-2  2  1] [x3]

This matrix-vector equation, of course, is equivalent to the following set of coupled linear algebraic equations,

     1 =  2x1 +  x2 + x3        (4)
    -2 =  4x1 +  x2             (5)
     7 = -2x1 + 2x2 + x3        (6)


    The problem is to find x1, x2, and x3.

Gaussian elimination uses elementary row-column operations to reduce the original 3 by 3 set of equations to a sequence of smaller problems. This reduction, in essence, transforms the original problem into an equivalent problem that is triangular in structure. The reduction is done through elementary row-column operations. These operations involve multiplying one equation by a real constant, adding it to another equation, and then replacing that equation with the result. The equations we select to transform and the constant used in multiplying the first equation are chosen to remove a variable from the second equation. By removing this variable, we effectively reduce the dimensionality of the problem being solved.

As an example, let's multiply the first equation by -2, add it to the second equation, and use the result to replace the second equation. The resulting transformation takes the form

    -2 = -4x1 - 2x2 - 2x3
    -2 =  4x1 +  x2
    -4 =      -  x2 - 2x3        (7)

    Note that this removes x1 from the second equation.

We can now repeat this approach to remove x1 from the third equation. This is done by an elementary row-column operation that multiplies the first equation by 1, adds it to the third equation, and replaces that third equation to yield

    +1 = +2x1 +  x2 +  x3
     7 = -2x1 + 2x2 +  x3
     8 =        3x2 + 2x3        (8)

This transformation results in the following system of equations in which x1 has been removed from the second and third equations,

     1 = 2x1 + x2 +  x3
    -4 =     - x2 - 2x3
     8 =      3x2 + 2x3        (9)

The (1,1) element (the coefficient of x1 in the first equation) is called the pivot. Notice that the last two equations form a smaller 2 by 2 system of equations that is completely independent of x1. So we can apply Gaussian elimination again to this smaller 2 by 2 system.

For this smaller system, we use a row-column operation to remove x2 from the last equation. This is accomplished by multiplying the second equation by 3, adding it to the third equation, and then replacing the third equation with the result. In this case, we see that

    -12 = -3x2 - 6x3
      8 =  3x2 + 2x3
     -4 =      - 4x3        (10)

So that we now obtain a completely transformed system of linear equations whose right hand side is triangular in structure,

     1 = 2x1 + x2 +  x3
    -4 =     - x2 - 2x3
    -4 =          - 4x3        (11)

The last equation above is a 1 by 1 system of equations whose solution can be written down by inspection to obtain x3 = 1. We can now use this value for x3 to solve the 2 by 2 system of equations. We back substitute the value for x3 into the second equation to obtain

    -4 = -x2 - 2  →  x2 = 2        (12)


We repeat this approach by taking the values for x3 and x2 and back substituting into the first equation to obtain

    1 = 2x1 + 2 + 1  →  x1 = -1        (13)

This process is called back substitution. So when does this approach (Gaussian elimination with back substitution) fail? Obviously we can't use it if the pivot is zero. Note that if this occurs, we may be able to avoid the zero-pivot problem by simply reordering the equations. If this cannot be done, however, then we say the system of equations is singular.
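The full procedure can be sketched in a few lines of Python. The function below is a hypothetical helper (not part of the lecture): it applies forward elimination with equation reordering when a pivot is zero, then back substitution, and reproduces the solution of the 3 by 3 example above.

```python
# A sketch of Gaussian elimination with back substitution, applied to the
# 3-by-3 example above (A = [[2,1,1],[4,1,0],[-2,2,1]], b = [1,-2,7]).
# Rows are swapped when a pivot is zero; a pivot column with no nonzero
# entry below it would signal a singular system.

def gauss_solve(A, b):
    n = len(A)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    # forward elimination
    for k in range(n):
        if A[k][k] == 0:
            # reorder equations to avoid the zero pivot (raises if singular)
            swap = next(i for i in range(k + 1, n) if A[i][k] != 0)
            A[k], A[swap] = A[swap], A[k]
            b[k], b[swap] = b[swap], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            A[i] = [A[i][j] - m * A[k][j] for j in range(n)]
            b[i] -= m * b[k]
    # back substitution, from the last (1-by-1) equation upward
    x = [0.0] * n
    for i in reversed(range(n)):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[2, 1, 1], [4, 1, 0], [-2, 2, 1]]
b = [1, -2, 7]
print(gauss_solve(A, b))   # [-1.0, 2.0, 1.0]
```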

A singular system has either no solution or an infinite number of solutions. How do we determine which case applies to a specific singular system? Again, let's examine this question through an example.

    The system under consideration takes the form,

        [0]   [ 1  3  3  2] [x1]
    b = [0] = [ 2  6  9  5] [x2] = Ax        (14)
        [0]   [-1 -3  3  0] [x3]
                            [x4]

This system is singular. This will become apparent as we apply Gaussian elimination to reduce the system. The first pivot is a11 = 1. This is nonzero, so we can use elementary row-column operations to transform the system's original A matrix as follows,

    [ 1  3  3  2]      [ 1  3  3  2]
    [ 2  6  9  5]  →   [ 0  0  3  1]        (15)
    [-1 -3  3  0]      [-1 -3  3  0]

This was done by multiplying the first row by 2 and subtracting from the second row. Applying Gaussian elimination to the third row yields,

    [ 1  3  3  2]      [1  3  3  2]
    [ 0  0  3  1]  →   [0  0  3  1]        (16)
    [-1 -3  3  0]      [0  0  6  2]

This was done by multiplying the first row by -1 and subtracting from the third row. Notice that the second pivot is zero and no reordering of the equations will fix this problem. So this system of equations is singular.

But we still have a smaller 2 by 2 square subsystem that involves the variables x3 and x4. So let's apply Gaussian elimination to this smaller subsystem. The pivot is now a23 = 3. We remove a variable from the third equation by multiplying the second row by 2 and subtracting from the third row. This yields,

    [1  3  3  2]      [1  3  3  2]
    [0  0  3  1]  →   [0  0  3  1]        (17)
    [0  0  6  2]      [0  0  0  0]

The problem now is that the last row is zero, so we can't use back substitution to solve for x4.
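The three row operations above can be replayed directly. A minimal sketch in Python, assuming the reconstructed A with third row [-1 -3 3 0]:

```python
# Forward elimination on the singular 3-by-4 system above, using plain
# Python lists.  The same row operations as in the text reduce A to
# echelon form; the zero pivot in column 2 and the zero last row appear
# exactly as described.

A = [[1, 3, 3, 2],
     [2, 6, 9, 5],
     [-1, -3, 3, 0]]

# multiply row 1 by 2 and subtract from row 2
A[1] = [A[1][j] - 2 * A[0][j] for j in range(4)]
# multiply row 1 by -1 and subtract from row 3 (i.e. add row 1 to row 3)
A[2] = [A[2][j] + A[0][j] for j in range(4)]
# multiply row 2 by 2 and subtract from row 3
A[2] = [A[2][j] - 2 * A[1][j] for j in range(4)]

print(A)  # [[1, 3, 3, 2], [0, 0, 3, 1], [0, 0, 0, 0]]
```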

    The resulting variables in this system can now be divided into two sets.

Basic variables correspond to non-zero pivots. In this example the basic variables are x1 and x3.

Free variables correspond to zero pivots. In this example the free variables are x2 and x4.


So to obtain the most general solution, we assign arbitrary values to the free variables and then use back substitution. In this case we see that the second equation yields,

    3x3 + x4 = 0  →  x3 = -(1/3)x4        (18)

The basic variable x3 is thus expressed in terms of the free variable x4.

We now back substitute the basic variable x3's expression into the first equation to obtain

    0 = x1 + 3x2 + 3x3 + 2x4        (19)
      = x1 + 3x2 -  x4 + 2x4        (20)
      = x1 + 3x2 +  x4              (21)

    and we solve for the single basic variable x1 in terms of the free variables x2 and x4 to obtain

    x1 = -3x2 - x4        (22)

Note that all solutions of this singular linear system of equations may now be written as

        [-3x2 - x4]      [-3]      [ -1 ]
    x = [    x2   ] = x2 [ 1] + x4 [  0 ]        (23)
        [-(1/3)x4 ]      [ 0]      [-1/3]
        [    x4   ]      [ 0]      [  1 ]

All solutions are expressed as a linear combination of two vectors in R^4. This set of linear combinations forms a subspace of R^4. The two vectors form a basis that spans this subspace. The subspace given above has a special name. It is sometimes called the Null Space of the matrix A. This is because any vector in the span of these two vectors is nulled (mapped to zero) by the matrix A.
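A quick numerical check, using the two basis vectors found above, confirms that A nulls both of them:

```python
# Check that the two null-space basis vectors found above are indeed
# nulled by A: multiplying either one (or any linear combination of the
# two) by A gives the zero vector.

A = [[1, 3, 3, 2],
     [2, 6, 9, 5],
     [-1, -3, 3, 0]]

v1 = [-3, 1, 0, 0]       # coefficient vector of the free variable x2
v2 = [-1, 0, -1/3, 1]    # coefficient vector of the free variable x4

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

print(matvec(A, v1))  # [0, 0, 0]
print(matvec(A, v2))  # [0.0, 0.0, 0.0]
```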

This particular example shows we have an infinite number of solutions and those solutions form the null space of A. This example assumed that b = 0. In this case, a singular system always has an infinite number of solutions. Does the same thing happen if b ≠ 0? We'll address that question in the next lecture.

Inhomogeneous Systems of Equations

In the previous lecture we considered a homogeneous system of linear equations of the form

                 [ 1  3  3  2] [x1]
    0 = b = Ax = [ 2  6  9  5] [x2]        (24)
                 [-1 -3  3  0] [x3]
                               [x4]

We found that there were an infinite number of solutions that we could express as

           [-3]      [ -1 ]
    x = x2 [ 1] + x4 [  0 ]  ∈  Null Space of A = N(A)        (25)
           [ 0]      [-1/3]
           [ 0]      [  1 ]

Homogeneous problems always have a solution. This solution is either 0 or the nontrivial null space of A.


When b is non-zero, we have an inhomogeneous problem. To characterize the solution, we need to transform b by our row-column operations. Again appealing to our earlier example, the transformation of an arbitrary b = [b1 b2 b3]^T using the earlier row-column operations yields,

        [b1            ]   [1  3  3  2] [x1]
    b = [b2 - 2b1      ] = [0  0  3  1] [x2]        (26)
        [b3 - 2b2 + 5b1]   [0  0  0  0] [x3]
                                        [x4]

    Note that the last equation requires

    b3 - 2b2 + 5b1 = 0        (27)

Not every b ∈ R^3 will satisfy this relation. This means that the inhomogeneous problem may fail to have a solution for certain b vectors. The question is how to characterize such a situation.
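One way to characterize it: a given b admits a solution exactly when it satisfies the compatibility condition (27). A hypothetical helper, written for illustration:

```python
# The condition b3 - 2*b2 + 5*b1 = 0 decides whether the inhomogeneous
# system above has a solution.  "solvable" is a hypothetical helper name.

def solvable(b):
    b1, b2, b3 = b
    return b3 - 2 * b2 + 5 * b1 == 0

print(solvable([1, 3, 1]))    # True:  1 - 6 + 5 = 0
print(solvable([1, 0, 0]))    # False: 0 - 0 + 5 = 5
```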

    Note that b can be written as lying in the span of four vectors. In particular,

        [b1]   [ 1  3  3  2] [x1]
    b = [b2] = [ 2  6  9  5] [x2] = Ax        (28)
        [b3]   [-1 -3  3  0] [x3]
                             [x4]

             [ 1]      [ 3]      [3]      [2]
        = x1 [ 2] + x2 [ 6] + x3 [9] + x4 [5]        (29)
             [-1]      [-3]      [3]      [0]

So that b lies in a subspace of R^3 that is spanned by the columns of the A matrix. In other words, for this system to have a solution we require that

    b ∈ col(A) = column space of A        (30)

    In this case we can show that

                  [ 1]   [1]
    col(A) = span [ 2] , [3]        (31)
                  [-1]   [1]

which is a two-dimensional plane in R^3. Clearly not all b in R^3 lie on this plane. But if b does lie in col(A), then we can solve the inhomogeneous problem by back substitution. In particular, we can easily show that all solutions of this problem are given by

           [-3]      [ -1 ]   [3b1 - b2       ]
    x = x2 [ 1] + x4 [  0 ] + [0              ]        (32)
           [ 0]      [-1/3]   [(1/3)(b2 - 2b1)]
           [ 0]      [  1 ]   [0              ]

The first two terms on the right-hand side of the above equation are vectors in the null space of A. The third term on the right-hand side of the equation is a particular solution to the problem. In other words, if b ∈ col(A), then all solutions to the problem can be expressed as

    x ∈ xp + N(A)        (33)

where xp is a particular solution to the system of equations. Note that if N(A) = {0} (the trivial null space) then the solution to our system of equations is unique.
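The decomposition x = xp + N(A) can be checked numerically: pick b1 and b2 freely, force b3 = 2b2 - 5b1 so that b lies in col(A), and verify that xp plus any null-space vector solves the system. A sketch (the variable names are my own, not the lecture's):

```python
# Check the full solution formula above: form the particular solution xp
# from the text, add an arbitrary null-space component, and confirm that
# A times the sum reproduces b.

A = [[1, 3, 3, 2],
     [2, 6, 9, 5],
     [-1, -3, 3, 0]]

b1, b2 = 1.0, 3.0
b3 = 2 * b2 - 5 * b1            # forces b into col(A), per condition (27)
b = [b1, b2, b3]

xp = [3 * b1 - b2, 0.0, (b2 - 2 * b1) / 3, 0.0]   # particular solution
x2, x4 = 2.0, -3.0                                # arbitrary free variables
xn = [-3 * x2 - x4, x2, -x4 / 3, x4]              # null-space component
x = [xp[i] + xn[i] for i in range(4)]

Ax = [sum(A[i][j] * x[j] for j in range(4)) for i in range(3)]
print(Ax)  # equals b = [1.0, 3.0, 1.0] up to rounding
```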

What if b does not lie in col(A)? We can still find a solution to this system of linear equations by enlarging the set of what we think of as solutions. This involves finding a vector that satisfies the equation in some other sense than strict equality. We sometimes refer to this as a solution concept.


The last equation says that the error b − b̂ is always in the null space of A^T. This is a common property of minimum mean square estimates that sometimes goes under the name of the orthogonality principle.

Subspace concepts are useful in characterizing when a system of linear algebraic equations has a solution. But what exactly is a subspace? This concept is formalized in linear algebra, which provides a large set of mathematical tools that can be applied to many engineering problems. We'll use linear algebraic concepts throughout this course to study systems that can be modeled by systems of linear differential or difference equations. Next time, we'll say a bit more about linear algebra, primarily to review fundamental concepts and establish some notational conventions.