Gaussian Elimination
TRANSCRIPT
8/8/2019 Gaussian elimination
CONTENTS
1. INTRODUCTION
2. OVERVIEW
3. METHOD OF FINDING THE INVERSE OF MATRIX AND RANK
4. PSEUDOCODE
5. REFERENCES
INTRODUCTION
Gaussian elimination is an algorithm for:
1. solving systems of linear equations
2. finding the rank of a matrix
3. computing the inverse of an invertible matrix
Gaussian elimination is named after the German mathematician and scientist Carl Friedrich Gauss, which makes it an example of Stigler's law.
Elementary row operations are used to reduce a matrix to row echelon form. Gauss–Jordan elimination, an extension of this algorithm, reduces the matrix further to reduced row echelon form. Gaussian elimination alone is sufficient for many applications, and is cheaper than the Gauss–Jordan version.
History
The method of Gaussian elimination appears in Chapter Eight, Rectangular Arrays, of the important Chinese mathematical text Jiuzhang suanshu or The Nine Chapters on the Mathematical Art. Its use is illustrated in eighteen problems, with two to five equations each. The first reference to the book by this title is dated to 179 CE, but parts of it were written as early as approximately 150 BCE.[1] It was commented on by Liu Hui in the 3rd century.
The method in Europe stems from the notes of Isaac Newton.[2] In 1670, he wrote that all the algebra books known to him lacked a lesson for solving simultaneous equations, which Newton then supplied. Cambridge University eventually published the notes as Arithmetica Universalis in 1707, long after Newton had left academic life. The notes were widely imitated, which made (what is now called) Gaussian elimination a standard lesson in algebra textbooks by the end of the 18th century. Carl Friedrich Gauss in 1810 devised a notation for symmetric elimination that was adopted in the 19th century by professional hand computers to solve the normal equations of least-squares problems. The algorithm that is taught in high school was named for Gauss only in the 1950s as a result of confusion over the history of the subject.
Algorithm overview
The process of Gaussian elimination has two parts. The first part (forward elimination) reduces a given system to either triangular or echelon form, or results in a degenerate equation, indicating the system has no solution. This is accomplished through the use of elementary row operations. The second part uses back substitution to find the solution of the system.
Stated equivalently for matrices, the first part reduces a matrix to row echelon form using elementary row operations, while the second reduces it to reduced row echelon form, or row canonical form.
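The two parts can be sketched in Python. This is a minimal illustration (the function name solve and the use of plain lists are our choices, not from the original slides); it includes partial pivoting but is not a production routine:

```python
def solve(A, b):
    """Solve A x = b by forward elimination followed by back substitution.

    A is a list of n rows of n floats; b is a list of n floats.
    Assumes the system has a unique solution.
    """
    n = len(A)
    # Work on an augmented copy [A | b] so the inputs are not modified.
    M = [row[:] + [b[i]] for i, row in enumerate(A)]

    # Forward elimination: reduce to upper triangular form.
    for i in range(n):
        # Partial pivoting: bring the largest entry in column i up to row i.
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        if M[p][i] == 0:
            raise ValueError("matrix is singular")
        M[i], M[p] = M[p], M[i]
        for u in range(i + 1, n):
            f = M[u][i] / M[i][i]
            for j in range(i, n + 1):
                M[u][j] -= f * M[i][j]

    # Back substitution: solve for the unknowns in reverse order.
    x = [0.0] * n
    for i in reversed(range(n)):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x
```

For instance, solve([[2, 1, -1], [-3, -1, 2], [-2, 1, 2]], [8, -11, -3]) recovers x = 2, y = 3, z = −1 up to rounding.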
Another point of view, which turns out to be very useful for analyzing the algorithm, is that Gaussian elimination computes a matrix decomposition. The three elementary row operations used in Gaussian elimination (multiplying a row by a scalar, switching rows, and adding multiples of rows to other rows) amount to multiplying the original matrix by invertible matrices from the left. The first part of the algorithm computes an LU decomposition, while the second part writes the original matrix as the product of a uniquely determined invertible matrix and a uniquely determined reduced row echelon matrix.
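The connection to LU can be made concrete: the multipliers used during forward elimination, collected into a unit lower triangular matrix L, together with the triangular result U, reconstruct A. A sketch (Doolittle-style, no pivoting, function name lu is ours):

```python
def lu(A):
    """LU factorization without pivoting: returns (L, U) with A = L U.

    L is unit lower triangular (it records the elimination multipliers);
    U is the upper triangular result of forward elimination.
    Assumes no zero pivot is encountered.
    """
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for i in range(n):
        for u in range(i + 1, n):
            f = U[u][i] / U[i][i]   # the elimination multiplier
            L[u][i] = f             # recording it builds L
            for j in range(i, n):
                U[u][j] -= f * U[i][j]
    return L, U
```

For example, lu([[4.0, 3.0], [6.0, 3.0]]) gives L = [[1, 0], [1.5, 1]] and U = [[4, 3], [0, -1.5]], and multiplying L by U restores the original matrix.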
Example
Suppose the goal is to find and describe the solution(s), if any, of the following system of linear equations:

L1: 2x + y − z = 8
L2: −3x − y + 2z = −11
L3: −2x + y + 2z = −3

The algorithm is as follows: eliminate x from all equations below L1, and then eliminate y from all equations below L2. This will put the system into triangular form. Then, using back-substitution, each unknown can be solved for.

In the example, x is eliminated from L2 by adding (3/2)L1 to L2. x is then eliminated from L3 by adding L1 to L3. The result is:

L1: 2x + y − z = 8
L2: (1/2)y + (1/2)z = 1
L3: 2y + z = 5

Now y is eliminated from L3 by adding −4L2 to L3. The result is:

L1: 2x + y − z = 8
L2: (1/2)y + (1/2)z = 1
L3: −z = 1
This result is a system of linear equations in triangular form, and so the first part of the algorithm is complete.
The second part, back-substitution, consists of solving for the unknowns in reverse order. From L3 it can be seen that
z = −1.
Then, z can be substituted into L2, which can then be solved to obtain
y = 3.
Next, z and y can be substituted into L1, which can be solved to obtain
x = 2.
The system is solved.
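The solution can be checked by direct substitution. The sketch below assumes the three-equation system 2x+y−z=8, −3x−y+2z=−11, −2x+y+2z=−3 with solution (2, 3, −1); each residual (left-hand side minus right-hand side) should be zero:

```python
# Check, by substitution, that (x, y, z) = (2, 3, -1) satisfies the
# assumed example system 2x+y-z=8, -3x-y+2z=-11, -2x+y+2z=-3.
A = [[2, 1, -1], [-3, -1, 2], [-2, 1, 2]]
b = [8, -11, -3]
x = [2, 3, -1]
residual = [sum(a * v for a, v in zip(row, x)) - bi for row, bi in zip(A, b)]
assert residual == [0, 0, 0]
```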
Some systems cannot be reduced to triangular form, yet still have at least one valid solution: for example, if y had not occurred in L2 and L3 after the first step above, the algorithm would have been unable to reduce the system to triangular form. However, it would still have reduced the system to echelon form. In this case, the system does not have a unique solution, as it contains at least one free variable. The solution set can then be expressed parametrically (that is, in terms of the free variables, so that if values for the free variables are chosen, a solution will be generated).
In practice, one does not usually deal with systems in terms of equations, but instead makes use of the augmented matrix (which is also suitable for computer manipulation). The Gaussian elimination algorithm applied to the augmented matrix of the system above begins with:

[  2    1    −1  |   8 ]
[ −3   −1     2  | −11 ]
[ −2    1     2  |  −3 ]

which, at the end of the first part (Gaussian elimination, zeros only under the leading 1) of the algorithm, looks like this:

[  1   1/2  −1/2 |   4 ]
[  0    1     1  |   2 ]
[  0    0     1  |  −1 ]
That is, it is in row echelon form.
At the end of the algorithm, if Gauss–Jordan elimination (zeros under and above the leading 1) is applied, the result is:

[ 1  0  0 |  2 ]
[ 0  1  0 |  3 ]
[ 0  0  1 | −1 ]

That is, it is in reduced row echelon form, or row canonical form.
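The full reduction to row canonical form can be sketched as a single routine (a minimal illustration; the function name rref is ours). Unlike plain Gaussian elimination, it clears entries above each leading 1 as well as below:

```python
def rref(M):
    """Return the reduced row echelon form of matrix M (a list of rows).

    Zeros are produced both below and above each leading 1.
    """
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    i = 0
    for j in range(cols):
        if i >= rows:
            break
        # Find a pivot in column j, at or below row i.
        p = max(range(i, rows), key=lambda r: abs(M[r][j]))
        if M[p][j] == 0:
            continue  # no pivot in this column; move right
        M[i], M[p] = M[p], M[i]
        piv = M[i][j]
        M[i] = [v / piv for v in M[i]]           # leading entry becomes 1
        for u in range(rows):                    # clear the whole column
            if u != i and M[u][j] != 0:
                f = M[u][j]
                M[u] = [a - f * b for a, b in zip(M[u], M[i])]
        i += 1
    return M
```

Applied to the augmented matrix [[2, 1, -1, 8], [-3, -1, 2, -11], [-2, 1, 2, -3]], it yields the identity block with the solution column 2, 3, −1, up to rounding.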
Other applications
Finding the inverse of a matrix
Suppose A is an n × n square matrix and you need to calculate its inverse. The n × n identity matrix is augmented to the right of A, forming the n × 2n block matrix B = [A, I]. Through application of elementary row operations and the Gaussian elimination algorithm, the left block of B can be reduced to the identity matrix I, which leaves A⁻¹ in the right block of B.
If the algorithm is unable to reduce A to triangular form, then A is not invertible.
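This procedure can be sketched directly (the function name inverse is ours; a minimal illustration, not a numerically hardened routine):

```python
def inverse(A):
    """Invert A by Gauss-Jordan elimination on the block matrix [A | I]."""
    n = len(A)
    # Augment A with the identity on the right.
    M = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(A)]
    for i in range(n):
        # Partial pivoting within the left block.
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        if M[p][i] == 0:
            raise ValueError("matrix is not invertible")
        M[i], M[p] = M[p], M[i]
        piv = M[i][i]
        M[i] = [v / piv for v in M[i]]
        for u in range(n):
            if u != i:
                f = M[u][i]
                M[u] = [a - f * b for a, b in zip(M[u], M[i])]
    # The left block is now I; the right block holds the inverse.
    return [row[n:] for row in M]
```

For example, inverse([[2.0, 1.0], [1.0, 1.0]]) returns [[1.0, -1.0], [-1.0, 2.0]], and a singular input such as [[1, 2], [2, 4]] raises the error, matching the remark above.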
General algorithm to compute ranks and bases
The Gaussian elimination algorithm can be applied to any m × n matrix A. If we get "stuck" in a given column, we move to the next column. In this way, for example, some 6 × 9 matrices can be transformed to a matrix that has a reduced row echelon form like

T =
[ 1  *  0  0  *  *  0  *  0 ]
[ 0  0  1  0  *  *  0  *  0 ]
[ 0  0  0  1  *  *  0  *  0 ]
[ 0  0  0  0  0  0  1  *  0 ]
[ 0  0  0  0  0  0  0  0  1 ]
[ 0  0  0  0  0  0  0  0  0 ]

(the *'s are arbitrary entries). This echelon matrix T contains a wealth of information about A: the rank of A is 5, since there are 5 non-zero rows in T; the vector space spanned by the columns of A has a basis consisting of the first, third, fourth, seventh and ninth columns of A (the columns containing the leading ones in T), and the *'s tell you how the other columns of A can be written as linear combinations of the basis columns.
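The "move to the next column when stuck" rule is the only change needed to turn elimination into a rank computation. A sketch (function name rank is ours; the tolerance parameter is a practical concession to floating point, not part of the exact algorithm):

```python
def rank(A, tol=1e-12):
    """Rank of A = number of nonzero rows after reduction to echelon form."""
    M = [row[:] for row in A]
    rows, cols = len(M), len(M[0])
    r = 0  # index of the next pivot row
    for j in range(cols):
        if r >= rows:
            break
        p = max(range(r, rows), key=lambda k: abs(M[k][j]))
        if abs(M[p][j]) <= tol:
            continue  # stuck in this column: move to the next one
        M[r], M[p] = M[p], M[r]
        for u in range(r + 1, rows):
            f = M[u][j] / M[r][j]
            M[u] = [a - f * b for a, b in zip(M[u], M[r])]
        r += 1
    return r
```

For example, rank([[1, 2, 3], [2, 4, 6], [1, 1, 1]]) is 2, since the second row is a multiple of the first.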
Analysis
Gaussian elimination to solve a system of n equations in n unknowns requires n(n+1)/2 divisions, (2n³ + 3n² − 5n)/6 multiplications, and (2n³ + 3n² − 5n)/6 subtractions,[3] for a total of approximately 2n³/3 operations. So it has a complexity of O(n³).
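The counts can be tabulated directly (the helper name op_count is ours); the ratio of the exact total to the 2n³/3 approximation approaches 1 as n grows:

```python
def op_count(n):
    """Total operation count for Gaussian elimination on an n x n system."""
    divisions = n * (n + 1) // 2
    multiplications = (2 * n**3 + 3 * n**2 - 5 * n) // 6
    subtractions = (2 * n**3 + 3 * n**2 - 5 * n) // 6
    return divisions + multiplications + subtractions

for n in (10, 100, 1000):
    exact = op_count(n)
    approx = 2 * n**3 / 3
    print(n, exact, round(exact / approx, 4))
```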
This algorithm can be used on a computer for systems with thousands of equations and
unknowns. However, the cost becomes prohibitive for systems with millions of equations.
These large systems are generally solved using iterative methods. Specific methods exist for
systems whose coefficients follow a regular pattern (see system of linear equations).
Gaussian elimination can be performed over any field.
Gaussian elimination is numerically stable for diagonally dominant or positive-definite matrices. For general matrices, Gaussian elimination is usually considered stable in practice with partial pivoting as described below, even though there are examples for which it is unstable.[4]
Higher order tensors
Gaussian elimination does not generalize in any simple way to higher-order tensors (matrices are order-2 tensors); even computing the rank of a tensor of order greater than 2 is a difficult problem.
Pseudocode
As explained above, Gaussian elimination writes a given m × n matrix A uniquely as a product of an invertible m × m matrix S and a row echelon matrix T. Here, S is the product of the matrices corresponding to the row operations performed.
The formal algorithm to compute T from A follows. We write A[i,j] for the entry in row i, column j of matrix A. The transformation is performed "in place", meaning that the original matrix A is lost, being successively replaced by T.
i := 1
j := 1
while (i ≤ m and j ≤ n) do
  Find pivot in column j, starting in row i:
  maxi := i
  for k := i+1 to m do
    if abs(A[k,j]) > abs(A[maxi,j]) then
      maxi := k
    end if
  end for
  if A[maxi,j] ≠ 0 then
    swap rows i and maxi, but do not change the value of i
    Now A[i,j] will contain the old value of A[maxi,j].
    divide each entry in row i by A[i,j]
    Now A[i,j] will have the value 1.
    for u := i+1 to m do
      subtract A[u,j] * row i from row u
      Now A[u,j] will be 0, since A[u,j] - A[i,j] * A[u,j] = A[u,j] - 1 * A[u,j] = 0.
    end for
    i := i + 1
  end if
  j := j + 1
end while
This algorithm differs slightly from the one discussed earlier, because before eliminating a variable, it first exchanges rows to move the entry with the largest absolute value to the "pivot position". Such "partial pivoting" improves the numerical stability of the algorithm; some other variants are also in use.
The column currently being transformed is called the pivot column. Proceed from left to
right, letting the pivot column be the first column, then the second column, etc. and finally
the last column before the vertical line. For each pivot column, do the following two steps
before moving on to the next pivot column:
1. Locate the diagonal element in the pivot column. This element is called the pivot. The
row containing the pivot is called the pivot row. Divide every element in the pivot
row by the pivot to get a new pivot row with a 1 in the pivot position.
2. Get a 0 in each position below the pivot position by subtracting a suitable multiple of
the pivot row from each of the rows below it.
Upon completion of this procedure the augmented matrix will be in row echelon form and may be solved by back-substitution.
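The pseudocode above can be transcribed almost line for line into Python (a sketch; the function name to_echelon is ours, and lists stand in for the matrix):

```python
def to_echelon(A):
    """Reduce A (a list of m rows of n floats) to row echelon form with
    leading 1s, in place, using partial pivoting, as in the pseudocode."""
    m, n = len(A), len(A[0])
    i = j = 0
    while i < m and j < n:
        # Find pivot in column j, starting in row i.
        maxi = max(range(i, m), key=lambda k: abs(A[k][j]))
        if A[maxi][j] != 0:
            A[i], A[maxi] = A[maxi], A[i]        # swap rows i and maxi
            piv = A[i][j]
            A[i] = [v / piv for v in A[i]]       # A[i][j] becomes 1
            for u in range(i + 1, m):
                f = A[u][j]
                # Subtract A[u][j] * row i from row u; A[u][j] becomes 0.
                A[u] = [a - f * b for a, b in zip(A[u], A[i])]
            i += 1
        j += 1
    return A
```

Note that a zero pivot column simply advances j without advancing i, exactly as in the pseudocode.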
With the increasing popularity of multi-core processors, programmers now exploit thread-level parallel Gaussian elimination algorithms to increase the speed of computing. Pseudocode for the shared-memory programming model (as opposed to the message-passing model) is listed below.
void parallel(int num_threads, int matrix_dimension)
{
    int i;
    for (i = 0; i < ...; i++)      /* thread creation loop, truncated */
        ...
    for (i = k + 1; i < ...; i++)  /* elimination loop, truncated */
        ...
    /* barrier: the last thread to arrive resets the count and wakes the rest */
    pthread_mutex_lock(&(mybarrier->barrier_mutex));
    mybarrier->cur_count++;
    if (mybarrier->cur_count != num_threads)
        pthread_cond_wait(&(mybarrier->barrier_cond), &(mybarrier->barrier_mutex));
    else {
        mybarrier->cur_count = 0;
        pthread_cond_broadcast(&(mybarrier->barrier_cond));
    }
    pthread_mutex_unlock(&(mybarrier->barrier_mutex));
}
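The same shared-memory pattern can be sketched with Python's threading module (the function name parallel_eliminate is ours). Each thread eliminates a strided subset of rows, and a barrier after each pivot step plays the role of the pthread condition-variable barrier above; since CPython threads do not run numeric code in parallel, this illustrates only the synchronization pattern:

```python
import threading

def parallel_eliminate(A, num_threads=2):
    """Forward elimination where each thread updates a strided subset of rows.

    After each pivot step k, all threads wait at a barrier so that the
    updates using row k are complete before step k+1 begins.
    Assumes nonzero pivots (no pivoting, for brevity).
    """
    n = len(A)
    barrier = threading.Barrier(num_threads)

    def worker(tid):
        for k in range(n - 1):
            # Rows k+1 .. n-1 are divided among the threads round-robin;
            # the subsets are disjoint, so no two threads write the same row.
            for u in range(k + 1 + tid, n, num_threads):
                f = A[u][k] / A[k][k]
                for j in range(k, n):
                    A[u][j] -= f * A[k][j]
            barrier.wait()  # all eliminations for step k are done everywhere

    threads = [threading.Thread(target=worker, args=(t,)) for t in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return A
```

The barrier is essential: without it, a thread could start step k+1 using a row that another thread is still updating.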
References
Atkinson, Kendall A. (1989), An Introduction to Numerical Analysis (2nd ed.), New York: John Wiley & Sons.
Calinger, Ronald (1999), A Contextual History of Mathematics, Prentice Hall.
Farebrother, R.W. (1988), Linear Least Squares Computations, Statistics: Textbooks and Monographs, Marcel Dekker.
Golub, Gene H.; Van Loan, Charles F. (1996), Matrix Computations (3rd ed.), Johns Hopkins University Press.
Katz, Victor J. (2004), A History of Mathematics, Brief Version, Addison-Wesley.