NOTES ON THE SOLUTION OF ALGEBRAIC LINEAR SIMULTANEOUS EQUATIONS
By L. FOX, H. D. HUSKEY, and J. H. WILKINSON (The National Physical Laboratory, Teddington, Middlesex)
[Received 14 October 1947]
SUMMARY

In this paper four methods of solving simultaneous equations and inverting matrices are described from the viewpoint of the practical computer. The first three methods can be performed on ordinary desk calculating machines; the fourth uses Hollerith punched card equipment. The desk methods considered are the method of elimination or 'pivotal condensation', the method of 'orthogonal vectors', and the method of Choleski. The elimination method is, with slight variants, the simple method taught at school, used in such a way that solutions are obtained with the maximum possible speed and accuracy. In the method of orthogonal vectors, applicable only to symmetric matrices, a new set of variables is chosen in such a way that the matrix is transformed to a diagonal form, from which the solution of equations or the inverse of the matrix is immediately obtainable. The Choleski method, applied here again only to symmetric matrices, expresses the square matrix as the product of two triangular matrices, the reciprocation of which is a relatively simple operation. This method is quicker and more accurate than the others and can be used, with slight modifications, in the case of unsymmetric matrices. The procedure used with punched card equipment is essentially a mechanization of the elimination method, with a considerable advantage in speed over the corresponding desk method.
1. Introduction

METHODS for the solution of algebraic linear simultaneous equations fall naturally into two classes. First, there is the indirect approach, in which the unknowns are obtained by successive approximation. Of such methods the procedure of Morris (1), and the relaxation method of Southwell (2), are best known and the most widely used. Secondly, there is the direct approach, the basic characteristic of which is the fact that any desired accuracy can be obtained in a single application of the process, provided that sufficient figures are retained throughout the computation. The best known method of this class is that known classically as the Gaussian algorithm, or more recently as the method of 'pivotal condensation'. Here the matrix is reduced to a triangular form by successive elimination of variables, and the solution is finally obtained by a process known as back-substitution. Alternatively, by extensions of the elimination process, the matrix can be reduced to a diagonal form, and the back-substitution avoided.
Downloaded from https://academic.oup.com/qjmam/article/1/1/149/1883392 by guest on 26 November 2021
150 L. FOX, H. D. HUSKEY, AND J. H. WILKINSON
There are striking differences between the two types of method quoted. By the direct method the solution can always be obtained (provided it exists), and the only limit on accuracy is the number of figures to which the computer is prepared or compelled to work. Indirect methods, on the other hand, may never converge to a solution, or may converge so slowly as to be impracticable.
Again, if an indirect method converges reasonably quickly, the number of figures retained in the computation can be increased gradually as the solution improves, reducing to a minimum the labour involved. By the direct method, on the other hand, the precision of the final solution has an upper bound which is completely dependent on the number of figures retained at the first stage of the computation.† Finally, mistakes in the indirect process can merely delay the convergence to an accurate solution, while mistakes in direct methods may completely invalidate the solution obtained, so that careful checks have to be applied throughout.
There is little doubt that indirect methods at their best are quicker and less laborious than direct methods when performed on desk computing machines. They are at their best for equations in which the diagonal terms are large compared with off-diagonal terms, a state of affairs encountered often in survey problems, statistical problems, and in the numerical solution of certain types of ordinary and partial differential equations. Unfortunately there seem to be many practical problems in which the equations are of a type for which indirect methods are generally useless and direct methods must be used.
It is with direct methods that this paper is chiefly concerned. In section 2 we discuss some aspects of the method of pivotal condensation in the light of recent experience in the Mathematics Division, National Physical Laboratory. In section 3 we give the theory and computational details of two methods applicable to symmetric matrices. The first of these seemed to be new, but was subsequently found to be almost identical with the escalator process of Morris (3). So different is the theoretical approach that this similarity was only noticed by comparing the numerical results obtained by the two methods when applied to the example given by Morris (3). Our presentation is accordingly given here in full detail, with particular emphasis on the computational layout. The second method, due to Choleski, is very powerful but not very well known in this country.
In § 4 a numerical example is solved by each of the methods described in previous sections.
In section 5 we mention, without details, a method in which punched card equipment is adapted for the mechanization of pivotal condensation. This method, several times as fast as the corresponding desk computation, would seem to be of considerable importance in view of the widespread demand for rapid methods of solving equations, calculating determinants, and reciprocating matrices.

† Though there are, of course, methods for improving the accuracy of an approximate inverse of a matrix, or of the approximate solution of a set of linear equations.
The present paper is concerned with computing methods. In a companion paper (4) a mathematical analysis is given of the rounding-off errors inherent in some matrix processes. The main results of this analysis are summarized briefly in § 6. The most striking result is that previous estimates of error have been far too pessimistic, and the practice of retaining large numbers of guarding figures has been quite unnecessary.
2. Pivotal condensation

A. Description of method
Though this method is very well known and has often been described, the relevant information is given here for the sake of completeness, as an introduction to § 5, and for the further inclusion of details arising from recent experience in the Mathematics Division.
Briefly, the solution of simultaneous equations requires the computation of a column matrix x, satisfying the equation
Ax = b, (1)
where A is a known square matrix of order n and b a known column matrix. Closely allied is the problem of the reciprocation of the matrix A, involving the computation of a square matrix X such that
AX = I. (2)
Clearly the solution of (2) is equivalent to the solution of n equations of the type (1), in which b denotes successively the n columns of the unit matrix I, and the n solutions x then give the n columns of the reciprocal X of A.
For the solution of equation (1) the process of pivotal condensation or elimination is generally performed in such a way that the matrix A is transformed into a triangular matrix. The resulting n equations contain n, n-1, ..., 2, 1 unknowns respectively, and are solved by an obvious process of 'back-substitution'.
For the computation of the reciprocal of a matrix (equation (2)) it is desirable to extend the elimination process, reducing the matrix to a diagonal form. This extended process increases the work of elimination by 50 per cent., at the expense of avoiding the back-substitution. It is therefore not worth while for the solution of one set of simultaneous equations, but undoubtedly pays for the computation of a reciprocal, or
when the solution of equation (1) is required for constant A and several different b.
The simple elimination process proceeds by the following steps:

(i) Write down the matrix A (n rows and n columns) side by side with the matrix b (n rows and one or more columns, according to the number of equations to be solved).
(ii) Calculate the row sums of the whole matrix (A, b), and record them in a separate column s, say, which will be used to serve as a check on the computation.
(iii) Pick out the largest of the elements a_ij of the matrix A. Suppose this element, the so-called 'pivot', is a_kl. Calculate and record the (n-1) quantities

m_i = -a_il / a_kl, for all i ≠ k.
(iv) Multiply the kth row (the 'pivotal' row) of the complete matrix (A, b, s) by the factors m_i and add to the corresponding ith rows. That is, form

a_ij^(1) = a_ij + m_i a_kj, for all j and all i ≠ k. (3)

The kth row of the new matrix A^(1) is the unchanged pivotal row of A, and the lth column of A^(1) has zeros in all rows except the kth.
(v) Check the computation by summing the elements of the rows of the new matrix (A^(1), b^(1)). These sums should be equal, apart from rounding-off errors in the last figure, to the elements in the corresponding rows of the transformed sum s^(1).
Two courses are now possible. In the simple method, whereby the matrix is reduced to a triangular form, the kth row and the lth column are henceforth left unaltered, and are not written down again. We have in fact effectively eliminated the unknown x_l, leaving (n-1) equations in (n-1) unknowns. The elimination process is then continued, picking out each time the largest element of A^(r) as pivot, until only one row and column, that is, a single term, remains in A^(n-1). The n pivotal rows, containing respectively n, n-1, ..., 2, 1 unknowns, are then assembled together, and the components of the solution or solutions x are obtained successively by an obvious but somewhat tedious and error-provoking process.
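As a modern illustration, steps (i) to (v) and the back-substitution may be sketched in Python (the routine and its names are ours; the paper's computations were of course done on desk machines, and the row-sum check of steps (ii) and (v) is omitted for brevity):

```python
# Sketch of the simple elimination process of section 2A: at each stage
# the largest remaining element of A is taken as pivot, the pivotal row
# is frozen, and the solution is recovered by back-substitution.

def solve_by_elimination(A, b):
    n = len(A)
    A = [row[:] for row in A]          # work on copies
    b = b[:]
    eliminated_rows, eliminated_cols = set(), set()
    pivots = []                        # (row k, column l) in pivotal order
    for _ in range(n):
        # step (iii): pick the largest remaining element as pivot
        k, l = max(((i, j) for i in range(n) if i not in eliminated_rows
                    for j in range(n) if j not in eliminated_cols),
                   key=lambda ij: abs(A[ij[0]][ij[1]]))
        pivots.append((k, l))
        # step (iv): add m_i = -a_il / a_kl times the pivotal row to row i
        for i in range(n):
            if i == k or i in eliminated_rows:
                continue
            m = -A[i][l] / A[k][l]
            for j in range(n):
                A[i][j] += m * A[k][j]
            b[i] += m * b[k]
        eliminated_rows.add(k)
        eliminated_cols.add(l)
    # back-substitution through the pivotal rows in reverse order
    x = [0.0] * n
    for k, l in reversed(pivots):
        x[l] = (b[k] - sum(A[k][j] * x[j] for j in range(n) if j != l)) / A[k][l]
    return x
```

Each pivotal row, taken in reverse order, contains one unknown not already determined, which is exactly the back-substitution described above.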
Alternatively, after the first elimination process, the kth row and lth column are retained in the new matrix A^(1) and treated in subsequent eliminations like any other row and column. The sequence of pivotal elements is exactly the same as before, the pivot at any stage being the
largest element in the part of the matrix excluding previous pivotal rows. At the final stage each row has been used once, and once only, as pivotal row, and in the resulting matrix A^(n-1) all the elements are zero except these pivotal elements, one in each row and column. Simple division of these elements into the corresponding elements of the final matrix b^(n-1) gives immediately the solution or solutions x of the equations. This extension of the Gauss algorithm is usually attributed to Jordan.
A slightly different process for avoiding the back-substitution is favoured by some writers (5). For the solution of equation (1), the first step would be to write down the coefficients in the scheme
A    | b
-----|---
-I_n | O

where I_n is the unit matrix of order n. Pivots are chosen from A in exactly the same sequence as before, and the rows of -I_n as well as of A are included in the elimination process, the simple one used for the reduction of A to a triangular form. Clearly at each stage one more row of -I_n is involved, and one more row of A is omitted so that, as in the previous method, the number of rows to be written down is always the same. The number of columns on the left of the vertical line is reduced by one at each stage. When the last elimination has been performed, nothing remains above or to the left of the vertical line, and the solution to the equations, or the reciprocal matrix (for b = I_n) is given in the bottom right-hand corner.†
An example performed by each of the three methods is given in § 4.
B. Special points

A mathematical analysis of questions concerning the error of the reciprocal matrix is given in a companion paper (4), summarized in § 6. The points discussed in this section are prompted by recent practical experience in the Mathematics Division of the N.P.L.
1. The choice of the largest coefficient as pivot reduces to a minimum the rounding-off error by ensuring in the simple elimination process that the factors m_i are always less than unity. In the extended processes some of the factors may be greater than unity. For the first method this can occur in the factors corresponding to previous pivotal rows, and for the second method in the factors corresponding to the rows of -I_n. This feature is unavoidable, and occurs in a disguised form in the 'substituting-back' process.
† Because of the horizontal line separating (A, b) from (-I_n, O) we call this, colloquially, the 'below-the-line' method.
2. Since only a finite number of figures can be retained in the computation, even if the initial coefficients of the matrix A are known exactly, the solution obtained is only an approximation. This approximation x_0 should always be checked by computing the residuals (from equation (1))

δ = b - A x_0.

A more accurate solution x_0 + x_1 can then be obtained by finding x_1 from

A x_1 = δ.

Most of the elimination has already been performed in the computation of x_0, so little extra effort is required.

Somewhat similarly, a more accurate inverse can be obtained in the form X_0 + X_1, where X_0 is an approximate solution of (2), and

X_1 = X_0(I - A X_0).
This second solution is worth obtaining not only when improvement is required, but also in those cases when accuracy of a given order is required, and there is doubt about the number of correct figures in the first approximation, of which the residuals do not always give a reliable indication.
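The correction formula X_1 = X_0(I - A X_0) is easily illustrated. The following Python sketch (the helpers and the small test matrix are ours, not the paper's) applies one correction to a deliberately rounded approximate inverse; the error is roughly squared by each application:

```python
# One step of inverse improvement: given an approximate reciprocal X0
# of A, form the residual matrix R = I - A X0, the correction
# X1 = X0 R, and return the improved inverse X0 + X1.

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def improve_inverse(A, X0):
    n = len(A)
    AX0 = matmul(A, X0)
    R = [[(1.0 if i == j else 0.0) - AX0[i][j] for j in range(n)]
         for i in range(n)]            # R = I - A X0
    X1 = matmul(X0, R)
    return [[X0[i][j] + X1[i][j] for j in range(n)] for i in range(n)]
```

With A = [[2, 1], [1, 1]] (exact inverse [[1, -1], [-1, 2]]) and X0 rounded to two decimals, one application reduces the error from about 10^-2 to about 10^-4.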
3. Very often the coefficients and right-hand sides of the equations are obtained from physical considerations and experiments, and are therefore known to a limited degree of accuracy. Consequently any solution, for which the residuals are less than the 'tolerance' of the original system, is as good as any other, and it is pointless to attempt to obtain more accurate solutions. For example, if the coefficients are known exactly, whilst the right-hand sides have an error which may be as much as 5 units in, say, the fourth decimal place, any solution is 'accurate' for which the residuals are less than 5 units in the fourth decimal.
4. It seems to be customary, in solving equations by elimination, to keep many more extra figures than are given in the original coefficients. In our experience this is quite unnecessary.
To fix ideas, consider the solution by simple elimination of a set of equations, working throughout with, say, six decimal places. Originally all the coefficients are less than unity and at least one in every row lies between 0.1 and 1.0. The elimination process is performed until the final pivot is reached. This pivot is written down to six decimal places, of which the last one or two figures may be incorrect, due to accumulation of rounding-off errors. (In spite of much that has been written to the contrary, it is our experience that this error is small.)
The significance of the last pivot is this, that with well-conditioned equations its value is not very much less, and may even be greater, than the first pivot. Thus it may contain as many as six significant figures,
resulting in solutions correct almost to six significant figures. With ill-conditioned equations, this last pivot may be much smaller than the original: possibly the first four figures after the decimal point are all zeros. Consequently only a few significant figures are available in the final pivot, and results are certainly not obtainable to more figures.
This does not, however, mean that more accurate solutions could be obtained by taking originally more than six places. For if the original coefficients are known accurately only to six decimals, the last pivot cannot be obtained to better than six decimals, and if four significant figures were lost in the elimination, only two remain and more are meaningless.
If, furthermore, the original coefficients were known only to four decimals, and in the working to six decimals the first four figures in the final pivot were zero, then the original equations have effectively no solution, being linearly dependent. A solution obtained with the help of more figures would, as far as the physical problem is concerned, be meaningless and even misleading.
Our conclusion is then as follows. If the original coefficients are correct to n places of decimals, computations should be carried out with perhaps one extra figure for every ten equations, these extra figures being the so-called 'guarding figures' for the accumulation of rounding-off errors. Then if the first n decimals of the final pivot contain m significant figures, no answer to better than m significant figures can be obtained from the given equations for the physical problem.
These conclusions have been borne out by experiment on a set of eighteen ill-conditioned equations, whose coefficients were given to five decimals. Several guarding figures were added, and the final pivot had two significant figures in its first five decimals. The coefficients were then rounded off to four decimals, and the elimination process repeated, only one significant figure being left in the final pivot. The two solutions agreed generally to only one significant figure, bearing out the above conclusions.
3. Two methods applicable to symmetric matrices

One of the disadvantages of the elimination method, even in the most favourable case of a symmetric matrix, is that quite a substantial part of the total time is taken in recording the numbers used at the various stages. In the two methods of this section far fewer numbers are recorded, and the saving in time is considerable. Improved accuracy would also be expected, since for every recorded figure one rounding off is performed. Though they apply only to symmetric matrices, any matrix can be brought to this form by multiplication by its transpose.
The first method, which we have called 'the method of orthogonal
vectors', requires a word of explanation. As stated above, this seemed to us to be new, but in fact is almost identical with the escalator process of Morris (3). So different is the theoretical approach, however, that this similarity was only noticed by comparing actual numerical results. This new approach is expressible very concisely in vector and matrix notation, and permits of a very convenient computational layout. There is also an important computational difference which is explained in the next section.
A. The method of orthogonal vectors
In the equation

Ax = b (1)

the solution x is a column matrix or vector. The present method consists in expressing x as a linear combination of n column matrices or vectors x_r in the form

x = Σ_{r=1..n} α_r x_r. (4)

These vectors are chosen in a special way to satisfy the 'orthogonality' condition

x_r' A x_s = 0 (r ≠ s), (5)

where A is necessarily symmetric. Then from (1) and (4) we have

Σ_{r=1..n} α_r A x_r = b. (6)

Premultiplication by x_s', for all s = 1, ..., n, then gives the n equations typified by

Σ_{r=1..n} α_r x_s' A x_r = x_s' b. (7)

Equations (7) constitute a set of simultaneous equations for the quantities α_r. By equation (5), all terms in (7) vanish for s ≠ r, that is, the matrix of coefficients is diagonal. We have in fact

α_r = x_r' b / (x_r' A x_r), (8)

and the final solution from (4) is

x = Σ_{r=1..n} {x_r' b / (x_r' A x_r)} x_r. (9)

The orthogonal vectors are obtained as follows. Starting with the n coordinate vectors

y_1, y_2, ..., y_n, (10)

we calculate in turn the vectors x_r, starting with x_1 = y_1. The vector x_2 is taken as

x_2 = y_2 + λ_2 x_1, (11)
in which, since x_2 is to be orthogonal to x_1, we use equation (5) and obtain

λ_2 = - x_1' A y_2 / (x_1' A x_1) = - y_2' A x_1 / (x_1' A x_1). (12)

Similarly

x_3 = y_3 + λ_3 x_1 + μ_3 x_2, (13)

in which, making x_3 orthogonal to both x_1 and x_2, we find (since x_1 and x_2 are already orthogonal)

λ_3 = - y_3' A x_1 / (x_1' A x_1), μ_3 = - y_3' A x_2 / (x_2' A x_2). (14)

Finally

x_n = y_n + λ_n x_1 + μ_n x_2 + ...,

and

λ_n = - y_n' A x_1 / (x_1' A x_1), μ_n = - y_n' A x_2 / (x_2' A x_2), etc.

These orthogonal vectors, and hence the coefficients on the left of
equation (7), are independent of the constant term b. The extension of this method to the evaluation of the reciprocal of A is therefore fairly easy, involving merely the computation from equation (7) of n sets of values of α for the n columns b of the unit matrix. The remaining calculation follows trivially from equation (4).
In practice, we regard this method as a device whereby the original matrix is transformed to an almost diagonal form. It is clearly not essential for equation (5) to hold: given any vectors x_r we can, in theory, set up equations (7), solve for α, and obtain the required solution from equation (4). In practice the orthogonality condition is used because it gives the simplest set of equations (7). Since we work with a finite number of figures, equations like (5) will not be satisfied exactly, and the off-diagonal terms in equation (7) may differ from zero in the number of places to which we are working. We therefore accept and record these terms, and solve the resulting equations (7) by iteration, which in this case is very rapid. For large numbers of equations, particularly when they are ill-conditioned, this treatment adds greatly to the accuracy attainable. Here we differ from the escalator process, which ignores the off-diagonal terms and effectively uses equation (8) for the computation of the α's.
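The iterative solution of the almost-diagonal equations (7) can be sketched as follows (a simple Jacobi-style sweep; the routine and its names are ours, and the paper does not prescribe a particular iteration):

```python
# Sketch of iterating on an almost-diagonal system: c[s][r] stands for
# the coefficients x_s' A x_r of equations (7) and g[s] for the right
# sides x_s' b.  With tiny off-diagonal c's the iteration converges in
# very few sweeps, starting from the purely diagonal solution.

def solve_nearly_diagonal(c, g, sweeps=10):
    n = len(g)
    alpha = [g[s] / c[s][s] for s in range(n)]      # diagonal first guess
    for _ in range(sweeps):
        alpha = [(g[s] - sum(c[s][r] * alpha[r] for r in range(n) if r != s))
                 / c[s][s] for s in range(n)]
    return alpha
```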
In the computations it is convenient to write column matrices like x_r and A x_r in rows. The matrix X whose rows are the vectors x_r is then seen to be a lower triangular matrix, with units in the principal diagonal. The numbers λ, μ, etc., then fit into place in the upper half of the matrix, λ_2 to λ_n occupying the places 2 to n in row 1, μ_3 to μ_n occupying places 3 to n in row 2, etc. In this scheme the various numbers are ideally situated for the computation of successive x_r.
The orthogonality condition (5) implies that the matrix whose rows are the column matrices A x_r is an upper triangular matrix. The effective vanishing of terms below the principal diagonal is one check on the computation. A more powerful check on A x_r, the summation check, is obtained by calculating s x_r, where s is the row formed by the sum of the columns of A. This quantity should be effectively equal to the sum of the elements of A x_r.

† The vectors are underlined to prevent confusion with the coefficients λ, μ, etc.
The computations are performed in the following order, starting at the point where the rth vector x_r has just been formed.
1. Calculate x_r' A x_s for all s < r. This is a good check on the accuracy of x_r, and these quantities are needed in any case, being some of the (practically zero) coefficients of equation (7).

2. Calculate A x_r, checking that all elements less than the rth are practically zero, and carrying out also the summation check.

3. Calculate x_r' A x_r (another coefficient in (7)) and the quantities of the type λ of equation (14). (Note that y_s' A x_r is merely the sth element of A x_r.)

4. Calculate x_s' A x_r for all s < r. These values should also be very small and very nearly equal to the corresponding terms x_r' A x_s. A set of r × r coefficients in the top left-hand corner of the matrix given by equation (7) has now been computed.

5. Calculate the vector x_(r+1).

6. Proceed in this way, and in this order, until all the vectors x_r and all the coefficients x_r' A x_s have been obtained.

7. Calculate x_s' b for all s.

8. Solve equation (7) for α by relaxation or iteration.

9. Calculate the final solution x from equation (4).

10. Calculate the residuals

δ = b - A x,

and obtain if required a more accurate solution by repeating the process from step (7) onwards with δ in place of b.
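The whole procedure may be condensed into a short Python sketch (ours, not the paper's layout). For brevity it takes each α_r directly as x_r' b / (x_r' A x_r), whereas the paper records the small off-diagonal terms and iterates:

```python
# Sketch of the method of orthogonal vectors for a symmetric matrix A:
# the coordinate vectors y_1, ..., y_n are made A-orthogonal in turn,
# and the solution is assembled as x = sum of alpha_r x_r.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def solve_by_orthogonal_vectors(A, b):
    n = len(A)
    xs, Axs = [], []                    # the vectors x_r and A x_r
    for r in range(n):
        y = [float(j == r) for j in range(n)]   # coordinate vector y_r
        x = y[:]
        for x_s, Ax_s in zip(xs, Axs):
            # subtract (y_r' A x_s / x_s' A x_s) x_s, as in (12)-(14)
            c = dot(y, Ax_s) / dot(x_s, Ax_s)
            x = [xi - c * xsi for xi, xsi in zip(x, x_s)]
        xs.append(x)
        Axs.append(matvec(A, x))
    # diagonal approximation for the alphas
    alphas = [dot(x, b) / dot(x, Ax) for x, Ax in zip(xs, Axs)]
    return [sum(a * x[j] for a, x in zip(alphas, xs)) for j in range(n)]
```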
When the matrix is ill-conditioned, the diagonal terms x_r' A x_r in the
equations given by (7) suffer, as did the 'pivots' in the elimination method, from loss of significant figures. In experiments with a quite ill-conditioned set of eighteen equations, however, a more accurate solution was obtained than by the elimination process, working to the same number of decimals, with a saving of time of about one-third. This increased accuracy is due to some extent to the inclusion of the small but non-zero off-diagonal terms in the equations (7), and the saving of time to the considerable decrease in the number of recorded quantities.
An example is given in § 4.
B. Choleski's method

Another method, due to Choleski, for the reciprocation of a symmetric matrix was pointed out to us recently by John Todd. This method is so simple in theory and straightforward in practice that its neglect in this country is surprising. The method consists simply in expressing the symmetric matrix A in the form

A = LL', (15)

where L is a lower triangular matrix and L', its transpose, an upper triangular matrix. Then

A^(-1) = L'^(-1) L^(-1), (16)
and the problem is solved.

Methods for the reciprocation of a triangular matrix are given in Elementary Matrices (6). The problem merely involves the back-substitution process mentioned in § 2, for equations whose right-hand sides take successively the columns of the unit matrix. Further, L'^(-1) is the transpose of L^(-1), so that only one such process is necessary.
The computation of L is also simple. Consider the case of a third-order matrix. We have to find the elements α_rs, corresponding to known elements a_rs, for which the following matrix equation holds:

(α_11   0     0  ) (α_11  α_21  α_31)   (a_11  a_12  a_13)
(α_21  α_22   0  ) (  0   α_22  α_32) = (a_21  a_22  a_23)   (17)
(α_31  α_32  α_33) (  0     0   α_33)   (a_31  a_32  a_33)

Computationally, it is better to consider the matrix product as composed of the scalar products of rows of the left-hand matrix. If (r, s) denotes the scalar product of the rth and sth rows, we have the six independent equations:

(1,1) = a_11,  (2,1) = (1,2) = a_12,  (3,1) = (1,3) = a_13,  giving successively α_11, α_21, α_31;

(2,2) = a_22,  (3,2) = (2,3) = a_23,  giving successively α_22, α_32;

and (3,3) = a_33, giving α_33.
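The column-by-column computation of L from these row scalar products can be written as a short sketch (ours), valid for a symmetric positive definite matrix of any order:

```python
# Sketch of the Choleski factorisation A = L L', computed column by
# column: each element of L follows from one scalar product of rows
# already obtained, plus one division (or one square root).

import math

def choleski(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for s in range(n):                    # column s of L
        # diagonal: (s, s) = a_ss gives the sum of squares of row s
        L[s][s] = math.sqrt(A[s][s] - sum(L[s][k] ** 2 for k in range(s)))
        for r in range(s + 1, n):         # elements below the diagonal
            # (r, s) = a_rs, the scalar product of rows r and s
            L[r][s] = (A[r][s]
                       - sum(L[r][k] * L[s][k] for k in range(s))) / L[s][s]
    return L
```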
Thus only the matrix L need be recorded, and its elements are obtained column by column. A good check is provided by writing down an extra row whose elements are the sums of the columns so far obtained. If this row is denoted by s, and the corresponding row obtained from the matrix A denoted by Σ, the identity

(nth row of L) s' = Σ_n, (18)

where Σ_n is the nth element of Σ, holds for all n. As soon as a column is completed, and the element s_r is obtained, application of the identity (18) for n = r gives a perfect check on the computation so far performed.
As pointed out by A. M. Turing, it is unnecessary to compute the reciprocal of L if the solution is desired to only one set of simultaneous equations. For equations (1) and (15) imply that
LL'x = b. (19)
Thus writing L'x = y, we have

Ly = b,
L'x = y, (20)

and two processes of back-substitution give the solution.

Though this method applies only to a symmetric matrix, Turing (4) has
shown that it can be extended, without the device of normalization, to the unsymmetric case in a form convenient for computation.
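Turing's two triangular solves (L y = b by forward substitution, then L'x = y by back-substitution) can be sketched thus (routine names are ours):

```python
# Sketch of solving A x = b given the Choleski factor L of A = L L':
# no reciprocal of L is ever formed.

def solve_choleski(L, b):
    n = len(L)
    y = [0.0] * n
    for i in range(n):                    # forward substitution: L y = b
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in reversed(range(n)):          # back-substitution: L' x = y
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x
```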
For symmetric matrices, this method is undoubtedly the quickest, and probably the most accurate, fewer figures being recorded than in any of the other processes described. It has also a rather important application in the problem of calculating the roots λ and the modes x of the matrix equation

(λA - B)x = 0, (21)

where A and B are both positive definite, symmetric matrices. By any method the case

(λI - C)x = 0 (22)

is very much more easy to deal with, and equation (21) can be transformed into (22) by the Choleski device. We have

(λA - B)x = 0,

whence

(λI - A^(-1)B)x = 0. (23)

Now write A = LL', A^(-1) = L'^(-1)L^(-1). Equation (23) can then be written

(λI - L'^(-1)L^(-1)B L'^(-1)L')x = 0. (24)
Premultiplication by L' then gives the equation

(λI - L^(-1)B L'^(-1)) L'x = 0. (25)

Since L^(-1)B L'^(-1) is symmetric, equation (25) is in the required form (22), the variable x having been replaced by L'x.
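The reduction of (21) to the standard form can be illustrated for second-order matrices (a sketch under names of our own choosing, using the closed-form eigenvalues of a symmetric 2 × 2 matrix):

```python
# Sketch of the Choleski device for the roots of (lambda A - B) x = 0:
# with A = L L', the roots are the ordinary eigenvalues of the
# symmetric matrix C = L^{-1} B L'^{-1}.  Worked for 2x2 matrices only.

import math

def choleski_2x2(A):
    l11 = math.sqrt(A[0][0])
    l21 = A[1][0] / l11
    l22 = math.sqrt(A[1][1] - l21 ** 2)
    return [[l11, 0.0], [l21, l22]]

def inv_lower_2x2(L):
    a, c, d = L[0][0], L[1][0], L[1][1]
    return [[1.0 / a, 0.0], [-c / (a * d), 1.0 / d]]

def matmul_2x2(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def generalized_roots_2x2(A, B):
    Li = inv_lower_2x2(choleski_2x2(A))
    Lit = [[Li[0][0], Li[1][0]], [Li[0][1], Li[1][1]]]  # (L^{-1})' = L'^{-1}
    C = matmul_2x2(matmul_2x2(Li, B), Lit)              # C = L^{-1} B L'^{-1}
    # eigenvalues of the symmetric matrix [[p, q], [q, r]]
    p, q, r = C[0][0], C[0][1], C[1][1]
    mean = (p + r) / 2.0
    disc = math.sqrt(((p - r) / 2.0) ** 2 + q * q)
    return sorted([mean - disc, mean + disc])
```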
4. Numerical example

The example solved in this section is the one chosen by Morris (3) as an illustration of his escalator method. The complete working is given here for each of the three methods of elimination described in § 2, and for the vector and Choleski methods of § 3. The inverse of the matrix is also obtained by Choleski's method. The computational layouts should be self-explanatory, but the following points may be noted:
1. The original coefficients were given to six decimals. We have worked throughout with eight decimals.
2. In the elimination methods the pivots (underlined) are always on the principal diagonal, so that a good deal of symmetry is maintained. This is always the case with a positive definite, symmetric matrix.
3. The three elimination methods give almost exactly the same solution. If the substituting-back process is to be avoided, it is our opinion that the Jordan method is preferable to the 'below-the-line' method.
4. A check on any substituting-back process, particularly desirable in the Choleski method, is obtainable by a method analogous to the various summation checks mentioned previously. Thus if the solution y has been obtained to the equation

Ly = b,

which occurs in the Choleski method, a good check is furnished by recording the row s, whose elements are the sums of the columns of L, and forming the product sy. This should then be equal to the sum of the elements of b. All sums recorded in the computations are required for some check of this kind.
5. In the elimination method the last pivot has lost three of the original six significant figures. Thus in accordance with earlier remarks, if the coefficients are known accurately to only six figures, three figures only are reliable in the answers.
6. If the original coefficients are exact, answers can be obtained to any degree of accuracy, and the question arises as to which of the methods has given the best result. Conclusive evidence is not furnished by one example. In particular, it is difficult to be equally 'fair' to all methods: this is not necessarily achieved by retaining the same number of decimals throughout. However, the accurate result has been obtained, and the
errors in the approximate solutions are given, for what they are worth, in the following table, with the results given by Morris included for comparison.
[Table: error in the 5th decimal of x_1, ..., x_6 for the Elimination, Vector, Choleski, and Morris solutions.]
Our experience is that Choleski's method is quicker and more accurate than the vector method, for symmetric matrices. Both are much superior to pivotal condensation, which is less accurate and involves much more work. For unsymmetrical matrices the vector and Choleski methods cannot be used in their present form, but it is likely that the modified Choleski method given by Turing (4) is still the best both for speed and accuracy. The usefulness of the elimination process is that it can be 'mechanized' and adapted to Hollerith equipment, as described in the next section.
7. In one particular the above table is unfair to the vector method. As already stated, we are prepared to accept non-zero off-diagonal terms in equations (7); that is, we do not insist that quantities like x_r'Ax_s should be exactly zero to the number of figures retained. We do, however, want to know these quantities accurately; they are subject to rounding-off errors, as is seen in the computation, where x_r'Ax_s is not always equal to x_s'Ax_r. This accuracy can be achieved by the simple process of recording the Ax_r alone to one or more extra figures (depending on the magnitude of the vectors x_r), an almost trivial increase in the amount of work. If one extra figure is kept in the present example, the errors, given in the following table, are seen to be very little worse than those of the Choleski method.
Error in 5th decimal of   Vector method
x1                        3
x2                        2
x3                        9
x4                        4
x5                        5
x6                        2
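The vector method discussed above may itself be sketched in modern terms: vectors x_r are made mutually conjugate with respect to A by a Gram-Schmidt-like process, so that x_r'Ax_s = 0 for r != s; in the new variables the matrix is diagonal, and the solution is assembled as a sum of simple multiples of the x_r. The data below are our own illustration, not the paper's example.

```python
import numpy as np

# A small symmetric positive definite system (illustrative data).
A = np.array([[4.0, 2.0, 1.0],
              [2.0, 5.0, 3.0],
              [1.0, 3.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])

xs = []
for r in range(3):
    x = np.eye(3)[r]
    for s in xs:                       # make x conjugate to the earlier vectors
        x = x - (s @ A @ x) / (s @ A @ s) * s
    xs.append(x)

# With x_r'Ax_s = 0 (r != s) the matrix is diagonal in the new variables,
# and the solution of A v = b is obtained term by term.
v = sum((x @ b) / (x @ A @ x) * x for x in xs)

assert np.allclose(A @ v, b)
```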
[Table I. Elimination and back-substitution: numerical working of the six-equation example.]
[Table II. Elimination (Jordan): numerical working of the six-equation example.]
[Table III. Elimination ('below the line'): numerical working of the six-equation example, ending with the solutions x1, ..., x6.]
[Table IV. Vector method: numerical working of the six-equation example.]
[Table V. Choleski: numerical working of the six-equation example, including the computation of A^-1.]
5. Solution by Hollerith equipment

The present Hollerith equipment appears to be somewhat unsuitable for the computation of scalar products, an essential part of methods such as that of Choleski, and for this reason the elimination methods are more likely than others to be readily adaptable.
Several methods for the use of punched cards in solving simultaneous equations have been suggested by various writers (refs. 8 and 9). As far as we can discover, however, these methods have had only limited success in practice. The theoretical mechanization of the elimination method is fairly straightforward, but difficulties peculiar to digital machines have to be overcome before the method can be said to be practicable. These difficulties arise through the restricted capacity of the Hollerith multiplying punch, and the consequent need for great care in fixing the position of the decimal point. This trouble is particularly acute in the case of ill-conditioned equations.

The methods by which these difficulties were overcome are too lengthy to receive detailed treatment here. A full account, together with a detailed description of the complete process of solution, can be obtained from the authors at the National Physical Laboratory. The Hollerith equipment used consisted of a card punch, an eight-by-eight decimal multiplying punch, a rolling total tabulator, a reproducer summary punch, and a collator. A special feature of the method is that the tabulator and multiplying punch work simultaneously throughout a large part of the computation, effecting a considerable saving in time.
Several sets of eighteen very ill-conditioned equations have been solved by this method, with a time-saving factor of about eight compared with computation on a desk machine.
6. Note on error analysis

An analysis of the magnitude of the errors arising from rounding off has recently been carried out by A. M. Turing (4), for some methods of solving linear equations and inverting matrices. In all those cases where a conveniently simple error estimate can be given, this error does not exceed n (the order of the matrix) times the error which might be caused by altering the coefficients in the matrix to be inverted by the maximum rounding-off error. This in effect means that the errors only become large when the inverse of the matrix has large coefficients. In the case of the Jordan method, for example, the maximum error E in the inverse is given by

|E| = e M^2 n^2 / 3,

where e is the rounding-off error in each calculation, and M is the largest coefficient in the inverse.
The statistical error is very much less than this. The proportionality to the square of the inverse coefficients appears natural when we consider the case of finding the reciprocal of a scalar:

1/x - 1/(x+e) = e/x(x+e).

Hotelling (7) has implied that the building-up error in the elimination methods is so large that these methods can be used only with extreme caution. Turing's analysis shows that this estimate can be reached only in the most exceptional circumstances, and in our practical experience on matrices of orders up to the twentieth, some of them very ill-conditioned, the errors were in fact quite small.
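The scalar reciprocal example above is easily checked numerically (a small illustration of our own; the values of x and e are arbitrary), and it shows the change in the reciprocal to be of the order of e times the square of the reciprocal:

```python
# Verify 1/x - 1/(x+e) = e/(x(x+e)) for sample values.
x, e = 0.01, 1e-6

lhs = 1.0 / x - 1.0 / (x + e)
rhs = e / (x * (x + e))

assert abs(lhs - rhs) < 1e-12            # the identity holds
assert abs(lhs - e / x**2) < 1e-3 * lhs  # magnitude is about e * (1/x)^2
```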
The work described here has been carried out as part of the Research programme of the National Physical Laboratory and this paper is published by permission of the Director of the Laboratory.
REFERENCES
1. J. MORRIS, 'A successive approximation process for solving simultaneous linear equations', A.R.C. R. & M. No. 1711 (1936).
2. R. V. SOUTHWELL, Relaxation Methods in Engineering Science (Oxford, 1940).
3. J. MORRIS, 'An escalator process for the solution of linear simultaneous equations', Phil. Mag. Series 7, 37 (1946), 106.
4. A. M. TURING, 'Rounding-off errors in matrix processes', Quart. J. Mech. and Applied Math., in the press.
5. A. C. AITKEN, 'Studies in practical mathematics. I. The evaluation with applications of a certain triple product matrix', Proc. Roy. Soc. Edin. 57 (1936-7).
6. FRAZER, DUNCAN, and COLLAR, Elementary Matrices (Cambridge Univ. Press).
7. H. HOTELLING, 'Some new methods in matrix calculation', Ann. Math. Stat. 14, No. 1 (March 1943).
8. R. J. SCHMIDT and R. A. FAIRTHORNE, 'Some comments on the use of Hollerith machines for scientific calculations', R.A.E. Report A.D. 3153 (September 1940).
9. H. O. HARTLEY, 'The application of some commercial calculating machines to certain statistical calculations', Supplement to Journal Roy. Stat. Soc. 8, No. 2 (1946).