
Some Computational Properties of

Rotation Representations

Christopher Brown Department of Computer Science

University of Rochester, Rochester, NY 14627

August 23, 1989

Abstract

There are four main parameterizations of the rotation group SO(3). Two of them (rotation angle and axis, and the closely related quaternion components) as well as the matrix form of rotation representation are particularly of interest in computer vision, graphics, and robotics. Some of their computational properties are explored here. Operation counts are given for primitive operations of normalization, conversion, and application of rotations, as well as for sequences of rotations. The numerical accuracy of vector rotation calculations is investigated for some common tasks like iterated application of the same or different rotations. The measure of accuracy is taken to be the length and direction of the resulting rotated vector. Some analytical analysis appears, but most conclusions are based on empirical tests at artificially reduced numerical precision.

Acknowledgments

The work was supported by the DARPA U.S. Army Engineering Topographic Laboratories Grant DACA76-85-C-0001, the Air Force Systems Command (RADC, Griffiss AFB, NY) and Air Force OSR Contract F30602-85-C-0008, which supports the Northeast Artificial Intelligence Consortium. The University of Rochester College of Arts and Sciences provided sabbatical leave support. The assistance of Mike Brady, Andrew Blake, Joe Mundy, Andrew Zisserman, and the staff of the Robotics Research Group of the University of Oxford is gratefully acknowledged.


REPORT DOCUMENTATION PAGE

Report number: 303
Title: Some Computational Properties of Rotation Representations
Type of report: Technical report
Author: Christopher M. Brown
Contract or grant number: DACA76-85-C-0001
Performing organization: Computer Science Department, 734 Computer Studies Bldg., University of Rochester, Rochester, NY 14627
Controlling office: Def. Adv. Res. Proj. Agency, 1400 Wilson Blvd., Arlington, VA 22209
Report date: August 1989; number of pages: 36
Monitoring agency: U.S. Army ETL, Fort Belvoir, VA 22060; security classification: unclassified
Distribution statement: Distribution of this document is unlimited
Supplementary notes: None
Key words: quaternions, rotation representations, operation counts, numerical accuracy
Abstract: (repeats the abstract above)

1 Rotation Parameters and Representations

One of the better treatments of the representation of the rotation group SO(3) (the special orthogonal group of order 3) is [1], and there is also a treatment in [10]. This section will briefly define the main rotation parameterizations and identify two as the most useful for computer vision. Rotations can be visualized in their alias or alibi aspect. In their alias aspect they rotate the coordinate system in which points are described; in the alibi aspect the descriptive coordinate system is fixed and the points are, as it were, physically rotated with respect to it. Where there is necessity, we assume henceforth that we are talking about the alibi aspect. There are four main parameterizations.

Conical: Four numbers, specifying the 3-vector direction n of a rotation axis and the angle φ through which to rotate. Since n can be made a unit vector, these four numbers are seen to be redundant.

Euler Angles: Three angles (θ, φ, ψ). Usually they are interpreted as the sequential rotation of space (including the coordinate axes), first about the Z axis by ψ, then about the new Y axis by φ, and last about the newest Z axis by θ. The idea extends to other choices of axes. Euler angles are a nonredundant representation.

Cayley-Klein: Two complex numbers (α, β), actually parameterizing the SU(2) (Special Unitary, order 2) group.

Euler-Rodrigues: Four numbers, specifying a real λ and vector Λ, related to the conical parameters by [λ, Λ] = [cos(φ/2), sin(φ/2) n].

Probably the most familiar representation for SO(3) in the computer vision, robotics, and graphics communities is that of orthonormal 3 × 3 matrices, through which vector rotation is accomplished by matrix (that is, matrix - vector) multiplication and rotation composition is simply multiplication of the rotation matrices. Such matrices are more or less easily constructed from the rotation parameters listed here, but their nine redundant numbers are not a satisfactory rotation parameterization.

Euler angles are common in the robotics literature, probably because by definition they provide a constructive description of how to achieve a particular general rotation through specific rotations about two axes. The elegant use of only two axes has practical consequences - for example, a spacecraft can be arbitrarily oriented using only two rigidly mounted thrusters. However, there are severe computational problems (ambiguities, conventions) with Euler angles: the composition of rotations is not easily stated using them, the rotation they represent is not obvious by inspection, and there are deeper mathematical infelicities, some being addressed by specifying rotations about all three axes, not just two, and some perhaps not being reparable.

Cayley-Klein parameters are usually manipulated in the form of 2 × 2 matrices, where they appear redundantly in conjugate form, and describe rotations in terms of the stereographic projection of 3-space onto a plane using homogeneous coordinates. They are isomorphic to the Euler-Rodrigues parameters, but their geometrical interpretation is difficult to visualize.

Euler-Rodrigues parameters are given by a particular semantics for normalized quaternions. After normalization, the four quaternion components are the Euler-Rodrigues parameters. Quaternions were invented by Hamilton [7] as an extension of complex numbers (the goal was an algebra of 3-vectors allowing multiplication and division), but were immediately (within a day) recognized by him as being intimately related to rotations. It is said that every good idea is discovered possibly several times before ultimately being invented by someone else. In this case quaternions and their relation to rotations were earlier discovered by Rodrigues [18]. He also discovered the construction proving that the composition of two rotations is also a rotation, later invented by Euler and today bearing Euler's name. More on this interesting story appears in [1].

The most relevant representations for computer vision are conical parameters, quaternions (Euler-Rodrigues parameters), and of course 3 × 3 orthonormal matrices.

The topology of SO(3) is of interest, as well as its parameterization. The first step in its visualization is to note that the four Euler-Rodrigues parameters form a unit 4-vector. As such they define points lying in the 3-sphere (the unit sphere in 4-space), and every point on the spherical surface defines a rotation. The next intuition is that a rotation (φ, n) is equivalent to the rotation (−φ, −n). Thus each rotation is represented twice on the surface of the sphere, and to remove this ambiguity and achieve a one-to-one mapping of rotations onto 4-space points, opposite points (usually opposite hemispheres and half-equators) on the sphere must be identified. This turns the surface into a cross-cap, or projective plane (Fig. 1). Another isomorphism relates the rotations to points in a solid parameter ball in 3-space. The radial distance of a point (from 0 to π) represents the magnitude of the rotation angle, and its direction from the origin gives the axis of rotation. In this case just the skin of one hemisphere and half the equator must be identified with the other half, and all null rotations (in whatever direction) map to the origin.

Many of the infelicities surrounding calculations with rotations arise from the topology of SO(3). One consequence is that there are two inequivalent paths from any rotation to any other, not unlike the paths between points on a cylinder (Fig. 2). The paths cannot be shrunk to one another (consider the two paths from a point to itself, one that goes all the way around the cylinder and one that does not) and clearly have different properties. Consider the path that minimizes time or energy expended to push a mass from point A through the points in the order B, C. It is easy to imagine that a better path exists in direction d2 than d1. Path planning (and interpolation) in rotation space must consider the topology [16].

Another aspect of the connectivity is that rotations must either be represented redundantly or discontinuously. Consider the visualization of the projective plane in which the upper hemisphere and half the equator of the 3-sphere is isomorphic with the rotations, and the lower hemisphere and half-equator is identified with diametrically opposite points. A smooth path crossing the equator moves directly from the rotation (for example) (π − ε, n) (in conic parameters) to the rotation (π − ε, −n), a striking discontinuity of reversal in the direction vector. If the entire 3-sphere is used to represent rotations, this problem disappears, with (π − ε, n) being followed by (π + ε, n). However, this same rotation is still also represented by the point (π − ε, −n), so the isomorphism is lost. The special cases created by potentially duplicated representation complicate algorithms and symbolic manipulations. For related reasons, representations of straight lines are plagued with the same problems (an observation due to Joe Mundy).

Figure 1 About Here

Figure 1: By identifying (gluing together) parallel sides in the directions indicated, various more or less familiar surfaces can be formed. The Klein bottle and projective plane cannot be physically realized in three-space. The SO(3) topology is that of a projective plane.

Figure 2 About Here

Figure 2: One consequence of SO(3) being non-simply connected is the inequivalence between the paths A to B in direction d1 and A to B in direction d2. Any two paths between two points on a simply-connected surface can be shrunk to be equivalent, but that is not the case on the cylinder shown here or on the projective plane.

2 Basic Quaternion and Matrix Formulae

2.1 Quaternion Representation

Quaternions are well discussed in the mathematical and robotics literature. An accessible reference is [9], though there are several others. This brief outline is restricted to those properties most directly related to our central questions. Quaternions embody the Euler-Rodrigues parameterization of rotations directly, and thus are closely related to the conic representation of rotation.

If the Euler-Rodrigues parameters of a rotation are (λ, Λ), the quaternion that represents the rotation is a 4-tuple, best thought of as a scalar and a vector part:

Q = [λ, Λ]    (1)

Actually the vector part is a pseudovector, transforming to itself under inversion like the cross product, but that distinction will not bother us.

If a rotation in terms of its conic parameters is R(φ, n), the corresponding quaternion is

Q = [cos(φ/2), sin(φ/2) n]    (2)

The multiplication rule for quaternions is

[λ1, Λ1][λ2, Λ2] = [λ1λ2 − Λ1 · Λ2, λ1Λ2 + λ2Λ1 + Λ1 × Λ2]    (3)


A real quaternion is of the form [a, 0] and multiplies like a real number. A pure quaternion has the form [0, A]. A unit quaternion is a pure quaternion [0, n] with n a unit vector. The conjugate of a quaternion A = [a, A] is the quaternion A* = [a, −A]. The norm of a quaternion A is the square root of AA*, which is seen to be the square root of a² + A · A. A quaternion of unit norm is a normalized quaternion, and it is normalized quaternions that represent rotations. A quaternion [a, A] = [a, An] can satisfy the normalization condition if a = cos(α), A = sin(α). This gives a Hamilton quaternion, or versor, which we shall not use. Better for our purposes is to write a = cos(φ/2), A = sin(φ/2). The conjugate inverts the rotation axis and thus denotes the inverse rotation. The inverse quaternion of A is A⁻¹ such that AA⁻¹ = [1, 0]. For a normalized quaternion the inverse is the conjugate. The product of two normalized quaternions is a normalized quaternion, a property we desire for a rotation representation. Under the Rodrigues parameterization a normalized quaternion represents a rotation, quaternion multiplication corresponds to composition of rotations, and quaternion inverse is the same as quaternion conjugation.
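These definitions translate directly into code. The following is a minimal sketch (the names qmul, qconj, and qnorm are illustrative, not from the paper) of the quaternion product of eq (3), the conjugate, and the norm, using the scalar-first layout [λ, Λx, Λy, Λz]:

```python
import math

def qmul(a, b):
    # eq (3): [s1, v1][s2, v2] = [s1 s2 - v1.v2, s1 v2 + s2 v1 + v1 x v2]
    s1, v1 = a[0], a[1:]
    s2, v2 = b[0], b[1:]
    s = s1 * s2 - sum(x * y for x, y in zip(v1, v2))
    lin = [s1 * v2[i] + s2 * v1[i] for i in range(3)]
    crs = [v1[1] * v2[2] - v1[2] * v2[1],
           v1[2] * v2[0] - v1[0] * v2[2],
           v1[0] * v2[1] - v1[1] * v2[0]]
    return [s] + [lin[i] + crs[i] for i in range(3)]

def qconj(a):
    # conjugate negates the vector part; for a normalized quaternion
    # this is also the inverse rotation
    return [a[0], -a[1], -a[2], -a[3]]

def qnorm(a):
    return math.sqrt(sum(x * x for x in a))
```

As the text notes, the product of two normalized quaternions is again normalized, and a normalized quaternion times its conjugate is the identity [1, 0].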

One way to apply a rotation represented by a quaternion to a vector is to represent the vector by a pure quaternion and to conjugate it with the rotation quaternion. That is, to rotate a given vector r by A = [cos(φ/2), sin(φ/2) n], form a pure quaternion r = [0, r]; the resulting vector is the vector part of the pure quaternion

r′ = A r A*    (4)

From the quaternion multiplication rule the conical transformation follows, though it can also be derived in other ways. It describes the effect of rotation upon a vector r in terms of the conical rotation parameters:

r′ = cos φ r + sin φ (n × r) + (1 − cos φ)(n · r) n    (5)

For computational purposes, neither conjugation nor the conical transform is as quick as the clever formula (note the repeated subexpression) given in [9], arising from applying vector identities to eq (5). Given a vector r and the normalized quaternion representation [λ, Λ], the rotated vector is

r′ = r + 2(λ(Λ × r) − (Λ × r) × Λ)    (6)
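Eq (6) is straightforward to implement; a sketch follows, with the repeated subexpression Λ × r computed once (the function names are illustrative):

```python
import math

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def rotate(q, r):
    # eq (6): r' = r + 2 (lam (L x r) - (L x r) x L), with q = [lam, Lx, Ly, Lz]
    lam, L = q[0], q[1:]
    t = cross(L, r)          # the repeated subexpression L x r
    u = cross(t, L)
    return [r[i] + 2.0 * (lam * t[i] - u[i]) for i in range(3)]
```

For example, the quaternion [cos(π/4), 0, 0, sin(π/4)] (a 90-degree rotation about Z) carries (1, 0, 0) to (0, 1, 0).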

The matrix representation of a rotation represented by conic parameters (φ, n) can be written explicitly in terms of those parameters, and deriving it is a well known exercise for robotics students. It can be found, for example, in [1]. For computational purposes it is easier to use the following result. The matrix corresponding to the quaternion rotation representation [λ, Λx, Λy, Λz] is

    [ 2λ² − 1 + 2Λx²    2(ΛxΛy − λΛz)     2(ΛxΛz + λΛy)  ]
    [ 2(ΛxΛy + λΛz)     2λ² − 1 + 2Λy²    2(ΛyΛz − λΛx)  ]    (7)
    [ 2(ΛxΛz − λΛy)     2(ΛyΛz + λΛx)     2λ² − 1 + 2Λz² ]


2.2 Matrix Representation

This representation is the most familiar to graphics, computer vision, and robotics workers, and is given correspondingly short shrift here. A rotation is represented by an orthonormal 3 × 3 matrix. This rotation component occupies the upper left hand corner of the usual 4 × 4 homogeneous transform matrix for three dimensional points expressed as homogeneous 4-vectors. The matrix representation is highly redundant, representing as it does three quantities with nine numbers.

In this section capital letters are matrix variables. The primitive rotation matrices are usually considered to be those for rotation about coordinate axes. For example, to rotate a (column) vector r by φ about the X axis we have the familiar

         [ 1    0       0    ]
    r′ = [ 0  cos φ  −sin φ  ] r    (8)
         [ 0  sin φ   cos φ  ]

with related simple matrices for the other two axes. The composition of rotations A, B, C is written r′ = CBAr, which is interpreted either as performing rotations A, B, C in that order, each being specified with respect to a fixed (laboratory) frame, or performing them in the reverse order C, B, A, with each specified in the coordinates that result from applying the preceding rotations to the preceding coordinate frame, starting with C acting on the laboratory frame.

Conversion methods from other rotation parameterizations (like Euler angles) to the matrix representation appear in [1]. Generally the matrices that result are of a moderate formal complexity, but with pleasing symmetries of form arising from the redundancy in the matrix. In this work all we shall need is eq (7).

Infinitesimal rotations are interesting. The skew symmetric component of a rotation matrix A is S = A − Aᵀ, and it turns out that

    A = exp(S) = I + S + (1/2)S² + ...    (9)

A little further work establishes that if the conical representation of a rotation is (φ, nx, ny, nz) and

        [  0   −nz   ny ]
    Z = [  nz   0   −nx ]    (10)
        [ −ny   nx   0  ]

then A = exp(φZ) = I + (sin φ)Z + (1 − cos φ)Z², from which the explicit form of the 3 × 3 matrix corresponding to the conic representation can be found.

Writing eq (10) as

         [ 0  0  0 ]        [  0  0  1 ]        [ 0 −1  0 ]
  Z = nx [ 0  0 −1 ] + ny [  0  0  0 ] + nz [ 1  0  0 ]    (11)
         [ 0  1  0 ]        [ −1  0  0 ]        [ 0  0  0 ]


shows that infinitesimal rotations can be expressed elegantly in terms of orthogonal infinitesimal rotations. It is of course no accident that the matrix weighted by nx in eq (11) is the derivative of the matrix in eq (8) evaluated at φ = 0. Further, it will be seen that if v′ is the vector arising from rotating vector v by the rotation (δφ, n), then each component of v′ is just the sum of various weighted components of v. Thus infinitesimal rotations, unlike general ones, commute.

3 Operation Counts

Given the speed and word length of modern computers, including those used in embedded applications like controllers, concern about operation counts and numerical accuracy of rotational operations may well be irrelevant. However, there is always a bottleneck somewhere, and in many applications, such as control, graphics, and robotics, it can still happen that computation of rotations is time consuming. Small repeated incremental rotations are especially useful in graphics and in some robot simulations [6,23]. In this case the infinitesimal rotations of the last section can be used if some care is exercised. To compute the new x′ and y′ that result from a small rotation θ around the Z axis, use

x′ = x − y sin θ,   y′ = y + x′ sin θ    (12)

The use of x′ in the second equation maintains normalization (unit determinant of the corresponding rotation matrix). For graphics, cumulative error can be fought by replacing the rotated data by the original data when the total rotation totals 2π [6].
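Eq (12) in code, as a sketch (the function name is illustrative); note that the updated x′ is deliberately reused when computing y′, which is what keeps the determinant of the corresponding 2 × 2 update at exactly one:

```python
import math

def small_rotate_z(x, y, theta):
    # eq (12): incremental rotation by a small angle theta about Z.
    # Using x1 (not x) in the second line gives the update matrix
    # [[1, -s], [s, 1 - s^2]], whose determinant is exactly 1.
    s = math.sin(theta)
    x1 = x - y * s
    y1 = y + x1 * s
    return x1, y1
```

Iterating 1000 steps of 2π/1000 starting from (1, 0) returns very nearly to (1, 0), with the radius staying close to 1 throughout.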

Operation counts of course depend not just on the representation but on the algorithm implemented. As the application gets more complex, it becomes increasingly likely that ameliorations in the form of cancellations, appropriate data structures, etc. will arise. Even simple operations sometimes offer surprising scope for ingenuity (compare eqs (5) and (6)). Thus it is not clear that listing the characteristics of a few primitive operations is guaranteed to be helpful (I shall do it nevertheless). Realistic prediction of savings possible with one representation or the other may require detailed algorithmic analysis. Taylor [20] works out operation counts for nine computations central to his straight line manipulator trajectory task. In these reasonably complex calculations (up to 112 adds and 156 multiplies in one instance), quaternions often cut down the number of operations by a factor of 1.5 to 2. Taylor also works out operation counts for a number of basic rotation and vector transformation operations. This section mostly makes explicit the table appearing in Taylor's chapter detailing those latter results. Section 4.9 gives counts for iterated (identical and different) rotations.

3.1 Basic Operations

Recall the representations at issue:

Conic: (φ, n) - axis n, angle φ.

Quaternion: [λ, Λ], with λ = cos(φ/2), Λ = sin(φ/2) n.

Matrix: a 3 × 3 matrix.

For 3-vectors, if multiplication includes division, we have the following:

Vector Addition: 3 additions.

Scalar-Vector Product: 3 multiplications.

Dot Product: 3 multiplications, 2 additions.

Cross Product: 6 multiplications, 3 additions.

Vector Normalization: 6 multiplications, 2 additions, 1 square root.

3.2 Conversions

Conic to Quaternion: One division to get φ/2, one sine and one cosine operation, three multiplications to scale n. Total: 4 multiplications, 1 sine, 1 cosine.

Quaternion to Conic: Although it looks like one arctangent is all that is needed here after dividing the quaternion through by λ, that seems to lose track of the quadrants. Best seems simply finding φ/2 by an arccosine and a multiplication by 2.0 to get φ, then computing sin(φ/2) and dividing the vector part by that. Total: 1 arccosine, 1 sine, 4 multiplications. Special care is needed here for small (especially zero) values of φ.
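The two conversions just described might be sketched as follows (the names and the small-angle guard value are illustrative; the guard reflects the "special care" noted above):

```python
import math

def conic_to_quat(phi, n):
    # eq (2): [cos(phi/2), sin(phi/2) n] -- 1 division, 1 sin, 1 cos, 3 multiplies
    h = phi / 2.0
    s = math.sin(h)
    return [math.cos(h), s * n[0], s * n[1], s * n[2]]

def quat_to_conic(q, eps=1e-12):
    # arccos recovers phi/2 unambiguously in [0, pi]; then divide the
    # vector part by sin(phi/2) to recover the axis
    half = math.acos(max(-1.0, min(1.0, q[0])))
    s = math.sin(half)
    if s < eps:
        # near-null rotation: the axis is arbitrary (here, X by convention)
        return 2.0 * half, [1.0, 0.0, 0.0]
    return 2.0 * half, [q[1] / s, q[2] / s, q[3] / s]
```

The round trip conic → quaternion → conic reproduces (φ, n) for φ in (0, 2π).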

Quaternion to Matrix: Eq (7) is the basic tool. Although the matrix is orthonormal (thus one can compute a third row from the other two by vector cross product), that constraint does not help in this case. Computing all the necessary products takes 13 multiplications, after which 8 additions generate the first two rows. The last row costs only another 5 additions working explicitly from eq (7), compared to the higher cost of a cross product. Total: 13 multiplications, 13 additions.

Conic to Matrix: One way to proceed is to compute the matrix whose elements are expressed in terms of the conic parameters (referred to in Section 2). Among other things it requires more trigonometric evaluations, so best seems the composition of conic to quaternion to matrix conversions, for a total of 17 multiplications, 13 additions, 1 sine, 1 cosine.

Matrix to Quaternion: The trace of the matrix in eq (7) is seen to be 4λ² − 1, and differences of symmetric off-diagonal elements yield quantities of the form 2λΛz, etc. Total: 5 multiplications, 6 additions, 1 square root.
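A sketch of this matrix-to-quaternion recipe (illustrative name; note it assumes λ is not close to zero, i.e. the rotation angle is not near π):

```python
import math

def matrix_to_quat(M):
    # trace of eq (7) is 4 lam^2 - 1, so lam = sqrt((trace + 1) / 4);
    # off-diagonal differences such as M[1][0] - M[0][1] = 4 lam z
    # recover the vector part
    lam = 0.5 * math.sqrt(M[0][0] + M[1][1] + M[2][2] + 1.0)
    return [lam,
            (M[2][1] - M[1][2]) / (4.0 * lam),
            (M[0][2] - M[2][0]) / (4.0 * lam),
            (M[1][0] - M[0][1]) / (4.0 * lam)]
```

For the 90-degree Z rotation matrix this yields [cos(π/4), 0, 0, sin(π/4)], as expected from eq (2).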

Matrix to Conic: Easiest seems matrix to quaternion to conic.

3.3 Rotations

Vector by Matrix: Three dot products. Total: 9 multiplications, 6 additions.


Vector by Quaternion: Implement eq (6). Total: 18 multiplications, 12 additions.

Compose Rotations (Matrix Product): Here it pays to exploit orthonormality rather than simply implementing matrix multiplication with nine dot products. Six dot products and one cross product are necessary. Total: 24 multiplications, 15 additions. This method uses only 2/3 of the information in the matrices and may be suspect for inexact representations (see below on normalization).

Compose Rotations (Quaternion Product): Implementing eq (3) directly takes one dot product, one cross product, two vector additions, two vector-scalar products, one multiply, and one add. Total: 16 multiplications, 12 additions.

Rotate around X, Y, or Z Axis: To do the work of eq (8) requires 1 sine, 1 cosine (unless they are already known), 4 multiplications, and 2 additions.

3.4 Normalization

In the conic representation one has little choice but to normalize the axis direction vector norm to unity and leave the rotation angle alone. In the quaternion and matrix representations the direction and magnitude are conflated in the parameters, and normalization of parameters affects them both.

Quaternion: Simply dividing all quaternion components through by the norm of the quaternion costs 8 multiplies, 3 additions, 1 square root.

Matrix: An exact rotation matrix is orthonormal. Three simple normalization algorithms suggest themselves. In the no crossproduct method, each matrix row (in this paragraph meaning row or column) is normalized to a unit vector. Cost: 12 multiplies, 6 additions, 3 square roots. No rows are guaranteed orthogonal. The one crossproduct method normalizes two rows and computes the third as their cross product. Cost: 18 multiplies, 7 additions, 2 square roots. This method does not guarantee the first two rows are orthogonal. The two crossproduct method performs the one crossproduct normalization, then recomputes the first row as the cross product of the second and third, say, for a total of 23 multiplies, 10 additions, and 2 square roots. Neither of the last two methods uses information from one matrix row. The best normalization procedure by definition should return the orthonormal matrix closest to the given one; the above procedures are approximations to this ideal. One idea might be to find the principal axis direction (the eigenvector corresponding to the largest real eigenvalue) and call that the normalized rotation axis. An exact rotation matrix M has one eigenvalue of unity and two conjugate complex ones. What then of the rotation angle? In the errorless matrix, cos(φ) = (Trace(M) − 1)/2, and the trace is invariant under the eigenvalue calculation. Thus the angle can be calculated from three of the matrix elements. But does this choice minimize any reasonable error in rotation space? Indeed, what error should be minimized? The work here uses the no- and one-crossproduct forms, tries some empirical investigation in Section 4.7, and leaves the large question of optimal normalization as a research issue. Exercise: find the eigenvalues and eigenvectors of the matrix in eq (8).
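The one crossproduct method might be sketched as follows (illustrative names; rows are taken as the vectors to be normalized):

```python
import math

def one_crossproduct_normalize(M):
    # normalize the first two rows, then replace the third by their
    # cross product; as noted, the first two rows are not guaranteed
    # orthogonal to each other
    def unit(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]
    r0, r1 = unit(M[0]), unit(M[1])
    r2 = [r0[1] * r1[2] - r0[2] * r1[1],
          r0[2] * r1[0] - r0[0] * r1[2],
          r0[0] * r1[1] - r0[1] * r1[0]]
    return [r0, r1, r2]
```

Applied to a rotation matrix whose rows have drifted in length, this restores unit rows and an exactly consistent third row, discarding whatever information the original third row carried.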


Operation              Quaternion            Matrix
Rep -> Conic           4x, 1 acos, 1 sin     9x, 6+, 1 acos, 1 sin, 1 sqrt
Conic -> Rep           4x, 1 sin, 1 cos      17x, 13+, 1 sin, 1 cos
Rep -> Other           13x, 13+              5x, 6+, 1 sqrt
Rep o vector           18x, 12+              9x, 6+
Rep o Rep              16x, 12+              24x, 15+
X, Y, or Z Axis Rot    8x, 4+                4x, 2+
Normalize              8x, 3+, 1 sqrt        18x, 7+, 2 sqrt

Table 1: Summary of quaternion and matrix operation counts. Rep means the representation of the column heading; Other means that of the other column heading. Applying a rotation to a vector or another rotation is denoted by o. Multiplications, additions, and square roots are denoted by x, +, sqrt. Trigonometric functions are denoted sin, acos, etc.

3.5 Summary and Discussion

The operation counts are summarized in Table 1. They generally agree with those in [20], with a few differences. Taylor seems to have a quaternion to matrix conversion that takes 9 multiplies and 19 adds (Exercise: find it), and he implements his matrix to quaternion conversion with 3 sines, 3 cosines, and one arctangent instead of a single square root. Taylor normalizes matrix representations by the one crossproduct method. The use of eq (6) for vector rotation saves 4 multiplies over Taylor's counts. In any event, the differences in costs of the two representations do not seem as dramatic for these basic calculations as for Taylor's more complex robotic trajectory calculations, but iterative operations reveal a difference of a factor of two to three in effort (see Section 4.9). Generally, it may be impossible to predict realistically the extent of the savings possible with one representation over the other without a detailed algorithmic analysis.

4 Numerical Accuracy

4.1 Vector Rotation Tasks, Methods, and Representations

The manipulations of quaternions, matrices, and vectors presented in Section 3 are based on simple building blocks of vector addition, scaling, and dot and cross products. The numbers involved in the representations are generally well behaved, with vector components and trigonometric functions lying in the interval [0, 1] and rotation magnitudes and inverse trigonometric functions often in the interval [−2π, 2π]. Neither representation requires algorithms with obvious numerically dangerous operations like dividing by small differences.

It is often alleged that one representation is more numerically accurate than the other. However, it is not easy to see how to quantify this claim analytically. The matrix representation is redundant, representing the three parameters with nine numbers. Does this mean it is more vulnerable to numerical error, or less? Given that the algorithms are straightforward and simple, do their numerical results depend more on their structure or on the data they operate on? The individual numbers (often components of unit 3-vectors) have nonlinear constraints that may affect the numerical performance. Perhaps some of the alleged differences are thought to arise in microcomputer implementations with small word size: how does accuracy vary with the number of significant digits?

In general, the lack of analytical leverage on these questions motivated the empirical study reported in this section. Herein we compare representations for several common vector rotation tasks: rotation of a vector, iterated identical rotations of a vector, and iterated different (random) rotations of a vector. Error is measured either as an error vector or as the amount that the resulting vector's length and direction differ from the correct value.

More precisely, a correct answer is determined by computation at full (double) precision. When the rotation task is to iterate the same rotation, the correct answer can be (and is) calculated for the final cumulative rotation rather than calculated by iteration. The application of a rotation to a vector is accomplished by two methods: by vector - matrix multiplication as in eq (8), and by Horn's quaternion - vector formula, eq (6). The iteration of a rotation is accomplished in three ways: quaternion - quaternion multiplication, matrix - matrix multiplication, and the quaternion - vector formula. Thus the results of iterated rotation are represented as a quaternion, matrix, and (cumulatively rotated) vector, respectively.

The experiments use a variable precision implementation of all the relevant real, vector, quaternion, and matrix operations, offering a runtime choice of the number of bits of mantissa precision and the approximation type (rounding or truncation). The most basic operations are approximation, which rounds or truncates the mantissa of a C-language double (double-precision real floating point number) to the required precision, and arithmetic operations that approximate the operands, operate (at full precision), and approximate the result. From these basics are built vector cross and dot products, quaternion and matrix products, normalization routines, etc.
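Such a reduced-precision approximation step might be sketched as follows (this is an illustrative reconstruction, not the paper's C implementation):

```python
import math

def approximate(x, bits, rounding=True):
    # keep only `bits` bits of mantissa of a double, by rounding or
    # truncation; frexp/ldexp split and rebuild x = m * 2**e
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)                # 0.5 <= |m| < 1
    scaled = m * (1 << bits)
    scaled = round(scaled) if rounding else math.trunc(scaled)
    return math.ldexp(scaled / (1 << bits), e)
```

An arithmetic operation at reduced precision then approximates its operands, computes at full precision, and approximates the result, as described above.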

Vectors are specified or created randomly as (x, y, z) unit vectors at the highest machine precision. Rotations are specified or created randomly in the conic (φ, n) (rotation angle, rotation axis direction) representation, also at highest precision. Random unit vectors for input or rotation axes are not uniformly distributed in orientation space, being generated by normalizing three uniform variates to produce a unit vector. For random (φ, n) rotations, a random vector is produced for n, and a fourth uniform variate is normalized to the range [-π, π]. Exercise: what do these distributions of vectors and rotations look like in orientation and rotation space?
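In code, the generation scheme just described might look like this sketch (assumed details not given in the text: the uniform variates are drawn from [-1, 1], and the angle variate is scaled to [-π, π]; function names are ours):

```python
import math
import random

def random_unit_vector(rng):
    # Normalize three uniform variates. Note: as the text points out, this
    # is NOT a uniform distribution over the sphere of directions.
    v = [rng.uniform(-1.0, 1.0) for _ in range(3)]
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def random_conic_rotation(rng):
    # A (phi, n) pair: a random axis, plus a fourth variate scaled to [-pi, pi].
    return rng.uniform(-1.0, 1.0) * math.pi, random_unit_vector(rng)
```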

The correct answer is computed at highest precision by applying a single rotation (by Nφ, for an N-long iteration of the same rotation φ) around the rotation axis to the input vector, using eq. (6). Then, working at the lower experimental precision, the conic input representation is converted to quaternion and matrix representation by eqs. (2) and (7). Next, the experimental rotational task is carried out at the experimental lower precision. Finally, the error analysis consists of highest-precision comparison of the low-precision result with the high-precision correct answer.

When a reduced-precision vector v′ is obtained, its error is determined by comparing it to the full-precision, non-iteratively rotated vector v. The error vector is v′ - v; the length error ΔL and direction error ΔD are

ΔL = | ||v′|| - ||v|| |   (13)

ΔD = arccos(v̂ · v̂′)   (14)

where v̂ is the unit vector in the direction of v.

In all the work here ||v|| = 1. Directional errors are reported in radians, length errors in units of the (unit) length of the vector to be transformed. The experiments are reported below in a rather discursive style. Along the way a few side issues are addressed, and conclusions are drawn that affect the later choices of experimental parameters.
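Expressed in code, the two error measures of eqs. (13) and (14) are simply (a Python sketch; function names are ours):

```python
import math

def length_error(v_approx, v_true):
    # Delta_L = | ||v'|| - ||v|| |   (eq. 13)
    norm = lambda v: math.sqrt(sum(c * c for c in v))
    return abs(norm(v_approx) - norm(v_true))

def direction_error(v_approx, v_true):
    # Delta_D = arccos(v_hat . v_hat')   (eq. 14), reported in radians
    norm = lambda v: math.sqrt(sum(c * c for c in v))
    dot = sum(a * b for a, b in zip(v_approx, v_true))
    c = dot / (norm(v_approx) * norm(v_true))
    return math.acos(max(-1.0, min(1.0, c)))  # clamp against roundoff
```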

4.2 Analytic Approaches

Let us begin by considering the conversion of conic (φ, n) rotation parameters to quaternion and matrix form. For simplicity we might assume that θ = φ/2 is known exactly. Assume that at n bits of precision a number x is represented to a maximum additive absolute error of δ = 2^-n, which is a relative error of δ/x. Ignoring second-order terms, the product of two imprecise numbers (x + δ)(y + ε) has additive relative error of δ/x + ε/y. When looking up or computing a function, the error to first order in f(x + δ) is δf′(x). Thus the error in computing the cosine of an errorful number x + δ is sin(x)δ. If, as in the implementation of this paper, the function is computed initially to high precision and then approximated, the further approximation may add another error of δ, for a total absolute error of (1 + sin(x))δ. Of course, the function computation could be set up to eliminate the last approximation error if the working precision were known in advance.

These considerations are a start at constructing formulae for the relative error in the conversion of conic to quaternion and matrix form via eqs. (2) and (7). Missing, however, is the constraint that n be a unit vector (that the matrix be orthonormal); without this constraint the predictions of errors will be pessimistic. Interacting with the constraint is the usual simplistic assumption that the worst happens in every operation (i.e., errors never cancel). This usual assumption makes analysis possible but potentially irrelevant. In fact, errors in the matrix and quaternion components after conversion depend closely on the values of the inputs, and empirically do not exhibit predictable data-independent structure.

Turning to a slightly grosser examination of the structure of rotation operations, consider the implementation used in the work below for the two competing equations for quaternion-mediated rotation (eq. (6)) and matrix-mediated rotation (multiplication of a vector by the matrix in eq. (7)). The quaternion formula is the sum of three vectors (the original and two cross products), whereas the matrix formula is a matrix-vector product. Going through the code for this operation, we find that the x′ component of the result vector (x′, y′, z′) after applying the quaternion formula for quaternion (λ, Λx, Λy, Λz) to the input vector (x, y, z) is computed by operations applied at their usual precedence in the following formula:

x′ = x + 2λ(Λy z - Λz y) - 2Λz(Λz x - Λx z) + 2Λy(Λx y - Λy x)   (15)

and similarly for the y′ and z′ components. Both (x, y, z) and the axis n are unit vectors, and λ is a cosine.

The operations implementing the corresponding component by the matrix multiplication formula are

x′ = M[0][0] x + M[0][1] y + M[0][2] z   (16)

and similarly for the y′ and z′ components.

There is nothing in these two formulae to predict the significant statistical difference (especially under the more predictable truncation approximation) in the performance of these two methods (see, e.g., Fig. 6).
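For concreteness, both competing implementations can be sketched as follows (Python; names are ours, and the quaternion-to-matrix conversion in `mat_from_quat` is the standard form, which may differ superficially from the paper's eq. (7)):

```python
import math

def quat_from_conic(phi, axis):
    # (lambda, Lambda) = (cos(phi/2), sin(phi/2) * n)   (cf. eq. (2))
    s = math.sin(phi / 2.0)
    return math.cos(phi / 2.0), [s * a for a in axis]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def rotate_quat(lam, L, v):
    # v' = v + 2*lam*(L x v) + 2*L x (L x v); expanding the x component
    # of this sum reproduces eq. (15).
    c1 = cross(L, v)
    c2 = cross(L, c1)
    return [v[i] + 2.0 * lam * c1[i] + 2.0 * c2[i] for i in range(3)]

def mat_from_quat(lam, L):
    # Standard rotation matrix in terms of the quaternion components.
    x, y, z = L
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - lam*z), 2*(x*z + lam*y)],
        [2*(x*y + lam*z), 1 - 2*(x*x + z*z), 2*(y*z - lam*x)],
        [2*(x*z - lam*y), 2*(y*z + lam*x), 1 - 2*(x*x + y*y)],
    ]

def rotate_mat(M, v):
    # Matrix-mediated rotation: one matrix-vector product, eq. (16) per row.
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]
```

At full precision the two methods agree; the statistical differences discussed above appear only when each intermediate operation is approximated.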

Pressing on, consider a still grosser description of the structure. Looking at eqs. (4) and (8), say that the error in quaternion and matrix (caused, perhaps, by truncation) is described as a scaling of the whole quaternion or matrix (by s = (1 - kε) for some constant k, if the truncation error is ε). In that case the matrix-transformed vector is in error by a factor of s, or an additive error of kε, and the quaternion-transformed vector by a factor of s² (because of the conjugation), or an additive error of 2kε (to first order). Iterated matrix or quaternion self-multiplication exponentially damps the individual matrix and quaternion elements, which otherwise vary sinusoidally as the iteration proceeds. Under iterated application of the scaled matrix, the direction of the resulting vector should be unaffected, but its length should shrink, decaying exponentially to zero.

In fact, this level of explanation suffices to explain the data resulting from a single rotation (Fig. 6): the quaternion errors are approximately twice the matrix errors for a single rotation under truncation. It also correctly predicts the behavior of matrix and quaternion elements under iteration of the same rotation (Fig. 3). It correctly predicts the form of error for the matrix transformation under iteration of the same rotation (Fig. 8), and forms part of the explanation for quaternion transformation error (see Section 4.4). Exercise: in Fig. 3 the period of the matrix element is half that of the quaternion element. This result is expected from inspection of eqs. (8) and (2), but looks inconsistent with eq. (7). Is it?
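The scaling argument can be checked numerically: shrinking a unit quaternion by a factor (1 - ε) shrinks the conjugation-rotated vector by exactly (1 - ε)², an additive error of about 2ε to first order, twice what a matrix scaled by (1 - ε) would give. A small Python check (the particular angle and ε are arbitrary choices of ours):

```python
import math

def qmul(p, q):
    # Hamilton product of quaternions (w, x, y, z).
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def rotate_conj(q, v):
    # v' = q v q*   (the conjugation form, cf. eq. (4))
    w, x, y, z = q
    r = qmul(qmul(q, (0.0,) + tuple(v)), (w, -x, -y, -z))
    return list(r[1:])

eps = 1e-3
half = math.pi / 8
q = (math.cos(half), 0.0, 0.0, math.sin(half))    # rotation about z
qs = tuple((1 - eps) * c for c in q)              # uniformly shrunk quaternion
v = [1.0, 0.0, 0.0]
shrink = math.sqrt(sum(c * c for c in rotate_conj(qs, v)))
# length of the result scales as (1 - eps)**2, i.e. additive error ~ 2*eps
```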

4.3 Single Rotations

First, some sample single rotations were applied:

v′ = R ∘ v   (17)

Figure 3 About Here

Figure 3: The components of the quaternion and matrix are scaled and shifted sinusoids. Here, representative quaternion and matrix element amplitudes decay exponentially through time under iterative rotation with truncation. Under rounding the cumulative decay is negligible. Dotted line - matrix element [0][0]; solid line - quaternion element Λx.

Figure 4 About Here

Figure 4: Rotating the unit vector in direction [1, 1, -1] around the axis [1, 1, 1] by varying amounts: (a) 0.1 rad, (b) 1.0 rad, (c) 1.7 rad. The errors are plotted logarithmically against the number of bits of mantissa precision, under truncation. Hollow and dark squares - direction error with quaternion and matrix representations. Star and cross - length error with quaternion and matrix representations. Variations in error magnitude and its rank ordering result from varying rotation magnitude. The largest error at 22 bits precision is 0.0001 (the smallest representable number is about 10^-9).

Fig. 4 shows the results, comparing the conic (eq. (6)) with matrix rotation computation under truncation. In general, increasing the rotation magnitude does not increase error; the error magnitudes are not predictably ordered, and the error magnitude is at worst 10^-6 with 22 bits of mantissa precision.

Next, 100 individual random rotation - random vector pairs were generated, with the rotation applied to the vector at a single precision (here, as throughout the paper until Section 4.8, 10 bits) under both truncation (case RaTr) and rounding (case RaRd). (Statistics for several cases are collected in Table 2 in Section 4.6.)

The results are shown in two different styles in Figs. 5, 6, and 7. Fig. 5 displays length and direction errors of the 100 trials under truncation, indexed along the X axis although they are each independent. As predicted in Section 4.2, the matrix representation outperforms quaternions by a factor of about two. The length errors under truncation show the nonzero mean one would expect. The same data are presented in Fig. 6 as two projections of the three-dimensional error vectors, all oriented so that the true answer lies along the Z axis, facing down with its point at the origin, aiming in the -Z direction. The X-Y projection shows how the direction errors arise, and the X-Z projection shows how the length errors arise (with the approximate vectors all being too short). Exercise: construct the rotation that points a given (x, y, z) vector along the Z axis, (a) from rotations about the (X, Y, Z) axes, (b) using quaternions.

Figure 5 About Here

Figure 5: Case RaTr. (a) Length errors of 100 different random (rotation, vector) pairs at 10 bits precision under truncation. Circles plot the quaternion-vector errors, Xs the matrix-vector errors. (b) As for (a), but direction errors.



Figure 6 About Here

Figure 6: The same case and data as the previous figure, plotted as error vectors, where the correct answer vector points to the origin and lies along the positive Z axis (see text). (a) Error vectors projected onto the X-Z plane. Circles plot the quaternion-vector errors, Xs the matrix-vector errors. (b) As for (a), but projection onto the X-Y plane.

Figure 7 About Here

Figure 7: Case RaRd. (a), (b): as the previous figure, with the same input rotations and vectors, but with operations under rounding to 10 bits of precision.

Fig. 7 is similar to Fig. 6, except that rounding is employed (note the scale change). Not only does the length error bias disappear, as expected, but the difference between the methods decreases, with the matrix implementation maintaining a slight advantage.

4.4 Iterated Rotations and Normalization

The second experiment is to iterate the same small rotation of φ around a particular direction, using three rotation composition methods. Quaternion multiplication and matrix multiplication produce iterated quaternion and matrix rotation representations; at the end of the iterations, the resulting rotation representation is applied to the input vector once to determine the rotated vector. A third method, called the vector method here, iteratively applies the identical incremental rotation to the vector at each iterative step, preserving only the rotated vector. There are thus two cases (quaternion and matrix) of composing rotation representations, and one case (vector) of composing the application of the rotation to the vector. Previous results indicate that the matrix method is both more accurate and more efficient for applying a single rotation, and empirically the results are much better with this choice as well, so rotations are implemented in the vector method by matrix-vector multiplication.

These experiments were carried out under both truncation and rounding. Truncation is sometimes used to get at worst-case performance in a numerical analysis sense. Such an approach might be useful in analysis of a non-iterative computational scheme (such as a single rotation), but the cumulative scaling effects described in Section 4.2 overwhelm numerical precision effects. The errors engendered through iterative truncation turn out to be qualitatively similar to those that arise under rounding. These results are thus good for geometric intuition about the methods, and illustrate the importance of normalization. Fig. 8(a) and (b) show the (shocking) errors that result if no normalization takes place during the iterated rotation composition, and if the resulting non-normalized product quaternion or matrix rotation is applied to the input vector. This is case IS0 (Iterative Sequence, 0 normalizations). Without normalization, all the representations show cumulative errors that are easy to understand.
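The three composition methods can be sketched compactly (Python; all names are ours). At full precision all three agree on the final result; the experiments rerun this loop with the reduced-precision arithmetic described earlier:

```python
import math

def qmul(p, q):
    # Hamilton product of quaternions (w, x, y, z).
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def conic_to_quat_mat(phi, n):
    w, s = math.cos(phi / 2), math.sin(phi / 2)
    x, y, z = (s * c for c in n)
    M = [[1 - 2*(y*y + z*z), 2*(x*y - w*z), 2*(x*z + w*y)],
         [2*(x*y + w*z), 1 - 2*(x*x + z*z), 2*(y*z - w*x)],
         [2*(x*z - w*y), 2*(y*z + w*x), 1 - 2*(x*x + y*y)]]
    return (w, x, y, z), M

phi, n, v0, N = 0.1, (0.0, 0.0, 1.0), [1.0, 0.0, 0.0], 20
q1, M1 = conic_to_quat_mat(phi, n)
q, M, v = q1, M1, list(v0)
for _ in range(N - 1):
    q = qmul(q1, q)        # method 1: compose quaternions
    M = matmul(M1, M)      # method 2: compose matrices
for _ in range(N):
    v = matvec(M1, v)      # method 3 ("vector"): rotate the vector each step
# all three now encode (or have produced) the same rotation by N*phi
```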



Figure 8 About Here

Figure 8: Case IS0. Iterative transformation of the vector [1, 1, -1] about the axis [1, 1, 1] by 1 rad, 200 times. Calculations done with ten bits of mantissa precision under truncation; no normalization of the final representation before applying it to the vector. This choice of input parameters results in a representative performance, but not a normative one. (a) Result vector error projected on the X-Z plane. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. The matrix errors are a very tight helix up the Z axis, with the vector errors only slightly larger. (b) As for (a), but showing the X-Y projection. The matrix and vector errors project to the small dark blob at the origin.

First consider the vector representation. It is acted on at each iteration by an identical but errorful transform of eq. (6). We see it in both views spiraling away from the correct initial value, its length shrinking and its direction error increasing. The performance of the vector method might be improved by a factor of two on average by using the matrix transformation at every step (considering the arguments in Section 4.2 and the results shown in Fig. 6). Another important aspect of this method is its variability in performance for different rotations, arising from the variability in the accuracy of the first calculated rotation (again, Fig. 6).

As predicted in Section 4.2, the matrix representation is affected in its scaling property but not significantly in its rotational properties: there is effectively no directional error, but the result shrinks to less than half its original length in 200 iterations.

The quaternion representation acts, using eq. (6), by adding two cross-product vectors to the original input vector; as the quaternion components become increasingly and systematically errorful, they affect the result in the interesting way shown in the figure. One effect is the decaying magnitude of the quaternion. The other is that as the cumulative rotation φ increases from zero, the sin(φ/2) weighting of Λ, the vector part of the quaternion, starts at zero and returns to zero with a period of 2π/t, where t is the truncated value of the incremental rotation angle. Thus, with that period, the effective rotation error varies sinusoidally, returning to zero as the magnitude of Λ does.

The same data are plotted for 400 iterations this time in Fig. 9. First consider length errors. This presentation emphasizes the exponential decay of the answer's length to zero with the matrix and vector representations. As the quaternion's magnitude ultimately decays to zero, the result vector becomes unaffected by the rotation and remains equal to the input vector, so its length error stabilizes at zero. The short-period sinusoidal character of the length error arises from the arguments of the last paragraph. The initially increasing envelope arises from the increasing inaccuracy caused by the truncation; the ultimate exponential decay of the envelope arises from the shrinking of the quaternion components.

Figure 9 About Here

Figure 9: Case IS0 for 400 iterations (see text). (a) Length error of the result vector. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. The vector and matrix errors coincide. (b) As for (a), but showing direction error for the resulting vector. The vector and matrix errors are very close to zero.

Figure 10 About Here

Figure 10: Case IS0 under rounding instead of truncation. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. Though the forms of the errors are similar to the truncation case, their relative magnitudes are more equal. (a) Result vector projected on the X-Z plane. (b) As for (a), but showing the X-Y projection.

Considering direction errors, the matrix and conic representations have (locally) linearly increasing errors as they spiral away from the true answer direction. (The linear increase cannot be maintained forever on the orientation sphere, of course.) The quaternion method's direction error ultimately becomes purely periodic as the answer vector becomes static. The direction error as a function of iteration number t and angle r between the rotation axis and the input vector is then of the form ((r sin t)^2 + (r - r cos t)^2)^(1/2), the length of a circular chord from a fixed point on the circle to a point moving around the circle at constant velocity. Before that time the direction error exhibits a more complex, two-humped form; this arises from the two separate cross-product components of the vector increment in eq. (6). Exercise: would there be any difference in direction and length errors if the rotation were accomplished by conjugation (eq. (4)) instead of eq. (6)?
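For reference, the chord-length expression simplifies by a standard identity (this simplification is ours, not in the original text):

```latex
\left[(r\sin t)^2 + (r - r\cos t)^2\right]^{1/2}
  = r\,(2 - 2\cos t)^{1/2}
  = 2\,r\,\left|\sin(t/2)\right|
```

so the asymptotic direction error oscillates between 0 and 2r with period 2π in t.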

Next in this sequence, the last two demonstrations are repeated under rounding, not truncation. The expectation is that, due to the better approximation and possible error cancellations arising from high and low approximations, the errors in the representations will be less, but that systematic errors will still exist. The results (Figs. 10, 11) bear this out; the errors often seem like much smaller, noisy versions of the truncation errors. In Fig. 11 the quaternion errors display the fine two-humped structure longer, since it is not overwhelmed as quickly by shrinking quaternion magnitude. Rounding has a dramatically good effect on the systematic shrinking of matrices and quaternions under iterative self-multiplication.

So far we have, purely for intuition, dealt with the pathological case in which no normalization of rotation representations is performed. The arguably pathological case of approximation through truncation yields insight about the structure of systematic approximation errors in an iterative scheme. From here on we abandon truncation approximation (except in Section 4.7).

Figure 11 About Here

Figure 11: Data of the previous figure, showing length and direction errors. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. (b) Direction error.



Figure 12 About Here

Figure 12: Case IS1. Rotating the vector (1, 1, -1) by 1 radian increments about the axis (1, 1, 1) with 10 bits precision under rounding, normalizing the quaternion and matrix representations prior to application to the input vector, and normalizing the final vector representation to its initial length. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. The vector representation has lower variance, and the quaternion exhibits marked periodicity. Only the first 100 iterations are shown; the rest are similar. (b) Direction error.

We now look at two different styles of normalization. In the first, no normalization is performed on intermediate representations, but the final quaternion, matrix, and vector are normalized. In the second, the intermediate quaternion, matrix, and vector are normalized at each step. Normalization of the matrix and quaternion is not dependent on the input data, but the length of the input vector must be computed and remembered for the vector representation to be normalized. This fact might be significant if many vectors are being treated at once.

Fig. 12(a) shows the length error when the final quaternion and matrix are normalized prior to application to the input vector, and the incrementally generated vector representation is normalized to its original length (here known to be unity). This case is called IS1, for Iterative Sequence, 1 normalization. The matrix normalization used is the one-crossproduct form; when only one normalization is being performed, the three types of normalization perform virtually identically. The computations were performed under rounding to the usual ten bits of precision. If its initial length is not remembered, the vector representation cannot be normalized, and its error is the same as in the non-normalized case of Fig. 11, showing a linear error accumulation.
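The normalization variants are not spelled out in detail at this point in the paper, so the following Python sketch shows one plausible reading of the no-, one-, and two-crossproduct forms (function names and details are ours):

```python
import math

def _unit(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def normalize_quat(q):
    # Quaternion normalization: rescale to unit 4-vector length.
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

def normalize_mat_no_cross(M):
    # "No crossproduct": rescale each row to unit length independently
    # (rows stay only approximately orthogonal).
    return [_unit(r) for r in M]

def normalize_mat_one_cross(M):
    # "One crossproduct": unit first two rows, rebuild the third as
    # their cross product.
    r0, r1 = _unit(M[0]), _unit(M[1])
    return [r0, r1, cross(r0, r1)]

def normalize_mat_two_cross(M):
    # "Two crossproducts": rebuild a fully orthonormal triad from row 0.
    r0 = _unit(M[0])
    r2 = _unit(cross(r0, M[1]))
    return [r0, cross(r2, r0), r2]
```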

Fig. 12(b) compares direction error for the three representations, normalized as in Fig. 12(a). The comparative performance here is certainly not so clear-cut, and the error accumulations resemble random walks. Alternatively, the performance could depend more on the particular choice of input data (rotation and input vector) than on numerical properties of the methods.

Figure 13 About Here

Figure 13: Case ISN. As for the previous figure, but normalizing the rotation representation at each iteration, using the no-crossproduct matrix normalization, which improved direction errors but worsened length errors over the one-crossproduct form. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. (b) Direction error.

Figure 14 About Here

Figure 14: Case RS1. As for case IS1 (no normalization of intermediate representations), but using a sequence of random rotations. There was no difference in performance between the different matrix normalizations; the no-crossproduct matrix normalization is shown. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector (first 100 values; the rest are similar). (b) Direction error. The rather confusing results at least have some clear qualitative differences.

Finally, a natural question concerns the effects of normalization applied after every rotation composition. This is case ISN (N normalizations). Fig. 13 shows the results for length and direction. As might be expected, for length error the normalized vector has the best performance (presumably as accurate as machine precision allows). The quaternion result has a systematically negative mean, higher variance, and periodicity. The matrix length error displayed in Fig. 13(a) results from the no-crossproduct form of matrix normalization. The trend is not an illusion: carried out to 400 iterations, the length error shows a marked systematic increase in the negative direction. On the other hand, iteration of the one-crossproduct normalization resulted in a factor-of-two worsening of the magnitude of the direction errors for the matrix representation. Repeated normalizations seem to have adversely affected the accuracy of the vector representation after many iterations, but more work is needed to tell if this is a repeatable effect. In any event, it seems clear that repeated normalizations are certainly not improving matters much in this case over case IS1.

The main conclusion here is that the cost of normalizing quaternion or matrix representations after every rotation in a repeated sequence may not be repaid in results, and may in fact be actively harmful. The vector representation seems to do best of the three on length error and worst on direction error. More work (analytic or experimental) would be needed to clarify the performance ranks, or even to recommend or disqualify a representation for this task.

4.5 Concatenated Random Rotations

Here a sequence of random rotations is generated as described earlier, analogous to the sequence of identical rotations in the last experiment. They are applied cumulatively to an input vector. In each case computation is with 10 bits precision under rounding. The cases of a single final normalization and of normalizing every intermediate representation (called cases RS1 and RSN) were investigated, with both the no- and one-crossproduct forms. The results, comparable to those in cases IS1 and ISN (Figs. 12 and 13), are shown in Figs. 14 and 15.

The results indicate that the vector representation remains better than the other two for length errors, but for direction errors the results are unclear. It is hard to draw clear distinctions either between the normalization methods or between the representations. Again, the form of matrix normalization has some annoying effects, which motivate the next section.



Figure 15 About Here

Figure 15: Case RSN. As for case RS1, but normalizing all intermediate representations. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error, using the one-crossproduct matrix normalization. (b) Direction error, using the one-crossproduct form. (c) Direction error, using the no-crossproduct form.

Case       |   | μ-vec     | μ-mat     | μ-quat    | σ-vec    | σ-mat    | σ-quat
-----------|---|-----------|-----------|-----------|----------|----------|---------
RaTr       | L | -         | 0.006225  | 0.005026  | -        | 0.001181 | 0.003369
           | D | -         | 0.001787  | 0.004578  | -        | 0.001094 | 0.002307
RaRd       | L | -         | -0.000310 | -0.000225 | -        | 0.001087 | 0.001263
           | D | -         | 0.000965  | 0.001180  | -        | 0.000542 | 0.000759
IS1 (100)  | L | 0.000276  | -0.000260 | -0.000262 | 0.000466 | 0.000857 | 0.001648
           | D | 0.003694  | 0.004063  | 0.001435  | 0.001705 | 0.002298 | 0.001123
ISN (100)  | L | 0.000137  | -0.001772 | -0.000811 | 0.000533 | 0.001106 | 0.001312
           | D | 0.005646  | 0.004844  | 0.001606  | 0.002786 | 0.002872 | 0.001045
RS1 (100)  | L | -0.000252 | -0.000162 | -0.000157 | 0.000521 | 0.001612 | 0.002022
           | D | 0.003710  | 0.005854  | 0.006065  | 0.002150 | 0.003686 | 0.006065
RSN (100)  | L | 0.000164  | -0.000602 | -0.001097 | 0.000473 | 0.000703 | 0.001666
           | D | 0.002269  | 0.009830  | 0.005960  | 0.000874 | 0.004230 | 0.002236
           | D | 0.002269  | 0.005187  | 0.005960  | 0.000874 | 0.002248 | 0.002236

Table 2: Statistics for length (L) and direction (D) errors for some of the cases illustrated in this section. Cases RaTr and RaRd are the independent random rotations, IS1 and ISN are the iterative sequences (one normalization, and normalization of each intermediate representation), and RS1 and RSN the random sequences. Only the first 100 data values are used from the iterated rotations, to diminish the effect of longer-term trends. The second RSN direction line is with no-crossproduct normalization (which only affects the matrix representation).

4.6 Statistical Summary

Table 2 summarizes some of the data from the cases so far

4.7 Matrix Normalization

Some experiments explored the performance of the three types of matrix normalization (no, one, or two cross products) in the context of iterated identical and random rotations.

Figure 16 About Here

Figure 16: Iterated identical rotations of the unit vector in direction (1, 1, -1) by 1 radian about the axis (1, 1, 1), computed to 10 bits under rounding. Solid line - no cross products; dotted line - one cross product; dashed line - two cross products. (a) Length errors. (b) Direction errors.

Figure 17 About Here

Figure 17: As for the previous figure, but the rotation is of the unit vector in direction (1, 2, 3) by iterated rotations of 2 radians about the axis (3, -2, -1).

The experiments consist of two iterated identical rotations (Figs. 16, 17) under rounding, and two iterated random rotation sequences under truncation and rounding. Fig. 18 gives the direction errors for two sequences of random rotations applied to an input vector. In no case is there a significant difference between the performances on length error at 200 iterations, but the one- and two-crossproduct forms significantly outperform the no-crossproduct form over 400 iterations, since the latter exhibits a systematically increasing negative length error and the former do not.

For direction error, few conclusions can be drawn about relative performance under iterated identical rotation sequences. It is perhaps surprising that the no-crossproduct form usually performs here at least as well as the other two over a small number of iterations, and that for random sequences there is no clear winner between normalization types, even approximating with truncation. Thus the symptoms noticed in the last two sections, involving long-term relative effects of the normalization form, were not noticed over 200 iterations. The one- and two-crossproduct forms may be more stable over long iteration, but the issue of optimal normalization remains open.

4.8 Average Error of Concatenated Random Rotations

Which representation is best, or worst? To wind up this overlong exercise in trying to derive structure from noise, let us consider average behavior for the task of random rotation sequences (perhaps of more general interest than iterated identical rotations). Technical decisions on normalization are based on the conclusions of previous sections: approximation is by rounding, with normalization only after the last rotation. Matrix normalization is done with the one-crossproduct method. Each run is a single experiment like that of Section 4.5, only performed (with the same data) at a number of different precisions.

Such a set of runs yields length and direction error surfaces plotted against the precision

Figure 18 About Here

Figure 18: Direction errors only, for a sequence of random rotations applied to an input vector. Legend as in the previous two figures. (a) Truncation to 10 bits. (b) Rounding to 10 bits.



Rep        | Repeated Rot                | Random Seq
-----------|-----------------------------|----------------------------
Vec(Mat)   | (9N+17)*, (6N+13)+, 1 sc    | 26N*, 19N+, N sc
Vec(Quat)  | (18N+4)*, (12N)+, 1 sc      | 22N*, 12N+, N sc
Quat       | (16N+22)*, (12N+12)+, 1 sc  | (20N+18)*, (12N+12)+, N sc
Matrix     | (27N+26)*, (18N+19)+, 1 sc  | (44N+9)*, (31N+6)+, N sc

Table 4: Operation counts for N-long rotation sequences. The long form of matrix multiplication is used here. Vec(Mat) is the vector representation with intermediate transformation by matrices, Vec(Quat) with transformation by the quaternion method. Multiplications, additions, and sine-cosine pairs are denoted *, +, and sc.

matrix transformation), since matrix-vector multiplication costs one third of matrix-matrix multiplication.

4.10 Conclusions and Questions

The conclusions we can draw at this point are the following:

1 Numerical simulation of variable precision arithmetic has yielded some tentative conshyclusions but in many interesting cases they are rather delicate and depend on niceties of the implementation Accurate performance predictions may only be possible with faithful simulation of the particular hardware and software at issue

2 Practical differences in numerical accuracy tend to disappear at reasonable precishysions After 200 iterations of a rotation 53 bits of mantissa precision (the double representation in the C language) yields no error to 10 decimal places With 24 bits (the float representation) effects were noticeable in the 5th to 7th decimal place with all methods

3. Normalization is vital before quaternion or matrix representations are applied to vectors, but not as important during the composition of representations, and the extra numerical computation of frequent normalization may actually decrease accuracy in the final result.

4. The vector (with matrix transformation) and matrix representations for rotation sequences seem to outperform the quaternion for accuracy.

5. The vector (with matrix transformation) representation for rotation sequences, with a single normalization of vector length at the end, is most computationally efficient and has least length error on the average.

6. Averaged over many trials, the difference in accuracy of the three methods amounts only to about a single bit of precision for a small number of rotation compositions (say 25 or fewer).

7. Matrices are the least computationally efficient on rotation sequences.
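An experiment in the spirit of conclusion 2 takes only a few lines. The sketch below is mine, not the paper's simulator: it models reduced precision by crudely rounding every intermediate result's mantissa to a given number of bits, iterates a fixed rotation about the Z axis, and reports the length error of the rotated vector.

```python
import math

def round_mantissa(x, bits):
    # Crude model of reduced-precision floating point: round the
    # mantissa of x to the given number of bits (hypothetical helper,
    # not the report's actual arithmetic simulator).
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)            # x = m * 2**e, 0.5 <= |m| < 1
    scale = 2.0 ** bits
    return math.ldexp(round(m * scale) / scale, e)

def rotate_z(v, c, s, bits):
    # One rotation about Z, with every arithmetic result rounded.
    x, y, z = v
    r = round_mantissa
    return (r(r(c * x, bits) - r(s * y, bits), bits),
            r(r(s * x, bits) + r(c * y, bits), bits),
            z)

def length_error(bits, n=200, angle=0.1):
    # Iterate the same rotation n times and measure how far the
    # result's length has drifted from 1.
    c, s = math.cos(angle), math.sin(angle)
    v = (1.0, 0.0, 0.0)
    for _ in range(n):
        v = rotate_z(v, c, s, bits)
    return abs(math.sqrt(v[0]**2 + v[1]**2 + v[2]**2) - 1.0)
```

As expected, the length error grows sharply as mantissa precision shrinks, and is negligible near double precision.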


There are also some remaining basic questions:

1. What is a reasonable analytic approach to the numerical accuracy problem for interesting rotational tasks (say, sequences of random rotations under rounding)?

2. How should optimal matrix or quaternion normalization methods be defined and constructed?

5 Discussion

The area of robust computational geometry is becoming popular, with an annual international conference and increasingly sophisticated theoretical formulations. Very little of that work (if any) is related to rotation algorithms. Symbolic algebra systems can help with analysis [4]. A good treatment of computer arithmetic and rounding appears in [15], with an application to detecting line intersection. Serious qualitative representational problems arise when issues of topology (line connectivity) are at stake, and much of the work is on computing line intersection accurately with finite precision arithmetic [19]. For instance, Milenkovic's analysis is based on facts about line intersection, such as the number of extra bits that are necessary to guarantee a specific precision of answer. Some of these facts are surprising: it takes at least three times the input precision to calculate a line intersection location to the same precision [11]. Robust polygon and polyhedron intersection algorithms, as well as various line-sweeping algorithms, are based on accurate line intersection algorithms [8].

Several points of comparison between rotation parameterizations and representations have been raised in the preceding sections. It is perhaps worth repeating that little evidence for dramatic differences in numerical accuracy has emerged, though the evidence might tend to favor matrices slightly and to condemn the vector representation.

One of the common applications of rotational transforms is to affect several vectors at once, either in the context of transforming a set of points around (as in graphics) or in handling coordinate systems represented as n points (n - 1 of them points at infinity representing axis directions, and one representing the origin of coordinates, in a homogeneous coordinate system). In the first case the repeated application of the same transform will be more economical if the work is done in the matrix representation. In the second the same may well be true, but of course the whole point of homogeneous coordinates is to make the projective (a fortiori affine) transforms into linear ones implemented with matrix multiplication. This means, for example, that coordinate systems, rigid transformations including translation and rotation, and point projective transforms (simple camera models) can all be represented as matrices and composed with matrix multiplication, and that the operation of such transforms on vectors is simply matrix-vector multiplication. In this context homogeneous coordinates by definition require the matrix representation.
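To make the last point concrete, here is a minimal sketch (the helper names and the example transform are mine, not from the text) of a rigid transformation built and composed as 4 x 4 homogeneous matrices, and applied to a point by matrix-vector multiplication:

```python
import math

def matmul(A, B):
    # Multiply two 4x4 matrices given as nested lists.
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def rot_z(phi):
    # Homogeneous 4x4 rotation about Z by phi (upper-left 3x3 block).
    c, s = math.cos(phi), math.sin(phi)
    return [[c, -s, 0, 0],
            [s,  c, 0, 0],
            [0,  0, 1, 0],
            [0,  0, 0, 1]]

def translate(tx, ty, tz):
    # Homogeneous 4x4 translation.
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def apply(M, p):
    # Transform a 3D point, treated as a homogeneous 4-vector with w = 1.
    v = [p[0], p[1], p[2], 1.0]
    return [sum(M[i][k] * v[k] for k in range(4)) for i in range(3)]

# A rigid transform: rotate 90 degrees about Z, then translate along X.
T = matmul(translate(1.0, 0.0, 0.0), rot_z(math.pi / 2))
```

The point (1, 0, 0) rotates to (0, 1, 0) and then shifts to (1, 1, 0); translation and rotation compose by one matrix product, exactly as described above.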

If the rotation at issue is a general one (say, specified in conic (φ, n) parameters), then conversion either to the quaternion or matrix form is easy. Thus the representations in this case do not dictate the semantics, or how one wants to think about the rotation. If the rotation is specified in terms of rotations about (X, Y, Z) coordinate axes, then the matrix representation is easily and robustly constructed.

Real-time applications involving concatenating rotations and applying them to a small number of vectors (perhaps one: the position of a robot hand) need detailed analysis to see which representation is the most efficient. Similarly for more complex geometric calculations.

A common application of rotation representations in vision is to express the motion of the observer. Often the rotational motions, for mechanical or mathematical reasons, are expressed as rotations about some fixed axes, which in practice are usually orthogonal. Examples are the motions induced by pan and tilt camera platforms, or the expression of egomotion in terms of rotations about the observer's local X, Y, Z axes. In this case the matrix representation is the clear choice to maximize efficiency and intuition.

Reconstructing the rotation that transformed known points is a useful vision task that requires fitting data to a model of the transformation. The straightforward route, followed since Roberts' thesis [17], is to do a linear least squares fit to the transform's matrix representation. This approach is perhaps risky, since the matrix elements are not independent and the error being minimized thus may not be the one desired. The alternative of using conic parameters has been successfully used [12] and may well be a better idea. Recently, more sophisticated Kalman filter methods have been used to derive motion from point matches through time [5]. Model-matching with constraints allows a model to be matched to a class of objects, and promising approaches are emerging that use hybrid methods of symbolic algebra (implementing polynomial triangulation to get explicit solutions for known constraints), followed by root-finding and sophisticated minimization techniques to search for minimum-error solutions [13,14].

A less common but important set of applications is to reason about and manipulate rotations themselves. For example, one may want to sample rotation space evenly [9], or to create smoothly interpolated paths in rotation space between a given set of rotations [16]. In these cases the Euler-Rodrigues parameters are the most convenient.

Another example is the use since 1932 of quaternions in the neurophysiological literature to describe eye kinematics [22]. Models of human three-dimensional oculomotor performance, especially for the vestibulo-ocular reflex and feedback models of saccades, involve computations on rotations that are much more stable when carried out in conic or Euler-Rodrigues parameters using quaternion arithmetic. Subtractive feedback laws adequate for simple (left or right) eye movements do not generalize to arbitrary rotational axes. The quaternion model uses multiplicative feedback, explains human performance better than similar models based on rotation matrices, and leads to explicit predictions such as a four-channel saccade computation [21].


6 Bibliography

[1] S. L. Altmann. Rotations, Quaternions, and Double Groups. Clarendon Press, Oxford, 1986.

[2] C. M. Brown. Gaze controls with interactions and delays. In DARPA Image Understanding Workshop, pages 200-218, May 1989. Submitted to IEEE-TSMC.

[3] C. M. Brown. Prediction in gaze and saccade control. Technical Report OUEL 1771/89 (UR TR-295), Oxford University Dept. of Engineering Science (U. Rochester Comp. Sci. Dept.), May 1989.

[4] C. I. Connolly, D. Kapur, J. L. Mundy, and R. Weiss. GeoMeter: A system for modeling and algebraic manipulation. In Proceedings, DARPA Image Understanding Workshop, pages 797-804, May 1989.

[5] O. D. Faugeras, F. Lustman, and G. Toscani. Motion and structure from motion from point and line matches. In Proceedings, International Conference on Computer Vision, pages 25-34, June 1987.

[6] J. D. Foley and A. Van Dam. Fundamentals of Interactive Computer Graphics. Addison-Wesley, 1982.

[7] W. R. Hamilton. On quaternions; or on a new system of imaginaries in algebra. Philosophical Magazine, 3rd Ser., 25:489-495, 1844.

[8] C. Hoffmann and J. Hopcroft. Towards implementing robust geometric computations. In Proceedings, ACM Symposium on Computational Geometry, 1988.

[9] Berthold K. P. Horn. Robot Vision. MIT Press / McGraw-Hill, 1986.

[10] K. I. Kanatani. Group Theory in Computer Vision. Springer Verlag, 1989.

[11] V. J. Milenkovic. Robust geometric computations for vision and robotics. In Proceedings, DARPA Image Understanding Workshop, pages 764-773, May 1989.

[12] J. L. Mundy. Private communication, August 1989.

[13] J. L. Mundy. The application of symbolic algebra techniques to object modeling. Lecture, University of Oxford, August 1989.

[14] J. L. Mundy. Symbolic representation of object models. In preparation, August 1989.

[15] T. Ortmann, G. Thiemt, and C. Ullrich. Numerical stability of geometric algorithms. In Proceedings, ACM Symposium on Computational Geometry, 1987.

[16] K. S. Roberts, G. Bishop, and S. K. Ganapathy. Smooth interpolation of rotational motion. In Computer Vision and Pattern Recognition 1988, pages 724-729, June 1988.

[17] L. G. Roberts. Machine perception of three-dimensional solids. In J. T. Tippett et al., editors, Optical and Electro-Optical Information Processing, pages 159-197. MIT Press, 1968.


[18] O. Rodrigues. Des lois géométriques qui régissent les déplacements d'un système solide dans l'espace, et de la variation des coordonnées provenant de ses déplacements considérés indépendamment des causes qui peuvent les produire. Journal de Mathématiques Pures et Appliquées, 5:380-440, 1840.

[19] D. Salesin, J. Stolfi, and L. Guibas. Epsilon geometry: building robust algorithms from imprecise computations. In Proceedings, ACM Symposium on Computational Geometry, 1989.

[20] R. Taylor. Planning and execution of straight line manipulator trajectories. In J. M. Brady, J. M. Hollerbach, T. L. Johnson, T. Lozano-Pérez, and M. T. Mason, editors, Robot Motion: Planning and Control. MIT Press, 1982.

[21] D. Tweed and T. Vilis. Implications of rotational kinematics for the oculomotor system in three dimensions. Journal of Neurophysiology, 58(4):832-849, 1987.

[22] G. Westheimer. Kinematics for the eye. Journal of the Optical Society of America, 47:967-974, 1932.



REPORT DOCUMENTATION PAGE

Report Number: 303
Title: Some Computational Properties of Rotation Representations
Type of Report: Technical Report
Author: Christopher M. Brown
Contract or Grant Number: DACA76-85-C-0001
Performing Organization: Computer Science Department, 734 Computer Studies Bldg., University of Rochester, Rochester, NY 14627
Controlling Office: Defense Advanced Research Projects Agency, 1400 Wilson Blvd., Arlington, VA 22209
Report Date: August 1989
Number of Pages: 36
Monitoring Agency: U.S. Army ETL, Fort Belvoir, VA 22060
Security Classification: Unclassified
Distribution Statement: Distribution of this document is unlimited.
Supplementary Notes: None
Key Words: Quaternions, Rotation Representations, Operation Counts, Numerical Accuracy


1 Rotation Parameters and Representations

One of the better treatments of the representation of the rotation group SO(3) (the special orthogonal group of order 3) is [1], and there is also a treatment in [10]. This section will briefly define the main rotation parameterizations and identify two as the most useful for computer vision. Rotations can be visualized in their alias or alibi aspect. In their alias aspect they rotate the coordinate system in which points are described; in the alibi aspect the descriptive coordinate system is fixed and the points are, as it were, physically rotated with respect to it. When there is necessity, we assume henceforth we are talking about the alibi aspect. There are four main parameterizations.

Conical: Four numbers specifying the 3-vector direction n of a rotation axis and the angle φ through which to rotate. Since n can be made a unit vector, these four numbers are seen to be redundant.

Euler Angles: Three angles (θ, φ, ψ). Usually they are interpreted as the sequential rotation of space (including the coordinate axes), first about the Z axis by ψ, then about the new Y axis by φ, and last about the newest Z axis by θ. The idea extends to other choices of axes. Euler angles are a nonredundant representation.

Cayley-Klein: Two complex numbers (α, β), actually parameterizing the SU(2) (Special Unitary, order 2) group.

Euler-Rodrigues: Four numbers specifying a real λ and vector Λ, related to the conical parameters by [λ, Λ] = [cos(φ/2), sin(φ/2)n].

Probably the most familiar representation for SO(3) in the computer vision, robotics, and graphics communities is that of orthonormal 3 x 3 matrices, through which vector rotation is accomplished by matrix (that is, matrix-vector) multiplication, and rotation composition is simply multiplication of the rotation matrices. Such matrices are more or less easily constructed from the rotation parameters listed here, but their nine redundant numbers are not a satisfactory rotation parameterization.

Euler angles are common in the robotics literature, probably because by definition they provide a constructive description of how to achieve a particular general rotation through specific rotations about two axes. The elegant use of only two axes has practical consequences; for example, a spacecraft can be arbitrarily oriented using only two rigidly mounted thrusters. However, there are severe computational problems (ambiguities, conventions) with Euler angles: the composition of rotations is not easily stated using them, the rotation they represent is not obvious by inspection, and there are deeper mathematical infelicities, some being addressed by specifying rotations about all three axes, not just two, and some perhaps not being reparable.

Cayley-Klein parameters are usually manipulated in the form of 2 x 2 matrices, where they appear redundantly in conjugate form, and describe rotations in terms of the stereographic projection of 3-space onto a plane using homogeneous coordinates. They are isomorphic to the Euler-Rodrigues parameters, but their geometrical interpretation is difficult to visualize.

Euler-Rodrigues parameters are given by a particular semantics for normalized quaternions. After normalization, the four quaternion components are the Euler-Rodrigues parameters. Quaternions were invented by Hamilton [7] as an extension of complex numbers (the goal was an algebra of 3-vectors allowing multiplication and division), but were immediately (within a day) recognized by him as being intimately related to rotations. It is said that every good idea is discovered, possibly several times, before ultimately being invented by someone else. In this case quaternions and their relation to rotations were earlier discovered by Rodrigues [18]. He also discovered the construction proving that the composition of two rotations is also a rotation, later invented by Euler and today bearing Euler's name. More on this interesting story appears in [1].

The most relevant representations for computer vision and robotics are conical parameters, quaternions (Euler-Rodrigues parameters), and of course 3 x 3 orthonormal matrices.

The topology of SO(3) is of interest, as well as its parameterization. The first step in its visualization is to note that the four Euler-Rodrigues parameters form a unit 4-vector. As such, they define points lying in the 3-sphere (the unit sphere in 4-space), and every point on the spherical surface defines a rotation. The next intuition is that a rotation (φ, n) is equivalent to the rotation (-φ, -n). Thus each rotation is represented twice on the surface of the sphere, and to remove this ambiguity and achieve a one-to-one mapping of rotations onto 4-space points, opposite points (usually opposite hemispheres and half-equators) on the sphere must be identified. This turns the surface into a cross-cap, or projective plane (Fig. 1). Another isomorphism relates the rotations to points in a solid parameter ball in 3-space. The radial distance of a point (from 0 to π) represents the magnitude of the rotation angle, and its direction from the origin gives the axis of rotation. In this case just the skin of one hemisphere and half the equator must be identified with the other half, and all null rotations (in whatever direction) map to the origin.

Many of the infelicities surrounding calculations with rotations arise from the topology of SO(3). One consequence is that there are two inequivalent paths from any rotation to any other, not unlike the paths between points on a cylinder (Fig. 2). The paths cannot be shrunk to one another (consider the two paths from a point to itself, one that goes all the way around the cylinder and one that does not), and clearly have different properties. Consider the path that minimizes time or energy expended to push a mass from point A through the points in the order B, C. It is easy to imagine that a better path exists in direction d2 than d1. Path planning (and interpolation) in rotation space must consider the topology [16].

Another aspect of the connectivity is that rotations must either be represented redundantly or discontinuously. Consider the visualization of the projective plane in which the upper hemisphere and half the equator of the 3-sphere is isomorphic with the rotations, and the lower hemisphere and half-equator is identified with diametrically opposite points. A smooth path crossing the equator moves directly from the rotation (for example) (π, n) (in conic parameters) to the rotation (π - ε, -n), a striking discontinuity of reversal in the direction vector. If the entire 3-sphere is used to represent rotations, this problem disappears, with (π, n) being followed by (π + ε, n). However, this same rotation is still also represented by the point (π - ε, -n), so the isomorphism is lost. The special cases created by potentially duplicated representation complicate algorithms and symbolic manipulations. For related reasons, representations of straight lines are plagued with the same problems (an observation due to Joe Mundy).

Figure 1 About Here

Figure 1: By identifying (gluing together) parallel sides in the directions indicated, various more or less familiar surfaces can be formed. The Klein bottle and projective plane cannot be physically realized in three-space. The SO(3) topology is that of a projective plane.

Figure 2 About Here

Figure 2: One consequence of SO(3) being non-simply connected is the inequivalence between the paths A to B in direction d1 and A to B in direction d2. Any two paths between two points on a simply-connected surface can be shrunk to be equivalent, but that is not the case on the cylinder shown here, or on the projective plane.

2 Basic Quaternion and Matrix Formulae

2.1 Quaternion Representation

Quaternions are well discussed in the mathematical and robotics literature; an accessible reference is [9], though there are several others. This brief outline is restricted to those properties most directly related to our central questions. Quaternions embody the Euler-Rodrigues parameterization of rotations directly, and thus are closely related to the conic representation of rotation.

If the Euler-Rodrigues parameters of a rotation are (λ, Λ), the quaternion that represents the rotation is a 4-tuple, best thought of as a scalar and a vector part:

    Q = [λ, Λ]      (1)

Actually the vector part is a pseudovector, transforming to itself under inversion like the cross product, but that distinction will not bother us.

If a rotation in terms of its conic parameters is R(φ, n), the corresponding quaternion is

    Q = [cos(φ/2), sin(φ/2)n]      (2)

The multiplication rule for quaternions is

    [λ1, Λ1][λ2, Λ2] = [λ1λ2 - Λ1·Λ2, λ1Λ2 + λ2Λ1 + Λ1 × Λ2]      (3)


A real quaternion is of the form [λ, 0] and multiplies like a real number. A pure quaternion has the form [0, Λ]. A unit quaternion is a pure quaternion [0, n], with n a unit vector. The conjugate of a quaternion A = [λ, Λ] is the quaternion A* = [λ, -Λ]. The norm of a quaternion A is the square root of AA*, which is seen to be the square root of λ² + Λ·Λ. A quaternion of unit norm is a normalized quaternion, and it is normalized quaternions that represent rotations. A quaternion [λ, Λ] = [λ, Λn] can satisfy the normalization condition if λ = cos(α), Λ = sin(α). This gives a Hamilton quaternion, or versor, which we shall not use. Better for our purposes is to write λ = cos(φ/2), Λ = sin(φ/2). The conjugate inverts the rotation axis and thus denotes the inverse rotation. The inverse quaternion of A is A⁻¹ such that AA⁻¹ = [1, 0]. For a normalized quaternion the inverse is the conjugate. The product of two normalized quaternions is a normalized quaternion, a property we desire for a rotation representation. Under the Rodrigues parameterization a normalized quaternion represents a rotation, quaternion multiplication corresponds to composition of rotations, and quaternion inverse is the same as quaternion conjugation.

One way to apply a rotation represented by a quaternion to a vector is to represent the vector by a pure quaternion and to conjugate it with the rotation quaternion. That is, to rotate a given vector r by A = [cos(φ/2), sin(φ/2)n], form a pure quaternion r = [0, r]; the resulting vector is the vector part of the pure quaternion

    r' = A r A*      (4)

From the quaternion multiplication rule the conical transformation follows, though it can also be derived in other ways. It describes the effect of rotation upon a vector r in terms of the conical rotation parameters:

    r' = cos φ r + sin φ (n × r) + (1 - cos φ)(n · r)n      (5)

For computational purposes, neither conjugation nor the conical transform is as quick as the clever formula (note the repeated subexpression) given in [9], arising from applying vector identities to eq. (5). Given a vector r and the normalized quaternion representation [λ, Λ], the rotated vector is

    r' = r + 2(λ(Λ × r) - (Λ × r) × Λ)      (6)

The matrix representation of a rotation represented by conic parameters (φ, n) can be written explicitly in terms of those parameters, and deriving it is a well known exercise for robotics students. It can be found, for example, in [1]. For computational purposes it is easier to use the following result. The matrix corresponding to the quaternion rotation representation [λ, Λx, Λy, Λz] is

    [ λ² + Λx² - Λy² - Λz²    2(ΛxΛy - λΛz)           2(ΛxΛz + λΛy)         ]
    [ 2(ΛxΛy + λΛz)           λ² - Λx² + Λy² - Λz²    2(ΛyΛz - λΛx)         ]      (7)
    [ 2(ΛxΛz - λΛy)           2(ΛyΛz + λΛx)           λ² - Λx² - Λy² + Λz²  ]
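Eq. (7) transcribes to a short function; a sketch (assuming a normalized quaternion; the name is mine):

```python
def quat_to_matrix(q):
    # Direct transcription of eq. (7) for a normalized quaternion
    # [l, (Lx, Ly, Lz)]; returns a 3x3 nested list.
    l, (x, y, z) = q
    return [[l*l + x*x - y*y - z*z, 2*(x*y - l*z),         2*(x*z + l*y)],
            [2*(x*y + l*z),         l*l - x*x + y*y - z*z, 2*(y*z - l*x)],
            [2*(x*z - l*y),         2*(y*z + l*x),         l*l - x*x - y*y + z*z]]
```

For a 90 degree rotation about Z, where λ = Λz = √2/2, this reproduces the familiar matrix with rows (0, -1, 0), (1, 0, 0), (0, 0, 1).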


2.2 Matrix Representation

This representation is the most familiar to graphics, computer vision, and robotics workers, and is given correspondingly short shrift here. A rotation is represented by an orthonormal 3 x 3 matrix. This rotation component occupies the upper left-hand corner of the usual 4 x 4 homogeneous transform matrix for three-dimensional points expressed as homogeneous 4-vectors. The matrix representation is highly redundant, representing as it does three quantities with nine numbers.

In this section capital letters are matrix variables. The primitive rotation matrices are usually considered to be those for rotation about coordinate axes. For example, to rotate a (column) vector r by φ about the X axis we have the familiar

         [ 1    0       0    ]
    r' = [ 0  cos φ  -sin φ  ] r      (8)
         [ 0  sin φ   cos φ  ]

with related simple matrices for the other two axes. The composition of rotations A, B, C is written r' = CBAr, which is interpreted either as performing rotations A, B, C in that order, each being specified with respect to a fixed (laboratory) frame, or performing them in the reverse order C, B, A, with each specified in the coordinates that result from applying the preceding rotations to the preceding coordinate frame, starting with C acting on the laboratory frame.

Conversion methods from other rotation parameterizations (like Euler angles) to the matrix representation appear in [1]. Generally the matrices that result are of a moderate formal complexity, but with pleasing symmetries of form arising from the redundancy in the matrix. In this work all we shall need is eq. (7).

Infinitesimal rotations are interesting. The skew symmetric component of a rotation matrix A is S = A - Aᵀ, and it turns out that

    A = exp(S) = I + S + (1/2!)S² + ...      (9)

A little further work establishes that if the conical representation of a rotation is (φ, (nx, ny, nz)) and

        [  0   -nz   ny ]
    Z = [  nz   0   -nx ]      (10)
        [ -ny   nx   0  ]

then A = exp(φZ) = I + (sin φ)Z + (1 - cos φ)Z², from which the explicit form of the 3 x 3 matrix corresponding to the conic representation can be found.
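The closed form of exp(φZ) is easy to exercise. A sketch (helper names mine) that builds Z from eq. (10) and applies the formula above:

```python
import math

def skew(n):
    # The skew matrix Z of eq. (10) for a unit axis n = (nx, ny, nz).
    nx, ny, nz = n
    return [[0.0, -nz,  ny],
            [ nz, 0.0, -nx],
            [-ny,  nx, 0.0]]

def mat3mul(A, B):
    # 3x3 matrix product on nested lists.
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rodrigues(phi, n):
    # A = I + sin(phi) Z + (1 - cos(phi)) Z^2, the closed form of exp(phi Z).
    Z = skew(n)
    Z2 = mat3mul(Z, Z)
    s, c = math.sin(phi), math.cos(phi)
    I = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
    return [[I[i][j] + s * Z[i][j] + (1 - c) * Z2[i][j] for j in range(3)]
            for i in range(3)]
```

With n along a coordinate axis, this recovers the primitive matrices of eq. (8) exactly; with n = (0, 0, 1) and φ = π/2, for instance, the rows are (0, -1, 0), (1, 0, 0), (0, 0, 1).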

Writing eq. (10) as

           [ 0  0  0 ]        [  0  0  1 ]        [ 0 -1  0 ]
    Z = nx [ 0  0 -1 ] + ny   [  0  0  0 ] + nz   [ 1  0  0 ]      (11)
           [ 0  1  0 ]        [ -1  0  0 ]        [ 0  0  0 ]


shows that infinitesimal rotations can be expressed elegantly in terms of orthogonal infinitesimal rotations. It is of course no accident that the matrix weighted by nx in eq. (11) is the derivative of the matrix in eq. (8) evaluated at φ = 0. Further, it will be seen that if v' is the vector arising from rotating vector v by the rotation (δφ, n), then each component of v' is just the sum of various weighted components of v. Thus infinitesimal rotations, unlike general ones, commute.

3 Operation Counts

Given the speed and word length of modern computers, including those used in embedded applications like controllers, concern about operation counts and numerical accuracy of rotational operations may well be irrelevant. However, there is always a bottleneck somewhere, and in many applications, such as control, graphics, and robotics, it can still happen that computation of rotations is time consuming. Small repeated incremental rotations are especially useful in graphics and in some robot simulations [6, 2, 3]. In this case the infinitesimal rotations of the last section can be used, if some care is exercised. To compute the new x' and y' that result from a small rotation θ around the Z axis, use

    x' = x - y sin θ,    y' = y + x' sin θ      (12)

The use of x' in the second equation maintains normalization (unit determinant of the corresponding rotation matrix). For graphics, cumulative error can be fought by replacing the rotated data by the original data when the total rotation totals 2π [6].
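A sketch (names mine) contrasting eq. (12) with the naive update that omits the use of x' in the second equation:

```python
def incremental_rot(v, s, steps):
    # Eq. (12): x' = x - y s, y' = y + x' s.  Using the *updated* x in
    # the y update makes the per-step map's determinant exactly 1, so
    # radii stay bounded instead of spiraling outward.
    x, y = v
    for _ in range(steps):
        x = x - y * s
        y = y + x * s
    return x, y

def naive_rot(v, s, steps):
    # The naive small-angle update x' = x - y s, y' = y + x s has
    # determinant 1 + s^2, so the radius grows by sqrt(1 + s^2) each step.
    x, y = v
    for _ in range(steps):
        x, y = x - y * s, y + x * s
    return x, y
```

After 10,000 steps with s = 0.01, the naive radius has grown by a factor of roughly e^0.5, while the eq. (12) radius stays within a fraction of a percent of 1.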

Operation counts of course depend not just on the representation but on the algorithm implemented. As the application gets more complex, it becomes increasingly likely that ameliorations in the form of cancellations, appropriate data structures, etc., will arise. Even simple operations sometimes offer surprising scope for ingenuity (compare eqs. (5) and (6)). Thus it is not clear that listing the characteristics of a few primitive operations is guaranteed to be helpful (I shall do it nevertheless). Realistic prediction of savings possible with one representation or the other may require detailed algorithmic analysis. Taylor [20] works out operation counts for nine computations central to his straight line manipulator trajectory task. In these reasonably complex calculations (up to 112 adds and 156 multiplies in one instance), quaternions often cut down the number of operations by a factor of 1.5 to 2. Taylor also works out operation counts for a number of basic rotation and vector transformation operations. This section mostly makes explicit the table appearing in Taylor's chapter detailing those latter results. Section 4.9 gives counts for iterated (identical and different) rotations.

3.1 Basic Operations

Recall the representations at issue:

Conic: (φ, n), axis n and angle φ.


Quaternion: [λ, Λ], with λ = cos(φ/2), Λ = sin(φ/2)n.

Matrix: a 3 x 3 matrix.

For 3-vectors, if multiplication includes division, we have the following:

Vector Addition: 3 additions.

Scalar-Vector Product: 3 multiplications.

Dot Product: 3 multiplications, 2 additions.

Cross Product: 6 multiplications, 3 additions.

Vector Normalization: 6 multiplications, 2 additions, 1 square root.

3.2 Conversions

Conic to Quaternion: One division to get φ/2, one sine and one cosine operation, three multiplications to scale n. Total: 4 multiplications, 1 sine, 1 cosine.

Quaternion to Conic: Although it looks like one arctangent is all that is needed here (after dividing the quaternion through by λ), that seems to lose track of the quadrants. Best seems simply finding φ/2 by an arccosine and a multiplication by 2.0 to get φ, then computing sin(φ/2) and dividing the vector part by that. Total: 1 arccosine, 1 sine, 4 multiplications. Special care is needed here for small (especially zero) values of φ.
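The two conversions just described might be sketched as follows (names and the small-angle cutoff are mine; for a null rotation the axis is undefined, so any unit vector serves):

```python
import math

def conic_to_quat(phi, n):
    # Conic (phi, n) to quaternion: one halving, one sine, one cosine,
    # three multiplications to scale the unit axis n.
    h = phi / 2
    s = math.sin(h)
    return (math.cos(h), (s * n[0], s * n[1], s * n[2]))

def quat_to_conic(q):
    # Recover phi/2 by an arccosine, then divide the vector part by
    # sin(phi/2).  The phi near 0 case needs special care, since the
    # axis is then undefined; an arbitrary unit axis is returned.
    l, A = q
    h = math.acos(max(-1.0, min(1.0, l)))      # phi/2
    s = math.sin(h)
    if s < 1e-12:
        return 0.0, (1.0, 0.0, 0.0)
    return 2 * h, tuple(a / s for a in A)
```

A round trip through both conversions recovers the original conic parameters to roundoff.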

Quaternion to Matrix: Eq. (7) is the basic tool. Although the matrix is orthonormal (thus one can compute a third row from the other two by vector cross product), that constraint does not help in this case. Computing all the necessary products takes 13 multiplications, after which 8 additions generate the first two rows. The last row costs only another 5 additions working explicitly from eq. (7), compared to the higher cost of a cross product. Total: 13 multiplications, 13 additions.

Conic to Matrix: One way to proceed is to compute the matrix whose elements are expressed in terms of the conic parameters (referred to in Section 2). Among other things it requires more trigonometric evaluations, so best seems the composition of conic-to-quaternion and quaternion-to-matrix conversions, for a total of 17 multiplications, 13 additions, 1 sine, 1 cosine.

Matrix to Quaternion: The trace of the matrix in eq. (7) is seen to be 4λ² - 1, and differences of symmetric off-diagonal elements yield quantities of the form 2λΛz, etc. Total: 5 multiplications, 6 additions, 1 square root.
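A sketch of this conversion (name mine; the branch-free form shown assumes λ is not near zero, i.e. the rotation angle is not near π, where a fuller implementation would branch on the largest diagonal element):

```python
import math

def matrix_to_quat(M):
    # From eq. (7): trace = 4*l*l - 1, and each difference of a
    # symmetric off-diagonal pair equals 4*l times one component of L.
    l = math.sqrt((M[0][0] + M[1][1] + M[2][2] + 1.0) / 4.0)
    return (l, ((M[2][1] - M[1][2]) / (4 * l),
                (M[0][2] - M[2][0]) / (4 * l),
                (M[1][0] - M[0][1]) / (4 * l)))
```

Applied to the 90 degree Z rotation matrix, it recovers λ = Λz = √2/2 with the other axis components zero.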

Matrix to Conic: Easiest seems matrix to quaternion to conic.

3.3 Rotations

Vector by Matrix: Three dot products. Total: 9 multiplications, 6 additions.


Vector by Quaternion: Implement eq. (6). Total: 18 multiplications, 12 additions.

Compose Rotations (Matrix Product): Here it pays to exploit orthonormality rather than simply implementing matrix multiplication with nine dot products. Six dot products and one cross product are necessary. Total: 24 multiplications, 15 additions. This method uses only 2/3 of the information in the matrices, and may be suspect for inexact representations (see below on normalization).

Compose Rotations (Quaternion Product): Implementing eq. (3) directly takes one dot product, one cross product, two vector additions, two vector-scalar products, one multiply, and one add. Total: 16 multiplications, 12 additions.

Rotate around X, Y, or Z Axis: To do the work of eq. (8) requires 1 sine, 1 cosine (unless they are already known), 4 multiplications, and 2 additions.

3.4 Normalization

In the conic representation one has little choice but to normalize the axis direction vector to unit length and leave the rotation angle alone. In the quaternion and matrix representations the direction and magnitude are conflated in the parameters, and normalization of parameters affects them both.

Quaternion. Simply dividing all quaternion components through by the norm of the quaternion costs 8 multiplies, 3 additions, one square root.

Matrix. An exact rotation matrix is orthonormal. Three simple normalization algorithms suggest themselves. In the no-crossproduct method, each matrix row (in this paragraph meaning row or column) is normalized to a unit vector. Cost: 12 multiplies, 6 additions, 3 square roots. No rows are guaranteed orthogonal. The one-crossproduct method normalizes two rows and computes the third as their cross product. Cost: 18 multiplies, 7 additions, 2 square roots. This method does not guarantee the first two rows are orthogonal. The two-crossproduct method performs the one-crossproduct normalization, then recomputes the first row as the cross product of the second and third, say, for a total of 23 multiplies, 10 additions, and 2 square roots. Neither of the last two methods uses information from one matrix row. The best normalization procedure, by definition, should return the orthonormal matrix closest to the given one; the above procedures are approximations to this ideal. One idea might be to find the principal axis direction (the eigenvector corresponding to the largest real eigenvalue) and call that the normalized rotation axis. An exact rotation matrix M has one eigenvalue of unity and two conjugate complex ones. What, then, of the rotation angle? In the errorless matrix, cos(φ) = (Trace(M) − 1)/2, and the trace is invariant under the eigenvalue calculation. Thus the angle can be calculated from three of the matrix elements. But does this choice minimize any reasonable error in rotation space? Indeed, what error should be minimized? The work here uses the no- and one-crossproduct forms, tries some empirical investigation in Section 4.7, and leaves the large question of optimal normalization as a research issue. Exercise: find the eigenvalues and eigenvectors of the matrix in eq. (8).
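The one-crossproduct scheme, for example, can be sketched as follows (a minimal illustration, not the paper's code):

```python
import math

def unit(r):
    """Normalize a 3-vector to unit length."""
    n = math.sqrt(r[0]*r[0] + r[1]*r[1] + r[2]*r[2])
    return [r[0]/n, r[1]/n, r[2]/n]

def normalize_one_crossproduct(M):
    """One-crossproduct normalization: renormalize the first two rows and
    recompute the third as their cross product.  The first two rows are
    not forced to be orthogonal, and no information from the original
    third row is used."""
    r0, r1 = unit(M[0]), unit(M[1])
    r2 = [r0[1]*r1[2] - r0[2]*r1[1],
          r0[2]*r1[0] - r0[0]*r1[2],
          r0[0]*r1[1] - r0[1]*r1[0]]
    return [r0, r1, r2]
```

Applied to a slightly mis-scaled rotation matrix (e.g. a scaled identity), this recovers an exactly orthonormal one.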


Operation              Quaternion           Matrix
Rep → Conic            4*, 1 acos, 1 sin    9*, 6+, 1 acos, 1 sin, 1 √
Conic → Rep            4*, 1 sin, 1 cos     17*, 13+, 1 sin, 1 cos
Rep → Other            13*, 13+             5*, 6+, 1 √
Rep ∘ vector           18*, 12+             9*, 6+
Rep ∘ Rep              16*, 12+             24*, 15+
X, Y, or Z Axis Rot    8*, 4+               4*, 2+
Normalize              8*, 3+, 1 √          18*, 7+, 2 √

Table 1: Summary of quaternion and matrix operation counts. Rep means the representation of the column heading; Other means that of the other column heading. Applying a rotation to a vector or another rotation is denoted by ∘. Multiplications, additions, and square roots are denoted by *, +, √. Trigonometric functions are denoted sin, acos, etc.

3.5 Summary and Discussion

The operation counts are summarized in Table 1. They generally agree with those in [20], with a few differences. Taylor seems to have a quaternion to matrix conversion that takes 9 multiplies and 19 adds (Exercise: find it), and he implements his matrix to quaternion conversion with 3 sines, 3 cosines, and one arctangent instead of a single square root. Taylor normalizes matrix representations by the one-crossproduct method. The use of eq. (6) for vector rotation saves 4 multiplies over Taylor's counts. In any event, the differences in costs of the two representations do not seem as dramatic for these basic calculations as for Taylor's more complex robotic trajectory calculations, but iterative operations reveal a difference of a factor of two to three in effort (see Section 4.9). Generally, it may be impossible to predict realistically the extent of the savings possible with one representation over the other without a detailed algorithmic analysis.

4 Numerical Accuracy

4.1 Vector Rotation Tasks, Methods, and Representations

The manipulations of quaternions, matrices, and vectors presented in Section 3 are based on simple building blocks of vector addition, scaling, and dot and cross products. The numbers involved in the representations are generally well behaved, with vector components and trigonometric functions lying in the interval [−1, 1] and rotation magnitudes and inverse trigonometric functions often in the interval [−2π, 2π]. Neither representation requires algorithms with obvious numerically dangerous operations like dividing by small differences.

It is often alleged that one representation is more numerically accurate than the other. However, it is not easy to see how to quantify this claim analytically. The matrix representation is redundant, representing the three parameters with nine numbers. Does this mean it is more vulnerable to numerical error, or less? Given that the algorithms are straightforward and simple, do their numerical results depend more on their structure or on the data they operate on? The individual numbers (often components of unit 3-vectors) have nonlinear constraints that may affect the numerical performance. Perhaps some of the alleged differences are thought to arise in microcomputer implementations with small word size: how does accuracy vary with the number of significant digits?

In general, the lack of analytical leverage on these questions motivated the empirical study reported in this section. Herein we compare representations for several common vector rotation tasks: rotation of a vector, iterated identical rotations of a vector, and iterated different (random) rotations of a vector. Error is measured either as an error vector or as the amount that the resulting vector's length and direction differ from the correct values.

More precisely, a correct answer is determined by computation at full (double) precision. When the rotation task is to iterate the same rotation, the correct answer can be (and is) calculated for the final cumulative rotation rather than calculated by iteration. The application of a rotation to a vector is accomplished by two methods: by vector-matrix multiplication as in eq. (8), and by Horn's quaternion-vector formula, eq. (6). The iteration of a rotation is accomplished in three ways: quaternion-quaternion multiplication, matrix-matrix multiplication, and the quaternion-vector formula. Thus the results of iterated rotation are represented as a quaternion, matrix, and (cumulatively rotated) vector, respectively.

The experiments use a variable-precision implementation of all the relevant real, vector, quaternion, and matrix operations, offering a runtime choice of the number of bits of mantissa precision and the approximation type (rounding or truncation). The most basic operations are approximation, which rounds or truncates the mantissa of a C-language double (double-precision real floating point number) to the required precision, and arithmetic operations that approximate the operands, operate (at full precision), and approximate the result. From these basics are built vector cross and dot products, quaternion and matrix products, normalization routines, etc.
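Such a variable-precision primitive can be sketched in Python (function names here are hypothetical; the paper's implementation is in C):

```python
import math

def approx(x, bits, mode="round"):
    """Keep `bits` bits of mantissa of a floating point number, by
    rounding or by truncation toward zero."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)              # x = m * 2**e with 0.5 <= |m| < 1
    scaled = m * (1 << bits)
    scaled = math.trunc(scaled) if mode == "trunc" else round(scaled)
    return math.ldexp(scaled, e - bits)

def mul(x, y, bits, mode="round"):
    """Approximate the operands, multiply at full precision, and
    approximate the result -- the paper's recipe for arithmetic."""
    return approx(approx(x, bits, mode) * approx(y, bits, mode), bits, mode)
```

Under truncation, `approx` always moves a value toward zero, which is the source of the systematic shrinking effects discussed below.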

Vectors are specified or created randomly as (x, y, z) unit vectors at the highest machine precision. Rotations are specified or created randomly in the conic (φ, n) (rotation angle, rotation axis direction) representation, also at highest precision. Random unit vectors for input or rotation axes are not uniformly distributed in orientation space, being generated by normalizing three uniform variates to produce a unit vector. For random (φ, n) rotations, a random vector is produced for n, and a fourth uniform variate is normalized to the range [−π, π]. Exercise: what do these distributions of vectors and rotations look like in orientation and rotation space?
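A sketch of this generation scheme follows. The exact range of the uniform variates is not specified in the text; [−1, 1] is assumed here, and the names are hypothetical.

```python
import math
import random

def random_axis():
    """Three uniform variates normalized to a unit vector -- NOT uniform
    on the sphere, as the text notes."""
    v = [random.uniform(-1.0, 1.0) for _ in range(3)]
    n = math.sqrt(sum(c*c for c in v))
    return tuple(c / n for c in v)

def random_conic_rotation():
    """A random (phi, n) pair: a random axis plus a fourth uniform
    variate scaled to the angle range [-pi, pi]."""
    return random.uniform(-math.pi, math.pi), random_axis()
```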

The correct answer is computed at highest precision by applying a single rotation (by Nφ for an N-long iteration of the same rotation φ) around the rotation axis to the input vector, using eq. (6). Then, working at the lower experimental precision, the conic input representation is converted to quaternion and matrix representations by eqs. (2) and (7). Next, the experimental rotation task is carried out at the experimental lower precision. Finally, the error analysis consists of highest-precision comparison of the low-precision result with the high-precision correct answer.

When a reduced-precision vector v₁′ is obtained, its error is determined by comparing it to the full-precision, non-iteratively rotated vector v₁. The error vector is v₁′ − v₁; the length error ΔL and direction error ΔD are

ΔL = | ‖v₁′‖ − ‖v₁‖ |   (13)

ΔD = arccos(v̂₁ · v̂₁′)   (14)

where v̂ is the unit vector in the direction of v.

In all the work here ‖v‖ = 1. Directional errors are reported in radians, length errors in units of the (unit) length of the vector to be transformed. The experiments are reported below in a rather discursive style. Along the way a few side issues are addressed, and conclusions are drawn that affect the later choices of experimental parameters.
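The two error measures of eqs. (13)-(14) can be sketched directly:

```python
import math

def errors(v_approx, v_true):
    """Return (length error, direction error in radians) between a
    reduced-precision result and the full-precision correct answer."""
    norm = lambda v: math.sqrt(sum(c*c for c in v))
    na, nt = norm(v_approx), norm(v_true)
    cosang = sum(a*b for a, b in zip(v_approx, v_true)) / (na * nt)
    cosang = max(-1.0, min(1.0, cosang))   # guard acos against roundoff
    return abs(na - nt), math.acos(cosang)
```

The clamp before `acos` matters in practice: roundoff can push the cosine of a near-zero angle slightly above 1.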

4.2 Analytic Approaches

Let us begin to consider the conversion of conic (φ, n) rotation parameters to quaternion and matrix form. For simplicity we might assume that θ = φ/2 is known exactly. Assume that at n bits of precision a number x is represented to a maximum additive absolute error of δ = 2⁻ⁿ, which is a relative error of δ/x. Ignoring second-order terms, the product of two imprecise numbers (x + δ)(y + ε) has additive relative error of δ/x + ε/y. When looking up or computing a function, the error to first order in f(x + δ) is δf′(x). Thus the error in computing the cosine of an errorful number x + δ is sin(x)δ. If, as in the implementation of this paper, the function is computed initially to high precision and then approximated, the further approximation may add another error of δ, for a total absolute error of (1 + sin(x))δ. Of course, the function computation could be set up to eliminate the last approximation error if the working precision were known in advance.

These considerations are a start at constructing formulae for the relative error in the conversion of conic to quaternion and matrix form via eqs. (2) and (7). However, missing is the constraint that n be a unit vector (that the matrix be orthonormal). Without this constraint the predictions of errors will be pessimistic. Interacting with the constraint is the usual simplistic assumption that the worst happens in every operation (i.e., errors never cancel). This usual assumption makes analysis possible but potentially irrelevant. In fact, errors in the matrix and quaternion components after conversion depend closely on the values of the inputs and, empirically, do not exhibit predictable data-independent structure.

Turning to a slightly grosser examination of the structure of rotation operations, consider the implementation used in the work below for the two competing equations: quaternion-mediated rotation (eq. (6)) and matrix-mediated rotation (multiplication of a vector by the matrix in eq. (7)). The quaternion formula is the sum of three vectors (the original and two cross products), whereas the matrix formula is a matrix-vector product. Going through the code for this operation, we find that the x′ component of the result vector (x′, y′, z′) after applying the quaternion formula for quaternion (λ, Λx, Λy, Λz) to the input vector (x, y, z) is computed by operations applied at their usual precedence in the following formula:

x′ = x + 2λ(Λy z − Λz y) − 2Λz(Λz x − Λx z) + 2Λy(Λx y − Λy x)   (15)

and similarly for the y′ and z′ components. Both (x, y, z) and Λ are unit vectors, and λ is a cosine.

The operations implementing the corresponding component by the matrix multiplication formula are

x′ = (2(λ² + Λx²) − 1)x + 2(ΛxΛy − λΛz)y + 2(ΛxΛz + λΛy)z   (16)

and similarly for the y′ and z′ components.

There is nothing in these two formulae to predict the significant statistical difference (especially under the more predictable truncation approximation) in the performance of these two methods (see, e.g., Fig. 6).

Pressing on, consider a still grosser description of the structure. Looking at eqs. (4) and (8), say that the error in quaternion and matrix (caused perhaps by truncation) is described as a scaling of the whole quaternion or matrix (by s = (1 − kε) for some constant k, if the truncation error is ε). In that case the matrix-transformed vector is in error by a factor of s, or an additive error of kε, and the quaternion-transformed vector by a factor of s² (because of the conjugation), or an additive factor of 2kε (to first order). Iterated matrix or quaternion self-multiplication exponentially damps the individual matrix and quaternion elements, which otherwise vary sinusoidally as the iteration proceeds. Under iterated application of the scaled matrix, the direction of the resulting vector should be unaffected, but its length should shrink, decaying exponentially to zero.

In fact, this level of explanation suffices to explain the data resulting from a single rotation (Fig. 6): the quaternion errors are approximately twice the matrix errors for a single rotation under truncation. It also correctly predicts the behavior of matrix and quaternion elements under iteration of the same rotation (Fig. 3). It correctly predicts the form of error for the matrix transformation under iteration of the same rotation (Fig. 8), and forms part of the explanation for quaternion transformation error (see Section 4.4). Exercise: In Fig. 3 the period of the matrix element is half that of the quaternion element. This result is expected from inspection of eqs. (8) and (2), but looks inconsistent with eq. (7). Is it?

4.3 Single Rotations

First, some sample single rotations were applied:

v₁ = R ∘ v   (17)

Figure 3 About Here

Figure 3: The components of the quaternion and matrix are scaled and shifted sinusoids. Here, representative quaternion and matrix element amplitudes decay exponentially through time under iterative rotation with truncation. Under rounding, the cumulative decay is negligible. Dotted line: matrix element [0][0]. Solid line: quaternion element Λx.

Figure 4 About Here

Figure 4: Rotating the unit vector in direction [1, 1, −1] around the axis [1, 1, 1] by varying amounts: (a) 0.1 rad, (b) 1.0 rad, (c) 1.7 rad. The errors are plotted logarithmically against the number of bits of mantissa precision, under truncation. Hollow and dark squares: direction error with quaternion and matrix representations. Star and cross: length error with quaternion and matrix representations. Variations in error magnitude and its rank ordering result from varying rotation magnitude. The largest error at 22 bits precision is 0.0001 (the smallest representable number is about 10⁻⁹).

Fig. 4 shows the results, comparing the conic (eq. (6)) with the matrix rotation computation under truncation. In general, increasing the rotation magnitude does not increase error, the error magnitudes are not predictably ordered, and the error magnitude is at worst 10⁻⁶ with 22 bits of mantissa precision.

Next, 100 individual random rotation-random vector pairs were generated, with the rotation applied to the vector at a single precision (here, as throughout the paper until Section 4.8, 10 bits) under both truncation (case RaTr) and rounding (case RaRd). (Statistics for several cases are collected in Table 2 in Section 4.6.)

The results are shown in two different styles in Figs. 5, 6, and 7. Fig. 5 displays length and direction errors of the 100 trials under truncation, indexed along the X axis although they are each independent. As predicted in Section 4.2, the matrix representation outperforms quaternions by a factor of about two. The length errors under truncation show the nonzero mean one would expect. The same data are presented in Fig. 6 as two projections of the three-dimensional error vectors, all oriented so that the true answer lies along the Z axis, facing down, with its point at the origin, aiming in the −Z direction. The X-Y projection shows how the direction errors arise, and the X-Z projection shows how the length errors arise (with the approximate vectors all being too short). Exercise: Construct the rotation that points a given (x, y, z) vector along the Z axis, (a) from rotations about the (X, Y, Z) axes; (b) using quaternions.

Figure 5 About Here

Figure 5: Case RaTr. (a) Length errors of 100 different random (rotation, vector) pairs at 10 bits precision under truncation. Circles plot the quaternion-vector errors, X's the matrix-vector errors. (b) As for (a), but direction errors.


Figure 6 About Here

Figure 6: The same case and data as the previous figure, plotted as error vectors, where the correct answer vector points to the origin and lies along the positive Z axis (see text). (a) Error vectors projected onto the X-Z plane. Circles plot the quaternion-vector errors, X's the matrix-vector errors. (b) As for (a), but projection onto the X-Y plane.

Figure 7 About Here

Figure 7: Case RaRd. (a), (b): As the previous figure, with the same input rotations and vectors, but with operations under rounding to 10 bits of precision.

Fig. 7 is similar to Fig. 6, except that rounding is employed (note the scale change). Not only does the length error bias disappear, as expected, but the difference between the methods decreases, with the matrix implementation maintaining a slight advantage.

4.4 Iterated Rotations and Normalization

The second experiment is to iterate the same small rotation of φ around a particular direction, using three rotation composition methods. Quaternion multiplication and matrix multiplication produce iterated quaternion and matrix rotation representations; at the end of the iterations, the resulting rotation representation is applied to the input vector once to determine the rotated vector. A third method, called the vector method here, iteratively applies the identical incremental rotation to the vector at each iterative step, preserving only the rotated vector. There are thus two cases (quaternion and matrix) of composing rotation representations and one case (vector) of composing the application of the rotation to the vector. Previous results indicate that the matrix method is both more accurate and more efficient for applying a single rotation, and empirically the results are much better with this choice as well, so rotations are implemented in the vector method by matrix-vector multiplication.
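The contrast between composing representations and composing applications can be sketched as follows (matrix case only, for brevity; the function names are illustrative, not the paper's). At full precision the two methods agree exactly; the experiments measure how they diverge at reduced precision.

```python
def mat_mul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_vec(A, v):
    """3x3 matrix times 3-vector."""
    return tuple(sum(A[i][k] * v[k] for k in range(3)) for i in range(3))

def matrix_method(R, v, n):
    """Compose n copies of R, then apply the product to v once."""
    M = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    for _ in range(n):
        M = mat_mul(M, R)
    return mat_vec(M, v)

def vector_method(R, v, n):
    """Apply R to the (cumulatively rotated) vector at every step."""
    for _ in range(n):
        v = mat_vec(R, v)
    return v
```

For example, four 90-degree rotations about Z return (1, 0, 0) to itself under both methods.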

These experiments were carried out under both truncation and rounding. Truncation is sometimes used to get at worst-case performance in a numerical analysis sense. Such an approach might be useful in analysis of a non-iterative computational scheme (such as a single rotation), but the cumulative scaling effects described in Section 4.2 overwhelm numerical precision effects. The errors engendered through iterative truncation turn out to be qualitatively similar to those that arise under rounding. These results are thus good for geometric intuition about the methods and illustrate the importance of normalization. Fig. 8(a) and (b) show the (shocking) errors that result if no normalization takes place during the iterated rotation composition, and if the resulting non-normalized product quaternion or matrix rotation is applied to the input vector. This is case IS0 (Iterative Sequence, 0 normalizations). Without normalization, all the representations show cumulative errors that are easy to understand.


Figure 8 About Here

Figure 8: Case IS0. Iterative transformation of the vector [1, 1, −1] about the axis [1, 1, 1] by 1 rad, 200 times. Calculations done with ten bits of mantissa precision under truncation; no normalization of the final representation before applying it to the vector. This choice of input parameters results in a representative performance, but not a normative one. (a) Result vector error projected on the X-Z plane. Solid line: vector representation; dotted line: matrix; dashed line: quaternion. The matrix errors are a very tight helix up the Z axis, with the vector errors only slightly larger. (b) As for (a), but showing the X-Y projection. The matrix and vector errors project to the small dark blob at the origin.

First consider the vector representation. It is acted on at each iteration by an identical but errorful transform of eq. (6). We see it in both views spiraling away from the correct initial value, its length shrinking and its direction error increasing. The performance of the vector method might be improved by a factor of two on average by using the matrix transformation at every step (considering the arguments in Section 4.2 and the results shown in Fig. 6). Another important aspect of this method is its variability in performance for different rotations, arising from the variability in the accuracy of the first calculated rotation (again, Fig. 6).

As predicted in Section 4.2, the matrix representation is affected in its scaling property but not significantly in its rotational properties: there is effectively no directional error, but the result shrinks to less than half its original length in 200 iterations.

The quaternion representation acts, using eq. (6), by adding two cross product vectors to the original input vector; as the quaternion components become increasingly systematically errorful, they affect the result in the interesting way shown in the figure. One effect is the decaying magnitude of the quaternion. The other is that as the cumulative rotation φ increases from zero, the sin(φ/2) weighting of Λ, the vector part of the quaternion, starts at zero and returns to zero with a period of 2π/t, where t is the truncated value of the incremental rotation angle. Thus with that period the effective rotation error varies sinusoidally, and returns to zero as the magnitude of Λ does.

The same data are plotted for 400 iterations this time in Fig. 9. First consider length errors. This presentation emphasizes the exponential decay of the answer's length to zero with the matrix and vector representations. Thus, as the quaternion's magnitude ultimately decays to zero, the result vector will be unaffected by the rotation and will remain equal to the input vector, so its length error stabilizes at zero. The short-period sinusoidal character of the length error arises from the arguments of the last paragraph. The initially increasing envelope arises from the increasing inaccuracy caused by the truncation; the ultimate exponential decay of the envelope arises from the shrinking of the quaternion components.

Considering direction errors, the matrix and conic representations have (locally) linearly increasing errors as they spiral away from the true answer direction. (The linear increase cannot be maintained forever on the orientation sphere, of course.)

Figure 9 About Here

Figure 9: Case IS0 for 400 iterations (see text). (a) Length error of the result vector. Solid line: vector representation; dotted line: matrix; dashed line: quaternion. The vector and matrix errors coincide. (b) As for (a), but showing direction error for the resulting vector. The vector and matrix errors are very close to zero.

Figure 10 About Here

Figure 10: Case IS0 under rounding instead of truncation. Solid line: vector representation; dotted line: matrix; dashed line: quaternion. Though the forms of the errors are similar to the truncation case, their relative magnitudes are more equal. (a) Result vector projected on the X-Z plane. (b) As for (a), but showing the X-Y projection.

The quaternion method's direction error ultimately becomes purely periodic as the answer vector becomes static. The direction error as a function of iteration number t and angle r between the rotation axis and the input vector is then of the form ((r sin t)² + (r − r cos t)²)^(1/2), or the length of a circular chord from a fixed point on the circle to a point moving around the circle at constant velocity. Before that time the direction error exhibits a more complex two-humped form; this arises from the two separate cross product components of the vector increment in eq. (6). Exercise: Would there be any difference in direction and length errors if the rotation is accomplished by conjugation (eq. (4)) instead of eq. (6)?

Next in this sequence, the last two demonstrations are repeated under rounding, not truncation. The expectation is that, due to the better approximation and possible error cancellations arising from high and low approximations, the errors in the representations will be less, but that systematic errors will still exist. The results (Figs. 10, 11) bear this out: the errors often seem like much smaller, noisy versions of the truncation errors. In Fig. 11 the quaternion errors display the fine two-humped structure longer, since it is not overwhelmed as quickly by shrinking quaternion magnitude. Rounding has a dramatically good effect on the systematic shrinking of matrices and quaternions under iterative self-multiplication.

So far we have, purely for intuition, dealt with the pathological case in which no normalization of rotation representations is performed. The arguably pathological case of approximation through truncation yields insight about the structure of systematic approximation errors in an iterative scheme. From here on we abandon truncation approximation (except in Section 4.7).

Figure 11 About Here

Figure 11: Data of the previous figure, showing length and direction errors. Solid line: vector representation; dotted line: matrix; dashed line: quaternion. (a) Length error of the result vector. (b) Direction error.


Figure 12 About Here

Figure 12: Case IS1. Rotating the vector (1, 1, −1) by 1 radian increments about the axis (1, 1, 1), with 10 bits precision under rounding, normalizing the quaternion and matrix representations prior to application to the input vector, and normalizing the final vector representation to its initial length. Solid line: vector representation; dotted line: matrix; dashed line: quaternion. (a) Length error of the result vector. The vector representation has lower variance, and the quaternion exhibits marked periodicity. Only the first 100 iterations are shown; the rest are similar. (b) Direction error.

We now look at two different styles of normalization. In the first, no normalization is performed on intermediate representations, but the final quaternion, matrix, and vector are normalized. In the second, the intermediate quaternion, matrix, and vector are normalized at each step. Normalization of the matrix and quaternion is not dependent on the input data, but the length of the input vector must be computed and remembered for the vector representation to be normalized. This fact might be significant if many vectors are being treated at once.

Fig. 12(a) shows the length error when the final quaternion and matrix are normalized prior to application to the input vector, and the incrementally generated vector representation is normalized to its original length (here known to be unity). This case is called IS1, for Iterative Sequence, 1 normalization. The matrix normalization used is the one-crossproduct form; when only one normalization is being performed, the three types of normalization perform virtually identically. The computations were performed under rounding to the usual ten bits of precision. If its initial length is not remembered, the vector representation cannot be normalized, and its error is the same as in the non-normalized case of Fig. 11, showing a linear error accumulation.

Fig. 12(b) compares direction error for the three representations, normalized as in Fig. 12(a). The comparative performance here is certainly not so clear cut, and the error accumulations resemble random walks. Alternatively, the performance could depend more on the particular choice of input data (rotation and input vector) than on numerical properties of the methods.

Finally, a natural question concerns the effects of normalization applied after every rotation composition. This is case ISN (N normalizations). Fig. 13 shows the results for length and direction. As might be expected, for length error the normalized vector has the best performance (presumably as accurate as machine precision allows). The quaternion result has a systematically negative mean, higher variance, and periodicity. The matrix length error displayed in Fig. 13(a) results from the no-crossproduct form of matrix normalization. The trend is not an illusion: carried out to 400 iterations, the length error shows a marked systematic increase in the negative direction. On the other hand, iteration of the one-crossproduct normalization resulted in a factor of two worsening of the magnitude of the direction errors for the matrix representation. Repeated normalizations seem to have adversely affected the accuracy of the vector representation after many iterations, but more work is needed to tell if this is a repeatable effect. In any event, it seems clear that the repeated normalizations are certainly not improving matters much in this case over case IS1.

Figure 13 About Here

Figure 13: Case ISN. As for the previous figure, but normalizing the rotation representation at each iteration, using no-crossproduct matrix normalization, which improved direction errors but worsened length errors over the one-crossproduct form. Solid line: vector representation; dotted line: matrix; dashed line: quaternion. (a) Length error of the result vector. (b) Direction error.

Figure 14 About Here

Figure 14: Case RS1. As for case IS1 (no normalization of intermediate representations), but using a sequence of random rotations. There is no difference in performance with the different matrix normalizations; no-crossproduct matrix normalization shown. Solid line: vector representation; dotted line: matrix; dashed line: quaternion. (a) Length error of the result vector (first 100 values; the rest are similar). (b) Direction error. The rather confusing results at least have some clear qualitative differences.

The main conclusion here is that the cost of normalizing quaternion or matrix representations after every rotation in a repeated sequence may not be repaid in results, and may in fact be actively harmful. The vector representation seems to do best of the three on length error and worst on direction error. More work (analytic or experimental) would be needed to clarify the performance ranks, or even to recommend or disqualify a representation for this task.

4.5 Concatenated Random Rotations

Here a sequence of random rotations is generated as described earlier, analogous to the sequence of identical rotations in the last experiment, and the rotations are applied cumulatively to an input vector. In each case computation is with 10 bits precision under rounding. The cases of a single final normalization and of normalizing every intermediate representation (called cases RS1 and RSN) were investigated, with both the no- and one-crossproduct forms. The results, comparable to those in cases IS1 and ISN (Figs. 12 and 13), are shown in Figs. 14 and 15.

The results indicate that the vector representation remains better than the other two for length errors, but for direction errors the results are unclear. It is hard to draw clear distinctions either between the normalization methods or the representations. Again, the form of matrix normalization has some annoying effects, which motivate the next section.


Figure 13(a) and 13(b) About Here

Figure 14(a) and 14(b) About Here

Figure 15 About Here

Figure 15: Case RSN. As for case RSI, but normalizing all intermediate representations. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error using one-crossproduct matrix normalization. (b) Direction error using the one-crossproduct form. (c) Direction error using the no-crossproduct form.

Case        μ-vec      μ-mat      μ-quat     σ-vec      σ-mat      σ-quat
RaTr   L       -      0.006225   0.005026      -       0.001181   0.003369
       D       -      0.001787   0.004578      -       0.001094   0.002307
RaRd   L       -     -0.000310  -0.000225      -       0.001087   0.001263
       D       -      0.000965   0.001180      -       0.000542   0.000759
ISI    L    0.000276 -0.000260  -0.000262   0.000466   0.000857   0.001648   (100)
       D    0.003694  0.004063   0.001435   0.001705   0.002298   0.001123
ISN    L    0.000137 -0.001772  -0.000811   0.000533   0.001106   0.001312   (100)
       D    0.005646  0.004844   0.001606   0.002786   0.002872   0.001045
RSI    L   -0.000252 -0.000162  -0.000157   0.000521   0.001612   0.002022   (100)
       D    0.003710  0.005854   0.006065   0.002150   0.003686   0.006065
RSN    L    0.000164 -0.000602  -0.001097   0.000473   0.000703   0.001666   (100)
       D    0.002269  0.009830   0.005960   0.000874   0.004230   0.002236
       D    0.002269  0.005187   0.005960   0.000874   0.002248   0.002236

Table 2: Statistics for length (L) and direction (D) errors for some of the cases illustrated in this section. Cases RaTr and RaRd are the independent random rotations; ISI and ISN are the iterated sequences (one normalization, and normalization of each intermediate representation); and RSI and RSN the random sequences. Only the first 100 data values are used from the iterated rotations, to diminish the effect of longer-term trends. The second RSN D line is with no crossproduct normalization (this affects only the matrix representation).

4.6 Statistical Summary

Table 2 summarizes some of the data from the cases so far.

4.7 Matrix Normalization

Some experiments explored the performance of the three types of matrix normalization (no, one, or two cross products) in the context of iterated identical and random rotations.

The experiments consist of two iterated identical rotations (Figs. 16, 17) under rounding, and two iterated random rotation sequences under truncation and rounding. Fig. 18 gives the direction errors for two sequences of random rotations applied to an input vector.

Figure 16 About Here

Figure 16: Iterated identical rotations of the unit vector in direction (1, 1, -1) by 1 radian about the axis (1, 1, 1), computed to 10 bits under rounding. Solid line - no cross products; dotted line - one cross product; dashed line - two cross products. (a) Length errors. (b) Direction errors.

Figure 17 About Here

Figure 17: As for the previous figure, but the rotation is of the unit vector in direction (1, 2, 3) by iterated rotations of 2 radians about the axis (3, -2, -1).

In no case is there a significant difference between the performances on the length error at 200 iterations, but the one- or two-crossproduct forms significantly outperform the no-crossproduct form over 400 iterations, since the latter exhibits a systematically increasing negative length error and the former do not.

For direction error, few conclusions can be drawn about relative performance under iterated identical rotation sequences. It is perhaps surprising that the no-crossproduct form usually performs here at least as well as the other two over a small number of iterations, and that for random sequences there is no clear winner between normalization types, even approximating with truncation. Thus the symptoms noticed in the last two sections, involving long-term relative effects of the normalization form, were not noticed over 200 iterations. The one- and two-crossproduct forms may be more stable over long iteration, but the issue of optimal normalization remains open.
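The report does not spell out the three normalization constructions exactly; the sketch below is one plausible reading (all function names are mine): "no crossproduct" rescales each row to unit length, "one crossproduct" additionally rebuilds the third row from the first two, and "two crossproducts" rebuilds two rows.

```python
import math

def _unit(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def _cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def normalize_no_cross(R):
    # "No crossproduct" form (assumed): rescale each row to unit length only.
    return [_unit(r) for r in R]

def normalize_one_cross(R):
    # "One crossproduct" form (assumed): unit first row, second row made
    # orthogonal to the first and unit, third row = their cross product.
    r1 = _unit(R[0])
    d = sum(x * y for x, y in zip(R[1], r1))
    r2 = _unit([R[1][i] - d * r1[i] for i in range(3)])
    return [r1, r2, _cross(r1, r2)]

def normalize_two_cross(R):
    # "Two crossproduct" form (assumed): r3 = r1 x r2, then r2 = r3 x r1.
    r1 = _unit(R[0])
    r3 = _unit(_cross(r1, R[1]))
    return [r1, _cross(r3, r1), r3]
```

Under this reading the one- and two-crossproduct forms return exactly orthonormal matrices (to working precision), while the row-rescaling form leaves the rows slightly non-orthogonal, which is at least consistent with the drifting length error reported above.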

4.8 Average Error of Concatenated Random Rotations

Which representation is best, or worst? To wind up this overlong exercise in trying to derive structure from noise, let us consider average behavior for the task of random rotation sequences (perhaps of more general interest than iterated identical rotations). Technical decisions on normalization are based on the conclusions of previous sections: approximation is by rounding, with normalization only after the last rotation. Matrix normalization is done with the one-crossproduct method. Each run is a single experiment like that of Section 4.5, only performed (with the same data) at a number of different precisions.

Such a set of runs yields length and direction error surfaces plotted against the precision

Figure 18 About Here

Figure 18: Direction errors only, for a sequence of random rotations applied to an input vector. Legend as in the previous two figures. (a) Truncation to 10 bits. (b) Rounding to 10 bits.


Figure 19(a)-(d) About Here

Rep        Repeated Rot                     Random Seq
Vec(Mat)   (9N+17)*  (6N+13)+  1 sc        26N*       19N+       N sc
Vec(Quat)  (18N+4)*  (12N)+    1 sc        22N*       12N+       N sc
Quat       (16N+22)* (12N+12)+ 1 sc        (20N+18)*  (12N+12)+  N sc
Matrix     (27N+26)* (18N+19)+ 1 sc        (44N+9)*   (31N+6)+   N sc

Table 4: Operation counts for N-long rotation sequences. The long form of matrix multiplication is used here. Vec(Mat) is the vector representation with intermediate transformation by matrices; Vec(Quat), with transformation by the quaternion method. Multiplications, additions, and sine-cosine pairs are denoted by *, +, and sc.

matrix transformation), since matrix-vector multiplication costs one third of matrix-matrix multiplication.

4.10 Conclusions and Questions

The conclusions we can draw at this point are the following:

1. Numerical simulation of variable precision arithmetic has yielded some tentative conclusions, but in many interesting cases they are rather delicate and depend on niceties of the implementation. Accurate performance predictions may only be possible with faithful simulation of the particular hardware and software at issue.

2. Practical differences in numerical accuracy tend to disappear at reasonable precisions. After 200 iterations of a rotation, 53 bits of mantissa precision (the double representation in the C language) yields no error to 10 decimal places. With 24 bits (the float representation), effects were noticeable in the 5th to 7th decimal place with all methods.

3. Normalization is vital before quaternion or matrix representations are applied to vectors, but is not as important during the composition of representations, and the extra numerical computation of frequent normalization may actually decrease accuracy in the final result.

4. The vector (with matrix transformation) and matrix representations for rotation sequences seem to outperform the quaternion for accuracy.

5. The vector (with matrix transformation) representation for rotation sequences, with a single normalization of vector length at the end, is the most computationally efficient and has the least length error on average.

6. Averaged over many trials, the difference in accuracy of the three methods amounts only to about a single bit of precision for a small number of rotation compositions (say 25 or fewer).

7. Matrices are the least computationally efficient on rotation sequences.



There are also some remaining basic questions:

1. What is a reasonable analytic approach to the numerical accuracy problem for interesting rotational tasks (say, sequences of random rotations under rounding)?

2. How should optimal matrix or quaternion normalization methods be defined and constructed?

5 Discussion

The area of robust computational geometry is becoming popular, with an annual international conference and increasingly sophisticated theoretical formulations. Very little of that work (if any) is related to rotation algorithms. Symbolic algebra systems can help with analysis [4]. A good treatment of computer arithmetic and rounding appears in [15], with an application to detecting line intersection. Serious qualitative representational problems arise when issues of topology (line connectivity) are at stake, and much of the work is on computing line intersection accurately with finite precision arithmetic [19]. For instance, Milenkovic's analysis is based on facts about line intersection, such as the number of extra bits that are necessary to guarantee a specific precision of answer. Some of these facts are surprising: it takes at least three times the input precision to calculate a line intersection location to the same precision [11]. Robust polygon and polyhedron intersection algorithms, as well as various line-sweeping algorithms, are based on accurate line intersection algorithms [8].

Several points of comparison between rotation parameterizations and representations have been raised in the preceding sections. It is perhaps worth repeating that little evidence for dramatic differences in numerical accuracy has emerged, though the evidence might tend to favor matrices slightly and to condemn the vector representation.

One of the common applications of rotational transforms is to transform several vectors at once, either in the context of transforming a set of points around (as in graphics) or in handling coordinate systems represented as n points (n - 1 of them points at infinity representing axis directions, and one representing the origin of coordinates, in a homogeneous coordinate system). In the first case the repeated application of the same transform will be more economical if the work is done in the matrix representation. In the second the same may well be true, but of course the whole point of homogeneous coordinates is to make the projective (a fortiori the affine) transforms into linear ones implemented with matrix multiplication. This means, for example, that coordinate systems, rigid transformations including translation and rotation, and point projective transforms (simple camera models) can all be represented as matrices and composed with matrix multiplication, and that the operation of such transforms on vectors is simply matrix-vector multiplication. In this context homogeneous coordinates by definition require the matrix representation.

If the rotation at issue is a general one (say, specified in conic (φ, n) parameters), then conversion either to the quaternion or matrix form is easy. Thus the representations in


this case do not dictate the semantics, or how one wants to think about the rotation. If the rotation is specified in terms of rotations about (X, Y, Z) coordinate axes, then the matrix representation is easily and robustly constructed.

Real-time applications involving concatenating rotations and applying them to a small number of vectors (perhaps one: the position of a robot hand) need detailed analysis to see which representation is the most efficient. Similarly for more complex geometric calculations.

A common application of rotation representations in vision is to express the motion of the observer. Often the rotational motions, for mechanical or mathematical reasons, are expressed as rotations about some fixed axes, which in practice are usually orthogonal. Examples are the motions induced by pan and tilt camera platforms, or the expression of egomotion in terms of rotations about the observer's local X, Y, Z axes. In this case the matrix representation is the clear choice to maximize efficiency and intuition.

Reconstructing the rotation that transformed known points is a useful vision task that requires fitting data to a model of the transformation. The straightforward route, followed since Roberts' thesis [17], is to do a linear least squares fit to the transform's matrix representation. This approach is perhaps risky, since the matrix elements are not independent and the error being minimized thus may not be the one desired. The alternative of using conic parameters has been successfully used [12] and may well be a better idea. Recently, more sophisticated Kalman filter methods have been used to derive motion from point matches through time [5]. Model-matching with constraints allows a model to be matched to a class of objects, and promising approaches are emerging that use hybrid methods of symbolic algebra (implementing polynomial triangulation to get explicit solutions for known constraints), followed by root-finding and sophisticated minimization techniques to search for minimum-error solutions [13,14].

A less common but important set of applications is to reason about and manipulate rotations themselves. For example, one may want to sample rotation space evenly [9] or to create smoothly interpolated paths in rotation space between a given set of rotations [16]. In these cases the Euler-Rodrigues parameters are the most convenient.

Another example is the use, since 1957, of quaternions in the neurophysiological literature to describe eye kinematics [22]. Models of human three-dimensional oculomotor performance, especially for the vestibulo-ocular reflex and feedback models of saccades, involve computations on rotations that are much more stable when carried out in conic or Euler-Rodrigues parameters using quaternion arithmetic. Subtractive feedback laws adequate for simple (left or right) eye movements do not generalize to arbitrary rotational axes. The quaternion model uses multiplicative feedback, explains human performance better than similar models based on rotation matrices, and leads to explicit predictions such as a four-channel saccade computation [21].


6 Bibliography

[1] S. L. Altmann. Rotations, Quaternions, and Double Groups. Clarendon Press, Oxford, 1986.

[2] C. M. Brown. Gaze controls with interactions and delays. In DARPA Image Understanding Workshop (submitted to IEEE-TSMC), pages 200-218, May 1989.

[3] C. M. Brown. Prediction in gaze and saccade control. Technical Report OUEL 1771/89 (UR TR-295), Oxford University Dept. of Engineering Science (U. Rochester Comp. Sci. Dept.), May 1989.

[4] C. I. Connolly, D. Kapur, J. L. Mundy, and R. Weiss. GeoMeter: a system for modeling and algebraic manipulation. In Proceedings, DARPA Image Understanding Workshop, pages 797-804, May 1989.

[5] O. D. Faugeras, F. Lustman, and G. Toscani. Motion and structure from motion from point and line matches. In Proceedings, International Conference on Computer Vision, pages 25-34, June 1987.

[6] J. D. Foley and A. van Dam. Fundamentals of Interactive Computer Graphics. Addison-Wesley, 1982.

[7] W. R. Hamilton. On quaternions; or on a new system of imaginaries in algebra. Philosophical Magazine, 3rd Ser., 25:489-495, 1844.

[8] C. Hoffmann and J. Hopcroft. Towards implementing robust geometric computations. In Proceedings, ACM Symposium on Computational Geometry, 1988.

[9] B. K. P. Horn. Robot Vision. MIT Press/McGraw-Hill, 1986.

[10] K. Kanatani. Group Theory in Computer Vision. Springer-Verlag, 1989.

[11] V. J. Milenkovic. Robust geometric computations for vision and robotics. In Proceedings, DARPA Image Understanding Workshop, pages 764-773, May 1989.

[12] J. L. Mundy. Private communication, August 1989.

[13] J. L. Mundy. The application of symbolic algebra techniques to object modeling. Lecture, University of Oxford, August 1989.

[14] J. L. Mundy. Symbolic representation of object models. In preparation, August 1989.

[15] T. Ottmann, G. Thiemt, and C. Ullrich. Numerical stability of geometric algorithms. In Proceedings, ACM Symposium on Computational Geometry, 1987.

[16] K. S. Roberts, G. Bishop, and S. K. Ganapathy. Smooth interpolation of rotational motion. In Computer Vision and Pattern Recognition 1988, pages 724-729, June 1988.

[17] L. G. Roberts. Machine perception of three-dimensional solids. In J. T. Tippett et al., editors, Optical and Electro-Optical Information Processing, pages 159-197. MIT Press, 1965.

[18] O. Rodrigues. Des lois géométriques qui régissent les déplacements d'un système solide dans l'espace, et de la variation des coordonnées provenant de ses déplacements considérés indépendamment des causes qui peuvent les produire. Journal de Mathématiques Pures et Appliquées, 5:380-440, 1840.

[19] D. Salesin, J. Stolfi, and L. Guibas. Epsilon geometry: building robust algorithms from imprecise computations. In Proceedings, ACM Symposium on Computational Geometry, 1989.

[20] R. Taylor. Planning and execution of straight line manipulator trajectories. In J. M. Brady, J. M. Hollerbach, T. L. Johnson, T. Lozano-Pérez, and M. T. Mason, editors, Robot Motion: Planning and Control. MIT Press, 1982.

[21] D. Tweed and T. Vilis. Implications of rotational kinematics for the oculomotor system in three dimensions. Journal of Neurophysiology, 58(4):832-849, 1987.

[22] G. Westheimer. Kinematics of the eye. Journal of the Optical Society of America, 47:967-974, 1957.


1 Rotation Parameters and Representations

One of the better treatments of the representation of the rotation group SO(3) (the special orthogonal group of order 3) is [1], and there is also a treatment in [10]. This section will briefly define the main rotation parameterizations and identify two as the most useful for computer vision. Rotations can be visualized in their alias or alibi aspect. In their alias aspect they rotate the coordinate system in which points are described; in the alibi aspect the descriptive coordinate system is fixed and the points are, as it were, physically rotated with respect to it. When there is necessity, we assume henceforth that we are talking about the alibi aspect. There are four main parameterizations.

Conical Four numbers specifying the 3-vector direction n of a rotation axis and the angle φ through which to rotate. Since n can be made a unit vector, these four numbers are seen to be redundant.

Euler Angles Three angles (θ, φ, ψ). Usually they are interpreted as the sequential rotation of space (including the coordinate axes), first about the Z axis by ψ, then about the new Y axis by φ, and last about the newest Z axis by θ. The idea extends to other choices of axes. Euler angles are a nonredundant representation.

Cayley-Klein Two complex numbers (α, β), actually parameterizing the SU(2) (Special Unitary, order 2) group.

Euler-Rodrigues Four numbers specifying a real λ and vector Λ, related to the conical parameters by [λ, Λ] = [cos(φ/2), sin(φ/2) n].

Probably the most familiar representation for SO(3) in the computer vision, robotics, and graphics communities is that of orthonormal 3 × 3 matrices, through which vector rotation is accomplished by matrix (that is, matrix-vector) multiplication, and rotation composition is simply multiplication of the rotation matrices. Such matrices are more or less easily constructed from the rotation parameters listed here, but their nine redundant numbers are not a satisfactory rotation parameterization.

Euler angles are common in the robotics literature, probably because by definition they provide a constructive description of how to achieve a particular general rotation through specific rotations about two axes. The elegant use of only two axes has practical consequences - for example, a spacecraft can be arbitrarily oriented using only two rigidly mounted thrusters. However, there are severe computational problems (ambiguities, conventions) with Euler angles: the composition of rotations is not easily stated using them, the rotation they represent is not obvious by inspection, and there are deeper mathematical infelicities, some being addressed by specifying rotations about all three axes, not just two, and some perhaps not being reparable.

Cayley-Klein parameters are usually manipulated in the form of 2 × 2 matrices, where they appear redundantly in conjugate form, and describe rotations in terms of the stereographic projection of 3-space onto a plane using homogeneous coordinates. They are isomorphic to the Euler-Rodrigues parameters, but their geometrical interpretation is difficult to visualize.

Euler-Rodrigues parameters are given by a particular semantics for normalized quaternions. After normalization, the four quaternion components are the Euler-Rodrigues parameters. Quaternions were invented by Hamilton [7] as an extension of complex numbers (the goal was an algebra of 3-vectors allowing multiplication and division), but were immediately (within a day) recognized by him as being intimately related to rotations. It is said that every good idea is discovered, possibly several times, before ultimately being invented by someone else. In this case quaternions and their relation to rotations were earlier discovered by Rodrigues [18]. He also discovered the construction proving that the composition of two rotations is also a rotation, later invented by Euler and today bearing Euler's name. More on this interesting story appears in [1].

The most relevant representations for computer vision are conical parameters, quaternions (Euler-Rodrigues parameters), and of course 3 × 3 orthonormal matrices.

The topology of SO(3) is of interest, as well as its parameterization. The first step in its visualization is to note that the four Euler-Rodrigues parameters form a unit 4-vector. As such they define points lying in the 3-sphere (the unit sphere in 4-space), and every point on the spherical surface defines a rotation. The next intuition is that a rotation (φ, n) is equivalent to the rotation (-φ, -n). Thus each rotation is represented twice on the surface of the sphere, and to remove this ambiguity and achieve a one-to-one mapping of rotations onto 4-space points, opposite points (usually opposite hemispheres and half-equators) on the sphere must be identified. This turns the surface into a cross-cap, or projective plane (Fig. 1). Another isomorphism relates the rotations to points in a solid parameter ball in 3-space. The radial distance of a point (from 0 to π) represents the magnitude of the rotation angle, and its direction from the origin gives the axis of rotation. In this case just the skin of one hemisphere and half the equator must be identified with the other half, and all null rotations (in whatever direction) map to the origin.

Many of the infelicities surrounding calculations with rotations arise from the topology of SO(3). One consequence is that there are two inequivalent paths from any rotation to any other, not unlike the paths between points on a cylinder (Fig. 2). The paths cannot be shrunk to one another (consider the two paths from a point to itself, one that goes all the way around the cylinder and one that does not), and clearly have different properties. Consider the path that minimizes time or energy expended to push a mass from point A through the points in the order B, C. It is easy to imagine that a better path exists in direction d2 than d1. Path planning (and interpolation) in rotation space must consider the topology [16].

Another aspect of the connectivity is that rotations must be represented either redundantly or discontinuously. Consider the visualization of the projective plane in which the upper hemisphere and half the equator of the 3-sphere is isomorphic with the rotations, and the lower hemisphere and half-equator is identified with diametrically opposite points. A smooth path crossing the equator moves directly from the rotation (for example) (π, n) (in conic parameters) to the rotation (π - ε, -n), a striking discontinuity of reversal in


Figure 1 About Here

Figure 1: By identifying (gluing together) parallel sides in the directions indicated, various more or less familiar surfaces can be formed. The Klein bottle and projective plane cannot be physically realized in three-space. The SO(3) topology is that of a projective plane.

Figure 2 About Here

Figure 2: One consequence of SO(3) being non-simply connected is the inequivalence between the paths A to B in direction d1 and A to B in direction d2. Any two paths between two points on a simply-connected surface can be shrunk to be equivalent, but that is not the case on the cylinder shown here or on the projective plane.

the direction vector. If the entire 3-sphere is used to represent rotations, this problem disappears, with (π, n) being followed by (π + ε, n). However, this same rotation is still also represented by the point (π - ε, -n), so the isomorphism is lost. The special cases created by potentially duplicated representation complicate algorithms and symbolic manipulations. For related reasons, representations of straight lines are plagued with the same problems (an observation due to Joe Mundy).

2 Basic Quaternion and Matrix Formulae

2.1 Quaternion Representation

Quaternions are well discussed in the mathematical and robotics literature; an accessible reference is [9], though there are several others. This brief outline is restricted to those properties most directly related to our central questions. Quaternions embody the Euler-Rodrigues parameterization of rotations directly, and thus are closely related to the conic representation of rotation.

If the Euler-Rodrigues parameters of a rotation are (λ, Λ), the quaternion that represents the rotation is a 4-tuple, best thought of as a scalar and a vector part:

Q = [λ, Λ]   (1)

Actually the vector part is a pseudovector, transforming to itself under inversion like the cross product, but that distinction will not bother us.

If a rotation in terms of its conic parameters is R(φ, n), the corresponding quaternion is

Q = [cos(φ/2), sin(φ/2) n]   (2)

The multiplication rule for quaternions is

[λ1, Λ1][λ2, Λ2] = [λ1 λ2 - Λ1 · Λ2,  λ1 Λ2 + λ2 Λ1 + Λ1 × Λ2]   (3)


A real quaternion is of the form [a, 0] and multiplies like a real number. A pure quaternion has the form [0, A]. A unit quaternion is a pure quaternion [0, n] with n a unit vector. The conjugate of a quaternion A = [a, A] is the quaternion A* = [a, -A]. The norm of a quaternion A is the square root of AA*, which is seen to be the square root of a² + A · A. A quaternion of unit norm is a normalized quaternion, and it is normalized quaternions that represent rotations. A quaternion [a, A] = [a, An] can satisfy the normalization condition if a = cos(α), A = sin(α). This gives a Hamilton quaternion, or versor, which we shall not use. Better for our purposes is to write a = cos(φ/2), A = sin(φ/2). The conjugate inverts the rotation axis and thus denotes the inverse rotation. The inverse quaternion of A is A⁻¹ such that AA⁻¹ = [1, 0]. For a normalized quaternion, the inverse is the conjugate. The product of two normalized quaternions is a normalized quaternion, a property we desire for a rotation representation. Under the Rodrigues parameterization, a normalized quaternion represents a rotation, quaternion multiplication corresponds to composition of rotations, and quaternion inverse is the same as quaternion conjugation.
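As a concrete check of these rules, here is a minimal sketch (list-based, function names mine) of the product of eq. (3), the conjugate, and normalization, with a quaternion [a, A] stored as [a, Ax, Ay, Az]:

```python
import math

def qmul(p, q):
    # Product rule of eq. (3): [a1,A1][a2,A2] = [a1 a2 - A1.A2, a1 A2 + a2 A1 + A1 x A2]
    a1, A1 = p[0], p[1:]
    a2, A2 = q[0], q[1:]
    dot = sum(x * y for x, y in zip(A1, A2))
    cross = [A1[1]*A2[2] - A1[2]*A2[1],
             A1[2]*A2[0] - A1[0]*A2[2],
             A1[0]*A2[1] - A1[1]*A2[0]]
    return [a1*a2 - dot] + [a1*A2[i] + a2*A1[i] + cross[i] for i in range(3)]

def qconj(q):
    # Conjugate [a, -A]; for a normalized quaternion this is the inverse.
    return [q[0], -q[1], -q[2], -q[3]]

def qnormalize(q):
    n = math.sqrt(sum(c * c for c in q))
    return [c / n for c in q]
```

For example, composing the 90-degree rotation about Z with itself yields the quaternion of the 180-degree rotation about Z, and a normalized quaternion times its conjugate yields the real quaternion [1, 0].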

One way to apply a rotation represented by a quaternion to a vector is to represent the vector by a pure quaternion and to conjugate it with the rotation quaternion. That is, to rotate a given vector r by A = [cos(φ/2), sin(φ/2) n], form a pure quaternion r = [0, r]; the resulting vector is the vector part of the pure quaternion

r′ = A r A*   (4)

From the quaternion multiplication rule the conical transformation follows, though it can also be derived in other ways. It describes the effect of rotation upon a vector r in terms of the conical rotation parameters:

r′ = cos φ r + sin φ (n × r) + (1 - cos φ)(n · r) n   (5)

For computational purposes, neither conjugation nor the conical transform is as quick as the clever formula (note the repeated subexpression) given in [9], arising from applying vector identities to eq. (5). Given a vector r and the normalized quaternion representation [a, A], the rotated vector is

r′ = r + 2(a(A × r) - (A × r) × A)   (6)
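Both transforms are short enough to implement and check against each other (a sketch; function names are mine). Note how eq. (6) reuses the subexpression A × r:

```python
import math

def rotate_conical(r, n, phi):
    # Conical transform, eq. (5): r' = cos(phi) r + sin(phi)(n x r) + (1 - cos(phi))(n.r) n
    c, s = math.cos(phi), math.sin(phi)
    cross = (n[1]*r[2] - n[2]*r[1], n[2]*r[0] - n[0]*r[2], n[0]*r[1] - n[1]*r[0])
    d = n[0]*r[0] + n[1]*r[1] + n[2]*r[2]
    return tuple(c*r[i] + s*cross[i] + (1 - c)*d*n[i] for i in range(3))

def rotate_quat(r, a, A):
    # Fast form, eq. (6): r' = r + 2(a (A x r) - (A x r) x A),
    # computing the repeated subexpression A x r only once.
    Axr = (A[1]*r[2] - A[2]*r[1], A[2]*r[0] - A[0]*r[2], A[0]*r[1] - A[1]*r[0])
    AxrxA = (Axr[1]*A[2] - Axr[2]*A[1],
             Axr[2]*A[0] - Axr[0]*A[2],
             Axr[0]*A[1] - Axr[1]*A[0])
    return tuple(r[i] + 2 * (a * Axr[i] - AxrxA[i]) for i in range(3))
```

With a = cos(φ/2) and A = sin(φ/2) n, the two routines agree to rounding error, which is a handy sanity check on either implementation.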

The matrix representation of a rotation represented by conic parameters (φ, n) can be written explicitly in terms of those parameters, and deriving it is a well-known exercise for robotics students. It can be found, for example, in [1]. For computational purposes it is easier to use the following result. The matrix corresponding to the quaternion rotation representation [λ, Λx, Λy, Λz] is

    [ λ² + Λx² - Λy² - Λz²    2(ΛxΛy - λΛz)           2(ΛxΛz + λΛy)        ]
    [ 2(ΛyΛx + λΛz)           λ² - Λx² + Λy² - Λz²    2(ΛyΛz - λΛx)        ]   (7)
    [ 2(ΛzΛx - λΛy)           2(ΛzΛy + λΛx)           λ² - Λx² - Λy² + Λz² ]
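A direct transcription of eq. (7) (function names are mine), together with a plain matrix-vector product to check that the matrix acts like the quaternion it came from:

```python
def quat_to_matrix(l, Ax, Ay, Az):
    # Eq. (7) for a normalized quaternion [l, (Ax, Ay, Az)].
    return [
        [l*l + Ax*Ax - Ay*Ay - Az*Az, 2*(Ax*Ay - l*Az),            2*(Ax*Az + l*Ay)],
        [2*(Ay*Ax + l*Az),            l*l - Ax*Ax + Ay*Ay - Az*Az, 2*(Ay*Az - l*Ax)],
        [2*(Az*Ax - l*Ay),            2*(Az*Ay + l*Ax),            l*l - Ax*Ax - Ay*Ay + Az*Az],
    ]

def mat_apply(M, r):
    # Matrix-vector product: 9 multiplications, 6 additions.
    return [sum(M[i][j] * r[j] for j in range(3)) for i in range(3)]
```

For the 90-degree rotation about Z (λ = cos 45°, Λz = sin 45°) the resulting matrix sends (1, 0, 0) to (0, 1, 0), and its rows are orthonormal, as an orthonormal rotation matrix must be.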

2.2 Matrix Representation

This representation is the most familiar to graphics, computer vision, and robotics workers, and is given correspondingly short shrift here. A rotation is represented by an orthonormal 3 × 3 matrix. This rotation component occupies the upper left-hand corner of the usual 4 × 4 homogeneous transform matrix for three-dimensional points expressed as homogeneous 4-vectors. The matrix representation is highly redundant, representing as it does three quantities with nine numbers.

In this section capital letters are matrix variables. The primitive rotation matrices are usually considered to be those for rotation about coordinate axes. For example, to rotate a (column) vector r by φ about the X axis we have the familiar

    r′ = [ 1    0        0      ]
         [ 0    cos φ   -sin φ  ] r   (8)
         [ 0    sin φ    cos φ  ]

with related simple matrices for the other two axes. The composition of rotations A, B, C is written r′ = CBAr, which is interpreted either as performing rotations A, B, C in that order, each being specified with respect to a fixed (laboratory) frame, or performing them in the reverse order C, B, A, with each specified in the coordinates that result from applying the preceding rotations to the preceding coordinate frame, starting with C acting on the laboratory frame.

Conversion methods from other rotation parameterizations (like Euler angles) to the matrix representation appear in [1]. Generally the matrices that result are of a moderate formal complexity, but with pleasing symmetries of form arising from the redundancy in the matrix. In this work all we shall need is eq. (7).

Infinitesimal rotations are interesting. The skew-symmetric component of a rotation matrix A is S = A - Aᵀ, and it turns out that

    A = exp(S) = I + S + (1/2) S² + ···   (9)

A little further work establishes that if the conical representation of a rotation is (φ, (nx, ny, nz)) and

    Z = [  0    -nz    ny ]
        [  nz    0    -nx ]   (10)
        [ -ny    nx    0  ]

then A = exp(φZ) = I + (sin φ)Z + (1 - cos φ)Z², from which the explicit form of the 3 × 3 matrix corresponding to the conic representation can be found.

Writing eq. (10) as

    Z = nx [ 0  0  0 ]  +  ny [  0  0  1 ]  +  nz [ 0 -1  0 ]
           [ 0  0 -1 ]        [  0  0  0 ]        [ 1  0  0 ]   (11)
           [ 0  1  0 ]        [ -1  0  0 ]        [ 0  0  0 ]


shows that infinitesimal rotations can be expressed elegantly in terms of orthogonal infinitesimal rotations. It is of course no accident that the matrix weighted by nx in eq. (11) is the derivative of the matrix in eq. (8) evaluated at φ = 0. Further, it will be seen that if v′ is the vector arising from rotating vector v by the rotation (δφ, n), then each component of v′ is just the sum of various weighted components of v. Thus infinitesimal rotations, unlike general ones, commute.

3 Operation Counts

Given the speed and word length of modern computers, including those used in embedded applications like controllers, concern about operation counts and numerical accuracy of rotational operations may well be irrelevant. However, there is always a bottleneck somewhere, and in many applications, such as control, graphics, and robotics, it can still happen that computation of rotations is time consuming. Small repeated incremental rotations are especially useful in graphics and in some robot simulations [6, 2, 3]. In this case the infinitesimal rotations of the last section can be used if some care is exercised. To compute the new x′ and y′ that result from a small rotation θ around the Z axis, use

    x′ = x - y sin θ
    y′ = y + x′ sin θ   (12)

The use of x′ in the second equation maintains normalization (unit determinant of the corresponding rotation matrix). For graphics, cumulative error can be fought by replacing the rotated data by the original data when the total rotation totals 2π [6].
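A sketch of the update (the function name is mine). The point of using the new x′ in the second line is that the update matrix [[1, -s], [s, 1 - s²]] has determinant exactly 1, so the orbit cannot systematically spiral inward or outward:

```python
import math

def spin(x, y, theta, steps):
    # Iterate eq. (12): x' = x - y sin(theta); y' = y + x' sin(theta).
    # det [[1, -s], [s, 1 - s*s]] = (1 - s*s) + s*s = 1 exactly.
    s = math.sin(theta)
    for _ in range(steps):
        x = x - y * s
        y = y + x * s
    return x, y
```

The orbit is an ellipse very close to the unit circle (the deviation is of order sin θ), not an exact circle, which is one reason [6] suggests resetting to the original data once the accumulated rotation reaches 2π.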

Operation counts of course depend not just on the representation but on the algorithm implemented. As the application gets more complex, it becomes increasingly likely that ameliorations in the form of cancellations, appropriate data structures, etc. will arise. Even simple operations sometimes offer surprising scope for ingenuity (compare eqs. (5) and (6)). Thus it is not clear that listing the characteristics of a few primitive operations is guaranteed to be helpful (I shall do it nevertheless). Realistic prediction of the savings possible with one representation or the other may require detailed algorithmic analysis. Taylor [20] works out operation counts for nine computations central to his straight-line manipulator trajectory task. In these reasonably complex calculations (up to 112 adds and 156 multiplies in one instance), quaternions often cut down the number of operations by a factor of 1.5 to 2. Taylor also works out operation counts for a number of basic rotation and vector transformation operations. This section mostly makes explicit the table appearing in Taylor's chapter detailing those latter results; Section 4.9 gives counts for iterated (identical and different) rotations.

3.1 Basic Operations

Recall the representations at issue

Conic: (φ, n), with axis n and angle φ.


Quaternion: [λ, Λ], with λ = cos(φ/2), Λ = sin(φ/2) n.

Matrix: a 3 × 3 matrix.

For 3-vectors, if multiplication includes division, we have the following:

Vector Addition: 3 additions.

Scalar-Vector Product: 3 multiplications.

Dot Product: 3 multiplications, 2 additions.

Cross Product: 6 multiplications, 3 additions.

Vector Normalization: 6 multiplications, 2 additions, 1 square root.

3.2 Conversions

Conic to Quaternion: One division to get φ/2, one sine and one cosine operation, three multiplications to scale n. Total: 4 multiplications, 1 sine, 1 cosine.

Quaternion to Conic: Although it looks like one arctangent is all that is needed here (after dividing the quaternion through by λ), that seems to lose track of the quadrants. Best seems simply finding φ/2 by an arccosine and a multiplication by 2.0 to get φ, then computing sin(φ/2) and dividing the vector part by that. Total: 1 arccosine, 1 sine, 4 multiplications. Special care is needed here for small (especially zero) values of φ.
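The two conversions just described can be sketched as follows (Python, with invented names; the guard for small φ reflects the special care noted above):

```python
import math

def conic_to_quat(phi, n):
    """(phi, n) -> [lam, Lam] with lam = cos(phi/2), Lam = sin(phi/2) n."""
    h = 0.5 * phi
    s = math.sin(h)
    return math.cos(h), (s * n[0], s * n[1], s * n[2])

def quat_to_conic(lam, Lam):
    """Recover phi/2 via an arccosine (quadrant-safe), then unscale the
    vector part by sin(phi/2).  Degenerate when phi is near zero."""
    h = math.acos(max(-1.0, min(1.0, lam)))
    phi = 2.0 * h
    s = math.sin(h)
    if s < 1e-12:                 # phi near zero: axis undefined; pick one
        return phi, (1.0, 0.0, 0.0)
    return phi, (Lam[0] / s, Lam[1] / s, Lam[2] / s)
```

A round trip conic → quaternion → conic recovers (φ, n) to machine precision away from the φ = 0 degeneracy.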

Quaternion to Matrix: Eq. (7) is the basic tool. Although the matrix is orthonormal (thus one can compute a third row from the other two by vector cross product), that constraint does not help in this case. Computing all the necessary products takes 13 multiplications, after which 8 additions generate the first two rows. The last row costs only another 5 additions working explicitly from eq. (7), compared to the higher cost of a cross product. Total: 13 multiplications, 13 additions.

Conic to Matrix: One way to proceed is to compute the matrix whose elements are expressed in terms of the conic parameters (referred to in Section 2). Among other things it requires more trigonometric evaluations, so best seems the composition of conic-to-quaternion and quaternion-to-matrix conversions, for a total of 17 multiplications, 13 additions, 1 sine, 1 cosine.

Matrix to Quaternion: The trace of the matrix in eq. (7) is seen to be 4λ² − 1, and differences of symmetric off-diagonal elements yield quantities of the form 2λΛ_z, etc. Total: 5 multiplications, 6 additions, 1 square root.
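A sketch of the trace-based extraction (the names are mine; it assumes λ ≠ 0, i.e., the rotation angle is not π, and it realizes the stated count of 5 multiplications, 6 additions, and 1 square root if the division is counted as a multiplication):

```python
import math

def matrix_to_quat(M):
    """Trace(M) = 4*lam**2 - 1, so lam = sqrt(Trace + 1)/2; differences of
    symmetric off-diagonal elements are proportional to lam*Lam_i."""
    tr = M[0][0] + M[1][1] + M[2][2]          # 2 additions
    lam = 0.5 * math.sqrt(tr + 1.0)           # 1 multiply, 1 add, 1 sqrt
    k = 0.25 / lam                            # 1 divide (counted as multiply)
    Lam = ((M[2][1] - M[1][2]) * k,           # 3 multiplies, 3 subtractions
           (M[0][2] - M[2][0]) * k,
           (M[1][0] - M[0][1]) * k)
    return lam, Lam
```

Applied to the matrix of a 90-degree rotation about z, this returns λ = cos(π/4) and Λ = (0, 0, sin(π/4)).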

Matrix to Conic: Easiest seems matrix to quaternion to conic.

3.3 Rotations

Vector by Matrix: Three dot products. Total: 9 multiplications, 6 additions.


Vector by Quaternion: Implement eq. (6). Total: 18 multiplications, 12 additions.

Compose Rotations (Matrix Product): Here it pays to exploit orthonormality rather than simply implementing matrix multiplication with nine dot products. Six dot products and one cross product are necessary. Total: 24 multiplications, 15 additions. This method uses only 2/3 of the information in the matrices and may be suspect for inexact representations (see below on normalization).

Compose Rotations (Quaternion Product): Implementing eq. (3) directly takes one dot product, one cross product, two vector additions, two vector-scalar products, one multiply, and one add. Total: 16 multiplications, 12 additions.
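Eq. (3) is not reproduced in this excerpt, but the standard quaternion product it denotes, in the [λ, Λ] notation used here, can be sketched with exactly the stated operation count (16 multiplies, 12 adds):

```python
def quat_mult(q1, q2):
    """Compose rotations: [l1, L1][l2, L2] =
    [l1*l2 - L1.L2,  l1*L2 + l2*L1 + L1 x L2]."""
    l1, (a1, b1, c1) = q1
    l2, (a2, b2, c2) = q2
    l = l1 * l2 - (a1 * a2 + b1 * b2 + c1 * c2)      # 1 mult + dot product
    L = (l1 * a2 + l2 * a1 + (b1 * c2 - c1 * b2),    # scalar products,
         l1 * b2 + l2 * b1 + (c1 * a2 - a1 * c2),    # cross product,
         l1 * c2 + l2 * c1 + (a1 * b2 - b1 * a2))    # vector additions
    return l, L
```

Composing two 90-degree rotations about the same axis yields the quaternion of a 180-degree rotation, as a quick sanity check.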

Rotate around X, Y, or Z Axis: To do the work of eq. (8) requires 1 sine, 1 cosine (unless they are already known), 4 multiplications, and 2 additions.

3.4 Normalization

In the conic representation one has little choice but to normalize the axis direction vector to unit norm and leave the rotation angle alone. In the quaternion and matrix representations the direction and magnitude are conflated in the parameters, and normalization of the parameters affects them both.

Quaternion: Simply dividing all quaternion components through by the norm of the quaternion costs 8 multiplies, 3 additions, and one square root.

Matrix: An exact rotation matrix is orthonormal. Three simple normalization algorithms suggest themselves. In the no-crossproduct method, each matrix row (in this paragraph meaning row or column) is normalized to a unit vector. Cost: 12 multiplies, 6 additions, 3 square roots. No rows are guaranteed orthogonal. The one-crossproduct method normalizes two rows and computes the third as their cross product. Cost: 18 multiplies, 7 additions, 2 square roots. This method does not guarantee the first two rows are orthogonal. The two-crossproduct method performs the one-crossproduct normalization, then recomputes the first row as the cross product of the second and third, say, for a total of 23 multiplies, 10 additions, and 2 square roots. Neither of the last two methods uses information from one matrix row.

The best normalization procedure, by definition, should return the orthonormal matrix closest to the given one. The above procedures are approximations to this ideal. One idea might be to find the principal axis direction (the eigenvector corresponding to the largest real eigenvalue) and call that the normalized rotation axis. An exact rotation matrix M has one eigenvalue of unity and two conjugate complex ones. What, then, of the rotation angle? In the errorless matrix, cos(φ) = (Trace(M) − 1)/2, and the trace is invariant under the eigenvalue calculation. Thus the angle can be calculated from three of the matrix elements. But does this choice minimize any reasonable error in rotation space? Indeed, what error should be minimized? The work here uses the no- and one-crossproduct forms, tries some empirical investigation in Section 4.7, and leaves the large question of optimal normalization as a research issue. Exercise: find the eigenvalues and eigenvectors of the matrix in eq. (8).
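The three normalization schemes can be sketched as follows (NumPy for brevity, so these lines do not reproduce the hand operation counts above; the function names are mine):

```python
import numpy as np

def normalize_no_cross(M):
    """No-crossproduct: normalize each row to unit length.
    Rows are not guaranteed mutually orthogonal afterwards."""
    return M / np.linalg.norm(M, axis=1, keepdims=True)

def normalize_one_cross(M):
    """One-crossproduct: normalize two rows, replace the third by their
    cross product (the first two rows may still fail to be orthogonal)."""
    r0 = M[0] / np.linalg.norm(M[0])
    r1 = M[1] / np.linalg.norm(M[1])
    return np.vstack([r0, r1, np.cross(r0, r1)])

def normalize_two_cross(M):
    """Two-crossproduct: one-crossproduct step, then recompute the first
    row as the cross product of the second and third."""
    N = normalize_one_cross(M)
    return np.vstack([np.cross(N[1], N[2]), N[1], N[2]])
```

For a slightly perturbed rotation matrix, the two-crossproduct result is orthonormal to within the square of the perturbation, while the no-crossproduct result only has unit rows.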


Operation               Quaternion             Matrix
Rep → Conic             4×, 1 acos, 1 sin      9×, 6+, 1 acos, 1 sin, 1 √
Conic → Rep             4×, 1 sin, 1 cos       17×, 13+, 1 sin, 1 cos
Rep → Other             13×, 13+               5×, 6+, 1 √
Rep ∘ vector            18×, 12+               9×, 6+
Rep ∘ Rep               16×, 12+               24×, 15+
X, Y, or Z Axis Rot     8×, 4+                 4×, 2+
Normalize               8×, 3+, 1 √            18×, 7+, 2 √

Table 1: Summary of quaternion and matrix operation counts. Rep means the representation of the column heading; Other means that of the other column heading. Applying a rotation to a vector or to another rotation is denoted by ∘. Multiplications, additions, and square roots are denoted ×, +, and √. Trigonometric functions are denoted sin, acos, etc.

3.5 Summary and Discussion

The operation counts are summarized in Table 1. They generally agree with those in [20], with a few differences. Taylor seems to have a quaternion-to-matrix conversion that takes 9 multiplies and 19 adds (Exercise: find it), and he implements his matrix-to-quaternion conversion with 3 sines, 3 cosines, and one arctangent instead of a single square root. Taylor normalizes matrix representations by the one-crossproduct method. The use of eq. (6) for vector rotation saves 4 multiplies over Taylor's counts. In any event, the differences in costs of the two representations do not seem as dramatic for these basic calculations as for Taylor's more complex robotic trajectory calculations, but iterative operations reveal a difference of a factor of two to three in effort (see Section 4.9). Generally, it may be impossible to predict realistically the extent of the savings possible with one representation over the other without a detailed algorithmic analysis.

4 Numerical Accuracy

4.1 Vector Rotation Tasks, Methods, and Representations

The manipulations of quaternions, matrices, and vectors presented in Section 3 are based on simple building blocks of vector addition, scaling, and dot and cross products. The numbers involved in the representations are generally well behaved, with vector components and trigonometric functions lying in the interval [0, 1] and rotation magnitudes and inverse trigonometric functions often in the interval [−2π, 2π]. Neither representation requires algorithms with obvious numerically dangerous operations like dividing by small differences.

It is often alleged that one representation is more numerically accurate than the other. However, it is not easy to see how to quantify this claim analytically. The matrix representation is redundant, representing the three parameters with nine numbers. Does this mean it is more vulnerable to numerical error, or less? Given that the algorithms are straightforward and simple, do their numerical results depend more on their structure or on the data they operate on? The individual numbers (often components of unit 3-vectors) have nonlinear constraints that may affect the numerical performance. Perhaps some of the alleged differences are thought to arise in microcomputer implementations with small word size; how does accuracy vary with the number of significant digits?

In general, the lack of analytical leverage on these questions motivated the empirical study reported in this section. Herein we compare representations for several common vector rotation tasks: rotation of a vector, iterated identical rotations of a vector, and iterated different (random) rotations of a vector. Error is measured either as an error vector or as the amount that the resulting vector's length and direction differ from the correct values.

More precisely, a correct answer is determined by computation at full (double) precision. When the rotation task is to iterate the same rotation, the correct answer can be (and is) calculated for the final cumulative rotation rather than calculated by iteration. The application of a rotation to a vector is accomplished by two methods: by vector-matrix multiplication as in eq. (8), and by Horn's quaternion-vector formula, eq. (6). The iteration of a rotation is accomplished in three ways: quaternion-quaternion multiplication, matrix-matrix multiplication, and the quaternion-vector formula. Thus the results of iterated rotation are represented as a quaternion, a matrix, and a (cumulatively rotated) vector, respectively.

The experiments use a variable-precision implementation of all the relevant real, vector, quaternion, and matrix operations, offering a runtime choice of the number of bits of mantissa precision and the approximation type (rounding or truncation). The most basic operations are approximation, which rounds or truncates the mantissa of a C-language double (double-precision real floating point number) to the required precision, and arithmetic operations that approximate the operands, operate (at full precision), and approximate the result. From these basics are built vector cross and dot products, quaternion and matrix products, normalization routines, etc.
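One way to realize such a variable-precision arithmetic in Python (a sketch of the scheme described, not the paper's C implementation; the function names are invented):

```python
import math

def approx(x, bits, mode="round"):
    """Keep only `bits` bits of mantissa, rounding or truncating
    (truncation here is toward zero)."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)                   # x = m * 2**e with 0.5 <= |m| < 1
    scaled = m * (1 << bits)
    scaled = round(scaled) if mode == "round" else math.trunc(scaled)
    return math.ldexp(scaled, e - bits)

def approx_arith(op, x, y, bits, mode="round"):
    """Approximate the operands, operate at full precision, then
    approximate the result -- as described in the text."""
    return approx(op(approx(x, bits, mode), approx(y, bits, mode)), bits, mode)
```

Truncating 1/3 to a 10-bit mantissa gives a value at most 2⁻¹⁰ below 1/3, and values already representable in 10 bits (such as 0.5) pass through unchanged.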

Vectors are specified or created randomly as (x, y, z) unit vectors at the highest machine precision. Rotations are specified or created randomly in the conic (φ, n) (rotation angle, rotation axis direction) representation, also at highest precision. Random unit vectors for input or rotation axes are not uniformly distributed in orientation space, being generated by normalizing three uniform variates to produce a unit vector. For random (φ, n) rotations, a random vector is produced for n, and a fourth uniform variate is normalized to the range [−π, π]. Exercise: what do these distributions of vectors and rotations look like in orientation and rotation space?
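The random-generation scheme just described might look like the following sketch (the exact range of the uniform variates is not specified in the text, so uniform variates on [−1, 1] are an assumption, as are the function names):

```python
import math
import random

def random_unit_vector(rng=random):
    """Normalize three uniform variates; as noted in the text, this is
    NOT uniform on the sphere of directions."""
    v = (rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0))
    n = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)  # ~never exactly zero
    return (v[0] / n, v[1] / n, v[2] / n)

def random_conic_rotation(rng=random):
    """(phi, n): a fourth uniform variate scaled to [-pi, pi] for phi."""
    phi = rng.uniform(-1.0, 1.0) * math.pi
    return phi, random_unit_vector(rng)
```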

The correct answer is computed at highest precision by applying a single rotation (by Nφ for an N-long iteration of the same rotation φ) around the rotation axis to the input vector, using eq. (6). Then, working at the lower experimental precision, the conic input representation is converted to quaternion and matrix representations by eqs. (2) and (7). Next, the experimental rotational task is carried out at the experimental lower precision. Finally, the error analysis consists of highest-precision comparison of the low-precision result with the high-precision correct answer.

When a reduced-precision vector v′ is obtained, its error is determined by comparing it to the full-precision, non-iteratively rotated vector v. The error vector is v′ − v; the length error Δ_L and direction error Δ_D are

    Δ_L = | ‖v′‖ − ‖v‖ |                                          (13)

    Δ_D = arccos(v̂′ · v̂)                                          (14)

where v̂ is the unit vector in the direction of v.

In all the work here, ‖v‖ = 1. Directional errors are reported in radians, length errors in units of the (unit) length of the vector to be transformed. The experiments are reported below in a rather discursive style. Along the way, a few side issues are addressed and conclusions are drawn that affect the later choices of experimental parameters.
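The error measures of eqs. (13) and (14) are straightforward to compute; a sketch (function name is mine):

```python
import math

def length_dir_error(v_approx, v_exact):
    """Return (Delta_L, Delta_D): absolute length difference and the
    angle in radians between the two vectors (eqs. (13) and (14))."""
    na = math.sqrt(sum(c * c for c in v_approx))
    ne = math.sqrt(sum(c * c for c in v_exact))
    cosang = sum(a * b for a, b in zip(v_approx, v_exact)) / (na * ne)
    # clamp against tiny overshoots outside [-1, 1] before arccos
    return abs(na - ne), math.acos(max(-1.0, min(1.0, cosang)))
```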

4.2 Analytic Approaches

Let us begin by considering the conversion of conic (φ, n) rotation parameters to quaternion and matrix form. For simplicity we might assume that θ = φ/2 is known exactly. Assume that at n bits of precision a number x is represented to a maximum additive absolute error of δ = 2⁻ⁿ, which is a relative error of δ/x. Ignoring second-order terms, the product of two imprecise numbers (x + δ)(y + ε) has additive relative error of δ/x + ε/y. When looking up or computing a function, the error to first order in f(x + δ) is δf′(x). Thus the error in computing the cosine of an errorful number x + δ is sin(x)δ. If, as in the implementation of this paper, the function is computed initially to high precision and then approximated, the further approximation may add another error of δ, for a total absolute error of (1 + sin(x))δ. Of course, the function computation could be set up to eliminate the last approximation error if the working precision were known in advance.

These considerations are a start at constructing formulae for the relative error in the conversion of conic to quaternion and matrix form via eqs. (2) and (7). However, missing is the constraint that n be a unit vector (that the matrix be orthonormal). Without this constraint the predictions of errors will be pessimistic. Interacting with the constraint is the usual simplistic assumption that the worst happens in every operation (i.e., errors never cancel). This usual assumption makes analysis possible but potentially irrelevant. In fact, errors in the matrix and quaternion components after conversion depend closely on the values of the inputs and empirically do not exhibit predictable data-independent structure.

Turning to a slightly grosser examination of the structure of rotation operations, consider the implementations used in the work below for the two competing equations for quaternion-mediated rotation (eq. (6)) and matrix-mediated rotation (multiplication of a vector by the matrix in eq. (7)). The quaternion formula is the sum of three vectors (the original and two cross products), whereas the matrix formula is a matrix-vector product. Going through the code for this operation, we find that the x′ component of the result vector (x′, y′, z′) after applying the quaternion formula for quaternion [λ, (Λ_x, Λ_y, Λ_z)] to the input vector (x, y, z) is computed by operations applied at their usual precedence in the following formula:

    x′ = x + 2λ(Λ_y z − Λ_z y) − 2Λ_z(Λ_z x − Λ_x z) + 2Λ_y(Λ_x y − Λ_y x)    (15)

and similarly for the y′ and z′ components. Both (x, y, z) and Λ are unit vectors, and λ is a cosine.
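Eq. (15) written out directly as code, with the analogous y′ and z′ components; this is just the componentwise expansion of v′ = v + 2λ(Λ × v) + 2Λ × (Λ × v) (the function name is mine):

```python
def quat_rotate(lam, Lam, v):
    """Componentwise quaternion-vector rotation, eq. (15) and its
    y' and z' analogues."""
    Ax, Ay, Az = Lam
    x, y, z = v
    xp = x + 2*lam*(Ay*z - Az*y) - 2*Az*(Az*x - Ax*z) + 2*Ay*(Ax*y - Ay*x)
    yp = y + 2*lam*(Az*x - Ax*z) - 2*Ax*(Ax*y - Ay*x) + 2*Az*(Ay*z - Az*y)
    zp = z + 2*lam*(Ax*y - Ay*x) - 2*Ay*(Ay*z - Az*y) + 2*Ax*(Az*x - Ax*z)
    return xp, yp, zp
```

Rotating (1, 0, 0) by 90 degrees about z (λ = Λ_z = cos 45°) yields (0, 1, 0).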

The operations implementing the corresponding component by the matrix multiplication formula are

    x′ = a₁₁ x + a₁₂ y + a₁₃ z                                    (16)

and similarly for the y′ and z′ components.

There is nothing in these two formulae to predict the significant statistical difference (especially under the more predictable truncation approximation) in the performance of these two methods (see, e.g., Fig. 6).

Pressing on, consider a still grosser description of the structure. Looking at eqs. (4) and (8), say that the error in quaternion and matrix (caused, perhaps, by truncation) is described as a scaling of the whole quaternion or matrix (by δ = (1 − kε) for some constant k, if the truncation error is ε). In that case the matrix-transformed vector is in error by a factor of δ, or an additive error of ε, and the quaternion-transformed vector by a factor of δ² (because of the conjugation), or an additive factor of 2ε (to first order). Iterated matrix or quaternion self-multiplication exponentially damps the individual matrix and quaternion elements, which otherwise vary sinusoidally as the iteration proceeds. Under iterated application of the scaled matrix, the direction of the resulting vector should be unaffected, but its length should shrink, decaying exponentially to zero.

In fact, this level of explanation suffices to explain the data resulting from a single rotation (Fig. 6): the quaternion errors are approximately twice the matrix errors for a single rotation under truncation. It also correctly predicts the behavior of matrix and quaternion elements under iteration of the same rotation (Fig. 3). It correctly predicts the form of error for the matrix transformation under iteration of the same rotation (Fig. 8), and forms part of the explanation for quaternion transformation error (see Section 4.4). Exercise: In Fig. 3 the period of the matrix element is half that of the quaternion element. This result is expected from inspection of eqs. (8) and (2), but looks inconsistent with eq. (7). Is it?
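The scaled-matrix prediction (length decays exponentially, direction unaffected) is easy to confirm in a small sketch (the parameters and names here are arbitrary illustrations, not the paper's experimental settings):

```python
import numpy as np

def decay_demo(eps=1e-3, n_iter=200):
    """Iterate a rotation matrix scaled by (1 - eps): the rotated vector's
    direction is preserved while its length decays like (1 - eps)**n."""
    c, s = np.cos(0.1), np.sin(0.1)
    R = (1.0 - eps) * np.array([[c, -s, 0.0],
                                [s, c, 0.0],
                                [0.0, 0.0, 1.0]])
    M = np.eye(3)
    for _ in range(n_iter):
        M = R @ M
    v = M @ np.array([1.0, 0.0, 0.0])
    return np.linalg.norm(v)        # approximately (1 - eps)**n_iter
```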

4.3 Single Rotations

First, some sample single rotations were applied:

    v′ = R ∘ v                                                    (17)


Figure 3 About Here

Figure 3: The components of the quaternion and matrix are scaled and shifted sinusoids. Here, representative quaternion and matrix elements' amplitudes decay exponentially through time under iterative rotation with truncation. Under rounding, the cumulative decay is negligible. Dotted line - matrix element (0,0). Solid line - quaternion element Λ_x.

Figure 4 About Here

Figure 4: Rotating the unit vector in direction [1, 1, −1] around the axis [1, 1, 1] by varying amounts: (a) 0.1 rad, (b) 1.0 rad, (c) 1.7 rad. The errors are plotted logarithmically against the number of bits of mantissa precision, under truncation. Hollow and dark squares - direction error with quaternion and matrix representations. Star and cross - length error with quaternion and matrix representations. Variations in error magnitude and its rank ordering result from varying rotation magnitude. The largest error at 22 bits precision is 0.0001 (the smallest representable number is about 10⁻⁹).

Fig. 4 shows the results, comparing the conic (eq. (6)) with matrix rotation computation under truncation. In general, increasing the rotation magnitude does not increase error, the error magnitudes are not predictably ordered, and the error magnitude is at worst 10⁻⁶ with 22 bits of mantissa precision.

Next, 100 individual random rotation - random vector pairs were generated, with the rotation applied to the vector at a single precision (here, as throughout the paper until Section 4.8, 10 bits), under both truncation (case RaTr) and rounding (case RaRd). (Statistics for several cases are collected in Table 2 in Section 4.6.)

The results are shown in two different styles in Figs. 5, 6, and 7. Fig. 5 displays length and direction errors of the 100 trials under truncation, indexed along the X axis although they are each independent. As predicted in Section 4.2, the matrix representation outperforms quaternions by a factor of about two. The length errors under truncation show the nonzero mean one would expect. The same data are presented in Fig. 6 as two projections of the three-dimensional error vectors, all oriented so that the true answer lies along the Z axis, facing down with its point at the origin, aiming in the −Z direction. The X − Y projection shows how the direction errors arise, and the X − Z projection shows how the length errors arise (with the approximate vectors all being too short). Exercise: Construct the rotation that points a given (x, y, z) vector along the Z axis (a) from rotations about the (X, Y, Z) axes, (b) using quaternions.

Figure 5 About Here

Figure 5: Case RaTr. (a) Length errors of 100 different random (rotation, vector) pairs at 10 bits precision under truncation. Circles plot the quaternion-vector errors, X's the matrix-vector errors. (b) As for (a), but direction errors.


Figure 6 About Here

Figure 6: The same case and data as the previous figure, plotted as error vectors, where the correct answer vector points to the origin and lies along the positive Z axis (see text). (a) Error vectors projected onto the X − Z plane. Circles plot the quaternion-vector errors, X's the matrix-vector errors. (b) As for (a), but projection onto the X − Y plane.

Figure 7 About Here

Figure 7: Case RaRd. (a), (b): as the previous figure, with the same input rotations and vectors, but with operations under rounding to 10 bits of precision.

Fig. 7 is similar to Fig. 6, except that rounding is employed (note the scale change). Not only does the length error bias disappear as expected, but the difference between the methods decreases, with the matrix implementation maintaining a slight advantage.

4.4 Iterated Rotations and Normalization

The second experiment is to iterate the same small rotation of φ around a particular direction, using three rotation composition methods. Quaternion multiplication and matrix multiplication produce iterated quaternion and matrix rotation representations; at the end of the iterations, the resulting rotation representation is applied to the input vector once to determine the rotated vector. A third method, called the vector method here, iteratively applies the identical incremental rotation to the vector at each iterative step, preserving only the rotated vector. There are thus two cases (quaternion and matrix) of composing rotation representations and one case (vector) of composing the application of the rotation to the vector. Previous results indicate that the matrix method is both more accurate and more efficient for applying a single rotation, and empirically the results are much better with this choice as well, so rotations are implemented in the vector method by matrix-vector multiplication.
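The three composition methods can be sketched side by side (NumPy at full double precision, so this illustrates the structure rather than the reduced-precision experiments; the quaternion is held as (λ, Λ) and applied at the end via the v + 2λ(Λ × v) + 2Λ × (Λ × v) form of eq. (6)):

```python
import numpy as np

def iterate_rotation(R_inc, q_inc, v0, n_iter):
    """Apply the same incremental rotation n_iter times in three ways:
    compose matrices, compose quaternions, or rotate the vector each step."""
    # Matrix method: accumulate the product, apply once at the end.
    M = np.eye(3)
    for _ in range(n_iter):
        M = R_inc @ M
    v_mat = M @ v0
    # Quaternion method: accumulate (l, L), apply once at the end.
    l, L = 1.0, np.zeros(3)
    li, Li = q_inc
    for _ in range(n_iter):
        l, L = li * l - Li @ L, li * L + l * Li + np.cross(Li, L)
    v_quat = v0 + 2.0 * l * np.cross(L, v0) + 2.0 * np.cross(L, np.cross(L, v0))
    # Vector method: rotate the vector itself at every step.
    v_vec = v0.copy()
    for _ in range(n_iter):
        v_vec = R_inc @ v_vec
    return v_mat, v_quat, v_vec
```

At full precision all three agree; the experiments below show how they diverge when the mantissa is shortened.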

These experiments were carried out under both truncation and rounding. Truncation is sometimes used to get at worst-case performance in a numerical analysis sense. Such an approach might be useful in analysis of a non-iterative computational scheme (such as a single rotation), but the cumulative scaling effects described in Section 4.2 overwhelm numerical precision effects. The errors engendered through iterative truncation turn out to be qualitatively similar to those that arise under rounding. These results are thus good for geometric intuition about the methods and illustrate the importance of normalization. Fig. 8(a) and (b) show the (shocking) errors that result if no normalization takes place during the iterated rotation composition and if the resulting non-normalized product quaternion or matrix rotation is applied to the input vector. This is case IS0 (Iterative Sequence, 0 normalizations). Without normalization, all the representations show cumulative errors that are easy to understand.


Figure 5(a)  Figure 5(b)

Figure 6(a)  Figure 6(b)

Figure 8 About Here

Figure 8: Case IS0. Iterative transformation of the vector [1, 1, −1] about the axis [1, 1, 1] by 1 rad, 200 times. Calculations done with ten bits of mantissa precision under truncation; no normalization of the final representation before applying it to the vector. This choice of input parameters results in a representative performance, but not a normative one. (a) Result vector error projected on the X − Z plane. Solid line - vector representation, dotted line - matrix, dashed line - quaternion. The matrix errors are a very tight helix up the Z axis, with the vector errors only slightly larger. (b) As for (a), but showing the X − Y projection. The matrix and vector errors project to the small dark blob at the origin.

First consider the vector representation. It is acted on at each iteration by an identical but errorful transform of eq. (6). We see it in both views spiraling away from the correct initial value, its length shrinking and its direction error increasing. The performance of the vector method might be improved by a factor of two on average by using the matrix transformation at every step (considering the arguments in Section 4.2 and the results shown in Fig. 6). Another important aspect of this method is its variability in performance for different rotations, arising from the variability in the accuracy of the first calculated rotation (again, Fig. 6).

As predicted in Section 4.2, the matrix representation is affected in its scaling property but not significantly in its rotational properties: there is effectively no directional error, but the result shrinks to less than half its original length in 200 iterations.

The quaternion representation acts, using eq. (6), by adding two cross product vectors to the original input vector; as the quaternion components become increasingly systematically errorful, they affect the result in the interesting way shown in the figure. One effect is the decaying magnitude of the quaternion; the other is that as the cumulative rotation φ increases from zero, the sin(φ/2) weighting of Λ, the vector part of the quaternion, starts at zero and returns to zero with a period of 2π/φ_t, where φ_t is the truncated value of the incremental rotation angle. Thus, with that period, the effective rotation error varies sinusoidally and returns to zero as the magnitude of Λ does.

The same data are plotted, for 400 iterations this time, in Fig. 9. First consider length errors. This presentation emphasizes the exponential decay of the answer's length to zero with the matrix and vector representations. Thus, as the quaternion's magnitude ultimately decays to zero, the result vector will be unaffected by the rotation and will remain equal to the input vector, so its length error stabilizes at zero. The short-period sinusoidal character of the length error arises from the arguments of the last paragraph. The initially increasing envelope arises from the increasing inaccuracy caused by the truncation; the ultimate exponential decay of the envelope arises from the shrinking of the quaternion components.

Considering direction errors, the matrix and conic representations have (locally) linearly increasing errors as they spiral away from the true answer direction. (The linear increase cannot be maintained forever on the orientation sphere, of course.) The quater-


Figure 7(a)  Figure 7(b)

Figure 8(a)  Figure 8(b)

Figure 9 About Here

Figure 9: Case IS0, for 400 iterations (see text). (a) Length error of the result vector. Solid line - vector representation, dotted line - matrix, dashed line - quaternion. The vector and matrix errors coincide. (b) As for (a), but showing the direction error for the resulting vector. The vector and matrix errors are very close to zero.

Figure 10 About Here

Figure 10: Case IS0 under rounding instead of truncation. Solid line - vector representation, dotted line - matrix, dashed line - quaternion. Though the forms of the errors are similar to the truncation case, their relative magnitudes are more equal. (a) Result vector projected on the X − Z plane. (b) As for (a), but showing the X − Y projection.

nion method's direction error ultimately becomes purely periodic as the answer vector becomes static. The direction error, as a function of iteration number t and angle r between the rotation axis and the input vector, is then of the form ((r sin t)² + (r − r cos t)²)^(1/2), or the length of a circular chord from a fixed point on the circle to a point moving around the circle at constant velocity. Before that time, the direction error exhibits a more complex two-humped form; this arises from the two separate cross product components of the vector increment in eq. (6). Exercise: Would there be any difference in direction and length errors if the rotation were accomplished by conjugation (eq. (4)) instead of eq. (6)?

Next in this sequence, the last two demonstrations are repeated under rounding, not truncation. The expectation is that, due to the better approximation and possible error cancellations arising from high and low approximations, the errors in the representations will be less, but that systematic errors will still exist. The results (Figs. 10, 11) bear this out; the errors often seem like much smaller, noisy versions of the truncation errors. In Fig. 11 the quaternion errors display the fine two-humped structure longer, since it is not overwhelmed as quickly by shrinking quaternion magnitude. Rounding has a dramatically good effect on the systematic shrinking of matrices and quaternions under iterative self-multiplication.

So far we have, purely for intuition, dealt with the pathological case in which no normalization of rotation representations is performed. The arguably pathological case of approximation through truncation yields insight about the structure of systematic approximation errors in an iterative scheme. From here on we abandon truncation approximation (except in Section 4.7).

Figure 11 About Here

Figure 11: Data of the previous figure, showing length and direction errors. Solid line - vector representation, dotted line - matrix, dashed line - quaternion. (a) Length error of the result vector. (b) Direction error.


Figure 9(a)  Figure 9(b)

Figure 10(a)  Figure 10(b)

Figure 12 About Here

Figure 12: Case IS1. Rotating the vector [1, 1, −1] by 1-radian increments about the axis [1, 1, 1], with 10 bits precision under rounding, normalizing the quaternion and matrix representations prior to application to the input vector, and normalizing the final vector representation to its initial length. Solid line - vector representation, dotted line - matrix, dashed line - quaternion. (a) Length error of the result vector. The vector representation has lower variance, and the quaternion exhibits marked periodicity. Only the first 100 iterations are shown; the rest are similar. (b) Direction error.

We now look at two different styles of normalization. In the first, no normalization is performed on intermediate representations, but the final quaternion, matrix, and vector are normalized. In the second, the intermediate quaternion, matrix, and vector are normalized at each step. Normalization of the matrix and quaternion is not dependent on the input data, but the length of the input vector must be computed and remembered for the vector representation to be normalized. This fact might be significant if many vectors are being treated at once.

Fig. 12(a) shows the length error when the final quaternion and matrix are normalized prior to application to the input vector, and the incrementally generated vector representation is normalized to its original length (here known to be unity). This case is called IS1, for Iterative Sequence, 1 normalization. The matrix normalization used is the one-crossproduct form. When only one normalization is being performed, the three types of normalization perform virtually identically. The computations were performed under rounding to the usual ten bits of precision. If its initial length is not remembered, the vector representation cannot be normalized, and its error is the same as in the non-normalized case of Fig. 11, showing a linear error accumulation.

Fig. 12(b) compares direction error for the three representations, normalized as in Fig. 12(a). The comparative performance here is certainly not so clear cut, and the error accumulations resemble random walks. Alternatively, the performance could depend more on the particular choice of input data (rotation and input vector) than on numerical properties of the methods.

Finally, a natural question concerns the effects of normalization applied after every rotation composition. This is case ISN (N normalizations). Fig. 13 shows the results for length and direction. As might be expected, for length error the normalized vector has the best performance (presumably as accurate as machine precision allows). The quaternion result has a systematically negative mean, higher variance, and periodicity. The matrix length error displayed in Fig. 13(a) results from the no-crossproduct form of matrix normalization. The trend is not an illusion; carried out to 400 iterations, the length error shows a marked systematic increase in the negative direction. On the other hand, iteration of the one-crossproduct normalization resulted in a factor of two worsening of the magnitude of the direction errors for the matrix representation. Repeated normalizations seem to have adversely affected the accuracy of the vector representation after many iterations, but more work is needed to tell if this is a repeatable effect. In any event, it seems clear that the repeated normalizations are certainly not improving matters much in this case over case IS1.

Figure 13 About Here

Figure 13: Case ISN. As for the previous figure, but normalizing the rotation representation at each iteration, using no-crossproduct matrix normalization, which improved direction errors but worsened length errors over the one-crossproduct form. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. (b) Direction error.

Figure 14 About Here

Figure 14: Case RS1. As for case IS1 (no normalization of intermediate representations), but using a sequence of random rotations. There is no difference in performance with the different matrix normalizations; no-crossproduct matrix normalization is shown. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector (first 100 values; the rest are similar). (b) Direction error. The rather confusing results at least have some clear qualitative differences.

The main conclusion here is that the cost of normalizing quaternion or matrix representations after every rotation in a repeated sequence may not be repaid in results, and in fact may be actively harmful. The vector representation seems to do best of the three on length error and worst on direction error. More work (analytic or experimental) would be needed to clarify the performance ranks, or even to recommend or disqualify a representation for this task.

4.5 Concatenated Random Rotations

Here a sequence of random rotations is generated, as described earlier, analogous to the sequence of identical rotations in the last experiment. They are applied cumulatively to an input vector. In each case computation is with 10 bits precision under rounding. The cases of a single final normalization and of normalizing every intermediate representation (called cases RS1 and RSN) were investigated. Both the no- and one-crossproduct forms were investigated. The results, comparable to those in cases IS1 and ISN (Figs. 12 and 13), are shown in Figs. 14 and 15.

The results indicate that the vector representation remains better than the other two for length errors, but for direction errors the results are unclear. It is hard to draw clear distinctions, either between the normalization methods or the representations. Again the form of matrix normalization has some annoying effects, which motivate the next section.


Figure 15 About Here

Figure 15: Case RSN. As for case RS1, but normalizing all intermediate representations. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error using one-crossproduct matrix normalization. (b) Direction error using the one-crossproduct form. (c) Direction error using the no-crossproduct form.

Case       μ-vec      μ-mat      μ-quat     σ-vec      σ-mat      σ-quat     N
RaTr   L   -           0.006225   0.005026  -           0.001181   0.003369
RaTr   D   -           0.001787   0.004578  -           0.001094   0.002307
RaRd   L   -          -0.000310  -0.000225  -           0.001087   0.001263
RaRd   D   -           0.000965   0.001180  -           0.000542   0.000759
IS1    L    0.000276  -0.000260  -0.000262   0.000466   0.000857   0.001648   100
IS1    D    0.003694   0.004063   0.001435   0.001705   0.002298   0.001123
ISN    L    0.000137  -0.001772  -0.000811   0.000533   0.001106   0.001312   100
ISN    D    0.005646   0.004844   0.001606   0.002786   0.002872   0.001045
RS1    L   -0.000252  -0.000162  -0.000157   0.000521   0.001612   0.002022   100
RS1    D    0.003710   0.005854   0.006065   0.002150   0.003686   0.006065
RSN    L    0.000164  -0.000602  -0.001097   0.000473   0.000703   0.001666   100
RSN    D    0.002269   0.009830   0.005960   0.000874   0.004230   0.002236
RSN    D    0.002269   0.005187   0.005960   0.000874   0.002248   0.002236

Table 2: Statistics for length (L) and direction (D) errors for some of the cases illustrated in this section. Cases RaTr and RaRd are the independent random rotations; IS1 and ISN are the iterative sequences (one normalization, and normalization of each intermediate representation); and RS1 and RSN are the random sequences. Only the first 100 data values are used from the iterated rotations, to diminish the effect of longer-term trends. The second RSN line is with no-crossproduct normalization (this only affects the matrix representation).

4.6 Statistical Summary

Table 2 summarizes some of the data from the cases so far.

4.7 Matrix Normalization

Some experiments explored the performance of the three types of matrix normalization (no, one, or two cross products) in the context of iterated identical and random rotations.
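The three forms are defined earlier in the report; the sketch below is one plausible reading of them (an assumption, not a quotation): zero cross products simply rescales each row, one cross product rebuilds the third row from the first two, and two cross products additionally rebuilds the second row to force full orthonormality.

```python
import math

def unit(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def normalize_matrix(rows, crossproducts=1):
    # rows: a 3x3 rotation matrix given as a list of row vectors.
    r0 = unit(rows[0])
    r1 = unit(rows[1])
    if crossproducts == 0:
        return [r0, r1, unit(rows[2])]
    r2 = unit(cross(r0, r1))          # force row 2 orthogonal to rows 0, 1
    if crossproducts == 2:
        r1 = unit(cross(r2, r0))      # force rows mutually orthogonal
    return [r0, r1, r2]
```

Note that under this reading the one-crossproduct form leaves any non-orthogonality between the first two rows uncorrected, which would explain why the forms can behave differently under long iteration.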

The experiments consist of two iterated identical rotations (Figs. 16, 17) under rounding, and two iterated random rotation sequences under truncation and rounding. Fig. 18 gives the direction errors for two sequences of random rotations applied to an input vector. In no case is there a significant difference between the performances on the length error at 200 iterations, but the one- and two-crossproduct forms significantly outperform the no-crossproduct form over 400 iterations, since the latter exhibits a systematically increasing negative length error and the former do not.

Figure 16 About Here

Figure 16: Iterated identical rotations of the unit vector in direction (1, 1, -1) by 1 radian about the axis (1, 1, 1), computed to 10 bits under rounding. Solid line - no cross products; dotted line - one cross product; dashed line - two cross products. (a) Length errors. (b) Direction errors.

Figure 17 About Here

Figure 17: As for the previous figure, but the rotation is of the unit vector in direction (1, 2, 3) by iterated rotations of 2 radians about the axis (3, -2, -1).

For direction error, few conclusions can be drawn about relative performance under iterated identical rotation sequences. It is perhaps surprising that the no-crossproduct form usually performs here at least as well as the other two over a small number of iterations, and that for random sequences there is no clear winner between normalization types, even approximating with truncation. Thus the symptoms noticed in the last two sections, involving long-term relative effects of the normalization form, were not noticed over 200 iterations. The one- and two-crossproduct forms may be more stable over long iteration, but the issue of optimal normalization remains open.

4.8 Average Error of Concatenated Random Rotations

Which representation is best, or worst? To wind up this overlong exercise in trying to derive structure from noise, let us consider average behavior for the task of random rotation sequences (perhaps of more general interest than iterated identical rotations). Technical decisions on normalization are based on the conclusions of previous sections: approximation is by rounding, with normalization only after the last rotation. Matrix normalization is done with the one-crossproduct method. Each run is a single experiment like that of Section 4.5, only performed (with the same data) at a number of different precisions.

Such a set of runs yields length and direction error surfaces, plotted against the precision.

Figure 18 About Here

Figure 18: Direction errors only, for a sequence of random rotations applied to an input vector. Legend as in the previous two figures. (a) Truncation to 10 bits. (b) Rounding to 10 bits.

Figures 19(a)-(d) About Here

Rep         Repeated Rot                  Random Seq
Vec(Mat)    (9N+17)*, (6N+13)+, 1 sc      26N*, 19N+, N sc
Vec(Quat)   (18N+4)*, (12N)+, 1 sc        22N*, 12N+, N sc
Quat        (16N+22)*, (12N+12)+, 1 sc    (20N+18)*, (12N+12)+, N sc
Matrix      (27N+26)*, (18N+19)+, 1 sc    (44N+9)*, (31N+6)+, N sc

Table 4: Operation counts for N-long rotation sequences. The long form of matrix multiplication is used here. Vec(Mat) is the vector representation with intermediate transformation by matrices, Vec(Quat) with transformation by the quaternion method. Multiplications, additions, and sine-cosine pairs are denoted by *, +, and sc respectively.
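The counts in Table 4 can be compared programmatically. A small sketch, using only the multiplication counts from the Random Seq column (additions and sine-cosine pairs are ignored here for brevity):

```python
# Multiplication counts from Table 4 for N-long random rotation sequences.
muls = {
    "Vec(Mat)":  lambda N: 26 * N,
    "Vec(Quat)": lambda N: 22 * N,
    "Quat":      lambda N: 20 * N + 18,
    "Matrix":    lambda N: 44 * N + 9,
}

def cheapest(N):
    # Representation with the fewest multiplications for an N-long sequence.
    return min(muls, key=lambda rep: muls[rep](N))
```

By this crude measure the quaternion's constant overhead is amortized for longer sequences, while the matrix representation is the most expensive throughout, consistent with the conclusions below.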

matrix transformation), since matrix-vector multiplication costs one third of matrix-matrix multiplication.

4.10 Conclusions and Questions

The conclusions we can draw at this point are the following:

1. Numerical simulation of variable precision arithmetic has yielded some tentative conclusions, but in many interesting cases they are rather delicate and depend on niceties of the implementation. Accurate performance predictions may only be possible with faithful simulation of the particular hardware and software at issue.

2. Practical differences in numerical accuracy tend to disappear at reasonable precisions. After 200 iterations of a rotation, 53 bits of mantissa precision (the double representation in the C language) yields no error to 10 decimal places. With 24 bits (the float representation), effects were noticeable in the 5th to 7th decimal place with all methods.

3. Normalization is vital before quaternion or matrix representations are applied to vectors, but not as important during the composition of representations, and the extra numerical computation of frequent normalization may actually decrease accuracy in the final result.

4. The vector (with matrix transformation) and matrix representations for rotation sequences seem to outperform the quaternion for accuracy.

5. The vector (with matrix transformation) representation for rotation sequences, with a single normalization of vector length at the end, is most computationally efficient and has least length error on the average.

6. Averaged over many trials, the difference in accuracy of the three methods amounts only to about a single bit of precision for a small number of rotation compositions (say 25 or fewer).

7. Matrices are the least computationally efficient on rotation sequences.


There are also some remaining basic questions:

1. What is a reasonable analytic approach to the numerical accuracy problem for interesting rotational tasks (say, sequences of random rotations under rounding)?

2. How should optimal matrix or quaternion normalization methods be defined and constructed?

5 Discussion

The area of robust computational geometry is becoming popular, with an annual international conference and increasingly sophisticated theoretical formulations. Very little of that work (if any) is related to rotation algorithms. Symbolic algebra systems can help with analysis [4]. A good treatment of computer arithmetic and rounding appears in [15], with an application to detecting line intersection. Serious qualitative representational problems arise when issues of topology (line connectivity) are at stake, and much of the work is on computing line intersection accurately with finite precision arithmetic [19]. For instance, Milenkovic's analysis is based on facts about line intersection, such as the number of extra bits that are necessary to guarantee a specific precision of answer. Some of these facts are surprising: it takes at least three times the input precision to calculate a line intersection location to the same precision [11]. Robust polygon and polyhedron intersection algorithms, as well as various line-sweeping algorithms, are based on accurate line intersection algorithms [8].

Several points of comparison between rotation parameterizations and representations have been raised in the preceding sections. It is perhaps worth repeating that little evidence for dramatic differences in numerical accuracy has emerged, though the evidence might tend to favor matrices slightly, and to condemn the vector representation.

One of the common applications of rotational transforms is to affect several vectors at once, either in the context of transforming a set of points around (as in graphics) or in handling coordinate systems represented as n points (n - 1 of them at points at infinity representing axis directions, and one representing the origin of coordinates, in a homogeneous coordinate system). In the first case the repeated application of the same transform will be more economical if the work is done in the matrix representation. In the second the same may well be true, but of course the whole point of homogeneous coordinates is to make the projective, a fortiori affine, transforms into linear ones implemented with matrix multiplication. This means, for example, that coordinate systems, rigid transformations including translation and rotation, and point projective transforms (simple camera models) can all be represented as matrices and composed with matrix multiplication, and that the operation of such transforms on vectors is simply matrix-vector multiplication. In this context homogeneous coordinates by definition require the matrix representation.

If the rotation at issue is a general one (say, specified in conic (φ, n) parameters), then conversion either to the quaternion or matrix form is easy. Thus the representations in


this case do not dictate the semantics, or how one wants to think about the rotation. If the rotation is specified in terms of rotations about (X, Y, Z) coordinate axes, then the matrix representation is easily and robustly constructed.

Real-time applications involving concatenating rotations and applying them to a small number of vectors (perhaps one: the position of a robot hand) need detailed analysis to see which representation is the most efficient; similarly for more complex geometric calculations.

A common application of rotation representations in vision is to express the motion of the observer. Often the rotational motions, for mechanical or mathematical reasons, are expressed as rotations about some fixed axes, which in practice are usually orthogonal. Examples are the motions induced by pan and tilt camera platforms, or the expression of egomotion in terms of rotations about the observer's local X, Y, Z axes. In this case the matrix representation is the clear choice, to maximize efficiency and intuition.

Reconstructing the rotation that transformed known points is a useful vision task that requires fitting data to a model of the transformation. The straightforward route, followed since Roberts' thesis [17], is to do a linear least squares fit to the transform's matrix representation. This approach is perhaps risky, since the matrix elements are not independent and the error being minimized thus may not be the one desired. The alternative of using conic parameters has been successfully used [12] and may well be a better idea. Recently, more sophisticated Kalman filter methods have been used to derive motion from point matches through time [5]. Model-matching with constraints allows a model to be matched to a class of objects, and promising approaches are emerging that use hybrid methods of symbolic algebra implementing polynomial triangulation to get explicit solutions for known constraints, followed by root-finding and sophisticated minimization techniques to search for minimum-error solutions [13, 14].

A less common but important set of applications is to reason about and manipulate rotations themselves. For example, one may want to sample rotation space evenly [9], or to create smoothly interpolated paths in rotation space between a given set of rotations [16]. In these cases the Euler-Rodrigues parameters are the most convenient.

Another example is the use, since 1932, of quaternions in the neurophysiological literature to describe eye kinematics [22]. Models of human three-dimensional oculomotor performance, especially for the vestibulo-ocular reflex and feedback models of saccades, involve computations on rotations that are much more stable when carried out in conic or Euler-Rodrigues parameters using quaternion arithmetic. Subtractive feedback laws adequate for simple (left or right) eye movements do not generalize to arbitrary rotational axes. The quaternion model uses multiplicative feedback, explains human performance better than similar models based on rotation matrices, and leads to explicit predictions, such as a four-channel saccade computation [21].


6 Bibliography

[1] S. L. Altmann. Rotations, Quaternions and Double Groups. Clarendon Press, Oxford, 1986.

[2] C. M. Brown. Gaze controls with interactions and delays. In DARPA Image Understanding Workshop (submitted to IEEE-TSMC), pages 200-218, May 1989.

[3] C. M. Brown. Prediction in gaze and saccade control. Technical Report OUEL 1771/89 (UR TR-295), Oxford University Dept. Engineering Science (U. Rochester Comp. Sci. Dept.), May 1989.

[4] C. I. Connolly, D. Kapur, J. L. Mundy, and R. Weiss. GeoMeter: A system for modeling and algebraic manipulation. In Proceedings, DARPA Image Understanding Workshop, pages 797-804, May 1989.

[5] O. D. Faugeras, F. Lustman, and G. Toscani. Motion and structure from motion from point and line matches. In Proceedings, International Conference on Computer Vision, pages 25-34, June 1987.

[6] J. D. Foley and A. van Dam. Fundamentals of Interactive Computer Graphics. Addison-Wesley, 1982.

[7] W. R. Hamilton. On quaternions; or on a new system of imaginaries in algebra. Philosophical Magazine, 3rd Ser., 25:489-495, 1844.

[8] C. Hoffmann and J. Hopcroft. Towards implementing robust geometric computations. In Proceedings, ACM Symposium on Computational Geometry, 1988.

[9] Berthold K. P. Horn. Robot Vision. MIT Press/McGraw-Hill, 1986.

[10] K. Kanatani. Group Theory in Computer Vision. Springer-Verlag, 1989.

[11] V. J. Milenkovic. Robust geometric computations for vision and robotics. In Proceedings, DARPA Image Understanding Workshop, pages 764-773, May 1989.

[12] J. L. Mundy. Private communication, August 1989.

[13] J. L. Mundy. The application of symbolic algebra techniques to object modeling. Lecture, University of Oxford, August 1989.

[14] J. L. Mundy. Symbolic representation of object models. In preparation, August 1989.

[15] T. Ottmann, G. Thiemt, and C. Ullrich. Numerical stability of geometric algorithms. In Proceedings, ACM Symposium on Computational Geometry, 1987.

[16] K. S. Roberts, G. Bishop, and S. K. Ganapathy. Smooth interpolation of rotational motion. In Computer Vision and Pattern Recognition 1988, pages 724-729, June 1988.

[17] L. G. Roberts. Machine perception of three-dimensional solids. In J. T. Tippett et al., editors, Optical and Electro-Optical Information Processing, pages 159-197. MIT Press, 1968.

[18] O. Rodrigues. Des lois géométriques qui régissent les déplacements d'un système solide dans l'espace, et de la variation des coordonnées provenant de ces déplacements considérés indépendamment des causes qui peuvent les produire. Journal de Mathématiques Pures et Appliquées, 5:380-440, 1840.

[19] D. Salesin, J. Stolfi, and L. Guibas. Epsilon geometry: building robust algorithms from imprecise computations. In Proceedings, ACM Symposium on Computational Geometry, 1989.

[20] R. Taylor. Planning and execution of straight line manipulator trajectories. In J. M. Brady, J. M. Hollerbach, T. L. Johnson, T. Lozano-Perez, and M. T. Mason, editors, Robot Motion: Planning and Control. MIT Press, 1982.

[21] D. Tweed and T. Vilis. Implications of rotational kinematics for the oculomotor system in three dimensions. Journal of Neurophysiology, 58(4):832-849, 1987.

[22] G. Westheimer. Kinematics for the eye. Journal of the Optical Society of America, 47:967-974, 1932.



1 Rotation Parameters and Representations

One of the better treatments of the representation of the rotation group SO(3) (the special orthogonal group of order 3) is [1], and there is also a treatment in [10]. This section will briefly define the main rotation parameterizations and identify two as the most useful for computer vision. Rotations can be visualized in their alias or alibi aspect. In their alias aspect they rotate the coordinate system in which points are described; in the alibi aspect the descriptive coordinate system is fixed, and the points are, as it were, physically rotated with respect to it. Where the distinction matters, we assume henceforth that we are talking about the alibi aspect. There are four main parameterizations.

Conical: Four numbers, specifying the 3-vector direction n of a rotation axis and the angle φ through which to rotate. Since n can be made a unit vector, these four numbers are seen to be redundant.

Euler Angles: Three angles (θ, φ, ψ). Usually they are interpreted as the sequential rotation of space (including the coordinate axes), first about the Z axis by ψ, then about the new Y axis by φ, and last about the newest Z axis by θ. The idea extends to other choices of axes. Euler angles are a nonredundant representation.

Cayley-Klein: Two complex numbers (α, β), actually parameterizing the SU(2) (Special Unitary, order 2) group.

Euler-Rodrigues: Four numbers, specifying a real λ and vector Λ, related to the conical parameters by [λ, Λ] = [cos(φ/2), sin(φ/2) n].

Probably the most familiar representation for SO(3) in the computer vision, robotics, and graphics communities is that of orthonormal 3 × 3 matrices, through which vector rotation is accomplished by matrix (that is, matrix-vector) multiplication, and rotation composition is simply multiplication of the rotation matrices. Such matrices are more or less easily constructed from the rotation parameters listed here, but their nine redundant numbers are not a satisfactory rotation parameterization.

Euler angles are common in the robotics literature, probably because by definition they provide a constructive description of how to achieve a particular general rotation through specific rotations about two axes. The elegant use of only two axes has practical consequences; for example, a spacecraft can be arbitrarily oriented using only two rigidly mounted thrusters. However, there are severe computational problems (ambiguities, conventions) with Euler angles: the composition of rotations is not easily stated using them, the rotation they represent is not obvious by inspection, and there are deeper mathematical infelicities, some being addressed by specifying rotations about all three axes, not just two, and some perhaps not being reparable.

Cayley-Klein parameters are usually manipulated in the form of 2 × 2 matrices, where they appear redundantly in conjugate form, and describe rotations in terms of the stereographic projection of 3-space onto a plane using homogeneous coordinates. They are


isomorphic to the Euler-Rodrigues parameters, but their geometrical interpretation is difficult to visualize.

Euler-Rodrigues parameters are given by a particular semantics for normalized quaternions. After normalization, the four quaternion components are the Euler-Rodrigues parameters. Quaternions were invented by Hamilton [7] as an extension of complex numbers (the goal was an algebra of 3-vectors allowing multiplication and division), but were immediately (within a day) recognized by him as being intimately related to rotations. It is said that every good idea is discovered, possibly several times, before ultimately being invented by someone else. In this case quaternions and their relation to rotations were earlier discovered by Rodrigues [18]. He also discovered the construction proving that the composition of two rotations is also a rotation, later invented by Euler and today bearing Euler's name. More on this interesting story appears in [1].

The most relevant representations for computer vision and robotics are the conical parameters, quaternions (Euler-Rodrigues parameters), and of course 3 × 3 orthonormal matrices.

The topology of SO(3) is of interest, as well as its parameterization. The first step in its visualization is to note that the four Euler-Rodrigues parameters form a unit 4-vector. As such, they define points lying on the 3-sphere (the unit sphere in 4-space), and every point on the spherical surface defines a rotation. The next intuition is that a rotation (φ, n) is equivalent to the rotation (-φ, -n). Thus each rotation is represented twice on the surface of the sphere, and to remove this ambiguity and achieve a one-to-one mapping of rotations onto 4-space points, opposite points (usually opposite hemispheres and half-equators) on the sphere must be identified. This turns the surface into a cross-cap or projective plane (Fig. 1). Another isomorphism relates the rotations to points in a solid parameter ball in 3-space. The radial distance of a point (from 0 to π) represents the magnitude of the rotation angle, and its direction from the origin gives the axis of rotation. In this case just the skin of one hemisphere and half the equator must be identified with the other half, and all null rotations (in whatever direction) map to the origin.
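The solid-ball isomorphism is easy to make concrete. A minimal sketch (the function names are ours; the report describes only the mapping, not an algorithm):

```python
import math

def conic_to_ball(phi, n):
    # Map a rotation (phi, n), phi in [0, pi] and |n| = 1, to its point in
    # the solid radius-pi parameter ball: distance phi along direction n.
    return [phi * c for c in n]

def ball_to_conic(p):
    # Recover (phi, n) from a ball point; the origin is the null rotation,
    # whose axis is arbitrary.
    phi = math.sqrt(sum(c * c for c in p))
    if phi == 0.0:
        return 0.0, [0.0, 0.0, 1.0]
    return phi, [c / phi for c in p]
```

Note that on the skin of the ball (phi = π) the identification of antipodal points, described above, means the map is no longer one-to-one.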

Many of the infelicities surrounding calculations with rotations arise from the topology of SO(3). One consequence is that there are two inequivalent paths from any rotation to any other, not unlike the paths between points on a cylinder (Fig. 2). The paths cannot be shrunk to one another (consider the two paths from a point to itself, one that goes all the way around the cylinder and one that does not), and they clearly have different properties. Consider the path that minimizes time or energy expended to push a mass from point A through the points in the order B, C. It is easy to imagine that a better path exists in direction d2 than d1. Path planning (and interpolation) in rotation space must consider the topology [16].

Another aspect of the connectivity is that rotations must either be represented redundantly or discontinuously. Consider the visualization of the projective plane in which the upper hemisphere and half the equator of the 3-sphere is isomorphic with the rotations, and the lower hemisphere and half-equator is identified with diametrically opposite points. A smooth path crossing the equator moves directly from the rotation (for example) (π, n) (in conic parameters) to the rotation (π - ε, -n), a striking discontinuity of reversal in


Figure 1 About Here

Figure 1: By identifying (gluing together) parallel sides in the directions indicated, various more or less familiar surfaces can be formed. The Klein bottle and projective plane cannot be physically realized in three-space. The SO(3) topology is that of a projective plane.

Figure 2 About Here

Figure 2: One consequence of SO(3) being non-simply connected is the inequivalence between the path from A to B in direction d1 and the path in direction d2. Any two paths between two points on a simply-connected surface can be shrunk to be equivalent, but that is not the case on the cylinder shown here, or on the projective plane.

the direction vector. If the entire 3-sphere is used to represent rotations, this problem disappears, with (π, n) being followed by (π + ε, n). However, this same rotation is still also represented by the point (π - ε, -n), so the isomorphism is lost. The special cases created by potentially duplicated representation complicate algorithms and symbolic manipulations. For related reasons, representations of straight lines are plagued with the same problems (an observation due to Joe Mundy).

2 Basic Quaternion and Matrix Formulae

2.1 Quaternion Representation

Quaternions are well discussed in the mathematical and robotics literature; an accessible reference is [9], though there are several others. This brief outline is restricted to those properties most directly related to our central questions. Quaternions embody the Euler-Rodrigues parameterization of rotations directly, and thus are closely related to the conic representation of rotation.

If the Euler-Rodrigues parameters of a rotation are (λ, Λ), the quaternion that represents the rotation is a 4-tuple, best thought of as a scalar and a vector part:

Q = [λ, Λ]   (1)

Actually the vector part is a pseudovector, transforming to itself under inversion like the cross product, but that distinction will not bother us.

If a rotation in terms of its conic parameters is R(φ, n), the corresponding quaternion is

Q = [cos(φ/2), sin(φ/2) n]   (2)

The multiplication rule for quaternions is

[λ1, Λ1][λ2, Λ2] = [λ1λ2 - Λ1·Λ2, λ1Λ2 + λ2Λ1 + Λ1 × Λ2]   (3)
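Eq. (3) transcribes directly to code. A minimal sketch, holding a quaternion as a 4-tuple (λ, Λx, Λy, Λz) (the function name is ours, not from the report):

```python
import math

def qmul(a, b):
    # eq. (3): [l1,A1][l2,A2] = [l1 l2 - A1.A2, l1 A2 + l2 A1 + A1 x A2]
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + w2*x1 + y1*z2 - z1*y2,
            w1*y2 + w2*y1 + z1*x2 - x1*z2,
            w1*z2 + w2*z1 + x1*y2 - y1*x2)

# Composing two quarter-turns about Z gives a half-turn about Z:
h = math.sqrt(0.5)
q90 = (h, 0.0, 0.0, h)      # [cos(pi/4), sin(pi/4) z]
q180 = qmul(q90, q90)       # approximately (0, 0, 0, 1)
```

Quaternion multiplication here corresponds to composition of rotations, as stated below.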


A real quaternion is of the form [λ, 0] and multiplies like a real number. A pure quaternion has the form [0, Λ]. A unit quaternion is a pure quaternion [0, n] with n a unit vector. The conjugate of a quaternion Q = [λ, Λ] is the quaternion Q* = [λ, -Λ]. The norm of a quaternion Q is the square root of QQ*, which is seen to be the square root of λ² + Λ·Λ. A quaternion of unit norm is a normalized quaternion, and it is normalized quaternions that represent rotations. A quaternion [λ, Λ] = [λ, Λn] can satisfy the normalization condition if λ = cos(α), Λ = sin(α). This gives a Hamilton quaternion, or versor, which we shall not use. Better for our purposes is to write λ = cos(φ/2), Λ = sin(φ/2). The conjugate inverts the rotation axis, and thus denotes the inverse rotation. The inverse quaternion of Q is Q⁻¹, such that QQ⁻¹ = [1, 0]. For a normalized quaternion the inverse is the conjugate. The product of two normalized quaternions is a normalized quaternion, a property we desire for a rotation representation. Under the Rodrigues parameterization a normalized quaternion represents a rotation, quaternion multiplication corresponds to composition of rotations, and quaternion inverse is the same as quaternion conjugation.

One way to apply a rotation represented by a quaternion to a vector is to represent the vector by a pure quaternion and to conjugate it with the rotation quaternion. That is, to rotate a given vector r by the normalized quaternion Q = [cos(φ/2), sin(φ/2) n], form a pure quaternion [0, r]; the resulting vector is the vector part of the pure quaternion

r' = Q [0, r] Q*   (4)

From the quaternion multiplication rule the conical transformation follows though it can also be derived in other ways It describes the effect of rotation upon a vector r in terms of the conical rotation parameters

r′ = cos φ r + sin φ (n × r) + (1 - cos φ)(n · r) n    (5)

For computational purposes, neither conjugation nor the conical transform is as quick as the clever formula (note the repeated subexpression) given in [9], arising from applying vector identities to eq. (5). Given a vector r and the normalized quaternion representation [λ, A], the rotated vector is

r′ = r + 2(λ(A × r) - (A × r) × A)    (6)
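Eq. (6) is easy to implement, computing the repeated subexpression A × r once. A small sketch (illustrative names, not the paper's code):

```python
import math

def cross(a, b):
    # Standard 3-vector cross product.
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def rotate_eq6(q, r):
    # Eq. (6): r' = r + 2(lam*(A x r) - (A x r) x A),
    # where q = [lam, A] is a normalized quaternion.
    lam, a = q[0], q[1:]
    axr = cross(a, r)          # repeated subexpression A x r
    t = cross(axr, a)
    return tuple(r[i] + 2.0 * (lam * axr[i] - t[i]) for i in range(3))

# Rotate (1, 0, 0) by 90 degrees about the Z axis.
phi = math.pi / 2.0
q = (math.cos(phi / 2.0), 0.0, 0.0, math.sin(phi / 2.0))
print(rotate_eq6(q, (1.0, 0.0, 0.0)))  # approximately (0, 1, 0)
```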

The matrix representation of a rotation represented by conic parameters (φ, n) can be written explicitly in terms of those parameters, and deriving it is a well known exercise for robotics students; it can be found, for example, in [1]. For computational purposes it is easier to use the following result. The matrix corresponding to the quaternion rotation representation [λ, (Ax, Ay, Az)] is

    [ 2λ² - 1 + 2Ax²     2(AxAy - λAz)      2(AxAz + λAy)  ]
    [ 2(AyAx + λAz)      2λ² - 1 + 2Ay²     2(AyAz - λAx)  ]    (7)
    [ 2(AzAx - λAy)      2(AzAy + λAx)      2λ² - 1 + 2Az² ]


2.2 Matrix Representation

This representation is the most familiar to graphics, computer vision, and robotics workers, and is given correspondingly short shrift here. A rotation is represented by an orthonormal 3 × 3 matrix. This rotation component occupies the upper left hand corner of the usual 4 × 4 homogeneous transform matrix for three dimensional points expressed as homogeneous 4-vectors. The matrix representation is highly redundant, representing as it does three quantities with nine numbers.

In this section capital letters are matrix variables. The primitive rotation matrices are usually considered to be those for rotation about coordinate axes. For example, to rotate a (column) vector r by φ about the X axis we have the familiar

     [ 1     0        0    ]
r′ = [ 0   cos φ   -sin φ  ] r    (8)
     [ 0   sin φ    cos φ  ]

with related simple matrices for the other two axes. The composition of rotations A, B, C is written r′ = CBA r, which is interpreted either as performing rotations A, B, C in that order, each being specified with respect to a fixed (laboratory) frame, or as performing them in the reverse order C, B, A, with each specified in the coordinates that result from applying the preceding rotations to the preceding coordinate frame, starting with C acting on the laboratory frame.

Conversion methods from other rotation parameterizations (like Euler angles) to the matrix representation appear in [1]. Generally the matrices that result are of a moderate formal complexity, but with pleasing symmetries of form arising from the redundancy in the matrix. In this work all we shall need is eq. (7).

Infinitesimal rotations are interesting. The skew symmetric component of a rotation matrix A is S = A - Aᵀ, and it turns out that

A = exp(S) = I + S + (1/2)S² + ⋯    (9)

A little further work establishes that if the conical representation of a rotation is (φ, nx, ny, nz) and

    [  0   -nz    ny ]
Z = [  nz    0   -nx ]    (10)
    [ -ny   nx     0 ]

then A = exp(φZ) = I + (sin φ)Z + (1 - cos φ)Z², from which the explicit form of the 3 × 3 matrix corresponding to the conic representation can be found.

Writing eq. (10) as

         [ 0  0   0 ]        [  0  0  1 ]        [ 0  -1  0 ]
Z = nx   [ 0  0  -1 ]  + ny  [  0  0  0 ]  + nz  [ 1   0  0 ]    (11)
         [ 0  1   0 ]        [ -1  0  0 ]        [ 0   0  0 ]


shows that infinitesimal rotations can be expressed elegantly in terms of orthogonal infinitesimal rotations. It is of course no accident that the matrix weighted by nx in eq. (11) is the derivative of the matrix in eq. (8) evaluated at φ = 0. Further, it will be seen that if v′ is the vector arising from rotating vector v by the rotation (δφ, n), then each component of v′ is just the sum of various weighted components of v. Thus infinitesimal rotations, unlike general ones, commute.

3 Operation Counts

Given the speed and word length of modern computers, including those used in embedded applications like controllers, concern about operation counts and numerical accuracy of rotational operations may well be irrelevant. However, there is always a bottleneck somewhere, and in many applications, such as control, graphics, and robotics, it can still happen that computation of rotations is time consuming. Small repeated incremental rotations are especially useful in graphics and in some robot simulations [6,23]. In this case the infinitesimal rotations of the last section can be used if some care is exercised. To compute the new x′ and y′ that result from a small rotation θ around the Z axis, use

x′ = x - y sin θ,    y′ = y + x′ sin θ    (12)

The use of x′ in the second equation maintains normalization (unit determinant of the corresponding rotation matrix). For graphics, cumulative error can be fought by replacing the rotated data by the original data when the total rotation totals 2π [6].
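A minimal sketch of this incremental update (the helper name incremental_z is mine); the implied 2 × 2 matrix is [[1, -s], [s, 1 - s²]], whose determinant is exactly 1, which is the normalization property claimed above:

```python
import math

def incremental_z(x, y, theta):
    # Eq. (12): the updated x is reused in the y update, which keeps the
    # determinant of the implied matrix at exactly 1.
    x1 = x - y * math.sin(theta)
    y1 = y + x1 * math.sin(theta)
    return x1, y1
```

Because the determinant is 1 and the trace is less than 2 in magnitude for small θ, iterating the map keeps points on a near-circular closed orbit rather than spiraling in or out.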

Operation counts of course depend not just on the representation but on the algorithm implemented. As the application gets more complex, it becomes increasingly likely that ameliorations in the form of cancellations, appropriate data structures, etc., will arise. Even simple operations sometimes offer surprising scope for ingenuity (compare eqs. (5) and (6)). Thus it is not clear that listing the characteristics of a few primitive operations is guaranteed to be helpful (I shall do it nevertheless). Realistic prediction of savings possible with one representation or the other may require detailed algorithmic analysis. Taylor [20] works out operation counts for nine computations central to his straight line manipulator trajectory task. In these reasonably complex calculations (up to 112 adds and 156 multiplies in one instance), quaternions often cut down the number of operations by a factor of 1.5 to 2. Taylor also works out operation counts for a number of basic rotation and vector transformation operations. This section mostly makes explicit the table appearing in Taylor's chapter detailing those latter results. Section 4.9 gives counts for iterated (identical and different) rotations.

3.1 Basic Operations

Recall the representations at issue

Conic (φ, n) - axis n, angle φ


Quaternion [λ, A], with λ = cos(φ/2), A = sin(φ/2) n

Matrix - a 3 × 3 matrix

For 3-vectors, if multiplication includes division, we have the following:

Vector Addition 3 additions

Scalar - Vector Product 3 multiplications

Dot Product 3 multiplications 2 additions

Cross Product 6 multiplications 3 additions

Vector Normalization 6 multiplications 2 additions 1 square root

3.2 Conversions

Conic to Quaternion. One division to get φ/2, one sine and one cosine operation, three multiplications to scale n. Total: 4 multiplications, 1 sine, 1 cosine.

Quaternion to Conic. Although it looks like one arctangent is all that is needed here after dividing the quaternion through by λ, that seems to lose track of the quadrants. Best seems simply finding φ/2 by an arccosine and a multiplication by 2.0 to get φ, then computing sin(φ/2) and dividing the vector part by that. Total: 1 arccosine, 1 sine, 4 multiplications. Special care is needed here for small (especially zero) values of φ.
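This conversion, including the small-φ guard the text calls for, might be sketched as follows (the function name, the epsilon threshold, and the default axis returned for φ = 0 are my choices):

```python
import math

def quat_to_conic(q, eps=1e-12):
    # Recover (phi, n) from a normalized quaternion [lam, A]:
    # find phi/2 by an arccosine, double it, then divide the
    # vector part by sin(phi/2).
    lam, a = q[0], q[1:]
    half = math.acos(max(-1.0, min(1.0, lam)))
    phi = 2.0 * half
    s = math.sin(half)
    if s < eps:
        # phi near zero: the axis is arbitrary; pick a fixed one.
        return 0.0, (1.0, 0.0, 0.0)
    return phi, tuple(c / s for c in a)
```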

Quaternion to Matrix. Eq. (7) is the basic tool. Although the matrix is orthonormal (thus one can compute a third row from the other two by vector cross product), that constraint does not help in this case. Computing all the necessary products takes 13 multiplications, after which 8 additions generate the first two rows. The last row costs only another 5 additions working explicitly from eq. (7), compared to the higher cost of a cross product. Total: 13 multiplications, 13 additions.
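A direct transcription of eq. (7) might look like the following sketch (illustrative only; it recomputes shared products rather than sharing them, so it does not reach the 13-multiplication count):

```python
import math

def quat_to_matrix(q):
    # Build the 3 x 3 rotation matrix of eq. (7) from [lam, Ax, Ay, Az];
    # diagonal entries are 2*lam^2 - 1 + 2*Ai^2.
    l, x, y, z = q
    return [
        [2*l*l - 1 + 2*x*x, 2*(x*y - l*z),     2*(x*z + l*y)],
        [2*(y*x + l*z),     2*l*l - 1 + 2*y*y, 2*(y*z - l*x)],
        [2*(z*x - l*y),     2*(z*y + l*x),     2*l*l - 1 + 2*z*z],
    ]

# Example: 90 degrees about Z gives the familiar permutation-like matrix.
m = quat_to_matrix((math.cos(math.pi/4), 0.0, 0.0, math.sin(math.pi/4)))
```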

Conic to Matrix. One way to proceed is to compute the matrix whose elements are expressed in terms of the conic parameters (referred to in Section 2). Among other things it requires more trigonometric evaluations, so best seems the composition of conic to quaternion to matrix conversions, for a total of 17 multiplications, 13 additions, 1 sine, 1 cosine.

Matrix to Quaternion. The trace of the matrix in eq. (7) is seen to be 4λ² - 1, and differences of symmetric off-diagonal elements yield quantities of the form 4λAx, etc. Total: 5 multiplications, 6 additions, 1 square root.

Matrix to Conic. Easiest seems matrix to quaternion to conic.

3.3 Rotations

Vector by Matrix. Three dot products. Total: 9 multiplications, 6 additions.


Vector by Quaternion. Implement eq. (6). Total: 18 multiplications, 12 additions.

Compose Rotations (Matrix Product). Here it pays to exploit orthonormality rather than simply implementing matrix multiplication with nine dot products: six dot products and one cross product are necessary. Total: 24 multiplications, 15 additions. This method uses only 2/3 of the information in the matrices and may be suspect for inexact representations (see below on normalization).

Compose Rotations (Quaternion Product). Implementing eq. (3) directly takes one dot product, one cross product, two vector additions, two vector-scalar products, one multiply, and one add. Total: 16 multiplications, 12 additions.

Rotate around X, Y, or Z Axis. To do the work of eq. (8) requires 1 sine, 1 cosine (unless they are already known), 4 multiplications, and 2 additions.

3.4 Normalization

In the conic representation one has little choice but to normalize the axis direction vector to unit length and leave the rotation angle alone. In the quaternion and matrix representations the direction and magnitude are conflated in the parameters, and normalization of parameters affects them both.

Quaternion. Simply dividing through all quaternion components by the norm of the quaternion costs 8 multiplies, 3 additions, one square root.

Matrix. An exact rotation matrix is orthonormal. Three simple normalization algorithms suggest themselves. In the no crossproduct method, each matrix row (in this paragraph meaning row or column) is normalized to a unit vector. Cost: 12 multiplies, 6 additions, 3 square roots. No rows are guaranteed orthogonal. The one crossproduct method normalizes two rows and computes the third as their cross product. Cost: 18 multiplies, 7 additions, 2 square roots. This method does not guarantee the first two rows are orthogonal. The two crossproduct method performs the one crossproduct normalization, then recomputes the first row as the cross product of the second and third, say, for a total of 23 multiplies, 10 additions, and 2 square roots. Neither of the last two methods uses information from one matrix row. The best normalization procedure by definition should return the orthonormal matrix closest to the given one; the above procedures are approximations to this ideal. One idea might be to find the principal axis direction (the eigenvector corresponding to the largest real eigenvalue) and call that the normalized rotation axis. An exact rotation matrix M has one eigenvalue of unity and two conjugate complex ones. What then of the rotation angle? In the errorless matrix, cos(φ) = (Trace(M) - 1)/2, and the trace is invariant under the eigenvalue calculation. Thus the angle can be calculated from three of the matrix elements. But does this choice minimize any reasonable error in rotation space? Indeed, what error should be minimized? The work here uses the no- and one-crossproduct forms, tries some empirical investigation in Section 4.7, and leaves the large question of optimal normalization as a research issue. Exercise: find the eigenvalues and eigenvectors of the matrix in eq. (8).
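The one crossproduct method might be sketched as follows (the function names are mine; note that, as stated above, the third row is orthogonal to the first two by construction, but the first two rows are not forced to be orthogonal to each other):

```python
import math

def unit(v):
    # Normalize a 3-vector to unit length.
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def normalize_one_crossproduct(m):
    # Normalize the first two rows, then replace the third row by
    # their cross product.
    r0 = unit(m[0])
    r1 = unit(m[1])
    r2 = [r0[1]*r1[2] - r0[2]*r1[1],
          r0[2]*r1[0] - r0[0]*r1[2],
          r0[0]*r1[1] - r0[1]*r1[0]]
    return [r0, r1, r2]
```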


Operation               Quaternion             Matrix
Rep -> Conic            4*, 1 acos, 1 sin      9*, 6+, 1 acos, 1 sin, 1 √
Conic -> Rep            4*, 1 sin, 1 cos       17*, 13+, 1 sin, 1 cos
Rep -> Other            13*, 13+               5*, 6+, 1 √
Rep o vector            18*, 12+               9*, 6+
Rep o Rep               16*, 12+               24*, 15+
X, Y, or Z Axis Rot     8*, 4+                 4*, 2+
Normalize               8*, 3+, 1 √            18*, 7+, 2 √

Table 1: Summary of quaternion and matrix operation counts. Rep means the representation of the column heading; Other means that of the other column heading. Applying a rotation to a vector or another rotation is denoted by o. Multiplications, additions, and square roots are denoted *, +, √. Trigonometric functions are denoted sin, acos, etc.

3.5 Summary and Discussion

The operation counts are summarized in Table 1. They generally agree with those in [20], with a few differences. Taylor seems to have a quaternion to matrix conversion that takes 9 multiplies and 19 adds (Exercise: find it), and he implements his matrix to quaternion conversion with 3 sines, 3 cosines, and one arctangent instead of a single square root. Taylor normalizes matrix representations by the one crossproduct method. The use of eq. (6) for vector rotation saves 4 multiplies over Taylor's counts. In any event, the differences in costs of the two representations do not seem as dramatic for these basic calculations as for Taylor's more complex robotic trajectory calculations, but iterative operations reveal a difference of a factor of two to three in effort (see Section 4.9). Generally it may be impossible to predict realistically the extent of the savings possible with one representation over the other without a detailed algorithmic analysis.

4 Numerical Accuracy

4.1 Vector Rotation Tasks, Methods, and Representations

The manipulations of quaternions, matrices, and vectors presented in Section 3 are based on simple building blocks of vector addition, scaling, and dot and cross products. The numbers involved in the representations are generally well behaved, with vector components and trigonometric functions lying in the interval [-1, 1], and rotation magnitudes and inverse trigonometric functions often in the interval [-2π, 2π]. Neither representation requires algorithms with obvious numerically dangerous operations like dividing by small differences.

It is often alleged that one representation is more numerically accurate than the other. However, it is not easy to see how to quantify this claim analytically. The matrix representation is redundant, representing the three parameters with nine numbers. Does this mean it is more vulnerable to numerical error, or less? Given that the algorithms are straightforward and simple, do their numerical results depend more on their structure or on the data they operate on? The individual numbers (often components of unit 3-vectors) have nonlinear constraints that may affect the numerical performance. Perhaps some of the alleged differences are thought to arise in microcomputer implementations with small word size; how does accuracy vary with the number of significant digits?

In general, the lack of analytical leverage on these questions motivated the empirical study reported in this section. Herein we compare representations for several common vector rotation tasks: rotation of a vector, iterated identical rotations of a vector, and iterated different (random) rotations of a vector. Error is measured either as an error vector or as the amount that the resulting vector's length and direction differ from the correct values.

More precisely, a correct answer is determined by computation at full (double) precision. When the rotation task is to iterate the same rotation, the correct answer can be (and is) calculated for the final cumulative rotation rather than calculated by iteration. The application of a rotation to a vector is accomplished by two methods: by vector-matrix multiplication as in eq. (8), and by Horn's quaternion-vector formula, eq. (6). The iteration of a rotation is accomplished in three ways: quaternion-quaternion multiplication, matrix-matrix multiplication, and the quaternion-vector formula. Thus the results of iterated rotation are represented as a quaternion, matrix, and (cumulatively rotated) vector, respectively.

The experiments use a variable precision implementation of all the relevant real, vector, quaternion, and matrix operations, offering a runtime choice of the number of bits of mantissa precision and the approximation type (rounding or truncation). The most basic operations are approximation, which rounds or truncates the mantissa of a C-language double (double-precision real floating point number) to the required precision, and arithmetic operations that approximate the operands, operate (at full precision), and approximate the result. From these basics are built vector cross and dot products, quaternion and matrix products, normalization routines, etc.
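The mantissa approximation operation can be sketched as follows (an assumed reconstruction, not the paper's C code; the function name and mode strings are mine). The mantissa is scaled into [0.5, 1), quantized to the requested number of bits, and rescaled:

```python
import math

def approx(x, bits, mode="round"):
    # Keep `bits` bits of mantissa: split x into mantissa and exponent,
    # quantize the mantissa, and reassemble.
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)            # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2.0 ** bits
    if mode == "round":
        m = round(m * scale) / scale
    else:                           # "trunc": truncation toward zero
        m = math.trunc(m * scale) / scale
    return math.ldexp(m, e)
```

Truncation toward zero never increases the magnitude, which is the systematic downward bias the experiments below exploit; rounding halves the worst-case error and removes the bias.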

Vectors are specified or created randomly as (x, y, z) unit vectors at the highest machine precision. Rotations are specified or created randomly in the conic (φ, n) (rotation angle, rotation axis direction) representation, also at highest precision. Random unit vectors for input or rotation axes are not uniformly distributed in orientation space, being generated by normalizing three uniform variates to produce a unit vector. For random (φ, n) rotations, a random vector is produced for n, and a fourth uniform variate is normalized to the range [-π, π]. Exercise: what do these distributions of vectors and rotations look like in orientation and rotation space?
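A sketch of that generation procedure (the uniform range [-1, 1] for the three axis variates is my assumption; the paper does not state the range, and as it notes, the resulting axes are not uniform over the sphere):

```python
import math
import random

def random_conic(rng=random):
    # Axis: normalize three uniform variates (NOT uniform over the
    # sphere); angle: a fourth uniform variate scaled to [-pi, pi].
    v = [rng.uniform(-1.0, 1.0) for _ in range(3)]
    n = math.sqrt(sum(c * c for c in v))
    axis = tuple(c / n for c in v)
    phi = rng.uniform(-math.pi, math.pi)
    return phi, axis
```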

The correct answer is computed at highest precision by applying a single rotation (by Nφ for an N-long iteration of the same rotation φ) around the rotation axis to the input vector, using eq. (6). Then, working at the lower experimental precision, the conic input representation is converted to quaternion and matrix representation by eqs. (2) and (7). Next, the experimental rotational task is carried out at the experimental lower precision. Finally, the error analysis consists of highest precision comparison of the low precision result with the high precision correct answer.

When a reduced-precision vector v′ is obtained, its error is determined by comparing it to the full-precision, non-iteratively rotated vector vc. The error vector is v′ - vc; the length error ΔL and direction error ΔD are

ΔL = | ||v′|| - ||vc|| |    (13)

ΔD = arccos(v̂′ · v̂c)    (14)

where v̂ denotes the unit vector in the direction of v.

In all the work here ||v|| = 1. Directional errors are reported in radians, length errors in units of the (unit) length of the vector to be transformed. The experiments are reported below in a rather discursive style; along the way a few side issues are addressed, and conclusions are drawn that affect the later choices of experimental parameters.
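Eqs. (13) and (14) in code (names mine; the arccosine argument is clamped to guard against tiny excursions outside [-1, 1] from roundoff):

```python
import math

def length_dir_error(v_approx, v_true):
    # Eq. (13): absolute difference of lengths.
    # Eq. (14): angle between the two directions.
    norm = lambda v: math.sqrt(sum(c * c for c in v))
    dl = abs(norm(v_approx) - norm(v_true))
    cos_angle = (sum(a * b for a, b in zip(v_approx, v_true))
                 / (norm(v_approx) * norm(v_true)))
    dd = math.acos(max(-1.0, min(1.0, cos_angle)))
    return dl, dd
```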

4.2 Analytic Approaches

Let us begin to consider the conversion of conic (φ, n) rotation parameters to quaternion and matrix form. For simplicity we might assume that θ = φ/2 is known exactly. Assume that at n bits of precision a number x is represented to a maximum additive absolute error of δ = 2^-n, which is a relative error of δ/x. Ignoring second order terms, the product of two imprecise numbers (x + δ)(y + ε) has additive relative error of δ/x + ε/y. When looking up or computing a function, the error to first order in f(x + δ) is δf′(x). Thus the error in computing the cosine of an errorful number x + δ is sin(x)δ. If, as in the implementation of this paper, the function is computed initially to high precision and then approximated, the further approximation may add another error of δ, for a total absolute error of (1 + sin(x))δ. Of course the function computation could be set up to eliminate the last approximation error if the working precision were known in advance.

These considerations are a start at constructing formulae for the relative error in the conversion of conic to quaternion and matrix form via eqs. (2) and (7). However, missing is the constraint that n be a unit vector (that the matrix be orthonormal). Without this constraint the predictions of errors will be pessimistic. Interacting with the constraint is the usual simplistic assumption that the worst happens in every operation (i.e., errors never cancel). This usual assumption makes analysis possible but potentially irrelevant. In fact, errors in the matrix and quaternion components after conversion depend closely on the values of the inputs, and empirically do not exhibit predictable, data-independent structure.

Turning to a slightly grosser examination of the structure of rotation operations, consider the implementation used in the work below for the two competing equations for quaternion-mediated rotation (eq. (6)) and matrix-mediated rotation (multiplication of a vector by the matrix in eq. (7)). The quaternion formula is the sum of three vectors (the original and two cross products), whereas the matrix formula is a matrix-vector product. Going through the code for this operation, we find that the x′ component of the result vector (x′, y′, z′) after applying the quaternion formula for quaternion (λ, Ax, Ay, Az) to the input vector (x, y, z) is computed by operations applied at their usual precedence in the following formula:

x′ = x + 2λ(Ay z - Az y) - 2Az(Az x - Ax z) + 2Ay(Ax y - Ay x)    (15)

and similarly for the y′ and z′ components. Both (x, y, z) and A are unit vectors, and λ is a cosine.

The operations implementing the corresponding component by the matrix multiplication formula are

x′ = (2λ² - 1 + 2Ax²)x + 2(AxAy - λAz)y + 2(AxAz + λAy)z    (16)

and similarly for the y′ and z′ components.

There is nothing in these two formulae to predict the significant statistical difference (especially under the more predictable truncation approximation) in the performance of these two methods (see, e.g., Fig. 6).

Pressing on, consider still a grosser description of the structure. Looking at eqs. (4) and (8), say that the error in quaternion and matrix (caused, perhaps, by truncation) is described as a scaling of the whole quaternion or matrix (by δ = (1 - kε) for some constant k, if the truncation error is ε). In that case the matrix-transformed vector is in error by a factor of δ, or an additive error of kε, and the quaternion-transformed vector by a factor of δ² (because of the conjugation), or an additive error of 2kε (to first order). Iterated matrix or quaternion self-multiplication exponentially damps the individual matrix and quaternion elements, which otherwise vary sinusoidally as the iteration proceeds. Under iterated application of the scaled matrix, the direction of the resulting vector should be unaffected, but its length should shrink, decaying exponentially to zero.

In fact, this level of explanation suffices to explain the data resulting from a single rotation (Fig. 6): the quaternion errors are approximately twice the matrix errors for a single rotation under truncation. It also correctly predicts the behavior of matrix and quaternion elements under iteration of the same rotation (Fig. 3). It correctly predicts the form of error for the matrix transformation under iteration of the same rotation (Fig. 8), and forms part of the explanation for quaternion transformation error (see Section 4.4). Exercise: In Fig. 3 the period of the matrix element is half that of the quaternion element. This result is expected from inspection of eqs. (8, 2), but looks inconsistent with eq. (7). Is it?

4.3 Single Rotations

First, some sample single rotations were applied:

v′ = R o v    (17)


Figure 3 About Here

Figure 3: The components of the quaternion and matrix are scaled and shifted sinusoids. Here, representative quaternion and matrix element amplitudes decay exponentially through time under iterative rotation with truncation. Under rounding the cumulative decay is negligible. Dotted line - matrix element [0,0]; solid line - a quaternion element.

Figure 4 About Here

Figure 4: Rotating the unit vector in direction [1, 1, -1] around the axis [1, 1, 1] by varying amounts: (a) 0.1 rad, (b) 1.0 rad, (c) 1.7 rad. The errors are plotted logarithmically against the number of bits of mantissa precision, under truncation. Hollow and dark squares - direction error with quaternion and matrix representations. Star and cross - length error with quaternion and matrix representations. Variations in error magnitude and its rank ordering result from varying rotation magnitude. The largest error at 22 bits precision is 0.0001 (the smallest representable number is about 10⁻⁹).

Fig. 4 shows the results, comparing the conic (eq. (6)) with the matrix rotation computation under truncation. In general, increasing the rotation magnitude does not increase error, the error magnitudes are not predictably ordered, and the error magnitude is at worst 10⁻⁶ with 22 bits of mantissa precision.

Next, 100 individual random rotation - random vector pairs were generated, with the rotation applied to the vector at a single precision (here, as throughout the paper until Section 4.8, 10 bits), under both truncation (case RaTr) and rounding (case RaRd). (Statistics for several cases are collected in Table 2 in Section 4.6.)

The results are shown in two different styles in Figs. 5, 6, and 7. Fig. 5 displays length and direction errors of the 100 trials under truncation, indexed along the X axis although they are each independent. As predicted in Section 4.2, the matrix representation outperforms quaternions by a factor of about two. The length errors under truncation show the nonzero mean one would expect. The same data are presented in Fig. 6 as two projections of the three-dimensional error vectors, all oriented so that the true answer lies along the Z axis, facing down, with its point at the origin, aiming in the -Z direction. The X - Y projection shows how the direction errors arise, and the X - Z projection shows how the length errors arise (with the approximate vectors all being too short). Exercise: Construct the rotation that points a given (x, y, z) vector along the Z axis (a) from rotations about the (X, Y, Z) axes, (b) using quaternions.

Figure 5 About Here

Figure 5: Case RaTr. (a) Length errors of 100 different random (rotation, vector) pairs at 10 bits precision under truncation. Circles plot the quaternion-vector errors, Xs the matrix-vector errors. (b) As for (a), but direction errors.


[Figure 1]

[Figure 2]

[Figure 3: matrix and quaternion elements vs. time, truncation]

Figure 6 About Here

Figure 6: The same case and data as the previous figure, plotted as error vectors, where the correct answer vector points to the origin and lies along the positive Z axis (see text). (a) Error vectors projected onto the X - Z plane. Circles plot the quaternion-vector errors, Xs the matrix-vector errors. (b) As for (a), but projection onto the X - Y plane.

Figure 7 About Here

Figure 7: Case RaRd. (a), (b) as the previous figure, with the same input rotations and vectors, but with operations under rounding to 10 bits of precision.

Fig. 7 is similar to Fig. 6, except that rounding is employed (note the scale change). Not only does the length error bias disappear, as expected, but the difference between the methods decreases, with the matrix implementation maintaining a slight advantage.

4.4 Iterated Rotations and Normalization

The second experiment is to iterate the same small rotation of φ around a particular direction using three rotation composition methods. Quaternion multiplication and matrix multiplication produce iterated quaternion and matrix rotation representations; at the end of the iterations, the resulting rotation representation is applied to the input vector once to determine the rotated vector. A third method, called the vector method here, iteratively applies the identical incremental rotation to the vector at each iterative step, preserving only the rotated vector. There are thus two cases (quaternion and matrix) of composing rotation representations, and one case (vector) of composing the application of the rotation to the vector. Previous results indicate that the matrix method is both more accurate and more efficient for applying a single rotation, and empirically the results are much better with this choice as well, so rotations are implemented in the vector method by matrix-vector multiplication.
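The quaternion and vector methods can be sketched as follows at full precision (the paper's experiments run these loops at the reduced experimental precision; the names are mine, and the vector method here uses the quaternion formula for brevity rather than the matrix-vector product the text prescribes):

```python
import math

def qmul(p, q):
    # Quaternion product, eq. (3).
    l1, a1 = p[0], p[1:]
    l2, a2 = q[0], q[1:]
    dot = a1[0]*a2[0] + a1[1]*a2[1] + a1[2]*a2[2]
    cx = (a1[1]*a2[2] - a1[2]*a2[1],
          a1[2]*a2[0] - a1[0]*a2[2],
          a1[0]*a2[1] - a1[1]*a2[0])
    return (l1*l2 - dot,
            l1*a2[0] + l2*a1[0] + cx[0],
            l1*a2[1] + l2*a1[1] + cx[1],
            l1*a2[2] + l2*a1[2] + cx[2])

def qrot(q, r):
    # Vector rotation by eq. (6).
    lam, a = q[0], q[1:]
    axr = (a[1]*r[2] - a[2]*r[1], a[2]*r[0] - a[0]*r[2], a[0]*r[1] - a[1]*r[0])
    t = (axr[1]*a[2] - axr[2]*a[1], axr[2]*a[0] - axr[0]*a[2], axr[0]*a[1] - axr[1]*a[0])
    return tuple(r[i] + 2.0 * (lam * axr[i] - t[i]) for i in range(3))

phi = math.pi / 100.0
q = (math.cos(phi / 2.0), 0.0, 0.0, math.sin(phi / 2.0))

# Quaternion method: compose the representation, apply it once at the end.
acc = (1.0, 0.0, 0.0, 0.0)
for _ in range(50):
    acc = qmul(q, acc)
v_quat = qrot(acc, (1.0, 0.0, 0.0))

# Vector method: apply the incremental rotation to the vector each step.
v_vec = (1.0, 0.0, 0.0)
for _ in range(50):
    v_vec = qrot(q, v_vec)
```

At full precision both loops agree with the closed-form answer (here, 50 steps of π/100 about Z carry (1, 0, 0) to (0, 1, 0)); the experiments below are about how the two accumulations diverge when every operation is approximated.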

These experiments were carried out under both truncation and rounding. Truncation is sometimes used to get at worst case performance in a numerical analysis sense. Such an approach might be useful in analysis of a non-iterative computational scheme (such as a single rotation), but the cumulative scaling effects described in Section 4.2 overwhelm numerical precision effects. The errors engendered through iterative truncation turn out to be qualitatively similar to those that arise under rounding. These results are thus good for geometric intuition about the methods, and illustrate the importance of normalization. Fig. 8(a) and (b) show the (shocking) errors that result if no normalization takes place during the iterated rotation composition, and if the resulting non-normalized product quaternion or matrix rotation is applied to the input vector. This is case ISO (Iterative Sequence, 0 normalizations). Without normalization, all the representations show cumulative errors that are easy to understand.


[Figure 5(a), (b): scatter plots of length and direction errors, random conic case, 10 bits, truncation]

[Figure 6(a), (b): X - Z and X - Y projections of error vectors, random conic case, 10 bits, truncation]

Figure 8 About Here

Figure 8: Case ISO. Iterative transformation of the vector [1, 1, -1] about the axis [1, 1, 1] by 1 rad, 200 times. Calculations done with ten bits of mantissa precision under truncation; no normalization of the final representation before applying it to the vector. This choice of input parameters results in a representative performance, but not a normative one. (a) Result vector error projected on the X - Z plane. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. The matrix errors are a very tight helix up the Z axis, with the vector errors only slightly larger. (b) As for (a), but showing the X - Y projection. The matrix and vector errors project to the small dark blob at the origin.

First consider the vector representation. It is acted on at each iteration by an identical but errorful transform of eq. (6). We see it in both views spiraling away from the correct initial value, its length shrinking and its direction error increasing. The performance of the vector method might be improved by a factor of two on average by using the matrix transformation at every step (considering the arguments in Section 4.2 and the results shown in Fig. 6). Another important aspect of this method is its variability in performance for different rotations, arising from the variability in the accuracy of the first calculated rotation (again, Fig. 6).

As predicted in Section 4.2, the matrix representation is affected in its scaling property but not significantly in its rotational properties: there is effectively no directional error, but the result shrinks to less than half its original length in 200 iterations.

The quaternion representation acts, using eq. (6), by adding two cross product vectors to the original input vector; as the quaternion components become increasingly systematically errorful, they affect the result in the interesting way shown in the figure. One effect is the decaying magnitude of the quaternion; the other is that as the cumulative rotation φ increases from zero, the sin(φ/2) weighting of A, the vector part of the quaternion, starts at zero and returns to zero with a period of 2π/t, where t is the truncated value of the incremental rotation angle. Thus with that period the effective rotation error varies sinusoidally, and returns to zero as the magnitude of A does.

The same data are plotted, this time for 400 iterations, in Fig. 9. First consider length errors. This presentation emphasizes the exponential decay of the answer's length to zero with the matrix and vector representations. As the quaternion's magnitude ultimately decays to zero, the result vector will be unaffected by the rotation and will remain equal to the input vector, so its length error stabilizes at zero. The short period sinusoidal character of the length error arises from the arguments of the last paragraph. The initially increasing envelope arises from the increasing inaccuracy caused by the truncation; the ultimate exponential decay of the envelope arises from the shrinking of the quaternion components.

Considering direction errors, the matrix and conic representations have (locally) linearly increasing errors as they spiral away from the true answer direction. (The linear increase cannot be maintained forever on the orientation sphere, of course.)


[Figure 7(a), (b): X - Z and X - Y projections of error vectors, random conic case, 10 bits, rounding]

shy -

I II II II

--1 I i 1 I ~ I I

- I I - I I

-t I f I - I I

I ~ I ~

-- r~t1 ~ ~ ~

0415 00

0415 c--- (i ~__ I J (

---- I I lt ---- J

t 1 I J 1

I

I 1 ~ __ _~

I I J I ~

-~ I----~--- I ~

_J

Figure 8(a) Figure 8(b)

Figure 9 About Here

Figure 9 Case 150 for 400 iterations (see text) (a) Length error of result vector Solid line - vector representation dotted line - matrix dashed line - quaternion The vector and matrix errors coincide (b) As for (a) but showing direction error for the resulting vector The vector and matrix errors are very close to zero

Figure 10 About Here

Figure 10 Case ISO under rounding instead of truncation Solid line - vector represenshytation dotted line - matrix dashed line - quaternion Though the forms of the errors are similar to the truncation case their relative magnitudes are more equal (a) Result vector projected on X - Z plane (b) As for (a) but showing X - Y projection

nion methods direction error ultimately becomes purely periodic as the answer vector becomes static The direction error as a function of iteration number t and angle r beshytween the rotation axis and the input vector is then ofthe form laquorsin t)2+(r-rc05 t)2)12 or the length of a circular chord from a fixed point on the circle to a point moving around the circle at constant velocity Before that time the direction error exhibits a more comshyplex two-humped form this arises from the two separate cross product components of the vector increment in eq (6) Exercise Would there be any difference in direction and length errors if the rotation is accomplished by conjugation (eq (4)) instead of eq (6)

Next in this sequence, the last two demonstrations are repeated under rounding, not truncation. The expectation is that, due to the better approximation and possible error cancellations arising from high and low approximations, the errors in the representations will be less, but that systematic errors will still exist. The results (Figs. 10, 11) bear this out; the errors often seem like much smaller, noisy versions of the truncation errors. In Fig. 11 the quaternion errors display the fine two-humped structure longer, since it is not overwhelmed as quickly by shrinking quaternion magnitude. Rounding has a dramatically good effect on the systematic shrinking of matrices and quaternions under iterative self multiplication.

So far we have, purely for intuition, dealt with the pathological case in which no normalization of rotation representations is performed. The arguably pathological case of approximation through truncation yields insight about the structure of systematic approximation errors in an iterative scheme. From here on we abandon truncation approximation (except in Section 4.7).
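The reduced-precision arithmetic that drives these experiments is easy to simulate. The report's exact mechanism is not shown in this section, so the following is a minimal sketch under the assumption that every result is re-quantized to a t-bit mantissa, under either rounding or truncation:

```python
import math

def quantize(x, bits=10, mode="round"):
    """Simulate low-precision arithmetic by keeping `bits` bits of mantissa.

    mode="trunc" discards low-order bits (systematically biased toward zero);
    mode="round" rounds to the nearest representable value.
    """
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)             # x = m * 2**e with 0.5 <= |m| < 1
    scale = 1 << bits
    q = round(m * scale) if mode == "round" else math.trunc(m * scale)
    return math.ldexp(q / scale, e)
```

Truncating every intermediate result biases magnitudes toward zero, which is consistent with the shrinking matrix and quaternion norms described above.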

Figure 11 About Here

Figure 11. Data of the previous figure, showing length and direction errors. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of result vector. (b) Direction error.

16

Figure 9(a) Figure 9(b)

Figure 10(a) Figure 10(b)

Figure 12 About Here

Figure 12. Case IS1: rotating the vector (1, 1, -1) by 1 radian increments about the axis (1, 1, 1), with 10 bits precision under rounding, normalizing the quaternion and matrix representations prior to application to the input vector, and normalizing the final vector representation to its initial length. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. The vector representation has lower variance, and the quaternion exhibits marked periodicity. Only the first 100 iterations are shown; the rest are similar. (b) Direction error.

We now look at two different styles of normalization. In the first, no normalization is performed on intermediate representations, but the final quaternion, matrix, and vector are normalized. In the second, the intermediate quaternion, matrix, and vector are normalized at each step. Normalization of the matrix and quaternion is not dependent on the input data, but the length of the input vector must be computed and remembered for the vector representation to be normalized. This fact might be significant if many vectors are being treated at once.

Fig. 12(a) shows the length error when the final quaternion and matrix are normalized prior to application to the input vector, and the incrementally generated vector representation is normalized to its original length (here known to be unity). This case is called IS1, for Iterative Sequence, 1 normalization. The matrix normalization used is the one crossproduct form. When only one normalization is being performed, the three types of normalization perform virtually identically. The computations were performed under rounding to the usual ten bits of precision. If its initial length is not remembered, the vector representation cannot be normalized, and its error is the same as the non-normalized case of Fig. 11, showing a linear error accumulation.

Fig. 12(b) compares direction error for the three representations, normalized as in Fig. 12(a). The comparative performance here is certainly not so clear cut, and the error accumulations resemble random walks. Alternatively, the performance could depend more on the particular choice of input data (rotation and input vector) than on numerical properties of the methods.

Finally, a natural question concerns the effects of normalization applied after every rotation composition. This is case ISN (N normalizations). Fig. 13 shows the results for length and direction. As might be expected, for length error the normalized vector has the best performance (presumably as accurate as machine precision allows). The quaternion result has a systematically negative mean, higher variance, and periodicity. The matrix length error displayed in Fig. 13(a) results from the no crossproduct form of matrix normalization. The trend is not an illusion; carried out to 400 iterations, the length error shows a marked systematic increase in the negative direction. On the other hand, iteration of the one crossproduct normalization resulted in a factor of two worsening of the magnitude of the direction errors for the matrix representation. Repeated normalizations seem to have adversely affected the accuracy of the vector representation after many iterations, but more work is needed to tell if this is a repeatable effect. In any event, it seems clear that the repeated normalizations are certainly not improving matters much in this case over case IS1.

17

Figure 11(a) Figure 11(b)

Figure 12(a) Figure 12(b)

Figure 13 About Here

Figure 13. Case ISN. As for the previous figure, but normalizing the rotation representation at each iteration, using no crossproduct matrix normalization, which improved direction errors but worsened length errors over the one crossproduct form. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. (b) Direction error.

Figure 14 About Here

Figure 14. Case RS1. As for case IS1 (no normalization of intermediate representations), but using a sequence of random rotations. There is no difference in performance with the different matrix normalizations; no crossproduct matrix normalization is shown. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector (first 100 values; the rest are similar). (b) Direction error. The rather confusing results at least have some clear qualitative differences.

The main conclusion here is that the cost of normalizing quaternion or matrix representations after every rotation in a repeated sequence may not be repaid in results, and in fact may be actively harmful. The vector representation seems to do best of the three on length error and worst on direction error. More work (analytic or experimental) would be needed to clarify the performance ranks, or even to recommend or disqualify a representation for this task.

4.5 Concatenated Random Rotations

Here a sequence of random rotations is generated as described earlier, analogous to the sequence of identical rotations in the last experiment. They are applied cumulatively to an input vector. In each case computation is with 10 bits precision under rounding. The cases of a single final normalization and of normalizing every intermediate representation (called cases RS1 and RSN) were investigated, as were both the no- and one crossproduct forms. The results, comparable to those in cases IS1 and ISN (Figs. 12 and 13), are shown in Figs. 14 and 15.
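A sketch of the apparatus for this experiment (at full precision; the report's random rotation generator is described earlier and not reproduced in this section, so the uniform axis-and-angle choice below is an assumption):

```python
import math
import random

def random_rotation_matrix(rng):
    """A random rotation: uniform axis (normalized Gaussian 3-vector) and angle."""
    v = [rng.gauss(0.0, 1.0) for _ in range(3)]
    norm = math.sqrt(sum(c * c for c in v))
    x, y, z = (c / norm for c in v)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    c, s = math.cos(phi), math.sin(phi)
    t = 1.0 - c
    # Rodrigues' rotation matrix about unit axis (x, y, z) by angle phi
    return [[t*x*x + c,   t*x*y - s*z, t*x*z + s*y],
            [t*x*y + s*z, t*y*y + c,   t*y*z - s*x],
            [t*x*z - s*y, t*y*z + s*x, t*z*z + c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Accumulate a sequence of random rotations, as in cases RS1/RSN
rng = random.Random(0)
M = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
for _ in range(25):
    M = matmul(random_rotation_matrix(rng), M)
```

At full double precision the accumulated product stays orthonormal to roundoff; the experiments repeat this loop with every operation quantized to 10 bits.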

The results indicate that the vector representation remains better than the other two for length errors, but for direction errors the results are unclear. It is hard to draw clear distinctions either between the normalization methods or the representations. Again the form of matrix normalization has some annoying effects, which motivate the next section.

18

Figure 13(a) Figure 13(b)

Figure 14(a) Figure 14(b)

Figure 15 About Here

Figure 15. Case RSN. As for case RS1, but normalizing all intermediate representations. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error, using one crossproduct matrix normalization. (b) Direction error, using the one crossproduct form. (c) Direction error, using the no crossproduct form.

Case        μ-vec      μ-mat      μ-quat     σ-vec     σ-mat     σ-quat
RaTr   L    -          0.006225   0.005026   -         0.001181  0.003369
       D    -          0.001787   0.004578   -         0.001094  0.002307
RaRd   L    -         -0.000310  -0.000225   -         0.001087  0.001263
       D    -          0.000965   0.001180   -         0.000542  0.000759
IS1    L    0.000276  -0.000260  -0.000262   0.000466  0.000857  0.001648
 100   D    0.003694   0.004063   0.001435   0.001705  0.002298  0.001123
ISN    L    0.000137  -0.001772  -0.000811   0.000533  0.001106  0.001312
 100   D    0.005646   0.004844   0.001606   0.002786  0.002872  0.001045
RS1    L   -0.000252  -0.000162  -0.000157   0.000521  0.001612  0.002022
 100   D    0.003710   0.005854   0.006065   0.002150  0.003686  0.006065
RSN    L    0.000164  -0.000602  -0.001097   0.000473  0.000703  0.001666
 100   D    0.002269   0.009830   0.005960   0.000874  0.004230  0.002236
       D    0.002269   0.005187   0.005960   0.000874  0.002248  0.002236

Table 2. Statistics for length (L) and direction (D) errors for some of the cases illustrated in this section. Cases RaTr and RaRd are the independent random rotations; IS1 and ISN are the iterative sequences (one normalization, and normalization of each intermediate representation); and RS1 and RSN are the random sequences. Only the first 100 data values are used from the iterated rotations, to diminish the effect of longer-term trends. The second RSN line is with no crossproduct normalization (this only affects the matrix representation).

4.6 Statistical Summary

Table 2 summarizes some of the data from the cases so far

4.7 Matrix Normalization

Some experiments explored the performance of the three types of matrix normalization (no, one, or two cross products) in the context of iterated identical and random rotations.
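The report does not restate the three normalization forms in this section, so the following sketch is a plausible reading (an assumption, not the report's code): the no crossproduct form merely rescales the rows; the one crossproduct form also rebuilds the third row from the first two; the two crossproduct form rebuilds the second row as well, forcing full orthogonality.

```python
import math

def _unit(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def _cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def renormalize(M, crossproducts=1):
    """Re-orthonormalize the rows of a drifted rotation matrix M.

    crossproducts = 0: rescale rows only (cheap; rows may stay skewed)
    crossproducts = 1: also force row3 = row1 x row2
    crossproducts = 2: additionally force row2 = row3 x row1
    """
    r1, r2, r3 = M
    if crossproducts >= 1:
        r3 = _cross(r1, r2)
    if crossproducts >= 2:
        r2 = _cross(r3, r1)
    return [_unit(r1), _unit(r2), _unit(r3)]
```

With two cross products the result is exactly orthogonal (to roundoff) regardless of the input skew, which is why that form costs the most and repairs the most.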

The experiments consist of two iterated identical rotations (Figs. 16, 17) under rounding, and two iterated random rotation sequences under truncation and rounding. Fig. 18 gives the direction errors for two sequences of random rotations applied to an input vector. In no case is there significant difference between the performances on the length error at 200 iterations, but the one- or two crossproduct forms significantly outperform the no crossproduct form over 400 iterations, since the latter exhibits systematically increasing negative length error and the former do not.

19

Figure 15(a)

Figure 15(b)

Figure 15(c)

Figure 16 About Here

Figure 16. Iterated identical rotations of the unit vector in direction (1, 1, -1) by 1 radian about the axis (1, 1, 1), computed to 10 bits under rounding. Solid line - no cross products; dotted line - one cross product; dashed line - two cross products. (a) Length errors. (b) Direction errors.

Figure 17 About Here

Figure 17. As for the previous figure, but the rotation is of the unit vector in direction (1, 2, 3) by iterated rotations of 2 radians about the axis (3, -2, -1).

For direction error, few conclusions can be drawn about relative performance under iterated identical rotation sequences. It is perhaps surprising that the no crossproduct form usually performs here at least as well as the other two over a small number of iterations, and that for random sequences there is no clear winner between normalization types, even approximating with truncation. Thus the symptoms noticed in the last two sections, involving long-term relative effects of the normalization form, were not noticed over 200 iterations. The one- and two crossproduct forms may be more stable over long iteration, but the issue of optimal normalization remains open.

4.8 Average Error of Concatenated Random Rotations

Which representation is best, or worst? To wind up this overlong exercise in trying to derive structure from noise, let us consider average behavior for the task of random rotation sequences (perhaps of more general interest than iterated identical rotations). Technical decisions on normalization are based on the conclusions of previous sections: approximation is by rounding, with normalization only after the last rotation. Matrix normalization is done with the one crossproduct method. Each run is a single experiment like that of Section 4.5, only performed (with the same data) at a number of different precisions.

Such a set of runs yields length and direction error surfaces plotted against the precision

Figure 18 About Here

Figure 18. Direction errors only, for a sequence of random rotations applied to an input vector. Legend as in the previous two figures. (a) Truncation to 10 bits. (b) Rounding to 10 bits.

20

Figure 16(a) Figure 16(b)

Figure 17(a) Figure 17(b)

Figure 18(a) Figure 18(b)

Figure 19(a) Figure 19(b)

Figure 19(c) Figure 19(d)

Rep         Repeated Rot                     Random Seq
Vec(Mat)    (9N+17)*  (6N+13)+   1 sc        26N*       19N+       N sc
Vec(Quat)   (18N+4)*  (12N)+     1 sc        22N*       12N+       N sc
Quat        (16N+22)* (12N+12)+  1 sc        (20N+18)*  (12N+12)+  N sc
Matrix      (27N+26)* (18N+19)+  1 sc        (44N+9)*   (31N+6)+   N sc

Table 4. Operation counts for N-long rotation sequences. The long form of matrix multiplication is used here. Vec(Mat) is the vector representation with intermediate transformation by matrices; Vec(Quat) with transformation by the quaternion method. Multiplications, additions, and sine-cosine pairs are denoted by *, +, and sc.

matrix transformation) since matrix-vector multiplication costs one third of matrix-matrix multiplication

4.10 Conclusions and Questions

The conclusions we can draw at this point are the following:

1. Numerical simulation of variable precision arithmetic has yielded some tentative conclusions, but in many interesting cases they are rather delicate and depend on niceties of the implementation. Accurate performance predictions may only be possible with faithful simulation of the particular hardware and software at issue.

2. Practical differences in numerical accuracy tend to disappear at reasonable precisions. After 200 iterations of a rotation, 53 bits of mantissa precision (the double representation in the C language) yields no error to 10 decimal places. With 24 bits (the float representation), effects were noticeable in the 5th to 7th decimal place with all methods.

3. Normalization is vital before quaternion or matrix representations are applied to vectors, but is not as important during the composition of representations, and the extra numerical computation of frequent normalization may actually decrease accuracy in the final result.

4. The vector (with matrix transformation) and matrix representations for rotation sequences seem to outperform the quaternion for accuracy.

5. The vector (with matrix transformation) representation for rotation sequences, with a single normalization of vector length at the end, is most computationally efficient and has least length error on the average.

6. Averaged over many trials, the difference in accuracy of the three methods amounts only to about a single bit of precision for a small number of rotation compositions (say 25 or fewer).

7. Matrices are the least computationally efficient on rotation sequences.

22

There are also some remaining basic questions:

1. What is a reasonable analytic approach to the numerical accuracy problem for interesting rotational tasks (say, sequences of random rotations under rounding)?

2. How should optimal matrix or quaternion normalization methods be defined and constructed?

5 Discussion

The area of robust computational geometry is becoming popular, with an annual international conference and increasingly sophisticated theoretical formulations. Very little of that work (if any) is related to rotation algorithms. Symbolic algebra systems can help with analysis [4]. A good treatment of computer arithmetic and rounding appears in [15], with an application to detecting line intersection. Serious qualitative representational problems arise when issues of topology (line connectivity) are at stake, and much of the work is on computing line intersection accurately with finite precision arithmetic [19]. For instance, Milenkovic's analysis is based on facts about line intersection, such as the number of extra bits that are necessary to guarantee a specific precision of answer. Some of these facts are surprising: it takes at least three times the input precision to calculate a line intersection location to the same precision [11]. Robust polygon and polyhedron intersection algorithms, as well as various line-sweeping algorithms, are based on accurate line intersection algorithms [8].

Several points of comparison between rotation parameterizations and representations have been raised in the preceding sections. It is perhaps worth repeating that little evidence for dramatic differences in numerical accuracy has emerged, though the evidence might tend to favor matrices slightly and to condemn the vector representation.

One of the common applications of rotational transforms is to affect several vectors at once, either in the context of transforming a set of points around (as in graphics) or in handling coordinate systems represented as n points (n - 1 of them at points at infinity representing axis directions, and one representing the origin of coordinates, in a homogeneous coordinate system). In the first case the repeated application of the same transform will be more economical if the work is done in the matrix representation. In the second the same may well be true, but of course the whole point of homogeneous coordinates is to make the projective, a fortiori affine, transforms into linear ones implemented with matrix multiplication. This means, for example, that coordinate systems, rigid transformations including translation and rotation, and point projective transforms (simple camera models) can all be represented as matrices and composed with matrix multiplication, and that the operation of such transforms on vectors is simply matrix-vector multiplication. In this context homogeneous coordinates by definition require the matrix representation.
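For example, a rigid transform packed into a 4 x 4 homogeneous matrix (a sketch with illustrative values) translates an ordinary point (w = 1), but leaves a direction, i.e. a point at infinity with w = 0, affected only by the rotation:

```python
import math

def homogeneous(R, t):
    """Pack rotation R (3x3 row lists) and translation t into a 4x4 transform."""
    H = [row[:] + [ti] for row, ti in zip(R, t)]
    H.append([0.0, 0.0, 0.0, 1.0])
    return H

def apply(H, v4):
    """Apply a 4x4 homogeneous transform to a homogeneous 4-vector."""
    return tuple(sum(H[i][j] * v4[j] for j in range(4)) for i in range(4))

# 90-degree rotation about Z plus a translation along X
c, s = math.cos(math.pi / 2), math.sin(math.pi / 2)
Rz = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
H = homogeneous(Rz, [5.0, 0.0, 0.0])
```

An ordinary point (1, 0, 0, 1) is rotated to (0, 1, 0) and then translated to (5, 1, 0); the direction (1, 0, 0, 0) is only rotated, to (0, 1, 0, 0).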

If the rotation at issue is a general one (say specified in conic (φ, n) parameters), then conversion either to the quaternion or matrix form is easy. Thus the representations in

23

this case do not dictate the semantics, or how one wants to think about the rotation. If the rotation is specified in terms of rotations about (X, Y, Z) coordinate axes, then the matrix representation is easily and robustly constructed.

Real-time applications involving concatenating rotations and applying them to a small number of vectors (perhaps one: the position of a robot hand) need detailed analysis to see which representation is the most efficient. Similarly for more complex geometric calculations.

A common application of rotation representations in vision is to express the motion of the observer. Often the rotational motions, for mechanical or mathematical reasons, are expressed as rotations about some fixed axes, which in practice are usually orthogonal. Examples are the motions induced by pan and tilt camera platforms, or the expression of egomotion in terms of rotations about the observer's local X, Y, Z axes. In this case the matrix representation is the clear choice, to maximize efficiency and intuition.

Reconstructing the rotation that transformed known points is a useful vision task that requires fitting data to a model of the transformation. The straightforward route, followed since Roberts' thesis [17], is to do a linear least squares fit to the transform's matrix representation. This approach is perhaps risky, since the matrix elements are not independent and the error being minimized thus may not be the one desired. The alternative of using conic parameters has been successfully used [12] and may well be a better idea. Recently, more sophisticated Kalman filter methods have been used to derive motion from point matches through time [5]. Model-matching with constraints allows a model to be matched to a class of objects, and promising approaches are emerging that use hybrid methods of symbolic algebra (implementing polynomial triangulation to get explicit solutions for known constraints), followed by root-finding and sophisticated minimization techniques to search for minimum-error solutions [13, 14].

A less common but important set of applications is to reason about and manipulate rotations themselves. For example, one may want to sample rotation space evenly [9], or to create smoothly interpolated paths in rotation space between a given set of rotations [16]. In these cases the Euler-Rodrigues parameters are the most convenient.

Another example is the use, since 1932, of quaternions in the neurophysiological literature to describe eye kinematics [22]. Models of human three-dimensional oculomotor performance, especially for the vestibulo-ocular reflex, and feedback models of saccades, involve computations on rotations that are much more stable when carried out in conic or Euler-Rodrigues parameters using quaternion arithmetic. Subtractive feedback laws adequate for simple (left or right) eye movements do not generalize to arbitrary rotational axes. The quaternion model uses multiplicative feedback, explains human performance better than similar models based on rotation matrices, and leads to explicit predictions such as a four-channel saccade computation [21].

24

6 Bibliography

[1] S. L. Altmann. Rotations, Quaternions, and Double Groups. Clarendon Press, Oxford, 1986.

[2] C. M. Brown. Gaze controls with interactions and delays. In DARPA Image Understanding Workshop (submitted to IEEE-TSMC), pages 200-218, May 1989.

[3] C. M. Brown. Prediction in gaze and saccade control. Technical Report OUEL 177189, UR TR-295, Oxford University Dept. Engineering Science (U. Rochester Comp. Sci. Dept.), May 1989.

[4] C. I. Connolly, D. Kapur, J. L. Mundy, and R. Weiss. GeoMeter: a system for modeling and algebraic manipulation. In Proceedings, DARPA Image Understanding Workshop, pages 797-804, May 1989.

[5] O. D. Faugeras, F. Lustman, and G. Toscani. Motion and structure from motion from point and line matches. In Proceedings, International Conference on Computer Vision, pages 25-34, June 1987.

[6] J. D. Foley and A. Van Dam. Fundamentals of Interactive Computer Graphics. Addison-Wesley, 1982.

[7] W. R. Hamilton. On quaternions; or on a new system of imaginaries in algebra. Philosophical Magazine, 3rd Ser., 25:489-495, 1844.

[8] C. Hoffmann and J. Hopcroft. Towards implementing robust geometric computations. In Proceedings, ACM Symposium on Computational Geometry, 1988.

[9] Berthold K. P. Horn. Robot Vision. MIT Press / McGraw-Hill, 1986.

[10] K. I. Kanatani. Group Theory in Computer Vision. Springer Verlag, 1989.

[11] V. J. Milenkovic. Robust geometric computations for vision and robotics. In Proceedings, DARPA Image Understanding Workshop, pages 764-773, May 1989.

[12] J. L. Mundy. Private communication, August 1989.

[13] J. L. Mundy. The application of symbolic algebra techniques to object modeling. Lecture, University of Oxford, August 1989.

[14] J. L. Mundy. Symbolic representation of object models. In preparation, August 1989.

[15] T. Ortmann, G. Thiemt, and C. Ullrich. Numerical stability of geometric algorithms. In Proceedings, ACM Symposium on Computational Geometry, 1987.

[16] K. S. Roberts, G. Bishop, and S. K. Ganapathy. Smooth interpolation of rotational motion. In Computer Vision and Pattern Recognition 1988, pages 724-729, June 1988.

[17] L. G. Roberts. Machine perception of three-dimensional solids. In J. T. Tippett et al., editors, Optical and Electro-Optical Information Processing, pages 159-197. MIT Press, 1968.

[18] O. Rodrigues. Des lois géométriques qui régissent les déplacements d'un système solide dans l'espace, et de la variation des coordonnées provenant de ses déplacements considérés indépendamment des causes qui peuvent les produire. Journal de Mathématiques Pures et Appliquées, 5:380-440, 1840.

[19] D. Salesin, J. Stolfi, and L. Guibas. Epsilon geometry: building robust algorithms from imprecise computations. In Proceedings, ACM Symposium on Computational Geometry, 1989.

[20] R. Taylor. Planning and execution of straight line manipulator trajectories. In J. M. Brady, J. M. Hollerbach, T. L. Johnson, T. Lozano-Pérez, and M. T. Mason, editors, Robot Motion: Planning and Control. MIT Press, 1982.

[21] D. Tweed and T. Vilis. Implications of rotational kinematics for the oculomotor system in three dimensions. Journal of Neurophysiology, 58(4):832-849, 1987.

[22] G. Westheimer. Kinematics for the eye. Journal of the Optical Society of America, 47:967-974, 1932.

26


isomorphic to the Euler-Rodrigues parameters, but their geometrical interpretation is difficult to visualize.

Euler-Rodrigues parameters are given by a particular semantics for normalized quaternions. After normalization, the four quaternion components are the Euler-Rodrigues parameters. Quaternions were invented by Hamilton [7] as an extension of complex numbers (the goal was an algebra of 3-vectors allowing multiplication and division), but were immediately (within a day) recognized by him as being intimately related to rotations. It is said that every good idea is discovered, possibly several times, before ultimately being invented by someone else. In this case quaternions and their relation to rotations were earlier discovered by Rodrigues [18]. He also discovered the construction proving that the composition of two rotations is also a rotation, later invented by Euler and today bearing Euler's name. More on this interesting story appears in [1].

The most relevant representations for computer vision are conical parameters, quaternions (Euler-Rodrigues parameters), and of course 3 x 3 orthonormal matrices.

The topology of SO(3) is of interest, as well as its parameterization. The first step in its visualization is to note that the four Euler-Rodrigues parameters form a unit 4-vector. As such they define points lying in the 3-sphere (the unit sphere in 4-space), and every point on the spherical surface defines a rotation. The next intuition is that a rotation (φ, n) is equivalent to the rotation (-φ, -n). Thus each rotation is represented twice on the surface of the sphere, and to remove this ambiguity and achieve a one-to-one mapping of rotations onto 4-space points, opposite points (usually opposite hemispheres and half-equators) on the sphere must be identified. This turns the surface into a cross-cap, or projective plane (Fig. 1). Another isomorphism relates the rotations to points in a solid parameter ball in 3-space. The radial distance of a point (from 0 to π) represents the magnitude of the rotation angle, and its direction from the origin gives the axis of rotation. In this case just the skin of one hemisphere and half the equator must be identified with the other half, and all null rotations (in whatever direction) map to the origin.
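The double representation is easy to check numerically: a quaternion and its negation rotate every vector identically, since the vector transformation is quadratic in the quaternion components. The helper below is illustrative (it uses the standard expansion of the conjugation rule):

```python
import math

def rotate_by(q, r):
    """Rotate 3-vector r by unit quaternion q = (w, vx, vy, vz)."""
    w, v = q[0], q[1:4]
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
    c = cross(v, r)
    d = cross(v, c)
    # expansion of q [0, r] q*: r + 2w (v x r) + 2 v x (v x r)
    return tuple(r[i] + 2.0 * (w * c[i] + d[i]) for i in range(3))
```

Negating q flips the signs of both w and v, so every term above, being a product of two components, is unchanged.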

Many of the infelicities surrounding calculations with rotations arise from the topology of SO(3). One consequence is that there are two inequivalent paths from any rotation to any other, not unlike the paths between points on a cylinder (Fig. 2). The paths cannot be shrunk to one another (consider the two paths from a point to itself, one that goes all the way around the cylinder and one that does not), and they clearly have different properties. Consider the path that minimizes time or energy expended to push a mass from point A through the points in the order B, C. It is easy to imagine that a better path exists in direction d2 than d1. Path planning (and interpolation) in rotation space must consider the topology [16].

Another aspect of the connectivity is that rotations must either be represented redundantly or discontinuously. Consider the visualization of the projective plane in which the upper hemisphere and half the equator of the 3-sphere is isomorphic with the rotations, and the lower hemisphere and half-equator is identified with diametrically opposite points. A smooth path crossing the equator moves directly from the rotation (for example) (π, n) (in conic parameters) to the rotation (π - ε, -n), a striking discontinuity of reversal in

2

Figure 1 About Here

Figure 1. By identifying (gluing together) parallel sides in the directions indicated, various more or less familiar surfaces can be formed. The Klein bottle and projective plane cannot be physically realized in three-space. The SO(3) topology is that of a projective plane.

Figure 2 About Here

Figure 2. One consequence of SO(3) being non-simply connected is the inequivalence between the paths from A to B in direction d1 and from A to B in direction d2. Any two paths between two points on a simply-connected surface can be shrunk to be equivalent, but that is not the case on the cylinder shown here, or on the projective plane.

the direction vector. If the entire 3-sphere is used to represent rotations, this problem disappears, with (π, n) being followed by (π + ε, n). However, this same rotation is still also represented by the point (π - ε, -n), so the isomorphism is lost. The special cases created by potentially duplicated representation complicate algorithms and symbolic manipulations. For related reasons, representations of straight lines are plagued with the same problems (an observation due to Joe Mundy).

2 Basic Quaternion and Matrix Formulae

2.1 Quaternion Representation

Quaternions are well discussed in the mathematical and robotics literature. An accessible reference is [9], though there are several others. This brief outline is restricted to those properties most directly related to our central questions. Quaternions embody the Euler-Rodrigues parameterization of rotations directly, and thus are closely related to the conic representation of rotation.

If the Euler-Rodrigues parameters of a rotation are (λ, Λ), the quaternion that represents the rotation is a 4-tuple, best thought of as a scalar and a vector part:

Q = [λ, Λ]   (1)

Actually the vector part is a pseudovector, transforming to itself under inversion like the cross product, but that distinction will not bother us.

If a rotation in terms of its conic parameters is R(φ, n), the corresponding quaternion is

Q = [cos(φ/2), sin(φ/2) n]   (2)

The multiplication rule for quaternions is

[λ1, Λ1][λ2, Λ2] = [λ1λ2 - Λ1 · Λ2, λ1Λ2 + λ2Λ1 + Λ1 × Λ2]   (3)

3

A real quaternion is of the form [a, 0] and multiplies like a real number. A pure quaternion has the form [0, A]. A unit quaternion is a pure quaternion [0, n] with n a unit vector. The conjugate of a quaternion A = [a, A] is the quaternion A* = [a, -A]. The norm of a quaternion A is the square root of AA*, which is seen to be the square root of a² + A · A. A quaternion of unit norm is a normalized quaternion, and it is normalized quaternions that represent rotations. A quaternion [a, A] = [a, An] can satisfy the normalization condition if a = cos(α), A = sin(α). This gives a Hamilton quaternion, or versor, which we shall not use. Better for our purposes is to write a = cos(φ/2), A = sin(φ/2). The conjugate inverts the rotation axis and thus denotes the inverse rotation. The inverse quaternion of A is A⁻¹ such that AA⁻¹ = [1, 0]. For a normalized quaternion, the inverse is the conjugate. The product of two normalized quaternions is a normalized quaternion, a property we desire for a rotation representation. Under the Rodrigues parameterization a normalized quaternion represents a rotation, quaternion multiplication corresponds to composition of rotations, and quaternion inverse is the same as quaternion conjugation.
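These algebraic facts are cheap to check numerically. A sketch in Python (the helper names are illustrative), building quaternions from conic parameters as in eq. (2) and multiplying them by eq. (3):

```python
import math

def from_conic(phi, n):
    """Eq. (2): quaternion [cos(phi/2), sin(phi/2) n] for a unit axis n."""
    s = math.sin(phi / 2.0)
    return (math.cos(phi / 2.0), (s * n[0], s * n[1], s * n[2]))

def qmul(p, q):
    """Eq. (3), on (scalar, vector) pairs."""
    (l1, L1), (l2, L2) = p, q
    dot = sum(a * b for a, b in zip(L1, L2))
    cr = (L1[1]*L2[2] - L1[2]*L2[1],
          L1[2]*L2[0] - L1[0]*L2[2],
          L1[0]*L2[1] - L1[1]*L2[0])
    return (l1 * l2 - dot,
            tuple(l1*b + l2*a + c for a, b, c in zip(L1, L2, cr)))

def qconj(q):
    return (q[0], tuple(-c for c in q[1]))

def qnorm2(q):
    return q[0] ** 2 + sum(c * c for c in q[1])
```

The closure of normalized quaternions under multiplication, and the fact that the conjugate is the inverse, both follow at roundoff level.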

One way to apply a rotation represented by a quaternion to a vector is to represent the vector by a pure quaternion and to conjugate it with the rotation quaternion. That is, to rotate a given vector r by A = [cos(φ/2), sin(φ/2) n], form the pure quaternion r = [0, r]; the resulting vector is the vector part of the pure quaternion

r′ = A r A*    (4)

From the quaternion multiplication rule the conical transformation follows, though it can also be derived in other ways. It describes the effect of rotation upon a vector r in terms of the conical rotation parameters:

r′ = cos φ r + sin φ (n × r) + (1 - cos φ)(n · r) n    (5)

For computational purposes neither conjugation nor the conical transform is as quick as the clever formula (note the repeated subexpression) given in [9], arising from applying vector identities to eq. (5). Given a vector r and the normalized quaternion representation [λ, Λ], the rotated vector is

r′ = r + 2(λ(Λ × r) - (Λ × r) × Λ)    (6)
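As an illustration, here is a sketch (hypothetical code, not the paper's implementation) of the conical transform of eq. (5) and the fast formula of eq. (6); at full precision the two agree:

```python
import math

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def rotate_conic(r, phi, n):
    """Eq. (5): r' = cos(phi) r + sin(phi) (n x r) + (1 - cos(phi)) (n.r) n."""
    c, s = math.cos(phi), math.sin(phi)
    nxr = cross(n, r)
    k = (1.0 - c) * sum(a * b for a, b in zip(n, r))
    return tuple(c * r[i] + s * nxr[i] + k * n[i] for i in range(3))

def rotate_quat(r, lam, L):
    """Eq. (6): r' = r + 2(lam (L x r) - (L x r) x L); note the shared L x r."""
    Lxr = cross(L, r)          # the repeated subexpression
    t = cross(Lxr, L)
    return tuple(r[i] + 2.0 * (lam * Lxr[i] - t[i]) for i in range(3))
```

For a rotation by φ about the Z axis applied to the X unit vector, both routines return (cos φ, sin φ, 0).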

The matrix representation of a rotation represented by conic parameters (φ, n) can be written explicitly in terms of those parameters, and deriving it is a well known exercise for robotics students; it can be found, for example, in [1]. For computational purposes it is easier to use the following result. The matrix corresponding to the quaternion rotation representation [λ, Λx, Λy, Λz] is

    [ 1 - 2(Λy² + Λz²)    2(ΛxΛy - λΛz)      2(ΛxΛz + λΛy)  ]
    [ 2(ΛxΛy + λΛz)       1 - 2(Λx² + Λz²)   2(ΛyΛz - λΛx)  ]    (7)
    [ 2(ΛxΛz - λΛy)       2(ΛyΛz + λΛx)      1 - 2(Λx² + Λy²) ]
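A direct transcription of eq. (7) might look like the following sketch (illustrative code, not the paper's own):

```python
import math

def quat_to_matrix(lam, L):
    """Build the 3x3 rotation matrix of eq. (7) from a normalized
    quaternion [lam, (x, y, z)]."""
    x, y, z = L
    return [
        [1 - 2 * (y * y + z * z), 2 * (x * y - lam * z), 2 * (x * z + lam * y)],
        [2 * (x * y + lam * z), 1 - 2 * (x * x + z * z), 2 * (y * z - lam * x)],
        [2 * (x * z - lam * y), 2 * (y * z + lam * x), 1 - 2 * (x * x + y * y)],
    ]
```

For a rotation by φ about the Z axis, the result reduces to the familiar matrix with cos φ and ±sin φ in the upper left 2 × 2 block.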


2.2 Matrix Representation

This representation is the most familiar to graphics, computer vision, and robotics workers and is given correspondingly short shrift here. A rotation is represented by an orthonormal 3 × 3 matrix. This rotation component occupies the upper left hand corner of the usual 4 × 4 homogeneous transform matrix for three dimensional points expressed as homogeneous 4-vectors. The matrix representation is highly redundant, representing as it does three quantities with nine numbers.

In this section capital letters are matrix variables. The primitive rotation matrices are usually considered to be those for rotation about coordinate axes. For example, to rotate a (column) vector r by φ about the X axis we have the familiar

    r′ = [ 1    0        0    ]
         [ 0   cos φ  -sin φ  ] r    (8)
         [ 0   sin φ   cos φ  ]

with related simple matrices for the other two axes. The composition of rotations A, B, C is written r′ = CBAr, which is interpreted either as performing rotations A, B, C in that order, each being specified with respect to a fixed (laboratory) frame, or performing them in the reverse order C, B, A, with each specified in the coordinates that result from applying the preceding rotations to the preceding coordinate frame, starting with C acting on the laboratory frame.

Conversion methods from other rotation parameterizations (like Euler angles) to the matrix representation appear in [1]. Generally the matrices that result are of a moderate formal complexity, but with pleasing symmetries of form arising from the redundancy in the matrix. In this work all we shall need is eq. (7).

Infinitesimal rotations are interesting. A rotation matrix A is the exponential of a skew symmetric matrix S, and it turns out that

    A = exp(S) = I + S + (1/2!)S² + ...    (9)

A little further work establishes that if the conical representation of a rotation is (φ, (nx, ny, nz)) and

    Z = [  0   -nz   ny ]
        [  nz   0   -nx ]    (10)
        [ -ny   nx   0  ]

then A = exp(φZ) = I + (sin φ)Z + (1 - cos φ)Z², from which the explicit form of the 3 × 3 matrix corresponding to the conic representation can be found.
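The closed form A = I + (sin φ)Z + (1 - cos φ)Z² is easy to check numerically; the following sketch (illustrative code, with hypothetical helper names) builds it and verifies it against the primitive X-axis matrix of eq. (8):

```python
import math

def skew(n):
    """Eq. (10): the skew symmetric matrix Z built from the unit axis n."""
    nx, ny, nz = n
    return [[0.0, -nz, ny], [nz, 0.0, -nx], [-ny, nx, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rodrigues(phi, n):
    """A = exp(phi Z) = I + sin(phi) Z + (1 - cos(phi)) Z^2."""
    Z = skew(n)
    Z2 = matmul(Z, Z)
    s, c = math.sin(phi), math.cos(phi)
    return [[(1.0 if i == j else 0.0) + s * Z[i][j] + (1.0 - c) * Z2[i][j]
             for j in range(3)] for i in range(3)]
```

With n = (1, 0, 0) the result is exactly the matrix of eq. (8).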

Writing eq. (10) as

    Z = nx [ 0  0  0 ]  +  ny [  0  0  1 ]  +  nz [ 0 -1  0 ]
           [ 0  0 -1 ]        [  0  0  0 ]        [ 1  0  0 ]    (11)
           [ 0  1  0 ]        [ -1  0  0 ]        [ 0  0  0 ]


shows that infinitesimal rotations can be expressed elegantly in terms of orthogonal infinitesimal rotations. It is of course no accident that the matrix weighted by nx in eq. (11) is the derivative of the matrix in eq. (8) evaluated at φ = 0. Further, it will be seen that if v′ is the vector arising from rotating vector v by the rotation (δφ, n), then each component of v′ is just the sum of various weighted components of v. Thus infinitesimal rotations, unlike general ones, commute.

3 Operation Counts

Given the speed and word length of modern computers, including those used in embedded applications like controllers, concern about operation counts and numerical accuracy of rotational operations may well be irrelevant. However, there is always a bottleneck somewhere, and in many applications such as control, graphics, and robotics it can still happen that computation of rotations is time consuming. Small repeated incremental rotations are especially useful in graphics and in some robot simulations [6,23]. In this case the infinitesimal rotations of the last section can be used if some care is exercised. To compute the new x′ and y′ that result from a small rotation θ around the Z axis, use

x′ = x - y sin θ,    y′ = y + x′ sin θ    (12)

The use of x′ in the second equation maintains normalization (unit determinant of the corresponding rotation matrix). For graphics, cumulative error can be fought by replacing the rotated data by the original data when the total rotation reaches 2π [6].
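The incremental update of eq. (12) can be sketched as follows (hypothetical code). The corresponding linear map is [[1, -s], [s, 1 - s²]] with s = sin θ, whose determinant is exactly 1; that is the normalization property just mentioned:

```python
import math

def spin(x, y, theta, steps):
    """Iterate eq. (12): x' = x - y sin(theta); y' = y + x' sin(theta).
    Using the already-updated x' in the second line keeps the map's
    determinant at exactly 1, so orbits stay bounded near the circle."""
    s = math.sin(theta)
    for _ in range(steps):
        x = x - y * s
        y = y + x * s
    return x, y
```

For small θ the orbit of a unit vector stays within O(θ) of the unit circle even over many steps, though for graphics the text still recommends resetting to the original data once the total rotation reaches 2π.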

Operation counts of course depend not just on the representation but on the algorithm implemented. As the application gets more complex, it becomes increasingly likely that ameliorations in the form of cancellations, appropriate data structures, etc. will arise. Even simple operations sometimes offer surprising scope for ingenuity (compare eqs. (5) and (6)). Thus it is not clear that listing the characteristics of a few primitive operations is guaranteed to be helpful (I shall do it nevertheless); realistic prediction of savings possible with one representation or the other may require detailed algorithmic analysis. Taylor [20] works out operation counts for nine computations central to his straight line manipulator trajectory task. In these reasonably complex calculations (up to 112 adds and 156 multiplies in one instance), quaternions often cut down the number of operations by a factor of 1.5 to 2. Taylor also works out operation counts for a number of basic rotation and vector transformation operations. This section mostly makes explicit the table appearing in Taylor's chapter detailing those latter results; Section 4.9 gives counts for iterated (identical and different) rotations.

3.1 Basic Operations

Recall the representations at issue:

Conic (φ, n): axis n, angle φ


Quaternion [λ, Λ], with λ = cos(φ/2), Λ = sin(φ/2) n

Matrix: a 3 × 3 matrix

For 3-vectors, if multiplication includes division, we have the following:

Vector Addition: 3 additions

Scalar-Vector Product: 3 multiplications

Dot Product: 3 multiplications, 2 additions

Cross Product: 6 multiplications, 3 additions

Vector Normalization: 6 multiplications, 2 additions, 1 square root

3.2 Conversions

Conic to Quaternion: One division to get φ/2, one sine and one cosine operation, three multiplications to scale n. Total: 4 multiplications, 1 sine, 1 cosine.

Quaternion to Conic: Although it looks as if one arctangent is all that is needed here after dividing the quaternion through by λ, that seems to lose track of the quadrants. Best seems simply finding φ/2 by an arccosine and a multiplication by 2.0 to get φ, then computing sin(φ/2) and dividing the vector part by it. Total: 1 arccosine, 1 sine, 4 multiplications. Special care is needed here for small (especially zero) values of φ.
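The conversion just described, including the guard for small φ, might be sketched like this (illustrative code; the epsilon threshold and default axis are assumptions, not the paper's choices):

```python
import math

def quat_to_conic(lam, L, eps=1e-12):
    """Recover (phi, n) from a normalized quaternion [lam, L] via arccos."""
    half = math.acos(max(-1.0, min(1.0, lam)))   # phi/2, in [0, pi]
    phi = 2.0 * half
    s = math.sin(half)
    if s < eps:
        # phi is (nearly) zero: the axis is arbitrary, so return a default
        return phi, (1.0, 0.0, 0.0)
    return phi, tuple(c / s for c in L)
```

Round-tripping through the conic to quaternion conversion of eq. (2) recovers the original angle and axis.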

Quaternion to Matrix: Eq. (7) is the basic tool. Although the matrix is orthonormal (thus one can compute a third row from the other two by vector cross product), that constraint does not help in this case. Computing all the necessary products takes 13 multiplications, after which 8 additions generate the first two rows. The last row costs only another 5 additions working explicitly from eq. (7), compared to the higher cost of a cross product. Total: 13 multiplications, 13 additions.

Conic to Matrix: One way to proceed is to compute the matrix whose elements are expressed in terms of the conic parameters (referred to in Section 2). Among other things it requires more trigonometric evaluations, so best seems the composition of conic to quaternion to matrix conversions, for a total of 17 multiplications, 13 additions, 1 sine, 1 cosine.

Matrix to Quaternion: The trace of the matrix in eq. (7) is seen to be 4λ² - 1, and differences of symmetric off-diagonal elements yield quantities of the form 2λΛz, etc. Total: 5 multiplications, 6 additions, 1 square root.

Matrix to Conic Easiest seems matrix to quaternion to conic

3.3 Rotations

Vector by Matrix: Three dot products. Total: 9 multiplications, 6 additions.


Vector by Quaternion: Implement eq. (6). Total: 18 multiplications, 12 additions.

Compose Rotations (Matrix Product): Here it pays to exploit orthonormality rather than simply implementing matrix multiplication with nine dot products: six dot products and one cross product are necessary. Total: 24 multiplications, 15 additions. This method uses only 2/3 of the information in the matrices and may be suspect for inexact representations (see below on normalization).

Compose Rotations (Quaternion Product): Implementing eq. (3) directly takes one dot product, one cross product, two vector additions, two vector-scalar products, one multiply, and one add. Total: 16 multiplications, 12 additions.

Rotate around X, Y, or Z Axis: To do the work of eq. (8) requires 1 sine, 1 cosine (unless they are already known), 4 multiplications, and 2 additions.

3.4 Normalization

In the conic representation one has little choice but to normalize the axis direction vector to unit length and leave the rotation angle alone. In the quaternion and matrix representations the direction and magnitude are conflated in the parameters, and normalization of parameters affects them both.

Quaternion: Simply dividing all quaternion components through by the norm of the quaternion costs 8 multiplies, 3 additions, 1 square root.

Matrix: An exact rotation matrix is orthonormal. Three simple normalization algorithms suggest themselves. In the no crossproduct method, each matrix row (in this paragraph meaning row or column) is normalized to a unit vector. Cost: 12 multiplies, 6 additions, 3 square roots. No rows are guaranteed orthogonal. The one crossproduct method normalizes two rows and computes the third as their cross product. Cost: 18 multiplies, 7 additions, 2 square roots. This method does not guarantee the first two rows are orthogonal. The two crossproduct method performs the one crossproduct normalization, then recomputes the first row as the cross product of the second and third, say, for a total of 23 multiplies, 10 additions, and 2 square roots. Neither of the last two methods uses information from one matrix row.

The best normalization procedure by definition should return the orthonormal matrix closest to the given one; the above procedures are approximations to this ideal. One idea might be to find the principal axis direction (the eigenvector corresponding to the largest real eigenvalue) and call that the normalized rotation axis. An exact rotation matrix M has one eigenvalue of unity and two conjugate complex ones. What then of the rotation angle? In the errorless matrix cos(φ) = (Trace(M) - 1)/2, and the trace is invariant under the eigenvalue calculation, so the angle can be calculated from three of the matrix elements. But does this choice minimize any reasonable error in rotation space? Indeed, what error should be minimized? The work here uses the no- and one-crossproduct forms, tries some empirical investigation in Section 4.7, and leaves the large question of optimal normalization as a research issue. Exercise: find the eigenvalues and eigenvectors of the matrix in eq. (8).
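The one crossproduct method, for instance, might be sketched as follows (illustrative code with hypothetical helper names, not the paper's implementation):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def one_crossproduct(M):
    """Normalize rows 0 and 1; replace row 2 by their cross product.
    Row 2 is then orthogonal to both, but rows 0 and 1 need not be
    orthogonal to each other, as the text notes."""
    r0 = normalize(M[0])
    r1 = normalize(M[1])
    return [r0, r1, cross(r0, r1)]
```

Applied to a slightly perturbed rotation matrix, the result has unit-length first and second rows and a third row orthogonal to both; the residual non-orthogonality between the first two rows is exactly what the two crossproduct method goes on to repair.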


Operation               Quaternion             Matrix
Rep → Conic             4*, 1 acos, 1 sin      9*, 6+, 1 acos, 1 sin, 1 √
Conic → Rep             4*, 1 sin, 1 cos       17*, 13+, 1 sin, 1 cos
Rep → Other             13*, 13+               5*, 6+, 1 √
Rep ∘ vector            18*, 12+               9*, 6+
Rep ∘ Rep               16*, 12+               24*, 15+
X, Y, or Z Axis Rot     8*, 4+                 4*, 2+
Normalize               8*, 3+, 1 √            18*, 7+, 2 √

Table 1: Summary of quaternion and matrix operation counts. Rep means the representation of the column heading; Other means that of the other column heading. Applying a rotation to a vector or another rotation is denoted by ∘. Multiplications, additions, and square roots are denoted *, +, √. Trigonometric functions are denoted sin, acos, etc.

3.5 Summary and Discussion

The operation counts are summarized in Table 1. They generally agree with those in [20], with a few differences. Taylor seems to have a quaternion to matrix conversion that takes 9 multiplies and 19 adds (Exercise: find it), and he implements his matrix to quaternion conversion with 3 sines, 3 cosines, and one arctangent instead of a single square root. Taylor normalizes matrix representations by the one crossproduct method. The use of eq. (6) for vector rotation saves 4 multiplies over Taylor's counts. In any event, the differences in costs of the two representations do not seem as dramatic for these basic calculations as for Taylor's more complex robotic trajectory calculations, but iterative operations reveal a difference of a factor of two to three in effort (see Section 4.9). Generally, it may be impossible to predict realistically the extent of the savings possible with one representation over the other without a detailed algorithmic analysis.

4 Numerical Accuracy

4.1 Vector Rotation Tasks, Methods, and Representations

The manipulations of quaternions, matrices, and vectors presented in Section 3 are based on simple building blocks of vector addition, scaling, and dot and cross products. The numbers involved in the representations are generally well behaved, with vector components and trigonometric functions lying in the interval [0, 1] and rotation magnitudes and inverse trigonometric functions often in the interval [-2π, 2π]. Neither representation requires algorithms with obvious numerically dangerous operations like dividing by small differences.

It is often alleged that one representation is more numerically accurate than the other. However, it is not easy to see how to quantify this claim analytically. The matrix representation is redundant, representing the three parameters with nine numbers: does this mean it is more vulnerable to numerical error, or less? Given that the algorithms are straightforward and simple, do their numerical results depend more on their structure or on the data they operate on? The individual numbers (often components of unit 3-vectors) have nonlinear constraints that may affect the numerical performance. Perhaps some of the alleged differences are thought to arise in microcomputer implementations with small word size; how does accuracy vary with the number of significant digits?

In general, the lack of analytical leverage on these questions motivated the empirical study reported in this section. Herein we compare representations for several common vector rotation tasks: rotation of a vector, iterated identical rotations of a vector, and iterated different (random) rotations of a vector. Error is measured either as an error vector or as the amount that the resulting vector's length and direction differ from the correct values.

More precisely, a correct answer is determined by computation at full (double) precision. When the rotation task is to iterate the same rotation, the correct answer can be (and is) calculated for the final cumulative rotation rather than calculated by iteration. The application of a rotation to a vector is accomplished by two methods: by vector-matrix multiplication as in eq. (8), and by Horn's quaternion-vector formula, eq. (6). The iteration of a rotation is accomplished in three ways: quaternion-quaternion multiplication, matrix-matrix multiplication, and the quaternion-vector formula. Thus the results of iterated rotation are represented as a quaternion, matrix, and (cumulatively rotated) vector, respectively.

The experiments use a variable precision implementation of all the relevant real, vector, quaternion, and matrix operations, offering a runtime choice of the number of bits of mantissa precision and the approximation type (rounding or truncation). The most basic operations are approximation, which rounds or truncates the mantissa of a C-language double (double-precision real floating point number) to the required precision, and arithmetic operations that approximate the operands, operate (at full precision), and approximate the result. From these basics are built vector cross and dot products, quaternion and matrix products, normalization routines, etc.
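The mantissa approximation operation can be sketched as follows (an illustrative reconstruction, not the paper's C code), exposing the mantissa with frexp and reassembling with ldexp:

```python
import math

def approximate(x, bits, mode="truncate"):
    """Round or truncate the mantissa of a double to `bits` bits."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)             # x = m * 2**e with 0.5 <= |m| < 1
    scale = 1 << bits
    q = math.trunc(m * scale) if mode == "truncate" else round(m * scale)
    return math.ldexp(q / scale, e)
```

Truncation never increases the magnitude of a positive number, which is the source of the systematic shrinking effects discussed below; rounding halves the worst-case error and removes the bias.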

Vectors are specified or created randomly as (x, y, z) unit vectors at the highest machine precision. Rotations are specified or created randomly in the conic (φ, n) (rotation angle, rotation axis direction) representation, also at highest precision. Random unit vectors for input or rotation axes are not uniformly distributed in orientation space, being generated by normalizing three uniform variates to produce a unit vector. For random (φ, n) rotations, a random vector is produced for n and a fourth uniform variate is normalized to the range [-π, π]. Exercise: what do these distributions of vectors and rotations look like in orientation and rotation space?

The correct answer is computed at highest precision by applying a single rotation (by Nφ for an N-long iteration of the same rotation φ) around the rotation axis to the input vector, using eq. (6). Then, working at the lower experimental precision, the conic input representation is converted to quaternion and matrix representation by eqs. (2) and (7). Next the experimental rotational task is carried out at the experimental lower precision. Finally, the error analysis consists of highest precision comparison of the low precision result with the high precision correct answer.

When a reduced-precision vector v′ is obtained, its error is determined by comparing it to the full-precision, non-iteratively rotated vector w. The error vector is v′ - w; the length error ΔL and direction error ΔD are

ΔL = | ‖v′‖ - ‖w‖ |    (13)

ΔD = arccos(v̂′ · ŵ)    (14)

where v̂ denotes the unit vector in the direction of v.

In all the work here ‖v‖ = 1. Directional errors are reported in radians, length errors in units of the (unit) length of the vector to be transformed. The experiments are reported below in a rather discursive style; along the way a few side issues are addressed, and conclusions are drawn that affect the later choices of experimental parameters.

4.2 Analytic Approaches

Let us begin to consider the conversion of conic (φ, n) rotation parameters to quaternion and matrix form. For simplicity we might assume that θ = φ/2 is known exactly. Assume that at n bits of precision a number x is represented to a maximum additive absolute error of δ = 2⁻ⁿ, which is a relative error of δ/x. Ignoring second order terms, the product of two imprecise numbers (x + δ)(y + ε) has additive relative error of δ/x + ε/y. When looking up or computing a function, the error to first order in f(x + δ) is δf′(x); thus the error in computing the cosine of an errorful number x + δ is sin(x)δ. If, as in the implementation of this paper, the function is computed initially to high precision and then approximated, the further approximation may add another error of δ, for a total absolute error of (1 + sin(x))δ. Of course the function computation could be set up to eliminate the last approximation error if the working precision were known in advance.

These considerations are a start at constructing formulae for the relative error in the conversion of conic to quaternion and matrix form via eqs. (2) and (7). However, missing is the constraint that n be a unit vector (that the matrix be orthonormal); without this constraint the predictions of errors will be pessimistic. Interacting with the constraint is the usual simplistic assumption that the worst happens in every operation (i.e., errors never cancel). This usual assumption makes analysis possible but potentially irrelevant. In fact, errors in the matrix and quaternion components after conversion depend closely on the values of the inputs and empirically do not exhibit predictable, data-independent structure.

Turning to a slightly grosser examination of the structure of rotation operations, consider the implementation used in the work below for the two competing equations for quaternion-mediated rotation (eq. (6)) and matrix-mediated rotation (multiplication of a vector by the matrix in eq. (7)). The quaternion formula is the sum of three vectors (the original and two cross products), whereas the matrix formula is a matrix-vector product. Going through the code for this operation, we find that the x′ component of the result vector (x′, y′, z′) after applying the quaternion formula for quaternion (λ, Λx, Λy, Λz) to the input vector (x, y, z) is computed by operations applied at their usual precedence in the following formula:

x′ = x + 2λ(Λy z - Λz y) - 2Λz(Λz x - Λx z) + 2Λy(Λx y - Λy x)    (15)

and similarly for the y′ and z′ components. Here (x, y, z) is a unit vector, Λ has norm at most one, and λ is a cosine.

The operations implementing the corresponding component by the matrix multiplication formula are

x′ = (1 - 2(Λy² + Λz²))x + 2(ΛxΛy - λΛz)y + 2(ΛxΛz + λΛy)z    (16)

and similarly for the y′ and z′ components.

There is nothing in these two formulae to predict the significant statistical difference (especially under the more predictable truncation approximation) in the performance of these two methods (see, e.g., Fig. 6).

Pressing on, consider a still grosser description of the structure. Looking at eqs. (4) and (8), say that the error in quaternion and matrix (caused perhaps by truncation) is described as a scaling of the whole quaternion or matrix (by s = 1 - kε for some constant k, if the truncation error is ε). In that case the matrix-transformed vector is in error by a factor of s, or an additive error of kε, and the quaternion-transformed vector by a factor of s² (because of the conjugation), or an additive error of 2kε (to first order). Iterated matrix or quaternion self-multiplication exponentially damps the individual matrix and quaternion elements, which otherwise vary sinusoidally as the iteration proceeds. Under iterated application of the scaled matrix, the direction of the resulting vector should be unaffected, but its length should shrink, decaying exponentially to zero.

In fact, this level of explanation suffices to explain the data resulting from a single rotation (Fig. 6): the quaternion errors are approximately twice the matrix errors for a single rotation under truncation. It also correctly predicts the behavior of matrix and quaternion elements under iteration of the same rotation (Fig. 3). It correctly predicts the form of error for the matrix transformation under iteration of the same rotation (Fig. 8), and forms part of the explanation for quaternion transformation error (see Section 4.4). Exercise: In Fig. 3 the period of the matrix element is half that of the quaternion element. This result is expected from inspection of eqs. (8) and (2), but looks inconsistent with eq. (7). Is it?

4.3 Single Rotations

First, some sample single rotations were applied:

v′ = R ∘ v    (17)


Figure 3 About Here

Figure 3: The components of the quaternion and matrix are scaled and shifted sinusoids. Here, representative quaternion and matrix element amplitudes decay exponentially through time under iterative rotation with truncation. Under rounding the cumulative decay is negligible. Dotted line: matrix element [0][0]. Solid line: a quaternion vector element.

Figure 4 About Here

Figure 4: Rotating the unit vector in direction [1, 1, -1] around the axis [1, 1, 1] by varying amounts: (a) 0.1 rad, (b) 1.0 rad, (c) 1.7 rad. The errors are plotted logarithmically against the number of bits of mantissa precision, under truncation. Hollow and dark squares: direction error with quaternion and matrix representations. Star and cross: length error with quaternion and matrix representations. Variations in error magnitude and its rank ordering result from varying rotation magnitude. The largest error at 22 bits precision is 0.0001 (the smallest representable number is about 10⁻⁹).

Fig. 4 shows the results, comparing the conic (eq. (6)) with matrix rotation computation under truncation. In general, increasing the rotation magnitude does not increase error, the error magnitudes are not predictably ordered, and the error magnitude is at worst 10⁻⁶ with 22 bits of mantissa precision.

Next, 100 individual random rotation and random vector pairs were generated, with the rotation applied to the vector at a single precision (here, as throughout the paper until Section 4.8, 10 bits) under both truncation (case RaTr) and rounding (case RaRd). (Statistics for several cases are collected in Table 2 in Section 4.6.)

The results are shown in two different styles in Figs. 5, 6, and 7. Fig. 5 displays length and direction errors of the 100 trials under truncation, indexed along the X axis although they are each independent. As predicted in Section 4.2, the matrix representation outperforms quaternions by a factor of about two. The length errors under truncation show the nonzero mean one would expect. The same data are presented in Fig. 6 as two projections of the three-dimensional error vectors, all oriented so that the true answer lies along the Z axis facing down, with its point at the origin, aiming in the -Z direction. The X-Y projection shows how the direction errors arise, and the X-Z projection shows how the length errors arise (with the approximate vectors all being too short). Exercise: Construct the rotation that points a given (x, y, z) vector along the Z axis, (a) from rotations about the (X, Y, Z) axes, (b) using quaternions.

Figure 5 About Here

Figure 5: Case RaTr. (a) Length errors of 100 different random (rotation, vector) pairs at 10 bits precision under truncation. Circles plot the quaternion-vector errors, X's the matrix-vector errors. (b) As for (a), but direction errors.


[Figures 1, 2, and 3 appear here.]

Figure 6 About Here

Figure 6: The same case and data as the previous figure, plotted as error vectors, where the correct answer vector points to the origin and lies along the positive Z axis (see text). (a) Error vectors projected onto the X-Z plane. Circles plot the quaternion-vector errors, X's the matrix-vector errors. (b) As for (a), but projection onto the X-Y plane.

Figure 7 About Here

Figure 7: Case RaRd. (a), (b): As the previous figure, with the same input rotations and vectors, but with operations under rounding to 10 bits of precision.

Fig. 7 is similar to Fig. 6 except that rounding is employed (note the scale change). Not only does the length error bias disappear, as expected, but the difference between the methods decreases, with the matrix implementation maintaining a slight advantage.

4.4 Iterated Rotations and Normalization

The second experiment is to iterate the same small rotation of φ around a particular direction, using three rotation composition methods. Quaternion multiplication and matrix multiplication produce iterated quaternion and matrix rotation representations; at the end of the iterations the resulting rotation representation is applied to the input vector once to determine the rotated vector. A third method, called the vector method here, iteratively applies the identical incremental rotation to the vector at each iterative step, preserving only the rotated vector. There are thus two cases (quaternion and matrix) of composing rotation representations and one case (vector) of composing the application of the rotation to the vector. Previous results indicate that the matrix method is both more accurate and more efficient for applying a single rotation, and empirically the results are much better with this choice as well, so rotations are implemented in the vector method by matrix-vector multiplication.
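At full precision the composition methods agree. The quaternion and vector methods, for example, might be sketched like this (illustrative code; for brevity both use the quaternion-vector formula of eq. (6), whereas the paper's vector method uses matrix-vector multiplication):

```python
import math

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def qmul(q1, q2):
    """Quaternion product, eq. (3)."""
    l1, A = q1
    l2, B = q2
    AxB = cross(A, B)
    return (l1 * l2 - sum(a * b for a, b in zip(A, B)),
            tuple(l1 * B[i] + l2 * A[i] + AxB[i] for i in range(3)))

def qrot(q, r):
    """Apply a rotation quaternion to a vector by eq. (6)."""
    lam, L = q
    Lxr = cross(L, r)
    t = cross(Lxr, L)
    return tuple(r[i] + 2.0 * (lam * Lxr[i] - t[i]) for i in range(3))

def step_quat(phi, n):
    return (math.cos(phi / 2), tuple(math.sin(phi / 2) * c for c in n))

def iterate_quaternion_method(phi, n, v, steps):
    """Compose the step quaternion `steps` times, then apply once."""
    q = (1.0, (0.0, 0.0, 0.0))
    for _ in range(steps):
        q = qmul(step_quat(phi, n), q)
    return qrot(q, v)

def iterate_vector_method(phi, n, v, steps):
    """Apply the incremental rotation to the vector at every step."""
    q = step_quat(phi, n)
    for _ in range(steps):
        v = qrot(q, v)
    return v
```

The experiments below run exactly this kind of loop at reduced mantissa precision, where the two methods diverge in the ways the figures show.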

These experiments were carried out under both truncation and rounding. Truncation is sometimes used to get at worst case performance in a numerical analysis sense. Such an approach might be useful in analysis of a non-iterative computational scheme (such as a single rotation), but the cumulative scaling effects described in Section 4.2 overwhelm numerical precision effects. The errors engendered through iterative truncation turn out to be qualitatively similar to those that arise under rounding. These results are thus good for geometric intuition about the methods and illustrate the importance of normalization. Fig. 8(a) and (b) show the (shocking) errors that result if no normalization takes place during the iterated rotation composition and if the resulting non-normalized product quaternion or matrix rotation is applied to the input vector. This is case IS0 (Iterative Sequence, 0 normalizations). Without normalization, all the representations show cumulative errors that are easy to understand.


[Figures 5(a), 5(b), 6(a), and 6(b) appear here.]

Figure 8 About Here

Figure 8: Case IS0. Iterative transformation of the vector [1, 1, -1] about the axis [1, 1, 1] by 1 rad, 200 times. Calculations done with ten bits of mantissa precision under truncation; no normalization of the final representation before applying it to the vector. This choice of input parameters results in a representative performance, but not a normative one. (a) Result vector error projected on the X-Z plane. Solid line: vector representation; dotted line: matrix; dashed line: quaternion. The matrix errors are a very tight helix up the Z axis, with the vector errors only slightly larger. (b) As for (a), but showing the X-Y projection. The matrix and vector errors project to the small dark blob at the origin.

First consider the vector representation. It is acted on at each iteration by an identical but errorful transform of eq. (6). We see it in both views spiraling away from the correct initial value, its length shrinking and its direction error increasing. The performance of the vector method might be improved by a factor of two on average by using the matrix transformation at every step (considering the arguments in Section 4.2 and the results shown in Fig. 6). Another important aspect of this method is its variability in performance for different rotations, arising from the variability in the accuracy of the first calculated rotation (again, Fig. 6).

As predicted in Section 4.2, the matrix representation is affected in its scaling property but not significantly in its rotational properties: there is effectively no directional error, but the result shrinks to less than half its original length in 200 iterations.

The quaternion representation acts, using eq. (6), by adding two cross product vectors to the original input vector; as the quaternion components become increasingly systematically errorful, they affect the result in the interesting way shown in the figure. One effect is the decaying magnitude of the quaternion. The other is that as the cumulative rotation φ increases from zero, the sin(φ/2) weighting of Λ, the vector part of the quaternion, starts at zero and returns to zero with a period of 2π/φ̃ iterations, where φ̃ is the truncated value of the incremental rotation angle. Thus with that period the effective rotation error varies sinusoidally and returns to zero as the magnitude of Λ does.

The same data are plotted for 400 iterations, this time in Fig. 9. First consider length errors. This presentation emphasizes the exponential decay of the answer's length to zero with the matrix and vector representations. As the quaternion's magnitude ultimately decays to zero, the result vector will be unaffected by the rotation and will remain equal to the input vector, so its length error stabilizes at zero. The short period sinusoidal character of the length error arises from the arguments of the last paragraph. The initially increasing envelope arises from the increasing inaccuracy caused by the truncation; the ultimate exponential decay of the envelope arises from the shrinking of the quaternion components.

Considering direction errors the matrix and conic representations have (locally) linshyearly increasing errors as they spiral away from the true answer direction (The linear increase cannot be maintained forever on the orientation sphere of course) The quatershy

Figure 7(a) Figure 7(b)

Figure 8(a) Figure 8(b)

Figure 9 About Here

Figure 9: Case IS0 for 400 iterations (see text). (a) Length error of result vector. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. The vector and matrix errors coincide. (b) As for (a), but showing direction error for the resulting vector. The vector and matrix errors are very close to zero.

Figure 10 About Here

Figure 10: Case IS0 under rounding instead of truncation. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. Though the forms of the errors are similar to the truncation case, their relative magnitudes are more equal. (a) Result vector projected on the X-Z plane. (b) As for (a), but showing the X-Y projection.

The quaternion method's direction error ultimately becomes purely periodic as the answer vector becomes static. The direction error as a function of iteration number t and angle r between the rotation axis and the input vector is then of the form ((r sin t)² + (r − r cos t)²)^(1/2), the length of a circular chord from a fixed point on the circle to a point moving around the circle at constant velocity. Before that time the direction error exhibits a more complex two-humped form; this arises from the two separate cross product components of the vector increment in eq. (6). Exercise: would there be any difference in direction and length errors if the rotation were accomplished by conjugation (eq. (4)) instead of eq. (6)?
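Algebraically the chord expression above reduces to 2r sin(t/2), which makes the claimed periodicity explicit. A quick numerical check of that identity (our own sketch, not from the paper):

```python
import math

def chord(r, t):
    # Distance from a fixed point on a circle of radius r to a point that
    # has moved t radians around it: ((r sin t)^2 + (r - r cos t)^2)^(1/2).
    return math.sqrt((r * math.sin(t)) ** 2 + (r - r * math.cos(t)) ** 2)

# The closed form 2 r |sin(t/2)| is algebraically identical, so the
# direction error is periodic in t with period 2 pi.
for t in (0.3, 1.0, 2.5, 5.0):
    assert abs(chord(2.0, t) - 2 * 2.0 * abs(math.sin(t / 2))) < 1e-12
```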

Next in this sequence, the last two demonstrations are repeated under rounding, not truncation. The expectation is that, due to the better approximation and possible error cancellations arising from high and low approximations, the errors in the representations will be less, but that systematic errors will still exist. The results (Figs. 10, 11) bear this out; the errors often seem like much smaller, noisy versions of the truncation errors. In Fig. 11 the quaternion errors display the fine two-humped structure longer, since it is not overwhelmed as quickly by shrinking quaternion magnitude. Rounding has a dramatically good effect on the systematic shrinking of matrices and quaternions under iterative self multiplication.

So far we have, purely for intuition, dealt with the pathological case in which no normalization of rotation representations is performed. The arguably pathological case of approximation through truncation yields insight about the structure of systematic approximation errors in an iterative scheme. From here on we abandon truncation approximation (except in Section 4.7).

Figure 11 About Here

Figure 11: Data of the previous figure, showing length and direction errors. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of result vector. (b) Direction error.

Figure 9(a) Figure 9(b)

Figure 10(a) Figure 10(b)

Figure 12 About Here

Figure 12: Case IS1. Rotating the vector (1, 1, -1) by 1 radian increments about the axis (1, 1, 1) with 10 bits precision under rounding, normalizing the quaternion and matrix representations prior to application to the input vector, and normalizing the final vector representation to its initial length. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. The vector representation has lower variance, and the quaternion exhibits marked periodicity. Only the first 100 iterations are shown; the rest are similar. (b) Direction error.

We now look at two different styles of normalization. In the first, no normalization is performed on intermediate representations, but the final quaternion, matrix, and vector are normalized. In the second, the intermediate quaternion, matrix, and vector are normalized at each step. Normalization of the matrix and quaternion is not dependent on the input data, but the length of the input vector must be computed and remembered for the vector representation to be normalized. This fact might be significant if many vectors are being treated at once.

Fig. 12(a) shows the length error when the final quaternion and matrix are normalized prior to application to the input vector, and the incrementally generated vector representation is normalized to its original length (here known to be unity). This case is called IS1, for Iterative Sequence, 1 normalization. The matrix normalization used is the one crossproduct form; when only one normalization is being performed, the three types of normalization perform virtually identically. The computations were performed under rounding to the usual ten bits of precision. If its initial length is not remembered, the vector representation cannot be normalized, and its error is the same as the non-normalized case of Fig. 11, showing a linear error accumulation.
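The style of these reduced-precision experiments can be sketched by rounding every intermediate mantissa to 10 bits. The following is our own reconstruction for illustration (helper names such as `round_to_bits` are ours, and the paper's actual simulation may differ in detail):

```python
import math

def round_to_bits(x, bits=10):
    # Round the mantissa of x to the given number of bits -- a simple model
    # of reduced-precision arithmetic (an approximation; the original
    # simulation may have differed in detail).
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)                 # x = m * 2**e with 0.5 <= |m| < 1
    return math.ldexp(round(m * 2 ** bits) / 2 ** bits, e)

def rot_x(phi, bits):
    # Rotation about X, eq. (8), with low-precision entries.
    c = round_to_bits(math.cos(phi), bits)
    s = round_to_bits(math.sin(phi), bits)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def apply(M, v, bits):
    # Matrix-vector product with every arithmetic result re-rounded.
    return [round_to_bits(sum(round_to_bits(M[i][j] * v[j], bits)
                              for j in range(3)), bits) for i in range(3)]

# Iterate the same low-precision rotation and observe the length error.
v = [0.0, 1.0, 0.0]
M = rot_x(1.0, 10)
for _ in range(200):
    v = apply(M, v, 10)
length_error = math.sqrt(sum(x * x for x in v)) - 1.0
```

At full double precision the same loop shows no error to many decimal places; at 10 bits the drift is plainly visible.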

Fig. 12(b) compares direction error for the three representations, normalized as in Fig. 12(a). The comparative performance here is certainly not so clear cut, and the error accumulations resemble random walks. Alternatively, the performance could depend more on the particular choice of input data (rotation and input vector) than on numerical properties of the methods.

Finally, a natural question concerns the effects of normalization applied after every rotation composition. This is case ISN (N normalizations). Fig. 13 shows the results for length and direction. As might be expected, for length error the normalized vector has the best performance (presumably as accurate as machine precision allows). The quaternion result has a systematically negative mean, higher variance, and periodicity. The matrix length error displayed in Fig. 13(a) results from the no crossproduct form of matrix normalization. The trend is not an illusion; carried out to 400 iterations, the length error shows a marked systematic increase in the negative direction. On the other hand, iteration of the one crossproduct normalization resulted in a factor of two worsening of the magnitude of the direction errors for the matrix representation. Repeated normalizations seem to have adversely affected the accuracy of the vector representation after many

Figure 11(a) Figure 11(b)

Figure 12(a) Figure 12(b)

Figure 13 About Here

Figure 13: Case ISN. As for the previous figure, but normalizing the rotation representation at each iteration, using no crossproduct matrix normalization, which improved direction errors but worsened length errors over the one crossproduct form. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. (b) Direction error.

Figure 14 About Here

Figure 14: Case RS1. As for case IS1 (no normalization of intermediate representations), but using a sequence of random rotations. There is no difference in performance with the different matrix normalizations; no crossproduct matrix normalization is shown. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector (first 100 values; the rest are similar). (b) Direction error. The rather confusing results at least have some clear qualitative differences.

iterations, but more work is needed to tell if this is a repeatable effect. In any event, it seems clear that the repeated normalizations are certainly not improving matters much in this case over case IS1.

The main conclusion here is that the cost of normalizing quaternion or matrix representations after every rotation in a repeated sequence may not be repaid in results, and in fact may be actively harmful. The vector representation seems to do best of the three on length error and worst on direction error. More work (analytic or experimental) would be needed to clarify the performance ranks, or even to recommend or disqualify a representation for this task.

4.5 Concatenated Random Rotations

Here a sequence of random rotations is generated as described earlier, analogous to the sequence of identical rotations in the last experiment. They are applied cumulatively to an input vector. In each case computation is with 10 bits precision under rounding. The cases of a single final normalization and of normalizing every intermediate representation (called cases RS1 and RSN) were investigated, with both the no- and one crossproduct forms. The results, comparable to those in cases IS1 and ISN (Figs. 12 and 13), are shown in Figs. 14 and 15.

The results indicate that the vector representation remains better than the other two for length errors, but for direction errors the results are unclear. It is hard to draw clear distinctions either between the normalization methods or the representations. Again the form of matrix normalization has some annoying effects, which motivate the next section.

Figure 13(a) Figure 13(b)

Figure 14(a) Figure 14(b)

Figure 15 About Here

Figure 15: Case RSN. As for case RS1, but normalizing all intermediate representations. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error using one crossproduct matrix normalization. (b) Direction error using the one crossproduct form. (c) Direction error using the no crossproduct form.

Case      |   μ-vec   |   μ-mat   |  μ-quat   |  σ-vec   |  σ-mat   |  σ-quat
RaTr   L  |     -     |  0.006225 |  0.005026 |    -     | 0.001181 | 0.003369
       D  |     -     |  0.001787 |  0.004578 |    -     | 0.001094 | 0.002307
RaRd   L  |     -     | -0.000310 | -0.000225 |    -     | 0.001087 | 0.001263
       D  |     -     |  0.000965 |  0.001180 |    -     | 0.000542 | 0.000759
IS1    L  |  0.000276 | -0.000260 | -0.000262 | 0.000466 | 0.000857 | 0.001648
(100)  D  |  0.003694 |  0.004063 |  0.001435 | 0.001705 | 0.002298 | 0.001123
ISN    L  |  0.000137 | -0.001772 | -0.000811 | 0.000533 | 0.001106 | 0.001312
(100)  D  |  0.005646 |  0.004844 |  0.001606 | 0.002786 | 0.002872 | 0.001045
RS1    L  | -0.000252 | -0.000162 | -0.000157 | 0.000521 | 0.001612 | 0.002022
(100)  D  |  0.003710 |  0.005854 |  0.006065 | 0.002150 | 0.003686 | 0.006065
RSN    L  |  0.000164 | -0.000602 | -0.001097 | 0.000473 | 0.000703 | 0.001666
(100)  D  |  0.002269 |  0.009830 |  0.005960 | 0.000874 | 0.004230 | 0.002236
       D  |  0.002269 |  0.005187 |  0.005960 | 0.000874 | 0.002248 | 0.002236

Table 2: Statistics for length (L) and direction (D) errors for some of the cases illustrated in this section. Cases RaTr and RaRd are the independent random rotations, IS1 and ISN are the iterative sequences (one normalization, and normalization of each intermediate representation), and RS1 and RSN the random sequences. Only the first 100 data values are used from the iterated rotations, to diminish the effect of longer-term trends. The second RSN line is with no crossproduct normalization (which only affects the matrix representation).

4.6 Statistical Summary

Table 2 summarizes some of the data from the cases so far.

4.7 Matrix Normalization

Some experiments explored the performance of the three types of matrix normalization (no, one, or two cross products) in the context of iterated identical and random rotations.
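The exact definitions of the three normalization forms are given earlier in the paper; the sketch below shows one plausible reading (our assumption, not the author's code), in which cross products are used to restore orthogonality to a drifting rotation matrix:

```python
import math

def norm3(v):
    # Rescale a 3-vector to unit length.
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def normalize_no_cross(M):
    # "No crossproduct" (hypothetical form): rescale each row to unit length.
    return [norm3(row) for row in M]

def normalize_one_cross(M):
    # "One crossproduct" (hypothetical form): rescale two rows, rebuild the
    # third as their cross product so it is exactly orthogonal to them.
    r0, r1 = norm3(M[0]), norm3(M[1])
    return [r0, r1, cross(r0, r1)]

def normalize_two_cross(M):
    # "Two crossproducts" (hypothetical form): keep row 0, re-derive row 2
    # and then row 1 by cross products -- a Gram-Schmidt-like scheme that
    # yields an exactly orthonormal matrix.
    r0 = norm3(M[0])
    r2 = norm3(cross(r0, M[1]))
    r1 = cross(r2, r0)
    return [r0, r1, r2]

# Example: re-orthonormalize a slightly perturbed rotation matrix.
M = [[1.0, 0.01, 0.0], [0.0, 1.0, 0.01], [0.0, 0.0, 1.0]]
R = normalize_two_cross(M)
```

The cheaper forms leave more residual non-orthogonality, which is consistent with the length/direction trade-offs reported in this section.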

The experiments consist of two iterated identical rotations (Figs. 16, 17) under rounding, and two iterated random rotation sequences under truncation and rounding. Fig. 18 gives the direction errors for two sequences of random rotations applied to an input

Figure 15(a)

Figure 15(b)

Figure 15(c)

Figure 16 About Here

Figure 16: Iterated identical rotations of the unit vector in direction (1, 1, -1) by 1 radian about the axis (1, 1, 1), computed to 10 bits under rounding. Solid line - no cross products; dotted line - one cross product; dashed line - two cross products. (a) Length errors. (b) Direction errors.

Figure 17 About Here

Figure 17: As for the previous figure, but the rotation is of the unit vector in direction (1, 2, 3) by iterated rotations of 2 radians about the axis (3, -2, -1).

vector. In no case is there significant difference between the performances on the length error at 200 iterations, but the one- or two crossproduct forms significantly outperform the no crossproduct form over 400 iterations, since the latter exhibits systematically increasing negative length error and the former do not.

For direction error, few conclusions can be drawn about relative performance under iterated identical rotation sequences. It is perhaps surprising that the no crossproduct form usually performs here at least as well as the other two over a small number of iterations, and that for random sequences there is no clear winner between normalization types, even approximating with truncation. Thus the symptoms noticed in the last two sections, involving long-term relative effects of the normalization form, were not noticed over 200 iterations. The one- and two crossproduct forms may be more stable over long iteration, but the issue of optimal normalization remains open.

4.8 Average Error of Concatenated Random Rotations

Which representation is best, or worst? To wind up this overlong exercise in trying to derive structure from noise, let us consider average behavior for the task of random rotation sequences (perhaps of more general interest than iterated identical rotations). Technical decisions on normalization are based on the conclusions of previous sections: approximation is by rounding, with normalization only after the last rotation. Matrix normalization is done with the one crossproduct method. Each run is a single experiment like that of Section 4.5, only performed (with the same data) at a number of different precisions.

Such a set of runs yields length and direction error surfaces plotted against the precision

Figure 18 About Here

Figure 18: Direction errors only, for a sequence of random rotations applied to an input vector. Legend as in the previous two figures. (a) Truncation to 10 bits. (b) Rounding to 10 bits.

Figure 16(a) Figure 16(b)

Figure 17(a) Figure 17(b)

Figure 18(a) Figure 18(b)

Figure 19(a) Figure 19(b)

Figure 19(c) Figure 19(d)

Rep       | Repeated Rot                | Random Seq
Vec(Mat)  | (9N+17)*, (6N+13)+, 1 sc    | 26N*, 19N+, N sc
Vec(Quat) | (18N+4)*, (12N)+, 1 sc      | 22N*, 12N+, N sc
Quat      | (16N+22)*, (12N+12)+, 1 sc  | (20N+18)*, (12N+12)+, N sc
Matrix    | (27N+26)*, (18N+19)+, 1 sc  | (44N+9)*, (31N+6)+, N sc

Table 4: Operation counts for N-long rotation sequences. The long form of matrix multiplication is used here. Vec(Mat) is the vector representation with intermediate transformation by matrices, Vec(Quat) with transformation by the quaternion method. Multiplications, additions, and sine-cosine pairs are denoted by *, +, and sc.

matrix transformation), since matrix-vector multiplication costs one third of matrix-matrix multiplication.

4.10 Conclusions and Questions

The conclusions we can draw at this point are the following:

1. Numerical simulation of variable precision arithmetic has yielded some tentative conclusions, but in many interesting cases they are rather delicate and depend on niceties of the implementation. Accurate performance predictions may only be possible with faithful simulation of the particular hardware and software at issue.

2. Practical differences in numerical accuracy tend to disappear at reasonable precisions. After 200 iterations of a rotation, 53 bits of mantissa precision (the double representation in the C language) yields no error to 10 decimal places. With 24 bits (the float representation), effects were noticeable in the 5th to 7th decimal place with all methods.

3. Normalization is vital before quaternion or matrix representations are applied to vectors, but is not as important during the composition of representations, and the extra numerical computation of frequent normalization may actually decrease accuracy in the final result.

4. The vector (with matrix transformation) and matrix representations for rotation sequences seem to outperform the quaternion for accuracy.

5. The vector (with matrix transformation) representation for rotation sequences, with a single normalization of vector length at the end, is most computationally efficient and has least length error on the average.

6. Averaged over many trials, the difference in accuracy of the three methods amounts only to about a single bit of precision for a small number of rotation compositions (say 25 or fewer).

7. Matrices are the least computationally efficient on rotation sequences.



There are also some remaining basic questions:

1. What is a reasonable analytic approach to the numerical accuracy problem for interesting rotational tasks (say, sequences of random rotations under rounding)?

2. How should optimal matrix or quaternion normalization methods be defined and constructed?

5 Discussion

The area of robust computational geometry is becoming popular, with an annual international conference and increasingly sophisticated theoretical formulations. Very little of that work (if any) is related to rotation algorithms. Symbolic algebra systems can help with analysis [4]. A good treatment of computer arithmetic and rounding appears in [15], with an application to detecting line intersection. Serious qualitative representational problems arise when issues of topology (line connectivity) are at stake, and much of the work is on computing line intersection accurately with finite precision arithmetic [19]. For instance, Milenkovic's analysis is based on facts about line intersection, such as the number of extra bits that are necessary to guarantee a specific precision of answer. Some of these facts are surprising: it takes at least three times the input precision to calculate a line intersection location to the same precision [11]. Robust polygon and polyhedron intersection algorithms, as well as various line-sweeping algorithms, are based on accurate line intersection algorithms [8].

Several points of comparison between rotation parameterizations and representations have been raised in the preceding sections. It is perhaps worth repeating that little evidence for dramatic differences in numerical accuracy has emerged, though the evidence might tend to favor matrices slightly and to condemn the vector representation.

One of the common applications of rotational transforms is to affect several vectors at once, either in the context of transforming a set of points around (as in graphics) or in handling coordinate systems represented as n points (n - 1 of them points at infinity representing axis directions, and one representing the origin of coordinates, in a homogeneous coordinate system). In the first case the repeated application of the same transform will be more economical if the work is done in the matrix representation. In the second the same may well be true, but of course the whole point of homogeneous coordinates is to make the projective, a fortiori the affine, transforms into linear ones implemented with matrix multiplication. This means, for example, that coordinate systems, rigid transformations including translation and rotation, and point projective transforms (simple camera models) can all be represented as matrices and composed with matrix multiplication, and that the operation of such transforms on vectors is simply matrix-vector multiplication. In this context homogeneous coordinates by definition require the matrix representation.

If the rotation at issue is a general one (say, specified in conic (φ, n) parameters), then conversion either to the quaternion or matrix form is easy. Thus the representations in


this case do not dictate the semantics, or how one wants to think about the rotation. If the rotation is specified in terms of rotations about (X, Y, Z) coordinate axes, then the matrix representation is easily and robustly constructed.

Real-time applications involving concatenating rotations and applying them to a small number of vectors (perhaps one: the position of a robot hand) need detailed analysis to see which representation is the most efficient. Similarly for more complex geometric calculations.

A common application of rotation representations in vision is to express the motion of the observer. Often the rotational motions, for mechanical or mathematical reasons, are expressed as rotations about some fixed axes, which in practice are usually orthogonal. Examples are the motions induced by pan and tilt camera platforms, or the expression of egomotion in terms of rotations about the observer's local X, Y, Z axes. In this case the matrix representation is the clear choice, to maximize efficiency and intuition.

Reconstructing the rotation that transformed known points is a useful vision task that requires fitting data to a model of the transformation. The straightforward route, followed since Roberts' thesis [17], is to do a linear least squares fit to the transform's matrix representation. This approach is perhaps risky, since the matrix elements are not independent and the error being minimized thus may not be the one desired. The alternative of using conic parameters has been successfully used [12] and may well be a better idea. Recently, more sophisticated Kalman filter methods have been used to derive motion from point matches through time [5]. Model-matching with constraints allows a model to be matched to a class of objects, and promising approaches are emerging that use hybrid methods of symbolic algebra (implementing polynomial triangulation to get explicit solutions for known constraints) followed by root-finding and sophisticated minimization techniques to search for minimum-error solutions [13,14].

A less common but important set of applications is to reason about and manipulate rotations themselves. For example, one may want to sample rotation space evenly [9], or to create smoothly interpolated paths in rotation space between a given set of rotations [16]. In these cases the Euler-Rodrigues parameters are the most convenient.

Another example is the use, since 1932, of quaternions in the neurophysiological literature to describe eye kinematics [22]. Models of human three-dimensional oculomotor performance, especially for the vestibulo-ocular reflex and feedback models of saccades, involve computations on rotations that are much more stable when carried out in conic or Euler-Rodrigues parameters using quaternion arithmetic. Subtractive feedback laws adequate for simple (left or right) eye movements do not generalize to arbitrary rotational axes. The quaternion model uses multiplicative feedback, explains human performance better than similar models based on rotation matrices, and leads to explicit predictions, such as a four-channel saccade computation [21].


6 Bibliography

[1] S. L. Altmann. Rotations, Quaternions, and Double Groups. Clarendon Press, Oxford, 1986.

[2] C. M. Brown. Gaze controls with interactions and delays. In DARPA Image Understanding Workshop (submitted to IEEE-TSMC), pages 200-218, May 1989.

[3] C. M. Brown. Prediction in gaze and saccade control. Technical Report OUEL 1771/89 (UR TR-295), Oxford University Dept. Engineering Science (U. Rochester Comp. Sci. Dept.), May 1989.

[4] C. I. Connolly, D. Kapur, J. L. Mundy, and R. Weiss. GeoMeter: A system for modeling and algebraic manipulation. In Proceedings DARPA Image Understanding Workshop, pages 797-804, May 1989.

[5] O. D. Faugeras, F. Lustman, and G. Toscani. Motion and structure from motion from point and line matches. In Proceedings International Conference on Computer Vision, pages 25-34, June 1987.

[6] J. D. Foley and A. Van Dam. Fundamentals of Interactive Computer Graphics. Addison-Wesley, 1982.

[7] W. R. Hamilton. On quaternions; or on a new system of imaginaries in algebra. Philosophical Magazine, 3rd Ser., 25:489-495, 1844.

[8] C. Hoffmann and J. Hopcroft. Towards implementing robust geometric computations. In Proceedings ACM Symposium on Computational Geometry, 1988.

[9] Berthold K. P. Horn. Robot Vision. MIT Press/McGraw-Hill, 1986.

[10] K. Kanatani. Group Theory in Computer Vision. Springer Verlag, 1989.

[11] V. J. Milenkovic. Robust geometric computations for vision and robotics. In Proceedings DARPA Image Understanding Workshop, pages 764-773, May 1989.

[12] J. L. Mundy. Private communication, August 1989.

[13] J. L. Mundy. The application of symbolic algebra techniques to object modeling. Lecture, University of Oxford, August 1989.

[14] J. L. Mundy. Symbolic representation of object models. In preparation, August 1989.

[15] T. Ottmann, G. Thiemt, and C. Ullrich. Numerical stability of geometric algorithms. In Proceedings ACM Symposium on Computational Geometry, 1987.

[16] K. S. Roberts, G. Bishop, and S. K. Ganapathy. Smooth interpolation of rotational motion. In Computer Vision and Pattern Recognition 1988, pages 724-729, June 1988.

[17] L. G. Roberts. Machine perception of three-dimensional solids. In J. T. Tippett et al., editors, Optical and Electro-Optical Information Processing, pages 159-197. MIT Press, 1968.


[18] O. Rodrigues. Des lois géométriques qui régissent les déplacements d'un système solide dans l'espace, et de la variation des coordonnées provenant de ces déplacements considérés indépendamment des causes qui peuvent les produire. Journal de Mathématiques Pures et Appliquées, 5:380-440, 1840.

[19] D. Salesin, J. Stolfi, and L. Guibas. Epsilon geometry: building robust algorithms from imprecise computations. In Proceedings ACM Symposium on Computational Geometry, 1989.

[20] R. Taylor. Planning and execution of straight line manipulator trajectories. In J. M. Brady, J. M. Hollerbach, T. L. Johnson, T. Lozano-Pérez, and M. T. Mason, editors, Robot Motion: Planning and Control. MIT Press, 1982.

[21] D. Tweed and T. Vilis. Implications of rotational kinematics for the oculomotor system in three dimensions. Journal of Neurophysiology, 58(4):832-849, 1987.

[22] G. Westheimer. Kinematics for the eye. Journal of the Optical Society of America, 47:967-974, 1932.



Figure 1 About Here

Figure 1: By identifying (gluing together) parallel sides in the directions indicated, various more or less familiar surfaces can be formed. The Klein bottle and projective plane cannot be physically realized in three-space. The SO(3) topology is that of a projective plane.

Figure 2 About Here

Figure 2: One consequence of SO(3) being non-simply connected is the inequivalence between the paths A → B in direction d1 and A → B in direction d2. Any two paths between two points on a simply-connected surface can be shrunk to be equivalent, but that is not the case on the cylinder shown here, or on the projective plane.

the direction vector. If the entire 3-sphere is used to represent rotations, this problem disappears, with (θ, n) being followed by (θ + ε, n). However, this same rotation is still also represented by the point (2π − θ − ε, −n), so the isomorphism is lost. The special cases created by potentially duplicated representation complicate algorithms and symbolic manipulations. For related reasons, representations of straight lines are plagued with the same problems (an observation due to Joe Mundy).

2 Basic Quaternion and Matrix Formulae

2.1 Quaternion Representation

Quaternions are well discussed in the mathematical and robotics literature; an accessible reference is [9], though there are several others. This brief outline is restricted to those properties most directly related to our central questions. Quaternions embody the Euler-Rodrigues parameterization of rotations directly, and thus are closely related to the conic representation of rotation.

If the Euler-Rodrigues parameters of a rotation are (λ, Λ), the quaternion that represents the rotation is a 4-tuple, best thought of as a scalar and a vector part:

Q = [λ, Λ]    (1)

Actually the vector part is a pseudovector, transforming to itself under inversion like the cross product, but that distinction will not bother us.

If a rotation in terms of its conic parameters is R(φ, n), the corresponding quaternion is

Q = [cos(φ/2), sin(φ/2) n].    (2)

The multiplication rule for quaternions is

[λ1, Λ1][λ2, Λ2] = [λ1λ2 − Λ1 · Λ2, λ1Λ2 + λ2Λ1 + Λ1 × Λ2]    (3)
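Eq. (3) is easy to transcribe directly; a minimal sketch (function names are ours, not the paper's):

```python
import math

def qmul(q1, q2):
    # Quaternion product per eq. (3): [l1 l2 - A1.A2, l1 A2 + l2 A1 + A1 x A2].
    a1, v1 = q1[0], q1[1:]
    a2, v2 = q2[0], q2[1:]
    dot = sum(x * y for x, y in zip(v1, v2))
    cross = (v1[1] * v2[2] - v1[2] * v2[1],
             v1[2] * v2[0] - v1[0] * v2[2],
             v1[0] * v2[1] - v1[1] * v2[0])
    return (a1 * a2 - dot,) + tuple(a1 * y + a2 * x + c
                                    for x, y, c in zip(v1, v2, cross))

def rot_quat(phi, n):
    # Normalized quaternion for a rotation of phi about unit axis n, as in eq. (2).
    s = math.sin(phi / 2)
    return (math.cos(phi / 2), s * n[0], s * n[1], s * n[2])

# Composing two quarter-turns about Z gives a half-turn about Z.
q = qmul(rot_quat(math.pi / 2, (0, 0, 1)), rot_quat(math.pi / 2, (0, 0, 1)))
```

The product of two normalized quaternions is again normalized, which is the closure property the text relies on.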


A real quaternion is of the form [a, 0] and multiplies like a real number. A pure quaternion has the form [0, A]. A unit quaternion is a pure quaternion [0, n] with n a unit vector. The conjugate of a quaternion A = [a, A] is the quaternion A* = [a, -A]. The norm of a quaternion A is the square root of AA*, which is seen to be the square root of a² + A·A. A quaternion of unit norm is a normalized quaternion, and it is normalized quaternions that represent rotations. A quaternion [a, A] = [a, Λn] can satisfy the normalization condition if a = cos α, Λ = sin α. This gives a Hamilton quaternion, or versor, which we shall not use; better for our purposes is to write a = cos(φ/2), Λ = sin(φ/2). The conjugate inverts the rotation axis and thus denotes the inverse rotation. The inverse quaternion of A is A⁻¹, such that AA⁻¹ = [1, 0]. For a normalized quaternion the inverse is the conjugate. The product of two normalized quaternions is a normalized quaternion, a property we desire for a rotation representation. Under the Rodrigues parameterization a normalized quaternion represents a rotation, quaternion multiplication corresponds to composition of rotations, and quaternion inverse is the same as quaternion conjugation.

One way to apply a rotation represented by a quaternion to a vector is to represent the vector by a pure quaternion and to conjugate it with the rotation quaternion. That is, to rotate a given vector r by A = [cos(φ/2), sin(φ/2) n], form a pure quaternion r = [0, r]; the resulting vector is the vector part of the pure quaternion

r′ = A r A*.    (4)

From the quaternion multiplication rule the conical transformation follows, though it can also be derived in other ways. It describes the effect of rotation upon a vector r in terms of the conical rotation parameters:

r′ = cos φ r + sin φ (n × r) + (1 − cos φ)(n · r) n    (5)

For computational purposes neither conjugation nor the conical transform is as quick as the clever formula (note the repeated subexpression) given in [9], arising from applying vector identities to eq. (5). Given a vector r and the normalized quaternion representation [a, A], the rotated vector is

r′ = r + 2(a(A × r) − (A × r) × A)    (6)
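Eqs. (5) and (6) should agree exactly in exact arithmetic, which makes a useful cross-check; a sketch (names are ours):

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def rotate_conic(r, phi, n):
    # eq. (5): r' = cos(phi) r + sin(phi) (n x r) + (1 - cos(phi)) (n.r) n
    c, s = math.cos(phi), math.sin(phi)
    nxr = cross(n, r)
    return tuple(c * ri + s * xi + (1 - c) * dot(n, r) * ni
                 for ri, xi, ni in zip(r, nxr, n))

def rotate_quat(r, a, A):
    # eq. (6): r' = r + 2(a (A x r) - (A x r) x A); (A x r) is the
    # repeated subexpression noted in the text.
    Axr = cross(A, r)
    AxrxA = cross(Axr, A)
    return tuple(ri + 2 * (a * xi - yi) for ri, xi, yi in zip(r, Axr, AxrxA))

phi, n = 1.0, (1 / math.sqrt(3),) * 3
a, A = math.cos(phi / 2), tuple(math.sin(phi / 2) * ni for ni in n)
r = (1.0, 1.0, -1.0)
r1 = rotate_conic(r, phi, n)
r2 = rotate_quat(r, a, A)
```

The two results differ only at the level of floating-point roundoff.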

The matrix representation of a rotation represented by conic parameters (φ, n) can be written explicitly in terms of those parameters, and deriving it is a well known exercise for robotics students; it can be found, for example, in [1]. For computational purposes it is easier to use the following result. The matrix corresponding to the quaternion rotation representation [λ, Λx, Λy, Λz] is

    [ λ² + Λx² − Λy² − Λz²    2(ΛxΛy − λΛz)           2(ΛxΛz + λΛy)        ]
    [ 2(ΛxΛy + λΛz)           λ² − Λx² + Λy² − Λz²    2(ΛyΛz − λΛx)        ]    (7)
    [ 2(ΛxΛz − λΛy)           2(ΛyΛz + λΛx)           λ² − Λx² − Λy² + Λz² ]
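The quaternion-to-matrix conversion of eq. (7) can be transcribed and checked against an easy special case; a sketch (names are ours):

```python
import math

def quat_to_matrix(lam, Lx, Ly, Lz):
    # eq. (7) for a normalized quaternion [lambda, Lx, Ly, Lz].
    return [
        [lam * lam + Lx * Lx - Ly * Ly - Lz * Lz,
         2 * (Lx * Ly - lam * Lz),
         2 * (Lx * Lz + lam * Ly)],
        [2 * (Lx * Ly + lam * Lz),
         lam * lam - Lx * Lx + Ly * Ly - Lz * Lz,
         2 * (Ly * Lz - lam * Lx)],
        [2 * (Lx * Lz - lam * Ly),
         2 * (Ly * Lz + lam * Lx),
         lam * lam - Lx * Lx - Ly * Ly + Lz * Lz],
    ]

# Sanity check: a half-turn about Z should give diag(-1, -1, 1).
M = quat_to_matrix(math.cos(math.pi / 2), 0.0, 0.0, math.sin(math.pi / 2))
```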


2.2 Matrix Representation

This representation is the most familiar to graphics, computer vision, and robotics workers, and is given correspondingly short shrift here. A rotation is represented by an orthonormal 3 × 3 matrix. This rotation component occupies the upper left hand corner of the usual 4 × 4 homogeneous transform matrix for three dimensional points expressed as homogeneous 4-vectors. The matrix representation is highly redundant, representing as it does three quantities with nine numbers.

In this section capital letters are matrix variables. The primitive rotation matrices are usually considered to be those for rotation about coordinate axes. For example, to rotate a (column) vector r by φ about the X axis we have the familiar

         [ 1      0        0    ]
    r′ = [ 0    cos φ   −sin φ  ]  r    (8)
         [ 0    sin φ    cos φ  ]

with related simple matrices for the other two axes. The composition of rotations A, B, C is written r′ = CBAr, which is interpreted either as performing rotations A, B, C in that order, each being specified with respect to a fixed (laboratory) frame, or as performing them in the reverse order C, B, A, with each specified in the coordinates that result from applying the preceding rotations to the preceding coordinate frame, starting with C acting on the laboratory frame.

Conversion methods from other rotation parameterizations (like Euler angles) to the matrix representation appear in [1]. Generally the matrices that result are of a moderate formal complexity, but with pleasing symmetries of form arising from the redundancy in the matrix. In this work all we shall need is eq (7).

Infinitesimal rotations are interesting. The skew-symmetric component of a rotation matrix A is S = A − A^T, and it turns out that

A = exp(S) = I + S + (1/2)S² + ⋯      (9)

A little further work establishes that if the conical representation of a rotation is (φ, nx, ny, nz) and

    [  0   −nz    ny ]
Z = [  nz    0   −nx ]      (10)
    [ −ny   nx     0 ]

then A = exp(φZ) = I + (sin φ)Z + (1 − cos φ)Z², from which the explicit form of the 3 × 3 matrix corresponding to the conic representation can be found.
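This expansion of exp(φZ) can be sketched directly (Python, names mine); for φ = π/2 about the Z axis it recovers the same matrix as eq (8) specialized to that axis:

```python
import math

def skew(n):
    """Z of eq (10) for the unit axis n = (nx, ny, nz)."""
    nx, ny, nz = n
    return [[0.0, -nz,  ny],
            [ nz, 0.0, -nx],
            [-ny,  nx, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def conic_to_matrix(phi, n):
    """A = exp(phi Z) = I + sin(phi) Z + (1 - cos(phi)) Z^2."""
    Z = skew(n)
    Z2 = matmul(Z, Z)
    s, c = math.sin(phi), math.cos(phi)
    return [[(1.0 if i == j else 0.0) + s * Z[i][j] + (1.0 - c) * Z2[i][j]
             for j in range(3)] for i in range(3)]
```

The series of eq (9) collapses to three terms because Z³ = −Z for a unit axis, so all higher powers fold back into Z and Z².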

Writing eq (10) as

       [ 0  0   0 ]        [  0  0  1 ]        [ 0  −1  0 ]
Z = nx [ 0  0  −1 ]  + ny  [  0  0  0 ]  + nz  [ 1   0  0 ]      (11)
       [ 0  1   0 ]        [ −1  0  0 ]        [ 0   0  0 ]

shows that infinitesimal rotations can be expressed elegantly in terms of orthogonal infinitesimal rotations. It is of course no accident that the matrix weighted by nx in eq (11) is the derivative of the matrix in eq (8) evaluated at φ = 0. Further, it will be seen that if v′ is the vector arising from rotating vector v by the rotation (δφ, n), then each component of v′ is just the sum of various weighted components of v. Thus infinitesimal rotations, unlike general ones, commute.

3 Operation Counts

Given the speed and word length of modern computers, including those used in embedded applications like controllers, concern about operation counts and numerical accuracy of rotational operations may well be irrelevant. However, there is always a bottleneck somewhere, and in many applications, such as control, graphics, and robotics, it can still happen that computation of rotations is time consuming. Small repeated incremental rotations are especially useful in graphics and in some robot simulations [6,23]. In this case the infinitesimal rotations of the last section can be used if some care is exercised. To compute the new x′ and y′ that result from a small rotation θ around the Z axis, use

x′ = x − y sin θ,      y′ = y + x′ sin θ      (12)

The use of x′ in the second equation maintains normalization (unit determinant of the corresponding rotation matrix). For graphics, cumulative error can be fought by replacing the rotated data by the original data when the total rotation reaches 2π [6].
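The determinant claim is easy to check: with s = sin θ, the update maps (x, y) to (x − sy, y + s(x − sy)), whose matrix [[1, −s], [s, 1 − s²]] has determinant (1 − s²) + s² = 1 exactly, so the orbit neither spirals in nor out. A sketch (Python, names mine):

```python
import math

def incremental_rotate_z(x, y, theta, steps):
    """Iterate eq (12). Using the already-updated x' in the y update keeps the
    corresponding 2x2 map at unit determinant, so lengths stay bounded."""
    s = math.sin(theta)
    for _ in range(steps):
        x = x - y * s
        y = y + x * s        # x here is the new x', as eq (12) specifies
    return x, y
```

Ten thousand steps of a 0.01-rad increment leave a unit vector's length within a fraction of a percent of 1, whereas the naive update y′ = y + x sin θ (old x) drifts outward.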

Operation counts of course depend not just on the representation but on the algorithm implemented. As the application gets more complex, it becomes increasingly likely that ameliorations in the form of cancellations, appropriate data structures, etc., will arise. Even simple operations sometimes offer surprising scope for ingenuity (compare eqs (5) and (6)). Thus it is not clear that listing the characteristics of a few primitive operations is guaranteed to be helpful (I shall do it nevertheless). Realistic prediction of savings possible with one representation or the other may require detailed algorithmic analysis. Taylor [20] works out operation counts for nine computations central to his straight-line manipulator trajectory task. In these reasonably complex calculations (up to 112 adds and 156 multiplies in one instance), quaternions often cut down the number of operations by a factor of 1.5 to 2. Taylor also works out operation counts for a number of basic rotation and vector transformation operations. This section mostly makes explicit the table appearing in Taylor's chapter detailing those latter results; Section 4.9 gives counts for iterated (identical and different) rotations.

3.1 Basic Operations

Recall the representations at issue

Conic (φ, n): axis n, angle φ


Quaternion [λ, Λ], with λ = cos(φ/2), Λ = sin(φ/2) n

Matrix: a 3 × 3 matrix

For 3-vectors, if multiplication includes division, we have the following:

Vector Addition: 3 additions

Scalar–Vector Product: 3 multiplications

Dot Product: 3 multiplications, 2 additions

Cross Product: 6 multiplications, 3 additions

Vector Normalization: 6 multiplications, 2 additions, 1 square root

3.2 Conversions

Conic to Quaternion: One division to get φ/2, one sine and one cosine operation, three multiplications to scale n. Total: 4 multiplications, 1 sine, 1 cosine.

Quaternion to Conic: Although it looks like one arctangent is all that is needed here (after dividing the quaternion through by λ), that seems to lose track of the quadrants. Best seems simply finding φ/2 by an arccosine, a multiplication by 2 to get φ, then computing sin(φ/2) and dividing the vector part by that. Total: 1 arccosine, 1 sine, 4 multiplications. Special care is needed here for small (especially zero) values of φ.
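A sketch of this conversion with the small-φ guard (Python, names mine; the choice of fallback axis for φ ≈ 0, where the axis is genuinely indeterminate, is my own):

```python
import math

def quat_to_conic(lam, Lam, eps=1e-12):
    """Conic (phi, n) from a unit quaternion [lam, Lam]: phi/2 by arccosine,
    multiply by 2, then divide the vector part by sin(phi/2)."""
    half = math.acos(max(-1.0, min(1.0, lam)))   # clamp against rounding drift
    s = math.sin(half)
    if s < eps:                                  # phi ~ 0: axis is indeterminate
        return 0.0, (1.0, 0.0, 0.0)
    return 2.0 * half, tuple(c / s for c in Lam)
```

The clamp on the arccosine argument matters in practice: after reduced-precision arithmetic a "unit" quaternion can have λ marginally outside [−1, 1].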

Quaternion to Matrix: Eq (7) is the basic tool. Although the matrix is orthonormal (thus one can compute a third row from the other two by vector cross product), that constraint does not help in this case. Computing all the necessary products takes 13 multiplications, after which 8 additions generate the first two rows. The last row costs only another 5 additions working explicitly from eq (7), compared to the higher cost of a cross product. Total: 13 multiplications, 13 additions.

Conic to Matrix: One way to proceed is to compute the matrix whose elements are expressed in terms of the conic parameters (referred to in Section 2). Among other things, it requires more trigonometric evaluations, so best seems the composition of the conic to quaternion to matrix conversions, for a total of 17 multiplications, 13 additions, 1 sine, 1 cosine.

Matrix to Quaternion: The trace of the matrix in eq (7) is seen to be 4λ² − 1, and differences of symmetric off-diagonal elements yield quantities of the form 4λΛx, etc. Total: 5 multiplications, 6 additions, 1 square root.

Matrix to Conic: Easiest seems matrix to quaternion to conic.

3.3 Rotations

Vector by Matrix: Three dot products. Total: 9 multiplications, 6 additions.


Vector by Quaternion: Implement eq (6). Total: 18 multiplications, 12 additions.

Compose Rotations (Matrix Product): Here it pays to exploit orthonormality rather than simply implementing matrix multiplication with nine dot products. Six dot products and one cross product are necessary. Total: 24 multiplications, 15 additions. This method uses only 2/3 of the information in the matrices and may be suspect for inexact representations (see below on normalization).
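The orthonormality shortcut can be sketched as follows (Python, names mine): compute the first two rows of AB honestly, then recover the third as their cross product, which is valid exactly when AB really is a rotation matrix.

```python
def compose_orthonormal(A, B):
    """Product AB using six dot products for the first two rows and one
    cross product for the third (24 multiplies, 15 additions), valid because
    the product of rotation matrices is orthonormal with unit determinant."""
    r0, r1 = ([sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
              for i in range(2))
    r2 = [r0[1]*r1[2] - r0[2]*r1[1],
          r0[2]*r1[0] - r0[0]*r1[2],
          r0[0]*r1[1] - r0[1]*r1[0]]
    return [r0, r1, r2]
```

This is exactly where the text's caveat bites: if A or B is only approximately orthonormal, the discarded third rows carried information that the cross product cannot recover.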

Compose Rotations (Quaternion Product): Implementing eq (3) directly takes one dot product, one cross product, two vector additions, two vector–scalar products, one multiply, and one add. Total: 16 multiplications, 12 additions.

Rotate around X, Y, or Z Axis: To do the work of eq (8) requires 1 sine, 1 cosine (unless they are already known), 4 multiplications, and 2 additions.

3.4 Normalization

In the conic representation one has little choice but to normalize the axis direction vector to unit norm and leave the rotation angle alone. In the quaternion and matrix representations the direction and magnitude are conflated in the parameters, and normalization of the parameters affects them both.

Quaternion: Simply dividing all quaternion components through by the norm of the quaternion costs 8 multiplies, 3 additions, 1 square root.

Matrix: An exact rotation matrix is orthonormal. Three simple normalization algorithms suggest themselves. In the no-crossproduct method, each matrix row (in this paragraph meaning row or column) is normalized to a unit vector. Cost: 12 multiplies, 6 additions, 3 square roots. No rows are guaranteed orthogonal. The one-crossproduct method normalizes two rows and computes the third as their cross product. Cost: 18 multiplies, 7 additions, 2 square roots. This method does not guarantee that the first two rows are orthogonal. The two-crossproduct method performs the one-crossproduct normalization, then recomputes the first row as the cross product of the second and third, say, for a total of 23 multiplies, 10 additions, and 2 square roots. Neither of the last two methods uses information from one matrix row. The best normalization procedure, by definition, should return the orthonormal matrix closest to the given one; the above procedures are approximations to this ideal. One idea might be to find the principal axis direction (the eigenvector corresponding to the largest real eigenvalue) and call that the normalized rotation axis. An exact rotation matrix M has one eigenvalue of unity and two conjugate complex ones. What then of the rotation angle? In the errorless matrix, cos(φ) = (Trace(M) − 1)/2, and the trace is invariant under the eigenvalue calculation, so the angle can be calculated from three of the matrix elements. But does this choice minimize any reasonable error in rotation space? Indeed, what error should be minimized? The work here uses the no- and one-crossproduct forms, tries some empirical investigation in Section 4.7, and leaves the large question of optimal normalization as a research issue. Exercise: find the eigenvalues and eigenvectors of the matrix in eq (8).
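A sketch of the one-crossproduct method (Python, names mine), the form used for most of the experiments below:

```python
import math

def unit(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def one_crossproduct_normalize(M):
    """Normalize the first two rows to unit vectors and recompute the third
    as their cross product. As noted in the text, the first two rows are
    not forced to be mutually orthogonal, and row three's old content is
    discarded entirely."""
    r0 = unit(M[0])
    r1 = unit(M[1])
    return [list(r0), list(r1), list(cross(r0, r1))]
```

If the normalized first two rows are not quite orthogonal, the recomputed third row has length sin of the angle between them, slightly less than one; that residual skew is what the two-crossproduct variant attacks at the price of discarding still more of the input.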


Operation               Quaternion            Matrix
Rep → Conic             4×, 1 acos, 1 sin     9×, 6+, 1 acos, 1 sin, 1 √
Conic → Rep             4×, 1 sin, 1 cos      17×, 13+, 1 sin, 1 cos
Rep → Other             13×, 13+              5×, 6+, 1 √
Rep ∘ vector            18×, 12+              9×, 6+
Rep ∘ Rep               16×, 12+              24×, 15+
X, Y, or Z Axis Rot     8×, 4+                4×, 2+
Normalize               8×, 3+, 1 √           18×, 7+, 2 √

Table 1: Summary of quaternion and matrix operation counts. Rep means the representation of the column heading; Other means that of the other column heading. Applying a rotation to a vector or another rotation is denoted by ∘. Multiplications, additions, and square roots are denoted by ×, +, √. Trigonometric functions are denoted sin, acos, etc.

3.5 Summary and Discussion

The operation counts are summarized in Table 1. They generally agree with those in [20], with a few differences. Taylor seems to have a quaternion to matrix conversion that takes 9 multiplies and 19 adds (Exercise: find it), and he implements his matrix to quaternion conversion with 3 sines, 3 cosines, and one arctangent instead of a single square root. Taylor normalizes matrix representations by the one-crossproduct method. The use of eq (6) for vector rotation saves 4 multiplies over Taylor's counts. In any event, the differences in costs of the two representations do not seem as dramatic for these basic calculations as for Taylor's more complex robotic trajectory calculations, but iterative operations reveal a difference of a factor of two to three in effort (see Section 4.9). Generally, it may be impossible to predict realistically the extent of the savings possible with one representation over the other without a detailed algorithmic analysis.

4 Numerical Accuracy

4.1 Vector Rotation Tasks, Methods, and Representations

The manipulations of quaternions, matrices, and vectors presented in Section 3 are based on simple building blocks of vector addition, scaling, and dot and cross products. The numbers involved in the representations are generally well behaved, with vector components and trigonometric functions lying in the interval [0, 1] and rotation magnitudes and inverse trigonometric functions often in the interval [−2π, 2π]. Neither representation requires algorithms with obviously numerically dangerous operations like dividing by small differences.

It is often alleged that one representation is more numerically accurate than the other. However, it is not easy to see how to quantify this claim analytically. The matrix representation is redundant, representing the three parameters with nine numbers. Does this mean it is more vulnerable to numerical error, or less? Given that the algorithms are straightforward and simple, do their numerical results depend more on their structure or on the data they operate on? The individual numbers (often components of unit 3-vectors) have nonlinear constraints that may affect the numerical performance. Perhaps some of the alleged differences are thought to arise in microcomputer implementations with small word size; how does accuracy vary with the number of significant digits?

In general, the lack of analytical leverage on these questions motivated the empirical study reported in this section. Herein we compare representations for several common vector rotation tasks: rotation of a vector, iterated identical rotations of a vector, and iterated different (random) rotations of a vector. Error is measured either as an error vector or as the amount that the resulting vector's length and direction differ from the correct values.

More precisely, a correct answer is determined by computation at full (double) precision. When the rotation task is to iterate the same rotation, the correct answer can be (and is) calculated for the final cumulative rotation rather than calculated by iteration. The application of a rotation to a vector is accomplished by two methods: by vector–matrix multiplication as in eq (8), and by Horn's quaternion–vector formula, eq (6). The iteration of a rotation is accomplished in three ways: quaternion–quaternion multiplication, matrix–matrix multiplication, and the quaternion–vector formula. Thus the results of iterated rotation are represented as a quaternion, matrix, and (cumulatively rotated) vector, respectively.

The experiments use a variable-precision implementation of all the relevant real, vector, quaternion, and matrix operations, offering a runtime choice of the number of bits of mantissa precision and the approximation type (rounding or truncation). The most basic operations are approximation, which rounds or truncates the mantissa of a C-language double (double-precision real floating point number) to the required precision, and arithmetic operations that approximate the operands, operate (at full precision), and approximate the result. From these basics are built vector cross and dot products, quaternion and matrix products, normalization routines, etc.
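The paper's implementation is in C; a minimal sketch of the same idea in Python (names mine), using the standard mantissa/exponent decomposition:

```python
import math

def approx(x, bits, mode="round"):
    """Round or truncate the mantissa of a float (a C double) to `bits` bits."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)              # x = m * 2**e with 0.5 <= |m| < 1
    scaled = m * (2.0 ** bits)
    q = float(round(scaled)) if mode == "round" else float(math.trunc(scaled))
    return math.ldexp(q / (2.0 ** bits), e)

def approx_mul(a, b, bits, mode="round"):
    """Approximate the operands, multiply at full precision, approximate the result."""
    return approx(approx(a, bits, mode) * approx(b, bits, mode), bits, mode)
```

For example, at 10 bits under truncation, 1/3 becomes 682/2048 = 0.3330078125, while rounding gives 683/2048 = 0.33349609375; the one-sided bias of truncation is exactly the systematic shrinking effect discussed in Section 4.2.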

Vectors are specified or created randomly as (x, y, z) unit vectors at the highest machine precision. Rotations are specified or created randomly in the conic (φ, n) (rotation angle, rotation axis direction) representation, also at highest precision. Random unit vectors for input or rotation axes are not uniformly distributed in orientation space, being generated by normalizing three uniform variates to produce a unit vector. For random (φ, n) rotations a random vector is produced for n, and a fourth uniform variate is normalized to the range [−π, π]. Exercise: what do these distributions of vectors and rotations look like in orientation and rotation space?
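A sketch of this generation scheme (Python, names mine; the [−1, 1] range for the axis variates is my assumption, since the text does not specify the variates' range):

```python
import math
import random

def random_conic(rng):
    """Random (phi, n) as described in the text: normalize three uniform
    variates for the axis (NOT uniform over the sphere), and a fourth
    variate scaled to [-pi, pi] for phi."""
    while True:
        v = (rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0))
        norm = math.sqrt(sum(c * c for c in v))
        if norm > 1e-9:               # reject the (measure-zero) degenerate draw
            break
    return rng.uniform(-math.pi, math.pi), tuple(c / norm for c in v)
```

Normalizing variates drawn from a cube concentrates axes toward the cube's corners, which is the non-uniformity in orientation space that the exercise asks about.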

The correct answer is computed at highest precision by applying a single rotation (by Nφ for an N-long iteration of the same rotation φ) around the rotation axis to the input vector, using eq (6). Then, working at the lower experimental precision, the conic input representation is converted to quaternion and matrix representations by eqs (2) and (7). Next, the experimental rotational task is carried out at the experimental lower precision. Finally, the error analysis consists of highest-precision comparison of the low-precision result with the high-precision correct answer.

When a reduced-precision vector v′ is obtained, its error is determined by comparing it to the full-precision, non-iteratively rotated vector v. The error vector is v′ − v; the length error ΔL and direction error ΔD are

ΔL = | ‖v′‖ − ‖v‖ |      (13)

ΔD = arccos(v̂′ · v̂)      (14)

where v̂ is the unit vector in the direction of v.

In all the work here ‖v‖ = 1. Directional errors are reported in radians, length errors in units of the (unit) length of the vector to be transformed. The experiments are reported below in a rather discursive style. Along the way a few side issues are addressed, and conclusions are drawn that affect the later choices of experimental parameters.
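The two error measures of eqs (13) and (14) can be sketched as follows (Python, names mine; the clamp before the arccosine is my own guard against rounding pushing the cosine marginally outside [−1, 1]):

```python
import math

def error_measures(v_approx, v_true):
    """Length error (eq 13) and direction error in radians (eq 14)."""
    norm = lambda v: math.sqrt(sum(c * c for c in v))
    dL = abs(norm(v_approx) - norm(v_true))
    cos_angle = (sum(a * b for a, b in zip(v_approx, v_true))
                 / (norm(v_approx) * norm(v_true)))
    dD = math.acos(max(-1.0, min(1.0, cos_angle)))
    return dL, dD
```

Note that the two measures are independent: a vector of the right length can point the wrong way, and vice versa, which is exactly the decomposition the figures below exploit.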

4.2 Analytic Approaches

Let us begin to consider the conversion of conic (φ, n) rotation parameters to quaternion and matrix form. For simplicity we might assume that θ = φ/2 is known exactly. Assume that at n bits of precision a number x is represented to a maximum additive absolute error of δ = 2⁻ⁿ, which is a relative error of δ/x. Ignoring second-order terms, the product of two imprecise numbers (x + δ)(y + ε) has additive relative error of δ/x + ε/y. When looking up or computing a function, the error to first order in f(x + δ) is δf′(x). Thus the error in computing the cosine of an errorful number x + δ is sin(x)δ. If, as in the implementation of this paper, the function is computed initially to high precision and then approximated, the further approximation may add another error of δ, for a total absolute error of (1 + sin(x))δ. Of course the function computation could be set up to eliminate the last approximation error if the working precision were known in advance.

These considerations are a start at constructing formulae for the relative error in the conversion of conic to quaternion and matrix form via eqs (2) and (7). However, missing is the constraint that n be a unit vector (that the matrix be orthonormal). Without this constraint the predictions of errors will be pessimistic. Interacting with the constraint is the usual simplistic assumption that the worst happens in every operation (i.e., errors never cancel). This usual assumption makes analysis possible but potentially irrelevant. In fact, errors in the matrix and quaternion components after conversion depend closely on the values of the inputs and empirically do not exhibit predictable, data-independent structure.

Turning to a slightly grosser examination of the structure of rotation operations, consider the implementation used in the work below for the two competing equations for quaternion-mediated rotation (eq (6)) and matrix-mediated rotation (multiplication of a vector by the matrix in eq (7)). The quaternion formula is the sum of three vectors (the original and two cross products), whereas the matrix formula is a matrix–vector product. Going through the code for this operation, we find that the x′ component of the result vector (x′, y′, z′) after applying the quaternion formula for quaternion (λ, Λx, Λy, Λz) to the input vector (x, y, z) is computed by operations applied at their usual precedence in the following formula:

x′ = x + 2λ(Λy z − Λz y) − 2Λz(Λz x − Λx z) + 2Λy(Λx y − Λy x)      (15)

and similarly for the y′ and z′ components. Both (x, y, z) and Λ are unit vectors and λ is a cosine.

The operations implementing the corresponding component by the matrix multiplication formula are, from the first row of eq (7),

x′ = (2λ² − 1 + 2Λx²)x + 2(ΛxΛy − λΛz)y + 2(ΛxΛz + λΛy)z      (16)

and similarly for the y′ and z′ components.

There is nothing in these two formulae to predict the significant statistical difference (especially under the more predictable truncation approximation) in the performance of these two methods (see, e.g., Fig 6).

Pressing on, consider a still grosser description of the structure. Looking at eqs (4) and (8), say that the error in quaternion and matrix (caused perhaps by truncation) is described as a scaling of the whole quaternion or matrix (by δ = (1 − kε) for some constant k, if the truncation error is ε). In that case the matrix-transformed vector is in error by a factor of δ, or an additive error of ε, and the quaternion-transformed vector by a factor of δ² (because of the conjugation), or an additive factor of 2ε (to first order). Iterated matrix or quaternion self-multiplication exponentially damps the individual matrix and quaternion elements, which otherwise vary sinusoidally as the iteration proceeds. Under iterated application of the scaled matrix, the direction of the resulting vector should be unaffected, but its length should shrink, decaying exponentially to zero.

In fact this level of explanation suffices to explain the data resulting from a single rotation (Fig 6): the quaternion errors are approximately twice the matrix errors for a single rotation under truncation. It also correctly predicts the behavior of matrix and quaternion elements under iteration of the same rotation (Fig 3). It correctly predicts the form of error for the matrix transformation under iteration of the same rotation (Fig 8), and forms part of the explanation for quaternion transformation error (see Section 4.4). Exercise: In Fig 3 the period of the matrix element is half that of the quaternion element. This result is expected from inspection of eqs (8) and (2), but looks inconsistent with eq (7). Is it?

4.3 Single Rotations

First, some sample single rotations were applied:

v′ = R ∘ v      (17)


Figure 3 About Here

Figure 3: The components of the quaternion and matrix are scaled and shifted sinusoids. Here representative quaternion and matrix element amplitudes decay exponentially through time under iterative rotation with truncation. Under rounding the cumulative decay is negligible. Dotted line: matrix element [0,0]; solid line: quaternion element Λx.

Figure 4 About Here

Figure 4: Rotating the unit vector in direction [1, 1, −1] around the axis [1, 1, 1] by varying amounts: (a) 0.1 rad, (b) 1.0 rad, (c) 1.7 rad. The errors are plotted logarithmically against the number of bits of mantissa precision, under truncation. Hollow and dark squares: direction error with quaternion and matrix representations. Star and cross: length error with quaternion and matrix representations. Variations in error magnitude and its rank ordering result from varying rotation magnitude. The largest error at 22 bits precision is 0.0001 (the smallest representable number is about 10⁻⁹).

Fig 4 shows the results, comparing the conic (eq (6)) with matrix rotation computation under truncation. In general, increasing the rotation magnitude does not increase error, the error magnitudes are not predictably ordered, and the error magnitude is at worst 10⁻⁶ with 22 bits of mantissa precision.

Next, 100 individual random rotation–random vector pairs were generated, with the rotation applied to the vector at a single precision (here, as throughout the paper until Section 4.8, 10 bits), under both truncation (case RaTr) and rounding (case RaRd). (Statistics for several cases are collected in Table 2 in Section 4.6.)

The results are shown in two different styles in Figs 5, 6, and 7. Fig 5 displays length and direction errors of the 100 trials under truncation, indexed along the X axis although they are each independent. As predicted in Section 4.2, the matrix representation outperforms quaternions by a factor of about two. The length errors under truncation show the nonzero mean one would expect. The same data are presented in Fig 6 as two projections of the three-dimensional error vectors, all oriented so that the true answer lies along the Z axis, facing down with its point at the origin, aiming in the −Z direction. The X–Y projection shows how the direction errors arise, and the X–Z projection shows how the length errors arise (with the approximate vectors all being too short). Exercise: Construct the rotation that points a given (x, y, z) vector along the Z axis (a) from rotations about the (X, Y, Z) axes, (b) using quaternions.

Figure 5 About Here

Figure 5: Case RaTr. (a) Length errors of 100 different random (rotation, vector) pairs at 10 bits precision under truncation. Circles plot the quaternion–vector errors, X's the matrix–vector errors. (b) As for (a), but direction errors.


[Figures 1, 2, and 3 appear here.]

Figure 6 About Here

Figure 6: The same case and data as the previous figure, plotted as error vectors, where the correct answer vector points to the origin and lies along the positive Z axis (see text). (a) Error vectors projected onto the X–Z plane. Circles plot the quaternion–vector errors, X's the matrix–vector errors. (b) As for (a), but projection onto the X–Y plane.

Figure 7 About Here

Figure 7: Case RaRd. (a), (b): as the previous figure, with the same input rotations and vectors, but with operations under rounding to 10 bits of precision.

Fig 7 is similar to Fig 6, except that rounding is employed (note the scale change). Not only does the length error bias disappear, as expected, but the difference between the methods decreases, with the matrix implementation maintaining a slight advantage.

4.4 Iterated Rotations and Normalization

The second experiment is to iterate the same small rotation of φ around a particular direction, using three rotation composition methods. Quaternion multiplication and matrix multiplication produce iterated quaternion and matrix rotation representations; at the end of the iterations the resulting rotation representation is applied to the input vector once to determine the rotated vector. A third method, called the vector method here, iteratively applies the identical incremental rotation to the vector at each step, preserving only the rotated vector. There are thus two cases (quaternion and matrix) of composing rotation representations and one case (vector) of composing the application of the rotation to the vector. Previous results indicate that the matrix method is both more accurate and more efficient for applying a single rotation, and empirically the results are much better with this choice as well, so rotations are implemented in the vector method by matrix–vector multiplication.
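Two of the three methods can be sketched as follows (Python rather than the paper's C implementation; names mine, and the vector method here uses the quaternion–vector formula rather than the matrix form the paper settles on). At full double precision the two agree; it is only the reduced-precision arithmetic inserted into these loops that drives them apart:

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def qmul(q1, q2):
    """Quaternion product: [l1*l2 - L1.L2, l1*L2 + l2*L1 + L1 x L2]."""
    l1, L1 = q1
    l2, L2 = q2
    cx = cross(L1, L2)
    return (l1 * l2 - sum(a * b for a, b in zip(L1, L2)),
            tuple(l1 * L2[i] + l2 * L1[i] + cx[i] for i in range(3)))

def qrot(q, r):
    """Eq (6)."""
    lam, L = q
    c = cross(L, r)
    d = cross(c, L)
    return tuple(r[i] + 2.0 * (lam * c[i] - d[i]) for i in range(3))

# 1 rad about (1,1,1)/sqrt(3), 200 iterations, as in case IS0 below
n = tuple(c / math.sqrt(3.0) for c in (1.0, 1.0, 1.0))
q = (math.cos(0.5), tuple(math.sin(0.5) * c for c in n))
v0 = (1.0, 1.0, -1.0)

# quaternion method: compose 200 quaternions, apply once at the end
acc = (1.0, (0.0, 0.0, 0.0))
for _ in range(200):
    acc = qmul(q, acc)
v_quat = qrot(acc, v0)

# vector method: apply the increment to the vector at every step
v_vec = v0
for _ in range(200):
    v_vec = qrot(q, v_vec)
```

The matrix method has the same shape, with `qmul` replaced by matrix product and `qrot` by matrix–vector product.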

These experiments were carried out under both truncation and rounding. Truncation is sometimes used to get at worst-case performance in a numerical-analysis sense. Such an approach might be useful in the analysis of a non-iterative computational scheme (such as a single rotation), but the cumulative scaling effects described in Section 4.2 overwhelm numerical precision effects. The errors engendered through iterative truncation turn out to be qualitatively similar to those that arise under rounding. These results are thus good for geometric intuition about the methods and illustrate the importance of normalization. Fig 8(a) and (b) show the (shocking) errors that result if no normalization takes place during the iterated rotation composition and the resulting non-normalized product quaternion or matrix rotation is applied to the input vector. This is case IS0 (Iterative Sequence, 0 normalizations). Without normalization, all the representations show cumulative errors that are easy to understand.


[Figures 5(a) and 5(b) appear here.]

[Figures 6(a) and 6(b) appear here.]

Figure 8 About Here

Figure 8: Case IS0. Iterative transformation of the vector [1, 1, −1] about the axis [1, 1, 1] by 1 rad, 200 times. Calculations done with ten bits of mantissa precision under truncation; no normalization of the final representation before applying it to the vector. This choice of input parameters results in a representative performance, but not a normative one. (a) Result-vector error projected on the X–Z plane. Solid line: vector representation; dotted line: matrix; dashed line: quaternion. The matrix errors are a very tight helix up the Z axis, with the vector errors only slightly larger. (b) As for (a), but showing the X–Y projection. The matrix and vector errors project to the small dark blob at the origin.

First consider the vector representation. It is acted on at each iteration by an identical but errorful transform of eq (6). We see it in both views spiraling away from the correct initial value, its length shrinking and its direction error increasing. The performance of the vector method might be improved by a factor of two on average by using the matrix transformation at every step (considering the arguments in Section 4.2 and the results shown in Fig 6). Another important aspect of this method is its variability in performance for different rotations, arising in the variability in the accuracy of the first calculated rotation (again, Fig 6).

As predicted in Section 4.2, the matrix representation is affected in its scaling property but not significantly in its rotational properties: there is effectively no directional error, but the result shrinks to less than half its original length in 200 iterations.

The quaternion representation acts, using eq (6), by adding two cross-product vectors to the original input vector; as the quaternion components become increasingly systematically errorful, they affect the result in the interesting way shown in the figure. One effect is the decaying magnitude of the quaternion; the other is that as the cumulative rotation φ increases from zero, the sin(φ/2) weighting of Λ, the vector part of the quaternion, starts at zero and returns to zero with a period of 2π/φₜ, where φₜ is the truncated value of the incremental rotation angle. Thus with that period the effective rotation error varies sinusoidally and returns to zero as the magnitude of Λ does.

The same data are plotted for 400 iterations, this time in Fig 9. First consider length errors. This presentation emphasizes the exponential decay of the answer's length to zero with the matrix and vector representations. Thus, as the quaternion's magnitude ultimately decays to zero, the result vector will be unaffected by the rotation and will remain equal to the input vector, so its length error stabilizes at zero. The short-period sinusoidal character of the length error arises from the arguments of the last paragraph. The initially increasing envelope arises from the increasing inaccuracy caused by the truncation. The ultimate exponential decay of the envelope arises from the shrinking of the quaternion components.

Considering direction errors, the matrix and conic representations have (locally) linearly increasing errors as they spiral away from the true answer direction. (The linear increase cannot be maintained forever on the orientation sphere, of course.)

[Figures 7(a) and 7(b) appear here.]

[Figures 8(a) and 8(b) appear here.]

Figure 9 About Here

Figure 9: Case IS0 for 400 iterations (see text). (a) Length error of result vector. Solid line: vector representation; dotted line: matrix; dashed line: quaternion. The vector and matrix errors coincide. (b) As for (a), but showing direction error for the resulting vector. The vector and matrix errors are very close to zero.

Figure 10 About Here

Figure 10: Case IS0 under rounding instead of truncation. Solid line: vector representation; dotted line: matrix; dashed line: quaternion. Though the forms of the errors are similar to the truncation case, their relative magnitudes are more equal. (a) Result vector projected on the X–Z plane. (b) As for (a), but showing the X–Y projection.

The quaternion method's direction error ultimately becomes purely periodic as the answer vector becomes static. The direction error as a function of iteration number t and angle r between the rotation axis and the input vector is then of the form ((r sin t)² + (r − r cos t)²)^(1/2), or the length of a circular chord from a fixed point on the circle to a point moving around the circle at constant velocity. Before that time the direction error exhibits a more complex two-humped form; this arises from the two separate cross-product components of the vector increment in eq (6). Exercise: Would there be any difference in direction and length errors if the rotation is accomplished by conjugation (eq (4)) instead of eq (6)?

Next in this sequence, the last two demonstrations are repeated under rounding, not truncation. The expectation is that, due to the better approximation and possible error cancellations arising from high and low approximations, the errors in the representations will be less, but that systematic errors will still exist. The results (Figs 10, 11) bear this out: the errors often seem like much smaller, noisy versions of the truncation errors. In Fig 11 the quaternion errors display the fine two-humped structure longer, since it is not overwhelmed as quickly by shrinking quaternion magnitude. Rounding has a dramatically good effect on the systematic shrinking of matrices and quaternions under iterative self-multiplication.

So far we have, purely for intuition, dealt with the pathological case in which no normalization of rotation representations is performed. The arguably pathological case of approximation through truncation yields insight about the structure of systematic approximation errors in an iterative scheme. From here on we abandon truncation approximation (except in Section 4.7).
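The reduced-precision arithmetic used in these experiments can be simulated by quantizing each result to a b-bit mantissa under either truncation or rounding. A minimal sketch of such a quantizer (the function name and interface are mine, not the paper's):

```python
import math

def quantize(x, bits=10, mode="round"):
    # Reduce x to a 'bits'-bit mantissa, simulating low-precision storage.
    # mode="trunc" chops the mantissa toward zero; mode="round" rounds to nearest.
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)                  # x = m * 2**e with 0.5 <= |m| < 1
    scaled = m * (1 << bits)
    scaled = math.trunc(scaled) if mode == "trunc" else round(scaled)
    return math.ldexp(scaled / (1 << bits), e)
```

Applying `quantize` after every add and multiply in a rotation loop reproduces the kind of systematic truncation bias and smaller rounding noise discussed above.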

Figure 11 About Here

Figure 11. Data of the previous figure, showing length and direction errors. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of result vector. (b) Direction error.

Figure 9(a) Figure 9(b)

Figure 10(a) Figure 10(b)

Figure 12 About Here

Figure 12. Case IS1. Rotating the vector (1, 1, -1) by 1 radian increments about the axis (1, 1, 1), with 10 bits precision under rounding, normalizing the quaternion and matrix representations prior to application to the input vector, and normalizing the final vector representation to its initial length. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. The vector representation has lower variance, and the quaternion exhibits marked periodicity. Only the first 100 iterations are shown; the rest are similar. (b) Direction error.

We now look at two different styles of normalization. In the first, no normalization is performed on intermediate representations, but the final quaternion, matrix, and vector are normalized. In the second, the intermediate quaternion, matrix, and vector are normalized at each step. Normalization of the matrix and quaternion is not dependent on the input data, but the length of the input vector must be computed and remembered for the vector representation to be normalized. This fact might be significant if many vectors are being treated at once.

Fig. 12(a) shows the length error when the final quaternion and matrix are normalized prior to application to the input vector, and the incrementally generated vector representation is normalized to its original length (here known to be unity). This case is called IS1, for Iterative Sequence, 1 normalization. The matrix normalization used is the one crossproduct form; when only one normalization is being performed, the three types of normalization perform virtually identically. The computations were performed under rounding to the usual ten bits of precision. If its initial length is not remembered, the vector representation cannot be normalized, and its error is the same as the non-normalized case of Fig. 11, showing a linear error accumulation.

Fig. 12(b) compares direction error for the three representations normalized as in Fig. 12(a). The comparative performance here is certainly not so clear cut, and the error accumulations resemble random walks. Alternatively, the performance could depend more on the particular choice of input data (rotation and input vector) than on numerical properties of the methods.

Finally, a natural question concerns the effects of normalization applied after every rotation composition. This is case ISN (N normalizations). Fig. 13 shows the results for length and direction. As might be expected, for length error the normalized vector has the best performance (presumably as accurate as machine precision allows). The quaternion result has a systematically negative mean, higher variance, and periodicity. The matrix length error displayed in Fig. 13(a) results from the no crossproduct form of matrix normalization. The trend is not an illusion; carried out to 400 iterations, the length error shows a marked systematic increase in the negative direction. On the other hand, iteration of the one crossproduct normalization resulted in a factor of two worsening of the magnitude of the direction errors for the matrix representation. Repeated normalizations seem to have adversely affected the accuracy of the vector representation after many iterations, but more work is needed to tell if this is a repeatable effect. In any event, it seems clear that the repeated normalizations are certainly not improving matters much in this case over case IS1.

Figure 11(a) Figure 11(b)

Figure 12(a) Figure 12(b)

Figure 13 About Here

Figure 13. Case ISN. As for the previous figure, but normalizing the rotation representation at each iteration, using no crossproduct matrix normalization, which improved direction errors but worsened length errors over the one crossproduct form. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. (b) Direction error.

Figure 14 About Here

Figure 14. Case RS1. As for case IS1 (no normalization of intermediate representations), but using a sequence of random rotations. There is no difference in performance with the different matrix normalizations; no crossproduct matrix normalization is shown. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector (first 100 values; the rest are similar). (b) Direction error. The rather confusing results at least have some clear qualitative differences.

The main conclusion here is that the cost of normalizing quaternion or matrix representations after every rotation in a repeated sequence may not be repaid in results, and in fact may be actively harmful. The vector representation seems to do best of the three on length error and worst on direction error. More work (analytic or experimental) would be needed to clarify the performance ranks, or even to recommend or disqualify a representation for this task.

4.5 Concatenated Random Rotations

Here a sequence of random rotations is generated as described earlier, analogous to the sequence of identical rotations in the last experiment. They are applied cumulatively to an input vector. In each case computation is with 10 bits precision under rounding. The cases of a single final normalization and of normalizing every intermediate representation (called cases RS1 and RSN) were investigated. Both the no- and one crossproduct forms were investigated. The results, comparable to those in cases IS1 and ISN (Figs. 12 and 13), are shown in Figs. 14 and 15.

The results indicate that the vector representation remains better than the other two for length errors, but for direction errors the results are unclear. It is hard to draw clear distinctions either between the normalization methods or the representations. Again the form of matrix normalization has some annoying effects, which motivate the next section.

Figure 13(a) Figure 13(b)

Figure 14(a) Figure 14(b)

Figure 15 About Here

Figure 15. Case RSN. As for case RS1, but normalizing all intermediate representations. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error, using one crossproduct matrix normalization. (b) Direction error, using the one crossproduct form. (c) Direction error, using the no crossproduct form.

Case         μ-vec      μ-mat      μ-quat     σ-vec      σ-mat      σ-quat
RaTr    L    -          0.006225   0.005026   -          0.001181   0.003369
        D    -          0.001787   0.004578   -          0.001094   0.002307
RaRd    L    -          -0.000310  -0.000225  -          0.001087   0.001263
        D    -          0.000965   0.001180   -          0.000542   0.000759
IS1     L    0.000276   -0.000260  -0.000262  0.000466   0.000857   0.001648
(100)   D    0.003694   0.004063   0.001435   0.001705   0.002298   0.001123
ISN     L    0.000137   -0.001772  -0.000811  0.000533   0.001106   0.001312
(100)   D    0.005646   0.004844   0.001606   0.002786   0.002872   0.001045
RS1     L    -0.000252  -0.000162  -0.000157  0.000521   0.001612   0.002022
(100)   D    0.003710   0.005854   0.006065   0.002150   0.003686   0.006065
RSN     L    0.000164   -0.000602  -0.001097  0.000473   0.000703   0.001666
(100)   D    0.002269   0.009830   0.005960   0.000874   0.004230   0.002236
        D    0.002269   0.005187   0.005960   0.000874   0.002248   0.002236

Table 2. Statistics for length (L) and direction (D) errors for some of the cases illustrated in this section. Cases RaTr and RaRd are the independent random rotations; IS1 and ISN are the iterative sequences (one normalization, and normalization of each intermediate representation); and RS1 and RSN the random sequences. Only the first 100 data values are used from the iterated rotations, to diminish the effect of longer-term trends. The second RSN line is with no crossproduct normalization (which only affects the matrix representation).

4.6 Statistical Summary

Table 2 summarizes some of the data from the cases so far.

4.7 Matrix Normalization

Some experiments explored the performance of the three types of matrix normalization (no, one, or two cross products) in the context of iterated identical and random rotations.
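The exact normalization forms are defined earlier in the paper; purely as an illustration, here is a plausible reconstruction of a one-crossproduct re-orthogonalization (the names and the Gram-Schmidt details are my assumptions, not necessarily the author's exact form):

```python
import math

def _norm(v):
    # Scale a 3-vector to unit length.
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def renormalize_one_crossproduct(R):
    # Illustrative re-orthogonalization of a drifting rotation matrix:
    # keep row 0 (renormalized), project row 1 against it, and rebuild
    # row 2 with a single cross product.
    r0 = _norm(R[0])
    d = sum(a * b for a, b in zip(r0, R[1]))
    r1 = _norm(tuple(b - d * a for a, b in zip(r0, R[1])))
    return (r0, r1, _cross(r0, r1))
```

A "no crossproduct" form would renormalize rows independently without the final cross product, and a "two crossproduct" form would rebuild two rows that way.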

The experiments consist of two iterated identical rotations (Figs. 16, 17) under rounding, and two iterated random rotation sequences under truncation and rounding. Fig. 18 gives the direction errors for two sequences of random rotations applied to an input vector. In no case is there significant difference between the performances on the length error at 200 iterations, but the one- or two crossproduct forms significantly outperform the no crossproduct form over 400 iterations, since the latter exhibits a systematically increasing negative length error and the former do not.

Figure 15(a)

Figure 15(b)

Figure 15(c)

Figure 16 About Here

Figure 16. Iterated identical rotations of the unit vector in direction (1, 1, -1) by 1 radian about the axis (1, 1, 1), computed to 10 bits under rounding. Solid line - no cross products; dotted line - one cross product; dashed line - two cross products. (a) Length errors. (b) Direction errors.

Figure 17 About Here

Figure 17. As for the previous figure, but the rotation is of the unit vector in direction (1, 2, 3) by iterated rotations of 2 radians about the axis (3, -2, -1).

For direction error, few conclusions can be drawn about relative performance under iterated identical rotation sequences. It is perhaps surprising that the no crossproduct form usually performs here at least as well as the other two over a small number of iterations, and that for random sequences there is no clear winner between normalization types, even approximating with truncation. Thus the symptoms noticed in the last two sections, involving long-term relative effects of the normalization form, were not noticed over 200 iterations. The one- and two crossproduct forms may be more stable over long iteration, but the issue of optimal normalization remains open.

4.8 Average Error of Concatenated Random Rotations

Which representation is best, or worst? To wind up this overlong exercise in trying to derive structure from noise, let us consider average behavior for the task of random rotation sequences (perhaps of more general interest than iterated identical rotations). Technical decisions on normalization are based on the conclusions of previous sections: approximation is by rounding, with normalization only after the last rotation. Matrix normalization is done with the one crossproduct method. Each run is a single experiment like that of Section 4.5, only performed (with the same data) at a number of different precisions.

Such a set of runs yields length and direction error surfaces plotted against the precision

Figure 18 About Here

Figure 18. Direction errors only, for a sequence of random rotations applied to an input vector. Legend as in the previous two figures. (a) Truncation to 10 bits. (b) Rounding to 10 bits.

20

___IAIlElf_

shy ___DIrE

Imiddotmiddot

Figure 16(a) Figure 16(b)

OO-G2

0000 fo-------~-------~100 200

Figure 17(a) Figure 17(b)

___DlrElfT

0000 +o-------~----------100

Figure 18(a) Figure 18(b)

Figure 19(a) Figure 19(b)

Figure 19(c) Figure 19(d)

Rep         Repeated Rot                   Random Seq
Vec(Mat)    (9N+17)*, (6N+13)+, 1 sc       26N*, 19N+, N sc
Vec(Quat)   (18N+4)*, (12N)+, 1 sc         22N*, 12N+, N sc
Quat        (16N+22)*, (12N+12)+, 1 sc     (20N+18)*, (12N+12)+, N sc
Matrix      (27N+26)*, (18N+19)+, 1 sc     (44N+9)*, (31N+6)+, N sc

Table 4. Operation counts for N-long rotation sequences. The long form of matrix multiplication is used here. Vec(Mat) is the vector representation with intermediate transformation by matrices, Vec(Quat) with transformation by the quaternion method. Multiplications, additions, and sine-cosine pairs are denoted by *, +, and sc.

matrix transformation), since matrix-vector multiplication costs one third of matrix-matrix multiplication.

4.10 Conclusions and Questions

The conclusions we can draw at this point are the following:

1. Numerical simulation of variable precision arithmetic has yielded some tentative conclusions, but in many interesting cases they are rather delicate and depend on niceties of the implementation. Accurate performance predictions may only be possible with faithful simulation of the particular hardware and software at issue.

2. Practical differences in numerical accuracy tend to disappear at reasonable precisions. After 200 iterations of a rotation, 53 bits of mantissa precision (the double representation in the C language) yields no error to 10 decimal places. With 24 bits (the float representation), effects were noticeable in the 5th to 7th decimal place with all methods.

3. Normalization is vital before quaternion or matrix representations are applied to vectors, but not as important during the composition of representations, and the extra numerical computation of frequent normalization may actually decrease accuracy in the final result.

4. The vector (with matrix transformation) and matrix representations for rotation sequences seem to outperform the quaternion for accuracy.

5. The vector (with matrix transformation) representation for rotation sequences, with a single normalization of vector length at the end, is most computationally efficient and has least length error on the average.

6. Averaged over many trials, the difference in accuracy of the three methods amounts only to about a single bit of precision for a small number of rotation compositions (say 25 or fewer).

7. Matrices are the least computationally efficient on rotation sequences.



There are also some remaining basic questions:

1. What is a reasonable analytic approach to the numerical accuracy problem for interesting rotational tasks (say, sequences of random rotations under rounding)?

2. How should optimal matrix or quaternion normalization methods be defined and constructed?

5 Discussion

The area of robust computational geometry is becoming popular, with an annual international conference and increasingly sophisticated theoretical formulations. Very little of that work (if any) is related to rotation algorithms. Symbolic algebra systems can help with analysis [4]. A good treatment of computer arithmetic and rounding appears in [15], with an application to detecting line intersection. Serious qualitative representational problems arise when issues of topology (line connectivity) are at stake, and much of the work is on computing line intersection accurately with finite precision arithmetic [19]. For instance, Milenkovic's analysis is based on facts about line intersection, such as the number of extra bits that are necessary to guarantee a specific precision of answer. Some of these facts are surprising: it takes at least three times the input precision to calculate a line intersection location to the same precision [11]. Robust polygon and polyhedron intersection algorithms, as well as various line-sweeping algorithms, are based on accurate line intersection algorithms [8].

Several points of comparison between rotation parameterizations and representations have been raised in the preceding sections. It is perhaps worth repeating that little evidence for dramatic differences in numerical accuracy has emerged, though the evidence might tend to favor matrices slightly and to condemn the vector representation.

One of the common applications of rotational transforms is to affect several vectors at once, either in the context of transforming a set of points around (as in graphics), or in handling coordinate systems represented as n points (n - 1 of them at points at infinity representing axis directions, and one representing the origin of coordinates, in a homogeneous coordinate system). In the first case the repeated application of the same transform will be more economical if the work is done in the matrix representation. In the second the same may well be true, but of course the whole point of homogeneous coordinates is to make the projective (a fortiori affine) transforms into linear ones implemented with matrix multiplication. This means, for example, that coordinate systems, rigid transformations including translation and rotation, and point projective transforms (simple camera models) can all be represented as matrices and composed with matrix multiplication, and that the operation of such transforms on vectors is simply matrix-vector multiplication. In this context homogeneous coordinates by definition require the matrix representation.

If the rotation at issue is a general one (say, specified in conic (φ, n) parameters), then conversion either to the quaternion or matrix form is easy. Thus the representations in this case do not dictate the semantics, or how one wants to think about the rotation. If the rotation is specified in terms of rotations about (X, Y, Z) coordinate axes, then the matrix representation is easily and robustly constructed.

Real-time applications involving concatenating rotations and applying them to a small number of vectors (perhaps one: the position of a robot hand) need detailed analysis to see which representation is the most efficient. Similarly for more complex geometric calculations.

A common application of rotation representations in vision is to express the motion of the observer. Often the rotational motions, for mechanical or mathematical reasons, are expressed as rotations about some fixed axes, which in practice are usually orthogonal. Examples are the motions induced by pan and tilt camera platforms, or the expression of egomotion in terms of rotations about the observer's local X, Y, Z axes. In this case the matrix representation is the clear choice, to maximize efficiency and intuition.

Reconstructing the rotation that transformed known points is a useful vision task that requires fitting data to a model of the transformation. The straightforward route, followed since Roberts' thesis [17], is to do a linear least squares fit to the transform's matrix representation. This approach is perhaps risky, since the matrix elements are not independent and the error being minimized thus may not be the one desired. The alternative of using conic parameters has been successfully used [12] and may well be a better idea. Recently more sophisticated Kalman filter methods have been used to derive motion from point matches through time [5]. Model-matching with constraints allows a model to be matched to a class of objects, and promising approaches are emerging that use hybrid methods of symbolic algebra (implementing polynomial triangulation to get explicit solutions for known constraints) followed by root-finding and sophisticated minimization techniques to search for minimum-error solutions [13, 14].

A less common but important set of applications is to reason about and manipulate rotations themselves. For example, one may want to sample rotation space evenly [9], or to create smoothly interpolated paths in rotation space between a given set of rotations [16]. In these cases the Euler-Rodrigues parameters are the most convenient.

Another example is the use, since 1957, of quaternions in the neurophysiological literature to describe eye kinematics [22]. Models of human three-dimensional oculomotor performance, especially for the vestibulo-ocular reflex and feedback models of saccades, involve computations on rotations that are much more stable when carried out in conic or Euler-Rodrigues parameters using quaternion arithmetic. Subtractive feedback laws adequate for simple (left or right) eye movements do not generalize to arbitrary rotational axes. The quaternion model uses multiplicative feedback, explains human performance better than similar models based on rotation matrices, and leads to explicit predictions, such as a four-channel saccade computation [21].


6 Bibliography

[1] S. L. Altmann. Rotations, Quaternions, and Double Groups. Clarendon Press, Oxford, 1986.

[2] C. M. Brown. Gaze controls with interactions and delays. In DARPA Image Understanding Workshop (submitted IEEE-TSMC), pages 200-218, May 1989.

[3] C. M. Brown. Prediction in gaze and saccade control. Technical Report OUEL 1771/89 (UR TR-295), Oxford University Dept. Engg. Science (U. Rochester Comp. Sci. Dept.), May 1989.

[4] C. I. Connolly, D. Kapur, J. L. Mundy, and R. Weiss. GeoMeter: A system for modeling and algebraic manipulation. In Proceedings, DARPA Image Understanding Workshop, pages 797-804, May 1989.

[5] O. D. Faugeras, F. Lustman, and G. Toscani. Motion and structure from motion from point and line matches. In Proceedings, International Conference on Computer Vision, pages 25-34, June 1987.

[6] J. D. Foley and A. Van Dam. Fundamentals of Interactive Computer Graphics. Addison-Wesley, 1982.

[7] W. R. Hamilton. On quaternions; or on a new system of imaginaries in algebra. Philosophical Magazine, 3rd Ser., 25:489-495, 1844.

[8] C. Hoffmann and J. Hopcroft. Towards implementing robust geometric computations. In Proceedings, ACM Symposium on Computational Geometry, 1988.

[9] Berthold K. P. Horn. Robot Vision. MIT Press/McGraw-Hill, 1986.

[10] K. Kanatani. Group Theory in Computer Vision. Springer-Verlag, 1989.

[11] V. J. Milenkovic. Robust geometric computations for vision and robotics. In Proceedings, DARPA Image Understanding Workshop, pages 764-773, May 1989.

[12] J. L. Mundy. Private communication, August 1989.

[13] J. L. Mundy. The application of symbolic algebra techniques to object modeling. Lecture, University of Oxford, August 1989.

[14] J. L. Mundy. Symbolic representation of object models. In preparation, August 1989.

[15] T. Ottmann, G. Thiemt, and C. Ullrich. Numerical stability of geometric algorithms. In Proceedings, ACM Symposium on Computational Geometry, 1987.

[16] K. S. Roberts, G. Bishop, and S. K. Ganapathy. Smooth interpolation of rotational motion. In Computer Vision and Pattern Recognition 1988, pages 724-729, June 1988.

[17] L. G. Roberts. Machine perception of three-dimensional solids. In J. T. Tippett et al., editors, Optical and Electro-Optical Information Processing, pages 159-197. MIT Press, 1965.

[18] O. Rodrigues. Des lois géométriques qui régissent les déplacements d'un système solide dans l'espace, et de la variation des coordonnées provenant de ces déplacements considérés indépendamment des causes qui peuvent les produire. Journal de Mathématiques Pures et Appliquées, 5:380-440, 1840.

[19] D. Salesin, J. Stolfi, and L. Guibas. Epsilon geometry: building robust algorithms from imprecise computations. In Proceedings, ACM Symposium on Computational Geometry, 1989.

[20] R. Taylor. Planning and execution of straight line manipulator trajectories. In J. M. Brady, J. M. Hollerbach, T. L. Johnson, T. Lozano-Pérez, and M. T. Mason, editors, Robot Motion: Planning and Control. MIT Press, 1982.

[21] D. Tweed and T. Vilis. Implications of rotational kinematics for the oculomotor system in three dimensions. Journal of Neurophysiology, 58(4):832-849, 1987.

[22] G. Westheimer. Kinematics of the eye. Journal of the Optical Society of America, 47:967-974, 1957.



A real quaternion is of the form [a, 0] and multiplies like a real number. A pure quaternion has the form [0, A]. A unit quaternion is a pure quaternion [0, n] with n a unit vector. The conjugate of a quaternion A = [a, A] is the quaternion A* = [a, -A]. The norm of a quaternion A is the square root of AA*, which is seen to be the square root of a² + A·A. A quaternion of unit norm is a normalized quaternion, and it is normalized quaternions that represent rotations. A quaternion [a, A] = [a, λn] can satisfy the normalization condition if a = cos(α), λ = sin(α). This gives a Hamilton quaternion, or versor, which we shall not use. Better for our purposes is to write a = cos(φ/2), λ = sin(φ/2). The conjugate inverts the rotation axis and thus denotes the inverse rotation. The inverse quaternion of A is A⁻¹ such that AA⁻¹ = [1, 0]. For a normalized quaternion the inverse is the conjugate. The product of two normalized quaternions is a normalized quaternion, a property we desire for a rotation representation. Under the Rodrigues parameterization a normalized quaternion represents a rotation, quaternion multiplication corresponds to composition of rotations, and quaternion inverse is the same as quaternion conjugation.
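A minimal sketch of these quaternion primitives (the representation and names are mine: a quaternion is a (scalar, vector) pair):

```python
import math

def qconj(q):
    # Conjugate [a, A]* = [a, -A].
    a, v = q
    return (a, (-v[0], -v[1], -v[2]))

def qnorm(q):
    # Norm: sqrt(a^2 + A.A).
    a, v = q
    return math.sqrt(a * a + v[0] ** 2 + v[1] ** 2 + v[2] ** 2)

def qmul(p, q):
    # Quaternion product: [a, A][b, B] = [ab - A.B, aB + bA + A x B].
    a, A = p
    b, B = q
    dot = A[0]*B[0] + A[1]*B[1] + A[2]*B[2]
    cross = (A[1]*B[2] - A[2]*B[1], A[2]*B[0] - A[0]*B[2], A[0]*B[1] - A[1]*B[0])
    return (a * b - dot,
            tuple(a * Bi + b * Ai + Ci for Ai, Bi, Ci in zip(A, B, cross)))

def rotation_quaternion(phi, n):
    # Normalized rotation quaternion [cos(phi/2), sin(phi/2) n], n a unit axis.
    s = math.sin(phi / 2.0)
    return (math.cos(phi / 2.0), (s * n[0], s * n[1], s * n[2]))
```

The closure property mentioned above is visible directly: the product of two normalized quaternions again has unit norm, and multiplying by the conjugate recovers [1, 0].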

One way to apply a rotation represented by a quaternion to a vector is to represent the vector by a pure quaternion and to conjugate it with the rotation quaternion. That is, to rotate a given vector r by A = [cos(φ/2), sin(φ/2) n], form a pure quaternion r = [0, r]; the resulting vector is the vector part of the pure quaternion

r' = A r A*    (4)

From the quaternion multiplication rule the conical transformation follows, though it can also be derived in other ways. It describes the effect of rotation upon a vector r in terms of the conical rotation parameters:

r' = cos φ r + sin φ (n × r) + (1 - cos φ)(n · r) n    (5)

For computational purposes neither conjugation nor the conical transform is as quick as the clever formula (note the repeated subexpression) given in [9], arising from applying vector identities to eq. (5). Given a vector r and the normalized quaternion representation [a, A], the rotated vector is

r' = r + 2(a(A × r) - (A × r) × A)    (6)
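The agreement between the conical transform (eq. (5)) and the fast form (eq. (6)) is easy to check numerically; a sketch (function names mine):

```python
import math

def rotate_conical(r, phi, n):
    # Eq. (5): r' = cos(phi) r + sin(phi) (n x r) + (1 - cos(phi)) (n . r) n
    c, s = math.cos(phi), math.sin(phi)
    nxr = (n[1]*r[2] - n[2]*r[1], n[2]*r[0] - n[0]*r[2], n[0]*r[1] - n[1]*r[0])
    ndr = n[0]*r[0] + n[1]*r[1] + n[2]*r[2]
    return tuple(c*ri + s*xi + (1 - c)*ndr*ni for ri, xi, ni in zip(r, nxr, n))

def rotate_fast(r, a, A):
    # Eq. (6): r' = r + 2(a (A x r) - (A x r) x A), reusing (A x r).
    Axr = (A[1]*r[2] - A[2]*r[1], A[2]*r[0] - A[0]*r[2], A[0]*r[1] - A[1]*r[0])
    AxrxA = (Axr[1]*A[2] - Axr[2]*A[1],
             Axr[2]*A[0] - Axr[0]*A[2],
             Axr[0]*A[1] - Axr[1]*A[0])
    return tuple(ri + 2.0 * (a*c - d) for ri, c, d in zip(r, Axr, AxrxA))
```

With [a, A] built from the same (φ, n), the two routes agree to machine precision and both preserve the input length.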

The matrix representation of a rotation represented by conic parameters (φ, n) can be written explicitly in terms of those parameters, and deriving it is a well known exercise for robotics students. It can be found, for example, in [1]. For computational purposes it is easier to use the following result. The matrix corresponding to the quaternion rotation representation [λ, λx, λy, λz] is

    [ 2(λ² + λx²) - 1    2(λxλy - λλz)      2(λxλz + λλy)   ]
    [ 2(λxλy + λλz)      2(λ² + λy²) - 1    2(λyλz - λλx)   ]    (7)
    [ 2(λxλz - λλy)      2(λyλz + λλx)      2(λ² + λz²) - 1 ]
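A direct transcription of eq. (7) (a sketch; λ is written l to stay in ASCII):

```python
def quat_to_matrix(l, lx, ly, lz):
    # Rotation matrix of eq. (7) for a normalized quaternion [l, lx, ly, lz].
    return (
        (2*(l*l + lx*lx) - 1, 2*(lx*ly - l*lz),     2*(lx*lz + l*ly)),
        (2*(lx*ly + l*lz),    2*(l*l + ly*ly) - 1,  2*(ly*lz - l*lx)),
        (2*(lx*lz - l*ly),    2*(ly*lz + l*lx),     2*(l*l + lz*lz) - 1),
    )
```

For instance, the quaternion with all four components 0.5 (a 120-degree rotation about (1, 1, 1)) yields the cyclic permutation matrix, and the identity quaternion [1, 0, 0, 0] yields the identity matrix.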


2.2 Matrix Representation

This representation is the most familiar to graphics, computer vision, and robotics workers, and is given correspondingly short shrift here. A rotation is represented by an orthonormal 3 × 3 matrix. This rotation component occupies the upper left hand corner of the usual 4 × 4 homogeneous transform matrix for three dimensional points expressed as homogeneous 4-vectors. The matrix representation is highly redundant, representing as it does three quantities with nine numbers.

In this section capital letters are matrix variables. The primitive rotation matrices are usually considered to be those for rotation about coordinate axes. For example, to rotate a (column) vector r by φ about the X axis we have the familiar

         [ 1      0         0     ]
    r' = [ 0    cos φ    -sin φ   ] r    (8)
         [ 0    sin φ     cos φ   ]

with related simple matrices for the other two axes. The composition of rotations A, B, C is written r' = CBAr, which is interpreted either as performing rotations A, B, C in that order, each being specified with respect to a fixed (laboratory) frame, or as performing them in the reverse order C, B, A, with each specified in the coordinates that result from applying the preceding rotations to the preceding coordinate frame, starting with C acting on the laboratory frame.

Conversion methods from other rotation parameterizations (like Euler angles) to the matrix representation appear in [1]. Generally the matrices that result are of a moderate formal complexity, but with pleasing symmetries of form arising from the redundancy in the matrix. In this work all we shall need is eq. (7).

Infinitesimal rotations are interesting. The skew symmetric component of a rotation matrix A is S = A - Aᵀ, and it turns out that

    A = exp(S) = I + S + (1/2)S² + ···    (9)

A little further work establishes that if the conical representation of a rotation is (φ, (nx, ny, nz)) and

        [  0    -nz    ny ]
    Z = [  nz    0    -nx ]    (10)
        [ -ny    nx    0  ]

then A = exp(φZ) = I + (sin φ)Z + (1 - cos φ)Z², from which the explicit form of the 3 × 3 matrix corresponding to the conic representation can be found.
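The identity A = I + (sin φ)Z + (1 - cos φ)Z² can be transcribed directly; a sketch (function names mine):

```python
import math

def axis_matrix(n):
    # Skew matrix Z of eq. (10) for a unit axis n = (nx, ny, nz).
    nx, ny, nz = n
    return ((0.0, -nz, ny), (nz, 0.0, -nx), (-ny, nx, 0.0))

def matmul(A, B):
    # Plain 3x3 matrix product.
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(3))
                       for j in range(3)) for i in range(3))

def rotation_from_conic(phi, n):
    # A = I + sin(phi) Z + (1 - cos(phi)) Z^2  (the closed-form series sum).
    Z = axis_matrix(n)
    Z2 = matmul(Z, Z)
    I = ((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))
    s, c = math.sin(phi), math.cos(phi)
    return tuple(tuple(I[i][j] + s * Z[i][j] + (1 - c) * Z2[i][j]
                       for j in range(3)) for i in range(3))
```

For n = (0, 0, 1) this reproduces the standard Z-axis rotation matrix, the analogue of eq. (8).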

Writing eq. (10) as

            [ 0   0    0 ]        [  0   0   1 ]        [ 0   -1   0 ]
    Z = nx  [ 0   0   -1 ]  + ny  [  0   0   0 ]  + nz  [ 1    0   0 ]    (11)
            [ 0   1    0 ]        [ -1   0   0 ]        [ 0    0   0 ]


shows that infinitesimal rotations can be expressed elegantly in terms of orthogonal infinitesimal rotations. It is of course no accident that the matrix weighted by nx in eq. (11) is the derivative of the matrix in eq. (8) evaluated at φ = 0. Further, it will be seen that if v' is the vector arising from rotating vector v by the rotation (δφ, n), then each component of v' is just the sum of various weighted components of v. Thus infinitesimal rotations, unlike general ones, commute.

3 Operation Counts

Given the speed and word length of modern computers, including those used in embedded applications like controllers, concern about operation counts and numerical accuracy of rotational operations may well be irrelevant. However, there is always a bottleneck somewhere, and in many applications such as control, graphics, and robotics it can still happen that computation of rotations is time consuming. Small repeated incremental rotations are especially useful in graphics and in some robot simulations [623]. In this case the infinitesimal rotations of the last section can be used if some care is exercised. To compute the new x' and y' that result from a small rotation θ around the Z axis, use

x' = x - y sin θ,    y' = y + x' sin θ    (12)

The use of x' in the second equation maintains normalization (unit determinant of the corresponding rotation matrix). For graphics, cumulative error can be fought by replacing the rotated data by the original data when the total rotation totals 2π [6].
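Eq. (12) can be sketched as follows (function name mine). The implied 2 × 2 update matrix is [[1, -s], [s, 1 - s²]], whose determinant is exactly 1, which is why iterates do not spiral in or out:

```python
import math

def spin(x, y, theta, steps):
    # Eq. (12): x' = x - y sin(theta); y' = y + x' sin(theta).
    # The updated x' feeds the second line, keeping the implied 2x2
    # update matrix at unit determinant.
    s = math.sin(theta)
    for _ in range(steps):
        x = x - y * s
        y = y + x * s
    return x, y
```

Had the second line used the old x instead, the determinant would be 1 + s² and the point would drift outward by that factor every step.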

Operation counts of course depend not just on the representation but on the algorithm implemented. As the application gets more complex, it becomes increasingly likely that ameliorations in the form of cancellations, appropriate data structures, etc. will arise. Even simple operations sometimes offer surprising scope for ingenuity (compare eqs. (5) and (6)). Thus it is not clear that listing the characteristics of a few primitive operations is guaranteed to be helpful (I shall do it nevertheless). Realistic prediction of savings possible with one representation or the other may require detailed algorithmic analysis. Taylor [20] works out operation counts for nine computations central to his straight line manipulator trajectory task. In these reasonably complex calculations (up to 112 adds and 156 multiplies in one instance), quaternions often cut down the number of operations by a factor of 1.5 to 2. Taylor also works out operation counts for a number of basic rotation and vector transformation operations. This section mostly makes explicit the table appearing in Taylor's chapter detailing those latter results; Section 4.9 gives counts for iterated (identical and different) rotations.

3.1 Basic Operations

Recall the representations at issue:

Conic (φ, n) - axis n, angle φ


Quaternion [λ, Λ], with λ = cos(φ/2), Λ = sin(φ/2) n

Matrix - a 3 × 3 matrix

For 3-vectors, if multiplication includes division, we have the following:

Vector Addition: 3 additions.

Scalar-Vector Product: 3 multiplications.

Dot Product: 3 multiplications, 2 additions.

Cross Product: 6 multiplications, 3 additions.

Vector Normalization: 6 multiplications, 2 additions, 1 square root.

3.2 Conversions

Conic to Quaternion: One division to get φ/2, one sine and one cosine operation, three multiplications to scale n. Total: 4 multiplications, 1 sine, 1 cosine.

Quaternion to Conic: Although it looks like one arctangent is all that is needed here, after dividing the quaternion through by λ, that seems to lose track of the quadrants. Best seems simply finding φ/2 by an arccosine and a multiplication by 2.0 to get φ, then computing sin(φ/2) and dividing the vector part by that. Total: 1 arccosine, 1 sine, 4 multiplications. Special care is needed here for small (especially zero) values of φ.
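The arccosine route, with the care for small φ the text calls for, might look like this (a sketch; the names, tolerance, and fallback axis are mine):

```python
import math

def quat_to_conic(q, eps=1e-12):
    # q = (a, (lx, ly, lz)), normalized. Returns (phi, n).
    # phi/2 via arccos of the scalar part; the vector part is divided
    # by sin(phi/2). Near phi = 0 the axis is undefined and needs care.
    a, v = q
    half = math.acos(max(-1.0, min(1.0, a)))   # clamp against fp overshoot
    phi = 2.0 * half
    s = math.sin(half)
    if s < eps:                                # no rotation: axis arbitrary
        return 0.0, (1.0, 0.0, 0.0)
    return phi, tuple(c / s for c in v)
```

The operation mix matches the count above: one arccosine, one sine, and a handful of multiplications (divisions).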

Quaternion to Matrix: Eq (7) is the basic tool. Although the matrix is orthonormal (thus one can compute a third row from the other two by vector cross product), that constraint does not help in this case. Computing all the necessary products takes 13 multiplications, after which 8 additions generate the first two rows. The last row costs only another 5 additions working explicitly from eq (7), compared to the higher cost of a cross product. Total: 13 multiplications, 13 additions.

Conic to Matrix: One way to proceed is to compute the matrix whose elements are expressed in terms of the conic parameters (referred to in Section 2). Among other things it requires more trigonometric evaluations, so best seems the composition of the conic-to-quaternion and quaternion-to-matrix conversions, for a total of 17 multiplications, 13 additions, 1 sine, 1 cosine.

Matrix to Quaternion: The trace of the matrix in eq (7) is seen to be 4λ² − 1, and differences of symmetric off-diagonal elements yield quantities of the form 2λAz, etc. Total: 5 multiplications, 6 additions, 1 square root.
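Both directions of the quaternion-matrix conversion can be sketched as follows. This is a hedged sketch using the standard layout of eq (7) (the report's exact element form may differ by a constant factor in the off-diagonal differences), and it ignores the degenerate case λ ≈ 0 (φ ≈ π):

```python
import math

def quaternion_to_matrix(lam, A):
    # Standard quaternion-to-rotation-matrix form (eq (7)-style).
    x, y, z = A
    return [[1 - 2*(y*y + z*z), 2*(x*y - lam*z),   2*(x*z + lam*y)],
            [2*(x*y + lam*z),   1 - 2*(x*x + z*z), 2*(y*z - lam*x)],
            [2*(x*z - lam*y),   2*(y*z + lam*x),   1 - 2*(x*x + y*y)]]

def matrix_to_quaternion(R):
    # trace(R) = 4*lam^2 - 1, so lam = sqrt((trace + 1) / 4); the
    # off-diagonal differences are proportional to lam * A.
    t = R[0][0] + R[1][1] + R[2][2]
    lam = math.sqrt((t + 1.0) / 4.0)
    k = 1.0 / (4.0 * lam)
    return lam, [(R[2][1] - R[1][2]) * k,
                 (R[0][2] - R[2][0]) * k,
                 (R[1][0] - R[0][1]) * k]
```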

Matrix to Conic: Easiest seems matrix to quaternion to conic.

3.3 Rotations

Vector by Matrix: Three dot products. Total: 9 multiplications, 6 additions.


Vector by Quaternion: Implement eq (6). Total: 18 multiplications, 12 additions.

Compose Rotations (Matrix Product): Here it pays to exploit orthonormality rather than simply implementing matrix multiplication with nine dot products. Six dot products and one cross product are necessary. Total: 24 multiplications, 15 additions. This method uses only 2/3 of the information in the matrices and may be suspect for inexact representations (see below on normalization).

Compose Rotations (Quaternion Product): Implementing eq (3) directly takes one dot product, one cross product, two vector additions, two vector-scalar products, one multiply, and one add. Total: 16 multiplications, 12 additions.
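That quaternion product can be sketched in Python with the operation count visible in the code (function name mine; the product form is the standard one consistent with eq (3) as described):

```python
def quat_mul(q1, q2):
    # [l1, A1] * [l2, A2] = [l1*l2 - A1.A2, l1*A2 + l2*A1 + A1 x A2]:
    # one dot product, one cross product, two scalar-vector products,
    # two vector additions, one multiply, and one add.
    l1, a1 = q1
    l2, a2 = q2
    dot = a1[0]*a2[0] + a1[1]*a2[1] + a1[2]*a2[2]
    cross = [a1[1]*a2[2] - a1[2]*a2[1],
             a1[2]*a2[0] - a1[0]*a2[2],
             a1[0]*a2[1] - a1[1]*a2[0]]
    return l1*l2 - dot, [l1*a2[i] + l2*a1[i] + cross[i] for i in range(3)]
```

Composing two rotations of 0.5 rad about the same axis should yield the quaternion of a 1.0-rad rotation, which gives a quick check.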

Rotate around X, Y, or Z Axis: To do the work of eq (8) requires 1 sine, 1 cosine (unless they are already known), 4 multiplications, and 2 additions.

3.4 Normalization

In the conic representation one has little choice but to normalize the axis direction vector to unit length and leave the rotation angle alone. In the quaternion and matrix representations the direction and magnitude are conflated in the parameters, and normalization of the parameters affects them both.

Quaternion: Simply dividing all quaternion components through by the norm of the quaternion costs 8 multiplies, 3 additions, and one square root.

Matrix: An exact rotation matrix is orthonormal. Three simple normalization algorithms suggest themselves. In the no-crossproduct method, each matrix row (in this paragraph meaning row or column) is normalized to a unit vector. Cost: 12 multiplies, 6 additions, 3 square roots. No rows are guaranteed orthogonal. The one-crossproduct method normalizes two rows and computes the third as their cross product. Cost: 18 multiplies, 7 additions, 2 square roots. This method does not guarantee that the first two rows are orthogonal. The two-crossproduct method performs the one-crossproduct normalization, then recomputes the first row as the cross product of the second and third, say, for a total of 23 multiplies, 10 additions, and 2 square roots. Neither of the last two methods uses information from one matrix row. The best normalization procedure, by definition, should return the orthonormal matrix closest to the given one; the above procedures are approximations to this ideal. One idea might be to find the principal axis direction (the eigenvector corresponding to the largest real eigenvalue) and call that the normalized rotation axis. An exact rotation matrix M has one eigenvalue of unity and two conjugate complex ones. What then of the rotation angle? In the errorless matrix cos(φ) = (Trace(M) − 1)/2, and the trace is invariant under the eigenvalue calculation. Thus the angle can be calculated from three of the matrix elements. But does this choice minimize any reasonable error in rotation space? Indeed, what error should be minimized? The work here uses the no- and one-crossproduct forms, tries some empirical investigation in Section 4.7, and leaves the large question of optimal normalization as a research issue. Exercise: find the eigenvalues and eigenvectors of the matrix in eq (8).
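The no- and one-crossproduct normalizations might be sketched as below (a sketch, not the report's code; the two-crossproduct form just adds one more cross product to recompute the first row):

```python
import math

def _unit(v):
    # Normalize a 3-vector to unit length.
    n = math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2])
    return [c / n for c in v]

def _cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def normalize_no_crossproduct(R):
    # Normalize each row to unit length; rows are not forced orthogonal.
    return [_unit(row) for row in R]

def normalize_one_crossproduct(R):
    # Normalize two rows and replace the third by their cross product;
    # the first two rows are still not forced orthogonal.
    r0, r1 = _unit(R[0]), _unit(R[1])
    return [r0, r1, _cross(r0, r1)]
```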


Operation            | Quaternion        | Matrix
---------------------|-------------------|------------------------------
Rep -> Conic         | 4×, 1 acos, 1 sin | 9×, 6+, 1 acos, 1 sin, 1 √
Conic -> Rep         | 4×, 1 sin, 1 cos  | 17×, 13+, 1 sin, 1 cos
Rep -> Other         | 13×, 13+          | 5×, 6+, 1 √
Rep ∘ vector         | 18×, 12+          | 9×, 6+
Rep ∘ Rep            | 16×, 12+          | 24×, 15+
X, Y, or Z Axis Rot  | 8×, 4+            | 4×, 2+
Normalize            | 8×, 3+, 1 √       | 18×, 7+, 2 √

Table 1: Summary of quaternion and matrix operation counts. Rep means the representation of the column heading; Other means that of the other column heading. Applying a rotation to a vector or to another rotation is denoted by ∘. Multiplications, additions, and square roots are denoted by ×, +, and √. Trigonometric functions are denoted sin, acos, etc.

3.5 Summary and Discussion

The operation counts are summarized in Table 1. They generally agree with those in [20], with a few differences. Taylor seems to have a quaternion-to-matrix conversion that takes 9 multiplies and 19 adds (Exercise: find it), and he implements his matrix-to-quaternion conversion with 3 sines, 3 cosines, and one arctangent instead of a single square root. Taylor normalizes matrix representations by the one-crossproduct method. The use of eq (6) for vector rotation saves 4 multiplies over Taylor's counts. In any event, the differences in costs of the two representations do not seem as dramatic for these basic calculations as for Taylor's more complex robotic trajectory calculations, but iterative operations reveal a difference of a factor of two to three in effort (see Section 4.9). Generally, it may be impossible to predict realistically the extent of the savings possible with one representation over the other without a detailed algorithmic analysis.

4 Numerical Accuracy

4.1 Vector Rotation Tasks, Methods, and Representations

The manipulations of quaternions, matrices, and vectors presented in Section 3 are based on simple building blocks of vector addition, scaling, and dot and cross products. The numbers involved in the representations are generally well behaved, with vector components and trigonometric functions lying in the interval [0, 1] and rotation magnitudes and inverse trigonometric functions often in the interval [−2π, 2π]. Neither representation requires algorithms with obvious numerically dangerous operations like dividing by small differences.

It is often alleged that one representation is more numerically accurate than the other. However, it is not easy to see how to quantify this claim analytically. The matrix representation is redundant, representing the three parameters with nine numbers. Does this mean it is more vulnerable to numerical error, or less? Given that the algorithms are straightforward and simple, do their numerical results depend more on their structure or on the data they operate on? The individual numbers (often components of unit 3-vectors) have nonlinear constraints that may affect the numerical performance. Perhaps some of the alleged differences are thought to arise in microcomputer implementations with small word size; how does accuracy vary with the number of significant digits?

In general, the lack of analytical leverage on these questions motivated the empirical study reported in this section. Herein we compare representations for several common vector rotation tasks: rotation of a vector, iterated identical rotations of a vector, and iterated different (random) rotations of a vector. Error is measured either as an error vector or as the amount by which the resulting vector's length and direction differ from the correct values.

More precisely, a correct answer is determined by computation at full (double) precision. When the rotation task is to iterate the same rotation, the correct answer can be (and is) calculated for the final cumulative rotation rather than calculated by iteration. The application of a rotation to a vector is accomplished by two methods: by vector-matrix multiplication as in eq (8), and by Horn's quaternion-vector formula, eq (6). The iteration of a rotation is accomplished in three ways: quaternion-quaternion multiplication, matrix-matrix multiplication, and the quaternion-vector formula. Thus the results of iterated rotation are represented as a quaternion, a matrix, and a (cumulatively rotated) vector, respectively.

The experiments use a variable-precision implementation of all the relevant real, vector, quaternion, and matrix operations, offering a runtime choice of the number of bits of mantissa precision and the approximation type (rounding or truncation). The most basic operations are approximation, which rounds or truncates the mantissa of a C-language double (double-precision real floating point number) to the required precision, and arithmetic operations that approximate the operands, operate (at full precision), and approximate the result. From these basics are built vector cross and dot products, quaternion and matrix products, normalization routines, etc.
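The mantissa approximation step might be sketched in Python as follows (a reconstruction of the idea, not the original C code; `math.frexp`/`math.ldexp` split a double into mantissa and exponent):

```python
import math

def approximate(x, bits, mode="round"):
    # Keep `bits` bits of mantissa, either rounding to nearest or
    # truncating toward zero, mimicking the report's reduced precision.
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)              # x = m * 2**e with 0.5 <= |m| < 1
    scaled = m * (1 << bits)
    if mode == "round":
        scaled = math.floor(scaled + 0.5) if scaled >= 0.0 else math.ceil(scaled - 0.5)
    else:                             # "trunc": discard low-order bits
        scaled = math.trunc(scaled)
    return math.ldexp(scaled / (1 << bits), e)
```

Reduced-precision arithmetic then approximates the operands, operates at full precision, and approximates the result, exactly as described above.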

Vectors are specified or created randomly as (x, y, z) unit vectors at the highest machine precision. Rotations are specified or created randomly in the conic (φ, n) (rotation angle, rotation axis direction) representation, also at highest precision. Random unit vectors for input or rotation axes are not uniformly distributed in orientation space, being generated by normalizing three uniform variates to produce a unit vector. For random (φ, n) rotations, a random vector is produced for n, and a fourth uniform variate is normalized to the range [−π, π]. Exercise: what do these distributions of vectors and rotations look like in orientation and rotation space?
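That generation scheme might look like the following sketch (the interval for the raw variates is my assumption; the report says only "uniform variates"):

```python
import math
import random

def random_unit_vector(rng=random):
    # Normalize three uniform variates; as noted in the text, the
    # resulting directions are NOT uniform over the sphere.
    while True:
        v = [rng.uniform(-1.0, 1.0) for _ in range(3)]
        n = math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2])
        if n > 1e-9:                  # guard against a near-zero draw
            return [c / n for c in v]

def random_conic_rotation(rng=random):
    # A (phi, n) pair: random axis plus a fourth uniform variate
    # scaled to the range [-pi, pi].
    return rng.uniform(-math.pi, math.pi), random_unit_vector(rng)
```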

The correct answer is computed at highest precision by applying a single rotation (by Nφ for an N-long iteration of the same rotation φ) around the rotation axis to the input vector, using eq (6). Then, working at the lower experimental precision, the conic input representation is converted to quaternion and matrix representations by eqs (2) and (7). Next, the experimental rotation task is carried out at the experimental lower precision. Finally, the error analysis consists of highest-precision comparison of the low-precision result with the high-precision correct answer.

When a reduced-precision vector v′ is obtained, its error is determined by comparing it to the full-precision, non-iteratively rotated vector v1. The error vector is v′ − v1; the length error ΔL and direction error ΔD are

ΔL = | ||v′|| − ||v1|| |   (13)

ΔD = arccos(v̂′ · v̂1)   (14)

where v̂ denotes the unit vector in the direction of v.

In all the work here ||v|| = 1. Directional errors are reported in radians, length errors in units of the (unit) length of the vector to be transformed. The experiments are reported below in a rather discursive style. Along the way a few side issues are addressed, and conclusions are drawn that affect the later choices of experimental parameters.
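Eqs (13) and (14) translate directly into code; a small sketch (the clamp before the arccosine guards against roundoff pushing the cosine just outside [−1, 1]):

```python
import math

def _norm(v):
    return math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2])

def length_error(v_rot, v_true):
    # Eq (13): absolute difference of the two vector lengths.
    return abs(_norm(v_rot) - _norm(v_true))

def direction_error(v_rot, v_true):
    # Eq (14): angle in radians between the two directions.
    c = sum(a * b for a, b in zip(v_rot, v_true)) / (_norm(v_rot) * _norm(v_true))
    return math.acos(max(-1.0, min(1.0, c)))
```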

4.2 Analytic Approaches

Let us begin by considering the conversion of conic (φ, n) rotation parameters to quaternion and matrix form. For simplicity we might assume that θ = φ/2 is known exactly. Assume that at n bits of precision a number x is represented to a maximum additive absolute error of δ = 2⁻ⁿ, which is a relative error of δ/x. Ignoring second-order terms, the product of two imprecise numbers (x + δ)(y + ε) has additive relative error of ε/y + δ/x. When looking up or computing a function, the error to first order in f(x + δ) is δf′(x). Thus the error in computing the cosine of an errorful number x + δ is sin(x)δ. If, as in the implementation of this paper, the function is computed initially to high precision and then approximated, the further approximation may add another error of δ, for a total absolute error of (1 + sin(x))δ. Of course, the function computation could be set up to eliminate the last approximation error if the working precision were known in advance.

These considerations are a start at constructing formulae for the relative error in the conversion of conic to quaternion and matrix form via eqs (2) and (7). However, missing is the constraint that n be a unit vector (that the matrix be orthonormal). Without this constraint the predictions of errors will be pessimistic. Interacting with the constraint is the usual simplistic assumption that the worst happens in every operation (i.e., errors never cancel). This usual assumption makes analysis possible but potentially irrelevant. In fact, errors in the matrix and quaternion components after conversion depend closely on the values of the inputs, and empirically they do not exhibit predictable, data-independent structure.

Turning to a slightly grosser examination of the structure of rotation operations, consider the implementation used in the work below for the two competing equations for quaternion-mediated rotation (eq (6)) and matrix-mediated rotation (multiplication of a vector by the matrix in eq (7)). The quaternion formula is the sum of three vectors (the original and two cross products), whereas the matrix formula is a matrix-vector product. Going through the code for this operation, we find that the x′ component of the result vector (x′, y′, z′) after applying the quaternion formula for quaternion [λ, (Ax, Ay, Az)] to the input vector (x, y, z) is computed by operations applied at their usual precedence in the following formula:

x′ = x + 2λ(Ay z − Az y) − 2Az(Az x − Ax z) + 2Ay(Ax y − Ay x)   (15)

and similarly for the y′ and z′ components. Both (x, y, z) and the axis n are unit vectors, and λ is a cosine.

The operations implementing the corresponding component by the matrix multiplication formula are

x′ = m11 x + m12 y + m13 z   (16)

and similarly for the y′ and z′ components.

There is nothing in these two formulae to predict the significant statistical difference (especially under the more predictable truncation approximation) in the performance of these two methods (see, e.g., Fig 6).
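For concreteness, the two competing implementations can be sketched side by side (a sketch in my notation; eq (15) is the x′ line of the quaternion form, expanded here via cross products):

```python
import math

def _cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def quat_rotate(lam, A, v):
    # Quaternion-mediated rotation, eq (6)/(15):
    # v' = v + 2*lam*(A x v) + 2*A x (A x v)
    c1 = _cross(A, v)
    c2 = _cross(A, c1)
    return [v[i] + 2.0 * lam * c1[i] + 2.0 * c2[i] for i in range(3)]

def matrix_rotate(M, v):
    # Matrix-mediated rotation: a plain matrix-vector product.
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]
```

Rotating (1, 0, 0) by 90 degrees about the Z axis with either method should give (0, 1, 0), which is a convenient agreement check.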

Pressing on, consider a still grosser description of the structure. Looking at eqs (4) and (8), say that the error in quaternion and matrix (caused perhaps by truncation) is described as a scaling of the whole quaternion or matrix (by δ = 1 − kε for some constant k, if the truncation error is ε). In that case the matrix-transformed vector is in error by a factor of δ, or an additive error of kε, and the quaternion-transformed vector by a factor of δ² (because of the conjugation), or an additive error of 2kε (to first order). Iterated matrix or quaternion self-multiplication exponentially damps the individual matrix and quaternion elements, which otherwise vary sinusoidally as the iteration proceeds. Under iterated application of the scaled matrix, the direction of the resulting vector should be unaffected, but its length should shrink, decaying exponentially to zero.

In fact, this level of explanation suffices to explain the data resulting from a single rotation (Fig 6): the quaternion errors are approximately twice the matrix errors for a single rotation under truncation. It also correctly predicts the behavior of matrix and quaternion elements under iteration of the same rotation (Fig 3). It correctly predicts the form of error for the matrix transformation under iteration of the same rotation (Fig 8), and forms part of the explanation for quaternion transformation error (see Section 4.4). Exercise: In Fig 3 the period of the matrix element is half that of the quaternion element. This result is expected from inspection of eqs (8) and (2), but looks inconsistent with eq (7). Is it?

4.3 Single Rotations

First, some sample single rotations were applied:

v1 = R ∘ v   (17)


Figure 3 About Here

Figure 3: The components of the quaternion and matrix are scaled and shifted sinusoids. Here, representative quaternion and matrix element amplitudes decay exponentially through time under iterative rotation with truncation. Under rounding, the cumulative decay is negligible. Dotted line - matrix element [0][0]. Solid line - quaternion element Ax.

Figure 4 About Here

Figure 4: Rotating the unit vector in direction [1, 1, −1] around the axis [1, 1, 1] by varying amounts: (a) 0.1 rad, (b) 1.0 rad, (c) 1.7 rad. The errors are plotted logarithmically against the number of bits of mantissa precision, under truncation. Hollow and dark squares - direction error with quaternion and matrix representations. Star and cross - length error with quaternion and matrix representations. Variations in error magnitude and its rank ordering result from varying rotation magnitude. The largest error at 22 bits precision is 0.0001 (the smallest representable number is about 10⁻⁹).

Fig 4 shows the results, comparing the conic (eq (6)) with matrix rotation computation under truncation. In general, increasing the rotation magnitude does not increase error, the error magnitudes are not predictably ordered, and the error magnitude is at worst 10⁻⁶ with 22 bits of mantissa precision.

Next, 100 individual random rotation-random vector pairs were generated, with the rotation applied to the vector at a single precision (here, as throughout the paper until Section 4.8, 10 bits) under both truncation (case RaTr) and rounding (case RaRd). (Statistics for several cases are collected in Table 2 in Section 4.6.)

The results are shown in two different styles in Figs 5, 6, and 7. Fig 5 displays the length and direction errors of the 100 trials under truncation, indexed along the X axis although they are each independent. As predicted in Section 4.2, the matrix representation outperforms quaternions by a factor of about two. The length errors under truncation show the nonzero mean one would expect. The same data are presented in Fig 6 as two projections of the three-dimensional error vectors, all oriented so that the true answer lies along the Z axis facing down, with its point at the origin, aiming in the −Z direction. The X-Y projection shows how the direction errors arise, and the X-Z projection shows how the length errors arise (with the approximate vectors all being too short). Exercise: Construct the rotation that points a given (x, y, z) vector along the Z axis (a) from rotations about the X, Y, Z axes, (b) using quaternions.

Figure 5 About Here

Figure 5: Case RaTr. (a) Length errors of 100 different random (rotation, vector) pairs at 10 bits precision under truncation. Circles plot the quaternion-vector errors, Xs the matrix-vector errors. (b) As for (a), but direction errors.


[Scanned pages of Figures 1, 2, and 3 appear here.]

Figure 6 About Here

Figure 6: The same case and data as the previous figure, plotted as error vectors, where the correct answer vector points to the origin and lies along the positive Z axis (see text). (a) Error vectors projected onto the X-Z plane. Circles plot the quaternion-vector errors, Xs the matrix-vector errors. (b) As for (a), but projection onto the X-Y plane.

Figure 7 About Here

Figure 7: Case RaRd. (a), (b) as the previous figure, with the same input rotations and vectors, but with operations under rounding to 10 bits of precision.

Fig 7 is similar to Fig 6, except that rounding is employed (note the scale change). Not only does the length error bias disappear as expected, but the difference between the methods decreases, with the matrix implementation maintaining a slight advantage.

4.4 Iterated Rotations and Normalization

The second experiment is to iterate the same small rotation of φ around a particular direction, using three rotation composition methods. Quaternion multiplication and matrix multiplication produce iterated quaternion and matrix rotation representations; at the end of the iterations, the resulting rotation representation is applied to the input vector once to determine the rotated vector. A third method, called the vector method here, iteratively applies the identical incremental rotation to the vector at each iterative step, preserving only the rotated vector. There are thus two cases (quaternion and matrix) of composing rotation representations and one case (vector) of composing the application of the rotation to the vector. Previous results indicate that the matrix method is both more accurate and more efficient for applying a single rotation, and empirically the results are much better with this choice as well, so rotations are implemented in the vector method by matrix-vector multiplication.

These experiments were carried out under both truncation and rounding. Truncation is sometimes used to get at worst-case performance in a numerical analysis sense. Such an approach might be useful in the analysis of a non-iterative computational scheme (such as a single rotation), but the cumulative scaling effects described in Section 4.2 overwhelm numerical precision effects. The errors engendered through iterative truncation turn out to be qualitatively similar to those that arise under rounding. These results are thus good for geometric intuition about the methods, and they illustrate the importance of normalization. Fig 8(a) and (b) show the (shocking) errors that result if no normalization takes place during the iterated rotation composition and if the resulting non-normalized product quaternion or matrix rotation is applied to the input vector. This is case IS0 (Iterative Sequence, 0 normalizations). Without normalization, all the representations show cumulative errors that are easy to understand.


[Scanned panels of Figures 5(a), 5(b), 6(a), and 6(b) appear here.]

Figure 8 About Here

Figure 8: Case IS0. Iterative transformation of the vector [1, 1, −1] about the axis [1, 1, 1] by 1 rad, 200 times. Calculations done with ten bits of mantissa precision under truncation; no normalization of the final representation before applying it to the vector. This choice of input parameters results in a representative performance, but not a normative one. (a) Result vector error projected on the X-Z plane. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. The matrix errors are a very tight helix up the Z axis, with the vector errors only slightly larger. (b) As for (a), but showing the X-Y projection. The matrix and vector errors project to the small dark blob at the origin.

First consider the vector representation. It is acted on at each iteration by an identical but errorful transform of eq (6). We see it in both views spiraling away from the correct initial value, its length shrinking and its direction error increasing. The performance of the vector method might be improved by a factor of two on average by using the matrix transformation at every step (considering the arguments in Section 4.2 and the results shown in Fig 6). Another important aspect of this method is its variability in performance for different rotations, arising from the variability in the accuracy of the first calculated rotation (again Fig 6).

As predicted in Section 4.2, the matrix representation is affected in its scaling property but not significantly in its rotational properties: there is effectively no directional error, but the result shrinks to less than half its original length in 200 iterations.

The quaternion representation acts, using eq (6), by adding two cross-product vectors to the original input vector; as the quaternion components become increasingly systematically errorful, they affect the result in the interesting way shown in the figure. One effect is the decaying magnitude of the quaternion; the other is that as the cumulative rotation φ increases from zero, the sin(φ/2) weighting of A, the vector part of the quaternion, starts at zero and returns to zero with a period of 2π/t, where t is the truncated value of the incremental rotation angle. Thus with that period the effective rotation error varies sinusoidally and returns to zero as the magnitude of A does.

The same data are plotted for 400 iterations, this time in Fig 9. First consider length errors. This presentation emphasizes the exponential decay of the answer's length to zero with the matrix and vector representations. Thus, as the quaternion's magnitude ultimately decays to zero, the result vector will be unaffected by the rotation and will remain equal to the input vector, so its length error stabilizes at zero. The short-period sinusoidal character of the length error arises from the arguments of the last paragraph. The initially increasing envelope arises from the increasing inaccuracy caused by the truncation; the ultimate exponential decay of the envelope arises from the shrinking of the quaternion components.

Considering direction errors, the matrix and conic representations have (locally) linearly increasing errors as they spiral away from the true answer direction. (The linear increase cannot be maintained forever on the orientation sphere, of course.) The quaternion method's direction error ultimately becomes purely periodic as the answer vector becomes static. The direction error as a function of iteration number t and angle r between the rotation axis and the input vector is then of the form ((r sin t)² + (r − r cos t)²)^(1/2), or the length of a circular chord from a fixed point on the circle to a point moving around the circle at constant velocity. Before that time the direction error exhibits a more complex two-humped form; this arises from the two separate cross-product components of the vector increment in eq (6). Exercise: Would there be any difference in direction and length errors if the rotation were accomplished by conjugation (eq (4)) instead of eq (6)?

[Scanned panels of Figures 7(a), 7(b), 8(a), and 8(b) appear here.]

Figure 9 About Here

Figure 9: Case IS0 for 400 iterations (see text). (a) Length error of the result vector. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. The vector and matrix errors coincide. (b) As for (a), but showing direction error for the resulting vector. The vector and matrix errors are very close to zero.

Figure 10 About Here

Figure 10: Case IS0 under rounding instead of truncation. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. Though the forms of the errors are similar to the truncation case, their relative magnitudes are more equal. (a) Result vector projected on the X-Z plane. (b) As for (a), but showing the X-Y projection.

Next in this sequence, the last two demonstrations are repeated under rounding, not truncation. The expectation is that, due to the better approximation and possible error cancellations arising from high and low approximations, the errors in the representations will be less, but that systematic errors will still exist. The results (Figs 10, 11) bear this out: the errors often look like much smaller, noisy versions of the truncation errors. In Fig 11 the quaternion errors display the fine two-humped structure longer, since it is not overwhelmed as quickly by shrinking quaternion magnitude. Rounding has a dramatically good effect on the systematic shrinking of matrices and quaternions under iterative self-multiplication.

So far, purely for intuition, we have dealt with the pathological case in which no normalization of rotation representations is performed. The arguably pathological case of approximation through truncation yields insight into the structure of systematic approximation errors in an iterative scheme. From here on we abandon truncation approximation (except in Section 4.7).

Figure 11 About Here

Figure 11: Data of the previous figure, showing length and direction errors. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. (b) Direction error.


[Scanned panels of Figures 9(a), 9(b), 10(a), and 10(b) appear here.]

Figure 12 About Here

Figure 12: Case IS1. Rotating the vector [1, 1, −1] by 1-radian increments about the axis [1, 1, 1], with 10 bits precision under rounding, normalizing the quaternion and matrix representations prior to application to the input vector, and normalizing the final vector representation to its initial length. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. The vector representation has lower variance, and the quaternion exhibits marked periodicity. Only the first 100 iterations are shown; the rest are similar. (b) Direction error.

We now look at two different styles of normalization. In the first, no normalization is performed on intermediate representations, but the final quaternion, matrix, and vector are normalized. In the second, the intermediate quaternion, matrix, and vector are normalized at each step. Normalization of the matrix and quaternion is not dependent on the input data, but the length of the input vector must be computed and remembered for the vector representation to be normalized. This fact might be significant if many vectors are being treated at once.

Fig 12(a) shows the length error when the final quaternion and matrix are normalized prior to application to the input vector and the incrementally generated vector representation is normalized to its original length (here known to be unity). This case is called IS1, for Iterative Sequence, 1 normalization. The matrix normalization used is the one-crossproduct form; when only one normalization is being performed, the three types of normalization perform virtually identically. The computations were performed under rounding to the usual ten bits of precision. If its initial length is not remembered, the vector representation cannot be normalized, and its error is the same as in the non-normalized case of Fig 11, showing a linear error accumulation.

Fig 12(b) compares direction error for the three representations, normalized as in Fig 12(a). The comparative performance here is certainly not so clear-cut, and the error accumulations resemble random walks. Alternatively, the performance could depend more on the particular choice of input data (rotation and input vector) than on numerical properties of the methods.

Finally, a natural question concerns the effects of normalization applied after every rotation composition. This is case ISN (N normalizations). Fig 13 shows the results for length and direction. As might be expected for length error, the normalized vector has the best performance (presumably as accurate as machine precision allows). The quaternion result has a systematically negative mean, higher variance, and periodicity. The matrix length error displayed in Fig 13(a) results from the no-crossproduct form of matrix normalization. The trend is not an illusion: carried out to 400 iterations, the length error shows a marked systematic increase in the negative direction. On the other hand, iteration of the one-crossproduct normalization resulted in a factor-of-two worsening of the magnitude of the direction errors for the matrix representation. Repeated normalizations seem to have adversely affected the accuracy of the vector representation after many iterations, but more work is needed to tell whether this is a repeatable effect. In any event, it seems clear that the repeated normalizations are certainly not improving matters much in this case over case IS1.

[Scanned panels of Figures 11(a), 11(b), 12(a), and 12(b) appear here.]

Figure 13 About Here

Figure 13: Case ISN. As for the previous figure, but normalizing the rotation representation at each iteration, using no-crossproduct matrix normalization, which improved direction errors but worsened length errors over the one-crossproduct form. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. (b) Direction error.

Figure 14 About Here

Figure 14: Case RS1. As for case IS1 (no normalization of intermediate representations), but using a sequence of random rotations. There is no difference in performance with the different matrix normalizations; no-crossproduct matrix normalization is shown. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector (first 100 values; the rest are similar). (b) Direction error. The rather confusing results at least have some clear qualitative differences.

The main conclusion here is that the cost of normalizing quaternion or matrix representations after every rotation in a repeated sequence may not be repaid in results, and in fact may be actively harmful. The vector representation seems to do best of the three on length error and worst on direction error. More work (analytic or experimental) would be needed to clarify the performance ranks, or even to recommend or disqualify a representation for this task.

4.5 Concatenated Random Rotations

Here a sequence of random rotations is generated as described earlier, analogous to the sequence of identical rotations in the last experiment. They are applied cumulatively to an input vector. In each case computation is with 10 bits precision under rounding. The cases of a single final normalization and of normalizing every intermediate representation (called cases RSI and RSN) were investigated. Both the no- and one-crossproduct forms were investigated. The results, comparable to those in cases ISI and ISN (Figs. 12 and 13), are shown in Figs. 14 and 15.

The results indicate that the vector representation remains better than the other two for length errors, but for direction errors the results are unclear. It is hard to draw clear distinctions either between the normalization methods or the representations. Again the form of matrix normalization has some annoying effects, which motivate the next section.


Figure 13(a) Figure 13(b)

Figure 14(a) Figure 14(b)

Figure 15 About Here

Figure 15: Case RSN. As for case RSI, but normalizing all intermediate representations. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error, using one-crossproduct matrix normalization. (b) Direction error, using the one-crossproduct form. (c) Direction error, using the no-crossproduct form.

Case      |   | μ vec     | μ mat     | μ quat    | σ vec    | σ mat    | σ quat
RaTr      | L |     -     |  0.006225 |  0.005026 |    -     | 0.001181 | 0.003369
          | D |     -     |  0.001787 |  0.004578 |    -     | 0.001094 | 0.002307
RaRd      | L |     -     | -0.000310 | -0.000225 |    -     | 0.001087 | 0.001263
          | D |     -     |  0.000965 |  0.001180 |    -     | 0.000542 | 0.000759
ISI (100) | L |  0.000276 | -0.000260 | -0.000262 | 0.000466 | 0.000857 | 0.001648
          | D |  0.003694 |  0.004063 |  0.001435 | 0.001705 | 0.002298 | 0.001123
ISN (100) | L |  0.000137 | -0.001772 | -0.000811 | 0.000533 | 0.001106 | 0.001312
          | D |  0.005646 |  0.004844 |  0.001606 | 0.002786 | 0.002872 | 0.001045
RSI (100) | L | -0.000252 | -0.000162 | -0.000157 | 0.000521 | 0.001612 | 0.002022
          | D |  0.003710 |  0.005854 |  0.006065 | 0.002150 | 0.003686 | 0.006065
RSN (100) | L |  0.000164 | -0.000602 | -0.001097 | 0.000473 | 0.000703 | 0.001666
          | D |  0.002269 |  0.009830 |  0.005960 | 0.000874 | 0.004230 | 0.002236
          | D |  0.002269 |  0.005187 |  0.005960 | 0.000874 | 0.002248 | 0.002236

Table 2: Statistics for length (L) and direction (D) errors for some of the cases illustrated in this section. Cases RaTr and RaRd are the independent random rotations; ISI and ISN are the iterated sequences (one normalization, and normalization of each intermediate representation); RSI and RSN are the random sequences. Only the first 100 data values are used from the iterated rotations, to diminish the effect of longer-term trends. The second D line of RSN is with no-crossproduct normalization (which only affects the matrix representation).

46 Statistical Summary

Table 2 summarizes some of the data from the cases so far.

4.7 Matrix Normalization

Some experiments explored the performance of the three types of matrix normalization (no, one, or two cross products) in the context of iterated identical and random rotations.

The experiments consist of two iterated identical rotations (Figs. 16, 17) under rounding, and two iterated random rotation sequences under truncation and rounding. Fig. 18 gives the direction errors for two sequences of random rotations applied to an input vector. In no case is there significant difference between the performances on the length error at 200 iterations, but the one- or two-crossproduct forms significantly outperform the no-crossproduct form over 400 iterations, since the latter exhibits systematically increasing negative length error and the former do not.

Figure 15(a) Figure 15(b) Figure 15(c)

Figure 16 About Here

Figure 16: Iterated identical rotations of the unit vector in direction (1, 1, -1) by 1 radian about the axis (1, 1, 1), computed to 10 bits under rounding. Solid line - no cross products; dotted line - one cross product; dashed line - two cross products. (a) Length errors. (b) Direction errors.

Figure 17 About Here

Figure 17: As for previous figure, but the rotation is of the unit vector in direction (1, 2, 3) by iterated rotations of 2 radians about the axis (3, -2, -1).

For direction error, few conclusions can be drawn about relative performance under iterated identical rotation sequences. It is perhaps surprising that the no-crossproduct form usually performs here at least as well as the other two over a small number of iterations, and that for random sequences there is no clear winner among normalization types, even approximating with truncation. Thus the symptoms noticed in the last two sections, involving long-term relative effects of the normalization form, were not noticed over 200 iterations. The one- and two-crossproduct forms may be more stable over long iteration, but the issue of optimal normalization remains open.

4.8 Average Error of Concatenated Random Rotations

Which representation is best, or worst? To wind up this overlong exercise in trying to derive structure from noise, let us consider average behavior for the task of random rotation sequences (perhaps of more general interest than iterated identical rotations). Technical decisions on normalization are based on the conclusions of previous sections: approximation is by rounding, with normalization only after the last rotation. Matrix normalization is done with the one-crossproduct method. Each run is a single experiment like that of Section 4.5, only performed (with the same data) at a number of different precisions.

Such a set of runs yields length and direction error surfaces plotted against the precision

Figure 18 About Here

Figure 18: Direction errors only, for a sequence of random rotations applied to an input vector. Legend as in previous two figures. (a) Truncation to 10 bits. (b) Rounding to 10 bits.


Figure 16(a) Figure 16(b)

Figure 17(a) Figure 17(b)

Figure 18(a) Figure 18(b)

Figure 19(a) Figure 19(b)

Figure 19(c) Figure 19(d)

Rep       | Repeated Rot               | Random Seq
Vec(Mat)  | (9N+17)*  (6N+13)+  1 sc   | 26N*  19N+  N sc
Vec(Quat) | (18N+4)*  (12N)+  1 sc     | 22N*  12N+  N sc
Quat      | (16N+22)* (12N+12)+  1 sc  | (20N+18)* (12N+12)+  N sc
Matrix    | (27N+26)* (18N+19)+  1 sc  | (44N+9)*  (31N+6)+  N sc

Table 4: Operation counts for N-long rotation sequences. The long form of matrix multiplication is used here. Vec(Mat) is the vector representation with intermediate transformation by matrices; Vec(Quat), with transformation by the quaternion method. Multiplications, additions, and sine-cosine pairs are denoted by *, +, and sc.

matrix transformation) since matrix-vector multiplication costs one third of matrix-matrix multiplication

4.10 Conclusions and Questions

The conclusions we can draw at this point are the following

1. Numerical simulation of variable precision arithmetic has yielded some tentative conclusions, but in many interesting cases they are rather delicate and depend on niceties of the implementation. Accurate performance predictions may only be possible with faithful simulation of the particular hardware and software at issue.

2. Practical differences in numerical accuracy tend to disappear at reasonable precisions. After 200 iterations of a rotation, 53 bits of mantissa precision (the double representation in the C language) yields no error to 10 decimal places. With 24 bits (the float representation), effects were noticeable in the 5th to 7th decimal place with all methods.

3. Normalization is vital before quaternion or matrix representations are applied to vectors, but is not as important during the composition of representations, and the extra numerical computation of frequent normalization may actually decrease accuracy in the final result.

4. The vector (with matrix transformation) and matrix representations for rotation sequences seem to outperform the quaternion for accuracy.

5. The vector (with matrix transformation) representation for rotation sequences, with a single normalization of vector length at the end, is most computationally efficient and has the least length error on the average.

6. Averaged over many trials, the difference in accuracy of the three methods amounts only to about a single bit of precision for a small number of rotation compositions (say, 25 or fewer).

7. Matrices are the least computationally efficient on rotation sequences.


There are also some remaining basic questions:

1. What is a reasonable analytic approach to the numerical accuracy problem for interesting rotational tasks (say, sequences of random rotations under rounding)?

2. How should optimal matrix or quaternion normalization methods be defined and constructed?

5 Discussion

The area of robust computational geometry is becoming popular, with an annual international conference and increasingly sophisticated theoretical formulations. Very little of that work (if any) is related to rotation algorithms. Symbolic algebra systems can help with analysis [4]. A good treatment of computer arithmetic and rounding appears in [15], with an application to detecting line intersection. Serious qualitative representational problems arise when issues of topology (line connectivity) are at stake, and much of the work is on computing line intersection accurately with finite precision arithmetic [19]. For instance, Milenkovic's analysis is based on facts about line intersection, such as the number of extra bits that are necessary to guarantee a specific precision of answer. Some of these facts are surprising: it takes at least three times the input precision to calculate a line intersection location to the same precision [11]. Robust polygon and polyhedron intersection algorithms, as well as various line-sweeping algorithms, are based on accurate line intersection algorithms [8].

Several points of comparison between rotation parameterizations and representations have been raised in the preceding sections. It is perhaps worth repeating that little evidence for dramatic differences in numerical accuracy has emerged, though the evidence might tend to favor matrices slightly and to condemn the vector representation.

One of the common applications of rotational transforms is to affect several vectors at once, either in the context of transforming a set of points around (as in graphics) or in handling coordinate systems represented as n points (n - 1 of them points at infinity representing axis directions, and one representing the origin of coordinates, in a homogeneous coordinate system). In the first case the repeated application of the same transform will be more economical if the work is done in the matrix representation. In the second the same may well be true, but of course the whole point of homogeneous coordinates is to make the projective (a fortiori, affine) transforms into linear ones implemented with matrix multiplication. This means, for example, that coordinate systems, rigid transformations including translation and rotation, and point projective transforms (simple camera models) can all be represented as matrices and composed with matrix multiplication, and that the operation of such transforms on vectors is simply matrix-vector multiplication. In this context homogeneous coordinates by definition require the matrix representation.
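The embedding of a rotation and translation in a homogeneous transform described above can be sketched in a few lines of Python (the function names `homogeneous` and `apply_h` are mine, for illustration only):

```python
def homogeneous(rot3, t):
    """Embed a 3x3 rotation and a translation t in a 4x4 homogeneous transform."""
    return [rot3[0] + [t[0]],
            rot3[1] + [t[1]],
            rot3[2] + [t[2]],
            [0.0, 0.0, 0.0, 1.0]]

def apply_h(m, p):
    """Apply a 4x4 transform to a point given as (x, y, z, 1)."""
    return [sum(m[i][j] * p[j] for j in range(4)) for i in range(4)]
```

Composing two such transforms is ordinary matrix multiplication, which is the point made in the text: rigid motions and simple camera models all share one composition rule.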

If the rotation at issue is a general one (say, specified in conic (φ, n) parameters), then conversion either to the quaternion or matrix form is easy. Thus the representations in this case do not dictate the semantics, or how one wants to think about the rotation. If the rotation is specified in terms of rotations about (X, Y, Z) coordinate axes, then the matrix representation is easily and robustly constructed.

Real-time applications involving concatenating rotations and applying them to a small number of vectors (perhaps one: the position of a robot hand) need detailed analysis to see which representation is the most efficient. Similarly for more complex geometric calculations.

A common application of rotation representations in vision is to express the motion of the observer. Often the rotational motions, for mechanical or mathematical reasons, are expressed as rotations about some fixed axes, which in practice are usually orthogonal. Examples are the motions induced by pan and tilt camera platforms, or the expression of egomotion in terms of rotations about the observer's local X, Y, Z axes. In this case the matrix representation is the clear choice, to maximize efficiency and intuition.

Reconstructing the rotation that transformed known points is a useful vision task that requires fitting data to a model of the transformation. The straightforward route, followed since Roberts' thesis [17], is to do a linear least squares fit to the transform's matrix representation. This approach is perhaps risky, since the matrix elements are not independent and the error being minimized thus may not be the one desired. The alternative of using conic parameters has been successfully used [12] and may well be a better idea. Recently, more sophisticated Kalman filter methods have been used to derive motion from point matches through time [5]. Model-matching with constraints allows a model to be matched to a class of objects, and promising approaches are emerging that use hybrid methods of symbolic algebra (implementing polynomial triangulation to get explicit solutions for known constraints) followed by root-finding and sophisticated minimization techniques to search for minimum-error solutions [13,14].

A less common but important set of applications is to reason about and manipulate rotations themselves. For example, one may want to sample rotation space evenly [9], or to create smoothly interpolated paths in rotation space between a given set of rotations [16]. In these cases the Euler-Rodrigues parameters are the most convenient.

Another example is the use, since 1932, of quaternions in the neurophysiological literature to describe eye kinematics [22]. Models of human three-dimensional oculomotor performance, especially for the vestibulo-ocular reflex and feedback models of saccades, involve computations on rotations that are much more stable when carried out in conic or Euler-Rodrigues parameters using quaternion arithmetic. Subtractive feedback laws adequate for simple (left or right) eye movements do not generalize to arbitrary rotational axes. The quaternion model uses multiplicative feedback, explains human performance better than similar models based on rotation matrices, and leads to explicit predictions such as a four-channel saccade computation [21].


6 Bibliography

[1] S. L. Altmann. Rotations, Quaternions, and Double Groups. Clarendon Press, Oxford, 1986.

[2] C. M. Brown. Gaze controls with interactions and delays. In DARPA Image Understanding Workshop (submitted IEEE-TSMC), pages 200-218, May 1989.

[3] C. M. Brown. Prediction in gaze and saccade control. Technical Report OUEL 1771/89 (UR TR-295), Oxford University Dept. Engg. Science (U. Rochester Comp. Sci. Dept.), May 1989.

[4] C. I. Connolly, D. Kapur, J. L. Mundy, and R. Weiss. GeoMeter: A system for modeling and algebraic manipulation. In Proceedings, DARPA Image Understanding Workshop, pages 797-804, May 1989.

[5] O. D. Faugeras, F. Lustman, and G. Toscani. Motion and structure from motion from point and line matches. In Proceedings, International Conference on Computer Vision, pages 25-34, June 1987.

[6] J. D. Foley and A. Van Dam. Fundamentals of Interactive Computer Graphics. Addison-Wesley, 1982.

[7] W. R. Hamilton. On quaternions; or on a new system of imaginaries in algebra. Philosophical Magazine, 3rd Ser., 25:489-495, 1844.

[8] C. Hoffmann and J. Hopcroft. Towards implementing robust geometric computations. In Proceedings, ACM Symposium on Computational Geometry, 1988.

[9] Berthold K. P. Horn. Robot Vision. MIT Press/McGraw-Hill, 1986.

[10] K. Kanatani. Group Theory in Computer Vision. Springer Verlag, 1989.

[11] V. J. Milenkovic. Robust geometric computations for vision and robotics. In Proceedings, DARPA Image Understanding Workshop, pages 764-773, May 1989.

[12] J. L. Mundy. Private communication, August 1989.

[13] J. L. Mundy. The application of symbolic algebra techniques to object modeling. Lecture, University of Oxford, August 1989.

[14] J. L. Mundy. Symbolic representation of object models. In preparation, August 1989.

[15] T. Ortmann, G. Thiemt, and C. Ullrich. Numerical stability of geometric algorithms. In Proceedings, ACM Symposium on Computational Geometry, 1987.

[16] K. S. Roberts, G. Bishop, and S. K. Ganapathy. Smooth interpolation of rotational motion. In Computer Vision and Pattern Recognition 1988, pages 724-729, June 1988.

[17] L. G. Roberts. Machine perception of three-dimensional solids. In J. T. Tippett et al., editors, Optical and Electro-Optical Information Processing, pages 159-197. MIT Press, 1968.

[18] O. Rodrigues. Des lois géométriques qui régissent les déplacements d'un système solide dans l'espace, et de la variation des coordonnées provenant de ses déplacements considérés indépendamment des causes qui peuvent les produire. Journal de Mathématiques Pures et Appliquées, 5:380-440, 1840.

[19] D. Salesin, J. Stolfi, and L. Guibas. Epsilon geometry: building robust algorithms from imprecise computations. In Proceedings, ACM Symposium on Computational Geometry, 1989.

[20] R. Taylor. Planning and execution of straight line manipulator trajectories. In J. M. Brady, J. M. Hollerbach, T. L. Johnson, T. Lozano-Pérez, and M. T. Mason, editors, Robot Motion: Planning and Control. MIT Press, 1982.

[21] D. Tweed and T. Vilis. Implications of rotational kinematics for the oculomotor system in three dimensions. Journal of Neurophysiology, 58(4):832-849, 1987.

[22] G. Westheimer. Kinematics for the eye. Journal of the Optical Society of America, 47:967-974, 1932.



2.2 Matrix Representation

This representation is the most familiar to graphics, computer vision, and robotics workers, and is given correspondingly short shrift here. A rotation is represented by an orthonormal 3 x 3 matrix. This rotation component occupies the upper left-hand corner of the usual 4 x 4 homogeneous transform matrix for three-dimensional points expressed as homogeneous 4-vectors. The matrix representation is highly redundant, representing as it does three quantities with nine numbers.

In this section capital letters are matrix variables. The primitive rotation matrices are usually considered to be those for rotation about coordinate axes. For example, to rotate a (column) vector r by φ about the X axis we have the familiar

    r' = [ 1      0        0    ]
         [ 0    cos φ   -sin φ  ] r                                  (8)
         [ 0    sin φ    cos φ  ]
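A minimal Python sketch of eq. (8) and its application to a column vector (the names `rot_x` and `mat_vec` are mine, for illustration):

```python
import math

def rot_x(phi):
    """Rotation matrix for angle phi about the X axis, as in eq. (8)."""
    c, s = math.cos(phi), math.sin(phi)
    return [[1.0, 0.0, 0.0],
            [0.0, c, -s],
            [0.0, s, c]]

def mat_vec(m, v):
    """Apply a 3x3 matrix to a 3-vector: 9 multiplications, 6 additions."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

# Rotating the Y unit vector by 90 degrees about X carries it onto the Z axis.
v = mat_vec(rot_x(math.pi / 2), [0.0, 1.0, 0.0])
```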

with related simple matrices for the other two axes. The composition of rotations A, B, C is written r' = CBAr, which is interpreted either as performing rotations A, B, C in that order, each being specified with respect to a fixed (laboratory) frame, or as performing them in the reverse order C, B, A, with each specified in the coordinates that result from applying the preceding rotations to the preceding coordinate frame, starting with C acting on the laboratory frame.

Conversion methods from other rotation parameterizations (like Euler angles) to the matrix representation appear in [1]. Generally the matrices that result are of a moderate formal complexity, but with pleasing symmetries of form arising from the redundancy in the matrix. In this work all we shall need is eq. (7).

Infinitesimal rotations are interesting. The skew symmetric component of a rotation matrix A is S = A - A^T, and it turns out that

    A = exp(S) = I + S + (1/2) S^2 + ...                             (9)

A little further work establishes that if the conic representation of a rotation is (φ, n), with n = (n_x, n_y, n_z), and

    Z = [  0    -n_z    n_y ]
        [  n_z    0    -n_x ]                                        (10)
        [ -n_y   n_x     0  ]

then A = exp(φZ) = I + (sin φ)Z + (1 - cos φ)Z², from which the explicit form of the 3 x 3 matrix corresponding to the conic representation can be found.
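The closed-form series A = I + (sin φ)Z + (1 - cos φ)Z² is easy to check numerically; here is a sketch (the names `skew` and `conic_to_matrix` are mine):

```python
import math

def skew(n):
    """The Z matrix of eq. (10): Z v = n x v for any 3-vector v."""
    nx, ny, nz = n
    return [[0.0, -nz, ny],
            [nz, 0.0, -nx],
            [-ny, nx, 0.0]]

def conic_to_matrix(phi, n):
    """A = I + (sin phi) Z + (1 - cos phi) Z^2, for unit axis n."""
    Z = skew(n)
    Z2 = [[sum(Z[i][k] * Z[k][j] for k in range(3)) for j in range(3)]
          for i in range(3)]
    s, c = math.sin(phi), 1.0 - math.cos(phi)
    return [[(1.0 if i == j else 0.0) + s * Z[i][j] + c * Z2[i][j]
             for j in range(3)] for i in range(3)]
```

With n = (1, 0, 0) this reproduces the X-axis matrix of eq. (8), which is a convenient sanity check.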

Writing eq. (10) as

    Z = n_x [ 0  0  0 ]  +  n_y [  0  0  1 ]  +  n_z [ 0 -1  0 ]
            [ 0  0 -1 ]         [  0  0  0 ]         [ 1  0  0 ]     (11)
            [ 0  1  0 ]         [ -1  0  0 ]         [ 0  0  0 ]


shows that infinitesimal rotations can be expressed elegantly in terms of orthogonal infinitesimal rotations. It is of course no accident that the matrix weighted by n_x in eq. (11) is the derivative of the matrix in eq. (8) evaluated at φ = 0. Further, it will be seen that if v' is the vector arising from rotating vector v by the rotation (δφ, n), then each component of v' is just the sum of various weighted components of v. Thus infinitesimal rotations, unlike general ones, commute.

3 Operation Counts

Given the speed and word length of modern computers, including those used in embedded applications like controllers, concern about operation counts and numerical accuracy of rotational operations may well be irrelevant. However, there is always a bottleneck somewhere, and in many applications, such as control, graphics, and robotics, it can still happen that computation of rotations is time consuming. Small repeated incremental rotations are especially useful in graphics and in some robot simulations [6, 23]. In this case the infinitesimal rotations of the last section can be used, if some care is exercised. To compute the new x' and y' that result from a small rotation θ around the Z axis, use

    x' = x - y sin θ
    y' = y + x' sin θ                                                (12)

The use of x' in the second equation maintains normalization (unit determinant of the corresponding rotation matrix). For graphics, cumulative error can be fought by replacing the rotated data by the original data when the total rotation reaches 2π [6].
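Eq. (12) can be sketched as a short loop; note how the updated x (not the old one) feeds the second line, which is what keeps the map's determinant at 1 (the function name is mine, for illustration):

```python
import math

def rotate_incrementally(x, y, theta, iterations):
    """Iterate eq. (12): a small rotation theta about Z, applied repeatedly.
    Reusing the updated x in the y update keeps the determinant at 1, so the
    radius stays bounded near its initial value instead of drifting."""
    s = math.sin(theta)
    for _ in range(iterations):
        x = x - y * s
        y = y + x * s      # uses the new x, per eq. (12)
    return x, y
```

Using the old x in both lines would give a map with determinant 1 + sin²θ, whose orbits spiral outward; the trick above avoids that without any renormalization.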

Operation counts of course depend not just on the representation but on the algorithm implemented. As the application gets more complex, it becomes increasingly likely that ameliorations in the form of cancellations, appropriate data structures, etc., will arise. Even simple operations sometimes offer surprising scope for ingenuity (compare eqs. (5) and (6)). Thus it is not clear that listing the characteristics of a few primitive operations is guaranteed to be helpful (I shall do it nevertheless). Realistic prediction of savings possible with one representation or the other may require detailed algorithmic analysis. Taylor [20] works out operation counts for nine computations central to his straight line manipulator trajectory task. In these reasonably complex calculations (up to 112 adds and 156 multiplies in one instance), quaternions often cut down the number of operations by a factor of 1.5 to 2. Taylor also works out operation counts for a number of basic rotation and vector transformation operations. This section mostly makes explicit the table appearing in Taylor's chapter detailing those latter results; Section 4.9 gives counts for iterated (identical and different) rotations.

3.1 Basic Operations

Recall the representations at issue

Conic (φ, n) - axis n, angle φ

Quaternion [λ, Λ], with λ = cos(φ/2), Λ = sin(φ/2) n

Matrix: a 3 x 3 matrix

For 3-vectors, if multiplication includes division, we have the following:

Vector Addition: 3 additions

Scalar-Vector Product: 3 multiplications

Dot Product: 3 multiplications, 2 additions

Cross Product: 6 multiplications, 3 additions

Vector Normalization: 6 multiplications, 2 additions, 1 square root

3.2 Conversions

Conic to Quaternion: One division to get φ/2, one sine and one cosine operation, three multiplications to scale n. Total: 4 multiplications, 1 sine, 1 cosine.

Quaternion to Conic: Although it looks like one arctangent is all that is needed here (after dividing the quaternion through by λ), that seems to lose track of the quadrants. Best seems simply finding φ/2 by an arccosine, multiplying by 2.0 to get φ, then computing sin(φ/2) and dividing the vector part by that. Total: 1 arccosine, 1 sine, 4 multiplications. Special care is needed here for small (especially zero) values of φ.
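The two conversions just described can be sketched as follows, including the small-φ guard the text calls for (function names are mine, for illustration):

```python
import math

def conic_to_quat(phi, n):
    """(phi, n) -> [lambda, Lambda]: 4 multiplications, 1 sine, 1 cosine."""
    half = 0.5 * phi
    s = math.sin(half)
    return math.cos(half), (s * n[0], s * n[1], s * n[2])

def quat_to_conic(lam, vec):
    """Inverse conversion via arccosine; needs care when phi is near zero."""
    half = math.acos(max(-1.0, min(1.0, lam)))   # clamp against roundoff
    phi = 2.0 * half
    s = math.sin(half)
    if abs(s) < 1e-12:            # phi ~ 0: the axis is arbitrary
        return phi, (1.0, 0.0, 0.0)
    return phi, (vec[0] / s, vec[1] / s, vec[2] / s)
```

The round trip conic -> quaternion -> conic recovers (φ, n) for φ in (0, 2π), which makes the quadrant bookkeeping concrete.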

Quaternion to Matrix: Eq. (7) is the basic tool. Although the matrix is orthonormal (thus one can compute a third row from the other two by vector cross product), that constraint does not help in this case. Computing all the necessary products takes 13 multiplications, after which 8 additions generate the first two rows. The last row costs only another 5 additions working explicitly from eq. (7), compared to the higher cost of a cross product. Total: 13 multiplications, 13 additions.

Conic to Matrix: One way to proceed is to compute the matrix whose elements are expressed in terms of the conic parameters (referred to in Section 2). Among other things it requires more trigonometric evaluations, so best seems the composition of conic to quaternion to matrix conversions, for a total of 17 multiplications, 13 additions, 1 sine, 1 cosine.

Matrix to Quaternion: The trace of the matrix in eq. (7) is seen to be 4λ² - 1, and differences of symmetric off-diagonal elements yield quantities of the form 2λΛ_z, etc. Total: 5 multiplications, 6 additions, 1 square root.
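One way to realize the trace-based recovery just described is sketched below; it matches the stated count (5 multiplications, 6 additions, 1 square root, with a division counted as a multiplication), but assumes λ ≠ 0, i.e. a rotation angle away from π (the function name is mine):

```python
import math

def matrix_to_quat(m):
    """Recover [lambda, Lambda] from an exact rotation matrix.
    Trace(m) = 4*lambda**2 - 1; off-diagonal differences give the vector part.
    Assumes the rotation angle is not pi, so lambda != 0."""
    trace = m[0][0] + m[1][1] + m[2][2]
    lam = 0.5 * math.sqrt(trace + 1.0)
    k = 0.25 / lam
    return lam, ((m[2][1] - m[1][2]) * k,
                 (m[0][2] - m[2][0]) * k,
                 (m[1][0] - m[0][1]) * k)
```

A robust implementation would branch on the largest diagonal element near the λ = 0 case; the sketch keeps only the path the text describes.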

Matrix to Conic: Easiest seems matrix to quaternion to conic.

3.3 Rotations

Vector by Matrix: Three dot products. Total: 9 multiplications, 6 additions.


Vector by Quaternion: Implement eq. (6). Total: 18 multiplications, 12 additions.
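Eq. (6) itself appears on an earlier page and is not reproduced in this excerpt; purely as an illustration, one standard unit-quaternion rotation form with the same count (18 multiplications, 12 additions) is v' = v + 2Λ×(λv + Λ×v), sketched here with names of my own choosing:

```python
def cross(a, b):
    """Vector cross product: 6 multiplications, 3 additions."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def quat_rotate(lam, vec, v):
    """Rotate v by the unit quaternion [lam, vec]:
    v' = v + 2 vec x (lam v + vec x v); 18 multiplications, 12 additions."""
    t = cross(vec, v)
    t = (lam * v[0] + t[0], lam * v[1] + t[1], lam * v[2] + t[2])
    u = cross(vec, t)
    return (v[0] + 2.0 * u[0], v[1] + 2.0 * u[1], v[2] + 2.0 * u[2])
```

Expanding the double cross product shows this equals (λ² - Λ·Λ)v + 2(Λ·v)Λ + 2λ(Λ×v), the textbook sandwich-product result, while saving several multiplications.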

Compose Rotations (Matrix Product): Here it pays to exploit orthonormality rather than simply implementing matrix multiplication with nine dot products. Six dot products and one cross product are necessary. Total: 24 multiplications, 15 additions. This method uses only 2/3 of the information in the matrices and may be suspect for inexact representations (see below on normalization).

Compose Rotations (Quaternion Product): Implementing eq. (3) directly takes one dot product, one cross product, two vector additions, two vector-scalar products, one multiply, and one add. Total: 16 multiplications, 12 additions.
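The quaternion product just counted (one dot, one cross, two vector additions, two vector-scalar products, one multiply, one add) can be written out directly; the function name is mine:

```python
import math

def quat_mul(q1, q2):
    """[l1, L1][l2, L2] = [l1 l2 - L1.L2, l1 L2 + l2 L1 + L1 x L2];
    16 multiplications and 12 additions in all."""
    l1, a = q1
    l2, b = q2
    lam = l1 * l2 - (a[0] * b[0] + a[1] * b[1] + a[2] * b[2])
    vec = (l1 * b[0] + l2 * a[0] + a[1] * b[2] - a[2] * b[1],
           l1 * b[1] + l2 * a[1] + a[2] * b[0] - a[0] * b[2],
           l1 * b[2] + l2 * a[2] + a[0] * b[1] - a[1] * b[0])
    return lam, vec

# Rotations about the same axis commute, so composing 0.3 and 0.5 radians
# about Z should give 0.8 radians about Z regardless of operand order.
qa = (math.cos(0.15), (0.0, 0.0, math.sin(0.15)))
qb = (math.cos(0.25), (0.0, 0.0, math.sin(0.25)))
```

Counting: the dot product costs 3 multiplications, λ₁λ₂ one more, the cross product 6, and the two scalings 6, for 16; the additions and subtractions total 12.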

Rotate around X, Y, or Z Axis: To do the work of eq. (8) requires 1 sine, 1 cosine (unless they are already known), 4 multiplications, and 2 additions.

3.4 Normalization

In the conic representation one has little choice but to normalize the axis direction vector to unit norm and leave the rotation angle alone. In the quaternion and matrix representations the direction and magnitude are conflated in the parameters, and normalization of the parameters affects them both.

Quaternion: Simply dividing all quaternion components through by the norm of the quaternion costs 8 multiplies, 3 additions, and one square root.

Matrix: An exact rotation matrix is orthonormal. Three simple normalization algorithms suggest themselves. In the no-crossproduct method, each matrix row (in this paragraph meaning row or column) is normalized to a unit vector. Cost: 12 multiplies, 6 additions, 3 square roots. No rows are guaranteed orthogonal. The one-crossproduct method normalizes two rows and computes the third as their cross product. Cost: 18 multiplies, 7 additions, 2 square roots. This method does not guarantee the first two rows are orthogonal. The two-crossproduct method performs the one-crossproduct normalization, then recomputes the first row as the cross product of the second and third, say, for a total of 23 multiplies, 10 additions, and 2 square roots. Neither of the last two methods uses information from one matrix row. The best normalization procedure, by definition, should return the orthonormal matrix closest to the given one; the above procedures are approximations to this ideal. One idea might be to find the principal axis direction (the eigenvector corresponding to the largest real eigenvalue) and call that the normalized rotation axis. An exact rotation matrix M has one eigenvalue of unity and two conjugate complex ones. What then of the rotation angle? In the errorless matrix, cos(φ) = (Trace(M) - 1)/2, and the trace is invariant under the eigenvalue calculation. Thus the angle can be calculated from three of the matrix elements. But does this choice minimize any reasonable error in rotation space? Indeed, what error should be minimized? The work here uses the no- and one-crossproduct forms, tries some empirical investigation in Section 4.7, and leaves the larger question of optimal normalization as a research issue. Exercise: find the eigenvalues and eigenvectors of the matrix in eq. (8).
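The one-crossproduct method described above can be sketched in a few lines (function names are mine; the cost here is not tuned to match the 18-multiply count exactly, it only illustrates the structure):

```python
import math

def _unit(v):
    """Scale a 3-vector to unit length."""
    n = math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2])
    return (v[0] / n, v[1] / n, v[2] / n)

def normalize_one_crossproduct(m):
    """One-crossproduct matrix normalization: normalize rows 0 and 1,
    recompute row 2 as their cross product. As the text notes, rows 0
    and 1 are not forced to be mutually orthogonal."""
    r0 = _unit(m[0])
    r1 = _unit(m[1])
    r2 = (r0[1] * r1[2] - r0[2] * r1[1],
          r0[2] * r1[0] - r0[0] * r1[2],
          r0[0] * r1[1] - r0[1] * r1[0])
    return [list(r0), list(r1), list(r2)]
```

The no-crossproduct variant would simply apply `_unit` to all three rows, and the two-crossproduct variant would additionally recompute row 0 as r1 × r2.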


Operation            | Quaternion          | Matrix
Rep -> Conic         | 4*  1 acos  1 sin   | 9*  6+  1 acos  1 sin  1 √
Conic -> Rep         | 4*  1 sin  1 cos    | 17*  13+  1 sin  1 cos
Rep -> Other         | 13*  13+            | 5*  6+  1 √
Rep ∘ vector         | 18*  12+            | 9*  6+
Rep ∘ Rep            | 16*  12+            | 24*  15+
X, Y, or Z Axis Rot  | 8*  4+              | 4*  2+
Normalize            | 8*  3+  1 √         | 18*  7+  2 √

Table 1: Summary of quaternion and matrix operation counts. Rep means the representation of the column heading; Other means that of the other column heading. Applying a rotation to a vector or another rotation is denoted by ∘. Multiplications, additions, and square roots are denoted by *, +, and √. Trigonometric functions are denoted sin, acos, etc.

3.5 Summary and Discussion

The operation counts are summarized in Table 1. They generally agree with those in [20], with a few differences. Taylor seems to have a quaternion to matrix conversion that takes 9 multiplies and 19 adds (Exercise: find it), and he implements his matrix to quaternion conversion with 3 sines, 3 cosines, and one arctangent instead of a single square root. Taylor normalizes matrix representations by the one-crossproduct method. The use of eq. (6) for vector rotation saves 4 multiplies over Taylor's counts. In any event, the differences in costs of the two representations do not seem as dramatic for these basic calculations as for Taylor's more complex robotic trajectory calculations, but iterative operations reveal a difference of a factor of two to three in effort (see Section 4.9). Generally, it may be impossible to predict realistically the extent of the savings possible with one representation over the other without a detailed algorithmic analysis.

4 Numerical Accuracy

4.1 Vector Rotation Tasks, Methods, and Representations

The manipulations of quaternions, matrices, and vectors presented in Section 3 are based on simple building blocks of vector addition, scaling, and dot and cross products. The numbers involved in the representations are generally well behaved, with vector components and trigonometric functions lying in the interval [0, 1], and rotation magnitudes and inverse trigonometric functions often in the interval [-2π, 2π]. Neither representation requires algorithms with obvious numerically dangerous operations like dividing by small differences.

It is often alleged that one representation is more numerically accurate than the other. However, it is not easy to see how to quantify this claim analytically. The matrix representation is redundant, representing the three parameters with nine numbers. Does this mean it is more vulnerable to numerical error, or less? Given that the algorithms are straightforward and simple, do their numerical results depend more on their structure or on the data they operate on? The individual numbers (often components of unit 3-vectors) have nonlinear constraints that may affect the numerical performance. Perhaps some of the alleged differences are thought to arise in microcomputer implementations with small word size; how does accuracy vary with the number of significant digits?

In general, the lack of analytical leverage on these questions motivated the empirical study reported in this section. Herein we compare representations for several common vector rotation tasks: rotation of a vector, iterated identical rotations of a vector, and iterated different (random) rotations of a vector. Error is measured either as an error vector or as the amount that the resulting vector's length and direction differ from the correct value.

More precisely, a correct answer is determined by computation at full (double) precision. When the rotation task is to iterate the same rotation, the correct answer can be (and is) calculated for the final cumulative rotation rather than calculated by iteration. The application of a rotation to a vector is accomplished by two methods: by vector-matrix multiplication as in eq. (8), and by Horn's quaternion-vector formula, eq. (6). The iteration of a rotation is accomplished in three ways: quaternion-quaternion multiplication, matrix-matrix multiplication, and the quaternion-vector formula. Thus the results of iterated rotation are represented as a quaternion, matrix, and (cumulatively rotated) vector, respectively.

The experiments use a variable precision implementation of all the relevant real, vector, quaternion, and matrix operations, offering a runtime choice of the number of bits of mantissa precision and the approximation type (rounding or truncation). The most basic operations are approximation, which rounds or truncates the mantissa of a C-language double (double-precision real floating point number) to the required precision, and arithmetic operations that approximate the operands, operate (at full precision), and approximate the result. From these basics are built vector cross and dot products, quaternion and matrix products, normalization routines, etc.
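The approximation primitive just described can be sketched in a few lines. This is an illustrative reconstruction, not the paper's actual C code; the function names `approx` and `approx_mul` are invented here.

```python
import math

def approx(x, bits, mode="round"):
    """Reduce a float's mantissa to `bits` bits by rounding or truncation.

    A sketch of the paper's variable-precision simulator: split off the
    mantissa, quantize it to `bits` bits, and reassemble the number.
    """
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)               # x = m * 2**e with 0.5 <= |m| < 1
    scaled = m * (1 << bits)
    if mode == "round":
        scaled = round(scaled)         # nearest representable mantissa
    else:
        scaled = math.trunc(scaled)    # truncation chops toward zero
    return math.ldexp(scaled / (1 << bits), e)

def approx_mul(x, y, bits, mode="round"):
    # As in the text: approximate the operands, operate at full
    # precision, then approximate the result.
    return approx(approx(x, bits, mode) * approx(y, bits, mode), bits, mode)
```

Vector, quaternion, and matrix operations are then built by applying `approx_mul` (and an analogous `approx_add`) componentwise.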

Vectors are specified or created randomly as (x, y, z) unit vectors at the highest machine precision. Rotations are specified or created randomly in the conic (φ, n) (rotation angle, rotation axis direction) representation, also at highest precision. Random unit vectors for input or rotation axes are not uniformly distributed in orientation space, being generated by normalizing three uniform variates to produce a unit vector. For random (φ, n) rotations, a random vector is produced for n and a fourth uniform variate is normalized to the range [-π, π]. Exercise: what do these distributions of vectors and rotations look like in orientation and rotation space?
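The generation procedure can be sketched as follows. The sign range of the uniform variates is not specified in the text, so [-1, 1] is an assumption here, and the function names are invented:

```python
import math
import random

def random_unit_vector(rng):
    """Normalize three uniform variates, as in the paper.

    Note: as the text observes, this is NOT uniform over the sphere of
    directions (it favors the cube's diagonals).
    """
    v = (rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(-1, 1))
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def random_rotation(rng):
    """A conic (phi, n) rotation: a random axis plus a fourth uniform
    variate scaled to the angle range [-pi, pi]."""
    axis = random_unit_vector(rng)
    phi = rng.uniform(-math.pi, math.pi)
    return phi, axis
```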

The correct answer is computed at highest precision by applying a single rotation (by Nφ for an N-long iteration of the same rotation φ) around the rotation axis to the input vector, using eq. (6). Then, working at the lower experimental precision, the conic input representation is converted to quaternion and matrix representation by eqs. (2) and (7). Next, the experimental rotational task is carried out at the experimental lower precision. Finally, the error analysis consists of highest-precision comparison of the low-precision result with the high-precision correct answer.

When a reduced-precision vector v′ is obtained, its error is determined by comparing it to the full-precision, non-iteratively rotated vector v. The error vector is v′ - v; the length error ΔL and direction error ΔD are

ΔL = | ‖v′‖ - ‖v‖ |     (13)

ΔD = arccos(v̂ · v̂′)     (14)

where v̂ is the unit vector in the direction of v.

In all the work here ‖v‖ = 1. Directional errors are reported in radians, length errors in units of the (unit) length of the vector to be transformed. The experiments are reported below in a rather discursive style. Along the way a few side issues are addressed, and conclusions are drawn that affect the later choices of experimental parameters.
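The two error measures of eqs. (13) and (14) are straightforward to compute; a minimal sketch (function names invented here):

```python
import math

def length_error(v_approx, v_true):
    """Delta-L of eq. (13): absolute difference of vector lengths."""
    norm = lambda v: math.sqrt(sum(c * c for c in v))
    return abs(norm(v_approx) - norm(v_true))

def direction_error(v_approx, v_true):
    """Delta-D of eq. (14): angle in radians between the two directions."""
    norm = lambda v: math.sqrt(sum(c * c for c in v))
    dot = sum(a * b for a, b in zip(v_approx, v_true))
    c = dot / (norm(v_approx) * norm(v_true))
    return math.acos(max(-1.0, min(1.0, c)))  # clamp against roundoff
```

The clamp before `acos` guards against cosines that land infinitesimally outside [-1, 1] due to floating-point roundoff.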

4.2 Analytic Approaches

Let us begin to consider the conversion of conic (φ, n) rotation parameters to quaternion and matrix form. For simplicity we might assume that θ = φ/2 is known exactly. Assume that at n bits of precision a number x is represented to a maximum additive absolute error of δ = 2⁻ⁿ, which is a relative error of δ/x. Ignoring second-order terms, the product of two imprecise numbers (x + δ)(y + ε) has additive relative error of ε/y + δ/x. When looking up or computing a function, the error to first order in f(x + δ) is δf′(x). Thus the error in computing the cosine of an errorful number x + δ is sin(x)δ. If, as in the implementation of this paper, the function is computed initially to high precision and then approximated, the further approximation may add another error of δ, for a total absolute error of (1 + sin(x))δ. Of course the function computation could be set up to eliminate the last approximation error if the working precision were known in advance.

These considerations are a start at constructing formulae for the relative error in the conversion of conic to quaternion and matrix form via eqs. (2) and (7). However, missing is the constraint that n be a unit vector (that the matrix be orthonormal). Without this constraint the predictions of errors will be pessimistic. Interacting with the constraint is the usual simplistic assumption that the worst happens in every operation (i.e., errors never cancel). This usual assumption makes analysis possible but potentially irrelevant. In fact, errors in the matrix and quaternion components after conversion depend closely on the values of the inputs, and empirically do not exhibit predictable, data-independent structure.

Turning to a slightly grosser examination of the structure of rotation operations, consider the implementation used in the work below for the two competing equations for quaternion-mediated rotation (eq. (6)) and matrix-mediated rotation (multiplication of a vector by the matrix in eq. (7)). The quaternion formula is the sum of three vectors (the original and two cross products), whereas the matrix formula is a matrix-vector product. Going through the code for this operation, we find that the x′ component of the result vector (x′, y′, z′) after applying the quaternion formula for quaternion (λ, A_x, A_y, A_z) to the input vector (x, y, z) is computed by operations applied at their usual precedence in the following formula:

x′ = x + 2λ(A_y z - A_z y) - 2A_z(A_z x - A_x z) + 2A_y(A_x y - A_y x)     (15)

and similarly for the y′ and z′ components. Both (x, y, z) and A are unit vectors, and λ is a cosine.

The operations implementing the corresponding component by the matrix multiplication formula are

x′ = m₀₀x + m₀₁y + m₀₂z     (16)

and similarly for the y′ and z′ components.

There is nothing in these two formulae to predict the significant statistical difference (especially under the more predictable truncation approximation) in the performance of these two methods (see, e.g., Fig. 6).
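At full precision the two formulae agree exactly, of course; the differences only appear under reduced precision. A sketch checking the agreement (the quaternion-to-matrix conversion used here is the standard one, assumed equivalent to the paper's eq. (7); function names are invented):

```python
import math

def quat_rotate(lam, A, v):
    """Eq. (15) pattern: v' = v + 2*lam*(A x v) + 2*(A x (A x v))."""
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])
    c1 = cross(A, v)        # A x v
    c2 = cross(A, c1)       # A x (A x v)
    return tuple(v[i] + 2*lam*c1[i] + 2*c2[i] for i in range(3))

def quat_to_matrix(lam, A):
    """Standard unit-quaternion-to-matrix conversion."""
    x, y, z = A
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - lam*z),   2*(x*z + lam*y)],
        [2*(x*y + lam*z),   1 - 2*(x*x + z*z), 2*(y*z - lam*x)],
        [2*(x*z - lam*y),   2*(y*z + lam*x),   1 - 2*(x*x + y*y)],
    ]

def mat_rotate(M, v):
    """Eq. (16) pattern: each component is a row-vector dot product."""
    return tuple(sum(M[i][j] * v[j] for j in range(3)) for i in range(3))
```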

Pressing on, consider a still grosser description of the structure. Looking at eqs. (4) and (8), say that the error in quaternion and matrix (caused perhaps by truncation) is described as a scaling of the whole quaternion or matrix (by s = (1 - kε) for some constant k, if the truncation error is ε). In that case the matrix-transformed vector is in error by a factor of s, or an additive error of ε, and the quaternion-transformed vector by a factor of s² (because of the conjugation), or an additive factor of 2ε (to first order). Iterated matrix or quaternion self-multiplication exponentially damps the individual matrix and quaternion elements, which otherwise vary sinusoidally as the iteration proceeds. Under iterated application of the scaled matrix, the direction of the resulting vector should be unaffected, but its length should shrink, decaying exponentially to zero.

In fact, this level of explanation suffices to explain the data resulting from a single rotation (Fig. 6): the quaternion errors are approximately twice the matrix errors for a single rotation under truncation. It also correctly predicts the behavior of matrix and quaternion elements under iteration of the same rotation (Fig. 3). It correctly predicts the form of error for the matrix transformation under iteration of the same rotation (Fig. 8), and forms part of the explanation for quaternion transformation error (see Section 4.4). Exercise: In Fig. 3 the period of the matrix element is half that of the quaternion element. This result is expected from inspection of eqs. (8, 2), but looks inconsistent with eq. (7). Is it?

4.3 Single Rotations

First, some sample single rotations were applied:

v′ = R ∘ v     (17)

Figure 3 About Here

Figure 3: The components of the quaternion and matrix are scaled and shifted sinusoids. Here, representative quaternion and matrix element amplitudes decay exponentially through time under iterative rotation with truncation. Under rounding the cumulative decay is negligible. Dotted line - matrix element [0,0]. Solid line - quaternion element A_x.

Figure 4 About Here

Figure 4: Rotating the unit vector in direction [1, 1, -1] around the axis [1, 1, 1] by varying amounts: (a) 0.1 rad, (b) 1.0 rad, (c) 1.7 rad. The errors are plotted logarithmically against the number of bits of mantissa precision, under truncation. Hollow and dark squares - direction error with quaternion and matrix representations. Star and cross - length error with quaternion and matrix representations. Variations in error magnitude and its rank ordering result from varying rotation magnitude. The largest error at 22 bits precision is 0.0001 (the smallest representable number is about 10⁻⁹).

Fig. 4 shows the results, comparing the conic (eq. (6)) with matrix rotation computation under truncation. In general, increasing the rotation magnitude does not increase error; the error magnitudes are not predictably ordered, and the error magnitude is at worst 10⁻⁶ with 22 bits of mantissa precision.

Next, 100 individual random rotation - random vector pairs were generated, with the rotation applied to the vector at a single precision (here, as throughout the paper until Section 4.8, 10 bits) under both truncation (case RaTr) and rounding (case RaRd). (Statistics for several cases are collected in Table 2 in Section 4.6.)

The results are shown in two different styles in Figs. 5, 6, and 7. Fig. 5 displays length and direction errors of the 100 trials under truncation, indexed along the X axis although they are each independent. As predicted in Section 4.2, the matrix representation outperforms quaternions by a factor of about two. The length errors under truncation show the nonzero mean one would expect. The same data are presented in Fig. 6 as two projections of the three-dimensional error vectors, all oriented so that the true answer lies along the Z axis, facing down, with its point at the origin, aiming in the -Z direction. The X-Y projection shows how the direction errors arise, and the X-Z projection shows how the length errors arise (with the approximate vectors all being too short). Exercise: Construct the rotation that points a given (x, y, z) vector along the Z axis (a) from rotations about the (X, Y, Z) axes, (b) using quaternions.

Figure 5 About Here

Figure 5: Case RaTr. (a) Length errors of 100 different random (rotation, vector) pairs at 10 bits precision under truncation. Circles plot the quaternion-vector errors, X's the matrix-vector errors. (b) As for (a), but direction errors.


[Figures 1, 2, and 3 appear here.]

Figure 6 About Here

Figure 6: The same case and data as the previous figure, plotted as error vectors, where the correct answer vector points to the origin and lies along the positive Z axis (see text). (a) Error vectors projected onto the X-Z plane. Circles plot the quaternion-vector errors, X's the matrix-vector errors. (b) As for (a), but projection onto the X-Y plane.

Figure 7 About Here

Figure 7: Case RaRd. (a), (b): as the previous figure, with the same input rotations and vectors, but with operations under rounding to 10 bits of precision.

Fig. 7 is similar to Fig. 6, except that rounding is employed (note the scale change). Not only does the length error bias disappear as expected, but the difference between the methods decreases, with the matrix implementation maintaining a slight advantage.

4.4 Iterated Rotations and Normalization

The second experiment is to iterate the same small rotation of φ around a particular direction, using three rotation composition methods. Quaternion multiplication and matrix multiplication produce iterated quaternion and matrix rotation representations. At the end of the iterations the resulting rotation representation is applied to the input vector once to determine the rotated vector. A third method, called the vector method here, iteratively applies the identical incremental rotation to the vector at each iterative step, preserving only the rotated vector. There are thus two cases (quaternion and matrix) of composing rotation representations, and one case (vector) of composing the application of the rotation to the vector. Previous results indicate that the matrix method is both more accurate and more efficient for applying a single rotation, and empirically the results are much better with this choice as well, so rotations are implemented in the vector method by matrix-vector multiplication.
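The two representation-composition primitives can be sketched as follows (a minimal illustration with invented names; quaternions are stored as (w, x, y, z), and the vector method simply applies the incremental matrix to the vector at each step):

```python
import math

def quat_mul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def mat_mul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def iterate_quat(q_inc, n):
    """Compose the same incremental rotation n times, quaternion style."""
    q = (1.0, 0.0, 0.0, 0.0)   # identity rotation
    for _ in range(n):
        q = quat_mul(q_inc, q)
    return q
```

For example, composing ten copies of a 0.1-radian rotation about Z yields (at full precision) the quaternion for a 1-radian rotation about Z, i.e. (cos 0.5, 0, 0, sin 0.5).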

These experiments were carried out under both truncation and rounding. Truncation is sometimes used to get at worst-case performance in a numerical analysis sense. Such an approach might be useful in analysis of a non-iterative computational scheme (such as a single rotation), but the cumulative scaling effects described in Section 4.2 overwhelm numerical precision effects. The errors engendered through iterative truncation turn out to be qualitatively similar to those that arise under rounding. These results are thus good for geometric intuition about the methods, and illustrate the importance of normalization. Fig. 8(a) and (b) show the (shocking) errors that result if no normalization takes place during the iterated rotation composition, and if the resulting non-normalized product quaternion or matrix rotation is applied to the input vector. This is case IS0 (Iterative Sequence, 0 normalizations). Without normalization, all the representations show cumulative errors that are easy to understand.


[Figures 5(a), 5(b), 6(a), and 6(b) appear here.]

Figure 8 About Here

Figure 8: Case IS0. Iterative transformation of the vector [1, 1, -1] about the axis [1, 1, 1] by 1 rad, 200 times. Calculations done with ten bits of mantissa precision under truncation. No normalization of the final representation before applying it to the vector. This choice of input parameters results in a representative performance, but not a normative one. (a) Result vector error projected on the X-Z plane. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. The matrix errors are a very tight helix up the Z axis, with the vector errors only slightly larger. (b) As for (a), but showing the X-Y projection. The matrix and vector errors project to the small dark blob at the origin.

First consider the vector representation. It is acted on at each iteration by an identical but errorful transform of eq. (6). We see it in both views spiraling away from the correct initial value, its length shrinking and its direction error increasing. The performance of the vector method might be improved by a factor of two on average by using the matrix transformation at every step (considering the arguments in Section 4.2 and the results shown in Fig. 6). Another important aspect of this method is its variability in performance for different rotations, arising from the variability in the accuracy of the first calculated rotation (again, Fig. 6).

As predicted in Section 4.2, the matrix representation is affected in its scaling property but not significantly in its rotational properties: there is effectively no directional error, but the result shrinks to less than half its original length in 200 iterations.

The quaternion representation acts, using eq. (6), by adding two cross product vectors to the original input vector; as the quaternion components become increasingly systematically errorful, they affect the result in the interesting way shown in the figure. One effect is the decaying magnitude of the quaternion; the other is that as the cumulative rotation φ increases from zero, the sin(φ/2) weighting of A, the vector part of the quaternion, starts at zero and returns to zero with a period of 2π/φ_t, where φ_t is the truncated value of the incremental rotation angle. Thus with that period the effective rotation error varies sinusoidally, and returns to zero as the magnitude of A does.

The same data are plotted for 400 iterations this time in Fig. 9. First consider length errors. This presentation emphasizes the exponential decay of the answer's length to zero with the matrix and vector representations. As the quaternion's magnitude ultimately decays to zero, the result vector will be unaffected by the rotation and will remain equal to the input vector, so its length error stabilizes at zero. The short-period sinusoidal character of the length error arises from the arguments of the last paragraph. The initially increasing envelope arises from the increasing inaccuracy caused by the truncation; the ultimate exponential decay of the envelope arises from the shrinking of the quaternion components.

Considering direction errors, the matrix and conic representations have (locally) linearly increasing errors as they spiral away from the true answer direction. (The linear increase cannot be maintained forever on the orientation sphere, of course.) The quaternion method's direction error ultimately becomes purely periodic as the answer vector becomes static. The direction error as a function of iteration number t and angle r between the rotation axis and the input vector is then of the form ((r sin t)² + (r - r cos t)²)^(1/2), or the length of a circular chord from a fixed point on the circle to a point moving around the circle at constant velocity. Before that time the direction error exhibits a more complex, two-humped form; this arises from the two separate cross product components of the vector increment in eq. (6). Exercise: Would there be any difference in direction and length errors if the rotation is accomplished by conjugation (eq. (4)) instead of eq. (6)?

[Figures 7(a), 7(b), 8(a), and 8(b) appear here.]

Figure 9 About Here

Figure 9: Case IS0 for 400 iterations (see text). (a) Length error of the result vector. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. The vector and matrix errors coincide. (b) As for (a), but showing direction error for the resulting vector. The vector and matrix errors are very close to zero.

Figure 10 About Here

Figure 10: Case IS0 under rounding instead of truncation. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. Though the forms of the errors are similar to the truncation case, their relative magnitudes are more equal. (a) Result vector projected on the X-Z plane. (b) As for (a), but showing the X-Y projection.

Next in this sequence, the last two demonstrations are repeated under rounding, not truncation. The expectation is that, due to the better approximation and possible error cancellations arising from high and low approximations, the errors in the representations will be less, but that systematic errors will still exist. The results (Figs. 10, 11) bear this out: the errors often seem like much smaller, noisy versions of the truncation errors. In Fig. 11 the quaternion errors display the fine two-humped structure longer, since it is not overwhelmed as quickly by shrinking quaternion magnitude. Rounding has a dramatically good effect on the systematic shrinking of matrices and quaternions under iterative self-multiplication.

So far we have, purely for intuition, dealt with the pathological case in which no normalization of rotation representations is performed. The arguably pathological case of approximation through truncation yields insight about the structure of systematic approximation errors in an iterative scheme. From here on we abandon truncation approximation (except in Section 4.7).

Figure 11 About Here

Figure 11: Data of the previous figure, showing length and direction errors. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. (b) Direction error.

[Figures 9(a), 9(b), 10(a), and 10(b) appear here.]

Figure 12 About Here

Figure 12: Case IS1. Rotating the vector (1, 1, -1) by 1 radian increments about the axis (1, 1, 1), with 10 bits precision under rounding, normalizing the quaternion and matrix representations prior to application to the input vector, and normalizing the final vector representation to its initial length. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. The vector representation has lower variance, and the quaternion exhibits marked periodicity. Only the first 100 iterations are shown; the rest are similar. (b) Direction error.

We now look at two different styles of normalization. In the first, no normalization is performed on intermediate representations, but the final quaternion, matrix, and vector are normalized. In the second, the intermediate quaternion, matrix, and vector are normalized at each step. Normalization of the matrix and quaternion is not dependent on the input data, but the length of the input vector must be computed and remembered for the vector representation to be normalized. This fact might be significant if many vectors are being treated at once.

Fig. 12(a) shows the length error when the final quaternion and matrix are normalized prior to application to the input vector, and the incrementally generated vector representation is normalized to its original length (here known to be unity). This case is called IS1, for Iterative Sequence, 1 normalization. The matrix normalization used is the one-crossproduct form. When only one normalization is being performed, the three types of normalization perform virtually identically. The computations were performed under rounding to the usual ten bits of precision. If its initial length is not remembered, the vector representation cannot be normalized, and its error is the same as the non-normalized case of Fig. 11, showing a linear error accumulation.

Fig. 12(b) compares direction error for the three representations, normalized as in Fig. 12(a). The comparative performance here is certainly not so clear cut, and the error accumulations resemble random walks. Alternatively, the performance could depend more on the particular choice of input data (rotation and input vector) than on numerical properties of the methods.

Finally, a natural question concerns the effects of normalization applied after every rotation composition. This is case ISN (N normalizations). Fig. 13 shows the results for length and direction. As might be expected for length error, the normalized vector has the best performance (presumably as accurate as machine precision allows). The quaternion result has a systematically negative mean, higher variance, and periodicity. The matrix length error displayed in Fig. 13(a) results from the no-crossproduct form of matrix normalization. The trend is not an illusion: carried out to 400 iterations, the length error shows a marked systematic increase in the negative direction. On the other hand, iteration of the one-crossproduct normalization resulted in a factor of two worsening of the magnitude of the direction errors for the matrix representation. Repeated normalizations seem to have adversely affected the accuracy of the vector representation after many iterations, but more work is needed to tell if this is a repeatable effect. In any event, it seems clear that the repeated normalizations are certainly not improving matters much in this case over case IS1.

[Figures 11(a), 11(b), 12(a), and 12(b) appear here.]

Figure 13 About Here

Figure 13: Case ISN. As for the previous figure, but normalizing the rotation representation at each iteration, using no-crossproduct matrix normalization, which improved direction errors but worsened length errors over the one-crossproduct form. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. (b) Direction error.

Figure 14 About Here

Figure 14: Case RS1. As for case IS1 (no normalization of intermediate representations), but using a sequence of random rotations. There is no difference in performance with the different matrix normalizations; no-crossproduct matrix normalization. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector (first 100 values; the rest are similar). (b) Direction error. The rather confusing results at least have some clear qualitative differences.

The main conclusion here is that the cost of normalizing quaternion or matrix representations after every rotation in a repeated sequence may not be repaid in results, and may in fact be actively harmful. The vector representation seems to do best of the three on length error and worst on direction error. More work (analytic or experimental) would be needed to clarify the performance ranks, or even to recommend or disqualify a representation for this task.

4.5 Concatenated Random Rotations

Here a sequence of random rotations is generated as described earlier, analogous to the sequence of identical rotations in the last experiment. They are applied cumulatively to an input vector. In each case computation is with 10 bits precision under rounding. The cases of a single final normalization and of normalizing every intermediate representation (called cases RS1 and RSN) were investigated. Both the no- and one-crossproduct forms were investigated. The results, comparable to those in cases IS1 and ISN (Figs. 12 and 13), are shown in Figs. 14 and 15.

The results indicate that the vector representation remains better than the other two for length errors, but for direction errors the results are unclear. It is hard to draw clear distinctions either between the normalization methods or the representations. Again the form of matrix normalization has some annoying effects, which motivate the next section.

[Figures 13(a), 13(b), 14(a), and 14(b) appear here.]

Figure 15 About Here

Figure 15: Case RSN. As for case RS1, but normalizing all intermediate representations. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error, using one-crossproduct matrix normalization. (b) Direction error, using the one-crossproduct form. (c) Direction error, using the no-crossproduct form.

Case       L/D    μ-vec      μ-mat      μ-quat     σ-vec     σ-mat     σ-quat    N
RaTr        L     -          0.006225   0.005026   -         0.001181  0.003369
            D     -          0.001787   0.004578   -         0.001094  0.002307
RaRd        L     -         -0.000310  -0.000225   -         0.001087  0.001263
            D     -          0.000965   0.001180   -         0.000542  0.000759
IS1         L     0.000276  -0.000260  -0.000262   0.000466  0.000857  0.001648  100
            D     0.003694   0.004063   0.001435   0.001705  0.002298  0.001123  100
ISN         L     0.000137  -0.001772  -0.000811   0.000533  0.001106  0.001312  100
            D     0.005646   0.004844   0.001606   0.002786  0.002872  0.001045  100
RS1         L    -0.000252  -0.000162  -0.000157   0.000521  0.001612  0.002022  100
            D     0.003710   0.005854   0.006065   0.002150  0.003686  0.006065  100
RSN         L     0.000164  -0.000602  -0.001097   0.000473  0.000703  0.001666  100
            D     0.002269   0.009830   0.005960   0.000874  0.004230  0.002236
            D     0.002269   0.005187   0.005960   0.000874  0.002248  0.002236

Table 2: Statistics for length (L) and direction (D) errors for some of the cases illustrated in this section. Cases RaTr and RaRd are the independent random rotations; IS1 and ISN are the iterative sequences (one normalization, and normalization of each intermediate representation); and RS1 and RSN the random sequences. Only the first 100 data values are used from the iterated rotations, to diminish the effect of longer-term trends. The second D line of RSN is with no-crossproduct normalization (which only affects the matrix representation).

4.6 Statistical Summary

Table 2 summarizes some of the data from the cases so far.

4.7 Matrix Normalization

Some experiments explored the performance of the three types of matrix normalization (no, one, or two cross products) in the context of iterated identical and random rotations.
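The paper's exact normalization forms are defined earlier in the report (not shown in this section), so the following is only a plausible sketch of a one-crossproduct renormalization, offered as an assumption: re-unitize the first row, orthogonalize and re-unitize the second row against it, and rebuild the third row as their cross product (the single cross product). Quaternion normalization, by contrast, is a simple rescaling.

```python
import math

def normalize_quat(q):
    """Rescale a quaternion to unit magnitude."""
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

def normalize_matrix_one_cross(M):
    """A plausible 'one crossproduct' matrix renormalization (assumption;
    the paper's exact form may differ)."""
    def unit(v):
        n = math.sqrt(sum(c * c for c in v))
        return [c / n for c in v]
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    def cross(a, b):
        return [a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0]]
    r0 = unit(M[0])
    d = dot(M[1], r0)
    r1 = unit([M[1][i] - d * r0[i] for i in range(3)])  # remove r0 component
    r2 = cross(r0, r1)                                   # the one cross product
    return [r0, r1, r2]
```

Since r0 and r1 are orthonormal after the first two steps, their cross product r2 is automatically a unit vector, so the result is orthonormal by construction.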

The experiments consist of two iterated identical rotations (Figs. 16, 17) under rounding, and two iterated random rotation sequences under truncation and rounding. Fig. 18 gives the direction errors for two sequences of random rotations applied to an input vector. In no case is there significant difference between the performances on the length error at 200 iterations, but the one- or two-crossproduct forms significantly outperform the no-crossproduct form over 400 iterations, since the latter exhibits systematically increasing negative length error and the former do not.

[Figures 15(a), 15(b), and 15(c) appear here.]

Figure 16 About Here

Figure 16: Iterated identical rotations of the unit vector in direction (1, 1, -1) by 1 radian about the axis (1, 1, 1), computed to 10 bits under rounding. Solid line - no cross products. Dotted line - one cross product. Dashed line - two cross products. (a) Length errors. (b) Direction errors.

Figure 17 About Here

Figure 17: As for the previous figure, but the rotation is of the unit vector in direction (1, 2, 3) by iterated rotations of 2 radians about the axis (3, -2, -1).

For direction error, few conclusions can be drawn about relative performance under iterated identical rotation sequences. It is perhaps surprising that the no-crossproduct form usually performs here at least as well as the other two over a small number of iterations, and that for random sequences there is no clear winner between normalization types, even approximating with truncation. Thus the symptoms noticed in the last two sections, involving long-term relative effects of the normalization form, were not noticed over 200 iterations. The one- and two-crossproduct forms may be more stable over long iteration, but the issue of optimal normalization remains open.

4.8 Average Error of Concatenated Random Rotations

Which representation is best, or worst? To wind up this overlong exercise in trying to derive structure from noise, let us consider average behavior for the task of random rotation sequences (perhaps of more general interest than iterated identical rotations). Technical decisions on normalization are based on the conclusions of previous sections: approximation is by rounding, with normalization only after the last rotation. Matrix normalization is done with the one-crossproduct method. Each run is a single experiment like that of Section 4.5, only performed (with the same data) at a number of different precisions.

Such a set of runs yields length and direction error surfaces plotted against the precision

Figure 18 About Here

Figure 18: Direction errors only, for a sequence of random rotations applied to an input vector. Legend as in previous two figures. (a) Truncation to 10 bits. (b) Rounding to 10 bits.


Figure 16(a) Figure 16(b)

Figure 17(a) Figure 17(b)

Figure 18(a) Figure 18(b)

Figure 19(a) Figure 19(b)

Figure 19(c) Figure 19(d)

Rep         Repeated Rot                 Random Seq
Vec(Mat)    (9N+17)* (6N+13)+ 1 sc       26N* 19N+ N sc
Vec(Quat)   (18N+4)* (12N)+ 1 sc         22N* 12N+ N sc
Quat        (16N+22)* (12N+12)+ 1 sc     (20N+18)* (12N+12)+ N sc
Matrix      (27N+26)* (18N+19)+ 1 sc     (44N+9)* (31N+6)+ N sc

Table 4: Operation counts for N-long rotation sequences. The long form of matrix multiplication is used here. Vec(Mat) is the vector representation with intermediate transformation by matrices, Vec(Quat) with transformation by the quaternion method. Multiplications, additions, and sine-cosine pairs are denoted by *, +, and sc.

matrix transformation) since matrix-vector multiplication costs one third of matrix-matrix multiplication

4.10 Conclusions and Questions

The conclusions we can draw at this point are the following:

1. Numerical simulation of variable-precision arithmetic has yielded some tentative conclusions, but in many interesting cases they are rather delicate and depend on niceties of the implementation. Accurate performance predictions may only be possible with faithful simulation of the particular hardware and software at issue.

2. Practical differences in numerical accuracy tend to disappear at reasonable precisions. After 200 iterations of a rotation, 53 bits of mantissa precision (the double representation in the C language) yields no error to 10 decimal places. With 24 bits (the float representation), effects were noticeable in the 5th to 7th decimal place with all methods.

3. Normalization is vital before quaternion or matrix representations are applied to vectors, but not as important during the composition of representations, and the extra numerical computation of frequent normalization may actually decrease accuracy in the final result.

4. The vector (with matrix transformation) and matrix representations for rotation sequences seem to outperform the quaternion for accuracy.

5. The vector (with matrix transformation) representation for rotation sequences, with a single normalization of vector length at the end, is most computationally efficient and has the least length error on the average.

6. Averaged over many trials, the difference in accuracy of the three methods amounts only to about a single bit of precision for a small number of rotation compositions (say 25 or fewer).

7. Matrices are the least computationally efficient on rotation sequences.


There are also some remaining basic questions:

1. What is a reasonable analytic approach to the numerical accuracy problem for interesting rotational tasks (say, sequences of random rotations under rounding)?

2. How should optimal matrix or quaternion normalization methods be defined and constructed?

5 Discussion

The area of robust computational geometry is becoming popular, with an annual international conference and increasingly sophisticated theoretical formulations. Very little of that work (if any) is related to rotation algorithms. Symbolic algebra systems can help with analysis [4]. A good treatment of computer arithmetic and rounding appears in [15], with an application to detecting line intersection. Serious qualitative representational problems arise when issues of topology (line connectivity) are at stake, and much of the work is on computing line intersection accurately with finite-precision arithmetic [19]. For instance, Milenkovic's analysis is based on facts about line intersection, such as the number of extra bits that are necessary to guarantee a specific precision of answer. Some of these facts are surprising: it takes at least three times the input precision to calculate a line intersection location to the same precision [11]. Robust polygon and polyhedron intersection algorithms, as well as various line-sweeping algorithms, are based on accurate line intersection algorithms [8].

Several points of comparison between rotation parameterizations and representations have been raised in the preceding sections. It is perhaps worth repeating that little evidence for dramatic differences in numerical accuracy has emerged, though the evidence might tend to favor matrices slightly and to condemn the vector representation.

One of the common applications of rotational transforms is to affect several vectors at once, either in the context of transforming a set of points around (as in graphics) or in handling coordinate systems represented as n points (n - 1 of them at points at infinity representing axis directions, and one representing the origin of coordinates, in a homogeneous coordinate system). In the first case the repeated application of the same transform will be more economical if the work is done in the matrix representation. In the second the same may well be true, but of course the whole point of homogeneous coordinates is to make the projective (a fortiori affine) transforms into linear ones implemented with matrix multiplication. This means, for example, that coordinate systems, rigid transformations including translation and rotation, and point projective transforms (simple camera models) can all be represented as matrices and composed with matrix multiplication, and that the operation of such transforms on vectors is simply matrix-vector multiplication. In this context homogeneous coordinates by definition require the matrix representation.

If the rotation at issue is a general one (say, specified in conic (φ, n) parameters), then conversion either to the quaternion or matrix form is easy. Thus the representations in this case do not dictate the semantics, or how one wants to think about the rotation. If the rotation is specified in terms of rotations about (X, Y, Z) coordinate axes, then the matrix representation is easily and robustly constructed.

Real-time applications involving concatenating rotations and applying them to a small number of vectors (perhaps one: the position of a robot hand) need detailed analysis to see which representation is the most efficient; similarly for more complex geometric calculations.

A common application of rotation representations in vision is to express the motion of the observer. Often the rotational motions, for mechanical or mathematical reasons, are expressed as rotations about some fixed axes, which in practice are usually orthogonal. Examples are the motions induced by pan and tilt camera platforms, or the expression of egomotion in terms of rotations about the observer's local X, Y, Z axes. In this case the matrix representation is the clear choice to maximize efficiency and intuition.

Reconstructing the rotation that transformed known points is a useful vision task that requires fitting data to a model of the transformation. The straightforward route, followed since Roberts' thesis [17], is to do a linear least-squares fit to the transform's matrix representation. This approach is perhaps risky, since the matrix elements are not independent and the error being minimized thus may not be the one desired. The alternative of using conic parameters has been successfully used [12] and may well be a better idea. Recently, more sophisticated Kalman filter methods have been used to derive motion from point matches through time [5]. Model-matching with constraints allows a model to be matched to a class of objects, and promising approaches are emerging that use hybrid methods of symbolic algebra (implementing polynomial triangulation to get explicit solutions for known constraints) followed by root-finding and sophisticated minimization techniques to search for minimum-error solutions [13, 14].

A less common but important set of applications is to reason about and manipulate rotations themselves. For example, one may want to sample rotation space evenly [9], or to create smoothly interpolated paths in rotation space between a given set of rotations [16]. In these cases the Euler-Rodrigues parameters are the most convenient.

Another example is the use, since 1932, of quaternions in the neurophysiological literature to describe eye kinematics [22]. Models of human three-dimensional oculomotor performance, especially for the vestibulo-ocular reflex and feedback models of saccades, involve computations on rotations that are much more stable when carried out in conic or Euler-Rodrigues parameters using quaternion arithmetic. Subtractive feedback laws adequate for simple (left or right) eye movements do not generalize to arbitrary rotational axes. The quaternion model uses multiplicative feedback, explains human performance better than similar models based on rotation matrices, and leads to explicit predictions such as a four-channel saccade computation [21].


6 Bibliography

[1] S. L. Altmann. Rotations, Quaternions, and Double Groups. Clarendon Press, Oxford, 1986.

[2] C. M. Brown. Gaze controls with interactions and delays. In DARPA Image Understanding Workshop (submitted to IEEE-TSMC), pages 200-218, May 1989.

[3] C. M. Brown. Prediction in gaze and saccade control. Technical Report OUEL 1771/89 (UR TR-295), Oxford University Dept. of Engineering Science (U. Rochester Comp. Sci. Dept.), May 1989.

[4] C. I. Connolly, D. Kapur, J. L. Mundy, and R. Weiss. GeoMeter: A system for modeling and algebraic manipulation. In Proceedings, DARPA Image Understanding Workshop, pages 797-804, May 1989.

[5] O. D. Faugeras, F. Lustman, and G. Toscani. Motion and structure from motion from point and line matches. In Proceedings, International Conference on Computer Vision, pages 25-34, June 1987.

[6] J. D. Foley and A. Van Dam. Fundamentals of Interactive Computer Graphics. Addison-Wesley, 1982.

[7] W. R. Hamilton. On quaternions; or on a new system of imaginaries in algebra. Philosophical Magazine, 3rd Ser., 25:489-495, 1844.

[8] C. Hoffmann and J. Hopcroft. Towards implementing robust geometric computations. In Proceedings, ACM Symposium on Computational Geometry, 1988.

[9] Berthold K. P. Horn. Robot Vision. MIT Press/McGraw-Hill, 1986.

[10] K. I. Kanatani. Group Theory in Computer Vision. Springer Verlag, 1989.

[11] V. J. Milenkovic. Robust geometric computations for vision and robotics. In Proceedings, DARPA Image Understanding Workshop, pages 764-773, May 1989.

[12] J. L. Mundy. Private communication, August 1989.

[13] J. L. Mundy. The application of symbolic algebra techniques to object modeling. Lecture, University of Oxford, August 1989.

[14] J. L. Mundy. Symbolic representation of object models. In preparation, August 1989.

[15] T. Ortmann, G. Thiemt, and C. Ullrich. Numerical stability of geometric algorithms. In Proceedings, ACM Symposium on Computational Geometry, 1987.

[16] K. S. Roberts, G. Bishop, and S. K. Ganapathy. Smooth interpolation of rotational motion. In Computer Vision and Pattern Recognition 1988, pages 724-729, June 1988.

[17] L. G. Roberts. Machine perception of three-dimensional solids. In J. T. Tippett et al., editors, Optical and Electro-Optical Information Processing, pages 159-197. MIT Press, 1968.

[18] O. Rodrigues. Des lois géométriques qui régissent les déplacements d'un système solide dans l'espace, et de la variation des coordonnées provenant de ses déplacements considérés indépendamment des causes qui peuvent les produire. Journal de Mathématiques Pures et Appliquées, 5:380-440, 1840.

[19] D. Salesin, J. Stolfi, and L. Guibas. Epsilon geometry: building robust algorithms from imprecise computations. In Proceedings, ACM Symposium on Computational Geometry, 1989.

[20] R. Taylor. Planning and execution of straight line manipulator trajectories. In J. M. Brady, J. M. Hollerbach, T. L. Johnson, T. Lozano-Perez, and M. T. Mason, editors, Robot Motion: Planning and Control. MIT Press, 1982.

[21] D. Tweed and T. Vilis. Implications of rotational kinematics for the oculomotor system in three dimensions. Journal of Neurophysiology, 58(4):832-849, 1987.

[22] G. Westheimer. Kinematics for the eye. Journal of the Optical Society of America, 47:967-974, 1932.



shows that infinitesimal rotations can be expressed elegantly in terms of orthogonal infinitesimal rotations. It is of course no accident that the matrix weighted by n_z in eq. (11) is the derivative of the matrix in eq. (8) evaluated at φ = 0. Further, it will be seen that if v' is the vector arising from rotating vector v by the rotation (δφ, n), then each component of v' is just the sum of various weighted components of v. Thus infinitesimal rotations, unlike general ones, commute.

3 Operation Counts

Given the speed and word length of modern computers, including those used in embedded applications like controllers, concern about operation counts and numerical accuracy of rotational operations may well be irrelevant. However, there is always a bottleneck somewhere, and in many applications, such as control, graphics, and robotics, it can still happen that computation of rotations is time consuming. Small repeated incremental rotations are especially useful in graphics and in some robot simulations [6, 2, 3]. In this case the infinitesimal rotations of the last section can be used if some care is exercised. To compute the new x' and y' that result from a small rotation θ around the Z axis, use

x' = x - y sin θ
y' = y + x' sin θ     (12)

The use of x' in the second equation maintains normalization (unit determinant of the corresponding rotation matrix). For graphics, cumulative error can be fought by replacing the rotated data by the original data when the total rotation reaches 2π [6].

Operation counts of course depend not just on the representation but on the algorithm implemented. As the application gets more complex it becomes increasingly likely that ameliorations in the form of cancellations, appropriate data structures, etc. will arise. Even simple operations sometimes offer surprising scope for ingenuity (compare eqs. (5) and (6)). Thus it is not clear that listing the characteristics of a few primitive operations is guaranteed to be helpful (I shall do it nevertheless). Realistic prediction of the savings possible with one representation or the other may require detailed algorithmic analysis. Taylor [20] works out operation counts for nine computations central to his straight-line manipulator trajectory task. In these reasonably complex calculations (up to 112 adds and 156 multiplies in one instance), quaternions often cut down the number of operations by a factor of 1.5 to 2. Taylor also works out operation counts for a number of basic rotation and vector transformation operations. This section mostly makes explicit the table appearing in Taylor's chapter detailing those latter results; Section 4.9 gives counts for iterated (identical and different) rotations.

3.1 Basic Operations

Recall the representations at issue:

Conic (φ, n) - axis n, angle φ.

Quaternion [λ, Λ] with λ = cos(φ/2), Λ = sin(φ/2) n.

Matrix: a 3 x 3 matrix.

For 3-vectors, if multiplication includes division, we have the following:

Vector Addition: 3 additions.

Scalar-Vector Product: 3 multiplications.

Dot Product: 3 multiplications, 2 additions.

Cross Product: 6 multiplications, 3 additions.

Vector Normalization: 6 multiplications, 2 additions, 1 square root.

3.2 Conversions

Conic to Quaternion: One division to get φ/2, one sine and one cosine operation, three multiplications to scale n. Total: 4 multiplications, 1 sine, 1 cosine.

Quaternion to Conic: Although it looks like one arctangent is all that is needed here after dividing the quaternion through by λ, that seems to lose track of the quadrants. Best seems simply finding φ/2 by an arccosine and a multiplication by 2.0 to get φ, then computing sin(φ/2) and dividing the vector part by that. Total: 1 arccosine, 1 sine, 4 multiplications. Special care is needed here for small (especially zero) values of φ.

Quaternion to Matrix: Eq. (7) is the basic tool. Although the matrix is orthonormal (thus one can compute a third row from the other two by vector cross product), that constraint does not help in this case. Computing all the necessary products takes 13 multiplications, after which 8 additions generate the first two rows. The last row costs only another 5 additions working explicitly from eq. (7), compared to the higher cost of a cross product. Total: 13 multiplications, 13 additions.
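One common form of this conversion (the exact layout of eq. (7) is assumed here, not quoted from the paper) computes the products once and reuses them:

```c
/* Unit quaternion (lambda, Lx, Ly, Lz) -> rotation matrix, one standard
   form of eq. (7).  The 10 products below are shared across all nine
   entries, which is how the low multiplication count arises. */
static void quat_to_matrix(const double q[4], double m[3][3])
{
    double ll = q[0]*q[0], xx = q[1]*q[1], yy = q[2]*q[2], zz = q[3]*q[3];
    double xy = q[1]*q[2], xz = q[1]*q[3], yz = q[2]*q[3];
    double lx = q[0]*q[1], ly = q[0]*q[2], lz = q[0]*q[3];

    m[0][0] = ll + xx - yy - zz;
    m[0][1] = 2.0 * (xy - lz);
    m[0][2] = 2.0 * (xz + ly);
    m[1][0] = 2.0 * (xy + lz);
    m[1][1] = ll - xx + yy - zz;
    m[1][2] = 2.0 * (yz - lx);
    m[2][0] = 2.0 * (xz - ly);
    m[2][1] = 2.0 * (yz + lx);
    m[2][2] = ll - xx - yy + zz;
}
```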

Conic to Matrix: One way to proceed is to compute the matrix whose elements are expressed in terms of the conic parameters (referred to in Section 2). Among other things it requires more trigonometric evaluations, so best seems the composition of conic-to-quaternion and quaternion-to-matrix conversions, for a total of 17 multiplications, 13 additions, 1 sine, 1 cosine.

Matrix to Quaternion: The trace of the matrix in eq. (7) is seen to be 4λ^2 - 1, and differences of symmetric off-diagonal elements yield quantities of the form 2λΛz, etc. Total: 5 multiplications, 6 additions, 1 square root.

Matrix to Conic: Easiest seems matrix to quaternion to conic.

3.3 Rotations

Vector by Matrix: Three dot products. Total: 9 multiplications, 6 additions.


Vector by Quaternion: Implement eq. (6). Total: 18 multiplications, 12 additions.
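A sketch of the cross-product form of eq. (6), v' = v + 2λ(Λ × v) + 2 Λ × (Λ × v), which realizes the 18-multiplication, 12-addition count (two cross products at 6* 3+ each, then 6* 6+ for the scaled sums):

```c
/* Rotate a 3-vector by a unit quaternion (lambda, Lambda) using the
   cross-product form of eq. (6). */
static void cross3(const double a[3], const double b[3], double c[3])
{
    c[0] = a[1]*b[2] - a[2]*b[1];
    c[1] = a[2]*b[0] - a[0]*b[2];
    c[2] = a[0]*b[1] - a[1]*b[0];
}

static void quat_rotate(const double q[4], const double v[3], double out[3])
{
    const double *L = q + 1;     /* vector part Lambda */
    double c1[3], c2[3];
    int i;
    cross3(L, v, c1);            /* Lambda x v */
    cross3(L, c1, c2);           /* Lambda x (Lambda x v) */
    for (i = 0; i < 3; i++)
        out[i] = v[i] + 2.0 * (q[0] * c1[i] + c2[i]);
}
```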

Compose Rotations (Matrix Product): Here it pays to exploit orthonormality rather than simply implementing matrix multiplication with nine dot products. Six dot products and one cross product are necessary. Total: 24 multiplications, 15 additions. This method uses only 2/3 of the information in the matrices and may be suspect for inexact representations (see below on normalization).

Compose Rotations (Quaternion Product): Implementing eq. (3) directly takes one dot product, one cross product, two vector additions, two vector-scalar products, one multiply, and one add. Total: 16 multiplications, 12 additions.

Rotate around X, Y, or Z Axis: To do the work of eq. (8) requires 1 sine, 1 cosine (unless they are already known), 4 multiplications, and 2 additions.

3.4 Normalization

In the conic representation one has little choice but to normalize the axis direction vector to unit norm and leave the rotation angle alone. In the quaternion and matrix representations the direction and magnitude are conflated in the parameters, and normalization of parameters affects them both.

Quaternion: Simply dividing all quaternion components through by the norm of the quaternion costs 8 multiplies, 3 additions, and one square root.

Matrix: An exact rotation matrix is orthonormal. Three simple normalization algorithms suggest themselves. In the no-crossproduct method each matrix row (in this paragraph meaning row or column) is normalized to a unit vector. Cost: 12 multiplies, 6 additions, 3 square roots. No rows are guaranteed orthogonal. The one-crossproduct method normalizes two rows and computes the third as their cross product. Cost: 18 multiplies, 7 additions, 2 square roots. This method does not guarantee the first two rows are orthogonal. The two-crossproduct method performs the one-crossproduct normalization, then recomputes the first row as the cross product of the second and third, say, for a total of 23 multiplies, 10 additions, and 2 square roots. Neither of the last two methods uses information from one matrix row. The best normalization procedure by definition should return the orthonormal matrix closest to the given one; the above procedures are approximations to this ideal. One idea might be to find the principal axis direction (the eigenvector corresponding to the largest real eigenvalue) and call that the normalized rotation axis. An exact rotation matrix M has one eigenvalue of unity and two conjugate complex ones. What then of the rotation angle? In the errorless matrix cos(φ) = (Trace(M) - 1)/2, and the trace is invariant under the eigenvalue calculation. Thus the angle can be calculated from three of the matrix elements. But does this choice minimize any reasonable error in rotation space? Indeed, what error should be minimized? The work here uses the no- and one-crossproduct forms, tries some empirical investigation in Section 4.7, and leaves the large question of optimal normalization as a research issue. Exercise: find the eigenvalues and eigenvectors of the matrix in eq. (8).


Operation             Quaternion           Matrix
Rep -> Conic          4* 1 acos 1 sin      9* 6+ 1 acos 1 sin 1 √
Conic -> Rep          4* 1 sin 1 cos       17* 13+ 1 sin 1 cos
Rep -> Other          13* 13+              5* 6+ 1 √
Rep o vector          18* 12+              9* 6+
Rep o Rep             16* 12+              24* 15+
X, Y, or Z Axis Rot   8* 4+                4* 2+
Normalize             8* 3+ 1 √            18* 7+ 2 √

Table 1: Summary of quaternion and matrix operation counts. Rep means the representation of the column heading; Other means that of the other column heading. Applying a rotation to a vector or another rotation is denoted by o. Multiplications, additions, and square roots are denoted by *, +, and √. Trigonometric functions are denoted sin, acos, etc.

3.5 Summary and Discussion

The operation counts are summarized in Table 1. They generally agree with those in [20], with a few differences. Taylor seems to have a quaternion-to-matrix conversion that takes 9 multiplies and 19 adds (Exercise: find it), and he implements his matrix-to-quaternion conversion with 3 sines, 3 cosines, and one arctangent instead of a single square root. Taylor normalizes matrix representations by the one-crossproduct method. The use of eq. (6) for vector rotation saves 4 multiplies over Taylor's counts. In any event, the differences in costs of the two representations do not seem as dramatic for these basic calculations as for Taylor's more complex robotic trajectory calculations, but iterative operations reveal a difference of a factor of two to three in effort (see Section 4.9). Generally, it may be impossible to predict realistically the extent of the savings possible with one representation over the other without a detailed algorithmic analysis.

4 Numerical Accuracy

4.1 Vector Rotation Tasks, Methods, and Representations

The manipulations of quaternions, matrices, and vectors presented in Section 3 are based on simple building blocks of vector addition, scaling, and dot and cross products. The numbers involved in the representations are generally well behaved, with vector components and trigonometric functions lying in the interval [0, 1] and rotation magnitudes and inverse trigonometric functions often in the interval [-2π, 2π]. Neither representation requires algorithms with obvious numerically dangerous operations like dividing by small differences.

It is often alleged that one representation is more numerically accurate than the other. However, it is not easy to see how to quantify this claim analytically. The matrix representation is redundant, representing the three parameters with nine numbers. Does this mean it is more vulnerable to numerical error, or less? Given that the algorithms are straightforward and simple, do their numerical results depend more on their structure or on the data they operate on? The individual numbers (often components of unit 3-vectors) have nonlinear constraints that may affect the numerical performance. Perhaps some of the alleged differences are thought to arise in microcomputer implementations with small word size; how does accuracy vary with the number of significant digits?

In general, the lack of analytical leverage on these questions motivated the empirical study reported in this section. Herein we compare representations for several common vector rotation tasks: rotation of a vector, iterated identical rotations of a vector, and iterated different (random) rotations of a vector. Error is measured either as an error vector or as the amount that the resulting vector's length and direction differ from the correct values.

More precisely, a correct answer is determined by computation at full (double) precision. When the rotation task is to iterate the same rotation, the correct answer can be (and is) calculated for the final cumulative rotation rather than calculated by iteration. The application of a rotation to a vector is accomplished by two methods: by vector-matrix multiplication as in eq. (8), and by Horn's quaternion-vector formula, eq. (6). The iteration of a rotation is accomplished in three ways: quaternion-quaternion multiplication, matrix-matrix multiplication, and the quaternion-vector formula. Thus the results of iterated rotation are represented as a quaternion, matrix, and (cumulatively rotated) vector, respectively.

The experiments use a variable-precision implementation of all the relevant real, vector, quaternion, and matrix operations, offering a runtime choice of the number of bits of mantissa precision and the approximation type (rounding or truncation). The most basic operations are approximation, which rounds or truncates the mantissa of a C-language double (double-precision real floating-point number) to the required precision, and arithmetic operations that approximate the operands, operate (at full precision), and approximate the result. From these basics are built vector cross and dot products, quaternion and matrix products, normalization routines, etc.

Vectors are specified or created randomly as (x, y, z) unit vectors at the highest machine precision. Rotations are specified or created randomly in the conic (φ, n) (rotation angle, rotation axis direction) representation, also at highest precision. Random unit vectors for input or rotation axes are not uniformly distributed in orientation space, being generated by normalizing three uniform variates to produce a unit vector. For random (φ, n) rotations, a random vector is produced for n and a fourth uniform variate is normalized to the range [-π, π]. Exercise: what do these distributions of vectors and rotations look like in orientation and rotation space?

The correct answer is computed at highest precision by applying a single rotation (by Nφ for an N-long iteration of the same rotation φ) around the rotation axis to the input vector, using eq. (6). Then, working at the lower experimental precision, the conic input representation is converted to quaternion and matrix representations by eqs. (2) and (7). Next, the experimental rotational task is carried out at the experimental lower precision. Finally, the error analysis consists of highest-precision comparison of the low-precision result with the high-precision correct answer.

When a reduced-precision vector v' is obtained, its error is determined by comparing it to the full-precision, non-iteratively rotated vector v_t. The error vector is v' - v_t; the length error ΔL and direction error ΔD are

ΔL = | ||v'|| - ||v_t|| |     (13)

ΔD = arccos(u' · u_t)     (14)

where u denotes the unit vector in the direction of the corresponding v.

In all the work here ||v|| = 1. Directional errors are reported in radians, length errors in units of the (unit) length of the vector to be transformed. The experiments are reported below in a rather discursive style; along the way a few side issues are addressed, and conclusions are drawn that affect the later choices of experimental parameters.

4.2 Analytic Approaches

Let us begin by considering the conversion of conic (φ, n) rotation parameters to quaternion and matrix form. For simplicity we might assume that θ = φ/2 is known exactly. Assume that at n bits of precision a number x is represented to a maximum additive absolute error of δ = 2^-n, which is a relative error of δ/x. Ignoring second-order terms, the product of two imprecise numbers (x + δ)(y + ε) has additive relative error of δ/x + ε/y. When looking up or computing a function, the error to first order in f(x + δ) is δf'(x). Thus the error in computing the cosine of an errorful number x + δ is sin(x)δ. If, as in the implementation of this paper, the function is computed initially to high precision and then approximated, the further approximation may add another error of δ, for a total absolute error of (1 + sin(x))δ. Of course the function computation could be set up to eliminate the last approximation error if the working precision were known in advance.

These considerations are a start at constructing formulae for the relative error in the conversion of conic to quaternion and matrix form via eqs. (2) and (7). However, missing is the constraint that n be a unit vector (that the matrix be orthonormal). Without this constraint the predictions of errors will be pessimistic. Interacting with the constraint is the usual simplistic assumption that the worst happens in every operation (i.e., errors never cancel). This usual assumption makes analysis possible but potentially irrelevant. In fact, errors in the matrix and quaternion components after conversion depend closely on the values of the inputs, and empirically do not exhibit predictable, data-independent structure.

Turning to a slightly grosser examination of the structure of rotation operations, consider the implementation used in the work below for the two competing equations for quaternion-mediated rotation (eq. (6)) and matrix-mediated rotation (multiplication of a vector by the matrix in eq. (7)). The quaternion formula is the sum of three vectors (the original and two cross products), whereas the matrix formula is a matrix-vector product. Going through the code for this operation, we find that the x' component of the result vector (x', y', z') after applying the quaternion formula for quaternion (λ, Λx, Λy, Λz) to the input vector (x, y, z) is computed by operations applied at their usual precedence in the following formula:

x' = x + 2λ(Λy z - Λz y) - 2Λz(Λz x - Λx z) + 2Λy(Λx y - Λy x)     (15)

and similarly for the y' and z' components. Both (x, y, z) and the axis n are unit vectors, and λ is a cosine.

The operations implementing the corresponding component by the matrix multiplication formula are

x' = m11 x + m12 y + m13 z     (16)

and similarly for the y' and z' components.

There is nothing in these two formulae to predict the significant statistical difference (especially under the more predictable truncation approximation) in the performance of these two methods (see, e.g., Fig. 6).

Pressing on, consider a still grosser description of the structure. Looking at eqs. (4) and (8), say that the error in quaternion and matrix (caused perhaps by truncation) is described as a scaling of the whole quaternion or matrix (by δ = (1 - kε) for some constant k, if the truncation error is ε). In that case the matrix-transformed vector is in error by a factor of δ, or an additive error of kε, and the quaternion-transformed vector by a factor of δ², or an additive factor of 2kε (to first order). Iterated matrix or quaternion self-multiplication exponentially damps the individual matrix and quaternion elements, which otherwise vary sinusoidally as the iteration proceeds. Under iterated application of the scaled matrix the direction of the resulting vector should be unaffected, but its length should shrink, decaying exponentially to zero.

In fact, this level of explanation suffices to explain the data resulting from a single rotation (Fig. 6): the quaternion errors are approximately twice the matrix errors for a single rotation under truncation. It also correctly predicts the behavior of matrix and quaternion elements under iteration of the same rotation (Fig. 3). It correctly predicts the form of error for the matrix transformation under iteration of the same rotation (Fig. 8), and forms part of the explanation for quaternion transformation error (see Section 4.4). Exercise: In Fig. 3 the period of the matrix element is half that of the quaternion element. This result is expected from inspection of eqs. (8) and (2), but looks inconsistent with eq. (7). Is it?

4.3 Single Rotations

First, some sample single rotations were applied:

v' = R o v     (17)


Figure 3 About Here

Figure 3: The components of the quaternion and matrix are scaled and shifted sinusoids. Here, representative quaternion and matrix elements' amplitudes decay exponentially through time under iterative rotation with truncation. Under rounding the cumulative decay is negligible. Dotted line - matrix element [0, 0]; solid line - quaternion element Λx.

Figure 4 About Here

Figure 4: Rotating the unit vector in direction [1, 1, -1] around the axis [1, 1, 1] by varying amounts: (a) 0.1 rad, (b) 1.0 rad, (c) 1.7 rad. The errors are plotted logarithmically against the number of bits of mantissa precision, under truncation. Hollow and dark squares - direction error with quaternion and matrix representations; star and cross - length error with quaternion and matrix representations. Variations in error magnitude and its rank ordering result from varying rotation magnitude. The largest error at 22 bits precision is 0.0001 (the smallest representable number is about 10^-9).

Fig. 4 shows the results, comparing the conic (eq. (6)) with matrix rotation computation under truncation. In general, increasing the rotation magnitude does not increase error, the error magnitudes are not predictably ordered, and the error magnitude is at worst 10^-6 with 22 bits of mantissa precision.

Next, 100 individual random rotation - random vector pairs were generated, with the rotation applied to the vector at a single precision (here, as throughout the paper until Section 4.8, 10 bits) under both truncation (case RaTr) and rounding (case RaRd). (Statistics for several cases are collected in Table 2 in Section 4.6.)

The results are shown in two different styles in Figs. 5, 6 and 7. Fig. 5 displays length and direction errors of the 100 trials under truncation, indexed along the X axis, although they are each independent. As predicted in Section 4.2, the matrix representation outperforms quaternions by a factor of about two. The length errors under truncation show the nonzero mean one would expect. The same data are presented in Fig. 6 as two projections of the three-dimensional error vectors, all oriented so that the true answer lies along the Z axis, facing down, with its point at the origin, aiming in the -Z direction. The X-Y projection shows how the direction errors arise, and the X-Z projection shows how the length errors arise (with the approximate vectors all being too short). Exercise: Construct the rotation that points a given (x, y, z) vector along the Z axis, (a) from rotations about the (X, Y, Z) axes, (b) using quaternions.
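One solution sketch for part (a) of the exercise (the helper names are mine, not the report's, and v is assumed not already parallel to Z): rotate about Z to bring v into the X-Z plane, then about Y to lay it along +Z.

```python
import math

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def point_to_z(v):
    # First rotate about Z to bring v into the X-Z plane,
    # then about Y to align it with +Z.
    x, y, z = v
    Rz = rot_z(-math.atan2(y, x))
    rho = math.hypot(x, y)          # length of v's projection onto X-Y
    Ry = rot_y(-math.atan2(rho, z))
    return matmul(Ry, Rz)

v = (1.0, 1.0, -1.0)
R = point_to_z(v)
print(matvec(R, v))   # ~ (0, 0, sqrt(3)): v now lies along +Z
```

Part (b) would build the same two rotations as unit quaternions and compose them by quaternion multiplication.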

Figure 5 About Here

Figure 5: Case RaTr. (a) Length errors of 100 different random (rotation, vector) pairs at 10 bits precision under truncation. Circles plot the quaternion-vector errors, Xs the matrix-vector errors. (b) As for (a), but direction errors.


Figure 6 About Here

Figure 6: The same case and data as the previous figure, plotted as error vectors, where the correct answer vector points to the origin and lies along the positive Z axis (see text). (a) Error vectors projected onto the X-Z plane. Circles plot the quaternion-vector errors, Xs the matrix-vector errors. (b) As for (a), but projection onto the X-Y plane.

Figure 7 About Here

Figure 7: Case RaRd. (a), (b) As the previous figure, with the same input rotations and vectors, but with operations under rounding to 10 bits of precision.

Fig. 7 is similar to Fig. 6, except that rounding is employed (note the scale change). Not only does the length error bias disappear as expected, but the difference between the methods decreases, with the matrix implementation maintaining a slight advantage.

4.4 Iterated Rotations and Normalization

The second experiment is to iterate the same small rotation about a particular direction, using three rotation composition methods. Quaternion multiplication and matrix multiplication produce iterated quaternion and matrix rotation representations; at the end of the iterations, the resulting rotation representation is applied to the input vector once to determine the rotated vector. A third method, called the vector method here, iteratively applies the identical incremental rotation to the vector at each iterative step, preserving only the rotated vector. There are thus two cases (quaternion and matrix) of composing rotation representations, and one case (vector) of composing the application of the rotation to the vector. Previous results indicate that the matrix method is both more accurate and more efficient for applying a single rotation, and empirically the results are much better with this choice as well, so rotations are implemented in the vector method by matrix-vector multiplication.
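The matrix and vector styles of composition can be sketched as follows (a minimal illustration at full machine precision; the report's experiments additionally round or truncate every intermediate result to few bits):

```python
import math

def axis_angle_matrix(n, phi):
    # Rodrigues formula for rotation by phi about unit axis n
    x, y, z = n
    c, s, C = math.cos(phi), math.sin(phi), 1.0 - math.cos(phi)
    return [[c + x*x*C,   x*y*C - z*s, x*z*C + y*s],
            [y*x*C + z*s, c + y*y*C,   y*z*C - x*s],
            [z*x*C - y*s, z*y*C + x*s, c + z*z*C]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

n = [1.0 / math.sqrt(3.0)] * 3
R = axis_angle_matrix(n, 1.0)
v0 = [1.0, 1.0, -1.0]
N = 200

# Matrix method: compose the representation, apply once at the end
M = [row[:] for row in R]
for _ in range(N - 1):
    M = matmul(M, R)
v_mat = matvec(M, v0)

# Vector method: apply the incremental rotation to the vector every step
v_vec = v0
for _ in range(N):
    v_vec = matvec(R, v_vec)

# At double precision the two agree closely; the experiments below show
# how they part company at 10-bit precision
print(max(abs(a - b) for a, b in zip(v_mat, v_vec)))
```

The quaternion method is analogous, with quaternion multiplication replacing matmul and a single conjugation at the end.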

These experiments were carried out under both truncation and rounding. Truncation is sometimes used to get at worst-case performance in a numerical analysis sense. Such an approach might be useful in analysis of a non-iterative computational scheme (such as a single rotation), but here the cumulative scaling effects described in Section 4.2 overwhelm numerical precision effects. The errors engendered through iterative truncation turn out to be qualitatively similar to those that arise under rounding. These results are thus good for geometric intuition about the methods, and illustrate the importance of normalization. Fig. 8(a) and (b) show the (shocking) errors that result if no normalization takes place during the iterated rotation composition, and if the resulting non-normalized product quaternion or matrix rotation is applied to the input vector. This is case IS0 (Iterative Sequence, 0 normalizations). Without normalization, all the representations show cumulative errors that are easy to understand.


Figure 8 About Here

Figure 8: Case IS0. Iterative transformation of the vector [1, 1, -1] about the axis [1, 1, 1] by 1 rad, 200 times. Calculations done with ten bits of mantissa precision under truncation. No normalization of the final representation before applying it to the vector. This choice of input parameters results in a representative performance, but not a normative one. (a) Result vector error projected on the X-Z plane. Solid line - vector representation, dotted line - matrix, dashed line - quaternion. The matrix errors are a very tight helix up the Z axis, with the vector errors only slightly larger. (b) As for (a), but showing the X-Y projection. The matrix and vector errors project to the small dark blob at the origin.

First consider the vector representation. It is acted on at each iteration by an identical but errorful transform of eq. (6). We see it in both views spiraling away from the correct initial value, its length shrinking and its direction error increasing. The performance of the vector method might be improved by a factor of two on average by using the matrix transformation at every step (considering the arguments in Section 4.2 and the results shown in Fig. 6). Another important aspect of this method is its variability in performance for different rotations, arising from the variability in the accuracy of the first calculated rotation (again, Fig. 6).

As predicted in Section 4.2, the matrix representation is affected in its scaling property but not significantly in its rotational properties: there is effectively no directional error, but the result shrinks to less than half its original length in 200 iterations.

The quaternion representation acts, using eq. (6), by adding two cross product vectors to the original input vector; as the quaternion components become increasingly systematically errorful, they affect the result in the interesting way shown in the figure. One effect is the decaying magnitude of the quaternion. The other is that as the cumulative rotation φ increases from zero, the sin(φ/2) weighting of A, the vector part of the quaternion, starts at zero and returns to zero with a period of 2π/t, where t is the truncated value of the incremental rotation angle. Thus with that period the effective rotation error varies sinusoidally, and returns to zero as the magnitude of A does.
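The combined effect on the quaternion elements (the damped sinusoid of Fig. 3) can be reproduced directly: with a uniform per-factor scale error δ = 1 − ε, the vector part of the n-fold product is δⁿ sin(nφ/2) along the axis. This is an idealized sketch, with ε standing in for the truncation error:

```python
import math

def qmul(a, b):
    # Hamilton product of quaternions stored as (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

phi = 1.0
s = math.sin(phi / 2) / math.sqrt(3.0)      # unit axis (1,1,1)/sqrt(3)
q = (math.cos(phi / 2), s, s, s)
eps = 1e-3                                  # hypothetical per-step scale error
q_err = tuple(c * (1.0 - eps) for c in q)

p = q_err
for n in range(2, 51):
    p = qmul(p, q_err)
    # closed form: x component of the n-fold product is
    # (1-eps)^n * sin(n*phi/2) / sqrt(3) -- a damped sinusoid
    pred = (1.0 - eps)**n * math.sin(n * phi / 2.0) / math.sqrt(3.0)
    assert abs(p[1] - pred) < 1e-9
print(p[1])
```

The element oscillates with the cumulative half-angle while its envelope decays exponentially, exactly the shape sketched in Fig. 3.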

The same data are plotted, this time for 400 iterations, in Fig. 9. First consider length errors. This presentation emphasizes the exponential decay of the answer's length to zero with the matrix and vector representations. As the quaternion's magnitude ultimately decays to zero, the result vector becomes unaffected by the rotation and remains equal to the input vector, so its length error stabilizes at zero. The short-period sinusoidal character of the length error arises from the arguments of the last paragraph. The initially increasing envelope arises from the increasing inaccuracy caused by the truncation; the ultimate exponential decay of the envelope arises from the shrinking of the quaternion components.

Considering direction errors, the matrix and conic representations have (locally) linearly increasing errors as they spiral away from the true answer direction. (The linear increase cannot be maintained forever on the orientation sphere, of course.)

Figure 9 About Here

Figure 9: Case IS0 for 400 iterations (see text). (a) Length error of result vector. Solid line - vector representation, dotted line - matrix, dashed line - quaternion. The vector and matrix errors coincide. (b) As for (a), but showing direction error for the resulting vector. The vector and matrix errors are very close to zero.

Figure 10 About Here

Figure 10: Case IS0 under rounding instead of truncation. Solid line - vector representation, dotted line - matrix, dashed line - quaternion. Though the forms of the errors are similar to the truncation case, their relative magnitudes are more equal. (a) Result vector projected on the X-Z plane. (b) As for (a), but showing the X-Y projection.

The quaternion method's direction error ultimately becomes purely periodic as the answer vector becomes static. The direction error as a function of iteration number t and angle r between the rotation axis and the input vector is then of the form ((r sin t)^2 + (r − r cos t)^2)^(1/2), or the length of a circular chord from a fixed point on the circle to a point moving around the circle at constant velocity. Before that time, the direction error exhibits a more complex two-humped form; this arises from the two separate cross product components of the vector increment in eq. (6). Exercise: Would there be any difference in direction and length errors if the rotation were accomplished by conjugation (eq. (4)) instead of eq. (6)?

Next in this sequence, the last two demonstrations are repeated under rounding, not truncation. The expectation is that due to the better approximation, and possible error cancellations arising from high and low approximations, the errors in the representations will be less, but that systematic errors will still exist. The results (Figs. 10, 11) bear this out; the errors often seem like much smaller, noisy versions of the truncation errors. In Fig. 11 the quaternion errors display the fine two-humped structure longer, since it is not overwhelmed as quickly by shrinking quaternion magnitude. Rounding has a dramatically good effect on the systematic shrinking of matrices and quaternions under iterative self-multiplication.

So far we have, purely for intuition, dealt with the pathological case in which no normalization of rotation representations is performed. The arguably pathological case of approximation through truncation yields insight about the structure of systematic approximation errors in an iterative scheme. From here on we abandon truncation approximation (except in Section 4.7).

Figure 11 About Here

Figure 11: Data of the previous figure, showing length and direction errors. Solid line - vector representation, dotted line - matrix, dashed line - quaternion. (a) Length error of result vector. (b) Direction error.


Figure 12 About Here

Figure 12: Case IS1. Rotating the vector (1, 1, -1) by 1 radian increments about the axis (1, 1, 1), with 10 bits precision under rounding, normalizing the quaternion and matrix representations prior to application to the input vector, and normalizing the final vector representation to its initial length. Solid line - vector representation, dotted line - matrix, dashed line - quaternion. (a) Length error of the result vector. The vector representation has lower variance, and the quaternion exhibits marked periodicity. Only the first 100 iterations are shown; the rest are similar. (b) Direction error.

We now look at two different styles of normalization. In the first, no normalization is performed on intermediate representations, but the final quaternion, matrix, and vector are normalized. In the second, the intermediate quaternion, matrix, and vector are normalized at each step. Normalization of the matrix and quaternion is not dependent on the input data, but the length of the input vector must be computed and remembered for the vector representation to be normalized. This fact might be significant if many vectors are being treated at once.

Fig. 12(a) shows the length error when the final quaternion and matrix are normalized prior to application to the input vector, and the incrementally generated vector representation is normalized to its original length (here known to be unity). This case is called IS1, for Iterative Sequence, 1 normalization. The matrix normalization used is the one crossproduct form; when only one normalization is being performed, the three types of normalization perform virtually identically. The computations were performed under rounding, to the usual ten bits of precision. If its initial length is not remembered, the vector representation cannot be normalized, and its error is the same as the non-normalized case of Fig. 11, showing a linear error accumulation.
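The report does not spell out the construction of the one crossproduct form in this excerpt. One plausible reading (an assumption, not the author's stated algorithm) rescales two rows to unit length and rebuilds the third with a single cross product:

```python
import math

def unit(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def normalize_one_crossproduct(M):
    # Rescale the first two rows; the third is recomputed with one
    # cross product. (Rows 0 and 1 are not re-orthogonalized against
    # each other -- that is what a second cross product would buy.)
    r0 = unit(M[0])
    r1 = unit(M[1])
    return [r0, r1, cross(r0, r1)]

# demo: a rotation matrix uniformly shrunk by 1% is restored to unit rows
c, s = math.cos(1.0), math.sin(1.0)
R = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
shrunk = [[0.99 * e for e in row] for row in R]
N = normalize_one_crossproduct(shrunk)
print([math.sqrt(sum(e * e for e in row)) for row in N])  # ~ [1.0, 1.0, 1.0]
```

Under this reading, the no crossproduct form would rescale all three rows, and the two crossproduct form would additionally rebuild one of the first two rows to restore orthogonality.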

Fig. 12(b) compares direction error for the three representations, normalized as in Fig. 12(a). The comparative performance here is certainly not so clear-cut, and the error accumulations resemble random walks. Alternatively, the performance could depend more on the particular choice of input data (rotation and input vector) than on numerical properties of the methods.

Finally, a natural question concerns the effects of normalization applied after every rotation composition. This is case ISN (N normalizations). Fig. 13 shows the results for length and direction. As might be expected, for length error the normalized vector has the best performance (presumably as accurate as machine precision allows). The quaternion result has a systematically negative mean, higher variance, and periodicity. The matrix length error displayed in Fig. 13(a) results from the no crossproduct form of matrix normalization. The trend is not an illusion; carried out to 400 iterations, the length error shows a marked systematic increase in the negative direction. On the other hand, iteration of the one crossproduct normalization resulted in a factor of two worsening of the magnitude of the direction errors for the matrix representation. Repeated normalizations seem to have adversely affected the accuracy of the vector representation after many iterations, but more work is needed to tell if this is a repeatable effect. In any event, it seems clear that the repeated normalizations are certainly not improving matters much in this case over case IS1.

Figure 13 About Here

Figure 13: Case ISN. As for the previous figure, but normalizing the rotation representation at each iteration, using the no crossproduct matrix normalization, which improved direction errors but worsened length errors over the one crossproduct form. Solid line - vector representation, dotted line - matrix, dashed line - quaternion. (a) Length error of the result vector. (b) Direction error.

Figure 14 About Here

Figure 14: Case RS1. As for case IS1 (no normalization of intermediate representations), but using a sequence of random rotations. There is no difference in performance with the different matrix normalizations; no crossproduct matrix normalization shown. Solid line - vector representation, dotted line - matrix, dashed line - quaternion. (a) Length error of the result vector (first 100 values; the rest are similar). (b) Direction error. The rather confusing results at least have some clear qualitative differences.

The main conclusion here is that the cost of normalizing quaternion or matrix representations after every rotation in a repeated sequence may not be repaid in results, and in fact may be actively harmful. The vector representation seems to do best of the three on length error and worst on direction error. More work (analytic or experimental) would be needed to clarify the performance ranks, or even to recommend or disqualify a representation for this task.

4.5 Concatenated Random Rotations

Here a sequence of random rotations is generated as described earlier, analogous to the sequence of identical rotations in the last experiment. They are applied cumulatively to an input vector. In each case computation is with 10 bits precision under rounding. The cases of a single final normalization and of normalizing every intermediate representation (called cases RS1 and RSN) were investigated, with both the no- and one-crossproduct forms. The results, comparable to those in cases IS1 and ISN (Figs. 12 and 13), are shown in Figs. 14 and 15.

The results indicate that the vector representation remains better than the other two for length errors, but for direction errors the results are unclear. It is hard to draw clear distinctions either between the normalization methods or the representations. Again the form of matrix normalization has some annoying effects, which motivate the next section.


Figure 15 About Here

Figure 15: Case RSN. As for case RS1, but normalizing all intermediate representations. Solid line - vector representation, dotted line - matrix, dashed line - quaternion. (a) Length error using the one crossproduct matrix normalization. (b) Direction error using the one crossproduct form. (c) Direction error using the no crossproduct form.

Case        μ vec      μ mat      μ quat     σ vec      σ mat      σ quat
RaTr   L    -          0.006225   0.005026   -          0.001181   0.003369
       D    -          0.001787   0.004578   -          0.001094   0.002307
RaRd   L    -         -0.000310  -0.000225   -          0.001087   0.001263
       D    -          0.000965   0.001180   -          0.000542   0.000759
IS1    L    0.000276  -0.000260  -0.000262   0.000466   0.000857   0.001648
(100)  D    0.003694   0.004063   0.001435   0.001705   0.002298   0.001123
ISN    L    0.000137  -0.001772  -0.000811   0.000533   0.001106   0.001312
(100)  D    0.005646   0.004844   0.001606   0.002786   0.002872   0.001045
RS1    L   -0.000252  -0.000162  -0.000157   0.000521   0.001612   0.002022
(100)  D    0.003710   0.005854   0.006065   0.002150   0.003686   0.006065
RSN    L    0.000164  -0.000602  -0.001097   0.000473   0.000703   0.001666
(100)  D    0.002269   0.009830   0.005960   0.000874   0.004230   0.002236
       D    0.002269   0.005187   0.005960   0.000874   0.002248   0.002236

Table 2: Statistics for length (L) and direction (D) errors for some of the cases illustrated in this section. Cases RaTr and RaRd are the independent random rotations; IS1 and ISN are the iterative sequences (one normalization, and normalization of each intermediate representation); RS1 and RSN are the random sequences. Only the first 100 data values are used from the iterated rotations, to diminish the effect of longer-term trends. The second RSN line is with no crossproduct normalization (which only affects the matrix representation).

4.6 Statistical Summary

Table 2 summarizes some of the data from the cases so far.

4.7 Matrix Normalization

Some experiments explored the performance of the three types of matrix normalization (no, one, or two cross products) in the context of iterated identical and random rotations.

The experiments consist of two iterated identical rotations (Figs. 16, 17) under rounding, and two iterated random rotation sequences under truncation and rounding. Fig. 18 gives the direction errors for two sequences of random rotations applied to an input vector. In no case is there significant difference between the performances on the length error at 200 iterations, but the one- or two-crossproduct forms significantly outperform the no-crossproduct form over 400 iterations, since the latter exhibits a systematically increasing negative length error and the former do not.

Figure 16 About Here

Figure 16: Iterated identical rotations of the unit vector in direction (1, 1, -1) by 1 radian about the axis (1, 1, 1), computed to 10 bits under rounding. Solid line - no cross products. Dotted line - one cross product. Dashed line - two cross products. (a) Length errors. (b) Direction errors.

Figure 17 About Here

Figure 17: As for the previous figure, but the rotation is of the unit vector in direction (1, 2, 3) by iterated rotations of 2 radians about the axis (3, -2, -1).

For direction error, few conclusions can be drawn about relative performance under iterated identical rotation sequences. It is perhaps surprising that the no crossproduct form usually performs here at least as well as the other two over a small number of iterations, and that for random sequences there is no clear winner between normalization types, even approximating with truncation. Thus the symptoms noticed in the last two sections, involving long-term relative effects of the normalization form, were not noticed over 200 iterations. The one- and two-crossproduct forms may be more stable over long iteration, but the issue of optimal normalization remains open.

4.8 Average Error of Concatenated Random Rotations

Which representation is best, or worst? To wind up this overlong exercise in trying to derive structure from noise, let us consider average behavior for the task of random rotation sequences (perhaps of more general interest than iterated identical rotations). Technical decisions on normalization are based on the conclusions of previous sections: approximation is by rounding, with normalization only after the last rotation, and matrix normalization done with the one crossproduct method. Each run is a single experiment like that of Section 4.5, only performed (with the same data) at a number of different precisions.
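A variable-precision simulation of this kind can be mimicked by forcing every arithmetic result through a mantissa-rounding helper. This is a sketch; the report's actual simulator details are not given in this excerpt:

```python
import math

def round_to_bits(x, bits=10):
    # Keep 'bits' bits of mantissa, rounding to nearest
    if x == 0.0:
        return 0.0
    e = math.floor(math.log2(abs(x)))   # exponent of the leading bit
    scale = 2.0 ** (bits - 1 - e)
    return round(x * scale) / scale

def trunc_to_bits(x, bits=10):
    # Same, but truncating toward zero (the report's other mode)
    if x == 0.0:
        return 0.0
    e = math.floor(math.log2(abs(x)))
    scale = 2.0 ** (bits - 1 - e)
    return math.trunc(x * scale) / scale

# every product or sum in the experiments would be wrapped like this:
def rmul(a, b, bits=10):
    return round_to_bits(a * b, bits)

print(round_to_bits(math.pi), trunc_to_bits(math.pi))
```

Rounding to b mantissa bits bounds the relative error by 2^-b, while truncation is one-sided, which is the source of the systematic shrinking seen earlier.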

Such a set of runs yields length and direction error surfaces plotted against the precision

Figure 18 About Here

Figure 18: Direction errors only, for a sequence of random rotations applied to an input vector. Legend as in the previous two figures. (a) Truncation to 10 bits. (b) Rounding to 10 bits.


Rep         Repeated Rot                       Random Seq
Vec(Mat)    (9N+17)×   (6N+13)+   1 sc         26N×       19N+       N sc
Vec(Quat)   (18N+4)×   (12N)+     1 sc         22N×       12N+       N sc
Quat        (16N+22)×  (12N+12)+  1 sc         (20N+18)×  (12N+12)+  N sc
Matrix      (27N+26)×  (18N+19)+  1 sc         (44N+9)×   (31N+6)+   N sc

Table 4: Operation counts for N-long rotation sequences. The long form of matrix multiplication is used here. Vec(Mat) is the vector representation with intermediate transformation by matrices, Vec(Quat) with transformation by the quaternion method. Multiplications, additions, and sine-cosine pairs are denoted ×, +, and sc.

matrix transformation) since matrix-vector multiplication costs one third of matrix-matrix multiplication

4.10 Conclusions and Questions

The conclusions we can draw at this point are the following:

1 Numerical simulation of variable precision arithmetic has yielded some tentative conclusions, but in many interesting cases they are rather delicate and depend on niceties of the implementation. Accurate performance predictions may only be possible with faithful simulation of the particular hardware and software at issue.

2 Practical differences in numerical accuracy tend to disappear at reasonable precisions. After 200 iterations of a rotation, 53 bits of mantissa precision (the double representation in the C language) yields no error to 10 decimal places. With 24 bits (the float representation), effects were noticeable in the 5th to 7th decimal place with all methods.

3 Normalization is vital before quaternion or matrix representations are applied to vectors, but is not as important during the composition of representations, and the extra numerical computation of frequent normalization may actually decrease accuracy in the final result.

4 The vector (with matrix transformation) and matrix representations for rotation sequences seem to outperform the quaternion for accuracy

5 The vector (with matrix transformation) representation for rotation sequences, with a single normalization of vector length at the end, is most computationally efficient and has the least length error on the average.

6 Averaged over many trials, the difference in accuracy of the three methods amounts only to about a single bit of precision for a small number of rotation compositions (say 25 or fewer).

7 Matrices are the least computationally efficient on rotation sequences


There are also some remaining basic questions:

1 What is a reasonable analytic approach to the numerical accuracy problem for interesting rotational tasks (say, sequences of random rotations under rounding)?

2 How should optimal matrix or quaternion normalization methods be defined and constructed?

5 Discussion

The area of robust computational geometry is becoming popular, with an annual international conference and increasingly sophisticated theoretical formulations. Very little of that work (if any) is related to rotation algorithms. Symbolic algebra systems can help with analysis [4]. A good treatment of computer arithmetic and rounding appears in [15], with an application to detecting line intersection. Serious qualitative representational problems arise when issues of topology (line connectivity) are at stake, and much of the work is on computing line intersection accurately with finite precision arithmetic [19]. For instance, Milenkovic's analysis is based on facts about line intersection, such as the number of extra bits that are necessary to guarantee a specific precision of answer. Some of these facts are surprising: it takes at least three times the input precision to calculate a line intersection location to the same precision [11]. Robust polygon and polyhedron intersection algorithms, as well as various line-sweeping algorithms, are based on accurate line intersection algorithms [8].

Several points of comparison between rotation parameterizations and representations have been raised in the preceding sections. It is perhaps worth repeating that little evidence for dramatic differences in numerical accuracy has emerged, though the evidence might tend to favor matrices slightly, and to condemn the vector representation.

One of the common applications of rotational transforms is to affect several vectors at once, either in the context of transforming a set of points around (as in graphics), or in handling coordinate systems represented as n points (n − 1 of them at points at infinity representing axis directions, and one representing the origin of coordinates, in a homogeneous coordinate system). In the first case the repeated application of the same transform will be more economical if the work is done in the matrix representation. In the second the same may well be true, but of course the whole point of homogeneous coordinates is to make the projective (a fortiori affine) transforms into linear ones implemented with matrix multiplication. This means, for example, that coordinate systems, rigid transformations including translation and rotation, and point projective transforms (simple camera models) can all be represented as matrices and composed with matrix multiplication, and that the operation of such transforms on vectors is simply matrix-vector multiplication. In this context, homogeneous coordinates by definition require the matrix representation.
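For instance (a generic sketch, not code from the report), a rigid motion packs into a 4×4 matrix so that composition and application are both plain matrix products, and points at infinity (w = 0) carry axis directions untouched by translation:

```python
def homogeneous(R, t):
    # 4x4 rigid transform from a 3x3 rotation R and translation t
    return [R[0] + [t[0]],
            R[1] + [t[1]],
            R[2] + [t[2]],
            [0.0, 0.0, 0.0, 1.0]]

def apply(T, p):
    # p is a homogeneous point (x, y, z, 1); direction vectors use w = 0,
    # which is how "points at infinity" encode axis directions
    return [sum(T[i][k] * p[k] for k in range(4)) for i in range(4)]

I = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
T = homogeneous(I, [1.0, 2.0, 3.0])
print(apply(T, [0.0, 0.0, 0.0, 1.0]))   # origin maps to the translation
print(apply(T, [1.0, 0.0, 0.0, 0.0]))   # a direction is unaffected by t
```

Composing two such transforms is the same 4×4 matrix multiplication, which is the economy the text describes.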

If the rotation at issue is a general one (say, specified in conic (φ, n) parameters), then conversion either to the quaternion or matrix form is easy. Thus the representations in this case do not dictate the semantics, or how one wants to think about the rotation. If the rotation is specified in terms of rotations about (X, Y, Z) coordinate axes, then the matrix representation is easily and robustly constructed.
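Both conversions are indeed short. A sketch using the standard formulas (the function names are mine):

```python
import math

def conic_to_quaternion(phi, n):
    # Euler-Rodrigues parameters for rotation by phi about unit axis n
    s = math.sin(phi / 2.0)
    return (math.cos(phi / 2.0), s * n[0], s * n[1], s * n[2])

def quaternion_to_matrix(q):
    # Standard unit-quaternion-to-rotation-matrix formula
    w, x, y, z = q
    return [[1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]]

q = conic_to_quaternion(1.0, (0.0, 0.0, 1.0))
M = quaternion_to_matrix(q)
# rotation about Z: M should be [[cos 1, -sin 1, 0], [sin 1, cos 1, 0], [0, 0, 1]]
print(M[0][0], M[0][1])
```

Going the other way (matrix to quaternion) is also closed-form, though it needs care in choosing the largest diagonal pivot for numerical stability.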

Real-time applications involving concatenating rotations and applying them to a small number of vectors (perhaps one: the position of a robot hand) need detailed analysis to see which representation is the most efficient. Similarly for more complex geometric calculations.

A common application of rotation representations in vision is to express the motion of the observer. Often the rotational motions, for mechanical or mathematical reasons, are expressed as rotations about some fixed axes, which in practice are usually orthogonal. Examples are the motions induced by pan and tilt camera platforms, or the expression of egomotion in terms of rotations about the observer's local X, Y, Z axes. In this case the matrix representation is the clear choice, to maximize efficiency and intuition.

Reconstructing the rotation that transformed known points is a useful vision task that requires fitting data to a model of the transformation. The straightforward route, followed since Roberts' thesis [17], is to do a linear least squares fit to the transform's matrix representation. This approach is perhaps risky, since the matrix elements are not independent, and the error being minimized thus may not be the one desired. The alternative of using conic parameters has been successfully used [12] and may well be a better idea. Recently, more sophisticated Kalman filter methods have been used to derive motion from point matches through time [5]. Model-matching with constraints allows a model to be matched to a class of objects, and promising approaches are emerging that use hybrid methods of symbolic algebra (implementing polynomial triangulation to get explicit solutions for known constraints), followed by root-finding and sophisticated minimization techniques to search for minimum-error solutions [13,14].

A less common but important set of applications is to reason about and manipulate rotations themselves. For example, one may want to sample rotation space evenly [9], or to create smoothly interpolated paths in rotation space between a given set of rotations [16]. In these cases the Euler-Rodrigues parameters are the most convenient.
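Smooth interpolation is where the unit-quaternion (Euler-Rodrigues) form is most natural. A minimal spherical linear interpolation (slerp) sketch, not taken from [16]:

```python
import math

def slerp(q0, q1, u):
    # Interpolate along the great-circle arc between unit quaternions,
    # u in [0, 1]
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                      # q and -q are the same rotation;
        q1 = tuple(-c for c in q1)     # flip to take the shorter arc
        dot = -dot
    dot = min(dot, 1.0)
    theta = math.acos(dot)             # angle between the quaternions
    if theta < 1e-9:                   # nearly identical: avoid 0/0
        return q0
    k0 = math.sin((1.0 - u) * theta) / math.sin(theta)
    k1 = math.sin(u * theta) / math.sin(theta)
    return tuple(k0 * a + k1 * b for a, b in zip(q0, q1))

qa = (1.0, 0.0, 0.0, 0.0)                          # identity
qb = (math.cos(0.5), 0.0, 0.0, math.sin(0.5))      # 1 rad about Z
qm = slerp(qa, qb, 0.5)
print(qm)   # halfway: 0.5 rad about Z
```

The result stays on the unit sphere of quaternions, with constant angular velocity in u, which is exactly what a matrix-element interpolation would not give.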

Another example is the use, since 1932, of quaternions in the neurophysiological literature to describe eye kinematics [22]. Models of human three-dimensional oculomotor performance, especially for the vestibulo-ocular reflex, and feedback models of saccades, involve computations on rotations that are much more stable when carried out in conic or Euler-Rodrigues parameters using quaternion arithmetic. Subtractive feedback laws adequate for simple (left or right) eye movements do not generalize to arbitrary rotational axes. The quaternion model uses multiplicative feedback, explains human performance better than similar models based on rotation matrices, and leads to explicit predictions such as a four-channel saccade computation [21].


6 Bibliography

[1] S. L. Altmann. Rotations, Quaternions and Double Groups. Clarendon Press, Oxford, 1986.

[2] C. M. Brown. Gaze controls with interactions and delays. In DARPA Image Understanding Workshop (submitted to IEEE-TSMC), pages 200-218, May 1989.

[3] C. M. Brown. Prediction in gaze and saccade control. Technical Report OUEL 1771/89 (UR TR-295), Oxford University Dept. of Engineering Science (U. Rochester Comp. Sci. Dept.), May 1989.

[4] C. I. Connolly, D. Kapur, J. L. Mundy, and R. Weiss. GeoMeter: A system for modeling and algebraic manipulation. In Proceedings DARPA Image Understanding Workshop, pages 797-804, May 1989.

[5] O. D. Faugeras, F. Lustman, and G. Toscani. Motion and structure from motion from point and line matches. In Proceedings International Conference on Computer Vision, pages 25-34, June 1987.

[6] J. D. Foley and A. Van Dam. Fundamentals of Interactive Computer Graphics. Addison-Wesley, 1982.

[7] W. R. Hamilton. On quaternions; or on a new system of imaginaries in algebra. Philosophical Magazine, 3rd Ser., 25:489-495, 1844.

[8] C. Hoffmann and J. Hopcroft. Towards implementing robust geometric computations. In Proceedings ACM Symposium on Computational Geometry, 1988.

[9] Berthold K. P. Horn. Robot Vision. MIT Press/McGraw-Hill, 1986.

[10] K. Kanatani. Group Theory in Computer Vision. Springer Verlag, 1989.

[11] V. J. Milenkovic. Robust geometric computations for vision and robotics. In Proceedings DARPA Image Understanding Workshop, pages 764-773, May 1989.

[12] J. L. Mundy. Private communication, August 1989.

[13] J. L. Mundy. The application of symbolic algebra techniques to object modeling. Lecture, University of Oxford, August 1989.

[14] J. L. Mundy. Symbolic representation of object models. In preparation, August 1989.

[15] T. Ortmann, G. Thiemt, and C. Ullrich. Numerical stability of geometric algorithms. In Proceedings ACM Symposium on Computational Geometry, 1987.

[16] K. S. Roberts, G. Bishop, and S. K. Ganapathy. Smooth interpolation of rotational motion. In Computer Vision and Pattern Recognition 1988, pages 724-729, June 1988.

[17] L. G. Roberts. Machine perception of three-dimensional solids. In J. T. Tippett et al., editors, Optical and Electro-Optical Information Processing, pages 159-197. MIT Press, 1968.

[18] O. Rodrigues. Des lois géométriques qui régissent les déplacements d'un système solide dans l'espace, et de la variation des coordonnées provenant de ses déplacements considérés indépendamment des causes qui peuvent les produire. Journal de Mathématiques Pures et Appliquées, 5:380-440, 1840.

[19] D. Salesin, J. Stolfi, and L. Guibas. Epsilon geometry: building robust algorithms from imprecise computations. In Proceedings ACM Symposium on Computational Geometry, 1989.

[20] R. Taylor. Planning and execution of straight line manipulator trajectories. In J. M. Brady, J. M. Hollerbach, T. L. Johnson, T. Lozano-Pérez, and M. T. Mason, editors, Robot Motion: Planning and Control. MIT Press, 1982.

[21] D. Tweed and T. Vilis. Implications of rotational kinematics for the oculomotor system in three dimensions. Journal of Neurophysiology, 58(4):832-849, 1987.

[22] G. Westheimer. Kinematics of the eye. Journal of the Optical Society of America, 47:967-974, 1957.



Quaternion: [λ, Λ], with λ = cos(φ/2), Λ = sin(φ/2) n.

Matrix: a 3 × 3 matrix.

For 3-vectors, if multiplication includes division, we have the following:

Vector Addition: 3 additions.

Scalar-Vector Product: 3 multiplications.

Dot Product: 3 multiplications, 2 additions.

Cross Product: 6 multiplications, 3 additions.

Vector Normalization: 6 multiplications, 2 additions, 1 square root.

3.2 Conversions

Conic to Quaternion: One division to get φ/2, one sine and one cosine operation, three multiplications to scale n. Total: 4 multiplications, 1 sine, 1 cosine.

Quaternion to Conic: Although it looks like one arctangent is all that is needed here (after dividing the quaternion through by λ), that seems to lose track of the quadrants. Best seems simply finding φ/2 by an arccosine, a multiplication by 2.0 to get φ, then computing sin(φ/2) and dividing the vector part by that. Total: 1 arccosine, 1 sine, 4 multiplications. Special care is needed here for small (especially zero) values of φ.
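These two conversions can be sketched as follows. This is a hypothetical Python transcription of the operation counts above, not the paper's own (C-language) code; the quaternion convention [λ, Λ] with λ = cos(φ/2), Λ = sin(φ/2) n is assumed.

```python
import math

def conic_to_quaternion(phi, n):
    # One division (phi/2), one sine, one cosine, three multiplications.
    half = phi / 2.0
    s = math.sin(half)
    return math.cos(half), (s * n[0], s * n[1], s * n[2])

def quaternion_to_conic(lam, L):
    # One arccosine for phi/2, a multiplication by 2.0 for phi, one sine,
    # then three divisions of the vector part.  Guard small phi, where the
    # axis direction is ill-defined.
    half = math.acos(max(-1.0, min(1.0, lam)))
    phi = 2.0 * half
    s = math.sin(half)
    if s < 1e-12:
        return phi, (1.0, 0.0, 0.0)   # arbitrary axis for a null rotation
    return phi, (L[0] / s, L[1] / s, L[2] / s)
```

Round-tripping a rotation through both conversions recovers the conic parameters, except at φ = 0 where the axis is arbitrary.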

Quaternion to Matrix: Eq. (7) is the basic tool. Although the matrix is orthonormal (thus one can compute a third row from the other two by vector cross product), that constraint does not help in this case. Computing all the necessary products takes 13 multiplications, after which 8 additions generate the first two rows. The last row costs only another 5 additions working explicitly from eq. (7), compared to the higher cost of a cross product. Total: 13 multiplications, 13 additions.

Conic to Matrix: One way to proceed is to compute the matrix whose elements are expressed in terms of the conic parameters (referred to in Section 2). Among other things, it requires more trigonometric evaluations, so best seems the composition of conic-to-quaternion and quaternion-to-matrix conversions, for a total of 17 multiplications, 13 additions, 1 sine, 1 cosine.

Matrix to Quaternion: The trace of the matrix in eq. (7) is seen to be 4λ² − 1, and differences of symmetric off-diagonal elements yield quantities of the form 2λΛ_z, etc. Total: 5 multiplications, 6 additions, 1 square root.
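A sketch of this conversion, again assuming the standard matrix form for eq. (7); the helper name is hypothetical. One square root recovers λ from the trace, and the antisymmetric off-diagonal differences recover the vector part.

```python
import math

def matrix_to_quaternion(R):
    # trace(R) = 4*lam**2 - 1, so lam = sqrt((trace + 1)/4); the
    # differences R[i][j] - R[j][i] are 4*lam times Lambda components.
    lam = math.sqrt((R[0][0] + R[1][1] + R[2][2] + 1.0) / 4.0)
    k = 1.0 / (4.0 * lam)          # assumes lam != 0, i.e. phi != pi
    return lam, ((R[2][1] - R[1][2]) * k,
                 (R[0][2] - R[2][0]) * k,
                 (R[1][0] - R[0][1]) * k)
```

Note the guard case: at φ = π the scalar part λ vanishes and a different extraction (from the diagonal) would be needed.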

Matrix to Conic: Easiest seems matrix to quaternion to conic.

3.3 Rotations

Vector by Matrix: Three dot products. Total: 9 multiplications, 6 additions.


Vector by Quaternion: Implement eq. (6). Total: 18 multiplications, 12 additions.

Compose Rotations (Matrix Product): Here it pays to exploit orthonormality rather than simply implementing matrix multiplication with nine dot products. Six dot products and one cross product are necessary. Total: 24 multiplications, 15 additions. This method uses only 2/3 of the information in the matrices and may be suspect for inexact representations (see below on normalization).

Compose Rotations (Quaternion Product): Implementing eq. (3) directly takes one dot product, one cross product, two vector additions, two vector-scalar products, one multiply, and one add. Total: 16 multiplications, 12 additions.
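Eq. (3) is not shown in this chunk; the sketch below assumes the standard quaternion product [λ₁λ₂ − Λ₁·Λ₂, λ₁Λ₂ + λ₂Λ₁ + Λ₁ × Λ₂], which matches the operation count given above.

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def qmul(q1, q2):
    # One dot product, one cross product, two vector-scalar products,
    # two vector additions, one multiply, one add:
    # 16 multiplications, 12 additions.
    l1, L1 = q1
    l2, L2 = q2
    c = cross(L1, L2)
    lam = l1*l2 - (L1[0]*L2[0] + L1[1]*L2[1] + L1[2]*L2[2])
    return lam, tuple(l1*L2[i] + l2*L1[i] + c[i] for i in range(3))
```

Composing two 45-degree rotations about the same axis yields the 90-degree rotation, a quick sanity check on the formula.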

Rotate around X, Y, or Z Axis: To do the work of eq. (8) requires 1 sine, 1 cosine (unless they are already known), 4 multiplications, and 2 additions.

3.4 Normalization

In the conic representation one has little choice but to normalize the axis direction vector to unit length and leave the rotation angle alone. In the quaternion and matrix representations the direction and magnitude are conflated in the parameters, and normalization of parameters affects them both.

Quaternion: Simply dividing all quaternion components through by the norm of the quaternion costs 8 multiplies, 3 additions, and one square root.
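A one-function sketch of this count (hypothetical code, divisions counted as multiplies):

```python
import math

def normalize_quaternion(lam, L):
    # Norm: 4 squarings, 3 additions, 1 square root; then 4 divisions:
    # 8 multiplies, 3 additions, 1 square root in all.
    n = math.sqrt(lam*lam + L[0]*L[0] + L[1]*L[1] + L[2]*L[2])
    return lam / n, (L[0] / n, L[1] / n, L[2] / n)
```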

Matrix: An exact rotation matrix is orthonormal. Three simple normalization algorithms suggest themselves. In the no-crossproduct method, each matrix row (in this paragraph meaning row or column) is normalized to a unit vector. Cost: 12 multiplies, 6 additions, 3 square roots. No rows are guaranteed orthogonal. The one-crossproduct method normalizes two rows and computes the third as their cross product. Cost: 18 multiplies, 7 additions, 2 square roots. This method does not guarantee the first two rows are orthogonal. The two-crossproduct method performs the one-crossproduct normalization, then recomputes the first row as the cross product of the second and third, say, for a total of 23 multiplies, 10 additions, and 2 square roots. Neither of the last two methods uses information from one matrix row.

The best normalization procedure by definition should return the orthonormal matrix closest to the given one. The above procedures are approximations to this ideal. One idea might be to find the principal axis direction (the eigenvector corresponding to the largest real eigenvalue) and call that the normalized rotation axis. An exact rotation matrix M has one eigenvalue of unity and two conjugate complex ones. What then of the rotation angle? In the errorless matrix, cos(φ) = (Trace(M) − 1)/2, and the trace is invariant under the eigenvalue calculation. Thus the angle can be calculated from three of the matrix elements. But does this choice minimize any reasonable error in rotation space? Indeed, what error should be minimized? The work here uses the no- and one-crossproduct forms, tries some empirical investigation in Section 4.7, and leaves the large question of optimal normalization as a research issue. Exercise: find the eigenvalues and eigenvectors of the matrix in eq. (8).
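The no- and one-crossproduct methods can be sketched as follows (hypothetical row-oriented code; the helper names are not from the paper):

```python
import math

def _unit(v):
    n = math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2])
    return (v[0]/n, v[1]/n, v[2]/n)

def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize_no_crossproduct(R):
    # Normalize each row to unit length; no rows guaranteed orthogonal.
    return [list(_unit(r)) for r in R]

def normalize_one_crossproduct(R):
    # Normalize two rows, recompute the third as their cross product;
    # the first two rows are still not guaranteed mutually orthogonal.
    r0, r1 = _unit(R[0]), _unit(R[1])
    return [list(r0), list(r1), list(_cross(r0, r1))]
```

On a uniformly shrunken matrix (the systematic error mode discussed in Section 4.2), either method restores the exact rotation.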


Operation              Quaternion         Matrix
Rep -> Conic           4*, 1 acos, 1 sin  9*, 6+, 1 acos, 1 sin, 1 √
Conic -> Rep           4*, 1 sin, 1 cos   17*, 13+, 1 sin, 1 cos
Rep -> Other           13*, 13+           5*, 6+, 1 √
Rep o vector           18*, 12+           9*, 6+
Rep o Rep              16*, 12+           24*, 15+
X, Y, or Z Axis Rot    8*, 4+             4*, 2+
Normalize              8*, 3+, 1 √        18*, 7+, 2 √

Table 1: Summary of quaternion and matrix operation counts. Rep means the representation of the column heading; Other means that of the other column heading. Applying a rotation to a vector or another rotation is denoted by o. Multiplications, additions, and square roots are denoted by *, +, and √. Trigonometric functions are denoted sin, acos, etc.

3.5 Summary and Discussion

The operation counts are summarized in Table 1. They generally agree with those in [20], with a few differences. Taylor seems to have a quaternion-to-matrix conversion that takes 9 multiplies and 19 adds (Exercise: find it), and he implements his matrix-to-quaternion conversion with 3 sines, 3 cosines, and one arctangent instead of a single square root. Taylor normalizes matrix representations by the one-crossproduct method. The use of eq. (6) for vector rotation saves 4 multiplies over Taylor's counts. In any event, the differences in costs of the two representations do not seem as dramatic for these basic calculations as for Taylor's more complex robotic trajectory calculations, but iterative operations reveal a difference of a factor of two to three in effort (see Section 4.9). Generally, it may be impossible to predict realistically the extent of the savings possible with one representation over the other without a detailed algorithmic analysis.

4 Numerical Accuracy

4.1 Vector Rotation Tasks, Methods, and Representations

The manipulations of quaternions, matrices, and vectors presented in Section 3 are based on simple building blocks of vector addition, scaling, and dot and cross products. The numbers involved in the representations are generally well behaved, with vector components and trigonometric functions lying in the interval [0, 1] and rotation magnitudes and inverse trigonometric functions often in the interval [−2π, 2π]. Neither representation requires algorithms with obvious numerically dangerous operations, like dividing by small differences.

It is often alleged that one representation is more numerically accurate than the other. However, it is not easy to see how to quantify this claim analytically. The matrix representation is redundant, representing the three parameters with nine numbers. Does this mean it is more vulnerable to numerical error, or less? Given that the algorithms are straightforward and simple, do their numerical results depend more on their structure or on the data they operate on? The individual numbers (often components of unit 3-vectors) have nonlinear constraints that may affect the numerical performance. Perhaps some of the alleged differences are thought to arise in microcomputer implementations with small word size; how does accuracy vary with the number of significant digits?

In general, the lack of analytical leverage on these questions motivated the empirical study reported in this section. Herein we compare representations for several common vector rotation tasks: rotation of a vector, iterated identical rotations of a vector, and iterated different (random) rotations of a vector. Error is measured either as an error vector or as the amount that the resulting vector's length and direction differ from the correct values.

More precisely, a correct answer is determined by computation at full (double) precision. When the rotation task is to iterate the same rotation, the correct answer can be (and is) calculated for the final cumulative rotation rather than calculated by iteration. The application of a rotation to a vector is accomplished by two methods: by vector-matrix multiplication as in eq. (8), and by Horn's quaternion-vector formula, eq. (6). The iteration of a rotation is accomplished in three ways: quaternion-quaternion multiplication, matrix-matrix multiplication, and the quaternion-vector formula. Thus the results of iterated rotation are represented as a quaternion, matrix, and (cumulatively rotated) vector, respectively.

The experiments use a variable-precision implementation of all the relevant real, vector, quaternion, and matrix operations, offering a runtime choice of the number of bits of mantissa precision and the approximation type (rounding or truncation). The most basic operations are approximation, which rounds or truncates the mantissa of a C-language double (double-precision real floating point number) to the required precision, and arithmetic operations that approximate the operands, operate (at full precision), and approximate the result. From these basics are built vector cross and dot products, quaternion and matrix products, normalization routines, etc.
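A minimal sketch of such variable-precision arithmetic (hypothetical Python, assuming the mantissa convention of math.frexp rather than the paper's exact C implementation):

```python
import math

def approximate(x, bits, mode="round"):
    # Round or truncate the mantissa of a double to `bits` bits.
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)               # x = m * 2**e with 0.5 <= |m| < 1
    scale = 1 << bits
    q = round(m * scale) if mode == "round" else math.trunc(m * scale)
    return math.ldexp(q / scale, e)

def approx_mul(a, b, bits, mode="round"):
    # Approximate the operands, multiply at full precision,
    # and approximate the result.
    return approximate(approximate(a, bits, mode) *
                       approximate(b, bits, mode), bits, mode)
```

Truncation always moves a positive value toward zero, which is the source of the systematic shrinking discussed in Section 4.2; rounding errors can cancel.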

Vectors are specified or created randomly as (x, y, z) unit vectors at the highest machine precision. Rotations are specified or created randomly in the conic (φ, n) (rotation angle, rotation axis direction) representation, also at highest precision. Random unit vectors for input or rotation axes are not uniformly distributed in orientation space, being generated by normalizing three uniform variates to produce a unit vector. For random (φ, n) rotations, a random vector is produced for n, and a fourth uniform variate is normalized to the range [−π, π]. Exercise: what do these distributions of vectors and rotations look like in orientation and rotation space?
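The generation scheme can be sketched as follows. The paper does not specify the range of the uniform variates; this hypothetical sketch assumes [−1, 1].

```python
import math
import random

def random_unit_vector():
    # Normalizing three uniform variates (assumed here on [-1, 1]) does
    # NOT give a uniform distribution of directions on the sphere --
    # directions toward the cube corners are over-represented.
    v = (random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
    n = math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2])
    return (v[0]/n, v[1]/n, v[2]/n)

def random_conic_rotation():
    # (phi, n): a random axis plus a fourth uniform variate
    # normalized to the range [-pi, pi].
    return random.uniform(-math.pi, math.pi), random_unit_vector()
```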

The correct answer is computed at highest precision by applying a single rotation (by Nφ for an N-long iteration of the same rotation φ) around the rotation axis to the input vector, using eq. (6). Then, working at the lower experimental precision, the conic input representation is converted to quaternion and matrix representations by eqs. (2) and (7). Next, the experimental rotational task is carried out at the experimental lower precision. Finally, the error analysis consists of highest-precision comparison of the low-precision result with the high-precision correct answer.

When a reduced-precision vector v′_l is obtained, its error is determined by comparing it to the full-precision, non-iteratively rotated vector v′. The error vector is v′_l − v′; the length error ΔL and direction error ΔD are

ΔL = | ‖v′_l‖ − ‖v′‖ |   (13)

ΔD = arccos(v̂′_l · v̂′)   (14)

where v̂ is the unit vector in the direction of v.

In all the work here ‖v‖ = 1. Directional errors are reported in radians; length errors in units of the (unit) length of the vector to be transformed. The experiments are reported below in a rather discursive style. Along the way a few side issues are addressed, and conclusions are drawn that affect the later choices of experimental parameters.
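Eqs. (13) and (14) translate directly into code (hypothetical helper names):

```python
import math

def _norm(v):
    return math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2])

def length_error(v_low, v_ref):
    # Delta-L = | ||v_low|| - ||v_ref|| |   (eq. 13)
    return abs(_norm(v_low) - _norm(v_ref))

def direction_error(v_low, v_ref):
    # Delta-D = arccos(unit(v_low) . unit(v_ref))   (eq. 14), in radians.
    # The dot product is clamped to [-1, 1] to guard acos against
    # floating-point overshoot for nearly parallel vectors.
    d = sum(a*b for a, b in zip(v_low, v_ref)) / (_norm(v_low) * _norm(v_ref))
    return math.acos(max(-1.0, min(1.0, d)))
```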

4.2 Analytic Approaches

Let us begin to consider the conversion of conic (φ, n) rotation parameters to quaternion and matrix form. For simplicity we might assume that θ = φ/2 is known exactly. Assume that at n bits of precision, a number x is represented to a maximum additive absolute error of δ = 2^-n, which is a relative error of δ/x. Ignoring second-order terms, the product of two imprecise numbers (x + δ)(y + ε) has an additive error of εx + δy. When looking up or computing a function, the error to first order in f(x + δ) is δf′(x). Thus the error in computing the cosine of an errorful number x + δ is sin(x)δ. If, as in the implementation of this paper, the function is computed initially to high precision and then approximated, the further approximation may add another error of δ, for a total absolute error of (1 + sin(x))δ. Of course, the function computation could be set up to eliminate the last approximation error if the working precision were known in advance.

These considerations are a start at constructing formulae for the relative error in the conversion of conic to quaternion and matrix form via eqs. (2) and (7). However, missing is the constraint that n be a unit vector (that the matrix be orthonormal). Without this constraint the predictions of errors will be pessimistic. Interacting with the constraint is the usual simplistic assumption that the worst happens in every operation (i.e., errors never cancel). This usual assumption makes analysis possible but potentially irrelevant. In fact, errors in the matrix and quaternion components after conversion depend closely on the values of the inputs and empirically do not exhibit predictable, data-independent structure.

Turning to a slightly grosser examination of the structure of rotation operations, consider the implementation used in the work below for the two competing equations for quaternion-mediated rotation (eq. (6)) and matrix-mediated rotation (multiplication of a vector by the matrix in eq. (7)). The quaternion formula is the sum of three vectors (the original and two cross products), whereas the matrix formula is a matrix-vector product. Going through the code for this operation, we find that the x′ component of the result vector (x′, y′, z′) after applying the quaternion formula for quaternion (λ, Λ_x, Λ_y, Λ_z) to the input vector (x, y, z) is computed by operations applied at their usual precedence in the following formula:

x′ = x + 2λ(Λ_y z − Λ_z y) − 2Λ_z(Λ_z x − Λ_x z) + 2Λ_y(Λ_x y − Λ_y x)   (15)

and similarly for the y′ and z′ components. Both (x, y, z) and Λ are unit vectors, and λ is a cosine.

The operations implementing the corresponding component by the matrix multiplication formula are

x′ = (2(λ² + Λ_x²) − 1) x + 2(Λ_x Λ_y − λΛ_z) y + 2(Λ_x Λ_z + λΛ_y) z   (16)

and similarly for the y′ and z′ components.

There is nothing in these two formulae to predict the significant statistical difference (especially under the more predictable truncation approximation) in the performance of these two methods (see, e.g., Fig. 6).

Pressing on, consider a still grosser description of the structure. Looking at eqs. (4) and (8), say that the error in quaternion and matrix (caused perhaps by truncation) is described as a scaling of the whole quaternion or matrix (by δ = (1 − kε) for some constant k, if the truncation error is ε). In that case the matrix-transformed vector is in error by a factor of δ, or an additive error of kε, and the quaternion-transformed vector by a factor of δ² (because of the conjugation), or an additive error of 2kε (to first order). Iterated matrix or quaternion self-multiplication exponentially damps the individual matrix and quaternion elements, which otherwise vary sinusoidally as the iteration proceeds. Under iterated application of the scaled matrix, the direction of the resulting vector should be unaffected, but its length should shrink, decaying exponentially to zero.

In fact, this level of explanation suffices to explain the data resulting from a single rotation (Fig. 6): the quaternion errors are approximately twice the matrix errors for a single rotation under truncation. It also correctly predicts the behavior of matrix and quaternion elements under iteration of the same rotation (Fig. 3). It correctly predicts the form of error for the matrix transformation under iteration of the same rotation (Fig. 8) and forms part of the explanation for quaternion transformation error (see Section 4.4). Exercise: In Fig. 3 the period of the matrix element is half that of the quaternion element. This result is expected from inspection of eqs. (8, 2), but looks inconsistent with eq. (7). Is it?

4.3 Single Rotations

First, some sample single rotations were applied:

v′ = R ∘ v   (17)


Figure 3 About Here

Figure 3: The components of the quaternion and matrix are scaled and shifted sinusoids. Here representative quaternion and matrix elements' amplitudes decay exponentially through time under iterative rotation with truncation. Under rounding the cumulative decay is negligible. Dotted line - matrix element [0][0]; solid line - a quaternion element.

Figure 4 About Here

Figure 4: Rotating the unit vector in direction [1, 1, −1] around the axis [1, 1, 1] by varying amounts: (a) 0.1 rad, (b) 1.0 rad, (c) 1.7 rad. The errors are plotted logarithmically against the number of bits of mantissa precision, under truncation. Hollow and dark squares - direction error with quaternion and matrix representations. Star and cross - length error with quaternion and matrix representations. Variations in error magnitude and its rank ordering result from varying rotation magnitude. The largest error at 22 bits precision is 0.0001 (the smallest representable number is about 10^-9).

Fig. 4 shows the results, comparing the conic (eq. (6)) with matrix rotation computation under truncation. In general, increasing the rotation magnitude does not increase error, the error magnitudes are not predictably ordered, and the error magnitude is at worst 10^-6 with 22 bits of mantissa precision.

Next, 100 individual random rotation-random vector pairs were generated, with the rotation applied to the vector at a single precision (here, as throughout the paper until Section 4.8, 10 bits) under both truncation (case RaTr) and rounding (case RaRd). (Statistics for several cases are collected in Table 2 in Section 4.6.)

The results are shown in two different styles in Figs. 5, 6, and 7. Fig. 5 displays length and direction errors of the 100 trials under truncation, indexed along the X axis although they are each independent. As predicted in Section 4.2, the matrix representation outperforms quaternions by a factor of about two. The length errors under truncation show the nonzero mean one would expect. The same data are presented in Fig. 6 as two projections of the three-dimensional error vectors, all oriented so that the true answer lies along the Z axis, facing down, with its point at the origin, aiming in the −Z direction. The X−Y projection shows how the direction errors arise, and the X−Z projection shows how the length errors arise (with the approximate vectors all being too short). Exercise: Construct the rotation that points a given (x, y, z) vector along the Z axis (a) from rotations about the (X, Y, Z) axes, (b) using quaternions.

Figure 5 About Here

Figure 5: Case RaTr. (a) Length errors of 100 different random (rotation, vector) pairs at 10 bits precision under truncation. Circles plot the quaternion-vector errors, X's the matrix-vector errors. (b) As for (a), but direction errors.


[Figures 1, 2, and 3 appear here]

Figure 6 About Here

Figure 6: The same case and data as the previous figure, plotted as error vectors, where the correct answer vector points to the origin and lies along the positive Z axis (see text). (a) Error vectors projected onto the X−Z plane. Circles plot the quaternion-vector errors, X's the matrix-vector errors. (b) As for (a), but projection onto the X−Y plane.

Figure 7 About Here

Figure 7: Case RaRd. (a), (b) as the previous figure, with the same input rotations and vectors but with operations under rounding to 10 bits of precision.

Fig. 7 is similar to Fig. 6, except that rounding is employed (note the scale change). Not only does the length error bias disappear as expected, but the difference between the methods decreases, with the matrix implementation maintaining a slight advantage.

4.4 Iterated Rotations and Normalization

The second experiment is to iterate the same small rotation of φ around a particular direction, using three rotation composition methods. Quaternion multiplication and matrix multiplication produce iterated quaternion and matrix rotation representations. At the end of the iterations the resulting rotation representation is applied to the input vector once to determine the rotated vector. A third method, called the vector method here, iteratively applies the identical incremental rotation to the vector at each iterative step, preserving only the rotated vector. There are thus two cases (quaternion and matrix) of composing rotation representations and one case (vector) of composing the application of the rotation to the vector. Previous results indicate that the matrix method is both more accurate and more efficient for applying a single rotation, and empirically the results are much better with this choice as well, so rotations are implemented in the vector method by matrix-vector multiplication.
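At full double precision the composition methods agree closely; the divergence the paper studies appears only at reduced precision. A hypothetical sketch of the quaternion and vector methods (standard quaternion formulas assumed; helpers repeated for self-containment):

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def qmul(q1, q2):
    (l1, L1), (l2, L2) = q1, q2
    c = cross(L1, L2)
    lam = l1*l2 - (L1[0]*L2[0] + L1[1]*L2[1] + L1[2]*L2[2])
    return lam, tuple(l1*L2[i] + l2*L1[i] + c[i] for i in range(3))

def qrot(q, v):
    lam, L = q
    w = cross(L, v)
    u = cross(L, w)
    return tuple(v[i] + 2.0*lam*w[i] + 2.0*u[i] for i in range(3))

def iterate(phi, n, v, steps):
    # Quaternion method: compose the incremental rotation `steps` times,
    # then apply the product once.  Vector method: rotate v at every step.
    h = phi / 2.0
    dq = (math.cos(h), tuple(math.sin(h) * c for c in n))
    q, vv = (1.0, (0.0, 0.0, 0.0)), v
    for _ in range(steps):
        q = qmul(dq, q)
        vv = qrot(dq, vv)
    return qrot(q, v), vv
```

Running this at double precision with, say, 100 steps of π/100 about the z axis rotates (1, 0, 0) to (−1, 0, 0) by both methods; the interesting behavior emerges when each operation is filtered through the reduced-precision arithmetic of Section 4.1.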

These experiments were carried out under both truncation and rounding. Truncation is sometimes used to get at worst-case performance in a numerical analysis sense. Such an approach might be useful in analysis of a non-iterative computational scheme (such as a single rotation), but the cumulative scaling effects described in Section 4.2 overwhelm numerical precision effects. The errors engendered through iterative truncation turn out to be qualitatively similar to those that arise under rounding. These results are thus good for geometric intuition about the methods, and illustrate the importance of normalization. Fig. 8(a) and (b) show the (shocking) errors that result if no normalization takes place during the iterated rotation composition and if the resulting non-normalized product quaternion or matrix rotation is applied to the input vector. This is case IS0 (Iterative Sequence, 0 normalizations). Without normalization, all the representations show cumulative errors that are easy to understand.


[Figures 5(a) and 5(b) appear here]

[Figures 6(a) and 6(b) appear here]

Figure 8 About Here

Figure 8: Case IS0. Iterative transformation of the vector [1, 1, −1] about the axis [1, 1, 1] by 1 rad, 200 times. Calculations done with ten bits of mantissa precision under truncation. No normalization of the final representation before applying it to the vector. This choice of input parameters results in a representative performance, but not a normative one. (a) Result vector error projected on the X−Z plane. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. The matrix errors are a very tight helix up the Z axis, with the vector errors only slightly larger. (b) As for (a), but showing the X−Y projection. The matrix and vector errors project to the small dark blob at the origin.

First consider the vector representation. It is acted on at each iteration by an identical but errorful transform of eq. (6). We see it in both views spiraling away from the correct initial value, its length shrinking and its direction error increasing. The performance of the vector method might be improved by a factor of two on average by using the matrix transformation at every step (considering the arguments in Section 4.2 and the results shown in Fig. 6). Another important aspect of this method is its variability in performance for different rotations, arising from the variability in the accuracy of the first calculated rotation (again Fig. 6).

As predicted in Section 4.2, the matrix representation is affected in its scaling property but not significantly in its rotational properties: there is effectively no directional error, but the result shrinks to less than half its original length in 200 iterations.

The quaternion representation acts, using eq. (6), by adding two cross product vectors to the original input vector; as the quaternion components become increasingly, systematically errorful, they affect the result in the interesting way shown in the figure. One effect is the decaying magnitude of the quaternion; the other is that as the cumulative rotation φ increases from zero, the sin(φ/2) weighting of Λ, the vector part of the quaternion, starts at zero and returns to zero with a period of 2π/t, where t is the truncated value of the incremental rotation angle. Thus with that period the effective rotation error varies sinusoidally and returns to zero as the magnitude of Λ does.

The same data are plotted for 400 iterations, this time in Fig. 9. First consider length errors. This presentation emphasizes the exponential decay of the answer's length to zero with the matrix and vector representations. As the quaternion's magnitude ultimately decays to zero, the result vector will be unaffected by the rotation and will remain equal to the input vector, so its length error stabilizes at zero. The short-period sinusoidal character of the length error arises from the arguments of the last paragraph. The initially increasing envelope arises from the increasing inaccuracy caused by the truncation. The ultimate exponential decay of the envelope arises from the shrinking of the quaternion components.

Considering direction errors, the matrix and conic representations have (locally) linearly increasing errors as they spiral away from the true answer direction. (The linear increase cannot be maintained forever on the orientation sphere, of course.) The quaternion method's direction error ultimately becomes purely periodic as the answer vector becomes static. The direction error as a function of iteration number t and angle r between the rotation axis and the input vector is then of the form ((r sin t)^2 + (r − r cos t)^2)^(1/2), or the length of a circular chord from a fixed point on the circle to a point moving around the circle at constant velocity. Before that time the direction error exhibits a more complex two-humped form; this arises from the two separate cross product components of the vector increment in eq. (6). Exercise: Would there be any difference in direction and length errors if the rotation is accomplished by conjugation (eq. (4)) instead of eq. (6)?

[Figures 7(a) and 7(b) appear here]

[Figures 8(a) and 8(b) appear here]

Figure 9 About Here

Figure 9: Case IS0 for 400 iterations (see text). (a) Length error of result vector. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. The vector and matrix errors coincide. (b) As for (a), but showing direction error for the resulting vector. The vector and matrix errors are very close to zero.

Figure 10 About Here

Figure 10: Case IS0 under rounding instead of truncation. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. Though the forms of the errors are similar to the truncation case, their relative magnitudes are more equal. (a) Result vector projected on the X−Z plane. (b) As for (a), but showing the X−Y projection.

Next in this sequence, the last two demonstrations are repeated under rounding, not truncation. The expectation is that, due to the better approximation and possible error cancellations arising from high and low approximations, the errors in the representations will be less, but that systematic errors will still exist. The results (Figs. 10, 11) bear this out; the errors often seem like much smaller, noisy versions of the truncation errors. In Fig. 11 the quaternion errors display the fine two-humped structure longer, since it is not overwhelmed as quickly by shrinking quaternion magnitude. Rounding has a dramatically good effect on the systematic shrinking of matrices and quaternions under iterative self-multiplication.

So far we have, purely for intuition, dealt with the pathological case in which no normalization of rotation representations is performed. The arguably pathological case of approximation through truncation yields insight about the structure of systematic approximation errors in an iterative scheme. From here on we abandon truncation approximation (except in Section 4.7).

Figure 11 About Here

Figure 11: Data of the previous figure, showing length and direction errors. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of result vector. (b) Direction error.

Figure 9(a) Figure 9(b)

Figure 10(a) Figure 10(b)

Figure 12 About Here

Figure 12: Case IS1. Rotating the vector (1, 1, -1) by 1 radian increments about the axis (1, 1, 1), with 10 bits precision under rounding, normalizing the quaternion and matrix representation prior to application to the input vector, and normalizing the final vector representation to its initial length. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. The vector representation has lower variance, and the quaternion exhibits marked periodicity. Only the first 100 iterations are shown; the rest are similar. (b) Direction error.

We now look at two different styles of normalization. In the first, no normalization is performed on intermediate representations, but the final quaternion, matrix, and vector are normalized. In the second, the intermediate quaternion, matrix, and vector are normalized at each step. Normalization of the matrix and quaternion is not dependent on the input data, but the length of the input vector must be computed and remembered for the vector representation to be normalized. This fact might be significant if many vectors are being treated at once.

Fig. 12(a) shows the length error when the final quaternion and matrix are normalized prior to application to the input vector, and the incrementally generated vector representation is normalized to its original length (here known to be unity). This case is called IS1, for Iterative Sequence, 1 normalization. The matrix normalization used is the one crossproduct form. When only one normalization is being performed, the three types of normalization perform virtually identically. The computations were performed under rounding to the usual ten bits of precision. If its initial length is not remembered, the vector representation cannot be normalized, and its error is the same as the non-normalized case of Fig. 11, showing a linear error accumulation.

Fig. 12(b) compares direction error for the three representations, normalized as in Fig. 12(a). The comparative performance here is certainly not so clear cut, and the error accumulations resemble random walks. Alternatively, the performance could depend more on the particular choice of input data (rotation and input vector) than on numerical properties of the methods.

Finally, a natural question concerns the effects of normalization applied after every rotation composition. This is case ISN (N normalizations). Fig. 13 shows the results for length and direction. As might be expected, for length error the normalized vector has the best performance (presumably as accurate as machine precision allows). The quaternion result has a systematically negative mean, higher variance, and periodicity. The matrix length error displayed in Fig. 13(a) results from the no crossproduct form of matrix normalization. The trend is not an illusion: carried out to 400 iterations, the length error shows a marked systematic increase in the negative direction. On the other hand, iteration of the one crossproduct normalization resulted in a factor of two worsening of the magnitude of the direction errors for the matrix representation. Repeated normalizations seem to have adversely affected the accuracy of the vector representation after many iterations, but more work is needed to tell if this is a repeatable effect. In any event, it seems clear that the repeated normalizations are certainly not improving matters much in this case over case IS1.

Figure 11(a) Figure 11(b)

Figure 12(a) Figure 12(b)

Figure 13 About Here

Figure 13: Case ISN. As for the previous figure, but normalizing the rotation representation at each iteration, using no crossproduct matrix normalization, which improved direction errors but worsened length errors over the one crossproduct form. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. (b) Direction error.

Figure 14 About Here

Figure 14: Case RS1. As for case IS1 (no normalization of intermediate representations), but using a sequence of random rotations. There is no difference in performance with the different matrix normalizations; no crossproduct matrix normalization is shown. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector (first 100 values; the rest are similar). (b) Direction error. The rather confusing results at least have some clear qualitative differences.

The main conclusion here is that the cost of normalizing quaternion or matrix representations after every rotation in a repeated sequence may not be repaid in results, and in fact may be actively harmful. The vector representation seems to do best of the three on length error and worst on direction error. More work (analytic or experimental) would be needed to clarify the performance ranks, or even to recommend or disqualify a representation for this task.

4.5 Concatenated Random Rotations

Here a sequence of random rotations is generated as described earlier, analogous to the sequence of identical rotations in the last experiment. They are applied cumulatively to an input vector. In each case computation is with 10 bits precision under rounding. The cases of a single final normalization and of normalizing every intermediate representation (called cases RS1 and RSN) were investigated. Both the no - and one crossproduct forms were investigated. The results, comparable to those in cases IS1 and ISN (Figs. 12 and 13), are shown in Figs. 14 and 15.

The results indicate that the vector representation remains better than the other two for length errors, but for direction errors the results are unclear. It is hard to draw clear distinctions either between the normalization methods or the representations. Again the form of matrix normalization has some annoying effects, which motivate the next section.

Figure 13(a) Figure 13(b)

Figure 14(a) Figure 14(b)

Figure 15 About Here

Figure 15: Case RSN. As for case RS1, but normalizing all intermediate representations. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error, using one crossproduct matrix normalization. (b) Direction error, using one crossproduct form. (c) Direction error, using no crossproduct form.

Case        mu-vec     mu-mat     mu-quat   sigma-vec  sigma-mat  sigma-quat
RaTr  L        -       0.006225   0.005026      -       0.001181   0.003369
      D        -       0.001787   0.004578      -       0.001094   0.002307
RaRd  L        -      -0.000310  -0.000225      -       0.001087   0.001263
      D        -       0.000965   0.001180      -       0.000542   0.000759
IS1   L     0.000276  -0.000260  -0.000262   0.000466   0.000857   0.001648
      D     0.003694   0.004063   0.001435   0.001705   0.002298   0.001123
ISN   L     0.000137  -0.001772  -0.000811   0.000533   0.001106   0.001312
      D     0.005646   0.004844   0.001606   0.002786   0.002872   0.001045
RS1   L    -0.000252  -0.000162  -0.000157   0.000521   0.001612   0.002022
      D     0.003710   0.005854   0.006065   0.002150   0.003686   0.006065
RSN   L     0.000164  -0.000602  -0.001097   0.000473   0.000703   0.001666
      D     0.002269   0.009830   0.005960   0.000874   0.004230   0.002236
      D     0.002269   0.005187   0.005960   0.000874   0.002248   0.002236

Table 2: Statistics (mean mu, standard deviation sigma) for length (L) and direction (D) errors for some of the cases illustrated in this section. Cases RaTr and RaRd are the independent random rotations; IS1 and ISN are the iterative sequences (one normalization, and normalization of each intermediate representation); RS1 and RSN are the random sequences. Only the first 100 data values are used from the iterated rotations, to diminish the effect of longer-term trends. The second RSN direction line is with no crossproduct normalization (this only affects the matrix representation).

4.6 Statistical Summary

Table 2 summarizes some of the data from the cases so far.

4.7 Matrix Normalization

Some experiments explored the performance of the three types of matrix normalization (no, one, or two cross products) in the context of iterated identical and random rotations.

The experiments consist of two iterated identical rotations (Figs. 16, 17) under rounding, and two iterated random rotation sequences under truncation and rounding. Fig. 18 gives the direction errors for two sequences of random rotations applied to an input vector. In no case is there significant difference between the performances on the length error at 200 iterations, but the one- or two crossproduct forms significantly outperform the no crossproduct form over 400 iterations, since the latter exhibits a systematically increasing negative length error and the former do not.

Figure 15(a)

Figure 15(b)

Figure 15(c)

Figure 16 About Here

Figure 16: Iterated identical rotations of the unit vector in direction (1, 1, -1) by 1 radian about the axis (1, 1, 1), computed to 10 bits under rounding. Solid line - no cross products; dotted line - one cross product; dashed line - two cross products. (a) Length errors. (b) Direction errors.

Figure 17 About Here

Figure 17: As for the previous figure, but the rotation is of the unit vector in direction (1, 2, 3) by iterated rotations of 2 radians about the axis (3, -2, -1).

For direction error, few conclusions can be drawn about relative performance under iterated identical rotation sequences. It is perhaps surprising that the no crossproduct form usually performs here at least as well as the other two over a small number of iterations, and that for random sequences there is no clear winner between normalization types, even approximating with truncation. Thus the symptoms noticed in the last two sections, involving long-term relative effects of the normalization form, were not noticed over 200 iterations. The one- and two crossproduct forms may be more stable over long iteration, but the issue of optimal normalization remains open.

4.8 Average Error of Concatenated Random Rotations

Which representation is best, or worst? To wind up this overlong exercise in trying to derive structure from noise, let us consider average behavior for the task of random rotation sequences (perhaps of more general interest than iterated identical rotations). Technical decisions on normalization are based on the conclusions of previous sections: approximation is by rounding, with normalization only after the last rotation. Matrix normalization is done with the one crossproduct method. Each run is a single experiment like that of Section 4.5, only performed (with the same data) at a number of different precisions.

Such a set of runs yields length and direction error surfaces plotted against the precision

Figure 18 About Here

Figure 18: Direction errors only, for a sequence of random rotations applied to an input vector. Legend as in the previous two figures. (a) Truncation to 10 bits. (b) Rounding to 10 bits.

Figure 16(a) Figure 16(b)

Figure 17(a) Figure 17(b)

Figure 18(a) Figure 18(b)

Figure 19(a) Figure 19(b)

Figure 19(c) Figure 19(d)

Rep          Repeated Rot                     Random Seq
Vec(Mat)     (9N+17)x  (6N+13)+   1 sc       26Nx       19N+       N sc
Vec(Quat)    (18N+4)x  (12N)+     1 sc       22Nx       12N+       N sc
Quat         (16N+22)x (12N+12)+  1 sc       (20N+18)x  (12N+12)+  N sc
Matrix       (27N+26)x (18N+19)+  1 sc       (44N+9)x   (31N+6)+   N sc

Table 4: Operation counts for N-long rotation sequences. The long form of matrix multiplication is used here. Vec(Mat) is the vector representation with intermediate transformation by matrices; Vec(Quat), with transformation by the quaternion method. Multiplications, additions, and sine-cosine pairs are denoted by x, +, and sc.

matrix transformation) since matrix-vector multiplication costs one third of matrix-matrix multiplication

4.10 Conclusions and Questions

The conclusions we can draw at this point are the following:

1. Numerical simulation of variable precision arithmetic has yielded some tentative conclusions, but in many interesting cases they are rather delicate and depend on niceties of the implementation. Accurate performance predictions may only be possible with faithful simulation of the particular hardware and software at issue.

2. Practical differences in numerical accuracy tend to disappear at reasonable precisions. After 200 iterations of a rotation, 53 bits of mantissa precision (the double representation in the C language) yields no error to 10 decimal places. With 24 bits (the float representation), effects were noticeable in the 5th to 7th decimal place with all methods.

3. Normalization is vital before quaternion or matrix representations are applied to vectors, but not as important during the composition of representations, and the extra numerical computation of frequent normalization may actually decrease accuracy in the final result.

4. The vector (with matrix transformation) and matrix representations for rotation sequences seem to outperform the quaternion for accuracy.

5. The vector (with matrix transformation) representation for rotation sequences, with a single normalization of vector length at the end, is most computationally efficient and has least length error on the average.

6. Averaged over many trials, the difference in accuracy of the three methods amounts only to about a single bit of precision for a small number of rotation compositions (say 25 or fewer).

7. Matrices are the least computationally efficient on rotation sequences.



There are also some remaining basic questions:

1. What is a reasonable analytic approach to the numerical accuracy problem for interesting rotational tasks (say, sequences of random rotations under rounding)?

2. How should optimal matrix or quaternion normalization methods be defined and constructed?

5 Discussion

The area of robust computational geometry is becoming popular, with an annual international conference and increasingly sophisticated theoretical formulations. Very little of that work (if any) is related to rotation algorithms. Symbolic algebra systems can help with analysis [4]. A good treatment of computer arithmetic and rounding appears in [15], with an application to detecting line intersection. Serious qualitative representational problems arise when issues of topology (line connectivity) are at stake, and much of the work is on computing line intersection accurately with finite precision arithmetic [19]. For instance, Milenkovic's analysis is based on facts about line intersection, such as the number of extra bits that are necessary to guarantee a specific precision of answer. Some of these facts are surprising: it takes at least three times the input precision to calculate a line intersection location to the same precision [11]. Robust polygon and polyhedron intersection algorithms, as well as various line-sweeping algorithms, are based on accurate line intersection algorithms [8].

Several points of comparison between rotation parameterizations and representations have been raised in the preceding sections. It is perhaps worth repeating that little evidence for dramatic differences in numerical accuracy has emerged, though the evidence might tend to favor matrices slightly and to condemn the vector representation.

One of the common applications of rotational transforms is to affect several vectors at once, either in the context of transforming a set of points around (as in graphics) or in handling coordinate systems represented as n points (n - 1 of them points at infinity representing axis directions, and one representing the origin of coordinates, in a homogeneous coordinate system). In the first case the repeated application of the same transform will be more economical if the work is done in the matrix representation. In the second the same may well be true, but of course the whole point of homogeneous coordinates is to make the projective, a fortiori the affine, transforms into linear ones implemented with matrix multiplication. This means, for example, that coordinate systems, rigid transformations including translation and rotation, and point projective transforms (simple camera models) can all be represented as matrices and composed with matrix multiplication, and that the operation of such transforms on vectors is simply matrix-vector multiplication. In this context homogeneous coordinates by definition require the matrix representation.
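The homogeneous-coordinate point can be made concrete with a small sketch (helper names are mine, not from the report): a rotation and translation pack into one 4x4 matrix, ordinary points (w = 1) are rotated and translated, while axis directions (w = 0, points at infinity) only rotate.

```python
import math

def rot_z(phi):
    # 3x3 rotation by phi about the z axis.
    c, s = math.cos(phi), math.sin(phi)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def homogeneous(R, t):
    # Pack rotation R and translation t into a single 4x4 matrix.
    return [R[0] + [t[0]], R[1] + [t[1]], R[2] + [t[2]], [0.0, 0.0, 0.0, 1.0]]

def apply(H, p):
    # Matrix-vector multiplication on a homogeneous 4-vector.
    return [sum(H[i][j] * p[j] for j in range(4)) for i in range(4)]

H = homogeneous(rot_z(math.pi / 2), [1.0, 0.0, 0.0])
point = [1.0, 0.0, 0.0, 1.0]   # w = 1: an ordinary point
axis  = [1.0, 0.0, 0.0, 0.0]   # w = 0: a direction (point at infinity)
# The point is rotated then translated; the direction only rotates.
```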

If the rotation at issue is a general one (say, specified in conic (phi, n) parameters), then conversion either to the quaternion or matrix form is easy. Thus the representations in this case do not dictate the semantics, or how one wants to think about the rotation. If the rotation is specified in terms of rotations about (X, Y, Z) coordinate axes, then the matrix representation is easily and robustly constructed.

Real-time applications involving concatenating rotations and applying them to a small number of vectors (perhaps one: the position of a robot hand) need detailed analysis to see which representation is the most efficient. Similarly for more complex geometric calculations.

A common application of rotation representations in vision is to express the motion of the observer. Often the rotational motions, for mechanical or mathematical reasons, are expressed as rotations about some fixed axes, which in practice are usually orthogonal. Examples are the motions induced by pan and tilt camera platforms, or the expression of egomotion in terms of rotations about the observer's local X, Y, Z axes. In this case the matrix representation is the clear choice, to maximize efficiency and intuition.

Reconstructing the rotation that transformed known points is a useful vision task that requires fitting data to a model of the transformation. The straightforward route, followed since Roberts' thesis [17], is to do a linear least squares fit to the transform's matrix representation. This approach is perhaps risky, since the matrix elements are not independent and the error being minimized thus may not be the one desired. The alternative of using conic parameters has been successfully used [12] and may well be a better idea. Recently, more sophisticated Kalman filter methods have been used to derive motion from point matches through time [5]. Model-matching with constraints allows a model to be matched to a class of objects, and promising approaches are emerging that use hybrid methods of symbolic algebra, implementing polynomial triangulation to get explicit solutions for known constraints, followed by root-finding and sophisticated minimization techniques to search for minimum-error solutions [13, 14].

A less common but important set of applications is to reason about and manipulate rotations themselves. For example, one may want to sample rotation space evenly [9], or to create smoothly interpolated paths in rotation space between a given set of rotations [16]. In these cases the Euler-Rodrigues parameters are the most convenient.

Another example is the use, since 1957, of quaternions in the neurophysiological literature to describe eye kinematics [22]. Models of human three-dimensional oculomotor performance, especially for the vestibulo-ocular reflex and feedback models of saccades, involve computations on rotations that are much more stable when carried out in conic or Euler-Rodrigues parameters using quaternion arithmetic. Subtractive feedback laws adequate for simple (left or right) eye movements do not generalize to arbitrary rotational axes. The quaternion model uses multiplicative feedback, explains human performance better than similar models based on rotation matrices, and leads to explicit predictions such as a four-channel saccade computation [21].

24

6 Bibliography

[1] S. L. Altmann. Rotations, Quaternions, and Double Groups. Clarendon Press, Oxford, 1986.

[2] C. M. Brown. Gaze controls with interactions and delays. In DARPA Image Understanding Workshop (submitted IEEE-TSMC), pages 200-218, May 1989.

[3] C. M. Brown. Prediction in gaze and saccade control. Technical Report OUEL 1771/89 (UR TR-295), Oxford University Dept. Engineering Science (U. Rochester Comp. Sci. Dept.), May 1989.

[4] C. I. Connolly, D. Kapur, J. L. Mundy, and R. Weiss. GeoMeter: A system for modeling and algebraic manipulation. In Proceedings, DARPA Image Understanding Workshop, pages 797-804, May 1989.

[5] O. D. Faugeras, F. Lustman, and G. Toscani. Motion and structure from motion from point and line matches. In Proceedings, International Conference on Computer Vision, pages 25-34, June 1987.

[6] J. D. Foley and A. van Dam. Fundamentals of Interactive Computer Graphics. Addison-Wesley, 1982.

[7] W. R. Hamilton. On quaternions; or on a new system of imaginaries in algebra. Philosophical Magazine, 3rd Ser., 25:489-495, 1844.

[8] C. Hoffmann and J. Hopcroft. Towards implementing robust geometric computations. In Proceedings, ACM Symposium on Computational Geometry, 1988.

[9] B. K. P. Horn. Robot Vision. MIT Press/McGraw-Hill, 1986.

[10] K. Kanatani. Group Theory in Computer Vision. Springer-Verlag, 1989.

[11] V. J. Milenkovic. Robust geometric computations for vision and robotics. In Proceedings, DARPA Image Understanding Workshop, pages 764-773, May 1989.

[12] J. L. Mundy. Private communication, August 1989.

[13] J. L. Mundy. The application of symbolic algebra techniques to object modeling. Lecture, University of Oxford, August 1989.

[14] J. L. Mundy. Symbolic representation of object models. In preparation, August 1989.

[15] T. Ottmann, G. Thiemt, and C. Ullrich. Numerical stability of geometric algorithms. In Proceedings, ACM Symposium on Computational Geometry, 1987.

[16] K. S. Roberts, G. Bishop, and S. K. Ganapathy. Smooth interpolation of rotational motion. In Computer Vision and Pattern Recognition 1988, pages 724-729, June 1988.

[17] L. G. Roberts. Machine perception of three-dimensional solids. In J. T. Tippett et al., editors, Optical and Electro-Optical Information Processing, pages 159-197. MIT Press, 1968.

[18] O. Rodrigues. Des lois géométriques qui régissent les déplacements d'un système solide dans l'espace, et de la variation des coordonnées provenant de ces déplacements considérés indépendamment des causes qui peuvent les produire. Journal de Mathématiques Pures et Appliquées, 5:380-440, 1840.

[19] D. Salesin, J. Stolfi, and L. Guibas. Epsilon geometry: building robust algorithms from imprecise computations. In Proceedings, ACM Symposium on Computational Geometry, 1989.

[20] R. Taylor. Planning and execution of straight line manipulator trajectories. In J. M. Brady, J. M. Hollerbach, T. L. Johnson, T. Lozano-Pérez, and M. T. Mason, editors, Robot Motion: Planning and Control. MIT Press, 1982.

[21] D. Tweed and T. Vilis. Implications of rotational kinematics for the oculomotor system in three dimensions. Journal of Neurophysiology, 58(4):832-849, 1987.

[22] G. Westheimer. Kinematics for the eye. Journal of the Optical Society of America, 47:967-974, 1957.


Vector by Quaternion: Implement eq. (6). Total: 18 multiplications, 12 additions.
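A sketch of one standard form of this computation (assuming eq. (6) is the usual identity v' = v + 2 q0 (qv x v) + 2 qv x (qv x v) for a unit quaternion (q0, qv); the exact multiply count depends on how the factors of two are folded in):

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def rotate(q, v):
    # v' = v + 2 q0 (qv x v) + 2 qv x (qv x v): two cross products
    # plus scales and adds, avoiding full quaternion conjugation.
    q0, qv = q[0], q[1:]
    t = cross(qv, v)
    u = cross(qv, t)
    return tuple(v[i] + 2.0*q0*t[i] + 2.0*u[i] for i in range(3))

# 90 degrees about the z axis should take (1,0,0) to (0,1,0).
h = math.pi / 4.0                       # half-angle
q = (math.cos(h), 0.0, 0.0, math.sin(h))
```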

Compose Rotations (Matrix Product): Here it pays to exploit orthonormality rather than simply implementing matrix multiplication with nine dot products. Six dot products and one cross product are necessary. Total: 24 multiplications, 15 additions. This method uses only 2/3 of the information in the matrices, and may be suspect for inexact representations (see below on normalization).

Compose Rotations (Quaternion Product): Implementing eq. (3) directly takes one dot product, one cross product, two vector additions, two vector-scalar products, one multiply, and one add. Total: 16 multiplications, 12 additions.

Rotate around X, Y, or Z Axis: To do the work of eq. (8) requires 1 sine, 1 cosine (unless they are already known), 4 multiplications, and 2 additions.

3.4 Normalization

In the conic representation one has little choice but to normalize the axis direction vector to unit norm and leave the rotation angle alone. In the quaternion and matrix representations the direction and magnitude are conflated in the parameters, and normalization of the parameters affects them both.

Quaternion: Simply dividing through all quaternion components by the norm of the quaternion costs 8 multiplies, 3 additions, and one square root.

Matrix: An exact rotation matrix is orthonormal. Three simple normalization algorithms suggest themselves. In the no crossproduct method, each matrix row (in this paragraph meaning row or column) is normalized to a unit vector. Cost: 12 multiplies, 6 additions, 3 square roots. No rows are guaranteed orthogonal. The one crossproduct method normalizes two rows and computes the third as their cross product. Cost: 18 multiplies, 7 additions, 2 square roots. This method does not guarantee the first two rows are orthogonal. The two crossproduct method performs the one crossproduct normalization, then recomputes the first row as the cross product of the second and third, say, for a total of 23 multiplies, 10 additions, and 2 square roots. Neither of the last two methods uses information from one matrix row. The best normalization procedure, by definition, should return the orthonormal matrix closest to the given one; the above procedures are approximations to this ideal. One idea might be to find the principal axis direction (the eigenvector corresponding to the largest real eigenvalue) and call that the normalized rotation axis. An exact rotation matrix M has one eigenvalue of unity and two conjugate complex ones. What then of the rotation angle? In the errorless matrix, cos(phi) = (Trace(M) - 1)/2, and the trace is invariant under the eigenvalue calculation. Thus the angle can be calculated from three of the matrix elements. But does this choice minimize any reasonable error in rotation space? Indeed, what error should be minimized? The work here uses the no- and one-crossproduct forms, tries some empirical investigation in Section 4.7, and leaves the large question of optimal normalization as a research issue. Exercise: find the eigenvalues and eigenvectors of the matrix in eq. (8).
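The three procedures might be sketched as follows (a hedged illustration; the row convention and function names are mine, and the paper's text allows either rows or columns):

```python
import math

def unit(v):
    n = math.sqrt(sum(x*x for x in v))
    return [x / n for x in v]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def normalize_no_cross(M):
    # Normalize each row to unit length; rows need not end up orthogonal.
    return [unit(r) for r in M]

def normalize_one_cross(M):
    # Normalize two rows, recompute the third as their cross product.
    # The first two rows are still not guaranteed orthogonal.
    r0, r1 = unit(M[0]), unit(M[1])
    return [r0, r1, cross(r0, r1)]

def normalize_two_cross(M):
    # As above, then recompute the first row as the cross product of the
    # other two, making all three rows mutually orthogonal.
    r0, r1, r2 = normalize_one_cross(M)
    return [cross(r1, r2), r1, r2]
```

Note that even the two crossproduct form leaves the third (and then first) row only approximately unit length when the input rows are not quite orthogonal, which is one reason "closest orthonormal matrix" remains the open question the text raises.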


Operation            Quaternion              Matrix
Rep -> Conic         4x, 1 acos, 1 sin       9x, 6+, 1 acos, 1 sin, 1 sqrt
Conic -> Rep         4x, 1 sin, 1 cos        17x, 13+, 1 sin, 1 cos
Rep -> Other         13x, 13+                5x, 6+, 1 sqrt
Rep o vector         18x, 12+                9x, 6+
Rep o Rep            16x, 12+                24x, 15+
X, Y, or Z Axis Rot  8x, 4+                  4x, 2+
Normalize            8x, 3+, 1 sqrt          18x, 7+, 2 sqrt

Table 1: Summary of quaternion and matrix operation counts. Rep means the representation of the column heading; Other means that of the other column heading. Applying a rotation to a vector or another rotation is denoted by o. Multiplications, additions, and square roots are denoted by x, +, and sqrt. Trigonometric functions are denoted sin, acos, etc.

3.5 Summary and Discussion

The operation counts are summarized in Table 1. They generally agree with those in [20], with a few differences. Taylor seems to have a quaternion to matrix conversion that takes 9 multiplies and 19 adds (Exercise: find it), and he implements his matrix to quaternion conversion with 3 sines, 3 cosines, and one arctangent instead of a single square root. Taylor normalizes matrix representations by the one crossproduct method. The use of eq. (6) for vector rotation saves 4 multiplies over Taylor's counts. In any event, the differences in costs of the two representations do not seem as dramatic for these basic calculations as for Taylor's more complex robotic trajectory calculations, but iterative operations reveal a difference of a factor of two to three in effort (see Section 4.9). Generally, it may be impossible to predict realistically the extent of the savings possible with one representation over the other without a detailed algorithmic analysis.

4 Numerical Accuracy

4.1 Vector Rotation Tasks, Methods, and Representations

The manipulations of quaternions, matrices, and vectors presented in Section 3 are based on simple building blocks of vector addition, scaling, and dot and cross products. The numbers involved in the representations are generally well behaved, with vector components and trigonometric functions lying in the interval [0, 1] and rotation magnitudes and inverse trigonometric functions often in the interval [-2pi, 2pi]. Neither representation requires algorithms with obvious numerically dangerous operations like dividing by small differences.

It is often alleged that one representation is more numerically accurate than the other. However, it is not easy to see how to quantify this claim analytically. The matrix representation is redundant, representing the three parameters with nine numbers. Does this mean it is more vulnerable to numerical error, or less? Given that the algorithms are straightforward and simple, do their numerical results depend more on their structure or on the data they operate on? The individual numbers (often components of unit 3-vectors) have nonlinear constraints that may affect the numerical performance. Perhaps some of the alleged differences are thought to arise in microcomputer implementations with small word size; how does accuracy vary with the number of significant digits?

In general, the lack of analytical leverage on these questions motivated the empirical study reported in this section. Herein we compare representations for several common vector rotation tasks: rotation of a vector, iterated identical rotations of a vector, and iterated different (random) rotations of a vector. Error is measured either as an error vector, or as the amount that the resulting vector's length and direction differ from the correct values.

More precisely, a correct answer is determined by computation at full (double) precision. When the rotation task is to iterate the same rotation, the correct answer can be (and is) calculated for the final cumulative rotation rather than calculated by iteration. The application of a rotation to a vector is accomplished by two methods: by vector - matrix multiplication as in eq. (8), and by Horn's quaternion - vector formula, eq. (6). The iteration of a rotation is accomplished in three ways: quaternion - quaternion multiplication, matrix - matrix multiplication, and the quaternion - vector formula. Thus the results of iterated rotation are represented as a quaternion, matrix, and (cumulatively rotated) vector, respectively.

The experiments use a variable precision implementation of all the relevant real, vector, quaternion, and matrix operations, offering a runtime choice of the number of bits of mantissa precision and the approximation type (rounding or truncation). The most basic operations are approximation, which rounds or truncates the mantissa of a C-language double (double-precision real floating point number) to the required precision, and arithmetic operations that approximate the operands, operate (at full precision), and approximate the result. From these basics are built vector cross and dot products, quaternion and matrix products, normalization routines, etc.
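A minimal sketch of such an approximation primitive (my own formulation, not the report's actual code):

```python
import math

def approx(x, bits, mode="round"):
    # Round or truncate the mantissa of a double to `bits` bits.
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)               # x = m * 2**e with 0.5 <= |m| < 1
    scaled = m * 2.0**bits
    scaled = round(scaled) if mode == "round" else math.trunc(scaled)
    return scaled * 2.0**(e - bits)

def approx_mul(x, y, bits, mode="round"):
    # Arithmetic: approximate the operands, operate at full precision,
    # then approximate the result.
    return approx(approx(x, bits, mode) * approx(y, bits, mode), bits, mode)
```

Truncation rounds the mantissa toward zero, which is the source of the systematic shrinkage seen in the truncation experiments; rounding is unbiased to first order.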

Vectors are specified or created randomly as (x, y, z) unit vectors at the highest machine precision. Rotations are specified or created randomly in the conic (phi, n) (rotation angle, rotation axis direction) representation, also at highest precision. Random unit vectors for input or rotation axes are not uniformly distributed in orientation space, being generated by normalizing three uniform variates to produce a unit vector. For random (phi, n) rotations, a random vector is produced for n, and a fourth uniform variate is normalized to the range [-pi, pi]. Exercise: what do these distributions of vectors and rotations look like in orientation and rotation space?
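That generation scheme might look like the following sketch (the range of the three variates is my assumption; the report does not say whether they are signed):

```python
import math
import random

def random_unit_vector(rng):
    # Normalize three uniform variates: as the text notes, this is NOT
    # uniform over directions (it favors the cube's diagonals).
    v = [rng.uniform(-1.0, 1.0) for _ in range(3)]
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def random_conic_rotation(rng):
    # Axis n from a random unit vector; angle phi from a fourth
    # uniform variate normalized to [-pi, pi].
    return rng.uniform(-math.pi, math.pi), random_unit_vector(rng)
```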

The correct answer is computed at highest precision by applying a single rotation (by N*phi for an N-long iteration of the same rotation phi) around the rotation axis to the input vector, using eq. (6). Then, working at the lower experimental precision, the conic input representation is converted to quaternion and matrix representation by eqs. (2) and (7). Next, the experimental rotational task is carried out at the experimental lower precision. Finally, the error analysis consists of highest precision comparison of the low precision result with the high precision correct answer.

When a reduced-precision vector v′ is obtained, its error is determined by comparing it to the full-precision, non-iteratively rotated vector v_t. The error vector is v′ − v_t; the length error ΔL and direction error ΔD are

ΔL = | ||v′|| − ||v_t|| |   (13)

ΔD = arccos(v̂′ · v̂_t)   (14)

where v̂ is the unit vector in the direction of v.

In all the work here, ||v|| = 1. Directional errors are reported in radians, length errors in units of the (unit) length of the vector to be transformed. The experiments are reported below in a rather discursive style. Along the way a few side issues are addressed, and conclusions are drawn that affect the later choices of experimental parameters.
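The error measures of eqs. (13) and (14) can be sketched as (hypothetical helper names):

```python
import math

def _norm(v):
    return math.sqrt(sum(c * c for c in v))

def length_error(v_lo, v_true):
    # eq. (13): absolute difference of the two lengths
    return abs(_norm(v_lo) - _norm(v_true))

def direction_error(v_lo, v_true):
    # eq. (14): angle between the two directions, in radians
    c = sum(a * b for a, b in zip(v_lo, v_true)) / (_norm(v_lo) * _norm(v_true))
    return math.acos(max(-1.0, min(1.0, c)))   # clamp against roundoff
```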

4.2 Analytic Approaches

Let us begin to consider the conversion of conic (φ, n) rotation parameters to quaternion and matrix form. For simplicity we might assume that θ = φ/2 is known exactly. Assume that at n bits of precision, a number x is represented to a maximum additive absolute error of δ = 2^(−n), which is a relative error of δ/x. Ignoring second-order terms, the product of two imprecise numbers (x + δ)(y + ε) has additive error εx + δy. When looking up or computing a function, the error to first order in f(x + δ) is δf′(x). Thus the error in computing the cosine of an errorful number x + δ is sin(x)δ. If, as in the implementation of this paper, the function is computed initially to high precision and then approximated, the further approximation may add another error of δ, for a total absolute error of (1 + sin(x))δ. Of course the function computation could be set up to eliminate the last approximation error if the working precision were known in advance.

These considerations are a start at constructing formulae for the relative error in the conversion of conic to quaternion and matrix form via eqs. (2) and (7). However, missing is the constraint that n be a unit vector (that the matrix be orthonormal). Without this constraint the predictions of errors will be pessimistic. Interacting with the constraint is the usual simplistic assumption that the worst happens in every operation (i.e., errors never cancel). This usual assumption makes analysis possible but potentially irrelevant. In fact, errors in the matrix and quaternion components after conversion depend closely on the values of the inputs, and empirically do not exhibit predictable, data-independent structure.

Turning to a slightly grosser examination of the structure of rotation operations, consider the implementation used in the work below for the two competing equations for quaternion-mediated rotation (eq. (6)) and matrix-mediated rotation (multiplication of a vector by the matrix in eq. (7)). The quaternion formula is the sum of three vectors (the original and two cross products), whereas the matrix formula is a matrix-vector product. Going through the code for this operation, we find that the x′ component of the result vector (x′, y′, z′) after applying the quaternion formula for quaternion (λ, Λx, Λy, Λz) to the input vector (x, y, z) is computed by operations applied at their usual precedence in the following formula:

x′ = x + 2λ(Λy z − Λz y) − 2Λz(Λz x − Λx z) + 2Λy(Λx y − Λy x)   (15)

and similarly for the y′ and z′ components. Both (x, y, z) and the rotation axis are unit vectors, and λ is a cosine.
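The quaternion-vector formula whose x′ component is eq. (15) can be sketched as follows (hypothetical name `quat_rotate`; this is the v′ = v + 2λ(Λ × v) + 2Λ × (Λ × v) form, for a unit quaternion (λ, Λ)):

```python
def quat_rotate(q, v):
    """Rotate v by the unit quaternion q = (lam, Lx, Ly, Lz):
       v' = v + 2*lam*(L x v) + 2*L x (L x v)."""
    lam, L = q[0], q[1:]

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    c1 = cross(L, v)       # L x v
    c2 = cross(L, c1)      # L x (L x v)
    return tuple(v[i] + 2.0 * lam * c1[i] + 2.0 * c2[i] for i in range(3))
```

Expanding the x component of this expression reproduces eq. (15) term by term.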

The operations implementing the corresponding component by the matrix multiplication formula are

x′ = M11 x + M12 y + M13 z   (16)

and similarly for the y′ and z′ components.

There is nothing in these two formulae to predict the significant statistical difference (especially under the more predictable truncation approximation) in the performance of these two methods (see, e.g., Fig. 6).

Pressing on, consider a still grosser description of the structure. Looking at eqs. (4) and (8), say that the error in quaternion and matrix (caused, perhaps, by truncation) is described as a scaling of the whole quaternion or matrix (by δ = (1 − kε) for some constant k, if the truncation error is ε). In that case the matrix-transformed vector is in error by a factor of δ, or an additive error of kε, and the quaternion-transformed vector by a factor of δ² (because of the conjugation), or an additive factor of 2kε (to first order). Iterated matrix or quaternion self-multiplication exponentially damps the individual matrix and quaternion elements, which otherwise vary sinusoidally as the iteration proceeds. Under iterated application of the scaled matrix, the direction of the resulting vector should be unaffected, but its length should shrink, decaying exponentially to zero.
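The scaling argument can be checked numerically. In this sketch (hypothetical helper names), a rotation matrix and a quaternion are each scaled by (1 − ε) and applied to a unit vector; the matrix result loses length ε while the conjugated quaternion result loses about 2ε:

```python
import math

def qmul(a, b):
    # Hamilton product of quaternions (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def length_after_scaled_matrix(phi, eps):
    # rotate (1,0,0) about z by a matrix uniformly scaled by (1 - eps)
    c, s = math.cos(phi), math.sin(phi)
    M = [[(1-eps)*c, -(1-eps)*s, 0.0],
         [(1-eps)*s,  (1-eps)*c, 0.0],
         [0.0,        0.0,       1-eps]]
    v = (1.0, 0.0, 0.0)
    r = tuple(sum(M[i][j] * v[j] for j in range(3)) for i in range(3))
    return math.sqrt(sum(x*x for x in r))

def length_after_scaled_quat(phi, eps):
    # conjugation q v q* with the whole quaternion scaled by (1 - eps)
    q = tuple((1-eps)*x for x in (math.cos(phi/2), 0.0, 0.0, math.sin(phi/2)))
    qc = (q[0], -q[1], -q[2], -q[3])
    r = qmul(qmul(q, (0.0, 1.0, 0.0, 0.0)), qc)[1:]
    return math.sqrt(sum(x*x for x in r))
```

The factor-of-two difference comes directly from the conjugation: the scale enters the quaternion product twice.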

In fact this level of explanation suffices to explain the data resulting from a single rotation (Fig. 6): the quaternion errors are approximately twice the matrix errors for a single rotation under truncation. It also correctly predicts the behavior of matrix and quaternion elements under iteration of the same rotation (Fig. 3). It correctly predicts the form of error for the matrix transformation under iteration of the same rotation (Fig. 8), and forms part of the explanation for quaternion transformation error (see Section 4.4). Exercise: In Fig. 3 the period of the matrix element is half that of the quaternion element. This result is expected from inspection of eqs. (8, 2), but looks inconsistent with eq. (7). Is it?

4.3 Single Rotations

First, some sample single rotations were applied:

v′ = R ◦ v   (17)

Figure 3 About Here

Figure 3: The components of the quaternion and matrix are scaled and shifted sinusoids. Here, representative quaternion and matrix element amplitudes decay exponentially through time under iterative rotation with truncation. Under rounding the cumulative decay is negligible. Dotted line - matrix element [0][0]. Solid line - quaternion element Λx.

Figure 4 About Here

Figure 4: Rotating the unit vector in direction [1, 1, -1] around the axis [1, 1, 1] by varying amounts: (a) 0.1 rad, (b) 1.0 rad, (c) 1.7 rad. The errors are plotted logarithmically against the number of bits of mantissa precision, under truncation. Hollow and dark squares - direction error with quaternion and matrix representations. Star and cross - length error with quaternion and matrix representations. Variations in error magnitude and its rank ordering result from varying rotation magnitude. The largest error at 22 bits precision is 0.0001 (the smallest representable number is about 10^-9).

Fig. 4 shows the results, comparing the conic (eq. (6)) with matrix rotation computation under truncation. In general, increasing the rotation magnitude does not increase error, the error magnitudes are not predictably ordered, and the error magnitude is at worst 10^-6 with 22 bits of mantissa precision.

Next, 100 individual random rotation - random vector pairs were generated, with the rotation applied to the vector at a single precision (here, as throughout the paper until Section 4.8, 10 bits) under both truncation (case RaTr) and rounding (case RaRd). (Statistics for several cases are collected in Table 2 in Section 4.6.)

The results are shown in two different styles in Figs. 5, 6, and 7. Fig. 5 displays length and direction errors of the 100 trials under truncation, indexed along the X axis although they are each independent. As predicted in Section 4.2, the matrix representation outperforms quaternions by a factor of about two. The length errors under truncation show the nonzero mean one would expect. The same data are presented in Fig. 6 as two projections of the three-dimensional error vectors, all oriented so that the true answer lies along the Z axis, facing down with its point at the origin, aiming in the -Z direction. The X - Y projection shows how the direction errors arise, and the X - Z projection shows how the length errors arise (with the approximate vectors all being too short). Exercise: Construct the rotation that points a given (x, y, z) vector along the Z axis, (a) from rotations about the (X, Y, Z) axes, (b) using quaternions.
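One sketch of part (b) of the exercise (hypothetical names; the axis-angle construction used here is one standard approach, not necessarily the intended solution):

```python
import math

def _unit(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def quat_mul(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def quat_apply(q, v):
    # conjugation q v q*
    p = (0.0,) + tuple(v)
    qc = (q[0], -q[1], -q[2], -q[3])
    return quat_mul(quat_mul(q, p), qc)[1:]

def quat_pointing_along_z(v):
    """Unit quaternion rotating v onto the +Z axis.
       Degenerate when v is (anti)parallel to Z -- not handled here."""
    vh = _unit(v)
    # rotation axis: v x z, which is perpendicular to both
    axis = _unit((vh[1], -vh[0], 0.0))
    theta = math.acos(max(-1.0, min(1.0, vh[2])))   # angle from v to z
    s = math.sin(theta / 2.0)
    return (math.cos(theta / 2.0), s * axis[0], s * axis[1], s * axis[2])
```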

Figure 5 About Here

Figure 5: Case RaTr. (a) Length errors of 100 different random (rotation, vector) pairs at 10 bits precision under truncation. Circles plot the quaternion - vector errors, Xs the matrix - vector errors. (b) As for (a), but direction errors.

[Figures 1, 2, and 3 appear here.]

Figure 6 About Here

Figure 6: The same case and data as the previous figure, plotted as error vectors, where the correct answer vector points to the origin and lies along the positive Z axis (see text). (a) Error vectors projected onto the X - Z plane. Circles plot the quaternion - vector errors, Xs the matrix - vector errors. (b) As for (a), but projection onto the X - Y plane.

Figure 7 About Here

Figure 7: Case RaRd. (a), (b) as the previous figure, with the same input rotations and vectors, but with operations under rounding to 10 bits of precision.

Fig. 7 is similar to Fig. 6, except that rounding is employed (note the scale change). Not only does the length error bias disappear, as expected, but the difference between the methods decreases, with the matrix implementation maintaining a slight advantage.

4.4 Iterated Rotations and Normalization

The second experiment is to iterate the same small rotation of φ around a particular direction, using three rotation composition methods. Quaternion multiplication and matrix multiplication produce iterated quaternion and matrix rotation representations. At the end of the iterations, the resulting rotation representation is applied to the input vector once to determine the rotated vector. A third method, called the vector method here, iteratively applies the identical incremental rotation to the vector at each iterative step, preserving only the rotated vector. There are thus two cases (quaternion and matrix) of composing rotation representations and one case (vector) of composing the application of the rotation to the vector. Previous results indicate that the matrix method is both more accurate and more efficient for applying a single rotation, and empirically the results are much better with this choice as well, so rotations are implemented in the vector method by matrix-vector multiplication.
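The three composition methods can be sketched together as follows (hypothetical names; full precision only, with none of the paper's reduced-precision machinery):

```python
import math

def quat_mul(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def quat_apply(q, v):
    p = (0.0,) + tuple(v)
    qc = (q[0], -q[1], -q[2], -q[3])
    return quat_mul(quat_mul(q, p), qc)[1:]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_vec(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(3)) for i in range(3))

def axis_angle(phi, n):
    # quaternion and (Rodrigues) matrix for rotation by phi about unit axis n
    h = math.sin(phi / 2)
    q = (math.cos(phi / 2), h * n[0], h * n[1], h * n[2])
    c, s = math.cos(phi), math.sin(phi)
    x, y, z = n
    M = [[c + x*x*(1-c),     x*y*(1-c) - z*s,  x*z*(1-c) + y*s],
         [y*x*(1-c) + z*s,   c + y*y*(1-c),    y*z*(1-c) - x*s],
         [z*x*(1-c) - y*s,   z*y*(1-c) + x*s,  c + z*z*(1-c)]]
    return q, M

def iterate(phi, n, v, steps):
    """Compose `steps` copies of the same rotation three ways, returning
       (quaternion result, matrix result, vector-method result)."""
    q_step, M_step = axis_angle(phi, n)
    q = (1.0, 0.0, 0.0, 0.0)
    M = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    vec = tuple(v)
    for _ in range(steps):
        q = quat_mul(q_step, q)        # quaternion-quaternion composition
        M = mat_mul(M_step, M)         # matrix-matrix composition
        vec = mat_vec(M_step, vec)     # vector method: rotate the vector itself
    return quat_apply(q, v), mat_vec(M, v), vec
```

In the paper's experiments every primitive operation inside this loop would be wrapped in the reduced-precision approximation, which is where the three methods diverge.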

These experiments were carried out under both truncation and rounding. Truncation is sometimes used to get at worst case performance in a numerical analysis sense. Such an approach might be useful in analysis of a non-iterative computational scheme (such as a single rotation), but the cumulative scaling effects described in Section 4.2 overwhelm numerical precision effects. The errors engendered through iterative truncation turn out to be qualitatively similar to those that arise under rounding. These results are thus good for geometric intuition about the methods and illustrate the importance of normalization. Fig. 8(a) and (b) show the (shocking) errors that result if no normalization takes place during the iterated rotation composition, and if the resulting non-normalized product quaternion or matrix rotation is applied to the input vector. This is case IS0 (Iterative Sequence, 0 normalizations). Without normalization, all the representations show cumulative errors that are easy to understand.

[Figures 5(a), 5(b), 6(a), and 6(b) appear here.]

Figure 8 About Here

Figure 8: Case IS0. Iterative transformation of the vector [1, 1, -1] about the axis [1, 1, 1] by 1 rad, 200 times. Calculations done with ten bits of mantissa precision under truncation. No normalization of the final representation before applying it to the vector. This choice of input parameters results in a representative performance, but not a normative one. (a) Result vector error projected on the X - Z plane. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. The matrix errors are a very tight helix up the Z axis, with the vector errors only slightly larger. (b) As for (a), but showing the X - Y projection. The matrix and vector errors project to the small dark blob at the origin.

First consider the vector representation. It is acted on at each iteration by an identical but errorful transform of eq. (6). We see it in both views spiraling away from the correct initial value, its length shrinking and its direction error increasing. The performance of the vector method might be improved by a factor of two on average by using the matrix transformation at every step (considering the arguments in Section 4.2 and the results shown in Fig. 6). Another important aspect of this method is its variability in performance for different rotations, arising in the variability in the accuracy of the first calculated rotation (again, Fig. 6).

As predicted in Section 4.2, the matrix representation is affected in its scaling property but not significantly in its rotational properties: there is effectively no directional error, but the result shrinks to less than half its original length in 200 iterations.

The quaternion representation acts, using eq. (6), by adding two cross product vectors to the original input vector; as the quaternion components become increasingly, systematically errorful, they affect the result in the interesting way shown in the figure. One effect is the decaying magnitude of the quaternion. The other is that as the cumulative rotation φ increases from zero, the sin(φ/2) weighting of Λ, the vector part of the quaternion, starts at zero and returns to zero with a period of 2π/φ_t, where φ_t is the truncated value of the incremental rotation angle. Thus with that period the effective rotation error varies sinusoidally, and returns to zero as the magnitude of Λ does.

The same data are plotted for 400 iterations in Fig. 9. First consider length errors. This presentation emphasizes the exponential decay of the answer's length to zero with the matrix and vector representations. As the quaternion's magnitude ultimately decays to zero, the result vector becomes unaffected by the rotation and remains equal to the input vector, so its length error stabilizes at zero. The short period sinusoidal character of the length error arises from the arguments of the last paragraph. The initially increasing envelope arises from the increasing inaccuracy caused by the truncation. The ultimate exponential decay of the envelope arises from the shrinking of the quaternion components.

Considering direction errors, the matrix and conic representations have (locally) linearly increasing errors as they spiral away from the true answer direction. (The linear increase cannot be maintained forever on the orientation sphere, of course.) The quaternion method's direction error ultimately becomes purely periodic as the answer vector becomes static. The direction error as a function of iteration number t and angle r between the rotation axis and the input vector is then of the form ((r sin t)² + (r − r cos t)²)^(1/2), or the length of a circular chord from a fixed point on the circle to a point moving around the circle at constant velocity. Before that time the direction error exhibits a more complex, two-humped form; this arises from the two separate cross product components of the vector increment in eq. (6). Exercise: Would there be any difference in direction and length errors if the rotation were accomplished by conjugation (eq. (4)) instead of eq. (6)?

[Figures 7(a), 7(b), 8(a), and 8(b) appear here.]

Figure 9 About Here

Figure 9: Case IS0 for 400 iterations (see text). (a) Length error of result vector. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. The vector and matrix errors coincide. (b) As for (a), but showing direction error for the resulting vector. The vector and matrix errors are very close to zero.

Figure 10 About Here

Figure 10: Case IS0 under rounding instead of truncation. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. Though the forms of the errors are similar to the truncation case, their relative magnitudes are more equal. (a) Result vector projected on the X - Z plane. (b) As for (a), but showing the X - Y projection.

Next in this sequence, the last two demonstrations are repeated under rounding, not truncation. The expectation is that due to the better approximation, and possible error cancellations arising from high and low approximations, the errors in the representations will be less, but that systematic errors will still exist. The results (Figs. 10, 11) bear this out: the errors often seem like much smaller, noisy versions of the truncation errors. In Fig. 11 the quaternion errors display the fine two-humped structure longer, since it is not overwhelmed as quickly by shrinking quaternion magnitude. Rounding has a dramatically good effect on the systematic shrinking of matrices and quaternions under iterative self-multiplication.

So far we have, purely for intuition, dealt with the pathological case in which no normalization of rotation representations is performed. The arguably pathological case of approximation through truncation yields insight about the structure of systematic approximation errors in an iterative scheme. From here on we abandon truncation approximation (except in Section 4.7).

Figure 11 About Here

Figure 11: Data of the previous figure, showing length and direction errors. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of result vector. (b) Direction error.

[Figures 9(a), 9(b), 10(a), and 10(b) appear here.]

Figure 12 About Here

Figure 12: Case IS1. Rotating the vector (1, 1, -1) by 1 radian increments about the axis (1, 1, 1), with 10 bits precision under rounding, normalizing the quaternion and matrix representations prior to application to the input vector, and normalizing the final vector representation to its initial length. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. The vector representation has lower variance, and the quaternion exhibits marked periodicity. Only the first 100 iterations are shown; the rest are similar. (b) Direction error.

We now look at two different styles of normalization. In the first, no normalization is performed on intermediate representations, but the final quaternion, matrix, and vector are normalized. In the second, the intermediate quaternion, matrix, and vector are normalized at each step. Normalization of the matrix and quaternion is not dependent on the input data, but the length of the input vector must be computed and remembered for the vector representation to be normalized. This fact might be significant if many vectors are being treated at once.

Fig. 12(a) shows the length error when the final quaternion and matrix are normalized prior to application to the input vector, and the incrementally generated vector representation is normalized to its original length (here known to be unity). This case is called IS1, for Iterative Sequence, 1 normalization. The matrix normalization used is the one crossproduct form. When only one normalization is being performed, the three types of normalization perform virtually identically. The computations were performed under rounding, to the usual ten bits of precision. If its initial length is not remembered, the vector representation cannot be normalized, and its error is the same as the non-normalized case of Fig. 11, showing a linear error accumulation.

Fig. 12(b) compares direction error for the three representations, normalized as in Fig. 12(a). The comparative performance here is certainly not so clear cut, and the error accumulations resemble random walks. Alternatively, the performance could depend more on the particular choice of input data (rotation and input vector) than on numerical properties of the methods.

Finally, a natural question concerns the effects of normalization applied after every rotation composition. This is case ISN (N normalizations). Fig. 13 shows the results for length and direction. As might be expected, for length error the normalized vector has the best performance (presumably as accurate as machine precision allows). The quaternion result has a systematically negative mean, higher variance, and periodicity. The matrix length error displayed in Fig. 13(a) results from the no crossproduct form of matrix normalization. The trend is not an illusion: carried out to 400 iterations, the length error shows a marked systematic increase in the negative direction. On the other hand, iteration of the one crossproduct normalization resulted in a factor of two worsening of the magnitude of the direction errors for the matrix representation. Repeated normalizations seem to have adversely affected the accuracy of the vector representation after many iterations, but more work is needed to tell if this is a repeatable effect. In any event, it seems clear that the repeated normalizations are certainly not improving matters much in this case over case IS1.

[Figures 11(a), 11(b), 12(a), and 12(b) appear here.]

Figure 13 About Here

Figure 13: Case ISN. As for the previous figure, but normalizing the rotation representation at each iteration, using the no crossproduct matrix normalization, which improved direction errors but worsened length errors over the one crossproduct form. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. (b) Direction error.

Figure 14 About Here

Figure 14: Case RS1. As for case IS1 (no normalization of intermediate representations), but using a sequence of random rotations. There is no difference in performance with the different matrix normalizations; the no crossproduct matrix normalization is shown. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector (first 100 values; the rest are similar). (b) Direction error. The rather confusing results at least have some clear qualitative differences.

The main conclusion here is that the cost of normalizing quaternion or matrix representations after every rotation in a repeated sequence may not be repaid in results, and in fact may be actively harmful. The vector representation seems to do best of the three on length error and worst on direction error. More work (analytic or experimental) would be needed to clarify the performance ranks, or even to recommend or disqualify a representation for this task.

4.5 Concatenated Random Rotations

Here a sequence of random rotations is generated, as described earlier, analogous to the sequence of identical rotations in the last experiment. They are applied cumulatively to an input vector. In each case computation is with 10 bits precision under rounding. The cases of a single final normalization and of normalizing every intermediate representation (called cases RS1 and RSN) were investigated. Both the no- and one crossproduct forms were investigated. The results, comparable to those in cases IS1 and ISN (Figs. 12 and 13), are shown in Figs. 14 and 15.

The results indicate that the vector representation remains better than the other two for length errors, but for direction errors the results are unclear. It is hard to draw clear distinctions either between the normalization methods or between the representations. Again, the form of matrix normalization has some annoying effects, which motivate the next section.

[Figures 13(a), 13(b), 14(a), and 14(b) appear here.]

Figure 15 About Here

Figure 15: Case RSN. As for case RS1, but normalizing all intermediate representations. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error, using the one crossproduct matrix normalization. (b) Direction error, using the one crossproduct form. (c) Direction error, using the no crossproduct form.

Case       |   μ-vec   |   μ-mat   |  μ-quat   |  σ-vec   |  σ-mat   |  σ-quat
RaTr     L |     -     |  0.006225 |  0.005026 |    -     | 0.001181 | 0.003369
         D |     -     |  0.001787 |  0.004578 |    -     | 0.001094 | 0.002307
RaRd     L |     -     | -0.000310 | -0.000225 |    -     | 0.001087 | 0.001263
         D |     -     |  0.000965 |  0.001180 |    -     | 0.000542 | 0.000759
IS1 100  L |  0.000276 | -0.000260 | -0.000262 | 0.000466 | 0.000857 | 0.001648
         D |  0.003694 |  0.004063 |  0.001435 | 0.001705 | 0.002298 | 0.001123
ISN 100  L |  0.000137 | -0.001772 | -0.000811 | 0.000533 | 0.001106 | 0.001312
         D |  0.005646 |  0.004844 |  0.001606 | 0.002786 | 0.002872 | 0.001045
RS1 100  L | -0.000252 | -0.000162 | -0.000157 | 0.000521 | 0.001612 | 0.002022
         D |  0.003710 |  0.005854 |  0.006065 | 0.002150 | 0.003686 | 0.006065
RSN 100  L |  0.000164 | -0.000602 | -0.001097 | 0.000473 | 0.000703 | 0.001666
         D |  0.002269 |  0.009830 |  0.005960 | 0.000874 | 0.004230 | 0.002236
         D |  0.002269 |  0.005187 |  0.005960 | 0.000874 | 0.002248 | 0.002236

Table 2: Statistics (mean μ and standard deviation σ) for length (L) and direction (D) errors for some of the cases illustrated in this section. Cases RaTr and RaRd are the independent random rotations; IS1 and ISN are the iterative sequences (one normalization, and normalization of each intermediate representation); and RS1 and RSN are the random sequences. Only the first 100 data values are used from the iterated rotations, to diminish the effect of longer-term trends. The second RSN D line is with no crossproduct normalization (which only affects the matrix representation).

4.6 Statistical Summary

Table 2 summarizes some of the data from the cases so far.

4.7 Matrix Normalization

Some experiments explored the performance of the three types of matrix normalization (no, one, or two cross products) in the context of iterated identical and random rotations.
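The three normalization forms are not spelled out in this excerpt; the following sketch shows one plausible reading of each (hypothetical names and constructions, offered only as an illustration of the no/one/two crossproduct distinction):

```python
import math

def _unit(r):
    n = math.sqrt(sum(c * c for c in r))
    return [c / n for c in r]

def _cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def normalize_no_cross(M):
    # "no crossproduct": rescale each row to unit length;
    # rows may remain slightly non-orthogonal
    return [_unit(r) for r in M]

def normalize_one_cross(M):
    # "one crossproduct": unit first two rows, rebuild the third
    # as their cross product
    r0, r1 = _unit(M[0]), _unit(M[1])
    return [r0, r1, _cross(r0, r1)]

def normalize_two_cross(M):
    # "two crossproducts": full re-orthogonalization anchored on row 0
    r0 = _unit(M[0])
    r2 = _unit(_cross(r0, M[1]))
    r1 = _cross(r2, r0)
    return [r0, r1, r2]
```

Only the two crossproduct form guarantees an orthonormal result; the cheaper forms trade residual non-orthogonality (or slight scale error in the rebuilt row) for fewer operations.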

The experiments consist of two iterated identical rotations (Figs. 16, 17) under rounding, and two iterated random rotation sequences under truncation and rounding. Fig. 18 gives the direction errors for two sequences of random rotations applied to an input vector. In no case is there significant difference between the performances on the length error at 200 iterations, but the one- or two crossproduct forms significantly outperform the no crossproduct form over 400 iterations, since the latter exhibits systematically increasing negative length error and the former do not.

[Figures 15(a), 15(b), and 15(c) appear here.]

Figure 16 About Here

Figure 16: Iterated identical rotations of the unit vector in direction (1, 1, -1) by 1 radian about the axis (1, 1, 1), computed to 10 bits under rounding. Solid line - no cross products. Dotted line - one cross product. Dashed line - two cross products. (a) Length errors. (b) Direction errors.

Figure 17 About Here

Figure 17: As for the previous figure, but the rotation is of the unit vector in direction (1, 2, 3) by iterated rotations of 2 radians about the axis (3, -2, -1).

For direction error, few conclusions can be drawn about relative performance under iterated identical rotation sequences. It is perhaps surprising that the no crossproduct form usually performs here at least as well as the other two over a small number of iterations, and that for random sequences there is no clear winner between normalization types, even approximating with truncation. Thus the symptoms noticed in the last two sections, involving long-term relative effects of the normalization form, were not noticed over 200 iterations. The one- and two crossproduct forms may be more stable over long iteration, but the issue of optimal normalization remains open.

4.8 Average Error of Concatenated Random Rotations

Which representation is best, or worst? To wind up this overlong exercise in trying to derive structure from noise, let us consider average behavior for the task of random rotation sequences (perhaps of more general interest than iterated identical rotations). Technical decisions on normalization are based on the conclusions of previous sections: approximation is by rounding, with normalization only after the last rotation. Matrix normalization is done with the one crossproduct method. Each run is a single experiment like that of Section 4.5, only performed (with the same data) at a number of different precisions.

Such a set of runs yields length and direction error surfaces plotted against the precision

Figure 18 About Here

Figure 18: Direction errors only, for a sequence of random rotations applied to an input vector. Legend as in the previous two figures. (a) Truncation to 10 bits. (b) Rounding to 10 bits.

[Figures 16(a) through 19(d) appear here.]

Rep        | Repeated Rot                 | Random Seq
Vec(Mat)   | (9N+17)*, (6N+13)+, 1 sc     | 26N*, 19N+, N sc
Vec(Quat)  | (18N+4)*, (12N)+, 1 sc       | 22N*, 12N+, N sc
Quat       | (16N+22)*, (12N+12)+, 1 sc   | (20N+18)*, (12N+12)+, N sc
Matrix     | (27N+26)*, (18N+19)+, 1 sc   | (44N+9)*, (31N+6)+, N sc

Table 4: Operation counts for N-long rotation sequences. The long form of matrix multiplication is used here. Vec(Mat) is the vector representation with intermediate transformation by matrices, Vec(Quat) with transformation by the quaternion method. Multiplications, additions, and sine-cosine pairs are denoted by *, +, sc, etc.

matrix transformation) since matrix-vector multiplication costs one third of matrix-matrix multiplication

4.10 Conclusions and Questions

The conclusions we can draw at this point are the following:

1. Numerical simulation of variable precision arithmetic has yielded some tentative conclusions, but in many interesting cases they are rather delicate and depend on niceties of the implementation. Accurate performance predictions may only be possible with faithful simulation of the particular hardware and software at issue.

2. Practical differences in numerical accuracy tend to disappear at reasonable precisions. After 200 iterations of a rotation, 53 bits of mantissa precision (the double representation in the C language) yields no error to 10 decimal places. With 24 bits (the float representation), effects were noticeable in the 5th to 7th decimal place with all methods.

3. Normalization is vital before quaternion or matrix representations are applied to vectors, but not as important during the composition of representations, and the extra numerical computation of frequent normalization may actually decrease accuracy in the final result.

4. The vector (with matrix transformation) and matrix representations for rotation sequences seem to outperform the quaternion for accuracy.

5. The vector (with matrix transformation) representation for rotation sequences, with a single normalization of vector length at the end, is most computationally efficient and has the least length error on the average.

6. Averaged over many trials, the difference in accuracy of the three methods amounts only to about a single bit of precision for a small number of rotation compositions (say 25 or fewer).

7. Matrices are the least computationally efficient on rotation sequences.


There are also some remaining basic questions:

1. What is a reasonable analytic approach to the numerical accuracy problem for interesting rotational tasks (say, sequences of random rotations under rounding)?

2. How should optimal matrix or quaternion normalization methods be defined and constructed?

5 Discussion

The area of robust computational geometry is becoming popular, with an annual international conference and increasingly sophisticated theoretical formulations. Very little of that work (if any) is related to rotation algorithms. Symbolic algebra systems can help with analysis [4]. A good treatment of computer arithmetic and rounding appears in [15], with an application to detecting line intersection. Serious qualitative representational problems arise when issues of topology (line connectivity) are at stake, and much of the work is on computing line intersection accurately with finite precision arithmetic [19]. For instance, Milenkovic's analysis is based on facts about line intersection, such as the number of extra bits that are necessary to guarantee a specific precision of answer. Some of these facts are surprising: it takes at least three times the input precision to calculate a line intersection location to the same precision [11]. Robust polygon and polyhedron intersection algorithms, as well as various line-sweeping algorithms, are based on accurate line intersection algorithms [8].

Several points of comparison between rotation parameterizations and representations have been raised in the preceding sections. It is perhaps worth repeating that little evidence for dramatic differences in numerical accuracy has emerged, though the evidence might tend to favor matrices slightly and to condemn the vector representation.

One of the common applications of rotational transforms is to affect several vectors at once, either in the context of transforming a set of points around (as in graphics) or in handling coordinate systems represented as n points (n - 1 of them at points at infinity representing axis directions, and one representing the origin of coordinates in a homogeneous coordinate system). In the first case the repeated application of the same transform will be more economical if the work is done in the matrix representation. In the second the same may well be true, but of course the whole point of homogeneous coordinates is to make the projective (a fortiori affine) transforms into linear ones implemented with matrix multiplication. This means, for example, that coordinate systems, rigid transformations including translation and rotation, and point projective transforms (simple camera models) can all be represented as matrices and composed with matrix multiplication, and that the operation of such transforms on vectors is simply matrix-vector multiplication. In this context homogeneous coordinates by definition require the matrix representation.

If the rotation at issue is a general one (say, specified in conic (φ, n) parameters), then conversion either to the quaternion or matrix form is easy. Thus the representations in this case do not dictate the semantics, or how one wants to think about the rotation. If the rotation is specified in terms of rotations about (X, Y, Z) coordinate axes, then the matrix representation is easily and robustly constructed.

Real-time applications involving concatenating rotations and applying them to a small number of vectors (perhaps one: the position of a robot hand) need detailed analysis to see which representation is the most efficient. Similarly for more complex geometric calculations.

A common application of rotation representations in vision is to express the motion of the observer. Often the rotational motions, for mechanical or mathematical reasons, are expressed as rotations about some fixed axes, which in practice are usually orthogonal. Examples are the motions induced by pan and tilt camera platforms, or the expression of egomotion in terms of rotations about the observer's local X, Y, Z axes. In this case the matrix representation is the clear choice to maximize efficiency and intuition.
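For intuition, the fixed-axis matrices of such a pan-tilt platform can be composed directly. The sketch below is illustrative only (the function names and the choice of pan about Y and tilt about X are my own assumptions, not the paper's):

```python
import math

def rot_x(t):
    # Rotation about the X axis (tilt) by t radians.
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(t):
    # Rotation about the Y axis (pan) by t radians.
    c, s = math.cos(t), math.sin(t)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

# Pan by 90 degrees with zero tilt: the optical axis (0,0,1) swings to (1,0,0).
R = matmul(rot_x(0.0), rot_y(math.pi / 2))
```

Composing further platform motions is just more matrix multiplications, which is the efficiency argument made above.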

Reconstructing the rotation that transformed known points is a useful vision task that requires fitting data to a model of the transformation. The straightforward route, followed since Roberts' thesis [17], is to do a linear least squares fit to the transform's matrix representation. This approach is perhaps risky, since the matrix elements are not independent, and the error being minimized thus may not be the one desired. The alternative of using conic parameters has been successfully used [12] and may well be a better idea. Recently more sophisticated Kalman filter methods have been used to derive motion from point matches through time [5]. Model-matching with constraints allows a model to be matched to a class of objects, and promising approaches are emerging that use hybrid methods of symbolic algebra implementing polynomial triangulation to get explicit solutions for known constraints, followed by root-finding and sophisticated minimization techniques to search for minimum-error solutions [13,14].

A less common but important set of applications is to reason about and manipulate rotations themselves. For example, one may want to sample rotation space evenly [9] or to create smoothly interpolated paths in rotation space between a given set of rotations [16]. In these cases the Euler-Rodrigues parameters are the most convenient.

Another example is the use, since 1957, of quaternions in the neurophysiological literature to describe eye kinematics [22]. Models of human three-dimensional oculomotor performance, especially for the vestibulo-ocular reflex and feedback models of saccades, involve computations on rotations that are much more stable when carried out in conic or Euler-Rodrigues parameters using quaternion arithmetic. Subtractive feedback laws adequate for simple (left or right) eye movements do not generalize to arbitrary rotational axes. The quaternion model uses multiplicative feedback, explains human performance better than similar models based on rotation matrices, and leads to explicit predictions such as a four-channel saccade computation [21].


6 Bibliography

[1] S. L. Altmann. Rotations, Quaternions, and Double Groups. Clarendon Press, Oxford, 1986.

[2] C. M. Brown. Gaze controls with interactions and delays. In DARPA Image Understanding Workshop, pages 200-218, May 1989. Submitted to IEEE-TSMC.

[3] C. M. Brown. Prediction in gaze and saccade control. Technical Report OUEL 1771/89 (UR TR-295), Oxford University Dept. of Engineering Science (U. Rochester Comp. Sci. Dept.), May 1989.

[4] C. I. Connolly, D. Kapur, J. L. Mundy, and R. Weiss. GeoMeter: A system for modeling and algebraic manipulation. In Proceedings, DARPA Image Understanding Workshop, pages 797-804, May 1989.

[5] O. D. Faugeras, F. Lustman, and G. Toscani. Motion and structure from motion from point and line matches. In Proceedings, International Conference on Computer Vision, pages 25-34, June 1987.

[6] J. D. Foley and A. Van Dam. Fundamentals of Interactive Computer Graphics. Addison-Wesley, 1982.

[7] W. R. Hamilton. On quaternions; or on a new system of imaginaries in algebra. Philosophical Magazine, 3rd Ser., 25:489-495, 1844.

[8] C. Hoffmann and J. Hopcroft. Towards implementing robust geometric computations. In Proceedings, ACM Symposium on Computational Geometry, 1988.

[9] Berthold K. P. Horn. Robot Vision. MIT Press/McGraw-Hill, 1986.

[10] K. Kanatani. Group Theory in Computer Vision. Springer Verlag, 1989.

[11] V. J. Milenkovic. Robust geometric computations for vision and robotics. In Proceedings, DARPA Image Understanding Workshop, pages 764-773, May 1989.

[12] J. L. Mundy. Private communication, August 1989.

[13] J. L. Mundy. The application of symbolic algebra techniques to object modeling. Lecture, University of Oxford, August 1989.

[14] J. L. Mundy. Symbolic representation of object models. In preparation, August 1989.

[15] T. Ottmann, G. Thiemt, and C. Ullrich. Numerical stability of geometric algorithms. In Proceedings, ACM Symposium on Computational Geometry, 1987.

[16] K. S. Roberts, G. Bishop, and S. K. Ganapathy. Smooth interpolation of rotational motion. In Computer Vision and Pattern Recognition 1988, pages 724-729, June 1988.

[17] L. G. Roberts. Machine perception of three-dimensional solids. In J. T. Tippett et al., editors, Optical and Electro-Optical Information Processing, pages 159-197. MIT Press, 1965.


[18] O. Rodrigues. Des lois géométriques qui régissent les déplacements d'un système solide dans l'espace, et de la variation des coordonnées provenant de ses déplacements considérés indépendamment des causes qui peuvent les produire. Journal de Mathématiques Pures et Appliquées, 5:380-440, 1840.

[19] D. Salesin, J. Stolfi, and L. Guibas. Epsilon geometry: building robust algorithms from imprecise computations. In Proceedings, ACM Symposium on Computational Geometry, 1989.

[20] R. Taylor. Planning and execution of straight line manipulator trajectories. In J. M. Brady, J. M. Hollerbach, T. L. Johnson, T. Lozano-Pérez, and M. T. Mason, editors, Robot Motion: Planning and Control. MIT Press, 1982.

[21] D. Tweed and T. Vilis. Implications of rotational kinematics for the oculomotor system in three dimensions. Journal of Neurophysiology, 58(4):832-849, 1987.

[22] G. Westheimer. Kinematics of the eye. Journal of the Optical Society of America, 47:967-974, 1957.



Operation             | Quaternion         | Matrix
----------------------|--------------------|------------------------------
Rep -> Conic          | 4*, 1 acos, 1 sin  | 9*, 6+, 1 acos, 1 sin, 1 sqrt
Conic -> Rep          | 4*, 1 sin, 1 cos   | 17*, 13+, 1 sin, 1 cos
Rep -> Other          | 13*, 13+           | 5*, 6+, 1 sqrt
Rep o vector          | 18*, 12+           | 9*, 6+
Rep o Rep             | 16*, 12+           | 24*, 15+
X, Y, or Z Axis Rot   | 8*, 4+             | 4*, 2+
Normalize             | 8*, 3+, 1 sqrt     | 18*, 7+, 2 sqrt

Table 1: Summary of quaternion and matrix operation counts. Rep means the representation of the column heading; Other means that of the other column heading. Applying a rotation to a vector or another rotation is denoted by o. Multiplications, additions, and square roots are denoted by *, +, and sqrt. Trigonometric functions are denoted sin, acos, etc.

3.5 Summary and Discussion

The operation counts are summarized in Table 1. They generally agree with those in [20], with a few differences. Taylor seems to have a quaternion to matrix conversion that takes 9 multiplies and 19 adds (Exercise: find it), and he implements his matrix to quaternion conversion with 3 sines, 3 cosines, and one arctangent instead of a single square root. Taylor normalizes matrix representations by the one-crossproduct method. The use of eq. (6) for vector rotation saves 4 multiplies over Taylor's counts. In any event, the differences in costs of the two representations do not seem as dramatic for these basic calculations as for Taylor's more complex robotic trajectory calculations, but iterative operations reveal a difference of a factor of two to three in effort (see Section 4.9). Generally it may be impossible to predict realistically the extent of the savings possible with one representation over the other without a detailed algorithmic analysis.

4 Numerical Accuracy

4.1 Vector Rotation Tasks, Methods, and Representations

The manipulations of quaternions, matrices, and vectors presented in Section 3 are based on simple building blocks of vector addition, scaling, and dot and cross products. The numbers involved in the representations are generally well behaved, with vector components and trigonometric functions lying in the interval [0, 1] and rotation magnitudes and inverse trigonometric functions often in the interval [-2π, 2π]. Neither representation requires algorithms with obvious numerically dangerous operations like dividing by small differences.

It is often alleged that one representation is more numerically accurate than the other. However, it is not easy to see how to quantify this claim analytically. The matrix representation is redundant, representing the three parameters with nine numbers. Does this mean it is more vulnerable to numerical error, or less? Given that the algorithms are straightforward and simple, do their numerical results depend more on their structure or on the data they operate on? The individual numbers (often components of unit 3-vectors) have nonlinear constraints that may affect the numerical performance. Perhaps some of the alleged differences are thought to arise in microcomputer implementations with small word size: how does accuracy vary with the number of significant digits?

In general, the lack of analytical leverage on these questions motivated the empirical study reported in this section. Herein we compare representations for several common vector rotation tasks: rotation of a vector, iterated identical rotations of a vector, and iterated different (random) rotations of a vector. Error is measured either as an error vector or as the amount that the resulting vector's length and direction differ from the correct value.

More precisely, a correct answer is determined by computation at full (double) precision. When the rotation task is to iterate the same rotation, the correct answer can be (and is) calculated for the final cumulative rotation rather than calculated by iteration. The application of a rotation to a vector is accomplished by two methods: by vector-matrix multiplication as in eq. (8), and by Horn's quaternion-vector formula, eq. (6). The iteration of a rotation is accomplished in three ways: quaternion-quaternion multiplication, matrix-matrix multiplication, and the quaternion-vector formula. Thus the results of iterated rotation are represented as a quaternion, matrix, and (cumulatively rotated) vector, respectively.
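The two application methods can be sketched as follows. This is a minimal illustration under the standard forms of the quaternion-vector identity and the quaternion-derived rotation matrix (eqs. (6) and (8) themselves appear earlier in the paper; the function names here are my own):

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def quat_rotate(lam, L, v):
    # Horn-style formula: v' = v + 2*lam*(L x v) + 2*L x (L x v),
    # for a unit quaternion (lam, L).
    c1 = cross(L, v)
    c2 = cross(L, c1)
    return tuple(v[i] + 2*lam*c1[i] + 2*c2[i] for i in range(3))

def quat_to_matrix(lam, L):
    # Standard rotation matrix of a unit quaternion (lam, Lx, Ly, Lz).
    x, y, z = L
    return [[lam*lam + x*x - y*y - z*z, 2*(x*y - lam*z), 2*(x*z + lam*y)],
            [2*(x*y + lam*z), lam*lam - x*x + y*y - z*z, 2*(y*z - lam*x)],
            [2*(x*z - lam*y), 2*(y*z + lam*x), lam*lam - x*x - y*y + z*z]]

def mat_rotate(m, v):
    return tuple(sum(m[i][j]*v[j] for j in range(3)) for i in range(3))

# Rotation by 1 radian about the axis (1,1,1)/sqrt(3).
phi = 1.0
s3 = 1.0 / math.sqrt(3.0)
lam = math.cos(phi / 2)
L = (math.sin(phi/2)*s3,) * 3
v = (s3, s3, -s3)

a = quat_rotate(lam, L, v)                    # quaternion-vector formula
b = mat_rotate(quat_to_matrix(lam, L), v)     # vector-matrix multiplication
```

At full double precision the two results agree to roundoff, so the differences studied below only appear at artificially reduced precision.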

The experiments use a variable precision implementation of all the relevant real, vector, quaternion, and matrix operations, offering a runtime choice of the number of bits of mantissa precision and the approximation type (rounding or truncation). The most basic operations are approximation, which rounds or truncates the mantissa of a C-language double (double-precision real floating point number) to the required precision, and arithmetic operations that approximate the operands, operate (at full precision), and approximate the result. From these basics are built vector cross and dot products, quaternion and matrix products, normalization routines, etc.
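One way to emulate such reduced-precision arithmetic in software (a sketch under my own naming, not the paper's C implementation) is to shorten the mantissa of a full-precision float and wrap each arithmetic operation accordingly:

```python
import math

def approx(x, bits, mode="round"):
    # Reduce the mantissa of x to `bits` bits, by rounding or truncation.
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)               # x = m * 2**e, with 0.5 <= |m| < 1
    scaled = m * (1 << bits)
    scaled = round(scaled) if mode == "round" else math.trunc(scaled)
    return math.ldexp(scaled, e - bits)

def approx_mul(a, b, bits, mode="round"):
    # Approximate the operands, multiply at full precision,
    # then approximate the result, as described in the text.
    return approx(approx(a, bits, mode) * approx(b, bits, mode), bits, mode)
```

Vector, quaternion, and matrix products at experimental precision can then be built from `approx_mul` and an analogous `approx_add`.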

Vectors are specified or created randomly as (x, y, z) unit vectors at the highest machine precision. Rotations are specified or created randomly in the conic (φ, n) (rotation angle, rotation axis direction) representation, also at highest precision. Random unit vectors for input or rotation axes are not uniformly distributed in orientation space, being generated by normalizing three uniform variates to produce a unit vector. For random (φ, n) rotations, a random vector is produced for n, and a fourth uniform variate is normalized to the range [-π, π]. Exercise: what do these distributions of vectors and rotations look like in orientation and rotation space?
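The generation scheme just described can be sketched as below (the range of the uniform variates is my assumption; the text does not state it):

```python
import math
import random

def random_unit_vector():
    # Normalize three uniform variates, as in the text.
    # Note: this is NOT uniform on the sphere of directions.
    while True:
        v = [random.uniform(-1.0, 1.0) for _ in range(3)]
        n = math.sqrt(sum(c*c for c in v))
        if n > 1e-12:                      # guard against a near-zero draw
            return [c / n for c in v]

def random_conic_rotation():
    # (phi, n): axis from a random unit vector, angle from a fourth
    # uniform variate normalized to [-pi, pi].
    return random.uniform(-math.pi, math.pi), random_unit_vector()
```

The non-uniformity noted in the comment is exactly the property the exercise above asks the reader to characterize.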

The correct answer is computed at highest precision by applying a single rotation (by Nφ for an N-long iteration of the same rotation φ) around the rotation axis to the input vector, using eq. (6). Then, working at the lower experimental precision, the conic input representation is converted to quaternion and matrix representation by eqs. (2) and (7). Next, the experimental rotational task is carried out at the experimental lower precision. Finally, the error analysis consists of highest precision comparison of the low precision result with the high precision correct answer.

When a reduced-precision vector v' is obtained, its error is determined by comparing it to the full-precision, non-iteratively rotated vector v. The error vector is v' - v; the length error ΔL and direction error ΔD are

ΔL = | ||v'|| - ||v|| |    (13)

ΔD = arccos(v̂' · v̂)    (14)

where v̂ is the unit vector in the direction of v.

In all the work here ||v|| = 1. Directional errors are reported in radians, length errors in units of the (unit) length of the vector to be transformed. The experiments are reported below in a rather discursive style. Along the way a few side issues are addressed, and conclusions are drawn that affect the later choices of experimental parameters.
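The two error measures of eqs. (13) and (14) are easy to state in code (a sketch; the function names are mine, and the arccos argument is clamped against roundoff just outside [-1, 1]):

```python
import math

def _norm(v):
    return math.sqrt(sum(c*c for c in v))

def length_error(v_approx, v_true):
    # Eq. (13): absolute difference of the two vector lengths.
    return abs(_norm(v_approx) - _norm(v_true))

def direction_error(v_approx, v_true):
    # Eq. (14): angle in radians between the two directions.
    dot = sum(a*b for a, b in zip(v_approx, v_true))
    c = dot / (_norm(v_approx) * _norm(v_true))
    return math.acos(max(-1.0, min(1.0, c)))
```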

4.2 Analytic Approaches

Let us begin to consider the conversion of conic (φ, n) rotation parameters to quaternion and matrix form. For simplicity we might assume that θ = φ/2 is known exactly. Assume that at n bits of precision a number x is represented to a maximum additive absolute error of δ = 2^-n, which is a relative error of δ/x. Ignoring second order terms, the product of two imprecise numbers (x + δ)(y + ε) has an additive error of εx + δy. When looking up or computing a function, the error to first order in f(x + δ) is δf'(x). Thus the error in computing the cosine of an errorful number x + δ is sin(x)δ. If, as in the implementation of this paper, the function is computed initially to high precision and then approximated, the further approximation may add another error of δ, for a total absolute error of (1 + sin(x))δ. Of course the function computation could be set up to eliminate the last approximation error if the working precision were known in advance.
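The first-order estimate for function evaluation can be checked numerically (a sketch; the particular x and δ are arbitrary choices of mine):

```python
import math

x = 0.8              # an arbitrary argument
delta = 2.0**-20     # a small perturbation, standing in for representation error

# The error in computing cos of the errorful number x + delta...
actual = abs(math.cos(x + delta) - math.cos(x))

# ...against the first-order prediction sin(x)*delta.
predicted = math.sin(x) * delta

# The neglected second-order term is ~cos(x)*delta^2/2, so the
# relative gap between actual and predicted is of order delta.
relative_gap = abs(actual - predicted) / predicted
```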

These considerations are a start at constructing formulae for the relative error in the conversion of conic to quaternion and matrix form via eqs. (2) and (7). However, missing is the constraint that n be a unit vector (that the matrix be orthonormal). Without this constraint the predictions of errors will be pessimistic. Interacting with the constraint is the usual simplistic assumption that the worst happens in every operation (i.e., errors never cancel). This usual assumption makes analysis possible but potentially irrelevant. In fact, errors in the matrix and quaternion components after conversion depend closely on the values of the inputs, and empirically do not exhibit predictable, data-independent structure.

Turning to a slightly grosser examination of the structure of rotation operations, consider the implementation used in the work below for the two competing equations for quaternion-mediated rotation (eq. (6)) and matrix-mediated rotation (multiplication of a vector by the matrix in eq. (7)). The quaternion formula is the sum of three vectors (the original and two cross products), whereas the matrix formula is a matrix-vector product. Going through the code for this operation, we find that the x' component of the result vector (x', y', z') after applying the quaternion formula for quaternion (λ, Λx, Λy, Λz) to the input vector (x, y, z) is computed by operations applied at their usual precedence in the following formula:

x' = x + 2λ(Λy z - Λz y) - 2Λz(Λz x - Λx z) + 2Λy(Λx y - Λy x)    (15)

and similarly for the y' and z' components. Both (x, y, z) and the quaternion are unit vectors, and λ is a cosine.

The operations implementing the corresponding component by the matrix multiplication formula are

x' = m11 x + m12 y + m13 z    (16)

where the mij are the elements of the rotation matrix of eq. (7), and similarly for the y' and z' components.

There is nothing in these two formulae to predict the significant statistical difference (especially under the more predictable truncation approximation) in the performance of these two methods (see, e.g., Fig. 6).

Pressing on, consider a still grosser description of the structure. Looking at eqs. (4) and (8), say that the error in quaternion and matrix (caused perhaps by truncation) is described as a scaling of the whole quaternion or matrix (by δ = (1 - ε), say, where ε is the truncation error). In that case the matrix-transformed vector is in error by a factor of δ, or an additive error of ε, and the quaternion-transformed vector by a factor of δ² (because of the conjugation), or an additive error of 2ε (to first order). Iterated matrix or quaternion self-multiplication exponentially damps the individual matrix and quaternion elements, which otherwise vary sinusoidally as the iteration proceeds. Under iterated application of the scaled matrix, the direction of the resulting vector should be unaffected, but its length should shrink, decaying exponentially to zero.
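The factor-of-two claim can be checked directly. In this sketch (my own minimal stand-ins, not the paper's routines), the quaternion is applied by conjugation, as in eq. (4), so the δ² effect appears:

```python
import math

def qmul(a, b):
    # Hamilton product of quaternions (w, x, y, z).
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def conj_rotate(q, v):
    # Rotate v by conjugation q v q*.
    w, x, y, z = qmul(qmul(q, (0.0,) + tuple(v)), (q[0], -q[1], -q[2], -q[3]))
    return (x, y, z)

eps = 1e-6
s3 = 1.0 / math.sqrt(3.0)
h = 0.5   # half-angle of a 1-radian rotation about (1,1,1)/sqrt(3)
q = (math.cos(h), math.sin(h)*s3, math.sin(h)*s3, math.sin(h)*s3)

# A uniformly shrunken quaternion, modeling the scaling error.
q_scaled = tuple((1.0 - eps) * c for c in q)

v = (1.0, 0.0, 0.0)
length = math.sqrt(sum(c*c for c in conj_rotate(q_scaled, v)))
# Conjugation applies the scale twice: length ~ (1-eps)^2 ~ 1 - 2*eps.
```

By contrast, a rotation matrix whose entries are all scaled by (1 - ε) shrinks the result by only the single factor (1 - ε), matching the additive errors ε versus 2ε argued above.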

In fact this level of explanation suffices to explain the data resulting from a single rotation (Fig. 6): the quaternion errors are approximately twice the matrix errors for a single rotation under truncation. It also correctly predicts the behavior of matrix and quaternion elements under iteration of the same rotation (Fig. 3). It correctly predicts the form of error for the matrix transformation under iteration of the same rotation (Fig. 8), and forms part of the explanation for quaternion transformation error (see Section 4.4). Exercise: In Fig. 3 the period of the matrix element is half that of the quaternion element. This result is expected from inspection of eqs. (8) and (2), but looks inconsistent with eq. (7). Is it?

4.3 Single Rotations

First, some sample single rotations were applied:

v' = R o v    (17)


Figure 3 About Here

Figure 3: The components of the quaternion and matrix are scaled and shifted sinusoids. Here, representative quaternion and matrix elements' amplitudes decay exponentially through time under iterative rotation with truncation. Under rounding the cumulative decay is negligible. Dotted line - matrix element [0,0]. Solid line - a quaternion element.

Figure 4 About Here

Figure 4: Rotating the unit vector in direction [1, 1, -1] around the axis [1, 1, 1] by varying amounts: (a) 0.1 rad, (b) 1.0 rad, (c) 1.7 rad. The errors are plotted logarithmically against the number of bits of mantissa precision, under truncation. Hollow and dark squares - direction error with quaternion and matrix representations. Star and cross - length error with quaternion and matrix representations. Variations in error magnitude and its rank ordering result from varying rotation magnitude. The largest error at 22 bits precision is 0.0001 (the smallest representable number is about 10^-9).

Fig. 4 shows the results, comparing the conic (eq. (6)) with matrix rotation computation under truncation. In general, increasing the rotation magnitude does not increase error, the error magnitudes are not predictably ordered, and the error magnitude is at worst 10^-6 with 22 bits of mantissa precision.

Next, 100 individual random rotation - random vector pairs were generated, with the rotation applied to the vector at a single precision (here, as throughout the paper until Section 4.8, 10 bits) under both truncation (case RaTr) and rounding (case RaRd). (Statistics for several cases are collected in Table 2 in Section 4.6.)

The results are shown in two different styles in Figs. 5, 6, and 7. Fig. 5 displays length and direction errors of the 100 trials under truncation, indexed along the X axis although they are each independent. As predicted in Section 4.2, the matrix representation outperforms quaternions by a factor of about two. The length errors under truncation show the nonzero mean one would expect. The same data are presented in Fig. 6 as two projections of the three-dimensional error vectors, all oriented so that the true answer lies along the Z axis, facing down with its point at the origin, aiming in the -Z direction. The X-Y projection shows how the direction errors arise, and the X-Z projection shows how the length errors arise (with the approximate vectors all being too short). Exercise: Construct the rotation that points a given (x, y, z) vector along the Z axis, (a) from rotations about the (X, Y, Z) axes, (b) using quaternions.

Figure 5 About Here

Figure 5: Case RaTr. (a) Length errors of 100 different random (rotation, vector) pairs at 10 bits precision under truncation. Circles plot the quaternion-vector errors, X's the matrix-vector errors. (b) As for (a), but direction errors.


Figure 1

Figure 2

Figure 3

Figure 6 About Here

Figure 6: The same case and data as the previous figure, plotted as error vectors, where the correct answer vector points to the origin and lies along the positive Z axis (see text). (a) Error vectors projected onto the X-Z plane. Circles plot the quaternion-vector errors, X's the matrix-vector errors. (b) As for (a), but projection onto the X-Y plane.

Figure 7 About Here

Figure 7: Case RaRd. (a), (b): as the previous figure, with the same input rotations and vectors, but with operations under rounding to 10 bits of precision.

Fig. 7 is similar to Fig. 6, except that rounding is employed (note the scale change). Not only does the length error bias disappear as expected, but the difference between the methods decreases, with the matrix implementation maintaining a slight advantage.

4.4 Iterated Rotations and Normalization

The second experiment is to iterate the same small rotation of φ around a particular direction, using three rotation composition methods. Quaternion multiplication and matrix multiplication produce iterated quaternion and matrix rotation representations. At the end of the iterations, the resulting rotation representation is applied to the input vector once to determine the rotated vector. A third method, called the vector method here, iteratively applies the identical incremental rotation to the vector at each iterative step, preserving only the rotated vector. There are thus two cases (quaternion and matrix) of composing rotation representations and one case (vector) of composing the application of the rotation to the vector. Previous results indicate that the matrix method is both more accurate and more efficient for applying a single rotation, and empirically the results are much better with this choice as well, so rotations are implemented in the vector method by matrix-vector multiplication.
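At full double precision the three composition methods agree with the directly computed cumulative rotation; a sketch of the experimental loop (my own minimal stand-ins for the paper's reduced-precision routines) is:

```python
import math

def qmul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def quat_to_matrix(q):
    w, x, y, z = q
    return [[w*w + x*x - y*y - z*z, 2*(x*y - w*z), 2*(x*z + w*y)],
            [2*(x*y + w*z), w*w - x*x + y*y - z*z, 2*(y*z - w*x)],
            [2*(x*z - w*y), 2*(y*z + w*x), w*w - x*x - y*y + z*z]]

def matmul(a, b):
    return [[sum(a[i][k]*b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(m, v):
    return tuple(sum(m[i][j]*v[j] for j in range(3)) for i in range(3))

N, phi = 50, 0.1
s3 = 1.0 / math.sqrt(3.0)
q_step = (math.cos(phi/2), math.sin(phi/2)*s3, math.sin(phi/2)*s3, math.sin(phi/2)*s3)
m_step = quat_to_matrix(q_step)
v0 = (s3, s3, -s3)

# Method 1: compose quaternions, apply once at the end.
q_tot = (1.0, 0.0, 0.0, 0.0)
for _ in range(N):
    q_tot = qmul(q_step, q_tot)
v_quat = matvec(quat_to_matrix(q_tot), v0)

# Method 2: compose matrices, apply once at the end.
m_tot = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
for _ in range(N):
    m_tot = matmul(m_step, m_tot)
v_mat = matvec(m_tot, v0)

# Method 3 (vector method): rotate the vector one step at a time.
v_vec = v0
for _ in range(N):
    v_vec = matvec(m_step, v_vec)

# Reference: a single rotation by N*phi about the same axis.
q_ref = (math.cos(N*phi/2), math.sin(N*phi/2)*s3,
         math.sin(N*phi/2)*s3, math.sin(N*phi/2)*s3)
v_ref = matvec(quat_to_matrix(q_ref), v0)
```

The paper's experiments run exactly this loop, but with every arithmetic operation performed at a reduced mantissa precision, which is where the three methods separate.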

These experiments were carried out under both truncation and rounding. Truncation is sometimes used to get at worst case performance in a numerical analysis sense. Such an approach might be useful in analysis of a non-iterative computational scheme (such as a single rotation), but the cumulative scaling effects described in Section 4.2 overwhelm numerical precision effects. The errors engendered through iterative truncation turn out to be qualitatively similar to those that arise under rounding. These results are thus good for geometric intuition about the methods, and illustrate the importance of normalization. Fig. 8(a) and (b) show the (shocking) errors that result if no normalization takes place during the iterated rotation composition, and if the resulting non-normalized product quaternion or matrix rotation is applied to the input vector. This is case IS0 (Iterative Sequence, 0 normalizations). Without normalization, all the representations show cumulative errors that are easy to understand.


Figure 5(a)    Figure 5(b)

Figure 6(a)    Figure 6(b)

Figure 8 About Here

Figure 8: Case IS0. Iterative transformation of the vector [1, 1, -1] about the axis [1, 1, 1] by 1 rad, 200 times. Calculations done with ten bits of mantissa precision under truncation. No normalization of the final representation before applying it to the vector. This choice of input parameters results in a representative performance, but not a normative one. (a) Result vector error projected on the X-Z plane. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. The matrix errors are a very tight helix up the Z axis, with the vector errors only slightly larger. (b) As for (a), but showing the X-Y projection. The matrix and vector errors project to the small dark blob at the origin.

First consider the vector representation. It is acted on at each iteration by an identical but errorful transform of eq. (6). We see it in both views spiraling away from the correct initial value, its length shrinking and its direction error increasing. The performance of the vector method might be improved by a factor of two on average by using the matrix transformation at every step (considering the arguments in Section 4.2 and the results shown in Fig. 6). Another important aspect of this method is its variability in performance for different rotations, arising in the variability in the accuracy of the first calculated rotation (again, Fig. 6).

As predicted in Section 4.2, the matrix representation is affected in its scaling property but not significantly in its rotational properties: there is effectively no directional error, but the result shrinks to less than half its original length in 200 iterations.

The quaternion representation acts, using eq. (6), by adding two cross product vectors to the original input vector; as the quaternion components become increasingly systematically errorful, they affect the result in the interesting way shown in the figure. One effect is the decaying magnitude of the quaternion; the other is that as the cumulative rotation φ increases from zero, the sin(φ/2) weighting of Λ, the vector part of the quaternion, starts at zero and returns to zero with a period of 2π/t, where t is the truncated value of the incremental rotation angle. Thus with that period the effective rotation error varies sinusoidally, and returns to zero as the magnitude of Λ does.

The same data are plotted for 400 iterations, this time in Fig. 9. First consider length errors. This presentation emphasizes the exponential decay of the answer's length to zero with the matrix and vector representations. As the quaternion's magnitude ultimately decays to zero, the result vector will be unaffected by the rotation and will remain equal to the input vector, so its length error stabilizes at zero. The short period sinusoidal character of the length error arises from the arguments of the last paragraph. The initially increasing envelope arises from the increasing inaccuracy caused by the truncation. The ultimate exponential decay of the envelope arises from the shrinking of the quaternion components.

Considering direction errors, the matrix and conic representations have (locally) linearly increasing errors as they spiral away from the true answer direction. (The linear increase cannot be maintained forever on the orientation sphere, of course.)


Figure 7(a)    Figure 7(b)

Figure 8(a)    Figure 8(b)

Figure 9 About Here

Figure 9: Case IS0, for 400 iterations (see text). (a) Length error of the result vector. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. The vector and matrix errors coincide. (b) As for (a), but showing direction error for the resulting vector. The vector and matrix errors are very close to zero.

Figure 10 About Here

Figure 10: Case IS0 under rounding instead of truncation. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. Though the forms of the errors are similar to the truncation case, their relative magnitudes are more equal. (a) Result vector projected on the X-Z plane. (b) As for (a), but showing the X-Y projection.

The quaternion method's direction error ultimately becomes purely periodic as the answer vector becomes static. The direction error as a function of iteration number t and angle r between the rotation axis and the input vector is then of the form ((r sin t)² + (r - r cos t)²)^(1/2), or the length of a circular chord from a fixed point on the circle to a point moving around the circle at constant velocity. Before that time, the direction error exhibits a more complex two-humped form; this arises from the two separate cross product components of the vector increment in eq. (6). Exercise: Would there be any difference in direction and length errors if the rotation is accomplished by conjugation (eq. (4)) instead of eq. (6)?

Next in this sequence, the last two demonstrations are repeated under rounding, not truncation. The expectation is that, due to the better approximation and possible error cancellations arising from high and low approximations, the errors in the representations will be less, but that systematic errors will still exist. The results (Figs. 10, 11) bear this out; the errors often seem like much smaller, noisy versions of the truncation errors. In Fig. 11 the quaternion errors display the fine two-humped structure longer, since it is not overwhelmed as quickly by shrinking quaternion magnitude. Rounding has a dramatically good effect on the systematic shrinking of matrices and quaternions under iterative self-multiplication.

So far we have, purely for intuition, dealt with the pathological case in which no normalization of rotation representations is performed. The arguably pathological case of approximation through truncation yields insight about the structure of systematic approximation errors in an iterative scheme. From here on we abandon truncation approximation (except in Section 4.7).

Figure 11 About Here

Figure 11: Data of the previous figure, showing length and direction errors. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of result vector. (b) Direction error.

Figure 9(a) Figure 9(b)

Figure 10(a) Figure 10(b)

Figure 12 About Here

Figure 12: Case IS1. Rotating the vector (1,1,-1) by 1 radian increments about the axis (1,1,1) with 10 bits precision under rounding, normalizing the quaternion and matrix representations prior to application to the input vector, and normalizing the final vector representation to its initial length. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. The vector representation has lower variance, and the quaternion exhibits marked periodicity. Only the first 100 iterations are shown; the rest are similar. (b) Direction error.

We now look at two different styles of normalization. In the first, no normalization is performed on intermediate representations, but the final quaternion, matrix, and vector are normalized. In the second, the intermediate quaternion, matrix, and vector are normalized at each step. Normalization of the matrix and quaternion is not dependent on the input data, but the length of the input vector must be computed and remembered for the vector representation to be normalized. This fact might be significant if many vectors are being treated at once.
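For concreteness, a minimal sketch of the two normalizations involved. The quaternion case is just division by the norm; the paper's exact "one crossproduct" matrix construction is not shown in this excerpt, so the version below, which rebuilds the third row from the first two, is an assumption:

```python
import math

def normalize_quaternion(q):
    # Divide by the Euclidean norm; input-independent, as the text notes.
    n = math.sqrt(sum(c * c for c in q))
    return [c / n for c in q]

def normalize_matrix_one_crossproduct(M):
    # A guessed form of "one crossproduct" matrix normalization: normalize
    # the first two rows, then rebuild the third as their cross product.
    def unit(v):
        n = math.sqrt(sum(c * c for c in v))
        return [c / n for c in v]
    r0, r1 = unit(M[0]), unit(M[1])
    r2 = [r0[1] * r1[2] - r0[2] * r1[1],
          r0[2] * r1[0] - r0[0] * r1[2],
          r0[0] * r1[1] - r0[1] * r1[0]]
    return [r0, r1, r2]
```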

Fig. 12(a) shows the length error when the final quaternion and matrix are normalized prior to application to the input vector, and the incrementally generated vector representation is normalized to its original length (here known to be unity). This case is called IS1, for Iterative Sequence, 1 normalization. The matrix normalization used is the one-crossproduct form. When only one normalization is being performed, the three types of normalization perform virtually identically. The computations were performed under rounding to the usual ten bits of precision. If its initial length is not remembered, the vector representation cannot be normalized, and its error is the same as the non-normalized case of Fig. 11, showing a linear error accumulation.

Fig. 12(b) compares direction error for the three representations, normalized as in Fig. 12(a). The comparative performance here is certainly not so clear-cut, and the error accumulations resemble random walks. Alternatively, the performance could depend more on the particular choice of input data (rotation and input vector) than on numerical properties of the methods.

Finally, a natural question concerns the effects of normalization applied after every rotation composition. This is case ISN (N normalizations). Fig. 13 shows the results for length and direction. As might be expected, for length error the normalized vector has the best performance (presumably as accurate as machine precision allows). The quaternion result has a systematically negative mean, higher variance, and periodicity. The matrix length error displayed in Fig. 13(a) results from the no-crossproduct form of matrix normalization. The trend is not an illusion; carried out to 400 iterations, the length error shows a marked systematic increase in the negative direction. On the other hand, iteration of the one-crossproduct normalization resulted in a factor of two worsening of the magnitude of the direction errors for the matrix representation. Repeated normalizations seem to have adversely affected the accuracy of the vector representation after many iterations, but more work is needed to tell if this is a repeatable effect. In any event, it seems clear that the repeated normalizations are certainly not improving matters much in this case over case IS1.

Figure 11(a) Figure 11(b)

Figure 12(a) Figure 12(b)

Figure 13 About Here

Figure 13: Case ISN. As for the previous figure, but normalizing the rotation representation at each iteration, using no-crossproduct matrix normalization, which improved direction errors but worsened length errors over the one-crossproduct form. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. (b) Direction error.

Figure 14 About Here

Figure 14: Case RS1. As for case IS1 (no normalization of intermediate representations), but using a sequence of random rotations. There is no difference in performance with the different matrix normalizations; no-crossproduct matrix normalization shown. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector (first 100 values; the rest are similar). (b) Direction error. The rather confusing results at least have some clear qualitative differences.

The main conclusion here is that the cost of normalizing quaternion or matrix representations after every rotation in a repeated sequence may not be repaid in results, and in fact may be actively harmful. The vector representation seems to do best of the three on length error and worst on direction error. More work (analytic or experimental) would be needed to clarify the performance ranks, or even to recommend or disqualify a representation for this task.

4.5 Concatenated Random Rotations

Here a sequence of random rotations is generated as described earlier, analogous to the sequence of identical rotations in the last experiment. They are applied cumulatively to an input vector. In each case computation is with 10 bits precision under rounding. The cases of a single final normalization and of normalizing every intermediate representation (called cases RS1 and RSN) were investigated. Both the no- and one-crossproduct forms were investigated. The results, comparable to those in cases IS1 and ISN (Figs. 12 and 13), are shown in Figs. 14 and 15.

The results indicate that the vector representation remains better than the other two for length errors, but for direction errors the results are unclear. It is hard to draw clear distinctions either between the normalization methods or the representations. Again the form of matrix normalization has some annoying effects, which motivate the next section.

Figure 13(a) Figure 13(b)

Figure 14(a) Figure 14(b)

Figure 15 About Here

Figure 15: Case RSN. As for case RS1, but normalizing all intermediate representations. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error, using one-crossproduct matrix normalization. (b) Direction error, using the one-crossproduct form. (c) Direction error, using the no-crossproduct form.

Case        mu-vec     mu-mat     mu-quat    sigma-vec  sigma-mat  sigma-quat
RaTr  L        -       0.006225   0.005026      -       0.001181   0.003369
      D        -       0.001787   0.004578      -       0.001094   0.002307
RaRd  L        -      -0.000310  -0.000225      -       0.001087   0.001263
      D        -       0.000965   0.001180      -       0.000542   0.000759
IS1   L     0.000276  -0.000260  -0.000262   0.000466   0.000857   0.001648
(100) D     0.003694   0.004063   0.001435   0.001705   0.002298   0.001123
ISN   L     0.000137  -0.001772  -0.000811   0.000533   0.001106   0.001312
(100) D     0.005646   0.004844   0.001606   0.002786   0.002872   0.001045
RS1   L    -0.000252  -0.000162  -0.000157   0.000521   0.001612   0.002022
(100) D     0.003710   0.005854   0.006065   0.002150   0.003686   0.006065
RSN   L     0.000164  -0.000602  -0.001097   0.000473   0.000703   0.001666
(100) D     0.002269   0.009830   0.005960   0.000874   0.004230   0.002236
      D     0.002269   0.005187   0.005960   0.000874   0.002248   0.002236

Table 2: Statistics (means mu and standard deviations sigma) for length (L) and direction (D) errors for some of the cases illustrated in this section. Cases RaTr and RaRd are the independent random rotations, IS1 and ISN are the iterative sequences (one normalization, and normalization of each intermediate representation), and RS1 and RSN the random sequences. Only the first 100 data values are used from the iterated rotations, to diminish the effect of longer-term trends. The second D line for RSN is with no-crossproduct normalization (which only affects the matrix representation).

4.6 Statistical Summary

Table 2 summarizes some of the data from the cases so far.

4.7 Matrix Normalization

Some experiments explored the performance of the three types of matrix normalization (no, one, or two cross products) in the context of iterated identical and random rotations.

The experiments consist of two iterated identical rotations (Figs. 16, 17) under rounding, and two iterated random rotation sequences under truncation and rounding. Fig. 18 gives the direction errors for two sequences of random rotations applied to an input vector. In no case is there significant difference between the performances on the length error at 200 iterations, but the one- or two-crossproduct forms significantly outperform the no-crossproduct form over 400 iterations, since the latter exhibits a systematically increasing negative length error and the former do not.

Figure 15(a)

Figure 15(b)

Figure 15(c)

Figure 16 About Here

Figure 16: Iterated identical rotations of the unit vector in direction (1,1,-1) by 1 radian about the axis (1,1,1), computed to 10 bits under rounding. Solid line - no cross products; dotted line - one cross product; dashed line - two cross products. (a) Length errors. (b) Direction errors.

Figure 17 About Here

Figure 17: As for the previous figure, but the rotation is of the unit vector in direction (1,2,3) by iterated rotations of 2 radians about the axis (3,-2,-1).

For direction error, few conclusions can be drawn about relative performance under iterated identical rotation sequences. It is perhaps surprising that the no-crossproduct form usually performs here at least as well as the other two over a small number of iterations, and that for random sequences there is no clear winner between normalization types, even approximating with truncation. Thus the symptoms noticed in the last two sections, involving long-term relative effects of the normalization form, were not noticed over 200 iterations. The one- and two-crossproduct forms may be more stable over long iteration, but the issue of optimal normalization remains open.

4.8 Average Error of Concatenated Random Rotations

Which representation is best, or worst? To wind up this overlong exercise in trying to derive structure from noise, let us consider average behavior for the task of random rotation sequences (perhaps of more general interest than iterated identical rotations). Technical decisions on normalization are based on the conclusions of previous sections: approximation is by rounding, with normalization only after the last rotation. Matrix normalization is done with the one-crossproduct method. Each run is a single experiment like that of Section 4.5, only performed (with the same data) at a number of different precisions.

Such a set of runs yields length and direction error surfaces plotted against the precision

Figure 18 About Here

Figure 18: Direction errors only, for a sequence of random rotations applied to an input vector. Legend as in previous two figures. (a) Truncation to 10 bits. (b) Rounding to 10 bits.

Figure 16(a) Figure 16(b)

Figure 17(a) Figure 17(b)

Figure 18(a) Figure 18(b)

Figure 19(a) Figure 19(b)

Figure 19(c) Figure 19(d)

Rep        Repeated Rot                    Random Seq
Vec(Mat)   (9N+17)*, (6N+13)+, 1 sc        26N*, 19N+, N sc
Vec(Quat)  (18N+4)*, (12N)+, 1 sc          22N*, 12N+, N sc
Quat       (16N+22)*, (12N+12)+, 1 sc      (20N+18)*, (12N+12)+, N sc
Matrix     (27N+26)*, (18N+19)+, 1 sc      (44N+9)*, (31N+6)+, N sc

Table 4: Operation counts for N-long rotation sequences. The long form of matrix multiplication is used here. Vec(Mat) is the vector representation with intermediate transformation by matrices, Vec(Quat) with transformation by the quaternion method. Multiplications, additions, and sine-cosine pairs are denoted by *, +, and sc.

matrix transformation) since matrix-vector multiplication costs one third of matrix-matrix multiplication
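To make the operation counts concrete, they can be evaluated at a given sequence length. A small sketch, with the multiplication formulas transcribed from Table 4's "Repeated Rot" column:

```python
# Multiplication counts for an N-long sequence of repeated rotations,
# transcribed from Table 4 (the "Repeated Rot" column, * entries only).
mults_repeated = {
    "Vec(Mat)":  lambda N: 9 * N + 17,
    "Vec(Quat)": lambda N: 18 * N + 4,
    "Quat":      lambda N: 16 * N + 22,
    "Matrix":    lambda N: 27 * N + 26,
}
N = 25
counts = {rep: f(N) for rep, f in mults_repeated.items()}
# e.g. counts["Vec(Mat)"] == 242 while counts["Matrix"] == 701
```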

4.10 Conclusions and Questions

The conclusions we can draw at this point are the following:

1. Numerical simulation of variable precision arithmetic has yielded some tentative conclusions, but in many interesting cases they are rather delicate and depend on niceties of the implementation. Accurate performance predictions may only be possible with faithful simulation of the particular hardware and software at issue.

2. Practical differences in numerical accuracy tend to disappear at reasonable precisions. After 200 iterations of a rotation, 53 bits of mantissa precision (the double representation in the C language) yields no error to 10 decimal places. With 24 bits (the float representation), effects were noticeable in the 5th to 7th decimal place with all methods.

3. Normalization is vital before quaternion or matrix representations are applied to vectors, but it is not as important during the composition of representations, and the extra numerical computation of frequent normalization may actually decrease accuracy in the final result.

4. The vector (with matrix transformation) and matrix representations for rotation sequences seem to outperform the quaternion for accuracy.

5. The vector (with matrix transformation) representation for rotation sequences, with a single normalization of vector length at the end, is most computationally efficient and has least length error on the average.

6. Averaged over many trials, the difference in accuracy of the three methods amounts only to about a single bit of precision for a small number of rotation compositions (say 25 or fewer).

7. Matrices are the least computationally efficient on rotation sequences.

There are also some remaining basic questions:

1. What is a reasonable analytic approach to the numerical accuracy problem for interesting rotational tasks (say, sequences of random rotations under rounding)?

2. How should optimal matrix or quaternion normalization methods be defined and constructed?

5 Discussion

The area of robust computational geometry is becoming popular, with an annual international conference and increasingly sophisticated theoretical formulations. Very little of that work (if any) is related to rotation algorithms. Symbolic algebra systems can help with analysis [4]. A good treatment of computer arithmetic and rounding appears in [15], with an application to detecting line intersection. Serious qualitative representational problems arise when issues of topology (line connectivity) are at stake, and much of the work is on computing line intersection accurately with finite precision arithmetic [19]. For instance, Milenkovic's analysis is based on facts about line intersection, such as the number of extra bits that are necessary to guarantee a specific precision of answer. Some of these facts are surprising; it takes at least three times the input precision to calculate a line intersection location to the same precision [11]. Robust polygon and polyhedron intersection algorithms, as well as various line-sweeping algorithms, are based on accurate line intersection algorithms [8].

Several points of comparison between rotation parameterizations and representations have been raised in the preceding sections. It is perhaps worth repeating that little evidence for dramatic differences in numerical accuracy has emerged, though the evidence might tend to favor matrices slightly and to condemn the vector representation.

One of the common applications of rotational transforms is to affect several vectors at once, either in the context of transforming a set of points around (as in graphics) or in handling coordinate systems represented as n points (n - 1 of them at points at infinity representing axis directions, and one representing the origin of coordinates in a homogeneous coordinate system). In the first case the repeated application of the same transform will be more economical if the work is done in the matrix representation. In the second the same may well be true, but of course the whole point of homogeneous coordinates is to make the projective, a fortiori affine, transforms into linear ones implemented with matrix multiplication. This means, for example, that coordinate systems, rigid transformations including translation and rotation, and point projective transforms (simple camera models) can all be represented as matrices and composed with matrix multiplication, and that the operation of such transforms on vectors is simply matrix-vector multiplication. In this context homogeneous coordinates by definition require the matrix representation.
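As an illustration (not from the paper) of rotation and translation carried in one homogeneous matrix, so that rigid transforms compose and apply by matrix multiplication alone:

```python
import math

# Rigid transform as a single 4x4 homogeneous matrix: rotate pi/2 about z,
# then translate by (2, 0, 0). Applying it is one matrix-vector product.
c, s = math.cos(math.pi / 2), math.sin(math.pi / 2)
T = [[c, -s, 0, 2],
     [s,  c, 0, 0],
     [0,  0, 1, 0],
     [0,  0, 0, 1]]
p = [1, 0, 0, 1]        # the point (1, 0, 0) in homogeneous coordinates
q = [sum(T[i][j] * p[j] for j in range(4)) for i in range(4)]
# q is [2, 1, 0, 1] up to roundoff: (1,0,0) rotated to (0,1,0), shifted to (2,1,0)
```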

If the rotation at issue is a general one (say, specified in conic (phi, n) parameters), then conversion either to the quaternion or matrix form is easy. Thus the representations in this case do not dictate the semantics, or how one wants to think about the rotation. If the rotation is specified in terms of rotations about (X, Y, Z) coordinate axes, then the matrix representation is easily and robustly constructed.

Real-time applications involving concatenating rotations and applying them to a small number of vectors (perhaps one, the position of a robot hand) need detailed analysis to see which representation is the most efficient. Similarly for more complex geometric calculations.

A common application of rotation representations in vision is to express the motion of the observer. Often the rotational motions, for mechanical or mathematical reasons, are expressed as rotations about some fixed axes, which in practice are usually orthogonal. Examples are the motions induced by pan and tilt camera platforms, or the expression of egomotion in terms of rotations about the observer's local X, Y, Z axes. In this case the matrix representation is the clear choice, to maximize efficiency and intuition.

Reconstructing the rotation that transformed known points is a useful vision task that requires fitting data to a model of the transformation. The straightforward route, followed since Roberts' thesis [17], is to do a linear least squares fit to the transform's matrix representation. This approach is perhaps risky, since the matrix elements are not independent and the error being minimized thus may not be the one desired. The alternative of using conic parameters has been successfully used [12] and may well be a better idea. Recently, more sophisticated Kalman filter methods have been used to derive motion from point matches through time [5]. Model-matching with constraints allows a model to be matched to a class of objects, and promising approaches are emerging that use hybrid methods of symbolic algebra (implementing polynomial triangulation to get explicit solutions for known constraints) followed by root-finding and sophisticated minimization techniques to search for minimum-error solutions [13,14].

A less common but important set of applications is to reason about and manipulate rotations themselves. For example, one may want to sample rotation space evenly [9], or to create smoothly interpolated paths in rotation space between a given set of rotations [16]. In these cases the Euler-Rodrigues parameters are the most convenient.

Another example is the use, since 1932, of quaternions in the neurophysiological literature to describe eye kinematics [22]. Models of human three-dimensional oculomotor performance, especially for the vestibulo-ocular reflex and feedback models of saccades, involve computations on rotations that are much more stable when carried out in conic or Euler-Rodrigues parameters using quaternion arithmetic. Subtractive feedback laws adequate for simple (left or right) eye movements do not generalize to arbitrary rotational axes. The quaternion model uses multiplicative feedback, explains human performance better than similar models based on rotation matrices, and leads to explicit predictions, such as a four-channel saccade computation [21].


6 Bibliography

[1] S. L. Altmann. Rotations, Quaternions, and Double Groups. Clarendon Press, Oxford, 1986.

[2] C. M. Brown. Gaze controls with interactions and delays. In DARPA Image Understanding Workshop (submitted IEEE-TSMC), pages 200-218, May 1989.

[3] C. M. Brown. Prediction in gaze and saccade control. Technical Report OUEL 177189, UR TR-295, Oxford University Dept. Engg. Science (U. Rochester Comp. Sci. Dept.), May 1989.

[4] C. I. Connolly, D. Kapur, J. L. Mundy, and R. Weiss. GeoMeter: A system for modeling and algebraic manipulation. In Proceedings, DARPA Image Understanding Workshop, pages 797-804, May 1989.

[5] O. D. Faugeras, F. Lustman, and G. Toscani. Motion and structure from motion from point and line matches. In Proceedings, International Conference on Computer Vision, pages 25-34, June 1987.

[6] J. D. Foley and A. Van Dam. Fundamentals of Interactive Computer Graphics. Addison-Wesley, 1982.

[7] W. R. Hamilton. On quaternions; or on a new system of imaginaries in algebra. Philosophical Magazine, 3rd Ser., 25:489-495, 1844.

[8] C. Hoffmann and J. Hopcroft. Towards implementing robust geometric computations. In Proceedings, ACM Symposium on Computational Geometry, 1988.

[9] Berthold K. P. Horn. Robot Vision. MIT Press / McGraw-Hill, 1986.

[10] K. I. Kanatani. Group Theory in Computer Vision. Springer Verlag, 1989.

[11] V. J. Milenkovic. Robust geometric computations for vision and robotics. In Proceedings, DARPA Image Understanding Workshop, pages 764-773, May 1989.

[12] J. L. Mundy. Private communication, August 1989.

[13] J. L. Mundy. The application of symbolic algebra techniques to object modeling. Lecture, University of Oxford, August 1989.

[14] J. L. Mundy. Symbolic representation of object models. In preparation, August 1989.

[15] T. Ortmann, G. Thiemt, and C. Ullrich. Numerical stability of geometric algorithms. In Proceedings, ACM Symposium on Computational Geometry, 1987.

[16] K. S. Roberts, G. Bishop, and S. K. Ganapathy. Smooth interpolation of rotational motion. In Computer Vision and Pattern Recognition 1988, pages 724-729, June 1988.

[17] L. G. Roberts. Machine perception of three-dimensional solids. In J. T. Tippett et al., editors, Optical and Electro-Optical Information Processing, pages 159-197. MIT Press, 1968.

[18] O. Rodrigues. Des lois géométriques qui régissent les déplacements d'un système solide dans l'espace, et de la variation des coordonnées provenant de ces déplacements considérés indépendamment des causes qui peuvent les produire. Journal de Mathématiques Pures et Appliquées, 5:380-440, 1840.

[19] D. Salesin, J. Stolfi, and L. Guibas. Epsilon geometry: building robust algorithms from imprecise computations. In Proceedings, ACM Symposium on Computational Geometry, 1989.

[20] R. Taylor. Planning and execution of straight line manipulator trajectories. In J. M. Brady, J. M. Hollerbach, T. L. Johnson, T. Lozano-Perez, and M. T. Mason, editors, Robot Motion: Planning and Control. MIT Press, 1982.

[21] D. Tweed and T. Vilis. Implications of rotational kinematics for the oculomotor system in three dimensions. Journal of Neurophysiology, 58(4):832-849, 1987.

[22] G. Westheimer. Kinematics for the eye. Journal of the Optical Society of America, 47:967-974, 1932.


The matrix representation is redundant, representing the three parameters with nine numbers. Does this mean it is more vulnerable to numerical error, or less? Given that the algorithms are straightforward and simple, do their numerical results depend more on their structure or on the data they operate on? The individual numbers (often components of unit 3-vectors) have nonlinear constraints that may affect the numerical performance. Perhaps some of the alleged differences are thought to arise in microcomputer implementations with small word size; how does accuracy vary with the number of significant digits?

In general, the lack of analytical leverage on these questions motivated the empirical study reported in this section. Herein we compare representations for several common vector rotation tasks: rotation of a vector, iterated identical rotations of a vector, and iterated different (random) rotations of a vector. Error is measured either as an error vector, or as the amount by which the resulting vector's length and direction differ from the correct values.

More precisely, a correct answer is determined by computation at full (double) precision. When the rotation task is to iterate the same rotation, the correct answer can be (and is) calculated for the final cumulative rotation rather than calculated by iteration. The application of a rotation to a vector is accomplished by two methods: by vector-matrix multiplication, as in eq. (8), and by Horn's quaternion-vector formula, eq. (6). The iteration of a rotation is accomplished in three ways: quaternion-quaternion multiplication, matrix-matrix multiplication, and the quaternion-vector formula. Thus the results of iterated rotation are represented as a quaternion, a matrix, and a (cumulatively rotated) vector, respectively.

The experiments use a variable precision implementation of all the relevant real, vector, quaternion, and matrix operations, offering a runtime choice of the number of bits of mantissa precision and the approximation type (rounding or truncation). The most basic operations are approximation, which rounds or truncates the mantissa of a C-language double (double-precision real floating point number) to the required precision, and arithmetic operations that approximate the operands, operate (at full precision), and approximate the result. From these basics are built vector cross and dot products, quaternion and matrix products, normalization routines, etc.
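A sketch of such an approximation primitive (interface and names are assumed here, not the author's implementation): reduce the mantissa of a double to a chosen number of bits, by rounding or truncation.

```python
import math

def approx(x, bits, mode="round"):
    # Keep only `bits` bits of mantissa, by rounding to nearest or by
    # truncation toward zero, preserving the binary exponent.
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)              # x = m * 2**e with 0.5 <= |m| < 1
    scaled = m * (1 << bits)
    if mode == "round":
        scaled = math.floor(scaled + 0.5) if scaled >= 0 else math.ceil(scaled - 0.5)
    else:                             # "trunc": drop low-order bits
        scaled = math.trunc(scaled)
    return math.ldexp(scaled / (1 << bits), e)
```

For example, at 10 bits truncation maps 1.001 to exactly 1.0, while rounding keeps the nearest representable value just above it.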

Vectors are specified or created randomly as (x, y, z) unit vectors at the highest machine precision. Rotations are specified or created randomly in the conic (phi, n) (rotation angle, rotation axis direction) representation, also at highest precision. Random unit vectors for input or rotation axes are not uniformly distributed in orientation space, being generated by normalizing three uniform variates to produce a unit vector. For random (phi, n) rotations, a random vector is produced for n, and a fourth uniform variate is normalized to the range [-pi, pi]. Exercise: what do these distributions of vectors and rotations look like in orientation and rotation space?
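A sketch of this generation scheme. The excerpt does not give the interval of the uniform variates, and the angle range is garbled in this copy, so [-1, 1] variates and a [-pi, pi] angle range are assumptions:

```python
import math
import random

def random_unit_vector():
    # Normalize three uniform variates, as in the text; note this is NOT
    # uniform on the sphere (the paper makes the same caveat).
    v = [random.uniform(-1.0, 1.0) for _ in range(3)]
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def random_conic_rotation():
    # (phi, n): a random axis plus a fourth uniform variate scaled to an
    # assumed angle range of [-pi, pi].
    return random.uniform(-math.pi, math.pi), random_unit_vector()
```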

The correct answer is computed at highest precision by applying a single rotation (by N*phi for an N-long iteration of the same rotation phi) around the rotation axis to the input vector, using eq. (6). Then, working at the lower experimental precision, the conic input representation is converted to quaternion and matrix representations by eqs. (2) and (7). Next, the experimental rotational task is carried out at the experimental lower precision. Finally, the error analysis consists of highest precision comparison of the low precision result with the high precision correct answer.

When a reduced-precision vector v' is obtained, its error is determined by comparing it to the full-precision, non-iteratively rotated vector vt. The error vector is v' - vt; the length error Delta-L and direction error Delta-D are

Delta-L = | ||v'|| - ||vt|| |     (13)

Delta-D = arccos(u' . ut)     (14)

where u is the unit vector in the direction of v.

In all the work here, ||v|| = 1. Directional errors are reported in radians, length errors in units of the (unit) length of the vector to be transformed. The experiments are reported below in a rather discursive style. Along the way a few side issues are addressed, and conclusions are drawn that affect the later choices of experimental parameters.
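Eqs. (13) and (14) translate directly; a minimal sketch, taking vectors as Python sequences:

```python
import math

def _norm(u):
    return math.sqrt(sum(c * c for c in u))

def length_error(v, vt):
    # Delta-L = | ||v|| - ||vt|| |   (eq. 13)
    return abs(_norm(v) - _norm(vt))

def direction_error(v, vt):
    # Delta-D = arccos of the dot product of the unit vectors   (eq. 14)
    dot = sum(a * b for a, b in zip(v, vt)) / (_norm(v) * _norm(vt))
    return math.acos(max(-1.0, min(1.0, dot)))  # clamp against roundoff
```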

4.2 Analytic Approaches

Let us begin to consider the conversion of conic (phi, n) rotation parameters to quaternion and matrix form. For simplicity we might assume that theta = phi/2 is known exactly. Assume that at n bits of precision a number x is represented to a maximum additive absolute error of delta = 2^-n, which is a relative error of delta/x. Ignoring second order terms, the product of two imprecise numbers (x + delta)(y + epsilon) has additive relative error of delta/x + epsilon/y. When looking up or computing a function, the error to first order in f(x + delta) is delta f'(x). Thus the error in computing the cosine of an errorful number x + delta is sin(x) delta. If, as in the implementation of this paper, the function is computed initially to high precision and then approximated, the further approximation may add another error of delta, for a total absolute error of (1 + sin(x)) delta. Of course the function computation could be set up to eliminate the last approximation error if the working precision were known in advance.

These considerations are a start at constructing formulae for the relative error in the conversion of conic to quaternion and matrix form via eqs. (2) and (7). However, missing is the constraint that n be a unit vector (that the matrix be orthonormal). Without this constraint the predictions of errors will be pessimistic. Interacting with the constraint is the usual simplistic assumption that the worst happens in every operation (i.e., errors never cancel). This usual assumption makes analysis possible but potentially irrelevant. In fact, errors in the matrix and quaternion components after conversion depend closely on the values of the inputs, and empirically do not exhibit predictable data-independent structure.

Turning to a slightly grosser examination of the structure of rotation operations, consider the implementation used in the work below for the two competing equations for quaternion-mediated rotation (eq. (6)) and matrix-mediated rotation (multiplication of a vector by the matrix in eq. (7)). The quaternion formula is the sum of three vectors (the original and two cross products), whereas the matrix formula is a matrix-vector product. Going through the code for this operation, we find that the x' component of the result vector (x', y', z') after applying the quaternion formula for quaternion (L, Lx, Ly, Lz) to the input vector (x, y, z) is computed by operations applied at their usual precedence in the following formula:

x' = x + 2L(Ly z - Lz y) - 2Lz(Lz x - Lx z) + 2Ly(Lx y - Ly x)     (15)

and similarly for the y' and z' components. Both (x, y, z) and the quaternion have unit norm, and L is a cosine.
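Written out as code, eq. (15) and its y', z' analogues are the three-vector sum v + 2L(cross) + 2(cross of the cross) described above (a sketch, not the author's code; symbol names follow the formula):

```python
import math

def rotate_by_quaternion(q, v):
    # Eq. (15) pattern for unit quaternion q = (L, Lx, Ly, Lz) applied to
    # v = (x, y, z): the input vector plus two scaled cross products.
    L, lx, ly, lz = q
    x, y, z = v
    cx = ly * z - lz * y          # first cross product, x component
    cy = lz * x - lx * z
    cz = lx * y - ly * x
    return (x + 2 * (L * cx + ly * cz - lz * cy),
            y + 2 * (L * cy + lz * cx - lx * cz),
            z + 2 * (L * cz + lx * cy - ly * cx))
```

Rotating (1, 0, 0) by pi/2 about the z axis, with q = (cos(pi/4), 0, 0, sin(pi/4)), yields (0, 1, 0) up to roundoff.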

The operations implementing the corresponding component by the matrix multiplication formula are

x' = M00 x + M01 y + M02 z     (16)

and similarly for the y' and z' components.

There is nothing in these two formulae to predict the significant statistical difference (especially under the more predictable truncation approximation) in the performance of these two methods (see, e.g., Fig. 6).

Pressing on, consider a still grosser description of the structure. Looking at eqs. (4) and (8), say that the error in quaternion and matrix (caused perhaps by truncation) is described as a scaling of the whole quaternion or matrix (by delta = (1 - k epsilon) for some constant k, if the truncation error is epsilon). In that case the matrix-transformed vector is in error by a factor of delta, or an additive error of epsilon, and the quaternion-transformed vector by a factor of delta^2 (because of the conjugation), or an additive factor of 2 epsilon (to first order). Iterated matrix or quaternion self-multiplication exponentially damps the individual matrix and quaternion elements, which otherwise vary sinusoidally as the iteration proceeds. Under iterated application of the scaled matrix, the direction of the resulting vector should be unaffected, but its length should shrink, decaying exponentially to zero.
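This scaling argument can be verified numerically (a sketch, not the paper's code; the scale factor delta = 1 - k epsilon is called s below): a matrix scaled by s scales the transformed vector's length by s, while conjugation with a quaternion scaled by s scales it by s squared.

```python
import math

s = 0.999                                 # the assumed uniform scale factor
c, t = math.cos(1.0), math.sin(1.0)       # rotation of 1 radian about z
M = [[c, -t, 0.0], [t, c, 0.0], [0.0, 0.0, 1.0]]
v = (1.0, 0.0, 0.0)

# matrix path: (s*M) v has length s * |v|
mv = [sum(s * M[i][j] * v[j] for j in range(3)) for i in range(3)]
len_mat = math.sqrt(sum(x * x for x in mv))

def qmul(a, b):
    # Hamilton product of quaternions (w, x, y, z)
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return [w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2]

# quaternion path: conjugation q v q* with q scaled by s gives length s**2
h = 0.5                                   # half-angle of the same rotation
q = [s * math.cos(h), 0.0, 0.0, s * math.sin(h)]
qc = [q[0], -q[1], -q[2], -q[3]]
r = qmul(qmul(q, [0.0, *v]), qc)
len_quat = math.sqrt(sum(x * x for x in r[1:]))
```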

In fact, this level of explanation suffices to explain the data resulting from a single rotation (Fig. 6): the quaternion errors are approximately twice the matrix errors for a single rotation under truncation. It also correctly predicts the behavior of matrix and quaternion elements under iteration of the same rotation (Fig. 3). It correctly predicts the form of error for the matrix transformation under iteration of the same rotation (Fig. 8), and forms part of the explanation for quaternion transformation error (see Section 4.4). Exercise: In Fig. 3 the period of the matrix element is half that of the quaternion element. This result is expected from inspection of eqs. (8) and (2), but looks inconsistent with eq. (7). Is it?

4.3 Single Rotations

First, some sample single rotations were applied:

v' = R ∘ v    (17)


Figure 3 About Here

Figure 3: The components of the quaternion and matrix are scaled and shifted sinusoids. Here, representative quaternion and matrix element amplitudes decay exponentially through time under iterative rotation with truncation. Under rounding the cumulative decay is negligible. Dotted line - matrix element [0][0]; solid line - quaternion element λx.

Figure 4 About Here

Figure 4: Rotating the unit vector in direction [1, 1, -1] around the axis [1, 1, 1] by varying amounts: (a) 0.1 rad, (b) 1.0 rad, (c) 1.7 rad. The errors are plotted logarithmically against the number of bits of mantissa precision under truncation. Hollow and dark squares - direction error with quaternion and matrix representations; star and cross - length error with quaternion and matrix representations. Variations in error magnitude and its rank ordering result from varying rotation magnitude. The largest error at 22 bits of precision is 0.0001 (the smallest representable number is about 10^-9).

Fig. 4 shows the results, comparing the conic (eq. (6)) with matrix rotation computation under truncation. In general, increasing the rotation magnitude does not increase error; the error magnitudes are not predictably ordered, and the error magnitude is at worst 10^-6 with 22 bits of mantissa precision.

Next, 100 individual random rotation - random vector pairs were generated, with the rotation applied to the vector at a single precision (here, as throughout the paper until Section 4.8, 10 bits) under both truncation (case RaTr) and rounding (case RaRd). (Statistics for several cases are collected in Table 2 in Section 4.6.)
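The paper's reduced-precision arithmetic package is not shown in this excerpt; the sketch below is a minimal stand-in of our own for chopping a double's mantissa to b bits under either truncation or rounding, built from the standard frexp/ldexp decomposition:

```python
import math

def reduce_precision(x, bits, mode="truncate"):
    """Keep only `bits` bits of mantissa, by truncation or rounding.
    A sketch of the kind of reduced-precision arithmetic used in the
    experiments; the paper's own implementation is not reproduced."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)           # x = m * 2**e, with 0.5 <= |m| < 1
    scaled = m * (1 << bits)       # shift mantissa into integer range
    if mode == "truncate":
        scaled = math.trunc(scaled)   # chop toward zero
    else:
        scaled = round(scaled)        # round to nearest
    return math.ldexp(scaled / (1 << bits), e)
```

To mimic the experiments, such a function would be applied after every primitive arithmetic operation; applied once, it shows the expected one-sided (truncation) versus two-sided (rounding) error behavior.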

The results are shown in two different styles in Figs. 5, 6, and 7. Fig. 5 displays length and direction errors of the 100 trials under truncation, indexed along the X axis although they are each independent. As predicted in Section 4.2, the matrix representation outperforms quaternions by a factor of about two. The length errors under truncation show the nonzero mean one would expect. The same data are presented in Fig. 6 as two projections of the three-dimensional error vectors, all oriented so that the true answer lies along the Z axis, facing down with its point at the origin, aiming in the -Z direction. The X-Y projection shows how the direction errors arise, and the X-Z projection shows how the length errors arise (with the approximate vectors all being too short). Exercise: Construct the rotation that points a given (x, y, z) vector along the Z axis, (a) from rotations about the (X, Y, Z) axes, (b) using quaternions.

Figure 5 About Here

Figure 5: Case RaTr. (a) Length errors of 100 different random (rotation, vector) pairs at 10 bits precision under truncation. Circles plot the quaternion-vector errors, Xs the matrix-vector errors. (b) As for (a), but direction errors.


[Figures 1, 2, and 3 appear here.]

Figure 6 About Here

Figure 6: The same case and data as the previous figure, plotted as error vectors, where the correct answer vector points to the origin and lies along the positive Z axis (see text). (a) Error vectors projected onto the X-Z plane. Circles plot the quaternion-vector errors, Xs the matrix-vector errors. (b) As for (a), but projection onto the X-Y plane.

Figure 7 About Here

Figure 7: Case RaRd. (a), (b): as the previous figure, with the same input rotations and vectors, but with operations under rounding to 10 bits of precision.

Fig. 7 is similar to Fig. 6, except that rounding is employed (note the scale change). Not only does the length error bias disappear, as expected, but the difference between the methods decreases, with the matrix implementation maintaining a slight advantage.

4.4 Iterated Rotations and Normalization

The second experiment is to iterate the same small rotation of φ around a particular direction, using three rotation composition methods. Quaternion multiplication and matrix multiplication produce iterated quaternion and matrix rotation representations; at the end of the iterations, the resulting rotation representation is applied to the input vector once to determine the rotated vector. A third method, called the vector method here, iteratively applies the identical incremental rotation to the vector at each iterative step, preserving only the rotated vector. There are thus two cases (quaternion and matrix) of composing rotation representations, and one case (vector) of composing the application of the rotation to the vector. Previous results indicate that the matrix method is both more accurate and more efficient for applying a single rotation, and empirically the results are much better with this choice as well, so rotations are implemented in the vector method by matrix-vector multiplication.
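The three methods can be spelled out in code. The sketch below is our own (exact double precision, unlike the paper's reduced-precision experiments, and with our own function names); in exact arithmetic all three agree:

```python
import math

def qmul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def qrot(q, v):
    """Apply unit quaternion q to v: v + 2w(L x v) + 2 L x (L x v)."""
    w, x, y, z = q
    cx, cy, cz = (y*v[2] - z*v[1], z*v[0] - x*v[2], x*v[1] - y*v[0])
    return (v[0] + 2*(w*cx + y*cz - z*cy),
            v[1] + 2*(w*cy + z*cx - x*cz),
            v[2] + 2*(w*cz + x*cy - y*cx))

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(M, v):
    return tuple(sum(M[i][j]*v[j] for j in range(3)) for i in range(3))

theta = 0.1                       # illustrative incremental rotation about Z
q1 = (math.cos(theta/2), 0.0, 0.0, math.sin(theta/2))
M1 = [[math.cos(theta), -math.sin(theta), 0.0],
      [math.sin(theta),  math.cos(theta), 0.0],
      [0.0, 0.0, 1.0]]
v0, n = (1.0, 0.0, 0.0), 50

q = (1.0, 0.0, 0.0, 0.0)          # quaternion method: compose, apply once
M = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
v_vec = v0                        # vector method: transform at every step
for _ in range(n):
    q = qmul(q1, q)
    M = matmul(M1, M)
    v_vec = matvec(M1, v_vec)
v_quat = qrot(q, v0)              # apply composed representations once
v_mat = matvec(M, v0)
```

The experiments below differ from this sketch only in that every arithmetic operation is carried out at artificially reduced precision, which is what separates the three results.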

These experiments were carried out under both truncation and rounding. Truncation is sometimes used to get at worst-case performance in a numerical analysis sense. Such an approach might be useful in analysis of a non-iterative computational scheme (such as a single rotation), but the cumulative scaling effects described in Section 4.2 overwhelm numerical precision effects. The errors engendered through iterative truncation turn out to be qualitatively similar to those that arise under rounding. These results are thus good for geometric intuition about the methods, and illustrate the importance of normalization. Fig. 8(a) and (b) show the (shocking) errors that result if no normalization takes place during the iterated rotation composition, and if the resulting non-normalized product quaternion or matrix rotation is applied to the input vector. This is case IS0 (Iterative Sequence, 0 normalizations). Without normalization, all the representations show cumulative errors that are easy to understand.


[Figures 5(a), 5(b), 6(a), and 6(b) appear here.]

Figure 8 About Here

Figure 8: Case IS0. Iterative transformation of the vector [1, 1, -1] about the axis [1, 1, 1] by 1 rad, 200 times. Calculations done with ten bits of mantissa precision under truncation; no normalization of the final representation before applying it to the vector. This choice of input parameters results in a representative performance, but not a normative one. (a) Result vector error projected on the X-Z plane. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. The matrix errors are a very tight helix up the Z axis, with the vector errors only slightly larger. (b) As for (a), but showing the X-Y projection. The matrix and vector errors project to the small dark blob at the origin.

First consider the vector representation. It is acted on at each iteration by an identical but errorful transform of eq. (6). We see it in both views spiraling away from the correct initial value, its length shrinking and its direction error increasing. The performance of the vector method might be improved by a factor of two on average by using the matrix transformation at every step (considering the arguments in Section 4.2 and the results shown in Fig. 6). Another important aspect of this method is its variability in performance for different rotations, arising from the variability in the accuracy of the first calculated rotation (again, Fig. 6).

As predicted in Section 4.2, the matrix representation is affected in its scaling property but not significantly in its rotational properties: there is effectively no directional error, but the result shrinks to less than half its original length in 200 iterations.

The quaternion representation acts, using eq. (6), by adding two cross-product vectors to the original input vector; as the quaternion components become increasingly systematically errorful, they affect the result in the interesting way shown in the figure. One effect is the decaying magnitude of the quaternion; the other is that, as the cumulative rotation φ increases from zero, the sin(φ/2) weighting of Λ, the vector part of the quaternion, starts at zero and returns to zero with a period of 2π/t, where t is the truncated value of the incremental rotation angle. Thus with that period the effective rotation error varies sinusoidally, returning to zero as the magnitude of Λ does.

The same data are plotted for 400 iterations this time in Fig. 9. First consider length errors. This presentation emphasizes the exponential decay of the answer's length to zero with the matrix and vector representations. As the quaternion's magnitude ultimately decays to zero, the result vector is unaffected by the rotation and remains equal to the input vector, so its length error stabilizes at zero. The short-period sinusoidal character of the length error arises from the arguments of the last paragraph. The initially increasing envelope arises from the increasing inaccuracy caused by the truncation; the ultimate exponential decay of the envelope arises from the shrinking of the quaternion components.

Considering direction errors, the matrix and conic representations have (locally) linearly increasing errors as they spiral away from the true answer direction. (The linear increase cannot be maintained forever on the orientation sphere, of course.)


[Figures 7(a), 7(b), 8(a), and 8(b) appear here.]

Figure 9 About Here

Figure 9: Case IS0 for 400 iterations (see text). (a) Length error of result vector. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. The vector and matrix errors coincide. (b) As for (a), but showing direction error for the resulting vector. The vector and matrix errors are very close to zero.

Figure 10 About Here

Figure 10: Case IS0 under rounding instead of truncation. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. Though the forms of the errors are similar to the truncation case, their relative magnitudes are more equal. (a) Result vector projected on the X-Z plane. (b) As for (a), but showing the X-Y projection.

The quaternion method's direction error ultimately becomes purely periodic as the answer vector becomes static. The direction error as a function of iteration number t and angle r between the rotation axis and the input vector is then of the form ((r sin t)² + (r - r cos t)²)^(1/2), the length of a circular chord from a fixed point on the circle to a point moving around the circle at constant velocity. Before that time the direction error exhibits a more complex two-humped form; this arises from the two separate cross-product components of the vector increment in eq. (6). Exercise: Would there be any difference in direction and length errors if the rotation were accomplished by conjugation (eq. (4)) instead of eq. (6)?
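The chord form can be checked directly: ((r sin t)² + (r - r cos t)²)^(1/2) = r(2 - 2 cos t)^(1/2) = 2r|sin(t/2)|, which is the familiar chord-length identity and is periodic in t as claimed. A one-function sketch (our own) makes the check numerical:

```python
import math

# The claimed error form is the length of the chord of a circle of
# radius r subtending angle t; algebraically it reduces to 2 r |sin(t/2)|.
def chord_error(r, t):
    return math.hypot(r * math.sin(t), r - r * math.cos(t))
```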

Next in this sequence, the last two demonstrations are repeated under rounding, not truncation. The expectation is that, due to the better approximation and possible error cancellations arising from high and low approximations, the errors in the representations will be less, but that systematic errors will still exist. The results (Figs. 10, 11) bear this out; the errors often seem like much smaller, noisy versions of the truncation errors. In Fig. 11 the quaternion errors display the fine two-humped structure longer, since it is not overwhelmed as quickly by shrinking quaternion magnitude. Rounding has a dramatically good effect on the systematic shrinking of matrices and quaternions under iterative self-multiplication.

So far we have, purely for intuition, dealt with the pathological case in which no normalization of rotation representations is performed. The arguably pathological case of approximation through truncation yields insight about the structure of systematic approximation errors in an iterative scheme. From here on we abandon truncation approximation (except in Section 4.7).

Figure 11 About Here

Figure 11: Data of the previous figure, showing length and direction errors. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of result vector. (b) Direction error.


[Figures 9(a), 9(b), 10(a), and 10(b) appear here.]

Figure 12 About Here

Figure 12: Case IS1. Rotating the vector [1, 1, -1] by 1 radian increments about the axis [1, 1, 1], with 10 bits precision under rounding, normalizing the quaternion and matrix representations prior to application to the input vector, and normalizing the final vector representation to its initial length. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. The vector representation has lower variance, and the quaternion exhibits marked periodicity. Only the first 100 iterations are shown; the rest are similar. (b) Direction error.

We now look at two different styles of normalization. In the first, no normalization is performed on intermediate representations, but the final quaternion, matrix, and vector are normalized. In the second, the intermediate quaternion, matrix, and vector are normalized at each step. Normalization of the matrix and quaternion is not dependent on the input data, but the length of the input vector must be computed and remembered for the vector representation to be normalized. This fact might be significant if many vectors are being treated at once.

Fig. 12(a) shows the length error when the final quaternion and matrix are normalized prior to application to the input vector, and the incrementally generated vector representation is normalized to its original length (here known to be unity). This case is called IS1, for Iterative Sequence, 1 normalization. The matrix normalization used is the one crossproduct form; when only one normalization is being performed, the three types of normalization perform virtually identically. The computations were performed under rounding to the usual ten bits of precision. If its initial length is not remembered, the vector representation cannot be normalized, and its error is the same as in the non-normalized case of Fig. 11, showing a linear error accumulation.

Fig. 12(b) compares direction error for the three representations, normalized as in Fig. 12(a). The comparative performance here is certainly not so clear-cut, and the error accumulations resemble random walks. Alternatively, the performance could depend more on the particular choice of input data (rotation and input vector) than on numerical properties of the methods.

Finally, a natural question concerns the effects of normalization applied after every rotation composition. This is case ISN (N normalizations). Fig. 13 shows the results for length and direction. As might be expected for length error, the normalized vector has the best performance (presumably as accurate as machine precision allows). The quaternion result has a systematically negative mean, higher variance, and periodicity. The matrix length error displayed in Fig. 13(a) results from the no crossproduct form of matrix normalization. The trend is not an illusion: carried out to 400 iterations, the length error shows a marked systematic increase in the negative direction. On the other hand, iteration of the one crossproduct normalization resulted in a factor of two worsening of the magnitude of the direction errors for the matrix representation. Repeated normalizations seem to have adversely affected the accuracy of the vector representation after many


[Figures 11(a), 11(b), 12(a), and 12(b) appear here.]

Figure 13 About Here

Figure 13: Case ISN. As for the previous figure, but normalizing the rotation representation at each iteration, using the no crossproduct matrix normalization, which improved direction errors but worsened length errors over the one crossproduct form. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. (b) Direction error.

Figure 14 About Here

Figure 14: Case RS1. As for case IS1 (no normalization of intermediate representations), but using a sequence of random rotations. There is no difference in performance between the different matrix normalizations; the no crossproduct matrix normalization is shown. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector (first 100 values; the rest are similar). (b) Direction error. The rather confusing results at least have some clear qualitative differences.

iterations, but more work is needed to tell if this is a repeatable effect. In any event, it seems clear that the repeated normalizations are certainly not improving matters much in this case over case IS1.

The main conclusion here is that the cost of normalizing quaternion or matrix representations after every rotation in a repeated sequence may not be repaid in results, and in fact may be actively harmful. The vector representation seems to do best of the three on length error and worst on direction error. More work (analytic or experimental) would be needed to clarify the performance ranks, or even to recommend or disqualify a representation for this task.

4.5 Concatenated Random Rotations

Here a sequence of random rotations is generated as described earlier, analogous to the sequence of identical rotations in the last experiment. They are applied cumulatively to an input vector. In each case computation is with 10 bits precision under rounding. The cases of a single final normalization and of normalizing every intermediate representation (called cases RS1 and RSN) were investigated, with both the no- and one crossproduct forms. The results, comparable to those in cases IS1 and ISN (Figs. 12 and 13), are shown in Figs. 14 and 15.
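The paper's random-rotation generator is described in an earlier section not reproduced in this excerpt. As a hedged stand-in, the sketch below draws uniform random unit quaternions (four normalized Gaussian deviates, a standard recipe) and concatenates a sequence of them:

```python
import math
import random

def qmul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def random_rotation(rng):
    """Uniform random rotation as a unit quaternion: normalize four
    independent Gaussian deviates (a standard construction; not
    necessarily the paper's own generator)."""
    q = [rng.gauss(0.0, 1.0) for _ in range(4)]
    norm = math.sqrt(sum(c * c for c in q))
    return tuple(c / norm for c in q)

rng = random.Random(1989)        # fixed seed for repeatability
q = (1.0, 0.0, 0.0, 0.0)
for _ in range(100):             # concatenate 100 random rotations
    q = qmul(random_rotation(rng), q)
```

In double precision the concatenated product stays a unit quaternion to within rounding; the experiments repeat exactly this loop at 10 bits, where the drift becomes visible.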

The results indicate that the vector representation remains better than the other two for length errors, but for direction errors the results are unclear. It is hard to draw clear distinctions either between the normalization methods or between the representations. Again the form of matrix normalization has some annoying effects, which motivate the next section.


[Figures 13(a), 13(b), 14(a), and 14(b) appear here.]

Figure 15 About Here

Figure 15: Case RSN. As for case RS1, but normalizing all intermediate representations. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error, using the one crossproduct matrix normalization. (b) Direction error, using the one crossproduct form. (c) Direction error, using the no crossproduct form.

Case        mu-vec     mu-mat     mu-quat    sigma-vec  sigma-mat  sigma-quat
RaTr   L         -     0.006225   0.005026        -     0.001181   0.003369
       D         -     0.001787   0.004578        -     0.001094   0.002307
RaRd   L         -    -0.000310  -0.000225        -     0.001087   0.001263
       D         -     0.000965   0.001180        -     0.000542   0.000759
IS1    L    0.000276  -0.000260  -0.000262   0.000466   0.000857   0.001648
(100)  D    0.003694   0.004063   0.001435   0.001705   0.002298   0.001123
ISN    L    0.000137  -0.001772  -0.000811   0.000533   0.001106   0.001312
(100)  D    0.005646   0.004844   0.001606   0.002786   0.002872   0.001045
RS1    L   -0.000252  -0.000162  -0.000157   0.000521   0.001612   0.002022
(100)  D    0.003710   0.005854   0.006065   0.002150   0.003686   0.006065
RSN    L    0.000164  -0.000602  -0.001097   0.000473   0.000703   0.001666
(100)  D    0.002269   0.009830   0.005960   0.000874   0.004230   0.002236
       D    0.002269   0.005187   0.005960   0.000874   0.002248   0.002236

Table 2: Statistics for length (L) and direction (D) errors for some of the cases illustrated in this section. Cases RaTr and RaRd are the independent random rotations; IS1 and ISN are the iterative sequences (one normalization, and normalization of each intermediate representation); RS1 and RSN are the random sequences. Only the first 100 data values are used from the iterated rotations, to diminish the effect of longer-term trends. The second RSN D line is with no crossproduct normalization (which affects only the matrix representation).

4.6 Statistical Summary

Table 2 summarizes some of the data from the cases so far

4.7 Matrix Normalization

Some experiments explored the performance of the three types of matrix normalization (no, one, or two cross products) in the context of iterated identical and random rotations.
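The three normalization forms are defined earlier in the paper, in a section not reproduced here; the versions below are plausible reconstructions of our own, not the paper's code. They rescale rows only, rebuild one row by a cross product, or rebuild two:

```python
import math

# Hedged reconstructions of the three matrix normalization forms:
#   no crossproduct   - rescale each row to unit length
#   one crossproduct  - unitize rows 0 and 1, rebuild row 2 = r0 x r1
#   two crossproducts - unitize row 0, r2 = unit(r0 x r1), r1 = r2 x r0

def _unit(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize_none(M):
    return [list(_unit(row)) for row in M]

def normalize_one(M):
    r0, r1 = _unit(M[0]), _unit(M[1])
    return [list(r0), list(r1), list(_cross(r0, r1))]

def normalize_two(M):
    r0 = _unit(M[0])
    r2 = _unit(_cross(r0, M[1]))
    r1 = _cross(r2, r0)
    return [list(r0), list(r1), list(r2)]
```

Note the grading of strength: the no crossproduct form leaves the rows non-orthogonal, the one crossproduct form fixes only the third row, and the two crossproduct form returns a fully orthonormal matrix.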

The experiments consist of two iterated identical rotations (Figs. 16, 17) under rounding, and two iterated random rotation sequences under truncation and rounding. Fig. 18 gives the direction errors for two sequences of random rotations applied to an input


[Figures 15(a), 15(b), and 15(c) appear here.]

Figure 16 About Here

Figure 16: Iterated identical rotations of the unit vector in direction (1, 1, -1) by 1 radian about the axis (1, 1, 1), computed to 10 bits under rounding. Solid line - no cross products; dotted line - one cross product; dashed line - two cross products. (a) Length errors. (b) Direction errors.

Figure 17 About Here

Figure 17: As for the previous figure, but the rotation is of the unit vector in direction (1, 2, 3) by iterated rotations of 2 radians about the axis (3, -2, -1).

vector. In no case is there a significant difference between the performances on the length error at 200 iterations, but the one- or two crossproduct forms significantly outperform the no crossproduct form over 400 iterations, since the latter exhibits a systematically increasing negative length error and the former do not.

For direction error, few conclusions can be drawn about relative performance under iterated identical rotation sequences. It is perhaps surprising that the no crossproduct form usually performs here at least as well as the other two over a small number of iterations, and that for random sequences there is no clear winner among normalization types, even approximating with truncation. Thus the symptoms noticed in the last two sections, involving long-term relative effects of the normalization form, were not noticed over 200 iterations. The one- and two crossproduct forms may be more stable over long iteration, but the issue of optimal normalization remains open.

4.8 Average Error of Concatenated Random Rotations

Which representation is best, or worst? To wind up this overlong exercise in trying to derive structure from noise, let us consider average behavior for the task of random rotation sequences (perhaps of more general interest than iterated identical rotations). Technical decisions on normalization are based on the conclusions of previous sections: approximation is by rounding, with normalization only after the last rotation. Matrix normalization is done with the one crossproduct method. Each run is a single experiment like that of Section 4.5, only performed (with the same data) at a number of different precisions.

Such a set of runs yields length and direction error surfaces plotted against the precision

Figure 18 About Here

Figure 18: Direction errors only, for a sequence of random rotations applied to an input vector. Legend as in the previous two figures. (a) Truncation to 10 bits. (b) Rounding to 10 bits.


[Figures 16(a) through 19(d) appear here.]

Rep        | Repeated Rot               | Random Seq
Vec(Mat)   | (9N+17)*, (6N+13)+, 1 sc   | 26N*, 19N+, N sc
Vec(Quat)  | (18N+4)*, (12N)+, 1 sc     | 22N*, 12N+, N sc
Quat       | (16N+22)*, (12N+12)+, 1 sc | (20N+18)*, (12N+12)+, N sc
Matrix     | (27N+26)*, (18N+19)+, 1 sc | (44N+9)*, (31N+6)+, N sc

Table 4: Operation counts for N-long rotation sequences. The long form of matrix multiplication is used here. Vec(Mat) is the vector representation with intermediate transformation by matrices, Vec(Quat) with transformation by the quaternion method. Multiplications, additions, and sine-cosine pairs are denoted by *, +, and sc.

matrix transformation), since matrix-vector multiplication costs one third of matrix-matrix multiplication.

4.10 Conclusions and Questions

The conclusions we can draw at this point are the following:

1. Numerical simulation of variable-precision arithmetic has yielded some tentative conclusions, but in many interesting cases they are rather delicate and depend on niceties of the implementation. Accurate performance predictions may only be possible with faithful simulation of the particular hardware and software at issue.

2. Practical differences in numerical accuracy tend to disappear at reasonable precisions. After 200 iterations of a rotation, 53 bits of mantissa precision (the double representation in the C language) yields no error to 10 decimal places. With 24 bits (the float representation), effects were noticeable in the 5th to 7th decimal place with all methods.

3. Normalization is vital before quaternion or matrix representations are applied to vectors, but is not as important during the composition of representations, and the extra numerical computation of frequent normalization may actually decrease accuracy in the final result.

4. The vector (with matrix transformation) and matrix representations for rotation sequences seem to outperform the quaternion for accuracy.

5. The vector (with matrix transformation) representation for rotation sequences, with a single normalization of vector length at the end, is most computationally efficient and has the least length error on the average.

6. Averaged over many trials, the difference in accuracy of the three methods amounts only to about a single bit of precision for a small number of rotation compositions (say 25 or fewer).

7. Matrices are the least computationally efficient on rotation sequences.



There are also some remaining basic questions:

1. What is a reasonable analytic approach to the numerical accuracy problem for interesting rotational tasks (say, sequences of random rotations under rounding)?

2. How should optimal matrix or quaternion normalization methods be defined and constructed?

5 Discussion

The area of robust computational geometry is becoming popular, with an annual international conference and increasingly sophisticated theoretical formulations. Very little of that work (if any) is related to rotation algorithms. Symbolic algebra systems can help with analysis [4]. A good treatment of computer arithmetic and rounding appears in [15], with an application to detecting line intersection. Serious qualitative representational problems arise when issues of topology (line connectivity) are at stake, and much of the work is on computing line intersection accurately with finite precision arithmetic [19]. For instance, Milenkovic's analysis is based on facts about line intersection, such as the number of extra bits that are necessary to guarantee a specific precision of answer. Some of these facts are surprising: it takes at least three times the input precision to calculate a line intersection location to the same precision [11]. Robust polygon and polyhedron intersection algorithms, as well as various line-sweeping algorithms, are based on accurate line intersection algorithms [8].

Several points of comparison between rotation parameterizations and representations have been raised in the preceding sections. It is perhaps worth repeating that little evidence for dramatic differences in numerical accuracy has emerged, though the evidence might tend to favor matrices slightly and to condemn the vector representation.

One of the common applications of rotational transforms is to affect several vectors at once, either in the context of transforming a set of points around (as in graphics) or in handling coordinate systems represented as n points (n - 1 of them at points at infinity representing axis directions, and one representing the origin of coordinates, in a homogeneous coordinate system). In the first case the repeated application of the same transform will be more economical if the work is done in the matrix representation. In the second the same may well be true, but of course the whole point of homogeneous coordinates is to make the projective, a fortiori affine, transforms into linear ones implemented with matrix multiplication. This means, for example, that coordinate systems, rigid transformations including translation and rotation, and point projective transforms (simple camera models) can all be represented as matrices and composed with matrix multiplication, and that the operation of such transforms on vectors is simply matrix-vector multiplication. In this context homogeneous coordinates by definition require the matrix representation.
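The homogeneous-coordinate point can be made concrete: a rotation and a translation each become a 4×4 matrix, and their composition is a single matrix applied to points by matrix-vector multiplication. A small sketch of our own, with illustrative values:

```python
import math

# Rotation and translation both become 4x4 matrices in homogeneous
# coordinates, so a rigid transform is one matrix product.

def rotz4(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0, 0.0], [s, c, 0.0, 0.0],
            [0.0, 0.0, 1.0, 0.0], [0.0, 0.0, 0.0, 1.0]]

def trans4(dx, dy, dz):
    return [[1.0, 0.0, 0.0, dx], [0.0, 1.0, 0.0, dy],
            [0.0, 0.0, 1.0, dz], [0.0, 0.0, 0.0, 1.0]]

def matmul4(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply4(M, p):
    ph = (p[0], p[1], p[2], 1.0)      # lift the point to homogeneous form
    return tuple(sum(M[i][j] * ph[j] for j in range(4)) for i in range(3))

# rotate 90 degrees about Z, then translate by (1, 0, 0), as one matrix
T = matmul4(trans4(1.0, 0.0, 0.0), rotz4(math.pi / 2))
```

Applying T to a point performs both operations at once, which is exactly the economy the text describes for transforming many points or whole coordinate frames.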

If the rotation at issue is a general one (say, specified in conic (φ, n) parameters), then conversion either to the quaternion or to the matrix form is easy. Thus the representations in


this case do not dictate the semantics, or how one wants to think about the rotation. If the rotation is specified in terms of rotations about (X, Y, Z) coordinate axes, then the matrix representation is easily and robustly constructed.

Real-time applications involving concatenating rotations and applying them to a small number of vectors (perhaps one: the position of a robot hand) need detailed analysis to see which representation is the most efficient. Similarly for more complex geometric calculations.

A common application of rotation representations in vision is to express the motion of the observer. Often the rotational motions, for mechanical or mathematical reasons, are expressed as rotations about some fixed axes, which in practice are usually orthogonal. Examples are the motions induced by pan and tilt camera platforms, or the expression of egomotion in terms of rotations about the observer's local X, Y, Z axes. In this case the matrix representation is the clear choice, to maximize efficiency and intuition.

Reconstructing the rotation that transformed known points is a useful vision task that requires fitting data to a model of the transformation. The straightforward route, followed since Roberts' thesis [17], is to do a linear least-squares fit to the transform's matrix representation. This approach is perhaps risky, since the matrix elements are not independent, and the error being minimized thus may not be the one desired. The alternative of using conic parameters has been successfully used [12] and may well be a better idea. Recently, more sophisticated Kalman filter methods have been used to derive motion from point matches through time [5]. Model-matching with constraints allows a model to be matched to a class of objects, and promising approaches are emerging that use hybrid methods: symbolic algebra implementing polynomial triangulation to get explicit solutions for known constraints, followed by root-finding and sophisticated minimization techniques to search for minimum-error solutions [13,14].

A less common but important set of applications is to reason about and manipulate rotations themselves. For example, one may want to sample rotation space evenly [9], or to create smoothly interpolated paths in rotation space between a given set of rotations [16]. In these cases the Euler-Rodrigues parameters are the most convenient.

Another example is the use since 1932 of quaternions in the neurophysiological literature to describe eye kinematics [22]. Models of human three-dimensional oculomotor performance, especially for the vestibulo-ocular reflex and feedback models of saccades, involve computations on rotations that are much more stable when carried out in conic or Euler-Rodrigues parameters using quaternion arithmetic. Subtractive feedback laws adequate for simple (left or right) eye movements do not generalize to arbitrary rotational axes. The quaternion model uses multiplicative feedback, explains human performance better than similar models based on rotation matrices, and leads to explicit predictions, such as a four-channel saccade computation [21].


6 Bibliography

[1] S. L. Altmann. Rotations, Quaternions, and Double Groups. Clarendon Press, Oxford, 1986.

[2] C. M. Brown. Gaze controls with interactions and delays. In DARPA Image Understanding Workshop (submitted IEEE-TSMC), pages 200-218, May 1989.

[3] C. M. Brown. Prediction in gaze and saccade control. Technical Report OUEL 1771/89 (UR TR-295), Oxford University Dept. Engineering Science (U. Rochester Comp. Sci. Dept.), May 1989.

[4] C. I. Connolly, D. Kapur, J. L. Mundy, and R. Weiss. GeoMeter: A system for modeling and algebraic manipulation. In Proceedings, DARPA Image Understanding Workshop, pages 797-804, May 1989.

[5] O. D. Faugeras, F. Lustman, and G. Toscani. Motion and structure from motion from point and line matches. In Proceedings, International Conference on Computer Vision, pages 25-34, June 1987.

[6] J. D. Foley and A. Van Dam. Fundamentals of Interactive Computer Graphics. Addison-Wesley, 1982.

[7] W. R. Hamilton. On quaternions; or on a new system of imaginaries in algebra. Philosophical Magazine, 3rd Ser., 25:489-495, 1844.

[8] C. Hoffmann and J. Hopcroft. Towards implementing robust geometric computations. In Proceedings, ACM Symposium on Computational Geometry, 1988.

[9] Berthold K. P. Horn. Robot Vision. MIT Press / McGraw-Hill, 1986.

[10] K. Kanatani. Group Theory in Computer Vision. Springer Verlag, 1989.

[11] V. J. Milenkovic. Robust geometric computations for vision and robotics. In Proceedings, DARPA Image Understanding Workshop, pages 764-773, May 1989.

[12] J. L. Mundy. Private communication, August 1989.

[13] J. L. Mundy. The application of symbolic algebra techniques to object modeling. Lecture, University of Oxford, August 1989.

[14] J. L. Mundy. Symbolic representation of object models. In preparation, August 1989.

[15] T. Ortmann, G. Thiemt, and C. Ullrich. Numerical stability of geometric algorithms. In Proceedings, ACM Symposium on Computational Geometry, 1987.

[16] K. S. Roberts, G. Bishop, and S. K. Ganapathy. Smooth interpolation of rotational motion. In Computer Vision and Pattern Recognition 1988, pages 724-729, June 1988.

[17] L. G. Roberts. Machine perception of three-dimensional solids. In J. T. Tippett et al., editors, Optical and Electro-Optical Information Processing, pages 159-197. MIT Press, 1968.

[18] O. Rodrigues. Des lois géométriques qui régissent les déplacements d'un système solide dans l'espace, et de la variation des coordonnées provenant de ses déplacements considérés indépendamment des causes qui peuvent les produire. Journal de Mathématiques Pures et Appliquées, 5:380-440, 1840.

[19] D. Salesin, J. Stolfi, and L. Guibas. Epsilon geometry: building robust algorithms from imprecise computations. In Proceedings, ACM Symposium on Computational Geometry, 1989.

[20] R. Taylor. Planning and execution of straight line manipulator trajectories. In J. M. Brady, J. M. Hollerbach, T. L. Johnson, T. Lozano-Perez, and M. T. Mason, editors, Robot Motion: Planning and Control. MIT Press, 1982.

[21] D. Tweed and T. Vilis. Implications of rotational kinematics for the oculomotor system in three dimensions. Journal of Neurophysiology, 58(4):832-849, 1987.

[22] G. Westheimer. Kinematics for the eye. Journal of the Optical Society of America, 47:967-974, 1932.



and (7). Next, the experimental rotational task is carried out at the experimental lower precision. Finally, the error analysis consists of a highest-precision comparison of the low-precision result with the high-precision correct answer.

When a reduced-precision vector v' is obtained, its error is determined by comparing it to the full-precision, non-iteratively rotated vector v. The error vector is v' - v; the length error ΔL and direction error ΔD are

ΔL = | ||v'|| - ||v|| |   (13)

ΔD = arccos(v̂' · v̂)   (14)

where v̂ is the unit vector in the direction of v.

In all the work here ||v|| = 1. Directional errors are reported in radians, length errors in units of the (unit) length of the vector to be transformed. The experiments are reported below in a rather discursive style. Along the way a few side issues are addressed, and conclusions are drawn that affect the later choices of experimental parameters.
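The two error measures of eqs. (13) and (14) are straightforward to compute. A minimal sketch (function names are mine, not the paper's):

```python
import math

def _norm(v):
    return math.sqrt(sum(x * x for x in v))

def length_err(v_approx, v_true):
    # eq (13): absolute difference of lengths
    return abs(_norm(v_approx) - _norm(v_true))

def direction_err(v_approx, v_true):
    # eq (14): angle between the two directions, in radians
    dot = sum(a * b for a, b in zip(v_approx, v_true))
    cos_angle = dot / (_norm(v_approx) * _norm(v_true))
    # clamp guards arccos against tiny roundoff outside [-1, 1]
    return math.acos(max(-1.0, min(1.0, cos_angle)))
```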

4.2 Analytic Approaches

Let us begin to consider the conversion of conic (φ, n) rotation parameters to quaternion and matrix form. For simplicity we might assume that θ = φ/2 is known exactly. Assume that at n bits of precision a number x is represented to a maximum additive absolute error of δ = 2^-n, which is a relative error of δ/x. Ignoring second-order terms, the product of two imprecise numbers (x + δ)(y + ε) has additive relative error of δ/x + ε/y. When looking up or computing a function, the error to first order in f(x + δ) is δf'(x). Thus the error in computing the cosine of an errorful number x + δ is sin(x)δ. If, as in the implementation of this paper, the function is computed initially to high precision and then approximated, the further approximation may add another error of δ, for a total absolute error of (1 + sin(x))δ. Of course the function computation could be set up to eliminate the last approximation error if the working precision were known in advance.
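These first-order rules are easy to sanity-check numerically. The particular x, y, and perturbation values below are arbitrary choices of mine, not from the paper:

```python
import math

x, y = 1.7, 2.9
dx, dy = 1e-6, 2e-6

# product rule: absolute error ~ x*dy + y*dx (relative error ~ dx/x + dy/y)
prod_err_exact = (x + dx) * (y + dy) - x * y
prod_err_first = x * dy + y * dx          # first-order prediction

# function rule: error of f(x + dx) ~ f'(x)*dx; for cos, magnitude sin(x)*dx
cos_err_exact = math.cos(x + dx) - math.cos(x)
cos_err_first = -math.sin(x) * dx         # first-order prediction
```

The discrepancies are of second order (about dx*dy and dx², respectively), which is why the text can ignore them.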

These considerations are a start at constructing formulae for the relative error in the conversion of conic to quaternion and matrix form via eqs. (2) and (7). However, missing is the constraint that n be a unit vector (that the matrix be orthonormal). Without this constraint the predictions of errors will be pessimistic. Interacting with the constraint is the usual simplistic assumption that the worst happens in every operation (i.e., errors never cancel). This usual assumption makes analysis possible but potentially irrelevant. In fact, errors in the matrix and quaternion components after conversion depend closely on the values of the inputs, and empirically do not exhibit predictable, data-independent structure.

Turning to a slightly grosser examination of the structure of rotation operations, consider the implementation used in the work below for the two competing equations for quaternion-mediated rotation (eq. (6)) and matrix-mediated rotation (multiplication of a vector by the matrix in eq. (7)). The quaternion formula is the sum of three vectors (the


original and two cross products), whereas the matrix formula is a matrix-vector product. Going through the code for this operation, we find that the x' component of the result vector (x', y', z') after applying the quaternion formula for quaternion (λ, Λx, Λy, Λz) to the input vector (x, y, z) is computed by operations applied at their usual precedence in the following formula:

x' = x + 2λ(Λy z - Λz y) - 2Λz(Λz x - Λx z) + 2Λy(Λx y - Λy x)   (15)

and similarly for the y' and z' components. Both (x, y, z) and Λ are unit vectors, and λ is a cosine.
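Eq. (15) is just the x component of v' = v + 2λ(Λ × v) + 2Λ × (Λ × v), the quaternion rotation formula of eq. (6). A runnable sketch (function names mine):

```python
import math

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def quat_rotate(lam, L, v):
    # v' = v + 2*lam*(L x v) + 2*(L x (L x v)): the original and two cross products
    t = cross(L, v)
    u = cross(L, t)
    return [v[i] + 2 * lam * t[i] + 2 * u[i] for i in range(3)]

# quaternion for a rotation by phi about the unit axis n
phi, n = 1.0, [0.0, 0.0, 1.0]
lam = math.cos(phi / 2)               # scalar part, a cosine
L = [math.sin(phi / 2) * c for c in n]  # vector part
```

For this z-axis example the result of rotating (1, 0, 0) should be (cos φ, sin φ, 0), which makes the formula easy to verify.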

The operations implementing the corresponding component by the matrix multiplication formula are

x' = M11 x + M12 y + M13 z   (16)

and similarly for the y' and z' components.

There is nothing in these two formulae to predict the significant statistical difference (especially under the more predictable truncation approximation) in the performance of these two methods (see, e.g., Fig. 6).

Pressing on, consider a still grosser description of the structure. Looking at eqs. (4) and (8), say that the error in quaternion and matrix (caused perhaps by truncation) is described as a scaling of the whole quaternion or matrix (by σ = (1 - kε) for some constant k, if truncation error is ε). In that case the matrix-transformed vector is in error by a factor of σ, or an additive error of ε, and the quaternion-transformed vector by a factor of σ² (because of the conjugation), or an additive factor of 2ε (to first order). Iterated matrix or quaternion self-multiplication exponentially damps the individual matrix and quaternion elements, which otherwise vary sinusoidally as the iteration proceeds. Under iterated application of the scaled matrix, the direction of the resulting vector should be unaffected, but its length should shrink, decaying exponentially to zero.
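The scaled-matrix argument can be checked directly: an orthogonal matrix uniformly scaled by (1 - ε) leaves direction alone and shrinks length by exactly (1 - ε) per application. Here ε is an artificial stand-in for truncation error, not the paper's reduced-precision experiment:

```python
import math

def mat_vec(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

eps = 1e-3                                  # artificial stand-in scaling error
c, s = math.cos(1.0), math.sin(1.0)
rot = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]     # rotation about Z
m = [[(1.0 - eps) * x for x in row] for row in rot]    # uniformly scaled rotation

v = [1.0, 0.0, 0.0]
for _ in range(200):                        # iterated application
    v = mat_vec(m, v)

length = math.sqrt(sum(x * x for x in v))
# direction is untouched; length decays exponentially as (1 - eps)**200
```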

In fact, this level of explanation suffices to explain the data resulting from a single rotation (Fig. 6): the quaternion errors are approximately twice the matrix errors for a single rotation under truncation. It also correctly predicts the behavior of matrix and quaternion elements under iteration of the same rotation (Fig. 3). It correctly predicts the form of error for the matrix transformation under iteration of the same rotation (Fig. 8), and forms part of the explanation for quaternion transformation error (see Section 4.4). Exercise: In Fig. 3 the period of the matrix element is half that of the quaternion element. This result is expected from inspection of eqs. (8, 2), but looks inconsistent with eq. (7). Is it?

4.3 Single Rotations

First, some sample single rotations were applied:

v' = R v   (17)


Figure 3 About Here

Figure 3: The components of the quaternion and matrix are scaled and shifted sinusoids. Here, representative quaternion and matrix elements' amplitudes decay exponentially through time under iterative rotation with truncation. Under rounding the cumulative decay is negligible. Dotted line - matrix element [0,0]. Solid line - quaternion element Λx.

Figure 4 About Here

Figure 4: Rotating the unit vector in direction [1, 1, -1] around the axis [1, 1, 1] by varying amounts: (a) 0.1 rad, (b) 1.0 rad, (c) 1.7 rad. The errors are plotted logarithmically against the number of bits of mantissa precision under truncation. Hollow and dark squares - direction error with quaternion and matrix representations. Star and cross - length error with quaternion and matrix representations. Variations in error magnitude and its rank ordering result from varying rotation magnitude. The largest error at 22 bits precision is 0.0001 (the smallest representable number is about 10^-9).

Fig. 4 shows the results comparing the conic (eq. (6)) with matrix rotation computation under truncation. In general, increasing the rotation magnitude does not increase error; the error magnitudes are not predictably ordered, and the error magnitude is at worst 10^-6 with 22 bits of mantissa precision.

Next, 100 individual random rotation - random vector pairs were generated, with the rotation applied to the vector at a single precision (here, as throughout the paper until Section 4.8, 10 bits) under both truncation (case RaTr) and rounding (case RaRd). (Statistics for several cases are collected in Table 2 in Section 4.6.)
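Reduced-precision arithmetic of this kind can be modeled by requantizing each result to an n-bit mantissa, either rounding or truncating. The helper below is one plausible sketch of such a quantizer, not the paper's actual implementation:

```python
import math

def quantize(x, bits, mode="round"):
    """Reduce x to `bits` bits of mantissa by rounding or truncation."""
    if x == 0.0:
        return 0.0
    e = math.floor(math.log2(abs(x)))      # exponent of the leading bit
    scale = 2.0 ** (bits - 1 - e)          # shift mantissa into integer range
    m = x * scale
    m = round(m) if mode == "round" else math.trunc(m)
    return m / scale
```

Applying such a quantizer after every arithmetic operation simulates a 10-bit machine; truncation always pushes magnitudes toward zero, which is the source of the systematic biases discussed below.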

The results are shown in two different styles in Figs. 5, 6 and 7. Fig. 5 displays length and direction errors of the 100 trials under truncation, indexed along the X axis although they are each independent. As predicted in Section 4.2, the matrix representation outperforms quaternions by a factor of about two. The length errors under truncation show the nonzero mean one would expect. The same data are presented in Fig. 6 as two projections of the three-dimensional error vectors, all oriented so that the true answer lies along the Z axis, facing down with its point at the origin, aiming in the -Z direction. The X-Y projection shows how the direction errors arise, and the X-Z projection shows how the length errors arise (with the approximate vectors all being too short). Exercise: Construct the rotation that points a given (x, y, z) vector along the Z axis, (a) from rotations about the (X, Y, Z) axes, (b) using quaternions.

Figure 5 About Here

Figure 5: Case RaTr. (a) Length errors of 100 different random (rotation, vector) pairs at 10 bits precision under truncation. Circles plot the quaternion-vector errors, Xs the matrix-vector errors. (b) As for (a), but direction errors.


[Plots for Figures 1, 2, and 3 appear here.]

Figure 6 About Here

Figure 6: The same case and data as the previous figure, plotted as error vectors, where the correct answer vector points to the origin and lies along the positive Z axis (see text). (a) Error vectors projected onto the X-Z plane. Circles plot the quaternion-vector errors, Xs the matrix-vector errors. (b) As for (a), but projection onto the X-Y plane.

Figure 7 About Here

Figure 7: Case RaRd. (a), (b): As the previous figure, with the same input rotations and vectors, but with operations under rounding to 10 bits of precision.

Fig. 7 is similar to Fig. 6 except that rounding is employed (note the scale change). Not only does the length error bias disappear as expected, but the difference between the methods decreases, with the matrix implementation maintaining a slight advantage.

4.4 Iterated Rotations and Normalization

The second experiment is to iterate the same small rotation of φ around a particular direction, using three rotation composition methods. Quaternion multiplication and matrix multiplication produce iterated quaternion and matrix rotation representations; at the end of the iterations, the resulting rotation representation is applied to the input vector once to determine the rotated vector. A third method, called the vector method here, iteratively applies the identical incremental rotation to the vector at each iterative step, preserving only the rotated vector. There are thus two cases (quaternion and matrix) of composing rotation representations and one case (vector) of composing the application of the rotation to the vector. Previous results indicate that the matrix method is both more accurate and more efficient for applying a single rotation, and empirically the results are much better with this choice as well, so rotations are implemented in the vector method by matrix-vector multiplication.
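The three composition methods can be sketched as follows, here under exact arithmetic (in the experiments every operation would additionally be requantized to the working precision). Function names are mine:

```python
import math

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def quat_mul(p, q):
    # Hamilton product; quaternions as (scalar, x, y, z)
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw * qw - px * qx - py * qy - pz * qz,
            pw * qx + px * qw + py * qz - pz * qy,
            pw * qy - px * qz + py * qw + pz * qx,
            pw * qz + px * qy - py * qx + pz * qw)

def quat_apply(q, v):
    # v' = v + 2*lam*(L x v) + 2*(L x (L x v)) as in eq (6)
    lam, L = q[0], list(q[1:])
    t = cross(L, v)
    u = cross(L, t)
    return [v[i] + 2 * lam * t[i] + 2 * u[i] for i in range(3)]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_vec(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

# incremental rotation: 0.1 rad about Z
phi = 0.1
c, s = math.cos(phi), math.sin(phi)
dm = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
dq = (math.cos(phi / 2), 0.0, 0.0, math.sin(phi / 2))

v0, steps = [1.0, 0.0, 0.0], 20
q = (1.0, 0.0, 0.0, 0.0)                                  # quaternion method
m = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]  # matrix method
v = list(v0)                                              # vector method
for _ in range(steps):
    q = quat_mul(dq, q)      # compose representation
    m = mat_mul(dm, m)       # compose representation
    v = mat_vec(dm, v)       # apply to the vector directly

v_quat = quat_apply(q, v0)   # apply composed quaternion once at the end
v_mat = mat_vec(m, v0)       # apply composed matrix once at the end
```

In exact arithmetic all three agree (a rotation of 2 rad about Z); the experiments below measure how they diverge once every operation is rounded or truncated.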

These experiments were carried out under both truncation and rounding. Truncation is sometimes used to get at worst-case performance in a numerical analysis sense. Such an approach might be useful in analysis of a non-iterative computational scheme (such as a single rotation), but the cumulative scaling effects described in Section 4.2 overwhelm numerical precision effects. The errors engendered through iterative truncation turn out to be qualitatively similar to those that arise under rounding. These results are thus good for geometric intuition about the methods and illustrate the importance of normalization. Fig. 8(a) and (b) show the (shocking) errors that result if no normalization takes place during the iterated rotation composition, and if the resulting non-normalized product quaternion or matrix rotation is applied to the input vector. This is case IS0 (Iterative Sequence, 0 normalizations). Without normalization, all the representations show cumulative errors that are easy to understand.


[Plots for Figures 5(a) and 5(b) appear here.]

[Plots for Figures 6(a) and 6(b) appear here.]

Figure 8 About Here

Figure 8: Case IS0. Iterative transformation of the vector [1, 1, -1] about the axis [1, 1, 1] by 1 rad, 200 times. Calculations done with ten bits of mantissa precision under truncation; no normalization of the final representation before applying it to the vector. This choice of input parameters results in a representative performance, but not a normative one. (a) Result vector error projected on the X-Z plane. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. The matrix errors are a very tight helix up the Z axis, with the vector errors only slightly larger. (b) As for (a), but showing the X-Y projection. The matrix and vector errors project to the small dark blob at the origin.

First consider the vector representation. It is acted on at each iteration by an identical but errorful transform of eq. (6). We see it in both views spiraling away from the correct initial value, its length shrinking and its direction error increasing. The performance of the vector method might be improved by a factor of two on average by using the matrix transformation at every step (considering the arguments in Section 4.2 and the results shown in Fig. 6). Another important aspect of this method is its variability in performance for different rotations, arising from the variability in the accuracy of the first calculated rotation (again Fig. 6).

As predicted in Section 4.2, the matrix representation is affected in its scaling property but not significantly in its rotational properties: there is effectively no directional error, but the result shrinks to less than half its original length in 200 iterations.

The quaternion representation acts, using eq. (6), by adding two cross-product vectors to the original input vector; as the quaternion components become increasingly systematically errorful, they affect the result in the interesting way shown in the figure. One effect is the decaying magnitude of the quaternion. The other is that as the cumulative rotation φ increases from zero, the sin(φ/2) weighting of Λ, the vector part of the quaternion, starts at zero and returns to zero with a period of 2π/τ, where τ is the truncated value of the incremental rotation angle. Thus with that period the effective rotation error varies sinusoidally and returns to zero as the magnitude of Λ does.

The same data are plotted for 400 iterations this time in Fig. 9. First consider length errors. This presentation emphasizes the exponential decay of the answer's length to zero with the matrix and vector representations. Thus as the quaternion's magnitude ultimately decays to zero, the result vector will be unaffected by the rotation and will remain equal to the input vector, so its length error stabilizes at zero. The short-period sinusoidal character of the length error arises from the arguments of the last paragraph. The initially increasing envelope arises from the increasing inaccuracy caused by the truncation; the ultimate exponential decay of the envelope arises from the shrinking of the quaternion components.

Considering direction errors, the matrix and conic representations have (locally) linearly increasing errors as they spiral away from the true answer direction. (The linear increase cannot be maintained forever on the orientation sphere, of course.)


[Plots for Figures 7(a) and 7(b) appear here.]

[Plots for Figures 8(a) and 8(b) appear here.]

Figure 9 About Here

Figure 9: Case IS0 for 400 iterations (see text). (a) Length error of result vector. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. The vector and matrix errors coincide. (b) As for (a), but showing direction error for the resulting vector. The vector and matrix errors are very close to zero.

Figure 10 About Here

Figure 10: Case IS0 under rounding instead of truncation. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. Though the forms of the errors are similar to the truncation case, their relative magnitudes are more equal. (a) Result vector projected on the X-Z plane. (b) As for (a), but showing the X-Y projection.

The quaternion method's direction error ultimately becomes purely periodic as the answer vector becomes static. The direction error as a function of iteration number t and angle r between the rotation axis and the input vector is then of the form ((r sin t)² + (r - r cos t)²)^(1/2), or the length of a circular chord from a fixed point on the circle to a point moving around the circle at constant velocity. Before that time the direction error exhibits a more complex two-humped form; this arises from the two separate cross-product components of the vector increment in eq. (6). Exercise: Would there be any difference in direction and length errors if the rotation were accomplished by conjugation (eq. (4)) instead of eq. (6)?

Next in this sequence, the last two demonstrations are repeated under rounding, not truncation. The expectation is that due to the better approximation and possible error cancellations arising from high and low approximations, the errors in the representations will be less, but that systematic errors will still exist. The results (Figs. 10, 11) bear this out: the errors often seem like much smaller, noisy versions of the truncation errors. In Fig. 11 the quaternion errors display the fine two-humped structure longer, since it is not overwhelmed as quickly by shrinking quaternion magnitude. Rounding has a dramatically good effect on the systematic shrinking of matrices and quaternions under iterative self-multiplication.

So far we have, purely for intuition, dealt with the pathological case in which no normalization of rotation representations is performed. The arguably pathological case of approximation through truncation yields insight about the structure of systematic approximation errors in an iterative scheme. From here on we abandon truncation approximation (except in Section 4.7).

Figure 11 About Here

Figure 11: Data of the previous figure, showing length and direction errors. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of result vector. (b) Direction error.


[Plots for Figures 9(a), 9(b), 10(a), and 10(b) appear here.]

Figure 12 About Here

Figure 12: Case IS1. Rotating the vector (1, 1, -1) by 1 radian increments about the axis (1, 1, 1), with 10 bits precision under rounding, normalizing the quaternion and matrix representations prior to application to the input vector, and normalizing the final vector representation to its initial length. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. The vector representation has lower variance, and the quaternion exhibits marked periodicity. Only the first 100 iterations are shown; the rest are similar. (b) Direction error.

We now look at two different styles of normalization. In the first, no normalization is performed on intermediate representations, but the final quaternion, matrix, and vector are normalized. In the second, the intermediate quaternion, matrix, and vector are normalized at each step. Normalization of the matrix and quaternion is not dependent on the input data, but the length of the input vector must be computed and remembered for the vector representation to be normalized. This fact might be significant if many vectors are being treated at once.

Fig. 12(a) shows the length error when the final quaternion and matrix are normalized prior to application to the input vector, and the incrementally generated vector representation is normalized to its original length (here known to be unity). This case is called IS1, for Iterative Sequence, 1 normalization. The matrix normalization used is the one-crossproduct form; when only one normalization is being performed, the three types of normalization perform virtually identically. The computations were performed under rounding to the usual ten bits of precision. If its initial length is not remembered, the vector representation cannot be normalized, and its error is the same as in the non-normalized case of Fig. 11, showing a linear error accumulation.

Fig. 12(b) compares direction error for the three representations, normalized as in Fig. 12(a). The comparative performance here is certainly not so clear cut, and the error accumulations resemble random walks. Alternatively, the performance could depend more on the particular choice of input data (rotation and input vector) than on numerical properties of the methods.

Finally, a natural question concerns the effects of normalization applied after every rotation composition. This is case ISN (N normalizations). Fig. 13 shows the results for length and direction. As might be expected for length error, the normalized vector has the best performance (presumably as accurate as machine precision allows). The quaternion result has a systematically negative mean, higher variance, and periodicity. The matrix length error displayed in Fig. 13(a) results from the no-crossproduct form of matrix normalization. The trend is not an illusion: carried out to 400 iterations, the length error shows a marked systematic increase in the negative direction. On the other hand, iteration of the one-crossproduct normalization resulted in a factor-of-two worsening of the magnitude of the direction errors for the matrix representation. Repeated normalizations seem to have adversely affected the accuracy of the vector representation after many


[Plots for Figures 11(a), 11(b), 12(a), and 12(b) appear here.]

Figure 13 About Here

Figure 13: Case ISN. As for the previous figure, but normalizing the rotation representation at each iteration, using the no-crossproduct matrix normalization, which improved direction errors but worsened length errors over the one-crossproduct form. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. (b) Direction error.

Figure 14 About Here

Figure 14: Case RS1. As for case IS1 (no normalization of intermediate representations), but using a sequence of random rotations. There is no difference in performance between the different matrix normalizations; the no-crossproduct matrix normalization is used. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector (first 100 values; the rest are similar). (b) Direction error. The rather confusing results at least have some clear qualitative differences.

iterations, but more work is needed to tell if this is a repeatable effect. In any event, it seems clear that the repeated normalizations are certainly not improving matters much in this case over case IS1.

The main conclusion here is that the cost of normalizing quaternion or matrix representations after every rotation in a repeated sequence may not be repaid in results, and in fact may be actively harmful. The vector representation seems to do best of the three on length error and worst on direction error. More work (analytic or experimental) would be needed to clarify the performance ranks, or even to recommend or disqualify a representation for this task.

4.5 Concatenated Random Rotations

Here a sequence of random rotations is generated as described earlier, analogous to the sequence of identical rotations in the last experiment. They are applied cumulatively to an input vector. In each case computation is with 10 bits precision under rounding. The cases of a single final normalization and of normalizing every intermediate representation (called cases RS1 and RSN) were investigated; both the no- and one-crossproduct forms were investigated. The results, comparable to those in cases IS1 and ISN (Figs. 12 and 13), are shown in Figs. 14 and 15.
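The paper's random-rotation generator is described outside this excerpt; the sketch below is a hypothetical stand-in (uniform random axis via normalized Gaussian samples, uniform random angle) that produces a unit quaternion per draw:

```python
import math
import random

def random_rotation(rng=random):
    """Hypothetical stand-in generator: returns a unit quaternion
    (lambda, Lx, Ly, Lz) for a random axis and angle."""
    n = [rng.gauss(0.0, 1.0) for _ in range(3)]   # isotropic random direction
    norm = math.sqrt(sum(x * x for x in n))
    n = [x / norm for x in n]
    phi = rng.uniform(0.0, 2.0 * math.pi)          # random rotation angle
    lam = math.cos(phi / 2)
    L = [math.sin(phi / 2) * x for x in n]
    return (lam, L[0], L[1], L[2])
```

(Note this samples axis and angle independently, which is not the same as sampling SO(3) uniformly; it is only meant to make the experimental setup concrete.)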

The results indicate that the vector representation remains better than the other two for length errors, but for direction errors the results are unclear. It is hard to draw clear distinctions either between the normalization methods or the representations. Again the form of matrix normalization has some annoying effects, which motivate the next section.


[Plots for Figures 13(a), 13(b), 14(a), and 14(b) appear here.]

Figure 15 About Here

Figure 15: Case RSN. As for case RS1, but normalizing all intermediate representations. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error, using the one-crossproduct matrix normalization. (b) Direction error, using the one-crossproduct form. (c) Direction error, using the no-crossproduct form.

Case        μ-vec      μ-mat      μ-quat     σ-vec      σ-mat      σ-quat
RaTr    L      -       0.006225   0.005026      -       0.001181   0.003369
        D      -       0.001787   0.004578      -       0.001094   0.002307
RaRd    L      -      -0.000310  -0.000225      -       0.001087   0.001263
        D      -       0.000965   0.001180      -       0.000542   0.000759
IS1     L   0.000276  -0.000260  -0.000262   0.000466   0.000857   0.001648
(100)   D   0.003694   0.004063   0.001435   0.001705   0.002298   0.001123
ISN     L   0.000137  -0.001772  -0.000811   0.000533   0.001106   0.001312
(100)   D   0.005646   0.004844   0.001606   0.002786   0.002872   0.001045
RS1     L  -0.000252  -0.000162  -0.000157   0.000521   0.001612   0.002022
(100)   D   0.003710   0.005854   0.006065   0.002150   0.003686   0.006065
RSN     L   0.000164  -0.000602  -0.001097   0.000473   0.000703   0.001666
(100)   D   0.002269   0.009830   0.005960   0.000874   0.004230   0.002236
        D   0.002269   0.005187   0.005960   0.000874   0.002248   0.002236

Table 2: Statistics for length (L) and direction (D) errors for some of the cases illustrated in this section. Cases RaTr and RaRd are the independent random rotations; IS1 and ISN are the iterative sequences (one normalization, and normalization of each intermediate representation); and RS1 and RSN are the random sequences. Only the first 100 data values are used from the iterated rotations, to diminish the effect of longer-term trends. The second RSN line is with no-crossproduct normalization (which only affects the matrix representation).

4.6 Statistical Summary

Table 2 summarizes some of the data from the cases so far.

4.7 Matrix Normalization

Some experiments explored the performance of the three types of matrix normalization (no, one, or two cross products) in the context of iterated identical and random rotations.

The experiments consist of two iterated identical rotations (Figs. 16, 17) under rounding, and two iterated random rotation sequences under truncation and rounding. Fig. 18 gives the direction errors for two sequences of random rotations applied to an input


[Plots for Figures 15(a), 15(b), and 15(c) appear here.]

Figure 16 About Here

Figure 16: Iterated identical rotations of the unit vector in direction (1, 1, -1) by 1 radian about the axis (1, 1, 1), computed to 10 bits under rounding. Solid line - no cross products; dotted line - one cross product; dashed line - two cross products. (a) Length errors. (b) Direction errors.

Figure 17 About Here

Figure 17: As for the previous figure, but the rotation is of the unit vector in direction (1, 2, 3) by iterated rotations of 2 radians about the axis (3, -2, -1).

vector. In no case is there a significant difference between the performances on the length error at 200 iterations, but the one- or two-crossproduct forms significantly outperform the no-crossproduct form over 400 iterations, since the latter exhibits a systematically increasing negative length error and the former do not.

For direction error, few conclusions can be drawn about relative performance under iterated identical rotation sequences. It is perhaps surprising that the no-crossproduct form usually performs here at least as well as the other two over a small number of iterations, and that for random sequences there is no clear winner between normalization types, even approximating with truncation. Thus the symptoms noticed in the last two sections, involving long-term relative effects of the normalization form, were not noticed over 200 iterations. The one- and two-crossproduct forms may be more stable over long iteration, but the issue of optimal normalization remains open.

4.8 Average Error of Concatenated Random Rotations

Which representation is best, or worst? To wind up this overlong exercise in trying to derive structure from noise, let us consider average behavior for the task of random rotation sequences (perhaps of more general interest than iterated identical rotations). Technical decisions on normalization are based on the conclusions of previous sections: approximation is by rounding, with normalization only after the last rotation. Matrix normalization is done with the one-crossproduct method. Each run is a single experiment like that of Section 4.5, only performed (with the same data) at a number of different precisions.
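The variable-precision arithmetic used in these runs can be sketched by rounding every intermediate result to a given number of mantissa bits. The helper below is an illustrative assumption, not the report's actual mechanism; replacing `round` with `math.trunc` models the truncation mode instead:

```python
import math

def round_to_bits(x, bits):
    # Round x to `bits` bits of mantissa. frexp splits x = m * 2**e with
    # 0.5 <= |m| < 1; we quantize m to the requested number of bits and
    # reassemble. Relative error is at most 2**-bits.
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)
    scale = 2.0 ** bits
    return math.ldexp(round(m * scale) / scale, e)
```

Running the same experiment with `bits` swept over a range yields exactly the kind of error-versus-precision surface described here.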

Such a set of runs yields length and direction error surfaces, plotted against the precision.

Figure 18 About Here

Figure 18: Direction errors only, for a sequence of random rotations applied to an input vector. Legend as in the previous two figures. (a) Truncation to 10 bits. (b) Rounding to 10 bits.


Figure 16(a) Figure 16(b)

Figure 17(a) Figure 17(b)

Figure 18(a) Figure 18(b)

Figure 19(a) Figure 19(b)

Figure 19(c) Figure 19(d)

Rep        Repeated Rot                   Random Seq
Vec(Mat)   (9N+17)*, (6N+13)+, 1 sc       26N*, 19N+, N sc
Vec(Quat)  (18N+4)*, (12N)+, 1 sc         22N*, 12N+, N sc
Quat       (16N+22)*, (12N+12)+, 1 sc     (20N+18)*, (12N+12)+, N sc
Matrix     (27N+26)*, (18N+19)+, 1 sc     (44N+9)*, (31N+6)+, N sc

Table 4: Operation counts for N-long rotation sequences. The long form of matrix multiplication is used here. Vec(Mat) is the vector representation with intermediate transformation by matrices, Vec(Quat) with transformation by the quaternion method. Multiplications, additions, and sine-cosine pairs are denoted by *, +, and sc respectively.

matrix transformation), since matrix-vector multiplication costs one third of matrix-matrix multiplication.

4.10 Conclusions and Questions

The conclusions we can draw at this point are the following:

1. Numerical simulation of variable-precision arithmetic has yielded some tentative conclusions, but in many interesting cases they are rather delicate and depend on niceties of the implementation. Accurate performance predictions may only be possible with faithful simulation of the particular hardware and software at issue.

2. Practical differences in numerical accuracy tend to disappear at reasonable precisions. After 200 iterations of a rotation, 53 bits of mantissa precision (the double representation in the C language) yields no error to 10 decimal places. With 24 bits (the float representation), effects were noticeable in the 5th to 7th decimal place with all methods.

3. Normalization is vital before quaternion or matrix representations are applied to vectors, but not as important during the composition of representations, and the extra numerical computation of frequent normalization may actually decrease accuracy in the final result.

4. The vector (with matrix transformation) and matrix representations for rotation sequences seem to outperform the quaternion for accuracy.

5. The vector (with matrix transformation) representation for rotation sequences, with a single normalization of vector length at the end, is most computationally efficient and has the least length error on the average.

6. Averaged over many trials, the difference in accuracy of the three methods amounts only to about a single bit of precision for a small number of rotation compositions (say 25 or fewer).

7. Matrices are the least computationally efficient on rotation sequences.


There are also some remaining basic questions:

1. What is a reasonable analytic approach to the numerical accuracy problem for interesting rotational tasks (say, sequences of random rotations under rounding)?

2. How should optimal matrix or quaternion normalization methods be defined and constructed?

5 Discussion

The area of robust computational geometry is becoming popular, with an annual international conference and increasingly sophisticated theoretical formulations. Very little of that work (if any) is related to rotation algorithms. Symbolic algebra systems can help with analysis [4]. A good treatment of computer arithmetic and rounding appears in [15], with an application to detecting line intersection. Serious qualitative representational problems arise when issues of topology (line connectivity) are at stake, and much of the work is on computing line intersection accurately with finite precision arithmetic [19]. For instance, Milenkovic's analysis is based on facts about line intersection, such as the number of extra bits that are necessary to guarantee a specific precision of answer. Some of these facts are surprising: it takes at least three times the input precision to calculate a line intersection location to the same precision [11]. Robust polygon and polyhedron intersection algorithms, as well as various line-sweeping algorithms, are based on accurate line intersection algorithms [8].

Several points of comparison between rotation parameterizations and representations have been raised in the preceding sections. It is perhaps worth repeating that little evidence for dramatic differences in numerical accuracy has emerged, though the evidence might tend to favor matrices slightly and to condemn the vector representation.

One of the common applications of rotational transforms is to affect several vectors at once, either in the context of transforming a set of points around (as in graphics) or in handling coordinate systems represented as n points (n - 1 of them at points at infinity representing axis directions, and one representing the origin of coordinates, in a homogeneous coordinate system). In the first case the repeated application of the same transform will be more economical if the work is done in the matrix representation. In the second the same may well be true, but of course the whole point of homogeneous coordinates is to make the projective (a fortiori affine) transforms into linear ones implemented with matrix multiplication. This means, for example, that coordinate systems, rigid transformations including translation and rotation, and point projective transforms (simple camera models) can all be represented as matrices and composed with matrix multiplication, and that the operation of such transforms on vectors is simply matrix-vector multiplication. In this context homogeneous coordinates by definition require the matrix representation.
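As a small illustration of this point (a standard construction, not code from the report), a rotation and a translation pack into one 4x4 homogeneous matrix, so that rigid transforms compose by ordinary matrix multiplication and apply to points by matrix-vector multiplication:

```python
import numpy as np

def homogeneous(R, t):
    # Pack rotation R (3x3) and translation t (3-vector) into a 4x4 matrix.
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = t
    return H

# Example: rotate 90 degrees about Z, then translate by (1, 2, 0).
c, s = 0.0, 1.0
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
H = homogeneous(Rz, [1.0, 2.0, 0.0])
p = np.array([1.0, 0.0, 0.0, 1.0])  # homogeneous point (x, y, z, 1)
q = H @ p                           # rotated-then-translated point
```

Composing two rigid motions is then just `H2 @ H1`, which is exactly the economy the text describes.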

If the rotation at issue is a general one (say, specified in conic (φ, n) parameters), then conversion either to the quaternion or matrix form is easy. Thus the representations in this case do not dictate the semantics, or how one wants to think about the rotation. If the rotation is specified in terms of rotations about the (X, Y, Z) coordinate axes, then the matrix representation is easily and robustly constructed.

Real-time applications involving concatenating rotations and applying them to a small number of vectors (perhaps one: the position of a robot hand) need detailed analysis to see which representation is the most efficient. Similarly for more complex geometric calculations.

A common application of rotation representations in vision is to express the motion of the observer. Often the rotational motions, for mechanical or mathematical reasons, are expressed as rotations about some fixed axes, which in practice are usually orthogonal. Examples are the motions induced by pan and tilt camera platforms, or the expression of egomotion in terms of rotations about the observer's local X, Y, Z axes. In this case the matrix representation is the clear choice, to maximize efficiency and intuition.

Reconstructing the rotation that transformed known points is a useful vision task that requires fitting data to a model of the transformation. The straightforward route, followed since Roberts' thesis [17], is to do a linear least squares fit to the transform's matrix representation. This approach is perhaps risky, since the matrix elements are not independent and the error being minimized thus may not be the one desired. The alternative of using conic parameters has been successfully used [12] and may well be a better idea. Recently, more sophisticated Kalman filter methods have been used to derive motion from point matches through time [5]. Model-matching with constraints allows a model to be matched to a class of objects, and promising approaches are emerging that use hybrid methods of symbolic algebra (implementing polynomial triangulation to get explicit solutions for known constraints) followed by root-finding and sophisticated minimization techniques to search for minimum-error solutions [13,14].

A less common but important set of applications is to reason about and manipulate rotations themselves. For example, one may want to sample rotation space evenly [9], or to create smoothly interpolated paths in rotation space between a given set of rotations [16]. In these cases the Euler-Rodrigues parameters are the most convenient.

Another example is the use, since 1932, of quaternions in the neurophysiological literature to describe eye kinematics [22]. Models of human three-dimensional oculomotor performance, especially for the vestibulo-ocular reflex and feedback models of saccades, involve computations on rotations that are much more stable when carried out in conic or Euler-Rodrigues parameters using quaternion arithmetic. Subtractive feedback laws adequate for simple (left or right) eye movements do not generalize to arbitrary rotational axes. The quaternion model uses multiplicative feedback, explains human performance better than similar models based on rotation matrices, and leads to explicit predictions such as a four-channel saccade computation [21].


6 Bibliography

[1] S. L. Altmann. Rotations, Quaternions and Double Groups. Clarendon Press, Oxford, 1986.

[2] C. M. Brown. Gaze controls with interactions and delays. In DARPA Image Understanding Workshop (submitted IEEE-TSMC), pages 200-218, May 1989.

[3] C. M. Brown. Prediction in gaze and saccade control. Technical Report OUEL 1771/89 (UR TR-295), Oxford University Dept. Engineering Science (U. Rochester Comp. Sci. Dept.), May 1989.

[4] C. I. Connolly, D. Kapur, J. L. Mundy, and R. Weiss. GeoMeter: A system for modeling and algebraic manipulation. In Proceedings DARPA Image Understanding Workshop, pages 797-804, May 1989.

[5] O. D. Faugeras, F. Lustman, and G. Toscani. Motion and structure from motion from point and line matches. In Proceedings International Conference on Computer Vision, pages 25-34, June 1987.

[6] J. D. Foley and A. van Dam. Fundamentals of Interactive Computer Graphics. Addison-Wesley, 1982.

[7] W. R. Hamilton. On quaternions; or on a new system of imaginaries in algebra. Philosophical Magazine, 3rd Ser., 25:489-495, 1844.

[8] C. Hoffmann and J. Hopcroft. Towards implementing robust geometric computations. In Proceedings ACM Symposium on Computational Geometry, 1988.

[9] B. K. P. Horn. Robot Vision. MIT Press/McGraw-Hill, 1986.

[10] K. Kanatani. Group Theory in Computer Vision. Springer-Verlag, 1989.

[11] V. J. Milenkovic. Robust geometric computations for vision and robotics. In Proceedings DARPA Image Understanding Workshop, pages 764-773, May 1989.

[12] J. L. Mundy. Private communication, August 1989.

[13] J. L. Mundy. The application of symbolic algebra techniques to object modeling. Lecture, University of Oxford, August 1989.

[14] J. L. Mundy. Symbolic representation of object models. In preparation, August 1989.

[15] T. Ottmann, G. Thiemt, and C. Ullrich. Numerical stability of geometric algorithms. In Proceedings ACM Symposium on Computational Geometry, 1987.

[16] K. S. Roberts, G. Bishop, and S. K. Ganapathy. Smooth interpolation of rotational motion. In Computer Vision and Pattern Recognition 1988, pages 724-729, June 1988.

[17] L. G. Roberts. Machine perception of three-dimensional solids. In J. T. Tippett et al., editors, Optical and Electro-Optical Information Processing, pages 159-197. MIT Press, 1968.

[18] O. Rodrigues. Des lois géométriques qui régissent les déplacements d'un système solide dans l'espace, et de la variation des coordonnées provenant de ces déplacements considérés indépendamment des causes qui peuvent les produire. Journal de Mathématiques Pures et Appliquées, 5:380-440, 1840.

[19] D. Salesin, J. Stolfi, and L. Guibas. Epsilon geometry: building robust algorithms from imprecise computations. In Proceedings ACM Symposium on Computational Geometry, 1989.

[20] R. Taylor. Planning and execution of straight line manipulator trajectories. In J. M. Brady, J. M. Hollerbach, T. L. Johnson, T. Lozano-Pérez, and M. T. Mason, editors, Robot Motion: Planning and Control. MIT Press, 1982.

[21] D. Tweed and T. Vilis. Implications of rotational kinematics for the oculomotor system in three dimensions. Journal of Neurophysiology, 58(4):832-849, 1987.

[22] G. Westheimer. Kinematics for the eye. Journal of the Optical Society of America, 47:967-974, 1932.



original and two cross products), whereas the matrix formula is a matrix-vector product. Going through the code for this operation, we find that the x' component of the result vector (x', y', z') after applying the quaternion formula for quaternion (λ, Λx, Λy, Λz) to the input vector (x, y, z) is computed by operations applied at their usual precedence in the following formula:

x' = x + 2λ(Λy z − Λz y) − 2Λz(Λz x − Λx z) + 2Λy(Λx y − Λy x)    (15)

and similarly for the y' and z' components. Both (x, y, z) and Λ are unit vectors, and λ is a cosine.

The operations implementing the corresponding component by the matrix multiplication formula are

x' = M00 x + M01 y + M02 z    (16)

and similarly for the y' and z' components.

There is nothing in these two formulae to predict the significant statistical difference (especially under the more predictable truncation approximation) in the performance of these two methods (see, e.g., Fig. 6).
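For concreteness, both routes can be written out directly. The sketch below is my own Python (quaternion stored as scalar part `lam` and vector part `L`, with `lam = cos(theta/2)`, `L = sin(theta/2) * axis`); it implements the cross-product form of eq. (15) and a standard quaternion-to-matrix conversion, and at full precision the two agree:

```python
import math
import numpy as np

def quat_apply(lam, L, v):
    # Eq. (15) in vector form: v' = v + 2*lam*(L x v) + 2*L x (L x v).
    c = np.cross(L, v)
    return v + 2.0 * lam * c + 2.0 * np.cross(L, c)

def quat_to_matrix(lam, L):
    # Standard conversion from (lam, L) to a rotation matrix; the report's
    # eq. (8) is presumably equivalent, but this exact layout is my assumption.
    x, y, z = L
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - lam*z),   2*(x*z + lam*y)],
        [2*(x*y + lam*z),   1 - 2*(x*x + z*z), 2*(y*z - lam*x)],
        [2*(x*z - lam*y),   2*(y*z + lam*x),   1 - 2*(x*x + y*y)],
    ])
```

The statistical differences the text describes appear only once the intermediate products and sums are individually rounded or truncated to short mantissas.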

Pressing on, consider a still grosser description of the structure. Looking at eqs. (4) and (8), say that the error in quaternion and matrix (caused perhaps by truncation) is described as a scaling of the whole quaternion or matrix (by δ = (1 − kε) for some constant k, if truncation error is ε). In that case the matrix-transformed vector is in error by a factor of δ, or an additive error of ε, and the quaternion-transformed vector by a factor of δ² (because of the conjugation), or an additive factor of 2ε (to first order). Iterated matrix or quaternion self-multiplication exponentially damps the individual matrix and quaternion elements, which otherwise vary sinusoidally as the iteration proceeds. Under iterated application of the scaled matrix, the direction of the resulting vector should be unaffected, but its length should shrink, decaying exponentially to zero.
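The δ² effect of conjugation is easy to check numerically: rotating by an un-normalized quaternion q scales the result by |q|², so a quaternion shrunk by a factor d shrinks the output vector by d². A small sketch (the notation is my assumption; this is not the report's code):

```python
import math
import numpy as np

def conjugate_rotate(q, v):
    # Rotate v by quaternion conjugation q v q~, with NO unit-norm
    # assumption on q = (w, x, y, z). Every term is quadratic in the
    # components of q, so scaling q by d scales the result by d**2.
    w, x, y, z = q
    L = np.array([x, y, z])
    return (w*w - L @ L) * v + 2.0 * (L @ v) * L + 2.0 * w * np.cross(L, v)
```

With a unit quaternion this is an exact rotation; multiplying the quaternion by d = 1 − kε reproduces the factor-of-δ² (additive 2ε) error the text derives.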

In fact, this level of explanation suffices to explain the data resulting from a single rotation (Fig. 6): the quaternion errors are approximately twice the matrix errors for a single rotation under truncation. It also correctly predicts the behavior of matrix and quaternion elements under iteration of the same rotation (Fig. 3). It correctly predicts the form of error for the matrix transformation under iteration of the same rotation (Fig. 8), and forms part of the explanation for quaternion transformation error (see Section 4.4). Exercise: In Fig. 3 the period of the matrix element is half that of the quaternion element. This result is expected from inspection of eqs. (8, 2), but looks inconsistent with eq. (7). Is it?

4.3 Single Rotations

First, some sample single rotations were applied:

v' = R v    (17)


Figure 3 About Here

Figure 3: The components of the quaternion and matrix are scaled and shifted sinusoids. Here, representative quaternion and matrix element amplitudes decay exponentially through time under iterative rotation with truncation. Under rounding the cumulative decay is negligible. Dotted line - matrix element [00]; solid line - quaternion element Λx.

Figure 4 About Here

Figure 4: Rotating the unit vector in direction [1, 1, -1] around the axis [1, 1, 1] by varying amounts: (a) 0.1 rad, (b) 1.0 rad, (c) 1.7 rad. The errors are plotted logarithmically against the number of bits of mantissa precision, under truncation. Hollow and dark squares - direction error with quaternion and matrix representations. Star and cross - length error with quaternion and matrix representations. Variations in error magnitude and its rank ordering result from varying rotation magnitude. The largest error at 22 bits precision is 0.0001 (the smallest representable number is about 10^-9).

Fig. 4 shows the results, comparing the conic (eq. (6)) with matrix rotation computation under truncation. In general, increasing the rotation magnitude does not increase error; the error magnitudes are not predictably ordered, and the error magnitude is at worst 10^-6 with 22 bits of mantissa precision.

Next, 100 individual random rotation - random vector pairs were generated, with the rotation applied to the vector at a single precision (here, as throughout the paper until Section 4.8, 10 bits) under both truncation (case RaTr) and rounding (case RaRd). (Statistics for several cases are collected in Table 2 in Section 4.6.)

The results are shown in two different styles in Figs. 5, 6 and 7. Fig. 5 displays length and direction errors of the 100 trials under truncation, indexed along the X axis although they are each independent. As predicted in Section 4.2, the matrix representation outperforms quaternions by a factor of about two. The length errors under truncation show the nonzero mean one would expect. The same data are presented in Fig. 6 as two projections of the three-dimensional error vectors, all oriented so that the true answer lies along the Z axis, facing down, with its point at the origin, aiming in the -Z direction. The X-Y projection shows how the direction errors arise, and the X-Z projection shows how the length errors arise (with the approximate vectors all being too short). Exercise: Construct the rotation that points a given (x, y, z) vector along the Z axis (a) from rotations about the (X, Y, Z) axes, (b) using quaternions.
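The two error measures used throughout can be made explicit. The helper below is my reading of them (length error as a signed difference of norms, direction error as the angle between the true and approximate results), not the report's own code:

```python
import math
import numpy as np

def length_and_direction_error(v_true, v_approx):
    # Length error: how much longer (+) or shorter (-) the approximate
    # vector is than the true one.
    len_err = np.linalg.norm(v_approx) - np.linalg.norm(v_true)
    # Direction error: angle (radians) between the two vectors, with the
    # cosine clamped to [-1, 1] to guard against roundoff in the dot product.
    cosang = np.dot(v_true, v_approx) / (
        np.linalg.norm(v_true) * np.linalg.norm(v_approx))
    dir_err = math.acos(max(-1.0, min(1.0, cosang)))
    return len_err, dir_err
```

Under truncation the length errors then show exactly the negative bias described in the text, since truncated intermediates systematically shorten the result.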

Figure 5 About Here

Figure 5: Case RaTr. (a) Length errors of 100 different random (rotation, vector) pairs at 10 bits precision under truncation. Circles plot the quaternion-vector errors, X's the matrix-vector errors. (b) As for (a), but direction errors.


Figure 1

Figure 2

Figure 3

Figure 6 About Here

Figure 6: The same case and data as the previous figure, plotted as error vectors, where the correct answer vector points to the origin and lies along the positive Z axis (see text). (a) Error vectors projected onto the X-Z plane. Circles plot the quaternion-vector errors, X's the matrix-vector errors. (b) As for (a), but projection onto the X-Y plane.

Figure 7 About Here

Figure 7: Case RaRd. (a), (b): as the previous figure, with the same input rotations and vectors, but with operations under rounding to 10 bits of precision.

Fig. 7 is similar to Fig. 6, except that rounding is employed (note the scale change). Not only does the length error bias disappear as expected, but the difference between the methods decreases, with the matrix implementation maintaining a slight advantage.

4.4 Iterated Rotations and Normalization

The second experiment is to iterate the same small rotation of φ around a particular direction, using three rotation composition methods. Quaternion multiplication and matrix multiplication produce iterated quaternion and matrix rotation representations; at the end of the iterations, the resulting rotation representation is applied to the input vector once to determine the rotated vector. A third method, called the vector method here, iteratively applies the identical incremental rotation to the vector at each iterative step, preserving only the rotated vector. There are thus two cases (quaternion and matrix) of composing rotation representations, and one case (vector) of composing the application of the rotation to the vector. Previous results indicate that the matrix method is both more accurate and more efficient for applying a single rotation, and empirically the results are much better with this choice as well, so rotations are implemented in the vector method by matrix-vector multiplication.
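The three composition methods can be sketched as follows (illustrative Python, not the report's code; the quaternion is stored as (w, x, y, z) and all names are my own):

```python
import math
import numpy as np

def quat_mul(q1, q2):
    # Hamilton product of quaternions stored as (w, x, y, z).
    w1, v1 = q1[0], np.asarray(q1[1:])
    w2, v2 = q2[0], np.asarray(q2[1:])
    w = w1 * w2 - v1 @ v2
    v = w1 * v2 + w2 * v1 + np.cross(v1, v2)
    return np.array([w, *v])

def iterate(q_step, R_step, v0, n):
    # Apply the same incremental rotation n times, three ways:
    #  1) compose quaternions, 2) compose matrices,
    #  3) rotate the vector itself at every step (the "vector method",
    #     implemented here by matrix-vector multiplication).
    q, R, v = np.array([1.0, 0.0, 0.0, 0.0]), np.eye(3), np.array(v0, float)
    for _ in range(n):
        q = quat_mul(q_step, q)
        R = R_step @ R
        v = R_step @ v
    return q, R, v
```

In the experiments each arithmetic operation would additionally be rounded or truncated to the test precision; run at full double precision, all three methods agree to roundoff.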

These experiments were carried out under both truncation and rounding. Truncation is sometimes used to get at worst-case performance in a numerical analysis sense. Such an approach might be useful in analysis of a non-iterative computational scheme (such as a single rotation), but the cumulative scaling effects described in Section 4.2 overwhelm numerical precision effects. The errors engendered through iterative truncation turn out to be qualitatively similar to those that arise under rounding. These results are thus good for geometric intuition about the methods, and illustrate the importance of normalization. Fig. 8(a) and (b) show the (shocking) errors that result if no normalization takes place during the iterated rotation composition, and if the resulting non-normalized product quaternion or matrix rotation is applied to the input vector. This is case IS0 (Iterative Sequence, 0 normalizations). Without normalization, all the representations show cumulative errors that are easy to understand.


Figure 5(a) Figure 5(b)

Figure 6(a) Figure 6(b)

Figure 8 About Here

Figure 8: Case IS0. Iterative transformation of the vector [1, 1, -1] about the axis [1, 1, 1] by 1 rad, 200 times. Calculations done with ten bits of mantissa precision under truncation; no normalization of the final representation before applying it to the vector. This choice of input parameters results in a representative performance, but not a normative one. (a) Result vector error projected on the X-Z plane. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. The matrix errors are a very tight helix up the Z axis, with the vector errors only slightly larger. (b) As for (a), but showing the X-Y projection. The matrix and vector errors project to the small dark blob at the origin.

First consider the vector representation. It is acted on at each iteration by an identical but errorful transform of eq. (6). We see it in both views spiraling away from the correct initial value, its length shrinking and its direction error increasing. The performance of the vector method might be improved by a factor of two on average by using the matrix transformation at every step (considering the arguments in Section 4.2 and the results shown in Fig. 6). Another important aspect of this method is its variability in performance for different rotations, arising from the variability in the accuracy of the first calculated rotation (again Fig. 6).

As predicted in Section 4.2, the matrix representation is affected in its scaling property but not significantly in its rotational properties: there is effectively no directional error, but the result shrinks to less than half its original length in 200 iterations.

The quaternion representation acts, using eq. (6), by adding two cross product vectors to the original input vector; as the quaternion components become increasingly and systematically errorful, they affect the result in the interesting way shown in the figure. One effect is the decaying magnitude of the quaternion. The other is that as the cumulative rotation φ increases from zero, the sin(φ/2) weighting of Λ, the vector part of the quaternion, starts at zero and returns to zero with a period of 2π/t, where t is the truncated value of the incremental rotation angle. Thus with that period the effective rotation error varies sinusoidally, and returns to zero as the magnitude of Λ does.

The same data are plotted for 400 iterations this time in Fig. 9. First consider length errors. This presentation emphasizes the exponential decay of the answer's length to zero with the matrix and vector representations. Thus, as the quaternion's magnitude ultimately decays to zero, the result vector will be unaffected by the rotation and will remain equal to the input vector, so its length error stabilizes at zero. The short-period sinusoidal character of the length error arises from the arguments of the last paragraph. The initially increasing envelope arises from the increasing inaccuracy caused by the truncation; the ultimate exponential decay of the envelope arises from the shrinking of the quaternion components.

Considering direction errors, the matrix and conic representations have (locally) linearly increasing errors as they spiral away from the true answer direction. (The linear increase cannot be maintained forever on the orientation sphere, of course.) The quaternion method's direction error ultimately becomes purely periodic as the answer vector becomes static. The direction error as a function of iteration number t and angle r between the rotation axis and the input vector is then of the form ((r sin t)² + (r − r cos t)²)^(1/2), or the length of a circular chord from a fixed point on the circle to a point moving around the circle at constant velocity. Before that time the direction error exhibits a more complex two-humped form; this arises from the two separate cross product components of the vector increment in eq. (6). Exercise: Would there be any difference in direction and length errors if the rotation were accomplished by conjugation (eq. (4)) instead of eq. (6)?

Figure 7(a) Figure 7(b)

Figure 8(a) Figure 8(b)

Figure 9 About Here

Figure 9: Case IS0 for 400 iterations (see text). (a) Length error of result vector. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. The vector and matrix errors coincide. (b) As for (a), but showing direction error for the resulting vector. The vector and matrix errors are very close to zero.

Figure 10 About Here

Figure 10: Case IS0 under rounding instead of truncation. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. Though the forms of the errors are similar to the truncation case, their relative magnitudes are more equal. (a) Result vector projected on the X-Z plane. (b) As for (a), but showing the X-Y projection.

Next in this sequence, the last two demonstrations are repeated under rounding, not truncation. The expectation is that, due to the better approximation and possible error cancellations arising from high and low approximations, the errors in the representations will be less, but that systematic errors will still exist. The results (Figs. 10, 11) bear this out; the errors often seem like much smaller, noisy versions of the truncation errors. In Fig. 11 the quaternion errors display the fine two-humped structure longer, since it is not overwhelmed as quickly by shrinking quaternion magnitude. Rounding has a dramatically good effect on the systematic shrinking of matrices and quaternions under iterative self-multiplication.

So far, purely for intuition, we have dealt with the pathological case in which no normalization of rotation representations is performed. The arguably pathological case of approximation through truncation yields insight about the structure of systematic approximation errors in an iterative scheme. From here on we abandon truncation approximation (except in Section 4.7).

Figure 11 About Here

Figure 11: Data of the previous figure, showing length and direction errors. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of result vector. (b) Direction error.


Figure 9(a) Figure 9(b)

Figure 10(a) Figure 10(b)

Figure 12 About Here

Figure 12: Case IS1. Rotating the vector (1, 1, -1) by 1 radian increments about the axis (1, 1, 1), with 10 bits precision under rounding, normalizing the quaternion and matrix representations prior to application to the input vector, and normalizing the final vector representation to its initial length. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. The vector representation has lower variance, and the quaternion exhibits marked periodicity. Only the first 100 iterations are shown; the rest are similar. (b) Direction error.

We now look at two different styles of normalization. In the first, no normalization is performed on intermediate representations, but the final quaternion, matrix, and vector are normalized. In the second, the intermediate quaternion, matrix, and vector are normalized at each step. Normalization of the matrix and quaternion is not dependent on the input data, but the length of the input vector must be computed and remembered for the vector representation to be normalized. This fact might be significant if many vectors are being treated at once.

Fig. 12(a) shows the length error when the final quaternion and matrix are normalized prior to application to the input vector, and the incrementally generated vector representation is normalized to its original length (here known to be unity). This case is called IS1, for Iterative Sequence, 1 normalization. The matrix normalization used is the one-crossproduct form; when only one normalization is being performed, the three types of normalization perform virtually identically. The computations were performed under rounding to the usual ten bits of precision. If its initial length is not remembered, the vector representation cannot be normalized, and its error is the same as the non-normalized case of Fig. 11, showing a linear error accumulation.

Fig. 12(b) compares direction error for the three representations, normalized as in Fig. 12(a). The comparative performance here is certainly not so clear cut, and the error accumulations resemble random walks. Alternatively, the performance could depend more on the particular choice of input data (rotation and input vector) than on numerical properties of the methods.

Finally, a natural question concerns the effects of normalization applied after every rotation composition. This is case ISN (N normalizations). Fig. 13 shows the results for length and direction. As might be expected, for length error the normalized vector has the best performance (presumably as accurate as machine precision allows). The quaternion result has a systematically negative mean, higher variance, and periodicity. The matrix length error displayed in Fig. 13(a) results from the no-crossproduct form of matrix normalization. The trend is not an illusion: carried out to 400 iterations, the length error shows a marked systematic increase in the negative direction. On the other hand, iteration of the one-crossproduct normalization resulted in a factor of two worsening of the magnitude of the direction errors for the matrix representation. Repeated normalizations seem to have adversely affected the accuracy of the vector representation after many iterations, but more work is needed to tell if this is a repeatable effect. In any event, it seems clear that the repeated normalizations are certainly not improving matters much in this case over case IS1.

Figure 11(a) Figure 11(b)

Figure 12(a) Figure 12(b)

Figure 13 About Here

Figure 13: Case ISN. As for the previous figure, but normalizing the rotation representation at each iteration, using no-crossproduct matrix normalization, which improved direction errors but worsened length errors over the one-crossproduct form. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. (b) Direction error.

Figure 14 About Here

Figure 14: Case RS1. As for case IS1 (no normalization of intermediate representations), but using a sequence of random rotations. There is no difference in performance with the different matrix normalizations; no-crossproduct matrix normalization is shown. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector (first 100 values; the rest are similar). (b) Direction error. The rather confusing results at least have some clear qualitative differences.

The main conclusion here is that the cost of normalizing quaternion or matrix representations after every rotation in a repeated sequence may not be repaid in results, and in fact may be actively harmful. The vector representation seems to do best of the three on length error and worst on direction error. More work (analytic or experimental) would be needed to clarify the performance ranks, or even to recommend or disqualify a representation for this task.
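The two normalization disciplines compared above (IS1: normalize only the final composed rotation; ISN: normalize after every composition) can be sketched for the quaternion representation as follows. This is an illustrative reconstruction, not the paper's code: the `chop` helper stands in for the paper's reduced-precision simulator, and the 1-radian rotation about (1, 1, 1) mirrors the experimental setup.

```python
import math

def chop(x, bits=10):
    """Round x to `bits` bits of mantissa (a stand-in for the paper's simulator)."""
    if x == 0.0:
        return 0.0
    e = math.floor(math.log2(abs(x))) + 1   # exponent with mantissa in [0.5, 1)
    s = 2.0 ** (bits - e)
    return round(x * s) / s

def qmul(p, q, f=lambda x: x):
    """Quaternion (Hamilton) product, with every component passed through f."""
    return (f(p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]),
            f(p[0]*q[1] + p[1]*q[0] + p[2]*q[3] - p[3]*q[2]),
            f(p[0]*q[2] - p[1]*q[3] + p[2]*q[0] + p[3]*q[1]),
            f(p[0]*q[3] + p[1]*q[2] - p[2]*q[1] + p[3]*q[0]))

def qnormalize(q):
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

s, c = math.sin(0.5), math.cos(0.5)        # 1 radian about the axis (1, 1, 1)
a = 1.0 / math.sqrt(3.0)
q1 = tuple(chop(x) for x in (c, s * a, s * a, s * a))

q = q1                                     # IS1: normalize once, at the end
for _ in range(99):
    q = qmul(q1, q, chop)
q_is1 = qnormalize(q)

q = q1                                     # ISN: normalize after every composition
for _ in range(99):
    q = qnormalize(qmul(q1, q, chop))
q_isn = q
```

Both disciplines end with a unit quaternion; the point of the experiments above is that the extra square root and divisions per step in ISN buy little accuracy over IS1.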

4.5 Concatenated Random Rotations

Here a sequence of random rotations is generated as described earlier, analogous to the sequence of identical rotations in the last experiment. They are applied cumulatively to an input vector. In each case computation is with 10 bits of precision under rounding. The cases of a single final normalization and of normalizing every intermediate representation (called cases RS1 and RSN) were investigated. Both the no- and one-crossproduct forms were investigated. The results, comparable to those in cases IS1 and ISN (Figs. 12 and 13), are shown in Figs. 14 and 15.
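One standard way to generate such uniformly distributed random rotations (the paper's own generator is described in an earlier section not reproduced here, and may differ) is to normalize a 4-D Gaussian sample into a unit quaternion; a sketch:

```python
import math
import random

def random_rotation(rng):
    """Unit quaternion drawn uniformly from SO(3): a normalized 4-D Gaussian.

    Illustrative stand-in for the paper's random-rotation generator.
    """
    q = [rng.gauss(0.0, 1.0) for _ in range(4)]
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

rng = random.Random(0)
sequence = [random_rotation(rng) for _ in range(100)]   # an RS1/RSN-style sequence
```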

The results indicate that the vector representation remains better than the other two for length errors, but for direction errors the results are unclear. It is hard to draw clear distinctions either between the normalization methods or the representations. Again the form of matrix normalization has some annoying effects, which motivate the next section.

Figure 13(a) Figure 13(b)

Figure 14(a) Figure 14(b)

Figure 15 About Here

Figure 15: Case RSN. As for case RS1, but normalizing all intermediate representations. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error, using one-crossproduct matrix normalization. (b) Direction error, using the one-crossproduct form. (c) Direction error, using the no-crossproduct form.

Case     μ-vec      μ-mat      μ-quat     σ-vec     σ-mat     σ-quat
RaTr  L      -       0.006225   0.005026      -      0.001181  0.003369
      D      -       0.001787   0.004578      -      0.001094  0.002307
RaRd  L      -      -0.000310  -0.000225      -      0.001087  0.001263
      D      -       0.000965   0.001180      -      0.000542  0.000759
IS1   L   0.000276  -0.000260  -0.000262   0.000466  0.000857  0.001648
      D   0.003694   0.004063   0.001435   0.001705  0.002298  0.001123
ISN   L   0.000137  -0.001772  -0.000811   0.000533  0.001106  0.001312
      D   0.005646   0.004844   0.001606   0.002786  0.002872  0.001045
RS1   L  -0.000252  -0.000162  -0.000157   0.000521  0.001612  0.002022
      D   0.003710   0.005854   0.006065   0.002150  0.003686  0.006065
RSN   L   0.000164  -0.000602  -0.001097   0.000473  0.000703  0.001666
      D   0.002269   0.009830   0.005960   0.000874  0.004230  0.002236
      D   0.002269   0.005187   0.005960   0.000874  0.002248  0.002236

Table 2: Statistics for length (L) and direction (D) errors for some of the cases illustrated in this section. Cases RaTr and RaRd are the independent random rotations, IS1 and ISN are the iterative sequences (one normalization, and normalization of each intermediate representation), and RS1 and RSN the random sequences. Only the first 100 data values are used from the iterated rotations, to diminish the effect of longer-term trends. The second RSN direction line is with no-crossproduct normalization (which only affects the matrix representation).

4.6 Statistical Summary

Table 2 summarizes some of the data from the cases so far.

4.7 Matrix Normalization

Some experiments explored the performance of the three types of matrix normalization (no, one, or two cross products) in the context of iterated identical and random rotations.
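The paper does not reproduce the three normalization algorithms here; one common reading of "no, one, or two cross products" is sketched below (an assumption, not the paper's definitions), where each form spends more arithmetic to enforce stricter orthonormality.

```python
import math

def _unit(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def _cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def normalize_no_cross(m):
    """'No crossproduct': rescale each row to unit length.

    Rows may remain slightly non-orthogonal."""
    return [_unit(r) for r in m]

def normalize_one_cross(m):
    """'One crossproduct': unit rows 0 and 1; row 2 rebuilt as their cross product."""
    r0, r1 = _unit(m[0]), _unit(m[1])
    return [r0, r1, _cross(r0, r1)]

def normalize_two_cross(m):
    """'Two crossproducts': keep row 0; rebuild rows 1 and 2 to force orthogonality."""
    r0 = _unit(m[0])
    r2 = _unit(_cross(r0, m[1]))
    return [r0, _cross(r2, r0), r2]
```

Applied to a slightly perturbed rotation matrix, the two-crossproduct form returns an exactly orthonormal result (up to machine precision), while the no-crossproduct form only repairs row lengths.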

The experiments consist of two iterated identical rotations (Figs 16 17) under roundshying and two iterated random rotation sequences under truncation and rounding Fig 18 gives the direction errors for two sequences of random rotations applied to an input veeshy

19

IMSeq Q LMI EfII to NllnnEwy And

003

Figure 15(a)

IMSeq Q NVCirErn toilia NllnnEwy And

- shy ~

bull I bullbullbull

to-G2

I

0000 4-----------__------------shyo tOO 200

Figure 15(b)

QvOirEfta to NllnnEwy And

Figure 15(c)

Figure 16 About Here

Figure 16 Iterated identical rotations of the unit vector in direction (1 1 -1) by 1 radian about the axis (111) computed to 10 bits under rounding Solid line - no cross products Dotted line - one cross product Dashed line - two cross products (a) Length errors (b) Direction errors

Figure 17 About Here

Figure 17 As for previous figure but the rotation is of the unit vector in direction (123) by iterated rotations of 2 radians about the axis (3-2-1)

tor In no case is there significant difference between the performances on the length error at 200 iterations but the one- or two crossproduct forms significantly outperform the no crossproduct form over 400 iterations since the latter exhibits systematically increasing negative length error and the former do not

For direction error, few conclusions can be drawn about relative performance under iterated identical rotation sequences. It is perhaps surprising that the no-crossproduct form usually performs here at least as well as the other two over a small number of iterations, and that for random sequences there is no clear winner between normalization types, even approximating with truncation. Thus the symptoms noticed in the last two sections, involving long-term relative effects of the normalization form, were not noticed over 200 iterations. The one- and two-crossproduct forms may be more stable over long iteration, but the issue of optimal normalization remains open.

4.8 Average Error of Concatenated Random Rotations

Which representation is best, or worst? To wind up this overlong exercise in trying to derive structure from noise, let us consider average behavior for the task of random rotation sequences (perhaps of more general interest than iterated identical rotations). Technical decisions on normalization are based on the conclusions of previous sections: approximation is by rounding, with normalization only after the last rotation. Matrix normalization is done with the one-crossproduct method. Each run is a single experiment like that of Section 4.5, only performed (with the same data) at a number of different precisions.

Such a set of runs yields length and direction error surfaces plotted against the precision.
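A run of this kind, the same concatenation experiment repeated over a range of mantissa precisions, might be sketched as follows. The z-axis rotation and the 0.7-radian angle are hypothetical stand-ins for the paper's random sequences, chosen only to keep the sketch short.

```python
import math

def chop(x, bits):
    """Round x to `bits` bits of mantissa."""
    if x == 0.0:
        return 0.0
    e = math.floor(math.log2(abs(x))) + 1
    s = 2.0 ** (bits - e)
    return round(x * s) / s

def rotate_z(v, ang, bits):
    """Apply a z-axis rotation with every product and sum rounded to `bits` bits."""
    c, s = chop(math.cos(ang), bits), chop(math.sin(ang), bits)
    x = chop(chop(c * v[0], bits) - chop(s * v[1], bits), bits)
    y = chop(chop(s * v[0], bits) + chop(c * v[1], bits), bits)
    return (x, y, v[2])

def length_error_surface(n_rot=25, bits_range=range(6, 22)):
    """Length error of n_rot concatenated rotations, one entry per precision."""
    out = {}
    for bits in bits_range:
        v = (1.0, 0.0, 0.0)
        for _ in range(n_rot):
            v = rotate_z(v, 0.7, bits)
        out[bits] = abs(math.sqrt(v[0]**2 + v[1]**2 + v[2]**2) - 1.0)
    return out
```

Plotting the resulting dictionary (error against bits of precision) gives one slice of the error surfaces described in the text.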

Figure 18 About Here

Figure 18: Direction errors only, for a sequence of random rotations applied to an input vector. Legend as in previous two figures. (a) Truncation to 10 bits. (b) Rounding to 10 bits.

Figure 16(a) Figure 16(b)

Figure 17(a) Figure 17(b)

Figure 18(a) Figure 18(b)

Figure 19(a) Figure 19(b)

Figure 19(c) Figure 19(d)

Rep         Repeated Rot                      Random Seq
Vec(Mat)    (9N+17)*   (6N+13)+   1 sc       26N*       19N+       N sc
Vec(Quat)   (18N+4)*   (12N)+     1 sc       22N*       12N+       N sc
Quat        (16N+22)*  (12N+12)+  1 sc       (20N+18)*  (12N+12)+  N sc
Matrix      (27N+26)*  (18N+19)+  1 sc       (44N+9)*   (31N+6)+   N sc

Table 4: Operation counts for N-long rotation sequences. The long form of matrix multiplication is used here. Vec(Mat) is the vector representation with intermediate transformation by matrices, Vec(Quat) with transformation by the quaternion method. Multiplications, additions, and sine-cosine pairs are denoted by *, +, and sc.

matrix transformation), since matrix-vector multiplication costs one third of matrix-matrix multiplication.
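The per-composition terms in Table 4 (16 multiplications and 12 additions per quaternion product, against 27 and 18 for the long form of 3x3 matrix product) can be checked mechanically with an instrumented scalar type; a sketch, not from the paper:

```python
class CountingScalar:
    """Float wrapper that counts multiplications and additions/subtractions."""
    muls = 0
    adds = 0
    def __init__(self, v):
        self.v = v
    def __mul__(self, o):
        CountingScalar.muls += 1
        return CountingScalar(self.v * o.v)
    def __add__(self, o):
        CountingScalar.adds += 1
        return CountingScalar(self.v + o.v)
    def __sub__(self, o):
        CountingScalar.adds += 1
        return CountingScalar(self.v - o.v)

def qmul(p, q):
    # One quaternion composition: 16 multiplications, 12 additions
    return (p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3],
            p[0]*q[1] + p[1]*q[0] + p[2]*q[3] - p[3]*q[2],
            p[0]*q[2] - p[1]*q[3] + p[2]*q[0] + p[3]*q[1],
            p[0]*q[3] + p[1]*q[2] - p[2]*q[1] + p[3]*q[0])

def matmul(a, b):
    # One "long form" 3x3 matrix composition: 27 multiplications, 18 additions
    return [[a[i][0]*b[0][j] + a[i][1]*b[1][j] + a[i][2]*b[2][j]
             for j in range(3)] for i in range(3)]

q = tuple(CountingScalar(x) for x in (1.0, 0.0, 0.0, 0.0))
CountingScalar.muls = CountingScalar.adds = 0
qmul(q, q)
quat_cost = (CountingScalar.muls, CountingScalar.adds)    # (16, 12)

m = [[CountingScalar(float(i == j)) for j in range(3)] for i in range(3)]
CountingScalar.muls = CountingScalar.adds = 0
matmul(m, m)
mat_cost = (CountingScalar.muls, CountingScalar.adds)     # (27, 18)
```

The counts match the per-iteration coefficients of the Quat and Matrix rows of Table 4.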

4.10 Conclusions and Questions

The conclusions we can draw at this point are the following:

1. Numerical simulation of variable precision arithmetic has yielded some tentative conclusions, but in many interesting cases they are rather delicate and depend on niceties of the implementation. Accurate performance predictions may only be possible with faithful simulation of the particular hardware and software at issue.

2. Practical differences in numerical accuracy tend to disappear at reasonable precisions. After 200 iterations of a rotation, 53 bits of mantissa precision (the double representation in the C language) yields no error to 10 decimal places. With 24 bits (the float representation), effects were noticeable in the 5th to 7th decimal place with all methods.

3. Normalization is vital before quaternion or matrix representations are applied to vectors, but not as important during the composition of representations, and the extra numerical computation of frequent normalization may actually decrease accuracy in the final result.

4. The vector (with matrix transformation) and matrix representations for rotation sequences seem to outperform the quaternion for accuracy.

5. The vector (with matrix transformation) representation for rotation sequences, with a single normalization of vector length at the end, is most computationally efficient and has least length error on the average.

6. Averaged over many trials, the difference in accuracy of the three methods amounts only to about a single bit of precision for a small number of rotation compositions (say 25 or fewer).

7. Matrices are the least computationally efficient on rotation sequences.



There are also some remaining basic questions:

1. What is a reasonable analytic approach to the numerical accuracy problem for interesting rotational tasks (say, sequences of random rotations under rounding)?

2. How should optimal matrix or quaternion normalization methods be defined and constructed?

5 Discussion

The area of robust computational geometry is becoming popular, with an annual international conference and increasingly sophisticated theoretical formulations. Very little of that work (if any) is related to rotation algorithms. Symbolic algebra systems can help with analysis [4]. A good treatment of computer arithmetic and rounding appears in [15], with an application to detecting line intersection. Serious qualitative representational problems arise when issues of topology (line connectivity) are at stake, and much of the work is on computing line intersection accurately with finite precision arithmetic [19]. For instance, Milenkovic's analysis is based on facts about line intersection, such as the number of extra bits that are necessary to guarantee a specific precision of answer. Some of these facts are surprising: it takes at least three times the input precision to calculate a line intersection location to the same precision [11]. Robust polygon and polyhedron intersection algorithms, as well as various line-sweeping algorithms, are based on accurate line intersection algorithms [8].

Several points of comparison between rotation parameterizations and representations have been raised in the preceding sections. It is perhaps worth repeating that little evidence for dramatic differences in numerical accuracy has emerged, though the evidence might tend to favor matrices slightly and to condemn the vector representation.

One of the common applications of rotational transforms is to affect several vectors at once, either in the context of transforming a set of points around (as in graphics) or in handling coordinate systems represented as n points (n - 1 of them at points at infinity representing axis directions, and one representing the origin of coordinates, in a homogeneous coordinate system). In the first case the repeated application of the same transform will be more economical if the work is done in the matrix representation. In the second the same may well be true, but of course the whole point of homogeneous coordinates is to make the projective, a fortiori affine, transforms into linear ones implemented with matrix multiplication. This means, for example, that coordinate systems, rigid transformations including translation and rotation, and point projective transforms (simple camera models) can all be represented as matrices and composed with matrix multiplication, and that the operation of such transforms on vectors is simply matrix-vector multiplication. In this context homogeneous coordinates by definition require the matrix representation.
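The homogeneous-coordinate point can be illustrated with a minimal sketch (the helper names are my own): a rotation and a translation packed into one 4x4 matrix, so that composing rigid transforms is matrix multiplication and applying them is matrix-vector multiplication.

```python
def homogeneous(r, t):
    """4x4 rigid transform [R | t; 0 0 0 1] from 3x3 rotation r and translation t."""
    return [r[i] + [t[i]] for i in range(3)] + [[0.0, 0.0, 0.0, 1.0]]

def matmul4(a, b):
    """Compose two homogeneous transforms by 4x4 matrix multiplication."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply4(m, v):
    """Apply a homogeneous transform to a 3-D point."""
    w = list(v) + [1.0]
    return [sum(m[i][j] * w[j] for j in range(4)) for i in range(3)]

# Example: rotate 90 degrees about Z, then translate one unit along Z.
rz90 = [[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
h = homogeneous(rz90, [0.0, 0.0, 1.0])
```

Composing `h` with itself via `matmul4` gives the same result as applying `h` twice, which is the linearity property the text appeals to.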

If the rotation at issue is a general one (say, specified in conic (φ, n) parameters), then conversion either to the quaternion or matrix form is easy. Thus the representations in this case do not dictate the semantics, or how one wants to think about the rotation. If the rotation is specified in terms of rotations about (X, Y, Z) coordinate axes, then the matrix representation is easily and robustly constructed.
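The easy conversions mentioned here can be sketched with standard formulas (a reconstruction, with φ the rotation angle and n the, not necessarily unit, axis):

```python
import math

def conic_to_quat(phi, n):
    """Euler-Rodrigues (unit quaternion) parameters from angle phi and axis n."""
    l = math.sqrt(sum(c * c for c in n))
    s = math.sin(phi / 2.0)
    return (math.cos(phi / 2.0), s * n[0] / l, s * n[1] / l, s * n[2] / l)

def conic_to_mat(phi, n):
    """Rotation matrix from angle/axis, via the quaternion components."""
    w, x, y, z = conic_to_quat(phi, n)
    return [[1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(y*z + w*x),     2*(x*z - w*y),     1 - 2*(x*x + y*y)]]
```

For example, a 90-degree rotation about the Z axis yields the familiar matrix with rows (0, -1, 0), (1, 0, 0), (0, 0, 1).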

Real-time applications involving concatenating rotations and applying them to a small number of vectors (perhaps one: the position of a robot hand) need detailed analysis to see which representation is the most efficient. Similarly for more complex geometric calculations.

A common application of rotation representations in vision is to express the motion of the observer. Often the rotational motions, for mechanical or mathematical reasons, are expressed as rotations about some fixed axes, which in practice are usually orthogonal. Examples are the motions induced by pan and tilt camera platforms, or the expression of egomotion in terms of rotations about the observer's local X, Y, Z axes. In this case the matrix representation is the clear choice to maximize efficiency and intuition.

Reconstructing the rotation that transformed known points is a useful vision task that requires fitting data to a model of the transformation. The straightforward route, followed since Roberts' thesis [17], is to do a linear least squares fit to the transform's matrix representation. This approach is perhaps risky, since the matrix elements are not independent, and the error being minimized thus may not be the one desired. The alternative of using conic parameters has been successfully used [12] and may well be a better idea. Recently, more sophisticated Kalman filter methods have been used to derive motion from point matches through time [5]. Model-matching with constraints allows a model to be matched to a class of objects, and promising approaches are emerging that use hybrid methods of symbolic algebra (implementing polynomial triangulation to get explicit solutions for known constraints) followed by root-finding and sophisticated minimization techniques to search for minimum-error solutions [13,14].

A less common but important set of applications is to reason about and manipulate rotations themselves. For example, one may want to sample rotation space evenly [9], or to create smoothly interpolated paths in rotation space between a given set of rotations [16]. In these cases the Euler-Rodrigues parameters are the most convenient.

Another example is the use since 1932 of quaternions in the neurophysiological literature to describe eye kinematics [22]. Models of human three-dimensional oculomotor performance, especially for the vestibulo-ocular reflex and feedback models of saccades, involve computations on rotations that are much more stable when carried out in conic or Euler-Rodrigues parameters using quaternion arithmetic. Subtractive feedback laws adequate for simple (left or right) eye movements do not generalize to arbitrary rotational axes. The quaternion model uses multiplicative feedback, explains human performance better than similar models based on rotation matrices, and leads to explicit predictions, such as a four-channel saccade computation [21].


6 Bibliography

[1] S. L. Altmann. Rotations, Quaternions, and Double Groups. Clarendon Press, Oxford, 1986.

[2] C. M. Brown. Gaze controls with interactions and delays. In DARPA Image Understanding Workshop (submitted to IEEE-TSMC), pages 200-218, May 1989.

[3] C. M. Brown. Prediction in gaze and saccade control. Technical Report OUEL 1771/89 (UR TR-295), Oxford University Dept. of Engineering Science (U. Rochester Computer Science Dept.), May 1989.

[4] C. I. Connolly, D. Kapur, J. L. Mundy, and R. Weiss. GeoMeter: A system for modeling and algebraic manipulation. In Proceedings, DARPA Image Understanding Workshop, pages 797-804, May 1989.

[5] O. D. Faugeras, F. Lustman, and G. Toscani. Motion and structure from motion from point and line matches. In Proceedings, International Conference on Computer Vision, pages 25-34, June 1987.

[6] J. D. Foley and A. van Dam. Fundamentals of Interactive Computer Graphics. Addison-Wesley, 1982.

[7] W. R. Hamilton. On quaternions; or on a new system of imaginaries in algebra. Philosophical Magazine, 3rd Ser., 25:489-495, 1844.

[8] C. Hoffmann and J. Hopcroft. Towards implementing robust geometric computations. In Proceedings, ACM Symposium on Computational Geometry, 1988.

[9] B. K. P. Horn. Robot Vision. MIT Press / McGraw-Hill, 1986.

[10] K. Kanatani. Group Theory in Computer Vision. Springer-Verlag, 1989.

[11] V. J. Milenkovic. Robust geometric computations for vision and robotics. In Proceedings, DARPA Image Understanding Workshop, pages 764-773, May 1989.

[12] J. L. Mundy. Private communication, August 1989.

[13] J. L. Mundy. The application of symbolic algebra techniques to object modeling. Lecture, University of Oxford, August 1989.

[14] J. L. Mundy. Symbolic representation of object models. In preparation, August 1989.

[15] T. Ortmann, G. Thiemt, and C. Ullrich. Numerical stability of geometric algorithms. In Proceedings, ACM Symposium on Computational Geometry, 1987.

[16] K. S. Roberts, G. Bishop, and S. K. Ganapathy. Smooth interpolation of rotational motion. In Computer Vision and Pattern Recognition 1988, pages 724-729, June 1988.

[17] L. G. Roberts. Machine perception of three-dimensional solids. In J. T. Tippett et al., editors, Optical and Electro-Optical Information Processing, pages 159-197. MIT Press, 1968.

[18] O. Rodrigues. Des lois géométriques qui régissent les déplacements d'un système solide dans l'espace, et de la variation des coordonnées provenant de ces déplacements considérés indépendamment des causes qui peuvent les produire. Journal de Mathématiques Pures et Appliquées, 5:380-440, 1840.

[19] D. Salesin, J. Stolfi, and L. Guibas. Epsilon geometry: building robust algorithms from imprecise computations. In Proceedings, ACM Symposium on Computational Geometry, 1989.

[20] R. Taylor. Planning and execution of straight line manipulator trajectories. In J. M. Brady, J. M. Hollerbach, T. L. Johnson, T. Lozano-Perez, and M. T. Mason, editors, Robot Motion: Planning and Control. MIT Press, 1982.

[21] D. Tweed and T. Vilis. Implications of rotational kinematics for the oculomotor system in three dimensions. Journal of Neurophysiology, 58(4):832-849, 1987.

[22] G. Westheimer. Kinematics for the eye. Journal of the Optical Society of America, 47:967-974, 1932.


Figure 3 About Here

Figure 3: The components of the quaternion and matrix are scaled and shifted sinusoids. Here representative quaternion and matrix element amplitudes decay exponentially through time under iterative rotation with truncation. Under rounding the cumulative decay is negligible. Dotted line - matrix element [0,0]; solid line - a quaternion element.

Figure 4 About Here

Figure 4: Rotating the unit vector in direction [1, 1, -1] around the axis [1, 1, 1] by varying amounts: (a) 0.1 rad, (b) 1.0 rad, (c) 1.7 rad. The errors are plotted logarithmically against the number of bits of mantissa precision, under truncation. Hollow and dark squares - direction error with quaternion and matrix representations; star and cross - length error with quaternion and matrix representations. Variations in error magnitude and its rank ordering result from varying rotation magnitude. The largest error at 22 bits precision is 0.0001 (the smallest representable number is about 10^-9).

Fig. 4 shows the results, comparing the conic (eq. (6)) with matrix rotation computation under truncation. In general, increasing the rotation magnitude does not increase error, the error magnitudes are not predictably ordered, and the error magnitude is at worst 10^-6 with 22 bits of mantissa precision.

Next, 100 individual random rotation - random vector pairs were generated, with the rotation applied to the vector at a single precision (here, as throughout the paper until Section 4.8, 10 bits) under both truncation (case RaTr) and rounding (case RaRd). (Statistics for several cases are collected in Table 2 in Section 4.6.)

The results are shown in two different styles in Figs. 5, 6, and 7. Fig. 5 displays length and direction errors of the 100 trials under truncation, indexed along the X axis although they are each independent. As predicted in Section 4.2, the matrix representation outperforms quaternions by a factor of about two. The length errors under truncation show the nonzero mean one would expect. The same data are presented in Fig. 6 as two projections of the three-dimensional error vectors, all oriented so that the true answer lies along the Z axis facing down, with its point at the origin, aiming in the -Z direction. The X-Y projection shows how the direction errors arise, and the X-Z projection shows how the length errors arise (with the approximate vectors all being too short). Exercise: Construct the rotation that points a given (x, y, z) vector along the Z axis, (a) from rotations about the (X, Y, Z) axes, (b) using quaternions.
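One possible answer to the quaternion half (b) of the exercise, as a sketch with hypothetical helper names: the shortest-arc rotation taking v to the +Z axis. The construction assumes v is not antiparallel to Z.

```python
import math

def qmul(p, q):
    """Hamilton quaternion product."""
    return (p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3],
            p[0]*q[1] + p[1]*q[0] + p[2]*q[3] - p[3]*q[2],
            p[0]*q[2] - p[1]*q[3] + p[2]*q[0] + p[3]*q[1],
            p[0]*q[3] + p[1]*q[2] - p[2]*q[1] + p[3]*q[0])

def rotate(q, v):
    """Apply q to v by conjugation q v q*."""
    w, x, y, z = q
    p = qmul(qmul(q, (0.0, v[0], v[1], v[2])), (w, -x, -y, -z))
    return (p[1], p[2], p[3])

def point_along_z(v):
    """Shortest-arc unit quaternion taking v to the +Z axis.

    Scalar part 1 + u.z, vector part u x z, then normalized."""
    n = math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)
    u = (v[0] / n, v[1] / n, v[2] / n)
    q = (1.0 + u[2], u[1], -u[0], 0.0)     # u x z_hat = (u_y, -u_x, 0)
    m = math.sqrt(sum(c * c for c in q))
    return tuple(c / m for c in q)
```

Applying `rotate(point_along_z(v), v)` sends v to (0, 0, |v|), which is how the error-vector plots of Fig. 6 can be oriented.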

Figure 5 About Here

Figure 5: Case RaTr. (a) Length errors of 100 different random (rotation, vector) pairs at 10 bits precision under truncation. Circles plot the quaternion-vector errors, X's the matrix-vector errors. (b) As for (a), but direction errors.

Figure 1

Figure 2

Figure 3

Figure 6 About Here

Figure 6: The same case and data as the previous figure, plotted as error vectors, where the correct answer vector points to the origin and lies along the positive Z axis (see text). (a) Error vectors projected onto the X-Z plane. Circles plot the quaternion-vector errors, X's the matrix-vector errors. (b) As for (a), but projection onto the X-Y plane.

Figure 7 About Here

Figure 7: Case RaRd. (a), (b): As for the previous figure, with the same input rotations and vectors, but with operations under rounding to 10 bits of precision.

Fig. 7 is similar to Fig. 6, except that rounding is employed (note the scale change). Not only does the length error bias disappear as expected, but the difference between the methods decreases, with the matrix implementation maintaining a slight advantage.

4.4 Iterated Rotations and Normalization

The second experiment is to iterate the same small rotation of φ around a particular direction, using three rotation composition methods. Quaternion multiplication and matrix multiplication produce iterated quaternion and matrix rotation representations. At the end of the iterations the resulting rotation representation is applied to the input vector once to determine the rotated vector. A third method, called the vector method here, iteratively applies the identical incremental rotation to the vector at each iterative step, preserving only the rotated vector. There are thus two cases (quaternion and matrix) of composing rotation representations, and one case (vector) of composing the application of the rotation to the vector. Previous results indicate that the matrix method is both more accurate and more efficient for applying a single rotation, and empirically the results are much better with this choice as well, so rotations are implemented in the vector method by matrix-vector multiplication.

These experiments were carried out under both truncation and rounding. Truncation is sometimes used to get at worst-case performance in a numerical analysis sense. Such an approach might be useful in analysis of a non-iterative computational scheme (such as a single rotation), but the cumulative scaling effects described in Section 4.2 overwhelm numerical precision effects. The errors engendered through iterative truncation turn out to be qualitatively similar to those that arise under rounding. These results are thus good for geometric intuition about the methods, and illustrate the importance of normalization. Fig. 8(a) and (b) show the (shocking) errors that result if no normalization takes place during the iterated rotation composition, and if the resulting non-normalized product quaternion or matrix rotation is applied to the input vector. This is case IS0 (Iterative Sequence, 0 normalizations). Without normalization, all the representations show cumulative errors that are easy to understand.
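The ten-bit arithmetic used throughout these experiments can be mimicked with a helper like the following. This is a plausible sketch; the paper's actual variable-precision simulator is not shown.

```python
import math

def chop(x, bits=10, mode="truncate"):
    """Cut x to `bits` bits of mantissa, by truncation or rounding."""
    if x == 0.0:
        return 0.0
    e = math.floor(math.log2(abs(x))) + 1      # mantissa scaled into [0.5, 1)
    scale = 2.0 ** (bits - e)
    m = x * scale                              # mantissa as a `bits`-bit value
    return (math.trunc(m) if mode == "truncate" else round(m)) / scale
```

Truncation always moves the mantissa toward zero, which is the source of the systematic shrinking effects discussed in this section; rounding errs high and low, allowing cancellation.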

Figure 5(a) Figure 5(b)

Figure 6(a) Figure 6(b)

Figure 8 About Here

Figure 8: Case IS0. Iterative transformation of the vector [1, 1, -1] about the axis [1, 1, 1] by 1 rad, 200 times. Calculations done with ten bits of mantissa precision under truncation. No normalization of the final representation before applying it to the vector. This choice of input parameters results in a representative performance, but not a normative one. (a) Result vector error projected on the X-Z plane. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. The matrix errors are a very tight helix up the Z axis, with the vector errors only slightly larger. (b) As for (a), but showing the X-Y projection. The matrix and vector errors project to the small dark blob at the origin.

First consider the vector representation. It is acted on at each iteration by an identical but errorful transform of eq. (6). We see it in both views spiraling away from the correct initial value, its length shrinking and its direction error increasing. The performance of the vector method might be improved by a factor of two on average by using the matrix transformation at every step (considering the arguments in Section 4.2 and the results shown in Fig. 6). Another important aspect of this method is its variability in performance for different rotations, arising from the variability in the accuracy of the first calculated rotation (again, Fig. 6).

As predicted in Section 4.2, the matrix representation is affected in its scaling property but not significantly in its rotational properties: there is effectively no directional error, but the result shrinks to less than half its original length in 200 iterations.

The quaternion representation acts using eq. (6) by adding two cross product vectors to the original input vector; as the quaternion components are increasingly systematically errorful, they affect the result in the interesting way shown in the figure. One effect is the decaying magnitude of the quaternion; the other is that as the cumulative rotation φ increases from zero, the sin(φ/2) weighting of Λ, the vector part of the quaternion, starts at zero and returns to zero with a period of 2π/t, where t is the truncated value of the incremental rotation angle. Thus with that period the effective rotation error varies sinusoidally, and returns to zero as the magnitude of Λ does.

The same data are plotted, for 400 iterations this time, in Fig. 9. First consider length errors. This presentation emphasizes the exponential decay of the answer's length to zero with the matrix and vector representations. As the quaternion's magnitude ultimately decays to zero, the result vector will be unaffected by the rotation and will remain equal to the input vector, so its length error stabilizes at zero. The short-period sinusoidal character of the length error arises from the arguments of the last paragraph. The initially increasing envelope arises from the increasing inaccuracy caused by the truncation. The ultimate exponential decay of the envelope arises from the shrinking of the quaternion components.

Considering direction errors, the matrix and conic representations have (locally) linearly increasing errors as they spiral away from the true answer direction. (The linear increase cannot be maintained forever on the orientation sphere, of course.) The quaternion method's direction error ultimately becomes purely periodic as the answer vector becomes static. The direction error as a function of iteration number t and angle r between the rotation axis and the input vector is then of the form ((r sin t)^2 + (r - r cos t)^2)^(1/2), or the length of a circular chord from a fixed point on the circle to a point moving around the circle at constant velocity. Before that time the direction error exhibits a more complex two-humped form; this arises from the two separate cross product components of the vector increment in eq. (6). Exercise: Would there be any difference in direction and length errors if the rotation were accomplished by conjugation (eq. (4)) instead of eq. (6)?

Figure 7(a) Figure 7(b)

Figure 8(a) Figure 8(b)

Figure 9 About Here

Figure 9: Case IS0 for 400 iterations (see text). (a) Length error of result vector. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. The vector and matrix errors coincide. (b) As for (a), but showing direction error for the resulting vector. The vector and matrix errors are very close to zero.

Figure 10 About Here

Figure 10: Case IS0 under rounding instead of truncation. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. Though the forms of the errors are similar to the truncation case, their relative magnitudes are more equal. (a) Result vector projected on the X-Z plane. (b) As for (a), but showing the X-Y projection.

Next in this sequence, the last two demonstrations are repeated under rounding, not truncation. The expectation is that, due to the better approximation and possible error cancellations arising from high and low approximations, the errors in the representations will be less, but that systematic errors will still exist. The results (Figs. 10, 11) bear this out; the errors often seem like much smaller, noisy versions of the truncation errors. In Fig. 11 the quaternion errors display the fine two-humped structure longer, since it is not overwhelmed as quickly by shrinking quaternion magnitude. Rounding has a dramatically good effect on the systematic shrinking of matrices and quaternions under iterative self-multiplication.

So far we have, purely for intuition, dealt with the pathological case in which no normalization of rotation representations is performed. The arguably pathological case of approximation through truncation yields insight about the structure of systematic approximation errors in an iterative scheme. From here on we abandon truncation approximation (except in Section 4.7).

Figure 11 About Here

Figure 11: Data of the previous figure, showing length and direction errors. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of result vector. (b) Direction error.

[Plots: Figure 9(a), Figure 9(b); Figure 10(a), Figure 10(b)]

Figure 12 About Here

Figure 12: Case IS1. Rotating the vector (1, 1, -1) by 1 radian increments about the axis (1, 1, 1), with 10 bits precision under rounding, normalizing the quaternion and matrix representations prior to application to the input vector, and normalizing the final vector representation to its initial length. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. The vector representation has lower variance, and the quaternion exhibits marked periodicity. Only the first 100 iterations are shown; the rest are similar. (b) Direction error.

We now look at two different styles of normalization. In the first, no normalization is performed on intermediate representations, but the final quaternion, matrix, and vector are normalized. In the second, the intermediate quaternion, matrix, and vector are normalized at each step. Normalization of the matrix and quaternion is not dependent on the input data, but the length of the input vector must be computed and remembered for the vector representation to be normalized. This fact might be significant if many vectors are being treated at once.

Fig. 12(a) shows the length error when the final quaternion and matrix are normalized prior to application to the input vector, and the incrementally generated vector representation is normalized to its original length (here known to be unity). This case is called IS1, for Iterative Sequence, 1 normalization. The matrix normalization used is the one-crossproduct form; when only one normalization is being performed, the three types of normalization perform virtually identically. The computations were performed under rounding to the usual ten bits of precision. If its initial length is not remembered, the vector representation cannot be normalized, and its error is the same as the non-normalized case of Fig. 11, showing a linear error accumulation.
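The IS1 scheme can be sketched in a few lines. This is an illustrative reconstruction, not the paper's code: for simplicity only the stored quaternion components are rounded to a 10-bit mantissa after each composition, rather than every intermediate arithmetic result, and the standard identity v' = v + 2w(u x v) + 2u x (u x v) stands in for the paper's eq. (6), whose exact form is not shown in this excerpt:

```python
import math

def chop(x, bits=10):
    # round x to a `bits`-bit mantissa; only stored values are rounded here,
    # a simplification of the paper's arithmetic-level simulation
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)              # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2.0 ** bits
    return math.ldexp(round(m * scale) / scale, e)

def qmul(p, q):
    # Hamilton product of quaternions (w, x, y, z)
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def qrotate(q, v):
    # v' = v + 2w(u x v) + 2 u x (u x v) for unit quaternion q = (w, u)
    w, ux, uy, uz = q
    c = (uy*v[2] - uz*v[1], uz*v[0] - ux*v[2], ux*v[1] - uy*v[0])
    cc = (uy*c[2] - uz*c[1], uz*c[0] - ux*c[2], ux*c[1] - uy*c[0])
    return tuple(v[i] + 2.0*w*c[i] + 2.0*cc[i] for i in range(3))

def axis_angle_quat(axis, angle):
    # unit quaternion for a rotation of `angle` radians about `axis`
    n = math.sqrt(sum(a*a for a in axis))
    s = math.sin(angle / 2.0) / n
    return (math.cos(angle / 2.0), axis[0]*s, axis[1]*s, axis[2]*s)

# Case IS1 sketch: compose 100 copies of the same rotation at 10-bit
# precision, normalize once at the end, then apply to the input vector.
step = tuple(chop(c) for c in axis_angle_quat((1.0, 1.0, 1.0), 1.0))
q = step
for _ in range(99):
    q = tuple(chop(c) for c in qmul(q, step))
norm = math.sqrt(sum(c*c for c in q))
q = tuple(c / norm for c in q)        # the single final normalization
v = qrotate(q, (1.0, 1.0, -1.0))
length_error = math.sqrt(sum(c*c for c in v)) - math.sqrt(3.0)
```

Because the single normalization happens before the quaternion touches the vector, the length error of the result is essentially at machine precision; the accumulated 10-bit error survives only as direction error.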

Fig. 12(b) compares direction error for the three representations, normalized as in Fig. 12(a). The comparative performance here is certainly not so clear-cut, and the error accumulations resemble random walks. Alternatively, the performance could depend more on the particular choice of input data (rotation and input vector) than on numerical properties of the methods.

Finally, a natural question concerns the effects of normalization applied after every rotation composition. This is case ISN (N normalizations). Fig. 13 shows the results for length and direction. As might be expected for length error, the normalized vector has the best performance (presumably as accurate as machine precision allows). The quaternion result has a systematically negative mean, higher variance, and periodicity. The matrix length error displayed in Fig. 13(a) results from the no-crossproduct form of matrix normalization. The trend is not an illusion; carried out to 400 iterations, the length error shows a marked systematic increase in the negative direction. On the other hand, iteration of the one-crossproduct normalization resulted in a factor of two worsening of the magnitude of the direction errors for the matrix representation. Repeated normalizations seem to have adversely affected the accuracy of the vector representation after many iter-

[Plots: Figure 11(a), Figure 11(b); Figure 12(a), Figure 12(b)]

Figure 13 About Here

Figure 13: Case ISN. As for the previous figure, but normalizing the rotation representation at each iteration, using no-crossproduct matrix normalization, which improved direction errors but worsened length errors over the one-crossproduct form. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. (b) Direction error.

Figure 14 About Here

Figure 14: Case RS1. As for case IS1 (no normalization of intermediate representations), but using a sequence of random rotations. There is no difference in performance between the different matrix normalizations; no-crossproduct matrix normalization is shown. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector (first 100 values; the rest are similar). (b) Direction error. The rather confusing results at least have some clear qualitative differences.

ations, but more work is needed to tell if this is a repeatable effect. In any event, it seems clear that the repeated normalizations are certainly not improving matters much in this case over case IS1.

The main conclusion here is that the cost of normalizing quaternion or matrix representations after every rotation in a repeated sequence may not be repaid in results, and in fact may be actively harmful. The vector representation seems to do best of the three on length error and worst on direction error. More work (analytic or experimental) would be needed to clarify the performance ranks, or even to recommend or disqualify a representation for this task.

4.5 Concatenated Random Rotations

Here a sequence of random rotations is generated as described earlier, analogous to the sequence of identical rotations in the last experiment. They are applied cumulatively to an input vector. In each case computation is with 10 bits precision under rounding. The cases of a single final normalization and of normalizing every intermediate representation (called cases RS1 and RSN) were investigated, as were both the no- and one-crossproduct forms of matrix normalization. The results, comparable to those in cases IS1 and ISN (Figs. 12 and 13), are shown in Figs. 14 and 15.
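One way to realize a sequence of random rotations (a sketch under my own assumptions; the paper's generator is described in an earlier section not shown here, and the cube-sampled axes below are not uniform over the sphere) is to draw a random axis and angle, convert with Rodrigues' formula, and compose:

```python
import math
import random

def axis_angle_matrix(axis, angle):
    # Rodrigues' formula: R = cos(t) I + sin(t) [a]_x + (1 - cos(t)) a a^T
    x, y, z = axis
    n = math.sqrt(x*x + y*y + z*z)
    x, y, z = x/n, y/n, z/n
    c, s = math.cos(angle), math.sin(angle)
    C = 1.0 - c
    return [[c + x*x*C,   x*y*C - z*s, x*z*C + y*s],
            [y*x*C + z*s, c + y*y*C,   y*z*C - x*s],
            [z*x*C - y*s, z*y*C + x*s, c + z*z*C]]

def matmul(a, b):
    # 3x3 matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# compose 100 random rotations in full double precision
random.seed(0)
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
for _ in range(100):
    axis = [random.uniform(-1.0, 1.0) for _ in range(3)]
    R = matmul(axis_angle_matrix(axis, random.uniform(0.0, math.pi)), R)
```

Composed in full double precision the product stays orthogonal to roughly machine precision; the experiments in this section repeat the same loop with results rounded to 10 bits.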

The results indicate that the vector representation remains better than the other two for length errors, but for direction errors the results are unclear. It is hard to draw clear distinctions either between the normalization methods or the representations. Again the form of matrix normalization has some annoying effects, which motivate the next section.

[Plots: Figure 13(a), Figure 13(b); Figure 14(a), Figure 14(b)]

Figure 15 About Here

Figure 15: Case RSN. As for case RS1, but normalizing all intermediate representations. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error, using one-crossproduct matrix normalization. (b) Direction error, using the one-crossproduct form. (c) Direction error, using the no-crossproduct form.

Case    |   mu-vec  |   mu-mat  |  mu-quat  | sig-vec  | sig-mat  | sig-quat
--------|-----------|-----------|-----------|----------|----------|----------
RaTr  L |     -     |  0.006225 |  0.005026 |    -     | 0.001181 | 0.003369
RaTr  D |     -     |  0.001787 |  0.004578 |    -     | 0.001094 | 0.002307
RaRd  L |     -     | -0.000310 | -0.000225 |    -     | 0.001087 | 0.001263
RaRd  D |     -     |  0.000965 |  0.001180 |    -     | 0.000542 | 0.000759
IS1   L |  0.000276 | -0.000260 | -0.000262 | 0.000466 | 0.000857 | 0.001648
IS1   D |  0.003694 |  0.004063 |  0.001435 | 0.001705 | 0.002298 | 0.001123
ISN   L |  0.000137 | -0.001772 | -0.000811 | 0.000533 | 0.001106 | 0.001312
ISN   D |  0.005646 |  0.004844 |  0.001606 | 0.002786 | 0.002872 | 0.001045
RS1   L | -0.000252 | -0.000162 | -0.000157 | 0.000521 | 0.001612 | 0.002022
RS1   D |  0.003710 |  0.005854 |  0.006065 | 0.002150 | 0.003686 | 0.006065
RSN   L |  0.000164 | -0.000602 | -0.001097 | 0.000473 | 0.000703 | 0.001666
RSN   D |  0.002269 |  0.009830 |  0.005960 | 0.000874 | 0.004230 | 0.002236
RSN   D |  0.002269 |  0.005187 |  0.005960 | 0.000874 | 0.002248 | 0.002236

Table 2: Statistics (mean mu and standard deviation sigma) for length (L) and direction (D) errors for some of the cases illustrated in this section. Cases RaTr and RaRd are the independent random rotations; IS1 and ISN are the iterative sequences (one normalization, and normalization of each intermediate representation); RS1 and RSN are the random sequences. Only the first 100 data values are used from the iterated rotations, to diminish the effect of longer-term trends. The second RSN D line is with no-crossproduct normalization (which only affects the matrix representation).

4.6 Statistical Summary

Table 2 summarizes some of the data from the cases so far.

4.7 Matrix Normalization

Some experiments explored the performance of the three types of matrix normalization (no, one, or two cross products) in the context of iterated identical and random rotations.
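The three forms are defined earlier in the paper (outside this excerpt); the sketch below is one plausible reading of them, with row-vector conventions and function names of my own choosing:

```python
import math

def _unit(v):
    # scale a 3-vector to unit length
    n = math.sqrt(sum(c*c for c in v))
    return [c / n for c in v]

def _cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def normalize_no_cross(R):
    # scale each row to unit length; the rows may remain non-orthogonal
    return [_unit(r) for r in R]

def normalize_one_cross(R):
    # keep the first two rows (unit-scaled) and rebuild the third as their
    # cross product, which is exactly orthogonal to both
    r1, r2 = _unit(R[0]), _unit(R[1])
    return [r1, r2, _cross(r1, r2)]

def normalize_two_cross(R):
    # full re-orthogonalization: r3 from r1 x r2, then r2 from r3 x r1
    r1 = _unit(R[0])
    r3 = _unit(_cross(r1, R[1]))
    return [r1, _cross(r3, r1), r3]
```

The no-crossproduct form is the cheapest but leaves the rows non-orthogonal; the two-crossproduct form returns a fully orthonormal matrix.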

The experiments consist of two iterated identical rotations (Figs. 16, 17) under rounding, and two iterated random rotation sequences under truncation and rounding. Fig. 18 gives the direction errors for two sequences of random rotations applied to an input vec-

[Plots: Figure 15(a), Figure 15(b), Figure 15(c)]

Figure 16 About Here

Figure 16: Iterated identical rotations of the unit vector in direction (1, 1, -1) by 1 radian about the axis (1, 1, 1), computed to 10 bits under rounding. Solid line - no cross products; dotted line - one cross product; dashed line - two cross products. (a) Length errors. (b) Direction errors.

Figure 17 About Here

Figure 17: As for the previous figure, but the rotation is of the unit vector in direction (1, 2, 3) by iterated rotations of 2 radians about the axis (3, -2, -1).

tor. In no case is there significant difference between the performances on the length error at 200 iterations, but the one- or two-crossproduct forms significantly outperform the no-crossproduct form over 400 iterations, since the latter exhibits systematically increasing negative length error and the former do not.

For direction error, few conclusions can be drawn about relative performance under iterated identical rotation sequences. It is perhaps surprising that the no-crossproduct form usually performs here at least as well as the other two over a small number of iterations, and that for random sequences there is no clear winner between normalization types, even when approximating with truncation. Thus the symptoms noticed in the last two sections, involving long-term relative effects of the normalization form, were not noticed over 200 iterations. The one- and two-crossproduct forms may be more stable over long iteration, but the issue of optimal normalization remains open.

4.8 Average Error of Concatenated Random Rotations

Which representation is best, or worst? To wind up this overlong exercise in trying to derive structure from noise, let us consider average behavior for the task of random rotation sequences (perhaps of more general interest than iterated identical rotations). Technical decisions on normalization are based on the conclusions of previous sections: approximation is by rounding, with normalization only after the last rotation. Matrix normalization is done with the one-crossproduct method. Each run is a single experiment like that of Section 4.5, only performed (with the same data) at a number of different precisions.

Such a set of runs yields length and direction error surfaces plotted against the precision

Figure 18 About Here

Figure 18: Direction errors only, for a sequence of random rotations applied to an input vector. Legend as in previous two figures. (a) Truncation to 10 bits. (b) Rounding to 10 bits.

[Plots: Figure 16(a), Figure 16(b); Figure 17(a), Figure 17(b); Figure 18(a), Figure 18(b); Figure 19(a)-(d)]

Rep       |        Repeated Rot          |        Random Seq
----------|------------------------------|----------------------------
Vec(Mat)  | (9N+17)*, (6N+13)+, 1 sc     | 26N*, 19N+, N sc
Vec(Quat) | (18N+4)*, (12N)+, 1 sc       | 22N*, 12N+, N sc
Quat      | (16N+22)*, (12N+12)+, 1 sc   | (20N+18)*, (12N+12)+, N sc
Matrix    | (27N+26)*, (18N+19)+, 1 sc   | (44N+9)*, (31N+6)+, N sc

Table 4: Operation counts for N-long rotation sequences. The long form of matrix multiplication is used here. Vec(Mat) is the vector representation with intermediate transformation by matrices; Vec(Quat), with transformation by the quaternion method. Multiplications, additions, and sine-cosine pairs are denoted by *, +, and sc respectively.
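Table 4's formulas are easy to mechanize; this small helper (an illustration with names of my own choosing, transcribing the table's entries) returns (multiplications, additions, sine-cosine pairs) for a given representation and task:

```python
# Operation-count formulas transcribed from Table 4 for an N-long rotation
# sequence: each entry maps N to (multiplications, additions, sin-cos pairs).
TABLE4 = {
    "Vec(Mat)":  {"repeated": lambda N: (9*N + 17, 6*N + 13, 1),
                  "random":   lambda N: (26*N, 19*N, N)},
    "Vec(Quat)": {"repeated": lambda N: (18*N + 4, 12*N, 1),
                  "random":   lambda N: (22*N, 12*N, N)},
    "Quat":      {"repeated": lambda N: (16*N + 22, 12*N + 12, 1),
                  "random":   lambda N: (20*N + 18, 12*N + 12, N)},
    "Matrix":    {"repeated": lambda N: (27*N + 26, 18*N + 19, 1),
                  "random":   lambda N: (44*N + 9, 31*N + 6, N)},
}

def cost(rep, task, N):
    # look up the closed-form counts for one representation and task
    return TABLE4[rep][task](N)
```

For N = 10, for instance, Vec(Mat) on a repeated rotation costs (107, 73, 1) against (296, 199, 1) for the matrix representation, illustrating the efficiency ranking claimed in the conclusions.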

matrix transformation) since matrix-vector multiplication costs one third of matrix-matrix multiplication

4.10 Conclusions and Questions

The conclusions we can draw at this point are the following:

1. Numerical simulation of variable precision arithmetic has yielded some tentative conclusions, but in many interesting cases they are rather delicate and depend on niceties of the implementation. Accurate performance predictions may only be possible with faithful simulation of the particular hardware and software at issue.

2. Practical differences in numerical accuracy tend to disappear at reasonable precisions. After 200 iterations of a rotation, 53 bits of mantissa precision (the double representation in the C language) yields no error to 10 decimal places. With 24 bits (the float representation), effects were noticeable in the 5th to 7th decimal place with all methods.

3. Normalization is vital before quaternion or matrix representations are applied to vectors, but not as important during the composition of representations, and the extra numerical computation of frequent normalization may actually decrease accuracy in the final result.

4. The vector (with matrix transformation) and matrix representations for rotation sequences seem to outperform the quaternion for accuracy.

5. The vector (with matrix transformation) representation for rotation sequences, with a single normalization of vector length at the end, is most computationally efficient and has least length error on the average.

6. Averaged over many trials, the difference in accuracy of the three methods amounts only to about a single bit of precision for a small number of rotation compositions (say 25 or fewer).

7. Matrices are the least computationally efficient on rotation sequences.


There are also some remaining basic questions:

1. What is a reasonable analytic approach to the numerical accuracy problem for interesting rotational tasks (say, sequences of random rotations under rounding)?

2. How should optimal matrix or quaternion normalization methods be defined and constructed?

5 Discussion

The area of robust computational geometry is becoming popular, with an annual international conference and increasingly sophisticated theoretical formulations. Very little of that work (if any) is related to rotation algorithms. Symbolic algebra systems can help with analysis [4]. A good treatment of computer arithmetic and rounding appears in [15], with an application to detecting line intersection. Serious qualitative representational problems arise when issues of topology (line connectivity) are at stake, and much of the work is on computing line intersection accurately with finite precision arithmetic [19]. For instance, Milenkovic's analysis is based on facts about line intersection, such as the number of extra bits that are necessary to guarantee a specific precision of answer. Some of these facts are surprising: it takes at least three times the input precision to calculate a line intersection location to the same precision [11]. Robust polygon and polyhedron intersection algorithms, as well as various line-sweeping algorithms, are based on accurate line intersection algorithms [8].

Several points of comparison between rotation parameterizations and representations have been raised in the preceding sections. It is perhaps worth repeating that little evidence for dramatic differences in numerical accuracy has emerged, though the evidence might tend to favor matrices slightly and to condemn the vector representation.

One of the common applications of rotational transforms is to affect several vectors at once, either in the context of transforming a set of points around (as in graphics) or in handling coordinate systems represented as n points (n - 1 of them at points at infinity representing axis directions, and one representing the origin of coordinates in a homogeneous coordinate system). In the first case the repeated application of the same transform will be more economical if the work is done in the matrix representation. In the second the same may well be true, but of course the whole point of homogeneous coordinates is to make the projective (a fortiori, affine) transforms into linear ones implemented with matrix multiplication. This means, for example, that coordinate systems, rigid transformations including translation and rotation, and point projective transforms (simple camera models) can all be represented as matrices and composed with matrix multiplication, and that the operation of such transforms on vectors is simply matrix-vector multiplication. In this context homogeneous coordinates by definition require the matrix representation.
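A minimal sketch of the homogeneous-coordinate point (function names are mine, not the paper's): a rotation and a translation packed into one 4x4 matrix act on points (w = 1) and on directions at infinity (w = 0) by the same matrix-vector multiplication:

```python
def rigid_transform(R, t):
    # pack a 3x3 rotation R and a translation t into one 4x4 matrix
    return [R[0] + [t[0]], R[1] + [t[1]], R[2] + [t[2]],
            [0.0, 0.0, 0.0, 1.0]]

def apply4(M, p):
    # p is (x, y, z, w); w = 1 for points, w = 0 for directions
    return [sum(M[i][j] * p[j] for j in range(4)) for i in range(4)]

# a 90-degree rotation about Z composed with a translation along X
R = [[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
M = rigid_transform(R, [5.0, 0.0, 0.0])
point = apply4(M, [1.0, 0.0, 0.0, 1.0])      # rotated and translated
direction = apply4(M, [1.0, 0.0, 0.0, 0.0])  # rotated only
```

The point comes back as [5.0, 1.0, 0.0, 1.0] and the direction as [0.0, 1.0, 0.0, 0.0]: the translation affects points but not axis directions, exactly the behavior the n-point coordinate-system representation relies on.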

If the rotation at issue is a general one (say, specified in conic (φ, n) parameters), then conversion either to the quaternion or matrix form is easy. Thus the representations in this case do not dictate the semantics, or how one wants to think about the rotation. If the rotation is specified in terms of rotations about (X, Y, Z) coordinate axes, then the matrix representation is easily and robustly constructed.

Real-time applications involving concatenating rotations and applying them to a small number of vectors (perhaps one: the position of a robot hand) need detailed analysis to see which representation is the most efficient. Similarly for more complex geometric calculations.

A common application of rotation representations in vision is to express the motion of the observer. Often the rotational motions, for mechanical or mathematical reasons, are expressed as rotations about some fixed axes, which in practice are usually orthogonal. Examples are the motions induced by pan and tilt camera platforms, or the expression of egomotion in terms of rotations about the observer's local X, Y, Z axes. In this case the matrix representation is the clear choice, to maximize efficiency and intuition.

Reconstructing the rotation that transformed known points is a useful vision task that requires fitting data to a model of the transformation. The straightforward route, followed since Roberts' thesis [17], is to do a linear least squares fit to the transform's matrix representation. This approach is perhaps risky, since the matrix elements are not independent and the error being minimized thus may not be the one desired. The alternative of using conic parameters has been successfully used [12] and may well be a better idea. Recently, more sophisticated Kalman filter methods have been used to derive motion from point matches through time [5]. Model-matching with constraints allows a model to be matched to a class of objects, and promising approaches are emerging that use hybrid methods of symbolic algebra implementing polynomial triangulation to get explicit solutions for known constraints, followed by root-finding and sophisticated minimization techniques to search for minimum-error solutions [13,14].

A less common but important set of applications is to reason about and manipulate rotations themselves. For example, one may want to sample rotation space evenly [9], or to create smoothly interpolated paths in rotation space between a given set of rotations [16]. In these cases the Euler-Rodrigues parameters are the most convenient.
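As an illustration of the interpolation application, the now-standard spherical linear interpolation (in the spirit of [16], though this sketch is my own and not the algorithm from that paper) moves at constant angular velocity between two unit quaternions:

```python
import math

def slerp(q0, q1, u):
    # interpolate between unit quaternions q0 and q1 at parameter 0 <= u <= 1
    dot = sum(a*b for a, b in zip(q0, q1))
    if dot < 0.0:                 # flip one end to take the shorter arc
        q1 = [-c for c in q1]
        dot = -dot
    dot = min(dot, 1.0)
    theta = math.acos(dot)        # angle between the two quaternions
    if theta < 1e-9:              # nearly identical: fall back to lerp
        return [a + u*(b - a) for a, b in zip(q0, q1)]
    s = math.sin(theta)
    w0 = math.sin((1.0 - u)*theta) / s
    w1 = math.sin(u*theta) / s
    return [w0*a + w1*b for a, b in zip(q0, q1)]
```

Halfway (u = 0.5) between the identity and a 90-degree turn about Z, for instance, this returns the 45-degree turn, and the result stays on the unit sphere of Euler-Rodrigues parameters throughout.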

Another example is the use since 1932 of quaternions in the neurophysiological literature to describe eye kinematics [22]. Models of human three-dimensional oculomotor performance, especially for the vestibulo-ocular reflex and feedback models of saccades, involve computations on rotations that are much more stable when carried out in conic or Euler-Rodrigues parameters using quaternion arithmetic. Subtractive feedback laws adequate for simple (left or right) eye movements do not generalize to arbitrary rotational axes. The quaternion model uses multiplicative feedback, explains human performance better than similar models based on rotation matrices, and leads to explicit predictions, such as a four-channel saccade computation [21].

24

6 Bibliography

[1] S. L. Altmann. Rotations, Quaternions and Double Groups. Clarendon Press, Oxford, 1986.

[2] C. M. Brown. Gaze controls with interactions and delays. In DARPA Image Understanding Workshop (submitted IEEE-TSMC), pages 200-218, May 1989.

[3] C. M. Brown. Prediction in gaze and saccade control. Technical Report OUEL 1771/89 (UR TR-295), Oxford University Dept. Engineering Science (U. Rochester Comp. Sci. Dept.), May 1989.

[4] C. I. Connolly, D. Kapur, J. L. Mundy, and R. Weiss. GeoMeter: A system for modeling and algebraic manipulation. In Proceedings DARPA Image Understanding Workshop, pages 797-804, May 1989.

[5] O. D. Faugeras, F. Lustman, and G. Toscani. Motion and structure from motion from point and line matches. In Proceedings International Conference on Computer Vision, pages 25-34, June 1987.

[6] J. D. Foley and A. van Dam. Fundamentals of Interactive Computer Graphics. Addison-Wesley, 1982.

[7] W. R. Hamilton. On quaternions; or on a new system of imaginaries in algebra. Philosophical Magazine, 3rd Ser., 25:489-495, 1844.

[8] C. Hoffmann and J. Hopcroft. Towards implementing robust geometric computations. In Proceedings ACM Symposium on Computational Geometry, 1988.

[9] B. K. P. Horn. Robot Vision. MIT Press/McGraw-Hill, 1986.

[10] K. Kanatani. Group Theory in Computer Vision. Springer Verlag, 1989.

[11] V. J. Milenkovic. Robust geometric computations for vision and robotics. In Proceedings DARPA Image Understanding Workshop, pages 764-773, May 1989.

[12] J. L. Mundy. Private communication, August 1989.

[13] J. L. Mundy. The application of symbolic algebra techniques to object modeling. Lecture, University of Oxford, August 1989.

[14] J. L. Mundy. Symbolic representation of object models. In preparation, August 1989.

[15] T. Ottmann, G. Thiemt, and C. Ullrich. Numerical stability of geometric algorithms. In Proceedings ACM Symposium on Computational Geometry, 1987.

[16] K. S. Roberts, G. Bishop, and S. K. Ganapathy. Smooth interpolation of rotational motion. In Computer Vision and Pattern Recognition 1988, pages 724-729, June 1988.

[17] L. G. Roberts. Machine perception of three-dimensional solids. In J. T. Tippett et al., editors, Optical and Electro-Optical Information Processing, pages 159-197. MIT Press, 1968.

[18] O. Rodrigues. Des lois géométriques qui régissent les déplacements d'un système solide dans l'espace, et de la variation des coordonnées provenant de ses déplacements considérés indépendamment des causes qui peuvent les produire. Journal de Mathématiques Pures et Appliquées, 5:380-440, 1840.

[19] D. Salesin, J. Stolfi, and L. Guibas. Epsilon geometry: building robust algorithms from imprecise computations. In Proceedings ACM Symposium on Computational Geometry, 1989.

[20] R. Taylor. Planning and execution of straight line manipulator trajectories. In J. M. Brady, J. M. Hollerbach, T. L. Johnson, T. Lozano-Pérez, and M. T. Mason, editors, Robot Motion: Planning and Control. MIT Press, 1982.

[21] D. Tweed and T. Vilis. Implications of rotational kinematics for the oculomotor system in three dimensions. Journal of Neurophysiology, 58(4):832-849, 1987.

[22] G. Westheimer. Kinematics for the eye. Journal of the Optical Society of America, 47:967-974, 1932.

26

Page 17: Some Computational Properties of - Semantic Scholar · There are four main parameterizations of the rotation group SO(3). Two of them (rotation angle and axis, and the closely related

MOEBIUS

S11lIP1_11 11~l Figure 1

Fi gure 2

Mat Quat elt TNIlC

08 l

Figure 3

Figure 6 About Here

Figure 6 The same case and data as the previous figure plotted as error vectors where correct answer vector points to the origin and lies along the positive Z axis (see text) (a) Error vectors projected onto X - Z plane Circles plot the quaternion - vector errors Xs the matrix - vector errors (b) As for (a) but projection onto X - Y plane

Figure 7 About Here

Figure 7 Case RaRd (a) (b) as the previous figure with same input rotations and vectors but with operations under rounding to 10 bits of precision

Fig 7 is similar to Fig 6 except that rounding is employed (note the scale change) Not only does the length error bias disappear as expected but the difference between the methods decreases with the matrix implementation maintaining a slight advantage

44 Iterated Rotations and Normalization

The second experiment is to iterate the same small rotation of tgt around a particular direction using three rotation composition methods Quaternion multiplication and matrix multiplication produce iterated quaternion and matrix rotation representations At the end of the iterations the resulting rotation representation is applied to the input vector once to determine the rotated vector A third method called the vector method here iteratively applies the identical incremental rotation to the vector at each iterative step preserving only the rotated vector There are thus two cases (quaternion and matrix) of composing rotation representations and one case (vector) of composing the application of the rotation to the vector Previous results indicate that the matrix method is both more accurate and more efficient for applying a single rotation and empirically the results are much better with this choice as well so rotations are implemented in the vector method by matrix-vector multiplication

These experiments were carried out under both truncation and rounding Truncation is sometimes used to get at worst case performance in a numerical analysis sense Such an approach might be useful in analysis of a non-iterative computational scheme (such as a single rotation) but the cumulative scaling effects described in Section 42 overwhelm numerical precision effects The errors engendered through iterative truncation tum out to be qualitatively similar to those that arise under rounding These results are thus good for geometric intuition about the methods and illustrate the importance of normalization Fig B(a) and (b) show the (shocking) errors that result if no normalization takes place during the iterated rotation composition and if the resulting non-normalized product quaternion or matrix rotation is applied to the input vector This is case ISO (Itershyative Sequence 0 normalizations) Without normalization all the representations show cumulative errors that are are easy to understand

14

AM_ Can e _ I IAnE Trunc

1 a2 o o 0

2e-02 o o o o

o 00

o 0 o o

oo 00 0

bullbullbull o-JlO bullbull ~ o

shy O bull0_0 rrII 0 bullbull bull aoabullbullbullbullbull bull

bull 0 bull a~ abullbullo 0bullbull bull bull 1- a bullbull - bull ~ bull O bull a

bull bull bull 0 0 ~ car001 0

0 0 0deg

302010

o i o 0 0 ltbOo o 0 deg00 0deg 0

o 0 ~ ooa 00

0 00 0 deg0

10 70 10 00

AM_ Can o MIl I OirETMgtC

2e-02

o

o o

o o

o

o

10

0 0

0 0

0 0 00 0 0

00 0

0 0 0

0 0 00

0 00 0 0

o

shy

Figure 5(b) Figure 5(a)

xmiddotz Err v_ Ccln 0 IIITIUI1C

1~2

0 0

0 0 00 0 0 0

ClCO

0 0

0

0 0 00

0 0 0

0

Se02

Xmiddoty Err Y_ Ccln o 0 bll TNllC

0

0 0 o~ 0 0

0 0

0 0middot 0

0 0

~0

0 middot~2

0

000

8 0

Figure 6(b) Figure 6(a)

Figure 8 About Here

Figure 8 Case ISO Iterative transformation of the vector [1 1 -11 about the axis [1 1 I) by 1 rad 200 times Calculations done with ten bits of mantissa precision under truncation No normalization of final representation before applying it to vector This choice of input parameters results in a representative performance but not a normative one (a) Result vector error projected on X - Z plane Solid line - vector representation dotted line - matrix dashed line - quaternion The matrix errors are a very tight helix up the Z axis with the vector errors only slightly larger (b) AB for (a) but showing X - y projection The matrix and vector errors project to the small dark blob at the origin

First consider the vector representation It is acted on at each iteration by an identical but errorful transform of eq (6) We see it in both views spiraling away from the correct initial value its length shrinking and its direction error increasing The performance of the vector method might be improved by a factor of two on average by using the matrix transformation at every step (considering the arguments in Section 42 and the results shown in Fig 6) Another important aspect of this method is its variability in performance for different rotations arising in the variability in the accuracy of the first calculated rotation (again Fig 6)

As predicted in Section 42 the matrix representation is affected in its scaling property but not significantly in its rotational properties there is effectively no directional error but the result shrinks to less than half its original length in 200 iterations

The quaternion representation acts using eq (6) by adding two cross product vectors to the original input vector as the quatemion components are increasingly systematically errorful they affect the result in the interesting way shown in the figure One effect is the decaying magnitude of the quaternion the other is that as the cumulative rotation 4gt increases from zero the sin(4)2) weighting of A the vector part of the quaternion starts at zero and returns to zero with a period of 21rt where t is the truncated value of the incremental rotation angle Thus with that period the effective rotation error varies sinusoidally and returns to zero as the magnitude of A does

The same data are plotted for 400 iterations this time in Fig 9 First consider length errors This presentation emphasizes the exponential decay of the answers length to zero with the matrix and vector representations Thus as the quaternions magnitude ultishymately decays to zero the result vector will be unaffected by the rotation and will remain equal to the input vector so its length error stabilizes at zero The short period sinusoidal character of the length error arises from the arguments of the last paragraph The initially increasing envelope arises from the increasing inaccuracy caused by the truncation The ultimate exponential decay of the envelope arises from the shrinking of the quaternion components

Considering direction errors the matrix and conic representations have (locally) linshyearly increasing errors as they spiral away from the true answer direction (The linear increase cannot be maintained forever on the orientation sphere of course) The quatershy

15

bullzEn_Cono 10 _ middotylEftII_ConO IO _

_3

o o

o

o o

o

bullbullbull 0

bull 0

orI 0

bull bull ~ 0

o o o 0 0

bullO 0

o

0

o omiddot

o

o -5~

o

IOHl3

o 0 o

6~ o

o

Figure 7(a) Figure 7(b)

shy -

I II II II

--1 I i 1 I ~ I I

- I I - I I

-t I f I - I I

I ~ I ~

-- r~t1 ~ ~ ~

0415 00

0415 c--- (i ~__ I J (

---- I I lt ---- J

t 1 I J 1

I

I 1 ~ __ _~

I I J I ~

-~ I----~--- I ~

_J

Figure 8(a) Figure 8(b)

Figure 9 About Here

Figure 9 Case 150 for 400 iterations (see text) (a) Length error of result vector Solid line - vector representation dotted line - matrix dashed line - quaternion The vector and matrix errors coincide (b) As for (a) but showing direction error for the resulting vector The vector and matrix errors are very close to zero

Figure 10 About Here

Figure 10 Case ISO under rounding instead of truncation Solid line - vector represenshytation dotted line - matrix dashed line - quaternion Though the forms of the errors are similar to the truncation case their relative magnitudes are more equal (a) Result vector projected on X - Z plane (b) As for (a) but showing X - Y projection

nion methods direction error ultimately becomes purely periodic as the answer vector becomes static The direction error as a function of iteration number t and angle r beshytween the rotation axis and the input vector is then ofthe form laquorsin t)2+(r-rc05 t)2)12 or the length of a circular chord from a fixed point on the circle to a point moving around the circle at constant velocity Before that time the direction error exhibits a more comshyplex two-humped form this arises from the two separate cross product components of the vector increment in eq (6) Exercise Would there be any difference in direction and length errors if the rotation is accomplished by conjugation (eq (4)) instead of eq (6)

Next in this sequence, the last two demonstrations are repeated under rounding rather than truncation. The expectation is that, due to the better approximation and possible error cancellations arising from high and low approximations, the errors in the representations will be smaller, but that systematic errors will still exist. The results (Figs. 10, 11) bear this out; the errors often look like much smaller, noisy versions of the truncation errors. In Fig. 11 the quaternion errors display the fine two-humped structure longer, since it is not overwhelmed as quickly by shrinking quaternion magnitude. Rounding has a dramatically good effect on the systematic shrinking of matrices and quaternions under iterative self-multiplication.
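The reduced-precision experiments throughout this section depend on quantizing each intermediate result to a fixed number of mantissa bits under either truncation or rounding. The report's simulator is not shown here; the following is a plausible minimal version (the helper name `quantize` is mine):

```python
import math

def quantize(x, bits, mode="round"):
    # Reduce x to `bits` bits of mantissa precision, by truncation or
    # rounding (a sketch of the reduced-precision simulation assumed here).
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)               # x = m * 2**e, with 0.5 <= |m| < 1
    scaled = m * (1 << bits)
    q = math.trunc(scaled) if mode == "trunc" else round(scaled)
    return math.ldexp(q / (1 << bits), e)
```

Truncation always moves a positive value toward zero, while rounding's error is at most half a unit in the last place, which is why the rounded experiments show smaller and less systematic errors.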

So far, purely for intuition, we have dealt with the pathological case in which no normalization of rotation representations is performed. The arguably pathological case of approximation through truncation yields insight about the structure of systematic approximation errors in an iterative scheme. From here on we abandon truncation approximation (except in Section 4.7).

Figure 11 About Here

Figure 11: Data of the previous figure, showing length and direction errors. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of result vector. (b) Direction error.

[Figure 9(a) and Figure 9(b) appear here.]

[Figure 10(a) and Figure 10(b) appear here.]

Figure 12 About Here

Figure 12: Case IS1. Rotating the vector (1, 1, -1) by 1 radian increments about the axis (1, 1, 1), with 10 bits precision under rounding, normalizing the quaternion and matrix representations prior to application to the input vector, and normalizing the final vector representation to its initial length. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. The vector representation has lower variance, and the quaternion exhibits marked periodicity. Only the first 100 iterations are shown; the rest are similar. (b) Direction error.

We now look at two different styles of normalization. In the first, no normalization is performed on intermediate representations, but the final quaternion, matrix, and vector are normalized. In the second, the intermediate quaternion, matrix, and vector are normalized at each step. Normalization of the matrix and quaternion is not dependent on the input data, but the length of the input vector must be computed and remembered for the vector representation to be normalized. This fact might be significant if many vectors are being treated at once.

Fig. 12(a) shows the length error when the final quaternion and matrix are normalized prior to application to the input vector, and the incrementally generated vector representation is normalized to its original length (here known to be unity). This case is called IS1, for Iterative Sequence, 1 normalization. The matrix normalization used is the one-crossproduct form. When only one normalization is being performed, the three types of normalization perform virtually identically. The computations were performed under rounding to the usual ten bits of precision. If its initial length is not remembered, the vector representation cannot be normalized, and its error is the same as the non-normalized case of Fig. 11, showing a linear error accumulation.
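The no-, one-, and two-crossproduct matrix normalizations are defined earlier in the report; as a hedged reconstruction (my assumption of the forms, not the report's code), each row is rescaled to unit length, and the crossproduct variants rebuild one or two rows to restore orthogonality:

```python
import math

def _unit(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def _cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def normalize_matrix(R, crossproducts=1):
    # Hypothetical reconstruction of the three normalizations:
    # 0: scale each row to unit length only;
    # 1: additionally rebuild row 3 as r1 x r2;
    # 2: additionally rebuild row 2 as r3 x r1 (full re-orthogonalization).
    r1, r2, r3 = (_unit(row) for row in R)
    if crossproducts >= 1:
        r3 = _unit(_cross(r1, r2))
    if crossproducts >= 2:
        r2 = _unit(_cross(r3, r1))
    return [r1, r2, r3]
```

The more cross products used, the closer the result is to a true orthonormal matrix, at the cost of extra arithmetic per normalization.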

Fig. 12(b) compares direction error for the three representations, normalized as in Fig. 12(a). The comparative performance here is certainly not so clear-cut, and the error accumulations resemble random walks. Alternatively, the performance could depend more on the particular choice of input data (rotation and input vector) than on numerical properties of the methods.

[Figure 11(a) and Figure 11(b) appear here.]

[Figure 12(a) and Figure 12(b) appear here.]

Figure 13 About Here

Figure 13: Case ISN. As for the previous figure, but normalizing the rotation representation at each iteration, using no-crossproduct matrix normalization, which improved direction errors but worsened length errors over the one-crossproduct form. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. (b) Direction error.

Figure 14 About Here

Figure 14: Case RS1. As for case IS1 (no normalization of intermediate representations), but using a sequence of random rotations. There is no difference in performance with the different matrix normalizations; no-crossproduct matrix normalization. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector (first 100 values; the rest are similar). (b) Direction error. The rather confusing results at least have some clear qualitative differences.

Finally, a natural question concerns the effects of normalization applied after every rotation composition. This is case ISN (N normalizations). Fig. 13 shows the results for length and direction. As might be expected for length error, the normalized vector has the best performance (presumably as accurate as machine precision allows). The quaternion result has a systematically negative mean, higher variance, and periodicity. The matrix length error displayed in Fig. 13(a) results from the no-crossproduct form of matrix normalization. The trend is not an illusion; carried out to 400 iterations, the length error shows a marked systematic increase in the negative direction. On the other hand, iteration of the one-crossproduct normalization resulted in a factor of two worsening of the magnitude of the direction errors for the matrix representation. Repeated normalizations seem to have adversely affected the accuracy of the vector representation after many iterations, but more work is needed to tell if this is a repeatable effect. In any event, it seems clear that the repeated normalizations are certainly not improving matters much in this case over case IS1.

The main conclusion here is that the cost of normalizing quaternion or matrix representations after every rotation in a repeated sequence may not be repaid in results, and may in fact be actively harmful. The vector representation seems to do best of the three on length error and worst on direction error. More work (analytic or experimental) would be needed to clarify the performance ranks, or even to recommend or disqualify a representation for this task.

4.5 Concatenated Random Rotations

Here a sequence of random rotations is generated as described earlier, analogous to the sequence of identical rotations in the last experiment. They are applied cumulatively to an input vector. In each case computation is with 10 bits precision under rounding. The cases of a single final normalization and of normalizing every intermediate representation (called cases RS1 and RSN) were investigated, with both the no- and one-crossproduct forms. The results, comparable to those in cases IS1 and ISN (Figs. 12 and 13), are shown in Figs. 14 and 15.
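A sequence of random rotations can be generated in many ways; one common construction (an illustration, not necessarily the report's generator) draws a uniformly random unit quaternion and composes it into a running product, which in exact arithmetic stays on the unit sphere:

```python
import math, random

def random_unit_quaternion(rng):
    # One common way to draw a uniformly random rotation: normalize a
    # 4-vector of independent Gaussians (not necessarily the report's method).
    q = [rng.gauss(0.0, 1.0) for _ in range(4)]
    n = math.sqrt(sum(c * c for c in q))
    return [c / n for c in q]

def qmul(a, b):
    # Hamilton product of quaternions stored as (w, x, y, z).
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return [aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw]

rng = random.Random(0)
q = [1.0, 0.0, 0.0, 0.0]           # identity rotation
for _ in range(50):
    q = qmul(random_unit_quaternion(rng), q)
# In exact arithmetic the product of unit quaternions remains unit length;
# at reduced precision this is exactly the drift the experiments measure.
```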

The results indicate that the vector representation remains better than the other two for length errors, but for direction errors the results are unclear. It is hard to draw clear distinctions either between the normalization methods or between the representations. Again the form of matrix normalization has some annoying effects, which motivate the next section.

[Figure 13(a) and Figure 13(b) appear here.]

[Figure 14(a) and Figure 14(b) appear here.]

Figure 15 About Here

Figure 15: Case RSN. As for case RS1, but normalizing all intermediate representations. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error using one-crossproduct matrix normalization. (b) Direction error using the one-crossproduct form. (c) Direction error using the no-crossproduct form.

Case       μ-vec      μ-mat      μ-quat     σ-vec      σ-mat      σ-quat
RaTr  L    -          0.006225   0.005026   -          0.001181   0.003369
      D    -          0.001787   0.004578   -          0.001094   0.002307
RaRd  L    -          -0.000310  -0.000225  -          0.001087   0.001263
      D    -          0.000965   0.001180   -          0.000542   0.000759
IS1   L    0.000276   -0.000260  -0.000262  0.000466   0.000857   0.001648
      D    0.003694   0.004063   0.001435   0.001705   0.002298   0.001123
ISN   L    0.000137   -0.001772  -0.000811  0.000533   0.001106   0.001312
      D    0.005646   0.004844   0.001606   0.002786   0.002872   0.001045
RS1   L    -0.000252  -0.000162  -0.000157  0.000521   0.001612   0.002022
      D    0.003710   0.005854   0.006065   0.002150   0.003686   0.006065
RSN   L    0.000164   -0.000602  -0.001097  0.000473   0.000703   0.001666
      D    0.002269   0.009830   0.005960   0.000874   0.004230   0.002236
      D    0.002269   0.005187   0.005960   0.000874   0.002248   0.002236

Table 2: Statistics for length (L) and direction (D) errors for some of the cases illustrated in this section. Cases RaTr and RaRd are the independent random rotations; IS1 and ISN are the iterative sequences (one normalization, and normalization of each intermediate representation); RS1 and RSN are the random sequences. Only the first 100 data values are used from the iterated rotations, to diminish the effect of longer-term trends. The second D line for RSN is with no-crossproduct normalization (which only affects the matrix representation).

4.6 Statistical Summary

Table 2 summarizes some of the data from the cases considered so far.

4.7 Matrix Normalization

Some experiments explored the performance of the three types of matrix normalization (no, one, or two cross products) in the context of iterated identical and random rotations.

[Figure 15(a), 15(b), and 15(c) appear here.]

Figure 16 About Here

Figure 16: Iterated identical rotations of the unit vector in direction (1, 1, -1) by 1 radian about the axis (1, 1, 1), computed to 10 bits under rounding. Solid line - no cross products; dotted line - one cross product; dashed line - two cross products. (a) Length errors. (b) Direction errors.

Figure 17 About Here

Figure 17: As for the previous figure, but the rotation is of the unit vector in direction (1, 2, 3) by iterated rotations of 2 radians about the axis (3, -2, -1).

The experiments consist of two iterated identical rotations (Figs. 16, 17) under rounding, and two iterated random rotation sequences under truncation and rounding. Fig. 18 gives the direction errors for two sequences of random rotations applied to an input vector. In no case is there a significant difference between the performances on length error at 200 iterations, but the one- or two-crossproduct forms significantly outperform the no-crossproduct form over 400 iterations, since the latter exhibits a systematically increasing negative length error and the former do not.

For direction error, few conclusions can be drawn about relative performance under iterated identical rotation sequences. It is perhaps surprising that the no-crossproduct form usually performs here at least as well as the other two over a small number of iterations, and that for random sequences there is no clear winner among normalization types, even when approximating with truncation. Thus the symptoms noticed in the last two sections, involving long-term relative effects of the normalization form, were not noticed over 200 iterations. The one- and two-crossproduct forms may be more stable over long iteration, but the issue of optimal normalization remains open.

4.8 Average Error of Concatenated Random Rotations

Which representation is best, or worst? To wind up this overlong exercise in trying to derive structure from noise, let us consider average behavior for the task of random rotation sequences (perhaps of more general interest than iterated identical rotations). Technical decisions on normalization are based on the conclusions of previous sections: approximation is by rounding, with normalization only after the last rotation. Matrix normalization is done with the one-crossproduct method. Each run is a single experiment like that of Section 4.5, only performed (with the same data) at a number of different precisions.

Such a set of runs yields length and direction error surfaces plotted against the precision

Figure 18 About Here

Figure 18: Direction errors only, for a sequence of random rotations applied to an input vector. Legend as in the previous two figures. (a) Truncation to 10 bits. (b) Rounding to 10 bits.

[Figure 16(a) and Figure 16(b) appear here.]

[Figure 17(a) and Figure 17(b) appear here.]

[Figure 18(a) and Figure 18(b) appear here.]

[Figure 19(a), 19(b), 19(c), and 19(d) appear here.]

Rep        Repeated Rot                     Random Seq
Vec(Mat)   (9N+17)*   (6N+13)+   1 sc       26N*       19N+       N sc
Vec(Quat)  (18N+4)*   (12N)+     1 sc       22N*       12N+       N sc
Quat       (16N+22)*  (12N+12)+  1 sc       (20N+18)*  (12N+12)+  N sc
Matrix     (27N+26)*  (18N+19)+  1 sc       (44N+9)*   (31N+6)+   N sc

Table 4: Operation counts for N-long rotation sequences. The long form of matrix multiplication is used here. Vec(Mat) is the vector representation with intermediate transformation by matrices; Vec(Quat), with transformation by the quaternion method. Multiplications, additions, and sine-cosine pairs are denoted by *, +, and sc.

matrix transformation) since matrix-vector multiplication costs one third of matrix-matrix multiplication
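The per-entry costs behind Table 4 are easy to spot-check: a 3x3 matrix-vector product takes 9 multiplications and 6 additions, exactly one third of the 27 and 18 of a (long-form) matrix-matrix product. A small operation-counting scalar (my sketch, not the report's tooling) confirms this:

```python
class Count:
    # Minimal operation-counting scalar: tallies * and + globally.
    muls = 0
    adds = 0
    def __init__(self, v): self.v = v
    def __mul__(self, o):
        Count.muls += 1
        return Count(self.v * o.v)
    def __add__(self, o):
        Count.adds += 1
        return Count(self.v + o.v)

def matvec(M, x):
    # 3 rows x (3 mults + 2 adds) = 9*, 6+
    return [M[i][0]*x[0] + M[i][1]*x[1] + M[i][2]*x[2] for i in range(3)]

def matmat(A, B):
    # 9 entries x (3 mults + 2 adds) = 27*, 18+
    return [[A[i][0]*B[0][j] + A[i][1]*B[1][j] + A[i][2]*B[2][j]
             for j in range(3)] for i in range(3)]

I = [[Count(float(i == j)) for j in range(3)] for i in range(3)]
v = [Count(1.0), Count(2.0), Count(3.0)]

Count.muls = Count.adds = 0
matvec(I, v)
mv = (Count.muls, Count.adds)      # (9, 6)

Count.muls = Count.adds = 0
matmat(I, I)
mm = (Count.muls, Count.adds)      # (27, 18)
```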

4.10 Conclusions and Questions

The conclusions we can draw at this point are the following

1. Numerical simulation of variable precision arithmetic has yielded some tentative conclusions, but in many interesting cases they are rather delicate and depend on niceties of the implementation. Accurate performance predictions may only be possible with faithful simulation of the particular hardware and software at issue.

2. Practical differences in numerical accuracy tend to disappear at reasonable precisions. After 200 iterations of a rotation, 53 bits of mantissa precision (the double representation in the C language) yields no error to 10 decimal places. With 24 bits (the float representation), effects were noticeable in the 5th to 7th decimal place with all methods.

3. Normalization is vital before quaternion or matrix representations are applied to vectors, but is not as important during the composition of representations, and the extra numerical computation of frequent normalization may actually decrease accuracy in the final result.

4. The vector (with matrix transformation) and matrix representations for rotation sequences seem to outperform the quaternion for accuracy.

5. The vector (with matrix transformation) representation for rotation sequences, with a single normalization of vector length at the end, is most computationally efficient and has least length error on the average.

6. Averaged over many trials, the difference in accuracy of the three methods amounts only to about a single bit of precision for a small number of rotation compositions (say 25 or fewer).

7. Matrices are the least computationally efficient on rotation sequences.



There are also some remaining basic questions:

1. What is a reasonable analytic approach to the numerical accuracy problem for interesting rotational tasks (say, sequences of random rotations under rounding)?

2. How should optimal matrix or quaternion normalization methods be defined and constructed?

5 Discussion

The area of robust computational geometry is becoming popular, with an annual international conference and increasingly sophisticated theoretical formulations. Very little of that work (if any) is related to rotation algorithms. Symbolic algebra systems can help with analysis [4]. A good treatment of computer arithmetic and rounding appears in [15], with an application to detecting line intersection. Serious qualitative representational problems arise when issues of topology (line connectivity) are at stake, and much of the work is on computing line intersections accurately with finite precision arithmetic [19]. For instance, Milenkovic's analysis is based on facts about line intersection, such as the number of extra bits that are necessary to guarantee a specific precision of answer. Some of these facts are surprising: it takes at least three times the input precision to calculate a line intersection location to the same precision [11]. Robust polygon and polyhedron intersection algorithms, as well as various line-sweeping algorithms, are based on accurate line intersection algorithms [8].

Several points of comparison between rotation parameterizations and representations have been raised in the preceding sections. It is perhaps worth repeating that little evidence for dramatic differences in numerical accuracy has emerged, though the evidence might tend to favor matrices slightly and to condemn the vector representation.

One of the common applications of rotational transforms is to transform several vectors at once, either in the context of moving a set of points around (as in graphics) or in handling coordinate systems represented as n points (n - 1 of them points at infinity representing axis directions, and one representing the origin of coordinates in a homogeneous coordinate system). In the first case the repeated application of the same transform will be more economical if the work is done in the matrix representation. In the second the same may well be true, but of course the whole point of homogeneous coordinates is to make the projective (a fortiori affine) transforms into linear ones implemented with matrix multiplication. This means, for example, that coordinate systems, rigid transformations including translation and rotation, and point projective transforms (simple camera models) can all be represented as matrices and composed with matrix multiplication, and that the operation of such transforms on vectors is simply matrix-vector multiplication. In this context homogeneous coordinates by definition require the matrix representation.
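As a concrete illustration of this point (names and conventions here are mine, not the report's), a rigid motion packed into a 4x4 homogeneous matrix rotates direction vectors (w = 0) and both rotates and translates points (w = 1):

```python
import math

def homogeneous(R, t):
    # Pack a 3x3 rotation R and a translation t into one 4x4 matrix.
    H = [row[:] + [ti] for row, ti in zip(R, t)]
    H.append([0.0, 0.0, 0.0, 1.0])
    return H

def apply4(H, p):
    # Apply a 4x4 transform to a homogeneous 4-vector.
    return [sum(H[i][j] * p[j] for j in range(4)) for i in range(4)]

c, s = math.cos(math.pi / 2), math.sin(math.pi / 2)
Rz = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]      # 90 deg about Z
H = homogeneous(Rz, [1.0, 0.0, 0.0])                   # rotate, then shift in X

point = apply4(H, [1.0, 0.0, 0.0, 1.0])      # w=1: rotated and translated
direction = apply4(H, [1.0, 0.0, 0.0, 0.0])  # w=0: rotated only
```

The same 4x4 machinery composes with ordinary matrix multiplication, which is exactly the uniformity the homogeneous representation buys.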

If the rotation at issue is a general one (say, specified in conic (φ, n) parameters), then conversion either to the quaternion or the matrix form is easy. Thus the representations in this case do not dictate the semantics, or how one wants to think about the rotation. If the rotation is specified in terms of rotations about (X, Y, Z) coordinate axes, then the matrix representation is easily and robustly constructed.

Real-time applications involving concatenating rotations and applying them to a small number of vectors (perhaps one: the position of a robot hand) need detailed analysis to see which representation is the most efficient. Similarly for more complex geometric calculations.

A common application of rotation representations in vision is to express the motion of the observer. Often the rotational motions, for mechanical or mathematical reasons, are expressed as rotations about some fixed axes, which in practice are usually orthogonal. Examples are the motions induced by pan and tilt camera platforms, or the expression of egomotion in terms of rotations about the observer's local X, Y, Z axes. In this case the matrix representation is the clear choice to maximize efficiency and intuition.
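For rotations specified about fixed coordinate axes, the matrix construction is direct: build the elementary axis rotations and multiply. The pan/tilt interpretation below is an illustrative assumption, not the report's convention:

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# e.g. a pan (about Y) followed by a tilt (about X); the composition order
# shown is an assumption for illustration.
R = matmul(rot_x(0.2), rot_y(0.5))
```

Since each factor is orthonormal by construction, the product is too (up to roundoff), which is part of why axis-angle specifications convert so robustly to matrices.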

Reconstructing the rotation that transformed known points is a useful vision task that requires fitting data to a model of the transformation. The straightforward route, followed since Roberts' thesis [17], is to do a linear least squares fit to the transform's matrix representation. This approach is perhaps risky, since the matrix elements are not independent and the error being minimized thus may not be the one desired. The alternative of using conic parameters has been successfully used [12] and may well be a better idea. Recently, more sophisticated Kalman filter methods have been used to derive motion from point matches through time [5]. Model-matching with constraints allows a model to be matched to a class of objects, and promising approaches are emerging that use hybrid methods of symbolic algebra implementing polynomial triangulation to get explicit solutions for known constraints, followed by root-finding and sophisticated minimization techniques to search for minimum-error solutions [13,14].

A less common but important set of applications is to reason about and manipulate rotations themselves. For example, one may want to sample rotation space evenly [9], or to create smoothly interpolated paths in rotation space between a given set of rotations [16]. In these cases the Euler-Rodrigues parameters are the most convenient.
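Smooth interpolation between two rotations is particularly natural in the quaternion (Euler-Rodrigues) form; the standard spherical linear interpolation construction (a sketch in the spirit of [16], not code from the report) is:

```python
import math

def slerp(q0, q1, u):
    # Spherical linear interpolation between unit quaternions (w, x, y, z),
    # for interpolation parameter u in [0, 1].
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                      # take the shorter arc
        q1, dot = [-c for c in q1], -dot
    if dot > 0.9995:                   # nearly parallel: fall back to lerp
        q = [a + u * (b - a) for a, b in zip(q0, q1)]
    else:
        th = math.acos(dot)
        q = [(math.sin((1 - u) * th) * a + math.sin(u * th) * b) / math.sin(th)
             for a, b in zip(q0, q1)]
    n = math.sqrt(sum(c * c for c in q))
    return [c / n for c in q]

q0 = [1.0, 0.0, 0.0, 0.0]                                   # identity
q1 = [math.cos(math.pi/4), 0.0, 0.0, math.sin(math.pi/4)]   # 90 deg about Z
qmid = slerp(q0, q1, 0.5)                                   # 45 deg about Z
```

The interpolant moves at constant angular velocity along the great-circle arc between the two unit quaternions, which is exactly the "smooth path in rotation space" property the matrix form does not offer directly.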

Another example is the use, since 1932, of quaternions in the neurophysiological literature to describe eye kinematics [22]. Models of human three-dimensional oculomotor performance, especially for the vestibulo-ocular reflex and feedback models of saccades, involve computations on rotations that are much more stable when carried out in conic or Euler-Rodrigues parameters using quaternion arithmetic. Subtractive feedback laws adequate for simple (left or right) eye movements do not generalize to arbitrary rotational axes. The quaternion model uses multiplicative feedback, explains human performance better than similar models based on rotation matrices, and leads to explicit predictions, such as a four-channel saccade computation [21].


6 Bibliography

[1] S. L. Altmann. Rotations, Quaternions and Double Groups. Clarendon Press, Oxford, 1986.

[2] C. M. Brown. Gaze controls with interactions and delays. In DARPA Image Understanding Workshop (submitted to IEEE-TSMC), pages 200-218, May 1989.

[3] C. M. Brown. Prediction in gaze and saccade control. Technical Report OUEL 1771/89, UR TR-295, Oxford University Dept. of Engineering Science (U. Rochester Comp. Sci. Dept.), May 1989.

[4] C. I. Connolly, D. Kapur, J. L. Mundy, and R. Weiss. GeoMeter: A system for modeling and algebraic manipulation. In Proceedings, DARPA Image Understanding Workshop, pages 797-804, May 1989.

[5] O. D. Faugeras, F. Lustman, and G. Toscani. Motion and structure from motion from point and line matches. In Proceedings, International Conference on Computer Vision, pages 25-34, June 1987.

[6] J. D. Foley and A. van Dam. Fundamentals of Interactive Computer Graphics. Addison-Wesley, 1982.

[7] W. R. Hamilton. On quaternions; or on a new system of imaginaries in algebra. Philosophical Magazine, 3rd Ser., 25:489-495, 1844.

[8] C. Hoffmann and J. Hopcroft. Towards implementing robust geometric computations. In Proceedings, ACM Symposium on Computational Geometry, 1988.

[9] Berthold K. P. Horn. Robot Vision. MIT Press / McGraw-Hill, 1986.

[10] K. Kanatani. Group Theory in Computer Vision. Springer Verlag, 1989.

[11] V. J. Milenkovic. Robust geometric computations for vision and robotics. In Proceedings, DARPA Image Understanding Workshop, pages 764-773, May 1989.

[12] J. L. Mundy. Private communication, August 1989.

[13] J. L. Mundy. The application of symbolic algebra techniques to object modeling. Lecture, University of Oxford, August 1989.

[14] J. L. Mundy. Symbolic representation of object models. In preparation, August 1989.

[15] T. Ottmann, G. Thiemt, and C. Ullrich. Numerical stability of geometric algorithms. In Proceedings, ACM Symposium on Computational Geometry, 1987.

[16] K. S. Roberts, G. Bishop, and S. K. Ganapathy. Smooth interpolation of rotational motion. In Computer Vision and Pattern Recognition 1988, pages 724-729, June 1988.

[17] L. G. Roberts. Machine perception of three-dimensional solids. In J. T. Tippett et al., editors, Optical and Electro-Optical Information Processing, pages 159-197. MIT Press, 1968.

[18] O. Rodrigues. Des lois géométriques qui régissent les déplacements d'un système solide dans l'espace, et de la variation des coordonnées provenant de ces déplacements considérés indépendamment des causes qui peuvent les produire. Journal de Mathématiques Pures et Appliquées, 5:380-440, 1840.

[19] D. Salesin, J. Stolfi, and L. Guibas. Epsilon geometry: building robust algorithms from imprecise computations. In Proceedings, ACM Symposium on Computational Geometry, 1989.

[20] R. Taylor. Planning and execution of straight line manipulator trajectories. In J. M. Brady, J. M. Hollerbach, T. L. Johnson, T. Lozano-Pérez, and M. T. Mason, editors, Robot Motion: Planning and Control. MIT Press, 1982.

[21] D. Tweed and T. Vilis. Implications of rotational kinematics for the oculomotor system in three dimensions. Journal of Neurophysiology, 58(4):832-849, 1987.

[22] G. Westheimer. Kinematics for the eye. Journal of the Optical Society of America, 47:967-974, 1932.



Figure 6 About Here

Figure 6: The same case and data as the previous figure, plotted as error vectors, where the correct answer vector points to the origin and lies along the positive Z axis (see text). (a) Error vectors projected onto the X-Z plane. Circles plot the quaternion-vector errors; X's the matrix-vector errors. (b) As for (a), but projection onto the X-Y plane.

Figure 7 About Here

Figure 7: Case RaRd. (a), (b): as the previous figure, with the same input rotations and vectors, but with operations under rounding to 10 bits of precision.

Fig. 7 is similar to Fig. 6, except that rounding is employed (note the scale change). Not only does the length error bias disappear as expected, but the difference between the methods decreases, with the matrix implementation maintaining a slight advantage.

4.4 Iterated Rotations and Normalization

The second experiment is to iterate the same small rotation about a particular direction, using three rotation composition methods. Quaternion multiplication and matrix multiplication produce iterated quaternion and matrix rotation representations; at the end of the iterations, the resulting rotation representation is applied to the input vector once to determine the rotated vector. A third method, called the vector method here, iteratively applies the identical incremental rotation to the vector at each iterative step, preserving only the rotated vector. There are thus two cases (quaternion and matrix) of composing rotation representations, and one case (vector) of composing the application of the rotation to the vector. Previous results indicate that the matrix method is both more accurate and more efficient for applying a single rotation, and empirically the results are much better with this choice as well, so rotations are implemented in the vector method by matrix-vector multiplication.
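The compose-then-apply and apply-at-every-step structures can be sketched in modern double precision, where they agree closely; this reproduces the structure of the experiment, not the report's 10-bit arithmetic. The rotation (1 radian about (1, 1, 1), applied to (1, 1, -1)) matches the case used in the text; the rotation is expanded into the two-cross-product form v + 2w(u x v) + 2 u x (u x v), which I take to correspond to the report's eq. (6):

```python
import math

def quat_from_axis_angle(axis, angle):
    n = math.sqrt(sum(c * c for c in axis))
    s = math.sin(angle / 2.0) / n
    return (math.cos(angle / 2.0), axis[0]*s, axis[1]*s, axis[2]*s)

def qmul(a, b):
    aw, ax, ay, az = a; bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qrot(q, v):
    # v + 2w(u x v) + 2 u x (u x v), with unit quaternion q = (w, u).
    w, ux, uy, uz = q
    cx = (uy*v[2] - uz*v[1], uz*v[0] - ux*v[2], ux*v[1] - uy*v[0])
    ccx = (uy*cx[2] - uz*cx[1], uz*cx[0] - ux*cx[2], ux*cx[1] - uy*cx[0])
    return tuple(v[i] + 2.0*w*cx[i] + 2.0*ccx[i] for i in range(3))

dq = quat_from_axis_angle((1.0, 1.0, 1.0), 1.0)
v = (1.0, 1.0, -1.0)

# Method 1: compose 200 incremental quaternions, apply once at the end.
q = (1.0, 0.0, 0.0, 0.0)
for _ in range(200):
    q = qmul(dq, q)
v1 = qrot(q, v)

# Method 2 (the "vector method"): apply the increment to the vector each step.
v2 = v
for _ in range(200):
    v2 = qrot(dq, v2)
```

At 53 bits the two routes agree to many decimal places; the experiments below show how differently they degrade at 10 bits.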

These experiments were carried out under both truncation and rounding. Truncation is sometimes used to get at worst-case performance in a numerical analysis sense. Such an approach might be useful in analysis of a non-iterative computational scheme (such as a single rotation), but the cumulative scaling effects described in Section 4.2 overwhelm numerical precision effects. The errors engendered through iterative truncation turn out to be qualitatively similar to those that arise under rounding. These results are thus good for geometric intuition about the methods, and illustrate the importance of normalization. Fig. 8(a) and (b) show the (shocking) errors that result if no normalization takes place during the iterated rotation composition, and if the resulting non-normalized product quaternion or matrix rotation is applied to the input vector. This is case IS0 (Iterative Sequence, 0 normalizations). Without normalization, all the representations show cumulative errors that are easy to understand.

[Figure 5(a) and Figure 5(b) appear here.]

[Figure 6(a) and Figure 6(b) appear here.]

Figure 8 About Here

Figure 8: Case IS0. Iterative transformation of the vector [1, 1, -1] about the axis [1, 1, 1] by 1 rad, 200 times. Calculations done with ten bits of mantissa precision under truncation; no normalization of the final representation before applying it to the vector. This choice of input parameters results in a representative performance, but not a normative one. (a) Result vector error projected on the X-Z plane. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. The matrix errors are a very tight helix up the Z axis, with the vector errors only slightly larger. (b) As for (a), but showing the X-Y projection. The matrix and vector errors project to the small dark blob at the origin.

First consider the vector representation. It is acted on at each iteration by an identical but errorful transform of eq. (6). We see it in both views spiraling away from the correct initial value, its length shrinking and its direction error increasing. The performance of the vector method might be improved by a factor of two on average by using the matrix transformation at every step (considering the arguments in Section 4.2 and the results shown in Fig. 6). Another important aspect of this method is its variability in performance for different rotations, arising from the variability in the accuracy of the first calculated rotation (again Fig. 6).

As predicted in Section 4.2, the matrix representation is affected in its scaling property but not significantly in its rotational properties: there is effectively no directional error, but the result shrinks to less than half its original length in 200 iterations.

The quaternion representation acts, using eq. (6), by adding two cross product vectors to the original input vector; as the quaternion components become increasingly systematically errorful, they affect the result in the interesting way shown in the figure. One effect is the decaying magnitude of the quaternion; the other is that as the cumulative rotation φ increases from zero, the sin(φ/2) weighting of A, the vector part of the quaternion, starts at zero and returns to zero with a period of 2π/t, where t is the truncated value of the incremental rotation angle. Thus with that period the effective rotation error varies sinusoidally, and returns to zero as the magnitude of A does.

The same data are plotted, this time for 400 iterations, in Fig. 9. First consider length errors. This presentation emphasizes the exponential decay of the answer's length to zero with the matrix and vector representations. As the quaternion's magnitude ultimately decays to zero, the result vector will be unaffected by the rotation and will remain equal to the input vector, so its length error stabilizes at zero. The short-period sinusoidal character of the length error arises from the arguments of the last paragraph. The initially increasing envelope arises from the increasing inaccuracy caused by the truncation; the ultimate exponential decay of the envelope arises from the shrinking of the quaternion components.

Figure 7(a) Figure 7(b)

Figure 8(a) Figure 8(b)

Figure 9 About Here

Figure 9: Case ISO for 400 iterations (see text). (a) Length error of result vector. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. The vector and matrix errors coincide. (b) As for (a), but showing direction error for the resulting vector. The vector and matrix errors are very close to zero.

Figure 10 About Here

Figure 10: Case ISO under rounding instead of truncation. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. Though the forms of the errors are similar to the truncation case, their relative magnitudes are more equal. (a) Result vector projected on X-Z plane. (b) As for (a), but showing X-Y projection.

Considering direction errors, the matrix and conic representations have (locally) linearly increasing errors as they spiral away from the true answer direction. (The linear increase cannot be maintained forever on the orientation sphere, of course.) The quaternion method's direction error ultimately becomes purely periodic as the answer vector becomes static. The direction error, as a function of iteration number t and angle r between the rotation axis and the input vector, is then of the form ((r sin t)^2 + (r - r cos t)^2)^(1/2), i.e., the length of a circular chord from a fixed point on the circle to a point moving around the circle at constant velocity. Before that time the direction error exhibits a more complex two-humped form; this arises from the two separate cross product components of the vector increment in eq. (6). Exercise: Would there be any difference in direction and length errors if the rotation were accomplished by conjugation (eq. (4)) instead of eq. (6)?
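The chord-length expression simplifies: ((r sin t)^2 + (r - r cos t)^2)^(1/2) = r(2 - 2 cos t)^(1/2) = 2r|sin(t/2)|. A quick numerical check of that algebra (illustration only, not from the paper):

```python
import math

def chord_error(r, t):
    """Direction error in the chord form given in the text."""
    return math.hypot(r * math.sin(t), r - r * math.cos(t))

def chord_error_simplified(r, t):
    """Equivalent closed form: 2 r |sin(t/2)|."""
    return 2.0 * r * abs(math.sin(t / 2.0))
```

The closed form makes the pure periodicity explicit: once the answer vector is static, the error oscillates between 0 and 2r with period 2π in t.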

Next in this sequence, the last two demonstrations are repeated under rounding, not truncation. The expectation is that, due to the better approximation and possible error cancellations arising from high and low approximations, the errors in the representations will be less, but that systematic errors will still exist. The results (Figs. 10, 11) bear this out; the errors often seem like much smaller, noisy versions of the truncation errors. In Fig. 11 the quaternion errors display the fine two-humped structure longer, since it is not overwhelmed as quickly by shrinking quaternion magnitude. Rounding has a dramatically good effect on the systematic shrinking of matrices and quaternions under iterative self-multiplication.

So far we have, purely for intuition, dealt with the pathological case in which no normalization of rotation representations is performed. The arguably pathological case of approximation through truncation yields insight about the structure of systematic approximation errors in an iterative scheme. From here on we abandon truncation approximation (except in Section 4.7).
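The reduced-precision arithmetic used in these experiments can be simulated by quantizing every intermediate result to a fixed number of mantissa bits. A minimal sketch follows (the paper's actual simulator is not shown in this excerpt; `bits=10` matches the experiments). Note that truncation is biased toward zero, which is the source of the systematic shrinking, while round-to-nearest is not.

```python
import math

def quantize(x, bits=10, mode="round"):
    """Keep `bits` bits of mantissa, either rounding to nearest
    or truncating toward zero."""
    if x == 0.0:
        return 0.0
    e = math.floor(math.log2(abs(x))) + 1   # exponent: 2**(e-1) <= |x| < 2**e
    scale = 2.0 ** (bits - e)
    m = x * scale                           # mantissa scaled to `bits` bits
    m = round(m) if mode == "round" else math.trunc(m)
    return m / scale
```

Applying `quantize` after every multiply and add in a rotation loop reproduces the qualitative behavior described above: magnitudes drift downward under truncation, while under rounding the errors look like small, roughly unbiased noise.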

Figure 11 About Here

Figure 11: Data of the previous figure, showing length and direction errors. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of result vector. (b) Direction error.

Figure 9(a) Figure 9(b)

Figure 10(a) Figure 10(b)

Figure 12 About Here

Figure 12: Case IS1. Rotating the vector (1, 1, -1) by 1 radian increments about the axis (1, 1, 1) with 10 bits precision under rounding, normalizing the quaternion and matrix representations prior to application to the input vector, and normalizing the final vector representation to its initial length. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. The vector representation has lower variance, and the quaternion exhibits marked periodicity. Only the first 100 iterations are shown; the rest are similar. (b) Direction error.

We now look at two different styles of normalization. In the first, no normalization is performed on intermediate representations, but the final quaternion, matrix, and vector are normalized. In the second, the intermediate quaternion, matrix, and vector are normalized at each step. Normalization of the matrix and quaternion is not dependent on the input data, but the length of the input vector must be computed and remembered for the vector representation to be normalized. This fact might be significant if many vectors are being treated at once.

Fig. 12(a) shows the length error when the final quaternion and matrix are normalized prior to application to the input vector, and the incrementally generated vector representation is normalized to its original length (here known to be unity). This case is called IS1, for Iterative Sequence, 1 normalization. The matrix normalization used is the one crossproduct form. When only one normalization is being performed, the three types of normalization perform virtually identically. The computations were performed under rounding to the usual ten bits of precision. If its initial length is not remembered, the vector representation cannot be normalized, and its error is the same as in the non-normalized case of Fig. 11, showing a linear error accumulation.

Fig. 12(b) compares direction error for the three representations, normalized as in Fig. 12(a). The comparative performance here is certainly not so clear cut, and the error accumulations resemble random walks. Alternatively, the performance could depend more on the particular choice of input data (rotation and input vector) than on numerical properties of the methods.

Figure 11(a) Figure 11(b)

Figure 12(a) Figure 12(b)

Figure 13 About Here

Figure 13: Case ISN. As for the previous figure, but normalizing the rotation representation at each iteration, using no crossproduct matrix normalization, which improved direction errors but worsened length errors over the one crossproduct form. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. (b) Direction error.

Figure 14 About Here

Figure 14: Case RS1. As for case IS1 (no normalization of intermediate representations), but using a sequence of random rotations. There is no difference in performance with the different matrix normalizations; no crossproduct matrix normalization. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector (first 100 values; the rest are similar). (b) Direction error. The rather confusing results at least have some clear qualitative differences.

Finally, a natural question concerns the effects of normalization applied after every rotation composition. This is case ISN (N normalizations). Fig. 13 shows the results for length and direction. As might be expected, for length error the normalized vector has the best performance (presumably as accurate as machine precision allows). The quaternion result has a systematically negative mean, higher variance, and periodicity. The matrix length error displayed in Fig. 13(a) results from the no crossproduct form of matrix normalization. The trend is not an illusion; carried out to 400 iterations, the length error shows a marked systematic increase in the negative direction. On the other hand, iteration of the one crossproduct normalization resulted in a factor of two worsening of the magnitude of the direction errors for the matrix representation. Repeated normalizations seem to have adversely affected the accuracy of the vector representation after many iterations, but more work is needed to tell if this is a repeatable effect. In any event, it seems clear that the repeated normalizations are certainly not improving matters much in this case over case IS1.

The main conclusion here is that the cost of normalizing quaternion or matrix representations after every rotation in a repeated sequence may not be repaid in results, and in fact may be actively harmful. The vector representation seems to do best of the three on length error and worst on direction error. More work (analytic or experimental) would be needed to clarify the performance ranks, or even to recommend or disqualify a representation for this task.

4.5 Concatenated Random Rotations

Here a sequence of random rotations is generated as described earlier, analogous to the sequence of identical rotations in the last experiment. They are applied cumulatively to an input vector. In each case computation is with 10 bits precision under rounding. The cases of a single final normalization and of normalizing every intermediate representation (called cases RS1 and RSN) were investigated. Both the no- and one crossproduct forms were investigated. The results, comparable to those in cases IS1 and ISN (Figs. 12 and 13), are shown in Figs. 14 and 15.
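The random rotation generator itself is described in an earlier section, outside this excerpt. One standard construction, which may differ from the paper's, draws a uniformly distributed rotation by normalizing four independent Gaussian deviates into a unit quaternion:

```python
import math
import random

def random_unit_quaternion(rng):
    """Uniformly distributed random rotation, as a unit quaternion
    (w, x, y, z): normalize four i.i.d. Gaussian deviates."""
    while True:
        q = [rng.gauss(0.0, 1.0) for _ in range(4)]
        n = math.sqrt(sum(c * c for c in q))
        if n > 1e-12:            # reject a (vanishingly rare) degenerate draw
            return tuple(c / n for c in q)
```

Because the 4-D Gaussian density is spherically symmetric, the normalized draw is uniform on the unit quaternion sphere, and hence uniform over SO(3).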

The results indicate that the vector representation remains better than the other two for length errors, but for direction errors the results are unclear. It is hard to draw clear distinctions either between the normalization methods or the representations. Again the form of matrix normalization has some annoying effects, which motivate the next section.

Figure 13(a) Figure 13(b)

Figure 14(a) Figure 14(b)

Figure 15 About Here

Figure 15: Case RSN. As for case RS1, but normalizing all intermediate representations. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error, using one crossproduct matrix normalization. (b) Direction error, using one crossproduct form. (c) Direction error, using no crossproduct form.

Case   Err   mu-vec     mu-mat     mu-quat    sig-vec    sig-mat    sig-quat
RaTr   L     -          0.006225   0.005026   -          0.001181   0.003369
RaTr   D     -          0.001787   0.004578   -          0.001094   0.002307
RaRd   L     -          -0.000310  -0.000225  -          0.001087   0.001263
RaRd   D     -          0.000965   0.001180   -          0.000542   0.000759
IS1    L     0.000276   -0.000260  -0.000262  0.000466   0.000857   0.001648
IS1    D     0.003694   0.004063   0.001435   0.001705   0.002298   0.001123
ISN    L     0.000137   -0.001772  -0.000811  0.000533   0.001106   0.001312
ISN    D     0.005646   0.004844   0.001606   0.002786   0.002872   0.001045
RS1    L     -0.000252  -0.000162  -0.000157  0.000521   0.001612   0.002022
RS1    D     0.003710   0.005854   0.006065   0.002150   0.003686   0.006065
RSN    L     0.000164   -0.000602  -0.001097  0.000473   0.000703   0.001666
RSN    D     0.002269   0.009830   0.005960   0.000874   0.004230   0.002236
RSN*   D     0.002269   0.005187   0.005960   0.000874   0.002248   0.002236

Table 2: Statistics (mean mu, deviation sig) for length (L) and direction (D) errors for some of the cases illustrated in this section. Cases RaTr and RaRd are the independent random rotations; IS1 and ISN are the iterative sequences (one normalization, and normalization of each intermediate representation); RS1 and RSN are the random sequences. Only the first 100 data values are used from the iterated rotations, to diminish the effect of longer-term trends. The second RSN line (*) is with no crossproduct normalization (which only affects the matrix representation).

4.6 Statistical Summary

Table 2 summarizes some of the data from the cases so far.

4.7 Matrix Normalization

Some experiments explored the performance of the three types of matrix normalization (no, one, or two cross products) in the context of iterated identical and random rotations.
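The three normalization forms are defined earlier in the paper; the sketch below is one plausible reading of them. The row conventions, and exactly which rows are recomputed, are my assumptions rather than details taken from the text.

```python
import math

def unit(v):
    """Rescale a 3-vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def renorm_none(R):
    """'No crossproduct': rescale each row to unit length."""
    return [unit(r) for r in R]

def renorm_one(R):
    """'One crossproduct': unit rows 1 and 2, rebuild row 3 as their cross."""
    r1, r2 = unit(R[0]), unit(R[1])
    return [r1, r2, cross(r1, r2)]

def renorm_two(R):
    """'Two crossproducts': full re-orthogonalization from row 1."""
    r1 = unit(R[0])
    r3 = unit(cross(r1, R[1]))   # axis orthogonal to rows 1 and 2
    r2 = cross(r3, r1)           # completes a right-handed orthonormal frame
    return [r1, r2, r3]
```

Only the two-crossproduct form guarantees an orthonormal result; the cheaper forms fix row lengths (and, for one crossproduct, one orthogonality relation) but leave residual skew, which is consistent with the mixed length/direction trade-offs reported below.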

Figure 15(a)

Figure 15(b)

Figure 15(c)

Figure 16 About Here

Figure 16: Iterated identical rotations of the unit vector in direction (1, 1, -1) by 1 radian about the axis (1, 1, 1), computed to 10 bits under rounding. Solid line - no cross products; dotted line - one cross product; dashed line - two cross products. (a) Length errors. (b) Direction errors.

Figure 17 About Here

Figure 17: As for the previous figure, but the rotation is of the unit vector in direction (1, 2, 3) by iterated rotations of 2 radians about the axis (3, -2, -1).

The experiments consist of two iterated identical rotations (Figs. 16, 17) under rounding and two iterated random rotation sequences under truncation and rounding. Fig. 18 gives the direction errors for two sequences of random rotations applied to an input vector. In no case is there significant difference between the performances on the length error at 200 iterations, but the one- or two crossproduct forms significantly outperform the no crossproduct form over 400 iterations, since the latter exhibits systematically increasing negative length error and the former do not.

For direction error, few conclusions can be drawn about relative performance under iterated identical rotation sequences. It is perhaps surprising that the no crossproduct form usually performs here at least as well as the other two over a small number of iterations, and that for random sequences there is no clear winner between normalization types, even approximating with truncation. Thus the symptoms noticed in the last two sections, involving long-term relative effects of the normalization form, were not noticed over 200 iterations. The one- and two crossproduct forms may be more stable over long iteration, but the issue of optimal normalization remains open.

4.8 Average Error of Concatenated Random Rotations

Which representation is best, or worst? To wind up this overlong exercise in trying to derive structure from noise, let us consider average behavior for the task of random rotation sequences (perhaps of more general interest than iterated identical rotations). Technical decisions on normalization are based on the conclusions of previous sections: approximation is by rounding, with normalization only after the last rotation. Matrix normalization is done with the one crossproduct method. Each run is a single experiment like that of Section 4.5, only performed (with the same data) at a number of different precisions.

Such a set of runs yields length and direction error surfaces plotted against the precision

Figure 18 About Here

Figure 18: Direction errors only, for a sequence of random rotations applied to an input vector. Legend as in the previous two figures. (a) Truncation to 10 bits. (b) Rounding to 10 bits.

Figure 16(a) Figure 16(b)

Figure 17(a) Figure 17(b)

Figure 18(a) Figure 18(b)

Figure 19(a) Figure 19(b)

Figure 19(c) Figure 19(d)

Rep          Repeated Rot                  Random Seq
Vec(Mat)     (9N+17)*, (6N+13)+, 1 sc      26N*, 19N+, N sc
Vec(Quat)    (18N+4)*, (12N)+, 1 sc        22N*, 12N+, N sc
Quat         (16N+22)*, (12N+12)+, 1 sc    (20N+18)*, (12N+12)+, N sc
Matrix       (27N+26)*, (18N+19)+, 1 sc    (44N+9)*, (31N+6)+, N sc

Table 4: Operation counts for N-long rotation sequences. The long form of matrix multiplication is used here. Vec(Mat) is the vector representation with intermediate transformation by matrices; Vec(Quat), with transformation by the quaternion method. Multiplications, additions, and sine-cosine pairs are denoted by *, +, and sc.

matrix transformation) since matrix-vector multiplication costs one third of matrix-matrix multiplication
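Operation counts like those in Table 4 come from the cost of the composition primitives. As an illustration (a sketch, not the paper's code): composing unit quaternions costs 16 multiplications and 12 additions, while composing 3x3 matrices in the long form costs 27 multiplications and 18 additions, even though matrix-vector application (9 multiplications, 6 additions) is the cheapest way to act on a single vector.

```python
def quat_mul(p, q):
    """Quaternion composition (w, x, y, z): 16 multiplications, 12 additions."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def mat_mul(A, B):
    """Long-form 3x3 matrix product: 27 multiplications, 18 additions."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_vec(A, v):
    """Matrix-vector application: 9 multiplications, 6 additions."""
    return tuple(sum(A[i][k] * v[k] for k in range(3)) for i in range(3))
```

These per-step costs are why the quaternion wins for long composition sequences while the matrix wins when one composed rotation is applied to many vectors.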

4.10 Conclusions and Questions

The conclusions we can draw at this point are the following:

1. Numerical simulation of variable precision arithmetic has yielded some tentative conclusions, but in many interesting cases they are rather delicate and depend on niceties of the implementation. Accurate performance predictions may only be possible with faithful simulation of the particular hardware and software at issue.

2. Practical differences in numerical accuracy tend to disappear at reasonable precisions. After 200 iterations of a rotation, 53 bits of mantissa precision (the double representation in the C language) yields no error to 10 decimal places. With 24 bits (the float representation), effects were noticeable in the 5th to 7th decimal place with all methods.

3. Normalization is vital before quaternion or matrix representations are applied to vectors, but not as important during the composition of representations, and the extra numerical computation of frequent normalization may actually decrease accuracy in the final result.

4. The vector (with matrix transformation) and matrix representations for rotation sequences seem to outperform the quaternion for accuracy.

5. The vector (with matrix transformation) representation for rotation sequences, with a single normalization of vector length at the end, is most computationally efficient and has least length error on the average.

6. Averaged over many trials, the difference in accuracy of the three methods amounts only to about a single bit of precision for a small number of rotation compositions (say 25 or fewer).

7. Matrices are the least computationally efficient on rotation sequences.



There are also some remaining basic questions:

1. What is a reasonable analytic approach to the numerical accuracy problem for interesting rotational tasks (say, sequences of random rotations under rounding)?

2. How should optimal matrix or quaternion normalization methods be defined and constructed?

5 Discussion

The area of robust computational geometry is becoming popular, with an annual international conference and increasingly sophisticated theoretical formulations. Very little of that work (if any) is related to rotation algorithms. Symbolic algebra systems can help with analysis [4]. A good treatment of computer arithmetic and rounding appears in [15], with an application to detecting line intersection. Serious qualitative representational problems arise when issues of topology (line connectivity) are at stake, and much of the work is on computing line intersection accurately with finite precision arithmetic [19]. For instance, Milenkovic's analysis is based on facts about line intersection, such as the number of extra bits that are necessary to guarantee a specific precision of answer. Some of these facts are surprising: it takes at least three times the input precision to calculate a line intersection location to the same precision [11]. Robust polygon and polyhedron intersection algorithms, as well as various line-sweeping algorithms, are based on accurate line intersection algorithms [8].

Several points of comparison between rotation parameterizations and representations have been raised in the preceding sections. It is perhaps worth repeating that little evidence for dramatic differences in numerical accuracy has emerged, though the evidence might tend to favor matrices slightly and to condemn the vector representation.

One of the common applications of rotational transforms is to affect several vectors at once, either in the context of transforming a set of points around (as in graphics) or in handling coordinate systems represented as n points (n - 1 of them at points at infinity representing axis directions, and one representing the origin of coordinates, in a homogeneous coordinate system). In the first case the repeated application of the same transform will be more economical if the work is done in the matrix representation. In the second the same may well be true, but of course the whole point of homogeneous coordinates is to make the projective (a fortiori, affine) transforms into linear ones implemented with matrix multiplication. This means, for example, that coordinate systems, rigid transformations including translation and rotation, and point projective transforms (simple camera models) can all be represented as matrices and composed with matrix multiplication, and that the operation of such transforms on vectors is simply matrix-vector multiplication. In this context homogeneous coordinates by definition require the matrix representation.
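To make the homogeneous-coordinate point concrete, here is a minimal sketch (my own illustration, not from the paper): a rotation and a translation packed into one 4x4 matrix, so that the whole rigid transformation is applied by a single matrix-vector multiplication.

```python
def homogeneous(R, t):
    """Pack a 3x3 rotation R and translation t into a 4x4 rigid transform."""
    H = [list(R[i]) + [t[i]] for i in range(3)]
    H.append([0.0, 0.0, 0.0, 1.0])   # projective bottom row
    return H

def apply_h(H, p):
    """Apply a 4x4 transform to point p, treated as (x, y, z, 1)."""
    v = list(p) + [1.0]
    return tuple(sum(H[i][j] * v[j] for j in range(4)) for i in range(3))
```

Composing two such matrices composes the rotations and translations together, which is exactly the economy argued for above when many vectors or chained frames are involved.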

If the rotation at issue is a general one (say, specified in conic (φ, n) parameters), then conversion either to the quaternion or matrix form is easy. Thus the representations in this case do not dictate the semantics, or how one wants to think about the rotation. If the rotation is specified in terms of rotations about (X, Y, Z) coordinate axes, then the matrix representation is easily and robustly constructed.
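The matrix built from rotations about the coordinate axes is just a product of the elementary axis rotations. A sketch (the composition order is a convention and varies between authors):

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def compose(*Ms):
    """Left-to-right product of 3x3 matrices: compose(A, B) = A B."""
    R = Ms[0]
    for M in Ms[1:]:
        R = [[sum(R[i][k] * M[k][j] for k in range(3)) for j in range(3)]
             for i in range(3)]
    return R
```

The construction is robust in the sense noted in the text: each factor is exactly orthogonal up to the accuracy of the sine and cosine evaluations, with no conversion step in between.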

Real-time applications involving concatenating rotations and applying them to a small number of vectors (perhaps one: the position of a robot hand) need detailed analysis to see which representation is the most efficient. Similarly for more complex geometric calculations.

A common application of rotation representations in vision is to express the motion of the observer. Often the rotational motions, for mechanical or mathematical reasons, are expressed as rotations about some fixed axes, which in practice are usually orthogonal. Examples are the motions induced by pan and tilt camera platforms, or the expression of egomotion in terms of rotations about the observer's local X, Y, Z axes. In this case the matrix representation is the clear choice, to maximize efficiency and intuition.

Reconstructing the rotation that transformed known points is a useful vision task that requires fitting data to a model of the transformation. The straightforward route, followed since Roberts' thesis [17], is to do a linear least squares fit to the transform's matrix representation. This approach is perhaps risky, since the matrix elements are not independent and the error being minimized thus may not be the one desired. The alternative of using conic parameters has been successfully used [12] and may well be a better idea. Recently, more sophisticated Kalman filter methods have been used to derive motion from point matches through time [5]. Model-matching with constraints allows a model to be matched to a class of objects, and promising approaches are emerging that use hybrid methods of symbolic algebra (implementing polynomial triangulation to get explicit solutions for known constraints) followed by root-finding and sophisticated minimization techniques to search for minimum-error solutions [13, 14].

A less common but important set of applications is to reason about and manipulate rotations themselves. For example, one may want to sample rotation space evenly [9], or to create smoothly interpolated paths in rotation space between a given set of rotations [16]. In these cases the Euler-Rodrigues parameters are the most convenient.

Another example is the use, since 1932, of quaternions in the neurophysiological literature to describe eye kinematics [22]. Models of human three-dimensional oculomotor performance, especially for the vestibulo-ocular reflex and feedback models of saccades, involve computations on rotations that are much more stable when carried out in conic or Euler-Rodrigues parameters using quaternion arithmetic. Subtractive feedback laws adequate for simple (left or right) eye movements do not generalize to arbitrary rotational axes. The quaternion model uses multiplicative feedback, explains human performance better than similar models based on rotation matrices, and leads to explicit predictions, such as a four-channel saccade computation [21].


6 Bibliography

[1] S. L. Altmann. Rotations, Quaternions, and Double Groups. Clarendon Press, Oxford, 1986.

[2] C. M. Brown. Gaze controls with interactions and delays. In DARPA Image Understanding Workshop (submitted IEEE-TSMC), pages 200-218, May 1989.

[3] C. M. Brown. Prediction in gaze and saccade control. Technical Report OUEL 1771/89 (UR TR-295), Oxford University Dept. Engg. Science (U. Rochester Comp. Sci. Dept.), May 1989.

[4] C. I. Connolly, D. Kapur, J. L. Mundy, and R. Weiss. GeoMeter: A system for modeling and algebraic manipulation. In Proceedings DARPA Image Understanding Workshop, pages 797-804, May 1989.

[5] O. D. Faugeras, F. Lustman, and G. Toscani. Motion and structure from motion from point and line matches. In Proceedings International Conference on Computer Vision, pages 25-34, June 1987.

[6] J. D. Foley and A. van Dam. Fundamentals of Interactive Computer Graphics. Addison-Wesley, 1982.

[7] W. R. Hamilton. On quaternions; or on a new system of imaginaries in algebra. Philosophical Magazine, 3rd Ser., 25:489-495, 1844.

[8] C. Hoffmann and J. Hopcroft. Towards implementing robust geometric computations. In Proceedings ACM Symposium on Computational Geometry, 1988.

[9] Berthold K. P. Horn. Robot Vision. MIT Press / McGraw-Hill, 1986.

[10] K. Kanatani. Group Theory in Computer Vision. Springer Verlag, 1989.

[11] V. J. Milenkovic. Robust geometric computations for vision and robotics. In Proceedings DARPA Image Understanding Workshop, pages 764-773, May 1989.

[12] J. L. Mundy. Private communication, August 1989.

[13] J. L. Mundy. The application of symbolic algebra techniques to object modeling. Lecture, University of Oxford, August 1989.

[14] J. L. Mundy. Symbolic representation of object models. In preparation, August 1989.

[15] T. Ottmann, G. Thiemt, and C. Ullrich. Numerical stability of geometric algorithms. In Proceedings ACM Symposium on Computational Geometry, 1987.

[16] K. S. Roberts, G. Bishop, and S. K. Ganapathy. Smooth interpolation of rotational motion. In Computer Vision and Pattern Recognition 1988, pages 724-729, June 1988.

[17] L. G. Roberts. Machine perception of three-dimensional solids. In J. T. Tippett et al., editors, Optical and Electro-Optical Information Processing, pages 159-197. MIT Press, 1968.

[18] O. Rodrigues. Des lois géométriques qui régissent les déplacements d'un système solide dans l'espace, et de la variation des coordonnées provenant de ces déplacements considérés indépendamment des causes qui peuvent les produire. Journal de Mathématiques Pures et Appliquées, 5:380-440, 1840.

[19] D. Salesin, J. Stolfi, and L. Guibas. Epsilon geometry: building robust algorithms from imprecise computations. In Proceedings ACM Symposium on Computational Geometry, 1989.

[20] R. Taylor. Planning and execution of straight line manipulator trajectories. In J. M. Brady, J. M. Hollerbach, T. L. Johnson, T. Lozano-Pérez, and M. T. Mason, editors, Robot Motion: Planning and Control. MIT Press, 1982.

[21] D. Tweed and T. Vilis. Implications of rotational kinematics for the oculomotor system in three dimensions. Journal of Neurophysiology, 58(4):832-849, 1987.

[22] G. Westheimer. Kinematics for the eye. Journal of the Optical Society of America, 47:967-974, 1932.

26

Page 19: Some Computational Properties of - Semantic Scholar · There are four main parameterizations of the rotation group SO(3). Two of them (rotation angle and axis, and the closely related

AM_ Can e _ I IAnE Trunc

1 a2 o o 0

2e-02 o o o o

o 00

o 0 o o

oo 00 0

bullbullbull o-JlO bullbull ~ o

shy O bull0_0 rrII 0 bullbull bull aoabullbullbullbullbull bull

bull 0 bull a~ abullbullo 0bullbull bull bull 1- a bullbull - bull ~ bull O bull a

bull bull bull 0 0 ~ car001 0

0 0 0deg

302010

o i o 0 0 ltbOo o 0 deg00 0deg 0

o 0 ~ ooa 00

0 00 0 deg0

10 70 10 00

AM_ Can o MIl I OirETMgtC

2e-02

o

o o

o o

o

o

10

0 0

0 0

0 0 00 0 0

00 0

0 0 0

0 0 00

0 00 0 0

o

shy

Figure 5(b) Figure 5(a)

xmiddotz Err v_ Ccln 0 IIITIUI1C

1~2

0 0

0 0 00 0 0 0

ClCO

0 0

0

0 0 00

0 0 0

0

Se02

Xmiddoty Err Y_ Ccln o 0 bll TNllC

0

0 0 o~ 0 0

0 0

0 0middot 0

0 0

~0

0 middot~2

0

000

8 0

Figure 6(b) Figure 6(a)

Figure 8 About Here

Figure 8 Case ISO Iterative transformation of the vector [1 1 -11 about the axis [1 1 I) by 1 rad 200 times Calculations done with ten bits of mantissa precision under truncation No normalization of final representation before applying it to vector This choice of input parameters results in a representative performance but not a normative one (a) Result vector error projected on X - Z plane Solid line - vector representation dotted line - matrix dashed line - quaternion The matrix errors are a very tight helix up the Z axis with the vector errors only slightly larger (b) AB for (a) but showing X - y projection The matrix and vector errors project to the small dark blob at the origin

First consider the vector representation It is acted on at each iteration by an identical but errorful transform of eq (6) We see it in both views spiraling away from the correct initial value its length shrinking and its direction error increasing The performance of the vector method might be improved by a factor of two on average by using the matrix transformation at every step (considering the arguments in Section 42 and the results shown in Fig 6) Another important aspect of this method is its variability in performance for different rotations arising in the variability in the accuracy of the first calculated rotation (again Fig 6)

As predicted in Section 42 the matrix representation is affected in its scaling property but not significantly in its rotational properties there is effectively no directional error but the result shrinks to less than half its original length in 200 iterations

The quaternion representation acts using eq (6) by adding two cross product vectors to the original input vector as the quatemion components are increasingly systematically errorful they affect the result in the interesting way shown in the figure One effect is the decaying magnitude of the quaternion the other is that as the cumulative rotation 4gt increases from zero the sin(4)2) weighting of A the vector part of the quaternion starts at zero and returns to zero with a period of 21rt where t is the truncated value of the incremental rotation angle Thus with that period the effective rotation error varies sinusoidally and returns to zero as the magnitude of A does

The same data are plotted for 400 iterations this time in Fig 9 First consider length errors This presentation emphasizes the exponential decay of the answers length to zero with the matrix and vector representations Thus as the quaternions magnitude ultishymately decays to zero the result vector will be unaffected by the rotation and will remain equal to the input vector so its length error stabilizes at zero The short period sinusoidal character of the length error arises from the arguments of the last paragraph The initially increasing envelope arises from the increasing inaccuracy caused by the truncation The ultimate exponential decay of the envelope arises from the shrinking of the quaternion components

Considering direction errors the matrix and conic representations have (locally) linshyearly increasing errors as they spiral away from the true answer direction (The linear increase cannot be maintained forever on the orientation sphere of course) The quatershy

15

bullzEn_Cono 10 _ middotylEftII_ConO IO _

_3

o o

o

o o

o

bullbullbull 0

bull 0

orI 0

bull bull ~ 0

o o o 0 0

bullO 0

o

0

o omiddot

o

o -5~

o

IOHl3

o 0 o

6~ o

o

Figure 7(a) Figure 7(b)

shy -

I II II II

--1 I i 1 I ~ I I

- I I - I I

-t I f I - I I

I ~ I ~

-- r~t1 ~ ~ ~

0415 00

0415 c--- (i ~__ I J (

---- I I lt ---- J

t 1 I J 1

I

I 1 ~ __ _~

I I J I ~

-~ I----~--- I ~

_J

Figure 8(a) Figure 8(b)

Figure 9 About Here

Figure 9 Case 150 for 400 iterations (see text) (a) Length error of result vector Solid line - vector representation dotted line - matrix dashed line - quaternion The vector and matrix errors coincide (b) As for (a) but showing direction error for the resulting vector The vector and matrix errors are very close to zero

Figure 10 About Here

Figure 10 Case ISO under rounding instead of truncation Solid line - vector represenshytation dotted line - matrix dashed line - quaternion Though the forms of the errors are similar to the truncation case their relative magnitudes are more equal (a) Result vector projected on X - Z plane (b) As for (a) but showing X - Y projection

nion methods direction error ultimately becomes purely periodic as the answer vector becomes static The direction error as a function of iteration number t and angle r beshytween the rotation axis and the input vector is then ofthe form laquorsin t)2+(r-rc05 t)2)12 or the length of a circular chord from a fixed point on the circle to a point moving around the circle at constant velocity Before that time the direction error exhibits a more comshyplex two-humped form this arises from the two separate cross product components of the vector increment in eq (6) Exercise Would there be any difference in direction and length errors if the rotation is accomplished by conjugation (eq (4)) instead of eq (6)

Next in this sequence the last two demonstrations are repeated under rounding not truncation The expectation is that due to the better approximation and possible error cancellations arising from high and low approximations the errors in the representations will be less but that systematic errors will still exist The results (Figs 10 11) bear this out the errors often seem like much smaller noisy versions of the truncation errors In Fig 11 the quatemion errors display the fine two-humped structure longer since it is not overwhelmed as quickly by shrinking quaternion magnitude Rounding has a dramatically good effect on the systematic shrinking of matrices and quaternions under iterative self multiplication

So far we have, purely for intuition, dealt with the pathological case in which no normalization of rotation representations is performed. The arguably pathological case of approximation through truncation yields insight about the structure of systematic approximation errors in an iterative scheme. From here on we abandon truncation approximation (except in Section 4.7).

Figure 11 About Here

Figure 11: Data of the previous figure, showing length and direction errors. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of result vector. (b) Direction error.

Figure 9(a) Figure 9(b)

Figure 10(a) Figure 10(b)

Figure 12 About Here

Figure 12: Case IS1. Rotating the vector (1, 1, -1) by 1 radian increments about the axis (1, 1, 1) with 10 bits precision under rounding, normalizing the quaternion and matrix representations prior to application to the input vector, and normalizing the final vector representation to its initial length. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. The vector representation has lower variance, and the quaternion exhibits marked periodicity. Only the first 100 iterations are shown; the rest are similar. (b) Direction error.

We now look at two different styles of normalization. In the first, no normalization is performed on intermediate representations, but the final quaternion, matrix, and vector are normalized. In the second, the intermediate quaternion, matrix, and vector are normalized at each step. Normalization of the matrix and quaternion is not dependent on the input data, but the length of the input vector must be computed and remembered for the vector representation to be normalized. This fact might be significant if many vectors are being treated at once.
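The two regimes can be sketched for the quaternion representation (a minimal illustration at full double precision, where the difference is negligible; the paper's experiments emulate 10-bit arithmetic, where it is not):

```python
import numpy as np

def quat_mul(p, q):
    # Hamilton product of quaternions stored as (w, x, y, z).
    w = p[0] * q[0] - p[1:] @ q[1:]
    v = p[0] * q[1:] + q[0] * p[1:] + np.cross(p[1:], q[1:])
    return np.concatenate(([w], v))

def accumulate(dq, n, normalize_each_step):
    # Compose n copies of the step rotation dq, normalizing either
    # at every step (second style) or once at the end (first style).
    q = np.array([1.0, 0.0, 0.0, 0.0])  # identity rotation
    for _ in range(n):
        q = quat_mul(q, dq)
        if normalize_each_step:
            q = q / np.linalg.norm(q)
    if not normalize_each_step:
        q = q / np.linalg.norm(q)  # single final normalization
    return q

# Step: 1 radian about the normalized (1, 1, 1) axis.
axis = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
dq = np.concatenate(([np.cos(0.5)], np.sin(0.5) * axis))
```

Note that neither style needs the input data; by contrast, normalizing the vector representation requires remembering the input vector's original length.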

Fig. 12(a) shows the length error when the final quaternion and matrix are normalized prior to application to the input vector and the incrementally generated vector representation is normalized to its original length (here known to be unity). This case is called IS1, for Iterative Sequence, 1 normalization. The matrix normalization used is the one-crossproduct form; when only one normalization is being performed, the three types of matrix normalization perform virtually identically. The computations were performed under rounding to the usual ten bits of precision. If its initial length is not remembered, the vector representation cannot be normalized, and its error is the same as in the non-normalized case of Fig. 11, showing a linear error accumulation.

Fig. 12(b) compares direction error for the three representations, normalized as in Fig. 12(a). The comparative performance here is certainly not so clear-cut, and the error accumulations resemble random walks. Alternatively, the performance could depend more on the particular choice of input data (rotation and input vector) than on numerical properties of the methods.

Finally, a natural question concerns the effects of normalization applied after every rotation composition. This is case ISN (N normalizations). Fig. 13 shows the results for length and direction. As might be expected, for length error the normalized vector has the best performance (presumably as accurate as machine precision allows). The quaternion result has a systematically negative mean, higher variance, and periodicity. The matrix length error displayed in Fig. 13(a) results from the no-crossproduct form of matrix normalization. The trend is not an illusion; carried out to 400 iterations, the length error shows a marked systematic increase in the negative direction. On the other hand, iteration of the one-crossproduct normalization resulted in a factor of two worsening of the magnitude of the direction errors for the matrix representation. Repeated normalizations seem to have adversely affected the accuracy of the vector representation after many iterations.

Figure 11(a) Figure 11(b)

Figure 12(a) Figure 12(b)

Figure 13 About Here

Figure 13: Case ISN. As for the previous figure, but normalizing the rotation representation at each iteration, using the no-crossproduct matrix normalization, which improved direction errors but worsened length errors over the one-crossproduct form. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. (b) Direction error.

Figure 14 About Here

Figure 14: Case RS1. As for case IS1 (no normalization of intermediate representations), but using a sequence of random rotations. There is no difference in performance with the different matrix normalizations; the no-crossproduct matrix normalization is shown. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector (first 100 values; the rest are similar). (b) Direction error. The rather confusing results at least have some clear qualitative differences.

More work is needed to tell whether this is a repeatable effect. In any event, it seems clear that the repeated normalizations are certainly not improving matters much in this case over case IS1.

The main conclusion here is that the cost of normalizing quaternion or matrix representations after every rotation in a repeated sequence may not be repaid in results, and in fact may be actively harmful. The vector representation seems to do best of the three on length error and worst on direction error. More work (analytic or experimental) would be needed to clarify the performance ranks, or even to recommend or disqualify a representation for this task.

4.5 Concatenated Random Rotations

Here a sequence of random rotations is generated as described earlier, analogous to the sequence of identical rotations in the last experiment. They are applied cumulatively to an input vector. In each case computation is with 10 bits precision under rounding. The cases of a single final normalization and of normalizing every intermediate representation (called cases RS1 and RSN) were investigated, as were both the no- and one-crossproduct forms of matrix normalization. The results, comparable to those in cases IS1 and ISN (Figs. 12 and 13), are shown in Figs. 14 and 15.
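The generation method is "as described earlier," outside this excerpt; one standard way to draw uniformly distributed random rotations, used here purely as an assumed stand-in, is Shoemake's method for random unit quaternions:

```python
import numpy as np

def random_unit_quaternion(rng):
    # Uniformly distributed random rotation as a unit quaternion
    # (Shoemake's construction from three uniform variates; an
    # assumption here, since the paper's own generator is not shown).
    u1, u2, u3 = rng.random(3)
    return np.array([
        np.sqrt(1.0 - u1) * np.sin(2.0 * np.pi * u2),
        np.sqrt(1.0 - u1) * np.cos(2.0 * np.pi * u2),
        np.sqrt(u1) * np.sin(2.0 * np.pi * u3),
        np.sqrt(u1) * np.cos(2.0 * np.pi * u3),
    ])

# A 100-long random rotation sequence, as in the RS1/RSN experiments.
rng = np.random.default_rng(0)
qs = [random_unit_quaternion(rng) for _ in range(100)]
```

Each draw is exactly unit length up to rounding, so any magnitude drift in the experiments comes from the reduced-precision composition, not the generator.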

The results indicate that the vector representation remains better than the other two for length errors, but for direction errors the results are unclear. It is hard to draw clear distinctions either between the normalization methods or between the representations. Again the form of matrix normalization has some annoying effects, which motivate the next section.

Figure 13(a) Figure 13(b)

Figure 14(a) Figure 14(b)

Figure 15 About Here

Figure 15: Case RSN. As for case RS1, but normalizing all intermediate representations. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error using the one-crossproduct matrix normalization. (b) Direction error using the one-crossproduct form. (c) Direction error using the no-crossproduct form.

Case      mu-vec     mu-mat     mu-quat    sigma-vec  sigma-mat  sigma-quat  N
RaTr  L   -           0.006225   0.005026  -           0.001181   0.003369
RaTr  D   -           0.001787   0.004578  -           0.001094   0.002307
RaRd  L   -          -0.000310  -0.000225  -           0.001087   0.001263
RaRd  D   -           0.000965   0.001180  -           0.000542   0.000759
IS1   L    0.000276  -0.000260  -0.000262   0.000466   0.000857   0.001648   100
IS1   D    0.003694   0.004063   0.001435   0.001705   0.002298   0.001123   100
ISN   L    0.000137  -0.001772  -0.000811   0.000533   0.001106   0.001312   100
ISN   D    0.005646   0.004844   0.001606   0.002786   0.002872   0.001045   100
RS1   L   -0.000252  -0.000162  -0.000157   0.000521   0.001612   0.002022   100
RS1   D    0.003710   0.005854   0.006065   0.002150   0.003686   0.006065   100
RSN   L    0.000164  -0.000602  -0.001097   0.000473   0.000703   0.001666   100
RSN   D    0.002269   0.009830   0.005960   0.000874   0.004230   0.002236
RSN*  D    0.002269   0.005187   0.005960   0.000874   0.002248   0.002236

Table 2: Statistics (means mu and standard deviations sigma) for length (L) and direction (D) errors for some of the cases illustrated in this section. Cases RaTr and RaRd are the independent random rotations; IS1 and ISN are the iterative sequences (one normalization, and normalization of each intermediate representation); RS1 and RSN are the random sequences. Only the first 100 data values are used from the iterated rotations, to diminish the effect of longer-term trends. The second RSN direction line (*) is with no-crossproduct normalization (which only affects the matrix representation).

4.6 Statistical Summary

Table 2 summarizes some of the data from the cases so far.

4.7 Matrix Normalization

Some experiments explored the performance of the three types of matrix normalization (no, one, or two cross products) in the context of iterated identical and random rotations.
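The precise definitions of the three forms appear earlier in the paper; a plausible reading, used here as an assumption, is sketched below: the no-crossproduct form only rescales rows, the one-crossproduct form rebuilds the third row from the first two, and the two-crossproduct form rebuilds two rows and so guarantees an orthonormal result.

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def normalize_none(R):
    # "No crossproduct": rescale each row to unit length.
    # Rows may remain slightly non-orthogonal.
    return np.array([unit(r) for r in R])

def normalize_one(R):
    # "One crossproduct": renormalize rows 0 and 1, rebuild row 2
    # as their cross product.
    r0, r1 = unit(R[0]), unit(R[1])
    return np.array([r0, r1, np.cross(r0, r1)])

def normalize_two(R):
    # "Two crossproducts": keep row 0, rebuild row 2 = r0 x r1 and
    # then row 1 = r2 x r0, giving a fully orthonormal frame.
    r0 = unit(R[0])
    r2 = unit(np.cross(r0, R[1]))
    r1 = np.cross(r2, r0)
    return np.array([r0, r1, r2])

# A rotation matrix perturbed by a small uniform error, standing in
# for accumulated rounding damage.
c, s = np.cos(0.3), np.sin(0.3)
R_noisy = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]) + 1e-4
```

Under this reading, only the two-crossproduct form restores exact orthonormality; the cheaper forms trade residual skew for fewer operations.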

The experiments consist of two iterated identical rotations (Figs. 16, 17) under rounding and two iterated random rotation sequences under truncation and rounding. Fig. 18 gives the direction errors for two sequences of random rotations applied to an input vector.

Figure 15(a)

Figure 15(b)

Figure 15(c)

Figure 16 About Here

Figure 16: Iterated identical rotations of the unit vector in direction (1, 1, -1) by 1 radian about the axis (1, 1, 1), computed to 10 bits under rounding. Solid line - no cross products; dotted line - one cross product; dashed line - two cross products. (a) Length errors. (b) Direction errors.

Figure 17 About Here

Figure 17: As for the previous figure, but the rotation is of the unit vector in direction (1, 2, 3) by iterated rotations of 2 radians about the axis (3, -2, -1).

In no case is there a significant difference between the performances on length error at 200 iterations, but the one- and two-crossproduct forms significantly outperform the no-crossproduct form over 400 iterations, since the latter exhibits a systematically increasing negative length error and the former do not.

For direction error, few conclusions can be drawn about relative performance under iterated identical rotation sequences. It is perhaps surprising that the no-crossproduct form usually performs here at least as well as the other two over a small number of iterations, and that for random sequences there is no clear winner between normalization types, even when approximating with truncation. Thus the symptoms noticed in the last two sections, involving long-term relative effects of the normalization form, were not noticed over 200 iterations. The one- and two-crossproduct forms may be more stable over long iteration, but the issue of optimal normalization remains open.

4.8 Average Error of Concatenated Random Rotations

Which representation is best, or worst? To wind up this overlong exercise in trying to derive structure from noise, let us consider average behavior for the task of random rotation sequences (perhaps of more general interest than iterated identical rotations). Technical decisions on normalization are based on the conclusions of previous sections: approximation is by rounding, with normalization only after the last rotation. Matrix normalization is done with the one-crossproduct method. Each run is a single experiment like that of Section 4.5, only performed (with the same data) at a number of different precisions.
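One minimal way to emulate p-bit mantissa arithmetic is to round each value to p bits; a sketch (the paper's simulator presumably applies such rounding after every primitive operation, which this helper alone does not do):

```python
import math

def round_to_bits(x, p=10):
    # Round x to p bits of mantissa, emulating reduced-precision
    # floating point under round-to-nearest.
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)              # x = m * 2**e with 0.5 <= |m| < 1
    return math.ldexp(round(m * 2 ** p) / 2 ** p, e)
```

The relative error introduced is at most about 2^-p, matching the "ten bits of precision" regime used throughout the experiments; a truncating variant would use `math.floor` on the scaled mantissa's magnitude instead of `round`.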

Such a set of runs yields length and direction error surfaces plotted against the precision

Figure 18 About Here

Figure 18: Direction errors only, for a sequence of random rotations applied to an input vector. Legend as in the previous two figures. (a) Truncation to 10 bits. (b) Rounding to 10 bits.

Figure 16(a) Figure 16(b)

Figure 17(a) Figure 17(b)

Figure 18(a) Figure 18(b)

Figure 19(a) Figure 19(b)

Figure 19(c) Figure 19(d)

Rep        | Repeated Rot                | Random Seq
Vec(Mat)   | (9N+17)*, (6N+13)+, 1 sc    | 26N*, 19N+, N sc
Vec(Quat)  | (18N+4)*, (12N)+, 1 sc      | 22N*, 12N+, N sc
Quat       | (16N+22)*, (12N+12)+, 1 sc  | (20N+18)*, (12N+12)+, N sc
Matrix     | (27N+26)*, (18N+19)+, 1 sc  | (44N+9)*, (31N+6)+, N sc

Table 4: Operation counts for N-long rotation sequences. The long form of matrix multiplication is used here. Vec(Mat) is the vector representation with intermediate transformation by matrices; Vec(Quat), with transformation by the quaternion method. Multiplications, additions, and sine-cosine pairs are denoted by *, +, and sc.

matrix transformation) since matrix-vector multiplication costs one third of matrix-matrix multiplication

4.10 Conclusions and Questions

The conclusions we can draw at this point are the following:

1. Numerical simulation of variable precision arithmetic has yielded some tentative conclusions, but in many interesting cases they are rather delicate and depend on niceties of the implementation. Accurate performance predictions may only be possible with faithful simulation of the particular hardware and software at issue.

2. Practical differences in numerical accuracy tend to disappear at reasonable precisions. After 200 iterations of a rotation, 53 bits of mantissa precision (the double representation in the C language) yields no error to 10 decimal places. With 24 bits (the float representation), effects were noticeable in the 5th to 7th decimal place with all methods.

3. Normalization is vital before quaternion or matrix representations are applied to vectors, but is not as important during the composition of representations, and the extra numerical computation of frequent normalization may actually decrease accuracy in the final result.

4. The vector (with matrix transformation) and matrix representations for rotation sequences seem to outperform the quaternion for accuracy.

5. The vector (with matrix transformation) representation for rotation sequences, with a single normalization of vector length at the end, is most computationally efficient and has least length error on the average.

6. Averaged over many trials, the difference in accuracy of the three methods amounts only to about a single bit of precision for a small number of rotation compositions (say 25 or fewer).

7. Matrices are the least computationally efficient on rotation sequences.
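Conclusion 2 is easy to spot-check at native precisions. A sketch using the Rodrigues formula for the rotation matrix (the exact error magnitudes will vary with the rotation chosen):

```python
import numpy as np

def iterate_rotation(dtype, n=200):
    # Apply the same rotation n times to a unit vector at the given
    # precision, and return the resulting length error.
    axis = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
    c, s = np.cos(1.0), np.sin(1.0)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    # Rodrigues formula: R = I + sin(t) K + (1 - cos(t)) K^2.
    R = (np.eye(3) + s * K + (1.0 - c) * (K @ K)).astype(dtype)
    v = (np.array([1.0, 1.0, -1.0]) / np.sqrt(3.0)).astype(dtype)
    for _ in range(n):
        v = R @ v
    return abs(float(np.linalg.norm(v)) - 1.0)
```

At 53-bit double precision the drift is far below 10 decimal places; at 24-bit single precision it lands around the 5th to 7th decimal place, consistent with the conclusion above.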



There are also some remaining basic questions:

1. What is a reasonable analytic approach to the numerical accuracy problem for interesting rotational tasks (say, sequences of random rotations under rounding)?

2. How should optimal matrix or quaternion normalization methods be defined and constructed?

5 Discussion

The area of robust computational geometry is becoming popular, with an annual international conference and increasingly sophisticated theoretical formulations. Very little of that work (if any) is related to rotation algorithms. Symbolic algebra systems can help with analysis [4]. A good treatment of computer arithmetic and rounding appears in [15], with an application to detecting line intersection. Serious qualitative representational problems arise when issues of topology (line connectivity) are at stake, and much of the work is on computing line intersection accurately with finite precision arithmetic [19]. For instance, Milenkovic's analysis is based on facts about line intersection, such as the number of extra bits that are necessary to guarantee a specific precision of answer. Some of these facts are surprising: it takes at least three times the input precision to calculate a line intersection location to the same precision [11]. Robust polygon and polyhedron intersection algorithms, as well as various line-sweeping algorithms, are based on accurate line intersection algorithms [8].

Several points of comparison between rotation parameterizations and representations have been raised in the preceding sections. It is perhaps worth repeating that little evidence for dramatic differences in numerical accuracy has emerged, though the evidence might tend to favor matrices slightly and to condemn the vector representation.

One of the common applications of rotational transforms is to affect several vectors at once, either in the context of transforming a set of points around (as in graphics) or in handling coordinate systems represented as n points (n - 1 of them points at infinity representing axis directions, and one representing the origin of coordinates, in a homogeneous coordinate system). In the first case the repeated application of the same transform will be more economical if the work is done in the matrix representation. In the second the same may well be true, but of course the whole point of homogeneous coordinates is to make the projective (a fortiori affine) transforms into linear ones implemented with matrix multiplication. This means, for example, that coordinate systems, rigid transformations including translation and rotation, and point projective transforms (simple camera models) can all be represented as matrices and composed with matrix multiplication, and that the operation of such transforms on vectors is simply matrix-vector multiplication. In this context homogeneous coordinates by definition require the matrix representation.
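The homogeneous-coordinate point can be made concrete: a rigid transform packs rotation and translation into one 4x4 matrix, and a "point at infinity" (w = 0) representing an axis direction is rotated but immune to translation. A small sketch:

```python
import numpy as np

def homogeneous(R=np.eye(3), t=np.zeros(3)):
    # Pack a rotation R and translation t into a 4x4 homogeneous transform.
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Rigid motion: rotate 90 degrees about Z, then translate by (1, 0, 0).
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
T = homogeneous(np.eye(3), [1.0, 0.0, 0.0]) @ homogeneous(Rz)

p = np.array([1.0, 0.0, 0.0, 1.0])   # an ordinary point (w = 1)
d = np.array([1.0, 0.0, 0.0, 0.0])   # an axis direction at infinity (w = 0)
```

The point is rotated and then translated, while the direction only rotates, which is exactly why an n-point coordinate frame with n - 1 points at infinity transforms correctly under one matrix multiplication.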

If the rotation at issue is a general one (say, specified in conic (phi, n) parameters), then conversion either to the quaternion or the matrix form is easy. Thus the representations in


this case do not dictate the semantics, or how one wants to think about the rotation. If the rotation is specified in terms of rotations about (X, Y, Z) coordinate axes, then the matrix representation is easily and robustly constructed.

Real-time applications involving concatenating rotations and applying them to a small number of vectors (perhaps one: the position of a robot hand) need detailed analysis to see which representation is the most efficient. Similarly for more complex geometric calculations.

A common application of rotation representations in vision is to express the motion of the observer. Often the rotational motions, for mechanical or mathematical reasons, are expressed as rotations about some fixed axes, which in practice are usually orthogonal. Examples are the motions induced by pan and tilt camera platforms, or the expression of egomotion in terms of rotations about the observer's local X, Y, Z axes. In this case the matrix representation is the clear choice, to maximize efficiency and intuition.
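For fixed-axis motions such as a pan-tilt platform, the matrix form is a direct composition of elementary axis rotations (the axis assignment below is hypothetical; platforms differ):

```python
import numpy as np

def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def Ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def pan_tilt(pan, tilt):
    # Hypothetical head: pan about the vertical Y axis, then tilt
    # about the camera's X axis.
    return Ry(pan) @ Rx(tilt)
```

Each elementary matrix is built from one sine-cosine pair, and the composition order directly encodes the mechanical gimbal order, which is the source of the intuition claimed above.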

Reconstructing the rotation that transformed known points is a useful vision task that requires fitting data to a model of the transformation. The straightforward route, followed since Roberts' thesis [17], is to do a linear least squares fit to the transform's matrix representation. This approach is perhaps risky, since the matrix elements are not independent and the error being minimized thus may not be the one desired. The alternative of using conic parameters has been successfully used [12] and may well be a better idea. Recently, more sophisticated Kalman filter methods have been used to derive motion from point matches through time [5]. Model-matching with constraints allows a model to be matched to a class of objects, and promising approaches are emerging that use hybrid methods of symbolic algebra (implementing polynomial triangulation to get explicit solutions for known constraints) followed by root-finding and sophisticated minimization techniques to search for minimum-error solutions [13, 14].
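The risk mentioned for the linear least squares route can be demonstrated: fitting the nine matrix elements as independent unknowns to noisy correspondences yields a matrix near the true rotation but not exactly orthogonal. A sketch with synthetic data (the fitting setup is illustrative, not Roberts' original):

```python
import numpy as np

rng = np.random.default_rng(1)

# True rotation: 0.5 rad about Z.
c, s = np.cos(0.5), np.sin(0.5)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Noisy point correspondences y_i ~ R_true x_i.
X = rng.standard_normal((50, 3))
Y = X @ R_true.T + 0.01 * rng.standard_normal((50, 3))

# Unconstrained linear least squares for the 9 elements:
# minimize ||X M^T - Y||, i.e. M^T = lstsq(X, Y).
M = np.linalg.lstsq(X, Y, rcond=None)[0].T

# M is close to R_true but generally NOT orthogonal, since nothing
# in the fit enforced the dependence among the matrix elements.
ortho_defect = np.linalg.norm(M @ M.T - np.eye(3))
```

Enforcing orthogonality (or fitting conic parameters directly) changes which error is minimized, which is exactly the concern raised above.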

A less common but important set of applications is to reason about and manipulate rotations themselves. For example, one may want to sample rotation space evenly [9], or to create smoothly interpolated paths in rotation space between a given set of rotations [16]. In these cases the Euler-Rodrigues parameters are the most convenient.
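Smooth interpolation is most natural in the Euler-Rodrigues (unit quaternion) parameters; the basic building block is spherical linear interpolation, sketched below (path schemes such as [16] build on this idea, though their details may differ):

```python
import numpy as np

def slerp(q0, q1, t):
    # Constant-speed geodesic between unit quaternions q0 and q1,
    # evaluated at parameter t in [0, 1].
    d = np.clip(q0 @ q1, -1.0, 1.0)
    if d < 0.0:                  # q and -q are the same rotation;
        q1, d = -q1, -d          # take the shorter arc
    theta = np.arccos(d)
    if theta < 1e-9:             # nearly identical rotations
        return q0
    return (np.sin((1.0 - t) * theta) * q0
            + np.sin(t * theta) * q1) / np.sin(theta)

# Halfway between the identity and a 1 rad rotation about Z
# should be a 0.5 rad rotation about Z.
q_id = np.array([1.0, 0.0, 0.0, 0.0])
q_z1 = np.array([np.cos(0.5), 0.0, 0.0, np.sin(0.5)])
q_mid = slerp(q_id, q_z1, 0.5)
```

The result stays on the unit quaternion sphere by construction, so no renormalization step is needed between interpolated samples.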

Another example is the use, since 1932, of quaternions in the neurophysiological literature to describe eye kinematics [22]. Models of human three-dimensional oculomotor performance, especially for the vestibulo-ocular reflex and feedback models of saccades, involve computations on rotations that are much more stable when carried out in conic or Euler-Rodrigues parameters using quaternion arithmetic. Subtractive feedback laws adequate for simple (left or right) eye movements do not generalize to arbitrary rotational axes. The quaternion model uses multiplicative feedback, explains human performance better than similar models based on rotation matrices, and leads to explicit predictions, such as a four-channel saccade computation [21].


6 Bibliography

[1] S. L. Altmann. Rotations, Quaternions and Double Groups. Clarendon Press, Oxford, 1986.

[2] C. M. Brown. Gaze controls with interactions and delays. In DARPA Image Understanding Workshop (submitted to IEEE-TSMC), pages 200-218, May 1989.

[3] C. M. Brown. Prediction in gaze and saccade control. Technical Report OUEL 177189 (UR TR-295), Oxford University Dept. Engg. Science (U. Rochester Comp. Sci. Dept.), May 1989.

[4] C. I. Connolly, D. Kapur, J. L. Mundy, and R. Weiss. GeoMeter: A system for modeling and algebraic manipulation. In Proceedings DARPA Image Understanding Workshop, pages 797-804, May 1989.

[5] O. D. Faugeras, F. Lustman, and G. Toscani. Motion and structure from motion from point and line matches. In Proceedings International Conference on Computer Vision, pages 25-34, June 1987.

[6] J. D. Foley and A. van Dam. Fundamentals of Interactive Computer Graphics. Addison-Wesley, 1982.

[7] W. R. Hamilton. On quaternions; or on a new system of imaginaries in algebra. Philosophical Magazine, 3rd Ser., 25:489-495, 1844.

[8] C. Hoffmann and J. Hopcroft. Towards implementing robust geometric computations. In Proceedings ACM Symposium on Computational Geometry, 1988.

[9] Berthold K. P. Horn. Robot Vision. MIT Press / McGraw-Hill, 1986.

[10] K. I. Kanatani. Group Theory in Computer Vision. Springer-Verlag, 1989.

[11] V. J. Milenkovic. Robust geometric computations for vision and robotics. In Proceedings DARPA Image Understanding Workshop, pages 764-773, May 1989.

[12] J. L. Mundy. Private communication, August 1989.

[13] J. L. Mundy. The application of symbolic algebra techniques to object modeling. Lecture, University of Oxford, August 1989.

[14] J. L. Mundy. Symbolic representation of object models. In preparation, August 1989.

[15] T. Ottmann, G. Thiemt, and C. Ullrich. Numerical stability of geometric algorithms. In Proceedings ACM Symposium on Computational Geometry, 1987.

[16] K. S. Roberts, G. Bishop, and S. K. Ganapathy. Smooth interpolation of rotational motion. In Computer Vision and Pattern Recognition 1988, pages 724-729, June 1988.

[17] L. G. Roberts. Machine perception of three-dimensional solids. In J. T. Tippett et al., editors, Optical and Electro-Optical Information Processing, pages 159-197. MIT Press, 1968.

[18] O. Rodrigues. Des lois géométriques qui régissent les déplacements d'un système solide dans l'espace, et de la variation des coordonnées provenant de ses déplacements considérés indépendamment des causes qui peuvent les produire. Journal de Mathématiques Pures et Appliquées, 5:380-440, 1840.

[19] D. Salesin, J. Stolfi, and L. Guibas. Epsilon geometry: building robust algorithms from imprecise computations. In Proceedings ACM Symposium on Computational Geometry, 1989.

[20] R. Taylor. Planning and execution of straight line manipulator trajectories. In J. M. Brady, J. M. Hollerbach, T. L. Johnson, T. Lozano-Pérez, and M. T. Mason, editors, Robot Motion: Planning and Control. MIT Press, 1982.

[21] D. Tweed and T. Vilis. Implications of rotational kinematics for the oculomotor system in three dimensions. Journal of Neurophysiology, 58(4):832-849, 1987.

[22] G. Westheimer. Kinematics for the eye. Journal of the Optical Society of America, 47:967-974, 1932.



Figure 8 About Here

Figure 8: Case IS0. Iterative transformation of the vector [1, 1, -1] about the axis [1, 1, 1] by 1 rad, 200 times. Calculations done with ten bits of mantissa precision under truncation; no normalization of the final representation before applying it to the vector. This choice of input parameters results in a representative performance, but not a normative one. (a) Result vector error projected on the X-Z plane. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. The matrix errors are a very tight helix up the Z axis, with the vector errors only slightly larger. (b) As for (a), but showing the X-Y projection. The matrix and vector errors project to the small dark blob at the origin.

First consider the vector representation. It is acted on at each iteration by an identical but errorful transform of eq. (6). We see it in both views spiraling away from the correct initial value, its length shrinking and its direction error increasing. The performance of the vector method might be improved by a factor of two on average by using the matrix transformation at every step (considering the arguments in Section 4.2 and the results shown in Fig. 6). Another important aspect of this method is its variability in performance for different rotations, arising from the variability in the accuracy of the first calculated rotation (again Fig. 6).

As predicted in Section 4.2, the matrix representation is affected in its scaling property but not significantly in its rotational properties: there is effectively no directional error, but the result shrinks to less than half its original length in 200 iterations.

The quaternion representation acts, using eq. (6), by adding two cross product vectors to the original input vector; as the quaternion components become increasingly systematically errorful, they affect the result in the interesting way shown in the figure. One effect is the decaying magnitude of the quaternion. The other is that as the cumulative rotation phi increases from zero, the sin(phi/2) weighting of A, the vector part of the quaternion, starts at zero and returns to zero with a period of 2*pi/t, where t is the truncated value of the incremental rotation angle. Thus with that period the effective rotation error varies sinusoidally, and returns to zero as the magnitude of A does.

The same data are plotted for 400 iterations this time in Fig. 9. First consider length errors. This presentation emphasizes the exponential decay of the answer's length to zero with the matrix and vector representations. As the quaternion's magnitude ultimately decays to zero, the result vector becomes unaffected by the rotation and remains equal to the input vector, so its length error stabilizes at zero. The short-period sinusoidal character of the length error arises from the arguments of the last paragraph. The initially increasing envelope arises from the increasing inaccuracy caused by the truncation; the ultimate exponential decay of the envelope arises from the shrinking of the quaternion components.

Considering direction errors, the matrix and conic representations have (locally) linearly increasing errors as they spiral away from the true answer direction. (The linear increase cannot be maintained forever on the orientation sphere, of course.)


Figure 7(a) Figure 7(b)

Figure 8(a) Figure 8(b)

Figure 9 About Here

Figure 9 Case 150 for 400 iterations (see text) (a) Length error of result vector Solid line - vector representation dotted line - matrix dashed line - quaternion The vector and matrix errors coincide (b) As for (a) but showing direction error for the resulting vector The vector and matrix errors are very close to zero

Figure 10 About Here

Figure 10 Case ISO under rounding instead of truncation Solid line - vector represenshytation dotted line - matrix dashed line - quaternion Though the forms of the errors are similar to the truncation case their relative magnitudes are more equal (a) Result vector projected on X - Z plane (b) As for (a) but showing X - Y projection

nion methods direction error ultimately becomes purely periodic as the answer vector becomes static The direction error as a function of iteration number t and angle r beshytween the rotation axis and the input vector is then ofthe form laquorsin t)2+(r-rc05 t)2)12 or the length of a circular chord from a fixed point on the circle to a point moving around the circle at constant velocity Before that time the direction error exhibits a more comshyplex two-humped form this arises from the two separate cross product components of the vector increment in eq (6) Exercise Would there be any difference in direction and length errors if the rotation is accomplished by conjugation (eq (4)) instead of eq (6)

Next in this sequence the last two demonstrations are repeated under rounding not truncation The expectation is that due to the better approximation and possible error cancellations arising from high and low approximations the errors in the representations will be less but that systematic errors will still exist The results (Figs 10 11) bear this out the errors often seem like much smaller noisy versions of the truncation errors In Fig 11 the quatemion errors display the fine two-humped structure longer since it is not overwhelmed as quickly by shrinking quaternion magnitude Rounding has a dramatically good effect on the systematic shrinking of matrices and quaternions under iterative self multiplication

So far we have purely for intuition dealt with the pathological case in which no normalization of rotation representations is performed The arguably pathological case of approximation through truncation yields insight about the structure of systematic approxshyimation errors in an iterative scheme From here on we abandon truncation approximation (except in Section 47)

Figure 11 About Here

Figure 11 Data of the previous figure showing length and direction errors Solid line shyvector representation dotted line - matrix dashed line - quaternion (a) Length error of result vector (b) Direction error

16

10

us

100

10

Qa _llIr1Etla1O~T

A ~ I

~

1 1 j(1 1 I I I

~ I i _r I J

- I 1 I

I -- I V V V

o 100 200 300 400

Figure 9(a) Figure 9(b)

ll-lErrV _ IlmiddotYErrV ~

Figure 10(a) Figure lO(b)

Figure 12 About Here

Figure 12 Case lSI Rotating the vector (11 -I) by 1 radian increments about the axis (111) with 10 bits precision under rounding normalizing the quaternion and matrix representation prior to application to the input vector and normalizing the final vector representation to its initial length Solid line - vector representation dotted line - matrix dashed line - quaternion (a) Length error of the result vector The vector representation has lower variance and the quaternion exhibits marked periodicity Only the first 100 iterations are shown the rest are similar (b) Direction error

We now look at two different styles of normalization In the first no normalization is performed on intermediate representations but the final quaternion matrix and vector are normalized In the second the intermediate quaternion matrix and vector are normalized at each step Normalization of the matrix and quaternion is not dependent on the input data but the length of the input vector must be computed and remembered for the vector representation to be normalized This fact might be significant if many vectors are being treated at once

Fig 12(a) shows the length error when the final quaternion and matrix are normalized prior to application to the input vector and the incrementally generated vector represenshytation is normalized to its original length (here known to be unity) This case is called lSI for Iterative Sequence 1 normalization The matrix normalization used is the one crossproduct form When only one normalization is being performed the three types of normalization perform virtually identically The computations were performed under rounding to the usual ten bits of precision If its initial length is not remembered the vector representation cannot be normalized and its error is the same as the non-normalized case of Fig 11 showing a linear error accumulation

Fig. 12(b) compares direction error for the three representations, normalized as in Fig. 12(a). The comparative performance here is certainly not so clear cut, and the error accumulations resemble random walks. Alternatively, the performance could depend more on the particular choice of input data (rotation and input vector) than on numerical properties of the methods.

Finally, a natural question concerns the effects of normalization applied after every rotation composition. This is case ISN (N normalizations). Fig. 13 shows the results for length and direction. As might be expected, for length error the normalized vector has the best performance (presumably as accurate as machine precision allows). The quaternion result has a systematically negative mean, higher variance, and periodicity. The matrix length error displayed in Fig. 13(a) results from the no crossproduct form of matrix normalization. The trend is not an illusion; carried out to 400 iterations, the length error shows a marked systematic increase in the negative direction. On the other hand, iteration of the one crossproduct normalization resulted in a factor of two worsening of the magnitude of the direction errors for the matrix representation. Repeated normalizations seem to have adversely affected the accuracy of the vector representation after many iterations,


Figure 13 About Here

Figure 13: Case ISN. As for the previous figure, but normalizing the rotation representation at each iteration, using no crossproduct matrix normalization, which improved direction errors but worsened length errors over the one crossproduct form. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. (b) Direction error.

Figure 14 About Here

Figure 14: Case RS1. As for case IS1 (no normalization of intermediate representations), but using a sequence of random rotations. There is no difference in performance between the different matrix normalizations; no crossproduct matrix normalization is used. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector (first 100 values; the rest are similar). (b) Direction error. The rather confusing results at least have some clear qualitative differences.

but more work is needed to tell if this is a repeatable effect. In any event, it seems clear that the repeated normalizations are certainly not improving matters much in this case over case IS1.

The main conclusion here is that the cost of normalizing quaternion or matrix representations after every rotation in a repeated sequence may not be repaid in results, and in fact may be actively harmful. The vector representation seems to do best of the three on length error and worst on direction error. More work (analytic or experimental) would be needed to clarify the performance ranks, or even to recommend or disqualify a representation for this task.

4.5 Concatenated Random Rotations

Here a sequence of random rotations is generated as described earlier, analogous to the sequence of identical rotations in the last experiment. They are applied cumulatively to an input vector. In each case computation is with 10 bits precision under rounding. The cases of a single final normalization and of normalizing every intermediate representation (called cases RS1 and RSN) were investigated. Both the no- and one-crossproduct forms were investigated. The results, comparable to those in cases IS1 and ISN (Figs. 12 and 13), are shown in Figs. 14 and 15.
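The reduced-precision arithmetic can be simulated by rounding every intermediate value to a b-bit mantissa. A sketch of case RS1 under that assumption (the random-rotation generator shown is one plausible choice, not necessarily the one described earlier in the report):

```python
import numpy as np

def round_to_bits(x, bits=10):
    """Round each value to `bits` bits of mantissa, mimicking the
    reduced-precision arithmetic under rounding."""
    m, e = np.frexp(x)                       # x = m * 2**e, 0.5 <= |m| < 1
    return np.ldexp(np.round(m * 2.0**bits) / 2.0**bits, e)

def random_rotation(rng):
    """A random rotation matrix from a random axis and angle
    (illustrative generator)."""
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    t = rng.uniform(0.0, 2.0 * np.pi)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(t) * K + (1.0 - np.cos(t)) * (K @ K)

rng = np.random.default_rng(0)
v = np.array([1.0, 1.0, -1.0]) / np.sqrt(3.0)
M = np.eye(3)
for _ in range(100):                # RS1: compose, rounding each product,
    M = round_to_bits(M @ random_rotation(rng))
w = round_to_bits(M @ v)            # with a single application at the end
length_error = abs(np.linalg.norm(w) - 1.0)
```

Replacing the single final application with a normalize-every-step loop gives case RSN.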

The results indicate that the vector representation remains better than the other two for length errors, but for direction errors the results are unclear. It is hard to draw clear distinctions either between the normalization methods or the representations. Again the form of matrix normalization has some annoying effects, which motivate the next section.


Figure 15 About Here

Figure 15: Case RSN. As for case RS1, but normalizing all intermediate representations. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error using one crossproduct matrix normalization. (b) Direction error using the one crossproduct form. (c) Direction error using the no crossproduct form.

Case        μ-vec      μ-mat      μ-quat     σ-vec      σ-mat      σ-quat
RaTr   L    -          0.006225   0.005026   -          0.001181   0.003369
       D    -          0.001787   0.004578   -          0.001094   0.002307
RaRd   L    -         -0.000310  -0.000225   -          0.001087   0.001263
       D    -          0.000965   0.001180   -          0.000542   0.000759
IS1    L    0.000276  -0.000260  -0.000262   0.000466   0.000857   0.001648
(100)  D    0.003694   0.004063   0.001435   0.001705   0.002298   0.001123
ISN    L    0.000137  -0.001772  -0.000811   0.000533   0.001106   0.001312
(100)  D    0.005646   0.004844   0.001606   0.002786   0.002872   0.001045
RS1    L   -0.000252  -0.000162  -0.000157   0.000521   0.001612   0.002022
(100)  D    0.003710   0.005854   0.006065   0.002150   0.003686   0.006065
RSN    L    0.000164  -0.000602  -0.001097   0.000473   0.000703   0.001666
(100)  D    0.002269   0.009830   0.005960   0.000874   0.004230   0.002236
       D    0.002269   0.005187   0.005960   0.000874   0.002248   0.002236

Table 2: Statistics for length (L) and direction (D) errors for some of the cases illustrated in this section. Cases RaTr and RaRd are the independent random rotations; IS1 and ISN are the iterative sequences (one normalization, and normalization of each intermediate representation); and RS1 and RSN are the random sequences. Only the first 100 data values are used from the iterated rotations, to diminish the effect of longer-term trends. The second RSN line is with no crossproduct normalization (this affects only the matrix representation).

4.6 Statistical Summary

Table 2 summarizes some of the data from the cases so far.

4.7 Matrix Normalization

Some experiments explored the performance of the three types of matrix normalization (no, one, or two cross products) in the context of iterated identical and random rotations.
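One plausible reading of the three forms, sketched in numpy (the exact constructions used in the experiments may differ):

```python
import numpy as np

def norm_rows(R):
    """'No crossproduct': rescale each row to unit length.  Rows need not
    end up mutually orthogonal."""
    return R / np.linalg.norm(R, axis=1, keepdims=True)

def norm_one_cross(R):
    """'One crossproduct': renormalize row 0, Gram-Schmidt row 1 against it,
    and rebuild row 2 as their cross product."""
    r0 = R[0] / np.linalg.norm(R[0])
    r1 = R[1] - np.dot(R[1], r0) * r0
    r1 = r1 / np.linalg.norm(r1)
    return np.stack([r0, r1, np.cross(r0, r1)])

def norm_two_cross(R):
    """'Two crossproducts': rebuild row 2 from rows 0 and 1, then row 1
    from rows 2 and 0, renormalizing along the way."""
    r0 = R[0] / np.linalg.norm(R[0])
    r2 = np.cross(r0, R[1])
    r2 = r2 / np.linalg.norm(r2)
    return np.stack([r0, np.cross(r2, r0), r2])
```

The cross product variants return exactly orthonormal matrices (to machine precision); the row-scaling variant only guarantees unit rows, which matches its weaker long-run behavior observed below.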

The experiments consist of two iterated identical rotations (Figs. 16, 17) under rounding, and two iterated random rotation sequences under truncation and rounding. Fig. 18 gives the direction errors for two sequences of random rotations applied to an input vector.


Figure 16 About Here

Figure 16: Iterated identical rotations of the unit vector in direction (1, 1, -1) by 1 radian about the axis (1, 1, 1), computed to 10 bits under rounding. Solid line - no cross products; dotted line - one cross product; dashed line - two cross products. (a) Length errors. (b) Direction errors.

Figure 17 About Here

Figure 17: As for the previous figure, but the rotation is of the unit vector in direction (1, 2, 3) by iterated rotations of 2 radians about the axis (3, -2, -1).

In no case is there significant difference between the performances on the length error at 200 iterations, but the one- or two crossproduct forms significantly outperform the no crossproduct form over 400 iterations, since the latter exhibits systematically increasing negative length error and the former do not.

For direction error, few conclusions can be drawn about relative performance under iterated identical rotation sequences. It is perhaps surprising that the no crossproduct form usually performs here at least as well as the other two over a small number of iterations, and that for random sequences there is no clear winner between normalization types, even approximating with truncation. Thus the symptoms noticed in the last two sections, involving long-term relative effects of the normalization form, were not noticed over 200 iterations. The one- and two crossproduct forms may be more stable over long iteration, but the issue of optimal normalization remains open.

4.8 Average Error of Concatenated Random Rotations

Which representation is best, or worst? To wind up this overlong exercise in trying to derive structure from noise, let us consider average behavior for the task of random rotation sequences (perhaps of more general interest than iterated identical rotations). Technical decisions on normalization are based on the conclusions of previous sections: approximation is by rounding, with normalization only after the last rotation. Matrix normalization is done with the one crossproduct method. Each run is a single experiment like that of Section 4.5, only performed (with the same data) at a number of different precisions.

Such a set of runs yields length and direction error surfaces, plotted against the precision.
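A sketch of such a run, sweeping mantissa precision and recording the length error at each iteration (the rounding helper and rotation builder are illustrative assumptions, not the report's code):

```python
import numpy as np

def round_to_bits(x, bits):
    """Round each value to `bits` bits of mantissa."""
    m, e = np.frexp(x)
    return np.ldexp(np.round(m * 2.0**bits) / 2.0**bits, e)

def rot(axis, t):
    """Rotation matrix for angle t about `axis` (Rodrigues formula)."""
    n = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0.0, -n[2], n[1]],
                  [n[2], 0.0, -n[0]],
                  [-n[1], n[0], 0.0]])
    return np.eye(3) + np.sin(t) * K + (1.0 - np.cos(t)) * (K @ K)

v0 = np.array([1.0, 1.0, -1.0]) / np.sqrt(3.0)
surface = {}                        # (precision, iteration) -> length error
for bits in range(6, 17, 2):
    M = np.eye(3)
    for i in range(1, 101):
        M = round_to_bits(M @ rot([1, 1, 1], 1.0), bits)
        w = round_to_bits(M @ v0, bits)
        surface[bits, i] = abs(np.linalg.norm(w) - 1.0)
```

Plotting `surface` against precision and iteration count gives the error surfaces described above.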

Figure 18 About Here

Figure 18: Direction errors only, for a sequence of random rotations applied to an input vector. Legend as in the previous two figures. (a) Truncation to 10 bits. (b) Rounding to 10 bits.


Rep         Repeated Rot                 Random Seq
Vec(Mat)    (9N+17)*  (6N+13)+   1 sc    26N*       19N+       N sc
Vec(Quat)   (18N+4)*  (12N)+     1 sc    22N*       12N+       N sc
Quat        (16N+22)* (12N+12)+  1 sc    (20N+18)*  (12N+12)+  N sc
Matrix      (27N+26)* (18N+19)+  1 sc    (44N+9)*   (31N+6)+   N sc

Table 4: Operation counts for N-long rotation sequences. The long form of matrix multiplication is used here. Vec(Mat) is the vector representation with intermediate transformation by matrices; Vec(Quat), with transformation by the quaternion method. Multiplications, additions, and sine-cosine pairs are denoted by *, +, and sc.

matrix transformation), since matrix-vector multiplication costs one third of matrix-matrix multiplication.
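The random-sequence column of Table 4 can be evaluated directly; for example, treating a multiplication and an addition as equal cost (a simplifying assumption) at N = 50:

```python
# Operation counts from the random-sequence column of Table 4, as functions
# of the sequence length N.  Each entry is (multiplications, additions); the
# sine-cosine pairs (N for every representation) are omitted.
counts = {
    "Vec(Mat)":  lambda N: (26 * N,      19 * N),
    "Vec(Quat)": lambda N: (22 * N,      12 * N),
    "Quat":      lambda N: (20 * N + 18, 12 * N + 12),
    "Matrix":    lambda N: (44 * N + 9,  31 * N + 6),
}

N = 50
totals = {}
for rep, f in counts.items():
    mults, adds = f(N)
    totals[rep] = mults + adds    # crude total under the equal-cost assumption
```

Under this count the matrix representation is the most expensive by a wide margin, in line with conclusion 7 below; the ranking among the other three depends on the relative cost assigned to multiplications and additions.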

4.10 Conclusions and Questions

The conclusions we can draw at this point are the following:

1. Numerical simulation of variable precision arithmetic has yielded some tentative conclusions, but in many interesting cases they are rather delicate and depend on niceties of the implementation. Accurate performance predictions may only be possible with faithful simulation of the particular hardware and software at issue.

2. Practical differences in numerical accuracy tend to disappear at reasonable precisions. After 200 iterations of a rotation, 53 bits of mantissa precision (the double representation in the C language) yields no error to 10 decimal places. With 24 bits (the float representation), effects were noticeable in the 5th to 7th decimal place with all methods.

3. Normalization is vital before quaternion or matrix representations are applied to vectors, but not as important during the composition of representations, and the extra numerical computation of frequent normalization may actually decrease accuracy in the final result.

4. The vector (with matrix transformation) and matrix representations for rotation sequences seem to outperform the quaternion for accuracy.

5. The vector (with matrix transformation) representation for rotation sequences, with a single normalization of vector length at the end, is most computationally efficient and has least length error on the average.

6. Averaged over many trials, the difference in accuracy of the three methods amounts only to about a single bit of precision for a small number of rotation compositions (say 25 or fewer).

7. Matrices are the least computationally efficient on rotation sequences.
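Conclusion 2 is easy to reproduce in spirit with IEEE float32 versus float64 (a sketch; the report's C float/double experiments may differ in detail):

```python
import numpy as np

def length_error(dtype, iters=200):
    """Length error of a unit vector after `iters` applications of one
    rotation (1 radian about (1,1,1)), computed in the given float type."""
    axis = np.array([1, 1, 1], dtype=dtype)
    axis = axis / np.linalg.norm(axis)
    t = dtype(1.0)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]], dtype=dtype)
    R = np.eye(3, dtype=dtype) + np.sin(t) * K + (dtype(1) - np.cos(t)) * (K @ K)
    v = np.array([1, 1, -1], dtype=dtype)
    v = v / np.linalg.norm(v)
    for _ in range(iters):
        v = R @ v                     # all arithmetic stays in `dtype`
    return abs(float(np.linalg.norm(v)) - 1.0)

e32 = length_error(np.float32)        # 24-bit mantissa
e64 = length_error(np.float64)        # 53-bit mantissa
```

The float64 error is negligible at this scale, while the float32 error lands in roughly the decimal range quoted above.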


There are also some remaining basic questions:

1. What is a reasonable analytic approach to the numerical accuracy problem for interesting rotational tasks (say, sequences of random rotations under rounding)?

2. How should optimal matrix or quaternion normalization methods be defined and constructed?

5 Discussion

The area of robust computational geometry is becoming popular, with an annual international conference and increasingly sophisticated theoretical formulations. Very little of that work (if any) is related to rotation algorithms. Symbolic algebra systems can help with analysis [4]. A good treatment of computer arithmetic and rounding appears in [15], with an application to detecting line intersection. Serious qualitative representational problems arise when issues of topology (line connectivity) are at stake, and much of the work is on computing line intersection accurately with finite precision arithmetic [19]. For instance, Milenkovic's analysis is based on facts about line intersection, such as the number of extra bits that are necessary to guarantee a specific precision of answer. Some of these facts are surprising: it takes at least three times the input precision to calculate a line intersection location to the same precision [11]. Robust polygon and polyhedron intersection algorithms, as well as various line-sweeping algorithms, are based on accurate line intersection algorithms [8].
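The bit-growth fact can be illustrated with exact rational arithmetic (a small demonstration using the standard determinant formula, not Milenkovic's construction):

```python
from fractions import Fraction

def line_intersection(p1, p2, p3, p4):
    """Exact intersection of line p1p2 with line p3p4 (integer endpoints),
    via the standard 2x2-determinant formula, in rational arithmetic."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)   # zero if parallel
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return (Fraction(a * (x3 - x4) - (x1 - x2) * b, d),
            Fraction(a * (y3 - y4) - (y1 - y2) * b, d))

# endpoints representable in 8 bits; the exact intersection coordinates
# need numerators and denominators of roughly triple that width
x, y = line_intersection((0, 0), (251, 241), (0, 250), (249, 3))
```

The numerator is a combination of products of three 8-bit values and the denominator of two, which is the source of the roughly threefold precision requirement quoted above.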

Several points of comparison between rotation parameterizations and representations have been raised in the preceding sections. It is perhaps worth repeating that little evidence for dramatic differences in numerical accuracy has emerged, though the evidence might tend to favor matrices slightly and to condemn the vector representation.

One of the common applications of rotational transforms is to affect several vectors at once, either in the context of transforming a set of points around (as in graphics) or in handling coordinate systems represented as n points (n - 1 of them points at infinity representing axis directions, and one representing the origin of coordinates, in a homogeneous coordinate system). In the first case the repeated application of the same transform will be more economical if the work is done in the matrix representation. In the second the same may well be true, but of course the whole point of homogeneous coordinates is to make the projective (a fortiori, affine) transforms into linear ones implemented with matrix multiplication. This means, for example, that coordinate systems, rigid transformations including translation and rotation, and point projective transforms (simple camera models) can all be represented as matrices and composed with matrix multiplication, and that the operation of such transforms on vectors is simply matrix-vector multiplication. In this context homogeneous coordinates by definition require the matrix representation.
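A minimal sketch of that machinery, assuming numpy: rigid transforms become 4x4 matrices, points carry a homogeneous coordinate of 1, and directions (points at infinity) carry 0, so translation affects points but not axis directions.

```python
import numpy as np

def homogeneous(R=None, t=None):
    """Embed a rotation R and translation t in one 4x4 matrix, so rigid
    transforms compose by plain matrix multiplication."""
    H = np.eye(4)
    if R is not None:
        H[:3, :3] = R
    if t is not None:
        H[:3, 3] = t
    return H

# rotate 90 degrees about Z, then translate by (1, 0, 0)
c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
T = homogeneous(t=np.array([1.0, 0.0, 0.0])) @ homogeneous(R=Rz)

p = T @ np.array([1.0, 0.0, 0.0, 1.0])   # a point: rotated, then translated
d = T @ np.array([1.0, 0.0, 0.0, 0.0])   # a direction: translation ignored
```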

If the rotation at issue is a general one (say, specified in conic (φ, n) parameters), then conversion either to the quaternion or matrix form is easy. Thus the representations in

23

this case do not dictate the semantics, or how one wants to think about the rotation. If the rotation is specified in terms of rotations about (X, Y, Z) coordinate axes, then the matrix representation is easily and robustly constructed.
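The conversions from (angle, axis) parameters are indeed short; a sketch using the Rodrigues formula and the standard quaternion-to-matrix identity:

```python
import numpy as np

def axis_angle_to_quat(axis, theta):
    """Unit quaternion (w, x, y, z) for a rotation of theta about axis."""
    n = np.asarray(axis, float) / np.linalg.norm(axis)
    return np.concatenate([[np.cos(theta / 2)], np.sin(theta / 2) * n])

def axis_angle_to_matrix(axis, theta):
    """Rotation matrix via the Rodrigues formula R = I + sin K + (1-cos) K^2."""
    n = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0.0, -n[2], n[1]],
                  [n[2], 0.0, -n[0]],
                  [-n[1], n[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def quat_to_matrix(q):
    """Matrix equivalent of a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])
```

Both routes to the matrix agree, which is one way to sanity-check an implementation.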

Real-time applications involving concatenating rotations and applying them to a small number of vectors (perhaps one: the position of a robot hand) need detailed analysis to see which representation is the most efficient. Similarly for more complex geometric calculations.

A common application of rotation representations in vision is to express the motion of the observer. Often the rotational motions, for mechanical or mathematical reasons, are expressed as rotations about some fixed axes, which in practice are usually orthogonal. Examples are the motions induced by pan and tilt camera platforms, or the expression of egomotion in terms of rotations about the observer's local X, Y, Z axes. In this case the matrix representation is the clear choice, to maximize efficiency and intuition.

Reconstructing the rotation that transformed known points is a useful vision task that requires fitting data to a model of the transformation. The straightforward route, followed since Roberts' thesis [17], is to do a linear least squares fit to the transform's matrix representation. This approach is perhaps risky, since the matrix elements are not independent and the error being minimized thus may not be the one desired. The alternative of using conic parameters has been successfully used [12] and may well be a better idea. Recently, more sophisticated Kalman filter methods have been used to derive motion from point matches through time [5]. Model-matching with constraints allows a model to be matched to a class of objects, and promising approaches are emerging that use hybrid methods of symbolic algebra implementing polynomial triangulation to get explicit solutions for known constraints, followed by root-finding and sophisticated minimization techniques to search for minimum-error solutions [13,14].

A less common but important set of applications is to reason about and manipulate rotations themselves. For example, one may want to sample rotation space evenly [9], or to create smoothly interpolated paths in rotation space between a given set of rotations [16]. In these cases the Euler-Rodrigues parameters are the most convenient.
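Smooth interpolation in rotation space is typically done by spherical linear interpolation on unit quaternions; a standard sketch in the spirit of [16], not that paper's code:

```python
import numpy as np

def slerp(q0, q1, u):
    """Spherical linear interpolation between unit quaternions q0 and q1,
    for u in [0, 1]: constant angular velocity along the great circle."""
    d = np.dot(q0, q1)
    if d < 0.0:                  # q and -q are the same rotation;
        q1, d = -q1, -d          # take the shorter great-circle arc
    if d > 0.9995:               # nearly parallel: linear interp, renormalized
        q = q0 + u * (q1 - q0)
        return q / np.linalg.norm(q)
    t = np.arccos(d)             # angle between the two quaternions
    return (np.sin((1 - u) * t) * q0 + np.sin(u * t) * q1) / np.sin(t)
```

Halfway between the identity and a 90-degree rotation about Z, for instance, this yields exactly the 45-degree rotation about Z.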

Another example is the use since 1932 of quaternions in the neurophysiological literature to describe eye kinematics [22]. Models of human three-dimensional oculomotor performance, especially for the vestibulo-ocular reflex and feedback models of saccades, involve computations on rotations that are much more stable when carried out in conic or Euler-Rodrigues parameters using quaternion arithmetic. Subtractive feedback laws adequate for simple (left or right) eye movements do not generalize to arbitrary rotational axes. The quaternion model uses multiplicative feedback, explains human performance better than similar models based on rotation matrices, and leads to explicit predictions such as a four-channel saccade computation [21].


6 Bibliography

[1] S. L. Altmann. Rotations, Quaternions, and Double Groups. Clarendon Press, Oxford, 1986.

[2] C. M. Brown. Gaze controls with interactions and delays. In DARPA Image Understanding Workshop (submitted to IEEE-TSMC), pages 200-218, May 1989.

[3] C. M. Brown. Prediction in gaze and saccade control. Technical Report OUEL 1771/89 (UR TR-295), Oxford University Dept. of Engineering Science (U. Rochester Comp. Sci. Dept.), May 1989.

[4] C. I. Connolly, D. Kapur, J. L. Mundy, and R. Weiss. GeoMeter: A system for modeling and algebraic manipulation. In Proceedings, DARPA Image Understanding Workshop, pages 797-804, May 1989.

[5] O. D. Faugeras, F. Lustman, and G. Toscani. Motion and structure from motion from point and line matches. In Proceedings, International Conference on Computer Vision, pages 25-34, June 1987.

[6] J. D. Foley and A. Van Dam. Fundamentals of Interactive Computer Graphics. Addison-Wesley, 1982.

[7] W. R. Hamilton. On quaternions; or on a new system of imaginaries in algebra. Philosophical Magazine, 3rd Ser., 25:489-495, 1844.

[8] C. Hoffmann and J. Hopcroft. Towards implementing robust geometric computations. In Proceedings, ACM Symposium on Computational Geometry, 1988.

[9] Berthold K. P. Horn. Robot Vision. MIT Press / McGraw-Hill, 1986.

[10] K. Kanatani. Group Theory in Computer Vision. Springer Verlag, 1989.

[11] V. J. Milenkovic. Robust geometric computations for vision and robotics. In Proceedings, DARPA Image Understanding Workshop, pages 764-773, May 1989.

[12] J. L. Mundy. Private communication, August 1989.

[13] J. L. Mundy. The application of symbolic algebra techniques to object modeling. Lecture, University of Oxford, August 1989.

[14] J. L. Mundy. Symbolic representation of object models. In preparation, August 1989.

[15] T. Ortmann, G. Thiemt, and C. Ullrich. Numerical stability of geometric algorithms. In Proceedings, ACM Symposium on Computational Geometry, 1987.

[16] K. S. Roberts, G. Bishop, and S. K. Ganapathy. Smooth interpolation of rotational motion. In Computer Vision and Pattern Recognition 1988, pages 724-729, June 1988.

[17] L. G. Roberts. Machine perception of three-dimensional solids. In J. T. Tippett et al., editors, Optical and Electro-Optical Information Processing, pages 159-197. MIT Press, 1968.

[18] O. Rodrigues. Des lois géométriques qui régissent les déplacements d'un système solide dans l'espace, et de la variation des coordonnées provenant de ces déplacements considérés indépendamment des causes qui peuvent les produire. Journal de Mathématiques Pures et Appliquées, 5:380-440, 1840.

[19] D. Salesin, J. Stolfi, and L. Guibas. Epsilon geometry: building robust algorithms from imprecise computations. In Proceedings, ACM Symposium on Computational Geometry, 1989.

[20] R. Taylor. Planning and execution of straight line manipulator trajectories. In J. M. Brady, J. M. Hollerbach, T. L. Johnson, T. Lozano-Perez, and M. T. Mason, editors, Robot Motion: Planning and Control. MIT Press, 1982.

[21] D. Tweed and T. Vilis. Implications of rotational kinematics for the oculomotor system in three dimensions. Journal of Neurophysiology, 58(4):832-849, 1987.

[22] G. Westheimer. Kinematics for the eye. Journal of the Optical Society of America, 47:967-974, 1932.




Figure 9 About Here

Figure 9: Case IS0 for 400 iterations (see text). (a) Length error of result vector. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. The vector and matrix errors coincide. (b) As for (a), but showing direction error for the resulting vector. The vector and matrix errors are very close to zero.

Figure 10 About Here

Figure 10: Case IS0 under rounding instead of truncation. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. Though the forms of the errors are similar to the truncation case, their relative magnitudes are more equal. (a) Result vector projected on the X-Z plane. (b) As for (a), but showing the X-Y projection.

For the quaternion methods, direction error ultimately becomes purely periodic as the answer vector becomes static. The direction error as a function of iteration number t and angle r between the rotation axis and the input vector is then of the form ((r sin t)^2 + (r - r cos t)^2)^(1/2), or the length of a circular chord from a fixed point on the circle to a point moving around the circle at constant velocity. Before that time the direction error exhibits a more complex two-humped form; this arises from the two separate cross product components of the vector increment in eq. (6). Exercise: Would there be any difference in direction and length errors if the rotation is accomplished by conjugation (eq. (4)) instead of eq. (6)?
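The chord interpretation can be checked directly: for a point moving at unit angular velocity around a circle of radius r, its distance after time t from a fixed point at the starting position is

```latex
\sqrt{(r\sin t)^2 + (r - r\cos t)^2}
  \;=\; r\sqrt{\sin^2 t + \cos^2 t + 1 - 2\cos t}
  \;=\; r\sqrt{2 - 2\cos t}
  \;=\; 2r\left|\sin\tfrac{t}{2}\right|,
```

which is periodic in t, matching the purely periodic direction error described above.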

Next in this sequence, the last two demonstrations are repeated under rounding, not truncation. The expectation is that, due to the better approximation and possible error cancellations arising from high and low approximations, the errors in the representations will be less, but that systematic errors will still exist. The results (Figs. 10, 11) bear this out; the errors often seem like much smaller, noisy versions of the truncation errors. In Fig. 11 the quaternion errors display the fine two-humped structure longer, since it is not overwhelmed as quickly by shrinking quaternion magnitude. Rounding has a dramatically good effect on the systematic shrinking of matrices and quaternions under iterative self multiplication.

So far we have, purely for intuition, dealt with the pathological case in which no normalization of rotation representations is performed. The arguably pathological case of approximation through truncation yields insight about the structure of systematic approximation errors in an iterative scheme. From here on we abandon truncation approximation (except in Section 4.7).

Figure 11 About Here

Figure 11: Data of the previous figure, showing length and direction errors. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of result vector. (b) Direction error.

16

10

us

100

10

Qa _llIr1Etla1O~T

A ~ I

~

1 1 j(1 1 I I I

~ I i _r I J

- I 1 I

I -- I V V V

o 100 200 300 400

Figure 9(a) Figure 9(b)

ll-lErrV _ IlmiddotYErrV ~

Figure 10(a) Figure lO(b)

Figure 12 About Here

Figure 12 Case lSI Rotating the vector (11 -I) by 1 radian increments about the axis (111) with 10 bits precision under rounding normalizing the quaternion and matrix representation prior to application to the input vector and normalizing the final vector representation to its initial length Solid line - vector representation dotted line - matrix dashed line - quaternion (a) Length error of the result vector The vector representation has lower variance and the quaternion exhibits marked periodicity Only the first 100 iterations are shown the rest are similar (b) Direction error

We now look at two different styles of normalization In the first no normalization is performed on intermediate representations but the final quaternion matrix and vector are normalized In the second the intermediate quaternion matrix and vector are normalized at each step Normalization of the matrix and quaternion is not dependent on the input data but the length of the input vector must be computed and remembered for the vector representation to be normalized This fact might be significant if many vectors are being treated at once

Fig 12(a) shows the length error when the final quaternion and matrix are normalized prior to application to the input vector and the incrementally generated vector represenshytation is normalized to its original length (here known to be unity) This case is called lSI for Iterative Sequence 1 normalization The matrix normalization used is the one crossproduct form When only one normalization is being performed the three types of normalization perform virtually identically The computations were performed under rounding to the usual ten bits of precision If its initial length is not remembered the vector representation cannot be normalized and its error is the same as the non-normalized case of Fig 11 showing a linear error accumulation

Fig 12(b) compares direction error for the three representations normalized as in Fig 12(a) The comparative performance here is certainly not so clear cut and the error accumulations resemble random walks Alternatively the performance could depend more on the particular choice of input data (rotation and input vector) than on numerical properties of the methods

Finally a natural question concerns the effects of normalization applied after every rotation composition This is case ISN (N normalizations) Fig 13 shows the results for length and direction As might be expected for length error the normalized vector has the best performance (presumably as accurate as machine precision allows) The quaternion result has a systematically negative mean higher variance and periodicity The matrix length error displayed in Fig 13(a) results from the no crossproduct form of matrix normalization The trend is not an illusion carried out to 400 iterations the length error shows a marked systematic increase in the negative direction On the other hand iterashytion of the one crossproduct normalization resulted in ~ factor of two worsening of the magnitude of the direction errors for the matrix representation Repeated normalizations seem to have adversely affected the accuracy of the vector representation after many itershy

17

~_ VIOCDW_IO-_aaa-VIOC_EnaIO~_

1_ f~

I I

I I I

I shy I lshyI I

I I NI I

I I I I I

Figure l1(a) Figure l1(b)

teQVOJl-EnaIO11IDnn teQVAMDWEnaIO I

UHII

Figure 12(a) Figure 12(b)

Figure 13 About Here

Figure 13 Case ISN As for previous figure but normalizing the rotation representation at each iteration using no crossproduct matrix normalization which improved direction errors but worsened length errors over the one crossproduct form Solid line - vector representation dotted line - matrix dashed line - quaternion (a) Length error of the result vector (b) Direction error

Figure 14 About Here

Figure 14 Case RSl As for case lSI (no normalization of intermediate representations) but using a sequence of random rotations There is no difference in performance with the different matrix normalizations no crossproduct matrix normalization Solid line - vector representation dotted line - matrix dashed line - quaternion (a) Length error of the result vector (first 100 values the rest are similar) (b) Direction error The rather confusing results at least have some clear qualitative differences

ations but more work is needed to tell if this is a repeatable effect In any event it seems clear that the repeated normalizations are certainly not improving matters much in this case over case lSI

The main conclusion here is that the cost of normalizing quaternion or matrix represhysentations after every rotation in a repeated sequence may not repaid in results and in fact may be actively harmful The vector representation seems to do best of the three on length error and worst on direction error More work (analytic or experimental) would be needed to clarify the performance ranks or even to recommend or disqualify a representation for this task

45 Concatenated Random Rotations

Here a sequence of random rotations is generated as described earlier analogous to the sequence of identical rotations in the last experiment They are applied cumulatively to an input vector In each case computation is with 10 bits precision under rounding The cases of a single final normalization and normalizing every intermediate representation (called cases RSI and RSN) were investigated Both the no - and one crossproduct forms were investigated The results comparable to those in cases lSI and ISN (Figs 12 and 13) are shown in Figs 14 and 15

The results indicate that the vector representation remains better than the other two for length errors but for direction errors the results are unclear It is hard to draw clear distinctions either between the normalization methods or the representations Again the form of matrix normalization has some annoying effects which motivate the next section

18

- -

lIeQ YAM La1rIL0~

uo3 lIeQ YAMIlIr1rIL O~

UHl2

100

Figure 13(a) Figure 13(b)

aMYLaEmO _ --Q MY1lIr1rIL0_

IOHl3

Figure 14(a) Figure 14(b)

Figure 15 About Here

Figure 15 Case RSN As for case RSI but normalizing all intermediate representations Solid line - vector representation dotted line - matrix dashed line - quaternion (a) Length error using one crossproduct matrix normalization (b) Direction error usin one crossproduct form (c) Direction error using no crossproduct form

~Case ~ I-vec I I-mat I U vee I Umat I RaTr L - 0006225 0005026 - 0001181 0003369

D - 0001787 0004578 - 0001094 0002307 RaRd L - -0000310 -0000225 - 0001087 0001263

D - 0000965 0001180 - 0000542 0000759 lSI L 0000276 -0000260 -0000262 0000466 0000857 0001648 100 D 0003694 0004063 0001435 0001705 0002298 0001123 ISN L 0000137 -0001772 -0000811 0000533 0001106 0001312 100 D 0005646 0004844 0001606 0002786 0002872 0001045 RSI L -0000252 -0000162 -0000157 0000521 0001612 0002022 100 D 0003710 0005854 0006065 0002150 0003686 0006065 RSN L 0000164 -0000602 -0001097 0000473 0000703 0001666 100 OX

D 0002269 0009830 0005960 0000874 0004230 0002236 D 0002269 0005187 0005960 0000874 0002248 0002236

Table 2 Statistics for length (L) and direction (D) errors for some of the cases illustrated in this section Cases RaTr and RaRd are the independent random rotations lSI and ISN are the iterative sequences (one normalization and normalization of each intermediate representation) and RSI and RSN the random sequences Only the first 100 data values are used from the iterated rotations to diminish the effect of longer-term trends The second line RSN is with no crossproduct normalization (only affects the matrix representation )

4.6 Statistical Summary

Table 2 summarizes some of the data from the cases so far.

4.7 Matrix Normalization

Some experiments explored the performance of the three types of matrix normalization (no, one, or two cross products) in the context of iterated identical and random rotations.

The experiments consist of two iterated identical rotations (Figs. 16, 17) under rounding, and two iterated random rotation sequences under truncation and rounding. Fig. 18 gives the direction errors for two sequences of random rotations applied to an input vector.


Figure 15(a)

Figure 15(b)

Figure 15(c)

Figure 16 About Here

Figure 16: Iterated identical rotations of the unit vector in direction (1, 1, -1) by 1 radian about the axis (1, 1, 1), computed to 10 bits under rounding. Solid line - no cross products; dotted line - one cross product; dashed line - two cross products. (a) Length errors. (b) Direction errors.

Figure 17 About Here

Figure 17: As for the previous figure, but the rotation is of the unit vector in direction (1, 2, 3) by iterated rotations of 2 radians about the axis (3, -2, -1).

In no case is there a significant difference between the performances on the length error at 200 iterations, but the one- or two crossproduct forms significantly outperform the no crossproduct form over 400 iterations, since the latter exhibits systematically increasing negative length error and the former do not.

For direction error, few conclusions can be drawn about relative performance under iterated identical rotation sequences. It is perhaps surprising that the no crossproduct form usually performs here at least as well as the other two over a small number of iterations, and that for random sequences there is no clear winner between normalization types, even approximating with truncation. Thus the symptoms noticed in the last two sections, involving long-term relative effects of the normalization form, were not noticed over 200 iterations. The one- and two crossproduct forms may be more stable over long iteration, but the issue of optimal normalization remains open.

4.8 Average Error of Concatenated Random Rotations

Which representation is best, or worst? To wind up this overlong exercise in trying to derive structure from noise, let us consider average behavior for the task of random rotation sequences (perhaps of more general interest than iterated identical rotations). Technical decisions on normalization are based on the conclusions of previous sections: approximation is by rounding, with normalization only after the last rotation. Matrix normalization is done with the one crossproduct method. Each run is a single experiment like that of Section 4.5, only performed (with the same data) at a number of different precisions.

Such a set of runs yields length and direction error surfaces plotted against the precision.

Figure 18 About Here

Figure 18: Direction errors only, for a sequence of random rotations applied to an input vector. Legend as in the previous two figures. (a) Truncation to 10 bits. (b) Rounding to 10 bits.


Figure 16(a) Figure 16(b)

Figure 17(a) Figure 17(b)

Figure 18(a) Figure 18(b)

Figure 19(a) Figure 19(b)

Figure 19(c) Figure 19(d)

Rep        Repeated Rot                     Random Seq
Vec(Mat)   (9N+17)×   (6N+13)+   1 sc      26N×       19N+       N sc
Vec(Quat)  (18N+4)×   (12N)+     1 sc      22N×       12N+       N sc
Quat       (16N+22)×  (12N+12)+  1 sc      (20N+18)×  (12N+12)+  N sc
Matrix     (27N+26)×  (18N+19)+  1 sc      (44N+9)×   (31N+6)+   N sc

Table 4: Operation counts for N-long rotation sequences. The long form of matrix multiplication is used here. Vec(Mat) is the vector representation with intermediate transformation by matrices, Vec(Quat) with transformation by the quaternion method. Multiplications, additions, and sine-cosine pairs are denoted by ×, +, and sc.

matrix transformation), since matrix-vector multiplication costs one third of matrix-matrix multiplication.
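The one-third cost ratio quoted above is easy to check by counting operations directly. The following sketch (Python, not from the report; the function names are mine) counts multiplications and additions for 3x3 products:

```python
# Operation counts behind the claim above: a 3x3 matrix-vector product
# needs one third the multiplications of a 3x3 matrix-matrix product
# (9 vs. 27), and likewise one third the additions (6 vs. 18).

def matvec_cost(n=3):
    # each of the n output entries takes n multiplies and n - 1 adds
    return n * n, n * (n - 1)

def matmat_cost(n=3):
    # n*n output entries, each taking n multiplies and n - 1 adds
    return n * n * n, n * n * (n - 1)

mults_v, adds_v = matvec_cost()   # (9, 6)
mults_m, adds_m = matmat_cost()   # (27, 18)
assert mults_m == 3 * mults_v and adds_m == 3 * adds_v
```

This is why composing a sequence into one matrix and then applying it pays off only when the composed matrix is reused on several vectors.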

4.10 Conclusions and Questions

The conclusions we can draw at this point are the following:

1. Numerical simulation of variable precision arithmetic has yielded some tentative conclusions, but in many interesting cases they are rather delicate and depend on niceties of the implementation. Accurate performance predictions may only be possible with faithful simulation of the particular hardware and software at issue.

2. Practical differences in numerical accuracy tend to disappear at reasonable precisions. After 200 iterations of a rotation, 53 bits of mantissa precision (the double representation in the C language) yields no error to 10 decimal places. With 24 bits (the float representation), effects were noticeable in the 5th to 7th decimal place with all methods.

3. Normalization is vital before quaternion or matrix representations are applied to vectors, but not as important during the composition of representations, and the extra numerical computation of frequent normalization may actually decrease accuracy in the final result.

4. The vector (with matrix transformation) and matrix representations for rotation sequences seem to outperform the quaternion for accuracy.

5. The vector (with matrix transformation) representation for rotation sequences, with a single normalization of vector length at the end, is most computationally efficient and has least length error on the average.

6. Averaged over many trials, the difference in accuracy of the three methods amounts only to about a single bit of precision for a small number of rotation compositions (say 25 or fewer).

7. Matrices are the least computationally efficient on rotation sequences.
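The variable-precision simulation behind conclusion 1 can be sketched in a few lines. The report's own simulator is not reproduced here; the obvious construction (an assumption of mine, in Python) rounds each value to a given number of mantissa bits:

```python
import math

# Round x to `bits` bits of mantissa, as a variable-precision arithmetic
# simulator might after every operation. With bits = 10 this mimics the
# 10-bit rounding experiments; bits = 24 and 53 mimic C float and double.

def round_to_bits(x, bits=10):
    if x == 0.0:
        return 0.0
    e = math.floor(math.log2(abs(x))) + 1   # exponent with mantissa in [0.5, 1)
    scale = 2.0 ** (bits - e)
    return round(x * scale) / scale

# e.g. pi kept to 10 mantissa bits; the error is at most half an ulp,
# i.e. 2^(e - bits - 1) = 2^-9 at this magnitude
approx = round_to_bits(math.pi, 10)
assert abs(approx - math.pi) <= 2.0 ** -9
```

Truncation, the other approximation studied, would replace `round` with a floor toward zero; the systematic one-sided errors seen under truncation follow directly from that change.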



There are also some remaining basic questions:

1. What is a reasonable analytic approach to the numerical accuracy problem for interesting rotational tasks (say, sequences of random rotations under rounding)?

2. How should optimal matrix or quaternion normalization methods be defined and constructed?

5 Discussion

The area of robust computational geometry is becoming popular, with an annual international conference and increasingly sophisticated theoretical formulations. Very little of that work (if any) is related to rotation algorithms. Symbolic algebra systems can help with analysis [4]. A good treatment of computer arithmetic and rounding appears in [15], with an application to detecting line intersection. Serious qualitative representational problems arise when issues of topology (line connectivity) are at stake, and much of the work is on computing line intersection accurately with finite precision arithmetic [19]. For instance, Milenkovic's analysis is based on facts about line intersection, such as the number of extra bits that are necessary to guarantee a specific precision of answer. Some of these facts are surprising: it takes at least three times the input precision to calculate a line intersection location to the same precision [11]. Robust polygon and polyhedron intersection algorithms, as well as various line-sweeping algorithms, are based on accurate line intersection algorithms [8].

Several points of comparison between rotation parameterizations and representations have been raised in the preceding sections. It is perhaps worth repeating that little evidence for dramatic differences in numerical accuracy has emerged, though the evidence might tend to favor matrices slightly and to condemn the vector representation.

One of the common applications of rotational transforms is to affect several vectors at once, either in the context of transforming a set of points around (as in graphics) or in handling coordinate systems represented as n points (n - 1 of them points at infinity representing axis directions, and one representing the origin of coordinates, in a homogeneous coordinate system). In the first case the repeated application of the same transform will be more economical if the work is done in the matrix representation. In the second the same may well be true, but of course the whole point of homogeneous coordinates is to make the projective (a fortiori, affine) transforms into linear ones, implemented with matrix multiplication. This means, for example, that coordinate systems, rigid transformations including translation and rotation, and point projective transforms (simple camera models) can all be represented as matrices and composed with matrix multiplication, and that the operation of such transforms on vectors is simply matrix-vector multiplication. In this context homogeneous coordinates by definition require the matrix representation.
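The homogeneous-coordinate point can be made concrete with a minimal sketch (Python; the helper names are mine, not the report's). Rotation and translation both become 4x4 matrices, compose by ordinary matrix multiplication, and apply to points (w = 1) and directions (w = 0) alike:

```python
import math

# Homogeneous 4x4 matrices: a rigid transform (rotate, then translate)
# is one matrix product; directions (w = 0) ignore the translation part.

def rot_z(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def translate(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(m, v):
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

rigid = matmul(translate(1, 2, 0), rot_z(math.pi / 2))  # rotate, then translate
point = apply(rigid, [1, 0, 0, 1])       # point moves: near (1, 3, 0)
direction = apply(rigid, [1, 0, 0, 0])   # axis direction only rotates: near (0, 1, 0)
```

The same `matmul` composes camera models and coordinate frames, which is the economy the text describes.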

If the rotation at issue is a general one (say, specified in conic (angle and axis) parameters), then conversion either to the quaternion or matrix form is easy. Thus the representations in


this case do not dictate the semantics, or how one wants to think about the rotation. If the rotation is specified in terms of rotations about (X, Y, Z) coordinate axes, then the matrix representation is easily and robustly constructed.

Real-time applications involving concatenating rotations and applying them to a small number of vectors (perhaps one: the position of a robot hand) need detailed analysis to see which representation is the most efficient. Similarly for more complex geometric calculations.

A common application of rotation representations in vision is to express the motion of the observer. Often the rotational motions, for mechanical or mathematical reasons, are expressed as rotations about some fixed axes, which in practice are usually orthogonal. Examples are the motions induced by pan and tilt camera platforms, or the expression of egomotion in terms of rotations about the observer's local X, Y, Z axes. In this case the matrix representation is the clear choice to maximize efficiency and intuition.
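For the pan/tilt case, the fixed-axis rotations compose directly as axis rotation matrices. A minimal Python sketch (the axis assignment of pan to Y and tilt to X is my assumption, for illustration):

```python
import math

# Pan and tilt as rotations about fixed orthogonal camera axes,
# composed by 3x3 matrix multiplication.

def rot_x(t):
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def matmul3(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# tilt 10 degrees about X after panning 30 degrees about Y
camera = matmul3(rot_x(math.radians(10)), rot_y(math.radians(30)))
```

The composed matrix is itself a rotation (orthonormal, determinant 1), which is what makes this representation so direct for platform kinematics.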

Reconstructing the rotation that transformed known points is a useful vision task that requires fitting data to a model of the transformation. The straightforward route, followed since Roberts' thesis [17], is to do a linear least squares fit to the transform's matrix representation. This approach is perhaps risky, since the matrix elements are not independent and the error being minimized thus may not be the one desired. The alternative of using conic parameters has been successfully used [12] and may well be a better idea. Recently more sophisticated Kalman filter methods have been used to derive motion from point matches through time [5]. Model-matching with constraints allows a model to be matched to a class of objects, and promising approaches are emerging that use hybrid methods of symbolic algebra, implementing polynomial triangulation to get explicit solutions for known constraints, followed by root-finding and sophisticated minimization techniques to search for minimum-error solutions [13, 14].

A less common but important set of applications is to reason about and manipulate rotations themselves. For example, one may want to sample rotation space evenly [9], or to create smoothly interpolated paths in rotation space between a given set of rotations [16]. In these cases the Euler-Rodrigues parameters are the most convenient.
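The interpolation case can be illustrated with the standard spherical linear interpolation (slerp) between two unit quaternions; this is a generic sketch in Python, not the (more elaborate) method of [16]:

```python
import math

# Slerp between unit quaternions q0 and q1 at parameter u in [0, 1]:
# constant angular velocity along the shorter great-circle arc,
# which is why the Euler-Rodrigues (quaternion) form is so convenient.

def slerp(q0, q1, u):
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                       # take the shorter arc
        q1, dot = [-x for x in q1], -dot
    theta = math.acos(min(dot, 1.0))    # angle between the quaternions
    if theta < 1e-9:                    # nearly identical: avoid 0/0
        return list(q0)
    s = math.sin(theta)
    a = math.sin((1 - u) * theta) / s
    b = math.sin(u * theta) / s
    return [a * x + b * y for x, y in zip(q0, q1)]
```

Halfway between the identity and a 90-degree rotation about Z, for instance, slerp returns exactly the 45-degree rotation, something no componentwise interpolation of matrices achieves.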

Another example is the use since 1932 of quaternions in the neurophysiological literature to describe eye kinematics [22]. Models of human three-dimensional oculomotor performance, especially for the vestibulo-ocular reflex and feedback models of saccades, involve computations on rotations that are much more stable when carried out in conic or Euler-Rodrigues parameters using quaternion arithmetic. Subtractive feedback laws adequate for simple (left or right) eye movements do not generalize to arbitrary rotational axes. The quaternion model uses multiplicative feedback, explains human performance better than similar models based on rotation matrices, and leads to explicit predictions such as a four-channel saccade computation [21].


6 Bibliography

[1] S. L. Altmann. Rotations, Quaternions, and Double Groups. Clarendon Press, Oxford, 1986.

[2] C. M. Brown. Gaze controls with interactions and delays. In DARPA Image Understanding Workshop (submitted IEEE-TSMC), pages 200-218, May 1989.

[3] C. M. Brown. Prediction in gaze and saccade control. Technical Report OUEL 1771/89 (UR TR-295), Oxford University Dept. Engineering Science (U. Rochester Comp. Sci. Dept.), May 1989.

[4] C. I. Connolly, D. Kapur, J. L. Mundy, and R. Weiss. GeoMeter: A system for modeling and algebraic manipulation. In Proceedings DARPA Image Understanding Workshop, pages 797-804, May 1989.

[5] O. D. Faugeras, F. Lustman, and G. Toscani. Motion and structure from motion from point and line matches. In Proceedings International Conference on Computer Vision, pages 25-34, June 1987.

[6] J. D. Foley and A. Van Dam. Fundamentals of Interactive Computer Graphics. Addison-Wesley, 1982.

[7] W. R. Hamilton. On quaternions; or on a new system of imaginaries in algebra. Philosophical Magazine, 3rd Ser., 25:489-495, 1844.

[8] C. Hoffmann and J. Hopcroft. Towards implementing robust geometric computations. In Proceedings ACM Symposium on Computational Geometry, 1988.

[9] Berthold K. P. Horn. Robot Vision. MIT Press/McGraw-Hill, 1986.

[10] K. Kanatani. Group Theory in Computer Vision. Springer Verlag, 1989.

[11] V. J. Milenkovic. Robust geometric computations for vision and robotics. In Proceedings DARPA Image Understanding Workshop, pages 764-773, May 1989.

[12] J. L. Mundy. Private communication, August 1989.

[13] J. L. Mundy. The application of symbolic algebra techniques to object modeling. Lecture, University of Oxford, August 1989.

[14] J. L. Mundy. Symbolic representation of object models. In preparation, August 1989.

[15] T. Ottmann, G. Thiemt, and C. Ullrich. Numerical stability of geometric algorithms. In Proceedings ACM Symposium on Computational Geometry, 1987.

[16] K. S. Roberts, G. Bishop, and S. K. Ganapathy. Smooth interpolation of rotational motion. In Computer Vision and Pattern Recognition 1988, pages 724-729, June 1988.

[17] L. G. Roberts. Machine perception of three-dimensional solids. In J. T. Tippett et al., editors, Optical and Electro-Optical Information Processing, pages 159-197. MIT Press, 1968.

[18] O. Rodrigues. Des lois géométriques qui régissent les déplacements d'un système solide dans l'espace, et de la variation des coordonnées provenant de ses déplacements considérés indépendamment des causes qui peuvent les produire. Journal de Mathématiques Pures et Appliquées, 5:380-440, 1840.

[19] D. Salesin, J. Stolfi, and L. Guibas. Epsilon geometry: building robust algorithms from imprecise computations. In Proceedings ACM Symposium on Computational Geometry, 1989.

[20] R. Taylor. Planning and execution of straight line manipulator trajectories. In J. M. Brady, J. M. Hollerbach, T. L. Johnson, T. Lozano-Pérez, and M. T. Mason, editors, Robot Motion: Planning and Control. MIT Press, 1982.

[21] D. Tweed and T. Vilis. Implications of rotational kinematics for the oculomotor system in three dimensions. Journal of Neurophysiology, 58(4):832-849, 1987.

[22] G. Westheimer. Kinematics for the eye. Journal of the Optical Society of America, 47:967-974, 1932.



Figure 9 About Here

Figure 9: Case ISO for 400 iterations (see text). (a) Length error of result vector. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. The vector and matrix errors coincide. (b) As for (a), but showing direction error for the resulting vector. The vector and matrix errors are very close to zero.

Figure 10 About Here

Figure 10: Case ISO under rounding instead of truncation. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. Though the forms of the errors are similar to the truncation case, their relative magnitudes are more equal. (a) Result vector projected on the X-Z plane. (b) As for (a), but showing the X-Y projection.

nion methods, direction error ultimately becomes purely periodic as the answer vector becomes static. The direction error as a function of iteration number t and angle r between the rotation axis and the input vector is then of the form ((r sin t)^2 + (r - r cos t)^2)^(1/2), or the length of a circular chord from a fixed point on the circle to a point moving around the circle at constant velocity. Before that time the direction error exhibits a more complex two-humped form; this arises from the two separate cross product components of the vector increment in eq. (6). Exercise: Would there be any difference in direction and length errors if the rotation is accomplished by conjugation (eq. (4)) instead of eq. (6)?
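The chord-length reading of that expression can be checked numerically: ((r sin t)^2 + (r - r cos t)^2)^(1/2) reduces to 2 r sin(t/2), the chord subtending angle t on a circle of radius r. A quick Python verification:

```python
import math

# Verify the chord-length identity behind the periodic direction error:
# sqrt((r sin t)^2 + (r - r cos t)^2) == 2 r sin(t / 2).

def chord(r, t):
    return math.sqrt((r * math.sin(t)) ** 2 + (r - r * math.cos(t)) ** 2)

r = 0.35
for t in [0.0, 0.5, 1.0, 2.0, math.pi]:
    assert abs(chord(r, t) - 2 * r * math.sin(t / 2)) < 1e-12
```

The identity follows from expanding the squares and using 1 - cos t = 2 sin^2(t/2), which is why the error traces out a pure sinusoid in the iteration number once the answer vector has frozen.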

Next in this sequence, the last two demonstrations are repeated under rounding, not truncation. The expectation is that, due to the better approximation and possible error cancellations arising from high and low approximations, the errors in the representations will be less, but that systematic errors will still exist. The results (Figs. 10, 11) bear this out: the errors often seem like much smaller, noisy versions of the truncation errors. In Fig. 11 the quaternion errors display the fine two-humped structure longer, since it is not overwhelmed as quickly by shrinking quaternion magnitude. Rounding has a dramatically good effect on the systematic shrinking of matrices and quaternions under iterative self multiplication.

So far we have, purely for intuition, dealt with the pathological case in which no normalization of rotation representations is performed. The arguably pathological case of approximation through truncation yields insight about the structure of systematic approximation errors in an iterative scheme. From here on we abandon truncation approximation (except in Section 4.7).

Figure 11 About Here

Figure 11: Data of the previous figure, showing length and direction errors. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of result vector. (b) Direction error.


Figure 9(a) Figure 9(b)

Figure 10(a) Figure 10(b)

Figure 12 About Here

Figure 12: Case ISI. Rotating the vector (1, 1, -1) by 1 radian increments about the axis (1, 1, 1), with 10 bits precision under rounding, normalizing the quaternion and matrix representation prior to application to the input vector, and normalizing the final vector representation to its initial length. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. The vector representation has lower variance, and the quaternion exhibits marked periodicity. Only the first 100 iterations are shown; the rest are similar. (b) Direction error.

We now look at two different styles of normalization. In the first, no normalization is performed on intermediate representations, but the final quaternion, matrix, and vector are normalized. In the second, the intermediate quaternion, matrix, and vector are normalized at each step. Normalization of the matrix and quaternion is not dependent on the input data, but the length of the input vector must be computed and remembered for the vector representation to be normalized. This fact might be significant if many vectors are being treated at once.
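The three normalizations in play can be sketched briefly. The quaternion is rescaled to unit 4-vector length, the rotated vector to its remembered original length; for the matrix, the report's exact "one crossproduct" form is not spelled out at this point, so the sketch below uses a common construction (an assumption): Gram-Schmidt on the first two rows plus one cross product for the third.

```python
import math

# Sketches of the normalizations discussed above (Python; names are mine).

def unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def normalize_quat(q):
    # quaternion normalization: rescale to a unit 4-vector
    return unit(q)

def normalize_vec(v, original_length):
    # the rotated vector can only be renormalized if its initial
    # length was computed and remembered, as the text notes
    return [x * original_length for x in unit(v)]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def normalize_mat_one_cross(m):
    r1 = unit(m[0])
    d = sum(a * b for a, b in zip(m[1], r1))
    r2 = unit([b - d * a for a, b in zip(r1, m[1])])  # remove the r1 component
    return [r1, r2, cross(r1, r2)]  # one cross product rebuilds row 3
```

A "two crossproduct" variant would also rebuild row 2 as a cross product; the "no crossproduct" variant only rescales each row, leaving orthogonality unrepaired.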

Fig. 12(a) shows the length error when the final quaternion and matrix are normalized prior to application to the input vector, and the incrementally generated vector representation is normalized to its original length (here known to be unity). This case is called ISI, for Iterative Sequence, 1 normalization. The matrix normalization used is the one crossproduct form; when only one normalization is being performed, the three types of normalization perform virtually identically. The computations were performed under rounding to the usual ten bits of precision. If its initial length is not remembered, the vector representation cannot be normalized, and its error is the same as the non-normalized case of Fig. 11, showing a linear error accumulation.

Fig. 12(b) compares direction error for the three representations, normalized as in Fig. 12(a). The comparative performance here is certainly not so clear cut, and the error accumulations resemble random walks. Alternatively, the performance could depend more on the particular choice of input data (rotation and input vector) than on numerical properties of the methods.

Finally, a natural question concerns the effects of normalization applied after every rotation composition. This is case ISN (N normalizations). Fig. 13 shows the results for length and direction. As might be expected for length error, the normalized vector has the best performance (presumably as accurate as machine precision allows). The quaternion result has a systematically negative mean, higher variance, and periodicity. The matrix length error displayed in Fig. 13(a) results from the no crossproduct form of matrix normalization. The trend is not an illusion: carried out to 400 iterations, the length error shows a marked systematic increase in the negative direction. On the other hand, iteration of the one crossproduct normalization resulted in a factor of two worsening of the magnitude of the direction errors for the matrix representation. Repeated normalizations seem to have adversely affected the accuracy of the vector representation after many iterations.


Figure 11(a) Figure 11(b)

Figure 12(a) Figure 12(b)

Figure 13 About Here

Figure 13: Case ISN. As for the previous figure, but normalizing the rotation representation at each iteration, using no crossproduct matrix normalization, which improved direction errors but worsened length errors over the one crossproduct form. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. (b) Direction error.

Figure 14 About Here

Figure 14: Case RSI. As for case ISI (no normalization of intermediate representations), but using a sequence of random rotations. There is no difference in performance with the different matrix normalizations; no crossproduct matrix normalization is shown. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector (first 100 values; the rest are similar). (b) Direction error. The rather confusing results at least have some clear qualitative differences.

More work is needed to tell if this is a repeatable effect. In any event, it seems clear that the repeated normalizations are certainly not improving matters much in this case over case ISI.

The main conclusion here is that the cost of normalizing quaternion or matrix representations after every rotation in a repeated sequence may not be repaid in results, and in fact may be actively harmful. The vector representation seems to do best of the three on length error and worst on direction error. More work (analytic or experimental) would be needed to clarify the performance ranks, or even to recommend or disqualify a representation for this task.

4.5 Concatenated Random Rotations

Here a sequence of random rotations is generated as described earlier, analogous to the sequence of identical rotations in the last experiment. They are applied cumulatively to an input vector. In each case computation is with 10 bits precision under rounding. The cases of a single final normalization and of normalizing every intermediate representation (called cases RSI and RSN) were investigated, with both the no- and one crossproduct forms. The results, comparable to those in cases ISI and ISN (Figs. 12 and 13), are shown in Figs. 14 and 15.
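The shape of this experiment, at full double precision, can be sketched as follows. The report's own random-rotation generator is described earlier in the paper and is not reproduced here; random axis-angle pairs stand in for it, with the rotations composed as quaternions and applied to the input vector by conjugation (Python; names are mine):

```python
import math, random

# Cumulative application of a random rotation sequence to one input vector,
# in the quaternion representation. At double precision the result vector's
# length should stay very close to its initial value, sqrt(3).

def quat_from_axis_angle(axis, angle):
    n = math.sqrt(sum(a * a for a in axis))
    s = math.sin(angle / 2) / n
    return [math.cos(angle / 2)] + [a * s for a in axis]

def quat_mul(p, q):
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return [pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw]

def rotate(q, v):
    # conjugation q v q* for a unit quaternion q; take the vector part
    p = quat_mul(quat_mul(q, [0.0] + list(v)), [q[0], -q[1], -q[2], -q[3]])
    return p[1:]

random.seed(0)
q = [1.0, 0.0, 0.0, 0.0]          # identity rotation
for _ in range(100):
    axis = [random.uniform(-1, 1) for _ in range(3)]
    q = quat_mul(quat_from_axis_angle(axis, random.uniform(0, math.pi)), q)

v = rotate(q, (1.0, 1.0, -1.0))
length = math.sqrt(sum(x * x for x in v))   # should stay near sqrt(3)
```

Re-running this with every arithmetic result rounded to 10 mantissa bits, and with or without normalizing `q` at each step, reproduces the RSI/RSN distinction studied in the figures.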


A common application of rotation representations in vision is to express the motion of the observer Often the rotational motions for mechanical or mathematical reasons are expressed as rotations about some fixed axes which in practice are usually orthogonal Examples are the motions induced by pan and tilt camera platforms or the expression of egomotion in terms of rotations about the observers local X Y Z axes In this case the matrix representation is the clear choice to maximize efficiency and intuition

Reconstructing the rotation that transformed known points is a useful vision task that requires fitting data to a model of the transformation The straightforward route followed since Roberts thesis [17] is to do a linear least squares fit to the transforms matrix representation This approach is perhaps risky since the matrix elements are not independent and the error being minimized thus may not be the one desired The alternative of using conic parameters has been successfully used [12] and may well be a better idea Recently more sophisticated Kalman filter methods have been used to derive motion from point matches through time [5] Model-matching with constraints allows a model to be matched to a class of objects and promising approaches are emerging that use hybrid methods of symbolic algebra implementing polynomial triangulation to get explicit solutions for known constraints followed by root-finding and sophisticated minimization techniques to search for minimum-error solutions [1314]

A less common but important set of applications is to reason about and manipulate rotations themselves For example one may want to sample rotation space evenly [9] or to create smoothly interpolated paths in rotation space between a given set of rotations [16] In these cases the Euler-Rodrigues parameters are the most convenient

Another example is the use since 1932 of quaternions in the neurophysiological litshyerature to describe eye kinematics [22] Models of human three-dimensional oculomotor performance especially for the vestibula-ocular reflex and feedback models of saccades involve computations on rotations that are much more stable when carried out in conic or Euler-Rodrigues parameters using quaternion arithmetic Subtractive feedback laws adequate for simple (left or right) eye movements do not generalize to arbitrary rotational axes The quatemion model uses multiplicative feedback explains human performance better than similar models based on rotation matrices and leads to explicit predictions such as a four-channel saccade computation [21]

24

6 Bibliography

[1] S 1 Altmann Rotations Quaternions and Double Groups Clarendon Press Oxshyford 1986

[2] C M Brown Gaze controls with interactions and delays In DARPA Image Undershystanding Workshop Submitted IEEE-TSMC pages 200-218 May 1989

[3] C M Brown Prediction in gaze and saccade control Technical Report OUEL 177189 UR TR-295 Oxford University Dept Engg Science (U Rochester Compo Sci Dept) May 1989

[4] C I Connolly D Kapur J1 Mundy and R Weiss GeoMeter A system for modeling and algebraic manipulation In Proceedings DARPA Image Understanding Workshop pages 797-804 May 1989

[5] O D Faugeras F Lustman and G Toscani Motion and structure from motion from point and line matches In Proceedings International Conference on Computer Vision pages 25-34 June 1987

[6] J D Foley and A Van Dam Fundamentals of Interactive Computer Graphics Addison-Wesley 1982

[7] W R Hamilton On quaternions or on a new system of imaginaries in algebra Philosophical Magazine 3rd Ser 25489 - 495 1844

[8] C Hoffmann and J Hopcroft Towards implementing robust geometric computations In Proceedings ACM Symposium on Computational Geometry 1988

[9] Berthold KP Horn Robot Vision MIT-Press McGraw-Hill 1986

[10] K I Kanatani Group Theory in Computer Vision Springer Verlag 1989

[11] V J Milenkovic Robust geometric computations for vision and robotics In Proshyceedings DARPA Image Understanding Workshop pages 764-773 May 1989

[12] J 1 Mundy Private communication August 1989

[13] J 1 Mundy The application of symbolic algebra techniques to object modeling Lecture University of Oxford August 1989

[14] J 1 Mundy Symbolic representation of object models In Preparation August 1989

[15] T Ortmann G Thiemt and C Ullrich Numerical stability of geometric algorithms In Proceedings A CM Symposium on Computational Geometry 1987

[16] K S Roberts G Bishop and S K Ganapathy Smooth interpolation of rotational motion In Computer Vision and Pattern Rocognition 1988 pages 724-729 June 1988

[17] 1 G Roberts Machine perception of three-dimensional solids In J T Tippett et al editors Optical and electro-optical information processing pages 159-197 MIT Press 1968

25

[18] O Rodrigues Des lois geometriques qui regissent les deplacements dun systeme solide dans lespace et de la variation des coordonnees provenant de ses deplacements consideres independamment des causes qui peuvent les produire Journal de Mathemaiiques Puree et Appliquees 5380 - 440 1840

[19] D Salesin J Stolfi and L Guibas Epsilon geometry building robust algorithms from imprecise computations In Proceedings ACM Symposium on Computational Geometry 1989

[20] R Taylor Planning and execution of straight line manipulator trajectories In J M Brady J M Hollerbach T L Johnson T Lozano-Perez and M T Mason editors Robot Motion Planning and Control MIT Press 1982

[21] D Tweed and T Vilis Implications of rotational kinematics for the oculomotor system in three dimensions Journal of Neurophysiology 58(4)832-849 1987

[22] G Westheimer Kinematics for the eye Journal of the Optical Society of America 47967-974 1932

26

Page 23: Some Computational Properties of - Semantic Scholar · There are four main parameterizations of the rotation group SO(3). Two of them (rotation angle and axis, and the closely related

Figure 9(a) Figure 9(b)

Figure 10(a) Figure 10(b)

Figure 12 About Here

Figure 12: Case IS1. Rotating the vector (1,1,-1) by 1 radian increments about the axis (1,1,1), with 10 bits precision under rounding, normalizing the quaternion and matrix representations prior to application to the input vector and normalizing the final vector representation to its initial length. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. The vector representation has lower variance, and the quaternion exhibits marked periodicity. Only the first 100 iterations are shown; the rest are similar. (b) Direction error.

We now look at two different styles of normalization. In the first, no normalization is performed on intermediate representations, but the final quaternion, matrix, and vector are normalized. In the second, the intermediate quaternion, matrix, and vector are normalized at each step. Normalization of the matrix and quaternion is not dependent on the input data, but the length of the input vector must be computed and remembered for the vector representation to be normalized. This fact might be significant if many vectors are being treated at once.
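For the quaternion representation, the two styles can be sketched as follows. The Hamilton product and the (1,0,0,0) identity are standard; the driver itself is illustrative and not the report's actual test harness.

```python
import math

def qmul(a, b):
    # Hamilton product of quaternions in (w, x, y, z) order.
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qnorm(q):
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

def compose(qs, normalize_each_step=False):
    # Accumulate a sequence of rotations, starting from the identity.
    acc = (1.0, 0.0, 0.0, 0.0)
    for q in qs:
        acc = qmul(q, acc)
        if normalize_each_step:        # second style: normalize at every step
            acc = qnorm(acc)
    return acc if normalize_each_step else qnorm(acc)  # first style: once, at the end
```

At full machine precision the two styles agree closely; the experiments below ask what happens when the arithmetic is artificially coarse.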

Fig. 12(a) shows the length error when the final quaternion and matrix are normalized prior to application to the input vector, and the incrementally generated vector representation is normalized to its original length (here known to be unity). This case is called IS1, for Iterative Sequence, 1 normalization. The matrix normalization used is the one crossproduct form; when only one normalization is being performed, the three types of normalization perform virtually identically. The computations were performed under rounding to the usual ten bits of precision. If its initial length is not remembered, the vector representation cannot be normalized, and its error is the same as in the non-normalized case of Fig. 11, showing a linear error accumulation.

Fig. 12(b) compares direction error for the three representations normalized as in Fig. 12(a). The comparative performance here is certainly not so clear cut, and the error accumulations resemble random walks. Alternatively, the performance could depend more on the particular choice of input data (rotation and input vector) than on numerical properties of the methods.

Finally, a natural question concerns the effects of normalization applied after every rotation composition. This is case ISN (N normalizations). Fig. 13 shows the results for length and direction. As might be expected, for length error the normalized vector has the best performance (presumably as accurate as machine precision allows). The quaternion result has a systematically negative mean, higher variance, and periodicity. The matrix length error displayed in Fig. 13(a) results from the no crossproduct form of matrix normalization. The trend is not an illusion: carried out to 400 iterations, the length error shows a marked systematic increase in the negative direction. On the other hand, iteration of the one crossproduct normalization resulted in a factor of two worsening of the magnitude of the direction errors for the matrix representation. Repeated normalizations seem to have adversely affected the accuracy of the vector representation after many iterations, but more work is needed to tell if this is a repeatable effect. In any event, it seems clear that the repeated normalizations are certainly not improving matters much in this case over case IS1.

Figure 11(a) Figure 11(b)

Figure 12(a) Figure 12(b)

Figure 13 About Here

Figure 13: Case ISN. As for the previous figure, but normalizing the rotation representation at each iteration, using no crossproduct matrix normalization, which improved direction errors but worsened length errors over the one crossproduct form. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. (b) Direction error.

Figure 14 About Here

Figure 14: Case RS1. As for case IS1 (no normalization of intermediate representations), but using a sequence of random rotations. There is no difference in performance with the different matrix normalizations; no crossproduct matrix normalization is used. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector (first 100 values; the rest are similar). (b) Direction error. The rather confusing results at least have some clear qualitative differences.

The main conclusion here is that the cost of normalizing quaternion or matrix representations after every rotation in a repeated sequence may not be repaid in results, and in fact may be actively harmful. The vector representation seems to do best of the three on length error and worst on direction error. More work (analytic or experimental) would be needed to clarify the performance ranks, or even to recommend or disqualify a representation for this task.

4.5 Concatenated Random Rotations

Here a sequence of random rotations is generated as described earlier, analogous to the sequence of identical rotations in the last experiment, and applied cumulatively to an input vector. In each case computation is with 10 bits precision under rounding. The cases of a single final normalization and of normalizing every intermediate representation (called cases RS1 and RSN) were investigated, with both the no- and one crossproduct forms. The results, comparable to those in cases IS1 and ISN (Figs. 12 and 13), are shown in Figs. 14 and 15.
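The reduced-precision arithmetic can be sketched as follows. Rounding to a 10-bit mantissa only after each matrix-vector product is a coarser model than the report's simulation (which may round every primitive operation), so this is illustrative rather than a reproduction of the experiments.

```python
import math

def round_mantissa(x, bits=10):
    # Round x to `bits` bits of mantissa (round-to-nearest),
    # mimicking artificially reduced floating-point precision.
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)          # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2.0 ** bits
    return math.ldexp(round(m * scale) / scale, e)

def matvec_rounded(M, v, bits=10):
    # Apply a 3x3 rotation matrix, rounding each component of the result.
    return [round_mantissa(sum(M[i][j] * v[j] for j in range(3)), bits)
            for i in range(3)]
```

Iterating `matvec_rounded` with randomly drawn rotation matrices, and comparing against a full-precision reference, gives per-step length and direction errors of the kind plotted in Figs. 14 and 15.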

The results indicate that the vector representation remains better than the other two for length errors, but for direction errors the results are unclear. It is hard to draw clear distinctions either between the normalization methods or between the representations. Again the form of matrix normalization has some annoying effects, which motivate the next section.


Figure 13(a) Figure 13(b)

Figure 14(a) Figure 14(b)

Figure 15 About Here

Figure 15: Case RSN. As for case RS1, but normalizing all intermediate representations. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error using one crossproduct matrix normalization. (b) Direction error using the one crossproduct form. (c) Direction error using the no crossproduct form.

Case       μ-vec     μ-mat     μ-quat    σ-vec     σ-mat     σ-quat
RaTr   L      -      0.006225  0.005026     -      0.001181  0.003369
       D      -      0.001787  0.004578     -      0.001094  0.002307
RaRd   L      -     -0.000310 -0.000225     -      0.001087  0.001263
       D      -      0.000965  0.001180     -      0.000542  0.000759
IS1    L   0.000276 -0.000260 -0.000262  0.000466  0.000857  0.001648
(100)  D   0.003694  0.004063  0.001435  0.001705  0.002298  0.001123
ISN    L   0.000137 -0.001772 -0.000811  0.000533  0.001106  0.001312
(100)  D   0.005646  0.004844  0.001606  0.002786  0.002872  0.001045
RS1    L  -0.000252 -0.000162 -0.000157  0.000521  0.001612  0.002022
(100)  D   0.003710  0.005854  0.006065  0.002150  0.003686  0.006065
RSN    L   0.000164 -0.000602 -0.001097  0.000473  0.000703  0.001666
(100)  D   0.002269  0.009830  0.005960  0.000874  0.004230  0.002236
       D   0.002269  0.005187  0.005960  0.000874  0.002248  0.002236

Table 2: Statistics for length (L) and direction (D) errors for some of the cases illustrated in this section. Cases RaTr and RaRd are the independent random rotations; IS1 and ISN are the iterative sequences (one normalization, and normalization of each intermediate representation); RS1 and RSN are the random sequences. Only the first 100 data values are used from the iterated rotations, to diminish the effect of longer-term trends. The second D line for RSN is with no crossproduct normalization (which only affects the matrix representation).

4.6 Statistical Summary

Table 2 summarizes some of the data from the cases so far.
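The statistics above are computed from per-trial errors of the rotated vector. The report's exact error definitions appear in an earlier section; a plausible form consistent with the table (signed length difference and angular deviation) is:

```python
import math

def length_error(v_true, v_comp):
    # Signed difference between computed and true vector lengths.
    norm = lambda v: math.sqrt(sum(c * c for c in v))
    return norm(v_comp) - norm(v_true)

def direction_error(v_true, v_comp):
    # Angle (in radians) between the true and computed result vectors.
    norm = lambda v: math.sqrt(sum(c * c for c in v))
    cosang = sum(a * b for a, b in zip(v_true, v_comp)) / (norm(v_true) * norm(v_comp))
    return math.acos(max(-1.0, min(1.0, cosang)))   # clamp against rounding
```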

4.7 Matrix Normalization

Some experiments explored the performance of the three types of matrix normalization (no, one, or two cross products) in the context of iterated identical and random rotations.
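The three forms can be sketched as follows. These are plausible reconstructions (the report defines the exact forms in an earlier section), operating on the rows of a drifted rotation matrix; the helpers `unit` and `cross` are ordinary vector operations introduced here for illustration.

```python
import math

def unit(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def normalize_no_cp(R):
    # No crossproduct: rescale each row to unit length (no orthogonalization).
    return [unit(r) for r in R]

def normalize_one_cp(R):
    # One crossproduct: rescale two rows, rebuild the third from their cross product.
    r0, r1 = unit(R[0]), unit(R[1])
    return [r0, r1, unit(cross(r0, r1))]

def normalize_two_cp(R):
    # Two crossproducts: full re-orthogonalization anchored on the first row.
    r0 = unit(R[0])
    r2 = unit(cross(r0, R[1]))
    return [r0, cross(r2, r0), r2]
```

Only the two-crossproduct form guarantees an orthonormal result; the cheaper forms repair length drift but tolerate some residual skew between rows.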

The experiments consist of two iterated identical rotations (Figs. 16, 17) under rounding, and two iterated random rotation sequences under truncation and rounding. Fig. 18 gives the direction errors for two sequences of random rotations applied to an input vector. In no case is there significant difference between the performances on the length error at 200 iterations, but the one- or two crossproduct forms significantly outperform the no crossproduct form over 400 iterations, since the latter exhibits a systematically increasing negative length error and the former do not.

Figure 15(a) Figure 15(b) Figure 15(c)

Figure 16 About Here

Figure 16: Iterated identical rotations of the unit vector in direction (1,1,-1) by 1 radian about the axis (1,1,1), computed to 10 bits under rounding. Solid line - no cross products; dotted line - one cross product; dashed line - two cross products. (a) Length errors. (b) Direction errors.

Figure 17 About Here

Figure 17: As for the previous figure, but the rotation is of the unit vector in direction (1,2,3) by iterated rotations of 2 radians about the axis (3,-2,-1).

For direction error, few conclusions can be drawn about relative performance under iterated identical rotation sequences. It is perhaps surprising that the no crossproduct form usually performs here at least as well as the other two over a small number of iterations, and that for random sequences there is no clear winner between normalization types, even approximating with truncation. Thus the symptoms noticed in the last two sections, involving long-term relative effects of the normalization form, were not noticed over 200 iterations. The one- and two crossproduct forms may be more stable over long iteration, but the issue of optimal normalization remains open.

4.8 Average Error of Concatenated Random Rotations

Which representation is best, or worst? To wind up this overlong exercise in trying to derive structure from noise, let us consider average behavior for the task of random rotation sequences (perhaps of more general interest than iterated identical rotations). Technical decisions on normalization are based on the conclusions of previous sections: approximation is by rounding, with normalization only after the last rotation. Matrix normalization is done with the one crossproduct method. Each run is a single experiment like that of Section 4.5, only performed (with the same data) at a number of different precisions.

Such a set of runs yields length and direction error surfaces plotted against the precision.

Figure 18 About Here

Figure 18: Direction errors only, for a sequence of random rotations applied to an input vector. Legend as in the previous two figures. (a) Truncation to 10 bits. (b) Rounding to 10 bits.


Figure 16(a) Figure 16(b)

Figure 17(a) Figure 17(b)

Figure 18(a) Figure 18(b)

Figure 19(a) Figure 19(b)

Figure 19(c) Figure 19(d)

Rep         Repeated Rot                  Random Seq
Vec(Mat)    (9N+17)*   (6N+13)+   1 sc    26N*       19N+       N sc
Vec(Quat)   (18N+4)*   (12N)+     1 sc    22N*       12N+       N sc
Quat        (16N+22)*  (12N+12)+  1 sc    (20N+18)*  (12N+12)+  N sc
Matrix      (27N+26)*  (18N+19)+  1 sc    (44N+9)*   (31N+6)+   N sc

Table 4: Operation counts for N-long rotation sequences. The long form of matrix multiplication is used here. Vec(Mat) is the vector representation with intermediate transformation by matrices; Vec(Quat), with transformation by the quaternion method. Multiplications, additions, and sine-cosine pairs are denoted by *, +, and sc respectively.

matrix transformation), since matrix-vector multiplication costs one third of matrix-matrix multiplication.

4.10 Conclusions and Questions

The conclusions we can draw at this point are the following:

1. Numerical simulation of variable precision arithmetic has yielded some tentative conclusions, but in many interesting cases they are rather delicate and depend on niceties of the implementation. Accurate performance predictions may only be possible with faithful simulation of the particular hardware and software at issue.

2. Practical differences in numerical accuracy tend to disappear at reasonable precisions. After 200 iterations of a rotation, 53 bits of mantissa precision (the double representation in the C language) yields no error to 10 decimal places. With 24 bits (the float representation), effects were noticeable in the 5th to 7th decimal place with all methods.

3. Normalization is vital before quaternion or matrix representations are applied to vectors, but not as important during the composition of representations, and the extra numerical computation of frequent normalization may actually decrease accuracy in the final result.

4. The vector (with matrix transformation) and matrix representations for rotation sequences seem to outperform the quaternion for accuracy.

5. The vector (with matrix transformation) representation for rotation sequences, with a single normalization of vector length at the end, is most computationally efficient and has the least length error on average.

6. Averaged over many trials, the difference in accuracy of the three methods amounts only to about a single bit of precision for a small number of rotation compositions (say 25 or fewer).

7. Matrices are the least computationally efficient on rotation sequences.


There are also some remaining basic questions:

1. What is a reasonable analytic approach to the numerical accuracy problem for interesting rotational tasks (say, sequences of random rotations under rounding)?

2. How should optimal matrix or quaternion normalization methods be defined and constructed?

5 Discussion

The area of robust computational geometry is becoming popular, with an annual international conference and increasingly sophisticated theoretical formulations. Very little of that work (if any) is related to rotation algorithms. Symbolic algebra systems can help with analysis [4]. A good treatment of computer arithmetic and rounding appears in [15], with an application to detecting line intersection. Serious qualitative representational problems arise when issues of topology (line connectivity) are at stake, and much of the work is on computing line intersections accurately with finite precision arithmetic [19]. For instance, Milenkovic's analysis is based on facts about line intersection, such as the number of extra bits that are necessary to guarantee a specific precision of answer. Some of these facts are surprising: it takes at least three times the input precision to calculate a line intersection location to the same precision [11]. Robust polygon and polyhedron intersection algorithms, as well as various line-sweeping algorithms, are based on accurate line intersection algorithms [8].

Several points of comparison between rotation parameterizations and representations have been raised in the preceding sections. It is perhaps worth repeating that little evidence for dramatic differences in numerical accuracy has emerged, though the evidence might tend to favor matrices slightly and to condemn the vector representation.

One of the common applications of rotational transforms is to affect several vectors at once, either in the context of transforming a set of points around (as in graphics) or in handling coordinate systems represented as n points (n - 1 of them at points at infinity representing axis directions, and one representing the origin of coordinates, in a homogeneous coordinate system). In the first case the repeated application of the same transform will be more economical if the work is done in the matrix representation. In the second the same may well be true, but of course the whole point of homogeneous coordinates is to make the projective (a fortiori, affine) transforms into linear ones implemented with matrix multiplication. This means, for example, that coordinate systems, rigid transformations including translation and rotation, and point projective transforms (simple camera models) can all be represented as matrices and composed with matrix multiplication, and that the operation of such transforms on vectors is simply matrix-vector multiplication. In this context homogeneous coordinates by definition require the matrix representation.
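A minimal sketch of this homogeneous-coordinate machinery: 4x4 matrices compose rigid transforms, ordinary points carry w = 1, and directions (points at infinity) carry w = 0, so translation leaves them unchanged. The function names are illustrative.

```python
import math

def rot_z(theta):
    # Homogeneous rotation about the Z axis.
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def translate(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def matmul(A, B):
    # Composition of transforms is 4x4 matrix multiplication.
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(M, p, w=1.0):
    # w = 1 for ordinary points, w = 0 for directions (points at infinity).
    v = [p[0], p[1], p[2], w]
    out = [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]
    return out[:3]
```

Composing a 90-degree rotation about Z with a unit translation along X maps the point (1,0,0) to (1,1,0), while the direction (1,0,0) is merely rotated to (0,1,0), untouched by the translation.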

If the rotation at issue is a general one (say, specified in conic (angle and axis) parameters), then conversion either to the quaternion or to the matrix form is easy. Thus the representations in


this case do not dictate the semantics, or how one wants to think about the rotation. If the rotation is specified in terms of rotations about (X, Y, Z) coordinate axes, then the matrix representation is easily and robustly constructed.
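The conversions from conic (angle and axis) parameters are indeed short. A sketch, with the quaternion in (w, x, y, z) order and the matrix built from Rodrigues' formula:

```python
import math

def conic_to_quaternion(theta, axis):
    # Unit quaternion for rotation by theta about axis (need not be unit length).
    n = math.sqrt(sum(c * c for c in axis))
    s = math.sin(theta / 2.0) / n
    return (math.cos(theta / 2.0), axis[0] * s, axis[1] * s, axis[2] * s)

def conic_to_matrix(theta, axis):
    # Rodrigues' formula: R = I + sin(t) K + (1 - cos(t)) K^2,
    # written out elementwise for the unit axis (x, y, z).
    n = math.sqrt(sum(c * c for c in axis))
    x, y, z = (c / n for c in axis)
    c, s = math.cos(theta), math.sin(theta)
    C = 1.0 - c
    return [[c + x*x*C,   x*y*C - z*s, x*z*C + y*s],
            [y*x*C + z*s, c + y*y*C,   y*z*C - x*s],
            [z*x*C - y*s, z*y*C + x*s, c + z*z*C]]
```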

Real-time applications involving concatenating rotations and applying them to a small number of vectors (perhaps one: the position of a robot hand) need detailed analysis to see which representation is the most efficient. Similarly for more complex geometric calculations.

A common application of rotation representations in vision is to express the motion of the observer. Often the rotational motions, for mechanical or mathematical reasons, are expressed as rotations about some fixed axes, which in practice are usually orthogonal. Examples are the motions induced by pan and tilt camera platforms, or the expression of egomotion in terms of rotations about the observer's local X, Y, Z axes. In this case the matrix representation is the clear choice, to maximize efficiency and intuition.

Reconstructing the rotation that transformed known points is a useful vision task that requires fitting data to a model of the transformation. The straightforward route, followed since Roberts' thesis [17], is to do a linear least squares fit to the transform's matrix representation. This approach is perhaps risky, since the matrix elements are not independent and the error being minimized thus may not be the one desired. The alternative of using conic parameters has been successfully used [12] and may well be a better idea. Recently, more sophisticated Kalman filter methods have been used to derive motion from point matches through time [5]. Model-matching with constraints allows a model to be matched to a class of objects, and promising approaches are emerging that use hybrid methods of symbolic algebra (implementing polynomial triangulation to get explicit solutions for known constraints) followed by root-finding and sophisticated minimization techniques to search for minimum-error solutions [13,14].

A less common but important set of applications is to reason about and manipulate rotations themselves. For example, one may want to sample rotation space evenly [9], or to create smoothly interpolated paths in rotation space between a given set of rotations [16]. In these cases the Euler-Rodrigues parameters are the most convenient.
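Smooth interpolation in rotation space is commonly sketched as spherical linear interpolation (slerp) between unit quaternions, i.e. between Euler-Rodrigues parameter vectors. This is a standard construction in the spirit of [16], not necessarily the scheme used there.

```python
import math

def slerp(q0, q1, t):
    # Spherical linear interpolation between unit quaternions in (w, x, y, z) order.
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                    # q and -q are the same rotation: take the shorter arc
        q1 = tuple(-c for c in q1)
        dot = -dot
    dot = min(1.0, dot)
    omega = math.acos(dot)           # great-circle angle between the quaternions
    if math.sin(omega) < 1e-9:       # nearly identical endpoints
        return q0
    s0 = math.sin((1.0 - t) * omega) / math.sin(omega)
    s1 = math.sin(t * omega) / math.sin(omega)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))
```

Halfway between the identity and a 90-degree rotation about Z, for example, slerp yields the 45-degree rotation about Z, with uniform angular velocity along the path.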

Another example is the use, since 1932, of quaternions in the neurophysiological literature to describe eye kinematics [22]. Models of human three-dimensional oculomotor performance, especially for the vestibulo-ocular reflex, and feedback models of saccades involve computations on rotations that are much more stable when carried out in conic or Euler-Rodrigues parameters using quaternion arithmetic. Subtractive feedback laws adequate for simple (left or right) eye movements do not generalize to arbitrary rotational axes. The quaternion model uses multiplicative feedback, explains human performance better than similar models based on rotation matrices, and leads to explicit predictions, such as a four-channel saccade computation [21].


6 Bibliography

[1] S. L. Altmann. Rotations, Quaternions and Double Groups. Clarendon Press, Oxford, 1986.

[2] C. M. Brown. Gaze controls with interactions and delays. In DARPA Image Understanding Workshop, pages 200-218, May 1989. Submitted to IEEE-TSMC.

[3] C. M. Brown. Prediction in gaze and saccade control. Technical Report OUEL 1771/89 (UR TR-295), Oxford University Dept. Engineering Science (U. Rochester Comp. Sci. Dept.), May 1989.

[4] C. I. Connolly, D. Kapur, J. L. Mundy, and R. Weiss. GeoMeter: A system for modeling and algebraic manipulation. In Proceedings, DARPA Image Understanding Workshop, pages 797-804, May 1989.

[5] O. D. Faugeras, F. Lustman, and G. Toscani. Motion and structure from motion from point and line matches. In Proceedings, International Conference on Computer Vision, pages 25-34, June 1987.

[6] J. D. Foley and A. Van Dam. Fundamentals of Interactive Computer Graphics. Addison-Wesley, 1982.

[7] W. R. Hamilton. On quaternions; or on a new system of imaginaries in algebra. Philosophical Magazine, 3rd Ser., 25:489-495, 1844.

[8] C. Hoffmann and J. Hopcroft. Towards implementing robust geometric computations. In Proceedings, ACM Symposium on Computational Geometry, 1988.

[9] Berthold K. P. Horn. Robot Vision. MIT Press/McGraw-Hill, 1986.

[10] K. Kanatani. Group Theory in Computer Vision. Springer Verlag, 1989.

[11] V. J. Milenkovic. Robust geometric computations for vision and robotics. In Proceedings, DARPA Image Understanding Workshop, pages 764-773, May 1989.

[12] J. L. Mundy. Private communication, August 1989.

[13] J. L. Mundy. The application of symbolic algebra techniques to object modeling. Lecture, University of Oxford, August 1989.

[14] J. L. Mundy. Symbolic representation of object models. In preparation, August 1989.

[15] T. Ortmann, G. Thiemt, and C. Ullrich. Numerical stability of geometric algorithms. In Proceedings, ACM Symposium on Computational Geometry, 1987.

[16] K. S. Roberts, G. Bishop, and S. K. Ganapathy. Smooth interpolation of rotational motion. In Computer Vision and Pattern Recognition 1988, pages 724-729, June 1988.

[17] L. G. Roberts. Machine perception of three-dimensional solids. In J. T. Tippett et al., editors, Optical and Electro-Optical Information Processing, pages 159-197. MIT Press, 1968.

[18] O. Rodrigues. Des lois géométriques qui régissent les déplacements d'un système solide dans l'espace, et de la variation des coordonnées provenant de ses déplacements considérés indépendamment des causes qui peuvent les produire. Journal de Mathématiques Pures et Appliquées, 5:380-440, 1840.

[19] D. Salesin, J. Stolfi, and L. Guibas. Epsilon geometry: building robust algorithms from imprecise computations. In Proceedings, ACM Symposium on Computational Geometry, 1989.

[20] R. Taylor. Planning and execution of straight line manipulator trajectories. In J. M. Brady, J. M. Hollerbach, T. L. Johnson, T. Lozano-Pérez, and M. T. Mason, editors, Robot Motion: Planning and Control. MIT Press, 1982.

[21] D. Tweed and T. Vilis. Implications of rotational kinematics for the oculomotor system in three dimensions. Journal of Neurophysiology, 58(4):832-849, 1987.

[22] G. Westheimer. Kinematics for the eye. Journal of the Optical Society of America, 47:967-974, 1932.

Page 24: Some Computational Properties of - Semantic Scholar · There are four main parameterizations of the rotation group SO(3). Two of them (rotation angle and axis, and the closely related

Figure 12 About Here

Figure 12 Case lSI Rotating the vector (11 -I) by 1 radian increments about the axis (111) with 10 bits precision under rounding normalizing the quaternion and matrix representation prior to application to the input vector and normalizing the final vector representation to its initial length Solid line - vector representation dotted line - matrix dashed line - quaternion (a) Length error of the result vector The vector representation has lower variance and the quaternion exhibits marked periodicity Only the first 100 iterations are shown the rest are similar (b) Direction error

We now look at two different styles of normalization In the first no normalization is performed on intermediate representations but the final quaternion matrix and vector are normalized In the second the intermediate quaternion matrix and vector are normalized at each step Normalization of the matrix and quaternion is not dependent on the input data but the length of the input vector must be computed and remembered for the vector representation to be normalized This fact might be significant if many vectors are being treated at once

Fig 12(a) shows the length error when the final quaternion and matrix are normalized prior to application to the input vector, and the incrementally generated vector representation is normalized to its original length (here known to be unity). This case is called IS1, for Iterative Sequence, 1 normalization. The matrix normalization used is the one-crossproduct form. When only one normalization is being performed, the three types of normalization perform virtually identically. The computations were performed under rounding to the usual ten bits of precision. If its initial length is not remembered, the vector representation cannot be normalized, and its error is the same as the non-normalized case of Fig 11, showing a linear error accumulation.

Fig 12(b) compares direction error for the three representations, normalized as in Fig 12(a). The comparative performance here is certainly not so clear cut, and the error accumulations resemble random walks. Alternatively, the performance could depend more on the particular choice of input data (rotation and input vector) than on numerical properties of the methods.
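The two normalizations that do not involve matrices can be sketched very simply. The following is a minimal illustration in Python (function names are mine, not the author's code); note that, as discussed above, the vector representation can only be renormalized if the input vector's length was remembered:

```python
import math

def normalize_quaternion(q):
    # Rescale the four quaternion components to unit norm.
    n = math.sqrt(sum(x * x for x in q))
    return [x / n for x in q]

def renormalize_vector(v, remembered_length):
    # The vector representation carries no record of its own scale, so
    # the original length must have been computed and stored beforehand.
    n = math.sqrt(sum(x * x for x in v))
    return [x * remembered_length / n for x in v]
```

The matrix case, with its no-, one-, and two-crossproduct variants, is examined in Section 4.7.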

Finally, a natural question concerns the effects of normalization applied after every rotation composition. This is case ISN (N normalizations). Fig 13 shows the results for length and direction. As might be expected, for length error the normalized vector has the best performance (presumably as accurate as machine precision allows). The quaternion result has a systematically negative mean, higher variance, and periodicity. The matrix length error displayed in Fig 13(a) results from the no-crossproduct form of matrix normalization. The trend is not an illusion; carried out to 400 iterations, the length error shows a marked systematic increase in the negative direction. On the other hand, iteration of the one-crossproduct normalization resulted in a factor of two worsening of the magnitude of the direction errors for the matrix representation. Repeated normalizations seem to have adversely affected the accuracy of the vector representation after many iterations, but more work is needed to tell if this is a repeatable effect. In any event, it seems clear that the repeated normalizations are certainly not improving matters much in this case over case IS1.

Figure 11(a) Figure 11(b)

Figure 12(a) Figure 12(b)

Figure 13 About Here

Figure 13: Case ISN. As for the previous figure, but normalizing the rotation representation at each iteration, using no-crossproduct matrix normalization, which improved direction errors but worsened length errors over the one-crossproduct form. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. (b) Direction error.

Figure 14 About Here

Figure 14: Case RS1. As for case IS1 (no normalization of intermediate representations), but using a sequence of random rotations. There is no difference in performance with the different matrix normalizations; no-crossproduct matrix normalization is shown. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector (first 100 values; the rest are similar). (b) Direction error. The rather confusing results at least have some clear qualitative differences.

The main conclusion here is that the cost of normalizing quaternion or matrix representations after every rotation in a repeated sequence may not be repaid in results, and in fact may be actively harmful. The vector representation seems to do best of the three on length error and worst on direction error. More work (analytic or experimental) would be needed to clarify the performance ranks, or even to recommend or disqualify a representation for this task.

4.5 Concatenated Random Rotations

Here a sequence of random rotations is generated as described earlier, analogous to the sequence of identical rotations in the last experiment. They are applied cumulatively to an input vector. In each case computation is with 10 bits of precision under rounding. The cases of a single final normalization and of normalizing every intermediate representation (called cases RS1 and RSN) were investigated. Both the no- and one-crossproduct forms were investigated. The results, comparable to those in cases IS1 and ISN (Figs 12 and 13), are shown in Figs 14 and 15.
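The style of experiment described above can be sketched as follows. This is a hypothetical reconstruction (function names and details are mine, not the paper's code): reduced-precision arithmetic is simulated by rounding every product and sum to a b-bit mantissa, and random rotations are composed cumulatively onto an input vector.

```python
import math
import random

def chop(x, bits=10):
    # Round x to a `bits`-bit mantissa, simulating reduced precision.
    if x == 0.0:
        return 0.0
    e = math.floor(math.log2(abs(x))) + 1
    scale = 2.0 ** (bits - e)
    return round(x * scale) / scale

def random_rotation_matrix():
    # Random unit axis (normalized Gaussian sample) and random angle,
    # turned into a matrix via Rodrigues' formula.
    a = [random.gauss(0.0, 1.0) for _ in range(3)]
    n = math.sqrt(sum(x * x for x in a))
    x, y, z = (c / n for c in a)
    t = random.uniform(0.0, 2.0 * math.pi)
    c, s, v = math.cos(t), math.sin(t), 1.0 - math.cos(t)
    return [[c + v*x*x,   v*x*y - s*z, v*x*z + s*y],
            [v*x*y + s*z, c + v*y*y,   v*y*z - s*x],
            [v*x*z - s*y, v*y*z + s*x, c + v*z*z]]

def apply_chopped(m, vec, bits=10):
    # Matrix-vector product with every product and every row sum rounded.
    return [chop(sum(chop(m[i][j] * vec[j], bits) for j in range(3)), bits)
            for i in range(3)]

# Example: apply 100 random rotations cumulatively at 10-bit precision
# and inspect the length error of the resulting vector.
random.seed(1)
v = [1.0, 0.0, 0.0]
for _ in range(100):
    v = apply_chopped(random_rotation_matrix(), v, bits=10)
length_error = math.sqrt(sum(x * x for x in v)) - 1.0
```

A real simulation would also round the intermediate representation compositions (quaternion products or matrix products), not just the final matrix-vector application; this sketch shows only the mechanism.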

The results indicate that the vector representation remains better than the other two for length errors, but for direction errors the results are unclear. It is hard to draw clear distinctions either between the normalization methods or the representations. Again the form of matrix normalization has some annoying effects, which motivate the next section.


Figure 13(a) Figure 13(b)

Figure 14(a) Figure 14(b)

Figure 15 About Here

Figure 15: Case RSN. As for case RS1, but normalizing all intermediate representations. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error using one-crossproduct matrix normalization. (b) Direction error using the one-crossproduct form. (c) Direction error using the no-crossproduct form.

Case      |   μ-vec   |   μ-mat   |  μ-quat   |  σ-vec   |  σ-mat   |  σ-quat  |  N
RaTr   L  |     -     |  0.006225 |  0.005026 |    -     | 0.001181 | 0.003369 |
       D  |     -     |  0.001787 |  0.004578 |    -     | 0.001094 | 0.002307 |
RaRd   L  |     -     | -0.000310 | -0.000225 |    -     | 0.001087 | 0.001263 |
       D  |     -     |  0.000965 |  0.001180 |    -     | 0.000542 | 0.000759 |
IS1    L  |  0.000276 | -0.000260 | -0.000262 | 0.000466 | 0.000857 | 0.001648 | 100
       D  |  0.003694 |  0.004063 |  0.001435 | 0.001705 | 0.002298 | 0.001123 |
ISN    L  |  0.000137 | -0.001772 | -0.000811 | 0.000533 | 0.001106 | 0.001312 | 100
       D  |  0.005646 |  0.004844 |  0.001606 | 0.002786 | 0.002872 | 0.001045 |
RS1    L  | -0.000252 | -0.000162 | -0.000157 | 0.000521 | 0.001612 | 0.002022 | 100
       D  |  0.003710 |  0.005854 |  0.006065 | 0.002150 | 0.003686 | 0.006065 |
RSN    L  |  0.000164 | -0.000602 | -0.001097 | 0.000473 | 0.000703 | 0.001666 | 100
       D  |  0.002269 |  0.009830 |  0.005960 | 0.000874 | 0.004230 | 0.002236 |
       D  |  0.002269 |  0.005187 |  0.005960 | 0.000874 | 0.002248 | 0.002236 |

Table 2: Statistics for length (L) and direction (D) errors for some of the cases illustrated in this section. Cases RaTr and RaRd are the independent random rotations; IS1 and ISN are the iterative sequences (one normalization, and normalization of each intermediate representation); and RS1 and RSN are the random sequences. Only the first 100 data values are used from the iterated rotations, to diminish the effect of longer-term trends. The second D line of RSN is with no-crossproduct normalization (which only affects the matrix representation).

4.6 Statistical Summary

Table 2 summarizes some of the data from the cases so far.

4.7 Matrix Normalization

Some experiments explored the performance of the three types of matrix normalization (no, one, or two cross products) in the context of iterated identical and random rotations.
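The three forms are not spelled out in this excerpt, so the following is a plausible reconstruction, not necessarily the author's exact procedures (names and details are mine): "no crossproduct" merely rescales each row to unit length, "one crossproduct" rebuilds the third row from the first two, and "two crossproducts" fully re-orthogonalizes the row set.

```python
import math

def _unit(r):
    # Rescale a row vector to unit length.
    n = math.sqrt(sum(x * x for x in r))
    return [x / n for x in r]

def _cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def normalize_no_cross(m):
    # Unit-length rows only; rows may remain slightly non-orthogonal.
    return [_unit(r) for r in m]

def normalize_one_cross(m):
    # Unit rows 0 and 1; row 2 recomputed as their cross product.
    r0, r1 = _unit(m[0]), _unit(m[1])
    return [r0, r1, _cross(r0, r1)]

def normalize_two_cross(m):
    # Full re-orthogonalization: r2 := r0 x r1, then r1 := r2 x r0.
    r0 = _unit(m[0])
    r2 = _unit(_cross(r0, m[1]))
    return [r0, _cross(r2, r0), r2]
```

The trade-off the experiments probe is visible in the code: the cheaper forms do less work per step but enforce less of the orthogonality that a rotation matrix should have.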

The experiments consist of two iterated identical rotations (Figs 16, 17) under rounding, and two iterated random rotation sequences under truncation and rounding. Fig 18 gives the direction errors for two sequences of random rotations applied to an input vector. In no case is there a significant difference between the performances on the length error at 200 iterations, but the one- or two-crossproduct forms significantly outperform the no-crossproduct form over 400 iterations, since the latter exhibits a systematically increasing negative length error and the former do not.

Figure 15(a)

Figure 15(b)

Figure 15(c)

Figure 16 About Here

Figure 16: Iterated identical rotations of the unit vector in direction (1, 1, -1) by 1 radian about the axis (1, 1, 1), computed to 10 bits under rounding. Solid line - no cross products; dotted line - one cross product; dashed line - two cross products. (a) Length errors. (b) Direction errors.

Figure 17 About Here

Figure 17: As for the previous figure, but the rotation is of the unit vector in direction (1, 2, 3) by iterated rotations of 2 radians about the axis (3, -2, -1).

For direction error, few conclusions can be drawn about relative performance under iterated identical rotation sequences. It is perhaps surprising that the no-crossproduct form usually performs here at least as well as the other two over a small number of iterations, and that for random sequences there is no clear winner between normalization types, even approximating with truncation. Thus the symptoms noticed in the last two sections, involving long-term relative effects of the normalization form, were not noticed over 200 iterations. The one- and two-crossproduct forms may be more stable over long iteration, but the issue of optimal normalization remains open.

4.8 Average Error of Concatenated Random Rotations

Which representation is best, or worst? To wind up this overlong exercise in trying to derive structure from noise, let us consider average behavior for the task of random rotation sequences (perhaps of more general interest than iterated identical rotations). Technical decisions on normalization are based on the conclusions of previous sections: approximation is by rounding, with normalization only after the last rotation. Matrix normalization is done with the one-crossproduct method. Each run is a single experiment like that of Section 4.5, only performed (with the same data) at a number of different precisions.

Such a set of runs yields length and direction error surfaces plotted against the precision.

Figure 18 About Here

Figure 18: Direction errors only, for a sequence of random rotations applied to an input vector. Legend as in the previous two figures. (a) Truncation to 10 bits. (b) Rounding to 10 bits.


Figure 16(a) Figure 16(b)

Figure 17(a) Figure 17(b)

Figure 18(a) Figure 18(b)

Figure 19(a) Figure 19(b)

Figure 19(c) Figure 19(d)

Rep       | Repeated Rot                | Random Seq
Vec(Mat)  | (9N+17)*, (6N+13)+, 1 sc    | 26N*, 19N+, N sc
Vec(Quat) | (18N+4)*, (12N)+, 1 sc      | 22N*, 12N+, N sc
Quat      | (16N+22)*, (12N+12)+, 1 sc  | (20N+18)*, (12N+12)+, N sc
Matrix    | (27N+26)*, (18N+19)+, 1 sc  | (44N+9)*, (31N+6)+, N sc

Table 4: Operation counts for N-long rotation sequences. The long form of matrix multiplication is used here. Vec(Mat) is the vector representation with intermediate transformation by matrices; Vec(Quat), with transformation by the quaternion method. Multiplications, additions, and sine-cosine pairs are denoted by *, +, and sc, respectively.

matrix transformation) since matrix-vector multiplication costs one third of matrix-matrix multiplication
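The random-sequence column of Table 4 is easy to turn into concrete totals. A small sketch (the encoding is mine; each entry is a (multiplies, adds, sine-cosine pairs) tuple, with the sc pairs taken as given rather than costed):

```python
# Operation counts from Table 4, random-sequence column,
# as functions of the sequence length N.
RANDOM_SEQ = {
    "Vec(Mat)":  lambda N: (26 * N,      19 * N,      N),
    "Vec(Quat)": lambda N: (22 * N,      12 * N,      N),
    "Quat":      lambda N: (20 * N + 18, 12 * N + 12, N),
    "Matrix":    lambda N: (44 * N + 9,  31 * N + 6,  N),
}

def cheapest(N):
    # Rank representations by multiplication count for an N-rotation
    # random sequence (a crude proxy for total cost).
    return min(RANDOM_SEQ, key=lambda k: RANDOM_SEQ[k](N)[0])
```

For example, by multiplication count alone, `cheapest(1)` picks Vec(Quat) (22 multiplies versus 26, 38, and 53), while the matrix representation has the highest count at every N, consistent with conclusion 7 below.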

4.10 Conclusions and Questions

The conclusions we can draw at this point are the following:

1. Numerical simulation of variable precision arithmetic has yielded some tentative conclusions, but in many interesting cases they are rather delicate and depend on niceties of the implementation. Accurate performance predictions may only be possible with faithful simulation of the particular hardware and software at issue.

2. Practical differences in numerical accuracy tend to disappear at reasonable precisions. After 200 iterations of a rotation, 53 bits of mantissa precision (the double representation in the C language) yields no error to 10 decimal places. With 24 bits (the float representation), effects were noticeable in the 5th to 7th decimal place with all methods.

3. Normalization is vital before quaternion or matrix representations are applied to vectors, but not as important during the composition of representations, and the extra numerical computation of frequent normalization may actually decrease accuracy in the final result.

4. The vector (with matrix transformation) and matrix representations for rotation sequences seem to outperform the quaternion for accuracy.

5. The vector (with matrix transformation) representation for rotation sequences, with a single normalization of vector length at the end, is most computationally efficient and has least length error on the average.

6. Averaged over many trials, the difference in accuracy of the three methods amounts only to about a single bit of precision for a small number of rotation compositions (say 25 or fewer).

7. Matrices are the least computationally efficient on rotation sequences.



There are also some remaining basic questions:

1. What is a reasonable analytic approach to the numerical accuracy problem for interesting rotational tasks (say, sequences of random rotations under rounding)?

2. How should optimal matrix or quaternion normalization methods be defined and constructed?

5 Discussion

The area of robust computational geometry is becoming popular, with an annual international conference and increasingly sophisticated theoretical formulations. Very little of that work (if any) is related to rotation algorithms. Symbolic algebra systems can help with analysis [4]. A good treatment of computer arithmetic and rounding appears in [15], with an application to detecting line intersection. Serious qualitative representational problems arise when issues of topology (line connectivity) are at stake, and much of the work is on computing line intersection accurately with finite precision arithmetic [19]. For instance, Milenkovic's analysis is based on facts about line intersection, such as the number of extra bits that are necessary to guarantee a specific precision of answer. Some of these facts are surprising: it takes at least three times the input precision to calculate a line intersection location to the same precision [11]. Robust polygon and polyhedron intersection algorithms, as well as various line-sweeping algorithms, are based on accurate line intersection algorithms [8].

Several points of comparison between rotation parameterizations and representations have been raised in the preceding sections. It is perhaps worth repeating that little evidence for dramatic differences in numerical accuracy has emerged, though the evidence might tend to favor matrices slightly and to condemn the vector representation.

One of the common applications of rotational transforms is to affect several vectors at once, either in the context of transforming a set of points around (as in graphics) or in handling coordinate systems represented as n points (n - 1 of them at points at infinity representing axis directions, and one representing the origin of coordinates, in a homogeneous coordinate system). In the first case, the repeated application of the same transform will be more economical if the work is done in the matrix representation. In the second, the same may well be true, but of course the whole point of homogeneous coordinates is to make the projective (a fortiori, affine) transforms into linear ones implemented with matrix multiplication. This means, for example, that coordinate systems, rigid transformations including translation and rotation, and point projective transforms (simple camera models) can all be represented as matrices and composed with matrix multiplication, and that the operation of such transforms on vectors is simply matrix-vector multiplication. In this context homogeneous coordinates by definition require the matrix representation.
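The homogeneous-coordinate point can be made concrete with a generic sketch (not tied to the paper's experiments; names are mine): a rotation and a translation embed in a single 4x4 matrix, transforms compose by matrix product, and application to a point is one matrix-vector product.

```python
def homogeneous(rot3, t):
    # Embed a 3x3 rotation and a translation vector in one 4x4 matrix.
    return [rot3[i] + [t[i]] for i in range(3)] + [[0.0, 0.0, 0.0, 1.0]]

def matmul4(a, b):
    # Compose two homogeneous transforms (b applied first, then a).
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply4(m, p):
    # Apply to a point (homogeneous weight w = 1); a direction vector
    # would use w = 0 and so ignore the translation column.
    v = p + [1.0]
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(3)]
```

For example, a 90-degree rotation about Z followed by a unit translation along X sends the point (1, 0, 0) to (1, 1, 0) in a single application.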

If the rotation at issue is a general one (say, specified in conic (angle, axis) parameters), then conversion either to the quaternion or matrix form is easy. Thus the representations in this case do not dictate the semantics, or how one wants to think about the rotation. If the rotation is specified in terms of rotations about (X, Y, Z) coordinate axes, then the matrix representation is easily and robustly constructed.
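These conversions are indeed straightforward. A standard sketch of both (the half-angle quaternion map and Rodrigues' rotation formula; notation mine):

```python
import math

def axis_angle_to_quaternion(axis, angle):
    # Quaternion (w, x, y, z) for rotation by `angle` about `axis`;
    # the axis is normalized here rather than assumed unit.
    n = math.sqrt(sum(a * a for a in axis))
    s = math.sin(angle / 2.0) / n
    return [math.cos(angle / 2.0)] + [a * s for a in axis]

def axis_angle_to_matrix(axis, angle):
    # Rodrigues' formula: R = I + sin(t) K + (1 - cos(t)) K^2,
    # written out elementwise for the unit axis (x, y, z).
    n = math.sqrt(sum(a * a for a in axis))
    x, y, z = (a / n for a in axis)
    c, s, v = math.cos(angle), math.sin(angle), 1.0 - math.cos(angle)
    return [[c + v*x*x,   v*x*y - s*z, v*x*z + s*y],
            [v*x*y + s*z, c + v*y*y,   v*y*z - s*x],
            [v*x*z - s*y, v*y*z + s*x, c + v*z*z]]
```

Both conversions cost one sine-cosine pair and a handful of multiplications, which is why the operation counts in Table 4 charge rotation specification the same "sc" term regardless of target representation.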

Real-time applications involving concatenating rotations and applying them to a small number of vectors (perhaps one: the position of a robot hand) need detailed analysis to see which representation is the most efficient. Similarly for more complex geometric calculations.

A common application of rotation representations in vision is to express the motion of the observer. Often the rotational motions, for mechanical or mathematical reasons, are expressed as rotations about some fixed axes, which in practice are usually orthogonal. Examples are the motions induced by pan and tilt camera platforms, or the expression of egomotion in terms of rotations about the observer's local X, Y, Z axes. In this case the matrix representation is the clear choice, to maximize efficiency and intuition.

Reconstructing the rotation that transformed known points is a useful vision task that requires fitting data to a model of the transformation. The straightforward route, followed since Roberts' thesis [17], is to do a linear least squares fit to the transform's matrix representation. This approach is perhaps risky, since the matrix elements are not independent and the error being minimized thus may not be the one desired. The alternative of using conic parameters has been successfully used [12] and may well be a better idea. Recently, more sophisticated Kalman filter methods have been used to derive motion from point matches through time [5]. Model-matching with constraints allows a model to be matched to a class of objects, and promising approaches are emerging that use hybrid methods of symbolic algebra (implementing polynomial triangulation to get explicit solutions for known constraints) followed by root-finding and sophisticated minimization techniques to search for minimum-error solutions [13, 14].

A less common but important set of applications is to reason about and manipulate rotations themselves. For example, one may want to sample rotation space evenly [9], or to create smoothly interpolated paths in rotation space between a given set of rotations [16]. In these cases the Euler-Rodrigues parameters are the most convenient.
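Smooth interpolation between two rotations is commonly done by spherical linear interpolation ("slerp") of unit quaternions, which moves at constant angular velocity along the great circle between them. A generic sketch (slerp is the standard construction, not necessarily the method of [16]):

```python
import math

def slerp(q0, q1, t):
    # Spherical linear interpolation between unit quaternions q0 and q1,
    # for t in [0, 1].
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:
        # q and -q represent the same rotation; flip to take the
        # shorter arc on the quaternion sphere.
        q1 = [-x for x in q1]
        dot = -dot
    dot = min(dot, 1.0)
    theta = math.acos(dot)
    if theta < 1e-9:
        # Nearly identical rotations: linear interpolation suffices.
        return [a + t * (b - a) for a, b in zip(q0, q1)]
    w0 = math.sin((1.0 - t) * theta) / math.sin(theta)
    w1 = math.sin(t * theta) / math.sin(theta)
    return [w0 * a + w1 * b for a, b in zip(q0, q1)]
```

This is one place where the quaternion form is clearly more convenient than the matrix form: the great-circle weights fall directly out of the four-parameter representation.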

Another example is the use since 1957 of quaternions in the neurophysiological literature to describe eye kinematics [22]. Models of human three-dimensional oculomotor performance, especially for the vestibulo-ocular reflex and feedback models of saccades, involve computations on rotations that are much more stable when carried out in conic or Euler-Rodrigues parameters using quaternion arithmetic. Subtractive feedback laws adequate for simple (left or right) eye movements do not generalize to arbitrary rotational axes. The quaternion model uses multiplicative feedback, explains human performance better than similar models based on rotation matrices, and leads to explicit predictions, such as a four-channel saccade computation [21].


6 Bibliography

[1] S. L. Altmann. Rotations, Quaternions and Double Groups. Clarendon Press, Oxford, 1986.

[2] C. M. Brown. Gaze controls with interactions and delays. In DARPA Image Understanding Workshop (submitted to IEEE-TSMC), pages 200-218, May 1989.

[3] C. M. Brown. Prediction in gaze and saccade control. Technical Report OUEL 1771/89 (UR TR-295), Oxford University Dept. of Engineering Science (U. Rochester Comp. Sci. Dept.), May 1989.

[4] C. I. Connolly, D. Kapur, J. L. Mundy, and R. Weiss. GeoMeter: a system for modeling and algebraic manipulation. In Proceedings DARPA Image Understanding Workshop, pages 797-804, May 1989.

[5] O. D. Faugeras, F. Lustman, and G. Toscani. Motion and structure from motion from point and line matches. In Proceedings International Conference on Computer Vision, pages 25-34, June 1987.

[6] J. D. Foley and A. van Dam. Fundamentals of Interactive Computer Graphics. Addison-Wesley, 1982.

[7] W. R. Hamilton. On quaternions; or on a new system of imaginaries in algebra. Philosophical Magazine, 3rd Ser., 25:489-495, 1844.

[8] C. Hoffmann and J. Hopcroft. Towards implementing robust geometric computations. In Proceedings ACM Symposium on Computational Geometry, 1988.

[9] Berthold K. P. Horn. Robot Vision. MIT Press / McGraw-Hill, 1986.

[10] K. Kanatani. Group Theory in Computer Vision. Springer Verlag, 1989.

[11] V. J. Milenkovic. Robust geometric computations for vision and robotics. In Proceedings DARPA Image Understanding Workshop, pages 764-773, May 1989.

[12] J. L. Mundy. Private communication, August 1989.

[13] J. L. Mundy. The application of symbolic algebra techniques to object modeling. Lecture, University of Oxford, August 1989.

[14] J. L. Mundy. Symbolic representation of object models. In preparation, August 1989.

[15] T. Ortmann, G. Thiemt, and C. Ullrich. Numerical stability of geometric algorithms. In Proceedings ACM Symposium on Computational Geometry, 1987.

[16] K. S. Roberts, G. Bishop, and S. K. Ganapathy. Smooth interpolation of rotational motion. In Computer Vision and Pattern Recognition 1988, pages 724-729, June 1988.

[17] L. G. Roberts. Machine perception of three-dimensional solids. In J. T. Tippett et al., editors, Optical and Electro-Optical Information Processing, pages 159-197. MIT Press, 1965.

[18] O. Rodrigues. Des lois géométriques qui régissent les déplacements d'un système solide dans l'espace, et de la variation des coordonnées provenant de ces déplacements considérés indépendamment des causes qui peuvent les produire. Journal de Mathématiques Pures et Appliquées, 5:380-440, 1840.

[19] D. Salesin, J. Stolfi, and L. Guibas. Epsilon geometry: building robust algorithms from imprecise computations. In Proceedings ACM Symposium on Computational Geometry, 1989.

[20] R. Taylor. Planning and execution of straight line manipulator trajectories. In J. M. Brady, J. M. Hollerbach, T. L. Johnson, T. Lozano-Pérez, and M. T. Mason, editors, Robot Motion: Planning and Control. MIT Press, 1982.

[21] D. Tweed and T. Vilis. Implications of rotational kinematics for the oculomotor system in three dimensions. Journal of Neurophysiology, 58(4):832-849, 1987.

[22] G. Westheimer. Kinematics of the eye. Journal of the Optical Society of America, 47:967-974, 1957.

Page 25: Some Computational Properties of - Semantic Scholar · There are four main parameterizations of the rotation group SO(3). Two of them (rotation angle and axis, and the closely related

~_ VIOCDW_IO-_aaa-VIOC_EnaIO~_

1_ f~

I I

I I I

I shy I lshyI I

I I NI I

I I I I I

Figure l1(a) Figure l1(b)

teQVOJl-EnaIO11IDnn teQVAMDWEnaIO I

UHII

Figure 12(a) Figure 12(b)

Figure 13 About Here

Figure 13 Case ISN As for previous figure but normalizing the rotation representation at each iteration using no crossproduct matrix normalization which improved direction errors but worsened length errors over the one crossproduct form Solid line - vector representation dotted line - matrix dashed line - quaternion (a) Length error of the result vector (b) Direction error

Figure 14 About Here

Figure 14 Case RSl As for case lSI (no normalization of intermediate representations) but using a sequence of random rotations There is no difference in performance with the different matrix normalizations no crossproduct matrix normalization Solid line - vector representation dotted line - matrix dashed line - quaternion (a) Length error of the result vector (first 100 values the rest are similar) (b) Direction error The rather confusing results at least have some clear qualitative differences

ations but more work is needed to tell if this is a repeatable effect In any event it seems clear that the repeated normalizations are certainly not improving matters much in this case over case lSI

The main conclusion here is that the cost of normalizing quaternion or matrix represhysentations after every rotation in a repeated sequence may not repaid in results and in fact may be actively harmful The vector representation seems to do best of the three on length error and worst on direction error More work (analytic or experimental) would be needed to clarify the performance ranks or even to recommend or disqualify a representation for this task

45 Concatenated Random Rotations

Here a sequence of random rotations is generated as described earlier analogous to the sequence of identical rotations in the last experiment They are applied cumulatively to an input vector In each case computation is with 10 bits precision under rounding The cases of a single final normalization and normalizing every intermediate representation (called cases RSI and RSN) were investigated Both the no - and one crossproduct forms were investigated The results comparable to those in cases lSI and ISN (Figs 12 and 13) are shown in Figs 14 and 15

The results indicate that the vector representation remains better than the other two for length errors but for direction errors the results are unclear It is hard to draw clear distinctions either between the normalization methods or the representations Again the form of matrix normalization has some annoying effects which motivate the next section

18

- -

lIeQ YAM La1rIL0~

uo3 lIeQ YAMIlIr1rIL O~

UHl2

100

Figure 13(a) Figure 13(b)

aMYLaEmO _ --Q MY1lIr1rIL0_

IOHl3

Figure 14(a) Figure 14(b)

Figure 15 About Here

Figure 15 Case RSN As for case RSI but normalizing all intermediate representations Solid line - vector representation dotted line - matrix dashed line - quaternion (a) Length error using one crossproduct matrix normalization (b) Direction error usin one crossproduct form (c) Direction error using no crossproduct form

~Case ~ I-vec I I-mat I U vee I Umat I RaTr L - 0006225 0005026 - 0001181 0003369

D - 0001787 0004578 - 0001094 0002307 RaRd L - -0000310 -0000225 - 0001087 0001263

D - 0000965 0001180 - 0000542 0000759 lSI L 0000276 -0000260 -0000262 0000466 0000857 0001648 100 D 0003694 0004063 0001435 0001705 0002298 0001123 ISN L 0000137 -0001772 -0000811 0000533 0001106 0001312 100 D 0005646 0004844 0001606 0002786 0002872 0001045 RSI L -0000252 -0000162 -0000157 0000521 0001612 0002022 100 D 0003710 0005854 0006065 0002150 0003686 0006065 RSN L 0000164 -0000602 -0001097 0000473 0000703 0001666 100 OX

D 0002269 0009830 0005960 0000874 0004230 0002236 D 0002269 0005187 0005960 0000874 0002248 0002236

Table 2 Statistics for length (L) and direction (D) errors for some of the cases illustrated in this section Cases RaTr and RaRd are the independent random rotations lSI and ISN are the iterative sequences (one normalization and normalization of each intermediate representation) and RSI and RSN the random sequences Only the first 100 data values are used from the iterated rotations to diminish the effect of longer-term trends The second line RSN is with no crossproduct normalization (only affects the matrix representation )

46 Statistical Summary

Table 2 summarizes some of the data from the cases so far

47 Matrix Normalization

Some experiments explored the performance of the three types of matrix normalization (no one or two cross products) in the context of iterated identical and random rotations

The experiments consist of two iterated identical rotations (Figs 16 17) under roundshying and two iterated random rotation sequences under truncation and rounding Fig 18 gives the direction errors for two sequences of random rotations applied to an input veeshy

19

IMSeq Q LMI EfII to NllnnEwy And

003

Figure 15(a)

IMSeq Q NVCirErn toilia NllnnEwy And

- shy ~

bull I bullbullbull

to-G2

I

0000 4-----------__------------shyo tOO 200

Figure 15(b)

QvOirEfta to NllnnEwy And

Figure 15(c)

Figure 16 About Here

Figure 16 Iterated identical rotations of the unit vector in direction (1 1 -1) by 1 radian about the axis (111) computed to 10 bits under rounding Solid line - no cross products Dotted line - one cross product Dashed line - two cross products (a) Length errors (b) Direction errors

Figure 17 About Here

Figure 17 As for previous figure but the rotation is of the unit vector in direction (123) by iterated rotations of 2 radians about the axis (3-2-1)

tor In no case is there significant difference between the performances on the length error at 200 iterations but the one- or two crossproduct forms significantly outperform the no crossproduct form over 400 iterations since the latter exhibits systematically increasing negative length error and the former do not

For direction error few conclusions can be drawn about relative performance under iterated identical rotation sequences It is perhaps surprising that the no crossproduct form usually performs here at least as well as the other two over a small number of iterations and that for random sequences there is no clear winner between normalization type even approximating with truncation Thus the symptoms noticed in the last two sections involving long-term relative effects the normalization form were not noticed over 200 iterations The one- and two crossproduct forms may be more stable over long iteration but the issue of optimal normalization remains open

48 Average Error of Concatenated Random Rotations

Which representation is best or worst To wind up this overlong exercise in trying to derive structure from noise let us consider average behavior for the task of random rotation sequences (perhaps of more general interested than iterated identical rotations) Technical decisions on normalization are based on the conclusions of previous sections approximation is by rounding with normalization only after the last rotation Matrix normalization is done with the one crossproduct method Each run is a single experiment like that of Section 45 only performed (with the same data) at a number of different precisions

Such a set of runs yields length and direction error surfaces plotted against the precision

Figure 18 About Here

Figure 18 Direction errors only for a sequence of random rotations applied to an input vector Legend as in previous two figures (a) Truncation to 10 bits (b) Rounding to 10 bits

20

___IAIlElf_

shy ___DIrE

Imiddotmiddot

Figure 16(a) Figure 16(b)

OO-G2

0000 fo-------~-------~100 200

Figure 17(a) Figure 17(b)

___DlrElfT

0000 +o-------~----------100

Figure 18(a) Figure 18(b)

Figure 19(a) Figure 19(b)

Figure 19(c) Figure 19(d)

~ Rep ~ Repeated Rot IRandom Seq

Vec(Mat) (9N+17) (6N+13)+ 1sc 26N 19N+ Nsc Vec(Quat) (18N+4) (12N)+ 1sc 22N 12N+ Nsc Quat (16N+22) (12N+12)+ 1sc (20N+18) (12N+12)+ Nsc Matrix (27N + 26) (18N + 19)+ 1sc (44N+9) (31N+6)+ Nsc

Table 4 Operation counts for N-long rotation sequences The long form of matrix multishyplication is used here Vec(Mat) is vector representation with intermediate transformation by matrices Vec(Quat) with transformation by quaternion method Multiplications adshyditions and sine-cosine pairs are denoted by + sc etc

matrix transformation) since matrix-vector multiplication costs one third of matrix-matrix multiplication

410 Conclusions and Questions

The conclusions we can draw at this point are the following

1 Numerical simulation of variable precision arithmetic has yielded some tentative conshyclusions but in many interesting cases they are rather delicate and depend on niceties of the implementation Accurate performance predictions may only be possible with faithful simulation of the particular hardware and software at issue

2 Practical differences in numerical accuracy tend to disappear at reasonable precishysions After 200 iterations of a rotation 53 bits of mantissa precision (the double representation in the C language) yields no error to 10 decimal places With 24 bits (the float representation) effects were noticeable in the 5th to 7th decimal place with all methods

3 Normalization is vital before quaternion or matrix representations are applied to vecshytors but not as important during the composition of representations and the extra numerical computation of frequent normalization may actually decrease accuracy in the final result

4 The vector (with matrix transformation) and matrix representations for rotation sequences seem to outperform the quaternion for accuracy

5 the vector (with matrix transformation) representation for rotation sequences with a single normalization of vector length at the end is most computationally efficient and has least length error on the average

6 Averaged over many trials the difference in accuracy of the three methods amounts only to about a single bit of precision for a small number of rotation compositions (say 25 or fewer)

7 Matrices are the least computationally efficient on rotation sequences

22

There are also some remaining basic questions:

1. What is a reasonable analytic approach to the numerical accuracy problem for interesting rotational tasks (say, sequences of random rotations under rounding)?

2. How should optimal matrix or quaternion normalization methods be defined and constructed?

5 Discussion

The area of robust computational geometry is becoming popular, with an annual international conference and increasingly sophisticated theoretical formulations. Very little of that work (if any) is related to rotation algorithms. Symbolic algebra systems can help with analysis [4]. A good treatment of computer arithmetic and rounding appears in [15], with an application to detecting line intersection. Serious qualitative representational problems arise when issues of topology (line connectivity) are at stake, and much of the work is on computing line intersection accurately with finite-precision arithmetic [19]. For instance, Milenkovic's analysis is based on facts about line intersection, such as the number of extra bits that are necessary to guarantee a specific precision of answer. Some of these facts are surprising: it takes at least three times the input precision to calculate a line intersection location to the same precision [11]. Robust polygon and polyhedron intersection algorithms, as well as various line-sweeping algorithms, are based on accurate line intersection algorithms [8].

Several points of comparison between rotation parameterizations and representations have been raised in the preceding sections. It is perhaps worth repeating that little evidence for dramatic differences in numerical accuracy has emerged, though the evidence might tend to favor matrices slightly and to condemn the vector representation.

One of the common applications of rotational transforms is to affect several vectors at once, either in the context of transforming a set of points around (as in graphics) or in handling coordinate systems represented as n points (n - 1 of them points at infinity representing axis directions, and one representing the origin of coordinates, in a homogeneous coordinate system). In the first case the repeated application of the same transform will be more economical if the work is done in the matrix representation. In the second the same may well be true, but of course the whole point of homogeneous coordinates is to make the projective (a fortiori, affine) transforms into linear ones implemented with matrix multiplication. This means, for example, that coordinate systems, rigid transformations including translation and rotation, and point projective transforms (simple camera models) can all be represented as matrices and composed with matrix multiplication, and that the operation of such transforms on vectors is simply matrix-vector multiplication. In this context homogeneous coordinates by definition require the matrix representation.
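As a minimal sketch of this point (the function names and the example transform are mine), a rigid transform in homogeneous coordinates rotates directions (w = 0) but rotates and translates points (w = 1), all through one matrix product:

```python
import math

def homogeneous_rot_z(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def homogeneous_trans(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def mat_mul(A, B):
    # 4x4 matrix product: rigid transforms compose by multiplication
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(4)) for i in range(4))

# Rigid transform: rotate 90 degrees about Z, then translate by (1, 0, 0).
T = mat_mul(homogeneous_trans(1, 0, 0), homogeneous_rot_z(math.pi / 2))

point = (1.0, 0.0, 0.0, 1.0)      # w = 1: affected by the translation
direction = (1.0, 0.0, 0.0, 0.0)  # w = 0: point at infinity, rotated only

p = apply(T, point)      # close to (1, 1, 0, 1)
d = apply(T, direction)  # close to (0, 1, 0, 0)
```

A coordinate frame stored as one origin point and three directions at infinity is transformed by exactly the same matrix-vector products.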

If the rotation at issue is a general one (say, specified in conic (φ, n) parameters), then conversion either to the quaternion or matrix form is easy. Thus the representations in


this case do not dictate the semantics, or how one wants to think about the rotation. If the rotation is specified in terms of rotations about (X, Y, Z) coordinate axes, then the matrix representation is easily and robustly constructed.
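Both conversions are indeed short. A sketch (my notation: rotation by angle phi about a unit axis n) uses Rodrigues' formula for the matrix and the half-angle form for the quaternion, and the two can be checked against each other:

```python
import math

def axis_angle_to_matrix(phi, n):
    # Rodrigues' formula: R = I + sin(phi) K + (1 - cos(phi)) K^2
    x, y, z = n
    c, s, t = math.cos(phi), math.sin(phi), 1.0 - math.cos(phi)
    return ((t*x*x + c,   t*x*y - s*z, t*x*z + s*y),
            (t*x*y + s*z, t*y*y + c,   t*y*z - s*x),
            (t*x*z - s*y, t*y*z + s*x, t*z*z + c))

def axis_angle_to_quat(phi, n):
    # Unit quaternion (cos(phi/2), sin(phi/2) n)
    h = 0.5 * phi
    return (math.cos(h),) + tuple(math.sin(h) * c for c in n)

def quat_mul(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def quat_rotate(q, v):
    w, x, y, z = q
    p = quat_mul(quat_mul(q, (0.0,) + tuple(v)), (w, -x, -y, -z))
    return p[1:]

phi, n = 1.0, (1/3, 2/3, 2/3)   # a unit axis
v = (0.5, -1.0, 2.0)

R = axis_angle_to_matrix(phi, n)
vm = tuple(sum(R[i][j] * v[j] for j in range(3)) for i in range(3))
vq = quat_rotate(axis_angle_to_quat(phi, n), v)
# vm and vq agree to machine precision
```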

Real-time applications involving concatenating rotations and applying them to a small number of vectors (perhaps one: the position of a robot hand) need detailed analysis to see which representation is the most efficient. Similarly for more complex geometric calculations.

A common application of rotation representations in vision is to express the motion of the observer. Often the rotational motions, for mechanical or mathematical reasons, are expressed as rotations about some fixed axes, which in practice are usually orthogonal. Examples are the motions induced by pan and tilt camera platforms, or the expression of egomotion in terms of rotations about the observer's local X, Y, Z axes. In this case the matrix representation is the clear choice, to maximize efficiency and intuition.

Reconstructing the rotation that transformed known points is a useful vision task that requires fitting data to a model of the transformation. The straightforward route, followed since Roberts' thesis [17], is to do a linear least-squares fit to the transform's matrix representation. This approach is perhaps risky, since the matrix elements are not independent, and the error being minimized thus may not be the one desired. The alternative of using conic parameters has been successfully used [12] and may well be a better idea. Recently, more sophisticated Kalman filter methods have been used to derive motion from point matches through time [5]. Model-matching with constraints allows a model to be matched to a class of objects, and promising approaches are emerging that use hybrid methods of symbolic algebra (implementing polynomial triangulation to get explicit solutions for known constraints) followed by root-finding and sophisticated minimization techniques to search for minimum-error solutions [13, 14].

A less common but important set of applications is to reason about and manipulate rotations themselves. For example, one may want to sample rotation space evenly [9] or to create smoothly interpolated paths in rotation space between a given set of rotations [16]. In these cases the Euler-Rodrigues parameters are the most convenient.

Another example is the long-standing use of quaternions in the neurophysiological literature to describe eye kinematics [22]. Models of human three-dimensional oculomotor performance, especially for the vestibulo-ocular reflex, and feedback models of saccades, involve computations on rotations that are much more stable when carried out in conic or Euler-Rodrigues parameters using quaternion arithmetic. Subtractive feedback laws adequate for simple (left or right) eye movements do not generalize to arbitrary rotational axes. The quaternion model uses multiplicative feedback, explains human performance better than similar models based on rotation matrices, and leads to explicit predictions, such as a four-channel saccade computation [21].


6 Bibliography

[1] S. L. Altmann. Rotations, Quaternions, and Double Groups. Clarendon Press, Oxford, 1986.

[2] C. M. Brown. Gaze controls with interactions and delays. In DARPA Image Understanding Workshop (submitted to IEEE-TSMC), pages 200-218, May 1989.

[3] C. M. Brown. Prediction in gaze and saccade control. Technical Report OUEL 1771/89 (UR TR-295), Oxford University Dept. of Engineering Science (U. Rochester Computer Science Dept.), May 1989.

[4] C. I. Connolly, D. Kapur, J. L. Mundy, and R. Weiss. GeoMeter: A system for modeling and algebraic manipulation. In Proceedings, DARPA Image Understanding Workshop, pages 797-804, May 1989.

[5] O. D. Faugeras, F. Lustman, and G. Toscani. Motion and structure from motion from point and line matches. In Proceedings, International Conference on Computer Vision, pages 25-34, June 1987.

[6] J. D. Foley and A. Van Dam. Fundamentals of Interactive Computer Graphics. Addison-Wesley, 1982.

[7] W. R. Hamilton. On quaternions; or on a new system of imaginaries in algebra. Philosophical Magazine, 3rd Ser., 25:489-495, 1844.

[8] C. Hoffmann and J. Hopcroft. Towards implementing robust geometric computations. In Proceedings, ACM Symposium on Computational Geometry, 1988.

[9] Berthold K. P. Horn. Robot Vision. MIT Press/McGraw-Hill, 1986.

[10] K. Kanatani. Group Theory in Computer Vision. Springer Verlag, 1989.

[11] V. J. Milenkovic. Robust geometric computations for vision and robotics. In Proceedings, DARPA Image Understanding Workshop, pages 764-773, May 1989.

[12] J. L. Mundy. Private communication, August 1989.

[13] J. L. Mundy. The application of symbolic algebra techniques to object modeling. Lecture, University of Oxford, August 1989.

[14] J. L. Mundy. Symbolic representation of object models. In preparation, August 1989.

[15] T. Ottmann, G. Thiemt, and C. Ullrich. Numerical stability of geometric algorithms. In Proceedings, ACM Symposium on Computational Geometry, 1987.

[16] K. S. Roberts, G. Bishop, and S. K. Ganapathy. Smooth interpolation of rotational motion. In Computer Vision and Pattern Recognition 1988, pages 724-729, June 1988.

[17] L. G. Roberts. Machine perception of three-dimensional solids. In J. T. Tippett et al., editors, Optical and Electro-Optical Information Processing, pages 159-197. MIT Press, 1968.

[18] O. Rodrigues. Des lois géométriques qui régissent les déplacements d'un système solide dans l'espace, et de la variation des coordonnées provenant de ses déplacements considérés indépendamment des causes qui peuvent les produire. Journal de Mathématiques Pures et Appliquées, 5:380-440, 1840.

[19] D. Salesin, J. Stolfi, and L. Guibas. Epsilon geometry: building robust algorithms from imprecise computations. In Proceedings, ACM Symposium on Computational Geometry, 1989.

[20] R. Taylor. Planning and execution of straight line manipulator trajectories. In J. M. Brady, J. M. Hollerbach, T. L. Johnson, T. Lozano-Perez, and M. T. Mason, editors, Robot Motion: Planning and Control. MIT Press, 1982.

[21] D. Tweed and T. Vilis. Implications of rotational kinematics for the oculomotor system in three dimensions. Journal of Neurophysiology, 58(4):832-849, 1987.

[22] G. Westheimer. Kinematics of the eye. Journal of the Optical Society of America, 47:967-974, 1957.



Figure 13 About Here

Figure 13: Case ISN. As for the previous figure, but normalizing the rotation representation at each iteration, using no-crossproduct matrix normalization, which improved direction errors but worsened length errors over the one-crossproduct form. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector. (b) Direction error.

Figure 14 About Here

Figure 14: Case RS1. As for case IS1 (no normalization of intermediate representations), but using a sequence of random rotations. There is no difference in performance with the different matrix normalizations; no-crossproduct matrix normalization is shown. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error of the result vector (first 100 values; the rest are similar). (b) Direction error. The rather confusing results at least have some clear qualitative differences.

ations, but more work is needed to tell if this is a repeatable effect. In any event it seems clear that the repeated normalizations are certainly not improving matters much in this case over case IS1.

The main conclusion here is that the cost of normalizing quaternion or matrix representations after every rotation in a repeated sequence may not be repaid in results, and in fact may be actively harmful. The vector representation seems to do best of the three on length error and worst on direction error. More work (analytic or experimental) would be needed to clarify the performance ranks, or even to recommend or disqualify a representation for this task.

4.5 Concatenated Random Rotations

Here a sequence of random rotations is generated as described earlier, analogous to the sequence of identical rotations in the last experiment. They are applied cumulatively to an input vector. In each case computation is with 10 bits of precision under rounding. The cases of a single final normalization and of normalizing every intermediate representation (called cases RS1 and RSN) were investigated. Both the no- and one-crossproduct forms were investigated. The results, comparable to those in cases IS1 and ISN (Figs. 12 and 13), are shown in Figs. 14 and 15.
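The reduced-precision arithmetic can be modeled in software. The sketch below is my reconstruction, not the paper's simulator: every intermediate product and sum is rounded to a given number of mantissa bits, and a rotation is applied cumulatively under that regime.

```python
import math

def chop(x, bits=10):
    # Round x to `bits` bits of mantissa (rounding, not truncation).
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)              # x = m * 2**e with 0.5 <= |m| < 1
    return math.ldexp(round(m * 2**bits) / 2**bits, e)

def mat_vec_lowprec(M, v, bits=10):
    # 3x3 matrix-vector product with every multiply and add rounded.
    out = []
    for i in range(3):
        acc = 0.0
        for j in range(3):
            acc = chop(acc + chop(M[i][j] * v[j], bits), bits)
        out.append(acc)
    return tuple(out)

# A rotation by 1 radian about Z, with its entries stored at 10 bits.
c, s = chop(math.cos(1.0)), chop(math.sin(1.0))
R = ((c, -s, 0.0), (s, c, 0.0), (0.0, 0.0, 1.0))

v = (1.0, 0.0, 0.0)
for _ in range(100):                  # 100 cumulative low-precision rotations
    v = mat_vec_lowprec(R, v)

# Drift of the result vector's length away from 1 under 10-bit arithmetic.
length_error = abs(math.sqrt(sum(x * x for x in v)) - 1.0)
```

The same harness, run with a stream of random rotations instead of one repeated rotation, reproduces the shape of the experiments in this section.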

The results indicate that the vector representation remains better than the other two for length errors, but for direction errors the results are unclear. It is hard to draw clear distinctions either between the normalization methods or between the representations. Again the form of matrix normalization has some annoying effects, which motivate the next section.


Figure 13(a) Figure 13(b)

Figure 14(a) Figure 14(b)

Figure 15 About Here

Figure 15: Case RSN. As for case RS1, but normalizing all intermediate representations. Solid line - vector representation; dotted line - matrix; dashed line - quaternion. (a) Length error using one-crossproduct matrix normalization. (b) Direction error using the one-crossproduct form. (c) Direction error using the no-crossproduct form.

Case      |   µ-vec   |   µ-mat   |  µ-quat   |   σ-vec   |   σ-mat   |  σ-quat
RaTr    L |     -     |  0.006225 |  0.005026 |     -     |  0.001181 |  0.003369
        D |     -     |  0.001787 |  0.004578 |     -     |  0.001094 |  0.002307
RaRd    L |     -     | -0.000310 | -0.000225 |     -     |  0.001087 |  0.001263
        D |     -     |  0.000965 |  0.001180 |     -     |  0.000542 |  0.000759
IS1 100 L |  0.000276 | -0.000260 | -0.000262 |  0.000466 |  0.000857 |  0.001648
        D |  0.003694 |  0.004063 |  0.001435 |  0.001705 |  0.002298 |  0.001123
ISN 100 L |  0.000137 | -0.001772 | -0.000811 |  0.000533 |  0.001106 |  0.001312
        D |  0.005646 |  0.004844 |  0.001606 |  0.002786 |  0.002872 |  0.001045
RS1 100 L | -0.000252 | -0.000162 | -0.000157 |  0.000521 |  0.001612 |  0.002022
        D |  0.003710 |  0.005854 |  0.006065 |  0.002150 |  0.003686 |  0.006065
RSN 100 L |  0.000164 | -0.000602 | -0.001097 |  0.000473 |  0.000703 |  0.001666
        D |  0.002269 |  0.009830 |  0.005960 |  0.000874 |  0.004230 |  0.002236
        D |  0.002269 |  0.005187 |  0.005960 |  0.000874 |  0.002248 |  0.002236

Table 2: Statistics (mean µ and standard deviation σ) for length (L) and direction (D) errors for some of the cases illustrated in this section. Cases RaTr and RaRd are the independent random rotations; IS1 and ISN are the iterated sequences (one final normalization, and normalization of each intermediate representation); RS1 and RSN are the random sequences. Only the first 100 data values are used from the iterated rotations, to diminish the effect of longer-term trends. The second D line for RSN is with no-crossproduct normalization (which only affects the matrix representation).

4.6 Statistical Summary

Table 2 summarizes some of the data from the cases so far.

4.7 Matrix Normalization

Some experiments explored the performance of the three types of matrix normalization (no, one, or two cross products) in the context of iterated identical and random rotations.
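The three forms are not spelled out in this section, so the following is one plausible reconstruction (my reading and my function names): "no crossproduct" merely rescales the rows to unit length, while the crossproduct forms also restore orthogonality between rows.

```python
import math

def unit(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize_none(R):
    # "No crossproduct": rescale each row; rows may stay non-orthogonal.
    return [unit(r) for r in R]

def normalize_one(R):
    # "One crossproduct": Gram-Schmidt row 2 against row 1, then
    # rebuild row 3 with a single cross product.
    r1 = unit(R[0])
    d = sum(a * b for a, b in zip(R[1], r1))
    r2 = unit(tuple(b - d * a for a, b in zip(r1, R[1])))
    return [r1, r2, cross(r1, r2)]

def normalize_two(R):
    # "Two crossproducts": r3 = r1 x r2, then r2 = r3 x r1.
    r1 = unit(R[0])
    r3 = unit(cross(r1, R[1]))
    return [r1, cross(r3, r1), r3]

# A rotation matrix perturbed as if by accumulated roundoff:
c, s = math.cos(0.3), math.sin(0.3)
R = [(c + 1e-3, -s, 0.0), (s, c - 2e-3, 1e-3), (1e-3, 0.0, 1.0)]

def max_residual(M):
    # Largest entry of |M M^T - I|: remaining non-orthonormality.
    return max(abs(sum(M[i][k] * M[j][k] for k in range(3))
                   - (1.0 if i == j else 0.0))
               for i in range(3) for j in range(3))
```

Under this reading the crossproduct forms return exactly orthonormal rows, while the no-crossproduct form leaves a residual of the order of the perturbation, which matches the qualitative behavior reported below.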

The experiments consist of two iterated identical rotations (Figs. 16, 17) under rounding, and two iterated random rotation sequences under truncation and rounding. Fig. 18 gives the direction errors for two sequences of random rotations applied to an input vec-


Figure 15(a)

Figure 15(b)

Figure 15(c)

Figure 16 About Here

Figure 16: Iterated identical rotations of the unit vector in direction (1, 1, -1) by 1 radian about the axis (1, 1, 1), computed to 10 bits under rounding. Solid line - no cross products; dotted line - one cross product; dashed line - two cross products. (a) Length errors. (b) Direction errors.

Figure 17 About Here

Figure 17: As for the previous figure, but the rotation is of the unit vector in direction (1, 2, 3) by iterated rotations of 2 radians about the axis (3, -2, -1).

tor. In no case is there a significant difference between the performances on length error at 200 iterations, but the one- and two-crossproduct forms significantly outperform the no-crossproduct form over 400 iterations, since the latter exhibits a systematically increasing negative length error and the former do not.

For direction error, few conclusions can be drawn about relative performance under iterated identical rotation sequences. It is perhaps surprising that the no-crossproduct form usually performs here at least as well as the other two over a small number of iterations, and that for random sequences there is no clear winner among the normalization types, even when approximating with truncation. Thus the symptoms noticed in the last two sections, involving long-term relative effects of the normalization form, were not noticed over 200 iterations. The one- and two-crossproduct forms may be more stable over long iteration, but the issue of optimal normalization remains open.

4.8 Average Error of Concatenated Random Rotations

Which representation is best, or worst? To wind up this overlong exercise in trying to derive structure from noise, let us consider average behavior for the task of random rotation sequences (perhaps of more general interest than iterated identical rotations). Technical decisions on normalization are based on the conclusions of previous sections: approximation is by rounding, with normalization only after the last rotation. Matrix normalization is done with the one-crossproduct method. Each run is a single experiment like that of Section 4.5, only performed (with the same data) at a number of different precisions.

Such a set of runs yields length and direction error surfaces plotted against the precision

Figure 18 About Here

Figure 18: Direction errors only, for a sequence of random rotations applied to an input vector. Legend as in the previous two figures. (a) Truncation to 10 bits. (b) Rounding to 10 bits.


Figure 16(a) Figure 16(b)

Figure 17(a) Figure 17(b)

Figure 18(a) Figure 18(b)

Figure 19(a) Figure 19(b)

Figure 19(c) Figure 19(d)


Table 4 Operation counts for N-long rotation sequences The long form of matrix multishyplication is used here Vec(Mat) is vector representation with intermediate transformation by matrices Vec(Quat) with transformation by quaternion method Multiplications adshyditions and sine-cosine pairs are denoted by + sc etc

matrix transformation) since matrix-vector multiplication costs one third of matrix-matrix multiplication

410 Conclusions and Questions

The conclusions we can draw at this point are the following

1 Numerical simulation of variable precision arithmetic has yielded some tentative conshyclusions but in many interesting cases they are rather delicate and depend on niceties of the implementation Accurate performance predictions may only be possible with faithful simulation of the particular hardware and software at issue

2 Practical differences in numerical accuracy tend to disappear at reasonable precishysions After 200 iterations of a rotation 53 bits of mantissa precision (the double representation in the C language) yields no error to 10 decimal places With 24 bits (the float representation) effects were noticeable in the 5th to 7th decimal place with all methods

3 Normalization is vital before quaternion or matrix representations are applied to vecshytors but not as important during the composition of representations and the extra numerical computation of frequent normalization may actually decrease accuracy in the final result

4 The vector (with matrix transformation) and matrix representations for rotation sequences seem to outperform the quaternion for accuracy

5 the vector (with matrix transformation) representation for rotation sequences with a single normalization of vector length at the end is most computationally efficient and has least length error on the average

6 Averaged over many trials the difference in accuracy of the three methods amounts only to about a single bit of precision for a small number of rotation compositions (say 25 or fewer)

7 Matrices are the least computationally efficient on rotation sequences

22

5

There are also some remaining basic questions

1 What is a reasonable analytic approach to the numerical accuracy problem for inshyteresting rotational tasks (say sequences of random rotations under rounding)

2 How should optimal matrix or quaternion normalization methods be defined and constructed

Discussion

The area of robust computational geometry is becoming popular with an annual intershynational conference and increasingly sophisticated theoretical formulations Very little of that work (if any) is related to rotation algorithms Symbolic algebra systems can help with analysis [4] A good treatment of computer arithmetic and rounding appears in [15] with an application to detecting line intersection Serious qualitative representational problems arise when issues of topology (line connectivity) are at stake and much of the work is on computing line intersection accurately with finite precision arithmetic [19] For instance Milenkovics analysis is based on facts about line intersection such as the numshyber of extra bits that are necessary to guarantee a specific precision of answer Some of these facts are surprising it takes at least three times the input precision to calculate a line intersection location to the same precision [11] Robust polygon and polyhedron intersection algorithms as well as various line-sweeping algorithms are based on accurate line intersection algorithms [8]

Several points of comparison between rotation parameterizations and representations have been raised in the preceding sections It is perhaps worth repeating that little evshyidence for dramatic differences in numerical accuracy has emerged though the evidence might tend to favor matrices slightly and to condemn the vector representation

One of the common applications of rotational transforms is to affect several vectors at once either in the context of transforming a set of points around (as in graphics) or in handling coordinate systems represented as n points (n - 1 of them at points at infinity representing axis directions and one representing the origin of coordinates in a homogeshyneous coordinate system) In the first case the repeated application of the same transform will be more economical if the work is done in the matrix representation In the second the same may well be true but of course the whole point of homogeneous coordinates is to make the projective a fortiori affine transforms into linear ones implemented with matrix multiplication This means for example that coordinate systems rigid transformations inshycluding translation and rotation and point projective transforms (simple camera models) can a1l be represented as matrices and composed with matrix multiplication and that the operations of such transforms on vectors is simply matrix-vector multiplication In this context homogeneous coordinates by definition require the matrix representation

If the rotation at issue is a general one (say specified in conic (4) n) parameters) then conversion either to the quaternion or matrix form is easy Thus the representations in

23

this case do not dictate the semantics or how one wants to think about the rotation If the rotation is specified in terms of rotations about (X Y Z) coordinate axes then the matrix representation is easily and robustly constructed

Real-time applications involving concatenating rotations and applying them to a small number of vectors (perhaps one the position of a robot hand) need detailed analysis to see which representation is the most efficient Similarly for more complex geometric calculations

A common application of rotation representations in vision is to express the motion of the observer Often the rotational motions for mechanical or mathematical reasons are expressed as rotations about some fixed axes which in practice are usually orthogonal Examples are the motions induced by pan and tilt camera platforms or the expression of egomotion in terms of rotations about the observers local X Y Z axes In this case the matrix representation is the clear choice to maximize efficiency and intuition

Reconstructing the rotation that transformed known points is a useful vision task that requires fitting data to a model of the transformation The straightforward route followed since Roberts thesis [17] is to do a linear least squares fit to the transforms matrix representation This approach is perhaps risky since the matrix elements are not independent and the error being minimized thus may not be the one desired The alternative of using conic parameters has been successfully used [12] and may well be a better idea Recently more sophisticated Kalman filter methods have been used to derive motion from point matches through time [5] Model-matching with constraints allows a model to be matched to a class of objects and promising approaches are emerging that use hybrid methods of symbolic algebra implementing polynomial triangulation to get explicit solutions for known constraints followed by root-finding and sophisticated minimization techniques to search for minimum-error solutions [1314]

A less common but important set of applications is to reason about and manipulate rotations themselves For example one may want to sample rotation space evenly [9] or to create smoothly interpolated paths in rotation space between a given set of rotations [16] In these cases the Euler-Rodrigues parameters are the most convenient

Another example is the use since 1932 of quaternions in the neurophysiological litshyerature to describe eye kinematics [22] Models of human three-dimensional oculomotor performance especially for the vestibula-ocular reflex and feedback models of saccades involve computations on rotations that are much more stable when carried out in conic or Euler-Rodrigues parameters using quaternion arithmetic Subtractive feedback laws adequate for simple (left or right) eye movements do not generalize to arbitrary rotational axes The quatemion model uses multiplicative feedback explains human performance better than similar models based on rotation matrices and leads to explicit predictions such as a four-channel saccade computation [21]

24

6 Bibliography

[1] S 1 Altmann Rotations Quaternions and Double Groups Clarendon Press Oxshyford 1986

[2] C M Brown Gaze controls with interactions and delays In DARPA Image Undershystanding Workshop Submitted IEEE-TSMC pages 200-218 May 1989

[3] C M Brown Prediction in gaze and saccade control Technical Report OUEL 177189 UR TR-295 Oxford University Dept Engg Science (U Rochester Compo Sci Dept) May 1989

[4] C I Connolly D Kapur J1 Mundy and R Weiss GeoMeter A system for modeling and algebraic manipulation In Proceedings DARPA Image Understanding Workshop pages 797-804 May 1989

[5] O D Faugeras F Lustman and G Toscani Motion and structure from motion from point and line matches In Proceedings International Conference on Computer Vision pages 25-34 June 1987

[6] J D Foley and A Van Dam Fundamentals of Interactive Computer Graphics Addison-Wesley 1982

[7] W R Hamilton On quaternions or on a new system of imaginaries in algebra Philosophical Magazine 3rd Ser 25489 - 495 1844

[8] C Hoffmann and J Hopcroft Towards implementing robust geometric computations In Proceedings ACM Symposium on Computational Geometry 1988

[9] Berthold KP Horn Robot Vision MIT-Press McGraw-Hill 1986

[10] K I Kanatani Group Theory in Computer Vision Springer Verlag 1989

[11] V J Milenkovic Robust geometric computations for vision and robotics In Proshyceedings DARPA Image Understanding Workshop pages 764-773 May 1989

[12] J 1 Mundy Private communication August 1989

[13] J 1 Mundy The application of symbolic algebra techniques to object modeling Lecture University of Oxford August 1989

[14] J 1 Mundy Symbolic representation of object models In Preparation August 1989

[15] T Ortmann G Thiemt and C Ullrich Numerical stability of geometric algorithms In Proceedings A CM Symposium on Computational Geometry 1987

[16] K S Roberts G Bishop and S K Ganapathy Smooth interpolation of rotational motion In Computer Vision and Pattern Rocognition 1988 pages 724-729 June 1988

[17] 1 G Roberts Machine perception of three-dimensional solids In J T Tippett et al editors Optical and electro-optical information processing pages 159-197 MIT Press 1968

25

[18] O Rodrigues Des lois geometriques qui regissent les deplacements dun systeme solide dans lespace et de la variation des coordonnees provenant de ses deplacements consideres independamment des causes qui peuvent les produire Journal de Mathemaiiques Puree et Appliquees 5380 - 440 1840

[19] D Salesin J Stolfi and L Guibas Epsilon geometry building robust algorithms from imprecise computations In Proceedings ACM Symposium on Computational Geometry 1989

[20] R Taylor Planning and execution of straight line manipulator trajectories In J M Brady J M Hollerbach T L Johnson T Lozano-Perez and M T Mason editors Robot Motion Planning and Control MIT Press 1982

[21] D Tweed and T Vilis Implications of rotational kinematics for the oculomotor system in three dimensions Journal of Neurophysiology 58(4)832-849 1987

[22] G Westheimer Kinematics for the eye Journal of the Optical Society of America 47967-974 1932

26

Page 27: Some Computational Properties of - Semantic Scholar · There are four main parameterizations of the rotation group SO(3). Two of them (rotation angle and axis, and the closely related

- -

lIeQ YAM La1rIL0~

uo3 lIeQ YAMIlIr1rIL O~

UHl2

100

Figure 13(a) Figure 13(b)

aMYLaEmO _ --Q MY1lIr1rIL0_

IOHl3

Figure 14(a) Figure 14(b)

Figure 15 About Here

Figure 15 Case RSN As for case RSI but normalizing all intermediate representations Solid line - vector representation dotted line - matrix dashed line - quaternion (a) Length error using one crossproduct matrix normalization (b) Direction error usin one crossproduct form (c) Direction error using no crossproduct form

~Case ~ I-vec I I-mat I U vee I Umat I RaTr L - 0006225 0005026 - 0001181 0003369

D - 0001787 0004578 - 0001094 0002307 RaRd L - -0000310 -0000225 - 0001087 0001263

D - 0000965 0001180 - 0000542 0000759 lSI L 0000276 -0000260 -0000262 0000466 0000857 0001648 100 D 0003694 0004063 0001435 0001705 0002298 0001123 ISN L 0000137 -0001772 -0000811 0000533 0001106 0001312 100 D 0005646 0004844 0001606 0002786 0002872 0001045 RSI L -0000252 -0000162 -0000157 0000521 0001612 0002022 100 D 0003710 0005854 0006065 0002150 0003686 0006065 RSN L 0000164 -0000602 -0001097 0000473 0000703 0001666 100 OX

D 0002269 0009830 0005960 0000874 0004230 0002236 D 0002269 0005187 0005960 0000874 0002248 0002236

Table 2 Statistics for length (L) and direction (D) errors for some of the cases illustrated in this section Cases RaTr and RaRd are the independent random rotations lSI and ISN are the iterative sequences (one normalization and normalization of each intermediate representation) and RSI and RSN the random sequences Only the first 100 data values are used from the iterated rotations to diminish the effect of longer-term trends The second line RSN is with no crossproduct normalization (only affects the matrix representation )

46 Statistical Summary

Table 2 summarizes some of the data from the cases so far

47 Matrix Normalization

Some experiments explored the performance of the three types of matrix normalization (no one or two cross products) in the context of iterated identical and random rotations

The experiments consist of two iterated identical rotations (Figs 16 17) under roundshying and two iterated random rotation sequences under truncation and rounding Fig 18 gives the direction errors for two sequences of random rotations applied to an input veeshy

19

IMSeq Q LMI EfII to NllnnEwy And

003

Figure 15(a)

IMSeq Q NVCirErn toilia NllnnEwy And

- shy ~

bull I bullbullbull

to-G2

I

0000 4-----------__------------shyo tOO 200

Figure 15(b)

QvOirEfta to NllnnEwy And

Figure 15(c)

Figure 16 About Here

Figure 16 Iterated identical rotations of the unit vector in direction (1 1 -1) by 1 radian about the axis (111) computed to 10 bits under rounding Solid line - no cross products Dotted line - one cross product Dashed line - two cross products (a) Length errors (b) Direction errors

Figure 17 About Here

Figure 17 As for previous figure but the rotation is of the unit vector in direction (123) by iterated rotations of 2 radians about the axis (3-2-1)

tor In no case is there significant difference between the performances on the length error at 200 iterations but the one- or two crossproduct forms significantly outperform the no crossproduct form over 400 iterations since the latter exhibits systematically increasing negative length error and the former do not

For direction error, few conclusions can be drawn about relative performance under iterated identical rotation sequences. It is perhaps surprising that the no-crossproduct form usually performs here at least as well as the other two over a small number of iterations, and that for random sequences there is no clear winner among normalization types, even approximating with truncation. Thus the symptoms noticed in the last two sections, involving long-term relative effects of the normalization form, were not noticed over 200 iterations. The one- and two-crossproduct forms may be more stable over long iteration, but the issue of optimal normalization remains open.
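The three normalization forms can be sketched as follows. The exact formulas are my reading of the terms "no, one, or two cross products"; the paper does not spell them out, so treat this as an illustrative assumption:

```python
# Sketch of three row-wise normalization forms for a drifted 3x3 rotation
# matrix (my interpretation of the paper's "no / one / two cross products"):
#   no cross products  - normalize each row independently
#   one cross product  - normalize rows 1 and 2, recompute row 3 = r1 x r2
#   two cross products - normalize row 1, r3 = unit(r1 x r2), then r2 = r3 x r1
import math

def _unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def _cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def normalize_rows(M):
    # "no crossproduct": rows become unit length but need not stay orthogonal
    return [_unit(r) for r in M]

def normalize_one_cross(M):
    r1, r2 = _unit(M[0]), _unit(M[1])
    return [r1, r2, _unit(_cross(r1, r2))]

def normalize_two_cross(M):
    # fully re-orthogonalizes: the result has orthonormal rows
    r1 = _unit(M[0])
    r3 = _unit(_cross(r1, M[1]))
    return [r1, _cross(r3, r1), r3]
```

Only the two-crossproduct form guarantees an orthonormal result; the one-crossproduct form leaves rows 1 and 2 as they were, which is exactly why the forms can behave differently over long iteration.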

4.8 Average Error of Concatenated Random Rotations

Which representation is best, or worst? To wind up this overlong exercise in trying to derive structure from noise, let us consider average behavior for the task of random rotation sequences (perhaps of more general interest than iterated identical rotations). Technical decisions on normalization are based on the conclusions of previous sections: approximation is by rounding, with normalization only after the last rotation. Matrix normalization is done with the one-crossproduct method. Each run is a single experiment like that of Section 4.5, only performed (with the same data) at a number of different precisions.

Such a set of runs yields length and direction error surfaces plotted against the precision.
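These reduced-precision runs presuppose a way to compute with a b-bit mantissa in software. The paper does not describe its mechanism; a minimal sketch, assuming round-to-nearest on the mantissa via frexp/ldexp, might look like:

```python
# Minimal sketch of simulating b-bit mantissa arithmetic (an assumption
# about the experimental setup; the text says only "rounding" vs "truncation").
import math

def round_to_bits(x, bits):
    """Round x to the nearest value representable with a `bits`-bit mantissa."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)              # x = m * 2**e, with 0.5 <= |m| < 1
    return math.ldexp(round(m * (1 << bits)), e - bits)

def trunc_to_bits(x, bits):
    """Truncate x toward zero to a `bits`-bit mantissa."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)
    return math.ldexp(math.trunc(m * (1 << bits)), e - bits)
```

Applying one of these functions after every arithmetic operation gives the "10 bits under rounding/truncation" regimes used in the figures.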

Figure 18 About Here

Figure 18: Direction errors only, for a sequence of random rotations applied to an input vector. Legend as in the previous two figures. (a) Truncation to 10 bits. (b) Rounding to 10 bits.


Figure 16(a) Figure 16(b)

Figure 17(a) Figure 17(b)

Figure 18(a) Figure 18(b)

Figure 19(a) Figure 19(b)

Figure 19(c) Figure 19(d)

Rep       | Repeated Rot                | Random Seq
Vec(Mat)  | (9N+17)*, (6N+13)+, 1 sc    | 26N*, 19N+, N sc
Vec(Quat) | (18N+4)*, (12N)+, 1 sc      | 22N*, 12N+, N sc
Quat      | (16N+22)*, (12N+12)+, 1 sc  | (20N+18)*, (12N+12)+, N sc
Matrix    | (27N+26)*, (18N+19)+, 1 sc  | (44N+9)*, (31N+6)+, N sc

Table 4: Operation counts for N-long rotation sequences. The long form of matrix multiplication is used here. Vec(Mat) is the vector representation with intermediate transformation by matrices, Vec(Quat) with transformation by the quaternion method. Multiplications, additions, and sine-cosine pairs are denoted by *, +, and sc, respectively.

matrix transformation) since matrix-vector multiplication costs one third of matrix-matrix multiplication
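To make Table 4's ranking concrete, one can evaluate the multiplication counts at a specific sequence length. The helper name is mine; additions and sine-cosine pairs are omitted for brevity:

```python
# Multiplication counts from Table 4 for an N-long random rotation sequence
# applied to one vector (additions and sin-cos pairs not tallied here).
def mults_random_seq(N):
    return {
        "Vec(Mat)":  26 * N,
        "Vec(Quat)": 22 * N,
        "Quat":      20 * N + 18,
        "Matrix":    44 * N + 9,
    }

counts = mults_random_seq(10)
# The matrix representation does the most multiplications, consistent with
# the remark that a matrix-vector product costs a third of a matrix-matrix one.
```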

4.10 Conclusions and Questions

The conclusions we can draw at this point are the following:

1 Numerical simulation of variable precision arithmetic has yielded some tentative conclusions, but in many interesting cases they are rather delicate and depend on niceties of the implementation. Accurate performance predictions may only be possible with faithful simulation of the particular hardware and software at issue.

2 Practical differences in numerical accuracy tend to disappear at reasonable precisions. After 200 iterations of a rotation, 53 bits of mantissa precision (the double representation in the C language) yields no error to 10 decimal places. With 24 bits (the float representation), effects were noticeable in the 5th to 7th decimal place with all methods.

3 Normalization is vital before quaternion or matrix representations are applied to vectors, but not as important during the composition of representations, and the extra numerical computation of frequent normalization may actually decrease accuracy in the final result.

4 The vector (with matrix transformation) and matrix representations for rotation sequences seem to outperform the quaternion for accuracy.

5 The vector (with matrix transformation) representation for rotation sequences, with a single normalization of vector length at the end, is the most computationally efficient and has the least length error on average.

6 Averaged over many trials, the difference in accuracy of the three methods amounts to only about a single bit of precision for a small number of rotation compositions (say 25 or fewer).

7 Matrices are the least computationally efficient on rotation sequences.
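Conclusion 2 can be illustrated with a small experiment: iterate one rotation 200 times at double precision and at simulated single precision, and compare the length errors of the rotated vector. The axis, angle, and rounding model (round each product and each component sum to IEEE single via a struct round-trip) are my choices for this sketch, not the paper's exact setup:

```python
# Iterate one rotation 200 times in double and in simulated float32,
# then compare the length error of the rotated unit vector.
import math
import struct

def f32(x):
    # Round a Python float (IEEE double) to IEEE single precision.
    return struct.unpack("f", struct.pack("f", x))[0]

def rotation_matrix(axis, angle):
    # Rodrigues' formula for rotation by `angle` about `axis`.
    n = math.sqrt(sum(a * a for a in axis))
    x, y, z = (a / n for a in axis)
    c, s = math.cos(angle), math.sin(angle)
    C = 1.0 - c
    return [[c + x * x * C,     x * y * C - z * s, x * z * C + y * s],
            [y * x * C + z * s, c + y * y * C,     y * z * C - x * s],
            [z * x * C - y * s, z * y * C + x * s, c + z * z * C]]

def apply(M, v, rnd=lambda x: x):
    # Matrix-vector product, with each product and each row sum rounded.
    return [rnd(sum(rnd(M[i][j] * v[j]) for j in range(3))) for i in range(3)]

R = rotation_matrix((1.0, 1.0, 1.0), 1.0)
v64 = [1.0, 0.0, 0.0]
v32 = [1.0, 0.0, 0.0]
for _ in range(200):
    v64 = apply(R, v64)
    v32 = apply(R, v32, f32)

err64 = abs(math.sqrt(sum(x * x for x in v64)) - 1.0)
err32 = abs(math.sqrt(sum(x * x for x in v32)) - 1.0)
```

At 53 bits the length error stays far below 10 decimal places; at 24 bits it surfaces around the 5th to 7th place, in line with the conclusion above.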


There are also some remaining basic questions:

1 What is a reasonable analytic approach to the numerical accuracy problem for interesting rotational tasks (say, sequences of random rotations under rounding)?

2 How should optimal matrix or quaternion normalization methods be defined and constructed?

5 Discussion

The area of robust computational geometry is becoming popular, with an annual international conference and increasingly sophisticated theoretical formulations. Very little of that work (if any) is related to rotation algorithms. Symbolic algebra systems can help with analysis [4]. A good treatment of computer arithmetic and rounding appears in [15], with an application to detecting line intersection. Serious qualitative representational problems arise when issues of topology (line connectivity) are at stake, and much of the work is on computing line intersection accurately with finite precision arithmetic [19]. For instance, Milenkovic's analysis is based on facts about line intersection, such as the number of extra bits that are necessary to guarantee a specific precision of answer. Some of these facts are surprising: it takes at least three times the input precision to calculate a line intersection location to the same precision [11]. Robust polygon and polyhedron intersection algorithms, as well as various line-sweeping algorithms, are based on accurate line intersection algorithms [8].

Several points of comparison between rotation parameterizations and representations have been raised in the preceding sections. It is perhaps worth repeating that little evidence for dramatic differences in numerical accuracy has emerged, though the evidence might tend to favor matrices slightly and to condemn the vector representation.

One of the common applications of rotational transforms is to affect several vectors at once, either in the context of transforming a set of points around (as in graphics) or in handling coordinate systems represented as n points (n - 1 of them at points at infinity representing axis directions, and one representing the origin of coordinates, in a homogeneous coordinate system). In the first case the repeated application of the same transform will be more economical if the work is done in the matrix representation. In the second the same may well be true, but of course the whole point of homogeneous coordinates is to make the projective (a fortiori the affine) transforms into linear ones implemented with matrix multiplication. This means, for example, that coordinate systems, rigid transformations including translation and rotation, and point projective transforms (simple camera models) can all be represented as matrices and composed with matrix multiplication, and that the operation of such transforms on vectors is simply matrix-vector multiplication. In this context homogeneous coordinates by definition require the matrix representation.
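The homogeneous-coordinate point above can be sketched in a few lines: a rigid transform (R, t) becomes a single 4x4 matrix, points carry w = 1, and axis directions carry w = 0 (the function names are mine):

```python
# A rigid transform (R, t) as one 4x4 homogeneous matrix.  Points (w = 1)
# are rotated and translated; directions (w = 0) are only rotated.
def homogeneous(R, t):
    return [R[0] + [t[0]],
            R[1] + [t[1]],
            R[2] + [t[2]],
            [0.0, 0.0, 0.0, 1.0]]

def apply4(M, v):
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

# 90-degree rotation about Z, plus a translation along X.
R = [[0.0, -1.0, 0.0],
     [1.0,  0.0, 0.0],
     [0.0,  0.0, 1.0]]
T = homogeneous(R, [5.0, 0.0, 0.0])

point     = apply4(T, [1.0, 0.0, 0.0, 1.0])   # rotated and translated
direction = apply4(T, [1.0, 0.0, 0.0, 0.0])   # rotated only
```

Because the translation column multiplies only the w component, the same matrix machinery carries whole coordinate frames (origin plus axis directions) through a composition of transforms.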

If the rotation at issue is a general one (say, specified in conic (φ, n) parameters), then conversion either to the quaternion or the matrix form is easy. Thus the representations in this case do not dictate the semantics, or how one wants to think about the rotation. If the rotation is specified in terms of rotations about (X, Y, Z) coordinate axes, then the matrix representation is easily and robustly constructed.
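For concreteness, here is a sketch of both easy conversions from (axis, angle) parameters, with a check that the two resulting representations rotate a vector identically. The function names and the (w, x, y, z) component ordering are my choices, not the paper's:

```python
# Convert (axis, angle) to a unit quaternion and to a rotation matrix,
# and rotate a vector both ways.  Conventions: q = (w, x, y, z), v' = M v.
import math

def quat_from_axis_angle(axis, angle):
    n = math.sqrt(sum(a * a for a in axis))
    s = math.sin(angle / 2.0) / n
    return (math.cos(angle / 2.0), axis[0] * s, axis[1] * s, axis[2] * s)

def mat_from_quat(q):
    w, x, y, z = q
    return [[1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]]

def quat_rotate(q, v):
    # v' = q (0, v) q* expanded for a unit quaternion q.
    w, x, y, z = q
    a = -x*v[0] - y*v[1] - z*v[2]      # q * (0, v): scalar part
    b =  w*v[0] + y*v[2] - z*v[1]      # ... vector parts
    c =  w*v[1] + z*v[0] - x*v[2]
    d =  w*v[2] + x*v[1] - y*v[0]
    return [-a*x + b*w - c*z + d*y,    # ... times conjugate (w, -x, -y, -z)
            -a*y + c*w - d*x + b*z,
            -a*z + d*w - b*y + c*x]
```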

Real-time applications involving concatenating rotations and applying them to a small number of vectors (perhaps one: the position of a robot hand) need detailed analysis to see which representation is the most efficient. Similarly for more complex geometric calculations.

A common application of rotation representations in vision is to express the motion of the observer. Often the rotational motions, for mechanical or mathematical reasons, are expressed as rotations about some fixed axes, which in practice are usually orthogonal. Examples are the motions induced by pan and tilt camera platforms, or the expression of egomotion in terms of rotations about the observer's local X, Y, Z axes. In this case the matrix representation is the clear choice to maximize efficiency and intuition.

Reconstructing the rotation that transformed known points is a useful vision task that requires fitting data to a model of the transformation. The straightforward route, followed since Roberts' thesis [17], is to do a linear least-squares fit to the transform's matrix representation. This approach is perhaps risky, since the matrix elements are not independent and the error being minimized thus may not be the one desired. The alternative of using conic parameters has been successfully used [12] and may well be a better idea. Recently, more sophisticated Kalman filter methods have been used to derive motion from point matches through time [5]. Model-matching with constraints allows a model to be matched to a class of objects, and promising approaches are emerging that use hybrid methods of symbolic algebra (implementing polynomial triangulation to get explicit solutions for known constraints, followed by root-finding) and sophisticated minimization techniques to search for minimum-error solutions [13,14].

A less common but important set of applications is to reason about and manipulate rotations themselves For example one may want to sample rotation space evenly [9] or to create smoothly interpolated paths in rotation space between a given set of rotations [16] In these cases the Euler-Rodrigues parameters are the most convenient
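One standard construction of such interpolated paths is spherical linear interpolation (slerp) on the Euler-Rodrigues (quaternion) parameters. The formula below is from the general literature, not from this paper; quaternions are (w, x, y, z) sequences, assumed unit length:

```python
# Slerp between two unit quaternions q0, q1 at parameter t in [0, 1]:
# constant angular velocity along the great arc in rotation space.
import math

def slerp(q0, q1, t):
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                      # q and -q are the same rotation:
        q1 = [-c for c in q1]          # take the shorter arc
        dot = -dot
    dot = min(dot, 1.0)
    theta = math.acos(dot)             # angle between the quaternions
    if theta < 1e-9:                   # nearly identical: plain lerp is fine
        return [a + t * (b - a) for a, b in zip(q0, q1)]
    s = math.sin(theta)
    w0 = math.sin((1.0 - t) * theta) / s
    w1 = math.sin(t * theta) / s
    return [w0 * a + w1 * b for a, b in zip(q0, q1)]
```

Halfway between the identity and a rotation by θ about a fixed axis, slerp yields the rotation by θ/2 about that axis, which is exactly the "smooth path" property the interpolation work exploits.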

Another example is the use, since 1932, of quaternions in the neurophysiological literature to describe eye kinematics [22]. Models of human three-dimensional oculomotor performance, especially for the vestibulo-ocular reflex, and feedback models of saccades, involve computations on rotations that are much more stable when carried out in conic or Euler-Rodrigues parameters using quaternion arithmetic. Subtractive feedback laws adequate for simple (left or right) eye movements do not generalize to arbitrary rotational axes. The quaternion model uses multiplicative feedback, explains human performance better than similar models based on rotation matrices, and leads to explicit predictions such as a four-channel saccade computation [21].


6 Bibliography

[1] S. L. Altmann. Rotations, Quaternions and Double Groups. Clarendon Press, Oxford, 1986.

[2] C. M. Brown. Gaze controls with interactions and delays. In DARPA Image Understanding Workshop (submitted to IEEE-TSMC), pages 200-218, May 1989.

[3] C. M. Brown. Prediction in gaze and saccade control. Technical Report OUEL 1771/89 (UR TR-295), Oxford University Dept. of Engineering Science (U. Rochester Comp. Sci. Dept.), May 1989.

[4] C. I. Connolly, D. Kapur, J. L. Mundy, and R. Weiss. GeoMeter: A system for modeling and algebraic manipulation. In Proceedings DARPA Image Understanding Workshop, pages 797-804, May 1989.

[5] O. D. Faugeras, F. Lustman, and G. Toscani. Motion and structure from motion from point and line matches. In Proceedings International Conference on Computer Vision, pages 25-34, June 1987.

[6] J. D. Foley and A. Van Dam. Fundamentals of Interactive Computer Graphics. Addison-Wesley, 1982.

[7] W. R. Hamilton. On quaternions; or on a new system of imaginaries in algebra. Philosophical Magazine, 3rd Ser., 25:489-495, 1844.

[8] C. Hoffmann and J. Hopcroft. Towards implementing robust geometric computations. In Proceedings ACM Symposium on Computational Geometry, 1988.

[9] Berthold K. P. Horn. Robot Vision. MIT Press/McGraw-Hill, 1986.

[10] K. I. Kanatani. Group Theory in Computer Vision. Springer Verlag, 1989.

[11] V. J. Milenkovic. Robust geometric computations for vision and robotics. In Proceedings DARPA Image Understanding Workshop, pages 764-773, May 1989.

[12] J. L. Mundy. Private communication, August 1989.

[13] J. L. Mundy. The application of symbolic algebra techniques to object modeling. Lecture, University of Oxford, August 1989.

[14] J. L. Mundy. Symbolic representation of object models. In preparation, August 1989.

[15] T. Ottmann, G. Thiemt, and C. Ullrich. Numerical stability of geometric algorithms. In Proceedings ACM Symposium on Computational Geometry, 1987.

[16] K. S. Roberts, G. Bishop, and S. K. Ganapathy. Smooth interpolation of rotational motion. In Computer Vision and Pattern Recognition 1988, pages 724-729, June 1988.

[17] L. G. Roberts. Machine perception of three-dimensional solids. In J. T. Tippett et al., editors, Optical and Electro-Optical Information Processing, pages 159-197. MIT Press, 1968.

[18] O. Rodrigues. Des lois géométriques qui régissent les déplacements d'un système solide dans l'espace, et de la variation des coordonnées provenant de ces déplacements considérés indépendamment des causes qui peuvent les produire. Journal de Mathématiques Pures et Appliquées, 5:380-440, 1840.

[19] D. Salesin, J. Stolfi, and L. Guibas. Epsilon geometry: building robust algorithms from imprecise computations. In Proceedings ACM Symposium on Computational Geometry, 1989.

[20] R. Taylor. Planning and execution of straight line manipulator trajectories. In J. M. Brady, J. M. Hollerbach, T. L. Johnson, T. Lozano-Perez, and M. T. Mason, editors, Robot Motion: Planning and Control. MIT Press, 1982.

[21] D. Tweed and T. Vilis. Implications of rotational kinematics for the oculomotor system in three dimensions. Journal of Neurophysiology, 58(4):832-849, 1987.

[22] G. Westheimer. Kinematics for the eye. Journal of the Optical Society of America, 47:967-974, 1932.



Page 29: Some Computational Properties of - Semantic Scholar · There are four main parameterizations of the rotation group SO(3). Two of them (rotation angle and axis, and the closely related

IMSeq Q LMI EfII to NllnnEwy And

003

Figure 15(a)

IMSeq Q NVCirErn toilia NllnnEwy And

- shy ~

bull I bullbullbull

to-G2

I

0000 4-----------__------------shyo tOO 200

Figure 15(b)

QvOirEfta to NllnnEwy And

Figure 15(c)

Figure 16 About Here

Figure 16 Iterated identical rotations of the unit vector in direction (1 1 -1) by 1 radian about the axis (111) computed to 10 bits under rounding Solid line - no cross products Dotted line - one cross product Dashed line - two cross products (a) Length errors (b) Direction errors

Figure 17 About Here

Figure 17 As for previous figure but the rotation is of the unit vector in direction (123) by iterated rotations of 2 radians about the axis (3-2-1)

tor In no case is there significant difference between the performances on the length error at 200 iterations but the one- or two crossproduct forms significantly outperform the no crossproduct form over 400 iterations since the latter exhibits systematically increasing negative length error and the former do not

For direction error few conclusions can be drawn about relative performance under iterated identical rotation sequences It is perhaps surprising that the no crossproduct form usually performs here at least as well as the other two over a small number of iterations and that for random sequences there is no clear winner between normalization type even approximating with truncation Thus the symptoms noticed in the last two sections involving long-term relative effects the normalization form were not noticed over 200 iterations The one- and two crossproduct forms may be more stable over long iteration but the issue of optimal normalization remains open

4.8 Average Error of Concatenated Random Rotations

Which representation is best, or worst? To wind up this overlong exercise in trying to derive structure from noise, let us consider average behavior for the task of random rotation sequences (perhaps of more general interest than iterated identical rotations). Technical decisions on normalization are based on the conclusions of previous sections: approximation is by rounding, with normalization only after the last rotation. Matrix normalization is done with the one-crossproduct method. Each run is a single experiment like that of Section 4.5, only performed (with the same data) at a number of different precisions.

Such a set of runs yields length and direction error surfaces plotted against the precision.
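The reduced-precision arithmetic behind these experiments can be mimicked by rounding or truncating each intermediate result to b mantissa bits. The sketch below is one way to do it; the paper's actual simulator may differ in detail.

```python
import math

def round_to_bits(x, b):
    """Round x to b bits of mantissa, simulating reduced-precision
    arithmetic under rounding."""
    if x == 0.0:
        return 0.0
    e = math.floor(math.log2(abs(x)))   # exponent of the leading bit
    scale = 2.0 ** (b - 1 - e)          # put b bits left of the point
    return round(x * scale) / scale

def truncate_to_bits(x, b):
    """Truncate (chop) x to b mantissa bits, simulating reduced
    precision under truncation."""
    if x == 0.0:
        return 0.0
    e = math.floor(math.log2(abs(x)))
    scale = 2.0 ** (b - 1 - e)
    return math.trunc(x * scale) / scale
```

Applying one of these functions after every arithmetic operation in a rotation routine reproduces the kind of 10-bit experiments reported in Figures 15-18; rounding keeps the relative error of each result below 2^-b.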

Figure 18 About Here

Figure 18: Direction errors only, for a sequence of random rotations applied to an input vector. Legend as in the previous two figures. (a) Truncation to 10 bits. (b) Rounding to 10 bits.


Figure 16(a)  Figure 16(b)

Figure 17(a)  Figure 17(b)

Figure 18(a)  Figure 18(b)

Figure 19(a)  Figure 19(b)

Figure 19(c)  Figure 19(d)

Rep         Repeated Rot                  Random Seq
Vec(Mat)    (9N+17)*, (6N+13)+, 1 sc      26N*, 19N+, N sc
Vec(Quat)   (18N+4)*, (12N)+, 1 sc        22N*, 12N+, N sc
Quat        (16N+22)*, (12N+12)+, 1 sc    (20N+18)*, (12N+12)+, N sc
Matrix      (27N+26)*, (18N+19)+, 1 sc    (44N+9)*, (31N+6)+, N sc

Table 4: Operation counts for N-long rotation sequences. The long form of matrix multiplication is used here. Vec(Mat) is the vector representation with intermediate transformation by matrices; Vec(Quat), with transformation by the quaternion method. Multiplications, additions, and sine-cosine pairs are denoted by *, +, and sc, respectively.

…matrix transformation), since matrix-vector multiplication costs one third of matrix-matrix multiplication.
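The one-third ratio is just the operation count: a 3x3 matrix-vector product takes 9 multiplications against 27 for a long-form matrix-matrix product. A trivial tally (function names are mine, for illustration only):

```python
def matvec_ops():
    """3x3 matrix times 3-vector: 3 output components,
    each an inner product of length 3 (3 muls, 2 adds)."""
    return {"mul": 3 * 3, "add": 3 * 2}

def matmat_ops():
    """3x3 times 3x3, long form: 9 output entries,
    each an inner product of length 3 (3 muls, 2 adds)."""
    return {"mul": 9 * 3, "add": 9 * 2}
```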

4.10 Conclusions and Questions

The conclusions we can draw at this point are the following:

1. Numerical simulation of variable-precision arithmetic has yielded some tentative conclusions, but in many interesting cases they are rather delicate and depend on niceties of the implementation. Accurate performance predictions may only be possible with faithful simulation of the particular hardware and software at issue.

2. Practical differences in numerical accuracy tend to disappear at reasonable precisions. After 200 iterations of a rotation, 53 bits of mantissa precision (the double representation in the C language) yields no error to 10 decimal places. With 24 bits (the float representation), effects were noticeable in the 5th to 7th decimal place with all methods.

3. Normalization is vital before quaternion or matrix representations are applied to vectors, but it is not as important during the composition of representations, and the extra numerical computation of frequent normalization may actually decrease accuracy in the final result.

4. The vector (with matrix transformation) and matrix representations for rotation sequences seem to outperform the quaternion for accuracy.

5. The vector (with matrix transformation) representation for rotation sequences, with a single normalization of vector length at the end, is most computationally efficient and has least length error on the average.

6. Averaged over many trials, the difference in accuracy of the three methods amounts only to about a single bit of precision for a small number of rotation compositions (say 25 or fewer).

7. Matrices are the least computationally efficient on rotation sequences.
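Conclusion 2 is easy to reproduce in miniature. The sketch below iterates a rotation about one axis, once in double precision and once with each stored component rounded to IEEE single precision (24-bit mantissa), and reports the length error of the rotated vector. This is an illustrative stand-in for the paper's variable-precision simulator, not a reconstruction of it; all names are mine.

```python
import math
import struct

def to_f32(x):
    """Round a Python float (double) to IEEE single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

def rotate_z(v, c, s, rnd):
    """Rotate v about the z axis by the angle with cosine c and sine s;
    rnd simulates the precision at which results are stored."""
    x = rnd(c * v[0] - s * v[1])
    y = rnd(s * v[0] + c * v[1])
    return [x, y, rnd(v[2])]

def length_error(bits32=False, n=200, theta=1.0):
    """Length error |  ||v_n|| - 1 | after n iterated rotations of a
    unit vector, at double or emulated single precision."""
    rnd = to_f32 if bits32 else (lambda x: x)
    c, s = rnd(math.cos(theta)), rnd(math.sin(theta))
    v = [1.0, 0.0, 0.0]
    for _ in range(n):
        v = rotate_z(v, c, s, rnd)
    return abs(math.sqrt(sum(x * x for x in v)) - 1.0)
```

At 53 bits the error stays near the double-precision noise floor; at 24 bits it drifts into roughly the range the text reports.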


There are also some remaining basic questions:

1. What is a reasonable analytic approach to the numerical accuracy problem for interesting rotational tasks (say, sequences of random rotations under rounding)?

2. How should optimal matrix or quaternion normalization methods be defined and constructed?

5 Discussion

The area of robust computational geometry is becoming popular, with an annual international conference and increasingly sophisticated theoretical formulations. Very little of that work (if any) is related to rotation algorithms. Symbolic algebra systems can help with analysis [4]. A good treatment of computer arithmetic and rounding appears in [15], with an application to detecting line intersection. Serious qualitative representational problems arise when issues of topology (line connectivity) are at stake, and much of the work is on computing line intersections accurately with finite-precision arithmetic [19]. For instance, Milenkovic's analysis is based on facts about line intersection, such as the number of extra bits that are necessary to guarantee a specific precision of answer. Some of these facts are surprising: it takes at least three times the input precision to calculate a line intersection location to the same precision [11]. Robust polygon and polyhedron intersection algorithms, as well as various line-sweeping algorithms, are based on accurate line intersection algorithms [8].

Several points of comparison between rotation parameterizations and representations have been raised in the preceding sections. It is perhaps worth repeating that little evidence for dramatic differences in numerical accuracy has emerged, though the evidence might tend to favor matrices slightly and to condemn the vector representation.

One of the common applications of rotational transforms is to affect several vectors at once, either in the context of transforming a set of points around (as in graphics) or in handling coordinate systems represented as n points (n - 1 of them points at infinity representing axis directions, and one representing the origin of coordinates, in a homogeneous coordinate system). In the first case, the repeated application of the same transform will be more economical if the work is done in the matrix representation. In the second, the same may well be true; but of course the whole point of homogeneous coordinates is to make the projective (a fortiori, affine) transforms into linear ones implemented with matrix multiplication. This means, for example, that coordinate systems, rigid transformations including translation and rotation, and point projective transforms (simple camera models) can all be represented as matrices and composed with matrix multiplication, and that the operation of such transforms on vectors is simply matrix-vector multiplication. In this context, homogeneous coordinates by definition require the matrix representation.
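A minimal illustration of the homogeneous-coordinate arrangement: a rigid transform (here a rotation about the z axis followed by a translation, for brevity) packs into a single 4x4 matrix, and applying it to a point with w = 1 is one matrix-vector multiplication. Names are mine, for illustration.

```python
import math

def rigid_z(theta, t):
    """4x4 homogeneous matrix: rotate by theta about z, then translate by t."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c,  -s,  0.0, t[0]],
            [s,   c,  0.0, t[1]],
            [0.0, 0.0, 1.0, t[2]],
            [0.0, 0.0, 0.0, 1.0]]

def apply(M, p):
    """Apply a homogeneous transform M to a 3-point p (implicit w = 1)."""
    q = [p[0], p[1], p[2], 1.0]
    return [sum(M[i][j] * q[j] for j in range(4)) for i in range(3)]
```

Composing two such matrices by ordinary matrix multiplication composes the rigid transforms, which is the economy the text describes.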

If the rotation at issue is a general one (say, specified in conic (θ, n) parameters), then conversion either to the quaternion or the matrix form is easy. Thus the representations in this case do not dictate the semantics, or how one wants to think about the rotation. If the rotation is specified in terms of rotations about (X, Y, Z) coordinate axes, then the matrix representation is easily and robustly constructed.
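The conversion from conic (angle-and-axis) parameters to the Euler-Rodrigues (quaternion) form, and the application of the result to a vector, take only a few lines. This is a standard construction, sketched with my own function names; the axis n is assumed to be a unit vector.

```python
import math

def conic_to_quat(theta, n):
    """Angle theta about unit axis n -> Euler-Rodrigues parameters
    (q0, q1, q2, q3) = (cos(theta/2), sin(theta/2) * n)."""
    h = theta / 2.0
    s = math.sin(h)
    return (math.cos(h), s * n[0], s * n[1], s * n[2])

def quat_rotate(q, v):
    """Rotate v by the unit quaternion q, i.e. q * (0, v) * conj(q),
    expanded into cross products to avoid full quaternion multiplies."""
    w, x, y, z = q
    # t = 2 * (q_vec x v)
    tx = 2.0 * (y * v[2] - z * v[1])
    ty = 2.0 * (z * v[0] - x * v[2])
    tz = 2.0 * (x * v[1] - y * v[0])
    # v' = v + w * t + q_vec x t
    return [v[0] + w * tx + y * tz - z * ty,
            v[1] + w * ty + z * tx - x * tz,
            v[2] + w * tz + x * ty - y * tx]
```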

Real-time applications involving concatenating rotations and applying them to a small number of vectors (perhaps one: the position of a robot hand) need detailed analysis to see which representation is the most efficient; similarly for more complex geometric calculations.

A common application of rotation representations in vision is to express the motion of the observer. Often the rotational motions, for mechanical or mathematical reasons, are expressed as rotations about some fixed axes, which in practice are usually orthogonal. Examples are the motions induced by pan and tilt camera platforms, or the expression of egomotion in terms of rotations about the observer's local X, Y, Z axes. In this case the matrix representation is the clear choice, to maximize efficiency and intuition.

Reconstructing the rotation that transformed known points is a useful vision task that requires fitting data to a model of the transformation. The straightforward route, followed since Roberts' thesis [17], is to do a linear least-squares fit to the transform's matrix representation. This approach is perhaps risky, since the matrix elements are not independent and the error being minimized thus may not be the one desired. The alternative of using conic parameters has been successfully used [12] and may well be a better idea. Recently, more sophisticated Kalman filter methods have been used to derive motion from point matches through time [5]. Model-matching with constraints allows a model to be matched to a class of objects, and promising approaches are emerging that use hybrid methods of symbolic algebra (implementing polynomial triangulation to get explicit solutions for known constraints) followed by root-finding and sophisticated minimization techniques to search for minimum-error solutions [13,14].
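The linear least-squares route can be sketched as follows. Note that the fit is unconstrained: nothing forces the recovered matrix to be orthogonal, which is exactly the risk noted above. Function names are mine, for illustration.

```python
def det3(M):
    """Determinant of a 3x3 matrix."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def solve3(A, b):
    """Solve a 3x3 linear system A x = b by Cramer's rule."""
    d = det3(A)
    x = []
    for j in range(3):
        Aj = [row[:] for row in A]
        for i in range(3):
            Aj[i][j] = b[i]
        x.append(det3(Aj) / d)
    return x

def fit_rotation_lsq(ps, qs):
    """Unconstrained linear least-squares fit of a 3x3 matrix M with
    M p ~ q over corresponding point lists ps, qs.  Each row m_i of M
    solves the normal equations (sum p p^T) m_i = sum q_i p."""
    A = [[sum(p[r] * p[c] for p in ps) for c in range(3)] for r in range(3)]
    return [solve3(A, [sum(q[i] * p[c] for p, q in zip(ps, qs))
                       for c in range(3)])
            for i in range(3)]
```

With noise-free data from a true rotation the fit recovers the rotation matrix exactly; with noisy data it returns the best linear map, which in general is not a rotation.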

A less common but important set of applications is to reason about and manipulate rotations themselves. For example, one may want to sample rotation space evenly [9], or to create smoothly interpolated paths in rotation space between a given set of rotations [16]. In these cases the Euler-Rodrigues parameters are the most convenient.
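One standard construction for such interpolated paths (not necessarily the exact scheme of [16]) is spherical linear interpolation between unit quaternions, which moves at constant angular velocity along the great-circle arc joining the two rotations. A sketch, assuming unit-quaternion inputs:

```python
import math

def slerp(q0, q1, u):
    """Spherical linear interpolation between unit quaternions q0 and q1
    for u in [0, 1]."""
    d = sum(a * b for a, b in zip(q0, q1))
    if d < 0.0:                      # take the shorter great-circle arc
        q1 = tuple(-b for b in q1)
        d = -d
    d = min(d, 1.0)                  # guard acos against rounding
    omega = math.acos(d)             # angle between the two quaternions
    if omega < 1e-9:                 # nearly parallel: linear blend is fine
        return tuple((1.0 - u) * a + u * b for a, b in zip(q0, q1))
    s = math.sin(omega)
    return tuple((math.sin((1.0 - u) * omega) * a
                  + math.sin(u * omega) * b) / s
                 for a, b in zip(q0, q1))
```

The result is again a unit quaternion, so no renormalization step is needed between interpolation and application.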

Another example is the use, since 1932, of quaternions in the neurophysiological literature to describe eye kinematics [22]. Models of human three-dimensional oculomotor performance, especially for the vestibulo-ocular reflex and feedback models of saccades, involve computations on rotations that are much more stable when carried out in conic or Euler-Rodrigues parameters using quaternion arithmetic. Subtractive feedback laws adequate for simple (left or right) eye movements do not generalize to arbitrary rotational axes. The quaternion model uses multiplicative feedback, explains human performance better than similar models based on rotation matrices, and leads to explicit predictions such as a four-channel saccade computation [21].


6 Bibliography

[1] S. L. Altmann. Rotations, Quaternions, and Double Groups. Clarendon Press, Oxford, 1986.

[2] C. M. Brown. Gaze controls with interactions and delays. In DARPA Image Understanding Workshop, pages 200-218, May 1989. Submitted to IEEE-TSMC.

[3] C. M. Brown. Prediction in gaze and saccade control. Technical Report OUEL 1771/89 (UR TR-295), Oxford University Department of Engineering Science (University of Rochester Computer Science Department), May 1989.

[4] C. I. Connolly, D. Kapur, J. L. Mundy, and R. Weiss. GeoMeter: A system for modeling and algebraic manipulation. In Proceedings DARPA Image Understanding Workshop, pages 797-804, May 1989.

[5] O. D. Faugeras, F. Lustman, and G. Toscani. Motion and structure from motion from point and line matches. In Proceedings International Conference on Computer Vision, pages 25-34, June 1987.

[6] J. D. Foley and A. van Dam. Fundamentals of Interactive Computer Graphics. Addison-Wesley, 1982.

[7] W. R. Hamilton. On quaternions; or on a new system of imaginaries in algebra. Philosophical Magazine, 3rd Ser., 25:489-495, 1844.

[8] C. Hoffmann and J. Hopcroft. Towards implementing robust geometric computations. In Proceedings ACM Symposium on Computational Geometry, 1988.

[9] Berthold K. P. Horn. Robot Vision. MIT Press / McGraw-Hill, 1986.

[10] K. I. Kanatani. Group Theory in Computer Vision. Springer-Verlag, 1989.

[11] V. J. Milenkovic. Robust geometric computations for vision and robotics. In Proceedings DARPA Image Understanding Workshop, pages 764-773, May 1989.

[12] J. L. Mundy. Private communication, August 1989.

[13] J. L. Mundy. The application of symbolic algebra techniques to object modeling. Lecture, University of Oxford, August 1989.

[14] J. L. Mundy. Symbolic representation of object models. In preparation, August 1989.

[15] T. Ottmann, G. Thiemt, and C. Ullrich. Numerical stability of geometric algorithms. In Proceedings ACM Symposium on Computational Geometry, 1987.

[16] K. S. Roberts, G. Bishop, and S. K. Ganapathy. Smooth interpolation of rotational motion. In Computer Vision and Pattern Recognition 1988, pages 724-729, June 1988.

[17] L. G. Roberts. Machine perception of three-dimensional solids. In J. T. Tippett et al., editors, Optical and Electro-Optical Information Processing, pages 159-197. MIT Press, 1968.

[18] O. Rodrigues. Des lois géométriques qui régissent les déplacements d'un système solide dans l'espace, et de la variation des coordonnées provenant de ses déplacements considérés indépendamment des causes qui peuvent les produire. Journal de Mathématiques Pures et Appliquées, 5:380-440, 1840.

[19] D. Salesin, J. Stolfi, and L. Guibas. Epsilon geometry: building robust algorithms from imprecise computations. In Proceedings ACM Symposium on Computational Geometry, 1989.

[20] R. Taylor. Planning and execution of straight line manipulator trajectories. In J. M. Brady, J. M. Hollerbach, T. L. Johnson, T. Lozano-Pérez, and M. T. Mason, editors, Robot Motion: Planning and Control. MIT Press, 1982.

[21] D. Tweed and T. Vilis. Implications of rotational kinematics for the oculomotor system in three dimensions. Journal of Neurophysiology, 58(4):832-849, 1987.

[22] G. Westheimer. Kinematics for the eye. Journal of the Optical Society of America, 47:967-974, 1932.


this case do not dictate the semantics or how one wants to think about the rotation If the rotation is specified in terms of rotations about (X Y Z) coordinate axes then the matrix representation is easily and robustly constructed

Real-time applications involving concatenating rotations and applying them to a small number of vectors (perhaps one the position of a robot hand) need detailed analysis to see which representation is the most efficient Similarly for more complex geometric calculations

A common application of rotation representations in vision is to express the motion of the observer Often the rotational motions for mechanical or mathematical reasons are expressed as rotations about some fixed axes which in practice are usually orthogonal Examples are the motions induced by pan and tilt camera platforms or the expression of egomotion in terms of rotations about the observers local X Y Z axes In this case the matrix representation is the clear choice to maximize efficiency and intuition

Reconstructing the rotation that transformed known points is a useful vision task that requires fitting data to a model of the transformation The straightforward route followed since Roberts thesis [17] is to do a linear least squares fit to the transforms matrix representation This approach is perhaps risky since the matrix elements are not independent and the error being minimized thus may not be the one desired The alternative of using conic parameters has been successfully used [12] and may well be a better idea Recently more sophisticated Kalman filter methods have been used to derive motion from point matches through time [5] Model-matching with constraints allows a model to be matched to a class of objects and promising approaches are emerging that use hybrid methods of symbolic algebra implementing polynomial triangulation to get explicit solutions for known constraints followed by root-finding and sophisticated minimization techniques to search for minimum-error solutions [1314]

A less common but important set of applications is to reason about and manipulate rotations themselves For example one may want to sample rotation space evenly [9] or to create smoothly interpolated paths in rotation space between a given set of rotations [16] In these cases the Euler-Rodrigues parameters are the most convenient

Another example is the use since 1932 of quaternions in the neurophysiological litshyerature to describe eye kinematics [22] Models of human three-dimensional oculomotor performance especially for the vestibula-ocular reflex and feedback models of saccades involve computations on rotations that are much more stable when carried out in conic or Euler-Rodrigues parameters using quaternion arithmetic Subtractive feedback laws adequate for simple (left or right) eye movements do not generalize to arbitrary rotational axes The quatemion model uses multiplicative feedback explains human performance better than similar models based on rotation matrices and leads to explicit predictions such as a four-channel saccade computation [21]

24

6 Bibliography

[1] S 1 Altmann Rotations Quaternions and Double Groups Clarendon Press Oxshyford 1986

[2] C M Brown Gaze controls with interactions and delays In DARPA Image Undershystanding Workshop Submitted IEEE-TSMC pages 200-218 May 1989

[3] C M Brown Prediction in gaze and saccade control Technical Report OUEL 177189 UR TR-295 Oxford University Dept Engg Science (U Rochester Compo Sci Dept) May 1989

[4] C I Connolly D Kapur J1 Mundy and R Weiss GeoMeter A system for modeling and algebraic manipulation In Proceedings DARPA Image Understanding Workshop pages 797-804 May 1989

[5] O D Faugeras F Lustman and G Toscani Motion and structure from motion from point and line matches In Proceedings International Conference on Computer Vision pages 25-34 June 1987

[6] J D Foley and A Van Dam Fundamentals of Interactive Computer Graphics Addison-Wesley 1982

[7] W R Hamilton On quaternions or on a new system of imaginaries in algebra Philosophical Magazine 3rd Ser 25489 - 495 1844

[8] C Hoffmann and J Hopcroft Towards implementing robust geometric computations In Proceedings ACM Symposium on Computational Geometry 1988

[9] Berthold KP Horn Robot Vision MIT-Press McGraw-Hill 1986

[10] K I Kanatani Group Theory in Computer Vision Springer Verlag 1989

[11] V J Milenkovic Robust geometric computations for vision and robotics In Proshyceedings DARPA Image Understanding Workshop pages 764-773 May 1989

[12] J 1 Mundy Private communication August 1989

[13] J 1 Mundy The application of symbolic algebra techniques to object modeling Lecture University of Oxford August 1989

[14] J 1 Mundy Symbolic representation of object models In Preparation August 1989

[15] T Ortmann G Thiemt and C Ullrich Numerical stability of geometric algorithms In Proceedings A CM Symposium on Computational Geometry 1987

[16] K S Roberts G Bishop and S K Ganapathy Smooth interpolation of rotational motion In Computer Vision and Pattern Rocognition 1988 pages 724-729 June 1988

[17] 1 G Roberts Machine perception of three-dimensional solids In J T Tippett et al editors Optical and electro-optical information processing pages 159-197 MIT Press 1968

25

[18] O Rodrigues Des lois geometriques qui regissent les deplacements dun systeme solide dans lespace et de la variation des coordonnees provenant de ses deplacements consideres independamment des causes qui peuvent les produire Journal de Mathemaiiques Puree et Appliquees 5380 - 440 1840

[19] D Salesin J Stolfi and L Guibas Epsilon geometry building robust algorithms from imprecise computations In Proceedings ACM Symposium on Computational Geometry 1989

[20] R Taylor Planning and execution of straight line manipulator trajectories In J M Brady J M Hollerbach T L Johnson T Lozano-Perez and M T Mason editors Robot Motion Planning and Control MIT Press 1982

[21] D Tweed and T Vilis Implications of rotational kinematics for the oculomotor system in three dimensions Journal of Neurophysiology 58(4)832-849 1987

[22] G Westheimer Kinematics for the eye Journal of the Optical Society of America 47967-974 1932

26

Page 32: Some Computational Properties of - Semantic Scholar · There are four main parameterizations of the rotation group SO(3). Two of them (rotation angle and axis, and the closely related

[Figure 19, panels (a)-(d): plots not reproduced]

Rep        Repeated Rot                Random Seq
Vec(Mat)   (9N+17)* (6N+13)+ 1sc       26N* 19N+ Nsc
Vec(Quat)  (18N+4)* (12N)+ 1sc         22N* 12N+ Nsc
Quat       (16N+22)* (12N+12)+ 1sc     (20N+18)* (12N+12)+ Nsc
Matrix     (27N+26)* (18N+19)+ 1sc     (44N+9)* (31N+6)+ Nsc

Table 4: Operation counts for N-long rotation sequences. The long form of matrix multiplication is used here. Vec(Mat) is the vector representation with intermediate transformation by matrices; Vec(Quat), with transformation by the quaternion method. Multiplications, additions, and sine-cosine pairs are denoted by *, +, and sc respectively.

matrix transformation), since matrix-vector multiplication costs one third of matrix-matrix multiplication.
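The "one third" figure follows directly from the loop structure of the two products: a 3x3 matrix-vector product takes 9 multiplications and 6 additions, while a 3x3 matrix-matrix product takes 27 and 18. A minimal Python sketch (illustrative only, not from the original report):

```python
# Counts behind the remark that matrix-vector multiplication costs one
# third of matrix-matrix multiplication: 9*/6+ versus 27*/18+ for 3x3.

def mat_vec(M, v):
    # 3 rows, 3 products per row: 9 multiplications, 6 additions
    return [sum(M[i][k] * v[k] for k in range(3)) for i in range(3)]

def mat_mat(A, B):
    # 9 entries, 3 products per entry: 27 multiplications, 18 additions
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
v = [1.0, 2.0, 3.0]
print(mat_vec(I3, v))         # [1.0, 2.0, 3.0]
print(mat_mat(I3, I3) == I3)  # True
```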

4.10 Conclusions and Questions

The conclusions we can draw at this point are the following:

1. Numerical simulation of variable precision arithmetic has yielded some tentative conclusions, but in many interesting cases they are rather delicate and depend on niceties of the implementation. Accurate performance predictions may only be possible with faithful simulation of the particular hardware and software at issue.

2. Practical differences in numerical accuracy tend to disappear at reasonable precisions. After 200 iterations of a rotation, 53 bits of mantissa precision (the double representation in the C language) yields no error to 10 decimal places. With 24 bits (the float representation), effects were noticeable in the 5th to 7th decimal place with all methods.

3. Normalization is vital before quaternion or matrix representations are applied to vectors, but not as important during the composition of representations, and the extra numerical computation of frequent normalization may actually decrease accuracy in the final result.

4. The vector (with matrix transformation) and matrix representations for rotation sequences seem to outperform the quaternion for accuracy.

5. The vector (with matrix transformation) representation for rotation sequences, with a single normalization of vector length at the end, is the most computationally efficient and has the least length error on average.

6. Averaged over many trials, the difference in accuracy of the three methods amounts only to about a single bit of precision for a small number of rotation compositions (say 25 or fewer).

7. Matrices are the least computationally efficient on rotation sequences.
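Conclusions 2 and 3 are easy to probe at ordinary double precision. The sketch below (illustrative; pure Python at 53-bit doubles, so it does not emulate the reduced-precision arithmetic of the experiments) composes a quaternion with itself 200 times without intermediate normalization and measures the drift of its norm:

```python
import math

def qmul(a, b):
    """Hamilton product of quaternions (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qnorm(q):
    """Normalize a quaternion to unit length."""
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

# unit quaternion for a 1-degree rotation about the Z axis
h = math.radians(1.0) / 2.0
q1 = (math.cos(h), 0.0, 0.0, math.sin(h))

q = q1
for _ in range(200):   # 200 compositions, as in conclusion 2
    q = qmul(q, q1)    # no intermediate normalization

drift = abs(math.sqrt(sum(c * c for c in q)) - 1.0)
print(drift)           # tiny at double precision
```

At 53 bits the accumulated norm drift stays far below any of the decimal places mentioned above; a single normalization at the end (conclusion 3) removes it entirely.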



There are also some remaining basic questions:

1. What is a reasonable analytic approach to the numerical accuracy problem for interesting rotational tasks (say, sequences of random rotations under rounding)?

2. How should optimal matrix or quaternion normalization methods be defined and constructed?

5 Discussion

The area of robust computational geometry is becoming popular, with an annual international conference and increasingly sophisticated theoretical formulations. Very little of that work (if any) is related to rotation algorithms. Symbolic algebra systems can help with analysis [4]. A good treatment of computer arithmetic and rounding appears in [15], with an application to detecting line intersection. Serious qualitative representational problems arise when issues of topology (line connectivity) are at stake, and much of the work is on computing line intersections accurately with finite precision arithmetic [19]. For instance, Milenkovic's analysis is based on facts about line intersection, such as the number of extra bits that are necessary to guarantee a specific precision of answer. Some of these facts are surprising: it takes at least three times the input precision to calculate a line intersection location to the same precision [11]. Robust polygon and polyhedron intersection algorithms, as well as various line-sweeping algorithms, are based on accurate line intersection algorithms [8].
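The "three times the input precision" fact can be seen directly with exact rational arithmetic: the intersection of two lines through points with k-bit integer coordinates is a ratio of determinants whose terms are products of three input coordinates, hence roughly 3k bits. An illustrative sketch using Python's Fraction for exactness (the specific points are arbitrary choices, not from [11]):

```python
from fractions import Fraction

def intersect(p1, p2, p3, p4):
    # Exact intersection of line p1-p2 with line p3-p4 (Cramer's rule).
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    d12 = x1 * y2 - y1 * x2
    d34 = x3 * y4 - y3 * x4
    return (Fraction(d12 * (x3 - x4) - (x1 - x2) * d34, den),
            Fraction(d12 * (y3 - y4) - (y1 - y2) * d34, den))

# 16-bit input coordinates
p1, p2 = (0, 1), (65521, 65519)
p3, p4 = (3, 65535), (65437, 5)
x, y = intersect(p1, p2, p3, p4)
# the exact answer needs roughly three times the input precision
print(x.numerator.bit_length(), x.denominator.bit_length())
```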

Several points of comparison between rotation parameterizations and representations have been raised in the preceding sections. It is perhaps worth repeating that little evidence for dramatic differences in numerical accuracy has emerged, though the evidence might tend to favor matrices slightly and to condemn the vector representation.

One of the common applications of rotational transforms is to affect several vectors at once, either in the context of transforming a set of points around (as in graphics) or in handling coordinate systems represented as n points (n - 1 of them points at infinity representing axis directions, and one representing the origin of coordinates, in a homogeneous coordinate system). In the first case, the repeated application of the same transform will be more economical if the work is done in the matrix representation. In the second, the same may well be true, but of course the whole point of homogeneous coordinates is to make the projective, a fortiori affine, transforms into linear ones implemented with matrix multiplication. This means, for example, that coordinate systems, rigid transformations including translation and rotation, and point projective transforms (simple camera models) can all be represented as matrices and composed with matrix multiplication, and that the operation of such transforms on vectors is simply matrix-vector multiplication. In this context homogeneous coordinates by definition require the matrix representation.
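An illustrative sketch (not from the report) of the homogeneous-coordinate point: a rigid transform built as a 4x4 matrix translates finite points (w = 1) but only rotates directions, the points at infinity (w = 0):

```python
import math

def rot_z(theta):
    """4x4 homogeneous rotation about the Z axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def translate(tx, ty, tz):
    """4x4 homogeneous translation."""
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def apply(M, p):
    """Apply a 4x4 transform to a homogeneous 4-vector."""
    return [sum(M[i][j] * p[j] for j in range(4)) for i in range(4)]

# rigid transform: rotate 90 degrees about Z, then translate by (1, 0, 0),
# composed into a single matrix by matrix multiplication
T = mat_mul(translate(1, 0, 0), rot_z(math.pi / 2))
print(apply(T, [1, 0, 0, 1]))   # finite point: rotated and translated
print(apply(T, [1, 0, 0, 0]))   # direction (point at infinity): rotated only
```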

If the rotation at issue is a general one (say, specified in conic (θ, n) parameters), then conversion to either the quaternion or the matrix form is easy. Thus the representations in this case do not dictate the semantics, or how one wants to think about the rotation. If the rotation is specified in terms of rotations about the (X, Y, Z) coordinate axes, then the matrix representation is easily and robustly constructed.
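Both conversions from conic (angle θ, unit axis n) parameters are short enough to state as code. A Python sketch (illustrative; the quaternion form uses the half-angle, the matrix form is the Rodrigues formula):

```python
import math

def conic_to_quat(theta, n):
    """Unit quaternion (w, x, y, z) for rotation by theta about unit axis n."""
    h = theta / 2.0
    s = math.sin(h)
    return (math.cos(h), s * n[0], s * n[1], s * n[2])

def conic_to_matrix(theta, n):
    """Rotation matrix via the Rodrigues formula R = cI + s[n]x + (1-c)nn^T."""
    c, s = math.cos(theta), math.sin(theta)
    x, y, z = n
    C = 1.0 - c
    return [[c + x*x*C,    x*y*C - z*s,  x*z*C + y*s],
            [y*x*C + z*s,  c + y*y*C,    y*z*C - x*s],
            [z*x*C - y*s,  z*y*C + x*s,  c + z*z*C]]

# 90 degrees about Z sends (1, 0, 0) to (0, 1, 0)
R = conic_to_matrix(math.pi / 2, (0.0, 0.0, 1.0))
e1 = [1.0, 0.0, 0.0]
v = [sum(R[i][j] * e1[j] for j in range(3)) for i in range(3)]
print(v)
```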

Real-time applications involving concatenating rotations and applying them to a small number of vectors (perhaps one, the position of a robot hand) need detailed analysis to see which representation is the most efficient. Similarly for more complex geometric calculations.

A common application of rotation representations in vision is to express the motion of the observer. Often the rotational motions, for mechanical or mathematical reasons, are expressed as rotations about some fixed axes, which in practice are usually orthogonal. Examples are the motions induced by pan and tilt camera platforms, or the expression of egomotion in terms of rotations about the observer's local X, Y, Z axes. In this case the matrix representation is the clear choice to maximize efficiency and intuition.

Reconstructing the rotation that transformed known points is a useful vision task that requires fitting data to a model of the transformation. The straightforward route, followed since Roberts' thesis [17], is to do a linear least squares fit to the transform's matrix representation. This approach is perhaps risky, since the matrix elements are not independent and the error being minimized thus may not be the one desired. The alternative of using conic parameters has been used successfully [12] and may well be a better idea. Recently, more sophisticated Kalman filter methods have been used to derive motion from point matches through time [5]. Model-matching with constraints allows a model to be matched to a class of objects, and promising approaches are emerging that use hybrid methods of symbolic algebra implementing polynomial triangulation to get explicit solutions for known constraints, followed by root-finding and sophisticated minimization techniques to search for minimum-error solutions [13,14].

A less common but important set of applications is to reason about and manipulate rotations themselves. For example, one may want to sample rotation space evenly [9], or to create smoothly interpolated paths in rotation space between a given set of rotations [16]. In these cases the Euler-Rodrigues parameters are the most convenient.
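One standard construction for such interpolation, stated here only as an illustration and not necessarily the method of [16], is spherical linear interpolation (slerp) on unit quaternions, which moves at constant angular rate along the geodesic between two rotations:

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions, t in [0, 1]."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                    # take the shorter of the two arcs
        q1 = tuple(-c for c in q1)
        dot = -dot
    dot = min(dot, 1.0)
    omega = math.acos(dot)           # angle between the quaternions
    if omega < 1e-9:                 # nearly identical: avoid divide-by-zero
        return q0
    s0 = math.sin((1.0 - t) * omega) / math.sin(omega)
    s1 = math.sin(t * omega) / math.sin(omega)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))

# halfway between the identity and a 90-degree Z rotation
# is a 45-degree Z rotation, i.e. (cos(pi/8), 0, 0, sin(pi/8))
q_id = (1.0, 0.0, 0.0, 0.0)
q_90 = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
q_mid = slerp(q_id, q_90, 0.5)
print(q_mid)
```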

Another example is the use, since 1932, of quaternions in the neurophysiological literature to describe eye kinematics [22]. Models of human three-dimensional oculomotor performance, especially for the vestibulo-ocular reflex, and feedback models of saccades involve computations on rotations that are much more stable when carried out in conic or Euler-Rodrigues parameters using quaternion arithmetic. Subtractive feedback laws adequate for simple (left or right) eye movements do not generalize to arbitrary rotational axes. The quaternion model, which uses multiplicative feedback, explains human performance better than similar models based on rotation matrices, and leads to explicit predictions such as a four-channel saccade computation [21].


6 Bibliography

[1] S. L. Altmann. Rotations, Quaternions, and Double Groups. Clarendon Press, Oxford, 1986.

[2] C. M. Brown. Gaze controls with interactions and delays. In DARPA Image Understanding Workshop (submitted to IEEE-TSMC), pages 200-218, May 1989.

[3] C. M. Brown. Prediction in gaze and saccade control. Technical Report OUEL 177189 / UR TR-295, Oxford University Dept. of Engineering Science (U. Rochester Comp. Sci. Dept.), May 1989.

[4] C. I. Connolly, D. Kapur, J. L. Mundy, and R. Weiss. GeoMeter: A system for modeling and algebraic manipulation. In Proceedings, DARPA Image Understanding Workshop, pages 797-804, May 1989.

[5] O. D. Faugeras, F. Lustman, and G. Toscani. Motion and structure from motion from point and line matches. In Proceedings, International Conference on Computer Vision, pages 25-34, June 1987.

[6] J. D. Foley and A. van Dam. Fundamentals of Interactive Computer Graphics. Addison-Wesley, 1982.

[7] W. R. Hamilton. On quaternions; or on a new system of imaginaries in algebra. Philosophical Magazine, 3rd Ser., 25:489-495, 1844.

[8] C. Hoffmann and J. Hopcroft. Towards implementing robust geometric computations. In Proceedings, ACM Symposium on Computational Geometry, 1988.

[9] Berthold K. P. Horn. Robot Vision. MIT Press / McGraw-Hill, 1986.

[10] K. I. Kanatani. Group Theory in Computer Vision. Springer Verlag, 1989.

[11] V. J. Milenkovic. Robust geometric computations for vision and robotics. In Proceedings, DARPA Image Understanding Workshop, pages 764-773, May 1989.

[12] J. L. Mundy. Private communication, August 1989.

[13] J. L. Mundy. The application of symbolic algebra techniques to object modeling. Lecture, University of Oxford, August 1989.

[14] J. L. Mundy. Symbolic representation of object models. In preparation, August 1989.

[15] T. Ottmann, G. Thiemt, and C. Ullrich. Numerical stability of geometric algorithms. In Proceedings, ACM Symposium on Computational Geometry, 1987.

[16] K. S. Roberts, G. Bishop, and S. K. Ganapathy. Smooth interpolation of rotational motion. In Computer Vision and Pattern Recognition 1988, pages 724-729, June 1988.

[17] L. G. Roberts. Machine perception of three-dimensional solids. In J. T. Tippett et al., editors, Optical and Electro-Optical Information Processing, pages 159-197. MIT Press, 1968.

[18] O. Rodrigues. Des lois géométriques qui régissent les déplacements d'un système solide dans l'espace, et de la variation des coordonnées provenant de ses déplacements considérés indépendamment des causes qui peuvent les produire. Journal de Mathématiques Pures et Appliquées, 5:380-440, 1840.

[19] D. Salesin, J. Stolfi, and L. Guibas. Epsilon geometry: building robust algorithms from imprecise computations. In Proceedings, ACM Symposium on Computational Geometry, 1989.

[20] R. Taylor. Planning and execution of straight line manipulator trajectories. In J. M. Brady, J. M. Hollerbach, T. L. Johnson, T. Lozano-Pérez, and M. T. Mason, editors, Robot Motion: Planning and Control. MIT Press, 1982.

[21] D. Tweed and T. Vilis. Implications of rotational kinematics for the oculomotor system in three dimensions. Journal of Neurophysiology, 58(4):832-849, 1987.

[22] G. Westheimer. Kinematics for the eye. Journal of the Optical Society of America, 47:967-974, 1932.

26

Page 33: Some Computational Properties of - Semantic Scholar · There are four main parameterizations of the rotation group SO(3). Two of them (rotation angle and axis, and the closely related

~ Rep ~ Repeated Rot IRandom Seq

Vec(Mat) (9N+17) (6N+13)+ 1sc 26N 19N+ Nsc Vec(Quat) (18N+4) (12N)+ 1sc 22N 12N+ Nsc Quat (16N+22) (12N+12)+ 1sc (20N+18) (12N+12)+ Nsc Matrix (27N + 26) (18N + 19)+ 1sc (44N+9) (31N+6)+ Nsc

Table 4 Operation counts for N-long rotation sequences The long form of matrix multishyplication is used here Vec(Mat) is vector representation with intermediate transformation by matrices Vec(Quat) with transformation by quaternion method Multiplications adshyditions and sine-cosine pairs are denoted by + sc etc

matrix transformation) since matrix-vector multiplication costs one third of matrix-matrix multiplication

410 Conclusions and Questions

The conclusions we can draw at this point are the following

1 Numerical simulation of variable precision arithmetic has yielded some tentative conshyclusions but in many interesting cases they are rather delicate and depend on niceties of the implementation Accurate performance predictions may only be possible with faithful simulation of the particular hardware and software at issue

2 Practical differences in numerical accuracy tend to disappear at reasonable precishysions After 200 iterations of a rotation 53 bits of mantissa precision (the double representation in the C language) yields no error to 10 decimal places With 24 bits (the float representation) effects were noticeable in the 5th to 7th decimal place with all methods

3 Normalization is vital before quaternion or matrix representations are applied to vecshytors but not as important during the composition of representations and the extra numerical computation of frequent normalization may actually decrease accuracy in the final result

4 The vector (with matrix transformation) and matrix representations for rotation sequences seem to outperform the quaternion for accuracy

5 the vector (with matrix transformation) representation for rotation sequences with a single normalization of vector length at the end is most computationally efficient and has least length error on the average

6 Averaged over many trials the difference in accuracy of the three methods amounts only to about a single bit of precision for a small number of rotation compositions (say 25 or fewer)

7 Matrices are the least computationally efficient on rotation sequences

22

5

There are also some remaining basic questions

1 What is a reasonable analytic approach to the numerical accuracy problem for inshyteresting rotational tasks (say sequences of random rotations under rounding)

2 How should optimal matrix or quaternion normalization methods be defined and constructed

Discussion

The area of robust computational geometry is becoming popular with an annual intershynational conference and increasingly sophisticated theoretical formulations Very little of that work (if any) is related to rotation algorithms Symbolic algebra systems can help with analysis [4] A good treatment of computer arithmetic and rounding appears in [15] with an application to detecting line intersection Serious qualitative representational problems arise when issues of topology (line connectivity) are at stake and much of the work is on computing line intersection accurately with finite precision arithmetic [19] For instance Milenkovics analysis is based on facts about line intersection such as the numshyber of extra bits that are necessary to guarantee a specific precision of answer Some of these facts are surprising it takes at least three times the input precision to calculate a line intersection location to the same precision [11] Robust polygon and polyhedron intersection algorithms as well as various line-sweeping algorithms are based on accurate line intersection algorithms [8]

Several points of comparison between rotation parameterizations and representations have been raised in the preceding sections It is perhaps worth repeating that little evshyidence for dramatic differences in numerical accuracy has emerged though the evidence might tend to favor matrices slightly and to condemn the vector representation

One of the common applications of rotational transforms is to affect several vectors at once either in the context of transforming a set of points around (as in graphics) or in handling coordinate systems represented as n points (n - 1 of them at points at infinity representing axis directions and one representing the origin of coordinates in a homogeshyneous coordinate system) In the first case the repeated application of the same transform will be more economical if the work is done in the matrix representation In the second the same may well be true but of course the whole point of homogeneous coordinates is to make the projective a fortiori affine transforms into linear ones implemented with matrix multiplication This means for example that coordinate systems rigid transformations inshycluding translation and rotation and point projective transforms (simple camera models) can a1l be represented as matrices and composed with matrix multiplication and that the operations of such transforms on vectors is simply matrix-vector multiplication In this context homogeneous coordinates by definition require the matrix representation

If the rotation at issue is a general one (say specified in conic (4) n) parameters) then conversion either to the quaternion or matrix form is easy Thus the representations in

23

this case do not dictate the semantics or how one wants to think about the rotation If the rotation is specified in terms of rotations about (X Y Z) coordinate axes then the matrix representation is easily and robustly constructed

Real-time applications involving concatenating rotations and applying them to a small number of vectors (perhaps one the position of a robot hand) need detailed analysis to see which representation is the most efficient Similarly for more complex geometric calculations

A common application of rotation representations in vision is to express the motion of the observer Often the rotational motions for mechanical or mathematical reasons are expressed as rotations about some fixed axes which in practice are usually orthogonal Examples are the motions induced by pan and tilt camera platforms or the expression of egomotion in terms of rotations about the observers local X Y Z axes In this case the matrix representation is the clear choice to maximize efficiency and intuition

Reconstructing the rotation that transformed known points is a useful vision task that requires fitting data to a model of the transformation The straightforward route followed since Roberts thesis [17] is to do a linear least squares fit to the transforms matrix representation This approach is perhaps risky since the matrix elements are not independent and the error being minimized thus may not be the one desired The alternative of using conic parameters has been successfully used [12] and may well be a better idea Recently more sophisticated Kalman filter methods have been used to derive motion from point matches through time [5] Model-matching with constraints allows a model to be matched to a class of objects and promising approaches are emerging that use hybrid methods of symbolic algebra implementing polynomial triangulation to get explicit solutions for known constraints followed by root-finding and sophisticated minimization techniques to search for minimum-error solutions [1314]

A less common but important set of applications is to reason about and manipulate rotations themselves For example one may want to sample rotation space evenly [9] or to create smoothly interpolated paths in rotation space between a given set of rotations [16] In these cases the Euler-Rodrigues parameters are the most convenient

Another example is the use since 1932 of quaternions in the neurophysiological litshyerature to describe eye kinematics [22] Models of human three-dimensional oculomotor performance especially for the vestibula-ocular reflex and feedback models of saccades involve computations on rotations that are much more stable when carried out in conic or Euler-Rodrigues parameters using quaternion arithmetic Subtractive feedback laws adequate for simple (left or right) eye movements do not generalize to arbitrary rotational axes The quatemion model uses multiplicative feedback explains human performance better than similar models based on rotation matrices and leads to explicit predictions such as a four-channel saccade computation [21]

24

6 Bibliography

[1] S 1 Altmann Rotations Quaternions and Double Groups Clarendon Press Oxshyford 1986

[2] C M Brown Gaze controls with interactions and delays In DARPA Image Undershystanding Workshop Submitted IEEE-TSMC pages 200-218 May 1989

[3] C M Brown Prediction in gaze and saccade control Technical Report OUEL 177189 UR TR-295 Oxford University Dept Engg Science (U Rochester Compo Sci Dept) May 1989

[4] C I Connolly D Kapur J1 Mundy and R Weiss GeoMeter A system for modeling and algebraic manipulation In Proceedings DARPA Image Understanding Workshop pages 797-804 May 1989

[5] O D Faugeras F Lustman and G Toscani Motion and structure from motion from point and line matches In Proceedings International Conference on Computer Vision pages 25-34 June 1987

[6] J D Foley and A Van Dam Fundamentals of Interactive Computer Graphics Addison-Wesley 1982

[7] W R Hamilton On quaternions or on a new system of imaginaries in algebra Philosophical Magazine 3rd Ser 25489 - 495 1844

[8] C Hoffmann and J Hopcroft Towards implementing robust geometric computations In Proceedings ACM Symposium on Computational Geometry 1988

[9] Berthold KP Horn Robot Vision MIT-Press McGraw-Hill 1986

[10] K I Kanatani Group Theory in Computer Vision Springer Verlag 1989

[11] V J Milenkovic Robust geometric computations for vision and robotics In Proshyceedings DARPA Image Understanding Workshop pages 764-773 May 1989

[12] J 1 Mundy Private communication August 1989

[13] J 1 Mundy The application of symbolic algebra techniques to object modeling Lecture University of Oxford August 1989

[14] J 1 Mundy Symbolic representation of object models In Preparation August 1989

[15] T Ortmann G Thiemt and C Ullrich Numerical stability of geometric algorithms In Proceedings A CM Symposium on Computational Geometry 1987

[16] K S Roberts G Bishop and S K Ganapathy Smooth interpolation of rotational motion In Computer Vision and Pattern Rocognition 1988 pages 724-729 June 1988

[17] 1 G Roberts Machine perception of three-dimensional solids In J T Tippett et al editors Optical and electro-optical information processing pages 159-197 MIT Press 1968

25

[18] O Rodrigues Des lois geometriques qui regissent les deplacements dun systeme solide dans lespace et de la variation des coordonnees provenant de ses deplacements consideres independamment des causes qui peuvent les produire Journal de Mathemaiiques Puree et Appliquees 5380 - 440 1840

[19] D Salesin J Stolfi and L Guibas Epsilon geometry building robust algorithms from imprecise computations In Proceedings ACM Symposium on Computational Geometry 1989

[20] R Taylor Planning and execution of straight line manipulator trajectories In J M Brady J M Hollerbach T L Johnson T Lozano-Perez and M T Mason editors Robot Motion Planning and Control MIT Press 1982

[21] D Tweed and T Vilis Implications of rotational kinematics for the oculomotor system in three dimensions Journal of Neurophysiology 58(4)832-849 1987

[22] G Westheimer Kinematics for the eye Journal of the Optical Society of America 47967-974 1932

26

Page 34: Some Computational Properties of - Semantic Scholar · There are four main parameterizations of the rotation group SO(3). Two of them (rotation angle and axis, and the closely related

5

There are also some remaining basic questions

1 What is a reasonable analytic approach to the numerical accuracy problem for inshyteresting rotational tasks (say sequences of random rotations under rounding)

2 How should optimal matrix or quaternion normalization methods be defined and constructed

Discussion

The area of robust computational geometry is becoming popular with an annual intershynational conference and increasingly sophisticated theoretical formulations Very little of that work (if any) is related to rotation algorithms Symbolic algebra systems can help with analysis [4] A good treatment of computer arithmetic and rounding appears in [15] with an application to detecting line intersection Serious qualitative representational problems arise when issues of topology (line connectivity) are at stake and much of the work is on computing line intersection accurately with finite precision arithmetic [19] For instance Milenkovics analysis is based on facts about line intersection such as the numshyber of extra bits that are necessary to guarantee a specific precision of answer Some of these facts are surprising it takes at least three times the input precision to calculate a line intersection location to the same precision [11] Robust polygon and polyhedron intersection algorithms as well as various line-sweeping algorithms are based on accurate line intersection algorithms [8]

Several points of comparison between rotation parameterizations and representations have been raised in the preceding sections It is perhaps worth repeating that little evshyidence for dramatic differences in numerical accuracy has emerged though the evidence might tend to favor matrices slightly and to condemn the vector representation

One of the common applications of rotational transforms is to affect several vectors at once either in the context of transforming a set of points around (as in graphics) or in handling coordinate systems represented as n points (n - 1 of them at points at infinity representing axis directions and one representing the origin of coordinates in a homogeshyneous coordinate system) In the first case the repeated application of the same transform will be more economical if the work is done in the matrix representation In the second the same may well be true but of course the whole point of homogeneous coordinates is to make the projective a fortiori affine transforms into linear ones implemented with matrix multiplication This means for example that coordinate systems rigid transformations inshycluding translation and rotation and point projective transforms (simple camera models) can a1l be represented as matrices and composed with matrix multiplication and that the operations of such transforms on vectors is simply matrix-vector multiplication In this context homogeneous coordinates by definition require the matrix representation

If the rotation at issue is a general one (say, specified in conic (φ, n) parameters), then conversion either to the quaternion or matrix form is easy. Thus the representations in this case do not dictate the semantics, or how one wants to think about the rotation. If the rotation is specified in terms of rotations about (X, Y, Z) coordinate axes, then the matrix representation is easily and robustly constructed.
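Both conversions are short. The sketch below (illustrative Python, not from the paper) uses the standard half-angle identity for the quaternion and the standard quaternion-to-matrix identity:

```python
import math

def axis_angle_to_quat(theta, n):
    """Unit quaternion (w, x, y, z) for a rotation by theta about unit axis n."""
    h = theta / 2.0
    s = math.sin(h)
    return (math.cos(h), s * n[0], s * n[1], s * n[2])

def quat_to_matrix(q):
    """3x3 rotation matrix corresponding to a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]

# A 90-degree rotation about Z: the resulting matrix maps (1,0,0) to (0,1,0).
q = axis_angle_to_quat(math.pi / 2, (0.0, 0.0, 1.0))
R = quat_to_matrix(q)
```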

Real-time applications involving concatenating rotations and applying them to a small number of vectors (perhaps one: the position of a robot hand) need detailed analysis to see which representation is the most efficient. The same holds for more complex geometric calculations.

A common application of rotation representations in vision is to express the motion of the observer. Often the rotational motions, for mechanical or mathematical reasons, are expressed as rotations about some fixed axes, which in practice are usually orthogonal. Examples are the motions induced by pan and tilt camera platforms, or the expression of egomotion in terms of rotations about the observer's local X, Y, Z axes. In this case the matrix representation is the clear choice to maximize efficiency and intuition.
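For instance, a pan-tilt motion can be built once as a product of two elementary rotation matrices and then applied to every vector at nine multiplies each (illustrative Python sketch, not from the paper):

```python
import math

def rot_x(a):
    """Elementary rotation about the X axis (tilt)."""
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    """Elementary rotation about the Y axis (pan)."""
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def matmul3(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Compose pan (about Y) followed by tilt (about X) into a single matrix.
pan, tilt = math.radians(30.0), math.radians(10.0)
R = matmul3(rot_x(tilt), rot_y(pan))
```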

Reconstructing the rotation that transformed known points is a useful vision task that requires fitting data to a model of the transformation. The straightforward route, followed since Roberts' thesis [17], is to do a linear least-squares fit to the transform's matrix representation. This approach is perhaps risky, since the matrix elements are not independent and the error being minimized thus may not be the one desired. The alternative of using conic parameters has been successfully used [12] and may well be a better idea. Recently, more sophisticated Kalman filter methods have been used to derive motion from point matches through time [5]. Model-matching with constraints allows a model to be matched to a class of objects, and promising approaches are emerging that use hybrid methods of symbolic algebra (implementing polynomial triangulation to get explicit solutions for known constraints) followed by root-finding and sophisticated minimization techniques to search for minimum-error solutions [13, 14].
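The risk of the unconstrained fit can be shown in a small numerical sketch (illustrative Python, not from the paper): solving for the matrix element by element from noisy correspondences ignores the orthogonality constraints, so the fitted M drifts off the rotation group:

```python
import math
import random

def matmul3(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def inv3(M):
    """Inverse of a 3x3 matrix via the adjugate."""
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    adj = [[e*i - f*h, c*h - b*i, b*f - c*e],
           [f*g - d*i, a*i - c*g, c*d - a*f],
           [d*h - e*g, b*g - a*h, a*e - b*d]]
    return [[x / det for x in row] for row in adj]

# True rotation: 0.7 radians about Z.
co, si = math.cos(0.7), math.sin(0.7)
R = [[co, -si, 0], [si, co, 0], [0, 0, 1]]

# Three points as the columns of P; Q holds their rotated, slightly
# noisy images.
P = [[1, 0, 1], [0, 1, 1], [0, 0, 1]]
random.seed(0)
Q = [[sum(R[i][k] * P[k][j] for k in range(3)) + random.uniform(-0.01, 0.01)
      for j in range(3)] for i in range(3)]

# Element-wise fit (with three points this is the exact solve M = Q P^-1).
# M M^T is no longer the identity: the fit has left the rotation group.
M = matmul3(Q, inv3(P))
Mt = [[M[j][i] for j in range(3)] for i in range(3)]
MMt = matmul3(M, Mt)
err = max(abs(MMt[i][j] - (1.0 if i == j else 0.0))
          for i in range(3) for j in range(3))
```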

A less common but important set of applications is to reason about and manipulate rotations themselves. For example, one may want to sample rotation space evenly [9], or to create smoothly interpolated paths in rotation space between a given set of rotations [16]. In these cases the Euler-Rodrigues parameters are the most convenient.
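Smooth interpolation in the Euler-Rodrigues (quaternion) parameters is commonly done by spherical linear interpolation. A sketch (illustrative Python; this is the standard slerp construction, not code from [16]):

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0, q1."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:        # q and -q are the same rotation: take the short arc
        q1 = tuple(-b for b in q1)
        dot = -dot
    if dot > 0.9995:     # nearly parallel: normalized linear interpolation
        q = tuple(a + t * (b - a) for a, b in zip(q0, q1))
        n = math.sqrt(sum(x * x for x in q))
        return tuple(x / n for x in q)
    omega = math.acos(dot)          # angle between the two unit quaternions
    s = math.sin(omega)
    w0 = math.sin((1 - t) * omega) / s
    w1 = math.sin(t * omega) / s
    return tuple(w0 * a + w1 * b for a, b in zip(q0, q1))

# Halfway between the identity and a 90-degree turn about Z is the
# 45-degree turn; slerp traverses the arc at constant angular velocity.
q_id = (1.0, 0.0, 0.0, 0.0)
q_90 = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
q_mid = slerp(q_id, q_90, 0.5)
```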

Another example is the use, since 1932, of quaternions in the neurophysiological literature to describe eye kinematics [22]. Models of human three-dimensional oculomotor performance, especially for the vestibulo-ocular reflex, and feedback models of saccades involve computations on rotations that are much more stable when carried out in conic or Euler-Rodrigues parameters using quaternion arithmetic. Subtractive feedback laws adequate for simple (left or right) eye movements do not generalize to arbitrary rotational axes. The quaternion model uses multiplicative feedback, explains human performance better than similar models based on rotation matrices, and leads to explicit predictions such as a four-channel saccade computation [21].


6 Bibliography

[1] S. L. Altmann. Rotations, Quaternions, and Double Groups. Clarendon Press, Oxford, 1986.

[2] C. M. Brown. Gaze controls with interactions and delays. In Proceedings DARPA Image Understanding Workshop, pages 200-218, May 1989. Submitted to IEEE-TSMC.

[3] C. M. Brown. Prediction in gaze and saccade control. Technical Report OUEL 1771/89 (UR TR-295), Oxford University Dept. of Engineering Science (U. Rochester Comp. Sci. Dept.), May 1989.

[4] C. I. Connolly, D. Kapur, J. L. Mundy, and R. Weiss. GeoMeter: A system for modeling and algebraic manipulation. In Proceedings DARPA Image Understanding Workshop, pages 797-804, May 1989.

[5] O. D. Faugeras, F. Lustman, and G. Toscani. Motion and structure from motion from point and line matches. In Proceedings International Conference on Computer Vision, pages 25-34, June 1987.

[6] J. D. Foley and A. Van Dam. Fundamentals of Interactive Computer Graphics. Addison-Wesley, 1982.

[7] W. R. Hamilton. On quaternions; or on a new system of imaginaries in algebra. Philosophical Magazine, 3rd Ser., 25:489-495, 1844.

[8] C. Hoffmann and J. Hopcroft. Towards implementing robust geometric computations. In Proceedings ACM Symposium on Computational Geometry, 1988.

[9] B. K. P. Horn. Robot Vision. MIT Press/McGraw-Hill, 1986.

[10] K. I. Kanatani. Group Theory in Computer Vision. Springer-Verlag, 1989.

[11] V. J. Milenkovic. Robust geometric computations for vision and robotics. In Proceedings DARPA Image Understanding Workshop, pages 764-773, May 1989.

[12] J. L. Mundy. Private communication, August 1989.

[13] J. L. Mundy. The application of symbolic algebra techniques to object modeling. Lecture, University of Oxford, August 1989.

[14] J. L. Mundy. Symbolic representation of object models. In preparation, August 1989.

[15] T. Ottmann, G. Thiemt, and C. Ullrich. Numerical stability of geometric algorithms. In Proceedings ACM Symposium on Computational Geometry, 1987.

[16] K. S. Roberts, G. Bishop, and S. K. Ganapathy. Smooth interpolation of rotational motion. In Computer Vision and Pattern Recognition 1988, pages 724-729, June 1988.

[17] L. G. Roberts. Machine perception of three-dimensional solids. In J. T. Tippett et al., editors, Optical and Electro-Optical Information Processing, pages 159-197. MIT Press, 1968.

[18] O. Rodrigues. Des lois géométriques qui régissent les déplacements d'un système solide dans l'espace, et de la variation des coordonnées provenant de ses déplacements considérés indépendamment des causes qui peuvent les produire. Journal de Mathématiques Pures et Appliquées, 5:380-440, 1840.

[19] D. Salesin, J. Stolfi, and L. Guibas. Epsilon geometry: building robust algorithms from imprecise computations. In Proceedings ACM Symposium on Computational Geometry, 1989.

[20] R. Taylor. Planning and execution of straight line manipulator trajectories. In J. M. Brady, J. M. Hollerbach, T. L. Johnson, T. Lozano-Pérez, and M. T. Mason, editors, Robot Motion: Planning and Control. MIT Press, 1982.

[21] D. Tweed and T. Vilis. Implications of rotational kinematics for the oculomotor system in three dimensions. Journal of Neurophysiology, 58(4):832-849, 1987.

[22] G. Westheimer. Kinematics for the eye. Journal of the Optical Society of America, 47:967-974, 1932.

