

CHAPTER 6

Navigation Solutions

Understanding Satellite Navigation. http://dx.doi.org/10.1016/B978-0-12-799949-4.00006-3
Copyright © 2014 Elsevier Inc. All rights reserved.

CHAPTER OUTLINE

6.1 Fundamental concepts
6.2 Generation of observation equation
6.3 Linearization
6.4 Solving for position
6.5 Other methods for position fixing
    6.5.1 Solving range equations without linearization
        6.5.1.1 Using a constraint equation
        6.5.1.2 Bancroft's method
    6.5.2 Other methods
        6.5.2.1 Doppler-based positioning
6.6 Velocity estimation
Conceptual questions
References

In this chapter, we will learn to perform the most important activity in the whole satellite navigation process, estimating the position. The whole purpose of the navigation is to achieve this objective. Here, we will first understand the approach to obtaining the solution intuitively. Then, we shall see how the inputs required to form the mathematical equations needed to solve for the position are obtained. Finally, we will obtain the solution mathematically using the linearization technique, and will discuss a few intricacies of the process. As a continuation, we will also learn other processes to obtain the same solution.

6.1 Fundamental concepts

In a real-life scenario, all distances are required to be measured in a three-dimensional (3D) space. In Chapter 1, we mentioned that to represent the position of a point with respect to a reference, we need to fix the position of the reference and get the distance of the point from it along three orthogonal directions. Now, let us see in detail the geometrical aspects of setting distances along three orthogonal


directions. We know that a point in a 3D space is the result of the intersection of three nonparallel planes. So, our task to define the point reduces to defining these three planes. Once the reference system is set, the reference point is consequently defined and the directions of the three orthogonal axes are fixed. The origin of the coordinate system (i.e. the reference point coordinates) becomes (0, 0, 0). To define the position of point P (x1, y1, z1), we first fix a distance x1 along axis x. This defines a plane x = x1 that is parallel to the yz plane at this point. This is at a distance x1 from the origin along the x axis. Thus, moving this distance along the x axis, our position is reduced to some point (x1, 0, 0) on this plane. The job is to define the other two planes orthogonal to the one already defined. At this stage, we define another plane, y = y1, which is parallel to the xz plane with the normal distance, y1, from the origin. This further restricts the locus of our point to a straight line parallel to the z axis formed by the intersection of these two planes. Moving a distance, y1, from the current position along the y axis puts us on some point (x1, y1, 0) on this line. Then, as the final plane z = z1, perpendicular to the z axis, is defined, it intersects the mentioned line of locus at z = z1. Again moving a distance, z1, along the z axis, we reach the intersection of the three planes at the position (x1, y1, z1). Thus, these three mutually perpendicular movements completely fix our point at the point P (x1, y1, z1). This is shown in Figure 6.1.

Because the task is only to define the planes, it can be done with respect to any arbitrary reference point in the frame using the relative distances. Even then, the point in question may be fixed with respect to the original reference point, if we know the position of this new secondary reference in the original frame. Defining planes with respect to any reference is done by representing the vectorial distance from this new reference point to the point of interest in terms of three independent bases or coordinates. Remember, we said the vectorial distance and not simple range, because it requires both a sense of direction and magnitude to define the point.

FIGURE 6.1 Defining a position in three dimensions.


But, what if we know only the radial range from this new reference point to our point of interest instead of the vectorial distance? In a 3D space, any distance vector should have three independent components. In a spherical system, the three coordinates are the radial range and two angular deviations from some fixed planes. So, when we mention only the range, it means we lose two sets of information out of three. In such cases, the exact position of the point of interest may still be derived by adding some more independent information, which effectually compensates for the loss. This may be done by adding independent range measurements from other new reference points whose positions are known. The basic idea is to form definite intersecting surfaces to reduce the possible intersection to a point.

This is what is done in satellite navigation. Here, the new planes are relative to different reference points, which are the locations of three satellites whose absolute positions are known in the chosen reference frame. The ranges from these satellites to the point form the independent information from which three independent spherical surfaces may be obtained. Using these, the position of the point may be derived by locating the point of intersection in terms of the absolute reference. However, because the surfaces thus created are nonlinear, an adequate number of such surfaces is required to explicitly indicate the position.

In a 3D space, two planes intersect to form a straight line that is a linear function of the coordinates. On the contrary, two spherical surfaces intersect to form a circle that is a quadratic function. So, unlike the former case, in which an additional plane surface sufficiently defines a point, an additional spherical surface intersects with the circle at two different points, and hence is not enough to explicitly fix a point.

How much information, therefore, is sufficient for this need? This may be elucidated by considering an example. First, let us consider a scenario where we attempt to represent the position of a point, P, located on the xy plane, i.e. in a two-dimensional (2D) space.

In 2D space there are two unknowns for the location of a point, P. In Cartesian coordinates, these are "x" and "y" with respect to the absolute reference point O at the origin of the axes. The point may also be represented in terms of its range from the origin and its direction given by the angle it makes with the axes. The range may be obtained from the individual Cartesian coordinates as r = √(x² + y²) and the angle as θ = tan⁻¹(y/x). If we know only the radial range of the point from this reference, we can only represent the distance as

r^2 = x^2 + y^2   (6.1a)

This is the equation of a circle about the origin with radius r. In squaring the values, we actually lose the information about the sign and cannot find θ from it. Thus, we have a deficit of information, namely the exact direction.

Furthermore, let there be more such information available, stating that the radial range of the same point from a new reference point at (x1, y1) is r1. So,

(x - x_1)^2 + (y - y_1)^2 = r_1^2   (6.1b)


The above equation represents another circle, centered at (x1, y1) with radius r1. Substituting Eqn (6.1a) into Eqn (6.1b), we get

x_1^2 + y_1^2 - 2xx_1 - 2yy_1 = r_1^2 - r^2   (6.2)

This is a linear equation of the form ax + by = c, where a = 2x1, b = 2y1, and c = r² − {r1² − (x1² + y1²)}. Thus, the two quadratic equations formed out of two observations in a 2D space represent two circles that intersect at two different points, both of which lie on the straight line represented by this linear function of x and y. The values of the coordinates are yet to be determined; the solution can be obtained with an additional equation. Stated mathematically, this establishes the fact that the quadratic nature of the range equations leaves two possible roots of the unknowns when they are formed from two equations, and we need an additional equation to get the exact solution.

The same fact is also evident from the corresponding geometry of the equations, as shown in Figure 6.2. In 2D space, using only the range information R1 and R2 from two relative reference points, A and B, respectively, leads to ambiguity with probable positions at P1 and P2, where the conditions set by both equations are satisfied. In addition, if point P is known to be positioned at a distance R3 from another relative reference point C, it must lie on the perimeter of the circle around C with radius R3. This circle cuts only the point P1 among the probable two. Only then can we unambiguously find the position of P.

We can generalize this observation to higher-order spaces with more unknowns. For "k" unknowns, k nonlinear equations of order 2 leave us with two equiprobable solutions. Thus, we need one more equation to explicitly solve for the unknowns; therefore, a total of [k + 1] equations are required. So, in real-life conditions, where the observation equations are quadratic in a 3D space, it requires 3 + 1 = 4 equations to solve for positions explicitly using them.

FIGURE 6.2 Two-dimensional case of position fixing.


FOCUS 6.1 REQUIREMENT OF EQUATIONS FOR SOLUTION

Here, we explain with simple examples the requirement of numbers of equations in the case of spherical nonlinearity. We treat the problem both for the case of solving with a constraint equation and for an additional observation equation of spherical nature.

Suppose we have a second-order equation such as:

x^2 + y^2 = 9   (i)

To solve for the variables x and y, we need more equations like this. Let another equation be

x^2 + y^2 - 10x + 9 = 0   (ii)

Combining Eqns (i) and (ii), we get the value of x as

x = 18/10 = 1.8   (iii)

Thus, the point at which these two equations are simultaneously satisfied has the value of x as 18/10. However, taking this value of x derived in Eqn (iii) and putting it into either Eqn (i) or Eqn (ii) leaves us with a quadratic in y. Putting it into Eqn (i), we get

(18/10)^2 + y^2 = 9
or, 100y^2 = 576
or, y = ±2.4

Thus, from these two Eqns (i) and (ii), we cannot obtain an explicit solution for (x, y) even in a 2D case. Now, we need to add a new equation.

First, let us suppose that we add a constraint equation of the form

3x + 4y = 15   (iv)

This is a constraint equation that states that the solutions should also simultaneously lie on the line defined by it. Putting the value of x = 1.8 into this equation, we get the exact value of y as

y = (15 - 3 × 1.8)/4 = 2.4

Thus, we get the exact solution for x and y. Second, assume that we have a separate equation of the form

5x^2 + 5y^2 - 18x - 44y + 93 = 0   (v)

Notice that all of the nonlinear equations that we have used are spherical in nature. Again, using the derived value of x = 1.8 here, we get

16.2 + 5y^2 - 32.4 - 44y + 93 = 0
or, 5y^2 - 44y + 76.8 = 0   (vi)

So, the possible solutions of y are

y = 6.4 and y = 2.4

Therefore, the actual solution of (x, y) is (1.8, 2.4), which we get from three independent equations.
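The arithmetic of this example can be reproduced with a few lines of MATLAB; the sketch below is illustrative only (the variable names are ours, not from the book's programs):

% Circles (i) and (ii): x^2 + y^2 = 9 and x^2 + y^2 - 10x + 9 = 0.
% Subtracting them removes the quadratic terms and fixes x.
x = 18/10;                                       % = 1.8

% Circle (i) alone still leaves two candidates for y.
yFromCircle = [sqrt(9 - x^2), -sqrt(9 - x^2)];   % +2.4 and -2.4

% Option 1: the linear constraint (iv), 3x + 4y = 15, picks one candidate.
yFromConstraint = (15 - 3*x)/4;                  % 2.4

% Option 2: the third circle (v) also shares only y = 2.4 with circle (i).
yFromThirdCircle = roots([5, -44, 76.8]);        % 6.4 and 2.4

disp([x, yFromConstraint])                       % the unambiguous fix (1.8, 2.4)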


Let us consolidate what we have learned in this discussion:

1. First, the position of any unknown point may be represented in a reference frame by knowing the position of a new reference point in the frame and the vectorial distance from this relative reference point to the point of interest.

2. If, instead of the vectorial distance, only the radial distance (range) is known, the lack of adequate information may be compensated for by adding enough similar range information from other such points of known location to explicitly define the position of the point.

6.2 Generation of observation equation

This general observation may be extended to the practical scenario of 3D space, in accordance with the requirements of satellite navigation.

In satellite navigation systems, the preferred absolute reference system is the geocentric earth-centered earth-fixed (ECEF) frame. However, it may be transformed to any other frame according to the requirements. The representation of the position of any point in this frame may be done using secondary references, which are the navigation satellites in the sky. For this, we obviously need the absolute position of the satellite and the vectorial distance of the point from the satellite. However, because it is not feasible to obtain vectorial distances, positioning of the point can still be done, as we have just seen, if we know the ranges from satellites placed at known locations. This is known as trilateration: the estimation of the position of a point unambiguously based on the measurements of distances from three or more known reference locations.

To estimate the position of a point using a satellite navigation system, it is thus necessary to have two things ready: the position of the satellites and the distance of the point from these satellites. Range observation equations are generated from this information and the position is estimated in 3D space by solving these equations. Therefore, two aspects are important at this point:

1. To obtain the required range information
2. To solve the equations

We have discussed the first aspect in the previous chapter, and hence shall concentrate only on the second here.

Ranges of satellites are measured at the receiver. If the ranges measured from three satellites, S1, S2, and S3, are R1, R2, and R3, respectively, we get three quadratic equations, formed using the three reference positions and the corresponding ranges. These equations are

R_1 = \sqrt{(x_{S1} - x)^2 + (y_{S1} - y)^2 + (z_{S1} - z)^2}   (6.3a)

R_2 = \sqrt{(x_{S2} - x)^2 + (y_{S2} - y)^2 + (z_{S2} - z)^2}   (6.3b)

R_3 = \sqrt{(x_{S3} - x)^2 + (y_{S3} - y)^2 + (z_{S3} - z)^2}   (6.3c)


where xSi, ySi, and zSi are the coordinates of the ith satellite (i = 1, 2, 3) and x, y, z are the unknown user position coordinates. This constitutes the set of observation equations.

Geometrically, the ith of the above equations states that the user is located anywhere on a spherical surface of radius Ri with the satellite located at (xSi, ySi, zSi) at the center of the sphere. Simultaneous measurements from two such references put the possible location of the user on the intersection of two spheres. Using two of the equations of (6.3), the information may be reduced to a linear equation in (x, y, z), signifying that the intersection of the corresponding two spheres leads to planar geometry. The possible locus of positions actually forms a circle that lies on the plane represented by the obtained linear equation in 3D space.

A third reference satellite and the corresponding range measured by the user generate the third equation, Eqn (6.3c). This equation combines with the effective circular intersection obtained previously to reduce the possible positions of the point to two solutions at two equally probable locations. Therefore, either an additional measurement from a fourth satellite is required or there must be some separate independent relationship existing between the user's coordinates. The latter is called the constraint equation. This additional information, augmenting the three observation equations, suffices to obtain the particular solution for the position.

Thus, the nonlinearity of the range equations demands four navigational satellites to solve for the unknown position coordinates. However, this issue of solving the nonlinearity problem can be resolved in a different manner. Nevertheless, we shall see that even without nonlinearity, we still require four satellites to solve for the position; the reason for this is different and will become clear in the next section. First, let us see how this nonlinearity problem is resolved.
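Before doing so, it is worth noting how directly Eqn (6.3) maps to code when the positions are known. The MATLAB fragment below is a minimal sketch with placeholder numbers (loosely based on values used later in Focus 6.2); it simply evaluates the ranges a receiver at a given point would measure:

% Geometric ranges of Eqn (6.3): one row of satPos per satellite, in km.
satPos  = [ 2133.9   24391.0   8911.5 ;
               0.0   24484.0   8911.5 ;
            4100.6   23256.0  11012.0 ];
userPos = [ 206.4     5910.1   2389.3 ];       % the point being located

% Ri = sqrt((xSi - x)^2 + (ySi - y)^2 + (zSi - z)^2)
R = sqrt(sum((satPos - userPos).^2, 2));
disp(R.')                                      % the three range observations

The inverse problem, recovering (x, y, z) from the measured ranges, is what the rest of the chapter addresses.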

6.3 Linearization

From our discussion in the last section, the problem of position fixing is reduced to the problem of solving simultaneous quadratic equations. At the same time, we know that the three simultaneous quadratic equations in (6.3) cannot be solved to get the unknowns x, y, and z unambiguously, because these quadratic equations give two possible solutions for their unknown variables. We therefore need more information to solve for the unknowns.

There are three different methods to attack this problem. One is to take an additional independent observation equation and solve four simultaneous quadratic equations. The second option is to use a constraint equation instead, to resolve the ambiguities that arise on solving three quadratics. A constraint equation is the definite relation that the unknown variables always maintain between them. The final possibility, as chosen in most navigational receivers, is that these quadratic equations are linearized to form three linear differential equations. We will now see how this is done.

Linearization (Kaplan et al., 2006), defined in this context, is the technique of converting quadratic observation equations into linear differential equations about a fixed nominal approximated position, which may be an initial guess of the solution. These linear equations are then solved by standard methods to get the differential


values of the true position coordinates with respect to those of the initial guess. Once these differential values are estimated, they can be added to the initial guess to obtain the absolute position solution.

To elaborate on this, we first start from Taylor's theorem, which says that if the value of a multivariate function f(X) is known at a point X0, its value at a nearby point X is

f(X) = f(X_0) + f'(X_0)\,dX + \tfrac{1}{2} f''(X_0)\,dX^2 + \ldots   (6.4)

where f' and f'' are the first- and second-order derivatives of the function f, respectively, with respect to X, evaluated at the known position X0.

In our case, the function is the range, R, of the satellite from the user position, P, which is a function of both the user position P = (x, y, z) and that of the satellite, Ps = (xs, ys, zs). Thus, we can write the function R = R(xs, ys, zs, x, y, z) as

R = \sqrt{(x_s - x)^2 + (y_s - y)^2 + (z_s - z)^2}   (6.5)

For any instant (supposing we have frozen time at that instant), the known satellite positions are fixed. Effectively, the range remains a function of the unknown user position variables (x, y, z). At this instant, let us consider an approximate (practically close enough) user position, P0 = (x0, y0, z0). Now, expressing the range at the true position P by expanding the range function about this point in terms of the range at position P0 according to Taylor's theorem, and considering only up to the first-order derivatives, it follows from Eqn (6.4) that

R(x, y, z) = R(x_0, y_0, z_0) + \left.\frac{\partial R}{\partial x}\right|_{P_0}\Delta x + \left.\frac{\partial R}{\partial y}\right|_{P_0}\Delta y + \left.\frac{\partial R}{\partial z}\right|_{P_0}\Delta z

or, R(x, y, z) - R(x_0, y_0, z_0) = \left.\frac{\partial R}{\partial x}\right|_{P_0}\Delta x + \left.\frac{\partial R}{\partial y}\right|_{P_0}\Delta y + \left.\frac{\partial R}{\partial z}\right|_{P_0}\Delta z   (6.6)

The higher-order derivatives can be neglected owing to our assumption of the close proximity of the approximated position to the true values. Here, the finite differences along the coordinates between the two points are represented as

\Delta x = x - x_0, \quad \Delta y = y - y_0, \quad \Delta z = z - z_0   (6.7)

Calling the geometrically calculated range from P0 to the satellite R0, so that

R_0 = \sqrt{(x_s - x_0)^2 + (y_s - y_0)^2 + (z_s - z_0)^2}   (6.8)

Equation (6.6) can be rewritten as

R - R_0 = \left(\frac{\partial R}{\partial x}\Delta x + \frac{\partial R}{\partial y}\Delta y + \frac{\partial R}{\partial z}\Delta z\right)\bigg|_{P_0}

or, \Delta R = \left(\frac{\partial R}{\partial x}\Delta x + \frac{\partial R}{\partial y}\Delta y + \frac{\partial R}{\partial z}\Delta z\right)\bigg|_{P_0}   (6.9)



where ΔR = R − R0 is the finite differential range between the true and the approximated position. This is illustrated in Figure 6.3.

FIGURE 6.3 Elements of linearization process.

Partially differentiating the expression for the range with respect to x, y, and z at the approximated position P0 (x0, y0, z0), and putting the values into Eqn (6.9), we get

\Delta R = -\frac{(x_s - x_0)}{R_0}\Delta x - \frac{(y_s - y_0)}{R_0}\Delta y - \frac{(z_s - z_0)}{R_0}\Delta z = g_\alpha \Delta x + g_\beta \Delta y + g_\gamma \Delta z   (6.10)

gα, gβ, and gγ are the partial derivatives of the range at the approximated point with respect to x, y, and z, respectively. Because the position coordinates of this nominal point (x0, y0, z0) are known, the values of gα, gβ, and gγ can easily be determined. They also represent the direction cosines of the vector from the satellite to the nominal position. The negative sign shows that an increment in Δx results in a decrement in the differential range, in the sense in which the parameters are chosen.

Thus, the nonlinear range equations with unknown coordinates have turned into a linear differential equation about the nominal position, assumed a priori, with the relative errors in position as unknowns. The set of such linearized observation equations can be solved to obtain these position errors for each coordinate. Because the nominal position is known, the true position can be determined from it by adding the estimated relative errors to it.
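To get a feel for how good the first-order approximation of Eqn (6.10) is, the small MATLAB sketch below compares the linearized range difference with the exact one for an illustrative geometry (the numbers are placeholders, roughly those used later in Focus 6.2):

% Check of the linearization in Eqn (6.10); coordinates in km.
satPos = [2133.9  24391.0  8911.5];     % satellite position (xs, ys, zs)
p0     = [ 458.9   5831.1  2543.3];     % nominal (approximate) position P0
pTrue  = [ 206.4   5910.1  2389.3];     % true position, close to P0

range = @(p) norm(satPos - p);
R0 = range(p0);
g  = -(satPos - p0)/R0;                 % g_alpha, g_beta, g_gamma at P0

dP      = pTrue - p0;                   % (dx, dy, dz)
dR_lin  = g*dP.';                       % linear prediction of Eqn (6.10)
dR_true = range(pTrue) - R0;            % exact change in range
fprintf('linearized: %.4f km, exact: %.4f km\n', dR_lin, dR_true);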

Nevertheless, we still need four satellites, even after linearization, to find the position and time. To understand why, let us recall how the user receiver measures the range of each reference satellite. Broadly, it measures the range from the time delay with which a definite phase of the signal, transmitted from the satellite, is received at the receiver. The fundamental question here is, how does the receiver know when any signal is being transmitted by the satellite? Here comes the utility of the ranging codes. We learned in Chapter 4 that in a satellite navigation system, the ranging codes are transmitted synchronously with the navigation data and carrier phase of the signal, such that the transmission instant of each code bit, or the partial phase thereof, can be exactly derived from the time stamp present in the signal and the knowledge of the code chip rate. The received time is derived from the clock present in the receiver itself. Thus, the propagation time is measured from the difference between the time at which a message was transmitted and the time when the message was received at the receiver. This time interval is multiplied by the velocity of light, c, to obtain the range.

The transmission and reception times are obtained from two different clocks: the former from the satellite clock and the latter from the receiver. But the issue here is that the receiver clock does not have as much accuracy and precision as the satellite clock. The satellite clock is an atomic clock of very high stability, about ~10⁻¹³, and thus keeps time very accurately. The receiver clock, on the other hand, is cheap and consequently its stability is low, of the order of about ~10⁻⁶ to 10⁻⁹. Thus, the clock at the receiver is not synchronous with the satellite atomic clock and drifts with respect to the latter, leading to a relative clock offset. There remains an intrinsic time delay (or advancement) between the satellite and the receiver time. This affects the ranging process and hence the measured range. The error in ranging is equal to the product of the offset between the two clocks and the velocity of light.

Let a definite phase of the signal be transmitted at true time Tt, which is also the satellite clock time. Also, let this phase be received at true time Tr, after a traverse time of Δt = Tr − Tt. At the instant of reception, if the receiver clock is shifted by an amount +δtu with respect to the satellite time, the time that the receiver registers for receiving the phase is Tr + δtu. Thus, to the receiver, the time of propagation is

\Delta t_u = (T_r + \delta t_u) - T_t = (T_r - T_t) + \delta t_u = \Delta t + \delta t_u   (6.11a)

The range thus obtained at the receiver is

R = c\,\Delta t_u = c\,\Delta t + c\,\delta t_u = \rho + c\,\delta t_u   (6.11b)

An error of cδtu is thus incurred in the range owing to the receiver clock shift. It must be appreciated that an offset of 1 µs leads to a range error of 300 m. This resultant offset in range remains added to the measured range. Therefore, this unknown receiver clock offset also needs to be determined for range correction.

This resultant offset in range is taken as an unknown variable in solving for the position. So, even for the linear equations, one more unknown is added to the three unknown position coordinate variables. Thus, to get the solution for these four unknown variables, four linearized observation equations are required. One more reference satellite and its distance are therefore necessary. This results in a requirement of four satellites for position and time estimation.

6.4 Solving for position

With the introduction of a new unknown (i.e. the receiver clock shift with respect to the satellite clock), the basic observation equation changes, as given in Eqn (6.11b), to

R_i = \rho_i + c\,\delta t_u


where Ri is the pseudorange measurement to the ith satellite, ρi is the corresponding geometric range, and δtu is the receiver clock offset with respect to the satellite time. Because all satellite clocks are synchronous, this shift is the same for all observations. We are considering the effective additional length that gets erroneously added to the range as a result of this shift, so we multiply the clock bias δtu by c, the velocity of light in vacuum, to convert it into an effective range error.

Four observation equations for four satellites can be written by expanding the geometric range as a function of the coordinates as

R_1 = \sqrt{(x_{S1} - x)^2 + (y_{S1} - y)^2 + (z_{S1} - z)^2} + c\,\delta t_u

R_2 = \sqrt{(x_{S2} - x)^2 + (y_{S2} - y)^2 + (z_{S2} - z)^2} + c\,\delta t_u

R_3 = \sqrt{(x_{S3} - x)^2 + (y_{S3} - y)^2 + (z_{S3} - z)^2} + c\,\delta t_u

R_4 = \sqrt{(x_{S4} - x)^2 + (y_{S4} - y)^2 + (z_{S4} - z)^2} + c\,\delta t_u   (6.12)

These simultaneous observation equations are linearized about the approximate solution Xa = (x0, y0, z0, cδtu0), and the resultant linearized equations become

\Delta R_1 = -G_{\alpha 1}\Delta x - G_{\beta 1}\Delta y - G_{\gamma 1}\Delta z + \Delta b

\Delta R_2 = -G_{\alpha 2}\Delta x - G_{\beta 2}\Delta y - G_{\gamma 2}\Delta z + \Delta b

\Delta R_3 = -G_{\alpha 3}\Delta x - G_{\beta 3}\Delta y - G_{\gamma 3}\Delta z + \Delta b

\Delta R_4 = -G_{\alpha 4}\Delta x - G_{\beta 4}\Delta y - G_{\gamma 4}\Delta z + \Delta b   (6.13a)

where all terms are in accordance with those defined in Eqns (6.6)-(6.10) and carry the ordered index of the four different satellites. Δb is the newly added term and represents the finite difference between the true effective range error owing to the receiver clock bias and the value assumed in the initial guess (i.e. Δb = c[δtu − δtu0]). The set of equations in (6.13a) may be written in matrix form as

\Delta R = G_a \cdot \Delta X_a   (6.13b)

where

G_a = \begin{pmatrix} -G_{\alpha 1} & -G_{\beta 1} & -G_{\gamma 1} & 1 \\ -G_{\alpha 2} & -G_{\beta 2} & -G_{\gamma 2} & 1 \\ -G_{\alpha 3} & -G_{\beta 3} & -G_{\gamma 3} & 1 \\ -G_{\alpha 4} & -G_{\beta 4} & -G_{\gamma 4} & 1 \end{pmatrix}

\Delta R = [\Delta R_1 \; \Delta R_2 \; \Delta R_3 \; \Delta R_4]^T and \Delta X_a = [\Delta x \; \Delta y \; \Delta z \; \Delta b]^T.

Now ΔXa may be obtained by standard techniques such as iteration, simple least-squares solution, weighted least-squares solution, and so forth (Axelrad and Brown, 1996; Strang, 2003). The least-squares solution becomes

\Delta X_a = \left(G_a^T G_a\right)^{-1} G_a^T \Delta R   (6.14)

When this derived ΔXa is added to the initially assumed approximate position Xa, about which the equations have been linearized, the true position solution is obtained as X = Xa + ΔXa.


Recall that we assumed at the beginning that the approximated point is near the true position, such that the linearity condition in the error holds good and the higher-order differentials of Taylor's theorem may be omitted. However, it is not always possible to approximate such a position, because the receiver may have no idea about its location. In such cases (which are more likely), the estimation may start with an arbitrary approximate position where the above conditions are not fulfilled and hence higher-order terms in the Taylor series exist. As a result of neglecting these higher-order terms in the estimation process, the solution arrived at will definitely carry some error. However, it will be nearer the real position. So, this solved position of the first estimation may now be taken as the initial guess and another iteration of solving for the position may be carried out with the same set of data to reach a still nearer position. In this way, after a few iterations for a single definite point, the solution converges to the true position. These situations may thus be handled through the iteration process, in which the previous steps are repeated until the solution is obtained. Box 6.1 describes the estimation process in which the solution is obtained after a few iterations.
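The whole procedure of Eqns (6.12)-(6.14), including the iteration just described, fits in a few lines of MATLAB. The sketch below is illustrative only (it is not the position_main.m program of Box 6.1, and the function and variable names are ours); satPos is an n-by-3 matrix of satellite coordinates and R an n-by-1 vector of measured pseudoranges in the same units:

% Iterative linearized least-squares fix, Eqns (6.12)-(6.14).
% X0 is the initial guess [x0; y0; z0; b0]; nIter the number of iterations.
function X = linearizedFix(satPos, R, X0, nIter)
    X = X0(:);
    for k = 1:nIter
        rho = sqrt(sum((satPos - X(1:3).').^2, 2));        % ranges at the guess
        G   = [-(satPos - X(1:3).')./rho, ones(size(R))];  % rows: [-Ga -Gb -Gg 1]
        dR  = R - (rho + X(4));               % measured minus predicted pseudorange
        dX  = (G.'*G)\(G.'*dR);               % least-squares solution, Eqn (6.14)
        X   = X + dX;                         % updated position and clock bias
    end
end

Fed with data such as those of Focus 6.2 below and a rough initial guess, a routine of this kind settles to the final values within two or three iterations.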

FOCUS 6.2 SOLVING FOR POSITION

We assume the following constant values for the radius of the earth, Re, and the radial distance of the satellites, Rs, from the earth's center, expressed in km:

Re = 6.3781 × 10³
Rs = 2.6056 × 10⁴

Let the true position of the user be:

Latitude = 22° N
Longitude = 88° E

The coordinates in the ECEF frame are:

xt = 206.3853 km
yt = 5.9101 × 10³ km
zt = 2.3893 × 10³ km

Let the clock offset of the user receiver be such that c·Δt = b = 15 km. These values are initially unknown. Only the satellite positions and measured ranges are known. The solution of the position estimation, obtained from the measured ranges and the approximate position, must converge to these values.

Let the Cartesian coordinates of satellites S1, S2, S3, and S4 in the ECEF frame in kilometers, as obtained from the satellite ephemeris transmitted by the satellites in their message, be:

Sat. S1: xS1 = 2.1339 × 10³, yS1 = 2.4391 × 10⁴, zS1 = 8.9115 × 10³
Sat. S2: xS2 = 0.0,          yS2 = 2.4484 × 10⁴, zS2 = 8.9115 × 10³
Sat. S3: xS3 = 4.1006 × 10³, yS3 = 2.3256 × 10⁴, zS3 = 1.1012 × 10⁴
Sat. S4: xS4 = 2.0581 × 10³, yS4 = 2.3525 × 10⁴, zS4 = 1.1012 × 10⁴

The measured ranges for satellites S1 to S4, expressed in km, are:

Rt1 = 1.9708 × 10⁴
Rt2 = 1.9702 × 10⁴
Rt3 = 1.9773 × 10⁴
Rt4 = 1.9714 × 10⁴


Initially, the true position of the user is not known. Thus, let us take an initial approximate position in Cartesian coordinates in the ECEF frame as:

xa0 = 458.9177 km
ya0 = 5.8311 × 10³ km
za0 = 2.5433 × 10³ km
ba0 = 10 km

and the ranges calculated in km from the initial approximate position to the four satellites are:

Ra01 = 1.9703 × 10⁴
Ra02 = 1.9726 × 10⁴
Ra03 = 1.9723 × 10⁴
Ra04 = 1.9691 × 10⁴

dRa0 = [4.2000; −23.5262; 50.2975; 23.1277]

The directional cosines with respect to the approximate position for satellites S1 to S4 are:

Gα1 = 0.0851    Gα2 = −0.0233   Gα3 = 0.1847    Gα4 = 0.0813
Gβ1 = 0.9424    Gβ2 = 0.9461    Gβ3 = 0.8839    Gβ4 = 0.8990
Gγ1 = 0.3234    Gγ2 = 0.3230    Gγ3 = 0.4296    Gγ4 = 0.4303

The observation equations after linearization turn into:

4.2000   = −0.0851 dx − 0.9424 dy − 0.3234 dz + db
−23.5262 = +0.0233 dx − 0.9461 dy − 0.3230 dz + db
50.2975  = −0.1847 dx − 0.8839 dy − 0.4296 dz + db
23.1277  = −0.0813 dx − 0.8990 dy − 0.4303 dz + db

This can be expressed in the form dR = Ga × dXa:

[4.2000; −23.5262; 50.2975; 23.1277] =
[−0.085055  −0.94244  −0.32337  1;
 +0.023277  −0.94611  −0.32301  1;
 −0.184740  −0.88393  −0.42959  1;
 −0.081258  −0.89903  −0.43029  1] × [dx; dy; dz; db]

The vector dXa can simply be solved as dXa = Ga⁻¹ × dR:

dXa = [−252.9364; 73.1880; −156.2947; 1.1212]

Hence, the new positions after the first iteration become Xa1 = Xa0 + dXa:

xa1 = 458.9177 − 252.9364 = 205.9813 km
ya1 = 5.8311 × 10³ + 73.1880 = 5.9043 × 10³ km
za1 = 2.5433 × 10³ − 156.2947 = 2.3870 × 10³ km
ba1 = 10 + 1.1212 = 11.1212 km

After the first iteration, we come closer to the actual position than the first approximation.


Now, with the new values of x, y, z, and b, we repeat the steps. The ranges calculated from the position obtained after the first iteration to the four satellites, expressed in km, are:

Ra11 = 1.9710 × 10⁴
Ra12 = 1.9704 × 10⁴
Ra13 = 1.9775 × 10⁴
Ra14 = 1.9716 × 10⁴


dRa1 = [−2.3798, −2.3653, −2.3110, −2.3670]

Note that the absolute values of the differential ranges have reduced compared with the first iteration. The directional cosines with respect to the approximate position for satellites S1 to S4 are now:

Gα1 = 0.0979    Gα2 = −0.0105   Gα3 = 0.1971    Gα4 = 0.0940
Gβ1 = 0.9385    Gβ2 = 0.9435    Gβ3 = 0.8779    Gβ4 = 0.8942
Gγ1 = 0.3312    Gγ2 = 0.3313    Gγ3 = 0.4364    Gγ4 = 0.4377


The observation equations take the form dR = Ga × dXa:

[−2.3798; −2.3653; −2.3110; −2.3670] =
[−0.0979  −0.9385  −0.3312  1;
 +0.0105  −0.9435  −0.3313  1;
 −0.1971  −0.8779  −0.4364  1;
 −0.0940  −0.8942  −0.4377  1] × [dx; dy; dz; db]

The vector dXa can simply be solved as dXa = Ga⁻¹ × dR:

dXa = [0.4042; 5.8132; 2.3114; 3.8808]

Thus, the updated positions after the second iteration become Xa2 = Xa1 + dXa:

xa2 = 205.9813 + 0.4042 = 206.3855 km
ya2 = 5.9043 × 10³ + 5.8132 = 5.9101 × 10³ km
za2 = 2.3870 × 10³ + 2.3114 = 2.3893 × 10³ km
ba2 = 11.1212 + 3.8808 = 15.0020 km

After the second iteration, the solution converges to the true position.



BOX 6.1 MATLAB FOR POSITION FIXING

The MATLAB code position_main.m was run to obtain the position solution from known values of satellite positions and corresponding ranges. The input to the program is the navigation and observation information preloaded in text files. From the navigation data present in the former file, the positions of the visible satellites at any instant are obtained, whereas the corresponding range information was obtained from the latter. On running the program, the following information is provided in sequence:

The position and range of satellites:

x = 22657881.0793 in meters
y = 13092933.5636 in meters
z = 5887881.983 in meters
R = 22889484.2157 in meters

The program then assumes an approximate position and displays it as:

Approximate position assumed:
x_apx = 302536.5663 in meters
y_apx = 5772741.575 in meters
z_apx = 2695567.787 in meters
b_apx = 10 in meters

From these data, the best set of four satellites is selected by obtaining the minimum dilution of precision (DOP), which the program flashes as output:

The minimum DOP obtained is 0.094484.

The program starts the iterative calculation of the position following the user input for the number of iterations required. It then displays the results it derives sequentially, as in the following.

Iteration # n
Approximated ranges to the selected four satellites are:

1.0e+007 × [2.2673  2.3734  2.1757  2.0181] in m

Differences in ranges are: dR

1.0e+004 × [−3.9911  −4.5629  −4.5205  −4.5642] in m

Linearized observation equation is: dR = G * dX

−39,910.5553 = 0.4043*dx − 0.74338*dy + 0.53285*dz + 1*db
−45,629.4328 = −0.5439*dx − 0.04069*dy − 0.83816*dz + 1*db
−45,204.8555 = 0.0474*dx − 0.58482*dy − 0.80978*dz + 1*db
−45,641.5591 = −0.2551*dx − 0.95848*dy − 0.12731*dz + 1*db

Solution for dX is: inv(GT*G)*GT*dR

1.0e+004 × [0.4120; 0.3869; 0.3305; −4.0461] in m

Solution after n iterations is:

1.0e+006 × [1.1965  6.2759  1.5954  0.1143] in m

Finally, after the requisite number of iterations is over, it displays the final solution as:

Final solution of coordinates of user are:


6.5 Other methods for position fixing

6.5.1 Solving range equations without linearization

Here, we discuss how to fix the position of the user by using the same range measurements, but with different methods to get the solution. A number of methods have been put forward by researchers in this field that employ analytical means to find the solution. However, we will describe only two methods that use the original quadratic observation equation, but handle it with two different approaches. The first uses an additional linear equation called a constraint equation; the other logically selects the true position from two alternatives.

6.5.1.1 Using a constraint equation

As stated before, it is possible to solve for four unknowns from quadratic equations without linearization. For this, we need a constraint equation defining the fixed relationship that the coordinates follow, in addition to the four quadratic equations. This makes a total of five equations, and is in accordance with our previous observation on the requirement of equations: four unknowns and one more constraint equation to remove the quadratic ambiguity.

One such constraint may be the assumption that the difference between the square of the radius of the user position and the square of the range error resulting from the clock shift is a constant (Grewal et al., 2001). This constraint may be mathematically represented as

(x^2 + y^2 + z^2) - b^2 = k^2   (6.15)

We have chosen this constraint to make our computations simple when we describe the process. However, other constraints relating the position coordinates and the bias will do as well, provided the constraint equation is independent.

The first observation equation was

R_1 = \sqrt{(x_{S1} - x)^2 + (y_{S1} - y)^2 + (z_{S1} - z)^2} + c\,\delta t_u   (6.16)

Expanding the square terms, and denoting cδtu as b, we get the equation

R_1 = \sqrt{x_{S1}^2 + x^2 + y_{S1}^2 + y^2 + z_{S1}^2 + z^2 - 2x_{S1}x - 2y_{S1}y - 2z_{S1}z} + b   (6.17a)

With a few manipulations, it follows from the above equation that

R_1^2 = x_{S1}^2 + x^2 + y_{S1}^2 + y^2 + z_{S1}^2 + z^2 - 2x_{S1}x - 2y_{S1}y - 2z_{S1}z + 2R_1 b - b^2   (6.17b)

Substituting the constraint Eqn (6.15) into Eqn (6.17b), we get

R_1^2 - k^2 - R_s^2 = -2x x_{S1} - 2y y_{S1} - 2z z_{S1} + 2R_1 b   (6.18a)

or, A_1 x + B_1 y + C_1 z + D_1 b = K_1   (6.18b)


where A1 = 2xS1, B1 = 2yS1, C1 = 2zS1, D1 = −2R1, and K1 = Rs² + k² − R1². We can construct four such linear equations, one from each observation equation and the constraint, and solve for the four unknowns. The simultaneous equations thus formed are

A_1 x + B_1 y + C_1 z + D_1 b = K_1
A_2 x + B_2 y + C_2 z + D_2 b = K_2
A_3 x + B_3 y + C_3 z + D_3 b = K_3
A_4 x + B_4 y + C_4 z + D_4 b = K_4   (6.19a)

This can be written in matrix form as

G X = K   (6.19b)

where

G = \begin{pmatrix} A_1 & B_1 & C_1 & D_1 \\ A_2 & B_2 & C_2 & D_2 \\ A_3 & B_3 & C_3 & D_3 \\ A_4 & B_4 & C_4 & D_4 \end{pmatrix}

X = [x \; y \; z \; b]^T and K = [K_1 \; K_2 \; K_3 \; K_4]^T

Using the standard least-squares method, the solution for X becomes

X = \left(G^T G\right)^{-1} G^T K   (6.20)
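As a rough sketch of how Eqns (6.18)-(6.20) translate into MATLAB (the inputs satPos, R, Rs, and k are assumed to be already available; the names are ours and this is not one of the book's programs):

% Constraint-equation method, Eqns (6.18)-(6.20).
% satPos: 4-by-3 satellite coordinates; R: 4-by-1 measured pseudoranges;
% Rs: radial distance of the satellites from the earth's center (assumed
% common to all four); k: the constant of the constraint Eqn (6.15).
A = 2*satPos(:,1);
B = 2*satPos(:,2);
C = 2*satPos(:,3);
D = -2*R;
K = Rs^2 + k^2 - R.^2;          % K_i = Rs^2 + k^2 - Ri^2
G = [A B C D];
X = (G.'*G)\(G.'*K);            % X = [x; y; z; b], Eqn (6.20)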

6.5.1.2 Bancroft's method

In our general discussion about finding the solution, we said that it becomes convenient to find a solution if the original quadratic observation equation is turned into a linear one. The equation was then linearized to serve the purpose. In the last subsection, we put an additional constraint on the positional coordinates to get the solution. In Bancroft's method, the equation is kept quadratic and some algebraic manipulations are carried out using the given relation to reduce the equations to a least-squares problem. Then, from the two possible solutions of this quadratic equation, the required solution is logically chosen. This method of solution is algebraic and noniterative in nature, computationally efficient, and numerically stable, and admits extended batch processing (Bancroft, 1985). It is a classic example of efficiently handling quadratic conditions and was further analyzed by Abel and Chaffee (1991) and Chaffee and Abel (1994).

The observation equation is written in terms of the satellite position, the user position, and the receiver bias as

R = \sqrt{(x_s - x)^2 + (y_s - y)^2 + (z_s - z)^2} + b   (6.21a)

Expanding the terms in the observation equation as functions of the unknown terms of the user and the satellite, we get

x_s^2 - 2x_s x + x^2 + y_s^2 - 2y_s y + y^2 + z_s^2 - 2z_s z + z^2 = R^2 - 2Rb + b^2   (6.21b)


Rearranging the terms, we get

(x^2 + y^2 + z^2 - b^2) - 2(x_s x + y_s y + z_s z - Rb) + (x_s^2 + y_s^2 + z_s^2 - R^2) = 0   (6.21c)

Notice that this may be recognized as a common quadratic equation of the form X² + kX + c = 0, where X = [x y z b]^T is a multivariate vector of dimension 4.

However, the intelligent part comes at this point. Instead of solving directly for X, notice that the quadratic unknown terms, i.e. (x² + y² + z² − b²), remain in the form of a scalar. Those who are aware of the Special Theory of Relativity can identify that this form is similar to that of the Lorentz equation. Thus, this composite term is called the Lorentz inner product of X. This function may be defined as

\lambda = \langle X \circ X \rangle = x^2 + y^2 + z^2 - b^2   (6.22)

Remember that this term λ is a scalar function of the unknown variables in X. Similarly, defining the vector S = [x_s \; y_s \; z_s \; R]^T, we get

\langle S \circ S \rangle = \left(x_s^2 + y_s^2 + z_s^2 - R^2\right) = a   (6.23)

Using these definitions, the equation becomes

\lambda - 2bX + a = 0   (6.24a)

where a and b are known and λ and X are unknown quantities. Also, b = [x_s \; y_s \; z_s \; -R]. This can equivalently be written as

bX = \tfrac{1}{2}\lambda + \tfrac{1}{2}a   (6.24b)

Because this equation holds for each of the satellites, a similar equation may be formed for each of "n" different satellites to form the matrix equation

BX = \tfrac{1}{2}L + \tfrac{1}{2}A   (6.24c)

λ, which is a scalar function of the user position, remains the same for all satellites. Hence, here L = λU, where U = [1 1 1 1 ...]^T; both are [n × 1] matrices. "a" is a constant, but it is different for different satellites, thus forming the matrix A. So,

B = \begin{pmatrix} x_{S1} & y_{S1} & z_{S1} & -R_1 \\ x_{S2} & y_{S2} & z_{S2} & -R_2 \\ x_{S3} & y_{S3} & z_{S3} & -R_3 \\ \vdots & & & \\ x_{Sn} & y_{Sn} & z_{Sn} & -R_n \end{pmatrix}

and A = [a_1 \; a_2 \; a_3 \; \ldots \; a_n]^T. If we have n satellites, B is an [n × 4] matrix, U is [n × 1], and A is an [n × 1] vector. If we have enough satellites, a least-squares solution solves the


normal equation. From Eqn (6.24c), we can derive the least-squares solution X* of X as

X^* = \left(B^T B\right)^{-1} B^T \left(\tfrac{1}{2}L + \tfrac{1}{2}A\right) = K\left(\tfrac{1}{2}L + \tfrac{1}{2}A\right)   (6.25)

where K = (B^T B)^{-1} B^T is a [4 × n] matrix. However, our solution X* involves λ, which again is a function of X. Substituting X* into the definition of the scalar λ, we get

\lambda = \left\langle \tfrac{1}{2}K(L + A) \circ \tfrac{1}{2}K(L + A) \right\rangle = \tfrac{1}{4}\lambda^2 \langle KU \circ KU \rangle + \tfrac{1}{2}\lambda \langle KU \circ KA \rangle + \tfrac{1}{4}\langle KA \circ KA \rangle   (6.26a)

or, \lambda^2 \langle KU \circ KU \rangle + 2\lambda\left(\langle KU \circ KA \rangle - 1\right) + \langle KA \circ KA \rangle = 0   (6.26b)

or, \lambda^2 C_1 + 2\lambda C_2 + C_3 = 0   (6.26c)

This is a scalar quadratic equation in λ. You may verify, by comparing the dimensions of each constituent matrix of the given operations, that the equation contains all scalar coefficients. All three of these coefficients can be computed because all components in them are known. Hence, the two possible solutions for λ (λ1 and λ2) can be obtained. Each of these two solutions is valid, but they are scalar functions of X and not X itself. Thus, each value of λ can be put into Eqn (6.25) to obtain the final closed value of X, as

X_1 = K\left(\tfrac{1}{2}\lambda_1 U + \tfrac{1}{2}A\right)

X_2 = K\left(\tfrac{1}{2}\lambda_2 U + \tfrac{1}{2}A\right)   (6.27)

Only one of these two solutions gives a result that makes logical sense. For example, for ground-based users, one solution for X will lie on the surface of the earth, at a radius close to that of the earth, and one will not. Thus, one of the two is selected through rational reasoning, and the other is rejected. The mathematical equivalence of this, however, is again only a constraint equation. Box 6.2 describes the method for solving for position using Bancroft's method.

FOCUS 6.3 SOLVING WITH BANCROFT'S METHOD

We illustrate here an example of solving for the position of a point using the measured ranges and the satellite positions as input. Let the Cartesian coordinates of satellites S1, S2, S3, and S4 in the ECEF frame in kilometers, as obtained from the satellite ephemeris transmitted by the satellites in their message, be:

Sat. S1: xS1 = 2.1339 × 10³, yS1 = 2.4391 × 10⁴, zS1 = 8.9115 × 10³
Sat. S2: xS2 = 0.0,          yS2 = 2.4484 × 10⁴, zS2 = 8.9115 × 10³
Sat. S3: xS3 = 4.1006 × 10³, yS3 = 2.3256 × 10⁴, zS3 = 1.1012 × 10⁴
Sat. S4: xS4 = 2.0581 × 10³, yS4 = 2.3525 × 10⁴, zS4 = 1.1012 × 10⁴


and let the measured ranges from a definite position P on the earth, after all corrections have been made on them, be, for satellites S1 to S4:

Rt1 = 1.9708 × 10⁴
Rt2 = 1.9702 × 10⁴
Rt3 = 1.9773 × 10⁴
Rt4 = 1.9714 × 10⁴

We need to find the position of P. The first job is to use the position and range information to generate the B matrix. For our case, the B matrix becomes

B = [0.2134  2.4391  0.8912  −1.9708;
     0       2.4484  0.8912  −1.9702;
     0.4101  2.3256  1.1012  −1.9773;
     0.2058  2.3525  1.1012  −1.9714] × 10⁴

From this value of B, the K matrix is derived as

K = [0.06  −0.06  −0.02  0.02;
     0.38  −0.37  −0.39  0.39;
     0.11  −0.16  −0.16  0.21;
     0.52  −0.54  −0.56  0.57] × 10⁻²

Now, because both KU and KA are known and formed using known parameters, they can be used to generate Eqn (6.26b) with coefficients c1, c2, and c3, respectively, where

c1 = −1.1445 × 10⁻⁹, c2 = −0.810, c3 = 1.4786 × 10⁸

The solutions of the equation thus formed in Eqn (6.26c) can be obtained using standard methods of solving quadratic equations, and are

λ1 = −8.5821 × 10⁸ and λ2 = 1.5053 × 10⁸

Putting these values into Eqn (6.27), we get the vectors X1 and X2 as

X1 = [0.041; 1.137; 0.468; 16.026] × 10³
X2 = [0.223; 6.464; 2.619; −1.979] × 10³

To validate the feasibility of the two results thus obtained, the radius of each of the two solution points is determined. These radii turn out to be R1 and R2 for solutions X1 and X2, respectively, where

R1 = 1.2299 × 10³ km
R2 = 6.9787 × 10³ km

Considering that the point of interest was on the earth's surface, the first solution does not satisfy the case because its radius is too short, whereas the second solution does, and looks more probable. So, X2 is our solution.

BOX 6.2 MATLAB FOR BANCROFT’S METHOD

The MATLAB program Bancroft.m was run to obtain the solution, as shown above. Information regarding the satellite positions and the measured ranges was obtained from an external file. This file, "sat_pos.txt," was read by the program through in-line commands.

Run the program and use different sets of data to check for the following:

1. How the condition of matrix B changes as the satellite passes close by
2. What happens when the measured range is exact (i.e. xs² + ys² + zs² − R² = 0) so that A = 0
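For readers without access to Bancroft.m, the method can be sketched in a few lines of MATLAB, as below. This is only an illustrative reimplementation under our own naming (satPos is n-by-3, R is n-by-1, n ≥ 4); the factors of 1/2 appearing in Eqns (6.24)-(6.27) are absorbed here into a and λ, which puts the quadratic directly in the form of Eqn (6.26b):

% Sketch of Bancroft's closed-form position solution.
B   = [satPos, -R];
K   = (B.'*B)\B.';                       % K = (B'B)^-1 B'
U   = ones(size(R));
A   = 0.5*(sum(satPos.^2, 2) - R.^2);    % a_i (with the 1/2 absorbed)
M   = diag([1 1 1 -1]);                  % Lorentz metric: <u o v> = u.'*M*v

KU  = K*U;  KA = K*A;
c   = [KU.'*M*KU, 2*(KU.'*M*KA - 1), KA.'*M*KA];  % quadratic in lambda
lam = roots(c);                          % the two candidate values of lambda

X1  = K*(lam(1)*U + A);                  % two candidate solutions [x; y; z; b]
X2  = K*(lam(2)*U + A);
% Keep the candidate whose radius norm(X(1:3)) is physically reasonable.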


6.5.2 Other methods

6.5.2.1 Doppler-based positioning

Among the other different methods, Doppler-based position fixing is important. It was the technique used by many of the initial satellite navigation systems. This technique was used for the first time with the satellite Sputnik, when the position of the satellite was determined using the Doppler frequency from known receiver positions. Here, we describe the fundamentals of position estimation using Doppler (Axelrad and Brown, 1996).

We first define the Doppler frequency as the shift in the frequency of the received signal from what was transmitted, as a result of the relative radial motion between the transmitter and the receiver. Thus, if vrs is the radial velocity of the receiver relative to the transmitter, the shift in the received frequency of a signal of wavelength λ owing to the Doppler effect is given by

\Delta f = -v_{rs}/\lambda   (6.28)

We have used the convention that when the relative velocity is such that it increases the intermediate distance, it has a positive sense. Therefore, when the radial distance decreases as the transmitter and the receiver approach each other, the frequency increases, resulting in a positive Doppler frequency; whereas whenever they relatively recede, with a corresponding increase in radial range, the received frequency decreases, causing a negative Doppler frequency. Furthermore, because λ is fixed for a signal, the relative velocity vrs at any instant can readily surrogate the Doppler shift. In this section, we will use the terms "Doppler shift" and "relative radial velocity" synonymously.

Doppler and integrated Doppler can be used to determine the position of the receiver when the position and the velocity of the transmitter are precisely known. We have seen that the range measured by the receiver can be expressed as

R_1 = \sqrt{(x_{S1} - x)^2 + (y_{S1} - y)^2 + (z_{S1} - z)^2} + b   (6.29)

where the notations carry their usual meanings. Because the relative velocity is the rate of change of the range, considering the rate of change of this range, we get

v_{rs} = dR/dt = (\alpha_s v_{sx} - \alpha_r v_{rx}) + (\beta_s v_{sy} - \beta_r v_{ry}) + (\gamma_s v_{sz} - \gamma_r v_{rz}) + db/dt   (6.30)

where αs = ∂R/∂xs, βs = ∂R/∂ys, and γs = ∂R/∂zs. Similarly, αr = −∂R/∂xr, βr = −∂R/∂yr, and γr = −∂R/∂zr. Here, vsx, vsy, and vsz are the components of the satellite velocity and vrx, vry, and vrz are the components of the receiver velocity along the X, Y, and Z axes, respectively. db/dt, i.e. c·d(δtu)/dt, is the drift of the receiver clock expressed as a range rate. These components of the satellite and user velocities, multiplied by their respective projection factors α, β, and γ, contribute to the total relative radial velocity along the line joining the user receiver to the satellite.

Now, to calculate position, the receiver must remove the effect of its own velocity. Thus, an effective choice is that the receiver remain stationary during the estimation. Then, under the condition of a static receiver location, vrx, vry, and vrz become zero, and the previous equation reduces to

v_{rs} = \alpha_s v_{sx} + \beta_s v_{sy} + \gamma_s v_{sz} + db/dt   (6.31)

We have already seen that the derivatives αs, βs, and γs are the direction cosines of the vector joining the satellite and the receiver. These are also the components, along the Cartesian axes, of the unit vector e pointing toward the satellite. We can also write Eqn (6.31) as

v_{rs} = [v_{sx} \; v_{sy} \; v_{sz} \; 1]\,[\alpha_s \; \beta_s \; \gamma_s \; db/dt]^T = [v_s \; 1]\cdot G   (6.32)

This gives a relation between the Doppler shift, alternatively represented by vrs, and the unknown positions present in the parameter G, through the absolute satellite velocity vs. Therefore, if the satellite position can be derived from the transmitted data along with its velocity, then from the Doppler-derived relative velocity, vrs, the receiver can use Eqn (6.32) to deduce its own position.

In this equation, the parameters α_s, β_s, and γ_s depend on the relative positions of the transmitter and the receiver. Each is a nonlinear function of the receiver position (x_r, y_r, z_r) along with the satellite position.

For a definite satellite position, any point on the line joining the satellite and the true receiver position will have the same value of G. This ambiguity of position, however, is removed, and the solution reduced to a single point, when many such equations are considered simultaneously. To solve for the position, a few distinct measurements are therefore made for the same receiver position. The batch of observation equations thus created is then solved through nonlinear least-squares methods. A relative approach may also be used to avoid the nonlinearity while solving. Here, to obtain the fix, an approximate solution X_0 = (x_0, y_0, z_0, db_0/dt) is first assumed. Because the Doppler variations differ for different positions, when this assumed position is put into the batch of observation equations it yields a different value of v_rs, say v_rs1, for each observation. Because these two Doppler values are obtained for the same satellite velocity v_s, the corresponding differential expression in matrix form, considering all the observation equations in the batch, will be

\Delta v_{rs} = v_{rs} - v_{rs1} = [\,v_s \;\; 1\,] \cdot \left(G_{\mathrm{true}} - G_{\mathrm{approx}}\right) = [\,v_s \;\; 1\,] \cdot \Delta G = [\,v_s \;\; 1\,] \cdot \left.\mathrm{grad}(G)\right|_{X_0} \Delta X \qquad (6.33)

where ΔX contains the differential values of the unknowns with respect to the approximate values.

Thus, we get a relation between the Doppler error and the position estimation error, and the problem reduces to finding the differential values from these linear equations. There are different standard algebraic methods to find them. Once these differential values are solved for, they can be added to X_0 and the values of x, y, and z obtained in turn.


With a single measurement at any definite processing instant, and no a priori knowledge of the position, it is difficult to estimate accurate values. However, with enough data collected over time for a specific position of the user, the explicit position satisfying all sets of equations over time in a least-squares sense can be obtained.

In summary, the position fix here begins with an approximate position and determines the differential positioning error by solving for the shift in that position required to best match the calculated slant range rates with those measured using Doppler.
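This procedure may be sketched in a few lines of Python. The fragment below makes several simplifying assumptions: noise-free Doppler measurements already converted to range rates, a static receiver, satellite positions and velocities taken as known, an invented geometry, and a finite-difference gradient used in place of the analytic grad(G) of Eqn (6.33). It iterates the linearized correction starting from an assumed approximate solution X_0.

```python
import numpy as np

def predicted_rate(X, sat_pos, sat_vel):
    """Range rate predicted for state X = [x, y, z, db/dt] (static receiver, Eqn (6.31))."""
    e = (sat_pos - X[:3]) / np.linalg.norm(sat_pos - X[:3])
    return e @ sat_vel + X[3]

def doppler_fix(X0, sats, measured, iterations=6):
    """Repeatedly solve the linearized batch of Doppler observation equations
    (Eqn (6.33)) for the correction DX and add it to the approximate solution."""
    X = np.array(X0, dtype=float)
    for _ in range(iterations):
        pred = np.array([predicted_rate(X, p, v) for p, v in sats])
        H = np.zeros((len(sats), 4))
        for j in range(4):                      # numerical gradient, column by column
            dX = np.zeros(4)
            dX[j] = 1.0
            H[:, j] = [predicted_rate(X + dX, p, v) - r for (p, v), r in zip(sats, pred)]
        DX, *_ = np.linalg.lstsq(H, measured - pred, rcond=None)
        X += DX
    return X

# Illustrative satellite states (m, m/s) and synthetic, noise-free range rates
# generated from an assumed true receiver state; not real orbital data.
truth = np.array([1_110e3, -4_830e3, 3_990e3, 5.0])
sats = [(1e3 * np.array(p), np.array(v)) for p, v in [
    (( 15_600,   7_540, 20_140), (-2_000.0,  1_500.0,  2_500.0)),
    (( 18_760,   2_750, 18_610), ( 1_200.0, -2_600.0,  1_100.0)),
    ((-17_610,  14_630, 13_480), (  900.0,   2_200.0, -1_700.0)),
    ((-19_170,     610, 18_390), (-1_400.0,   800.0,   2_300.0)),
    ((  9_800, -21_000, 12_300), ( 2_100.0,   400.0, -2_000.0)),
]]
measured = np.array([predicted_rate(truth, p, v) for p, v in sats])

X0 = truth + np.array([80e3, -60e3, 50e3, 3.0])   # rough approximate solution
print(doppler_fix(X0, sats, measured))            # should approach 'truth'
```

For this synthetic, self-consistent set of measurements, the iterated corrections drive the estimate toward the assumed true state; with real, noisy Doppler data the same structure is solved over a longer batch in the least-squares sense.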

6.6 Velocity estimation

The receiver estimates the position, P, and time, T, at every instant from the updated measurements. Thus, it is easy to find the velocity from these estimates as the instantaneous ratio of the incremental position to the incremental time. This ratio, derived along any definite axis, will give the velocity along that axis. So,

V = \Delta P/\Delta t \qquad (6.35a)

This can be resolved into estimates along the individual axes as

v_x = \Delta x/\Delta t, \qquad v_y = \Delta y/\Delta t, \qquad v_z = \Delta z/\Delta t \qquad (6.35b)
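A small Python fragment makes the arithmetic of Eqns (6.35a) and (6.35b) explicit; the two position fixes and their epochs are invented sample values.

```python
# Velocity from successive position fixes, Eqns (6.35a) and (6.35b).
# p1, p2 are (x, y, z) fixes in metres at receiver times t1, t2 in seconds
# (purely illustrative numbers).
p1, t1 = (1_110_000.0, -4_830_000.0, 3_990_000.0), 100.0
p2, t2 = (1_110_012.0, -4_829_991.0, 3_990_004.0), 101.0

dt = t2 - t1
vx, vy, vz = ((b - a) / dt for a, b in zip(p1, p2))
print(vx, vy, vz)          # 12.0, 9.0, 4.0 m/s along X, Y, Z
```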

However, this numerical approach to velocity estimation has drawbacks. A better alternative may be devised from the fact that the Doppler shift in the received frequency is a function of the relative radial velocity of the user with respect to the satellite (Kaplan et al., 2006). We saw this in Eqn (6.28). We can write that equation and expand the term into its components so that it turns into

\Delta f = -v_{rs}/\lambda = -\left[(\alpha_s v_{sx} - \alpha_r v_{rx}) + (\beta_s v_{sy} - \beta_r v_{ry}) + (\gamma_s v_{sz} - \gamma_r v_{rz})\right]/\lambda

or,\quad \lambda\,\Delta f + v_s \cdot G_s = v_r \cdot G_r \qquad (6.36)

where G_r = (α_r i + β_r j + γ_r k) and G_s = (α_s i + β_s j + γ_s k) are the unit vectors along the directions of the receiver and the satellite, respectively. λ is the wavelength of the transmitted signal and is equal to c/f_t, where c is the velocity of light and f_t is the frequency of the transmitted signal. Δf is the measured Doppler shift. v_s and v_r are the velocities of the satellite and the user, respectively.

If the user is able to receive the signal and measure the Doppler frequency Δf, and can estimate the velocity of the satellite from the received ephemeris, he can also derive his own velocity using this relation.

However, there are still practical considerations. How do we measure the Doppler frequency? The answer is, simply by differencing our known signal frequency from the measured frequency of the signal.


Again, we measure the incoming frequency of the signal by counting the total number of complete oscillations of the received signal occurring in one second. Finally, the count of a second is derived from a local clock. Thus, if the local clock in the receiver is in error, the measured frequency becomes erroneous, and so does the Doppler shift.

Besides the receiver clock error, the transmitted frequency is not exactly the designed frequency known to the receiver; it has errors owing to satellite oscillator drift. However, this value is typically estimated by the ground segment of the system, and the required correction is sent to the user through the navigation message. Therefore, we may consider the error resulting from this effect to have been corrected.

Considering that the receiver clock error is the only remaining source of error in this estimation, we recall that the receiver clock errors are the clock bias and the clock drift. The receiver's fixed clock bias (i.e., its deviation from the true time) creates no difference in the frequency measurement. This is because the start and the stop times used in counting the oscillations are equally shifted from the true time, and hence the error is nullified in the difference.
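A tiny numerical illustration of this point follows; the bias and drift values are arbitrary, and the drift case anticipates the result derived just below in Eqns (6.37) through (6.39).

```python
# Frequency is measured by counting n cycles over what the local clock reads as
# one second.  A fixed clock bias shifts the start and stop epochs equally and
# cancels; a clock drift t' stretches the counting interval and does not.
f_true = 1_575.42e6          # received frequency, Hz (illustrative)
bias   = 2e-3                # fixed clock bias, s
drift  = 1e-7                # clock drift t', s per s

n = f_true * 1.0                                   # cycles counted in one true second
f_with_bias  = n / ((1.0 + bias) - (0.0 + bias))   # start and stop equally shifted
f_with_drift = n / (1.0 * (1.0 + drift))           # interval read long by t' * dt
print(f_with_bias - f_true)                        # essentially zero: the bias cancels
print(f_with_drift - f_true)                       # about -f_true * t', roughly -157.5 Hz
```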

If t′ is the clock drift, then during the interval Δt between the start and the end of the count, the clock shifts by

dt = t'\,\Delta t \qquad (6.37)

This term dt becomes the error in timing. Therefore, if n is the total count of oscillations in the time period Δt, the true received frequency of the signal is f = n/Δt. But, owing to the error in measuring the time period, the frequency actually measured is

f_m = f + df = f + (\partial f/\partial \Delta t)\,dt \qquad (6.38)

The error in measuring the frequency is

df = (\partial f/\partial \Delta t)\,dt = \frac{\partial}{\partial \Delta t}\left(\frac{n}{\Delta t}\right) dt = -\frac{n}{\Delta t^{2}}\,dt = -\frac{n}{\Delta t^{2}}\,(\Delta t\, t') = -\frac{n}{\Delta t}\,t' = -f\,t' \qquad (6.39)

The negative sign, as usual, indicates that a positive drift leads to a decrement in the measured frequency. So, the measured Doppler frequency becomes equal to the true Doppler plus the Doppler estimation error due to the clock drift. For each satellite, we then get the following expression from the Doppler observation:

v_r \cdot G_r + \lambda f\,t' = v_s \cdot G_s + \lambda\,\Delta f_m

or,\quad [\,G_r \;\; \lambda f\,]\,[\,v_r \;\; t'\,]^{T} = v_s \cdot G_s + \lambda\,\Delta f_m

or,\quad G'_r\,[\,v_r \;\; t'\,]^{T} = v_s \cdot G_s + \lambda\,\Delta f_m \qquad (6.40)

where G'_r = [\,G_r \;\; \lambda f\,].


We can now construct a similar matrix relation, stacking one such equation for each satellite, to solve for the receiver velocity. The solution becomes

[\,v_r \;\; t'\,]^{T} = \left(G'_r\right)^{-1} G_s\, v_s + \left(G'_r\right)^{-1} (\lambda\,\Delta f) \qquad (6.41)

This equation has an implicit assumption that the line of sight joining the satellite and the user is known before the velocity is estimated, which enables us to find G_r. Thus, velocity estimation with this method can be done only after the position of the user has been estimated. Like the position, these velocity values are derived from the instantaneous measurements and are updated after every measurement.
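A minimal least-squares sketch of Eqn (6.41) is given below. It assumes the receiver position is already known, so that the direction cosines in G_r can be formed from the receiver-to-satellite lines of sight, and treats G_s the same way; it also generates synthetic Doppler measurements that satisfy Eqn (6.40) exactly for an assumed true velocity and clock drift. The satellite states and carrier frequency are illustrative only.

```python
import numpy as np

C = 299_792_458.0
F_T = 1_575.42e6                       # transmitted frequency, Hz (illustrative)
LAM = C / F_T                          # wavelength, lambda = c / f_t

def velocity_fix(rcv_pos, sats, dopplers):
    """Solve Eqn (6.41) in a least-squares sense for [v_r, t'].

    Each row of the design matrix is G'_r = [alpha_r, beta_r, gamma_r, lambda*f],
    and the corresponding right-hand side is v_s . G_s + lambda * Df_m."""
    G = np.zeros((len(sats), 4))
    rhs = np.zeros(len(sats))
    for i, ((sat_pos, sat_vel), dfm) in enumerate(zip(sats, dopplers)):
        e = (sat_pos - rcv_pos) / np.linalg.norm(sat_pos - rcv_pos)
        G[i] = np.append(e, LAM * F_T)
        rhs[i] = sat_vel @ e + LAM * dfm
    sol, *_ = np.linalg.lstsq(G, rhs, rcond=None)
    return sol[:3], sol[3]             # receiver velocity (m/s) and clock drift t' (s/s)

# Synthetic check: Doppler measurements generated so that Eqn (6.40) holds for an
# assumed true velocity and drift (illustrative geometry, not real orbital data).
rcv_pos = np.array([1_110e3, -4_830e3, 3_990e3])
v_true, drift_true = np.array([30.0, -10.0, 5.0]), 1e-7
sats = [(1e3 * np.array(p), np.array(v)) for p, v in [
    (( 15_600,   7_540, 20_140), (-2_000.0,  1_500.0,  2_500.0)),
    (( 18_760,   2_750, 18_610), ( 1_200.0, -2_600.0,  1_100.0)),
    ((-17_610,  14_630, 13_480), (  900.0,   2_200.0, -1_700.0)),
    ((-19_170,     610, 18_390), (-1_400.0,   800.0,   2_300.0)),
    ((  9_800, -21_000, 12_300), ( 2_100.0,   400.0, -2_000.0)),
]]
dopplers = []
for sat_pos, sat_vel in sats:
    e = (sat_pos - rcv_pos) / np.linalg.norm(sat_pos - rcv_pos)
    dopplers.append((v_true @ e - sat_vel @ e) / LAM + F_T * drift_true)

print(velocity_fix(rcv_pos, sats, dopplers))   # recovers v_true and drift_true
```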

However, errors enter this estimation, too; we will learn about them in Chapter 7, which is devoted to this topic. Other techniques can also be used to estimate the position and velocity. One of these is the Kalman filter, in which position, time, and velocity are determined simultaneously by treating them as the state variables of the receiver. An introduction to the Kalman filter and the associated estimation process is given in Chapter 9.

Conceptual questions

1. Is it possible to find the position of a flying aircraft by measuring its range from a known position on the earth? If yes, how many such receivers will be required to find it?

2. Instead of using the measured ranges, if we take the range difference values, the common term of clock bias cancels out and the equations are left with three unknowns. Is it possible to use only three satellites and the corresponding three difference equations to derive the position coordinates?

3. What advantages do we obtain by using an atomic clock while determining the position and velocity?

4. Do you expect the accuracy of the navigation solution to improve if more than four satellites are used for the purpose?

References

Abel, J.S., Chaffee, J.W., 1991. Existence and uniqueness of GPS solutions. IEEE Transactions on Aerospace and Electronic Systems 27 (6), 952–956.

Axelrad, P., Brown, R.G., 1996. GPS navigation algorithms. In: Parkinson, B.W., Spilker Jr., J.J. (Eds.), Global Positioning Systems, Theory and Applications, vol. I. AIAA, Washington, DC, USA.

Bancroft, S., 1985. An algebraic solution of the GPS equations. IEEE Transactions on Aerospace and Electronic Systems 21, 56–59.

Chaffee, J.W., Abel, J.S., 1994. On the exact solutions of pseudorange equations. IEEE Transactions on Aerospace and Electronic Systems 30 (4), 1021–1030.


Grewal, M.S., Weill, L., Andrews, A.P., 2001. Global Positioning Systems, Inertial Navigation and Integration. John Wiley and Sons, New York, USA.

Kaplan, E.D., Leva, J.L., Milbert, D., Pavloff, M.S., 2006. Fundamentals of satellite navigation. In: Kaplan, E.D., Hegarty, C.J. (Eds.), Understanding GPS: Principles and Applications, second ed. Artech House, Boston, MA, USA.

Strang, G., 2003. Introduction to Linear Algebra, third ed. Wellesley-Cambridge Press, Wellesley, MA, USA.