
Fusion of Visual Odometry and Landmark Constellation Matching for Spacecraft Absolute Navigation: Analysis and Experiments

Bach Van Pham 1,2, Simon Lacroix 1,2, Michel Devy 1,2, Thomas Voirin 3, Clément Bourdarias 4 and Marc Drieux 4

1 CNRS ; LAAS ; 7 avenue du colonel Roche, F-31077 Toulouse, France
2 Université de Toulouse ; UPS, INSA, INP, ISAE ; LAAS ; F-31077 Toulouse, France

3 European Space Agency, ESTEC, Keplerlaan 1, P.O. Box 299, 2200 AG Noordwijk ZH, The Netherlands
4 EADS-ASTRIUM, 66 Route de Verneuil, 78133 Les Mureaux Cedex, France

Abstract— Future space exploration missions require a precise absolute localization of the lander during descent and at touch-down, called "pinpoint landing". This article presents the extensions of "Landstel", a solution initially presented at ASTRA 2008 [10], that exploits on-board vision to localize the lander during descent with respect to orbital imagery. Landstel uses the geometric repartition of surface landmarks to match descent images with orbital images. The algorithm is designed to be robust with respect to illumination variations, independent from the presence of specific features on the surface, and with a very low demand on memory and processing power. The article focuses on the two following issues: methods to pre-process the orbital images to ensure a proper behavior of Landstel, and the integration of this algorithm with the output of an inertial navigation sensor or with a visual odometry scheme. Extensive results with simulated data have been obtained; the paper analyzes some of them.

I. INTRODUCTION

Autonomous precision landing is an important capability required to ensure mission success for future automatic landers targeting Mars, the polar regions of the Moon, and asteroids [17]. For the ESA Lunar Lander mission for instance [13], the maximum dispersion allowed for the lander position before the main braking phase (at 15 km altitude, figure 1(a)) is a few hundred meters. This requirement is not fulfilled with the current technology, where the spacecraft position is determined with Earth-based ground tracking and inertial navigation.

Much research has been performed to improve the navigation accuracy during entry, descent and landing (EDL), either by reducing the initial position error (e.g. by using the radio communication capabilities of orbital assets [2]) or by improving the estimation of the lander velocity and attitude [4], [15]. However, the availability of high resolution imagery of planetary surfaces acquired by orbiters, such as the Mars Reconnaissance Orbiter and the Lunar Reconnaissance Orbiter, allows the definition of absolute localization solutions, nicknamed "pinpoint landing". Such solutions rely on the search for correspondences between orbital data and data acquired during descent, using either a camera or a LIDAR.

This article proposes a vision-based solution to the pinpoint landing problem: descent images are acquired with an embedded camera during flight and compared with a "landfall map" of the landing area, which gathers the orbital images and the corresponding digital elevation maps (DEM).

Fig. 1. Main phases of a Moon polar landing scenario. Landfall maps (initial orbital images) are symbolized by chessboards.

We focus on the lunar landing scenario depicted in Figure 1(a): in contrast with previous lunar navigation studies (e.g. [1]) where the navigation system is used below 15 km altitude, the optical navigation system is intended to be used right after the de-orbiting phase, at about 100 km altitude. In such a case, because of the limited ranging capability of existing altimeters, the vision-based navigation system has to be robust with respect to low precision altitude information, obtained from inertial measurements only.

Besides this, one big challenge for vision-based navigation systems is to be robust with respect to the radiometry differences between the orbital images and the descent ones, mainly due to variations of sun illumination. Another challenge is to cope with the limits of onboard processing power: for instance, the current CPU targeted by the European Space Agency is the Leon 3 [14], which has four cores clocked at 100 MHz and memory resources restricted to about 100 Megabytes.

Approach overview and outline

Our approach relies on a Landmark Constellation matching algorithm named Landstel [10], [11].


On-line, Landstel estimates the lander position from matches between landmarks extracted both on orbital and descent images. To cope with illumination variations during the matching operation, Landstel exploits geometric information related to point features rather than radiometric information. Landstel is also exploited off-line, in a preparatory phase that assesses which orbital images are suitable for its on-line operation, and to extract good point features from these images (section II).

Fig. 2. VIBAN: an architecture that integrates Landstel and visual odometry.

Landstel is integrated with visual odometry within the overall VIsion-Based Absolute Navigation (VIBAN) system, depicted in figure 2. Visual odometry (VO) is assumed to be entirely independent from Landstel; it estimates a lander state that is fused with the Landstel absolute position estimate, according to a loose integration scheme (section III). Furthermore, matched landmarks (Landstel matches between one descent image and one orbital image) and tracked landmarks (VO matches between consecutive descent images) can be integrated, thus making it possible to discard Landstel faults and to augment the number of matched landmarks, as presented in section IV.

The VIBAN system has been extensively tested in dozens of scenarios, using tens of thousands of images. Owing to paper length limits, only results obtained under nominal conditions are presented in section V.

II. LANDSTEL

Landstel relies on the geometric repartition of surface landmarks to find matches between the descent image and the orbital image. Figure 3 shows the topology of the landmarks extracted from two images acquired with two different sun angles and from two different positions. The percentage of common points between the two images (repeatability) is rather low, here about 25%. Despite this low repeatability rate, the geometric repartition of these landmarks is well preserved; Landstel exploits this characteristic to find the matches between the two images.

As presented in Figure 2, Landstel is exploited both online and offline. The online process is the one that establishes matches between descent imagery and the orbital image landmarks, whereas the offline module processes the landfall maps to create the orbital database: it measures the suitability of a landfall map with respect to the operation of Landstel, and determines the best Landstel parameters for a particular landfall map.

Fig. 3. Interest points detected on images taken with a (55, 25) (azimuth, elevation) sun angle (left image) and with a (70, 40) sun angle (right image). The left image is taken from 200 km altitude with a 10 deg field of view camera; the right one is taken from 8 km altitude with a 70 deg field of view camera.

A. Landstel On-line

The Landstel on-line function consists of 5 major steps (Table I). The first and second steps extract and transform the landmarks in the descent image so that the similarity of the geometric repartition between the descent landmarks and the orbital landmarks is maximized (using a homography computed from the position and attitude knowledge); a sketch of this rectification is given below. Then, the third step extracts the signature of each descent landmark, which is compared with the orbital landmark signatures (step 4): a list of potential candidates from the orbital landmark database is associated to each descent landmark. In the last step, a voting scheme is applied to assess the correct matches: several affine transforms are extracted within the potential candidate list, and the best affine transform (the one with the highest number of matches) is used to find additional matches between descent landmarks and orbital ones. Further information about the algorithm can be found in [10], [11].
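As an illustration of step 2, the rectification can be expressed as a plane-induced homography built from the attitude and position estimates. The following Python/numpy sketch assumes a locally flat surface; the intrinsics K_des and K_orb and the frame conventions are illustrative, not those of the flight implementation:

    import numpy as np

    def rectify_landmarks(pts_des, K_des, K_orb, R, t, n, d):
        # Plane-induced homography H = K_orb (R - t n^T / d) K_des^-1,
        # with R, t the rotation/translation between the descent and
        # orbital camera frames (from the attitude/position estimates),
        # n the ground-plane normal and d the distance to the plane
        # (altitude). All names here are assumptions for the sketch.
        H = K_orb @ (R - np.outer(t, n) / d) @ np.linalg.inv(K_des)
        pts_h = np.hstack([pts_des, np.ones((len(pts_des), 1))])
        warped = (H @ pts_h.T).T
        return warped[:, :2] / warped[:, 2:3]  # back to pixel coordinates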

B. Landstel Off-line

The orbital image plays an important role in the performance of Landstel: the choice of a landing zone that is suitable for Landstel is essential for mission success. The off-line process introduced in this section pre-processes the orbital image so that the performance of Landstel is maximized. It also provides metrics to classify landing sites, thus allowing the operator to select the most suitable landing site for Landstel.

1) Orbital Landmark Selection: Given an image of the landing site, interest points are extracted. As in the online process, the DoG feature point detector [7] is applied to the orbital image. For a surface with a large number of interest points at different scales, a Gaussian operator is applied to remove the less significant points, keeping only the most meaningful ones. Figure 4 shows the result obtained by applying a Gaussian filter to a cratered surface.


Algorithm LANDSTEL Online
Inputs: orbital landmark database, descent image, estimated lander attitude, altitude, position, and camera model.
Outputs: matched landmarks between descent and orbital images.
Algorithm:
1. Select descent landmarks with the scaled operator.
2. Rectify descent landmarks to the orbital image plane.
3. Extract descent landmark signatures with Shape Context (br, pr, nR and nW).
4. Compare descent landmark signatures with orbital landmark ones to create the potential candidate list.
5.a. Extract affine transforms from the potential candidates.
5.b. Choose the affine transform with the highest number of matches.
5.c. If the number of matches is high enough, return them as output.

TABLE I. Landstel online process.
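To make step 5 concrete, the voting can be pictured as follows: minimal sets of tentative matches hypothesize affine transforms, and the transform supported by the largest number of candidates wins. This Python sketch flattens the per-landmark candidate lists into tentative (descent, orbital) pairs and uses a fixed trial count, both simplifications of ours:

    import numpy as np

    def vote_best_affine(candidates, n_trials=500, tol=3.0, rng=None):
        # candidates: list of (descent_xy, orbital_xy) tentative matches.
        rng = rng or np.random.default_rng()
        pts_d = np.array([c[0] for c in candidates], dtype=float)
        pts_o = np.array([c[1] for c in candidates], dtype=float)
        A = np.hstack([pts_d, np.ones((len(pts_d), 1))])   # N x 3
        best_T, best_in = None, np.zeros(len(pts_d), bool)
        for _ in range(n_trials):
            idx = rng.choice(len(pts_d), 3, replace=False)
            # Affine transform fitted on a minimal set of 3 matches.
            T, *_ = np.linalg.lstsq(A[idx], pts_o[idx], rcond=None)
            err = np.linalg.norm(A @ T - pts_o, axis=1)
            inliers = err < tol                            # supporting votes
            if inliers.sum() > best_in.sum():
                best_T, best_in = T, inliers
        return best_T, best_in  # transform and the matches it explains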

Without using a Gaussian filter, more than 20,000 feature points are detected; with the Gaussian filter, the number of interest points is reduced to about 1,000. The stored interest points represent the most significant landmarks distributed over the whole landing site. This filtering also transforms the scale of the orbital image, reducing the scale difference between the orbital and descent images; the remaining difference is handled in step 1 of the on-line process by the scale adjustment operator.
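A minimal sketch of this selection, assuming OpenCV's SIFT (a DoG-based detector [7]) as a stand-in for the flight detector; the smoothing sigma plays the role of the Gaussian value tuned off-line:

    import cv2

    def orbital_landmarks(orbital_img, gaussian_sigma):
        # A larger sigma suppresses fine-scale DoG responses, keeping
        # only the most significant, large-scale landmarks.
        blurred = cv2.GaussianBlur(orbital_img, (0, 0), gaussian_sigma)
        detector = cv2.SIFT_create()
        return detector.detect(blurred, None)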

After having extracted the interest points of the orbital image, a signature is defined for each extracted feature point: five parameters are required to define this signature, which is akin to a shape context descriptor [12]. The landmarks' initial 2D positions, their signatures and their 3D absolute coordinates in the orbiter local frame constitute the geo-database stored in the memory of the lander before launch.
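The signature can be pictured as a log-polar histogram of the positions of neighbouring landmarks. In the sketch below, nR and nW are the numbers of radial and angular bins, and r_min/r_max bound the rings; how these map onto the five parameters named in Table I (br, pr, nR, nW) is our reading, not the paper's exact definition:

    import numpy as np

    def shape_context(landmark, neighbours, r_min, r_max, nR, nW):
        # Histogram of neighbour positions relative to the landmark,
        # binned over nR log-spaced rings and nW angular wedges.
        d = neighbours - landmark
        r = np.linalg.norm(d, axis=1)
        theta = np.arctan2(d[:, 1], d[:, 0]) % (2 * np.pi)
        keep = (r >= r_min) & (r < r_max)
        r_edges = np.logspace(np.log10(r_min), np.log10(r_max), nR + 1)
        r_bin = np.searchsorted(r_edges, r[keep], side='right') - 1
        w_bin = (theta[keep] // (2 * np.pi / nW)).astype(int)
        hist = np.zeros((nR, nW))
        np.add.at(hist, (np.clip(r_bin, 0, nR - 1),
                         np.clip(w_bin, 0, nW - 1)), 1)
        return hist.ravel()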

Fig. 4. 20740 interest points detected without Gaussian filter (left image); 1141 interest points detected with Gaussian filter (right image).

2) Orbital Data and Landstel Parameter Optimization: In order to maximize the on-line performance of Landstel, a pre-processing function is employed to optimize both the orbital database and the Landstel parameters.

Algorithm LANDSTEL Off-line Pre-processing
Inputs: orbital image, sub-image size, step size and parameter pool.
Outputs: Gaussian scale, shape context parameters.
Algorithm:
1. Choose one off-line Gaussian scale.
2. Extract orbital landmarks with this scale.
3. Create a simulated descent landmark database by distorting the orbital landmarks, adding new points and deleting old points.
4. Choose a sub-image in the orbital image as one descent image.
5. Extract landmarks in the descent image from the descent database.
6. Apply Landstel to the extracted descent landmarks.
7. Repeat steps 4 to 6 with "step size" until no sub-image is available.
8. Calculate and store the Landstel performance scores.
9. Repeat steps 1 to 8 with other Gaussian scale values.
10. Return the scale and parameters with the highest scores.

TABLE II. Landstel offline process.

This pre-processing function has two goals: it defines the number of extracted interest points by setting the Gaussian value used on the orbital image, and it selects optimal values for the five shape context parameters required to define the signature of a point.

Table II details the ten steps of this pre-processing function. With the pre-defined values of the Gaussian filter and the shape context parameters, the orbital landmarks are extracted and their signatures are calculated. Then, a test landmark database is created from the extracted orbital landmarks. First, the positions of these landmarks are corrupted in order to simulate how the landmarks change in the descent image; the level of distortion can be based on the elevation of the surface (to simulate rectification errors), on the shape of the neighbouring area (to simulate shadows), on the noise of the attitude and altitude estimates, or on the difference in radiometric response between the orbital and descent cameras. Then, a percentage of landmarks are randomly deleted. Finally, new landmarks are randomly added to form a test descent landmark database.
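Step 3 of Table II can be sketched as follows. This toy version uses plain Gaussian jitter, whereas the actual distortion is modulated by elevation, shadowing and sensor differences as described above:

    import numpy as np

    def simulate_descent_db(orbital_pts, jitter_px, del_frac, add_frac,
                            img_size, rng=None):
        rng = rng or np.random.default_rng()
        # 1. Distort: jitter the orbital landmark positions.
        pts = orbital_pts + rng.normal(0.0, jitter_px, orbital_pts.shape)
        # 2. Delete: randomly drop a fraction of the landmarks.
        pts = pts[rng.random(len(pts)) > del_frac]
        # 3. Add: inject spurious landmarks at random positions.
        n_new = int(add_frac * len(orbital_pts))
        new_pts = rng.uniform(0, img_size, size=(n_new, 2))
        return np.vstack([pts, new_pts])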

After having created the test descent landmark database, a sub-image whose size corresponds to the predicted size of the warped descent image is extracted; the landmarks whose positions fall inside the sub-image are gathered to create the "descent landmarks". Then, the on-line Landstel process is applied to find matches between the descent landmarks extracted from the sub-image and the orbital ones. This process is repeated with numerous sub-images. The different scores of Landstel with this set of parameters are calculated, and the process is iterated over the whole parameter space. Figure 5 shows the scores of different shape context parameters and Gaussian values for one orbital image.

Fig. 5. Gaussian scale and parameter scores with one orbital image: six panels (Redundancy, Time, Memory, Operation Probability, Good Match and Bad Match) plot a score over the shape context parameter space (vertical axis) versus the Gaussian value (horizontal axis). All chart values are normalized for visualization purposes; a high value means a good score.

III. FUSION OF ESTIMATED POSITIONS

A. Filter Description

Most integration systems proposed in the literature to combine image and IMU data are tight integration schemes [19], [6], [8]. In contrast, we propose a loosely coupled configuration due to its flexibility. Figure 6 shows the architecture of one variation ("feed backward") of the loose integration system.

Fig. 6. The "feed backward" loosely coupled scheme combines VO and Landstel outputs and sends the filter output backward, to correct the position estimated by VO. In the diagram, the camera provides descent images that Landstel matches against the orbital image to output an observed position; IMU raw data and feature tracking feed NPAL, which outputs an estimated position; the difference between the two (observed error of position) feeds a linear Kalman filter, whose estimated error of position and covariance $\hat{P}_k$ close the loop.

The loosely coupled integration scheme combines the output of VO (here given by the NPAL system [4]) or an INS with Landstel. The NPAL sensor can be considered as an extended INS: it generates estimates of the vehicle position with cumulative errors, fusing measurements from the IMU and from a feature-tracking function. Unlike a tight integration scheme, the loose integration filter does not take inputs directly from the raw sensors: raw feature positions and IMU measurements are processed by VO or the INS, while landmark observations in the image and landmark positions on the terrain are processed inside Landstel to estimate the spacecraft absolute position. This loose integration only updates the vehicle position (contrary to a tight integration, which updates all the spacecraft states).

To correct the position error, the filter estimates the error in the position given by VO or the INS. The observation delivered to the filter is the "observed error", measured as the difference between the VO or INS position estimate and the Landstel position estimate. Because the navigation error equations are a linearised model [18], the filter is linear. In Figure 6, the error of position estimated by the linear Kalman filter is sent back into the system; this scheme is called "feed backward".

B. Filter Implementation

Since the process models are the linearised navigation error models, the implemented filter is a linear Kalman filter. Details on linearising the system navigation equations to obtain the error-state equations can be found in [18]. The state of the filter is the navigation error, defined as:

$$\delta x_k = \begin{bmatrix} \varepsilon^T & \delta v^T & \delta p^T & \delta f^{b\,T} & \delta \omega_{ib}^{b\,T} \end{bmatrix}^T \qquad (1)$$

where $\varepsilon$ is the system attitude error, $\delta v$ the velocity error, $\delta p$ the position error, $\delta f^b$ the error in the measurement of acceleration, and $\delta \omega^b_{ib}$ the error in the measurement of the angular rate. Each term is a 3-dimensional vector ($3 \times 1$). The control input $u$ is:

$$u_c(k) = \begin{bmatrix} u_{acc}^T(k) & u_{gyro}^T(k) \end{bmatrix}^T \qquad (2)$$

where $u_{acc}$ and $u_{gyro}$ denote the IMU (or VO) system noise.

The continuous-time navigation error state model can be defined as:

$$\dot{\delta x}(k) = F(k)\,\delta x(k) + G(k)\,u_c(k) \qquad (3)$$

where $F(k)$ is the $15 \times 15$ matrix

$$F(k) = \begin{bmatrix} -\Omega^e_{ie} & 0_3 & 0_3 & 0_3 & R^e_b \\ -\Upsilon^e & -2\Omega^e_{ie} & 0_3 & R^e_b & 0_3 \\ 0_3 & I_3 & 0_3 & 0_3 & 0_3 \\ 0_3 & 0_3 & 0_3 & 0_3 & 0_3 \\ 0_3 & 0_3 & 0_3 & 0_3 & 0_3 \end{bmatrix} \qquad (4)$$

and $G(k)$ the $15 \times 6$ matrix:

$$G(k) = \begin{bmatrix} 0_3 & R^e_b \\ R^e_b & 0_3 \\ 0_3 & 0_3 \\ 0_3 & 0_3 \\ 0_3 & 0_3 \end{bmatrix} \qquad (5)$$

where $R^e_b$ is the rotation matrix between the body frame and the e-frame¹. The $0_3$ and $I_3$ symbols respectively denote the $3 \times 3$ null matrix and the $3 \times 3$ identity matrix, and $\Omega^e_{ie}$ is the skew-symmetric matrix representing the planet rotation, given by the angular rate $\omega^e_{ie} = [\omega_1, \omega_2, \omega_3]$ between the i-frame² and the e-frame:

$$\Omega^e_{ie} = \begin{bmatrix} 0 & -\omega_3 & \omega_2 \\ \omega_3 & 0 & -\omega_1 \\ -\omega_2 & \omega_1 & 0 \end{bmatrix} \qquad (6)$$

¹ Planet-Centered Planet-Fixed frame.
² Planet-Centered Inertial frame.


and $\Upsilon^e$ is defined as:

$$\Upsilon^e = \begin{bmatrix} 0 & -f^e_3 & f^e_2 \\ f^e_3 & 0 & -f^e_1 \\ -f^e_2 & f^e_1 & 0 \end{bmatrix} \qquad (7)$$

where $f^e_l$ indicates the acceleration along the $l$-th coordinate axis in the e-frame. As explained in [18], the discrete-time error equation becomes:

$$\delta x_{k+1} = \Psi_k\,\delta x_k + u_{d,k} \qquad (8)$$

with $u_{d,k}$ the discrete-time process noise. The state transition matrix between two instants $kT_s$ and $(k+1)T_s$ is approximated as:

$$\Psi_k \approx I + F(kT_s)\,T_s \qquad (9)$$

Let $\delta y$ be the difference between the Landstel and the VO or INS position estimates, and $w_{d,k}$ the discrete observation noise, i.e. the error in the Landstel position estimate. The observation equation can then be written as:

$$\delta y_k = H_k\,\delta x_k + w_{d,k} \qquad (10)$$

The observation matrix $H$ is a $3 \times 15$ zero matrix when Landstel cannot estimate the absolute position; otherwise it contains an identity sub-matrix selecting the position error:

$$H_k = \begin{bmatrix} 0_3 & 0_3 & I_3 & 0_3 & 0_3 \end{bmatrix} \qquad (11)$$
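For concreteness, equations (4) to (11) translate into the following numpy sketch of one filter cycle; the matrix layouts follow the definitions above, while the update form and the API are illustrative choices of ours:

    import numpy as np

    I3, O3 = np.eye(3), np.zeros((3, 3))

    def skew(w):
        return np.array([[0, -w[2], w[1]],
                         [w[2], 0, -w[0]],
                         [-w[1], w[0], 0]])

    def F_matrix(omega_ie, f_e, R_eb):
        # Eq. (4), with Omega = skew(omega_ie) as in eq. (6) and
        # Upsilon = skew(f_e) as in eq. (7); R_eb is R^e_b.
        Om, Up = skew(omega_ie), skew(f_e)
        return np.block([[-Om, O3,      O3, O3,   R_eb],
                         [-Up, -2 * Om, O3, R_eb, O3],
                         [O3,  I3,      O3, O3,   O3],
                         [O3,  O3,      O3, O3,   O3],
                         [O3,  O3,      O3, O3,   O3]])

    H = np.hstack([O3, O3, I3, O3, O3])   # observation matrix, eq. (11)

    def kf_step(dx, P, F, Ts, Q, dy=None, R=None):
        # Predict with Psi = I + F*Ts, eq. (8)-(9).
        Psi = np.eye(15) + F * Ts
        dx, P = Psi @ dx, Psi @ P @ Psi.T + Q
        if dy is not None:   # dy: Landstel minus VO/INS position, eq. (10)
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            dx = dx + K @ (dy - H @ dx)
            P = (np.eye(15) - K @ H) @ P
        return dx, P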

The integration of the two systems not only enhances the vehicle position knowledge, but also allows Landstel to reduce the search zone in the orbital image instead of searching over the whole landing area. After each Kalman filter update, the new spacecraft absolute position is returned to Landstel, and the size of the search zone is determined from the value of the estimation covariance $P_k$. This focusing mechanism not only accelerates the algorithm by reducing the search area, but also improves the algorithm's performance by limiting the probability of false matches.
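For illustration, one simple sizing rule (the exact VIBAN rule is not detailed here) scales the search-zone half-width with the largest position standard deviation taken from $P_k$:

    import numpy as np

    def search_half_width(P, n_sigma=3.0):
        # State layout of eq. (1): indices 6:9 hold the position error.
        P_pos = P[6:9, 6:9]
        return n_sigma * np.sqrt(np.linalg.eigvalsh(P_pos).max())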

The Kalman filter should be "tuned" before use [16], [5]: this amounts to defining the values of the process $Q(k)$ and observation $R(k)$ covariance matrices. The $Q(k)$ value can be obtained by calibrating the INS or through a validation campaign with VO; similarly, the value of the $R(k)$ matrix can be obtained with a Landstel validation campaign.

IV. FUSION OF SURFACE LANDMARK POINTS

A. Vision-based fault detection

In a conventional tight integration system, the detection and rejection of matched-point outliers is performed in an EKF using a gating function [3] based on the Mahalanobis distance. However, the linear Kalman filter implemented here does not directly take image measurements as inputs. To detect and reject large Landstel position errors caused by false matches, the Consistency Checking module acts like a gating function: predictions and observations of the matched landmarks are made and compared to validate the Landstel estimate. The prediction and observation functions are symbolically described as:

• Prediction Set: using the estimated position $p^t_{Ls}$ returned by Landstel at instant $t$ and the motion estimate $\Delta p^{t+1|t}_{Vo}$ between $t$ and $t+1$ given by the visual odometry, the position of the lander at $t+1$ is predicted as:

$$p^{t+1}_{Ls} = p^t_{Ls} + \Delta p^{t+1|t}_{Vo} \qquad (12)$$

With the predicted position $p^{t+1}_{Ls}$ and the 3D points $M_{3D}$ associated with the matched points $B^t_{Des}$ in the descent image, the positions of the $B^t_{Des}$ points in the next image are predicted by back-projecting $B^t_{Des}$ onto $M_{3D}$, then projecting $M_{3D}$ with the predicted camera pose. The prediction set is calculated with:

$$Pre(B^t_{Des}) = BackProj(B^t_{Des},\, p^{t+1}_{Ls},\, M_{3D}) \qquad (13)$$
$$= BackProj(B^t_{Des},\, p^t_{Ls} + \Delta p^{t+1|t}_{Vo},\, M^t_{3D}) \qquad (14)$$

In Figure 7, the red points in image (b) represent the predicted positions of the matched points $B^t_{Des}$ in the following image, and the blue points in image (a) illustrate a subset of the Landstel matched points $B^t_{Des}$.

• Observation Set: given the set of matched points $B^t_{Des}$ returned by Landstel at time $t$, the point set is tracked to the next image at time $t+1$ with visual odometry. The observed points are represented by the green points in image (b) of Figure 7. The observation set is thus obtained with:

$$Obs(B^t_{Des}) = Track(B^t_{Des},\, im_t,\, im_{t+1}) \qquad (15)$$

Using the observation function in equation (15) and the prediction function in equation (14) (illustrated by the green and red points in Figure 7), the Landstel output is considered incorrect if the prediction set is inconsistent with the observation set. The prediction set depends only on the output of the Landstel algorithm, i.e. the estimated position and the associated 3D points; in contrast, the observation set depends only on visual odometry. Thanks to the use of image correlation aided by IMU measurements, the observation set is considered more reliable over a short period of one second.

Let $\xi$ be the difference, or innovation, between the observation set and the prediction set:

$$\xi(B^t_{Des}) = Obs(B^t_{Des}) - Pre(B^t_{Des}) \qquad (16)$$

The normalized value $\alpha$ of the innovation vector $\xi(B^t_{Des})$ is calculated as:

$$\alpha = \xi(B^t_{Des})^T \, P(B^t_{Des})^{-1} \, \xi(B^t_{Des}) \qquad (17)$$

where $P(B^t_{Des})$ is the covariance of the matched points $B^t_{Des}$³.

³ Theoretically, there are also errors (less than 1 pixel) in the VO measurements. However, the covariance of Landstel is much bigger, up to a few pixels, which largely overrides the VO measurement error.


The covariance value here indicates the precision of the Landstel matched point locations, which depends on the parameters used in the Landstel algorithm; it is calculated off-line. The value $\alpha$ is compared with a predetermined threshold to classify the Landstel output: if it is bigger than the threshold, the Landstel output is reported as erroneous and discarded. In that case, only the visual odometry information is used to propagate the global position estimate, until Landstel retrieves an absolute position estimate. The choice of the threshold is important: if it is too small, good data are discarded; if it is too large, false estimates corrupt the system.
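Equations (16) and (17) then reduce to a few lines; the observation and prediction sets are assumed to be computed beforehand with the VO tracker and the back-projection function described above:

    import numpy as np

    def landstel_is_consistent(obs_pts, pred_pts, cov, threshold):
        # obs_pts, pred_pts: N x 2 arrays (the Obs and Pre sets);
        # cov: 2N x 2N covariance of the Landstel matched points,
        # calibrated off-line.
        xi = (obs_pts - pred_pts).ravel()        # innovation, eq. (16)
        alpha = xi @ np.linalg.inv(cov) @ xi     # normalized value, eq. (17)
        return alpha <= threshold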

Nevertheless, consistency between the prediction set and the observation set is not an absolute guarantee that the Landstel output is correct. Should such a case occur, the Landstel output is considered correct and fed back, with the expectation that subsequent steps will reveal the inconsistency.

B. Landstel enhancement

Fig. 7. (a): a descent image with a subset of the Landstel matched points. (b): the next descent image with the Landstel interest points tracked by visual odometry (in green) and the predicted points (in red). For visualization purposes, only the points matched by Landstel that are successfully tracked by visual odometry are shown (23 out of 31 here).

The main purpose of the Landstel enhancement procedure is to increase the number of matched points returned by Landstel, which consequently improves its estimation precision. Given a set of matched points returned by Landstel at one instant (blue points in Figure 7), the set is tracked to the next image processed by Landstel⁴ with the "features tracking" of VO (green points in Figure 7). After having been successfully tracked, the tracked points are injected into the set of affine candidates of Landstel (output of step 5). This new candidate is considered like the other normal candidates: among all affine candidates, the one with the highest number of matches is chosen as the best candidate and used to calculate the position of the spacecraft.

There is little risk of system divergence due to sub-optimality in using VO tracked points: the VO tracked points are inserted into Landstel only as one more affine candidate, which must generate new matches with the orbital image and compete with the other candidates to be chosen. Moreover, the Landstel search zone is reduced but kept from getting too small, so that the true spacecraft position always stays within the search zone.

⁴ The low frequency of Landstel (1 Hz) requires the matched landmarks to be tracked through multiple images (N images with an N Hz VO).

V. EXPERIMENTS

A. Description

1) Test trajectory: Fig. 1(a) illustrates the coasting phase, where the lander uses Landstel to estimate its state. The lander descends from 100 km down to 15 km altitude under lunar gravity; as a result, only the embedded camera and the IMU are usable. This coasting phase lasts one hour over a flight path of 5000 km.

Due to the long flight time of the Moon coasting phase (one hour), the system is only tested for short durations at three altitudes: 100 km, 58 km and 29 km (illustrated by the three landfall maps). The three altitudes correspond to the sub-trajectories A, B and C in Figure 1(a). For each altitude, Landstel is employed for a time lapse ("trial") of 43 seconds at a 1 Hz frequency⁵.

2) Synthesized Imagery: Synthesized images generated with PANGU [9] are employed. To validate Landstel, a dozen surfaces were synthesized from Earth DEMs⁶. Figure 8 shows the two surfaces used in this paper: the left surface, denoted "NH", is synthesized by PANGU using surface parameters, while the right terrain, denoted "MB", is generated from a modified Earth DEM model. The difference between the lowest and the highest point of MB is 10 km.

Fig. 8. Left: cratered surface with 0.2 craters/km² (named "NH"). Right: mountainous terrain with 10 km elevation variation (named "MB"). Surface size is 320 × 320 km².

Two types of images are generated with PANGU: the orbital images and the descent images. The resolution of the descent image is fixed at 512 × 512 pixels, 8 bits/pixel. The camera operates at a 10 Hz frequency, with a field of view of 50 × 50 degrees. In these experiments, Landstel is set to work at a 1 Hz frequency, and the visual odometry is also set at 1 Hz. As Landstel is employed for a total duration of 129 seconds (43 × 3), there are 129 absolute estimation results and 129 descent images.

Two types of orbital images are used for the Moon scenario: orbital images with a resolution of 160 m/pixel, used on trajectories A and B, and orbital images with a resolution of 80 m/pixel, used on trajectory C. The resolution of the orbital image is chosen so that the resolution difference between orbital and descent images remains smaller than 0.5 [11]. The resolution of the descent image with a nadir-pointing camera is approximately 210 m/pixel on trajectory A (100 km), 160 m/pixel on trajectory B (58 km) and 80 m/pixel on trajectory C (29 km).

⁵ See subsection V-C.
⁶ LRO and Kaguya DEMs were not available at the time.


Name            Ele (L)   Azi (L)   Ele (N)   Azi (N)
Orbital image   5°        0°        25°       55°
Descent 1       1°        180°      25°       55°
Descent 2       1°        90°       10°       40°
Descent 3       1°        -90°      40°       40°
Descent 4       1°        45°       10°       70°
Descent 5       1°        -45°      40°       70°

TABLE III. Two illumination settings: low-light (L) and normal (N) conditions. "Ele" corresponds to sun elevation and "Azi" to sun azimuth.

3) Illumination: Table III summarizes the two illumination conditions considered in this article. The orbital illumination is fixed whereas the illumination of the descent image varies. The low-light setting resembles the illumination at the Moon's North and South Poles; the normal light setting corresponds to the illumination at the Moon's equator, where the sun is relatively high in comparison with the two poles.

4) Sensor Noise: Sensor noise is added to every descent image generated by PANGU and to the INS outputs. The horizontal velocity error is bounded by 1.5 m/s whereas the vertical velocity error is limited to 1 m/s. Gaussian white noise with a standard deviation equal to 0.5% (1.3 grey levels for an 8-bit image) is injected into the descent image. The attitude error is set to less than 1 degree.
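As an illustration, the image noise injection can be reproduced as follows (0.5% of the 8-bit full scale, i.e. about 1.3 grey levels, as stated above):

    import numpy as np

    def add_image_noise(img8, sigma_frac=0.005, rng=None):
        rng = rng or np.random.default_rng()
        noisy = img8.astype(float) + rng.normal(0.0, sigma_frac * 255,
                                                img8.shape)
        return np.clip(noisy, 0, 255).astype(np.uint8)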

5) Scenarios: Two scenarios are defined for the nominal validation. In the first scenario, the trajectory coincides with the red continuous line in Figure 1(a): the lander faces the sun at point A (i.e. a 180 degree incidence angle), and the sun is directly behind the lander around point C. In the second scenario, the lander follows the transition between the Moon's dark and bright sides, from the North Pole down to the South Pole, in a plane perpendicular to the first trajectory. The incidence angle between the sun and the lander orientation is set to 45 degrees at points A and C.

For both scenarios, the cratered surface NH is used at the two points A and C, and the mountainous surface MB is employed at point B. The low-light condition is used at points A and C, which correspond to the two poles of the Moon; normal light is applied at point B. Note that the illumination difference between the orbital and descent images is kept equivalent for the two scenarios, so as to isolate the effect of the incidence angle (the angle between the sun and the spacecraft direction); the illumination at point B is intentionally set the same for the two scenarios.

B. Results

Fig. 9 shows two results of the image matching function of Landstel; interest points with the same color in the two images indicate a match. For the top images, the elevation of the sun in the descent image is lower than in the orbital image (10 degrees versus 25 degrees), and there is a 15 degree azimuth difference between the two images. Despite these differences, Landstel still finds matches. For the bottom images, the sun elevation is 5 degrees for the orbital image and 1 degree for the descent image, and the azimuth angle is set to 180 degrees for the descent image; the direction of the shadows in the descent image therefore differs from that in the orbital image, as can be observed on the shadow of the crater.

Fig. 9. Two examples of matches provided by Landstel near the two points B (top images, MB surface) and C (bottom images, NH surface). The left images (a1) and (b1) present the orbital image; the right images (a3) and (b3) are the descent image before rectification, and (a2) and (b2) are close-ups of (a1) and (b1). The orbital image (a1) is acquired with a (25°, 55°) (elevation, azimuth) sun position; the descent image (a3) is taken with a (10°, 40°) sun position. (5°, 0°) is set for the bottom left orbital image and (1°, 180°) for the bottom right descent image.

Fig. 10. Landstel position estimation error in magnitude (left) and in altitude (right), plotted against the image number, for the two Moon coasting scenarios and the two configurations (INS or VO).

Fig. 10 shows the average estimation errors in magnitude (left) and in altitude (right) over five runs, for the two scenarios and the two configurations. In one configuration Landstel is coupled with VO; in the other, Landstel is combined with an INS. The difference between the two configurations is the integration of the feature points. On each trajectory A, B and C, VIBAN is used during a time lapse of 43 seconds, so there are in total 129 estimates for the nominal trajectory. The navigation period using the INS (or VO) between trajectories A and B, or B and C, is not shown due to the large difference in time scale (129 seconds versus 2500 seconds); the navigation error therefore shows high jumps at the 43rd image (transition A-B) and at the 86th image (transition B-C).

In this figure, we can observe that the average position error obtained with Landstel for the two scenarios and the two configurations is 1500 m at the end of trajectory A. The error in altitude increases: this is due to the high error values on the X and Y axes, while the initial error in altitude is too small to be corrected (of the order of 100 m, i.e. 0.1%, thanks to the Earth-based communications). The average altitude error obtained with VIBAN at this altitude is 400 m, i.e. 0.4% of 100 km.

At the beginning of trajectory B, both the position error and the altitude error increase due to the integration of the INS (or VO) error. The altitude error rises up to 2 km, i.e. 3.45% at 58 km altitude. Thanks to Landstel, the altitude and position errors are corrected for the two scenarios.

The altitude error is nevertheless slightly worse than the one obtained at the end of point A: the average position error is 500 m while the mean altitude error is 400 m. These two values indicate that most of the error concerns the estimation of the altitude. This phenomenon will be further analysed in the next section.

On trajectory C, the errors from the end of trajectory B are propagated with the outputs of the INS (or VO). At this point, the initial position error is 1500 m whereas the initial altitude error is 1100 m, i.e. 73.3% of the initial position error. Thanks to VIBAN, both errors are well corrected: the average altitude error at the end of trajectory C is 60 m, which corresponds to a 0.2% error (at 29 km altitude), and the average position error is approximately 300 m.

Both configurations can correct the errors due to the integration of IMU measurements, but the system delivers better performance with VO than with an INS. At point C (29 km altitude), the final position errors of scenarios 1 and 2 are respectively 278 m and 237 m for the VO configuration, and 319 m and 287 m for the INS configuration.

C. Run Time

The run time of Landstel was measured with the C/C++ implementation, in stand-alone mode: the descent image is matched against an entire orbital image. With a descent image of size 512 × 512 pixels and an orbital image of size 2048 × 2048 pixels, the average run time of Landstel is 0.2 second (5 Hz). This order of run time suggests an operating frequency of 1 Hz for Landstel on near-future space-qualified processors.

VI. CONCLUSIONS

We introduced VIBAN, a complete absolute navigation system, in which Landstel plays an important role in providing an accurate lander position. Validation results showed that the VIBAN system can be used in various conditions. Moreover, the use of image interest points greatly reduces the memory requirement: the algorithm only needs 200 KB to store an orbital image of size 2048 × 2048 pixels, whereas other systems require several megabytes to store the DEM of the same image at an equivalent resolution.

We demonstrated the performance of a loose integration system combining Landstel with an INS or VO. By coupling Landstel with an INS, we showed that a precision of about 300 m error at 30 km altitude can be obtained without a radar altimeter. In addition, the proposed application of Landstel allows the operator to choose various places, besides the landing site, as the landfall map for Landstel.

ACKNOWLEDGEMENTS

This work was supported by the European Space Agency and the Astrium company.

REFERENCES

[1] D. Adams, T. B. Criss, and U. J. Shankar. Passive optical terrain relative navigation using APLNav. In IEEE Aerospace Conference, pages 1–9, 2008.

[2] Cheng-Chih Chu. Development of advanced entry, descent, and landing technologies for future Mars missions. In IEEE Aerospace Conference, Big Sky, Montana, March 4-11, 2006.

[3] M. Fernandez. Fault Detection and Isolation in Decentralized Multisensor Systems. PhD thesis, University of Oxford, 1994.

[4] B. Frapard, B. Polle, G. Flandin, P. Bernard, C. Vetel, X. Sembely, and S. Mancuso. Navigation for planetary approach and landing. In K. Fletcher and R. A. Harris, editors, Spacecraft Guidance, Navigation and Control Systems, page 159, February 2003.

[5] Mohinder S. Grewal, Lawrence R. Weill, and Angus P. Andrews. Global Positioning Systems, Inertial Navigation, and Integration. Wiley Interscience, 2007.

[6] Shuang Li, Pingyuan Cui, and Hutao Cui. Vision-aided inertial navigation for pinpoint planetary landing. Aerospace Science and Technology, 11(6):499–506, 2007.

[7] David G. Lowe. Object recognition from local scale-invariant features. In IEEE International Conference on Computer Vision, volume 2, pages 1150–1157, August 1999.

[8] Anastasios I. Mourikis, Nikolas Trawny, Stergios I. Roumeliotis, Andrew E. Johnson, and Larry Matthies. Vision-aided inertial navigation for precise planetary landing: Analysis and experiments. In Proceedings of Robotics: Science and Systems, 2007.

[9] S. M. Parkes, I. Martin, M. Dunstan, and D. Matthews. Planet surface simulation with PANGU. In Eighth International Conference on Space Operations, Montreal, Canada, 2004.

[10] Bach Van Pham, Simon Lacroix, and Michel Devy. Landmark constellation based position estimation for spacecraft pinpoint landing. In 10th Symposium on Advanced Space Technologies in Robotics and Automation (ASTRA), Noordwijk, The Netherlands, 2008.

[11] Bach Van Pham, Simon Lacroix, Michel Devy, Marc Drieux, and Christian Philippe. Visual landmark constellation matching for spacecraft pinpoint landing. In Spacecraft Guidance, Navigation and Control Systems, Chicago, USA, 2009.

[12] Bach Van Pham, Simon Lacroix, Michel Devy, Marc Drieux, and Thomas Voirin. Landmark constellation matching for planetary lander absolute localization. In International Joint Conference on Computer Vision, Computer Graphics Theory and Applications (VISIGRAPP), pages 267–274, Angers, France, 2010.

[13] C. Philippe and A. Pradier. Autonomous safe precision landing technology: ESA achievements and challenge. In 61st International Astronautical Congress, Prague, CZ, 2010.

[14] Andre L. R. Pouponnot. A giga instruction architecture (GINA) for the future ESA microprocessor based on the LEON3 IP core. In Data Systems in Aerospace (DASIA), 2006.

[15] Stergios I. Roumeliotis, Andrew E. Johnson, and James F. Montgomery. Augmenting inertial navigation with image-based motion estimation. In IEEE International Conference on Robotics and Automation, Washington D.C., pages 4326–4333, 2002.

[16] Salah Sukkarieh. Low Cost, High Integrity, Aided Inertial Navigation Systems for Autonomous Land Vehicles. PhD thesis, University of Sydney, 2000.

[17] F. Terui, N. Ogawa, K. Oda, and M. Uo. Image based navigation and guidance for the approach phase to an asteroid utilizing captured images at the rehearsal approach. In 61st International Astronautical Congress, Prague, CZ, 2010.

[18] D. Titterton and J. Weston. Strapdown Inertial Navigation Technology. The American Institute of Aeronautics and Astronautics, second edition, 2004.

[19] Nikolas Trawny, Anastasios I. Mourikis, Stergios I. Roumeliotis, Andrew E. Johnson, and James Montgomery. Vision-aided inertial navigation for pin-point landing using observations of mapped landmarks. Journal of Field Robotics, 24(5):357–378, 2007.