
Auton Robot (2013) 34:177–188
DOI 10.1007/s10514-012-9318-8

An open-source navigation system for micro aerial vehicles

Ivan Dryanovski · Roberto G. Valenti · Jizhong Xiao

Received: 29 May 2012 / Accepted: 29 December 2012 / Published online: 7 March 2013
© Springer Science+Business Media New York 2013

Abstract This paper presents an open-source indoor navigation system for quadrotor micro aerial vehicles (MAVs), implemented in the ROS framework. The system requires a minimal set of sensors, including a planar laser range-finder and an inertial measurement unit. We address the issues of autonomous control, state estimation, path-planning, and teleoperation, and provide interfaces that allow the system to seamlessly integrate with existing ROS navigation tools for 2D SLAM and 3D mapping. All components run in real time onboard the MAV, with state estimation and control operating at 1 kHz. A major focus in our work is modularity and abstraction, allowing the system to be both flexible and hardware-independent. All the software and hardware components which we have developed, as well as documentation and test data, are available online.

Keywords Micro aerial vehicle · Open-source · Navigation · Quadrotor · State estimation · Control · Mapping · SLAM · Path-planning

Electronic supplementary material The online version of this article (doi:10.1007/s10514-012-9318-8) contains supplementary material, which is available to authorized users.

I. Dryanovski (B)
Department of Computer Science, CUNY Graduate Center,
365 Fifth Avenue, New York, NY, USA
e-mail: [email protected]

R. G. Valenti · J. Xiao
Electrical Engineering Department, CUNY City College,
Convent Avenue & 140th Street, New York, NY 10031, USA
e-mail: [email protected]

J. Xiao
e-mail: [email protected]

1 Introduction

Micro aerial vehicles (MAVs), such as quadrotor helicopters, are emerging as popular platforms for unmanned aerial vehicle (UAV) research due to their structural simplicity, small form factor, vertical take-off and landing (VTOL) capability, and high maneuverability. They have many military and civilian applications, such as target localization and tracking, three-dimensional (3D) mapping, terrain and utility inspection, disaster monitoring, environmental surveillance, search and rescue, traffic surveillance, deployment of instrumentation, and cinematography.

In recent years, numerous research efforts have been made in this field, ranging from MAV test-bed and flight control design and path planning, to 3D SLAM and multi-robot coordination. As a result, today's MAVs have improved in autonomy to the level that they can achieve autonomous exploration in structured indoor environments, waypoint following, and navigation in open space.

In this paper, we present a navigation system for a quadrotor MAV and discuss its open-source software implementation (Fig. 1). The system, which is designed to work in GPS-denied indoor environments, has been tested with the CityFlyer (Fig. 2), a research project at the City College of New York to develop an MAV that is capable of autonomous flight in a variety of 3D spaces, based on the AscTec Pelican quadrotor. All the software discussed runs on-board in real time on the 1.6 GHz Atom board processor and ARM7 microcontroller.

The first contribution of this paper is a collection of tools for MAV state estimation. The 9-component state (position, angles, and linear velocities) of the MAV is estimated at a rate of 30 Hz, and fused together with IMU readings at a rate of 1 kHz using a Kalman Filter. The laser-based framework makes the assumption that the environment is mostly rectilinear vertically and horizontally, which holds true for typical indoor scenes. The Kalman Filter (KF) fusion framework allows for external sensor updates to be provided, so the system can be extended beyond the laser sensor.

Fig. 1 Visualization of the MAV navigation system in an indoor mapping experiment

The second contribution of the paper is a collection of tools for MAV control and path-planning. We assume that the MAV comes with roll, pitch, yaw-rate, and thrust (RPYT) controllers available on the autopilot board. These are typically mapped to the four stick axes of radio remote controllers for quadrotors. We perform modified PID control for the x-, y-, z- (height), and yaw- (heading) state of the quadrotor, on top of the built-in RPYT control, at a rate of 1 kHz. The input to the control can be both position and velocity commands. Additionally, we discuss a 2D costmap-based path-planning algorithm, teleop interfaces, and a state machine providing safe take-off and landing behavior. The framework is decoupled and allows for external control commands for a given component. For example, a user can attach a modified height controller, while keeping the existing x-, y-, and yaw controllers.

The third contribution of the paper is a set of interfaces allowing the system to be used with many existing ROS components, including 2D SLAM, 2D localization, 3D mapping, and visualization. We maintain a strong policy of abstraction and hardware independence. This is accomplished through using existing ROS message and service types, and defining new general MAV messages and services where needed. To ensure compatibility with the rest of ROS and future systems, all messages are in SI units, and all frames are in the ENU convention.

The last contribution of the paper is a set of specific tools related to the AscTec Pelican platform, including firmware allowing the system to communicate with the AscTec quadrotor driver, 3D visualization models of the AscTec Pelican, and CAD designs for hardware parts.

We provide the open-source implementation of all the tools described, as well as their documentation, online.

Fig. 2 The CityFlyer MAV

Where appropriate, we have also provided data sets allowing individual components to be tested.

The paper is organized in the following way. Section 2 examines related work that has been done in this area. Section 3 presents an overview of the entire system. Section 4 describes the state estimation framework, including the laser-based scan matching, altitude estimation, and sensor fusion. Section 5 goes over the control and path-planning details. Section 6 outlines ways we have used our system in conjunction with existing mapping and SLAM frameworks. Finally, Sect. 7 summarizes our accomplishments. Readers interested in working with the system can refer to the Appendix, which outlines the key software and hardware modules and how to obtain them.

2 Previous work

In recent years, MAVs have been an increasingly popular platform for robotics research. In this paper, we focus on an onboard laser-based navigation system for indoor environments, including pose estimation, SLAM, planning, and control. Many researchers have made efforts to develop a MAV capable of performing autonomous navigation tasks in indoor or outdoor environments, using different sets of sensors. In this section, we review previous work which uses onboard sensors without the aid of external systems such as motion capture cameras (for indoor) or GPS (for outdoor).

Roberts et al. (2007) used ultrasound and infrared sensors for controlling a custom-built quadrotor in a structured, obstacle-free environment. The system is intended to use only a minimal set of sensors, so only basic tasks such as autonomous take-off and landing, altitude and anti-drift control, and collision avoidance are performed.

Zhang et al. (2009) present a multi-robot system, capable of tracking the pose of a MAV and a ground robot. Pose tracking is achieved through vision processing offboard the MAV.


Andersen and Taylor (2007) use a UKF and visual information to estimate the pose of the MAV. Data is post-processed on a desktop computer, and no onboard implementation is available.

He et al. (2008) presented an indoor navigation system for a small-sized quadrotor. The pose estimation is accomplished by an unscented KF, and a path is computed from a pre-defined map. No open-source implementation is available, and only the attitude control runs on-board.

Ferrick et al. (2012) use a Parrot AR.Drone equipped with a small laser rangefinder to construct simplified occupancy grid maps, performing wall following and obstacle avoidance. They do not provide open-source code, and the mapping and path planner run on a base station.

Shen et al. (2011) present an autonomous navigation system using both laser and visual information for a MAV in an indoor environment, pursuing the task of multi-floor mapping. However, they do not provide open-source code.

Blösch et al. (2010) present a navigation system in unknown environments based on a vision-inferred pose, which controls the MAV during the map-building process. However, no obstacle avoidance is accomplished.

The PIXHAWK system proposed by Meier et al. (2012) is a MAV provided with a powerful onboard computer, able to run both vision-based flight control and stereo-vision-based obstacle detection in parallel. Their vision-based localization exploits an ARK in a test bed arranged with visual markers. Position control, similar to the one proposed in this paper, is provided. However, no velocity control is available and no mapping is computed. The entire system is intended to run in a middleware available online, but not as widely used as ROS.

Grzonka et al. (2012) present a comprehensive open-source laser-based navigation system for MAVs in indoor environments. This system handles pose estimation, mapping, localization, and control, with only the device drivers running on-board. A similar approach to position control is adopted. However, since pose estimation and position control run off-board at 10 Hz, performance is significantly degraded.

Bachrach et al. (2011) present an alternative comprehensive system for performing autonomous exploration and map acquisition in indoor environments. Pose estimation is achieved through a fusion of scan matching and visual odometry. A Rao-Blackwellized particle filter based 2D SLAM is performed, and an algorithm for frontier-based autonomous exploration is used. Four independent PID controllers for position and heading provide commands for the low-level attitude and thrust controllers. SLAM and path planning are not performed on-board, and the implementation is not addressed or published under an open-source license.

Achtelik et al. (2011) present a similar system for vision-based instead of laser-based navigation. They achieve fast bias-free pose estimation and control of the MAV by fusing visual information from a monocular camera with IMU data, and then use this data to control the MAV by a nonlinear dynamic inversion method. They address the problem of navigating both in indoor and outdoor environments, but do not address obstacle avoidance. Their architecture is similar to the architecture presented in this paper, including an open-source release of ROS system components. The fundamental difference between the works is that the system we present relies on laser data, while their system relies on a monocular camera.

Fig. 3 Complete system diagram. The system is distributed between an optional ground station, an onboard CPU, and the Autopilot board. The onboard CPU calculates odometry, path planning, and SLAM. The Autopilot board performs the 1 kHz sensor fusion and control. Components shown in white represent contributions from the authors. Components shown in orange are contributed by other sources (device and quadrotor drivers, 3rd-party ROS packages)

3 System architecture

The complete block diagram of our system architecture is shown in Fig. 3. The framework is distributed between a ground station, an onboard CPU, and onboard microcontrollers. We use the AscTec Pelican system, which has a 1.6 GHz Atom Board processor and two ARM7 microcontrollers, of which we utilize one for our firmware.

The ground station is only used for visualization, and to allow a user to teleoperate the MAV (using a joystick or by clicking on the map shown on the screen). Thus, it is a non-critical component.

The onboard CPU handles odometry, mapping, localization, and path-planning. All software running on the CPU operates in the ROS framework. The different components are implemented as nodelets, a ROS mechanism for threaded execution allowing for zero-copy message transport. The flyer interface serves as a generic bridge between ROS and the microcontroller, translating messages and services to packets sent over a serial connection. The implementation of the two-way serial communication is adapted in part from the implementation of Achtelik et al. (2011).

The Autopilot microcontroller handles sensor fusion through a Kalman Filter, and control. The implementation (written in C) is provided by the authors. The implementation is minimally coupled with the AscTec hardware, through a set of functions which translate SI units to the custom units required by the quadrotor drivers, and functions to read and write data from the quadrotor. Therefore, the code should be easily adaptable to other hardware platforms as well.

4 State estimation

The state estimation system is presented in detail in Fig. 3. The IMU provides readings for the roll (φ), pitch (θ), and yaw (ψ) angles of the MAV in the inertial frame, as well as the linear accelerations (a_x, a_y, a_z), measured in the body frame. During the scan projection step, laser data is orthogonally projected onto the horizontal xy-plane by using the roll and pitch angles of the last state. The projected data is then passed to a scan matcher. The scan matcher uses the yaw angle reading from the IMU as an initial guess for the orientation of the MAV. The output of the scan matcher includes the x, y, and yaw angle pose components.

A laser altimeter uses the laser scan readings of the floor, as well as the roll and pitch orientation of the MAV, to estimate the altitude (z) of the vehicle. The readings are obtained by using a mirror to deflect a portion of the horizontal laser scanner's beams downwards. Histogram and threshold filters are applied to detect discontinuities in the floor, assuming the floor is piecewise-linear.

A rough estimate of the linear velocities is computed from the derivative of the position readings, and passed through an alpha-beta filter (αβF). The estimates from the various components are fused using a KF to produce the final state.

The following subsections describe each step in detail.

4.1 Scan matching

One of the performance-critical steps used in our state estimation technique is scan matching. A common approach to scan matching is to match each scan to the previous scan, and incrementally compute the global pose. This approach results in fast performance, at the cost of incurring drift errors over time. The scan matching is typically performed using a variation of the Iterative Closest Point (ICP) algorithm (Zhang 1994). In recent years, significant results have been achieved in the field of real-time 3D point cloud matching. An overview of ICP variants is provided by Rusinkiewicz and Levoy (2001). A study of the parameters affecting ICP performance is presented in Pomerleau et al. (2011).

While scan matchers have been a staple in many robotics localization algorithms, we found that adapting a scan matcher to be used for onboard state estimation is a hard problem. We present the steps taken to reduce the problem to 2D scan matching, and details which allow us to obtain high performance (30 Hz) and better accuracy using the limited computational abilities of the MAV CPU.

Our pose estimation relies on the assumption that in indoor environments, obstacles usually appear the same in 2D regardless of the height at which they are observed, due to the rectilinear property. This means that changes in the altitude of the vehicle do not affect the accuracy or output of the scan matcher. We also assume that the environments have sufficient geometric details. The pose estimation will fail when observing:

– environments with highly-nonrectilinear vertical surfaces, such as staircases or inclined walls. Regular furniture or slight wall irregularities do not compromise robustness.

– environments with high degrees of symmetry and no geometric features, such as empty hallways longer than the laser range, or circular empty rooms.

Our implementation is based on the PL-ICP algorithm (Censi 2008). PL-ICP uses a point-to-line metric to solve the iterated corresponding point problem. We provide two extensions to the algorithm, not addressed in the original work.

The first extension is using a constant-velocity model instead of a zero-velocity model during the scan matching. A good way to speed up the convergence of ICP is to provide the algorithm with an initial guess of the change of pose between the scans. Instead of making the common assumption that the robot did not move between the scans, we estimate its instantaneous velocity and use it as a prediction of the current pose. Estimating this velocity vector can be achieved in a naive way by taking the time derivative of the robot's position. Alternatively, the velocity can be estimated using a KF, where the time derivative is fused with acceleration readings from an IMU. If no acceleration readings are available, a simpler filter that can be used is the αβ-filter. This resulted in faster convergence, and consequently, higher update frequencies of the scan matcher.
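To make the constant-velocity prediction concrete, the following minimal sketch (ours, not the released scan_tools code) shows how a filtered velocity estimate could seed the PL-ICP initial guess; the type and function names are illustrative.

```cpp
#include <cmath>

// Minimal sketch of a constant-velocity initial guess for the scan
// matcher. All names are illustrative, not from the released code.
struct Pose2D { double x, y, yaw; };
struct Velocity2D { double vx, vy, vyaw; };

// Predict the current pose from the last matched pose, the filtered
// velocity estimate, and the time elapsed since the last scan. The
// result is handed to PL-ICP as its starting transform, replacing the
// usual zero-motion assumption.
Pose2D predictPose(const Pose2D& last, const Velocity2D& vel, double dt) {
  Pose2D guess;
  guess.x   = last.x   + vel.vx   * dt;
  guess.y   = last.y   + vel.vy   * dt;
  guess.yaw = last.yaw + vel.vyaw * dt;
  // Keep yaw in (-pi, pi] so the rotation search stays well-behaved.
  guess.yaw = std::atan2(std::sin(guess.yaw), std::cos(guess.yaw));
  return guess;
}
```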


Fig. 4 An overview of the laser projection step, showing the real laser, with frame laser, the projected laser, with frame ortho, the actual laser data (scan), and the projected laser data (scan∗)

The second extension is to introduce keyframe scans into the scan matching process. In the classical implementation, each scan is aligned against its immediate preceding scan, and any small error in the scan matching is accumulated over time. Thus, even if the vehicle is relatively stationary, the pose can drift without bound. In our approach, we align each scan against a single previous keyframe scan. The keyframe scan is updated once the vehicle moves a certain distance (for example, 20° or 20 cm). This approach greatly reduces the overall drift in the estimation. When the vehicle is stationary, the keyframe scan remains the same, and thus the estimation error of the position is bounded.
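A minimal sketch of the keyframe update rule follows, assuming the example thresholds from the text (20 cm, 20°); the function name and signature are ours, not from the released implementation.

```cpp
#include <cmath>

// Sketch of the keyframe update rule: keep matching against a fixed
// keyframe scan until the vehicle has moved far enough, then promote
// the current scan to be the new keyframe.
constexpr double kPi = 3.14159265358979323846;

struct Pose2D { double x, y, yaw; };

bool needNewKeyframe(const Pose2D& keyframe, const Pose2D& current,
                     double distThresh = 0.20,                // meters
                     double angleThresh = 20.0 * kPi / 180.0) {
  double dyaw = current.yaw - keyframe.yaw;
  dyaw = std::atan2(std::sin(dyaw), std::cos(dyaw));  // wrap to (-pi, pi]
  return std::hypot(current.x - keyframe.x, current.y - keyframe.y) > distThresh
      || std::fabs(dyaw) > angleThresh;
}
```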

4.2 Scan projection

If we know the 3D orientation of the laser scanner, we can reduce the problem to 2D scan matching and perform scan matching on sequences of projected scans instead of raw scans. The two steps required for this are projecting the moving frame, and projecting the laser data. The projected scans and frame further allow us to interface with existing 2D implementations of SLAM, localization, and navigation. An overview of the projection is presented in Fig. 4. We use a star superscript (∗) to denote virtual (orthogonally projected) data and frames.

Let B denote the position of the MAV base in the fixed frame of reference, denoted by world. The transformation between the world frame and the base frame is $T^{base}_{world}$, and is the output of the pose estimation system.

We define a virtual MAV base with position B∗ and coordinate system ortho, corresponding to the orthogonal projection of B onto the xy-world plane. The transformation $T^{ortho}_{world}$ between the world and ortho frames can be obtained by discarding the z, φ, and θ components of $T^{base}_{world}$, using the last output available from the pose estimation.

Note that we project the MAV base frame, and not the laser frame itself. This is because after projection, the laser's x- and y-position with respect to the base frame is not constant, but depends on the roll and pitch motion of the MAV. This effect is not present in robots moving in 2D. It becomes more noticeable the further the laser is mounted from the MAV's center of rotation. To account for this, we express the scan in the coordinates of the vehicle, and not in the coordinates of the sensor.

Let L denote the position of the laser in the world frame, and laser be the coordinate frame attached to it. In order to perform scan matching with respect to the virtual ortho frame, we need to transform the laser scan data from the laser frame (attached to the laser scanner) to the ortho frame. We begin by expressing the position of the laser in the world frame:

$$T^{laser}_{world} = T^{base}_{world} \cdot T^{laser}_{base} \qquad (1)$$

where $T^{laser}_{base}$ is determined by the vehicle configuration.

The laser readings arrive in the form $\{r_i, \beta_i\}$, where $r_i$ are range readings, and $\beta_i$ are the angles at which the readings were taken. The points detected by the laser thus have coordinates $[P_i]_{laser} = [r_i \cos\beta_i,\; r_i \sin\beta_i,\; 0]^T$. Transforming that into the world reference frame, we get

$$[P_i]_{world} = T^{laser}_{world} \cdot [r_i \cos\beta_i,\; r_i \sin\beta_i,\; 0]^T \qquad (2)$$

We then obtain the orthogonally projected points, first in the world frame, then in the ortho frame:

$$[P^*_i]_{world} = [P_{ix},\; P_{iy},\; 0]^T \qquad (3a)$$

$$[P^*_i]_{ortho} = \left(T^{ortho}_{world}\right)^{-1} \cdot [P^*_i]_{world} \qquad (3b)$$

We perform scan registration with respect to the ortho frame. The rigid transformation which best aligns two consecutive sets of laser data gives us the instantaneous change in the x-, y-, and ψ-pose of B∗ in the world frame. This, in turn, is equivalent to the instantaneous change in the x-, y-, and ψ-pose of B in the world frame.
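The projection chain of Eqs. (1)–(3b) can be summarized in code. The sketch below is our illustration, using a hand-rolled transform type where a real implementation would likely use tf or Eigen; all names are assumptions.

```cpp
#include <cmath>
#include <vector>

// Our illustration of the projection chain of Eqs. (1)-(3b).
struct Vec3 { double x, y, z; };
struct Transform {
  double R[3][3];  // rotation matrix
  Vec3 t;          // translation
  Vec3 apply(const Vec3& p) const {
    return { R[0][0]*p.x + R[0][1]*p.y + R[0][2]*p.z + t.x,
             R[1][0]*p.x + R[1][1]*p.y + R[1][2]*p.z + t.y,
             R[2][0]*p.x + R[2][1]*p.y + R[2][2]*p.z + t.z };
  }
};

// ranges[i] is r_i and angles[i] is beta_i. laserToWorld is T^laser_world
// from Eq. (1); worldToOrtho is the inverse of T^ortho_world in Eq. (3b).
std::vector<Vec3> projectScan(const std::vector<double>& ranges,
                              const std::vector<double>& angles,
                              const Transform& laserToWorld,
                              const Transform& worldToOrtho) {
  std::vector<Vec3> projected;
  projected.reserve(ranges.size());
  for (size_t i = 0; i < ranges.size(); ++i) {
    // [P_i]_laser = [r cos(beta), r sin(beta), 0]^T
    Vec3 p{ranges[i] * std::cos(angles[i]),
           ranges[i] * std::sin(angles[i]), 0.0};
    Vec3 pw = laserToWorld.apply(p);              // Eq. (2)
    pw.z = 0.0;                                   // Eq. (3a): flatten
    projected.push_back(worldToOrtho.apply(pw));  // Eq. (3b)
  }
  return projected;
}
```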

4.3 Robust height estimation

The laser altimeter has been used previously by Bachrach et al. (2011) and Grzonka et al. (2009). We describe it here with the addition of a robust filtering technique.

Part of the laser rays from a horizontally mounted laser scanner are deflected towards the ground using a mirror. The measurement to the floor is corrected to account for the roll and pitch angles of the MAV. The laser altimeter assumes that the floor surface is piecewise-linear. If a large measurement discontinuity is detected, the system assumes that there is a discontinuity in the floor: the MAV has moved over a surface with a different height. In this way, we can simultaneously track the vehicle altitude and the floor level.


Fig. 5 Readings from a deflected laser scanner as the vehicle is traveling over an edge. Left (1): readings fall on the floor surface. Middle (2): readings are distributed between the floor and an elevated surface, with some readings incorrectly interpolated in-between. Right (3): readings fall on the elevated surface

A problem with this technique is that certain laser scanners, including popular Hokuyo models, may provide an interpolated reading when the laser beam falls on the edge of a discontinuity (see Fig. 5). Thus, readings taken of a discontinuous surface get smoothed out to a continuous curve, and thresholding on the measurement jumps fails to detect the object edge properly.

To solve this problem, we deflect multiple laser beams down. Next, we compute a histogram of the z readings, discretizing them with a certain bin size. In our experiments, we used 20 beams and a bin size of 2 cm. We next identify the mode of the readings (the peak of the histogram), and compute the average of all readings belonging to that bin. In this manner, we can robustly identify reading inliers and outliers as the vehicle is moving across an edge on the floor (Fig. 5). The results of the height estimation can be seen in Fig. 6, which shows the estimated versus ground-truth altitude (from a VICON motion capture system) of the MAV, as well as the estimated floor level, as the vehicle travels forward and backward over several surfaces.
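A minimal sketch of the histogram filter described above, assuming the 2 cm bin size from the text; the function name and structure are ours.

```cpp
#include <cmath>
#include <map>
#include <vector>

// Sketch of the histogram-based altitude filter: bin the z readings
// from the deflected beams, take the mode bin, and average only the
// readings falling into it, rejecting interpolated edge readings as
// outliers.
double robustAltitude(const std::vector<double>& zReadings,
                      double binSize = 0.02) {
  if (zReadings.empty()) return 0.0;  // no beams hit the floor

  // Histogram: bin index -> readings that fall into that bin.
  std::map<long, std::vector<double>> bins;
  for (double z : zReadings)
    bins[static_cast<long>(std::floor(z / binSize))].push_back(z);

  // The mode is the bin holding the most readings.
  const std::vector<double>* mode = nullptr;
  for (const auto& kv : bins)
    if (!mode || kv.second.size() > mode->size()) mode = &kv.second;

  // Average the inliers belonging to the mode bin.
  double sum = 0.0;
  for (double z : *mode) sum += z;
  return sum / static_cast<double>(mode->size());
}
```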

Mounting one mirror pointing down and another one pointing up allows the MAV to estimate the distances to both the ceiling and the floor. This can be accomplished simply by running two instances of the same software described above. The position of the ceiling plane with respect to the MAV can be useful for obstacle avoidance.

We have tested the system using two different laser scanners: a Hokuyo URG-04LX-UG01 and a Hokuyo UTM-30LX. We have released 3D CAD files of the mirror mounting hardware, available online.

4.4 Sensor fusion

In order to estimate the state and reduce the amount of noise from the sensor readings, we use a cascade of two filters: an alpha-beta filter (αβF) and a KF. The αβF is a simplified case of the KF which does not maintain its optimal functionality, but has the advantage of being simple and fast, since it does not require a system model. The αβF assumes that the system has two states, where the first state is the integration of the second state (for example, position and velocity).

Fig. 6 Experimental verification of the robust height estimation algorithm. The MAV is flown over several flat objects of varying heights. The algorithm tracks the estimated floor level (blue) and vehicle altitude (red). The vehicle altitude is compared to ground truth data (black) from a motion capture system. (In this experiment, the MAV is flown at an arbitrary height, and does not attempt to maintain a certain altitude.)

The αβF is composed of the two typical steps of a KF, prediction and correction:

Prediction:

$$\hat{x}_{k|k-1} = \hat{x}_{k-1|k-1} + \Delta T \cdot \hat{v}_{k-1|k-1} \qquad (4)$$

$$\hat{v}_{k|k-1} = \hat{v}_{k-1|k-1} \qquad (5)$$

Correction:

$$r_k = x_k - \hat{x}_{k|k-1} \qquad (6)$$

where $r_k$ is the residual (or innovation) and represents the prediction error, calculated as the difference between the new measurement and the estimated position. It is used to correct the two states:

$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + \alpha \cdot r_k \qquad (7)$$

$$\hat{v}_{k|k} = \hat{v}_{k|k-1} + \frac{\beta}{\Delta T} \cdot r_k \qquad (8)$$

where $\alpha, \beta \in [0, 1]$. Lower values of $\alpha$ and $\beta$ result in a smoother estimate, but with a longer delay.

The main reason why the αβF is used is to obtain a less noisy first estimate of the linear velocity, in order to use it as a correction in the KF, instead of a simple derivative of the laser position, which increases the amplitude of the noise. The estimation is carried out at 30 Hz.
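For concreteness, Eqs. (4)–(8) translate into only a few lines of code. The sketch below is ours; the gain values shown in the usage comment are placeholders, not the tuned flight values.

```cpp
// Sketch of the alpha-beta filter of Eqs. (4)-(8), used to obtain a
// smoothed velocity from the 30 Hz scan-matcher positions before the
// Kalman correction.
struct AlphaBetaFilter {
  double alpha, beta;   // gains in [0, 1]; lower = smoother, more delay
  double x = 0.0;       // position estimate
  double v = 0.0;       // velocity estimate

  // One prediction + correction cycle with measurement z and period dT.
  void update(double z, double dT) {
    // Prediction (Eqs. 4-5): integrate velocity; velocity held constant.
    x += dT * v;
    // Correction (Eqs. 6-8): the residual drives both states.
    const double r = z - x;
    x += alpha * r;
    v += (beta / dT) * r;
  }
};

// Usage (placeholder gains): AlphaBetaFilter f{0.7, 0.3};
// f.update(measuredX, 1.0 / 30.0);
```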

Once we have estimated the position, linear velocities, and yaw angle, we pass these values to the Autopilot, where they are fused with the IMU readings at a rate of 1 kHz. We use a KF similar to the one described by Achtelik et al. (2011), where the acceleration readings from the IMU, expressed with respect to the body coordinate frame, are transformed by a rotation matrix to be expressed with respect to the world coordinate frame, and then corrected for gravity. In the filter design, all three axes are considered decoupled. In this way, three smaller KFs (one for each axis) perform estimation of position and velocity, using prediction by integrating acceleration, and correction by the position and velocity laser data pre-filtered with the αβF. A fourth KF performs estimation of the yaw angle, integrating the high-frequency yaw-rate from the gyroscope, which is then corrected by the yaw angle reading from the laser. The states, inputs, and measurements of the KFs are the following:

$$x^{(1)} = [x\ \ v_x]^T \quad u^{(1)} = [a_x] \quad z^{(1)} = [x_L\ \ v_{xL}]^T \qquad (9)$$

$$x^{(2)} = [y\ \ v_y]^T \quad u^{(2)} = [a_y] \quad z^{(2)} = [y_L\ \ v_{yL}]^T \qquad (10)$$

$$x^{(3)} = [z\ \ v_z]^T \quad u^{(3)} = [a_z] \quad z^{(3)} = [z_L\ \ v_{zL}]^T \qquad (11)$$

$$x^{(4)} = [\psi] \quad u^{(4)} = [\omega_z] \quad z^{(4)} = [\psi_L] \qquad (12)$$

The linear discrete motion model used to describe the system is shown below:

$$x_k = A_k \cdot x_{k-1} + B_k \cdot u_k \qquad (13)$$

$$y_k = H \cdot x_k \qquad (14)$$

where for $x^{(1)}$, $x^{(2)}$, and $x^{(3)}$:

$$A = \begin{bmatrix} 1 & \Delta T \\ 0 & 1 \end{bmatrix} \quad B = \begin{bmatrix} \frac{\Delta T^2}{2} \\ \Delta T \end{bmatrix} \quad H = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \qquad (15)$$

and for $x^{(4)}$:

$$A = [1] \quad B = [\Delta T] \quad H = [1] \qquad (16)$$
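A sketch of one decoupled translational-axis KF with the A, B, and H of Eq. (15) is shown below. This is our illustration, with placeholder noise parameters and a simplified sequential correction; it is not the flight firmware.

```cpp
// Our sketch of one decoupled translational-axis KF (Eqs. 13-15): state
// [position, velocity], input the gravity-corrected world-frame
// acceleration, measurement the alpha-beta-filtered laser position and
// velocity. Noise values are placeholders, not the tuned flight values.
struct AxisKF {
  double x = 0.0, v = 0.0;            // state estimate
  double P[2][2] = {{1, 0}, {0, 1}};  // state covariance
  double q = 0.01;                    // process noise scale (placeholder)
  double rx = 0.05, rv = 0.1;         // measurement noise (placeholder)

  // Prediction with A = [1 dT; 0 1], B = [dT^2/2; dT] from Eq. (15).
  void predict(double a, double dT) {
    x += dT * v + 0.5 * dT * dT * a;
    v += dT * a;
    // P <- A P A^T + Q, using Q = q*I for brevity.
    const double p00 = P[0][0] + dT * (P[0][1] + P[1][0]) + dT * dT * P[1][1] + q;
    const double p01 = P[0][1] + dT * P[1][1];
    const double p10 = P[1][0] + dT * P[1][1];
    const double p11 = P[1][1] + q;
    P[0][0] = p00; P[0][1] = p01; P[1][0] = p10; P[1][1] = p11;
  }

  // Correction with H = I: position and velocity are measured directly.
  // Simplified here to two scalar updates, ignoring the cross-covariance
  // terms for brevity; a full implementation would do the joint update.
  void correct(double zx, double zv) {
    const double kx = P[0][0] / (P[0][0] + rx);
    const double kv = P[1][1] / (P[1][1] + rv);
    x += kx * (zx - x);
    v += kv * (zv - v);
    P[0][0] *= (1.0 - kx);
    P[1][1] *= (1.0 - kv);
  }
};
```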

5 Autonomous flight

5.1 Flight state machine

In order to achieve safe and robust flight behavior, we present a state machine which handles the transitions in and out of autonomous flight mode (Fig. 7).

The system starts in the Off state, in which the power to the motors is cut off. An engage command advances the state to Engaging, where the Autopilot attempts to communicate with all four motor controllers and turn them on. If this step succeeds, the system enters the Idle state, where the motors are powered on and rotating at a minimum idle speed. A similar disengage procedure can transition between the Idle state and the Off state.

Once in the Idle state, a takeoff command enables the autonomous control and advances the MAV to the Flying state. While in that state, the MAV can be in different control modes (Position, Velocity, Direct, described in the next section). A land command transitions the state back to Idle. There is an intermediate state called Landing, during which the MAV is held at the same spot, while the thrust is gradually (over the course of several seconds) decreased linearly from its current value down to 0. Note that this is an open-loop landing behavior, usually initiated once the MAV is within a safe touch-down distance of 10–15 cm above the ground.

Fig. 7 State machine describing the flight states of the MAV

If any of the timed transitions fail, or if the vehicle is given an e-stop command, the system enters an Error state. Power to the motors is immediately shut off, and the MAV must be restarted.
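The transitions of Fig. 7 can be summarized as a small transition function. The sketch below is our simplified rendering, with a synthetic Done event standing in for the timed transitions; the real firmware waits on the motor controller responses and on the thrust ramp.

```cpp
// Our simplified rendering of the Fig. 7 flight state machine.
enum class FlightState { Off, Engaging, Idle, Flying, Landing, Error };
enum class Command { Engage, Disengage, Takeoff, Land, EStop, Done };

FlightState step(FlightState s, Command c) {
  if (c == Command::EStop) return FlightState::Error;  // motor power cut
  switch (s) {
    case FlightState::Off:
      return c == Command::Engage ? FlightState::Engaging : s;
    case FlightState::Engaging:   // all four motor controllers responded
      return c == Command::Done ? FlightState::Idle : s;
    case FlightState::Idle:
      if (c == Command::Disengage) return FlightState::Off;
      if (c == Command::Takeoff)   return FlightState::Flying;
      return s;
    case FlightState::Flying:
      return c == Command::Land ? FlightState::Landing : s;
    case FlightState::Landing:    // open-loop thrust ramp finished
      return c == Command::Done ? FlightState::Idle : s;
    default:
      return FlightState::Error;  // Error is terminal until restart
  }
}
```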

5.2 Control

The control system provides four different control modes for different purposes. The modes are: Disabled, Direct, Position, and Velocity.

Disabled mode disables the high-level control in the Autopilot. This mode is used for manual flight using the remote controller (RC), which sends commands directly to the Autopilot low-level RPYT controller.

Direct mode bypasses the high-level control, allowing users to send commands directly to the Autopilot low-level RPYT controller from an external source. This mode permits users to add their own controller, which can run on the onboard CPU or a workstation.

Position mode only enables the position-based high-level control. It can be used for autonomous navigation with the path planner, which generates a sequence of waypoints.

Velocity mode only enables the velocity-based high-level control. We perform velocity control separately from the position control, in order to be able to switch from one controller to another and send different kinds of commands. For example, we use velocity mode to fly in teleoperation using the joystick, sending velocity commands: the quadrotor does not need to reach a waypoint, but just follows the desired velocity, allowing smooth and constant flight.

We implement position and velocity control separately for each of the x, y, and z axes, and position control for the yaw axis; yaw-rate control is already provided by the quadrotor low-level RPYT controller. Therefore, since the axes are decoupled, we can choose different modes for each axis; for example, we can have velocity mode for x, y, and yaw and position mode for z, or direct mode for the first three with z in direct mode, etc.

Fig. 8 PI-D controller with velocity feedback

Fig. 9 Left: position control performance showing commanded versus estimated position for x, y, z. Right: top view of the xy trajectory during the experiment

Fig. 10 Position control with a sequence of waypoints, varying x, y, and yaw. Command is shown in red, and estimated state in blue

The position controller is based on a cascade structure of two loops, where the outer position loop generates commands to the inner RPYT controller provided by the AscTec Autopilot. Roll and pitch commands are generated by the outer loop as references to the inner attitude controller, in order to reach and maintain the desired y and x positions respectively, and thrust commands are generated by the outer controller to control the height by sending desired thrust values. The position controller is based on a modified PID structure (PI-D), as shown in Fig. 8. The PI block is applied directly to the error, computed as the difference between the desired position and the current position, and its output is added to the output of the D term, which is applied not to the derivative of the error but to the negative feedback of the velocity value. From a mathematical point of view, applying the $K_d$ term to the negative velocity is equivalent to applying it to the derivative of the error as long as the waypoint is constant, as shown below:

$$\frac{de(t)}{dt} = \frac{d[x_d - x(t)]}{dt} = \frac{dx_d}{dt} - \frac{dx(t)}{dt} = -v(t) \qquad (17)$$

During flight, the waypoints are constant most of the time, and change only when the quadrotor reaches the current waypoint. This improves the performance of the controller because it avoids the phenomenon called "derivative kick", which is the spike due to the derivative of the error during an instantaneous change of the setpoint value. Furthermore, the velocity estimation is more accurate and less noisy than a simple derivative. The complete position controller has the following form:

$$u(t) = K_p e(t) + K_i \int_0^t e(\tau)\, d\tau + K_d [-v(t)] \qquad (18)$$

Fig. 11 Path planning stages. Left: a user selects a goal for the MAV. The grid-based costmap and path are generated by the ROS navigation stack. Middle: the plan is decomposed into straight-line, collision-free segments. Right: a sequence of waypoints is generated for the MAV to follow, including rotate-in-place waypoints

Fig. 12 A 3D map of a corridor, obtained by flying the MAV along a path of 25 m, while changing the altitude slightly. A grid with cell size 1 m is provided for reference

The velocity controller, as well as the yaw controller, are simple PI controllers. Figure 9 shows position plots of the MAV in a hover experiment, where it maintains a position around the desired point with oscillations smaller than 20 cm for the x- and y-axes and around 5 cm for z, for a period close to 4 min. Figure 10 shows the quadrotor position as it follows a sequence of waypoints with varying x-, y-, and yaw components.
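For reference, the PI-D law of Eq. (18) with velocity feedback is only a few lines; the sketch below is ours, with placeholder gains and no output saturation or integrator anti-windup, which a flight implementation would need.

```cpp
// Our sketch of the PI-D law of Eq. (18): the D term acts on the
// negative velocity feedback rather than on the error derivative,
// avoiding "derivative kick" when the setpoint jumps.
struct PIDPositionController {
  double kp, ki, kd;       // controller gains (placeholders)
  double integral = 0.0;   // accumulated integral of the error

  // setpoint: desired position; pos, vel: current state estimate.
  double update(double setpoint, double pos, double vel, double dT) {
    const double e = setpoint - pos;              // position error
    integral += e * dT;                           // I term
    return kp * e + ki * integral + kd * (-vel);  // D on -velocity
  }
};
```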

5.3 Path planning

Path planning and obstacle avoidance are performed using a grid-based costmap. An overview of the process is shown in Fig. 11. First, a costmap with inflated obstacles is computed, using the map produced by the laser data. A user can click anywhere on the costmap, and a path from the MAV to the goal is computed using Dijkstra's algorithm. This functionality is provided by the ROS navigation stack.

The second step is to decompose the plan into a sequence of straight-line segments, each of which is collision-free. This is done by a greedy recursive divide-and-conquer algorithm, presented in Algorithm 1. The algorithm takes a set of sequential grid cells P (the original plan), and returns a set of grid cells corresponding to the end-points of safe, collision-free straight lines.

The third step is to generate a sequence of waypoints from the decomposed plan. Each waypoint contains $[x, y, \theta]^T$ coordinates. Waypoints are generated so that the quadrotor is always traveling with its x-axis close to the direction of motion. If the next waypoint heading is similar to the current one (in our implementation, within 20°), then the MAV is allowed to move forward and adjust its heading simultaneously. If a waypoint is at a heading sufficiently different from the current one, an intermediate "turn-on-a-dime" waypoint is generated, which forces the MAV to rotate in place before moving forward.
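A minimal sketch of this heading rule follows, assuming the 20° threshold from the text; the waypoint structure and function name are ours.

```cpp
#include <cmath>
#include <vector>

// Sketch of the heading rule: if the next segment's heading is within
// the threshold of the current heading, translate and rotate at the
// same time; otherwise emit a rotate-in-place waypoint first.
constexpr double kPi = 3.14159265358979323846;

struct Waypoint { double x, y, theta; };

std::vector<Waypoint> nextWaypoints(const Waypoint& current,
                                    double goalX, double goalY,
                                    double thresh = 20.0 * kPi / 180.0) {
  // Heading of the straight-line segment toward the goal position.
  const double heading = std::atan2(goalY - current.y, goalX - current.x);
  double dyaw = heading - current.theta;
  dyaw = std::atan2(std::sin(dyaw), std::cos(dyaw));  // wrap to (-pi, pi]

  std::vector<Waypoint> out;
  if (std::fabs(dyaw) > thresh)
    out.push_back({current.x, current.y, heading});  // turn in place first
  out.push_back({goalX, goalY, heading});            // then move forward
  return out;
}
```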

The straight-line trajectory to the next waypoint is continuously checked for collision, in case of dynamic objects; if a collision is detected, the plan is recomputed.

Algorithm 1 pathDecomp(P)
1: // P is a set of (x, y) cells. The size of P is N.
2: N ← ‖P‖
3: if collisionFreeLine(P_1, P_N) then
4:   return {P_1, P_N}
5: else
6:   P⁻ ← {P_1, ..., P_{N/2}}
7:   P⁺ ← {P_{N/2}, ..., P_N}
8:   return pathDecomp(P⁻) ∪ pathDecomp(P⁺)
9: end if
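For readers who want to run Algorithm 1 directly, the following is our C++ rendering. The collision check is passed in as a callback, since it depends on the costmap; the sketch assumes (as a grid plan guarantees) that adjacent cells are always collision-free, which ensures termination.

```cpp
#include <functional>
#include <vector>

// Our runnable rendering of Algorithm 1 (0-based indices). The costmap
// query is injected as a callback; it must return true when the straight
// segment between two cells is collision-free.
struct Cell { int x, y; };
using LineCheck = std::function<bool(const Cell&, const Cell&)>;

std::vector<Cell> pathDecomp(const std::vector<Cell>& P,
                             const LineCheck& collisionFreeLine) {
  const size_t n = P.size();
  if (collisionFreeLine(P.front(), P.back()))
    return {P.front(), P.back()};     // whole span is one safe segment
  // Split at the midpoint and recurse; the middle cell is shared by both
  // halves so the resulting segments stay connected end to end.
  std::vector<Cell> left(P.begin(), P.begin() + n / 2 + 1);
  std::vector<Cell> right(P.begin() + n / 2, P.end());
  std::vector<Cell> a = pathDecomp(left, collisionFreeLine);
  const std::vector<Cell> b = pathDecomp(right, collisionFreeLine);
  a.pop_back();                       // drop the duplicated shared cell
  a.insert(a.end(), b.begin(), b.end());
  return a;
}
```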

5.4 Teleoperation

We provide a generic teleoperation interface for the MAV, allowing the user to send velocity commands, position commands, and state transitions, together with an implementation of a teleop client using a PlayStation joystick controller. In our experiments, we observed that the most intuitive control scheme is to use velocity control for the x, y, and yaw axes, and position control for the z axis. The user can send velocity commands using the thumb controllers, while the MAV is autonomously kept at a fixed height. Sending zero-velocity commands for x and y provides precise stops with very little drift. The resulting effect is the illusion of controlling a holonomic ground vehicle which glides around at a certain altitude (also adjustable through the PS3 controller).

Fig. 13 2D map of the first floor of Steinman Hall at The City College of New York, computed onboard the MAV

Fig. 14 3D map of the first floor of Steinman Hall at The City College of New York. The 3D map built by the robot is in the form of a height-colored point cloud

The teleop control mode can work in cooperation with the point-and-click method of sending waypoints. At any given time, the user can issue a "hold" command with the joystick controller, clearing the planner commands and freezing the vehicle at its current position until further commands arrive. The "hold" command, together with the "e-stop" command, provides basic safety functionality that we believe should be part of any MAV system with a mixture of autonomous and teleoperated controllers.

6 Applications

In the previous sections, we presented experiments demonstrating the state estimation, control, and path-planning capabilities of the system. Next, we briefly present applications which use our system in conjunction with existing ROS modules. In all the experiments described, computation was carried out entirely onboard.

The first experiment consists of building a 3D map by adding laser data at poses determined by the state estimation system (Fig. 12), without the aid of SLAM algorithms. As the figure shows, the state estimation applied to mapping results in locally consistent maps.

In the second experiment, we demonstrate that our state estimation system can easily be used in conjunction with existing mapping tools. The state estimation is used as an odometric input to a 2D SLAM algorithm (gmapping, Grisetti et al. (2007)) running onboard the vehicle in real time. The map is constructed by passing all the projected laser scans to gmapping. The 2D SLAM step performs additional alignment of the data and guarantees loop closure and global consistency. Note that gmapping alone, without the aid of the laser-based odometric system, is too slow and cannot solve the problem of 2D SLAM for the MAV. We are able to use it in real time since we provide it with a good odometric estimate from the scan matcher. Also note that the output of gmapping is not used in any critical components such as control or real-time obstacle avoidance. Thus, if it fails, the system would not become unstable. The 3D point cloud is built on top of the 2D-refined data, by commanding the MAV up and down. The results of the experiment can be seen in Figs. 13 and 14.

Fig. 15 Long trajectory calculated by the MAV for autonomous flight on an a priori map. Execution of the trajectory is shown in the video attachment

In the third experiment, we use a previously-built map to localize the robot from an unknown position. The localization is performed using Adaptive Monte Carlo Localization (AMCL), an implementation of which is provided in ROS.

In the fourth experiment, we perform a fully autonomous flight over a long trajectory, including narrow doorways. The experiment uses an a priori map of resolution 0.10 m, also created by the MAV in advance. The MAV starts at a known location, and is given a goal by the operator. Next, the MAV calculates a trajectory and a sequence of waypoints, and autonomously reaches the goal, as shown in Fig. 15. The path-planning and waypoint decomposition took 80 ms. During the flight, the MAV uses AMCL to refine its pose on the map.

Footage of the experiments is shown in the video attachment.

7 Conclusion

In this paper, we describe a modular, open-source indoor navigation system for quadrotor MAVs. The system works in indoor environments, where the obstacles are mostly rectilinear and the floors are piecewise linear. The system is self-contained and can work onboard a MAV. The minimum sensor requirements are a laser scanner and an IMU.

We describe the elements of the system, which range from low-level to high-level computations and behaviors: state estimation, control, path-planning, teleoperation, localization, and mapping.

We demonstrate that the navigation estimation system can be used to build 2D and 3D maps, comparable in accuracy to maps built using ground vehicles with odometric sensors. We further demonstrate that we can provide easy and intuitive interactions between the MAV and the human operator for teleoperation and waypoint control.

The major focus of our work and contribution is a well-documented, open-source release of all our software and hardware. Researchers can download the test data and recreate our experiments.

Acknowledgments This work was supported in part by the U.S. Army Research Office under Grant No. W911NF-09-1-0565, and the U.S. National Science Foundation under Grants No. CNS-0619577 and No. IIS-0644127.

Appendix: Software and hardware resources

This appendix details the relevant open-source software and hardware components of the system.

The software implementation of the system presented in this paper can be obtained free and open-source online at http://robotics.ccny.cuny.edu/CityFlyer. The software is organized according to the ROS build system of stacks and packages, where a package is a unit of software responsible for a certain task, and a stack is a collection of functionally related packages.

– asctec_drivers: Firmware for the AscTec Autopilot architecture, and AscTec Pelican models. Firmware includes the sensor fusion and control software, which are hardware-independent.

– scan_tools: Software for processing laser scanner data. Tools include orthogonal projection, scan matching, and various utilities.

– mav_tools: Software for MAV-specific navigation tasks and interfaces. Also includes high-level launch files for the CityFlyer project.

Each stack and package has a dedicated web page with instructions, available at http://www.ros.org/wiki.

The open-source hardware consists of the mirror mounts required for the height estimation module. We provide adapters for the Hokuyo URG-04LX-UG01 and Hokuyo UTM-30LX laser scanner models. The parts can be built using a 3D printer.

– http://www.thingiverse.com/thing:3949
– http://www.thingiverse.com/thing:10203


References

Achtelik, M., Achtelik, M., Weiss, S., & Siegwart, R. (2011). Onboard IMU and monocular vision based control for MAVs in unknown in- and outdoor environments. In IEEE International Conference on Robotics and Automation (ICRA) (pp. 3056–3063).

Andersen, E. D., & Taylor, C. N. (2007). Improving MAV pose estimation using visual information. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 3745–3750).

Bachrach, A., Prentice, S., He, R., & Roy, N. (2011). RANGE: Robust autonomous navigation in GPS-denied environments. Journal of Field Robotics, 28, 644–666.

Blösch, M., Weiss, S., Scaramuzza, D., & Siegwart, R. (2010). Vision based MAV navigation in unknown and unstructured environments. In IEEE International Conference on Robotics and Automation (ICRA) (pp. 21–28).

Censi, A. (2008). An ICP variant using a point-to-line metric. In IEEE International Conference on Robotics and Automation (ICRA) (pp. 19–25).

Ferrick, A., Fish, J., Venator, E., & Lee, G. S. (2012). UAV obstacle avoidance using image processing techniques. In IEEE International Conference on Technologies for Practical Robot Applications (TePRA) (pp. 73–78).

Grisetti, G., Stachniss, C., & Burgard, W. (2007). Improved techniques for grid mapping with Rao-Blackwellized particle filters. IEEE Transactions on Robotics, 23(1), 34–46.

Grzonka, S., Grisetti, G., & Burgard, W. (2009). Towards a navigation system for autonomous indoor flying. In IEEE International Conference on Robotics and Automation (ICRA) (pp. 2878–2883).

Grzonka, S., Grisetti, G., & Burgard, W. (2012). A fully autonomous indoor quadrotor. IEEE Transactions on Robotics, 28(1), 90–100. doi:10.1109/TRO.2011.2162999.

He, R., Prentice, S., & Roy, N. (2008). Planning in information space for a quadrotor helicopter in a GPS-denied environment. In IEEE International Conference on Robotics and Automation (ICRA) (pp. 1814–1820).

Meier, L., Tanskanen, P., Heng, L., Lee, G. H., Fraundorfer, F., & Pollefeys, M. (2012). PIXHAWK: A micro aerial vehicle design for autonomous flight using onboard computer vision. Autonomous Robots, 6, 1–19.

Pomerleau, F., Magnenat, S., Colas, F., Liu, M., & Siegwart, R. (2011). Tracking a depth camera: Parameter exploration for fast ICP. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 3824–3829).

Roberts, J. F., Stirling, T., Zufferey, J. C., & Floreano, D. (2007). Quadrotor using minimal sensing for autonomous indoor flight. In European Micro Air Vehicle Conference and Flight Competition (EMAV).

Rusinkiewicz, S., & Levoy, M. (2001). Efficient variants of the ICP algorithm. In Proceedings of the Third International Conference on 3-D Digital Imaging and Modeling (pp. 145–152). doi:10.1109/IM.2001.924423.

Shen, S., Michael, N., & Kumar, V. (2011). Autonomous multi-floor indoor navigation with a computationally constrained micro aerial vehicle. In IEEE International Conference on Robotics and Automation (ICRA) (pp. 2968–2969).

Zhang, T., Li, W., Achtelik, M., Kuhnlenz, K., & Buss, M. (2009). Multi-sensory motion estimation and control of a mini-quadrotor in an air-ground multi-robot system. In IEEE International Conference on Robotics and Biomimetics (ROBIO) (pp. 45–50).

Zhang, Z. (1994). Iterative point matching for registration of free-form curves and surfaces. International Journal of Computer Vision, 13(2), 119–152.

Author Biographies

Ivan Dryanovski was born in Sofia, Bulgaria and is a Ph.D. student at The Graduate Center, The City University of New York (CUNY), USA. He received a B.A. in Physics from Franklin and Marshall College, PA, USA in 2007 and an M.Sc. in Computing Science from Imperial College London, UK in 2009. His research interests include quadrotor MAV systems, 3D mapping, localization, and navigation.

Roberto G. Valenti was born in Catania, Italy. He received the M.S. degree in electronics engineering from the University of Catania, Catania, Italy in 2009. He joined the Department of Electrical Engineering at The City College of New York as a Research Assistant, where he is currently a Ph.D. student. His current research interests include micro-aerial-vehicle modeling, simulation, and control.

Jizhong Xiao is an associate professor of Electrical Engineering at the City College of New York, CUNY. He received his Ph.D. degree from Michigan State University in 2002, his Master of Engineering degree from Nanyang Technological University, Singapore, and M.S. and B.S. degrees from East China Institute of Technology, in 1999, 1993, and 1990, respectively. He started the robotics research program at CCNY in 2002 as the founding director of the CCNY Robotics Lab. He is a recipient of the USA National Science Foundation (NSF) CAREER award in 2007. His research interests include robotics and control, mobile sensor networks, robotic navigation, and multi-agent systems.
