
CS 182 Final Project

Maggie Basta, Albert Chien, Bovey Rao

December 8, 2017

1 Introduction

The purpose of this project was to implement an autonomous navigation system for the Harvard Undergraduate Robotics Club (HURC) Mars Rover. The Rover is currently in contention for participation in the 2018 University Rover Challenge (URC), a competition that since 2006 has driven the competitive design and construction of the next generation of Mars rovers [8]. An integral part of the Rover's performance in the competition is the completion of autonomous traversal tasks, set up in different stages of moderately difficult terrain. It must do this by employing algorithms which determine navigation from multiple inputs, including GPS coordinates, laser and field sensor readings, and camera data. This project tackled the implementation of the Rover's core navigation and made significant headway on additional features of its implementation.

The multiplicity of challenges that the Rover must complete in the URC competition structured our project around a range of problems, many of which relate directly to the ideas, techniques, and applications covered in CS 182. The team first put significant effort into the construction of a proper, reliable environment for applying and testing algorithms on the Rover, and the creation of an effectively accurate dynamic model for the robot within this environment. Then, as a core algorithmic framework, we derived several search algorithms for application in our environment. These included uninformed search algorithms like depth-first search and breadth-first search, and basic informed search algorithms like A* search and greedy search, which were covered in class. We then further developed these into dynamic search algorithms for path planning in our environment with additional unknown obstacles. Preliminary image classification was also prepared but not completed, as the marker could be hard-coded.

In sum, the project ultimately implemented the base of the HURC Rover's autonomous navigation system. It can now be expanded upon as the greater Rover team continues to work on its further optimization and integration into the physical Rover that will hopefully compete next May. As the navigation system becomes more comprehensive, it will also incorporate more constraints, such as motor power for climbing hills, and support autonomous search for a goal marker after arriving at the final GPS position.

2 Background and Related Work

The University Rover Challenge is an annual collegiate robotics competition that aims to inspire designs and novel builds for the next generation of Mars rovers. At the competition, each rover must be capable of completing four basic tasks: science cache experiments, extreme retrieval and delivery, equipment servicing, and autonomous traversal. Autonomous traversal through moderately difficult terrain from known markers is the portion of the competition that requires an autonomous component, but autonomous control is also relevant elsewhere in the competition for adjusting turning and motor control. The rover should be able to autonomously navigate through four difficulty levels of course and terrain, which have various requirements beyond strictly autonomous navigation.

Currently, the physical HURC Rover is being built with a few unique aspects that will hopefully prove effective in the simulated Martian environment. For example, rather than using wheels, the HURC Rover will use flexible legs that allow for more maneuverability. A precise GPS system, laser and field sensors, and a camera/computer vision system are also being added to the physical rover, so these inputs can also be taken into account by the autonomous navigation system.

Before writing our path planning algorithm for the rover, we did preliminary research into the current methods that have proven successful for robot motion planning. We came across a comprehensive survey of robot 3D path planning algorithms [2] that covers sampling-based, node-based, mathematical model-based, and bioinspired algorithms. Given its numerous references and summaries of each algorithm, we decided that this survey would serve as a suitable launchpad for our work. In many ways, we found that the algorithms used multiple concepts covered in class, and many fused different aspects of algorithms together to tailor them to the specific problem. From this research, we realized that we should not attempt to generate a generic path planning algorithm, but rather create an algorithm that would work specifically for the rover project.

Moving beyond our preliminary research, we found that autonomous navigation of harsh terrain has long been an exciting area of interest in the field of space exploration. Beginning with the Sojourner microrover, a system of autonomous navigation was implemented that was capable of accounting for numerous obstacles and environmental conditions. However, the traversal was limited by a strict 2D representation of movement, which was compensated for by utilizing additional sensors for steepness [6]. Later autonomous navigation systems were expanded by preplanning a larger map of the environment and incorporating 3D information into the rover navigation system. This improved system of global planning navigation algorithms maps the entire surrounding environment alongside the projected coordinates in order to accurately track where the rover is. Then, by progressing through subsets of local goals rather than strictly proceeding to the goal state, the rover could more easily move towards the final objective through its known environment [7]. Currently, NASA's premier autonomous rover navigation algorithm is known as GESTALT (Grid-based Estimation of Surface Traversability Applied to Local Terrain), where local stereo images are utilized to avoid obstacles and predict local path optimality. Essentially, it improves the local image classification system to form a more accurate global planning system [1].

3 Problem Specification

Given that this was the earliest implementation of autonomous navigation for the HURC Rover, we limited the scope of the problem to be simpler than the real task. Thus, we focused primarily on navigating through moderately difficult terrain with a few obstacles added in. Afterwards, the rover was tested in a simulated environment to examine its ability to follow the mapped trajectories. In later stages, the rover will also require computer vision to identify the goal marker, so this strategy was also explored through image classifiers.

Our original project proposal covered the topics of HMMs, A*/heuristic search, SLAM, and CSPs/backtracking. Ultimately, HMMs and CSPs/backtracking were not explored extensively due to the narrowed scope of our project: HMMs were not utilized because that many inputs could not be easily modeled, and CSPs/backtracking were not employed due to similar difficulties with modeling violations of such constraints. A*/heuristic search with SLAM comprised the majority of this project, as we created a dynamic path planning system that was capable of interpreting path data.

4 Approach

4.1 General

The team tackled these implementation problems from the ground up. To establish a framework for our environment and algorithms, we first parsed a PNG image of a heightmap and output a set of corresponding coordinates. We then mapped the image into our simulation environment in Gazebo and created a full model of the Rover within it. Using the coordinate data from the parsed heightmap, we then wrote and ran several Python algorithms in a separate script, which output trajectories to identified goal states within the environment. Finally, where possible, we fed these trajectory coordinates back into the simulation and analyzed the Rover's performance.

4.2 Definition of the Environment

The environment that the team created was derived from a black-and-white heightmap digital elevation model (DEM), formatted as a PNG image. The darkness of every pixel in the image defines the depth of the coordinate that it maps to, where the darkest value is the lowest possible point within the terrain and the lightest value is the highest. Heightmaps like these are widely available and can also be easily generated for specific locations using satellite imaging. Thus, for the Rover competition, once we are aware of the specific testing location, we can create a heightmap for that environment. Furthermore, with the broader mission of the URC and the Rover in mind, we would even be able to generate an environment for Mars!

The available functionality of Gazebo allows these heightmaps to be integrated into the creation of worlds, Gazebo's 3D environments. In order to utilize the same maps in our algorithms, we parsed through each pixel of the PNG and generated its corresponding 3D coordinates, as they are mapped in the Gazebo environment (see parser/parser.py). Once the coordinates were processed by the respective algorithms, the output trajectory could be reinserted into Gazebo, where it could be tested in a realistic simulation.
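As a rough illustration of this pixel-to-coordinate conversion (a minimal sketch, not the exact contents of parser/parser.py; the world size and height scale are assumed parameters):

from PIL import Image

def parse_heightmap(png_path, world_size=128.0, max_height=10.0):
    """Map each pixel of a grayscale heightmap PNG to an (x, y, z) coordinate."""
    img = Image.open(png_path).convert("L")  # force 8-bit grayscale
    width, height = img.size
    coords = []
    for row in range(height):
        for col in range(width):
            # Center the grid so (0, 0) is the middle of the terrain.
            x = col * world_size / width - world_size / 2
            y = world_size / 2 - row * world_size / height
            # Darkest pixel (0) is the lowest point, lightest (255) the highest.
            z = img.getpixel((col, row)) / 255.0 * max_height
            coords.append((x, y, z))
    return coords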

4.3 Creation of Algorithms

An array of uninformed and informed search algorithms were used to generate the path from the initial state to the goal state. Depth-first search (DFS) and breadth-first search (BFS) were the two uninformed search algorithms used as testing benchmarks. Uniform cost search, A* search, and a dynamic SLAM-like path replanning algorithm were the informed search algorithms used for pathfinding. Our uniform cost search variant used the three-dimensional distance between the expanded state and the goal state as the guiding heuristic, while our A* search added the distance along the path to the heuristic to find the optimal solution. Since the heightmap may not contain information on all obstacles that we need to avoid, we further implemented a dynamic path planning algorithm based on exploration of hidden obstacles. The SLAM-like dynamic path planning algorithm scanned the periphery for obstacles and adjusted the trajectory based on different triggers. The trigger in the first iteration of the dynamic path planning algorithm was the discovery of any hidden obstacles that were not previously in the SLAM model. The second iteration set the trigger to fire whenever the current trajectory would collide with obstacles in the SLAM model. These two iterations allowed for testing of computation time, as this affects the overall run time of the robot and completion of the autonomous navigation task under time constraints.
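To make the two search priorities concrete (a short sketch, assuming 3D Euclidean distance as the heuristic described above):

import math

def euclidean_3d(state, goal):
    """Straight-line 3D distance from a state to the goal."""
    return math.sqrt(sum((s - g) ** 2 for s, g in zip(state, goal)))

# Frontier priority when popping nodes:
#   distance-guided search: priority = euclidean_3d(state, goal)
#   A* search:              priority = path_cost_so_far + euclidean_3d(state, goal)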

Algorithm 1 Dynamic Path Planning Algorithm

procedure GENERATE_TRAJECTORY(start, goal, heightmap)
    cur_state ← start
    coordinates = []
    while cur_state is not goal do
        append cur_state to coordinates
        observe surroundings
        plan path (A* search)
        for each action in plan do
            if action collides with obstacles then
                break
            else
                next_state from action
                cur_state ← next_state
            end if
        end for
    end while
    return coordinates
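A hedged Python rendering of Algorithm 1 (the helpers observe_surroundings, astar, apply_action, and collides are assumed stand-ins for the project's actual functions):

def generate_trajectory(start, goal, heightmap, known_obstacles):
    cur_state = start
    coordinates = []
    while cur_state != goal:
        coordinates.append(cur_state)
        # Reveal any hidden obstacles within sensor range of the rover.
        known_obstacles |= observe_surroundings(cur_state, heightmap)
        plan = astar(cur_state, goal, heightmap, known_obstacles)
        for action in plan:
            next_state = apply_action(cur_state, action)
            if collides(next_state, known_obstacles):
                break  # trigger fired: discard the stale plan and replan
            cur_state = next_state
    coordinates.append(goal)
    return coordinates

In the first (obstacle-triggered) variant, the break would instead fire whenever observe_surroundings discovers anything new; the second (collision-triggered) variant shown here replans only when the current plan actually intersects a known obstacle.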

4.4 Implementation and Execution in Gazebo

In the rendered 3D Gazebo environment, we created different functions that would execute actions in the rover model. The main functions were rotate left, rotate right, and move forward. Each of these functions was specific to our Rover (as opposed to a traditional car or wheeled robot), as they had to apply to its six-legged makeup. We then took the generated path coordinates and created an algorithm that would attempt to follow the trajectory. This algorithm involved both reorienting the rover towards the nearest trajectory coordinate and establishing a radius around the coordinate within which the rover could move on to the next trajectory coordinate. We used the simulator to analyze the efficacy of our trajectory following algorithm, as well as the performance of the trajectory output of each pathfinding algorithm in different environments. We could also see how changes in mass and design of the Rover build would affect the execution of the trajectory, a utility that will prove very useful in the long-term design of the larger project.

Algorithm 2 Trajectory Following in Gazebo Simulator

procedure FOLLOW_TRAJECTORY(coordinates, turn_radius, state_radius, incr)
    traj = coordinates
    idx = 0
    while cur_state is not goal do
        cur_state ← get_rover_position()
        next_state ← traj[idx]
        dist = distance(cur_state, next_state)
        while dist > state_radius do
            while angle to next_state > turn_radius do
                turn towards next_state
            end while
            move forward
            cur_state ← get_rover_position()
            dist = distance(cur_state, next_state)
        end while
        idx += incr
    end while
    return
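The same loop as a hedged Python sketch (get_rover_position, distance, angle_to, turn_towards, and move_forward are assumed wrappers around the plugin's pose queries and motor commands, not the project's exact API):

def follow_trajectory(coordinates, turn_radius, state_radius, incr):
    idx = 0
    while idx < len(coordinates):
        next_state = coordinates[idx]
        # Drive until the rover is within state_radius of the waypoint.
        while distance(get_rover_position(), next_state) > state_radius:
            # Rotate in place until roughly facing the waypoint...
            while angle_to(next_state) > turn_radius:
                turn_towards(next_state)
            # ...then step forward and re-check the distance.
            move_forward()
        idx += incr  # incr > 1 skips intermediate trajectory points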

4.5 Gazebo and ROS

Originally, we planned to connect our Python code to Gazebo through the Robot Operating System (ROS), but version incompatibilities between Gazebo v8 (which we worked in) and all ROS versions made it difficult to utilize. While a Gazebo v8 wrapper was installed, there were issues running it in the Bash on Ubuntu VM, so this aspect was abandoned.

4.6 Computer Vision for Goal Marker Identification

As the goal state marker for the Rover is known to be a standard tennis ball, a basic computer vision interface to identify the tennis ball was generated. Initially, the plan was to preprocess images through CENSURE, Histogram of Oriented Gradients (HOG), and other recognition techniques, and then train a neural network using TensorFlow and Keras to identify the tennis ball based on these features [5]. When preprocessing through an HSV color gate, however, we realized in preliminary testing that the gate alone would work at a high frequency without the need to train a neural network and risk the error of misclassification. Thus, this feature was left at HSV color gating to identify and later track the ball. Each image was run through erosion and blurring to remove background noise.
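A minimal sketch of such an HSV gate with OpenCV (the threshold values are assumptions to be tuned for the actual tennis ball, not the project's exact numbers):

import cv2
import numpy as np

LOWER_BALL = np.array([25, 70, 70])    # assumed HSV lower bound (yellow-green)
UPPER_BALL = np.array([45, 255, 255])  # assumed HSV upper bound

def ball_mask(bgr_image):
    """Return a binary mask of tennis-ball-colored pixels."""
    blurred = cv2.GaussianBlur(bgr_image, (11, 11), 0)
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_BALL, UPPER_BALL)
    # Erode then dilate to strip small background speckles.
    mask = cv2.erode(mask, None, iterations=2)
    mask = cv2.dilate(mask, None, iterations=2)
    return mask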

Figures 1, 2, 3: CENSURE, Histogram of Oriented Gradients, and HSV filtering preprocessing techniques for object recognition of tennis balls in images. For CENSURE and HOG, significant background remains in the processed image, while HSV filtering captures the ball clearly.

5 Experiments and Results

5.1 Path Planning Experiments

Figure 4: Visual representation of the dynamic path planning algorithm (time steps 15, 30, 45, 60, 105, and the completed path)

The orange pixels in Figure 4 represent hidden obstacles, the red pixels represent detected obstacles, and the white pixels represent the path of the rover in this pseudo-environment.

Table 1: Completion time in seconds for various goal states (V1 is obstacle-triggered and V2 is collision-triggered)

Taking the average of the values down each column and comparing V1 to V2 for the different goals, we found that V2 reduces computation time by 25%. This is a promising step towards optimizing the algorithm for efficiency and performance under time constraints.

The following graph shows the average run time at different time steps for versions 1 and 2 of the dynamic path planning algorithm. As seen, the time reduction of collision-triggered over obstacle-triggered replanning grows significant as the search space increases.

5.2 Simulation Experiments

5.2.1 Testing the Trajectory Following Algorithm

Many different factors had to be adjusted in order to create a plugin that successfully followed the output trajectories. These included the radius at which each trajectory point can be deemed completed, when it is best for the Rover to skip a trajectory point and go directly to a later one closer to the goal, the turning radius within which the Rover considers its orientation to be facing the next attempted point on the trajectory, and various other minor adjustable parameters. We were able to test all of this by watching the Rover execute its trajectory following plugin in the simulation. The images below depict a few stages of the simulation. The peripheral view is the main image, the bird's-eye view is in the bottom left (the Rover is circled in red), and the desired trajectory is in the top left. These images are taken from an A* search trajectory with the goal state defined to be the red target. (The Rover always started in the center of the environment and followed a trajectory to a goal state in one of the four corners.)

Figure 6: Testing the generated trajectory in Gazebo

For a full video, please go to: https://www.youtube.com/watch?v=nFk77Z98EDs (recorded at 3x time step)

5.2.2 BFS vs. A* Search in Gazebo

After implementing a successful trajectory following plugin, we realized from reviewing the Rover's executions of the trajectories in the simulations that certain algorithms performed better in certain environments. Although we initially expected A* to have the best performance across the board, we were surprised to see that this was not always the case. This was demonstrated in a comparison between A*-produced trajectories and BFS-produced trajectories in different environments. Given that A* accounted for the three-dimensional distance between the expanded state and the goal state as well as the distance along the path to find the optimal solution, we assumed it would perform better than BFS, which produced a trajectory that is a direct path to the goal. However, because the Rover does not move flawlessly along the trajectory, this was not in fact true. Because of its direct path, the BFS trajectory requires less calculation and adjustment for the Rover to orient itself to the next point on the trajectory. Thus, in environments with less extreme terrain, it had much better performance. However, in more extreme terrain, A* significantly outperforms BFS, which gets stuck attempting to scale impossible hills.

For a full video comparison (at 4x speed), please go to the following links:
Easy Terrain: https://www.youtube.com/watch?v=GVmmz6BLM-I
Hard Terrain: https://www.youtube.com/watch?v=MAnNDF2xY7M

                    BFS      A*
Easy Terrain        280 s    420 s
Difficult Terrain   ∞        333 s

Table 2: BFS and A* trajectory completion time in different terrains (regular time step)

6 Discussion

Through our 2D heightmap and 3D Gazebo environments, we have created an autonomous navigation system for the HURC Rover that is capable of planning paths around obstacles and reaching the targeted goal coordinates. In the 2D heightmaps, we were able to see optimal routing from A* search and then take those principles and apply them to the dynamic SLAM-like path replanning system. Across the different instances of SLAM-like path planning, we noticed that collision-based replanning allowed for a faster computation time for the same optimal route, as it triggered less frequently than obstacle-based replanning. Furthermore, through Gazebo, we could visualize and verify that our A* search finds an optimal path and moves the rover flexibly towards the goal state. While still at an early stage, the autonomous navigation system appears to be functional and is capable of successfully guiding a simulated rover to a goal state.

We aim to further refine our path planning algorithms with more obstacle awareness and to incorporate the HMM and CSP/backtracking features. Strictly for path planning, we could look into using RRT* to preprocess the space, which could be faster than our current replanning algorithm. Preliminary goal detection is available, but further obstacle sensors and other surrounding data will be incorporated into the algorithm as well. Furthermore, for the competition, the robot must be able to scan its surroundings and explore the goal area coordinates, which we must also account for in the future.

As we move forward and attempt to further optimize and advance the system, the environment and model we have created to test our algorithms will allow us to continually develop the optimal solution in a realistic environment. As was seen in our comparison of BFS and A*, optimality in more realistic applications is not always what we may expect. Moreover, with the advancement of both our navigation system and the actual build of the physical HURC Rover, what is optimal for the Rover in the real world will keep changing. Nevertheless, our project has put the team in a position where we can design to accommodate this. Whether it be selecting an algorithm based on the different encountered environments or adjusting the parameters of the trajectory following plugin, we now have the capability to test the execution of different solutions in a realistic environment.

A System Description

All relevant code referenced in this section can be found at: https://github.com/maggiebasta/CS182_Rover_Project

A.1 Heightmap parsing

For 2D heightmap pathfinding, you must find or create a terrain with the color gradient that is present in the parser folder, and upload it to that folder. You then must generate the corresponding coordinates of the map using parser/parser.py, where the PNG you wish to process is specified on line 4. The program will output the 3D coordinates to the command line.

A.2 Python Planning Algorithms

Then, by loading the generated coordinates into any of the Python path planner algorithms with the visualizer function, the respective trajectory and a path map will be generated. Executing heightmap_planner.py yields an A* or UCS trajectory as a PNG image as well as a set of coordinates. Note that the coordinates are (y, -x) with (64, 64) as the top-left corner, (64, -64) as the top-right, (-64, 64) as the bottom-left, and (-64, -64) as the bottom-right. Executing local_planner.py yields the SLAM-like dynamic path planner with corresponding visuals. Note that the PNGs are in black and white and require color processing to reproduce the visuals in this report. A sample PNG heightmap image and parsed txt file are included.
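For reference, the corner convention above corresponds to a conversion along these lines (a sketch inferred from the stated corners, not code from the repository):

def grid_to_world(row, col, half=64):
    """Convert a (row, col) pixel index into the planners' (y, -x) frame.

    (row=0, col=0) maps to (64, 64), the top-left corner; rows grow
    downward and columns grow rightward, so both components decrease.
    """
    return (half - row, half - col)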

A.3 Gazebo

For Gazebo, the two most significant files are our plugin, which can be found in gazeboCode/Rover.cc, and our world, found in gazeboCode/Rover.world. Please note that these two files cannot be run as standalone programs. In order to run them, you must install Gazebo version 8 alongside all requisite packages listed in the Gazebo plugin documentation [4] and Gazebo world documentation [3]. Moreover, the programs require local files and the proper exportation of plugin and model paths.

A.4 Image Processing

For image processing, you must install scikit-image and OpenCV (cv2) to process the images. Running the programs then requires an input image and a path to the file.

B Group Makeup

Maggie Basta: Constructed the heightmap parser for terrain map images, created the Gazebo environment, created the Gazebo model and plugin, implemented the algorithm that planned paths in Gazebo, and fixed physics issues.

Albert Chien: Devised the 2D heightmap strategy, implemented algorithms for 2D pathfinding, and generated path coordinates for Gazebo.

Bovey Rao: Translated Gazebo v8 code into Gazebo v7 for ROS compatibility (abandoned due to time constraints), and processed images and video for object recognition.

All: Worked on heuristics for path planning, the SLAM-like obstacle correction system, and adjusting for physics.

References

[1] J. Carsten, A. Rankin, D. Ferguson, and A. Stentz. Global planning on the Mars Exploration Rovers: Software integration and surface testing. Journal of Field Robotics, 26(4):337–357, 2009.

[2] Liang Yang et al. Survey of robot 3D path planning algorithms. Journal of Control Science and Engineering, 2016:22, 2016.

[3] Open Source Robotics Foundation. Building a world, 2014.

[4] Open Source Robotics Foundation. Plugins 101, 2014.

[5] Scikit-image development team. Scikit-image: image processing in Python, 2017.

[6] J. Matijevic. Autonomous navigation and the Sojourner microrover. Science, 280(5362):454–455, 1998.

[7] M. Maurette. Mars rover autonomous navigation. Autonomous Robots, 14(2/3):199–208, 2003.

[8] The Mars Society. University rover challenge. http://urc.marssociety.org/.