LAUROPE - SIX LEGGED WALKING ROBOT FOR PLANETARY EXPLORATION PARTICIPATING IN THE SPACEBOT CUP

G. Heppner, A. Roennau, J. Oberländer, S. Klemm, and R. Dillmann

FZI Research Center for Information Technology, Department IDS - Interactive Diagnosis- and Service Systems

Haid-und-Neue-Str. 10-14, Karlsruhe

ABSTRACT

Space exploration, in particular the exploration of distant planets, requires complex robotic systems, capable of autonomous operation and advanced capabilities such as construction or sample collection. While most of these systems are developed by agencies such as NASA or ESA, many promising technologies can be found in the terrestrial state of the art of robotics, which has led to several competitions that simulate such missions. We participated in the German DLR SpaceBot Cup with our six-legged walking robot LAURON V. In this paper we present the bio-inspired hexapod, our architecture PlexNav and the approaches for 3D localisation and mapping, path planning, object detection and mobile manipulation, as well as the mission control structure and communication concepts.

Key words: Lauron, Space, Exploration, SpaceBot Cup, Autonomy, 3-D Mapping, Mobile Manipulation, Competition, PlexNav, Control Architecture, Mission Control, Walking Robot, Hexapod.

1. INTRODUCTION

Space exploration often relies on robotic systems to achieve many scientific goals that would otherwise be impossible, too expensive or simply impractical. This has led to several very successful robotic exploration missions like NASA's MER (Mars Exploration Rover Mission) [1] with the two rovers Spirit and Opportunity, or the more recent Mars Science Laboratory mission [2] with the rover Curiosity. As the planets to explore are far away, direct control of these robots is, if at all, only possible in a very limited way and with large delays. Successful space exploration robots therefore require many different skills and the capability to apply them autonomously, as well as the ability to cope with various harsh environments and obstacles. To test robots early, analogue missions like the rover testbed FIDO [3] have been conducted, which evaluate the rovers' capabilities on Earth.

In order to use the potential already present in traditional robotics, competitions resembling the tests for the MER systems, like the NASA Sample Return Challenge [4], the ESA Lunar Robotics Challenge [5], or more general challenges like the DARPA Robotics Challenge [6] or RoboCup [7], have been developed with growing success. The SpaceBot Cup challenge (SBC) [8] was started by the German Aerospace Center (DLR) in 2013 to test and extend the autonomy of mobile robots to prepare them for such ambitious tasks.

We participated with the six-legged walking robot LAURON V (Fig. 1) as Team LAUROPE (LAUfROboter für die Planetare Exploration, German for Walking Robot for Planetary Exploration). Using a walking robot for exploration offers many possibilities, especially in terms of mobility. Such new mobility concepts are also being developed for space by other institutes, such as the six-legged SpaceClimber [9] or even radical designs like the Super Ball Bot [10].

Figure 1. Bio-inspired walking robot LAURON V.

The paper is structured as follows: In Chapter 2 we will introduce the competition scenario as well as the walking robot LAURON V. Chapter 3 will present the PlexNav architecture and our approaches to the tasks of the SBC. Chapter 4 will give a short recap of the lessons learned during the preparation for such a task, before Chapter 5 will finish with a conclusion and an outlook on future activities.


2. SPACEBOT CUP AND LAURON V

While the majority of the robots developed for space applications relies on wheels, many of them already incorporate some form of legs to give the wheels more ground clearance and the possibility to step over or onto larger rocks. However, very rough terrain with sharp edges or craters with steep slopes requires new mobility concepts. Multi-legged walking robots are well suited for such environments, which is why we decided to participate with our walking robot LAURON V in the DLR SBC.

2.1. The SpaceBot Cup

The SpaceBot Cup challenge [8] was first held in 2013 with the aim to enhance research on autonomous robots capable of space exploration, as well as to identify currently available technologies with potential for space applications. Participants have to solve various tasks in a coherent one-hour run, during which intervention by a ground crew, residing in a special area away from the actual robot, is only possible at dedicated checkpoints. During these checkpoints the ground crew can send data to the robot for exactly five minutes, after which the communication is restricted again (only the receive direction is enabled). All communication, even throughout the checkpoints, is delayed by 2 seconds in each direction, making any kind of teleoperation very difficult, which in turn requires enhanced autonomy of the robot itself.

Figure 2. The SBC arena in a motocross hall has different types of sand, trenches and several hills with various slopes.

In the challenge the robot is placed into a field resembling a planetary surface (Fig. 2), with various rocks, different types of sand, hills, steep trenches and three objects (Fig. 3) that have to be localised. Once the mission starts, the robot has to explore the previously unknown environment, find a yellow-coloured battery and a blue-coloured sample container filled with water, and bring them to the third object, a red-coloured scale, where they have to be assembled.

After completion of the assembly, the robot has to return to the starting point, during which dynamic obstacles might be inserted into its path. Points were given for each completed task, overall duration and various additional tasks like a classification of obstacles or innovative concepts.

Figure 3. The objects to find: yellow battery, blue sample glass, red scale, and how to assemble them.

Overall, the tasks to be solved in this challenge require autonomous exploration, mapping, robot localisation, object detection and localisation, mobile manipulation and object transport, mobile assembly, obstacle avoidance, decision making and advanced mobility in challenging terrain.

The competition will be repeated in late 2015, featuring roughly the same mission scenario. Our solutions to the tasks presented in the following are mostly results of the preparations for the SBC 2013, but also feature some recent results for the SBC 2015.

2.2. LAURON V

LAURON V is the newest version of the LAURON walking robot series and was first presented to the public at the ICRA conference in 2013. The bio-inspired six-legged walking robot takes its inspiration from the Indian stick insect Carausius morosus. While the fourth generation had three degrees of freedom per leg, LAURON V features an additional joint in the hip, which enables it to tilt the legs towards an inclination to overcome steep slopes, but was also designed with mobile manipulation in mind [11].

LAURON uses a robust hardened aluminium frame that was optimized for lightweight construction while providing maximum protection and stability. The legs are each actuated by four powerful DC motors, resulting in an overall robot weight of 42 kg with an additional payload capacity of 10 kg for a state-of-the-art CPU and mission-specific sensors. The robot is equipped with an array of internal sensors, like current measurements in the custom-made motor controllers, spring-dampened feet with potentiometers and a high-precision IMU to detect the current body pose. A pan-tilt unit head with two high-resolution cameras and a Kinect is used as the main external sensor, but due to its large size, LAURON can carry almost any sensing equipment required for a specific mission. Further details regarding the mechanics, electronics and overall design can be found in [12].


With its behaviour-based control system, LAURON is able to cope with a changing environment or unforeseen events, such as sudden slippage of the legs or hitting an obstacle. The system consists of a multitude of cooperating behaviours that govern different aspects of the robot's state. Higher-level behaviours, such as posture or walking pattern behaviours, supervise the overall robot behaviour and its walking characteristics, while fast reflex behaviours, like a ground contact reflex, ensure that the robot maintains ground contact even when the ground is changing.

2.3. Competition Setup

To tackle the challenges of the SBC, LAURON needed some mission-specific enhancements. Robots often rely on laser scanner navigation, such as the popular Hector SLAM, but these technologies are not suitable for a low-feature environment consisting mainly of hills and rocks. To be able to use the full landscape information, we outfitted LAURON with a rotating laser scanner (KaRoLa [13]) on its back, which can produce high-resolution point clouds of the environment. For the manipulation tasks, we added a custom-made gripper as well as storage compartments for the objects to the robot, as can be seen in Fig. 4.

Figure 4. LAURON V at the SBC with all its extensions. A gripper is mounted to the front right leg, the black box on the back is the storage area for the battery object, and the white box is the base of the rotating laser scanner KaRoLa.

The behaviours were extended to detect the slope of the ground and compensate for this inclination with the newly added joints [14]. These behaviours not only enable a safe stance and walk over even or slightly uneven terrain, but let LAURON walk up slopes of up to 25° inclination without any external sensor information. As the behaviour-based control is a complex system that requires a multitude of settings, an additional robot abstraction layer was added that offers simple-to-use interfaces, such as speed(x/y) or the selection of the desired walking pattern, which in turn tune the parameter sets for different situations based on expert knowledge.
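As an illustration only, the following Python sketch shows how such an abstraction layer could map simple commands onto expert-tuned parameter sets; all names and values are hypothetical and not taken from the actual LAURON code.

```python
# Hypothetical sketch of a robot abstraction layer: simple commands are
# mapped onto expert-tuned parameter sets for the behaviour-based control.
# GAIT_PARAMS, BehaviourInterface and the values are illustrative only.

GAIT_PARAMS = {
    # gait name -> parameters forwarded to the behaviour-based controller
    "tripod":   {"step_height": 0.08, "cycle_time": 2.0, "duty_factor": 0.5},
    "tetrapod": {"step_height": 0.10, "cycle_time": 3.0, "duty_factor": 0.75},
}

class RobotAbstractionLayer:
    """Exposes simple interfaces such as speed(x, y) and gait selection."""

    def __init__(self, behaviour_interface):
        self._robot = behaviour_interface   # low-level side (stubbed here)
        self._gait = "tripod"

    def set_walking_pattern(self, gait):
        # Select a gait and push the matching parameter set down.
        if gait not in GAIT_PARAMS:
            raise ValueError("unknown gait: %s" % gait)
        self._gait = gait
        self._robot.apply_parameters(GAIT_PARAMS[gait])

    def speed(self, x, y):
        # Clamp commanded velocities to a safe envelope before forwarding.
        x = max(-0.2, min(0.2, x))
        y = max(-0.1, min(0.1, y))
        self._robot.set_body_velocity(x, y)
```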

3. PLEXNAV

While the behaviour-based system is implemented in our own MCA2 (Modular Controller Architecture) framework [15], which requires a tight coupling of the components, the deliberative high-level control makes heavy use of ROS (Robot Operating System) [16], which offers a publish/subscribe communication style that greatly eases the integration of various different components into the system, even during runtime. The resulting hybrid architecture is called PlexNav (Planetary Exploration and Navigation Framework).
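A minimal rospy example of this publish/subscribe style could look as follows; the topic name and message type are placeholders rather than the actual LAURON interfaces.

```python
# Minimal rospy sketch: a high-level component publishes velocity commands
# that the robot abstraction layer could subscribe to. Topic name and
# message type are placeholders, not the real LAURON interfaces.
import rospy
from geometry_msgs.msg import Twist

def command_velocity():
    rospy.init_node("skill_example")
    pub = rospy.Publisher("/lauron/cmd_vel", Twist, queue_size=1)
    rate = rospy.Rate(10)                 # 10 Hz command stream
    while not rospy.is_shutdown():
        cmd = Twist()
        cmd.linear.x = 0.1                # walk slowly forward
        pub.publish(cmd)
        rate.sleep()

if __name__ == "__main__":
    command_velocity()
```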

Figure 5. PlexNav architecture structure. The lowest layer handles robot-specific functionalities, the mid layer offers high-level skills, and the top layer controls their execution and the overall mission goals.

PlexNav is structured in three layers (Fig. 5):

• Robot Layer: The Robot Layer consists of the robot-specific implementation. In the case of LAURON this is the behaviour-based control that manages hardware-related functions like leg movement, sensor interpretation or safety monitoring. The layer also includes the robot abstraction layer, which exposes ROS interfaces for the upper layers and enables safe access to the low-level functions.

• Skill Layer: This layer is composed of the individual skills that the robot can use. Each skill is a PlexNav component providing a certain subsystem, like manipulation or environment mapping. Skills provide a multitude of invocable capabilities which can be used by the Mission Layer to fulfil tasks.

• Mission Layer: The Mission Layer handles the overall mission. It provides the components that manage the mission status and calls the exposed capabilities from the Skill Layer.

All layers may access or publish globally available data sources, such as a known-objects list or a list containing points of interest.
By allowing this decentralised access, each component can gather the information it requires on its own, requiring minimal coupling between the different components. This approach, however, requires a clear definition of the available data, which is easily solved by using the ROS message definitions as standalone components. The Robot Layer has already been presented in various papers [11, 12, 14]. In this paper we therefore want to focus on the Skill Layer and parts of the Mission Layer.

3.1. Skills

PlexNav functionalities are grouped together in skills, which are implemented as ROS packages or meta-packages. Each skill contains one or more algorithms which compose a set of capabilities. As the API of these algorithms might become quite complex and hinder reuse, we introduced a coordination module for each capability that provides a simple and unified interface (via ROS actions) to be called by the mission control. For simplicity we will explain the functionalities of several skills in combination, as the number of skills has become quite large.

Localisation and Mapping
The localisation information of the robot is kept with the use of the ROS TF system, which provides an easy way of keeping track of various coordinate systems and their relations. As the odometry of a walking robot is not reliable, due to slippage of the legs and mechanical play, we used fovis [17] to provide visual odometry based on the RGB-D data of the Kinect. To keep a globally consistent localisation, the visual odometry is used in conjunction with a SLAM algorithm that uses the full 3D point cloud data gathered by KaRoLa. At the start point, an initial scan of the environment provides the first reference frame. After the robot has walked several meters, a new scan is made, which is fused with the first one by an ICP algorithm, using the odometry data as an initial guess. Each of the pose estimates is collected in a factor graph that is globally optimized using the GTSAM library once a feature-based loop closure has been detected. A set of features for the loop closing was extracted using simulated 3D environments. By updating the poses and the corresponding coordinate frames of the consecutive maps, the global position of the robot is corrected and the odometry error compensated. Before the actual ICP and feature extraction are computed on the point clouds, they are pruned by a set of rules to filter out any parts that are not part of the challenge field, for example the ceiling of the surrounding hall.
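The following is a minimal sketch of the pose-graph idea using GTSAM's Python bindings: odometry-initialised ICP results become between-factors, and a detected loop closure adds a further constraint before a global optimisation. The poses, keys and noise values are purely illustrative.

```python
# Pose-graph sketch with GTSAM: relative scan-to-scan transforms (from ICP,
# initialised by visual odometry) and one loop closure, globally optimised.
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()

# Anchor the first scan pose (the initial reference frame).
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.01] * 6))
graph.add(gtsam.PriorFactorPose3(0, gtsam.Pose3(), prior_noise))
initial.insert(0, gtsam.Pose3())

# Relative transforms between consecutive scan locations, e.g. ~2 m steps.
icp_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05] * 6))
relative_poses = [gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(2.0, 0.0, 0.0))] * 3
for i, delta in enumerate(relative_poses):
    graph.add(gtsam.BetweenFactorPose3(i, i + 1, delta, icp_noise))
    initial.insert(i + 1, initial.atPose3(i).compose(delta))

# A feature-based loop closure between the last and the first scan pose.
loop_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1] * 6))
loop_delta = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(-6.0, 0.0, 0.0))
graph.add(gtsam.BetweenFactorPose3(3, 0, loop_delta, loop_noise))

# Global optimisation corrects all scan poses (and hence their maps).
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(3))
```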

Each individual point cloud, as well as the fused global point cloud, is converted into a custom 2.5D height map format, the PlexMap (Height map for Planetary Exploration) (Fig. 6). A PlexMap is a pre-interpreted 2.5D height map format that allows for fast planning in the exploration steps. The PlexMap uses a quadtree as its underlying data structure, which saves key data in the leaves of the tree, such as the covariance of all points, the surface normal vector, minimum and maximum heights of the points and the estimated point variance along the cell's surface normal. While the localisation and loop closing computations are done on features extracted from the full 3D data, the reduced complexity of the PlexMap is used to speed up planning and also provide a map that is understandable to the human operator. All observations made by the robot, as well as generated target points, are saved with regard to the reference map they were detected in, which is important as an update of the factor graph will not invalidate the positions of such features. Further details and evaluation results regarding the environment model can be found in [18].

Figure 6. Top: Result of the combined 3D scans. Dark spots mark a scanning location. Bottom: A PlexMap of the whole SBC area. Black spots mark areas that are non-traversable, while the shading of the tiles represents the inclination.
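A possible reconstruction of the per-cell statistics described above could look like the following Python sketch; the field names and the normal estimation are assumptions based on the description, not the actual PlexMap implementation from [18].

```python
# Illustrative reconstruction of the statistics a PlexMap quadtree leaf
# could hold; field names are guesses based on the paper's description.
from dataclasses import dataclass
import numpy as np

@dataclass
class PlexMapLeaf:
    min_height: float
    max_height: float
    normal: np.ndarray        # estimated surface normal of the cell
    covariance: np.ndarray    # 3x3 covariance of all points in the cell
    normal_variance: float    # point variance along the surface normal

    @classmethod
    def from_points(cls, points):
        """Build leaf statistics from an (N, 3) array of points."""
        cov = np.cov(points.T)
        # Eigenvector of the smallest eigenvalue approximates the normal.
        eigvals, eigvecs = np.linalg.eigh(cov)
        return cls(
            min_height=float(points[:, 2].min()),
            max_height=float(points[:, 2].max()),
            normal=eigvecs[:, 0],
            covariance=cov,
            normal_variance=float(eigvals[0]),
        )
```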

Planning and Navigation
Planning a suitable path that is not only the shortest, but also considers the actual movement capabilities of the robot, is crucial for the exploration of an unknown area. We use a modified RRT* with a cost function based on robot properties to plan full 6D poses of the robot that are restricted to the surface of the environment, resulting in an implicit roll and pitch. The yaw angle is left free for the later step of motion execution, allowing it to execute the plans in a way that is favourable to the kinematics. PlexMaps are used as the environment model for the planner, allowing fast access to the relevant information without any further preprocessing steps.

Costs of each sample are based on:

• The roughness of the area
• The inclination for each foot
• The overall inclination
• The available scan data (information certainty)

The resulting plan is opportunistic, as it will also consider areas with only little information available, though paths with much data are preferred. By considering the elevation and roughness, the robot is able to choose paths that require additional effort to overcome if the difference in distance is worth the additional effort. For the SBC 2015 we plan to extend this approach to allow a more flexible optimisation of the costs in order to plan adaptively for goals such as time, energy efficiency or speed.
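A minimal sketch of such a per-sample cost term is given below; the weights, the hard inclination limit and the assumed PlexMap cell interface are illustrative assumptions, not the actual planner code.

```python
# Sketch of a per-sample cost along the lines described above; weights
# and the PlexMap cell interface are assumptions for illustration.
def sample_cost(cell, w_rough=1.0, w_incline=2.0, w_unknown=0.5):
    """Cost of placing a 6D pose sample on a PlexMap cell.

    'cell' is expected to expose roughness, inclination (rad) and a
    certainty value in [0, 1] derived from the available scan data.
    """
    if cell is None or cell.inclination > 0.45:   # ~25 deg hard limit
        return float("inf")                        # non-traversable
    cost = w_rough * cell.roughness
    cost += w_incline * cell.inclination
    cost += w_unknown * (1.0 - cell.certainty)     # opportunistic, but
    return cost                                    # prefers well-scanned cells
```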

The resulting plan is executed by a custom module that will try to use skid steering if possible and switch to omnidirectional movement for targets that are close by. This is important as the stability margin of LAURON is greater in the longitudinal direction, meaning it is safer to go forward than sideways. In particular when walking up slopes, the newly added hip joints can be utilized best when walking forward or backward without any sideways or rotational movement. Further details and evaluation results regarding the planning can be found in [18].

Object Detection and Tracking
We chose a simple approach for the target detection, based on the a priori known colours of the objects. To find initial estimates, each frame of the RGB data of the Kinect is searched for all pixels of the predefined colours. An object is detected if it has been consistently observed for several frames and its pixel dimensions roughly fit the known size of the objects. Once a detection is made, it is tracked within the object list, which keeps a confidence value for each sighting, calculated based on the size, the length of the detection and how close the colour match is. The positions of the objects are tracked and updated if they change within the field of view of the robot, while position changes outside of it will result in new objects being detected, whereas the old observations will lose confidence and ultimately be removed from the list.

Figure 7. Left: RGB image with detections and confidence values. Right: 3D point cloud with fitted geometric shapes.

Once the robot is close enough to the object to use the Kinect's depth data, a complete pose of the object is extracted by fitting geometric shapes into the 3D point cloud (Fig. 7). The mission control further evaluates the data by only using objects with a high confidence value and by cross-referencing the position of detections with the already known map, and it can try to actively verify an object by performing a sweeping motion with the robot's head.
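A simplified OpenCV sketch of the colour-based candidate search is shown below; the HSV ranges and the pixel-size plausibility check are placeholders, not the tuned competition values.

```python
# Colour-based candidate detection sketch; HSV ranges and size limits are
# placeholders for illustration, not the values used in the competition.
import cv2
import numpy as np

COLOR_RANGES = {
    "battery_yellow": ((20, 100, 100), (35, 255, 255)),
    "sample_blue":    ((100, 120, 80), (130, 255, 255)),
    "scale_red":      ((0, 120, 80),   (10, 255, 255)),
}

def detect_candidates(bgr_image, min_area=200, max_area=20000):
    """Return (label, bounding box) pairs whose pixel size is plausible."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    detections = []
    for label, (lo, hi) in COLOR_RANGES.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for contour in contours:
            area = cv2.contourArea(contour)
            if min_area <= area <= max_area:      # rough size plausibility
                detections.append((label, cv2.boundingRect(contour)))
    return detections
```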

Mobile Manipulation

Figure 8. A single grasping pose can be used to pick up both of the objects to collect during the SBC.

By moving its weight to the rear legs, LAURON is capable of using both front legs for manipulation tasks. However, LAURON only has four joints per leg, resulting in a heavily constrained workspace, especially for the collection of the water glass. To tackle this challenge, a special manipulator was constructed [19] which is capable of grasping both objects in the same grasping position (Fig. 8) and fulfils other requirements such as increased robustness and lifting of objects of up to 2.3 kg in weight. The mandible gripper is attached to the front right leg (Fig. 9) and can be completely retracted to enable normal walking. During manipulation the two fingers are extended and perform a simple closing grasp, limited by a current controller which has been set to a force appropriate to the weight of the objects.

To pick up the objects, LAURON moves into a grasping position close to the object and iteratively refines its position by making small adjustment steps until the object is within an area that is predefined by the kinematics. Once the robot has been positioned, the front right leg is brought into a grasping pose close to the object by a fixed trajectory. The final grasping movement is then calculated by using the pose of the object that was detected by geometry fitting and the inverse kinematics of the leg to make a simple interpolated movement. The storing and retrieval procedures for each object are executed with predefined trajectories, as the movements have to be very complex to reach the object storage compartments without collisions. For the SBC 2015 the approach is currently being extended by calculating valid grasps for each object in advance into a database and by using the whole body for the inverse kinematics.
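The interpolated grasping motion can be sketched as follows; the inverse kinematics function and the joint command interface are stand-ins for the robot-specific implementation.

```python
# Sketch of the simple interpolated grasp: the leg's inverse kinematics
# (a stub here) yields target joint angles for the detected object pose,
# and the joints are moved there in small linear steps.
import numpy as np

def interpolated_grasp(current_angles, object_pose, leg_ik,
                       send_joint_command, steps=50):
    """Linearly interpolate the four leg joints from the pre-grasp pose to
    the IK solution for the object pose. 'leg_ik' and 'send_joint_command'
    are placeholders for the robot-specific interfaces."""
    target_angles = leg_ik(object_pose)            # 4-DOF IK of the front leg
    current = np.asarray(current_angles, dtype=float)
    target = np.asarray(target_angles, dtype=float)
    for t in np.linspace(0.0, 1.0, steps):
        send_joint_command((1.0 - t) * current + t * target)
```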


Figure 9. A special gripper was constructed that is able to withstand external forces in case LAURON steps onto it and is capable of lifting various objects of up to 2.3 kg in weight and 50 cm in diameter.

Mission Layer
In order to successfully complete the tasks set in the SBC, the robot has to make its own decisions based on the available information and the current mission state. We designed the Mission Layer to be simple, yet flexible enough to easily change the resulting behaviour. The overall Mission Layer consists of three parts (Fig. 5): the Mission Control itself, which is implemented as a state machine using the SMACH framework for managing the execution of capabilities; a Capability Management module, which is currently just a static list but will be used to aggregate available capabilities in the future; and a Task Management module, which manages task requests by the ground station and constantly evaluates the available data sources to generate tasks on its own. Each skill provides a set of capabilities which can be used to fulfil tasks. Every capability is designed with a hierarchical state machine as execution control, also written in SMACH, resulting in a simple, yet effective way of adding individual segments to the overall sequence of actions to be performed by the robot (Fig. 10).

Figure 10. PlexNav Mission Layer. The Task Management accepts tasks and generates new ones, while the Mission Control periodically checks which task to start and monitors its execution or abortion.
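A minimal SMACH sketch of such a mission-level loop, fetching the next task and dispatching it to a capability, could look like this; the state names and the task interface are illustrative only.

```python
# Minimal SMACH sketch of the mission-level loop; state names and the task
# interface are illustrative, not the actual PlexNav mission control.
import smach

class FetchNextTask(smach.State):
    def __init__(self, task_manager):
        smach.State.__init__(self, outcomes=["move_to_poi", "idle"],
                             output_keys=["task"])
        self._tasks = task_manager

    def execute(self, userdata):
        task = self._tasks.next()          # highest-priority pending task
        if task is None:
            return "idle"
        userdata.task = task
        return "move_to_poi"

class ExecuteMoveToPOI(smach.State):
    def __init__(self, capability):
        smach.State.__init__(self, outcomes=["done", "failed"],
                             input_keys=["task"])
        self._capability = capability

    def execute(self, userdata):
        ok = self._capability.run(userdata.task)   # calls the skill's API
        return "done" if ok else "failed"

def build_mission_sm(task_manager, move_capability):
    sm = smach.StateMachine(outcomes=["mission_finished"])
    with sm:
        smach.StateMachine.add("FETCH_TASK", FetchNextTask(task_manager),
                               transitions={"move_to_poi": "MOVE_TO_POI",
                                            "idle": "mission_finished"})
        smach.StateMachine.add("MOVE_TO_POI", ExecuteMoveToPOI(move_capability),
                               transitions={"done": "FETCH_TASK",
                                            "failed": "FETCH_TASK"})
    return sm
```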

This approach is similar to HTNs (Hierarchical Task Networks), in which tasks can be subsequently divided into smaller tasks until an ordered plan of atomic actions the robot can perform is found.

The capabilities provide a uniform interface, currently consisting only of a string and two IDs. They request or write additional information, like the current position of the robot, by accessing the globally available data sources. Fig. 11 shows the state machine for the MoveToPOI capability, which handles an environment update, requests a plan and triggers its execution.

Figure 11. The MoveToPOI capability gathers the required data from the available sources and calls the specific API calls of the skill to achieve the desired behaviour.

Each of the state machines for capability execution possesses states for handling preemption requests in case a task is to be aborted by the overarching mission control state machine. This can happen in two cases: firstly, when the ground control has explicitly requested to abort the current task (or the complete mission) and start another one, or secondly, when the Task Manager has found a higher-priority task that invalidates the currently executed one. As an example: if the robot detects a target object while moving towards a point of interest, the movement will be aborted, a new target generated and a movement to the new target point started. User-defined tasks can be added via a ground control software and will always supersede the autonomously generated ones. If the behaviour of the robot should be changed, a different set of rules for the Task Manager module can be used, thereby allowing for easy adoption of new mission strategies. The current capabilities and their respective state machines are designed with the tasks of the SBC in mind, but are planned to be generalized as much as possible to become robot-agnostic in the future.
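The preemption handling inside a long-running capability state can be sketched with SMACH's preemption flags as follows; the plan-execution interface is a placeholder.

```python
# Sketch of preemption handling in a capability state: SMACH's preempt
# flag is polled so the mission control can abort the running task.
# The 'motion_interface' calls are placeholders for the real skill API.
import smach

class ExecutePlan(smach.State):
    def __init__(self, motion_interface):
        smach.State.__init__(self,
                             outcomes=["succeeded", "preempted", "aborted"])
        self._motion = motion_interface

    def execute(self, userdata):
        while not self._motion.goal_reached():
            if self.preempt_requested():       # set by the mission control
                self._motion.stop()
                self.service_preempt()
                return "preempted"
            if not self._motion.step():        # advance plan execution
                return "aborted"
        return "succeeded"
```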

Communication with the Ground Control
The communication is part of the Mission Layer and is implemented using readily available tools. ROS requires a tight communication coupling to function, as it uses a centralized master and variable ports for publish and subscribe actions (Fig. 12).

The robot provides its own master, which is a global entry point for any communication member. Each published topic is advertised to the master, which holds the record about the name, type and access point (a port) for each advertised topic.


Figure 12. ROS communication requires a master, which keeps the information about available topics and how to retrieve them.

Components that want to access this information have to ask the master which topics are available at which endpoints, after which a direct TCP (or UDP) connection is established. In the SBC scenario only a single port is cleared for communication, which already prohibits this dynamic allocation of communication channels, and the additional delay further complicates the communication. To solve this issue, but still use the available tooling of ROS and its communication features, we enhanced the rosbridge module. Originally designed to interface a robot running ROS with basically any device, most prominently websites, the rosbridge subscribes to the desired topics, serialises them as JSON-encoded messages and sends them via a single port. We use rosbridge effectively as a pair of relays on the robot and ground control side, which each run a separate ROS master. Selected topics (the ones needed for control and monitoring of the mission) are subscribed to by the bridge on the robot side and transmitted to the bridge at the ground control side of the distorted transmission, where the topics are again published as if the rosbridge itself had generated the messages. The bridge acts as a throttle to limit the high update frequency of some topics to the amount required by the base station. To increase the robustness of this method, we added a custom header to the message payload and transmit each message multiple times. At the cost of slightly increased traffic, this allows us to identify out-of-order packets and to verify whether the complete message has been successfully received.
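A sketch of this robustness layer is given below: each JSON message gets a small custom header and is transmitted several times, and the receiver uses the sequence number to drop duplicate or late copies. The field names and the transport function are assumptions, not the actual implementation.

```python
# Illustrative robustness layer on top of the single-port JSON relay:
# a sequence-numbered header plus redundant transmission, with a receiver
# that discards duplicate or out-of-order copies. Field names are made up.
import json

REDUNDANCY = 3   # each message is transmitted this many times

def wrap(seq, topic, payload):
    return json.dumps({"seq": seq, "topic": topic, "data": payload})

def send(transmit, seq, topic, payload):
    frame = wrap(seq, topic, payload).encode("utf-8")
    for _ in range(REDUNDANCY):
        transmit(frame)                  # single cleared port on the uplink

class Receiver:
    def __init__(self):
        self._last_seq = -1

    def handle(self, raw_frame):
        msg = json.loads(raw_frame.decode("utf-8"))
        if msg["seq"] <= self._last_seq:
            return None                  # duplicate or out-of-order copy
        self._last_seq = msg["seq"]
        return msg["topic"], msg["data"]
```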

4. RESULTS AND LESSONS LEARNED

The SBC challenge was a new experience for everyone participating, with the result that none of the participating robots was able to complete the challenge, which ultimately resulted in a closing without any points for any team. LAURON did not start its run, mainly due to problems with the mission control, which we were unable to fix without direct access. The individual skills, however, have been a major improvement for LAURON and parts of them have already been evaluated separately [14, 18, 19].

The localisation worked very well after tuning it in simulated environments, providing us with accurate positioning information and maps for the planning.

The navigation and mapping (without fovis) have been successfully used with our snake-like robot KAIRO 3 [20] and a new test platform we built for the SBC 2015. While many teams struggled with the communication, the rosbridge solution worked very well for us, providing a constant downlink of images and a suitable uplink. While based on very simple principles, the object detection also worked flawlessly during tests at the SBC and located objects reliably.

The main pitfall is one that probably many roboticists encounter when they are not used to this kind of scenario. For the newly developed code alone, 20,134 lines of code were written, not counting the already existing control code for LAURON or libraries already available at the FZI, which would add up to the mid to upper hundreds of thousands. While most researchers are used to this complexity, controlling a system like this without direct access requires a paradigm shift from a bottom-up feature development to a top-down system development approach, which incorporates diagnostic methods and easy-to-use control methods from the beginning. The structure for the mission control presented in this paper already represents the version planned for the SBC 2015, as the previous attempt was far too simple and not adequate to control such complex subsystems while at the same time remaining controllable by a ground station crew. A simple, yet flexible structure for the integration of components is therefore key in designing a complex exploration robot in a short time, resulting in the concept presented above.

5. CONCLUSIONS AND FUTURE WORK

In this paper we presented the walking robot LAURON and the extension of its control architecture to provide the high-level functionalities required to solve the tasks of the SpaceBot Cup. The robust six-legged robot was outfitted with a 3D laser scanner for localisation and mapping in a low-feature environment and with a gripper for manipulation tasks with a walking robot. The low-level behaviour-based control was extended by the deliberative PlexNav architecture. PlexNav heavily uses the ROS middleware to provide various high-level skills which can be executed by an HTN-like mission control that builds chains of task primitives to enable complex missions. While a coherent test at the SBC 2013 was unfortunately unsuccessful, the individual skills performed very well and provide a solid basis for the upcoming 2015 edition of the SBC. The architecture was in parts already deployed to our snake-like robot KAIRO. Even though KAIRO does not possess a Kinect or other hardware specialities of LAURON, the environment modelling and path generation skills could be reused with modifications only to the robot-specific parts, like the cost generation for the planning module. This shows that the resulting architecture, even though it is still work in progress, is capable of enabling a robot to perform complex missions and is flexible enough to support arbitrary robots.

In the future, besides extending the capabilities of LAURON V for the SpaceBot Cup 2015, we want to focus our work mainly on the mission control and the overall PlexNav architecture. While the two blocks Capability Management and Task Management are currently mostly static modules, we want to extend them to dynamically spawn skills and their capabilities, as well as to coordinate them with other robots running the same architecture. Also, the definition of missions will be further abstracted to allow the robot to actively choose which capability can best be used to fulfil a given mission goal. This will not only further enhance the autonomy of the proposed system but will also be used for multi-robot coordination based on the capability definitions and dynamic task allocation within a group of robots.

ACKNOWLEDGMENT
This research has been carried out during the preparation phase for the DLR SpaceBot Cup Challenge 2013 and 2015 and was partially funded by the Space Agency of the German Aerospace Center (DLR), with funds provided by the German Federal Ministry of Economics and Technology (BMWi), following a resolution of the German Bundestag.

REFERENCES

[1] JPL NASA. Mars Exploration Rovers. http://mars.nasa.gov/mer/home/index.html, Accessed: April 2015.

[2] S.A. Hayati, G. Udomkesmalee, and R.T. Caffrey. Initiating the 2002 Mars Science Laboratory (MSL) focused technology program. In Proceedings of the 2004 IEEE Aerospace Conference, volume 1, page 652, March 2004.

[3] E. Tunstel, T. Huntsberger, and E. Baumgartner. Earth-based rover field testing for exploration missions on Mars. In World Automation Congress, 2004. Proceedings, volume 15, pages 307–312, 2004.

[4] WPI NASA. Sample Return Robot Challenge website. http://wp.wpi.edu/challenge/, Accessed: April 2015.

[5] Felipe A. W. Belo et al. The ESA Lunar Robotics Challenge: Simulating operations at the lunar south pole. Journal of Field Robotics, 29(4):601–626, 2012. ISSN 1556-4967. URL http://dx.doi.org/10.1002/rob.20429.

[6] G. Pratt and J. Manzo. The DARPA Robotics Challenge [competitions]. Robotics & Automation Magazine, IEEE, 20(2):10–12, June 2013. ISSN 1070-9932.

[7] H. Kitano, M. Asada, I. Noda, and H. Matsubara. RoboCup: Robot World Cup. Robotics & Automation Magazine, IEEE, 5(3):30–36, Sep 1998. ISSN 1070-9932.

[8] Thilo Kaupisch and Daniel Noelke. DLR SpaceBot Cup 2013: A space robotics competition. KI, 28(2):111–116, 2014.

[9] Sebastian Bartsch et al. SpaceClimber: Development of a six-legged climbing robot for space exploration. In Robotics (ISR), 2010 41st International Symposium on and 2010 6th German Conference on Robotics (ROBOTIK), pages 1–8, June 2010.

[10] NASA. Super Ball Bot website. http://www.nasa.gov/content/super-ball-bot, Accessed: April 2015.

[11] A. Roennau, G. Heppner, and R. Dillmann. A six-legged walking robot designed for mobile manipulation. In Dynamic Walking Conference, 2014.

[12] A. Roennau, G. Heppner, M. Nowicki, and R. Dillmann. LAURON V: A versatile six-legged walking robot with advanced maneuverability. In Advanced Intelligent Mechatronics (AIM), 2014 IEEE/ASME International Conference on, pages 82–87, July 2014.

[13] L. Pfotzer, J. Oberländer, A. Roennau, and R. Dillmann. Development and calibration of KaRoLa, a compact, high-resolution 3D laser scanner. In Safety, Security, and Rescue Robotics (SSRR), 2014 IEEE International Symposium on, pages 1–6, Oct 2014.

[14] A. Roennau, G. Heppner, M. Nowicki, J.M. Zoellner, and R. Dillmann. Reactive posture behaviors for stable legged locomotion over steep inclines and large obstacles. In Intelligent Robots and Systems (IROS 2014), 2014 IEEE/RSJ International Conference on, pages 4888–4894, Sept 2014.

[15] Klaus Uhl and Marco Ziegenmeyer. MCA2 – An Extensible Modular Framework for Robot Control Applications. In Advances in Climbing and Walking Robots: Proceedings of the 10th International Conference on Climbing and Walking Robots, pages 680–689. World Scientific, 2007.

[16] Morgan Quigley et al. ROS: an open-source Robot Operating System. In ICRA Workshop on Open Source Software, 2009.

[17] Albert S. Huang et al. Visual Odometry and Mapping for Autonomous Flight Using an RGB-D Camera. In Proceedings of the 15th International Symposium of Robotics Research, Flagstaff, Arizona, USA, August 2011.

[18] Jan Oberländer et al. A multi-resolution 3-D environment model for autonomous planetary exploration. In Automation Science and Engineering (CASE), 2014 IEEE International Conference on, pages 229–235. IEEE, 2014.

[19] G. Heppner, T. Buettner, A. Roennau, and R. Dillmann. Versatile – high power gripper for a six-legged walking robot. In Mobile Service Robotics, Proceedings of the 17th International Conference on Climbing and Walking Robots, pages 461–468, 2014.

[20] L. Pfotzer, S. Ruehl, G. Heppner, A. Roennau, and R. Dillmann. KAIRO 3: A modular reconfigurable robot for search and rescue field missions. In Robotics and Biomimetics (ROBIO), 2014 IEEE International Conference on, pages 205–210, Dec 2014.