MASTER'S THESIS
Software Control of an Eight Wheeled-Legged Hybrid Robot
Richard Paisley
Luleå University of Technology
Master Thesis, Continuation Courses Space Science and Technology
Department of Space Science, Kiruna
2010:068 - ISSN: 1653-0187 - ISRN: LTU-PB-EX--10/068--SE
Preface
This Master's Thesis research work was performed jointly between the Aalto
University School of Science and Technology and Shanghai Jiao Tong University
during the year 2010. The research was performed as part of an Erasmus Mundus
study program towards the dual degree of M.Sc. in Space Science and Technology.
The first three months of the work were performed in China at the Jiao Tong
University Advanced Robotics Laboratory. The remainder of the work was carried
out in Finland at the Aalto University Department of Automation and Systems
Technology.
I would like to express my gratitude to all involved in providing the opportunity and
financial assistance to perform part of this thesis work in China. The experience was
unforgettable and I would especially like to thank Professor Jianjun Yuan, Yongtao
Song and others at the Advanced Robotics Laboratory for their invaluable support and
excellent hospitality.
I would like to thank Professor Aarne Halme and Tomi Ylikorpi for their continued
guidance and support. My gratitude also goes to Eric Halbach, the instructor of this
thesis, for his excellent advice and comments.
I would finally like to thank my family and friends. Exceptionally warm thanks go to
my fiancée Tuulia Lepistö for her constant support during this work.
Espoo, Finland, August 12, 2010 Richard Paisley
Aalto University
School of Science and Technology
Abstract of the Master's Thesis
Author: Richard Paisley
Title of the thesis: Software control of an eight wheeled-legged hybrid robot
Date: August 12, 2010 Number of pages: 83
Faculty: Faculty of Electronics, Communications and Automation
Department: Automation and Systems Technology
Program: Master's Degree Programme in Space Science and Technology
Professorship: Automation Technology (Aut-84)
Supervisors: Professor Aarne Halme (Aalto)
Professor Kalevi Hyyppä (LTU)
Instructor: Eric Halbach (Aalto)
Professor Jianjun Yuan (Shanghai Jiao Tong University)
Space robots such as rovers have typically used wheeled mobility systems such as the rocker-bogie
suspension system. However, NASA's ATHLETE robot, featuring a hybrid wheeled-legged mobility
system signifies a shift in direction. Legged and hybrid mobility systems improve on wheeled systems
in their enhanced ability to move effectively on rough terrain. One class of legged robots that has
been extensively researched for space applications is frame-walking systems. These robots typically
have legs with sliding prismatic joints to provide a degree of freedom in the vertical direction.
The present thesis work targets a hybrid eight wheeled-legged robot called the Zero-Carrier.
The robot is designed for transportation of disabled and elderly people, and has legs with sliding
prismatic joints to allow stair-climbing ability. The aim of the thesis was to implement the upper-level
control software for the next version of the robot. The role of the software is to detect obstacles using
on-board sensors and to control actuators so that the robot autonomously overcomes the obstacles. To
allow implementation, testing and demonstration of the software, a simulation platform was developed
using OpenGL for 3D visualisation.
The described control software improves on previous versions by providing intelligent control of
individual legs using state machines, while centrally coordinating movements to guarantee stability.
This results in a wider range of obstacles that can be overcome. Advanced features including center of
gravity control, smooth movement and level movement on slopes are also addressed to improve
passenger comfort and safety. Results for a set of simulation test cases are presented which
demonstrate the implemented control software's ability to overcome various obstacle situations.
In the context of space robotics, strong similarities were demonstrated between Zero-Carrier and
various researched space robots. The research work therefore provides a relevant examination of the
complexities encountered in performing the autonomous control of such a complex machine.
Keywords: Space Robotics, hybrid mobility system, 3D-simulation, control software, stair climbing.
Contents
1 INTRODUCTION ....................................................................................................... 1
1.1 Mobility Systems ................................................................................................ 2
1.2 Thesis Overview ................................................................................................. 3
2 THESIS BACKGROUND ........................................................................................... 6
2.1 History ................................................................................................................. 6
2.2 Future Missions ................................................................................................... 7
2.3 Legged Systems .................................................................................................. 8
2.4 Hybrid Systems ................................................................................................. 13
2.5 Zero-Carrier....................................................................................................... 16
3 SIMULATION SOFTWARE .................................................................................... 19
3.1 Robot Simulation Requirements ....................................................................... 19
3.2 3D-Simulation Environment ............................................................................. 20
3.3 OpenGL ............................................................................................................. 21
3.4 Software Design and Implementation ............................................................... 24
3.5 Test Scenarios ................................................................................................... 34
4 CONTROL SOFTWARE .......................................................................................... 36
4.1 Robust Software ................................................................................................ 36
4.2 Basic Movement ............................................................................................... 37
4.3 Control Software Design ................................................................................... 40
4.4 Main Control State Machine ............................................................................. 42
4.5 Leg Control State Machine ............................................................................... 44
4.6 Stability Test ..................................................................................................... 45
4.7 Perform Leg Action........................................................................................... 47
4.8 Manual Versus Automatic Control ................................................................... 47
4.9 Results ............................................................................................................... 49
5 ADVANCED CONTROL SOFTWARE ................................................................... 51
5.1 Negotiation of Multiple Pending Obstacles ...................................................... 51
5.2 Smooth Robot Motion ....................................................................................... 52
5.3 Centre of Gravity Control ................................................................................. 54
5.4 Slope Negotiation .............................................................................................. 55
5.5 Advanced Control Software .............................................................................. 57
5.6 Results ............................................................................................................... 61
6 FUTURE WORK ....................................................................................................... 67
7 CONCLUSIONS ........................................................................................................ 71
8 REFERENCES .......................................................................................................... 73
List of Figures
Figure 1: The rocker-bogie system. ............................................................................. 6
Figure 2: MSL size comparison. .................................................................................. 7
Figure 3: Marietta walking beam rover. ....................................................................... 9
Figure 4: AMBLER. .................................................................................................. 11
Figure 5: Dante II. ...................................................................................................... 12
Figure 6: Chariot chassis. ........................................................................................... 14
Figure 7: ATHLETE. ................................................................................................. 15
Figure 8: Zero-Carrier I. ............................................................................................. 17
Figure 9: 3D cube rendered with OpenGL. ................................................................ 22
Figure 10: 3D Graphics from polygons. .................................................................... 23
Figure 11: Software design. ....................................................................................... 24
Figure 12: Pro-Engineer robot model. ....................................................................... 25
Figure 13: Zero-Carrier OpenGL model. ................................................................... 26
Figure 14: OpenGL model leg stroke......................................................................... 26
Figure 15: The simulation world. ............................................................................... 28
Figure 16: Calculation of robot rotation on slopes..................................................... 30
Figure 17: Sensors configuration. .............................................................................. 31
Figure 18: Simulation software. ................................................................................. 33
Figure 19: Single block obstacle. ............................................................................... 38
Figure 20: Control sequence diagram. ....................................................................... 41
Figure 21: Main control state machine. ..................................................................... 42
Figure 22: Leg control state machine. ........................................................................ 44
Figure 23: Stability test. ............................................................................................. 46
Figure 24: Moving on to an obstacle. ........................................................................ 49
Figure 25: Moving down from an obstacle. ............................................................... 50
Figure 26: Speed calculation, moving on to obstacle. ............................................... 53
Figure 27: Speed calculation, descending from obstacle ........................................... 54
Figure 28: Leg control while moving up slope .......................................................... 56
Figure 29: Main control state machine. ..................................................................... 57
Figure 30: Leg control state machine. ........................................................................ 59
Figure 31: Stairs climbing ability; moving up stairs. ................................................. 61
Figure 32: Stairs climbing ability; moving down stairs. ............................................ 62
Figure 33: Advanced stair climbing ability. ............................................................... 63
Figure 34: Approaching an obstacle at a narrow angle. ............................................. 64
Figure 35: Approaching an obstacle at a wide angle. ................................................ 64
Figure 36: Obstacles partially blocking the robot. ..................................................... 65
Figure 37: Movement on a slope. ............................................................................... 66
Symbols and Abbreviations
3D Three Dimensions
AMBLER Autonomous Mobile Exploration Rover
AI Artificial Intelligence
API Application Programming Interface
ATHLETE All-Terrain Hex-Legged Extra-Terrestrial Explorer
CAN Controller Area Network
CMU Carnegie Mellon University
ESA European Space Agency
GLUI Graphics Library User Interface
HW Hardware
JPL Jet Propulsion Laboratory
LEO Low Earth Orbit
LER Lunar Electric Rover
MER Mars Exploration Rovers
MSL Mars Science Laboratory
NASA National Aeronautics and Space Administration
ODE Open Dynamics Engine
OpenGL Open Graphics Library
OS Operating System
RTOS Real-Time Operating System
SW Software
UI User Interface
ZC Zero-Carrier
Chapter 1 Introduction
Since the success of NASA's Apollo lunar missions over 40 years ago, humans have
not ventured such distances from the Earth again. Despite the progress made in human
spaceflight by new space powers such as China, it will undoubtedly be some time
before humans venture once again beyond Low Earth Orbit (LEO). In contrast,
robotic explorers continue to push the boundaries with pioneering missions out of
Earth orbit to the far reaches of the Solar System.
Robotic rovers are one of the major success stories of the robotic exploration
missions. In particular, a number of robotic missions have been successfully exploring
the surface of the Moon and Mars. Although human space endeavours serve best to
capture the public imagination, robotic missions have a number of major advantages.
In particular, human missions pose high risks and require a safe return to the surface
of the Earth. Robotic rovers, however, can be sent to
explore other worlds without the human risk factor and can be left to run out the
course of their mission without need for return to Earth. The task of sustaining human
life in space is a very complicated and costly procedure. Consequently, the budget
required to send robots instead of humans into space is significantly less.
The success of robotic missions drives the desire to attempt missions with even more
complex objectives with the aim of increased scientific return. More complex mission
objectives lead to more complex engineering requirements and ultimately higher risk
of failure. Space missions typically require many years of work and extremely large
budgets. Complete loss of a mission due to launch failure during take-off is an
accepted risk. However, with the stakes so high, loss of a mission because of a simple
engineering error such as a software bug can be just as disastrous.
1.1 Mobility Systems
The mobility system is the part of the rover, including mechanics and control systems,
that enables the robot to move. The mobility system ultimately determines how well
the robot is able to move in its destined environment. For a Martian rover mission,
one-way communication delays between Earth and Mars can vary between 3 and 22
minutes depending on the current positions of the planets in their orbits.
Consequently, it is impossible to operate the rover manually in real-time from Earth.
To overcome the problem a level of artificial intelligence (AI) is required to allow the
robot to perform tasks autonomously. In a typical operation, the human operator will
select a scientifically interesting location within the range of the rover and command
the rover to go there. It is then the responsibility of the robot's AI to navigate to the
location safely, negotiating obstacles en route. An inadequate mobility system could
result in the robot getting stuck or toppling over during the movement, resulting in the
end of the mission.
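The quoted 3 to 22 minute range follows directly from the light-travel time over the Earth-Mars distance. A quick sketch, where the two distances are approximate orbital extremes used purely for illustration:

```python
# One-way signal delay between Earth and Mars at the speed of light.
C_KM_PER_S = 299_792.458  # speed of light in km/s

def one_way_delay_minutes(distance_km: float) -> float:
    """Light-travel time in minutes for a given Earth-Mars distance."""
    return distance_km / C_KM_PER_S / 60.0

closest_km = 54.6e6    # approximate closest approach
farthest_km = 401.0e6  # approximate maximum separation

print(f"{one_way_delay_minutes(closest_km):.1f} min")   # ~3 min
print(f"{one_way_delay_minutes(farthest_km):.1f} min")  # ~22 min
```

The round-trip command-and-confirm cycle is twice this, which is why even simple manoeuvres cannot be teleoperated in real time.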
Typical mobility systems for rovers include wheeled, legged or hybrid wheeled-
legged systems. Wheeled designs have been the preferred method for previous
missions because they are mechanically simple, relatively straightforward to control
and energy efficient [1]. They have performed very well for achieving the goals of
missions that have been attempted thus far. These goals have typically been to travel
relatively short distances over fairly even terrain. However, it is predicted that future
missions will require access to more rugged locations and transport over greater
distances. These goals can only be achieved using more complex mobility systems
such as legged and hybrid systems.
Wheeled systems operate by maintaining continuous contact with the surface in order
to drive the vehicle forward. They are therefore most effective on flat and even terrain
where high speeds can be achieved without collision. Legged systems are typically
more complicated to control than wheeled systems. This is mainly due to the
increased degrees of freedom in the number of joints and moving mechanisms that
must be controlled and co-ordinated. The control and use of more mechanisms not
only increases the complexity of control but also the chance of failure. Legged robots
also require careful gait control to ensure stability during movement. A typical
method is keeping a subset of the legs static and in contact with the ground while the
remaining legs are lifted from the ground and moved to another stable position. As
motion involves lifting legs from the surface completely, the robot can step over
patches of rough terrain entirely. This allows legged systems to move more capably in
rough environments than wheeled systems, which require continuous contact with the
ground. A hybrid
locomotion system uses both wheels and legs to gain the advantages of both types of
locomotion systems. This however comes at the price of even more complexity and
risk of failure.
1.2 Thesis Overview
This thesis research focuses on the mobility system of the hybrid eight wheeled-
legged Zero-Carrier robot. The Zero-Carrier is the creation of robotics pioneer Shigeo
Hirose and Jianjun Yuan, forming the doctoral thesis of the latter [23]. It was
designed for the purpose of transportation of elderly or disabled people. In essence,
the robot operates similarly to a wheelchair, providing improved mobility for the
passenger. However, unlike a wheelchair, the robot has motorised sliding prismatic
joints in each of its legs. This means the length of the legs can be modified, making it
possible to move over certain obstacles such as stairs. The system can also be used to
provide active suspension by moving the legs to keep the robot body horizontal on
uneven terrain. An intelligent control system is used to automatically handle robot
locomotion to overcome obstacles. The control system is responsible for sensing the
environment using onboard sensors and controlling actuators to move the robot as
required. It is the design of this control system software that is the focus of this thesis
research. The Zero-Carrier robot is examined in more detail in the background work
to the thesis detailed in the next chapter.
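The active-suspension idea mentioned above, adjusting the prismatic leg lengths to keep the body horizontal on uneven terrain, can be sketched as a simple per-leg computation. This is an illustrative sketch, not the Zero-Carrier's actual controller; the `ground_heights` and `body_clearance` inputs are assumptions:

```python
def level_body_extensions(ground_heights, body_clearance):
    """Compute per-leg prismatic extensions that keep the body horizontal.

    ground_heights: terrain height under each foot (same datum for all legs).
    body_clearance: desired body height above the highest contact point.
    Returns the extension each sliding joint must take so that every foot
    touches its local ground while the body stays level.
    """
    body_height = max(ground_heights) + body_clearance
    return [body_height - h for h in ground_heights]

# A 15 cm step under the two front legs of an eight-legged chassis:
ext = level_body_extensions([0.15, 0.15, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 0.40)
print(ext)  # legs over the step retract, the remaining legs extend further
```

The same computation, run continuously as the robot advances, gives the active suspension behaviour: each leg's extension tracks the terrain under it while the body height stays constant.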
Although the Zero-Carrier robot is aimed at the transportation of the elderly and
disabled, this thesis work is targeted at the application of space robotics research. The
Zero-Carrier hybrid mobility system is relevant as it bears strong similarities to the
mobility systems of rovers and transportation robots planned for future space
missions. One such example is the All-Terrain Hex-Legged Extra-Terrestrial Explorer
(ATHLETE) robot developed by NASA for movement of heavy loads on the Moon.
The ATHLETE is similar to Zero-Carrier in that it uses a complex six wheeled-legged
hybrid mobility system. Another example is NASA's Lunar Electric Rover (LER),
which uses a suspension system similar to Zero-Carrier's, where wheels can be moved
in the vertical direction. Like Zero-Carrier, both the ATHLETE and the LER can be
used for the transportation of people. All three robots therefore share requirements of
passenger comfort and safety. The Zero-Carrier therefore serves as a suitable platform
to perform the work with aim of understanding the challenges of future space
robotics. Subsequently, it is also a goal of the work to develop software that is as
reliable and robust as possible, such as would be required for a space mission.
Two versions of Zero-Carrier have been developed to date, both at the Tokyo Institute
of Technology in Japan. The first version (Zero-Carrier I) was developed as a
demonstration platform of the mobility concept [18]. That version was implemented
with control software providing the core ability to overcome obstacles and stairs. The
aim of the second version (Zero-Carrier II) was to study improving the control to
negotiate obstacles and stairs using smoother motion [19]. The focus of that research
was aimed directly at improving passenger safety and comfort. The goal of this thesis
work is to develop more advanced control software for the Zero-Carrier robot.¹ This
work is intended as a pre-study assessing the control software for a new version of the
robot to be developed at Shanghai Jiao Tong University. For this thesis work, the new
version of Zero-Carrier is assumed to be mechanically the same as Zero-Carrier I. The
aim of the new control software is to achieve the level of functionality reached in
previous versions, and also to study and implement new capabilities. The main aim of
the software is to implement intelligent independent control of all individual legs.
This would allow negotiation of obstacles partially blocking the front of the robot, or
allow the robot to approach obstacles at an angle, neither of which was possible in
previous versions. The thesis also aims to further study and improve passenger safety
and comfort.

¹ To differentiate it from the previous versions, the new version of the robot is referred to as Zero-Carrier,
without a version number, in the remainder of this report.
For this thesis work the robot hardware is not available; therefore, simulation is used.
From the outset of the work it was required that a 3D-simulation environment be used
in order to give good quality visualisation of results. It is therefore an aim of the thesis
to develop a suitable simulation platform to allow the implementation, testing and
demonstration of the control software. By using simulation, it is hoped that major
problems can be identified before construction of the actual robot. The simulation is
also intended to serve as a controlled platform for verifying and improving the control
software so that it can overcome more complex obstacles.
Chapter 2 establishes the background to the thesis work with an examination of the
Zero-Carrier mobility system. A study of robotic space missions and research is
provided, with focus on robots with similar mobility systems to Zero-Carrier. Chapter
3 details the design and development of the 3D simulation platform. A set of
simulation demo scenarios created to test the advanced features of the robot is also
described. The core of the thesis is the automatic control software, whose design and
implementation are covered in chapters 4 and 5. Chapter 4 focuses on the design and
implementation of the basic control software to overcome simple obstacles. In chapter
5 improvements that were made to the basic control software are detailed, including
control of the center of gravity of the robot, smooth movement and slope negotiation.
Chapter 6 examines potential future development work, while chapter 7 provides the
conclusions of the thesis study, detailing what was achieved and lessons learned.
Chapter 2 Thesis Background
This chapter details the background research to the thesis work. The history of space
robotics and planned future missions are first examined with focus on the locomotion
systems. Various space robots with similar mobility systems to the Zero-Carrier robot
are then examined. Finally, a detailed examination of the mobility system of the Zero-
Carrier robot is provided with similarities to the previously examined robots
highlighted.
2.1 History
Figure 1: The rocker-bogie system.

Since the Soviet Lunokhod 1 first landed on the Moon, rover missions have featured
wheeled mobility systems. Both Lunokhod missions (Lunokhod 1 in 1970, Lunokhod 2
in 1973) were eight-wheeled systems. Later, the Apollo lunar rover missions (1971,
1972) employed a four-wheeled buggy system. The method currently favoured for
rovers is the so-called rocker-bogie suspension system shown in Figure 1. This system
was successfully used by the Sojourner rover in NASA's Mars Pathfinder mission
(1997). The system has also been used to great success in NASA's twin Mars
Exploration Rovers (MERs), Spirit and Opportunity (2004). The MERs landed in
January 2004 and were expected to operate for 90 Martian days (sols). However, over
six years later the
Opportunity rover is still operating successfully. Unfortunately, the Spirit rover has
lost its mobility after becoming stuck in soft sand.
The rocker-bogie suspension system utilises six wheels, with three wheels on each
side of the rover. The two sets of wheels on each side are connected together and to
the vehicle chassis through a differential [2]. Of the wheels on each side two are
connected together via a bogie and the other is connected to the bogie via a rocker.
The system has no springs or axles, and the bogie and rocker are both free to pivot.
This allows the front, rear and middle wheels to passively maintain contact with the
ground, even on rough terrain. This has the advantage of helping to propel the vehicle
over rough terrain and distributing the weight of the vehicle evenly among the wheels
[3]. The system allows the rover to traverse relatively small obstacles up to a wheel
diameter in height and can tolerate a tilt of up to 45 degrees without overturning.
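The 45-degree figure is a static stability limit: a vehicle tips over when the projection of its centre of gravity leaves the wheel support polygon, i.e. when the tangent of the tilt exceeds the half-track divided by the CoG height. A hedged sketch of both quoted limits; the dimensions used are illustrative, not actual rover values:

```python
import math

def tip_over_tilt_deg(half_track_m: float, cog_height_m: float) -> float:
    """Static tip-over angle: the tilt at which the centre of gravity's
    projection reaches the edge of the wheel support polygon."""
    return math.degrees(math.atan2(half_track_m, cog_height_m))

def can_traverse(obstacle_height_m: float, wheel_diameter_m: float) -> bool:
    """Obstacle limit quoted for the rocker-bogie system: roughly one
    wheel diameter in height."""
    return obstacle_height_m <= wheel_diameter_m

# A half-track equal to the CoG height yields the quoted 45-degree limit.
print(tip_over_tilt_deg(0.6, 0.6))  # ~45.0
print(can_traverse(0.20, 0.25))     # True: obstacle under one wheel diameter
```

A low CoG relative to the track width is what lets such a vehicle tolerate steep tilts despite having no springs.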
2.2 Future Missions
Planned mobility systems for future rover missions look set to follow in the footsteps
of previous missions. The European Space Agency (ESA) currently plans to launch its
ExoMars mission to Mars in 2018. The mobility system of this rover is a wheeled
system featuring three independent linkages. NASA's next planned mission to the
Martian surface is the huge Mars Science Laboratory (MSL), which is due to launch
in 2011. It weighs around one ton and is the size of a small car, as shown in Figure 2.
It uses a rocker-bogie mobility system. China and India both have lunar rover
missions planned for launch in the next few years. Both missions are planned to use
six-wheeled rocker-bogie systems.

Figure 2: MSL size comparison.
Despite the popularity of wheeled systems, a large amount of research has been
performed into the possibility of legged and hybrid systems for space missions.
Though past trends would suggest such systems are destined to be confined to the
premises of universities and research institutes, they are now seriously being
considered for use by the major space agencies. This interest is largely driven by the
drawbacks of the wheeled systems in use. Perhaps the most significant drawback is
the inability to access areas of particularly rough terrain. This can limit the robot's
ability to reach areas that may be of particular scientific interest.
Another key drawback is that wheeled rocker-bogie systems do not perform very well
in soft sand. This has been highlighted by the plight of the Spirit rover stuck in sand
for six months. Attempts to free the rover have now officially been abandoned, with
its operation now focused on science at a fixed location. Another problem is that the
rocker-bogie system requires very slow movement to allow obstacles to be
overcome [4].
As our knowledge of the solar system improves, it is becoming increasingly clear that
the most interesting secrets lie beneath the surface. Future missions to Mars will carry
large drills in order to access the subsurface in an attempt to locate water ice. Craters
and gullies offer a treasure trove of geological and scientific information, exposing
layers that would otherwise be buried. Access to such steeply inclined areas cannot
safely be achieved with rocker-bogie systems. Future rover missions may also be
aimed at assisting humans on the surface of the Moon or Mars and would therefore be
required to move at least at human walking speed. It is therefore inevitable that more
sophisticated mobility systems will be required in the future.
2.3 Legged Systems
Legged robots for planetary exploration have been under consideration for a long
time. In the late 1980s, robotics pioneer Rodney Brooks and his group at MIT were
researching the advantages of sending multiple mini walking robots for extra-
terrestrial exploration. The resulting bug-based robots Genghis [5] and Attila [6] are
now artefacts of legged robot history. The concept of increasing redundancy by
sending many simple robots seems to have largely been abandoned in favour of
sending extremely large and more complex robots such as MSL [7]. However,
walking machines are still an active research area, with many prototypes of walking
rovers being built worldwide. As previously discussed, the advantage of better
mobility on rough terrain that legs provide comes at the cost of increased control
complexity and chance of failure. With this in mind, legged robots for space
applications have mainly focused on walking systems utilising mechanisms that are
as simple as possible. Frame walkers are one such group of robots that has been
extensively studied for space applications. The mobility system of this group of
robots consists only of orthogonal legs that use a sliding prismatic joint to extend
and retract in the vertical direction. This allows negotiation of obstacles and rough
terrain by ensuring that specific legs can be kept in continual contact with the ground
to provide a supporting gait. The following subsections examine various examples of
such legged robots that have been researched for lunar or planetary exploration
applications.
2.3.1 The Walking Beam
Figure 3: Marietta walking beam rover.

The Walking Beam [8] is a legged robot aimed at planetary exploration. It was
sponsored by the Jet Propulsion Laboratory (JPL) and built by the Martin Marietta
Corporation. The robot, along with Walkie-6 [9] among others, is a frame-walking
robot. The mobility system is composed of two frames with seven legs in total, as
shown in Figure 3. The outer frame consists of three legs positioned at the corners of
an equilateral triangle formation with six metres between legs. The inner frame
consists of four legs positioned at the corners of a square formation. Each leg
is able to extend and retract in the vertical direction in a telescopic fashion. The inner
frame supports the body of the robot where the necessary instrumentation and
electronics are held. The two frames are attached via the body of the robot, with the
frames able to move relative to one another in a horizontal direction using a sliding
mechanism actuated by means of brushless motors, rollers and a cable pulley system.
The two frames are also able to rotate relative to one another via a motorised turntable
mechanism.
The basic motion of the robot is that one frame provides the supporting gait while the
other frame performs the movement and vice versa. During forward motion, the inner
frame lifts its legs from the ground while the outer frame keeps its legs on the ground
to ensure the robot's stability. The inner frame then moves forward via the sliding
mechanism. Once the movement is complete, the inner frame's legs are extended to
provide the supporting gait while the outer frame performs the movement. The
turntable motion allows the robot to move in different directions. In total, the robot
has nine degrees of freedom: one for vertical motion of each of the seven legs, one for
the horizontal motion of the frames via the slider mechanism and one for the angular
motion of frames via the turntable mechanism.
The considerable size of the robot and substantial length of legs enable the robot to
overcome large obstacles. The robot has been tested successfully in very rough
environments and at steep tilts. The ability to adjust leg lengths in the vertical
direction allows the position of the robot body to be controlled. This can be used to
keep the body flat and avoid rough movements. In effect, the legs can act as a form of
active suspension helping to improve stability and prevent damage to onboard
instrumentation. The large size of the robot could be seen to pose a problem where
size and mass are critical factors to be taken into account for launch. However, the
majority of the robot is metallic framework and could be compacted to smaller size
for launch and expanded to working size after landing.
2.3.2 AMBLER
Carnegie Mellon University's
(CMU) Autonomous Mobile
Exploration Rover (AMBLER) is
another example of a legged robot
designed for planetary exploration
[10]. The robot consists of six
orthogonal legs configured as two
stacks of three at each side of the
robot as shown in Figure 4. Each leg
is capable of rotating and extending
in the horizontal direction and
extending in the vertical direction.
Movement is performed using a
circular gait where a trailing leg is
raised from the ground and brought
through the hollow body of the robot
to the front. A stable gait is
maintained during the motion phase
through suitable positioning of the legs kept in contact with the ground. Force sensors
on the base of the legs are used to confirm contact with the ground and provide
indications of unexpected ground contact. The direction of the robot is controlled by
altering the final positioning of the moving legs. Positioning the legs directly in front
of trailing legs will result in forward motion, whereas positioning at an offset by
extending or retracting legs will result in a change of direction.
Ambler is, like the Walking Beam, a large robot, with a height of 4.1 to 6 meters and a
walking length of 3.5 meters [11]. It is designed to negotiate meter high boulders and
ditches, handle 30-degree tilts and travel 1000 km. With the use of orthogonal legs,
the vertical and horizontal movements are decoupled allowing simpler control
mechanisms. The robot also maximizes efficiency because of its long steps achievable
by swinging a leg from the back of the robot to the front.
Figure 4: AMBLER.
The ability to extend the legs to keep the body horizontal is also an advantage that could be necessary for
onboard science instruments. In conducted tests, AMBLER has performed well on
uneven terrain [12].
2.3.3 Dante II
Dante II [13] is the follow-up to the NASA-funded Dante robot that was designed by
Carnegie Mellon University to explore the interior of Mount Erebus, the southernmost
active volcano on Earth. The principal intention of the
mission was to research and
develop technologies which could
be used for future missions on the
most rugged and inaccessible
lunar or planetary terrain. The
mission would also serve to
retrieve valuable scientific data
from the volcano using a novel approach. The typical
approach for a volcanologist to retrieve scientific data would be to travel into the
crater to measure the gases. However, with highly active volcanoes this is an
extremely risky or even impossible task. Due to a major malfunction in a principal
component, the Dante robot did not get far into the volcano before failing. Dante II,
created several years later, was a reworked and improved version of its predecessor,
based on the experience and lessons learned.
The Dante II robot, shown in Figure 5, is another frame walking robot with eight legs
in total. It is composed of two frames that are actuated relative to one another through
a common vertical drive train. The frames can also rotate with respect to each other,
albeit with a maximum turn of only 7.5 degrees at a time, to change direction.
Figure 5: Dante II.
The outer frame has four legs and the inner frame has four legs, both
positioned in rectangular formation to maximize stability. Like the Walking Beam
robot, each leg's length can be adjusted in the vertical direction to overcome obstacles
and adapt to uneven terrain. The legs, however, are pantographic in form, allowing
amplification of the foot motion compared to the actuator stroke. The other key difference
between Dante II and other frame walker robots is that it is a rappelling robot. The
robot utilises a tether cable which it winches out as it advances, as a mountaineer
would do when abseiling. The tether allows Dante II to descend slopes as steep
as 90 degrees. The robot, like the other walkers examined previously, is quite
large (length 3.7 m, width 2.3 m, and height 3.7 m). This allows negotiation of large
obstacles in the path of the robot.
Dante II was able to successfully rappel to the bottom of the crater of the Mount Spurr
volcano in Alaska [14]. The robot was able to overcome one-meter ash-covered
boulders, snow, ditches and rubble as well as being hit by falling boulders on several
occasions during its descent. It was then able to operate at the bottom of the crater
without outside intervention for almost a week. During the ascent from the crater, the
robot toppled over whilst being winched up a steep muddy incline, ending the mission.
The robot was later recovered by helicopter. Despite not reaching its final destination
at the top of the crater, the mission was a resounding success. The technology
successfully demonstrated that even the harshest of terrains could be accessed using
robotic technology.
2.4 Hybrid Systems
Hybrid mobility systems utilise a mixture of both wheels and legs to get the best of
both systems. Many novel designs of hybrid robots have been studied, such as the
roller-skating robot [22] from the Hirose-Fukushima Robotics Laboratory. Other robots
such as Work Partner [15] and Marsokhod [16, 17] utilise a hybrid mobility system
to perform a wheel-walking movement. This combined hybrid motion has been
shown to perform better in certain difficult environments where isolated legged and
wheeled motion does not fare well. The flexibility of hybrid vehicles, however, comes
at a price: carrying two mobility systems instead of one increases mass, and the
complexity of the control is also increased.
There are, however, other advantages which have made hybrid systems an active area
of research. One advantage is the possibility of using a hybrid rover's legs to double as
an active suspension system. It could be necessary to keep the body of the rover
horizontal in order to keep the onboard scientific instruments level or to improve
stability. This can be accomplished on a predominantly wheeled rover through the
use of telescopic leg mechanisms. The following subsections examine various
examples of hybrid wheeled-legged robots with potential for lunar or planetary
exploration applications.
2.4.1 Lunar Electric Rover
The state of the art of manned exploration rovers is NASA's Lunar Electric Rover
(LER) [20]. As a safety precaution, the
previous lunar rover used in the
Apollo missions was limited to
operate within the distance the
astronauts could walk safely in
a spacesuit: approximately 9.7
kilometers. Thus, if the vehicle were to break down during
operation, the astronauts would still be able to make a safe return. The LER on the
other hand features a pressurised and habitable cabin, allowing it to be operated without
the use of space suits. The extended oxygen supply and other human-sustaining
resources allow the rover to explore much more of the Moon's
surface safely. The envisioned operating concept would use multiple LERs kept within a
specified distance of each other. This distance is specified to allow rescue with one of
the other LERs in the case of a mobility failure. The rover is powered by
lithium-ion batteries that can be charged using an exercise bike inside the cabin.
Figure 6: Chariot chassis.
The rover's base or mobility system is known as the Chariot chassis system, which
itself can be used as an open rover, similar to that used in the Apollo missions, as
shown in Figure 6. The Chariot chassis has 12 wheels, which are each able to pivot
360 degrees. As a result, a crab-like movement can be performed. The wheels can also
be moved up and down with a range of over 51 cm in the vertical direction, allowing
better adaptation to the terrain. An active suspension is used to keep the body dynamically
level when traversing a slope, providing transport that is more comfortable. In
addition, the base of the chassis can be moved up and down, providing better
clearance of objects or easier access to the ground for astronauts. Wheels are
connected in groups of two, with three groups (six wheels) on each side of the rover.
2.4.2 ATHLETE
NASA's ATHLETE (All-Terrain Hex-Limbed Extra-Terrestrial Explorer) [21] robot,
shown in Figure 7, represents
the state of the art in hybrid
legged-wheeled space robots.
The robot is aimed at astronaut assistance operations in NASA's previously planned
return mission to the Moon. In particular, the robot is intended
to assist in the development of a lunar base, with the main function of transporting
payloads to desired locations. ATHLETE is of considerable size, with a diameter
of four meters and a reach of six meters. The robot has six legs, each with 6 degrees
of freedom, giving a more complex mechanical design and more versatile walking
capabilities than those examined previously. The increased complexity was deemed
necessary for handling situations where fixed gait walking would not be suitable,
requiring use of free gait walking.
Figure 7: ATHLETE.
The robot has a wheel at the base of each leg that can be used for rolling at high
speeds or can be locked in place to allow the robot to walk over difficult terrain. The
aim of utilising a hybrid system is to allow the rover to move potentially a hundred
times faster than the Mars Exploration Rovers (MERs), whilst having the ability to traverse almost any terrain
using intelligent planning of foot placement. The legs can also be used for an active
suspension system, adjusting to follow the terrain in order to keep the load being
carried horizontal and free from bumps. Further planned versions of the robot include
a deployable grappling hook that can be used to climb up or down steep slopes.
2.5 Zero-Carrier
The Zero-Carrier is a hybrid legged-wheeled system designed for transportation of
elderly or disabled people. With the population of elderly people growing rapidly,
there is a specific demand for robots that can perform assistive duties. The Zero-
Carrier's mobility system is similar to the legged frame walking robots described
previously in that each leg has a sliding prismatic joint, allowing leg lengths to be
varied. It differs from a frame walking robot, however, in that motorised wheels are
used to drive the robot forward, not a moving frame. Though not aimed at space
applications, the robot does have similar requirements to robots such as the LER and
ATHLETE. These include ensured passenger comfort and safety as well as
guaranteed robustness and stability when moving. The robot base can be moved up
and down by changing the length of supporting legs in unison. Like LER, this allows
obstacles smaller than the height of the base to be cleared. It is possible for the robot
to climb on to and off obstacles by controlling the length of the legs. This allows the
robot to move over stairs. The legs can also be used to provide an active suspension,
like the LER and ATHLETE. Each of the robot's legs is fitted with a wheel at its
base. The rear and second-from-front pairs of legs have active, motor-driven wheels.
These are used to move the robot forward and allow the robot to turn by driving the
wheels on opposite sides of the robot at different speeds. The front and second-from-
rear pairs of legs have passive wheels on casters. The next version of the Zero-Carrier
robot, which is the subject of this thesis work, is mechanically identical to the Zero-
Carrier I robot shown in Figure 8.
Figure 8: Zero-Carrier I.
The robot is fitted with multiple sensors. These include two downward-facing range
sensors located on the front of the robot. These sensors are mounted a short distance
out in front of the robot, as shown in Figure 8. This allows pre-detection of obstacles
which are lower than the height of the sensors as the robot moves. Each of the range
sensors is located directly ahead of the legs on one side of the robot. This allows
each sensor to determine the required movement of legs on the same side in order to
overcome detected obstacles. Zero-Carrier I used two SICK range sensors with a
range of 100 mm to 700 mm. With these sensors, the robot was able to determine
the height of approaching obstacles with high precision. It is assumed that the next
version will also use similar highly accurate front sensors. Each of the eight legs is
equipped with three sensors: a force sensor, a forward-facing proximity sensor and a
downward-facing proximity sensor. This differs slightly from Zero-Carrier I, which
had forward-facing proximity sensors only on the second-from-rear pair of legs. The
extra sensors on the new version help to improve accuracy and reliability. The force
sensor is installed on each leg to determine when the wheel of the leg is firmly in
contact with the surface below. This was previously implemented by measuring the
current drawn by the motor on the sliding joint of each leg: when the leg hits the
ground, the resulting change in the current drawn was used to estimate that contact had been made.
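The current-based contact estimation described above can be sketched as a simple threshold test. The function name, variable names and threshold value below are illustrative assumptions, not taken from the Zero-Carrier software:

```c
#include <stdbool.h>

/* Hypothetical sketch: detect leg-ground contact from a jump in the current
 * drawn by the leg's sliding-joint motor. When the wheel meets the ground
 * the motor load rises, so the drawn current climbs sharply.
 * CONTACT_DELTA_A is an assumed tuning constant, not a Zero-Carrier value. */
#define CONTACT_DELTA_A 0.5 /* amperes */

bool leg_in_contact(double prev_current_a, double curr_current_a)
{
    /* A sudden rise above the threshold suggests the leg has hit ground. */
    return (curr_current_a - prev_current_a) > CONTACT_DELTA_A;
}
```

In practice such a threshold would have to be tuned against the measured no-load current of the actual sliding-joint motors.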
The downward-facing proximity sensor is used for detecting proximity to surfaces
below the leg. The forward-facing proximity sensor is used for detecting when objects
come within range of the leg. Sharp proximity sensors with a measuring range of
100 mm to 800 mm were used in Zero-Carrier I. Each leg is also equipped with a
potentiometer that enables determination of the current position of the leg. The leg
sensors used for the new Zero-Carrier are assumed to be of the same quality.
Chapter 3: 3D Simulation Software
This chapter details the design and implementation of the simulation software for the
thesis. This covers all software that was implemented in the thesis, with the exception of
the automatic control software. The requirements of the simulation are identified,
leading to a description of the design choices made in order to create the required
software.
3.1 Robot Simulation Requirements
One of the initial requirements of the thesis was that virtual 3D simulation be used for
the control software development. This would provide the advantage of allowing the
robot to be studied in a realistic representation of its target environment. It would also
allow good visualisation of the robot operation and therefore aid identification of
problems during development. Other key requirements of the simulation software
were identified, shown below:
- The simulation should be aimed at allowing development, testing and demonstration of the upper-level automatic control software.
- Within the scope of the thesis, the simulation should focus on testing the automatic control software for a number of key test scenarios.
- The user should be able to control the simulation using a well-defined user interface and standard windows interaction, i.e. mouse and keyboard.
- An accurate model of the robot should be represented in the simulation. It should be possible for the user to modify the dimensions of the robot in the simulation using the user interface.
- It should be possible for the user to set obstacles as desired and modify the dimensions of obstacles in the test scenarios.
- The robot model should interact with the simulation environment as the robot would in the real world, colliding with obstacles rather than passing through them.
- Problem situations should be clearly notified to the user.
3.2 3D-Simulation Environment
There are many options available when it comes to 3D robotic simulation. Both
commercial and open source solutions are available, each offering different
advantages and disadvantages when compared to one another. Most 3D simulation
software provides the capability to represent the robot in a virtual world, taking care
of the physical interactions between the robot and the world. The physics engine
forms the heart of most 3D simulation tools. It performs the role of making the
simulation more realistic. The physics engine is typically responsible for two main
functions: handling motion dynamics and collision detection. The motion dynamics
are based on complex mathematics dependent on factors such as velocity,
acceleration, torques, mass and friction. Collision detection is based on recognising
the intersection of physical geometries.
Commercial simulation platforms include ADAMS, SimMechanics, Vortex and
Webots [24]. These tools provide powerful capabilities to the user but can be
expensive to purchase and, as they are generic tools, may not meet the exact
requirements for simulation of a particular robot. Consequently, unnecessary features
may be included or it might not be possible to simulate the scenarios required. Many
good open source options are available such as the Player/Stage/Gazebo simulator
project that utilises the Open Dynamics Engine (ODE) for the physics core [25]. The
advantage of open source software, apart from the fact it is freely available for use, is
that the source code is available too. It is therefore possible to modify the software to
allow the exact requirements of the simulation to be met. However, as open source
software is typically an ongoing development work, it may not be up to the same
standards of reliability as commercially available software. It can also be a substantial
task in itself to work with open source software: integrating unfamiliar code into a
project, getting it all to compile and work together. If there are problems, it can be
particularly problematic to try to understand the workings of potentially very large
and complex code. Another option is to create the simulation software from scratch
using a development language for rendering the 3D graphics. The advantage of this is
that the simulation can be catered specifically for the needs of the work. One
possibility is to use ODE as the physics engine for developing the simulation. This
however increases the workload with a substantial amount of effort required to
integrate and understand the ODE source.
As the focus of the thesis is the high-level control software, it was decided that an
accurate representation of the dynamics was not a priority for the thesis, but should be
considered in future work. It was therefore decided to create the simulation software
from scratch. This would allow the software to be kept as simple as possible to allow
the focus of the work to be on the control software. It was recognised that collision
detection would still be necessary in a basic simulation. However, it was decided
early on that obstacles would be simplified to consist only of simple flat surfaced 3D
blocks. It was therefore expected that collision detection would not be too complex to
implement. Two main options were considered for rendering the 3D visualisation of
the simulation: Java 3D and OpenGL. Although the author had prior experience
with Java programming, Java 3D had never been used previously. The author had
also been requested to study OpenGL in preparation for the thesis and it was
recognised that it would be a suitable platform, providing the necessary flexibility to
complete the work. It was therefore decided to use OpenGL to create the 3D
visualisation.
3.3 OpenGL
OpenGL (Open Graphics Library) is an open, cross-platform API (Application Programming
Interface) for the development of 2D and 3D computer graphics. The API was
developed in 1992 and quickly became an industry standard to overcome the
problems of developing computer graphics for the wide range of computer hardware
that was available at the time. Initiated by Silicon Graphics and backed by many of
the industry's main players, the API was carefully designed to ease the task of
developers and meet the needs of innovative computer graphics. Prior to its
establishment, the developer would have to program according to the resources of the
specific graphics hardware being used. By creating an industry standard API, the
responsibility shifted to that of the hardware manufacturers to produce graphics cards
compliant with the standard. This provided cross-platform compatibility, meaning the
same graphics and special effects were available in any operating system using a
compliant graphics adapter.
OpenGL is a library where each available function performs a specific drawing
action, feature or special effect. The API provides control at the lowest level requiring
the developer to build complex graphics from simple primitives such as points, lines
and polygons. The creation of 3D graphics from drawing primitives requires the
developer to think in three dimensions using an X, Y, Z coordinate system. For
example, the surface of a 3D cube is created by individually drawing each of the
vertices corresponding to the corners of a face of the cube. Repetition of similar code allows
each face to be drawn to build up the entire cube. This can be seen in the following
six lines of code, which draw the top face of the cube in 3D as shown in Figure 9.

glBegin(GL_QUADS);
glVertex3f(0, 7, 7);
glVertex3f(0, 0, 7);
glVertex3f(7, 0, 7);
glVertex3f(7, 7, 7);
glEnd();
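Repetition of this pattern for the remaining five faces can be captured in a small helper. The following pure C99 sketch (the function name and layout are hypothetical, and it deliberately makes no OpenGL calls) computes the four corner vertices of any face of an axis-aligned cube; each quadruple would then be emitted with glVertex3f between glBegin(GL_QUADS) and glEnd():

```c
/* Illustrative sketch (not from the thesis code): generate the four corner
 * vertices of one face of an axis-aligned cube spanning [0, s] on every axis. */
typedef struct { float x, y, z; } Vertex;

/* Fill out[0..3] with the corners of the face fixed on `axis` (0=x, 1=y, 2=z)
 * at coordinate value `v` (0 or s) on that axis. */
void cube_face(float s, int axis, float v, Vertex out[4])
{
    /* Corners of a quad spanning the two free axes. */
    const float u[4][2] = { {0, 0}, {s, 0}, {s, s}, {0, s} };
    for (int i = 0; i < 4; ++i) {
        float c[3];
        c[axis] = v;                 /* fixed coordinate of this face */
        c[(axis + 1) % 3] = u[i][0]; /* sweep the other two axes */
        c[(axis + 2) % 3] = u[i][1];
        out[i].x = c[0]; out[i].y = c[1]; out[i].z = c[2];
    }
}
```

Calling this six times (two values of `v` per axis) enumerates every face of the cube.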
This use of procedural calls to define whole scenes from the most basic elements
provides the developer with a great deal of versatility for developing graphics.
Figure 9: 3D cube rendered with OpenGL.
This, however, can quickly result in complex programs as the required graphics becomes
more detailed. OpenGL has been designed with powerful features to handle the most
complex and time-consuming operations of graphics processing. These features are
typically built directly into the graphics hardware to allow high-speed rendering.
These include hidden surface removal (as objects become hidden behind objects in
front), alpha blending (transparency), texture mapping and atmospheric effects
including lighting, shadowing and fog. The advantage is that the most complex
aspects of rendering 3D graphics are handled internally and hidden from the
developer. For example, once a 3D scene is created, OpenGL provides functions to
allow translation and rotation of the entire scene or components of the scene. This
means animations can easily be created. Due to such powerful features and flexibility,
OpenGL has become a popular graphics-rendering tool, with Direct3D from Microsoft
being its main competitor. Popular applications developed using OpenGL include
Google Earth, Adobe Photoshop and games such as DOOM and World of Warcraft.
Figure 10 illustrates the creation of complex 3D graphics in OpenGL from basic
polygons.
Figure 10: 3D graphics from polygons.
3.4 Software Design and Implementation
It was decided to use the C programming language to implement the simulation software.
This was the logical choice as OpenGL is implemented in C. Binding libraries are
available to allow development using OpenGL with Java and C++; however, it was
determined that use of C would simplify the development and integration process.
Despite using a non-object-oriented programming (OOP) language, the aim was to
adhere to some of the best practices of OOP design by keeping the code
structured and modular and dividing functionality using encapsulation. The software design
was therefore simplified by separating the desired functionality into distinct modules,
each responsible for implementation of a specific area of functionality. Each module
provides a clearly defined API to allow other modules to access the specific
functionality of the module. This separation of the software into functional blocks is
shown in Figure 11. The aim was to clearly identify the specific functions in each
module so that implementation could then be performed.
Figure 11: Software design.
Details of the design and implementation of each module and interfaces with other
modules are given in the following subsections.
3.4.1 ZC Model
The ZC Model module is responsible for drawing the robot in 3D using OpenGL.
Before modeling in OpenGL, the robot was first modeled using Pro-Engineer CAD
software, as shown in Figure 12. The model was created by Yongtao Song, a fellow
colleague in the project. The Pro-Engineer model allowed the robot's mechanical
concept and operation to be clearly visualised. The robot was modeled with
dimensions accurately matching those of the planned hardware, therefore providing a
good reference for creation of the simulation model in OpenGL.
Figure 12: Pro-Engineer robot model.
The next step was to convert the Pro-Engineer representation of the robot into an
accurate OpenGL model suitable for use in the simulation. The possibility of using
software to perform automatic conversion of a Pro-Eng model to OpenGL model was
initially examined. However, it was found to be more complex than expected and as
the robot model was relatively simple, it was decided to perform manual creation of
the OpenGL model. The model was built up in 3D using simple polygons. This
included rendering the robot base, back, sides and the legs using 3D blocks. Wheels
were created using disks and cylinders via API calls provided by the GLU library,
which is an extension to OpenGL. The Pro-Engineer model's dimensions, accurate to
the planned robot hardware, were used in the rendering of the robot model. The
created model of the robot with defined parts is shown in Figure 13.
Figure 13: Zero-Carrier OpenGL model.
The robot model module is also responsible for handling rendering of the moving
parts of the robot. This includes the movement of legs and base in the vertical
direction. The available leg stroke is limited by the physical restriction of the
maximum and minimum height that the base can be from the bottom of a leg, as
shown in Figure 14. This also defines how high the base can be moved.
Figure 14: OpenGL model leg stroke.
Three main functions provide the implementation: two for drawing the passive and
active robot legs and one for drawing the robot body. Parameters passed to the draw
leg functions determine the positioning of the leg relative to the robot body. This
allows animation of leg movement to be easily performed by updating the robot
model.
It was an identified requirement to allow the user to modify the dimensions of the
robot via the UI. All dimensions used in rendering the robot were therefore
implemented as software variables with parametric connection to other related
dimensions. Therefore, if the user changes one dimension via the UI all related
dimensions change correspondingly when updated. For example, changing the length
of the robot results in the locations of the legs attached to the base changing accordingly.
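As a sketch of this parametric connection (the structure, names and leg-placement fractions below are hypothetical, not from the thesis code), a change to one base dimension can be propagated to all derived dimensions in a single update function:

```c
/* Illustrative sketch: base dimensions live in one struct, and derived
 * quantities such as leg attachment points are recomputed whenever a base
 * dimension changes, so a single UI edit propagates everywhere. */
typedef struct {
    double base_length; /* user-editable, metres */
    double base_width;  /* user-editable, metres */
    double leg_x[4];    /* derived: x offsets of the four leg pairs */
} RobotDims;

void robot_dims_update(RobotDims *d)
{
    /* Place the leg pairs at fixed fractions of the base length, so changing
     * base_length moves every attachment point correspondingly. */
    const double frac[4] = { 0.0, 1.0 / 3.0, 2.0 / 3.0, 1.0 };
    for (int i = 0; i < 4; ++i)
        d->leg_x[i] = frac[i] * d->base_length;
}
```

The UI handler would simply write the new value into the struct and call `robot_dims_update` before the next redraw.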
3.4.2 World Model
The World Model module is responsible for drawing and updating the view of the 3D
world when the robot moves. As a commonly used trick in computer animation, the
entire world is moved to give the effect that the robot is moving. The translation
feature of OpenGL makes this a simple operation to implement. To simplify the
simulation, all obstacles in the world are formed using simple flat-surfaced 3D blocks
of varying dimensions. From individual blocks, more complex obstacles are created,
such as stairs, as shown in Figure 15. As a further constraint, blocks can only be
located with a flat surface perpendicular to the robot's direction of movement. This
helps to simplify collision detection, as described later in this chapter. These
constraints reduce the realism of the simulation. However, Zero-Carrier's intended
operating environment is expected to be indoors, where surfaces and obstacles are
typically flat and regular. Information about all the individual blocks in the world is stored in
an array to simplify the handling of rendering and collision detection. The array is read to
find the locations and dimensions of the blocks when the world is rendered. This also
allows relatively easy implementation of the required functionality of allowing the
user to add obstacles to the world as desired via the UI.
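A minimal sketch of such a block array, with hypothetical names and an assumed fixed capacity, might look as follows:

```c
/* Illustrative sketch (names assumed): each obstacle is an axis-aligned,
 * flat-surfaced block, and all blocks sit in one array that both the
 * renderer and the collision checker iterate over. */
#define MAX_BLOCKS 64

typedef struct {
    double x, y, z;    /* position of the block's minimum corner */
    double dx, dy, dz; /* dimensions along each axis */
} Block;

static Block world_blocks[MAX_BLOCKS];
static int world_block_count = 0;

/* Add an obstacle, as the UI would when the user places one.
 * Returns the index of the new block, or -1 if the array is full. */
int world_add_block(double x, double y, double z,
                    double dx, double dy, double dz)
{
    if (world_block_count >= MAX_BLOCKS)
        return -1;
    Block *b = &world_blocks[world_block_count];
    b->x = x; b->y = y; b->z = z;
    b->dx = dx; b->dy = dy; b->dz = dz;
    return world_block_count++;
}
```

The render loop would then walk `world_blocks[0..world_block_count-1]` and draw one 3D block per entry.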
The module is also responsible for drawing sloping ground in the world. In order to
implement this, the entire ground area is divided into separate segments. The
segmentation used to render a slope in the simulation world is shown in Figure 15.
For each segment, information describing the initial height (when moving in the
positive x direction) and the slope of the ground segment is stored. As with the obstacles in the
world, an array is used to store the information for all ground segments. This allows easy
handling of rendering the ground and determining when the robot is on a particular
segment. To simplify the simulation, slopes can only be set in the x direction, which is
the direction of movement of the robot.
Figure 15: The simulation world.
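The ground-segment storage and lookup described above can be sketched as follows; the struct layout and function names are illustrative assumptions, not taken from the thesis code:

```c
/* Illustrative sketch: each segment records where it starts along x
 * (moving in the positive x direction), its initial height, and its slope,
 * so the ground height under any x position is a linear scan away. */
typedef struct {
    double x_start; /* where the segment begins */
    double x_end;   /* where it ends */
    double height;  /* ground height at x_start */
    double slope;   /* rise per unit of x within the segment */
} GroundSegment;

double ground_height_at(const GroundSegment *segs, int n, double x)
{
    for (int i = 0; i < n; ++i) {
        if (x >= segs[i].x_start && x < segs[i].x_end)
            return segs[i].height + segs[i].slope * (x - segs[i].x_start);
    }
    return 0.0; /* assumed flat default outside all segments */
}
```

The same scan that finds the segment also tells the simulation which segment the robot is currently on.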
3.4.3 World Interaction
The World Interaction module is responsible for handling how the robot moves in the
simulation. It is responsible for collision detection and handling movement of the
robot on sloping ground. The use of arrays to store information of the obstacles and
sloping ground in the simulation world helps to simplify the task. To perform
collision detection, an array of potential collision locations in the world coordinate
system is updated as the robot moves. The potential collision locations correspond to
the extremities of the robot that can collide with obstacles. As the obstacles in the
world consist only of simplified blocks with flat edges it is possible to determine
collisions using the information of collision locations, 3D obstacles and robot
movement to calculate geometric intersections. Detecting collisions in only the x
direction is straightforward, but for a 3D world, detecting collisions in the y and z
directions adds complexity. For the simulation of Zero-Carrier, the detection of
collisions for eight independently controlled moving legs further increases the
complexity of the task.
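Because the obstacles are axis-aligned flat-surfaced blocks, the intersection test for a single collision location against a single block reduces to a containment check. The following sketch uses hypothetical names; a real implementation would repeat it for every collision location on the robot and every block in the world:

```c
#include <stdbool.h>

/* Illustrative sketch of the geometric intersection test. */
typedef struct { double x, y, z; } Point3;
typedef struct { double x, y, z, dx, dy, dz; } Box3; /* min corner + extents */

bool point_in_box(Point3 p, Box3 b)
{
    return p.x >= b.x && p.x <= b.x + b.dx &&
           p.y >= b.y && p.y <= b.y + b.dy &&
           p.z >= b.z && p.z <= b.z + b.dz;
}

/* Would moving this collision location by step_x along the robot's
 * direction of travel put it inside the block? */
bool collides_in_x(Point3 p, double step_x, Box3 b)
{
    p.x += step_x;
    return point_in_box(p, b);
}
```

The same containment check, applied to a leg's y (vertical) movement, covers collisions while legs are raised or lowered.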
Movement on sloping ground is implemented like collision detection, using
determination of geometric intersections. As previously described, sloping ground is
rendered from an array of ground segment information. It is therefore possible, as with
collision detection, to determine which ground segments the robot is in contact with.
The introduction of sloping ground to the 3D world creates the possibility of angular
intersections. This requires slightly more complex mathematics to calculate the
geometric intersections. When the robot moves on a slope, the movement must follow
according to the inclination of the slope. In the simulation, the required rotation of the
robot is calculated and rendered visually using OpenGL to rotate the robot model as
required.
The required rotation of the robot as it moves on sloping ground is
determined by two factors. The first factor is the angle formed by the difference in the
height of the surfaces that the legs are in contact with. As the robot moves up the slope, more
legs come into contact with the slope and the angle increases until the legs are fully on
the slope. This is shown in A, B and C in Figure 16. When the legs are fully on the
slope the rotation of the robot matches the slope. The second factor determining
rotation of the robot is the current position of the legs. As each leg can be individually
moved up and down, the angles formed by different leg heights must also be
considered. Such an angle formed by different leg heights is shown in D in Figure 16.
The angle formed between legs, which legs are extended further and the height of the
ground contact points, determine which legs are currently in contact with the ground.
With these factors determined it is possible to calculate the rotation of the robot. The
rotation of the robot is determined by the angle of ground contact points (angle 3)
minus the angle between legs that are in contact with the ground (angle 1.) This
produces the resulting angle of rotation (angle 2.) This is shown in figures E and F in
Figure 16. If the angle formed between the contact legs is the same as the angle
formed between the ground contact points, then the angle of rotation is zero. In the
next chapter, this fact is considered further as a method of keeping the robot base
horizontal by varying leg lengths whilst moving up slopes.
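The subtraction rule above (angle 3 minus angle 1 gives angle 2) can be sketched for one pair of contact legs as follows; the function and parameter names are illustrative assumptions, with heights measured at the ground contact points.

```python
import math

def robot_pitch(front_ground_h, rear_ground_h,
                front_leg_len, rear_leg_len, wheelbase):
    """Rotation of the robot base in radians: the angle of the ground contact
    points (angle 3) minus the angle formed between the contact legs (angle 1)
    gives the resulting rotation (angle 2)."""
    ground_angle = math.atan2(front_ground_h - rear_ground_h, wheelbase)  # angle 3
    leg_angle = math.atan2(rear_leg_len - front_leg_len, wheelbase)       # angle 1
    return ground_angle - leg_angle                                       # angle 2
```

When the rear leg is extended further than the front leg by exactly the rise of the slope, the two angles cancel and the rotation is zero, which is the basis of the horizontal-base feature discussed in the next chapter.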
Figure 16: Calculation of robot rotation on slopes.
3.4.4 ZC Sensors
The robot has multiple onboard sensors, which are used to determine the location and
dimensions of oncoming obstacles. Using the sensor data, the control software
controls the motors to move the robot legs and body position, allowing the robot to
automatically overcome obstacles. The sensor setup used for the simulation was
described in section 2.5. The Zero-Carrier model has 26 main sensors in total: two
distance sensors on the front of the robot facing downward, two proximity sensors
on each leg: one facing downwards and one facing
forwards, and one force sensor on each leg. The configuration from the side view of
the robot is shown in Figure 17.
Figure 17: Sensor configuration.
The ZC Sensors module is responsible for the implementation of the robot sensors in
the simulation. It was recognized that a fully realistic simulation would require
accurate sensor modeling. However, for the goals of the thesis it was decided not to
focus on highly accurate sensor models, opting instead for a simplified
implementation of the core function of each sensor. The control software is dependent
on data read from the sensors to determine what actions to take to overcome
obstacles ahead. The control software therefore only interfaces with the sensor
module in order to determine the environment; it is not able to gain information
directly from the world model. Each sensor implementation has an API that the
control software calls in real time to read current data back from the sensor. As far
as the control software is concerned, the internal implementation of the sensors is
irrelevant.
Matching the real hardware, the two distance sensors on the front of the robot are
expected to give accurate range information. These sensors are therefore implemented
to return the exact distance to the nearest surface detected directly below the sensor
location in the simulation. The proximity sensors are expected to be less accurate.
These are therefore implemented to return a Boolean value indicating whether an
obstacle is in the proximity range in the direction the particular sensor is
facing. The force sensors likewise return a Boolean value, indicating whether a
robot leg is in contact with a surface or not.
The sensors are implemented like collision detection, by determining geometric
intersections in three dimensions based on the current position of the robot in the
simulation world. Information on each sensor's current position, direction and range
of operation is stored and updated in an array as the robot moves. For sensors located
on the legs, the current position of the leg is required to determine the sensor's
current location. This information, together with the location of obstacles and slopes
in the 3D world, is used to determine current sensor readings.
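A minimal sketch of the simplified sensor models described above, assuming illustrative class names and callbacks into the world model for the geometric-intersection queries; none of these identifiers come from the thesis software.

```python
class DistanceSensor:
    """Front distance sensor: returns the exact range to the nearest surface
    directly below the sensor, matching the accurate real hardware."""
    def __init__(self, position, surface_height_below):
        self.position = position                          # (x, y, z) world frame
        self.surface_height_below = surface_height_below  # world-model callback

    def read(self):
        x, y, z = self.position
        return z - self.surface_height_below(x, y)

class ProximitySensor:
    """Leg proximity sensor: returns only a Boolean, reflecting the lower
    expected accuracy of the real sensor."""
    def __init__(self, position, direction, range_, intersects_obstacle):
        self.position, self.direction, self.range_ = position, direction, range_
        self.intersects_obstacle = intersects_obstacle    # intersection test

    def read(self):
        return bool(self.intersects_obstacle(self.position, self.direction,
                                             self.range_))
```

The force sensors could be modelled the same way as the proximity sensors, returning True when the leg's contact point intersects a ground segment.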
3.4.5 ZC Locomotion
The ZC Locomotion module is responsible for handling the movement of the robot.
This module takes the place of the low-level motor control software in the real
hardware. For fully realistic control software, realistic models of the motors would be
required. However, as the focus of the thesis was the high-level software control, it
was felt that simplified motor modules would be suitable. It was therefore decided to
implement the motors to provide straightforward control of movement of the robot in
the simulation. In other words, the module provides the ability to control the
animation of the robot in the simulation. The module is implemented with an API to
allow the ZC Control module to individually control each motor on the robot. The
robot's motors provide the only means for the control software to move the robot in
the real world. Likewise, the automatic control software is only able to make the robot
move via the ZC Locomotion module API. As previously described, each leg has a
motorised sliding prismatic joint that allows legs to be moved up or down. It is also
possible to move the body of the robot by increasing the stroke of supporting legs in
unison, i.e. controlling those motors at the same time. There are also motors in the active
wheels for driving the robot. In the real robot hardware, it should be
possible to turn the robot by driving the motors in the active wheels at different speeds.
However, this would complicate the collision detection significantly by allowing the
robot to approach obstacles at various angles. To simplify required collision detection
and keep the focus on the control software implementation, motion was therefore
restricted to the forward direction only. Demo tests, as described later, were however
implemented to replicate the action of approaching obstacles at an angle. The addition
of the ability to turn the robot is also considered as future work in the final chapter of
this thesis, with the aim of improving the flexibility of the simulation.
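The module's API surface implied above might be sketched as follows: individual control of each leg's prismatic joint, body movement by driving the supporting legs in unison, and forward drive via the active wheels. All names and units are illustrative assumptions.

```python
class ZCLocomotion:
    NUM_LEGS = 8

    def __init__(self):
        self.leg_stroke = [0.0] * self.NUM_LEGS  # prismatic joint extension (m)
        self.body_x = 0.0                        # forward position (m)

    def move_leg(self, leg, delta):
        """Move one leg's motorised sliding joint up or down."""
        self.leg_stroke[leg] += delta

    def move_body(self, delta, supporting_legs):
        """Raise or lower the body by driving all supporting legs in unison."""
        for leg in supporting_legs:
            self.leg_stroke[leg] += delta

    def drive_forward(self, distance):
        """Drive the active wheels; motion is restricted to the forward
        direction, as explained above."""
        self.body_x += distance
```

The ZC Control module would call only this API, mirroring the fact that in the real hardware the motors are the sole means of moving the robot.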
3.4.6 User Interface
It was an identified requirement that a simple windowed user interface be
provided to allow the user to interact with and control the simulation software.
Another requirement was to provide the user with feedback on situations of
interest while the simulation is running. Various options were considered for the
implementation of the UI software. One option considered was to implement the UI
using Microsoft Visual Studio. However, after some research it was determined to be
problematic to integrate a UI created with Microsoft visual tools with OpenGL-based
software. Instead, a simpler method was found using the GLUI (GL User Interface)
library for OpenGL, which is designed specifically for the development of user
interfaces with OpenGL software. It was determined that the library would provide
all the capabilities required to develop the UI and handle mouse and keyboard
interaction.
Figure 18: Simulation software.
The developed simulation software consists of three main display windows, as shown
in Figure 18. The main simulation window displays the 3D simulation that is taking
place. The user interface window allows the user to interact with the simulation
using standard mouse and keyboard interaction. The feedback window is used to
provide text feedback to the user in particular situations of interest, such as the
robot losing balance or not being able to move the body up any further.
The user interface is implemented to provide the ability to modify the physical
dimensions of the robot and to set the size and location of obstacles in the world as
desired. The aim of including this feature was to increase the flexibility of testing.
Another implemented feature is the ability to rapidly alter the point of view of the
simulation using the mouse. This allows the user to examine the simulation from
different angles and zoom in and out if desired.
3.4.7 ZC Control
The ZC Control module contains the implementation of the automatic control
software, which is the focus of the thesis work. The main aim of creating the
simulation was to allow development, testing and demonstration of the high-level
automatic control software. The design and implementation of the control software is
detailed in Chapters 4 and 5 of this thesis.
3.5 Test Scenarios
A set of six test scenarios were implemented for testing the advanced operation of the
control software with the simulation. The demo scenarios are initiated from the UI,
which results in redrawing the simulation world with the obstacles and slopes set
according to data stored in fixed arrays for the situation. For each demo situation, the
capability to modify the dimensions of the obstacles or inclination of the slope via the
UI is implemented. The demo scenarios and test objectives are detailed below.
Screenshots from each demo scenario are provided in section 5.6 of this document.
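The fixed-array storage of the demo scenarios might look like the following sketch, with a UI-driven scale factor applied before the world is redrawn; all values and names are illustrative assumptions, not the thesis data.

```python
# Each scenario stores its obstacle blocks as (x position, width, height);
# the values below are invented purely for illustration.
SCENARIOS = {
    1: {"name": "Stair climbing",
        "blocks": [(2.0, 0.5, 0.1), (2.5, 0.5, 0.2), (3.0, 0.5, 0.3),
                   (3.5, 0.5, 0.3), (4.0, 0.5, 0.2), (4.5, 0.5, 0.1)]},
    6: {"name": "Slope negotiation",
        "slope_deg": 15.0, "blocks": []},
}

def load_scenario(sid, height_scale=1.0):
    """Return the scenario's blocks with heights rescaled via the UI setting."""
    s = SCENARIOS[sid]
    return [(x, w, h * height_scale) for (x, w, h) in s.get("blocks", [])]
```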
Scenario 1: Stair climbing ability.
The objective of this scenario is to test the core stair climbing operation of the robot.
The scenario is implemented to place a set of three ascending steps followed by three
descending steps directly in front of the robot.
Scenario 2: Advanced stair climbing ability.
To test advanced stair climbing ability, a set of more complex stairs is placed directly
in front of the robot. The objective of this scenario is to test the ability of the robot to
be adaptive in its decision-making, not depending on stairs to be of a fixed nature.
Scenario 3: Approaching an obstacle at a narrow angle.
In section 3.4.5, the limitation of robot movement in only one direction was
explained. This test has therefore been created to attempt to replicate the case where
the robot approaches a block at a narrow angle. The angle of the obstacle is created
from smaller blocks to form a jagged facing surface. This allows simpler collision
detection to be used and is deemed sufficient, as it would require the same leg control
to overcome a block at the same angle with a smooth facing surface.
Scenario 4: Approaching an obstacle at a wide angle.
This scenario is implemented in the same way as scenario 3, except this time the angle
of the surface facing the robot is larger.
Scenario 5: Obstacles partially blocking the robot.
The objective of this scenario is to test the ability of the robot to sense obstacles
partially blocking the robot. It is also used to test the ability of the robot to move each
leg independently while maintaining stability.
Scenario 6: Slope negotiation.
The objective of this scenario is to test the ability of the robot to overcome a sloping
surface. It was determined that an advanced feature of the control software would be
to maintain a horizontal body position by moving the legs to match detected slopes.
The main purpose of this test scenario is to test this feature of the robot.
Chapter 4 Control Software
This chapter details the design of the control software for automatically overcoming
obstacles in the path of the robot as it moves forward. The software implements the
upper level gait control, determining how to move the legs to negotiate the
environment. Sensor data accessed via the sensor module provides feedback of the
internal state of the robot and external environment, determining when obstacles are
blocking movement. Based on this information, the control software is responsible for
controlling actuators via the locomotion module to implement free-gait movement.
As the robot is driven forward by the active wheels, the height of the base and the
lengths of the legs are adjusted to adapt to the environment, overcoming obstacles
whilst maintaining stability.
4.1 Robust Software
For a human transportation robot such as Zero-Carrier, passenger safety is
critical. The software that autonomously controls movement must therefore be
guaranteed to be robust, reliable and free from bugs. This also applies to software for
space applications in general where the highest levels of quality assurance are
necessary. Unfortunately, there have been many cases already where software error
has proven fatal. One example is the Ariane 5 expendable launch system which
veered off path and exploded due to an internal software error [26]. The result was the
loss of the four Cluster mission spacecraft worth US $370 million. Fortunately,
modern space missions have the ability to upload new software remotely. However,
as in the case of Spirit rover on Mars, new software may not be enough to recover in
an unforgiving environment.
With the results-driven nature of modern business, it is often the tendency of the
software developer to implement code or fix bugs using the fastest and easiest
solution. For a large software project with many developers, the code can easily
become large and complex. When alterations are required due to new requirements or
bug fixes, the code can quickly become vastly different from the original design. In
this case redesign may be the best option. It is important that the current design is
fully examined and consideration given as to whether there is a more logical or
clearer way to make the implementation. Overall this can save time and improve
results by keeping the software maintainable. Other important factors are good
comments and clear documentation. For projects of long duration, such as space
projects, it is inevitable that new people will become involved. For complex software
with poor documentation and comments, there is a greater risk of bugs being
introduced. Extensive testing is also vital for ensuring all potentially fatal bugs are
discovered and removed.
The Zero-Carrier robot is a relatively complex machine, with over thirty onboard
sensors, eight motor controlled sliding leg joints and four motor driven wheels. The
software is required to control each leg quickly and independently to safely overcome
oncoming obstacles. However, the movement of the legs must also be coordinated so
that the robot maintains a stable gait while moving. Along with vertical control of the
legs, the robot body and the motors in the driving wheels must also be controlled.
With so many aspects to control, it is evident that the software can become very
complex. The aim of the design is therefore to keep the code as logical and simple as
possible in order to ensure robust operation.
4.2 Basic Movement
During normal operation, the robot is controlled manually by the passenger. Once an
oncoming obstacle is detected by the front sensors, the automatic control software
takes over. The control software is responsible for controlling all robot
movements to overcome the oncoming obstacle before returning control to the
passenger. The software is therefore entirely responsible for the safety and comfort of
the passenger during automatic motion. As previously described, it is the aim of the
work to test the advanced features of the software using a set of obstacle scenarios. It
was however decided to design and implement the control software progressively,
starting with functionality to overcome the simplest obstacle situation and gradually
adding more functionality to handle more complex situations. The simplest situation
was determined to be the case where a single symmetrical block is fully obstructing
the path of the robot, as shown in Figure 19.
Figure 19: Single block obstacle.
The control software was designed by splitting the motion required to overcome the
block into discrete steps. Two main motion cases were identified. The first case is
where the legs are moved up, to ascend onto the block. The second case is where the
legs are moved down, to descend from the block. In the latter case, if the robot
moves forward without the legs first being moved down onto firm ground, it can
easily lose balance. Once one of the front sensors detects a required action, all legs
on the same side of the robot eventually have to perform the same movement to
allow the robot to move forward. In this way, the robot moves in a similar fashion to
a centipede.
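The centipede-like propagation just described can be sketched by computing, for each leg on one side (front to rear), the forward travel at which it must repeat the action first detected at the front; the representation below is an illustrative assumption, not the thesis implementation.

```python
def leg_action_schedule(obstacle_x, leg_offsets, action):
    """For each leg on one side, given its forward offset from the robot
    reference point (front to rear), return the forward travel at which it
    must perform the action first triggered at the front of the robot."""
    return [(leg, obstacle_x - off, action)
            for leg, off in enumerate(leg_offsets)]
```

The front-most leg acts first and the rear-most last, so the same movement ripples down one side of the robot as it advances, which is the centipede-like behaviour noted above.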
Case 1: Ascending Operation
1) The robot is moving forward. One of the front downward-facing distance sensors
detects an obstacle in front of the robot. Automatic control then takes over movement.
2) The robot is stopped. The base of the robot is moved up to a suitable height, based
on the detected height of the oncoming obstacle. The height is calculated so that
the robot legs can