ERSP Getting Started Guide


    © 2003 Evolution Robotics, Inc. All rights reserved. Evolution Robotics and the Evolution Robotics logo are trademarks of Evolution Robotics, Inc. All other trademarks are the property of their respective owners.

    Evolution Robotics Software Platform™ is a trademark of Evolution Robotics, Inc.

    Microsoft Windows is a trademark of Microsoft Corporation Inc.

    IBM ViaVoice™ is a trademark of International Business Machines Corporation.

    WinVoice™ is a trademark of Microsoft Corporation Inc.

    Java™ Runtime Environment version 1.4 is a trademark of Sun Microsystems, Inc.

    This product includes software developed by the Apache Software Foundation (http://www.apache.org/).

    Part number MC6100

    Last revised 6/18/03.

Table of Contents

Chapter 1  Introduction
    Manual Overview .................................................. 1-1
    Introducing ERSP ................................................. 1-2
    Why Use ERSP? .................................................... 1-2
    Who Should Use ERSP? ............................................. 1-3
    ERSP Structure and Organization .................................. 1-4
    Evolution Robotics Software Architecture (ERSA) .................. 1-5
    ER Vision ........................................................ 1-8
        Object Recognition ........................................... 1-8
        Motion Flow .................................................. 1-8
        Color Segmentation ........................................... 1-8
    ER Navigation .................................................... 1-9
        Target Following ............................................. 1-9
        Obstacle Avoidance ........................................... 1-9
        Hazard Avoidance ............................................. 1-9
        Teleoperation ................................................ 1-9
    ER Human-Robot Interaction ....................................... 1-10
        Speech Recognition and Text to Speech ........................ 1-10
        Robot Emotions and Personality ............................... 1-10
        Person Detection and Head Gestures ........................... 1-10
    Core Libraries ................................................... 1-11
    What's Next ...................................................... 1-12

Chapter 2  Installing ERSP
    Recommended Skills ............................................... 2-1
    Requirements ..................................................... 2-1
    Customer Support ................................................. 2-2
    Hardware Compatibility in Linux .................................. 2-2
    Before You Install ERSP .......................................... 2-2
    Typographic Conventions .......................................... 2-3
    Installing ERSP for Linux ........................................ 2-3
    Installing ERSP for Windows ...................................... 2-4
    Sample Code Installation ......................................... 2-4
    Installation File Structure ...................................... 2-6
    Diagnostics ...................................................... 2-6
        The Drive Test ............................................... 2-6
        The Camera Test .............................................. 2-7
        Camera Troubleshooting ....................................... 2-8
        The IR Sensor Test ........................................... 2-8

Chapter 3  ERSP Basics
    API Documentation ................................................ 3-1
    Conventions ...................................................... 3-1
        About X, Y Coordinates ....................................... 3-1
        Camera Coordinates ........................................... 3-3
        Units ........................................................ 3-3
    Setting Up Your Resource Configuration File ...................... 3-4
    Schema Files ..................................................... 3-7
    Behave Command ................................................... 3-8
    Configuring Your IR Sensors ...................................... 3-8
    Configuring Speech Recognition and Text-to-Speech ................ 3-9
        In Windows ................................................... 3-9
        In Linux ..................................................... 3-9
        ViaVoice Setup ............................................... 3-9
        ViaVoice ASR Environment Variables Setup ..................... 3-10
        About Text to Speech ......................................... 3-10
        Grammars ..................................................... 3-10

Chapter 4  Tutorials
    Getting Started with Visual C++ Projects ......................... 4-1
        Compiling and Building Existing Sample Code Projects ......... 4-1
        Compiling and Building New Applications ...................... 4-1
    Getting Started with Linux Projects .............................. 4-2
    Before You Start ................................................. 4-2
    Task Tutorials ................................................... 4-3
        01-simple .................................................... 4-3
        02-parallel .................................................. 4-6
        03-custom-task ............................................... 4-8
        04-event ..................................................... 4-13
    Python Tutorials ................................................. 4-17
        01-simple .................................................... 4-17
        02-parallel .................................................. 4-19
    Behavior Tutorials ............................................... 4-21
        01-network ................................................... 4-22
        02-custom-behavior ........................................... 4-25
        03-teleop .................................................... 4-31
        04-primitive ................................................. 4-36
    Resource Tutorials ............................................... 4-41
        01-config-camera ............................................. 4-41
        02-config-ir ................................................. 4-42
        03-camera .................................................... 4-46
        04-drive-system .............................................. 4-50
        05-custom-driver ............................................. 4-56

Chapter 5  Sample Code
    Directory Layout ................................................. 5-1
        Hardware Layer ............................................... 5-1
        Behavior Layer ............................................... 5-2
        Task Layer ................................................... 5-2
        Vision SDK ................................................... 5-2

Chapter 1  Introduction

    Manual Overview

    The following is an overview of the chapters in this Getting Started Guide. For a more detailed description of the ERSP software, see the ERSP User's Guide and the Doxygen documents described in the API Documentation section of the ERSP Basics chapter.

    Introduction - This chapter introduces the ERSP software and some basic concepts that are needed to use it.


    Installing ERSP - Walks you through installing the software and testing the installation.

    ERSP Basics - Covers the basic concepts and skills needed to use ERSP effectively.

    Tutorials - Step-by-step instructions lead you through each of the software layers and show how to use those layers to create robotic applications.

    Sample Code - Gives an overview of the sample code available with ERSP.


    Introducing ERSP

    This introductory chapter is intended to provide the reader with an overview of ERSP's functionality and how it can be used to prototype and develop software for a wide range of robotic systems. This introduction also walks you through related resources that will enhance your ability to use ERSP and its Application Programmer's Interfaces (APIs) to maximum advantage.

    ERSP is a software development kit for programming robots. At the lowest level, ERSP consists of several hundred thousand lines of C++ code, which gives application developers a big head start with their robotics projects. The code is organized as a number of core libraries that define the basis of application programs.

    The ERSP libraries consist of a large set of functions that are useful for a wide variety of robotic applications. The infrastructure can be partitioned into four major components:

    Software Control Architecture

    Computer Vision

    Robot Navigation

    Human-Robot Interaction (HRI)

    Associated with each major component are tools that provide configuration management, programming languages, or graphical user interfaces.

    Why Use ERSP?

    ERSP enables developers to build powerful, rich robotics applications quickly and easily. ERSP supports this objective in several ways.

    First, it provides tools for efficient software/hardware integration. Interfacing the software with sensors, actuators, and user interface components (LCDs, buttons, etc.) can be a tedious, time-consuming, and costly task. ERSP provides a powerful paradigm for software/hardware integration, making these tasks easier. By taking advantage of the object-oriented mechanisms of C++, ERSP provides powerful tools for easily extending a user's robotic system to support new hardware components without the need to rebuild code from scratch. See the HAL chapter of the ERSP User Guide for more information.

    Second, ERSP provides a system architecture which contains a rich set of mechanisms and algorithms for controlling the activities of a robot. This architecture consists of several layers that deal with control issues ranging from simple ones, such as turning a single motor, to complex ones, such as making a robot follow a person while avoiding obstacles.

    The system architecture is modular, with well-defined interfaces between its layers and interacting software modules. A developer can choose to use one or more layers of the architecture in a target system, allowing scalability of computational requirements. This makes the target application more computationally efficient. For maximum flexibility, ERSP provides easily accessible Application Programmer's Interfaces (APIs) so that developers can easily extend and modify them to fit the requirements of their target systems. The open APIs also make it very easy to integrate third-party software into ERSP. For instance, a company could use these APIs to integrate a proprietary face recognition technology into ERSP.


    Third, ERSP puts a number of unique and very powerful technologies into the developer's hands. A partial list includes:

    Vision

    Object Recognition

    Voice Recognition

    Text-to-speech

    Emotion

    Navigation

    And more

    In the area of computer vision, ERSP provides a very powerful object recognition system that can be trained to recognize an almost unlimited number of objects in its environment. Recognition can be used for many applications such as reading books to children, locating a charging station and docking into it, or localization and mapping.

    ERSP's voice recognition and text-to-speech modules can be used for enhanced voice interactivity between the user and the robot. A model of emotion is used to emulate and express robot emotions, which enhances the user interface for applications such as entertainment robots.

    In the area of navigation, ERSP provides modules for controlling the movement of the robot relative to its environment. For instance, a target following module can be used to track and follow a given target while at the same time obstacle avoidance can be used to assure safe movement around obstacles. These modules define a set of high-level components upon which an application can be developed.

    Who Should Use ERSP?

    ERSP is for companies, organizations, developers, and researchers who are working on robotic products or projects. Most robotic projects require a large subset of the modules and technologies that ERSP provides. Often, companies with robotics initiatives need to develop an entire system from the ground up, from drivers to common components to the final complex robot application. Evolution Robotics, with ERSP, provides companies with these common, yet critical, software components necessary to develop systems for any robotics application. These applications could be anything that allows a robot to perform cleaning, delivery, factory automation, or entertainment tasks.

    ERSP frees companies from the mundane and resource-consuming task of developing common subsystems such as vision and navigation. With ERSP, companies can focus entirely on the value-added functionality of their particular robot applications. One of the additional benefits of this approach is that robotics applications developed using ERSP can be made portable to a wide range of hardware, enabling companies to extend valuable engineering resources. Using ERSP, customers can build robot applications faster, cheaper, and at lower risk.

    The value that ERSP has for an organization depends on the company's existing software infrastructure. Companies with a new initiative in robotics often find ERSP valuable because it gives them a head start, whereas starting from scratch would require months or years of development time and cost. Companies that have had robotics initiatives for many years will have some legacy infrastructure. These companies typically find specific modules within ERSP, such as the visual object recognition, voice recognition, and obstacle avoidance, useful for integration with their own products. Some mature companies with several robotics initiatives may find that their existing software infrastructure is not being leveraged across projects; they end up building the same functions many times over, or finding that these functions from different projects do not talk to each other. These companies find ERSP valuable because it provides a cross-platform standard that encourages cross-project fertilization.

    ERSP Structure and Organization

    The collection of ERSP libraries provides APIs that can be divided into several important functional categories (see the figure below):

    ER Software Architecture: The software architecture provides a set of APIs for integration of all the software components with each other and with the robot hardware. The infrastructure consists of APIs for dealing with the hardware, for building task-achieving modules that can make decisions and control the robot, for orchestrating the coordination and execution of these modules, and for controlling access to system resources.

    ER Vision: The Vision APIs provide access to very powerful computer vision algorithms for analyzing camera images and extracting information that can be used for various tasks such as recognizing an object, detecting motion, or detecting skin (for detection of people).

    ER Navigation: The Navigation APIs provide mechanisms for controlling movement of the robot. These APIs provide access to modules for teleoperation control, obstacle avoidance, and target following.

    ER Human-Robot Interaction (HRI): The Human-Robot APIs support building user interfaces for applications with graphical user interfaces, voice recognition, and speech synthesis. Additionally, the HRI components include modules for robot emulation of emotions and personality to enhance the users experience and improve human-robot interaction. Also, these APIs support modules for recognition of gestures that can be used to interact with the robot.

    The software platform also provides developer tools, which consist of well-defined application programmer's interfaces in Python, C++, an XML scripting language, and visual programming tools. These tools provide a flexible environment for developing software for application programs without the need for in-depth knowledge of the intimate details of ERSP.

    [Figure: ERSP functional categories. ER Vision, ER Human-Robot Interaction, and ER Navigation sit on top of ERSA, which comprises the TEL, BEL, and HAL layers.]

    Evolution Robotics Software Architecture (ERSA)

    ERSA consists of three main layers, each of which provides infrastructure for dealing with a different aspect of application development.

    The Hardware Abstraction Layer (HAL) provides abstraction of the hardware devices and low-level operating system (OS) dependencies. This assures portability of the architecture and application programs to other robots and computing environments. At the lowest level, the HAL interfaces with device drivers, which communicate with the hardware devices through a communication bus. The description of the resources, devices, busses, their specifications and the corresponding drivers are managed through a number of configuration files.

    Configuration files employ a user-specified XML framework and syntax. The advantage of managing the resource specifications through configuration files is that it provides a high degree of flexibility. If you have two robots with significantly different devices, sensors, and motors, you only need to create a single resource configuration file for each. That file describes the mapping between the software modules and the hardware for each robot. HAL reads the specifications from the configuration file and reconfigures the software to work transparently, without modifications, with the application software. The XML configuration files typically contain information about the geometry of the robot, the sensors, sensor placements, interfaces to hardware devices, and parameters for hardware devices.

    The second layer, the Behavior Execution Layer (BEL), provides infrastructure for development of modular robot competencies, known as behaviors, for achieving tasks with a tight feedback loop such as finding a target, following a person, avoiding an object, etc. The behaviors become the basic building blocks on which software applications are built. The BEL also provides powerful techniques for coordination of the activities of behaviors for conflict resolution and resource scheduling. Each group of behaviors is typically organized in a behavior network which executes at a fixed rate. Behaviors are executed synchronously with an execution rate that can be set by the developer. The BEL also allows running several behavior networks simultaneously with each executing at a different execution rate. The communication ports and protocols between behaviors can be defined and implemented by the user. The BEL defines a common and uniform interface for all behaviors and the protocols for interaction among the behaviors. In each cycle, a Behavior Manager executes all sensor behaviors to acquire fresh sensory data then executes the network of behaviors to control the robot. The coordination of behaviors is transparent to the user.

    An XML interface enables behaviors to interact with scripts written in XML. The XML interface provides a convenient and powerful approach to building application programs using XML scripts. XML files (known as schemas) can be used to define the characteristics of a behavior module, such as parameters, input/output interface, etc. Schemas for behaviors are similar to classes in C++, whereas specific behaviors correspond to objects which are instances of classes. A behavior network can be specified in an XML file that instantiates behaviors using the schema files, specifies values for optional parameters, and specifies the interconnections between behavior ports. A behavior network written in XML can then be executed using the behave command (see the Behave Command section of the ERSP Basics chapter of this Guide for details). The advantage of using XML for developing behavior networks is that it is very flexible and does not require recompilation of the code each time the tiniest change has been made to the network.

    Setting up the connections between behaviors using the C++ APIs could be a tedious task. Therefore, to further improve the process of developing behavior networks, ERSP provides the Behavior Composer, a graphical user interface. Typically, behavior networks are more conveniently developed using the Behavior Composer because it can be used to build application programs visually. With the Behavior Composer, you can use a mouse and keyboard to drag-and-drop behaviors and connect them together to form an application. This visual program is converted to an XML script that then can be executed by the ERSA.

    This figure is a graphical representation of how the different layers of the software interact with each other and the input XML files.

    [Figure: layer diagram. Python scripts drive the TEL (Tasks and Primitive Tasks); behavior network XML files, produced by hand or with the Behavior Composer, feed the BEL (Behavior Networks and Behaviors); hardware configuration XML files feed the HAL (Resources and Drivers).]

    The Task Execution Layer (TEL) provides infrastructure for developing goal-oriented tasks along with mechanisms for coordination of complex execution of tasks. Tasks can run in sequence or in parallel. Execution of tasks is triggered by user-defined events. (Events are conditions or predicates defined on values of variables within the Behavior Execution Layer or the Task Execution Layer.) Complex events can be defined by logical expressions of basic events.

    While behaviors are highly reactive, and are appropriate for creating robust control loops, tasks are a way to express higher-level execution knowledge and coordinate the actions of behaviors. Tasks run asynchronously as events are triggered. Time-critical modules such as obstacle avoidance are typically implemented in the BEL while tasks implement behaviors that are not required to run at a fixed execution rate. Tasks are developed hierarchically, starting with the primitive tasks, which are wrappers of behavior networks. At invocation, a primitive task loads and starts the execution of a behavior network. Tasks can monitor the execution of behavior networks and values of the data flow between behaviors to define certain events. Tasks can manipulate the behavior networks to cause desired outcomes. For example, tasks can inject values into the behavior network to cause a desired outcome. To change context of execution based on the goals of the robot, the TEL can cause termination of one behavior network and loading and execution of another. Asynchronous events provide a flexible mechanism for inter-task communication as well as communication between BEL and TEL.

    Tight feedback loops for controlling the actions of the robot according to perceptual stimuli (presence of obstacles, detection of a person, etc.) are typically implemented in the Behavior Execution Layer. Behaviors tend to be synchronous and highly data driven. The Task Execution Layer is more appropriate to deal with complex control flow which depends on context and certain conditions that can arise asynchronously. Tasks tend to be asynchronous and highly event driven.

    The TEL provides an interface to Python, an interpreted scripting language. Prototyping in Python is convenient because it is a programming language at a higher abstraction layer than C++, and it is interpreted. The design of TEL makes it easy to interface it to other programming or scripting languages.
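    To make this division of labor concrete, here is a small, self-contained Python sketch of the pattern described above. It is not ERSP code and uses only the standard library: a "behavior network" loop runs synchronously at a fixed rate, while a "task" sleeps until an event (a predicate over the loop's data) fires. The real TEL and BEL APIs are described in the Doxygen documents and the Tutorials chapter.

        # Illustration only: mimics the BEL/TEL split with the Python standard library.
        # None of these names come from the ERSP API.
        import threading
        import time

        range_reading = {"value": 100.0}          # shared "port" value, in cm
        obstacle_event = threading.Event()        # fires when the predicate below holds

        def behavior_network(rate_hz=10.0, steps=50):
            """Synchronous, fixed-rate loop: read 'sensors', write outputs, check events."""
            for step in range(steps):
                range_reading["value"] -= 2.0      # pretend the robot closes in on a wall
                if range_reading["value"] < 30.0:  # event predicate on a behavior variable
                    obstacle_event.set()
                time.sleep(1.0 / rate_hz)

        def avoid_task():
            """Asynchronous, event-driven 'task': waits for the event, then reacts."""
            obstacle_event.wait()
            print("task: obstacle event fired at %.1f cm, changing context" %
                  range_reading["value"])

        threading.Thread(target=behavior_network, daemon=True).start()
        avoid_task()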

    ERSA has been engineered to be highly flexible and reconfigurable to meet the requirements of numerous application programs. Any subset of the ERSA layers can be combined to embody a range of architectures with radically different characteristics. The possible embodiments of the architecture could consist of using any of the layers in isolation, any two of the layers in combination, or all three layers. For example, applications with limited requirements for high-level functionality may require only HAL, or HAL and BEL. The advantage of restricting the use to HAL would be in saving computational resources (memory, CPU power, etc.). If hardware abstraction is not of concern to a project or product, then BEL can be used in isolation. Or, if only high-level, event-driven control flow is required, then TEL may be used.


    ER Vision

    ERSP provides powerful vision algorithms for object recognition, motion flow estimation, and color segmentation.

    Object Recognition

    The object recognition system is a vision-based module that can be trained to recognize objects using a single, low-cost camera. The main strengths of the object recognition module lie in its robustness in providing reliable recognition in realistic environments where, for example, lighting can change dramatically. Object recognition provides a fundamental building block for many useful tasks and applications for consumer robotic products, including object identification, visual servoing and navigation, docking, and hand-eye coordination. Other useful and interesting applications include entertainment and education.

    The object recognition module is implemented in the objrec library (in the Core Libraries). The Behavior and Task libraries implement several useful behaviors and tasks that use the object recognition for tracking and following an object.

    To train the software, you need to capture one or more images of the object of interest, name them using a text string, and load them into a database known as the model set (using file extension .mdl). The software then analyzes the object's image and finds up to 1,000 unique and local features to build an internal model of the object. ERSP provides graphical and command line tools that help in creating and manipulating object model sets. (See the Vision chapter of the ERSP User Guide.) To use the object recognition, the user employs the APIs to load a model set and executes the object recognition algorithm (using the library APIs, the behaviors, or tasks). Once the object is seen in the robot camera's field of view, it will be recognized. The recognition returns the name of the object, the pixel coordinates of where in the video image it was recognized, and a distance to the object. The object recognition can be trained on hundreds of objects and can recognize more than one simultaneously.
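    As a rough sketch of the workflow just described, the following Python fragment shows the shape of a recognition loop. Every identifier in it (the ersp.objrec module, ModelSet, recognize, the camera call, and the result fields) is a hypothetical placeholder standing in for whichever objrec library calls, behaviors, or tasks you actually use; see the Vision chapter of the ERSP User Guide and the Doxygen documents for the real names.

        # Hypothetical sketch only -- these module, class, and field names are NOT the
        # real ERSP objrec API; they simply mirror the workflow described above.
        import ersp.objrec as objrec             # placeholder module name

        model_set = objrec.ModelSet("toys.mdl")  # load a trained model set (.mdl database)

        while True:
            image = camera.grab_frame()          # placeholder camera call
            for match in objrec.recognize(image, model_set):
                # A recognition result carries the trained label, where the object
                # appears in the image, and an estimated distance to it.
                print(match.name, match.pixel_x, match.pixel_y, match.distance_cm)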

    Motion Flow

    While object recognition provides a key technology for building fundamental robot capabilities, it does not process movement in objects such as people and other robots. Motion Flow analyzes an image sequence rather than a single image at a time, making it possible to discern motion in the field of view. This fundamental capability can be used for a number of tasks, ranging from detection of motion at a gross scale (moving people) to analysis of motion at a very fine scale (moving pixels).

    The optical flow algorithm provides a robust analysis of motion in the field of view. This algorithm correlates blocks of pixels between two consecutive frames of a video to determine how much they have moved from one frame to the next.
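    The block-correlation idea is simple enough to sketch outside of ERSP. The following self-contained NumPy fragment (not ERSP code) estimates how far one block of pixels moved between two consecutive grayscale frames by searching a small window for the best match; the real Motion Flow module does this densely and far more efficiently.

        # Toy block-matching flow estimate with NumPy (illustration, not the ERSP module).
        import numpy as np

        def block_motion(prev, curr, y, x, block=8, search=4):
            """Return (dy, dx) that best aligns the block at (y, x) in prev with curr."""
            ref = prev[y:y + block, x:x + block].astype(float)
            best, best_dyx = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cand = curr[y + dy:y + dy + block, x + dx:x + dx + block].astype(float)
                    err = np.sum((ref - cand) ** 2)   # sum of squared differences
                    if err < best:
                        best, best_dyx = err, (dy, dx)
            return best_dyx

        # Two synthetic 64x64 frames: a bright square shifts 3 pixels right, 1 pixel down.
        prev = np.zeros((64, 64)); prev[20:28, 20:28] = 255
        curr = np.zeros((64, 64)); curr[21:29, 23:31] = 255
        print(block_motion(prev, curr, 20, 20))       # -> (1, 3)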

    Color Segmentation

    Color segmentation can be useful for finding objects of a specific color. For instance, looking for an object using color can be used for a number of human-robot interaction components. This algorithm can also be used to detect people by searching for skin color under various lighting conditions.

    The color segmentation algorithm provides for reliable color segmentation based on a probabilistic model of the desired color. Using a mixture of Gaussian distributions, it can be trained to classify pixels into the desired color or background and allow for significant variation in pixel color caused by lighting changes or diversity of the object population. The color segmentation module builds models for a desired color based on a training set that contains a population of objects with the desired color. Once the model is learned by the module, it is able to classify objects based on the model.
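    The probabilistic idea can be illustrated with a single Gaussian (the ERSP module uses a mixture, which copes better with multi-modal colors and lighting variation). This NumPy sketch, which is not ERSP code, fits a mean and covariance to training pixels of the desired color and then classifies new pixels by their Mahalanobis distance to that model.

        # Single-Gaussian color classifier (illustration; ERSP uses a Gaussian mixture).
        import numpy as np

        def fit_color_model(training_pixels):
            """training_pixels: (N, 3) array of RGB samples of the desired color."""
            mean = training_pixels.mean(axis=0)
            cov = np.cov(training_pixels, rowvar=False)
            return mean, np.linalg.inv(cov)

        def classify(pixels, mean, inv_cov, threshold=9.0):
            """Boolean mask: True where a pixel is close enough to the color model."""
            diff = pixels - mean
            mahalanobis_sq = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)
            return mahalanobis_sq < threshold

        # Synthetic training set: reddish pixels with some lighting variation.
        rng = np.random.default_rng(0)
        train = rng.normal([200.0, 45.0, 45.0], 8.0, size=(500, 3))
        mean, inv_cov = fit_color_model(train)
        # A reddish pixel should pass and a green one should not: prints [ True False]
        print(classify(np.array([[205.0, 40.0, 50.0], [40.0, 200.0, 60.0]]), mean, inv_cov))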

    ER Navigation

    ERSP provides modules for safe navigation in realistic environments. The navigation modules consist of behaviors for following targets and for obstacle and hazard avoidance. In addition, ERSP provides facilities for teleoperation of robots remotely.

    Target Following

    Target following modules are available in the BEL as well as the TEL. These modules track and follow the position of a target. The input to these modules comes from a target detection module which can be based on visual detection or detection using odometry information.

    Obstacle Avoidance

    Using the obstacle avoidance algorithm, the robot generates corrective movements to avoid obstacles. The robot continuously detects obstacles using its sensors and rapidly controls its speed and heading to avoid obstacles.

    Our obstacle avoidance algorithm uses a description of the robot's mechanics and sensor characteristics in order to generate optimally safe control commands. The description of the robot's mechanics and sensors is done in a generic configuration description language defined in XML so that the obstacle avoidance algorithm can easily be integrated onto different types of robots. Porting obstacle avoidance (and other modules, for that matter) to a new robot with different hardware just requires describing the new hardware in the configuration description language.

    Hazard Avoidance

    The hazard avoidance mechanisms provide a reflexive response to a hazardous situation in order to ensure the robot's safety and guarantee that it does not cause any damage to itself or the environment. Mechanisms for hazard avoidance include collision detection (using not one but a set of sensors and techniques). Collision detection provides a last resort for negotiating around obstacles in case obstacle avoidance fails to do so, which can happen because of moving objects or software or hardware failures.

    Stairs and other drop-off areas are handled by a cliff avoidance module. Cliff avoidance uses a set of redundant sensors to detect the hazard and ensures the robot's safety in the case of faulty sensors. The robot immediately stops and moves away from a drop-off.

    Teleoperation

    ERSA provides infrastructure for cross-network operation of the robot. Applications of this capability include multi-robot systems, off-board processing, and teleoperation. For more information on the networking infrastructure, see the 03-teleop and 04-primitive tutorials, and the Doxygen documents pertaining to, for example, MalleableBehavior.

    ER Human-Robot Interaction

    Evolution Robotics provides a variety of component technologies for developing rich interfaces for engaging interactions between humans and robots. These components support a number of interfaces for command and control of a robot and allow the robot to provide feedback about its internal status. Furthermore, these components enable the robot to interact with a user in interesting and even entertaining ways. The core technologies provided for developing human-robot interfaces (HRIs) consist of:

    Speech recognition and text-to-speech (TTS) for verbal interaction

    Robot emotions and personality to create interesting and entertaining life-like robot characters

    Person detection and recognition of simple gestures

    Speech Recognition and Text to Speech

    Two speech engines are available for use in user applications: one for input that converts a speech waveform into text (Automatic Speech Recognition or ASR) and one for output that converts text into audio (Text-to-Speech or TTS).

    Both engines are third-party applications that are included in the ERSP. The speech engines are resources available in HAL similar to resources for interacting with sensors and actuators such as IRs and motors. The speech modules can be integrated into behaviors, tasks, or both.

    Robot Emotions and Personality

    The robot emotion behaviors are used to describe the robot's internal and emotional states. For example, the emotional state defines whether the robot is sad or happy, angry or surprised. The emotion behaviors can also describe personality traits. For example, an optimistic robot would tend toward a happy state, whereas a pessimistic robot would tend toward a sad state.

    A graphical robot face is also available in ERSP. This face is capable of expressing emotion and having the appearance of forming words. This functionality allows the user to create a wide variety of emotions and responses triggered by user-specified stimuli. This greatly enhances the human-robot experience. See the Behaviors Library chapter of the ERSP User Guide and the Doxygen documents for details.

    Person Detection and Head Gestures

    Person detection and tracking can enable very diverse human-robot interaction. For instance, being able to detect, approach, and follow a person can be very useful primitives for HRI. Evolution Robotics, Inc. has a reliable person-tracking technology using vision, combining some of our technologies for object recognition, optical flow, and skin segmentation.

    Gesture recognition provides another powerful technology for enhanced human-robot interfaces. Using gestures for interacting with a robot provides a natural and powerful interface for commanding a robot to perform tasks such as pick-and-place.

    Using our vision component technologies for motion analysis and skin segmentation (using color segmentation), ERSP can detect gestures including head nodding and head shaking. This is done by tracking the motion of the head and hands of a user, which are segmented using skin segmentation. These modules can be used to extend the system to recognize other gestures such as waving and pointing.

    Core Libraries

    The Core Libraries implement the basic functionality of ERSP upon which all other infrastructure is built. The core libraries can also be said to implement standards for later software modules. An application can build directly on any subset of the core libraries.

    The Driver Libraries implement interfaces for specific hardware components such as controller boards, drive systems, positioning systems, graphics engines, sensors, audio devices, etc. These drivers build on the infrastructure implemented in the core libraries. Specific drivers such as the Robot Control Module driver are implemented as a C++ class that is derived from a driver class in the resource library. This modular scheme assures, for example, that all derived driver classes for motor controllers provide a uniform interface defined by the driver class in the resource library. Thus, one controller can easily be replaced with another without propagating the change throughout the modules/classes that use the driver for the controller.
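    The pattern described here is ordinary object-oriented substitution. The sketch below shows the idea in Python rather than C++, and with invented class names rather than the real resource-library classes: client code is written against the base driver interface, so one motor controller can be swapped for another without touching the callers.

        # Illustration of the uniform-driver idea; class names are invented, not ERSP's.
        from abc import ABC, abstractmethod

        class MotorDriver(ABC):
            """Base interface every motor-controller driver must provide."""
            @abstractmethod
            def set_velocity(self, linear_cm_s: float, angular_rad_s: float) -> None: ...

        class ControllerBoardA(MotorDriver):
            def set_velocity(self, linear_cm_s, angular_rad_s):
                print(f"board A command: v={linear_cm_s} cm/s, w={angular_rad_s} rad/s")

        class ControllerBoardB(MotorDriver):
            def set_velocity(self, linear_cm_s, angular_rad_s):
                print(f"board B packet: <{linear_cm_s},{angular_rad_s}>")

        def drive_forward(driver: MotorDriver):
            # Written once against the interface; works with either controller.
            driver.set_velocity(20.0, 0.0)

        drive_forward(ControllerBoardA())
        drive_forward(ControllerBoardB())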

    The core libraries named Resource, Behavior, and Task implement the three layers of the software control architecture of the ERSA.

    While the core libraries implement the core functions of ERSA, the Behavior Libraries and Task Libraries provide higher-level functionality that builds on the core. For example, the navigation library in the Behavior Libraries provides modules for obstacle avoidance. A user can easily use this behavior without being concerned about how it is implemented using the core libraries. Finally, the core libraries implement basic and powerful functionality for object recognition and other vision algorithms. These modules become basic building blocks for building higher-level modules in the BEL and TEL.

    ERSP consists of the following set of libraries which implement its core functionality. The libraries can be found in the Install_dir\lib directory.

    Core Libraries

    Driver Libraries (Hardware Abstraction Layer)

    Behavior Libraries (Behavior Execution Layer)

    Task Libraries (Task Execution Layer)

    For details on these libraries, see the Core Libraries, Hardware Abstraction Layer, Behavior Execution Layer, and Task Execution Layer chapters of the ERSP User Guide.


    What's Next

    Now that you have an overview of ERSP, it's time to get started. The next chapter, Installing ERSP, will walk you through installing and testing the software.

Chapter 2  Installing ERSP

    Recommended Skills

    The following skills are strongly recommended:

    Familiarity with object-oriented programming, specifically C++ and, optionally, Python

    Depending on which ERSP version you're using, you must be proficient in Linux or Microsoft Windows command line setup, file manipulation, and execution


    For Windows: familiarity with Microsoft Visual Studio .NET or Microsoft Visual Studio .NET Professional

    For Linux: proficiency in g++ 3.0 and building programs using make from the command line

    Requirements

    You must supply a computer with at least the following specifications:

    Pentium III, 800 MHz or faster (needed for development; requirements for target applications will vary widely depending on the application)


    500 MB hard disk space

    128 MB RAM (256 MB RAM recommended)

    USB port

    802.11b wireless network adaptor (recommended)

    Microsoft Windows 2000, Microsoft Windows XP, or Red Hat Linux 7.3

    Full-duplex sound card

    Customer Support

    Evolution Robotics customer support is available by email at [email protected] or by filling out the form at www.evolution.com/support/. Customer Service representatives are available by calling toll free at 866-ROBO4ME or, for international customers, 626-229-3198, Monday through Friday, 9 A.M. to 5 P.M. Pacific Time.

    Hardware Compatibility in Linux

    The Evolution peripherals (i.e., Gripper, IR) are compatible with the more common UHCI (universal host controller interface) controller for USB. The Evolution ER1 peripherals are not supported with the OHCI (open host controller interface) controller.

    How to Identify Your Controller

    If you want to see which type of controller your computer has, watch the display during boot-up. There should be a line about loading USB UHCI or OHCI controllers.

    Before You Install ERSP

    Before you start the installation process, do the following:

    For Windows

    ERSP is compatible with Microsoft Windows 2000 and XP.

    Install Microsoft Visual C++ or Visual Studio .NET, Version 7.

    Install Python 2.2.2. To get this version of Python, go to www.python.org. Download and follow the installation instructions there.

    If you have an installation of the ER1 Python SDK, uninstall it before installing the ERSP SDK. Note that the functionality from the ER1 Python SDK has been included in the ERSP SDK.

    Make sure to back up your system before installing this software.


    For Linux

    The Linux version must be Red Hat 7.3 with GCC 3.0. Red Hat 8.0 and GCC 3.2 are not supported.

    You must have kernel 2.4.18-24.7.x

    Install Python 2.2.2. To get this version of Python, go to www.python.org. Download and follow the installation instructions there.

    If you have an installation of the ER1 Python SDK, uninstall it before installing the ERSP SDK. Note that the functionality from the ER1 Python SDK has been included in the ERSP SDK.

    Make sure to back up your system before installing this software.

    Typographic Conventions

    There are various typographic conventions used in both this Guide and the ERSP User Guide. The following describes these conventions:

    Italics are used to denote variables that are specific to your system. The most common use of this convention is Install_dir, which stands for your ERSP installation directory.

    Courier is used to denote paths, filenames, function names, executables, words to type on the command line, and output from ERSP. You will see an example of this on the Installing ERSP for Linux section of the Installing ERSP chapter.

    Bold is used for Graphical User Interface (GUI) parameters and button names. You can find examples of this in the Tutorials chapter of this Guide.

    Blue is used in the PDF file of the Getting Started Guide and the ERSP User Guide to indicate hyperlinks. You will find examples of this in the Table of Contents, the Index and interspersed throughout the text of this Guide and the ERSP User Guide.

    A backslash (\) at the end of a line of code is an editorial convention that indicates that the next line of code should be typed on the same line as the first.

    Installing ERSP for Linux

    1. Log in as root.

    2. Place the installation CD in the CD-ROM drive, mount it, and change to the installation directory by typing:

    cd /mnt/cdrom/ERSP

    3. Run the install script:

        ./install.sh

    Important Note: Make sure you are root when running this script.

    4. You will be prompted "Do you want to continue?" Type yes. You will be asked a series of questions. Respond appropriately.

    Important Note: The ERSP installation directory will be referred to as Install_dir for the rest of this Guide.

    5. The software is now installed. To ensure that your installation was performed properly, run the tests found in the Diagnostics section of this chapter.

    Installing ERSP for Windows

    After you download and install the products listed in the Before You Install ERSP section of this chapter, you are ready to install ERSP.

    1. Put the CD into the CD-ROM drive.

    2. Open the installation CD directory in Windows Explorer. Click on the setup.exe file to start the Installshield Wizard. This will walk you through the installation process. You see the following messages:

    Preparing to Install.

    Welcome to InstallShield Wizard for ERSP.

    3. Do the following:

    Click on Next.

    Read the License Agreement carefully and then click on Yes.

    Select a destination folder. The default is C:\Program Files\ERSP. Click on Next.

    Important Note: The installation directory will be referred to as Install_dir for the rest of this Guide.

    The installation process starts. Cancel the installation process at any time by clicking on the Cancel button.

    4. You will see a prompt for installing Java Runtime Environment 1.3 or later. Click Yes. Java will also display a license agreement. Read this agreement and then click Yes. Then, you must select a destination folder. Finally, select a default browser.

    5. When the installation process is complete, you see the message "Setup has finished installing ERSP on your computer."

    6. Click on the Finish button.

    7. The ERSP is now ready to use. To ensure that your installation was performed properly, run the test in the Diagnostics section of this chapter.

    Sample Code Installation

    To follow platform conventions, the installation of the sample code differs a bit between Linux and Windows.

    In Windows, as with most SDKs, the sample code is installed with the rest of ERSP, directly under the root ERSP directory. Thus, the default location for the sample code is Install_dir\sample_code. The Windows sample code includes standard Visual Studio .NET project and solution files, with a separate solution file for each main section of the sample code.

    Though installation varies by platform, the directory structure of the sample code is identical. For the remainder of this chapter, the top sample code directory will be referred to as Samp_code_dir, and other subdirectories will be specified relative to this path.

    In Linux, the sample code is distributed as a standard tarball (tar archive compressed by gzip), on the CD in the sample_code directory. To use the sample code, extract it in the usual way:

        $ cd
        $ tar zxvf \
            /sample_code/evolution_robotics-sample_code-W.X.Y-Z.tar.gz

    This allows multiple users to have their own copies of the sample code, without needing write access to the ERSP installation.

    The sample code is now located in the Samp_code_dir directory. The structure of the sample code is as follows:

    behavior - Examples of behavior networks (C++ and XML).

    config - Examples of Schema XML configuration files.

    driver - Examples of drivers.

    objrec - Examples of applications that use the object recognition library.

    python - Examples of task programs written in Python.

    task - Examples of task programs written in C++.

    Linux_Project_Template - Templates for starting Linux projects. (Linux only)

    VC_Project_Template - Templates for starting Windows projects. (Windows only)

    tutorial - The tutorials found in the Tutorials chapter.

    viavoice - ViaVoice tutorials. (Linux only)

    Compile the C++ examples using either Microsoft Visual C++ version 7.0 for Windows or g++ 3.0 and make for Linux. The Linux sample code uses the GNU build tools; you simply configure and make the code:

        $ cd evolution_robotics-sample_code-W.X.Y-Z
        $ ./configure
        $ make

    Solution files are provided for compilation of the C++ examples. For example, in the behavior directory the behavior.sln file can be opened with Microsoft Visual C++ in Windows. Select the build solution option of the Build menu to compile the examples.

    Binary files are generated in each corresponding directory. For example, go into the Install_dir/behavior/emotion directory. In Windows, double-click on emotion_example.exe. In Linux, type emotion_test on the command line. A command window appears, showing run-time messages, along with a window containing an animated face that displays different expressions.

    Installation File Structure

    When you are done with the installation, the software will be located in the Install_dir directory for Windows and Linux. You should have the following directories:

    bin - Executables

    config - Configuration files

    data - Application data

    doc - Documentation

    external - External libraries used by ERSP (Windows only)

    include - Header files

    java - Java applications

    lib - ERSP library files

    licenses - Licenses (Windows only)

    python - Python libraries

    sample_code - Sample code (In Windows. In Linux, the location of this directory is user-determined)

    Diagnostics

    After you install and configure the ERSP software on the laptop, you should run the following tests to verify that the installation was successful. The tests can be found in the following directory:

    Install_dir/bin (for Linux and Windows)

    In Linux, it is recommended that you add this directory to your path, like this:

    $ PATH="$PATH:/opt/evolution_robotics/bin"$ export PATH

    In this chapter, it is assumed that the tests are in your PATH.

    Important Note: For fast online help, all these tests support the --help option.

    The Drive Test

    After setting up the robot, it is a good idea to run test_drive_system to make sure that things are working correctly. The drive test exercises the robot's drive system. The robot should move forward, then backward. After that, it should move forward ten centimeters, then back ten centimeters. It should then turn left, then to the right, and then re-center itself by heading back to the left. The correct output on the screen includes no error or warning messages.

    Important Note: Before you run this test, make sure that you have a 4' clearance all around the robot. This test doesn't make use of the robot's vision or bump sensors, so if something is in the robot's path, the robot will bang into it.

    1. On the command line, type the command:

    test_drive_system

    2. The test takes a few seconds to initiate, then the robot starts to move.

    3. Here's what you see on the screen:

        *** test_drive_system ***
        Obtained drive system: drive
        Forward 1 second: passed
        Checking velocities: passed
        Forward stop: passed
        Moving backward 1 second: passed
        Backward stop: passed
        Forward 10 cm: passed
        Backward 10 cm: passed
        Turning left 90 degrees: passed
        Turning right 180 degrees: passed
        Turning left 90 degrees: passed

    4. The robot moves through its paces. At the end of the test, the robot stops in its original location.

    The Camera Test

    The test_camera program tests the robot's camera.

    1. The usage for the camera command is below:

    test_camera --help

        Usage: test_camera [OPTIONS] [ [..]]
        OPTIONS:
          --frames      Frames to output (default = 5).
          --quality     Quality from [0-1] (default = 0.8).
          --pause-time  Duration to pause between readings, in seconds
                        (default = 0.1 = 100ms).

    2. By default, the camera will output images from all available cameras.

    3. Run the camera command with your chosen option(s). For example:

    test_camera --frames 5

    4. You see a display similar to the following:

        *** test_camera ***
        Obtained cameras: camera0
        Frame count: 5.
        Pause time: 0.5 sec.
        Writing file camera0_001.jpg
        Writing file camera0_002.jpg
        Writing file camera0_003.jpg
        Writing file camera0_004.jpg
        Writing file camera0_005.jpg

    5. In this case, the robot is saving multiple .jpg snapshots from the camera. The .jpg files are written to the directory in which the camera command is run. For example:

        $ ls -l *.jpg
        -rw-r--r--  1 user user  6361 Mar 18 15:43 camera0_001.jpg
        -rw-r--r--  1 user user 12700 Mar 18 15:43 camera0_002.jpg
        -rw-r--r--  1 user user 12687 Mar 18 15:43 camera0_003.jpg
        -rw-r--r--  1 user user 12712 Mar 18 15:43 camera0_004.jpg
        -rw-r--r--  1 root root 12730 Mar 18 15:43 camera0_005.jpg

    6. You can open the .jpg files to view the snapshots and assess the camera operation.

    Camera Troubleshooting

    If you see the following message:

        Initializing...
        Failure opening /dev/video0 - check permissions

    1. Check to see if all the connections are seated correctly.

    2. On the KritterCam, if the light is not brightly lit, the video driver may not be running.

    3. First, su to root, then unload the video driver:

    For KritterCam and Hawking cameras using the evolution_ov511 package, use the following command:

    $ modprobe -r ov511

    For the Logitech Pro 3000/4000:

    $ modprobe -r pwcx-i386 pwc

    4. Then load the video driver.

    For KritterCam and Hawking cameras:

    $ modprobe ov511

    For the Logitech Pro 3000/4000:

        $ modprobe pwc
        $ insmod -f /pwcx-i386.o

    5. Remember to exit out of root before running the tests.

    This should fix the problem.

    The IR Sensor Test

    1. The test_range_sensor diagnostic checks the range sensors (e.g. IRs) present on your robot. The usage of this command is as follows:

        $ test_range_sensor --help
        Usage: test_range_sensor [OPTIONS] [ [..]]
        OPTIONS:
          --read-count  Number of sensor readings to perform.
          --pause-time  Duration to pause between readings, in seconds
                        (default = 0.1 = 100ms).

    2. You may specify one or more range sensors to check, or, if none are specified, all present are polled:

        $ test_range_sensor --read-count
        *** test_range_sensor ***
        Obtained range sensors: IR_tn, IR_tne, IR_tnw

        IR_tn: distance = 49.48 raw = 34 time = 0
        IR_tne: distance = 44.9 raw = 164 time = 0
        IR_tnw: distance = 1.798e+308 raw = 0 time = 0

        IR_tn: distance = 61.69 raw = 3 time = 0
        IR_tne: distance = 45.94 raw = 165 time = 0
        IR_tnw: distance = 1.798e+308 raw = 0 time = 0

        IR_tn: distance = 68.05 raw = 0 time = 0
        IR_tne: distance = 41.73 raw = 178 time = 0
        IR_tnw: distance = 1.798e+308 raw = 0 time = 0

        IR_tn: distance = 69.27 raw = 22 time = 0
        IR_tne: distance = 42.38 raw = 173 time = 0
        IR_tnw: distance = 1.798e+308 raw = 3 time = 0

        IR_tn: distance = 66.85 raw = 29 time = 0
        IR_tne: distance = 48.57 raw = 188 time = 0
        IR_tnw: distance = 67.25 raw = 19 time = 0


Chapter 3 ERSP Basics

    API Documentation

    All of the C++ APIs are documented in detail in the Doxygen documents that are included in the installation. These files can be found in the Install_dir/doc/ERSP-API/html directory for both Linux and Windows. To find something in the Doxygen documents, open the index.html file in your Internet browser and click on the Compound List hyperlink. Use your browser's Find function to locate the behavior or task that you are looking for. The name of the behavior or task is a hyperlink to the detailed information you will need to create programs and scripts with ERSP.

    Conventions

    About X, Y Coordinates

    The robot's coordinate system has the positive X axis pointing forward, the positive Y axis pointing to the left, and the Z axis pointing straight up. It is the ordinary x, y coordinate system (positive X axis to the right, positive Y axis forward, +X, +Y values in the forward-right quadrant) rotated 90 degrees counter-clockwise, so that the 0 degree mark (i.e., the positive X axis) points forward. This coordinate system, with the X axis pointed forward, is the standard in robotics, and that is why ERSP uses it. The robotics coordinate system is always used in the resource config file.

    The following figure gives a visual representation of this coordinate system.

    The next figure shows you how to use the robotic coordinate system while piloting your robot.

    1. Robot starting position (0, 0), with the front of the robot pointing along the +X axis.

    2. Robot path to the new relative position of x=10, y=20.

    3. Robot position after the first relative move of x=10, y=20. The axes are redrawn so that the robot is again at position (0, 0), with the front of the robot pointing along the +X axis.

    4. Robot path to the new relative position of x=10, y=-30.



    5. Robot position after the relative move of x=10, y=-30. The robot is facing in the direction it would have been facing if it had traveled in a straight line to its new position.
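
    To make the composition of relative moves concrete, here is a small self-contained Python sketch (not part of ERSP; the function name apply_relative_move is purely illustrative) that tracks the robot's global pose as the two relative moves described above are applied. It assumes exactly the convention given in this section: each move is expressed in the robot's current frame, and after the move the robot faces along the straight line it just traveled.

import math

def apply_relative_move(pose, dx, dy):
    # pose is (x, y, theta) in the original starting frame;
    # (dx, dy) is the relative move expressed in the robot's current frame.
    x, y, theta = pose
    # Rotate the relative displacement into the starting frame.
    gx = x + dx * math.cos(theta) - dy * math.sin(theta)
    gy = y + dx * math.sin(theta) + dy * math.cos(theta)
    # The new heading is the direction of the straight-line path just traveled.
    return (gx, gy, theta + math.atan2(dy, dx))

pose = (0.0, 0.0, 0.0)                     # start at (0, 0), facing +X
pose = apply_relative_move(pose, 10, 20)   # first relative move
pose = apply_relative_move(pose, 10, -30)  # second relative move
print(pose)                                # final pose in the starting frame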

    Camera Coordinates

    The camera coordinate system is different from the X, Y coordinates used for navigation. In the camera coordinate system, the Z axis points forward (the direction that the camera is facing), the positive X axis points to the right, and the Y axis points down, at 90 degrees to the Z, X plane.

    The camera coordinate system is used for activities related to vision algorithms and camera calibration. An example of a function that uses this coordinate system is PointAndGo described in the Existing Behaviors chapter of the ERSP User Guide.
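
    Under the assumption that the camera is mounted level and facing straight along the robot's +X axis (and ignoring any translation between the camera and the robot's origin), the two frames are related by a fixed axis permutation. The following Python sketch is only an illustration of that relationship; it is not an ERSP API call.

def camera_to_robot(xc, yc, zc):
    # Camera frame: X right, Y down, Z forward.
    # Robot frame:  X forward, Y left, Z up.
    return (zc, -xc, -yc)

# A point 100 cm ahead of the camera, 5 cm to the right and 2 cm above the axis:
print(camera_to_robot(5.0, -2.0, 100.0))   # -> (100.0, -5.0, 2.0)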

    Units

    ERSP uses a certain set of default units for its functions. These are centimeters for forward and backward motion, radians for rotation, and seconds for time. These units are used at the resource and behavior levels. However, at the task level, you may use other units such as inches, feet, or meters for distance, degrees for rotation, or minutes for time. For example, when using tasks in Python scripts, you can use the setDefaultUnits function to set the units or the getDefaultUnits function to find out how your units are set. Below are some examples of how to change the default units being used in Python.

    setDefaultUnits

    Usage

import ersp.task
ersp.task.setDefaultUnits(ersp.task.UNIT_type, unit)

    Parameters

    UNIT_type This parameter specifies the UNIT_type: UNIT_DISTANCE, UNIT_ANGLE, and/or UNIT_TIME.



    unit This parameter sets the units to be used for each UNIT_type. These are:

    DISTANCE - This parameter can be set to cm (centimeters), ft (feet), m (meters), or in (inches).

    ANGLE - The ANGLE parameter can be set to rad (radians) or deg (degrees).

    TIME - This can be set to sec (seconds) or min (minutes).

    Returns

    Nothing.

    getDefaultUnits

    Usage

import ersp.task
ersp.task.getDefaultUnits(UNIT_type)

    Parameters

    UNIT_type This parameter can be set to UNIT_DISTANCE, UNIT_ANGLE, or UNIT_TIME.

    Returns

    This function returns the distance, angle and/or time setting requested.
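
    Putting the two functions together, a short Python example might look like the following. The module, function names, constants, and unit strings are the ones documented above; the exact form of the value returned by getDefaultUnits may vary.

import ersp.task

# Use inches for distance and degrees for rotation; keep seconds for time.
ersp.task.setDefaultUnits(ersp.task.UNIT_DISTANCE, "in")
ersp.task.setDefaultUnits(ersp.task.UNIT_ANGLE, "deg")
ersp.task.setDefaultUnits(ersp.task.UNIT_TIME, "sec")

# Confirm the current settings.
print(ersp.task.getDefaultUnits(ersp.task.UNIT_DISTANCE))
print(ersp.task.getDefaultUnits(ersp.task.UNIT_ANGLE))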

    Setting Up Your Resource Configuration File

    The primary resource configuration file is named resource-config.xml. This file can be found in the Install_dir/config/ directory in the default installation.

    Important Note: This file is already configured for the standard configuration of Evolution's SDK Robot.

    To configure this file, uncomment any areas of the file that pertain to your robot. For example, if you have a Gripper, uncomment the Gripper section of the file.

    The standard resource config file should look like this:







    For details on the XML tags used in the resource config file, see the Resource Configuration section of the Hardware Abstraction Layer chapter in the ERSP User Guide.

    Schema Files

    Behaviors require an .xml schema file that defines how they interface with each other. The default location for these files is Install_dir/config/behavior/Evolution/<behavior>.xml; in general, they belong in <config_dir>/behavior/<namespace>/<behavior>.xml. For example, the PrintBehavior (from Samp_code_dir/behavior/tutorial/) has a schema file located in Samp_code_dir/config/behavior/Examples/PrintBehavior.xml. The system needs to be told where to look for these schema files, or the user will get errors when trying to run behave on a network that uses a behavior with a missing schema.

    To use the example behaviors, modify the following line of $HOME/.bash_profile:

export EVOLUTION_CONFIG_PATH=/opt/evolution_robotics/config

    to read:

export EVOLUTION_CONFIG_PATH=/opt/evolution_robotics/config:/opt/\
evolution_robotics/sample_code/config

    so that the examples know where to find their schema files.

    Behave Command

    The following is the usage for the behave command. The behave command is used to execute behaviors. For more information on behaviors, see the Behavior Execution Layer and Behavior Libraries chapters of the ERSP User's Guide.

    Usage

behave [Options]

    Parameters

--help                           Print this usage.
--debug[=<category>]             Debugging category.
--duration=<seconds>             Duration in seconds (at least 0.01s).
--invocation-count=<count>       Number of invocations.
--invocation-interval=<seconds>  Interval between invocations in seconds (at least 0.01s).
--without-resources              Do not load hardware resources.
--load-all-resources             Load all resources at start (default loads only as needed).

    Configuring Your IR Sensors

    First, a few things you need to know:

    If you are facing the robot, left is east, the front of the robot is north, right is west, and the back of the robot is south.

    In order to know which IR sensor corresponds to which actual physical sensor, you need to use the IR sensor test program named test_range_sensor. (This is the same test you used in the Installing ERSP chapter.) Waving your hand in front of a sensor will change the corresponding sensor reading.

    1. In Windows, on the DOS command line, type:

cd Install_dir\bin
test_range_sensor.exe

    In Linux, type:

$ cd Install_dir/evolution_robotics/bin
$ ./test_range_sensor

    You should see something like:

*** test_range_sensor ***
Obtained range sensors: IR_tn, IR_tne, IR_tnw

IR_tn: distance = 59.06 raw = 118 time = 9.621e+004
IR_tne: distance = 45.94 raw = 165 time = 9.621e+004
IR_tnw: distance = 50.1 raw = 152 time = 9.621e+004

IR_tn: distance = 56.84 raw = 141 time = 9.621e+004
IR_tne: distance = 42.38 raw = 171 time = 9.621e+004
IR_tnw: distance = 46.94 raw = 162 time = 9.621e+004

IR_tn: distance = 51.92 raw = 142 time = 9.621e+004
IR_tne: distance = 51.45 raw = 147 time = 9.621e+004
IR_tnw: distance = 56.14 raw = 129 time = 9.621e+004

    The information for the sensor with your hand in front of it will change. Use this information to place the sensor in the proper location on the robot.

    Configuring Speech Recognition and Text-to-Speech

    If you would like your robot to recognize your speech, or to speak written text, you must configure your system to process this data.

    In Windows

    Microsoft's speech recognition program, WinVoice, works better after it has been trained on the user's voice. To train the speech recognition software, open the Microsoft Speech Applet in the Control Panel and click on the Train button in the Speech Recognition tab. The speech software will prompt you from there.

    In Linux

    ViaVoice Setup

    In order to use speech recognition, you need to have a ViaVoice directory in your home directory. This directory contains all the user-related ViaVoice speech parameters and it is used by ViaVoice to dump running logs and other data.

    There are two ways in which you can set up the ViaVoice directory in your user directory.


    1. Read the README file located in /usr/doc/ViaVoice/sdk.readme.txt, and then run vvstartuserguru.

    2. Make a symbolic link to the ViaVoice directory in the sample_code directory by typing:

$ cd $HOME
$ ln -s $SAMPLE_CODE_INSTALL_DIR/viavoice viavoice

    ViaVoice ASR Environment Variables Setup

    The ASR engine uses a variety of environment variables to know where its resources are located. The setup for these variables can be performed using the vvsetenv script provided by ViaVoice. This script needs to be loaded before running the ASR. There are two ways of loading it:

    1. Type:

    source vvsetenv

    2. Add a line to your .bash_profile or your .bashrc that says:

    source vvsetenv

    About Text to Speech

    The speech synthesis engine (TTS) uses the E-sound daemon to send the utterances to the speakers. Therefore, the daemon MUST be running before any TTS-enabled program is run. In order to activate the E-sound daemon, you must execute the command esd &.

    Grammars

    Both WinVoice and ViaVoice support the use of grammar files. Grammar files are used to increase the accuracy and speed of ERSP's voice recognition. Each grammar file contains a list of words and phrases that you would like your robot to understand. Any words and phrases that are not specified in this file will be ignored. For information on file formatting, see Appendix A, Grammar Information, of the ERSP User Guide.

Chapter 4 Tutorials

    Getting Started with Visual C++ Projects

    To simplify programming with our APIs in Microsoft Windows, we have created several general-purpose Microsoft Visual C++ .Net projects. These projects can be used to compile existing sample code or to build new applications/libraries from scratch.

    Compiling and Building Existing Sample Code Projects

    Open the *.sln file associated with the project. For this example, you will use


    behavior.sln (located in Samp_code_dir\behavior directory). Double click the behavior.sln file and wait for Microsoft Visual C++ .Net to launch the project. You should see a tree-view representation of the project on either corner. This view contains all of the projects that comprise the behavior solution. To compile and link the code, either press F7 or select Build\Build Solution from the menu bar. You should now be able to execute the generated code.

    Compiling and Building New Applications

    We have provided you with six Microsoft Visual C++ .Net quasi-project templates. These projects are quasi-templates because, at this time, they are not fully integrated with Visual


    C++ .Net's project wizard facility and will require some copying/pasting and renaming on

    your part.

    Now let's walk through the steps necessary to build a simple Hello World project.

    1. Make a new directory and name it anything you like.

    2. Copy the contents of the Samp_code_dir\VC_Project_Template\Empty_Console_App into this directory.

    3. Launch Microsoft Visual C++ .Net by double clicking on Empty_Console_App.sln.

    4. From the File menu, select Add New Item. From the dialog, select the C++ file. Enter SimpleTest in the name field of the dialog. Select Open. Visual C++ will add this new file into the project.

    5. Insert the following text into the new file:

#include <iostream>

int main(int argc, char *argv[])
{
    std::cout << "Hello World!" << std::endl;
    return 0;
}


    The ERSP installation directory is referred to in these tutorials as Install_dir. The other important directory is where the sample code containing the tutorials is installed; this directory is referred to as Samp_code_dir.

    In Linux, in the Samp_code_dir directory, be sure to run the command:

./configure --with-evolution-config=Install_dir/bin/evolution-config

    to generate the proper makefiles for the tutorials and other sample code.

    The Samp_code_dir/tutorial directory contains a number of tutorials designed to take the user through various features of ERSP. Tutorials are provided for the Hardware Abstraction Layer, Behavior Execution Layer, Task Execution Layer, and Python scripting, and are grouped into the following sub directories of Samp_code_dir/tutorial: resource (HAL), behavior (BEL), task (TEL), and python (Python scripting).

    Each tutorial is in turn contained in its own subdirectory, labeled with a number and a descriptive name. Examples are the subdirectories 01-config-camera and 02-config-ir of Samp_code_dir/tutorial/resource. The numbers indicate the order in which the tutorials should be performed, because later tutorials often build on the skills learned in earlier tutorials. Most tutorials require command line execution of programs. Linux developers should work on the tutorials in command line shells with the active directory changed to the directory containing the tutorial.

    In Linux, the bin directory, Install_dir/bin, should be part of the system path, so that ERSP tools and programs can be invoked on the command line without typing the full path. The EVOLUTION_CONFIG_PATH variable should contain the Samp_code_dir/config path, so that ERSP can find the various configuration and schema files used by the sample code in the tutorials. The CXX and CC environment variables should also be properly set for the GNU C and C++ compilers installed on your system. See the INSTALL file in the sample code directory for more details.

    The robot that you are using with ERSP should be connected to the computer used for the tutorials. Additional peripherals like cameras and sensors might need to be connected to the robot for the particular tutorials. The tutorial prerequisites will indicate which additional peripherals are required.

    Task Tutorials

    01-simple

    Purpose Tasks are useful for scripting a sequence of actions to be taken by the robot. This tutorial demonstrates how to sequence two simple tasks. This example will show you how to use simple tasks, including setting default units, setting up task context and arguments, and receiving task values.


    Prerequisites An ERSP-supported camera must be connected to the robot. (See http://www.evolution.com/support/recommended.masn#hub for a listing of approved cameras.)

    The Install_dir/bin directory must be in the system executable path.

    The ERSP sample code package should be installed as described in the Installing ERSP chapter.

    The active directory should be Samp_code_dir/tutorial/task/01-simple.

    The robot must have one meter of clearance in front of it.

    Task This tutorial will walk you through the process of sequencing two simple tasks. The tasks will move the robot forward 20 inches and then take a picture with the robot's camera.

    The source file used in this tutorial is the simple.cpp file in Samp_code_dir/tutorial/task/01-simple. Note that the TEL supports the use of units other than the centimeter/radian/second used by the Behavior and Hardware layers. The file begins by specifying some default units for the three unit categories (distance, angle, and time) with the following code:

Units::set_default_units(UNIT_DISTANCE, "inches");
Units::set_default_units(UNIT_ANGLE, "degrees");
Units::set_default_units(UNIT_TIME, "seconds");

    All values of distance, angle, and time, as well as all derived values, such as velocity (distance / time), will be assumed to be in the specified units.

    On to the first task: moving forward 20 inches. For this task, use Evolution.DriveMoveDelta. It commands the drive system to move a specified delta distance from the current position. To find the task, look it up in the task registry by name. If the task is found, the task registry will return a pointer to the desired task's functor (an object that wraps a single function call). Task functors wrap the run method, which executes the task. Here is the code to get the DriveMoveDelta task functor from the task registry:

TaskFunctor* drive_move_delta =
    TaskRegistry::find_task("Evolution.DriveMoveDelta");

    Most tasks require that some arguments be specified to determine how the task should perform. DriveMoveDelta requires that you specify how far the drive system should move, how fast it should move, and how fast it should accelerate while moving. Task arguments are specified in a TaskArg object and are then stored in a TaskContext.

    Say you want to move the robot forward 20 inches at a velocity of 5 inches/second and an acceleration of 20 inches/second². The code below constructs these task arguments in a TaskArg object and then creates a TaskContext object to hold them:

TaskArg args[] = { 20, 5, 20 };

// Arguments to the task have to be specified in a task context.
TaskContextPtr context(TaskContext::task_args(3, args));


    The TaskContext::task_args method creates a task context holding the TaskArgs

    object with the arguments. The first parameter specifies the number of arguments. The second parameter is the TaskArg object. The method returns a pointer to a heap-allocated TaskContext object. To prevent you from having to delete the TaskContext object after using it, the above code uses the smart pointer type TaskContextPtr to keep track of the TaskContext pointer returned by the task_args call. The TaskContext object will automatically be cleaned up when the smart pointer TaskContextPtr context object goes out of scope.

    You are now ready to run the DriveMoveDelta task and will do so by calling the run method:

drive_move_delta->run(context.get());

    The run method takes the TaskContext pointer containing the arguments for the task. Recall that the context object is actually the smart pointer TaskContextPtr. Calling its get method returns the raw TaskContext pointer that the run method takes as its sole parameter.

    After executing the move task, it's time to take a picture. You can do this using the Evolution.GetImage task. This task needs to be set up with arguments and a context, just like the previous task. The code for all of this is in the simple.cpp file and follows the same pattern as for Evolution.DriveMoveDelta, so it won't be discussed here.

    The one difference regarding this second task is that you are interested in the task's return value. The run method of all task functors returns a pointer to a TaskValue type, a variant type which can contain one of many types used in ERSP. The return value from the GetImage task is the image obtained by the task from the camera. The following code line executes the GetImage task functor while preserving its return value:

TaskValuePtr result (get_image->run (context1.get()));

    The run method is once again called to execute the task, and the returned pointer to the TaskValue type is wrapped in the smart pointer TaskValuePtr result object. Again, this is done to keep the user from having to manually delete the returned TaskValue pointer after using it.

    After a call to run, a pointer to the task is available by calling the get_task method of the task context, from which the task's execution status can be obtained with the get_status call. The following code checks that the GetImage task executed successfully and, if so, obtains the image from the TaskValue and saves it to a file:

if (context1->get_task ()->get_status () == TASK_SUCCESS) {
    // Obtain the image from the task result.
    Image* incoming_image = result->get_image();
    incoming_image->write_file ("image.jpg", .9);
}

    Build the simple.cpp file by typing make on Linux, or build the Visual Studio project in Windows, and run the tutorial program. The robot should move forward 20 inches, then take a picture and save it to a file named image.jpg.

    Summary This tutorial illustrates the basic steps in using a task. First, the default units are specified if units other than the default set of centimeters, radians, and seconds will be used. Next,


    pointers to functors of the task to be used are obtained from the task registry. Arguments

    to the task are then created in a TaskArg object and assigned to a task context with the TaskContext::task_args method. The smart pointer type TaskContextPtr can be used to automatically clean up the task context object. The task can now be executed with a call to the run method, passing in the task context pointer as the sole parameter. The task execution may return a value, as in the case of GetImage. The returned value can be wrapped in a TaskValuePtr smart pointer object for ease of maintenance and used appropriately.

    02-parallel

    Purpose In the previous tutorial you saw how to find and run tasks in sequence. This tutorial will show how tasks can be run in parallel. Multiple tasks can be set to run in parallel until one of the tasks completes or until all of the tasks complete.

    Prerequisites

    A supported camera must be connected to the robot. See the Evolution website at http://www.evolution.com/support/recommended.masn#hub for a list of approved cameras.

    The Install_dir/bin directory must be in the system executable path.

    The ERSP sample code package must be extracted, and the active directory should be Samp_code_dir/tutorial/task/02-parallel.

    There should be one meter of clearance in front of the robot.

    Task The file parallel.cpp in the Samp_code_dir/tutorial/task/02-parallel directory contains the source code for this tutorial. The source code starts by setting units and creating a context, which should be familiar after the previous tutorial. Next, a Parallel object is constructed. This object manages the parallel execution of multiple tasks and takes a TaskContext pointer as the sole parameter to its constructor:

Parallel parallel(context);

    The next step is to create the tasks you want to run in parallel and add them to the Parallel object. You will be using DriveMoveDelta and GetImage again. The add_task method of Parallel takes three parameters: the task functor pointer, the number of arguments, and the TaskArg object containing the arguments. Adding a task involves using the TaskRegistry to locate the task functor, creating the TaskArg object with the arguments, and calling the add_task method, as shown here for DriveMoveDelta and GetImage:

// Get a task functor for DriveMoveDelta.
TaskFunctor* drive_move_delta =
    TaskRegistry::find_task("Evolution.DriveMoveDelta");

// Specify the arguments to the DriveMoveDelta task.
TaskArg args[] = { 20, 5, 20 };

// Add the DriveMoveDelta task to the Parallel object.
Task* task1 = parallel.add_task(drive_move_delta, 3, args);

// Get a task functor for GetImage.
TaskFunctor* get_image =
    TaskRegistry::find_task("Evolution.GetImage");

// Specify the arguments to the GetImage task.
TaskArg args1[] = { "camera0" };

// Add the GetImage task to the Parallel object.
Task* task2 = parallel.add_task(get_image, 1, args1);

    Note that the add_task method returns a pointer to the task added to the Parallel object. This pointer can be used to retrieve the task's execution status and return value after the parallel execution. You are now ready to run the tasks in parallel with the following code:

// Execute both tasks and wait until both tasks are done.
parallel.wait_for_all_complete_tasks();

    The above call to the wait_for_all_complete_tasks method simultaneously starts all tasks that have been added to the parallel object and waits until all those tasks are done. There is also a wait_for_first_complete_task method that terminates all remaining tasks when one task is done.

    Once the tasks are executed, task2, the task pointer returned by the add_task call that added the GetImage task to the Parallel object, can be used to check whether the GetImage task completed successfully and, if so, to obtain the image from the task result. This is done by the following code:

// Verify success of the GetImage task.
if (task2->get_status () == TASK_SUCCESS) {
    TaskValue result = task2->get_result();

    // Obtain the image from the task result.
    Image* incoming_image = result.get_image();
    if (incoming_image == 0) {
        std::cerr << "No image was returned." << std::endl;
    }
}


    Build and run the tutorial program as before. However, if the robot starts out at the same place as in the first tutorial, the image saved should be different, because it will have been taken at the beginning of the move and not at the end: both the move and the image capture start at the same time, in parallel.

    Summary This tutorial shows how to use the Parallel object to start multiple tasks in parallel. Task functors are obtained from the task registry and added to the Parallel object along with their arguments using Parallel's add_task method. This method returns a pointer to the task, which can be used to obtain the task's success status and return value after the task is run. To start the added tasks in parallel, use the wait_for_all_complete_tasks method to start all added tasks and wait until all tasks are complete. The wait_for_first_complete_task method can also be used to start all tasks, but it stops after one task is done and terminates the rest.

    03-custom-task

    Purpose This tutorial will demonstrate how to create a reusable custom task class contained in its own library. The steps required in creating a task will be discussed in the context of creating a task that uses the camera to repeatedly take photos. The tutorial will highlight a number of issues specific to task creation such as unit conversion, parameter handling, and returning a task result. A test program that makes use of the task will also be provided.

    Prerequisites

    A supported camera must be connected to the robot.

    The Install_dir/bin directory must be in the system executable path.

    The ERSP sample code package should also be extracted, and the active directory should be Samp_code_dir/tutorial/task/03-custom-task.

    There should be one meter of clearance in front of the robot.

    Task Suppose that you want to take photos at regular intervals while moving. This cannot be done by the GetImage task used in the last couple of tutorials. A new custom task will need to be created to do this, and this tutorial will show how to create such a custom task. The custom task will be called PhotoShoot and will be stored in its own library, so that it can be easily reused.

    The source file for this task is PhotoShoot.cpp in the Samp_code_dir/tutorial/task/03-custom-task directory. In the same directory, there is also the ExampleTypes.hpp file. As with other sample code, the PhotoShoot task class is implemented in the Examples namespace. The ExampleTypes.hpp file contains a number of typedefs and macros that declare Evolution types in the Examples namespace so that the tutorial code is more concise and readable. There is also a test_shoot.cpp file in the same directory, which provides an example of how to use the PhotoShoot task and will be discussed later in this tutorial.

    Open up the PhotoShoot.cpp file in a text editor for reference throughout the tutorial. There are some comments immediately inside the Examples namespace describing the functionality and arguments of this task. Briefly, this task will take photos from the camera at regular intervals. The task takes two arguments: a delay argument to specify the time interval between successive photos, and an optional stop_count argument to specify that the task should stop after taking a certain number of photos. If this optional argument is not specified, the task will continue indefinitely. When done, it will return the number of photos taken.
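
    Before looking at the C++ source, the behavior just described can be summarized in a short, illustrative Python sketch. This is not the ERSP implementation; take_photo is a hypothetical stand-in for the camera call, and delay is treated simply as a number of seconds.

import time

def photo_shoot(take_photo, delay, stop_count=None):
    # Take a photo every `delay` seconds; stop after `stop_count` photos,
    # or run indefinitely if stop_count is not given. Return the photo count.
    image_count = 0
    while stop_count is None or image_count < stop_count:
        take_photo()          # placeholder for the actual camera call
        image_count += 1
        time.sleep(delay)     # wait for the next interval
    return image_count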

    Now let's proceed to look at the source code for PhotoShoot. The code starts with the following macro:

ERSP_DECLARE_TASK_LINKAGE (PhotoShoot, "Examples.PhotoShoot", EVOLUTION_EXPORT);

    This macro declares the new PhotoShoot task. The first parameter is the C++ class name of the task. The second parameter is the new task's string ID. The third parameter is an export macro that contains a platform-specific linkage directive. The macro EVOLUTION_EXPORT should be properly defined for the current platform and should be used for this third parameter.

    In Linux, the EVOLUTION_EXPORT macro should be defined as nothing. In Windows, when using Visual Studio, the following definition should be used:

#define EVOLUTION_EXPORT __declspec(dllexport)

    Next in the source code is the following macro:

ERSP_IMPLEMENT_TASK (PhotoShoot)

    This macro does pretty much what it claims by implementing the task in a single function body. All code to perform the task will be contained in this single block after the macro. The code block begins by defining a number of useful variables, including the two that will contain the argument values: delay and stop_count. The last variable defined, image_count, will be used to keep track of how many images have been taken. This value will be returned when the task is done.

    Units conversion follows the variable declarations. As mentioned previously, the TEL fully supports the use of a variety of units, and the job of making the proper conversion falls to the task implementation. The values of the arguments passed in are assumed to be in the default units, so the task must convert these values into the units that it uses internally. The one argument of PhotoShoot that needs to be converted is delay, which is a time value. Later, you will be using the millisecond_sleep method to specify the interval between successive photos, so internally the time unit used by the task is milliseconds. ERSP provides the Units::convert_to_specified_units method to return a scale factor between the default units and the specified units. PhotoShoot calls this method to obtain the factor between milliseconds and the default unit as follows:

// Unit conversion.
double time_