

71 Robotics

Frank L. Lewis, University of Texas at Arlington
John M. Fitzgerald, Adept Technology
Kai Liu, Alcatel Telecom

71.1 Introduction
71.2 Robot Workcells
71.3 Workcell Command and Information Organization
Intelligent Control Architectures • Behaviors and Hybrid Systems Design • Workcell Planning, Coordination, and Control Structure
71.4 Commercial Robot Configurations and Types
Manipulator Performance • Common Kinematic Configurations • Drive Types of Commercial Robots • Commercial Robot Controllers
71.5 Robot Kinematics, Dynamics, and Servo-Level Control
Kinematics and Jacobians • Robot Dynamics and Properties • Robot Servo-level Motion Control • Robot Force/Torque Servocontrol • Motion Trajectory Generation
71.6 End Effectors and End-of-Arm Tooling
Part Fixtures and Robot Tooling • Grippers and Fingers • Robot Wrist Mechanisms • Robot/Tooling Process Integration and Coordination
71.7 Sensors
The Philosophy of Robotic Workcell Sensors • Types of Sensors • Sensor Data Processing • Vision for Robotics
71.8 Workcell Planning
Workcell Behaviors and Agents • Task Decomposition and Planning • Task Matrix Approach to Workcell Planning • Path Planning
71.9 Job and Activity Coordination
Matrix Rule-Based Job Coordination Controller • Process Integration, Digital I/O, and Job Coordination Controller Implementation • Coordination of Multiple Robots
71.10 Error Detection and Recovery
Error Detection • Error Recovery
71.11 Human Operator Interfaces
Levels of User Interface • Mechanisms for User Interface
71.12 Robot Workcell Programming
Robot Programming Languages • V+, A Representative Robot Language
71.13 Mobile Robots and Automated Guided Vehicles
Mobile Robots • Automated Guided Vehicle Systems

© 2004 by CRC Press, LLC

F.L. Lewis, M. Fitzgerald, and K. Liu, "Robotics," in The Computer Science Handbook, Allen B. Tucker, Jr., Ed., Chapter 71, CRC Press, 2011.


71.1 Introduction

The word robot was introduced by the Czech playwright Karel Capek in his 1920 play Rossum's Universal Robots. The word robota in Czech means simply work. In spite of such practical beginnings, science fiction writers and early Hollywood movies have given us a romantic notion of robots and expectations that they will revolutionize several walks of life, including industry. However, many of the more far-fetched expectations for robots have failed to materialize. For instance, in underwater assembly and oil mining, teleoperated robots are very difficult to manipulate due to sea currents and low visibility, and have largely been replaced or augmented by automated smart quick-fit couplings that simplify the assembly task. However, through good design practices and painstaking attention to detail, engineers have succeeded in applying robotic systems to a wide variety of industrial and manufacturing situations where the environment is structured or predictable. Thus, the first successful commercial implementation of process robotics was in the U.S. automobile industry; the word automation was coined in the 1940s at Ford Motor Company, a contraction of automatic motivation.

As machines, robots have precise motion capabilities, repeatability, and endurance. On a practical level, robots are distinguished from other electromechanical motion equipment by their dexterous manipulation capability in that robots can work, position, and move tools and other objects with far greater dexterity than other machines found in the factory. The capabilities of robots are extended by using them as a basis for robotic workcells. Process robotic workcells are integrated functional systems with grippers, end effectors, sensors, and process equipment organized to perform a controlled sequence of jobs to execute a process. Robots must coordinate with other devices in the workcell such as machine tools, conveyors, part feeders, cameras, and so on. Sequencing jobs to correctly perform automated tasks in such circumstances is not a trivial matter, and robotic workcells require sophisticated planning, sequencing, and control systems.

Today, through developments in computers and artificial intelligence (AI) techniques (and often motivated by the space program), we are on the verge of another breakthrough in robotics that will afford some levels of autonomy in unstructured environments. For applications requiring increased autonomy it is particularly important to focus on the design of the data structures and command-and-control information flow in the robotic system. Therefore, this chapter focuses on the design of robotic workcell systems. A distinguishing feature of robotics is its multidisciplinary nature: to successfully design robotic systems one must have a grasp of electrical, mechanical, industrial, and computer engineering, as well as economics and business practices. The purpose of this chapter is to provide a background in these areas so that design of robotic systems may be approached from a position of rigor, insight, and confidence.

The chapter begins by discussing layouts and architectures for robotic workcell design. Then, components of the workcell are discussed from the bottom up, beginning with robots, sensors, and conveyors/part feeders, and progressing upwards in abstraction through task coordination, job sequencing, and resource dispatching, to task planning, assignment, and decomposition. Concepts of user interface and exception handling/fault recovery are included.

71.2 Robot Workcells

In factory automation and elsewhere it was once common to use layouts such as the one in Figure 71.1, which shows an assembly line with distinct workstations, each performing a dedicated function. Robots have been used at the workstation level to perform operations such as assembly, drilling, surface finishing, welding, palletizing, and so on. In the assembly line, parts are routed sequentially to the workstations by conveyors. Such systems are very expensive to install, require a cadre of engineering experts to design and program, and are extremely difficult to modify or reprogram as needs change. In today's high-mix low-volume (HMLV) manufacturing scenario, these characteristics tolled the death knell for such rigid, antiquated designs.

In the assembly line, the robot is restricted by placing it into a rigid sequential system. Robots are versatile machines with many capabilities, and their potential can be significantly increased by using them as a basis for robotic workcells such as the one in Figure 71.2 [Decelle 1988, Jamshidi et al. 1992, Pugh 1983].


FIGURE 71.1 Antiquated sequential assembly line with dedicated workstations. (Courtesy of Edkins, M. 1983. Linking industrial robots and machine tools. In Robotic Technology. A. Pugh, Ed. Peregrinus, London.)

FIGURE 71.2 Robot workcell. (Courtesy of Edkins, M. 1983. Linking industrial robots and machine tools. In Robotic Technology. A. Pugh, Ed. Peregrinus, London.)

In the robotic workcell, robots are used for part handling, assembly, and other process operations. The workcell is designed to make full use of the workspace of the robots, and components such as milling machines, drilling machines, vibratory part feeders, and so on are placed within the robots' workspaces to allow servicing by the robots. Contrary to the assembly line, the physical layout does not impose a priori a fixed sequencing of the operations or jobs.


Thus, as product requirements change, all that is required is to reprogram the workcell in software. The workcell is ideally suited to emerging HMLV conditions in manufacturing and elsewhere.

The rising popularity of robotic workcells has taken emphasis away from hardware design and placed new emphasis on innovative software techniques and architectures that include planning, coordination, and control (PC&C) functions. Research into individual robotic devices is becoming less useful; what is needed are rigorous design and analysis techniques for integrated multirobotic systems.

71.3 Workcell Command and Information Organization

In this section we define some terms, discuss the design of intelligent control systems, and specify a planning, coordination, and control structure for robotic workcells. The remainder of the chapter is organized around that structure. The various architectures used for modeling AI systems are relevant to this discussion, although here we specialize the discussion to intelligent control architecture.

71.3.1 Intelligent Control Architectures

Many structures have been proposed under the general aegis of the so-called intelligent control (IC) architectures [Antsaklis and Passino 1992]. Despite frequent heated philosophical discussions, it is now becoming clear that most of the architectures have much in common, with apparent major differences due to the fact that different architectures focus on different aspects of intelligent control or different levels of abstraction. A general IC architecture based on work by Saridis is given in Figure 71.3, which illustrates the principle of decreasing precision with increasing abstraction [Saridis 1996]. In this figure, the organization level performs as a manager that schedules and assigns tasks, performs task decomposition and planning, does path planning, and determines for each task the required job sequencing and assignment of resources. The coordination level performs the prescribed job sequencing, coordinating the workcell agents or resources; in the case of shared resources it must execute dispatching and conflict resolution.

The agents or resources of the workcell include robot manipulators, grippers and tools, conveyors and part feeders, sensors (e.g., cameras), mobile robots, and so on. The execution level contains a closed-loop controller for each agent that is responsible for the real-time performance of that resource, including trajectory generation, motion and force feedback servo-level control, and so on. Some permanent built-in motion sequencing may be included (e.g., stop robot motion prior to opening the gripper).

At each level of this hierarchical IC architecture, there may be several systems or nodes. That is, the architecture is not strictly hierarchical. For instance, at the execution level there is a real-time controller for each workcell agent. Several of these may be coordinated by the coordination level to sequence the jobs needed for a given task. At each level, each node is required to sense conditions, make decisions, and give commands or status signals. This is captured in the sense/world-model/execute (SWE) paradigm of Albus [1992], shown in the NASREM configuration in Figure 71.4; each node has the SWE structure.
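The SWE structure of each node can be summarized in a few lines of code. The following is a minimal sketch, assuming a simple dictionary world model; the class and method names are illustrative and not taken from the NASREM specification.

```python
# Minimal sketch of the sense/world-model/execute (SWE) node structure.
# Class and method names are illustrative, not from the NASREM spec.
from abc import ABC, abstractmethod

class SWENode(ABC):
    """One node in the hierarchy: it senses conditions, updates its world
    model, and issues commands downward or status signals upward."""

    def __init__(self):
        self.world_model = {}              # node-local knowledge base

    @abstractmethod
    def sense(self) -> dict:
        """Read sensors or status signals from subordinate nodes."""

    @abstractmethod
    def execute(self, command):
        """Issue commands to subordinates or status to the supervisor."""

    def step(self, command):
        observation = self.sense()
        self.world_model.update(observation)   # world-model update
        return self.execute(command)
```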

FIGURE 71.3 Three-level intelligent control architecture from work by Saridis.


FIGURE 71.4 Three-element structure at all levels of the IC architecture: the NASREM paradigm.

FIGURE 71.5 Behavior-based design after the subsumption technique of Brooks.

71.3.2 Behaviors and Hybrid Systems Design

In any properly designed IC system, the supervisory levels should not destroy the capabilities of the systems supervised. Thus, design should proceed in the manner specified by Brooks [1986], where behaviors are built in at lower levels, then selected, activated, or modified by upper-level supervisors. From the point of view of still higher level nodes, the composite performance appears in terms of new, more complex, or emergent behaviors. Such subsumption design proceeds in the manner of adding layers to an onion, as depicted loosely in Figure 71.5.

Near or slightly below the interfaces between the coordination level and the execution level one must face the transition between two fundamentally distinct worlds. Real-time servo-level controller design and control may be accomplished in terms of state-space systems, which are time-varying dynamical systems (either continuous time or discrete time) having continuous-valued states such as temperatures, pressures, motions, velocities, forces, and so on. On the other hand, the coordinator is not concerned about such details, but speaks in terms of discrete events such as "perform this job" or "check this condition."


FIGURE 71.6 Hybrid systems approach to defining and sequencing the plant behaviors.

The theory of hybrid systems is concerned with the interface between continuous-state systems and discrete event systems.

These concepts are conveniently illustrated by figures such as Figure 71.6, where a closed-loop real-time feedback controller for the plant having dynamics $\dot{x} = f(x, u)$ is shown at the execution level. The function of the coordinator is to select the details of this real-time feedback control structure; that is, the outputs z(t), control inputs u(t), prescribed reference trajectories r(t), and controllers K to be switched in at the low level. Selecting the outputs amounts to selecting which sensors to read; selecting the control inputs amounts to selecting to which actuators the command signals computed by the controller should be sent. The controller K is selected from a library of stored predesigned controllers.

A specific combination of (z, u, r, K) defines a behavior of the closed-loop system. For instance, in a mobile robot, for path-following behavior one may select: as outputs, the vehicle speed and heading; as controls, the speed and steering inputs; as the controller, an adaptive proportional-integral-derivative (PID) controller; and as reference input, the prescribed path. For wall-following behavior, for instance, one simply selects as output the sonar distance from the wall, as input the steering command, and as reference input the prescribed distance to be maintained. These distinct closed-loop behaviors are sequenced by the coordinator to perform the prescribed job sequence.
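Such (z, u, r, K) combinations lend themselves to a table-driven implementation in which the coordinator switches behaviors by name. The following is a minimal sketch under that assumption; the behavior names, sensor/actuator labels, and PID gains are invented for illustration, not taken from any particular controller.

```python
# Sketch of a behavior library for the mobile-robot example: each behavior
# binds outputs z, control inputs u, a reference r(t), and a controller K
# drawn from a library of predesigned laws. All names are illustrative.
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class Behavior:
    outputs: Sequence[str]      # z: which sensors to read
    inputs: Sequence[str]       # u: which actuators to drive
    reference: Callable         # r(t): prescribed reference trajectory
    controller: Callable        # K: control law from the stored library

def pid(kp, ki, kd):
    """Return a simple PID law as a closure over its integrator state."""
    state = {"i": 0.0, "e_prev": 0.0}
    def law(e, dt):
        state["i"] += e * dt
        d = (e - state["e_prev"]) / dt
        state["e_prev"] = e
        return kp * e + ki * state["i"] + kd * d
    return law

behaviors = {
    "path_follow": Behavior(("speed", "heading"), ("drive", "steer"),
                            reference=lambda t: (0.5, 0.0),
                            controller=pid(2.0, 0.1, 0.05)),
    "wall_follow": Behavior(("sonar_range",), ("steer",),
                            reference=lambda t: 0.3,
                            controller=pid(1.5, 0.0, 0.1)),
}
# The coordinator sequences jobs simply by switching the active behavior key.
```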

71.3.3 Workcell Planning, Coordination, and Control Structure

A convenient planning, coordination, and control structure for robotic workcell design and operation is given in Figure 71.7, which is modified from the next generation controller (NGC) paradigm. This is an operational PC&C architecture fully consistent with the previous IC structures. In this figure, the term virtual agent denotes the agent plus its low-level servocontroller and any required built-in sequencing coordinators. For instance, a virtual robot includes the manipulator, its commercial controller with servo-level joint controllers and trajectory generator, and in some applications the gripper controller plus an agent internal coordinator to sequence manipulator and gripper activities. A virtual camera might include the camera(s) and framegrabber board, plus software algorithms to perform basic vision processing such as edge detection, segmentation, and so on; thus, the virtual camera could include a data abstraction, which is a set of data plus manipulations on that data.


FIGURE 71.7 Robotic workcell planning, coordination, and control operational architecture.

The remainder of the chapter is structured after this PC&C architecture, beginning at the execution level to discuss robot manipulator kinematics, dynamics, and control; end effectors and tooling; sensors; and other workcell components such as conveyors and part feeders. Next considered is the coordination level, including sequencing control and dispatching of resources. Finally, the organization level is treated, including task planning, path planning, workcell management, task assignment, and scheduling.

Three areas are particularly problematic. At each level there may be human operator interfaces; this complex topic is discussed in a separate section. An equally complex topic is error detection and recovery, also allotted a separate section, which occurs at several levels in the hierarchy. Finally, the strict NGC architecture has a component known as the information or knowledge base; however, in view of the fact that all nodes in the architecture have the SWE structure shown in Figure 71.4, it is clear that the knowledge base is distributed throughout the system in the world models of the nodes. Thus, a separate discussion of this component is not included.

71.4 Commercial Robot Configurations and Types

Robots are highly reliable, dependable, and technologically advanced factory equipment. The majority of the world's robots are supplied by established companies using reliable off-the-shelf component technologies. All commercial industrial robots have two physically separate basic elements, the manipulator arm and the controller.


The basic architecture of all commercial robots is fundamentally the same, and consists of digital servocontrolled electrical motor drives on serial-link kinematic machines, usually with no more than six axes (degrees of freedom). All are supplied with a proprietary controller. Virtually all robot applications require significant design and implementation effort by engineers and technicians. What makes each robot unique is how the components are put together to achieve performance that yields a competitive product. The most important considerations in the application of an industrial robot center on two issues: manipulation and integration.

71.4.1 Manipulator Performance

The combined effects of kinematic structure, axis drive mechanism design, and real-time motion control determine the major manipulation performance characteristics: reach and dexterity, payload, quickness, and precision. Caution must be used when making decisions and comparisons based on manufacturers' published performance specifications because the methods for measuring and reporting them are not standardized across the industry. Usually motion testing, simulations, or other analysis techniques are used to verify performance for each application.

Reach is characterized by measuring the extent of the workspace described by the robot motion, and dexterity by the angular displacement of the individual joints. Some robots will have unusable spaces such as dead zones, singular poses, and wrist-wrap poses inside the boundaries of their reach.

Payload weight is specified by the manufacturers of all industrial robots. Some manufacturers also specify inertial loading for rotational wrist axes. It is common for the payload to be given for extreme velocity and reach conditions. Weight and inertia of all tooling, workpieces, cables, and hoses must be included as part of the payload.

Quickness is critical in determining throughput but difficult to determine from published robot specifications. Most manufacturers will specify a maximum speed, either of individual joints or of a specific kinematic tool point. However, average speed in a working cycle is the quickness characteristic of interest.

Precision is usually characterized by measuring repeatability. Virtually all robot manufacturers specify static position repeatability. Accuracy is rarely specified, but it is likely to be at least four times larger than repeatability. Dynamic precision, or the repeatability and accuracy in tracking position, velocity, and acceleration over a continuous path, is not usually specified.

71.4.2 Common Kinematic Configurations

All common commercial industrial robots are serial-link manipulators, usually with no more than six kinematically coupled axes of motion. By convention, the axes of motion are numbered in sequence as they are encountered from the base on out to the wrist. The first three axes account for the spatial positioning motion of the robot; their configuration determines the shape of the space through which the robot can be positioned. Any subsequent axes in the kinematic chain generally provide rotational motions to orient the end of the robot arm and are referred to as wrist axes. There are two primary types of motion that a robot axis can produce in its driven link: either revolute or prismatic. It is often useful to classify robots according to the orientation and type of their first three axes. There are four very common commercial robot configurations: articulated, type I selectively compliant assembly robot arm (SCARA), type II SCARA, and Cartesian. Two other configurations, cylindrical and spherical, are now much less common.

71.4.2.1 Articulated Arms

The variety of commercial articulated arms, most of which have six axes, is very large (Figure 71.8). All of these robots' axes are revolute. The second and third axes are parallel and work together to produce motion in a vertical plane. The first axis in the base is vertical and revolves the arm to sweep out a large work volume. Many different types of drive mechanisms have been devised to allow wrist and forearm drive motors and gearboxes to be mounted close to the first and second axes of rotation, thus minimizing the extended mass of the arm. The workspace efficiency of well-designed articulated arms, which is the degree of quick dexterous reach with respect to arm size, is unsurpassed by other arm configurations when five or more degrees of freedom are needed.


FIGURE 71.8 Articulated arm; six-axis arm grinding from a casting. (Courtesy of Staubli Unimation, Inc.)

A major limiting factor in articulated arm performance is that the second axis has to work to lift both the subsequent arm structure and the payload. Historically, articulated arms have not been capable of achieving accuracy as well as other arm configurations, as all axes have joint angle position errors which are multiplied by link radius and accumulated for the entire arm.

71.4.2.2 Type I SCARA

The type I SCARA (selectively compliant assembly robot arm) uses two parallel revolute joints to produce motion in the horizontal plane (Figure 71.9). The arm structure is weight-bearing, but the first and second axes do no lifting. The third axis of the type I SCARA provides work volume by adding a vertical or z axis. A fourth revolute axis will add rotation about the z axis to control orientation in the horizontal plane. This type of robot is rarely found with more than four axes. The type I SCARA is used extensively in the assembly of electronic components and devices, and it is used broadly for the assembly of small- and medium-sized mechanical assemblies.

71.4.2.3 Type II SCARA

The type II SCARA, also a four-axis configuration, differs from the type I in that the first axis is a long vertical prismatic z stroke, which lifts the two parallel revolute axes and their links (Figure 71.10). For quickly moving heavier loads (over approximately 75 lb) over longer distances (more than about 3 ft), the type II SCARA configuration is more efficient than the type I.

71.4.2.4 Cartesian Coordinate Robots

Cartesian coordinate robots use orthogonal prismatic axes, usually referred to as x, y, and z, to translate their end effector or payload through their rectangular workspace (Figure 71.11). One, two, or three revolute wrist axes may be added for orientation. Commercial robot companies supply several types of Cartesian coordinate robots with workspace sizes ranging from a few cubic inches to tens of thousands of cubic feet, and payloads ranging to several hundred pounds. Gantry robots, which have an elevated bridge structure, are the most common Cartesian style and are well suited to material handling applications where large areas and/or large loads must be serviced. They are particularly useful in applications such as arc welding, waterjet cutting, and inspection of large, complex precision parts.


FIGURE 71.9 Type I SCARA arm. High-precision, high-speed midsized SCARA I. (Courtesy of Adept Technologies, Inc.)

FIGURE 71.10 Type II SCARA. (Courtesy of Adept Technologies, Inc.)


FIGURE 71.11 Cartesian robot. Three-axis robot constructed from modular single-axis motion modules. (Courtesy of Adept Technologies, Inc.)

Modular Cartesian robots are also commonly available from several commercial sources. Each module is a self-contained, completely functional single-axis actuator; the modules may be custom assembled for special-purpose applications.

71.4.2.5 Spherical and Cylindrical Coordinate Robots

The first two axes of the spherical coordinate robot are revolute and orthogonal to one another, and the third axis provides prismatic radial extension. The result is a natural spherical coordinate system with a spherical work volume. The first axis of cylindrical coordinate robots is a revolute base rotation. The second and third are prismatic, resulting in a natural cylindrical motion. Commercial models of spherical and cylindrical robots (Figure 71.12) were originally very common and popular in machine tending and material handling applications. Hundreds are still in use, but now there are only a few commercially available models. The decline in use of these two configurations is attributed to problems arising from use of the prismatic link for radial extension/retraction motion; a solid boom requires clearance to fully retract.

71.4.3 Drive Types of Commercial Robots

The vast majority of commercial industrial robots use electric servomotor drives with speed-reducing transmissions. Both ac and dc motors are popular. Some servohydraulic articulated arm robots are available now for painting applications. It is rare to find robots with servopneumatic drive axes. All types of mechanical transmissions are used, but the tendency is toward low- and zero-backlash drives. Some robots use direct-drive methods to eliminate the amplification of inertia and mechanical backlash associated with other drives. Joint angle position sensors, required for real-time servo-level control, are generally considered an important part of the drive train. Less often, velocity feedback sensors are provided.

71.4.4 Commercial Robot Controllers

Commercial robot controllers are specialized multiprocessor computing systems that provide four basic processes allowing integration of the robot into an automation system: motion trajectory generation and following, motion/process integration and sequencing, human user integration, and information integration.

71.4.4.1 Motion Trajectory Generation and Following

There are two important controller-related aspects of industrial robot motion generation. One is the extent of manipulation that can be programmed; the other is the ability to execute controlled programmed motion. A unique aspect of each robot system is its real-time servo-level motion control. The details of real-time control are typically not revealed to the user for safety reasons and to protect proprietary information.


FIGURE 71.12 Spherical and cylindrical robots. (a) Hydraulic powered spherical robot. (Courtesy of Kohol Systems, Inc. With permission.) (b) Cylindrical arm using scissor mechanism for radial prismatic motion. (Courtesy of Yamaha Robotics.)

Each robot controller, through its operating system programs, converts digital data from higher level coordinators into coordinated arm motion through precise computation and high-speed distribution and communication of the individual axis motion commands, which are executed by individual joint servocontrollers. Most commercial robot controllers operate at a sample period of 16 ms. The real-time motion controller invariably uses classical independent-joint proportional-integral-derivative (PID) control or simple modifications of PID. This makes commercially available controllers suitable for point-to-point motion, but most are not suitable for following continuous position/velocity profiles or exerting prescribed forces without considerable programming effort, if at all.

71.4.4.2 Motion/Process Integration and Sequencing

Motion/process integration involves coordinating manipulator motion with process sensors or other process controller devices. The most primitive process integration is through discrete digital input/output (I/O). For example, a machine controller external to the robot controller might send a 1-bit signal indicating that it is ready to be loaded by the robot. The robot controller must have the ability to read the signal and to perform logical operations (if-then, wait-until, do-until, etc.) using the signal. Coordination with sensors (e.g., vision) is also often provided.
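This wait-until logic around a 1-bit ready signal can be sketched as follows; `read_input`, the channel number, and the robot commands in the usage comment are hypothetical stand-ins, not any vendor's actual API.

```python
# Sketch of primitive discrete-I/O process integration: the robot controller
# polls a 1-bit "ready to load" signal from an external machine controller.
import time

def wait_until(read_input, channel, value=1, timeout=30.0, period=0.05):
    """Block until digital input `channel` reads `value` (wait-until logic)."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if read_input(channel) == value:
            return True
        time.sleep(period)
    return False                    # caller treats the timeout as an error

# if-then / wait-until sequencing around a load operation (hypothetical API):
# if wait_until(io.read, channel=3):    # machine signals it is ready
#     robot.move_to(load_pose)          # then perform the load job
```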

71.4.4.3 Human Integration

The controller's human interfaces are critical to the expeditious setup and programming of robot systems. Most robot controllers have two types of human interface available: computer-style CRT/keyboard terminals for writing and editing program code off line, and teach pendants, which are portable manual input terminals used to command motion in a telerobotic fashion via touch keys or joysticks.


Teach pendants are usually the most efficient means available for positioning the robot, and a memory in the controller makes it possible to play back the taught positions to execute motion trajectories. With practice, human operators can quickly teach a series of points, which are chained together in playback mode. Most robot applications currently depend on the integration of human expertise during the programming phase for the successful planning and coordination of robot motion. These interface mechanisms are effective in unobstructed workspaces where no changes occur between programming and execution. They do not allow human interface during execution or adaptation to changing environments.

71.4.4.4 Information Integration

Information integration is becoming more important as the trend toward increasing flexibility and agility impacts robotics. Many commercial robot controllers now support information integration functions by employing integrated personal computer (PC) interfaces through the communications ports (e.g., RS-232), or in some cases through direct connections to the robot controller data bus.

71.5 Robot Kinematics, Dynamics, and Servo-Level Control

In this section we shall study the kinematics, dynamics, and servocontrol of robot manipulators; for more details see Lewis et al. [1993]. The objective is to turn the manipulator, by proper design of the control system and trajectory generator, into an agent with desirable behaviors that can then be selected by the job coordinator to perform specific jobs to achieve some assigned task. This agent, composed of the robot plus servo-level control system and trajectory generator, is the virtual robot in Figure 71.7; this philosophy goes along with the subsumption approach of Brooks (Figure 71.5).

71.5.1 Kinematics and Jacobians

71.5.1.1 Kinematics of Rigid Serial-Link Manipulators

The kinematics of the robot manipulator are concerned only with relative positioning and not with motion effects.

71.5.1.1.1 Link A Matrices

Fixed-base serial-link rigid robot manipulators can be considered as a sequence of joints held together by links. Each joint i has a joint variable $q_i$, which is an angle for revolute joints (units of degrees) and a length for prismatic or extensible joints (units of length). The joint vector of an n-link robot is defined as $q = [q_1 \; q_2 \; \cdots \; q_n]^T \in \mathbb{R}^n$; the joints are traditionally numbered from the base to the end effector, with link 0 being the fixed base. A robot with n joints has n degrees of freedom, so that for complete freedom of positioning and orientation in our 3-D space $\mathbb{R}^3$ one needs a six-link arm.

For analysis purposes, it is considered that to each link is affixed a coordinate frame. The base frame is attached to the manipulator base, link 0. The location of the coordinate frame on the link is often selected according to the Denavit-Hartenberg (DH) convention [Lewis et al. 1993]. The relation between the links is given by the A matrix for link i, which has the form

$$A_i(q_i) = \begin{bmatrix} R_i & p_i \\ 0 & 1 \end{bmatrix} \qquad (71.1)$$

where $R_i(q_i)$ is a $3 \times 3$ rotation matrix ($R_i^{-1} = R_i^T$) and $p_i(q_i) = [x_i \; y_i \; z_i]^T \in \mathbb{R}^3$ is a translation vector. $R_i$ specifies the rotation of the coordinate frame on link i with respect to the coordinate frame on link i − 1; $p_i$ specifies the translation of the coordinate frame on link i with respect to the coordinate frame on link i − 1. The $4 \times 4$ homogeneous transformation $A_i$ thus specifies completely the orientation and translation of link i with respect to link i − 1.


The A matrix $A_i(q_i)$ is a function of the joint variable, so that as $q_i$ changes with robot motion, $A_i$ changes correspondingly. $A_i$ also depends on the parameters link twist and link length, which are fixed for each link. The A matrices are often given for a specific robot in the manufacturer's handbook.

71.5.1.1.2 Robot T Matrix

The position of the end effector is given in terms of the base coordinate frame by the arm T matrix, defined as the concatenation of A matrices:

$$T(q) = A_1(q_1) A_2(q_2) \cdots A_n(q_n) \equiv \begin{bmatrix} R & p \\ 0 & 1 \end{bmatrix} \qquad (71.2)$$

This $4 \times 4$ homogeneous transformation matrix is a function of the joint variable vector q. The $3 \times 3$ cumulative rotation matrix is given by $R(q) = R_1(q_1) R_2(q_2) \cdots R_n(q_n)$.
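As an illustration of Equations 71.1 and 71.2, the following sketch composes standard Denavit-Hartenberg A matrices into T(q). The two-link planar arm at the bottom (unit link lengths, revolute joints) is an assumed example, not a robot from the chapter.

```python
# Numeric sketch of Equations 71.1-71.2: compose link A matrices into T(q).
import numpy as np

def A_matrix(theta, d, a, alpha):
    """4x4 homogeneous DH transform [R p; 0 1] for one link."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def T_matrix(q, dh):
    """T(q) = A1(q1) A2(q2) ... An(qn) for revolute joints."""
    T = np.eye(4)
    for qi, (d, a, alpha) in zip(q, dh):
        T = T @ A_matrix(qi, d, a, alpha)
    return T

dh = [(0.0, 1.0, 0.0), (0.0, 1.0, 0.0)]     # two unit-length planar links
T = T_matrix([np.pi / 4, np.pi / 4], dh)
p = T[:3, 3]                                 # Cartesian translation p(q)
R = T[:3, :3]                                # cumulative rotation R(q)
```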

71.5.1.1.3 Joint Space vs. Cartesian Space

An n-link manipulator has n degrees of freedom, and the position of the end effector is completely fixed once the joint variables $q_i$ are prescribed. This position may be described either in joint coordinates or in Cartesian coordinates. The joint-coordinates position of the end effector is simply given by the value of the n-vector q. The Cartesian position of the end effector is given in terms of the base frame by specifying the orientation and translation of a coordinate frame affixed to the end effector in terms of the base frame; this is exactly the meaning of T(q). That is, T(q) gives the Cartesian position of the end effector.

The Cartesian position of the end effector may be completely specified in our 3-D space by a six-vector; three coordinates are needed for translation and three for orientation. The representation of Cartesian translation by the arm T(q) matrix is suitable, as it is simply given by $p(q) = [x \; y \; z]^T$. Unfortunately, the representation of Cartesian orientation by the arm T matrix is inefficient in that R(q) has nine elements. More efficient representations are given in terms of quaternions or the tool configuration vector.

71.5.1.1.4 Kinematics and Inverse Kinematics Problems

The robot kinematics problem is to determine the Cartesian position of the end effector once the joint variables are given. This is accomplished simply by computing T(q) for a given value of q.

The inverse kinematics problem is to determine the required joint angles $q_i$ to position the end effector at a prescribed Cartesian position. This corresponds to solving Equation 71.2 for $q \in \mathbb{R}^n$ given a desired orientation R and translation p of the end effector. This is not an easy problem, and it may have more than one solution (e.g., think of picking up a coffee cup; one may reach with elbow up, elbow down, etc.). There are various efficient techniques for accomplishing this. One should avoid the functions arcsin and arccos, and use where possible the numerically well-conditioned arctan function.
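For example, for a two-link planar arm the inverse kinematics can be solved in closed form, exhibiting both the elbow-up/elbow-down multiplicity and the use of the well-conditioned arctan (atan2). This is a standard textbook construction sketched here under assumed link lengths, not a method specific to this chapter.

```python
# Closed-form inverse kinematics for a two-link planar arm, illustrating
# solution multiplicity (elbow up vs. elbow down) and the use of atan2.
import numpy as np

def ik_two_link(x, y, a1=1.0, a2=1.0, elbow_up=True):
    """Joint angles (q1, q2) placing the tip of a planar 2-link arm at (x, y)."""
    c2 = (x**2 + y**2 - a1**2 - a2**2) / (2 * a1 * a2)
    if abs(c2) > 1.0:
        raise ValueError("target outside the workspace")
    s2 = np.sqrt(1.0 - c2**2) * (1.0 if elbow_up else -1.0)
    q2 = np.arctan2(s2, c2)                       # atan2, not arccos
    q1 = np.arctan2(y, x) - np.arctan2(a2 * s2, a1 + a2 * c2)
    return q1, q2

q1, q2 = ik_two_link(0.5, 1.2)                    # one of the two solutions
```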

71.5.1.2 Robot Jacobians

71.5.1.2.1 Transformation of Velocity and Acceleration

When the manipulator moves, the joint variable becomes a function of time t. Suppose there is prescribed a generally nonlinear transformation from the joint variable $q(t) \in \mathbb{R}^n$ to another variable $y(t) \in \mathbb{R}^p$ given by

$$y(t) = h(q(t)) \qquad (71.3)$$

An example is provided by the equation $y = T(q)$, where y(t) is the Cartesian position. Taking partial derivatives one obtains

$$\dot{y} = \frac{\partial h}{\partial q}\,\dot{q} \equiv J(q)\,\dot{q} \qquad (71.4)$$

where J(q) is the Jacobian associated with h(q). This equation tells how the joint velocities $\dot{q}$ are transformed to the velocity $\dot{y}$.


If $y = T(q)$, the Cartesian end-effector position, then the associated Jacobian J(q) is known as the manipulator Jacobian. There are several techniques for efficiently computing this particular Jacobian; there are some complications arising from the fact that the representation of orientation in the homogeneous transformation T(q) is a $3 \times 3$ rotation matrix and not a three-vector. If the arm has n links, then the Jacobian is a $6 \times n$ matrix; if n is less than 6 (e.g., a SCARA arm), then J(q) is not square and there is not full positioning freedom of the end effector in 3-D space. The singularities of J(q) (where it loses rank) define the limits of the robot workspace; singularities may occur within the workspace for some arms.

Another example of interest is when y(t) is the position in a camera coordinate frame. Then J(q) reveals the relationships between manipulator joint velocities (e.g., joint incremental motions) and incremental motions in the camera image. This affords a technique, for instance, for moving the arm to cause desired relative motion of a camera and a workpiece. Note that, according to the velocity transformation 71.4, incremental motions are transformed according to $\Delta y = J(q)\,\Delta q$.

Differentiating Equation 71.4 one obtains the acceleration transformation

$$\ddot{y} = \dot{J}\dot{q} + J\ddot{q} \qquad (71.5)$$

71.5.1.2.2 Force Transformation

Using the notion of virtual work, it can be shown that forces in terms of q may be transformed to forces in terms of y using

$$\tau = J^T(q)\,F \qquad (71.6)$$

where $\tau(t)$ is the force in joint space (given as an n-vector of torques for a revolute robot), and F is the force vector in y space. If y is the Cartesian position, then F is a vector of three forces $[f_x \; f_y \; f_z]^T$ and three torques $[\tau_x \; \tau_y \; \tau_z]^T$. When J(q) loses rank, the arm cannot exert forces in all directions that may be specified.
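The following sketch illustrates Equations 71.4 and 71.6 numerically for the same two-link planar arm used above: one Jacobian maps joint rates to Cartesian velocity, and its transpose maps a Cartesian force back to joint torques. Unit link lengths and the sample values are assumed.

```python
# Numeric sketch of Equations 71.4 and 71.6 for a planar two-link arm.
import numpy as np

def jacobian(q1, q2, a1=1.0, a2=1.0):
    """2x2 positional Jacobian of the planar two-link arm tip."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-a1 * s1 - a2 * s12, -a2 * s12],
                     [ a1 * c1 + a2 * c12,  a2 * c12]])

J = jacobian(0.3, 0.5)
qdot = np.array([0.1, -0.2])
ydot = J @ qdot                 # velocity transformation (71.4)

F = np.array([1.0, 0.0])        # 1 N Cartesian force along x
tau = J.T @ F                   # force transformation (71.6)
```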

71.5.2 Robot Dynamics and Properties

The robot dynamics consider motion effects due to the control inputs and inertias, Coriolis forces, gravity, disturbances, and other effects. They reveal the relation between the control inputs and the joint variable motion q(t), which is required for the purpose of servocontrol system design.

71.5.2.1 Robot Dynamics

The dynamics of a rigid robot arm with joint variable $q(t) \in \mathbb{R}^n$ are given by

$$M(q)\ddot{q} + V_m(q, \dot{q})\dot{q} + F(q, \dot{q}) + G(q) + \tau_d = \tau \qquad (71.7)$$

where M is an inertia matrix, $V_m$ is a matrix of Coriolis and centripetal terms, F is a friction vector, G is a gravity vector, and $\tau_d$ is a vector of disturbances. The n-vector $\tau(t)$ is the control input. The dynamics for a specific robot arm are not usually given in the manufacturer specifications, but may be computed from the kinematics A matrices using principles of Lagrangian mechanics.

The dynamics of any actuators can be included in the robot dynamics. For instance, the electric or hydraulic motors that move the joints can be included, along with any gearing. Then, as long as the gearing and drive shafts are noncompliant, the arm-plus-actuator dynamics have the same form as Equation 71.7. If the actuators are not included, the control $\tau$ is a torque input vector for the joints. If actuator dynamics are included, then $\tau$ might be, for example, a vector of voltage inputs to the joint actuator motors.

The dynamics may be expressed in Cartesian coordinates. The Cartesian dynamics have the same form as Equation 71.7, but appearances there of q(t) are replaced by the Cartesian position y(t). The matrices are modified, with the manipulator Jacobian J(q) becoming involved. In the Cartesian dynamics, the control input is a six-vector of forces: three linear forces and three torques.


71.5.2.2 Robot Dynamics Properties

Being a Lagrangian system, the robot dynamics satisfy many physical properties that can be used to simplify the design of servo-level controllers. For instance, the inertia matrix M(q) is symmetric positive definite, and bounded above and below by some known bounds. The gravity terms are bounded above by known bounds. The Coriolis/centripetal matrix $V_m$ is linear in $\dot{q}$, and is bounded above by known bounds. An important property is the skew-symmetry property of rigid-link robot arms, which says that the matrix $(\dot{M} - 2V_m)$ is always skew symmetric.

This is a statement of the fact that the fictitious forces do no work, and is related in an intimate fashion to the passivity properties of Lagrangian systems, which can be used to simplify control system design. Ignoring passivity can lead to unacceptable servocontrol system design and serious degradations in performance, especially in teleoperation systems with transmission delays.

71.5.2.3 State-Space Formulations and Computer Simulation

Many commercially available controls design software packages, including MATLAB, allow the simulation of state-space systems of the form $\dot{x} = f(x, u)$ using, for instance, Runge-Kutta integration. The robot dynamics can be written in state-space form in several different ways. One state-space formulation is the position/velocity form

$$\dot{x}_1 = x_2$$
$$\dot{x}_2 = -M^{-1}(x_1)\left[V_m(x_1, x_2)x_2 + F(x_1, x_2) + G(x_1) + \tau_d\right] + M^{-1}(x_1)\tau \qquad (71.8)$$

where the control input is $u = M^{-1}(x_1)\tau$, and the state is $x = [x_1^T \; x_2^T]^T$, with $x_1 = q$ and $x_2 = \dot{q}$ both n-vectors. In computation, one should not invert M(q); one should either obtain an analytic expression for $M^{-1}$ or use least-squares techniques to determine $\dot{x}_2$.
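As a concrete sketch of simulating the position/velocity form 71.8 with Runge-Kutta integration, consider a single-link arm (a point mass m at radius l), for which M, F, and G reduce to scalars; all parameter values here are assumed for illustration.

```python
# Sketch: simulate the position/velocity state-space form (71.8) for a
# single-link arm. M is scalar here, so no matrix inversion arises.
import numpy as np
from scipy.integrate import solve_ivp

m, l, g, b = 1.0, 1.0, 9.81, 0.1       # mass, length, gravity, viscous friction

def robot_state_space(t, x):
    q, qdot = x                         # x1 = q, x2 = qdot
    M = m * l**2                        # inertia
    G = m * g * l * np.cos(q)           # gravity term
    F = b * qdot                        # friction term
    tau = 0.0                           # zero control input for this run
    qddot = (tau - F - G) / M           # x2_dot from Equation (71.8)
    return [qdot, qddot]

sol = solve_ivp(robot_state_space, (0.0, 5.0), [np.pi / 4, 0.0], max_step=0.01)
q_traj = sol.y[0]                       # joint angle history q(t)
```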

71.5.3 Robot Servo-level Motion Control

The objective in robot servo-level motion control is to cause the manipulator end effector to follow a prescribed trajectory. This can be accomplished as follows for any system having the dynamics of Equation 71.7, including robots, robots with actuators included, and robots with motion described in Cartesian coordinates. Generally, design is accomplished for robots including actuators, but with motion described in joint space. In this case, first solve the inverse kinematics problem to convert the desired end-effector motion $y_d(t)$ (usually specified in Cartesian coordinates) into a desired joint-space trajectory $q_d(t) \in \mathbb{R}^n$ (discussed subsequently). Then, to achieve tracking motion so that the actual joint variables q(t) follow the prescribed trajectory $q_d(t)$, define the tracking error e(t) and filtered tracking error r(t) as

$$e(t) = q_d(t) - q(t) \qquad (71.9)$$

$$r(t) = \dot{e} + \Lambda e(t) \qquad (71.10)$$

with $\Lambda$ a positive definite design parameter matrix; it is common to select $\Lambda$ diagonal with positive elements.

71.5.3.1 Computed Torque Control

One may differentiate Equation 71.10 to write the robot dynamics Equation 71.7 in terms of the filtered tracking error as

$$M\dot{r} = -V_m r + f(x) + \tau_d - \tau \qquad (71.11)$$

where the nonlinear robot function is given by

$$f(x) = M(q)(\ddot{q}_d + \Lambda\dot{e}) + V_m(q, \dot{q})(\dot{q}_d + \Lambda e) + F(q, \dot{q}) + G(q) \qquad (71.12)$$


FIGURE 71.13 Robot servo-level tracking controller.

Vector x contains all of the time signals needed to compute $f(\cdot)$, and may be defined for instance as $x \equiv [e^T \; \dot{e}^T \; q_d^T \; \dot{q}_d^T \; \ddot{q}_d^T]^T$. It is important to note that f(x) contains all the potentially unknown robot arm parameters, including payload masses, friction coefficients, and Coriolis/centripetal terms that may simply be too complicated to compute.

A general sort of servo-level tracking controller is now obtained by selecting the control input as

$$\tau = \hat{f} + K_v r - v(t) \qquad (71.13)$$

with $\hat{f}$ an estimate of the nonlinear terms f(x), $K_v r = K_v\dot{e} + K_v\Lambda e$ an outer proportional-plus-derivative (PD) tracking loop, and v(t) an auxiliary signal to provide robustness in the face of disturbances and modeling errors. The multiloop control structure implied by this scheme is shown in Figure 71.13. The nonlinear inner loop that computes $\hat{f}(x)$ provides feedforward compensation terms that improve the tracking capabilities of the PD outer loop, including an acceleration feedforward term $M(q)\ddot{q}_d$, friction compensation $F(q, \dot{q})$, and a gravity compensation term G(q).

This controller is a variant of computed-torque control, since the torque required for trajectory following is computed in terms of the tracking error and the additional nonlinear robot terms in f(x). An integrator may be added in the outer tracking loop to ensure zero steady-state error, obtaining a PID outer loop.
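A minimal sketch of the computed-torque law, Equations 71.9 through 71.13, for the single-link arm used in the simulation sketch above; the gains, the desired trajectory, and the choice v(t) = 0 are assumed example values.

```python
# Sketch of the computed-torque law (71.9)-(71.13) for the single-link arm:
# exact feedforward f_hat plus an outer PD loop Kv*r, with v(t) = 0.
import numpy as np

m, l, g, b = 1.0, 1.0, 9.81, 0.1        # arm parameters (assumed)
Lam, Kv = 5.0, 10.0                      # Lambda and Kv design gains (assumed)

def computed_torque(t, q, qdot):
    qd = np.sin(t)                       # desired trajectory qd(t) (assumed)
    qd_dot, qd_ddot = np.cos(t), -np.sin(t)
    e, e_dot = qd - q, qd_dot - qdot     # tracking error, Equation (71.9)
    r = e_dot + Lam * e                  # filtered error, Equation (71.10)
    # nonlinear robot function estimate f_hat, Equation (71.12);
    # Vm = 0 for a single link, F = b*qdot, G = m*g*l*cos(q):
    f_hat = m * l**2 * (qd_ddot + Lam * e_dot) + b * qdot + m * g * l * np.cos(q)
    return f_hat + Kv * r                # control law (71.13) with v(t) = 0
```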

71.5.3.2 Commercial Robot Controllers

Commercial robot controllers do not implement the entire computed-torque law. Most available controllers simply use a PD or PID control loop around each joint, dispensing entirely with the inner nonlinear compensation loop f(x). It is not clear exactly what is going on in most commercially available controllers, as they are proprietary and the user has no way to modify the joint tracking loops. However, in some controllers (e.g., Adept Hyperdrive), there appears to be some inner-loop compensation, where some of the terms in f(x) are included in $\tau(t)$. For instance, acceleration feedforward may be included. To implement nonlinear feedback terms that are not already built in on commercial controllers, it is usually necessary to perform hardware modifications of the controller.

71.5.3.3 Adaptive and Robust Control

There are by now many advanced control techniques for robot manipulators that either estimate the nonlinear robot function or otherwise compensate for uncertainties in f(x).


In adaptive control the estimate $\hat{f}$ of the nonlinear terms is updated online in real time using additional internal controller dynamics, and in robust control the robustifying signal v(t) is selected to overbound the system modeling uncertainties. In learning control, the nonlinearity correction term is improved over each repetition of the trajectory using the tracking error over the previous repetition (this is useful in repetitive motion applications, including spray painting). Neural networks (NN) or fuzzy logic (FL) systems can be used in the inner control loop to manufacture the nonlinear estimate $\hat{f}(x)$ [Lewis et al. 1995]. Since both NN and FL systems have a universal approximation property, the restrictive linear-in-the-parameters assumption required in standard adaptive control techniques is not needed, and no regression matrix need be computed. FL systems may also be used in the outer PID tracking loop to provide additional robustness.

Though these advanced techniques significantly improve the tracking performance of robot manipulators, they cannot be implemented on existing commercial robot controllers without hardware modifications.

71.5.4 Robot Force/Torque Servocontrol

In many industrial applications it is desired for the robot to exert a prescribed force normal to a given surface while following a prescribed motion trajectory tangential to the surface. This is the case in surface finishing and similar operations. A hybrid position/force controller can be designed by extension of the principles just presented.

The robot dynamics with environmental contact can be described by

$$M(q)\ddot{q} + V_m(q, \dot{q})\dot{q} + F(q, \dot{q}) + G(q) + \tau_d = \tau + J^T(q)\lambda \qquad (71.14)$$

where J(q) is a constraint Jacobian matrix associated with the contact surface geometry and $\lambda$ (the so-called Lagrange multiplier) is a vector of contact forces exerted normal to the surface, described in coordinates relative to the surface.

The hybrid position/force control problem is to follow a prescribed motion trajectory $q_{1d}(t)$ tangential to the surface while exerting a prescribed contact force $\lambda_d(t)$ normal to the surface. Define the filtered motion error $r_m = \dot{e}_m + \Lambda e_m$, where $e_m = q_{1d} - q_1$ represents the motion error in the plane of the surface and $\Lambda$ is a positive diagonal design matrix. Define the force error as $\tilde{\lambda} = \lambda_d - \lambda$, where $\lambda(t)$ is the normal force measured in a coordinate frame attached to the surface. Then a hybrid position/force controller has the structure

$$\tau = \hat{f} + K_v L(q_1) r_m + J^T[\lambda_d + K_f\tilde{\lambda}] - v \qquad (71.15)$$

where $\hat{f}$ is an estimate of the nonlinear robot function 71.12 and $L(\cdot)$ is an extended Jacobian determined from the surface geometry using the implicit function theorem.

This controller has the basic structure of Figure 71.13, but with an additional inner force control loop. In the hybrid position/force controller, the nonlinear function estimate $\hat{f}$ in the inner loop and the robustifying term v(t) can be selected using adaptive, robust, learning, neural, or fuzzy techniques. A simplified controller that may work in some applications is obtained by setting $\hat{f} = 0$ and v(t) = 0 and increasing the PD motion gains $K_v\Lambda$ and $K_v$ and the force gain $K_f$.

It is generally not possible to implement force control on existing commercial robot controllers without hardware modification and extensive low-level programming.

71.5.5 Motion Trajectory Generation

In the section on servo-level motion control it was shown how to design real-time servo-level control loops for the robot joint actuators to cause the manipulator to follow a prescribed joint-space trajectory $q_d(t)$ and, if required by the job, to exert forces normal to a surface specified by a prescribed force trajectory $\lambda_d(t)$. Unfortunately, the higher level path planner and job coordinator in Figure 71.7 do not specify the position and force trajectories in the detail required by the servo-level controllers. Most commercial robot controllers operate at a sampling period of 16 ms, so that they require specific desired motion trajectories $q_d(t)$ sampled every 16 ms.


On the other hand, the path planner wishes to be concerned at the level of abstraction only with general path descriptions sufficient to avoid obstacles or accomplish desired high-level jobs (e.g., move to a prescribed final position, then insert a pin in a hole).

71.5.5.1 Path Transformation and Trajectory Interpolation

71.5.5.1.1 Joint Space vs. Cartesian Space Prescribed Trajectories

The job coordinator in Figure 71.7 passes required path-following commands to the virtual robot in the form of discrete events to be accomplished, which could be in the form of commands to "move to a specified final position passing through prescribed via points." These prescribed path via points are given in Cartesian coordinates y, are usually not regularly spaced in time, and may or may not have required times of transit associated with them. Via points are given in the form $(y_i, \dot{y}_i, t_i)$, with $y_i$ the required Cartesian position at point i and $\dot{y}_i$ the required velocity. The time of transit $t_i$ may or may not be specified. The irregularly spaced Cartesian-space via points must be interpolated to produce joint-space trajectory points regularly spaced at every sampling instant, often every 16 ms. It should be clearly understood that the path and the joint trajectory are both prescribed for each coordinate: the path for three Cartesian position coordinates and three Cartesian orientation coordinates, and the trajectory for each of the n manipulator joints. If n is not equal to 6, there could be problems in that the manipulator might not be able to exactly reach the prescribed via points. Thus, in its planning process the path planner must take into account the limitations of the individual robots.

Two procedures may be used to convert prescribed Cartesian path via points into desired joint-space trajectory points specified every 16 ms. One may either: (1) use the arm inverse kinematics to compute the via points in joint-space coordinates and then perform trajectory interpolation in joint space, or (2) perform interpolation on the via points to obtain a Cartesian trajectory specified every 16 ms, and then perform the inverse kinematics transformation to yield the joint-space trajectory q_d(t) for the servo-level controller. The main disadvantage of the latter procedure is that the full inverse kinematics transformation must be performed every 16 ms. The main disadvantage of the former procedure is that interpolation in joint space often has strange effects, such as unexpected motions or curvilinear swings when viewed from the point of view of Cartesian space; one should recall that the path planner selects via points in Cartesian space, e.g., to avoid obstacles, often assuming linear Cartesian motion between the via points. The latter problem may be mitigated by spacing the Cartesian path via points more closely together. Thus, procedure 1 is usually selected in robotic workcell applications.

71.5.5.1.2 Trajectory Interpolation
A trajectory specified in terms of via points, either in joint space or Cartesian space, may be interpolated to obtain connecting points every 16 ms by many techniques, including interpolation by cubic polynomials, second- or third-order splines, minimum-time techniques, etc. The interpolation must be performed separately for each coordinate of the trajectory (e.g., n interpolations if done in joint space). Cubic interpolation is not recommended as it can result in unexpected swings or overshoots in the computed trajectory.

The most popular technique for trajectory interpolation may be linear functions with parabolic blends (LFPB). Let us assume that the path via points are specified in joint space, so that the inverse kinematics transformation from the Cartesian path via points obtained from the path planner has already been performed. Then, the path is specified in terms of the via points (q(t_i), q̇(t_i), t_i); note that the time of transit t_i of point i is specified; the transit times need not be uniformly spaced. Within each path segment connecting two via points, one uses constant acceleration or deceleration to obtain the required transit velocity, then zero acceleration during the transit, then constant acceleration or deceleration to obtain the prescribed final position and velocity at the next via point. Sample LFPB trajectories are given in Figure 71.14. Note that LFPB results in quadratic motion, followed by linear motion, followed by quadratic motion. The maximum acceleration/deceleration is selected taking into account the joint actuator torque limits.

FIGURE 71.14 LFPB trajectory: (a) acceleration profile, (b) velocity profile, and (c) position profile.

There are standard formulas available to compute the LFPB trajectory passing through two prescribed via points, for instance the following. In Figure 71.14 two design parameters are selected: the blend time t_b and the maximum velocity v_M. Then the joint-space trajectory passing through via points i and (i + 1), shown in Figure 71.14(c), is given by

q_d(t) =
\begin{cases}
a + (t - t_i)\,b + (t - t_i)^2 c, & t_i \le t < t_i + t_b \\
d + v_M t, & t_i + t_b \le t < t_{i+1} - t_b \\
e + (t - t_{i+1})\,f + (t - t_{i+1})^2 g, & t_{i+1} - t_b \le t < t_{i+1}
\end{cases}
\qquad (71.16)

It is not difficult to determine that the coefficients required to pass through the ith and (i + 1)st via points are given by

a = q(t_i), \qquad b = \dot{q}(t_i), \qquad c = \frac{v_M - \dot{q}(t_i)}{2 t_b}

d = \frac{q(t_i) + q(t_{i+1}) - v_M t_{i+1}}{2}

e = q(t_{i+1}), \qquad f = \dot{q}(t_{i+1})

g = \frac{v_M t_{i+1} + q(t_i) - q(t_{i+1}) + 2 t_b [\dot{q}(t_{i+1}) - v_M]}{2 t_b^2} \qquad (71.17)

One must realize that this interpolation must be performed for each of the n joints of the robot. Then, the resulting trajectory n-vector is passed as a prescribed trajectory to the servo-level controller, which functions as in the Robot Servo-level Motion Control subsection to cause trajectory-following arm motion.
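As an illustration, a minimal sketch (Python; all names are illustrative) of Equations 71.16 and 71.17 for a single joint and a single segment follows; note that, since the coefficient d of Equation 71.17 is written with t_i taken as the time origin, the example uses t_i = 0. In practice this is repeated for all n joints and all path segments.

```python
def lfpb_segment(q_i, qdot_i, q_ip1, qdot_ip1, t_i, t_ip1, t_b, v_M):
    """Return q_d(t) between via points i and i+1 per Eqs. 71.16-71.17 (one joint)."""
    a, b = q_i, qdot_i
    c = (v_M - qdot_i) / (2.0 * t_b)
    d = (q_i + q_ip1 - v_M * t_ip1) / 2.0
    e, f = q_ip1, qdot_ip1
    g = (v_M * t_ip1 + q_i - q_ip1 + 2.0 * t_b * (qdot_ip1 - v_M)) / (2.0 * t_b**2)

    def q_d(t):
        if t < t_i + t_b:                 # initial parabolic blend
            return a + (t - t_i) * b + (t - t_i)**2 * c
        elif t < t_ip1 - t_b:             # constant-velocity linear section
            return d + v_M * t
        else:                             # final parabolic blend
            return e + (t - t_ip1) * f + (t - t_ip1)**2 * g
    return q_d

# Sample the segment every 16 ms for the servo-level controller
qd = lfpb_segment(0.0, 0.0, 1.0, 0.0, t_i=0.0, t_ip1=2.0, t_b=0.4, v_M=0.625)
trajectory = [qd(0.016 * k) for k in range(126)]
```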


71.5.5.2 Types of Trajectories and Limitations of Commercial Robot Controllers

The two basic types of trajectories of interest are motion trajectories and force trajectories. Motion specifications can be either in terms of motion from one prescribed point to another, or in terms of following a prescribed position/velocity/acceleration motion profile (e.g., spray painting).

In robotic assembly tasks point-to-point motion is usually used, without prescribing any required transit time. Such motion can be programmed with commercially available controllers using standard robot programming languages (Section 71.12). Alternatively, via points can usually be taught using a telerobotic teach pendant operated by the user (Section 71.11); the robot memorizes the via points and effectively plays them back in operational mode. A speed parameter may be set prior to the motion that tells the robot whether to move more slowly or more quickly. Trajectory interpolation is automatically performed by the robot controller, which then executes PD or PID control at the joint servocontrol level to cause the desired motion. This is by far the most common form of robot motion control.

In point-to-point motion control the commercial robot controller performs trajectory interpolation and joint-level PD servocontrol. All of this is transparent to the user. Generally, it is very difficult to modify any stage of this process since the internal controller workings are proprietary, and the controller hardware does not support more exotic trajectory interpolation or servo-level control schemes. Though some robots by now do support following of prescribed position/velocity/acceleration profiles, it is generally extremely difficult to program them to do so, and especially to modify the paths once programmed. Various tricks must be used, such as specifying the Cartesian via points (y_i, ẏ_i, t_i) in very fine time increments, and computing t_i such that the desired acceleration is produced.

The situation is even worse for force control, where additional sensors must be added to sense forces (e.g., wrist force-torque sensor, see Section 71.7), kinematic computations based on the given surface must be performed to decompose the tangential motion control directions from the normal force control directions, and then very tedious low-level programming must be performed. Changes in the surface or the desired motion or force profiles require time-consuming reprogramming. In most available robot controllers, hardware modifications are required.

71.6 End Effectors and End-of-Arm Tooling

End effectors and end-of-arm tooling are the devices through which the robot manipulator interacts with the world around it, grasping and manipulating parts, inspecting surfaces, and so on [Wright and Cutkosky 1985]. End effectors should not be considered as accessories, but as a major component in any workcell; proper selection and/or design of end effectors can make the difference between success and failure in many process applications, particularly when one includes reliability, efficiency, and economic factors. End effectors consist of the fingers, the gripper, and the wrist. They can be either standard commercially available mechanisms or specially designed tools, or can be complex systems in themselves (e.g., welding tools or dextrous hands). Sensors can be incorporated in the fingers, the gripper mechanism, or the wrist mechanism. All end effectors, end-of-arm tooling, and supply hoses and cables (electrical, pneumatic, etc.) must be taken into account when considering the manipulator payload weight limits of the manufacturer.

71.6.1 Part Fixtures and Robot Tooling

In most applications the end effector design problem should not be decoupled from the part fixturing design problem. One should consider the wrist, gripper, fingers, and part fixturing as a single system. Integrated design can often yield innovative solutions to otherwise intractable problems; nonintegrated design can often lead to unforeseen problems and unexpected failure modes. Coordinated design of fixtures and end effectors can often avoid the use of high-level expensive sensors (e.g., vision) and/or complex feedback control systems that require overall coordinated control of the robot arm motion, the gripper action, and the part pose. An ideal example of a device that allows simplified control strategies is the remote-center-of-compliance (RCC) wrist in Figure 71.17(b), if correctly used.


FIGURE 71.15 Angular and parallel motion robot grippers: (a) angular motion gripper and (b) parallel motion gripper, open and closed. (Courtesy of Robo-Tech Systems, Gastonia, NC.)


FIGURE 71.16 Robot grippers: (a) center seeking gripper showing part contact by first finger and final closure by second finger and (b) Versagrip III adjustable three-finger gripper. (Courtesy of Robo-Tech Systems, Gastonia, NC.)

71.6.2 Grippers and Fingers

Commercial catalogs usually allow one to purchase end effector components separately, including fingers, grippers, and wrists. Grippers can be actuated either pneumatically or using servomotors. Pneumatic actuation is usually either open or closed, corresponding to a binary command to turn the air pressure either off or on. Grippers often lock into place when the fingers are closed to offer failsafe action if air pressure fails. Servomotors often require analog commands and are used when finer gripper control is required. Available gripping forces span a wide range, up to several hundred pounds force.

71.6.2.1 Gripper Mechanisms

Angular motion grippers, see Figure 71.15a, are inexpensive devices allowing grasping of parts either externally or internally (e.g., fingers insert into a tube and the gripper presses them outward). The fingers can often open or close by 90°. These devices are useful for simple pick-and-place operations. In electronic assembly or tasks where precise part location is needed, it is often necessary to use parallel grippers, see Figure 71.15b, where the finger actuation affords exactly parallel closing motion. Parallel grippers generally have a far smaller range of fingertip motion than angular grippers (e.g., less than 1 in). In some cases, such as electronic assembly of parts positioned by wires, one requires center seeking grippers, see Figure 71.16a, where the fingers are closed until one finger contacts the part, then that finger stops and the other finger closes until the part is grasped.


Many grippers with advanced special-purpose mechanisms are available, including Robo-Tech's Versagrip III shown in Figure 71.16b, a 3-fingered gripper whose fingers can be rotated about a longitudinal axis to offer a wide variety of 3-fingered grasps depending on the application and part geometry. Finger rotation is effected using a fine motion servomotor that can be adjusted as the robot arm moves.

The gripper and/or finger tips can have a wide variety of sensors, including binary part presence detectors, binary closure detectors, analog finger position sensors, contact force sensors, temperature sensors, and so on (Section 71.7).

71.6.2.2 The Grasping Problem and Fingers

The study of the multifinger grasping problem is a highly technical area using mathematical and mechanical engineering analysis techniques such as rolling/slipping concepts, friction studies, force balance and center of gravity studies, etc. [Pertin-Trocaz 1989]. These ideas may be used to determine the required gripper mechanisms, number of fingers, and finger shapes for a specific application. Fingers are usually specially designed for particular applications, and may be custom ordered from end-effector supply houses. Improper design and selection of fingers can doom an application of an expensive robotic system to failure. By contrast, innovative finger and contact tip designs can solve difficult manipulation and grasping problems and greatly increase automation reliability, efficiency, and economic return.

Fingers should not be thought of as being restricted to anthropomorphic forms. They can have vacuum contact tips for grasping smooth fragile surfaces (e.g., auto windshields), electromagnetic tips for handling small ferrous parts, compliant bladders or wraparound air bladders for odd-shaped or slippery parts, Bernoulli effect suction for thin fragile silicon wafers, or membranes covering a powder to distribute contact forces for irregular soft fragile parts [Wright and Cutkosky 1985].

Multipurpose grippers are advantageous in that a single end effector can perform multiple tasks. Some multipurpose devices are commercially available; they are generally expensive. The ideal multipurpose end effector is the anthropomorphic dextrous hand. Several dextrous robot hands are now available and afford potential applications in processes requiring active manipulation of parts or handling of many sorts of tooling. Currently, they are generally restricted to research laboratories since the problems associated with their expense, control, and coordination are not yet completely and reliably solved.

71.6.3 Robot Wrist Mechanisms

Wrist mechanisms couple the gripper to the robot arm, and can perform many functions. Commercial adapter plates allow wrists to be mounted to any commercially available robot arm. As an alternative to expensive multipurpose grippers, quick change wrists allow end effectors to be changed quickly during an application, and include quick disconnect couplings for mechanical, electrical, pneumatic, and other connections. Using a quick change wrist, required tools can be selected from a magazine of available tools/end effectors located at the workcell. If fewer tools are needed, an alternative is provided by inexpensive pivot gripper wrists, such as the 2-gripper-pivot device shown in Figure 71.17a, which allows one of two grippers to be rotated into play. With this device, one gripper can unload a machine while the second gripper subsequently loads a new blank into the machine. Other rotary gripper wrists allow one of several (up to six or more) grippers to be rotated into play. With these wrists, the grippers are mounted in parallel and rotate much like the chamber of an old-fashioned western Colt 45 revolver; they are suitable if the grippers will not physically interfere with each other in such a parallel configuration.

Safety wrists automatically deflect, sending a fault signal to the machine or job coordinator, if the end-of-arm tooling collides with a rigid obstacle. They may be reset automatically when the obstacle is removed.

FIGURE 71.17 Robot wrists: (a) pivot gripper wrist (Courtesy of Robo-Tech Systems, Gastonia, NC) and (b) remote-center-of-compliance (RCC) wrist (Courtesy of Lord Corporation, Erie, PA).

Part positioning errors frequently occur due to robot end effector positioning errors, part variations, machine location errors, or manipulator repeatability errors. It is unreasonable and expensive to require the robot joint controller to compensate exactly for such errors. Compliant wrists offset positioning errors to a large extent by allowing small passive part motions in response to forces or torques exerted on the part. An example is pin insertion, where small positioning errors can result in pin breakage or other failures, and compensation by gross robot arm motions requires sophisticated (e.g., expensive) force-torque sensors and advanced (e.g., expensive) closed-loop feedback force control techniques. The compliant wrist allows the pin to effectively adjust its own position in response to sidewall forces so that it slides into the hole. A particularly effective device is the remote-center-of-compliance (RCC) wrist, Figure 71.17b, where the rotation point of the wrist can be adjusted to correspond, e.g., to the part contact point [Groover et al. 1986]. Compliant wrists allow successful assembly where vision or other expensive sensors would otherwise be needed.

The wrist can contain a wide variety of sensors, with possibly the most important class being the wrist force-torque sensors (Section 71.7), which are quite expensive. A general rule-of-thumb is that, for economic and control complexity reasons, robotic force/torque sensing and control should be performed at the lowest possible level; e.g., fingertip sensors can often provide sufficient force information for most applications, with an RCC wrist compensating for position inaccuracies between the fingers and the parts.

71.6.4 Robot/Tooling Process Integration and Coordination

Many processes require the design of sophisticated end-of-arm tooling. Examples include spray painting guns, welding tools, multipurpose end effectors, and so on. Indeed, in some processes the complexity of the tooling can rival or exceed the complexity of the robot arm that positions it. Successful coordination and sequencing of the robot manipulator, the end effector, and the end-of-arm tooling calls for a variety of considerations at several levels of abstraction in Figure 71.7.

There are two philosophically distinct points of view that may be used in considering the robot manipulator plus its end-of-arm tooling. In the first, the robot plus tooling is viewed as a single virtual agent to be assigned by an upper-level organizer/manager and commanded by a midlevel job coordinator. In this situation, all machine-level robot/tool coordination may be performed by the internal virtual robot machine coordinator shown in Figure 71.7. This point of view is natural when the robot must perform sophisticated trajectory motion during the task and the tool is unintelligent, such as in pick-and-place operations, surface finishing, and grinding. In such situations, the end effector is often controlled by simple binary on/off or open/close commands through digital input/output signals from the machine coordinator. Many commercially available robot controllers allow such communications and support coordination through their programming languages (Section 71.12).

In the second viewpoint, one considers the manipulator as a dumb platform that positions the tooling or maintains its relative motion to the workpiece while the tooling performs a job. This point of view may be taken in the case of processes requiring sophisticated tooling, such as welding. In this situation, the robot manipulator and the tooling may be considered as two separate agents which are coordinated by the higher level job coordinator shown in Figure 71.7.


A variety of processes fall between these two extremes, such as assembly tasks which require some coordinated intelligence by both the manipulator and the tool (insert pin in hole). In such applications both machine-level and task-level coordination may be required. The decomposition of coordination commands into a portion suitable for machine-level coordination and a portion for task-level coordination is not easy. A rule-of-thumb is that any coordination that is invariant from process to process should be apportioned to the lower level (e.g., do not open gripper while robot is in motion). This is closely connected to the appropriate definition of robot/tooling behaviors in the fashion of Brooks [1986].

71.7 Sensors

Sensors and actuators [Tzou and Fukuda 1992] function as transducers, devices through which the workcell planning, coordination, and control system interfaces with the hardware components that make up the workcell. Sensors are a vital element as they convert states of physical devices into signals appropriate for input to the workcell PC&C control system; inappropriate sensors can introduce errors that make proper operation impossible no matter how sophisticated or expensive the PC&C system, whereas innovative selection of sensors can make the control and coordination problem much easier.

71.7.1 The Philosophy of Robotic Workcell Sensors

Sensors are of many different types and have many distinct uses. Having in mind an analogy with biological systems, proprioceptors are sensors internal to a device that yield information about the internal state of that device (e.g., robot arm joint-angle sensors). Exteroceptors yield information about other hardware external to a device. Sensors yield outputs that are either analog or digital; digital sensors often provide information about the status of a machine or resource (gripper open or closed, machine loaded, job complete). Sensors produce inputs that are required at all levels of the PC&C hierarchy, including uses for:

• Servo-level feedback control (usually analog proprioceptors)
• Process monitoring and coordination (often digital exteroceptors or part inspection sensors such as vision)
• Failure and safety monitoring (often digital, e.g., contact sensor, pneumatic pressure-loss sensor)
• Quality control inspection (often vision or scanning laser)

Sensor output data must often be processed to convert it into a form meaningful for PC&C purposes. The sensor plus required signal processing is shown as a virtual sensor in Figure 71.7; it functions as a data abstraction, that is, a set of data plus operations on that data (e.g., camera, plus framegrabber, plus signal processing algorithms such as image enhancement, edge detection, segmentation, etc.). Some sensors, including the proprioceptors needed for servo-level feedback control, are integral parts of their host devices, and so processing of sensor data and use of the data occurs within that device; then, the sensor data is incorporated at the servocontrol level or machine coordination level. Other sensors, often vision systems, rival the robot manipulator in sophistication and are coordinated by the job coordinator, which treats them as valuable shared resources whose use is assigned to jobs that need them by some priority assignment (e.g., dispatching) scheme. An interesting coordination problem is posed by so-called active sensing, where, e.g., a robot may hold a scanning camera, and the camera effectively takes charge of the coordination problem, directing the robot where to move it to effect the maximum reduction in entropy (increase in information) with subsequent images.

71.7.2 Types of Sensors

This section summarizes sensors from an operational point of view. More information on functional and physical principles can be found in Fraden [1993], Fu et al. [1987], Groover et al. [1986], and Snyder [1985].


71.7.2.1 Tactile Sensors

Tactile sensors [Nichols and Lee 1989] rely on physical contact with external objects. Digital sensors such as limit switches, microswitches, and vacuum devices give binary information on whether contact occurs or not. Sensors are available to detect the onset of slippage. Analog sensors such as spring-loaded rods give more information. Tactile sensors based on rubberlike carbon- or silicon-based elastomers with embedded electrical or mechanical components can provide very detailed information about part geometry, location, and more. Elastomers can contain resistive or capacitive elements whose electrical properties change as the elastomer compresses. Designs based on LSI technology can produce tactile grid pads with, e.g., 64 × 64 forcel points on a single pad. Such sensors produce tactile images that have properties akin to digital images from a camera and require similar data processing. Additional tactile sensors fall under the classification of force sensors discussed subsequently.

71.7.2.2 Proximity and Distance Sensors

The noncontact proximity sensors include devices based on the Hall effect or inductive devices based on the electromagnetic effect that can detect ferrous materials within about 5 mm. Such sensors are often digital, yielding binary information about whether or not an object is near. Capacitance-based sensors detect any nearby solid or liquid with ranges of about 5 mm. Optical and ultrasound sensors have longer ranges.

Distance sensors include time-of-flight range finder devices such as sonar and lasers. The commercially available Polaroid sonar offers accuracy of about 1 in up to 5 ft, with angular sector accuracy of about 15°. For 360° coverage in navigation applications for mobile robots, both scanning sonars and ring-mounted multiple sonars are available. Sonar is typically noisy with spurious readings, and requires low-pass filtering and other data processing aimed at reducing the false alarm rate. The more expensive laser range finders are extremely accurate in distance and have very high angular resolution.

71.7.2.3 Position, Velocity, and Acceleration Sensors

Linear position-measuring devices include linear potentiometers and the sonar and laser range finders just discussed. Linear velocity sensors may be laser- or sonar-based Doppler-effect devices.

Joint-angle position and velocity proprioceptors are an important part of the robot arm servocontrol drive axis. Angular position sensors include potentiometers, which use dc voltage, and resolvers, which use ac voltage and have accuracies of ±15 min. Optical encoders can provide extreme accuracy using digital techniques. Incremental optical encoders use three optical sensors and a single ring of alternating opaque/clear areas, Figure 71.18a, to provide angular position relative to a reference point and angular velocity information; commercial devices may have 1200 slots per turn. More expensive absolute optical encoders, Figure 71.18b, have n concentric rings of alternating opaque/clear areas and require n optical sensors. They offer increased accuracy and minimize errors associated with data reading and transmission, particularly if they employ the Grey code, where only one bit changes between two consecutive sectors. Accuracy is 360°/2^n, with commercial devices having n = 12 or so (for n = 12, about 0.09° per sector).
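Decoding an absolute encoder that uses the Grey code reduces to the standard shift-and-XOR conversion; a minimal sketch (Python, assuming the n-bit ring reading is already packed into an integer) follows.

```python
def grey_to_angle(grey_reading, n_bits=12):
    """Convert an n-bit Grey-code sector reading to a binary sector number and
    an angle; resolution is 360/2**n_bits degrees."""
    binary = grey_reading
    shift = 1
    while shift < n_bits:
        binary ^= binary >> shift   # fold higher bits down: standard Grey decode
        shift <<= 1
    return binary, binary * 360.0 / (1 << n_bits)
```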

Gyros have good accuracy if repeatability problems associated with drift are compensated for. Directional gyros have accuracies of about ±1.5°; vertical gyros have accuracies of 0.15° and are available to measure multiaxis motion (e.g., pitch and roll). Rate gyros measure velocities directly with thresholds of 0.05°/s or so.

Various sorts of accelerometers are available based on strain gauges (see the next subsection), gyros, or crystal properties. Commercial devices are available to measure accelerations along three axes.

71.7.2.4 Force and Torque Sensors

FIGURE 71.18 Optical encoders: (a) incremental optical encoder and (b) absolute optical encoder with n = 4 using Grey code. (From Snyder, W. E. 1985. Industrial Robots: Computer Interfacing and Control. Prentice–Hall, Englewood Cliffs, NJ. With permission.)

Various torque sensors are available, though they are often not required; for instance, the internal torques at the joints of a robot arm can be computed from the motor armature currents. Torque sensors on a drilling tool, for instance, can indicate when tools are dull. Linear force can be measured using load cells or strain gauges. A strain gauge is an elastic sensor whose resistance is a function of applied strain or deformation. The piezoelectric effect, the generation of a voltage when a force is applied, may also be used for force sensing. Other force sensing techniques are based on vacuum diodes, quartz crystals (whose resonant frequency changes with applied force), etc.

Robot arm force-torque wrist sensors are extremely useful in dextrous manipulation tasks. Commercially available devices can measure both force and torque along three perpendicular axes, providing full information about the Cartesian force vector F. Transformations such as Equation 71.6 allow computation of forces and torques in other coordinates. Six-axis force-torque sensors are quite expensive.

71.7.2.5 Photoelectric Sensors

A wide variety of photoelectric sensors are available, some based on fiber optic principles. These have speeds of response in the neighborhood of 50 μs with ranges up to about 45 mm, and are useful for detecting parts and labeling, scanning optical bar codes, confirming part passage in sorting tasks, etc.

71.7.2.6 Other Sensors

Various sensors are available for measuring pressure, temperature, fluid flow, etc. These are useful in closed-loop servocontrol applications for some processes such as welding, and in job coordination and/or safety interrupt routines in others.

71.7.3 Sensor Data Processing

Before any sensor can be used in a robotic workcell, it must be calibrated. Depending on the sensor, this could involve significant effort in experimentation, computation, and tuning after installation. Manufacturers often provide calibration procedures, though in some cases, including vision, such procedures may not be obvious, requiring reference to the published scientific literature. Time-consuming recalibration may be needed after any modifications to the system.

Particularly for more complex sensors such as optical encoders, significant sensor signal conditioning and processing is required. This might include amplification of signals, noise rejection, conversion of data from analog to digital or from digital to analog, and so on. Hardware is usually provided for such purposes by the manufacturer and should be considered as part of the sensor package for robot workcell design. The sensor, along with its signal processing hardware and software algorithms, may be considered as a data abstraction and is called the virtual sensor in Figure 71.7.

FIGURE 71.19 Signal processing using FSM for optical encoders: (a) phase relations in incremental optical encoder output and (b) finite state machine to decode encoder output into angular position. (From Snyder, W. E. 1985.)

FIGURE 71.20 Hardware design from FSM: (a) FSM for sonar transducer control on a mobile robot and (b) sonar driver control system from FSM.

If signal processing does need to be addressed, it is often very useful to use finite state machine (FSM) design. A typical signal from an incremental optical encoder is shown in Figure 71.19a; a FSM for decoding this into the angular position is given in Figure 71.19b. FSMs are very easy to convert directly to hardware in terms of logical gates. A FSM for sequencing a sonar is given in Figure 71.20a; the sonar driver hardware derived from this FSM is shown in Figure 71.20b.
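As a concrete FSM illustration (not Snyder's exact machine), the sketch below (Python) decodes the two quadrature channels of an incremental encoder into a signed position count; the states are the sampled (A, B) phase pairs, the index channel is ignored, and each legal transition increments or decrements the count.

```python
# Legal state transitions for quadrature decoding; the four (A, B) states
# follow the cycle 00 -> 01 -> 11 -> 10 in the forward direction.
QUAD_STEP = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def decode_quadrature(samples, state=0b00, count=0):
    """Run the FSM over sampled (A, B) bit pairs; unknown transitions are ignored."""
    for a, b in samples:
        new_state = (a << 1) | b
        count += QUAD_STEP.get((state, new_state), 0)
        state = new_state
    return count

print(decode_quadrature([(0, 1), (1, 1), (1, 0), (0, 0)]))  # one forward cycle -> 4
```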

A particular problem is obtaining angular velocity from angular position measurements. All too often the position measurements are simply differenced using a small sample period to compute velocity. This is guaranteed to lead to problems if there is any noise in the signal. It is almost always necessary to employ a low-pass-filtered derivative where velocity samples v_k are computed from position measurement samples p_k using, e.g.,

v_k = \alpha v_{k-1} + (1 - \alpha)(p_k - p_{k-1})/T \qquad (71.18)

where T is the sample period and α is a small filtering coefficient. A similar approach is needed to compute acceleration.
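A one-function sketch of Equation 71.18 (Python; α and T as in the text):

```python
def filtered_velocity(p, T, alpha, v0=0.0):
    """Low-pass-filtered derivative: v_k = alpha*v_{k-1} + (1 - alpha)*(p_k - p_{k-1})/T."""
    v, out = v0, []
    for k in range(1, len(p)):
        v = alpha * v + (1.0 - alpha) * (p[k] - p[k - 1]) / T
        out.append(v)
    return out
```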


71.7.4 Vision for Robotics

Computer vision is covered in Chapter 43; the purpose of this section is to discuss some aspects of vision that are unique to robotics [Fu et al. 1987, Lee et al. 1991, 1994]. Industrial robotic workcells often require vision systems that are reliable, accurate, low cost, and rugged yet perform sophisticated image processing and decision making functions. Balancing these conflicting demands is not always easy. There are several commercially available vision systems, the most sophisticated of which may be the Adept vision system, which supports multiple cameras. However, it is sometimes necessary to design one's own system. Vision may be used for three purposes in robotic workcells: inspection and quality control, robotic manipulation, and servo-level feedback control. In quality control inspection systems the cameras are often affixed to stationary mounts while parts pass on a conveyor belt. In active vision inspection systems, cameras may be mounted as end effectors of a robot manipulator, which positions the camera for the required shots.

The operational phase of robot vision has six principal areas. Low-level vision includes sensing and preprocessing such as noise reduction, image digitization if required, and edge detection. Medium-level vision includes segmentation, description, and recognition. High-level vision includes interpretation and decision making. Such topics are discussed in Chapter 32. Prior to placing the vision system in operation, one is faced with several design issues including camera selection and illumination techniques, and the problem of system calibration.

71.7.4.1 Cameras and Illumination

Typical commercially available vision systems conform to the RS-170 standard of the 1950s, so that frames are acquired through a framegrabber board at a rate of 30 frames/s. Images are scanned; in a popular U.S. standard, each complete scan or frame consists of 525 lines of which 480 contain image information. This sample rate and image resolutions of this order are adequate for most applications with the exception of vision-based robot arm servoing (discussed subsequently). Robot vision system cameras are usually TV cameras: either the solid-state charge-coupled device (CCD), which is responsive to wavelengths of light from below 350 nm (ultraviolet) to 1100 nm (near infrared) and has peak response at approximately 800 nm, or the charge injection device (CID), which offers a similar spectral response and has a peak response at approximately 650 nm. Both line-scan CCD cameras, having resolutions ranging between 256 and 2048 elements, and area-scan CCD cameras are available. Medium-resolution area-scan cameras yield images of 256 × 256, though high-resolution devices of 1024 × 1024 are by now available. Line-scan cameras are suitable for applications where parts move past the camera, e.g., on conveyor belts. Framegrabbers often support multiple cameras, with a common number being four, and may support black-and-white or color images.

If left to chance, illumination of the robotic workcell will probably result in severe problems in operations. Common problems include low-contrast images, specular reflections, shadows, and extraneous details. Such problems can be corrected by overly sophisticated image processing, but all of this can be avoided by some proper attention to details at the workcell design stage. Illumination techniques include spectral filtering, selection of suitable spectral characteristics of the illumination source, diffuse-lighting techniques, backlighting (which produces easily processed silhouettes), structured-lighting (which provides additional depth information and simplifies object detection and interpretation), and directional lighting.

71.7.4.2 Coordinate Frames and Camera Perspective Transformation

A typical robot vision system is depicted in Figure 71.21, which shows a gimball-mounted camera. Illustrated are the base frame (or world frame) (X, Y, Z), the gimball platform, the camera frame (x, y, z), and the image plane having coordinates (μ, υ).

71.7.4.2.1 Image Coordinates of a Point in Base Coordinates
The primary tools for analysis of robot vision systems are the notion of coordinate transforms and the camera perspective transformation. Four-by-four homogeneous transformations (discussed earlier) are used, as they provide information on translations, rotations, scaling, and perspective.


FIGURE 71.21 Typical robot workcell vision system.

FIGURE 71.22 Homogeneous transformations associated with the robot vision system.

Four homogeneous transformations may be identified in the vision system, as illustrated in Figure 71.22. The gimball transformation G represents the base frame in coordinates affixed to the gimball platform. If the camera is mounted on a robot end effector, G is equal to T^{−1}, with T the robot arm T matrix detailed earlier in Section 71.5; for a stationary-mounted camera G is a constant matrix capturing the camera platform mounting offset r_0 = [X_0 Y_0 Z_0]^T. The pan/tilt transformation R represents the gimball platform with respect to the mounting point of the camera. This rotation transformation is given by

R =
\begin{bmatrix}
\cos\theta & \sin\theta & 0 & 0 \\
-\sin\theta\cos\phi & \cos\theta\cos\phi & \sin\phi & 0 \\
\sin\theta\sin\phi & -\cos\theta\sin\phi & \cos\phi & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\qquad (71.19)

with θ the pan angle and φ the tilt angle. The transformation C captures the offset r = [r_x r_y r_z]^T of the camera frame with respect to the gimball frame. Finally, the perspective transformation

P =
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & -1/\lambda & 1
\end{bmatrix}
\qquad (71.20)

projects a point represented in camera coordinates (x, y, z) onto a position (μ, υ) in the image, where λ is the camera focal length.


In terms of these constructions, the image position of a point w represented in base coordinates as (X, Y, Z) is given by the camera transform equation

c = P\,C\,R\,G\,w \qquad (71.21)

which evaluates in the case of a stationary-mounted camera to the image coordinates

\mu = \frac{\lambda\,[(X - X_0)\cos\theta + (Y - Y_0)\sin\theta - r_x]}{-(X - X_0)\sin\theta\sin\phi + (Y - Y_0)\cos\theta\sin\phi - (Z - Z_0)\cos\phi + r_z + \lambda}

\upsilon = \frac{\lambda\,[-(X - X_0)\sin\theta\cos\phi + (Y - Y_0)\cos\theta\cos\phi + (Z - Z_0)\sin\phi - r_y]}{-(X - X_0)\sin\theta\sin\phi + (Y - Y_0)\cos\theta\sin\phi - (Z - Z_0)\cos\phi + r_z + \lambda}
\qquad (71.22)

71.7.4.2.2 Base Coordinates of a Point in Image Coordinates
In applications, one often requires the inverse of this transformation; that is, from the image coordinates (μ, υ) of a point one wishes to determine its base coordinates (X, Y, Z). Unfortunately, the perspective transformation P is a projection which loses depth information z, so that the inverse perspective transformation P^{−1} is not unique. To compute unique coordinates in the base frame one therefore requires either two cameras, the ultimate usage of which leads to stereo imaging, or multiple shots from a single moving camera. Many techniques have been developed for accomplishing this.

71.7.4.2.3 Camera Calibration
Equation 71.22 has several parameters, including the camera offsets r_0 and r and the focal length λ. These values must be known prior to operation of the camera. They may be measured, or they may be computed by taking images of points w_i with known base coordinates (X_i, Y_i, Z_i). To accomplish this, one must take at least six points w_i and solve a resulting set of nonlinear simultaneous equations. Many procedures have been developed for accomplishing this by efficient algorithms.
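Under the assumption that the forward model of Equation 71.22 is available in code, the calibration step can be posed as a nonlinear least-squares fit over the nine parameters; the sketch below (Python with SciPy; all names illustrative) is one such formulation, not the specific efficient algorithms the text refers to.

```python
import numpy as np
from scipy.optimize import least_squares

def project(w, p):
    """Forward model of Eq. 71.22: p = (X0, Y0, Z0, rx, ry, rz, lam, theta, phi)."""
    X0, Y0, Z0, rx, ry, rz, lam, th, ph = p
    dX, dY, dZ = w[0] - X0, w[1] - Y0, w[2] - Z0
    den = (-dX * np.sin(th) * np.sin(ph) + dY * np.cos(th) * np.sin(ph)
           - dZ * np.cos(ph) + rz + lam)
    mu = lam * (dX * np.cos(th) + dY * np.sin(th) - rx) / den
    ups = lam * (-dX * np.sin(th) * np.cos(ph) + dY * np.cos(th) * np.cos(ph)
                 + dZ * np.sin(ph) - ry) / den
    return mu, ups

def calibrate(points_base, points_image, p0):
    """Fit camera parameters from >= 6 known points and their image coordinates."""
    def residuals(p):
        return np.concatenate([np.subtract(project(w, p), obs)
                               for w, obs in zip(points_base, points_image)])
    return least_squares(residuals, p0).x
```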

71.7.4.3 High-Level Robot Vision Processing

Besides scene interpretation, other high-level vision processing issues must often be confronted, including decision making based on vision data, relation of recognized objects to stored CAD data of parts, recognition of faults or failures from vision data, and so on. Many technical papers have been written on all of these topics.

71.7.4.4 Vision-Based Robot Manipulator Servoing

In standard robotic workcells, vision is not often used for servo-level robot arm feedback control. This is primarily due to the facts that less expensive lower level sensors usually suffice, and that reliable techniques for vision-based servoing are only now beginning to emerge. In vision-based servoing the standard frame rate of 30 frames/s is often unsuitable; higher frame rates are often needed. This means that commercially available vision systems cannot be used. Special purpose cameras and hardware have been developed by several researchers to address this problem, including the vision system in Lee and Blenis [1994].

Once the hardware problems have been solved, one has yet to face the design problem for real-time servocontrollers with vision components in the feedback loop. This problem may be attacked by considering the nonlinear dynamical system of Equation 71.7 with measured outputs given by combining the camera transformation of Equation 71.21 and the arm kinematics transformation of Equation 71.2 [Ghosh et al. 1994].

71.8 Workcell Planning

Specifications for workcell performance vary at distinct levels in the workcell planning, coordination, and control architecture in Figure 71.7. At the machine servocontrol level, motion specifications are given in terms of continuous trajectories in joint space sampled every 16 ms. At the job coordination level, motion specifications are in terms of Cartesian path via points, generally nonuniformly spaced, computed to achieve a prescribed task. Programming at these lower levels involves tedious specifications of points, motions, forces, and times of transit. The difficulties involved with such low-level programming have led to requirements for task-level programming, particularly in modern robot workcells which must be flexible and reconfigurable as products vary in response to the changing desires of customers. The function of the workcell planner is to allow task-level programming from the workcell manager by performing task planning and decomposition and path planning, thereby automatically providing the more detailed specifications required by the job coordinator and servo-level controllers.

71.8.1 Workcell Behaviors and Agents

The virtual machines and virtual sensors in the PC&C architecture of Figure 71.7 are constructed using the considerations discussed in previous sections. These involve commercial robot selection, robot kinematics and servo-level control, end effectors and tooling, and sensor selection and calibration. The result of design at these levels is a set of workcell agents (robots, machines, or sensors), each with a set of behaviors or primitive actions that the agent is capable of. For instance, proper design could allow a robot agent to be capable of behaviors including accurate motion trajectory following, tool changing, force-controlled grinding on a given surface, etc. A camera system might be capable of identifying all Phillips screw heads in a scene, then determining their coordinates and orientation in the base frame of a robot manipulator.

Given the workcell agents with their behaviors, the higher level components in Figure 71.7 must be able to assign tasks and then decompose them into a suitable sequencing of behaviors. This section and the next discuss the higher level PC&C components of workcell planning and job coordination.

71.8.2 Task Decomposition and Planning

A task is a specific goal that must be accomplished, often by an assigned due date. These goals may include completed robotic processes (e.g., weld seam), finished products, and so on. Tasks are accomplished by a sequence of jobs that, when completed, result in the achievement of the final goal. Jobs are specific primitive activities that are accomplished by a well-defined set of resources or agents (e.g., drill hole, load machine). Once a resource has been assigned to a job it can be interpreted as a behavior. To attain the task goal, jobs must usually be performed in some partial ordering, e.g., jobs a and b are immediate prerequisites for job c. The jobs are not usually completely ordered, but have some possibility for concurrency.

At the workcell management level, tasks are assigned, along with their due dates, without specifying details of resource assignment or selection of specific agents. At the job coordination level, the required specifications are in terms of sequences of jobs, with resources assigned, selected to achieve the assigned goal tasks. The function of the workcell planner is to convert between these two performance specification paradigms. In the task planning component, two important transformations are made. First, assigned tasks are decomposed into the required job sequences. Second, workcell agents and resources are assigned to accomplish the individual jobs. The result is a task plan that is passed for execution to the job coordinator.

71.8.2.1 Task Plan

A task plan is a sequence of jobs, along with detailed resource assignments, that will lead to the desired goal task. Jobs with assigned resources can be interpreted as behaviors. Plans should not be overspecified: the required job sequencing is usually only a partial ordering, often with significant concurrency remaining among the jobs. Thus, some decisions based on real-time workcell status should be left to the job coordinator; among these are the final detailed sequencing of the jobs and any dispatching and routing decisions where shared resources are involved.

71.8.2.2 Computer Science Planning Tools

There are well-understood techniques in computer science that can be brought to bear on the robot task planning problem [Fu et al. 1987]. Planning and scheduling is covered in Chapter 67, decision trees in Chapter 65, search techniques in Chapter 30, and decision making under uncertainty in Chapter 70; all of these are relevant to this discussion. However, the structure of the robotic workcell planning problem makes it possible to use some refined and quite rigorous techniques in this chapter, which are introduced in the next subsections.

Task planning can be accomplished using techniques from problem solving and learning, especially learning by analogy. By using plan schema and other replanning techniques, it is possible to modify existing plans when goals or resources change by small amounts. Predicate logic is useful for representing knowledge in the task planning scenario, and many problem solving software packages are based on production systems.

Several task planning techniques use graph theoretic notions that can be attacked using search algorithms such as A*. State-space search techniques allow one to try out various approaches to solving a problem until a suitable solution is found: the set of states reachable from a given initial state forms a graph. A plan is often represented as a finite state machine, with the states possibly representing jobs or primitive actions. Problem reduction techniques can be used to decompose a task into smaller subtasks; in this context it is often convenient to use AND/OR graphs. Means–ends analysis allows both forward and backward search techniques to be used, solving the main parts of a problem first and then going back to solve smaller subproblems.

For workcell assembly and production tasks, product data in CAD form is usually available. Assembly task planning involves specifying a sequence of assembly, and possibly process, steps that will yield the final product in finished form. Disassembly planning techniques work backwards from the final product, performing part disassembly transformations until one arrives at the initial raw materials. Care must be taken to account for part obstructions, etc. The relationships between parts should be specified in terms of symbolic spatial relationships between object features (e.g., place block1-face2 against wedge2-face3 and block1-face1 against wedge2-face1, or place pin in slot). Constructive solid geometric techniques lead to graphs that describe objects in terms of features related by set operations such as intersection, union, etc.

71.8.2.3 Industrial Engineering Planning Tools

In industrial engineering there are well-understood design tools used for product assembly planning, process planning, and resource assignment; they should be used in workcell task planning. The bill of materials (BOM) for a product is a computer printout that breaks down the various subassemblies and component parts needed for the product. It can be viewed as a matrix B whose elements B(i, j) are set to 1 if subassembly j is needed to produce subassembly i. This matrix is known as Steward's Sequencing Matrix; by studying it one can decompose the assembly process into hierarchically interconnected subsystems of subassemblies [Warfield 1973], thereby allowing parallel processing and simplification of the assembly process. The assembly tree [Wolter et al. 1992] is a graphical representation of the BOM.
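As a small sketch of this decomposition (Python; a simple topological layering, assuming the matrix is acyclic, and not Warfield's full procedure), subassemblies can be partitioned into hierarchical levels directly from Steward's matrix:

```python
def assembly_levels(B):
    """Level of each subassembly from Steward's matrix: B[i][j] = 1 if
    subassembly j is needed to produce subassembly i."""
    n = len(B)
    level = [None] * n
    while None in level:
        for i in range(n):
            if level[i] is None:
                needs = [level[j] for j in range(n) if B[i][j]]
                if None not in needs:           # all prerequisites already levelled
                    level[i] = 1 + max(needs, default=-1)
    return level

# Items at the same level can be assembled concurrently
print(assembly_levels([[0, 0, 0], [1, 0, 0], [1, 1, 0]]))  # -> [0, 1, 2]
```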

The resource requirements matrix is a matrix R whose elements R(i, j) are set equal to 1 if resource j is required for job i. The resources may include machines, robots, fixtures, tools, transport devices, and so on. This matrix has been used by several workers for analysis and design of manufacturing systems; it is very straightforward to write down given a set of jobs and available resources. The subassembly tree is an assembly tree with resource information added.

71.8.3 Task Matrix Approach to Workcell Planning

Plans can often be specified as finite state machines. However, in the robot workcell case, FSMs are neither general enough to allow versatile incorporation of workcell status and sensor information, nor specific enough to provide all of the information needed by the job coordinator.

A very general robot workcell task plan can be completely specified by four task plan matrices [Lewis and Huang 1995]. The job sequencing matrix and job start matrix are independent of resources and carry the job sequencing information required for task achievement. Resources are subsequently added by constructing the resource requirements matrix and the resource release matrix. The function of the task planner is to construct these four matrices and pass them to the job coordinator, who uses them for job coordination, sequencing, and resource dispatching. The task plan matrices are straightforward to construct and are easy to modify in the event of goal changes, resource changes, or failures; that is, they accommodate task planning as well as task replanning.

FIGURE 71.23 Product information for task planning: (a) assembly tree with job sequencing information and (b) subassembly tree with resource information added to the jobs.

The task planning techniques advocated here are illustrated through an assembly design example, which shows how to select the four task plan matrices. Though the example is simple, the technique extends directly to more complicated systems using the notions of block matrix (e.g., subsystem) design. First, job sequencing is considered, then the resources are added.

71.8.3.1 Workcell Task Decomposition and Job Sequencing

In Figure 71.23(a) is given an assembly tree which shows the required sequence of actions (jobs) to produce a product. This sequence may be obtained from stored product CAD data through disassembly techniques, etc. The assembly tree contains information analogous to the BOM; it does not include any resource information. Part a enters the workcell and is drilled to produce part b, then assembled with part c to produce part d, which is again drilled (part e) to result in part f, which is the cell output (PO denotes "product out"). The assembly tree imposes only a partial ordering on the sequence of jobs. It is important not to overspecify the task decomposition by imposing additional temporal orderings that are not required for job sequencing.

71.8.3.1.1 Job Sequencing Matrix
Referring to Figure 71.23a, define the job vector as v = [a b c d e f]^T. The Steward's sequencing matrix F_v for the assembly tree in Figure 71.23a, with rows ordered (a, b, c, d, e, f, PO) and columns ordered (a, b, c, d, e, f), is then given by

F_v =
\begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix}
\qquad (71.23)

In this matrix, an entry of 1 in position (i, j) indicates that job j must be completed prior to starting job i. F_v is independent of available resources; in fact, regardless of the resources available, F_v will not change.

71.8.3.1.2 Sequencing State Vector and Job Start Equation
Define a sequencing state vector x, whose components are associated with the vector [a b c d e f PO]^T, that checks the conditions of the rules needed for job sequencing. The components of x may be viewed as situated between the nodes in the assembly tree. Then, the job start equation is

v_s =
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \\ x_7 \end{bmatrix}
\equiv S_v x
\qquad (71.24)

where v_s is the job start command vector. In the job start matrix S_v, an entry of 1 in position (i, j) indicates that job i can be started when component j of the sequencing state vector is active.

In this example, the matrix S_v has 1s in locations (i, i), so that S_v appears to be redundant. This structure follows from the fact that the assembly tree is an upper semilattice, wherein each node has a unique node above it; such a structure occurs in the manufacturing re-entrant flowline with assembly. In the more general job shop with variable part routings the semilattice structure of the assembly tree does not hold. Then, S_v can have multiple entries in a single column, corresponding to different routing options; nodes corresponding to such columns have more than one node above them.

71.8.3.2 Adding the Resources

To build a job dispatching coordination controller for shop-floor installation to perform this particular assembly task, the resources available must now be added. The issue of required and available resources is easily confronted as a separate engineering design issue from job sequence planning. In Figure 71.23b is given a subassembly tree for the assembly task, which includes resource requirements information. This information would in practice be obtained based on the resources and behaviors available in the workcell and could be assigned by a user during the planning stage using interactive software. The figure shows that part input PIc and part output (PO) do not require resources, pallets (P) are needed for part a and its derivative subassemblies, buffers (B1, B2) hold parts a and e, respectively, prior to drilling, and both drilling operations need the same machine (M1). The assembly operation is achieved by fixturing part c in fixture F1 while robot R1 inserts part b.

Note that drilling machine M1 represents a shared resource, which performs two jobs, so that dispatching decision making is needed when the two drilling jobs are simultaneously requested, in order to avoid possible problems with deadlock. This issue is properly faced by the job coordinator in real time, as shown in Section 71.9, not by the task planner. Shared resources impose additional temporal restrictions on the jobs that are not present in the job sequencing matrix; these are concurrency restrictions of the form: both drilling operations may not be performed simultaneously.

71.8.3.2.1 Resource Requirements (RR) Matrix

Referring to Figure 71.23b, define the resource vector as r = [R1A F1A B1A B2A PA M1A]^T, where A denotes available. In the RR matrix Fr, a 1 in entry (i, j) indicates that resource j is needed to activate sequencing vector component xi (e.g., in this example, to accomplish job i). By inspection, therefore, one may write down the RR matrix

        | 0 0 1 0 1 0 |
        | 0 0 0 0 0 1 |
        | 0 1 0 0 0 0 |
   Fr = | 1 0 0 0 0 0 |      (71.25)
        | 0 0 0 1 0 0 |
        | 0 0 0 0 0 1 |
        | 0 0 0 0 0 0 |


Row 3, for instance, means that resource F1A, the fixture, is needed as a precondition for firing x3, which matrix Sv associates with job c. Note that column 6 has two entries of 1, indicating that M1 is a shared resource that is needed for two jobs, b and f. As resources change or machines fail, the RR matrix is easily modified.

71.8.3.2.2 Resource Release Matrix

The last issue to be resolved in this design is that of resource release. Thus, using manufacturing engineering experience and Figure 71.23b, select the resource release matrix Sr in the resource release equation

        | R1As |   | 0 0 0 0 1 0 0 |
        | F1As |   | 0 0 0 1 0 0 0 |
   rs = | B1As | = | 0 1 0 0 0 0 0 | [x1 x2 x3 x4 x5 x6 x7]^T ≡ Sr x      (71.26)
        | B2As |   | 0 0 0 0 0 1 0 |
        | PAs  |   | 0 0 0 0 0 0 1 |
        | M1As |   | 0 0 0 1 0 0 1 |

where subscript s denotes a command to the workcell to start resource release. In the resource release matrix Sr, a 1 entry in position (i, j) indicates that resource i is to be released when entry j of x has become high (e.g., in this example, on completion of job j). It is important to note that rows containing multiple ones in Sr correspond to columns having multiple ones in Fr. For instance, the last row of Sr

shows that M1A is a shared resource, since it is released after either x4 is high or x7 is high; that is, after either job b or job f is complete.
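All four matrices are boolean and sparse, so they are conveniently stored as logical arrays. The following is a minimal Python/NumPy transcription of Equation 71.23 through Equation 71.26 (the variable names are ours); note how the shared resource M1 is recovered mechanically by inspecting the columns of Fr:

import numpy as np

# Job sequencing matrix Fv (rows: a b c d e f PO; cols: a..f), Eq. (71.23)
Fv = np.array([[0,0,0,0,0,0],
               [1,0,0,0,0,0],
               [0,0,0,0,0,0],
               [0,1,1,0,0,0],
               [0,0,0,1,0,0],
               [0,0,0,0,1,0],
               [0,0,0,0,0,1]], dtype=bool)

# Job start matrix Sv = [I 0] (rows: jobs a..f; cols: x1..x7), Eq. (71.24)
Sv = np.hstack([np.eye(6, dtype=bool), np.zeros((6, 1), dtype=bool)])

# Resource requirements Fr (rows: x1..x7; cols: R1A F1A B1A B2A PA M1A), Eq. (71.25)
Fr = np.array([[0,0,1,0,1,0],
               [0,0,0,0,0,1],
               [0,1,0,0,0,0],
               [1,0,0,0,0,0],
               [0,0,0,1,0,0],
               [0,0,0,0,0,1],
               [0,0,0,0,0,0]], dtype=bool)

# Resource release Sr (rows: resources; cols: x1..x7), Eq. (71.26)
Sr = np.array([[0,0,0,0,1,0,0],
               [0,0,0,1,0,0,0],
               [0,1,0,0,0,0,0],
               [0,0,0,0,0,1,0],
               [0,0,0,0,0,0,1],
               [0,0,0,1,0,0,1]], dtype=bool)

# Shared resources appear as columns of Fr with more than one entry
shared = np.flatnonzero(Fr.sum(axis=0) > 1)   # -> [5], i.e., M1A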

71.8.3.3 Petri Net from Task Plan Matrices

It will be shown in Section 71.9 that the four task plan matrices contain all of the information needed to implement a matrix-based job coordination controller on, for instance, a programmable logic workcell controller. However, there has been much discussion of uses of Petri nets in task planning. It is now shown that the four task plan matrices correspond to a Petri net (PN). The job coordinator would not normally be implemented as a Petri net; however, it is straightforward to derive the PN description of a manufacturing system from the matrix controller equations, as shown by the next result.

Theorem 71.1 (Petri net from task plan matrices). Given the four task plan matrices Fv, Sv, Fr, Sr, define the activity completion matrix F and the activity start matrix S as

   F = [Fv  Fr],   S = | Sv |      (71.27)
                       | Sr |

Define X as the set of elements of the sequencing state vector x, and A (activities) as the set of elements of the job and resource vectors v and r. Then (A, X, F, S^T) is a Petri net.

The theorem identifies F as the input incidence matrix and S^T as the output incidence matrix of a PN, so that the PN incidence matrix is given by

   W = S^T − F = [Sv^T − Fv   Sr^T − Fr]      (71.28)

FIGURE 71.24 Petri net representation of workcell with shared resource.

Based on the theorem, the PN in Figure 71.24 is easily drawn for this example. In the figure, initial markings have been added; this is accomplished by determining the number of resources available in the workcell. The inputs uD1, uD2 are required for dispatching the shared resource M1, as discussed in Section 71.9. This theorem provides a formal technique for constructing a PN for a workcell task plan and allows all of the PN analysis tools to be used for analysis of the workcell plan. It formalizes some work in the literature (e.g., top-down and bottom-up design [Zhou et al. 1992]).
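Given the arrays Fv, Sv, Fr, Sr of the earlier sketch, the composite matrices of Theorem 71.1 and the incidence matrix of Equation 71.28 are a short block-matrix computation (a sketch, not a full PN simulator):

import numpy as np

# Assumes Fv, Sv, Fr, Sr as defined in the Section 71.8.3 sketch.
# F = [Fv Fr] and S = [Sv; Sr], Eq. (71.27)
F = np.hstack([Fv, Fr])               # transitions (elements of x) -> input places (v, r)
S = np.vstack([Sv, Sr])               # output places (v, r) fired by each transition

# PN incidence matrix W = S^T - F, Eq. (71.28); cast to int so the
# subtraction is ordinary arithmetic rather than boolean algebra
W = S.T.astype(int) - F.astype(int)   # shape (7, 12)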

Behaviors. All of the PN transitions occur along the job paths. The places in the PN along the job paths correspond to jobs with assigned resources and can be interpreted as behaviors. The places off the task paths correspond to resource availability.

71.8.4 Path Planning

The path planning problem [Latombe 1991] may be decomposed into motion path planning, grasp planning, and error detection and recovery; only the first is considered here. Motion path planning is the process of finding a continuous path from an initial position to a prescribed final position or goal without collision. The output of the path planner for robotic workcells is a set of path via points which are fed to the machine trajectory generator (discussed previously). Off-line path planning can be accomplished if all obstacles are stationary at known positions or moving with known trajectories. Otherwise, on-line or dynamic path planning is required in real time; this often requires techniques of collision or obstacle avoidance. In such situations, paths preplanned off line can often be modified to incorporate collision avoidance. This subsection deals with off-line path planning except for the portion on the potential field approach, which is dynamic planning. See Zhou [1996] for more information.

Initial and final positions may be given in any coordinates, including the robot's joint space. Generally, higher level workcell components think in terms of Cartesian coordinates referred to some world frame. The Cartesian position of a robot end effector is given in terms of three position coordinates and three angular orientation coordinates; therefore, the general 3-D path planning problem occurs in ℝ⁶. If robot joint-space initial and final positions are given, one may work in configuration space, in which points are specified by the joint variable vector q having coordinates qi, the individual joint values. For a six-degrees-of-freedom arm, configuration space is also isomorphic to ℝ⁶. Path planning may also be carried out for initial and final values of force/torque. In 3-D, linear force has three components and torque has three components, again placing the problem in ℝ⁶. Hybrid position/force planning is also possible. In this subsection path planning techniques are illustrated in ℝ², where it is convenient to think in terms of planning paths for mobile robots in a plane.

If the number of degrees of freedom of a robot is less than six, there could be problems in that the manipulator may not be able to reach the prescribed final position and the via points generated in the planning process. Thus, the path planner must be aware of the limitations of the individual robots in its planning process; in fact, it is usually necessary to select a specific robot agent for a task prior to planning the path in order to take such limitations into account.


FIGURE 71.25 Cell decomposition approach to path planning: (a) free space decomposed into cells using the vertical-line-sweep method and (b) connectivity graph for the decomposed space.

71.8.4.1 Cell Decomposition Approach

In the cell decomposition approach to path planning, objects are enclosed in polygons. The object polygons are expanded by an amount equal to the radius of the robot to ensure collision avoidance; then, the robot is treated simply as a moving point. The free space is decomposed into simply connected free-space regions within which any two points may be connected by a straight line. When the Euclidean metric is used to measure distance, convex regions satisfy the latter requirement. A sample cell decomposition is shown in Figure 71.25. The decomposition is not unique; the one shown is generated by sweeping a vertical line across the space. Based on the decomposed space, a connectivity graph may be constructed, as shown in the figure. To the graph may be added weights or costs at the arcs or the nodes, corresponding to distances traveled, etc. Then, graph search techniques may be used to generate the shortest, or otherwise least costly, path.
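For instance, a plain Dijkstra search over the connectivity graph suffices; the cell names and edge costs below are hypothetical stand-ins for the graph of Figure 71.25b:

import heapq

def dijkstra(graph, start, goal):
    """graph: dict node -> list of (neighbor, cost). Returns (cost, path)."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nbr, w in graph.get(node, []):
            if nbr not in visited:
                heapq.heappush(frontier, (cost + w, nbr, path + [nbr]))
    return float('inf'), []

# Hypothetical connectivity graph: nodes are free-space cells, weights are travel costs
cells = {'c1': [('c2', 2.0)], 'c2': [('c1', 2.0), ('c3', 1.5)], 'c3': [('c2', 1.5)]}
print(dijkstra(cells, 'c1', 'c3'))   # -> (3.5, ['c1', 'c2', 'c3'])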

71.8.4.2 Road Map Based on Visibility Graph

In the road map approach the obstacles are modeled as polygons expanded by the radius of the robot, which is treated simply as a moving point. A visibility graph is a nondirected graph whose nodes are the vertices of the polygons and whose links are straight line segments connecting the nodes without intersecting any obstacles. A reduced visibility graph does not contain links that are dominated by other links in terms of distance. Figure 71.26 shows a reduced visibility graph for the free space. Weights may be assigned to the arcs or nodes, and graph search techniques may be used to generate a suitable path. The weights can reflect shortest distance, path smoothness, etc.
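Construction of the visibility graph reduces to a segment-crossing test between each candidate link and every obstacle edge. A simplified sketch under stated assumptions (polygons already grown by the robot radius; links running through a polygon's interior between its own nonadjacent vertices would need an extra midpoint-inside test, omitted here):

def ccw(a, b, c):
    # Twice the signed area of triangle abc; sign gives orientation
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def segments_cross(p1, p2, p3, p4):
    # Proper crossing test; shared endpoints do not count as a crossing
    d1, d2 = ccw(p3, p4, p1), ccw(p3, p4, p2)
    d3, d4 = ccw(p1, p2, p3), ccw(p1, p2, p4)
    return (d1*d2 < 0) and (d3*d4 < 0)

def visibility_graph(nodes, polygons):
    """nodes: candidate points (polygon vertices plus start/goal), as (x, y) tuples.
    polygons: each a list of vertices; edges wrap around each polygon."""
    edges = [(poly[i], poly[(i+1) % len(poly)])
             for poly in polygons for i in range(len(poly))]
    links = []
    for i in range(len(nodes)):
        for j in range(i+1, len(nodes)):
            if not any(segments_cross(nodes[i], nodes[j], a, b) for a, b in edges):
                links.append((nodes[i], nodes[j]))
    return links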

71.8.4.2.1 Road Map Based on Voronoi Diagram

A Voronoi diagram is a diagram where the path segment lines have equal distance from adjacent obstacles. In a polygonal space, the Voronoi diagram consists of straight lines and parabolas: when both adjacent object segments are vertices or straight lines, the equidistant line is straight; when one object is characterized by a vertex and the other by a straight line, the equidistant line is parabolic. In the Voronoi approach, generated paths are generally longer than in the visibility graph approach, but the closest point of approach (CPA) to obstacles is maximized.
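There is no equally short exact construction for the line-and-parabola diagram, but a rough numerical approximation is easy if scipy is assumed available: sample points along the obstacle boundaries, take their point Voronoi diagram, and keep only the ridges that maintain clearance. A sketch under those assumptions:

import numpy as np
from scipy.spatial import Voronoi

def voronoi_roadmap(boundary_pts, clearance):
    """boundary_pts: (N, 2) array of points sampled on obstacle boundaries.
    Returns Voronoi edges whose endpoints keep at least `clearance` distance
    from every sample; these approximate the equidistant road map."""
    vor = Voronoi(boundary_pts)
    def clear(p):
        return np.min(np.linalg.norm(boundary_pts - p, axis=1)) >= clearance
    edges = []
    for v1, v2 in vor.ridge_vertices:
        if v1 >= 0 and v2 >= 0:            # skip ridges that extend to infinity
            p1, p2 = vor.vertices[v1], vor.vertices[v2]
            if clear(p1) and clear(p2):
                edges.append((tuple(p1), tuple(p2)))
    return edges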


FIGURE 71.26 Road map based on visibility graph. (Courtesy of Zhou, C. 1996. Planning and intelligent control. In CRC Handbook of Mechanical Engineering. F. Kreith, Ed. CRC Press, Boca Raton, FL.)

FIGURE 71.27 Quadtree approach to path planning: (a) quadtree decomposition of the work area and (b) quadtree constructed from space decomposition.

71.8.4.3 Quadtree Approach

In the quadtree approach, the rectangular workspace of Figure 71.27a is partitioned into four equal quadrants labeled A, B, C, D. Suppose the initial point is in quadrant A with the goal in quadrant B. If there are obstacles in quadrant A, it must be further partitioned into four quadrants AA, AB, AC, AD. Suppose the initial point is in AA, which also contains obstacles; then AA is further partitioned into quadrants AAA, AAB, AAC, AAD. This procedure terminates when there are no obstacles in the quadrant containing the initial point. A similar procedure is effected for the goal position. Based on this space decomposition, the quadtree shown in Figure 71.27b may be drawn. Now, tree search methods such as A* may be used to determine the optimal obstacle-free path.


The quadtree approach has the advantage of partitioning the space only as finely as necessary. If any quadrant contains neither goal, initial point, nor obstacles, it is not further partitioned. If any quadrant containing the initial position or goal contains no obstacles, it is not further partitioned. In 3-D, this approach is called the octree approach.
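A recursive sketch of the decomposition follows; the obstacle and endpoint tests (overlaps, contains_endpoint) are hypothetical callables supplied by the caller, and min_size bounds the recursion:

def quadtree(region, overlaps, contains_endpoint, min_size):
    """Recursively partition `region` = (x, y, size). Per the text, a quadrant
    is split further only if it contains an endpoint (initial or goal point)
    and also contains obstacles; obstacle-free quadrants are leaves."""
    x, y, s = region
    if not overlaps(region):                  # no obstacles: free leaf cell
        return {'region': region, 'free': True, 'children': []}
    if not contains_endpoint(region) or s <= min_size:
        return {'region': region, 'free': False, 'children': []}
    h = s / 2.0                               # split into four equal quadrants
    kids = [quadtree(r, overlaps, contains_endpoint, min_size)
            for r in [(x, y, h), (x+h, y, h), (x, y+h, h), (x+h, y+h, h)]]
    return {'region': region, 'free': False, 'children': kids}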

71.8.4.4 Maneuvering Board Solution for Collision Avoidance of Moving Obstacles

The techniques just discussed generate a set of via points between the initial and final positions. If there are moving obstacles within the free-space regions, one may often modify the paths between the via points on line in real time to avoid collision. If obstacles are moving with constant known velocities in the free space, a technique used by the U.S. Navy based on the maneuvering board can be used for on-line obstacle avoidance. Within a convex free-space region, generated for instance by the cell decomposition approach, one makes a relative polar plot with the moving robot at the center and other moving objects plotted as straight lines depending on their relative courses and speeds. A steady bearing and decreasing range (SBDR) indicates impending collision. Standard graphical techniques using a parallel ruler allow one to alter the robot's course and/or speed to achieve a prescribed CPA; these can be converted to explicit formulas for required course/speed changes. An advantage of this technique for mobile robots is that the coordinates of obstacles in the relative polar plot can be directly measured using onboard sonar and/or laser rangefinders. This technique can be modified into a navigational technique when some of the stationary obstacles have fixed absolute positions; such obstacles are known as reference landmarks.
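The numerical core of the maneuvering board computation is the closest point of approach under constant relative velocity, sketched below for the planar case:

import numpy as np

def closest_point_of_approach(rel_pos, rel_vel):
    """rel_pos: obstacle position relative to the robot (2-vector);
    rel_vel: obstacle velocity relative to the robot.
    Returns (time of CPA, distance at CPA)."""
    vv = float(np.dot(rel_vel, rel_vel))
    if vv == 0.0:
        return 0.0, float(np.linalg.norm(rel_pos))    # no relative motion
    t_cpa = -float(np.dot(rel_pos, rel_vel)) / vv     # time of closest approach
    t_cpa = max(t_cpa, 0.0)                           # CPA already passed if negative
    d_cpa = float(np.linalg.norm(rel_pos + t_cpa * rel_vel))
    return t_cpa, d_cpa

# Steady bearing, decreasing range (SBDR) shows up as d_cpa near zero with t_cpa > 0.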

71.8.4.5 Potential Field Approach

The potential field approach [Arkin 1989] is especially popular in mobile robotics as it seems to emulate the reflex action of a living organism. A fictitious attractive potential field is considered to be centered at the goal position (Figure 71.28a). Repulsive fields are selected to surround the obstacles (Figure 71.28b). The sum of the potential fields (Figure 71.28c) produces the robot motion as follows. Using F(x) = ma, with m the vehicle mass and F(x) equal to the sum of the forces from the various potential fields computed at the current vehicle position x, the required vehicle acceleration a(x) is computed. The resulting motion avoids obstacles and converges to the goal position. This approach does not produce a global path planned a priori. Instead, it is a real-time on-line motion control technique that can deal with moving obstacles, particularly if combined with maneuvering board techniques. Various methods have been proposed for selecting the potential fields; they should be limited to finite influence distances, or else the computation of the total force F(x) requires knowledge of all obstacle relative positions.

The potential field approach is particularly convenient as the force F may be computed knowing only the relative positions of the goal and obstacles from the vehicle; this information is directly provided by onboard sonar and laser readings. The complete potential field does not need to be computed, only the force vector of each field acting on the vehicle. A problem with the potential field approach is that the vehicle may become trapped in local minima (e.g., when an obstacle is directly between the vehicle and the goal); this can be corrected using various techniques, including adding a dither force to get the vehicle out of these false minima. The potential field approach can be combined with Lyapunov analysis techniques to integrate the path planning and trajectory following servocontrol functions of a mobile robot [Jagannathan et al. 1994].

In fact, Lyapunov functions and potential fields may simply be added in an overall control design technique.
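A common concrete choice is a quadratic attractive field at the goal plus repulsive fields of bounded influence distance around point obstacles; the gains and influence radius in the sketch below are tuning parameters of our choosing, not values from the text:

import numpy as np

def potential_force(x, goal, obstacles, k_att=1.0, k_rep=100.0, rho0=2.0):
    """Net fictitious force on the vehicle at position x (2-vector, numpy array).
    obstacles: list of obstacle-center positions; each repulsive field has the
    finite influence distance rho0 discussed in the text."""
    F = k_att * (goal - x)                    # attractive: pull toward the goal
    for obs in obstacles:
        d = x - obs
        rho = np.linalg.norm(d)
        if 0 < rho < rho0:                    # inside the repulsive influence region
            F += k_rep * (1.0/rho - 1.0/rho0) / rho**2 * (d / rho)
    return F

# Motion then follows from F = m*a: a(x) = potential_force(x, goal, obstacles) / m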

71.8.4.5.1 Emergent Behaviors

The responses to individual potential fields can be interpreted as behaviors such as seek goal, avoid obstacle, etc. Potential fields can be selected to achieve specialized behaviors such as docking (i.e., attaining a goal position with a prescribed angle of approach) and remaining in the center of a corridor (simply define repulsive fields from each wall). The sum of all of the potential fields yields an emergent behavior that has not been preprogrammed (e.g., seek goal while avoiding obstacles and remaining in the center of the hallway). This makes the robot exhibit behaviors that could be called intelligent or self-determined.


FIGURE 71.28 Potential field approach to navigation: (a) attractive field for goal at lower left corner, (b) repulsive fields for obstacles, (c) sum of potential fields, and (d) contour plot showing motion trajectory. (Courtesy of Zhou, C. 1996. Planning and intelligent control. In CRC Handbook of Mechanical Engineering. F. Kreith, Ed. CRC Press, Boca Raton, FL.)

71.9 Job and Activity Coordination

Coordination of workcell activities occurs on two distinct planes. On the discrete event (DE) or discrete activity plane, job coordination and sequencing, along with resource handling, is required. Digital input/output signals, or sequencing interlocks, are used between the workcell agents to signal job completion, resource availability, errors and exceptions, and so on. On a lower plane, servo-level motion/force coordination between multiple interacting robots is sometimes needed in special-purpose applications; this specialized topic is relegated to the end of this section.

71.9.1 Matrix Rule-Based Job Coordination Controller

The workcell DE coordinators must sequence the jobs according to the task plan, coordinating the agents and activities of the workcell. Discrete event workcell coordination occurs at two levels in Figure 71.7: coordination of interagent activities occurs within the job coordinator, and coordination of intra-agent activities occurs within the virtual agent at the machine coordinator level. The latter might be considered as reflex actions of the virtual agent.

71.9.1.1 Rule-Based Matrix DE Coordinator

The workcell DE coordinators are easily produced using the four task plan matrices constructed by the task planner designed in the section on the Petri net from task plan matrices. Given the job sequencing matrix Fv, the job start matrix Sv, the resource requirements matrix Fr, and the resource release matrix Sr, the


DE coordinator is given by

x̄ = Fv v̄c + Fr r̄c + Fu ū + FD ūD      (71.29)

vs = Sv x (71.30)

rs = Sr x (71.31)

where Eq. (71.29) is the controller state equation, Eq. (71.30) is the job start equation, and Eq. (71.31) is the resource release equation. This is a set of logical equations where all matrix operations are carried out in the matrix or/and algebra; addition of elements is replaced by OR, and multiplication of elements is replaced by AND. Overbars denote logical negation, so that Equation 71.29 is a rule base composed of AND statements (e.g., if job b is completed and job c is completed and resource R1 is available, then set state component x4 high), and Equation 71.30 and Equation 71.31 are rule bases composed of OR statements (e.g., if state component x4 is high or state component x7 is high, then release resource M1).

For complex tasks with many jobs, the matrices in the DE controller can be large. However, they are sparse. Moreover, for special manufacturing structures such as the re-entrant flow line, the matrices in Equation (71.29) are lower block triangular, and this special structure gets around problems associated with the NP-hard nature of general manufacturing job shops. Finally, as rule bases, the DE controller equations may be fired using standard efficient techniques for forward chaining (e.g., the Rete algorithm), backward chaining, and so on.

The structure of the DE job coordination controller is given in Figure 71.29. This shows that the job coordinator is simply a closed-loop feedback control system operating at the DE level. At each time increment, workcell status signals are measured, including the job complete status vector vc (where entries of 1 denote jobs complete), the resource availability vector rc (where entries of 1 denote resources available), and the part input vector u (where entries of 1 denote parts coming into the workcell). These signals are determined using workcell digital interlocks and digital input/output between the agents. Based on this workcell status information, the DE controller computes which jobs to start next and which resources to release. These commands are passed to the workcell in the job start vector vs (where entries of 1 denote jobs to be started) and the resource release vector rs (where entries of 1 denote resources to be released).
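One scan of the controller, Equation 71.29 through Equation 71.31, is then a few lines of boolean matrix algebra. A sketch reusing the task plan arrays from Section 71.8 (the part input and dispatching terms Fu ū and FD ūD are omitted for brevity):

import numpy as np

def de_controller_step(Fv, Fr, Sv, Sr, vc, rc):
    """One scan of the matrix rule-based coordinator (Eqs. 71.29-71.31).
    vc: job-complete status vector; rc: resource-available vector (boolean).
    In the or/and algebra, x[i] fires when every required job is complete
    AND every required resource is available; the AND is realized here by
    checking that no precondition in row i is unmet (De Morgan's law)."""
    unmet_jobs = Fv.astype(int) @ (~vc).astype(int)   # count of missing job preconditions
    unmet_res  = Fr.astype(int) @ (~rc).astype(int)   # count of missing resources
    x  = (unmet_jobs == 0) & (unmet_res == 0)         # Eq. (71.29)
    vs = (Sv.astype(int) @ x.astype(int)) > 0         # job start commands, Eq. (71.30)
    rs = (Sr.astype(int) @ x.astype(int)) > 0         # resource release, Eq. (71.31)
    return x, vs, rs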

71.9.1.2 Deadlocks and Resource Dispatching Commands

Outer feedback loops are required to compute the dispatching input uD. A separate dispatching input is required whenever a column of Fr contains multiple ones, indicating that the corresponding resource is a shared resource required for more than one job. In a manufacturing workcell, if care is not taken to assign shared resources correctly, system deadlock may occur. In deadlock, operation of a subsystem ceases as a circular blocking of resources has developed [Wysk et al. 1991]. In a circular blocking, each resource is waiting for each other resource, but none will ever again become available. In the example of Figure 71.24, a circular blocking occurs if machine M1 is waiting at b to be unloaded by R1, but R1 already has a part at d and the buffer B2 is full at e. On the other hand, buffer B2 cannot be unloaded at e since M1 already has a part at b.

There are well-established algorithms in industrial engineering for job dispatching [Panwalker and Iskander 1977], including first-in-first-out, earliest due date, last buffer first serve, etc. Kanban systems are pull systems where no job can be started unless a kanban card is received from a downstream job (possibly indicating buffer space or resource availability, or part requirements). A generalized kanban system that guarantees deadlock avoidance can be detailed in terms of the DE controller matrices, for it can be shown that all the circular waits of the workcell for a particular task are given in terms of the graph defined by Sr Fr, with Sr the resource release matrix and Fr the resource requirements matrix. Only circular waits can develop into circular blockings. Based on this, in Lewis and Huang [1995] a procedure known as maximum work-in-process (MAXWIP) is given that guarantees dispatching with no deadlock.
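Under one plausible reading of this construction, Sr Fr is a resource-by-resource adjacency matrix whose directed cycles are the circular waits; a sketch using networkx (assumed available) for the cycle enumeration:

import numpy as np
import networkx as nx

def circular_waits(Sr, Fr, resource_names):
    """Sr (resources x states) and Fr (states x resources) as boolean arrays.
    The boolean product Sr.Fr defines the resource wait-relation graph; its
    directed cycles are the circular waits that could develop into blocking."""
    adj = (Sr.astype(int) @ Fr.astype(int)) > 0   # resource -> resource waits
    G = nx.DiGraph()
    n = len(resource_names)
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                G.add_edge(resource_names[i], resource_names[j])
    return list(nx.simple_cycles(G))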


FIGURE 71.29 Matrix-based DE feedback controller for job sequencing and resource allocation.

71.9.2 Process Integration, Digital I/O, and Job Coordination Controller Implementation

Information integration is the process by which the activities and status of the various workcell agents interact. Subcategories of information integration include sensor integration and sensor/actuator integration. Motion/process integration involves coordinating manipulator motion with process sensor or process controller devices. The most primitive process integration is through discrete digital I/O, or sequencing interlocks. For example, a machine controller external to the robot controller might send a 1-bit signal indicating that it is ready to be loaded by the robot. The DE matrix-based job coordination controller provides an ideal technique for information integration. The workcell status signals required in Figure 71.29, the job completion vector and resource availability vector, are given by digital output signals provided by


sensors in the workcell (e.g., gripper open, part inserted in machine). The workcell command signals in Figure 71.29, the job start vector and resource release vector, are given by digital discrete input signals to the workcell agents (e.g., move robot to next via point, pick up part, load machine).

The DE job coordination controller is nothing but a rule base, and so may be implemented either on commercial programmable logic controllers (PLCs) or on commercial robot controllers. In many cases, the DE controller matrices are block triangular, so that portions of the controller can be implemented hierarchically in separate subsystems (e.g., coordination between a single robot and a camera for some jobs). If this occurs, portions of the controller may be implemented in the machine coordinator in Figure 71.7, which often resides within the robot arm controller. Many robot controllers now support information integration functions by employing integrated PC interfaces through the communications ports, or in some cases through direct connections to the robot controller data bus. Higher level interactive portions of the DE controller should be implemented at the job coordination level, which is often realized on a dedicated PLC. Vision-guided high-precision pick and place and assembly are major applications in the electronics and semiconductor industries. Experience has shown that the best integrated vision/robot performance has come from running both the robot and the vision system internal to the same computing platform, since data communication is much more efficient due to data bus access, and computing operations are coordinated by one operating system.

71.9.3 Coordination of Multiple Robots

Coordination of multiple robots can be accomplished either at the discrete event level or the servocontrol level. In both cases it is necessary to avoid collisions and other interactions that impede task completion. If the arms are not interacting, it is convenient to coordinate them at the DE level, where collision avoidance may be confronted by assigning any intersecting workspace where two robots could collide as a shared resource, accessible by only a single robot at any given time. Then, techniques such as those in the section on the task matrix approach to workcell planning may be used (where M1 was a shared resource). Such approaches are commonly used in coordination control of automated guided vehicles (AGVs) [Gruver et al. 1984].

71.9.3.1 Two-Arm Motion/Force Coordination

In specialized robotic applications requiring two-arm interaction, such as coordinated lifting or process applications, it may be necessary to coordinate the motion and force exertion of the arms at the joint servocontrol level [Hayati et al. 1989]. In such cases, the two robot arms may be considered in Figure 71.7 as a single virtual agent having specific behaviors as defined by the feedback servocontroller.

There are two basic approaches to two-arm servocontrol. In one approach, one arm is considered as the master, whose commanded trajectory is in terms of motion. The other, slave, arm is commanded to maintain prescribed forces and torques across a payload mass, which effectively constrains its relative motion with respect to the master arm. By this technique, the motion control and force/torque control problems are relegated to different arms, so that the control objectives are easily accomplished by servo-level feedback controller design. Another approach to two-arm coordination is to treat both arms as equals, coordinating to maintain prescribed linear and angular motions of the center of gravity (c.g.) of a payload mass, as well as prescribed internal forces and torques across the payload. This approach involves complex analyses to decompose the payload c.g. motion and internal forces into the required motion of each arm; kinematic transformations and Jacobians are needed.

71.10 Error Detection and Recovery

The material in this section is modified from Zhou [1996]. In the execution of a task, errors can occur. The errors can be classified into several categories: hardware error, software error, and operational error. The hardware errors include errors in mechanical and electrical mechanisms of the robot, such as failure in the drive system or sensing system. Software errors can be bugs in the application program or control software. Timing errors with cooperative devices can also be called software errors. Operational errors are errors


in the robot environment that are external to the robot system, such as jamming of parts or collision with obstacles.

During this discussion one should keep in mind the PC&C structure in Figure 71.7. Error handling can be classified into two activities, error detection and error recovery. Error detection is composed of error sensing, interpretation, and classification. Error recovery is composed of decision making and corrective job assignment. While corrective jobs are being performed, the assigned task may be interrupted, or may continue to run at a reduced capability (e.g., one of two drilling machines may be down).

71.10.1 Error Detection

The sensors used in error detection can include all those discussed in Section 71.7, including tactile sensors for sensing contact errors, proximity sensors for sensing location or possible collision, force/torque sensors for sensing collision and jamming, and vision for sensing location, orientation, and error existence. Once an error is sensed, it must be interpreted and classified. This may be accomplished by servo-level state observers, logical rule-based means, or using advanced techniques such as neural networks.

71.10.2 Error Recovery

The occurrence of an error usually causes interruption of the normal task execution. Error recovery can be done at three levels, where errors can be called exceptions, faults, and failures. At the lowest level the exception will be corrected automatically, generally in the real-time servocontrol loops, and the task execution continued. An example is jamming in the pin insertion problem, where a force/torque wrist sensor can indicate jamming as well as provide the information needed to resolve the problem. At the second level, the error is a fault that has been foreseen by the task planner and included in the task plan passed to the job coordinator.

The vector x in Equation 71.29 contains fault states, and logic is built into the task plan matrices to allow corrective job assignment. Upon detection of an error, jobs can be assigned to correct the fault, with the task subsequently continued from the point where the error occurred. At this level, the error detection/recovery logic can reside either in the machine coordinator or in the job coordinator.

At the highest level of recovery, the error was not foreseen by the task planner and there is no error state in the task plan. This results in a failure, where the task is interrupted. Signals are sent to the planner, who must correct the failure, sometimes with external resources, and replan the task, passing another plan to the coordinator. In the worst case, manual operator intervention is needed. It can be seen that the flow of error signals proceeds upwards and that of commands proceeds downwards, exactly as in the NASREM architecture in Figure 71.4.

At the lowest servocontrol level, additional sensory information is generally required for error recovery, as in the requirement for a wrist force/torque sensor in pin insertion. At the mid-level, additional logic is needed for error recovery. At the highest level, task replanning capabilities are needed.

71.11 Human Operator Interfaces

Human operator integration is critical to the expeditious setup, programming, maintenance, and sometimes operation of the robotic workcell. Especially important for effective human integration are the available human I/O devices, including the information available to the operator in graphical form and the modes of real-time control operation available for human interaction. Teaching, programming, and operational efforts are dramatically influenced by the type of user interface I/O devices available.

71.11.1 Levels of User Interface

Discounting workcell design and layout, operator interfaces occur at several levels in Figure 71.7 and may be classified into off-line and on-line activities. Off-line interfaces occur in task definition and setup, often


consisting of teaching activities. In workcell management, user inputs include assignment of tasks, due dates, and so on. At the workcell planning level, user functions might be required in task planning, both in task decomposition/job sequencing and in resource assignment. Off-line CAD programs are often useful at this level. In path planning, the user might be required to teach a robot specific path via points for job accomplishment. Finally, if failures occur, a human might be required to clear the failure, reset the workcell, and restart the job sequence.

On-line user interfaces may occur at the discrete event level and the servocontrol level. In the former case, a human might perform some of the jobs requested by the job coordinator, or may be required to perform corrective jobs in handling foreseen faults. At the servocontrol level, a human might perform teleoperator functions, or may be placed in the inner feedback control loop with a machine or robotic device.

71.11.2 Mechanisms for User Interface

71.11.2.1 Interactive 3-D CAD

Computer integrated manufacturing operations require off-line programming and simulation in order to lay out production facilities, model and evaluate design concepts, optimize motion of devices, avoid interference and collisions, minimize process cycle times, maximize productivity, and ensure maximum return on investment. Graphical interfaces, available on some industrial robots, are very effective for conveying information to the operator quickly and efficiently. A graphical interface is most important for design and simulation functions in applications which require frequent reprogramming and setup changes. Several very useful off-line programming software systems are available from third-party suppliers (CimStation [SILMA 1992], ROBCAD, IGRIP). These systems use CAD and/or dynamics computer models of commercially available robots to simulate job execution, path motion, and process activities, providing rapid programming and virtual prototyping functions. Interactive off-line CAD is useful for assigning tasks at the management level and for task decomposition, job sequencing, and resource assignment at the task planning level.

71.11.2.2 Off-Line Robot Teaching and Workcell Programming

Commercial robot or machine tool controllers may have several operator interface mechanisms. These are generally useful at the level of off-line definition or teaching of jobs, which can then be sequenced by the job coordinator or machine coordinator to accomplish assigned tasks. At the lowest level one may program the robot in its operating language, specifying path via points, gripper open/close commands, and so on. Machine tools may require programming in CNC code. These are very tedious functions, which can be avoided by object-oriented and open architecture approaches in well-designed workcells, where such functions should be performed automatically, leaving the user free to deal with other higher level supervisory issues. In such approaches, macros or subroutines are written in machine code which encapsulate the machine behaviors (e.g., set speed, open gripper, go to prescribed point). Then, higher level software passes specific parameters to these routines to execute behaviors with specific location and motion details as directed by the job coordinator.

Many robots have a teach pendant, which is a low-level teleoperation device with push buttons for moving individual axes and other buttons to press commanding that certain positions should be memorized. On job execution, a playback mode is switched in, wherein the robot passes through the taught positions to sweep out a desired path. This approach is often useful for teaching multiple complex poses and Cartesian paths.

The job coordinator may be implemented on a programmable logic controller (PLC). PLC programming can be a tedious and time-consuming affair, and in well-designed flexible reconfigurable workcells an object-oriented approach is used to avoid reprogramming of PLCs. This might involve a programming scheme that takes the task plan matrices in Section 71.9 as inputs and automatically implements the coordinator using rule-based techniques (e.g., forward chaining, Rete algorithm).


71.11.2.3 Teleoperation and Man-in-the-Loop Control

Operator interaction at the servocontrol level can basically consist of two modes. In man-in-the-loop control, a human provides or modifies the feedback signals that control a device, actually operating a machine tool or robotic device. In teleoperation, an inner feedback loop is closed around the robot, and a human provides motion trajectory and force commands to the robot in a master/slave relationship. In such applications, there may be problems if extended communications distances are involved, since delays in the communications channel can destabilize a teleoperation system having force feedback unless careful attention is paid to designing the feedback loops to maintain passivity. See Lewis [1996] for more details.

71.12 Robot Workcell Programming

The robotic workcell requires programming at several levels [Leu 1985]. At the lower levels one generally uses commercial programming languages peculiar to device manufacturers of robots and CNC machine tools. At the machine coordination level, robot controllers are also often used with discrete I/O signals and decision making commands. At the job coordination level, programmable logic controllers (PLCs) are often used in medium complexity workcells, so that a knowledge of PLC programming techniques is required. In modern manufacturing and process workcells, coordination may be accomplished using general purpose computers with programs written, for instance, in C.

71.12.1 Robot Programming Languages

Subsequent material in this section is modified from Bailey [1996]. Each robot manufacturer has its own proprietary programming language. The variety of motion and position command types in a programming language is usually a good indication of the robot's motion generation capability. Program commands which produce complex motion should be available to support the manipulation needs of the application. If palletizing is the application, then simple methods of creating position commands for arrays of positions are essential. If continuous path motion is needed, an associated set of continuous motion commands should be available. The range of motion generation capabilities of commercial industrial robots is wide. Suitability for a particular application can be determined by writing test code.
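Language aside, the palletizing computation itself is nothing more than generating a grid of drop-off poses, as the following language-neutral Python sketch illustrates (the pallet origin and pitches are hypothetical values):

def pallet_positions(origin, dx, dy, rows, cols):
    """Yield (x, y) drop-off positions for a rows-by-cols pallet,
    starting at `origin` with pitches dx, dy between compartments."""
    x0, y0 = origin
    for r in range(rows):
        for c in range(cols):
            yield (x0 + c * dx, y0 + r * dy)

# e.g., a 3 x 4 pallet with 50 mm pitch starting at (100, 200)
targets = list(pallet_positions((100.0, 200.0), 50.0, 50.0, 3, 4))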

The earliest industrial robots were simple sequential machines controlled by a combination of servomotors, adjustable mechanical stops, limit switches, and PLCs. These machines were generally programmed by a record and play-back method, with the operator using a teach pendant to move the robot through the desired path. MHI, the first robot programming language, was developed at the Massachusetts Institute of Technology (MIT) during the early 1960s. MINI, developed at MIT during the mid-1970s, was an expandable language based on LISP. It allowed programming in Cartesian coordinates with independent control of multiple joints. VAL and VAL II [Shimano et al. 1984], developed by Unimation, Inc., were interpreted languages designed to support the PUMA series of industrial robots. A manufacturing language (AML) was a completely new programming language developed by IBM to support the R/S 1 assembly robot. It was a subroutine-oriented, interpreted language which ran on the Series/1 minicomputer. Later versions were compiled to run on IBM compatible personal computers to support the 7535 series of SCARA robots. Several additional languages [Gruver et al. 1984, Lozano-Perez 1983] were introduced during the late 1980s to support a wide range of new robot applications which were developed during this period.

71.12.2 V+, A Representative Robot Language

V+, developed by Adept Technology, Inc., is a representative modern robot programming language with several hundred program instructions and reserved keywords. V+ will be used to demonstrate important features of robot programming. Robot program commands fall into several categories, as detailed in the following subsections.


71.12.2.1 Robot Control

Program instructions required to control robot motion specify location, trajectory, speed, acceleration, and obstacle avoidance. Examples of V+ robot control commands are as follows:

MOVE: Move the robot to a new location.
DELAY: Stop the motion for a specified period of time.
SPEED: Set the speed for subsequent motions.
ACCEL: Set the acceleration and deceleration for subsequent motions.
OPEN: Open the hand.
CLOSE: Close the hand.

71.12.2.2 System Control

In addition to controlling robot motion, the system must support program editing and debugging, program and data manipulation, program and data storage, program control, system definitions and control, system status, and control/monitoring of external sensors. Examples of V+ system control instructions are as follows:

EDIT: Initiate line-oriented editing.
STORE: Store information from memory onto a disk file.
COPY: Copy an existing disk file into a new program.
EXECUTE: Initiate execution of a program.
ABORT: Stop program execution.
DO: Execute a single program instruction.
WATCH: Set and clear breakpoints for diagnostic execution.
TEACH: Define a series of robot location variables.
CALIBRATE: Initiate the robot positioning system.
STATUS: Display the status of the system.
ENABLE: Turn on one or more system switches.
DISABLE: Turn off one or more system switches.

71.12.2.3 Structures and Logic

Program instructions are needed to organize and control execution of the robot program and interaction with the user. Examples include familiar commands such as FOR, WHILE, and IF, as well as commands like the following:

WRITE: Output a message to the manual control pendant.
PENDANT: Receive input from the manual control pendant.
PARAMETER: Set the value of a system parameter.

71.12.2.4 Special Functions

Various special functions are required to facilitate robot programming. These include mathematical expressions such as COS, ABS, and SQRT, as well as instructions for data conversion and manipulation, and kinematic transformations such as the following:

BCD: Convert from real to binary coded decimal.
FRAME: Compute the reference frame based on given locations.
TRANS: Compose a transformation from individual components.
INVERSE: Return the inverse of the specified transformation.


71.12.2.5 Program Execution

Organization of a program into a sequence of executable instructions requires scheduling of tasks, control of subroutines, and error trapping/recovery. Examples include the following:

PCEXECUTE: Initiate the execution of a process control program.
PCABORT: Stop execution of a process control program.
PCPROCEED: Resume execution of a process control program.
PCRETRY: After an error, resume execution at the last step tried.
PCEND: Stop execution of the program at the end of the current execution cycle.

71.12.2.6 Example Program

This program demonstrates a simple pick and place operation. The values of position variables pick and place are specified by a higher level executive that then initiates this subroutine:

 1 .PROGRAM move.parts()
 2     ; Pick up parts at location "pick" and put them down at "place"
 3     parts = 100                ; Number of parts to be processed
 4     height1 = 25               ; Approach/depart height at "pick"
 5     height2 = 50               ; Approach/depart height at "place"
 6     PARAMETER HAND.TIME = 16   ; Setup for slow hand
 7     OPEN                       ; Make sure hand is open
 8     MOVE start                 ; Move to safe starting location
 9     FOR i = 1 TO parts         ; Process the parts
10         APPRO pick, height1    ; Go toward the pick-up
11         MOVES pick             ; Move to the part
12         CLOSEI                 ; Close the hand
13         DEPARTS height1        ; Back away
14         APPRO place, height2   ; Go toward the put-down
15         MOVES place            ; Move to the destination
16         OPENI                  ; Release the part
17         DEPARTS height2        ; Back away
18     END                        ; Loop for the next part
19     TYPE "All done.", /I3, parts, " parts processed"
20     STOP                       ; End of the program
21 .END

71.12.2.7 Off-Line Programming and Simulation

Commercially available software packages (discussed in Section 71.11) provide support for off-line design and simulation of 3-D workcell layouts including robots, end effectors, fixtures, conveyors, part positioners, and automated guided vehicles. Dynamic simulation allows off-line creation, animation, and verification of robot motion programs. However, these techniques are limited to verification of overall system layout and preliminary robot program development. With support for data exchange standards [e.g., the Initial Graphics Exchange Specification (IGES), Virtual Data Acquisition and File Specification (VDAFS), Specification for Exchange of Text (SET)], these software tools can pass location and trajectory data to a robot control program, which in turn can provide the additional functions required for full operation (operator guidance, logic, error recovery, sensor monitoring/control, system management, etc.).

71.13 Mobile Robots and Automated Guided Vehicles

A topic which has always intrigued computer scientists is that of mobile robots [Zheng 1993]. These machines move in generally unstructured environments and so require enhanced decision making and sensors; they seem to exhibit various anthropomorphic aspects since vision is often the sensor, decision


making mimics brain functions, and mobility is similar to that of humans, particularly if there is an onboard robot arm attached. Here are discussed mobile robot research and factory automated guided vehicle (AGV) systems, two widely disparate topics.

71.13.1 Mobile Robots

Unfortunately, in order to focus on higher functions such as decision making and high-level vision processing, many researchers treat the mobile robot as a dynamical system obeying Newton's laws, F = ma (e.g., in the potential field approach to motion control, discussed earlier). This simplified dynamical representation does not correspond to the reality of moving machinery, which has nonholonomic constraints, unknown masses, frictions, Coriolis forces, drive train compliance, wheel slippage, and backlash effects. In this subsection we provide a framework that brings together three camps: computer science results based on the F = ma assumption, nonholonomic control results that deal with a kinematic steering system, and full servo-level feedback control that takes into account all of the vehicle dynamics and uncertainties.

71.13.1.1 Mobile Robot Dynamics

The full dynamical model of a rigid mobile robot (e.g., no flexible modes) is given by

   M(q)q̈ + Vm(q, q̇)q̇ + F(q, q̇) + G(q) + τd = B(q)τ − A^T(q)λ      (71.32)

which should be compared to Equation 71.7 and Equation 71.14. In this equation, M is an inertia matrix, Vm is a matrix of Coriolis and centripetal terms, F is a friction vector, G is a gravity vector, and τd is a vector of disturbances. The n-vector τ(t) is the control input. The dynamics of the driving and steering motors should be included in the robot dynamics, along with any gearing. Then, τ might be, for example, a vector of voltage inputs to the drive actuator motors.

The vehicle variable q(t) is composed of Cartesian position (x, y) in the plane plus orientation θ. If a robot arm is attached, it can also contain the vector of robot arm joint variables. A typical mobile robot with no onboard arm has q = [x y θ]^T, where there are three variables to control but only two inputs, namely, the voltages into the left and right driving wheels (or, equivalently, vehicle speed and heading angle).

The major problems in control of mobile robots are the fact that there are more degrees of freedom than control inputs, and the existence of nonholonomic constraints.

71.13.1.2 Nonholonomic Constraints and the Steering System

In Equation 71.32 the vector of constraint forces is λ, and the matrix A(q) is associated with the constraints. These may include nonslippage of wheels and other holonomic effects, as well as the nonholonomic constraints, which pose one of the major problems in mobile robot control. Nonholonomic constraints are those which are nonintegrable, and include effects such as the impossibility of sideways motion (think of an automobile). In research laboratories, it is common to deal with omnidirectional robots that have no nonholonomic constraints, but can rotate and translate with full degrees of freedom; such devices do not correspond to the reality of existing shop floor or cross-terrain vehicles, which have nonzero turn radius.

A general case is where all kinematic equality constraints are independent of time and can be expressed as

   A(q)q̇ = 0      (71.33)

Let S(q) be a full-rank basis for the nullspace of A(q), so that AS = 0. Then one sees that the linear and angular velocities are given by

   q̇ = S(q)v(t)      (71.34)

where v(t) is an auxiliary vector. In fact, v(t) often has physical meaning, consisting of two components: the commanded vehicle speed and the heading angle. Matrix S(q) is easily determined independently of


the dynamics (71.32) from the wheel configuration of the mobile robot. Thus, Equation 71.34 is a kinematic equation that expresses some simplified relations between the motion q(t) and a fictitious ideal speed and heading vector v. It does not include dynamical effects, and is known in the nonholonomic literature as the steering system. In the case of omnidirectional vehicles S(q) is 3 × 3, and Equation 71.34 corresponds to the Newton's law model F = ma used in, e.g., potential field approaches.
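For the common differential-drive vehicle with q = [x y θ]^T and v consisting of a speed and a turn rate, S(q) is the familiar unicycle map, and Equation 71.34 can be integrated directly. A simulation sketch under that assumption:

import numpy as np

def S(q):
    """Nullspace basis of A(q) for a differential-drive vehicle,
    q = [x, y, theta]; columns correspond to v = [speed, turn rate]."""
    theta = q[2]
    return np.array([[np.cos(theta), 0.0],
                     [np.sin(theta), 0.0],
                     [0.0,           1.0]])

def step(q, v, dt=0.01):
    """Euler-integrate the steering system q_dot = S(q) v, Eq. (71.34)."""
    return q + dt * (S(q) @ v)

q = np.array([0.0, 0.0, 0.0])
for _ in range(100):                 # drive forward while turning
    q = step(q, np.array([1.0, 0.5]))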

There is a large literature on selecting the command v(t) to produce desired motion q(t) in nonholonomic systems; the problem is that v has two components and q has three. Illustrative references include the chapters by Yamamoto and Yun and by Canudas de Wit et al. in Zheng [1993], as well as Samson and Ait-Abderrahim [1991]. There are basically three problems considered in this work: following a prescribed path, tracking a prescribed trajectory (e.g., a path with prescribed transit times), and stabilization at a prescribed final docking position (x, y) and orientation θ. Single vehicle systems as well as multibody systems (truck with multiple trailers) are treated. The results obtained are truly remarkable and are in the vein of a path including the forward/backward motions necessary to park a vehicle at a given docking position and orientation. All of the speed reversals and steering commands are automatically obtained by solving certain coupled nonlinear equations. This is truly the meaning of intelligence and autonomy.

71.13.1.3 Conversion of Steering System Commands to Actual Vehicle Motor Commands

The steering system command vector obtained from the nonholonomic literature may be called vc(t), the ideal desired value of the speed/heading vector v(t). Under the so-called perfect velocity assumption, the actual vehicle velocity v(t) follows the command vector vc(t), which can then be given directly as the control input to the vehicle. Unfortunately, in real life this assumption does not hold. One is therefore faced with the problem of obtaining drive wheel and steering commands for an actual vehicle from the steering system command vc(t).

To accomplish this, premultiply Equation 71.32 by S^T(q) and use Equation 71.34 to obtain

   M̄(q)v̇ + V̄m(q, q̇)v + F̄(v) + τ̄d = B̄(q)τ      (71.35)

where gravity plays no role and so has been ignored, the constraint term drops out due to the fact that AS = 0, and the overbar terms are easily computed in terms of the original quantities. The true model of the vehicle is thus given by combining both Equation 71.34 and Equation 71.35. However, in the latter equation it turns out that B̄(q) is square and invertible, so that standard computed torque techniques (see the section on robot servo-level motion control) can be used to compute the required vehicle control τ from the steering system command vc(t). In practice, correction terms are needed due to the fact that v ≠ vc; they are computed using a technique known as integrator backstepping [Fierro and Lewis 1995].

The overall controller for the mobile robot is similar in structure to the multiloop controller in Figure 71.13, with an inner nonlinear feedback linearization loop (e.g., computed torque) and an outer tracking loop that computes the steering system command. The robustifying term is computed using backstepping. Adaptive control and neural net control inner loops can be used instead of computed torque to reject uncertainties and provide additional dynamics learning capabilities. Using this multiloop control scheme, the idealized control inputs provided, e.g., by potential field approaches, can also be converted to actual control inputs for any given vehicle. A major criticism of potential field approaches has been that they do not take into account the vehicle nonholonomic constraints.

71.13.2 Automated Guided Vehicle Systems

Though research in mobile robots is intriguing, with remarkable results exhibiting intelligence at the potential field planning level, the nonholonomic control level, and elsewhere, few of these results make their way into the factory or other unstructured environments. There, reliability and repeatability are the main issues of concern.


71.13.2.1 Navigation and Job Coordination

If the environment is unstructured, one may either provide sophisticated planning, decision making, and control schemes, or one may force structure onto the environment. Thus, in most AGV systems the vehicles are guided by wires buried in the floor or stripes painted on the floor. Antennas buried periodically in the floor provide check points for the vehicle as well as transmitted updates to its commanded job sequence.

A single computer may perform scheduling and routing of multiple vehicles. Design of this coordinating controller is often contorted and complex in actual installed systems, which may be the product of several engineers working in an ad hoc fashion over several years of evolution of the system. To simplify and unify design, the discrete event techniques in the task matrix approach section may be used for planning. Track intersections should be treated as shared resources only accessible by a single vehicle at a time, so that on-line dispatching decisions are needed. The sequencing controller is then implemented using the approach in Section 71.9.

71.13.2.2 Sensors, Machine Coordination, and Servo-level Control

Autonomous vehicles often require extensive sensor suites. There is usually a desire to avoid vision systems and use more reliable sensors, including contact switches, proximity detectors, laser rangefinders, sonar, etc. Optical bar codes are sometimes placed on the walls; these are scanned by the robot so it can update its absolute position. Integrating this multitude of sensors and performing coordinated activities based on their readings may be accomplished using simple decision logic on low-level microprocessor boards. Servo-level control consists of simple PD loops that cause the vehicle to follow commanded speeds and turn commands. Distance sensors may provide information needed to maintain minimum safe intervehicular spacing.

Defining Terms

Accuracy: The degree to which the actual and commanded position (of, e.g., a robot manipulator) correspond.

Adaptive control: A large class of control algorithms where the controller has its own internal dynamics and so is capable of learning the unknown dynamics of the robot arm, thus improving performance over time.

A manufacturing language (AML): A robot programming language.

Automatic programming of tools (APT): A robot programming language.

Cell decomposition: An approach to path planning where the obstacles are modeled as polygons and the free space is decomposed into cells such that a straight-line path can be generated between any two points in a cell.

Compliance: The inverse of stiffness, useful in end effectors and tooling whenever a robot must interact with rigid constraints in the environment.

Computed torque control: An important and large class of robot arm controller algorithms that relies on subtracting out some or most of the dynamical nonlinearities using feedforward compensation terms including, e.g., gravity, friction, Coriolis, and desired acceleration feedforward.

End effector: Portion of a robot (typically at the end of a chain of links) designed to contact the external world.

Feedback linearization: A modern approach to robot arm control that formalizes computed torque control mathematically, allowing formal proofs of stability and design of advanced algorithms using Lyapunov and other techniques.

Force control: A class of algorithms allowing control over the force applied by a robot arm, often in a direction normal to a prescribed surface while the position trajectory is controlled in the plane of the surface.

Forward kinematics: Identification of Cartesian task coordinates given the robot joint configuration.

Initial Graphics Exchange Specification (IGES): A data exchange standard.

Inverse kinematics: Identification of possible robot joint configurations given desired Cartesian task coordinates.


Joint variables: Scalars specifying the position of each joint, one for each degree of freedom. The joint variable for a revolute joint is an angle in degrees; the joint variable for a prismatic joint is an extension in units of length.

Learning control: A class of control algorithms for repetitive motion applications (e.g., spray painting) where information on the errors during one run is used to improve performance during the next run.

Linearity in the parameters: A property of the robot arm dynamics, important in adaptive controller design, where the nonlinearities are linear in the unknown parameters such as unknown masses and friction coefficients.

Manipulator Jacobian: A configuration-dependent matrix relating joint velocities to Cartesian coordinate velocities.

Mechanical part feeders: Mechanical devices for feeding parts to a robot with a specified frequency and orientation. They are classified as vibratory bowl feeders, vibratory belt feeders, and programmable belt feeders.

Mobile robot: A special type of manipulator which is not bolted to the floor but can move. Based on different driving mechanisms, mobile robots can be further classified as wheeled mobile robots, legged mobile robots, treaded mobile robots, underwater mobile robots, and aerial vehicles.

Path planning: The process of finding a continuous path from an initial robot configuration to a goal configuration without collision.

Prismatic joint: Sliding or telescoping robot joint that produces relative translation of the connected links.

Proportional-integral-derivative (PID) control: A classical servocontrol feedback algorithm where the actual system output is subtracted from the desired output to obtain a tracking error. A weighted linear combination of the tracking error, its derivative, and its integral is then used as the control input to the system (a minimal code sketch follows this glossary).

Remote-center-of-compliance (RCC): A compliant wrist or end effector designed so that task-related forces and moments produce deflections with a one-to-one correspondence (i.e., without side effects). This property simplifies programming of assembly and related tasks.

Repeatability: The degree to which the actual positions resulting from two repeated commands to the same position (of, e.g., a robot manipulator) correspond.

Revolute joint: Rotary robot joint producing relative rotation of the connected links.

Robot axis: A direction of travel or rotation usually associated with a degree of freedom of motion.

Robot joint: A mechanism that connects the structural links of a robot manipulator together while allowing relative motion.

Robot link: The rigid structural elements of a robot manipulator that are joined to form an arm.

Robust control: A large class of control algorithms where the controller is generally nondynamic, but contains information on the maximum possible modeling uncertainties so that the tracking errors are kept small, often at the expense of large control effort. The tracking performance does not improve over time, so the errors never go to zero.

SCARA: Selectively compliant assembly robot arm.

Singularity: Configuration for which the manipulator Jacobian has less than full rank.

Skew symmetry: A property of the dynamics of rigid-link robot arms, important in controller design, stating that Ṁ − 2Vm is skew symmetric, with M the inertia matrix and Vm the Coriolis/centripetal matrix. This is equivalent to stating that the internal forces do no work (a numerical check follows this glossary).

Specification for Exchange of Text (SET): A data exchange standard.

Task coordinates: Variables in a frame most suited to describing the task to be performed by the manipulator. They are generally taken as Cartesian coordinates relative to a base frame.

Virtual Data Acquisition and File Specification (VDAFS): A data exchange standard.

Visibility graph: A road map approach to path planning where the obstacles are modeled as polygons. The visibility graph has nodes given by the vertices of the polygons, the initial point, and the goal point. The links are straight line segments connecting the nodes without intersecting any obstacles.


Voronoi diagram: A road map approach to path planning where the obstacles are modeled as polygons. The Voronoi diagram consists of the loci of points at equal distance from adjacent obstacles; it is composed of straight line segments and parabolic arcs.
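As noted in the proportional-integral-derivative (PID) control entry above, the algorithm reduces to a few lines of code. The discrete-time sketch below uses a simple Euler discretization; the gains and the discretization are illustrative choices, not prescribed by the text.

class PID:
    """Discrete PID servo: u = Kp*e + Ki*sum(e*dt) + Kd*de/dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.e_prev = 0.0

    def update(self, desired, actual, dt):
        e = desired - actual                 # tracking error
        self.integral += e * dt              # accumulate integral of error
        de = (e - self.e_prev) / dt          # finite-difference derivative
        self.e_prev = e
        # Weighted linear combination of error, its integral, and its derivative
        return self.kp * e + self.ki * self.integral + self.kd * de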
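The skew symmetry entry can likewise be checked numerically. The fragment below uses the familiar two-link planar arm with point masses at the link ends (as in, e.g., Lewis, Abdallah, and Dawson 1993); the parameter values are arbitrary test data.

import numpy as np

# Two-link planar arm, point masses at the link ends; arbitrary test values.
m2, l1, l2 = 1.2, 1.0, 0.8          # only m2, l1, l2 enter h below
q2, qd1, qd2 = 0.7, 0.3, -0.5       # joint 2 angle and joint velocities

h = -m2 * l1 * l2 * np.sin(q2)
# Coriolis/centripetal matrix Vm from the Christoffel symbols
Vm = np.array([[h * qd2, h * (qd1 + qd2)],
               [-h * qd1, 0.0]])
# Time derivative of the inertia matrix M(q) for this arm
Mdot = np.array([[2 * h * qd2, h * qd2],
                 [h * qd2, 0.0]])
S = Mdot - 2 * Vm
assert np.allclose(S, -S.T)          # Mdot - 2*Vm is skew symmetric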

References

Albus, J. S. 1992. A reference model architecture for intelligent systems design. In An Introduction to Intelligent and Autonomous Control, P. J. Antsaklis and K. M. Passino, Eds., pp. 27–56. Kluwer, Boston, MA.
Antsaklis, P. J. and Passino, K. M. 1992. An Introduction to Intelligent and Autonomous Control. Kluwer, Boston, MA.
Arkin, R. C. 1989. Motor schema-based mobile robot navigation. Int. J. Robotics Res. 8(4):92–112.
Bailey, R. 1996. Robot programming languages. In CRC Handbook of Mechanical Engineering, F. Kreith, Ed. CRC Press, Boca Raton, FL.
Brooks, R. A. 1986. A robust layered control system for a mobile robot. IEEE J. Robotics Automation RA-2(1):14–23.
Craig, J. 1989. Introduction to Robotics: Mechanics and Control. Addison-Wesley, New York.
Decelle, L. S. 1988. Design of a robotic workstation for component insertions. AT&T Tech. J. 67(2):15–22.
Edkins, M. 1983. Linking industrial robots and machine tools. In Robotic Technology, A. Pugh, Ed. IEE Control Engineering Ser. 23, Peregrinus, London.
Fierro, R. and Lewis, F. L. 1995. Control of a nonholonomic mobile robot: backstepping kinematics into dynamics, pp. 3805–3810. Proc. IEEE Conf. Decision Control, New Orleans, LA, Dec.
Fraden, J. 1993. AIP Handbook of Modern Sensors: Physics, Design, and Applications. American Institute of Physics, College Park, MD.
Fu, K. S., Gonzalez, R. C., and Lee, C. S. G. 1987. Robotics. McGraw-Hill, New York.
Ghosh, B., Jankovic, M., and Wu, Y. 1994. Perspective problems in systems theory and its application in machine vision. J. Math. Sys. Estim. Control.
Groover, M. P., Weiss, M., Nagel, R. N., and Odrey, N. G. 1986. Industrial Robotics. McGraw-Hill, New York.
Gruver, W. A., Soroka, B. I., and Craig, J. J. 1984. Industrial robot programming languages: a comparative evaluation. IEEE Trans. Syst., Man, Cybernetics SMC-14(4).
Hayati, S., Tso, K., and Lee, T. 1989. Dual arm coordination and control. Robotics 5(4):333–344.
Jagannathan, S., Lewis, F., and Liu, K. 1994. Motion control and obstacle avoidance of a mobile robot with an onboard manipulator. J. Intell. Manuf. 5:287–302.
Jamshidi, M., Lumia, R., Mullins, J., and Shahinpoor, M. 1992. Robotics and Manufacturing: Recent Trends in Research, Education, and Applications, Vol. 4. ASME Press, New York.
Latombe, J. C. 1991. Robot Motion Planning. Kluwer Academic, Boston, MA.
Lee, K.-M. and Blenis, R. 1994. Design concept and prototype development of a flexible integrated vision system. J. Robotic Syst. 11(5):387–398.
Lee, K.-M. and Li, D. 1991. Retroreflective vision sensing for generic part presentation. J. Robotic Syst. 8(1):55–73.
Leu, M. C. 1985. Robotics software systems. Rob. Comput. Integr. Manuf. 2(1):1–12.
Lewis, F. 1996. Robotics. In CRC Handbook of Mechanical Engineering, F. Kreith, Ed. CRC Press, Boca Raton, FL.
Lewis, F. L., Abdallah, C. T., and Dawson, D. M. 1993. Control of Robot Manipulators. Macmillan, New York.
Lewis, F. L. and Huang, H.-H. 1995. Manufacturing dispatching controller design and deadlock avoidance using a matrix equation formulation, pp. 63–77. Proc. Workshop Modeling, Simulation, Control Tech. Manuf., SPIE Vol. 2596, R. Lumia, organizer. Philadelphia, PA, Oct.
Lewis, F. L., Liu, K., and Yesildirek, A. 1995. Neural net robot controller with guaranteed tracking performance. IEEE Trans. Neural Networks 6(3):703–715.


Lozano-Perez, T. 1983. Robot programming. Proc. IEEE 71(7):821–841.
Nichols, H. R. and Lee, M. H. 1989. A survey of robot tactile sensing technology. Int. J. Robotics Res. 8(3):3–30.
Panwalker, S. S. and Iskander, W. 1977. A survey of scheduling rules. Operations Res. 26(1):45–61.
Pertin-Trocaz, J. 1989. Grasping: a state of the art. In The Robotics Review 1, O. Khatib, J. Craig, and T. Lozano-Perez, Eds., pp. 71–98. MIT Press, Cambridge, MA.
Pugh, A., Ed. 1983. Robotic Technology. IEE Control Engineering Ser. 23, Peregrinus, London.
Samson, C. and Ait-Abderrahim, K. 1991. Feedback control of a nonholonomic wheeled cart in Cartesian space, pp. 1136–1141. Proc. IEEE Int. Conf. Robotics Automation, April.
Saridis, G. N. 1996. Architectures for intelligent control. In Intelligent Control Systems, M. M. Gupta and R. Sinha, Eds. IEEE Press, New York.
Shimano, B. E., Geschke, C. C., and Spalding, C. H., III. 1984. VAL-II: a new robot control system for automatic manufacturing, pp. 278–292. Proc. Int. Conf. Robotics, March 13–15.
SILMA. 1992. SILMA CimStation Robotics Technical Overview. SILMA Inc., Cupertino, CA.
Snyder, W. E. 1985. Industrial Robots: Computer Interfacing and Control. Prentice-Hall, Englewood Cliffs, NJ.
Spong, M. W. and Vidyasagar, M. 1989. Robot Dynamics and Control. Wiley, New York.
Tzou, H. S. and Fukuda, T. 1992. Precision Sensors, Actuators, and Systems. Kluwer Academic, Boston, MA.
Warfield, J. N. 1973. Binary matrices in system modeling. IEEE Trans. Syst., Man, Cybernetics SMC-3(5):441–449.
Wolter, J., Chakrabarty, S., and Tsao, J. 1992. Methods of knowledge representation for assembly planning, pp. 463–468. Proc. NSF Design and Manuf. Sys. Conf., Jan.
Wright, P. K. and Cutkosky, M. R. 1985. Design of grippers. In The Handbook of Industrial Robotics, S. Nof, Ed., Chap. 21. Wiley, New York.
Wysk, R. A., Yang, N. S., and Joshi, S. 1991. Detection of deadlocks in flexible manufacturing cells. IEEE Trans. Robotics Automation 7(6):853–859.
Zheng, Y. F., Ed. 1993. Recent Trends in Mobile Robots. World Scientific, Singapore.
Zhou, C. 1996. Planning and intelligent control. In CRC Handbook of Mechanical Engineering, F. Kreith, Ed. CRC Press, Boca Raton, FL.
Zhou, M.-C., DiCesare, F., and Desrochers, A. D. 1992. A hybrid methodology for synthesis of Petri net models for manufacturing systems. IEEE Trans. Robotics Automation 8(3):350–361.

Further Information

For further information one is referred to the chapter on “Robotics” by F. L. Lewis in the CRC Handbook of Mechanical Engineering, edited by F. Kreith, CRC Press, 1996. Also useful are the robotics books by Craig (1989), Lewis, Abdallah, and Dawson (1993), and Spong and Vidyasagar (1989).
