Development of a Personal Service Robot with User-Friendly Interfaces

Jun Miura, Yoshiaki Shirai, Nobutaka Shimada, Yasushi Makihara, Masao Takizawa, and Yoshio Yano
Dept. of Computer-Controlled Mechanical Systems,
Osaka University, Suita, Osaka 565-0871, Japan
{jun,shirai,shimada,makihara,takizawa,yano}@cv.mech.eng.osaka-u.ac.jp

Preprints of the 4th Int. Conf. on Field and Service Robotics, pp. 293-298, July 2003.

Abstract

This paper describes a personal service robot developed for assisting a user in his/her daily life. One of the important aspects of such robots is user-friendliness in communication; in particular, making it easy for the user to assist the robot is essential for the robot to perform various kinds of tasks. Our robot has the following three features: (1) interactive object recognition, (2) robust speech recognition, and (3) easy teaching of mobile manipulation. The robot is applied to the task of fetching a can from a distant refrigerator.

1 Introduction

The personal service robot is one of the promising areas to which robotic technologies can be applied. As we face the "aging society", the need for robots which can help humans in various everyday situations is increasing. Possible tasks of such robots include: bringing a user-specified object to the user in bed, cleaning a room, serving as a mobility aid, and social interaction.

Several projects on personal service robots are currently under way. HERMES [2, 3] is a humanoid robot that can perform service tasks such as delivery using vision- and conversation-based interfaces. The MORPHA project [1] aims to develop two types of service robots, a robot assistant for household and elderly care and a manufacturing assistant, by integrating various robotics technologies such as human-machine communication, teaching methodologies, motion planning, and image analysis. CMU's Nursebot project [10] has been developing a personal service robot for assisting elderly people in their daily activities based on communication skills; a probabilistic algorithm is used for generating timely and user-friendly robot behaviors [9].

One of the important aspects of such robots is user-friendliness. Since personal service robots are usually used by novices, they are required to provide easy interaction methods. Moreover, since personal service robots are expected to work in various environments, it is difficult to give a robot a complete set of required skills and knowledge in advance; teaching the robot on the job is therefore indispensable. In other words, the user's assistance to the robot is necessary and should be easy to give.

Fig. 1: Features of our personal service robot: interactive object recognition, robust speech recognition, and easy teaching of mobile manipulation.

We are developing a personal service robot which has the following three features (see Fig. 1):

1. Interactive object recognition.
2. Robust speech recognition.
3. Easy teaching of mobile manipulation.

The following sections describe these features and experimental results.

The current target task of our robot is fetching a can or a bottle from a distant refrigerator. The task is roughly divided into the following: (1) movement to/from the refrigerator, (2) manipulation of the refrigerator and a can, and (3) recognition of a can in the refrigerator. In the third task, verbal interaction between the user and the robot is essential to the robustness of the recognition process.

2 Interactive Object Recognition

This section explains our interactive object recognition method, which actively uses dialog with the user [6].

2.1 Registration of Object Models

The robot registers models of objects to be recognized in advance. A model consists of the size, representative colors (primary features), and secondary features (the color, the position, and the area of uniform regions other than the representative colors). Secondary features are used only when there are multiple objects with the same representative colors. For model registration, the robot makes a "developed image" by mosaicing images captured from eight directions while the robot rotates an object. Fig. 2 shows the acquisition of developed images for two types of objects.
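The registered model can be pictured as a small data structure. The following Python sketch is purely illustrative; the class and field names are our own assumptions, not the authors' implementation:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

RGB = Tuple[int, int, int]

@dataclass
class ColorRegion:
    """A uniform color region other than the representative color
    (a secondary-feature candidate)."""
    color: RGB                 # mean color of the region
    position: Tuple[int, int]  # location in the developed image
    area: int                  # region size in pixels

@dataclass
class Interval:
    """A range of viewing directions over which similar features
    are observed (see Sec. 2.1 and Fig. 3(a))."""
    start_deg: float
    end_deg: float
    representative_color: RGB                       # primary feature
    secondary: List[ColorRegion] = field(default_factory=list)

@dataclass
class ObjectModel:
    name: str                  # e.g., "blue PET bottle"
    height_mm: float           # object size
    intervals: List[Interval]  # first-level intervals over 360 degrees
```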


Fig. 2: Procedure for constructing a developed image. (a) original image, (b) piece image, (c) developed image; top: can, bottom: square-type PET bottle.

Fig. 3: Extraction of features. (a) example of interval, (b) segmented image, (c) other similar color regions, (d) features. In (d), top line: representative color; middle line: color of secondary feature 1; bottom line: color of secondary feature 2.

Since primary and secondary features depend on the viewing direction, we determine intervals of directions in which similar features are observed. In the case of Fig. 3(a), for example, two intervals, I1 (white) and I2 (blue), are determined. If two objects are not distinguishable within an interval, it is further divided into subintervals using secondary features. Candidates for secondary features are extracted as follows. The robot first segments a developed image into uniform regions (see Fig. 3(b)) to extract the primary features used for the first-level intervals. The robot then extracts uniform color regions other than the representative colors (see Fig. 3(c)) and records the size, the position, and the color of such regions as candidates for secondary features (see Fig. 3(d)). Secondary features of an object are incrementally registered to its model every time another object that has a similar feature and is indistinguishable in several viewing directions is added to the database.

2.2 Object Recognition

The robot first extracts candidate regions for objects based on the object color, which is specified by the user or is determined from a user-specified object name. Then it determines the type of each candidate from its shape; for example, a can has a rectangular shape in an image.

For each candidate, the robot checks whether its size is comparable with that of the corresponding object model. If no secondary features are registered in the model, the recognition finishes with success. Otherwise, the robot tries matching using secondary features. Fig. 4 shows an example matching process. Fig. 4(c) shows that two candidates are found using only the primary feature (representative color).

Fig. 4: Matching with object models. (a) input image, (b) candidate region, (c) can candidates, (d) recognition result (black: secondary feature).

Using a secondary feature, the two candidates are distinguished.
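A minimal sketch of this matching logic, assuming the model structure sketched above and a simple Euclidean color distance (the actual similarity measures and thresholds are not given in the paper and are assumptions here):

```python
COLOR_TH = 30.0  # max color distance counted as "same color" (assumed)
POS_TH = 15      # max positional deviation in pixels (assumed)

def color_dist(c1, c2):
    """Euclidean RGB distance; a stand-in for the system's measure."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

def near(p1, p2):
    return all(abs(a - b) <= POS_TH for a, b in zip(p1, p2))

def match_candidate(cand_height, cand_color, cand_regions, model,
                    size_tol=0.2):
    """Match one candidate region against an ObjectModel."""
    # 1. The candidate size must be comparable with the model size.
    if abs(cand_height - model.height_mm) > size_tol * model.height_mm:
        return False
    for iv in model.intervals:
        # 2. Primary feature: the representative color must match.
        if color_dist(cand_color, iv.representative_color) > COLOR_TH:
            continue
        # 3. With no secondary features registered, this suffices.
        if not iv.secondary:
            return True
        # 4. Otherwise every secondary feature must be found among the
        #    candidate's uniform color regions.
        if all(any(color_dist(r.color, s.color) <= COLOR_TH and
                   near(r.position, s.position) for r in cand_regions)
               for s in iv.secondary):
            return True
    return False
```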

Since the lighting condition in the recognition phase may differ from that in the learning phase, we have developed a method for adjusting colors based on the observed color of a reference object, such as the door of a refrigerator [7].
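A crude per-channel version of such an adjustment might look as follows; this linear scaling is only a sketch of the idea, and the actual method is the one described in [7]:

```python
def adjust_color(observed, ref_observed, ref_registered):
    """Map an observed color into the lighting of the learning phase by
    scaling each channel so that the reference object's observed color
    becomes its registered color (simplified linear model, an assumption)."""
    gains = [reg / max(obs, 1) for reg, obs in zip(ref_registered, ref_observed)]
    return tuple(min(255, round(c * g)) for c, g in zip(observed, gains))
```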

2.3 Recognition Supported by Dialog

If the robot fails to find a target object, it tries to obtain additional information through a dialog with the user. Currently, the user is supposed to be able to see the refrigerator through a remote display. We consider the following failure cases: (1) multiple objects are found; (2) no objects are found but candidate regions are found; (3) no candidate regions are found, due to (a) partial occlusion or (b) color change.

In this dialog, it is important for the robot to generate good questions which can retrieve an informative answer from the user. We here explain case (3)-(a) in detail. In this case, the robot asks the user for the approximate position of the target, for example: "I have not found it. Where is it?" The user may then answer: "It is behind A" (A is the name of an occluding object). Using this advice, the robot first searches for object A in the refrigerator (see Fig. 5(b)). Then it searches both sides of the occluding object for regions of the representative color of the target object and extracts the vertical edge corresponding to the object boundary (see Fig. 5(c)). Finally, the robot determines the position of the edge on the other side of the boundary using the size of the target object (see Fig. 5(d)).
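The geometric core of this search can be sketched as follows; the function names and the 1D bounding-box abstraction are our own simplification of the image-processing steps:

```python
def side_search_windows(occluder_x0, occluder_x1, target_width, image_width):
    """Return the horizontal image ranges to search on both sides of the
    occluding object A for the target's representative color."""
    return {
        "left":  (max(0, occluder_x0 - target_width), occluder_x0),
        "right": (occluder_x1, min(image_width, occluder_x1 + target_width)),
    }

def hidden_edge(visible_edge_x, target_width, side):
    """Infer the occluded boundary from the known target width: if the
    target peeks out on the left of A, its right edge is hidden behind A,
    and vice versa."""
    return (visible_edge_x + target_width if side == "left"
            else visible_edge_x - target_width)
```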

3 Robust Speech Recognition

Many existing dialog-based interface systems assume that a speech recognition (sub)system always works well. However, since the dialog with a robot is usually held in environments where various noises exist, such an assumption is hard to justify. There is another problem: a user, who is usually not an expert in robot operation, will most probably use words that are not registered in the robot's database.

Fig. 5: Recognition of an occluded object. (a) input image, (b) occluding object, (c) region of target object, (d) recognition result.

Therefore, the dialog system has to be able to cope with speech recognition failures and unknown words [11].

3.1 Overview of the Speech Recognition System

We use IBM's ViaVoice as the speech recognition engine. Fig. 6 shows an overview of our speech recognition system. We first apply a context-free grammar (CFG)-based recognition engine to the voice input. If it succeeds, the recognition result is sent to an image recognition module. If it fails to identify some words due to, for example, noise or unknown words, the input is then processed by a dictation-oriented engine, which generates a set of probable candidate texts. Usually, in a candidate text, some words are identified (i.e., determined to be registered ones) and the others are not. The unidentified words are then analyzed to estimate their meanings by considering their relation to the identified words. For example, if an unidentified word has a pronunciation similar to a registered word, and if its category (e.g., part of speech) is acceptable considering the neighboring identified words, the robot supposes that the unidentified word is the registered one and generates a question to the user to verify the supposition. The robot uses probabilistic models of possible word sequences and updates the models through the dialog with each specific user.
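The control flow of Fig. 6 can be summarized by the following sketch; the engine and estimator interfaces are assumptions (ViaVoice's actual API differs), meant only to show the fallback structure:

```python
def recognize_utterance(audio, cfg_engine, dictation_engine,
                        lexicon, estimate, threshold=0.5):
    """Two-stage recognition: CFG-based pass first, dictation-oriented
    pass with unidentified-word estimation as the fallback."""
    words = cfg_engine.recognize(audio)       # grammar-matched pass
    if words is not None:
        return words                          # every word identified
    for text in dictation_engine.candidates(audio):
        tokens = text.split()
        unknown = [w for w in tokens if w not in lexicon]
        # Estimate each unidentified word from its neighbors; `estimate`
        # returns a (registered_word, score) pair, cf. Sec. 3.2.
        guesses = {w: estimate(w, tokens) for w in unknown}
        if all(score >= threshold for _, score in guesses.values()):
            return [guesses[w][0] if w in guesses else w for w in tokens]
    return None  # scores too low everywhere: treat as noise, re-ask user
```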

3.2 Estimating the Meaning of Unidentified Words

We consider that an unidentified word arises in the following three cases: (1) a known word is erroneously recognized; (2) an unknown word which is a synonym of a known word is uttered; (3) noise is erroneously recognized as a word. In addition, we only consider the cases where one unidentified word or two consecutive unidentified words exist in an utterance. The robot evaluates the first two cases (erroneous recognition or unknown word) and selects the estimation with the highest evaluation value. If the highest value is less than a certain threshold, the unidentified word is considered to come from noise.

Fig. 6: Speech recognition system. The voice input (e.g., "Take a blue can") first goes to the CFG-based recognition engine (matching with a grammar); on success (OK), the recognition result is sent to image processing, and on failure (NG), the dictation-oriented engine (generating a string) is applied and the meanings of unidentified words are estimated. Both engines are built on ViaVoice.

Fig. 7: Estimation of category C and word W. State S gives Pc(C|S) and Pw(W|S); the identified words before (Cp, Wp) and after (Cn, Wn) the unidentified word are linked to it through Pc-p(Cp|C), Pc-n(Cn|C), Pw-p(Wp|W), and Pw-n(Wn|W).


The problem of estimating the meaning of an unidentified word is formulated as finding the registered word W with the maximum probability, given state S, context γ, and a text string R generated by the dictation-oriented engine. State S indicates a possible state in the dialog, such as the one where the robot is waiting for the user's first utterance, or the one where it is waiting for an answer to its previous question like "Which one shall I take?" Context γ consists of the identified words before and after the unidentified one under consideration.

Fig. 7 illustrates the estimation of category C and wordW using the probabilistic models:

• Pc−p(Cp|C) is the probability that Cp is uttered just before the utterance of C.

• Pc−n(Cn|C) is the probability that Cn is uttered just after the utterance of C.

• Pw−p(Wp|W) is the probability that Wp is uttered just before the utterance of W.

• Pw−n(Wn|W) is the probability that Wn is uttered just after the utterance of W.

For case (1) (i.e., erroneous recognition of a registered word), we search for the word W which is:

$W^{*} = \arg\max_{W} P(W \mid S, \gamma, R)$.  (1)

For case (2) (i.e., use of a synonym of a registered word), we search for the word W which is:

$W^{*} = \arg\max_{W} P(W \mid S, \gamma)$.  (2)

We here further examine only eq. (1) due to the space limitation. Eq. (1) is rewritten as:

$P(W \mid S, \gamma, R) = \dfrac{\sum_{C}\{P(W \mid S, \gamma, C)\, P(C \mid S, \gamma)\}\, P(R \mid W, S, \gamma)}{\sum_{W} P(W, R \mid S, \gamma)} \approx \dfrac{\sum_{C}\{P(W \mid S, \gamma, C)\, P(C \mid S, \gamma)\}\, P(R \mid W)}{\sum_{W} P(W, R \mid S, \gamma)}$  (3)

where $\sum_{C}$ indicates the summation over categories C whose probability $P(C \mid S, \gamma)$ is larger than a threshold, and $\sum_{W}$ indicates the summation over words W belonging to those categories. Eq. (3) is obtained by considering that a recognized text R depends almost only on the word W; $P(R \mid W)$ is called the pronunciation similarity.

An example of successful recognition of an unidentified word is as follows. A user asked the robot to take a blue PET bottle by uttering "AOI (blue) PETTO BOTORU (PET bottle) WO TOTTE (take)". The robot, however, first recognized the utterance as "OMOI KUU TORABURU WO TOTTE". Since this included unidentified words, the robot estimated their meanings using the above-mentioned method and reached the conclusion that "OMOI" means "AOI" and "KUU TORABURU" means "PETTO BOTORU".

The recognition results for unidentified words are fed back to the system to update the database and the probabilistic models [11].

4 Easy Teaching of Mobile Manipulation

Usually, service robots have to deal with a much wider range of tasks (i.e., operations and environments) than industrial ones. An easy, user-friendly teaching method is therefore desirable for such service robots. Among previous teaching methods, direct methods (e.g., using a teaching box) are intuitive and practical but require considerable effort from the user, while indirect methods (e.g., teaching by demonstration [5, 4]) are easy but still require further improvement of the robot's abilities before deployment.

We therefore use a novel teaching method for a mobile manipulator which lies between the above two approaches. In this method, a user teaches the robot a nominal trajectory of the hand and its tolerance for achieving a task. In the teaching phase, the user does not have to explicitly consider the structure of the robot; he/she teaches the movement of the hand in object-centered coordinates. The tolerance plays an important role when the robot generates an actual trajectory in the subsequent playback phase: although the nominal trajectory may be infeasible due to the structural limitation, the robot can search for a feasible one within the given tolerance. Only when the robot fails to find a feasible trajectory does it plan a movement of the mobile base; that is, the redundancy provided by the mobile base acts as another tolerance in trajectory generation. Since the robot autonomously plans a necessary movement of the base, the user does not have to consider whether such a movement is needed. The teaching method is intuitive and does not require much effort from the user.

Fig. 8: A nominal trajectory for opening a door (segments A-B, B-C, C-D, and D-E in the X-Y object-centered coordinates of the refrigerator).

Fig. 9: Coordinates on segments. (a) straight line segment, (b) circular segment.

In addition, it does not assume high recognition and inference abilities of the robot, because the nominal trajectory given by the user carries much of the information needed for planning feasible motions; the robot does not need to generate a feasible trajectory from scratch.

The following subsections explain the teaching method, using the task of opening the door of a refrigerator as an example.

4.1 Nominal Trajectory

A nominal trajectory is the trajectory of the hand pose (position and orientation) in a 3D object-centered coordinate system. Among the feasible trajectories that achieve the task, the user arbitrarily selects one that he/she can easily specify. To simplify trajectory teaching, we currently impose the limitation that a trajectory of hand positions is composed of circular and/or straight line segments.

Fig. 8 shows a nominal trajectory for opening a door, which is composed of straight segments AB and BC and circular segments CD and DE, set on horizontal planes; on segment CD, the robot roughly holds the door, while on segment DE, it pushes the door at a different height. The axes in the figure are those of the object-centered coordinates. On the two straight segments, the hand orientation is parallel to segment BC; on circular segment CD, the hand is aligned with the radial direction of the circle at each point; on circular segment DE, the hand tries to keep aligned with the tangential direction of the circle.
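One way to represent such a trajectory is as an ordered list of typed segments, each carrying its hand-orientation rule. The following sketch uses our own names; it is not the authors' data format:

```python
from dataclasses import dataclass
from typing import List, Tuple, Union

Point3 = Tuple[float, float, float]  # object-centered (x, y, z)

@dataclass
class StraightSegment:
    start: Point3
    end: Point3
    hand_rule: str = "parallel"      # e.g., keep parallel to segment BC

@dataclass
class CircularSegment:
    center: Point3                   # e.g., the door hinge axis
    radius: float
    start_deg: float
    end_deg: float
    hand_rule: str = "radial"        # "radial" (CD) or "tangential" (DE)

# A nominal trajectory such as A-B-C-D-E in Fig. 8 is then simply:
NominalTrajectory = List[Union[StraightSegment, CircularSegment]]
```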

4.2 Tolerance

A user-specified trajectory may not be feasible (executable) due to the structural limitation of the manipulator. In our method, therefore, a user gives not only a nominal trajectory but also its tolerance. A tolerance indicates the acceptable deviations from a nominal trajectory for performing a task; if the hand stays within the tolerance over the entire trajectory, the task is achievable. A user teaches a tolerance without explicitly considering the structural limitation of the robot. Given a nominal trajectory and its tolerance, the robot searches for a feasible trajectory.

Fig. 10: An example tolerance for opening the door (side and top views of the door).

A user sets a tolerance for each straight or circular segment using a coordinate system attached to each point on the segment (see Fig. 9). In these coordinate systems, a user can teach a tolerance of positions relatively intuitively, as a kind of width of the nominal trajectory. Fig. 10 shows an example of setting a tolerance for circular segment CD in Fig. 8, which is for opening the door.
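In the per-point frame of Fig. 9, a tolerance check reduces to comparing local deviations against per-axis bounds. A minimal sketch, assuming symmetric bounds (per-side bounds work the same way):

```python
def within_tolerance(deviation, tolerance):
    """deviation: hand-position offset from the nominal point, expressed
    in the local frame attached to that point (Fig. 9);
    tolerance: allowed absolute offset along each local axis (assumed
    symmetric here for brevity)."""
    return all(abs(d) <= t for d, t in zip(deviation, tolerance))

# Example: offsets of (0, 5, -3) mm against bounds of (1, 20, 10) mm.
assert within_tolerance((0.0, 5.0, -3.0), (1.0, 20.0, 10.0))
```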

4.3 Generating Feasible Trajectories

The robot first tries to generate a feasible trajectory within the given tolerance. Only when it fails to find a feasible one does it divide the trajectory into sub-trajectories such that each sub-trajectory can be performed without movement of the base; it also plans the movements between performing the sub-trajectories.

4.3.1 Trajectory Division Based on Feasible Regions

The division of a trajectory is done as follows. The robot first sets via points on the trajectory at a certain interval (see Fig. 11). When generating a feasible trajectory, the robot repeatedly determines feasible poses (positions and orientations) of the hand at these points (see Sec. 4.3.2).

For each via point, the robot calculates a region on the floor, in the object coordinates, such that if the mobile base is in the region, there is at least one feasible hand pose. By calculating the intersection of these regions, the robot determines the region on the floor from which it can make the hand follow the entire trajectory. Such an intersection is called a feasible region of the task (see Fig. 12).

Feasible regions are used for the trajectory division. To determine whether a trajectory needs division, the robot picks up one via point after another along the trajectory and repeatedly updates the feasible region. If the size of the region becomes less than a certain threshold, the trajectory is divided at the corresponding via point. This operation continues until the endpoint of the trajectory is processed. Fig. 13 shows example feasible regions for the door-opening trajectory of Fig. 8. The entire trajectory is divided into two parts at point V, and two corresponding feasible regions are generated; the sketch below illustrates this division procedure.
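A minimal sketch of the division loop, assuming `region_of(p)` returns the floor region for via point p as a polygon object supporting `.intersection()` and `.area` (e.g., a shapely Polygon); both this interface and the area threshold are illustrative assumptions:

```python
def divide_trajectory(via_points, region_of, min_area):
    """Split the via-point sequence wherever the running intersection of
    per-point base regions (the feasible region) becomes too small."""
    parts, current, feasible = [], [], None
    for p in via_points:
        r = region_of(p)
        shrunk = r if feasible is None else feasible.intersection(r)
        if feasible is not None and shrunk.area < min_area:
            # Feasible region collapsed: cut here (cf. point V in Fig. 13)
            # and start a new sub-trajectory from this via point.
            parts.append((current, feasible))
            current, shrunk = [], r
        current.append(p)
        feasible = shrunk
    parts.append((current, feasible))
    return parts  # list of (via points, feasible base region) pairs
```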

Fig. 11: Via points.

Fig. 12: Calculation of a feasible region for via points.

Fig. 13: Example feasible regions (the trajectory is divided at point V).

Fig. 14: On-line generation of a feasible trajectory (nominal trajectory, via points, tolerance, infeasible region, and generated trajectory).

4.3.2 On-line Trajectory Generation

A feasible trajectory is generated by iteratively searching for feasible hand poses over a sequence of via points. This trajectory generation is performed on-line because the relative position between the robot and the manipulated objects varies each time due to the uncertainty in the movement of the robot base. The robot estimates the relative position before trajectory generation. Previously calculated trajectories can, however, be used as guides for calculating the current trajectory; all trajectories are expected to be similar to each other as long as the uncertainty in movement is reasonably limited.

Fig. 14 illustrates how a feasible trajectory is generated. In the figure, small circles indicate via points on a given nominal trajectory; the two dashed lines indicate the boundary of the tolerance; the hatched region indicates where the robot cannot take the corresponding hand pose due to the structural limitation. A feasible trajectory is generated by searching for a sequence of hand poses which are within the tolerance and near the given via points (the two squares indicate selected via points). The bold line in the figure indicates the generated feasible trajectory.

Fig. 15: Our service robot (mobile base, 6-DOF arm with hand, laser range finder, hand camera, main camera, 3-axis force sensor, and host computer).

Fig. 16: Obstacle avoidance. (a) movement of the robot, (b) an obstacle map and a planned movement.

Fig. 17: Fetching a can from a refrigerator (approach, open, grasp, close, carry, hand over).

In the actual trajectory generation, the robot searches the six-dimensional space of hand poses (position and orientation) for a feasible trajectory.
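The per-via-point search can be sketched greedily as follows; `candidate_poses(p)` is assumed to enumerate sampled 6-DOF hand poses within the tolerance around via point p, nearest-to-nominal first (previous trajectories could seed this ordering), and `is_feasible(pose)` encapsulates the kinematic check. Both interfaces are assumptions:

```python
def generate_feasible_trajectory(via_points, candidate_poses, is_feasible):
    """Greedy on-line trajectory generation (cf. Fig. 14): pick, at each
    via point, the feasible hand pose closest to the nominal one."""
    trajectory = []
    for p in via_points:
        pose = next((q for q in candidate_poses(p) if is_feasible(q)), None)
        if pose is None:
            # No feasible pose within the tolerance: the mobile base must
            # move (Sec. 4.3.1) before this sub-trajectory can continue.
            return None
        trajectory.append(pose)
    return trajectory
```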

While executing the generated trajectory, it is sometimes necessary to estimate the object position. Currently, we manually give the robot the set of sensing operations necessary for this estimation.

5 Manipulation and Motion Experiments

Fig. 15 shows our personal service robot. The robot is a self-contained mobile manipulator with various sensors. In addition to the above-mentioned functions, the robot needs the ability to move between a user and a refrigerator. The robot uses a laser range finder (LRF) for detecting obstacles and estimating its ego-motion [8]. It uses the LRF and vision for detecting and locating refrigerators and users.

Fig. 16 shows a collision-avoidance movement of the robot. Fig. 17 shows snapshots of the operation of fetching a can from a refrigerator for a user.

6 Summary

This paper has described our personal service robot. The main feature of the robot is its user-friendly human-robot interfaces, which include interactive object recognition, robust speech recognition, and easy teaching of mobile manipulation.

Currently, the two subsystems (object and speech recognition, and teaching of mobile manipulation) are implemented separately. We are now integrating them into one prototype system for more intensive experimental evaluation.

Acknowledgment

This research is supported in part by a Grant-in-Aid for Scientific Research from the Ministry of Education, Culture, Sports, Science and Technology, and by the Kayamori Foundation of Informational Science Advancement.

References

[1] Morpha project, http://www.morpha.de/.

[2] R. Bischoff. HERMES – a humanoid mobile manipulator for service tasks. In Proc. of FSR-97, pp. 508–515, 1997.

[3] R. Bischoff and V. Graefe. Dependable multimodal communication and interaction with robotic assistants. In Proc. of ROMAN-2002, pp. 300–305, 2002.

[4] M. Ehrenmann, O. Rogalla, R. Zollner, and R. Dillmann. Teaching service robots complex tasks: Programming by demonstration for workshop and household environments. In Proc. of FSR-2001, pp. 397–402, 2001.

[5] K. Ikeuchi and T. Suehiro. Toward an assembly plan from observation part I: Task recognition with polyhedral objects. IEEE Trans. on Robotics and Automation, Vol. 10, No. 3, pp. 368–385, 1994.

[6] Y. Makihara, M. Takizawa, Y. Shirai, J. Miura, and N. Shimada. Object recognition supported by user interaction for service robots. In Proc. of ICPR-2002, pp. 561–564, 2002.

[7] Y. Makihara, M. Takizawa, Y. Shirai, and N. Shimada. Object recognition in various lighting conditions. In Proc. of SCIA-2003, 2003 (to appear).

[8] J. Miura, Y. Negishi, and Y. Shirai. Mobile robot map generation by integrating omnidirectional stereo and laser range finder. In Proc. of IROS-2002, pp. 250–255, 2002.

[9] J. Pineau, M. Montemerlo, M. Pollack, N. Roy, and S. Thrun. Towards robotic assistants in nursing homes: Challenges and results. Robotics and Autonomous Systems, Vol. 42, No. 3-4, pp. 271–281, 2003.

[10] N. Roy, G. Baltus, D. Fox, F. Gemperle, J. Goetz, T. Hirsch, D. Magaritis, M. Montemerlo, J. Pineau, J. Schulte, and S. Thrun. Towards personal service robots for the elderly. In Proc. of WIRE-2000, 2000.

[11] M. Takizawa, Y. Makihara, N. Shimada, J. Miura, and Y. Shirai. A service robot with interactive vision – object recognition using dialog with user –. In Workshop on Language Understanding and Agents for Real World Interaction, 2003 (to appear).