
Source: http://130.243.105.49/~aakq/publications/RomartCity2016.pdf (2016-09-08)

Children playing with robots using stigmergy on a smart floor

Ali Abdul Khaliq, Federico Pecora, Alessandro Saffiotti
AASS Cognitive Robotic Systems Lab, Örebro University, Örebro, Sweden

Contact email: [email protected]

Abstract— Reliable and safe interaction is essential when humans and robots move in close proximity. In this paper, we present a stigmergic approach where humans interact with robots via a smart floor. Stigmergy has been widely studied in robotic systems; however, HRI has thus far not availed itself of stigmergic solutions. We realize a stigmergic medium via RFID tags embedded in the floor, and use these to enable robot navigation, human tracking, and interaction between robots and humans. The proposed method allows employing robots with minimal sensing and computation capabilities. The approach relies only on the RFID sensors and the information stored in the tags, and no internal map is required for navigation. We design and implement a prototype game which involves a robot and a child moving together in a shared space. The prototype demonstrates that the approach is reliable and adheres to given safety constraints when human and robot are moving within close proximity of each other.

I. INTRODUCTION

Symbiotic robots are receiving increasing attention as potential candidates for providing assistance to humans in disparate contexts, e.g., elderly and child care, assisting humans in hospitals, edutainment, and carrying loads in industrial applications. In such applications, it is usually the case that humans and robots must interact, sometimes closely. Interaction between robot and human is typically separated into two categories depending on their respective locations. On one hand, remote interaction, in which the human and the robot are not co-located and have separate working environments, such as in teleoperation and supervisory control. On the other, proximity interaction, where the robot and the human are co-located and have close spatial interactions, as, e.g., in assistive and service robots [1].

In this paper we focus on proximity interaction. In these applications, robots typically employ a range of sensing devices to enable interaction (e.g., human following), including cameras installed on the robot and/or in the environment, laser range finders, ultrasonic transponders, etc. In our approach, the robot does not require many sensors for interaction; rather, interaction is mediated via a smart floor. Specifically, we show: (1) how a smart floor can be used for enabling interaction between humans and robots with minimal sensing capabilities; (2) how a smart floor eliminates the need for an internal map of the environment for the purpose of robot navigation; (3) that our approach is safe when human and robot are moving within close proximity. We propose a stigmergic approach where the robot uses the information stored in the environment, namely, RFID tags under the floor. The robot, which is equipped with RFID readers, reads the information stored in the RFID tags and interacts with the human, who also wears an RFID reader.

II. RELATED WORK

Human-robot interaction has been widely studied in the academic community. Researchers are focusing on Human-Robot Interaction (HRI) in several application areas; those related to our work deal with interaction with children, human following, and assistive and educational applications.

a) Human-following robots: Human-following is typically seen as an essential part of HRI. The conventional method for realizing human-following robots, the non-contact method [2], typically consists of algorithms for tracking a specific person. CCD cameras are exploited in [3] to obtain high-resolution pictures for recognizing a target person. Stereo-vision is used in [4], [5] for human tracking. However, these algorithms fail when humans leave the field of view of the camera, under poor lighting conditions, and when rapid movement is required. A network of cameras on the ceiling is used in [6] to determine the position of the human; here, too, lighting conditions remain an issue. Some tracking systems detect humans based on the color and texture of their clothing, distinguishing them from cluttered backgrounds using a camera installed on the robot [7]. This approach, however, requires humans to wear special clothes that can be easily distinguished from the environment, and also depends on lighting conditions for color- and texture-based detection. Chu et al. [2] propose an unconventional method of human-following with a tether steering mechanism. This method is useful in cluttered environments for safe navigation and does not use cameras, laser range finders, or ultrasonic sensors. However, this system limits the autonomy of the robot, and is inherently limited to person following.

In contrast to all the above-mentioned approaches, we exploit a stigmergic mechanism to store information in the RFID floor, which is then used by robots to interact with (e.g., follow) humans. Our approach is computationally inexpensive, as it does not depend on vision-based sensory equipment or other complex sensor data interpretation algorithms. Furthermore, our approach is not limited to person following, and can be used to perform other tasks that involve synchronizing with human movements, e.g., tracking multiple humans or keeping a desired distance from humans.

b) Interaction with children: Robots are increasingly seen as tools to promote or facilitate education for school children. In many studies, Lego robots are used for such

purposes, e.g., to teach physics and mathematics, and to train the ability to solve logical problems [8], [9], [10]. In [11], robots were used to teach language to school children, and Tomic et al. [12] propose an assistive robot that helps children in school at the hospital. Robots may also provide assistance to children with disabilities. In [13], [14], [15], researchers developed robots suitable for children with autism. Moreover, several robotic toys for children have been developed, including a storyteller robotic doll [16], a football-like rolling robot [17], and a robotic dinosaur [18].

The above-mentioned robots can significantly contribute to the education and/or mental care of children. However, they are typically limited in their physical interaction with children. Our approach enables safe proximity of children and large, moving robots. Our use-case exploits this to obtain a game that stimulates a child's mental and physical development at the same time.

c) RFID floor: The use of RFID floors has recently received attention due to their ability to provide reliable and robust navigation in indoor environments. Researchers exploit stigmergy by storing distance information in the RFID floor, which is then used by one or more robots for navigation to predefined goal positions [19], [20], [21], [22]. In [23], the robot uses the information stored in the RFID floor to navigate to arbitrary positions. Herianto et al. [24] store pheromone-based potential fields in RFID tags and use a kernel method to create a navigation gradient.

In this paper, we follow the above-mentioned RFID approach, extending it beyond simple navigation tasks to achieve human-robot interaction via smart floors.

III. PROPOSED APPROACH

We propose a stigmergic algorithm and the use of RFID tags for providing the stigmergic medium within the environment. The RFID tags are embedded in the floor and organized in a hexagonal grid in which each RFID tag is placed at the same (geometrical) distance from its neighboring tags. Mathematically, this structure can be represented by a graph (X, E), where the vertices X = {x1, . . . , xn} are tags, and the arcs E connect 6-neighbor cells of the hexagonal grid. Given two nodes x, y, we denote by dist(x, y) the minimum distance between x and y in terms of number of arcs. Our algorithm is stigmergic in nature since information is stored in the environment through the stigmergic medium, rather than being stored in the memory of individual agents. This stored information is used by the same or a different agent later in time.
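As an illustration only (the paper provides no code), the hexagonal grid and the hop metric dist(x, y) can be sketched in Python using axial coordinates; the coordinate convention and function names are our own assumptions, not part of the original system:

```python
from collections import deque

# Axial hexagonal coordinates: every cell has exactly six neighbors,
# matching the 6-neighbor arcs E of the grid graph (X, E).
HEX_DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def neighbors(cell):
    """The six tags adjacent to `cell` in the hexagonal grid."""
    q, r = cell
    return [(q + dq, r + dr) for dq, dr in HEX_DIRS]

def dist(x, y):
    """Minimum number of arcs between tags x and y (breadth-first search)."""
    frontier = deque([(x, 0)])
    seen = {x}
    while frontier:
        cell, d = frontier.popleft()
        if cell == y:
            return d
        for n in neighbors(cell):
            if n not in seen:
                seen.add(n)
                frontier.append((n, d + 1))
```

For example, dist((0, 0), (2, -1)) evaluates to 2, which matches the closed-form axial hex distance (|q| + |r| + |q + r|) / 2.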

The information placed in RFID tags can be used to perform navigation tasks. In this case, this information relates to distances between nodes. Our approach to navigation consists of two phases: (1) building a goal map for predefined goal positions, explained in Section IV; and (2) estimating the distances to arbitrary goals for which the goal map was not built, explained in Section V.

IV. RFID GOAL MAP BUILDING

We utilize the above stigmergic approach to build a goal map on the RFID floor. Each RFID tag stores the distance to a predefined goal as the number of grid cells on the shortest collision-free path from the RFID tag to the given goal. A goal can be seen as a user-defined position which is relevant to the specific domain, and is marked manually before starting the map building process. Given a set of predefined goals G = {g1, . . . , gk} ⊆ X, each RFID tag stores the distance to each of the k goals in a separate field. We assume that each cell x ∈ X has M read/write fields which can be addressed individually. The value stored in field i of cell x is denoted by vi(x). In particular, v1(x), . . . , vk(x) will store the distance maps corresponding to our k distinct goal locations (k < M). We call each of these distance maps a sub-goal map.

The algorithm for building the sub-goal maps is a variant of the standard Bellman-Ford algorithm, and can be summarized as follows (see our previous work [20] for a more detailed discussion). Let x be any tag on the RFID floor, and let xj denote the tag at predefined goal j, j ∈ {1, . . . , k}, where k is the number of distinct predefined goals. Each tag x stores k values, denoted by vj(x), j ∈ {1, . . . , k}: one value for each sub-goal map. These values are physically stored in the k memory fields of the RFID tag, which can be addressed individually. Robots store k counters, denoted by count[j], j ∈ {1, . . . , k}. During mapping, a robot navigates on the RFID floor; it sets count[j] to 0 when it reaches predefined goal xj; it then navigates away from the goal, incrementing count[j] every time it encounters another tag. The counters thus represent the distance traveled by the robot from the predefined goal, and are used by the algorithm to build the k predefined goal maps. The value vj(x) of a tag x is updated with min(vj(x), count[j]), which provides the current estimate of the distance from node x to goal gj. While mapping, whenever the robot detects an obstacle, it makes a random rotation and moves in another direction. An example of a sub-goal map built with this process is shown in Figure 1.
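The per-tag update can be sketched as follows. This is a minimal simulation of the counter scheme described above, not the robots' actual firmware; the data layout (Python lists for the tag fields and the counters) is our own assumption:

```python
INF = float("inf")  # initial value of every unwritten tag field

def mapping_step(tag_fields, count, goals_here):
    """One update as the mapping robot drives over a tag.

    tag_fields: the k stored values v_j(x) of the tag just encountered
    count:      the robot's k counters count[j] (modified in place)
    goals_here: indices j for which this tag is the predefined goal x_j
    """
    for j in range(len(tag_fields)):
        if j in goals_here:
            count[j] = 0          # robot reached predefined goal x_j
        else:
            count[j] += 1         # one more tag away from x_j
        # Bellman-Ford-style relaxation: keep the smaller estimate.
        tag_fields[j] = min(tag_fields[j], count[j])
    return tag_fields
```

Because the stored value only ever decreases toward the true shortest-path length, repeated random-walk passes converge to the distance map regardless of the order in which tags are visited.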

Navigation on a goal map can be seen as simple gradient descent over the values vj(x). Any robot can use it to navigate to a predefined goal gj in the set G by moving along the steepest negative gradient of the vj(x) values. While navigating, the robot reads the values vj of the neighboring tags and moves towards the lowest value. Note that the robot does not have any knowledge of its location and orientation; it only relies on the information stored in the neighboring tags. Hence, relying only on local sensing, a robot can find a globally optimal path to the goal [20].
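The descent can be sketched as below, assuming (our assumption, for illustration) that a sub-goal map is already complete and exposed as a dictionary from tag ids to stored values:

```python
def descend(values, neighbors, start, goal):
    """Follow the steepest negative gradient of the stored distance values
    until the goal tag is reached; returns the sequence of visited tags.

    values:    dict tag -> stored sub-goal-map value v_j(tag)
    neighbors: dict tag -> list of adjacent tags (hexagonal 6-neighborhood)
    """
    path = [start]
    cell = start
    while cell != goal:
        # Move to the neighboring tag with the lowest stored value.
        cell = min(neighbors[cell], key=lambda n: values[n])
        path.append(cell)
    return path
```

On a complete sub-goal map every non-goal tag has a neighbor with a strictly smaller value, so the loop terminates at the goal; note that the function needs no robot pose, only the values read from neighboring tags.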

V. ONLINE DISTANCE ESTIMATION ON AN RFID FLOOR

Section IV explains how to build the goal map and navigate to the set G = {g1, . . . , gk} ⊆ X of predefined goals. In most scenarios of practical interest, however, goal positions can change over time. New goal positions may be required, and it may be impractical to assume that all the required goal positions have been included in G in the map building phase. One possible solution to this issue is to initially select the cardinality of G equal to the totality of

Fig. 1. The distance map built for one of the 16 predefined goals.

tags in the RFID floor, and build the sub-goal maps for all these tags. This is impractical due to the limited memory size of tags, as well as the time required to read/write information to/from them. A second option is to build from scratch the goal map for the required goal positions. This, however, is very time consuming.

We thus proposed a less expensive solution in [23], both in terms of memory and time consumption. The solution consists in estimating the distances for a goal position not included in the map by appropriately using the information already present in the map.

A. The distance estimation process

In order to estimate the distances to a new goal position, suppose that we have a goal map that contains the sub-goal maps vi(x) and vj(x) for two goal positions gi and gj, and that we want to navigate to a new goal position gN (see Figure 2(a)).

We define the following two functions:

v′i(x) = |vi(x) − vi(gN)|
v′j(x) = |vj(x) − vj(gN)|     (1)

The value of v′i(x) is zero for any cell x that has the same distance to gi as gN. These cells define a boundary inside and outside of which the value of v′i(x) increases. Intuitively, v′i(x) gives the distance of cell x from that boundary. This function, which we call the watershed of gi passing through gN, is shown graphically in Figure 2(b). (The same arguments hold for v′j.) Finally, an estimate vN of the distances to the goal gN can be obtained by

vN(x) = max(v′i(x), v′j(x))     (2)

Selecting the maximum among the two gradients is reasonable, as it avoids under-estimating the distance to the new goal gN.

In general, it is thus possible to compute a goal-dependent gradient given an initial map built for a predefined set of goals. However, note that merging goal fields by using Equations 1–2 can create both multiple global minima and multiple global maxima. The former are to be avoided because robots navigate using gradient descent; the latter will not preclude reaching the goal via gradient descent, although

Fig. 2. Distance estimation to a new goal point using Equations 1–2. (a) Original goal map vi(x), vj(x) for two predefined goals gi, gj and the new goal gN; (b) watersheds v′i(x), v′j(x) of both predefined goals; (c) combination w(x) of the two watersheds.

they may lead to sub-optimal paths, since the robot will "go around" the local maximum. Multiple global minima can be avoided by including the sub-goal map of another predefined goal gk, not co-aligned with gi and gj: the watershed of gk will only intersect the other two watersheds at gN. This argument suggests that many goals should be used for computing vN to avoid multiple global minima. In our experiments, we use all of the goals in G: vN(x) = max_{gi ∈ G} v′i(x). Since by definition v′i(gN) = 0 for all gi, we are guaranteed that vN has a global minimum at gN.
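For illustration (our own sketch, not code from the paper), the generalized estimate reduces to a one-liner once a tag's stored fields and the fields read at the new goal tag are available as sequences:

```python
def estimate_distance(x_fields, gN_fields):
    """Estimated distance v_N(x) from tag x to a new goal g_N:
    the maximum watershed value |v_i(x) - v_i(g_N)| over all predefined
    goals g_i in G (Equations 1-2, generalized to every goal in G).

    x_fields:  values v_i(x) stored at tag x, one per predefined goal
    gN_fields: values v_i(g_N) stored at the new goal tag
    """
    return max(abs(vx - vg) for vx, vg in zip(x_fields, gN_fields))
```

At x = gN the two field vectors coincide, so every watershed term, and hence the estimate, is 0: the guaranteed global minimum at the new goal.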

VI. NAVIGATION TOWARDS NEW GOALS USING THE ESTIMATED DISTANCES

In order to navigate to a goal gi in the set G, the robot can perform simple gradient descent on the distance values by reading, at each position, the vi values of the neighboring tags and moving in the direction of the smallest value. To navigate to a goal point gN that is not in the set G, the robot performs gradient descent on the distance estimate vN, computed through Equation 2. Note that the robot does not need to estimate the entire map; rather, the gradient vN(x) can be computed based on the distance values stored on the neighboring tags and the distance values stored in the new goal point. Moreover, the robot does not need any information related to its location and orientation. It can rely on the information coming from the environment to select the optimal path and navigate to any point in the environment. This means that (1) navigation only requires minimal sensing and computational resources; and (2) the goal point can change during navigation, and navigation will continuously adapt to that.

Algorithm 1 is implemented for navigation to arbitrary positions in the environment. Specifically, the algorithm takes two arguments: p[] gives the distance values of the predefined goal maps at the new goal point p. The threshold

argument provides the minimum desired distance from the goal in terms of number of cells. Setting this value to 0 will allow the robot to stop at the goal tag, while increasing it will make the robot stop threshold tags away from the goal. The use of this argument is explained in Section VII. The algorithm assumes that the robot is equipped with R RFID readers and that M goal maps are stored in the vj(x) fields of tag x, j ∈ {1, . . . , M} (in our case, R = 6 and M = 16). AtPoint(threshold) checks if the robot has reached the desired distance to the goal point gN. Read_Tag(i, val[j]) reads from RFID tag reader i the value vj(x) of the tag x in its range. Dist_p(tag_val[j], p[j]) is the implementation of Equation 1. max(u[i][]) is the implementation of Equation 2, which computes the combined gradient value for each antenna. The angle θ is the bearing of the RFID reader that has read the lowest combined value ρi. speed is a fixed parameter. SetVel() sets the velocities of the robot.

Require: Goal map from predefined goals G stored in RFID floor
 1: θ ← 0
 2: while !AtPoint(threshold) do
 3:   for i = 0 to R do
 4:     for j = 0 to M do
 5:       Tag_val[j] ← Read_Tag(i, val[j])
 6:       u[i][j] ← Dist_p(Tag_val[j], p[j])
 7:     end for
 8:   end for
 9:   for i = 0 to R do
10:     w[i] ← max(u[i][])
11:     ρi ← w[i]
12:   end for
13:   θ ← Bearing(argmin_i ρi)
14:   SetVel(speed, θ)
15: end while

Algorithm 1: Navigate(p[], threshold)
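The core of each loop iteration of Algorithm 1 is a pure computation and can be sketched in Python. The hardware calls (Read_Tag, SetVel, AtPoint) are omitted here, and the function name and argument layout are our own assumptions for illustration:

```python
def steering_bearing(readings, p, bearings):
    """One inner iteration of the Navigate loop, as a pure function.

    readings: R x M matrix; readings[i][j] is the value v_j(x) read by
              RFID reader i from the tag currently in its range
    p:        p[j] = v_j(g_N), the sub-goal-map values at the new goal tag
    bearings: bearing of each of the R readers relative to the robot body

    Returns the bearing of the reader that saw the lowest combined
    gradient value, i.e. the direction of steepest descent toward g_N.
    """
    # Equations 1-2 per reader: watershed values combined by max.
    rho = [max(abs(row[j] - p[j]) for j in range(len(p))) for row in readings]
    best = min(range(len(rho)), key=lambda i: rho[i])
    return bearings[best]
```

Since the robot's six readers are arranged hexagonally, driving along the returned bearing moves the robot onto the neighboring tag with the lowest estimated distance, without any rotation.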

VII. GAME PLAYED WITH CHILDREN

To experience a variety of situations and to test the capability of our approach to cope with these situations, we created a game which calls for meaningful interactions between a child and a robot. In the game, the robot interacts with the child via the smart floor, and follows the child while s/he moves across the floor.

A. Game setup

The game is played in an apartment-like test-bed facility, called "PEIS-Home 2". The facility has an area of about 6 m × 9 m and contains a hexagonal grid of approximately 1,500 read-write Texas Instruments Tag-it RFID tags under the floor. Each tag has 64 writable memory blocks, and each block consists of 4 bytes. Snapshots of the apartment are shown in Figure 3. The construction and implementation of this floor is described in [22].

The game was played using a custom-built robot designed to interact with children in the MOnarCH European project [25]. This omnidirectional robot is equipped with six RFID readers and antennas mounted in a hexagonal pattern, so that the robot can read the distance values from

six different tags and move along the gradient without rotating. The MOnarCH robot is shown in Figure 3.

An RFID reader is attached to an antenna underneath the sole of one of the child's shoes, thus enabling the reader to read the data from the tag over which the child is standing. The RFID reader is interfaced with an XBee PRO module to communicate with the robot. A child wearing the RFID reader is shown in Figure 3.

B. Game design and explanation

A simple game, similar to "first letter, last letter", was designed for children between the ages of 5 and 10. We marked six different positions on the RFID floor with the English letters A, B, C, E, F and G (see Figure 3). The child, wearing the RFID reader, starts the game by going to any of the six positions (e.g., the child may move to the position marked with "A"). The robot, which is in an arbitrary position, starts moving towards the child. While the robot moves towards the child, s/he has to say a word starting with that letter (e.g., apple for "A"), quickly move to any other point marked with a letter, and repeat the process. The robot continuously follows the child in his/her movements. The child cannot move to another position unless s/he comes up with a word starting with the letter s/he is currently standing on. If the child fails to come up with a word, the robot "catches" the child and the game is over. The child wins if s/he visits all goals without getting caught by the robot.

The distance map used by the robot in this setup is built for 16 predefined goals. These goals are not the positions marked with letters that are used in the game. When the child wearing the RFID reader moves on the floor, the reader sends the distance values to these 16 predefined goals to the robot (argument p[] in Algorithm 1). The robot estimates the distance to the RFID tag that is currently under the child's shoe, and moves towards the child. The robot catching the child is detected via the argument threshold in Algorithm 1: when the distance to the child reaches this value, the robot stops moving and emits a beeping sound.

C. Results

In total, five children tested the game. The game was played with a single robot and a single child at a time. Note that a variant of this game can be played with multiple children wearing RFID shoes. Each child played one or more practice runs in a controlled environment with a human playing the part of the robot. When the child was sufficiently confident, the human was replaced by the robot. In total, the game was played 14 times and the total time of robot-child interaction was around 45 minutes1. Two difficulty levels were set up to make the game more interesting. Initially, the robot's speed was 8 cm/s; in level 2, the robot moved at a speed of 12 cm/s. The details of the experiment are given in Table I. The results show two aspects of our approach:

1The videos of the games played with children can be found at (mind the ~ in the link when pasting it in the browser) http://aass.oru.se/~aakq/

Fig. 3. Apartment used for playing the game. Left and middle: the robot with six RFID readers, and children wearing an RFID reader on their shoes. Right: visualization of the RFID tag layout under the floor. Blue tags are associated with the English letters used in the game.

Reliability: during all games, the robot always followed the child. Figure 4 shows the trajectories of Child 1 and Child 5, along with the trajectories of the robot following them. Note that the robot did not follow the trajectory of the child; it followed the estimated gradient towards the tag under the child's shoe.

Safety: the minimum and maximum threshold values in Algorithm 1 were 0.4 m and 0.6 m. While following the child, the robot sometimes crossed the maximum, but never the minimum threshold, and never touched the child. This is evident in Figure 5, which shows two plots of the distance between the child and the robot during the game. The first plot (game lost by Child 1) shows that the robot did not cross the minimum threshold while moving towards the standing child; the second plot refers to the game won by Child 3.

D. Qualitative observations

Overall, the game was fun and entertaining for the children. At the start of the game almost every child was a bit nervous, but after some time their excitement and enthusiasm grew, especially when playing with the robot. Every child had a different reaction towards the game and the robot. Child 1 came up with a strategy in the last experiment: he exploited the slow speed of the robot, staying longer on every point until the robot came very near to him, and using this time to think about which point to explore next and what word to say. Child 5 was a bit slow in coming up with words, so we let him play with the relaxed condition that he could move to other points after staying for a while on a point. Some children frequently visited the "hidden" points behind the walls, and sometimes waited for the robot to come near them before moving. There were some occasions when children passed very near the robot. In these situations, the robot stopped moving and started beeping; however, the game was not considered lost.

VIII. CONCLUSIONS

In this paper we proposed a stigmergic approach for human-robot interaction on a smart floor. We played a game with children as a proof of concept and discussed

Fig. 4. Trajectories of the robot during the game played by two different children. Upper-left: trajectory of the robot following Child 5. Upper-right: trajectory of Child 5. Lower-left: trajectory of the robot following Child 1. Lower-right: trajectory of Child 1.

observations in the context of child-robot interaction, where a robot tries to catch a child.

In our approach, a robot with minimalistic sensing capabilities interacts via a floor enriched with RFID tags. We show that a robot relying only on RFID sensors, with no internal map, can effectively follow a human on an RFID floor. The robot needs neither localization nor any specific sensor able to detect, localize, and track humans, a task which typically requires numerous and/or sophisticated sensors (e.g., cameras, stereo vision, laser range finders, ultrasonic sensors). In addition to human detection, localization, and tracking, the RFID

TABLE I
DETAILS OF THE GAME PLAYED WITH 5 CHILDREN.

Children | Games played | Level 1 played | Level 2 played | Times child won | Times robot touched child | Total time with robot (min)
Child 1  |      4       |       1        |       3        |        3        |            0              |           16.10
Child 2  |      3       |       1        |       2        |        1        |            0              |            4.32
Child 3  |      2       |       1        |       1        |        1        |            0              |            8.41
Child 4  |      2       |       1        |       1        |        1        |            0              |            4.27
Child 5  |      3       |       3        |       -        |        2        |            0              |           10.45

Fig. 5. Distance between the robot and the child during the game.

floor gives us the identification of different humans for free, since each is wearing a different RFID sensor on his/her shoe.

In future work, we will evaluate our approach in detail, both empirically and theoretically. We will test our approach with heterogeneous robots, including humanoids, and analyze the behavior of children interacting with different robots. We will also show how our approach can be used for robots interacting with multiple humans at the same time.

REFERENCES

[1] M. A. Goodrich and A. C. Schultz, "Human-robot interaction: A survey," Found. Trends Hum.-Comput. Interact., vol. 1, no. 3, pp. 203–275, Jan. 2007.

[2] J.-U. Chu, I. Youn, K. Choi, and Y.-J. Lee, "Human-following robot using tether steering," International Journal of Precision Engineering and Manufacturing, vol. 12, no. 5, pp. 899–906, 2011. [Online]. Available: http://dx.doi.org/10.1007/s12541-011-0120-x

[3] T. Yoshimi, M. Nishiyama, T. Sonoura, H. Nakamoto, S. Tokura, H. Sato, F. Ozaki, N. Matsuhira, and H. Mizoguchi, "Development of a person following robot with vision based target detection," in Intelligent Robots and Systems, 2006 IEEE/RSJ International Conference on, 2006, pp. 5286–5291.

[4] T. Sonoura, T. Yoshimi, H. N. M. Nishiyama, S. Tokura, and N. Matsuhira, Person Following Robot with Vision-based and Sensor Fusion Tracking Algorithm. InTech, 2008, p. 538.

[5] Z. Chen and S. T. Birchfield, "Person following with a mobile robot using binocular feature-based tracking," in Intelligent Robots and Systems, 2007. IROS 2007. IEEE/RSJ International Conference on, Oct 2007, pp. 815–820.

[6] K. Morioka, J.-H. Lee, and H. Hashimoto, "Human-following mobile robot in a distributed intelligent sensor network," IEEE Transactions on Industrial Electronics, vol. 51, no. 1, pp. 229–237, Feb 2004.

[7] N. Hirai and H. Mizoguchi, "Visual tracking of human back and shoulder for person following robot," in Advanced Intelligent Mechatronics, 2003. AIM 2003. Proceedings. 2003 IEEE/ASME International Conference on, vol. 1, July 2003, pp. 527–532.

[8] D. Williams, Y. Ma, L. Prejean, M. Ford, and G. Lai, "Acquisition of physics content knowledge and scientific inquiry skills in a robotics summer camp," Journal of Research on Technology in Education, vol. 40, no. 2, pp. 201–216, 2007.

[9] J. Lindh and T. Holgersson, "Does Lego training stimulate pupils' ability to solve logical problems?" Computers and Education, vol. 49, no. 4, pp. 1097–1111, 2007.

[10] S. Hussain, J. Lindh, and G. Shukur, "The effect of Lego training on pupils' school performance in mathematics, problem solving ability and attitude: Swedish data," Journal of Research on Technology in Education, vol. 9, no. 3, pp. 182–194, 2006.

[11] T. Kanda, T. Hirano, D. Eaton, and H. Ishiguro, "Interactive robots as social partners and peer tutors for children: A field trial," Hum.-Comput. Interact., vol. 19, no. 1, pp. 61–84, Jun. 2004.

[12] S. Tomic, F. Pecora, and A. Saffiotti, "Too cool for school - adding social constraints in human aware planning," in The 9th International Workshop on Cognitive Robotics, 2014.

[13] A. Billard, B. Robins, J. Nadel, and K. Dautenhahn, "Building Robota, a mini-humanoid robot for the rehabilitation of children with autism," Assistive Technology, vol. 19, no. 1, p. 37, 2007.

[14] B. Robins, P. Dickerson, P. Stribling, and K. Dautenhahn, "Robot-mediated joint attention in children with autism: A case study in robot-human interaction," Interaction Studies, vol. 5, no. 2, pp. 161–198, 2004.

[15] I. Werry, K. Dautenhahn, B. Ogden, and W. Harwin, Cognitive Technology: Instruments of Mind: 4th International Conference, CT 2001, Coventry, UK, August 6–9, 2001, Proceedings. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001, ch. Can Social Interaction Skills Be Taught by a Social Agent? The Role of a Robotic Mediator in Autism Therapy, pp. 57–74.

[16] C. Vaucelle and T. Jehan, "Dolltalk: A computational toy to enhance children's creativity," in CHI '02 Extended Abstracts on Human Factors in Computing Systems, ser. CHI EA '02. New York, NY, USA: ACM, 2002, pp. 776–777.

[17] F. Michaud and S. Caron, "Roball - an autonomous toy-rolling robot," in Workshop on Interactive Robotics and Entertainment, 2000.

[18] Y. Fernaeus, M. Hakansson, M. Jacobsson, and S. Ljungblad, "How do you play with a robotic toy animal?: A long-term study of Pleo," in Proceedings of the 9th International Conference on Interaction Design and Children, ser. IDC '10. New York, NY, USA: ACM, 2010, pp. 39–48.

[19] R. Johansson and A. Saffiotti, "Navigating by stigmergy: A realization on an RFID floor for minimalistic robots," in Proc of the IEEE Int Conf on Robotics and Automation (ICRA), 2009, pp. 245–252.

[20] A. Khaliq, M. Di Rocco, and A. Saffiotti, "Stigmergic algorithms for multiple minimalistic robots on an RFID floor," Swarm Intelligence, vol. 8, no. 3, pp. 199–225, 2014.

[21] A. Khaliq and A. Saffiotti, "Stigmergy at work: Planning and navigation for a service robot on an RFID floor," in Proc of the IEEE Int Conf on Robotics and Automation (ICRA), 2015, pp. 1085–1092.

[22] A. Khaliq, F. Pecora, and A. Saffiotti, "Inexpensive, reliable and localization-free navigation using an RFID floor," in Proc of the IEEE European Conference on Mobile Robots (ECMR), 2015.

[23] A. Khaliq, F. Pecora, and A. Saffiotti, "Point-to-point safe navigation of a mobile robot using stigmergy and RFID technology," Technical report, 2016. [Online]. Available: http://aass.oru.se/~aakq/publications/report2016.pdf

[24] T. Herianto, K. Tomoaki, and K. Daisuke, "Realization of a pheromone potential field for autonomous navigation by radio frequency identification," Advanced Robotics, vol. 22, pp. 1461–1478, 2008.

[25] MOnarCH FP7 Project: http://monarch-fp7.eu/, retrieved on 2016-02-04.