
Symmetric Model of Remote Collaborative Mixed Reality Using Tangible Replicas

Shun Yamamoto, Keio University
Email: [email protected]

Yuichi Bannai, Canon Inc.
Email: [email protected]

Hidekazu Tamaki, Keio University
Email: [email protected]

Yuta Okajima, Keio University
Email: [email protected]

Kenichi Okada, Keio University
Email: [email protected]

Abstract— Research into collaborative mixed reality (MR) or augmented reality has recently been active. Previous studies showed that MR was preferred for collocated collaboration while immersive virtual reality was preferred for remote collaboration. The main reason for this preference is that the physical object in remote space cannot be handled directly. However, MR using tangible objects is still attractive for remote collaborative systems, because MR enables seamless interaction with real objects enhanced by virtual information with the sense of touch. Here we introduce "tangible replicas" (dual objects that have the same shape, size, and surface), and propose a symmetrical model for remote collaborative MR. The results of experiments show that pointing and drawing functions on the tangible replica work well despite limited shared information.

I. INTRODUCTION

In collocated collaborative systems, it is easy to share both physical objects and non-verbal information such as gaze or gesture between the users because the users exist in the same real space. An early collocated collaborative augmented reality (AR) system is AR2 Hockey [13], developed by MR Systems Laboratory in 1998, in which two users with head mounted displays (HMDs) seated on opposite sides of a table hit a virtual puck using real mallets. Kato et al. [8] developed a system where multiple users can play cards on which virtual objects are overlaid.

Kiyokawa et al. [9] evaluated collocated AR interfaces under different conditions of display type and setting, finding that the visibility of nonverbal cues (e.g. gaze and gesture) has a considerable effect on communication.

In the 1980s, collocated collaboration systems such as meeting support systems were developed. For example, Colab [18] shared 2D information between a desktop PC and a projector. Collocated collaborative AR enables the handling of 3D information associated with real objects in real space.

In the case of remote collaboration, in contrast to collocated settings, neither real objects nor real spaces can be shared at the same time. As a result, most object-sharing mechanisms between remote sites were provided by a collaborative virtual environment (CVE) using virtual objects [14]. Early studies (e.g. [6][16]) insisted on the importance of force feedback in virtual environments (VEs). They showed that sensory information such as visual and haptic feedback is a key source of information when acquiring and manipulating objects. However, incorporating rich interactive graphics and haptic feedback into virtual environments is costly, both in terms of computing cycles and equipment purchases [12].

The tangible AR interface is an approach designed to overcome these problems. Physical objects support collaboration by their appearance, physical affordance, and the sense of touch. The easiest way to use physical objects between remote sites is to represent the real object set in a local site as a virtual object in the remote space. However, this asymmetric scheme might create a dual ecology problem [10]: that is, each user has a different manipulation condition between the physical and virtual object.

We introduce here the concept of tangible replicas and propose a collaborative MR system mediated by the replicas, in which the users hold and interact with them. The system also provides the same manipulation condition for each user due to the symmetry of the system.

II. RELATED WORK

Studierstube [15] is a multi-user AR/MR environment that enables the display of 3D objects in real space. In a collocated situation, multiple users wearing HMDs gather in a room and interact with the 3D objects. The personal interaction panel, a two-handed interface composed of a pen and a pad, both fitted with magnetic trackers, is used to control the application. Although a distributed execution mechanism is provided for remote users, the system cannot associate virtual objects with real objects or real spaces between the remote sites.

RealWorld Teleconferencing [2] is an AR/MR conferencing system where remote collaborators are represented as live video images or virtual avatars that are overlaid on a set of small marked cards that can be freely positioned about a user in space. The user with the AR interface wears an HMD while his/her counterpart sits in front of a desktop monitor. A shared virtual whiteboard is displayed on another marked card in the AR user's site and in a window in the desktop user's site.

Billinghurst et al. developed a remote work assistance system named Block Party [3] that allows a remote expert to assist a user in building a real model out of plastic bricks. The remote expert is seated at a desktop terminal, while the block builder wears a see-through HMD with a video camera. The expert manipulates the 3D virtual model of the target object on the screen to show how to construct the object. The block builder has a 3D view of the virtual model floating close to the real workspace.

Both RealWorld Teleconferencing and Block Party are asymmetric systems that consist of two different interfaces: AR interfaces and desktop interfaces. The asymmetry may cause confusion between the users due to the existence of dual ecology. Moreover, there exist seams between the windows on the screen that impede intuitive operation for the desktop user.

Another trial that uses tangible objects for remote collaboration is Distributed Designer's Outpost [5]. This system is a collaborative web site design tool that employs physical Post-it notes as interaction primitives on an electronic whiteboard. The location information and data on each Post-it are captured by two cameras, transformed into electronic data, and displayed as a virtual Post-it on a whiteboard in the remote site. Although the movement of the physical Post-it is detected and displayed on the other whiteboard, the user must move it by hand when the corresponding virtual Post-it is moved by the counterpart in order to avoid inconsistency.

A number of studies showed that force feedback in VEs improved performance on a number of tasks and substantially reduced errors. For example, Arsenault et al. [1] showed that a task requiring the coordination of visual and haptic feedback took 12% less time than the same task with visual feedback alone. Lok et al. [11] concluded that interacting with real objects significantly improves task performance over interacting with virtual objects in spatial cognitive tasks, and that handling real objects makes task performance and interaction in the VE more like the actual task.

In the following text, we list some collaboration systems using real objects between remote sites.

Psybench [4] synchronizes distributed objects to provide a generic shared physical workspace across distance. It is constructed from two augmented and connected motorized chessboards. Positions of the chess pieces on a 10 × 8 grid are sensed by an array of membrane switches. The pieces have magnetic bases so they can be moved using an electromagnet placed on a 2-axis positioning mechanism under the surface. Although the system provides a symmetrical tangible user interface between the remote sites, an actuator mechanism is needed to move the real objects in each site.

In-Touch [4] is a tele-haptic communication system consisting of two hand-sized devices with three cylindrical rollers embedded within a base. The rollers on each base are haptically coupled such that each one feels like it is physically linked to its counterpart on the other base. To achieve simultaneous manipulation, In-Touch employs bilateral force feedback technology, using position sensors and high-precision motors. Although it is interesting that the system provides a means for expression through touch, it is not designed to support specific cooperative work.

III. CONCEPTUAL MODEL

A. Remote Collaboration Model

Table I shows the main features of the three remote collaboration models: Media Space, Immersive CVE, and remote MR using a TUI. In Media Space, awareness information can be obtained from video images, and the user cannot change his/her viewpoint. In Immersive CVE, by contrast, the awareness information can be taken from the avatars of the counterparts, and the user viewpoint is controllable using Computer Graphics (CG) techniques. In the case of MR with TUI, viewpoint controllability, the construction of the workspace, and the functions for the objects depend on the setting. In this paper, the element of our MR-with-TUI model consists of a tangible object enhanced by virtual information in a real space in each of the two sites.

The early systems of Media Space, such as MERMAID [20], did not provide a seamless interface; the screen was divided into a talking-head window and a whiteboard window. ClearBoard [7] realized a seamless interface by overlapping the talking-head image onto the whiteboard image. In Immersive CVE, a 3D seamless space consisting of virtual objects and a virtual environment can be easily constructed. In the case of MR with TUI, a seamless space including both tangible and virtual objects in the real environment can be achieved.

In terms of object manipulation, Media Space systems provide tele-pointing and drawing functions on a 2D screen, while Immersive CVE enables manipulation of 3D objects as well as pointing. The functions for the objects in MR systems with TUI are limited to pointing and drawing on the 2D surfaces of the tangible objects, since changing the properties of the physical objects is difficult.

B. Remote MR Model with TUI

In our remote MR model, we choose a base physical object in the real space, on which the world coordinate system is set. The base physical object may be a plane such as a floor, a table, or a monitor screen (e.g. in the case of Distributed Designer's Outpost [5]). The workspace is then set on the base object in each site for the collaboration, as shown in Figure 1-(1). We define the two spaces as "equivalent" when the physical structure of the workspaces is the same; a set of tables of the same size is an appropriate example of equivalent spaces. Each user can move around and work with the objects in the same way under this condition. The counterpart is usually displayed as an avatar in the remote site, and the shared object, represented by a virtual object, can be manipulated in each site (Figure 1-(2)).

When we put a tangible object in site A, as shown in Figure 1-(3), the virtual object corresponding to it is displayed as a gray oval in site B. This causes asymmetry between the two spaces.


TABLE I
MAIN FEATURES FOR REMOTE COLLABORATIVE SYSTEMS

Model         | Awareness   | View Point   | Seamlessness                                                            | Physical Object | Function (Objects)
Media Space   | Video image | Fixed        | Overlaid image of user's body and shared whiteboard                     | Not available   | Pointing and drawing (2D screens)
Immersive CVE | Avatar      | Controllable | Seamless space including 3D virtual objects and 3D virtual environments | Not available   | Pointing and manipulation (modification) (3D objects)
Remote MR     | Avatar      | Controllable | Seamless space including 3D virtual objects and the real environment    | Available       | Pointing and drawing (3D tangible objects)

For example, the position of the virtual object in site B changes when the tangible object in site A is moved; on the other hand, the real object in site A cannot change its position as a result of movement of the virtual object in site B without an actuator mechanism.

In Figure 1-(4), we replace the virtual object with a real object in site B such that each user has a real object independently. It is assumed that the real object in site A has the same size, shape, and surface as that in site B. We define these objects as a set of tangible replicas. Although the model in Figure 1-(4) is symmetric, the problem that the position of the real object in the remote site cannot be changed still remains.

Fig. 1. A Remote MR Model with Tangible Objects

C. World and Object Coordinate Systems

To simplify the problem, we show the symmetric model in Figure 2, where each site consists of a tangible replica, a local stylus, shared CG texture on the replica, and a virtual representation of the remote pointer.

In CG systems, an object represented in object coordinates is transformed to world coordinates in order to create a view from an arbitrary camera position.

Fig. 2. World and Object Coordinate System

Since the remote replica cannot be moved directly, we move the remote pointer instead of the replica, keeping its relative position. Therefore, we use the object coordinate system of the replica to represent the remote pointer, and transform the object coordinates into world coordinates to create a local view. As a result, the remote pointer moves in the following cases:

• When the counterpart moves either his stylus or replica,or moves both of them at the same time, and

• When the local user moves his replica.

The movement of the pointer is observed by the user as a composition vector of the motions created by his own replica and by the counterpart's pointer and replica. The user cannot recognize whether the counterpart is moving the pointer or the replica. This model is especially effective if the replica is a portable object rather than one fixed on a table, because the latter case can be managed in the world coordinate system alone. However, only one replica in each site can be handled at a time in this model.
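To make this concrete, here is a minimal sketch (ours, in Python with NumPy; the pose and coordinates are invented example values, not data from the system) of why the rendered remote pointer moves when the local user moves his replica, even though no update has arrived from the counterpart:

```python
import numpy as np

def render_pointer(M_replica: np.ndarray, pointer_obj: np.ndarray) -> np.ndarray:
    """Map the remote pointer from the replica's object frame into the
    local world frame for rendering."""
    return M_replica @ pointer_obj

# The remote pointer is stored relative to the replica (homogeneous coords, meters).
pointer_obj = np.array([0.02, 0.00, 0.05, 1.0])

M_before = np.eye(4)                 # replica pose before the user moves it
M_after = np.eye(4)
M_after[:3, 3] = [0.10, 0.00, 0.00]  # replica shifted 10 cm along x

# pointer_obj is unchanged, yet the rendered pointer follows the replica.
print(render_pointer(M_before, pointer_obj))  # [0.02 0.   0.05 1.  ]
print(render_pointer(M_after, pointer_obj))   # [0.12 0.   0.05 1.  ]
```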

D. Use Case

Figure 3 shows a use case of remote collaboration using tangible replicas. Users A and B, in different sites, each have a tangible replica of a plain white mug. Each can paint texture on the cup with a stylus while wearing an HMD. A line is drawn as a CG object on the surface of the replica where the stylus touches. Both A and B can draw and erase while moving their replicas independently, and the results of the operations are displayed and shared between them.


Fig. 3. A Use Case of Tangible Replicas

Although the texture and pointer data are shared, it is unnecessary to exchange information such as the location and orientation data of the replica between users. Therefore, user B's replica and its view are not affected by the motion of user A's replica, and vice versa. It is a main feature of the system that users can share the tangible object overlapped with the texture while maintaining the consistency of each user's view.

E. Implementation

F. Virtual Objects in Object Coordinate Systems

In order to synchronize the virtual objects to be shared between the sites, the parameters of the manipulated object in site A are sent to site B, and vice versa; the objects are simultaneously displayed using the same parameters in the object coordinate system of each site. In Figure 3, each virtual object (the pointer and the texture) is expressed by S_oa = [x_oa, y_oa, z_oa, 1]^t, where S_oa is the set of location parameters in the object coordinate system of site A, and S_wa is the set of location parameters in the world coordinate system of site A. Transformation from object coordinates to world coordinates is calculated by S_wa = M_a S_oa using the modeling transformation matrix M_a, where M_a is a 4×4 homogeneous matrix.

Since the virtual objects are managed based on the world coordinates in each site, we calculate the location parameters in object coordinates by the inverse transformation S_oa = M_a^{-1} S_wa, where M_a^{-1} is the inverse matrix of M_a. Since the virtual object is shared at the same object coordinates in each site, we can set S_oa = S_ob, where S_ob is the object coordinates in site B. After receiving S_ob from system A, the system in site B transforms S_ob to S_wb (the world coordinates in site B) using S_wb = M_b S_ob, where M_b is the modeling transformation matrix of site B, and displays the object at S_wb.
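The exchange thus reduces to two matrix products per update. The following sketch is our own illustration in Python with NumPy (the poses and coordinates are invented values, not taken from the system):

```python
import numpy as np

def make_transform(translation) -> np.ndarray:
    """Build a 4x4 homogeneous modeling transformation matrix (translation
    only, for brevity; a real M would also carry the replica's rotation)."""
    M = np.eye(4)
    M[:3, 3] = translation
    return M

# Hypothetical replica poses reported by the 6-DOF sensors in each site.
M_a = make_transform([0.10, 0.20, 0.05])   # site A's modeling matrix M_a
M_b = make_transform([0.40, 0.15, 0.00])   # site B's modeling matrix M_b

# A pointer position measured in site A's world coordinates (homogeneous).
S_wa = np.array([0.12, 0.22, 0.08, 1.0])

# Site A: world -> object coordinates of the replica, S_oa = M_a^-1 S_wa.
S_oa = np.linalg.inv(M_a) @ S_wa

# The object coordinates are what gets transmitted; site B sets S_ob = S_oa
# and maps them into its own world frame, S_wb = M_b S_ob.
S_ob = S_oa
S_wb = M_b @ S_ob
print(S_wb)   # where site B displays the shared object
```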

G. Texture

In order to overlap virtual objects on the replica correctly, it is necessary to obtain the 3D model of the replica beforehand. The user can draw a line while his/her stylus touches the replica, and can change the color of the drawing by pressing the button on the stylus. We covered the replica with a number of 1 × 1 mm transparent squares so that the pixel size of the drawings is 1 mm².

Fig. 4. The Scene Graph of the Virtual Object

As shown in Figure 4, each pixel is represented by a part of the scene graph. The scene graph has a tree structure consisting of several nodes, one of which switches among the nodes corresponding to the colors. The switch node changes to the corresponding color node when the system receives the user's request.
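A minimal sketch of such a pixel switch node, assuming a pixel is addressed by an integer ID (the class and method names are ours, hypothetical, not from the system):

```python
from enum import Enum

class Color(Enum):
    TRANSPARENT = 0   # the undrawn state; switching back to it acts as an eraser
    WHITE = 1
    RED = 2

class PixelNode:
    """One 1x1 mm pixel of the texture: a switch node in the scene graph that
    selects exactly one of its color child nodes for rendering."""
    def __init__(self, pixel_id: int):
        self.pixel_id = pixel_id
        self.active = Color.TRANSPARENT   # initially nothing is drawn

    def switch(self, color: Color) -> None:
        # Invoked when the local stylus touches this pixel, or when a change
        # request for this pixel ID arrives from the remote site.
        self.active = color

# The texture is the set of all pixel subtrees hanging under the replica node.
texture = {pid: PixelNode(pid) for pid in range(15_000)}
texture[42].switch(Color.RED)   # e.g. the user draws on pixel 42
```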

H. Synchronization of the Shared Objects

Modifications of object status caused by user manipulation, e.g. movement of the replica or the stylus and drawing on the replica, must be updated in the remote site as well as the local site. We use the virtual object management table shown in Table II, which stores the shared virtual objects whose state can change. When the state of an object changes, the flag is set and the other corresponding data in Table II is updated. The data is sent to the other site, and the flag is reset by a background loop program that periodically checks for updates.

TABLE II
VIRTUAL OBJECTS MANAGEMENT TABLE

Virtual Object ID | Virtual Object Name | Flag | Type of Change | Amount of Change

The system at the receiving site updates the object data for display of the object. This process is executed periodically in each site in order to synchronize the shared objects. In the case of the drawing function, the pixel node ID and the color ID are stored in the Virtual Object ID and Amount of Change columns, respectively.
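A hypothetical Python rendering of this mechanism (the field names mirror the columns of Table II; the polling period and the send callback are our assumptions):

```python
import time
from dataclasses import dataclass

@dataclass
class TableEntry:
    """One row of the virtual object management table (cf. Table II)."""
    object_id: int      # e.g. the pixel node ID for a drawing change
    object_name: str
    flag: bool          # set when the local state of the object changed
    change_type: str    # e.g. "color"
    change_amount: int  # e.g. the new color ID

def sync_loop(table: list, send) -> None:
    """Background loop: periodically ship flagged entries to the remote
    site, then reset their flags."""
    while True:
        for entry in table:
            if entry.flag:
                send(entry)         # transmit the update to the other site
                entry.flag = False  # reset once the update has been sent
        time.sleep(0.05)            # polling period (assumed value)
```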

I. System Configuration

Figure 5 shows the system configuration. The video see-through HMD (Canon VH2002) is equipped with a pair of NTSC video cameras and a pair of VGA LCDs. Its horizontal view angle is 51 degrees and its weight is 450 g. FASTRAK® sensor receivers attached to the HMD and the stylus deliver 6-degree-of-freedom (DOF) position and orientation parameters. The same type of receiver is also fixed on the replica.


MR Platform [19] generates the CG image from the viewpoint of the HMD using the parameters from the sensor receivers. A marker registration for the HMD that compensates for sensor registration error is performed in order to precisely overlap the CG image onto the real object. Two video boards within the PC capture the video output from the right and left cameras and send the composed image of video and CG to the right and left display monitors, respectively. The specifications of the PCs are as follows: CPU: Pentium 4 3.4 GHz (PC1), Pentium 4 2.4 GHz (PC2); RAM: 1 GB; graphics board: NVIDIA GeForce4; OS: Red Hat Linux 9. The system configuration is identical between sites A and B, as shown in Figure 5. The handling of virtual objects in each site is managed by MR Platform, while synchronization is controlled by the virtual object management units using Table II.

J. Prototype System

We then implemented a prototype system. Each user holds a white cube replica in one hand and the stylus in the other. The size of the cube is 5 cm × 5 cm × 5 cm, so the cube has 15,000 pixels on its surface (six faces of 50 × 50 one-millimeter pixels). A pixel has three color nodes: transparent, white, and red, so the user can draw in two colors (white and red) and erase (transparent). The user selects a color by pushing a button on the stylus, and the selected color is displayed on the tip of the stylus as a virtual sphere. The counterpart's stylus is displayed as a virtual cone, which keeps its position and orientation relative to the cube in the remote user's site. Figure 6 shows the prototype system: the left side of the picture shows the view without the HMD, and the right side shows the view through the HMD.

Fig. 6. Prototype System

IV. EVALUATION

We focused on the evaluation of the pointing and drawing functions in the experiments to investigate the following questions.

1) Can the pointer movement and pointing position be recognized correctly, given that the motion of the pointer is displayed as a composite vector of the three movements described above?

2) Under which condition is the performance of the pointing task better: the portable condition, where both the world and object coordinate systems are used, or the fixed condition, where the replica is fixed on the table and only the world coordinate system is used?

3) Does the drawing function on the replica work well between the remote users? We expect that collaboration can be established even when the awareness information is limited to the pointer.

The following three experiments were conducted. We chose a mutual pointing task for the Preliminary Experiment, in which the correct answer rate for the pointed position and the pointing time were measured.

In Experiment 1, we hypothesized that pointing performance under the fixed condition is better than under the portable condition, since the movement of the pointer may become more complicated in the latter case, and a previous study [17] showed that performance under the condition of observer movement around the object was better than under the condition of object rotation.

In Experiment 2, we created a game similar to tick-tack-toe using a physical cube, and asked the subjects to play it under the remote condition using a pair of replicas and under the collocated condition using one physical cube.

A. Conditions of the Preliminary Experiment and Experiment 1

Twelve subjects (10 males and 2 females, aged from 20 to 25) were divided into six pairs. In the pointing task, one subject became the indicator and the other played the role of the responder, each wearing an HMD. The systems of the indicator and responder were both set up in the same room. Each workspace was partitioned so that the other subject could not see it. Communication between the subjects was via voice.

The replica used in the experiments was a 12 × 12 × 12 cm cube whose surfaces were divided into a 3 × 3 mesh of 4 × 4 cm squares. Numbers from 01 to 45 generated by CG were randomly overlaid on the mesh squares of all surfaces except the bottom. The average frame rate of the HMD display was 26.3 frames/sec. No delay was observed during the experiments.

B. Preliminary Experiment

This experiment aimed to investigate whether or not mutual pointing can be accomplished correctly; that is, whether the indicator can point to the target as he wishes and the responder can correctly read the number highlighted by the indicator. Each subject pointed to an arbitrary number with the stylus in one hand while holding the replica in the other.

The responder, who sat in front of the table, responded to the number by moving his replica so that he could trace the pointer. The indicator permitted the responder to answer by saying "OK" when he pointed to the number. When he heard the responder's answer, the subjects changed roles. This task was repeated five times, and the total time was measured.


Fig. 5. System Configuration

In the screenshot of Figure 7, the right pointer is local and the left pointer is remote.

Fig. 7. A Screenshot of the Preliminary Experiment

The results of 60 pointing trials by the six pairs showed that the average time from pointing to response was 3.6 sec, with a standard deviation of 0.33 sec; the correct answer rate was 100%. We observed that the responder could trace the pointer and correctly determine the number without trouble even though the motion of the pointer was displayed as a composition vector of three movements.

C. Experiment 1

Since the pointing task in the preliminary experiment was accomplished successfully, we conducted another experiment comparing the pointing and response times between the portable condition and the fixed condition. In this experiment, the roles within a pair of subjects were fixed: one was solely the indicator while the other was solely the responder.

The indicator pointed to a number on each of the five surfaces of the cube other than the bottom surface (i.e., he pointed to five locations). The other conditions were similar to those of the aforementioned preliminary experiment.

In the fixed condition, both the indicator and the responder had to move their upper bodies in order to see the numbers on the back surfaces of the cube, since the replica was fixed on the table in front of them with three surfaces in the subject's view. The pointing time was taken as the duration from the time when the indicator began to point to the time when he said "OK" soon after fixing his pointer. The response time was measured from the end of the pointing to the time when the responder said the number.

Figure 8 shows the average pointing and response times per point. The average pointing time in the fixed condition was 3.7 sec (standard deviation (sd): 1.4 sec), while that in the portable condition was 2.7 sec (sd: 0.6 sec). The average response time in the former condition was 2.0 sec (sd: 0.6 sec) and that in the latter was 1.6 sec (sd: 0.4 sec). The correct answer rate was 100% in each case.

We tested the difference in average pointing time and response time between the two conditions using the t-test. The t value for the pointing time was Tp = 2.25 > T(22, 0.05), and that for the response time was Tr = 2.07 > T(22, 0.05). The differences in average time between the two conditions were found to be significant at the 5% level. Therefore, the hypothesis of Experiment 1 is rejected.

D. Experiment 2

We created an extended tick-tack-toe game using a set of cubes, each of whose surfaces is divided into a 3 × 3 mesh, as shown in Figure 9.


Fig. 8. The Average Time of Pointing and Response

Each user in turn puts a piece on a mesh square, trying to make a line of 5 consecutive pieces in a row or a column. The user must use at least 2 surfaces to win.

Fig. 9. Extended Tick-Tack-Toe Game

Five pairs of subjects joined the experiment under two conditions: the collocated condition, where the pair sits across the table from each other using a 15 × 15 × 15 cm cube, and the remote condition, where each subject wearing an HMD holds a 5 × 5 × 5 cm replica. In the collocated condition, the subject paints a circle on a mesh square of the cube with a pen and hands the cube to his/her counterpart to change turns, whereas in the remote condition, he/she touches a mesh square with the stylus so that a piece is placed on it. The ratio of the cube sizes (15:5) was determined from the ratio of the human view angle (136 degrees) to the view angle of the HMD (51 degrees).

Figure 10 shows the player's view through the HMD. The white cone whose tip is a red sphere is the player's stylus, and the black cone is the opposing player's stylus. Figure 11 shows the experimental scene: seen from behind, the player holds a plain white cube, wears an HMD, and holds a stylus.

Fig. 10. Player’s View

The number of turns per minute was used as the performance measure, since a game took over ten minutes on average. The more turns, the more actively and efficiently the subjects played the game.

Figure 12 shows the average number of turns per minute for the 5 pairs in both conditions. We found no significant difference here (Tt = 1.02 < T(8, 0.05)), although the average number in the collocated condition was less than that in the remote condition.

Fig. 11. Experiment Scenery

Fig. 12. The Average Number of Turns per Minute

After the game was over, the subjects were asked to complete a questionnaire. They answered on a scale of 1 ("Disagree") to 5 ("Agree"), comparing the following two conditions.

• Q1: I had enough time to think of my next placementduring my partner’s turn.

• Q2: I could easily understand what my partner was doing during his/her turn.

The average scores in each condition are shown in Figure 13. There was a highly significant difference between the two conditions for each question, with t values of Tq1 = 4.06 > T(18, 0.01) and Tq2 = 6.59 > T(18, 0.01).

Fig. 13. The Average Scores of the Questionnaire

E. Discussions

From the preliminary experiment, we derived the fact that the subject could recognize the pointed position correctly in our model, which shows the relative view of the replica and the pointer; from Experiment 1, we found that the subject accomplished the pointing task more efficiently in the portable condition than in the fixed condition.

The reason for this may be due to two facts: the movement of the pointer is not so complicated that it cannot be traced in a short time, and in the fixed condition the subject had to physically move in order to see the surfaces hidden from his view. As a matter of fact, we observed from the recorded video that the subject turned the replica in the direction opposite to the pointer movement such that the pointer always remained in his/her field of view.

In Experiment 2, the performance measure of the game in the remote condition was as good as that in the collocated condition. We found from the questionnaire that the subjects appreciated that an arbitrary viewpoint could be set independently of the partner.

V. FUTURE WORK

The number of tangible objects that can be shared simultaneously is limited to one in the proposed system. It is very difficult to handle two or more tangible objects, since the relative position of the objects would have to be changed by a mechanism such as an actuator whenever either object moves. As a practical solution, users can pick one object at a time from a group of objects by negotiating the target object.

Another consideration is to relax the restrictions on the replica such that tangible objects of different size and/or shape can be handled. For example, we can share objects with the same shape but different sizes (e.g. a miniature and a real object). Even when the objects have different shapes, users can share them by the proposed method if each point on one object corresponds to a point on the other.

Another issue to be discussed is how to display shared virtual objects that are independent of the replica, such as the avatar or other objects in the environment, since the movement of the replica may cause much more frequent and wider movement of the shared object.

VI. CONCLUSION

We have proposed a symmetric MR collaboration model that enables users to share objects and interact with them seamlessly, with the feeling of touch, using tangible replicas. In our model, shared objects are managed in both the object coordinate system based on the replica and the world coordinate system.

The results of the experiments showed that the pointing task could be accomplished correctly without problems, and that pointing performance in the portable replica condition was more efficient than in the fixed replica condition. Although there are some limitations on the tangible objects in the present system, we believe that our tangible MR interface can extend the range of applications of remote collaboration systems.

ACKNOWLEDGEMENTS

This research was supported in part by the Ministry of Internal Affairs and Communications, SCOPE.

REFERENCES

[1] R. Arsenault and C. Ware. Eye-hand co-ordination with force feedback. In Proceedings of CHI 2000, pages 408-414, 2000.

[2] M. Billinghurst and H. Kato. Novel collaborative paradigms: Real world teleconferencing. In Extended Abstracts of CHI '99, pages 20-29, 1999.

[3] M. Billinghurst, E. Miller, and S. Weghorst. Collaboration with Wearable Computers. Lawrence Erlbaum Associates, 2001.

[4] S. Brave, H. Ishii, and A. Dahley. Tangible interfaces for remote collaboration and communication. In Proceedings of CSCW '98, pages 169-178, 1998.

[5] K. M. Everitt, S. R. Klemmer, R. Lee, and J. Landay. Two worlds apart: Bridging the gap between physical and virtual media for distributed design collaboration. In Proceedings of CHI '03, pages 553-560, 2003.

[6] B. Hannaford, L. Wood, Guggisberg, D. McAffee, and H. Zack. Performance evaluation of a six-axis universal force-reflecting hand controller. In Proceedings of the 19th IEEE Conference on Decision and Control, pages 1197-1205, 1989.

[7] H. Ishii and M. Kobayashi. ClearBoard: A seamless medium for shared drawing and conversation with eye contact. In Proceedings of CHI '92, pages 525-532, 1992.

[8] H. Kato, M. Billinghurst, I. Poupyrev, K. Imamoto, and K. Tachibana. Virtual object manipulation on a table-top AR environment. In Proceedings of ISAR 2000, pages 111-119, 2000.

[9] K. Kiyokawa, M. Billinghurst, S. E. Hayes, A. Gupta, Y. Sannohe, and H. Kato. Communication behaviors of co-located users in collaborative AR interfaces. In Proceedings of ISMAR '02, pages 139-149, 2002.

[10] H. Kuzuoka, J. Kosaka, Y. Yamazaki, Y. Suga, A. Yamazaki, P. Luff, and C. Heath. Gesturing, moving and talking together: Mediating dual ecologies. In Proceedings of CSCW '04, pages 477-486, 2004.

[11] B. Lok and S. Naik. Effects of handling real objects and self-avatar fidelity on cognitive task performance and sense of presence in virtual environments. Presence, 12(6):615-628, June 2004.

[12] A. H. Mason, M. A. Walji, E. J. Lee, and C. L. MacKenzie. Reaching movements to augmented and graphic objects in virtual environments. In Proceedings of CHI '01, pages 426-433, 2001.

[13] T. Ohshima, K. Satoh, H. Yamamoto, and H. Tamura. AR2 Hockey: A case study of collaborative augmented reality. In Proceedings of VRAIS '98, pages 268-275, 1998.

[14] O. Otto, D. Roberts, and R. Wolf. A review on effective closely-coupled collaboration using immersive CVEs. In Proceedings of VRCIA '06, pages 145-159, 2006.

[15] D. Schmalstieg, A. Fuhrmann, and G. Hesina. Bridging multiple user interface dimensions with augmented reality. In Proceedings of ISAR 2000, pages 20-29, 2000.

[16] T. B. Sheridan. Telerobotics, Automation and Human Supervisory Control. MIT Press, 1992.

[17] D. H. Shin, P. S. Dunston, and X. Wang. View changes in augmented reality computer-aided-drawing. ACM Transactions on Applied Perception, 2(1):1-14, Jan. 2005.

[18] M. Stefik, G. Foster, D. G. Bobrow, K. Kahn, S. Lanning, and L. Suchman. Beyond the chalkboard: Computer support for collaboration and problem solving in meetings. Communications of the ACM, 30(1):32-47, Jan. 1987.

[19] S. Uchiyama, K. Takemoto, K. Satoh, H. Yamamoto, and H. Tamura. MR Platform: A basic body on which mixed reality applications are built. In Proceedings of ISMAR '02, pages 246-253, 2002.

[20] K. Watabe, S. Sakata, K. Maeno, H. Fukuoka, and T. Ohmori. Distributed multiparty desktop conferencing system: MERMAID. In Proceedings of CSCW '90, pages 27-38, 1990.