2010 IEEE Virtual Reality Conference (VR), Boston, MA, USA, 20-24 March 2010 - Mixed Reality in Virtual World Teleconferencing
Mixed Reality in Virtual World Teleconferencing
Tuomas Kantonen (1), Charles Woodward (1), Neil Katz (2)
(1) VTT Technical Research Centre of Finland, (2) IBM Corporation
ABSTRACT
In this paper we present a Mixed Reality (MR) teleconferencing application based on Second Life (SL) and the OpenSim virtual world. Augmented Reality (AR) techniques are used for displaying virtual avatars of remote meeting participants in real physical spaces, while Augmented Virtuality (AV), in the form of video-based gesture detection, enables capturing of human expressions to control avatars and to manipulate virtual objects in virtual worlds. The use of Second Life for creating a shared augmented space to represent different physical locations allows us to incorporate the application into existing infrastructure. The application is implemented using the open source Second Life viewer, ARToolKit and OpenCV libraries.
KEYWORDS: mixed reality, virtual worlds, Second Life, teleconferencing, immersive virtual environments, collaborative augmented reality.
INDEX TERMS: H.4.3 [Information Systems Applications]: Communications Applications - computer conferencing, teleconferencing, and video conferencing; H.5.1 [Information Systems]: Multimedia Information Systems - artificial, augmented, and virtual realities.
1 INTRODUCTION
The need for effective teleconferencing systems is increasing, mainly for economic and environmental reasons, as transporting people to face-to-face meetings consumes a lot of time, money and energy. Massively multi-user virtual 3D worlds have lately gained popularity as teleconferencing environments. This interest is not only academic, as one of the largest virtual conferences was held by IBM in late 2008 with over 200 participants. The conference, hosted in a private installation of the Second Life virtual world, was a great success, saving an estimated $320,000 compared to the expense of holding the conference in the physical world.
In this paper, we present a system for mixed reality teleconferencing where a mirror world of a conference room is created in Second Life and the virtual world is displayed in the real-life conference room using augmented reality techniques. The real people's gestures are reflected back to Second Life. The participants are also able to interact with shared virtual objects on the conference table. A synthetic illustration of such a setting is shown in Figure 1.
The structure of the paper is as follows. Section 2 describes the background and motivation for our work. Section 3 discusses previous work related to the subject. Section 4 goes into some detail of the Second Life platform. Section 5 gives an overview of the system we are developing. Section 6 describes our prototype implementation. Section 7 provides a discussion of results, as well as items for future work. Conclusions are presented in Section 8.
2 BACKGROUND
There are several existing teleconference systems, ranging from old but still often used audio and video teleconferencing to web-based conferencing applications. 2D groupware and even massively multi-user 3D virtual worlds have also been used for teleconferencing.
Each of these existing systems has its pros and cons. Conference calls are quick and easy to set up, requiring no hardware other than a mobile phone, yet they are limited to audio only and require a separate channel, e.g. for document sharing. Videoconferencing adds a new modality, as pictures of the participants are transferred, but it requires more hardware and bandwidth, being quite expensive at the high end. Web conferencing is lightweight and readily supports document and application sharing, but it lacks natural interaction between users.
We see several advantages of using a 3D virtual environment, such as Second Life or OpenSim among many other platforms, as an alternative means for real-time teleconferencing and collaboration. First, the users are able to see all meeting participants and get a sense of presence not possible in a traditional conference call. Second, the integrated voice capability of 3D virtual worlds provides spatial and stereo audio. Third, the 3D environment itself provides a visually appealing shared meeting environment that is simply not possible with other means of teleconferencing. However, the lack of natural gestures constitutes a major drawback for real interaction between the participants.
Figure 1. Illustration of a Mixed Reality teleconference: a Second Life avatar among real people wearing ultra-lightweight data glasses, sharing a virtual object on the table, inside a virtual room, displayed in a CAVE.
IEEE Virtual Reality 2010, 20-24 March, Waltham, Massachusetts, USA. 978-1-4244-6238-4/10/$26.00 ©2010 IEEE
3 RELATED WORK
In our work, virtual reality and augmented reality are combined in a similar manner to the original work by Piekarski et al. Their work was quite limited in the amount of augmented virtuality, as only the position and orientation of users were transferred into the virtual environment. Our work focuses on interaction between augmented reality and a virtual environment. Therefore our work is closely related to immersive telepresence environments such as [3, 4]. Several different immersive 3D video conferencing systems are described in the literature.
Local collaboration in augmented reality has been studied, for example, in [6, 7]. Collaboration is achieved by presenting co-located users the same virtual scene from their respective viewpoints and providing the users with simple collaboration tools such as virtual pointers. Remote AR collaboration has mostly been limited to augmenting live video, or later augmenting a 3D model reconstructed from multiple video cameras. Remote sharing of augmented virtual objects and applications has also been studied.
Our work uses Second Life and the open source implementation of the Second Life server, called OpenSim, which are multi-user virtual worlds, as the virtual environment for presenting shared virtual objects. Using Second Life in AR has been previously studied by Lang et al. as well as Stadon, although their work does not include augmented virtuality.
In the simplest case, augmented virtuality can be achieved by displaying real video inside a virtual environment. This approach has also been used for virtual videoconferencing and for augmenting avatar heads. Another form of augmented virtuality is avatar puppeteering, where human body gestures are recognized and used to control the avatar, either only the avatar's face or the whole avatar body. However, only a little previous work has been presented on augmenting Second Life avatars with real-life gestures. The main exception is the VR-Wear system for controlling avatars' facial expressions.
4 SECOND LIFE VIRTUAL WORLD
Second Life is a free, massively multi-user, on-line, game-like 3D virtual world for social interaction. It is based on community-created content and it even has a thriving economy. The virtual world users, called residents, are represented by customizable avatars and can take part in different activities provided by other residents.
For interaction, Second Life features spatial voice chat, text chat and avatar animations. Only the left hand of the avatar can be freely animated on the fly, while all other animations rely on pre-recorded skeletal animations that the user can create and upload to the SL server.
For non-expert SL users, however, meetings in SL can be quite static, with the "who is currently speaking" indicator being the only active element. From our experience, actively animating the avatar while talking takes considerable training and directs the user's focus away from the discussion.
Second Life has a client-server architecture and each server is scalable to tens of thousands of concurrent users. The server is proprietary to Linden Labs, but there also exists the community-developed, SL-compatible server OpenSimulator.
5 SYSTEM OVERVIEW
In this project we developed a prototype and proof-of-concept of a video conference meeting taking place between Second Life and the real world. Our system combines an immersive virtual environment, collaborative augmented reality and human gesture recognition in a way that supports collaboration between real and virtual worlds. We call the system Augmented Collaboration in Mixed Environments (ACME).
In the ACME system, some participants of the meeting occupy a space in Second Life while others are located around a table in the real world. The physical meeting table is replicated in Second Life to support virtual object interactions as well as avatar occlusions. The people in the real world see the avatars augmented around a real-world table, displayed through video see-through glasses, immersive stereoscopic walls or within a video teleconference screen. Participants in Second Life see the real-world people as avatars around the meeting table, augmented with hand and body gestures. Both the avatars and the real people can interact with virtual objects shared between them, on the virtual and physical conference tables respectively.
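The avatar occlusion mentioned above (the replicated table hiding avatar parts that are behind it) amounts to a depth test between the rendered avatar layer and a "phantom" model of the table. The paper does not spell out the compositing step; the following is a minimal NumPy sketch of the logic, assuming the renderer can provide the avatar layer's color, alpha and depth, plus a depth map of the phantom table (all array names are hypothetical):

```python
import numpy as np

def composite_ar(video, avatar_rgb, avatar_alpha, avatar_depth, phantom_depth):
    """Composite a rendered avatar over the camera image, letting the
    replicated (phantom) table occlude avatar parts behind it.

    video, avatar_rgb: HxWx3 uint8 images.
    avatar_alpha:      HxW coverage in [0, 1].
    avatar_depth, phantom_depth: HxW camera-space depths (smaller = closer).
    """
    # Avatar pixel is visible only where it is drawn AND closer than the table.
    visible = (avatar_alpha > 0) & (avatar_depth < phantom_depth)
    a = np.where(visible, avatar_alpha, 0.0)[..., None]
    # Standard alpha blend of the visible avatar pixels over the video frame.
    return (a * avatar_rgb + (1.0 - a) * video).astype(np.uint8)
```

In a real pipeline the phantom table would be rendered into the depth buffer only, so the GPU performs this test per fragment; the NumPy version just makes the logic explicit.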
The main components of the system are: co-located users wearing video see-through HMDs, a laptop for each user running the modified SL client, a ceiling-mounted camera above each user for hand tracking, and remote users using the standard SL client.
The system is designed for restricted conference room environments where meeting participants are seated around a well lit, uniformly colored table. As an alternative to HMDs, a CAVE-style stereo display environment or plain video screens can be used.
Figure 2 shows how the ACME system is experienced in a meeting between two participants, one attending the meeting in Second Life and the other one in real life. It should be noted that the system is designed for