


WonderDAC in Practice: A Demonstration of Discretionary Access Controls within the Project Wonderland CVE

Timothy E. Wright and Greg Madey

Computer Science & Engineering, University of Notre Dame

Notre Dame, IN 46556
Email: [email protected]
Email: [email protected]

Abstract

The use of discretionary access control (DAC) within collaborative virtual environments (CVEs) has been a limited endeavor for both proprietary and open source systems. Yet, as virtual worlds become more useful and engaging to our computing society, the need to safeguard access to virtual objects (e.g., spaces, three-dimensional assets, two-dimensional images and videos, avatars, sound) becomes increasingly important. Elsewhere, we have proposed and implemented a DAC extension for the Project Wonderland CVE called WonderDAC. Here, we demonstrate how WonderDAC can play a useful role in managing access within a virtual world adapted to educational purposes. In our demonstration, a small group of users researches a topic, builds presentations, and finally delivers talks, all within the Wonderland CVE. Along the way, WonderDAC operates effectively to ensure the privacy of research activities and the orderliness of the presentations afterwards.

Keywords: Virtual reality, collaborative virtual environment, access control, discretionary access

Digital Peer Publishing Licence
Any party may pass on this Work by electronic means and make it available for download under the terms and conditions of the current version of the Digital Peer Publishing Licence (DPPL). The text of the licence may be accessed and retrieved via Internet at http://www.dipp.nrw.de/.

1 Introduction

Desktop virtual reality (i.e., virtual reality [VR] applications and systems that are accessible from a standard desktop workstation) has introduced avenues of entertainment, commerce, education, and science that have begun to take root within our computing society. A central theme of the desktop VR paradigm is one in which multiple, simultaneous users, represented by avatars, can remotely interact within a virtual world. We refer to such realms as collaborative virtual environments (CVEs) and note the existence of popular commercial systems like Second Life, There, and Activeworlds, as well as open source platforms such as Croquet, OpenSim, and Project Wonderland.

In [WM08a] and [WM08b], we extended Project Wonderland with a discretionary access control (DAC) system called WonderDAC. This was an effort to grapple with the problem of weak and limited access control found in both commercial and open source CVEs. Our solution was designed according to five use case scenarios: spatial object restrictions, non-spatial object restrictions, audio conversation restrictions, avatar cloaking, and permissions and ownership changes. The first two of these deal with protecting access to spaces (three-dimensional) and objects (three- and two-dimensional), respectively: for example, enabling or disabling a CVE participant’s ability to enter, view, and hear into a space, as well as their ability to see and use some asset. The third use case, audio conversation restrictions, is aimed at managing the capacity of a participant to speak within an area. This can be applied to situations where one participant is addressing a group of others and should not be interrupted, or used to enforce vocal silence in a particular space. The fourth use case, avatar cloaking, addresses the need for participants to control who can and cannot know they are present in a CVE. For example, a participant may wish that only a particular group can see and interact with their avatar. Finally, the last use case, permissions and ownership changes, proposes that a simple, effective graphical user interface be employed to handle DAC configuration management by CVE participants.
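To make the owner/group/other model concrete, the following sketch (ours, not WonderDAC’s actual code; all names and data structures are hypothetical) shows how a discretionary check over the interact and alter permissions might be resolved:

```python
# Hypothetical sketch of a WonderDAC-style permission check.
# Roles: owner, group, other; permissions: interact, alter.
from dataclasses import dataclass, field


@dataclass
class Acl:
    owner: str   # account that owns the virtual object
    group: str   # ad hoc group assigned to the object
    # permission bits per role, e.g. {"interact": True, "alter": False}
    owner_perms: dict = field(default_factory=lambda: {"interact": True, "alter": True})
    group_perms: dict = field(default_factory=lambda: {"interact": True, "alter": False})
    other_perms: dict = field(default_factory=lambda: {"interact": False, "alter": False})


def is_allowed(acl: Acl, user: str, groups: set, perm: str) -> bool:
    """Resolve a request the way a discretionary owner/group/other model would."""
    if user == acl.owner:
        return acl.owner_perms.get(perm, False)
    if acl.group in groups:
        return acl.group_perms.get(perm, False)
    return acl.other_perms.get(perm, False)


# A loft whiteboard owned by the test monitor and assigned to TeamA:
board = Acl(owner="monitor", group="TeamA")
print(is_allowed(board, "alice", {"TeamA"}, "interact"))  # a TeamA member may use it
print(is_allowed(board, "bob", {"TeamB"}, "interact"))    # a TeamB member may not
```

The owner check precedes the group check, mirroring the conventional discretionary ordering in which an owner’s bits always govern the owner’s own access.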

In this paper, we discuss a demonstration in which WonderDAC’s privacy and integrity mechanisms were put to the test. Several participants were invited to spend a short time investigating and presenting a simple topic entirely within the Wonderland CVE. At the outset, they were divided into two teams, with each team given access to a separate, private space for research activities; neither team was permitted to access the other team’s room. To aid in their work, each team had its own Web browser, presentation builder, and whiteboard. Thirty minutes were allotted for research activities, after which the teams relocated to a common space and used another 20 minutes to present their findings to each other. To guide these presentations, the teams alternated as audience and presenters, with only the presenting team being able to speak and manipulate a virtual PDF viewer. Also, a virtual microphone (accessible by the presenting team) enabled speakers to be heard throughout much of the CVE, in case an audience member temporarily left the space where the presentations were given. Finally, the teams were allowed some additional time to socialize and walk about the CVE. In total, the demonstration required an hour and 15 minutes: the length of a typical college class meeting.

Following the demonstration, a 10 question survey was administered to the participants. The goals of the survey were to solicit opinions about how well WonderDAC seemed to function and how sensible its approach and interfaces were. Depending on context, one of two five-point rating scales was used for each survey question.

The remainder of this paper is organized as follows: the next section summarizes related work; Section 3 looks at our approach to evaluating WonderDAC’s usability; Section 4 reviews and interprets the results of our evaluation; Section 5 considers other issues faced by Wonderland and WonderDAC; and Section 6 concludes the paper with summary remarks and a brief discussion of future work.

2 Related Work

The field of human-computer interaction (HCI) has been active for decades and offers established methods for designing and testing various interface types [Mye98][BJJ99][Bae08]. However, the treatment of VR within HCI is still relatively new and not yet formalized [LK07][Mar99]. Nevertheless, we are interested in approaches that can evaluate the usability of desktop CVEs, in particular as this relates to access control. Although our search of the literature has revealed some information about CVE usability testing, we have found nothing that addresses the role that might be played by access control. Hence, our review of related work is focused on those efforts that we can learn from and adapt into our own evaluation method.

There are largely two categories into which we may divide related work: usability tests and usability evaluation approaches. Examples of tests are often aimed at comparisons of activities and/or interfaces relative to the virtual world. Typically, both a quantitative and a qualitative analysis are involved. In [NF00], the authors evaluate how efficiently 10 users can search for books within a virtual library. A quantitative analysis is carried out to compare different clustering and query highlighting techniques for the books; a qualitative analysis of user behavior within the virtual library is also undertaken. [KOR+02] presents an assessment of the physical interfaces used in the navigation of a three-dimensional terrain system. Here, in an experiment with 24 participants, quantitative and qualitative comparisons are made by the authors for mouse, speech, gesture, and multimodal (speech and gesture) navigational means. Quantitatively, the time to locate a target with each interface and the number of correctly recalled locations are measured and evaluated. Qualitatively, test participants are asked to rate various aspects of the terrain system (e.g., ease of learning and use, sense of presence, comfort). A purely quantitative CVE assessment is found in [GLB05], where the authors discuss a model for transitioning between augmented reality (AR) and VR. They present two experiments in which they evaluate the ability of a user to navigate a maze through AR and VR interfaces. Analysis of their experiments is quantitative and focused on the efficiency differences and gains of using AR or VR.

Regarding evaluation approaches for CVE usability, sometimes an approach is derived to assist with the construction and evaluation of a specific system; other times, an approach is independent of any CVE. For example, in [GLG03], the authors build and evaluate a medical training CVE according to three methods: expert heuristic, formative evaluation, and summative evaluation. The first two are applied in alternating fashion during the initial design of the CVE. Collectively, they leverage the knowledge of an expert to resolve usability problems and improve the CVE’s design. The third method, summative evaluation, is applied last to make statistical comparisons of how well interfaces, operations, and components work. Along similar lines, [GHS99] build and evaluate a battlefield visualization environment according to what they call user-centered design. This follows a methodology akin to that above, but with an initial user-task analysis step to explicitly call out the need for “identifying a complete description of tasks, subtasks, and methods required to use a system, as well as other resources necessary for user(s) and the system to cooperatively perform tasks.” No CVE is built or assessed in [Mar99], where the author works from an established HCI perspective (for two-dimensional interfaces) in order to formulate a new approach for evaluating VR systems. Along the way it is noted that “tasks with VR systems are not only performed with the VR application, but also within or inside the virtual environment.” This, then, becomes the primary difference between VR systems and standard two-dimensional interfaces, and necessitates a different, augmented approach for VR usability evaluation. In [LK07], the authors present a substantial framework (what they call a handbook) for evaluating VR applications. To assist the typical researcher or student with carrying out a meaningful usability evaluation of a VR system, they propose a number of simple guidelines and include two brief illustrative case studies.
Finally, in [JGH+07] the authors devise a new HCI approach called Reality-Based Interaction (RBI). RBI is intended to better enable the evaluation and understanding of emerging, post-WIMP (window-icon-menu-pointing device) interfaces such as virtual reality. The theme of RBI is that such interfaces are unified in their employment of real-world interactions: interfaces may be mobile, and, as with VR, navigation and perception are often driving forces. RBI goes on to postulate that the most effective interface designs are those that maximize power (functionality and efficiency) and reality.

3 Evaluating WonderDAC

Our evaluation of WonderDAC comes with several caveats. First, WonderDAC is an extension to Project Wonderland 0.4. This version of Wonderland is intended to support only small groups of participants on a well-appointed server under moderate load. Next, we have no interest in evaluating Wonderland and its interfaces. Instead, we wish to evaluate how successfully WonderDAC can control access to objects and spaces within Wonderland. Finally, to ensure that the use of Wonderland is not an impediment in our assessment of WonderDAC, we decided to handpick a small group of technically savvy participants.

Ultimately, these stipulations diminish the utility of a highly quantitative experiment. In fact, the last caveat means our participants are not probabilistically sampled, and, thus, we are prevented from making any reliable statistical inferences about WonderDAC’s operation [KP02]. However, we contend that this does not reduce the value of our assessment, so long as we have designed a realistic demonstration and have elicited useful feedback from the participants. To these ends, we have carried out a demonstration of WonderDAC’s capabilities within a Project Wonderland 0.4 CVE according to the background and narrative provided below. As a follow-up to the demonstration, we administered a short participant survey to gauge perspectives and opinions about how WonderDAC operated and whether or not its approach to access control made sense. In designing this demonstration, we have adapted guidelines from the Handbook for Evaluation Studies in Virtual Reality devised by [LK07].

3.1 WonderDAC Demonstration Background

The essential research question we wish to address is: does WonderDAC effectively and usefully control access to spaces and objects within the Wonderland CVE? To answer this, we constructed a simple virtual world wherein a small group of participants engaged in two educationally motivated activities: researching a topic and then presenting this research to others. Here, WonderDAC’s role was to enable the privacy of the participants as they conducted their research, and then to permit orderly presentations by controlling who was able to speak and who had access to the presentation tool.

The demonstration environment included four primary spaces in which the participants operated: a team room where avatars first appeared when a client logged into Wonderland, a demonstration room that showed off some of Wonderland’s features, and two second-floor lofts where participants carried out research and built their presentations. In addition to this basic layout, the team room was further divided into a movie screening room, a central meeting area, and two moderate-sized cubicles (for purposes of ambiance). The team room’s central meeting area is where presentations were given, and was equipped with a virtual PDF viewer, tape recorder, and microphone.

The participants selected for this demonstration were all graduate students in Computer Science and Engineering. Researching, creating, and delivering presentations are activities with which the participants had familiarity. We settled on a group of six in order to permit two small teams to simultaneously work on the task of researching and creating presentations. Given the capacity limitations in Wonderland 0.4, we decided to keep the number of participants as low as possible so that we could include more assets within the virtual environment (e.g., there were several X Window applications, whiteboards, PDF viewers, and various other objects available in the virtual world). Unfortunately, one of the participants was unable to attend the demonstration, leaving us with a team of three and a team of two.

Another person operated in the role of test monitor to observe and record (via screen capture) research activities and the presentations. During the presentations, the test monitor also acted as a master of ceremonies. One guest observer was also present during the demonstration, although their avatar employed WonderDAC’s cloaking feature to remain invisible to the other participants until after the first presentation.

The Wonderland 0.4 CVE used in the demonstration was installed on an Intel Core 2 Quad 2.5 gigahertz workstation with four gigabytes of RAM running Ubuntu 8.04 LTS. The participants’ Wonderland client software was installed on Intel Core 2 Duo 2.13 gigahertz workstations, each with two gigabytes of RAM, ATI Radeon X1600 graphics adapters, and 256 megabytes of video RAM. All participant workstations were running clones of the same Windows XP image. Unlike the participants, the guest observer employed OS X on an Intel Core 2 Duo 2.33 gigahertz MacBook Pro with three gigabytes of RAM, an ATI Radeon X1600 graphics adapter, and 256 megabytes of video RAM. Finally, the test monitor also used an OS X MacBook Pro, but with an Intel Core 2 Duo 2.5 gigahertz processor, four gigabytes of RAM, an NVidia GeForce 8600M GT graphics adapter, and 512 megabytes of video RAM.

Prior to the demonstration, all participants were given a 15 minute introduction to Wonderland/WonderDAC and the chance to try out the Wonderland CVE with the WonderDAC extensions. After the demonstration, the participants were debriefed and asked to answer a simple questionnaire.

3.2 WonderDAC Demonstration Narrative

The following narrative acted as a flight plan during the demonstration and is summarized as a schedule of test goals in Subsection 3.3.

3.2.1 Environment Preparations Phase (completed before demonstration)

Before the demonstration commences, the Wonderland system administrator (who may also be the test monitor) logs into the CVE and ensures that the environment is properly configured:

1. All necessary user accounts have been appropriately created

2. Two groups, TeamA and TeamB, have been created; half of the participants have been added to TeamA, and the other half to TeamB

3. The test monitor’s and guest observers’ accounts have been added to both the TeamA and TeamB groups (to enable full access to each team’s space)

4. The Loft1 space has been assigned to the TeamA group; the Loft2 space has been assigned to the TeamB group

5. A Web browser (Firefox) and presentation tool (OpenOffice Impress) have been enabled in both lofts and assigned to each respective loft’s group (i.e., TeamA or TeamB)

6. The rest of the environment has been left in its default, initialized state

Test goals: the system administrator should be able to create the two necessary ad hoc groups and add the participants, test monitor, and guests to these groups as required; also, the lofts and any objects within the lofts (i.e., X Window applications and whiteboards) should be assigned to their respective groups to enable spatial privacy and integrity.

Figure 1: A bird’s-eye view of the virtual environment used in the demonstration. The team room (with office cubes, common area, and movie screening room) is at the top, while the demo room is underneath in the middle; Loft1 is to the left of the demo room and Loft2 is to the right.
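Sketched in code, the preparation steps amount to populating group memberships and pointing each space’s group assignment at the right team. All names and helper functions below are ours, for illustration only; WonderDAC itself manages these through its GUI:

```python
# A sketch (ours, not WonderDAC code) of the environment-preparation steps in
# Subsection 3.2.1; group membership and space assignments are plain mappings.
groups = {"TeamA": {"p1", "p2", "p3"}, "TeamB": {"p4", "p5", "p6"}}

# Step 3: the test monitor and guest observer join both groups for full access.
for members in groups.values():
    members.update({"monitor", "guest"})

# Step 4: each loft is assigned to its team's group.
space_group = {"Loft1": "TeamA", "Loft2": "TeamB"}


def can_enter(user: str, space: str) -> bool:
    """Spatial privacy: only members of a space's group may enter or see it."""
    return user in groups[space_group[space]]


print(can_enter("p1", "Loft1"))       # a TeamA member may enter Loft1
print(can_enter("p1", "Loft2"))       # but not TeamB's loft
print(can_enter("monitor", "Loft2"))  # the test monitor may enter both
```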

3.2.2 Login Phase (5 minutes)

The test monitor and guests log into the Wonderland CVE first, then the test monitor prepares to document the demonstration. The participants log in next and proceed to their respective team lofts. Because the members of one team cannot see or access the other team’s loft, locating the correct destination is trivial.

Test goals: the participants should only be able to see and access their respective team’s loft; when members of either team enter their loft, they should disappear from the perspective of the other team; none of the participants should be able to see/hear any guest observers who have logged into the CVE.
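The cloaking rule in these goals can be stated compactly. The following illustration is hypothetical (field names and logic are ours, not Wonderland’s actual implementation): a cloaked avatar is rendered only for viewers who belong to its permitted group.

```python
# Hypothetical illustration of the login-phase visibility rules; the cloaking
# logic and field names are ours, not Wonderland's actual implementation.
def avatar_visible_to(avatar: dict, viewer_groups: set) -> bool:
    """A cloaked avatar is visible only to members of its permitted group."""
    return (not avatar["cloaked"]) or (avatar["group"] in viewer_groups)


guest = {"name": "guest", "cloaked": True, "group": "Observers"}
print(avatar_visible_to(guest, {"TeamA"}))      # participants cannot see the guest
print(avatar_visible_to(guest, {"Observers"}))  # a fellow observer can
```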

3.2.3 Research Phase (30 minutes)

Once all participants are ready, the research phase of the demonstration begins. Both teams spend the allotted time researching the history of hypertext and building a six minute presentation. At their disposal in each loft are two X Window applications (the Firefox Web browser and OpenOffice Impress) and a virtual whiteboard. The resulting presentations are saved as PDFs in locations provided by the test monitor.

At the end of this phase, the test monitor alters the group and permission configurations of both lofts to permit full access by all participants. The X Window applications are then closed to save on system resources.

Figure 2: A student participates in the Wonderland/WonderDAC demonstration.

Figure 3: Participants begin the demonstration in the team room, after logging into Wonderland.

Figure 4: TestUser1 looks out from their loft to where TeamB’s loft (Loft2) should be. Because this participant does not have access to Loft2, only a wall in the demo room below is visible.

Figure 5: TeamA engages in research; the Firefox Web browser is executing in the left window, while OpenOffice Impress is in the right.

Figure 6: TeamB engages in research.

Test goals: status quo; the teams should continue to have access only to their respective lofts; neither team should be able to see/hear the team in the other loft until the phase has ended; guest observers should remain cloaked.

3.2.4 Presentation Phase (25 minutes)

Each team makes a six minute presentation (allowing each member a chance to talk) and answers questions for four minutes at the end. Permission to use (i.e., alter) the team room’s PDF viewer and microphone should be granted to the presenting team while the other team acts as an audience. Five minutes are allotted to switching the teams from presenters to audience members. All guest observers should reveal themselves at the end of the first presentation so that they may participate in questioning the presenters; they should stay de-cloaked for the remainder of the demonstration.

Figure 7: TeamA presents their research results in the team room.

Figure 8: TeamB gives their presentation.

Test goals: the test monitor should configure the group and permissions of the team room’s PDF viewer and microphone to enable control by the presenting team (in the interests of time and efficiency, this may take place at some convenient moment in the last phase); the group and permissions of the team room space should be configured to permit only the presenting team to speak; the virtual audio recorder should be used to capture the presentations, although only the test monitor should be able to operate the recorder; the guest observers should correctly appear for all of the participants.
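The handover between teams reduces to re-pointing the group on the shared tools at the presenting team. The sketch below uses hypothetical names and data structures of our own; it is not WonderDAC code:

```python
# A sketch (hypothetical names) of the presentation-phase handover: the test
# monitor re-points the group on the shared tools at the presenting team.
tools = {
    "pdf_viewer": {"group": None, "alter": True},  # group members may alter
    "microphone": {"group": None, "alter": True},
}


def set_presenting_team(team: str) -> None:
    """Grant the presenting team control of the team room's tools."""
    for tool in tools.values():
        tool["group"] = team


def may_alter(tool_name: str, user_groups: set) -> bool:
    tool = tools[tool_name]
    return tool["group"] in user_groups and tool["alter"]


set_presenting_team("TeamA")
print(may_alter("pdf_viewer", {"TeamA"}))  # presenters drive the slides
print(may_alter("pdf_viewer", {"TeamB"}))  # the audience cannot

set_presenting_team("TeamB")               # the five-minute switch-over
print(may_alter("microphone", {"TeamB"}))
```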


Figure 9: The test monitor uses WonderDAC to open up access control for the demo room and its contents.

3.2.5 Wrap-up Phase (15 minutes)

This phase is used to handle any overrun during the research or presentations, and offers the chance for participants to mill around, socialize, and try things out in Wonderland. This includes watching the movie in the screening room, starting up any of the X Window applications, and interacting with any other Wonderland feature.

Test goals: the test monitor should recursively change the permissions for the demo, team, and screening rooms to enable any participant to interact with and alter the objects contained therein. (The X Window applications default to world-accessible permissions when created.)

3.3 Schedule of Test Goals

Table 2 provides a concise schedule of all the test goals discussed above. In essence, this table is a checklist for elements of the demonstration. Headings along the top include the demonstration phase followed by the five use case scenarios that informed WonderDAC’s design.

3.4 Demonstration Follow-up Survey

In addition to putting WonderDAC’s functionality to the test, we intended that our demonstration draw out opinions and perspectives regarding WonderDAC’s use and basic approach. For example, how did participants find access restrictions while using the Wonderland CVE? Were such restrictions useful or in the way? Also, did WonderDAC’s use of roles (i.e., owner, group, and other) and permissions (i.e., interact and alter) make sense? Finally, was WonderDAC’s user interface sensible and easy to use? The general content of these questions was addressed at the end of our demonstration in the form of a short survey. The five participants were asked to respond to 10 questions by using one of two rating scales for each question, depending on context. The scales are given in Table 1:

Table 1: Rating Scales

    Value  Description          Value  Description
    1      Disagree             1      Ineffective
    2      Somewhat disagree    2      Somewhat ineffective
    3      Have no opinion      3      Neutral
    4      Somewhat agree       4      Somewhat effective
    5      Agree                5      Effective

The survey questions, given in Table 3, are generally phrased as statements with which the participant was asked to register some form of agreement or disagreement. The results of the demonstration and survey are provided in Section 4.

4 Evaluation Results

The outcome of our demonstration offered two perspectives on the Wonderland CVE extended by WonderDAC. One addresses operational correctness, while the other is concerned with the end-user experience. By operational correctness, we refer to the ability of the WonderDAC mechanisms to function in expected and accurate ways. Neither the participants nor the guest observer experienced any significant events that appeared unusual or incorrect. Also, the test monitor had no problems configuring access control within the virtual world before and during the demonstration. Participants were able to complete all scheduled activities and deliver coherent (if brief) presentations about the history of hypertext. Afterwards, they were able to experiment and interact with various WonderDAC dialogs. In short, the demonstration went according to plan regarding WonderDAC’s capabilities. Moreover, as Figure 10 indicates, participant responses to the survey questions were basically positive about the demonstration experience and WonderDAC’s features.

Figure 10: Average participant ratings for the Demonstration Follow-up Survey.

The outlier in this figure is Q10, the final question of the survey, which asks about WonderDAC’s ad hoc groups interface. Here, all but one of the participants answered the question with “Have no opinion,” which maps to a value of three. During the last fifteen minutes of the demonstration, participants were given the chance to experiment with the virtual environment and more directly access WonderDAC’s features. Because this phase of the demonstration was not as regimented as the others, we believe that most of the participants did not take the opportunity to try out WonderDAC’s ad hoc group feature. This, in turn, led to a response that was closer to the middle of the scale.
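The near-neutral average for Q10 follows directly from the scale mapping. As a toy illustration (the individual response values below are hypothetical, chosen only to match the described pattern of four “Have no opinion” answers):

```python
# Hypothetical Q10 responses: four "Have no opinion" (3) and one other answer.
q10_responses = [3, 3, 3, 3, 4]
average = sum(q10_responses) / len(q10_responses)
print(average)  # 3.2, close to the scale's neutral midpoint of 3
```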

End-user experience relative to Wonderland (how participants perceive the CVE and its immersive qualities) is not a significant factor for WonderDAC; other parties are involved in improving this aspect of Wonderland. Nevertheless, we include a synopsis of what we observed for completeness’ sake. On the server, it is noteworthy that load averages for one, five, and 15 minutes never came close to the threshold value of one; this indicated that, within the intervals sampled, no processes were waiting for service by the CPU. Moreover, at no time did the server utilize swap space.

Unfortunately, the speediness of the Wonderland server did not translate to the client programs used by the participants and guest observer. Although the underlying client operating systems appeared to function well during the demonstration, intermittent sluggishness while maneuvering around the virtual world and utilizing the X Window applications was the norm. A few times, participants had to quit and restart their Wonderland clients due to freeze-ups. In contrast to these experiences, the test monitor’s Wonderland client exhibited continuously responsive behavior; in this case, however, a more powerful graphics adapter was used (see Subsection 3.1). We did not attempt to diagnose the cause of the client-side sluggishness, though it may have been related to the combination of Java3D and the ATI graphics adapters used by the participants and guest observer.

5 Other Wonderland/WonderDAC Considerations

Although the evaluation results generally indicate that WonderDAC is effective at access control, there are several other issues we must also consider. We begin with Wonderland’s use of X Window applications within its virtual environment. This feature was heavily leveraged in our demonstration, and clearly offers a convenient means for extending Wonderland’s collaborative nature. However, by integrating X Window applications within Wonderland, dangerously easy and direct access is provided to the underlying server. Furthermore, X Window applications may potentially be executed by Wonderland’s Server Master Client (the SMC provides server-side X Window capability) under the same operating system user ID as other primary Wonderland components (i.e., the server and jVoiceBridge). In our demonstration, for example, a participant could have used Firefox or Impress to browse and possibly edit various system and Wonderland-related files. This risk can be somewhat mitigated by operating the SMC within a so-called chroot jail¹ on an independent host. Of course, having to manage a separate server for X Window integration while correctly and securely maintaining a chroot jail significantly increases the administrative overhead of a Wonderland system.

¹ This is a directory that, for affected users, is treated as the system's root, thereby thwarting access to sensitive resources outside of the jail.

The absence of a root-level (i.e., administrator) account within WonderDAC is another important issue to address. In theory, WonderDAC should never require such an account: as a truly discretionary solution, it leaves the management of all access control to the owners of the virtual objects comprising a Wonderland environment. Additionally, the presence of a root account introduces significant risk. Root is often the target of malicious activity and, in the wrong hands, provides unfettered control over the underlying system. Without a root account, a Wonderland administrator in a pinch can always edit the role and permission fields within the XML files that describe most virtual objects and spaces. Unfortunately, there is a problem regarding access control of Wonderland's server-side X Window capabilities. Owned by the SMC dummy account and existing outside the XML infrastructure of other virtual objects, server-side X Window applications cannot be restricted by a normal participant.² For our demonstration, we had to modify WonderDAC to treat a specific account as though it had root privileges; this enabled us to fully control access to the Firefox and Impress applications used by the participants. However, we feel this is an unacceptable solution. It would be safer and more elegant for a newly created X application to assume the roles and permissions of the space where it resides.
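The inheritance scheme we suggest could look roughly like the following sketch, in which a newly spawned server-side X application simply copies the DAC settings of its enclosing space. The record layout and names are illustrative assumptions, not Wonderland's actual data structures.

```python
import copy

def spawn_x_app(name, enclosing_space):
    """Create a DAC record for a new X application by inheriting the
    owner, group, and permission sets of the space it resides in."""
    return {
        "name": name,
        "owner": enclosing_space["owner"],
        "group": enclosing_space["group"],
        # Deep-copy so later edits to the app's permissions never
        # leak back into the enclosing space's configuration.
        "perms": copy.deepcopy(enclosing_space["perms"]),
    }
```

Under this scheme, an application launched inside a team's loft would be restrictable by the loft's owner from the moment it appears, with no root-like account required.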

Continuing, we note that logging capabilities within Wonderland's system components are insufficient. Although error, informational, and debugging messages may be written to the terminal in which a component is executed, this is a very poor substitute for a secure, aggregated logging facility. We hasten to add that our implementation of WonderDAC is just as lacking in this area. Events that should be captured by an appropriate logging facility include, but are not limited to: system accesses (successful and unsuccessful participant logins), all WonderDAC configuration changes, error messages, informational messages, Wonderland management activities (e.g., configuration changes, system restarts), and system exits (participant logoffs).
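A minimal aggregated audit trail along these lines might be sketched as follows; the event names and the handler choice (a single rotating file via Python's standard logging module, standing in for whatever facility a Java deployment would use) are our assumptions, not part of Wonderland.

```python
import logging
import logging.handlers

def make_audit_logger(path="wonderdac-audit.log"):
    """One aggregated, append-only audit trail for all components."""
    log = logging.getLogger("wonderdac.audit")
    log.setLevel(logging.INFO)
    if not log.handlers:  # avoid attaching duplicate handlers
        handler = logging.handlers.RotatingFileHandler(
            path, maxBytes=1_000_000, backupCount=5)
        handler.setFormatter(
            logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
        log.addHandler(handler)
    return log

def audit(log, event, who, detail=""):
    """Record one security-relevant event (login, config change, ...)."""
    log.info("event=%s user=%s %s", event, who, detail)
```

Calls such as `audit(log, "LOGIN_FAILURE", "mallory", "bad password")` or `audit(log, "DAC_CHANGE", "alice", "loftA other+=interact")` would then cover the event categories listed above.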

² It should be noted that Wonderland's client-side X Window capability (where a participant using Linux or Solaris starts an X application on their workstation) does not suffer from this problem. Such applications are automatically owned by the initiating participant.

Next, we concern ourselves with the WonderDAC user interface. Although it works in a practical sense, WonderDAC's GUI includes minor aesthetic and functional weaknesses. As part of the client software, the WonderDAC GUI is implemented in Java Swing with simple menus and text boxes. A more visually pleasing way to handle access control management would be through graphical dialogs that operate within the Wonderland client's window (as opposed to on top of or around this window). This would also align better with the operation of virtual objects such as the PDF, video, and VNC viewers, which already include in-world controls. Functionally, it is desirable that right-clicking on any virtual object bring up the WonderDAC Object Configuration dialog box. Unfortunately, due to the limited way mouse events are handled by objects enclosed in two-dimensional frames (e.g., the whiteboard, the PDF viewer, X Window applications), a button specifically for invoking the WonderDAC dialog had to be added to all such objects. This workaround compels a participant to go a little out of their way when configuring access control for two-dimensional assets. Also, there is no direct means for a participant to determine the system and ad hoc groups to which they belong. To some extent, this information can be gleaned by examining the DAC configuration of virtual objects: the participant belongs to any group listed. However, this is clearly not an expedient way to obtain one's group affiliations.
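The roundabout group lookup just described (scanning the DAC configurations of the objects a participant can see) could at least be automated along these lines; the record layout is our illustrative assumption.

```python
def visible_groups(objects):
    """Collect every group named in the DAC configurations of the
    given virtual objects -- the indirect affiliation lookup a
    participant must otherwise perform by hand."""
    return sorted({obj["group"] for obj in objects if obj.get("group")})
```

A direct "My Groups" query in the client would obviously be preferable, but a helper like this shows the information is already recoverable from existing object metadata.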

Finally, as we have discussed in [WM08b], Wonderland is missing the network security features requisite for privacy and non-repudiation. Data stream encryption can provide privacy and integrity at the network level, while digital certificates could be employed to help prove the identities of participants and Wonderland system components. The absence of these security measures undermines the ability of WonderDAC to operate reliably and with assurance.
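The missing transport protections amount to standard mutually-authenticated TLS. A server-side sketch using Python's ssl module (the certificate file arguments are hypothetical; a Java deployment would use JSSE equivalents) might look like:

```python
import ssl

def secure_server_context(certfile=None, keyfile=None, cafile=None):
    """TLS context requiring client certificates, giving both privacy
    (encryption) and a basis for non-repudiation (verified identities)."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols
    ctx.verify_mode = ssl.CERT_REQUIRED  # participants must present certs
    if certfile:
        ctx.load_cert_chain(certfile, keyfile)  # server's own identity
    if cafile:
        ctx.load_verify_locations(cafile)  # CA that signed client certs
    return ctx
```

Wrapping the Wonderland client and jVoiceBridge sockets in such a context would address both the privacy and the identity-proof requirements noted above.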

6 Conclusion and Future Work

We devised and implemented WonderDAC in response to the limited and sometimes absent means of access control within CVEs. As an extension to Project Wonderland, WonderDAC enables access control management by the owners of virtual objects inside a given Wonderland world. All objects are included: spaces, three-dimensional assets, two-dimensional images and videos, avatars, and sound. In addition, WonderDAC offers a simple, ubiquitous way to handle objects similarly, by configuring two permissions (interact and alter) for each of three possible roles (owner, group, and other). This approach is further enhanced by allowing participants to create and manage ad hoc groups and apply these groups to objects they own.
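As a concrete illustration, the two-permission, three-role scheme can be sketched as follows. Wonderland is a Java system, but we use a compact Python sketch here; the class and field names are our own illustrative choices, not WonderDAC's actual API.

```python
from dataclasses import dataclass, field

INTERACT, ALTER = "interact", "alter"

@dataclass
class DacConfig:
    """Hypothetical per-object DAC record: two permissions per role."""
    owner: str
    group: str
    perms: dict = field(default_factory=lambda: {
        "owner": {INTERACT, ALTER},  # owners get everything by default
        "group": {INTERACT},         # group members may interact
        "other": set(),              # everyone else sees nothing
    })

def allowed(cfg, user, user_groups, perm):
    """Resolve the most specific role for `user`, then test `perm`."""
    if user == cfg.owner:
        role = "owner"
    elif cfg.group in user_groups:
        role = "group"
    else:
        role = "other"
    return perm in cfg.perms[role]
```

For example, `allowed(cfg, "bob", {"teamA"}, INTERACT)` grants a team member interaction with a team-owned object, while the same call with `ALTER` is refused under the defaults above.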

In an effort to explore WonderDAC's utility, we conducted a demonstration in which a small group of graduate students divided into two teams, researched a topic, and then presented their research, all within the Wonderland CVE. During the demonstration, WonderDAC was used to maintain privacy and integrity within the virtual world and to ensure an orderly progression of activities during the presentations (e.g., only one team could speak at a time or use the PDF viewer). Survey responses by the demonstration's participants were largely positive about WonderDAC's operation and interface.

Despite what we perceive to be a successful demonstration, important ancillary issues remain to be tackled before WonderDAC is a fully secure solution. For example, the ability to deploy X Window applications in Wonderland introduces great flexibility. For server-side X applications, however, this feature can provide a malicious participant with a doorway to wherever the application is executing, and it introduces access control management difficulties. Also, secure logging of Wonderland and WonderDAC events is lacking, which makes it quite difficult to track down and resolve anomalies and issues of abuse. The WonderDAC interface needs improvements, too. Here, it is desirable that a participant use dialogs and menus integrated within the graphical, three-dimensional Wonderland environment, as opposed to the Java Swing dialogs that appear on the participant's desktop. Finally, network security features such as data stream encryption and digital certificates should be employed to protect Wonderland communications and ensure non-repudiation. WonderDAC can only be as reliable as Wonderland's communications are secure.

Table 2: Schedule of Test Goals

Phase: Environment Preparations
  Spatial objects: -
  Non-spatial objects: -
  Audio conversations: -
  Avatar cloaking: -
  GUI: The test monitor creates/manages/assigns team groups

Phase: Participant Login
  Spatial objects: Teams can access/view only their respective lofts
  Non-spatial objects: Teams can access only their respective lofts' objects
  Audio conversations: Teams cannot hear into each other's lofts
  Avatar cloaking: Guest observers are hidden from teams
  GUI: Guest observers enable avatar cloaking

Phase: Research
  Spatial objects: Teams can access/view only their respective lofts
  Non-spatial objects: Teams can access only their respective lofts' objects
  Audio conversations: Teams cannot hear into each other's lofts
  Avatar cloaking: Guest observers are hidden from teams
  GUI: -

Phase: Presentation
  Spatial objects: -
  Non-spatial objects: Only presenting team can use PDF viewer and mic in team room. Only test monitor can use audio recorder
  Audio conversations: Only presenting team can speak in team room
  Avatar cloaking: Guest observers reveal themselves after first presentation
  GUI: Test monitor manages groups/permissions for team room, PDF viewer, and mic; operates audio recorder

Phase: Wrap-up
  Spatial objects: All participants can access all rooms
  Non-spatial objects: All participants can use all objects
  Audio conversations: All participants can speak in all rooms
  Avatar cloaking: -
  GUI: Test monitor recursively changes groups/permissions for team, screening, and demo rooms to enable full access by all participants

Table 3: Demonstration Survey Questions

1. Discretionary access control based on roles (owner, group, and other) and only two permissions (interact and alter) seems straightforward and intuitive.

2. How effective was access control for entering, seeing, and hearing into spaces (e.g., the team lofts and the screening room)?

3. Spaces for which one has no interact permission are completely removed from one's perspective of the virtual world. In practice, this metaphor worked well.

4. How effective was access control for viewing and using non-spatial objects (e.g., X Window applications, whiteboards, PDF viewers, video viewers, 3D models)?

5. Non-spatial objects for which one has no interact permission are completely removed from one's perspective of the virtual world. In practice, this metaphor worked well.

6. How effective was access control for restricting audio conversation in the team room (during presentations) and in the screening room (during the wrap-up phase)?

7. The avatar cloak may be used to restrict the ability of participants to see and hear one's avatar. Avatars for which one has no interact permission are completely removed from one's perspective of the virtual world. In practice, this metaphor worked well.

8. By right-clicking on the walls of a space, on a 3D non-spatial object, or on the DAC button of a 2D window, one can activate the WonderDAC Object Configuration dialog box. This interface was sensible and concise.

9. By selecting Avatar Cloak from the Wonderland client Tools menu, one can activate the Avatar Cloak Properties dialog box. This interface was sensible and concise.

10. By selecting Ad Hoc Groups from the Wonderland client Tools menu, one can activate the Ad Hoc Group Builder/Maintainer dialog box. This interface was sensible and concise.

References

[Bae08] Ronald M. Baecker, Timelines: themes in the early history of HCI—some unanswered questions, interactions 15 (2008), no. 2, 22–27.

[BJJ99] Keith A. Butler, Robert J. K. Jacob, and Bonnie E. John, Human-computer interaction: introduction and overview, CHI '99: CHI '99 Extended Abstracts on Human Factors in Computing Systems (New York, NY, USA), ACM, 1999, pp. 100–101.

[GHS99] J. L. Gabbard, D. Hix, and J. E. Swan II, User-centered design and evaluation of virtual environments, IEEE Computer Graphics and Applications 19 (1999), no. 6, 51–59.

[GLB05] Raphael Grasset, Philip Lamb, and Mark Billinghurst, Evaluation of mixed-space collaboration, ISMAR '05: Proceedings of the 4th IEEE/ACM International Symposium on Mixed and Augmented Reality (Washington, DC, USA), IEEE Computer Society, 2005, pp. 90–99.

[GLG03] Gernot Goebbels, Vali Lalioti, and Martin Gobel, Design and evaluation of team work in distributed collaborative virtual environments, VRST '03: Proceedings of the ACM Symposium on Virtual Reality Software and Technology (New York, NY, USA), ACM, 2003, pp. 231–238.

[JGH+07] Robert J. K. Jacob, Audrey Girouard, Leanne M. Hirshfield, Michael S. Horn, Orit Shaer, Erin Treacy Solovey, and Jamie Zigelbaum, Reality-based interaction: unifying the new generation of interaction styles, CHI '07: CHI '07 Extended Abstracts on Human Factors in Computing Systems (New York, NY, USA), ACM, 2007, pp. 2465–2470.

[KOR+02] David M. Krum, Olugbenga Omoteso, William Ribarsky, Thad Starner, and Larry F. Hodges, Evaluation of a multimodal interface for 3D terrain visualization, VIS '02: Proceedings of the Conference on Visualization '02 (Washington, DC, USA), IEEE Computer Society, 2002, pp. 411–418.

[KP02] Barbara Kitchenham and Shari Lawrence Pfleeger, Principles of survey research: part 5: populations and samples, SIGSOFT Software Engineering Notes 27 (2002), no. 5, 17–20.

[LK07] S. Livatino and C. Koffel, Handbook for evaluation studies in virtual reality, Virtual Environments, Human-Computer Interfaces and Measurement Systems, 2007 (VECIMS 2007), IEEE Symposium on (2007), 1–6.

[Mar99] Tim Marsh, Evaluation of virtual reality systems for usability, CHI '99: CHI '99 Extended Abstracts on Human Factors in Computing Systems (New York, NY, USA), ACM, 1999, pp. 61–62.

[Mye98] Brad A. Myers, A brief history of human-computer interaction technology, interactions 5 (1998), no. 2, 44–54.

[NF00] Fernando A. Das Neves and Edward A. Fox, A study of user behavior in an immersive virtual environment for digital libraries, DL '00: Proceedings of the Fifth ACM Conference on Digital Libraries (New York, NY, USA), ACM, 2000, pp. 103–111.

[WM08a] Timothy E. Wright and Gregory Madey, Discretionary access controls for a collaborative virtual environment, Tech. Report TR 2008-12, University of Notre Dame, College of Engineering, Notre Dame, IN 46556, September 2008.

[WM08b] Timothy E. Wright and Gregory Madey, WonderDAC: an implementation of discretionary access controls within the Project Wonderland CVE, Tech. Report TR 2008-15, University of Notre Dame, College of Engineering, Notre Dame, IN 46556, November 2008.