
Level of Realism for Serious Games

Alan Chalmers and Kurt Debattista
The Digital Laboratory, WMG, University of Warwick, UK
Email: [alan.chalmers;kurt.debattista]@warwick.ac.uk

Abstract—Serious games are playing an increasingly important role in training people about real world situations. A key concern is thus the level of realism that the game requires in order to achieve an accurate match between what the user can expect in the real world and what they perceive in the virtual one. Failure to achieve the right level of realism runs the real risk that the user may adopt a different reaction strategy in the virtual world than would be desired in reality.

High-fidelity, physically based rendering has the potential to deliver the same perceptual quality of an image as if you were “there” in the real world scene being portrayed. However, our perception of an environment is not only what we see, but may be significantly influenced by other sensory input, including sound, smell, touch, and even taste. Computing and delivering all sensory stimuli at interactive rates is a computationally complex problem. Achieving true physical accuracy for each of the senses individually, for any complex scene in real time, is simply beyond the ability of current standard desktop computers. This paper discusses how human perception, and in particular any crossmodal effects in multi-sensory perception, can be exploited to selectively deliver high-fidelity virtual environments for serious games on current hardware. Selective delivery enables those parts of a scene which the user is attending to, to be computed in high quality. The remainder of the scene is delivered in lower quality, at a significantly reduced computational cost, without the user being aware of this quality difference.

I. INTRODUCTION

If serious games are ever to be regularly used as a valuable tool to experiment, learn and train in the virtual world, with confidence that the results and the user’s actions are the same as would be experienced in the real world, then we need to be able to compute these environments to be perceptually equivalent to being “there” in the real world; so-called “there-reality” or real virtuality. Perception in the real world is multi-sensory and thus any serious game should include an appropriate level of realism associated with each of the senses, including any crossmodal effects. For example, as shown in Figure 1, if we want to educate students about life in ancient Egypt, then we need to accurately recreate the visuals, sounds, smells, temperature and even tastes of the past.

Fig. 1. Realism for educating students about ancient Egypt.

Morton Heilig’s Sensorama of the early 1960s was one of the first attempts to provide a multi-sensory user environment [1]. This mechanical device was capable of displaying stereoscopic 3D images and included user movement, stereo sound, wind and aromas which could be triggered during the film. Unfortunately, Heilig was unable to find financial backing for his device and nothing further was developed.

Although visuals, sound and, to some extent, haptics are regularly used in serious games, the real-time requirements of games mean that these stimuli are rarely computed using physically based algorithms. So, for example, a rasterised solution that does not take into account the physical properties of how light is transported throughout a scene, often referred to as local illumination, will be used for the real-time visuals rather than a global illumination solution. When attempts at global illumination are made, these are typically “baked”, accounting for only some of the global illumination effects; for example, global illumination is computed only for static objects. Such methods most commonly use some form of precomputation to account for the global illumination.
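As a rough illustration of what “baking” means in practice, the following sketch precomputes an indirect lighting term offline and reduces the run-time cost to a texture lookup. The lightmap contents, names and values are invented purely for illustration and are not taken from any particular engine.

```python
# Sketch of "baking": global illumination for static geometry is
# precomputed offline into a lightmap; at run time the rasteriser only
# performs a cheap texture lookup. All names/values are illustrative.
import numpy as np

# Offline: an expensive global illumination solve (faked here with a
# constant indirect term) is stored per lightmap texel for static objects.
LIGHTMAP = np.full((256, 256), 0.35)   # stand-in for a precomputed GI solve

def shade_static_fragment(u: float, v: float, albedo: float,
                          direct: float) -> float:
    """Run time: combine cheap local (direct) lighting with the baked
    indirect term; dynamic objects receive no such indirect contribution."""
    tex = LIGHTMAP[int(v * 255), int(u * 255)]   # nearest-texel lookup
    return albedo * (direct + tex)

print(shade_static_fragment(0.5, 0.5, albedo=0.8, direct=0.6))
```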

Despite the importance of scent in our everyday life, only much more recently has smell started to be introduced into virtual environments for training, for example [2], and therapy, for example [3], [4]. Results from preliminary studies have shown that the introduction of smell does indeed increase the user’s sense of “presence” in the virtual environment [5], [6]. In particular, the introduction of realistic smells, including the smells of burning rubber and flesh, has been used effectively to treat soldiers returning from Iraq with post-traumatic stress disorder [7]. Most recently, an interactive olfactory display was developed for a “cooking game”, in which the duration and strength of a number of predefined smells were controlled and blended in real time into a number of recipes by the game [8].

Although research on taste perception started in the late 1500s, long before anything was known about the physiology and anatomy of taste, taste in virtual environments remains unexplored.


Fig. 2. Hypothesis testing on the perception of Knossos in the past: (a) lit with simulated modern lighting; (b) lit by simulated beeswax lamps.

II. LEVELS OF REALISM (LOR)

A Level of Realism (LoR) may be defined as the physical accuracy of a virtual environment that is necessary in order to achieve a one-to-one mapping of an experience in the virtual environment with the same experience in the real environment [9]. This is particularly important for serious games for training because, if there is not an appropriate LoR, subjects may adopt a different task strategy in the virtual world than in the real world, for example [10].

Furthermore, for hypothesis testing, such a one-to-one mapping may be essential if we are to avoid misrepresenting the real environment. For example, when reconstructing an archaeological site, such as the Minoan palace of Knossos in Crete, for teaching students how the site may have been perceived some 2,000 years ago, when it was lit by beeswax lamps rather than the modern lighting it has today (Figure 2), we need to be confident in the authenticity of our computer reconstruction [11].

A. There-Reality

There-reality environments are defined as those virtual environments which evoke the same perceptual response from a viewer as if they were actually present, or “there”, in the real scene being depicted [12]. This has also been termed “real virtuality” [9]. Unlike physical realism, it is not necessary to fully compute all physical interactions in the scene (even if we knew what all these are); rather, we can exploit the limitations of human perception to compute in high fidelity only those multi-modal parts of a scene which the viewer will actually notice.

Cross-modal effects relate to the impact that the presentation of an additional sensory input can have on the perceptual experience of another sensory input. Previous work, for example [13]–[15], demonstrates that cross-modal effects can be considerable, even to the extent that large amounts of detail in one sense may be ignored in the presence of other sensory inputs. The computational savings from such selective rendering can be substantial when exploiting cross-modalities, for example visuals and sound [16], or graphics and motion [17], and other features of the human perception system, such as inattentional blindness [18], [19].

B. The Perception Equation

The perception of an environment, $P$, may be described as a function over time ($t$) of task ($\tau$) and preconditioning ($\rho$):

$$P(\tau,\rho)(t) = \omega_V V(t) + \omega_A A(t) + \omega_S S(t) + \omega_T T(t) + \omega_F F(t) + \omega_\Delta \Delta(t) \qquad (1)$$

where $V$ = visuals, $A$ = audio, $S$ = smell, $T$ = taste and $F$ = feel. $\Delta$ is a “distraction factor” indicating how focussed the user is on the environment; for example, he/she may be distracted by thinking about what to have for lunch. $\omega_i$ is the particular perceptual weighting that each of the senses, and any distraction, has for the perception of that particular moment, with $\sum_i \omega_i = 1$. Each of these weightings is the threshold value below which the perception of the environment would be different from the perception if one were “there” in reality. Above this threshold there is no perceptual difference, and thus in Equation 1 the value of $\omega_i$ is capped at this threshold [9].

The task the user is performing in the environment and any previous experience of the user are key factors which may influence the perceptual importance of different aspects of the scene. A high level of familiarity with the environment, or habituation, may make the user perceive less [20], while deliberate preconditioning, or the intensity and nature of the task being performed, can force the user to attend to stimuli that they would not otherwise notice [21].

Furthermore, the effects of one sense may significantly influence another. For example, Storms found that high-quality sounds coupled with high-quality visual stimuli increase the perceived quality of the visual displays [22], while Winkler and Faller showed that both audio and video quality contribute significantly to the perceived audiovisual quality [23], and Mastoropoulou et al. showed that the combination of tempo and emotional suggestiveness of music affects users’ visual perception of temporal rate and duration [24].

As an example of how the perception equation may differ for each viewer and situation, consider the perception of a user walking through an urban environment. If the user is looking for a coffee shop, then shop signs and any smells present (to distinguish the smell of coffee) will be of high perceptual importance. If, on the other hand, the user is an experienced soldier on combat patrol in Iraq, then sites of possible ambushes, and the sound of any rifle being cocked, will be of key importance. An inexperienced soldier may perceive the environment differently, concentrating rather on the heat and dust. The values of $\omega_i$ would thus be different for each of the senses in the coffee vs combat and experienced vs novice conditions. Any selective delivery system would thus need to ensure high-fidelity smell for the coffee scenario, highly realistic sound for the experienced soldier, and the appropriate level of heat and dust for the inexperienced soldier.


Fig. 3. Believable realism: photograph (left); computer graphics (middle); graphics with scruffy textures (right).

For the inexperienced soldier, the sound quality, i.e. the weighting $\omega_A$, could be lower, and thus require less computational effort, than the $\omega_A$ for his experienced colleague.
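To make the role of these weights concrete, the following minimal sketch turns Equation 1 weightings into a per-sense computation budget for the two scenarios above. All numeric weights are invented purely for illustration; the paper does not prescribe values, which would instead come from empirical studies.

```python
# Minimal sketch of Equation 1 driving a per-sense computation budget.
# All numeric weights below are invented for illustration only.

def perceived_budget(weights: dict) -> dict:
    """Normalise the weights so they sum to 1 (Equation 1 requires
    sum(w_i) = 1) and read them as the fraction of compute per sense."""
    total = sum(weights.values())
    return {sense: w / total for sense, w in weights.items()}

# Coffee-shop scenario: shop signs (visuals) and the coffee aroma (smell)
# carry the most perceptual weight.
coffee = perceived_budget({"visuals": 0.45, "audio": 0.10, "smell": 0.30,
                           "taste": 0.00, "feel": 0.05, "distraction": 0.10})

# Experienced soldier on patrol: subtle sounds (a rifle being cocked)
# dominate; smell matters far less.
patrol = perceived_budget({"visuals": 0.35, "audio": 0.40, "smell": 0.05,
                           "taste": 0.00, "feel": 0.15, "distraction": 0.05})

for name, budget in (("coffee", coffee), ("patrol", patrol)):
    print(name, {s: round(w, 2) for s, w in budget.items()})
```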

III. ACHIEVING REALISM

High-fidelity virtual environments need to be created by modelling all the physical attributes of the real scene being portrayed. Where the real scene exists, this modelling process can be facilitated by capturing as much as possible of the sensory stimuli, and how they interact with the environment, directly on site. A good example of techniques for capturing the lighting and its interaction (i.e. the BRDFs of the materials present) can be found in [25]. Where the real information no longer exists, for example when reconstructing past historical environments such as that in Figure 1, then as much evidence as is available needs to be used to avoid the very real danger of misrepresenting the environment.

A. Believable Realism

As a result of precisely modelling an environment’s geometry and material properties, virtual environments often look pristine, as in Figure 3 (middle), with perfect sound and smell qualities. A real environment is seldom perfect: it includes accumulated stains, dust and scratches from everyday use, and a multitude of background noise and ambient smells. The absence of such “scruffy ambience” in a virtual environment affects a viewer’s sense of the perceived realism of the scene, as can be seen for visuals in Figure 3 (right) [26]. Significant amounts of time are now spent in serious games artistically enhancing the believability of virtual environments with simulated stains, dust and scratches, such as in Figure 4, and with ambient noise.

B. Physically-Based Simulations

Having constructed the model, the physical interaction of each of the senses with the environment now needs to be accurately simulated. Such highly accurate physical simulations can take many minutes to compute even on modern hardware, which precludes their use for the interactive requirements of a serious game. To illustrate how such computational complexity can be significantly reduced without affecting the perceptual quality of the resultant environment, we will demonstrate how the physically-based approach to simulating light, based on geometric optics, can be computed at interactive rates.

Fig. 4. Scruffy textures enhancing believable realism.

The simulation of physically-based lighting requires modelling the physical properties that govern the emission of light, the reflective properties of all the materials in the virtual environment, and the transport of light through the scene. This can be computed by solving the rendering equation [27].
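For context, the rendering equation of [27] in its commonly quoted form is reproduced below; the notation is the standard one rather than anything specific to this paper.

```latex
% Outgoing radiance at surface point x in direction w_o equals emitted
% radiance plus incoming radiance from all directions w_i over the
% hemisphere Omega, weighted by the BRDF f_r and the projected solid angle.
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\,
    (\omega_i \cdot n)\, \mathrm{d}\omega_i
```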

While a few methods have achieved global illumination at close to interactive rates using the more traditional rasterisation pipeline [28]–[30], these typically require some form of precomputation. Another method of solving the rendering equation, which is now becoming popular and will play a big role in future interactive environments and games, is ray tracing [31]. Ray tracing and its related methods solve the rendering equation by simulating groups of photons as rays traversing a virtual scene. In most ray-tracing methods, such rays are shot from the virtual camera rather than from the light sources. Due to their recursive nature, ray-tracing methods tend to be computationally expensive, and until recently were confined to specialised software for simulating lighting, such as RADIANCE [32], and to offline rendering for special effects and animated films. In the following sections we discuss some of the methods used to improve the performance of ray-tracing methods.
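The following self-contained sketch shows the camera-driven recursion just described, on a single hard-coded sphere. The scene, material constants and recursion depth are all invented for illustration; a production tracer would add acceleration structures and full material models.

```python
# A minimal Whitted-style ray tracer for one sphere, sketching the
# camera-driven recursion described above. Scene and material values
# are invented purely for illustration.
import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)
def normalise(a): return scale(a, 1.0 / math.sqrt(dot(a, a)))

SPHERE_C, SPHERE_R = (0.0, 0.0, -3.0), 1.0
LIGHT_DIR = normalise((1.0, 1.0, 1.0))

def intersect_sphere(origin, direction):
    """Return the nearest positive ray parameter t, or None on a miss
    (direction is assumed to be unit length)."""
    oc = sub(origin, SPHERE_C)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - SPHERE_R * SPHERE_R
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None

def trace(origin, direction, depth=0, max_depth=3):
    """Shoot a ray from the camera; recurse along the mirror direction."""
    t = intersect_sphere(origin, direction)
    if t is None or depth > max_depth:
        return 0.1                                 # background radiance
    point = add(origin, scale(direction, t))
    normal = normalise(sub(point, SPHERE_C))
    diffuse = max(dot(normal, LIGHT_DIR), 0.0)     # simple direct lighting
    # Recursion is what makes ray tracing expensive: each bounce spawns rays.
    refl = normalise(sub(direction, scale(normal, 2.0 * dot(direction, normal))))
    return 0.7 * diffuse + 0.3 * trace(point, refl, depth + 1, max_depth)

# One primary ray through the centre of the image plane:
print(trace((0.0, 0.0, 0.0), (0.0, 0.0, -1.0)))
```

Each bounce spawns further rays, which is precisely the recursive cost that the selective methods below try to spend only where it is perceptually worthwhile.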

C. Selective Rendering

While highly complex, the human visual system does have limitations. Selective rendering algorithms allow known perceptual limitations to be translated into tolerances when rendering. One such perceptual limitation results from visual attention. While other perceptual limitations may also be exploited [34], we present here a selective rendering method that takes advantage of visual attention.

The human visual system accounts for how and where observers may focus their attention. Two major approaches broadly determine where humans direct their visual attention [35]. These processes are labelled bottom-up, which is an automatic response to the visual stimulus, and top-down, which is voluntary and focuses on the observer’s goal within an environment. The bottom-up process is primarily influenced by salient features in a scene, such as contrast, size, shape, colour, brightness, orientation, edges and motion. Models that recreate the bottom-up aspect of visual attention, termed saliency maps, have been presented in [36]; see Figure 5(d) for an example of a saliency map generated by the work in [37]. The top-down approach was demonstrated by Yarbus [38], whose experiments showed how the movements of the eye are related to the task being performed. Objects that are related to tasks can be identified at the modelling stage [19]. An example of a task map can be seen in Figure 5: a fire-safety simulation in which the user plays the role of a fire-safety officer and is asked to investigate whether there is fire safety equipment within a virtual environment. The task objects are identified at the modelling stage, see Figure 5(a). At run time, if any of these objects are detected, they are highlighted and a task map is generated, see Figure 5(b) and Figure 5(c).

Fig. 5. An example of task objects, task maps, importance maps and a saliency map for the Corridor Scene, a virtual environment in which the user plays the role of a fire safety officer [33]: (a) task objects; (b) task map; (c) task map with gradient; (d) saliency map; (e) importance map; (f) rendered image.

Fig. 6. The selective rendering pipeline.

Figure 6 illustrates a straightforward selective rendering method that focuses rendering computation only on aspects of the scene being attended to. The method may be viewed as a pipeline. For each frame computed, an initial pre-selective rendering stage generates a preview of the scene using fast graphics hardware to create a rapid image estimate. At this stage, the objects related to a task are also identified. In the selective guidance stage, the saliency map is computed and combined with the task map, in this case with equal weighting, into what is termed an importance map [33]. This stage can also be computed on fast graphics hardware, see [37]. Finally, the selective rendering stage uses the selective guidance to trace rays: more rays are traced in the areas of the image where the importance map is brighter. The rays can also be prioritised in order to meet time constraints [39]. This work has been validated with user studies [33], and has been shown to compute images close to an order of magnitude faster than traditional ray tracing, without any perceived loss in quality [39]. An alternative to using graphics hardware for the pre-selective rendering and selective guidance stages is to use progressive rendering methods based on ray tracing [39]. The granularity of the process may be made even finer by employing component-based rendering [40].
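A minimal sketch of the selective guidance step follows: a saliency map and a task map are combined with equal weight into an importance map, and per-pixel ray budgets are then spent in proportion to importance. The array names, sample budget and toy maps are invented for illustration.

```python
# Sketch: equal-weight combination of saliency and task maps into an
# importance map, then per-pixel ray budgets proportional to importance.
# All names and values here are illustrative, not from the paper.
import numpy as np

def importance_map(saliency: np.ndarray, task: np.ndarray) -> np.ndarray:
    """Combine the two guidance maps with equal weighting (as in [33])."""
    combined = 0.5 * saliency + 0.5 * task
    return combined / max(combined.max(), 1e-8)   # brighter = more important

def rays_per_pixel(importance: np.ndarray,
                   min_rays: int = 1, max_rays: int = 16) -> np.ndarray:
    """Map importance in [0,1] to a per-pixel ray count: attended regions
    get up to max_rays; the rest falls back to a cheap minimum."""
    return (min_rays + importance * (max_rays - min_rays)).astype(int)

# Toy 4x4 frame: one salient corner plus one task object in the centre.
saliency = np.zeros((4, 4)); saliency[0, 0] = 1.0
task = np.zeros((4, 4)); task[2, 2] = 1.0
print(rays_per_pixel(importance_map(saliency, task)))
```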

D. Interactive Ray Tracing

As available computation has increased, ray-tracing methods have started to become interactive. Initially, researchers took advantage of parallelism on shared memory supercomputers to achieve interactive frame rates [42]. Distributed parallelism has also been used to achieve interactive frame rates, as demonstrated by Wald et al. in [43]. The same publication also introduced a technique that has since been widely used to accelerate ray tracing: grouping rays into packets to take advantage of coherence. Packets have the further advantage of making efficient use of modern SIMD instruction sets, and can be used during traversal to amortise the cost of traversing rays through packet traversal and culling. These ray packets are normally generated for groups of rays fired from a single point and therefore exhibit high degrees of coherence.
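A minimal sketch of the packet idea follows, using NumPy vectorisation as a stand-in for SIMD: one slab test answers the ray-box question for the whole packet, and a node whose box no ray in the packet hits can be culled outright. The ray layout and the test box are invented for illustration.

```python
# Sketch: testing a packet of rays together so one vectorised operation
# replaces per-ray work, mirroring how SIMD amortises traversal cost.
import numpy as np

def packet_hits_aabb(origins, dirs, box_min, box_max):
    """Slab test for a whole packet: origins/dirs are (N, 3) arrays with
    non-zero direction components. Returns a boolean mask of which rays
    hit the box; if no ray hits, traversal can cull the node entirely."""
    inv = 1.0 / dirs
    t0 = (box_min - origins) * inv
    t1 = (box_max - origins) * inv
    t_near = np.minimum(t0, t1).max(axis=1)   # entry distance per ray
    t_far = np.maximum(t0, t1).min(axis=1)    # exit distance per ray
    return (t_near <= t_far) & (t_far >= 0.0)

# A coherent packet: 4 rays from one camera point, slightly diverging.
origins = np.zeros((4, 3))
dirs = np.array([[0.01 * (i + 1), 0.01 * (i + 1), -1.0] for i in range(4)])
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
print(packet_hits_aabb(origins, dirs,
                       np.array([-1.0, -1.0, -5.0]), np.array([1.0, 1.0, -3.0])))
```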

Acceleration data structures play a fundamental role in ray tracing since they substantially decrease the number of ray-object intersection tests. Acceleration data structures also need to cater for packetisation. A number of acceleration structures that fit this criterion have been developed from traditional data structures such as kd-trees [44] and grids [45]. When scenes are dynamic, acceleration data structures need to take this into account and must be updated or recomputed regularly [46]. Wald et al. [47] present an overview of interactive ray tracing for dynamic scenes.

Interactive ray tracing has been extended to interactive global illumination using a number of methods. Wald et al. [48] combined their interactive ray tracer with Instant Radiosity [49] to compute global illumination, computing indirect diffuse reflections and soft shadows by shooting shadow rays towards virtual point light sources, and used photon mapping [50] for caustics. To reduce the computation, they used interleaved sampling, such that each pixel computed only a subset of the total virtual point light sources, and used image-based filtering to remove the resulting structured noise. They achieved interactive frame rates on a local PC cluster. Dubla et al. [41] combine interactive ray tracing, based on interleaved sampling, with adaptive methods to achieve interactive frame rates for moderately complex scenes. Figure 7 shows two of their scenes, rendered at a handful of frames per second on an eight-core desktop PC. Due to its adaptive nature, this method could be extended with the visual attention methods discussed in the previous section in a straightforward manner.
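A toy sketch of interleaved sampling over virtual point lights (VPLs) follows: each pixel shades against only one of K disjoint VPL subsets, selected by its position in a repeating tile, so per-pixel cost drops by a factor of K and image-space filtering later blends the subsets. The tile size, VPL list and names are invented for illustration; see [48] for the actual scheme.

```python
# Sketch of interleaved sampling: neighbouring pixels use different,
# disjoint VPL subsets, cutting per-pixel shading cost by a factor of K.
K = 4                                                      # 2x2 pixel tile
vpls = [{"id": i, "intensity": 1.0} for i in range(64)]    # toy VPL list
subsets = [vpls[k::K] for k in range(K)]                   # disjoint subsets

def shade(x: int, y: int) -> float:
    """Each pixel sums contributions from its subset only, i.e. 1/K of
    the VPLs; the tile position decides which subset it gets."""
    k = (y % 2) * 2 + (x % 2)
    return sum(v["intensity"] for v in subsets[k]) / len(vpls)

# Image-space filtering would then remove the structured noise that this
# interleaving introduces across neighbouring pixels.
print([shade(x, 0) for x in range(4)])
```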

E. Achieving Multi-Modal Realism

Fig. 7. Adaptive interleaved sampling scenes rendered with global illumination at interactive rates [41].

In order to achieve interactive performance for all the senses, the other senses could benefit from similar methods. Methods to exploit the perceptual limitations of each of the senses will need to be identified. Moreover, the combination of all the senses means that a large number of cross-modal effects could be exploited to improve performance. To achieve this, selective guidance methods would need to be developed for each of the senses. Ideally, each of the simulations used to model the effects required for each of the senses could be modulated using Equation 1. Resource allocation algorithms such as [39], which typically use progressive methods, would be suitable for such a computation.

Initial cross-modal effects have begun to be explored in the graphics community. Mastoropoulou et al. [51] demonstrated how to leverage acoustic effects to improve the performance of the graphics within a virtual environment. The authors extended the selective rendering framework presented previously, so that sound-emitting objects are assigned an importance, similar to task objects. Whenever a sound is emitted from an object that is within view, the sound-emitting object is rendered in higher quality than the rest of the image. Mastoropoulou et al. validated their framework with a psychophysical experiment in which subjects failed to notice the drop in quality while a phone was ringing, but noticed it once the ringing had stopped, see Figure 8. Similar methods have been used to account for the effects of motion on a motion platform [52]. In a further publication, Mastoropoulou et al. [53] showed how sound effects could be used to reduce the number of frames needing to be computed each second.

Fig. 8. An example of taking advantage of cross-modal effects [51]. When the sound-emitting object (in this case the telephone) is active, the area around the telephone is rendered in higher quality, as visualised in the quality map.
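As a minimal sketch of the scheme in [51] just described, the importance map of the earlier pipeline can be boosted over a sound-emitting object’s screen region only while its sound is playing. The array names, boost value and toy frame below are invented for illustration.

```python
# Sketch: a cross-modal audio term added to the importance map. While a
# sound-emitting object (e.g. a ringing phone) is audible, its screen
# region is boosted like a task object; the boost is dropped when the
# sound stops. All names and values are illustrative.
import numpy as np

def cross_modal_importance(importance: np.ndarray,
                           emitter_mask: np.ndarray,
                           sound_playing: bool,
                           boost: float = 1.0) -> np.ndarray:
    """Boost the emitter's pixels only while its sound plays; per [51],
    subjects failed to notice reduced quality elsewhere during the sound,
    but noticed it once the sound had stopped."""
    result = importance + (boost * emitter_mask if sound_playing else 0.0)
    return np.clip(result, 0.0, 1.0)

# Toy 4x4 frame with the phone occupying one pixel:
base = np.full((4, 4), 0.2)
phone = np.zeros((4, 4)); phone[1, 3] = 1.0
print(cross_modal_importance(base, phone, sound_playing=True))
```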

IV. CONCLUSION

Serious games require a high level of realism to ensure that user training and education in the virtual environment is equivalent to the user being “there” in the real scene being portrayed. Furthermore, serious games need to accurately simulate all five senses. Computing physically accurate multi-sensory virtual environments at interactive rates is beyond the capabilities of modern desktop computers. Fortunately, serious games are developed for human users, and a human simply cannot attend to all of his/her senses with equally high weighting at any one time. Rather, we perceive sensory input with more or less attention depending on what task we are performing and any preconditioning we have for the environment. Knowledge of how a user may be attending to an environment at a moment in time can be exploited to compute a perceptually accurate virtual environment. This is achieved by computing in high fidelity those parts of the scene being attended to. The remainder of the scene can be delivered in much lower quality, with significantly less computational effort, without the user being aware of this quality difference. Progressive computational methods can be used to selectively deliver the multi-sensory stimuli according to the weights of the perception equation. The key is to achieve at least the threshold value for each sense while maintaining interactive rates.

The challenge now is to determine the perceptual importance of each sense, as a function of time, for the serious game being developed and the level of user experience. These weights do not have to be determined precisely; rather, we need to ensure that the level of fidelity we deliver for each sense is above the perceptual high-fidelity threshold for that sense, including any cross-modal effects. This work will now be undertaken through detailed empirical studies and modern brain imaging techniques such as fMRI, for example [54].

ACKNOWLEDGMENT

The authors would like to thank the organisers of IEEE VS-Games’09 for inviting us to write this paper. We would also like to thank the members of the Visualisation Group for providing a stimulating working environment, David Howard of the University of York for suggesting the “distraction factor” in the perception equation, Piotr Dubla for the use of his images, and the University of Bristol for the use of the Kalabsha image, which was created while Alan was at the University of Bristol.

REFERENCES

[1] H. Rheingold, Virtual Reality. Simon & Schuster, 1992.

[2] D. Washburn, L. Jones, R. Satya, C. Bowers, and A. Cortes, “Olfactory use in virtual environment training,” Modelling and Simulation Magazine, vol. 2, no. 3, 2003.

[3] W. Barfield and E. Danas, “Comments on the use of olfactory displays for virtual environments,” Presence, vol. 5, no. 1, pp. 109–121, 1995.

[4] C. Y., “Olfactory display: development and application in virtual reality therapy,” in ICAT’06: Proceedings of the 16th International Conference on Artificial Reality and Telexistence. IEEE Press, 2006.


[5] H. Q. Dinh, N. Walker, S. C., A. Kobayashi, and L. Hodges, “Evaluating the importance of multi-sensory input on memory and the sense of presence in virtual environments,” in IEEE Virtual Reality 1999. IEEE Press, 1999, pp. 222–228.

[6] M. Zybura and A. Eskeland, “Olfaction for virtual reality,” Quarter Project Industrial Engineering 543, 1999.

[7] J. Pair, B. Allen, M. Dautricourt, A. Treskunov, M. Liewer, K. Graap, G. Reger, and A. Rizzo, “A virtual reality exposure therapy application for Iraq war post traumatic stress disorder,” in IEEE Virtual Reality 2006. IEEE Press, 2006.

[8] T. Nakamoto, S. Otaguro, M. Kinoshita, M. Nagahama, K. Ohinishi, and T. Ishida, “Cooking up an interactive olfactory game display,” IEEE Computer Graphics and Applications, 2008.

[9] A. Chalmers and A. Ferko, “Levels of realism: From virtual reality to real virtuality,” in SCCG’08. ACM SIGGRAPH, 2008, pp. 27–33.

[10] K. Mania, T. Troscianko, R. Hawkes, and A. Chalmers, “Fidelity metrics for virtual environment simulations based on spatial memory awareness states,” Presence, Teleoperators and Virtual Environments, pp. 296–310, 2003.

[11] I. Roussos and A. Chalmers, “High fidelity lighting of Knossos,” Proceedings of VAST2003, pp. 47–56, 2003.

[12] A. Chalmers, K. Debattista, G. Mastoropoulou, and L. dos Santos, “There-Reality: Selective rendering in high fidelity virtual environments,” The International Journal of Virtual Reality, vol. 6, no. 1, pp. 1–10, 2007.

[13] H. McGurk and J. MacDonald, “Hearing lips and seeing voices,” Nature, vol. 264, pp. 746–748, 1976.

[14] B. Ramic, A. Chalmers, J. Hasic, and S. Rizvic, “Selective rendering in a multimodal environment: Scent and graphics,” SCCG 2007: Spring Conference on Computer Graphics, pp. 189–192, 2007.

[15] S. Rimell, D. Howard, A. Tyrrell, P. Kirk, and A. Hunt, “Cymatic: Restoring the physical manifestation of digital sound using haptic interfaces to control a new computer based musical instrument,” in ICMC02: International Computer Music Conference, 2002.

[16] G. Mastoropoulou, K. Debattista, A. Chalmers, and T. Troscianko, “Auditory bias of visual attention for perceptually guided selective rendering of animations,” Graphite 2005, pp. 363–369, 2005.

[17] G. Ellis and A. Chalmers, “The effect of translational ego-motion on the perception of high fidelity animations,” SCCG 2006, 2006.

[18] A. Mack and I. Rock, Inattentional Blindness. Massachusetts Institute of Technology Press, 1998.

[19] K. Cater, A. Chalmers, and G. Ward, “Exploiting visual tasks for selective rendering,” in Eurographics Symposium on Rendering. Eurographics, 2003, pp. 270–280.

[20] S. Marsland, U. Nehmzow, and J. Shapiro, “Novelty detection on a mobile robot using habituation,” From Animals to Animats: The 6th International Conference on Simulation of Adaptive Behaviour, 2000.

[21] D. Nunez and E. H. Blake, “Conceptual priming as a determinant of presence in virtual environments,” in Afrigraph 2003. ACM SIGGRAPH, January 2003, pp. 101–108.

[22] R. Storms, “Auditory-visual cross-modal perception phenomena,” Ph.D. dissertation, Naval Postgraduate School, Monterey, California, 1998.

[23] S. Winkler and C. Faller, “Audiovisual quality evaluation of low-bitrate video,” in SPIE/IS&T Human Vision and Electronic Imaging, vol. 5666. SPIE, 2005, pp. 139–148.

[24] G. Mastoropoulou and A. Chalmers, “The effect of music on the perception of display rate and duration of animated sequences: An experimental study,” in TPCG’04: Theory and Practice of Computer Graphics 2004. IEEE Press, 2004, pp. 128–134.

[25] Y. Yu, P. Debevec, J. Malik, and T. Hawkins, “Inverse global illumination: Recovering reflectance models of real scenes from photographs,” ACM SIGGRAPH 1999, pp. 215–227, 1999.

[26] P. Longhurst, P. Ledda, and A. Chalmers, “Psychophysically based artistic techniques for increased perceived realism of virtual environments,” in AFRIGRAPH 2003. ACM SIGGRAPH, February 2003, pp. 123–131.

[27] J. T. Kajiya, “The rendering equation,” in SIGGRAPH ’86: Proceedings of the 13th annual conference on Computer graphics and interactive techniques. New York, NY, USA: ACM Press, 1986, pp. 143–150.

[28] M. Pan, R. Wang, X. Liu, Q. Peng, and H. Bao, “Precomputed radiance transfer field for rendering interreflections in dynamic scenes,” Computer Graphics Forum, vol. 26, no. 3, 2007.

[29] C. Dachsbacher, M. Stamminger, G. Drettakis, and F. Durand, “Implicit visibility and antiradiance for interactive global illumination,” in SIGGRAPH ’07: ACM SIGGRAPH 2007 papers. New York, NY, USA: ACM, 2007, p. 61.

[30] T. Ritschel, T. Grosch, M. H. Kim, H.-P. Seidel, C. Dachsbacher, and J. Kautz, “Imperfect shadow maps for efficient computation of indirect illumination,” ACM Trans. Graph., vol. 27, no. 5, pp. 1–8, 2008.

[31] T. Whitted, “An improved illumination model for shaded display,” in SIGGRAPH ’80. ACM Press, 1980, p. 14.

[32] G. Ward and R. Shakespeare, Rendering with Radiance: The Art and Science of Lighting Visualization. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1998.

[33] V. Sundstedt, K. Debattista, P. Longhurst, A. Chalmers, and T. Troscianko, “Visual attention for efficient high-fidelity graphics,” in Spring Conference on Computer Graphics (SCCG 2005), May 2005. [Online]. Available: http://www.cs.bris.ac.uk/Publications/Papers/2000222.pdf


[34] D. Bartz, D. Cunningham, J. Fischer, and C. Wallraven, “The role of perception for computer graphics,” in Eurographics 2008, State of the Art Reports, 2008.

[35] W. James, The Principles of Psychology: Volume 1. Holt, New York, 1890.

[36] L. Itti, C. Koch, and E. Niebur, “A model of saliency-based visual attention for rapid scene analysis,” in Pattern Analysis and Machine Intelligence, vol. 20, 1998, pp. 1254–1259.

[37] P. Longhurst, K. Debattista, and A. Chalmers, “A GPU based saliency map for high-fidelity selective rendering,” in AFRIGRAPH 2006: 4th International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa. ACM SIGGRAPH, January 2006, pp. 21–29.

[38] A. Yarbus, “Eye movements during perception of complex objects,” in Eye Movements and Vision, 1967, pp. 171–196.

[39] K. Debattista, “Selective rendering for high-fidelity graphics,” Ph.D. dissertation, University of Bristol, Bristol, UK, 2006.

[40] K. Debattista, V. Sundstedt, L. P. Santos, and A. Chalmers, “Selective component-based rendering,” in GRAPHITE, 3rd International Conference on Computer Graphics and Interactive Techniques in Australasia and South East Asia. ACM Press, November 2005, pp. 13–22. [Online]. Available: http://www.cs.bris.ac.uk/Publications/Papers/2000422.pdf

[41] P. Dubla, K. Debattista, and A. Chalmers, “Adaptive interleaved sampling,” The Digital Laboratory, WMG, University of Warwick, Coventry, UK, Tech. Rep. TR0908a, 2008.

[42] S. Parker, W. Martin, P.-P. J. Sloan, P. Shirley, B. Smits, and C. Hansen, “Interactive ray tracing,” in 1999 Symposium on Interactive 3D Computer Graphics, 1999, pp. 119–126.

[43] I. Wald, P. Slusallek, C. Benthin, and M. Wagner, “Interactive rendering with coherent ray tracing,” in EUROGRAPHICS 2001, Manchester, United Kingdom, September 2001, pp. 153–164.

[44] A. Reshetov, A. Soupikov, and J. Hurley, “Multi-level ray tracing algorithm,” ACM Trans. Graph., vol. 24, no. 3, pp. 1176–1185, 2005.

[45] I. Wald, T. Ize, A. Kensler, A. Knoll, and S. G. Parker, “Ray tracing animated scenes using coherent grid traversal,” ACM Transactions on Graphics, pp. 485–493, 2006 (Proceedings of ACM SIGGRAPH 2006).

[46] I. Wald, S. Boulos, and P. Shirley, “Ray tracing deformable scenes using dynamic bounding volume hierarchies,” ACM Trans. Graph., vol. 26, no. 1, p. 6, 2007.

[47] I. Wald, W. R. Mark, J. Gunther, S. Boulos, T. Ize, W. Hunt, S. G. Parker, and P. Shirley, “State of the art in ray tracing animated scenes,” in Eurographics 2007 State of the Art Reports.

[48] I. Wald, T. Kollig, C. Benthin, A. Keller, and P. Slusallek, “Interactive global illumination using fast ray tracing,” in 13th EUROGRAPHICS Workshop on Rendering, Pisa, Italy, 2002.

[49] A. Keller, “Instant radiosity,” in SIGGRAPH ’97: Proceedings of the 24th annual conference on Computer graphics and interactive techniques. New York, NY, USA: ACM Press/Addison-Wesley Publishing Co., 1997, pp. 49–56.

[50] H. W. Jensen, Realistic Image Synthesis Using Photon Mapping. AK Peters, 2001.

[51] G. Mastoropoulou, K. Debattista, A. Chalmers, and T. Troscianko, “Auditory bias of visual attention for perceptually-guided selective rendering of animations,” in GRAPHITE 2005, sponsored by ACM SIGGRAPH, Dunedin, New Zealand. ACM Press, December 2005.

[52] G. Ellis, A. Chalmers, and K. Debattista, “The effect of rotational ego-motion on the perception of high fidelity animations,” in APGV 2006. APGV, July 2006. [Online]. Available: http://www.cs.bris.ac.uk/Publications/Papers/2000530.pdf

[53] G. Mastoropoulou, K. Debattista, A. Chalmers, and T. Troscianko, “The influence of sound effects on the perceived smoothness of rendered animations,” in APGV, 2005, pp. 9–15.

[54] G. Calvert and T. Thesen, “Multisensory integration: Methodological approaches and emerging principles in the human brain,” Journal of Physiology, vol. 98, pp. 191–205, 2004.
