
New Input Modalities for Modern Game Design and Virtual Embodiment

Reinhold Scherer, Markus Pröll, Brendan Allison, & Gernot R. Müller-Putz

Institute for Knowledge Discovery, Laboratory of Brain-Computer Interfaces, Graz University of Technology

ABSTRACT

Brain-computer interface (BCI) systems are not often used as input devices for modern games, due largely to their low bandwidth. However, BCIs can become a useful input modality when the dynamics of the brain-game interaction are adapted and BCIs are combined with devices based on other physiological signals, making them more powerful and flexible. We introduce the Graz BCI Game Controller (GBGC) and describe how techniques such as context dependence, dwell timers and other intelligent software tools were implemented in a new system to control the Massive Multiplayer Online Role Playing Game World of Warcraft (WoW).

KEYWORDS: Brain-Computer Interface (BCI), Human-Computer Interaction (HCI), Game Design, Input Device, Virtual Reality, Virtual Embodiment, Massive Multiplayer Online Role Playing Game (MMORPG), World of Warcraft (WoW).

INDEX TERMS: H.5.2 [User Interfaces] Input Devices and Strategies; H.1.2 [User/Machine Systems] Human Information Processing; I.3.8 [Computer Graphics] Applications; I.5.5 [Pattern Recognition] Implementation – Interactive Systems

1 INTRODUCTION

Human face-to-face interaction involves two main communication modalities: verbal and nonverbal communication. The former relies on words to send messages; the latter does not. Nonverbal communication sends messages through body posture, gestures, facial expressions or eye gaze, and is essential in social interactions. We humans express ourselves with the whole body, consciously and unconsciously, yet in human-computer interaction (HCI) we are mostly limited to the use of our hands. A more natural and more intuitive HCI therefore needs additional input modalities.

The games industry has recently introduced a wide variety of new input devices that focus on body movements (Nintendo Wii, PlayStation Move and Microsoft Kinect). These new game controllers have been well accepted by the community because they enable a completely different type of gameplay and let users be more active while playing a computer game. Gamers in general are considered early adopters of new technologies, since they are accustomed to training to improve their skills [1].

One novel input modality for HCI, so far mostly focused on medical applications, uses electrophysiological signals. Such bioelectrical potentials, recorded from electrodes placed over specific body parts, can be used to encode messages and also to monitor the user's mental and emotional state. Applications that process this information and adapt accordingly to enhance the user's experience may have a big impact on the usability of HCI and on virtual embodiment.

Our group has extensive expertise with Brain-Computer Interfaces (BCIs) [2-7], and we only recently started exploring the use of BCIs for controlling off-the-shelf computer applications (e.g. Virtual Google Earth [8]). BCIs are communication devices that decode and interpret ongoing brain activity and hence do not require any muscle activity for sending messages. See e.g. [9] for a more detailed description of BCIs and the different brain signals that can be used for control.

So far, we have mainly focused on the non-invasive electroencephalogram (EEG), i.e., the bioelectrical brain activity recorded from electrodes placed on the scalp. EEG-based BCIs suffer from a poor signal-to-noise ratio, and hence the information transfer rates (ITRs) achieved are low compared to manual control. Moreover, longer training periods are often required to gain control. We are therefore interested in studying the dynamics of interaction with standard applications and in finding ways to overcome the low ITRs through intelligent design and user interfaces.

We selected the Massive Multiplayer Online Role Playing Game (MMORPG) World of Warcraft (WoW, Blizzard Entertainment, Inc.) as the target application. We chose this game because of its broad success, immersive design, and the many degrees of freedom it allows. Previously, we reported on a BCI that enables users to perform three distinct mental tasks [4], i.e., only three degrees of freedom, to navigate the game avatar within the virtual game environment (mental imagination of left hand, right hand and foot movements mapped to the navigation commands rotate left, rotate right and move forward, respectively) and to interact with other characters [10].

So far, we have focused on the interpretation of oscillatory EEG components for controlling WoW. We are currently extending the capabilities of our system: we have added support for evoked potential (EP) based control and are working on EEG-based detection of the user's level of fatigue and emotional state. Additionally, we have started to investigate commercial low-cost devices such as the Emotiv EPOC headset (Emotiv Systems, San Francisco, CA, USA), which can detect facial muscle activity inexpensively. Here, we present the Graz-BCI Game Controller (GBGC), an intelligent software solution that combines these different input modalities and implements strategies to bridge the gap between the low ITR of BCIs and the usually high input rates of games.

2 METHODS

2.1 Strategies to overcome low ITRs

A state-of-the-art, complex MMORPG like WoW requires a lot of input from the player. Below, we describe some of the strategies we implemented to increase the effective ITR.

2.1.1 Macros

Instead of mapping a single key to an input, fast and easily configurable macros, consisting of multiple key presses, clicks, mouse movements and timeouts, allow more flexible commands. BCI response times cannot come close to those of standard input devices. This drawback can be partly overcome by allowing one brain signal to perform several tasks, as sketched below.
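To illustrate, here is a minimal sketch of such a macro mechanism in Python. The GBGC's actual configuration syntax and input-injection backend are not detailed in this paper, so the step format and helper names (run_macro, actions) are illustrative assumptions:

```python
# Minimal sketch of a GBGC-style macro: an ordered list of input steps
# with timeouts. The step format and action names are illustrative
# assumptions, not the GBGC's actual configuration syntax.
import time
from typing import List, Tuple

Macro = List[Tuple[str, object]]  # one macro = ordered (action, argument) steps

def run_macro(macro: Macro, actions: dict) -> None:
    """Execute a macro step by step; 'wait' inserts a timeout."""
    for op, arg in macro:
        if op == "wait":
            time.sleep(arg)   # timeout between inputs
        else:
            actions[op](arg)  # inject a key press, click or mouse move

# Hypothetical bindings to an input-injection backend (here: stubs).
actions = {
    "key":   lambda k: print(f"press {k}"),
    "click": lambda b: print(f"{b} click"),
    "move":  lambda xy: print(f"move mouse to {xy}"),
}

# Example macro: step forward, pause, then right-click to interact.
interact = [("key", "up"), ("wait", 0.5), ("click", "right")]
run_macro(interact, actions)
```

A single detected brain signal can thus trigger a whole input sequence, trading command granularity for throughput.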

Author address: Krenngasse 37/IV, 8010 Graz, Austria; e-mail: [email protected]



Figure 1. Principle of the Graz-BCI Game Controller (GBGC).

2.1.2 Context Awareness

Performing different actions depending on the context can free up inputs for other controls. In WoW, we perform several different actions depending on the point of interest (POI). A player who wants to interact with a quest-giving non-player character (NPC), gather a resource, loot an enemy corpse or collect a quest item just has to move the avatar close enough to the POI; the system then performs the appropriate action, as sketched below.
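A minimal sketch of such context-dependent dispatch; the POI type names below are assumptions for illustration:

```python
# Sketch of context-dependent mapping: the same "interact" brain signal
# triggers a different action depending on the nearest point of
# interest (POI). POI type names are illustrative assumptions.
def interact(poi_type: str) -> str:
    context_actions = {
        "quest_npc":  "open quest dialog",
        "resource":   "gather resource",
        "corpse":     "loot corpse",
        "quest_item": "collect item",
    }
    # Fall back to a neutral action when no POI is in range.
    return context_actions.get(poi_type, "do nothing")

print(interact("quest_npc"))  # -> open quest dialog
print(interact("corpse"))     # -> loot corpse
```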

2.1.3 Dwell Timers

Dwell timers can introduce a binary degree of freedom, which can be used for confirmation. In WoW, we use a dwell timer to confirm critical actions such as accepting a new quest from an NPC or attacking a nearby enemy; a sketch follows below.
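A minimal sketch of dwell-based confirmation; the dwell duration and polling interval are illustrative values, not the GBGC's actual settings:

```python
# Sketch of a dwell timer: a critical action is confirmed only if the
# triggering signal stays active for a full dwell period. Durations
# below are illustrative, not the GBGC's actual settings.
import time

def confirm_by_dwell(signal_active, dwell_s: float = 2.0,
                     poll_s: float = 0.05) -> bool:
    """Return True only if signal_active() holds for dwell_s seconds."""
    start = time.monotonic()
    while time.monotonic() - start < dwell_s:
        if not signal_active():
            return False  # signal dropped: abort, do nothing
        time.sleep(poll_s)
    return True           # held long enough: confirm the action

# Example with a dummy, always-active signal.
if confirm_by_dwell(lambda: True, dwell_s=0.2):
    print("accept quest")
```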

2.2 Basic principle of the GBGC

The diagram in Figure 1 illustrates the principle of the GBGC. Devices such as the Graz-BCI or the Emotiv EPOC headset send information on the detected mental task, the current mental state or changes in facial expression via TCP/IP to the GBGC. The GBGC converts the individual input signals into corresponding keyboard and mouse control sequences for the application. We designed the GBGC to load a set of macros from a plain-text configuration file, which gives us the flexibility to create and easily adjust configurations for different game titles. The GBGC combines multiple keyboard key presses, mouse clicks, mouse movements and timeouts into a variety of relevant commands, which can be used to control commercial game titles that support standard mouse and keyboard input. A sketch of this input path is shown below.
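A minimal sketch of this pipeline, assuming a line-based wire format (one event name per line over TCP) and an "event=key1,key2,..." config syntax; neither is the published GBGC protocol, and key injection is stubbed with print:

```python
# Sketch of the GBGC input path: events arrive over TCP/IP, are looked
# up in a macro table loaded from a plain-text config file, and are
# turned into input sequences. Wire format and file syntax are assumed.
import socket

def load_macros(path: str) -> dict:
    """Parse 'event=key1,key2,...' lines from a plain-text config file."""
    macros = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                event, seq = line.split("=", 1)
                macros[event.strip()] = seq.strip().split(",")
    return macros

def serve(macros: dict, host: str = "127.0.0.1", port: int = 5550) -> None:
    """Accept one device connection and map incoming events to macros."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            for raw in conn.makefile():
                event = raw.strip()               # e.g. "left_hand_imagery"
                for key in macros.get(event, []):
                    print(f"inject key: {key}")   # stand-in for real injection
```

Keeping the event-to-macro mapping in an external text file is what lets the same controller drive different game titles without recompilation.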

Figure 2. Emotiv EPOC control panel for facial expression detection improving virtual embodiment. Detected facial expressions trigger matching character animations. In this example, we mapped closed eyes to a sleeping animation and smiling to a laughing animation of the game avatar.

2.3 Facial expressions for Virtual Embodiment

We extended the GBGC to work with the Emotiv EPOC headset, which also features a detection suite for facial expressions (Figure 2) and a pair of gyro sensors. Instead of typing chat commands, the player's facial expressions are used to trigger character animations that reflect the user's emotional state, as sketched below.
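A minimal sketch of this mapping, assuming expression events arrive as plain event strings from the headset's detection suite; /sleep and /laugh are standard WoW emote chat commands:

```python
# Sketch of expression-to-animation mapping: a detected facial
# expression (assumed here to arrive as a plain event string) is
# translated into a WoW emote chat command, which triggers the
# matching avatar animation.
expression_to_emote = {
    "eyes_closed": "/sleep",   # closed eyes -> sleeping animation
    "smile":       "/laugh",   # smiling -> laughing animation
}

def on_expression(event: str) -> None:
    emote = expression_to_emote.get(event)
    if emote:
        # In the real system the GBGC would type this into the chat box;
        # here we only print the command that would be injected.
        print(f"send chat command: {emote}")

on_expression("smile")   # -> send chat command: /laugh
```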

3 CONCLUSION & FUTURE WORK

For most commercial game applications, BCIs cannot offer the smooth, accurate, high-bandwidth control of keyboards, mice and conventional game controllers. However, there are many underappreciated ways to make BCIs more useful. By combining different kinds of BCIs with each other and with systems based on other physiological signals, the range of input signals can be increased considerably. Context-dependent mapping makes it possible for one signal to perform different tasks, depending on what works best in each context. Future work will explore new input signals and combinations, improved use of context, and new game situations and environments.

4 ACKNOWLEDGEMENTS

This work was partly supported by the ICT Collaborative Project BrainAble (247447). The authors thank Andreas Schabus and Microsoft Austria for providing the Emotiv EPOC headset.

REFERENCES

[1] A. Nijholt, B. Reuderink, and D. Plass-Oude Bos. Turning shortcomings into challenges: Brain-computer interfaces for games. Intelligent Technologies for Interactive Entertainment, 153-168, May 2009.

[2] G.R. Müller-Putz, R. Scherer, G. Pfurtscheller, and R. Rupp. EEG-based neuroprosthesis control: a step towards clinical practice. Neurosci. Lett., 382(1-2):169-174, July 2005.

[3] G.R. Müller-Putz, and G. Pfurtscheller. Control of an Electrical Prosthesis With an SSVEP-Based BCI. IEEE Trans. Biomed. Eng., 55:361-364, December 2007.

[4] R. Scherer, F. Lee, A. Schlögl, R. Leeb, H. Bischof, and G. Pfurtscheller. Toward Self-Paced Brain-Computer Communication: Navigation Through Virtual Worlds. IEEE Trans. Biomed. Eng., 55:675-682, February 2008.

[5] T. Solis-Escalante, G.R. Müller-Putz, C. Brunner, V. Kaiser, and G. Pfurtscheller. Analysis of sensorimotor rhythms for the implementation of a brain switch for healthy subjects. Biomed. Signal Process. Control, 5(1):15-20, January 2010.

[6] R. Leeb, F.Y. Lee, C. Keinrath, R. Scherer, H. Bischof, and G. Pfurtscheller. Brain-Computer Communication: Motivation, aim and impact of exploring a virtual apartment. IEEE Trans. Neural Syst. Rehabil. Eng., 15(4):473-482, December 2007.

[7] C. Neuper, R. Scherer, M. Reiner, and G. Pfurtscheller. Imagery of motor actions: differential effects of kinesthetic and visual-motor mode of imagery in single-trial EEG. Cognitive Brain Research, 25(3):668-677, December 2005.

[8] R. Scherer, A. Schlögl, F. Lee, H. Bischof, J. Jansa, and G. Pfurtscheller. The self-paced Graz brain-computer interface: methods and applications. Computational Intelligence and Neuroscience, January 2007.

[9] J.R. Wolpaw, N. Birbaumer, D.J. McFarland, G. Pfurtscheller, and T.M. Vaughan. Brain-computer interfaces for communication and control. Clinical Neurophysiology, 113:767-791, June 2002.

[10] R. Scherer, E. Friedrich, B.Z. Allison, M. Pröll, M. Chung, W. Cheung, R.P. Rao, and C. Neuper. Non-invasive brain-computer interfaces: Enhanced gaming and robotic control. Lecture Notes in Computer Science, 6691:362-369, 2011.
