Developing Gameplay Mechanics for Head-Tracked Stereoscopy: A Feasibility Study



UNIVERSITY CAMPUS SUFFOLK

Developing Gameplay Mechanics for Head-Tracked Stereoscopy: A Feasibility Study

B.Sc. (Hons) Computer Games Programming

Roger Creyke (s110431)

14th May 2010

This text investigates the feasibility and limitations of developing gameplay mechanics for modern three dimensional game environments, specifically with tracked stereoscopic imaging via the use of a virtual reality head mounted display.


    Acknowledgements

    I would very much like to thank:

    My wife Rachel for her support, patience, love and everything in between.

My parents for their love, support, and tolerance of my academic procrastination.

My programming tutor Dr Andrew Revitt for his enthusiasm and encouragement.

My dissertation tutors Chris Janes and Robert Kurta for their guidance.

My colleagues Alex Webb, Matt David and Daniel Stamp for their help with proof reading.


    Contents

1 Introduction
1.1 Central Problematic
1.2 Aims and Objectives
2 Literature Review
2.1 Stereoscopy
2.2 Tracking
2.3 Head Mounted Displays
3 Research
3.1 Principal Research Objectives
3.2 Research Method
3.2.1 Installation
3.2.2 Interfacing
3.2.3 Testbed
3.2.4 Gameplay Mechanics
4 Conclusion
4.1 Findings
4.2 Summary & Recommendations
4.3 Further Development
5 Bibliography
6 Table of Figures
7 Glossary of Terms
8 Table of Software
9 Appendices
9.1 Project Plan
9.2 Headset Interface Code
9.2.1 Headset.cs
9.2.2 HeadsetEye.cs
9.2.3 HeadsetMeasurement.cs


1 Introduction

Virtual reality head mounted displays are head mounted devices which usually project a three dimensional image to the user via stereoscopic visual displays and monitor the user's movements via tracking devices. A stereoscopic visual display is a device which displays two images, usually taken from offset angles, to give the illusion of depth. A tracking device is a piece of hardware which monitors the movement of its user.

    The application of virtual reality head mounted displays (HMDs) within the field of interactive entertainment is

    not a new proposition, but while such systems have been trialled commercially for over 20 years, they have yet

    to be adopted by the mainstream consumer market.

    This paper documents an investigation into the history of head mounted stereoscopy and attempts to clarify

    why the games industry has been hesitant to embrace the technology by understanding the limitations of, and

difficulties with, developing tracked stereoscopic gameplay mechanics. Specifically, the development processes required to create a modern interactive application are investigated for such systems. Finally, a number of new potential gameplay mechanics designed for stereoscopic HMDs are presented.

    This paper outlines a number of methods available for implementing head mounted virtual reality. Firstly, the

    science of stereoscopy is investigated with the intention of understanding how users perceive depth, and what

    methods can be used to create an artificial sense of perspective. Secondly, the science of tracking is explored

    in detail, with the intention of understanding the optimum method available for delivering a convincing yet

    practical interactive experience.

    Research into the practical creation of two gameplay mechanics is discussed in detail along with both initial

    and retrospective observations of notable difficulties during development. Development of a custom headset

    interface library is also documented. Source code for the library is included, fully annotated and available for

    use in similar projects.

    Finally, a conclusion is presented, with recommendations for those who may be considering pursuing

    stereoscopic tracked gameplay development. This outlines limitations of the concept, workarounds for

    potential issues and an overall assessment of the feasibility of such an undertaking.

1.1 Central Problematic

To understand the practicalities, difficulties and limitations of developing gameplay mechanics for head-tracked stereoscopic vision, with the intention of deciding whether this technology is widely suitable for the field of interactive entertainment.

1.2 Aims and Objectives

This research project was chosen with the intention of achieving the following goals:

To understand the science behind stereoscopy and head tracking.
To develop a custom process for prototyping gameplay mechanics with an HMD.
To apply knowledge garnered from the undergraduate degree course.
To satisfy a personal interest in the subject matter.


2 Literature Review

2.1 Stereoscopy

    A visual display channel is a presentation of visual information displayed for one

    eye. Two visual display channels with separate views are required to achieve

    stereopsis.

    (Sherman, 2003, p. 124)

    Stereoscopy is a broad term which defines any technique capable of recording three-dimensional visual

    information of an object or scene from dual observation points (see Figure 1). The ability to judge distance

    from the point of the observer through the use of this binocular vision is known as stereopsis.

    Kim argues that humans are 3D-orientated creatures, and operate daily using the depth perception capability

    in the immediate space around them. Consequently, providing depth information is important in realising 3D

    and natural interaction in the virtual environment (Kim, 2005, p. 81).

    Figure 1 - Stereoscopy

Human beings exist, or at least perceive themselves to exist, in a three dimensional environment and can perceive depth

    relative to their point of observation by using natural stereoscopic vision. Depth or distance can be perceived

    in three dimensions via triangulation. If two points of observation can be identified, the translation of the

    observation points relative to each other and the size of the target object can be determined.

    Stereography is not a new concept. Images designed to be viewed in 3D known as stereographs were invented

    in the 1840s. These were simply two still images taken by slightly offset cameras. In the stereograph below

(see Figure 2), if we compare the apparent gap between the left hand door coving and the forehead of Edith

Roosevelt on each side of the image, we can see the angle is subtly different. In 1850 Sir William Brewster

    invented the lenticular stereoscope, a simple binocular style device designed to view a stereoscopic image by

    forcing each eye to focus on one of two images. Sherman suggests that the stereoscopic image depth cue is

    reliant on this parallax, which is the apparent displacement of objects viewed from different locations

    (Sherman, 2003, p. 119). The distance the observer must move to perceive the depth of a scene is relative to

    both the size of the target and the distance between it and the observer. For example, if a human observed

    the sun from the surface of Earth the parallax required to perceive depth would vastly exceed that of the same

    human observing a football in their back garden.
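As a rough illustration (a standard small-angle relation, added here for context rather than drawn from the cited sources): the parallax angle p produced by moving the observation point a baseline distance b while viewing a target at distance d is approximately

p ≈ b / d radians,

so the baseline needed to produce a perceptible angular displacement grows in direct proportion to the distance of the target, which is why the sun example requires a vastly larger movement than the football.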


    Figure 2 - Early 20th Century Stereograph of Edith Roosevelt (White House Museum, 1904)

    Traditionally in games and film, three dimensional scenes are outputted to a single two dimensional screen

    and we observe the scene in two dimensions. Stereoscopic scenes are observed from two different points to

    give the illusion of depth, the third dimension.

[Diagram: an observer views a screen through polarising lenses; left-eye and right-eye projectors cast superimposed, oppositely polarised images onto the screen.]

Figure 3 - Cinema-Style Light Polarisation Filtering

    One example implementation of this system is in modern projected three dimensional films which exploit the

    polarisation of light. Viewers wear a special pair of glasses in which the lens on each eye contains a polarising

filter. The projected image comprises two images superimposed on each other with light polarised in opposite

directions, and the opposing polarisation is blocked in each lens (see Figure 3). This allows each eye to view a different

    image even though they are both focused on the same location.

[Diagram: two light sources emit unpolarised light through vertical polarisers; one beam then meets a vertically aligned lens and passes through, while the other meets a horizontally aligned lens and is blocked.]

Figure 4 - Polarisation Example


Iizuka argues polarised light is used due to the insensitivity of eyes to the polarisation of light waves. Paths can be blocked selectively or, using a polariser, transmitted. Light passes through the polariser with minimum attenuation if the polariser's transmission axis is parallel to the polarisation direction of the incident light. However, if the axis is perpendicular to the incident light direction, the polariser absorbs the light instead of transmitting it (Iizuka, 2004, p. 3). This is demonstrated in Figure 4, where we have two light sources, each emitting unpolarised light. Light from both sources is vertically polarised. Light from the right hand light source then hits a lens with a vertically polarised transmission axis and therefore passes through the lens. Because light from the left hand source hits a lens which is polarised horizontally, it fails to pass through.
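This behaviour is summarised by Malus's law (a standard optics result, added here for context): the transmitted intensity is I = I₀·cos²θ, where θ is the angle between the incident light's polarisation direction and the polariser's transmission axis. At θ = 0° the light passes with minimum attenuation, and at θ = 90° it is absorbed, matching the two lenses in Figure 4.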

Craig et al note that the most common polarisation scheme is where the images intended for one eye are

    polarised horizontally, and the images intended for the other eye are polarised vertically (Craig, 2009, p. 48).

    Prior to polarised lenses, a similar system called anaglyph 3D was used to achieve the same effect. This

    involved the use of two coloured lenses, usually green and red, to obscure certain visual data from each eye.

    The technique was arguably cruder and much less accurate, causing the image palette to appear noticeably

    desaturated (Wilder, 2004).

A more direct method of projecting three dimensional visual data is via the use of an HMD. This system requires

    the user to wear a headset which contains a small screen which covers each eye. These screens directly send

    visual data to the appropriate eye independently. Craig et al note that many high-end graphics engines and

    projectors have provisions to both render and display stereoscopic images through a single mechanism (Craig,

    2009, p. 11).

    Sherman argues head based VR visual displays are probably the equipment that most people associate with

    virtual reality (Sherman, 2003, p. 151). These headsets are usually equipped with head tracking equipment.

    2.2 Tracking

    Without information about the direction the user is looking, reaching, pointing,

    etc, it is impossible for the VR output displays to appropriately stimulate the

sense. Monitoring the user's body movements is called tracking.

    (Craig, 2009, p. 2)

    Kim highlights head tracking as the most important input device in virtual reality systems as it is an essential

    component in the realisation of natural interaction (Kim, 2005, p. 116). It is possible to track physical

    movement in a variety of ways. Mechanical trackers or joints can be attached to an individual, and by moving

with them can record position and orientation. These systems can be difficult to control and cumbersome, and can restrict the movement of the user (Kim, 2005, p. 116). They are however considered more accurate than

    remote tracking due to the lack of possible interference (Craig, 2009, p. 18).

    Electromagnetic tracking involves the use of a low-level magnetic field generated by a transmitter, most

    commonly from two or three orthogonal coils embedded in a unit attached to the user. Signals in each coil are

    measured in the receiver to determine relative position to the transmitter. The coils act as antennae with the

    strength of the signal being inversely proportionate to the distance between each coil and its receiver

    (Sherman, 2003, p. 78). These devices are inexpensive in comparison to mechanical trackers but their accuracy

is heavily influenced by the environment in which they are operated. For example, devices such as computers

or other electrical equipment emitting a magnetic field can disrupt sensory data (Kim, 2005, p. 116). However,

    there is no line of sight obstruction, so electromagnetic trackers allow for complete freedom of movement.


    Optical tracking systems make use of direct visual information to track an object or user. They usually use a

    video camera which may or may not receive depth information as well as colour (Sherman, 2003, p. 81).

Microsoft's Natal is one example of an optical tracker. This device tracks users using a fixed camera and can

interpret the position of their joints, the orientation of their body and the distance between the user and the

camera (Microsoft Corporation, 2010). By observing from multiple points it is possible to build up a reasonably accurate three dimensional positional frame. Craig et al note that opaque objects will almost certainly interfere

    with sensor operation and accuracy as they obstruct the line of sight, therefore limiting freedom of movement

(Craig, 2009, p. 21). Due to the high computational load that optical tracking systems require, these systems

    are one of the most recent to have been researched (Kim, 2005, p. 118).

    Videometric trackers are the inverse of optical tracking systems. Usually a visual recording device is attached

    to the tracked object and translation is calculated by analysing visual data from the environment around it.

    Images in the surrounding space are interpreted with visual landmarks being used to calculate the quantity of

    movement between input frames. For such a system to work, the computer must be informed of the three

    dimensional location of landmarks prior to the tracking session (Sherman, 2003, p. 82). Videometric tracking

can also be computationally expensive. Craig et al argue that while having multiple defined landmarks in all probable directions can enable accurate tracking, the system is still prone to mistakes due to the heavy

    reliance on visual interpretation (Craig, 2009, p. 173).

    Ultrasonic tracking systems involve the use of high-pitched sounds to determine distance between transmitter

    and receiver. At timed intervals these sounds are emitted and the time taken for them to be received can be

    used to measure the distance of the object. Again, with multiple transmission points a three dimensional

    translation can be triangulated (Craig, 2009, p. 20). Kim states that while audio tracking is inexpensive it relies

    heavily on short distance evaluation and is often inaccurate (Kim, 2005, p. 117). To prevent interference

    between receivers they must be placed a minimum distance away from each other. On small lightweight units

    this is often a difficult task (Sherman, 2003, p. 84).

[Diagram: head axes relative to the forward vector — roll and pitch are measured by inclinometers, yaw about the up vector by an accelerometer.]

Figure 5 - Inertial Tracking

    An inertial tracking system measures changes in gyroscopic force, inclination and acceleration between input

    frames, usually through the use of electromechanical instruments. The most common manifestations of three

    dimensional inertial tracking systems combine accelerometers and inclinometers in a single unit (Craig, 2009,

    p. 21). An inclinometer is a device which can measure the angle of inclination or tilt. These devices usually rely

    on gravity and can be used to calculate both the pitch and roll axis in three dimensions. An accelerometer is a

    device which can measure the proper acceleration of an object, usually one which it is attached to directly. The

    angle and magnitude of acceleration can be applied to the current known position in a previous reference

    frame to calculate a new orientation (Sherman, 2003, p. 85). With a head tracking system accelerometers are

    usually only used on the yaw axis as rotating around the up vector does not change the angle of inclination so

gravity cannot be used to determine a change in orientation (see Figure 5). Because inertial tracking requires a relative calculation of change it can, over time, require recalibration and is prone to inaccuracies. The


    accelerometer yaw axis is particularly sensitive and can often produce varying data if magnetic fields surround

    the unit and the accelerometer in question relies on a magnetic field to measure proper acceleration (Kim,

    2005, p. 117).

    Neural or muscular tracking is the most recent development in virtual reality tracking. It involves sensing the

    movement of individual body-parts or object joints. This system is not suitable for tracking the location of a

    user within an environment but can be applied to specific joints such as the fingers or neck of a user (Sherman,

    2003, p. 86). Such a system is currently relatively expensive and requires highly sensitive equipment capable of

measuring muscle contractions and signal changes. Because the extremity is not being measured in terms of three dimensional translation, an educated estimation must be applied to the data received from the sensors, and the system will not account for situations in which, for example, a limb is prevented by a physical impediment from moving to a certain point even though enough pressure is being applied to do so (Craig, 2009, p. 21).

    There is no optimal solution for tracking which meets all criteria and all scenarios, therefore the closest fit

    technique must be decided on a case by case basis (Craig, 2009, p. 18). For head mounted displays neural

    tracking is not ideal due to the free rotation of the neck, and videometric tracking is arguably too inaccurate

    and expensive for a real-time interactive environment. Mechanical trackers are impractical due to movement

    restrictions and arguably ultrasonic tracking requires too much time spent on setting up and calibrating the

    equipment in the surrounding environment.

    Optical tracking systems are some of the least invasive but also some of the hardest to program for due to the

    software processing required when converting visual data into positional information (Kim, 2005, p. 118).

    For these reasons and based on the scope of this research project I have opted for the use of a relatively

    inexpensive inertial head mounted tracking system.

    2.3 Head Mounted Displays

    Head mounted displays usually employ two separate display devices designed

    to provide isolated display for each eye.

    (Kim, 2005, p. 94)

    Head mounted displays are devices which most commonly facilitate both the tracking of head movement and

    the output of visual, often stereoscopic data.

    Figure 6 - Sword of Damocles HMD (Sutherland, The Ultimate Display, 1965)


Sherman states the screens in head-based displays are not stationary; rather, as their name suggests, they move in conjunction with the user's head (Sherman, 2003, p. 151). This applies to most modern head mounted displays, but even the first prototype devices were capable of tracked stereoscopy.

    The first computer graphics driven head mounted display was created in 1968 by the acclaimed computer

    scientist Ivan Sutherland. The system was heavy and unwieldy, so much so that it had to be suspended from

the ceiling of Sutherland's laboratory via a large robotic arm. The system (see Figure 6), codenamed the Sword of Damocles, was considered by Sutherland to be the "ultimate display" (Sutherland, 1968).

Sutherland's system required the mechanical arm to measure the position of the user's head via a physical connection to the set. Modern head mounted displays resemble Sutherland's design but are considerably lighter, cheaper to manufacture and, in the case of inertial systems, more responsive (Sherman, 2003, pp. 29-

    36). An example of a modern affordable head mounted display is the Vuzix iWear VR920 (see Figure 7). The

    device tracks head movement with an inclinometer and accelerometer head tracker and outputs visual

information to the user via twin high resolution LCD displays. It contains stereo headphone audio and weighs just over 3 ounces (Vuzix Corporation, 2010).

    Figure 7 - Vuzix iWear VR920 (Vuzix Corporation, 2010)

    This device was chosen for the research project due to its C++ compatible SDK, its lightweight design and its

    overall price. It connects to a computer via USB for input and sound output, and via VGA for visual output.

Sherman states stationary displays typically use time interlacing to provide left- and right-eye views (resolved by using shutter glasses), while head-based systems use dual visual outputs, one for each eye (Sherman, 2003, p. 152). The VR920 uses individual dual visual outputs, with the refresh rate synchronised to interlace views to alternate eyes.


3 Research

3.1 Principal Research Objectives

In the central problematic (Section 1.1) there is primary emphasis on understanding the practicalities of developing gameplay mechanics for HMDs. The most effective way to understand these systems is arguably to emulate game development using an HMD device.

    With the aim of satisfying this requirement the following objectives were determined:

1. Obtain the virtual reality head mounted display hardware required to investigate the technology.
2. Install the equipment and driver software on a compatible machine.
3. Understand how the device communicates with the rest of the computer system.
4. Design and implement a custom interface for communication with and control of the device.
5. Develop a software testbed with a three dimensional environment.
6. Design and implement one or more gameplay mechanics on the software testbed.

3.2 Research Method

3.2.1 Installation

The research method was based on the principal research objectives. Each objective was tasked on the project plan (Section 9.1) and was undertaken in a linear, step-based sequence. Firstly, the equipment required for the research had to be cost assessed and obtained. After calculating the total cost, the university was initially approached and funding was requested. While this appeared a possibility, it was raised that funding might take a while to be approved. Due to project time constraints it was decided to fund the equipment independently.

Device drivers were installed on the development machine and the headset was plugged in via both a USB port and

    a VGA connection. In terms of visual display the device functioned as an external or additional monitor. To

    output data rendered to the graphics card framebuffer, display settings had to be adjusted so that the system

    identified the device as a primary monitor.

3.2.2 Interfacing

As with other external IO devices, communication between the system and the HMD is achieved via the use of software drivers written in x86 Assembly which expose a low level common interface as a C++ DLL. A calibration tool is automatically installed with the device drivers and must be executed every time the device is used. The tool finds the extremes of each reading for each of the three axes, which in turn allows the conversion of tracking data to positional information.

    To access this information a method must be called to initialise communication with the device. It was decided

    that the testbed would be written in C# to facilitate rapid iteration and to take advantage of the graphics

    wrapper in the XNA Framework. This was achieved through the use of explicit run-time linking. An example of

the use of this via the DllImport attribute in C# can be seen in Section 9.2.1.
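As a minimal sketch of this explicit run-time linking pattern (the entry point names below follow the iWear SDK's tracker functions but should be treated as assumptions; consult the SDK headers shipped with IWEARDRV.DLL for the exact exports and return codes):

    using System.Runtime.InteropServices;

    static class IWearNative
    {
        // bind to the driver DLL at run time rather than compile time.
        // entry point names are assumptions based on the iWear SDK naming.
        [DllImport("IWEARDRV.DLL", EntryPoint = "IWROpenTracker")]
        public static extern int OpenTracker();

        [DllImport("IWEARDRV.DLL", EntryPoint = "IWRGetTracking")]
        public static extern int GetTracking(ref int yaw, ref int pitch, ref int roll);
    }

    // usage: open the tracker once, then poll the three raw axis values each update.
    // int yaw = 0, pitch = 0, roll = 0;
    // IWearNative.OpenTracker();
    // IWearNative.GetTracking(ref yaw, ref pitch, ref roll);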

Once tracking had been initialised the device was open for polling. By passing three integers through to a subsequent DLL method as parameters, the device driver would synchronously populate the values before returning from the function. It was found that each parameter was populated with a signed halfword or short integer value, that is, a value between -32,768 and +32,767. This should theoretically make tracking very


    accurate with over 100 points of precision in every degree tracked. A simple formula was devised to convert

the axis information into radians, where r denotes the raw halfword integer value:

θ = (r × π) / 32768

Once the measurements on the individual axes were calculated they could be combined to create a matrix representing the orientation of the user in three dimensional space, relative to world space. This was achieved through the use of normalised vectors representing the axis around which each input would rotate: the up vector for yaw, the left vector for pitch and the forward vector for roll (see Figure 5). The view matrix was then constructed by multiplying together the matrices representing each individual rotation.
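To make the construction concrete, a minimal sketch in C#/XNA (rawYaw, rawPitch and rawRoll are illustrative names for the polled halfword values; the mapping of the full signed range onto ±π follows the formula above):

    static float RawToRadians(int raw)
    {
        // map the signed halfword range onto +/- pi radians.
        return raw * (float)Math.PI / 32768f;
    }

    // compose the tracked orientation from the three axes.
    Matrix orientation =
        Matrix.CreateFromAxisAngle(Vector3.Up, RawToRadians(rawYaw)) *
        Matrix.CreateFromAxisAngle(Vector3.Left, RawToRadians(rawPitch)) *
        Matrix.CreateFromAxisAngle(Vector3.Forward, RawToRadians(rawRoll));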

    Stereoscopy was found to be enabled via a similar initialisation method to tracking which was also found in the

    device API. The system exploits a technology called Vertical Synchronization or VSYNC. This system was

    originally designed to prevent page tearing which occurs when a frame buffer is drawn to the screen half way

through a drawing pass, especially on older CRT display devices. Vertical Synchronization coordinates frame rendering with the vertical blanking interval, that is, the time period between visual frame updates to the user,

    to minimise this phenomenon.

    Rather than rendering the scene from two angles to a single frame buffer and splitting the image between

    eyes when it reaches the device, it was found that the VR920 HMD must receive images separately. This

    requires the frame rate of the game to be doubled if stereoscopy is enabled. Outputting stereoscopy to the

    device has a number of steps. Firstly, as per non-stereoscopic imaging the view matrix is constructed. This

    therefore requires that in each update loop the tracking data must be polled before drawing to screen, as with

    any other input device. Then the view matrix is slightly offset as though viewing from one of the observation

    points, for example the left eye. The scene is rendered to the frame buffer and the device is then informed

that a frame is ready, and which eye it should be sent to when received. Because some shader techniques are asynchronous the system must then wait for the next vertical blanking interval. At this point the image is fired

    off to the HMD and displayed on the corresponding eye. Immediately after, the system gets to work rendering

    the same scene but from the other perspective and repeats the process of informing the HMD of which eye to

    display the image and waiting for the next vertical blanking interval. One of the eyes is therefore a fraction of a

    second behind the other, although this is unnoticeable to the user.
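The sequence can be summarised in a short sketch (RenderScene, NotifyEye and WaitForVerticalBlank are hypothetical placeholders for the testbed's draw call and the relevant driver notifications, not the actual Vuzix API):

    foreach (HeadsetEye eye in new[] { HeadsetEye.Left, HeadsetEye.Right })
    {
        Matrix view = headset.GetView(eye);  // centre view offset for this eye
        RenderScene(view);                   // draw the scene to the framebuffer
        NotifyEye(eye);                      // tell the HMD which eye is coming
        WaitForVerticalBlank();              // let asynchronous shader work finish
        graphics.GraphicsDevice.Present();   // fire the frame off to the HMD
    }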

    This process is arguably slower for the system computationally because the graphics card has to cycle through

    all the shader techniques twice and the system has to wait for two vertical blanking intervals. It was observed

    that a more efficient solution could be to render each technique pass only once, juxtaposing the two views and

    sending them to the display device in one go. This would save rendering time but would require the HMD to be

    capable of splitting the image for each eye just before displaying it.

    The interface was designed as a high level device driver wrapper and was built as a separate C# project and

    outputted as a class library DLL (see Section 9.2). This was created with the intention of not tying the library to

    the specific project. The DLL will be distributed for other users to create managed virtual reality apps using the

    same hardware.

3.2.3 Testbed

Due to the interface library handling most of the low level communication with the headset, communication to

    the device could be initialised by the testbed simply by calling the methods in Figure 8. Polling of information

    was also automated by the interface by calling the Update method on the instantiated Headset class.


    using Creyke.Vuzix;

Headset headset = new Headset();
headset.Stereoscopy.Open();
headset.Tracking.Open();

    Figure 8 - Initialising Headset Interface

    It was decided that the testbed would be developed as a WinForms application using an embedded XNA

    canvas for rendering output. This would enable rapid development and iteration of gameplay mechanics by

    minimising the need to concentrate in areas of code maintenance such as memory management.

    An update loop was set up to poll tracking input from the interface. Visual data was then immediately drawn

    to the framebuffer and flushed to the screen or HMD to enable stereoscopy. This process was relatively easy

    to set up and was only required once.
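As a minimal sketch of that loop (OnFrame and DrawScene are hypothetical placeholders for the testbed's per-frame callback and render call):

    // called once per frame by the embedded XNA canvas.
    void OnFrame(GameTime gameTime)
    {
        headset.Update();                             // poll tracking input via the interface
        DrawScene(headset.GetView(HeadsetEye.Left));  // render and flush to the screen or HMD
    }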

3.2.4 Gameplay Mechanics

3.2.4.1 Pilot Mechanic

Initially it was decided to design the first mechanic to be as basic as possible. By rotating their head, the user could in theory navigate a virtual ship through three dimensional space. A game environment was set up with a thousand models representing stars scattered around the origin of the world (see Figure 9). The player was then placed at the origin. A linear force was applied in the direction of the player's view or forward vector, which propelled the player around the map.

    Figure 9 - Pilot Mechanic
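A minimal sketch of the mechanic's update step, assuming orientation is the tracked head matrix from the interface and thrust a tuning constant (both names illustrative):

    // apply a linear force along the player's view forward vector.
    float seconds = (float)gameTime.ElapsedGameTime.TotalSeconds;
    Vector3 forward = Vector3.Transform(Vector3.Forward, orientation);
    velocity += forward * thrust * seconds;
    position += velocity * seconds;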

    Stereoscopy was applied to the scene by using the technique described in Section 3.2.2. This was relatively

easy once the headset interface had been set up. The process outlined in Figure 10 was used to render

    stereoscopic images to the headset.


[Flowchart: begin draw; render the next eye to the framebuffer inside an occlusion query and notify the HMD that a frame has opened; when scan-line rendering is complete, present the framebuffer; when the occlusion query completes, notify the HMD that the next eye frame is available and flush to the display; if only one eye has been rendered, repeat for the other eye, otherwise wait for the next draw.]

Figure 10 - Stereoscopic Rendering Process

3.2.4.2 Turret Mechanic

The turret gameplay mechanic allowed the player to fire at incoming ships from a set position. A crosshair was

    placed in the centre of the screen which turned red when a ray cast along the forward vector of the player

    view returned a hit with one of the incoming objects (see Figure 11).

    Figure 11 - Turret Mechanic
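A sketch of the hit test (Ship and its BoundingSphere are illustrative names; the ray and sphere intersection is the standard XNA pattern):

    // cast a ray along the forward vector and test incoming ships.
    Ray aim = new Ray(playerPosition, forward);
    bool onTarget = false;
    foreach (Ship ship in ships)
    {
        if (aim.Intersects(ship.BoundingSphere) != null)
        {
            onTarget = true;   // crosshair turns red
            break;
        }
    }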

    This mechanic made it more obvious that the accuracy of the system was causing issues with user control.

Many external factors such as magnetic interference were affecting the devices on the HMD, specifically the accelerometer used for measuring yaw. This would be less prevalent in other analogue devices such as thumb sticks on a gamepad, as the joint would be physically measured as opposed to relying on signals and radio waves. It was also noted during development that motion or simulator sickness occurred very quickly when

    concentrating on a specific point, in this case the crosshair, while wearing the headset.

It was decided that the best way to combat this was to apply a smoothing algorithm. The Savitzky-Golay smoothing algorithm performs what is known as a polynomial regression to smooth a number of points in an n


    dimensional system (Savitzky & Golay, 1964). This algorithm can be run multiple times, each time smoothing

the points more. The custom implementation applied the nine-point smoothing kernel, where y_i denotes the i-th sample:

y'_i = (-21 y_{i-4} + 14 y_{i-3} + 39 y_{i-2} + 54 y_{i-1} + 59 y_i + 54 y_{i+1} + 39 y_{i+2} + 14 y_{i+3} - 21 y_{i+4}) / 231

    Polled data from the yaw axis was taken, with 9 of every 10 original values being replaced by interpolated

    points to form straight lines (see Figure 12).

    Figure 12 - Initial interpolated points from yaw axis

    The custom Savitzky & Golay smoothing algorithm was then run multiple times over all points in the scene.

    After ten passes the data appeared smoother (see Figure 13).

    Figure 13 - Savitzky-Golay 10 Passes
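One pass of the smoothing can be sketched as follows, assuming the nine-point kernel given above (edge samples are left unsmoothed for brevity):

    static readonly int[] Kernel = { -21, 14, 39, 54, 59, 54, 39, 14, -21 };

    static float[] SmoothPass(float[] samples)
    {
        // convolve each interior sample with the nine-point kernel.
        float[] output = (float[])samples.Clone();
        for (int i = 4; i < samples.Length - 4; i++)
        {
            float sum = 0f;
            for (int k = -4; k <= 4; k++)
            {
                sum += Kernel[k + 4] * samples[i + k];
            }
            output[i] = sum / 231f;   // kernel normalisation factor
        }
        return output;
    }

    // run the pass repeatedly for stronger smoothing, e.g. ten times as in Figure 13.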

    This did however produce another problem. A certain amount of data had to be read from the headset before

    it could be smoothed, so there was a lag between data being read and the position of the player on the screen

being updated. This was, however, relatively insignificant. Around 30 milliseconds were required to gather and process smoothing data, while the game rendered a frame every 16.7 milliseconds, so the game's perception of the user's head orientation was less than two frames behind.


4 Conclusion

4.1 Findings

While modern virtual reality headsets come with universal connectors and proprietary device drivers, there are a number of difficulties when developing for them. Firstly, the system requires the developer to wear the device when testing stereoscopic imaging. This can be both tiring and nauseating. The constant head movement needed to test tracking can also be awkward. These issues could however be addressed by creating an alternative source of user input and system output: for example, a windowed display visible on the primary development system, and gamepad controller or mouse input, while developing the majority of game features.

    The accuracy of the system is also an issue. While modern inertial tracking virtual reality HMDs are relatively

    inexpensive, they are prone to magnetic interference from electrical devices. Coding for analogue data spikes

    and inaccuracies is much more difficult than taking input from a relatively reliable gamepad. There are

workarounds for this too, however, in the form of smoothing algorithms. There is no guarantee of

how accurate a user's device will be, even if it is the same make and model as the device that the developer

    uses to create the game. This is because the environment in which the device is used will dictate the accuracy

    of the accelerometer and inclinometers.

    A positive of developing with stereoscopy is that, once an engine has stereoscopy enabled, it is relatively easy

    to add gameplay changes without a large amount of code. The system allows the fundamental concepts of real

    time three dimensional graphics to continue to exist. Older fixed function graphics pipelines would make the

    rendering and vertical synchronisation of two separate frames difficult, but modern cards can handle such

    issues with relative ease.

4.2 Summary & Recommendations

When developing virtual reality gameplay mechanics for tracked stereoscopy it is important to consider the

    following:

Is there a development time contingency for the extra time needed?
Will the game or user benefit from the use of stereoscopy?
Will the user have access to, or be able to afford, a stereoscopic headset?
Is the knowledge required to solve issues with the device present within the development team?

4.3 Further Development

    The dynamic-link library and source code for the headset interface will be made available online along with

    this paper and any other resources. Information gained from this project will be applied to augmented reality

    projects for embedded devices, specifically the iPhone.


5 Bibliography

Craig, A. (2009). Developing Virtual Reality Applications: Foundations of Effective Design. San Francisco:

    Morgan Kaufmann.

    Douglas, R. M., Alam, N. M., Silver, B. D., McGill, T. J., Tschetter, W. W., & Prusky, G. T. (2005). Independent

    visual threshold measurements in the two eyes of freely moving rats and mice using a virtual-reality

    optokinetic system. Visual Neuroscience, v. 22.

    Dunn, F., & Parberry, I. (2002). 3D Math Primer for Graphics and Game Development. Texas: Wordware.

    Eberly, D. H. (2007). 3D Game Engine Design. San Francisco: Morgan Kaufmann.

    Heineken, E., & Schulte, F. P. (2007). Seeing Size and Feeling Weight: The Size-Weight Illusion in Natural and

    Virtual Reality. Human Factors, v. 49 no. 1.

Hutarew, G., Moser, K., & Dietze, O. (2004). Comparison of an auto-stereoscopic display and polarised stereoscopic projection for macroscopic pathology. Journal of Telemedicine and Telecare, v. 10 no. 4.

    Kilburn, K. H., & Thornton, J. C. (1995). Prediction equations for balance measured as sway speed by head

    tracking with eyes open and closed. Occupational and Environmental Medicine, v. 52 no. 8.

    Kim, G. J. (2005). Designing Virtual Reality Systems: The Structured Approach. London: Springer.

Lentjes, A. (2007). The Skinny on Stereo 3-D! Animation Magazine, v. 21 no. 9.

Iizuka, K. (2004). Using cellophane to convert a liquid crystal display screen into a three dimensional display

    (3D laptop computer and 3D camera phone). Retrieved April 11, 2010, from

    http://individual.utoronto.ca/iizuka/research/cellophane.htm

    Microsoft Corporation. (2010, March). Project Natal. Retrieved April 17, 2010, from Xbox.com:

    http://www.xbox.com/en-US/live/projectnatal/

    Nicholson, P. T. (2001). Three-dimensional imaging in archaeology: its history and future. Antiquity, v. 75 no.

    288.

    Patterson, R., Winterbottom, M. D., & Pierce, B. J. (2006). Perceptual Issues in the Use of Head-Mounted Visual

    Displays. Human Factors, v. 48 no. 3.

    Risatti, H. (2006). Jackie Matisse Collaborations in Art + Science. Sculpture, v. 25 no. 9.

    Savitzky, A., & Golay, M. J. (1964). Smoothing and Differentiation of Data by Simplified Least Squares

Procedures. Analytical Chemistry, 36(8), 1627-1639.

    Sherman, W. (2003). Understanding Virtual Reality: Interface, Application and Design. San Francisco: Morgan

    Kaufmann.

    Slattery, D. R. (2008). VR and hallucination: a technoetic perspective. Technoetic Arts, v. 6 no. 1.

Sutherland, I. E. (1968). A head-mounted three dimensional display. Salt Lake City: University of Utah.

    Sutherland, I. E. (1965). The Ultimate Display. Proceedings of IFIP, 2 (65), 506-508.


    Vuzix Corporation. (2010, February). Vuzix iWear VR920 - The New Virtual Reality for Gamers. Retrieved April

    18, 2010, from Vuzix: http://www.vuzix.com/UKSITE/iwear/products_vr920.html

    Walters, B. (2009). The Great Leap Forward. Sight & Sound, v. ns19 no. 3.

White House Museum. (1904). Retrieved May 3, 2010, from White House Museum.

    Wilder, F. (2004). What is Anaglyph 3D? Method and technique for anaglyphic stereo photography viewing.

    Retrieved May 3, 2010, from Anaglyph 3D Know-How:

    http://www.stcroixstudios.com/wilder/anaglyph/whatsanaglyph.html


6 Table of Figures

Figure 1 - Stereoscopy
Figure 2 - Early 20th Century Stereograph of Edith Roosevelt (White House Museum, 1904)
Figure 3 - Cinema-Style Light Polarisation Filtering
Figure 4 - Polarisation Example
Figure 5 - Inertial Tracking
Figure 6 - Sword of Damocles HMD (Sutherland, The Ultimate Display, 1965)
Figure 7 - Vuzix iWear VR920 (Vuzix Corporation, 2010)
Figure 8 - Initialising Headset Interface
Figure 9 - Pilot Mechanic
Figure 10 - Stereoscopic Rendering Process
Figure 11 - Turret Mechanic
Figure 12 - Initial interpolated points from yaw axis
Figure 13 - Savitzky-Golay 10 Passes

    All figures unless stated to the contrary are original material.


    7 Glossary of Terms

API (Application Programming Interface): An interface provided in a software application or program which allows other software to interact with it, usually via exposed public functions and properties.

C#: A programming language often used for game tools such as level editors which require rapid development and iteration.

C++: A middle level programming language often used to write time critical, process heavy software, used frequently to write the majority of modern console and desktop computer games.

DLL (Dynamic-Link Library): A shared library compiled code file normally associated with the Microsoft Windows operating system.

Explicit Run-Time Linking: The process of loading a DLL dynamically at runtime and calling functions externally rather than compiling with the library directly.

HMD (Head Mounted Display): A device worn on the head which usually outputs an image and in some cases tracks user movement.

IO (Input / Output): A device, usually external or additional to the system, which performs input and output operations and communication via the use of Input / Output interfaces.

Matrix: A mathematical object commonly used to translate vectors between co-ordinate spaces through vector / matrix multiplication.

Shader: A small program usually designed for a graphics card and used to dictate the way in which an image is rendered.

Stereoscopy: The ability to see objects in three dimensions through the use of two observation points.

Tracking: The method of monitoring the movement of an object or individual.

Vector: A geometric entity representing one or more of a position, direction and magnitude.

VR920 (Vuzix iWear VR920): A mass produced virtual reality headset developed by Vuzix Corporation.

V-SYNC (Vertical Synchronization): A graphics display process which defers frame rendering until the next vertical blanking interval.

WinForms (Windows Forms API): The .NET graphical application programming interface for creating Windows form based applications.

x86 Assembly: The family of assembly languages for x86 processors which power the majority of modern home computers. These languages give the programmer access to individual CPU instructions via the use of simple and short mnemonics and are usually used for optimising code and communicating directly with the hardware at a base level.

XNA Framework: A group of software libraries from Microsoft which expose a number of low level graphics and game related functions to a managed framework.


    8 Table of Software

Vendor                  Product                                   Version
Adobe Systems           Photoshop CS4                             11.0
                        Reader 9                                  9.3.2
Microsoft Corporation   Project Professional 2007                 12.0.4518.1014
                        Visio 2007                                12.0.6524.5003
                        Visual C# Express 2008                    9.0.30729.1 SP
                        Visual C++ Express 2008                   9.0.30729.1 SP
                        Word 2007 (with PDF exporter plug-in)     12.0.6514.5000
Vuzix Corporation       iWear Calibrator                          2.4.0.1
                        iWear SDK                                 2.4.0.1 (IWEARDRV.DLL), 2.2.0.1 (IWRSTDRV.DLL)


9 Appendices

9.1 Project Plan

This image shows the plan used throughout the research project, which allowed for a high level overview of progress. The critical path is highlighted in red, while milestones are denoted by a diamond.


9.2 Headset Interface Code

9.2.1 Headset.cs

// Copyright 2010 Roger Creyke (s110431)
// University Campus Suffolk

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using System;
using System.Runtime.InteropServices;

namespace Creyke.Vuzix
{
    /// <summary>Defines an interface to the unmanaged headset api.</summary>
    public class Headset : IDisposable
    {
        /// <summary>Returns / sets the distance between the two observation points.</summary>
        public float EyeSeparation
        {
            get { return eyeSeparation; }
            set { eyeSeparation = value; }
        }

        /// <summary>Returns whether is mid render.</summary>
        public Boolean IsRendering
        {
            get { return isRendering; }
        }

        /// <summary>Returns whether stereoscopy is currently open.</summary>
        public Boolean IsStereoscopyOpen
        {
            get { return isStereoscopyOpen; }
        }

        /// <summary>Returns whether tracking is currently open.</summary>
        public Boolean IsTrackingOpen
        {
            get { return isTrackingOpen; }
        }

        /// <summary>Returns the current pitch of the headset.</summary>
        public HeadsetMeasurement Pitch
        {
            get { return pitch; }
        }

        /// <summary>Returns the current roll of the headset.</summary>
        public HeadsetMeasurement Roll
        {
            get { return roll; }
        }

        /// <summary>Returns the current yaw of the headset.</summary>


        public HeadsetMeasurement Yaw
        {
            get { return yaw; }
        }

        // state booleans.
        protected bool isRendering;
        protected bool isStereoscopyOpen;
        protected bool isTrackingOpen;

        // raw data.
        protected HeadsetMeasurement pitch;
        protected HeadsetMeasurement roll;
        protected HeadsetMeasurement yaw;

        // axes.
        protected Vector3 viewForward;
        protected Vector3 viewLeft;
        protected Vector3 viewUp;

        // translation.
        protected Quaternion[] rotation;
        protected Vector3[] scale;
        protected Vector3[] translation;
        protected Matrix[] view;

        // gpu.
        protected GraphicsDeviceManager graphics;
        protected OcclusionQuery query;
        protected float eyeSeparation;
        protected GameWindow window;
        protected int scanline;

        // device.
        protected IntPtr stereoHandle;

        /// <summary>Creates a new instance of Headset.</summary>
        /// <param name="graphics">The graphics device manager.</param>
        /// <param name="window">The game window.</param>
        public Headset(GraphicsDeviceManager graphics, GameWindow window)
        {
            // store reference to device manager and game window.
            this.graphics = graphics;
            this.window = window;

            // create occlusion query for vsync interval.
            query = new OcclusionQuery(graphics.GraphicsDevice);

            // create measurement data beans.
            pitch = new HeadsetMeasurement();
            roll = new HeadsetMeasurement();
            yaw = new HeadsetMeasurement();

            // create decomposition parameters and view matrices.
            rotation = new Quaternion[3];
            scale = new Vector3[3];
            translation = new Vector3[3];
            view = new Matrix[3];
            for (int i = 0; i < view.Length; i++)
            {
                rotation[i] = new Quaternion();
                scale[i] = new Vector3();
                translation[i] = new Vector3();
                view[i] = new Matrix();
            }

            // defaults.
            eyeSeparation = 3;
            scanline = 0;

            SetTracking(0, 0, 0);
            SetStereoscopy();


        }

        /// <summary>
        /// Performs application-defined tasks associated with freeing, releasing, or
        /// resetting unmanaged resources.
        /// </summary>
        public void Dispose()
        {
            // dispose of the gpu query which may well be executing.
            query.Dispose();
        }

        /// <summary>
        /// Returns the decomposed rotation component of the view matrix for the
        /// specified eye.
        /// </summary>
        /// <param name="eye">Eye to query.</param>
        /// <returns>Rotation matrix.</returns>
        public Matrix GetRotation(HeadsetEye eye)
        {
            return Matrix.CreateFromQuaternion(rotation[(int)eye]);
        }

        /// <summary>
        /// Returns the decomposed scale component of the view matrix for the specified
        /// eye.
        /// </summary>
        /// <param name="eye">Eye to query.</param>
        /// <returns>Scale matrix.</returns>
        public Matrix GetScale(HeadsetEye eye)
        {
            return Matrix.CreateScale(scale[(int)eye]);
        }

        /// <summary>
        /// Returns the decomposed translation component of the view matrix for the
        /// specified eye.
        /// </summary>
        /// <param name="eye">Eye to query.</param>
        /// <returns>Translation matrix.</returns>
        public Matrix GetTranslation(HeadsetEye eye)
        {
            return Matrix.CreateTranslation(translation[(int)eye]);
        }

        /// <summary>
        /// Returns the view matrix for the specified eye.
        /// </summary>
        /// <param name="eye">Eye to query.</param>
        /// <returns>View matrix.</returns>
        public Matrix GetView(HeadsetEye eye)
        {
            return view[(int)eye];
        }

        /// <summary>
        /// Sets the measurements for all axis raw data values.
        /// </summary>
        /// <param name="pitchRaw">Raw pitch value.</param>
        /// <param name="rollRaw">Raw roll value.</param>
        /// <param name="yawRaw">Raw yaw value.</param>
        protected void SetTracking(int pitchRaw, int rollRaw, int yawRaw)
        {
            // store measurements.
            this.pitch.SetMeasurement(pitchRaw);
            this.roll.SetMeasurement(rollRaw);
            this.yaw.SetMeasurement(yawRaw);
        }

        /// <summary>
        /// Calculates a perspective view matrix for each eye.
        /// </summary>
        protected void SetStereoscopy()


        {
            // create orientation matrix.
            Matrix matOrient = Matrix.Identity;

            // set identity vectors.
            Vector3 trackedBackward = Vector3.Backward;
            Vector3 trackedUp = Vector3.Up;
            Vector3 trackedLeft = Vector3.Left;

            // calculate yaw about up vector.
            matOrient = Matrix.CreateFromAxisAngle(trackedUp, yaw.Radians);
            trackedBackward = Vector3.Transform(trackedBackward, matOrient);
            trackedLeft = Vector3.Transform(trackedLeft, matOrient);

            // calculate pitch about right vector.
            matOrient = Matrix.CreateFromAxisAngle(trackedLeft, pitch.Radians);
            trackedUp = Vector3.Transform(trackedUp, matOrient);
            trackedBackward = Vector3.Transform(trackedBackward, matOrient);

            // calculate roll about view vector.
            matOrient = Matrix.CreateFromAxisAngle(trackedBackward, roll.Radians);
            trackedUp = Vector3.Transform(trackedUp, matOrient);
            trackedLeft = Vector3.Transform(trackedLeft, matOrient);

            // create view matrix from orientation vectors.
            Vector3 stereoAdjustment;

            // set for each observation point.
            for (int eye = 0; eye < view.Length; eye++)
            {
                // reset the look target for this eye before applying any offset.
                Vector3 lookTarget = trackedBackward;

                switch (eye)
                {
                    // left eye offset.
                    case (int)HeadsetEye.Left:
                        stereoAdjustment = trackedLeft * eyeSeparation;
                        lookTarget -= stereoAdjustment;
                        break;

                    // right eye offset.
                    case (int)HeadsetEye.Right:
                        stereoAdjustment = trackedLeft * eyeSeparation;
                        lookTarget += stereoAdjustment;
                        break;
                }

                // set view matrix.
                view[eye] = Matrix.CreateLookAt(Vector3.Zero, lookTarget, trackedUp);

                // decompose.
                view[eye].Decompose(
                    out scale[eye], out rotation[eye], out translation[eye]);
            }
        }

        /// <summary>
        /// Attempts to initialize a stereoscopic connection to the headset.
        /// </summary>
        public void OpenStereoscopy()
        {
            // open stereoscopy and keep the device handle for later calls.
            stereoHandle = StaticOpenStereoscopy();
            isStereoscopyOpen = true;
        }

        /// <summary>
        /// Attempts to close the stereoscopic connection to the headset.
        /// </summary>
        public void CloseStereoscopy()
        {
            // close stereoscopy.
            isStereoscopyOpen = false;
            StaticCloseStereoscopy(stereoHandle);
        }


        /// <summary>
        /// Attempts to initialize tracking on the headset.
        /// </summary>
        public void OpenTracking()
        {
            // open tracking.
            StaticOpenTracking();
            isTrackingOpen = true;
        }

        /// <summary>
        /// Attempts to close tracking on the headset.
        /// </summary>
        public void CloseTracking()
        {
            // close tracking.
            isTrackingOpen = false;
            StaticCloseTracking();
        }

        /// <summary>
        /// Should be called before rendering.
        /// </summary>
        /// <param name="gameTime">Current game time.</param>
        public void RenderBegin(GameTime gameTime)
        {
            // do initial checks for rendering.
            isRendering = true;
            if (!graphics.IsFullScreen)
            {
                // store scanline position for windowed vertical synchronisation.
                scanline = window.ClientBounds.Bottom;
                if (scanline >= graphics.GraphicsDevice.DisplayMode.Height)
                    scanline = graphics.GraphicsDevice.DisplayMode.Height - 1;
            }
        }

        /// <summary>
        /// Informs the headset interface that an eye is about to be rendered.
        /// </summary>
        /// <param name="gameTime">Current game time.</param>
        /// <param name="eye">Eye about to be rendered.</param>
        public void RenderBeginEye(GameTime gameTime, HeadsetEye eye)
        {
            // kick off gpu occlusion query.
            query.Begin();
        }

        /// <summary>
        /// Informs the headset interface that all rendering for an eye has completed.
        /// </summary>
        /// <param name="gameTime">Current game time.</param>
        /// <param name="eye">Eye which has just been rendered.</param>
        /// <returns>Whether the device accepted the eye frame.</returns>
        public bool RenderCompleteEye(GameTime gameTime, HeadsetEye eye)
        {
            // finish gpu occlusion query.
            query.End();

            // prep device for receiving a frame.
            StaticWaitForOpenFrame(stereoHandle, false);

            // wait until the scanline is complete if windowed.
            if (!graphics.IsFullScreen)
                while (graphics.GraphicsDevice.RasterStatus.ScanLine < scanline) ;

            // poll for vertical synchronisation.
            while (!query.IsComplete) ;

            if (eye == HeadsetEye.Left)
            {
                // present if left (first) eye.


                graphics.GraphicsDevice.Present();
            }

            // notify device next eye frame is available.
            return StaticSetStereoLR(stereoHandle, (int)eye);
        }

        /// <summary>
        /// Informs the headset that all rendering for both eyes has completed.
        /// </summary>
        /// <param name="gameTime">Current game time.</param>
        public void RenderComplete(GameTime gameTime)
        {
            // kill occlusion query.
            query.End();

            // all rendering has completed.
            isRendering = false;
        }

        /// <summary>
        /// Updates tracking and stereoscopy.
        /// </summary>
        public void Update()
        {
            // tracking variables.
            int trackedPitch = 0;
            int trackedRoll = 0;
            int trackedYaw = 0;

            // get new positional values.
            StaticUpdate(ref trackedYaw, ref trackedPitch, ref trackedRoll);

            // update all values stored.
            SetTracking(trackedPitch, trackedRoll, trackedYaw);
            SetStereoscopy();
        }

        /// <summary>
        /// Static unmanaged method for opening tracking on the device.
        /// </summary>
        /// <returns>Result code.</returns>
        [DllImport("iWearDrv.dll", SetLastError = false, EntryPoint = "IWROpenTracker")]
        protected static extern long StaticOpenTracking();

        /// <summary>
        /// Static unmanaged method for closing tracking on the device.
        /// </summary>
        [DllImport("iWearDrv.dll", SetLastError = false, EntryPoint = "IWRCloseTracker")]
        protected static extern void StaticCloseTracking();

        /// <summary>
        /// Static unmanaged method for polling tracked axes on the device.
        /// </summary>
        /// <param name="yaw">Raw yaw value.</param>
        /// <param name="pitch">Raw pitch value.</param>
        /// <param name="roll">Raw roll value.</param>
        /// <returns>Result code.</returns>
        [DllImport("iWearDrv.dll", SetLastError = false, EntryPoint = "IWRGetTracking")]
        protected static extern long StaticUpdate(ref int yaw, ref int pitch, ref int roll);

        /// <summary>
        /// Static unmanaged method for setting the tracking filter state.
        /// </summary>
        /// <param name="on">Whether filtering is enabled.</param>
        [DllImport("iWearDrv.dll", SetLastError = false, EntryPoint = "IWRSetFilterState")]
        protected static extern void StaticSetFilterState(Boolean on);

        /// <summary>
        /// Static unmanaged method for opening stereoscopy on the device.
        /// </summary>
        /// <returns>Handle to the stereoscopic device.</returns>
        [DllImport("iWrstDrv.dll", SetLastError = false, EntryPoint = "IWRSTEREO_Open")]


        protected static extern IntPtr StaticOpenStereoscopy();

        /// <summary>
        /// Static unmanaged method for closing stereoscopy on the device.
        /// </summary>
        /// <param name="handle">Handle to the stereoscopic device.</param>
        [DllImport("iWrstDrv.dll", SetLastError = false, EntryPoint = "IWRSTEREO_Close")]
        protected static extern void StaticCloseStereoscopy(IntPtr handle);

        /// <summary>
        /// Static unmanaged method for notifying device of which eye has just rendered.
        /// </summary>
        /// <param name="handle">Handle to the stereoscopic device.</param>
        /// <param name="eye">Eye which has just been rendered.</param>
        /// <returns>Whether the call succeeded.</returns>
        [DllImport("iWrstDrv.dll", SetLastError = false, EntryPoint = "IWRSTEREO_SetLR")]
        protected static extern Boolean StaticSetStereoLR(IntPtr handle, int eye);

        /// <summary>
        /// Static unmanaged method for enabling / disabling stereoscopy.
        /// </summary>
        /// <param name="handle">Handle to the stereoscopic device.</param>
        /// <param name="enabled">Whether stereoscopy is enabled.</param>
        /// <returns>Whether the call succeeded.</returns>
        [DllImport("iWrstDrv.dll", SetLastError = false, EntryPoint = "IWRSTEREO_SetStereo")]
        protected static extern Boolean StaticSetStereoEnabled(IntPtr handle, Boolean enabled);

        /// <summary>
        /// Static unmanaged method for vertical synchronisation of device displays.
        /// </summary>
        /// <param name="handle">Handle to the stereoscopic device.</param>
        /// <param name="eye">Eye to wait for.</param>
        /// <returns>Result code.</returns>
        [DllImport("iWrstDrv.dll", SetLastError = false, EntryPoint = "IWRSTEREO_WaitForAck")]
        protected static extern Byte StaticWaitForOpenFrame(IntPtr handle, Boolean eye);
    }
}
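For context, the class above is intended to be driven once per frame. The following is a minimal sketch, not taken from the dissertation, of how it might be called from an XNA Game's Draw method; the headset field and the DrawScene helper are hypothetical names introduced for illustration only.

        // Hypothetical per-frame driver for the Headset class above.
        // 'headset' and 'DrawScene' are illustrative, not part of the listing.
        protected override void Draw(GameTime gameTime)
        {
            // poll the tracker and rebuild the per-eye view matrices.
            headset.Update();
            headset.RenderBegin(gameTime);

            // render the scene once for each eye with that eye's view matrix.
            foreach (HeadsetEye eye in new[] { HeadsetEye.Left, HeadsetEye.Right })
            {
                headset.RenderBeginEye(gameTime, eye);
                DrawScene(headset.GetView(eye));
                headset.RenderCompleteEye(gameTime, eye);
            }

            headset.RenderComplete(gameTime);
            base.Draw(gameTime);
        }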

9.2.2 HeadsetEye.cs

// Copyright 2010 Roger Creyke (s110431)
// University Campus Suffolk

namespace Creyke.Vuzix
{
    public enum HeadsetEye
    {
        Center = 0,
        Left = 1,
        Right = 2
    }
}


9.2.3 HeadsetMeasurement.cs

// Copyright 2010 Roger Creyke (s110431)
// University Campus Suffolk

using Microsoft.Xna.Framework;

namespace Creyke.Vuzix
{
    /// <summary>
    /// Defines a head tracking measurement for a single axis.
    /// </summary>
    public class HeadsetMeasurement
    {
        /// <summary>
        /// Returns the measurement in degrees.
        /// </summary>
        public float Degrees
        {
            get { return degrees; }
        }

        /// <summary>
        /// Returns the measurement in radians.
        /// </summary>
        public float Radians
        {
            get { return radians; }
        }

        /// <summary>
        /// Returns the measurement in a raw high precision format.
        /// </summary>
        public int Raw
        {
            get { return raw; }
        }

        protected float degrees;
        protected float radians;
        protected int raw;

        /// <summary>
        /// Creates a new instance of HeadsetMeasurement.
        /// </summary>
        public HeadsetMeasurement()
        {
            // default to zero.
            SetMeasurement(0);
        }

        /// <summary>
        /// Sets the raw value of this HeadsetMeasurement.
        /// </summary>
        /// <param name="measurement">Raw measurement.</param>
        public void SetMeasurement(int measurement)
        {
            // store raw value and convert to other forms of measurement.
            raw = measurement;
            radians = (float)raw * MathHelper.Pi / 32768.0f;
            degrees = MathHelper.ToDegrees(radians);
        }
    }
}
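The conversion in SetMeasurement maps the tracker's signed raw range onto angles so that a raw value of 32768 corresponds to π radians. As a quick illustration, a hypothetical snippet not part of the dissertation's listings:

        // a raw reading of 16384 corresponds to a quarter turn.
        HeadsetMeasurement m = new HeadsetMeasurement();
        m.SetMeasurement(16384);
        // m.Radians is now MathHelper.PiOver2 and m.Degrees is 90.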