

[IEEE 2010 IEEE Virtual Reality Conference (VR), Boston, MA, USA, 2010.03.20-2010.03.24]

Interaction Capture in Immersive Virtual Environments via an Intelligent Floor Surface

Yon Visell, Alvin Law, Jessica Ip, Severin Smith, Jeremy R. Cooperstock
McGill University, Centre for Intelligent Machines and CIRMMT, Montreal, Canada

ABSTRACT

We present techniques to enable users to interact on foot with simulated natural ground surfaces, such as soil or ice, in immersive virtual environments. Position and force estimates from in-floor force sensors are used to synthesize plausible auditory and vibrotactile feedback in response. Relevant rendering techniques are discussed in the context of walking on a virtual frozen pond.

1 INTRODUCTION

Sensations accompanying walking on natural ground surfaces in real-world environments (sand in the desert, or snow in winter) are multimodal and highly evocative of the settings in which they occur [8]. Limited prior research has addressed foot-based interaction with VR and AR environments. Arguably, one reason has been the lack of efficient techniques for capturing foot-floor contact interactions via distributed floor areas. Here, we present a novel solution using a network of instrumented floor tiles, and methods for capturing foot-floor contact interactions so as to render multimodal responses of virtual ground materials.

Devices for enabling omnidirectional in-place locomotion in VEs exist [2], but are complex and costly. Most prior work on tactile interaction with floor surfaces utilizes surface sensing arrays [4, 6] for applications such as person tracking, activity tracking, or musical performance. Commercial entities such as Gesturetek and Reactrix have developed floor interfaces for dynamic visual displays of liquid or solid objects. In such cases, video sensing is often used, but it provides no direct information about contact forces. Such information is needed for simulating highly contact-dependent interactions with virtual materials. Further references on this topic are available in a related publication [9].

2 DISTRIBUTED FLOOR INTERFACE

The floor interface consists of a 6 × 6 array of rigid tiles covering a 3.5 sq. m area. Each tile is instrumented with four force sensors and a vibrotactile (VT) actuator. The floor is coated in projection paint, and a pair of overhead video projectors is used for visual display. The design of the actuated tile interface is described in detail elsewhere [7, 9]. It consists of a rigid plate 30.5 × 30.5 × 2 cm in size, supported on vibration mounts, and coupled to a voice coil actuator. Actuators are driven by audio signals that are generated by a computer and amplified. The device has a passband from approximately 50 to 750 Hz, and can reproduce the largest forces required for interaction with virtual ground surfaces (i.e., more than 30 N across the passband). We sense normal forces below the vibration mounts using resistive force sensors (Interlink model 402 FSR). This data is conditioned, amplified, and digitized via a 32-channel, 16-bit acquisition board, and relayed over Ethernet. An array of six small form factor computers is used for force data processing and audio-vibrotactile (VT) synthesis. Each is responsible for synthesizing VT feedback from six tiles, ensuring a palpable response to force input can be provided at low latencies. Data is forwarded to a separate visual rendering server.

Figure 1: Left: distributed floor interface situated within an immersive, rear-projected VE simulator. Right: illustration showing sensing and actuating components.

3 INTRINSIC CONTACT SENSING

Intrinsic contact sensing resolves locations and forces of surface contact between objects using internal force and torque measurements [1], and can provide contact position estimates with fewer sensors than are needed for surface-based sensing. This method has previously been applied to problems in robotic manipulation. The idea is to represent a foot pressure distribution p_R(x) by a point x_c, known as the contact centroid, such that a normal force F_c applied at x_c gives rise to the same intrinsic force measurements as p_R(x) does [1]. For a single floor tile (Fig. 2), with force sensor locations x_j where internal force measurements f_j are taken, x_c and the force F_c = (0, 0, F_c) can be recovered from the measurements F_j = (0, 0, f_j) by solving the force and torque equilibrium equations,

\sum_{j=1}^{4} f_j + F_c + f_p = 0   (1)

\sum_{j=1}^{4} x_j \times F_j + x_c \times F_c + x_p \times F_p = 0   (2)

F_p = (0, 0, f_p) is the weight of the plate and actuator at the tile's center x_p. Solving the three nontrivial scalar equalities (1, 2) yields:

F_c = \sum_{i=1}^{4} f_i - f_p, \qquad x_c = \frac{1}{F_c} \left( \sum_{i=1}^{4} (x_i - x_p) f_i + F_c\, x_p \right)   (3)
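As a concrete illustration, Eq. (3) can be sketched as follows. This is a minimal example: the sensor positions, force readings, and plate weight are hypothetical values chosen for illustration, not measurements from the paper's hardware.

```python
import numpy as np

def contact_centroid(sensor_pos, sensor_forces, plate_center, plate_weight):
    """Estimate the contact centroid x_c and net normal force F_c (Eq. 3)
    from the four corner sensor readings f_i of a single rigid tile."""
    sensor_pos = np.asarray(sensor_pos, dtype=float)   # x_i, shape (4, 2)
    f = np.asarray(sensor_forces, dtype=float)         # f_i, shape (4,)
    xp = np.asarray(plate_center, dtype=float)         # x_p, shape (2,)

    Fc = f.sum() - plate_weight                        # F_c = sum_i f_i - f_p
    # x_c = (1 / F_c) * (sum_i (x_i - x_p) f_i + F_c x_p)
    xc = ((sensor_pos - xp) * f[:, None]).sum(axis=0) / Fc + xp
    return xc, Fc

# Hypothetical 30.5 cm tile: sensors at the corners, plate weighing 20 N.
# Equal corner readings should place the centroid at the tile center.
corners = [(0, 0), (0.305, 0), (0.305, 0.305), (0, 0.305)]
xc, Fc = contact_centroid(corners, [100, 100, 100, 100], (0.1525, 0.1525), 20)
```

Because the four corner positions are symmetric about the plate center, equal readings yield x_c at the center, which is a convenient sanity check for a real calibration routine.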

This result can be readily extended to cases in which p_R(x) overlaps several tiles [9]. Our device was empirically determined to be able to localize contacts with a typical accuracy of 2 cm, a number that compares favorably to the dimensions of an adult shoe. Further details are given in [9].

4 CHANNELING MATERIAL INTERACTIONS

We have implemented a demonstration consisting of a virtual frozen pond that users may walk on, producing patterns of surface cracks that are rendered via audio, visual, and VT channels (Fig. 3).

Figure 2: Contact centroid x_c from pressure distribution p_R(x).


IEEE Virtual Reality 2010, 20-24 March, Waltham, Massachusetts, USA. 978-1-4244-6238-4/10/$26.00 ©2010 IEEE


4.1 Audio-tactile rendering

Audio and VT display channels provide feedback accompanying the fracture of the virtual ice sheet underfoot. The two are derived from a local fracture mechanics model. Fracture events are described via an event time t_i and energy loss E_i. Figure 4 illustrates the continuum model and a simple mechanical analog used for synthesis. In the stuck state, the surface has stiffness K = k_1 + k_2 and is governed by:

F(t) = m\ddot{x} + b\dot{x} + K(x - x_0), \qquad x_0 = k_2\, \xi(t) / K   (4)

where ξ(t) represents the net plastic displacement up to time t. A Mohr-Coulomb yield criterion is applied to determine slip onset: when the force F_ξ on the plastic unit exceeds a threshold F_0 (either constant or stochastic), a slip event with incremental displacement Δξ(t) and energy loss ΔW, representing the inelastic work of fracture growth, is generated. Slip displacements are rendered as impulsive transients, by exciting a bank of modal oscillators with impulse response s(t) = \sum_i a_i e^{-b_i t} \sin(2\pi f_i t), determined by amplitudes a_i, decay rates b_i, and frequencies f_i. An independent response is rendered in parallel for each tile.
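A minimal sketch of this transient synthesis is given below. The sample rate, duration, modal frequencies, decay rates, and the threshold value are illustrative assumptions, not the tuned parameters used by the authors.

```python
import numpy as np

def fracture_transient(energy, fs=44100, dur=0.25,
                       freqs=(80.0, 210.0, 460.0), decays=(30.0, 45.0, 60.0)):
    """Render one slip event as an impulsive transient from a bank of
    modal oscillators: s(t) = sum_i a_i * exp(-b_i t) * sin(2 pi f_i t).
    Modal amplitudes a_i are scaled by the slip energy of the event."""
    t = np.arange(int(fs * dur)) / fs
    s = np.zeros_like(t)
    for f_i, b_i in zip(freqs, decays):
        a_i = energy / len(freqs)     # amplitude grows with energy loss dW
        s += a_i * np.exp(-b_i * t) * np.sin(2 * np.pi * f_i * t)
    return s

# Mohr-Coulomb-style trigger: emit a transient only when the force on the
# plastic unit exceeds the threshold F0 (hypothetical values).
F0 = 25.0
force_on_plastic_unit = 32.0
if force_on_plastic_unit > F0:
    s = fracture_transient(energy=1.0)
```

In a real-time system the resulting buffer would be mixed into the audio/VT output stream of the tile on which the slip event occurred.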

4.2 Visual animation and control

Surface cracks are often animated by simulating the inelastic evolution of a distributed stress state [5, 3]. Here, we adopt a simpler simulation fusing the local temporal crack-growth model given above with heuristics for spatial pattern growth. This efficient method allows us to meet the stringent real-time requirements for audio-VT feedback. The contact centroid x_c provides a local summary of the spatial stress injected by the foot. A fracture pattern consists of a collection of crack fronts, defined by linear sequences of node positions, c_0, c_1, ..., c_n. Fronts originate at seed locations p = c_0. The fracture is rendered as line primitives l_k = (c_k - c_{k-1}) on the ice sheet (Fig. 5). Seed locations p are determined by foot-floor contact. Our method is mesh-free, and the seed locations are unconstrained. A crack event initiated by the audio-tactile process at time t_i with energy E(t_i) results in the creation of a new seed or the growth of fractures from an existing one. A new seed p is formed at the location of the dominant contact centroid x_c if no existing seed lies within distance Δp. The new seed p is created with a random number N_c of latent crack fronts, c^1_0, c^2_0, ..., c^{N_c}_0. We sample N_c uniformly in {2, 3, ..., 6}. A crack front propagates from a seed p nearest to x_c. With probability 1/N_c the jth crack front of p is extended. Its growth is determined by a propagation vector d^j_m such that c^j_m = c^j_{m-1} + d^j_m. We take d^j_m = α E n^j_m, where E is the crack energy, α is a global growth rate parameter, and n^j_m is the direction. Since we lack information about the principal stress directions at the front, we propagate in a direction n^j_m = n^j_{m-1} + β t, where β ~ N(β; 0, σ) is a Gaussian random variable and t = n^j × u, where u is the upward surface normal (i.e., t is a unit vector tangent to n^j). The initial directions at p are spaced equally on the circle.
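The seeding and propagation heuristic described above can be sketched as follows. The range of N_c and the update rules come from the text; the 2D simplification (where the tangent t = n × u reduces to the in-plane perpendicular of n), the parameter values α and σ, and the data layout are assumptions made for illustration.

```python
import math
import random

def new_seed(p, rng):
    """Create a seed at p with N_c latent crack fronts (N_c uniform in
    2..6) whose initial directions are spaced equally on the circle."""
    nc = rng.randint(2, 6)
    fronts = []
    for j in range(nc):
        theta = 2 * math.pi * j / nc
        fronts.append({"nodes": [p], "dir": (math.cos(theta), math.sin(theta))})
    return fronts

def grow(fronts, energy, alpha=0.02, sigma=0.3, rng=random):
    """Extend one front: c_m = c_{m-1} + alpha * E * n_m, where n_m is the
    previous direction perturbed tangentially by a Gaussian beta."""
    front = rng.choice(fronts)          # each front chosen w.p. 1/N_c
    nx, ny = front["dir"]
    beta = rng.gauss(0.0, sigma)
    # In 2D, t = n x u (u = upward surface normal) is the in-plane
    # perpendicular of n, so the perturbed direction is n + beta * t.
    tx, ty = -ny, nx
    dx, dy = nx + beta * tx, ny + beta * ty
    norm = math.hypot(dx, dy)           # renormalize the new direction
    nx, ny = dx / norm, dy / norm
    cx, cy = front["nodes"][-1]
    step = alpha * energy               # |d_m| = alpha * E
    front["nodes"].append((cx + step * nx, cy + step * ny))
    front["dir"] = (nx, ny)

rng = random.Random(0)
fronts = new_seed((0.0, 0.0), rng)      # seed at a contact centroid
for _ in range(5):                      # five crack events at this seed
    grow(fronts, energy=1.0, rng=rng)
```

Each call to grow appends one node to one front, so the line primitives l_k can be re-uploaded to the renderer incrementally as cracks advance.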

5 CONCLUSION

This paper described techniques for interaction with virtual ground material simulations using a distributed, multimodal floor interface. The methods are low in cost and complexity, and accessible to multiple users without body-worn markers or equipment. Despite these promising results, there are several aspects in which this system can be improved, including: tile sensing accuracy, tile array density, the use of multi-tile force data, and more accurate physical simulation of ground materials. Nonetheless, we believe that these methods hold promise for improving presence in immersive VR and AR environments.

Figure 3: Still images from the frozen pond scenario.

Figure 4: A. Behavior at the crack front c_k is modeled via a fracture mechanics approach. A visco-elasto-plastic body undergoes shear sliding fracture. B. A simple mechanical analog. C. Each slip event is rendered as an impulsive transient.

Figure 5: Crack patterns, graphs of lines between nodes, c_i.

ACKNOWLEDGEMENTS

The authors acknowledge support from the MDEIE of Quebec for the EU FP7 project NIW (ICT-FET no. 222107).

REFERENCES

[1] A. Bicchi, J. Salisbury, and D. Brock. Contact sensing from force measurements. The International Journal of Robotics Research, 12(3):249, 1993.

[2] J. Hollerbach. Locomotion interfaces and rendering. In M. Lin and M. Otaduy, editors, Haptic Rendering: Foundations, Algorithms and Applications. A K Peters, Ltd, 2008.

[3] J. O'Brien and J. Hodgins. Graphical modeling and animation of brittle fracture. In Proceedings of the 26th annual conference on Computer graphics and interactive techniques, pages 137–146. ACM Press/Addison-Wesley, New York, NY, USA, 1999.

[4] J. Paradiso, C. Abler, K.-y. Hsiao, and M. Reynolds. The magic carpet: physical sensing for immersive environments. In CHI '97 extended abstracts on Human factors in computing systems, pages 277–278, New York, NY, USA, 1997. ACM.

[5] M. Pauly, R. Keiser, B. Adams, P. Dutre, M. Gross, and L. Guibas. Meshless animation of fracturing solids. Proceedings of ACM Siggraph 2005, 24(3):957–964, 2005.

[6] A. Schmidt, M. Strohbach, K. v. Laerhoven, A. Friday, and H.-W. Gellersen. Context acquisition based on load sensing. In UbiComp '02: Proceedings of the 4th international conference on Ubiquitous Computing, pages 333–350, London, UK, 2002. Springer-Verlag.

[7] Y. Visell and J. Cooperstock. Optimized design of a vibrotactile display via a rigid surface. In Proc. of IEEE Haptics Symposium, 2010.

[8] Y. Visell, F. Fontana, B. Giordano, R. Nordahl, S. Serafin, and R. Bresin. Sound design and perception in walking interactions. International Journal of Human-Computer Studies, 2009.

[9] Y. Visell, A. Law, S. Smith, J. Ip, R. Rajalingham, and J. Cooperstock. Contact sensing and interaction techniques for a distributed multimodal floor display. In Proc. of IEEE 3DUI, 2010.
