Multi-View Augmented Reality for Underground Exploration

Mustafa Tolga Eren∗, Sabanci University
Murat Cansoy†, Sabanci University
Selim Balcisoy‡, Sabanci University

∗e-mail: [email protected]   †e-mail: [email protected]   ‡e-mail: [email protected]

ABSTRACT

We propose a novel multi-view visualization technique, which allows effortless interaction with subterranean data and tries to maximize spatial perception whilst minimizing view clutter. The multi-view augmented reality technique introduces two correlating displays: i) a perspective egocentric view with a focused edge overlay and focused geometry clipping, and ii) an orthographic cut-away display that visualizes a thin slice of subterranean data intersecting a user-controlled anchor.

Keywords: Outdoor Augmented Reality, Underground Visualization, X-Ray Visualization, Multi-View Augmented Reality

Index Terms: H.5.1 [Information Systems]: Multimedia Information Systems—Artificial, augmented, and virtual realities; I.3.7 [Computing Methodologies]: Computer Graphics—3D Graphics

1 INTRODUCTION

Outdoor Augmented Reality is a wide research field with a large set of application areas spanning from defense to entertainment. A key issue is the visualization of occluded objects with the highest possible accuracy and the preservation of spatial relationships between visible and rendered objects. A large body of research focuses on displaying information hidden behind surfaces such as walls, buildings, and mountains, based on X-ray visualization techniques [1].

There are several techniques for exploring existing urban infrastructure and archaeological artifacts, such as ground-penetrating radar, electrical-resistance tomography, and robotic sensors. New and existing pipe networks and other geo-referenced subterranean data are documented using geographical information systems. Hence there is a need for in-situ visualization of this documented data on a mobile device, such as a mobile phone or tablet, in an AR fashion.

The main contribution of this paper is a novel multi-view visualization technique, which allows effortless interaction with subterranean data and tries to maximize spatial perception whilst minimizing view clutter.

2 RELATED WORK

Figure 1: Visualization of underground pipe networks using different techniques. a) Standard method, b) edge overlay, c) dig box, d) our multi-view technique.

Occluded geometry visualization has been studied extensively in the AR domain. A careless overlay of occluded geometry, as seen in Figure 1a, is not sufficient for visualizing these objects. Previous studies enhance the scene by employing X-ray visualization via image-based ghostings [7] or simple edge-overlay techniques that convey a sense of occlusion while visualizing hidden geometry [1]. Figure 1b presents an edge overlay technique: detected edges of the background image are overlaid on top of the infrastructure pipes. This technique provides the user with cues to perceive occluded objects; however, the user is not supplied with any focus cues. In the Smart Vidente project, researchers utilized a dig box to present a focused visualization for an excavation site. We implemented a similar visualization, shown in Figure 1c, where the rendering of underground objects is restricted to the volume covered by a rectangular excavation box. In this workflow, the excavation box is created and fixed to a user-defined geo-location, so the technique is tailored for examining a specific excavation location [6].

3 MULTI-VIEW TECHNIQUE

We employ a combination of Magic Lens [2] and X-ray visualization techniques via an anchor. The anchor is an interactive widget with textual information consisting of two parts: the “above” and “under” planes. While a focused edge overlay technique around the upper part of the anchor provides X-ray visualization, the lower part acts as a Magic Lens for displaying an orthographic projection of occluded geometry in a separate view.

These views are synchronized and share the same data, with shape cues correlating with each other. During the exploration process, a single-view approach often leads to misinterpretation of the underlying data [5]. By displaying the same data using different techniques and from different angles, viewers are encouraged to match correlated elements.

A cutaway slice of the infrastructure that lies immediately below this anchor is displayed in the orthographic view. When the anchor position changes, both views change in a consistent manner. The first part of the anchor is always located above the surface, touching the ground grid. This part is named the “above” plane and is demonstrated in the upper row of Figure 2a. The second part is located directly beneath the “above” plane and is always under the ground grid. Moreover, the “under” plane can shift through the ground via user interaction, as seen in Figure 2b and Figure 2c. We utilize the “under” plane and the scene’s projection onto its surface to visualize occluded geometry.
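For concreteness, the anchor state can be captured with a small amount of data. The C++ sketch below is illustrative only; the type and member names (Anchor, AnchorPlane, setDepth, and so on) are assumptions rather than our actual implementation, but they show how the “above” plane stays on the ground grid while the “under” plane follows it at a user-controlled depth.

    // Minimal sketch of the anchor widget state (names are illustrative, not from the paper).
    #include <string>

    struct Vec3 { float x, y, z; };

    struct AnchorPlane {
        Vec3  center;        // world-space centre of the panel
        float width, height; // panel extent in metres (1 x 2 m in the prototype)
        std::string label;   // textual information shown on the widget
    };

    struct Anchor {
        AnchorPlane above;   // always touching the ground grid
        AnchorPlane under;   // always below the ground, depth controlled by the user
        float depth;         // vertical offset of "under" below "above", in metres

        // Keep the pair consistent: "above" stays on the surface, "under" follows
        // directly beneath it at the current depth.
        void setGroundPosition(const Vec3& groundPoint) {
            above.center = groundPoint;
            under.center = { groundPoint.x, groundPoint.y - depth, groundPoint.z };
        }

        void setDepth(float metres) {
            depth = metres;
            under.center.y = above.center.y - depth;
        }
    };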

Figure 2: The “above” and “under” planes placed in an empty scene. An edge overlay is drawn to denote the ground plane. a) Front view, b) “above” and “under” planes in close proximity, c) “under” plane at a depth of four meters directly beneath the “above” plane.

Figure 3: Clipping sphere used for focus preservation through information filtering. a) Focused on the red pipe layer, b) focused on the area between the layers, c) focused on the blue pipe layer.

To preserve focus while exploring hidden objects, we utilize a combination of the techniques employed in the edge overlay and dig box approaches. First, we employ a sphere-based clipping mechanism in order to filter out cluttering geometry. A virtual clipping sphere is pinned to the center of the user-interactable “under” plane (Figure 3). Secondly, we provide a 2D clipping mechanism for restricting highlighted surface features.
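The filtering itself amounts to a distance test against the sphere. The sketch below assumes the test is applied per vertex or per fragment during rendering; the function names and the exact test are illustrative assumptions, not taken from our renderer.

    // Sketch of sphere-based filtering: geometry outside a clipping sphere pinned
    // to the "under" plane centre is discarded (names are illustrative).
    struct Vec3 { float x, y, z; };

    // Returns true if the point should be kept (drawn), false if it is clipped away.
    bool insideClippingSphere(const Vec3& p, const Vec3& sphereCenter, float radius) {
        const float dx = p.x - sphereCenter.x;
        const float dy = p.y - sphereCenter.y;
        const float dz = p.z - sphereCenter.z;
        return (dx * dx + dy * dy + dz * dz) <= radius * radius;
    }

    // The sphere follows the anchor: whenever the "under" plane moves, the clipping
    // volume moves with it, so only geometry near the current focus survives.
    void updateClippingSphere(Vec3& sphereCenter, const Vec3& underPlaneCenter) {
        sphereCenter = underPlaneCenter;
    }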

The orthographic view is generated with a virtual orthographic camera positioned facing the “under” plane. This projection has a very narrow near-to-far range, which we employ in order to mimic a volumetric cut-away visualization.
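One way to obtain such a slice is a standard OpenGL-style orthographic projection whose near and far planes are only a small distance apart, so depth clipping discards everything outside the slab, as in the sketch below; the panel extents and slab thickness parameters are illustrative assumptions.

    // Sketch of the slice camera: an orthographic projection (column-major,
    // OpenGL convention) with a very narrow near-to-far range.
    #include <array>

    using Mat4 = std::array<float, 16>;

    Mat4 orthoProjection(float l, float r, float b, float t, float n, float f) {
        Mat4 m{};
        m[0]  =  2.0f / (r - l);
        m[5]  =  2.0f / (t - b);
        m[10] = -2.0f / (f - n);
        m[12] = -(r + l) / (r - l);
        m[13] = -(t + b) / (t - b);
        m[14] = -(f + n) / (f - n);
        m[15] =  1.0f;
        return m;
    }

    // Camera faces the "under" plane; only a thin slab of geometry around the
    // plane survives depth clipping, mimicking a volumetric cutaway.
    Mat4 makeSliceProjection(float panelWidth, float panelHeight, float slabThickness) {
        const float halfW = panelWidth  * 0.5f;
        const float halfH = panelHeight * 0.5f;
        return orthoProjection(-halfW, halfW, -halfH, halfH,
                               0.0f, slabThickness);  // narrow near-to-far range
    }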

The geometry of the orthographic view is rendered in three passes. In the first pass, only a wireframe representation of the objects is rendered. The second and third passes are rendered with a hatching shader [3]: the second pass is drawn with flipped normals to render the back faces, and the front faces are rendered in the third pass. This non-photorealistic rendering technique is aimed at professional field workers and civil engineers who are familiar with technical illustrations. We decided to use a similar hatching technique to acquire charcoal-sketch-like cutaway illustrations. Another advantage of using a hatching-based shader is that, by drawing hatched lines along the normals of the object, we preserve the sense of geometry in the final image.
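The pass ordering can be summarized as follows. The helper routines (drawWireframe, bindHatchingShader, drawFaces) are hypothetical placeholders for the renderer's own functions and do not correspond to any particular graphics API.

    // High-level sketch of the three render passes for the orthographic view.
    enum class FaceSet { Back, Front };

    struct Scene;            // underground geometry visible in the slice
    struct HatchingShader;   // hatching shader in the spirit of [3]

    void drawWireframe(const Scene&);                        // pass 1
    void bindHatchingShader(HatchingShader&);                // passes 2-3
    void drawFaces(const Scene&, FaceSet, bool flipNormals); // hatched geometry

    void renderOrthographicView(const Scene& scene, HatchingShader& hatching) {
        // Pass 1: wireframe only, outlining every object in the slice.
        drawWireframe(scene);

        // Pass 2: back faces with flipped normals, hatched, so interior
        // surfaces read correctly in the cutaway.
        bindHatchingShader(hatching);
        drawFaces(scene, FaceSet::Back, /*flipNormals=*/true);

        // Pass 3: front faces, hatched along the surface normals, preserving
        // the sense of geometry and giving the charcoal-sketch look.
        drawFaces(scene, FaceSet::Front, /*flipNormals=*/false);
    }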

3.1 Interaction

While the user’s position and orientation are being tracked via a marker, she can also trigger two separate touch-based interactions. In order to create an egocentric visualization, the anchor’s position is fixed on the X axis in camera space. This ensures the anchor will always be horizontally centered on the screen.

For touch-based interactions, the screen space is divided into two. If the user touches and drags her finger on the perspective view, the “above” and “under” plane pair is moved closer or further away along the viewing direction. Similarly, user interaction on the orthographic view translates the bottom part of the anchor along the Y axis through the ground. This interaction causes the “under” plane to shift into the ground, while the “above” plane is left on the surface to reflect the “under” plane’s relative position on the ground. The defined user interactions are demonstrated in Figure 4.
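A rough sketch of this split-screen mapping is given below, reusing the Anchor type from the earlier sketch; the screen layout, drag axes, and sensitivity constant are illustrative assumptions.

    // Sketch of the split-screen touch mapping (constants and layout assumed).
    struct Touch { float x, y, dragDeltaX, dragDeltaY; };

    // screenSplitX separates the perspective view from the orthographic view
    // (the actual on-screen layout is an assumption).
    void handleDrag(const Touch& t, float screenSplitX, Anchor& anchor,
                    const Vec3& viewDir) {
        const float metersPerPixel = 0.01f; // illustrative sensitivity

        if (t.x < screenSplitX) {
            // Perspective view: slide the above/under pair closer or further
            // away along the viewing direction on the ground.
            Vec3 ground = anchor.above.center;
            ground.x += viewDir.x * t.dragDeltaY * metersPerPixel;
            ground.z += viewDir.z * t.dragDeltaY * metersPerPixel;
            anchor.setGroundPosition(ground);
        } else {
            // Orthographic view: push the "under" plane deeper into the ground
            // (or pull it back up) along the vertical axis.
            anchor.setDepth(anchor.depth + t.dragDeltaY * metersPerPixel);
        }
    }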

Figure 4: Touch-based interactions translate the anchor a) along the viewing direction, b) through the ground.

3.2 Prototype

We employ a mobile AR setup where users explore the underground objects through a smartphone. The perspective view in our technique is similar to classical mobile AR: video images from a calibrated camera are used to register and track predefined markers. A custom marker placed at ground level is utilized in our demonstrations. The virtual objects are then rendered relative to this marker’s position and orientation. For registration and tracking purposes we have used the Vuforia SDK [4]. Our prototype runs on a Samsung Galaxy S2 smartphone at approximately 25 FPS.
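The marker-relative rendering reduces to a matrix composition per object, as sketched below under the assumption that the tracking layer returns a 4x4 camera-from-marker pose each frame; the types and function names are illustrative and are not Vuforia API calls.

    // Illustrative sketch of marker-relative rendering: every geo-referenced
    // object stores its transform relative to the ground marker.
    #include <array>

    using Mat4 = std::array<float, 16>; // column-major, OpenGL convention

    // Standard 4x4 matrix product (column-major).
    Mat4 multiply(const Mat4& a, const Mat4& b) {
        Mat4 r{};
        for (int c = 0; c < 4; ++c)
            for (int row = 0; row < 4; ++row)
                for (int k = 0; k < 4; ++k)
                    r[c * 4 + row] += a[k * 4 + row] * b[c * 4 + k];
        return r;
    }

    // model-view = (camera-from-marker pose) * (object-in-marker transform)
    Mat4 modelViewForObject(const Mat4& cameraFromMarker,
                            const Mat4& objectInMarker) {
        return multiply(cameraFromMarker, objectInMarker);
    }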

4 DISCUSSION AND CONCLUSION

A limitation of the multi-view technique is the fixed panel size (1x2 meters). For precise measurements, the relative distance between visualized objects should be smaller than the panel size. This limitation can be overcome by providing a panel size that is suitable for the physical environment being inspected. It is also possible to let the user change the panel size at runtime via a specific user interaction. In this case the orthographic view would also be scaled to match the panel dimensions, allowing precise comparisons.

Another topic worth mentioning is the anchor pair’s size on the screen. Depending on the real-world size of the anchor panels, they may take up screen space in the perspective display. The perspective display should be used for navigational purposes, whereas the orthographic display is used to measure distances. Since the panels’ size has no effect on the orthographic display, they do not clutter the measurement process.

In this work we have demonstrated how a multi-view visualization technique can be adapted to the Augmented Reality field for improved exploration of underground structures.

REFERENCES

[1] B. Avery, C. Sandor, and B. H. Thomas. Improving spatial perception for augmented reality X-ray vision. In IEEE Virtual Reality 2009, pages 79–82, March 2009.

[2] E. Bier, M. Stone, K. Pier, W. Buxton, and T. DeRose. Toolglass and magic lenses: The see-through interface. In Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques, pages 73–80. ACM, 1993.

[3] B. Freudenberg, M. Masuch, and T. Strothotte. Real-time halftoning: A primitive for non-photorealistic shading. (November 2003), 2004.

[4] Qualcomm. Vuforia SDK. http://www.qualcomm.com/solutions/augmented-reality, retrieved April 2012.

[5] J. Roberts. On encouraging multiple views for visualization. In Proceedings of the 1998 IEEE Conference on Information Visualization, pages 8–14, July 1998.

[6] G. Schall, E. Mendez, E. Kruijff, E. Veas, S. Junghanns, B. Reitinger, and D. Schmalstieg. Handheld augmented reality for underground infrastructure visualization. Personal and Ubiquitous Computing, 13(4):281–291, June 2008.

[7] S. Zollmann, D. Kalkofen, E. Mendez, and G. Reitmayr. Image-based ghostings for single layer occlusions in augmented reality. In ISMAR 2010, pages 19–26. IEEE, 2010.
