


Computers & Graphics 37 (2013) 165–178

Contents lists available at SciVerse ScienceDirect

Computers & Graphics

0097-8493/$ - see front matter © 2013 Elsevier Ltd. All rights reserved.

http://dx.doi.org/10.1016/j.cag.2012.12.005

* Corresponding author. Tel.: +351 214233566; fax: +351 213145843.

E-mail addresses: [email protected], [email protected] (B.R. De Araújo), [email protected] (G. Casiez), [email protected] (J.A. Jorge), [email protected] (M. Hachet).

journal homepage: www.elsevier.com/locate/cag

Special Section on Touching the 3rd Dimension

Mockup Builder: 3D modeling on and above the surface

Bruno R. De Araújo a,*, Géry Casiez b, Joaquim A. Jorge a, Martin Hachet c

a INESC-ID, DEI IST, Technical University of Lisbon, Portugal

b LIFL, INRIA Lille, University of Lille, Villeneuve d'Ascq, France

c INRIA Bordeaux - LaBRI, Talence, France

Article info

Article history:

Received 18 September 2012

Received in revised form 20 December 2012

Accepted 21 December 2012

Available online 7 January 2013

Keywords:

3D modeling

3D user interface


Abstract

We present Mockup Builder, a semi-immersive environment for conceptual design which allows virtual mockups to be created using gestures. Our goal is to provide familiar ways for people to conceive, create and manipulate three-dimensional shapes. To this end, we developed on-and-above-the-surface interaction techniques based on asymmetric bimanual interaction for creating and editing 3D models in a stereoscopic environment. Our approach combines both hand and finger tracking in the space on and above a multi-touch surface. This combination brings forth an alternative design environment where users can seamlessly switch between interacting on the surface or above it to leverage the benefits of both interaction spaces. A formal user evaluation conducted with experienced users shows very promising avenues for further work towards providing an alternative to current user interfaces for modeling.

© 2013 Elsevier Ltd. All rights reserved.

1. Introduction

Despite the growing popularity of virtual environments, they have not yet replaced desktop Computer Aided Design (CAD) systems when it comes to modeling 3D scenes. Traditional virtual reality idioms are still umbilically connected to the desktop metaphor they aim to replace, leveraging the familiar "Windows, Icons, Menus, Pointing" (WIMP) metaphors. Worse, the command languages underlying many of these systems do not map well to the way people learn to conceive, reason about and manipulate three-dimensional shapes.

In this paper, we explore 3D interaction metaphors that yield direct modeling techniques in stereoscopic multi-touch virtual environments. Combined with user posture tracking based on a depth camera and three-dimensional finger tracking, this rich environment allows us to seamlessly pick and choose the sensing techniques most appropriate to each modeling task. Based on this groundwork, we have developed an expressive set of modeling operations which builds on users' abilities at creating and manipulating spatial objects. Indeed, from a small set of simple yet powerful functions, users are able to create moderately complex scenes with simple dialogues via direct manipulation of shapes in less cumbersome ways. Our immersive environment aims at supporting gestural and direct manipulation following a push-and-pull modeling paradigm to edit both topological and geometric representations of 3D models.


By doing so, our goal is to propose plausible 3D gestures for modeling, fashioned after physical mock-up interaction. Finally, we want to hide the underlying mathematical details associated with traditional CAD systems, thus affording users more intimate contact with virtual shapes without sacrificing their creativity. While we do not envisage working at full size, the ability to control scale at will is an important feature for easily exploring models. By using a godlike view, we render virtual models as close as possible to physical mockups without the associated physical constraints. While the bimanual interaction model has been previously published using a similar hardware setup in [1], this paper describes an extended version of the work presented at the ACM CHI workshop on the 3rd dimension of CHI (3DCHI) [2], focused on the 3D modeling techniques and their evaluation. We performed a formal user evaluation to assess the benefits and limitations of our approach as compared to a CAD modeling system with similar push-and-pull modeling ability.

The remainder of the paper is organized as follows. After an overview of the related work, we introduce our modeling setup and our modeling approach. We then present the results of a preliminary evaluation comparing Mockup Builder to Rhino 3D with two experts, and continue with a formal user evaluation with 14 participants comparing Mockup Builder to Sketchup 8 in different modeling scenarios. We finish by identifying areas for improving Mockup Builder based on these evaluations before concluding.

2. Related work

Several strategies have been followed to propose expressive and easy-to-learn user interfaces that support 3D modeling tasks.



To take advantage of both stereoscopic displays and gestures in free space, Schkolne et al. [3] introduced surface drawing using hand motions in the air to describe ribbon-like shapes based on hand posture. Additionally, a set of tangible tracked artifacts was available, each with its own modeling functionality. While this approach allows creating free-form shapes, it appears inadequate for creating rigorous manufactured shapes. FreeDrawer [4] alleviates this issue by providing a tracked stylus allowing the user to sketch networks of curves on top of a Responsive Workbench. These curves can then be used to define the boundary of free-form surfaces that can be interactively deformed. However, this approach does not support more complex CAD editing and primitives. The system proposed by Fleish et al. [5] supports both freeform shape creation and regular CAD primitives by adapting traditional WIMP-based interfaces to virtual immersive environments, using a transparent PIPSheet artifact to map menus. Their system can be used by several users in a collaborative way to support the designing task, as presented by Kaufmann [6] using head-mounted displays. However, the lack of physical support makes drawing in empty space more suitable for free-form modeling than for creating "constructive solid geometry"-like regular objects [7]. Haptic devices can help sketching in the air, although their working space is often restricted [8]. This provides an attractive solution for 3D modeling since users are able to easily learn how to use these systems, and rigor improves rapidly with training, as shown by recent studies [9]. Instead of relying only on gestures in empty space, our approach takes advantage of both the surface and the space above it for what each is best designed for, combining the benefits of both interaction spaces.

Sketching is a powerful communication tool in any real conceptual design task. However, it is still neglected by most existing CAD modeling systems, which rely primarily on single-cursor interaction and the WIMP metaphor. In traditional 2D environments, research on sketch-based modeling interfaces has proposed several approaches to take advantage of designers' drawing skills. Olsen presented an in-depth survey of most existing techniques [10]. These systems rely on gesture recognition (SKETCH), stroke beautification (Pegasus), line drawing reconstruction (SmartPaper), suggestive interfaces (Chateau), push-and-pull sketching (Sesame), and freeform contour-based inflation (Teddy or ShapeShop) to make sketching a usable alternative to traditional CAD systems. We invite the reader to refer to this survey for further details regarding these systems and techniques. Forsberg et al. [11] propose an adaptation of the SKETCH system to a stereoscopic ActiveDesk environment named ErgoDesk. However, they still rely exclusively on 2D gestures to create geometry using a light pen, and the stereoscopic visualization is primarily used for 3D exploration of shapes using a 6DoF tracker. Our approach adopts several of these techniques to go further than existing drawing-in-the-air approaches while continuously mixing 2D sketches with 3D gestures.

With the widespread adoption of multi-touch devices and of less expensive and less intrusive tracking solutions such as the Microsoft Kinect, academic research on tabletops has refocused on "on" and "above" surface interaction techniques. Müller-Tomfelde et al. proposed different methods to use the space above the surface to provide ways of interacting with 2D tabletop content closer to reality [12]. While tangible devices complement the surface physically with a direct mapping to the GUI, such as in the Photohelix system and StereoBlocks [13], gestures above the surface mimic physical interaction with real objects. Tangible interfaces offer natural manipulations, and artifacts can correctly map tool functionality [3]. They can be as effective as, or even better than, WIMP interfaces for 3D manipulation and editing, as demonstrated by Novotny et al. [14]. Wilson et al. proposed several metaphors to interact with different displays while capturing full body posture [15]. Users can also interact physically in space with a projected GUI. In contrast to tangible interfaces, our approach is not limited to physical representations and provides an unconstrained design environment for shape representation. Regarding the user interface, we prefer to use the surface for the GUI, since it is more adequate for discrete selection, and to explore space gestures for modeling actions.

Our approach explores the continuous space as presented by Marquardt et al. [16]; however, we extend their approach by combining it with the bimanual asymmetric model proposed by Guiard [17]. This model proposes guidelines for designing bimanual operations based on observations of users sketching on paper. Guiard identified different rules and actions for the preferred (dominant hand, or DH) and non-preferred (non-dominant hand, or NDH) hand. While the DH performs fine movements and manipulates tools, the NDH sets the spatial frame of reference and issues coarse movements. This approach has been explored by several systems [18–21] combining finger or hand gestures with pen devices. Brandl et al. proposed a sketching system where the user selects options through touch using the NDH on a WIMP-based graphical interface, while the DH sketches using a pen device [18]. Such a configuration allows better exploration of hand gestures, proposing richer interaction concepts to represent 2D editing operations, as demonstrated by Hinckley et al. [19]. Indeed, this makes switching between modalities easier and allows users to perform a wide range of 2D editing tasks without relying on gestures or GUI invocations. To model 3D curves, Lee combined hand gestures with sketching using a collapsible pen to define curve depth on a tabletop [20]. The NDH is tracked, allowing users to seamlessly specify 3D modeling commands or modes, such as the normal direction of an extrusion, while specifying the displacement by interacting with the pen on the virtual scene. Contrary to their approach, we prefer to keep the surface for fast and accurate 2D drawing, while benefiting from the 3D input space for controlling depth directly. Lopes et al. adapted the ShapeShop sketch-based free-form modeler to use both pen and multi-touch simultaneously [21]. They found that the asymmetric bimanual model allows users to perform more manipulations in less time than conventional single-interaction-point interfaces, which increased the percentage of time spent on sketching and modeling tasks. By tracking the user's hands, we adopt the asymmetric bimanual model to easily switch between sketching, model editing, navigation and spatial manipulation of objects. In addition, we do not need to rely on special input devices or extra modalities to assign different roles to each hand.

We rely on a stereoscopic visualization setup for architectural model visualization similar to [22]. While that system allows navigating or annotating the 3D scene mainly as if it were inside the table, using fingers as proxies over the scene, our interaction techniques focus on modeling and direct manipulation, since 3D models are rendered as if they were lying atop the table. To avoid hand occlusions over the visualization, Toucheo [23] proposed a fish-tank-like setup using a multi-touch surface and a stereoscopic display. However, like other setups relying on semi-transparent mirrors to create a holographic illusion, it both reduces the working space and constrains the above-surface space to hand gestures. Our stereoscopic visualization setup thus provides more freedom of movement, allowing a continuous interaction space. In addition, adopting a bimanual asymmetric model makes it possible to develop new interaction techniques which could benefit interaction with holographic display technologies when they become available.

3. Hardware modeling setup

Our semi-immersive environment uses a stereoscopic multi-touch display (140 × 96 cm) combined with a Kinect depth camera


Fig. 1. Our stereoscopic multi-touch modeling setup showing the multi-touch stereoscopic display, the Gametraks used to track the fingers above the surface and a Kinect to track the user's head.

Fig. 2. Finger tracking above the multi-touch surface using two Gametrak devices.


and two Gametraks. Head tracking is achieved in a non-intrusive way thanks to the Kinect skeleton detection algorithm. The skeleton also tracks user hands, allowing us to locate the dominant hand according to the user's handedness. The Gametrak devices allow us to track fingers with good precision over the working space above the table while reducing occlusion problems and providing a higher framerate (125 Hz) compared to techniques based on the Kinect device alone (Fig. 1). The visualization relies on a back-projection-based system located under the table, running at 120 Hz with 720p resolution. It is coupled with NVIDIA 3D Vision active shutter glasses for the stereoscopic visualization. The 3D scene is rendered on top of the surface, and the point of view is updated according to the position and orientation of the user's head to take motion parallax into account. The IR transmitter for the glasses uses a wavelength different from that of the multi-touch table, which is based on the laser light plane technique. It is positioned to cover the working volume around the table where the user interacts. Finger data are then sent using TUIO messages to our custom-built application. The two Gametraks track the 3D position of the index and thumb of each hand when they are not in contact with the multi-touch surface.

These low-cost gaming devices are placed in a reverse position centered above the table at a distance of 120 cm. The 3D position of each finger is computed from the two angles of rotation and the length of each cable, digitized on 16 bits and reported at 125 Hz to the host computer. The retractable strings are attached to the fingers through a ring. Although the strings introduce some visual clutter, they were not found to distract users from their task. The strings create a minor spring effect which reduces user hand tremor without adding fatigue. We added a 6 mm diameter low-profile momentary switch button on each index finger to detect pinch gestures without ambiguity (Fig. 2). This simple solution provides a good trade-off regarding precision, cost and cumbersomeness compared to using a high-end marker-based optical tracking system or a low-sampling-frequency device such as the Kinect.
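The paper does not specify the Gametrak's angle conventions, so the angle-and-length-to-position conversion can only be sketched under assumed conventions; the function name, axis choices and angle semantics below are ours, not the authors':

```python
import math

def gametrak_to_xyz(yaw: float, pitch: float, length: float) -> tuple:
    """Convert one Gametrak string reading (two gimbal angles in radians plus
    the retractable-cable length) into a 3D point in the device's local frame.
    Assumed convention: (0, 0) angles point straight down along -z."""
    x = length * math.sin(yaw) * math.cos(pitch)    # horizontal deflection
    y = length * math.sin(pitch)                    # vertical deflection
    z = -length * math.cos(yaw) * math.cos(pitch)   # along the rest direction
    return (x, y, z)
```

In practice each unit would need a per-device calibration of angle offsets and cable-length scaling before this conversion is meaningful.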

To obtain a continuous interaction space, the coordinates from the different input devices need to be converted into a unique reference space. We chose the Kinect coordinate system as our primary coordinate system since it covers the interaction space as well as the user space. 2D touch positions on the multi-touch surface are converted to 3D space using a transformation matrix defining the surface plane in the Kinect coordinate system. Such a matrix is computed by identifying the four multi-touch surface corners in the RGB image captured by the Kinect and by calculating the plane using the corresponding depth information of each corner. This plane definition, coupled with the real-time head tracking provided by the Kinect skeleton, allows us to define the frustum of the off-axis stereo perspective projection to render 3D content on top of the surface from the user's point of view.
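The paper states that a matrix defines the surface plane but not its layout. As a minimal sketch, assuming the matrix stores the plane origin and two in-plane axes recovered from the four calibrated corners (the column layout and the example values are our assumptions):

```python
import numpy as np

def touch_to_kinect(u: float, v: float, surface_matrix: np.ndarray) -> np.ndarray:
    """Lift a normalized 2D touch coordinate (u, v) in [0, 1]^2 into the Kinect
    reference frame. `surface_matrix` is a hypothetical 3x4 matrix whose columns
    are [origin | u_axis | v_axis | normal]; the normal column is unused here."""
    origin = surface_matrix[:, 0]
    u_axis = surface_matrix[:, 1]
    v_axis = surface_matrix[:, 2]
    return origin + u * u_axis + v * v_axis

# Example: a 1.0 m x 0.7 m table lying in the plane z = 0.8 m of the Kinect frame.
M = np.column_stack([[0.0, 0.0, 0.8],   # origin
                     [1.0, 0.0, 0.0],   # u axis (table width)
                     [0.0, 0.7, 0.0],   # v axis (table depth)
                     [0.0, 0.0, 1.0]])  # plane normal
```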

For the Gametrak devices, a transformation matrix is computed for each tracked finger, converting local Gametrak coordinates into our primary Kinect reference space. The rigid transformation is computed using a RANSAC algorithm [24] on a set of 1000 matching point pairs corresponding to Gametrak local points and touch positions on the multi-touch surface. While the user is interacting on the surface, we are able to fuse by proximity the information gathered from the Kinect skeleton tracking (hand positions), from the Gametrak devices and from the multi-touch surface in a unique reference space. The redundancy of information from the different input devices allows us to identify which finger of which hand is interacting on the surface or in the air, or to choose the input source with the best tracking resolution. For further details, we invite the reader to consult the calibration and input data fusion section in [1].
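The model fit inside each RANSAC iteration could be the standard SVD-based least-squares rigid alignment (Kabsch method); this is a generic sketch of that inner step, not the paper's exact implementation:

```python
import numpy as np

def rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares rotation R and translation t with dst_i ≈ R @ src_i + t,
    for N x 3 arrays of matched points, via the SVD-based Kabsch method.
    A RANSAC wrapper would call this on random minimal subsets and keep the
    transform with the largest inlier set."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```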

4. Our modeling approach

We propose a direct modeling approach to create, edit and manipulate 3D models using a small set of operations. After drawing sketches, users can create 3D models by pushing and pulling existing content of the scene, as in Sesame [25] or Google Sketchup. Our models use a boundary representation, which decomposes the topology of objects into faces, edges and vertices. We adapt our modeling approach to take advantage of the bimanual interaction model while the user is interacting on and above the surface.

4.1. Sketching on the surface

The multi-touch surface is primarily used as a sketching canvas where the user interacts using fingers, as depicted in Fig. 3. Users can sketch on the surface, creating planar shapes from


Fig. 3. User sketching a 2D shape on the multi-touch surface using the DH fingers.

Fig. 4. User performing an extrusion along the normal of a shape by moving the DH in space using a pinch gesture with the Gametrak device.

Fig. 5. Curvilinear extrusion of a shape defined by the motion of the user's DH in space while pressing the pinch button on the Gametrak device.


closed contours using the DH. Contours may use lines, curves or both, and can be sketched using multiple strokes. Open strokes whose extremities are close to each other are merged into a single stroke. While sketching, input data is fitted incrementally to the best fit of lines and cubic Bézier curves. Our incremental fitting algorithm tries to guarantee continuity between curves and segments by adding tangency constraints during the fitting process. When a closed contour is created on the surface, simple planar polygons can be created by the user. We perform a simple stroke beautification based on constraints detected from sketches. These constraints rely on line segments to detect parallel and perpendicular line pairs and segment pairs of equal length. We use a threshold on angles between segments for parallelism and perpendicularity, and a threshold on the length ratio between segments of similar length. An energy function is specified for each type of constraint, and we perform an error minimization method to beautify user sketches. Thanks to this process, regular shapes can be created using line drawing. For closed conic sections, we use a 2D shape recognizer [26] to detect circles and ellipses, which are approximated by a closed piecewise curve using four cubic Bézier segments. We also use the 2D shape recognizer to detect simple gestures, such as an erasing command drawn as a scribble. When an erasing gesture is recognized, if it overlaps open strokes, they are erased. However, if it overlaps only shapes and no open strokes, the overlapped shapes are erased. This solution allows open strokes to be used as construction lines while modeling.
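For the parallel/perpendicular case, the beautification step can be approximated by snapping stroke directions that fall within an angular threshold; the 8-degree tolerance below is an assumed value (the paper does not give one), and the direct snap is a simplified stand-in for the paper's per-constraint energy minimization:

```python
import math

ANGLE_TOL = math.radians(8)  # assumed threshold; not specified in the paper

def snap_angle(theta: float) -> float:
    """Snap a segment direction (radians) to the nearest multiple of 90 degrees
    when within ANGLE_TOL, mimicking the parallel/perpendicular constraints
    relative to an axis-aligned reference. Otherwise leave it untouched."""
    nearest = round(theta / (math.pi / 2)) * (math.pi / 2)
    return nearest if abs(theta - nearest) < ANGLE_TOL else theta
```

A full implementation would instead jointly minimize the energy of all detected constraints (including equal-length pairs) so that snapping one segment does not break another.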

4.2. Creating 3D shapes by pushing and pulling operations

Gestures with the DH above the surface are interpreted as 3D object creation or editing. Creating a 3D shape consists in extruding a planar shape previously sketched on the surface, following the push-and-pull modeling metaphor. The user first approaches the DH index finger near a planar shape on the surface to highlight it. He or she then performs a pinch gesture, pressing the button located on the index finger, to extrude the shape along the normal of the surface (Fig. 4). The height of the extruded object is then continuously updated and co-located with the position of the finger until the button is released. Planar shapes can also be extruded along a trajectory defined in the air, after the user has selected this operation in a menu displayed on the NDH (Fig. 5). While the user is defining the trajectory, the path is continuously re-evaluated and fitted into line segments and curve pieces, similarly to what is done for strokes on the surface. Segments and curve pieces are then used to create a smooth free-form extrusion of the profile, offsetting the gesture from the centroid of the face to its vertices, as presented by Coquillart [27]. This method makes it possible to extrude both polyline and curvilinear profiles along linear or curvilinear paths.
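The default straight extrusion can be sketched as offsetting each profile vertex along the face normal; this is a minimal illustration only, and the boundary-representation bookkeeping (new faces, edges and vertices in the topology) is omitted:

```python
import numpy as np

def extrude(profile: np.ndarray, normal: np.ndarray, height: float):
    """Straight push/pull extrusion of a planar profile (N x 3 vertex loop):
    the cap is every vertex offset by height * unit normal, and each side
    face joins consecutive vertices of the bottom and top rings."""
    n = normal / np.linalg.norm(normal)
    top = profile + height * n
    # Index pairs (i, i+1) describing the quad side faces around the loop.
    sides = [(i, (i + 1) % len(profile)) for i in range(len(profile))]
    return top, sides
```

Curvilinear extrusion would replace the single offset by sweeping the profile along the fitted segments and curve pieces of the in-air trajectory.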

Additionally, topological features of the shape (vertices, edges and faces) can be selected and displaced along a normal direction, updating the geometry of the object without changing its topology as the extrusion operation does. This offers editing by pushing and pulling any topological feature of our boundary representation. Selection of features is done implicitly by touching a geometrical feature on the surface, and explicitly using a pinch gesture in space. Since edges can be shared by more than one face, and vertices by more than one edge, a continuous selection mechanism is provided for disambiguation, analyzing the previously highlighted entity. For example, it is possible to highlight a particular edge shared by two faces by selecting it from the face the user is interested in. If no geometrical feature is selected while performing the pinch gesture with the DH, the user can sketch 3D lines or curves in space.


Fig. 6. User moving a shape after selecting it using his index finger on the multi-touch surface with his NDH.

Fig. 7. User scaling an object in space using both hands tracked with the Gametrak devices. Both rotation and scale can be controlled when both pinch buttons are pressed.

Fig. 8. Example of a contextual menu appearing below the NDH position while the user selects a face in space with the DH. The options are presented according to the features available for the selected item.


4.3. Manipulating 3D shapes

When a gesture is started on the surface with the NDH, it is interpreted as an object transformation if performed on an object, or as a world manipulation otherwise. Single-touch gestures are interpreted as object or world translation. Gestures with more than one finger are interpreted as translation, rotation and scale operations on objects or the world, following the well-known 2D RST paradigm over 3D scenes, as proposed by Knoedel and Hachet [28]. 3D objects are constrained to movements along the plane parallel to the multi-touch surface. A gesture started with the NDH can be complemented by the DH, allowing translation, rotation and scale with both hands (Fig. 6).
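The two-finger RST decomposition can be sketched as follows; rotation and scale come from the change of the inter-finger vector, and taking the translation from the first finger is one common convention rather than necessarily the paper's exact choice:

```python
import math

def rst_from_fingers(p0, p1, q0, q1):
    """Derive the 2D rotate-scale-translate update from two finger contacts
    moving from (p0, p1) to (q0, q1). Scale is the ratio of inter-finger
    distances, rotation the change of the inter-finger angle, translation
    the displacement of the first finger."""
    vx, vy = p1[0] - p0[0], p1[1] - p0[1]
    wx, wy = q1[0] - q0[0], q1[1] - q0[1]
    scale = math.hypot(wx, wy) / math.hypot(vx, vy)
    angle = math.atan2(wy, wx) - math.atan2(vy, vx)
    translation = (q0[0] - p0[0], q0[1] - p0[1])
    return translation, angle, scale
```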

The bimanual interaction used on the surface is also valid above the surface, allowing objects to be rotated, translated and scaled using two fingers. As on the surface, the NDH begins the interaction using a pinch gesture. The NDH defines translations only; when complemented by the DH, it adds rotation and scale operations using the method proposed by Wang et al. [29], which is similar to the handle bar metaphor proposed by Song et al. [30], as depicted in Fig. 7. These direct 3D object manipulations appear much more efficient than indirect interactions on the multi-touch surface alone.

4.4. Menu based interaction

We rely on a menu-based graphical user interface to distinguish between modeling modes, such as linear and curvilinear extrusion, or other operations such as copy. Modes are presented as items in a contextual menu shown under the NDH while a shape or part of it is selected with the DH. The modes presented in the contextual menu correspond to those available for the operation performed by the DH (Fig. 8). If the operation carried out by the DH supports only a single mode, no contextual menu is shown under the NDH. To avoid visual clutter, the contextual menu transparency is adjusted based on the distance between the NDH and the surface. Above 15 cm, the menu is fully transparent; it becomes progressively opaque as the NDH approaches the surface. To improve accessibility, the contextual menu follows the NDH, but its location is progressively fixed as the NDH comes closer to the surface, to avoid spatial instabilities and reduce errors while selecting an item.
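The transparency adjustment can be sketched as a clamped function of hand height; only the 15 cm threshold comes from the text, and the linear ramp is an assumption (any monotone ease curve would satisfy the description):

```python
def menu_opacity(height_cm: float, fade_start: float = 15.0) -> float:
    """Opacity of the contextual menu as a function of the NDH height above
    the table: fully transparent (0.0) at or above `fade_start` centimeters,
    progressively opaque down to 1.0 at the surface."""
    if height_cm >= fade_start:
        return 0.0
    return 1.0 - height_cm / fade_start
```

The progressive fixing of the menu position could use the same ramp as an interpolation weight between the tracked NDH position and a frozen anchor.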

The discrete mode selection includes the extrusion type (normal to a face or along a trajectory), updating the object topology or simply moving it, the cloning operation, and the snapping operation described in the following section. When a shape is created, we associate with each face the straight extrusion along the normal as the default mode, since it is the most likely operation in the push-and-pull modeling approach. When the straight extrusion starts, we automatically change the mode to the face move operation, updating the shape without adding new topological changes. Successive extrusions can be performed to create stacked shape parts by interacting with the menu. Since the menu follows the position of the NDH, it can be used to define the location where clones appear when the cloning operation is selected by the user. Cloning is available whenever a shape is selected, and it duplicates the entire shape as illustrated in Fig. 9.

4.5. Navigating between surface and space

Creating 3D planar shapes in space remains an operation difficult to perform due to the lack of physical constraints to guide


Fig. 9. Object cloning example: while the user selects an object using the DH, the cloning option is selected with the NDH in the contextual menu, defining at the same time the position of the new instance of the object.

Fig. 10. To sketch on a face perpendicular to the surface, the user first selects it directly in space using his DH and then selects the snapping option in the contextual menu displayed underneath the NDH.

Fig. 11. Example of a face snapped to the surface, allowing the user to add details on it by sketching.

Fig. 12. Example of a contextual menu to scale the profile of a shape with the NDH while the user is extruding the shape in space with the DH.

B.R. De Araujo et al. / Computers & Graphics 37 (2013) 165–178

the hand. We propose a snapping operator to easily switch between the surface and space, allowing the use of sketches on the surface or gestures in 3D space as convenient. Snapping is available through the contextual menu accessible on the NDH, to snap any selected face onto the surface or back (Fig. 10). This is achieved by computing a transformation matrix that aligns the 3D scene with the visible grid defined as a representation of the table surface. A simple linear animation between the two orientations is rendered to help the user understand the new orientation of the model. Furthermore, snapping allows sketching details on existing shapes (Fig. 11) or guaranteeing that new shapes are created on top of an existing shape.
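The alignment matrix above can be sketched as follows. This is a minimal illustration under our own assumptions (the paper only states that a transformation matrix aligning the scene with the table grid is computed): the selected face's normal is rotated onto the table's up vector and a point of the face is brought onto the surface.

```python
import numpy as np

def snap_matrix(face_normal, face_point, table_normal=(0.0, 0.0, 1.0)):
    """4x4 transform aligning a selected face with the table plane
    (illustrative sketch, not the system's implementation)."""
    a = np.asarray(face_normal, float); a /= np.linalg.norm(a)
    b = np.asarray(table_normal, float); b /= np.linalg.norm(b)
    axis = np.cross(a, b)
    s, c = np.linalg.norm(axis), float(np.dot(a, b))
    if s < 1e-9:
        # Normals already parallel (or opposite: 180-degree flip about x).
        R = np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    else:
        k = axis / s
        K = np.array([[0, -k[2], k[1]],
                      [k[2], 0, -k[0]],
                      [-k[1], k[0], 0]])
        R = np.eye(3) + s * K + (1 - c) * (K @ K)   # Rodrigues' formula
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = -R @ np.asarray(face_point, float)   # face point -> surface
    return M
```

For the animation mentioned above, the scene transform can simply be interpolated between the identity and this matrix over a short time span.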

4.6. Constraining 3D operations

Since most of the 3D editing operations are performed using only the DH, we decided to use the free NDH to enrich our 3D operators and constrain both sketching and 3D modeling, in order to create more rigorous and controlled shapes. The simplest constrained operation allows sketching symmetrical shapes on the surface.

First, the user sketches a straight line defining a mirroring plane, which can be selected by touching it with the NDH. While the mirroring plane is selected, sketches made with the DH are automatically mirrored and are considered additional strokes if the selection remains active at the end of the sketch. By creating a closed shape formed by a stroke and its mirrored version, users can create symmetrical shapes. The mirroring plane can also be used to add symmetrical details to an existing stroke or shape.
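On the surface, the mirroring above reduces to reflecting each stroke point across the sketched line. A minimal sketch of that reflection (the function is ours, for illustration only):

```python
def mirror_stroke(stroke, p0, p1):
    """Reflect a 2D stroke across the mirror line through p0 and p1.
    stroke: list of (x, y) points; returns the mirrored point list."""
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = x1 - x0, y1 - y0
    d2 = dx * dx + dy * dy                 # squared length of the line
    out = []
    for x, y in stroke:
        # Project the point onto the line, then reflect through it.
        t = ((x - x0) * dx + (y - y0) * dy) / d2
        px, py = x0 + t * dx, y0 + t * dy
        out.append((2 * px - x, 2 * py - y))
    return out
```

Rendering both the live stroke and its mirror gives the immediate symmetric feedback described above.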

3D operations above the surface can also be constrained. For example, while an object is being extruded with the DH, the NDH can select a face of an object to define a maximum or minimum height constraint. Once the constraint is defined, the user continues to move his DH until the maximum or minimum height is reached; further movements along the same direction have no effect on the height of the object. This also allows the user to specify that the height of an object should not be higher or lower than the height of another object.
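The height constraint above amounts to clamping the DH-driven extrusion value; a minimal sketch (function and parameter names are ours):

```python
def constrained_extrusion_height(raw_height, min_h=None, max_h=None):
    """Clamp the extrusion height driven by the DH to the limits set
    by selecting reference faces with the NDH (illustrative sketch).
    A limit of None means that bound is not constrained."""
    if max_h is not None:
        raw_height = min(raw_height, max_h)
    if min_h is not None:
        raw_height = max(raw_height, min_h)
    return raw_height
```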

While the two previous operations illustrate discrete constraints defined by the NDH, which can be activated before or during an editing operation, we also explore the usage of dynamic constraints, which can be updated continuously during an extrusion operation. This is illustrated by the scale constraint that


Table 1. Mockup Builder input lookup table showing the main functionality for each input modality, represented as fused 2D/3D input events. Each row gives a set of input conditions and the resulting action or input-state change. The B column indicates whether the Gametrak buttons are pressed; any surface touch is considered as pressed. The input usage column is a state which can be assigned to inputs. Empty cells (–) indicate indifferent conditions or unchanged values.

Condition                                                                                                                  | Result
Input event | B | Handedness         | Input usage | Selection status | NDH selection   | Face state | Stroke shape      | Action                       | Input usage
New         | Y | –                  | –           | onWidget         | –               | –          | 2D                | WidgetDown                   | WIDGET
New         | – | –                  | –           | !empty           | –               | –          | –                 | Add Selection                | –
Update      | N | –                  | –           | !empty           | –               | –          | –                 | Add Selection                | –
Update      | – | –                  | CEXTRUDE    | –                | –               | –          | –                 | Update CurvedExtrusion       | –
Update      | Y | DH                 | NONE        | –                | –               | –          | –                 | Update Stroke                | –
Update      | Y | DH                 | !WIDGET     | !empty           | –               | MOVE       | –                 | Update LinearMove            | MOVE
Update      | Y | DH                 | !WIDGET     | !empty           | –               | LEXTRUDE   | –                 | Update LinearExtrusion       | LEXTRUDE
Update      | Y | DH                 | !WIDGET     | !empty           | –               | CEXTRUDE   | –                 | Update CurvedExtrusion       | CEXTRUDE
Update      | Y | 1st NDH            | NONE        | !empty           | –               | –          | –                 | Translate Object             | –
Update      | Y | 1st NDH            | NONE        | empty            | –               | –          | –                 | Translate View               | –
Update      | Y | 2nd NDH, ||NDH||>=1 | NONE        | !empty           | empty or !empty | –          | –                 | RotateScale Object           | –
Update      | Y | 2nd NDH, ||NDH||>=1 | NONE        | empty            | empty           | –          | –                 | RotateScale View             | –
Delete      | – | –                  | CEXTRUDE    | –                | –               | –          | –                 | End CurvedExtrusion          | –
Delete      | – | –                  | LEXTRUDE    | –                | –               | –          | –                 | End LinearExtrusion          | –
Delete      | – | –                  | MOVE        | –                | –               | –          | –                 | End LinearMove               | –
Delete      | – | –                  | WIDGET      | –                | –               | –          | –                 | WidgetUp                     | –
Delete      | Y | DH                 | NONE        | –                | –               | –          | 2D Delete Gesture | Deleting Overlapping Content | –
Delete      | Y | DH                 | NONE        | –                | –               | –          | Closed Stroke     | Planar Shape Created         | –
Delete      | Y | DH                 | NONE        | –                | –               | –          | Open Stroke       | Split Overlapping Faces      | –
Delete      | Y | DH                 | NONE        | –                | –               | –          | Other             | Create Simple Stroke         | –

consists in scaling the profile while extruding a shape (Fig. 12). This allows the creation of a cone or a frustum from a circle or a quadrilateral planar face, respectively. The scaling factor can be controlled dynamically using a 2D overlay menu accessible to the NDH while extruding the shape.
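The dynamic scale constraint above can be sketched as scaling the extruded profile about its centroid at each step of the extrusion; the helper below is illustrative, not the system's code.

```python
def scaled_profile(profile, centroid, scale):
    """Scale a planar profile about its centroid while extruding: the
    dynamic constraint that turns a circle into a cone (scale -> 0)
    or a quadrilateral into a frustum (0 < scale < 1).
    profile: list of (x, y, z) points; centroid: (x, y, z)."""
    cx, cy, cz = centroid
    return [(cx + scale * (x - cx),
             cy + scale * (y - cy),
             cz + scale * (z - cz)) for x, y, z in profile]
```

With the NDH continuously updating `scale` from the overlay menu while the DH raises the profile, the swept cross-sections form the tapered shape.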

While traditional modeling interfaces usually require constraints to be defined before performing 3D operations, in order to define a problem to be solved by the application, our approach proposes an interactive constraint modeling solution. In doing so, we take advantage of the increased expressiveness provided by bimanual interaction techniques. Furthermore, we hypothesize that this definition of constraints on the fly improves the flow of interaction and better fits constraint-based modeling in conceptual design stages.

4.7. Putting it all together

The interaction techniques previously described are supported by fusing input events from the multi-touch surface, Gametrak and Kinect, and then handling strokes. Strokes from the multi-touch surface are defined by the sequence of points from the time the user touches the surface with a finger until that finger leaves the surface. For the Gametrak, a stroke is defined by the sequence of points with the same button state, for example after the pinch button is pressed and until it is released. Kinect inputs are only used to update the hand position information and the head tracking needed for the stereoscopic visualization. For each new fused input event, we update the state of the contextual menu and its position depending on the shape features selected and the distance between the non-dominant hand and the surface.
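The stroke grouping described above (finger-down to finger-up for touch, constant button state for the Gametrak) can be sketched as follows; the event tuple layout is an assumption for illustration.

```python
def group_strokes(events):
    """Group fused input events into strokes, one open stroke per
    device at a time (illustrative sketch of the fusion step).
    events: iterable of (device, pressed, point) tuples in time order.
    Returns a list of (device, [points]) completed strokes."""
    strokes, current = [], {}
    for device, pressed, point in events:
        if pressed:
            # Pressed/touching: extend (or start) this device's stroke.
            current.setdefault(device, []).append(point)
        elif device in current:
            # Release or lift-off ends the stroke.
            strokes.append((device, current.pop(device)))
    return strokes
```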

Table 1 presents how our modeling technique is controlled based on the status of the fused input event (new, updated or deleted event) and state information of the application. For the sake of clarity, Table 1 does not represent the activation of constraints used for symmetric drawings and constrained height extrusion, which are applied by updating dedicated variables in

the application. Regarding menu-based commands related to shape features, we store the information directly with the shapes, together with their geometric representation. This is represented by the face state column in Table 1, where the MOVE, LEXTRUDE and CEXTRUDE values represent a shape to be moved, extruded along a normal, or extruded along a curve, respectively. Thanks to this solution, we are able to deal with the different update frequencies of the input devices while abstracting the 2D and 3D nature of the input.

5. Preliminary evaluation

We implemented a prototype demonstrating our modeling approach in C++ using OpenGL and OpenSG for stereoscopic visualization. Our system was deployed on an Intel i7 920 2.67 GHz processor with 3 GB of RAM and an NVidia Quadro 4000 graphics card running the Microsoft Windows 7 64-bit operating system. We followed a multi-threaded architecture separating both input processing and interaction behavior from the visualization loop. During development, around 20 undergraduate and graduate students in computer science with variable experience of CAD applications, as well as one architectural researcher, tested the system. They informally assessed the different design choices and iteratively improved the design of the interface. Thanks to stereo, we provide co-location between user hands and virtual objects, adapted to direct modeling methods. While an initial version of our system used physics to detect collisions, this proved problematic when modeling, so the feature was removed in subsequent versions. However, further informal tests indicate that it could prove advantageous both for stacking and for supporting 3D manipulations. Fig. 13 presents two models built using our system by one of the authors. While the left model was built in 5 min 20 s, the other model was created in 2 min 45 s. Based on both screenshots, we asked a professional architect with 5 years of experience to build both models using his preferred modeling system. Using the Rhino3D modeler,


Fig. 13. 3D models designed using Mockup Builder (top row) and Rhino 3D (bottom row): a table with a chair and a simple building facade. The bottom models were created by a professional architect based on the upper screenshots.

Table 2. Property comparison between the two systems used during the user evaluation.

System         | Mockup Builder             | Sketchup 8
Setup          | 3D stereoscopic tabletop   | Desktop with 17 in. screen
Visualization  | Stereoscopic perspective   | Orthogonal or perspective
Input type     | Multi-point or bimanual    | Single cursor
Modeling       | Push-and-pull              | Push-and-pull
Shape creation | Sketch based               | Primitive instantiation
2D UI          | On-demand contextual menus | Fixed menus
3D UI          | 3D direct manipulation     | 2D direct manipulation

this expert user took 5 min 41 s and 3 min 46 s respectively for the same models. We should note that while the timings are similar, the modeling approaches were different. Mockup Builder proposes a push-and-pull approach fostering consecutive extrusion operations, while the expert user took advantage of symmetrical properties and a revolution operation to generate the glass model. The following section presents a deeper evaluation of the system to better assess timing comparisons with other systems and the modeling capability of our approach.

6. Formal user evaluation

A formal within-subject evaluation of Mockup Builder was performed with 14 participants, with an average duration of 1 h and 31 min per user. The goal of this evaluation was to compare and assess the benefits of both our modeling approach and the user interface offered by our system. To perform such a study, the experiment was designed as a comparison of modeling tasks between our system and an existing commercial modeling application. Since Mockup Builder relies on a push-and-pull modeling metaphor and mainly targets architectural and design users, we chose Google Sketchup as a representative CAD modeling system with a low learning curve compared to other, more complex systems. Google Sketchup has become a popular tool, especially to create 3D content. It is also used by students in architecture as a teaching tool, or by advanced architects to roughly

create 3D models instead of using more complex systems. While its interface is not fully representative of CAD systems based on four views and support for several geometric representations, it offers the opportunity to question several design choices presented by our interface, i.e. the stereoscopic visualization, the mixed sketching and push-and-pull approach, the usage of gestures to define extrusions, and the bimanual interaction model. Table 2 presents a comparison between both systems and the scenario used during the evaluation.

The evaluation consisted in the comparison of three modeling tasks of variable complexity on both systems. For each task, two screenshots of the expected scenes to model were presented, with information about specific requirements to be fulfilled by participants. No length measures were provided, in order to focus on the modeling approach, and the screenshots were exactly the same for both systems. The order of presentation of the systems was counterbalanced across participants. Fig. 14 presents one screenshot of each task. The first task consisted in the creation of a city using mass models around imaginary streets, represented by a variety of extruded shapes. Participants were requested to experiment with several profiles with regular or irregular shapes, including at least one truncated cone, and one of the objects had to be an exact copy of an existing shape. The second task consisted in the creation of a glass which should be cloned. The duplicated glass had to contain a straw modeled by the user and placed correctly inside the glass. The last task was a scene representing a simple house with a roof and a front door, and an object similar to a tree or cactus-like shape. The composition of the scene had to fulfill the placement illustrated by the screenshot.

For each system, a brief demonstration was performed by the facilitator leading the experiment, followed by a practice session for the participant to become familiar with the modeling interface. Then the three tasks were performed sequentially while the participant was videotaped; logs of all actions performed by the user were retrieved from each application and the resulting 3D model was saved. While the logging was implemented directly in our Mockup Builder application, we created a Ruby-based plug-in using the Sketchup Ruby API to register a set of observers logging changes performed on the 3D model and accesses to any feature of the Sketchup user interface, such as activating an option using the toolbar. The script was also used to automate the testing process, saving the 3D model and additional screenshots of the scene when closing the application after each task. A short user manual (five pages long)


Fig. 15. Percentage of participants with previous experience of each system (3ds Max, Maya, Sketchup, Autocad, Blender, Rhino, Unity3d, other).

Table 3. Task participant details: CS and ARCHI denote participants with a computer science or architectural background, respectively. Modeling experience was established based on questionnaire information and interviews.

Id  | Academic course | Main activities        | Age | Modeling experience | Sketchup usage
P1  | CS              | Bsc student            | 21  | Novice              | No
P2  | ARCHI           | Architect              | 24  | Advanced            | Yes
P3  | CS              | Bsc student            | 23  | Novice              | No
P4  | ARCHI           | Msc student            | 29  | Advanced            | No
P5  | CS              | Phd student researcher | 26  | Intermediate        | Yes
P6  | ARCHI           | Phd student researcher | 32  | Advanced            | Yes
P7  | ARCHI           | Phd student architect  | 41  | Expert              | No
P8  | ARCHI           | Designer               | 27  | Expert              | Yes
P9  | ARCHI           | Professor architect    | 48  | Expert              | No
P10 | ARCHI           | Architect              | 27  | Advanced            | Yes

Fig. 16. Time in seconds for the execution of Task 1 for each participant (P1–P10); the last value presents the average time, with error bars representing the 95% CI for the mean.

Fig. 14. Screenshots of the three tasks performed during the user evaluation.


of each system was provided to the user, explaining the basic features of each system. A quick reference card was also made available for the Sketchup system, with an inventory of all the icons and shortcuts of the application. Finally, at the end of both tests, a questionnaire was provided to the participants in order to assess their satisfaction and the different aspects of each modeling interface.

6.1. User profile

The 14 participants (10 males and four females) were mostly students (eight out of 14) from architecture or the gaming course of computer science, with ages ranging from 21 to 48 (M=26, IQR=6.25). Only one of the participants was left-handed and performed the Mockup Builder test using the left hand as the dominant hand. In total, nine users had an architectural background and five came from computer science. Regarding the architectural background, four were architects in their daily activity, one was a designer, one a professor and three were undergraduate students. Only two participants did not have any experience of 3D modeling, but had experience of game programming or design tools such as Unity3D. Regarding the five participants from computer science, four were undergraduate students and one was a PhD candidate. Fig. 15 presents the percentage of participants with experience of several modeling systems. Half of the participants were experienced with Sketchup. None of them had previously experienced Mockup Builder. About 71.4% of the participants had previously experienced stereoscopic viewing, mainly thanks to 3D movies (57.1%). All participants were experienced with gaming: 92.9% had used a last-generation gaming console such as the Nintendo Wii, Sony Playstation 3 or Microsoft XBOX 360 and their input devices: Kinect (35.7%), Wiimote (57.1%) and Move (42.9%). Regarding multi-touch technology, 85.7% of the participants used it daily on mobile phones or tablets (57.1%), and 35.7% had experience with larger (>=15 in.) multi-touch surfaces.

6.2. Task analysis

Regarding the task execution, we were able to retrieve the complete logging information from 10 out of the 14 participants. We did not consider in this analysis the data of four participants, since we could not use their timing and command sequence

information due to a logging problem in the Mockup Builder application at the beginning of our test. However, they are considered in the other parts of the analysis, since they correctly executed the test on both systems and answered the questionnaire. Table 3 shows the user profile of each participant whose timing information is reported in Figs. 16–18. The timing information was retrieved automatically from the log files while the user performed each task on each modeling system. The beginning is defined by the user launching the system after reading the task description, and the end by the closing of the application when the user was satisfied with the modeled object. It provides a broad overview of the performance of each system. As mentioned before, the tasks were preceded by a practice period with no time limit.


Fig. 17. Time in seconds for the execution of Task 2 for each participant (P1–P10); the last value presents the average time, with error bars representing the 95% CI for the mean.

Fig. 18. Time in seconds for the execution of Task 3 for each participant (P1–P10); the last value presents the average time, with error bars representing the 95% CI for the mean.


On average, participants spent 8.21 min on Sketchup and 15.19 min on Mockup Builder during this free test session. A repeated measures ANOVA revealed a significant interaction (F(2,18) = 6.1, p = 0.01) between the system used (Sketchup, Mockup Builder) and the task performed on modeling time. Post-hoc pair-wise comparisons showed a significant difference (p = 0.01) between the two systems for the third task (Mockup Builder 635 s, Sketchup 284 s). No other significant effect or interaction was found. However, considering the differences in expertise among the participants shown in Table 3, it is also interesting to look at the raw results. As presented in Figs. 16 and 17, most of the participants were able to perform the first and second tasks more rapidly using Mockup Builder than using Sketchup. The lower time observed for the first task (average time of 9.49 min for Sketchup versus 7.27 min for Mockup Builder) can be explained by the ease of creating a great variety of 2D shapes using our sketch-based approach. In the second task (average time of 8.07 min for Sketchup versus 6.48 min for Mockup Builder), the difference can mostly be explained by difficulties in creating freeform extrusions to represent the straw object using Sketchup. In addition, we observed that it was harder to place the straw inside the glass using the Sketchup single view. The bimanual model and 3D direct manipulation of Mockup Builder were shown to greatly ease the completion of these two tasks. Regarding the third task, most of the participants performed better using Sketchup than Mockup Builder. The main

problem was related to the roof creation and the strategy followed by the participants to create the cactus-like shape. We noticed an initial difficulty on the part of participants in understanding the snapping operation as a solution to sketch on a face located in space. During the experiment, the participants were first invited to consult the small user manual, which proved to be sufficient regarding this operation. In addition, most of the participants started by creating the roof using the scaling operation of the extrusion and had to start a new house model, since we do not provide an undo mechanism. This may explain the timing difference; we believe that with an undo option, most users would have been able to complete the task in a time similar to Sketchup. During the task execution, we noticed difficulties in selecting menu options while maintaining the dominant hand on the correct feature. This aspect should be improved to reduce the number of unnecessary or incorrect accesses to the contextual menu. Fig. 20 presents examples of the 3D models generated by five participants for each task using our Mockup Builder system and the Sketchup application.

6.3. Questionnaire analysis

The questionnaire is based on a Likert scale from 1 to 5 for each question. The median values and the corresponding interquartile ranges are presented in Fig. 19 for each system. For all questions, the highest score denotes the best system, since we inverted the result presentation for negative statements such as ``I had difficulties'', ``It was difficult'' or ``My hands or the cursor disturbed me'' (i.e. Questions 4, 5, 30, 31, 32, 33 and 34). The questions were organized into groups relating to global aspects of the interface, 3D perception and the ease of operations. Regarding the ease of operations, the main aspects covered were the creation of shapes, extrusion, manipulation, selection and the graphical user interface. The same questions were asked for both systems.

While both systems scored high values on the Likert scale (>=3), most of the answers do not show a statistically significant difference between the two systems. Participants found it easier to create 2D shapes in Sketchup than in Mockup Builder (Question 6), with respective median values of 5.0 and 4.0 and a significant group effect (W=3.5, Z=-2.39, p<0.05, r=0.45), as shown by a Wilcoxon signed-rank test. However, curve selection scored better in Mockup Builder, since it offers a more flexible representation based on sketching and better 3D perception. We also note that selection of features was easier in Sketchup using the mouse than in Mockup Builder using the 3D space, as shown by Question 23 (median values of 5.0 and 3.0 for Sketchup and Mockup Builder; W=0, Z=-3.173, p<0.005, r=0.59) and Question 24 (median values of 5.0 and 3.0 for Sketchup and Mockup Builder; W=0, Z=-3.34, p<0.005, r=0.63). This drawback is mainly due to the hand occlusion problem in Mockup Builder, as revealed by Questions 32 (5.0 vs 3.5 with W=3, p<0.05, r=0.49), 33 (5.0 vs 3.0 with W=2, p<0.005, r=0.53) and 34 (5.0 vs 3.0 with W=4, p<0.005, r=0.54), and to difficulties in correctly identifying which face was highlighted, as shown by Question 30 (4.0 vs 3.5 with W=3.5, p<0.05, r=0.44). However, both systems received similar scores regarding the usage of menus. On the other hand, the erasing solution of Mockup Builder, which is subject to recognition, does not seem to be as efficient as the undo operation, as shown by Question 28 (4.5 vs 1.5 with W=0, p<0.05). However, we can notice a preference regarding manipulation and view control in Mockup Builder. It was easier for participants to place objects in space using our system and to perceive position and size relations between objects.
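The per-question comparisons above rely on the Wilcoxon signed-rank test on paired Likert ratings. As a minimal illustration of the W statistic reported above (the ratings below are made-up placeholders, not the study's data):

```python
def signed_rank_statistic(a, b):
    """Wilcoxon signed-rank W statistic for paired samples: zero
    differences are dropped, tied absolute differences share average
    ranks, and W is the smaller of the positive- and negative-rank
    sums (pure-Python sketch for illustration)."""
    d = [x - y for x, y in zip(a, b) if x != y]
    ranked = sorted((abs(v), i) for i, v in enumerate(d))
    ranks = [0.0] * len(d)
    i = 0
    while i < len(ranked):
        j = i
        while j < len(ranked) and ranked[j][0] == ranked[i][0]:
            j += 1                          # extent of the tie group
        avg = (i + 1 + j) / 2.0             # average rank for the group
        for k in range(i, j):
            ranks[ranked[k][1]] = avg
        i = j
    w_pos = sum(r for r, v in zip(ranks, d) if v > 0)
    w_neg = sum(r for r, v in zip(ranks, d) if v < 0)
    return min(w_pos, w_neg)
```

When all non-zero differences favour one system, W is 0, the situation reported above for Questions 23, 24 and 28.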


Fig. 19. Details of the questionnaire results showing median values and quartiles for both Mockup Builder and Sketchup. Each question was answered on a Likert scale where 1 represented ``strongly disagree'' or similar and 5 represented ``strongly agree'' or similar, depending on the question. For each question, we first present the value obtained by Mockup Builder followed by the value obtained by Sketchup. A star (*) in front of a question number indicates a significant effect found.


6.4. Additional user comments

Through the questionnaires we asked participants what they liked or disliked most in each system. We collected informal comments during the experiment, and participants were free to make any additional suggestions to improve each system. Regarding Mockup Builder, six out of 14 participants (6/14) made positive global comments such as ``the system is fun to work with'', ``it is easy to draw shapes'', ``I like the concept'' or ``the system is interactive and easy to map ideas to commands'', and the most attractive aspects mentioned by participants were the 3D perception (6/14), direct manipulation (5/14) and the drawing approach (4/14). On the other hand, the most relevant negative aspects were related to hand coordination problems (3/14), lack of precision (3/14), difficulty of selection (3/14), fatigue (3/14) and cyber-sickness (3/14). Additional suggestions to improve the system included the missing undo/redo functionality (6/14), the need for a more effective erasing method (4/14), an easier way to perform the snapping operation (3/14) and missing modeling features such as boolean operators or basic primitive instantiation (3/14). Finally, three out of 14 participants proposed to include a mechanism to define objects' dimensions to overcome the inherent lack of precision of the drawing approach. Two participants suggested improvements to the sketching technique, such as including the ability to draw inside a shape, or revising the split operation, which was found too restrictive for adding details on a face.

Regarding Sketchup, participants preferred to highlight global positive aspects (7/14) instead of specific characteristics of the system. For example, they commented that ``it is fast to test simple ideas'', ``it is easy to create shapes'' or that it is ``more practical''. Only three out of 14 participants pointed out specific aspects such as the extrusion system or the ease of selecting shape features. The main drawbacks highlighted by the participants were related to difficulties in controlling the view (3/14) or manipulating shapes, especially regarding rotations (3/14). Three participants mentioned the lack of flexibility regarding curve-based modeling

operations, and two participants had negative comments about menus due to the constant need to access the toolbar or the complexity of the menu hierarchy. Most of the complaints were related to the 3D perspective visualization and the traditional 2D cursor-based interaction metaphor. Five out of 14 participants suggested improving the viewing system by increasing the feedback through rendering effects or animations, the usage of predefined views, or even a spectator view to navigate inside the 3D model to improve the perception of depth cues. Three out of the 14 participants also proposed new features such as boolean operations or free-form operations (3/14). Only one architect participant commented that he would not use a commercial product such as Sketchup for his daily activities, since it was found too restrictive for creating complex models.

6.5. Areas for improvement

On both systems, participants were able to fulfill the requested tasks. While it is difficult to formally measure and compare the quality of the models produced, their informal comparison suggests a similar quality. Fig. 20 presents the final models for each task for five of the participants on both systems. Each column presents the results of one participant on each system alternately. Mockup Builder models were exported in VRML format and the screenshots were rendered using a 3D model visualizer; these are followed in each column by the corresponding Sketchup screenshot. Of the five participants presented in this figure, we should note that the third column (participant 5) is from a user with no architectural background, and that the second and fourth columns represent participants (participants 4 and 9) with no prior experience with Sketchup. Compared to the initial screenshots, it is visible that the resulting models are very similar on each system, showing the reliability of our prototype compared to a commercial product such as Sketchup, which is very encouraging.


Fig. 20. Resulting 3D models for five participants (P2, P4, P5, P9, P10), where each column represents a participant. The first, third and fifth rows present models obtained using Mockup Builder for Tasks 1, 2 and 3 respectively. The second, fourth and sixth rows present models obtained using Sketchup for Tasks 1, 2 and 3 respectively.


For the different tasks with the two interfaces, the participants managed to replicate the different objects as faithfully as possible, except for the straw object in the second task with Sketchup. Due to the limited ability to model curved shapes with Sketchup, participants could not create the straw as they actually wanted. We also observed difficulties in correctly positioning the straw inside the glass with Sketchup, while this task was performed seamlessly with Mockup Builder. Finally, for the third task, we observed that the objects modeled using Mockup Builder were not correctly aligned with the scene floor plane. This is due to the fact that Mockup Builder does not represent it explicitly. Future releases should represent the scene floor plane combined with a separate

representation of the drawing grid. This would help maintain a fixed reference when the user snaps on a face.

3D perception: For Mockup Builder, both the stereoscopic visualization and the interaction above the surface helped in manipulating 3D objects and perceiving 3D relationships between shapes. This was highlighted by 43% of the participants as the most attractive feature of the system. However, 21% of the participants raised possible problems related to fatigue, nausea and motion sickness if they had to use it for a prolonged period of time or on a daily basis. Such problems could be minimized using faster and more precise head tracking solutions than the current Kinect device. Currently, both the low frame rate



(30 frames per second) of the Kinect and its inherent latency (around 80 ms) create a jellied effect on the 3D object visualization when performing fast head movements.
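One common software-side mitigation for such tracker latency is to extrapolate head motion ahead of the sensor. The following is a minimal sketch, not part of the original system, using double-exponential (Holt) smoothing to predict the head position roughly one latency interval ahead; the class name and parameter defaults are illustrative, with the 30 Hz rate and 80 ms latency taken from the values reported above.

```python
# Double-exponential smoothing predictor for noisy, laggy head tracking.
# Illustrative sketch: predicts ~80 ms ahead to compensate sensor latency,
# assuming raw (x, y, z) head positions arriving at ~30 Hz.

class HeadPredictor:
    def __init__(self, alpha=0.5, dt=1.0 / 30.0, latency=0.080):
        self.alpha = alpha          # smoothing factor in (0, 1]
        self.dt = dt                # sensor frame period (s)
        self.latency = latency      # latency to compensate (s)
        self.level = None           # smoothed position estimate
        self.trend = None           # smoothed per-frame velocity estimate

    def update(self, pos):
        """Feed one raw (x, y, z) sample; return the latency-compensated position."""
        if self.level is None:
            self.level = list(pos)
            self.trend = [0.0, 0.0, 0.0]
        else:
            for i, p in enumerate(pos):
                prev = self.level[i]
                # Holt's linear method: track both position and velocity.
                self.level[i] = self.alpha * p + (1 - self.alpha) * (prev + self.trend[i])
                self.trend[i] = self.alpha * (self.level[i] - prev) + (1 - self.alpha) * self.trend[i]
        k = self.latency / self.dt  # number of frames to extrapolate ahead
        return tuple(self.level[i] + k * self.trend[i] for i in range(3))
```

For steady linear head motion the predictor converges to the true trajectory and renders the scene from where the head will be, rather than where it was one latency interval ago; filtering parameters trade responsiveness against overshoot on abrupt movements.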

Accurate calibration between the Gametrak and the visualization is also important to keep fingers and virtual objects co-located. The current calibration achieves an offset of around 1 mm close to the surface. However, the offset is around 20 mm above the surface due to the distance between the real fingertip position and the position where the Gametrak's string is attached to the finger (Fig. 2). Most of the participants were able to accommodate this offset thanks to the virtual cursor representation. However, we had to re-calibrate the system with a different ring position on the finger for participant P7 due to his difficulty in selecting shape features in space. New systems for tracking fingertips, such as the Leap Motion, might lessen such problems. In addition, visual effects such as shadows could be added to improve depth perception, not only between existing virtual objects but also between the user's hand and virtual content; this could be easily implemented using the 3D data captured by the Kinect. These improvements would minimize problems related to hand occlusion and the lack of haptic feedback in space.
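Since the measured error grows with distance from the surface, one simple correction would interpolate the calibration offset as a function of height. The sketch below is illustrative only: the 1 mm and 20 mm values come from the measurements reported above, but the linear model, function name and reference height are assumptions, not part of the described system.

```python
def corrected_z(raw_z, offset_surface=0.001, offset_high=0.020, ref_height=0.30):
    """Subtract a height-dependent calibration offset (all values in metres).

    Illustrative assumption: tracking error grows roughly linearly from
    ~1 mm at the surface (z = 0) to ~20 mm at a reference height above it,
    as caused by the Gametrak string attachment point on the finger.
    """
    # Fraction of the way from the surface to the reference height, clamped to [0, 1].
    t = min(max(raw_z / ref_height, 0.0), 1.0)
    offset = offset_surface + t * (offset_high - offset_surface)
    return raw_z - offset
```

A per-user calibration pass touching targets at several known heights could replace the fixed endpoints with fitted values.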

3D user interface: On both systems, participants pointed out problems with menus. Some participants complained about Sketchup due to the need to use the toolbar each time they wanted to invoke a new command or switch between manipulation and modeling modes. Some participants also complained about the contextual menu in Mockup Builder. We observed during the experiment that participants did not always take advantage of all the modeling possibilities offered through the contextual menu. In fact, some shape features have available options in the contextual menu while other features have none, which can confuse users. To cope with this problem, we could improve the visual feedback to encourage users to invoke the contextual menu when needed.

We need to deepen our analysis of bimanual asymmetric interaction, considering that some participants were sometimes confused during the experiment. Finally, a more effective undo/redo mechanism should be added to recover from errors, and the erasing solution should be improved to allow erasing local features instead of all the lines or shapes, as is currently the case.

Sketch-based modeling: Mockup Builder provides good support for sketching and freeform shapes while following a consistent approach for creating both curves and lines. On the other hand, Sketchup relies primarily on primitive-based creation and provides only a few operators for freeform shapes, as highlighted by the participants. However, the sketch recognition system in Mockup Builder should be improved to ease the creation of lines and arcs and the creation of simple primitives.

On both systems, participants characterized the modeling ability as too simple or not precise enough. By enriching sketching with construction lines, we would help overcome the lack of rigor inherent in sketching-based approaches. Measurement feedback such as that proposed by Sketchup is also a key step to solve such issues and should be considered in the future. We could improve our boundary representation to support sketching inside a face, since our splitting operator alone is too limited to add details on a face. Such an improvement would enable a better combination of sketches and existing shapes. By providing the ability to reuse sketches in our modeling operations, such as extruding along an existing curve, we could further enrich our sketching-based approach with boolean operators. New operations such as revolution, and support for non-planar surface creation, would reinforce the usage of both 3D gestures and sketching for 3D modeling as a broader communication tool compared to existing traditional approaches.

7. Conclusions and future work

We have described an approach to model 3D scenes using semi-immersive virtual environments through a synergistic combination of natural modalities afforded by novel input devices. While early experiments and informal assessments of our system show promise and seemingly validate some of these assumptions, we performed a formal user evaluation with both novice and expert users to highlight and explore both the strengths and the weaknesses of our modeling interface. This study allowed us to obtain user feedback about Mockup Builder. It revealed that the global usability of Mockup Builder was good. It also highlighted areas of improvement for a number of functionalities where participants encountered difficulties. The overall adhesion of the participants to Mockup Builder is comparable to Sketchup. This is a very positive result, as Sketchup is a very popular, well-established tool, whereas Mockup Builder is still at the prototype stage.

Acknowledgments

We would like to thank the anonymous reviewers for providing us with constructive comments and suggestions. This work was partially funded by Fundação para a Ciência e Tecnologia through the "Digital Alberti" (PTDC/AUR/AQI/108274/2008), MIVIS (PTDC/EIA-EIA/104031/2008) and CEDAR (PTDC/EIA-EIA/116070/2009) projects, and by INESC-ID multi-annual funding through the PIDDAC Program fund (PEst-OE/EEI/LA0021/2011). It was also partially funded by the ANR InSTInCT project (ANR-09-CORD-013) and the Interreg IV-A 2 Seas SHIVA project. Bruno Araujo was supported by doctoral Grant SFRH/BD/31020/2006.

References

[1] De Araujo B, Casiez G, Jorge JA. Mockup Builder: direct 3D modeling on and above the surface in a continuous interaction space. In: Proceedings of the 2012 graphics interface conference, GI '12, Canadian Information Processing Society, Toronto, Canada; 2012. p. 173–80.

[2] De Araujo B, Casiez G, Jorge J, Hachet M. Modeling on and above a stereoscopic multitouch display. In: 3DCHI—the 3rd dimension of CHI, Austin, USA.

[3] Schkolne S, Pruett M, Schroder P. Surface drawing: creating organic 3D shapes with the hand and tangible tools. In: Proceedings of the SIGCHI conference on human factors in computing systems, CHI '01, ACM, NY, USA; 2001. p. 261–8.

[4] Wesche G, Seidel H-P. Freedrawer: a free-form sketching system on the responsive workbench. In: Proceedings of the symposium on virtual reality software and technology, ACM, NY, USA; 2001. p. 167–74.

[5] Fleisch T, Brunetti G, Santos P, Stork A. Stroke-input methods for immersive styling environments. In: International conference on shape modeling and applications, IEEE Computer Society, Los Alamitos, CA, USA; 2004. p. 275–83.

[6] Kaufmann H, Schmalstieg D. Designing immersive virtual reality for geometry education. In: Proceedings of the IEEE conference on virtual reality, IEEE Computer Society, Washington, DC, USA; 2006. p. 51–8.

[7] Perkunder H, Israel JH, Alexa M. Shape modeling with sketched feature lines in immersive 3D environments. In: Proceedings of the 7th sketch-based interfaces and modeling symposium, SBIM '10, Eurographics Association, Aire-la-Ville, Switzerland; 2010. p. 127–34.

[8] Keefe DF, Zeleznik RC, Laidlaw DH. Drawing on air: input techniques for controlled 3D line illustration. IEEE Trans Vis Comput Graph 2007;13:1067–81.

[9] Wiese E, Israel JH, Meyer A, Bongartz S. Investigating the learnability of immersive free-hand sketching. In: Proceedings of the 7th sketch-based interfaces and modeling symposium, SBIM '10, Eurographics Association, Aire-la-Ville, Switzerland; 2010. p. 135–42.

[10] Olsen L, Samavati FF, Sousa MC, Jorge JA. Sketch-based modeling: a survey. Comput Graph 2009;33:85–103.

[11] Forsberg AS, LaViola JJ, Zeleznik RC. Ergodesk: a framework for two- and three-dimensional interaction at the ActiveDesk. In: Proceedings of the immersive projection technology workshop. p. 11–2.

[12] Muller-Tomfelde C, Hilliges O, Butz A, Izadi S, Wilson A. Interaction on the tabletop: bringing the physical to the digital. In: Muller-Tomfelde C, editor. Tabletops—horizontal interactive displays, human–computer interaction series, Springer London; 2010. p. 189–221.



[13] Jota R, Benko H. Constructing virtual 3D models with physical building blocks. In: CHI '11 extended abstracts on human factors in computing systems, CHI EA '11, ACM, NY, USA; 2011. p. 2173–8.

[14] Novotny T, Lindt I, Broll W. A multi modal table-top 3D modeling tool in augmented environments. In: Proceedings of the 12th eurographics symposium on virtual environments, EG; 2006. p. 45–52.

[15] Wilson A, Benko H. Combining multiple depth cameras and projectors for interactions on, above and between surfaces. In: Proceedings of the 23rd annual ACM symposium on user interface software and technology, UIST '10, ACM, NY, USA; 2010. p. 273–82.

[16] Marquardt N, Jota R, Greenberg S, Jorge JA. The continuous interaction space: interaction techniques unifying touch and gesture on and above a digital surface. In: Proceedings of the 13th IFIP TC 13 international conference on human–computer interaction, vol. part III, INTERACT '11, Springer-Verlag, Berlin, Heidelberg; 2011. p. 461–76.

[17] Guiard Y. Asymmetric division of labor in human skilled bimanual action: the kinematic chain as a model. J Motor Behav 1987;19:486–517.

[18] Brandl P, Forlines C, Wigdor D, Haller M, Shen C. Combining and measuring the benefits of bimanual pen and direct-touch interaction on horizontal interfaces. In: Proceedings of the working conference on advanced visual interfaces, AVI '08, ACM, NY, USA; 2008. p. 154–61.

[19] Hinckley K, Yatani K, Pahud M, Coddington N, Rodenhouse J, Wilson A, Benko H, Buxton B. Pen + touch = new tools. In: Proceedings of the 23rd annual ACM symposium on user interface software and technology, UIST '10, ACM, NY, USA; 2010. p. 27–36.

[20] Lee J, Ishii H. Beyond: collapsible tools and gestures for computational design. In: Proceedings of the 28th international conference extended abstracts on human factors in computing systems, CHI EA '10, ACM, NY, USA; 2010. p. 3931–6.

[21] Lopes P, Mendes D, Araujo B, Jorge JA. Combining bimanual manipulation and pen-based input for 3D modelling. In: Proceedings of the 8th eurographics symposium on sketch-based interfaces and modeling, SBIM '11, ACM, NY, USA; 2011. p. 15–22.

[22] De la Rivière J-B, Dittlo N, Orvain E, Kervegant C, Courtois M, Da Luz T. iliGHT 3D touch: a multiview multitouch surface for 3D content visualization and viewpoint sharing. In: ACM international conference on interactive tabletops and surfaces, ITS '10, ACM, NY, USA; 2010. p. 312. http://dx.doi.org/10.1145/1936652.1936740.

[23] Hachet M, Bossavit B, Cohe A, De la Rivière J-B. Toucheo: multitouch and stereo combined in a seamless workspace. In: Proceedings of the 24th annual ACM symposium on user interface software and technology, UIST '11, ACM, NY, USA; 2011. p. 587–92.

[24] Fischler MA, Bolles RC. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun ACM 1981;24:381–95.

[25] Oh J-Y, Stuerzlinger W, Danahy J. Sesame: towards better 3D conceptual design systems. In: Proceedings of the 6th conference on designing interactive systems, DIS '06, ACM, NY, USA; 2006. p. 80–9.

[26] Fonseca M, Jorge J. Using fuzzy logic to recognize geometric shapes interactively. In: Proceedings of the 9th IEEE international conference on fuzzy systems, vol. 1; 2000.

[27] Coquillart S. Computing offsets of B-spline curves. Comput Aided Des 1987;19:305–9.

[28] Knoedel S, Hachet M. Multi-touch RST in 2D and 3D spaces: studying the impact of directness on user performance. In: Proceedings of the 2011 IEEE symposium on 3D user interfaces, 3DUI '11, IEEE Computer Society, Washington, DC, USA; 2011. p. 75–8.

[29] Wang R, Paris S, Popovic J. 6D hands: markerless hand-tracking for computer aided design. In: Proceedings of the 24th annual ACM symposium on user interface software and technology, UIST '11, ACM, NY, USA; 2011. p. 549–58.

[30] Song P, Goh WB, Hutama W, Fu C-W, Liu X. A handle bar metaphor for virtual object manipulation with mid-air interaction. In: Proceedings of the SIGCHI conference on human factors in computing systems, CHI '12, ACM, New York, NY, USA; 2012. p. 1297–306.