
PARAMETER-CONTROLLED SKELETONIZATION – AFRAMEWORK FOR VOLUME GRAPHICS

BY NIKHIL GAGVANI

A thesis submitted to the

Graduate School—New Brunswick

Rutgers, The State University of New Jersey

in partial fulfillment of the requirements

for the degree of

Doctor of Philosophy

Graduate Program in Electrical and Computer Engineering

Written under the direction of

Professor D. Silver

and approved by

New Brunswick, New Jersey

January, 2001


© 2001

Nikhil Gagvani

ALL RIGHTS RESERVED


ABSTRACT OF THE THESIS

Parameter-Controlled Skeletonization – A Framework for Volume

Graphics

by Nikhil Gagvani

Thesis Director: Professor D. Silver

Computer graphics models are typically represented as a collection of polygons or spline patches. Such models describe only the geometry and attributes of the surface of the objects they represent. Since these models are hollow, interactions which break or deform an object require special consideration. Various natural phenomena like clouds, smoke and water cannot be easily represented using surface-based models. An alternative is to use a volumetric representation of objects. Volumetric models can describe the interior properties of objects. These properties can be both physical and optical, which makes it possible to model accurate deformations and light interactions for realistic representation of natural phenomena. Volume graphics focuses on the modeling, manipulation and rendering of volumetric objects.

The task of volume modeling, manipulation and deformation is particularly difficult owing to the enormous size of the models. Our research has focused on an efficient abstraction of a volumetric model. We thin the volumetric model into a skeleton using a parameter-controlled thinning algorithm. Control of a single thinness parameter allows the skeleton to be represented at various density levels. We then demonstrate the versatility of our multi-scale skeleton for a variety of operations on volumetric models. These operations include deformation, animation, collision detection, navigation and tracking. Existing algorithms for these operations on volumetric models are either unrealistic or are specific to a single operation. Our skeleton representation provides a unified framework for many common operations on volumetric models. Furthermore, the skeleton abstraction allows us to apply existing techniques for surface-based animation and collision detection, making it compatible with commercial packages and toolkits.

Keywords: Skeleton, Scientific Visualization, Endoscopy, Computer Animation, Collision Detection.


Acknowledgements

First and foremost, I would like to thank my advisor, Dr. Deborah Silver, who made this thesis possible with her vision and encouragement. I am very grateful to Prof. N. Zabusky, who got me started on this work. The financial support from Sarnoff Corporation for the latter half of this work is also gratefully acknowledged.

The work for this thesis was done at Vizlab, the Laboratory for Visiometrics and Modeling at Rutgers University. I would like to thank several of my collaborators and co-students in the lab: Jaideep Ray, Xin Wang, Dilip Kenchammana-Hosekote, Kundan Sen and Arindam Bhattacharya. Special thanks are due to Andre Martinez, who greatly facilitated the work on motion-capture based animation. Wes Townsend and Sandra Cheng from Vizlab created some of the animations for this work.

I would also like to thank Dr. Marsha Jessup of the Robert Wood Johnson Medical School for valuable input on medical visualization, Dr. Bernhard Geiger of Siemens Corporate Research for providing the trachea dataset, and Dr. R.A. Robb of the Mayo Clinic for the colon dataset. I am also grateful to the National Library of Medicine for providing the Visible Human dataset and to Kitware for the Visualization Toolkit, which has served as an invaluable resource.

Last but not least, I am indebted to my parents for providing me a sound educational foundation and for their unwavering support in all my academic pursuits.

The Vizlab also acknowledges the support of the CAIP Center and the New Jersey Commission on Science and Technology.


Table of Contents

Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ii

Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iv

List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii

1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

1.1. Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

1.2. Prior Context . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

1.3. Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

1.4. Overview of Material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

2. Related Work In Shape Description . . . . . . . . . . . . . . . . . . . . . . 7

2.1. Topological Thinning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

2.1.1. Extended Safe Point Thinning Algorithm (ESPTA) . . . . . . . . . . . 9

2.1.2. Bertrand's Parallel Thinning Algorithm . . . . . . . . . . . . . . 11

2.1.3. Summary of Topological Thinning . . . . . . . . . . . . . . . . . . . . 13

2.2. Distance Transform Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

2.3. Voronoi Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

3. Parameter-Based Volume Thinning . . . . . . . . . . . . . . . . . . . . . . 17

3.1. Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

3.2. Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

3.2.1. The Distance Transform . . . . . . . . . . . . . . . . . . . . . . . . . 20

3.2.2. Skeleton Extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

3.2.3. Ball Growing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

3.3. Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31


4. Shape Manipulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

4.1. Iso-Surfacing for Volume Manipulation . . . . . . . . . . . . . . . . . . . . . 34

4.2. Direct Volumetric Shape Manipulation . . . . . . . . . . . . . . . . . . . . . . 35

5. Traditional Character Animation . . . . . . . . . . . . . . . . . . . . . . . 38

5.1. Skeleton-Based Animation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

5.2. Keyframing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

5.3. Inverse-kinematics and Motion Capture . . . . . . . . . . . . . . . . . . . . . 42

6. Skeleton-based Volume Animation . . . . . . . . . . . . . . . . . . . . . . 44

6.1. The Skeleton-Tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

6.1.1. Automatic Connectivity . . . . . . . . . . . . . . . . . . . . . . . . . 45

6.1.2. Articulated Skeleton . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

6.2. Volume Reconstruction . . . . . . . . . . . . . . . . . . . . . . . . . . 50

6.2.1. Binary Reconstruction . . . . . . . . . . . . . . . . . . . . . . . . . . 50

6.2.2. Sampled Reconstruction . . . . . . . . . . . . . . . . . . . . . . . . . 54

6.3. Analysis of Reconstruction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57

6.3.1. Sampled Reconstruction for Bent Shapes . . . . . . . . . . . . . . . . 57

6.3.2. Reversibility for Sampled Reconstruction . . . . . . . . . . . . . . . . 60

6.4. Non-Rigid Deformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61

7. Volume Animation Production . . . . . . . . . . . . . . . . . . . . . . . . 63

7.1. Animation in Maya . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63

7.1.1. Skeleton-Tree Animation . . . . . . . . . . . . . . . . . . . . . . . . . 64

7.1.2. Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66

7.2. Motion-Capture Animation in Character Studio . . . . . . . . . . . . . . . . . 66

7.2.1. Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67

8. Skeleton-Based Volumetric Collision Detection . . . . . . . . . . . . . . . 72

8.1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72

8.2. Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73


8.3. Distance Computation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75

8.4. Bounding Spheres . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75

8.4.1. Hierarchical Bounding Tree . . . . . . . . . . . . . . . . . . . . . . . 77

8.4.2. Animated Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79

8.5. Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80

8.6. Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82

9. Applications in Visualization . . . . . . . . . . . . . . . . . . . . . . . . . 85

9.1. Volume Tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85

9.2. Virtual Endoscopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86

9.2.1. Constructing the Centerline . . . . . . . . . . . . . . . . . . . . . . . 87

9.2.2. Navigation Along the Centerline . . . . . . . . . . . . . . . . . 88

9.3. Medical Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90

9.4. Oil Discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91

10. Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93

10.1. Physically-Based Volume Animation . . . . . . . . . . . . . . . . . . . . . . . 93

10.2. Volume Graphics Applications . . . . . . . . . . . . . . . . . . . . . . . . . . 94

10.2.1. Shape Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94

10.2.2. Volume Morphing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94

11. Conclusion and Discussion . . . . . . . . . . . . . . . . . . . . . . . . . 96

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98


List of Figures

1.1. Polygonal and volumetric models of a human head . . . . . . . . . . . . . . . 2

2.1. Neighborhood of a point in 3D . . . . . . . . . . . . . . . . . . . . . . . . . . 10

2.2. ESPTA applied to an S shaped object . . . . . . . . . . . . . . . . . . . . . . . 11

2.3. Bertrand’s algorithm applied to an S shaped object . . . . . . . . . . . . . . . 13

2.4. Distance transform in 2D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

2.5. Voronoi Skeletons with different pruning thresholds . . . . . . . . . . . . . . . 16

3.1. The neighbors of a voxel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

3.2. Minimal set for reconstruction . . . . . . . . . . . . . . . . . . . . . . . . . . 26

3.3. Growing the ball around a voxel . . . . . . . . . . . . . . . . . . . . . . . . . 28

3.4. Mean Neighbor Distance Transform (MNT) and its relation to the maximum

thinness (TP). The dark voxel will be included in the skeleton if TP is less than

DT-MNT. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

3.5. A maple leaf and its skeleton at various thinness values. Thinness is increasing

from left to right. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

3.6. Lossless reconstruction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

3.7. Lossy reconstruction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

3.8. Effect of the Thinness Parameter . . . . . . . . . . . . . . . . . . . . . . . . . 32

3.9. Skeletons for a volumetric ellipsoid and the Visible Male using the weighted distance metric (center) and the Euclidean metric (right). . . . . . . . . . . . 33

3.10. The dependence of skeletal voxels on the thinness parameter. . . . . . . . . . . 33

5.1. An articulated skeleton consisting of bones and joints used for character animation. 39

5.2. Attaching surfaces to the articulated skeleton. . . . . . . . . . . . . . . . . . . 41

5.3. A joint hierarchy with an inverse-kinematics handle . . . . . . . . . . . . . . . 42

6.1. The Volume Deformation Pipeline . . . . . . . . . . . . . . . . . . . . . . . . 45


6.2. The Skeleton Tree, with increasing spatial coherence (i.e., decreasing value coherence), for connectivity parameter values (a) 0.5, (b) 0.75, (c) 0.95. . . . 47

6.3. An articulated skeleton (red) is defined by the animator. All points in the volumetric skeleton (black) are then connected to the articulated skeleton. Only the articulated skeleton has to be manipulated for animation. . . . . . . . . . 49

6.4. The quality of reconstruction is better for a thicker volumetric skeleton. The skeleton on the left has 42,298 points; its reconstructed model is shown next to it. The thinner skeleton to the right has 9,666 points; its reconstructed model is at the extreme right. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

6.5. Comparison of the reconstruction quality for the weighted-distance and Euclidean skeletons. The first column shows the original object, the second column shows the weighted reconstruction, and the Euclidean reconstruction is shown in the third column. . . . . . . . . . . . . . . . . . . . . . . . . . . . 54

6.6. Reconstruction Loss for different Thinness Parameters . . . . . . . . . . . . . 55

6.7. Overlapping spheres can cause problems during reconstruction. When the cylinder-cube model is bent at the center, spheres from the straight part at the bottom overwrite values in the top, bent part, as seen in the center image. This is fixed in the image on the right. . . . . . . . . . . . . . . . . . . . . . . . . 56

6.8. A cross section of the original volume is shown to the left. A cross section of the

reconstructed volume in the same pose is in the middle. The right image shows

a cross section of the reconstructed volume in a new pose. . . . . . . . . . . . 57

6.9. A cuboid embedded in a cylinder and its skeleton. The two bones of the articulated skeleton are shown as green and red lines. . . . . . . . . . . . . . . . 58

6.10. Reconstruction of the cuboid at various rotation angles. The iso-surface is shown

here. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

6.11. Reversibility analysis for sampled reconstruction. . . . . . . . . . . . . . . . . 61

6.12. Bulge and Pinch effect by changing the distance transform of skeletal voxels. . 62


7.1. The lookup table used to recover Distance Transform values from the deformed

skeleton-tree. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66

7.2. Creating a Group Hierarchy. Pieces of the skeleton-tree are combined into groups in a hierarchical manner. These groups correspond exactly to the animation-skeleton created in Maya. . . . . . . . . . . . . . . . . . . . . . . 69

7.3. Skeleton geometry in Alias. Joints in the animation-skeleton are shown by circles; the triangles are bones of the animation-skeleton. IK handles are shown as diagonal lines between joints. . . . . . . . . . . . . . . . . . . . . 70

7.4. Frames from an animation of the volumetric dragon. . . . . . . . . . . . . . . 70

7.5. Frames from an animation of the human trachea. . . . . . . . . . . . . . . . . 70

7.6. Binding the Biped model in Character Studio to the articulated skeleton . . . . 71

7.7. Extracting the deformed articulated skeletons (red) from the animated Biped

model in Character Studio . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

7.8. Volume rendered frames of a running sequence. The sequence was generated

using motion capture data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

8.1. Inscribed and expanded circles for an ellipse. Inscribed circles are automatically computed based on the boundary coverage metric. Expanding these circles yields a bounding shape for the ellipse. . . . . . . . . . . . . . . . . . . 76

8.2. The reconstruction-spheres (center) and the collision-spheres (right) for the Visible Male volume. Only 100 spheres approximate the volume in this case. . 77

8.3. Three levels of the collision detection hierarchy are shown for the Visible Male

volume. The bounding volumes from left to right have 30, 100 and 300 spheres

respectively. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78

8.4. The hierarchical intersection graph. An edge exists from a node u to a node v in the next level if the sphere corresponding to u intersects the sphere for node v. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78

8.5. A local occupancy map is computed for the intersecting spherical caps. Voxel

intersections are tested only within this local occupancy volume. . . . . . . . . 79


8.6. Three frames from an animation of volumetric bugs colliding. The two frames

to the left show the case where no collision was detected. The right frame shows

their positions when a collision was detected. . . . . . . . . . . . . . . . . . . 80

8.7. A volumetric wasp colliding with the Visible Man. The figure shows two sample frames from the animation. A collision was detected in the frame to the right. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83

8.8. A swarm of volumetric wasps chasing the Visible Man. The Visible Man is animated using a volumetric skeleton. The centers of bounding spheres are attached to the volumetric skeleton and follow the motion of the skeleton. . . 84

9.1. Skeletons for fast Vortex Tracking. (a) Segmented vortex structures. (b) Skeleton of these structures, thinness = 1.0. . . . . . . . . . . . . . . . . . . . . 85

9.2. Trachea and its Skeleton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88

9.3. Interactive generation of the trachea centerline. (a) Two endpoints are defined

for a navigation path. (b) A centerline is generated for each path. . . . . . . . 88

9.4. 3D trachea dataset. Camera views from different points along the trachea centerline (shown inset in (a)). Points on the camera path are shown as spheres in (a). Note the visible bifurcation in (b). . . . . . . . . . . . . . . . . . . . 89

9.5. Stretching a colon dataset. We use volume animation techniques to stretch a

volumetric model of the human colon. . . . . . . . . . . . . . . . . . . . . . . 91

9.6. The skeleton-tree of a rock sample. The skeleton-tree is used to extract connectivity between pores. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92


Chapter 1

Introduction

Computer graphics is concerned with producing still and motion imagery by synthetic means using a computer. Applications of computer graphics include entertainment, architecture, design, medicine and scientific visualization. Computer-generated imagery for entertainment is focused on realism. Furthermore, the use of computer graphics in entertainment allows for artistic expression, as evidenced in the creation of imaginary and virtual worlds and scenarios.

On the other hand, visualization employs computer graphics to enhance the understanding of data and information. By means of graphical metaphors and symbology, visualization techniques aim to convey patterns in the data. While the goal of visualization has been to present real data (or data from simulations of real phenomena) in a visual form for better understanding, graphics has been concerned with producing real-looking images, mostly by synthetic means.

The field of volume graphics has emerged from existing work in volume visualization. Volume visualization deals with the manipulation and display of three-dimensional images such as those obtained from CT, MRI and ultrasound imaging. Numerical simulations of fluid dynamics and of naturally occurring phenomena such as clouds and fire also use volume visualization techniques. Volume graphics presents a meeting point for traditional graphics and visualization by enabling interaction of real data with synthetic objects.

1.1 Motivation

The production of computer graphics imagery typically consists of three steps: modeling, manipulation and rendering. Modeling refers to the creation of data structures and representations which describe objects and their environment. Manipulation is concerned with modification of the model. Rendering is the process of image generation: a renderer converts a computer model into a still image or a series of images. The renderer is strongly tied to the modeling technique, and a separate renderer is generally required for each type of model that has to be rendered.

Figure 1.1: Polygonal and volumetric models of a human head

Over the past thirty years, various modeling techniques have been investigated. The most popular technique is polygon-based modeling, a boundary surface representation. The surface of each object is described by a set of polygons. Properties such as material and texture are defined for every polygon and used by the renderer to compute the final image. Polygonal models describe only the surface of objects; fuzzy objects such as clouds and fire cannot be easily represented with them. In contrast, volumetric modeling represents objects as a set of three-dimensional samples called voxels. Volumetric models are created by sampling a 3D field at various grid points; the intensity of the field at each grid point is stored in a voxel. This field can range from the density field for tissues in CT to the vorticity field in a fluid dynamics experiment. Voxels in the interior of objects are also modeled in volume graphics, so it is easier to look inside objects and to represent clouds and fire. Figure 1.1 shows a comparison between polygonal and volumetric models. A human head modeled with polygons is shown to the left. The image on the right is generated using a volumetric model of the same head; therefore internal structures like the skull are visible. For this reason, volumetric models are popular in medical diagnosis.

The rendering of volume models is called volume rendering. Volume rendering has received much attention over the last ten years [22, 85, 49]; applications have included medical diagnostics and CAD. For volume rendering, the intensity at every voxel is mapped into an opacity and color value using opacity and color transfer functions. Light rays passing through a voxel are attenuated by an amount proportional to the opacity at that voxel. The opacity of a voxel therefore determines the amount of background color that is visible through the voxel. If the opacity for surface voxels is reduced, one can look inside volumetric models. This is very useful in detecting anomalies in models of tissues, organs and manufactured devices.
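As a concrete illustration of the attenuation just described, the following sketch composites voxel samples along a single ray, front to back. The linear transfer function and the gray color ramp are illustrative assumptions, not the mappings used by any particular renderer.

```python
def transfer(intensity):
    """Map a raw voxel intensity in [0, 1] to (opacity, gray value).

    A hypothetical linear transfer function for illustration; real
    renderers use user-designed opacity and color lookup tables.
    """
    opacity = max(0.0, min(1.0, intensity))  # denser voxels are more opaque
    color = 1.0 - 0.5 * opacity              # arbitrary gray ramp
    return opacity, color


def composite_ray(samples):
    """Front-to-back compositing of voxel samples along one ray.

    Each voxel attenuates the light behind it in proportion to its
    opacity, so lowering the opacity of surface voxels lets interior
    voxels show through.
    """
    color_acc, alpha_acc = 0.0, 0.0
    for s in samples:
        a, c = transfer(s)
        color_acc += (1.0 - alpha_acc) * a * c
        alpha_acc += (1.0 - alpha_acc) * a
        if alpha_acc > 0.99:                 # early ray termination
            break
    return color_acc, alpha_acc
```

A fully opaque first sample hides everything behind it; reducing its opacity mixes in the contributions of voxels farther along the ray.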

There is little work on volume modeling. Generally, volume models are derived from imaging sources like CT and MRI, or from mathematical simulations; such models represent real-world objects. Synthetic volume models can be created by converting polygonal models to volumetric representations [79, 41]. This is generally done via a scan-filling operation on each three-dimensional polygon in the scene; such conversion cannot represent the interior of objects. Implicit modeling [12] is another means of creating synthetic volumetric models. In implicit modeling, mathematical equations are used to describe the intensity of voxels in the interior of an object. Particle systems [70, 23] use thousands of particles moving under physical constraints to model fuzzy volumetric objects such as gases and fire.
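A minimal sketch of implicit modeling as described above: voxel intensities over a grid are defined by a mathematical equation, here a hypothetical solid sphere whose field is positive inside and negative outside. The function names and the grid-to-coordinate mapping are assumptions of this sketch.

```python
def voxelize_implicit(f, n=17, extent=1.0):
    """Sample an implicit field f(x, y, z) on an n*n*n voxel grid.

    Unlike scan-filling a polygonal surface, this stores a field value
    at every grid point, so the interior of the object is represented.
    """
    grid = [[[0.0] * n for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                # Map grid indices to coordinates in [-extent, extent].
                x = -extent + 2.0 * extent * i / (n - 1)
                y = -extent + 2.0 * extent * j / (n - 1)
                z = -extent + 2.0 * extent * k / (n - 1)
                grid[i][j][k] = f(x, y, z)
    return grid


# Solid sphere of radius 0.5: the field is positive inside, negative outside.
sphere = lambda x, y, z: 0.25 - (x * x + y * y + z * z)
volume = voxelize_implicit(sphere)
```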

There has not been much research on volumetric manipulation and deformation. While volume rendering techniques can produce compelling images of static models, the ability to manipulate, animate and interact with these models is missing. This limits the domain of application for volume graphics. We believe that this lack of volumetric manipulation techniques is responsible for the low popularity of volume graphics. Given the stunning images that can be produced by volume renderers, volume graphics should be more mainstream; however, images of static models have limited appeal and application. In this thesis, we propose a data structure called the skeleton which can be used for easy manipulation of volume models. We also show applications of the skeleton to volume modeling and navigation through volumetric models.

1.2 Prior Context

Prior work on volume manipulation has used free-form deformation or physically-based deformation. The computations involved spring-mass models, continuum models [33] and finite-element methods [19]. Gibson has suggested the 3D Chain Mail algorithm [32] for propagating deformations rapidly through a volume. These techniques are complex, since they involve the specification of material properties and the setting up of mathematical equations. Kurzion and Yagel [48] have proposed a method to deform the rays during the rendering phase using ray deflectors. The choice of ray deflectors for a desired target motion is not easy in their method, and the runtime is proportional to the number of deflectors, which can be large for complex motions.

Current volume deformation methods are tailored to a specific application domain. In order for volume graphics to become mainstream, a simple mechanism for volume deformation is required. Currently, computer animators working with surface models manipulate a "stick-like" representation of a model called the skeleton.¹ Movement of the skeleton causes a corresponding movement of the model. For the skeletal motion to be realistically mapped to the model, a binding is created between the surface geometry and the skeletal structure. This causes points on the surface to move along with the skeleton. A skeleton simplifies the task of model manipulation because it is a simplification of the model. Extremely realistic animations can be produced by use of the skeleton abstraction.
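The binding just described can be sketched in a few lines: each surface point is attached to its nearest skeleton joint and then follows that joint's displacement. This is a deliberately simplified, hypothetical rigid binding; production systems blend several joints per point and apply full rotations rather than pure translations.

```python
def bind_points(points, joints):
    """Bind each model point to the index of its nearest skeleton joint."""
    def nearest(p):
        return min(range(len(joints)),
                   key=lambda j: sum((a - b) ** 2 for a, b in zip(p, joints[j])))
    return [nearest(p) for p in points]


def deform(points, binding, joints, new_joints):
    """Move each bound point by the displacement of its joint."""
    out = []
    for p, j in zip(points, binding):
        disp = tuple(n - o for n, o in zip(new_joints[j], joints[j]))
        out.append(tuple(c + d for c, d in zip(p, disp)))
    return out
```

Moving one joint drags along only the points bound to it, which is what makes the skeleton a convenient handle for manipulating the whole model.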

A volumetric skeleton could greatly simplify the task of volume manipulation. In this thesis, we first look at automatic skeleton generation algorithms. A skeleton can be generated automatically from a model via a thinning operation. Several two-dimensional thinning methods for images have been published which work very well. Volumetric thinning is a harder problem because a thinning operation on a volume generally produces a skeleton containing lines as well as surfaces, which is not the simple stick-like approximation that is suitable for animation.

In addition, there are applications of a skeleton abstraction that could be applied to the volumetric domain. If the skeleton is centered with respect to the boundaries of a model, it can be used as a collision-free path for navigating through the interior of the model. In some cases, the model can be reconstructed from its skeleton, which has applications in compression. The skeleton is also frequently used for automatic shape identification and matching because it is a unique but simpler representation of a model. There are a variety of skeletonization algorithms which create a skeleton specific to an application. However, no single algorithm can produce skeletons at multiple densities, which would address a wide variety of applications.

¹The term skeleton has a different meaning in different contexts: in animation, a skeleton is a stick-like figure; in anatomy, it refers to the collection of bones; and in computer vision, it is a thinned representation of a shape.


1.3 Approach

Our approach towards volume manipulation is to automatically extract a volumetric skeleton

from the model. Manipulation of the skeleton causes a corresponding movement of the model.

There are advantages to using an automatically derived volumetric skeleton. An automat-

ically derived skeleton is centered within the model. Also, there is a direct binding with the

voxels of the model, which obviates the tedious step of correcting an ad-hoc skeleton as is nec-

essary for surface models.

We would also like to use the skeleton for shape identification and navigation in volume

graphics. Therefore, our thinning algorithm produces skeletons at multiple densities. The den-

sity is controlled by a single parameter called the thinness parameter. The parameter-controlled

skeleton provides a powerful abstraction of the volume model. It can be used for volume model-

ing and morphing because it allows an existing model to be deformed into a new one. It captures

the shape of a model, preserving the spatial relationships between component parts, and can therefore

be used for occlusion queries to speed up volume rendering and interactive collision detection

for virtual reality. Since the skeleton is thin, it can be used as a collision-free path for navigation

through volumetric models of organs which finds application in virtual endoscopy and surgical

path planning.

1.4 Overview of Material

This thesis continues with a survey of existing methods for shape description on a computer

(Chapter 2). In Chapter 3, we describe our parameter-controlled skeleton algorithm. Popular

techniques for shape manipulation are discussed in Chapter 4, followed by a description of tradi-

tional computer animation in Chapter 5. We follow this discussion by introducing our algorithm

for the manipulation of volumetric models in Chapter 6. Chapter 7 is devoted to integration is-

sues with commercial animation packages. In Chapter 8, we describe an algorithm for efficient

collision detection of volumetric models. Other applications of the skeleton model in visual-

ization are described in Chapter 9. Specifically, we describe applications to virtual endoscopy

in medical diagnosis and to volume tracking with applications in fluid dynamics and weather

modeling. Potential applications of our method and future work are discussed in Chapter 10.


We conclude and list specific contributions in Chapter 11.


Chapter 2

Related Work In Shape Description

Automated object recognition is an important problem in several fields. In order for a computer

to recognize an object, distinguishing properties of the object have to be available. Shape

is one such property which can help differentiate various objects. In addition, color and texture

can also be used to recognize and classify objects. Shape description is therefore a precursor for

automated object recognition.

The problem of shape description is tightly coupled with the modeling technique employed

for the representation of objects. Computer vision and image analysis techniques attempt to ex-

tract shape information from a series of images. Stereoscopy [44] uses two or more cameras

looking at the same scene. Using the images from these cameras, stereoscopic methods attempt

to extract the depth of pixels in the scene by relating it to the disparity of the pixels in those

images. Further processing is done on the depth-pixels to cluster them into structures like lines

and planes. Other image analysis techniques directly extract edges from photographs to create

a model for an object. Once salient features of objects are available from the computer model,

matching can be done by computing the correlation between features. Some simple techniques

for correlation involve the comparison of normalized curve lengths, areas, volumes and mo-

ments. If the features agree to some level of accuracy, the object can be matched with existing

object templates.

For complex pixelized objects, the features might be too numerous for efficient

correlation. If the shape of the object can be abstracted into a simpler description, matching can

be done faster. However, it is imperative that such a simple description be unique to the object

that it describes. Therefore, another popular technique is to thin the object into a reduced model

before matching. Thinning refers to the process of reducing an object to a simpler representation

with fewer pixels/voxels that still retains the essential features of the original object. The process


of thinning an object results in the skeleton of the object.

A skeleton has several desirable properties which make it suitable for shape description. Ide-

ally, the skeleton is required to have the following properties, some of which are conflicting.

- Thin : The skeleton is thinner than the original model. Therefore, there is less data to compare. Consequently, shape matching is faster.

- Centered : Generally, the skeleton is centered with respect to the boundaries of the object that it describes.

- Connected : The skeleton is topologically equivalent to the object that it describes. Therefore, the number of connected components in the skeleton is equal to those in the object.

- Reconstructible : The object can be reconstructed from the skeleton. This is the inversion property which requires that the thinning operation be reversible.

Thinning methods are dependent on the modeling technique used to represent the object.

They can be broadly divided into three categories, with each method satisfying a subset of the

properties described above. These approaches to thinning and skeleton generation are described

in the following subsections.

2.1 Topological Thinning

Topological thinning methods are influenced by the work done in characterizing the topological

properties of 2D images. Methods like [1, 65] identify simple points, the removal of which does

not change the topology of the image. Certain simple points which are end points are left un-

changed to preserve useful information about the shape and extremities of the object. A simple

characterization of an end point in a two-dimensional digital picture is a point which is adjacent to just one other object point. These algorithms can also be easily parallelized as described in [80]. Several

authors [82, 53, 58] have tried to extend the idea to three dimensions. However, characterizing

3D end points does not easily extend from the 2D approach. Morgenthaler [21] has attempted

a study of such a characterization. Many other characterizations of 3D simple points and end

points reported in the literature have later been shown to be incorrect [34, 7]. Recently, a lot of


work has been reported for 3D simplicity tests. Latecki and Ma [52] show that the 26 neighbors

of a point can be either black, white or don't-care points for the simplicity test. A total of 3^26 different configurations can occur for the neighborhood of a point in 3D. If a lookup table were

to be used for each of the configurations, it would need at least 318 G-bytes which is not feasi-

ble. They propose a space saving algorithm which uses a lookup table of size 1.4M-bytes. Ma

and Sonka [57] describe a parallel 3D thinning algorithm that works on well-composed images

which are 6-connected. They compute the skeleton of medical datasets, but the images have to

be converted to well-composed ones and smoothed by a dilation operation [44]. Bertrand and

Malandain [6] and Bertrand [5] have also reported characterizations of three dimensional simple

points. We look at two thinning algorithms in detail in the next subsections.
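As an illustration of the simple point idea in 2D, the following sketch (our own, not taken from any of the cited papers) tests simplicity by brute-force component counting in the punctured 3x3 neighborhood, using 8-connectivity for the foreground and 4-connectivity for the background:

```python
# Brute-force 2D simple point test (8-connected foreground, 4-connected
# background). Illustrative sketch, not the exact formulation of [1, 65]:
# a foreground pixel is simple when its punctured 3x3 neighborhood contains
# exactly one foreground component and exactly one background component
# that is 4-adjacent to the pixel.

N8 = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def _components(cells, adjacent):
    """Connected components of `cells` under the given adjacency relation."""
    cells, seen, comps = set(cells), set(), []
    for c in cells:
        if c in seen:
            continue
        stack, comp = [c], {c}
        seen.add(c)
        while stack:
            a = stack.pop()
            for b in cells:
                if b not in seen and adjacent(a, b):
                    seen.add(b)
                    comp.add(b)
                    stack.append(b)
        comps.append(comp)
    return comps

def is_simple(fg, p):
    """True if removing foreground pixel p preserves local 2D topology."""
    x, y = p
    nbr_fg = [(dx, dy) for dx, dy in N8 if (x + dx, y + dy) in fg]
    nbr_bg = [d for d in N8 if d not in nbr_fg]
    adj8 = lambda a, b: max(abs(a[0] - b[0]), abs(a[1] - b[1])) == 1
    adj4 = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1
    fg_comps = _components(nbr_fg, adj8)
    # only background components 4-adjacent to p matter
    bg_comps = [c for c in _components(nbr_bg, adj4) if c & set(N4)]
    return len(fg_comps) == 1 and len(bg_comps) == 1

# An L-shaped object: the corner pixel is simple because its two arms stay
# 8-connected diagonally after removal; the centre of a solid block is not.
L_shape = {(0, 0), (1, 0), (2, 0), (0, 1), (0, 2)}
print(is_simple(L_shape, (0, 0)))   # True
```

Note that the end pixels of the L shape also pass this test, which is precisely why thinning algorithms must protect end points explicitly: deleting only simple points would otherwise erode the shape down to a single pixel.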

2.1.1 Extended Safe Point Thinning Algorithm (ESPTA)

The ESPTA algorithm, proposed in [58] is an extension of the two dimensional Safe Point Thin-

ning Algorithm (SPTA) [60]. The 2D SPTA works by examining the eight-neighbors of a pixel,

p, to check if p is a safe point. A pixel is defined to be a safe point if it is an edge point whose removal would result in the loss of connectivity or in excessive erosion of the image. For SPTA, Naccache and Shinghal establish four Boolean conditions for characterizing four types of safe points

corresponding to four types of neighbors. An image pixel is one of the four edge types - Left,

Right, North or South if it has a zero (background) neighbor in the corresponding direction. The

task of determining whether a point is safe is then reduced to matching the eight-neighborhood

of the point with four template windows, which can be efficiently expressed by a Boolean ex-

pression.

In 3D, the 3x3x3 neighborhood of a voxel p is examined for the safe point condition. The neighbor voxels of p are named as shown in Figure 2.1. With reference to the viewing direction,

the 3x3x3 neighborhood is thought to be formed by three different 3x3 neighborhood frames in

2D which are termed as back frame, mid frame and front frame.

Mukherjee, et al. identify two orthogonal planes through the point p and require p to simultaneously satisfy the 2D left safe point condition in both planes. The resulting Boolean expression for a left safe point combines the 2D SPTA conditions over the back, mid and front frame neighbors of p; the full expression is given in [58].


Figure 2.1: Neighborhood of a point in 3D

Similar conditions exist for the other types of safe points. A point p is a left safe point if and only if the expression evaluates to false. If the expression evaluates to true, then p is flagged for removal, and all flagged points are removed at the end of a pass. A pass consists of scanning all the image points for one of the following types of safe point conditions - Left, Right, North, South, Front and Back. The algorithm iteratively removes non-safe points and stops when no point is flagged in any of the six passes in an iteration.

It has been shown in [34] that the ESPTA algorithm cannot preserve the 18-connectivity of a

3D object. This is because ESPTA checks only 14 neighbors of a point, these 14 neighbors being

formed by the points in the two orthogonal planes used to derive the safe point condition. Our

implementation of ESPTA on a 3D regular grid demonstrated this shortcoming of the algorithm.

Results of subjecting an S shaped object to thinning by ESPTA are shown in Figure 2.2.

Mukherjee, Das and Chatterji give a modified version of ESPTA in [59] which preserves

the 26-connectivity of the object. The modified algorithm, however, yields a thinned object

which is quite unlike the original one. The cost of maintaining connectivity is a noisy skeleton

as can be seen from the figures in their paper [59].


Figure 2.2: ESPTA applied to an S shaped object

2.1.2 Bertrand's Parallel Thinning Algorithm

Bertrand presents a parallel 3D thinning algorithm in [4] which is also based on scanning the

neighborhood of a border point to ensure connectivity of the thinned object. He uses the notion

of simple points and end points which are defined below. The simple point condition ensures that

any of the points removed will not alter the topology of the object and the end point condition

preserves useful information about the shape of the object and prevents excessive erosion.

A simple point is a point, the removal of which does not alter the topology of the image.

An end point is a point in the object that belongs to a curve or a surface. The algorithm works

by identifying border points which are simple points and not end points and marks them for

deletion. Again, marking and deleting is done in 6 independent directional passes as for the

ESPTA. The algorithm proceeds iteratively until no more points are marked for deletion in any of

the passes of an iteration.

In order to characterize simple points and end points, Bertrand defines certain sets of points based on the neighborhood of a border point x. The notation # denotes the cardinality of a set. For each n in {6, 18, 26}, he counts the 1-points (object points) and the 0-points (background points) in the n-neighborhood of x, together with the numbers of connected components that these points form around x. A border point x is then a simple point if and only if one of four counting conditions on these cardinalities holds; the exact conditions are given in [4].


Figure 2.3: Bertrand’s algorithm applied to an S shaped object

A border point x is not an end point if and only if three further counting conditions on these neighborhood cardinalities hold simultaneously; again, the exact conditions are given in [4].

Results of applying Bertrand’s algorithm to the S object are shown in Figure 2.3. In order

to preserve connectivity, the algorithm is very conservative about the removal of points. Con-

sequently, a very thick skeleton is obtained as can be seen from the figure.

2.1.3 Summary of Topological Thinning

Topological methods have been the most widely investigated among all thinning algorithms be-

cause of the strong mathematical basis for the connectivity test. However, testing for simple

points in 3D is not a trivial operation and it is difficult to prove the correctness of a given simplic-

ity test. These algorithms work well for computer generated images which are smooth. Noisy

images have to be smoothed in order to get a thin skeleton. Topological thinning can guaran-

tee connected skeletons at the cost of reconstructibility. Since these methods use a topological

property like the Euler characteristic to test points for removal, two objects with the same Euler

characteristic but different geometries could be thinned to similar skeletons, e.g. a cuboidal box

with sharp corners and one with rounded corners would yield the same skeleton.


2.2 Distance Transform Methods

The distance transform (DT) at a point within an object is defined as the minimum distance to a

boundary point. Since the skeleton is required to be centered with respect to the object bound-

ary, the distance transform gives useful cues for point removal. Points closest to the center of

the object would have the maximum distance transform value. Several distance metrics can be

used to compute the distance transform. A distance metric is a way to measure the distance be-

tween two points in the image. The Euclidean or exact metric between two points (x1, y1, z1) and (x2, y2, z2) is defined as sqrt((x1 - x2)^2 + (y1 - y2)^2 + (z1 - z2)^2). The computation of a correct Euclidean DT is neither efficient nor algorithmically trivial. Several algorithms have been proposed for the Euclidean DT [87, 68, 72]. The Euclidean metric can be approximated by the Manhattan (|x1 - x2| + |y1 - y2| + |z1 - z2|) or chess-board (max(|x1 - x2|, |y1 - y2|, |z1 - z2|)) metrics for faster computation. Weighted distance metrics can also be used to approximate the Euclidean distance. Such metrics are denoted by <a,b,c>, where the local distance between face neighbors is a, between edge neighbors is b and between vertex neighbors is c, if a voxelized representation of points is imagined. Not all combinations of local distances a, b and c result in useful distance transforms. The distance transform has to be regular for it to be useful. Regularity properties of distance transforms in 3D are discussed in [15, 45]. The weighted distance transform of a 2D image is shown in Figure 2.4, with the original image on the left and the distance transform values on the right.

Figure 2.4: Distance transform in 2D
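The propagation of local distances that produces such a transform can be sketched in 2D with a <3,4> weighted (chamfer) transform. The two-pass scheme below is a minimal illustration in the spirit of Borgefors' method; the function and variable names are our own:

```python
# Two-pass <3,4> weighted (chamfer) distance transform of a 2D binary image:
# local distance 3 to edge-sharing neighbours, 4 to diagonal neighbours.
# Global distances are approximated by propagating these local distances
# in a forward raster scan and a backward raster scan.
INF = 10**9

def chamfer_dt(img):
    """img: list of rows of 0/1; returns per-pixel <3,4> distance to background."""
    h, w = len(img), len(img[0])
    dt = [[0 if img[y][x] == 0 else INF for x in range(w)] for y in range(h)]
    fwd = [(-1, -1, 4), (0, -1, 3), (1, -1, 4), (-1, 0, 3)]   # already-visited nbrs
    bwd = [(1, 1, 4), (0, 1, 3), (-1, 1, 4), (1, 0, 3)]
    for mask, ys, xs in ((fwd, range(h), range(w)),
                         (bwd, range(h - 1, -1, -1), range(w - 1, -1, -1))):
        for y in ys:
            for x in xs:
                for dx, dy, c in mask:
                    nx, ny = x + dx, y + dy
                    if 0 <= nx < w and 0 <= ny < h:
                        dt[y][x] = min(dt[y][x], dt[ny][nx] + c)
    return dt

# A 3x3 object inside a 5x5 image: boundary pixels get 3, the centre 6.
img = [[0] * 5, [0, 1, 1, 1, 0], [0, 1, 1, 1, 0], [0, 1, 1, 1, 0], [0] * 5]
print(chamfer_dt(img)[2][2])   # 6
```

The 3D <3,4,5> case used later in this thesis extends the two masks to half of the 3x3x3 neighborhood, with the vertex-neighbor weight 5 added.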

The medial surface/axis can then be defined as the locus of the centers of maximal circles

(2D) or balls (3D). The circles are constructed with the center at a point and have a radius equal


to the distance transform at that point. A maximal circle is one which is not completely con-

tained in the circle of any other point. The set of points thus extracted does not guarantee a connected skeletal representation. Niblack et al. [62] identify saddle points in 2D images to get a

connected skeleton. The concept of saddle points is hard to extend from 2D because of the lack

of a unique cyclic ordering of points around a test point. Since the distance transform is a scalar

field, vector field characterizations like those described in [39] require computation of the gra-

dient and eigenvalues which is expensive. Since it is difficult to identify saddle points in 3D,

the property of connectivity is difficult to achieve. Distance transform methods are well suited

for reconstruction of the object from the skeleton points and their distance transform values. If

a Euclidean or quasi-Euclidean metric is used for the distance transform, the skeleton is robust

under rotation of the object.
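The maximal-ball definition and the reconstruction property can be demonstrated directly on a distance transform. The sketch below is illustrative only (it uses the simple city-block metric rather than a weighted one): a point is kept when no neighbour's ball strictly contains its ball, and the object is rebuilt as the union of the retained balls.

```python
# Maximal-ball skeleton of a 2D object from its city-block distance
# transform, with exact reconstruction as the union of the retained balls.
from collections import deque

N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def city_block_dt(obj):
    """Multi-source BFS from the background: boundary pixels get DT = 1."""
    dt, frontier = {}, deque()
    for p in obj:
        for dx, dy in N4:
            if (p[0] + dx, p[1] + dy) not in obj:
                dt[p] = 1
                frontier.append(p)
                break
    while frontier:
        x, y = frontier.popleft()
        for dx, dy in N4:
            q = (x + dx, y + dy)
            if q in obj and q not in dt:
                dt[q] = dt[(x, y)] + 1
                frontier.append(q)
    return dt

def skeleton(dt):
    """Centres of maximal balls: no 4-neighbour's ball strictly contains ours."""
    return {p: r for p, r in dt.items()
            if all(dt.get((p[0] + dx, p[1] + dy), 0) < r + 1 for dx, dy in N4)}

def reconstruct(skel):
    """Union of balls {q : d(q, s) < DT(s)} recovers the object exactly."""
    out = set()
    for (x, y), r in skel.items():
        for dx in range(-r + 1, r):
            for dy in range(-r + 1, r):
                if abs(dx) + abs(dy) < r:
                    out.add((x + dx, y + dy))
    return out

obj = {(x, y) for x in range(7) for y in range(3)}   # a 7x3 rectangle
assert reconstruct(skeleton(city_block_dt(obj))) == obj
```

For the rectangle, the skeleton consists of the middle row plus the four corners; note that it is not connected, which is exactly the drawback of pure maximal-ball extraction discussed above.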

2.3 Voronoi Methods

The Voronoi Diagram is a well-known tool in Computational Geometry [67]. Given a set S of n points in a plane, the Voronoi polygon of a point p in S is the polygon enclosing all points in the plane that are closer to p than to any other point in S. The Voronoi Diagram (VD) is the collection of the Voronoi polygons of all the points in S. This concept can be extended to 3D as well, where the VD is the collection of Voronoi polyhedra. The medial-axis, which is a synonym for the skeleton, is a subset of the Voronoi Diagram. Since a maximal ball is tangent to the object boundary, its center is equidistant from at least two different points on the object boundary. Therefore, the VD of points on the object boundary will yield Voronoi edges/faces

near the center of the object which are equidistant from two or more boundary points, giving part

of the medial-axis. A 2D skeletonization algorithm based on the Voronoi Diagram of a shape’s

boundary points is described in [64]. Given a set of boundary points, the algorithm extracts the

Voronoi Diagram. Most methods for the 2D Voronoi diagram impose the general-position condition that no four points are co-circular. Such restrictions can be eliminated by using infinite precision arithmetic

or using perturbation schemes like simulation of simplicity [24]. Ogniewicz and Kubler use

exact rational arithmetic to compute the Voronoi diagram and avoid the problem of co-circular

points. In their method, the Voronoi diagram is pruned using several techniques based

on a threshold value. The results of their algorithm applied to a rectangular shape with different


pruning thresholds are shown in Figure 2.5.

Figure 2.5: Voronoi Skeletons with different pruning thresholds

Other researchers [69, 75, 76] compute the 3D medial-axis/skeleton by using some form

of the Voronoi Diagram or its dual, the Delaunay triangulation. These methods do not compute

the Delaunay triangulation of general point sets and work well only for polyhedral solids used in

CAD and solid modeling. A 3D Voronoi skeletonization algorithm for large, complex datasets

with a topologically correct regularization method is described in [61]. It tries to address the

issue of keeping the skeleton topologically accurate while pruning the Voronoi diagram. Since

they work with points on the object boundary, Voronoi methods are best suited for polyhedral

objects for which the surface information is available but volumetric models are not available.

This makes it difficult for such methods to deal with cavities and holes in objects. The computa-

tional cost also grows with the number of polyhedra and in many cases, it may not be possible

to easily compute the Voronoi diagram for arbitrary boundary configurations.
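The Voronoi view of the medial axis can be illustrated by brute force: an interior sample lies near a Voronoi edge of the boundary point set exactly when it is (almost) equidistant from two well-separated boundary points. The function name, tolerances and the rectangle example below are our own illustrative choices, not part of any cited algorithm:

```python
# Brute-force medial-axis samples from a boundary point set: keep an interior
# sample when its two nearest boundary points are nearly equidistant from it
# yet far apart from each other (i.e. the sample is near a Voronoi edge
# generated by two different parts of the boundary). Tolerances are ad hoc.
import math

def medial_samples(boundary, interior, tol=0.5, sep=1.5):
    axis = []
    for p in interior:
        dists = sorted((math.dist(p, b), b) for b in boundary)
        (d1, b1), (d2, b2) = dists[0], dists[1]
        # nearly equidistant from two boundary points that are not neighbours
        if d2 - d1 < tol and math.dist(b1, b2) > sep * d1:
            axis.append(p)
    return axis

# Boundary of a 10x4 rectangle sampled at integer steps; the detected axis
# samples lie on the horizontal midline y = 2.
boundary = ([(x, 0) for x in range(11)] + [(x, 4) for x in range(11)]
            + [(0, y) for y in (1, 2, 3)] + [(10, y) for y in (1, 2, 3)])
interior = [(x, y) for x in range(1, 10) for y in range(1, 4)]
axis = medial_samples(boundary, interior)
```

With dense boundary sampling this criterion approximates the pruned Voronoi skeleton; the pruning thresholds play the same role as those used by Ogniewicz and Kubler.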


Chapter 3

Parameter-Based Volume Thinning

3.1 Motivation

With the widespread availability of MR and CT scanning equipment, three dimensional volume

datasets are becoming commonplace. Such images find application in medicine, luggage scan-

ners, oceanographic visualization, oil exploration and automated inspection of moulded and cast

manufactured products. Large fluid dynamics and scientific simulations also give rise to three

dimensional datasets. Regions of interest in these datasets, referred to as voxelized objects or

features, can be segmented out and presented to the user or scientist. However, these regions are

sometimes too large and unwieldy for further inspection or analysis. A thinning procedure may

be used to extract essential properties of the object. When a thinning procedure is applied to the

object, a thinner representation called the skeleton is generated.

The skeleton is related to the medial-axis which is the locus of points centered with respect

to the boundaries of an object. For three dimensional objects, the medial-axis is not just a curve,

but a surface, often called the medial-surface. A centerline is a curve-like representation of the

medial-surface for 3D shapes. It is very useful for path planning in Virtual Endoscopy [71, 40].

After segmenting an organ from an MRI scan, the centerline provides the camera path for au-

tomatic navigation and inspection of the organ. Such a virtual fly-through simulates the video

from a real camera used in endoscopic procedures to detect polyps and tumours. The centerline

can also be used to generate an accurate “stick-like” model of a volume object to achieve realistic

animation [17]. In computer graphics animation, the motion of animated characters is controlled

by manipulating the motion of a “stick-like” representation. In addition, if the original object

can be reconstructed from the skeleton, the thinning process can be used for compression and

speedup for fast matching and recognition.

It is desirable for the skeleton to be thin, centered, topologically accurate, and in many cases,


to allow reconstruction of the original object. When skeletons are used for shape description, the

topological characteristics of the object need to be preserved. As described earlier in Chapter

2, Topological Thinning algorithms deal with this property. The primary concern of these al-

gorithms is the identification of simple points and end points. (Points refer to 2D pixels or 3D

voxels which are part of the segmented region of interest). The simple point test checks the

local neighborhood of a test point to determine whether removal of the test point will discon-

nect its neighbors. Since the test is purely based on local connectivity, certain primitive shapes

like cuboids might be excessively thinned (to just one point). Therefore, certain simple points

which are end-points are left unchanged to preserve useful information about the shape of the

object. While these methods work well for two dimensional shapes [1, 65], efficient and com-

plete characterization of end points in 3D is not easy [5, 57]. Connectivity of the resulting skele-

ton is implicit in topological methods. These methods work well for smooth objects but noisy

(real-world) objects have to be smoothed prior to topological thinning.

Reconstructibility is a necessary property for compression applications. Distance Transform

methods can satisfy this requirement by storing the minimum boundary distance or the Distance

Transform value at every skeletal point. However, thinness and reconstructibility are two con-

flicting characteristics. Forcing connectivity of the skeleton may introduce extraneous points

which are not essential for reconstruction, yielding a thick skeleton and hence conflicting with

the thinness requirement. Distance Transform methods for 2D shapes [62] achieve connectiv-

ity by identifying “saddle-points” and “local maxima”. Due to the absence of a unique cyclic

ordering of points around a given point in 3D, identification of saddle points is not easy. Vector

field characterizations of saddle points like those described in [39] require computation of the

gradient and eigenvalues which is expensive. When a Euclidean or quasi-Euclidean metric is

used for the Distance Transform, the skeleton is invariant under rotation of the object.

If the boundary representation of an object is available, the medial-axis and medial-surface

can be extracted by pruning the Voronoi Diagram [67, 2] of the boundary points. 2D medial-axis

algorithms based on the Voronoi Diagram of a shape’s boundary points are described in [64, 16].

Computation of the Voronoi Diagram for arbitrary point sets needs special methods like rational

arithmetic or perturbation schemes like simulation of simplicity [24]. These methods do not

compute the Voronoi Diagram of general point sets and work well only for polyhedral solids


used in CAD and solid modeling. Moreover, extraction of the boundary for complex volume

models results in a very dense set of points [56], and the memory costs for the Voronoi diagram

of such large point sets can be prohibitively large.

It is clear that existing methods for 3D thinning emphasize certain properties of the resulting

skeleton depending on the application. The varied applications of skeletons have several con-

flicting requirements which makes it difficult for a single method to address all of them. While

some applications require the skeleton to be as thin as possible, others impose the condition that

the object be reconstructible from the skeleton. Automatic navigation of medical datasets needs

an extremely thin skeleton or a centerline. This is also the case for generating “stick-like” rep-

resentations for animation control. For compression, the original object must be reconstructible

from the skeleton. For reconstruction, if every minor feature in the original object needs to be

captured, the resulting skeleton is very thick and dense. A tracking application [77] requires a

balance between thinness and accuracy in order to be correct and fast.

Existing methods for 3D thinning do not allow control over the density of the skeleton.

Topological thinning works well for smooth, regular objects. However, real world objects tend

to have noisy boundaries which would cause a lot of the points to be identified as end-points by

a topological thinning method resulting in fairly thick skeletons. Boundary noise can cause the

Voronoi Diagram to be very dense, and topologically correct pruning techniques for 3D Voronoi

skeletons are difficult to derive.

In the next section, we describe a new algorithm for volume thinning which is parameter-

based, allowing the user to control the thickness/thinness of the skeleton.

3.2 Algorithm

Figure 3.1: The neighbors of a voxel


The volume is considered to be uniformly sampled in all three dimensions. A voxel is the small-

est unique element of this sampled volume. Voxels are partitioned into object-voxels and background-

voxels. The object-voxels are taken to be 26-connected and the background-voxels are taken to

be 6-connected (for a discussion on connectedness see [46]). Boundary-voxels are object-voxels

that are 6-neighbors of background-voxels, i.e. they lie on the boundary. For a voxel v, we define F-neighbors (face), E-neighbors (edge) and V-neighbors (vertex). F-neighbors of a voxel

are the 6-neighbors and share a face with the voxel in a cubic grid. E-neighbors are the 18-

neighbors that are not 6-neighbors, i.e. they share an edge of the voxel cube. V-neighbors are

the 26-neighbors that are not 6-neighbors or 18-neighbors, i.e. they share a vertex of the voxel

cube. This concept is illustrated in Figure 3.1.
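The F/E/V classification follows directly from the number of nonzero components in a neighbor's offset, as this small sketch shows (names are illustrative):

```python
# Enumerating the 26-neighbourhood of a voxel and classifying each offset as
# an F- (face), E- (edge) or V- (vertex) neighbour, as in Figure 3.1. The
# class is determined by how many coordinates of the offset are nonzero.
from itertools import product

def neighbour_class(offset):
    nonzero = sum(1 for d in offset if d != 0)
    return {1: "F", 2: "E", 3: "V"}[nonzero]

offsets = [d for d in product((-1, 0, 1), repeat=3) if d != (0, 0, 0)]
counts = {"F": 0, "E": 0, "V": 0}
for d in offsets:
    counts[neighbour_class(d)] += 1
print(counts)   # {'F': 6, 'E': 12, 'V': 8} -- 6 + 12 + 8 = 26 neighbours
```

The 6-neighbors are the F class, the 18-neighborhood is F together with E, and the full 26-neighborhood adds the V class.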

3.2.1 The Distance Transform

The distance transform at a voxel p = (x,y,z) is defined as

DT(p) = min { dist((x,y,z), (i,j,k)) : (i,j,k) in B }

where dist is the distance from voxel (x,y,z) to voxel (i,j,k) and B is the set of boundary voxels. We compute the distance dist using a <3,4,5> weighted distance metric, which approximates the Euclidean metric fairly well [15].

The distance transform can be computed by using neighborhood masks which are based on

the idea that global distances in the image are approximated by propagating local distances.

In [15], Borgefors describes a two-pass method to compute the weighted distance transform

in three dimensions. Her algorithm works on the complete cuboidal array consisting of both

object and background voxels. It consists of two passes over the entire array, each pass start-

ing at diagonally opposite corners of the array cuboid. In each pass, a new value is computed

for each object voxel by summing the values of already visited neighbors and the corresponding

local distances, 3, 4 or 5, and using the minimum of these sums. Leymarie and Levine [55]

have shown that a mask propagation method can have the same complexity for Euclidean and

weighted metrics.
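The two-pass scheme can be sketched as follows. This is a minimal Python version over a dense array, with the standard 13-offset half-masks; it is our own illustrative code under those assumptions, not the implementation from [15].

```python
import numpy as np

def chamfer_345(obj):
    """Two-pass <3,4,5> weighted distance transform.
    obj: 3-D boolean array, True for object voxels.  Background
    voxels keep distance 0; object voxels end up with the chamfer
    distance to the nearest background voxel."""
    INF = 10**9
    dt = np.where(obj, INF, 0).astype(np.int64)
    Z, Y, X = obj.shape
    # Forward half-mask: the 13 already-visited neighbors; the
    # backward pass uses the mirrored offsets.  Local distances
    # are 3 (face), 4 (edge) and 5 (vertex).
    half = [(dz, dy, dx)
            for dz in (-1, 0) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dz, dy, dx) < (0, 0, 0)]
    weight = {1: 3, 2: 4, 3: 5}

    def sweep(zs, ys, xs, offsets):
        for z in zs:
            for y in ys:
                for x in xs:
                    if not obj[z, y, x]:
                        continue
                    best = dt[z, y, x]
                    for dz, dy, dx in offsets:
                        nz, ny, nx = z + dz, y + dy, x + dx
                        if 0 <= nz < Z and 0 <= ny < Y and 0 <= nx < X:
                            w = weight[abs(dz) + abs(dy) + abs(dx)]
                            best = min(best, dt[nz, ny, nx] + w)
                    dt[z, y, x] = best

    # Pass 1 starts at the low-index corner, pass 2 at the
    # diagonally opposite corner, as in the description above.
    sweep(range(Z), range(Y), range(X), half)
    sweep(range(Z - 1, -1, -1), range(Y - 1, -1, -1), range(X - 1, -1, -1),
          [(-dz, -dy, -dx) for dz, dy, dx in half])
    return dt
```

Each pass takes, for every object voxel, the minimum over already-visited neighbors of their value plus the local distance, which is exactly the propagation of local distances described above.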


We have developed a weighted distance transform algorithm that works using a compact oc-

tree representation. We have also experimented with Saito and Toriwaki’s Euclidean Distance

Transform method [72] which works on the whole voxel array. Both algorithms are described

below.

Algorithm for the <3,4,5> Distance Transform

In order to deal with extremely large data sets, the object is represented by an octree [73]

structure and background voxels are not stored. Since it is not efficient to traverse the octree

in array order, we have developed a peeling technique that successively propagates the distance

transform inwards starting from the boundary voxels. In the first pass, all boundary voxels are

identified. This is done by checking the 26-neighborhood of every object voxel for background

voxels. If the object voxel has a background voxel as an F-neighbor, it is assigned a DT value of 3;

otherwise, if it has any background voxel as an E-neighbor, but not as an F-neighbor, it is assigned

a DT value of 4. If there are no background voxels which are F-neighbors or E-neighbors, but

there is some background voxel which is a V-neighbor, the boundary voxel is assigned a DT

value of 5. In the second pass, the algorithm then recursively checks neighbors of each of the

marked voxels, adding 3, 4 or 5 to the distance transform depending on whether it is an F-, E- or

V-neighbor.

Let O be the set of object-voxels, B be the set of background-voxels and BV denote the set of

boundary-voxels. We use a peeling technique which propagates the boundary inwards, assign-

ing distance transform values to object-voxels which are in the neighborhood of the boundary-

voxels. The distance transform value for a voxel is updated only if the new value is smaller than

the current value.

For all voxels p in O, assign a distance transform DT_p = infinity
Calculate the distance transform of boundary-voxels:
  For all voxels p in O that have a (face/edge/vertex) neighbor q in B:
    DT_p = (3 for face / 4 for edge / 5 for vertex)
    Add p to BV


Propagate the boundary inward:
  Repeat for all p in BV:
    Find all voxels r in O which are (face/edge/vertex) neighbors of p
    Assign DT_r = min(DT_r, DT_p + (3 for face / 4 for edge / 5 for vertex))
    Remove p from BV
    Add r to BV
  until no DT_r is modified.

A circular queue is used to keep track of the distance values for successive peeling. We use

the fact that at any given instant, if points with a distance transform value v are being processed,

their neighbors can get DT values of v + 3, v + 4 or v + 5 only. A linked list is created for

every distance transform value; the list stores pointers to all nodes in the octree which have a

distance transform value corresponding to the list value v. In the first pass, the circular queue

is initialized with lists for DT values 3, 4 and 5 with the head of the queue at 3. In the second

pass, the boundary is propagated inwards. The list at the head of the queue is traversed and all

neighboring object voxels are evaluated for a DT value which is the sum of the list value and

the local distance increment, i.e. 3, 4 or 5. If the new DT value is lower than the existing value,

the voxel is added to a list corresponding to the new value. Once the list at the head of the queue

has been processed, the head is moved to the next higher list for processing. The computation

stops when the queue is empty, i.e. there are no more distance transform values to process.
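The peeling scheme with the distance-indexed queue can be sketched as follows. For brevity a Python dict over a sparse set of voxel coordinates stands in for the octree; names and structure are illustrative, not the thesis implementation.

```python
from collections import defaultdict

def neighbors(p):
    """Yield each 26-neighbor of p with its local distance
    increment: 3 (face), 4 (edge) or 5 (vertex)."""
    x, y, z = p
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                if dx == dy == dz == 0:
                    continue
                n = sum(d != 0 for d in (dx, dy, dz))
                yield (x + dx, y + dy, z + dz), (3, 4, 5)[n - 1]

def peel_dt(obj):
    """Boundary-peeling <3,4,5> distance transform over a sparse set
    `obj` of object-voxel coordinates (a dict stands in for the octree)."""
    INF = float('inf')
    dt = {p: INF for p in obj}
    buckets = defaultdict(list)       # DT value -> voxel list (the queue)
    # Pass 1: seed boundary voxels with 3, 4 or 5.
    for p in obj:
        for q, w in neighbors(p):
            if q not in obj and w < dt[p]:
                dt[p] = w
        if dt[p] < INF:
            buckets[dt[p]].append(p)
    # Pass 2: peel inward in increasing DT order; a voxel with value v
    # can only push v+3, v+4 or v+5, so buckets are processed in
    # ascending order and never revisited.
    v = 3
    while buckets:
        for p in buckets.pop(v, []):
            if dt[p] != v:            # stale entry, superseded by a smaller value
                continue
            for q, w in neighbors(p):
                if q in dt and v + w < dt[q]:
                    dt[q] = v + w
                    buckets[v + w].append(q)
        v += 1
    return dt
```

The stale-entry check plays the role of "the distance transform value for a voxel is updated only if the new value is smaller than the current value".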

Algorithm for the Euclidean Distance Transform

We have implemented the Euclidean distance transform algorithm described by Saito and

Toriwaki [72]. The algorithm is described below.

Let D = {d_ijk} and S = {s_ijk} be the Euclidean distance transformation (EDT) and the squared EDT of a binary picture F = {f_ijk} respectively. Then the value d_ijk is defined as the minimum distance value from the voxel (i, j, k) to the closest 0-voxel in the input picture F, that is

$$s_{ijk} = \min_{x,y,z}\,\{(i-x)^2 + (j-y)^2 + (k-z)^2 \;;\; f_{xyz} = 0,\ 1 \le x \le L,\ 1 \le y \le M,\ 1 \le z \le N\}$$

$$d_{ijk} = \sqrt{s_{ijk}}$$

Note that the distance value s_ijk at any 0-voxel (background voxel) is 0 because the closest 0-voxel is itself.

We use the term distance transformation (DT) to represent both the transformation to calculate the picture D from the input picture F and the distance picture D itself.

Algorithm. The input picture is F = {f_ijk}, 1 <= i <= L, 1 <= j <= M, 1 <= k <= N.

Transformation 1. Derive from F a picture G = {g_ijk} defined as follows (transformation in the i-axis direction):

$$g_{ijk} = \min_x\,\{(i-x)^2 \;;\; f_{xjk} = 0,\ 1 \le x \le L\}$$

Transformation 2. Derive from the above picture G a picture H = {h_ijk} given by the following equation (transformation in the j-axis direction):

$$h_{ijk} = \min_y\,\{g_{iyk} + (j-y)^2 \;;\; 1 \le y \le M\}$$

Transformation 3. Obtain from the above picture H a picture S = {s_ijk} defined by the following equation (transformation in the k-axis direction):

$$s_{ijk} = \min_z\,\{h_{ijz} + (k-z)^2 \;;\; 1 \le z \le N\}$$

Then the following property is true.

Property 1. The picture S = {s_ijk} is the squared EDT of the picture F = {f_ijk}. That is, a voxel (i, j, k) in the picture S = {s_ijk} has a value equal to the square of the Euclidean distance from the voxel (i, j, k) to the closest 0-voxel.
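The three transformations map almost directly to code. The sketch below is a brute-force numpy version of the three axis-wise passes, written for clarity; it takes each minimum explicitly rather than using the scan-line optimizations of [72].

```python
import numpy as np

def squared_edt(f):
    """Three-pass Saito-Toriwaki squared EDT of a binary picture `f`
    (1 = object, 0 = background), following Transformations 1-3."""
    L, M, N = f.shape
    INF = L * L + M * M + N * N        # larger than any squared distance
    # Transformation 1: squared distance to the nearest 0 along the i-axis.
    g = np.full(f.shape, INF, dtype=np.int64)
    for j in range(M):
        for k in range(N):
            zeros = np.flatnonzero(f[:, j, k] == 0)
            if zeros.size:
                i = np.arange(L)[:, None]
                g[:, j, k] = ((i - zeros[None, :]) ** 2).min(axis=1)
    # Transformation 2: minimize g(i,y,k) + (j-y)^2 over y.
    h = np.empty_like(g)
    y = np.arange(M)
    for j in range(M):
        h[:, j, :] = (g + ((j - y) ** 2)[None, :, None]).min(axis=1)
    # Transformation 3: minimize h(i,j,z) + (k-z)^2 over z.
    s = np.empty_like(h)
    z = np.arange(N)
    for k in range(N):
        s[:, :, k] = (h + ((k - z) ** 2)[None, None, :]).min(axis=2)
    return s
```

Because each pass only ever adds a squared offset along one axis, the final picture S holds the full squared Euclidean distance, as stated in Property 1.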

We implemented the <3,4,5> and Euclidean distance transform algorithms described

above. A comparison of runtimes for various datasets is presented in Table 3.1. For small datasets,


Object      | Size        | Total Voxels | % "1" Voxels | Time (<3,4,5>) | Time (Euclidean)
Ellipsoid1  | 128x128x128 | 2097152      | 5.98         | 11.304         | 10.192
Ellipsoid2  | 256x256x256 | 16777216     | 0.30         | 4.918          | 54.335
Dragon      | 250x150x100 | 3750000      | 4.56         | 13.305         | 10.838
VisibleMale | 123x84x468  | 4835376      | 34.14        | 139.25         | 57.25

Table 3.1: Comparison of runtimes for the weighted <3,4,5> and the squared Euclidean distance transforms.

the read time for loading the volume from a file is comparable to the actual computation; there-

fore, only the distance transform computation times are compared. We found that the octree-

based <3,4,5> metric is better than the squared Euclidean metric only for sparse volumes,

where the number of background voxels is high. This is the case for the Ellipsoid2 object in Ta-

ble 3.1 which has less than 1% object voxels. For other volumes, the squared Euclidean metric

outperforms the <3,4,5> metric. With 34% object voxels, the Visible Male volume shows

the superiority of the Euclidean distance transform for dense volumes.

The reduced runtime for the Euclidean distance transform can be attributed to the O(log N) search time for the octree versus the constant time for a voxel read for the array-based Euclidean

distance transform. We also do not compute the exact Euclidean distance transform for all vox-

els, but only the squared Euclidean transform, thus avoiding a costly square-root operation.

This suffices for most applications. For our skeleton algorithm (described in the next section),

the squared Euclidean distance transform is fine. Examples of skeletons using each metric are

shown in the next section along with other results.

3.2.2 Skeleton Extraction

Once the distance transform has been computed, voxels which are essential to the skeleton have

to be identified. We exploit the reconstructibility property of skeletons to identify these voxels.

In the strict sense, this property implies that if the object were to be correctly reconstructed from

the skeletal voxels and their distance transform values, the skeletal structure should capture all

the shape characteristics. Therefore, the property of reconstructibility makes the skeleton accu-

rate in the sense that there are longer spines in regions with sharp corners or curvature changes.


By relaxing the reconstruction criterion, skeletons of varying density can be obtained. The es-

sential theoretical considerations are developed by stating some definitions and observations

below. These have also been defined in [28].

Definition 1 If a voxel p has a distance transform DT_p, the Ball B(p) associated with p is the

set of object voxels q such that the distance d_w(p, q) from p to q is strictly less than

DT_p.

The ball is a digitized version of a sphere. Its radius is determined by the distance transform

value at a voxel and the ball is constructed using the distance metric, in this case, the <3,4,5> metric.

Definition 2 The ball for an object voxel is maximal if it is not wholly contained in the ball of

any other voxel.

Observation 1 The set of voxels whose balls are maximal is sufficient to reconstruct the object.

This observation is true because every voxel in the object is contained at least in its own

ball, and all non-maximal balls are contained in maximal ones. The concept of a witness voxel

is now introduced in a manner similar to that for witness pixels in [62].

Definition 3 The witness for a voxel q is any 26-neighbor p such that the distance transform of

q, DT_q = DT_p - (3 for F-neighbor / 4 for E-neighbor / 5 for V-neighbor). Thus, if a voxel q is a

witness for a voxel p, B(q) ⊂ B(p).

Let the cost incurred in moving from p to a neighbor q be d_F, d_E or d_V for the cases when

q is an F-neighbor, E-neighbor or V-neighbor. Then, the witness to a voxel p is any 26-neighbor

q such that the cost incurred in moving from p to q is equal to the difference in their distance

transforms. Thus, if a voxel q is a witness of a voxel p, then B(q) ⊂ B(p). Since the set of balls associated with witness voxels is contained in the balls of other voxels,
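Definition 3 can be checked directly from the distance-transform values. The helpers below are hypothetical illustrations, not the thesis code; they operate on a dict mapping object-voxel coordinates to <3,4,5> distance-transform values.

```python
def neighbors26(p):
    """The 26-neighbors of p, each with the local move cost
    3 (face), 4 (edge) or 5 (vertex)."""
    x, y, z = p
    return [((x + dx, y + dy, z + dz),
             (3, 4, 5)[abs(dx) + abs(dy) + abs(dz) - 1])
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)
            if (dx, dy, dz) != (0, 0, 0)]

def non_witness_voxels(dt):
    """Voxels q that are not a witness for any 26-neighbor p, i.e.
    there is no p with DT_q = DT_p - cost; per Observation 1, only
    such voxels can carry maximal balls."""
    out = set()
    for q in dt:
        is_witness = any(p in dt and dt[q] == dt[p] - cost
                         for p, cost in neighbors26(q))
        if not is_witness:
            out.add(q)
    return out
```

The cost of the move from q to p equals the cost from p to q, so the witness test can be evaluated from q's own neighborhood in a single scan.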

the set of non-witness voxels should suffice for reconstruction of the object. However, this set of

non-witness voxels is not a minimal set for reconstruction. Nilsson and Danielsson [63] have

described a 2D method to iteratively identify the minimal set based on boundary coverage of


Figure 3.2: Minimal set for reconstruction (panels a, b and c)

locally maximum pixels. Since our method allows control over the density, the resulting set of

points is allowed to be thinner than the minimal set required for complete reconstruction. The

ball of a voxel may not be completely contained in that of another, but may be contained in

the union of the balls of several other voxels. Figure 3.2 shows a 5x5x5 cube with the distance

transform value at every voxel indicated by the number inside. Figure 3.2a shows the ball of

the black voxel, the voxels in the ball being marked gray. Figures 3.2b and 3.2c show the ball

of each of the other voxels marked black. All the black voxels are non-witness voxels, but the

ball of the black voxel in (a) is contained in the union of the other two. Hence, the non-witness

voxel marked black in (a) is not essential to the skeleton.

The identification of the essential non-witness voxels is not a simple problem. A brute force

approach which grows the ball of every non-witness voxel and checks it for inclusion in the balls

of all other voxels would be computationally expensive. It is however possible to check the 26-

neighborhood of every voxel and this is equivalent to checking for inclusion in all other balls.

Claim 1 The ball of a voxel � must be contained in the ball of one of its 26 neighbors if it is to

be contained in the ball of any other voxel in the object.


Proof: Consider voxel p, the ball of which is contained in the ball of some voxel

q. Then B(p) ⊂ B(q), q ≠ p. Therefore, there exists an uphill path from p to

q consisting of voxels of increasing distance transform value. Let this path go

through voxel p′, where p′ is a 26-neighbor of p. Since p′ is on the uphill path

from p, DT_p′ > DT_p.

Consider growing a ball around a voxel (Figure 3.3). We step through successive

voxels, reducing the distance transform by either d_F, d_E or d_V. Since the ball of q

contains the ball of p, consider a series of ball-propagating moves from q leading

to p. The reduced distance transform value DT_{q→p} when the ball propagation

reaches p must be greater than or equal to DT_p since B(p) ⊂ B(q). Therefore,

DT_{q→p} ≥ DT_p. The ball propagation path passes through p′,

∴ DT_p′ = DT_{q→p} + (d_F | d_E | d_V) since DT_{p′→p} = (d_F | d_E | d_V).

Therefore, it follows that DT_p′ ≥ DT_p + (d_F | d_E | d_V), which implies that the

ball of voxel p is contained within the ball of its neighbor p′. If p is a non-witness

voxel, this relation is a strict inequality. If we have the case where the ball of p

is contained in the union of the balls of voxels q′, q′′ and q′′′, we consider ball-

growing paths from each of these voxels which pass through neighbors p′, p′′ and

p′′′. By a similar argument as above, the ball of p must therefore be contained in

the union of the balls of p′, p′′ and p′′′. ∎

To find non-witness voxels, the 26-neighbors of all object voxels need to be scanned. Since

all non-witness voxels have maximal balls, no single neighbor’s ball completely contains the

ball of a non-witness voxel p. As illustrated above, it can however be contained in the union of

the balls of neighboring voxels. If such neighbors exist, then their balls have to be at least as

big as the ball of p, which implies that their distance transform value should be greater than or

equal to DT_p.

Rather than grow the ball for every neighbor and scan for containment in the union of balls,

a simple approach is to average the distance transform values of the neighbors n_i of p. The moti-

vation behind averaging the distance transform values of the neighbors is that if there are several


Figure 3.3: Growing the ball around a voxel

neighbors with a higher distance transform value, the ball of p could be contained in the union

of their balls. Therefore, if the mean of the neighbors' distance transforms, MNT_p, is close to

or greater than DT_p, we do not want to keep p in the skeleton.

Definition 4 MNT_p = (1/26) Σ_i DT_{n_i}, where n_i is a 26-neighbor of p.

We introduce the thinness parameter TP, which allows control over the removal of non-witness

voxels, yielding skeletons of varying density. TP determines how close MNT_p should be to

DT_p for p to be added to the skeleton.

Condition 1 If MNT_p ≤ DT_p − TP, add p to the skeleton.

The above condition requires that the distance transform of a voxel be greater than the mean

of its neighbors' distance transforms by at least TP for it to be included in the skeleton. A low

value of TP indicates that p is retained in the skeleton if its distance transform is slightly greater

than that of its neighbors. This results in a thick skeleton. A high value of TP means that for

inclusion in the skeleton, p must have a distance transform that is much greater than that of its

neighbors, resulting in a thinner skeleton. In a single pass over all the object voxels, skeletal

voxels are marked if they satisfy Condition 1.
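Condition 1 amounts to one pass over the object voxels. The sketch below is an illustrative Python version; treating background neighbors as contributing 0 to the mean is our assumption, since the thesis does not spell that case out.

```python
def skeleton(dt, tp):
    """Mark skeletal voxels by Condition 1: keep p when
    MNT_p <= DT_p - tp, where MNT_p is the mean of the 26
    neighbors' distance-transform values (absent neighbors,
    i.e. background voxels, are assumed to contribute 0)."""
    skel = set()
    for (x, y, z), v in dt.items():
        total = sum(dt.get((x + dx, y + dy, z + dz), 0)
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    for dz in (-1, 0, 1) if (dx, dy, dz) != (0, 0, 0))
        if total / 26.0 <= v - tp:     # MNT_p <= DT_p - TP
            skel.add((x, y, z))
    return skel
```

Raising `tp` keeps only voxels that dominate their neighborhood strongly, which is exactly the thick-to-thin behavior described above.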

Figure 3.4 illustrates the concept of the thinness parameter using a 2D weighted metric. The means of the neighbors' distance transforms are 4.0, 3.875 and 3.75 for the dark pixel in


Figs. 3.4a, 3.4b and 3.4c respectively. Therefore, the dark pixel will be included in the skeleton

only if TP is less than 2, 2.125 and 2.25 respectively. This shows that the center pixel increases

in importance as the shape boundary becomes irregular. Skeletons at thinness values of 0, 2,

4, and 6 for a 2D shape are shown in Figure 3.5. The squared Euclidean metric was used to

compute the Distance Transform.

Figure 3.4: Mean Neighbor Distance Transform (MNT) and its relation to the maximum thinness (TP). The dark voxel will be included in the skeleton if TP is less than DT − MNT (DT − MNT = 2.0, 2.125 and 2.25 in panels a, b and c).

The range of TP for a given metric can be computed by considering the maximum and mini-

mum possible distance transforms for the neighbors of a voxel. As an example, for the <3,4,5>

metric, the minimum sum of the neighbors' distance transforms for a voxel p would

be 6(DT_p − 3) + 12(DT_p − 4) + 8(DT_p − 5), giving a minimum mean min MNT_p =

DT_p − 4.077. Therefore, the maximum value of TP for the <3,4,5> metric is 4.077. Simi-

larly, the minimum TP can be computed by considering the maximum MNT_p. Such a condi-

tion occurs when p is part of a cluster of equal-valued local maxima. For practical purposes, the

minimum value of TP can be assumed to be zero, which occurs when all neighbors of a voxel

have a distance transform equal to the voxel's distance transform.

Similarly, the maximum thinness parameter can be computed for the Euclidean metric. The

minimum sum of neighbors' transforms would be 6(DT_p − 1) + 12(DT_p − √2) + 8(DT_p −

√3), giving a minimum mean min MNT_p = DT_p − 1.416. Therefore, the maximum value of

TP for the Euclidean distance transform is 1.416. Again, as shown above, the minimum useful

value of TP for the Euclidean metric is zero.
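Both bounds follow from the neighbor counts alone (6 face, 12 edge and 8 vertex neighbors) and are easy to verify numerically:

```python
from math import sqrt

# Minimum mean neighbor transform is DT_p minus the average local
# step over all 26 neighbors, so the maximum thinness parameter is
# that average step for the metric in question.
max_tp_345 = (6 * 3 + 12 * 4 + 8 * 5) / 26
max_tp_euclid = (6 * 1 + 12 * sqrt(2) + 8 * sqrt(3)) / 26

print(round(max_tp_345, 3))     # 4.077
print(round(max_tp_euclid, 3))  # 1.416
```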

The parameter-controlled skeletonization method outlined above thins the volume, keeping

only the voxels that satisfy Condition 1. It requires three passes over the voxels, two for

computing the distance transform and one pass to identify the skeletal voxels. The complexity


Figure 3.5: A maple leaf and its skeleton at various thinness values. Thinness is increasing fromleft to right.

is therefore O(N_obj) for the <3,4,5> metric, where N_obj is the number of object-voxels. For

the Euclidean metric, the complexity is expressed in terms of the total number of voxels in the

dataset. The runtime complexity is then O(N_tot), where N_tot is the total number of voxels in the

dataset. These skeletal voxels are not generally connected. However, since they were based on

dataset. These skeletal voxels are not generally connected. However, since they were based on

the criterion for reconstructibility, the skeletal voxels capture the essential shape properties of

the object. Applications that require a connected representation can further process the small set

of skeletal voxels in a manner desired for the application. Some such applications are described

in the following chapters.

3.2.3 Ball Growing

In order to reconstruct the object from the skeletal voxels, balls of radius equal to the distance

transform value have to be constructed, with their center at each of the skeletal voxels. In this

chapter, we describe a simple recursive strategy. A ball growing path is initiated from every

skeletal voxel and starts with a value equal to the distance transform at that voxel. If the weighted

metric is used, Every move to a neighboring voxel incurs a cost of either 3, 4 or 5 for face, edge

and vertex neighbors respectively, and the path value is decremented by the cost. For the Eu-

clidean metric, corresponding costs are 1, ¹ } and ¹ � respectively. Neighboring voxels then

serve as starting points for new ball growing paths with the decremented values. The ball grow-

ing function is invoked on each of the neighboring voxels using the decremented path value. A

path terminates when its value is less than or equal to 3 (weighted) or 1 (Euclidean) but greater

than zero. All voxels along every ball growing path are inserted into the reconstructed object.

The quality of reconstruction depends upon the thinness of the skeleton. There can be a loss of

boundary voxels when reconstruction is done from a very thin skeleton.
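The recursive strategy can be sketched as follows. This is an illustrative Python version under the description above; the function names are ours, and the termination test follows the stated rule that a path stops once its value is at most the face cost but still positive.

```python
from math import sqrt

def grow_ball(p, value, volume, costs):
    """Recursive ball growing from one voxel: every move to a
    face/edge/vertex neighbor costs costs[0]/costs[1]/costs[2];
    the path terminates once its value is <= the face cost."""
    volume.add(p)
    if value <= costs[0]:
        return
    x, y, z = p
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                if (dx, dy, dz) == (0, 0, 0):
                    continue
                c = costs[abs(dx) + abs(dy) + abs(dz) - 1]
                if value - c > 0:
                    grow_ball((x + dx, y + dy, z + dz),
                              value - c, volume, costs)

def reconstruct(skeletal_dt, metric="weighted"):
    """Union of the balls grown from every (voxel, DT) skeletal pair."""
    costs = (3, 4, 5) if metric == "weighted" else (1, sqrt(2), sqrt(3))
    volume = set()
    for p, v in skeletal_dt.items():
        grow_ball(p, v, volume, costs)
    return volume
```

Because the same voxel can be reached along many paths, the recursion revisits voxels heavily; this matches the observation below that the recursive approach only suits small volumes.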

This recursive approach works for small volumes. Due to a finite stack size, this approach is

not suitable for larger volumes. A faster, scan-fill based algorithm for reconstruction is described


in Chapter 6.

3.3 Results

In this section, we apply our algorithm to some example datasets. Further results are shown with

examples from medical visualization and from computational fluid dynamics.

Figure 3.6a shows a 9x9x9 cube. The skeleton is shown in Figure 3.6b for a thinness value

of 2.0. Using the skeletal points, the object is reconstructed as shown in Figure 3.6c.

Figure 3.6: Lossless reconstruction

Figure 3.7: Lossy reconstruction

Figure 3.7a is a digitized cylinder with height 12 and radius 6. The skeleton in Figure 3.7b

has been extracted using a thinness value of 1.6. Figure 3.7c shows the lossy reconstructed ob-

ject. Note how the ball growing process squares out the rounded corners because of the pseudo-

Euclidean metric, hence extra voxels are observed in the reconstructed object.

Figure 3.8 shows the skeleton for a cylindrical object with varying thinness parameter. The

sharp curvature discontinuity at the faces of the cylinder is responsible for the multiple spikes,

which increase in density as the thinness parameter is reduced. From this figure, it is clear that

high thinness values like 2.0 can be used to generate very thin skeletons, as required in centerline


Figure 3.8: Effect of the Thinness Parameter

generation. Applications which need more shape information would need a thicker skeleton

which can be obtained with thinness values of 1.5 or lower. The various skeletons shown could

also be used to get varying degrees of reconstruction.

Figure 3.9 is a comparison of the skeletons extracted using the weighted <3,4,5> metric

and the Euclidean metric. Skeletons are shown for the Ellipsoid1 and Visible Male volumes (see Table 3.1).

For each object, a thinness value was used that gave an equal number of voxels for the weighted

and Euclidean metrics. Figure 3.9 shows an iso-surface of the original object to the left, the

weighted skeleton in the center and the Euclidean skeleton at the right. For the ellipsoid, notice

that the Euclidean skeleton is more regular compared to the weighted skeleton, which has some

protuberances. For the Visible Male volume, the Euclidean skeleton is smoother and better cen-

tered compared to the <3,4,5> skeleton. Another advantage of the Euclidean skeleton is in

improved reconstruction for the same number of skeletal voxels. This improvement is demon-

strated in Chapter 5 of the thesis.

An important issue is the dependence of the skeleton and the reconstructed shape on the thin-

ness parameter. Figure 3.10 shows the relation between the thinness parameter and the number

of voxels in the skeleton using the Euclidean metric. Results for the weighted metric are similar.


Figure 3.9: Skeletons for a volumetric ellipsoid and the Visible Male using the weighted <3,4,5> metric (center) and the Euclidean metric (right).

We skeletonized three different volume models: the ellipsoid and Visible Male (Figure 3.9)

and a volumetric dragon. From the plot in Figure 3.10, it is clear that the number of voxels in

the skeleton falls off exponentially with an increase in the thinness parameter. The falloff rate

depends upon the complexity of the shape. Regular shapes such as the ellipsoid exhibit sharp

changes in the curve. This is due to the fact that the distance field is very regular for the ellipsoid;

therefore there are clusters of voxels with similar values of DT_p − MNT_p. These clusters are

culled at specific thinness parameters resulting in sharper changes. Complex shapes such as the

dragon and the Visible Human have a smoother falloff due to the presence of a range of values

for DT_p − MNT_p. The plots for both the dragon and the Visible Male exhibit a similar profile.

Figure 3.10: The dependence of skeletal voxels on the thinness parameter (percentage of skeletal voxels, log scale, versus thinness parameter for the Ellipsoid, Dragon and Visible Human datasets).


Chapter 4

Shape Manipulation

With volumetric datasets becoming more widespread, and the computational power to render

these datasets being available, volume graphics is gaining momentum. The goal of volume

graphics is to replace the traditional polygon-based graphics pipeline with a volume-based pipeline.

This includes volume modeling, animation and rendering. Tools for deforming and animating

volumetric objects have applications in scientific and medical visualization and, lately, in com-

mercial animation where computational simulations and volumetric models are used for gener-

ating realistic effects.

However, because the volumes are large and rendering is non-interactive, manipulating vol-

umetric objects is difficult and existing methods for volume animation are non-intuitive. Volume

modeling and animation in non-scientific applications is sparse. Some existing approaches con-

vert volume models to polygonal models, and perform the deformations and animations in the

polygonal domain. Other approaches involve free-form deformation which is difficult to con-

trol or physically-based animation which is computationally prohibitive. This chapter discusses

existing approaches to volume deformation and animation.

4.1 Iso-Surfacing for Volume Manipulation

Most realistic animation today is done using surface-based models. One approach to volume

manipulation is therefore to extract a surface from the volume and animate that surface. March-

ing Cubes [56] is a popular method to extract the surface of volume models. A threshold inten-

sity is specified as input to the Marching Cubes algorithm. The method then computes an iso-

surface of the volume at that threshold intensity and triangulates the iso-surface. Once a trian-

gulated model is available, traditional deformation and animation techniques can be used. How-

ever, the expressiveness allowed by volumetric models is lost in such an approach.


Furthermore, the iso-surfacing process incurs fitting errors and generates a large number of

primitives. The choice of the threshold can affect the shape of the isosurface. In many cases,

a single threshold might not be sufficient to describe the surface of the desired shape. A large

number of triangles is generated by iso-surface fitting. This increases the computational cost

for deformation and animation. Finally, the resultant deformed shapes are no longer in the vol-

umetric domain, therefore they cannot be volume rendered, and internal structures cannot be

visualized.

4.2 Direct Volumetric Shape Manipulation

A volume model differs from surface-based models in that the actual 3D raster is available as

voxels. Therefore, a volume model is essentially a three-dimensional image. Consequently,

existing methods from 2D image processing and warping can be adapted to manipulate volume

models.

Most image warping operations work at the pixel or block level to achieve transformations

like rotation and stretching. The notion of specific objects in the image does not exist for such

global warping. In contrast, image morphing techniques warp objects in a source image to those

in a target image. Beier and Neely [3] describe one such image morphing technique. Feature

points are selected in the source and target images, and a transformation is computed between

corresponding feature points. Smooth interpolation of this transformation morphs the source

object into the target object.

Lerios and others extended Beier and Neely’s technique to 3D volume morphing [54]. More

recently, Fang et al. have used the same idea for deformable volume rendering [25]. They

include the warping as part of the rendering process. Rendering is done using 3D texture map-

ping hardware. Therefore, their solution is faster than Lerios’ method for volume morphing.

Both these methods are feature-based and require a source and target object. Therefore these

techniques are not suitable for general volume deformation. Other work on volume morphing

has also been done by He and Kaufman [38] using a wavelet transform. Hughes [43] describes

an innovative method to morph volume objects in the Fourier domain. A Fourier description

of the volume is not always available. While these methods cannot be used in the context of


animating a volumetric object since a target model is not available, they could be used for in-

betweening key frame objects generated by other deformation techniques.

Other work in volume modeling/deformation involves applying a transformation to every

voxel in the object (free form deformation) or defining a physical model for the full object (physically-

based deformation). The computations can include spring-like models, continuum models [33]

and finite element methods (FEM) [19]. Gibson and Mirtich [33] have presented a survey of

work done in modeling deformable objects.

Physically-based animation is used for realistic modeling of collision and deformation. Chen

et al [19] propose a volumetric mass spring model and a FEM model for animating volumet-

ric objects. Gibson [32] has suggested the 3D Chain Mail algorithm to propagate deformation

through a volume rapidly. These are sophisticated techniques requiring specification of material

properties like elasticity and are sometimes an overkill for animators interested in simple and

fast articulated deformation.

Kurzion and Yagel [48] have proposed a method where volumes are not directly deformed,

but the rays used to render the object are deformed. Light rays are deformed by a deflector,

which is a vector of gravity positioned in space with a spherical field of influence. The deforma-

tion is realized by the renderer, thus tying the deformation mechanism to the rendering scheme.

The method is not completely intuitive for animators and computational time is proportional to

the number of deflectors (which can be large).

Zhongke and Prakash have described a method for volume deformation [88], and have ap-

plied it to the Visible Human volume dataset. They first divide the volume into voxel clusters

by manual selection of regions on 2D slices. Each cluster is approximated by a polyhedron.

The small number of polyhedrons thus obtained are physically animated using a FEM solver.

Forces and constraints are defined at the boundary of each polyhedron and the finite-element

solver computes joint deformations. Finally, 3D texture-mapping is used to fill in the polyhedral

model with the voxel cluster. While the computation cost for animation is somewhat reduced

in this approach by using a simple polyhedral approximation to the human, their method still

requires the specification of material properties and forces which makes it difficult to achieve

realistic humanoid animation.


Another volume modeling technique is implicit modeling [9, 10, 11, 14]. In implicit model-

ing, primitives (points, lines, surfaces, etc.) of an implicit model can be treated as its skeleton.

There are problems with animating the implicit skeleton. These include coherence loss when

two primitives from the set composing a character are placed too far from each other, unwanted

blending when two unblendable parts of an object, e.g. hands and legs, are placed too close to

each other and blended, and volume variation.

In order to use the implicit modeling approach to animation, an existing volume object has

to be first fitted with an implicit model. Fitting an accurate model is hard and the reconstruction

process has to deal with artifacts like bulges for branching shapes [10]. Ferley et al. [26] have

proposed a method for implicit reconstruction of branching shapes from scattered points. They

use a Voronoi method that yields a complex geometric skeleton consisting of lines and polygons.

A pruning algorithm is thus necessary to remove skeletal elements that have no perceptual sig-

nificance. The simplification is lossy with respect to surface detail, and discontinuities on the

surface are hard to model. The skeletonization process is O(n log n), where n is the number

of scattered points, and has to address numerical issues involved in computing the Voronoi di-

agram for arbitrary point sets. Furthermore, computing a good implicit function is non-trivial,

and involves computing the zeros of a high degree polynomial for each skeletal point [11]. Tech-

niques for reconstruction include ray tracing, scan conversion and polygonization, which usu-

ally involve substantial floating point computation [14, 13, 86].

To summarize, existing methods for volume deformation and animation treat the volume as

a 3D raster and attempt to use complex, physically-based methods for warping the raster. These

methods are non-intuitive and it is difficult to control the target pose. Our aim is to achieve

realistic animation that equals the state of the art in surface-based animation. Current animation

tools use a simple skeleton approximation for the animation of surface-based models. In the

following chapters, we will describe an intuitive volume animation method that is based on our

parameter-controlled skeleton which can work with existing commercial animation packages.


Chapter 5

Traditional Character Animation

This chapter describes current techniques for realistic character animation using computer graph-

ics. All commercial animation tools such as Alias PowerAnimator (now Maya), 3D Studio

MAX and Softimage can animate surface-based models. These tools integrate modeling, ma-

nipulation and rendering into a single package.

A complete animation comprises a series of frames. These frames are played back at a rate

of 30 per second (typically) to give the illusion of continuous motion. A short animation lasting

a few minutes has thousands of frames. Each frame is a rendering of a three dimensional scene.

A scene is composed of objects (models), lights and a camera. The position, direction and field

of view (FOV) of the camera determine the portion of the scene that is rendered.

In current animation packages, objects are modeled as surfaces. Models are represented in a

variety of ways; polygon meshes, Bezier patches, NURBS and subdivision surfaces [36] are the

most common surface modeling primitives. Polygon meshes are defined as a set of vertices and

faces. Properties such as a surface normal, color and material are defined at vertices and inter-

polated for rendering. A high fidelity polygonal model can have several thousand to a million

polygons which makes it difficult to edit such a model. Bezier patches and NURBS surfaces

are defined via a series of control points. The number of control points is much smaller than the

number of polygons that would be needed for an equivalent polygonal model. Control vertices

are moved to affect the shape of the model. For hardware rendering, the patches are tessellated

into polygons. Many software raytracers can directly render NURBS patches.

Various techniques are available to aid the task of character deformation and animation. Free

form and procedural animation are difficult to control. Limited realism can be achieved with

these approaches. The most common approach for deforming a character model into a new pose

uses a simplified skeleton abstraction of the model. This process is described in the next section.


5.1 Skeleton-Based Animation

Animation of a model where the surface is directly manipulated is quite difficult. An articulated

skeleton abstraction 1 is commonly employed to simplify the task. The articulated skeleton con-

sists of bones and joints. These bones and joints approximate the basic shape and degrees of

freedom of the character being animated. Figure 5.1 shows an example of a typical skeleton.

Figure 5.1a shows the bone and joint structure. Bones are deformed into a running pose which

causes surface points to deform similarly. The results of deformation are shown in Figure 5.1b

and c.


Figure 5.1: An articulated skeleton consisting of bones and joints used for character animation.

The task of defining an articulated skeleton is a non-trivial one. The articulated skeleton

should closely approximate the actual shape of the model. The placement of joints can affect

the overall quality of animation; incorrect placement of joints can result in undesirable stretching

artifacts.

1To disambiguate the skeleton used in animation from the parameter-controlled skeleton, we will use the term articulated skeleton in the context of animation.


Once a suitable articulated skeleton has been defined by the animator, surface mesh vertices

or control points must be attached to the bones so that they can move as the skeleton deforms.

This process is called skinning. According to Simerman [78], setup and skinning of the articu-

lated skeleton can consume a significant portion of the total production time for an animation.

During skinning, each bone is attached to a set of surface vertices. Shape envelopes are used

to aid this process. A shape envelope is a region in space that is defined around a bone. All

vertices in the shape envelope for a bone are attached to that bone. Furthermore, each vertex

has a bone-weight. This weight determines the contribution of the movement of the bone to

the movement of the vertex. A bone-weight of 100% implies that the vertex follows the bone

rigidly.
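The bone-weight scheme described above corresponds to what is commonly called linear blend skinning. A minimal 2D Python sketch, assuming bones are rigid rotation-plus-translation transforms (the function name and representation are choices of this sketch, not the tools discussed in the text):

```python
import math

def skin_vertex(vertex, bones, weights):
    """Blend a vertex position over several bone transforms.
    bones: list of (angle_radians, (tx, ty)) rigid 2D transforms;
    weights: per-bone weights summing to 1.  A weight of 1.0 (100%)
    means the vertex follows that bone rigidly, as described above."""
    x, y = vertex
    bx = by = 0.0
    for (angle, (tx, ty)), w in zip(bones, weights):
        c, s = math.cos(angle), math.sin(angle)
        # apply this bone's rigid transform, scaled by its bone-weight
        bx += w * (c * x - s * y + tx)
        by += w * (s * x + c * y + ty)
    return (bx, by)
```

A vertex weighted 50/50 between two bones ends up at the average of the two rigidly transformed positions, which is why poorly assigned weights near joints can produce the wrinkling artifacts mentioned below.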

An example of skinning is shown in Figure 5.2.2 The mesh is shown in white, the bones

are shown in yellow. Two shape envelopes, an inner and outer envelope are shown as red and

brown wireframes. Vertices are colored according to their bone weights for the thigh-bone. Blue

vertices are the least weighted, green and yellow are moderately weighted and red vertices are

highly weighted. Creation of bone-weights is an iterative process. As the articulated skeleton is

deformed, wrinkles might appear on the surface if weights are not appropriately assigned. Often,

bone-weights have to be reassigned if the deformed pose is very different from the default pose.

For example, the shoulder joint, which allows a full 360° range of rotation, is extremely difficult

to animate.

Since a typical animation can have thousands of frames, setting up the articulated skeleton

for each frame is still an enormous job. Techniques such as keyframing, inverse-kinematics and

motion capture can speed up the process. The following sections discuss these techniques.

5.2 Keyframing

Keyframing is the process of assigning values to parameters at specific moments in time – that is,

to specific frames in an animated sequence. These values are then interpolated automatically by

the application thus reducing the effort that would be involved if each frame were to be manually

specified.

2These images are courtesy of webreference.com.


Figure 5.2: Attaching surfaces to the articulated skeleton.

The most important parameters to be keyframed are the transformations of objects, the cam-

era, and lights. Thus all objects in the scene can be scaled (resized), rotated and translated

(moved) in the course of the animated sequence. Lights can be translated and rotated (if they

are directional lights). The rendering camera can also be translated and rotated, providing the

freedom of camera movement characteristic of motion pictures. In addition, keyframes can also

be specified based on the pose of the articulated skeleton. Given a source and target pose as

keyframes, joint angles can be interpolated to generate intermediate poses. This process is also

called in-betweening.

Most animation packages allow a combination of parameters to be keyframed simultane-

ously. Surface material characteristics of an object, the color or intensity of a light, the zoom

ratio of the camera, and even the geometry of objects can be keyframed.

The application interpolates between the keyframes, creating the in-between frames

when rendering. The control of this process of interpolation is very important in creating effec-

tive animation. Interpolation can occur in both space and time. For example, most applications

will create a curved path between pose keyframes where possible. The speed of the interpolation

may be curved as well, so that the change begins slowly, speeds up, and slows down into the next

keyframe. Control over the speed of interpolation can be used to convey the notion of physics,

for instance when an animated character stretches under the effect of gravity or is squished on

impact with the ground [51].
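The eased speed control described above can be sketched with a smoothstep timing curve. This is one common choice of easing function, not necessarily the curve used by any particular package, and the function names are hypothetical:

```python
def ease_in_out(t):
    """Smoothstep timing curve on [0, 1]: change begins slowly,
    speeds up, and slows down into the next keyframe."""
    return t * t * (3.0 - 2.0 * t)

def interpolate(key_a, key_b, t):
    """Scalar value between two keyframed values at normalized time t
    in [0, 1], using eased rather than constant speed."""
    s = ease_in_out(t)
    return key_a + s * (key_b - key_a)
```

At t = 0.1 the eased value has covered well under 10% of the range, which is exactly the slow start an animator uses to suggest inertia.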


5.3 Inverse-kinematics and Motion Capture

While keyframing can reduce effort in creating an animated sequence, the realism of the motion

depends on the skill of the animator. An articulated skeleton has about twenty joints for typical

humanoid animation. Therefore, explicit specification of every joint angle for each frame can

be very tedious.

Figure 5.3: A joint hierarchy with an inverse-kinematics handle

Inverse-kinematics [47] can be used to automatically compute joint angles. Joints in the

articulated skeleton are arranged in a hierarchy such that movement of a parent-joint affects all

child-joints below it in the hierarchy. An inverse-kinematics (IK) handle is added between a

leaf joint and a joint above it in the skeletal hierarchy. This is illustrated in Figure 5.3. Two

bones (shown in blue) with three joints A, B and C are connected into a skeleton hierarchy. An

IK handle (red) is created between joints A and C. When joint C is moved with joint A being

fixed, the position for joint B is automatically computed based on kinematic constraints. In this

manner, inverse-kinematics allow goal-directed pose selection where moving a hand to a target

computes elbow and shoulder angles automatically. Constraints can also be used to limit the

range of joint angles to what can be physically achieved.
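For a two-bone chain in a plane, the goal-directed computation described above has a closed form via the law of cosines. The following Python sketch (hypothetical function name; angles in radians, chain rooted at the origin) illustrates the idea, not any particular package's IK solver:

```python
import math

def two_bone_ik(target, l1, l2):
    """Analytic 2D inverse kinematics for a two-bone chain rooted at the
    origin: returns (shoulder_angle, elbow_angle) in radians so the end
    of the chain reaches `target`, or None if it is out of reach."""
    tx, ty = target
    d = math.hypot(tx, ty)
    if d > l1 + l2 or d < abs(l1 - l2):
        return None  # kinematic constraint: target unreachable
    # law of cosines gives the elbow bend
    cos_elbow = (d * d - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    # shoulder angle = direction to target minus the offset due to the bend
    shoulder = math.atan2(ty, tx) - math.atan2(l2 * math.sin(elbow),
                                               l1 + l2 * math.cos(elbow))
    return (shoulder, elbow)
```

Real skeletons chain many such joints, and general IK solvers work numerically, but the sketch shows how a target position alone determines the intermediate joint angles.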

Extremely accurate human animation is particularly difficult. Motion-capture is a powerful

technique for recording motion data. A motion-capture system consists of a collection of sen-

sors that simultaneously feed position and orientation data into a computer. These sensors are


attached to a live human actor which enables the actor’s movements to be recorded in real time.

This captured motion can be reapplied to a virtual character to achieve very realistic animation.

There are several challenges in motion capture. The placement of sensors on the human ac-

tor is important. Too few sensors may not give good results, while too many sensors can quickly

produce too much data. Noise in the motion data is also an important issue; filtering which in-

volves some smoothing is done to reduce the noise. Also, the articulated skeleton must be de-

signed to match the sensor locations. Creation of correctly-proportioned skeletons is non-trivial

and the data from the real world must be scaled and offset to fit the proportions of the virtual

character.

Our goal in this work is to use a volumetric skeleton for the realistic animation of volume

models. We would like to use commercial animation packages and leverage keyframing, IK

and motion capture in the volumetric domain. The next chapter describes a method to use the

parameter-controlled thinning technique to extract a volumetric skeleton which is then used for

realistic volume animation.


Chapter 6

Skeleton-based Volume Animation

As described in the previous chapter, articulated skeletons are often used for deformation and

animation [81]. In this chapter, we propose using our volumetric skeleton to control the anima-

tion and deformation of volumetric objects.

Our method works as follows. First, the volume is skeletonized using the parameter-controlled

thinning algorithm (Chapter 3). A volumetric skeleton at some suitable thinness parameter is

chosen for animation. The voxels in the volumetric skeleton are connected into a skeleton-tree

which can be manipulated like standard articulated skeletons, i.e., deformed and animated. We

also store the distance value at every skeletal voxel. A deformed object can therefore be recon-

structed from the deformed skeleton-tree using these distance values. There are three steps in

skeleton based animation: computation of the skeleton and the skeleton-tree (skeletonization),

animation of the skeleton (deformation) and, finally, regeneration of the object from the ani-

mated skeleton (reconstruction).

The volume deformation pipeline is illustrated in Figure 6.1 using a volumetric ellipsoid.

The final deformed ellipsoid is a result of subjecting the deformed skeleton-tree to volume re-

construction. The advantage of using a completely volumetric method is that we avoid errors

introduced by fitting surfaces or geometric primitives to the native form of volumetric data. Fur-

thermore, the method is computationally efficient, and functionally intuitive.

6.1 The Skeleton-Tree

The skeletal voxels extracted by the parameter control technique described in Chapter 3 are not

in a format easily amenable to manipulation. In general, the voxels are not connected. A low


Figure 6.1: The Volume Deformation Pipeline

thinness parameter can be used to force a thicker skeleton and voxel connectivity. A thick skele-

ton, however, defeats the motivation of manipulating a thin reduced version of the original ob-

ject. For a typical volume animation, the animator uses an interactive tool for changing the thin-

ness parameter to browse through several skeletons and choose the one most suited to the task.

Adding extra voxels to enforce connectivity of the skeleton would change this chosen skeleton

and is therefore not desirable.

A simple solution is to connect the skeletal voxels by line segments. Each skeletal voxel

can then be considered to be an articulation node which can be transformed or can be used to

define constraints on the motion of other nodes connected to it. A skeleton hierarchy can also be

defined in terms of these articulation nodes and the line segments attached to them. Therefore,

a connectivity scheme that uses line segments to connect skeletal voxels results in a versatile

and intuitively deformable skeletal structure.

Connection of skeletal voxels can be done either automatically or manually. Automatic

methods work well for simple shapes with few skeletal voxels. They yield a consistent connec-

tivity which can be easily replicated. On the other hand, manual specification of voxel connec-

tivity can achieve precise placement of joints. We first describe an automatic method to deter-

mine the connectivity of skeletal voxels followed by a manual connectivity method for realistic

animation.

6.1.1 Automatic Connectivity

We exploit two kinds of coherence in the properties of skeletal voxels to automatically derive

connectivity information. They are summarized in the following observations:

Observation 2 Spatial Coherence: Skeletal voxels which are close to each other are more likely

to be connected than those that are far.


Observation 3 Value Coherence: Skeletal voxels whose distance transform values differ slightly

are more likely to be connected than voxels with widely differing distance transform values.

Observation 4 Containment: Connections between skeletal voxels should not cross object bound-

aries.

The first observation is self-explanatory. The second observation can be explained by the fact

that the distance transform is a measure of the distance of the voxel from the closest boundary.

A higher distance transform implies a voxel which is in the interior of the object, while a lower

distance transform value implies that the voxel is close to a boundary. Since the skeleton is

centered within the object, voxels close to the center must be connected to other voxels close to

the center and similarly for voxels near the boundary.

We use the skeletal voxels to create a weighted undirected graph. Every skeletal voxel is a

vertex in the graph and every vertex is connected to every other vertex by edges to form a fully

connected graph. Edge weights are computed using a linear combination of the spatial and value

coherence. For a graph edge going from voxel v_i to voxel v_j, the edge weight is computed as

w(i, j) = α · dist(v_i, v_j) + (1 − α) · |DT(v_i) − DT(v_j)|,   0 ≤ α ≤ 1.

In the above equation, dist(v_i, v_j) is the spatial distance between voxels v_i and v_j,
DT(v_i) and DT(v_j) are the distance transform values of voxels v_i and v_j, and α is the connectiv-

ity parameter which specifies the relative importance of the spatial and value coherence. It can

be reasoned that spatial coherence is the primary factor for voxel connectivity because voxels

which are far apart but have similar distance transform values should not be connected in favor

of spatially closer voxels with almost similar distance transform values. Value coherence is a

secondary factor which is used to decide between locally competing voxels which are almost the

same distance apart. Useful values of the connectivity parameter α are therefore in the range of

0.8 to 0.95.

The spanning tree of a graph A is a connected sub-graph with no cycles which has the same

set of vertices as A [20]. The minimum spanning tree of a weighted graph is the spanning tree

whose edges sum to minimum weight. Therefore, the minimum spanning tree (MST) of the


Figure 6.2: The Skeleton Tree, with increasing spatial coherence (i.e. decreasing value coherence): a. α = 0.5, b. α = 0.75, c. α = 0.95, where α is the connectivity parameter.

above graph will give an acyclic connected set of skeletal voxels. Each edge in the MST is used

to create a line segment between skeletal voxels. In this manner, a connected tree structure con-

sisting of skeletal points (vertices) and connecting line segments (edges) is generated automat-

ically from the original volume object. We call this tree the Skeleton Tree [27]. Since the MST

is a well known data structure, fundamental algorithms can be used to manipulate and traverse

the skeleton-tree. The skeleton-tree for three different values of the connectivity parameter is

shown in Figure 6.2. Note that in Figure 6.2a and b the effect of value coherence can be seen

in the form of slanting lines which connect the horizontal voxels at the sides to the ones at the

center.

Even though the number of skeletal voxels is a small fraction of the voxels in the original

volume object, there can be several hundred voxels in the skeleton. With n skeletal voxels,

there are O(n²) edges in the fully connected graph, hence computation of the MST is O(n²). Since

spatial coherence is the dominating factor, one optimization in the MST computation is to create

a sparser graph by connecting edges only between voxels that are spatially closer than a certain

threshold distance.

Algorithm for the Skeleton-Tree

Let S be the set of skeletal voxels, α be the connectivity parameter and D be the maximum
absolute distance between skeletal voxels for an edge to be created in the graph G. Object voxels
in the volumetric object are denoted by O.

Initialize empty graph G
For all voxels p in S
    For all voxels q in S, q ≠ p and dist(p, q) ≤ D
        Compute all voxels r_i in the discrete line L joining voxels p and q
        If all r_i are in O
            Add edge p-q to G
            Compute w(p, q) = α · dist(p, q) + (1 − α) · |DT(p) − DT(q)|
Extract the minimum spanning tree T of graph G.
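The connectivity scheme above (a graph with coherence-weighted edges, followed by MST extraction) can be sketched in Python. The function name, the use of Kruskal's algorithm, and the omission of the containment (line-of-sight) test are choices of this sketch, not the thesis implementation:

```python
import math

def skeleton_tree(voxels, dt, alpha=0.9, max_dist=None):
    """Connect skeletal voxels into a tree: build a graph whose edge
    weights combine spatial and value coherence, then take its MST.
    voxels: list of (x, y, z); dt: matching distance-transform values;
    alpha: connectivity parameter.  The containment test is omitted."""
    n = len(voxels)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(voxels[i], voxels[j])
            if max_dist is not None and d > max_dist:
                continue  # sparse-graph optimization from the text
            w = alpha * d + (1.0 - alpha) * abs(dt[i] - dt[j])
            edges.append((w, i, j))
    edges.sort()
    parent = list(range(n))  # union-find for Kruskal's algorithm
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:  # edge joins two components: keep it in the MST
            parent[ri] = rj
            tree.append((i, j))
    return tree
```

For three collinear voxels the long diagonal edge is rejected in favor of the two short ones, giving the acyclic connected structure described in the text.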

Once skeletal voxels have been connected, the skeleton-tree can be deformed either proce-

durally or using off the shelf animation software tools. This is done by labeling some voxels

in the skeleton-tree as joints and moving all voxels connected to the joint together as a limb.

For complex objects, precise connectivity of skeletal voxels into limbs is desirable. The next

subsection introduces the articulated skeleton and an alternate connectivity method which allows

such control over the connectivity.

6.1.2 Articulated Skeleton

For complex objects, such as the Visible Male volume, the number of lines (around 20,000) in the

skeleton-tree is too large for interactive manipulation in most animation packages. Users need

to freely control the connectivity of the skeleton to achieve realistic animation. For an animator,

connectivity is not simply a matter of closeness but of artistic interpretation. This is especially

true for “humanoid” animation where body parts move together around known joints. In this

case the animator needs control over the connectivity process. Such control cannot be achieved

via an automated process. Therefore we create the skeleton-tree in two passes.

In the first pass, a user selects about 25 voxels labeled as joints from among the voxels of

the model. This joint selection is an interactive process. Pairs of joints are connected manu-

ally via lines into bones. We call this sparse combination of joints and bones the articulated

skeleton. Our articulated skeleton is the same as that used in traditional animation as de-

scribed earlier in Chapter 5. The articulated skeleton can be imported into animation packages

and deformed using traditional skeletal animation techniques.

In the second pass, the remaining skeletal voxels are grouped into limbs and attached to

their corresponding joints in the articulated skeleton. We can apply the same MST algorithm de-

scribed in the previous subsection to automate this connectivity. The graph is created such that


all edges originate from one of the 25 joints in the articulated skeleton. Therefore, when we

extract the MST, all voxels of the volumetric skeleton are connected to some joint in the artic-

ulated skeleton. In practice, this output from our automated connectivity algorithm needs to be

corrected by reassigning some skeletal voxels to a new joint for precise control over limbs. This

is similar to the process of reassigning control vertices in surface-based character animation.

Figure 6.3 shows an articulated skeleton (red) along with the volumetric skeleton (black).

The figure on the right shows all voxels in the volumetric skeleton connected to those in the

articulated skeleton via lines (gray). When the articulated skeleton is manipulated, the transfor-

mation that each of its edges undergoes is applied to all the black points connected to the root

node of that edge. In this manner, the cloud of voxels forming the volumetric skeleton is moved

to correspond to the movement of the articulated skeleton. An interactive tool has been written

which helps the animator choose points for animation while viewing the volume.

Figure 6.3: An articulated skeleton (red) is defined by the animator. All points in the volumetricskeleton (black) are then connected to the articulated skeleton. Only the articulated skeleton hasto be manipulated for animation.

The skeleton-tree is intuitive because it suggests the shape of the object. A key feature of the

skeleton-tree is that it can be imported into traditional animation environments and animated,

allowing animators to use their existing library of motion control tools, like motion capture,

parametric key frame animation and constraint-based inverse-kinematics. This integration with


animation tools is detailed in the next chapter.

6.2 Volume Reconstruction

Once various frames of the deformed skeleton-tree have been created, each deformed volume

has to be reconstructed. Volume reconstruction is the last step in our volume deformation pipeline.

A simple recursive algorithm for this purpose was described in Chapter 3. In the following sub-

section, we describe a scan-filling approach to volume reconstruction for both binary and sam-

pled volumes.

6.2.1 Binary Reconstruction

Every skeletal voxel has an associated distance transform value. Voxel spheres of radius equal to

the distance transform value are grown around every skeletal point. These spheres are sufficient

to reconstruct the original object provided the skeleton is sufficiently thick. Overlapping spheres

also do not create bulging or breaking artifacts near bends if there are enough skeletal voxels.

We use a scan fill technique to grow these spheres. Bounding box extents are computed

from the distance transform, and each voxel within the bounding box is tested for inclusion in

the sphere. Distance computations are done using the <3,4,5> or Euclidean metric as in

the skeleton extraction step. The algorithm for the <3,4,5> metric is illustrated below. The

algorithm for the Euclidean metric is easier. The distance computation in the pseudo-code below

is replaced with the actual Euclidean distance value.

Algorithm:

Compute the bounding box (BBOX):
    SZ = ceil(DT(p) / 3)
    Define BBOX to lie between
    (x_p − SZ, y_p − SZ, z_p − SZ) and (x_p + SZ, y_p + SZ, z_p + SZ)
For every point Q = (x, y, z) in BBOX do
{
    Compute the distance from Q to p:
        xd = |x_p − x|
        yd = |y_p − y|
        zd = |z_p − z|
        vsteps = min(xd, yd, zd)
        fsteps = max(xd, yd, zd)
        esteps = (xd + yd + zd) − vsteps − fsteps
        dist = 5 · vsteps + 4 · (esteps − vsteps) + 3 · (fsteps − esteps)
    If (dist ≤ DT(p)) then point Q is in the sphere.
}

In the above algorithm, N_vertex, N_edge and N_face are the number of steps taken to vertex, edge and face neighbors respectively. Computing the distance between two points using a weighted metric can be thought of as taking successive steps to neighbors. The idea is to first take as many steps to vertex neighbors as possible, since this is similar to the diagonal distance between points. When vertex steps are exhausted, the minimum distance can be traveled by moving to edge neighbors taking edge steps, and finally face steps. Due to the use of a weighted metric, the reconstruction fill step does not use floating-point or square-root operations at all, and thus allows a very fast implementation.
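As an informal illustration of this integer-only distance test (a sketch under the assumptions above, not the thesis implementation; the function names are ours, and the distance transform value is assumed to be an integer), the <3,4,5> metric and the sphere scan-fill can be written in Python:

```python
def chamfer_345(p, c):
    """Weighted <3,4,5> distance between voxels p and c.

    Face steps (one axis) cost 3, edge steps (two axes) cost 4 and
    vertex steps (three axes) cost 5 -- integer arithmetic only.
    """
    dx, dy, dz = sorted(abs(a - b) for a, b in zip(p, c))
    # After sorting dx <= dy <= dz: take dx vertex steps, (dy - dx)
    # edge steps and (dz - dy) face steps to cover the displacement.
    return 5 * dx + 4 * (dy - dx) + 3 * (dz - dy)

def voxels_in_sphere(center, dt):
    """Scan-fill: voxels whose <3,4,5> distance to center is <= dt (an int)."""
    sz = -(-dt // 3)  # ceil(dt / 3): face steps are the cheapest, cost 3
    cx, cy, cz = center
    return [(x, y, z)
            for x in range(cx - sz, cx + sz + 1)
            for y in range(cy - sz, cy + sz + 1)
            for z in range(cz - sz, cz + sz + 1)
            if chamfer_345((x, y, z), center) <= dt]
```

For example, a vertex neighbor is at distance 5, a face neighbor at distance 3, so a sphere of radius 3 contains only the center and its six face neighbors.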

The quality of reconstruction depends on the number of voxels in the volumetric-skeleton.

Better reconstruction is achieved by using a lower thinness parameter for the volume thinning

step. An important issue is the value of the thinness parameter that must be used for a desired

quality of reconstruction. An animator can preview a polygonal model made of spheres which

can be rapidly generated for a chosen volumetric-skeleton. This process takes from a few sec-

onds to a couple of minutes depending on the number of points in the volumetric-skeleton. Fig-

ure 6.4 shows two different volumetric-skeletons and their reconstructed models. The volumetric-

skeleton on the left has 42298 voxels which is about 0.33% of the 12.8 million voxels in the orig-

inal model. To the right, a thinner volumetric-skeleton is shown with 9666 voxels. The quality


of reconstruction is better for the skeleton on the left.

Figure 6.4: The quality of reconstruction is better for a thicker volumetric-skeleton. The skeleton on the left has 42298 points; its reconstructed model is shown next to it. The thinner skeleton to the right has 9666 points; its reconstructed model is to the extreme right.

We also compare the weighted <3,4,5> and Euclidean metrics with respect to the quality

of reconstruction achieved. The Euclidean metric is exact; therefore, it is expected to yield better

reconstruction compared to the weighted metric. A volumetric dragon, and the ellipsoid and

Visible Male from Figure 3.9 are used for this test. For instance, the ellipsoid is skeletonized

using a thinness parameter of 1.5 for the weighted transform, and a thinness parameter of 0.577

for the Euclidean transform. Both skeletons have 1505 voxels. After reconstruction, there is a

loss of 3.42% of the voxels for the weighted <3,4,5> metric and a loss of 0.31% for the Euclidean

metric indicating the superiority of the Euclidean metric for skeletonization and reconstruction.

This reconstruction loss is measured as the difference in the total number of voxels between

the original ellipsoid and the reconstructed ellipsoid. Table 6.1 summarizes the results of this

comparison for all three objects.

Volume rendered results of weighted and Euclidean reconstruction are shown in Figure 6.5.

Images in the left column show the original ellipsoid and Visible Male models. The center col-

umn shows weighted reconstruction while the images in the right column show Euclidean recon-

struction. Reconstruction times from Table 6.1 indicate that Euclidean reconstruction is faster

than weighted reconstruction. Also note that in some cases, Euclidean reconstruction can add

extra voxels at the boundary. In Table 6.1, the Visible Male reconstructed from the Euclidean


Object        Total Voxels  Skeleton Voxels  Time (3,4,5)  Recon. Voxels (3,4,5)  Time (Euc.)  Recon. Voxels (Euc.)
Ellipsoid     125425        1505             48.369        121140                 22.709       125040
Dragon        171041        2400             26.615        159933                 16.728       169951
Visible Male  1650562       21055            1385.910      1599344                402.538      1653018

Table 6.1: Comparison of reconstruction for the weighted <3,4,5> and the Euclidean distance transforms.

skeleton has more voxels than the original volume. This increase is due to roundoff errors at the

boundary. The radii of reconstruction spheres for the Euclidean skeleton are floating point num-

bers. Using these floating point radii on a discrete voxel grid results in extra (or fewer) voxels

at the boundary.

We also study the correlation between the thinness parameter and the lossiness of recon-

struction. The reconstruction loss is computed as follows. Each object is skeletonized at various

thinness values between zero and one, and then reconstructed from those skeletons. A difference

volume which is the voxel-by-voxel difference between the original object and the reconstructed

object is computed. Reconstruction loss is then defined as the number of 1-voxels in the differ-

ence volume. An alternative technique to compute reconstruction loss would be to subtract the

number of 1-voxels in the reconstructed volume from the 1-voxels in the original volume. As de-

scribed above, reconstruction from the Euclidean skeleton can increase the number of 1-voxels;

therefore a simple subtraction is not suitable for this study.
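The difference-volume loss count described above can be sketched as follows (a hypothetical helper; the nested-list volume representation and function name are assumptions for illustration, not the thesis code):

```python
def reconstruction_loss(original, reconstructed):
    """Reconstruction loss: the count of 1-voxels in the difference volume.

    original, reconstructed: nested lists of 0/1 voxels with identical
    dimensions.  Counting the voxel-by-voxel symmetric difference, rather
    than subtracting voxel totals, also catches the extra boundary voxels
    that Euclidean reconstruction can introduce.
    """
    loss = 0
    for plane_o, plane_r in zip(original, reconstructed):
        for row_o, row_r in zip(plane_o, plane_r):
            # a voxel contributes to the loss whenever the two volumes differ
            loss += sum(1 for a, b in zip(row_o, row_r) if a != b)
    return loss
```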

Figure 6.6 shows the dependence of reconstruction loss on the thinness parameter for the

Euclidean skeleton. As with the number of skeletal voxels (Figure 3.10), this dependence is ex-

ponential. The loss is negligible or zero below a certain thinness value; i.e., adding further skeletal voxels does not improve the reconstruction. Complex models such as the dragon and the Visible

Male exhibit very similar profiles. Regular shapes such as the ellipsoid have large clusters of

skeletal voxels with equal DT - MNT values, and exhibit a sharper profile due to this quantization.

Note that for the ellipsoid, the largest meaningful thinness value is about 0.8 beyond which there

are no skeletal voxels.


Figure 6.5: Comparison of the reconstruction quality for the weighted <3,4,5> and Euclidean skeletons. The first column shows the original object, the second column shows weighted reconstruction and Euclidean reconstruction is shown in the third column.

6.2.2 Sampled Reconstruction

The reconstruction method outlined above creates a binary volume as output. For sampled vol-

umes, where every voxel has some intensity associated with it, binary reconstruction cannot

model the interior of the object. Therefore, when the spheres are scan-filled, each voxel in the

sphere should get a value which is the sample value at that voxel. This is not a problem for the

initial pose because a direct lookup can be performed into the original volumetric model.

The sample value at a voxel can be recovered by considering the transformation that any

voxel undergoes. Every voxel in the reconstructed volume belongs to a sphere centered at some

skeletal voxel. Skeletal voxels are rotated about joints in the articulated skeleton. The trans-

formation for every joint is known. Therefore, the centers of reconstruction spheres are trans-

formed by the joint angle about the joint voxel. Since spheres are considered to be rigid, all


[Plot: percentage reconstruction loss (log scale, 0.01 to 100) versus thinness parameter (0.1 to 1.0) for the Ellipse, Dragon and Visible Human models.]

Figure 6.6: Reconstruction Loss for different Thinness Parameters

voxels in the sphere undergo the same transformation. We consider only rotations and transla-

tions for transformation, which suffice for most animations using skeletons.

Therefore, to recover the original sample values during scan-filling, the inverse transforma-

tion is applied to every voxel in the sphere to derive its original un-warped location. Looking up

the original volumetric model at the un-warped location gives the sample value for that voxel.

This process is similar to reverse sampling commonly used for 2D image warping.
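A minimal sketch of this reverse-sampling lookup, assuming the per-sphere transformation is a rotation plus a translation (the function and argument names are illustrative, not the thesis code):

```python
def sample_value(voxel, rotation_inv, translation, original):
    """Reverse sampling: pull the intensity for a deformed voxel from the
    original (undeformed) volume.

    rotation_inv: 3x3 inverse rotation as a list of rows; translation: the
    joint translation applied during deformation.  The inverse transform
    maps the deformed voxel back to its un-warped location, which is
    rounded to the nearest grid point before the lookup.
    """
    # undo the translation, then the rotation
    p = [voxel[i] - translation[i] for i in range(3)]
    q = [sum(rotation_inv[i][j] * p[j] for j in range(3)) for i in range(3)]
    x, y, z = (int(round(c)) for c in q)
    return original[x][y][z]
```

With an identity rotation this reduces to shifting the voxel back by the translation and reading the original volume there.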

There is another important issue that must be addressed for the reconstruction of a sampled

volume from a deformed skeleton-tree. A voxel in the reconstructed volume can be included

in multiple spheres. Every time a sphere including a voxel is scan-filled, it will write over the

existing value in that voxel. This can cause errors in regions of bends because spheres from

across the bend, which have a different transformation (hence a different value) can over-write

the “correct” value at a voxel. Such over-writing results in artifacts. Figure 6.7 shows three

volume rendered images of a bar embedded inside a cylinder. The image on the left shows the

undeformed object. For purposes of illustration, one half of the skeleton was bent to the left,

and the volume was reconstructed. In the middle image, reconstruction was done from top to

the bottom. The bar is broken because spheres from the straight part of the bar (below the bend)

over-wrote the values from the top, bent part of the bar. The right image in Figure 6.7 shows

the corrected bending achieved using the proximity buffer method outlined below.

Proximity Buffer

In order to avoid artifacts arising from overlapping spheres, we use the following strategy.


Figure 6.7: Overlapping spheres can cause problems during reconstruction. When the cylinder-cube model is bent at the center, spheres from the straight part at the bottom over-write values in the top, bent part, as seen in the center image. This is fixed in the image on the right.

A separate proximity-buffer equal to the size of the reconstructed volume is maintained and ini-

tialized to a high value, greater than the longest diagonal of the bounding box of the volume.

When a sphere tries to write a voxel’s location with a sample value, the distance of the center

of the sphere to the voxel is computed. If this distance is less than the value in the proxim-

ity buffer, the sample value is written to the voxel and the proximity buffer is updated with the

new value. This ensures that spheres that are far away from a voxel, and possibly “across the

bend” do not over-write values contributed by a sphere that is closer to the voxel being filled.

The effectiveness of this approach is shown in Figure 6.8. The left image in Figure 6.8 shows a

section of the original volumetric model, the middle image shows the same section for a model

reconstructed from the volumetric-skeleton and the right image shows a section of the model

in a running pose. The reconstructed object has 12.26 million voxels compared to 12.8 million

in the original. Pseudo-code for sampled reconstruction using the proximity buffer is provided

below.

Algorithm for Sampled Reconstruction

Voxels in the skeleton-tree are grouped into B limbs. Every voxel S in the skeleton-tree has a distance transform value DT_S and undergoes a transformation T_S, where T_S is one of {T_1, ..., T_B}. The original sampled volume is V, the reconstructed volume is R and the proximity buffer is P.

Initialize R and P: R[Q] = 0 and P[Q] = infinity for every voxel Q.
For each voxel S in the skeleton-tree
    Compute the bounding box BBOX_S
    For each voxel Q in BBOX_S
        If dist(S, Q) <= DT_S and dist(S, Q) < P[Q]
            Mark Q as an object voxel
            Compute R[Q] = V[T_S^-1(Q)]
            P[Q] = dist(S, Q)
End

Figure 6.8: A cross section of the original volume is shown to the left. A cross section of the reconstructed volume in the same pose is in the middle. The right image shows a cross section of the reconstructed volume in a new pose.
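The proximity-buffer fill can be sketched in Python as follows (an illustrative sketch, not the thesis implementation: the tuple-keyed dictionaries, the Euclidean distance and the `inv` callback that maps a deformed voxel back to its un-warped location are all assumptions made for this example):

```python
import math

def sampled_reconstruction(skeleton):
    """Proximity-buffer scan fill for sampled volumes.

    skeleton: list of (center, dt, source) tuples, where `source` is a
    callable returning the sample value for a voxel position (i.e. the
    inverse-transform lookup into the original volume).  A voxel keeps
    the value from the closest sphere center seen so far, so spheres
    from "across the bend" cannot over-write it.
    """
    recon = {}      # voxel -> sample value
    proximity = {}  # voxel -> distance to nearest sphere center so far
    for center, dt, source in skeleton:
        cx, cy, cz = center
        r = int(math.ceil(dt))
        for x in range(cx - r, cx + r + 1):
            for y in range(cy - r, cy + r + 1):
                for z in range(cz - r, cz + r + 1):
                    d = math.dist((x, y, z), center)
                    if d <= dt and d < proximity.get((x, y, z), math.inf):
                        proximity[(x, y, z)] = d
                        recon[(x, y, z)] = source((x, y, z))
    return recon
```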

6.3 Analysis of Reconstruction

As described in earlier sections, volume deformation is achieved by transforming the centers of

spheres, then scan-filling those transformed spheres. In this section we examine the range of

rotation angles for which sampled reconstruction is accurate. We also measure the capability

of sampled reconstruction to be able to preserve internal structures when using the proximity

buffer heuristic.

6.3.1 Sampled Reconstruction for Bent Shapes

When the articulated skeleton is rotated about a joint, segments adjacent to the joint move closer

inside the bend and move apart on the opposite side. Since spheres are rigidly attached to the

joint, rotation about the joint causes spheres inside the bend to come closer and those on the


outer side of the bend to move apart. As the bend angle is increased, this effect becomes more

pronounced. In our model, we assume inelastic voxels; therefore wrinkles could appear inside

the bend where spheres from different halves of the bend intersect. Similarly, spheres moving

apart on the opposite side could cause the volume to tear. This is not a problem specific to our

method. Polygonal meshes being bent around a joint will exhibit similar artifacts. In commer-

cial animation packages, some heuristics are applied which tend to move vertices such that the

bend remains smooth. However, no general solution exists.

Figure 6.9: A cuboid embedded in a cylinder and its skeleton. The two bones of the articulated skeleton are shown as green and red lines.

We study the effect of bend angle on reconstruction quality for sampled reconstruction. Note

that wrinkling and tearing artifacts will not be very noticeable for binary reconstruction due to the

presence of interior voxels which appear to “fill in” tears. This can be taken to be an advantage

of volume models over surface models.

A cuboid embedded in a volumetric cylinder is used for our test. The sharp discontinuity

between the intensity of the cuboid and the surrounding cylinder makes it easy to detect anoma-

lies in reconstruction. Errors in reconstruction would be manifested as breaks in the cuboid as

seen earlier (Figure 6.7).

The size of the dataset is 64x64x128 voxels. The cylinder is 70 voxels deep with a radius of

16 voxels. The cuboid is also 70 voxels deep and is 12x12 voxels square in the X-Y plane. We

skeletonize the volume at a thinness of 1.0 using the weighted <3,4,5> metric to get a skeleton with 1629 voxels. Figure 6.9 shows the volume rendered test model along with its skeleton

(right). The articulated skeleton consists of two bones in the center of the volume, shown as


green and red lines.

We rotate the articulated skeleton by 30, 60, 90, 120, 150 and 180 degree angles about the

joint between the two segments. Skeleton voxels connected to the bent segment are also rotated

by the same angle. The deformed volumes are reconstructed and an iso-surface is extracted at

the cuboid intensity value. The iso-surface is meant to highlight discontinuities in the interior

after reconstruction. These iso-surfaces can be seen in Figure 6.10. No reconstruction errors

are observed up to a rotation angle of 90 degrees. The staircase pattern for the 30 and 60 degree

surfaces is due to discretization, not due to an error in reconstruction. However, beyond 90 de-

grees, spheres from the bent half rotate across the joint into the undeformed half. This creates

a small protuberance, as can be seen in Figure 6.10(d). To avoid these artifacts, a number of

measures can be taken.

1. The joint can be moved to a more appropriate skeletal voxel.

2. A greater number of voxels can be used for reconstruction.

3. Voxel connectivity can be changed for frames with incorrect reconstruction. This reassignment of voxels closely mimics the process used in traditional animation.

However, the algorithm is robust up to a 90 degree rotation, which is sufficient for most realistic

motions.

Figure 6.10: Reconstruction of the cuboid at various rotation angles. The iso-surface is shown here.


6.3.2 Reversibility for Sampled Reconstruction

Since sampled volumes have internal structure, it is important to preserve the structure when

the volume is deformed to a new pose. One way to measure reconstruction accuracy is to res-

can the volume in a deformed pose and compare it with the deformed, reconstructed volume.

Since that is not possible, we measure the accuracy of sampled reconstruction by measuring the

reversibility of a transformation. The basic notion is that if a source volume V_S is deformed via joint transformations T to a warped volume V_W, the inverse transformations T^-1 applied to V_W should give V_S exactly.

In order to test this, we create an experiment with the Visible Human dataset decimated 4 times in each dimension (for speed). The size of the dataset is 145x85x470 voxels. We skeletonize the volume using the weighted metric and a thinness parameter of 1.18 for a total of 21814 skeleton voxels. An articulated skeleton is created and imported into Character Studio. The articulated skeleton is then animated using motion capture data for a jogging sequence. We choose

an arbitrary frame from this sequence and compute the transformations T for segments of the articulated skeleton. The deformed volume V_W is reconstructed using this transformation T to index into the original volume V_S. This process is shown in the first row of Figure 6.11. Figure 6.11a shows the original volume V_S. The original and deformed articulated skeletons are shown in Figures 6.11b,c respectively. Figure 6.11d shows the deformed volume V_W, which is reconstructed via a lookup into volume V_S.

We use a unit transformation to index into V_S to reconstruct the original pose V_orig (Figure 6.11e). The inverse transformation T^-1 is then computed. Volume V_R (Figure 6.11f) is reconstructed by applying T^-1 to the deformed skeleton and by indexing into the deformed volume V_W. This ensures that any reconstruction errors in V_W are propagated to V_R. Note that V_R should be the same as V_orig because the transformation has been inverted. Finally, we calculate the per-voxel difference between V_R and V_orig and store it in V_diff, which is shown as color-coded voxels in Figure 6.11g. The magnitude of the difference is minimum for blue voxels and maximum for red voxels.

On close examination of the difference volume, we see that most of the differences are in

boundary voxels, which can be attributed to discretization errors, because we round off voxels



Figure 6.11: Reversibility analysis for sampled reconstruction.

positions to integers during reconstruction. In all, 3.39% of the voxels were found to be differ-

ent. Of these, only 0.82% were non-boundary voxels, which confirms that most errors are

due to discretization at the boundary. We also looked for errors, specifically in the vicinity of

joints, due to intersecting spheres. There were minimal differences at joints; most differences

were at the boundary. This suggests that a sufficiently thick skeleton should be able to give good

reconstruction at a variety of angles.

6.4 Non-Rigid Deformation

Besides shape deformations that can be achieved by transforming the co-ordinates of skeletal

voxels, the distance transform values of skeletal voxels can be modified to achieve bulge and

pinch effects. Since the distance transform value is the radius of the reconstruction sphere, mod-

ifying the distance transform changes the radius of this sphere, thereby achieving a bulge or


pinch when the distance transform is increased or reduced respectively. Ripple effects can also

be achieved by alternately increasing and reducing the distance transform values. An example

of this is shown in Figure 6.12. The image on the left shows the original ellipsoid object. The

center and right images show the bulge and pinch effect as the distance transform values are

changed for voxels near the center. Deforming a portion of the object to form additional limbs

can also be achieved by pulling out a skeletal voxel and inserting additional skeletal voxels along

the lines connecting the pulled voxel to the rest of the skeleton-tree. Reconstructing the spheres

for these new interpolated skeletal voxels would result in a new limb.

Figure 6.12: Bulge and Pinch effect by changing the distance transform of skeletal voxels.
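A hypothetical sketch of such a distance-transform edit (the linear falloff around the chosen focus voxel is an assumption made for illustration; the thesis does not prescribe one):

```python
import math

def bulge(skeleton, focus, radius, scale):
    """Bulge or pinch by scaling distance-transform values near `focus`.

    skeleton: list of ((x, y, z), dt) pairs.  scale > 1 grows the
    reconstruction spheres (bulge), scale < 1 shrinks them (pinch).
    Returns a new skeleton; the effect fades linearly with distance
    to `focus`, vanishing at `radius`.
    """
    out = []
    for pos, dt in skeleton:
        d = math.dist(pos, focus)
        if d < radius:
            # full effect at the focus, fading to none at the radius
            w = 1.0 - d / radius
            dt = dt * (1.0 + (scale - 1.0) * w)
        out.append((pos, dt))
    return out
```

Alternating `scale` above and below 1.0 along the skeleton gives the ripple effect described in the text.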


Chapter 7

Volume Animation Production

In the previous chapter, we described two approaches to connecting the skeleton into a skeleton-

tree. We further described a reconstruction technique to create deformed volumetric models

from a deformed skeleton-tree. While the skeleton-tree can be manipulated using free-form de-

formations or procedural animation, very realistic effects can be achieved by using commercial

animation tools for this task.

Computer animation packages such as Maya 1 and Character Studio 2 are based on key-

frame animation. Important or “key” scenes are drawn by the animator and the intervening

scenes (frames) are automatically drawn by the program. Character animation is achieved by

these tools via skeleton-based shape deformation tools. These packages have been created for

the animation of surface based models; therefore we only use the skeleton manipulation func-

tionality of these tools. We animate our automatic MST-based skeleton-tree in Maya using key-

framing and inverse-kinematics. Character Studio is used for motion capture animation of the

manually defined articulated skeleton described in the previous chapter. We begin with a de-

scription of traditional animation in Maya, then show how our skeleton-tree can be manipulated

in that framework. In a subsequent section, we describe animation using Character Studio and

our articulated skeleton abstraction.

7.1 Animation in Maya

A skeleton in Maya consists of joints connected by bones. The skeleton is manipulated to cause

corresponding manipulations of the model. Currently, the skeleton has to be defined manually

by the animator. The choice and placement of the skeleton affects the range and accuracy of

1 Maya is a trademark of Alias|Wavefront.

2 Character Studio is a trademark of Autodesk, Inc.


movements that can be performed. For surface models, this synthetic skeleton is bound to the model in order for the model to deform as the skeleton deforms. Maya attempts to do this automatically by dividing geometry points into sets called skin point sets according to the proximity of a surface point to a joint. Therefore, movement of a skin point set is bound to the motion of the

nearest joint. Depending on the pose of the geometry and skeleton during the binding process,

a few skin points could join an inappropriate set which can cause the model to break during an-

imation. Inappropriate points then have to be manually moved to the appropriate set which is a

laborious task.

In addition, inverse kinematics (IK) handles can be added to the skeleton. As described in

Chapter 5, inverse kinematics [47] refers to the automatic computation of joint angles based

upon the movement of the lowest joint in the skeletal chain. It allows for goal-directed pose

selection, where moving a hand to a target computes angles for the elbow and shoulder auto-

matically.

In Maya, skin and surface deformation effects like muscle bulges are achieved by other spe-

cial deformation tools called flexors. Again, flexors can be of various types and influence the

surface geometry when the joint moves or rotates. Setting up a skeleton, its skin point sets and

defining flexors therefore requires a good amount of training and feel, primarily because the

skeleton is synthetically constructed and has no direct binding to the surface geometry. Other

commercial tools employ similar skeleton based techniques for animation.

7.1.1 Skeleton-Tree Animation

We use a volumetric dragon to illustrate the process of skeleton-tree animation in Maya. The

dragon is thinned at an appropriate thinness value (1.8, weighted metric) and connected into a skeleton-tree. We import the skeleton-tree into Maya as geometry and animate it as a regular geometric object.

For this purpose, line segments of the skeleton-tree are organized in an Open Inventor [84]

file which defines an indexed line set containing the edges of the skeleton-tree. Maya has import

modules for Open Inventor geometry. We need to preserve the distance transform value associ-

ated with every voxel in order to reconstruct the deformed object from the deformed skeleton-

tree. Therefore, distance transform values have to be encoded in the Inventor file such that they


are preserved through the process of manipulation in Maya. Embedding the distance transform

as point or line attributes such as color or texture is not useful since they get changed during

animation.

Our solution to this problem is based on the fact that Maya assigns a unique node identifier

to each voxel in the skeleton-tree which can be used to associate the distance transform values

with corresponding voxels.

Once the skeleton-tree is imported into Maya, a simple IK hierarchy is set up for the torso

and each limb. This step is necessary because the skeleton-tree has hundreds or thousands of

voxels; therefore every vertex in the skeleton-tree cannot be defined as a joint in Maya. A few

vertices are picked as joints. We can use Maya’s automatic skin-point assignment to bind the

remaining vertices to bones of the IK hierarchy. However, this process is error prone. Since we

have a tree structure, an external process can create such binding rapidly while ensuring that the

binding does not violate the structure of the skeleton-tree. Note that if the number of voxels in

the skeleton-tree is small, a joint can be set up at every voxel.

A typical grouping consists of a separate group for each of the upper-arm, lower-arm, hand,

upper-jaw and lower-jaw (for the dragon). Vertices in the skeleton-tree are now treated by Maya

as deformable geometry. Figure 7.2 shows a snapshot of the groups in the arm and the corre-

sponding hierarchy graph in Maya. Every limb group is bound to the joint above it and trans-

forms rigidly with the joint. Inverse kinematics handles can then be attached and rotational constraints defined at the joints. The complete skeleton hierarchy with joints is shown in Figure 7.3.

Joints are shown as circles and the bones of the animation-skeleton are shown as triangles. There

is a triangle for every group defined in the previous step. A joint is placed at the interface be-

tween two groups. IK handles between joints are indicated by diagonal lines between them as

shown in the magnified view. When two joints are connected by an IK handle, the angles for

the joints in between are automatically computed based on constraints. Once the IK handle is

set up, the tip of a limb can be moved to a target location which causes the entire limb to move.

Smooth natural motion can be achieved by appropriate choice of constraints. Key frames can

be specified for the end deformations and the intermediate frames are automatically generated.

Each frame of the deformed skeleton-tree is exported in Inventor format. Distance transform

values have to be recovered for the voxels in each frame. The first frame exported from Maya


[Diagram: a reference file listing skeleton-tree nodes (NODE_ID -> x, y, z coordinates) is matched against the distance transform records (x, y, z -> DT) to produce a lookup table of node IDs and their DT values (ID1 -> DT1, ID2 -> DT2, ...).]

Figure 7.1: The lookup table used to recover Distance Transform values from the deformed skeleton-tree.

has no deformations, therefore the voxels have their original coordinate values. This frame is

used as a reference to construct a table of node IDs and their corresponding coordinate values

with the distance transform values. For all successive frames, the node ID is used to recover

the distance transform values. This process is illustrated in Figure 7.1. The frames with the

deformed dragon can now be reconstructed by filling the spheres centered at the skeleton-tree

voxels. Each reconstructed frame is then volume rendered. Playing back the volume rendered

frames produces an entire smoothly animated sequence of the dragon moving.
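The node-ID-based recovery of distance transform values described above can be sketched with two small hypothetical helpers (the dictionary shapes and names are illustrative assumptions, not the thesis code):

```python
def build_dt_table(reference_frame, dt_records):
    """Build the node-ID -> distance-transform lookup table.

    reference_frame: {node_id: (x, y, z)} from the first (undeformed)
    frame exported by the animation tool; dt_records: {(x, y, z): dt}
    from the skeletonization step.  Coordinates in the reference frame
    are unchanged, so they key directly into the DT records.
    """
    return {nid: dt_records[pos] for nid, pos in reference_frame.items()}

def dt_for_frame(deformed_frame, table):
    """Recover {deformed position: dt} for a later frame via node IDs alone."""
    return {pos: table[nid] for nid, pos in deformed_frame.items()}
```

Because the node ID survives deformation while coordinates and attributes do not, the table built from the first frame serves every subsequent frame.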

7.1.2 Examples

A complete animation was created for the dragon. A few frames from the animation are shown

in Figure 7.4. The entire animation was completed by a student animator in less than five hours.

A similar volume animation was done with the human trachea. Figure 7.5 shows the main stem

of the trachea moving.

7.2 Motion-Capture Animation in Character Studio

Character Studio allows motion capture sequences to be applied to a pre-defined humanoid char-

acter called the “Biped”. We import our articulated skeleton into Character Studio as an inverse

kinematics (IK) chain. The IK chain is bound to the Biped character by translating and scaling

the Biped to fit the imported skeleton. Joints on the skeleton are bound to specific Biped limbs.

This process is illustrated in Figure 7.6 in which the Biped (left) and the articulated skeleton

(center) are combined (right).


Once the articulated skeleton’s IK chain is bound to the Biped model, a motion capture se-

quence is applied to the Biped. Character Studio modifies the motion data to fit the scaled Biped

model such that the motion is continuous and does not cause the model to break. The next step

is to extract a deformed articulated skeleton for every frame of the animation. For this purpose,

a sphere is bound to every joint in the IK chain. We then export the geometry for every frame of

the Biped model and extract the center of each sphere from this geometry. These sphere centers

are used as the deformed positions of voxels in the articulated skeleton. Figure 7.7 illustrates

the results of this process. Three frames of the Biped model in a running sequence are shown

to the left. Corresponding articulated skeletons are shown to the right.

Once the deformed articulated skeleton has been extracted for every frame, other skeletal

voxels have to be deformed similarly. We compute the transformation between the initial pose

and the deformed pose at every joint of the articulated skeleton. The transformation undergone

by a bone around a joint is applied to all voxels connected to that joint in the skeleton-tree. The

result of this process is a complete skeleton-tree in a deformed pose. Finally, the deformed volumetric object is reconstructed from its deformed skeleton-tree and rendered.
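A minimal sketch of this per-joint deformation, assuming a rigid rotation R about each joint position; the flat-array interface is a simplification, not the thesis's data structure.

```python
import numpy as np

# Sketch: the transformation a bone undergoes about a joint (here a rigid
# rotation R about the joint position) is applied to all skeleton-tree voxels
# attached to that bone. Interface and names are illustrative assumptions.

def deform_attached_voxels(voxels, joint, R):
    """voxels: N x 3 array of coordinates; joint: joint position (3-vector);
    R: 3 x 3 rotation matrix. Returns the deformed voxel positions."""
    v = np.asarray(voxels, dtype=float)
    # Translate the joint to the origin, rotate, translate back.
    return (v - joint) @ R.T + joint
```

Applying the same transform to every voxel connected to a joint is what carries the full skeleton-tree, not just the articulated skeleton, into the deformed pose.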

7.2.1 Examples

We animated the Visible Male dataset using several motion capture sequences. The photo dataset

was used at a resolution of 290x169x940 voxels, which is half of the full resolution in each

dimension. Volume thinning the data at a thinness parameter of 1.5 produced a skeleton with

42,298 voxels. The thinning process took 1 hour on one 400 MHz UltraSparc processor. Thinning has to be done only once because the volumetric-skeleton at thinness 1.5 includes all thinner skeletons, which can later be extracted in less than a second. The run time for thinning

is independent of the thinness parameter specified. Twenty five points were chosen from this

volumetric skeleton and connected into an articulated skeleton by an animator. The articulated

skeleton conforms with the Biped model in Character Studio. The motion sequence data was

provided courtesy of Viewpoint Digital and Kinetix [83]. We applied a skipping and running se-

quence to the articulated skeleton and exported each frame. The transformation for every bone

in the articulated skeleton was applied to all voxels connected to that bone. The volume was

then reconstructed using the scan-filling technique described in Chapter 6. The execution time


for reconstruction depends on the number of voxels in the volumetric-skeleton. Using a pure

software-based scan-filling program, reconstruction took about 90 minutes on one 400 MHz UltraSparc processor for a volumetric-skeleton with 42K spheres.

Volume-rendered frames from the run sequence are shown in Figure 7.8. Note that internal

details (sample values) are preserved through the motion.


Figure 7.2: Creating a Group Hierarchy. Pieces of the skeleton-tree are combined into groupsin a hierarchical manner. These groups correspond exactly to the animation-skeleton created inMaya.


Figure 7.3: Skeleton geometry in Alias. Joints in the animation-skeleton are shown by circles;the triangles are bones of the animation-skeleton. IK handles are shown as diagonal lines be-tween joints.

Figure 7.4: Frames from an animation of the volumetric dragon.

Figure 7.5: Frames from an animation of the human trachea.


Figure 7.6: Binding the Biped model in Character Studio to the articulated skeleton

Figure 7.7: Extracting the deformed articulated skeletons (red) from the animated Biped modelin Character Studio

Figure 7.8: Volume rendered frames of a running sequence. The sequence was generated usingmotion capture data.


Chapter 8

Skeleton-Based Volumetric Collision Detection

8.1 Introduction

Collision detection is an important operation in several computer graphics and virtual reality ap-

plications. As a result, it has been the topic of extensive research. Most of the work in collision

detection has been limited to surface models because surface (polygonal) models are prevalent

in traditional computer graphics applications. However, over the past few years, the interest in

volume graphics has started to grow. Volumes are more common now due to the availability

of 3D sampling devices. Volume rendering is used for virtual surgery, virtual endoscopy sim-

ulations [40], in CAD/CAM, and baggage inspection systems. The VolumePro [66] board has

recently become available for PC-based volume rendering. There is also a lot of interest in using

3D texture mapping for volume rendering [18]. However, the tools for using volumes in virtual

reality applications or interactive games lag behind the honed polygonal toolkits. This includes

modeling, animation/deformation and collision detection.

For many virtual reality applications and interactive games, one must detect when objects

collide to determine the next action to be taken. Collision detection is an integral component of these applications; without it, there is no “interactivity”. Furthermore, it is a computation that must be performed over and over for each actor/object in the scene. Because of this,

collision detection must be fast.

In general, interference between objects is detected using a simplified bounding shape around

each object. Such bounding shapes include bounding spheres, axis-aligned bounding boxes

and oriented bounding boxes. In order to test for collision between two objects, their bound-

ing shapes are first tested. If a collision is not detected between the bounding shapes, further

testing is not done. Generally, a hierarchy of bounding shapes with increasing complexity and

geometric accuracy is used. If a collision is detected at a certain level, the testing proceeds to


the bounding shape at the next level, which is a tighter fit to the model. Testing stops when the

most complex level (the object/primitive itself) is reached or a collision is not detected. It is im-

portant for the intersection test between bounding shapes to be extremely fast. The efficiency

of collision detection algorithms can be evaluated based on the number of bounding shapes and

on how tightly they fit the model.
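The coarse-to-fine testing strategy above can be sketched as a recursive descent; the node structure (center, radius, children) is an illustrative assumption, not the structure used in this chapter.

```python
import math

# Sketch of hierarchical culling: bounding spheres are tested coarse-to-fine,
# and finer levels are visited only where the coarser spheres intersect.
# The node representation is an illustrative assumption.

def spheres_intersect(c1, r1, c2, r2):
    return math.dist(c1, c2) <= r1 + r2

def collide(a, b):
    """a, b: {'center': (x, y, z), 'radius': r, 'children': [...]}."""
    if not spheres_intersect(a['center'], a['radius'], b['center'], b['radius']):
        return False          # culled: no finer tests needed
    if not a['children'] and not b['children']:
        return True           # finest level reached; exact testing would follow
    # Descend into whichever object still has finer levels available.
    finer, other = (a, b) if a['children'] else (b, a)
    return any(collide(child, other) for child in finer['children'])
```

The early return on a failed coarse test is what makes the fast bounding-shape intersection pay off.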

The number of voxels in a volumetric object far exceeds the number of surface primitives

used in interactive applications. Adapting spatial subdivision strategies used for building colli-

sion detection hierarchies for surface models can result in too many bounding shapes or too poor

a fit. On the other hand, a volumetric model is a 3D raster, therefore fast voxel-by-voxel testing

can be done using an octree. This property is useful while testing at the finest level. An efficient

bounding volume strategy which facilitates rapid intersection testing of coarser representations

is still required.

In this chapter, we present an algorithm to represent a volumetric object using a collection

of spheres. This algorithm is also discussed in [29]. Our parameter controlled skeleton (Chapter

3) is used to create this hierarchy of spheres. These spheres are computed based on the shape

of the object (circumscribing logical features); therefore they form a tight fit around the object.

Furthermore, these spheres can be computed at multiple levels of detail, which facilitates the

creation of a collision detection hierarchy. The main advantage of this method is that the spheres

follow the shape of the object. Therefore, this method can be used for volumetric objects which

are animated (i.e., translating and deforming) without recomputing the hierarchical sphere tree.

The next section describes existing techniques for surface and volumetric collision detec-

tion. In Section 3, we describe the computation of the distance transform and our metric for

shape-based sphere computation. Section 4 describes the computation of bounding spheres and

a hierarchical tree. We present some results in Section 5.

8.2 Related Work

Theoretically, volumetric collision detection is a much less complex problem than polygonal

collision detection [42, 35]. When using polygons, a series of equations must be solved to detect

collisions at the lowest level (i.e. plane/plane equations, etc.). Since floating point calculations


are generally used, these equations are tedious and slow in addition to being subject to roundoff

errors. Volumetric objects are simpler: two objects collide when they attempt to occupy the

same voxel space. A hardware solution should then be able to detect collision, much like writing

into an occupied pixel in the frame buffer. Unfortunately, no hardware solution currently exists

for this purpose.

When using a software solution, problems arise because of the amount of memory required

to store a large volume dataset. Therefore, smart data structures and some type of hierarchi-

cal organization must be used. In [37], work was presented for volumetric collision detection

based upon an octree and a sphere-tree. Probabilities were defined at the outer level to deter-

mine whether an object’s voxel boundary and another object’s voxel boundary were occupying

the same space. While an octree is a good approach for the lowest level of detail, it still contains

many intermediate levels and does not necessarily divide the object into “logical” feature-based

units. Moreover, animated volumes cannot be easily handled with octrees.

In [31], Gibson presents a method to do both volumetric collision detection and deforma-

tion based upon the same data structure. The deformation is done using a mass-spring model,

where each voxel is assumed to be connected to its neighbors using springs to propagate forces.

Collision is detected by using an occupancy map which contains pointers to the original ob-

ject so that the proper response (deformation, etc.) can be computed. Enhancements to speed

up the occupancy map implementation are also discussed, and these include mapping just the

boundary shell of objects, using bounding boxes, or the hierarchical approach from [37]. These

two papers are most relevant to the approach taken in this chapter. Other papers on volumetric

collision detection can be found in [31].

In this chapter, we discuss another type of hierarchical approach based upon spheres which

are computed from the shape of the volumetric object. This approach is similar to that used by

Langrana et al. [50]. They used a set of shape-conforming bounding spheres to detect collisions

in a virtual knee palpation simulator. In their method, the positions and radii of these spheres

were manually specified. We automatically compute a shape-based bounding sphere hierarchy.

One drawback listed in [31] for existing hierarchical methods is that methods which rely

on “significant preprocessing have limitations in systems where objects deform significantly or

where elements can be interactively created or destroyed”. Although the method described in


this chapter involves preprocessing, it has the advantage that the spheres are based upon the

shape of the objects, e.g. in the case of a human, a feature would be a hand, arm, leg, etc. Since

animation occurs when features are transformed about joints, a set of hierarchical spheres which

is feature based can be transformed with the feature and still enclose that feature.

8.3 Distance Computation

In the discussion which follows, we use a volumetric object which has been segmented from

the background. A probabilistic method similar to that described in [37] could be used for fuzzy

boundaries. However, for many VR applications (and certainly game applications), object bound-

aries tend to be well defined.

A low value of the thinness parameter TP indicates that the distance transform of a voxel p is only slightly greater than that of its neighbors; therefore p is not very important for boundary coverage. A high value of TP means that p must have a distance transform much greater than that of its neighbors, and its sphere would likely not be covered by the spheres of its neighboring voxels. Consequently, as TP decreases, the number of spheres covering the object increases. The full theoretical description of the thinness parameter can be found in Chapter 3.

The thinness parameter allows us to represent a volumetric object at several levels of detail.

For some threshold value of the thinness parameter, we can rapidly extract voxels whose distance transforms are greater than those of their neighbors by at least TP. Construction of the

spheres centered at those voxels yields a representation of the boundary of the object. We call

the set of these spheres the reconstruction-spheres, because they can be used to reconstruct the

boundary of the object (at that thinness value). Note that the method is independent of the un-

derlying dataset, whether binary or sampled, since the computation of reconstruction-spheres is

only dependent on the boundary.
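A sketch of this extraction follows, under the assumption of a 6-neighbour mean; the thesis's exact neighbourhood definition (Chapter 3) may differ.

```python
import numpy as np

# Sketch: select voxels whose distance transform exceeds the mean of their
# neighbors' distance transforms by at least the thinness parameter TP.
# A 6-neighbour mean (with wraparound at the array border) is assumed here
# for brevity; the actual neighbourhood in the thesis may differ.

def reconstruction_centers(dt, tp):
    """dt: 3-D array of distance-transform values (0 outside the object).
    Returns the coordinates of the reconstruction-sphere centers."""
    mnt = np.zeros_like(dt, dtype=float)
    for axis in range(3):
        mnt += np.roll(dt, 1, axis) + np.roll(dt, -1, axis)
    mnt /= 6.0
    mask = (dt > 0) & (dt - mnt >= tp)
    return np.argwhere(mask)
```

Spheres of radius dt[x, y, z] centered at the returned voxels form the reconstruction-spheres for that thinness value.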

8.4 Bounding Spheres

For intersection testing, it is important that the original object be completely bound by the ap-

proximating spheres. The set of reconstruction-spheres based on the thinness parameter are in-

scribed inside the shape; therefore the union of several such spheres at some level of detail will


not completely include all the voxels in the object. However, if we can expand these spheres

by a sufficient amount, they will grow to cover all boundary voxels. Figure 8.1 illustrates this

concept for an ellipse. The figure on the left shows four circles tangential to the boundary of the

ellipse. The figure on the right shows the same circles, which have now been expanded to cover

all the boundary pixels.

Figure 8.1: Inscribed and expanded circles for an ellipse. Inscribed circles are automaticallycomputed based on the boundary coverage metric. Expanding these circles yields a boundingshape for the ellipse.

The amount of expansion is determined using the following strategy. Each reconstruction-

sphere is filled in and the volume that is covered by the union of these inscribed spheres is com-

puted. It is then subtracted from the original volume, which results in the set of uncovered vox-

els. For every uncovered voxel, we find the center of the reconstruction-sphere that is closest

to it, and increase the radius of that sphere to include the uncovered voxel. The computational

complexity of this expansion step can be reduced by using a Voronoi diagram [67]. The Voronoi

diagram is computed for the voxels at the center of each reconstruction-sphere. In a single pass

over the uncovered voxels, the closest sphere and the distance from the center of that sphere

can be determined. In the same pass, the maximum distance to an uncovered voxel for each

reconstruction-sphere is computed. The final radius of a reconstruction-sphere is then the max-

imum of the current radius and the distance to the furthest uncovered voxel which must be cov-

ered by the sphere. Once every reconstruction sphere is expanded to the maximum radius, the

union of all such collision-spheres completely covers the volumetric object. This is a prepro-

cessing step which does not have to be done at run-time.
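The expansion strategy can be sketched as follows; a brute-force nearest-centre search stands in for the Voronoi-diagram optimisation described above, and the function names are assumptions.

```python
import math

# Sketch: every voxel left uncovered by the inscribed reconstruction-spheres
# enlarges its nearest sphere just enough to contain it. Brute-force nearest
# search replaces the Voronoi-diagram speedup described in the text.

def expand_spheres(centers, radii, uncovered_voxels):
    """Returns the expanded radii; the final radius of each sphere is the
    maximum of its current radius and the distance to the furthest uncovered
    voxel assigned to it."""
    radii = list(radii)
    for v in uncovered_voxels:
        dists = [math.dist(v, c) for c in centers]
        nearest = min(range(len(centers)), key=dists.__getitem__)
        radii[nearest] = max(radii[nearest], dists[nearest])
    return radii
```

After this pass, the union of the collision-spheres covers every voxel of the object, as required for conservative intersection testing.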

Figure 8.2 shows the sphere expansion process for the Visible Male volume. The original


volume is shown to the left. In the center, the 100 most important reconstruction-spheres are

shown. These are not sufficient to bound the volume but they capture the essential shape of the

Visible Man. These reconstruction-spheres are expanded into collision spheres as shown in the

image to the right. Every sphere is assigned a unique color to differentiate it from its neighbors.

Figure 8.2: The reconstruction-spheres (center) and the collision-spheres (right) for the VisibleMale volume. Only 100 spheres approximate the volume in this case.

8.4.1 Hierarchical Bounding Tree

Using different values of the thinness parameter in Equation 1 allows us to create bounding

volumes at different levels of detail. With a higher thinness parameter, we get a larger bounding

volume but fewer spheres, while a lower thinness parameter yields a tighter bounding volume

with more spheres. We compute DT − MNT, the difference between the distance transform and the mean of the neighbors’ distance transforms, for every voxel. Voxels are sorted in decreasing order of DT − MNT. For a desired number of voxels, N, at some level of detail, we extract the first N voxels from this sorted list.
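This level-of-detail selection reduces to a sort; a minimal sketch, where the score dictionary is an assumed representation of the per-voxel DT − MNT values:

```python
# Sketch: voxels sorted by decreasing DT - MNT; the first N become the sphere
# centres at the requested level of detail. The dict input is an assumption.

def top_n_centers(dt_minus_mnt, n):
    """dt_minus_mnt: {voxel: DT - MNT}. Returns the n highest-scoring voxels."""
    return sorted(dt_minus_mnt, key=dt_minus_mnt.get, reverse=True)[:n]
```

Because the sorted list is computed once, bounding volumes with 30, 100 or 300 spheres are all prefixes of the same ordering.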

Three levels of bounding volumes for the Visible Male volume are shown in Figure 8.3.

From left to right, the bounding volume has 30, 100 and 300 spheres respectively.

We create a directed graph which encodes the hierarchy for collision testing. Every level of

the graph contains the spheres at that level, with a single bounding sphere for level zero. When

the collision test succeeds at some level, the spheres at the next level, which are children of the

current sphere, are tested for collision. Our data structure is like a k-ary tree. However it is


Figure 8.3: Three levels of the collision detection hierarchy are shown for the Visible Male vol-ume. The bounding volumes from left to right have 30, 100 and 300 spheres respectively.

not strictly a tree in the sense that a child can have multiple parents. A sphere S_j is a child of sphere S_i if S_i is at the level above S_j and S_j intersects sphere S_i. This is based on the premise

that when collision testing is done at a finer level, only those spheres that intersect the current

sphere need to be tested. This keeps the number of tests within bounds at higher levels of detail.

Also note that we use the term intersection to include containment. The hierarchical graph data

structure is shown in Figure 8.4.
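Construction of the edges between two consecutive levels can be sketched as follows; representing spheres as (center, radius) pairs is an illustrative assumption.

```python
import math

# Sketch: edges of the intersection graph between two consecutive levels.
# A finer-level sphere becomes a child of every coarser sphere it intersects;
# containment also satisfies the test. Spheres are (center, radius) pairs.

def intersects(s, t):
    (c1, r1), (c2, r2) = s, t
    return math.dist(c1, c2) <= r1 + r2

def level_edges(coarse_level, fine_level):
    """Returns (i, j) pairs: fine sphere j is a child of coarse sphere i."""
    return [(i, j)
            for i, parent in enumerate(coarse_level)
            for j, child in enumerate(fine_level)
            if intersects(parent, child)]
```

A fine sphere intersecting several coarse spheres simply receives several parents, which is why the structure is a directed graph rather than a strict tree.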

Level 0

Level 1

Level 2

Figure 8.4: The hierarchical intersection graph. An edge exists from a node i to a node j in the next level if the sphere corresponding to j intersects the sphere for node i.

When the intersection test succeeds at the lowest level of the graph, we test the actual voxel

raster for exact collision determination. Only a small fraction of voxels, which lie inside the

colliding spheres, are tested. A local occupancy map is created at run-time which contains the

intersecting spherical caps. Figure 8.5 shows two intersecting spheres and a bounding box for

the spherical caps. The bounding box serves as the occupancy map. The height and depth of

the occupancy map are equal to the diameter of the circle of intersection, the width is the sum

of the heights of the caps. Without loss of generality, with sphere centers located at co-ordinates (0, 0, 0) and (d, 0, 0), the diameter of the circle of intersection is given by,


D = (1/d) sqrt( 4 d^2 R^2 - (d^2 - r^2 + R^2)^2 )   (8.2)

and the heights of the caps can be computed as,

h_1 = (r - R + d)(r + R - d) / (2d)   (8.3)

h_2 = (R - r + d)(R + r - d) / (2d)   (8.4)


Figure 8.5: A local occupancy map is computed for the intersecting spherical caps. Voxel in-tersections are tested only within this local occupancy volume.
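Equations 8.2–8.4 can be checked numerically; the helper below assumes the sphere of radius R sits at the origin and the sphere of radius r at (d, 0, 0), matching the figure.

```python
import math

# Numerical form of Equations 8.2-8.4 for two intersecting spheres of radii
# R (centered at the origin) and r (centered at (d, 0, 0)): the diameter D of
# their circle of intersection and the heights of the two spherical caps.

def cap_geometry(R, r, d):
    D = math.sqrt(4 * d * d * R * R - (d * d - r * r + R * R) ** 2) / d  # Eq. 8.2
    h1 = (r - R + d) * (r + R - d) / (2 * d)                             # Eq. 8.3
    h2 = (R - r + d) * (R + r - d) / (2 * d)                             # Eq. 8.4
    return D, h1, h2
```

For two unit spheres with centres one unit apart, the plane of intersection lies at x = 1/2, giving D = sqrt(3) and cap heights of 1/2 each, which the formulas reproduce.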

The advantage of using a geometric intersection hierarchy pays off when there are multiple

volumetric objects in the scene because most collision tests between pairs of objects would be

negative. Also, when the rendering is being done in hardware, collision culling can be carried

out on the host CPU. Occupancy testing in voxel memory on the hardware board would consume

precious memory bandwidth at the expense of the frame rate.

8.4.2 Animated Volumes

In Chapter 6, we described a method for animating volumetric objects. Animation is achieved

by means of a volumetric skeleton which is based on the thinness parameter described above.

A thinned, centered set of voxels are extracted from the volume and connected into a skeleton-

tree. Parts of the tree are grouped into limbs for articulation. The skeleton-tree is animated using

traditional animation methods like keyframing, inverse kinematics and motion capture. Finally,

the volume is reconstructed from the deformed skeleton-tree by filling in spheres of radius equal

to the distance transform at every voxel.


For collision detection on animated volumes, we attach the centers of bounding spheres to

the skeleton-tree. These bounding spheres then move along with the limbs during animation,

and can be used to test for collisions for any deformed pose of the volumetric model.
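Because each collision-sphere centre is itself a skeleton voxel, storing centres as indices into the skeleton's voxel list makes them follow any deformation for free; a minimal sketch (the names are assumptions):

```python
# Sketch: collision-sphere centres are skeleton voxels, so referencing them by
# index into the (possibly deformed) skeleton voxel list repositions the
# bounding spheres automatically whenever the skeleton is animated.

def spheres_for_pose(skeleton_voxels, center_indices, radii):
    """skeleton_voxels: voxel positions for the current pose;
    center_indices[i]: which skeleton voxel carries sphere i."""
    return [(skeleton_voxels[k], r) for k, r in zip(center_indices, radii)]
```

No per-frame preprocessing is needed: the same index list yields valid bounding spheres for every deformed pose.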

8.5 Results

We use two different datasets to test our algorithm. The first dataset consists of two synthetic

volumetric bugs of size 128x128x128 moving towards each other. Each bug has 53,617 voxels.

A five level hierarchy of bounding volumes has been constructed, consisting of 1, 20, 50, 125

and 375 spheres respectively. The tightness of the bounding volume was measured by comput-

ing the number of voxels at each level of the hierarchy. The bounding volume with 20 spheres

has 1,634,240 voxels, about 30 times the original volume. This improves to 431,196, 189,103 and 132,166 voxels at higher levels of detail. Note that at the highest level, the bounding volume is

only 2.45 times larger than the actual object.

Figure 8.6: Three frames from an animation of volumetric bugs colliding. The two frames tothe left show the case where no collision was detected. The right frame shows their positionswhen a collision was detected.

We also measured the average number of children for each level of the intersection graph.

Between levels 1 and 2, going from 20 to 50 spheres, the average number of children per node

is 3.35. If every node had only one parent, this number would be 2.5, which is an indication of the efficiency of placement of spheres. Between levels 2 and 3, the average number of children is 8.84, and between levels 3 and 4 there are 17.62 children per node. The efficiency decreases as the number of spheres increases, because each sphere is smaller. The total time for collision

as the number of spheres increase, because each sphere is smaller. The total time for collision

testing depends on the relative positions of the two objects and the local shape complexity. We


can quantify the average time by computing the average number of spheres that are tested for in-

tersection and multiplying it by the time taken to compute one sphere intersection. The average

time for collision detection using the above hierarchy is 8.1 milliseconds.

Three frames of the animation 1 are shown in Figure 8.6. The total time for collision de-

tection and voxel enumeration for the frames in Figure 8.6 took 0.2 ms, 240 ms and 5.5 ms re-

spectively. 19 voxels were found to intersect in the right frame. The increase in collision testing

time for the middle frame is attributed to the proximity of the legs, and their elongated shape.

Since no voxels intersect in the middle frame, exhaustive testing of all occupancy maps had to

be done in that region, which possibly tested overlapping regions. One possible improvement

would be to coalesce all occupancy maps before testing at the voxel level. The disadvantage

of a combined occupancy map is an increase in the computation time for cases with collisions.

An alternative solution would be to terminate collision testing at the sphere level depending on

frame-time constraints in a real-time simulator. Note that the detection time for collisions in the

right frame is only 5.5 ms because voxels were reported only for the first pair of spheres which

had intersecting voxels. These numbers can be compared with He and Kaufman’s method [37].

They tested a 258x258x111 CT head with a 15x57x15 radiation beam and reported collision

times up to 412 ms using the sphere tree and 252 ms for the octree.

The second test dataset consists of the Visible Male dataset at 1/2 resolution (290x169x940).

A CT scan of a wasp, downsampled to 64x64x64 is used for collision testing. The Visible Man

(segmented) has 12,821,930 voxels. We compute a hierarchy with 6 levels of bounding volumes having 1, 30, 100, 300, 900 and 2700 spheres. The bounding volumes at levels 1, 2, 3 and 4 have 3.64, 2.18, 1.84 and 1.74 times the number of voxels in the original object. Note that for a

complex shape like the Visible Man, the fit is much tighter at higher levels of detail as compared

to the synthetic bug.

The average number of children for levels 1, 2, 3 and 4 are 19.27, 43.00, 87.30 and 153.55

respectively. The average time for collision detection and voxel enumeration is 24.43 millisec-

onds. Detection time varied from a minimum of 0.2 ms to a maximum of 555 ms. Collisions

were detected in 10 out of 72 frames. Two frames from the animation are shown in Figure 8.7.

1. The animations referenced in this chapter are available at http://www.caip.rutgers.edu/~gagvani/collision.html


Note that the volumes for the wasp and the man in Figure 8.7 overlap since no response is com-

puted when collision occurs. Run times were computed on a single processor R10000 SGI Oc-

tane running at 195 MHz.

An example of collision detection for an animated volume is shown in Figure 8.8. The figure

shows an animation of the Visible Man being chased by three wasps. The centers of collision

spheres were attached to the articulated skeleton and were animated along with the skeleton as

described in the previous section.

8.6 Discussion

The method described here fits into the larger framework of a volume graphics and animation

system. In a volumetric VR/animation system, objects are required to move and deform in real-

time, which warrants the use of hardware accelerated volume rendering. Storing an additional

occupancy map and updating it in real-time would need additional hardware/memory resources.

The method discussed here can detect and cull collision events in the application, taking a huge

burden off the accelerator board. Furthermore, since it is based on sphere intersection, popular

VR toolkits can directly be used in the volumetric domain. For example, toolkits like WTK [74]

use sphere hierarchies and have built-in intersection testing.

For volumetric objects that are deforming and animating, we do not incur any pre-processing

penalty. Since the centers of the collision-spheres are voxels belonging to the volumetric object,

they move with the object. Most realistic polygonal animations use a simplified shape called the

skeleton [17]. The hierarchical bounding spheres attach directly to a skeleton-tree used for ani-

mation and therefore the same data structure can be used for collision detection and animation.

Furthermore, because movement is generally feature-based, our shape-based hierarchy can take

advantage of a-priori knowledge of the movement. For instance, if just a hand is being extended,

only the part of the collision-hierarchy which corresponds to the hand needs to be tested. This

cannot be done with non-shape-based methods.

Volume objects are difficult to animate and control because they are very large and standard

graphics tools and animation software operate in a polygonal environment. In this chapter, we

have shown a method for collision detection based on the features of an object. The spheres


form a tight fit around the object and can easily be incorporated into a volumetric VR/animation

environment.

Figure 8.7: A volumetric wasp colliding with the Visible Man. The figure shows two sampleframes from the animation. A collision was detected in the frame to the right.


Figure 8.8: A swarm of volumetric wasps chasing the Visible Man. The Visible Man is animatedusing a volumetric skeleton. The centers of bounding spheres are attached to the volumetricskeleton and follow the motion of the skeleton.


Chapter 9

Applications in Visualization

9.1 Volume Tracking

Figure 9.1: Skeletons for fast Vortex Tracking. (a) Segmented vortex structures. (b) Skeleton of these structures, thinness = 1.0.

In [77], Silver and Wang use a volume based approach for tracking features. They extract fea-

tures from a time varying dataset and perform a volume difference test over time-steps to match

features. Objects which match to a tolerance value are considered to be the same. Since the

complete volumes are used for the difference test, the algorithm is computationally intensive.

However, the skeleton of each volume can be used to perform the matching. The number of

Thinness | % Error | Skel. Time (sec) | Tracking Time (sec) | % Total Time
None     | 0       | 0                | 437.00              | 100
0.5      | 4.32    | 314.64           | 109.25              | 97.00
1.0      | 5.15    | 300.23           | 69.30               | 84.56
2.0      | 14.50   | 272.08           | 22.45               | 67.40

Table 9.1: Error and speedup for feature tracking vortices in a fluid dynamics simulation. Skeletons of the vortex shapes have between 1% and 15% of the voxels of the original shapes.


voxels in the skeletons ranges from 1% to 15% of the original object voxels. We present results

for tracking a 128³ dataset from a simulation of turbulent vortex structures. The vortex

structures are tracked over 30 time-steps, first using the complete volume data and then using

skeletons of three different thinness values. We compare the time taken for tracking the skeletal

points and the time to track the complete volume. The percentage error is also calculated and

results are tabulated in Table 9.1. The percentage total times shown in the table are for the com-

bined skeletonization and tracking process. A significant speedup is observed when the skeletal

points are tracked. The time for skeletonization can be amortized if several tracking runs need

to be performed. The speedup achieved by tracking the skeletons is also observed to be greater

as the number of time-steps increases. The skeleton of a typical dataset being tracked is shown

in Figure 9.1. A thinness value of 1.0 was used for this figure.
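The matching step can be sketched with plain voxel sets. This is an illustrative sketch (names and the tolerance rule are ours, standing in for the volume difference test of [77]): the point is that running the same overlap test on skeleton voxels shrinks both sets to 1%–15% of their original size.

```python
def match_fraction(voxels_t0, voxels_t1):
    """Fraction of shared voxels between a feature at two time-steps.

    voxels_t0 and voxels_t1 are sets of (x, y, z) integer coordinates.
    Using skeleton voxels instead of the full volume shrinks both
    sets, which makes the intersection test much cheaper.
    """
    overlap = len(voxels_t0 & voxels_t1)
    return overlap / min(len(voxels_t0), len(voxels_t1))

def same_feature(voxels_t0, voxels_t1, tolerance=0.5):
    """Features match if their overlap exceeds a tolerance value."""
    return match_fraction(voxels_t0, voxels_t1) >= tolerance

skel_t0 = {(1, 2, 3), (1, 2, 4), (1, 3, 4)}
skel_t1 = {(1, 2, 4), (1, 3, 4), (2, 3, 4)}  # feature drifted slightly
print(same_feature(skel_t0, skel_t1))        # True: 2 of 3 voxels overlap
```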

9.2 Virtual Endoscopy

Virtual endoscopy [71, 40, 30] is an exciting new method for diagnosing tumors and polyps

in organs. In a regular endoscopic procedure, an endoscope (camera on a catheter) is inserted

into the organ (colon, trachea) and the interior of the organ is imaged to detect abnormalities.

Such physical intervention is very uncomfortable for the patient and requires a great amount of

skill from the surgeon operating the endoscope. The endoscope can rupture the walls of the organ and

sometimes result in fatality.

In virtual endoscopy, the patient is scanned using non-invasive techniques such as CT or

MRI. The organ is segmented out of these scans as a volumetric model. The model is then ren-

dered on a graphics workstation, and the surgeon can navigate through the model using a virtual

camera. Most organ models are complex and difficult to navigate without getting lost or colliding with walls. We can exploit the centeredness property of a skeleton to derive a collision-free

navigation path for the virtual camera. This is described in the following subsections.


9.2.1 Constructing the Centerline

A centerline is a curve that is centered with respect to the object boundaries. It can serve as the

path for a virtual camera in surgical path planning. A centerline description is also useful for an-

imation control. We describe a simple midpoint subdivision algorithm to generate the centerline

from the skeletal voxels. It is a semi-automatic algorithm in which the user specifies end-points

for centerline generation. These end-points are specified from among the skeletal voxels. Such

a strategy enables the user to accurately pick end-points near the center since the skeletal voxels

are already centered with respect to the object boundary. Moreover, it allows multiple center-

lines to be rapidly generated for interactive exploration of the dataset. This is true because the

skeletal voxels are pre-computed and for every new path, the object does not have to be thinned

again. The operator can see the skeletal voxels and thus all bifurcations and bumps are visible

as potential navigation paths.

Let SK be the set of skeleton voxels. Let P1 and P2 be the end-points of the centerline such

that P1, P2 ∈ SK. We have a subdivision parameter (“fineness”) F, which determines the number

of points along the centerline and gives a stopping condition for the recursion.

Centerline ( P1, P2, F )
{
    If ( distance ( P1, P2 ) < F ) return
    Find the midpoint Pm of P1 and P2 such that Pm = ( P1 + P2 ) / 2
    Locate the point q ∈ SK such that distance ( q, Pm ) is minimum over all points in SK
    Call Centerline ( P1, q, F )
    Call Centerline ( q, P2, F )
}
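The recursion above can be rendered as a short Python function. This is an illustrative sketch with our own names; the guard against snapping back to an end-point (which would otherwise recurse forever on degenerate inputs) is our addition, not part of the algorithm as stated.

```python
from math import dist  # Euclidean distance (Python 3.8+)

def centerline(p1, p2, skeleton, fineness):
    """Recursive midpoint subdivision between two skeletal end-points.

    skeleton: list of (x, y, z) skeletal voxel coordinates.
    Returns an ordered list of skeletal points from p1 to p2.
    """
    if dist(p1, p2) < fineness:
        return [p1, p2]
    # Midpoint of the segment p1-p2 (not necessarily a skeletal voxel).
    mid = tuple((a + b) / 2 for a, b in zip(p1, p2))
    # Snap to the nearest skeletal voxel so the path stays centered.
    q = min(skeleton, key=lambda s: dist(s, mid))
    if q == p1 or q == p2:      # guard against non-terminating recursion
        return [p1, p2]
    left = centerline(p1, q, skeleton, fineness)
    right = centerline(q, p2, skeleton, fineness)
    return left[:-1] + right    # drop the duplicated point q

skel = [(0, 0, 0), (1, 0, 0), (2, 1, 0), (3, 1, 0), (4, 2, 0)]
path = centerline((0, 0, 0), (4, 2, 0), skel, fineness=1.6)
print(path)  # [(0, 0, 0), (1, 0, 0), (2, 1, 0), (3, 1, 0), (4, 2, 0)]
```

A smaller fineness value produces more points along the path, at the cost of deeper recursion and more nearest-point queries.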

The method outlined above is a simple, fast approach and works well for tube-like objects

frequently occurring in medical applications. Since it uses the closest points, it could be perturbed by small “hairs” in the skeleton. A better method, which refines the “fineness” value as the recursion proceeds, has also been implemented and shows better tolerance to small spikes.


Various other strategies to connect the set of skeletal voxels can also be used. Bitter et al. have

developed an algorithm for this purpose [8].

9.2.2 Navigation along the centerline

Figure 9.2: Trachea and its Skeleton

Figure 9.3: Interactive generation of the trachea centerline. (a) Two endpoints are defined for a navigation path. (b) A centerline is generated for each path.

We demonstrate our centerline algorithm on a segmented human trachea. The dataset consists

of 281 slices of 512x512 images, and the segmented trachea has 148,892 voxels. The complete


Figure 9.4: 3D trachea dataset. Camera views from different points along the trachea centerline (shown inset in (a)). Points on the camera path are shown as spheres in (a). Note the visible bifurcation in (b).

trachea and its skeleton with a thinness parameter of 2.5 are shown in Figures 9.2a,b. The skele-

ton captures all the bifurcations in the object and includes the disjoint fragments of the object.

The disjoint parts of the skeleton can be connected by using a minimum spanning tree as shown

in Figure 9.2c.
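One simple way to realize this connection step is Prim's algorithm over the skeletal voxels with Euclidean edge weights. The sketch below is illustrative (names and the quadratic-time formulation are ours, not the thesis code), but it does produce a minimum spanning tree that bridges disjoint fragments.

```python
from math import dist  # Euclidean distance (Python 3.8+)

def skeleton_mst(points):
    """Prim's algorithm over skeletal voxels with Euclidean edge weights.

    Returns a list of (i, j) index pairs forming a minimum spanning
    tree, which connects disjoint skeleton fragments into one tree.
    O(n^2); adequate for skeletons with a few thousand voxels.
    """
    n = len(points)
    in_tree = [False] * n
    best_cost = [float("inf")] * n   # cheapest edge into the tree so far
    best_edge = [None] * n           # tree vertex realizing that edge
    best_cost[0] = 0.0
    edges = []
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]),
                key=lambda i: best_cost[i])
        in_tree[u] = True
        if best_edge[u] is not None:
            edges.append((best_edge[u], u))
        for v in range(n):
            if not in_tree[v]:
                d = dist(points[u], points[v])
                if d < best_cost[v]:
                    best_cost[v] = d
                    best_edge[v] = u
    return edges

# Two disjoint fragments; the MST bridges the gap between them.
pts = [(0, 0, 0), (1, 0, 0), (5, 0, 0), (6, 0, 0)]
print(skeleton_mst(pts))  # [(0, 1), (1, 2), (2, 3)]
```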

Two different centerlines are generated, with the user-defined end-points as indicated in Fig-

ure 9.3a. The trachea with the centerline inside it can be seen in Figure 9.3b. In Figure 9.4

camera shots taken from four different points along the centerline can be seen. The camera is

looking downwards into the bifurcation of the trachea. The inset legend indicates the camera

position for each of the views.

The centerline consists of small line segments between skeletal voxels. The camera position

can be set at various samples along these line segments, while its orientation can be in the direc-

tion of the line segment. Such a naive approach results in jerky camera motion between adjacent


line segments. Therefore, we fit a smooth cubic spline through the skeletal voxels on the center-

line. The spline is evaluated at equidistant locations to get the camera position for navigation.

Camera orientation is computed by averaging the tangent vectors over 5 camera positions. This

results in a smooth flythrough animation for the model.
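The tangent-averaging step can be sketched as follows. The spline evaluation is omitted; we simply average finite-difference tangents over a 5-sample window, as the text describes. The function name and exact windowing are illustrative assumptions, not the thesis implementation.

```python
def smoothed_orientations(positions, window=5):
    """Camera look directions along a sampled camera path.

    Tangents are finite differences between consecutive samples; each
    orientation is the normalized average of up to `window` neighboring
    tangents, which suppresses jerky turns between adjacent segments.
    """
    tangents = [
        tuple(b - a for a, b in zip(p, q))
        for p, q in zip(positions, positions[1:])
    ]
    half = window // 2
    orientations = []
    for i in range(len(tangents)):
        nbrs = tangents[max(0, i - half): i + half + 1]
        avg = tuple(sum(t[k] for t in nbrs) / len(nbrs) for k in range(3))
        norm = sum(c * c for c in avg) ** 0.5 or 1.0  # avoid divide-by-zero
        orientations.append(tuple(c / norm for c in avg))
    return orientations

# Samples along a gently bending path (evaluated from a spline in practice).
path = [(0, 0, 0), (1, 0, 0), (2, 0.2, 0), (3, 0.6, 0), (4, 1.2, 0)]
dirs = smoothed_orientations(path)
print(dirs[0])
```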

9.3 Medical Modeling

Volume deformation and animation can serve as powerful tools for medical modeling. Generally,

models of organs are segmented from medical images such as those obtained via CT or MRI. These

organs are oriented with the patient's body at the time of the scan. The complex shapes of these

organs make it difficult to take quantitative measurements of the model or to compare it with

models obtained from earlier scans.

Figures 9.5a,b and c show three views of a human colon segmented from an MRI scan.

The size of the volume is 205x133x261 voxels. This data was provided by Dr. Richard Robb

at the Mayo Clinic. We first skeletonize the colon using the weighted ⟨3,4,5⟩ distance metric and

a thinness parameter of 2.0. The segmented colon has 657632 object voxels while the skeleton

has 17769 voxels. An articulated skeleton is manually selected from the skeleton voxels and

is shown in Figure 9.5d. Other skeleton voxels are attached to the articulated skeleton. Owing

to the large number of twists in the shape, a fully automated MST algorithm fails to accurately

connect all skeleton voxels to the appropriate joint in the articulated skeleton. Therefore, some

voxels have to be moved to their appropriate joint.

We then stretch out the articulated skeleton (Figure 9.5e) into a straight line. This is equiv-

alent to transforming the initial pose into a deformed pose for volume animation. Transforma-

tions for every joint are computed and stored when the articulated skeleton is stretched out into

a line. All skeleton voxels attached to a joint transform with the joint. We scan-fill the spheres

centered at each skeleton voxel using sampled reconstruction described in Chapter 5. The final

stretched colon is shown in Figure 9.5f. The stretched nature makes it amenable to easier nav-

igation for virtual colonoscopy. Stretching the colon model also exposes some parts which

may have been hidden inside a fold in the original volume. The stretched model can be better


compared with similar stretched models from previous and future scans, because it is indepen-

dent of the pose of the patient.

Figure 9.5: Stretching a colon dataset. We use volume animation techniques to stretch a volumetric model of the human colon.

9.4 Oil Discovery

Microscopic computed tomography (microCT) can be used to image the three-dimensional struc-

ture of very small objects. Resolutions as high as a few microns per voxel can be obtained.

Geologists and oil explorers acquire microCT volumes of sandstone samples to study the dis-

tribution of pores in the sandstone. The distribution and connectivity of pores is then analyzed

to estimate the likelihood of oil yield for future prospecting.

Volumetric models of reservoir sandstones can be obtained using microCT. 3D skeletons

(centerlines) of the pore space network are used to define the structure of the pore space as nodes

and links. The skeleton forms a basis for geometrical analysis of each of the pore bodies (nodes)

and pore throats (links).

We applied our skeleton-tree algorithm to a rock sample volume from Statoil, Norway. A

thin skeleton of the sandstone volume was created and connected into a skeleton-tree. The edges

of the skeleton-tree indicate pore throats and the vertices indicate pore bodies. The skeleton-tree

is useful for establishing connectivity between pores. Subsequent fluid simulations which try to

model the flow of oil through the rock sample use this connectivity information. The pore space

is where the oil and gas flow through the reservoir. It can be imagined as the void space

between sand grains in beach sand.


Figure 9.6 shows a volume rendering of the sandstone data. Skeletal voxels are marked red,

rock voxels are pink and pores are colored black. The image on the right in Figure 9.6 shows the

skeleton-tree for the red skeletal voxels in the left image. Edges in the skeleton-tree are shown

as lines, and skeletal voxels are shown as dots. Note the complex structure of the pore network.

Analysis of such a complex structure would be very cumbersome without the use of a simpler

abstraction like the skeleton-tree.

Figure 9.6: The skeleton-tree of a rock sample. The skeleton-tree is used to extract connectivity between pores.


Chapter 10

Future Work

In this thesis, we have developed the theory for a multi-resolution skeleton. We have also demon-

strated the usefulness of our skeleton representation for several common operations in volume

graphics. In the future, we hope to address other applications in volume and surface graphics.

They are described in the following sections.

10.1 Physically-Based Volume Animation

The volume deformation and animation technique described in Chapter 5 is based on kinematics

alone. The resulting animated volumes are anatomically correct. However, the elastic nature

of bones and tissues is not accounted for in our animation pipeline. Animations created using

motion capture data look realistic, but they are based on purely kinematic parameters like joint

rotations. Consequently, our animation is a re-mapping of motion captured from one body to a

different body. A heavier person would thus run in the same manner as a thinner one.

An advantage of a volume model is that exact information about the size and shape of var-

ious tissues is available. Given known stiffnesses for the bones and muscles, a physics based

simulation could better simulate the effects of forces acting on muscle groups, and the interac-

tions between various muscle groups. Finite element methods (FEM) and mass-spring models

can better approximate the dynamic behavior of muscles, and they can be physically correct.

We also do not exploit any anatomical information to add constraints to the animation. As

an example, no provision is made for joints deforming beyond their physically possible limits.

Biomechanics, which is the application of principles of physics and engineering to living things,

has a vast body of knowledge which can be exploited to arrive at anatomical constraints. The

use of such constraints can significantly reduce the computation involved in a general FEM or

mass-spring model of the entire human body.


Some of the factors that must be considered for physically-based volume animation include

accurate segmentation, advanced biomechanical modeling and the development of robust, fast

FEM solvers. Currently, segmentation of the Visible Human data is still an active research prob-

lem. Muscle groups have to be segmented from their background in order to measure mass and

volume properties for accurate physics simulations. In addition, if every voxel is considered a

node in an FEM grid, or a mass in a mass-spring simulation, it would result in a very large sys-

tem of equations which could potentially take days to solve. Therefore, robust and fast solvers

have to be developed for physically-based volume animation to be feasible.

10.2 Volume Graphics Applications

The skeleton and skeleton-tree can be applied to various other volume graphics problems, some

of which are described below.

10.2.1 Shape Matching

There have been few reports of work on volumetric shape matching. With volume models be-

coming more popular, classification and matching of these models will be required. An example

would be matching vortex structures in a weather simulation. If a tornado is known to have a

specific shape signature, it is critical to be able to match flow structures with such a signature.

The skeleton-tree is a simple abstraction which can be used for the purpose of matching

shapes. Parameters such as normalized length and curvature can be matched for the sub-tree be-

tween corresponding voxels in two models. Normalizing these parameters ensures scale-independence.

The skeleton is known to be rotationally invariant, which simplifies matching. Graph match-

ing algorithms can also be exploited to find correspondences between two shapes.

10.2.2 Volume Morphing

The problem of shape matching can be extended to that of shape morphing. Given correspond-

ing features on two different shapes, the morphing problem is to create a sequence of smoothly

interpolating shapes. The combination of the skeleton-tree and volume reconstruction can be

used to do volume morphing.


Both source and target volumes can be skeletonized and connected into articulated skele-

tons. Corresponding feature voxels can be picked on each of the articulated skeletons. A trans-

formation can then be computed to warp source voxels to their corresponding target voxels. In-

termediate transforms can then be computed for each voxel in the articulated skeleton, and and

intermediate volume reconstructed using these transforms. The result will be a sequence of in-

terpolated shapes.
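The per-voxel interpolation step can be sketched as simple linear interpolation between corresponding skeletal points. This is an illustrative sketch under that simplifying assumption; a full implementation would also interpolate joint rotations and then reconstruct a volume around each intermediate skeleton as in Chapter 5.

```python
def interpolate_skeleton(src, dst, t):
    """Linearly interpolate corresponding skeletal voxels.

    src and dst are lists of corresponding (x, y, z) points on the
    source and target articulated skeletons; t is in [0, 1].
    Reconstructing a volume around each interpolated skeleton yields
    one frame of the morph sequence.
    """
    return [
        tuple((1 - t) * a + t * b for a, b in zip(p, q))
        for p, q in zip(src, dst)
    ]

src = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
dst = [(0.0, 2.0, 0.0), (2.0, 2.0, 2.0)]
frames = [interpolate_skeleton(src, dst, t / 4) for t in range(5)]
print(frames[2])  # halfway: [(0.0, 1.0, 0.0), (2.0, 1.0, 1.0)]
```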


Chapter 11

Conclusion and Discussion

We have created an algorithm for extracting a multi-resolution skeleton from volumetric objects.

The multi-resolution skeleton is controlled by a single thinness parameter. Skeletons at various

resolutions can be created by changing the thinness parameter. Our algorithm computes vox-

els which are most important for reconstruction at some resolution, and marks them as skeleton

voxels. The runtime is linear in the number of voxels, and the algorithm supports various dis-

tance metrics like the weighted and Euclidean metric. The algorithm also works for 2D images

and should extend to dimensions greater than three.

We have also demonstrated various applications of the parameter-controlled skeleton tech-

nique to solve some classical problems in volume graphics, which were either unsolved or cum-

bersome to solve. We showed how the skeleton can be connected into an articulated skeleton

and animated using existing animation tools. This gives unprecedented levels of realism for an-

imating volumes. Furthermore, we can animate extremely large volumes with several million

voxels using this method as shown through animations of the Visible Human. We have also

demonstrated the creation of a compact bounding representation of a volumetric object based

on parameter-controlled skeletonization. This bounding representation has been applied to vol-

umetric collision detection, and can detect collisions between multi-million voxel volumes in

sub-second times without the need for an additional memory buffer.

Specific contributions of this work are as follows:

• A new algorithm for extracting a multi-resolution skeleton from binary images and vol-

umes,

• The skeleton-tree data structure for shape abstraction,

• A skeleton-based volume animation method which is capable of realistic animation and


integrates easily with existing animation tools,

• A shape-based hierarchical volumetric collision detection method that can run in real time

on large real-world volumetric models, and

• Various applications of the skeleton to virtual endoscopy, volume tracking, medical mod-

eling and oil discovery.

We expect this method will form the bridge between the volume and polygonal domains and

will hopefully allow volumetric models to become ubiquitous in the field of computer graphics.

We also hope that our parameter-controlled skeleton will be applied to other problems in volume

graphics such as shape matching, volume morphing and registration. Several researchers have

shown interest in extending our work, and various projects around the world are underway that

apply this work to different problem domains.


References

[1] C. Arcelli and G. Sanniti di Baja. A Width-Independent Fast Thinning Algorithm. IEEE Transactions on Pattern Analysis and Machine Intelligence, 7(4):463–474, 1985.

[2] F. Aurenhammer. Voronoi Diagrams: A Survey of a Fundamental Geometric Data Structure. ACM Computing Surveys, 23(3):345–405, September 1991.

[3] T. Beier and S. Neely. Feature-based Image Metamorphosis. Computer Graphics (Proceedings of SIGGRAPH 92), 26(2):35–42, July 1992.

[4] G. Bertrand. A Parallel Thinning Algorithm For Medial Surfaces. Pattern Recognition Letters, 16:979–986, 1995.

[5] G. Bertrand. A Boolean Characterization of Three-Dimensional Simple Points. Pattern Recognition Letters, 17:115–124, 1996.

[6] G. Bertrand and G. Malandain. A New Characterization of Three-Dimensional Simple Points. Pattern Recognition Letters, 15:169–175, 1994.

[7] G. Bertrand and G. Malandain. A Note on “Building Skeleton Models via 3-D Medial Surface/Axis Thinning Algorithms”. Graphical Models and Image Processing, 57(6):537–538, 1995.

[8] I. Bitter, M. Sato, M. Bender, K. McDonnell, A. Kaufman, and M. Wan. CEASAR: Accurate and Robust Algorithm for Extracting a Smooth Centerline. In Proceedings of IEEE Visualization 2000, pages 45–52, October 2000.

[9] J.F. Blinn. A Generalization of Algebraic Surface Drawing. ACM Transactions on Graphics, 1(3):235–256, July 1982.

[10] J. Bloomenthal. Bulge Elimination in Implicit Surface Blends. In Implicit ’95 – The First Eurographics Workshop on Implicit Surfaces, pages 7–20, Grenoble, France, April 1995.

[11] J. Bloomenthal. Skeletal Design of Natural Forms. Ph.D. Thesis, The University of Calgary, Calgary, Alberta, January 1995.

[12] J. Bloomenthal, C. Bajaj, J. Blinn, M-P. Cani-Gascuel, A. Rockwood, B. Wyvill, and G. Wyvill. Introduction To Implicit Surfaces. Morgan Kaufmann Publishers, 1997.

[13] J. Bloomenthal and K. Shoemake. Convolution Surfaces. In Computer Graphics (SIGGRAPH ’91 Proceedings), volume 25, pages 251–257, July 1991.

[14] J. Bloomenthal and B. Wyvill. Interactive Techniques for Implicit Modeling. Computer Graphics (Symposium on Interactive 3D Computer Graphics), 24(2):109–116, March 1990.


[15] G. Borgefors. On Digital Distance Transforms in Three Dimensions. Computer Vision and Image Understanding, 64(3):368–376, November 1996.

[16] J.W. Brandt and V.R. Algazi. Continuous Skeleton Computation by Voronoi Diagram. CVGIP: Image Understanding, 55(3):329–338, May 1992.

[17] N. Burtnyk and M. Wein. Interactive Skeleton Techniques for Enhancing Motion Dynamics in Key Frame Animation. Communications of the ACM, 19:564–569, October 1976.

[18] B. Cabral, N. Cam, and J. Foran. Accelerated Volume Rendering and Tomographic Reconstruction Using Texture Mapping Hardware. In Arie Kaufman and Wolfgang Krueger, editors, Symposium on Volume Visualization, pages 91–98. ACM SIGGRAPH, October 1994.

[19] Y. Chen, Q. Zhu, and A. Kaufman. Physically-based Animation of Volumetric Objects. Technical Report TR-CVC-980209, SUNY Stony Brook, February 1998. URL: http://www.cs.sunysb.edu/~vislab/projects/deform/Papers/animation.ps.

[20] T.H. Cormen, C.E. Leiserson, and R.L. Rivest. Introduction to Algorithms. MIT Press and McGraw-Hill Book Company, 6th edition, 1992.

[21] D.G. Morgenthaler. Three Dimensional Simple Points: Serial Erosion, Parallel Thinning, and Skeletonization. Technical Report TR-1005, Computer Science Center, University of Maryland, College Park, 1981.

[22] R.A. Drebin, L. Carpenter, and P. Hanrahan. Volume Rendering. In Computer Graphics (SIGGRAPH ’88 Proceedings), volume 22, pages 65–74, August 1988.

[23] D.S. Ebert, W.E. Carlson, and R.E. Parent. Solid Spaces and Inverse Particle Systems for Controlling the Animation of Gases and Fluids. The Visual Computer, 10(4):179–190, March 1994.

[24] H. Edelsbrunner. Algorithms in Combinatorial Geometry. Springer-Verlag, Berlin, 1987.

[25] S. Fang, R. Srinivasan, R. Raghavan, and J.T. Richtsmeier. Volume Morphing and Rendering: An Integrated Approach. Computer Aided Geometric Design, 17(1):59–81, January 2000.

[26] E. Ferley, M. Gascuel, and D. Attali. Skeletal Reconstruction of Branching Shapes. In Implicit Surfaces ’96: 2nd International Workshop on Implicit Surfaces, pages 127–142, Eindhoven, The Netherlands, October 1996.

[27] N. Gagvani, D. Kenchammana-Hosekote, and D. Silver. Volume Animation Using The Skeleton Tree. In IEEE Volume Visualization Symposium, pages 47–54, October 1998.

[28] N. Gagvani and D. Silver. Parameter Controlled Volume Thinning. Graphical Models and Image Processing, 61(3):149–164, May 1999.

[29] N. Gagvani and D. Silver. Shape-based Volumetric Collision Detection. In Proc. IEEE Volume Visualization Symposium, pages 57–61, October 2000.

[30] B. Geiger and R. Kikinis. Simulation of Endoscopy. Springer-Verlag, April 1995.


[31] S. Gibson. Using Linked Volumes to Model Object Collisions, Deformation, Cutting, Carving and Joining. IEEE Transactions on Visualization and Computer Graphics, 5(4):330–348, October–December 1999.

[32] S.F. Gibson. 3D Chain Mail: A Fast Algorithm For Deforming Volumetric Objects. In Proceedings 1997 Symposium on Interactive 3D Graphics, pages 149–154, April 1997.

[33] S.F. Gibson and B. Mirtich. A Survey of Deformable Modeling in Computer Graphics. Technical Report TR97-19, MERL, November 1997. URL: http://www.merl.com/reports/TR97-19/TR97-19.ps.gz.

[34] W.X. Gong and G. Bertrand. A Note on “Thinning of 3-D Images Using the Safe Point Thinning Algorithm”. Pattern Recognition Letters, 11:499–500, 1990.

[35] S. Gottschalk, M. Lin, and D. Manocha. OBB-Tree: A Hierarchical Structure for Rapid Interference Detection. In Computer Graphics (Proceedings of SIGGRAPH 96), pages 171–180, August 1996.

[36] M. Halstead, M. Kass, and T. DeRose. Efficient, Fair Interpolation Using Catmull-Clark Surfaces. In James T. Kajiya, editor, Computer Graphics (Proceedings of SIGGRAPH 93), pages 35–44, August 1993.

[37] T. He and A. Kaufman. Collision Detection for Volumetric Objects. In Proceedings of IEEE Visualization, pages 27–34, October 1997.

[38] T. He, S. Wang, and A. Kaufman. Wavelet-Based Volume Morphing. In Proceedings Visualization ’94, pages 85–92, Los Alamitos, CA, October 1994. IEEE Computer Society Press.

[39] J.L. Helman and L. Hesselink. Visualization of Vector Field Topology in Fluid Flows. IEEE Computer Graphics and Applications, 11(3):36–46, 1991.

[40] L. Hong, A. Kaufman, Y-C. Wei, A. Viswambharan, M. Wax, and Z. Liang. 3D Virtual Colonoscopy. In IEEE Symposium on Frontiers in Biomedical Visualization, pages 26–32, 1995.

[41] J. Huang, R. Yagel, and V. Filippov. Accurate Method for the Voxelization of Planar Objects. In Proceedings IEEE Symposium on Volume Visualization, pages 119–126, October 1998.

[42] P.M. Hubbard. Interactive Collision Detection. In Proceedings of IEEE Symposium on Research Frontiers in Virtual Reality, October 1993.

[43] J.F. Hughes. Scheduled Fourier Volume Morphing. Computer Graphics, 26(2):43–46, July 1992.

[44] R. Jain, R. Kasturi, and B.G. Schunck. Machine Vision. McGraw-Hill, Inc., New York, 1995.

[45] C.O. Kiselman. Regularity Properties of Distance Transformations in Image Analysis. Computer Vision and Image Understanding, 64(3):390–398, November 1996.

[46] T.Y. Kong and A. Rosenfeld. Digital Topology: Introduction and Survey. Computer Vision, Graphics and Image Processing, 48:357–393, 1989.


[47] J. Korein and N. Badler. Techniques for Generating the Goal Directed Motion of Articulated Structures. IEEE Computer Graphics and Applications, 2(9):71–81, November 1982.

[48] Y. Kurzion and R. Yagel. Space Deformation Using Ray Deflectors. In 6th Eurographics Workshop on Rendering, pages 21–32. Springer, Vienna, June 1995.

[49] P. Lacroute and M. Levoy. Fast Volume Rendering Using a Shear–Warp Factorization of the Viewing Transformation. In Proceedings of SIGGRAPH ’94, Computer Graphics Proceedings, Annual Conference Series, pages 451–458, July 1994.

[50] N.A. Langrana, G. Burdea, K. Lange, D. Gomez, and S. Deshpande. Dynamic Force Feedback in a Virtual Knee Palpation. Artificial Intelligence in Medicine, 6:321–333, 1994.

[51] J. Lasseter. Principles of Traditional Animation Applied to 3D Computer Animation. Computer Graphics (Proceedings of SIGGRAPH 87), 21(4):35–44, July 1987.

[52] L. Latecki and C.M. Ma. An Algorithm for a 3D Simplicity Test. Computer Vision and Image Understanding, 63(2):388–393, March 1996.

[53] T.C. Lee, R.L. Kashyap, and C.N. Chu. Building Skeleton Models Via 3-D Medial Surface/Axis Thinning Algorithms. Graphical Models and Image Processing, 56(6):462–478, November 1994.

[54] A. Lerios, C. Garfinkle, and M. Levoy. Feature-based Volume Metamorphosis. In Computer Graphics (Proceedings of SIGGRAPH 95), pages 449–456, August 1995.

[55] F. Leymarie and M.D. Levine. Fast Raster Scan Distance Propagation on the Discrete Rectangular Lattice. CVGIP: Image Understanding, 55(1):84–94, 1992.

[56] W. Lorensen and H. Cline. Marching Cubes: A High Resolution 3D Surface Construction Algorithm. In Maureen C. Stone, editor, Computer Graphics (Proceedings of SIGGRAPH 87), volume 21, pages 163–169, July 1987.

[57] C.M. Ma and M. Sonka. A Fully Parallel 3D Thinning Algorithm and Its Applications. Computer Vision and Image Understanding, 64(3):420–433, November 1996.

[58] J. Mukerjee, P.P. Das, and B.N. Chatterji. Thinning of 3-D Images Using the Safe Point Thinning Algorithm (SPTA). Pattern Recognition Letters, 10:167–173, 1989.

[59] J. Mukherjee, P.P. Das, and B.N. Chatterji. On Connectivity Issues of ESPTA. Pattern Recognition Letters, 11:643–648, 1990.

[60] N.J. Naccache and R. Shinghal. SPTA: A Proposed Algorithm For Thinning Binary Patterns. IEEE Transactions on Systems, Man, and Cybernetics, 3:409–418, 1984.

[61] M. Naf, G. Szekely, R. Kikinis, M.E. Shenton, and O. Kubler. 3D Voronoi Skeletons and Their Usage for the Characterization and Recognition of 3D Organ Shape. Computer Vision and Image Understanding, 66(2):147–161, 1997.

[62] W. Niblack, P.B. Gibbons, and D. Capson. Generating Skeletons and Centerlines from the Distance Transform. CVGIP: Graphical Models and Image Processing, 54(5):420–437, September 1992.


[63] F. Nilsson and P-E. Danielsson. Finding the Minimal Set of Maximum Disks for BinaryObjects. Graphical Models and Image Processing, 59(1):55–60, January 1997.

[64] R.L. Ogniewicz and O. Kubler. Hierarchic Voronoi Skeletons. Pattern Recognition,28(3):343–359, 1995.

[65] T. Pavlidis. A Thinning Algorithm for Discrete Binary Images. Computer Graphics andImage Processing, 13:142–157, 1980.

[66] H. Pfister, J. Hardenbergh, J. Knittel, H. Lauer, and L. Seiler. The VolumePro Real-TimeRay-casting System. In Alyn Rockwood, editor, Proceedings of SIGGRAPH 99, Com-puter Graphics Proceedings, Annual Conference Series, pages 251–260. Addison WesleyLongman, August 1999.

[67] F.P. Preparata and M.I. Shamos. Computational Geometry. Springer-Verlag, New York, 1990.

[68] I. Ragnemalm. The Euclidean Distance Transformation in Arbitrary Dimensions. Pattern Recognition Letters, 14:883–888, 1993.

[69] J.M. Reddy and G.M. Turkiyyah. Computation of 3D Skeletons Using a Generalized Delaunay Triangulation Technique. Computer-Aided Design, 27(9):677–694, September 1995.

[70] W.T. Reeves. Particle Systems - A Technique for Modeling a Class of Fuzzy Objects. ACM Transactions on Graphics, 2(2):91–108, April 1983.

[71] R.A. Robb. Virtual (Computed) Endoscopy: Development and Evaluation Using the Visible Human Datasets. In Visible Human Project Conference, October 1996. www.nlm.nih.gov/research/visible/vhp_conf/robb/robb_pap.htm.

[72] T. Saito and J. Toriwaki. New Algorithms for Euclidean Distance Transformation of an n-Dimensional Digitized Picture with Applications. Pattern Recognition, 27:1551–1565, 1994.

[73] H. Samet. The Design and Analysis of Spatial Data Structures. Addison-Wesley, Reading, Massachusetts, 1989.

[74] Sense8 Corporation. The World Toolkit Home Page. http://www.sense8.com.

[75] D.J. Sheehy, C.G. Armstrong, and D.J. Robinson. Shape-Description by Medial Surface Construction. IEEE Trans. on Visualization and Computer Graphics, 2(1):62–72, March 1996.

[76] E.C. Sherbrooke, N.M. Patrikalakis, and E. Brisson. An Algorithm for the Medial Axis Transform of 3D Polyhedral Solids. IEEE Trans. on Visualization and Computer Graphics, 2(1):44–61, March 1996.

[77] D. Silver and X. Wang. Tracking and Visualizing Turbulent 3D Features. IEEE Transactions on Visualization and Computer Graphics, 3(2):129–141, June 1997.

[78] T. Simerman. Anatomy of an Animation. Computer Graphics World, 18(3), March 1995.

[79] M. Sramek and A. Kaufman. Object Voxelization by Filtering. In Proc. IEEE Symposium on Volume Visualization, pages 111–118, October 1998.

[80] R. Stefanelli and A. Rosenfeld. Some Parallel Thinning Algorithms for Digital Pictures.Journal of the ACM, 18:255–264, 1971.

[81] N. Thalmann and D. Thalmann. Computer Animation, Theory and Practice, 2nd Revised Ed., pages 175–177. Springer-Verlag, 1990.

[82] Y.F. Tsao and K.S. Fu. A 3D Parallel Skeletonwise Thinning Algorithm. In Proc. IEEE Pattern Recognition and Image Processing Conf., pages 678–683, 1982.

[83] Viewpoint Digital. Motion Data. http://www.viewpoint.com/freestuff/ktx/.

[84] J. Wernecke. The Inventor Mentor. Addison-Wesley Publishing Company, 1994.

[85] L. Westover. Footprint Evaluation for Volume Rendering. In Computer Graphics (Proceedings of SIGGRAPH 90), volume 24, pages 367–376, August 1990.

[86] G. Wyvill, C. McPheeters, and B. Wyvill. Data Structures for Soft Objects. The Visual Computer, 2(4):227–234, April 1986.

[87] H. Yamada. Complete Euclidean Distance Transformation by Parallel Operation. In Proceedings of the 7th International Conference on Pattern Recognition, pages 69–71, 1984.

[88] W. Zhongke and E.C. Prakash. Visible Human Walk: Bringing Life Back to the Dead Body. In Proceedings of International Workshop on Volume Graphics, pages 347–356, March 1999.