
Research Collection

Report

Multiresolution compression and reconstruction

Author(s): Staadt, Oliver G.; Gross, Markus H.; Weber, Roger

Publication Date: 1997

Permanent Link: https://doi.org/10.3929/ethz-a-006652072

Rights / License: In Copyright - Non-Commercial Use Permitted

This page was generated automatically upon download from the ETH Zurich Research Collection. For more information please consult the Terms of use.

ETH Library


Multiresolution Compression And Reconstruction

Oliver G. Staadt, Markus H. Gross, Roger Weber

Computer Science Department, ETH Zürich

ABSTRACT

This paper presents a framework for multiresolution compression and geometric reconstruction of arbitrarily dimensioned data designed for distributed applications. Although restricted to uniformly sampled data, our versatile approach enables the handling of a large variety of real-world elements. Examples include nonparametric, parametric and implicit lines, surfaces or volumes, all of which are common to large scale data sets. The framework is based on two fundamental steps: Compression is carried out by a remote server and generates a bitstream transmitted over the underlying network. Geometric reconstruction is performed by the local client and renders a piecewise linear approximation of the data. More precisely, our compression scheme consists of a newly developed pipeline starting from an initial B-spline wavelet precoding. The fundamental properties of wavelets allow progressive transmission and interactive control of the compression gain by means of global and local oracles. In particular, we discuss the problem of oracles in semiorthogonal settings and propose sophisticated oracles to remove unimportant coefficients. In addition, geometric constraints such as boundary lines can be compressed in a lossless manner and are incorporated into the resulting bitstream. The reconstruction pipeline performs a piecewise adaptive linear approximation of the data using a fast and easy to use point removal strategy which works with any subsequent triangulation technique. As a result, the pipeline renders line segments, triangles or tetrahedra. Moreover, the underlying continuous approximation of the wavelet representation can be exploited to reconstruct implicit functions, such as isolines and isosurfaces, more smoothly and precisely than commonplace methods. Although it scales straightforwardly to higher dimensions, the performance of our framework is illustrated with results achieved on data very popular in practice: parametric curves and surfaces, digital terrain models, and volume data.

CR Descriptors: E.4 [Coding and Information Theory]: Data compaction and compression; I.3.5 [Computational Geometry and Object Modeling]: Curve, surface, solid, and object representations; Splines; I.3.7 [Three-Dimensional Graphics and Realism]; I.4.5 [Reconstruction]: Transformation methods.

Additional Keywords and Phrases: wavelets, isosurfaces, volumes, triangulation, tetrahedralization, meshing, oracles.

1 INTRODUCTION

1.1 Motivation and Previous Work

Geometry compression is an attractive and emerging subfield of computer graphics research which has gained much importance in recent years. Especially when aiming at distributed, interactive rendering and visualization applications, many of which are closely related to the WWW, efficient data encoding is an essential prerequisite for both storage efficiency and real-time performance. In this context, we often face client-server setups where a remote server maintains complex data sets which have to be browsed, inspected, analyzed or rendered with low latency by a local client. Apart from rendering complex scenes, consider the case of visualizing large digital terrain or medical volume data sets located somewhere in a remote database: For fast searching and browsing it is often sufficient to generate a low level of detail representation. Conversely, it is sometimes desirable to preserve interesting features such as boundaries, isolines, or spatially appealing regions in full detail while keeping the overall throughput of the communications channel as low as possible. Fig. 1 illustrates some examples where different criteria hold for a meaningful data representation.

Email: {staadt, grossm, weber}@inf.ethz.ch
WWW: http://www.inf.ethz.ch/department/IS/cg

Hence, the underlying data representation should be flexible and has to encompass both global and local levels of detail while accounting for constraints imposed by special data features. Obviously, as opposed to standard image compression methods, information loss is a manifold problem and has to be controlled much more carefully in graphics applications. As a consequence, elaborate data encoding and compression methods are called for which successfully address the situations featured above. On the client side, visual data inspection and analysis are tightly related to the computation of geometric reconstructions from the data, mostly in terms of piecewise linear elements such as line segments, triangles, or tetrahedra. It is therefore desirable to perform reconstructions efficiently from the bitstream of incoming data. Moreover, geometry should be refined progressively as more and more data arrives at the client. All representations have to be adaptive, in the sense that the number of triangles has to vary as a function of the client's performance and interest while still providing a meaningful representation.

It is clear that much successful research effort has been spent on developing appropriate methods for geometry compression. Early approaches go back to Douglas et al. [8], who proposed a simplification method for line segments. We can also find a vast amount of literature on mesh representation strategies, a good survey of which is provided by Heckbert et al. [13]. Later, Deering [7], for instance, proposed a scheme to compress triangular shapes and their attributes. Hoppe [14] suggested the concept of progressive meshes for triangulated shapes, where edge split and collapse techniques lead to a continuous hierarchy of levels of detail of an object and constraints may be imposed easily. Others [17], [9] discussed representation and parametrization strategies for meshes of arbitrary topology using linear wavelets. However, high compression gain along with continuous approximation requires smooth, higher order polynomial wavelets, which are difficult to define over arbitrary meshes. The special case of digital terrain data was addressed, for instance, by Lindstrom et al. [16] and Gross

Figure 1: Illustration of situations arising from visual inspection of data sets: a) 2D nonparametric surface. b) 2D parametric surface. c) 3D implicit isosurface.



et al. [12]. The latter used an underlying wavelet representation to govern mesh refinement and featured both global and local levels of detail.

In summary, much effort has been spent on finding appropriate mesh simplification and representation methods which allow for fast and progressive transmission and rendering of complex scenes. However, little attention has been paid to the following issues:

• Many technical applications in practice, such as medical imaging systems, produce raw data sampled over uniform grids. Due to their complexity, these data sets have to be compressed and stored in a remote database server. Thus, visual inspection and browsing require computation of piecewise linear geometric reconstructions from the compressed data set.

• Up to now, compression was mostly considered a mesh representation problem. The manifold aspects of a full compression pipeline, such as precoding, global and local oracles, quantization, and optimal bit allocation, have rarely been discussed in full detail.

• Compression and reconstruction should be embedded in a framework which provides an interface for the client and offers a testbed for individual methods. In particular, both lossy and lossless compression must be combined to satisfy demands arising from geometric reconstructions.

1.2 Our Approach

The research presented in this paper was stimulated by the issues discussed above. The goal was to provide an efficient and versatile compression and reconstruction pipeline which accounts for client-server setups. The framework is hybrid in the sense that it combines both lossy and lossless compression. Being restricted to uniformly sampled data, we can use bounded B-spline wavelets, such as in [21] and [19], for data precoding. Some of their relatives have successfully been used in image compression [23]. The underlying approximation features high compression gain, elimination of boundary problems, multiresolution progressive setups, and both global and local oracles within the error bounds of $L_2$. Furthermore, B-spline wavelets allow one to build linear time decomposition and reconstruction schemes forming a basis for fast compression and decompression. The geometric reconstruction of the data essentially combines a generalized point removal/insertion strategy with a subsequent triangulation. We restrict our attention to vertex removal and keep it independent of the meshing. That is, we consider a meaningful triangulation as a plug-in, such as provided by the qhull library [1]. Special emphasis, however, is given to implicit reconstruction tasks which occur in many applications. For this, we exploit the smoothness properties of the underlying approximation, which allows smoother and more precise computation of implicit intersections than current methods. Again, triangulation algorithms, such as marching tetrahedra [2], are provided as plug-ins from other sources. Thus, our framework features modular and object-oriented design, currently embedded in AVS/Express.

Fig. 2 depicts an overview of the framework. The individual components can be combined according to the requirements of the application. The remote server performs data compression, governed by parameter settings for global and local oracles, and produces a bitstream received by a client. It is at this step that geometric reconstruction and interactive visualization are computed. The quality of the geometric reconstruction computed by the client can be controlled depending on network performance, computational and storage capabilities of the client, or on the data themselves.

We are aware that the restriction to uniformly sampled data might be considered a major drawback. We believe, however, that the rich variety of applications covered by our approach justifies the presented research.

The remainder of our contribution is organized as follows: In Section 2 we describe the fundamentals of the multiresolution data precoding, emphasizing new methods for the construction of global oracles for semiorthogonal B-spline wavelets. Section 3 addresses all relevant issues related to our compression strategies for quantization and bit allocation. A fast and easy to use geometric reconstruction technique based on progressive point removal/insertion is explained in Section 4. The special problem of implicit interpolation for isolines and -surfaces is elucidated in Section 5. Finally, we illustrate the versatility of our framework with various examples ranging from different surface types to volume data.

2 MR APPROXIMATIONS

In this section we discuss the mathematical fundamentals of the preprocessing we employ for data preconditioning. As stated earlier, B-spline wavelets are used as a precoding transform since they combine various advantageous features, such as vanishing moments, continuous approximation, bounded interval definition, linear time algorithms, and localization. For reasons of readability, we first review some basics of cardinal B-spline wavelets. However, our attention is mostly directed to the definition of global oracles, that is, schemes to reject unimportant coefficients. Our global oracle consists of a greedy algorithm resulting from an elaborate analysis of errors in semiorthogonal settings [11], an excerpt of which is given in Section 2.2. Additionally, we will demonstrate how local oracles reject coefficients in unimportant spatial regions and thus enable the construction of electronic magnifying glasses for interactive data inspection. For reasons of simplicity, we perform all computations for 1D functions, but the results extend straightforwardly to higher dimensions.

2.1 B-Spline Wavelets

B-spline wavelets were initially introduced by Chui [4], and were extended to bounded intervals by [19] and [21], while nonuniform knot sequences were addressed, for instance, by [3]. Due to the rich variety of literature in this area, we restrict our introduction to those topics essential for an understanding of our framework.

B-spline wavelets can be constructed from a multiresolution hierarchy of cardinal B-spline scaling functions. Semiorthogonality invokes an additional degree of freedom, however. Thus,

Figure 2: Illustration of the conceptual components of our compression and reconstruction framework embedded into a client-server setup.

[Figure 2 diagram: server side — Data → Wavelet Transform → Oracle → Compression (with Constraints) — forms the compression pipeline; a progressive compressed bitstream is transmitted to the client side, whose decompression pipeline runs Decompression / WT⁻¹ → Point Removal → Meshing → Rendering.]


approaches as in [19] or [21] end up in slightly different construction schemes. We adapted the methods of Quak et al. [19] to construct B-spline wavelets of arbitrary order bounded to the interval.

Assuming the reader is familiar with some fundamentals of discrete wavelet transforms (DWT), the implementation of the forward transform is carried out by sequences of projection operators $A^m, B^m$, $m = 1 \ldots M$, where $m$ stands for the decomposition level. An initial function $f(x)$ is mapped from the higher resolution approximation space $V^m$ onto a lower resolution space $V^{m+1}$ and onto its orthogonal complement space $W^{m+1}$. Given the coefficient vectors $c^m$ and $d^m$ for the scaling functions $\phi_i^m(x)$ and wavelets $\psi_i^m(x)$ in the 1D setting, with

$$c_i^m = \langle f, \phi_i^m \rangle, \quad d_i^m = \langle f, \psi_i^m \rangle, \quad i: 1 \ldots N/2^m + \mathrm{order} - 1, \quad \mathrm{order}: \text{B-spline order}, \qquad (1)$$

the decomposition is performed by matrix operations

$$c^{m+1} = A^m c^m, \qquad d^{m+1} = B^m c^m. \qquad (2)$$

Due to the semiorthogonality, we require the inverse operators $P^m$ and $Q^m$ to compute the reconstruction with

$$c^m = P^m c^{m+1} + Q^m d^{m+1}. \qquad (3)$$

It can be easily proven [19] that the operators relate to each other by

$$\begin{bmatrix} A^m \\ B^m \end{bmatrix} = \left[\, P^m \mid Q^m \,\right]^{-1}. \qquad (4)$$

In the case of cardinal B-spline wavelets, sparse operators $P^m$ and $Q^m$ come along with dense matrices $A^m$ and $B^m$. In order to construct linear time algorithms for both decomposition and reconstruction, it is sufficient to know the sequences $P^m$ and $Q^m$ and to perform an additional base transform of the coefficients into their duals $\tilde c^m$ and $\tilde d^m$ using the inner product matrices $\Phi^m$ and $\Psi^m$. This results in a decomposition and reconstruction scheme as depicted in Fig. 3.

Note that the decomposition involves solutions of sparse linear systems of the type $\Psi^m d^m = \tilde d^m$ for each iteration and $\Phi^M c^M = \tilde c^M$ for the last iteration step. Fortunately, this can be accomplished in linear time as well. For brevity we abandon all mathematical details associated with the construction of these transforms and refer the reader to [19]. Instead, we direct our attention in the following section to the problem of global oracles.
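To illustrate the pyramid scheme of Fig. 3, the following sketch runs the analogous decomposition/reconstruction cascade for the orthogonal Haar case, where the operators $A^m, B^m, P^m, Q^m$ collapse to two-tap filters and duals coincide with primals, so no inner product systems need to be solved. The Haar substitution and all function names are our own simplification, not the paper's B-spline construction:

```python
import numpy as np

S = 0.5 ** 0.5  # normalization making the Haar transform orthogonal

def decompose(c):
    """One pyramid level: c^{m+1} = A c^m, d^{m+1} = B c^m (Haar case)."""
    c = np.asarray(c, dtype=float)
    return S * (c[0::2] + c[1::2]), S * (c[0::2] - c[1::2])

def reconstruct(c_low, d):
    """Inverse level: c^m = P c^{m+1} + Q d^{m+1} (Haar case)."""
    c = np.empty(2 * len(c_low))
    c[0::2] = S * (c_low + d)
    c[1::2] = S * (c_low - d)
    return c

def forward(c, M):
    """Full M-level decomposition: returns (c^M, [d^1, ..., d^M])."""
    details = []
    for _ in range(M):
        c, d = decompose(c)
        details.append(d)
    return c, details

def inverse(c, details):
    """Rebuild c^0 from the coarse band and all detail channels."""
    for d in reversed(details):
        c = reconstruct(c, d)
    return c
```

Both directions run in linear time, as in the paper's scheme; in this orthogonal stand-in the signal energy additionally splits exactly across the bands.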

2.2 Global Oracles

A global oracle rejects unimportant wavelet coefficients from the transform while minimizing a given error norm. It is clear that the global oracle controls the compression and is essential for information loss. In orthogonal settings, optimal oracles can be constructed easily by sorting coefficients according to their magnitude and rejecting the smallest ones from the list [21]. This strategy is commonplace in many applications and offers good results [10]. Unfortunately, in semiorthogonal spaces, construction becomes more difficult and has hardly been addressed in depth. Maximum distance norm oracles have been proposed by [22] for biorthogonal wavelets. Mathematical analysis of the behavior of approximation errors for semiorthogonal wavelets is closely related to signal energy computations.

The computational scheme for the global oracle is based on the observation that the energy of a function $f(x)$ expanded by semiorthogonal wavelets is obtained by the following sum of scalar products:

$$\| f(x) \|_{L_2}^2 = c^M \bullet \tilde c^M + \sum_{m=1}^{M} d^m \bullet \tilde d^m \qquad (5)$$

• $\bullet$: scalar product operator.

Due to the orthogonality of different complement spaces it is sufficient to analyze the error norm of a single space $W^m$. In order to derive an incremental method we assume $K$ out of $N/2^m + \mathrm{order} - 1$ coefficients in this space to vanish. The approximation error is determined by the following relation:

$$\|\Delta f^m(x) - \Delta f'^m(x)\|^2_{L_2,\,Rej(K)} = \Big\langle \sum_{i \in Rej(K)} d_i^m \psi_i^m(x),\ \sum_{j \in Rej(K)} d_j^m \psi_j^m(x) \Big\rangle = \sum_{i \in Rej(K)} \sum_{j \in Rej(K)} d_i^m\, d_j^m\, \langle \psi_i^m(x), \psi_j^m(x) \rangle \qquad (6)$$

where $\Delta f'^m(x)$ represents the residual approximation and $Rej(K)$ denotes the set of all coefficients being rejected from the initial transform. In a next step we compute how the upper error behaves when rejecting an arbitrary $d_k^m \neq 0$, assuming $K$ coefficients to have already been rejected in previous procedures. That is, we compute an expression for the error increment generated by a single coefficient:

$$\|\Delta f^m(x) - \Delta f'^m(x)\|^2_{L_2,\,Rej(K,k)} = \|\Delta f^m(x) - \Delta f'^m(x)\|^2_{L_2,\,Rej(K)} + (d_k^m)^2 + 2\, d_k^m \sum_{i \in Rej(K)} d_i^m\, \langle \psi_k^m(x), \psi_i^m(x) \rangle \qquad (7)$$

Equation (7) expresses the dependence of the error on an increment of the rejection set. We will refer to it as the conditional approximation error in all subsequent discussions. The factor of 2

Figure 3: Linear time decomposition and reconstruction pyramids for cardinal B-spline wavelet transforms. a) Decomposition. b) Reconstruction. ($\tilde c^m, \tilde d^m$: coefficient vectors in dual space; T: transpose.)


follows immediately from the symmetry of the inner product matrix. Apparently, the conditional increment is computed by adding one row and one column to the matrix-type structure representing the double summation of (6), as depicted in Fig. 4. In summary, the error can be updated by adding the products of the rejected coefficient and the elements of the rejection set times the associated entry of the inner product matrix. Note that this error can be considered a score which reflects the conditional importance of an individual coefficient.

The relations derived represent an essential step towards the development of an oracle. They allow one to predict how the approximation error changes when rejecting an individual wavelet coefficient under the precondition that K other coefficients have been rejected earlier. Based on this fundamental relationship, it is possible to develop a greedy rejection algorithm which optimizes locally and computes a minimum error rejection set of coefficients. In essence, the greedy oracle operates as follows: It first assigns an initial score to all wavelet coefficients of all iterations m. The score is defined by the overall conditional approximation error, which governs the oracle. In a second step, the oracle iteratively selects the coefficient with the minimum score, rejects it, and recomputes the scores of all other coefficients. The iteration loop runs up to a predefined number K of cycles or up to a predefined error bound E_b. As with equation (7), the score can be recomputed by an appropriate increment after each iteration. Thus we end up with a simple reject-and-update scheme for our oracle. The pseudocode is provided below:

    Initialize: score[i,m] ← d[i,m] · d[i,m];
    for k ← 1 to K do
        Search:  coefficient [irej, mrej] | score[irej, mrej] = min ≠ 0;
        Reject:  d[irej, mrej] ← 0;
        for all coefficients [i, m] do
            if m = mrej && score[i,m] ≠ 0 then
                increment[i,m] ← 2 · d[i,m] · d[irej, mrej] · ψ[i, irej, m]
                                 + score[irej, mrej] − old_score;
            else if score[i,m] ≠ 0 then
                increment[i,m] ← score[irej, mrej] − old_score;
            Update:  score[i,m] ← score[i,m] + increment[i,m];
    end;

After each step, the score[i,m] of a coefficient d[i,m] represents the overall conditional approximation error when rejecting d[i,m]. Note that the time complexity of the oracle equals O(N²) and applies only to forward compression.
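The reject-and-update scheme can be transcribed for a single level m with an explicit Gram (inner product) matrix G[i, j] = ⟨ψ_i, ψ_j⟩. The sketch below is our own illustration, not the paper's code; we keep the diagonal of G explicit in the initialization, which reduces to the pseudocode's d·d for normalized wavelets, and in the orthogonal case (G = I) the oracle degenerates to plain magnitude sorting:

```python
import numpy as np

def greedy_oracle(d, G, K):
    """Greedy reject-and-update oracle for one decomposition level.
    The score update mirrors eq. (7): 2*d_i*d_k*G[i, k] plus the growth
    of the rejection-set error; the factor 2 stems from the symmetry
    of the inner product matrix G."""
    d = np.asarray(d, dtype=float)
    alive = np.ones(len(d), dtype=bool)
    score = d * d * np.diag(G)   # error if a single coefficient is rejected
    error = 0.0                  # error of the current rejection set
    rejected = []
    for _ in range(K):
        k = int(np.argmin(np.where(alive, score, np.inf)))
        new_error = score[k]
        alive[k] = False
        rejected.append(k)
        # reject-and-update: add the new row/column contribution
        score[alive] += 2.0 * d[alive] * d[k] * G[alive, k] + (new_error - error)
        error = new_error
    return rejected, error

def direct_error(d, G, rejected):
    """Brute-force double sum of eq. (6), used to verify the oracle."""
    r = np.asarray(rejected)
    d = np.asarray(d, dtype=float)
    return float(d[r] @ G[np.ix_(r, r)] @ d[r])
```

With a symmetric positive definite G, the error maintained incrementally after K rejections agrees with the brute-force evaluation of eq. (6) over the final rejection set.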

2.3 Local Oracles and Selective Refinement

A local oracle allows one to control the approximation locally in interesting regions. Here, the spatial localization of the basis functions enables us to accentuate particular wavelets while suppressing the influence of others. In this understanding, a straightforward local oracle consists of a weighting function which operates on the coefficients of the transform. A first approach to this is given in Gross et al. [12], who employed a Gaussian weighting. The basic idea is to assume some ellipsoidal weighting area as a local region of interest in the spatial domain. Localization of the wavelet transform enables the projection of scaled and translated versions of it into wavelet space, where individual coefficients are influenced. The initial version presented in [12], however, did not consider the support regions of individual basis functions, and can lead to some artifacts by rejecting wavelets ranging into the region of interest. Therefore, we extended the method with the computation of support and bounded wavelets.

An illustration for geometry based image representation is given in Fig. 5. In Fig. 5a, we computed a mesh of the IEEE Computer Society logo using the approach explained subsequently. Selectively refined meshes using Gaussian weighting are presented in Fig. 5b and Fig. 5c, respectively, where the areas of interest are located around different areas of the logo. For illustration, the mesh was kept artificially dense.

3 COMPRESSION STRATEGIES

This section explains in detail all essential processing steps associated with the definition of appropriate compression strategies. We first give an overview of the compression and decompression pipelines, which are hybrid in the sense that they combine both lossy and lossless methods depending on the type of feature to encode.

Data compression has a long tradition and has been studied intensively [20]. However, the individual requirements of a geometry based approach encouraged us to design the pipeline explained subsequently. For instance, in the context of lossy compression, issues of floating point data handling and quantization must be adapted to our needs, where the structure of the wavelet representation plays an important role. Furthermore, additional effort has to be spent on progressive settings. Since the preservation of constraints, such as iso- or boundary values or lines, is desirable in many applications, we propose a lossless compression strategy for these features.

3.1 Overview

Based on the wavelet precoding steps explained previously, we designed a compression/decompression pipeline as depicted in Fig. 6. The forward compression proceeds as follows: After extraction of constraints, the data set is normalized and wavelet-transformed, and both local and global approximation errors are controlled by the oracles introduced above. Sorting of the individual channels of the WT transforms the multidimensional array into a 1D data vector which is quantized and encoded subsequently. Line constraints, as extracted earlier, are fed into a lossless compression scheme. Conversely, the decompression pipeline inverts the procedure and prepares the data for subsequent geometric reconstruction.

Figure 4: Illustration of the conditional approximation error increment.

Figure 5: Illustration of the effect of a local oracle on a triangulated image. a) Initial triangulation. b), c) Local oracle centered at the upper and central area of the triangulation.

Page 6: In Copyright - Non-Commercial Use Permitted Rights ... · functions, such as isolines and isosurfaces more smoothly and pre-cisely than commonplace methods. Although it scales straightfor-wardly

3.2 Progressive Lossy Compression

Handling of Floating Point Values
First the data is normalized, i.e. the values are scaled to [0, 1]. We decided to carry this out before transformation, because post-normalization maps an offset onto small wavelet coefficients and is more difficult to handle upon compression.

In order to prepare the data for bandwise progressive transmission, we sort the multidimensional coefficient array into a 1D vector as displayed in Fig. 7. Here, the array is traversed from the most significant scaling function coefficients to the high frequency bands representing fine grained detail.

Note that the vector contains floating point values and has to beconverted into an array of integers.
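For a 2D transform, the coarse-to-fine channel sort can be sketched as follows. The nested-quadrant storage layout is an assumption on our part; the paper only prescribes the traversal order, scaling band first:

```python
import numpy as np

def sort_channels(coeffs, M):
    """Flatten a 2D wavelet coefficient array (nested-quadrant layout)
    into a 1D vector: scaling band phi^M first, then the three detail
    channels psi^{m,1..3} of each level, coarse to fine."""
    n = coeffs.shape[0]
    s = n >> M                          # side length of the coarsest band
    parts = [coeffs[:s, :s]]            # phi^M
    for m in range(M, 0, -1):           # level M (coarsest) ... level 1
        parts += [coeffs[:s, s:2*s],    # psi^{m,1..3}: the three
                  coeffs[s:2*s, :s],    # detail quadrants of level m
                  coeffs[s:2*s, s:2*s]]
        s *= 2
    return np.concatenate([p.ravel() for p in parts])
```

The output vector is simply a permutation of the input array, so the subsequent quantization and encoding stages see every coefficient exactly once, in transmission order.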

Quantization
The quantization step comprises a multiplication of the initial floating point coefficients with a factor of $2^{n-1}$, where $n$ represents the number of bits to be assigned to each coefficient. Subsequent rounding operations transform the floating point value into signed integer formats of size $n$. Let $c_{float}$ be a coefficient; we obtain its quantized version $c_{quant}$ by

$$c_{quant} = \mathrm{round}\left(2^{n-1} \cdot c_{float}\right). \qquad (8)$$

Note that $n$ strongly affects the quantization error, which appears as noise after reconstruction. Lossless quantization would typically require 23 bits on a 32 bit machine for single precision due to the IEEE-754 floating point format.
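Equation (8) and its approximate inverse in code (function names are ours; the rounding error is exactly the noise mentioned above):

```python
def quantize(c_float, n):
    """Eq. (8): map a normalized coefficient to a signed n-bit integer."""
    return round((2 ** (n - 1)) * c_float)

def dequantize(c_quant, n):
    """Approximate inverse of eq. (8); the rounding error of at most
    half a quantization step appears as reconstruction noise."""
    return c_quant / (2 ** (n - 1))
```

For example, an 8-bit budget maps 0.7 to 90 and back to 0.703125, an error below the step size of 2⁻⁸.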

Coding and Bit Allocation
The major task in the proposed compression is to convert the quantized integer vector into a bitstream of data. Therefore, we employ an entropy coding scheme in the spirit of JPEG [24]. Assuming that many of the coefficients will equal zero, encoding is carried out as follows: All nonzero coefficients are represented by 2-tuples, where the first element represents the number of bits of the second one. The second element contains the data value itself. All negative numbers are replaced by their absolute values, whereas in the case of a positive number the first bit is cleared. This enables the encoding of the sign. For example, to encode a value of 17 we get (5, 00001), whereas to encode -17 we obtain (5, 10001). Similarly, 5 is represented by (3, 001), whereas -5 is converted to (3, 101). Note specifically that since the number of bits is known in advance, the representation is unique and the additional encoding of the sign bit in the most significant bit is possible.
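The 2-tuple rule can be written out directly: since the most significant bit of a magnitude of known bit length is always 1, it is redundant and can be cleared for positive values and left set for negative ones. A sketch with our own naming:

```python
def make_2tuple(v):
    """Encode a nonzero integer: first element = bit count of |v|;
    second = |v| with its (always set) MSB cleared for positive v and
    kept for negative v, so the MSB doubles as a sign bit."""
    bits = abs(v).bit_length()
    body = abs(v) - (1 << (bits - 1)) if v > 0 else abs(v)
    return bits, format(body, '0%db' % bits)

def read_2tuple(bits, body):
    """Invert make_2tuple: restore the implicit MSB, read the sign."""
    value = int(body, 2) | (1 << (bits - 1))
    return -value if body[0] == '1' else value
```

This reproduces the worked examples above: 17 → (5, 00001), -17 → (5, 10001), 5 → (3, 001), -5 → (3, 101).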

Zero valued coefficients are encoded differently. Here we recommend a runlength coding up to a length of $2^5 = 32$, which generates a set of 32 new symbols. These symbols, together with the first part of our 2-tuples, are stored in a Huffman table which essentially has 64 entries. The Huffman symbols are as follows:

• Symbols 0 – 30: First element of a 2–tuple minus 1

• Symbol 31: ‘EOB’ (End Of Bitstream)

• Symbols 32 – 63: Runlength of 'zero' coefficients

The scheme proposed here balances the complexity of the Huffman table against the maximum number of zero coefficients (32) to be encoded in one symbol. The 'EOB' symbol usually allows the encoding of long sequences of 'zero' coefficients in the least significant positions of our data vector. However, it is only used where the Huffman table has not been built individually. The following pseudocode illustrates the procedural flow of the scheme:

    // N: total number of integer coefficients
    // d[i]: coefficient i
    // hufflen[i]: length of Huffman code for symbol i
    // huffcode[i]: Huffman code for symbol i
    // WriteBits(l, i): appends the last l bits of i to the bitstream
    // Make2Tuple(i, first, second): converts integer into 2-tuple
    i ← 0;
    while i < N do
        if d[i] = 0 then
            j ← 0;
            while j < 32 && i < N && d[i] = 0 do inc(i); inc(j); end;
            WriteBits(hufflen[j+31], huffcode[j+31]);
        else
            Make2Tuple(d[i], first, second);
            WriteBits(hufflen[first-1], huffcode[first-1]);
            WriteBits(first, second);
            inc(i);
        end;
    end;
    WriteBits(hufflen[31], huffcode[31]);
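A hedged Python transcription of this loop, emitting abstract Huffman symbols together with raw payloads instead of actual codewords (the Huffman table itself is omitted, as in the pseudocode; the 2-tuple computation is inlined):

```python
def encode_symbols(d):
    """Turn an integer coefficient vector into (symbol, payload) pairs:
    symbols 0-30 carry (bit count - 1) of a nonzero value plus its
    sign/magnitude payload, symbols 32-63 a run of 1-32 zeros, and
    symbol 31 terminates the stream (EOB)."""
    out, i, N = [], 0, len(d)
    while i < N:
        if d[i] == 0:
            j = 0
            while j < 32 and i < N and d[i] == 0:
                i += 1
                j += 1
            out.append((31 + j, None))            # run of j zeros
        else:
            bits = abs(d[i]).bit_length()
            body = abs(d[i]) - (1 << (bits - 1)) if d[i] > 0 else abs(d[i])
            out.append((bits - 1, (bits, body)))  # 2-tuple payload
            i += 1
    out.append((31, None))                        # EOB
    return out
```

A Huffman coder would then map the first element of each pair to its codeword and append the raw payload bits, exactly as WriteBits does above.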

For brevity we do not explain the construction of the Huffman table and refer to standard literature, such as [20]. However, in our framework the Huffman table is generated individually for each data set upon compression and is transmitted along with the data and header information, which is presented in Table 2. Since the size of the table is fixed to 64 entries, this does not lead to a notable overhead. Another solution would be the employment of a generic table, as in image compression, which, however, drops the compression gain and, due to the variety of geometric data, is much more difficult to construct. An example of encoding a sequence of coefficients is given in Fig. 8.

It should be stated again that progression is achieved channel by channel. That is, we transmit the low frequency scaling function coefficients first, followed by the wavelet coefficient channels in order of ascending frequency.

Figure 6: Compression pipeline including both lossless and lossy data compression. For decompression, all of the above steps have to be reversed.

Figure 7: Conversion of the multidimensional array into a 1D coefficient vector, depicted for a 2D WT.

[Figure 6 diagram: lossy path — Input Data → Constraint Extraction → Normalization → Wavelet Transform → Channel Weighting → Oracle → Channel Sorting → Quantization → Encoding → Merge Bitstream; lossless constraint path — (x, y, data) → Δ Coding → Arithmetic Coding → Merge Bitstream.]

[Figure 7 diagram: quadrant layout of the channels φ², ψ^{2,1}, ψ^{2,2}, ψ^{2,3}, ψ^{1,1}, ψ^{1,2}, ψ^{1,3} for M = 2, sorted into the 1D vector in direction of transmission.]



Some results of the lossy compression of a B-spline surface with different parameter settings are depicted in Fig. 9. In order to decompose the control points of this B-spline surface, we used the pipeline explained in detail in Section 4.3.

Finally, Table 1 compares the proposed encoding scheme (encode) with some of the most popular lossless compression methods, like zip, arc, urbon and compress. Note that information loss occurs only upon coefficient removal and quantization. Thus, all subsequent steps in our pipeline are lossless and can be compared with some standard algorithms. Results are given for a 3D volume data set, where the data was prequantized with 8 bits and 16 bits, respectively. Interestingly, even in lossless mode our method competes with popular algorithms in overall performance.

3.3 Compression of Constraints

In many cases it is desirable to compress spatially interesting features, such as boundary lines or isolines and individual vertices, in a lossless manner. We call these data constraints, since they usually constrain subsequent geometric reconstruction. In our pipeline we represent constraints as polylines or polygons. Fig. 10 illustrates the use of constraints in a digital terrain data set of the Swiss Alps. Here the geometric reconstruction, i.e. triangulation of the surface, was simplified up to a given bound. The constraints invoked by the polygon force the reconstruction to keep the triangulation dense, however. The constraint is imposed in terms of a terrain following polyline of a given extent.

Assuming the polyline constraint is represented as a stream of vertices of type (x, y, data), we employ a lossless compression strategy, as shown in Fig. 6.

The position and the data value are encoded separately using both delta and higher order arithmetic compression algorithms. For details see [20].
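As a sketch of the delta stage only (the subsequent higher order arithmetic coder of [20] is omitted, and integer-valued streams are assumed), ∆ coding stores the first value verbatim and then only successive differences:

```python
def delta_encode(values):
    """First value verbatim, then successive differences (delta coding)."""
    out = [values[0]]
    out.extend(b - a for a, b in zip(values, values[1:]))
    return out

def delta_decode(deltas):
    """Invert delta_encode by cumulative summation."""
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out
```

The point of the transform is that slowly varying coordinate streams produce many small, highly compressible differences for the entropy coder that follows.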

The resulting bitstream format is presented below in Fig. 11, where two headers are followed by the individual x-, y- and data streams.

4 VERTEX REMOVAL STRATEGIES

The following section is dedicated to vertex removal methods, which enable the client to compute geometric reconstructions adaptively and progressively from the incoming bitstream of data.

Figure 8: Example of encoding a sequence of coefficients and the resulting bitstream.

Figure 9: Compression of a B–spline surface with different quantizations. No additional point removal is performed (ε = 0). Some triangles degenerate due to quantization. a) 50% coefficients, 23 bit quantization, compression gain 1:1.33. b) 10 bit, 1:4. c) 7 bit, 1:5. d) 5 bit, 1:10. (Data source: Courtesy Advanced Visual Systems Inc.)

Table 1: Comparison of the proposed method (encode) with some popular compression algorithms (3D volume data set of Fig. 19 and Fig. 20: 128x64x64 voxels).

                      8 BIT QUANT.                16 BIT QUANT.
METHOD          50% COEFF.    10% COEFF.    50% COEFF.    10% COEFF.    CPU (IN S)
                (IN KB)       (IN KB)       (IN KB)       (IN KB)
ENCODE          568           245           1,835         466           2
ZIP             618           290           2,399         660           5
ARC             711           300           2,727         764           13
URBAN           501           233           1,888         496           69
COMPRESS        533           253           2,407         607           3
UNCOMPRESSED    2,248         2,248         4,496         4,496         0

[Figure 8 diagram: the wavelet coefficients 0.037, 0.147, 0.000, 0.000, 0.000, 0.439 (256 bits) are quantized with 12 bits to 76, 301, 0, 0, 0, 899, grouped into 2-tuples (7,76), (9,301), 0, 0, 0, (10,899) of Huffman symbol and value, and emitted as the bitstream 1011 0001100 11 000101101 100010 1011 0111101 (67 bits), where the marked codes are Huffman symbols, including a dedicated symbol for the value 'zero'.]

Figure 10: Illustration of constraints in a digital terrain data set. a) Interactive specification of the constraint path. b) Mesh after constraint insertion. (Data source: Courtesy Bundesamt für Landestopographie, Bern, Switzerland)

Figure 11: Data format of the bitstream for constraint compression. The individual header formats are given in Table 2.

Table 2: Header formats of bitstream.

GENERAL HEADER
NAME             TYPE         DESCRIPTION
magic_number     byte         ASCII '67'
stream_size      integer      total size
xValues_size     integer      size of x-stream
yValues_size     integer      size of y-stream
info             byte         misc info
width            float        constraint width
field_dims       integer[2]   mesh dimension
npoints          integer      # points of constraint
ndata            integer      # extracted data values
xFirstValue      float        first x-coordinate
yFirstValue      float        first y-coordinate

HEADER FOR ARITHMETIC CODING
arithFirstValue  float        first extracted data value
maxValue         float        maximal value
minValue         float        minimal value
nIntBits         integer      multiplication factor
huffFirstValue   integer      1st integer value

HEADER FOR ENCODE
iterationDepth   short        iteration depth
weightArr        float[]      array of weights
huffTable        integer[64]  Huffman table
quantBits        short        quantization (# of bits)
degree           short        degree of B–spline bases
minValue         float        minimum coefficient value
maxValue         float        maximum coefficient value
nDim             short        # of dimensions
dimArr           short[]      dimension array

[Figure 11 diagram: general header | AC header | x-stream | y-stream | data-stream, in direction of transmission.]


When seeking an appropriate algorithm, computational performance and invariance to the dimensionality are important considerations. Due to the rich literature on vertex removal in graphics and computational geometry we found that the well-known algorithm of Douglas et al. [8] is a good starting point. First, we briefly explain its initial form in a nonparametric 1D setting and illustrate its application in multiresolution representations. Here, special emphasis is given to the extension of the method for progressive reconstruction. Next, we generalize the method to multidimensional and parametric cases and give some examples of how it works. The versatility of the introduced method imposes no restriction on subsequent triangulation methods, which can range from constrained Delaunay [18] to fast look-up tables [12].

4.1 1D Settings

In order to construct a point removal strategy, let's first consider the 1D setting. Here, the problem reduces to finding a strategy for the reduction of line segments in piecewise linear approximations. Inspired by the algorithm of [8] we extended these ideas and modified the method to a recursive and progressive algorithm, illustrated in Fig. 12. It starts by connecting the first point of a curve, P0, with the last point Pk. All intermediate points representing the curve are compared against the line segment P0Pk and the point with the largest distance, for instance P2, is identified. If its distance exceeds a predefined threshold ε0, the vertex is considered important and labeled. We split the initial line segment in two halves, on each of which the algorithm can be applied recursively. Obviously, the quality of the removal can be controlled by the distance threshold. The advantage of this extension to the original method lies in the tree type refinement of the vertex analysis coming along with the recurrence relations.
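The recursive labeling can be sketched as follows (a hypothetical Python rendition for the nonparametric 1D setting, using the y-distance of Fig. 12a and assuming strictly increasing x-coordinates; all names are ours):

```python
def label_important(points, eps0, first=None, last=None, labels=None):
    """Recursively label vertices whose y-distance to the chord
    (P_first, P_last) exceeds eps0. `points` is a list of (x, y)
    tuples with strictly increasing x; endpoints are always kept."""
    if first is None:
        first, last = 0, len(points) - 1
        labels = [False] * len(points)
        labels[first] = labels[last] = True
    if last - first < 2:
        return labels
    (x0, y0), (x1, y1) = points[first], points[last]

    def ydist(i):
        x, y = points[i]
        t = (x - x0) / (x1 - x0)          # position along the chord
        return abs(y - (y0 + t * (y1 - y0)))

    split = max(range(first + 1, last), key=ydist)
    if ydist(split) > eps0:
        labels[split] = True              # important vertex: split and recurse
        label_important(points, eps0, first, split, labels)
        label_important(points, eps0, split, last, labels)
    return labels
```

Unlabeled vertices can subsequently be removed; the recursion tree is exactly the tree type refinement mentioned above.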

The distance can be computed in different ways, where, however, the computation of the vertical distance, such as depicted in Fig. 12c, is computationally much more expensive for general multidimensional settings. Therefore, we recommend computation of the y-distance (see Fig. 12a) approximating nonparametric data. The problem of parametric data sets will be discussed in upcoming sections.

4.2 Generalizations to Multiple Dimensions

Generalization of the method towards multidimensional nonparametric data is straightforward. Starting from an initial grid, as in Fig. 13, the algorithm seeks the vertex with the maximum distance and subdivides the field into 4 (in 2D) or 8 (in 3D) subcells on which the method is applied recursively. In these cases the distances to the bilinear and trilinear interpolants of the cell vertices are computed, respectively.
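The 2D distance test against the bilinear interpolant can be sketched as follows (our own helper; corner values are passed in the order f(i,j), f(i+1,j), f(i,j+1), f(i+1,j+1) and (u, v) are local cell coordinates in [0, 1]^2):

```python
def bilinear_distance(f00, f10, f01, f11, u, v, value):
    """Distance of `value` at local cell coordinates (u, v) to the
    bilinear interpolant of the four cell corner values."""
    interp = (f00 * (1 - u) * (1 - v) + f10 * u * (1 - v)
              + f01 * (1 - u) * v + f11 * u * v)
    return abs(value - interp)
```

The trilinear 3D test is the analogous expression with eight corner weights.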

Recalling the multiresolution B–spline approximation of the data motivates the extension of the algorithm towards a channelwise progressive point insertion. Therefore, the algorithm analyzes mesh vertices progressively and labels unimportant points as new data comes in. In 2D, for instance, the basic idea is to start from an initial vertex field of resolution 2^(m−M) in each direction, where M represents the maximum iteration. The vertices are provided by the scaling function approximation f^M(x, y) and are processed further by our algorithm. To define a distance metric, we assume a bilinear interpolant between the vertices which approximates the B–spline scaling function representation. If the difference signal ∆f^m(x, y) is received, the resolution is refined by 2 and all newly inserted vertices are checked conforming to our distance metric. If required, they will be inserted.

In order to compute the intermediate vertices for each iteration, an inverse wavelet transform has to be applied to all coefficients of a given iteration as soon as they are received and decompressed.

An apparent drawback of this approach, however, deserves some attention: Once a vertex is labeled as important there is no way to reject it in subsequent steps. Obviously, the detail signals added during progression influence the importance of each vertex. Therefore, we recommend an exponential alignment of the threshold to the iteration. That is, if m stands for the current iteration step, the associated threshold is computed by

ε(m) = ε0 · e^(M − m)    (9)

ε0: global threshold governing the point removal.
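Equation (9) can be evaluated directly; a one-line sketch (names are ours) shows that early, coarse iterations use a larger threshold which decays to ε0 at the final iteration m = M:

```python
import math

def threshold(m, eps0, M):
    """Per-iteration threshold eps(m) = eps0 * e^(M - m) of Eq. (9)."""
    return eps0 * math.exp(M - m)
```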

In our implementation we employ a tree type data structure to maintain the individual cells representing the mesh. The tree grows iteratively as progression proceeds. After each iteration, the leaves of the tree represent the remaining cells and can be triangulated with appropriate methods. Fig. 14 further elucidates the data representation.
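A minimal sketch of the 1D variant of this tree (Fig. 14) can be written as follows; the splitting convention, that an inserted vertex index becomes the first vertex of the right child, is our reading of the figure:

```python
class Segment:
    """Node of a 1D cell tree: a leaf stores an index interval
    [begin, end]; inserting an interior vertex splits the leaf."""

    def __init__(self, begin, end):
        self.begin, self.end = begin, end
        self.children = []

    def insert(self, index):
        if self.children:                        # descend to the leaf
            for child in self.children:
                if child.begin <= index <= child.end:
                    child.insert(index)
            return
        # split [begin, end] into [begin, index-1] and [index, end]
        self.children = [Segment(self.begin, index - 1),
                         Segment(index, self.end)]

    def leaves(self):
        """The remaining cells, i.e. the leaves, left to right."""
        if not self.children:
            return [(self.begin, self.end)]
        return [leaf for c in self.children for leaf in c.leaves()]
```

Starting from the root segment [1, 64], inserting vertex 29 yields the two leaves [1, 28] and [29, 64] of Fig. 14b; the leaves after full progression are the cells handed to the triangulator.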

For subsequent triangulations we employed the qhull library from [1] in 2D and 3D. Note that the N-tree type cell structure enables computation of very fast meshings using look-up tables,

Figure 12: Recursive algorithm assuming a smooth representation of the underlying curve: a) P2 has largest vertical distance. b) new approximation after insertion of P2. c) example for vertical distance measure. d) final result.


Figure 13: Extension towards multiple dimensions exemplified for nonparametric data: 2D version. The underlying B–spline patch is outlined in bold. A new vertex is inserted at position P and the distance is computed with respect to the bilinear interpolant of P_{i,j}, P_{i,j+1}, P_{i+1,j}, P_{i+1,j+1}.

Figure 14: Construction of a 1D tree data structure with 64 vertices and its growth during progression. The equivalent list structure is given below. a) First segment at the beginning. b) Insertion of P29 causes split into two segments. c) Final tree after inserting all points.



such as the ones presented in [12]. An example of progressive point removal is depicted in Fig. 18, where the mesh is refined gradually with each wavelet channel arriving at the client side.

4.3 Parametric Data Sets

A parametric version of the introduced algorithm can be constructed as elucidated below. For reasons of simplicity, we restrict our description to 2D parameter spaces; however, higher dimensional spaces can be easily generalized from that. The conceptual components of our pipeline are illustrated in the diagram of Fig. 15. We assume an initial parametric B–spline surface s(u, v) to be defined by its vector valued control points c^0_{ij} = (c^0_{ij,x}, c^0_{ij,y}, c^0_{ij,z})^T at iteration m = 0:

s(u, v) = Σ_{i=1..I} Σ_{j=1..J} c^0_{ij} φ^0_j(v) φ^0_i(u)    (10)

J · I: number of tensor product B–spline scaling functions.

Thus, compression has to proceed separately on the x, y and z components of the control vertices. Specifically, the WT and the oracles operate for now independently on the individual coordinates.

However, things become more complicated upon reconstruction, which operates again in parameter space (u, v) independently for the spatial coordinates s_x(u, v), s_y(u, v), and s_z(u, v).

As a result, three binary label fields are generated indicating the importance of individual vertices in parameter space for subsequent triangulation. Unfortunately, different results are obtained for s_x(u, v), s_y(u, v), and s_z(u, v), and we have to decide on the final removal. This decision is accomplished by applying a Boolean OR operator over the individual vertex fields, a motivation of which is given as follows: As explained earlier, the nonparametric version of our removal strategy holds for linear approximations in terms of triangulations and thus refines the mesh in spatial regions where the underlying function features nonlinear behavior. In the parametric setting similar criteria are valid for a linear approximation of a surface. The mesh has to be refined in those regions where the surface shows nonlinear behavior, that is, where the local curvature does not equal zero. This, however, happens if either s_x(u, v), s_y(u, v), or s_z(u, v) indicates nonlinearity. Obviously, the Boolean OR of the label fields considers a vertex important if one or more of the three coordinates behave locally nonlinear. The usefulness of this approach can be seen in Fig. 17, where a parametric surface is compressed and reconstructed with different parameter settings. Here, we end up with a dense mesh in spatial regions of high curvature and simplification occurs in regions of local planarity.
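The merge amounts to an elementwise Boolean OR over three equally sized label fields; a sketch with plain nested lists (our own helper):

```python
def merge_label_fields(lx, ly, lz):
    """Elementwise OR of the per-coordinate label fields: a vertex in
    parameter space is kept if any of s_x, s_y, s_z behaves nonlinearly
    there. All three fields must have identical dimensions."""
    return [[a or b or c for a, b, c in zip(rx, ry, rz)]
            for rx, ry, rz in zip(lx, ly, lz)]
```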

5 IMPLICIT INTERPOLATION

In the following section the problem of implicit reconstruction is addressed. In practice, implicit structures are mostly isolines or isosurfaces. An advantageous feature coming along with the multiresolution B–spline representation is the higher order continuous approximation of the underlying data. Although any computation is based on adaptive triangulations obtained from previous procedures, this property can be exploited to reconstruct implicit structures more precisely. For instance, in 2D data sets, piecewise linear representations of isolines can be recovered by immediate computation of the intersections along the triangle edges from the B–spline approximation. Similar procedures hold for isosurface reconstructions from tetrahedralizations. The cubic polynomials perform data smoothing and cancel out most of the jags and discontinuities commonplace in standard methods.

5.1 Isolines

In order to handle isolines we start from the initial B–spline description of the underlying 2D height function f(x, y) and obtain an implicit formulation by

f(x, y) = Σ_{i=1..I} Σ_{j=1..J} c^0_{ij} φ^0_j(y) φ^0_i(x) = τ    (11)

τ: isovalue.

Recalling the approximation properties of B–splines, we recommend to precompute an interpolation problem to get the appropriate coefficients c^0_{ij} related to the data samples to be interpolated. These types of interpolation are extensively investigated and relate tightly to inverse B–spline filtering problems which perform in linear time [23].

For (11) we provide a polyline approximation using a marching triangle-like look-up table which operates on a triangle mesh representing f(x, y). A slight extension of the look-up table enables the extraction of those parts of the surface which are interior to the isoline, that is, whose function values f(x, y) > τ. The vertices of the describing polygonal hull are given by the intersections of the isoline with triangle edges, such as shown in Fig. 16a.

Note that in cases 011, 101, and 110 the initial triangle representing the surface is split into 2 primitives. The intersection of the isoline, implicitly defined by (11), with the triangle edge is calculated using a binary search along the edge. Here we exploit the regional separation with respect to τ provided by the isoline.
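The binary search can be sketched as a bisection along the parameterized edge (our own helper; f is any smooth callable, and the endpoint values are assumed to lie on opposite sides of τ):

```python
def edge_intersection(f, p0, p1, tau, iters=40):
    """Locate the point on the edge p0-p1 where f crosses the isovalue
    tau, by bisection on the edge parameter t in [0, 1]."""
    s0 = f(*p0) - tau                      # sign reference at p0

    def lerp(t):
        return (p0[0] + t * (p1[0] - p0[0]),
                p0[1] + t * (p1[1] - p0[1]))

    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if (f(*lerp(mid)) - tau) * s0 > 0:
            lo = mid                       # crossing lies beyond mid
        else:
            hi = mid
    return lerp(0.5 * (lo + hi))
```

Evaluating f via the B–spline representation rather than a linear interpolant is what yields the smoother isolines discussed below.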

Figure 15: Illustration of the conceptual components for parametric compression and reconstruction.

[Figure 15 diagram: the control mesh {c_ij} of a parametric surface in (u, v) parameter space is split into its components {c_ij,x}, {c_ij,y}, {c_ij,z}; each passes through WT, oracle and compression, and the results are merged into the bitstream. On the client side, decompression and removal are performed per component, the three label fields in (u, v) are combined by a Boolean OR, and meshing yields an adaptive mesh in parameter space and a triangulated surface in the spatial domain.]


Fig. 18 illustrates the performance of the method on digital terrain data. We employed progressive triangular approximations of different quality to compute both isolines and to extract interior regions. Comparing the isolines computed by our method with those of a linear interpolation reveals the superiority of the approach.

5.2 Isosurfaces

Similar relations hold for the generation of isosurfaces and interior or exterior volumes. Here we start from a B-spline volume approximation of f(x, y, z):

f(x, y, z) = Σ_{i=1..I} Σ_{j=1..J} Σ_{h=1..H} c^0_{ijh} φ^0_h(z) φ^0_j(y) φ^0_i(x) = τ    (12)

H · J · I: number of tensor product volume B–splines.

After solving the initial B–spline interpolation problem the isosurface is obtained by a marching tetrahedron [2] algorithm, where the intersections of the surface with the tetrahedral edges are computed using the binary search on the B–spline volume. Again, a little work on the look-up table enables one to extract interior or exterior volume segments which are important for many applications, such as finite element simulations [15]. Fig. 16b exploits symmetry and illustrates the 5 out of 16 cases arising upon triangulation, giving the connectivity for the extraction of interior volumes.

Note especially that individual tetrahedra may split up into three primitives for representing the bounding surface of the interior volume.
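Which of the 5 base cases applies is determined by the signs of the four corner values relative to the isovalue; a sketch (assuming the corners have already been permuted into the canonical order of Fig. 16b, negative corners first — a full implementation would also derive that permutation):

```python
def tet_case(values, tau):
    """Base case index 0..4 of a tetrahedron: the number of corner
    values above the isovalue tau, with corners in canonical order."""
    return sum(v > tau for v in values)
```

Case 0 produces no geometry, case 4 keeps the whole tetrahedron, and cases 1–3 are triangulated via the connectivity table.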

The results given in Fig. 19 and Fig. 20 illustrate the approximation behavior of the method, where standard marching cubes and marching tetrahedron with linear interpolation are contrasted to the B-spline binary search algorithm. Although intersection computation is more expensive, we observe that the resulting isosurface is recovered more precisely and smoothly using the new approach.

6 RESULTS

In this section we demonstrate the versatility of the introduced compression and reconstruction framework by investigating its performance on different data sets. First, Fig. 17 shows a series of triangulations of a parametric interpolating B-spline surface generated in accordance with the diagram in Fig. 15. The initial control mesh of 100 × 100 points was decomposed and reconstructed using 60% of the coefficients. By fixing the quantization to 10 bits we achieved a compression gain of 1:5. We observe that the quality of the reconstruction is governed by the parameter ε0. The triangulation was computed using the qhull library from [1]. Interestingly, our method generates numerous slivers, long thin triangles, which are mostly located along the shaft of the object. This effect can be explained easily by analyzing the curvature in those regions. We find that it differs significantly in u and v direction in parameter space and therefore long, thin structures provide an efficient planar approximation.

Further progressive mesh refinements and isoline reconstructions are depicted in Fig. 18 for a digital terrain data set of the Swiss Alps, Matterhorn region. The initial grid consists of 701 × 481 vertices and 168,000 triangles. We applied a forward compression by keeping only 5% of the coefficients at a decomposition level of M = 7, which corresponds to a compression gain of 1:40 at 10 bits quantization. The series reveals how the mesh is refined progressively and adaptively upon reconstruction with each incoming wavelet channel. The extracted isolines were computed using the method of Section 5.1 and are contrasted against the bilinear interpolations of Fig. 18h and Fig. 18i. By comparing them to Fig. 18a and Fig. 18b we note that artifacts coming along with linear interpolation are avoided in our approach using the binary search technique. Further pictures from this series illustrate the extraction of interior and exterior regions and variations of the compression gain. Especially Fig. 18l illustrates the quality of the approximation at a compression gain of 1:100.

Figure 16: Polyline approximation of isolines: a) Isoline as computed by intersections with the triangle edges and look-up table to extract interior or exterior parts of the surface. b) Generation of isosurfaces and interior volumes using a marching tetrahedron algorithm: 5 basis cases arising upon triangulation. The connectivity table for generation of interior volumes is presented in Table 3.


Table 3: Connectivity table for generation of interior volumes (see Fig. 16b).

NO.  V0  V1  V2  V3   TETRAHEDRA (VERTEX LIST)
0    -   -   -   -    {}
1    -   -   -   +    { {v4, v7, v8, v3} }
2    -   -   +   +    { {v8, v5, v6, v2}, {v5, v8, v7, v3}, {v8, v5, v2, v3} }
3    -   +   +   +    { {v7, v5, v9, v2}, {v3, v1, v2, v7}, {v7, v2, v9, v1} }
4    +   +   +   +    { {v0, v1, v2, v3} }



A set of 3D isosurface reconstructions is presented in Fig. 19. Here a 128 × 64 × 64 voxels subset of the CT–VHD (Fig. 20a) was decomposed, compressed and tetrahedralized adaptively to obtain a fraction of the skull surface. We fixed the parameters to M = 4 and achieved a compression gain of 1:15 at 20 bits. In this picture our method is compared to a standard marching cubes technique and to a trilinear interpolating marching tetrahedron, as included in the libraries of AVS/Express 3.0. Again, the higher order interpolation provided by the cubics smooths out most artifacts striking in the reconstruction of Fig. 19c, where continuity is lost and the surface "breaks up". Moreover, we avoid even some of the "voxel-like" artifacts of the marching cubes reconstruction shown in Fig. 19b.

Reconstructions of interior and exterior volumes from the same data set are depicted in Fig. 20b and Fig. 20c. We observe that our point removal strategy keeps the tetrahedra dense in those regions where the underlying volume features high spatial frequencies. The adaptive tetrahedralizations were computed using the look-up table extensions proposed in Section 5.2. This allows one to extract anatomic substructures for further processing, such as FEM. Finally, the computing times of some of the examples are listed in Table 4.

7 CONCLUSION AND FUTURE WORK

We presented a versatile framework for multiresolution compression and reconstruction of non-parametric, parametric and implicit data which is based on wavelet approximations. Although being restricted to uniform grids, the scheme handles many real world data types and features numerous advantageous properties, such as both lossless and lossy compression or progressive and selective mesh refinement. However, the current implementation only supports channelwise progressive mesh refinement. Hence, future research activities have to comprise the development of a true "coefficient-wise" progressive mesh refinement procedure, which improves the approximation as data comes in. The extension of the framework toward nonuniform sample grids is still interesting, in spite of the fact that appropriate WTs have already been introduced to the community [3]. In addition, the inclusion of 2D and 3D texture compression and reconstruction is an important issue for ongoing investigations.

8 ACKNOWLEDGMENT

This research was supported in parts by the ETH research council under grant No. 41–2642.5. We are grateful to the Bundesamt für Landestopographie, Bern, Switzerland, for the digital terrain data and to the NLM for providing the VHD set. Our special thanks to R. Kisseleff, U. Hengartner, J.–P. Hofstetter, and P. Ruser for implementing parts of the framework.

9 REFERENCES

[1] C. B. Barber, D. P. Dobkin, and H. Huhdanpaa. "Qhull," 1996. http://www.geom.umn.edu/locate/qhull.

[2] J. Bloomenthal. "An implicit surface polygonizer." In P. Heckbert, editor, Graphics Gems IV, pages 324–349. Academic Press, Boston, 1994.

[3] M. D. Buhmann and C. A. Micchelli. "Spline prewavelets for non-uniform knots." Numerische Mathematik, 61(4):455–474, May 1992.

[4] C. Chui. An Introduction to Wavelets. Academic Press, 1992.

[5] A. Cohen, I. Daubechies, and J. Feauveau. "Bi-orthogonal bases of compactly supported wavelets." Comm. Pure and Applied Mathematics 45, pages 485–560, 1992.

[6] T. H. Cormen, C. E. Leiserson, and R. L. Rivest. Introduction to Algorithms. MIT Press, Cambridge, Massachusetts, 1994.

[7] M. F. Deering. "Geometry compression." In R. Cook, editor, SIGGRAPH 95 Conference Proceedings, Annual Conference Series, pages 13–20. ACM SIGGRAPH, Addison Wesley, Aug. 1995. Held in Los Angeles, California, 06–11 August 1995.

[8] D. Douglas and T. Peucker. "Algorithms for the reduction of the number of points required to represent a digitized line or its caricature." The Canadian Cartographer, 10(2):112–122, December 1973.

[9] M. Eck, T. DeRose, T. Duchamp, H. Hoppe, M. Lounsbery, and W. Stuetzle. "Multiresolution analysis of arbitrary meshes." In R. Cook, editor, SIGGRAPH 95 Conference Proceedings, Annual Conference Series, pages 173–182. ACM SIGGRAPH, Addison Wesley, Aug. 1995. Held in Los Angeles, California, 06–11 August 1995.

[10] S. J. Gortler, P. Schroder, M. F. Cohen, and P. Hanrahan. "Wavelet radiosity." In Computer Graphics Proceedings, Annual Conference Series, 1993, pages 221–230, 1993.

[11] M. H. Gross. "L2 optimal oracles and compression strategies for semiorthogonal wavelets." Technical Report 254, Computer Science Department, ETH Zürich, 1996. http://www.inf.ethz.ch/publications/tr200.html.

[12] M. H. Gross, O. G. Staadt, and R. Gatti. "Efficient triangular surface approximations using wavelets and quadtree data structures." IEEE Transactions on Visualization and Computer Graphics, 2(2):130–143, June 1996.

[13] P. S. Heckbert and M. Garland. "Survey of polygonal surface simplification algorithms." Technical report, CS Dept., Carnegie Mellon U., to appear. http://www.cs.cmu.edu/~ph.

[14] H. Hoppe. "Progressive meshes." In H. Rushmeier, editor, Computer Graphics (SIGGRAPH '96 Proceedings), pages 99–108, Aug. 1996.

[15] R. M. Koch, M. H. Gross, F. R. Carls, D. F. von Büren, G. Fankhauser, and Y. I. H. Parish. "Simulating facial surgery using finite element models." In H. Rushmeier, editor, Computer Graphics (SIGGRAPH '96 Proceedings), pages 421–428, Aug. 1996.

[16] P. Lindstrom, D. Koller, W. Ribarsky, L. F. Hodges, N. Faust, and G. A. Turner. "Real-time, continuous level of detail rendering of height fields." In H. Rushmeier, editor, Computer Graphics (SIGGRAPH '96 Proceedings), pages 109–118, Aug. 1996.

[17] J. M. Lounsbery. Multiresolution Analysis for Surfaces of Arbitrary Topological Type. PhD thesis, University of Washington, Seattle, 1994.

[18] F. P. Preparata and M. I. Shamos. Computational Geometry. Springer, New York, 1985.

[19] E. Quak and N. Weyrich. "Decomposition and reconstruction algorithms for spline wavelets on a bounded interval." Applied and Computational Harmonic Analysis, 1(3):217–231, June 1994.

[20] K. Sayood. Introduction to Data Compression. Morgan Kaufmann, San Francisco, 1996.

[21] E. J. Stollnitz, T. D. DeRose, and D. Salesin. Wavelets for Computer Graphics. Morgan Kaufmann Publishers, Inc., 1996.

[22] E. J. Stollnitz, T. D. DeRose, and D. H. Salesin. "Wavelets for computer graphics: A primer." IEEE Computer Graphics and Applications, 15(3):76–84, May 1995 (part 1) and 15(4):75–85, July 1995 (part 2).

[23] M. Unser, A. Aldroubi, and M. Eden. "Fast B-spline transforms for continuous image representation and interpolation." IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(3):277–285, Mar. 1991.

[24] G. K. Wallace. "The JPEG still picture compression standard." Communications of the ACM, 34(4):30–44, Apr. 1991.

Table 4: Computing times for various steps of our compression and reconstruction framework.

STEP            B-SPLINE SURFACE (FIG. 17)   DTM (FIG. 18)   VHD (FIG. 19/20)
Compression     0.1 sec.                     1.5 sec.        2 sec.
Decompression   0.1 sec.                     1.5 sec.        2 sec.
Point Removal   1 sec.                       2 sec.          2 sec.



Figure 17: Compression and reconstruction of a parametric B–spline surface for different levels of linear approximation. a) ε0 = 0, 19,602 triangles (100%). b) ε0 = 0.005, 43% triangles. c) ε0 = 0.01, 34% triangles. d) ε0 = 0.02, 22% triangles.

Figure 18: Extraction of isolines and interior surfaces from a digital terrain model of the Swiss Alps and progressive mesh refinement: 3 isolines are extracted for τ = 120, τ = 125 and τ = 130, respectively. a) ε0 = 0.01, wavelet channel 1, 0.1% triangles. b) Channel 2, 0.35% triangles. c) Channel 3, 1.15% triangles. d) Channel 4, 2.97% triangles. e) Channel 5, 5.80% triangles. f) Channel 6, 8.88% triangles. g) Channel 7, 15.83% triangles. h), i) Standard isoline algorithm for channels 1 & 2. j) DTM split into interior and exterior regions at τ = 130. k) 5% coeff., compression gain 1:33, ε0 = 0.0035, 62% triangles. l) 1% coeff., compression gain 1:100, ε0 = 0.0035, 62% triangles.


Figure 19: Extraction of isosurfaces from volume data. Isovalue τ = 75 (skull). a) Our method, 20% coefficients, ε0 = 0.1, 124,343 tetrahedrons, compression gain 1:15, 51,970 triangles. b) Marching Cubes, same compression settings, 540,800 cells, 40,522 triangles. c) Marching Tetrahedron (as provided by AVS/Express 3.0), same settings as a). Data source: Visible Human Project. Courtesy National Library of Medicine.

Figure 20: Extraction of interior and exterior volumes. a) Initial CT volume data set with 2,704,000 tetrahedrons. b) Interior and exterior volumes, τ = 42 (skin surface), 133,091 + 34,290 tetrahedrons. c) Interior volume, τ = 75 (skull), 124,491 tetrahedrons.

Figure 18: (Continued) Extraction of isolines and interior surfaces from a digital terrain model of the Swiss Alps and progressive mesh refinement: 3 isolines are extracted for τ = 120, τ = 125 and τ = 130, respectively. a) ε0 = 0.01, wavelet channel 1, 0.1% triangles. b) Channel 2, 0.35% triangles. c) Channel 3, 1.15% triangles. d) Channel 4, 2.97% triangles. e) Channel 5, 5.80% triangles. f) Channel 6, 8.88% triangles. g) Channel 7, 15.83% triangles. h), i) Standard isoline algorithm for channels 1 & 2. j) DTM split into interior and exterior regions at τ = 130. k) 5% coeff., compression gain 1:33, ε0 = 0.0035, 62% triangles. l) 1% coeff., compression gain 1:100, ε0 = 0.0035, 62% triangles.