
THE JOURNAL OF VISUALIZATION AND COMPUTER ANIMATION, VOL. 7: 211-227 (1996)
CCC 1049-8907/96/040211-17 © 1996 by John Wiley & Sons, Ltd.
Received March 1995; Revised October 1995

Ray Tracing with Extended Cameras

HELWIG LÖFFELMANN AND EDUARD GRÖLLER
Institute of Computer Graphics (WWW: http://www.cg.tuwien.ac.at/), Technical University of Vienna, A-1040 Karlsplatz 13/186/2, Austria (email: {helwig,groeller}@cg.tuwien.ac.at)

SUMMARY

This paper discusses an extended camera model for ray tracing. As an alternative to standard camera modules, an abstract camera machine is presented. It represents a framework for extended cameras which is based on standard mapping functions. These are integrated within the abstract camera machine to complete the camera function, which generates rays out of image locations (pixels). Modelling the camera function as an abstract camera machine in combination with standard mapping functions opens a wide field of applications and greatly simplifies the specification of extended cameras. Using extended cameras it is easy to produce special and artistic effects, e.g. a local non-linear zoom of especially interesting regions while still retaining an overview of the whole scene. Overviews of given scenes can be modelled, and several views of the same object can be integrated into one picture. Several examples of extended cameras designed with the abstract camera machine are discussed, and colour plates made with these cameras are presented.

KEY WORDS: ray tracing; camera model; projections; image distortions

1. INTRODUCTION

Ray tracing is a well known and useful technique for rendering photorealistic images. Typically a pinhole camera model, i.e. a perspective projection, is used for rendering. Many extensions have been made to ray tracing in the past,1,2 but only extensions previously applied to the camera model are of special interest for this work.

1.1. Previous work

Potmesil and Chakravarty introduced a method to simulate a lens and an aperture.3 They proposed a two-pass procedure with a standard hidden surface algorithm, e.g. ray tracing, as the first step. A separate focus post-process blurs the image to simulate the application of a lens and an aperture. In addition to the simulation of a lens and an aperture, their method can be used to simulate blurring due to translational motion. Unfortunately this method requires costly calculations.

Chen proposed an advanced method to realize similar effects.4 He also uses a standard hidden surface process as a first step to calculate an intermediate picture. But instead of modelling the electromagnetic wave phenomenon as done by Potmesil and Chakravarty, he uses the light particle theory to model a lens and an aperture as a post-process.


This speeds up the calculations significantly and solves the major problem of the previously described approach. Later Chen presented a new algorithm for highly defocused scenes,5 and gave an efficient algorithm for scenes with a uniform background.

Another extension to ray tracing is the inclusion of the depth of field effect by simulating the use of a lens directly within the ray tracer. Cook, Porter and Carpenter discussed this approach in their well known paper on distributed ray tracing.6 Stochastically distributed rays start from arbitrary positions on the lens and run through a point of the focal plane, which is determined by the corresponding image location. Afterwards the contributions of several rays are integrated to determine the colour of one pixel.

Wyvill and McNaughton extended the camera model for ray tracing in a more general way.7 They depart from the goal of making photorealistic images and concentrate on the opinion of many artists: people creating pictures want to say more about their point of view than can be said by a realistic image like a photograph. They give a principal characterization of the optical model and present some specific examples of this generalized camera function. For instance, they present a caricature and a fish-eye view. Finally they point out the possibility of making animations by changing camera parameters over time.

Acquisto and Gröller introduced the concept of centres of interest (COINS) as another extension to ray tracing.8 COINS are located within interesting regions of the scene, and rays near a COIN are bent towards the corresponding point. This automatically generates a local, non-linear zoom effect in the interesting regions.

IMAX and OMNIMAX projection systems require pre-distorted pictures. Nelson Max described the calculations which are necessary to adjust a standard renderer for these projections.9 Greene and Heckbert use an unmodified standard renderer in a first step and apply a technique similar to ray tracing as a post-process to produce IMAX or OMNIMAX images.10 At the opera a slide projector is often used to simulate the background of a scene. Dorsey, Sillion and Greenberg presented a method to pre-distort slides, which is necessary to produce undistorted images on the backdrop.11

The CAVE of Cruz-Neira, Sandin and DeFanti is an approach in the area of virtual reality to provide a good feeling of immersion for the user. Non-typical variations of perspective projections are necessary for this approach.12 Cartographic projections try to handle the problem that a sphere cannot be mapped onto a plane without any distortions. Alan Paeth compiled a collection of cartographic projections for computer graphics.13 M. C. Escher is an artist whose work still fascinates many scientists of various research areas. He intensively explored projections and spaces of different dimensions.14

1.2. Our approach

Similar to Wyvill and McNaughton we dropped the restriction of producing realistic images only.7 With our approach we investigated the wide area of unusual and curious projections, which are often very different from the standard camera models (parallel or perspective projection). Extending the camera model, and thus acting in object space (3D), allowed us to overcome some of the restrictions of image composition techniques, which are always applied to pictures in image space (2D). This camera extension can in most cases be seen as a complex projection of the object space onto a projection object (surface).

In this paper we show how to decompose this complex projection function into a sequence of simple mappings, which increases flexibility and intuition during the camera specification process. Furthermore, various well known mapping functions are investigated to explore their usability as sub-modules for our structured extended camera model.

Pictures by M. C. Escher, such as 'Balcony' or 'Picture Gallery',14 were major inspirations for the work presented in this paper. Thus one aim of our approach was to realize local zoom effects with extended cameras and ray tracing. As another example, we wanted to produce overview cameras that integrate a look-around into one image. Furthermore, applications of extended cameras in the field of virtual reality can be thought of.

Early results revealed another aspect of these extended cameras. Many of the produced images are aesthetic due to their unreality. They remind the viewer of pictures by modern painters such as Pablo Picasso, who integrated views of the same object from several points of view into one painting. The beauty of images produced with extended cameras can be an additional motivation and inspiration for applications as well.

2. THE ABSTRACT CAMERA MACHINE

The concept of an extended camera is divided into two layers. The abstract camera machine represents the theoretical part of this system, whereas the second layer consists of various specific transformations. The abstract camera machine by itself cannot be used for rendering. Appropriate transformations, which are described later, have to be chosen and plugged into ports specified for the abstract camera machine. These transformations close the path of mappings within this abstract module, which generates rays out of 2D image locations. See Figure 1 for an illustration of this principle.

Figure 1. The abstract camera machine as a layer between a ray tracer and various transformations


2.1. Interface specifications

Image locations are represented as 2D Cartesian co-ordinates, and rays by starting points in 3D and their directions. The direction of a ray could be concisely specified by two angles in a spherical co-ordinate system (2D). Most ray tracers, however, represent a direction by a 3D vector instead of two angles, and therefore the following camera interface definition was chosen for our camera extension:

getRay: ℝ² → ℝ³ × ℝ³


2.2. Co-ordinate systems

To abstract a generalized camera model from the standard one, it is useful to first extract the co-ordinate systems that are involved in such a camera function. Input and output of a camera tie us to given co-ordinate spaces: a 2D image space and a 3D object space. Consequent classification of intermediate values reveals another 2D co-ordinate system for parameter values, which are used to parameterize the objects in object space. We use the following co-ordinate systems within our model:

1. DCS: usually the output of a renderer is a raster image with a certain resolution. Lines and rows are indexed, and therefore each pixel (even each location within a pixel) can be addressed using a 2D device co-ordinate system (DCS) value.

2. OCS: casting rays into a scene requires direct interaction between rays and objects (e.g. intersection calculations) in object space. Thus rays are always given by 3D object co-ordinate system (OCS) values.

3. PCS: an important part of the extended camera function is the parametrization of a projection object (OCS). Since parameter space and image space are not directly correlated, decoupled 2D parameter co-ordinate system (PCS) values are used for the parametrization.

2.3. A generalization of the camera module

The task of a camera module is to generate rays out of image locations. Thus at least one ray, composed of two OCS values (the starting point and the direction vector), has to be generated for each DCS image location. For each of the two components of a ray the image location is transformed into a parameter value (PCS) and afterwards into an OCS value. This yields two sequences of transformations within the generalized camera model (see Figure 2).

Figure 2. A generalized camera model for ray tracing

The sequence which generates the starting points of rays can be specified intuitively. An OCS object, named the eye object, specifies a projection object. It is parametrized by PCS parameter values, which are generated from the DCS image locations. This part of the ray generation will be referred to as the getEyePnt procedure.

The specification of the second part of the camera can be done in several variations. Ray directions may be specified independently from the eye object (e.g. parallel projection directions), or in some relation to it (e.g. surface normals as ray directions). Three main methods of producing the ray directions can be categorized (a code sketch follows the list):

1. One is to use a single OCS object for the generation of both the starting points and the ray directions (e.g. surface normals as ray directions).

The other methods use separate OCS objects for generating starting points and ray directions.

2. With the absolute method the ray directions are determined solely from a second OCS object (direction object).

3. With the relative method ray directions are generated as the difference vector between the direction object point and the eye object point.

The procedure which generates the direction object points is named getDirPnt. getDirVec is the function which produces ray directions out of two OCS object points. It can easily be seen that getEyePnt and getDirPnt are syntactically quite similar, since both generate an OCS object point out of a DCS image location. The third component, getDirVec, completes this generalized model of an extended camera.
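A plausible way to wire these three procedures together is sketched below, assuming the Ray and vector types from the previous sketch; the relative flag switches between methods 2 and 3 (reading the absolute method as taking the direction object point itself as the direction vector), and all names are illustrative:

```cpp
#include <array>

using Vec2 = std::array<double, 2>;
using Vec3 = std::array<double, 3>;
struct Ray { Vec3 origin, direction; };

// Both getEyePnt and getDirPnt map a DCS image location to an OCS point;
// getDirVec combines the two points into a ray direction.
struct GeneralizedCamera {
    virtual Vec3 getEyePnt(const Vec2& dcs) const = 0;  // eye object point
    virtual Vec3 getDirPnt(const Vec2& dcs) const = 0;  // direction object point
    virtual ~GeneralizedCamera() = default;

    bool relative = true;  // method 3 (relative) vs. method 2 (absolute)

    Vec3 getDirVec(const Vec3& eye, const Vec3& dir) const {
        if (!relative) return dir;  // absolute: direction point used directly
        // relative: difference vector between direction and eye object points
        return { dir[0] - eye[0], dir[1] - eye[1], dir[2] - eye[2] };
    }

    Ray getRay(const Vec2& dcs) const {
        Vec3 eye = getEyePnt(dcs);
        return { eye, getDirVec(eye, getDirPnt(dcs)) };
    }
};
```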

2.4. The extended camera

In this section the internal structure of the procedures getEyePnt and getDirPnt is discussed. Both mappings are composed of four consecutive mappings (picture mapper, parameter space transformer, object generator, and object space transformer). See Figure 3 for an illustration of this extended camera concept. The picture arranger is used to combine several cameras into one compound camera module.

The picture mapper

This DCS-PCS mapping function decouples device co-ordinates and parameter values. Thus the picture mapper provides device independence as one feature of extended cameras. An affine transformation composed of scaling, reflection, and translation operations is sufficient for this purpose.
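For example, such an affine picture mapper could map the pixel indices of a W x H raster onto the parameter square [0,1]^2 with a vertical flip; this is one possible convention, not necessarily the paper's:

```cpp
#include <array>

using Vec2 = std::array<double, 2>;

// A picture mapper: an affine DCS -> PCS map built from scaling,
// reflection and translation. All resolution-dependent details stay here,
// so the modules behind it work on device-independent PCS values.
struct PictureMapper {
    int width, height;  // raster resolution

    Vec2 toPCS(const Vec2& dcs) const {
        double u = dcs[0] / width;         // scale columns to [0,1]
        double v = 1.0 - dcs[1] / height;  // scale rows and flip vertically
        return { u, v };
    }
};
```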

Figure 3. The internal structure of the abstract camera machine; figure similar to that in Reference 15

The parameter space transformer

This PCS-PCS co-ordinate system transformer can be used to specify more general parametrizations of those objects that are produced by the subsequent object generator. Reparametrizations and non-linear distortions can easily be achieved using this transformation. Since a parameter space transformer maps PCS into itself, transformations of this kind can be iterated or combined into a sequence. Nevertheless the parameter space transformer is optional in the getEyePnt and getDirPnt procedures.
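A minimal sketch of such a composable PCS transformer, with an illustrative interface that makes the iteration/sequencing property explicit:

```cpp
#include <array>
#include <memory>
#include <vector>

using Vec2 = std::array<double, 2>;

// A PCS -> PCS transformer: since domain and range coincide, any
// sequence (or iteration) of transformers is again a valid transformer.
struct PcsTransformer {
    virtual Vec2 apply(const Vec2& pcs) const = 0;
    virtual ~PcsTransformer() = default;
};

struct PcsSequence : PcsTransformer {
    std::vector<std::unique_ptr<PcsTransformer>> steps;

    Vec2 apply(const Vec2& pcs) const override {
        Vec2 p = pcs;
        for (const auto& step : steps) p = step->apply(p);  // run the chain
        return p;
    }
};
```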

The object generator

The parametrization of an OCS object is defined by a mapping from PCS into OCS. It generally specifies a parametrized surface object in object space, which is then used to define the starting point and/or the direction vector of a ray. A major topic when using an object generator is the choice of the parametrization. Different parametrizations, which can be achieved either by using a parameter space transformer or by specifying them directly within the object generator, have a great influence on the resulting image.
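As a simple example of an object generator (our illustration, not a camera from the paper), a sphere with the 'longitude and latitude' parametrization mentioned later in Section 5.6:

```cpp
#include <array>
#include <cmath>

using Vec2 = std::array<double, 2>;
using Vec3 = std::array<double, 3>;

// An object generator: maps PCS parameters (u, v) in [0,1]^2 onto a
// parametrized surface in OCS -- here a 'longitude and latitude' sphere.
struct SphereGenerator {
    double radius = 1.0;

    Vec3 generate(const Vec2& pcs) const {
        const double pi = std::acos(-1.0);
        double phi   = 2.0 * pi * pcs[0];    // longitude
        double theta = pi * (pcs[1] - 0.5);  // latitude
        return { radius * std::cos(theta) * std::cos(phi),
                 radius * std::cos(theta) * std::sin(phi),
                 radius * std::sin(theta) };
    }
};
```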

The object space transformer

This OCS-OCS co-ordinate system transformer is introduced mainly for two reasons. One is that users want to specify a camera at some specific (normalized) location and do the exact positioning of the camera geometry afterwards; this is also useful for specifying a camera pan. The other reason is that, using an object space transformer, object points can be modified in a locally varying way (e.g. local zoom using COINS8). Similar to the parameter space transformer, the object space transformer can be iterated or combined into a sequence within getEyePnt and getDirPnt. Obviously this transformation is optional as well.

The picture arranger

An extended camera as defined so far specifies one complex projection of a scene onto a projection object. Additionally, the picture arranger (DCS-DCS) is introduced to enable the use of more than one extended camera for producing a compound image. The major advantage of using the picture arranger, instead of precomputing all extended camera images first and composing the final image afterwards, is that only those parts of the images are computed that effectively contribute to the final image.

3. TRANSFORMATION MODULES

As already stated above, the abstract camera machine is based on a set of standard transformations. They are necessary to complete the sequence of mappings within the abstract camera machine to produce rays out of image locations (see Figure 1). Some mappings that are useful as picture mapper, parameter space transformer, object generator, object space transformer and/or picture arranger are summarized in this section.

3.1. PCS and OCS transformers

A parameter space transformer and an object space transformer are used within getEyePnt and getDirPnt to transform the PCS or OCS, respectively. Several classes of transformations can be used for such a transformer (note that sequences and/or iterations of these transformers can be used as a PCS or OCS transformer themselves, since all these transformations map a co-ordinate system into itself):

- Affine transformations: 2D and 3D affine transformations are often used in computer graphics. For getEyePnt and getDirPnt the use of affine transformations enables a position-, orientation-, and size-independent representation of other, more complex transformations (e.g. a local zoom function).
- Local zoom as a parameter space transformer: zooming can be realized by a parameter space transformer before the parametrization is done. Similar to Escher's picture 'Balcony', some portions of the picture are enlarged without losing the overview of the scene. This effect can be achieved by enlarging the corresponding parameter area. Analytic functions can be specified which realize a non-linear co-ordinate space transformation usable as a local zoom function (see the sketch after this list).
- More general transformations as parameter space transformers: grid distortion methods and warping/morphing techniques are additional possibilities for transforming the PCS.16,17 More modelling flexibility is provided by specifying the transformation at some special locations; interpolation or approximation is used between these user-specified seed points.
- COINS as object space transformer: although a parameter space transformer might be quite useful for realizing a local zoom effect, an object space transformer can be used for this purpose as well. Especially the principle of COINS can be modelled as an object space transformer.8
- Grid distortions as object space transformer: many of the 2D grid distortion methods can easily be adapted to 3D and used as object space transformations.
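One possible analytic local zoom as a parameter space transformer is sketched below; the radial profile is our illustrative choice, not the function used for the colour plates:

```cpp
#include <array>
#include <cmath>

using Vec2 = std::array<double, 2>;

// A local non-linear zoom in parameter space: parameter values near the
// centre c are pulled towards c, so a small parameter region (and hence a
// small part of the scene) spreads over many pixels and appears enlarged.
// Far away from c the map approaches the identity, so the overview of the
// scene is kept and no discontinuities appear.
struct LocalZoom {
    Vec2 c{0.5, 0.5};   // zoom centre in PCS
    double k = 3.0;     // magnification at the centre (k > 1)
    double s = 0.15;    // radius of influence of the zoom

    Vec2 apply(const Vec2& p) const {
        double dx = p[0] - c[0], dy = p[1] - c[1];
        double r2 = dx * dx + dy * dy;
        double f = 1.0 + (k - 1.0) * std::exp(-r2 / (2.0 * s * s));
        return { c[0] + dx / f, c[1] + dy / f };  // contract towards c
    }
};
```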

3.2. OCS objects and their parametrizations

One major transformation in the sequence of mappings getEyePnt and getDirPnt is the parametrization of an OCS object, which is done by an object generator. Often surface objects are used for this transformation, but theoretically objects of any dimension, even fractal objects, can be used in this step. Since most parametric objects have many different parametrizations, an adequate parametrization has to be chosen carefully. Again, various object classes are useful for the object generator:

1. Simple analytic primitives: many simple objects (point, line, circle, plane, sphere, cylinder, cone, torus, and others) can be used as OCS objects. Most of them have a default parametrization, but several variations are possible in most cases.

2. Quadrics and superquadrics: quadrics and superquadrics are well known and often used in computer graphics.18,19 Although their definitions are quite simple, they provide significantly more modelling flexibility to the user than simple analytic primitives. A parametric representation of quadrics and superquadrics can be achieved by using the spherical product of two conic or superconic curves g(u) = (g_1(u), g_2(u)) and h(v) = (h_1(v), h_2(v)), which is given by the equation21

s(u,v) = g(u) ⊗ h(v) = (g_1(u)·h_1(v), g_2(u)·h_1(v), h_2(v))

With this definition, horizontal cross-sections (z = const.) through the resulting surface s(u,v) are scaled versions of g(u), and cross-sections containing the z-axis are scaled versions of h(v). For example, a circle g and a line h with constant x produce a cylinder s = g ⊗ h.
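The spherical product translates directly into code; a small sketch (our names) that also reproduces the circle-and-line cylinder example:

```cpp
#include <array>
#include <cmath>
#include <functional>

using Vec2 = std::array<double, 2>;
using Vec3 = std::array<double, 3>;
using Curve = std::function<Vec2(double)>;  // a 2D profile curve

// Spherical product of two curves: horizontal cross-sections of the result
// are scaled copies of g, cross-sections through the z-axis scaled copies of h.
Vec3 sphericalProduct(const Curve& g, const Curve& h, double u, double v) {
    Vec2 gu = g(u), hv = h(v);
    return { gu[0] * hv[0], gu[1] * hv[0], hv[1] };
}

// The example from the text: a circle g and a line h with constant first
// component give the cylinder s(u,v) = (cos u, sin u, v).
const Curve circle = [](double u) { return Vec2{ std::cos(u), std::sin(u) }; };
const Curve line   = [](double v) { return Vec2{ 1.0, v }; };
```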

Many different parametrizations of these surface objects are possible, and the decision which one to use must be made carefully. A detailed discussion of the properties of some quadric and superquadric parametrizations can be found in Reference 22.

3. Conics and superconics: one major application of conics and superconics is the combination of these curve objects into a parametrizable surface using the spherical product (see above). Additionally, conic and superconic curves can be used as OCS objects themselves. Several different parametrizations of conics and superconics can be thought of.22

4. More general curves: in addition to simple analytic primitives, conics and superconics, more general curves can be used as OCS objects. Seed points are defined at specific locations and interpolation or approximation is used to generate the curve. Fortunately these curves are parametric by definition, and tangent vectors can easily be derived in most cases. Such interpolation or approximation curves can also be combined using the spherical product to generate a surface object.

5. Free form surfaces: free form surfaces have a wide application area for the object generator. The main advantage of these objects is that flexible modelling of a smooth surface is possible. Again there are several possibilities to interpolate or approximate the control points of such a surface. As free form surfaces are parametric objects by definition, a parametrization is always given.

6. Sweep objects: another class of objects which is useful for the object generator are sweep objects. For generating a sweep object, a 1D shape is moved along a 1D path through the OCS; together with a corresponding parametrization this defines a surface object. The path curve and the swept shape may both be defined by more general parametric curves (see above).

3.3. Picture mapper and picture arranger

Since the only purpose of the picture mapper is to provide device independence, and an affine transformation is sufficient for this aim, no variations of this first step in the sequence of mappings were considered in depth.


Another part of the extended camera model is the combination of several cameras into a compound camera using the picture arranger. Such a picture arranger is composed of three consecutive procedures. First, a clipping operation is performed to allow user-defined shapes for extended camera images (many clipping shapes can be thought of). After an optional transformation of the clipped image (e.g. scaling), the result is combined with the final image in a user-defined way (e.g. by occluding the background or by using an alpha channel).
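This clip-transform-combine logic might be sketched per pixel as follows; the combination step shown here simply occludes whatever lies behind (the alpha-channel variant is omitted), the per-pixel loop also makes visible why only contributing image parts are ever ray traced, and all names are illustrative:

```cpp
#include <array>
#include <functional>
#include <vector>

using Vec2 = std::array<double, 2>;
struct Rgb { double r, g, b; };

// One entry of the picture arranger: a clipping shape, an optional
// DCS -> DCS transform into the sub-camera's own image co-ordinates,
// and the sub-camera's per-pixel renderer.
struct ArrangedCamera {
    std::function<bool(const Vec2&)> inside;      // step 1: clipping shape
    std::function<Vec2(const Vec2&)> toLocalDCS;  // step 2: transform
    std::function<Rgb(const Vec2&)>  render;      // trace one ray and shade
};

// Step 3: combination. For each final-image pixel only the first camera
// whose clipping shape contains it is evaluated -- pixels outside every
// shape are never ray traced at all.
Rgb composeFinalPixel(const std::vector<ArrangedCamera>& cams, const Vec2& dcs) {
    for (const auto& cam : cams)
        if (cam.inside(dcs)) return cam.render(cam.toLocalDCS(dcs));
    return {0.0, 0.0, 0.0};  // background colour
}
```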

4. IMPLEMENTATION

A rendering system was set up to test the ideas presented in the previous sections. The public domain ray tracer POV-Ray was chosen as a powerful ray tracing system, and a C++ library was developed which provides the whole functionality of the extended camera.23 POV-Ray was adapted to use the camera library instead of the pinhole camera module. Both layers (the abstract camera machine and the layer of standard computer graphics transformations) are directly represented in the library. Abstract classes specify the interfaces of the transformations, and concrete representatives of all the modules are derived from these abstract interface classes. A camera description language was developed and a parser is incorporated in the library. The picture mapper, parameter space transformer, object generator, and object space transformer are realized as separate class derivation structures. Parametrizable objects and the picture arranger are realized separately as well. The whole system is based on the VEGA libraries,24 which are hierarchically ordered and provide software development tools, basic types, templates, and mathematical components.
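The class organisation described here might look roughly as follows in C++; the actual library additionally builds on VEGA and the camera description language, and these class names are ours, not the library's:

```cpp
#include <array>
#include <memory>

using Vec2 = std::array<double, 2>;
using Vec3 = std::array<double, 3>;

// Abstract interface classes, one per module slot of the abstract camera
// machine; concrete transformations derive from these.
struct PictureMapper   { virtual Vec2 map(const Vec2& dcs) const = 0;      virtual ~PictureMapper() = default; };
struct PcsTransformer  { virtual Vec2 apply(const Vec2& pcs) const = 0;    virtual ~PcsTransformer() = default; };
struct ObjectGenerator { virtual Vec3 generate(const Vec2& pcs) const = 0; virtual ~ObjectGenerator() = default; };
struct OcsTransformer  { virtual Vec3 apply(const Vec3& ocs) const = 0;    virtual ~OcsTransformer() = default; };

// An assembled camera owns one concrete module per slot; the ray tracer
// only ever calls the finished camera, never the individual modules.
struct ExtendedCamera {
    std::unique_ptr<PictureMapper>   mapper;
    std::unique_ptr<PcsTransformer>  pcsTransform;          // optional
    std::unique_ptr<ObjectGenerator> eyeObject, dirObject;  // projection objects
    std::unique_ptr<OcsTransformer>  ocsTransform;          // optional
};
```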

5. RESULTS

A set of scenes was modelled for POV-Ray. Several cameras were specified for the extended camera module, and pictures were rendered to illustrate the large number of possibilities of the presented camera extension (see also the appendix for colour plates). They express new possibilities in making computer generated images. The structures of the extended cameras used to compute the result images are explained in this section, and the design ideas for these cameras are discussed.

5.1. The superelliptic camera

The superelliptic camera uses a superellipsoid (Figure 4) for the generation of the eye and direction object points. It was used to render a virtual gallery with images previously generated at our institute (see Plates 1, 2 and 3). Four walls form a rectangular room, and boxes are placed regularly all over the walls. Small cylinders connect these boxes along the axes of this grid. A thin wall fills the space between boxes and cylinders. Seven pictures are mounted on the walls. Together they fill a lot of the wall space, which is useful for visualizing the distortions of the superelliptic camera.

Surface normals were used to project this scene onto the superellipsoid (Plate 2). The superellipsoid itself was realized as a spherical product of two superellipses g(u) and h(v). The parametrization formulae are given by the equations

g(u) = (cos^γ u, (b/a) · sin^γ u)  and  h(v) = (a · cos^γ v, c · sin^γ v)

where a, b and c are modelling parameters that were set to 45, 25 and 25, and γ is the superconic parameter (a value of 1/4 was used for this camera). See Figure 4 for a plot of this superellipsoid. It was placed inside the room with the axes of the superellipsoid coinciding with the room's axes. Using this camera for the projection, all the pictures on the walls can be seen within one image without discontinuities.

Figure 4. The geometry of the superelliptic camera

To visualize the transformation of the OCS, the room without pictures and walls was projected using the same camera (Plate 3). The edges of the room are clearly seen, and the distortions can be recognized from the boxes and cylinders.

This type of projection has several advantages. Very low distortions occur in the dominating areas of the image; thus the pictures on the walls are almost undistorted. Only near the corners of the room do some distortions occur, but even these preserve the feeling of a rectangular room for the viewer. The projection is continuous and differentiable at each point of the image, so no jumps in either location or direction occur.

5.2. The superhyperbolic camera I

The superhyperbolic camera I (Figure 5) was used to render a similar scene, which simulates a wall with some other pictures generated at our institute (see Plates 4, 5 and 6). Seven by seven units form a single wall. The distances between the pictures are again very small, but with this way of tessellating the wall the distortions of the applied extended cameras are easier to understand. As in the gallery described in the previous section, the cylinders exceed the wall a bit, so the regular structure of the wall is more obvious.

A two-sheeted superhyperboloid (see Figure 5) was built as a spherical product of two superhyperbolae g(u) and h(v), where a, b and c are modelling parameters that were set to 45, 25 and 25, and γ is the superconic parameter (a value of 1/4 was used for this camera). As for the superelliptic camera, the surface normals were taken as direction vectors. Only one of the two sheets was used for the projection (Plate 5), and the size of the camera was adjusted to the centre unit of the wall. This causes the central area of the superhyperboloid sheet to cover most of the image area, and thus the middle wall unit is expanded to more than 30 by 30 per cent of the image area. All the remaining 48 wall units are still visible and fill the adjacent border space. They are heavily deformed and considerably reduced, although they all show up in the final image.

Figure 5. The geometry of the superhyperbolic camera I

To illustrate the distortions of this camera, the wall was also projected onto the superhyperboloid without any pictures and wall pieces (Plate 6). Some benefits result from using a superhyperboloid instead of a hyperboloid in combination with an appropriate parametrization. Taking this superhyperboloid projection of the wall as an example, we can see that the middle area of the wall remains almost undistorted, but is scaled up so that details can be made out more easily. Although this small part of the whole wall is enlarged, an overview is still given, as the remaining scene shows up in the border area of the picture. Thus a local non-linear zoom effect is realized by this extended camera.

5.3. The superhyperbolic camera II

Although the extended camera described in this section is just a variation (different parametrization) of the previously described camera, the resulting image is significantly different from the previous one (see Plates 4, 7 and 8). It uses exactly the same OCS object (see Figure 6) as projection surface. The only difference is that another parametrization was used for g(u) and h(v):

g(u) = (cosh^γ u, (b/a) · sinh^γ u)  and  h(v) = (a · cosh^γ v, c · sinh^γ v)

where a, b and c are modelling parameters that were set to 81/2, 45/2 and 45/2, and γ is the superconic parameter (set to 1/4 for this camera). Again the distortions of the scene geometry are visualized by projecting the grid of the wall onto the superhyperboloid. Plate 8 shows the resulting image. The middle 'cross' (13 wall units) disappears almost completely using this parametrization. The rest is distorted in a smooth and pleasing way.

Figure 6. The geometry of the superhyperbolic camera II

When looking at the wall and the pictures through this camera, one can see again that in the middle area of the whole image some portions of the pictures seem to disappear, e.g. the candle (Plate 7). From the authors' point of view, especially this plate demonstrates that pictures made with extended cameras often have an aesthetic aspect.

5.4. The hyperbolic cylinder camera

One more extended camera (Figure 7) was designed for the gallery 'POV-Ray' (see Plates 9 and 10). This scene simulates a gallery with pictures that were themselves rendered with the ray tracer POV-Ray.25 Four pictures are positioned on two walls, which are perpendicular to each other. Each picture is lit by a spot light which is placed in front of and above the top of the wall. A perspective view (Plate 9) of this scene was rendered using POV-Ray's built-in pinhole camera.

This scene was rendered with the hyperbolic cylinder camera (Plate 10), which uses parts of a hyperbolic cylinder (see Figure 7) and the surface normals for projection. This surface object is realized as a spherical product of a hyperbola and a line; the parametrization formulae are

g(u) = (a · cosh u, b · sinh u)  and  h(v) = (1, v)   (5)


Figure 7. The geometry of the hyperbolic cylinder camera

where a and b are modelling parameters that were both set to 3 for this camera. The asymptotes of the hyperbola are perpendicular to each other and coincide with the diagonals in the x-y plane. Using this camera we can project the pictures on the walls onto the image almost without any distortions. They show up in the final image significantly bigger and less distorted than within standard perspective projections. The right-angle geometry of the two walls is unfolded in some way, but not broken into pieces, so a feeling for the scene geometry remains for the viewer. Similar cameras can be used for integrating several views of one object, e.g. a building, into one picture without any discontinuities.
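Under our reconstruction of equation (5), getEyePnt and the normal-based ray directions of this camera reduce to a few lines; a sketch with illustrative names:

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

const double a = 3.0, b = 3.0;  // modelling parameters from the text

// Equation (5): g(u) = (a cosh u, b sinh u) and h(v) = (1, v); their
// spherical product is the hyperbolic cylinder s(u,v) = (a cosh u, b sinh u, v).
Vec3 eyePoint(double u, double v) {
    return { a * std::cosh(u), b * std::sinh(u), v };
}

// Surface normal, used as the ray direction: for a cylinder it is the 2D
// normal of the hyperbolic cross-section, independent of v.
Vec3 rayDirection(double u) {
    // the tangent of the hyperbola is (a sinh u, b cosh u); rotate it by 90 degrees
    return { b * std::cosh(u), -a * std::sinh(u), 0.0 };
}
```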

5.5. The hyperbolic cylinder camera 'local zoom'

This extended camera was generated by taking the same hyperbolic cylinder and adding a local zoom effect using a parameter space transformer (see Plates 9, 10 and 11). This camera was then again applied to the art gallery 'POV-Ray' (Plate 11). The zoom effect was positioned at the rightmost of the four images. Especially some of the leaves at the left part of the plant shown in this image are enlarged and rendered much bigger than in Plate 10. At the pot of the plant the distortions produced by this zoom effect can be seen best. Nothing of the overall view of this scene is lost using this camera. Near the zoomed portions some other portions of the image are compressed and therefore smaller than in Plate 10. This yields an effect similar to M. C. Escher's picture 'Balcony', where nothing of the overall view is lost, but parts are compressed around the enlarged balcony.

5.6. The circle and torus camera

This camera was used to render one of POV-Ray's scenes, which was taken from the public domain software package (see Plates 12 and 13). It simulates a Roman temple in the desert. On a sandy surface, 14 rippled columns are placed in a circle. Arches span the columns and a massive ring is placed on top of all the columns. All parts of the temple are textured and look as if they were made of marble. A sky with a few clouds is simulated. Sharp shadows are cast into the sand by a low standing sun. The POV-Ray pinhole camera view (Plate 12) shows this temple from the front, together with the camera described in the following paragraph, which was modelled by a set of bright 3D arrows.

The circle and torus camera was built using two OCS objects instead of one. A torus is chosen as direction object, and a circle coinciding with the centre circle of the torus is taken as eye object. The parametrization of the torus is similar to the 'longitude and latitude' parametrization of the sphere. The camera basically realizes a normal projection onto the torus. The parameter range covers about one quarter of the torus. This camera was used to make a ray traced picture of the Roman temple (Plate 13). For this purpose the camera was positioned and rotated within the OCS. The main radius of the torus is of similar size to the radius of the temple, but the camera torus is positioned at some distance from the centre of the temple; the main centre of the torus is somewhere near the top of one column of the temple. A rotation of 30 degrees around an axis of the torus running from its main centre towards the centre of the temple induces the eye object circle to enter the temple below one arch and leave the structure slightly above the top ring. This heavily distorting camera illustrates the flexibility of the system and shows how artistic effects are easily realizable using extended cameras.

5.7. The line and swinging cylinder camera

The line and swinging cylinder camera was used to render another POV-Ray scene, which is included in the POV-Ray system (see Plates 14 and 15). The scene consists of four 3D letters, which together form the word 'SOFT'. The perspective view (Plate 14) shows these letters from the front, together with a 3D model of the camera described below. The grey arrows represent some of the projection directions generated by this extended camera.

The camera is in some way similar to the previously described one. Instead of a torus and a circle, a cylinder and a line inside the cylinder are used to render Plate 15. The line and cylinder run through the 'O' in a horizontal manner. See Figure 8 for a cross-section through this camera.

Figure 8. A cross-section through the line and swinging cylinder camera

To imagine the whole camera (in 3D), the cross-section can be assumed to curl around the line denoted by (r = 0). This line coincides with the axis of the cylinder. The intersection of the cylinder and the cross-section is marked by (r = 1). Lines with a dot at one end represent rays that are used for projection at any vertical line in Plate 15. The rays start on the line (r = 0) and run through the cylinder (r = 1). The back and forth swinging character of the rays is caused by a parameter space transformer used for getDirPnt (cylinder); it is a local zoom transformation located at cylinder height 0. To explain why certain parts of the scene show up twice in Plate 15 (especially the 'O'), one could say that cylinder points overtake corresponding line points in some portions of the image and are left behind them in others.

6. CONCLUSION

Ray tracing is a powerful method for rendering photorealistic images, and many extensions and enhancements have been introduced over the years. In this paper we presented an extension to ray tracing which opens the door to a broad field of applications far from photo-realism. Impressive effects can be achieved when extended cameras are used instead of standard camera modules such as perspective or parallel projection.

We specified the rather simple interface of a generalized camera module, which just produces rays for image locations. Further structuring of the generalized camera module was necessary to explore the field of extended cameras systematically. Thus the extended camera model was divided into two layers as a top-level structure. An abstract camera machine was defined between the ray tracer and a second layer of standard computer graphics transformations and mappings. A major part of this paper describes the internal structure of the abstract camera machine, and additionally some ideas are given on what is useful for the layer of transformations.

With our extended model of a camera module, several applications can easily be realized which would cause significant problems when approached with traditional image composition techniques. The main advantage of these extended cameras is that they are modelled and applied in object space (3D). Various projections onto arbitrary surfaces in 3D can easily be realized besides well known standard viewing techniques. Local zoom effects can be achieved using 2D or 3D techniques (e.g. centres of interest). Multiple views of scene objects can be integrated into one picture without any discontinuities. Smooth distortions within images of well known scene objects add an aesthetic aspect to this concept. With some extended cameras, images can be computed that can be seen as the integration of a complete camera pan into a single picture.

The major contribution of this paper is the structuring of the complex extended camera function. The framework of this structuring is the abstract camera machine, which has to be configured with a set of standard mappings. Several classes of these mapping functions have been investigated with respect to their usability as sub-modules for the abstract camera machine. This structured approach simplifies the specification and variation of extended cameras.

ACKNOWLEDGEMENTS

The authors would like to thank their colleagues Werner Purgathofer and Christoph Traxler for their helpful comments on preliminary versions of this paper.

REFERENCES

1. A. S. Glassner (ed.), An Introduction to Ray Tracing, Academic Press, 1989.
2. L. R. Speer, 'An updated cross-indexed guide to the ray-tracing literature', Computer Graphics, 26, (1), 41-72 (1992).
3. M. Potmesil and I. Chakravarty, 'Synthetic image generation with a lens and aperture camera model', ACM Transactions on Graphics, 1, (2), 85-108 (1982).
4. Y. C. Chen, 'Lens effect on synthetic image generation based on light particle theory', The Visual Computer, 3, (3), 125-136 (1987).
5. Y. C. Chen, 'Synthetic image generation for highly defocused scenes', in N. Magnenat-Thalmann and D. Thalmann (eds), New Trends in Computer Graphics (Proceedings of CGI '88), 1988, pp. 117-125.
6. R. L. Cook, T. Porter and L. Carpenter, 'Distributed ray tracing', Computer Graphics (Proceedings of SIGGRAPH '84), 18, (3), 137-147 (1984).
7. G. Wyvill and C. McNaughton, 'Optical models', Proceedings of Computer Graphics International (CGI) '90, 1990.
8. P. Acquisto and E. Gröller, 'A distortion camera for ray tracing', in J. J. Connor, S. Hernandez, T. K. S. Murthy and H. Power (eds), Visualization and Intelligent Design in Engineering and Architecture, Computational Mechanics Publications, Elsevier Science Publishers, April 1993, pp. 105-118.
9. N. L. Max, 'Computer graphics distortion for IMAX and OMNIMAX projection', Proceedings of Nicograph '83, December 1983, pp. 137-159.
10. N. Greene and P. S. Heckbert, 'Creating raster omnimax images from multiple perspective views using the elliptical weighted average filter', IEEE Computer Graphics & Applications, 6, (6), 21-27 (1986).
11. J. O'B. Dorsey, F. X. Sillion and D. P. Greenberg, 'Design and simulation of opera lighting and projection effects', Computer Graphics (Proceedings of SIGGRAPH '91), 25, (4), 41-50 (1991).
12. C. Cruz-Neira, D. J. Sandin and T. A. DeFanti, 'Surround-screen projection-based virtual reality: the design and implementation of the CAVE', Computer Graphics (Proceedings of SIGGRAPH '93), 135-142 (1993).
13. A. W. Paeth, 'Digital cartography for computer graphics', in A. Glassner (ed.), Graphics Gems, Academic Press, 1990, pp. 307-320.
14. B. Ernst, Der Zauberspiegel des M. C. Escher (in English: The Magic Mirror of M. C. Escher), Heinz Moos Verlag, 1978.
15. E. Gröller and H. Löffelmann, 'Extended camera specification for image synthesis', Machine Graphics and Vision, 3, (3), 513-530 (1994).
16. T. Beier and S. Neely, 'Feature-based image metamorphosis', Computer Graphics (Proceedings of SIGGRAPH '92), 26, (2), 35-42 (1992).
17. G. Wolberg (ed.), Digital Image Warping, IEEE Computer Society Press, 1990.
18. A. H. Barr, 'Faster calculation of superquadric shapes', IEEE Computer Graphics and Applications, 1, (1), 41-47 (1981).
19. A. H. Barr, 'Superquadrics and angle-preserving transforms', IEEE Computer Graphics and Applications, 1, (1), 11-23 (1981).
20. W. Boehm and H. Prautzsch (eds), Geometric Concepts for Geometric Design, A. K. Peters, 1994.
21. P. Hanrahan, 'A survey of ray-surface intersection algorithms', in A. S. Glassner (ed.), An Introduction to Ray Tracing, Academic Press, 1989, pp. 79-119.
22. H. Löffelmann and E. Gröller, 'Parametrizing superquadrics', Proceedings of the Winter School of Computer Graphics and Visualisation (WSCG) '95, Plzen, February 1995, pp. 162-172.
23. H. Löffelmann, 'Extended cameras for ray tracing', Diploma Thesis, Institute of Computer Graphics, Technical University of Vienna, Austria, 1995.
24. R. F. Tobler, H. Löffelmann and W. Purgathofer, 'VEGA: Vienna environment for graphics applications', Proceedings of the Winter School of Computer Graphics and Visualisation (WSCG) '95, Plzen, February 1995, pp. 323-328.
25. Images produced with POV-Ray, ftp://ftp.povray.org/pub/povray/images/


Plates 1-15 (Löffelmann and Gröller)