
Page 1: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Computer Graphics and VTK

Shroeder et al. Chapter 3

University of Texas – Pan American, CSCI 6361, Spring 2014

Page 2: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

About “Libraries” and “Toolkits”

• It is, of course, possible to program visual representations using just a language and its primitives for visualization

– E.g., Java, C/C++ and OpenGL (more later), for graphs, vector fields, etc.

• However, many of the same visual representations are used “often”, e.g., charts and scatterplots, but not often enough to be language primitives

• Libraries, or toolkits, provide a means to access these “often” used elements, focused on a domain

– e.g., vtk for visualization, Qt for interface design

• Such “libraries” and “toolkits” are effectively another software layer
– Some are closer to the native language than others
– Loosely stated, those “closer” to the native language are more flexible, but may trade off ease of use

Page 3: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

“Libraries” and “Toolkits” for visualization

• VTK
– Allows recompilation to extend; access to OpenGL and the window system
– Close to the language, robust, established/static, many file formats supported, visualization techniques oriented to “scientific” visualization

• Others, http://faculty.utpa.edu/fowler/Visualization.html

Page 4: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

“Libraries” and “Toolkits” prefuse

• Toolkit in Java for building information visualizations

• Fine-grained building blocks for constructing visualizations (as opposed to pre-made views)

• Data model is a graph (entities & relations)

• Includes library of layout algorithms, navigation and interaction techniques

• Written in Java2D

Page 5: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

“Libraries” and “Toolkits” prefuse

Page 6: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

“Libraries” and “Toolkits” D3 – Data-Driven Documents

• Javascript-based

• Very similar to Protovis…
– Except it uses web standards, e.g., Scalable Vector Graphics (SVG), vs. a proprietary graphics set
– Declarative syntax, like Protovis

• Creating/Modifying selections of the HTML DOM

• Good support for changing data

– Takes advantage of CSS3 Transformations and Transitions

• Integrates seamlessly into any webpage

Page 7: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

“Libraries” and “Toolkits” D3 – Data-Driven Documents

• https://github.com/mbostock/d3/wiki/Gallery, selectable examples

Page 8: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

“Libraries” and “Toolkits” IBM’s Many Eyes

• Many Eyes
– IBM website
– Ease of creating new visualizations
– Discuss visualizations
– Users upload own data sets
– All become public – table or unstructured text
– Word tree at right

Page 9: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

“Libraries” and “Toolkits” Others

• Piccolo
– Graphics toolkit with built-in zooming and panning support
– 2D

• JavaScript InfoVis Toolkit

• Tableau Public

• Processing
– A language

Page 10: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

About Visualization

• From Schroeder et al.:

– The field [of visualization] is broad, including elements of computer graphics, imaging, computer science, computational geometry, numerical analysis, statistical methods, data analysis, and studies in human perception.

• Tonight …

• Some forest – enough to use VTK
• Some trees – enough to “appreciate” the role of cg

Page 11: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Computer Graphics and Visualization

• Recall, 1st points in class:
– (Computer-based) Visualization: use of computer-supported, interactive, visual representations of data to amplify cognition
– Cognition is the acquisition or use of knowledge

• Now, computer graphics (cg) to accomplish goals of visualization

• CG – “process of generating images using computers (or rendering)”
– Converting graphical data into an image
– For data visualization: transform data into “graphical data”, or graphics primitives (points, lines, surfaces, etc.)
– These graphics primitives are then rendered

• CG is intimately related to visualization
– Sets “limits” on techniques for visualization, implicitly then creating orienting attitudes
– It is part of the “craft” of visualization

• Knowledge of both hardware and software systems is necessary in the practice (craft) of visualization

Page 12: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Overview

• Introduction: Software architecture of VTK and other “layers”

• Photorealism and Complexity: Polygon representations

• Viewing: Objects, coordinate systems, projection
– Image-order and object-order methods

• Buffers: Details of graphics hardware to understand software, z-buffer algorithm

• Surface properties: Shading/lighting

• Cameras

• Transformation matrices

• VTK software architecture and an example

Page 13: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Software Architecture: Abstraction and “languages”

• Abstraction is at the core of computer science and information technology
– It has allowed the advances seen in making electronic information systems usable

• E.g., advances in languages
– ASM -> early “high level”, e.g., FORTRAN -> structured, Pascal, C -> object-oriented, C++, Java

Your (application) program

“Application Library” (VTK, GLUT, … or anything)

Graphics Library (OpenGL, DirectX, …)

Graphics Hardware (frame buffers, firmware, …)

Display (and input) Hardware (screen, mouse, ….)

Window System (MS Windows, Apple, Motif)

Page 14: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Software Architecture: Applications and layers

• “Applications” are programs (that programmers write)

• “Libraries” and software layers have Application Programmer Interfaces (APIs)

Your (application) program

“Application Library” (VTK, GLUT, … or anything)

Graphics Library (OpenGL, DirectX, …)

Graphics Hardware (frame buffers, firmware, …)

Display (and input) Hardware (screen, mouse, ….)

Window System (MS Windows, Apple, Motif)

• “Libraries” essentially are higher level, or provide abstractions, for lower levels

• In fact, interaction among layers

Page 15: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Software Architecture: Interaction among layers

• E.g., your C/C++ or Java program, using VTK

• Uses VTK classes

Your (application) program

“Application Library” (VTK, GLUT, … or anything)

Graphics Library (OpenGL, DirectX, …)

Graphics Hardware (frame buffers, firmware, …)

Display (and input) Hardware (screen, mouse, ….)

Window System (MS Windows, Apple, Motif)

• VTK classes use/call:
– OpenGL, which accesses the graphics hardware
– Also, the window system
– And input devices, through the window system

• Also, application can access OpenGL and Window System

Page 16: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Really Big Picture – Human Vision: CG Camera Model – “Light Strikes the Retina…” (more soon)

• Interaction of light with the human visual perceptual system leads to vision
– Light strikes an object (and is reflected to our eyes) after a complex series of interactions
– “Photons traveling everywhere” – absorbed, reflected, refracted, diffracted, as they interact with objects
– The ambient optical array is the light reaching a point – ray tracing in computer graphics
– View plane for computer graphics

Page 17: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Really Big Picture – Human Vision: CG Camera Model – “Light Strikes the Retina…” (more soon)

• And, of course, computer graphics, too, is about vision

– “Through the view plane”

• So, things about light, etc. are relevant

– Physics, optics, etc.

• The difference for cg is that the focus is on “computation”, “good (and fast) enough”, etc.

– Which is at the core of computer graphics

• A computer science point
– Analogous to algorithmic approximation techniques
– But the human vision system, task, etc. are considered

Page 18: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

CG: Photorealism and Complexity

Page 19: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

CG: Photorealism and Complexity

• Examples below exhibit the range of “realism” in computer graphics
– Realism is just one of the goals of computer graphics

• In general, trade off realism for speed
– Wireframe – just the outline
– Polygons – flat shading
– Polygons – smooth shading
– Raytracing – consider “all” interactions of light with objects

(Figure: Wireframe, Polygons – flat shading, Polygons – smooth shading, Ray tracing)

Page 20: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

CG: Photorealism and Complexity

• Examples below exhibit the range of “realism” in computer graphics
– Realism is just one of the goals of computer graphics

• In general, trade off realism for speed
– Wireframe – just the outline
– Polygons – flat shading
– Polygons – smooth shading
– Raytracing – consider “all” interactions of light with objects

• Closest to photorealistic … but essentially follows rays from light source!

Ray tracing

Page 21: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

It’s (almost) all about Polygons

• Consider tractability, interactivity and selection of image models
– Not physical
– Leads to using “good enough” (for the task) representations
– Much of the challenge of cg lies in representing the analog world on a digital device

• E.g., approximation of circle as series of straight lines

• Though some surfaces and objects can be described mathematically, e.g., sphere, most cannot, e.g., crocodile

• Approximation for objects is typically polygon mesh

(Figure: ray tracing)

Page 22: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

It’s (almost) all about Polygons: Polygons are tractable approximations

• Consider tractability, interactivity and selection of image models
– Not physical
– Leads to using “good enough” (for the task) representations
– Much of the challenge of cg lies in representing the analog world on a digital device

• E.g., approximation of circle as series of straight lines

• Though some surfaces and objects can be described mathematically, e.g., sphere, most cannot, e.g., crocodile

• Approximation for objects is typically polygon mesh

(Figure: Wireframe, Polygons – flat shading, Polygons – smooth shading, Ray tracing)

Page 23: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Polygon Representations

Page 24: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Polygon Representations: More is better … for photorealism

• More is always better (for polygon count and photorealism)
– 20, 80, 320 polygons for a sphere
– Sampling vs. discrete
– Tessellation

• A fair amount of detail goes into creation – usually a tool, e.g., Maya, or a toolkit, e.g., VTK (below), is used

vtkSphereSource *sphere = vtkSphereSource::New();
// number of divisions of “latitude” and “longitude”
sphere->SetThetaResolution(16);
sphere->SetPhiResolution(16);
vtkPolyDataMapper *sphereMapper = vtkPolyDataMapper::New();
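To put the fragment in context, a minimal sketch of the rest of a classic VTK pipeline (source -> mapper -> actor -> renderer -> render window) might look like the following; the class and method names follow the standard VTK C++ API, but the window size and overall setup are illustrative assumptions rather than code from the slides.

#include "vtkSphereSource.h"
#include "vtkPolyDataMapper.h"
#include "vtkActor.h"
#include "vtkRenderer.h"
#include "vtkRenderWindow.h"
#include "vtkRenderWindowInteractor.h"

int main()
{
  // Source: tessellated sphere; the resolution controls the polygon count
  vtkSphereSource *sphere = vtkSphereSource::New();
  sphere->SetThetaResolution(16);
  sphere->SetPhiResolution(16);

  // Mapper: turns the polygonal data into graphics primitives
  vtkPolyDataMapper *sphereMapper = vtkPolyDataMapper::New();
  sphereMapper->SetInputConnection(sphere->GetOutputPort());

  // Actor: couples the geometry (mapper) with properties and a transform
  vtkActor *sphereActor = vtkActor::New();
  sphereActor->SetMapper(sphereMapper);

  // Renderer / window / interactor: draw the scene and handle input
  vtkRenderer *ren = vtkRenderer::New();
  ren->AddActor(sphereActor);
  vtkRenderWindow *renWin = vtkRenderWindow::New();
  renWin->AddRenderer(ren);
  renWin->SetSize(300, 300);                      // illustrative window size
  vtkRenderWindowInteractor *iren = vtkRenderWindowInteractor::New();
  iren->SetRenderWindow(renWin);

  renWin->Render();
  iren->Start();
  return 0;
}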

Page 25: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Polygon Mesh Representations: Maya example

• 940 and ~1m polygons (en.9jcg.com/comm_pages/blog_content-art-51.htm)

– But they are still polygons!

Page 26: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Representing Polygons – OpenGL Example: Draw a cube from faces

(Figure: cube with vertices numbered 0–7)

• VTK has similar functionality

void colorcube()   // vertices defined by location, etc.
{
    polygon(0,3,2,1);
    polygon(2,3,7,6);
    polygon(0,4,7,3);
    polygon(1,2,6,5);
    polygon(4,5,6,7);
    polygon(0,1,5,4);
}

• Vertices are ordered to obtain correct outward facing normals

• Many such “subtleties” in cg programming!

• Normal
– A direction vector, perpendicular to a surface
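For concreteness, a hypothetical polygon() helper in classic fixed-function OpenGL might look like the sketch below; the vertex coordinates and the glBegin/glEnd style are assumptions for illustration, not code from the slides, and the numbering need not match the figure.

#include <GL/gl.h>

// Assumed cube corner positions, indexed 0..7 as used by colorcube()
GLfloat vertices[8][3] = {
    {-1.0f, -1.0f,  1.0f}, {-1.0f,  1.0f,  1.0f}, { 1.0f,  1.0f,  1.0f}, { 1.0f, -1.0f,  1.0f},
    {-1.0f, -1.0f, -1.0f}, {-1.0f,  1.0f, -1.0f}, { 1.0f,  1.0f, -1.0f}, { 1.0f, -1.0f, -1.0f}
};

// Draws one quadrilateral face; the vertex order determines the outward-facing normal
void polygon(int a, int b, int c, int d)
{
    glBegin(GL_POLYGON);
        glVertex3fv(vertices[a]);
        glVertex3fv(vertices[b]);
        glVertex3fv(vertices[c]);
        glVertex3fv(vertices[d]);
    glEnd();
}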

Page 27: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Representing Polygons OpenGL Example: Representing a Mesh

• Consider a simple “mesh”

• Already 8 nodes and 12 edges
– 5 interior polygons
– 6 interior (shared) edges

• Each vertex has a location vi = (xi, yi, zi)

• How to efficiently store for use is a significant data structure question

– Hence, the large number of representations in OpenGL

– … and in VTK (next slide)

(Figure: mesh with vertices v1–v8 and edges e1–e12)

Page 28: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

VTK Two-Dimensional Cell Types: Many types, fyi, in part to support formats

• Triangle
– Primary 2D cell type
– Definition: counter-clockwise ordering of 3 points
• The order of the points specifies the direction of the surface normal

• Triangle strip
– Composite 2D cell consisting of a strip of triangles
– Definition: ordered list of n+2 points, where n is the number of triangles

• Quadrilateral
– Primary 2D cell type
– Definition: ordered list of four points lying in a plane
• Constraints: convex and edges must not intersect

• Polygon
– Primary 2D cell type
– Definition: ordered list of 3 or more points
• Constraint: may not self-intersect
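As a rough sketch of how such cells are assembled through VTK’s C++ API (the point coordinates here are made up purely for illustration):

#include "vtkPoints.h"
#include "vtkTriangle.h"
#include "vtkCellArray.h"
#include "vtkPolyData.h"

vtkPolyData *BuildOneTriangle()
{
  vtkPoints *points = vtkPoints::New();
  points->InsertNextPoint(0.0, 0.0, 0.0);   // point ids 0, 1, 2
  points->InsertNextPoint(1.0, 0.0, 0.0);
  points->InsertNextPoint(0.0, 1.0, 0.0);

  // Counter-clockwise ordering of the three point ids fixes the surface normal
  vtkTriangle *tri = vtkTriangle::New();
  tri->GetPointIds()->SetId(0, 0);
  tri->GetPointIds()->SetId(1, 1);
  tri->GetPointIds()->SetId(2, 2);

  vtkCellArray *cells = vtkCellArray::New();
  cells->InsertNextCell(tri);

  vtkPolyData *poly = vtkPolyData::New();
  poly->SetPoints(points);
  poly->SetPolys(cells);
  return poly;
}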

Page 29: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Viewing

Page 30: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

CG Orientation: Objects, Projections, Clipping, Surfaces, View Plane

Page 31: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

CG Orientation: Objects, Projections, Clipping, Surfaces, View Plane

• Objects in a 3-D scene
– Defined by whatever method, e.g., toolkit, file

• Projection: mapping 3-D to 2-D
– Scene models are in 3-D space, but images are 2-D
– So need a way of projecting 3-D to 2-D

• Projection: fundamental approach:
– Define a plane in 3-D space: the view plane (or image plane, or film plane)
– Project the scene onto the plane
– Map to the window viewport
– … which is all covered in cg

• Determine what is visible – clipping

• Determine the color of each point on the view plane – shading


Page 36: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Projection: Essential Definitions (quick look)

• Projectors

• View plane (or film plane)

• Direction of projection

• Center of projection
– Eye, projection reference point

Page 37: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Viewing, Projection, and Projectors (not a new idea)

• Projection onto image plane not a new idea

• Can examine evolution of artistic representations for many examples of elements to be considered in computer graphics

• As perspective was studied by artists, they used devices to understand it

• Here, the “projector” is a piece of string!

“Unterweisung der Messung”, Albrecht Dürer. Woodcut 1525

Page 38: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

CG: Classes of Algorithms – What is tractable and what is not: Global and local illumination models

• Again, the goal of cg is to provide an image on screen good enough and fast enough to accomplish the goal

• Global illumination models
– Consider all light, reflections, etc.
– Similar conceptually to the way things work in the natural world with, e.g., the sun and the human eye
– Computationally, determine the image (on the image plane) by going from the “eye” to the illumination source (bounce, etc.)
– Image order/precision algorithm

• Local illumination models
– Consider a simpler model
– Object order/precision algorithm
– “Just” look at each object, determine if it will be visible and if so draw it
– Surely there may be millions of objects, but there are cg techniques for efficiency
• Clipping, visible surface determination (z-buffer), etc.

Page 39: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Ray Tracing: A global illumination, image-precision technique

• Image formed from all light reaching the viewer

• Ray tracing (or casting)
– Basically, “running things backwards, constrained by pixels …”
– Follow rays from the center of projection until they either are absorbed by objects or go off to infinity
– Can handle global effects
• Multiple reflections
• Translucent objects
– Slow
– Must have the whole database available at all times

• Radiosity
– Energy-based approach
– Very slow

Page 40: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Ray Tracing: A global illumination, image-precision technique

• High level algorithm:

for each pixel on screen {
    determine ray from eye through pixel
    find closest intersection of ray with an object
    cast off reflected and refracted rays, recursively
    calculate pixel color
    draw pixel
}

• Rays cast through image pixels
– Solves visibility

• Complexity:
– O(n · p), where n = objects, p = pixels
– From the above for loop; or just: at each pixel, consider all objects and find the closest point

Page 41: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Ray Tracing: A global illumination, image-precision technique

• High level algorithm:

for each pixel on screen {
    determine ray from eye through pixel
    find closest intersection of ray with an object
    cast off reflected and refracted rays, recursively
    calculate pixel color
    draw pixel
}

• Recursive algorithm:

raytrace( ray ) {
    // find closest intersection
    // cast shadow ray, calculate color_local
    color_reflect = raytrace( reflected_ray )
    color_refract = raytrace( refracted_ray )
    color = k1*color_local + k2*color_reflect + k3*color_refract
    return( color )
}

Page 42: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Object Precision: Local illumination – typically much more tractable than global

• Resolve for all possible view directions from a given eye point

– Historically, first

• Each polygon is clipped by projections of all other polygons in front of it

• Irrespective of view direction or sampling density

• Resolve visibility exactly, then sample the results

• Invisible surfaces are eliminated and visible sub-polygons are created

– e.g., variations on painter's algorithm, poly’s clipping poly’s, 3-D depth sort

Page 43: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Object Precision: Local illumination – typically much more tractable than global

• (very) High Level Algorithm

for (each object in the world) {
    1. determine parts of the object whose view is unobstructed by other parts of it or any other object (visible surface determination)
    2. draw those pixels in the appropriate color (shading)
}

• Complexity:
– O(n²), where n = number of objects
– From the above for loop; or just: “must consider all objects (visibility) interacting with all others”

Page 44: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Visible Surface Determination

Page 45: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Visible Surface Determination

• An example of the CG use of “good enough”: a surface-based technique to do things quickly enough for interactivity

• Cleverly eliminate some surfaces for consideration

• Saves time

• Examples:
– Painter’s algorithm
– Back-face culling
– (Z-buffer later – “doing it in hardware”)

Page 46: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Visible Surface Determination: Painter’s Algorithm

• To start at the beginning …
– A way to resolve visibility exactly
– Create a drawing order, each polygon overwriting the previous ones; guarantees correct visibility at any pixel resolution

• Strategy is to work back to front – see the sketch below
– Find a way to sort polygons by depth (z), then draw them in that order
– Do a rough sort of polygons by the smallest (farthest) z-coordinate in each polygon
– Draw the most distant polygon first, then work forward towards the viewpoint (“painter’s algorithm”)
– Pretty “brute force”, but it’s easy and it works – will see the z-buffer in hardware later
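A minimal C++ sketch of that back-to-front ordering (the Polygon struct, its minZ field, and drawPolygon() are hypothetical placeholders, assuming larger z means farther from the viewer):

#include <algorithm>
#include <vector>

struct Polygon {
    float minZ;   // z-coordinate of the polygon's farthest vertex
    // ... vertices, color, etc.
};

void drawPolygon(const Polygon &p);   // assumed rasterization routine

void paintersAlgorithm(std::vector<Polygon> &polys)
{
    // Rough sort: farthest polygons first (here, larger z = farther away)
    std::sort(polys.begin(), polys.end(),
              [](const Polygon &a, const Polygon &b) { return a.minZ > b.minZ; });

    // Draw back to front; nearer polygons simply overwrite farther ones
    for (const Polygon &p : polys)
        drawPolygon(p);
}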

Page 47: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Back-Face Culling: Example of a CG technique

• Back-face culling directly eliminates polygons not facing the viewer

– Don’t see those
– E.g., cube and house at right

• And there is the constraint of convex (no “inward” facing) polygons

• Computationally, can eliminate back faces by:

– Line of sight calculations
– Plane half-spaces

• In practice can be very efficient
– Surface (and vertex) normals are often stored with vertex list representations
– Normals are used both in back-face culling and in illumination/shading models

Page 48: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Back-Face Culling: Line of Sight Interpretation

• Line of Sight Interpretation

• Use outward normal (ON) of polygon to test for rejection

• LOS = Line of Sight
– The projector from the center of projection (COP) to any point P on the polygon

• If the normal is facing in the same direction as the LOS, it’s a back face:
– Use the dot product
– If LOS · ON >= 0, then the polygon is invisible – discard it
– If LOS · ON < 0, then the polygon may be visible
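A small sketch of that test in C++ (the Vec3 type and dot() helper are assumptions; LOS is taken as the vector from the COP to a point P on the polygon, ON as the outward normal):

struct Vec3 { float x, y, z; };

float dot(const Vec3 &a, const Vec3 &b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// True when the polygon faces away from the viewer and can be culled
bool isBackFace(const Vec3 &cop, const Vec3 &p, const Vec3 &outwardNormal)
{
    Vec3 los = { p.x - cop.x, p.y - cop.y, p.z - cop.z };   // line of sight, COP -> P
    return dot(los, outwardNormal) >= 0.0f;                 // LOS . ON >= 0  =>  back face
}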

Page 49: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Buffers

Page 50: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Buffers

• A “buffer” is just a piece of / place in some sort of memory

• Will consider a bit, as much of the pragmatics of graphics requires some elementary knowledge

• And many advanced techniques that are computationally tractable are available today because of the commodity availability of large amounts of memory for graphics use
– Moore’s law is good
– GBs of memory on graphics cards (and GPU computing, too)

• Frame buffer (and its depth), “scanning out an image”, double buffering, z-buffer, OpenGL buffers

Page 51: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Raster Definitions

• Raster
– A rectangular array of points or dots

• Pixel (or, arcanely, pel)
– One dot or picture element of the raster

• Scan line
– A row of pixels
– Called that historically because in a CRT an electron stream is moved across (scans) the CRT face

Page 52: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Frame Buffer – “Memory Mapped”

• The image, i.e., what is shown on the display, exists as the way memory values are set in the frame buffer
– Image “mapped” to memory (supposed to be a house and tree)
– In practice, not 0’s and 1’s

• “Frame buffer”, “refresh buffer”
– Like a “frame” of a movie
– Represents the image, what shows up on the display
– “Buffer” is a term for memory; hence, “frame buffer”

• A simple electronic process goes through (scans) this memory and controls the output device
– Video (scan) controller
– “Scan out the image”

Page 53: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

BTW - Addressability and Resolution

• Fairly straightforward mapping of memory to display

• Addressability
– Number of individual dots per inch that can be created
– May differ in horizontal and vertical

• Resolution
– Number of distinguishable lines per inch that a device can create
– Closest spacing at which adjacent black and white lines can be distinguished

• In fact, resolution usually less than addressability

– Smooths out jaggies, etc.

(Figure: raster grid addressable from (1,1) to (9,7))

Page 54: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

“Scanning out the Image”: Overview

• Again, frame buffer holds memory values representing pixel values

• Scan controller goes through memory locations

Page 55: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

FYI - Scanning out Image, Details

• 1-bit bilevel display
– Recall, the frame buffer holds a value for display (color, intensity)
– Simplest: black and white (or any 2 colors)
– A 1000 x 1000 display needs 1,000,000 bits, or 128K of memory (8 bits/byte)

• Memory access time > display time, so fetch chunks and control with registers
– 1024x1024 memory, 15 nanosec/pixel => 32-pixel chunks stored in shift registers/buffers

• Digital intensity value -> digital-to-analog conversion (DAC) -> analog signal to drive the display

Page 56: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Frame Buffer “Depth”

• 1 bit
– For 2 levels
– Display is 0-off or 1-on

• 4 or 8 bits
– For 16 or 256 levels of intensity (gray)
– Output is varied in intensity
• 0000-off … 0011-gray … 1111-bright

Page 57: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Frame Buffer “Depth”: “True Color”

• 24 bits
– “True color”
– Enough that the eye can’t distinguish more

• > 24 bits
– True color +
• Fog
• Alpha blending
• Text overlay
• …

• In general, n bits / pixel

• Total frame buffer memory required
– Addressability x depth
– E.g., 1000 x 1000 x 24 bits = 3 MB
– Also, multiple frame buffers for animation, etc.

• Currently, large amounts of graphics memory used for textures for texture mapping, multiple buffers, etc.
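As a quick check of the 3 MB figure above: 1000 × 1000 pixels × 24 bits/pixel = 24,000,000 bits = 3,000,000 bytes ≈ 3 MB; double buffering, or adding a depth buffer alongside it, scales this memory requirement up accordingly.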

Page 58: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

“Double Buffering”

• If only one frame buffer, when pixels (memory values) changed while constructing image, would see the pixels changing!

– Or at least would appear to “flash” or blink

Page 59: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

“Double Buffering”

• If only one frame buffer, when pixels (memory values) changed while constructing image, would see the pixels changing!

– Or at least would appear to “flash” or blink

• Essentially all systems have two frame buffers
– The image is drawn into one frame buffer, while the other is displayed
– Then, the other buffer is displayed
– “swapbuffer” commands
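In classic OpenGL/GLUT, for example, the idiom looks roughly like this sketch (drawScene() is a hypothetical placeholder for the application’s drawing code):

#include <GL/glut.h>

void drawScene() { /* application geometry would be issued here */ }

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawScene();          // render into the hidden back buffer
    glutSwapBuffers();    // swap: the back buffer becomes the displayed front buffer
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);   // request two color buffers
    glutCreateWindow("double buffering");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}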

Page 60: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

In fact, Lots of Buffers

• Graphics buffer depth is typically a few hundred bits and can be much more
– Color
• Front: from which the image is scanned out
• Back: the “next” image is written into it
• 2 x 32-bit color + 8-bit alpha
– Depth
• Hidden surface removal using the z-buffer algorithm
• 16 bits minimum, 32 for floating-point arithmetic
– Color indices
• Lookup tables
– Accumulation
• Blending, compositing, etc.

• The pixel at (mi, nj) is all bits of the k bit-planes

Page 61: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Z-Buffer Algorithm

Page 62: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Z-Buffer Algorithm: The z-buffer hardware

• Frame/refresh buffer:
– Recall, the screen is refreshed one scan line at a time, from pixel information held in a refresh or frame buffer

• Additional buffers can be used to store other pixel information
– E.g., double buffering for animation
• A 2nd frame buffer into which to draw an image (which takes a while)
• Then, when drawn, switch to this 2nd frame/refresh buffer and start drawing again in the 1st

• Also, a z-buffer in which z-values (depth of points on a polygon) are stored for VSD

• E.g., right: 1.0 is initialized as far, so other values are visible

Page 63: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Z-Buffer Algorithm, 2: The algorithm – brute force is good (if you have the hardware)

• Init the z-buffer to the background value
– The furthest plane of the view volume, e.g., 1.0 at right
– Say, 255 for 8-bit

• Polygons are scan-converted in arbitrary order
– When pixels overlap, use the z-buffer to decide which polygon “gets” that pixel

• If a new point has a z-value less than the previous one (i.e., closer to the eye), its z-value is placed in the z-buffer and its color placed in the frame buffer at the same (x,y)

• Otherwise the previous z-value and frame buffer color are unchanged
– Below shows numeric z-values and color to represent the first polygon’s values

• Just draw every polygon (actually, after clipping – more later)
– If a piece (one or more pixels) of a polygon is found to be closer to the front than what is there already, draw over it

Page 64: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Z-Buffer Algorithm, 3

• Polygons scan-converted in arbitrary order

• After 1st polygon scan-converted, at depth 127

• After 2nd polygon, at depth 63 – in front of some of 1st polygon

• Finally, just show on screen everything not at initialized far value

Page 65: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Z-Buffer Algorithm, 4: Pseudocode

• Algorithm again:
– Draw every polygon that can’t be trivially rejected
– “If a piece of a polygon is found closer to the front, paint over whatever was behind it”

void zBuffer() {
    // Initialize to “far”
    for ( y = 0; y < YMAX; y++ )
        for ( x = 0; x < XMAX; x++ ) {
            WritePixel (x, y, BACKGROUND_VALUE);
            WriteZ (x, y, FAR_VALUE);    // e.g., 1.0 or 255, the far plane
        }

    // Go through polygons
    for each polygon
        for each pixel in polygon’s projection {
            // pz = polygon’s z-value at pixel (x, y)
            if ( pz < ReadZ (x, y) ) {
                // New point is closer to the front of the view
                WritePixel (x, y, polygon’s color at pixel (x, y));
                WriteZ (x, y, pz);
            }
        }
}

Frame buffer holds values of polygons’ colors:

Z buffer holds z values of polygons:

Page 66: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Z-Buffer Pros

• Simplicity lends itself well to hardware implementations – fast
– Ubiquitous

• Polygons do not have to be compared in any particular order
– No presorting in z is necessary

• Only consider one polygon at a time
– … even though occlusion is a global problem, it is solved by local solutions
– Brute force, but it is fast!

• Z-buffer can be stored with an image
– Allows one to correctly and easily composite multiple images
– Without having to merge the models, which is hard
– Good for incremental addition to a complex scene

• Can be used for non-polygonal surfaces, e.g., constructive solids
– And intersect, union, difference, any z = f(x, y)

Page 67: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Z-Buffer Problems

• Can’t do anti-aliasing
– Requires knowing all polygons involved in a given pixel

• Perspective foreshortening
– Compression in the z axis caused in post-perspective space
– Objects originally far away from the camera end up having z-values that are very close to each other

• Depth information loses precision rapidly, which gives z-ordering bugs (artifacts) for distant objects
– Co-planar polygons exhibit “z-fighting” – offset the back polygon
– Floating-point values won’t completely cure this problem

Page 68: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Z – Fighting, 1

• Because of limited z-buffer precision (e.g. only 16 or 24 bits), z-values must be rounded

– Due to floating point rounding errors, z-values end up in different equivalence classes

• “Z-fighting” occurs when two primitives have similar values in the z-buffer

– Coplanar polygons (two polygons occupy the same space)

– One is arbitrarily chosen over the other

– Behavior is deterministic: the same camera position gives the same z-fighting pattern

Two intersecting cubes

Page 69: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Z – Fighting, 2

• Lack of precision in z-buffer leads to artifacts

Van Dam, 2010

Page 70: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Photorealism and Complexity

Page 71: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Photorealism and Complexity

• Recall, … examples below exhibit range of “realism”

• In general, trade off realism for speed – interactive computer graphics
– Wireframe – just the outline
– Local illumination models, polygon based
• Flat shading – same illumination value for all of each polygon
• Smooth shading (Gouraud and Phong) – different values across polygons
– Global illumination models
• E.g., raytracing – consider “all” interactions of light with objects

(Figure: Wireframe, Polygons – flat shading, Polygons – smooth shading, Ray tracing)

Page 72: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Shading: About

• Process of applying an illumination model to determine “color and intensity” at a point

• In rendering objects each point on object can have different color or shade

• Light-material interactions cause each point to have a different color or shade

• To determine color or shade, need to consider:
– Light sources
– Material properties
– Location of viewer
– Surface orientation

• Terminology
– “Lighting”: modeling light sources, surfaces, and their interaction
– “Shading”: how lighting is done with polygons

Page 73: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Rendering Equation

• Light travels …
– Light strikes A
• Some scattered, some absorbed, …
– Some of the scattered light strikes B
• Some scattered, some absorbed, …
• Some of this scattered light strikes A
• And so on …

• Infinite scattering and absorption of light can be described by the rendering equation
– Bidirectional reflection distribution function
– Cannot be solved in general
– Ray tracing is a special case for perfectly reflecting surfaces

• Rendering equation is global, includes:
– Shadows
– Multiple scattering from object to object
– … and everything

(Figure: translucent surface, shadow, multiple reflection)

Page 74: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Elements of Global Illumination: Saw this earlier, here for the challenge

• Global illumination
– Simulating what happens when other objects affect the light reaching a surface element
– E.g., ray tracing

• Lights and shadows
– Most light comes directly from light sources
– Light from a source may be blocked by other objects
• In “shadow” from that light source, so darker
• Non-global methods can’t do shadows

• Inter-object reflection
– Light strikes other objects and bounces toward the surface element
– When that light reaches the surface element from other surface elements, it brightens the surface element (indirect illumination)

• Expensive to compute
– Many objects in the scene affect the light reaching surface elements
– But necessary for some applications

(Figure: translucent surface, shadow, multiple reflection)

Page 75: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

BTW: Light Interactions with a Solid

• Indeed, there are complexities in modeling light …
– But good enough is good enough …
– Watt, “3D Computer Graphics”

Page 76: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

“Surface Elements” for Interactive CG: Tractable solution for interactive CG

• A computer graphics issue/orientation:
– Consider everything, or just “sample a scene”?

• Again, the global view considers all light coming to the viewer:
– From each point on each surface in the scene (object precision)
– Points are the smallest units of the scene
– Can think of points as having no area, or infinitesimal area
• i.e., there are an infinite number of visible points
• Of course, computationally intractable

• Alternatively, consider surface elements
– A finite number of differential pieces of surface, e.g., polygons
– Figure out how much light comes to the viewer from each of these pieces of surface
– Often, relatively few (vs. infinite) is enough

• Reduction of computational expense through use of surface elements is at core of tractable (interactive) computer graphics

Page 77: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Surface Elements and Illumination

• Tangent plane approximation for objects
– Most surfaces are curved, not flat
– A surface element is an area on that surface
• Imagine breaking the surface up into very small pieces
• Each of those pieces is still curved, but if we make the pieces small enough, we can make them arbitrarily close to being flat
– Can approximate this small area with a tiny flat area

• Surface normals
– Each surface element lies in a plane
– To describe a plane, need a point and a normal
– The area around each of these vertices is a surface element where we calculate “illumination”

• Illumination
– Light rays coming from the rest of the scene strike the surface element, and head out in different directions
– Light that goes in the direction of the viewer from that surface element (if the viewer moves, the light will change)
– This is the “illumination” of that surface element
– Will see the model for cg later

Page 78: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

In sum: Local vs. Global Rendering

• Correct shading requires a global calculation involving all objects and light sources
– Recall the “rendering equation”
• Infinite scattering and absorption of light
– Incompatible with the pipeline model, which shades each polygon independently (local rendering)

• However, in computer graphics, especially real-time graphics, we are happy if things “look right”
– There exist many techniques for approximating global effects
– Will see several

Page 79: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Light-Material Interaction

• Light that strikes an object is partially absorbed and partially scattered (reflected)

• Amount reflected determines color and brightness of object

– Surface appears red under white light because the red component of light is reflected and rest is absorbed

– Can specify both light and surface colors

• Reflected light is scattered in a manner that depends on the smoothness and orientation of surface to light source

• Specular surfaces
– For smoother surfaces, more of the reflected light is concentrated in the direction a perfect mirror would reflect the light
– Light emerges at a single angle

• Diffuse surfaces
– A rough (flat, matte) surface scatters light in all directions
– Appears the same from different viewing angles

(Figure: smooth surface vs. rough surface)

Page 80: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Light Sources

• General light sources are difficult to work with because must integrate light coming from all points on the source

• Use “simple” light sources

• Point source
– Model with position and color
– Distant source = infinite distance away (parallel rays)

• Spotlight
– Restrict light from an ideal point source

• Ambient light
– Same amount of light everywhere in the scene
– Can model the contribution of many sources and reflecting surfaces

Page 81: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Overview: Local Rendering Techniques

• Will consider
– Illumination (light) models focusing on the following elements:
• Ambient
• Diffuse
• Attenuation
• Specular reflection
– Interpolated shading models:
• Flat, Gouraud, Phong, modified/interpolated Phong (Blinn-Phong)

Page 82: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

About (Local) Polygon Mesh Shading

• Recall, any surface can be illuminated/shaded/lighted (in principle) by: 1. calculating surface normal at each visible point and 2. applying illumination model

• Where efficiency is a consideration, e.g., for interactivity (vs. photorealism), approximations are used
– Fine, because polygons themselves are an approximation
– And just as a circle can be considered as being made of “an infinite number of line segments”, so it’s all in how many polygons there are!

• Interpolation of illumination values is widely used for speed
– And can be applied using any illumination model

• Three methods – each treats a single polygon independently of others (non-global)
– Constant (flat)
– Gouraud (intensity interpolation)
– Interpolated Phong (normal-vector interpolation)

Page 83: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Flat/Constant Shading, About

• Single illumination value per polygon
– Illumination model evaluated just once for each polygon
– 1 value for the whole polygon, which is as fast as it gets!
– Like “sampling” the value of the illumination equation (at just 1 point)
– Right is flat vs. smooth (Gouraud) shading

• If the polygon mesh is an approximation to a curved surface
– The faceted look is a problem
– Also, facets are exaggerated by the Mach band effect

• For speed, can (and do) store the normal with each surface
– Or can, of course, compute it from the vertices

• But, interestingly, the approach is valid if:
– Light source is at infinity (so is constant on the polygon)
– Viewer is at infinity (so is constant on the polygon)
– Polygon represents the actual surface being modeled (is not an approximation)!

Page 84: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Flat/Constant Shading, Light Source (cg note)

• In cg lighting, often don’t account for the angle of rays

• The approach is valid if …
– Light source is at infinity (is constant on the polygon)
– Viewer is at infinity (is constant on the polygon)
– Polygon represents the actual surface being modeled (is not an approximation)

• Consider the point light sources at right
– Close to the surface: L1 <> L2 <> L3
– Farther from the surface: L1 <> L2 <> L3, but closer
– At “infinity” can consider: L1 = L2 = L3! – same for V, so both are constant on the polygon

Page 85: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

But, … Mach Banding

• Mach banding
– Exaggerated differences in perceived intensities
• At adjacent edges of differing intensities
– Non-intuitive and striking

• An “illusion” in the sense that perception is not veridical (true)

• May or may not be apparent here …

Page 86: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

But, … Mach Banding

• Mach banding
– Exaggerated differences in perceived intensities
• At adjacent edges of differing intensities
– Non-intuitive and striking
• An “illusion” in the sense that perception is not veridical (true)

• In fact, there is a physiological cause
– Actual vs. perceived intensities differ due to cellular lateral inhibition
• Sensation (the response of retinal cells) depends on how a cell’s neighbors are stimulated
• The eye’s photoreceptors respond to light according to the intensity of light falling on them minus the activation of their neighbors
– Great for human edge detection
– A challenge for computer graphics

Page 87: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Gouraud Shading, About

• Recall, for flat/constant shading, single illumination value per polygon

• Gouraud (or smooth, or interpolated intensity) shading overcomes the problem of discontinuity at edges, exacerbated by Mach banding
– “Smooths” where polygons meet
– H. Gouraud, “Continuous shading of curved surfaces,” IEEE Transactions on Computers, 20(6):623–628, 1971

• Linearly interpolate intensity along scan lines
– Eliminates intensity discontinuities at polygon edges
– Still have gradient discontinuities, so Mach banding is improved but not eliminated
– Must differentiate desired creases from tessellation artifacts (edges of a cube vs. edges on a tessellated sphere)

Page 88: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Gouraud Shading, About (cg note)

• To find illumination intensity, need the intensity of illumination and the angle of reflection
– Flat shading uses 1 angle
– Gouraud estimates … interpolates

• 1. Use polygon surface normals to calculate an “approximation” to the vertex normals
– Average of the surrounding polygons’ normals
– Since neighboring polygons sharing vertices and edges are approximations to smoothly curved surfaces, they won’t have greatly differing surface normals
• The approximation is a reasonable one

• 2. Interpolate intensity along polygon edges

• 3. Interpolate along scan lines, i.e., find:
– Ia, as an interpolated value between I1 and I2
– Ib, as an interpolated value between I1 and I3
– Ip, as an interpolated value between Ia and Ib
– Formulaically, as sketched below
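A sketch of those interpolation formulas in the standard scan-line form (y_s is the scan line, x_p the pixel position, with vertex intensities I_1, I_2, I_3; the edge from vertex 1 to 2 gives I_a, the edge from vertex 1 to 3 gives I_b):

I_a = I_1 \frac{y_s - y_2}{y_1 - y_2} + I_2 \frac{y_1 - y_s}{y_1 - y_2}

I_b = I_1 \frac{y_s - y_3}{y_1 - y_3} + I_3 \frac{y_1 - y_s}{y_1 - y_3}

I_p = I_a \frac{x_b - x_p}{x_b - x_a} + I_b \frac{x_p - x_a}{x_b - x_a}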

Page 89: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

What Gouraud Shading Misses (cg note)

• Misses specular highlights on specular objects
– Because it interpolates vertex colors instead of vertex normals
– Interpolating the normal comes closer to what the actual normal of the surface being “polygonally” approximated would be

• The illumination model that follows, and its implementation in Phong shading, does handle this

• Below: flat/constant, Gouraud/interpolated intensity, Phong

Page 90: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Illumination Model, Describing Light, 0

Page 91: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Illumination Model, Describing Light, 1 (overview)

• Will be looking at a model of illumination for cg
– How to find color and intensity at a location
– A cg model – start with physics and make it fast

• Start with light

• Units of light
– Light incident on a surface and exiting from a surface is measured in specific terms defined later
– For now, consider the ratio: (light exiting the surface toward the viewer) / (light incident on the surface from the light)

• Another way to conceptualize – quick take:
– Just not as much “light” (energy) per unit of surface area
– dA is “shorter” (less) on the left than on the right
– And this can be described quantitatively

Page 92: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Describing Light, 2 (overview)

• Factors in computing “light exiting a surface”
– Physical properties of the surface (material)
– Geometric relationship of the surface with respect to the viewer
– Geometric relationship of the surface with respect to the lights
– Light incident on the surface (color and intensity of the lights in the scene)
– Polarization, fluorescence, phosphorescence

• Difficult to define some of these inputs
– Not sure what all the categories of physical properties are, and the effect of physical properties on light is not totally understood
– Polarization of light, fluorescence, phosphorescence are difficult to keep track of
– Light exiting the surface toward the viewer
– Light incident on the surface from the lights

Page 93: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

A Simple Illumination Model (overview)

• Following is … one of the first illumination models that “looked good” and could be calculated efficiently
– A simple, non-physical, non-global illumination model
– Describes some observable reflection characteristics of surfaces
– Came out of work done at the University of Utah in the early 1970’s
– Still used today, as it is easy to do in software and can be optimized in hardware

• Later, will put it all together with normal interpolation

• Components of a simple model
– Reflection characteristics of surfaces
• Diffuse reflection
• Ambient reflection
• Specular reflection

• The model is not physically based, and does not attempt to accurately calculate global illumination
– It does attempt to simulate some of the important observable effects of common light interactions
– It can be computed quickly and efficiently

Page 94: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Illumination Model: Considers Diffuse, Ambient, Specular …

• Each point of final image is sum of three values, with attenuation for distance from light source

Wikipedia: Phong shading, 9/09

Page 95: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Reflection Characteristics of Surfaces, Diffuse Reflection (1/7)

• Diffuse reflection
– Diffuse (Lambertian) reflection
• Typical of dull, matte surfaces, e.g., carpet, chalk, plastic
• Independent of viewer position
• Dependent on light source position
– (In this case a point source, again a non-physical abstraction)

• Vectors L and N used to determine reflection

– Value from Lambert’s cosine law … next slide

(Figure: rough surface)

Page 96: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Reflection Characteristics of Surfaces, Lambert’s Law (2/7)

• Lambert’s cosine law:
– Specifies how much energy/light reflects toward some point
– A computational form is used in the equation for the illumination model

• Now, have intensity (I) calculated from:
– Intensity from the point source
– Diffuse reflection coefficient (arbitrary!)
– With cos θ calculated using the normalized vectors N and L, for computational efficiency

• Again, as sketched below
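A sketch of that computational form in the usual notation (I_p is the point-source intensity, k_d the diffuse reflection coefficient, N and L unit vectors to the normal and the light):

I_{diffuse} = I_p \, k_d \cos\theta = I_p \, k_d \, (\vec{N} \cdot \vec{L})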

Page 97: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Reflection Characteristics of Surfaces, Energy Density Falloff (3/7)

• Less light as things are farther away from the light source
– Illumination attenuation with distance

• Reflection – energy density falloff
– Should also model the inverse square law energy density falloff
– However, this makes otherwise equal surfaces differ in appearance – important if two surfaces overlap

• The inverse-square formula often creates harsh effects
– We do not often see objects illuminated by point lights
– Can instead use the heuristic formula sketched below
– Experimentally defined constants – a heuristic
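A sketch of the commonly used heuristic attenuation factor (c_1, c_2, c_3 are the experimentally defined constants, d_L the distance to the light), which then multiplies the diffuse term:

f_{att} = \min\!\left(\frac{1}{c_1 + c_2 d_L + c_3 d_L^{2}},\; 1\right), \qquad I = f_{att}\, I_p\, k_d\, (\vec{N} \cdot \vec{L})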

Page 98: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Reflection Characteristics of Surfaces, Ambient Reflection (4/7)

• Ambient Reflection

• Diffuse surfaces reflect light

• Some light goes to the eye, some goes to the scene
– Light bounces off other objects and eventually reaches this surface element
– This is expensive to keep track of accurately
– Instead, we use another heuristic

• Ambient reflection
– Independent of object and viewer position
– Again, a constant – “experimentally determined”
– Exists in most environments
• Some light hits the surface from all directions
• Approximates indirect lighting/global illumination
– A total convenience, but images without some form of ambient lighting look stark; they have too much contrast
– Light intensity = ambient + attenuation * diffuse

Page 99: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Reflection Characteristics of Surfaces, Color (5/7)

• Colored Lights and Surfaces

• Write a separate equation for each component of the color model
– Lambda (λ) – wavelength
– Represent an object’s diffuse color by one coefficient value for each component, e.g., in RGB
– Components of the incident light are reflected in proportion to those coefficients, e.g., for the red component

• Wavelength-dependent equation
– Evaluating the illumination equation at only 3 points in the spectrum is wrong, but often yields acceptable pictures
– To avoid restricting ourselves to one color sampling space, indicate wavelength dependence with λ

Page 100: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Reflection Characteristics of Surfaces, Specular Reflection (6/7)

• Specular Reflection

• Directed reflection from shiny surfaces
– Typical of bright, shiny surfaces, e.g., metals
– Color depends on the material and how it scatters light energy
• In plastics: color of the point source
• In metals: color of the metal
• In others: combine the color of the light and the material color
– Depends on light source and viewer position
• E.g., as the view moves, the place where the object appears “shiny” moves

Page 101: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Reflection Characteristics of Surfaces, Specular Reflection (7a/7)

• Phong Approximation– Again, non-physical, but works

• Deals with differential “glossiness” in a computationally efficient manner

• Below shows increasing n, left to right

Page 102: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Reflection Characteristics of Surfaces, Specular Reflection (7b/7)

• Yet again, constant, k, for specular component

• Vectors R and V express viewing angle and so amount of illumination

• n is exponent to which viewing angle raised

– Measure of how “tight”/small specular highlight is

Page 103: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Putting it all together: A Simple Illumination Model

• Non-Physical Lighting Equation– Energy from a single light

reflected by a single surface element

• For multiple point lights– simply sum contributions

• An easy-to-evaluate equation that gives useful results

– It is used in most graphics systems,

• but it has no basis in theory and does not model reflections correctly!
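
As a single hedged sketch of this non-physical equation, reusing the Vec3, dot, and attenuation helpers sketched earlier; the light structure and parameter names are assumptions for illustration, and one color channel is shown:

#include <algorithm>
#include <cmath>
#include <vector>

// Direction to the light (unit vector), its intensity, and its distance.
struct PointLight { Vec3 L; double Ip; double d; };

// I = Ia*ka + sum over lights of f_att * Ip * ( kd*(N.L) + ks*(R.V)^n )
double illuminate(double Ia, double ka, double kd, double ks, double n,
                  const Vec3 &N, const Vec3 &V,
                  const std::vector<PointLight> &lights) {
  double I = Ia * ka;                                  // ambient term
  for (const PointLight &light : lights) {
    double NdotL = dot(N, light.L);
    if (NdotL <= 0.0) continue;                        // light is behind the surface
    Vec3 R = { 2.0 * NdotL * N.x - light.L.x,          // reflection vector R = 2(N.L)N - L
               2.0 * NdotL * N.y - light.L.y,
               2.0 * NdotL * N.z - light.L.z };
    double spec = std::pow(std::max(0.0, dot(R, V)), n);
    I += attenuation(light.d) * light.Ip * (kd * NdotL + ks * spec);
  }
  return I;
}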

Page 104: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Phong (Interpolated Vector) Shading/Model

• Calculating normal at each point is computationally expensive

• Can interpolate normal– Just as intensities are interpolated between vertices with Gouraud shading

• Interpolated Vector Model:– Rather than recalculate the normal at each

step, interpolate the normal for the calculation

– Much more computationally efficient

• Bui Tuong Phong, "Illumination for Computer Generated Images," Comm. ACM, Vol 18(6):311-317, June 1975

Page 105: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Phong Shading

• Normal vector interpolation– interpolate N rather than I– especially important with specular

reflection– computationally expensive at each

pixel to recompute• must normalize, requiring

expensive square root

• Looks much better than Gouraud and done in “mid-range” hardware

• Bui Tuong Phong, "Illumination for Computer Generated Images," Comm. ACM, Vol 18(6):311-317, June 1975
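
A small sketch (an implementation assumption, not code from the slides) of the per-pixel step that distinguishes Phong from Gouraud shading: the normal itself is interpolated and then renormalized, which is where the expensive square root appears. It reuses the Vec3 type from the earlier sketches:

#include <cmath>

// Interpolate vertex normals Na and Nb at parameter t in [0,1], then
// renormalize; the unit result feeds the illumination equation at each pixel.
Vec3 interpolateNormal(const Vec3 &Na, const Vec3 &Nb, double t) {
  Vec3 N = { Na.x + t * (Nb.x - Na.x),
             Na.y + t * (Nb.y - Na.y),
             Na.z + t * (Nb.z - Na.z) };
  double len = std::sqrt(N.x * N.x + N.y * N.y + N.z * N.z);  // the costly sqrt
  if (len > 0.0) { N.x /= len; N.y /= len; N.z /= len; }
  return N;
}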

Page 106: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Demo Program

• Calculates Phong shading at each point on sphere

Page 107: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Demos

• Demos:– http://www.cs.unc.edu/~clark/courses/comp14-spr04/code/SphereLightApplet.html– http://www.cs.auckland.ac.nz/~richard/research-topics/PhongApplet/• Select the html to execute

Page 108: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Finally, Creating Surface Detail – Fast Texture Mapping

• Texture mapping (increasingly in h/w)

– “Paste” a photograph or painted bitmap on a surface to provide detail

• (e.g. brick pattern, sky with clouds, etc.)

– Consider a map as wrapping paper, but made of rubber

– Map pixel array texture/pattern map onto surface to replace (or modify) original color

• Used extensively – Photorealism with pictures,

instead of models, polygons, ray-tracing, …

– Moore’s law is good– Allows GB texture memories

Page 109: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Camera Models

Page 110: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Recall, (Several) Coordinate Systems

• Model/object …– local coordinate system

• World …– Where the models are placed

• View …– Logical Image Plane

• Display (image plane)– X,Y Pixel locations

• In cg and VTK several coordinate systems come into play:

Page 111: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Camera Models

• Positioning a (metaphorical) camera to view a scene– “place (position) the camera and aim it”– Or, positioning view point and specifying viewing direction

• Natural to position camera in world space as if real camera1. Identify the eye point where the camera is located 2. Identify the look-at point that we wish to appear in the center of our view3. Identify an up-vector vector oriented upwards in our final image

• Specify camera configuration – different ways for diff graphics systems
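
In VTK these three choices map directly onto vtkCamera calls; a minimal sketch with placeholder coordinate values:

#include "vtkCamera.h"

vtkCamera *camera = vtkCamera::New();
camera->SetPosition(0.0, 1.0, 5.0);    // 1. eye point: where the camera sits
camera->SetFocalPoint(0.0, 0.0, 0.0);  // 2. look-at point: appears at the center of the view
camera->SetViewUp(0.0, 1.0, 0.0);      // 3. up-vector: which way is "up" in the image
// ren1->SetActiveCamera(camera);      // attach to a renderer such as ren1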

Page 112: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

VTK Camera – Quick Look …

• Camera movements around focal point• camera.tcl

– Pointing toward focal point, white– Elevation, green– Azimuth, orange– Roll, yellow– Direction of projection, purple arrow

• Camera movements centered at camera position

– Pointing toward focal point, white– View up, blue– Pitch (vs. elevation), green– Yaw (vs. azimuth), orange– Roll, yellow– View plane normal (vs. direction of

projection), purple arrow

Page 113: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

3D Viewing: The Synthetic Camera (cg detail)

• Programmer's reference model – for specifying 3D view projection parameters to the computer

• General synthetic camera: – position of camera – orientation – field of view (wide angle, normal...) – depth of field (near distance, far distance) – focal distance – tilt of view/film plane

• (if not normal to view direction, produces oblique projections) – perspective or parallel projection?

• (camera near objects or an infinite distance away)

• Will use a simpler, slightly less powerful model:

Page 114: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

View Volumes

• A view volume– contains everything visible from

the point of view or direction: – i.e., What does the camera see?

• Conical view volumes – Expensive math (simultaneous

quadratics) when clipping objects against cone's surface

• Rectangular cone – Can approximate conical with

rectangular “cone” (called a frustum)

– Works well with a rectangular viewing window

– Simultaneous linear equations for easy clipping

Page 115: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Synthetic Camera Parameters(overview)

Need to know six things about the synthetic camera model

– in order to “take a picture” (create an image)

1. Position of the camera – Location (x, y, z) from where it is looking

2. Look vector – specifies in what direction the camera is

pointing

3. Orientation of camera – consists of: the direction the camera is pointing and the angle the camera is rotated about that look vector, i.e., the direction of the Up vector

4. Aspect ratio of the electronic “film” – ratio of width to height

5. View angle. –Determines amount of perspective distortion: the bigger the angle, the wider the field of view, the more distortion

6. Front and back clipping planes.–Limit extent of camera's view by rendering (parts of) objects lying between them, throwing away everything outside of them

Optional parameter: Focal length. –Often used for photorealistic rendering. Objects at distance focal length from camera rendered in sharp detail. Objects closer or farther away get blurred. Reduction in visibility is continuous.
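
Several of these parameters also map onto vtkCamera calls that appear in the function lists later in these slides; continuing the camera sketch above, with placeholder values:

camera->SetViewAngle(30.0);            // 5. view angle, in degrees
camera->SetClippingRange(0.1, 100.0);  // 6. front (near) and back (far) clipping planes
// 4. the aspect ratio follows from the window/viewport size, e.g.
// renWindow1->SetSize(800, 400);      // a 2:1 window split into two 1:1 viewports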

Page 116: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Position

• Position is analogous to a photographer’s vantage point from which to shoot a photo– Three degrees of freedom: x,

y, and z coordinates in 3 space

Page 117: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Orientation, 1

• Orientation is specified by:

– 1. a point in 3D space to look at (or a look vector), and – 2. an up vector, which is used to

define the angle of rotation about the look vector

• Default orientation is looking down the negative z axis, with the up direction pointing straight up along the y axis

Page 118: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Orientation, Look and Up Vectors

• More concrete way to say same thing as orientation

• Look Vector – the direction the camera is looking – 3 degrees of freedom;– can be any vector in 3 space

• Up Vector – determines how camera is rotated around look vector – E.g., whether holding camera horizontally or vertically

(or in between) – projection of up vector must be in the plane

perpendicular to the look vector

Page 119: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

View Volumes

• A view volume– contains everything visible from the

point of view or direction: – i.e., What does the camera see?

• Conical view volumes – What humans, etc., have– Expensive math (simultaneous

quadratics) when clipping objects against cone's surface

• Rectangular cone – Can approximate conical with

rectangular “cone” (called a frustum)

– Works well with a rectangular viewing window

– Simultaneous linear equations for easy clipping

Page 120: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Front and Back Clipping Planes, 1

• Volume of space between front (near) and back (far) clipping planes defines what camera can see

• Position of planes defined by distance along Look Vector

• Objects appearing outside of view volume don't get drawn

• Objects intersecting view volume get clipped

Page 121: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Front and Back Clipping Planes, 2

• Reasons for Front (near) Clipping Plane:

– Don't want to draw things too close to camera • would block view of rest of scene • objects would be prone to distortion

– Don't want to draw things behind camera. • wouldn't expect to see things behind the camera

– Cg note: in the case of the perspective camera, if we decided to draw things behind the camera, they would appear upside down and inside out because of perspective transformation

Page 122: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Front and Back Clipping Planes, 3

• Reasons for Back (far) Clipping Plane: – Don't want to draw objects too far away from camera

• distant objects may appear too small to be visually significant, but still take long time to render

• by discarding them we lose a small amount of detail but reclaim a lot of rendering time

• alternately, the scene may be filled with many significant objects; for visual clarity, we may wish to declutter the scene by rendering those nearest the camera and discarding the rest

Page 123: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Aspect Ratio

• Analogous to the size of film used in a camera

• Proportion of width to height

• Determines aspect ratio of image displayed on screen – Square viewing window has aspect ratio of 1:1 – Movie theater ``letterbox'' format has aspect ratio of 2:1 – NTSC television has an aspect ratio of 4:3, and HDTV is 16:9

Page 124: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Transformations

Page 125: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Recall, (Several) Coordinate Systems

• Model/object …– local coordinate system

• World …– Where the models are placed

• View …– Logical Image Plane

• Display (image plane)– X,Y Pixel locations

• In cg and VTK several coordinate systems come into play:

Page 126: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Transformations

• Why are they important to graphics?– moving objects on screen / in space– mapping from model space to world space to camera

space to screen space– specifying parent/child relationships– … changes between coordinate systems

• “Changing something to something else via rules”– mathematics: mapping between values in a domain set

and a range set (function/relation)– geometric: translate, rotate, scale, shear, …

• Transformation: Maps an object into another object– In general, a transformation maps every point on an

object to another point in the underlying coordinate space.

– Change size, location, orientation of objects without changing underlying model (or primitive drawing commands)

– Animation, Instancing

v=T(u)

Page 127: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Good Transformations in CG

• Line Preserving– Because, if not ….

(Figure: endpoints P1, P2 and their images P1’, P2’, illustrating a transformation in which the line is not preserved.)

Page 128: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Transformation Objectives

• Will look at less “mathematically” and revisit matrix operations

• Introduce standard transformations– Scaling– Translation– Rotation– Shear

• Derive homogeneous coordinate transformation matrices

• Learn to build arbitrary transformation matrices from simple transformations

Page 129: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Scaling

• Scale - Changing the size of an object • Scale object by scaling x and y coordinates of each vertex in object

• Can have different scale values for scaling x and y: sx, sy

• So, for all points, x,y , x’ = x * sx, y’ = y * sy

Page 130: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Scaling Example

• Below, sx = sy = 2

– x’ = x * sx, y’ = y * sy

– For all pts (x,y), x’ = x * 2, y’=y * 2

(Figure: a square with corners (1,1) and (2,2), scaled by 2 to a square with corners (2,2) and (4,4).)

Page 131: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Scaling Example – As Matrix

• Below, sx = sy = 2– x’ = x * sx, y’ = y * sy– For all pts (x,y), x’ = x * 2, y’=y * 2

• Represent points as vectors and the transformation as a matrix, and multiply– Allows many things … efficiency

(Figure: the square with corners (1,1) and (2,2), scaled by 2 to a square with corners (2,2) and (4,4).)

P1 = | 1 |   P2 = | 2 |
     | 1 |        | 2 |

S = | sx  0  |
    | 0   sy |

Page 132: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Scaling Example – Using Matrix

• Represent points, P1 and P2, as vectors and scaling values, sx, sy as matrix– Then multiply each point by scale matrix… (“over and down”) to transform

(Figure: the square with corners (1,1) and (2,2), scaled by 2 to a square with corners (2,2) and (4,4).)

P1’ = | 2  0 | | 1 | = | (2*1)+(0*1) | = | 2 |
      | 0  2 | | 1 |   | (0*1)+(2*1) |   | 2 |

P2’ = | 2  0 | | 2 | = | (2*2)+(0*2) | = | 4 |
      | 0  2 | | 2 |   | (0*2)+(2*2) |   | 4 |

Page 133: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Matrix Stuff

• Premultiplying by a row vector is the same as postmultiplying by a column vector– Will see both ways in graphics, but former is more frequent

• Identity matrix– The matrix which, when another matrix is multiplied by it, leaves

that matrix unchanged

[1 1] | 2  0 | = [ (1*2)+(1*0)  (1*0)+(1*2) ] = [2 2]
      | 0  2 |

| 2  0 | | 1 | = | (2*1)+(0*1) | = | 2 |
| 0  2 | | 1 |   | (0*1)+(2*1) |   | 2 |

[3 1 2] | 1  0  0 | = [ (3*1)+(1*0)+(2*0)  (3*0)+(1*1)+(2*0)  (3*0)+(1*0)+(2*1) ] = [3 1 2]
        | 0  1  0 |
        | 0  0  1 |

Page 134: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Translation

• Translate - Change the position of an object • Translate object by translating x and y coordinates of each vertex in

object

• Can have different translation values for translating in x and y: tx, ty

• So, for all points, x,y , x’ = x + tx, y’ = y + ty – Same song, second verse

Page 135: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Translation - Matrix, the problem

• Again, want x’ = x + tx, y’ = y + ty

• Consider how a matrix might be used:– Unlike scaling, above is just adding, not multiplying– Have seen that matrix multiplication is a series of repeated adds and multiplies– There is a matrix that, when used in multiplication, returns original values– Just need to get original values returned, but with something (tx, ty) added

[2 2] | 1  0 | = [ (2*1)+(2*0)+(___)  (2*0)+(2*1)+(___) ]
      | 0  1 |               add Tx?             add Ty?

[2 2 1] | 1  0  0 | = [ (2*1)+(2*0)+(1*0)  (2*0)+(2*1)+(1*0)  (2*0)+(2*0)+(1*1) ]
        | 0  1  0 |
        | 0  0  1 |
How about Tx? How about Ty?

Page 136: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Translation - Matrix, the solution

• Again, want x’ = x + tx, y’ = y + ty

[2 2 1] | 1  0  0 | = [ (2*1)+(2*0)+(1*0)  (2*0)+(2*1)+(1*0)  (2*0)+(2*0)+(1*1) ] = [2 2 1]
        | 0  1  0 |
        | 0  0  1 |
How about Tx? How about Ty?

[x y 1] | 1   0   0 | = [ (x*1)+(y*0)+(1*Tx)  (x*0)+(y*1)+(1*Ty)  (x*0)+(y*0)+(1*1) ] = [ x+Tx  y+Ty  1 ]
        | 0   1   0 |
        | Tx  Ty  1 |

• So, that is what we want!• But, what about the extra value in the position vector?• No problem – homogeneous coordinates and mathematics
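
A tiny illustrative sketch of that row-vector form: the point is carried as [x y 1] and multiplied by a 3x3 matrix whose bottom row holds Tx, Ty, 1 (names are placeholders):

// Multiply the homogeneous row vector [x y 1] by a 3x3 translation matrix;
// the extra 1 picks up the Tx, Ty entries in the bottom row.
void translate2D(double &x, double &y, double Tx, double Ty) {
  double M[3][3] = { { 1,  0,  0 },
                     { 0,  1,  0 },
                     { Tx, Ty, 1 } };
  double p[3] = { x, y, 1 };
  double r[3] = { 0, 0, 0 };
  for (int col = 0; col < 3; ++col)
    for (int k = 0; k < 3; ++k)
      r[col] += p[k] * M[k][col];   // row vector times matrix
  x = r[0];   // x + Tx
  y = r[1];   // y + Ty
}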

Page 137: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Translation Matrices

• VTK, and all systems employing graphics, use matrices and allow programmer access to them

• That’s what the cow example is about ….

Page 138: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Translation Matrix

• Translation using a 4 x 4 matrix T in homogeneous coordinates– p’=Tp where

– T = T(dx, dy, dz) =

• This form is better for implementation because all affine transformations can be expressed this way and multiple transformations can be concatenated together

| 1  0  0  dx |
| 0  1  0  dy |
| 0  0  1  dz |
| 0  0  0  1  |
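
This 4x4 homogeneous form is what vtkTransform builds internally; a minimal sketch (the translation values and the commented actor call are illustrative assumptions):

#include <iostream>
#include "vtkTransform.h"

vtkTransform *xform = vtkTransform::New();
xform->Translate(2.0, 0.0, 0.0);        // builds T(dx, dy, dz)
xform->GetMatrix()->Print(std::cout);   // inspect the resulting 4x4 matrix
// e.g., coneActor->SetUserTransform(xform);  // apply it to an actor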

Page 139: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Rotation (2D), Briefly

• Consider rotation about the origin by θ (in radians)– radius stays the same, angle increases by θ

x’ = x cos θ - y sin θ,  y’ = x sin θ + y cos θ

x = r cos φ,  y = r sin φ

x’ = r cos (φ + θ),  y’ = r sin (φ + θ)

Page 140: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Rotation about the z Axis

• Rotation about z axis in three dimensions leaves all points with the same z

– Equivalent to rotation in two dimensions in planes of constant z

– or in homogeneous coordinates

p’ = Rz(θ) p

x’ = x cos θ - y sin θ,  y’ = x sin θ + y cos θ,  z’ = z

Page 141: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Rotation Matrix

R = Rz(θ) =

| cos θ  -sin θ  0  0 |
| sin θ   cos θ  0  0 |
|   0       0    1  0 |
|   0       0    0  1 |

Page 142: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Rotation about x and y axes

• Same argument as for rotation about z axis– For rotation about x axis, x is unchanged– For rotation about y axis, y is unchanged

R = Rx(θ) =

| 1    0       0     0 |
| 0  cos θ  -sin θ   0 |
| 0  sin θ   cos θ   0 |
| 0    0       0     1 |

R = Ry(θ) =

|  cos θ  0  sin θ  0 |
|    0    1    0    0 |
| -sin θ  0  cos θ  0 |
|    0    0    0    1 |

Page 143: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Scaling

S = S(sx, sy, sz) =

| sx  0   0   0 |
| 0   sy  0   0 |
| 0   0   sz  0 |
| 0   0   0   1 |

x’ = sx x,  y’ = sy y,  z’ = sz z

p’ = Sp

• Expand or contract along each axis (fixed point of origin)

Page 144: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Reflection

• Corresponds to negative scale factors

(Figure: the original object and its reflections for sx = -1, sy = 1; sx = -1, sy = -1; and sx = 1, sy = -1.)

Page 145: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Shear

• Helpful to add one more basic transformation• Equivalent to pulling faces in opposite directions

Page 146: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Shear Matrix

Consider simple shear along x axis

x’ = x + y cot θ,  y’ = y,  z’ = z

H(θ) =

| 1  cot θ  0  0 |
| 0    1    0  0 |
| 0    0    1  0 |
| 0    0    0  1 |

Page 147: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Concatenation and Efficiency

• We can form arbitrary affine transformation matrices by multiplying together rotation, translation, and scaling matrices

• Because the same transformation is applied to many vertices, – cost of forming a matrix M=ABCD is not significant compared to

the cost of computing Mp for many vertices p

• The difficult part is how to form a desired transformation from the specifications in the application

Page 148: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Rotation About a Fixed Point other than the Origin

- Note that this is not necessarily intuitive!- Move fixed point to origin- Rotate- Move fixed point back

- M = T(pf) R(θ) T(-pf)
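
A hedged vtkTransform sketch of the same composition (the fixed point and angle are placeholders). In vtkTransform's default pre-multiply mode each new call is applied to points before the calls already issued, so the three calls below concatenate to M = T(pf) R(θ) T(-pf):

#include "vtkTransform.h"

double pf[3] = { 1.0, 2.0, 0.0 };                // fixed point (placeholder values)
vtkTransform *rotAboutPf = vtkTransform::New();
rotAboutPf->Translate(pf[0], pf[1], pf[2]);      // move fixed point back (applied last to points)
rotAboutPf->RotateZ(45.0);                       // rotate about z, in degrees
rotAboutPf->Translate(-pf[0], -pf[1], -pf[2]);   // move fixed point to origin (applied first to points)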

Page 149: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Stereoscopic Viewing

Page 150: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Stereoscopic Viewing

• Compelling effect!– But, ergonomics challenges to apply– “promise of the future … always has been always will be” ???

• Left and right eyes get different images

• Change in eye separation with depth

• But, only perceive one image– Perceptual processes integrate different views from the two eyes

• E.g., LCD shutter glasses– >120 Hz monitor refresh– Different images on odd and even refreshes– LCD “shutters” open and close for left and right eyes

• (used to be metal shutters!)

Page 151: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

VTK

Page 152: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

VTK

• Visualization Tool Kit– vtk.org– Large user community – most

widely used public domain visualization suite

• Lots of functionality … we’ll do enough in class to get you started

• Extensible and object oriented

• Tonight:– Camera (no surprises but lots of

functionality)– Terminology and architecture

through example

Page 153: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

VTK Camera

• Camera movements around focal point

– camera.tcl– Pointing toward focal point, white– Elevation, green– Azimuth, orange– Roll, yellow– Direction of projection, purple arrow

• Camera movements centered at camera position

– Pointing toward focal point, white– View up, blue– Pitch (vs. elevation), green– Yaw (vs. azimuth), orange– Roll, yellow– View plane normal (vs. direction of

projection), purple arrow

Page 154: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

VTK Camera, Some Functions(gee whiz)

• D:\README.html– Doxygen-generated manual pages. For Windows users, a compressed help file is available.

• void Dolly (double distance)
• void Roll (double angle)
• void Azimuth (double angle)
• void Yaw (double angle)
• void Elevation (double angle)
• void Pitch (double angle)
• void Zoom (double factor)
• virtual void GetFrustumPlanes (double aspect, double planes[24])
• void SetPosition (double x, double y, double z)
• void SetFocalPoint (double x, double y, double z)
• void SetViewUp (double vx, double vy, double vz)
• virtual double * GetDirectionOfProjection ()
• void SetViewAngle (double angle)
• void SetWindowCenter (double x, double y)
• void SetViewPlaneNormal (double x, double y, double z)
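
For example, the relative-movement methods above can orbit the camera about its focal point and move it closer; a sketch, given a vtkCamera* camera as in the earlier example (angles and factor are placeholders):

camera->Azimuth(30.0);          // orbit 30 degrees about the view-up axis through the focal point
camera->Elevation(15.0);        // orbit 15 degrees "above" the focal point
camera->OrthogonalizeViewUp();  // keep the view-up vector valid after the elevation
camera->Dolly(1.2);             // divide the distance to the focal point by 1.2 (move closer)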

Page 155: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

VTK Camera, more functions(gee whiz, again)

• void PrintSelf (ostream &os, vtkIndent indent)
• virtual const char * GetClassName ()
• virtual int IsA (const char *type)
• void OrthogonalizeViewUp ()
• void SetObliqueAngles (double alpha, double beta)
• void ApplyTransform (vtkTransform *t)
• virtual vtkMatrix4x4 * GetViewTransformMatrix ()
• virtual void Render (vtkRenderer *)
• unsigned long GetViewingRaysMTime ()
• void ViewingRaysModified ()
• void ComputeViewPlaneNormal ()
• vtkMatrix4x4 * GetCameraLightTransformMatrix ()
• virtual void UpdateViewport (vtkRenderer *vtkNotUsed(ren))
• virtual vtkTransform * GetViewTransformObject ()
• void SetPosition (const double a[3])
• virtual double * GetPosition ()
• virtual void GetPosition (double &, double &, double &)
• virtual void GetPosition (double[3])
• void SetFocalPoint (const double a[3])
• virtual double * GetFocalPoint ()
• virtual void GetFocalPoint (double &, double &, double &)
• virtual void GetFocalPoint (double[3])
• void SetViewUp (const double a[3])
• virtual double * GetViewUp ()
• virtual void GetViewUp (double &, double &, double &)
• virtual void GetViewUp (double[3])
• void SetDistance (double)
• virtual double GetDistance ()
• virtual void GetDirectionOfProjection (double &, double &, double &)
• virtual void GetDirectionOfProjection (double[3])

Page 156: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

VTK Camera, yet more functions(… and again)

• void SetRoll (double angle)
• double GetRoll ()
• void SetParallelProjection (int flag)
• virtual int GetParallelProjection ()
• virtual void ParallelProjectionOn ()
• virtual void ParallelProjectionOff ()
• void SetUseHorizontalViewAngle (int flag)
• virtual int GetUseHorizontalViewAngle ()
• virtual void UseHorizontalViewAngleOn ()
• virtual void UseHorizontalViewAngleOff ()
• virtual double GetViewAngle ()
• void SetParallelScale (double scale)
• virtual double GetParallelScale ()
• void SetClippingRange (double near, double far)
• void SetClippingRange (const double a[2])
• virtual double * GetClippingRange ()
• virtual void GetClippingRange (double &, double &)
• virtual void GetClippingRange (double[2])
• void SetThickness (double)
• virtual double GetThickness ()
• virtual double * GetWindowCenter ()
• virtual void GetWindowCenter (double &, double &)
• virtual void GetWindowCenter (double[2])
• virtual double * GetViewPlaneNormal ()
• virtual void GetViewPlaneNormal (double &, double &, double &)
• virtual void GetViewPlaneNormal (double[3])
• void SetViewShear (double dxdz, double dydz, double center)
• void SetViewShear (double d[3])
• virtual double * GetViewShear ()
• virtual void GetViewShear (double &, double &, double &)
• virtual void GetViewShear (double[3])
• virtual void SetEyeAngle (double)
• virtual double GetEyeAngle ()
• virtual void SetFocalDisk (double)
• virtual double GetFocalDisk ()
• virtual vtkMatrix4x4 * GetPerspectiveTransformMatrix (double aspect, double nearz, double farz)
• virtual vtkMatrix4x4 * GetCompositePerspectiveTransformMatrix (double aspect, double nearz, double farz)
• void SetUserTransform (vtkHomogeneousTransform *transform)
• virtual vtkHomogeneousTransform * GetUserTransform ()
• double * GetOrientation ()
• double * GetOrientationWXYZ ()
• void SetViewPlaneNormal (const double a[3])

Page 157: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

VTK Architecture - example

• Maybe not the best 1st example, but …

– A really quick sampling of what can be done

• “Object-oriented”– Instance, inheritance, subclass, …

• Will step through the source code to produce screens at right

• If you’re new to cg, etc., … hang in there

– “learning by immersion”– Bear with the oo jargon

Instances of vtkRenderWindow

Instances of vtkRenderer

Instances of vtkActor

vtkMapper defines actor geometry

vtkProperty defines actor surface properties

One or more vtkLights illuminate the scene

Page 158: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Model.cxx – Overview – 1/5

#include - vtk stuff

main (){ // create rendering windows and three renderers // create an actor and give it cone geometry // create an actor and give it cube geometry // create an actor and give it sphere geometry // assign our actor to both renderers // set the size of our window

// set the viewports and background of the renderers // draw the resulting scene // Clean up

One vtkCamera defines the view for each renderer

Instances of vtkRenderWindow

Instances of vtkRenderer

Instances of vtkActor

vtkMapper defines actor geometry

vtkProperty defines actor surface properties

One or more vtkLights illuminate the scene

Page 159: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Model.cxx – 2a/5

#include - vtk stuff

main (){ // create rendering windows and three renderers // create an actor and give it cone geometry // create an actor and give it cube geometry // create an actor and give it sphere geometry // assign our actor to both renderers // set the size of our window

// set the viewports and background of the renderers // draw the resulting scene // Clean up

#include "vtkRenderer.h"#include "vtkRenderWindow.h"#include "vtkRenderWindowInteractor.h"#include "vtkConeSource.h"#include "vtkPolyDataMapper.h"#include "vtkActor.h"#include "vtkCubeSource.h"#include "vtkSphereSource.h"#include "vtkProperty.h"

main (){ // create rendering windows and three renderers vtkRenderer *ren1 = vtkRenderer::New(); vtkRenderer *ren2 = vtkRenderer::New(); vtkRenderWindow *renWindow1 = vtkRenderWindow::New(); renWindow1->AddRenderer(ren1); renWindow1->AddRenderer(ren2); vtkRenderWindowInteractor *iren1 = vtkRenderWindowInteractor::New(); iren1->SetRenderWindow(renWindow1);


Page 160: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Model.cxx – 2b/5

#include - vtk stuff

main (){ // create rendering windows and three renderers (just 1 shown) // create an actor and give it cone geometry // create an actor and give it cube geometry // create an actor and give it sphere geometry // assign our actor to both renderers // set the size of our window

// set the viewports and background of the renderers // draw the resulting scene // Clean up

int main ()
{
  // create rendering windows and three renderers
  vtkRenderer *ren1 = vtkRenderer::New();
  vtkRenderer *ren2 = vtkRenderer::New();
  vtkRenderWindow *renWindow1 = vtkRenderWindow::New();
  renWindow1->AddRenderer(ren1);
  renWindow1->AddRenderer(ren2);
  vtkRenderWindowInteractor *iren1 = vtkRenderWindowInteractor::New();
  iren1->SetRenderWindow(renWindow1);


Page 161: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Model.cxx – 3/5

#include - vtk stuff

main (){ // create rendering windows and three renderers // create an actor and give it cone geometry // create an actor and give it cube geometry // create an actor and give it sphere geometry // assign our actor to both renderers // set the size of our window

// set the viewports and background of the renderers // draw the resulting scene // Clean up

  // create an actor and give it cone geometry
  vtkConeSource *cone = vtkConeSource::New();
  cone->SetResolution(8);
  vtkPolyDataMapper *coneMapper = vtkPolyDataMapper::New();
  coneMapper->SetInput(cone->GetOutput());
  vtkActor *coneActor = vtkActor::New();
  coneActor->SetMapper(coneMapper);
  coneActor->GetProperty()->SetColor(0.2000,0.6300,0.7900);

  // create an actor and give it cube geometry
  vtkCubeSource *cube = vtkCubeSource::New();
  vtkPolyDataMapper *cubeMapper = vtkPolyDataMapper::New();
  cubeMapper->SetInput(cube->GetOutput());
  vtkActor *cubeActor = vtkActor::New();
  cubeActor->SetMapper(cubeMapper);
  cubeActor->GetProperty()->SetColor(0.9804,0.5020,0.4471);

  // create an actor and give it sphere geometry
  vtkSphereSource *sphere = vtkSphereSource::New();
  sphere->SetThetaResolution(16);
  sphere->SetPhiResolution(16);
  vtkPolyDataMapper *sphereMapper = vtkPolyDataMapper::New();
  sphereMapper->SetInput(sphere->GetOutput());
  vtkActor *sphereActor = vtkActor::New();
  sphereActor->SetMapper(sphereMapper);
  sphereActor->GetProperty()->SetColor(0.8900,0.6600,0.4100);

Page 162: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Model.cxx – 4/5

#include - vtk stuff

main (){ // create rendering windows and three renderers // create an actor and give it cone geometry // create an actor and give it cube geometry // create an actor and give it sphere geometry // assign our actor to both renderers // set the size of our window

// set the viewports and background of the renderers // draw the resulting scene // Clean up

  // assign our actor to both renderers
  ren1->AddActor(coneActor);
  ren2->AddActor(sphereActor);
  ren3->AddActor(cubeActor);

  // set the size of our window
  renWindow1->SetSize(800,400);
  renWindow2->SetSize(400,400);

  // set the viewports and background of the renderers
  ren1->SetViewport(0,0,0.5,1);
  ren1->SetBackground(0.9,0.9,0.9);
  ren2->SetViewport(0.5,0,1,1);
  ren2->SetBackground(1,1,1);
  ren3->SetBackground(1,1,1);

  // draw the resulting scene
  renWindow1->Render();
  renWindow2->Render();

  iren1->Start();

Page 163: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Model.cxx – 5/5

#include - vtk stuff

main (){ // create rendering windows and three renderers // create an actor and give it cone geometry // create an actor and give it cube geometry // create an actor and give it sphere geometry // assign our actor to both renderers // set the size of our window

// set the viewports and background of the renderers // draw the resulting scene // Clean up

  // Clean up
  ren1->Delete();
  ren2->Delete();
  renWindow1->Delete();
  iren1->Delete();
  ren3->Delete();
  renWindow2->Delete();
  iren2->Delete();
  cone->Delete();
  coneMapper->Delete();
  coneActor->Delete();
  cube->Delete();
  cubeMapper->Delete();
  cubeActor->Delete();
  sphere->Delete();
  sphereMapper->Delete();
  sphereActor->Delete();

  return 0;
}

Page 164: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

End


Page 165: Computer Graphics and VTK Shroeder et al. Chapter 3 University of Texas – Pan American CSCI 6361, Spring 2014

Homework 1 : VTK Environment and Compilation