TRANSCRIPT
11/12/2015
cs123 INTRODUCTION TO COMPUTER GRAPHICS
Andries van Dam©
1 of 34
Polygon Rendering Techniques
Swapping the Loops
A ray tracer produces visible samples of the scene. The samples are convolved with a filter to form the pixel image. Ray tracing pipeline:
Review: Ray Tracing Pipeline
Scene Graph
- Traverse scene graph
- Accumulate CTM
- Spatially organize objects
Acceleration Data Structure (ADS) suitable for ray tracing
Generate a ray for each sample:
- Traverse the ADS from nearest cell to farthest cell
- Intersect objects in the cell with the ray
- Return the nearest intersection in the cell
- Otherwise continue to the next cell
Evaluate the lighting equation at the sample
All samples → Convolve with filter → Final image pixels
Generate reflection rays
Preprocessing
Ray Tracing
Post processing
Often need to render triangle meshes. Ray tracing works well for implicitly defined surfaces, but many existing models and modeling apps are based on polygon meshes – can we render them by ray tracing the polygons?
Easy to do: ray-polygon intersection is a simple calculation (use barycentric coords).
Very inefficient: it is common for an object to have thousands of triangles, and for a scene to have hundreds of thousands or even millions of triangles – each needs to be considered in intersection tests.
The traditional hardware pipeline is more efficient for many triangles: process the scene polygon by polygon, using the “z-buffer” to determine visibility; use a local illumination model; use a crude interpolation-shading approximation to compute the color and/or uv of most pixels, fine for small triangles.
Rendering Polygons
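The per-pixel depth test that makes this polygon-by-polygon approach work can be sketched in a few lines of Python. This is an illustrative toy, not GPU code; the `rasterize` function and its fragment format are inventions for this example.

```python
# Toy z-buffer rasterizer: keeps, per pixel, the color of the nearest fragment.
def rasterize(fragments, width, height):
    """fragments: list of (x, y, depth, color); smaller depth = closer."""
    zbuffer = [[float("inf")] * width for _ in range(height)]
    image = [[None] * width for _ in range(height)]
    for x, y, depth, color in fragments:
        if depth < zbuffer[y][x]:   # image-precision visibility test
            zbuffer[y][x] = depth
            image[y][x] = color
    return image

# A red fragment in front of a blue one at the same pixel: red wins,
# regardless of submission order.
img = rasterize([(0, 0, 0.8, "blue"), (0, 0, 0.3, "red")], 2, 2)
```

Note that, unlike ray tracing, no fragment ever needs to know about any other triangle: visibility falls out of the per-pixel depth comparison.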
Polygons (usually triangles) approximate the desired geometry.
One polygon at a time: each vertex is transformed into screen space, and the illumination model is evaluated at each vertex. No global illumination (except through the ambient-term hack).
One pixel of the polygon at a time (pipeline is highly simplified): depth is compared against the z-buffer; if the pixel is visible, its color value is computed using either linear interpolation between vertex color values (Gouraud shading) or a local lighting model for each pixel (e.g., Phong shading using interpolated normals and various types of hardware-assisted maps).
Traditional Fixed Function Pipeline (disappearing)
Geometry Processing (World Coordinates) → Rendering/Pixel Processing (Screen Coordinates)
Selective traversal of Acceleration Data Structure (or traverse scene graph to get CTM)
Transform vertices to canonical view volume
Lighting: calculate light intensity at vertices (lighting model of choice)
View volume clipping
Image-precision VSD: compare pixel depth (z-buffer)
Shading: interpolate vertex color values (Gouraud) or normals (Phong)
Trivial accept/reject and back-face culling
Back-face culling: often don’t need to render triangles “facing away” from the camera. Skip a triangle if its screen-space vertex positions aren’t in counter-clockwise order.
View volume clipping: if a triangle lies entirely outside the view volume, no need to render it. Some special cases require polygon clipping: large triangles, and triangles that intersect faces of the view volume.
Occlusion culling: triangles that are behind other opaque triangles don’t need to be rendered.
Visible Surface Determination - review
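The counter-clockwise test used for back-face culling is just the sign of a 2D cross product of two screen-space edges. A minimal Python sketch (the function name is ours):

```python
def is_front_facing(v0, v1, v2):
    """True if screen-space vertices are in counter-clockwise order
    (y pointing up); back faces come out clockwise and can be skipped."""
    # Twice the signed area of the triangle (2D cross product of two edges).
    area2 = (v1[0] - v0[0]) * (v2[1] - v0[1]) - (v2[0] - v0[0]) * (v1[1] - v0[1])
    return area2 > 0

assert is_front_facing((0, 0), (1, 0), (0, 1))        # CCW: keep
assert not is_front_facing((0, 0), (0, 1), (1, 0))    # CW: cull
```

Swapping any two vertices flips the sign, which is why a consistent winding convention across the whole mesh is required for culling to be safe.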
Allows the programmer to fine-tune and optimize various stages of the pipeline. This is what we have used in labs and projects this semester.
Shaders are fast! (They use multiple ultra-fast ALUs, or arithmetic logic units.) Shaders can do all the techniques mentioned in this lecture in real time. For example, physical phenomena such as shadows and light refraction can be emulated using shaders.
Programmable Shader-Based Pipeline
The image on the right is rendered with a custom shader. Note that only the right image has realistic shadows. A normal map is also used to add detail to the model.
Constant Shading: no interpolation; pick a single representative intensity and propagate it over the entire object. Loses almost all depth cues.
Classical Shading Models Review (1/6)
Pixar “Shutterbug” images from: www.siggraph.org/education/materials/HyperGraph/scanline/shade_models/shading.htm
Faceted Shading: a single illumination value per polygon. With many polygons approximating a curved surface, this creates an undesirable faceted look. The facets are exaggerated by the “Mach banding” effect.
Classical Shading Models Review (2/6)
Gouraud Shading: linear interpolation of intensity across triangles to eliminate edge discontinuities. Eliminates intensity discontinuities at polygon edges, but gradient discontinuities remain; Mach banding is largely ameliorated, not eliminated. Must differentiate desired creases from tessellation artifacts (edges of a cube vs. edges on a tessellated sphere). Can miss specular highlights between vertices because it interpolates vertex colors instead of calculating intensity directly at each point.
Classical Shading Models Review (3/6)
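Why interpolation misses highlights is easy to see numerically: every interpolated value lies between the two vertex intensities, so a peak between vertices can never appear. A toy Python sketch (the function name is ours):

```python
# Gouraud shading interpolates intensities computed at the vertices; if a
# specular highlight peaks between vertices, no interpolated value recovers it.
def gouraud_edge(i_a, i_b, t):
    """Linearly interpolate vertex intensities along an edge, 0 <= t <= 1."""
    return (1 - t) * i_a + t * i_b

# Both vertex intensities are dim, so every pixel on the edge is dim too,
# even if the true lighting model would peak near t = 0.5.
samples = [gouraud_edge(0.1, 0.2, t / 10) for t in range(11)]
```

Every sample is bounded by the endpoint intensities 0.1 and 0.2, which is exactly the failure mode described above.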
Gouraud shading can miss specular highlights because it interpolates vertex colors instead of calculating intensity directly at each point, or even interpolating vertex normals (Phong shading). The normals at vertices a and b would cause no appreciable specular component, whereas the normal at an interior point c would, with the view ray aligned with the reflection ray. Interpolating between Ia and Ib misses the highlight that evaluating I at c would catch.
Classical Shading Models Review (4/6)
Phong shading: the interpolated normal comes close to the actual normal of the true curved surface at a given point. Reduces the temporal “jumping” effect of the highlight, e.g., when rotating a sphere during animation (example on next slide).
Classical Shading Models Review (5/6)
[Comparison images: Gouraud vs. Phong]
Phong Shading: linear interpolation of normal vectors instead of color values. Always captures the specular highlight, but is more computationally expensive. At each pixel, the interpolated normal is re-normalized and used to compute the lighting model. Looks much better than Gouraud, but still no global effects.
http://www.cgchannel.com/2010/11/cg-science-for-artists-part-2-the-real-time-rendering-pipeline/
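The per-pixel re-normalization step matters because a linear blend of two unit normals is generally shorter than unit length. A small Python sketch (function names are ours, not the course's code):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def phong_pixel_normal(n0, n1, t):
    """Interpolate two unit vertex normals, then re-normalize per pixel;
    without re-normalization the lighting would use a too-short normal."""
    raw = tuple((1 - t) * a + t * b for a, b in zip(n0, n1))
    return normalize(raw)

# Halfway between two perpendicular unit normals, the raw interpolant has
# length ~0.707; re-normalizing restores unit length before lighting.
n = phong_pixel_normal((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), 0.5)
```

The shortened raw normal would dim the diffuse and specular terms if used directly, which is why shaders normalize the interpolated normal in the fragment stage.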
Classical Shading Models Review (6/6)
Global illumination models: objects enhanced using texture, shadow, bump, reflection, etc., mapping. These are still hacks, but more realistic ones.
Originally designed for offline rendering to improve quality; now used in shader programming to provide real-time quality improvements.
Maps can contain many different kinds of information. The simplest kind contains color information, i.e., a texture map. Maps can also contain depth, normal vectors, specularity, etc. (examples coming…). Can use multiple maps in combination (e.g., use a texture map to determine object color, then a normal map to perform the lighting calculation, etc.).
Maps Overview
Indexing into maps – what happens between pixels in the original image? Depends! OpenGL allows you to specify the filtering method: GL_NEAREST returns the nearest pixel; GL_LINEAR performs linear (triangle-filter) interpolation.
Choosing a texture map resolution: want high-resolution textures for nearby objects, but high-resolution textures are inefficient for distant objects.
Simple idea: mipmapping. MIP: multum in parvo, “much in little”. Maintain multiple texture maps at different resolutions; use lower-resolution textures for objects farther away. An example of the “level of detail” (LOD) management common in CG.
Mipmapping
Mipmap for a brick texture: http://flylib.com/books/en/1.541.1.66/1/
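One common way to pick a level is the base-2 log of how many texels fall on one screen pixel, since each mip level halves the resolution. A toy Python sketch; real GPUs derive this ratio from screen-space texture-coordinate derivatives, which is more involved than this illustrative function:

```python
import math

def mip_level(texels_per_pixel, num_levels):
    """Pick a mipmap level: level 0 is full resolution; each level halves it.
    texels_per_pixel approximates how many texels land on one screen pixel."""
    level = max(0.0, math.log2(max(texels_per_pixel, 1e-9)))
    return min(int(level), num_levels - 1)   # clamp to coarsest level

assert mip_level(1.0, 10) == 0    # texture at native resolution
assert mip_level(4.0, 10) == 2    # distant object: 4 texels per pixel
assert mip_level(256.0, 4) == 3   # clamped to the coarsest available level
```

In practice the fractional part of `level` is also used, blending two adjacent mip levels (trilinear filtering) to avoid visible seams between LODs.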
Render each object twice. First pass: render normally. Second pass: use transformations to project the object onto the ground plane, and render it completely black.
Pros: easy, and can be convincing. The eye is more sensitive to the presence of a shadow than to the shadow’s exact shape.
Cons: becomes a complex computational-geometry problem in anything but the simplest case. Easy: projecting onto a flat, infinite ground plane. How to implement for projection onto stairs? Rolling hills?
Shadows (1/5) – Simplest Hack
http://web.cs.wpi.edu/~matt/courses/cs563/talks/shadow/shadow.html
For each light L:
  For each point P in the scene:
    If P is in shadow cast by L: // how to compute?
      Only use indirect lighting (e.g., ambient term for Phong lighting)
    Else:
      Evaluate the full lighting model (e.g., ambient, diffuse, specular for Phong)
Next: different methods for computing whether P is in shadow cast by L
Shadows (2/5) – More Advanced
Stencil shadow volumes implemented on an nVidia chip by Kevin Egan, former cs123 TA and UTA, now a Ph.D., and Prof. Morgan McGuire, former Ph.D. student and book co-author.
For each light + object pair, compute a mesh enclosing the area where the object occludes the light.
Find the silhouette from the light’s perspective: every edge shared by two triangles such that one triangle faces the light source and the other faces away. (On a torus, this is where the angle between the normal vector and the vector to the light becomes >90°.)
Project the silhouette along the light rays, and generate triangles bridging the silhouette and its projection to obtain the shadow volume.
A point P is in shadow from light L if any shadow volume V computed for L contains P. Can determine this quickly using multiple passes and a “stencil buffer”. More here on Stencil Buffers, Stencil Shadow Volumes.
Shadows (3/5) – Stencil Shadow Volumes
http://www.ozone3d.net/tutorials/images/stencil_shadow_volumes/shadow_volume.jpg
Silhouette
Projected Silhouette
Example shadow volume (yellow mesh)
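The silhouette-edge test above can be sketched directly: collect each edge's two adjacent triangles and keep edges whose triangles face opposite ways relative to the light. A minimal Python sketch (data layout and names are ours):

```python
def silhouette_edges(triangles, normals, light_dir):
    """triangles: vertex-index triples; normals: one normal per triangle.
    An edge is on the silhouette when its two adjacent triangles face
    opposite ways relative to the light direction."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    edge_faces = {}
    for ti, tri in enumerate(triangles):
        for k in range(3):
            edge = tuple(sorted((tri[k], tri[(k + 1) % 3])))
            edge_faces.setdefault(edge, []).append(ti)

    result = []
    for edge, faces in edge_faces.items():
        if len(faces) == 2:
            f0, f1 = (dot(normals[t], light_dir) > 0 for t in faces)
            if f0 != f1:                 # one lit face, one unlit face
                result.append(edge)
    return result

# Two triangles sharing edge (1, 2); one faces the light, one faces away.
tris = [(0, 1, 2), (1, 3, 2)]
ns = [(0, 0, 1), (0, 0, -1)]
assert silhouette_edges(tris, ns, (0, 0, 1)) == [(1, 2)]
```

Chaining these edges and extruding them away from the light yields the sides of the shadow volume mesh.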
Shadows (4/5) – Another Multi-Pass Technique: Shadow Maps
Shadow map (on right) obtained by rendering from light’s point of view
(darker is closer)
Render the scene using each light as the center of projection, saving only its z-buffer. The resultant 2D images are “shadow maps”, one per light.
Next, render the scene from the camera’s POV. To determine if a point P on an object is in shadow:
- compute the distance dP from P to the light source
- convert P from world coordinates to shadow-map coordinates, using the viewing and projection matrices used to create the shadow map
- look up the minimum distance dmin in the shadow map
- P is in shadow if dP > dmin, i.e., it lies behind a closer object
[Figure: light and camera both viewing point P; dP is P’s distance to the light, dmin the depth stored in the shadow map]
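The lookup described above can be sketched as a few lines of Python. The names, the dictionary-as-shadow-map, and the toy transform are all inventions for this example; the small depth bias is our addition, a standard guard against self-shadowing artifacts ("shadow acne") caused by limited depth precision.

```python
def in_shadow(shadow_map, to_light_space, p_world, bias=1e-3):
    """shadow_map: dict mapping (x, y) shadow-map texels to nearest depth dmin.
    to_light_space: function world point -> (x, y, depth) seen from the light."""
    x, y, d_p = to_light_space(p_world)
    d_min = shadow_map.get((x, y), float("inf"))
    return d_p > d_min + bias   # something closer to the light occludes P

# Toy transform: light looks down -z, texel = rounded (x, y), depth = -z.
to_light = lambda p: (round(p[0]), round(p[1]), -p[2])
smap = {(0, 0): 2.0}                      # nearest surface is 2 units away
assert in_shadow(smap, to_light, (0, 0, -5.0))       # behind it: shadowed
assert not in_shadow(smap, to_light, (0, 0, -2.0))   # it IS that surface
```

The aliasing problems discussed next come from the `(x, y)` quantization step: many screen pixels can map to the same shadow-map texel.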
Pro: can extend to support soft shadows, i.e., shadows with “soft” edges (e.g. via blurring). Stencil shadow volumes are only useful for hard-edged shadows.
Con: a naïve implementation has impressively bad aliasing problems. When the camera is closer to the scene than the light, many screen pixels may be covered by only one shadow-map pixel (e.g., sun shining on the Eiffel Tower).
Many workarounds for the aliasing issues:
- Percentage-Closer Filtering: for each screen pixel, sample the shadow map in multiple places and average
- Cascaded Shadow Maps: multiple shadow maps, with higher resolution closer to the viewer
- Variance Shadow Maps: use statistical modeling instead of a simple depth comparison
Shadows (5/5) – Shadow Map Tradeoffs
Hard shadows
Soft shadows
http://http.developer.nvidia.com/GPUGems3/gpugems3_ch08.html
http://www-sop.inria.fr/reves/Marc.Stamminger/psm/
Aliasing produced by naïve shadow mapping
Environment Mapping (for specular reflections) (1/2)
Skybox
Object
Approximate reflections by creating a skybox, a.k.a. environment map, a.k.a. reflection map: a hack to approximate recursive ray tracing. Often represented as the six faces of a cube surrounding the scene; can also be a large sphere surrounding the scene, etc.
To create an environment map, render the entire scene from the center of an object, one face at a time. Can use a single environment map for a cluster of objects. Can do this offline for static geometry, but must generate at runtime for moving objects; rendering an environment map at runtime is expensive (compared to using a pre-computed texture). Can also use photographic panoramas (see Lab 8!).
Environment Mapping (2/2)
To sample the environment map reflection at point P: compute the vector E from P to the eye; reflect E about the normal N to obtain R; use the direction of R to compute the intersection point with the environment map. Treat P as being the center of the map, i.e., move the environment map so P is at its center.
See more here: http://http.developer.nvidia.com/CgTutorial/cg_tutorial_chapter07.html
[Figure: at point P, eye vector E is reflected about normal N to give R, which is used to sample the environment map]
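The reflection step is one line of vector algebra: R = 2(N·E)N − E for a unit normal N. A minimal Python sketch (function name is ours; GLSL's built-in `reflect` uses the incident-vector convention, which differs by a sign):

```python
def reflect_about_normal(e, n):
    """Reflect the eye vector E (pointing from P toward the eye) about the
    unit normal N; the resulting R is the lookup direction for the map."""
    d = sum(a * b for a, b in zip(e, n))           # N . E
    return tuple(2 * d * b - a for a, b in zip(e, n))

# Looking straight along the normal reflects back onto itself;
# a grazing view flips the tangential component.
assert reflect_about_normal((0, 0, 1), (0, 0, 1)) == (0, 0, 1)
assert reflect_about_normal((1, 0, 1), (0, 0, 1)) == (-1, 0, 1)
```

Because P is treated as the map's center, only the direction of R matters; its magnitude never enters the lookup.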
Observation: what if we replaced the 3D sphere on the right with a 2D circle? The circle would have fewer triangles (and thus renders faster). If we kept the sphere’s normals, the circle would still look like a sphere! This works because the human visual system infers shape from patterns of light and dark regions (“shape from shading”): brightness at any point is determined by the normal vector, not by the actual geometry of the model.
Overview: Surface Detail
Image credit: Dave Kilian, ‘13
Idea: Surface Detail
Original hi-poly model
Lo-poly model with hi-polymodel’s normals preserved
Two ideas: use a texture map either to vary the normals or to vary the mesh geometry itself. First, let’s vary normals.
Start with a hi-poly (high polygon count) model. Decimate the mesh (remove triangles). Encode the hi-poly normal information into a texture. Map the texture (i.e., the normals) onto the lo-poly mesh. In the fragment shader: look up the normal for each pixel, and use Phong shading (normal interpolation) to calculate the pixel color with the normal obtained from the texture.
Normal Mapping
Tangent-space normal map
http://www.3dvf.com/forum/3dvf/Blender-2/modelisation-organique-tutoriel-sujet_16_1.htm
Idea: fill a texture map with normals. The fragment shader samples a 2D color texture and interprets R as Nx, G as Ny, B as Nz. Easiest to render, hardest to produce: usually need specialized modeling tools like Pixologic’s ZBrush; algorithm on slide 27.
Variants:
Object-space: store raw, object-space normal vectors (an absolute specification of the normal).
Tangent-space: store normals in tangent space. Tangent space is a UVW system where W is aligned with the original low-res normal; RGB corresponds to perturbations of the low-res normal (a relative specification of the normal). Have to convert the normals to world space to do the lighting calculation.
[Images: tangent-space diagram; original mesh; object-space normal map; mesh with normal map applied (colors still there for visualization purposes)]
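Since RGB channels are unsigned but normal components are signed, the shader remaps each channel from its storage range to [-1, 1]. A small Python sketch of the decode (assuming 8-bit channels; function name is ours):

```python
def decode_normal(r, g, b):
    """Decode an 8-bit RGB texel into a tangent-space unit normal.
    Each channel in [0, 255] maps to a component in [-1, 1]."""
    n = tuple(c / 255.0 * 2.0 - 1.0 for c in (r, g, b))
    length = sum(c * c for c in n) ** 0.5
    return tuple(c / length for c in n)    # renormalize after quantization

# The characteristic light blue of tangent-space maps, (128, 128, 255),
# decodes to (approximately) the unperturbed normal (0, 0, 1).
n = decode_normal(128, 128, 255)
assert abs(n[2] - 1.0) < 0.01 and abs(n[0]) < 0.01
```

This remapping is also why tangent-space maps look predominantly blue: the unperturbed normal (0, 0, 1) encodes to a B channel near its maximum.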
Normal mapping can completely alter the perceived geometry of a model
Normal Mapping Example
Image courtesy of www.anticz.com
Normal Mapping Video
https://youtu.be/m-6Yu-nTbUU?t=2m21s
Creating Normal Maps
Algorithm (“baking” the texture map): the original mesh M is simplified to mesh S. For each pixel in the normal map for S:
- find the corresponding point on S: PS
- find the closest point on the original mesh: PM
- average the nearby normals in M and save the value in the normal map
Manual: the artist starts with S and uses specialized tools to draw directly onto the normal map (‘sculpting’).
Each normal vector’s X,Y,Z components are stored as R,G,B values in the texture.
[Figure: meshes M and S, with PS on S and the closest point PM on M; the resulting normal map]
Bump Mapping: Example
Original object (plain sphere) + bump map (height map to perturb normals – see next slide) = sphere with bump-mapped normals
http://cse.csusb.edu/tong/courses/cs520/notes/texture.php
Idea: instead of encoding the normals themselves in the map, encode relative heights (or “bumps”). Black: minimum height delta; white: maximum height delta. Much easier to create than normal maps.
How to compute a normal from a height map? Collect several height samples from the texture; convert the height samples to 3D coordinates to calculate the average normal vector at the given point; then transform the computed normal from tangent space to object space (and from there into world space). You computed normals like this for a terrain mesh in Lab 4 (terrain)!
Bump Mapping, Another Way to Perturb Normals
[Figure: nearby values in a (1D) bump map, visualized as tangent-space height deltas; the original tangent-space normal (0, 0, 1) is transformed accordingly. Normal vectors are shown for the triangles neighboring a point; each dot corresponds to a pixel in the bump map.]
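One common way to turn height samples into a tangent-space normal is central differences over neighboring texels; the Lab 4 approach of averaging neighboring face normals differs in detail but gives similar results. A minimal Python sketch (names and the `scale` parameter are ours):

```python
def normal_from_heights(h, x, y, scale=1.0):
    """Tangent-space normal at (x, y) of height grid h, via central
    differences; scale exaggerates or flattens the bumps."""
    dhdx = (h[y][x + 1] - h[y][x - 1]) / 2.0
    dhdy = (h[y + 1][x] - h[y - 1][x]) / 2.0
    n = (-scale * dhdx, -scale * dhdy, 1.0)   # tilt opposite the slope
    length = sum(c * c for c in n) ** 0.5
    return tuple(c / length for c in n)

flat = [[0.0] * 3 for _ in range(3)]
assert normal_from_heights(flat, 1, 1) == (0.0, 0.0, 1.0)   # flat: straight up

ramp = [[float(x) for x in range(3)] for _ in range(3)]     # rises along +x
n = normal_from_heights(ramp, 1, 1)
assert n[0] < 0 and n[2] > 0                                # tilts away from +x
```

The resulting vector is still in tangent space; as the slide notes, it must then be transformed to object space and world space before lighting.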
Actually move the vertices along their normals by looking up height deltas in a height map. Displacing the vertices deforms the mesh, producing different vertex normals because the face normals change. Unlike bump/normal mapping, this produces correct silhouettes and self-shadowing. By default it does not provide detail between vertices the way normal/bump mapping does; to increase the detail level we can subdivide the original mesh, which can become very costly since it creates additional vertices.
Other Techniques: Displacement Mapping
http://en.wikipedia.org/wiki/Displacement_mapping http://www.nvidia.com/object/tessellation.html https://support.solidangle.com/display/AFMUG/Displacement
Displacement map on a plane at different levels of subdivision
Other Techniques: Parallax Mapping (1/2)
An extension to normal/bump mapping: distorts the texture coordinates right before sampling, as a function of the normal and eye vector (next slide). Example below: looking at a stone sidewalk at an angle. The texture coordinates are stretched along the near edges of the stones, which are “facing” the viewer; similarly, the texture coordinates are compressed along the far edges of the stones, where you shouldn’t be able to see the “backside” of the stones.
http://amdstaging.wpengine.com/wordpress/media/2012/10/I3D2006-Tatarchuk-POM.pdf
Other Techniques: Parallax Mapping (2/2)
Modify the original texture coordinates (u,v) to better approximate where the intersection point of the eye ray with the perturbed surface would be on the bumpy surface.
Option 1: sample the height map at point (u,v) to get height h; approximate the region around (u,v) with a surface of constant height h; intersect the eye ray with this approximate surface to get new texture coordinates (u’,v’).
Option 2: sample the height map in the region around (u,v) to get the normal vector N’ (using the same normal-averaging technique that we used in bump mapping); approximate the region around (u,v) with the tangent plane; intersect the eye ray with this approximate surface to get (u’,v’).
Both produce artifacts when viewed from a steep angle. Other options are discussed here: http://www.cescg.org/CESCG-2006/papers/TUBudapest-Premecz-Matyas.pdf
(For illustration purposes, a smooth curve shows the varying heights in the neighborhood of point (u,v).)
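Option 1 reduces to a one-line coordinate offset once the eye vector is expressed in tangent space. A toy Python sketch (names, the `height` callback, and the `scale` factor are ours; sign conventions vary between implementations):

```python
def parallax_offset(u, v, height, eye_ts, scale=0.05):
    """Option 1 sketch: sample height h at (u, v), treat the neighborhood as
    a flat surface at height h, and intersect the eye ray with it.
    eye_ts: tangent-space vector from the surface point toward the eye (z > 0)."""
    ex, ey, ez = eye_ts
    h = height(u, v) * scale
    # Walking the eye ray down by h in z shifts (u, v) by h * (ex/ez, ey/ez).
    return u + h * ex / ez, v + h * ey / ez

# Viewing straight down produces no shift; a grazing view shifts the lookup,
# which is the stretching/compression effect seen on the sidewalk example.
assert parallax_offset(0.5, 0.5, lambda u, v: 1.0, (0.0, 0.0, 1.0)) == (0.5, 0.5)
u2, v2 = parallax_offset(0.5, 0.5, lambda u, v: 1.0, (1.0, 0.0, 1.0))
assert u2 > 0.5 and v2 == 0.5
```

The `ex / ez` term grows without bound as the view becomes grazing, which is exactly why both options break down at steep angles.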
Traditional parallax mapping only works for low-frequency bumps; it does not handle very steep, high-frequency bumps.
Other Techniques: Steep Parallax Mapping
http://graphics.cs.brown.edu/games/SteepParallax/mcguire-steepparallax.pdf
Using option 1 from the previous slide: the black eye ray correctly approximates the bump surface, while the gray eye ray misses the first intersection point on the high-frequency bump.
Steep Parallax Mapping: instead of approximating the bump surface, iteratively step along the eye ray; at each step, check the height map value to see if we have intersected the surface. More costly than naïve parallax mapping. Invented by Brown Ph.D. ’06 Morgan McGuire and Max McGuire. Adds support for self-shadowing.
Comparison
- Creases between bricks look flat
- Creases look progressively deeper
- Objects have self-shadowing