Today: Shadow Volume Algorithms; Vertex and Pixel Shaders
9/25/2001 CS 638, Fall 2001
Shadow Volumes
• A shadow volume for an object and light is the volume of space that is shadowed
  – That is, all points in the volume are in shadow for that light/object pair
• Creating the volume:
  – Find silhouette edges of the shadowing object as seen by the light source
  – Extrude these edges away from the light, forming polygons
  – Clip the polygons to the view volume
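The silhouette-edge step can be sketched in a few lines. This is a minimal illustration, not the course's code; all names (`facing`, `silhouette_edges`, the mesh layout) are hypothetical, and triangles are assumed to carry precomputed face normals and centroids:

```python
def facing(normal, centroid, light_pos):
    """True if this face points toward the light (front facing from the light)."""
    lx, ly, lz = (light_pos[i] - centroid[i] for i in range(3))
    return normal[0] * lx + normal[1] * ly + normal[2] * lz > 0.0

def silhouette_edges(faces, normals, centroids, light_pos):
    """Return edges shared by a light-facing face and a light-averted face.

    faces: list of vertex-index triples; normals/centroids: per-face data.
    """
    edge_sides = {}
    for f, n, c in zip(faces, normals, centroids):
        front = facing(n, c, light_pos)
        for i in range(3):
            a, b = f[i], f[(i + 1) % 3]
            key = (min(a, b), max(a, b))       # undirected edge
            edge_sides.setdefault(key, []).append(front)
    # a silhouette edge borders exactly one front face and one back face
    return [e for e, sides in edge_sides.items()
            if len(sides) == 2 and sides[0] != sides[1]]
```

Extruding each returned edge away from the light then yields the quads that bound the volume.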
Practicalities
• Silhouette edges can be found by looking at polygon normals
  – Silhouette edges are those between a front-facing face and a back-facing face (from the light's point of view)
  – The result is a sequence of edges with common vertices
    • Assuming convex shadowing objects
• Extend all the vertices of silhouette edges away from the light source
• Clip them to the view volume, and form the polygons that bound the shadow volume
• The final result is a set of shadow volume boundary polygons
Key Observation
• All points inside a shadow volume are in shadow
• Along a ray from the eye, we can track the shadow state by looking at intersections with shadow volume boundaries
  – Assume the eye is not in shadow
  – Each time the ray crosses a front-facing shadow polygon, add one to a counter
  – Each time the ray crosses a back-facing shadow polygon, subtract one from the counter
  – Places where the counter is zero are lit; others are shadowed
• We need to count the number of shadow polygons in front of a point. Which buffer can count for us?
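The counting rule is easy to see in a toy sketch (hypothetical names; crossings are listed in front-to-back order along the ray, with facing judged from the eye):

```python
def shadow_state_along_ray(crossings):
    """Given shadow-polygon crossings along a ray from an unshadowed eye,
    return the counter value after each crossing (0 = lit beyond it).

    crossings: sequence of 'front' / 'back' strings.
    """
    counter, states = 0, []
    for facing in crossings:
        counter += 1 if facing == 'front' else -1   # enter / leave a volume
        states.append(counter)
    return states

# entering one shadow volume and leaving it again:
# shadow_state_along_ray(['front', 'back'])  ->  [1, 0]
```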
Real-Time Shadow Volumes
• Compute shadow volumes per frame
  – Not a problem for only a few moving objects
  – Use simplified object geometry to speed things up
• Four-pass algorithm
  – Render scene with ambient light only (everything shadowed)
  – Render front-facing shadow polygons to increment the stencil buffer
  – Render back-facing shadow polygons to decrement the stencil buffer
  – Render the scene again only for non-shadowed (stencil = 0) regions
• Horrible details follow…
Details
• Turn off any light sources and render scene with ambient light only, depth on, color buffer active
• Disable the color and depth buffers for writing, but leave the depth test active
• Initialize the stencil buffer to 0 or 1 depending on whether the viewer is in shadow or not (ray cast)
• Set the stencil test to always pass and the operation to increment if depth test passes
• Enable back face culling
• Render the shadow volumes - this will increment the stencil for every front facing polygon that is in front of a visible surface
• Cont…
Details
• Enable front face culling, disable back face culling
• Set the stencil operation to decrement if the depth test passes (and leave the test as always pass)
• Render the shadow volumes - this decrements for all the back-facing shadow polygons that are in front of visible objects. The stencil buffer now has positive values for places in shadow
• Set the stencil function to equality with 0, operations to keep
• Clear the depth buffer, and enable it and the color buffer for writing
• Render the scene with the lights turned on
• Voila, we’re done
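The net effect of the two stencil passes at a single pixel can be mimicked without any graphics API. This is a toy per-pixel simulation under assumed conventions (smaller depth = closer to the eye), not real OpenGL; all names are hypothetical:

```python
def stencil_after_passes(front_depths, back_depths, surface_depth):
    """Count shadow polygons in front of the visible surface at one pixel.

    front_depths / back_depths: depths of the front-/back-facing shadow
    polygons covering this pixel; surface_depth: the visible surface there.
    Mirrors increment-on-front, decrement-on-back with the depth test active.
    """
    stencil = 0
    for d in front_depths:          # pass: front faces, stencil op = INCR
        if d < surface_depth:       # depth test passes
            stencil += 1
    for d in back_depths:           # pass: back faces, stencil op = DECR
        if d < surface_depth:
            stencil -= 1
    return stencil                  # 0 means the pixel is lit

# a volume spans depths 2..4: a surface at depth 3 is inside (in shadow),
# a surface at depth 5 is behind both boundaries (lit again)
```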
Variant
• Render fully lit, then add shadows (blend a dark polygon) where the stencil buffer is set
  – http://www.gamasutra.com/features/19991115/bestimt_freitag_01.htm
Why Use Shadow Volumes?
• They correctly account for shadows of all surfaces on all other surfaces
  – No shadow where there shouldn't be shadow
  – No problem with multiple light sources
• Adaptable quality by creating shadow volumes with approximate geometry
• Shadow volumes for static object/light pairs can be pre-computed and don’t change
• More expensive than light maps, but not too bad on current hardware
  – Can't combine the techniques. Why not?
Other Shadow Algorithms
• There are other algorithms suitable for non-real-time situations, or requiring special hardware
• Two interesting cases:
  – Derive shadow polygons from light sources
• Compute view from light position
• Transform visible polygons into world space
• Merge them with existing polygons to indicate non-shadowed polygons
  – Shadow z-buffer
    • Compute z-buffer from light viewpoint
• Render normal view, compare world locations of points in the z-buffer and shadow z-buffer – points with same location are lit
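The shadow z-buffer comparison can be sketched per pixel. A hypothetical helper under assumed conventions (depth measured from the light; a small bias guards against precision mismatches, which real shadow mapping also needs):

```python
def is_lit(point_light_space, shadow_zbuffer, bias=1e-3):
    """True if the visible point is lit according to the shadow z-buffer.

    point_light_space: (x, y, depth) of the visible point as seen from
    the light; shadow_zbuffer: dict mapping (x, y) -> nearest depth
    recorded when rendering from the light's viewpoint.
    """
    x, y, depth = point_light_space
    nearest = shadow_zbuffer.get((x, y), float('inf'))
    # lit if nothing in the shadow map is closer to the light than this point
    return depth <= nearest + bias
```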
Vertex and Pixel Shaders
• In traditional graphics, a shader is a program that computes something about each vertex or pixel
  – The Renderman shading language, for instance, computes the color of each pixel with a program
• Trends in graphics APIs and hardware are bringing shaders into real-time contexts
  – DirectX 8.0
  – Nvidia GeForce III
  – Nvidia OpenGL extensions
    • http://developer.nvidia.com/ for info
Modified Pipeline (DirectX 8.0)
Figure 1: A vertex shader's position in the DX8 rendering pipeline.
Vertex Shaders
• Vertex shaders are implemented in hardware in new generation cards
• They get as input the (x,y,z) vertex location, its texture coordinates (s,t), its color, normal, etc.
• They have access to some registers
  – NOT retained from one vertex to the next
• They have access to some constant memory
  – The programmer specifies what's in that memory
• They run a program
  – A sequence of instructions that compute on the available data
IO for Vertex Shaders
Figure 2: The inputs and outputs of vertex shaders. Arrows indicate read-only, write-only, or read-write.
Vertex Program Ops
• All operations work on vectors
  – Scalars are stored as vectors with the same value in each coordinate
• Nvidia hardware provides an instruction set with 17 operations, including, but not limited to:
  – Add ADD, multiply MUL (by scalar, and vector element-wise), multiply-and-add MAD, reciprocal square root RSQ, dot product DP3, …
  – LIT, which implements the Phong lighting model in one instruction
  – Operands can be re-arranged (swizzled) and negated before doing the op
• Matrices can be automatically mapped into registers
• No branches, but branching can be emulated with other instructions
  – Set a value to 0/1 based on a comparison, then multiply
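That compare-then-multiply trick looks like this in scalar form. A hypothetical Python sketch: on the hardware the 0/1 mask comes from a per-component compare instruction (SGE-style), not an actual branch:

```python
def select_ge(a, b, if_true, if_false):
    """Branch-free select: result is if_true when a >= b, else if_false.

    mask plays the role of a compare instruction's 0.0/1.0 output;
    the select is then just two multiplies and an add.
    """
    mask = 1.0 if a >= b else 0.0        # SGE-style compare (per component)
    return mask * if_true + (1.0 - mask) * if_false
```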
Vertex Programming Example
• Morph between a cube and sphere while doing lighting with a directional light source (gray output)
• Cube position and normal in attributes (input) 0, 1
• Sphere position and normal in attributes 2, 3
• Blend factor in attribute 15
• Inverse transpose modelview matrix in constants 12-14
  – Used to transform normal vectors into eye space
• Composite matrix is in constants 4-7
  – Used to convert from object to homogeneous screen space
• Light dir in 20, half-angle vector in 22; specular power, ambient, diffuse and specular coefficients all in 21
Vertex Program Example
# blend normal and position: v = a*v1 + (1-a)*v2, with a = v[15].x
MOV R3, v[3] ;
MOV R5, v[2] ;
ADD R8, v[1], -R3 ;
ADD R6, v[0], -R5 ;
MAD R8, v[15].x, R8, R3 ;
MAD R6, v[15].x, R6, R5 ;

# transform normal to eye space
DP3 R9.x, R8, c[12] ;
DP3 R9.y, R8, c[13] ;
DP3 R9.z, R8, c[14] ;

# transform position and output
DP4 o[HPOS].x, R6, c[4] ;
DP4 o[HPOS].y, R6, c[5] ;
DP4 o[HPOS].z, R6, c[6] ;
DP4 o[HPOS].w, R6, c[7] ;

# normalize normal
DP3 R9.w, R9, R9 ;
RSQ R9.w, R9.w ;
MUL R9, R9.w, R9 ;

# apply lighting and output color
DP3 R0.x, R9, c[20] ;
DP3 R0.y, R9, c[22] ;
MOV R0.zw, c[21] ;
LIT R1, R0 ;
DP3 o[COL0], c[21], R1 ;
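In ordinary arithmetic, the blend and normalize steps of that listing do roughly the following. A hypothetical Python transcription (function names are mine; the matrix transforms and LIT lighting are omitted for brevity):

```python
import math

def morph(cube_v, sphere_v, alpha):
    """MAD-style blend: v = alpha*cube_v + (1-alpha)*sphere_v,
    computed as alpha*(cube_v - sphere_v) + sphere_v, as in the listing."""
    return tuple(alpha * (c - s) + s for c, s in zip(cube_v, sphere_v))

def normalize(v):
    """DP3 + RSQ + MUL: scale by the reciprocal square root of |v|^2."""
    inv_len = 1.0 / math.sqrt(sum(x * x for x in v))
    return tuple(x * inv_len for x in v)
```

At alpha = 1 the morph returns the cube vertex, at alpha = 0 the sphere vertex, exactly as the two MAD instructions do for position and normal.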
Shading Languages
• Programming shading hardware is still a difficult process
  – Akin to writing assembly language programs
• Shading languages and accompanying compilers allow the user to write shaders in high-level languages
  – For example, take the Renderman shading language and compile it for the Nvidia hardware
  – Proudfoot et al. in SIGGRAPH 2001
Pixel Shaders
• Pixel shaders operate on fragments in place of the texturing hardware
  – After rasterization, before any fragment tests or blending
• Compute RGBA values for the fragment:
  – Fragment color, textures, fog color as inputs in registers
  – Temporary storage in registers
    • Each holds RGBA
  – A series of combiners that take the values in registers and perform operations on them
• Referred to as a register combiner architecture
• Supported by DirectX 8.0 and OpenGL extensions
Overall Architecture
The pictures I used in lecture are copyright Nvidia. Download the slides from:
http://developer.nvidia.com/docs/IO/1382/ATT/RegisterCombiners.ppt
Slide 7 from Nvidia
General Combiner Operations
Dot is dot product; Mult is element-wise multiply
Slide 16 from Nvidia
Multitexture Light Map with Fog
• Diffuse color CD provided per vertex
• Texture0 is diffuse map TD, Texture1 is light map TL
• CF is fog color, AF is fog factor
• Computes AF * CD * TD * TL + (1 – AF) * CF
• Uses two textures, but only one general combiner
• Advantages: performance (single pass, no blending)
• Disadvantages: none
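The combiner expression on this slide is just a per-channel lerp of the lit, light-mapped color toward the fog color. A hypothetical numeric sketch (colors as RGB tuples in 0..1; AF = 1 means no fog, matching the formula above):

```python
def lightmap_with_fog(cd, td, tl, cf, af):
    """AF * CD * TD * TL + (1 - AF) * CF, applied per channel.

    cd: per-vertex diffuse color, td: diffuse map texel,
    tl: light map texel, cf: fog color, af: fog factor.
    """
    return tuple(af * d * t * l + (1.0 - af) * f
                 for d, t, l, f in zip(cd, td, tl, cf))

# with no fog (AF = 1) the result is the modulated diffuse*lightmap color;
# with full fog (AF = 0) the result is just the fog color
```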