Graphics Pipeline
Goals
Understand the difference between inverse-mapping and forward-mapping approaches to computer graphics rendering
Be familiar with the graphics pipeline:
From a transformation perspective
From an operation perspective
Approaches to computer graphics rendering
Ray-tracing approach
Inverse-mapping approach: starts from pixels
A ray is traced from the camera through each pixel
Takes into account reflection, refraction, and diffraction in a multi-resolution fashion
High-quality graphics, but computationally expensive
Not for real-time applications
Pipeline approach
Forward-mapping approach
Used by OpenGL and DirectX
State-based approach: input is 2D or 3D data; output is the frame buffer; modify state to modify functionality
For real-time and interactive applications, especially games
Ray-tracing – Inverse mapping
For every pixel, construct a ray from the eye:
For every object in the scene, intersect the ray with the object
Find the closest intersection with the ray
Compute the normal at the point of intersection
Compute the color for the pixel
Shoot secondary rays
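The inner "intersect ray with object" step can be sketched in Python. This minimal ray-sphere intersection (illustrative only; the function name and parameters are our own, not from the slides) solves the quadratic for a unit-length ray direction:

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Distance t to the closest hit of origin + t*direction with the
    sphere, or None on a miss. direction is assumed normalized."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * e for d, e in zip(direction, oc))
    c = sum(e * e for e in oc) - radius * radius
    disc = b * b - 4.0 * c           # a == 1 for a unit direction
    if disc < 0:
        return None                  # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None      # reject hits behind the eye

# Eye at the origin looking down -z at a unit sphere 5 units away.
t = intersect_sphere((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0)
# t == 4.0 (the ray enters the sphere at z = -4)
```

A full ray tracer would run this against every object, keep the smallest t, then shade and recurse for the secondary rays.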
Pipeline – Forward mapping
Start from the geometric primitives to find the values of the pixels
The general view (Transformations)
3D scene, camera parameters, and light sources → Graphics Pipeline → Framebuffer → Display
Modeling Transformation
Lighting
Viewing Transformation
Projection Transformation
Clipping
Rasterization
Viewport Transformation
Input and output of the graphics pipeline
Input:
Geometric model: objects, light-source geometry, and transformations
Lighting model: description of light and object properties
Camera model: eye position, viewing volume
Viewport model: pixel grid onto which the view window is mapped
Output: colors suitable for framebuffer display
Graphics pipeline
What is it? The nature of the processing steps used to display a computer graphic, and the order in which they must occur.
Primitives are processed in a series of stages
Each stage forwards its result on to the next stage
The pipeline can be drawn and implemented in different ways
Some stages may be in hardware, others in software
Optimizations and additional programmability are available at some stages
Two ways of viewing the pipeline:
Transformation perspective
Operation perspective
Modeling transformation
3D models defined in their own coordinate system (object space)
Modeling transforms orient the models within a common coordinate frame (world space)
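As an illustrative sketch (not from the slides), a homogeneous 4x4 matrix placing an object-space vertex into world space; the helper names are our own:

```python
def mat_vec(m, v):
    """Multiply a 4x4 row-major matrix by a homogeneous 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def translation(tx, ty, tz):
    """Homogeneous 4x4 translation matrix (a simple modeling transform)."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

# An object-space vertex at the origin, placed at (2, 0, -3) in world space.
world = mat_vec(translation(2, 0, -3), [0, 0, 0, 1])
# world == [2, 0, -3, 1]
```

Rotations and scales are further 4x4 matrices composed by multiplication, which is why the whole modeling transform collapses into a single matrix.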
Lighting (shading)
Vertices are lit (shaded) according to material properties, surface properties (normals), and light sources
Local lighting model (Diffuse, Ambient, Phong, etc.)
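A minimal sketch of the Lambertian diffuse term from the local lighting models above (illustrative Python; kd and the helper names are our own):

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def diffuse(normal, light_dir, kd):
    """Lambertian diffuse term: kd * max(0, N . L)."""
    n, l = normalize(normal), normalize(light_dir)
    ndotl = sum(a * b for a, b in zip(n, l))
    return kd * max(0.0, ndotl)

# Light hitting the surface head-on gives full intensity...
full = diffuse([0, 0, 1], [0, 0, 1], kd=0.8)
# ...and light arriving from behind contributes nothing.
back = diffuse([0, 0, 1], [0, 0, -1], kd=0.8)
```

Ambient adds a constant term, and Phong adds a specular term depending on the view direction; all are evaluated per vertex in this stage.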
Viewing transformation
It maps world space to eye (camera) space
Viewing position is transformed to origin and viewing direction is oriented along some axis (usually z)
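A sketch of how a lookAt-style camera basis can be built (illustrative only; look_at_axes is our own name, not an OpenGL call):

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def look_at_axes(eye, target, up):
    """Eye-space basis: the view transform re-expresses world points in
    these axes, so the viewing direction becomes the -z axis."""
    f = normalize([t - e for t, e in zip(target, eye)])  # forward
    s = normalize(cross(f, up))                          # camera right
    u = cross(s, f)                                      # corrected up
    return s, u, [-x for x in f]                         # right, up, back

# A camera at the origin looking down -z: eye space matches world space.
s, u, b = look_at_axes([0, 0, 0], [0, 0, -1], [0, 1, 0])
```

Stacking these axes as rows of a rotation matrix, plus a translation by -eye, gives the full viewing transformation.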
Projection transformation(Perspective/Orthogonal)
Specify the view volume that will ultimately be visible to the camera
Two clipping planes are used: near plane and far plane
Usually perspective or orthogonal
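An illustrative Python sketch of a gluPerspective-style projection matrix (row-major; the helper names are our own). It maps the view frustum between the near and far planes onto the -1..1 cube:

```python
import math

def perspective(fovy_deg, aspect, near, far):
    """4x4 perspective matrix in the style of gluPerspective."""
    f = 1.0 / math.tan(math.radians(fovy_deg) / 2.0)
    return [[f / aspect, 0, 0, 0],
            [0, f, 0, 0],
            [0, 0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
            [0, 0, -1, 0]]

def project(m, v):
    """Apply the matrix, then the perspective divide by w."""
    out = [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]
    return [x / out[3] for x in out[:3]]

m = perspective(90.0, 1.0, 1.0, 100.0)
ndc_near = project(m, [0.0, 0.0, -1.0, 1.0])    # near plane -> z about -1
ndc_far = project(m, [0.0, 0.0, -100.0, 1.0])   # far plane  -> z about +1
```

The w = -z row is what produces the perspective divide, making distant objects smaller.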
Clipping
The view volume is transformed into a standard cube extending from -1 to 1, producing Normalized Device Coordinates (NDC).
Portions of objects outside the NDC cube are removed (clipped)
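A trivial sketch of the visibility test in NDC (illustrative only; a real clipper operates on whole primitives in homogeneous clip space, trimming polygons at the cube faces rather than just testing points):

```python
def inside_ndc(p):
    """True if a point (after the perspective divide) lies inside the
    NDC cube [-1, 1]^3, i.e. it survives clipping."""
    return all(-1.0 <= c <= 1.0 for c in p)

visible = inside_ndc((0.0, 0.5, -0.25))  # inside the cube
clipped = inside_ndc((0.0, 0.0, 1.5))    # beyond the far plane
```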
Viewport transformation (to screen space)
Maps NDC to the 3D viewport: xy gives the position in the screen window; z gives the depth of each point
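A sketch of that mapping (illustrative; it mirrors what glViewport and glDepthRange set up, with our own function name):

```python
def viewport_transform(ndc, x, y, w, h, near=0.0, far=1.0):
    """Map NDC coordinates (-1..1) to window coordinates:
    xy is the pixel position, z the depth-buffer value."""
    nx, ny, nz = ndc
    sx = x + (nx + 1.0) * w / 2.0
    sy = y + (ny + 1.0) * h / 2.0
    sz = near + (nz + 1.0) * (far - near) / 2.0
    return sx, sy, sz

# The NDC origin lands at the center of an 800x600 viewport, depth 0.5.
center = viewport_transform((0.0, 0.0, 0.0), 0, 0, 800, 600)
# center == (400.0, 300.0, 0.5)
```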
Rasterization (scan conversion)
Rasterizes objects into pixels
Interpolates values (color, depth, etc.) as it goes
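A sketch of that per-span attribute interpolation (illustrative only; a real rasterizer interpolates across triangle spans with perspective correction):

```python
def lerp(a, b, t):
    """Linearly interpolate two attribute vectors at parameter t."""
    return [x + (y - x) * t for x, y in zip(a, b)]

def interpolate_span(v0, v1, steps):
    """Interpolate per-vertex attributes (here RGB color) across the
    pixels between two endpoints, as the rasterizer does for a span.
    Assumes steps >= 2."""
    return [lerp(v0, v1, i / (steps - 1)) for i in range(steps)]

# Interpolating red to blue over 3 pixels gives a purple middle pixel.
colors = interpolate_span([1.0, 0.0, 0.0], [0.0, 0.0, 1.0], 3)
# colors[1] == [0.5, 0.0, 0.5]
```

Depth values are interpolated the same way, which is what feeds the depth test later in the pipeline.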
Summary of transformations
glMatrixMode(GL_MODELVIEW);
// glTranslate, glRotate, glScale (modeling transformation)
// gluLookAt (viewing transformation)
glMatrixMode(GL_PROJECTION);
// glTranslate, glRotate, glScale
// gluPerspective
// glFrustum
glViewport(0, 0, w, h);
DirectX transformations
World transformation: Device.Transform.World = Matrix.RotationZ(…)
(OpenGL has no separate world transformation; modeling and viewing are combined in GL_MODELVIEW)
View transformation: Device.Transform.View = Matrix.LookAtLH(…)
Projection transformation: Device.Transform.Projection = Matrix.PerspectiveFovLH(…)
OpenGL pipeline (operations)
OpenGL pipeline
Display List
A group of OpenGL commands that have been stored (compiled) for later execution. Vertex and pixel data can be stored/cached in a display list. (Why?)
Vertex Operation
Each vertex and its normal are transformed by the GL_MODELVIEW matrix from object coordinates to eye coordinates. When lighting is enabled, the lighting calculation for a vertex is performed using the transformed vertex and normal data, producing a new color for the vertex.
Primitive Assembly
Geometric primitives are transformed by the projection matrix, then clipped by the viewing-volume clipping planes. After that, the viewport transform is applied to map the 3D scene to screen-space coordinates. Finally, if culling is enabled, the culling test is performed.
Pixel Transfer Operation
Unpacked pixels may undergo transfer operations such as scaling, wrapping, and clamping. The transferred data are either stored in texture memory or rasterized directly to fragments.
Texture Memory
Texture images are loaded into texture memory to be applied to geometric objects.
Rasterization
The conversion of both geometric and pixel data into fragments. Fragments form a rectangular array containing color, depth, line width, point size, and anti-aliasing information (GL_POLYGON_SMOOTH). When requested, the interior pixels of a polygon are filled. A fragment corresponds to a pixel in the frame buffer.
Fragment Operation
Fragments are converted to pixels in the frame buffer. The first step in this stage is to generate a texture element (texel) from texture memory and apply it to each fragment. Fog calculations are then performed. When enabled, several fragment tests are performed, in order: scissor test, alpha test, stencil test, and depth test. Finally, blending, dithering, logical operations, and masking by bitmasks are performed, and the resulting pixel data are stored in the frame buffer.
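A toy sketch of the depth test from that sequence (illustrative only; this models GL_LESS semantics with our own names):

```python
def depth_test(depth_buffer, x, y, z):
    """GL_LESS-style depth test: the fragment is written only if it is
    closer (smaller z) than the value already stored for that pixel."""
    if z < depth_buffer[y][x]:
        depth_buffer[y][x] = z
        return True    # fragment passes; depth buffer updated
    return False       # fragment is hidden; discarded

# A one-pixel depth buffer cleared to the far value 1.0.
buf = [[1.0]]
first = depth_test(buf, 0, 0, 0.4)   # nearer than 1.0 -> passes
second = depth_test(buf, 0, 0, 0.7)  # behind 0.4 -> fails
```

The other tests (scissor, alpha, stencil) follow the same pass/discard pattern, each consulting a different piece of per-pixel or per-fragment state.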
Feedback
Used to get OpenGL's current state and information (the glGet*() and glIsEnabled() commands serve exactly this purpose). A rectangular area of pixel data can be read from the frame buffer using glReadPixels(). Fully transformed vertex data can be obtained using the feedback rendering mode and the feedback buffer.
Zoom into the OpenGL pipeline (see the OpenGL Blue Book)
Summary of operations
Fixed Pipeline
Programmable Pipeline
High-Level Shading Language