

  • TIEA311 Tietokonegrafiikan perusteet – kevät 2017

    (“Principles of Computer Graphics” – Spring 2017)

    Copyright and Fair Use Notice:

    The lecture videos of this course are made available for registered students only. Please do not redistribute them for other purposes. Use of auxiliary copyrighted material (academic papers, industrial standards, web pages, videos, and other materials) as a part of this lecture is intended to happen under academic ”fair use” to illustrate key points of the subject matter. The lecturer may be contacted for take-down requests or other copyright concerns (email: paavo.j.nieminen@jyu.fi).

  • TIEA311 Tietokonegrafiikan perusteet – kevät 2017 (“Principles of Computer Graphics” – Spring 2017)

    Adapted from: Wojciech Matusik and Frédo Durand: 6.837 Computer Graphics. Fall 2012. Massachusetts Institute of Technology: MIT OpenCourseWare, https://ocw.mit.edu/.

    License: Creative Commons BY-NC-SA

    Original license terms apply. Re-arrangement and new content copyright 2017 by Paavo Nieminen and Jarno Kansanaho

    Frontpage of the local course version, held during Spring 2017 at the Faculty of Information Technology, University of Jyväskylä: http://users.jyu.fi/~nieminen/tgp17/

  • TIEA311 - Final week in Jyväskylä

    Last week:

    • Ray Casting: cover fully. (DONE: necessary ideas covered on lecture; practicals in the final Assignments! One week to go!)
    • Ray Tracing: basic idea (DONE), skipped details → possible to continue as a “hobby project”; teachers of “TIEA306 Ohjelmointityö” may be contacted regarding credit for (any) hobby projects.

    Plan for the final two lectures:

    • First, a “debriefing” of Instanssi 2017.
    • Shading, texture mapping: cover the principles up to the Phong model (last week) and texture coordinates (today).
    • Cherry-pick title slides from advanced stuff that we mostly defer to the follow-up course (starts next week) and/or future self-study.
    • On Thursday’s 2nd part, take a look back and discuss what actually happened on this course. (1st part: GPUs and rasterization)

  • Instanssi 2017 debrief

    • In Instanssi 2017 last weekend, I spotted 8 persons from our course (myself included; miscounted only 7 on lecture, btw)
    • Three entries in the demo competition
    • One entry in Pikkiriikkinen (max 4096 bytes executable)
    • Two second places in the compos!
    • Three times +1 extra credit points earned for this course!
    • Nasty and elusive bug found and corrected in a hobby project (related to premature object state update in a function with a side-effect)
    • New course material for the important metaskill of learning new stuff: https://yousource.it.jyu.fi/tiea311-kurssimateriaalikehitys/tiea311-kurssimateriaali-avoin/blobs/master/instanssi17_4k_intro_webgl/project_memo.txt

  • MIT EECS 6.837 Computer Graphics

    Texture Mapping & Shaders

    MIT EECS 6.837 – Matusik

    © Remedy Entertainment. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

  • Spatial Variation

    • All materials seen so far are the same everywhere
      – In other words, we are assuming the BRDF is independent of the surface point x
      – No real reason to make that assumption

    © ACM. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

    Courtesy of Fredo Durand. Used with permission. © source unknown. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

  • Spatial Variation

    • We will allow BRDF parameters to vary over space
      – This will give us much more complex surface appearance
      – e.g. diffuse color kd varies with x
      – Other parameters/info can vary too: ks, exponent, normal

    © ACM. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

    Courtesy of Fredo Durand. Used with permission. © source unknown. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

  • Two Approaches

    • From data: texture mapping
      – read color and other information from 2D images
    • Procedural: shader
      – write little programs that compute color/info as a function of location

    © ACM. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

    © Ken Perlin. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

  • Effect of Textures

    Courtesy of Jeremy Birn.

  • Texture Mapping

    Image of a cartoon of a man applying wallpaper has been removed due to copyright restrictions.

  • Texture Mapping

    3D model → texture mapped model (Image: Praun et al.)

    © ACM. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

    © Oscar Meruvia-Pastor, Daniel Rypl. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

  • Texture Mapping

    Texture map (2D image) → texture mapped model (Image: Praun et al.)

    We need a function that associates each surface point with a 2D coordinate in the texture map.

    © ACM. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

    © Oscar Meruvia-Pastor, Daniel Rypl. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

  • Texture Mapping

    Texture map (2D image) → texture mapped model (Image: Praun et al.)

    For each point rendered, look up color in texture map.

    © ACM. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

    © Oscar Meruvia-Pastor, Daniel Rypl. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

  • UV Coordinates

    • Each vertex P stores 2D (u, v) “texture coordinates”
      – UVs determine the 2D location in the texture for the vertex
      – We will see how to specify them later
    • Then we interpolate using barycentrics

    (u0, v0), (u1, v1), (u2, v2) → (αu0 + βu1 + γu2, αv0 + βv1 + γv2)
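The interpolation on this slide can be sketched directly (a minimal Python helper with hypothetical names; the weights α, β, γ are assumed to come from the ray–triangle intersection test):

```python
def interpolate_uv(bary, uv0, uv1, uv2):
    """Blend per-vertex UVs with barycentric weights (alpha, beta, gamma)."""
    alpha, beta, gamma = bary
    u = alpha * uv0[0] + beta * uv1[0] + gamma * uv2[0]
    v = alpha * uv0[1] + beta * uv1[1] + gamma * uv2[1]
    return (u, v)

# at a vertex the weights are (1, 0, 0), so that vertex's UV comes back unchanged
print(interpolate_uv((1.0, 0.0, 0.0), (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)))  # (0.0, 0.0)
```

At the centroid, weights (1/3, 1/3, 1/3) return the average of the three vertex UVs, exactly the (αu0+βu1+γu2, αv0+βv1+γv2) of the slide.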


  • Pseudocode – Ray Casting

    • Ray cast pixel (x, y), get visible point and α, β, γ
    • Get texture coordinates (u, v) at that point
      – Interpolate from vertices using barycentrics
    • Look up texture color using UV coordinates

    Leonard McMillan, Computer Science at the University of North Carolina at Chapel Hill.

  • UV Coordinates?

    • Per-vertex (u, v) “texture coordinates” are specified:
      – Manually, provided by user (tedious!)
      – Automatically using parameterization optimization
      – Mathematical mapping (independent of vertices)

  • Texture UV Optimization

    • Goal: “flatten” 3D object onto 2D UV coordinates
    • For each vertex, find coordinates (u, v) such that distortion is minimized
      – distances in UV correspond to distances on mesh
      – angle of 3D triangle same as angle of triangle in UV plane
    • Cuts are usually required (discontinuities)

    © ACM. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

  • To Learn More

    • For this course, assume UV given per vertex
    • “Mesh Parameterization: Theory and Practice”
      – Kai Hormann, Bruno Lévy and Alla Sheffer, ACM SIGGRAPH Course Notes, 2007
    • http://alice.loria.fr/index.php/publications.html?redirect=0&Paper=SigCourseParam@2007&Author=Levy

  • 3D Model UV Mapping

    Slide from Epic Games

    © Epic Games Inc. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

  • 3D Model

    • Information we need:
    • Per vertex
      – 3D coordinates
      – Normal
      – 2D UV coordinates
    • Other information
      – BRDF (often same for the whole object, but could vary)
      – 2D image for the texture map

  • Questions?

    © ACM. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

  • Mathematical Mapping

    • What of non-triangular geometry?
      – Spheres, etc.
    • No vertices, cannot specify UVs that way!
    • Solution: Parametric Texturing
      – Deduce (u, v) from (x, y, z)
      – Various mappings are possible...
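As a sketch of “deduce (u, v) from (x, y, z)”, here is one possible spherical mapping (conventions vary between renderers; the axis choice and seam position below are assumptions, not from the slides):

```python
import math

def spherical_uv(x, y, z):
    """Map a point on a sphere centered at the origin to (u, v) in [0, 1]."""
    r = math.sqrt(x * x + y * y + z * z)
    u = (math.atan2(z, x) + math.pi) / (2.0 * math.pi)  # azimuth around y -> [0, 1]
    v = math.acos(y / r) / math.pi                      # polar angle      -> [0, 1]
    return (u, v)

print(spherical_uv(0.0, 1.0, 0.0))  # the "north pole" maps to v = 0
```

Cylindrical mapping is the same idea with v taken from the height along the axis instead of the polar angle.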

  • Common Texture Coordinate Mappings

    • Planar
      – Vertex UVs and linear interpolation is a special case!
    • Cylindrical
    • Spherical
    • Perspective Projection

    Images (planar and spherical examples) removed due to copyright restrictions.

  • Projective Texture Example

    • Modeling from photographs
    • Using input photos as textures

    Figure from Debevec, Taylor & Malik, http://www.debevec.org/Research

    © ACM. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

  • Texture Tiling

    • Specify texture coordinates (u, v) at each vertex
    • Canonical texture coordinates (0,0) → (1,1)
      – Wrap around when coordinates are outside (0, 1)

    Figure: seamless tiling (repeating) vs. tiles with visible seams; texture coordinates run from (0,0) to (3,0)/(0,3) over the tiled surface, while a single tile spans (0,0) to (1,1).

    Note the range (0,1), unlike normalized screen coordinates!
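Wrap-around tiling amounts to taking (u, v) modulo 1 before the lookup. A toy nearest-neighbor sketch (the list-of-rows `texture` is a stand-in for a real image; this mirrors GL_REPEAT-style wrapping):

```python
def texel_lookup(texture, u, v):
    """Tile a texture by wrapping (u, v) into [0, 1) before the lookup."""
    u = u % 1.0   # (3.2, 0.7) samples the same texel as (0.2, 0.7)
    v = v % 1.0
    h = len(texture)
    w = len(texture[0])
    # nearest-neighbor lookup; min() guards the u == 1.0 edge case
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return texture[y][x]
```

Python's `%` also wraps negative coordinates, so u = -0.3 samples the same texel as u = 0.7.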

  • Texture Mapping & Illumination

    • Texture mapping can be used to alter some or all of the constants in the illumination equation
      – Diffuse color kd, specular exponent q, specular color ks...
      – Any parameter in any BRDF model!
      – kd in particular is often read from a texture map

    Panels: constant diffuse color; diffuse texture color; texture used as label; texture used as diffuse color.

  • Gloss Mapping Example

    Spatially varying kd and ks

    Ron Frazier

    © source unknown. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

  • Questions?

  • We Can Go Even Further...

    • The normal vector is really important in conveying the small-scale surface detail
      – Remember cosine dependence
      – The human eye is really good at picking up shape cues from lighting!
    • We can exploit this and look up also the normal vector from a texture map
      – This is called “normal mapping” or “bump mapping”
      – A coarse mesh combined with detailed normal maps can convey the shape very well!

  • Normal Mapping

    • For each shaded point, the normal is given by a 2D image normalMap that stores the 3D normal

    For a visible point:
      interpolate UV using barycentrics   // same as texture mapping
      Normal = normalMap[U, V]
      compute shading (BRDF) using this normal
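The pseudocode above, made concrete (a sketch: `normal_map` is modeled as a function (u, v) → normal and `brdf` as a function of the normal; both are hypothetical stand-ins for the real lookups):

```python
def shade_with_normal_map(bary, uvs, normal_map, brdf):
    """Interpolate UV exactly as for color texturing, then fetch a normal
    instead of a color, and shade with that normal."""
    alpha, beta, gamma = bary
    (u0, v0), (u1, v1), (u2, v2) = uvs
    u = alpha * u0 + beta * u1 + gamma * u2   # same as texture mapping
    v = alpha * v0 + beta * v1 + gamma * v2
    normal = normal_map(u, v)                 # Normal = normalMap[U, V]
    return brdf(normal)                       # shade using this normal
```

The only change from color texture mapping is what gets stored in the map and how the fetched value is used.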

  • Normal Map Example

    Original mesh, 4M triangles (Paolo Cignoni)

    Image courtesy of Maksim on Wikimedia Commons. License: CC-BY-SA. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

  • Normal Map Example

    Simplified mesh, 500 triangles; simplified mesh + normal mapping (Paolo Cignoni)

    Image courtesy of Maksim on Wikimedia Commons. License: CC-BY-SA. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

  • Normal Map Example

    Diffuse texture kd; normal map; final render

    Models and images: Trevor Taylor

    © source unknown. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

  • Generating Normal Maps

    • Model a detailed mesh
    • Generate a UV parameterization for the mesh
      – A UV mapping such that each 3D point has unique image coordinates in the 2D texture map
      – This is a difficult problem, but tools are available
        • E.g., the DirectX SDK has functionality to do this
    • Simplify the mesh (again, see DirectX SDK)
    • Overlay simplified and original model
    • For each point P on the simplified mesh, find the closest point P’ on the original model (ray casting)
    • Store the normal at P’ in the normal map. Done!

  • Normal Map Details

    • You can store an object-space normal
      – Convenient if you have a unique parameterization
    • ...but if you want to use a tiling normal map, this will not work
      – Must account for the curvature of the object!
      – Think of mapping this diffuse+normal map combination on a cylindrical tower
    • Solution: Tangent space normal map
      – Encode a “difference” from the geometric normal in a local coordinate system
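A common encoding convention (an assumption here, not spelled out on the slide) stores each tangent-space component in [0, 1] per color channel; decoding then transforms by the local (tangent, bitangent, normal) basis of the surface point:

```python
def decode_tangent_normal(rgb, tangent, bitangent, normal):
    """Decode a tangent-space normal map texel into a world-space normal."""
    # each channel in [0, 1] encodes a component in [-1, 1]
    tx = 2.0 * rgb[0] - 1.0
    ty = 2.0 * rgb[1] - 1.0
    tz = 2.0 * rgb[2] - 1.0
    # combine the basis vectors; (0.5, 0.5, 1.0) decodes to the geometric normal
    return tuple(tx * t + ty * b + tz * n
                 for t, b, n in zip(tangent, bitangent, normal))
```

This is why tangent-space normal maps look mostly blue: the "no difference" texel (0.5, 0.5, 1.0) dominates.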

  • Questions?

    Image from Epic Games has been removed due to copyright restrictions.

  • Shaders (Material class)

    • Functions executed when light interacts with a surface
    • Constructor:
      – set shader parameters
    • Inputs:
      – Incident radiance
      – Incident and reflected light directions
      – Surface tangent basis (anisotropic shaders only)
    • Output:
      – Reflected radiance

  • Shaders

    • Initially for production (slow) rendering
      – RenderMan in particular
    • Now used for real-time (games)
      – Evaluated by graphics hardware
      – More later in the course
    • Often make heavy use of texture mapping

  • Questions?

  • Procedural Textures

    • Alternative to texture mapping
    • Little program that computes color as a function of x, y, z:
      f(x, y, z) → color

    Image by Turner Whitted

    © Turner Whitted, Bell Laboratories. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

  • Procedural Textures

    • Advantages:
      – easy to implement in ray tracer
      – more compact than texture maps (especially for solid textures)
      – infinite resolution
    • Disadvantages:
      – non-intuitive
      – difficult to match existing texture

  • Questions?

  • Perlin Noise

    • Critical component of procedural textures
    • Pseudo-random function
      – But continuous
      – Band pass (single scale)
    • Useful to add lots of visual detail

    http://www.noisemachine.com/talk1/index.html
    http://mrl.nyu.edu/~perlin/doc/oscar.html
    http://mrl.nyu.edu/~perlin/noise/
    http://en.wikipedia.org/wiki/Perlin_noise
    http://freespace.virgin.net/hugo.elias/models/m_perlin.htm (not really Perlin noise but very good)
    http://portal.acm.org/citation.cfm?id=325247

    © Ken Perlin. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

  • Requirements

    • Pseudo-random
    • For arbitrary dimension
      – 4D is common for animation
    • Smooth
    • Band pass (single scale)
    • Little memory usage
    • How would you do it?

  • Perlin Noise

    • Cubic lattice
    • Zero at vertices
      – To avoid low frequencies
    • Pseudo-random gradient at vertices
      – define local linear functions
    • Splines to interpolate the values to arbitrary 3D points
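A 1D sketch of this construction (real Perlin noise does the same on a 3D cubic lattice with a permutation-table hash; the integer hash below is an arbitrary stand-in, not Perlin's table):

```python
import math

def gradient_noise_1d(x, seed=0):
    """Zero at integer lattice points, a pseudo-random gradient at each
    lattice point, and a smooth spline blend in between."""
    def gradient(i):
        # deterministic pseudo-random slope in [-1, 1] for lattice point i
        h = (i * 374761393 + seed * 668265263) & 0xFFFFFFFF
        h = (h ^ (h >> 13)) * 1274126177 & 0xFFFFFFFF
        return (h % 2000) / 1000.0 - 1.0

    i0 = math.floor(x)
    t = x - i0
    # each lattice point defines a local linear function through zero
    v0 = gradient(i0) * t
    v1 = gradient(i0 + 1) * (t - 1.0)
    # smoothstep-style spline blend between the two linear functions
    s = t * t * (3.0 - 2.0 * t)
    return v0 + (v1 - v0) * s

print(gradient_noise_1d(2.0) == 0.0)  # True: zero at every lattice vertex
```

Because the value is forced to zero at every lattice vertex, the result has no energy at frequencies much below the lattice spacing, which is exactly the band-pass property listed above.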

  • TIEA311 - Today in Jyväskylä

    • Basic idea of Perlin noise is nicely introduced on “Lecture 16” of the original course material.
    • We skip it here. I hope the follow-up course starting next week has time for this, among many other wonderful things.
    • Pseudo-random noise is very easy to incorporate in real-time graphics shaders. If you want, you can just “copy-paste” code that you trust (and that has a license that allows inclusion in your current work!)
    • Next, we proceed directly to applications of Perlin noise.

  • Noise At One Scale

    • A scale is also called an octave in noise parlance

    © Ken Perlin. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

  • Noise At Multiple Scales

    • A scale is also called an octave in noise parlance
    • But multiple octaves are usually used, where the scale between two octaves is multiplied by 2 – hence the name octave

    © Ken Perlin. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

  • Sum 1/f noise

    • That is, each octave f has weight 1/f

    © Ken Perlin. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

  • Sum 1/f |noise|

    • Absolute value introduces C1 discontinuities
    • a.k.a. turbulence

    © Ken Perlin. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

  • sin(x + sum 1/f |noise|)

    • Looks like marble!

    © Ken Perlin. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.
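The three constructions on the last few slides fit in a few lines each (a sketch; `noise` is assumed to be any band-limited noise function such as Perlin noise, and `math.sin` is used below only as a crude stand-in):

```python
import math

def fbm(noise, x, octaves=4):
    """Sum 1/f noise: octave k has frequency 2**k and weight 1/2**k."""
    return sum(noise(x * 2**k) / 2**k for k in range(octaves))

def turbulence(noise, x, octaves=4):
    """Same sum over |noise|; the absolute value adds C1 discontinuities."""
    return sum(abs(noise(x * 2**k)) / 2**k for k in range(octaves))

def marble(noise, x, octaves=4):
    """The slide's marble pattern: sin(x + sum 1/f |noise|)."""
    return math.sin(x + turbulence(noise, x, octaves))
```

Feeding the marble value through a color map gives the stone-like streaks shown on the slide.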

  • Comparison

    • noise · sum 1/f(noise) · sum 1/f(|noise|) · sin(x + sum 1/f(|noise|))

    © Ken Perlin. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

  • Questions?

  • Noise For Solid Textures

    • Marble
      – recall sin(x[0] + sum 1/f |noise|)
      – BoringMarble = colormap(sin(x[0]))
      – Marble = colormap(sin(x[0] + turbulence))
      – http://legakis.net/justin/MarbleApplet/
    • Wood
      – replace x (or parallel plane) by radius
      – Wood = colormap(sin(r + turbulence))
      – http://www.connectedpixel.com/blog/texture/wood

    © Ken Perlin. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

  • Other Cool Usage: Displacement, Fur

    © Ken Perlin. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

  • Questions?

    Image removed due to copyright restrictions. Please see the image “blueglass.gif” at http://mrl.nyu.edu/~perlin/imgs/imgs.html.

  • Shaders

    • Noise: one ingredient of shaders
    • Can also use textures
    • Shaders control diffuse color, but also specular components, maybe even roughness (exponent), transparency, etc.
    • Shaders can be layered (e.g. a layer of dust, peeling paint, mortar between bricks).
    • Notion of shade tree
      – Pretty much an algebraic tree
    • Assignment 5: checkerboard shader based on two shaders
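In the shade-tree spirit, a checkerboard node that combines two sub-shaders might look like this (a sketch with a hypothetical shader interface, point → color; Assignment 5's actual interface may differ):

```python
import math

def checkerboard(shader_a, shader_b, cell=1.0):
    """Shade-tree node: pick one of two sub-shaders based on the parity of
    the 3D cell containing the point."""
    def shade(point):
        parity = sum(math.floor(c / cell) for c in point) % 2
        return shader_a(point) if parity == 0 else shader_b(point)
    return shade

red = lambda p: (1.0, 0.0, 0.0)
blue = lambda p: (0.0, 0.0, 1.0)
checker = checkerboard(red, blue)
print(checker((0.5, 0.5, 0.5)), checker((1.5, 0.5, 0.5)))  # red then blue
```

Because the leaves are themselves shaders, the node composes: either branch could be another checkerboard, a noise-driven shader, or a texture lookup.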

  • Bottom Line

    • Programmable shaders provide great flexibility
    • Shaders can be extremely complex
      – 10,000 lines of code!
    • Writing shaders is a black art

  • Sampling, Aliasing, & Mipmaps

    MIT EECS 6.837 Computer Graphics

    Wojciech Matusik, MIT EECS

  • Examples of Aliasing

    © Rosalee Nerheim-Wolfe, Toby Howard, Stephen Spencer. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.


  • Examples of Aliasing: Texture Errors

    point sampling

  • In photos too

    See also http://vimeo.com/26299355

    © source unknown. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

  • Philosophical perspective

    • The physical world is continuous; inside the computer, things need to be discrete
    • Lots of computer graphics is about translating continuous problems into discrete solutions
      – e.g. ODEs for physically-based animation, global illumination, meshes to represent smooth surfaces, rasterization, antialiasing
    • Careful mathematical understanding helps do the right thing

  • What is a Pixel?

    • A pixel is not:
      – a box
      – a disk
      – a teeny tiny little light
    • A pixel “looks different” on different display devices
    • A pixel is a sample
      – it has no dimension
      – it occupies no area
      – it cannot be seen
      – it has a coordinate
      – it has a value

    © source unknown. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

  • More on Samples

    • In signal processing, the process of mapping a continuous function to a discrete one is called sampling
    • The process of mapping a continuous variable to a discrete one is called quantization
      – Gamma helps quantization
    • To represent or render an image using a computer, we must both sample and quantize
      – Today we focus on the effects of sampling and how to fight them

    (Figure: a sample has a discrete position and a discrete value.)

  • Sampling Density

    • If we’re lucky, sampling density is enough

    (Figure: input signal vs. reconstructed signal.)

  • Sampling Density

    • If we insufficiently sample the signal, it may be mistaken for something simpler during reconstruction (that's aliasing!)
    • This is why it’s called aliasing: the new low-frequency sine wave is an alias/ghost of the high-frequency one

  • Discussion

    • Types of aliasing
      – Edges
        • mostly directional aliasing (vertical and horizontal edges rather than the actual slope)
      – Repetitive textures
        • Paradigm of aliasing
        • Harder to solve right
        • Motivates fun mathematics

    © Rosalee Nerheim-Wolfe, Toby Howard, Stephen Spencer. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

  • Solution?

    • How do we keep high-frequency patterns from messing up our image?
    • We blur!
      – In the case of audio, people first include an analog low-pass filter before sampling
      – For ray tracing/rasterization: compute at higher resolution, blur, resample at lower resolution
      – For textures, we can also blur the texture image before doing the lookup
    • To understand what really happens, we need serious math

  • In practice: Supersampling

    • Your intuitive solution is to compute multiple color values per pixel and average them

    (Figure: jaggies vs. with antialiasing.)

  • Uniform supersampling

    • Compute image at resolution k·width, k·height
    • Downsample using a low-pass filter (e.g. Gaussian, sinc, bicubic)

  • Low pass / convolution

    • Each output (low-res) pixel is a weighted average of input subsamples
    • Weight depends on relative spatial position
    • For example:
      – Gaussian as a function of distance
      – 1 inside a square, zero outside (box)

    http://homepages.inf.ed.ac.uk/rbf/HIPR2/gsmooth.htm

    © 2003 R. Fisher, S. Perkins, A. Walker and E. Wolfart. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

  • Recommended filter

    • Bicubic
      – http://www.mentallandscape.com/Papers_siggraph88.pdf
    • Good tradeoff between sharpness and aliasing

    http://de.wikipedia.org/wiki/Datei:Mitchell_Filter.svg

  • Choosing the parameters

    • Empirical tests determined usable parameters
      – Mitchell, Don and Arun Netravali, “Reconstruction Filters in Computer Graphics”, SIGGRAPH 88.
        http://www.mentallandscape.com/Papers_siggraph88.pdf
        http://dl.acm.org/citation.cfm?id=378514

    © ACM. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

  • Spatial Filtering

    • Remove the high frequencies which cause artifacts in texture minification
    • Compute a spatial integration over the extent of the pixel
    • This is equivalent to convolving the texture with a filter kernel centered at the sample (i.e., pixel center)!
    • Expensive to do during rasterization, but an approximation can be precomputed

    (Figure: projected texture in image plane; pixels projected in texture plane.)

  • MIP Mapping

    • Construct a pyramid of images that are pre-filtered and re-sampled at 1/2, 1/4, 1/8, etc., of the original image's sampling
    • During rasterization we compute the index of the decimated image that is sampled at a rate closest to the density of our desired sampling rate
    • MIP stands for multum in parvo, which means “much in a small place”
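A sketch of both halves: building the pyramid by repeated 2×2 averaging, and picking the level whose sampling rate best matches the screen footprint (square power-of-two textures assumed for brevity):

```python
import math

def build_mip_pyramid(image):
    """Repeatedly average 2x2 texel blocks, halving the resolution until a
    single texel remains."""
    levels = [image]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        n = len(prev) // 2
        levels.append([[(prev[2*y][2*x] + prev[2*y][2*x+1] +
                         prev[2*y+1][2*x] + prev[2*y+1][2*x+1]) / 4.0
                        for x in range(n)] for y in range(n)])
    return levels

def mip_level(texels_per_pixel):
    """Level whose sampling rate is closest to the desired one: log2 of the
    texture-to-screen footprint, clamped at 0 (magnification)."""
    return max(0.0, math.log2(texels_per_pixel))

pyramid = build_mip_pyramid([[0.0, 0.0], [4.0, 4.0]])
print(len(pyramid), pyramid[1])  # 2 [[2.0]]
```

Real hardware additionally blends between the two nearest levels (trilinear filtering) so the level choice does not pop along a surface.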

  • MIP Mapping Example

    MIP mapped (bi-linear) vs. nearest neighbor

  • Examples of Aliasing: Texture Errors

    nearest neighbor / point sampling vs. mipmaps & linear interpolation

  • TIEA311 - Today in Jyväskylä

    • Much more about sampling issues and antialiasing on “Lecture 17” of the original course material.
    • The previous few slides were just a low-resolution sample of the original slide set (pun intended, funny).
    • As mentioned earlier, we gladly defer the theory to our local course “TIES324 Signaalinkäsittely” and the techniques to “TIES471 Reaaliaikainen renderöinti”.

  • Global Illumination and Monte Carlo

    MIT EECS 6.837 Computer Graphics

    Wojciech Matusik, with many slides from Fredo Durand and Jaakko Lehtinen

    © ACM. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

  • Global Illumination

    • So far, we've seen only direct lighting (red here)
    • We also want indirect lighting
      – Full integral of all directions (multiplied by BRDF)
      – In practice, send tons of random rays

  • Direct Illumination

    Courtesy of Henrik Wann Jensen. Used with permission.

  • Global Illumination (with Indirect)

    Courtesy of Henrik Wann Jensen. Used with permission.

  • Global Illumination

    • So far, we only used the BRDF for point lights
      – We just summed over all the point light sources
    • BRDF also describes how indirect illumination reflects off surfaces
      – Turns summation into integral over hemisphere
      – As if every direction had a light source

  • Reflectance Equation, Visually

    Sum (integrate) over every direction on the hemisphere, modulating incident illumination by the BRDF:

    outgoing light to direction v = ∫ over the hemisphere of (incident light Lin from direction ω) × (the BRDF) × (cosine term)

  • The Reflectance Equation

    • Where does Lin come from?
      – It is the light reflected towards x from the surface point in direction l ==> must compute a similar integral there!
    • Recursive!

  • The Rendering Equation

    • Where does Lin come from?
      – It is the light reflected towards x from the surface point in direction l ==> must compute a similar integral there!
    • Recursive!
      – AND if x happens to be a light source, we add its contribution directly

  • The Rendering Equation

    • The rendering equation describes the appearance of the scene, including direct and indirect illumination
      – An “integral equation”: the unknown solution function L is both on the LHS and on the RHS inside the integral
        • Must either discretize or use Monte Carlo integration
      – Originally described by Kajiya and Immel et al. in 1986
      – More on 6.839
    • Also, see book references towards the end

  • The Rendering Equation

    • Analytic solution is usually impossible
    • Lots of ways to solve it approximately
    • Monte Carlo techniques use random samples for evaluating the integrals
      – We’ll look at a simple method in a bit...
    • Finite element methods discretize the solution using basis functions (again!)
      – Radiosity, wavelets, precomputed radiance transfer, etc.
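The Monte Carlo idea is easiest to see on a plain 1D integral (a sketch; in the rendering equation the uniform samples on an interval become random directions on the hemisphere):

```python
import random

def mc_estimate(f, a, b, n, seed=1):
    """Monte Carlo integration: average f at n uniform random samples in
    [a, b] and scale by the interval length."""
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

# integral of x^2 on [0, 1] is 1/3; the error shrinks like 1/sqrt(n)
print(mc_estimate(lambda x: x * x, 0.0, 1.0, 100000))
```

The 1/√n convergence is independent of dimension, which is why Monte Carlo survives the high-dimensional integrals of light transport where quadrature rules do not.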

  • How To Render Global Illumination?

    Lehtinen et al. 2008

    © ACM. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

  • Ray Casting

    • Cast a ray from the eye through each pixel

    16

  • Ray Tracing

    • Cast a ray from the eye through each pixel • Trace secondary rays (shadow, reflection, refraction)

    17

  • “Monte-Carlo Ray Tracing”

    • Cast a ray from the eye through each pixel
    • Cast random rays from the hit point to evaluate the hemispherical integral using random sampling

  • “Monte-Carlo Ray Tracing”

    • Cast a ray from the eye through each pixel
    • Cast random rays from the visible point
    • Recurse


  • “Monte-Carlo Ray Tracing”

    • Systematically sample light sources at each hit
      – Don’t just wait for the rays to hit them by chance

  • Results

    Henrik Wann Jensen

    Courtesy of Henrik Wann Jensen. Used with permission.

  • Monte Carlo Path Tracing

    • Trace only one secondary ray per recursion
      – Otherwise the number of rays explodes!
    • But send many primary rays per pixel (antialiasing)

  • Monte Carlo Path Tracing

    • Trace only one secondary ray per recursion
      – Otherwise the number of rays explodes!
    • But send many primary rays per pixel (antialiasing)
    • Again, trace shadow rays from each intersection

  • Monte Carlo Path Tracing • We shoot one path from the eye at a time

    – Connect every surface point on the way to the light by a shadow ray

    – We are randomly sampling the space of all possible light paths between the source and the camera

    25
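The one-path-at-a-time recursion described above can be sketched as follows. The `scene` and `hit` interfaces (`intersect`, `direct_light`, `sample_bounce`) are hypothetical names invented for this sketch, not from the slides.

```python
MAX_DEPTH = 4  # cut the recursion off after a few bounces

def trace_path(ray, scene, depth=0):
    """Estimate radiance along `ray` with exactly one secondary ray
    per bounce, plus a shadow-ray (direct light) term at each hit."""
    if depth >= MAX_DEPTH:
        return (0.0, 0.0, 0.0)
    hit = scene.intersect(ray)
    if hit is None:
        return scene.background
    # Shadow ray: connect this surface point to the light source
    radiance = scene.direct_light(hit)
    # One random bounce only -- ray count stays linear in depth,
    # instead of exploding exponentially
    bounce_ray, weight = hit.sample_bounce()
    indirect = trace_path(bounce_ray, scene, depth + 1)
    return tuple(r + w * i for r, w, i in zip(radiance, weight, indirect))
```

Shooting many such paths per pixel and averaging them gives both the indirect lighting and the antialiasing mentioned on the slides.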

  • Path Tracing Results

    • 10 paths/pixel

    26 Courtesy of Henrik Wann Jensen. Used with permission.

  • Path Tracing Results: Glossy Scene

    • 10 paths/pixel
    • Note: More noise. This is not a coincidence; the integrand has higher variance (the BRDFs are “spikier”).

    27 Courtesy of Henrik Wann Jensen. Used with permission.

  • Path Tracing Results: Glossy Scene

    • 100 paths/pixel

    28 Courtesy of Henrik Wann Jensen. Used with permission.

  • Demo

    • http://madebyevan.com/webgl-path-tracing/

    31

    Image removed due to copyright restrictions. Please see the above link for further details.

  • Path Tracing is costly • Needs tons of rays per pixel!

    34

  • Irradiance Caching • Store the indirect illumination • Interpolate existing cached values • But do full calculation for direct lighting

    40

  • Photon Mapping

    • Preprocess: cast rays from light sources, let them bounce around randomly in the scene

    • Store “photons”

    44

  • Photon Mapping

    • Preprocess: cast rays from light sources • Store photons (position + light power + incoming direction)

    45

  • The Photon Map

    • Efficiently store photons for fast access • Use hierarchical spatial structure (kd-tree)

    46

  • Photon Mapping - Rendering • Cast primary rays • For secondary rays

    – reconstruct irradiance using adjacent stored photons – Take the k closest photons

    • Combine with irradiance caching and a number of other techniques

    Shooting one bounce of secondary rays and using the density approximation at those hit points is called final gathering.

    47
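The k-closest-photon density estimate could look like the brute-force sketch below. A real photon map would answer the nearest-neighbor query with the kd-tree; the flat `photons` list and scalar powers are simplifications for illustration.

```python
import math
import heapq

def estimate_irradiance(photons, point, k=50):
    """Density-estimate sketch: take the k photons nearest `point`
    (each photon is (position, power)) and divide their total power
    by the area of the disc of the enclosing radius."""
    def dist2(ph):
        return sum((a - b) ** 2 for a, b in zip(ph[0], point))
    nearest = heapq.nsmallest(k, photons, key=dist2)
    if not nearest:
        return 0.0
    # Squared radius of the sphere containing the k photons
    r2 = max(dist2(ph) for ph in nearest)
    if r2 == 0.0:
        return 0.0
    return sum(ph[1] for ph in nearest) / (math.pi * r2)
```

Larger k smooths the estimate at the cost of blurring illumination detail, which is why the slides combine this with final gathering rather than using it directly for primary hits.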

  • More Global Illumination Coolness

    • Many materials exhibit subsurface scattering
      – Light doesn’t just reflect off the surface
      – Light enters, scatters around, and exits at another point
      – Examples: Skin, marble, milk

    Images: Jensen et al.

    49 Courtesy of Henrik Wann Jensen. Used with permission.

  • That Was Just the Beginning

    • Tons and tons of other Monte Carlo techniques – Bidirectional Path Tracing

    • Shoot random paths not just from the camera but also from the light, and connect the path vertices by shadow rays

    – Metropolis Light Transport • And Finite Element Methods

    – Use basis functions instead of random sampling – Radiosity (with hierarchies & wavelets) – Precomputed Radiance Transfer

    • This would warrant a class of its own!

    51

  • What Else Can We Integrate?

    • Pixel: antialiasing
    • Light sources: soft shadows
    • Lens: depth of field
    • Time: motion blur
    • BRDF: glossy reflection
    • (Hemisphere: indirect lighting)

    52

    © source unknown. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

    Courtesy of Henrik Wann Jensen. Used with permission.

    © ACM. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

  • Domains of Integration

    • Pixel, lens (Euclidean 2D domain) – Antialiasing filters, depth of field

    • Time (1D) – Motion blur

    • Hemisphere – Indirect lighting

    • Light source – Soft shadows

    Famous motion blur image from Cook et al. 1984

    53 © ACM. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

  • Motivational Eye Candy

    • Rendering glossy reflections
    • Random reflection rays around mirror direction
      – 1 sample per pixel

    54

    © source unknown. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

  • Motivational Eye Candy

    • Rendering glossy reflections
    • Random reflection rays around mirror direction
      – 256 samples per pixel

    55 © source unknown. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

  • Monte Carlo Integration

    • S is the integration domain
      – Vol(S) is the volume (measure) of S
    • {xi} are independent uniform random points in S
    • The integral is estimated as the average of f times the volume of S
    • Variance is proportional to 1/N
      – Avg. error is proportional to 1/sqrt(N)
      – To halve the error, need 4x samples

    61
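In one dimension the estimator on the slide, Vol(S) times the average of f at uniform samples, is only a few lines. A minimal sketch:

```python
import random

def mc_integrate(f, lo, hi, n=100_000):
    """Monte Carlo estimate of the integral of f over [lo, hi]:
    Vol(S) = hi - lo times the average of f at n uniform samples."""
    vol = hi - lo
    total = sum(f(random.uniform(lo, hi)) for _ in range(n))
    return vol * total / n
```

As the slide says, the average error falls as 1/sqrt(n), so halving the error requires four times the samples.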

  • Monte Carlo Computation of π

    • The probability that a uniform random point in the unit square falls inside the quarter disc is π/4
    • Count the inside ratio n = # inside / total # trials
    • π ≈ n * 4
    • The error depends on the number of trials

    Demo:

    import random

    def piMC(n):
        success = 0
        for i in range(n):
            x = random.random()
            y = random.random()
            if x*x + y*y < 1.0:
                success += 1
        return 4.0 * success / n

  • Questions?

    • Images by Veach and Guibas, SIGGRAPH 95

    Naïve sampling strategy vs. optimal sampling strategy

    67 © ACM. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.

  • Stratified Sampling

    • With uniform sampling, we can get unlucky
      – E.g. all samples clump in a corner
      – If we don’t know anything about the integrand, we want a relatively uniform sampling
        • Not regular, though, because of aliasing!
    • To prevent clumping, subdivide the domain into non-overlapping regions
      – Each region is called a stratum
    • Take one random sample per stratum

    77
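One sample per stratum over [0, 1) is a minimal version of this idea: the strata prevent clumping, and the jitter inside each stratum avoids the regular grid that would cause aliasing.

```python
import random

def stratified_samples(n):
    """Split [0, 1) into n equal strata and take one uniform random
    sample inside each -- jittered, so no clumping and no regular grid."""
    return [(i + random.random()) / n for i in range(n)]
```

Feeding these samples to a Monte Carlo estimator typically reduces variance compared to plain uniform sampling of the same count.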

  • For Further Information...

    • 6.839!
    • Eric Veach’s PhD dissertation
      http://graphics.stanford.edu/papers/veach_thesis/
    • Physically Based Rendering by Matt Pharr, Greg Humphreys

    81

  • TIEA311 - Today in Jyväskylä

    I If you got interested, you may want to check the whole “Lecture 18” from the original course material.

    I Much more about integration on our courses about numerics. (Some years of math studies are a prerequisite.)

    I More about stratified sampling on our courses about data mining (exactly the same methods are used for some data mining / machine learning methods).

    I Perfect goals for thesis projects! Our faculty has a long history in related numerical methods and theory, so staff resources for thesis supervision should be quite easy to find.

  • TIEA311 - Final week in Jyväskylä

    Last week:

    I Ray Casting: cover fully. (DONE: necessary ideas covered on lecture; practicals in the final Assignments! One week to go!)

    I Ray Tracing: basic idea (DONE), skipped details → possible to continue as a “hobby project”; teachers of “TIEA306 Ohjelmointityö” may be contacted regarding credit for (any) hobby projects.

    Plan for the final two lectures:

    I First, a “debriefing” of Instanssi 2017.
    I Shading, texture mapping: Cover the principles up to the Phong model (last week) and texture coordinates (today).
    I Cherry-pick title slides from advanced stuff that we mostly defer to the follow-up course (starts next week) and/or future self-study.
    I On Thursday’s 2nd part, take a look back and discuss what actually happened on this course. (1st part: GPUs and rasterization)