
TNM046 Computer Graphics
Lab instructions 2013

Stefan Gustavson

May 20, 2013

Contents

1 Introduction
  1.1 About OpenGL
  1.2 Programming in OpenGL
    1.2.1 Window and input handling
    1.2.2 CPU graphics programming
    1.2.3 GPU graphics programming
  1.3 OpenGL program structure
  1.4 C programming
    1.4.1 C versus C++ and Java
    1.4.2 C syntax
    1.4.3 Functions, and nothing else
    1.4.4 Memory management
    1.4.5 Pointers and arrays
    1.4.6 Source files
    1.4.7 main()
    1.4.8 Example
  1.5 OpenGL bottlenecks
2 Exercises
  2.1 Your first triangle
    2.1.1 Vertex coordinates
    2.1.2 Shaders
  2.2 Transformations, more triangles
  2.3 Color, normals and shading
  2.4 Camera and perspective
  2.5 Textures, models from file, interaction
    2.5.1 Textures
    2.5.2 Geometry from files
    2.5.3 Interaction


1 Introduction

In these lab exercises you will learn how to program interactive computer graphics using OpenGL, a popular and very useful framework for 3D graphics. Contrary to most tutorials you will find on the Web, we will not take the easy shortcuts, but instead go to great lengths to make you understand the details and ask you to implement as much as possible yourself. You will not just be using existing code libraries; you will write your own code to understand properly what you are doing. The exercises focus on the fundamentals. There will not be time for the more advanced stuff during this introductory course, but you will learn enough to continue on your own using other resources. You will also have plenty of opportunities to learn and use more of OpenGL and 3D graphics in general in several other MT courses.

1.1 About OpenGL

OpenGL is a cross-platform application programming interface (API). It is available on most computing platforms today: Microsoft Windows, MacOS X, Linux, iOS, Android, and lately even for web-based content through WebGL. It was originally designed for the programming language C, but it can be used with almost any programming language in existence. We could have used Java, but Java OpenGL (JOGL) is a third party add-on to Java which is a bit tricky to get working, and Java is definitely not a common language for OpenGL programming. WebGL is an interesting platform, but as of 2013 it is still a bit immature, and it requires JavaScript which is, to put it bluntly, a useful but terrible programming language. The most common languages for OpenGL programming are C and C++. Because there is nothing in OpenGL as such that requires an object-oriented language, we have chosen plain old C as the language for these lab exercises. C was invented in the early 1970s, but it has evolved over time and remains both useful and popular.

Your adventures in OpenGL will begin with opening a window and drawing a single triangle in 2D, and end with drawing an interactive view of a complicated 3D model loaded from a file, complete with a surface texture and simulated lighting. More advanced things like large scenes, animated models, advanced visual effects and more complicated navigation and interaction will be saved for later courses, but feel free to explore those concepts on your own if you want to. It's not magic, and you can do it.

1.2 Programming in OpenGL

In older versions of OpenGL, before version 3, it was a very simple matter to draw a single triangle in a constant color, and many books and online tutorials still show you how to do this in the "old school" way. We will not ask you to do that, simply because you should never do it. We will not even tell you how it used to be done. Instead, we will show you how to do it in the modern, correct way, using OpenGL version 3 and above. It takes a little more effort to draw your first triangle, but you will not learn any bad habits, and the skills you acquire will be directly applicable to drawing large scenes with hundreds of thousands of triangles, complex surface appearances and many different light sources.


OpenGL has three layers of coding, as presented below.

1.2.1 Window and input handling

The first level involves opening a window on the monitor and getting access to drawing things in it with OpenGL commands. This is referred to as getting an OpenGL context. This level is done very differently under different operating systems, and despite our aim not to hide the details, we have chosen not to do that part from scratch, because it has absolutely nothing to do with 3D graphics as such. Instead, you will use a library called GLFW (www.glfw.org) to handle the stuff that is specific to the operating system. GLFW is available for Windows, Linux and MacOS X, and you may use any of those operating systems to do the lab work in this course. We will use GLFW for opening a window and setting its size, and also to handle mouse and keyboard input.

1.2.2 CPU graphics programming

The second level of coding for OpenGL is about specifying vertex coordinates, triangle lists, normals, colors, textures, lights and transformation matrices, and sending this data off to the graphics card (graphics processing unit, GPU) for drawing. Today, interactive 3D graphics is almost always drawn with hardware acceleration, and OpenGL was designed to make good use of it. This level is where we are going to spend much of our effort.

1.2.3 GPU graphics programming

The third level of coding is shader programming. Shaders are small programs that are sent to the graphics hardware for execution, and they come in two flavors: vertex shaders and fragment shaders. (Actually, there are two more: geometry shaders and tessellation shaders, but they are more specialized, and many applications don't need them. We won't mention them further here.)

A vertex shader operates on each vertex of a 3D model. It receives vertex coordinates, normals, colors and the like from arrays in your main program and transforms the coordinates and normals to screen coordinates using transformation matrices. The reason for this being done on the GPU is that matrix multiplications can be burdensome on the CPU if you have hundreds of thousands of vertices and each of them is to be transformed by a matrix many times per second. The repetitive task of multiplying a matrix and a huge number of vectors is better performed by specialized, parallel hardware, leaving the CPU free to perform other tasks.

A fragment shader operates at the pixel level. Between the vertex shader and the fragment shader, a triangle is subdivided into a collection of pixel samples, fragments, that are to be drawn to the screen. For each of those pixels, the fragment shader receives interpolated coordinates, normals and other data from the vertex shader, and computes the RGB color value.

Shaders are written in a special programming language called OpenGL Shading Language, GLSL. It is a small and simple language, with a syntax similar to C and C++ but specialized for computations on vertex and pixel data. There are convenient vector and matrix data types in GLSL, and it is quite easy to learn GLSL if you know computer graphics and some other C-like programming language.

We will not go deep into details on shader programming, and you will not be required to learn all the intricacies of GLSL, but you will write simple vertex shaders to transform vertices and normals from model space to screen space, and likewise simple fragment shaders to compute lighting and color for your objects. Further adventures in shader programming are encouraged, but they are outside the scope of this course. You will learn more in later courses on computer graphics. If you are curious and don't want to wait, we recommend the tutorials at www.opengl-tutorial.org.

1.3 OpenGL program structure

There is a general structure and sequence of operations which is common to all OpenGL rendering. For each frame, this is what needs to be done:

- Clear the screen
- Set up data for drawing (vertices, matrices, shaders, textures)
- Draw a bunch of triangles
- Repeat the previous two steps if you want to draw more objects
- Swap the back and front buffers to show your rendered image

Settings in OpenGL that are made in the setup phase take effect immediately, and remain valid for subsequent drawing commands until a setting is cleared or changed. Drawing commands issued in the drawing phase are executed right away, and triangles from a previous drawing command can be drawn by the GPU while the main CPU program prepares the data for the next drawing command. The CPU waits for the GPU to finish drawing before swapping the buffers, but all other commands are sent to the GPU without any exact knowledge of how long they will take or exactly when they finish.

1.4 C programming

This lab series is written in C rather than C++. C is getting old, but it is still used a lot for real programming tasks, so learning C is still very useful. It is a simple language, which can no longer be said of C++. No advanced features of C++ are needed for the tasks presented here. OpenGL was written for plain C, and is still very much suited to being used from C or other non-object oriented languages. When you learn more about graphics programming, you will probably wish to use some of the practical modern features of C++ and some useful libraries for C++, but here we will stick to C. You have seen and used C++ and Java, which both borrow most of their syntax from C, so C code should look quite familiar to you.

1.4.1 C versus C++ and Java

C is not a particularly pretty language. In fact, it allows and even encourages programmers to write very ugly programs, but it was designed to be efficient and easy to use. C is still highly useful and popular for many applications, particularly for hardware-related, system-level programming. Learning the basics of C is very helpful also for understanding C++. People switching from Java to C++ are often misled by the similarities between the two languages and forget the important differences. It is of great help to a C++ programmer to know some of the pitfalls of C and to keep in mind that C++ is somewhat of an object-oriented facade built on top of C, not an object-oriented language designed from the ground up like Java.

Compared to Java, C and even C++ are relatively inconvenient languages to use, but they have one definite advantage: speed. Programs written in C or C++ can execute ten times faster or more than exactly the same program written in Java. The reason is the large overhead in Java bytecode interpretation, object manipulation and runtime safety checks. C and C++ have no such overhead, but they are by design less secure and offer less support for the programmer.

1.4.2 C syntax

C has a syntax that should be familiar to you. Identifiers are case sensitive, a semicolon terminates a statement, curly braces are used to group statements together, and most control constructs like if statements, while and for loops and switch clauses look and work the same in C as in Java and C++. Primitive types are also mostly similar: int, double, float and the like. Most operators have an identical syntax, an array index is placed within square brackets, and comments are written in the same manner. At the source code level, C, Java and C++ look very much alike for the most part. Almost all C statements are also valid in C++, and many C statements are valid Java statements as well. (The reverse does not hold true, though: there are many things you can do in Java and C++ which you can't do in C.) Some strange quirks in Java and C++, like the increment operator (i=i+1 may be written i++) and the rather bizarre syntax of a for loop, are directly inherited from C.

1.4.3 Functions, and nothing else

C is an imperative, procedural language, not an object-oriented language. Everything that Java and C++ have in terms of object orientation is entirely lacking in C. There are no classes, no objects, no constructors or destructors, no methods, no method overloading and no packages. In C, you have variables and functions, and that's it. You may think of it as if you only had one single class in a Java or C++ program.

Furthermore, a C function must have a globally unique name. In Java and C++, methods belong to classes and only need to have a unique name within the class or namespace, and methods within a class may be overloaded and share a common descriptive name if only their lists of parameters differ. C makes no such distinctions. One result of this is that functions with similar purposes that operate on different kinds of data must be distinguished by unique names. For a large API like OpenGL, this is definitely a drawback. The OpenGL function to send a value to a shader program (a concept we will learn about later) may take floating point, integer or unsigned integer arguments, and possibly 2, 3 and 4-element vectors of either type. This results in the following twelve function names being used for what is basically the same purpose:

glUniform1f(), glUniform2f(), glUniform3f(), glUniform4f(),
glUniform1i(), glUniform2i(), glUniform3i(), glUniform4i(),
glUniform1ui(), glUniform2ui(), glUniform3ui(), glUniform4ui()

The prefix "gl" in the name of every OpenGL function is due to the lack of classes, separate namespaces or packages in C. The suffixes are required to distinguish the variants from each other. In a more type-aware language, these functions could all have been methods of an OpenGL library object or namespace, and all could have been given the same name: Uniform(). (In fact, object-oriented interfaces to OpenGL often work that way.)

1.4.4 Memory management

Java has a very convenient memory management system that takes care of allocating and de-allocating memory without the programmer having to bother with it. C++ has a reasonably convenient syntax to create and destroy objects by the new and delete keywords. C, on the contrary, requires all memory allocation and de-allocation to be performed explicitly. This is a big nuisance to programmers, and a very common source of error in a C program. However, invisible memory management is not only a blessing. The lack of automatic memory management and other convenient functionality is one of the reasons why C is much faster than Java, and C encourages writing more memory-efficient code than C++.

1.4.5 Pointers and arrays

Java by design shields the programmer from performing low-level, hardware-related operations directly. Most notably, direct access to memory is disallowed in Java, and objects are referenced in a manner that makes no assumption regarding where or how they are stored in the computer memory. C and C++, both being older, more hardware-oriented and immensely less secure languages, expose direct memory access to the programmer. Any variable may be referenced by its address in memory by means of pointers. A pointer is literally a memory address, denoting where the variable is stored in the memory of the computer. Pointers pointing to the wrong place in memory are by far the most common causes of error in C and C++ programs, but no useful programs can be written without using pointers in one way or another. Pointers are a necessary evil in C, and unfortunately they are just as essential in C++.

Arrays in Java are abstract collections of objects, and when you index an array in Java, the index is checked against upper and lower bounds to see whether the referenced index exists within the array. This is convenient and safe, but slow. C and C++ both implement arrays as pointers, with array elements stored in adjacent memory addresses. This makes indexing very easy: if you have an array arr with a number of elements and want to access arr[4], all you have to do is add 4 times the size of one array element to the address of the first element and look at that new address. Unfortunately, no bounds checking is performed, so an index beyond the last element of an array, or even a negative index, is perfectly allowed in C and C++ and will often execute without error. Indexing out of bounds of an array can cause fatal errors later in the program, and the end result will most probably be wrong. Indexing arrays out of bounds is a very common pointer error in C and C++, and forgetting to check bounds in critical code is the most common security hole in C and C++ programs alike.


1.4.6 Source files

The source code for a C program may be split into several files, but it is not a simple task to decide what goes where, or how many files to use. In Java, and mostly also in C++, the task is extremely straightforward: use one file for each class, and name the file after the class it contains. In C, there are no classes, so you could theoretically write even a very large C program in one single file. This is inconvenient, and not recommended even for the small programs you will create in these exercises, but different programmers split their code somewhat differently into several files. There are guidelines on how to organise C source code to make it easy to understand, but there is no single set of guidelines that everyone uses.

To compensate for the lack of packages in C, there is a concept called header files or include files, which is a sort of half-baked solution that works OK most of the time. Include files are inserted by the compiler into the source code at compile time, and included header files with type definitions, variable declarations and function prototypes can be used to provide access to extra libraries and functions that are defined in other source files which are compiled separately. Unfortunately, include files can be used inappropriately, as there is nothing in the C language itself that actually mandates what should go where. Self-taught and undisciplined C programmers sometimes re-invent the wheel badly or learn from bad but "clever" examples, and create a disorganised mess of files with a programming style of their own that can be very hard to read.

1.4.7 main()

A stand-alone C program must have a function called main. It is defined like this:

int main(int argc, char *argv[])

When a C program starts, its main function is invoked. Where to go from there is entirely up to the programmer. The program exits when the main function returns, or when the exit() function is called. This is similar to both Java and C++.

1.4.8 Example

A very short but complete C program is given in Listing 1. It calculates a function value for ten numbers from 0.0 to 0.9, stores them in an array and exits. It is of no particular use, but it shows the syntax for some common tasks.

A program that does something useful probably needs some input and output. We can include a standard library header file <stdio.h> that provides this for us. There are many standard library functions in C, and not much real programming can be done without them. However, it is far beyond the scope of this brief introduction to present any of them. We just use the printf function as an example in Listing 2 below. (Note that even this very brief example is a complete C program.) Documentation on the C standard library functions can be found in books on C programming and on the Internet.

/* A program that does nothing much useful */

double myfunction(double x) {
    double f;
    f = 3.0*x*x - 2.0*x + 8.0;
    return f;
} // End of myfunction()

int main(int argc, char *argv[]) {
    int i;
    double x;
    double values[10];
    for (i=0; i<10; i++) {
        x = 0.1*i;
        values[i] = myfunction(x);
    } // End of for loop
    return 0;
} // End of main()

Listing 1: C example code.

/* A program that prints a countdown and exits */

#include <stdio.h> // Standard input and output library

int main(int argc, char *argv[]) {
    int i;
    for (i=10; i>0; i--) {
        printf("%d, ", i);
    }
    printf("Launch!\n");
}

Listing 2: C example code.

1.5 OpenGL bottlenecks

The execution of an interactive 3D graphics application can encompass many different tasks: object preparation, drawing, creation and handling of texture images, animation, simulation and handling of user input. In addition to that, there may be a considerable amount of network communication and file access going on. 3D graphics is taxing even for a modern computer, and real world 3D applications often end up hitting the limit for what a computer can manage in real time.

The bottleneck can be in pure rendering power, meaning that the GPU has trouble keeping up with rendering the frames fast enough in the desired resolution. Such problems can often be alleviated by reducing the resolution of the rendering window. However, sometimes the bottleneck is at the vertex level, either because the models are too detailed or because some very complicated calculations are made in the vertex shader. A solution in that case is to reduce the number of triangles in the scene. Sometimes the bottleneck is not the graphics, but the simulation or animation performed on the CPU before the scene is ready for rendering. The remedy for that is different for each case, and it is not always easy to know what the problem is. A lot of effort in game development is spent on balancing the requirements from the gameplay simulation, the geometry processing and the pixel rendering in a manner that places reasonable demands on the CPU and GPU horsepower.

Your exercises in this lab series will probably not hit the limit on what a modern computer can manage in real time, but in the last exercise, where you load models from file, we will provide you with an example of a huge model with lots of triangles, to show what OpenGL on a modern computer is capable of, and to show that there is still a limit to what you can do if you want interactive frame rates.


2 Exercises

There are five exercises below in sections 2.1 to 2.5, each containing several tasks. Each task is enclosed in a frame, like this:

This is a task which you should perform.

Information on how to perform the tasks can be found in this text and in the book. In some cases you may need to look up OpenGL documentation on the Internet. Good documentation of OpenGL is available at http://www.opengl.org. A good set of more general tutorials going quite a bit further than these exercises is published at http://www.opengl-tutorial.org.

Of course, in most cases you also need to prepare and think to perform the tasks presented to you. You will probably need to spend extra time outside of scheduled lab sessions to prepare the tasks or to finish up.

Be prepared to demonstrate your solutions to the lab assistant for approval and comments. You are expected to demonstrate all tasks during the five scheduled lab sessions.


2.1 Your first triangle

To get you started, we present you with an example that opens a window, enters a rendering loop and waits for the program to be terminated by the user, either by closing its window or by pressing the ESC key. For the moment, the rendering loop just erases the frame, so all you will see is a blank screen in a specified color. Listing 3 presents working C code for a complete program, but a file more suitable as a base for you to build upon for the upcoming exercises is in the file GLprimer.c in the lab material. That file has some useful extra stuff in it and contains extensive, detailed comments, but it has the same overall structure as the simple example in Listing 3.

/* Author: Stefan Gustavson ([email protected]) 2013
 * This code is in the public domain.
 */

#include <stdio.h>   // Required for printf()
#include <GL/glfw.h>

void setupViewport() {
    int width, height;
    glfwGetWindowSize(&width, &height);
    glViewport(0, 0, width, height); // The entire window
}

int main(int argc, char *argv[]) {
    GLboolean running = GL_TRUE; // Main loop exits when this is set to GL_FALSE
    GLFWvidmode vidmode; // Desktop video mode (declaration missing in the original listing)

    glfwInit();
    if (!glfwOpenWindow(256, 256, 8, 8, 8, 8, 32, 0, GLFW_WINDOW)) {
        glfwTerminate(); // glfwOpenWindow failed, quit the program.
        return 1;
    }

    printf("GL vendor:   %s\n", glGetString(GL_VENDOR));
    printf("GL renderer: %s\n", glGetString(GL_RENDERER));
    printf("GL version:  %s\n", glGetString(GL_VERSION));
    glfwGetDesktopMode(&vidmode); // Fill in the desktop video mode
    printf("Desktop size: %d x %d pixels\n", vidmode.Width, vidmode.Height);

    glfwSwapInterval(0); // Do not wait for screen refresh between frames

    while (running) {
        setupViewport();
        glClearColor(0.3f, 0.3f, 0.3f, 0.0f);
        glClear(GL_COLOR_BUFFER_BIT);

        // --- Rendering code should go here ---

        glfwSwapBuffers();
        if (glfwGetKey(GLFW_KEY_ESC) || !glfwGetWindowParam(GLFW_OPENED)) {
            running = GL_FALSE;
        }
    }

    glfwTerminate();
    return 0;
}

Listing 3: C code for a very small OpenGL program using GLFW.

Do not blindly accept the code in GLprimer.c as some kind of magic spell. Instead, please study it carefully. You are not asked to understand every little detail about it right away, but read it. Print it out if you like, and scribble on the printout until you understand its structure. It is not a lot of code, and it has lots of comments, so most of it should be perfectly readable to you. This code will be the foundation for all the following exercises, and you should be almost as familiar with it as if you had written it yourself. You can skip understanding the details in the included library tnm046.c and its associated header file tnm046.h, but take a good look at those files as well. They contain some ugly and uninteresting code to make modern OpenGL work under Microsoft Windows (the so-called "extension loading"), but also some functions that you will use later for loading shaders from files. Later, you will be asked to add some functions of your own creation to these files.

Make sure to really read the code in GLprimer.c. We cannot stress this enough. Don't just gloss over it, or leave the reading for later. Read it, line by line. Do not proceed until you have done so.

When you have read the code, create a project in your programming editor of choice,compile the code in GLprimer.c and make sure it works.

The window that is opened will be blank, because we are not drawing anything, but it should open and close without error messages. You will need to link with the OpenGL library (of course) and the library GLFW. How to do that depends on your operating system, your compiler and your development environment. You may also have to download the GLFW library. Some pre-packaged development environments include it by default, but most don't. GLFW is the only extra library that is required for these lab exercises. In the course lab, the GLFW library is already installed.

2.1.1 Vertex coordinates

There is a rendering loop in the program, and it follows the general structure presented in section 1.3. However, we are not drawing anything yet – the rendering loop just clears the screen. Drawing in OpenGL is generally performed by sending vertex coordinates and triangle lists in large chunks to the GPU. This is done by filling arrays with data and using special functions to copy that data to vertex buffers. This is not terribly convenient for the programmer, but it is fast and efficient for the GPU, because all the data is sent in one go, and it is copied to the GPU memory to speed up the drawing and enable efficient repeated rendering of the same object in many consecutive frames. This method is the modern way of getting data to the GPU. There used to be simple but slow alternative methods, but they should not be used any more, so there is no point in learning them, and we will not teach them. What you should use is a vertex buffer. The code we ask you to use below might seem like a lot of work for drawing a single triangle, or even a few dozen triangles, but when you are drawing thousands or even millions of triangles, this is a very efficient and structured way to do it.

A vertex buffer consists of a sequence of vectors, packed together in sequence as a linear array of float numbers. OpenGL defines its own datatype called GLfloat, but on all modern platforms it is merely a synonym for float, and you may use either type interchangeably in your code. A simple demo to get us started is to define a static or static const array and initialize it on creation in the C code. (Just static data is not necessarily constant, it can be changed, but the type qualifier const tells the compiler that it should not change during execution.) In C, each element of an array that is declared can be given an initialization value. Thus, simple vertex arrays can be created and set to suitable values directly in your source code, and compiled into the program. This is not the common way of getting geometry into an OpenGL program, but it is useful for simple demonstrations.

To send data to the GPU and draw it, you need to go through several steps:

1. Create a vertex array object (VAO) to refer to your geometry

2. Activate ("bind") the vertex array object

3. Create a vertex buffer and an index buffer

4. Activate ("bind") the vertex buffer and index buffer objects

5. Create data arrays that describe your 3D geometry

6. Connect the data arrays to the OpenGL buffer objects

7. Possibly repeat steps 1 through 6 to specify more objects for the scene

8. Activate the vertex array object you want to draw ("bind" it again)

9. Issue a drawing command to the GPU

10. Possibly repeat steps 8 and 9 until the entire scene has been drawn

Code for this sequence of operations for a single object is given in Listing 4.

In the C code, vertex array objects and buffer objects are just integers. The actual vertex data resides in a data structure in GPU memory, and the integers are just ID numbers that are associated with that data structure, to keep track of the data that was sent to the GPU and to refer to it later during rendering. The name "object" for these integers is a bit confusing, but you can think of them as "handles": unique identifiers that are used to refer to objects that are actually stored on the GPU.

Once you have familiarized yourself with what the code in Listing 4 does, add it to your program, compile and run it.

You should see a triangle on the screen, and it should appear white. Note where your vertices end up. The default coordinate system in an OpenGL window extends across the entire window and ranges from −1 to 1 in both x and y, regardless of the pixel dimensions. The z direction points straight out from the screen, and the z coordinate of a vertex does not affect its apparent position on the screen in the default view. However, the view can of course be changed, and objects can be transformed to make their z depth influence how they are drawn. We'll get to that later.


2.1.2 Shaders

Modern use of OpenGL requires shaders. We could get away without them here only because most OpenGL implementations are somewhat forgiving: a failure to specify shaders results in a default behaviour of not transforming the vertices (in the vertex shader step) and drawing all pixels in a solid white color (in the fragment shader step). This is fairly useless, and we shouldn't rely on it, so let's write a couple of shaders like we are supposed to.

Create two empty files and write code in them according to Listings 5 and 6. We will soon get to what the code actually does, and you will extend it later, but for the moment, just copy it. Note that the first line, #version 330 core, is also required. Name the files whatever you like, but use the extension .glsl, and use descriptive names.

GLSL is similar in syntax to C, so you can add the files to your project in your development environment, and open and edit them with the C editor. Even if the editor doesn't really understand GLSL and makes some mistakes in syntax highlighting and indentation, a C editor works reasonably well also for GLSL editing. However, do not include the shaders in the list of files that are to be compiled by the C compiler. The shader code is not meant for the CPU, and it is not compiled into the executable file. Instead, the shader files are read when the program is run, and the GLSL code is compiled by the OpenGL driver. This means that if you only make changes to the shaders, you do not need to recompile your main program. Just change the text in the shader files and run your program again.

To use a shader pair, both shaders need to be compiled, linked and activated together as a program object in OpenGL. Shader compilation is a bit complicated, so we present you with a function to do the job in the file tnm046.c. The function createShader() takes as input arguments the names of two files, one for the vertex shader and one for the fragment shader, and returns a program object. If the compilation fails for some reason, for example if a file can't be found or if you have a syntax error in your code, an error message is printed and no valid program object is returned.

Using the program object after it has been created is as simple as calling the OpenGL function glUseProgram() with the object as its argument. All subsequent objects will be rendered using that shader program until you make another call to change it. You can also deactivate the shader program by calling glUseProgram(0).

The C code you need to add to your program is presented in Listing 7. As you can see, shader program objects, like vertex array objects, are plain integers in C. The actual shader program is stored in GPU memory, and the integer is just a numeric ID, a reference to the shader program that was sent to the GPU.

Add shaders to your program, and compile and run it to see that things work the way they should.

Orange is nice, but we might want to draw the triangle in a different color. The pixel


color is set in our fragment shader by assigning an RGB value to the output variable color.

Edit your fragment shader to draw the triangle in a different color.

Note that you do not need to recompile your code if you only change the GLSL files. It is enough to just quit and restart your program.

Change the position of some vertices by editing the array in the C code.

Changes to the C code require a recompilation.

The vertex positions can also be changed in the vertex shader. That is, in fact, its very purpose. The vertex shader is executed once for each vertex, so changes to the vertex shader affect all vertices in the object.

Change the position of all vertices by editing the vertex shader. Translate the triangle by adding a constant vector, and scale the triangle by multiplying the coordinates with a scalar (uniform scaling) or a vector (non-uniform scaling).

The triangle is all a single color. That's a bit boring. OpenGL has an inherent mechanism for interpolation between vertices. Any property of a triangle is allowed to vary across its surface, including normal vectors, texture coordinates and colors. To specify additional properties for each vertex, you need to activate additional vertex attribute arrays similar to the one you used for the vertex coordinates. Each additional array needs to be connected to input values in the shader, and the vertex shader needs to pass the interpolated values on to the fragment shader through a pair of variables: one out variable from the vertex shader and one in variable in the fragment shader. Both should have the same name and type.

To specify a color for each vertex, you can use the code in Listing 8. You also need to make changes to the shader code to pass the color values to the shaders and use them for rendering. The shader code is presented in Listing 9.

Specify different colors for each vertex of your triangle, and edit the shaders to render the triangle with a smooth interpolated color across its surface.


// --- Put this code at the top of your main() function.

// Vertex coordinates (x,y,z) for three vertices
GLuint vertexArrayID, vertexBufferID, indexBufferID;
static const GLfloat vertex_array_data[] = {
    -1.0f, -1.0f, 0.0f,  // First vertex, xyz
     1.0f, -1.0f, 0.0f,  // Second vertex, xyz
     0.5f,  1.0f, 0.0f   // Third vertex, xyz
};
// A single triangle built from vertices 0, 1 and 2
static const GLuint index_array_data[] = {
    0, 1, 2
};

// ----------------------------------------------------
// --- Put this code after loadExtensions(), but before the rendering loop

// Generate 1 VAO, put the resulting identifier in vertexArrayID
glGenVertexArrays(1, &vertexArrayID);
// Generate 1 buffer, put the resulting identifier in vertexBufferID
glGenBuffers(1, &vertexBufferID);
// Generate 1 buffer, put the resulting identifier in indexBufferID
glGenBuffers(1, &indexBufferID);

// Activate the vertex array object
glBindVertexArray(vertexArrayID);
// Activate the vertex buffer object
glBindBuffer(GL_ARRAY_BUFFER, vertexBufferID);
// Present our vertex coordinates to OpenGL
glBufferData(GL_ARRAY_BUFFER, sizeof(vertex_array_data),
             vertex_array_data, GL_STATIC_DRAW);
// Enable vertex attribute array 0 to send it to the shader.
glEnableVertexAttribArray(0);
// Specify the format of the data in the vertex buffer.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
// The arguments specify, from left to right:
// Attribute 0 (must match the layout in the shader).
// Dimensions 3 means x,y,z - becomes a vec3 in the shader.
// Type GL_FLOAT means we have float input data in the array.
// GL_FALSE means "no normalization", but has no meaning for float data.
// Stride 0 means vertex x,y,z are packed tightly together.
// Array buffer offset 0 means our data starts at the first element.

// Activate the index buffer object
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferID);
// Present our vertex indices to OpenGL
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(index_array_data),
             index_array_data, GL_STATIC_DRAW);
// Deactivate the vertex array object again to be nice
glBindVertexArray(0);

// ----------------------------------------------------
// --- Put the following code in the rendering loop

// Activate the vertex array object we want to draw
// (In most cases, we have more than one to choose from)
glBindVertexArray(vertexArrayID);
// Finally, draw our triangle with 3 vertices.
// When the last argument is NULL (0), it means "use the currently bound
// index buffer". (This is not obvious.) The index buffer is part of the
// VAO state and is bound again whenever the VAO is bound.
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, (void*)0);

Listing 4: C code to define and use minimal vertex and index arrays.


#version 330 core

layout(location = 0) in vec3 Position;

void main() {
    gl_Position = vec4(Position, 1.0);
}

Listing 5: A minimal vertex shader

#version 330 core

out vec4 color;

void main() {
    color = vec4(1.0, 0.5, 0.0, 1.0);
}

Listing 6: A minimal fragment shader

// --- Add this to the variable declarations
GLuint programObject; // Our single shader program

// --- Add this in main() after loadExtensions() and before the rendering loop
programObject = createShader("vertex.glsl", "fragment.glsl");

// --- Add this to the rendering loop, right before the call to glDrawElements()
glUseProgram(programObject);

Listing 7: Shader activation using the function createShader() from tnm046.c

// --- Add this where the other vertex buffer declarations are
GLuint colorBufferID; // Vertex colors

// --- Add this after the other array declarations
static const GLfloat color_array_data[] = {
    1.0f, 0.0f, 0.0f, // Red
    0.0f, 1.0f, 0.0f, // Green
    0.0f, 0.0f, 1.0f  // Blue
};

// --- Add this after the other vertex buffer activations
// Generate buffer, activate it and copy the data
glGenBuffers(1, &colorBufferID);
glBindBuffer(GL_ARRAY_BUFFER, colorBufferID);
glBufferData(GL_ARRAY_BUFFER,
             sizeof(color_array_data), color_array_data, GL_STATIC_DRAW);
// Tell OpenGL how the data is stored in our color buffer
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
// Attribute 1, 3 dimensions (R,G,B - vec3 in the shader),
// type GL_FLOAT, not normalized, stride 0, start at element 0
// Enable a second attribute (in this case, color)
glEnableVertexAttribArray(1);

Listing 8: Specifying vertex colors as a vertex attribute array


// --- Add this to the declarations in the vertex shader
layout(location = 1) in vec3 Color;
out vec3 interpolatedColor;

// And somewhere in its main() function, add this:
interpolatedColor = Color; // Pass interpolated color to fragment shader

// --- Add this to the declarations in the fragment shader
in vec3 interpolatedColor;

// And in its main() function, set the output color like this:
color = vec4(interpolatedColor, 1.0);

Listing 9: Using vertex colors in the shaders


2.2 Transformations, more triangles

A single triangle is not a terribly exciting scene, but all it takes to make a more interesting object is more triangles, so let's keep the scene this simple for a while longer.

A very fundamental part of 3D graphics is transformations, formalized by 4×4 matrices and homogeneous coordinates. Transformations are usually performed on the GPU, because they can be done faster and more efficiently there. Matrices can be created and used entirely within the shader code, but for the most part the transformations need to be controlled by the main program, depending on camera movements and changes to the scene. Data that needs to change, but which is constant across an entire object rather than unique for every vertex, is sent to shaders by a mechanism called uniform variables. A variable in CPU memory is sent to the GPU by a call to one of a set of OpenGL functions starting with glUniform. The full mechanism for sending uniform variables to a shader program consists of three steps:

• Create and set a variable of the correct type in the CPU program

• Find the uniform variable by name in the shader program object

• Copy the variable value to the shader program object

A uniform variable time of type float is declared in GLSL as shown in Listing 10.

uniform float time;

Listing 10: Declaration of a uniform variable in a shader

In GLSL, uniform variables must be declared in a global scope, outside of any functions. How to find this variable by name and set it from a C program is shown in Listing 11:

// Declarations: the C variable and the location of its GLSL counterpart
float time;
GLint location_time;

// Do this before the rendering loop
location_time = glGetUniformLocation(programObject, "time");

// Do this in the rendering loop to update the uniform variable "time"
time = (float)glfwGetTime(); // Number of seconds since the program was started
glUseProgram(programObject); // Activate the shader to set its variables
if(location_time != -1) { // If the variable is not found, -1 is returned
    glUniform1f(location_time, time); // Copy the value to the shader program
}

Listing 11: Setting a uniform shader variable from C

Uniform variables can be declared in both vertex shaders and fragment shaders. If a uniform variable of the same name is declared in both the vertex and the fragment shader, it gets the same value in both places. It is an error to declare uniform variables of the same name but different type in shaders that are supposed to be compiled into the same program and work together.


Add animation to your shader by including the code in Listings 10 and 11 in your program from the previous exercise. Simple but fun ways of using the uniform variable time in your shaders are to scale your vertices uniformly with sin(time) in the vertex shader, and to set the color in the fragment shader to something that depends on sin(time). Try both!

Now it's time to start using proper transformation matrices in our vertex shader. A transformation matrix can be declared as a 4×4 array in C, but by tradition it is more common in OpenGL to declare it as a one-dimensional array of 16 values. The memory layout is the same either way; it's just a sequence of 16 numbers. C programmers are used to treating multi-dimensional arrays and linear arrays interchangeably, and it is far too late to break that habit now. This is a side effect of the syntax for declaring and using arrays in C and C++, and it is a quirk that generations of programmers have learned to accept. Thus, a uniform variable uniform mat4 T in GLSL can be set from C in the manner shown in Listing 12. Note that the name of the variable in the C code may be different from its name in the GLSL code. The call to glUniform() copies the value of one variable in C to a different variable in GLSL. There is no direct association by name between them.

GLfloat T[16] = {
    1.0f, 0.0f, 0.0f, 0.0f,
    0.0f, 1.0f, 0.0f, 0.0f,
    0.0f, 0.0f, 1.0f, 0.0f,
    0.0f, 0.0f, 0.0f, 1.0f
};
GLint location_T;

location_T = glGetUniformLocation(programObject, "T");
glUseProgram(programObject); // Activate the shader to set its variables
if(location_T != -1) { // If the variable is not found, -1 is returned
    glUniformMatrix4fv(location_T, 1, GL_FALSE, T); // Copy the value
}

Listing 12: A 4×4 uniform matrix variable in C

For reasons which we won't go further into here, the individual elements of a matrix in OpenGL are specified in a column-by-column fashion, not row by row. Therefore, the list of initialization values for the variable T in Listing 12 should be read as the transpose of the matrix. The element at row i, column j, where i and j are both in the range 0 to 3, is element T[4*j+i] in the 1D array in C. The rightmost column, where the translation components should be, consists of elements T[12], T[13], T[14] and T[15]. Remember this. You will need it later.

Compared to glUniform1f(), the call to glUniformMatrix4fv() in Listing 12 has two extra parameters that deserve some further mention. The count, 1, means that this is a single matrix, and that it is being copied to a single mat4 variable in the shader. If a shader needs many different matrices, say N of them, it can be a good idea to create a large array in C with 16N numbers and copy them to a mat4 array in GLSL. For these exercises, you will only need a few matrices, and it is probably wiser to keep them in separate variables with separate names both in C and in GLSL. GL_FALSE tells OpenGL


to copy the matrix as it is specified. Substituting GL_TRUE would transpose the matrix during upload, which is of no particular use to us here.

Once a matrix is available in the vertex shader, coordinates can be multiplied with it instead of being used directly for drawing, as shown in Listing 13.

// Transform (x,y,z) vertex coordinates with a 4x4 matrix T

gl_Position = T * vec4(Position, 1.0);

Listing 13: Matrix transformation in the vertex shader

To experiment with matrix-based transformations, declare and initialize a matrix in your C program, declare a corresponding uniform mat4 T in your vertex shader, send the matrix to the shader and use it for transformation.

Edit the values for your matrix to perform different kinds of transformations: uniform scaling in x and y, non-uniform scaling, translation, and finally a rotation around the z axis. Make sure you understand what happens, and why. Note where the translation coefficients should be specified in your matrix in the C code.

Create a second matrix in your code. Set one to a translation, and the other one to a rotation around the z axis. Transform your vertices by multiplying both matrices with the vertex coordinate vector in your shader. Change the order of the multiplication and note the difference it makes.

While GLSL has the capability to multiply matrices with each other, and the GPU can do this kind of operation a lot faster than the CPU, operations in the vertex shader are performed once for every vertex. A typical scene can contain a million vertices, so it is often wasteful to perform a matrix multiplication between two uniform variables for each vertex. A better way of doing composite transformations is to multiply the matrices together on the CPU, and send the composite matrix as one single uniform matrix to the shader. Matrix multiplication on the CPU was included in older versions of OpenGL in the form of a matrix stack, but to keep the API simple it has been removed from more recent versions. It's still there for backwards compatibility, but you should not use it, and we will not mention it further. A matrix stack is still a good idea for large and complex scenes with many different transformations, but that is now a task for external libraries or your own code.

A C function to multiply two matrices in a reasonably simple manner could be declared like the stub in Listing 14. For simplicity, it is a three-argument function, where the two matrices to be multiplied are specified as input arguments, and the matrix that is to contain the result is specified as an output argument. In C, arrays are passed as pointers, so arrays can always be both read and written when sent as arguments to a function. The reason for sending the output array to the function is that it is a little bit tricky to create new arrays in C during program execution. For our limited purposes, it is a lot easier to just


declare all the matrix variables you need at the top of your program, and fill them with values later in the code.

There are several alternative ways to implement the matrix multiplication function. One way is to use loops over row and column indices. This requires less code, but it is tricky to get all the indices right. Another and probably simpler way in this case is to write "stupid code" that computes each element of the output matrix explicitly, with sixteen very similar lines to compute the entire output matrix. This is more code, but it is probably easier to get right. For the computer, the "stupid code" with one line for each of the sixteen elements of Mout is actually faster to execute, because it does not require any loop variables or index computations: all indices are just constants.

In the following, you will be asked to create a few functions. Where to put them in your source code is for you to decide. You can place them in the same file as your main program, which is simple but clutters up the code and makes the file more difficult to navigate. You can also add them to the file tnm046.c to keep things more tidy in your main program. In that case, remember to also add the function prototypes to tnm046.h, or your program won't compile. Finally, you can create a new C source file and a corresponding header file with any name you choose and put your code there. In that case, remember to also include the header for your new file at the top of your main program, and make sure the new file is included in the list of files to compile and link when you build your application.

// Multiply 4x4 matrices M1 and M2 and put the result in Mout
void multmat4(float M1[], float M2[], float Mout[]) {
    // Your code goes here: compute and set each element of Mout
    // e.g. Mout[0] = M1[0]*M2[0] + M1[4]*M2[1] + ... etc
}

Listing 14: A function stub for matrix multiplication in C

Implement a C function like the one in Listing 14 to multiply two matrices. Test it by supplying your shader with a composite matrix built from a rotation and a translation, and make sure it generates the same result as when you previously multiplied the matrices in the same order in GLSL.

When testing and debugging your matrix multiplication code, it might be handy to have a function to print the elements of a matrix to the console. C is not as well equipped with printout functions as C++, and it is a bit of a hassle to generate nice printouts. The key to success is the function printf(), but that function is a bit complicated to use. Teaching you everything about printf() is beyond the scope of this course, so instead we present code to print a matrix in Listing 15.

Write C functions to fill a matrix with values that make it perform a rotation of a specified angle around a certain axis. Rotation around a general axis is a bit tricky to implement, so it is enough if you write functions to rotate around the principal axes x, y and z. Suggested function signatures are in Listing 16.


// Print the elements of a matrix M
printf("Matrix:\n");
printf("%6.2f %6.2f %6.2f %6.2f\n", M[0], M[4], M[8],  M[12]);
printf("%6.2f %6.2f %6.2f %6.2f\n", M[1], M[5], M[9],  M[13]);
printf("%6.2f %6.2f %6.2f %6.2f\n", M[2], M[6], M[10], M[14]);
printf("%6.2f %6.2f %6.2f %6.2f\n", M[3], M[7], M[11], M[15]);
printf("\n");

Listing 15: Code to print the elements of a matrix in C

void mat4rotx(float M[], float angle);
void mat4roty(float M[], float angle);
void mat4rotz(float M[], float angle);

Listing 16: Function prototypes for creation of rotation matrices

Now, test your rotation matrices. Rotate first around z to see that things work, but then try rotating around x or y. Finally, the 3D nature of the object and of OpenGL starts to show.

Add animation to your scene by making the rotation matrix dependent on time. The C function double glfwGetTime(), which is part of the GLFW library, returns the number of seconds that have passed since the program was started. Use it to rotate your triangle around y at a speed of one revolution per 2π seconds.

When you rotate things in 3D, you sometimes see the front of the triangle, and sometimes the back. Without any hints from lighting or perspective, it is impossible to make out which is which, and the animated display can be confusing. Sometimes it appears as if the rotation changes direction abruptly instead of the back face coming into view. To resolve this ambiguity, you can hide the back face of the triangle by putting the statement glEnable(GL_CULL_FACE) somewhere before you draw your triangle. This is called back face culling, and it is used all the time for more complicated 3D scenes. The most common kind of 3D objects are closed surfaces, "solid objects", whose interior never shows, so you can save half of the work during rendering by not drawing the inside of objects. Culling is performed after the vertex shader, and culled faces are never processed by the fragment shader.

Enable back face culling. Unfortunately, because we don't have a closed surface, our triangle disappears when it is facing the wrong way, and this is even more confusing than before.

Another way of visualizing the difference between front and back faces is to draw them in different styles. OpenGL can draw triangles either as filled surfaces, as outlines or just as dots at the vertices. You can set this with the function glPolygonMode(). It takes two arguments: one to determine whether you want to set the mode for front faces, back faces or both (GL_FRONT, GL_BACK, GL_FRONT_AND_BACK), and one to set the mode (GL_FILL, GL_LINE, GL_POINT). By default, both faces are set to GL_FILL.


Instead of back face culling, change the polygon rendering mode for back faces so that they are drawn as outlines. Keep the front faces as filled surfaces. This should help resolve the ambiguity between what is back and what is front, but it still looks a bit weird.

Now it's time to create some more triangles to make a better looking 3D object. A cube is simple but good enough for testing. When specifying triangles in the index array, you need to take care to specify the vertices for each triangle in the correct order, because the back face culling mechanism assumes that you have them in positive winding order when looking at the front face, as demonstrated in Figure 1. You also need to create each face of the cube from two triangles, by splitting the square along one of its diagonals. It's a bit tricky, so take care to get it right.

[Figure: a triangle with its vertices labeled 1, 2 and 3 in counter-clockwise order.]

Figure 1: Positive winding order when seen from the front.

Create your own vertex arrays to make a cube with corners at ±1 along each of the x, y and z axes, and a different color for each vertex. Use bright and contrasting colors for clarity.

Test your cube by drawing it with an animated rotation. Perform a fixed rotation around one axis and an animated rotation around a different axis so you get to see all the faces. To see whether you got the winding order right everywhere, enable back face culling and render the front faces as filled polygons. This makes any triangles with the wrong winding order stand out more clearly as holes in the surface.

Last in this exercise, we ask you to create an animation of your cube similar to the motion of a planet in orbit around a sun. The transformation should be a composite matrix made from four transformations. First, rotate the entire scene a small fixed angle around x to see it at a slight angle from above. Then rotate slowly around y to simulate the orbit of the planet. Then translate along either x or z to move the planet out from the center. Last, rotate more quickly around y to simulate the planet's spin around its own axis. Draw a cube with these transformations.


2.3 Color, normals and shading

For the cube, it would be nice to have each face in a constant color to easily make out which face is which. However, OpenGL does not have any notion of face colors – all geometry attributes are specified per vertex. Therefore, to have different colors for adjacent faces that share a vertex, you need to split the vertex into several vertices, each with the same position but a different color.

Create a cube with each face having a different, constant color. You need to split each vertex into three to make 24 vertices in total. Test your new object to make sure it displays correctly.

As you can see, more complicated objects are cumbersome to create by typing their vertex arrays directly into the source code. Fortunately, there are other ways to get geometry into OpenGL. One way is to generate the vertices by a function. Objects with regular shapes, like spheres and cylinders, can be generated programmatically by loops, and a small section of code can generate lots of triangles. This method is presented below. Another way is to create the shapes in an interactive 3D modeling program, save the vertex list to a file and read that file when the OpenGL program is run. A function to read geometry from a file will be presented later, in section 2.5. For now, let's at least encapsulate the details around how to generate, store and render geometry through the use of a data structure and some helper functions. C is not an object-oriented language, but it does have structured data types, which can be regarded as "classes without the methods". A structured data type to hold all the information we need for a vertex array is specified by triangleSoup in Listing 17.

/* A struct to hold geometry data and send it off for rendering */
typedef struct {
    GLuint vao;           // Vertex array object, the main handle for geometry
    GLuint vertexBuffer;  // Buffer ID to bind to GL_ARRAY_BUFFER
    GLuint indexBuffer;   // Buffer ID to bind to GL_ELEMENT_ARRAY_BUFFER
    GLfloat *vertexArray; // Vertex array on interleaved format: x y z nx ny nz s t
    GLuint *indexArray;   // Element index array
    int nverts;           // Number of vertices in the vertex array
    int ntris;            // Number of triangles in the index array (may be zero)
} triangleSoup;

Listing 17: A data structure to hold vertex array information

The file triangleSoup.c contains the declaration of the data type. To access the fields of a struct, you use "dot" notation, somestruct.fieldname, as presented in Listing 18. If your variable is a pointer to a struct, you use the "arrow" notation, somestructpointer->fieldname. A pointer to a struct is required if you want to pass it to a function, and if that function needs to change the data in the struct. Pointers in C and C++ can be a difficult subject to get your head around, but the main things to keep in mind are:

• A pointer is an address in memory, and it tells you where a value is stored.


• An asterisk (*) is used both when declaring a pointer and when accessing the data to which it points (dereferencing the pointer). If p is a pointer, *p gives you the value it points to.

• An ampersand (&) in front of a variable name creates a pointer with the address of that variable. If q is a variable, &q gives you a pointer to it.

• Any variable of any type can have a pointer to it.

• Array variables in C are actually pointers to a sequence of values of the same type, stored in adjacent memory locations.

/* Struct field access */

triangleSoup mySoup; // A struct

triangleSoup *mySoupPointer; // A pointer to a struct

...

mySoup.nverts = 100; // Set value by direct struct access

mySoupPointer->nverts = 100;   // Set value by pointer struct access

(*mySoupPointer).nverts = 100; // Alternate but less clear notation for "->"

(&mySoup)->nverts = 100;       // "&" creates a pointer to a variable

Listing 18: Access to fields of a struct

In the file triangleSoup.c, you will also find some helper functions to handle data of the type triangleSoup. The function soupCreateSphere() will generate an approximation of a sphere of a specified radius, using a specified number of polygon segments to approximate the smooth shape. Include the functions in triangleSoup.c in your program with the statement #include "triangleSoup.h" at the top of your main program, and add the file triangleSoup.c to your project if it isn't in there already.

For now, you don't need to bother with the details on how the arrays are created (the malloc() function), but note that what you put into the triangleSoup data structure are two variables of the "pointer" types, GLfloat* and GLuint*. Pointers in C and C++ are addresses in memory. Among other uses, pointers can be empty placeholders for arrays whose data is allocated and set when the program runs. This is of course more flexible than declaring an array with a fixed size and constant values, like you did before. Another difference between your previous example objects and the sphere generated by soupCreateSphere() is that the vertex array is now a so-called interleaved array, where all vertex attributes are lined up in the same array. For each vertex of the sphere, eight float values are generated: the vertex position (x, y, z), the vertex normal (Nx, Ny, Nz) and the texture coordinates (s, t). The layout of the array is specified by making appropriate calls to glVertexAttribPointer(), where we specify where in the array the different attributes are located. An interleaved array is the recommended way of specifying geometry in OpenGL. It removes the need for multiple vertex buffer objects, and during rendering it is generally more efficient to have all data for one vertex stored in nearby memory locations.

We will use normals later in this exercise, but for now, you can abuse the normals data to set the vertex color. OpenGL does not know what kind of data you send to it, it's just numbers. Normals may have negative components, and RGB values should be positive, but if you try to set a negative value for a color in the fragment shader, it is automatically set to 0 instead.

An example of how to use the triangleSoup data type is shown in Listing 19. As you can see, using a struct and some functions for its creation and rendering removes a lot of the clutter from the code in your main program, which is the very purpose of code abstraction. However, you should read the code in the functions soupCreateSphere() and soupRender() to see that they do the same thing as you did previously when you created your cube, only with a lot more triangles. If you want, you can regard the function calls soupCreateSphere(&mySphere, 1.0, 10) and soupRender(mySphere) as the C equivalent of an object with associated methods: mySphere.createSphere(1.0, 10) and mySphere.Render(). In C++, triangleSoup would have been an obvious candidate for a separate class. Some quirks of this code would also be solved by object-oriented programming: the need to initialize a struct of type triangleSoup to all zero by calling soupInit() would be a task for the constructor method.

// --- Put this into the declaration part of your program

triangleSoup mySphere;

// --- Put this before your rendering loop

// Generate a sphere approximation with radius 1.0 and 10 segments

soupInit(&mySphere);                  // Set all fields to zero - important!

soupCreateSphere(&mySphere, 1.0f, 10);

// --- Put this in the rendering loop

// Draw the sphere

soupRender(mySphere);

Listing 19: Example use of the soupCreateSphere() function.

Read the functions soupCreateSphere() and soupRender() and make sure you understand their structure.

You don't have to bother with figuring out every detail on the vertex coordinates and their indexing, but read the code at an overview level. There is nothing in there that you couldn't have written yourself with some guidance, but it would have taken quite a bit of time, and we think your time is better spent on other, more rewarding tasks.

Use the function soupCreateSphere() to create a sphere, and draw it. First draw the sphere as a "wireframe" (triangles rendered as outlines), then render it as a solid object (filled triangles). Draw it with an animated rotation around the y axis to make all sides visible over time.

Use the shaders you already have to render the sphere with a color that varies smoothly across the surface. Use the normals (vertex attribute 1, a vec3) or the texture coordinates (vertex attribute 2, a vec2) to set the RGB color for the rendering, just to see that you get the additional attributes through to your shader program. It also makes for a colorful display, and the colors will show you the values of the normals and the texture coordinates.

The sphere surface has a color which changes across the surface, but it looks very artificial, not at all like a simulation of a real-world 3D object. For more realism, we need to add some kind of simulation of how light interacts with a smooth, curved surface.

Change your shader to use the second vertex attribute of the sphere as intended, i.e. as a normal direction. Change the name of the attribute variable Color to Normal, and to compute the output variable interpolatedColor, set all color components to the z component of the normal direction. Run your program and watch what happens.

The sphere looks as if it was lit from one side, but the light source seems to rotate with the object. This is because we transform the vertex coordinates, but not the normals. To make a correct object transformation, we need to transform the normals as well. You can use the same matrix to transform normals and vertices, as long as you keep two things in mind: normals are directions, not positions, so they should never be translated, and they should always have unit length. To avoid translation, multiply the transformation matrix with the vector (Nx, Ny, Nz, 0), or pick the upper left 3×3 part of the 4×4 matrix T by writing mat3(T) in your vertex shader. To remove the effects of scaling, normalize the vector after transformation. This can be done by the GLSL function normalize().

Note that this transformation of normals works properly only if the scaling is uniform. To handle general, non-uniform scaling, you need to use a separate matrix for transformation of normals. More specifically, you should use the inverse transpose of the upper 3×3 part of the vertex transformation matrix to transform normals, and then normalize your result. The reason for this is a little beyond the scope of this text, but you can read about it in many computer graphics textbooks.

Transform your normals along with the object before you compute the color from the normal direction, and see that it makes a visual difference.

Illumination does not always come from positive z, so let's change the direction of the light. Create a light direction vector as a local vec3 variable in the shader code. (In a more general shader, the direction and color of the light would typically be a uniform variable.) The diffuse illumination can then be computed by a scalar product between the light direction and the normal. Scalar products are computed by the function dot() in GLSL.

Simulate lighting from a direction different from positive z. Try a few different directions to make sure your code works the way it should. You may want to disable the rotation of the object to see more clearly what you are doing.


Disable the animated rotation of the object, and instead use the transformation matrix to rotate only the light direction.

The sphere is now lit, but it still looks dull and matte. A simple illumination model to create an appearance of a shiny object is the Phong reflection model. It's old and not really based on physics and optics, but it's simple and still good enough for many purposes. You have everything you need to perform the necessary computations in the vertex shader: the surface normal, the light direction and the view direction. The direction of the viewer is always (0, 0, 1) in screen coordinates, which is the coordinate system at the output from the vertex shader. Note also that there is a convenient function reflect() in GLSL.

Add specular reflection to your computations. Keep the object still and rotate only the light source. Use white for the specular color, and a somewhat darker color for the diffuse color to make the specular highlight clearly visible.

When simulating a specular material, the lighting does not look as good if the object contains large triangles. This is because lighting is computed only at each vertex. A small specular highlight might fall between vertices and become dull and blurry. This makes the triangle nature of the "sphere" show more clearly than before.

Increase the number of triangles to make the sphere look better with a specular reflection. You may have to increase it rather a lot.

Specular reflection for shiny objects, where the highlight is small and crisp, is not a good match for per-vertex computation of lighting, which is what we have been doing this far. A better way to go about it is to send the normals to the fragment shader for interpolation, normalize the interpolated normals in the fragment shader and perform all computations of lighting there instead. Interpolating normals to the fragment level can make the surface of even fairly coarse polygonal objects look smooth.

To change to per-fragment lighting, instead of sending interpolatedColor from the vertex shader to the fragment shader, you can send a variable interpolatedNormal = Normal, normalize it in the fragment shader (why is this needed?) and perform all lighting computations in the fragment shader. The transformed light vector needs to be sent from the vertex shader to the fragment shader by using an appropriate pair of out and in variables, but otherwise the computations and the GLSL code are the same whether you perform them per vertex or per fragment.

Change to use interpolated normals and compute your lighting for each fragment instead of for each vertex. Decrease the number of triangles in the sphere again and compare the visual result to what you had before.


Computing lighting once for each fragment instead of once for each vertex means more computations and a slower rendering, but it does give a better looking result, particularly for specular reflections, and it enables more advanced illumination models for more interesting surface appearances. However, at least some computations can often be performed per vertex instead of per fragment, so don't blindly push everything to the fragment shader. There is often a balance to be struck in terms of what to perform in the vertex and the fragment shader stages, respectively. (This is similar to the decision regarding what to do in the CPU program and what to do in the GPU shaders.)


2.4 Camera and perspective

Until now, we have been using a parallel projection to draw our 3D objects. While this can be useful for some applications, the customary way of presenting 3D graphics is by a perspective projection. In classic OpenGL, there was a specific camera model that you had to use, and it involved two matrices. One was the modelview matrix that combined two transformations: the transformation from object coordinates to world coordinates (the model matrix) and the transformation from world coordinates to camera coordinates (the view matrix). The other matrix was the projection matrix, which took care of the final, non-affine part of the transformation but generally did not move or rotate the camera. This is still the recommended way to do transformation in OpenGL. Because everything is programmable, you can do it any way you like, but the traditional way was based on long experience, and it is still a good idea to split your transformations into exactly two matrices, one modelview matrix and one projection matrix.

Strictly speaking, you should also use a third matrix, the normal matrix, to transform normals with a different matrix than the vertex coordinates. Details on why and how can be found in the literature. For these lab exercises, we will use the modelview matrix to transform normals, which is OK if we take care not to introduce any non-uniform scaling that changes the orientation of surfaces in our objects. However, in general you should create a normal matrix, set it to the inverse transpose of the modelview matrix without the translations, upload it as a separate uniform matrix to the vertex shader and use it for transforming normals.

Create a vertex shader that accepts two matrices as uniform variables, and make the necessary changes in your main program to create and upload both matrices to your shader program.

The modelview matrix should typically be built from many different transformation matrices that are multiplied together. It is typically different for each object, and it changes over time whenever objects or the camera moves. The projection matrix, on the other hand, is mostly created once and then left alone, often for the entire duration of the program execution.

To demonstrate perspective projection, we need a suitable object to draw. A sphere is not very good in that respect, because it looks more or less the same in parallel and perspective projections. We want an object that is oblong and has long, straight parallel lines. A cube that is stretched in one direction by a non-uniform scaling is a good test case.

Revisit the cube you created in exercise 2. Implement a function to create a triangleSoup data structure that encapsulates your cube into an object that can be created and rendered more conveniently. Test your object by rendering it with an animated rotation around the y axis.

A good name for your new function is soupCreateCube(), and a good place to put it is in triangleSoup.c. You need to supply vertex coordinates, normals and texture coordinates in your vertex array, but you can leave the texture coordinates all zero for now. They will not be required until the next exercise.

Use a non-uniform scaling matrix to stretch the cube to three or four times its previous size in x. Render it with an animated rotation around the y axis. You need to scale it down with a uniform scaling to make it fit better on the screen, or use a non-uniform scaling with a scale of 1 in x but less than 1 in y and z.

Note that when you scale your box non-uniformly in the directions of any of the principal axes x, y or z, no normals that are already in the principal directions change direction, so in this particular case you do not have to create a separate normal matrix even though the scaling is non-uniform. The modelview matrix without the translation just happens to work fine for transforming the normals, as long as you remember to renormalize them to unit length after the transformation.

Look at your animated display of the elongated box. The result looks flat, somehow, even though the object is clearly wider in one direction and changes its on-screen appearance from a square to an oblong rectangle during the animated rotation. What is clearly missing is perspective, so let's add that.

Create a projection matrix that performs a suitable perspective projection for this object.

To create a suitable projection, keep in mind how large the object is and where it is located. Details on how to define a projection matrix are in the literature. There are a few different ways to do this, and you can use any method you like.

Upload your projection matrix to the shader program, and use a combination of an animated rotation around y and your perspective projection to render the elongated object from before. Add a small fixed rotation around x to your modelview matrix to view the object from a different and more interesting angle. Also try translating the object in the y direction and watch what happens.

It looks a lot better now, right? Your perspective may be too subtle, and you might not see a big difference from before, or it may be too strong and look weird. Speaking in terms of field of view (FOV) for a small window on an ordinary monitor, it is mostly wise not to use a field of view larger than about 60 degrees, but don't make the field of view too small either, as that will give you a result very similar to the parallel projection you had before.

Experiment with different fields of view for your projection.

Try simulating the field of view for a real camera, like the camera that may be built into your phone or your computer. Don't just guess or Google the field of view – experiment! Hold up a finger in front of a real camera while looking at the viewfinder, and make a rough measurement of the FOV angle. You have been staring at a computer screen for much too long by now anyway.


2.5 Textures, models from file, interaction

2.5.1 Textures

Until now, we have only used constant colors or smoothly varying per-vertex colors for objects. In most ordinary uses of 3D graphics, surfaces are drawn with a texture to add visual detail to otherwise plain surfaces. Textures are an integrated part of OpenGL. Textures are associated with a texture object, which is created in much the same manner as vertex array objects and shader program objects:

- Create a texture object

- Bind the texture object (activate it)

- Load the texture data (the pixels of the image) into CPU memory

- Configure the parameters for the texture

- Transfer the texture data to the GPU memory

- Unbind the texture object

When you later want to render an object with the texture, you need to do this:

- Bind the texture object

- Activate a shader program that uses a texture

- Tell the shader program which texture you want to use

- Draw your geometry

- Unbind the texture object

A texture object is just an integer in the C code, just like vertex array objects and shader program objects. The integer is an ID number that is associated with a data structure that resides on the GPU.

In shader code, textures are used as arguments to functions that look up the texture color based on texture coordinates. The typical use of a texture in a fragment shader is presented in Listing 20.

uniform sampler2D tex; // A uniform variable to identify the texture

in vec2 st; // Interpolated texture coords, sent from vertex shader

color = texture(tex, st); // Use the texture to set the surface color

Listing 20: Texture lookup in the fragment shader

Example C code that performs the necessary steps to create and use a texture is presented in Listing 21. The shader uniform variable tex of type sampler2D deserves special mention. It is set to 0, which seems strange at first, but what the shader wants to know is not the numerical value of the texture object. This is important, and it is a common cause of errors among beginner OpenGL programmers. What you should tell the shader is the number of the texture unit that has the texture we want to use. In our case, we only use one texture, and by default it gets loaded to texture unit 0. However, a shader can use multiple textures, and then they need to be loaded onto different texture units by use of the OpenGL function glActiveTexture() before you bind a texture object. If you want to, you can add a line with glActiveTexture(GL_TEXTURE0) immediately before the lines where you have glBindTexture(), just to make it clear that we are using the default texture unit 0. Note that it is an error to write glActiveTexture(0). That won't work. You need to use the symbolic constant GL_TEXTURE0. The shader uniform variable, however, should be set to the value 0. This is another common and frustrating source of error among beginners with OpenGL.

// --- Put this into the declaration part of your program

GLuint textureID;

int location_tex;

// --- Put this before the rendering loop

// Locate the sampler2D uniform in the shader program

location_tex = glGetUniformLocation(programObject, "tex");

// Generate one texture object (texture ID)

glGenTextures(1, &textureID);

// Activate the texture object

glBindTexture(GL_TEXTURE_2D, textureID);

// Set parameters to determine how the texture is resized

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Set parameters to determine how the texture wraps at edges

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

// Read the texture data from file and upload it to the GPU

glfwLoadTexture2D("sample.tga", GLFW_BUILD_MIPMAPS_BIT);

// --- Put this in the rendering loop

// Draw the sphere with a shader program that uses a texture

glUseProgram(programObject);

glBindTexture(GL_TEXTURE_2D, textureID);

glUniform1i(location_tex, 0);

soupRender(mySphere);

glBindTexture(GL_TEXTURE_2D, 0);

glUseProgram(0);

Listing 21: Example use of a texture in OpenGL

The GLFW function glfwLoadTexture2D() hides some details around texture creation. Most importantly, reading a file and getting the pixel data into memory can be a very complex task. Modern compressed file formats like PNG and JPEG are a lot more complicated than the old and uncompressed TGA format we use here for demonstration. Not even TGA files are trivial to read, and for image files in any format other than the most simple ones like TGA, you would be well advised to always use other people's code. There are good and free external libraries, written and maintained by experts, for reading and writing of all common image file formats. However, once the pixels are in memory, it is a straightforward process to transfer the data to the GPU and set it up. It involves one call to glTexImage2D() and mostly also one call to glGenerateMipmap(). Details on this can be found in the OpenGL documentation and in most OpenGL tutorials.

The sphere object you rendered in exercise 3 has 2D texture coordinates (s, t) defined in the third vertex attribute. Include all three vertex attributes (0, 1 and 2) in the layout for in variables in your vertex shader, give them appropriate names, and render a sphere with a surface color that is determined by a texture lookup.


Keep in mind that in the triangleSoup structure, vertex coordinates and normals are vec3 attributes, but texture coordinates are vec2.

Texture coordinates need to be interpolated across each triangle, which means they should be sent as an out variable from the vertex shader to an in variable in the fragment shader.

Some example TGA files are in the lab material, but you can create your own in any image editing program that can save files in the TGA format.

If you haven't already done it, modify your cube object from the previous exercise to include meaningful texture coordinates, and render it with a texture.

Think about how you want the texture to map onto the cube. Do you want different regions of the image to appear on different sides, or do you want the image to be repeated on all sides? Decide for yourself which mapping you want, and implement it.

Go back to rendering a sphere. Write a fragment shader that simulates diffuse lighting like you did before, but where the diffuse color of the surface is now taken from the texture.

2.5.2 Geometry from files

Previous exercises have used either hand-written coordinate arrays for the geometry, like the cube, or geometry generated from simple rules, like the sphere. While these are both useful methods for getting geometry into OpenGL, the most common method is to load models that have been generated in other software packages and saved to files in some appropriate exchange format. The file triangleSoup.c contains a function soupReadOBJ() to load a triangleSoup structure from a file in the OBJ format. The OBJ file format is old and lacks many useful features, but it is simple, and it is still in common use in the field of 3D graphics, for platform-independent exchange of polygon models. The code in soupReadOBJ() is not particularly complicated, and it was written in a few hours from scratch using only the file format specification. Please have at least a brief look at the code before you use it. Additionally, take a look at the contents of an OBJ file. It is a text-based format, so OBJ files can be inspected using a text editor.
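For reference, a minimal OBJ file describing one textured triangle could look like this (v = vertex position, vt = texture coordinate, vn = normal, f = face given as v/vt/vn index triplets, where all indices are 1-based):

```
# A single triangle with positions, texture coordinates and a normal
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
vt 0.0 0.0
vt 1.0 0.0
vt 0.0 1.0
vn 0.0 0.0 1.0
f 1/1/1 2/2/1 3/3/1
```

Compare this with the beginning of one of the example OBJ files in the lab material; real models just have many more of each kind of line.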

Load a more complicated object from an OBJ file using the function soupReadOBJ(). Display it first as a wireframe to show its polygon structure, then display it with surface color taken from a texture, and finally display it with simulated diffuse lighting and a texture.

Example models in OBJ files, with matching texture images in TGA format, can be found in the lab material. You can also create OBJ files yourself in any of a large number of 3D modeling packages.

Now that the models are no longer simple, convex shapes, you need to enable depth buffering to resolve self-occlusion issues. Once you do that, you can render multiple objects in a scene with complex occlusion just as easily.


Load an object from a file, and render the object with a sphere circling in an orbit around it.

You need to create two separate triangleSoup structures for this. Make sure that the occlusion looks right, both within an object and between objects. Try rendering a sphere that is large enough and close enough to intersect the object it orbits. The depth buffer should take care of both intersections and occlusions.

Render the object as if it was lit from a point light source emanating from the sphere that orbits it.

For this to look good, you need to use a shader that renders the surface in a constant white color for the sphere that represents the light source, and use the shader with the illumination model only for the lit object. To create two shader programs, just create two shader program objects, write two matching sets of one vertex and one fragment shader in GLSL, and make two calls to createShader().

2.5.3 Interaction

We have rendered animated objects in the exercises above, to show the 3D structure of the scene more clearly, but the thing that has been missing is one of the key features of real-time 3D graphics: interaction. A game or a simulation of driving or flying goes far beyond what we have room for in this short lab course, but we can at least add some user interaction to rotate our view of an object.

User interaction through GLFW can be done using either the keyboard, the mouse, or a game controller (joystick), but we have to write the code for it ourselves. GLFW handles the low-level input, such that for each frame, we can get information on the up/down state of each keyboard key, the position of the mouse along with up/down information for its buttons, and the current position of all sticks and buttons on a game controller. In order to set a rotation of an object or of the camera, we first need to remember its current rotation and update it. If we are using the keyboard or a joystick, we want to keep the rotation speed independent of the frame rate, and therefore we also need to remember the time of the most recent update of the rotation. If we are using the mouse, the most natural interaction is to make the rotation depend on how far we move the mouse and in which direction. To be able to determine how much the mouse has moved, we need to keep track of where the mouse was in the last frame, and to know whether we just moved the mouse or clicked and dragged with a mouse button down, we need to also remember the up/down state of the mouse buttons from the last frame.

One easy way to keep track of state between frames and update it in functions outside of main() is to use global variables, and that is often done in simple tutorials. However, global variables are a source of confusion and error, and they are almost never needed, so you should try to manage without them as much as possible. A clean and well-behaved design of our interaction is to keep track of all the state we need in a struct (call it an "object" if you like), declare the struct in the main() function instead of as a global variable, and pass it as a parameter to all the functions that need to read or modify it. This is the approach taken in pollRotator.c. The functions pollRotatorKey() and pollRotatorMouse() use one special struct each, rotatorKey and rotatorMouse, to keep track of what happened last time they were called. Study the functions. Make sure you understand what they do, and how they do it.

Edit the code you just wrote so you can both view the object from different angles and position the point light at different angles next to it.

You can use the keyboard for one rotation, and the mouse for the other. Decide if you want to change the world space rotations for each independently, or if you want to position the light source relative to the object, and write your code accordingly.
