
Page 1: Transforming Sketches into Vectorized Images (pages.cs.wisc.edu › ~dyer › cs534-spring11 › hw › hw5 › projects › farrell.pdf)

Samuel Farrell
Kenny Preston
CS534 Final Report

Transforming Sketches into Vectorized Images

Introduction and Abstract

Our final project is an attempt to automate the process of digitizing analog forms of art. Unless a piece of work is initially created in a digital medium, transforming it into a form that can be easily displayed and shared is difficult. Most artists import their sketches into professional image editing software such as Adobe Photoshop and manually edit and filter these images, sometimes even redrawing the whole piece using the original only as a guide. This work is painstaking and slow, and an automated program could truly speed the process up. In addition to the skill that creating sketches takes, transforming them into a presentable digital format requires expertise with image editing software. The learning curve of these programs is often quite steep, and the last thing an artist should be worrying about is tinkering around in software.

We set out to recreate the steps often taken to digitize analog work in a simple, easy-to-use program that is completely automated. Originally, we wanted the program to be completely hands-off, but realized that with the variety of input a user may provide, we would need additional input from the user to further tailor the program’s output to the image the user wishes to digitize (these arguments do have default values, however, and are explained in more detail later in this report). Our program reads an image and dynamically constructs a palette of basic image building blocks with a color-range specified by the user. The image is normalized and matched, and then the output is further refined and sent back to the user. The application runs very quickly with a small palette, but execution time scales exponentially as the palette size increases.

Application

The resultant application has many uses and makes transforming physical images into digital copies a much easier process. Our intent was to make a tool for artists, allowing them to digitize their sketches quickly and without loss of quality. We have also found that the program does a good job of interpolating artist intent: an artist can submit a very rough sketch, and the program refines the original into a more polished, presentable form. This is especially useful in all areas of graphic design. We are used to seeing sharp, highly stylized vector art that, though it probably originated as a sketch, has been through a high level of post-processing and sometimes even a complete re-rendering or re-drawing.

Not limited to art, we believe that our application will have numerous uses in scientific fields as well. Researchers often compile notebooks full of sketches describing experiments and gathered data, which prove very difficult to save digitally. Scanning images results in quality loss, but is essentially still the only way to digitize physical work. Using our application


after scanning the data, or perhaps a future build of our application, we can not only drastically reduce noise and data loss, but also interpolate data and improve upon the original. Additionally, our design plays very well with text, doing a great job of smoothing edges and intelligently filling in gaps. There is little doubt that there is a need for a simple post-processing application when dealing with huge amounts of scanned data.

Implementation

Our application was written and run in Matlab, though it is also available as a Unix binary executable. We utilized a technique similar to a previous Computer Science 534 homework assignment, including calculating the sum of squared differences between an image and a set of candidate images and choosing the closest match, but all of the code we used is our own. We believe the highlights of our code are the dynamic palette-building function, which at runtime creates a palette of candidate images spanning a range of grayscale indicated by the user, and our line-darkening function, which searches the image for bodies of color and creates a propagation of ever-darkening pixels moving inward. This darkening propagation technique is what really captures the feel of vector art, producing thick, heavy lines that look like they were created using an Adobe Photoshop brushstroke or some other image editing tool.

The application is written as a Matlab function named render. An image location is given as an argument; we read that image in and convert it to grayscale if necessary. Three other arguments are required: shades, white balance, and black balance. The application does some boundary condition checking and crops the source image to a size that is divisible by three to eliminate bounding errors. The program then normalizes the source by running through the entire image, scaling the lightest colors (as determined by the white balance) up to white and the darkest colors (as determined by the black balance) down to black. It also does automated black scaling, which finds the darkest pixel in the image and uses that as a reference point, lowering every value below the white balance threshold by the value of that minimum pixel. For example, if the minimum is valued at 43 (out of 255), any pixel that does not meet the white threshold requirement will be reduced by 43. The normalizing step also reduces the variance of the darkest colors as determined by the black balance value, which is useful for controlling images with a very wide range of values. The image is then passed through a Gaussian filter, which reduces noise and makes the source image easier to match. We then convert to grayscale values that Matlab recognizes by dividing element-wise by 255 after casting our image matrix to type double. This creates an image made entirely of values from zero to one, which Matlab can then match against our palette of candidate images created in the next step.
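The normalization step described above can be sketched in a few lines. This is a hypothetical pure-Python rendering of the same logic (the actual implementation is the Matlab render function in the appendix); the function name and the list-of-lists image representation are illustrative only.

```python
def normalize(pixels, white_balance=200, black_balance=100):
    """Push light pixels to white, dark pixels to black, shift the rest
    down by the image minimum, then rescale to the range [0, 1]."""
    minimum = min(min(row) for row in pixels)
    out = []
    for row in pixels:
        scaled = []
        for v in row:
            if v > white_balance:
                v = 255                  # treated as pure white
            elif v < minimum + black_balance:
                v = 0                    # treated as pure black
            else:
                v = v - minimum          # automated black scaling
            scaled.append(v / 255)
        out.append(scaled)
    return out

# With a minimum pixel of 43, mid-range values shift down by 43,
# as in the report's example.
result = normalize([[43, 150, 250], [210, 43, 120]])
```

Pixels above the white balance become 1.0, pixels within black_balance of the minimum become 0.0, and everything else shifts down by the minimum before rescaling.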

During our first implementation of the program, we skipped this value-normalizing step and kept our image matrix as values from zero to 255, which caused a variety of problems in our output image. The program would still match and paste candidate images to an output image, but Matlab would not show the result. Matlab seems to recognize grayscale images read into the program with pixel values up to 255, but it does not seem to recognize images that were first constructed inside the program as a matrix with values up to 255. This seems a relatively arbitrary distinction, and is something Matlab should work on improving. Another guess as to the cause of the display problem is that when Matlab reads an image, it keeps track that it is indeed an image and, therefore, before display must cast to a


double and divide by 255. If this process occurs, it must happen under the hood, because the values look like an integer matrix. When displaying an image created as a matrix inside Matlab, it would not realize that it is displaying an image, and when dividing by 255 it would leave the original matrix as integers, thus rounding all pixels to either hard white or hard black. To deal with this shortcoming, we have made a point to cast and normalize all images ourselves, which slightly increases execution time.

After the source image is normalized, we are ready to build the output image. First, we must construct a palette of images that will serve as building blocks for the output. Each image in the palette is a square three pixels wide and three pixels tall. At first, we hardcoded a set of 50 palette images, but found that, although this worked well with our initial test images, the solution wasn't scaling well for every source image. We changed our approach to dynamic palette production, in which the user specifies the number of shades they would like the program to use, and we build a new palette on each run. For each shade, we generate a set of 30 palette images. Deciding how many images to create for each shade is a delicate balancing act: too few images and the result will be blocky and not very faithful to the original; too many and the difference between the source image and output image will be negligible. We decided on a set of images that includes a single line in every direction, a single gradient in every direction, and finally a solid color of the shade. We later added some collision handling, which handles intersecting lines. While not comprehensive, this palette does a good job interpolating user intent and removing noise and unnecessary gaps. We choose the levels of gray in each shade by evenly distributing the number of shades over the entire range of black to white, and so we match as wide a dynamic range of color as we can using the minimum number of palette images given the user's desired level of shading. The restriction on palette images is what gives the output image the clean look of vector art, and the restriction also works to clean up noise and irregularities captured in the scanning process.
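The palette bookkeeping works out as simple arithmetic. This small Python sketch (a hypothetical helper, not the shipping Matlab code) shows the tile count and the evenly spaced gray levels implied by the increment = 100/(shades+2) spacing in the appendix:

```python
def palette_plan(shades):
    """Return the number of 3x3 tiles and the darkest gray level of each
    shade for a dynamically built palette (30 tiles per shade, plus one
    solid-white tile)."""
    tile_count = 1 + 30 * shades
    increment = 1.0 / (shades + 2)          # even spacing from black to white
    darkest_levels = [i * increment for i in range(1, shades + 1)]
    return tile_count, darkest_levels

count, levels = palette_plan(5)             # a five-shade palette
```

For five shades this yields 151 tiles with darkest levels 1/7 through 5/7, leaving room at both ends for pure black and pure white.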

(For an example of a few candidate images included in a standard palette, see Figure One.)

Once we have our palette generated, we run through the image in raster order, three pixels at a time. At each point, we calculate the sum of squared differences between that particular block of the image and every one of our palette images. Unlike the texture synthesis problem that we based our approach on, we know each palette image is unique, so there is no need to randomize selection - we simply choose the best match and paste it to the output. We do not want to randomize the candidate selection because we want to match the input image as closely as we can. We get the desired output not through randomization of image completion, but through the purposeful restriction on candidate images. We choose a palette of images that would have been created had the image originally been created digitally, and so we can restrict the output image to match this style. The larger the palette the user chooses, the more realistic the output image will be. We see good results when using more than 5 shades. Using just one shade results in a unique look, but not the smooth vector art we are looking to achieve. The result looks good, but there is still some optimizing yet to do.
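The matching loop reduces to a sum-of-squared-differences comparison per tile. A minimal sketch of the idea (pure Python with hypothetical names; images are lists of rows of values in [0, 1]):

```python
def ssd(block_a, block_b):
    """Sum of squared differences between two equal-sized blocks."""
    return sum((a - b) ** 2
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def best_match(block, palette):
    # Every palette tile is unique, so selection is deterministic:
    # just take the minimum-SSD tile, no randomization needed.
    return min(palette, key=lambda tile: ssd(block, tile))

solid_dark  = [[0.0] * 3 for _ in range(3)]
solid_light = [[1.0] * 3 for _ in range(3)]
block = [[0.1, 0.0, 0.2],
         [0.0, 0.1, 0.0],
         [0.2, 0.0, 0.1]]
chosen = best_match(block, [solid_dark, solid_light])  # nearly-black block
```

A nearly-black block matches the solid dark tile, since its SSD against dark is far smaller than against light.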

We then run our output image through a Gaussian filter again, this time not so much to remove noise as to reduce any blockiness caused by using too few shades. Our goal is to get nice, thick, heavy black lines across the image wherever appropriate, and sketches (especially those that are scanned) have a tendency to contain holes. The interpolation done in the previous step does a good job of filling these holes in, but we can do more to


darken all the thick lines to the desired heaviness. We now run through the image four times, pixel by pixel. For each value, we check its neighbors, and if all of them are darker than a given value, we darken the center pixel. We do this at 0.9, 0.8, 0.6, and 0.4, subtracting 0.1 from the central pixel each time we find a pixel that meets this case. The result is a darkening that propagates inward to the center of lines and areas of solid dark color. We run through the image once more, this time lightening pixels that are surrounded by very light pixels, just to reduce the noise that scanning often produces. The image processing is now done and the result is sent out to the user.
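One pass of the darkening propagation can be sketched as follows. This is an illustrative pure-Python version; note that the Matlab code in the appendix updates the image in place (and inspects the four diagonal neighbors), so this copy-based sketch is only an approximation of one pass.

```python
def darken_pass(img, threshold):
    """Darken any interior pixel whose four diagonal neighbors all fall
    below the threshold; repeated at .9, .8, .6, and .4 this pushes
    darkness toward the centers of lines and solid regions."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            diagonals = (img[i-1][j-1], img[i-1][j+1],
                         img[i+1][j-1], img[i+1][j+1])
            if all(v < threshold for v in diagonals):
                out[i][j] -= 0.1
    return out

# A mid-gray patch: the interior pixel darkens, the border is untouched.
patch = [[0.5] * 3 for _ in range(3)]
darker = darken_pass(patch, 0.9)
```

Running all four thresholds in sequence darkens the interiors of wide dark regions the most, which is what produces the heavy, brush-like lines described above.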

FIGURE ONE (example candidate images included in a palette)


Usage Walkthrough and Explanation

To use this function in Matlab, all the user needs is the render.m script on their computer. Because we opted for dynamic palette creation, there is no need to install a custom palette on the user's machine. This is another benefit of creating the palette dynamically - it makes the program very easy to transport and use, and does not take up unnecessary space with a huge array of candidate images that must be carted to the user's computer and stored in a specific spot that Matlab can locate and read. To begin, open Matlab and navigate to the folder that contains your render.m file. In the Matlab terminal, call the function by typing [a,b,c] = render(inputImage, shades, whiteBalance, blackBalance). The program will generate the following output:

• a, the source image after normalization
• b, the palette used to color the output image
• c, the output image as a matrix

For example, calling the function with standard balance, an image stored in 'img', and 5 shades would be done with the command: [source,palette,output] = render(img, 5, 200, 100). To learn more about setting up the input arguments, continue reading.

Matlab will automatically read your file from a specified location on your hard drive. For example, if you specify the location "u/e/x/example/Desktop/sample.jpg" as a variable, Matlab will use the file stored at that location as the input image. Make sure not to read the actual image into Matlab - the program does this for you. The program also includes code to transform an RGB image to grayscale. If your image is already grayscale, this step will be skipped. All output images will be grayscale regardless of input.

The render program is called using four arguments: the image location, the number of shades to use, the white balance of the image, and the black balance of the image. The number of shades can be any positive integer and represents the number of gradient levels to use in the output image, with the lowest setting including three levels of gray. Using a shade level of 1 will result in an image with 3 levels of gray; 5 will result in 7. Be careful: the execution time scales exponentially as you add more shading. We find very little difference in output beyond a shade level of 10.
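The relationship between the shades argument and the output gray levels is simple arithmetic; a tiny sketch (hypothetical helper name), consistent with the increment = 100/(shades+2) spacing in the appendix code:

```python
def gray_levels(shades):
    """Number of distinct gray levels in the output: black and white
    plus one level per requested shade."""
    return shades + 2
```

So render(img, 1, ...) uses 3 levels and render(img, 5, ...) uses 7, matching the description above.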

The white balance argument indicates which pixel values the program will consider 'white'. This is especially useful with a low-quality scanned input sketch, where there is often a very gray background. It can also be used to eliminate background noise such as the lines on lined notebook paper. Pixels are measured from 0-255, with 0 as black and 255 as white. The standard white level the program uses is 200 - everything over 200 will be considered white. For images with much higher levels of noise, try lowering this setting.

The black balance argument indicates the range of dark values above the darkest value in the input image that the program will consider black. The program scans the image for the darkest color and treats it as black, but this option extends that functionality so that the user can specify how large a range of color the program should treat as black. This is especially useful when the input image was poorly scanned and the lines aren't showing up well, or when it was drawn using a blue or otherwise colored pen. Pixels are measured from 0-255, with 0 as black and 255 as white. The standard black balance level used by the program is 100 - raise this value to widen the range of pixels treated as black (which will make the output more uniform and suppress gradients), or lower it to retain gradient information (though you may lose some of the optimization the program provides).


Example Output with Discussion

We used a training image of a monster sketch to tune the variables in the render script. We already had a vectorized sample of the input image done in Photoshop, and so knew what we wanted to recreate. The input image is fairly small, at 241x400 pixels.

Pictured above is what an artist has done to her original sketch in image editing software such as Adobe Photoshop. This is the sort of result we were looking to achieve - transforming an original pencil sketch into an image in the style of smooth vector art.

The image to the far left is the original input image, the next image was produced using ten shades, and the image to the right was produced using twenty shades. The image definitely looks better using twenty shades; the lines are much crisper and a lot thicker. Using fewer shades tends to lose some of the finer detail in the image. If you examine the lines that outline the monster, it's obvious that the program is doing a lot of work interpolating and filling in. All of the holes left in the original sketch are filled in and presented as clean, black outlines. It does less well with the teeth, though: because the input image is so small, our program has a hard time interpreting such fine lines.


As we increase the size of the input image, the program takes much longer to run, but the results look much better. Our palette does not scale with the input image, and so it works best with a large input. The building blocks become more and more accurate as the size of the original grows, simply because they have a much better chance of fitting the input.

At 1293x1599 pixels, the above image is much larger than the training image, and the improvement is apparent. Though the Gaussian filter we use on our output blurs a small input image a little too much, the blur does a much better job with a larger input and works in the output image's favor, adding smoothness and filling in any irregularities. As evidenced above, the white balance argument works to eliminate noise: you can see how it almost completely eradicates the paper background, including all the lines except the thicker red line. The white balance is capable of removing even that line, but the risk of losing data we are interested in grows as the white balance is decreased (lowering the threshold of color that is considered white). The algorithm also handles text very well, cleaning up the letters and making them much more readable. We believe this will be especially useful for researchers looking to digitize diagrams with any sort of text on them, including handwritten notes. The above image was done using only five shades. The size of the input image seems to make up for the restricted set of candidate images produced by a lower number of shades.

As we increase the input image size even more, the execution time of the program again increases exponentially. Our next example has an input size of 3800x4000 pixels and is by far the largest image we've tested. Because it takes so long to process such a large image, we only ran it with three shades.


We saved the image at very low resolution, but you can still make out the work the algorithm did to enhance important lines and normalize the image. Using such a small range of candidate images, the algorithm did surprisingly well, probably because of the sheer size of the input and the lack of dynamic range in the original. There was very little noise to eliminate, so all the algorithm had to worry about was enhancing.

We also decided to test our algorithm with input featuring a very small dynamic range. Because our normalization depends on the color range of the input, this image took a bit of trial and error to produce acceptable output. The original features very little distinction between what should be considered white and what should be considered black, due to very light pencil lines in the original and a very poorly done scan that left the background a dark gray color.

The algorithm recognizes what is important and should be preserved in the output image, but loses some of the detail and shading beyond the outline. This could be fixed by using dynamic normalization with object detection abilities, or by asking the user to specify which portion of the input image is the object. Because we want the algorithm to be completely automated, we believe the first option would be the better choice for future implementations.

There is more input featured below, as well as the number of shades used to produce that


image.

Only three shades were used to produce this image, with an adjusted white balance to eliminate background noise.

Five shades were used to create the first output image, and twenty were used to create the second. This is a good example demonstrating economy of palette size. The input image has a very small range of color, and so raising the black balance will do most of the work that increasing the palette size would, and will do it much more quickly.

Again, such a small dynamic range shows very little improvement when moving from five shades to twenty. This was a small input image, which shows how the Gaussian blur at


the end of the process is a little too strong for such small images.

Small image size, shading value of ten.

Very small, dirty image with lots of quality loss. Interpolated with a shading value of four. Demonstrates restoration of blacks and removal of noise.

Output at a shading value of ten. Fills in the gaps left by crosshatching quite well and yet retains most of the minutiae.


Limitations and Future Work

Though our algorithm works well under most circumstances, there are special cases where the user must adjust the default values in order to obtain optimal results. In the future, we would like our algorithm to do this for the user by going through the image and detecting appropriate white balance and black balance levels. The program could also choose the optimal level of shading to use while minimizing execution time.

As mentioned above, we would like to implement some object detection and line detection to minimize the data loss induced by our normalization process. Currently, our program does not distinguish between foreground and background, so any elimination of color variation done in the normalization process is applied to the whole image. We think the output of the algorithm would be greatly improved if we did this normalization selectively. When we apply our final Gaussian blur to the image, we could use this object data to improve the output, perhaps using the 2-band blending discussed in class to keep high-frequency content intact. If we want to continue using a Gaussian filter, we can detect the image size and use that information to decide the size of our Gaussian filter.

The major limitation of our implementation is its dependence on input image size. Our candidate palette templates are fixed at 3x3 pixels and do not scale up or down with the input image. Using a sliding size value based on a logarithmic function of the total input image size would provide better, scaling results, and would also greatly reduce the execution time of our algorithm (much less matching would have to be done for bigger input images, because the template images would be larger as well). We believe this would also alleviate our performance issues.
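As a sketch of the proposed improvement, the tile size could be derived from a logarithm of the input area rather than fixed at 3x3. Everything here is hypothetical: the function name, the base-10 choice, and the floor at 3 are guesses at a future design, not part of the current program.

```python
import math

def tile_size(height, width):
    """Pick a palette tile edge length that grows slowly with input
    area, never dropping below the current 3x3."""
    return max(3, int(math.log10(height * width)))

# The 241x400 training image keeps a small tile, while the 3800x4000
# input gets a larger one and therefore far fewer SSD comparisons.
small = tile_size(241, 400)
large = tile_size(3800, 4000)
```

Because the number of blocks to match falls with the square of the tile edge, a modestly larger tile on a huge input cuts the matching work substantially.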


function [ sourceImage, palette, outputImage ] = render(source, shades, whiteBalance, blackBalance)
% standard white balance = 200
% standard black balance = 100
if (whiteBalance > 255)
    whiteBalance = 255;
end
if (blackBalance > 255)
    blackBalance = 255;
end

%--get the source image, resize---------------
source = imread(source);
source = rgb2gray(source);
[height, width] = size(source);
remainderWidth = rem(width, 3);
remainderHeight = rem(height, 3);
height = height - remainderHeight;
width = width - remainderWidth;
sourceImage = source(1:height, 1:width);
sourceImage = double(sourceImage);
outputImage = zeros(height, width);
outputImage = double(outputImage);
%---------------------------------------------

%--normalize----------------------------------
minimumPixelValue = min(min(sourceImage));

for i = 1:height
    for j = 1:width
        if (sourceImage(i,j) > (whiteBalance))
            sourceImage(i,j) = 255;
        elseif (sourceImage(i,j) < (minimumPixelValue + blackBalance))
            sourceImage(i,j) = 0;
        else
            sourceImage(i,j) = sourceImage(i,j) - minimumPixelValue;
        end
    end
end
sourceImage = sourceImage./255;
%---------------------------------------------

%--blur---------------------------------------
gaussianFilter = fspecial('gaussian', [5, 5], 3);
sourceImage = imfilter(sourceImage, gaussianFilter, 'symmetric', 'conv');
% imshow(sourceImage);
%---------------------------------------------

%--build palette------------------------------
[palette, paletteSize] = createPalette(shades);
paletteMatch = zeros(1, paletteSize);
white = [1, 1, 1; 1, 1, 1; 1, 1, 1];
%---------------------------------------------

%--replace tiles------------------------------
for i = 1:3:height
    for j = 1:3:width
        % shortcut: all-white source blocks can be pasted directly
        if (sourceImage(i:(i+2), j:(j+2)) == white(:,:))
            outputImage(i:(i+2), j:(j+2)) = white(:,:);
        else

Page 13: Transforming Sketches into Vectorized Imagespages.cs.wisc.edu › ~dyer › cs534-spring11 › hw › hw5 › projects › farrell.pdfTransforming Sketches into Vectorized Images Introduction

            for p = 1:paletteSize
                paletteMatch(1, p) = calcSSD(sourceImage(i:(i+2), j:(j+2)), palette(:,:,p));
            end
            [minimumSSD, indexOfMinSSD] = min(paletteMatch);
            outputImage(i:(i+2), j:(j+2)) = palette(1:3,1:3,indexOfMinSSD);
        end
    end
end
gaussianFilter = fspecial('gaussian', [4, 4], 50);
outputImage = imfilter(outputImage, gaussianFilter, 'symmetric', 'conv');
%---------------------------------------------

%--find black---------------------------------
% darken pixels whose diagonal neighbors all fall below the threshold,
% repeated at thresholds .9, .8, .6, and .4
for i = 2:(height-1)
    for j = 2:(width-1)
        if ((outputImage(i-1, j+1) < .9) && ...
                (outputImage(i-1, j-1) < .9) && ...
                (outputImage(i+1, j+1) < .9) && ...
                (outputImage(i+1, j-1) < .9))
            outputImage(i,j) = (outputImage(i,j)-.1);
        end
    end
end
for i = 2:(height-1)
    for j = 2:(width-1)
        if ((outputImage(i-1, j+1) < .8) && ...
                (outputImage(i-1, j-1) < .8) && ...
                (outputImage(i+1, j+1) < .8) && ...
                (outputImage(i+1, j-1) < .8))
            outputImage(i,j) = (outputImage(i,j)-.1);
        end
    end
end
for i = 2:(height-1)
    for j = 2:(width-1)
        if ((outputImage(i-1, j+1) < .6) && ...
                (outputImage(i-1, j-1) < .6) && ...
                (outputImage(i+1, j+1) < .6) && ...
                (outputImage(i+1, j-1) < .6))
            outputImage(i,j) = (outputImage(i,j)-.1);
        end
    end
end
for i = 2:(height-1)
    for j = 2:(width-1)
        if ((outputImage(i-1, j+1) < .4) && ...
                (outputImage(i-1, j-1) < .4) && ...
                (outputImage(i+1, j+1) < .4) && ...
                (outputImage(i+1, j-1) < .4))
            outputImage(i,j) = (outputImage(i,j)-.1);
        end
    end
end
% lighten pixels surrounded by near-white neighbors
for i = 2:(height-1)
    for j = 2:(width-1)


        if ((outputImage(i-1, j+1) > .98) && ...
                (outputImage(i-1, j-1) > .98) && ...
                (outputImage(i+1, j+1) > .98) && ...
                (outputImage(i+1, j-1) > .98))
            outputImage(i,j) = 1;
        end
    end
end
imshow(outputImage);
end

%======================================= find SSD of two images of the same size
function [ssd] = calcSSD(im1, im2)
    ssd = sum(sum((im1-im2).^2));
end

%======================================= make palette
% 30 images per shade, + 1 for white
% this method creates a palette with the specified number of shades
function [palette, size] = createPalette(shades)
    size = 1 + (30*shades);
    palette = zeros(3,3,size);
    palette(:,:,1) = [ 100 , 100 , 100 ;   % white
                       100 , 100 , 100 ;
                       100 , 100 , 100 ];
    increment = (100/(shades+2));
    base = 3*increment;
    startIndex = 2;
    for i = 1:shades
        [palette, newIndex] = buildPalette(startIndex, palette, base, increment);
        startIndex = newIndex;
        base = base+increment;
    end

    palette = double(palette);
    palette = palette./100;
    size = newIndex - 1;
end
%=================================================================
% palette builder
function [palette, endIndex] = buildPalette(startIndex, palette, base, increment)
    w = base;
    x = base - increment;
    o = x - increment;
    index = startIndex;
    %-----------------------------base
    palette(:,:,index) = [ o, o, o;
                           o, o, o;
                           o, o, o ];
    index = index + 1;
    %-----------------------------gradients
    palette(:,:,index) = [ o, o, o;
                           x, x, x;
                           w, w, w ];
    index = index + 1;
    palette(:,:,index) = [ o, o, x;
                           o, x, w;
                           x, w, w ];


    index = index + 1;

    palette(:,:,index) = [ o, x, w;
                           o, x, w;
                           o, x, w ];
    index = index + 1;
    palette(:,:,index) = [ x, w, w;
                           o, x, w;
                           o, o, x ];
    index = index + 1;
    palette(:,:,index) = [ w, w, w;
                           x, x, x;
                           o, o, o ];
    index = index + 1;
    palette(:,:,index) = [ w, w, x;
                           w, x, o;
                           x, o, o ];
    index = index + 1;
    palette(:,:,index) = [ w, x, o;
                           w, x, o;
                           w, x, o ];
    index = index + 1;
    palette(:,:,index) = [ x, o, o;
                           w, x, o;
                           w, w, x ];
    index = index + 1;
    %-----------------------------lines
    palette(:,:,index) = [ x, x, x;
                           w, w, w;
                           x, x, x ];
    index = index + 1;
    palette(:,:,index) = [ o, x, w;
                           x, w, x;
                           w, x, o ];
    index = index + 1;

    palette(:,:,index) = [ x, w, x;
                           x, w, x;
                           x, w, x ];
    index = index + 1;
    palette(:,:,index) = [ w, x, o;
                           x, w, x;
                           o, x, w ];
    index = index + 1;
    palette(:,:,index) = [ w, w, w;
                           x, w, x;
                           x, w, x ];
    index = index + 1;
    palette(:,:,index) = [ w, w, w;
                           x, x, w;
                           o, x, w ];
    index = index + 1;
    palette(:,:,index) = [ w, w, w;
                           w, x, x;
                           w, x, o ];


    index = index + 1;
    palette(:,:,index) = [ x, w, x;
                           w, w, w;
                           x, w, x ];
    index = index + 1;
    palette(:,:,index) = [ x, x, w;
                           w, w, w;
                           x, x, w ];
    index = index + 1;
    palette(:,:,index) = [ w, x, x;
                           w, w, w;
                           w, x, x ];
    index = index + 1;
    palette(:,:,index) = [ x, w, x;
                           x, w, x;
                           w, w, w ];
    index = index + 1;
    palette(:,:,index) = [ o, x, w;
                           x, x, w;
                           w, w, w ];
    index = index + 1;
    palette(:,:,index) = [ w, x, o;
                           w, x, x;
                           w, w, w ];
    index = index + 1;
    %-----------------------------special cases
    palette(:,:,index) = [ x, x, x;
                           x, x, x;
                           w, w, w ];
    index = index + 1;
    palette(:,:,index) = [ x, x, w;
                           x, x, w;
                           x, x, w ];
    index = index + 1;
    palette(:,:,index) = [ w, w, w;
                           x, x, x;
                           x, x, x ];
    index = index + 1;
    palette(:,:,index) = [ w, x, x;
                           w, x, x;
                           w, x, x ];
    index = index + 1;
    palette(:,:,index) = [ o, x, o;
                           x, w, x;
                           w, w, w ];
    index = index + 1;
    palette(:,:,index) = [ o, x, w;
                           x, w, w;
                           o, x, w ];
    index = index + 1;
    palette(:,:,index) = [ w, w, w;
                           x, w, x;
                           o, x, o ];
    index = index + 1;
    palette(:,:,index) = [ w, x, o;
                           w, w, x;
                           w, x, o ];
    endIndex = index + 1;
end