

Image Interpolation Techniques with Optical and Digital Zoom Concepts

Musaab Mohammed Jasim
Yildiz Technical University, Faculty of Electrical & Electronics, Computer Engineering Department
Seminar course
[email protected]
ID: 14501063

Abstract — A digital image corresponds to some physical response in real 2-D space, e.g. the optical intensity received at the image plane of a camera or the ultrasound intensity at a transceiver. It can therefore be considered a discrete representation of data possessing both spatial (layout) and intensity (color) information. By processing these data we can obtain various transformed versions of the image, and one of these processes is zooming [1]. Zooming includes enlargement and shrinking, and both require two steps: the creation of new pixel locations, and the assignment of gray (or color) levels to those new locations [2]. In this paper we address the interpolation techniques used to achieve zooming with three types of algorithms, examining how each is executed, its effect on the image, and its results. Keywords — Image processing, digital zooming, interpolation techniques.

I. INTRODUCTION

Interpolation is the process of estimating the values of a continuous function from discrete samples. Image processing applications of interpolation include image magnification or reduction, sub-pixel image registration, correction of spatial distortions, and image decompression, among others. Of the many image interpolation techniques available, nearest neighbor, bilinear, and bicubic are the most common non-adaptive methods [3]. In this paper we discuss these techniques in detail as they apply to image enlargement and shrinking.

To understand these topics, we first need to address several related ones: image resolution (spatial and intensity resolution), the difference between optical and digital zooming, and linear interpolation.

II. IMAGE RESOLUTION CONCEPTS

Resolution is the capability of the sensor to observe or measure the smallest object clearly, with distinct boundaries, while the pixel is the basic unit of the digital image; resolution is therefore the unit of measurement of clarity in digital imaging. There are different types of resolution used to characterize the clarity of a digital image. One of them is pixel resolution: when the pixels are counted, the total is referred to as the pixel resolution. The convention is to describe it with a pair of positive integers, where the first number is the number of pixel columns (width) and the second is the number of pixel rows (height). Figure (1) below illustrates how the same image might appear at different pixel resolutions if the pixels were rendered as sharp squares (normally a smooth reconstruction from the pixels would be preferred, but for illustrating the pixels themselves the sharp squares make the point better).

Figure (1)

Pixel resolution therefore gives the number of pixels in the image, but unfortunately the pixel count is not a real measure of image clarity, as most people think. Other concepts determine clarity, such as spatial and intensity resolution for gray images and spectral resolution for color images [4].

Spatial resolution can be defined as the number of independent pixel values per unit length (it depends on the sampling process of the sensor). It therefore depends on the number of pixels and on the area over which these pixels are spread. Since spatial resolution refers to clarity, different measures have been devised for different devices:

- Dots per inch (DPI) is usually used for monitors.
- Lines per inch (LPI) is usually used for laser printers.
- Pixels per inch (PPI) is used for devices such as tablets, mobile phones, etc.

Intensity resolution, on the other hand, is the bit depth, i.e. the range of gray or color levels the pixels in an image can take; it is determined by the number of bits assigned to each pixel (it depends on the quantization process). For example, 8 bits per pixel gives 2^8 = 256 distinct gray levels.


Since differences of spectrum or wavelength are needed to reproduce color, the concept of spectral resolution is used for color images; it is defined as "the ability to resolve spectral features and bands into their separate components". The spectral resolution required by the analyst or researcher depends upon the application involved.
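As a small illustration of pixel and intensity resolution, the dimensions and bit depth of an image file can be read directly in MATLAB. The sketch below assumes the Image Processing Toolbox is available and uses its sample image 'cameraman.tif' only as a stand-in for any image file.

% Report the pixel resolution (width x height) and the intensity resolution (bit depth).
info = imfinfo('cameraman.tif');
fprintf('Pixel resolution: %d x %d pixels\n', info.Width, info.Height);
fprintf('Intensity resolution: %d bits per pixel (%d levels)\n', info.BitDepth, 2^info.BitDepth);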

III. OPTICAL ZOOM VS. DIGITAL ZOOM

Optical zoom means moving the zoom lens so that it increases the magnification of the light before it even reaches the digital sensor (i.e. before the sampling and quantization processes). The optically zoomed image therefore occupies the full area of the sensor and is simply a magnified version of the real-life scene [5].

A digital zoom is not really zoom in the strictest sense of the term. It degrades quality by simply interpolating the image after it has been acquired at the sensor (after the sampling and quantization processes). Because it is applied after the spatial resolution has been determined, it amounts to resampling the image: creating new pixel locations and assigning gray-level (or color) values to those locations.

There are two types of digital zoom. The most common form involves image interpolation, which is introduced and discussed here. The second type is called "smart zoom".

Based on the above definitions, we can see that with digital zoom the detail is clearly far less than with optical zoom, as shown in the images below; a small code sketch of the crop-and-interpolate process follows them.

Figure: the original image, 10X optical zoom, and 10X digital zoom.
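As a rough illustration of what digital zoom does, the following MATLAB sketch crops the central quarter of an image and interpolates it back up to the original size; the 2x factor, the test image, and the choice of bilinear interpolation are assumptions made only for this example.

% Digital zoom as crop-then-interpolate (sketch); assumes image dimensions divisible by 4.
img = imread('cameraman.tif');
[rows, cols] = size(img);
crop = img(rows/4 + 1 : 3*rows/4, cols/4 + 1 : 3*cols/4);   % central region ("2x digital zoom")
dz = imresize(crop, [rows cols], 'bilinear');               % interpolate back up to full size
imshow(dz);                                                 % visibly softer than the original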

IV. IMAGE INTERPOLATIONS

Image interpolation occurs whenever we resize or distort an image from one pixel grid to another. Resizing is needed when we want to increase or decrease the total number of pixels, whereas remapping occurs when correcting distortion or rotating an image, as shown in Figure (2). Zooming refers to increasing the number of pixels, so that when we zoom into an image we are able to see more detail (the pixelation process).

Figure (2)

Common interpolation algorithms can be grouped into two categories: adaptive and non-adaptive. Adaptive methods change depending on what they are interpolating, whereas non-adaptive methods treat all pixels equally. Non-adaptive algorithms include nearest neighbor, bilinear, bicubic, spline, sinc, and others, while adaptive algorithms include many proprietary algorithms in licensed software such as Qimage, PhotoZoom Pro, and Genuine Fractals [5][6]. In this paper we address three of the non-adaptive algorithms for resizing purposes, trying to understand each algorithm and the differences between their results, and then implement them in MATLAB to see their effect on gray and color images (a quick comparison using MATLAB's built-in resizing function is sketched after Figure (3)). These three methods are:

1. Nearest neighbor interpolation.
2. Bilinear interpolation.
3. Bicubic interpolation.

Figure (3) shows what is meant by the resizing concept and the role of interpolation in achieving it.

Figure(3) Resizing by using Interpolation
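Before implementing the algorithms by hand, their visual differences can be previewed with MATLAB's built-in imresize function; this is only a sketch, and the 4x factor and the sample image 'cameraman.tif' (shipped with the Image Processing Toolbox) are assumptions for illustration.

% Quick side-by-side comparison of the three interpolation methods using imresize.
img = imread('cameraman.tif');
nn = imresize(img, 4, 'nearest');     % nearest neighbor
bl = imresize(img, 4, 'bilinear');    % bilinear
bc = imresize(img, 4, 'bicubic');     % bicubic
subplot(1,3,1), imshow(nn), title('Nearest neighbor');
subplot(1,3,2), imshow(bl), title('Bilinear');
subplot(1,3,3), imshow(bc), title('Bicubic');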

Nearest Neighbor Interpolation

Nearest neighbor interpolation, the simplest method, determines the gray level value (or color) from the pixel closest to the specified input coordinates and assigns that value to the output coordinates. It should be noted that this method does not really interpolate values; it just copies existing values. Since it does not alter values, it is preferred when subtle variations in the gray level values need to be retained [7].

For one-dimension Nearest Neighbor Interpolation, the number of grid points needed to evaluate the interpolation function is two. For two-dimension Nearest Neighbor Interpolation, the number of grid points needed to evaluate the interpolation function is four.


Nearest Neighbor algorithm

Figures (4) and (5) below show the two cases (enlargement and reduction) of the nearest neighbor interpolation method [8]; a minimal code sketch follows them.

Figure(4) Enlargement

Figure(5) Reducing
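The referenced algorithm listing is not reproduced here. As an illustration, a minimal MATLAB sketch of nearest-neighbor resizing for a grayscale image might look as follows; the function name and the simple coordinate mapping are my own assumptions rather than the exact algorithm of [8].

% Minimal nearest-neighbor resize for a grayscale image (sketch).
function out = nn_resize(img, newRows, newCols)
    [rows, cols] = size(img);
    out = zeros(newRows, newCols, 'like', img);
    for r = 1:newRows
        for c = 1:newCols
            % Map each output pixel back to the closest source pixel and copy its value.
            srcR = min(max(round(r * rows / newRows), 1), rows);
            srcC = min(max(round(c * cols / newCols), 1), cols);
            out(r, c) = img(srcR, srcC);
        end
    end
end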

Bilinear Interpolation

Bilinear Interpolation determines the grey level value (or color) from the weighted average of the four closest pixels to the specified input coordinates, and assigns that value to the output coordinates.

Bilinear interpolation considers the closest 2x2 neighborhood of known pixel values surrounding the unknown pixel's computed location. It then takes a weighted average of these 4 pixels to arrive at its final, interpolated value. The weight on each of the 4 pixel values is based on the computed pixel's distance (in 2D plane) from each of the known points (linear interpolations).

But what are the "linear interpolation" and the "weighted average" that are used to implement the bilinear method?

Linear interpolation between two known points

If the two known points are given by the coordinates (x0, y0) and (x1, y1), the linear interpolant is the straight line between these points. For a value x in the interval (x0, x1), the value y along the straight line satisfies equation (1) [9]:

(y - y0) / (x - x0) = (y1 - y0) / (x1 - x0)        (1)

This can be derived geometrically from Figure (6) below; it is a special case of polynomial interpolation with n = 1.

Figure(6)

Solving this equation for y, which is the unknown value at x, gives equation (2):

y = y0 + (x - x0) * (y1 - y0) / (x1 - x0)        (2)

which is the formula for linear interpolation in the interval (x0, x1).

Outside this interval, the formula is identical to linear extrapolation.
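As a small check, equation (2) can be written directly as a MATLAB anonymous function (a sketch; the name lerp is my own):

% Linear interpolation between (x0, y0) and (x1, y1), evaluated at x (equation (2)).
lerp = @(x, x0, y0, x1, y1) y0 + (x - x0) .* (y1 - y0) ./ (x1 - x0);
lerp(14.5, 14, 91, 15, 210)   % returns 150.5, the row-20 value used in the example below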

weighted average

This formula can also be understood as a weighted average. The weights are inversely related to the distance from the end points to the unknown point: the closer point has more influence than the farther one. The weight applied to each known value is (x1 - x) / (x1 - x0) for y0 and (x - x0) / (x1 - x0) for y1, i.e. the normalized distance between the unknown point and the opposite end point. These weights sum to 1, as shown in equation (3):

(x1 - x) / (x1 - x0) + (x - x0) / (x1 - x0) = 1        (3)


This yields the formula for linear interpolation given above, rewritten as y = y0 * (x1 - x) / (x1 - x0) + y1 * (x - x0) / (x1 - x0). Figure (7), with the colored arrows, shows the idea of the weighted average.

Figure (7)

Calculating a weighted average for an image

Based on the concepts mentioned above, the weighted average of the attributes (color, alpha, etc.) of the four surrounding pixels (as shown in Figure (8)) is computed and applied to the screen pixel. This process is repeated for each pixel forming the object being textured.

Figure (8)

To understand how the weighted average of a digital image is calculated, we will work through a simple example using the gray levels of four pixels taken from a grayscale image, and then calculate the gray level of the interpolated pixel based on the rules already discussed [10].

Example

Based on Figure (9), the intensity value at the pixel located at row 20.2, column 14.5 can be calculated by first interpolating linearly between the values at columns 14 and 15 on each of rows 20 and 21, giving

f(20, 14.5) = ((15 - 14.5) / (15 - 14)) * 91 + ((14.5 - 14) / (15 - 14)) * 210 = 150.5

f(21, 14.5) = ((15 - 14.5) / (15 - 14)) * 162 + ((14.5 - 14) / (15 - 14)) * 95 = 128.5

and then interpolating linearly between these values, giving

f(20.2, 14.5) = ((21 - 20.2) / (21 - 20)) * 150.5 + ((20.2 - 20) / (21 - 20)) * 128.5 = 146.1

This algorithm reduces some of the visual distortion caused by resizing an image to a non-integral zoom factor, as opposed to nearest neighbor interpolation, which will make some pixels appear larger than others in the resized image.

Figure (9)

For a color digital image the same rule is used, but it is applied separately to each channel (red, green, blue) of the image; a short sketch verifying the arithmetic above follows.
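The worked example can be checked with a few lines of MATLAB (a sketch; the four intensity values are the ones read from Figure (9)):

% Verify the worked example: interpolated intensity at row 20.2, column 14.5.
% Known values: f(20,14) = 91, f(20,15) = 210, f(21,14) = 162, f(21,15) = 95.
r20 = 0.5*91  + 0.5*210;    % interpolate along row 20 -> 150.5
r21 = 0.5*162 + 0.5*95;     % interpolate along row 21 -> 128.5
val = 0.8*r20 + 0.2*r21     % interpolate between the rows -> 146.1
% For a color image the same computation would be repeated per channel (R, G, B).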

Bilinear algorithm

Having understood the weighted average rule and how it is calculated, we can now follow the bilinear algorithm, which is built on the notation of Figure (10).

Figure (10)

The details of the algorithm's implementation, the variable definitions, and the relations between them follow the notation of Figure (10) [8]; a minimal sketch is given below.
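Since the referenced listing is not reproduced here, the following is a minimal MATLAB sketch of bilinear resizing for a grayscale image; the function name, the coordinate mapping, and the border handling are my own assumptions rather than the exact algorithm of [8].

% Minimal bilinear resize for a grayscale image (sketch); assumes an 8-bit input
% and output sizes larger than 1 in each dimension.
function out = bilinear_resize(img, newRows, newCols)
    img = double(img);
    [rows, cols] = size(img);
    out = zeros(newRows, newCols);
    for r = 1:newRows
        for c = 1:newCols
            % Map output coordinates to continuous coordinates in the source grid.
            y = (r - 1) * (rows - 1) / (newRows - 1) + 1;
            x = (c - 1) * (cols - 1) / (newCols - 1) + 1;
            y0 = floor(y);  y1 = min(y0 + 1, rows);
            x0 = floor(x);  x1 = min(x0 + 1, cols);
            dy = y - y0;    dx = x - x0;
            % Weighted average of the 2x2 neighborhood (the bilinear formula).
            out(r, c) = (1 - dy) * ((1 - dx) * img(y0, x0) + dx * img(y0, x1)) + ...
                        dy * ((1 - dx) * img(y1, x0) + dx * img(y1, x1));
        end
    end
    out = uint8(out);
end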


BiCubic Interpolation

The bicubic interpolation method determines the gray level value (or color) from the weighted average of the 16 pixels closest to the specified input coordinates, as shown in Figure (11), and assigns that value to the output coordinates. The resulting image is slightly sharper than that produced by bilinear interpolation, and it does not have the disjointed appearance produced by nearest neighbor interpolation.

Figure (11)

So, Bicubic goes one step beyond bilinear by considering the closest 4x4 neighborhood of known pixels — for a total of 16 pixels. Since these are at various distances from the unknown pixel, closer pixels are given a higher weighting in the calculation.
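The paper does not give the bicubic weighting function. One common choice, sketched below, is the Keys cubic convolution kernel with a = -0.5; this is an assumption made for illustration, not necessarily the kernel used by any particular tool. The kernel is evaluated at the horizontal and vertical distances of each of the 16 neighbors from the interpolation point, and the products of the row and column weights give the 16 weights of the average.

% Cubic convolution kernel (Keys, a = -0.5): the 1-D weighting function from which
% the 16 bicubic weights are formed (sketch). w is 1 at t = 0 and 0 at |t| = 1, 2.
function w = cubic_kernel(t)
    a = -0.5;
    t = abs(t);
    w = ((a + 2) .* t.^3 - (a + 3) .* t.^2 + 1) .* (t <= 1) + ...
        (a .* t.^3 - 5*a .* t.^2 + 8*a .* t - 4*a) .* (t > 1 & t <= 2);
end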

V. CONCLUSION

A digital image is a visual representation in the form of a function f(x, y), where f is related to the brightness (or color) at the point (x, y). The value of each point is acquired from the light reflected by objects onto the sensors of a digital camera; the electrical responses of these sensors are aggregated, and after the sampling and quantization processes the image pixels are created. Many applications have appeared in this field, employing a great variety of processing methods and algorithms applied to the digital image, and one of these processes is the "interpolation process".

Interpolation applications include image magnification or reduction, sub-pixel image registration, correction of spatial distortions, and image decompression, among others. There are a number of techniques that can be used to enlarge an image; the three most common were presented here. The nearest neighbor and bilinear interpolation methods are very practical and easy to apply due to their simplicity, but their accuracy is limited, while bicubic interpolation gave the best results in terms of image quality at the cost of the greatest processing time.

VI. REFERENCES

1. Chris Solomon, Toby Breckon, "Fundamentals of Digital Image Processing", Chichester, West Sussex, PO19 8SQ, UK, 2011, section 1.1, pp. 20-25.
2. Rafael C. Gonzalez, Richard E. Woods, "Digital Image Processing", Second Edition, Prentice Hall, Upper Saddle River, New Jersey 07458, section 2.4.5, pp. 75-81.
3. S. J. Lebonah, D. Minola Davids, PhD, "A Novel Coding using Downsampling Technique in Video Intraframe", International Journal of Computer Applications (IJCA).
4. Richard Alan Peters II, EECE/CS 253 Image Processing course, Vanderbilt University, School of Engineering, Fall 2011.
5. Bax Smith, EN9821 Design Assignment, www.engr.mun.ca/~baxter/Publications/ImageZooming.pdf.
6. Cambridge in Colour, A Learning Community for Photographers, "Digital Image Interpolation", www.cambridgeincolour.com/tutorials/image-interpolation.htm.
7. University of Tartu, Digital Image Processing course, "Resizing Images", www.sisu.ut.ee/imageprocessing/book/3.
8. "Image resolution", Wikipedia, en.wikipedia.org/wiki/Image_resolution.
9. "Linear interpolation", Wikipedia, en.wikipedia.org/wiki/Linear_interpolation.
10. "Bilinear interpolation", Wikipedia, en.wikipedia.org/wiki/Bilinear_interpolation.