
Survey Paper on Digital Image Inpainting

Rajkumar L. Biradar
G. Narayanamma Institute of Technology & Science, Hyderabad-500008, India

International Journal of Engineering Technology, Management and Applied Sciences, www.ijetmas.com, October 2014, Volume 2, Issue 5, ISSN 2349-4476

ABSTRACT

The inpainting process can be described as the introduction of new paint into, and limited to, areas of loss in the original paint layer in order to restore design continuity. Inpainting aims to complete the damaged region by inserting the ‘best’ matching set of pixels into it such that the inpainted image looks like the ‘original’ image. The completion of the area can be done by diffusing neighborhood pixels or by searching for the best matching patch in the image and pasting it into the damaged region. The user selects an area to be inpainted and the inpainting algorithm automatically fills in the region with information (pixels) derived from the local neighborhood or from a global search of the entire image, maintaining the best possible overall perceptual quality. The quality of the inpainting depends on the size of the damaged region, the geometry of the occluded objects, the fill-in order, and so on. Small regions can be inpainted effortlessly, while large regions may produce unrealistic results.

Keywords

Inpainting, convolution, texture, structure.

1. Introduction

Image inpainting is the technique of filling in damaged regions of an image in a non-detectable way for an observer who does not know the original image. The concept of digital inpainting was introduced by Bertalmio [1]. In its most conventional form, the user selects an area for inpainting and the algorithm automatically fills in the region with information from its surroundings without any loss of perceptual quality. Inpainting techniques are broadly categorized as structure inpainting and texture inpainting. Structure filling algorithms fill the inner area with information from the structured region at the boundary of the region to be inpainted. Texture inpainting techniques fill in the damaged or missing regions using similar neighborhoods in the image; they try to match the statistics of the damaged region to the statistics of known regions in the neighborhood of a damaged pixel.

2. Literature Survey

The inpainting technique introduced by Bertalmio [1] rekindled the interest of image processing researchers in the field of inpainting. Over the past decade, many ideas and implementations have been proposed to extract the surrounding information and inpaint effectively. There is no unified framework to inpaint both structure and texture images. Hence, previous inpainting techniques may be broadly classified under three headings:

Structure Inpainting

Texture Inpainting

Hybrid Inpainting

3. Structure Inpainting

Structure inpainting is a pixel-based approach in which the properties of individual pixels are used to fill in the damaged region. Information derived from the pixels surrounding the damaged region is propagated into it. Structure inpainting techniques may be classified as:

1. Partial Differential Equation (PDE) based techniques

2. Filter based techniques

3. Probabilistic and other techniques

3.1 PDE Based Techniques

The PDE based techniques treat the image as a bounded surface defined over the pixel locations and use partial differential equations to model it. The damaged regions are ‘holes’ in this image surface. Using a smoothness constraint, the damaged region is filled in by solving the PDEs. Iterative numerical techniques are used to find an approximate solution of the PDEs with suitable boundary conditions. Some of the PDE based inpainting techniques, with their limitations, are discussed below.

Figure 1 Straight lines are used to join points at the boundary which have equal gray level.


Masnou and Morel [2] proposed inpainting by level lines based on disocclusion. The technique performs inpainting by joining the end points of geodesic curves. The points of lines of equal gray value (isophotes) arriving at the boundary of the damaged region are connected by straight lines, as shown in Figure 1. Only regions with simple topology can be inpainted, since the angle at which the level lines arrive at the boundary of the inpainted region is not preserved.

Bertalmio [1] extended the level-lines based disocclusion method of Masnou and Morel [2]. The angle of arrival of the isophotes, and hence the direction of prolongation, is maintained as the normal to the direction of the largest spatial change. If Ω is the damaged region to be inpainted with ∂Ω as its boundary, Bertalmio proposed to prolong the isophote lines arriving at ∂Ω while maintaining the angle of arrival, as shown in Figure 2. The isophotes emanating from ∂Ω curve inwards progressively as they prolong inside Ω so as to prevent them from crossing each other.

Figure 2 Propagation directions as the normal to the signed distance to the boundary ∂Ω of the damaged region Ω to be inpainted.

Bertalmio’s inpainting is carried out by interleaving anisotropic diffusion [3] with the prolongation of isophotes inside the damaged region. Prolongation is the inpainting procedure, and anisotropic diffusion tries to ensure evolution of the isophotes in the correct direction without crossing each other. The processes of prolongation and diffusion are repeated to inpaint. Anisotropic diffusion minimizes the influence of noise on the estimate of the direction of the isophotes.

To estimate the direction and prolongation of the isophotes, consider an image f with the damaged region Ω. The pixel value at location (u, v) inside the damaged region Ω at the t-th iteration is updated as


f^{t+1}(u,v) = f^{t}(u,v) + \Delta t \, f_{T}^{t}(u,v), \qquad \forall (u,v) \in \Omega   (1)

where \Delta t \in (0,1) is the diffusion constant and f_{T}^{t}(u,v) is the improvement factor, an update of the image f^{t}(u,v). The factor f_{T}^{t}(u,v) is given by

f_{T}^{t}(u,v) = \delta L^{t}(u,v) \cdot N^{t}(u,v)   (2)

where \delta L^{t}(u,v) is the change in the Laplacian information L(u,v) at (u,v) along the direction N, which is given by

\delta L^{t}(u,v) = \left( L^{t}(u+1,v) - L^{t}(u-1,v), \; L^{t}(u,v+1) - L^{t}(u,v-1) \right)   (3)

The direction of the isophotes N is obtained by rotating the gradient vector through 90° and is given by

N^{t}(u,v) = \frac{\left( -f_{y}^{t}(u,v), \; f_{x}^{t}(u,v) \right)}{\sqrt{\left( f_{x}^{t}(u,v) \right)^{2} + \left( f_{y}^{t}(u,v) \right)^{2}}}   (4)

where f_{x}(u,v) and f_{y}(u,v) are the first-order derivatives of f at (u,v).

The prolongation lines are progressively curved, preventing them from intersecting each other, using the discrete form of anisotropic diffusion [4] given by Equation (5):

f^{t+1}(u,v) = f^{t}(u,v) + \frac{1}{|\eta(u,v)|} \sum_{p \in \eta(u,v)} g\!\left( \nabla f_{(u,v),p}^{t}, \kappa \right) \nabla f_{(u,v),p}^{t}, \qquad \forall (u,v) \in \Omega   (5)

where g is the conduction function, \kappa is the gradient threshold parameter and \eta(u,v) is the four-neighborhood of (u,v). The symbol \nabla is the gradient operator, and it represents the difference between neighboring pixels in each direction, i.e.

\nabla f_{(u,v),p}^{t} = f^{t}(p) - f^{t}(u,v), \qquad p \in \eta(u,v)   (6)
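As an illustration only, a minimal NumPy sketch of one transport iteration in the spirit of Equations (1)-(4) is given below. The interleaved anisotropic diffusion step of Equations (5)-(6) is omitted, and the periodic boundary handling of np.roll, the function name and the parameter values are assumptions of this sketch, not part of [1].

import numpy as np

def isophote_transport_step(f, omega, dt=0.1, eps=1e-8):
    # f: 2-D float grayscale image; omega: boolean mask, True inside the damaged region
    # Laplacian L(u, v) with a four-neighbor stencil
    L = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
         np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)
    # delta L: change of the Laplacian along the two axes, cf. Eq. (3)
    dL0 = np.roll(L, -1, 0) - np.roll(L, 1, 0)
    dL1 = np.roll(L, -1, 1) - np.roll(L, 1, 1)
    # isophote direction N: image gradient rotated by 90 degrees, cf. Eq. (4)
    fx = 0.5 * (np.roll(f, -1, 0) - np.roll(f, 1, 0))
    fy = 0.5 * (np.roll(f, -1, 1) - np.roll(f, 1, 1))
    norm = np.sqrt(fx ** 2 + fy ** 2) + eps
    # improvement factor f_T = delta L . N, cf. Eq. (2)
    f_T = (dL0 * (-fy) + dL1 * fx) / norm
    out = f.copy()
    out[omega] = f[omega] + dt * f_T[omega]   # update only inside omega, cf. Eq. (1)
    return out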

The inpainting procedure of Bertalmio has the following limitations:


1. Edges are not preserved.
2. A large number of iterations is needed for a larger inpainting area, since the isophotes are of small size.
3. Artifacts are generated for larger areas, as more of the anisotropic diffusion process is invoked to avoid crossing of the isophotes.

In [5], the ideas from computational fluid dynamics (CFD) are used to propagate isophote lines. The image intensity is treated as a ‘stream function’ of a two-dimensional incompressible flowing fluid. The Laplacian of the image intensity plays the role of the vorticity of the fluid and is transported into the region to be inpainted by a vector field defined by the stream function. The technique is designed to continue isophotes while matching gradient vectors at the boundary of the inpainting region. The method is directly based on the Navier-Stokes equations describing fluid dynamics.

Chan and Shen proposed two image inpainting techniques. The Total Variation (TV) inpainting model [6, 7] uses Euler-Lagrange modeling. Inside the inpainting domain, this model employs anisotropic diffusion based on the contrast of the isophotes. It does not connect broken edges (i.e. single lines embedded in a uniform background). The Curvature-Driven Diffusion (CDD) model [8], an extension of the TV technique, takes into account the geometric information of the isophotes when defining the strength of the diffusion process. This allows inpainting over large areas. Although CDD connects some broken edges, the inpainting results are blurred. The phase transition in superconductors and the Ginzburg-Landau equation [9, 10] have been used to inpaint selected areas. In [11], normal and tangential vectors are propagated into damaged/missing regions and the image is reconstructed.

A. Telea [12] proposed a fast marching method (FMM) based on a PDE. It is considerably faster and simpler to implement than other PDE based techniques, without computational overhead. The technique calculates a smoothness estimate of the image from the known neighborhood of a pixel as a weighted average and uses it to inpaint. The FMM inpaints the pixels nearest to the known region first and maintains a narrow band of pixels which separates known pixels from unknown pixels. The limitation of this technique is that it produces blur when the region to be inpainted is thicker than ten pixels.
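Both the FMM technique of Telea [12] and the Navier-Stokes based technique [5] are available in the OpenCV library, which makes it easy to experiment with them. A minimal usage sketch is shown below; the file names and the inpainting radius are placeholders, not values recommended by [12] or [5].

import cv2

image = cv2.imread("damaged.png")                       # color image to be restored
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)     # nonzero pixels mark the damaged region

telea = cv2.inpaint(image, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)   # FMM of Telea [12]
ns = cv2.inpaint(image, mask, inpaintRadius=3, flags=cv2.INPAINT_NS)         # Navier-Stokes [5]

cv2.imwrite("restored_telea.png", telea)
cv2.imwrite("restored_ns.png", ns)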

Bertalmio [13] reformulated the inpainting problem as a particular case of image interpolation in which level lines (isophotes) are propagated. In this technique, a third order PDE is derived based on the local neighborhoods of the damaged region and a Taylor expansion. This PDE is optimal in the sense that it is the most accurate third order PDE which can ensure continuation of level lines into the damaged region. The continuation is strong [14], allowing the restoration of thin structures occluded by a wide gap. It is also contrast invariant.

D. Fishelov [15] proposed an extension of [5]. The idea is to use fluid equations, the Navier-Stokes equations, as a PDE based method for image inpainting. Representing the Navier-Stokes equations in terms of the stream function eases the implementation and the analysis of the inpainting technique.

The Total Variation model [6, 7] for image inpainting is an effective method, but its interpolation is limited to creating straight isophotes that are not necessarily smoothly continued from the boundary. Peiying Chen [16] made improvements to propagate the information smoothly from the boundary into the damaged region and proposed a fourth-order PDE technique to inpaint.

Zhongyu Xu [17] presents a faster technique based on PDEs, called quick curvature-driven diffusion (QCDD), which produces better results with less computation time. The QCDD model is developed on the basis of the curvature-driven diffusion (CDD) model. Both the CDD and QCDD models are supported by the “connectivity and holistic principle”. These techniques connect a few broken edges, but produce a blurry look after inpainting.

Julia A. Dobrosotskaya [18] constructed a new variational method for blind deconvolution and inpainting of images. It is motivated by recent PDE-based techniques involving the Ginzburg-Landau function and localized wavelet-based methods. Comparable speeds and better reconstruction of edges are reported by the authors.

Peiying Chen [19] proposed an inpainting technique based on a nonlinear PDE. This procedure allows the transportation and diffusion of image information simultaneously: it permits the transportation of available information from the outside towards the inside of the inpainting region and, at the same time, the diffusion of the information inside the inpainting domain.

Xiaobao Lu [20] proposed a fast image inpainting technique based on the TV model, which is an extension of [6, 7, 16]. They proposed a priority TV model based on the analysis of the local characteristics of the pixels around the damaged region: if more information appears around the damaged pixels, the diffusion is faster. The technique first stratifies and filters the pixels around the damaged region according to priority, and then iteratively inpaints the damaged region based on the priority. The basic idea is as follows (a small sketch of the TV update used in Step 5 follows the list):

Step 1: Segment the damaged region.
Step 2: Find the edge of the damaged region.
Step 3: Calculate the priority of the pixels on the edge and sort them in accordance with the priority; if the priority of a pixel is greater than a certain threshold T, reserve the pixel, else delete it.
Step 4: Store the reserved pixels according to the order of priority as a layer.
Step 5: Update the damaged region using the TV model.
Step 6: Repeat Steps 2-5 until the area of the damaged region vanishes.
Step 7: Iteratively inpaint according to the priority from outside to inside.
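As mentioned above, a small NumPy sketch of a plain TV update of the kind used in Step 5 is given here. It omits the priority stratification of [20], keeps the known pixels fixed as boundary conditions, and the step size, iteration count and regularization constant are assumptions of the sketch.

import numpy as np

def tv_inpaint(f, omega, n_iter=500, dt=0.1, eps=1e-6):
    # f: 2-D float image; omega: boolean mask, True where pixels are damaged
    u = f.copy()
    for _ in range(n_iter):
        # forward differences for the gradient
        ux = np.roll(u, -1, 1) - u
        uy = np.roll(u, -1, 0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        px, py = ux / mag, uy / mag
        # backward differences give div(grad u / |grad u|), the TV (curvature) term
        div = (px - np.roll(px, 1, 1)) + (py - np.roll(py, 1, 0))
        u[omega] += dt * div[omega]          # diffuse only inside the damaged region
    return u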

Zhaozhong Wang [21] proposed an application of image inpainting techniques to the edge enhancement problems in image deblurring and denoising. The edge enhancement effect is achieved by the jumps of pixel values at the edge locations resulting from an inpainting process. The process is formulated by the Eikonal PDE, which rules the inpainting priority of pixels in automatically erased regions. The equation is then numerically solved by the fast marching method. A solution of Laplace's equation is also embedded in the numerical scheme to ensure smoothness in non-edge locations.

Y. Zhang [22] introduced fractional-order image inpainting (a projection interpolation method) into metal artifact reduction in Computed Tomography (CT) images. They introduce a fast non-iterative method based on the fast marching method (FMM) and coherence transport for metal artifact reduction (MAR). In [23] the image is inpainted by a fractional-order TV image inpainting model, a combination of TV and fractional derivatives; they introduced a new class of fractional-order variational image inpainting models in both the space and wavelet domains. Niang O. [24] proposed an alternative implementation of the empirical mode decomposition (EMD) of Huang. This approach relies on a nonlinear diffusion-based filtering process to solve the mean envelope estimation problem.

PDE based methods are complex and slow. Also, edge information is not handled well and the results show a blocky effect for large damaged regions. Sometimes the implementation of PDEs is numerically unstable.


3.2 Filter Based Techniques

In these techniques, the damaged region is convolved with a filter mask and the results depend on the convolution mask. Oliveira [25] proposed a fast image inpainting method. The algorithm consists of four steps:

Step 1: Selection of the damaged region to be inpainted.

Step 2: Detecting the region boundary.

Step 3: Initializing Ω, the damaged region by clearing its pixel information.

Step 4: Each pixel in Ω is convolved with the diffusion kernel.

The boundary ∂Ω is one pixel thick, and the number of convolution iterations is independently controlled by setting an appropriate threshold. Most of the results, as reported by the author, use more than 100 iterations. The inpainting process progresses from the boundary ∂Ω into the damaged region Ω. Pseudo code of this technique and the filter kernels are shown in Figure 3.

The damaged region is convolved with an averaging filter to compute the weighted averages of the pixels in its neighborhood, which is similar to anisotropic diffusion. The advantage of the technique is that it is fast, but it cannot handle high contrast edges or high frequency components (e.g. natural textures). In [26], Hadhoud presented a modification of Oliveira's technique with lower time complexity.

Figure 3 Oliveira's inpainting pseudo code with the two diffusion kernels used; a = 0.073235, b = 0.176765 and c = 0.125.

Initialize Ω
for (itr = 0; itr < num_iterations; itr++)
    convolve the masked region with one of the kernels:

    a b a      c c c
    b 0 b      c 0 c
    a b a      c c c
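For illustration, a short NumPy/SciPy sketch of the repeated convolution in Figure 3 is given below; the fixed iteration count, the use of scipy.ndimage.convolve and the border handling are assumptions of the sketch rather than details of [25].

import numpy as np
from scipy.ndimage import convolve

A, B = 0.073235, 0.176765
KERNEL = np.array([[A, B, A],
                   [B, 0.0, B],
                   [A, B, A]])

def oliveira_inpaint(f, omega, kernel=KERNEL, n_iter=100):
    # f: 2-D float image; omega: boolean mask, True inside the damaged region
    u = f.astype(float).copy()
    u[omega] = 0.0                            # Step 3: clear the damaged pixels
    for _ in range(n_iter):
        smoothed = convolve(u, kernel, mode="nearest")
        u[omega] = smoothed[omega]            # Step 4: diffuse only inside the damaged region
    return u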


H. Noori [27] proposed a method in which the convolution mask coefficients are calculated using the gradient of the image to be inpainted. The gradient of the known pixels in the neighborhood of the inpainted pixel is used to compute the weights in the convolution mask.

In [28], a bilateral kernel used for convolution is obtained by multiplying range and space kernels. For each pixel, the kernels are calculated using its neighbors in the space and range domains. Since bilateral filters are efficient in denoising [29-31], inpainting can be performed by estimating the lost (damaged) pixels.
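A possible way to use such a bilateral weight for filling a single lost pixel is sketched below. Since the true intensity of the lost pixel is unknown, the range term here uses the mean of the known neighbors as a rough stand-in, which is an assumption of this sketch and not necessarily the estimator of [28].

import numpy as np

def bilateral_estimate(image, known, r, c, radius=3, sigma_s=2.0, sigma_r=0.1):
    # image: 2-D float array; known: boolean array, True where pixel values are valid
    r0, r1 = max(r - radius, 0), min(r + radius + 1, image.shape[0])
    c0, c1 = max(c - radius, 0), min(c + radius + 1, image.shape[1])
    patch, valid = image[r0:r1, c0:c1], known[r0:r1, c0:c1]
    if not valid.any():
        return None                                   # no known neighbor yet
    rows, cols = np.mgrid[r0:r1, c0:c1]
    space = np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2.0 * sigma_s ** 2))   # space kernel
    guess = patch[valid].mean()                       # rough estimate used by the range kernel
    rng = np.exp(-((patch - guess) ** 2) / (2.0 * sigma_r ** 2))                  # range kernel
    w = space * rng * valid
    return float((w * patch).sum() / w.sum())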

In general, all convolution based techniques are fast and provide good results only when the damaged regions are thin and small. Images with high contrast edges or high frequency components produce noisy results.

3.3 Probabilistic and Other Techniques

Roth and Black [32] developed a framework of generic and expressive image priors that capture the statistics of natural scenes. The approach extends traditional Markov Random Field (MRF) models by learning potential functions over extended pixel neighborhoods. Field potentials are modeled using a Products-of-Experts framework that exploits nonlinear functions of many linear filter responses. The authors demonstrated the capabilities of the Field of Experts (FoE) model with two example applications, image denoising and image inpainting, which are implemented using a simple, approximate inference scheme. The model is trained on a generic image database and is not tuned towards a specific application.

The Field of Experts approach considers an image to be composed of many small sub-images, the fields of experts. It then determines how information propagates around the mask area in these fields of experts, and this propagation is extended to the actual image.

The inpainting prior is defined as a full Product of t-distributions (PoT) model, which is written as

p(x) = \frac{1}{Z} \prod_{i=1}^{N} \phi_i\!\left( J_i^{T} x; \alpha_i \right)   (7)

where \theta_i = \{ \alpha_i, J_i \} and the experts \phi_i have the form


\phi_i\!\left( J_i^{T} x; \alpha_i \right) = \left( 1 + \frac{1}{2} \left( J_i^{T} x \right)^{2} \right)^{-\alpha_i}   (8)

and Z is the normalizing, or partition, function. The \alpha_i are assumed to be positive, which is needed to make the \phi_i proper distributions, but note that the experts themselves are not assumed to be normalized. It will later be convenient to rewrite the probability density in Gibbs form as p(x) = \frac{1}{Z} \exp\left( -E_{\mathrm{PoE}}(x, \Theta) \right), with

E_{\mathrm{PoE}}(x, \Theta) = -\sum_{i} \log \phi_i\!\left( J_i^{T} x; \alpha_i \right)   (9)

The probability density of a full image under the FoE model is

p(x) = \frac{1}{Z} \exp\left( -E_{\mathrm{FoE}}(x, \Theta) \right)   (10)

where

E_{\mathrm{FoE}}(x, \Theta) = -\sum_{k} \sum_{i} \log \phi_i\!\left( J_i^{T} x_{(k)}; \alpha_i \right)   (11)

or, equivalently,

p(x) = \frac{1}{Z} \prod_{k} \prod_{i=1}^{N} \phi_i\!\left( J_i^{T} x_{(k)}; \alpha_i \right)   (12)

where \phi_i and \alpha_i are defined as before. The important difference with respect to the PoE model in Equation (7) is that the product is taken over all neighborhoods k, as shown in Equation (12).

The gradient of the log-prior [33] is given by

\nabla_x \log p(x) = \sum_{i} J_i^{-} * \psi_i\!\left( J_i * f \right)   (13)

where \psi_i(y) = \partial \log \phi_i(y; \alpha_i) / \partial y and J_i^{-} denotes the filter obtained by mirroring J_i around its center pixel [33]. J_i * f denotes the convolution of the image f with the filter J_i.


The inpainting algorithm propagates information using the FoE prior and is given by

f^{t+1} = f^{t} + \eta \, M \sum_{i=1}^{N} J_i^{-} * \psi_i\!\left( J_i * f^{t} \right)   (14)

In this update, t is the iteration index, \eta is the update rate and the mask M sets the gradient to zero for all pixels outside the masked region. Here the local structure information comes from the response of the learned filter banks. This technique produces inferior inpainting quality if the region to be inpainted is large and the background is made up of different colors.
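A minimal NumPy/SciPy sketch of the update in Equation (14) is given below. The two derivative filters merely stand in for the learned FoE filters J_i, and the expert parameters, step size and border handling are assumptions of the sketch rather than the trained model of [32].

import numpy as np
from scipy.ndimage import convolve

def psi(y, alpha):
    # derivative of log phi_i(y; alpha) for the Student-t expert phi(y) = (1 + y^2/2)^(-alpha)
    return -alpha * y / (1.0 + 0.5 * y ** 2)

def foe_inpaint_step(f, omega, filters, alphas, eta=0.05):
    # f: 2-D float image; omega: boolean mask, True inside the damaged region
    grad = np.zeros_like(f)
    for J, alpha in zip(filters, alphas):
        response = convolve(f, J, mode="nearest")         # J_i * f
        mirrored = J[::-1, ::-1]                          # J_i^- : filter mirrored about its center
        grad += convolve(psi(response, alpha), mirrored, mode="nearest")
    out = f.copy()
    out[omega] += eta * grad[omega]                       # the mask keeps known pixels unchanged
    return out

# simple derivative filters used here in place of learned FoE filters (illustrative only)
filters = [np.array([[0.0, 0.0, 0.0], [-1.0, 1.0, 0.0], [0.0, 0.0, 0.0]]),
           np.array([[0.0, -1.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.0]])]
alphas = [1.0, 1.0]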

George Papandreou [34] adopted a probabilistic model-based technique to inpaint the damaged region. The main elements of this model are an over-complete complex-wavelet image representation, which ensures good shift invariance and directional selectivity, and a discrete-state/continuous-observation Hidden Markov Tree (HMT) model for the wavelet coefficients. The HMT captures key statistical properties of image wavelet responses, such as heavy-tailed histograms and the persistence of large wavelet coefficients across scales. These ideas are integrated into a multi-scale generative process for natural images, with both deterministic and Markov chain Monte Carlo techniques for image inpainting. They demonstrated the effectiveness of the method in restoring images of ancient wall paintings.

In [35], Kwan-Jung Oh proposed a hole-filling technique using depth based inpainting for depth image-based rendering (DIBR), filling the holes caused by disocclusion regions and wrong depth values. The proposed hole-filling method provides better rendering quality both objectively and subjectively.

In [36], Liu He proposed a depth-guided exemplar-based inpainting technique in which a single color image and its associated disparity map are inpainted simultaneously. Exemplars are randomly selected under depth constraints in the initialization and optimized with a nearest neighbor search method in a semi-global way for smooth completion. Experimental results with datasets of different scenes demonstrate the positive impact of depth control in exemplar selection and the efficiency of the proposed technique.

In [37], Tomoki Hosoi proposed an inpainting technique that generates a subspace from many images of the same object class in a learning step, and then, in the inpainting step, estimates the missing pixel values of an input image belonging to that class so as to maximize the similarity between the input image and the subspace. Since it is a learning based technique, the inpainting results depend on the training image data.

In [38], Bianjing Bai proposed completion of missing parts by structure propagation, synthesizing the regions along salient structures specified by the user. After structure completion, a finer algorithm is used to fill in the remaining unknown regions.

4. Texture Inpainting Techniques

Texture is a group of inter-related pixels, and hence the pixel-by-pixel reconstruction used for structural images cannot be applied directly to inpaint textured images. Texture inpainting consists of pasting texture into the damaged region. The texture to be pasted can be obtained either by synthesizing it or by searching for a similar patch in the image (exemplar based). A texture inpainting technique fills in the damaged region with a synthesized texture patch or with a searched patch.

Hirani and Totsuka [39] combine frequency and spatial domain information in order to fill in a given region with a selected texture. Other texture synthesis techniques [40, 41] can be used to recreate a preselected texture to fill in a square region to be inpainted. Though the ideas are simple, the techniques are complex. In [42], Efros proposed a nonparametric texture synthesis model based on Markov Random Fields (MRF) to inpaint textural images. In this method, a neighborhood around a damaged pixel is first selected, and then all known regions of the image are searched to find the region most similar to the selected neighborhood. Finally, the central pixel of the neighborhood found is copied to the damaged pixel. This method is time consuming and does not produce good results around structured regions.

The exemplar based approach is an important class of texture inpainting techniques. Basically it consists of two steps: in the first step priority assignment is done, and the second step consists of the selection of the best matching patch. The exemplar based approach samples the best matching patches from the known region, whose similarity is measured by certain metrics, and pastes them into the damaged region.


Exemplar based inpainting iteratively synthesizes and reconstructs the damaged region by pasting the most similar patch from the source region. According to the filling order, the method fills in structures in the missing regions using the spatial information of neighboring regions.

Generally, an exemplar-based inpainting technique includes the following four main steps (a brute-force sketch of the patch search in Step 3 is given after the list):

Step 1: Initializing the damaged region, in which the initial missing areas are extracted and represented with appropriate data structures.
Step 2: Computing filling priorities. A predefined priority function is used to compute the filling order for all unfilled pixels p at the beginning of each filling-in iteration.
Step 3: Searching for an example and compositing, in which the most similar example is searched from the source region Φ to compose the given patch Ψ (of size N_B × N_B pixels) that is centered on the given pixel p.
Step 4: Updating image information, in which the boundary of the damaged region Ω and the information required for computing filling priorities are updated.
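As referenced above, a brute-force NumPy sketch of the SSD search in Step 3 follows. The patch size, the requirement that candidate patches lie entirely in the source region, and all names are illustrative assumptions rather than the exact procedure of any of the techniques cited below.

import numpy as np

def best_matching_patch(image, known, center, patch_size=9):
    # image: 2-D float array; known: boolean array, True in the source region
    # center: (row, col) of the target patch, assumed at least patch_size//2 pixels from the border
    h = patch_size // 2
    r, c = center
    target = image[r - h:r + h + 1, c - h:c + h + 1]
    valid = known[r - h:r + h + 1, c - h:c + h + 1]        # compare only on known target pixels

    best_ssd, best_pos = np.inf, None
    rows, cols = image.shape
    for i in range(h, rows - h):
        for j in range(h, cols - h):
            cand_valid = known[i - h:i + h + 1, j - h:j + h + 1]
            if not cand_valid.all():                        # candidate must be fully known
                continue
            cand = image[i - h:i + h + 1, j - h:j + h + 1]
            ssd = float(((cand - target)[valid] ** 2).sum())
            if ssd < best_ssd:
                best_ssd, best_pos = ssd, (i, j)
    return best_pos, best_ssd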

A number of techniques have been developed for exemplar based image inpainting. Jia [43] segmented an image into several regions based on its texture color features and then inpainted each region individually. Drori [44] proposed a fragment based image inpainting technique that iteratively approximated, searched, and added detail by compositing adaptive fragments. The computation time of this technique is intolerable.

Criminisi [45] developed an efficient and simple approach that encourages filling in from the boundary of the missing region where the strength of nearby isophotes is strong, and then uses the sum of squared differences (SSD) to select the best matching patch among the candidate source patches. In Criminisi's technique the region filling is determined by a priority based mechanism. Cheng [46] generalized the priority function of [45] to provide more robust performance. Komodakis [47] defined a global objective function to inpaint; this method is computationally expensive. Wong [48] developed a weighted similarity function to inpaint texture; the similarity function uses several source patches to reconstruct the target patch instead of using a single source patch. Fang [49] developed a rapid image inpainting technique which consists of a multiresolution training process and a patch-based image synthesis process. Xu [50] proposed two novel concepts of sparsity at the patch level for modeling the patch priority and patch representation. Exemplar based approaches achieve better inpainting than the diffusion-based ones but adopt complex strategies. These techniques mainly deal with texture synthesis and do not account for structured backgrounds.

5. Hybrid Techniques

The hybrid approaches combine both texture synthesis and PDE based inpainting to complete the damaged region by decomposing the image into structured and textured regions [51-53]. Bertalmio [51] combined the diffusion based technique [1] and texture synthesis [42]. He proposed to decompose the original image into structure and texture subimages. The structure subimage is reconstructed by a structure inpainting technique and the texture subimage is restored by texture synthesis. A similar approach is proposed in [54], in which, instead of decomposing the image, the original image is segmented into two subregions. In [55] a two-step approach is used: the first step is structure completion and the second step is texture synthesis. The structure completion stage is achieved using the segmentation technique of [56], based on the insufficient geometry, structure and texture information in the input, and partitioning boundaries are extrapolated by either 2D or 3D tensor voting to generate a complete segmentation [57]. The second step consists of synthesizing texture and color information in each segment, again using tensor voting.

In general, we note that the PDE based techniques are slow and produce artifacts for large structured regions. On the other hand, convolution based techniques are fast but work only for small inpainting regions and produce a blurry look as the size of the inpainting area increases. The texture inpainting techniques are complex and it is difficult to calculate the priority. There is no unified technique to inpaint structure as well as texture.

References

[1] M. Bertalmio, G. Sapiro, V. Caselles, and C. Ballester. Image Inpainting. Proceedings of SIGGRAPH, Computer Graphics Proceedings, 2000, pp 417-424.
[2] S. Masnou and J. M. Morel. Level Lines Based Disocclusion. 5th IEEE International Conference on Image Processing (ICIP), Chicago, IL, Oct 4-7, vol.3, 1998, pp 259-263.


[3] P. Perona and J. Malik. Scale-Space and Edge Detection Using Anisotropic Diffusion. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), vol.12, No.7, July 1990.
[4] C. Tsiotsios and M. Petrou. On the Choice of the Parameters for Anisotropic Diffusion in Image Processing. Pattern Recognition, Elsevier, vol.46, No.5, 2012, pp 1369-1381.
[5] M. Bertalmio, A. L. Bertozzi and G. Sapiro. Navier-Stokes, Fluid Dynamics, and Image and Video Inpainting. IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), vol.1, 2001, pp 355-362.
[6] T. F. Chan and J. Shen. Mathematical Models for Local Deterministic Inpainting. UCLA Computational and Applied Mathematics Report 00-11, March 2000.
[7] T. F. Chan and J. Shen. Non-Textured Inpainting by Curvature Driven Diffusion. Journal of Visual Communication and Image Processing, vol.2, 2001, pp 436-449.
[8] T. F. Chan, S. H. Kang and J. Shen. Euler's Elastica and Curvature-Driven Diffusion. SIAM Journal on Applied Mathematics, vol.63, No.2, 2002, pp 564-592.
[9] H. Grossauer. Digital Inpainting Using the Complex Ginzburg-Landau Equation. Scale Space Methods in Computer Vision, Lecture Notes 2695, 2003.
[10] H. Grossauer and O. Scherzer. Using the Complex Ginzburg-Landau Equation for Image Inpainting. Scale Space Methods in Computer Vision, Lecture Notes 2696, 2003.
[11] X. C. Tai, S. Osher and R. Holm. Image Inpainting Using a TV-Stokes Equation. In: Image Processing Based on PDEs, Springer, Heidelberg, 2006, pp 473-482.
[12] A. Telea. An Image Inpainting Technique Based on the Fast Marching Method. Journal of Graphics Tools, vol.9, No.1, ACM Press, 2004, pp 25-36.
[13] M. Bertalmio. Contrast Invariant Inpainting with a Third Order, Optimal PDE. IEEE International Conference on Image Processing (ICIP 2005), Sep 2005, pp 778-781.
[14] M. Bertalmio. Strong Continuation, Contrast Invariant Inpainting with a Third Order, Optimal PDE. IEEE Transactions on Image Processing, vol.15, No.7, July 2006, pp 1934-1938.
[15] D. Fishelov and N. Sochen. Image Inpainting via Fluid Equations. International Conference on Information Technology: Research and Education (ITRE 2006), Oct 2006, pp 23-25.
[16] Peiying Chen and Yuandi Wang. Fourth-Order Partial Differential Equations for Image Inpainting. International Conference on Audio, Language and Image Processing (ICALIP 2008), July 2008, pp 1713-1717.
[17] Zhongyu Xu. Image Inpainting Algorithm Based on Partial Differential Equation. International Colloquium on Computing, Communication, Control, and Management (ICCCM '08), 2008, pp 120-124.
[18] Julia A. Dobrosotskaya and Andrea L. Bertozzi. A Wavelet-Laplace Variational Technique for Image Deconvolution and Inpainting. IEEE Transactions on Image Processing, vol.17, No.5, May 2008, pp 657-663.
[19] Peiying Chen and Yuandi Wang. A New Fourth-Order Equation Model for Image Inpainting. Sixth International Conference on Fuzzy Systems and Knowledge Discovery (FSKD '09), vol.5, August 2009, pp 320-324.


[20] Xiaobao Lu, Weilan Wang and Duojie Zhuoma. A Fast Image Inpainting Algorithm Based on TV Model. Proceedings of the International MultiConference of Engineers and Computer Scientists (IMECS 2010), vol.II, March 2010.
[21] Zhaozhong Wang. Image Inpainting Based on Edge Enhancement Using the Eikonal Equation. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2011, pp 1261-1264.
[22] Y. Zhang, Y. F. Pu, J. R. Hu and J. L. Zhou. Fast X-Ray CT Metal Artifacts Reduction Based on Noniterative Sinogram Inpainting. Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC), vol.22, No.4, 2011, pp 200-207.
[23] Y. Zhang, Y. F. Pu, J. R. Hu and J. L. Zhou. A Class of Fractional-Order Variational Image Inpainting Models. International Journal of Applied Mathematics and Information Science, vol.2, 2012, pp 299-306.
[24] Niang O., Thioune A., El Gueirea M. C., Deléchelle E. and Lemoine J. Partial Differential Equation-Based Approach for Empirical Mode Decomposition: Application on Image Analysis. IEEE Transactions on Image Processing, vol.21, No.9, Sep 2012, pp 3991-4001.
[25] M. Oliveira, B. Bowen, R. McKenna and Yu-Sung Chang. Fast Digital Image Inpainting. Proceedings of the International Conference on Visualization, Imaging and Image Processing (VIIP 2001), 2001, pp 261-266.
[26] M. M. Hadhoud, Kamel Moustafa and Shenoda. Digital Image Inpainting Using Modified Convolution Based Method. International Journal of Signal Processing, Image Processing and Pattern Recognition, vol.1, No.1, 2005, pp 1-10.
[27] H. Noori, S. Saryazdi and H. Nezamabadi-pour. A Convolution Based Image Inpainting. 1st International Conference on Communication and Engineering, University of Sistan & Baluchestan, Dec 2010, pp 130-134.
[28] H. Noori, S. Saryazdi and H. Nezamabadi-pour. A Bilateral Image Inpainting. Transactions of Electrical Engineering, vol.35, No.E2, Shiraz University, Iran, 2011, pp 95-108.
[29] C. Tomasi and R. Manduchi. Bilateral Filtering for Gray and Color Images. IEEE International Conference on Computer Vision, 1998, pp 839-846.
[30] M. Zhang and B. K. Gunturk. Multiresolution Bilateral Filtering for Image Denoising. IEEE Transactions on Image Processing, 2008, pp 2324-2333.
[31] B. Zhang and J. P. Allebach. Adaptive Bilateral Filter for Sharpness Enhancement and Noise Removal. IEEE Transactions on Image Processing, 2008, pp 664-678.
[32] Stefan Roth and Michael J. Black. Fields of Experts: A Framework for Learning Image Priors. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol.2, June 2005, pp 860-867.
[33] S. Zhu and D. Mumford. Prior Learning and Gibbs Reaction-Diffusion. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 1997, pp 1236-1250.
[34] George Papandreou, Petros Maragos and Anil Kokaram. Image Inpainting with a Wavelet Domain Hidden Markov Tree Model. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2008), 2008, pp 773-776.


[35] Kwan-Jung Oh, Sehoon Yea and Yo-Sung Ho. Hole-Filling Method Using Depth Based Inpainting for View Synthesis in Free Viewpoint Television (FTV) and 3D Video. Mitsubishi Electric Research Laboratories, Cambridge, Massachusetts, 2009, pp 1-4.
[36] Liu He, Michael Bleyer and Margrit Gelautz. Object Removal by Depth-Guided Inpainting. AAPR Workshop, 2011, pp 1-8.
[37] Tomoki Hosoi, Koji Kobayashi, Koichi Ito and Takafumi Aoki. Fast Image Inpainting Using Similarity of Subspace Method. 18th IEEE International Conference on Image Processing, 2011, pp 3441-3444.
[38] Bianjing Bai, Zhenjiang Miao and Zhen Tang. An Improved Structure Propagation Based Image Inpainting. Proceedings of SPIE 8009, 80091D, 2011.
[39] A. Hirani and T. Totsuka. Combining Frequency and Spatial Domain Information for Fast Interactive Image Noise Removal. Computer Graphics, SIGGRAPH 96, 1996, pp 269-276.
[40] D. Heeger and J. Bergen. Pyramid Based Texture Analysis/Synthesis. Computer Graphics, SIGGRAPH 95, 1995, pp 229-238.
[41] E. Simoncelli and J. Portilla. Texture Characterization via Joint Statistics of Wavelet Coefficient Magnitudes. 5th IEEE International Conference on Image Processing, vol.1, 1998, pp 62-66.
[42] A. Efros and T. Leung. Texture Synthesis by Non-parametric Sampling. Proceedings of the IEEE International Conference on Computer Vision, Corfu, Greece, September 1999, pp 1033-1038.
[43] J. Jia and C. K. Tang. Image Repairing: Robust Image Synthesis by Adaptive Tensor Voting. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2003, pp 643-650.
[44] I. Drori, D. Cohen-Or and H. Yeshurun. Fragment Based Image Completion. ACM Transactions on Graphics, vol.22, 2003, pp 303-312.
[45] A. Criminisi, P. Perez and K. Toyama. Object Removal by Exemplar-Based Inpainting. IEEE Transactions on Image Processing, vol.13, No.9, September 2004, pp 1200-1212.
[46] W. H. Cheng, C. W. Hsieh, S. K. Lin, C. W. Wang and J. L. Wu. Robust Algorithm for Exemplar Based Image Inpainting. IEEE Transactions on Image Processing, 2005.
[47] N. Komodakis and G. Tziritas. Image Completion Using Efficient Belief Propagation via Priority Scheduling and Dynamic Pruning. IEEE Transactions on Image Processing, vol.16, 2007, pp 2649-2661.
[48] A. Wong and J. Orchard. A Nonlocal Means Approach to Exemplar Based Inpainting. Proceedings of the 15th IEEE International Conference on Image Processing, 2008, pp 2600-2603.
[49] C. Fang and J. J. Lien. Rapid Image Completion System Using Multiresolution Patch Based Directional and Non-directional Approaches. IEEE Transactions on Image Processing, vol.18, 2009, pp 2769-2779.
[50] Z. Xu and S. Jian. Image Inpainting by Patch Propagation Using Patch Sparsity. IEEE Transactions on Image Processing, vol.19, 2010, pp 1153-1165.
[51] M. Bertalmio, L. Vese, G. Sapiro and S. Osher. Simultaneous Structure and Texture Image Inpainting. IEEE Transactions on Image Processing, vol.12, 2003, pp 882-889.


[52] J. L. Starck, M. Elad and D. L. Donoho. Image Decomposition via the Combination of Sparse Representation and a Variational Approach. IEEE Transactions on Image Processing, vol.14, No.10, 2005, pp 1570-1582.
[53] M. Elad, J. L. Starck, D. Donoho and P. Querre. Simultaneous Cartoon and Texture Image Inpainting Using Morphological Component Analysis (MCA). Applied and Computational Harmonic Analysis, vol.19, No.3, 2005, pp 340-358.
[54] H. Grossauer. A Combined PDE and Texture Synthesis Approach to Inpainting. Computer Vision - ECCV 2004, Lecture Notes in Computer Science, vol.3022, 2004, pp 214-224.
[55] Jiaya Jia and Chi-Keung Tang. Inference of Segmented Color and Texture Description by Tensor Voting. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), June 2004, pp 771-786.
[56] Yining Deng and B. S. Manjunath. Unsupervised Segmentation of Color-Texture Regions in Images and Video. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), vol.23, No.8, 2001, pp 800-810.
[57] G. Medioni, Mi-Suen Lee and Chi-Keung Tang. A Computational Framework for Segmentation and Grouping. Elsevier, 2000.