A BLOCK NONLOCAL TV METHOD FOR IMAGE RESTORATION∗

JUN LIU† AND XIAOJUN ZHENG†‡
Abstract. In this paper, we propose a block nonlocal total variation (TV) regularization method for image restoration. We extend the existing nonlocal TV method in two aspects: firstly, some block nonlocal operators are introduced to extend the point based nonlocal diffusion to a block based nonlocal diffusion process; secondly, the weighting function in the nonlocal method can be adaptively determined by the cost functional itself. The proposed method is derived from a block based maximum a posteriori (MAP) estimation. Under the assumption of self similarity of small patches, we formulate a regularization term as the log-likelihood functional of a mixture model. To optimize this regularization term efficiently, we employ the idea of the expectation maximization (EM) algorithm and give a variational framework leading to a block based nonlocal TV regularization. The weighting function appearing in our model can be regarded as a probability of similarity between image patches, and it can be updated adaptively according to the newest estimate. Besides, we mathematically prove the existence of a minimizer for one of the proposed models. Compared with the nonlocal TV method, numerical results show that our method can greatly improve the quality of the restored images, especially under heavy noise.
Key words. Image Blocks, Nonlocal Total Variation, Image Restoration, Adaptive Weighting Functions, Operator Splitting

AMS subject classifications. 94A08, 68U10
1. Introduction. Image denoising is a fundamental and active research field in image processing. The goal of image denoising is to recover a true image from some noisy observed data. The classical noise model is additive Gaussian noise, that is,

$$v = u + n,$$

where v is the observed image, u is the true image and n stands for the noise.

A vast variety of methods for image denoising have become available after several decades of development [4]. Gaussian (Gabor) filters [16] and anisotropic diffusion [21] are among the early denoising approaches. Statistical methods (e.g., [2]), which treat the noise as a random variable, form a popular class of denoising techniques; they are mainly based on the maximum likelihood estimator (MLE) and the Bayesian maximum a posteriori (MAP) estimator. Variational approaches (e.g., [22, 23]) are another popular and powerful technique for image denoising. The variational method obtains a clean image by minimizing a cost functional which typically consists of a fidelity term and a regularization term. Of course, these classes of methods are not separated from each other; many methods can be viewed within the variational framework. The variational method has been especially highly valued since the total variation (TV) based ROF model was proposed in [23]. The ROF model is a classical and efficient model to remove Gaussian noise, and many variants [20, 17] based on TV have been designed for different denoising tasks due to its good edge-preserving property.
∗This work was partially supported by the National Natural Science Foundation of China (No. 11201032) and the Fundamental Research Funds for the Central Universities. Jun Liu was also supported by the China Scholarship Council for a one year visit at UCLA. This work was finished during his visit at UCLA. The authors would like to thank Professor Stanley Osher for his valuable comments and suggestions.
†School of Mathematical Sciences, Laboratory of Mathematics and Complex Systems, Beijing Normal University, Beijing 100875, P.R. China ([email protected]).
‡College of Elementary Education, Hainan Normal University, Haikou 571158, P.R. China ([email protected], [email protected]).
However, the results obtained with TV tend to be piecewise constant, and image details such as textures are removed together with the noise. In order to better preserve image textures and repetitive structures, nonlocal denoising methods have received considerable attention in recent years. The concept of nonlocal averaging was introduced by Yaroslavsky via a nonlocal neighborhood filter [27]. The nonlocal neighborhood filter calculates the pixel value at a point x according to similar pixels at points y which can be located far away from x. A well-known nonlocal method based on patch distances extending this idea is the nonlocal means method [3] proposed by Buades et al. The novelty of nonlocal means is to introduce patch distances, i.e., the value at a point y is used to denoise that at x if the local value pattern near y is similar to the local value pattern near x [24]. A variational interpretation of nonlocal means was given in [15, 10]. Motivated by nonlocal means and graph theory, the nonlocal TV variational framework based on nonlocal operators was proposed in [11]. The weighting functions appearing in these nonlocal methods play a very important role in the denoising task. Although the above nonlocal methods show superior performance on textured images or images with repeated structures, their weighting functions are given and fixed. This may yield a bad estimate of the weights and lead to an undesirable restoration under heavy noise. Furthermore, the estimation in these methods is carried out point by point, which may not be robust under heavy noise. In [1], the authors proposed a block based cost functional by adding an entropy regularization and obtained an adaptive weighting function for image inpainting; however, they did not give a mathematical interpretation for such an entropy regularization term. Many block based methods [8, 7, 13, 14] have shown that block based estimation can improve the precision and be more robust to noise. Based on the overlapping-patches technique and sparsity, block based nonlocal image denoising methods have been proposed in recent years, e.g., patch-based near-optimal image denoising (PLOW) [6], locally learned dictionaries (K-LLD) [5], clustering-based sparse representation (CSR) [9], a weighted dictionary learning model [18], the expected patch log likelihood (EPLL) [28], and so on.
In this work, we formulate a block based nonlocal maximum a posteriori (MAP) estimation framework, in which we construct a nonparametric mixture model to describe the prior probability density function of small image patches. Instead of directly optimizing the complicated regularization term derived from the mixture model, we propose a variant which comes from the well-known expectation maximization (EM) process. It leads to a block diffusion based BNLH1 model with an adaptive weighting function. The BNLH1 model can be regarded as an extension of the nonlocal means method. It also gives a statistical interpretation for the entropy regularization that appeared in [1]. We also prove the existence of a minimizer for the BNLH1 model with the minimizing sequence method. It is well known that the L1 norm performs better on discontinuity preservation than L2, so we define a block nonlocal TV operator and propose a block nonlocal TV (BNLTV) model. The BNLTV model can be efficiently solved by many recently popular splitting algorithms such as the augmented Lagrangian [26], split Bregman [12] and ADMM [25] methods. Compared with the existing nonlocal TV, the experimental results in this paper show that our method can greatly improve the quality of the reconstruction.
The rest of the paper is organized as follows: in section 2 we review some basic definitions and properties of the existing point based nonlocal TV operator. A block based maximum a posteriori (MAP) estimation model and the two block nonlocal variational models BNLH1 and BNLTV are proposed in section 3; section 4 presents the algorithms and some implementation details; experimental results are shown in section 5; we summarize our method and conclude the paper in section 6.
2. Nonlocal TV. In this section we review some basic definitions of the point based nonlocal derivative, nonlocal gradient, nonlocal divergence and nonlocal Laplace operator, following [11].

Let Ω ⊂ Rⁿ, u : Ω → R, and let w : Ω × Ω → R be a symmetric and nonnegative weighting function.
The nonlocal derivative of u at x with respect to y can be defined as
$$\partial_y u(x) := (u(y) - u(x))\sqrt{w(x,y)}, \quad x, y \in \Omega;$$
then the nonlocal gradient operator $\nabla_w : \mathcal{F}_1(\Omega;\mathbb{R}) \to \mathcal{F}_2(\Omega\times\Omega;\mathbb{R})$, where $\mathcal{F}_1(\Omega;\mathbb{R}) \doteq \{u : \Omega \to \mathbb{R}\}$ and $\mathcal{F}_2(\Omega\times\Omega;\mathbb{R}) \doteq \{v : \Omega\times\Omega \to \mathbb{R}\}$, is defined as
$$(\nabla_w u)(x,y) = (u(y) - u(x))\sqrt{w(x,y)},$$
and the nonlocal divergence operator $\mathrm{div}_w : \mathcal{F}_2(\Omega\times\Omega;\mathbb{R}) \to \mathcal{F}_1(\Omega;\mathbb{R})$ is given by the adjoint of the nonlocal gradient:
$$(\mathrm{div}_w v)(x) = \int_\Omega \left(v(x,y) - v(y,x)\right)\sqrt{w(x,y)}\,dy.$$
Consequently, the nonlocal Laplace operator $\Delta_w : \mathcal{F}_1(\Omega;\mathbb{R}) \to \mathcal{F}_1(\Omega;\mathbb{R})$ is
$$(\Delta_w u)(x) = \frac{1}{2}\,\mathrm{div}_w\!\left((\nabla_w u)(x,y)\right) = \int_\Omega (u(y) - u(x))\,w(x,y)\,dy.$$
Noting that w(x, y) = w(y, x), one can easily verify the adjoint equality
$$\langle \nabla_w u, v\rangle = \langle u, -\mathrm{div}_w v\rangle,$$
and obtain the following results:
$$\int_\Omega (\mathrm{div}_w v)(x)\,dx = 0,$$
$$\langle \Delta_w u, v\rangle = \langle u, \Delta_w v\rangle,$$
$$\langle \Delta_w u, u\rangle = -\langle \nabla_w u, \nabla_w u\rangle \le 0.$$
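In the discrete setting, these operators have a simple matrix form once the image is flattened into a vector and the weights are stored in a symmetric matrix W. The following Python sketch gives one possible discretization (the flattened-vector layout and the function names are our own illustrative choices) and checks the adjoint relation numerically:

```python
import numpy as np

def nl_gradient(u, W):
    """Discrete nonlocal gradient: (grad_w u)[x, y] = (u[y] - u[x]) * sqrt(W[x, y])."""
    s = np.sqrt(W)
    return (u[None, :] - u[:, None]) * s

def nl_divergence(v, W):
    """Discrete nonlocal divergence: (div_w v)[x] = sum_y (v[x, y] - v[y, x]) * sqrt(W[x, y])."""
    s = np.sqrt(W)
    return np.sum((v - v.T) * s, axis=1)

def nl_laplacian(u, W):
    """Discrete nonlocal Laplacian: (lap_w u)[x] = sum_y (u[y] - u[x]) * W[x, y],
    which equals 0.5 * div_w(grad_w u) for symmetric W."""
    return W @ u - W.sum(axis=1) * u

# Numerical check of the adjoint relation <grad_w u, v> = <u, -div_w v>
rng = np.random.default_rng(0)
n = 6
W = rng.random((n, n)); W = 0.5 * (W + W.T)   # symmetric nonnegative weights
u = rng.random(n); v = rng.random((n, n))
lhs = np.sum(nl_gradient(u, W) * v)
rhs = np.sum(u * (-nl_divergence(v, W)))
assert np.isclose(lhs, rhs)
```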
In the definitions of these nonlocal operators, the weighting function is always assumed to be known or is selected empirically. However, it is easy to see that the weighting function plays a key role in image denoising: if the empirically chosen weighting function is poor, the restoration will be poor as well. Besides, the diffusion of Δw is point based, so the pixel reconstruction is carried out point by point, which may not be robust enough if the noise level is high. In order to address these problems, we will extend the above definitions to some block based ones.
3. The Proposed Methods.

3.1. Statistical Methods. In this section, we shall propose a block based maximum a posteriori (MAP) estimation for the nonlocal method, in which the weighting function can be determined by the cost functional itself and can also be updated during the iterations.
3.1.1. Block Based Maximum a Posteriori (MAP) Estimation. Let Ω be a 2 dimensional discrete set, v : Ω → R the observed noisy image, and u : Ω → R the latent clean image. Denote by B a small symmetric neighborhood centered at 0, i.e. B = {x : ||x|| < r}. Moreover, we use uB(x) to represent a small image block of u, namely, uB(x) = {u(x + z) : z ∈ B}. All the observed image blocks vB(x), x ∈ Ω, can be regarded as realizations of a random vector ξ. The MAP estimation problem for u can be written as
$$u^* = \arg\max_u p(u_B|v_B) = \arg\max_u \frac{p(v_B|u_B)\,p(u_B)}{p(v_B)}.$$
Since p(vB) is known for any given noisy image v, the MAP problem is equivalent to
$$(1)\qquad u^* = \arg\max_u \left\{\ln p(v_B|u_B) + \ln p(u_B)\right\}.$$
From the noise model, one can easily find
$$p(v_B|u_B) \propto \exp\left(-\frac{(u-v)^2}{2\sigma^2}\right),$$
where σ is the standard deviation of the Gaussian noise.
p(uB) is a prior term for u. In natural images, there are usually many similar and repeated structures or textures at the level of small blocks. We will construct a nonparametric mixture model to describe such a prior. Once u is estimated, the histogram of all the blocks uB(y), y ∈ Ω, can be calculated with the formulation
$$\sum_{y\in\Omega}\delta\!\left(\mathbf{z} - u_B(y)\right).$$
Here δ is a vector-valued indicator function, so that the sum counts the number of blocks whose structure is the same as z, i.e.
$$\delta(x) = \begin{cases} 1, & x = 0,\\ 0, & \text{else.}\end{cases}$$
For computational convenience, δ can usually be approximated by a Gaussian function
$$\delta_h(x) = \frac{1}{(\pi h)^{\frac{|B|}{2}}}\exp\left(-\frac{\|x\|^2}{h}\right),$$
where h > 0 is a parameter which controls the precision of this approximation.
Assume all the clean image blocks uB(x), x ∈ Ω, are realizations of a random vector with the probability density function
$$p(\mathbf{z}) = \frac{1}{|\Omega|}\sum_{y\in\Omega}\delta_h\!\left(\mathbf{z} - u_B(y)\right);$$
then, according to the independent and identically distributed assumption, one can obtain
$$p(u_B) = \frac{1}{|\Omega|^{|\Omega|}}\prod_{x\in\Omega}\sum_{y\in\Omega}\delta_h\!\left(u_B(x) - u_B(y)\right).$$
Therefore, ignoring constant terms, the problem (1) becomes
$$(2)\qquad u^* = \arg\min_u E(u) = \frac{\lambda}{2}\sum_{x\in\Omega}(u(x) - v(x))^2 - \sum_{x\in\Omega}\ln\sum_{y\in\Omega}\delta_h\!\left(u_B(x) - u_B(y)\right).$$
The first term in the above cost functional is the fidelity term, which measures the difference between the noisy image v and the clean image u in terms of the noise model. The second term can be regarded as a regularization term, which describes the similarity among the small image blocks uB(x). λ > 0 is a regularization parameter which balances these two terms.
Optimizing E(u) directly is not easy, since the regularization term, which is derived from a mixture model, leads to a complicated nonlinear equation when we apply the first order optimality conditions. However, this problem can be greatly simplified by the well-known expectation maximization (EM) algorithm. For convenience, we denote
$$E_1(u) = -\sum_{x\in\Omega}\ln\sum_{y\in\Omega}\delta_h\!\left(u_B(x) - u_B(y)\right).$$
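Substituting the Gaussian approximation δh into E1 makes the role of the patch distances explicit: up to an additive constant independent of u (the normalization factor $(\pi h)^{-|B|/2}$ only contributes this constant),
$$E_1(u) = -\sum_{x\in\Omega}\ln\sum_{y\in\Omega}\exp\left(-\frac{\|u_B(x)-u_B(y)\|^2}{h}\right) + \mathrm{const}.$$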
The key idea of the EM algorithm for such a problem is as follows: in order to minimize E1(u), we construct another function H(u; un) which can be easily optimized and has the following property:

Proposition 3.1. If H(un+1; un) ≤ H(un; un), then E1(un+1) ≤ E1(un), where n is an iteration number.

According to this proposition, the value of E1 decreases during the iterations of minimizing H. Therefore, the original optimization problem for E1 can be replaced by finding the minimizer of H, which is easier to minimize.
In the next sections, we will give the formulation of H and give a variational interpretation for such an EM process.
3.1.2. Expectation Maximization (EM) Process and Weighting Function. Note that minimizing E1 is a standard parameter estimation problem for a mixture model. We assume that all the small image blocks uB(x), x ∈ Ω, can be divided into |Ω| classes (some classes may be empty). Let us introduce a latent integer random variable η, whose values lie in {1, 2, ..., |Ω|}, such that η(x) indicates that the image block uB(x) centered at x belongs to the η(x)-th class. We use a function ω(x, y) to represent the probability that the block uB(x) belongs to the y-th class. According to the standard EM method, the expectation step can be written as

E-step:
$$(3)\qquad \omega^n(x,y) = P\{\eta(x) = y \,|\, u_B^n(x)\} = \frac{\delta_h\!\left(u_B^n(x) - u_B^n(y)\right)}{\sum_{y'\in\Omega}\delta_h\!\left(u_B^n(x) - u_B^n(y')\right)}.$$

Then the related maximization step is

M-step:
$$(4)\qquad u^{n+1} = \arg\max_u \sum_{x\in\Omega}\sum_{y\in\Omega}\ln\!\left(\delta_h(u_B(x) - u_B(y))\right)\omega^n(x,y) = \arg\min_u \left\{-\sum_{x\in\Omega}\sum_{y\in\Omega}\ln\!\left(\delta_h(u_B(x) - u_B(y))\right)\omega^n(x,y)\right\} =: \arg\min_u H(u;u^n).$$
Note that H is easy to optimize since it is a quadratic function of u.
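Indeed, with the Gaussian form of δh and using the fact that ∑y∈Ω ωⁿ(x, y) = 1 for every x, one finds, up to an additive constant independent of u,
$$H(u;u^n) = \frac{1}{h}\sum_{x\in\Omega}\sum_{y\in\Omega}\|u_B(x)-u_B(y)\|^2\,\omega^n(x,y) + \mathrm{const},$$
a weighted sum of squared patch differences, which is quadratic in the pixel values of u.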
In our previous work [19], we reinterpreted the above EM process in a variational framework: the E-step and M-step can be regarded as the two steps of an alternating direction minimization method. For completeness, we list the main results of [19] here.

Lemma 3.1 (Commutativity of log-sum operations, [19]). Given a function δh(x, y) > 0, we have
$$-\sum_{x\in\Omega}\ln\sum_{y\in\Omega}\delta_h(x,y) = \min_{\omega\in C}\left\{-\sum_{x\in\Omega}\sum_{y\in\Omega}\ln\delta_h(x,y)\,\omega(x,y) + \sum_{x\in\Omega}\sum_{y\in\Omega}\omega(x,y)\ln\omega(x,y)\right\},$$
where $C = \left\{\omega(x,y) : 0\le\omega(x,y)\le 1,\ \sum_{y\in\Omega}\omega(x,y) = 1,\ \forall x\in\Omega\right\}$.
Hence, we define
$$H(u,\omega) = -\sum_{x\in\Omega}\sum_{y\in\Omega}\ln\delta_h\!\left(u_B(x)-u_B(y)\right)\omega(x,y) + \sum_{x\in\Omega}\sum_{y\in\Omega}\omega(x,y)\ln\omega(x,y),$$
and obtain
$$\min_u E_1(u) = \min_u\left\{\min_{\omega\in C} H(u,\omega)\right\}.$$
One can choose the alternating direction minimization method to solve such a problem and get the iteration scheme
$$(5)\qquad \begin{cases}\omega^{n+1} = \arg\min_{\omega\in C} H(u^n,\omega),\\[1mm] u^{n+1} = \arg\min_u H(u,\omega^{n+1}).\end{cases}$$
It is easy to check that the two problems in (5) are equal to the E-step (3) and M-step (4), respectively. Moreover, for the scheme (5), we have

Lemma 3.2 (Energy Descent, [19]). The sequence {un} produced by the iteration scheme (5) satisfies
$$E_1(u^{n+1}) \le E_1(u^n).$$
Based on the above analysis, we employ the functional H(u, ω) as the regularization term and propose the following image restoration functional:
$$(u^*,\omega^*) = \arg\min_{u,\,\omega\in C}\left\{\frac{\lambda}{2}\sum_{x\in\Omega}(u(x)-v(x))^2 + h\sum_{x\in\Omega}\sum_{y\in\Omega}\omega(x,y)\ln\omega(x,y) + \sum_{x\in\Omega}\sum_{y\in\Omega}\|u_B(x)-u_B(y)\|^2\,\omega(x,y)\right\}.$$
By setting λ = 0, the above model can be regarded as an extension of the nonlocal means method. Unlike nonlocal means, in which the weighting functions are chosen manually according to experience, here the weighting functions are determined by the restoration cost functional itself. Besides, the estimation in nonlocal means is point by point, whereas here it is block based. Generally speaking, block based estimation can improve the precision and is more robust to noise. It is well known that nonlocal means is related to the nonlocal H1 regularization problem; hence we can extend the block based method to a nonlocal TV version. In the next section, from the viewpoint of the variational method, we will propose a block nonlocal TV model with adaptive weighting functions.
3.2. Variational Framework.

3.2.1. Some Definitions and Notations. Let Ω ⊂ R² be a bounded and open set, and let Br ⊂ R² be a small symmetric region centered at 0 with radius r, i.e. Br = {z : ||z|| < r}. ω : Ω × Ω → R⁺ is a nonnegative smooth weighting function, u : Ω → R is an image function, and g : Br → R⁺, ρε : Bε → R⁺ are two other smooth weighting functions with g(z) = g(−z), ∫_{Br} g(z)dz = 1 and ∫_{Bε} ρε(z)dz = 1. Denote Ω̃ = Ω + Br + Bε = {x | x = x₁ + x₂ + x₃, x₁ ∈ Ω, x₂ ∈ Br, x₃ ∈ Bε}. We use the symbol "˜" to represent an expansion of a function; for example, ũ is an expansion of u with
$$\tilde{u}(x) = \begin{cases} u(x), & x\in\Omega,\\ u_0(x), & x\in\tilde{\Omega}\setminus\Omega,\end{cases}$$
in which u₀(x) is a given function associated with the so-called boundary condition. To simplify the following formulations, here we set the boundary condition u₀(x) = 0. Other boundary conditions, such as the Neumann boundary condition, can be discussed in a similar way. In the following discussion, unless stated otherwise, the boundary conditions are all set to 0 whenever we need the expansions of functions. The symbol "∗" is used to represent the convolution operator, namely, $(\rho_\varepsilon * u)(x) = \int_{B_\varepsilon}\rho_\varepsilon(y)u(x-y)\,dy$.
Let F1(Ω;R), F2(Ω × Ω × Br;R) be two function spaces. We define a block nonlocal gradient operator
$$\nabla_\omega^g : \mathcal{F}_1(\Omega;\mathbb{R}) \to \mathcal{F}_2(\Omega\times\Omega\times B_r;\mathbb{R})$$
with the following representation:
$$\nabla_\omega^g u(x,y,z) = \sqrt{g(z)}\left[(\rho_\varepsilon * u)(y+z) - (\rho_\varepsilon * u)(x+z)\right]\sqrt{\omega(x,y)},\quad \forall x\in\Omega,\ y\in\Omega,\ z\in B_r.$$
We employ the two smooth functions g and ρε in the definition of the block nonlocal gradient because ρε makes the regularization term constructed later possess better mathematical properties, which theoretically allow us to prove the existence of solutions of the proposed model (see section 3.2.4). The related block nonlocal divergence operator $\mathrm{div}_\omega^g : \mathcal{F}_2(\Omega\times\Omega\times B_r;\mathbb{R}) \to \mathcal{F}_1(\Omega;\mathbb{R})$ is given by
$$\mathrm{div}_\omega^g p(x) = \int_\Omega\int_{B_r}\int_{B_\varepsilon}\sqrt{g(z)}\,\rho_\varepsilon(s)\left[p(x-z+s,y,z)\sqrt{\omega(x-z+s,y)} - p(y,x-z+s,z)\sqrt{\omega(y,x-z+s)}\right]ds\,dz\,dy.$$
It is easy to check that
$$\langle \nabla_\omega^g u, p\rangle = -\langle \mathrm{div}_\omega^g p, u\rangle$$
Fig. 1: Comparison of the point based nonlocal operator (a) Nonlocal and the proposed block based nonlocal operator (b) Block Nonlocal.
with zero boundary conditions for p, ω and u. Here ⟨·,·⟩ is the standard inner product in L².

The related block nonlocal Laplacian operator $\Delta_\omega^g : \mathcal{F}_1(\Omega;\mathbb{R}) \to \mathcal{F}_1(\Omega;\mathbb{R})$ is given by
$$\Delta_\omega^g u(x) = \int_\Omega\int_{B_r}\int_{B_\varepsilon} g(z)\rho_\varepsilon(s)\left[(\rho_\varepsilon * u)(y+z) - (\rho_\varepsilon * u)(x+s)\right]\left[\omega(y,x-z+s) + \omega(x-z+s,y)\right]ds\,dz\,dy,$$
which satisfies $\Delta_\omega^g = \mathrm{div}_\omega^g\nabla_\omega^g$.

The proposed block based nonlocal operators can be considered as vector-valued extensions of the existing nonlocal operators; see Figure 1 for a comparison.
3.2.2. Block Nonlocal H1 with an Adaptive Weighting Function. The block nonlocal H1 norm of u can be given as
$$(H^1)_\omega^g(u) = \int_\Omega\int_\Omega\int_{B_r} g(z)\left[(\rho_\varepsilon * u)(y+z) - (\rho_\varepsilon * u)(x+z)\right]^2\omega(x,y)\,dz\,dy\,dx.$$
Thus, the previous model can be modified into the BNLH1 model:
$$(u^*,\omega^*) = \arg\min_{u,\,\omega\in C}\left\{\frac{\lambda}{2}\int_\Omega(u(x)-v(x))^2dx + h\int_\Omega\int_\Omega\omega(x,y)\ln\omega(x,y)\,dy\,dx + (H^1)_\omega^g(u)\right\},$$
where $C = \left\{\omega(x,y) : 0\le\omega(x,y)\le 1,\ \int_\Omega\omega(x,y)\,dy = 1,\ \forall x\in\Omega\right\}$.
3.2.3. Block Nonlocal TV. By defining
$$TV_\omega^g(u) = \int_\Omega\sqrt{\int_\Omega\int_{B_r} g(z)\left[(\rho_\varepsilon * u)(y+z) - (\rho_\varepsilon * u)(x+z)\right]^2\omega(x,y)\,dz\,dy}\;dx,$$
we get the BNLTV model:
$$(u^*,\omega^*) = \arg\min_{u,\,\omega\in C}\left\{\frac{\lambda}{2}\int_\Omega(u(x)-v(x))^2dx + h\int_\Omega\int_\Omega\omega(x,y)\ln\omega(x,y)\,dy\,dx + TV_\omega^g(u)\right\},$$
where $C = \left\{\omega(x,y) : 0\le\omega(x,y)\le 1,\ \int_\Omega\omega(x,y)\,dy = 1,\ \forall x\in\Omega\right\}$.
In the proposed method, there are convolutions ρε ∗ u in the regularization term. As mentioned earlier, the reason for such a design is that these convolution operators help us to theoretically prove the existence of minimizers. From the viewpoint of statistics, it requires that there are self-similar blocks in a smoothed version of the image u, which is a reasonable requirement. In practical computation, we can let ε be small enough, so that ρε approaches a delta function.
3.2.4. Existence of a Minimizer for the Proposed Models. In this section, we will prove the existence of a minimizer for the BNLH1 model. As for the BNLTV model, we have not yet obtained a theoretical result and leave it as future work.
Let us consider the energy functional of the BNLH1 model
$$(6)\qquad J(u,\omega) = \frac{\lambda}{2}\int_\Omega(u(x)-v(x))^2dx + h\int_\Omega\int_\Omega\omega(x,y)\ln\omega(x,y)\,dy\,dx + (H^1)_\omega^g(u).$$
We will show the existence of a minimizer of (6) in the space
$$X := \left\{(u,\omega) : u\in L^2(\Omega),\ 0\le\omega\le 1\ \text{a.e., and}\ \int_\Omega\omega(x,y)\,dy = 1\ \text{for a.e.}\ x\in\Omega\right\}.$$

Proposition. There exists a solution of the BNLH1 model
$$(7)\qquad \min_{(u,\omega)\in X} J(u,\omega).$$
Proof. Let
$$J_1(u) = \frac{\lambda}{2}\int_\Omega(u(x)-v(x))^2dx,\qquad J_2(\omega) = h\int_\Omega\int_\Omega\omega(x,y)\ln\omega(x,y)\,dy\,dx,$$
$$J_3(u,\omega) = \int_\Omega\int_\Omega\int_{B_r} g(z)\left(\rho_\varepsilon * u(y+z) - \rho_\varepsilon * u(x+z)\right)^2\omega(x,y)\,dz\,dy\,dx.$$
It is obvious that J1(u) ≥ 0 and J3(u, ω) ≥ 0. Furthermore, since x ln x ≥ −1/e whenever x ≥ 0 and Ω is a bounded domain, J2(ω) ≥ −(h/e)|Ω|². That is, J(u, ω) has a lower bound and inf_{(u,ω)∈X} J(u, ω) exists. Let (un, ωn) be a minimizing sequence of (7), i.e. a sequence such that
$$J(u^n,\omega^n)\to\inf_{(u,\omega)\in X} J(u,\omega).$$
It is clear that ‖ωn‖_{L∞} ≤ 1 and ωn ∈ L∞(Ω × Ω). Since L∞ is the dual space of L¹, which is a separable normed linear space, by the Banach-Alaoglu theorem there exist a weakly-* convergent subsequence (which we relabel by n) and a weak-* limit ω ∈ L∞(Ω × Ω) such that
$$\omega^n \stackrel{*}{\rightharpoonup} \omega \quad\text{in}\ L^\infty(\Omega\times\Omega),$$
that is, for any φ ∈ L¹(Ω × Ω),
$$\int_{\Omega\times\Omega}\omega^n(x,y)\varphi(x,y)\,dx\,dy \to \int_{\Omega\times\Omega}\omega(x,y)\varphi(x,y)\,dx\,dy,\quad n\to+\infty.$$
Since ω ln ω is continuous and convex with respect to ω, J2 is weakly-* lower semicontinuous, i.e.
$$\liminf_{n\to\infty}\int_\Omega\int_\Omega\omega^n(x,y)\ln\omega^n(x,y)\,dy\,dx \ge \int_\Omega\int_\Omega\omega(x,y)\ln\omega(x,y)\,dy\,dx.$$
We also need to verify that the weak-* limit ω satisfies the constraints in X, that is, 0 ≤ ω ≤ 1 a.e. and ∫_Ω ω(x, y)dy = 1 for a.e. x ∈ Ω.
Owing to ωn ⇀* ω in L∞, for any φ ∈ L¹,
$$(8)\qquad \int_\Omega\int_\Omega\omega^n(x,y)\varphi(x,y)\,dy\,dx \to \int_\Omega\int_\Omega\omega(x,y)\varphi(x,y)\,dy\,dx.$$
Set A = {(x, y) ∈ Ω × Ω : ω(x, y) > 1}, D = {(x, y) ∈ Ω × Ω : ω(x, y) < 0}, φ₁(x, y) = χ_A(x, y) and φ₂(x, y) = χ_D(x, y); then φ₁, φ₂ ∈ L¹(Ω × Ω). We will show that |A| = 0 and |D| = 0.
Substituting φ₁ for φ in (8), we obtain
$$(9)\qquad \int_A\omega^n(x,y)\,dy\,dx \to \int_A\omega(x,y)\,dy\,dx.$$
If |A| ≠ 0, the right hand side of (9) is larger than |A|; by the sign-preserving property of limits, there is an N > 0 such that for any n > N, ∫_A ωn(x, y)dydx > |A|. However, since ωn ≤ 1, the left hand side of (9) is smaller than or equal to |A| for every n, which is a contradiction. Thus |A| = 0. In the same way, using φ₂, we obtain |D| = 0. These results show that 0 ≤ ω ≤ 1 a.e.
Let f(x) = ∫_Ω ω(x, y)dy and φ(x, y) = sign(f(x) − 1); then φ ∈ L¹, and
$$\int_\Omega\left(\int_\Omega\omega^n(x,y)\,dy\right)\varphi(x)\,dx \to \int_\Omega f(x)\varphi(x)\,dx.$$
Since ∫_Ω ωn(x, y)dy = 1 for every n, the left hand side equals ∫_Ω φ(x)dx, so that
$$\int_\Omega\left(f(x)-1\right)\mathrm{sign}\!\left(f(x)-1\right)dx = \int_\Omega|f(x)-1|\,dx = 0;$$
hence ∫_Ω ω(x, y)dy = 1 for a.e. x ∈ Ω. Altogether, ω satisfies the constraints in X, and
$$\liminf_{n\to\infty} J_2(\omega^n) \ge J_2(\omega).$$
Since J(u, ω) is coercive in u, the sequence {un} must be bounded, i.e. there is a constant M > 0 such that
$$\|u^n\|_{L^2(\Omega)} \le M.$$
Boundedness of a sequence in a reflexive space implies the existence of a weakly convergent subsequence, which we still denote by {un}; thus there is a weak limit u ∈ L²(Ω) such that
$$u^n \rightharpoonup u.$$
Since J1 : u ↦ (λ/2)∫_Ω(u(x) − v(x))²dx is continuous and convex, and un ⇀ u in L²(Ω), J1 is weakly lower semicontinuous and
$$\liminf_{n\to\infty}\frac{\lambda}{2}\int_\Omega(u^n(x)-v(x))^2dx \ge \frac{\lambda}{2}\int_\Omega(u(x)-v(x))^2dx,$$
i.e. $\liminf_{n\to\infty} J_1(u^n) \ge J_1(u)$.
Now, we consider the convergence of J3(un, ωn). Here, we assume that the support of g is a proper subset of Br and choose ε equal to one half of the distance between this support and the boundary of Br.
Set
$$b_n(x,y) = \int_{B_r} g(z)\left(\rho_\varepsilon * u^n(x+z) - \rho_\varepsilon * u^n(y+z)\right)^2 dz.$$
By the smoothness of the convolution operation, the parameter-dependent integral bn belongs to C¹(Ω × Ω), and
$$\frac{\partial b_n}{\partial x}(x,y) = 2\int_{B_r} g(z)\left(\rho_\varepsilon * u^n(x+z) - \rho_\varepsilon * u^n(y+z)\right)\left(\frac{\partial\rho_\varepsilon}{\partial x} * u^n(x+z)\right)dz,$$
$$\frac{\partial b_n}{\partial y}(x,y) = -2\int_{B_r} g(z)\left(\rho_\varepsilon * u^n(x+z) - \rho_\varepsilon * u^n(y+z)\right)\left(\frac{\partial\rho_\varepsilon}{\partial y} * u^n(y+z)\right)dz.$$
As
$$\int_\Omega\int_\Omega|b_n(x,y)|\,dx\,dy \le 2\int_\Omega\int_\Omega\int_{B_r} g(z)\left[(\rho_\varepsilon * u^n(x+z))^2 + (\rho_\varepsilon * u^n(y+z))^2\right]dz\,dx\,dy \le 4|\Omega|\,\|g\|_\infty\int_\Omega\int_{B_r}(\rho_\varepsilon * u^n(x+z))^2dz\,dx \le 4|\Omega|\,|B_r|\,\|g\|_\infty M^2$$
and
$$\int_\Omega\int_\Omega|\nabla_x b_n(x,y)|\,dx\,dy \le 2\int_\Omega\int_\Omega\int_{B_r} g(z)\left|\rho_\varepsilon * u^n(x+z) - \rho_\varepsilon * u^n(y+z)\right|\left|\frac{\partial\rho_\varepsilon}{\partial x} * u^n(x+z)\right|dz\,dx\,dy \le 4|\Omega|\,|B_r|\,\|g\|_\infty\|\rho_\varepsilon\|_{1,\infty}M^2,$$
the sequence bn(x, y) is bounded in W^{1,1}(Ω × Ω). By the Rellich-Kondrachov compactness theorem, W^{1,1}(Ω × Ω) is compactly embedded in L¹(Ω × Ω), i.e. there exists a subsequence, still denoted bn(x, y), which converges to some b(x, y) in L¹(Ω × Ω).
Since J3(u, ω) is continuous and convex in u, it is weakly lower semicontinuous in u, that is,
$$\liminf_{n\to+\infty}\int_\Omega\int_\Omega b_n(x,y)\,\omega(x,y)\,dy\,dx \ge \int_\Omega\int_\Omega\int_{B_r} g(z)\left(\rho_\varepsilon * u(x+z) - \rho_\varepsilon * u(y+z)\right)^2\omega(x,y)\,dz\,dy\,dx.$$
Furthermore,
$$\begin{aligned}
\int_\Omega\int_\Omega b_n(x,y)\left(\omega^n(x,y)-\omega(x,y)\right)dy\,dx &= \int_\Omega\int_\Omega\left(b_n(x,y)\omega^n(x,y) - b(x,y)\omega(x,y)\right)dy\,dx + \int_\Omega\int_\Omega\left(b(x,y)\omega(x,y) - b_n(x,y)\omega(x,y)\right)dy\,dx\\
&= \int_\Omega\int_\Omega\left(b_n(x,y)-b(x,y)\right)\omega^n(x,y)\,dy\,dx + \int_\Omega\int_\Omega b(x,y)\left(\omega^n(x,y)-\omega(x,y)\right)dy\,dx + \int_\Omega\int_\Omega\left(b(x,y)-b_n(x,y)\right)\omega(x,y)\,dy\,dx.
\end{aligned}$$
Because ωn ⇀* ω in L∞(Ω × Ω) and bn → b in L¹(Ω × Ω), each of the three terms on the right tends to zero, so
$$\lim_{n\to+\infty}\int_\Omega\int_\Omega b_n(x,y)\left(\omega^n(x,y)-\omega(x,y)\right)dy\,dx = 0.$$
As
$$J_3(u^n,\omega^n) = \int_\Omega\int_\Omega b_n(x,y)\,\omega^n(x,y)\,dx\,dy = \int_\Omega\int_\Omega b_n(x,y)\left(\omega^n(x,y)-\omega(x,y)\right)dx\,dy + \int_\Omega\int_\Omega b_n(x,y)\,\omega(x,y)\,dx\,dy,$$
we have
$$\liminf_{n\to+\infty} J_3(u^n,\omega^n) \ge J_3(u,\omega).$$
In summary,
$$J(u,\omega) = J_1(u) + J_2(\omega) + J_3(u,\omega) \le \liminf_{n\to\infty} J(u^n,\omega^n),$$
and therefore
$$J(u,\omega) = \inf_{(u,\omega)\in X} J(u,\omega),$$
which completes the proof.
4. Algorithms. To simplify the computation, we can set ρε to be the delta function δ, so that ρε ∗ u = δ ∗ u = u. In the following discussion, we set ρε = δ.
The BNLH1 model can be directly minimized by an alternating direction minimization algorithm. We summarize the steps in Algorithm 4.1.
Algorithm 4.1. Given an initial value u⁰ = v, for n = 0, 1, 2, . . ., do
Step 1: calculate the weighting function by
$$\omega^{n+1}(x,y) = \frac{\exp\left(-\frac{1}{h}\int_{B_r} g(z)\left(u^n(y+z)-u^n(x+z)\right)^2 dz\right)}{\int_\Omega\exp\left(-\frac{1}{h}\int_{B_r} g(z)\left(u^n(y+z)-u^n(x+z)\right)^2 dz\right)dy}.$$
Step 2: restore u^{n+1} by solving the equation
$$-\Delta_{\omega^{n+1}}^g u + \lambda u = \lambda v.$$
Step 3: if the relative change $\|u^{n+1}-u^n\|_2/\|u^n\|_2$ is less than a prescribed tolerance, stop; otherwise, go to Step 1.
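To make the alternating structure concrete, the following Python sketch implements a simplified version of Algorithm 4.1, assuming ρε = δ, a symmetric weighting function restricted to a finite search window, and the approximation ω(x − z, y) ≈ ω(x, y) inside the block diffusion; the function names, default parameter values, and the single Jacobi-type sweep used for the restoration step are our own illustrative choices rather than the paper's implementation.

```python
import numpy as np

def gaussian_patch_kernel(radius, sigma=3.0):
    """Normalized Gaussian weights g(z) on a (2*radius+1)^2 patch."""
    ax = np.arange(-radius, radius + 1)
    zx, zy = np.meshgrid(ax, ax, indexing="ij")
    g = np.exp(-(zx**2 + zy**2) / (2.0 * sigma**2))
    return g / g.sum()

def block_distances(u, x, search, radius, g):
    """Weighted squared block distances  int g(z)(u(y+z)-u(x+z))^2 dz
    between the patch at x and all patches centered at y in `search`."""
    px = u[x[0]-radius:x[0]+radius+1, x[1]-radius:x[1]+radius+1]
    d = np.empty(len(search))
    for k, y in enumerate(search):
        py = u[y[0]-radius:y[0]+radius+1, y[1]-radius:y[1]+radius+1]
        d[k] = np.sum(g * (py - px) ** 2)
    return d

def bnlh1_denoise(v, lam=2.0, h=0.25**2, radius=3, win=6, n_outer=5):
    """Sketch of Algorithm 4.1 (BNLH1): alternate the weight update (E-step)
    and one Jacobi sweep of -Laplacian(u) + lam*u = lam*v (restoration).
    Pixels within `radius + win` of the border are left unchanged."""
    g = gaussian_patch_kernel(radius)
    u = v.astype(np.float64).copy()
    pad = radius + win
    H, W = v.shape
    for _ in range(n_outer):
        u_new = u.copy()
        for i in range(pad, H - pad):
            for j in range(pad, W - pad):
                # candidate block centers y in the search window around x
                search = [(i + di, j + dj)
                          for di in range(-win, win + 1)
                          for dj in range(-win, win + 1)]
                d = block_distances(u, (i, j), search, radius, g)
                w = np.exp(-d / h)          # E-step: adaptive weights
                w /= w.sum()
                # restoration: each pixel u(y+z) of a similar block contributes
                # with weight g(z)*w(x, y) (simplified from w(x - z, y))
                num = lam * v[i, j]
                den = lam
                for (yi, yj), wk in zip(search, w):
                    blk = u[yi-radius:yi+radius+1, yj-radius:yj+radius+1]
                    num += wk * np.sum(g * blk)
                    den += wk * np.sum(g)
                u_new[i, j] = num / den
        u = u_new
    return u
```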
As for BNLTV, we need to employ more efficient algorithms, such as the augmented Lagrangian method or the split Bregman method. Here, we write the iteration scheme in the augmented Lagrangian form.
Introducing an auxiliary function q = ∇gω u, the BNLTV model is equivalent to the constrained minimization problem
$$(u^*,\omega^*,q^*) = \arg\min_{u,\,\omega\in C,\,q=\nabla_\omega^g u}\left\{\frac{\lambda}{2}\int_\Omega(u(x)-v(x))^2dx + h\int_\Omega\int_\Omega\omega(x,y)\ln\omega(x,y)\,dy\,dx + \int_\Omega\|q(x,\cdot,\cdot)\|_2\,dx\right\}.$$
By applying the standard augmented Lagrangian method, one obtains the Lagrangian functional
$$\mathcal{L}(u,\omega,q,d) = \frac{\lambda}{2}\int_\Omega(u(x)-v(x))^2dx + h\int_\Omega\int_\Omega\omega(x,y)\ln\omega(x,y)\,dy\,dx + \int_\Omega\|q(x,\cdot,\cdot)\|_2\,dx + \langle d,\,q-\nabla_\omega^g u\rangle + \frac{\eta}{2}\left\|q-\nabla_\omega^g u\right\|_2^2.$$
The saddle point of L can be found by splitting into 4 subproblems, each of which is easy to solve.

Subproblem 1: expectation step (weight update)
$$\omega^{n+1} = \arg\min_{\omega\in C}\mathcal{L}(u^{n+1},\omega,q^n,d^n) = \arg\min_{\omega\in C}\left\{h\int_\Omega\int_\Omega\omega(x,y)\ln\omega(x,y)\,dy\,dx + \frac{\eta}{2}\left\|q^n-\nabla_\omega^g u^{n+1}+\frac{d^n}{\eta}\right\|_2^2\right\}.$$

Subproblem 2: restoration step
$$u^{n+1} = \arg\min_u\mathcal{L}(u,\omega^n,q^n,d^n) = \arg\min_u\left\{\frac{\lambda}{2}\int_\Omega(u(x)-v(x))^2dx + \frac{\eta}{2}\left\|q^n-\nabla_{\omega^n}^g u+\frac{d^n}{\eta}\right\|_2^2\right\}.$$

Subproblem 3: gradient update
$$q^{n+1} = \arg\min_q\mathcal{L}(u^{n+1},\omega^{n+1},q,d^n) = \arg\min_q\left\{\int_\Omega\|q(x,\cdot,\cdot)\|_2\,dx + \frac{\eta}{2}\left\|q-\nabla_{\omega^{n+1}}^g u^{n+1}+\frac{d^n}{\eta}\right\|_2^2\right\}.$$

Subproblem 4: Lagrangian multiplier update
$$d^{n+1} = \arg\max_d\mathcal{L}(u^{n+1},\omega^{n+1},q^{n+1},d) = \arg\max_d\left\{\langle d,\,q^{n+1}-\nabla_{\omega^{n+1}}^g u^{n+1}\rangle\right\}.$$
For subproblem 1, we can use the first order optimality condition to obtain an approximate solution. To simplify the representation, let us denote
$$A^n(x,y) = \exp\left\{-\frac{\eta}{2h}\int_{B_r}\left[g(z)\left(u^{n+1}(y+z)-u^{n+1}(x+z)\right)^2 + \frac{\sqrt{g(z)}}{\sqrt{\omega^n(x,y)}}\left(u^{n+1}(y+z)-u^{n+1}(x+z)\right)\left(q^n(x,y,z)+\frac{d^n(x,y,z)}{\eta}\right)\right]dz\right\}.$$
Then we obtain
$$(10)\qquad \omega^{n+1}(x,y) = \frac{A^n(x,y)}{\int_\Omega A^n(x,y)\,dy}.$$
To solve subproblem 2, we need to solve the equation
$$(11)\qquad \lambda u - \eta\Delta_\omega^g u = \lambda v - \eta\,\mathrm{div}_\omega^g\!\left(q^n+\frac{d^n}{\eta}\right).$$
This equation can be solved iteratively by many methods, such as Gauss-Seidel iteration.
The update q^{n+1} is given by a shrinkage process,
$$(12)\qquad q^{n+1}(x,y,z) = \mathcal{S}\!\left(\nabla_{\omega^{n+1}}^g u^{n+1}(x,y,z) - \frac{d^n(x,y,z)}{\eta},\ \eta\right),$$
where the shrinkage operator is
$$\mathcal{S}(f(x,y,z),\eta) = \begin{cases}\dfrac{f(x,y,z)}{\|f(x,\cdot,\cdot)\|_2}\left(\|f(x,\cdot,\cdot)\|_2-\dfrac{1}{\eta}\right), & \|f(x,\cdot,\cdot)\|_2 > \dfrac{1}{\eta},\\[2mm] 0, & \|f(x,\cdot,\cdot)\|_2 \le \dfrac{1}{\eta},\end{cases}$$
and $\|f(x,\cdot,\cdot)\|_2 = \sqrt{\int_\Omega\int_{B_r} f^2(x,y,z)\,dz\,dy}$. The Lagrangian multiplier in the last subproblem is updated by
$$(13)\qquad d^{n+1}(x,y,z) = d^n(x,y,z) + \tau\left(q^{n+1}(x,y,z) - \nabla_{\omega^{n+1}}^g u^{n+1}(x,y,z)\right),$$
where 0 < τ < 2η is a parameter which controls the convergence of the augmented Lagrangian algorithm; usually it can be chosen as τ = η.
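In discrete form, the shrinkage in (12) acts pixel-wise on the stacked (y, z) samples of its argument. A minimal Python sketch, assuming the field is stored as an array of shape (n_x, n_y, n_z) with the first axis indexing the pixel x (the array layout and the small numerical safeguard are our own choices):

```python
import numpy as np

def block_shrink(f, eta):
    """Shrinkage operator S of (12): for every pixel index x, the 2-norm of
    f[x] is taken jointly over the search positions y and patch offsets z,
    and f[x] is rescaled by max(||f(x,.,.)||_2 - 1/eta, 0) / ||f(x,.,.)||_2."""
    norms = np.sqrt(np.sum(f ** 2, axis=(1, 2)))          # ||f(x,.,.)||_2
    scale = np.maximum(norms - 1.0 / eta, 0.0)
    scale = np.where(norms > 0, scale / np.maximum(norms, 1e-12), 0.0)
    return f * scale[:, None, None]
```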
We summarize the steps as Algorithm 4.2.

Algorithm 4.2. Given initial values u⁰ = v, ω⁰ = 1/|Ω|, d⁰ = q⁰ = 0, for n = 0, 1, 2, . . ., do
Step 1, weighting function update: calculate the weighting function by (10).
Step 2, image restoration: restore u^{n+1} by solving equation (11).
Step 3, gradient update: compute q^{n+1} according to (12).
Step 4, Lagrangian multiplier update: renew d^{n+1} by formulation (13).
Step 5, convergence check: if the relative change $\|u^{n+1}-u^n\|_2/\|u^n\|_2$ is less than a prescribed tolerance, stop; otherwise, go to Step 1.
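The outer loop of Algorithm 4.2 can be organized as follows; this Python sketch only fixes the loop structure, while the operator-specific pieces (the discretization of (10), (11) and ∇gω, with their search window, patch kernel and boundary handling) are passed in as user-supplied callables, since the paper does not prescribe them at this level. The argument omega0 plays the role of ω⁰ = 1/|Ω|, and the arrays are assumed to carry the pixel index on their first axis.

```python
import numpy as np

def bnltv_admm(v, omega0, update_weights, restore, block_grad,
               eta=10.0, tau=None, tol=1e-4, max_iter=10):
    """Structural sketch of Algorithm 4.2 (augmented Lagrangian form of BNLTV).

    update_weights(u, q, d) -> omega   : discrete version of formula (10)
    restore(v, omega, q, d) -> u       : solve (11), e.g. by Gauss-Seidel
    block_grad(u, omega)    -> q-like  : discrete block nonlocal gradient
    """
    tau = eta if tau is None else tau            # 0 < tau < 2*eta, usually tau = eta
    u = v.astype(np.float64).copy()
    omega = omega0
    q = np.zeros_like(block_grad(u, omega))      # q0 = 0
    d = np.zeros_like(q)                         # d0 = 0
    for _ in range(max_iter):
        u_old = u
        omega = update_weights(u, q, d)          # Step 1: weight update
        u = restore(v, omega, q, d)              # Step 2: restoration
        grad_u = block_grad(u, omega)
        f = grad_u - d / eta                     # Step 3: block shrinkage (12)
        norms = np.sqrt(np.sum(f ** 2, axis=tuple(range(1, f.ndim))))
        scale = np.maximum(norms - 1.0 / eta, 0.0) / np.maximum(norms, 1e-12)
        q = f * scale.reshape((-1,) + (1,) * (f.ndim - 1))
        d = d + tau * (q - grad_u)               # Step 4: multiplier update (13)
        if np.linalg.norm(u - u_old) <= tol * np.linalg.norm(u_old):
            break                                # Step 5: convergence check
    return u
```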
5. Experimental Results. In this section, we show some numerical results of the proposed method and compare it with some related methods. Unless stated otherwise, the parameters of the proposed method are chosen as follows: h = 0.25², σ = 3.0, η = 10, and Br is set to a 7 × 7 square patch. λ is set to different values according to the noise levels; we list the exact values later. For computational efficiency and to save memory, we do not let y range over the entire Ω but use a 13 × 13 search window for every x.
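Throughout this section, restoration quality is measured by the standard peak signal-to-noise ratio (PSNR); assuming 8-bit images with intensities in [0, 255] and N pixels, it reads
$$\mathrm{PSNR} = 10\log_{10}\frac{255^2}{\frac{1}{N}\sum_{x\in\Omega}\left(u(x)-u_{\mathrm{original}}(x)\right)^2}\ \mathrm{dB}.$$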
Generally speaking, if the images have many repeated structures (regular textures), the size of the search window should be enlarged, because this helps to find more nonlocal similar pixels with the same textures; however, it also leads to a heavy computational burden. On the other hand, if the images contain heavy noise, one should increase the size of the image blocks: large image blocks help to produce smooth results and play the role of a local smoother. Conversely, the image block size should be reduced if the noise level is low, because the improvement would be limited in this case and one can obtain acceptable results with much less CPU time.
According to the definition of the block gradient operator ∇gω, both local and nonlocal factors are taken into account through the variables z and y, respectively. Thus, the proposed block based operators are more robust to heavy noise than the point based nonlocal operators [11]. Let us give an intuitive interpretation of this diffusion mechanism. Let ρε = δ and let ω be symmetric; then the Gauss-Seidel iteration for −Δgω u would be
$$(14)\qquad u^{n+1}(x) = \frac{\int_\Omega\int_{B_r} u^n(y+z)\,\omega(x-z,y)\,g(z)\,dz\,dy}{\int_\Omega\int_{B_r} g(z)\,\omega(x-z,y)\,dz\,dy}.$$
This equation means that all of the pixels at y and y + z are averaged with the weighting function ω(x − z, y)g(z). The pixels located at y could be far away from x; these pixels can be regarded as nonlocal ones. Since z can only take values in a local neighborhood centered at 0, the points y + z are local positions around y. Furthermore, note that a patch centered at y with a large weight would be very similar to the one centered at x, so u(y + z) can be regarded as having local pixels similar to those of u(x). In this sense, the block nonlocal diffusion is both local and nonlocal. Please see Figure 2 for an illustration.
Equation (14) can also be reformulated as
$$u^{n+1}(x) = \frac{\int_\Omega\int_{B_r} u^n(y)\,\omega(x-z,y-z)\,g(z)\,dz\,dy}{\int_\Omega\int_{B_r} g(z)\,\omega(x-z,y-z)\,dz\,dy}$$
with appropriate boundary conditions. From this viewpoint, the above formulation means that one pixel u(x) may be contained in many overlapping patches in the block based mode, and thus u(x) is estimated many times. Our formulation provides a weighting function to average all these repeated estimates of u(x). Thus, the block based operators are more robust to noise, as can be observed in the following experiments.
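A direct (and deliberately unoptimized) transcription of the discrete form of (14) may help make the indexing explicit; in the following Python sketch, the array layout chosen for the weights and the border handling are our own assumptions, not part of the paper.

```python
import numpy as np

def block_nonlocal_average(u, w, g, win, radius):
    """One sweep of the block nonlocal averaging (14), with rho_eps = delta.

    u : (H, W) current image
    w : (H, W, 2*win+1, 2*win+1) array; w[i, j, a, b] = omega(x, y) for
        x = (i, j) and y = (i + a - win, j + b - win); weights for y outside
        the search window are implicitly zero
    g : (2*radius+1, 2*radius+1) normalized patch kernel g(z)
    Pixels within win + 2*radius of the border are left unchanged.
    """
    H, W = u.shape
    out = u.copy()
    pad = win + 2 * radius
    for i in range(pad, H - pad):
        for j in range(pad, W - pad):
            num, den = 0.0, 0.0
            for zi in range(-radius, radius + 1):
                for zj in range(-radius, radius + 1):
                    gz = g[zi + radius, zj + radius]
                    # weights are attached to the shifted center x - z
                    for a in range(2 * win + 1):
                        for b in range(2 * win + 1):
                            yi = i - zi + a - win     # y ranges around x - z
                            yj = j - zj + b - win
                            wz = gz * w[i - zi, j - zj, a, b]
                            num += wz * u[yi + zi, yj + zj]   # u(y + z)
                            den += wz
            out[i, j] = num / den
    return out
```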
In the first experiment, we show the superiority of the block diffusion operators. The clean original "barbara" image in Figure 3(a) was corrupted by additive Gaussian white noise with standard deviation σ = 80, and the noisy image is shown in Figure 3(b). We recover it with different denoising methods. For each algorithm, we try different parameters and select the best reconstructed result (with the highest PSNR) for comparison.
Fig. 2: Schematic plot of the Gauss-Seidel iteration for −Δgω u.
Let us first show the differences of the local, nonlocal and block based nonlocal operators. To see this clearly, we fix the weighting functions and do not update them in the related nonlocal methods. The result produced by TV [23] is displayed in Figure 3(c); one can see that this restored image is almost piecewise constant and loses a lot of texture structures. The PSNR for TV is 22.08 dB. Compared with TV, NLTV [11] can keep more texture details since the repeated structure information is used in this model. However, some recovered textures shown in Figure 3(d) are still not very clear since the noise is very heavy. Please find the texture detail comparison in Figure 4, which contains the related enlargements of the white rectangle areas in Figure 3. The PSNR of NLTV is 22.45 dB, which is higher than that of TV. In Figure 3(e), we show the result provided by the proposed block based nonlocal method BNLTV. One can see that it produces better texture restorations under heavy noise, with PSNR 22.90 dB (please see Figure 4 for texture details), because both local and nonlocal information are used in this algorithm. In this experiment, the fidelity parameters for TV, NLTV, and BNLTV are 3.6, 1.0 and 2.0, respectively.
Next, the advantages of updating the weighting function will be illustrated. We restore the same noisy image as in Figure 3(b) by using NLTV and BNLTV (Algorithm 4.2) with 10 iterations, updating the weights at each iteration. The related results are shown in Figures 5(a) and 5(b), respectively. Obviously, the reconstructed images have a much better quality than the previous ones; please also see the detail comparison in Figure 4. This is reasonable because the weighting function can be improved by the newly estimated images. We display the PSNR values of the BNLTV method during the iterations in Figure 6. The final PSNRs of NLTV and BNLTV are 23.22 dB and 23.62 dB, respectively. Compared to the previous results without weighting function updating, the PSNRs are improved by more than 0.7 dB. Based on experimental experience, we find that it is better to choose a larger fidelity parameter to prevent the recovered image from being over-smoothed; thus we set the fidelity parameter λ to 4.0 for NLTV and 2.0 for BNLTV.
Table 1: PSNRs (dB) for NLTV and the proposed BNLTV under Gaussian noise with different levels σ.

Images              Method      σ=10    σ=30    σ=50    σ=80    σ=100   σ=120
Barbara (512×512)   NLTV [11]   33.08   27.41   24.70   22.45   21.52   20.95
                    BNLTV       33.53   28.36   25.76   23.62   22.92   22.15
Lena (256×256)      NLTV [11]   33.42   27.50   25.16   23.00   21.99   21.21
                    BNLTV       33.69   28.39   25.77   24.31   23.52   22.79
House (256×256)     NLTV [11]   35.20   30.20   27.30   24.63   23.46   22.65
                    BNLTV       35.30   30.79   28.42   26.21   25.29   24.35
Table 2: The absolute error (||u − u_original||²₂) and CPU time (seconds) for NLTV [11] and BNLTV. The "iter." values are the outer iteration numbers of BNLTV.

                                   NLTV [11]   BNLTV
Images                                         iter. 2   iter. 4   iter. 6   iter. 8   iter. 10
House (256×256)     Error          295.02      261.66    209.91    200.12    195.90    193.87
                    CPU time (s)   1.36        24.07     47.88     70.98     93.99     117.01
Lena (256×256)      Error          414.76      375.63    312.88    298.37    292.97    291.22
                    CPU time (s)   1.45        25.95     50.93     75.67     100.29    125.23
Barbara (512×512)   Error          1845.7      1690.1    1415.9    1367.9    1346.1    1338.6
                    CPU time (s)   6.80        121.19    237.78    360.31    483.67    606.35
For more comparison results on different test images under a variety of noise levels, please see Table 1. It can be seen that the proposed BNLTV performs better than NLTV in all cases. Generally speaking, the heavier the noise, the larger the increase in PSNR.
6. Conclusion and Discussion. In this paper, we proposed a block based nonlocal method for image restoration. In our model, the weighting function of the nonlocal method is not fixed but adaptively determined by the cost functional itself. The weighting function ω(x, y) also has a statistical meaning: it stands for the probability of similarity between the patches centered at x and y. By updating the weighting function, the quality of the restored images is greatly improved, especially under heavy noise. Besides, we extended the existing point based nonlocal diffusion to a block based one. The numerical results show that the proposed block nonlocal operators are more robust to noise than the previous ones and help to obtain better reconstructed images.
The proposed method can be applied to many image tasks. Possible applications include inpainting and compressed sensing, in which the weights are even more important and our method could greatly improve the results. Due to page limitations, we leave these as future work for an upcoming paper.
Though the proposed method can produce better results than the existing NLTV, its computational cost is higher. Generally speaking, for 256 × 256 images, one outer iteration (including the weight calculation and the restoration) costs about 11–12 seconds of CPU time on a 2011 MacBook Pro. In Table 2, we list the reconstruction absolute errors (||u − u_original||²₂) and the CPU times for BNLTV and NLTV [11]. In this table, all the data were obtained under Gaussian noise with standard deviation 100.
Fig. 3: Comparison of the point based nonlocal operator and the proposed block based nonlocal operator without weight updating. (a) Clean; (b) Noisy, σ = 80; (c) TV [23], 22.08 dB; (d) NLTV [11], 22.45 dB; (e) BNLTV, 22.90 dB.
Fig. 4: Enlargements of the white rectangle texture area in Figures 3 and 5. (a) Clean; (b) Noisy; (c) TV; (d) NLTV without weight updating; (e) BNLTV without weight updating; (f) NLTV with weight updating; (g) BNLTV with weight updating.
Fig. 5: Comparison of the point based nonlocal operator and the proposed block based nonlocal operator with weight updating (10 iterations). (a) NLTV, 23.22 dB; (b) BNLTV, 23.62 dB.
Fig. 6: The PSNR values of BNLTV during the iterations (x-axis: iteration number; y-axis: PSNR in dB).
REFERENCES
[1] P. Arias, G. Facciolo, V. Caselles, and G. Sapiro, A variational framework for exemplar-based image inpainting, International Journal of Computer Vision, 93 (2011), pp. 319–347.
[2] J. Biemond, A. M. Tekalp, and R. L. Lagendijk, Maximum likelihood image and blur identification: a unifying approach, Optical Engineering, 29 (1990), pp. 422–435.
[3] A. Buades, B. Coll, and J.-M. Morel, A non-local algorithm for image denoising, in Computer Vision and Pattern Recognition (CVPR 2005), IEEE Computer Society Conference on, vol. 2, IEEE, 2005, pp. 60–65.
[4] A. Buades, B. Coll, and J.-M. Morel, A review of image denoising algorithms, with a new one, Multiscale Modeling & Simulation, 4 (2005), pp. 490–530.
[5] P. Chatterjee and P. Milanfar, Clustering-based denoising with locally learned dictionaries, IEEE Transactions on Image Processing, 18 (2009), pp. 1438–1451.
[6] P. Chatterjee and P. Milanfar, Patch-based near-optimal image denoising, IEEE Transactions on Image Processing, 21 (2012), pp. 1635–1649.
[7] K. Dabov, A. Foi, and K. Egiazarian, Video denoising by sparse 3D transform-domain collaborative filtering, in Signal Processing Conference, 2007 15th European, IEEE, 2007, pp. 145–149.
[8] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering, IEEE Transactions on Image Processing, 16 (2007), pp. 2080–2095.
[9] W. Dong, X. Li, L. Zhang, and G. Shi, Sparsity-based image denoising via dictionary learning and structural clustering, in Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, IEEE, 2011, pp. 457–464.
[10] G. Gilboa and S. Osher, Nonlocal linear image regularization and supervised segmentation, Multiscale Modeling & Simulation, 6 (2007), pp. 595–630.
[11] G. Gilboa and S. Osher, Nonlocal operators with applications to image processing, Multiscale Modeling & Simulation, 7 (2008), pp. 1005–1028.
[12] T. Goldstein and S. Osher, The split Bregman method for L1-regularized problems, SIAM Journal on Imaging Sciences, 2 (2009), pp. 323–343.
[13] H. Ji, S. Huang, Z. Shen, and Y. Xu, Robust video restoration by joint sparse and low rank matrix approximation, SIAM Journal on Imaging Sciences, 4 (2011), pp. 1122–1142.
[14] V. Katkovnik, A. Foi, K. Egiazarian, and J. Astola, From local kernel to nonlocal multiple-model image denoising, International Journal of Computer Vision, 86 (2010), pp. 1–32.
[15] S. Kindermann, S. Osher, and P. W. Jones, Deblurring and denoising of images by nonlocal functionals, Multiscale Modeling & Simulation, 4 (2005), pp. 1091–1115.
[16] M. Lindenbaum, M. Fischer, and A. Bruckstein, On Gabor's contribution to image enhancement, Pattern Recognition, 27 (1994), pp. 1–8.
[17] J. Liu, H. Huang, Z. Huan, and H. Zhang, Adaptive variational method for restoring color images with high density impulse noise, International Journal of Computer Vision, 90 (2010), pp. 131–149.
[18] J. Liu, X.-C. Tai, H. Huang, and Z. Huan, A weighted dictionary learning model for denoising images corrupted by mixed noise, IEEE Transactions on Image Processing, 22 (2013), pp. 1108–1120.
[19] J. Liu and H. Zhang, Image segmentation using a local GMM in a variational framework, Journal of Mathematical Imaging and Vision, 46 (2013), pp. 161–176.
[20] M. Nikolova, A variational approach to remove outliers and impulse noise, Journal of Mathematical Imaging and Vision, 20 (2004), pp. 99–120.
[21] P. Perona and J. Malik, Scale-space and edge detection using anisotropic diffusion, IEEE Transactions on Pattern Analysis and Machine Intelligence, 12 (1990), pp. 629–639.
[22] L. Rudin, S. Osher, et al., Total variation based image restoration with free local constraints, in Image Processing, 1994. Proceedings. ICIP-94., IEEE International Conference, vol. 1, IEEE, 1994, pp. 31–35.
[23] L. I. Rudin, S. Osher, and E. Fatemi, Nonlinear total variation based noise removal algorithms, Physica D: Nonlinear Phenomena, 60 (1992), pp. 259–268.
[24] A. Singer, Y. Shkolnisky, and B. Nadler, Diffusion interpretation of nonlocal neighborhood filters for signal denoising, SIAM Journal on Imaging Sciences, 2 (2009), pp. 118–139.
[25] B. Wahlberg, S. Boyd, M. Annergren, and Y. Wang, An ADMM algorithm for a class of total variation regularized estimation problems, arXiv preprint arXiv:1203.1828, (2012).
[26] C. Wu and X.-C. Tai, Augmented Lagrangian method, dual methods, and split Bregman iteration for ROF, vectorial TV, and high order models, SIAM Journal on Imaging Sciences, 3 (2010), pp. 300–339.
[27] L. P. Yaroslavsky, Digital Picture Processing: An Introduction, Springer Series in Information Sciences, vol. 9, Springer-Verlag, Berlin, Heidelberg, 1985.
[28] D. Zoran and Y. Weiss, From learning models of natural image patches to whole image restoration, in Computer Vision (ICCV), 2011 IEEE International Conference on, IEEE, 2011, pp. 479–486.