

UNIVERSITÄT LINZ
JOHANNES KEPLER (JKU)
Technisch-Naturwissenschaftliche Fakultät

A Discontinuous Galerkin Method for Solving Total Variation Minimization Problems

MASTER'S THESIS
for the attainment of the academic degree Diplomingenieur in the Master's program Industriemathematik

Submitted by: Stephen Edward Moore

Carried out at: Institut für Numerische Mathematik

Examiners: O. Univ. Prof. Dipl. Ing. Dr. Ulrich Langer, Priv. Doz. Dr. Johannes Kraus

Co-supervision: Prof. Dr. Massimo Fornasier

Linz, August 2011


To my parents, Stephen Moore and Janet Moore.

Abstract

The minimization of functionals formed by an L²-term and a Total Variation (TV) term plays an important role in mathematical imaging, with many applications in engineering, medicine and art. The TV term is well known to preserve sharp edges in images.

More precisely, we are interested in the minimization of a functional formed by a discrepancy term and a TV term. The first-order derivative of the TV term involves a term which degenerates in flat areas of an image. Many well-known methods have been proposed to overcome this problem.

In this thesis, we present a relaxed functional associated with the TV minimization problem. The relaxed functionals are well-posed and produce a sequence of solutions minimizing our original TV functional. The relaxation results in an Iteratively Reweighted Least Squares method that approximates the TV minimization.

Considering the Euler-Lagrange equation, the minimizer of the relaxed functional is equivalent to the solution of a second-order elliptic partial differential equation. We discretize this partial differential equation in the framework of the Discontinuous Galerkin (DG) Finite Element Method (FEM) with linear functions on each element. Specifically, we consider the Symmetric Interior Penalty Galerkin method. The discretization leads to a system of linear equations.

The existence and uniqueness of the solution to the DG variational formulation and to the discrete DG problem is studied, and a-priori error estimates are reported.

The Discontinuous Galerkin Finite Element Method in combination with the iteratively reweighted least squares method is implemented.

Finally, numerical results are presented that demonstrate the accuracy of the numerical solution obtained by the proposed methods.


Acknowledgments

First of all, I would like to thank my supervisors: Prof. Ulrich Langer, for giving me the opportunity to write this thesis and for organizing financial support; Priv. Doz. Dr. Johannes Kraus, for his patience and for timely and friendly discussions and encouragement, without which this work would not have been possible; and lastly Prof. Massimo Fornasier, for his invaluable advice and discussions, most of which will not be forgotten.

Special thanks go to Dr. Satyendra Tomar, who assisted my understanding of the assembly of the Discontinuous Galerkin Finite Element Method; to Dipl. Ing. Andreas Langer, for his readiness to discuss pertinent aspects of my work and for proofreading Chapter 2; to Dr. Veronika Pillwein, for her timely discussions and lecture; and to Prof. Martin Gander, for making time to read the thesis and make suggestions towards its completion.

This work was additionally supported by the Austrian Science Foundation – Fonds zur Förderung der wissenschaftlichen Forschung (FWF) – through the Doktoratskolleg Computational Mathematics: Numerical Analysis and Symbolic Computation.

I am indebted to the European Union for the Erasmus Mundus Scholarship, and to the Technical University of Kaiserslautern, Germany, and the Institute of Computational Mathematics at the Johannes Kepler University, Austria, for the environment and technical support.

Last but not least, I am grateful to my family for their support during my stay abroad, and also to my friends in Germany and Austria.

Stephen Edward Moore
Linz, August 2011


Contents

1 Introduction 1

2 Problem Formulation and Analysis 4
  2.1 Functions of Bounded Variation 4
  2.2 Existence and Uniqueness of Minimizers 8
  2.3 Euler-Lagrange Equations and a Relaxation Algorithm 9

3 DG Finite Element Discretization 13
  3.1 Some Basic Function Spaces 13
  3.2 DG Variational Formulations 14
  3.3 Some Basic Properties 18
  3.4 The DG Finite Element Equations 21
  3.5 A Priori Error Estimates 22
  3.6 DG-Version of IRLS Algorithm 26

4 Numerical Results 28
  4.1 DG for Diffusion Problems 28
    4.1.1 Dirichlet Problem 28
    4.1.2 Neumann Problem 28
    4.1.3 Poisson Problem with Known Solution 29
  4.2 Iteratively Reweighted Least Squares Algorithm 31
    4.2.1 Denoising Problem 31
    4.2.2 Diffusion Problem, TV-Minimization Problem and IRLS 33

5 Conclusion and Outlook 36
  5.1 Conclusion 36
  5.2 Outlook 36

List of Notations and Function Spaces 38

Bibliography 40


Chapter 1

    Introduction

In this thesis we will concentrate on the numerical solution of 2D Total Variation (TV) minimization problems, as they appear in image processing, by a Discontinuous Galerkin (DG) Finite Element Method (FEM) in combination with an Iteratively Reweighted Least Squares (IRLS) method.

TV methods and similar approaches based on regularization with L¹-norms (and semi-norms) have become a very popular tool in image processing and inverse problems due to peculiar features that cannot be realized with smooth regularizations. TV techniques had particular success due to their ability to produce cartoon-type reconstructions with sharp edges [9]. Within the last decade, there has been an explosion of new developments in this field. The TV methods started with the introduction of a variational denoising model by Rudin, Osher and Fatemi, consisting of minimizing the total variation among all functions within a variance bound [40]. It was shown to be equivalent to an unconstrained minimization problem of the form:

min_u ( 2λ ∫_Ω |∇u| dx + ‖Ku − g‖²_{L²} ),    (1.1)

where u : Ω → R, Ω ⊂ Rⁿ, n = 1, 2, K : L²(Ω) → L²(Ω), g ∈ L²(Ω) and λ > 0. The Euler-Lagrange equation of the functional in (1.1) reads as follows:

−λ div( ∇u/|∇u| ) + K*(Ku − g) = 0,    (1.2)

where K* is the adjoint of K. In the literature, several numerical strategies for efficiently solving (1.2) have been proposed. We mention only the following three approaches:

(i) The fixed point iteration method [15, 41, 42, 43]: Once the coefficients 1/|∇u| are fixed at a previous iterate u, various iterative solver techniques can be applied. There exist excellent inner solvers, but the outer iteration can be slow. Further improvements are still useful.


(ii) The explicit time marching scheme [33, 38, 40]: It turns the nonlinear partial differential equation (1.2) into a parabolic equation before using an explicit Euler method to march in time to convergence. The method is quite reliable but often slow as well.

(iii) The primal-dual (PD) method [12, 13, 14]: It solves for both the primal and the dual variable together in order to achieve faster convergence with the Newton method (and a constrained optimisation with the dual variable).

In this thesis, we will consider an Iteratively Reweighted Least Squares (IRLS) algorithm to solve the TV minimization problem. Under certain assumptions, the IRLS can be related to the minimization of the L¹-norm of derivatives [24]. By considering the Euler-Lagrange equation, the minimizer of the relaxed functional is equivalent to the solution of a second-order Partial Differential Equation (PDE) having the form of a reaction-diffusion equation with a specially chosen diffusion coefficient serving as weights. Introducing suitable boundary conditions, the problem can be formulated in variational form.

The application of the IRLS to (1.2) results in a double minimization algorithm [24], which will be discussed in more detail later.

The two main ingredients involved in our approach are:

1. An Iteratively Reweighted Least Squares algorithm is used to construct a sequence that converges to the solution of the original TV minimization problem. It is known to have a linear rate of convergence, which can be modified to yield a superlinear rate of convergence [24, 32].

2. The Discontinuous Galerkin Finite Element Method (DGFEM) is used to discretize the continuous, infinite-dimensional problem. For an introduction to DGFEMs, we recommend the book by Rivière [39] or the survey article by Arnold, Brezzi, Cockburn and Marini [4].

We will combine these ingredients to construct a new efficient numerical method for TV minimization problems in the following way:

• Firstly, we analyze the conditions that relate the TV minimization problem to the iteratively reweighted least squares algorithm. We will use results from [17].

• Secondly, we present the discontinuous Galerkin finite element method and a standardized assembly of the element matrices [39].

• Finally, we will present numerical experiments using this particular combination of methods and discuss the results.


The rest of the thesis is organized as follows: In Chapter 2, starting from the total variation minimization problem, we present a link between the TV minimization and the iteratively reweighted least squares method; we then derive the Euler-Lagrange equation of the functional. In Chapter 3, after recapitulating the concepts of the Discontinuous Galerkin Finite Element Method (h-version), we provide some analysis of existence and uniqueness and also present error estimates, particularly a-priori estimates in the L² and DG energy norms. In Chapter 4, we present some numerical results to illustrate the efficiency of the combination of DG methods and the IRLS algorithm. Finally, in Chapter 5, we draw some conclusions and discuss future work.

Chapter 2

    Problem Formulation and Analysis

In this chapter, we define the space of functions of bounded variation and give some properties that form the basis of its applications in regularization methods, following mostly the expositions in [2, 21, 22, 26]. For solving the minimization problem (1.1), we introduce an associated well-posed relaxed functional following [25]. Finally, we derive the corresponding Euler-Lagrange equation of the relaxed functional, which is a second-order elliptic partial differential equation.

The space of functions of bounded variation BV(Ω) plays an important role in many problems in the calculus of variations. For instance, BV-spaces are used in treating the minimal surface problem [26, 5] and in the theory and numerics of hyperbolic conservation laws. The space was introduced to image processing by Rudin, Osher and Fatemi (ROF) [40] and has subsequently found applications in the related field of inverse problems [20]. The interesting feature of the minimization problem from the ROF model is that it is well suited for problems with discontinuous solutions.

2.1 Functions of Bounded Variation

In this section, we denote by Ω a simply connected, bounded, nonempty subset of Rⁿ, n = 1, 2, 3, with Lipschitz continuous boundary Γ = ∂Ω. We use the symbol ∇ to denote the gradient of a smooth function u : Rⁿ → R, i.e.,

∇u = ( ∂u/∂x₁, ..., ∂u/∂xₙ ).

Let C¹₀(Ω; Rⁿ) denote the space of vector-valued functions ϕ = (ϕ₁, ..., ϕₙ) whose component functions ϕᵢ are continuously differentiable and compactly supported on Ω, i.e., each ϕᵢ vanishes outside some compact subset of Ω. The divergence of ϕ is given by

div ϕ = Σ_{i=1}^n ∂ϕᵢ/∂xᵢ.

The Euclidean norm is denoted by |·| and given by |ϕ(x)| = [ Σ_{i=1}^n (ϕᵢ(x))² ]^{1/2} for ϕ ∈ C¹₀(Ω; Rⁿ). The Sobolev space W^{1,1}(Ω) denotes the closure of C¹₀(Ω) with respect to the norm


‖u‖_{W^{1,1}(Ω)} = ∫_Ω [ |u(x)| + Σ_{i=1}^n |∂u(x)/∂xᵢ| ] dx,

where |·| denotes the absolute value.

Definition 2.1 ([26]). The total variation of a function u ∈ L¹(Ω) is defined by

∫_Ω |Du| = sup { ∫_Ω u div ϕ dx : ϕ ∈ C¹₀(Ω; Rⁿ), ‖ϕ‖_{L∞(Ω)} ≤ 1 }.    (2.1)

Further, we define the space of functions of bounded variation as

BV(Ω) = { u ∈ L¹(Ω) : ∫_Ω |Du| < ∞ }.
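For intuition, the total variation of a piecewise constant function reduces to the sum of its jump heights. A minimal discrete sketch in 1D (the signal and the helper name are ours, not from the thesis):

```python
import numpy as np

# Discrete analogue of (2.1) in 1D: for a piecewise constant signal,
# the total variation is the sum of the absolute jump heights.
def total_variation(u):
    return np.abs(np.diff(u)).sum()

u = np.array([0.0, 0.0, 1.0, 1.0, 3.0, 3.0])  # jumps of height 1 and 2
print(total_variation(u))  # 3.0
```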


Theorem 2.4 (Structure theorem for BV functions). Let u ∈ BV(Ω). Then there exist a Radon measure µ on Ω and a µ-measurable function σ : Ω → Rⁿ such that

|σ(x)| = 1 µ-a.e., and

∫_Ω u div ϕ dx = − ∫_Ω ϕ · σ dµ,

for any ϕ ∈ C¹₀(Ω, Rⁿ).

Theorem 2.4 essentially follows from the Riesz representation theorem; see [22] for details. Here the crucial difference to Sobolev spaces is that the measure Du need not necessarily be represented by a Lebesgue measurable function.

The following properties of BV functions play a central role in the analysis of total variation minimization problems; see [26] for proofs.

Theorem 2.5 (Lower semicontinuity). Let (uⱼ)_{j∈N} be a sequence of functions in BV(Ω) which converges in L¹_loc(Ω) to a function u. Then

∫_Ω |Du| ≤ lim inf_{j→∞} ∫_Ω |Duⱼ|,

where

L¹_loc(U) = { u : U → R | u ∈ L¹(V) for each V ⊂⊂ U }.    (2.5)

Recall that in a real vector space X, a set F ⊂ X is called convex if, for any u, v ∈ F, θu + (1 − θ)v ∈ F for any θ ∈ [0, 1]. For functions from such an X to the real numbers, we allow the value +∞, i.e., we consider functions from X to R̄ = R ∪ {+∞}.

Definition 2.6 (Convexity). Let X be a real vector space, F ⊂ X convex, and ϕ : F → R̄. ϕ is called a convex functional if for any u, v ∈ F,

ϕ(θu + (1 − θ)v) ≤ θϕ(u) + (1 − θ)ϕ(v),    (2.6)

for all θ ∈ [0, 1]. ϕ is strictly convex if (2.6) holds strictly for all θ ∈ (0, 1) and u ≠ v.
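Inequality (2.6) can be sanity-checked numerically for a simple convex functional such as ϕ(u) = u² on R (a toy verification, not part of the thesis):

```python
import random

def phi(u):
    return u * u  # a convex functional on R

# Sample the convexity inequality (2.6) at random points.
random.seed(1)
for _ in range(1000):
    u, v = random.uniform(-3, 3), random.uniform(-3, 3)
    theta = random.random()
    assert phi(theta * u + (1 - theta) * v) <= theta * phi(u) + (1 - theta) * phi(v) + 1e-12
print("convexity inequality (2.6) holds on all samples")
```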

For any ϕ : X → R̄, we define the set

dom ϕ = { u ∈ X : ϕ(u) < +∞ }.


Theorem 2.8 (Compactness). Let in addition Ω ⊂ Rⁿ be a bounded Lipschitz domain. Then

BV(Ω) ⊂⊂ Lᵖ(Ω) for 1 ≤ p < p* = n/(n − 1),

and

BV(Ω) ⊂ L^{p*}(Ω).

These compactness properties are analogous to those of functions in W^{1,1}(Ω). There is another type of compactness corresponding to a type of convergence that also carries information on the gradient (see [5]).

Definition 2.9 (BV-weak* convergence). A sequence (uⱼ)_{j∈N} ⊂ BV(Ω) converges to u ∈ BV(Ω) in the BV-weak* topology, denoted by uⱼ ⇀* u, if and only if

uⱼ → u in L¹(Ω) and Duⱼ ⇀ᴹ Du,

where Duⱼ ⇀ᴹ Du denotes weak convergence of measures, defined as

∫_Ω ϕ Duⱼ → ∫_Ω ϕ Du for all ϕ ∈ C₀(Ω; Rⁿ).

Theorem 2.10 (BV-weak* compactness). Let (uⱼ)_{j∈N} ⊂ BV(Ω) with ‖uⱼ‖_{BV(Ω)} uniformly bounded. Then there exist a subsequence (u_{j_k})_{k∈N} and u ∈ BV(Ω) such that

u_{j_k} ⇀* u in BV(Ω).

Theorem 2.11 (Approximation). Let u ∈ BV(Ω). Then there exists a sequence (uⱼ)_{j∈N} ⊂ BV(Ω) ∩ C∞(Ω) such that

lim_{j→∞} ∫_Ω |u − uⱼ| dx = 0

and

lim_{j→∞} ∫_Ω |Duⱼ| = ∫_Ω |Du|.

The latter result shows a difference to approximation results for Sobolev spaces: we obtain the approximation, but not in the BV semi-norm, whereas for Sobolev spaces approximation in the corresponding norm is possible.

The well-known coarea formula relates the total variation of a function to the regularity of its level sets [5, 26].

Theorem 2.12 (Coarea formula). Let u ∈ BV(Ω) and L_t := {x ∈ Ω : u(x) < t}. Then L_t has finite perimeter for L¹-a.e. t ∈ R and

∫_Ω |Du| = ∫_R ( ∫_Ω |D1_{L_t}| ) dt.

Conversely, if u ∈ L¹(Ω) and ∫_R ( ∫_Ω |D1_{L_t}| ) dt < ∞, then u ∈ BV(Ω).
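A discrete illustration of the coarea formula in 1D: integrating the number of jumps of the level-set indicators 1_{u < t} over t recovers the total variation of u (the signal and threshold grid are illustrative choices of ours):

```python
import numpy as np

# Discrete coarea check: TV(u) equals the integral over t of the
# "perimeter" (number of jumps) of the level set {u < t}.
u = np.array([0.0, 0.0, 1.0, 1.0, 3.0, 3.0])
ts = np.linspace(u.min() + 1e-9, u.max(), 3000)   # thresholds t
dt = ts[1] - ts[0]
perims = [np.abs(np.diff((u < t).astype(float))).sum() for t in ts]
coarea = float(np.sum(perims) * dt)
tv = float(np.abs(np.diff(u)).sum())
print(coarea, tv)  # the two values agree up to the Riemann-sum error
```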


Since BV(Ω) contains discontinuous functions, the standard DG finite element spaces are contained in it [36]. This will be discussed later in Chapter 3.

2.2 Existence and Uniqueness of Minimizers

As in [21, 40, 22, 26], we consider the minimization in BV(Ω) of the functional

J(u) = ‖Ku − g‖²_{L²(Ω)} + 2λ ∫_Ω |Du|,    (2.7)

where K : L²(Ω) → L²(Ω) is a bounded linear operator, g ∈ L²(Ω) is given, and λ > 0 is a fixed regularization parameter. Several numerical strategies to efficiently perform total variation minimization have been proposed in the literature [11, 10, 23, 27, 37]. However, in the following we will only discuss how to adapt an iteratively reweighted least squares algorithm to this particular situation.

In order to guarantee the existence of minimizers for (2.7), we assume that:

J is coercive in L²(Ω), i.e., there exists C > 0 such that {u ∈ L²(Ω) : J(u) ≤ C} is bounded in L²(Ω).

For smooth u, one can approximate the TV-term in (2.7) by a smooth, convex functional

Eε(u) = ∫_Ω ϕε(|∇u|) dx,    (2.8)

where ϕε ∈ C¹(R) (i.e., continuously differentiable) is defined as

ϕε(z) = (1/(2ε)) z² + ε/2    if 0 ≤ |z| ≤ ε,
ϕε(z) = |z|                  if ε ≤ |z| ≤ 1/ε,
ϕε(z) = (ε/2) z² + 1/(2ε)    if |z| ≥ 1/ε.

Note that

ϕε(z) ≥ |z| and lim_{ε→0} ϕε(z) = |z|, pointwise.
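The three branches of ϕε glue together continuously at the breakpoints z = ε and z = 1/ε, and ϕε dominates the absolute value, which can be checked directly (a small sketch; the function name is ours):

```python
def phi_eps(z, eps):
    """The smoothed absolute value phi_eps above: quadratic on [0, eps],
    equal to |z| on [eps, 1/eps], quadratic again beyond 1/eps."""
    z = abs(z)
    if z <= eps:
        return z * z / (2 * eps) + eps / 2
    elif z <= 1 / eps:
        return z
    else:
        return (eps / 2) * z * z + 1 / (2 * eps)

eps = 0.1
# continuity at the breakpoints z = eps and z = 1/eps
assert abs(phi_eps(eps, eps) - eps) < 1e-12
assert abs(phi_eps(1 / eps, eps) - 1 / eps) < 1e-12
# phi_eps(z) >= |z| everywhere (sampled)
assert all(phi_eps(z, eps) >= abs(z) for z in [-20, -1, -0.05, 0, 0.05, 1, 20])
```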

We now consider the following relaxed functional

Jε(u) = ‖Ku − g‖²_{L²(Ω)} + 2λ Eε(u) = ‖Ku − g‖²_{L²(Ω)} + 2λ ∫_Ω ϕε(|∇u|) dx,    (2.9)

which approximates J pointwise from above, i.e.,

Jε(u) ≥ J(u),    (2.10)

and

lim_{ε→0} Jε(u) = J(u).    (2.11)


Since Jε is convex and smooth, by taking the Euler-Lagrange equation we have that uε is a minimizer of Jε if and only if

−λ div( (ϕ'ε(|∇uε|)/|∇uε|) ∇uε ) + K*( Kuε − g ) = 0.    (2.12)

2.3 Euler-Lagrange Equations and a Relaxation Algorithm

In this section we want to provide an algorithm to compute minimizers of the approximating functionals Jε efficiently. First, we derive the Euler-Lagrange equations associated with Jε. In the following we assume that ϕε is continuously differentiable and Ω is an open, bounded and connected subset of Rⁿ with Lipschitz boundary ∂Ω (see [25]).

Proposition 2.13. If u is a minimizer in W^{1,2}(Ω) = H¹(Ω) of Jε, then u solves the following system of Euler-Lagrange equations:

0 = −λ div( (ϕ'ε(|∇u|)/|∇u|) ∇u ) + K*( Ku − g )    in Ω,

(ϕ'ε(|∇u|)/|∇u|) ∂u/∂ν = 0    on ∂Ω.    (2.13)

The equations (2.13) are the necessary conditions for the computation of minimizers of Jε. The nonlinear term div( (ϕ'ε(|∇u|)/|∇u|) ∇u ) constitutes the main complication for the numerical solution of these equations. Moreover, the second term is sometimes not easy to treat.

In order to compute solutions of (2.13) efficiently, we introduce a new functional given by

J̃(u, w) = ‖Ku − g‖²_{L²(Ω)} + 2λ ∫_Ω ( w |∇u(x)|² + 1/w ) dx,    (2.14)

where u ∈ W^{1,2}(Ω) := V and w ∈ L²(Ω) is such that ε ≤ w ≤ 1/ε almost everywhere. While the variable u is the function to be reconstructed, the function w is called the gradient weight.

For any given u^(0) ∈ V and w^(0) ∈ L²(Ω) (for example w^(0) := 1), we define the following iterative double-minimization algorithm:

u^(k+1) = argmin_{u ∈ V} J̃(u, w^(k)),
w^(k+1) = argmin_{ε ≤ w ≤ 1/ε} J̃(u^(k+1), w).    (2.15)
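To make the iteration concrete, the following is a hedged 1D sketch of (2.15) for denoising with K = I, discretized by finite differences: the minimization over u amounts to solving the linear system coming from the weak form, and the minimization over w reduces to a pointwise update of 1/|∇u| clamped to [ε, 1/ε]. The grid size, parameters and data are illustrative choices of ours, not from the thesis.

```python
import numpy as np

# 1D sketch of the double-minimization algorithm (2.15) with K = I.
n = 100
h = 1.0 / n
x = np.linspace(0.0, 1.0, n)
g = (x > 0.5).astype(float)      # data: a clean step
lam = 1e-3
lam_tilde = 1.0 / lam            # lambda-tilde = 1/lambda
eps = 1e-2

# forward-difference operator D: (Du)_i = (u_{i+1} - u_i)/h
D = (np.eye(n, k=1)[:-1] - np.eye(n)[:-1]) / h
w = np.ones(n - 1)               # w^(0) := 1

for k in range(30):
    # u-step: minimize J~(., w) <=> solve (D^T W D + lam_tilde I) u = lam_tilde g
    A = D.T @ (w[:, None] * D) + lam_tilde * np.eye(n)
    u = np.linalg.solve(A, lam_tilde * g)
    # w-step: the pointwise minimizer of w z^2 + 1/w is w = 1/z,
    # clamped to the admissible interval [eps, 1/eps]
    w = np.clip(1.0 / np.maximum(np.abs(D @ u), 1e-12), eps, 1.0 / eps)
```

With K = I the data term dominates in the flat regions, so u stays close to g there, while the small weights near the jump keep the transition sharp.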

We have the following convergence result (see [25]). We include the proof for the sake of completeness of this thesis.


Theorem 2.14. The sequence (u^(k))_{k∈N} has subsequences that converge to a minimizer uε := u^(∞) of Jε. If Jε has a unique minimizer u*, then u^(∞) = u* and the full sequence (u^(k))_{k∈N} converges to u*.

Proof. The proof we present here follows [25]. Observe that

J̃(u^(k), w^(k)) − J̃(u^(k+1), w^(k+1)) = ( J̃(u^(k), w^(k)) − J̃(u^(k+1), w^(k)) ) + ( J̃(u^(k+1), w^(k)) − J̃(u^(k+1), w^(k+1)) ) =: A_k + B_k ≥ 0,

where both A_k and B_k are non-negative because u^(k+1) and w^(k+1) are defined as minimizers in (2.15).

Therefore (J̃(u^(k), w^(k)))_k is a non-increasing sequence, and moreover it is bounded from below, since

inf_{ε ≤ w ≤ 1/ε} ∫_Ω ( w |∇u(x)|² + 1/w ) dx ≥ 0.

This implies that J̃(u^(k), w^(k)) converges. Moreover, we can write

B_k = ∫_Ω ( c(w^(k), |∇u^(k+1)(x)|) − c(w^(k+1), |∇u^(k+1)(x)|) ) dx,

where c(t, z) := t z² + 1/t. By Taylor's formula, we have

c(w^(k), z) = c(w^(k+1), z) + (∂c/∂t)(w^(k+1), z)(w^(k) − w^(k+1)) + (1/2)(∂²c/∂t²)(ξ, z) |w^(k) − w^(k+1)|²,

for ξ ∈ conv(w^(k), w^(k+1)) (the segment between w^(k) and w^(k+1)). By the definition of w^(k+1), and taking into account that ε ≤ w^(k+1) ≤ 1/ε, we have

(∂c/∂t)(w^(k+1), |∇u^(k+1)(x)|)(w^(k) − w^(k+1)) ≥ 0,

and (∂²c/∂t²)(t, z) = 2/t³ ≥ 2ε³ for any t ≤ 1/ε. This implies that

J̃(u^(k), w^(k)) − J̃(u^(k+1), w^(k+1)) ≥ B_k ≥ ε³ ∫_Ω |w^(k)(x) − w^(k+1)(x)|² dx,

and since J̃(u^(k), w^(k)) is convergent, we have

‖w^(k) − w^(k+1)‖_{L²(Ω)} → 0,    (2.16)

as k → ∞. Since u^(k+1) is a minimizer of J̃(·, w^(k)), it solves the following variational equation:

∫_Ω ( w^(k) ∇u^(k+1)(x) · ∇ϕ(x) + λ̃ (Ku^(k+1) − g)(x) Kϕ(x) ) dx = 0,    (2.17)

for all ϕ ∈ V, where λ̃ := λ^{−1}. Therefore we can write


∫_Ω ( w^(k+1) ∇u^(k+1)(x) · ∇ϕ(x) + λ̃ (Ku^(k+1) − g)(x) Kϕ(x) ) dx = ∫_Ω (w^(k+1) − w^(k)) ∇u^(k+1)(x) · ∇ϕ(x) dx,

and for 1/p + 1/q + 1/2 = 1, we have

| ∫_Ω ( w^(k+1) ∇u^(k+1)(x) · ∇ϕ(x) + λ̃ (Ku^(k+1) − g)(x) Kϕ(x) ) dx | ≤ ‖w^(k+1) − w^(k)‖_{Lᵖ} ‖∇u^(k+1)‖_{L^q} ‖∇ϕ‖_{L²}.

By monotonicity of (J̃(u^(k+1), w^(k+1)))_k, and since w^(k+1) = ϕ'ε(|∇u^(k+1)|)/|∇u^(k+1)|, we have

J̃(u^(1), w^(0)) ≥ J̃(u^(k+1), w^(k+1)) = Jε(u^(k+1)) ≥ J(u^(k+1)) ≥ C |∇u^(k+1)|(Ω) ≥ C ‖∇u^(k+1)‖_{L²(Ω)}.

Moreover, since Jε(u^(k+1)) ≥ J(u^(k+1)) and J is coercive, we have that ‖u^(k+1)‖_{L²(Ω)} and ‖∇u^(k+1)‖_{L²} are uniformly bounded with respect to k. Therefore, using (2.16), we can conclude that

∫_Ω ( w^(k+1) ∇u^(k+1)(x) · ∇ϕ(x) + λ̃ (Ku^(k+1) − g)(x) Kϕ(x) ) dx → 0,

as k → ∞, and there exists a subsequence (u^(k_p+1))_{p∈N} that converges in V to a function u^(∞). Since w^(k_p+1) = ϕ'ε(|∇u^(k_p+1)|)/|∇u^(k_p+1)|, by taking the limit for p → ∞ we obtain

−λ div( (ϕ'ε(|∇u^(∞)|)/|∇u^(∞)|) ∇u^(∞) ) + K*( Ku^(∞) − g ) = 0.    (2.18)

This is the Euler-Lagrange equation (2.12) associated with the functional Jε, and therefore u^(∞) is a minimizer of Jε.

Assume now that Jε has a unique minimizer u*. Then necessarily u^(∞) = u*. Since every subsequence of (u^(k))_k has a subsequence converging to u*, the full sequence (u^(k))_k converges to u*.

Since both Jε and J̃(·, w) admit minimizers, their uniqueness is equivalent to the uniqueness of the solutions of the corresponding Euler-Lagrange equations. If uniqueness of the solution is satisfied, then the algorithm (2.15) can be equivalently reformulated as the following two-step iterative procedure: Given w^(0) ∈ L∞(Ω), for k = 0, 1, ... define:


• Find u^(k+1) ∈ V such that

∫_Ω ( w^(k) ∇u^(k+1)(x) · ∇ϕ(x) + λ̃ (Ku^(k+1) − g)(x) Kϕ(x) ) dx = 0, ∀ϕ ∈ V.

• Compute directly w^(k+1) by

w^(k+1) = ε ∨ (1/|∇u^(k+1)|) ∧ (1/ε) := min( max( ε, 1/|∇u^(k+1)| ), 1/ε ).
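The lattice notation ε ∨ · ∧ 1/ε is simply a pointwise clamp of 1/|∇u^(k+1)| to the interval [ε, 1/ε]. A one-function sketch (the function name is ours):

```python
def weight_update(grad_abs, eps):
    """w^(k+1) = eps v (1/|grad u|) ^ (1/eps): clamp 1/|grad u| to [eps, 1/eps]."""
    inv = 1.0 / grad_abs if grad_abs > 0 else float("inf")
    return min(max(eps, inv), 1.0 / eps)

print(weight_update(0.0, 0.1))    # flat region: clamped to 1/eps = 10.0
print(weight_update(100.0, 0.1))  # steep edge:  clamped to eps = 0.1
print(weight_update(2.0, 0.1))    # in between:  1/2 = 0.5
```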

By a standard fixed point argument, the solution of the equation is unique for λ ∼ ε. The condition λ ∼ ε is acceptable only for those applications where the constraints on the data are weak, e.g., when the data is affected by strong noise (see [25]).

    The following result establishes the convergence of the algorithm.

Theorem 2.15. Let (εⱼ)_{j∈N} be a sequence of positive numbers monotonically converging to zero. The accumulation points of the sequence (u_{εⱼ})_{j∈N} of minimizers of J_{εⱼ} are minimizers of J.

Remark 2.16. The proof requires the notion of Γ-convergence. The minimizers of a relaxed functional J̄ can be approximated by the minimum points of functionals that are defined in W^{1,2}(Ω) (see [5], Section 2.1.4).

Chapter 3

    DG Finite Element Discretization

In [4], Arnold et al. present a unified analysis of Discontinuous Galerkin Finite Element Methods. In this chapter, we start with definitions of some function spaces and derive the standard and the DG variational formulations of our model problem, following mostly the work of Grossmann, Roos and Stynes [28] and B. Rivière [39]. Furthermore, we present a short introduction to the theory of DG finite element discretization applied to the variational setting of our PDE. In particular, we investigate the discretization errors arising from the discontinuous Galerkin finite element method. Finally, we present the DG-version of the iteratively reweighted least squares algorithm for solving our TV minimization problem.

3.1 Some Basic Function Spaces

We start with the definition of some function spaces [1].

Definition 3.1. Let Ω ⊂ Rⁿ be a bounded Lipschitz domain. The Lebesgue space Lᵖ(Ω) is given by

Lᵖ(Ω) = { v : Ω → R | ‖v‖_{Lᵖ(Ω)} := ( ∫_Ω |v(x)|ᵖ dx )^{1/p} < ∞ }.

For s ∈ N, the Sobolev space H^s(Ω) consists of all v ∈ L²(Ω) with D^α v ∈ L²(Ω) for all |α| ≤ s,

  • CHAPTER 3. DG FINITE ELEMENT DISCRETIZATION 14

where

D^α v = ∂^{α₁}_{x₁} ⋯ ∂^{αₙ}_{xₙ} v := ∂^{|α|} v / ( ∂x₁^{α₁} ⋯ ∂xₙ^{αₙ} )

denotes the weak derivative of order α, where α = (α₁, ..., αₙ) ∈ Nⁿ is a multi-index and |α| = Σ_{i=1}^n αᵢ. The Sobolev space H^s(Ω) is equipped with the norm

‖v‖_s := ( Σ_{0 ≤ |α| ≤ s} ‖D^α v‖²_{L²(Ω)} )^{1/2}.    (3.1)

Correspondingly, a semi-norm on this space is defined as

|v|_s := ( Σ_{|α| = s} ‖D^α v‖²_{L²(Ω)} )^{1/2}.    (3.2)

3.2 DG Variational Formulations

In this section, we will derive the standard and the DG variational formulations. Furthermore, we will define some special function spaces needed for the DG formulations. Let us consider the following Neumann problem as model problem: Find u such that

−∇ · (w∇u) + λ̃ u = λ̃ g    in Ω,    (3.3)
w∇u · ν = 0    on Γ = ∂Ω.    (3.4)

Here ν is the outer unit normal, w ∈ L∞(Ω) is assumed to be uniformly positive, g ∈ L²(Ω), and Ω ⊂ R² is a bounded polygonal Lipschitz domain. We recall from (2.17) that λ̃ = λ^{−1}, where λ is a positive regularization parameter. The standard variational formulation of the Neumann problem (3.3)-(3.4) reads as follows: Find u ∈ H¹(Ω) such that

∫_Ω ( w∇u · ∇v + λ̃ u v ) dx = ∫_Ω λ̃ g v dx    ∀v ∈ H¹(Ω).    (3.5)

The existence and uniqueness of a solution of (3.5) immediately follow from the Lax-Milgram lemma.

Let us now derive the DG variational formulation. We start with a decomposition T of Ω into triangles or rectangles T such that

Ω = ⋃_{T∈T} T and Tᵢ ∩ Tⱼ = ∅ if Tᵢ, Tⱼ ∈ T, Tᵢ ≠ Tⱼ.

We assume that T is a regular triangulation, i.e., the intersection of any two elements is either empty or a common vertex or edge. We also assume that the elements are shape regular, i.e., there exists a constant γ such that

h_T / ρ_T ≤ γ, ∀T ∈ T,


where h_T denotes the diameter of the element T and ρ_T is the diameter of the largest ball inscribed in T.
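For a concrete triangle, the ratio h_T/ρ_T can be computed from elementary geometry: the diameter is the longest edge, and the inscribed-ball diameter is 2r with inradius r = area/semi-perimeter. An illustrative helper (names ours):

```python
import math

def shape_ratio(a, b, c):
    """h_T / rho_T for the triangle with vertices a, b, c (2D tuples)."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    e1, e2, e3 = dist(a, b), dist(b, c), dist(c, a)
    h_T = max(e1, e2, e3)                                   # diameter = longest edge
    s = (e1 + e2 + e3) / 2                                  # semi-perimeter
    area = math.sqrt(s * (s - e1) * (s - e2) * (s - e3))    # Heron's formula
    rho_T = 2 * area / s                                    # inscribed-ball diameter
    return h_T / rho_T

# unit right triangle: the ratio is exactly 1 + sqrt(2)
print(shape_ratio((0, 0), (1, 0), (0, 1)))  # 2.414...
```

A family of meshes is shape regular when this ratio stays bounded by γ as the elements are refined.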

To each element T ∈ T we assign a non-negative integer s_T and define the "broken" Sobolev space of order s = {s_T : T ∈ T} by

H^s(Ω; T) := { v ∈ L²(Ω) : v|_T ∈ H^{s_T}(T), ∀T ∈ T }.

The associated norm and semi-norm are

‖v‖_{s,T} = ( Σ_{T∈T} ‖v‖²_{H^{s_T}(T)} )^{1/2} and |v|_{s,T} = ( Σ_{T∈T} |v|²_{H^{s_T}(T)} )^{1/2},

respectively. If s_T = s for all T ∈ T, we write ‖v‖_{s,T} and |v|_{s,T} instead of ‖v‖_{H^s(Ω;T)} and |v|_{H^s(Ω;T)}. If v ∈ H¹(Ω; T), then the composite gradient ∇_T v of a function v is defined by (∇_T v)|_T = ∇(v|_T), T ∈ T.

In the following, we assume that each element T ∈ T is the affine image of a rectangular reference element T̂ (the unit square), i.e., T = F_T(T̂). The finite element space is defined by

V_h(Ω; T, F) = { v ∈ L²(Ω) : v|_T ∘ F_T ∈ Q₁(T̂) },    (3.6)

where F = {F_T : T ∈ T} and Q₁(T̂) is the space of polynomials of degree at most one in each space direction on T̂. Note that the functions in V_h ≡ V_h(Ω; T, F) may be discontinuous across element edges.

Let E be the set of all edges of the given triangulation T, with E_int ⊂ E the set of all interior edges e ∈ E in Ω. Set Γ_int = {x ∈ Ω : x ∈ e for some e ∈ E_int}. Let the elements of T be numbered sequentially: T₁, T₂, .... Then for each e ∈ E_int there exist indices i and j such that i > j and e = Tᵢ ∩ Tⱼ. Set T := Tᵢ and T' := Tⱼ. Define the jump (which depends on the enumeration of the triangulation) and average of each function v ∈ H¹(Ω, T) on e ∈ E_int by

[v]_e = v|_{∂T∩e} − v|_{∂T'∩e}, {v}_e = (1/2)( v|_{∂T∩e} + v|_{∂T'∩e} ).

Furthermore, to each edge e ∈ E_int we assign a unit normal vector ν directed from T to T'. If instead e ⊂ Γ, then we take the outward-pointing unit normal vector ν on Γ. When there is no danger of misinterpretation, we omit the indices in [v]_e and {v}_e.

For simplicity, we shall assume that the solution u of (3.5) belongs to H²(Ω) ⊂ H²(Ω; T). Let us mention that, for more general problems, it is standard to assume that u ∈ H²(Ω; T) and that both u and ∇u · ν are continuous across all interior edges, where ν is a normal to the edge. In particular, we have

[u]_e = 0, {u}_e = u, e ∈ E_int.


Multiplying the differential equation (3.3) by a (possibly discontinuous) test function v ∈ H¹(Ω, T) and integrating over Ω, we obtain

∫_Ω ( −∇ · (w∇u) + λ̃ u ) v dx = ∫_Ω λ̃ g v dx.    (3.7)

First, we consider the term −∇ · (w∇u) in (3.7). Let ν_T denote the outward-pointing unit normal to ∂T for each T ∈ T. Integration by parts and elementary transformations give us

∫_Ω (−∇ · (w∇u)) v dx = Σ_{T∈T} ∫_T (w∇u) · ∇v dx − Σ_{T∈T} ∫_{∂T} (w∇u · ν_T) v ds

= Σ_{T∈T} ∫_T (w∇u) · ∇v dx − Σ_{e∈E∩Γ} ∫_e (w∇u · ν) v ds − Σ_{e∈E_int} ∫_e ( ((w∇u · ν_T) v)|_{∂T∩e} + ((w∇u · ν_{T'}) v)|_{∂T'∩e} ) ds.

Let ν be the unit normal vector pointing from T to T'; then the sum of the integrals over e ∈ E_int can be written as

Σ_{e∈E_int} ∫_e ( ((w∇u · ν_T) v)|_{∂T∩e} + ((w∇u · ν_{T'}) v)|_{∂T'∩e} ) ds

= Σ_{e∈E_int} ∫_e ( ((w∇u · ν) v)|_{∂T∩e} − ((w∇u · ν) v)|_{∂T'∩e} ) ds

= Σ_{e∈E_int} ∫_e ( {w∇u · ν}_e [v]_e + [w∇u · ν]_e {v}_e ) ds    (3.8)

= Σ_{e∈E_int} ∫_e {w∇u · ν}_e [v]_e ds.

Here the basic relation

(w₁ · ν₁) z₁ + (w₂ · ν₂) z₂ = {w · ν}[z] + [w · ν]{z}    (3.9)

is used in (3.8). Applying the boundary condition and using the abbreviation

Σ_{e∈E_int} ∫_e {w∇u · ν}_e [v]_e ds = ∫_{Γ_int} {w∇u · ν} [v] ds,    (3.10)

we get

∫_Ω (−∇ · (w∇u)) v dx = Σ_{T∈T} ∫_T (w∇u) · ∇v dx − ∫_{Γ_int} {w∇u · ν} [v] ds.    (3.11)
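The algebraic identity (3.9) behind this rearrangement can be checked numerically with scalars: writing a₁ := w₁ · ν and a₂ := w₂ · ν (so that ν_T = ν gives a₁z₁ and ν_{T'} = −ν gives −a₂z₂), with [z] = z₁ − z₂ and {z} = (z₁ + z₂)/2. A toy verification, not part of the thesis:

```python
import random

# Check (3.9): a1*z1 - a2*z2 == {a}[z] + [a]{z}, with
# {a} = (a1 + a2)/2, [a] = a1 - a2, [z] = z1 - z2, {z} = (z1 + z2)/2.
random.seed(0)
for _ in range(1000):
    a1, a2, z1, z2 = (random.uniform(-5, 5) for _ in range(4))
    lhs = a1 * z1 - a2 * z2
    rhs = 0.5 * (a1 + a2) * (z1 - z2) + (a1 - a2) * 0.5 * (z1 + z2)
    assert abs(lhs - rhs) < 1e-12
print("identity (3.9) verified on random samples")
```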

Finally, we add the terms

∫_{Γ_int} σ [u][v] ds and τ ∫_{Γ_int} [u] {w∇v · ν} ds,    (3.12)

where the choices τ = ±1, 0 will define different DG methods (see below). The terms (3.12) vanish for the exact solution u ∈ H¹(Ω, T). The penalty parameter σ is piecewise constant, i.e.,

σ|_e = σ_e := κ h_e^{−1}, e ∈ E, κ ≥ 0.

Finally, we obtain

∫_Ω (−∇ · (w∇u)) v dx = Σ_{T∈T} ∫_T (w∇u) · ∇v dx − ∫_{Γ_int} {w∇u · ν} [v] ds + τ ∫_{Γ_int} [u] {w∇v · ν} ds + ∫_{Γ_int} σ [u][v] ds.

We can now give the primal formulation of the discontinuous Galerkin method with interior penalties and its relation to the standard variational formulation (3.5).

Theorem 3.3. If u ∈ H2(Ω, T ) is a solution to (3.5), then u also solves

    A(u, v) = g(v),   ∀v ∈ H2(Ω, T ),      (3.13)

where the linear form is given by

    g(v) = Σ_{T∈T} ∫_T λ̃ g v dx,      (3.14)

and the bilinear form by

    A(u, v) = Σ_{T∈T} ( ∫_T w∇u · ∇v dx + ∫_T λ̃ u v dx ) − ∫_{Γ_int} {w∇u · ~ν} [v] ds

        + τ ∫_{Γ_int} [u] {w∇v · ~ν} ds + ∫_{Γ_int} κ h_e⁻¹ [u] [v] ds.      (3.15)

The following choices of τ and κ define some of the well-known DG methods:

    • τ = −1 and κ ≥ κ0 sufficiently large define the symmetric interior penalty (SIPG) method [4, 3, 44].

    • τ = +1 and κ > 0 define the non-symmetric interior penalty (NIPG) method [4, 39].

    • τ = +1 and κ = 0 define the method of Baumann and Oden [6, 35].

The proof of Theorem 3.3 follows by taking an arbitrary function v ∈ H2(Ω, T ), multiplying (3.3) by v and integrating by parts over each element. The desired consistency is achieved by applying the algebraic equality (3.9) and the boundary condition from (3.4) (see [39] for a detailed proof).


3.3 Some Basic Properties

Next, we look at some basic properties of these methods (see [39, 28, 19]). More precisely, we will consider the SIPG method. First, we state the trace inequality required for the proof of the coercivity property and then proceed with some other properties.

Theorem 3.4. Let T be a bounded polygonal domain with boundary ∂T and diameter h_T. Let e be an edge and ~ν a unit outward normal vector to e. Let p > 0 be an integer. There exists a constant C independent of h_T such that

    ‖v‖_{L2(e)} ≤ C h_T^{−1/2} ‖v‖_{L2(T)},   ∀v ∈ Q_p(T ), ∀e ⊂ ∂T,      (3.16)

and

    ‖∇v · ~ν‖_{L2(e)} ≤ C h_T^{−1/2} ‖∇v‖_{L2(T)},   ∀v ∈ Q_p(T ), ∀e ⊂ ∂T.      (3.17)

Theorem 3.5 (Consistency, [39]). All three of the above methods are consistent. That is, if the exact solution u to (3.5) is in Hs(Ω) for some s > 3/2, then we have

    A(u, v) = g(v),   ∀v ∈ H2(Ω, T ).      (3.18)

We now look at the continuity and coercivity properties of the bilinear form A(·, ·) with respect to the DG norm

    ‖|v|‖ = ( Σ_{T∈T} ∫_T (w∇v) · ∇v dx + ∫_Ω λ̃ v² dx + ∫_{Γ_int} κ h_e⁻¹ [v]² ds )^{1/2},   ∀v ∈ H1(Ω, T ).      (3.19)

We will also show the proof of coercivity for the sake of completeness of the thesis.

Definition 3.6 (Coercivity). The bilinear form A(·, ·) is coercive on Vh if there exists a constant C > 0 such that

    A(v, v) ≥ C ‖|v|‖²,   ∀v ∈ Vh.      (3.20)

For the SIPG bilinear form, we have

    A(v, v) = Σ_{T∈T} ( ∫_T w (∇v)² dx + ∫_T λ̃ v² dx ) − 2 ∫_{Γ_int} {w∇v · ~ν} [v] ds + ∫_{Γ_int} κ h_e⁻¹ [v]² ds.      (3.21)

By using the Cauchy–Schwarz inequality¹ we obtain an upper bound for the second term:

    ∫_e {w∇v · ~ν} [v] ds ≤ ‖{w∇v · ~ν}‖_{L2(e)} ‖[v]‖_{L2(e)}.

¹ Cauchy–Schwarz inequality: ∀x, y ∈ L2(Ω), |(x, y)_Ω| ≤ ‖x‖_{L2(Ω)} ‖y‖_{L2(Ω)}.


Next, we estimate the average of the fluxes for an interior edge e shared by T and T′ as follows:

    ‖{w∇v · ~ν}‖_{L2(e)} ≤ (1/2) ‖w∇v|_T‖_{L2(e)} + (1/2) ‖w∇v|_{T′}‖_{L2(e)}.

Using the trace inequality (3.17), we find

    ‖{w∇v · ~ν}‖_{L2(e)} ≤ (1/2) ‖w∇v|_T‖_{L2(e)} + (1/2) ‖w∇v|_{T′}‖_{L2(e)}

        ≤ (C*/2) h_T^{−1/2} ‖w∇v‖_{L2(T)} + (C*/2) h_{T′}^{−1/2} ‖w∇v‖_{L2(T′)},      (3.22)

where C* is independent of v and h. We also have

    |h_e| ≤ h_T ≤ h,   ∀e ⊂ ∂T, T ∈ T .

Using (3.22), we obtain

    ∫_e {w∇v · ~ν} [v] ≤ (C*/2) |h_e|^{1/2} ( h_T^{−1/2} ‖w∇v‖_{L2(T)} + h_{T′}^{−1/2} ‖w∇v‖_{L2(T′)} ) × ( |h_e|^{−1/2} ‖[v]‖_{L2(e)} )

        ≤ (C*/2) ( |h_e|^{1/2} h_T^{−1/2} + |h_e|^{1/2} h_{T′}^{−1/2} ) ( ‖w∇v‖²_{L2(T)} + ‖w∇v‖²_{L2(T′)} )^{1/2} × ( |h_e|^{−1/2} ‖[v]‖_{L2(e)} )

        ≤ C* ( ‖w∇v‖²_{L2(T)} + ‖w∇v‖²_{L2(T′)} )^{1/2} ( |h_e|^{−1/2} ‖[v]‖_{L2(e)} ).

A similar bound is obtained if e is a boundary edge. Using the Cauchy–Schwarz inequality and Young's inequality for δ > 0², we obtain

    Σ_{e∈E_int} ∫_e {w∇v · ~ν} [v] ds ≤ C* Σ_{e∈E_int} ( ‖w∇v|_T‖²_{L2(T)} + ‖w∇v|_{T′}‖²_{L2(T′)} )^{1/2} ( |h_e|^{−1/2} ‖[v]‖_{L2(e)} )

        ≤ C* ( Σ_{T∈T} ‖w∇v‖²_{L2(T)} )^{1/2} ( Σ_{e∈E_int} |h_e|⁻¹ ‖[v]‖²_{L2(e)} )^{1/2}

        ≤ C* w^{1/2} ( Σ_{T∈T} ‖w^{1/2}∇v‖²_{L2(T)} )^{1/2} ( Σ_{e∈E_int} |h_e|⁻¹ ‖[v]‖²_{L2(e)} )^{1/2}

        ≤ (δ/2) Σ_{T∈T} ‖w^{1/2}∇v‖²_{L2(T)} + (C̃/(2δ)) Σ_{e∈E_int} |h_e|⁻¹ ‖[v]‖²_{L2(e)}.

We obtain a lower bound for A(v, v):

    A(v, v) ≥ (1 − δ) Σ_{T∈T} ‖w^{1/2}∇v‖²_{L2(T)} + ‖λ̃‖_{L∞(Ω)} ‖v‖²_{L2(Ω)} + (κ − C̃/δ) Σ_{e∈E_int} |h_e|⁻¹ ‖[v]‖²_{L2(e)},

² Young's inequality: ∀a, b ∈ R, ∀δ > 0, ab ≤ (δ/2) a² + (1/(2δ)) b².


where the positive constant C̃ is independent of h_T. We achieve the coercivity result (3.20) with C = 1/2 if we choose δ = 1/2 and κ ≥ κ* = 1/2 + C̃/δ.

Definition 3.7 (Continuity, [8]). If κ > 0 for all ∂T, then the bilinear form A(·, ·) is continuous on Vh equipped with the energy norm ‖| · |‖ if there exists C̃ > 0 such that

    A(u, v) ≤ C̃ ‖|u|‖ ‖|v|‖,   ∀u, v ∈ Vh.

Using the Cauchy–Schwarz inequality, we have

    |A(u, v)| ≤ | Σ_{T∈T} ( ∫_T w∇u · ∇v dx + ∫_T λ̃ u v dx ) | + | ∫_{Γ_int} {w∇u · ~ν}[v] ds |

        + | ∫_{Γ_int} {w∇v · ~ν}[u] ds | + | ∫_{Γ_int} κ h_e⁻¹ [u][v] ds |

    ≤ Σ_{T∈T} ‖w^{1/2}∇u‖_{L2(T)} ‖w^{1/2}∇v‖_{L2(T)} + ‖λ̃^{1/2} u‖_{L2(Ω)} ‖λ̃^{1/2} v‖_{L2(Ω)}

        + Σ_{e∈E_int} ‖{w∇u · ~ν}‖_{L2(e)} ‖[v]‖_{L2(e)} + Σ_{e∈E_int} ‖{w∇v · ~ν}‖_{L2(e)} ‖[u]‖_{L2(e)}

        + Σ_{e∈E_int} κ h_e⁻¹ ‖[u]‖_{L2(e)} ‖[v]‖_{L2(e)}

    ≤ C w^{1/2} Σ_{T∈T} ( ‖w^{1/2}∇u‖²_{L2(T)} )^{1/2} ( ‖w^{1/2}∇v‖²_{L2(T)} )^{1/2} + ( ‖λ̃^{1/2} u‖²_{L2(Ω)} )^{1/2} ( ‖λ̃^{1/2} v‖²_{L2(Ω)} )^{1/2}

        + C w^{1/2} Σ_{e∈E_int} ( ‖w^{1/2}∇u‖²_{L2(T)} )^{1/2} ( h_e⁻¹ ‖[v]‖²_{L2(e)} )^{1/2} + C w^{1/2} Σ_{e∈E_int} ( ‖w^{1/2}∇v‖²_{L2(T)} )^{1/2} × ( h_e⁻¹ ‖[u]‖²_{L2(e)} )^{1/2}

        + Σ_{e∈E_int} ( κ h_e⁻¹ ‖[u]‖²_{L2(e)} )^{1/2} ( κ h_e⁻¹ ‖[v]‖²_{L2(e)} )^{1/2}      (3.23)

    ≤ C̃ ( Σ_{T∈T} ‖w^{1/2}∇u‖²_{L2(T)} + ‖λ̃^{1/2} u‖²_{L2(Ω)} + Σ_{e∈E_int} κ h_e⁻¹ ‖[u]‖²_{L2(e)} )^{1/2}

        × ( Σ_{T∈T} ‖w^{1/2}∇v‖²_{L2(T)} + ‖λ̃^{1/2} v‖²_{L2(Ω)} + Σ_{e∈E_int} κ h_e⁻¹ ‖[v]‖²_{L2(e)} )^{1/2}

    ≤ C̃ ‖|u|‖ ‖|v|‖,      (3.24)

where C̃ = C̃(w). In (3.23), we have used the trace inequality (3.22) to obtain (3.24).

Remark 3.8. In general, the bilinear form is not continuous on the "broken" space H2(Ω, T ) with respect to the energy norm [39]. Throughout, C denotes a generic constant independent of h that may take different values at different occurrences.

Combining the coercivity result (3.20) and the continuity result (3.23), we have the following theorem, which establishes the existence and uniqueness of the discontinuous Galerkin solution.


Theorem 3.9. Let A : Vh × Vh → R be the SIPG bilinear form (3.21). If κ is uniformly bounded below by a sufficiently large positive constant κ* for all edges e, then the DG solution exists and is unique, where the DG space Vh is defined by (3.6).

Proof. We refer the reader to Section 2.7.4 of [39] for a detailed proof.

3.4 The DG Finite Element Equations

A comprehensive introduction to the finite element method can be found in [7, 8, 39]. We present a short outline of the most important features. The discrete problem is given by the Galerkin scheme:

    Find uh ∈ Vh :   A(uh, vh) = g(vh),   ∀vh ∈ Vh.      (3.25)

The index h stands for the discretization parameter and indicates that we want to achieve convergence of the discrete solution as h → 0.

The space Vh in (3.6) is of finite dimension, thus it has a finite basis. For a quadrilateral mesh and bilinear elements, n = dim Vh = 4Nh, where Nh is the number of elements. Let φi denote the basis functions, i.e., Vh = span{φi : i = 1, ..., n}. Consequently, the solution uh is of the form

    uh(x) = Σ_{i=1}^{n} u_h^i φi(x),   for x ∈ Ω,      (3.26)

with coefficients u_h^i ∈ R. Using the notation

    u_h := (u_h^i)_{i=1}^{n} ∈ Rⁿ,      (3.27)
    g_h := (g(φi))_{i=1}^{n} ∈ Rⁿ,      (3.28)
    A_h[i, j] := A(φj, φi),   A_h ∈ Rⁿˣⁿ,      (3.29)

we arrive at the linear algebraic system:

    Find u_h ∈ Rⁿ :   A_h u_h = g_h,      (3.30)

which is equivalent to the original discrete problem (3.25). Since the bilinear form A(·, ·) is positive definite, the matrix A_h is also positive definite. Indeed, for u_h ∈ Rⁿ, we can write

    u_hᵀ A_h u_h = A( Σ_{j=1}^{n} u_j φj, Σ_{i=1}^{n} u_i φi ),

so the positive definiteness of A(·, ·) implies u_hᵀ A_h u_h > 0 for u_h ≠ 0. Now, since A_h is positive definite, it is invertible. Therefore, the linear algebraic system has exactly one solution u_h ∈ Rⁿ, and thus (3.25) has exactly one solution uh ∈ Vh.


3.5 A Priori Error Estimates

In this section, we state approximation results in the space of polynomials of degree at most p in each space direction and also recall the trace inequalities which enable us to prove discretization error estimates; see e.g. [39].

Theorem 3.10. Let T be a rectangle in 2D. Let v ∈ Hs(T ) for s ≥ 1, and let p ≥ 0 be an integer. There exist a constant C independent of v and h and a function ṽ ∈ Qp(T ) such that

    ‖ṽ − v‖_{Hq(T)} ≤ C h^{min(p+1,s)−q} |v|_{Hs(T)},   ∀ 0 ≤ q ≤ s,      (3.31)

where h = diam(T ).

The next result yields an approximation that conserves the average of the normal flux on each edge.

Theorem 3.11. Let T be a rectangle in 2D and denote by ~ν the outward normal to T. Let v ∈ Hs(T ) for s ≥ 2 and p > 0. There exists an approximation ṽ ∈ Qp(T ) of v satisfying

    ∫_e ∇(ṽ − v) · ~ν = 0,   ∀e ⊂ ∂T,

and the optimal error bounds

    ‖∇^i(ṽ − v)‖_{L2(T)} ≤ C h^{min(p+1,s)−i} |v|_{Hs(T)},   ∀i = 0, 1, 2.      (3.32)

We now state and prove an a priori energy error estimate in the DG norm.

Theorem 3.12 (Energy error estimate, [39]). Let A(·, ·) be the SIPG bilinear form from Theorem 3.3 with κ > 0. If the exact solution u to the model problem (3.3) is in H2(T ) and uh ∈ Vh satisfies (3.25), then there exists a constant C̃ independent of h such that

    ‖|u − uh|‖ ≤ C̃ h ‖u‖_{H2(T )}.      (3.33)

Proof. We will present the main steps of the proof given in [39, Sect. 2.8]. From Theorem 3.5, we know that

    A(u, v) = g(v),   ∀v ∈ H2(T ).

Since A(·, ·) is bilinear and

    A(uh, v) = g(v),   ∀v ∈ Vh,

we get the Galerkin orthogonality

    A(uh − u, v) = 0,   ∀v ∈ Vh.

Let χ ∈ H2(T ) denote the error, χ := uh − u. We have to find an upper bound for A(χ, χ) which is independent of uh. To achieve this, let ũ ∈ Vh be a function that approximates the exact solution u. Using Galerkin orthogonality, we can write

    A(χ, χ) = A(χ, uh) − A(χ, u) = −A(χ, u) = A(χ, ũ) − A(χ, u) = A(χ, ũ − u).

As u, ũ ∈ C0(Ω̄), where ũ is a continuous interpolant, we see that [ũ − u] = 0 on any interior edge. Therefore, most terms vanish when A(χ, ũ − u) is expanded, and we obtain:

    A(χ, χ) = A(χ, ũ − u) = Σ_{T∈T} ∫_T ( w∇χ · ∇(ũ − u) + λ̃ (ũ − u) χ )

        − Σ_{e∈E_int} ∫_e [χ] {w∇(ũ − u) · ~ν} − Σ_{e∈E_int} ∫_e {w∇χ · ~ν} [ũ − u]

        + Σ_{e∈E_int} ∫_e κ h_e⁻¹ [χ] [ũ − u].

Using the Cauchy–Schwarz inequality, Young's inequality and the approximation results Theorem 3.10 and Theorem 3.11, we obtain a bound for the first term as follows:

    | Σ_{T∈T} ∫_T w∇χ · ∇(ũ − u) | ≤ w^{1/2} ( Σ_{T∈T} ‖∇(ũ − u)‖²_{L2(T)} )^{1/2} ( Σ_{T∈T} ‖w^{1/2}∇χ‖²_{L2(T)} )^{1/2}

        ≤ (C/(2δ)) ‖∇(ũ − u)‖²_{L2(T)} + (δ/2) ‖w^{1/2}∇χ‖²_{L2(T)},

    | Σ_{T∈T} ∫_T λ̃ (ũ − u) χ | ≤ ‖λ̃‖_{L∞(T)} ( Σ_{T∈T} ‖ũ − u‖²_{L2(T)} )^{1/2} ( Σ_{T∈T} ‖χ‖²_{L2(T)} )^{1/2}

        ≤ (C/(2δ)) ‖λ̃‖_{L∞(Ω)} ‖ũ − u‖²_{L2(T)} + (δ/2) ‖χ‖²_{L2(T)}.

By the triangle inequality, we obtain

    | Σ_{T∈T} ∫_T ( w∇χ · ∇(ũ − u) + λ̃ (ũ − u) χ ) | ≤ (C/(2δ)) (1 + ‖λ̃‖_{L∞(Ω)}) ‖ũ − u‖²_{H1(T)} + (δ/2) ‖|χ|‖²

        ≤ C̃ h² ‖u‖²_{H2(T)} + (δ/2) ‖|χ|‖².

The second term can be bounded by applying the Cauchy–Schwarz inequality and Young's inequality:

    | Σ_{e∈E_int} ∫_e [χ] {w∇(ũ − u) · ~ν} | ≤ Σ_{e∈E_int} ‖[χ]‖_{L2(e)} ‖{w∇(ũ − u) · ~ν}‖_{L2(e)}

        = Σ_{e∈E_int} h^{−1/2} ‖[χ]‖_{L2(e)} h^{1/2} ‖{w∇(ũ − u) · ~ν}‖_{L2(e)}

        ≤ ( Σ_{e∈E_int} h⁻¹ ‖[χ]‖²_{L2(e)} )^{1/2} ( Σ_{e∈E_int} h ‖{w∇(ũ − u) · ~ν}‖²_{L2(e)} )^{1/2}

        ≤ C ( Σ_{e∈E_int} h⁻¹ ‖[χ]‖²_{L2(e)} )^{1/2} ( Σ_{T∈T} h h_T⁻¹ ‖∇(ũ − u)‖²_{L2(T)} )^{1/2}

        ≤ (δ/2) ‖|χ|‖² + (C/(2δ)) ‖∇(ũ − u)‖²_{L2(T)}

        ≤ (δ/2) ‖|χ|‖² + C̃ h² ‖u‖²_{H2(T)}.

Thus, using the approximation properties Theorem 3.10 and Theorem 3.11 and the trace inequalities from Theorem 3.4, we obtain the bound

    | Σ_{e∈E_int} ∫_e [χ] {w∇(ũ − u) · ~ν} | ≤ C̃ h² ‖u‖²_{H2(T)} + (δ/2) ‖|χ|‖².

In general, when ũ is not continuous, we proceed similarly for the third and fourth terms, with [ũ − u] no longer vanishing on the interfaces. Using the jump norm and the trace inequalities together with the definitions of the trace operators (average and jump), and using the coercivity result (3.20), we obtain

    ‖|χ|‖² ≤ A(χ, χ) = A(χ, ũ − u) ≤ C̃ h² ‖u‖²_{H2(T)},

where C̃ = C̃(λ̃, w).

Next, we prove an error estimate in the L2 norm.

Theorem 3.13 (L2 error estimate). Assume that Theorem 3.12 holds and that our model problem is H2-coercive. Then there exists a constant C̃ independent of h such that

    ‖u − uh‖_{L2(Ω)} ≤ C̃ h² ‖u‖_{H2(T )}.

Proof. The proof we present follows [39]. Consider the dual problem

    −∇ · (w∇v) + λ̃ v = u − uh   in Ω,
    w∇v · ~ν = 0   on Γ = ∂Ω.

We assume that v ∈ H2(Ω) and that there is a constant C depending on Ω such that

    ‖v‖_{H2(Ω)} ≤ C ‖u − uh‖_{L2(Ω)}.      (3.34)


Denote χ = u − uh. Then

    ‖χ‖²_{L2(Ω)} = Σ_{T∈T} ∫_T (−∇ · (w∇v) + λ̃ v) χ.

Integrating by parts on each element and applying the basic relation (3.9) yields

    ‖χ‖²_{L2(Ω)} = Σ_{T∈T} ( ∫_T w∇v · ∇χ dx + ∫_T λ̃ v χ dx ) − Σ_{e∈E_int} ∫_e {w∇v · ~ν}[χ] − Σ_{e∈E_int} ∫_e {w∇χ · ~ν}[v]

        = Σ_{T∈T} ( ∫_T w∇v · ∇χ dx + ∫_T λ̃ v χ dx ) − Σ_{e∈E_int} ∫_e {w∇v · ~ν}[χ],

since v is continuous. By subtracting the Galerkin orthogonality equation, we obtain for ṽ ∈ Vh

    ‖χ‖²_{L2(Ω)} = Σ_{T∈T} ∫_T ( w∇χ · ∇(ṽ − v) + λ̃ (ṽ − v) χ ) − Σ_{e∈E_int} ∫_e {w∇v · ~ν}[χ] + Σ_{e∈E_int} ∫_e {w∇ṽ · ~ν}[χ]

        − Σ_{e∈E_int} ∫_e {w∇χ · ~ν}[ṽ] − Σ_{e∈E_int} ∫_e κ h_e⁻¹ [χ][ṽ].

By noting that v ∈ H2(Ω) and choosing ṽ ∈ C0(Ω), we are left with

    ‖χ‖²_{L2(Ω)} = Σ_{T∈T} ∫_T ( w∇χ · ∇(ṽ − v) + λ̃ (ṽ − v) χ ) − Σ_{e∈E_int} ∫_e {w∇v · ~ν}[χ] + Σ_{e∈E_int} ∫_e {w∇ṽ · ~ν}[χ]

        = Σ_{T∈T} ∫_T ( w∇χ · ∇(ṽ − v) + λ̃ (ṽ − v) χ ) + Σ_{e∈E_int} ∫_e {w∇(ṽ − v) · ~ν}[χ].      (3.35)

The first term is bounded in the following way:

    | Σ_{T∈T} ∫_T w∇(ṽ − v) · ∇χ | ≤ C Σ_{T∈T} ‖w∇(ṽ − v)‖_{L2(T)} ‖∇χ‖_{L2(T)}

        ≤ C̃ Σ_{T∈T} ‖∇(ṽ − v)‖_{L2(T)} ‖∇χ‖_{L2(T)}

        ≤ C̃ ‖∇(ṽ − v)‖_{L2(Ω)} ‖∇χ‖_{L2(Ω)},

    | Σ_{T∈T} ∫_T λ̃ (ṽ − v) χ | ≤ C ‖λ̃‖_{L∞(Ω)} ‖ṽ − v‖_{L2(Ω)} ‖χ‖_{L2(Ω)} ≤ C̃ ‖ṽ − v‖_{L2(Ω)} ‖χ‖_{L2(Ω)}.


By the triangle inequality, we obtain

    | Σ_{T∈T} ∫_T ( w∇(ṽ − v) · ∇χ + λ̃ (ṽ − v) χ ) | ≤ C̃ ‖ṽ − v‖_{H1(Ω)} ‖χ‖_{H1(Ω)} ≤ C̃ h ‖v‖_{H2(Ω)} ‖|χ|‖.      (3.36)

The last term is bounded by using the Cauchy–Schwarz inequality and by taking advantage of the definition of the penalty parameter:

    | Σ_{e∈E_int} ∫_e {w∇(ṽ − v) · ~ν}[χ] | ≤ Σ_{e∈E_int} h_e^{1/2} ‖{w∇(ṽ − v) · ~ν}‖_{L2(e)} h_e^{−1/2} ‖[χ]‖_{L2(e)}

        ≤ ( Σ_{e∈E_int} h_e ‖{w∇(ṽ − v) · ~ν}‖²_{L2(e)} )^{1/2} × ( Σ_{e∈E_int} h_e⁻¹ ‖[χ]‖²_{L2(e)} )^{1/2}

        ≤ C ( Σ_{T∈T} h_e h_T⁻¹ ‖∇(ṽ − v)‖²_{L2(T)} )^{1/2} ( Σ_{e∈E_int} h_e⁻¹ ‖[χ]‖²_{L2(e)} )^{1/2}

        ≤ C̃ ( Σ_{T∈T} h_T² ‖v‖²_{H2(T)} )^{1/2} ( Σ_{e∈E_int} h_e⁻¹ ‖[χ]‖²_{L2(e)} )^{1/2}

        ≤ C̃ h ‖v‖_{H2(Ω)} ‖|χ|‖.      (3.37)

We substitute (3.36) and (3.37) into (3.35). Therefore, using the bound (3.34), we obtain

    ‖χ‖²_{L2(Ω)} ≤ C̃ h ‖v‖_{H2(Ω)} ‖|χ|‖ ≤ C̃ h ‖χ‖_{L2(Ω)} ‖|χ|‖.

With the energy estimate (3.33), this implies

    ‖χ‖_{L2(Ω)} ≤ C̃ h ‖|χ|‖ ≤ C̃ h² ‖u‖_{H2(T )}.

3.6 DG Version of the IRLS Algorithm

In this section, we present the iteratively reweighted least squares algorithm in combination with the DG discretization. The 1D case has already been presented in [34]. Now we are in a position to proceed with the 2D case. Indeed, we have all the ingredients to assemble our numerical scheme into the following algorithm.


Algorithm 3.14 (Double Minimization, [24]).
Input: Data vector g, ε > 0, initial gradient weight w⁽⁰⁾ with ε ≤ w⁽⁰⁾ ≤ 1/ε, number nmax of outer iterations.
Parameters: λ̃, κ > 0.
Output: Approximation u* of the minimizer of Jε.

    u_h⁽⁰⁾ = 0
    for n = 0 to nmax do
        Assemble the matrix A_h⁽ⁿ⁺¹⁾ from Theorem 3.3;
        Compute u_h⁽ⁿ⁺¹⁾ such that A_h⁽ⁿ⁺¹⁾ u_h⁽ⁿ⁺¹⁾ = g_h;
        Compute the gradient ∇u_h⁽ⁿ⁺¹⁾ = Σ_{k∈N} u_{h,k}⁽ⁿ⁺¹⁾ ∇φ_k;
        Update w⁽ⁿ⁺¹⁾ = min( max( ε, 1 / |∇u_h⁽ⁿ⁺¹⁾| ), 1/ε ).
    endfor
    u_h* := u_h⁽ⁿ⁺¹⁾.
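To make the structure of Algorithm 3.14 concrete, the following sketch implements the same outer iteration in 1D, with a finite difference discretization standing in for the DG assembly (the discretization, mesh size and test data are illustrative assumptions, not the scheme analyzed above):

```python
import numpy as np

def irls_tv_denoise(g, lam=100.0, eps=1e-2, n_iter=20):
    """Double-minimization / IRLS sketch: alternate between solving
    -d/dx(w du/dx) + lam*u = lam*g (homogeneous Neumann ends) and
    updating the gradient weights w = min(max(eps, 1/|u'|), 1/eps)."""
    n = len(g)
    h = 1.0 / n
    u = np.zeros(n)
    w = np.ones(n - 1)                    # weights on the cell interfaces
    for _ in range(n_iter):
        A = lam * h * np.eye(n)           # reaction (lumped mass) part
        for i, wi in enumerate(w):        # diffusion part, interface by interface
            s = wi / h
            A[i, i] += s
            A[i + 1, i + 1] += s
            A[i, i + 1] -= s
            A[i + 1, i] -= s
        u = np.linalg.solve(A, lam * h * g)
        grad = np.abs(np.diff(u)) / h
        w = np.clip(1.0 / np.maximum(grad, 1e-30), eps, 1.0 / eps)
    return u

# Noisy step signal: the weights collapse to eps at the jump, preserving the edge.
rng = np.random.default_rng(1)
g = np.sign(np.linspace(-1, 1, 64)) + 0.1 * rng.normal(size=64)
u = irls_tv_denoise(g)
```

The weight update is exactly the clipping rule of Algorithm 3.14; only the assembly step differs from the DG version.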

Chapter 4

    Numerical Results

In this chapter, we present some simulation results obtained with the DG-FEM. In particular, we show results for our model problem with Dirichlet boundary conditions and with Neumann boundary conditions. We present numerical convergence rates for the L2-norm of the error for a smooth exact solution. We continue with results of the IRLS, particularly for a denoising problem, by presenting results for the convergence of the algorithm with varying regularization parameter and mesh size.

    4.1 DG for Diffusion Problems

4.1.1 Dirichlet Problem

Let us consider the Dirichlet boundary value problem

    −∇ · (w∇u) + λ̃u = λ̃g   in Ω,
    u = 0   on ∂Ω,      (4.1)

with Ω = (−1, 1)² and g(x, y) = −1 if x, y < 0, g(x, y) = 1 if x, y > 0, and g(x, y) = 0 otherwise. Discretizing problem (4.1) by means of the SIPG method with penalty parameter κ = 10 as described in Section 3.1, we obtain the results displayed in Figure 4.1. More precisely, we set w = 1. The plots show that increasing the regularization parameter λ̃ yields solutions that approach g.

4.1.2 Neumann Problem

Let us consider the Neumann boundary value problem

    −∇ · (w∇u) + λ̃u = λ̃g   in Ω,
    ∇u · ~ν = 0   on ∂Ω,      (4.2)

with Ω = (−1, 1)² and g(x, y) = −1 if x, y < 0, g(x, y) = 1 if x, y > 0, and g(x, y) = 0 otherwise. Discretizing problem (4.2) by means of the SIPG method with penalty parameter κ = 10 as described in Section 3.1, we obtain the results displayed in Figure 4.2. More precisely,



Figure 4.1: DG solution of Dirichlet problem with penalty parameter κ = 10; λ̃ = 10 (left) and λ̃ = 1000 (right).

we set w = 1. The plots show that increasing the regularization parameter λ̃ yields solutions that approach g.

Figure 4.2: DG solution of Neumann problem with penalty parameter κ = 10; λ̃ = 10 (left) and λ̃ = 1000 (right).

4.1.3 Poisson Problem with Known Solution

Let us consider the Poisson problem

    −Δu = g   in Ω,
    u = 0   on ∂Ω,      (4.3)

on the square Ω = (−1, 1)². We choose g such that the analytical solution of the problem is given by u(x, y) = sin(πx) sin(πy).
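For this choice of u, the corresponding right-hand side is g = 2π² sin(πx) sin(πy). A quick finite difference check (independent of the DG solver; the grid size is an arbitrary choice) confirms both the formula for g and the homogeneous boundary values:

```python
import numpy as np

# Verify -Δu = 2π² sin(πx) sin(πy) for u = sin(πx) sin(πy) on Ω = (-1,1)²
# using second-order central differences (a sanity check, not the DG method).
N = 401
h = 2.0 / (N - 1)
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(np.pi * X) * np.sin(np.pi * Y)
g = 2.0 * np.pi**2 * u

lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
       + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / h**2
residual = np.max(np.abs(-lap[1:-1, 1:-1] - g[1:-1, 1:-1]))  # O(h²), small
boundary = max(np.max(np.abs(u[0])), np.max(np.abs(u[-1])))  # u = 0 on ∂Ω
```

Such a manufactured solution is what makes the convergence study in Table 4.1 possible, since the exact error u − u_h is computable.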


Figure 4.3: DG solution of Poisson problem with penalty parameter κ = 10 (left) and error for the L2-norm (right).

    n      h       e_n = ‖u − u_h‖     β
    1     1/2     1.4358 × 10⁻¹        –
    2     1/4     5.3246 × 10⁻²       2.696
    3     1/8     1.5578 × 10⁻²       3.418
    4     1/16    4.1438 × 10⁻³       3.759
    5     1/32    1.0628 × 10⁻³       3.899
    6     1/64    2.6871 × 10⁻⁴       3.955
    7     1/128   6.7532 × 10⁻⁵       3.979
    8     1/256   1.6928 × 10⁻⁵       3.989
    9     1/512   4.2367 × 10⁻⁶       3.996

Table 4.1: Numerical error estimates for a smooth function.

Table 4.1 shows the numerical error in the L2-norm for equation (4.3). The convergence rate, measured by the ratio

    β = e_n / e_{n+1},

is O(h²) as predicted by the theory. This implies that halving the mesh size reduces the error by a factor of about four, and indeed Table 4.1 demonstrates that as n increases, the rate β approaches the factor 4. The error in the L2 norm plotted against the mesh size is shown in Figure 4.3, using a penalty parameter of κ = 10.


    4.2 Iteratively Reweighted Least Squares Algorithm

4.2.1 Denoising Problem

For studying the numerical results of the iteratively reweighted least squares (IRLS) Algorithm 3.14, we use the regularization functional

    J̃(u, w) = (λ̃/2) ‖u − G‖²_{L2(Ω)} + ∫_Ω ( w |∇u|² + 1/w ) dx.      (4.4)

The functional (4.4) is called a denoising functional, where G = g + η, λ̃ > 0, and η is uniformly distributed noise. The corresponding Euler–Lagrange equation is given by

    −∇ · (w∇u) + λ̃u = λ̃G   in Ω,
    w∇u · ~ν = 0   on ∂Ω.      (4.5)

Concerning the numerical work involved in the IRLS algorithm, each iteration requires solving the Euler–Lagrange equation and using the solution to compute new gradient weights w. Figure 4.4 shows the results of 20 iterations of the IRLS algorithm. We set the regularization parameter λ̃ = 10², the penalty parameter κ = 10, and ε = 10⁻² as in Algorithm 3.14. As the results indicate, there is a sharp improvement in the solution within the first 10 iteration steps. The solution then remains essentially unchanged until the total number of iterations is reached. This means that an approximate solution is reached after a few iterations.

    J(u)
    n      h = 1/8    h = 1/16   h = 1/32   h = 1/64   h = 1/128
    1      45.3461    44.2652    51.2620    63.7826    71.7443
    2      43.3240    38.5002    49.8223    58.2257    67.1117
    3      38.6976    41.6775    47.8067    55.4609    63.6222
    5      41.0890    40.0938    45.4033    52.1332    58.6105
    10     42.6990    40.4148    42.9642    47.8849    53.0425
    20     39.7064    38.1429    40.7648    45.4784    48.5690
    30     39.9341    37.2609    41.3725    43.1490    46.5024
    40     38.1215    38.7159    40.3228    43.0523    44.8862
    50     42.8382    39.6733    40.8136    43.2327    44.2599

Table 4.2: Results for the functional J(u) from (4.6) for 50 iterations and decreasing mesh size.

Table 4.2 shows the convergence behaviour of the iteratively reweighted least squares algorithm. For a fixed λ̃ = 10⁴ and penalty parameter κ = 10, the denoising problem (4.5) is solved for 50 iterations with the iteratively reweighted least squares algorithm. The solution is used to compute the functional

    J(u⁽ⁿ⁾) = (λ̃/2) ‖u⁽ⁿ⁾ − G‖²_{L2(Ω)} + ∫_Ω |∇u⁽ⁿ⁾|      (4.6)


Figure 4.4: Results of the iteratively reweighted least squares algorithm at iterations 1 and 2 (top), iterations 3 and 5 (middle), and iterations 10 and 20 (bottom).

    for varying mesh sizes h.
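A discrete evaluation of (4.6) on a uniform grid can be sketched as follows (forward differences on a pixel grid; the grid, data and parameters are illustrative, not the DG quadrature used for the tables):

```python
import numpy as np

def tv_functional(u, G, lam, h):
    """Evaluate J(u) = (lam/2)||u - G||² + ∫|∇u| with forward differences."""
    fidelity = 0.5 * lam * np.sum((u - G) ** 2) * h * h
    ux = np.diff(u, axis=0) / h                  # x-differences
    uy = np.diff(u, axis=1) / h                  # y-differences
    tv = np.sum(np.sqrt(ux[:, :-1] ** 2 + uy[:-1, :] ** 2)) * h * h
    return fidelity + tv

# Piecewise constant test image with a single horizontal jump.
G = np.zeros((16, 16))
G[8:, :] = 1.0
J = tv_functional(G, G, lam=100.0, h=1.0 / 16)   # fidelity vanishes at u = G
```

At u = G the value reduces to the discrete total variation of the jump, which is what the TV term is designed to measure.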

In Table 4.3, the results for different regularization parameters λ̃ are computed. We set ε = 10⁻¹.


    J(u)
    n      λ̃ = 10²    λ̃ = 10³    λ̃ = 10⁴
    1      35.7395     46.7507     63.1128
    2      35.5685     44.2403     56.8716
    3      34.9391     42.6369     54.2316
    5      34.7120     41.4692     51.4730
    10     35.1925     39.2793     47.5556
    20     34.6311     38.8595     44.6642
    30     34.5625     38.1292     43.6768
    40     34.5749     37.9654     42.9581
    50     34.8817     38.2415     42.1900

Table 4.3: Convergence results for a fixed mesh h = 2⁻⁶ computed for 50 iterations with different λ̃.

Figure 4.5 shows the value of J(u⁽ⁿ⁾) for the first 50 iterations for λ̃ = 10² (top), λ̃ = 10³ (middle) and λ̃ = 10⁴ (bottom). It can be observed that the functional J(u⁽ⁿ⁾) converges to a minimum as the iteration count increases. The minimum is achieved and remains almost the same for the regularization parameters 10³ and 10⁴. As the regularization parameter increases, the solution converges, while a decrease in ε (figures from left to right) does not make any significant difference. When λ̃ = 10², the functional J(u⁽ⁿ⁾) behaves irregularly for both ε = 10⁻² and ε = 10⁻¹. This means the regularization parameter λ̃ must be larger than 10² to achieve good results.

4.2.2 Diffusion Problem, TV-Minimization Problem and IRLS

We consider an example of our model problem (4.5). In this example, we set the noise η = 0, ε = 10⁻², κ = 10 and the regularization parameter λ̃ = 100. In Figure 4.6, the results for some iterations are plotted. The results show the ability of the IRLS algorithm to yield sharp edges for problems without noise.


Figure 4.5: Convergence of the regularization functional plotted against the number of iterations. The results on the left correspond to ε = 10⁻¹ and on the right to ε = 10⁻³. The plots show the values of J(u⁽ⁿ⁾) for the first 50 iterations for λ̃ = 10² (top), λ̃ = 10³ (middle) and λ̃ = 10⁴ (bottom).


Figure 4.6: Results of the IRLS for the varying diffusion coefficients at iterations 1 and 3 (top) and iterations 10 and 20 (bottom).

Chapter 5

    Conclusion and Outlook

5.1 Conclusion

In this thesis, we solved a total variation minimization problem by using a discontinuous Galerkin finite element method. In Chapter 1, we considered the Rudin, Osher and Fatemi (ROF) model. In general, the first order optimality condition of the model yields an ill-posed non-linear second order partial differential equation. Considering the space of functions of bounded variation and some of its relevant properties, we showed that existence and uniqueness of minimizers of the ill-posed functional can be achieved by a relaxation algorithm called the iteratively reweighted least squares algorithm, which is also referred to as the double minimization algorithm in some books and articles.

The well-posed second order partial differential equation obtained from the iteratively reweighted least squares algorithm is discretized by a standard discontinuous Galerkin (DG) finite element method. This is particularly useful since the space of bounded variation functions contains discontinuous functions. We presented some theory on the primal formulation of a discontinuous Galerkin finite element method. We showed the existence and uniqueness of the DG solution and also presented a priori error estimates in the energy norm and the L2-norm.

Finally, we discussed some numerical results obtained by the DG discretization method. In Chapter 4, two examples are discussed and the L2 error is computed for a problem with known solution. Results from the iteratively reweighted least squares algorithm are also presented with an application to a denoising problem. The rate of convergence of the iteratively reweighted least squares algorithm is also presented.

5.2 Outlook

This thesis provides a starting point for establishing theoretical results for a discontinuous Galerkin finite element method for solving total variation minimization problems. The



DG discretization has a large number of degrees of freedom and requires efficient solvers. The work can be continued in the following ways:

1. Robust multilevel solvers. At each step of the IRLS, we have to solve a diffusion problem with diffusion coefficient w varying between ε and 1/ε. Here we need robust iterative solvers and an appropriate stopping criterion in order to achieve optimal results. One candidate for such a robust solver is the algebraic multilevel solver proposed by Kraus and Tomar [31] and Kraus [29].

2. Harmonic mean for the DG formulation. In the DG formulation, the harmonic mean can be used for averaging the flux. This approach is particularly useful since the diffusion coefficients can be discontinuous [18].

3. Kraus–Tomar assembling. The assembling process can also be improved by reducing the total number of degrees of freedom. The Kraus–Tomar assembling of the discontinuous Galerkin finite element method reduces the total number of degrees of freedom compared to the standard assembling. It is useful since the coefficients (gradient weights) can be discontinuous [31].

4. Analysis and simulation of 3D problems. The work can be extended to higher dimensions.

List of Notations and Function Spaces

Notation

    N, R, Rⁿ        natural numbers, real numbers, n-dimensional Euclidean space
    L(X, Y )        space of bounded linear operators between Banach spaces X and Y
    X*              dual space of the Banach space X
    V ⊂ U           V is a subset of U, for Banach spaces U and V ; it also denotes continuous embedding, i.e. the identity mapping I : V → U is continuous
    V ⊂⊂ U          V is compactly contained in U, i.e. V ⊂ U and V̄ is compact, for Banach spaces U and V ; it also denotes compact embedding, i.e. I : V → U is compact
    1_V             characteristic function of the set V, i.e. 1_V (x) = 1 if x ∈ V, 0 otherwise
    Du              distributional derivative of u
    div g           divergence of the vector-valued function g
    ∇u              classical or weak gradient of the scalar function u
    X_V             characteristic function of the set V in the sense of convex analysis
    ∂V              boundary of the set V
    K*              adjoint in L(Y*, X*) of K ∈ L(X, Y )
    |x|_ℓp          ℓp norm of x ∈ Rⁿ, 1 ≤ p ≤ ∞
    |α|, |x|        modulus of α ∈ R or Euclidean norm of x ∈ Rⁿ
    Id              identity matrix, Id ∈ Rⁿˣⁿ
    ‖v‖_V           norm of v in the Banach space V
    O               Landau symbol
    Δ               Laplacian operator, Δu = Σ_{i=1}^{n} ∂²u/∂x_i², for u : Rⁿ → R

Function Spaces

Let Ω ⊂ Rⁿ be open. See [21] as a general reference.



    C(Ω, Rⁿ)        continuous functions on Ω with values in Rⁿ; denoted by C(Ω) for n = 1
    C0(Ω, Rⁿ)       functions in C(Ω, Rⁿ) with compact support in Ω
    Cᵏ(Ω, Rⁿ)       k times continuously differentiable functions on Ω with values in Rⁿ; denoted by Cᵏ(Ω) for n = 1
    Cᵏ₀(Ω, Rⁿ)      functions in Cᵏ(Ω, Rⁿ) with compact support in Ω
    C∞(Ω, Rⁿ)       infinitely differentiable functions on Ω with values in Rⁿ; denoted by C∞(Ω) for n = 1
    C∞₀(Ω, Rⁿ)      functions in C∞(Ω, Rⁿ) with compact support in Ω; denoted by C∞₀(Ω) for n = 1
    Lᵖ(Ω, Rⁿ)       Lebesgue space, 1 ≤ p ≤ ∞, on Ω with values in Rⁿ; denoted by Lᵖ(Ω) for n = 1
    W^{k,p}(Ω)      Sobolev space of functions with k-th order weak derivatives in Lᵖ(Ω)
    W^{k,p}₀(Ω)     closure of C∞₀(Ω) in W^{k,p}(Ω)
    Hᵏ(Ω), Hᵏ₀(Ω)   abbreviations for the Hilbert spaces W^{k,2}(Ω), W^{k,2}₀(Ω)
    BV (Ω)          space of functions of bounded variation on Ω

Bibliography

    [1] R. Adams, Sobolev Spaces. Academic Press, New York, 1975.

[2] L. Ambrosio, N. Fusco and D. Pallara, Functions of Bounded Variation and Free Discontinuity Problems. Oxford Mathematical Monographs, Oxford University Press, 2000.

[3] D. Arnold, An interior penalty finite element method with discontinuous elements. SIAM J. Numer. Anal., 19, pp. 742–760, 1982.

[4] D. N. Arnold, F. Brezzi, B. Cockburn and L. D. Marini, Unified Analysis of Discontinuous Galerkin Methods for Elliptic Problems. SIAM Journal on Numerical Analysis, Vol. 39/5, pp. 1749–1779, 2002.

[5] G. Aubert and P. Kornprobst, Mathematical Problems in Image Processing: Partial Differential Equations and the Calculus of Variations. Springer, 2002.

[6] C. E. Baumann and J. T. Oden, A discontinuous hp finite element method for convection-diffusion problems. Comput. Methods Appl. Mech. Engrg., 175, pp. 311–341, 1999.

[7] D. Braess, Finite Elemente: Theorie, schnelle Löser und Anwendungen in der Elastizitätstheorie. 2. Auflage, Springer, 1996.

[8] S. C. Brenner and L. R. Scott, The Mathematical Theory of Finite Element Methods. Springer, 2008.

[9] M. Burger and S. Osher, A Guide to the TV Zoo I: Models and Analysis. In Level Set and PDE-Based Reconstruction Methods, Springer, 2009.

[10] A. Chambolle, An algorithm for total variation minimization and applications. J. Math. Imaging Vision, 20(1–2), pp. 89–97, 2004.

[11] A. Chambolle and P.-L. Lions, Image recovery via Total Variation Minimization and Related Problems. J. Math. Anal. Appl., Vol. 276/2, pp. 845–876, 2002.

[12] T. F. Chan, G. H. Golub and P. Mulet, A nonlinear primal-dual method for total variation based image restoration. SIAM J. Sci. Comput., 20(6), pp. 1964–1977, 1999.



[13] T. F. Chan and P. Mulet, Iterative methods for total variation restoration. CAM report 96-38, UCLA, USA; see http://www.math.ucla.edu/applied/cam/index.html, 1996.

    [14] T. F. Chan and J. Shen, Image Processing And Analysis: Variational, PDE,Wavelet and Stochastic Methods. SIAM 2005.

[15] K. Chen and X.-C. Tai, A Nonlinear Multigrid Method for Total Variation Minimization from Image Restoration. J. Sci. Comput., Vol. 33/2, pp. 115–138, 2007.

    [16] P. G. Ciarlet, The Finite Element Method for Elliptic Problems. Classics in Applied Mathematics, Vol. 40, SIAM, Philadelphia, 2002.

    [17] I. Daubechies, R. DeVore, M. Fornasier and C. S. Güntürk, Iteratively Reweighted Least Squares Minimization for Sparse Recovery. Communications on Pure and Applied Mathematics, Vol. 63, pp. 1–38, 2010.

    [18] B. Ayuso de Dios, M. Holst, Y. Zhu and L. Zikatanov, Multilevel Preconditioners for Discontinuous Galerkin Approximations of Elliptic Problems with Jump Coefficients. arXiv:1012.1287v1 [math.NA], 2010.

    [19] V. A. Dobrev, Preconditioning of Discontinuous Galerkin Method for Second Order Elliptic Problems. PhD Thesis, Texas A&M University, 2007.

    [20] H. W. Engl, M. Hanke and A. Neubauer, Regularization of Inverse Problems. Mathematics and Its Applications, Vol. 375, Kluwer, 2000.

    [21] L. C. Evans, Partial Differential Equations. Graduate Studies in Mathematics, American Mathematical Society, 1998.

    [22] L. C. Evans and R. F. Gariepy, Measure Theory and Fine Properties of Functions. CRC Press, Boca Raton–Ann Arbor–London, 1992.

    [23] M. Fornasier, Domain decomposition methods for linear inverse problems with sparsity constraints. Inverse Problems, Vol. 23, pp. 2505–2526, 2007.

    [24] M. Fornasier (ed.), Numerical Methods for Sparse Recovery. Radon Series on Computational and Applied Mathematics, de Gruyter, 2010.

    [25] M. Fornasier and R. March, Restoration of Color Images by Vector Valued BV Functions and Variational Calculus. SIAM J. Appl. Math., Vol. 68/2, pp. 437–460, 2007.

    [26] E. Giusti, Minimal Surfaces and Functions of Bounded Variation. Birkhäuser, Boston, 1984.

    [27] T. Goldstein and S. Osher, The split Bregman method for L1 regularized problems. SIAM Journal on Imaging Sciences, Vol. 2/2, pp. 323–343, 2009.

    [28] C. Grossmann, H.-G. Roos and M. Stynes, Numerical Treatment of Partial Differential Equations. Springer-Verlag, Berlin Heidelberg, 2007.

    [29] J. Kraus, Additive Schur complement approximation and application to multilevel preconditioning. RICAM Report No. 2011-22, Linz.

    [30] J. Kraus and S. Margenov, Robust Algebraic Multilevel Methods and Algorithms. Radon Series Comp. Appl. Math., Vol. 5, Walter de Gruyter, Berlin–New York, 2009.

    [31] J. Kraus and S. Tomar, Multilevel Preconditioning of Elliptic Problems Discretized by a Class of Discontinuous Galerkin Methods. RICAM Report No. 2006-36, Linz.

    [32] C. L. Lawson, Contributions to the Theory of Linear Least Maximum Approximation. Ph.D. thesis, University of California, Los Angeles, 1961.

    [33] Y. Meyer, Oscillating patterns in image processing and nonlinear evolution equations. American Mathematical Society, 2001.

    [34] S. E. Moore, Discontinuous Galerkin for Total Variation Minimization. Project Seminar, Summer Semester, Johannes Kepler Universität, Linz, 2010.

    [35] J. T. Oden, I. Babuška and C. E. Baumann, A discontinuous hp finite element method for diffusion problems. J. Comput. Phys., Vol. 146, pp. 491–519, 1998.

    [36] C. Ortner, A Non-conforming Finite Element Method for Convex Optimization Problems. http://www2.maths.ox.ac.uk/oxmos/, OxMOS: New Frontiers in the Mathematics of Solids, 2008.

    [37] S. Osher, M. Burger, D. Goldfarb, J. Xu and W. Yin, An iterative regularization method for total variation-based image restoration. Multiscale Model. Simul., Vol. 4/2, pp. 460–489, 2005.

    [38] S. Osher and A. Marquina, Explicit algorithms for a new time dependent model based on level set motion for nonlinear deblurring and noise removal. SIAM J. Sci. Comput., Vol. 22/2, pp. 387–405, 2000.

    [39] B. Rivière, Discontinuous Galerkin Methods for Solving Elliptic and Parabolic Equations: Theory and Implementation. SIAM, 2008.

    [40] L. I. Rudin, S. J. Osher and E. Fatemi, Nonlinear total variation based noise removal algorithms. Physica D, Vol. 60, pp. 259–268, 1992.

    [41] C. R. Vogel, Computational methods for inverse problems. SIAM Publications, USA, 2002.

    [42] C. R. Vogel, A multigrid method for total variation-based image denoising. In Bowers, K., Lund, J. (eds.), Computation and Control IV, Progress in Systems and Control Theory, Vol. 20, Birkhäuser, Boston, 1995.

    [43] C. R. Vogel and M. E. Oman, Iterative methods for total variation denoising. Special issue on iterative methods in numerical linear algebra (Breckenridge, CO, 1994). SIAM J. Sci. Comput., Vol. 17/1, pp. 227–238, 1996.

    [44] M. F. Wheeler, An elliptic collocation-finite element method with interior penalties. SIAM J. Numer. Anal., Vol. 15, pp. 152–161, 1978.

    Statutory Declaration (Eidesstattliche Erklärung)

    I, Stephen Edward Moore, declare under oath that I have written this master's thesis independently and without outside assistance, that I have used no sources or aids other than those indicated, and that I have marked as such all passages taken verbatim or in substance from other works.

    Linz, August 2011

    ————————————————
    Stephen Edward Moore