The Mathematics of X-ray Tomography

Tatiana A. Bubba
Department of Mathematics and Statistics, University of Helsinki
[email protected]

Summer School on Very Finnish Inverse Problems
Helsinki, June 3-7, 2019

Finnish Centre of Excellence in Inverse Modelling and Imaging 2018-2025



TSVD as Spectral Filtering

We can also regard TSVD as the result of a filtering operation, namely:

$$
f_{\mathrm{TSVD}} = \sum_{i=1}^{r} \frac{u_i^T y^\delta}{\sigma_i}\, v_i
= \sum_{i=1}^{\min(m,n)} \varphi_i^{\mathrm{TSVD}}\, \frac{u_i^T y^\delta}{\sigma_i}\, v_i
$$

where $r$ is the truncation parameter and

$$
\varphi_i^{\mathrm{TSVD}} =
\begin{cases}
1 & i = 1, \dots, r \\
0 & \text{elsewhere}
\end{cases}
$$

are the filter factors associated with the method.

These are called spectral filtering methods because the SVD basis can be regarded as a spectral basis: the vectors $v_i$ and $u_i$ are the eigenvectors of $K^T K$ and $K K^T$, respectively.
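As an illustrative sketch of the filtering view above, the following toy NumPy snippet applies the TSVD filter factors to a small, badly conditioned Hilbert matrix. The matrix, the noise level and the truncation level $r$ are illustrative choices, not from the slides:

```python
import numpy as np

# Toy ill-conditioned problem: a 20x20 Hilbert matrix (illustrative choice).
n = 20
K = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
f_true = np.ones(n)
rng = np.random.default_rng(0)
y_delta = K @ f_true + 1e-6 * rng.standard_normal(n)

U, s, Vt = np.linalg.svd(K)

r = 5  # truncation parameter (illustrative)
phi = np.zeros(n)
phi[:r] = 1.0  # filter factors: 1 for i = 1..r, 0 elsewhere

# f_TSVD = sum_i phi_i * (u_i^T y_delta / sigma_i) * v_i
f_tsvd = Vt.T @ (phi * (U.T @ y_delta) / s)

# The naive solve amplifies the noise in the small-sigma directions.
f_naive = np.linalg.solve(K, y_delta)
```

Even a crude truncation level keeps the reconstruction bounded, whereas the naive solve is dominated by amplified noise.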

Tatiana Bubba The Mathematics of X-ray Tomography Very Finnish IP2019


The Tikhonov Method

Let’s now consider the following filter factors:

$$
\varphi_i^{\mathrm{TIKH}} =
\begin{cases}
\dfrac{\sigma_i^2}{\sigma_i^2 + \alpha} & i = 1, \dots, \min(m,n) \\
0 & \text{elsewhere}
\end{cases}
$$

which yield the reconstruction method:

$$
f_{\mathrm{TIKH}} = \sum_{i=1}^{\min(m,n)} \varphi_i^{\mathrm{TIKH}}\, \frac{u_i^T y^\delta}{\sigma_i}\, v_i
= \sum_{i=1}^{\min(m,n)} \frac{\sigma_i\, (u_i^T y^\delta)}{\sigma_i^2 + \alpha}\, v_i.
$$

This choice of filters results in a regularization technique called the Tikhonov method, and $\alpha > 0$ is the so-called regularization parameter.

The parameter α acts in the same way as the parameter r in the TSVDmethod: it controls which SVD components we want to damp or filter.
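For comparison with TSVD, here is the same toy setup with the Tikhonov filter factors (again, the matrix, noise and $\alpha$ are illustrative choices); the last line checks the filtered sum against the closed form $(K^T K + \alpha\,\mathbb{1})^{-1} K^T y^\delta$:

```python
import numpy as np

# Same toy ill-conditioned problem: a Hilbert matrix (illustrative choice).
n = 20
K = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
f_true = np.ones(n)
rng = np.random.default_rng(1)
y_delta = K @ f_true + 1e-6 * rng.standard_normal(n)

U, s, Vt = np.linalg.svd(K)

alpha = 1e-6  # regularization parameter (illustrative)
phi = s**2 / (s**2 + alpha)  # Tikhonov filter factors damp small-sigma components

f_tikh = Vt.T @ (phi * (U.T @ y_delta) / s)

# Equivalent closed form via the regularized normal equations.
f_check = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ y_delta)
```

Unlike TSVD's hard cutoff, the Tikhonov filters decay smoothly from 1 to 0 as $\sigma_i$ falls below $\sqrt{\alpha}$.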


Tikhonov Regularization

Just as the SVD gives the solution of the least squares problem, Tikhonov regularization can be understood as the solution of a minimization problem:

$$
f_{\mathrm{TIKH}} = \operatorname*{argmin}_{f} \left\{ \left\| Kf - y^\delta \right\|_2^2 + \alpha \left\| f \right\|_2^2 \right\}.
$$

This problem is motivated by the fact that we clearly want $\|Kf - y^\delta\|_2^2$ to be small, but we also wish to avoid that it becomes zero. Indeed, by taking the Moore-Penrose solution $f^\dagger$ we would have

$$
\|f^\dagger\|_2^2 = \sum_{i=1}^{k} \frac{(u_i^T y^\delta)^2}{\sigma_i^2},
$$

which could become unrealistically large when the magnitude of the noise in some direction $u_i$ greatly exceeds the magnitude of the singular value $\sigma_i$.

The above minimization problem ensures that both the norm of the residual $K f_{\mathrm{TIKH}} - y^\delta$ and the norm of the solution $f_{\mathrm{TIKH}}$ are somewhat small, and $\alpha$ balances the trade-off between the two terms.


Normal Equation and Stacked Form for Tikhonov Regularization

The Tikhonov solution can also be formulated as a linear least squares problem:

$$
f_{\mathrm{TIKH}} = \operatorname*{argmin}_{f} \left\| \begin{bmatrix} K \\ \sqrt{\alpha}\,\mathbb{1} \end{bmatrix} f - \begin{bmatrix} y^\delta \\ 0 \end{bmatrix} \right\|_2^2.
$$

This is called the stacked form. If we denote $\bar{K} = \begin{bmatrix} K \\ \sqrt{\alpha}\,\mathbb{1} \end{bmatrix}$ and $\bar{y}^\delta = \begin{bmatrix} y^\delta \\ 0 \end{bmatrix}$, then the least squares solution of the stacked form satisfies the normal equations:

$$
\bar{K}^T \bar{K} f = \bar{K}^T \bar{y}^\delta.
$$

It is easy to check that

$$
\bar{K}^T \bar{K} = K^T K + \alpha\,\mathbb{1} \qquad \text{and} \qquad \bar{K}^T \bar{y}^\delta = K^T y^\delta.
$$

Hence we also have

$$
f_{\mathrm{TIKH}} = (K^T K + \alpha\,\mathbb{1})^{-1} K^T y^\delta.
$$
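A quick numerical sanity check of this equivalence on a random toy problem (the sizes, $\alpha$ and the data are illustrative): the stacked least squares solution and the closed formula agree.

```python
import numpy as np

# Toy overdetermined problem (all values illustrative).
rng = np.random.default_rng(2)
m, n = 30, 20
K = rng.standard_normal((m, n))
y_delta = rng.standard_normal(m)
alpha = 0.1

# Stacked operator [K; sqrt(alpha) * I] and stacked data [y_delta; 0].
K_bar = np.vstack([K, np.sqrt(alpha) * np.eye(n)])
y_bar = np.concatenate([y_delta, np.zeros(n)])

# Least squares solution of the stacked form ...
f_stacked, *_ = np.linalg.lstsq(K_bar, y_bar, rcond=None)

# ... equals the closed formula (K^T K + alpha * I)^{-1} K^T y_delta.
f_formula = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ y_delta)
```

In practice the stacked form is often preferable numerically, since it avoids forming $K^T K$, which squares the condition number.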


Comparison of Reconstructions

[Figure: original phantom and reconstructions, with relative errors:
naive reconstruction (Moore-Penrose pseudoinverse) $f^\dagger$: RE = 100%;
truncated SVD regularization $f_{\mathrm{TSVD}}$: RE = 35%;
Tikhonov regularization $f_{\mathrm{TIKH}}$: RE = 32%.]


Where the Tikhonov Solution Stands in the Geometry of Ill-Conditioned Problems

[Diagram: object space $\mathbb{R}^n = \operatorname{span}\{v_1, \dots, v_n\}$ and data space $\mathbb{R}^m = \operatorname{span}\{u_1, \dots, u_m\}$; the true object $f_{\mathrm{true}}$ maps to the data $y = K f_{\mathrm{true}}$, the noisy data is $y^\delta = y + \delta$, and the reconstructions $f^\dagger = K^\dagger y^\delta$, $f_{\mathrm{TSVD}}$ and $f_{\mathrm{TIKH}}$ lie at different distances from $f_{\mathrm{true}}$.]


About the Regularization Parameter

By looking at the minimization problem formulation of the Tikhonov solution

$$
f_{\mathrm{TIKH}} = \operatorname*{argmin}_{f} \left\{ \left\| Kf - y^\delta \right\|_2^2 + \alpha \left\| f \right\|_2^2 \right\}
$$

it is clear that:

- a large $\alpha$ results in strong regularization and possible oversmoothing;
- a small $\alpha$ yields a good fit, with the risk of overfitting.

In general, choosing the regularization parameter for an ill-posed problem is not a trivial task and there are no rules of thumb. Usually, it takes a combination of good heuristics and prior knowledge of the noise in the observations.

Delving into this is beyond our scope, but there are methods in the literature (Morozov's discrepancy principle, generalized cross validation, the L-curve criterion), as well as more recent approaches tailored to specific problems.
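As one concrete example of such a heuristic, the sketch below applies Morozov's discrepancy principle to a toy Tikhonov problem: among a grid of candidate $\alpha$'s, pick the largest one whose residual does not exceed the noise level. The matrix, the noise, the grid and the safety factor 1.1 are all illustrative assumptions, and the noise magnitude is assumed known:

```python
import numpy as np

# Toy ill-conditioned problem (illustrative Hilbert matrix and noise).
n = 20
K = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
f_true = np.ones(n)
rng = np.random.default_rng(3)
noise = 1e-4 * rng.standard_normal(n)
y_delta = K @ f_true + noise
delta = np.linalg.norm(noise)  # noise magnitude, assumed known

def tikhonov(alpha):
    return np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ y_delta)

# Discrepancy principle: largest alpha whose residual stays within ~delta.
alphas = np.logspace(-12, 0, 50)
chosen = max((a for a in alphas
              if np.linalg.norm(K @ tikhonov(a) - y_delta) <= 1.1 * delta),
             default=alphas[0])
f_morozov = tikhonov(chosen)
```

The rationale: there is no point fitting the data more accurately than the noise allows, so one stops at the residual level $\approx \delta$.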


Influence of the Choice of α in Tikhonov Regularization

[Figure: original phantom and Tikhonov reconstructions $f_{\mathrm{TIKH}}$ for $\alpha = 10^3, 10^2, 10, 1, 10^{-1}, 10^{-2}, 10^{-3}$.]


Generalized Tikhonov Regularization

Sometimes we have a priori information about the solution of the inverse problem. This can be incorporated into the minimization formulation of the Tikhonov method. For instance:

- $f$ is close to a known $f^*$:
$$
f_{\mathrm{GTIKH}} = \operatorname*{argmin}_{f} \left\{ \|Kf - y^\delta\|_2^2 + \alpha \|f - f^*\|_2^2 \right\}
$$
- $f$ is known to be smooth:
$$
f_{\mathrm{GTIKH}} = \operatorname*{argmin}_{f} \left\{ \|Kf - y^\delta\|_2^2 + \alpha \|Lf\|_2^2 \right\}
$$
- $f$ has similar smoothness properties to $f^*$:
$$
f_{\mathrm{GTIKH}} = \operatorname*{argmin}_{f} \left\{ \|Kf - y^\delta\|_2^2 + \alpha \|L(f - f^*)\|_2^2 \right\}
$$

where $L$ is a suitable operator.


Generalized Tikhonov Regularization

A common choice for generalized Tikhonov regularization is to take $L$ as a discretized differential operator. For example, using forward differences:

$$
L = \frac{1}{\Delta s}
\begin{bmatrix}
-1 & 1 & 0 & 0 & \cdots & 0 \\
0 & -1 & 1 & 0 & \cdots & 0 \\
\vdots & & \ddots & \ddots & & \vdots \\
0 & \cdots & 0 & -1 & 1 & 0 \\
0 & \cdots & 0 & 0 & -1 & 1 \\
1 & \cdots & 0 & 0 & 0 & -1
\end{bmatrix}
$$

where $\Delta s$ is the length of the discretization interval.

This choice promotes smoothness in the reconstruction.
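This $L$ (with the wrap-around last row) can be sketched in NumPy and used in a generalized Tikhonov solve via the normal equations $(K^T K + \alpha L^T L) f = K^T y^\delta$; the problem data below are random illustrative values:

```python
import numpy as np

n, ds = 8, 1.0  # grid size and interval length (illustrative)

# Periodic forward-difference matrix: -1 on the diagonal, 1 on the
# superdiagonal, and a wrap-around 1 in the bottom-left corner.
L = (np.diag(-np.ones(n)) + np.diag(np.ones(n - 1), k=1)) / ds
L[-1, 0] = 1.0 / ds

# Toy generalized Tikhonov problem (illustrative data and alpha).
rng = np.random.default_rng(4)
K = rng.standard_normal((12, n))
y_delta = rng.standard_normal(12)
alpha = 0.1

# argmin ||Kf - y||^2 + alpha ||Lf||^2  <=>  (K^T K + alpha L^T L) f = K^T y
f_gtikh = np.linalg.solve(K.T @ K + alpha * L.T @ L, K.T @ y_delta)
```

Note that each row of $L$ sums to zero, so constant vectors are not penalized: only variations of $f$ pay a price.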


Variational Regularization

In general, a minimization problem of the form

$$
\Gamma_\alpha(y^\delta) = \operatorname*{argmin}_{f} \left\{ \frac{1}{2} \|Kf - y^\delta\|_2^2 + \alpha\, R(f) \right\}
$$

is called a variational formulation:

- The data fidelity (or data fitting) term $\|Kf - y^\delta\|_2^2$ keeps the estimate of the solution close to the data under the forward physical model.
- The regularization parameter $\alpha > 0$ controls the trade-off between a good fit and the requirements of the regularizer.
- $R(f)$ incorporates a priori information or assumptions on the unknown $f$. A non-exhaustive list:
  - Tikhonov regularization: $\|f\|_2^2$
  - Generalized Tikhonov regularization: $\|Lf\|_2^2$
  - Compressed sensing or sparse regularization: $\|f\|_0$, $\|f\|_1$ or $\|Lf\|_1$
  - Indicator functions of constraint sets: $\iota_{\mathbb{R}_+}(f)$
  - A combination of the above


$\ell_p$ Norms for $\mathbb{R}^n$

Let $f \in \mathbb{R}^n$. The $\ell_p$ norms for $1 \le p < \infty$ are defined by

$$
\|f\|_p = \left( \sum_{j=1}^{n} |f_j|^p \right)^{1/p}.
$$

Also important, but not a norm:

$$
\|f\|_0 = \lim_{p \to 0} \|f\|_p^p = \left| \{ j : f_j \neq 0 \} \right|.
$$

The $\ell_0$ "norm" counts the number of nonzero components in $f$: this is used to measure sparsity.
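A tiny NumPy illustration of these definitions on a hypothetical vector:

```python
import numpy as np

def lp_norm(f, p):
    # l_p norm for 1 <= p < infinity
    return np.sum(np.abs(f) ** p) ** (1.0 / p)

f = np.array([3.0, 0.0, -4.0, 0.0])  # illustrative vector

l1 = lp_norm(f, 1)        # |3| + |-4| = 7
l2 = lp_norm(f, 2)        # sqrt(9 + 16) = 5
l0 = np.count_nonzero(f)  # the l0 "norm": 2 nonzero components
```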


Sparse Regularization

Finding the sparsest solution:

$$
\operatorname*{argmin}_{f} \left\{ \frac{1}{2} \|Kf - y^\delta\|_2^2 + \alpha \|Lf\|_0 \right\}
$$

is known as Compressed Sensing (CS). However, the problem above is NP-hard, since it requires a combinatorial search of exponential size to consider all possible supports.

Under certain conditions on $Lf$ and $K$, replacing $\ell_0$ with $\ell_1$ yields "similar" results. This relaxation leads to a convex problem:

$$
\operatorname*{argmin}_{f} \left\{ \frac{1}{2} \|Kf - y^\delta\|_2^2 + \alpha \|Lf\|_1 \right\},
$$

which is at the basis of optimization-based methods for CS.


About the Convex Relaxation

The formulation

$$
\operatorname*{argmin}_{f} \left\{ \frac{1}{2} \|Kf - y^\delta\|_2^2 + \alpha \|Lf\|_1 \right\}
$$

is more easily solvable, but still nonsmooth. Also, it is convex, but not strictly convex. So why not use Tikhonov regularization?

[Figure: level sets $|x_1|^2 + |x_2|^2 = \text{const}$ (a circle) and $|x_1| + |x_2| = \text{const}$ (a diamond); the corners of the $\ell_1$ ball lie on the coordinate axes, which is why the $\ell_1$ penalty tends to produce sparse solutions, while the $\ell_2$ penalty does not.]


Total Variation Regularization

If we take $L = \nabla$ as the discrete differentiation matrix, the variational formulation

$$
f_{\mathrm{TV}} = \operatorname*{argmin}_{f} \left\{ \frac{1}{2} \|Kf - y^\delta\|_2^2 + \alpha \|\nabla f\|_1 \right\}
$$

is called Total Variation.

Total Variation (TV) regularization promotes sparsity in the derivative; in other words, it favours piecewise constant reconstructions.

TV, first introduced for denoising problems (in 1992, the so-called ROF model), became a popular approach in many image processing tasks (including CT) due to its ability to preserve, or even favour, reconstructions with sharp edges.
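The sparsity-in-the-derivative idea can be seen numerically: below, a piecewise constant jump and a smooth ramp with the same total rise have the same discrete TV, while a noisy version of the jump pays a much larger TV penalty (all signals are illustrative):

```python
import numpy as np

def tv(f):
    # Discrete total variation: l1 norm of the forward differences.
    return np.abs(np.diff(f)).sum()

n = 100
jump = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])  # one sharp edge
ramp = np.linspace(0.0, 1.0, n)                             # same rise, smooth
rng = np.random.default_rng(5)
noisy = jump + 0.1 * rng.standard_normal(n)

tv_jump, tv_ramp, tv_noisy = tv(jump), tv(ramp), tv(noisy)
# The gradient of the jump is sparse (a single nonzero entry), yet its TV
# equals the ramp's; the noisy signal's oscillations make its TV far larger,
# which is exactly what the TV prior penalizes.
```

This is why TV removes noise without blurring the edge: the sharp jump itself is not penalized more than a smooth transition would be.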


Beyond Classical TV

- Total Generalized Variation (TGV$^k$): defines a whole family of priors depending on the order $k$ of the derivative; TGV$^2$ is suitable for piecewise smooth targets.
- Total $p$-Variation (T$p$V): $0 < p < 1$ refers to the norm; it yields a nonsmooth and nonconvex problem.
- Many, many more: Higher Order TV, Directional TV, Anisotropic TV, ...


Wavelet-based Regularization

If we take $L = W$ as the matrix associated with a certain wavelet transform, the variational formulation

$$
f_{\mathrm{WLET}} = \operatorname*{argmin}_{f} \left\{ \frac{1}{2} \|Kf - y^\delta\|_2^2 + \alpha \|Wf\|_1 \right\}
$$

promotes sparsity of the wavelet coefficients.

The idea behind wavelet-based regularization is that wavelet coefficients come with different magnitudes and the smallest ones are associated with noise. The $\ell_1$-norm suppresses the small coefficients in favour of the largest ones, which are associated with edges and the dominant features of the image.

Wavelets (widely used in image processing since the 1990s) are a very common choice in CS approaches since they model images quite adequately.


A Bit About Wavelets

Wavelets arose in the 1980s to overcome some of the limitations of Fourier analysis.

Similarly to Fourier series, the idea is to "break" a signal into building blocks, but unlike Fourier series the building blocks are localized not only in the frequency domain but also in the space domain.

[Figure: time-frequency plane tilings for the Fourier transform and for the wavelet transform.]


A Bit About Wavelets

Different families of wavelets can be generated by considering different "parents":

- the scaling function $\varphi \in L^2(\mathbb{R})$, a low-pass filter, provides a rougher version of the signal itself;
- the (mother) wavelet $\psi \in L^2(\mathbb{R})$, a high-pass filter, describes the details in the signal.

A wavelet system is generated by applying two operators to both parents:

- Isotropic dilation:
$$
D^j \psi(x) = 2^{-j/2}\, \psi(2^j x),
$$
where $j \in \mathbb{Z}$ is the scaling parameter.
- Translation:
$$
T_k \psi(x) = \psi(x - k),
$$
where $k \in \mathbb{Z}$ is the location parameter.


A Bit About Wavelets

The elements of a wavelet system are given by:

$$
\left\{ \psi_{j,k}(x) = T_k D^j \psi(x) = 2^{-j/2}\, \psi(2^j x - k) : (j,k) \in \mathbb{Z} \times \mathbb{Z} \right\}
$$

and similarly for the scaling function.

The wavelet coefficients are the result of the wavelet transform:

$$
W : f \longmapsto Wf(j,k) = \langle f, \psi_{j,k} \rangle.
$$

In practical applications these are computed using the language of filters, with convolutions and downsampling and upsampling operations.

In particular, in 2D, by considering tensor products of the scaling and wavelet functions, we get one scaling function but three wavelet functions (horizontal, vertical and diagonal):

$$
\Phi(x) = \varphi(x_1)\varphi(x_2)
$$

and

$$
\Psi^1(x) = \varphi(x_1)\psi(x_2), \quad \Psi^2(x) = \psi(x_1)\varphi(x_2), \quad \Psi^3(x) = \psi(x_1)\psi(x_2).
$$


An Example: Haar Wavelets

$$
\varphi(x) =
\begin{cases}
1 & 0 < x < 1 \\
0 & \text{elsewhere}
\end{cases}
\qquad
\psi(x) =
\begin{cases}
1 & 0 \le x < \tfrac{1}{2} \\
-1 & \tfrac{1}{2} \le x < 1 \\
0 & \text{elsewhere}
\end{cases}
$$

[Figure: graphs of the Haar scaling function $\varphi$ and wavelet $\psi$.]
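A minimal sketch of a 1-level Haar transform built from these parents: pairwise averages (low-pass) and pairwise differences (high-pass), normalized so the transform is orthonormal. The even signal length and the test signal are assumptions for illustration:

```python
import numpy as np

def haar_level1(f):
    # 1-level Haar analysis: pairwise averages (scaling) and differences (wavelet).
    f = np.asarray(f, dtype=float)
    approx = (f[0::2] + f[1::2]) / np.sqrt(2.0)
    detail = (f[0::2] - f[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_level1_inv(approx, detail):
    # Synthesis: exact inverse of the orthonormal analysis step.
    f = np.empty(2 * len(approx))
    f[0::2] = (approx + detail) / np.sqrt(2.0)
    f[1::2] = (approx - detail) / np.sqrt(2.0)
    return f

f = np.array([4.0, 4.0, 4.0, 4.0, 0.0, 0.0, 0.0, 0.0])  # piecewise constant
approx, detail = haar_level1(f)
# Within each constant pair the difference vanishes, so the detail
# coefficients are all zero: the Haar representation of f is sparse.
```

Since the transform is orthonormal, energy is preserved between the signal and its coefficients, and the synthesis step recovers the signal exactly.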


An Example: Haar Wavelet Transform of the Square Phantom

[Figure: 1-level, 2-level and 3-level Haar wavelet transforms of the square phantom.]


An Example: Haar Wavelet Transform of a Walnut

[Figure: Haar wavelet transform of a walnut image.]


Constrained Regularization

In many cases, and CT is one of them, it is beneficial to include a nonnegativity constraint in the model:

$$
\operatorname*{argmin}_{f} \left\{ \frac{1}{2} \|Kf - y^\delta\|_2^2 + \iota_{\mathbb{R}_+}(f) \right\}
\qquad \text{or} \qquad
\operatorname*{argmin}_{f \ge 0} \left\{ \frac{1}{2} \|Kf - y^\delta\|_2^2 \right\},
$$

where the inequality is meant component-wise.

The nonnegativity constraint can also be coupled with other regularizers:

- Nonnegativity-constrained Tikhonov regularization:
$$
f_{\mathrm{TIKH}}^{+} = \operatorname*{argmin}_{f \ge 0} \left\{ \frac{1}{2} \|Kf - y^\delta\|_2^2 + \alpha \|f\|_2^2 \right\}
$$
- Nonnegativity-constrained sparse regularization:
$$
\operatorname*{argmin}_{f \ge 0} \left\{ \frac{1}{2} \|Kf - y^\delta\|_2^2 + \alpha \|Lf\|_1 \right\}
$$
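One simple way to handle such constraints numerically is projected gradient descent: take a gradient step on the smooth objective and clip negative entries to zero. The sketch below does this for nonnegativity-constrained Tikhonov on a random toy problem (the step size, iteration count and data are illustrative assumptions):

```python
import numpy as np

# Toy problem with a nonnegative ground truth (all values illustrative).
rng = np.random.default_rng(6)
m, n = 30, 20
K = rng.standard_normal((m, n))
f_true = np.maximum(rng.standard_normal(n), 0.0)
y_delta = K @ f_true + 0.01 * rng.standard_normal(m)
alpha = 0.01

# Projected gradient descent on ||Kf - y||^2 + alpha ||f||^2 over {f >= 0}.
step = 1.0 / (np.linalg.norm(K, 2) ** 2 + alpha)
f = np.zeros(n)
for _ in range(2000):
    grad = K.T @ (K @ f - y_delta) + alpha * f   # gradient of the (halved) objective
    f = np.maximum(f - step * grad, 0.0)          # gradient step, then project onto f >= 0
f_tikh_plus = f
```

The projection `np.maximum(., 0)` is exactly the proximal operator of the indicator function $\iota_{\mathbb{R}_+}$, so this is a special case of the proximal methods mentioned below.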


How to Solve `1-type Problems?

- Approximating the absolute value function by
$$
|t|_\beta = \sqrt{t^2 + \beta}.
$$
Then the problem becomes smooth and we can use gradient-based minimization algorithms. This is often done for TV regularization (smoothed TV).
- Using algorithms for nonsmooth objective functions (primal-dual, forward-backward, Bregman iteration, ...). In general, these require the computation of the proximal operator and, depending on whether or not it admits an analytical closed form, the minimization problem can be rather challenging.

$f_{\mathrm{TV}}$ and $f_{\mathrm{WLET}}$ are special cases for which the proximal operator is easy and fast to compute, because it is given by the soft-thresholding operator.


Hard- and Soft-Thresholding

$$
S_\alpha(x) =
\begin{cases}
x + \dfrac{\alpha}{2} & \text{if } x \le -\dfrac{\alpha}{2} \\[4pt]
0 & \text{if } |x| < \dfrac{\alpha}{2} \\[4pt]
x - \dfrac{\alpha}{2} & \text{if } x \ge \dfrac{\alpha}{2}
\end{cases}
\qquad
H_\alpha(x) =
\begin{cases}
x & \text{if } x \le -\dfrac{\alpha}{2} \\[4pt]
0 & \text{if } |x| < \dfrac{\alpha}{2} \\[4pt]
x & \text{if } x \ge \dfrac{\alpha}{2}
\end{cases}
$$

[Figure: original signal together with its hard-thresholded version $H_\alpha$ and soft-thresholded version $S_\alpha$.]
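The two operators as defined above (with threshold $\alpha/2$), in vectorized NumPy; the sample input is illustrative:

```python
import numpy as np

def soft_threshold(x, alpha):
    # S_alpha: shrink by alpha/2, zero out everything inside (-alpha/2, alpha/2).
    return np.sign(x) * np.maximum(np.abs(x) - alpha / 2.0, 0.0)

def hard_threshold(x, alpha):
    # H_alpha: keep values with |x| >= alpha/2 unchanged, zero out the rest.
    return np.where(np.abs(x) < alpha / 2.0, 0.0, x)

x = np.array([-1.0, -0.2, 0.0, 0.2, 1.0])
s = soft_threshold(x, 1.0)  # -> [-0.5, 0, 0, 0, 0.5]
h = hard_threshold(x, 1.0)  # -> [-1, 0, 0, 0, 1]
```

Note the difference: soft-thresholding also shrinks the surviving coefficients toward zero, which is what makes it the proximal operator of the $\ell_1$ norm.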


Iterative Soft-Thresholding Algorithm (ISTA)

For instance, when $L = W$ is the matrix associated with an orthogonal wavelet transform (e.g., Haar wavelets), problems of the form

$$
\operatorname*{argmin}_{f \in \mathbb{R}^n} \left\{ \frac{1}{2} \|Kf - y^\delta\|_2^2 + \alpha \|Wf\|_1 \right\}
$$

can be solved using an algorithm called Iterative Soft-Thresholding (ISTA), whose iterates are given by:

$$
f^{(i+1)} = W^T S_\alpha W \left( f^{(i)} + K^T (y^\delta - K f^{(i)}) \right)
$$

where $S_\alpha$ is the soft-thresholding operation.

There are many variants of ISTA to gain faster convergence (FISTA), to extend it to non-orthogonal bases (or frames), or to include the nonnegativity constraint (primal-dual fixed point, PDFP).
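A runnable sketch of the ISTA iteration with a 1-level orthonormal Haar transform as $W$, on a toy near-identity problem. A step size $\tau = 1/\|K\|^2$ is included so the gradient step is contractive (the iteration above assumes $\|K\| \le 1$); $K$, the data, $\alpha$ and the iteration count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 32
K = np.eye(n) + 0.05 * rng.standard_normal((n, n))  # mildly perturbed identity
f_true = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])
y_delta = K @ f_true + 0.01 * rng.standard_normal(n)
alpha = 0.01

def W(f):    # orthonormal 1-level Haar analysis: [averages, differences] / sqrt(2)
    return np.concatenate([f[0::2] + f[1::2], f[0::2] - f[1::2]]) / np.sqrt(2.0)

def W_T(c):  # synthesis; the transpose equals the inverse for an orthonormal W
    a, d = c[: len(c) // 2], c[len(c) // 2:]
    f = np.empty(2 * len(a))
    f[0::2] = (a + d) / np.sqrt(2.0)
    f[1::2] = (a - d) / np.sqrt(2.0)
    return f

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

tau = 1.0 / np.linalg.norm(K, 2) ** 2  # step size for the gradient part
f = np.zeros(n)
for _ in range(500):
    g = f + tau * K.T @ (y_delta - K @ f)   # gradient step on the data fidelity
    f = W_T(soft(W(g), tau * alpha))        # soft-threshold the Haar coefficients
f_ista = f
```

Each iteration thus alternates a data-fidelity gradient step with an analysis, threshold, synthesis round trip through the wavelet domain.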


Comparison of Reconstructions

[Figure: original phantom and reconstructions, with relative errors:
naive reconstruction (Moore-Penrose pseudoinverse) $f^\dagger$: RE = 100%;
truncated SVD regularization $f_{\mathrm{TSVD}}$: RE = 35%;
Tikhonov regularization $f_{\mathrm{TIKH}}$: RE = 32%;
nonnegativity-constrained Tikhonov regularization $f_{\mathrm{TIKH}}^{+}$: RE = 13%;
nonnegativity-constrained wavelet-based regularization $f_{\mathrm{WLET}}^{+}$: RE = 26%;
nonnegativity-constrained total variation regularization $f_{\mathrm{TV}}^{+}$: RE = 3%.]


Take-home message

Uniqueness does not save us. Even with an injective forward map, failure of Hadamard's condition 3 (stability) means that we need regularization to solve the inverse problem.

Non-uniqueness can be handled. A stable regularization strategy just needs enough a priori information to pick out a unique object among those with the same data.

Caveat. Regularization is not the only "cure" for ill-posedness: Bayesian inversion and analytical strategies (designed ad hoc for the problem) are possible approaches as well.


Some References

Inverse Problems:
- Engl, Hanke & Neubauer, Regularization of Inverse Problems, 1996
- Hansen, Rank-Deficient and Discrete Ill-Posed Problems, 1998
- Bertero & Boccacci, Introduction to Inverse Problems in Imaging, 1998
- Vogel, Computational Methods for Inverse Problems, 2002
- Hansen, Discrete Inverse Problems, 2010
- Mueller & Siltanen, Linear and Nonlinear Inverse Problems with Practical Applications, 2012

X-ray Tomography:
- Deans, The Radon Transform and Some of Its Applications, 1983
- Natterer, The Mathematics of Computerized Tomography, 1986
- Kak & Slaney, Principles of Computerized Tomographic Imaging, 1988
- Buzug, Computed Tomography: From Photon Statistics to Modern Cone-Beam CT, 2008
- Natterer & Wübbeling, Mathematical Methods in Image Reconstruction, 2001
- Epstein, Introduction to the Mathematics of Medical Imaging, 2008


Some References

Total variation and wavelets:
- Burger & Osher, A Guide to the TV Zoo (chapter 1 in Burger & Osher, Level-Set and PDE-Based Reconstruction Methods, 2013)
- Rudin, Osher & Fatemi, Nonlinear Total Variation Based Noise Removal Algorithms, 1992
- Boggess & Narcowich, A First Course in Wavelets with Fourier Analysis, 2009
- Mallat, A Wavelet Tour of Signal Processing, 1999

Optimization:
- Boyd & Vandenberghe, Convex Optimization, 2004
- Nocedal & Wright, Numerical Optimization, 2006
- Rockafellar, Convex Analysis, 1996
- Daubechies, Defrise & De Mol, An Iterative Thresholding Algorithm for Linear Inverse Problems with a Sparsity Constraint, 2004
