Lausanne, August 19, 2004
Dear Dr. Liebling,
I am pleased to inform you that you were selected to receive the 2004 Research Award of the
Swiss Society of Biomedical Engineering for your thesis work “On Fresnelets, interference
fringes, and digital holography”. The award will be presented during the general assembly of the
SSBE on September 3 in Zurich, Switzerland.
Please let us know whether
1) you will be present to receive the award,
2) you would be willing to give a 10-minute presentation of the work during the general
assembly.
The award comes with a cash prize of 1000.- CHF.
Would you please send your banking information to the treasurer of the SSBE, Uli Diermann
(Email: [email protected]), so that he can transfer the cash prize to your account?
I congratulate you on your achievement.
With best regards,
Michael Unser, Professor
Chairman of the SSBE Award Committee
cc: Ralph Mueller, president of the SSBE; Uli Diermann, treasurer
Dr. Michael Liebling
Biological Imaging Center
California Inst. of Technology
Mail Code 139-74
Pasadena, CA 91125, USA
BIOMEDICAL IMAGING GROUP (BIG)
LABORATOIRE D’IMAGERIE BIOMEDICALE
EPFL LIB
Bât. BM 4.127
CH 1015 Lausanne
Switzerland
Téléphone : + 4121 693 51 85
Fax : + 4121 693 37 01
E-mail :
Site web : http://bigwww.epfl.ch

From analog to digital: The unifying role of splines in Science and Engineering
For Carl de Boor's 80th Birthday, Dec. 4-6, 2017, National Univ. Singapore
Michael Unser, Biomedical Imaging Group, EPFL, Lausanne, Switzerland
Field report of an engineer after a 30-year exploration in splineland,
who, in the process, became (or aspired to be) a mathematician
Three presidents concerned with digitalization

[Photos: the presidents of EPFL and ETHZ with Doris Leuthard at the first Swiss digital day]
OUTLINE
■ Functional analytic perspective
  ■ Splines and operators
  ■ Green's functions, B-splines
■ Unifying role of splines in …
  ■ Signal and image processing
  ■ Sampling theory
  ■ Linear system theory
  ■ Regularization theory / machine learning
  ■ Stochastic processes
  ■ Wavelet theory
  ■ Estimation theory
  ■ Imaging (tomography, compressed sensing)
Splines and operators
Definition: A linear operator $\mathrm{L} : \mathcal{X} \to \mathcal{Y}$, where $\mathcal{X} \supseteq \mathcal{S}(\mathbb{R}^d)$ and $\mathcal{Y}$ are appropriate subspaces of $\mathcal{S}'(\mathbb{R}^d)$, is called spline-admissible if
1. it is linear shift-invariant (LSI);
2. its null space $\mathcal{N}_{\mathrm{L}} = \{p \in \mathcal{X} : \mathrm{L}\{p\} = 0\}$ is finite-dimensional of size $N_0$;
3. there exists a function $\rho_{\mathrm{L}} : \mathbb{R}^d \to \mathbb{R}$ of slow growth (Green's function of L) such that $\mathrm{L}\{\rho_{\mathrm{L}}\} = \delta$.

Example of admissible operator:
$$\mathrm{D}^n = \frac{d^n}{dx^n}, \quad \text{with} \quad \rho_{\mathrm{D}^n}(x) = \frac{x_+^{n-1}}{(n-1)!} \quad \text{and} \quad \mathcal{N}_{\mathrm{D}^n} = \mathrm{span}\left\{\frac{x^{m-1}}{(m-1)!}\right\}_{m=1}^{n}$$
Splines, operators and (sparse) innovations

Spline theory: (Schultz-Varga, 1967; Jerome-Schumaker, 1969; Micchelli, 1976)

$\mathrm{L}\{\cdot\}$: differential operator (translation-invariant); $\delta$: Dirac distribution

Definition: The function $s(x)$, $x \in \mathbb{R}^d$ (possibly of slow growth) is a nonuniform L-spline with knots $\{x_k\}_{k \in S}$
$$\Leftrightarrow \quad \mathrm{L}s = \sum_{k \in S} a_k \delta(\cdot - x_k) = w$$

Splines are inherently sparse (with a finite rate of innovation):
■ Location of singularities (knots): $\{x_k\}$
■ Strength of singularities (linear weights): $\{a_k\}$
Formal spline synthesis

L: spline-admissible operator (LSI)
Green's function $\rho_{\mathrm{L}} : \mathbb{R}^d \to \mathbb{R}$ such that $\mathrm{L}\{\rho_{\mathrm{L}}\} = \delta$
Finite-dimensional null space: $\mathcal{N}_{\mathrm{L}} = \mathrm{span}\{p_n\}_{n=1}^{N_0}$
Spline's innovation: $w = \sum_k a_k \delta(\cdot - x_k)$

$$\Rightarrow \quad s(x) = \sum_{k} a_k \rho_{\mathrm{L}}(x - x_k) + \sum_{n=1}^{N_0} b_n p_n(x)$$

Requires specification of boundary conditions.
Localized basis functions: cardinal B-splines

Spline-defining operator L
Green's function $\rho_{\mathrm{L}}$ s.t. $\mathrm{L}\{\rho_{\mathrm{L}}(\cdot - x_0)\} = \delta(\cdot - x_0)$
Space of cardinal L-splines: $\mathrm{span}\{\beta_{\mathrm{L}}(\cdot - k)\}_{k \in \mathbb{Z}^d}$

Cardinal B-spline:
$$\beta_{\mathrm{L}}(x) = \mathrm{L}_d \mathrm{L}^{-1}\{\delta\}(x) = \sum_{k \in \mathbb{Z}^d} d_{\mathrm{L}}[k]\, \rho_{\mathrm{L}}(x - k)$$

Design principle: $\beta_{\mathrm{L}}$ as "short" as possible

Discrete counterpart of the operator (on a uniform grid) — the "finite difference" operator, which makes the link with numerical analysis:
$$\mathrm{L}_d : f \mapsto \sum_{k \in \mathbb{Z}^d} d_{\mathrm{L}}[k]\, f(\cdot - k)$$

Controls quality of discrete approximation: $\mathrm{L}_d\{f\} = \beta_{\mathrm{L}} * \mathrm{L}\{f\}$
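To make the construction concrete, here is a small numeric sketch (my own illustration, not from the slides): for $\mathrm{L} = \mathrm{D}^{n+1}$ the cardinal B-spline is the $(n+1)$-fold convolution of the unit box (the degree-0 B-spline), so the cubic case can be tabulated with plain numpy. The grid step is an arbitrary demo choice.

```python
import numpy as np

# Sketch: for L = D^4, the causal cubic B-spline is the 4-fold convolution
# of the unit box -- the localization beta_L = Ld L^{-1}{delta} of the
# one-sided Green's function, computed here by Riemann-sum convolution.
h = 1e-3                              # grid step (demo choice)
box = np.ones(int(round(1 / h)))      # samples of the indicator of [0, 1)
b = box.copy()
for _ in range(3):                    # three more convolutions -> degree 3
    b = np.convolve(b, box) * h       # Riemann-sum approximation of the integral
x = np.arange(b.size) * h             # support of the causal cubic B-spline: [0, 4]
print(b[np.argmin(np.abs(x - 2.0))])  # value at the center: 2/3 up to O(h)
```

At the center of its support the cubic B-spline equals 2/3, and 1/6 one knot away, which is what the tabulated values reproduce.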
Examples of spline-admissible operators

■ $\mathrm{L} = \mathrm{D}^n$ (pure derivatives) ⇒ polynomial splines of degree $(n-1)$ (Schoenberg, 1946)
■ $\mathrm{L} = \mathrm{D}^n + a_{n-1}\mathrm{D}^{n-1} + \cdots + a_0\mathrm{I}$ (ordinary differential operator) ⇒ exponential splines (Dahmen-Micchelli, 1987)
■ Fractional Laplacian: $(-\Delta)^{\gamma/2} \xrightarrow{\mathcal{F}} \|\omega\|^{\gamma}$ ⇒ polyharmonic splines (Duchon, 1977)
■ Fractional derivatives: $\mathrm{L} = \mathrm{D}^{\gamma} \xrightarrow{\mathcal{F}} (j\omega)^{\gamma}$ ⇒ fractional splines (Unser-Blu, 2000)
■ Elliptic differential operators, e.g., $\mathrm{L} = (-\Delta + \alpha \mathrm{I})^{\gamma}$ ⇒ Sobolev splines (Wu-Schaback, 1993)
Construction of fractional B-splines (Unser & Blu, SIAM Rev., 2000)

One-sided power function:
$$x_+^{\alpha} = \begin{cases} x^{\alpha}, & x \ge 0 \\ 0, & x < 0 \end{cases}$$

$$\beta_+^{0}(x) = \Delta_+ x_+^{0} \;\xrightarrow{\mathcal{F}}\; \frac{1 - e^{-j\omega}}{j\omega}$$

Causal B-splines:
$$\beta_+^{\alpha}(x) = \frac{\Delta_+^{\alpha+1} x_+^{\alpha}}{\Gamma(\alpha + 1)} \;\xrightarrow{\mathcal{F}}\; \left(\frac{1 - e^{-j\omega}}{j\omega}\right)^{\alpha+1}$$

Degree: $\alpha \in \mathbb{R}^+$; order of approximation: $\gamma = \alpha + 1$.
Splines as a unifying mathematical concept

[Hub diagram: splines at the center, connected to wavelet theory, sampling theory, stochastic processes, estimation theory, regularization theory, linear systems theory, approximation theory, functional analysis, signal processing, numerical analysis, and partial differential equations]
Signal processing
IEEE Signal Processing Magazine
November 1999
Context: digital image processing

Images are made of pixels = B-splines of degree 0; but, looking at it closer …

Splines to the rescue
■ Real-world signals are continuous
■ Flexibility: from "pixelated" to "bandlimited"
■ Paradigm: Think analog, act digital
  Ability to zoom, rotate, warp, differentiate …
■ Control of "digitalization" error
■ Efficient algorithms
■ Higher quality …

Lincoln in Dalivision (Dali, 1977)

[Figure: continuous-space image ↔ image array (B-spline coefficients); compactly supported basis functions]
B-spline representation of images

■ Symmetric, tensor-product B-splines:
$$\beta^n(x_1, \ldots, x_d) = \beta^n(x_1) \times \cdots \times \beta^n(x_d)$$

■ Multidimensional spline function:
$$s(x_1, \ldots, x_d) = \sum_{(k_1, \ldots, k_d) \in \mathbb{Z}^d} c[k_1, \ldots, k_d]\, \beta^n(x_1 - k_1, \ldots, x_d - k_d)$$

Associated operator: $\mathrm{L} = \partial_{x_1}^{n+1} \cdots \partial_{x_d}^{n+1}$
Fast digital-filtering algorithms

Cardinal interpolation problem: given the signal samples $f[k]$, find the B-spline coefficients $c[k]$ such that
$$f(x)\big|_{x=k} = f[k] = \sum_{n \in \mathbb{Z}^d} c[n]\, \beta_{\mathrm{L}}(k - n)$$

⇒ Inverse filtering solution (discrete-domain convolution): $c[k] = (h_{\text{int}} * f)[k]$, with
$$H_{\text{int}}(z) = \frac{1}{B(z)} = \frac{1}{\sum_{k \in \mathbb{Z}^d} \beta_{\mathrm{L}}(k)\, z^{-k}}$$

Note: $\beta_{\mathrm{L}}(x)$ separable ⇒ $h_{\text{int}}[k]$ separable.

All classical cardinal spline interpolation and approximation problems can be solved efficiently using recursive digital filtering.

Reference: B-spline signal processing (Unser, IEEE-SP 1993)
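The recursive filtering mentioned above can be sketched in a few lines for the cubic case, where $1/B(z)$ has a single pole pair $z_1 = \sqrt{3} - 2$, $1/z_1$. This is in the spirit of (Unser, IEEE-SP 1993); the mirror-boundary initialization used below is one standard choice, not the only one.

```python
import numpy as np

# Recursive inverse prefilter Hint(z) = 1/B(z) for the cubic spline:
# one causal and one anticausal first-order pass with pole z1 = sqrt(3) - 2.
z1 = np.sqrt(3.0) - 2.0                           # single pole, |z1| < 1

def cubic_bspline_coeffs(f):
    f = np.asarray(f, dtype=float)
    n = f.size
    cp = np.empty(n)                              # causal pass
    cp[0] = np.sum(f * z1 ** np.arange(n))        # mirror init (truncated series)
    for k in range(1, n):
        cp[k] = f[k] + z1 * cp[k - 1]
    cm = np.empty(n)                              # anticausal pass
    cm[-1] = (z1 / (z1 * z1 - 1.0)) * (cp[-1] + z1 * cp[-2])
    for k in range(n - 2, -1, -1):
        cm[k] = z1 * (cm[k + 1] - cp[k])
    return 6.0 * cm                               # overall gain for the cubic case

# Interpolation check: resampling sum_n c[n] beta3(x - n) at the integers must
# give the data back, since beta3(0) = 2/3 and beta3(+-1) = 1/6.
f = np.random.default_rng(0).standard_normal(64)
c = cubic_bspline_coeffs(f)
fhat = (c[:-2] + 4.0 * c[1:-1] + c[2:]) / 6.0     # interior samples k = 1..62
print(np.max(np.abs(fhat - f[1:-1])))             # ~ machine precision
```

The two first-order passes cost two multiplications and two additions per sample, which is what makes the direct B-spline transform an O(n) operation.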
Interpolation benchmark

Cumulative rotation experiment: the best algorithm wins!

[Figure: results after repeated rotations for bilinear, truncated-sinc, windowed-sinc, and cubic-spline interpolation]

High-quality image interpolation (Thévenaz et al., Handbook of Medical Image Processing, 2000)

Demo
■ Splines: best cost-performance tradeoff
10.4. Discussion

Table 2 presents in succinct form the numeric results of these experiments, along with some additional ones. In particular, we also provide the results for the standard Lena test image. The execution time is given in seconds; it corresponds to the duration of a single rotation of a square image 512 pixels on a side. The computer is a Power Macintosh 9600/350. The column ε² shows the signal-to-noise ratio between the central part of the initial image of Figure 17 and the result of each experiment, while the column "Lena" gives qualitatively similar results for this standard test image. The measure of signal-to-noise ratio is defined as
$$\mathrm{SNR} = 10 \log\left(\frac{\sum_{k \in \mathbb{Z}^2} f_k^2}{\sum_{k \in \mathbb{Z}^2} (f_k - g_k)^2}\right),$$
where $f$ is the original data and where $g$ is given by the $r$-times chaining of the rotation.
[Plot residue omitted: SNR (dB, 0–40) versus execution time for a 512×512 image (s per rotation), with data points for Nearest-neighbor, Linear, Dodgson, Keys(−1.0), Keys(−0.5), Keys(−0.25), Schaum(3), Sinc(Bartlett, W=4), B-spline(2)–B-spline(6), and oMoms(3).]

Figure 21: Summary of the main experimental results for the circular pattern. Triangles: interpolating functions. Circles: non-interpolating functions. Hollow circles: accelerated implementation.
These results point out some of the difficulties associated with analyzing the performance of a synthesis function ϕ. For example, the computation time should ideally depend on the number of mathematical operations only. In reality, the optimization effort put into implementing each variation with one synthesis function or another also has some influence. For instance, our faster implementation of the cubic spline and the cubic o-Moms runs in a shorter time than reported in Table 2 (namely, 0.91 seconds instead of 1.19). We have nevertheless shown the result of the slower implementation because it corresponds to a somewhat unified level of optimization in all considered cases.
Figure 21 presents a graphical summary of the most interesting results (circular pattern, quality better than 0 dB, and execution time shorter than 2 seconds). It is interesting to compare this figure to Figure 16; the similarity between them confirms that our theoretical ranking of synthesis functions was justified. The difference between the interpolation methods is more pronounced in the experimental case because it has been magnified by the number of rotations performed.
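A small-scale variant of this cumulative-rotation experiment can be reproduced with `scipy.ndimage.rotate` (the image, its size, and the rotation count below are demo choices, not those of the chapter):

```python
import numpy as np
from scipy import ndimage

# Cumulative-rotation benchmark sketch: rotate r times by 360/r degrees,
# then measure the SNR on a central window away from the clipped borders.
yy, xx = np.mgrid[0:128, 0:128]
img = np.sin(xx / 5.0) * np.cos(yy / 7.0)         # smooth synthetic test image
r = 8

def snr(order):
    out = img.copy()
    for _ in range(r):                            # r rotations summing to 360 deg
        out = ndimage.rotate(out, 360.0 / r, reshape=False, order=order)
    c = slice(32, 96)                             # central part of the image
    err = img[c, c] - out[c, c]
    return 10.0 * np.log10(np.sum(img[c, c] ** 2) / np.sum(err ** 2))

print(snr(1), snr(3))                             # bilinear vs. cubic spline
```

The cubic-spline result (`order=3`, which includes the recursive prefilter) comes out well above the bilinear one, mirroring the ranking of Figure 21.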
Sampling (as taught to engineers)
Shannon: "Communication in the Presence of Noise"

[Excerpt from the first page; Fig. 1 of the paper shows the general communications system.]

… In telephony, this operation consists of merely changing sound pressure into a proportional electrical current. In telegraphy, we have an encoding operation which produces a sequence of dots, dashes, and spaces corresponding to the letters of the message. To take a more complex example, in the case of multiplex PCM telephony the different speech functions must be sampled, compressed, quantized and encoded, and finally interleaved properly to construct the signal.

3. The channel. This is merely the medium used to transmit the signal from the transmitting to the receiving point. It may be a pair of wires, a coaxial cable, a band of radio frequencies, etc. During transmission, or at the receiving terminal, the signal may be perturbed by noise or distortion. Noise and distortion may be differentiated on the basis that distortion is a fixed operation applied to the signal, while noise involves statistical and unpredictable perturbations. Distortion can, in principle, be corrected by applying the inverse operation, while a perturbation due to noise cannot always be removed, since the signal does not always undergo the same change during transmission.

4. The receiver. This operates on the received signal and attempts to reproduce, from it, the original message. Ordinarily it will perform approximately the mathematical inverse of the operations of the transmitter, although they may differ somewhat with best design in order to combat noise.

5. The destination. This is the person or thing for whom the message is intended.

Following Nyquist¹ and Hartley², it is convenient to use a logarithmic measure of information. If a device has n possible positions it can, by definition, store log_b n units of information. The choice of the base b amounts to a choice of unit, since log_b n = log_b c log_c n. We will use the base 2 and call the resulting units binary digits or bits. A group of m relays or flip-flop circuits has 2^m possible sets of positions, and can therefore store log₂ 2^m = m bits.

If it is possible to distinguish reliably M different signal functions of duration T on a channel, we can say that the channel can transmit log₂ M bits in time T. The rate of transmission is then log₂ M/T. More precisely, the channel capacity may be defined as
$$C = \lim_{T \to \infty} \frac{\log_2 M}{T}. \tag{1}$$
A precise meaning will be given later to the requirement of reliable resolution of the M signals.

¹ H. Nyquist, "Certain factors affecting telegraph speed," Bell Syst. Tech. Jour., vol. 3, p. 324; April, 1924.
² R. V. L. Hartley, "The transmission of information," Bell Syst. Tech. Jour., vol. 3, pp. 535-564; July, 1928.

II. THE SAMPLING THEOREM

Let us suppose that the channel has a certain bandwidth W in cps starting at zero frequency, and that we are allowed to use this channel for a certain period of time T. Without any further restrictions this would mean that we can use as signal functions any functions of time whose spectra lie entirely within the band W, and whose time functions lie within the interval T. Although it is not possible to fulfill both of these conditions exactly, it is possible to keep the spectrum within the band W, and to have the time function very small outside the interval T. Can we describe in a more useful way the functions which satisfy these conditions? One answer is the following:

THEOREM 1: If a function f(t) contains no frequencies higher than W cps, it is completely determined by giving its ordinates at a series of points spaced 1/2W seconds apart.

This is a fact which is common knowledge in the communication art. The intuitive justification is that, if f(t) contains no frequencies higher than W, it cannot change to a substantially new value in a time less than one-half cycle of the highest frequency, that is, 1/2W. A mathematical proof showing that this is not only approximately, but exactly, true can be given as follows. Let F(ω) be the spectrum of f(t). Then
$$f(t) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} F(\omega)\, e^{i\omega t}\, d\omega \tag{2}$$
$$= \frac{1}{2\pi} \int_{-2\pi W}^{+2\pi W} F(\omega)\, e^{i\omega t}\, d\omega, \tag{3}$$
since F(ω) is assumed zero outside the band W. If we let
$$t = \frac{n}{2W}, \tag{4}$$
where n is any positive or negative integer, we obtain
$$f\!\left(\frac{n}{2W}\right) = \frac{1}{2\pi} \int_{-2\pi W}^{+2\pi W} F(\omega)\, e^{i\omega \frac{n}{2W}}\, d\omega. \tag{5}$$
On the left are the values of f(t) at the sampling points. The integral on the right will be recognized as essentially the nth coefficient in a Fourier-series expansion of the function F(ω), taking the interval −W to +W as a fundamental period. This means that the values of the samples f(n/2W) determine the Fourier coefficients in the series expansion of F(ω). Thus they determine F(ω), since F(ω) is zero for frequencies greater than W …

(Shannon, Proc. I.R.E., vol. 37, pp. 10-21, 1949)

Shannon's sampling theorem
$$f(t) = \sum_{k \in \mathbb{Z}} f(kT)\, \mathrm{sinc}\!\left(\frac{t - kT}{T}\right) \quad \text{with } T \le \frac{1}{2W}$$

Space of W-bandlimited functions:
$$\mathcal{B}_{\pi W} = \{f \in L_2(\mathbb{R}) : \hat{f}(\omega) = 0 \text{ for all } |\omega| > \pi W\},$$
with the orthogonal basis $\{\mathrm{sinc}(\frac{\cdot - kT}{T})\}_{k \in \mathbb{Z}}$.
Practical usage
■ Used to justify digitalization of signals (A-to-D conversion)
■ Exact reconstruction formula rarely used in practice
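One reason the exact formula is rarely used in practice is its slow decay: even a heavily truncated version of the sinc series only converges slowly. A direct numeric check (band limit, test frequency, and truncation window are demo choices):

```python
import numpy as np

# Truncated Shannon reconstruction f(t) = sum_k f(kT) sinc((t - kT)/T).
W = 4.0
T = 1.0 / (2.0 * W)                        # critical sampling step
f = lambda t: np.sin(2 * np.pi * 1.3 * t)  # bandlimited: 1.3 Hz < W
k = np.arange(-2000, 2001)                 # truncation of the infinite sum
t0 = 0.3                                   # off-grid evaluation point
rec = np.sum(f(k * T) * np.sinc((t0 - k * T) / T))  # np.sinc(x) = sin(pi x)/(pi x)
print(rec, f(t0))                          # truncated series ~ exact value
```

Even with 4001 terms the truncation error only decays like 1/N, whereas the compactly supported B-spline kernels of the next slides need a handful of terms per evaluation point.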
From B-splines to cardinal interpolants

[Figure: interpolation basis function; example: cubic-spline interpolant]

Equivalent interpretation of cardinal spline interpolation:
$$f(x) = \sum_{k \in \mathbb{Z}^d} c[k]\, \beta_{\mathrm{L}}(x - k) = \sum_{k \in \mathbb{Z}^d} (f[\cdot] * h_{\text{int}})[k]\, \beta_{\mathrm{L}}(x - k) = \sum_{k \in \mathbb{Z}^d} f[k]\, \varphi_{\text{int}}(x - k)$$
with
$$\varphi_{\text{int}}(x) = \sum_{k \in \mathbb{Z}^d} h_{\text{int}}[k]\, \beta_{\mathrm{L}}(x - k)$$
Link with Shannon's sampling theory

References: (Schoenberg, 1973; Unser, Proc. IEEE, 2000)

■ Polynomial spline interpolator (impulse response / frequency response):
$$\varphi^n_{\text{int}}(x) \;\xleftrightarrow{\mathcal{F}}\; \hat{\varphi}^n_{\text{int}}(\omega) = \frac{\hat{\beta}^n(\omega)}{H^n_{\text{int}}(e^{j\omega})}, \qquad \hat{\beta}^n(\omega) = \left(\frac{\sin(\omega/2)}{\omega/2}\right)^{n+1}$$

The Hilbert-space formulation of polynomial spline approximation provides an extension of Shannon's classical sampling theorem.

■ Asymptotic property: the cardinal-spline interpolators converge to the sinc interpolator (ideal filter) as the degree goes to infinity:
$$\lim_{n \to \infty} \varphi^n_{\text{int}}(x) = \mathrm{sinc}(x), \qquad \lim_{n \to \infty} \hat{\varphi}^n_{\text{int}}(\omega) = \mathrm{rect}\!\left(\frac{\omega}{2\pi}\right) \quad \text{(in all } L_p\text{-norms)}$$
NOTES ON SPLINE FUNCTIONS III: ON THE CONVERGENCE OF THE INTERPOLATING CARDINAL SPLINES AS THEIR DEGREE TENDS TO INFINITY†

BY I. J. SCHOENBERG

ABSTRACT

It is shown that for entire functions f(x) defined by a Fourier-Stieltjes integral (9) the cardinal spline S_m(x) of the odd degree 2m−1, which interpolates f(x) at all integers, converges to f(x) as m tends to infinity. Properties of the exponential Euler spline are used in the proof.

1. Introduction

Let n = 2m − 1 be an odd integer and let 𝒮_n = {S(x)} denote the class of spline functions S(x) of degree n = 2m − 1, with knots at the integers and of the continuity class C^{2m−2}(ℝ). Within this class we wish to interpolate a prescribed bi-infinite sequence (y_ν) of numbers for −∞ < ν < ∞, that is, S(ν) = y_ν for all integers ν. We know (see, for example, [6, Lec. 4]) that if y_ν grows at most like a power of |ν| as |ν| → ∞, then there is a unique S_m(x) ∈ 𝒮_{2m−1} such that S_m(x) grows at most like a power of |x| as |x| → ∞ which satisfies

S_m(ν) = y_ν for all ν.

We are interested in cases when S_m(x) converges to a limit function as m approaches infinity. Two such cases are presently known.

(i) The sequence (y_ν) is periodic with period k.
(ii) (y_ν) ∈ l₂, hence Σ|y_ν|² < ∞.

For case (i) refer to [3], [1], [4], and for case (ii) to [5] and [6, Lec. 9]. Since, for the purposes of this paper, we are concerned …

† Sponsored by the United States Army under Contract No. DA-31-124-ARO-D-462. Received February 16, 1973.

Israel J. Math., pp. 87-93, 1973.
“Bandlimited functions” vs. “Entire functions of exponential type”
Link with system theory: C-to-D converters
Exponential B-splines = the mathematical translators between continuous-time and discrete-time LSI system theories.

Continuous domain — differential equations; circuits, analog filters; Laplace transform, with zeros $\{\zeta_m\}$ and poles $\{\alpha_n\}$:
$$H_C(s) = \frac{\prod_{m=1}^{M}(s - \zeta_m)}{\prod_{n=1}^{N}(s - \alpha_n)}$$

Discrete domain — difference equations; digital filters; z-transform:
$$H_D(z) = \frac{1}{\prod_{n=1}^{N}(z - z_n)}$$

Pole mapping: $z_n = e^{\alpha_n}$

Associated B-spline:
$$\beta(t) = \mathcal{L}^{-1}\!\left\{\frac{H_C(s)}{H_D(e^s)}\right\}(t)$$

Reference: "Think analog, act digital" (Unser, IEEE-SP 2006)
Example: 1st-order system

■ Continuous-time impulse response
$$h_C(t) = \mathbb{1}_+(t)\, e^{\alpha t} = \begin{cases} e^{\alpha t}, & t \ge 0 \\ 0, & t < 0 \end{cases} \;\xleftrightarrow{\mathcal{L}}\; H_C(s) = \frac{1}{s - \alpha}$$

■ Discrete-time counterpart
$$h_D[k] = h_C(k) \;\xleftrightarrow{z}\; H_D(z) = \frac{1}{z - e^{\alpha}}$$

■ Reconstruction of the continuous-time signal from the discrete-time signal with the 1st-order exponential B-spline $\beta_\alpha(t)$ (compactly supported basis functions):
$$h_C(t) = \sum_{k=0}^{+\infty} e^{\alpha k}\, \beta_\alpha(t - k) = \sum_{k \in \mathbb{Z}} h_D[k]\, \beta_\alpha(t - k)$$
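The reconstruction identity above is exact and easy to verify numerically. In the sketch below the exponential B-spline is taken as $\beta_\alpha(t) = e^{\alpha t}$ on $[0,1)$, one common normalization; $\alpha$ and the grid are demo choices.

```python
import numpy as np

# Check: e^{alpha t} u(t) is exactly the sum of integer shifts of the
# first-order exponential B-spline, weighted by the samples e^{alpha k}.
alpha = -0.7
beta = lambda t: np.where((t >= 0) & (t < 1), np.exp(alpha * t), 0.0)
t = np.linspace(0.0, 9.99, 500)
hC = np.exp(alpha * t)                              # causal impulse response, t >= 0
rec = sum(np.exp(alpha * k) * beta(t - k) for k in range(12))
print(np.max(np.abs(rec - hC)))                     # identity holds up to roundoff
```

On each interval $[k, k+1)$ the sum reduces to $e^{\alpha k} e^{\alpha(t-k)} = e^{\alpha t}$, which is why the match is exact rather than approximate.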
D-to-A translating B-splines

(To appear in IEEE Transactions on Signal Processing)

TABLE III — D-to-A translating B-splines: the integer shifts of these B-splines are the basis functions that allow the reconstruction of the impulse responses in Table I from the discrete signals in Table II.

B-spline                                   Operator L         Order N   Frequency response
δ(t)  (Dirac distribution)                 I{·}               0         1
δ(t − τ)                                   S_τ{·}             0         e^{−jωτ}
β^(0)(t)                                   D{·} = d/dt        1         (1 − e^{−jω})/(jω)
β^(0,…,0)(t)  (polynomial B-splines)       D^n{·}             n         ((1 − e^{−jω})/(jω))^n
β^α(t)                                     (D − αI){·}        1         (1 − e^{α−jω})/(jω − α)
β^(α,…,α)(t)  (exponential B-splines)      (D − αI)^n{·}      n         ((1 − e^{α−jω})/(jω − α))^n

[Plot residue omitted: amplitude and phase responses.]

Fig. 1. Three generalized B-splines of order N = 4: (a) cubic B-spline, (b) cubic OMOMS, and (c) cubic Lagrange interpolator. To facilitate the comparison, the B-splines have been normalized to have a unit integral.
Machine learning
SPLINE FUNCTIONS AND THE PROBLEM OF GRADUATION*
BY I. J. SCHOENBERG
UNIVERSITY OF PENNSYLVANIA AND INSTITUTE FOR ADVANCED STUDY
Communicated by S. Bochner, August 19, 1964

1. Introduction. The aim of this note is to extend some of the recent work on spline interpolation so as to include also a solution of the problem of graduation of data. The well-known method of graduation due to E. T. Whittaker suggests how this should be done. Here we merely describe the idea and the qualitative aspects of the new method, while proofs and the computational side will be discussed elsewhere.

2. Spline Interpolation. Let I = [a, b] be a finite interval and let (x_ν, y_ν), (ν = 1, …, n), be given data such that a ≤ x₁ < x₂ < ⋯ < x_n ≤ b. The following facts are known:¹

Let m be a natural number, m ≤ n. The problem of finding a function f(x) (x ∈ I) having a square-integrable mth derivative and satisfying the two conditions
$$f(x_\nu) = y_\nu, \quad (\nu = 1, \ldots, n), \tag{1}$$
$$\int_I \big(f^{(m)}(x)\big)^2\, dx = \text{minimum}, \tag{2}$$
has a unique solution which is the restriction to [a, b] of the function s(x) = S(x; y₁, y₂, …, y_n) which is uniquely characterized by the three conditions
$$s(x_\nu) = y_\nu, \quad (\nu = 1, \ldots, n), \tag{3}$$
$$s(x) \in C^{2m-2}(-\infty, \infty), \tag{4}$$
$$s(x) \in \pi_{2m-1} \text{ in each of the intervals } (x_\nu, x_{\nu+1}); \quad s(x) \in \pi_{m-1} \text{ in } (-\infty, x_1) \text{ and also in } (x_n, +\infty). \tag{5}$$

The functions defined by the two conditions (4) and (5) are called spline functions of order 2m (or degree 2m − 1), having the knots x_ν; we denote their class by the symbol S_m. We have assumed that 1 ≤ m ≤ n. If m = 1, then s(x) is obtained by linear interpolation between successive y_ν, while s(x) = y₁ if x < x₁ and s(x) = y_n if x > x_n. If m = n, then S_m = π_{n−1} and s(x) is the polynomial interpolating the y_ν.

3. Whittaker's Method of Graduation. In 1923 E. T. Whittaker³ proposed the following method of adjusting the ordinates y_ν if these are only imperfectly known and are in need of a certain amount of smoothing: he chooses m, 1 ≤ m ≤ n, and the (smoothing) parameter ε, ε > 0. The graduated sequence y_ν* = y_ν*(ε) is then obtained as the solution of the problem
$$\sum_{\nu=1}^{n-m} \big(\Delta^m y_\nu^*\big)^2 + \varepsilon \sum_{\nu=1}^{n} \big(y_\nu^* - y_\nu\big)^2 = \text{minimum}, \tag{6}$$
where
$$\Delta^m y_\nu^* = \sum_{i=\nu}^{\nu+m} \frac{y_i^*}{w'(x_i)}, \qquad w(x) = (x - x_\nu)\cdots(x - x_{\nu+m}).$$

Proc. Nat. Acad. Sci., vol. 52, pp. 947-950, 1964
$$f_{\text{spline}} = \arg\min_{f \in X} \|\mathrm{D}^n f\|_{L_2}^2 \quad \text{s.t.} \quad f(x_m) = y_m, \quad (m = 1, \ldots, M)$$
L2 representer theorem for variational splines

$$(\text{P2}) \quad \arg\min_{f \in \mathcal{H}_{\mathrm{L}}} \left( \sum_{m=1}^{M} |y_m - f(x_m)|^2 + \lambda \|\mathrm{L}f\|^2_{L_2(\mathbb{R}^d)} \right)$$

Theorem (L2 representer theorem for variational splines). The solution of (P2) is unique and of the form
$$f(x) = \sum_{m=1}^{M} a_m\, \rho_{\mathrm{L}^*\mathrm{L}}(x - x_m) + \sum_{n=1}^{N_0} b_n\, p_n(x),$$
where $\rho_{\mathrm{L}^*\mathrm{L}}(x) = (\mathrm{L}^*\mathrm{L})^{-1}\{\delta\}(x)$ is the Green's function of $(\mathrm{L}^*\mathrm{L})$; i.e., the solution is an $(\mathrm{L}^*\mathrm{L})$-spline with knots at the $\{x_m\}$.

Example: $\mathrm{L} = \mathrm{D}^2$ with $\rho_{\mathrm{D}^4}(x) \propto |x|^3$ ⇒ $f(x)$ is a cubic spline.

(Schoenberg 1964; de Boor-Lynch 1966; Kimeldorf-Wahba 1971)
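A hedged sketch of this theorem in action (my own illustration): for $\mathrm{L} = \mathrm{D}^2$ the minimizer is a cubic spline $f(x) = \sum_m a_m|x - x_m|^3 + b_0 + b_1 x$, and the coefficients solve a classical augmented linear system (Wahba-style); the kernel scaling is absorbed into $\lambda$ here, and the data are synthetic.

```python
import numpy as np

# Fit the (P2) minimizer for L = D^2: solve [[K + lam*I, P], [P^T, 0]] for
# the kernel weights a and the null-space coefficients b.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 12)                 # data sites (demo choice)
y = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(12)

K = np.abs(x[:, None] - x[None, :]) ** 3      # Green's-function kernel rho(x - x')
P = np.stack([np.ones_like(x), x], axis=1)    # null space of D^2: span{1, x}
lam = 1e-8                                    # small lam -> near interpolation
A = np.block([[K + lam * np.eye(12), P],
              [P.T, np.zeros((2, 2))]])
sol = np.linalg.solve(A, np.concatenate([y, np.zeros(2)]))
a, b = sol[:12], sol[12:]

fx = K @ a + P @ b                            # spline evaluated at the data sites
print(np.max(np.abs(fx - y)))                 # tiny: the lam -> 0 limit interpolates
```

Increasing `lam` trades fidelity for smoothness, which is exactly the graduation problem of the Schoenberg excerpt above.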
Courtesy of Carl de Boor

On splines and their minimum properties
Carl de Boor & Robert E. Lynch¹
Communicated by G. Birkhoff
J. Math. Mech. 15 (1966), pp. 953–970

0. Introduction. It is the purpose of this note to show that the several minimum properties of odd-degree polynomial spline functions [4, 18] all derive from the fact that spline functions are representers of appropriate bounded linear functionals in an appropriate Hilbert space. (These results were first announced in Notices, Amer. Math. Soc., 11 (1964) 681.) In particular, spline interpolation is a process of best approximation, i.e., of orthogonal projection, in this Hilbert space. This observation leads to a generalization of the notion of spline function. The fact that such generalized spline functions retain all the minimum properties of the polynomial splines follows from familiar facts about orthogonal projections in Hilbert space.

1. Polynomial splines and their minimum properties. A polynomial spline function, s(x), of degree m ≥ 0, having the n ≥ 1 joints x₁ < x₂ < ⋯ < x_n, is by definition a real-valued function of class C^(m−1)(−∞,∞) which reduces to a polynomial of degree at most m in each of the n+1 intervals (−∞, x₁), (x₁, x₂), …, (x_n, +∞). The most general such function is given by
$$s(x) = \sum_{i=0}^{m} \alpha_i x^i + \sum_{j=1}^{n} \beta_j (x - x_j)_+^m,$$
where αᵢ, i = 0, …, m, and βⱼ, j = 1, …, n, are real numbers and
$$(x)_+^m = \begin{cases} x^m, & x \ge 0, \\ 0, & x < 0. \end{cases}$$
Specifically, let m = 2k − 1, and n ≥ k ≥ 1, and let S₀ denote the family of polynomial spline functions of odd degree m with joints x₁, …, x_n, which reduce to polynomials of degree at most k − 1 in each of the two intervals (−∞, x₁) and (x_n, ∞). Equivalently, S₀ consists of all polynomial spline functions s(x) of degree m with joints x₁, …, x_n which satisfy
$$(1.1)\quad s^{(j)}(x_1) = s^{(j)}(x_n) = 0, \; j = k, \ldots, 2k-2; \qquad s^{(2k-1)}(x) \equiv 0, \; \text{all } x \notin [x_1, x_n].$$
Hence, for n = k, S₀ consists just of the set {π_{k−1}} of polynomials of degree at most k − 1. Let [a, b] be a finite interval containing all the joints x₁, …, x_n and consider S₀ as a subset of the class of functions [19]
$$(1.2)\quad F^{(k)}[a, b] = \left\{ f(x) \mid f \in C^{(k-1)}[a, b],\; f^{(k-1)} \text{ absolutely continuous},\; f^{(k)} \in L_2[a, b] \right\}.$$
The elements of S₀ have the following properties [4], [18]:

Interpolation property: Given f ∈ F^(k)[a, b], there exists a unique element s(x) ∈ S₀ satisfying
$$s(x_i) = f(x_i), \quad i = 1, \ldots, n.$$
Denote this unique element by Pf.

[From p. 5 of the paper:] But this is just the demand that L∗ minimize ∥L − L′∥ over all L′ ∈ M. It is now immediate that L̄ and L∗ agree. First, the assumption that L ∈ 𝓛^(k) is sufficient (though not necessary) to insure that L(y)K(x, y) ∈ F^(k)[a, b], so that L̄ is defined. Also, by Corollary 2 of Lemma 2.3, L̄ ∈ M. Hence, since L̄ minimizes ∥L − L′∥ over all L′ = Σᵢ αᵢLᵢ, we have L̄ = L∗, and the third minimum property follows.

Finally, we mention some estimates for the error Rf = Lf − L̄f. Since
$$Lf - \bar{L}f = (\varphi - \bar{\varphi}, f) = \big(\varphi, (I - P_S)f\big),$$
use of Schwarz's inequality gives
$$(2.12)\quad |Lf - \bar{L}f| \le \|\varphi - \bar{\varphi}\|\, \|f\|, \quad \text{and} \quad |Lf - \bar{L}f| \le \|\varphi\|\, \|f - P_S f\|.$$
But since, by (2.5), $((I - P_S)\varphi, f) = ((I - P_S)\varphi, (I - P_S)f)$, we have the better estimate
$$(2.13)\quad |Lf - \bar{L}f| \le \|\varphi - \bar{\varphi}\|\, \|f - P_S f\|.$$
Hence if ∥f∥ ≤ r, which implies $\|f - P_S f\| \le (r^2 - \|P_S f\|^2)^{1/2}$, then
$$(2.14)\quad \bar{L}f - \|\varphi - \bar{\varphi}\| (r^2 - \|P_S f\|^2)^{1/2} \le Lf \le \bar{L}f + \|\varphi - \bar{\varphi}\| (r^2 - \|P_S f\|^2)^{1/2}.$$
The importance of the fact that this estimate depends only on the bound r and the numbers Lᵢ(f), i = 1, …, n, and is optimal with respect to this information, is rightfully stressed in [5].

3. The Hilbert space F^(k)[a, b]. The linear space F^(k)[a, b] can be made into a Hilbert space in various ways, thus providing various classes of functions which, due to the fact that they are representers of suitable linear functionals, have all the minimum properties of polynomial splines.

Specifically, let M be a k-th order ordinary linear differential operator in normal form,
$$(3.1)\quad M = (d^k/dx^k) + \sum_{i=0}^{k-1} a_i(x)\,(d^i/dx^i),$$
and let L₁, …, L_k be k linear functionals. Under suitable conditions on the aᵢ(x) and the Lᵢ,
$$(3.2)\quad (e, f) = \sum_{i=1}^{k} L_i(e)\, L_i(f) + \int_a^b (Me)(y)\,(Mf)(y)\, dy, \quad \text{all } e, f \in F^{(k)}[a, b],$$
is an inner product defined on F^(k)[a, b], which makes F^(k)[a, b] into a Hilbert space with reproducing kernel. This is proved in the following theorem, which provides facts necessary to define and describe generalized splines and their minimum properties.

Theorem 3.1. Let M be any k-th order ordinary linear differential operator in normal form, (3.1), where k ≥ 1 and aᵢ ∈ C[a, b], i = 0, …, k−1. Let N(M) denote the k-dimensional linear subspace of all functions f in C^(k)[a, b] for which Mf = 0. Let L₁, …, L_k be any set of k linear functionals in 𝓛^(k) which is linearly independent over N(M). Then F^(k)[a, b] is a Hilbert space with respect to the inner product (3.2), and has a reproducing kernel. This reproducing kernel, K, is given by
$$(3.3)\quad K(x, y) = \sum_{i=1}^{k} c_i(x)\, c_i(y) + \int_a^b G(x, t)\, G(y, t)\, dt, \qquad x, y \in [a, b],$$
where c₁, …, c_k is the dual basis to L₁, …, L_k in N(M), and G(x, y) is the Green's function for the differential equation (Mf)(x) = e(x) with Lᵢ(f) = 0, i = 1, …, k.

¹ The work of R. E. Lynch was supported in part by the National Science Foundation through Grant GP-217 and by the Army Research Office (Durham) through Grant DA-ARO(D)-31-124-G388, at The University of Texas.
RKHS representer theorem for machine learning

$$(\text{P2}) \quad \arg\min_{f \in \mathcal{H}} \left( \sum_{m=1}^{M} |y_m - f(x_m)|^2 + \lambda \|f\|_{\mathcal{H}}^2 \right)$$

$$(\text{P2}') \quad \arg\min_{f \in \mathcal{H}} \left( F(\mathbf{y}, \mathbf{f}) + \lambda \|f\|_{\mathcal{H}}^2 \right)$$

Sample values: $\mathbf{f} = \big(f(x_1), \ldots, f(x_M)\big)$; convex loss function: $F : \mathbb{R}^M \times \mathbb{R}^M \to \mathbb{R}$

Supports the theory of SVM, kernel methods, etc. (Schölkopf-Smola, 2001)

Representer theorem for L2-regularization: the generic parametric form of the solution of (P2′) is
$$f(x) = \sum_{m=1}^{M} a_m\, r_{\mathcal{H}}(x, x_m)$$
where $r_{\mathcal{H}} : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}$ is the (unique) reproducing kernel for the Hilbert space $\mathcal{H}$, characterized by
■ $r_{\mathcal{H}}(x_0, \cdot) \in \mathcal{H}$ for all $x_0 \in \mathbb{R}^d$
■ $f(x_0) = \langle r_{\mathcal{H}}(x_0, \cdot), f \rangle_{\mathcal{H}}$ for all $f \in \mathcal{H}$ and $x_0 \in \mathbb{R}^d$

(Poggio-Girosi, 1990)
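For the quadratic loss, the representer theorem reduces (P2) to a finite linear system — kernel ridge regression. A minimal sketch (the Gaussian kernel, its width, and $\lambda$ are illustrative choices; any reproducing kernel $r_{\mathcal{H}}$ works the same way):

```python
import numpy as np

# Kernel ridge regression: the expansion coefficients of
# f(x) = sum_m a_m r_H(x, x_m) solve (G + lam*I) a = y.
rng = np.random.default_rng(2)
x = rng.uniform(-1.0, 1.0, 20)
y = np.cos(3.0 * x) + 0.1 * rng.standard_normal(20)

def kern(u, v, sigma=0.3):                     # assumed kernel choice (Gaussian)
    return np.exp(-((u[:, None] - v[None, :]) ** 2) / (2.0 * sigma ** 2))

lam = 1e-3
a = np.linalg.solve(kern(x, x) + lam * np.eye(20), y)   # expansion coefficients

def f(t):                                      # f(x) = sum_m a_m r_H(x, x_m)
    return kern(np.atleast_1d(t), x) @ a

print(np.mean((f(x) - y) ** 2))                # small training error
```

The point of the slide is that the variational-spline theorem of the previous slide is the special case where $r_{\mathcal{H}}$ is built from the Green's function of $\mathrm{L}^*\mathrm{L}$.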
Splines and stochastic processes

Splines are in direct correspondence with stochastic processes (stationary or fractal) that are solutions of the same (partial) differential equation, but with a random driving term.

Defining operator equation: $\mathrm{L}s = w$ (the non-empty null space of L calls for boundary conditions)

Specific driving terms:
■ $w(x) = \delta(x) \;\Rightarrow\; s(x) = \mathrm{L}^{-1}\{\delta\}(x)$: Green's function
■ $w(x) = \sum_k a_k \delta(x - x_k) \;\Rightarrow\; s(x)$: non-uniform L-spline
■ $w$: white noise $\;\Rightarrow\;$ $s$: generalized stochastic process

References: stationary processes (Wahba-Kimeldorf, 1970); fractals (Blu-Unser, 2007)
Example: Brownian motion synthesis (Gaussian) (Wiener, 1926)

$\mathrm{L} = \frac{d}{dx} \;\Rightarrow\; \mathrm{L}^{-1}$: integrator

[Figure: white Gaussian noise $w(x)$ → $\mathrm{L}^{-1}\{\cdot\}$ → Brownian motion $s(x)$ (fBm with H = 0.50)]
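On a grid, this synthesis is just a cumulative sum of white Gaussian noise (the grid and ensemble sizes below are demo choices):

```python
import numpy as np

# Brownian-motion synthesis per the innovation model Ls = w with L = d/dx:
# integrate white Gaussian noise increments of variance h over x in [0, 1].
rng = np.random.default_rng(3)
n, h = 1000, 1e-3                                 # grid: x in [0, 1] with step h
w = rng.standard_normal((2000, n)) * np.sqrt(h)   # 2000 white-noise innovations
s = np.cumsum(w, axis=1)                          # discrete integrator L^{-1}
print(s[:, -1].var())                             # Var s(1) ~ n*h = 1, as expected
```

The empirical variance of the endpoint matches the Brownian scaling Var s(x) = x, which is the signature of the integrated white-noise model.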
Example: going fractional (fBm) (Mandelbrot, 1968)

$\mathrm{L} \xrightarrow{\mathcal{F}} (j\omega)^{H + \frac{1}{2}} \;\Rightarrow\; \mathrm{L}^{-1}$: fractional integrator, implemented with fractional B-splines (2000)

[Figure: white Gaussian noise $w$ → $\mathrm{L}^{-1}\{\cdot\}$ → fractional Brownian motion $s(x)$]
Sparsity: compound Poisson process (Paul Lévy, 1934)

Random jumps with rate λ (Poisson point process); jump sizes $a \sim dP(a)$

$\mathrm{L} = \frac{d}{dx} \;\Rightarrow\; \mathrm{L}^{-1}$: integrator

Innovation: random stream of Diracs
$$w(x) = \sum_k a_k \delta(x - x_k)$$

[Figure: Dirac stream $w(x)$ → $\mathrm{L}^{-1}\{\cdot\}$ → compound Poisson path $s(x)$ (H = 0.50)]
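Integrating such a Dirac stream numerically gives the piecewise-constant path directly (rate, horizon, and jump law are demo choices):

```python
import numpy as np

# Compound Poisson synthesis: integrate the sparse innovation
# w(x) = sum_k a_k delta(x - x_k) into a piecewise-constant path.
rng = np.random.default_rng(4)
lam, T = 30.0, 1.0
n_jumps = rng.poisson(lam * T)                  # number of Poisson points in [0, T]
xk = np.sort(rng.uniform(0.0, T, n_jumps))      # jump locations x_k
ak = rng.standard_normal(n_jumps)               # jump sizes a_k ~ dP(a)
t = np.linspace(0.0, T, 1000)
s = np.array([ak[xk <= u].sum() for u in t])    # running integral of the Diracs
print(n_jumps, s[-1])                           # path ends at the total jump mass
```

By construction the path is constant between jump locations, which is exactly the "finite rate of innovation" sparsity claimed earlier for splines.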
Example in 1D: self-similar processes

$\mathrm{L} \xrightarrow{\mathcal{F}} (j\omega)^{H + \frac{1}{2}} \;\Rightarrow\; \mathrm{L}^{-1}$: fractional integrator

[Figure: sample paths for H = 0.50, 0.75, 1.25, 1.50; Gaussian case: fractional Brownian motion (Mandelbrot, 1968); sparse case: generalized Poisson (Unser-Tafti, IEEE-SP 2010)]
2D generalization: the Mondrian process

$\mathrm{L} = \mathrm{D}_{x_1}\mathrm{D}_{x_2} \xrightarrow{\mathcal{F}} (j\omega_1)(j\omega_2)$, with rate λ = 30

[Figure: a realization of the Mondrian process]
Splines and wavelet theory

Polynomial B-splines have remarkable dilation properties. They play a fundamental role in wavelet theory.

$$\beta^0_+(x/2) = \beta^0_+(x) + \beta^0_+(x - 1)$$

■ Generalized Lego™/Duplo™ relation

B-spline dilation property:
$$\beta^n_+(x/2) = \sum_{k \in \mathbb{Z}} h[k]\, \beta^n_+(x - k)$$

Binomial filter:
$$H(z) = \frac{1}{2^n} \sum_{k=0}^{n+1} \binom{n+1}{k} z^{-k} = \frac{1}{2^n}\left(1 + z^{-1}\right)^{n+1}$$
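The two-scale relation can be checked numerically for the cubic case, reusing the box-convolution tabulation of the B-spline (grid step and test points are demo choices):

```python
import numpy as np
from math import comb

# Check of the dilation relation for the causal cubic B-spline (n = 3):
# beta(x/2) = sum_k h[k] beta(x - k), with h[k] = binom(n+1, k) / 2^n.
n, step = 3, 1e-3
box = np.ones(int(round(1 / step)))
tab = box.copy()
for _ in range(n):
    tab = np.convolve(tab, box) * step            # degree-3 B-spline on [0, 4]

def beta(x):                                      # look up the tabulated B-spline
    return np.interp(x, np.arange(tab.size) * step, tab, left=0.0, right=0.0)

h = [comb(n + 1, k) / 2 ** n for k in range(n + 2)]   # binomial filter taps
x = np.linspace(0.0, 8.0, 50)
lhs = beta(x / 2.0)
rhs = sum(h[k] * beta(x - k) for k in range(n + 2))
print(np.max(np.abs(lhs - rhs)))                  # small discretization error
```

The residual is at the level of the tabulation error, confirming that the coarse-scale B-spline is an exact finite combination of fine-scale ones — the refinement property that multiresolution analysis is built on.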
Splines and wavelet analysis

Closer look at the archetype of a sparse signal

Compound Poisson process = piecewise-constant signal $s(x)$, with sparse innovation (train of Dirac impulses).

"Sparse derivative" property: $\mathrm{D}s(x) = \sum_n a_n \delta(x - x_n)$, with $x_n$ the jump locations and $\mathrm{D} = \frac{d}{dx}$.

Wavelet as a smoothed derivative: $\psi_{\text{Haar}}(x) = \mathrm{D}\phi(x)$ for a suitable smoothing kernel $\phi(x)$
$$\Rightarrow \quad \langle s, \psi_{\text{Haar}}(\cdot - y_0)\rangle = \langle s, \mathrm{D}\phi(\cdot - y_0)\rangle = \langle \mathrm{D}^* s, \phi(\cdot - y_0)\rangle, \qquad \mathrm{D}^* = -\frac{d}{dx} \text{ (adjoint)}$$
Statistical dependence in transform domain

[Figure: mutual information versus α for the ortho-expansion of a (non-stationary) SαS Lévy process, comparing the identity, DCT/KLT, Haar wavelet, and optimal (ICA) transforms; smaller α is sparser, α = 2 is the Gaussian case.]

(Pad-Unser, IEEE Trans. Sig. Proc., 2016)
Operator-like wavelets for sparse AR(1) processes (Khalidov-Unser, 2006)

Innovation model: $\mathrm{L}s = w \;\Leftrightarrow\; s = \mathrm{L}^{-1}w$ with $\mathrm{L} = (\mathrm{D} - \alpha_1 \mathrm{I})$, $\mathrm{Re}(\alpha_1) < 0$

Operator-like wavelet: $\psi_i = \mathrm{L}^*\phi_i$ with $\phi_i$: smoothing kernel

Wavelet analysis:
$$\langle s, \psi_i(\cdot - t_0)\rangle = \langle \mathrm{L}^{-1}w, \mathrm{L}^*\phi_i(\cdot - t_0)\rangle = \langle w, \phi_i(\cdot - t_0)\rangle$$

[Figure: (a) operator-like wavelets $\psi_0 = \mathrm{L}^*\phi_0$ and $\psi_1 = \mathrm{L}^*\phi_1$; (b) the Haar wavelet recovered in the limit $\alpha_1 \to 0$ (cf. de Boor et al., 1993)]
Constr. Approx. (1993) 9:123-166
CONSTRUCTIVE APPROXIMATION © 1993 Springer-Verlag New York Inc.

On the Construction of Multivariate (Pre)Wavelets

Carl de Boor, Ronald A. DeVore, and Amos Ron

Abstract. A new approach for the construction of wavelets and prewavelets on ℝ^d from multiresolution is presented. The method uses only properties of shift-invariant spaces and orthogonal projectors from L₂(ℝ^d) onto these spaces, and requires neither decay nor stability of the scaling function. Furthermore, this approach allows a simple derivation of previous, as well as new, constructions of wavelets, and leads to a complete resolution of questions concerning the nature of the intersection and the union of a scale of spaces to be used in a multiresolution.

1. Introduction

We present a new approach for the construction of wavelets and prewavelets on ℝ^d from multiresolution. Our method, which is based on our earlier work [BDR], [BDR1], uses only properties of shift-invariant spaces and orthogonal projectors from L₂(ℝ^d) onto these spaces, and requires neither decay nor stability of the scaling function. Furthermore, this approach allows us to derive in a simple way previous constructions of wavelets, as well as new constructions, and to settle completely certain basic questions about multiresolution.

A univariate function ψ ∈ L₂(ℝ) is called an orthogonal wavelet if its normalized, translated dilates ψ_{j,k} := 2^{k/2} ψ(2^k · − j), j, k ∈ ℤ, form an orthonormal basis for L₂(ℝ). In other words, this system is complete and satisfies the orthogonality conditions
$$(1.1)\quad \int_{\mathbb{R}} \psi_{j,k}\, \psi_{j',k'} = \delta(j - j')\,\delta(k - k'), \qquad j, k, j', k' \in \mathbb{Z},$$
with δ the delta function on ℤ. The concept of prewavelet is somewhat more general in that it requires (1.1) to hold only when k ≠ k′, and hence the functions there are not assumed to be orthogonal at a fixed dyadic level k. In particular, ψ_k(· − j), j ∈ ℤ, are not necessarily orthogonal, and, instead, it is assumed that (ψ_k(· − j))_{j∈ℤ} forms a stable basis for L₂(ℝ) (see the end of this section and Section 2 for the definition of stability).

Date received: February 28, 1992. Communicated by Charles A. Micchelli.
AMS classification: Primary 41A63, 46C99; Secondary 41A30, 41A15, 42B99, 46E20.
Key words and phrases: Wavelets, Multiresolution, Shift-invariant spaces, Box splines.
Splines and (non-stationary) wavelets
[pp. 150-151, C. de Boor, R. A. DeVore, and A. Ron]

[CW] considered a slightly different notion of minimality: they were interested in finding a generator $w$ for $W$ whose Fourier transform is the product of $\hat\eta$ with a trigonometric polynomial $\tau$ of minimal degree (they assume that the refinement mask is a polynomial, to guarantee the existence of such a $\tau$). Thus, while we minimize diam supp $w$ over all possible generators $w$, Chui and Wang minimize diam supp $w$ only over those $w$ which can be written as a finite linear combination of the half-shifts of $\eta$. However, because of Result 5.10, the two notions coincide if we assume (as we do) that the half-shifts of $\eta$ are linearly independent, and, furthermore, as is proved by Jia and Wang in [JW], this assumption holds in the stationary case in case $\varphi$ has stable shifts and the mask has no $2\pi$-periodic polynomial factor. In any event, with straightforward modifications, the arguments used in Proposition 5.13 and Corollary 5.14 can be applied to show that the same characterization holds for the "minimal $w$" in the [CW] sense.

Chui and Wang stated their results in terms of the symmetric zeros of the polynomials involved. Let us pause for a moment to see how symmetric zeros enter into the characterizations provided above. If $\tau$ is a $4\pi$-periodic trigonometric polynomial, then, up to some exponential factor, we can write $\tau = p(e^{i\cdot/2})$ for some algebraic polynomial $p$ with $\deg p = \mathrm{mdeg}\,\tau$. However, for any algebraic polynomial $q$, $q(e^{i\cdot/2})$ is $2\pi$-periodic if and only if it can be written as an algebraic polynomial in $e^{i\cdot}$, i.e., if and only if $q$ involves only even powers, or, what is the same, if and only if all the zeros of $q$ occur in symmetric pairs. Thus the quotient in Corollary 5.14 can be equivalently characterized by the lack of symmetric zeros in $p/q$.

If we take for $\varphi$ a cardinal B-spline and for $\eta$ its 2-dilate, then the half-shifts of $\eta$ are linearly independent. In this case the spline wavelet $\psi$ of Chui and Wang (given by Theorem 5.5) is the minimally supported wavelet of $W$ guaranteed by Corollary 5.11, because the corresponding function of Corollary 5.14 is known to have no $2\pi$-periodic polynomial factor. It thus follows that $\psi$ has linearly independent shifts.

6. An Example of Nonstationary Decompositions: Exponential B-Splines

We have carried out the analysis in this paper without making the assumption that $\eta$ is the 2-dilate of $\varphi$. The reason for this is twofold: First, the assumption $\eta = \varphi(2\cdot)$ does not simplify either the idea or the details of our approach. Second, and more importantly, there are various interesting examples where the "finer" function $\eta$ is not obtained from $\varphi$ by dilation. This is the case, for example, for exponential B-splines, exponential box splines, and various radial basis functions. In this section we briefly discuss what seems to be the simplest example in this direction: the exponential B-splines.

The exponential B-spline $N_{\lambda} := N_{\lambda}(\cdot\,|\,0, \ldots, n)$ is a generalization of the (polynomial) B-spline $N(\cdot\,|\,0, \ldots, n)$. It can be defined by its Fourier transform as follows. Let $\lambda$ be a parameter vector $(\lambda_1, \ldots, \lambda_n) \in \mathbb{C}^n$. Then
$$\hat{N}_{\lambda}(y) = \prod_{\nu=1}^{n} \frac{e^{\lambda_\nu - iy} - 1}{\lambda_\nu - iy}.$$

Somewhat hidden: pp. 150-153
Statistical dependence in transform domain

Ortho-expansion of a (stationary) SαS AR(1) process: $s = (\mathrm{D} - \alpha_1\mathrm{I})^{-1}w$ with $w$: $\alpha$-stable white noise

[Plot: mutual information (log scale, $10^{-1}$ to $10^{1}$) versus $\alpha \in (0, 2]$ for the Identity, DCT/KLT, Haar wavelet, Operator-like (non-stationary) wavelet, and Optimal (ICA) transforms; the process is sparser for small $\alpha$ and Gaussian at $\alpha = 2$; $e^{\alpha_1} = 0.9$, $M = 64$]
(Pad-U., IEEE Trans. Sig. Proc. 2016)
Gaussian: Fourier analysis (Norbert Wiener)
vs.
Sparse: wavelet analysis (Paul Lévy)

In between: Splines! (Isaac Schoenberg)
http://www.sparseprocesses.org
ebook: web preprint
[Hub diagram: Splines at the center, connected to wavelet theory, sampling theory, stochastic processes, estimation theory, regularization theory, linear systems theory, approximation theory, functional analysis, signal processing, numerical analysis, and partial differential equations]
MMSE reconstruction of a Gaussian process

Estimation of stochastic process from sample values
Measurement operator: $\nu : s \mapsto \left(s(x_1), \dots, s(x_M)\right)$
Statistical estimator: $\tilde{s}(x \,|\, \nu(s) = y) : (x, y) \mapsto \mathbb{R}$

Reconstruction of Brownian motion from its samples
Underlying stochastic differential equation: $\mathrm{D}s = w$ s.t. $s(0) = 0$

Minimum mean-square error estimator given $\nu(s) = y$:
$\tilde{s}_{\mathrm{MMSE}}(x|y) = \mathbb{E}\{s(x) \,|\, \nu(s) = y\} = \mathbb{E}\left\{s(x) \,\middle|\, s(x_1) = y_1, \dots, s(x_M) = y_M\right\}$
$= \arg\min_f \|\mathrm{D}f\|_{L_2}^2 \ \text{s.t.}\ \left(f(0) = 0,\ f(x_1) = y_1, \dots, f(x_M) = y_M\right)$
= piecewise linear interpolator (Lévy 1934); kriging (Matheron 1963)
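The conditional-expectation property can be illustrated by Monte-Carlo simulation (the endpoint values and sample counts below are arbitrary choices): averaging many Brownian bridges pinned at two measured values reproduces Lévy's piecewise-linear interpolator between them.

```python
import numpy as np

# Monte-Carlo check that the MMSE estimator of Brownian motion between two
# measured samples is the linear interpolator: the mean of many Brownian
# bridges pinned at (0, ya) and (1, yb) converges to the straight line.
rng = np.random.default_rng(0)
ya, yb = 0.3, -0.5                       # illustrative measured values
n_paths, n_steps = 50_000, 200
dt = 1.0 / n_steps
t = np.linspace(0.0, 1.0, n_steps + 1)

dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)

# Standard bridge construction: pin each path to the observed endpoint values.
bridge = ya + W - t[None, :] * (W[:, -1:] - (yb - ya))
cond_mean = bridge.mean(axis=0)
linear = ya + (yb - ya) * t

print(np.max(np.abs(cond_mean - linear)))   # Monte-Carlo error, near 0
```

Between several samples the same argument applies interval by interval, which is why the full MMSE reconstruction is the piecewise-linear interpolator.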
MMSE reconstruction of a Gaussian process

Estimation of stochastic process from sample values
Measurement operator: $\nu : s \mapsto \left(s(0), s(x_1), \dots, s(x_M)\right)$
Statistical estimator: $\tilde{s}(x \,|\, \nu(s) = y) : (x, y) \mapsto \mathbb{R}$

Reconstruction of fractional Brownian motion from its samples
Underlying stochastic differential equation: $\mathrm{D}^\gamma s = w$ s.t. $s^{(n)}(0) = 0$, $n = 0, \dots, \lfloor \gamma - \tfrac{1}{2}\rfloor$

Minimum mean-square error estimator given $\nu(s) = y$:
$\tilde{s}_{\mathrm{MMSE}}(x|y) = \mathbb{E}\{s(x) \,|\, \nu(s) = y\} = \mathbb{E}\left\{s(x) \,\middle|\, s(x_1) = y_1, \dots, s(x_M) = y_M\right\}$
$= \arg\min_f \|\mathrm{D}^\gamma f\|_{L_2}^2 \ \text{s.t.}\ \left(f(0) = 0, \dots, f^{(n_0)}(0) = 0,\ f(x_1) = y_1, \dots, f(x_M) = y_M\right)$
= fractional spline interpolator (Blu-U., 2003)
Epilogue: Splines and biomedical imaging

Image processing task | Specific operation | Imaging modality
Tomographic reconstruction | • Filtered backprojection • Fourier reconstruction • Iterative techniques • 3D + time | Commercial CT (X-rays), EM, PET, SPECT; dynamic CT, SPECT, PET
Sampling grid conversion | • Polar-to-Cartesian coordinates • Spiral sampling • k-space sampling • Scan conversion | Ultrasound (endovascular); spiral CT, MRI; MRI
2D operations | • Zooming, panning, rotation • Re-sizing, scaling | All
2D operations | • Stereo imaging • Range, topography | Fundus camera; OCT
3D operations | • Re-slicing • Max. intensity projection • Simulated X-ray projection | CT, MRI, MRA
Visualization | Surface/volume rendering: • Iso-surface ray tracing • Gradient-based shading • Stereogram | CT; MRI
Geometrical correction | • Wide-angle lenses • Projective mapping • Aspect ratio, tilt • Magnetic field distortions | Endoscopy; C-arm fluoroscopy; dental X-rays; MRI
Registration | • Motion compensation • Image subtraction • Mosaicking • Correlation-averaging • Patient positioning • Retrospective comparisons • Multi-modality imaging • Stereotactic normalization • Brain warping | fMRI, fundus camera; DSA; endoscopy, fundus camera, EM microscopy; surgery, radiotherapy; CT/PET/MRI
Feature detection | • Contours • Ridges • Differential geometry | All
Contour extraction | • Snakes and active contours | MRI, microscopy (cytology)
Box splines

[Figure: the Zwart-Powell element obtained as a convolution of elementary box splines]

Box spline with direction set $\Xi := [\xi_1\ \xi_2\ \cdots\ \xi_N]$ (de Boor-Höllig-Riemenschneider, 1993):
$M_\Xi(x) = \left(M_{\xi_1} * \cdots * M_{\xi_N}\right)(x)$

Fourier transform:
$\hat{M}_\Xi(\omega) = \prod_{n=1}^{N} \frac{1 - \exp\left(-\mathrm{j}\langle \xi_n, \omega\rangle\right)}{\mathrm{j}\langle \xi_n, \omega\rangle}$

Elementary box splines
Primary box spline: $M_{e_1}(x) = \mathrm{box}(x_1)\,\delta(x_2, \dots, x_d)$ with $\mathrm{box}(x) = \begin{cases} 1, & 0 \le x \le 1 \\ 0, & \text{otherwise} \end{cases}$
Elementary box spline: $M_\xi$ with $\xi \in \mathbb{R}^d$ = Dirac-like line distribution along $x = t\xi$, $t \in [0,1]$, with unit integral
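The Fourier-domain product formula can be sanity-checked in 1-D, where the box spline with direction set $\Xi = (1, 1, 1)$ reduces to the quadratic B-spline supported on $[0, 3]$; the grid sizes below are arbitrary choices for the sketch.

```python
import numpy as np

# Compare the spatial construction of M_(1,1,1) (threefold convolution of
# unit boxes) with the inverse FFT of the product formula
#   prod_n (1 - exp(-j w)) / (j w),  here with three identical 1-D directions.
N, T = 4096, 8.0
dx = T / N
x = np.arange(N) * dx

box = ((x >= 0) & (x < 1)).astype(float)
b_spatial = np.convolve(np.convolve(box, box) * dx, box) * dx
b_spatial = b_spatial[:N]             # support [0, 3] fits inside [0, T)

w = 2 * np.pi * np.fft.fftfreq(N, d=dx)
with np.errstate(divide="ignore", invalid="ignore"):
    term = (1 - np.exp(-1j * w)) / (1j * w)
term[w == 0] = 1.0                    # limit at w = 0: unit integral
b_fourier = np.real(np.fft.ifft(term**3)) / dx

print(np.max(np.abs(b_spatial - b_fourier)))   # small discretization error
```

The same product structure over $\langle \xi_n, \omega\rangle$ is what makes the multivariate Fourier formula above so convenient for discretization.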
Radon / X-ray transform of box splines (Entezari-U., IEEE TMI 2012)

[Figure: 2-D box spline with directions $\xi_1, \xi_2$ projected along $\Theta = (\cos\theta, \sin\theta)$ onto the detector axis $y$, yielding the 1-D directions $\zeta_1, \zeta_2$]

Proposition. The Radon/X-ray transform of a box spline is a box spline with projected direction vectors:
$P_\theta\{M_\Xi\}(y) = M_{P_{\theta^\perp}\Xi}(y)$
[Figure: parallel-ray projection geometry with angle $\theta$ and detector coordinate $t$; image domain coordinates $(x_1, x_2)$]

Forward model: $p = \mathrm{H}c$

Box-spline discretization: $f(x) = \sum_{k\in\mathbb{Z}^2} c[k]\,\varphi(x - k)$

System matrix: $[\mathrm{H}]_{m,k} = P_{\theta_m}\{\varphi(\cdot - k)\}(t_m)$

Variational image reconstruction (via iterative algorithm):
$c_{\mathrm{rec}} = \arg\min_{c\in\mathbb{R}^N}\ \underbrace{\|y - \mathrm{H}c\|_2^2}_{\text{data consistency}} + \lambda\,\underbrace{\|\mathrm{L}c\|_p^p}_{\text{regularization}}$

Total variation regularization: $p = 1$ and $\mathrm{L}$: discrete gradient
⇒ state of the art for compressed sensing (Candès-Romberg-Tao; Donoho, 2006)
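A minimal sketch of such an iterative reconstruction, assuming the ISTA (proximal-gradient) algorithm and simplifying the regularizer to $\mathrm{L} = \mathrm{I}$ (plain $\ell_1$) so that the proximal step is a soft threshold; a random matrix stands in for the projection matrix $\mathrm{H}$, and all sizes are illustrative:

```python
import numpy as np

def ista(H, y, lam, n_iter):
    # Proximal gradient for  min_c ||y - H c||_2^2 + lam * ||c||_1 :
    # gradient step on the data term, then soft-thresholding (the prox of
    # the scaled l1 norm).
    L = np.linalg.norm(H, 2) ** 2                 # step-size normalization
    c = np.zeros(H.shape[1])
    for _ in range(n_iter):
        g = c - (1.0 / L) * (H.T @ (H @ c - y))   # gradient step
        c = np.sign(g) * np.maximum(np.abs(g) - lam / (2 * L), 0.0)
    return c

rng = np.random.default_rng(1)
c_true = np.zeros(50)
c_true[[5, 20, 33]] = [2.0, -1.5, 1.0]            # sparse ground truth
H = rng.normal(size=(30, 50)) / np.sqrt(30)       # underdetermined system
y = H @ c_true                                    # noiseless measurements

c_rec = ista(H, y, lam=1e-3, n_iter=5000)
print(np.linalg.norm(c_rec - c_true))             # close to 0: sparse recovery
```

With a true TV (gradient) regularizer the data term is handled the same way, but the proximal step requires an inner solver (e.g., Chambolle's algorithm) instead of a componentwise threshold.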
Continuous-domain formulation of inverse problem

Ill-posed problem: recover $f : \mathbb{R}^d \to \mathbb{R}$ from the noisy measurements $y \in \mathbb{R}^M$

Linear forward model (continuum to discrete): $y = \mathrm{H}(f) + n$ with noise $n$ and $M$ linear functionals
$\mathrm{H} : \mathcal{X}_{\mathrm{L}} \to \mathbb{R}^M : f \mapsto \left(\langle h_1, f\rangle, \dots, \langle h_M, f\rangle\right)$, with $f \in \mathcal{X}_{\mathrm{L}}$ (native space)

Variational formulation:
$f_{\mathrm{rec}} = \arg\min_{f\in\mathcal{X}_{\mathrm{L}}(\mathbb{R}^d)} \left(\sum_{m=1}^{M} |y_m - \langle h_m, f\rangle|^2 + \lambda \|\mathrm{L}f\|\right)$

$\|\mathrm{L}f\|$: $\ell_1$-like norm for compressed sensing
Proper continuous counterpart of $\ell_1$

Space of real-valued, countably additive Borel measures on $\mathbb{R}^d$:
$\mathcal{M}(\mathbb{R}^d) = \left(C_0(\mathbb{R}^d)\right)' = \left\{ w \in \mathcal{S}'(\mathbb{R}^d) : \|w\|_{\mathcal{M}} = \sup_{\varphi\in\mathcal{S}(\mathbb{R}^d):\|\varphi\|_\infty=1} \langle w, \varphi\rangle < \infty \right\}$,
where $w : \varphi \mapsto \langle w, \varphi\rangle = \int_{\mathbb{R}^d} \varphi(r)\,w(r)\,\mathrm{d}r$

Equivalent definition of "total variation" norm:
$\|w\|_{\mathcal{M}} = \sup_{\varphi\in C_0(\mathbb{R}^d):\|\varphi\|_\infty=1} \langle w, \varphi\rangle$

Basic inclusions:
$\delta(\cdot - x_0) \in \mathcal{M}(\mathbb{R}^d)$ with $\|\delta(\cdot - x_0)\|_{\mathcal{M}} = 1$ for any $x_0 \in \mathbb{R}^d$
$\|f\|_{\mathcal{M}} = \|f\|_{L_1(\mathbb{R}^d)}$ for all $f \in L_1(\mathbb{R}^d)$ $\Rightarrow$ $L_1(\mathbb{R}^d) \subseteq \mathcal{M}(\mathbb{R}^d)$
Representer theorem for gTV regularization

$(P1)\qquad \arg\min_{f\in\mathcal{M}_{\mathrm{L}}(\mathbb{R}^d)} \left(\sum_{m=1}^{M} |y_m - \langle h_m, f\rangle|^2 + \lambda\,\|\mathrm{L}f\|_{\mathcal{M}}\right)$

$\mathrm{L}$: spline-admissible operator with null space $\mathcal{N}_{\mathrm{L}} = \mathrm{span}\{p_n\}_{n=1}^{N_0}$
gTV semi-norm: $\|\mathrm{L}\{s\}\|_{\mathcal{M}} = \sup_{\|\varphi\|_\infty \le 1} \langle \mathrm{L}\{s\}, \varphi\rangle$
Measurement functionals $h_m : \mathcal{M}_{\mathrm{L}}(\mathbb{R}^d) \to \mathbb{R}$ (weak*-continuous)

Representer theorem for gTV regularization (U.-Fageot-Ward, SIAM Review 2017): The extreme points of (P1) are non-uniform L-splines of the form
$f_{\mathrm{spline}}(x) = \sum_{k=1}^{K_{\mathrm{knots}}} a_k\,\rho_{\mathrm{L}}(x - x_k) + \sum_{n=1}^{N_0} b_n\,p_n(x)$
with $\rho_{\mathrm{L}}$ such that $\mathrm{L}\{\rho_{\mathrm{L}}\} = \delta$, $K_{\mathrm{knots}} \le M - N_0$, and $\|\mathrm{L}f_{\mathrm{spline}}\|_{\mathcal{M}} = \|a\|_{\ell_1}$.

⇒ splines are universal solutions of linear inverse problems
CONCLUSION

■ Splines: a unifying mathematical framework
  ■ Link between continuous and discrete theories
  ■ Applicable to many areas of science and engineering
■ A powerful set of tools
  ■ B-splines, etc.
  ■ Best cost/performance tradeoff
  ■ Optimality and universality
■ An endless source of inspiration
■ Current frontiers
  ■ Non-linear algorithms, optimization in Banach space
  ■ Sparsity
  ■ Deep learning
Acknowledgments

Many thanks to
■ Prof. Akram Aldroubi
■ Prof. Thierry Blu
■ Dr. Pouya Tafti
■ Dr. Julien Fageot
■ Prof. John-Paul Ward
■ Dr. Philippe Thévenaz
■ Prof. Alireza Entezari
■ Annette Unser, Artist
+ many other researchers and graduate students
Selected references

■ Preprints and demos: http://bigwww.epfl.ch/

Splines and signal processing
M. Unser, "Splines: A Perfect Fit for Signal and Image Processing," IEEE Signal Processing Magazine, 16(6), pp. 22-38, 1999.
M. Unser, "Sampling—50 Years After Shannon," Proc. of the IEEE, 88(4), pp. 569-587, 2000.
M. Unser, "Cardinal Exponential Splines: Part II—Think Analog, Act Digital," IEEE Trans. Signal Processing, 53(4), pp. 1439-1449, 2005.

Splines and imaging
P. Thévenaz, T. Blu, M. Unser, "Interpolation Revisited," IEEE Trans. Medical Imaging, 19(7), pp. 739-758, 2000.
A. Entezari, M. Nilchian, M. Unser, "A Box Spline Calculus for the Discretization of Computed Tomography Reconstruction Problems," IEEE Trans. Med. Imag., vol. 31, pp. 1532-1541, 2012.
M. Unser, J. Fageot, J.P. Ward, "Splines Are Universal Solutions of Linear Inverse Problems with Generalized-TV Regularization," SIAM Review, 59(4), pp. 769-793, 2017.

Splines and stochastic processes
T. Blu, M. Unser, "Self-Similarity: Part II—Optimal Estimation of Fractal Processes," IEEE Trans. Signal Processing, 55(4), pp. 1364-1378, 2007.
M. Unser, P. Tafti, An Introduction to Sparse Stochastic Processes, Cambridge University Press, 2014. Preprint available at http://www.sparseprocesses.org.
Carl de Boor
and many thanks for your inspirational works …