
Future-Sequential Regularization Methods for

Ill-Posed Volterra Equations ∗

Applications to the Inverse Heat Conduction Problem

Patricia K. Lamm

Department of Mathematics

Michigan State University

East Lansing, MI 48824-1027

Abstract

We develop a theoretical context in which to study the future-sequential regularization

method developed by J. V. Beck for the Inverse Heat Conduction Problem. In the process, we

generalize Beck’s ideas and view that method as one in a large class of regularization methods

in which the solution of an ill-posed first-kind Volterra equation is seen to be the limit of a se-

quence of solutions of well-posed second-kind Volterra equations. Such techniques are important

because standard regularization methods (such as Tikhonov regularization) tend to transform

a naturally-sequential Volterra problem into a full-domain Fredholm problem, destroying the

underlying causal nature of the Volterra model and leading to inefficient global approximation

strategies. In contrast, the ideas we present here preserve the original Volterra structure of the

problem and thus can lead to easily-implemented localized approximation strategies.

Theoretical properties of these methods are discussed and proofs of convergence are given.

∗This research was supported in part by the U. S. Air Force Office of Scientific Research under contract AFOSR-

89-0419 and by the Clare Boothe Luce Foundation, NY, NY.


1. Introduction.

Linear and nonlinear Volterra integral equations arise in many applications, for example, in models

of population dynamics, for the transport of charged particles in a turbulent plasma, and in the

transmission of an epidemic through a fixed-size population [7]. A particular application of interest

here is the first-kind Volterra equation for the Inverse Heat Conduction Problem, an inverse problem

associated with the partial differential equation describing heat conduction, which we consider in

some detail in the next section. In this and many other examples, the underlying problem of interest

may be expressed as

    ∫_0^t k(t − s) u(s) ds = f(t),   t ∈ [0, 1],   (1.1)

a first-kind equation with convolution kernel k ∈ C([0, 1]; IRn×n) and given data f ∈ L2((0, 1); IRn).

We rewrite this equation as a linear operator equation,

Au = f,

where both f and the Volterra integral operator A (a bounded linear operator from L2( (0, 1); IRn)

to itself) are given, and the goal is to determine the solution u ∈ L2( (0, 1); IRn). We will assume

throughout that the original data f is such that existence of a unique solution u of equation (1.1)

is guaranteed (see, [7], for example, for conditions guaranteeing such a hypothesis is met; clearly

f ∈ C([0, 1]; IRn) and f(0) = 0 are necessary conditions), and will focus primarily on the well-known

instability problems associated with solving this ill-posed problem, i.e., with obtaining the solution

u = A−1f when it is necessarily the case that A−1 is unbounded on L2( (0, 1); IRn) whenever the

range of A is not closed [8]. This question is important because typically one has noise in the

data and, in fact, one uses a less-smooth perturbation f δ of f in (1.1). Indeed the degree of

instability associated with inversion of the operator A may be quantified depending on properties

of the Volterra kernel k [10], a notion which gives useful information about the degree to which

perturbations in data f corrupt the solution u and about the best possible accuracy one would

hope to achieve from any solution method for such a problem.


In this paper we analyze a very effective solution/stabilization method for the inversion of linear

Volterra operators of convolution type. In particular, we develop a theoretical context in which to

study the “future-sequential” regularization method developed by J. V. Beck for the Inverse

Heat Conduction Problem, establishing for the first time the convergence of this regularization

method for a special class of “finitely smoothing” Volterra problems. In the process, we generalize

Beck’s ideas and are able to view the future-sequential method as a special case in a class of

regularization methods in which the solution of an ill-posed, first-kind Volterra equation is found

to be the limit of a sequence of solutions of well-posed, second-kind Volterra equations. In what

follows we define second-kind equations of the form

    ∫_0^t k̄(t − s; ∆r) u(s) ds + α(∆r) u(t) = F(t; ∆r),

where α(∆r) is an n×n constant matrix, and k̄(·; ∆r) ∈ C([0, 1]; IRn×n) and F(·; ∆r) ∈ L2((0, 1); IRn)
are functions constructed using “future” values of k and f, respectively, in an effective manner.
Here k̄ has the general form

    k̄(t) = ∫_0^{∆r} k(t + ρ) dη_{∆r}(ρ)

for all t ∈ [0, 1] (so that we must extend k past the original interval [0, 1]), where η_{∆r} is a suitable

measure on the Borel subsets of IR and ∆r is the length of the “future” interval. As we shall later see,

∆r performs the role of a “regularization parameter” in the presence of noisy data. This approach

can also be viewed as an effective tool for approximation in finite-dimensional discretizations of the

original problem (as Beck’s work for the Inverse Heat Conduction Problem has been viewed for

over thirty years), or, just as important, as an infinite-dimensional regularization method for the

Volterra operator equation.

There are good reasons to select a method of this type over a method such as Tikhonov regular-

ization (see, for example, [8]) in order to solve the infinite-dimensional equation Au = f ; we recall

that the Tikhonov method is implemented via the selection of a regularization parameter β > 0


and through the subsequent minimization of a quadratic functional

    J_β(u) = ‖Au − f‖² + β‖Lu‖²

over u ∈ L2( (0, 1); IRn), where L is a closed operator satisfying certain well-known assumptions

and ‖·‖ is an appropriate norm [12]. This approach leads to the solution of the infinite-dimensional

operator equation

    (A*A + βL*L) u = A*f

for a β-dependent solution u_β which depends continuously on data f so long as β > 0. Standard
theory shows that, for u^δ_β the minimizer of J_β using data f^δ, there is a choice of β = β(δ) such that,
as the level δ of noise converges to zero, one has approximations u^δ_{β(δ)} converging to the solution u

of (1.1). This well-studied method, though effective, has the following distinct disadvantage in the

case of Volterra equations. The original Volterra problem Au = f is a causal problem which may

be solved sequentially in time, that is, first solve

    ∫_0^t k(t − s) u(s) ds = f(t),   t ∈ [0, t_1],

for u_1 ∈ L2((0, 1); IRn). Then, holding u_1 fixed, solve the same equation on the interval [t_1, t_2] for
u_2 ∈ L2((0, 1); IRn); i.e., u_2 solves

    ∫_0^{t_1} k(t − s) u_1(s) ds + ∫_{t_1}^t k(t − s) u(s) ds = f(t),   t ∈ [t_1, t_2],

and so on, until the full solution is obtained on the desired interval (in the case of data outside the

range of A, one could solve sequential least-squares problems). Applying Tikhonov regularization

to this problem destroys its causal nature since the operator A*A is no longer of Volterra type, and

thus a naturally-sequential problem is transformed into a “full-domain” problem requiring both past

and future values for solution. And even if one implements Tikhonov regularization without solving

the normal equations (see, for example, [6] for efficient methods for convolution-type equations),

one still cannot typically avoid the use of all (past and future) data at every point in time. Thus,


one of the advantages of using “future”-based methods over the Tikhonov method lies in the fact

that one is able to preserve the structure of the original Volterra equation, and thus partial-domain,

or sequential, solution methods may be used.

We note that another standard way to regularize linear ill-posed operator equations is via finite-

dimensional discretizations (see, for example, [8, 14, 15]), since finite-dimensional linear equations

are always stable. The regularization parameter in this case then becomes the meshsize associated

with the underlying discretization, and the appropriate size of the mesh is always linked closely to

the amount of expected noise (data measurement error, or computational/round-off error) in the

problem. An effective discretization/regularization technique selects the meshsize according to the

level of error, and allows this meshsize to shrink to zero only as errors in the problem also decrease

to zero. The result is that, in order to stabilize ill-posed problems via discretization, the meshsize

must often be held at an unacceptably large value, leading to poor approximation. In contrast,

discretized future-sequential methods relax considerably the constraints on the meshsize, allowing

for a finer grid and dramatically better approximations. The details of the theoretical analysis of

a discretized version of this problem are given in [11], along with corresponding numerical results.

In Section 1 below we describe the Beck method as it is applied to the Inverse Heat Conduction

Problem. We examine the way in which this method may be viewed as a transformation of an

unstable first-kind Volterra equation into a well-posed second-kind equation. Using a second-kind

equation to approximate the solution of a first-kind equation is a classical procedure (see, for

example, [4, 5, 9, 13, 21]), but, as far as we know, we are the first to view the particular method

developed by Beck in such a manner. And in fact the second-kind equation generated by this

approach differs significantly from those considered in the literature.

In Section 2, we generalize the Beck ideas (which, to our knowledge, have to-date been applied

only in the context of finite-dimensional discretizations) and view the generalization as an infinite-

dimensional regularization technique for the original operator equation. The first complete proofs

of convergence of any form of this method are discussed in Section 3, where we focus on the infinite-


dimensional regularization problem for scalar-valued k and f . Even though the theory discussed

there is immediately applicable to problems in which the kernel k satisfies k(0) 6= 0, the framework

we develop involving second-kind Volterra equations suggests extension to more general kernels.

We first prove convergence in the absence of noise, and then extend the ideas to the case of noisy

data, illustrating how the future-sequential parameter ∆r should be selected such that we have

convergence both of ∆r to zero and of the regularized approximations to the solution u of the

original (noise-free) problem (1.1) as the level δ of noise converges to zero.

Our notation is completely standard, using, for example, expressions such as L2( (0, 1); IRn) for

IRn-valued square-integrable “functions” defined on (0, 1), and L2(0, 1) for the special case of n = 1.

2. Sequential and Future-Sequential Solution of the Inverse Heat

Conduction Problem.

The Inverse Heat Conduction Problem (IHCP) is typically stated as the problem of determining,

from internal temperature or temperature-flux measurements, the unknown heat (or heat flux)

source which is being applied at the surface of a solid. Measurements at various internal spatial

locations are taken over the course of time, the goal being to reconstruct a time-varying function

representing the temperature history at the surface of the solid. For example, if we consider the

problem of recovering a heat source u(t) at the boundary x = 0 of a one-dimensional semi-infinite

bar, the governing partial differential equation (for zero initial heat distribution) is

    w_t = w_{xx},   0 < x < ∞,  t > 0,
    w(0, t) = u(t),   t > 0,
    w(x, 0) = 0.

If data is collected at the spatial location x = 1 (i.e., unperturbed measurements are given by

f(t) ≡ w(1, t) ∈ IR), then the unknown source u is the solution of the first-kind equation (1.1),


where, in this case, k is the scalar-valued kernel

    k(t) = (1 / (2√π t^{3/2})) exp(−1/(4t))

[3]. This problem arises naturally in numerous applied settings, for example, in the determination

of the temperature profile of the surface of a space shuttle during its re-entry into the earth’s

atmosphere [2]. Physical considerations often make it necessary to measure the temperature at a

point interior to the solid, rather than at the heated surface where a temperature sensor is at risk of
being damaged. Unfortunately, the IHCP is severely ill-posed, with unstable dependence

of solutions on data.
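To illustrate the forward map just described, the following sketch (ours; the source u below is an arbitrary hypothetical choice, and the quadrature is not the numerical scheme used later in the paper) evaluates the IHCP kernel above and generates interior data f(t) = w(1, t) by quadrature.

    # Forward IHCP sketch: k(t) = exp(-1/(4t)) / (2 sqrt(pi) t^{3/2}),  f = k * u.
    import numpy as np
    from scipy.integrate import quad

    def k_ihcp(t):
        return 0.0 if t <= 0.0 else np.exp(-1.0 / (4.0 * t)) / (2.0 * np.sqrt(np.pi) * t ** 1.5)

    def forward_data(u, t):
        """f(t) = int_0^t k(t - s) u(s) ds for a prescribed boundary source u."""
        return quad(lambda s: k_ihcp(t - s) * u(s), 0.0, t)[0]

    u = lambda t: np.sin(2.0 * np.pi * t)             # hypothetical boundary source
    f_vals = [forward_data(u, t) for t in np.linspace(0.05, 1.0, 20)]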

J. V. Beck [2] has made significant contributions to the development of useful methods for

solving the IHCP. His ideas are based on a stabilizing modification of a widely-used method

for solving the IHCP equations, the so-called Stolz algorithm. Stolz’s idea is to take the IHCP

equations in first-kind integral form, Au = f , and to construct approximating equations based

on a collocation procedure utilizing approximations in an N -dimensional space of step-functions

defined on an equally-partitioned time interval. Thus Stolz exactly fits an N -part step-function to

N discrete temperature measurements; one may show that there is always a unique solution to the

N -level discretized problem and that solution of the equations is very simple (and in fact may be

done “sequentially”) because the matrix governing the approximation is lower triangular.

Unfortunately, the Stolz method is also highly unstable, with oscillations entering into the solu-

tion for even small N . Beck’s proposed “future estimation method” modifies the Stolz algorithm

and uses r − 1 future temperature (flux) measurements to estimate the source temperature (flux)

at a given time; here r ≥ 1 is an integer. As is seen in the example below, numerical testing

indicates that Beck’s “future sequential estimation” method stabilizes Stolz’s algorithm, with the

degree of “stabilizability” related to the number r − 1 of future measurements used at each step.

In addition, Beck makes convincing arguments from a physical point of view which lead one to

expect his method to exhibit genuine regularizing characteristics. But as yet no one has performed


a complete mathematical convergence/regularization analysis of this very popular and widely-used

method. The goal of this paper is to correct this shortcoming, at least for some general classes of

Volterra equations, and to construct a class of generalized regularization methods for which Beck’s

method is a special case.

We begin here with a detailed discussion of both the Stolz and Beck methods for a general

Volterra convolution equation. For simplicity, we assume throughout the remainder of this section

that the convolution kernel k is real-valued and continuous on [0, T ] for some T > 1 and that

k(t) ≠ 0 for t ∈ (0, T] (mirroring the properties of the IHCP kernel). Let N = 1, 2, . . . be fixed,
let ∆t ≡ 1/N and t_i ≡ i∆t for i = 0, 1, . . . , N. We designate the space of piecewise-constant
functions on [0, 1] by S_N ≡ span{χ_i}, where χ_i is the characteristic function defined by χ_i(t) = 1
for t_{i−1} < t ≤ t_i, and χ_i(t) = 0 otherwise; χ_1(0) = 1. We then seek q ∈ S_N solving the collocation

equations

    Aq(t_i) = f(t_i)   (2.1)

for i = 1, 2, . . . , N. Expressing q in terms of the basis for S_N we have q = ∑_{i=1}^N c_i χ_i for some
c_i ∈ IR, and observe that

    Aq(t_j) = ∫_0^{t_j} k(t_j − s) ( ∑_{i=1}^N c_i χ_i(s) ) ds
            = ∑_{i=1}^j c_i ∫_{t_{i−1}}^{t_i} k(t_j − s) ds
            = ∑_{i=1}^j c_i ∫_0^{t_1} k(t_{j−i+1} − s) ds.

Thus, defining ∆_i ≡ ∫_0^{t_1} k(t_i − s) ds for i = 1, 2, . . ., we may write (2.1) in matrix form as A^N c = f^N,


where c = (c_1, c_2, . . . , c_N)^T ∈ IR^N and

    A^N = [ ∆_1        0          0        . . .   0
            ∆_2        ∆_1        0        . . .   0
            ∆_3        ∆_2        ∆_1      . . .   0
             ⋮          ⋮                   ⋱       ⋮
            ∆_N        ∆_{N−1}    . . .    ∆_2     ∆_1 ],

    f^N = ( f(t_1), f(t_2), f(t_3), . . . , f(t_N) )^T.

Due to the assumptions on the kernel k, the diagonal entries ∆1 of AN are nonzero and thus the

Stolz equations may be solved sequentially (via forward substitution) for ci, i = 1, . . . , N . However,

the ill-posedness of the original problem leads to poor-conditioning of the matrix AN , especially as

∆1 gets close to zero (which happens quickly as N grows, especially if k and/or one or more of its

derivatives is zero at t = 0; indeed, this is true for all derivatives at t = 0 of the kernel associated

with the IHCP). Thus, for even moderate values of N , errors made in calculating c1, c2, and so

on, are propagated to later ci, making the Stolz approach an unreasonable method for solving the

IHCP, or even for the solution of better-conditioned finitely-smoothing Volterra equations [10], as

is evident in the example given below.
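For concreteness, a minimal sketch of the Stolz procedure is given below; the midpoint quadrature used for the weights ∆_i and the name stolz_solve are our own choices, not the paper's.

    # Stolz collocation A^N c = f^N, solved sequentially by forward substitution.
    import numpy as np

    def stolz_solve(k, f_vals, N):
        """k: kernel; f_vals: data f(t_1), ..., f(t_N); N: number of subintervals."""
        dt = 1.0 / N
        # Delta_i = int_0^{t_1} k(t_i - s) ds  ~  k(t_i - dt/2) * dt   (midpoint rule)
        Delta = np.array([k(i * dt - 0.5 * dt) * dt for i in range(1, N + 1)])
        c = np.zeros(N)
        for i in range(N):                            # forward substitution, row by row
            conv = sum(c[j] * Delta[i - j] for j in range(i))
            c[i] = (f_vals[i] - conv) / Delta[0]      # small Delta_1 amplifies data errors
        return c                                      # piecewise-constant coefficients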

In addition to being poorly-conditioned, the Stolz approach has other shortcomings, which are

seen as follows. Assume that c1, c2, . . . , ci−1 have been determined. With the Stolz method, one

computes c_i such that c_1∆_i + c_2∆_{i−1} + · · · + c_{i−1}∆_2 + c_i∆_1 matches f(t_i) exactly. Thus, c_i, the

coefficient of the “input” basis function with support on (ti−1, ti], is selected using “output” f(ti)

and using prior values c1, c2, . . . , ci−1 (which were computed using prior values of f). However the

nature of a Volterra equation is such that the “output” at time t is only influenced by “input” at

times prior to t, so that it makes sense to use later data values f(ti+1), f(ti+2), . . . to determine ci.

The Stolz method for Volterra equations, although simple in its sequential nature, has the peculiar

disadvantage of making the selection of the current ci independent of the future values of data.

The approach taken by Beck uses future data values in the computation of ci; the result is a

sequential algorithm which, as later sections will show, is actually regularizing in the presence of


data error (i.e., perturbations in f). For Beck’s approach, each ci is determined to be the optimal

value one would use if forced to use ci as the value of the present coefficient as well as the value

of r − 1 future coefficients while performing data-fitting to r − 1 future data points (see [2] for

the rationale on using less than the entire set of remaining future data points at each step). To

illustrate, we suppose that r has been fixed, and select c1 minimizing the least squares fit-to-data

J1,

    J_1(c_1) ≡ |A(c_1χ_1)(t_1) − f(t_1)|² + |A(c_1(χ_1 + χ_2))(t_2) − f(t_2)|² + · · ·
               + |A(c_1(χ_1 + χ_2 + · · · + χ_r))(t_r) − f(t_r)|².

After making this selection of c_1, we then hold c_1 fixed and choose c_2 minimizing

    J_2(c_2) ≡ |A(c_1χ_1 + c_2χ_2)(t_2) − f(t_2)|² + |A(c_1χ_1 + c_2(χ_2 + χ_3))(t_3) − f(t_3)|² + · · ·
               + |A(c_1χ_1 + c_2(χ_2 + · · · + χ_{r+1}))(t_{r+1}) − f(t_{r+1})|²,

and so on. As the end of the interval nears and there are fewer than r − 1 future data points, one

can either use data which is outside the interval [0, 1] (i.e., use data at ti = i/N, i = N + 1, N +

2, . . . , N + r − 1) or one can use fewer future data points, in effect all the remaining data points,

in the formation of a least squares fit-to-data criterion. For simplicity in what follows, we take the

first approach and assume that data is given on the interval [0, T ], for some T > 1.
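The following sketch implements this sequential least-squares update for the piecewise-constant discretization described above; the midpoint quadrature for the ∆_i and the name beck_solve are our own choices, and the case r = 1 reproduces the Stolz step.

    # Beck's future-sequential step: c_i is fit to the current datum and r-1 future data,
    # holding c_1, ..., c_{i-1} fixed (data beyond t = 1 is assumed available).
    import numpy as np

    def beck_solve(k, f_vals, N, r):
        """f_vals must supply f(t_1), ..., f(t_{N+r-1})."""
        dt = 1.0 / N
        Delta = np.array([k(i * dt - 0.5 * dt) * dt for i in range(1, N + r)])
        Dbar = np.cumsum(Delta)                       # Dbar[m-1] = Delta_1 + ... + Delta_m
        denom = np.sum(Dbar[:r] ** 2)                 # sum_{m=1}^r Dbar_m^2
        c = np.zeros(N)
        for i in range(N):                            # sequential sweep over c_1, ..., c_N
            num = 0.0
            for m in range(r):                        # current + r-1 future data points
                prev = sum(c[j] * Delta[i + m - j] for j in range(i))
                num += Dbar[m] * (f_vals[i + m] - prev)
            c[i] = num / denom
        return c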

We illustrate here the results of application of both the Stolz method and Beck’s method to a

particular non-IHCP example (see [2] for many examples illustrating the effectiveness of the future-

sequential method for model equations associated with the IHCP).

Example 2.1: We consider the problem of approximating the solution of (1.1) in the case of

k(t) = 2 + t e^t and true solution given by u(t) = t cos(4t). Using k and u, unperturbed data f is given
by the exact integral ∫_0^t k(t − s) u(s) ds for t ∈ [0, 1], while a perturbation f^δ of f is computed to
be f^δ(t) = f(t) + d(t), where d is uniformly distributed random error with ‖d‖_∞ ≤ .05 ‖f‖_∞ (here


‖ · ‖∞ denotes the usual L∞ norm). In this example, both the Stolz method and Beck’s method

were applied to the problem of approximating the solution of (1.1) in the case of perturbed data,

with N = 20 discrete time subintervals and piecewise constant approximations; the results are

shown in Figure 2.1 below for Stolz (i.e., with no future intervals, or r = 1), and for Beck with 1

future interval (r = 2) and 2 future intervals (r = 3).
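A possible driver for a setup like Example 2.1, reusing the hypothetical beck_solve sketch above (the quadrature for f and the particular noise realization are our own illustrative choices, made to mimic ‖d‖_∞ ≤ .05 ‖f‖_∞):

    import numpy as np
    from scipy.integrate import quad

    k = lambda t: 2.0 + t * np.exp(t)
    u_true = lambda t: t * np.cos(4.0 * t)
    f = lambda t: quad(lambda s: k(t - s) * u_true(s), 0.0, t)[0]   # exact data by quadrature

    N, r = 20, 2
    dt = 1.0 / N
    t_nodes = dt * np.arange(1, N + r)                # t_1, ..., t_{N+r-1}
    f_exact = np.array([f(t) for t in t_nodes])
    rng = np.random.default_rng(0)
    d = 0.05 * np.max(np.abs(f_exact)) * rng.uniform(-1.0, 1.0, f_exact.shape)
    c = beck_solve(k, f_exact + d, N, r)              # compare with u_true on (t_{i-1}, t_i]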

In order to motivate what follows, we look more closely at the special case of r = 2 (one future

value); we recall that the idea is to choose c1 in order to minimize

    J_1(c_1) = (c_1∆_1 − f(t_1))² + (c_1(∆_2 + ∆_1) − f(t_2))²
             = (c_1∆_1 − f(t_1))² + (c_1∆̄_2 − f(t_2))²,

where ∆̄_i ≡ ∆_1 + · · · + ∆_i = ∫_0^{i∆t} k(i∆t − s) ds. That is, c_1 satisfies

    c_1 (∆̄_1² + ∆̄_2²) = ∆̄_1 f(t_1) + ∆̄_2 f(t_2).

Preserving this value for c_1, we next choose c_2 minimizing

    J_2(c_2) = (c_1∆_2 + c_2∆_1 − f(t_2))² + (c_1∆_3 + c_2(∆_2 + ∆_1) − f(t_3))²
             = (c_1∆_2 + c_2∆̄_1 − f(t_2))² + (c_1∆_3 + c_2∆̄_2 − f(t_3))²,

or, c_2 satisfying

    c_1 (∆̄_1∆_2 + ∆̄_2∆_3) + c_2 (∆̄_1² + ∆̄_2²) = ∆̄_1 f(t_2) + ∆̄_2 f(t_3),

and so on, until c3, c4, . . . , cN have been determined sequentially in this manner. The Beck equations

for r = 2 are thus given in matrix form by

    A^{N,2} c = f^{N,2}   (2.2)


where

    A^{N,2} = [ ∆̄_1² + ∆̄_2²              0                        0               . . .  0
                ∆̄_1∆_2 + ∆̄_2∆_3          ∆̄_1² + ∆̄_2²             0               . . .  0
                ∆̄_1∆_3 + ∆̄_2∆_4          ∆̄_1∆_2 + ∆̄_2∆_3         ∆̄_1² + ∆̄_2²    . . .  0
                  ⋮                          ⋮                                       ⋱      ⋮
                ∆̄_1∆_N + ∆̄_2∆_{N+1}      ∆̄_1∆_{N−1} + ∆̄_2∆_N     . . .   ∆̄_1∆_2 + ∆̄_2∆_3   ∆̄_1² + ∆̄_2² ],

    f^{N,2} = ( ∆̄_1 f(t_1) + ∆̄_2 f(t_2),  ∆̄_1 f(t_2) + ∆̄_2 f(t_3),  . . . ,  ∆̄_1 f(t_N) + ∆̄_2 f(t_{N+1}) )^T.

It is useful to rewrite the diagonal elements of A^{N,2} in the form

    ∆̄_1² + ∆̄_2² = (∆̄_1² + ∆̄_2∆_2) + ∆̄_2∆_1 = ∆̄_1 (s_1∆_1 + s_2∆_2) + s_2∆̄_1∆_1   (∆̄_1 = ∆_1),

where s_i = ∆̄_i/∆̄_1, i = 1, 2, . . .. In a similar manner, elements of A^{N,2} below the diagonal may be
expressed as ∆̄_1∆_i + ∆̄_2∆_{i+1} = ∆̄_1 (s_1∆_i + s_2∆_{i+1}), so that we have (after factoring ∆̄_1 from
each such element of A^{N,2}),

    (1/∆̄_1) A^{N,2} = [ s_1∆_1 + s_2∆_2        0                      0                    . . .  0
                        s_1∆_2 + s_2∆_3        s_1∆_1 + s_2∆_2        0                    . . .  0
                        s_1∆_3 + s_2∆_4        s_1∆_2 + s_2∆_3        s_1∆_1 + s_2∆_2      . . .  0
                          ⋮                      ⋮                                          ⋱      ⋮
                        s_1∆_N + s_2∆_{N+1}    s_1∆_{N−1} + s_2∆_N    . . .  s_1∆_2 + s_2∆_3   s_1∆_1 + s_2∆_2 ]
                      + s_2∆̄_1 I,


where I is the N ×N identity matrix. Similarly,

    (1/∆̄_1) f^{N,2} = ( s_1 f(t_1) + s_2 f(t_2),  s_1 f(t_2) + s_2 f(t_3),  . . . ,  s_1 f(t_N) + s_2 f(t_{N+1}) )^T.

It thus follows that equation (2.2) may be written in the form

    s_1 ( (A^N_1 + ∆̄_0 I) c − f^N_1 ) + s_2 ( (A^N_2 + ∆̄_1 I) c − f^N_2 ) = 0   (2.3)

where we have defined ∆̄_0 ≡ 0, and

    A^N_i = [ ∆_i          0            0          . . .   0
              ∆_{i+1}      ∆_i          0          . . .   0
              ∆_{i+2}      ∆_{i+1}      ∆_i        . . .   0
               ⋮            ⋮                       ⋱       ⋮
              ∆_{i+N−1}    ∆_{i+N−2}    . . .      ∆_{i+1}  ∆_i ],

    f^N_i = ( f(t_i), f(t_{i+1}), . . . , f(t_{N+i−1}) )^T.

In this form one can immediately see that equation (2.3) is the collocation equation associated with

exactly matching the infinite-dimensional equation,

    s_1 (A_1 u + ∆̄_0 u − S_1 f) + s_2 (A_2 u + ∆̄_1 u − S_2 f) = 0,   (2.4)

at collocation points t1, t2, . . . , tN , restricting solutions to lie in the approximation space SN ; here

A_i : L2(0, 1) → L2(0, 1) is defined for i = 1, . . . , r by

    A_i v(t) = ∫_0^t k(t + (i − 1)∆t − s) v(s) ds,   v ∈ L2(0, 1),

and S_i denotes the shift-and-restriction operator from L2(0, T) to L2(0, 1) given by

    S_i f(t) = f(t + (i − 1)∆t),   t ∈ [0, 1],  f ∈ L2(0, T).

Throughout it is assumed that T is large enough to ensure that [0, 1 + (r − 1)∆t] ⊆ [0, T ]. We note

that equation (2.4) may also be written

    ∫_0^t [ s_1 k(t − s) + s_2 k(t + ∆t − s) ] u(s) ds + (s_1∆̄_0 + s_2∆̄_1) u(t) = s_1 S_1 f(t) + s_2 S_2 f(t),   t ∈ (0, 1),


which is a second-kind integral equation whenever ∆1 > 0 (i.e., for all ∆t > 0, under the assump-

tions on k).

In the general case of fixed r ≥ 2 (note that r = 1 reduces to the Stolz equations), we select c1

minimizing

    J_1(c_1) = (c_1∆̄_1 − f(t_1))² + (c_1∆̄_2 − f(t_2))² + (c_1∆̄_3 − f(t_3))² + · · · + (c_1∆̄_r − f(t_r))²,

or, c_1 such that

    (∆̄_1² + ∆̄_2² + · · · + ∆̄_r²) c_1 = ∆̄_1 f(t_1) + ∆̄_2 f(t_2) + · · · + ∆̄_r f(t_r),

and then select c_2 minimizing

    (c_1∆_2 + c_2∆̄_1 − f(t_2))² + (c_1∆_3 + c_2∆̄_2 − f(t_3))² + · · · + (c_1∆_{r+1} + c_2∆̄_r − f(t_{r+1}))²,

or c_2 satisfying

    (∆̄_1∆_2 + ∆̄_2∆_3 + · · · + ∆̄_r∆_{r+1}) c_1 + (∆̄_1² + ∆̄_2² + · · · + ∆̄_r²) c_2
        = ∆̄_1 f(t_2) + ∆̄_2 f(t_3) + · · · + ∆̄_r f(t_{r+1}),

and so on.

Regarding the coefficient (∆̄_1² + · · · + ∆̄_r²) of c_2 above, we note that

    ∑_{i=1}^r ∆̄_i² = ∑_{i=1}^r ∆̄_i (∆_i + ∆̄_{i−1}) = ∆̄_1 ∑_{i=1}^r (s_i∆_i + s_i∆̄_{i−1}),

so that, arguing as before, it is not difficult to show that c satisfies the equations

    ∑_{i=1}^r s_i ( (A^N_i + ∆̄_{i−1} I) c − f^N_i ) = 0,

which are the corresponding discretized collocation equations for the infinite-dimensional problem,

    ∑_{i=1}^r s_i (A_i u + ∆̄_{i−1} u − S_i f) = 0,   (2.5)


or,

    ∫_0^t ( ∑_{i=1}^r s_i k(t + (i − 1)∆t − s) ) u(s) ds + u(t) ( ∑_{i=1}^r s_i ∆̄_{i−1} )
        = ∑_{i=1}^r s_i f(t + (i − 1)∆t),   t ∈ [0, 1],   (2.6)

where

    ∑_{i=1}^r s_i ∆̄_{i−1} = ∑_{i=1}^r s_i ∫_0^{(i−1)∆t} k((i − 1)∆t − s) ds.

Remark 2.1: Equation (2.5) is a weighted sum of terms of the form (A_i u + ∆̄_{i−1} u − S_i f), and it is
worthwhile to examine individual terms separately. If we assume that the true solution u satisfies

    Au(t) = f(t),   for all t ∈ [0, T],

for T ≥ 1 + (r − 1)∆t, then it follows that

    S_i Au(t) = S_i f(t),   for all t ∈ [0, 1],  i = 1, 2, . . . , r.

But

    S_i Au(t) = Au(t + (i − 1)∆t)
              = ∫_0^{t+(i−1)∆t} k(t + (i − 1)∆t − s) u(s) ds
              = ∫_0^t k(t + (i − 1)∆t − s) u(s) ds + ∫_0^{(i−1)∆t} k((i − 1)∆t − s) u(s + t) ds
              = A_i u(t) + ∫_0^{(i−1)∆t} k((i − 1)∆t − s) ( u(t) + s u′(ξ(t, s)) ) ds
              = A_i u(t) + ∆̄_{i−1} u(t) + O((i − 1)²∆t²),

for k and u′ bounded. Therefore, under these conditions, the equation ∑_{i=1}^r s_i (A_i u + ∆̄_{i−1} u − S_i f) = 0
is an O(∆t² ∑_{i=1}^r (i − 1)²) approximation to the equation ∑_{i=1}^r s_i S_i (Au − f) = 0, which u is known
to satisfy exactly. However, because A^{−1} is unbounded there is no guarantee that the solution
u = u(·; r, ∆t) of the approximating equation is close to the “true solution” u when ∆t is small.
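A quick numerical check of this expansion (our own sketch; the kernel, solution, evaluation point, and index i are arbitrary choices) confirms that the residual S_i Au(t) − A_i u(t) − ∆̄_{i−1} u(t) shrinks roughly like ∆t² as ∆t decreases with i fixed:

    import numpy as np
    from scipy.integrate import quad

    k = lambda t: 2.0 + t * np.exp(t)
    u = lambda t: t * np.cos(4.0 * t)
    t0, i = 0.4, 3

    for dt in (0.1, 0.05, 0.025):
        shift = (i - 1) * dt
        SiAu = quad(lambda s: k(t0 + shift - s) * u(s), 0.0, t0 + shift)[0]
        Aiu  = quad(lambda s: k(t0 + shift - s) * u(s), 0.0, t0)[0]
        Dbar = quad(lambda s: k(shift - s), 0.0, shift)[0]           # Dbar_{i-1}
        print(dt, abs(SiAu - Aiu - Dbar * u(t0)))     # decreases roughly like dt**2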


3. Generalized Future-Sequential Methods for Volterra

Convolution Equations.

In what follows we generalize the “future-sequential” ideas discussed in the previous section, and

use this generalization to define an infinite-dimensional regularization method for the solution of

(1.1). Using the notation of the last section (with the kernel k scalar-valued momentarily), we let

r ≥ 2 be fixed, and now define ∆r ≡ (r − 1)∆t to be the length of the “future interval” and Ω_{∆r} by

    Ω_{∆r} φ ≡ ∑_{i=1}^r s_i φ(γ_{i−1} ∆r),   (3.1)

where φ ∈ C[0, ∆r], γ_j = j/(r − 1) for j = 0, 1, . . . , r − 1, and s_i is given as before by

    s_i = ( ∫_0^{γ_i ∆r} k(γ_i ∆r − s) ds ) / ( ∫_0^{γ_1 ∆r} k(γ_1 ∆r − s) ds ),   i = 1, 2, . . . , r;   γ_r ≡ r/(r − 1).

Then for each ∆r > 0, Ω_{∆r} is a bounded linear functional on C[0, ∆r], i.e., Ω_{∆r} ∈ C*[0, ∆r], the
continuous dual of C[0, ∆r]. Defining k_t ∈ C[0, ∆r] for each t ∈ [0, 1] and ∆r > 0 via k_t(ρ) ≡
k(t + ρ), ρ ∈ [0, ∆r] (note that by k_t we do not intend k′(t)), and defining k̂ ∈ C[0, ∆r] by
k̂(ρ) ≡ ∫_0^ρ k(ρ − s) ds, ρ ∈ [0, ∆r], equation (2.6) may be written

    ∫_0^t ( Ω_{∆r} k_{t−s} ) u(s) ds + ( Ω_{∆r} k̂ ) u(t) = Ω_{∆r} f_t,   (3.2)

provided that point evaluations of f_t make sense (f_t(ρ) = f(t + ρ), ρ ∈ [0, ∆r]). The original,
unperturbed f is necessarily continuous, so that f_t ∈ C[0, ∆r] and thus Ω_{∆r} f_t is well-defined in

this case; however, in general we will be solving equation (3.2) using a less-smooth perturbation f δ

of f , so we will deliberately avoid assumptions of continuity on the data in what follows and only

require that f is such that Ω_{∆r} f_t is well-defined in the setting in which it appears. This will be

made more precise below.

Equation (3.2) may be further generalized by allowing arbitrary positive Ω_{∆r} ∈ C*[0, ∆r]. For
example, one could consider the Ω_{∆r} given in (3.1) (i.e., Ω_{∆r} as a linear combination of delta
functions), but instead allow the s_i to be arbitrary positive constants and the γ_i arbitrary in [0, 1].


Alternatively, a continuous analog of the discrete Ω_{∆r} given in (3.1) could be constructed, namely

    Ω_{∆r} φ = ∫_0^{∆r} ω_{∆r}(ρ) φ(ρ) dρ,   φ ∈ C[0, ∆r],   (3.3)

where ω_{∆r}(ρ) is a continuous analog of the s_i given above,

    ω_{∆r}(ρ) = ( ∫_0^{ρ+δ} k(ρ + δ − s) ds ) / ( ∫_0^δ k(δ − s) ds ),   (3.4)

for some small δ > 0, δ = c∆r for some fixed c ∈ (0, 1]. More generally one could let Ω_{∆r} denote a
density, using (3.3) with arbitrary ω_{∆r}(ρ) > 0.

It will be convenient to use the Riesz Representation Theorem for C*[0, ∆r] (see, for example,
[1], p. 106) to represent positive bounded linear functionals Ω_{∆r} on C[0, ∆r] by

    Ω_{∆r} φ = ∫_0^{∆r} φ(ρ) dη_{∆r}(ρ),   φ ∈ C[0, ∆r],

where η_{∆r} is a real-valued Borel-Stieltjes measure on the Borel subsets of IR and ∫_0^{∆r} a Stieltjes
integral. It will be assumed henceforth that, for some ∆R > 0 and all ∆r ∈ (0, ∆R], the interval
(0, ∆r) has positive, finite measure under each η_{∆r}. This assumption is easily satisfied for all
measures mentioned above, that is, for Ω_{∆r} as defined in (3.1) for arbitrary s_i > 0 and γ_i ∈ [0, 1],
and for Ω_{∆r} given by (3.3) for arbitrary positive ω_{∆r}.

Using now η_{∆r}, equation (3.2) becomes

    ∫_0^t ( ∫_0^{∆r} k(t + ρ − s) dη_{∆r}(ρ) ) u(s) ds + u(t) ( ∫_0^{∆r} ∫_0^ρ k(ρ − s) ds dη_{∆r}(ρ) )
        = ∫_0^{∆r} f(t + ρ) dη_{∆r}(ρ),   t ∈ [0, 1],   (3.5)

where below we state conditions on f ensuring that the right-hand side is well-defined, and that

a unique solution of (3.5) may be found. Before doing so, it is useful to note that we may extend

these ideas to k ∈ C([0, 1]; IRn×n) and f : [0, T] → IRn, provided we interpret ∫_0^{∆r} k(t + ρ) dη_{∆r}(ρ)
to be the n×n matrix of scalar integrations ∫_0^{∆r} k_{i,j}(t + ρ) dη_{∆r}(ρ), and use a similar interpretation
for ∫_0^{∆r} f(t + ρ) dη_{∆r}(ρ). Then equation (3.5) is still well-defined as an approximating equation, and
we are able to find conditions on k, f guaranteeing that it has a unique solution u(·; ∆r) : [0, 1] → IRn.


Theorem 3.1 Let ∆R > 0 be given with 1 + ∆R ≤ T, and let η_{∆r} be a measure defined as above
for fixed ∆r ∈ (0, ∆R]. Then for k ∈ C([0, T]; IRn×n) satisfying the condition that the matrix

    α(∆r) ≡ ∫_0^{∆r} ∫_0^ρ k(ρ − s) ds dη_{∆r}(ρ)   (3.6)

is nonsingular, and for f : [0, T] → IRn such that f_t : [0, ∆r] → IRn is η_{∆r}-integrable for t ∈ [0, 1],
with ∫_0^1 ‖ ∫_0^{∆r} f(t + ρ) dη_{∆r}(ρ) ‖²_{IRn} dt < ∞, there is a unique solution u(·; ∆r) ∈ L2((0, 1); IRn) of
equation (3.5).

The proof of the theorem follows easily from Theorem 9.3.6 of [7] and the fact that the map
t ↦ ∫_0^{∆r} k(t + ρ) dη_{∆r}(ρ) is continuous; we note that other (L^p) conditions on f lead to solutions
u(·; ∆r) ∈ L^p(0, 1) for 1 ≤ p ≤ ∞.
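A minimal sketch of solving the regularized second-kind equation (3.5) sequentially in time is given below, under two assumptions of our own: η_{∆r} is taken to be the discrete measure of (3.1) (point masses s_i at γ_{i−1}∆r), and a simple trapezoidal time-stepping rule is used. It only illustrates the causal structure of (3.5); it is not the discretization analyzed in [11].

    import numpy as np
    from scipy.integrate import quad

    def solve_regularized(k, f, Dr, r, N):
        """Solve (3.5) on [0,1] with the discrete measure of (3.1) (r >= 2 point masses).
        k and f must be defined on [0, 1 + Dr]; returns the grid and u(.; Dr) on it."""
        dtt = Dr / (r - 1)                                    # underlying "future" step
        cumk = lambda x: quad(lambda s: k(x - s), 0.0, x)[0]  # int_0^x k(x - s) ds
        s = [cumk(i * dtt) / cumk(dtt) for i in range(1, r + 1)]       # weights s_1,...,s_r
        rho = [(i - 1) * dtt for i in range(1, r + 1)]                 # mass locations
        alpha = sum(si * cumk(ri) for si, ri in zip(s, rho))           # alpha(Dr) of (3.6)
        kbar = lambda t: sum(si * k(t + ri) for si, ri in zip(s, rho)) # averaged kernel
        g = lambda t: sum(si * f(t + ri) for si, ri in zip(s, rho))    # averaged data

        h = 1.0 / N
        t = h * np.arange(N + 1)
        u = np.zeros(N + 1)
        u[0] = g(0.0) / alpha                                 # equation (3.5) at t = 0
        for m in range(1, N + 1):                             # causal, sequential stepping
            conv = h * (0.5 * kbar(t[m]) * u[0]
                        + sum(kbar(t[m] - t[j]) * u[j] for j in range(1, m)))
            u[m] = (g(t[m]) - conv) / (alpha + 0.5 * h * kbar(0.0))   # trapezoidal rule
        return t, u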

4. Convergence of Abstract Future-Sequential Methods.

We turn to the question of convergence of solutions u(·;∆r) of equation (3.5) to the solution u of

    ∫_0^t k(t − s) u(s) ds = f(t),   t ∈ [0, T],   (4.1)

as ∆r → 0, in the special case of real-valued k and f . The theory developed here also holds for

vector equations, but only under special conditions on the measure η_{∆r} and kernel k, as indicated

in Remark 4.1. We initially consider the problem of convergence in the case of unperturbed data

f , and then extend to perturbations f δ of f , proving convergence as both ∆r and the level δ of

noise go to zero.

We will make the assumption throughout this section that ∆R > 0 is given, that T ≥ 1 + ∆R,

and that η_{∆r} is a Borel-Stieltjes measure defined as in the last section. In addition we assume
that k, u ∈ W^{1,∞}[0, T], that k(t) ≠ 0 for t ∈ (0, ∆R] (an assumption easily satisfied by Volterra
kernels occurring in many applications, e.g., the IHCP), and that f ∈ L∞(0, T) with the map
t ↦ ∫_0^{∆r} f(t + ρ) dη_{∆r}(ρ) belonging to L2(0, 1) for all ∆r ∈ (0, ∆R] (such as is satisfied, for example,


when f is piecewise continuous on [0, T ]). Under such assumptions it is not difficult to see that the

hypotheses of Theorem 3.1 hold for all ∆r ∈ (0,∆R].

Recalling Remark 2.1, we note that the solution u of (4.1) satisfies

    ∫_0^{∆r} ( ∫_0^{t+ρ} k(t + ρ − s) u(s) ds ) dη_{∆r}(ρ) = ∫_0^{∆r} f(t + ρ) dη_{∆r}(ρ),   t ∈ [0, 1],

for any (fixed) ∆r ∈ (0, ∆R], or, splitting the inner integral at t in the first term,

    ∫_0^t ( ∫_0^{∆r} k(t + ρ − s) dη_{∆r}(ρ) ) u(s) ds + ∫_0^{∆r} ( ∫_0^ρ k(ρ − s) u(s + t) ds ) dη_{∆r}(ρ)
        = ∫_0^{∆r} f(t + ρ) dη_{∆r}(ρ),   t ∈ [0, 1].   (4.2)

Writing the approximation error as y(t) = u(t;∆r)−u(t) (y(t) = y(t;∆r)), we have by subtracting

(4.2) from (3.5) that y satisfies the error equations

    ∫_0^t ( ∫_0^{∆r} k(t + ρ − s) dη_{∆r}(ρ) ) y(s) ds + y(t) ( ∫_0^{∆r} ∫_0^ρ k(ρ − s) ds dη_{∆r}(ρ) )
        = ∫_0^{∆r} ∫_0^ρ k(ρ − s) [u(s + t) − u(t)] ds dη_{∆r}(ρ),   t ∈ [0, 1],

where the existence of a unique solution y is guaranteed using results similar to Theorem 3.1.

Furthermore, the assumptions on u, k ensure that y ∈ C[0, T ]. Because the quantity α(∆r) defined

in (3.6) is assumed to be nonzero for all ∆r ∈ (0,∆R], we may rewrite the last equation as

    y(t) = − (1/α(∆r)) ∫_0^t k̄(t − s; ∆r) y(s) ds + F(t; ∆r),   t ∈ [0, 1],   (4.3)

where

    k̄(t; ∆r) = ∫_0^{∆r} k(t + ρ) dη_{∆r}(ρ),

    F(t; ∆r) = ( ∫_0^{∆r} ∫_0^ρ k(ρ − s) [u(s + t) − u(t)] ds dη_{∆r}(ρ) ) / ( ∫_0^{∆r} ∫_0^ρ k(ρ − s) ds dη_{∆r}(ρ) ).

The following estimates are suggested by the ideas in [4], but here there are notable differences

due to the presence of the regularization parameter ∆r and the fact that the kernel itself now


depends on ∆r. We define

    K(t; ∆r) ≡ k̄(t; ∆r) / k̄(0; ∆r),   ε(∆r) ≡ α(∆r) / k̄(0; ∆r),

so that (4.3) for y may be written as

    y(t) = − ∫_0^t (1/ε(∆r)) K(t − s; ∆r) y(s) ds + F(t; ∆r),   (4.4)

where k̄(0; ∆r) ≠ 0 and ε(∆r) > 0 for all ∆r ∈ (0, ∆R] due to the continuity of k and the fact that
k does not change sign on (0, T]. Convolving both sides of (4.4) with the function ψ(t, ε(∆r)) given by

    ψ(t, ε) = 0 for t < 0,   ψ(t, ε) = (1/ε) e^{−t/ε} for t ≥ 0,

the result is

    ∫_0^t ψ(t − s, ε(∆r)) y(s) ds
        = − (1/ε(∆r)) ∫_0^t ψ(t − τ; ε(∆r)) ∫_0^τ K(τ − s; ∆r) y(s) ds dτ + ψ(t, ε(∆r)) ∗ F(t, ∆r),

which we then subtract from (4.4) to obtain

    y(t) = − ∫_0^t K_ε(t, s; ∆r) y(s) ds + [ F(t; ∆r) − ψ(t, ε(∆r)) ∗ F(t; ∆r) ].   (4.5)

Here the new (non-convolution, in general) kernel K_ε is defined via a change of order of integration
by

    ∫_0^t K_ε(t, s) y(s) ds = − ∫_0^t ψ(t − s; ε) y(s) ds + (1/ε) ∫_0^t K(t − s) y(s) ds
                              − (1/ε) ∫_0^t ∫_s^t ψ(t − τ; ε) K(τ − s) dτ y(s) ds,

and we have momentarily suppressed the dependence of various entities on ∆r. Thus

    K_ε(t, s) = − ψ(t − s; ε) + (1/ε) K(t − s) − (1/ε) ∫_s^t ψ(t − τ; ε) K(τ − s) dτ
              = − ψ(t − s; ε) + (1/ε) K(t − s) − ψ(0; ε) K(t − s) + ψ(t − s; ε) K(0) + ∫_s^t ψ(t − τ; ε) K′(τ − s) dτ
              = ∫_s^t ψ(t − τ; ε) K′(τ − s) dτ,

using an integration by parts and the fact that K(0) = 1. But these estimates yield

    |K_ε(t, s)| ≤ ‖K′‖_∞ ∫_s^t ψ(t − τ; ε) dτ = ‖K′‖_∞ (1 − e^{−(t−s)/ε}) ≤ ‖K′‖_∞.

Applying Gronwall’s inequality and the previous estimates to equation (4.5), it follows that

|y(t)| ≤ |F (t;∆r)− ψ(t; ε(∆r)) ∗ F (t;∆r)| · exp(t ‖K ′(·;∆r)‖∞),

where

    |F(t; ∆r) − ψ(t; ε) ∗ F(t; ∆r)| ≤ ‖F(·; ∆r)‖_∞ ( 1 + ∫_0^t ψ(t − s; ε) ds ) ≤ 2 ‖F(·; ∆r)‖_∞ = O(∆r),

since u ∈ W^{1,∞}[0, T]. Therefore,

    |y(t)| ≤ O(∆r) exp(t ‖K′(·; ∆r)‖_∞),   t ∈ [0, 1],

where it remains to consider conditions under which ‖K ′(·;∆r)‖∞ is bounded as ∆r → 0. In fact,

    K′(t; ∆r) = ( ∫_0^{∆r} k′(t + ρ) dη_{∆r}(ρ) ) / ( ∫_0^{∆r} k(ρ) dη_{∆r}(ρ) ),

so that

    ‖K′(·; ∆r)‖_∞ ≤ ‖k′‖_∞ | ∫_0^{∆r} dη_{∆r}(ρ) / ∫_0^{∆r} k(ρ) dη_{∆r}(ρ) |.

We have thus proven the following:


Theorem 4.1 Let k, u, η_{∆r}, ∆R, and f satisfy the assumptions given at the beginning of this
section. Then if there exists M > 0 such that

    | ∫_0^{∆r} dη_{∆r}(ρ) / ∫_0^{∆r} k(ρ) dη_{∆r}(ρ) | ≤ M   for all ∆r ∈ (0, ∆R],

it follows that the solution u = u(·; ∆r) of (3.5) converges to u(t) as ∆r → 0, uniformly in t ∈ [0, 1].

From assumptions guaranteeing continuity of k, we immediately have the conditions of the
theorem holding for arbitrary η_{∆r} in the case of k(0) ≠ 0:

Corollary 4.1 Let k, u, η_{∆r}, ∆R, and f satisfy the assumptions given at the beginning of this
section, and, in addition, assume that k(0) ≠ 0. Then the solution u = u(·; ∆r) of (3.5) converges
to u(t) as ∆r → 0, uniformly in t ∈ [0, 1].
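A possible numerical illustration of Corollary 4.1, reusing the hypothetical solve_regularized sketch above with noise-free data: for a kernel with k(0) ≠ 0 the computed error should decrease as ∆r → 0, up to the time-stepping error of the sketch itself.

    import numpy as np
    from scipy.integrate import quad

    k = lambda t: 2.0 + t * np.exp(t)                 # k(0) = 2 != 0
    u_true = lambda t: t * np.cos(4.0 * t)
    f = lambda t: quad(lambda s: k(t - s) * u_true(s), 0.0, t)[0]

    for Dr in (0.2, 0.1, 0.05):
        t, u = solve_regularized(k, f, Dr=Dr, r=3, N=200)
        print(f"Dr={Dr:.2f}   max error={np.max(np.abs(u - u_true(t))):.3e}")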

The case of k(0) = 0 is not as easily handled; in fact, as we shall see below, the sufficiency

conditions stated in the theorem fail when k(0)=0. However, convergence of u(·;∆r) to u as

∆r → 0 is still possible in this important case, as is seen in [10] using an approach different from

that taken here.

To show how the sufficient conditions of Theorem 4.1 can fail to hold when k(0) = 0, we look

at the simple case of k(t) ≡ t for t ∈ [0, T] and dη_{∆r}(ρ) = ω_{∆r}(ρ) dρ, with ω_{∆r} > 0 integrable for each
∆r ∈ (0, ∆R]. We find through an integration by parts that

    ∫_0^{∆r} ω_{∆r}(ρ) dρ / ∫_0^{∆r} ρ ω_{∆r}(ρ) dρ
        = ∫_0^{∆r} ω_{∆r}(ρ) dρ / ( ∆r ∫_0^{∆r} ω_{∆r}(τ) dτ − ∫_0^{∆r} ∫_0^ρ ω_{∆r}(τ) dτ dρ )
        ≥ 1/∆r → ∞   as ∆r → 0,

so that the conditions of the theorem do not hold for this example.

In fact, this example illustrates the general situation, namely that the conditions of the theorem
cannot hold if k(0) = 0 and k is continuous. We now show that so


long as k(0) = 0 and k is continuous there is no family of positive bounded-variation functions
{η_{∆r}}_{∆r ∈ (0,∆R]} which satisfies

    | ∫_0^{∆r} dη_{∆r}(ρ) / ∫_0^{∆r} k(ρ) dη_{∆r}(ρ) | ≤ M,   ∆r ∈ (0, ∆R],   (4.6)

for fixed M > 0. Indeed, since we may always rescale the family {η_{∆r}}_{∆r ∈ (0,∆R]} in the consideration
of this bound, it suffices to show that any family of positive bounded-variation functions
{η_{∆r}}_{∆r ∈ (0,∆R]} satisfying

    ∫_0^{∆r} dη_{∆r}(ρ) = 1   for ∆r ∈ (0, ∆R]

simultaneously satisfies

    ∫_0^{∆r} k(ρ) dη_{∆r}(ρ) → 0   as ∆r → 0.

In fact, for any ∆r ∈ (0, ∆R], a necessary and sufficient condition that η_{∆r} have total variation ≤ 1
(so that ∫_0^{∆r} dη_{∆r}(ρ) ≤ 1) and that ∫_0^{∆r} k(ρ) dη_{∆r}(ρ) = c_{∆r} for an arbitrary constant c_{∆r} is that

    |c_{∆r}| ≤ max_{0 ≤ ρ ≤ ∆r} |k(ρ)|.

This type of result arises in the classical “moment problem” (see, for example, p. 116 of [20]). But

k continuous and k(0) = 0 necessarily implies that |c_{∆r}| → 0. Thus, it is impossible for inequality
(4.6) to hold for fixed M > 0 under such conditions on k. This does not imply nonconvergence of the

error y(t;∆r) to zero, as ∆r → 0, but rather that a particular sufficient condition for convergence

(given in Theorem 4.1) fails to be met for certain k.

We note that success for the case of k with k(0) = 0 might be obtained if we weaken the

assumptions of Theorem 4.1 and instead require an inequality of the form

    | ∫_0^{∆r} dη_{∆r}(ρ) / ∫_0^{∆r} k(ρ) dη_{∆r}(ρ) | ≤ (1/‖k′‖_∞) log(1/√∆r),

since such an assumption will give |y(t; ∆r)| = O(√∆r) as ∆r → 0. However, we have not been
able to find a k and η_{∆r} which simultaneously satisfy such a condition and the condition k(0) = 0.


Remark 4.1: The preceding theory may also be applied (with only a few obvious modifications)
to the case of k(t) ∈ IRn×n and f(t) ∈ IRn, under the special condition that the matrix product

    α(∆r) ( ∫_0^{∆r} k(ρ) dη_{∆r}(ρ) )^{−1} = ε(∆r) I

for I the n × n identity and some scalar ε(∆r) > 0. Otherwise, without such an assumption, a
different approach may be taken in which Gronwall’s inequality is applied directly to equation (4.3),
requiring that the assumptions of Theorem 3.1 hold for all ∆r sufficiently small and, in addition,
the existence of some M > 0 such that sup_{t ∈ [0,T]} ‖k(t)‖ / ‖α(∆r)‖ ≤ M for all ∆r ∈ (0, ∆R], for
suitable matrix/vector norms ‖ · ‖.

We need only make small modifications in the theory preceding Theorem 4.1 to consider equation
(1.1) with perturbed data f^δ(t) = f(t) + d(t), where d(·) ∈ L∞(0, T) with t ↦ ∫_0^{∆r} d(t + ρ) dη_{∆r}(ρ)
belonging to L2(0, 1) for all ∆r ∈ (0, ∆R], and where we assume ‖d‖_∞ ≤ δ. In the presence of
noise, equation (4.3) becomes

    y(t) = − (1/α(∆r)) ∫_0^t k̄(t − s; ∆r) y(s) ds + F(t; ∆r) + E(t; ∆r, δ),   t ∈ [0, 1],

where

    E(t; ∆r, δ) = (1/α(∆r)) ∫_0^{∆r} d(t + ρ) dη_{∆r}(ρ).

Treating the term for E(t;∆r, δ) just as we have for F (t;∆r), we obtain

    |y(t)| ≤ ( |F(t; ∆r) − ψ(t; ε(∆r)) ∗ F(t; ∆r)| + |E(t; ∆r, δ) − ψ(t; ε(∆r)) ∗ E(t; ∆r, δ)| )
               · exp(t ‖K′(·; ∆r)‖_∞),

where, as for F,

    |E(t; ∆r, δ) − ψ(t; ε(∆r)) ∗ E(t; ∆r, δ)| ≤ ‖E(·; ∆r, δ)‖_∞ ( 1 + ∫_0^t ψ(t − s; ε) ds ) ≤ 2δ ∫_0^{∆r} dη_{∆r}(ρ) / α(∆r).


The following theorem thus obtains for the case of perturbed data.

Theorem 4.2 Assume the hypotheses of Theorem 4.1 and, in addition, that ∆r ≡ ∆r(δ) may be
selected satisfying

    δ ∫_0^{∆r} dη_{∆r}(ρ) / ∫_0^{∆r} ∫_0^ρ k(ρ − s) ds dη_{∆r}(ρ) → 0   as δ → 0,

and

    ∆r(δ) → 0   as δ → 0.

It then follows that the solution u = u(·; δ) of (3.5) (using ∆r(δ) and data f^δ, ‖f − f^δ‖_∞ ≤ δ)
converges to the solution u(t) of (1.1) as δ → 0, uniformly in t ∈ [0, 1].

As an example of the choice of ∆r = ∆r(δ), consider η_{∆r} given by a density which is independent of
∆r, i.e., dη_{∆r}(ρ) = ω(ρ) dρ where ω ∈ L∞[0, T], ω(ρ) ≥ ω_min > 0 on [0, T], and assume that
k ∈ W^{1,∞}[0, T] with k(0) = 1. Then it is not difficult to show that

    δ | ∫_0^{∆r} ω(ρ) dρ / ∫_0^{∆r} ∫_0^ρ k(ρ − s) ds ω(ρ) dρ | ≤ δ ∆r ‖ω‖_∞ / ( ω_min k_min (½ ∆r²) ) ≤ (δ/∆r) C

for ∆r sufficiently small, where |k(t)| ≥ k_min > 0 for 0 ≤ t ≤ ∆r, and C > 0 is a suitable constant.
Thus the choice ∆r = δ^p for p ∈ (0, 1) satisfies the assumptions of Theorem 4.2.
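Under the assumptions of this example, the a priori choice of Theorem 4.2 can be written down directly; the short snippet below (our illustration) simply tabulates ∆r(δ) = δ^p together with the ratio δ/∆r = δ^{1−p} that must tend to zero.

    # A priori parameter choice Dr(delta) = delta**p, 0 < p < 1: both Dr -> 0 and
    # delta/Dr = delta**(1-p) -> 0 as the noise level delta -> 0.
    def choose_Dr(delta, p=0.5):
        assert 0.0 < p < 1.0
        return delta ** p

    for delta in (1e-1, 1e-2, 1e-3, 1e-4):
        Dr = choose_Dr(delta)
        print(f"delta={delta:g}   Dr={Dr:g}   delta/Dr={delta / Dr:g}")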

5. Conclusion.

We have examined a widely-used numerical method for the stable solution of the Inverse Heat

Conduction Problem, a problem which is described by an infinitely-smoothing first-kind Volterra

integral equation with convolution kernel. We have shown that the numerical method developed by

J. V. Beck is equivalent to a standard collocation discretization of a related second-kind equation


which is constructed using future values of the data in the original equation. Our results include

a convergence/regularization theory for problems in which the kernel is real-valued and nonzero at

the origin, proving that the solution of the infinite-dimensional second-kind equation converges to

the solution of the original first-kind problem as the level of noise goes to zero (provided that a

regularization parameter in the approximating equation is chosen correctly).

In [10] we extend the findings in this paper to ν-smoothing scalar Volterra problems, i.e.,

those first-kind Volterra equations with kernels k satisfying k(0) = k′(0) = · · · = k^{(ν−2)}(0) = 0,
k^{(ν−1)}(0) ≠ 0, k^{(ν)} ∈ L2(0, 1), for certain values of ν, and prove convergence of the approximations

defined in this paper for this more general problem.

In addition, in [11], we consider discretizations of the regularized problem along the lines dis-

cussed in Section 2 above, illustrating the relationship between noise level δ, number of future

intervals (r − 1), and width ∆t of these intervals (or discretization stepsize). In that paper it is

shown how to compute ∆t = ∆t(δ, r) such that δ → 0 implies ∆t → 0 and convergence of the

corresponding finite-dimensional approximations to the original solution u of (1.1).

Finally we note that other investigations into stability and approximation properties of Beck’s

method may be found in [16, 17, 18, 19] where both linear and nonlinear problems are treated.

Acknowledgement: We would like to thank Professor J. V. Beck for many helpful discussions

during the early stages of this work.

References

[1] R. G. Bartle. The Elements of Integration. John Wiley and Sons, New York, 1966.

[2] J. V. Beck, B. Blackwell, and C. R. St. Clair, Jr. Inverse Heat Conduction. Wiley-Interscience,

1985.


[3] J. R. Cannon. The One-Dimensional Heat Equation, Encyclopedia of Mathematics and its

Applications, Volume 23. Addison-Wesley, Reading, MA, 1984.

[4] C. Corduneanu. Integral Equations and Applications. Cambridge University Press, Cambridge,

1991.

[5] A. M. Denisov. The approximate solution of a Volterra equation of the first kind. USSR

Comp. Maths. Math. Physics, 15:237–239, 1975.

[6] L. Elden. An efficient algorithm for the regularization of ill-conditioned least squares problems

with triangular Toeplitz matrix. SIAM J. Sci. Stat. Comput., 5:229–236, 1984.

[7] G. Gripenberg, S. O. Londen, and O. Staffans. Volterra Integral and Functional Equations.

Cambridge University Press, Cambridge, 1990.

[8] C. W. Groetsch. The Theory of Tikhonov regularization for Fredholm equations of the first

kind. Pitman, Boston, 1984.

[9] B. Imomnazarov. Approximate solution of integro-operator Volterra equations of the first

kind. USSR Comp. Maths. Math. Physics, 25:199–202, 1985.

[10] P. K. Lamm. Regularized inversion of finitely smoothing Volterra operators. Submitted, 1993.

[11] P. K. Lamm. Stable approximation of ill-posed Volterra problems via structure-preserving

discretization. Submitted, 1994.

[12] J. Locker and P. M. Prenter. Regularization with differential operators, I. General theory.

Journal of Math. Analysis and Applic., 74:504–529, 1980.

[13] N. A. Magnitskii. A method of regularizing Volterra equations of the first kind. USSR

Comp. Maths. Math. Physics, 15:221–228, 1975.


[14] F. Natterer. On the order of regularization methods. In G. Hammerlin and K. H. Hoffman,

editors, Improperly Posed Problems and their Numerical Treatment, pages 189–203. Birkhauser,

Basel, 1983.

[15] F. Natterer. Error bounds for Tikhonov regularization in Hilbert scales. Applicable Analysis,

18:29–37, 1984.

[16] H.-J. Reinhardt. Analysis of the discrete Beck method solving ill-posed parabolic equations. Preprint, 1992.

[17] H.-J. Reinhardt. On the stability of sequential methods solving the inverse heat conduction

problem. Z. angew. Math. Mech., 73:T 864–T 866, 1993.

[18] H.-J. Reinhardt and D. N. Hao. Sequential approximation to nonlinear inverse heat conduction

problems, to appear in Math. Comput. Modelling, 1994.

[19] H.-J. Reinhardt and F. Seiffarth. On the approximate solution of ill-posed Cauchy problems

for parabolic differential equations. In G. Anger, editor, Inverse Problems: Principles and

Applications in Geophysics, Technology and Medicine, Potsdam, 1993.

[20] F. Riesz and B. Sz.-Nagy. Functional Analysis. Frederick Ungar Publishing Co., New York,

1978.

[21] V. O. Sergeev. Regularization of Volterra equations of the first kind. Soviet Math. Dokl.,

12:501–505, 1971.


Figure 2.1: Results for Example 2.1. (Four panels over t ∈ [0, 1], vertical scale from −1 to 0.4: the true solution, the Stolz approximation (r = 1), and the Beck approximations with r = 2 and r = 3.)
