integro-pde in hilbert spaces: existence of viscosity...
TRANSCRIPT
Integro-PDE in Hilbert spaces: Existence ofviscosity solutions
Andrzej SwiechSchool of Mathematics, Georgia Institute of Technology
Atlanta, GA 30332, U.S.A.E-mail: [email protected]
AND
Jerzy Zabczyk
Institute of Mathematics, Polish Academy of Sciences
Sniadeckich 8, 00-950 Warsaw, PolandE-mail: [email protected]
Abstract
Existence of a viscosity solution to a non-local Hamilton-Jacobi-Bellman equa-tion in a Hilbert space is established. We prove that the value function of an as-sociated stochastic control problem is a viscosity solution. We provide a completeproof of the Dynamic Programming Principle for the stochastic control problem. Wealso illustrate the theory with Bellman equations associated to a controlled waveequation and controlled Musiela equation of mathematical finance both perturbedby Levy processes.
Keywords: viscosity solutions, integro-PDE, Hamilton-Jacobi-Bellman equation, stochas-
tic PDE, Levy process.
2010 Mathematics Subject Classification: 49L25, 60H15, 35K60
1 Introduction
The main aim of the present paper is to establish existence of a viscosity solution to the
following nonlinear integro-PDE
1
ut − 〈Ax,Du〉+ infa∈Λ
〈b(x, a), Du〉+ f(x, a)
+∫U
(u(t, x+ γ(x, a, z))− u(t, x)− 〈γ(x, a, z), Du(t, x)〉) ν(dz)
= 0, t ∈ [0, T ], x ∈ H,
u(T, x) = g(x), x ∈ H,(1.1)
in a separable Hilbert space H which is equipped with the norm ‖·‖ and the inner product
〈·, ·〉. In the equation, A is a linear, densely defined, maximal monotone operator in H, Λ is
a Polish space, U is a separable Hilbert space equipped with the norm ‖ ·‖U and the inner
product 〈·, ·〉U . Moreover b, γ, and g are mappings; b : H × Λ→ H, γ : H × Λ× U → H,
g : H → R, and ν a non-negative measure on U , all satisfying appropriate regularity
conditions.
It will be shown that the solution can be identified with the value function of the
stochastic optimal control problem of minimizing a cost functional
J(t, x; a(·)) = E∫ T
t
f(X(s), a(s))ds+ g(X(T ))
,
for the controlled abstract stochastic differential equation (SDE)dX(s) = (−AX(s) + b(X(s), a(s)))ds+
∫U\0 γ(X(s−), a(s), z)π(ds, dz)
X(t) = x ∈ H. (1.2)
Here a(·) are Λ-valued controls on the interval [t, T ] belonging to a set Ut and π is the
compensated Poisson random measure of jumps of a U -valued Levy process L.
The equation (1.1), often called a Hamilton-Jacobi-Bellman (HJB) equation, is the
dynamic programming equation for the above control problem. Thus our aim is to show
that the above integro-PDE is satisfied, in the viscosity sense, by the value function
V (t, x) = infa(·)∈Ut
J(t, x; a(·)).
The uniqueness problem for a more general HJB equation was established in our
paper [34] and the present paper can be regarded as a continuation and a completion of
that paper. However here we only consider the case where the HJB equation corresponds
to an optimal control problem with pure jumps. Similar techniques can be applied to a
general case where the noise also has a continuous part, however since the novelty is in
dealing with the jump part we chose to restrict to this case. The main technical challenge
here is proving the Dynamic Programming Principle (DPP). Proofs of DPP for optimal
control problems for finite dimensional jump-diffusions can be found in [16, 31] and for
game problems in [6, 8, 20]. Our control problem uses the framework of [13] and the proof
2
also follows the approach of [13], which was based on the one of [35]. We try to make the
paper as self-contained as possible, however we will refer the reader for some technical
results to [13].
The literature on existence and uniqueness of viscosity solutions in finite dimensional
spaces is enormous. We refer to [34] for an extensive list of references (which is not com-
plete). In particular [6, 7, 4, 8, 16, 18, 20, 22, 26, 31, 32] discuss stochastic representation
formulas for solutions of integro-PDE. However the literature for infinite dimensional
Hilbert spaces is very limited. Viscosity solutions were introduced in [33, 34] and in [33]
the theory was used to deal with large deviations for solutions of evolution equations
with small Levy type noise. For other results related to non-local equations in infinite
dimensional spaces we refer the reader to [1, 21, 24, 27, 28, 36].
Conceptually the paper consists of two parts. The first one, of a preparatory character,
dealing with various properties of solutions of stochastic differential equations and the
stochastic control problem, and the second one, analytical in character, establishing the
DPP and the main integro-PDE existence result.
The paper is organized as follows. In Section 2 we recall basic properties of Levy
processes and establish an important result on the Yosida approximations of stochastic
convolutions with respect to a Poisson random measure. Section 3 is devoted to properties
of control systems formulated in Theorem 3.4. It starts from a rather long list of assump-
tions and definitions of several concepts of admissible controls. Subsections 3.3 and 3.4
are devoted to distributional properties of the control system and show that they do not
depend on the probability space they are defined on. The final subsection shows how the
control system behaves under the regular conditional probability. With all these prelim-
inary results at hand, in Section 4 we establish the key technical result, the Dynamic
Programming Principle. The main existence result is established in Section 5. The final
section applies the theory to the HJB equation for the controlled wave equation and an
equation of mathematical finance.
Acknowledgments. The authors would like to thank a referee for his/her very useful
comments which improved the paper.
2 Preliminaries
2.1 Levy processes
Below we recall basic definitions concerned with Levy processes. They generate the noise
present in the equation (1.1).
Let 0 ≤ t < T . We say that(
Ω,F , F tss∈[t,T ] ,P)
is a filtered probability space if
(Ω,F ,P) is a complete probability space, and F tss∈[t,T ] is a filtration in this probability
3
space. Unless specified otherwise we will always assume that F ts satisfies the usual condi-
tions, i.e. it is right continuous and complete (meaning that F ts contains all P-null sets of
F for every s ≥ t). A stochastic process L = L(s) : t ≤ s ≤ T is a U -valued (a,Q, ν)
F ts-Levy process if it satisfies the following conditions:
(i) L is F ts-adapted.
(ii) L(t) = 0 P a.s..
(iii) L has cadlag trajectories.
(iv) For all t ≤ t1 ≤ t2 ≤ T , the random variable L(t2)− L(t1) is independent of F tt1 .
(v)
E[ei〈u,L(t2)−L(t1)〉U
]= e−(t2−t1)ψ(u),
where
ψ(u) = −i〈a, u〉U +1
2〈Qu, u〉U
+
∫U\0
(1− ei〈u,z〉U + 1‖z‖U<1i〈u, z〉U
)ν(dz). (2.1)
Here a ∈ U , Q is a self-adjoint, non-negative, trace class operator on U , and ν is a non-
negative measure on (U \ 0,B(U \ 0)), where B(U \ 0) is the Borel σ-field, such
that ∫U\0
(‖z‖2U ∧ 1)ν(dz) < +∞. (2.2)
The measure ν is called the Levy measure of L or the jump intensity measure of L. We
extend ν to a measure on (U,B(U)) by setting ν(0) = 0. The function ψ is called the
characteristic exponent of L.
In this paper we will always assume that a = 0, Q = 0, i.e. that L is of pure jump
type. We will then say that L is a ν F ts-Levy process, or when the filtration is clear or not
essential, just a ν-Levy process. A ν F ts-Levy process which does not necessarily satisfy
condition (ii) will be called a translated ν F ts-Levy process. According to the Levy-Ito
decomposition, a ν-Levy process L has the form
L(s) = L0(s) + L1(s), (2.3)
where L0, L1 are independent Levy processes,
L0(s) =
∫ s
t
∫0<‖z‖<1
zπ(dτ, dz), L1(s) =
∫ s
t
∫‖z‖≥1
zπ(dτ, dz),
4
where π is the Poisson random measure of jumps of L and π is the compensated Poisson
random measure of jumps:
π([t, s], B) =∑t<τ≤s
1B(L(τ)− L(τ−)), B ∈ B(U \ 0), L(τ−) = limr↑τ
L(r),
π(dτ, dz) = π(dτ, dz)− dτ ν(dz).
Moreover, for every B ∈ B(U \ 0), such that∫B
‖z‖Uν(dz) < +∞,
N(s, B) := π([t, s], B), s ≥ t, is a martingale. For additional material on Levy processes,
see [5] and [25].
The σ-field of F ts-predictable sets on [t, T ] × Ω is the smallest σ-field containing all
the sets of the form (s, r] × A, where t ≤ s < r ≤ T,A ∈ F ts and t × F tt . It will be
denoted by P[t,T ]. A stochastic process with values in a measurable space (E, E) is called
F ts-predictable if it is measurable as a map between [t, T ]× Ω and E, where [0, T ]× Ω is
equipped with the σ-field P[t,T ] of F ts-predictable sets.
We will denote by DU [t, T ] the space of all cadlag, U -valued functions ω : [t, T ]→ U ,
equipped with the Skorohod metric and the Borel σ-field B(DU [t, T ]). It is a complete,
separable metric space, see [12], Theorem 5.6.
2.2 Yosida approximations
Throughout the paper we will always assume that A is a linear, densely defined, maximal
operator in H, i.e. −A be the generator of a semigroup of contractions e−rA, r ≥ 0 on
H.
Denote H = L2(U, ν;H). We will later need the following property of the space H.
There exists a dense subset e1, e2, . . . of H = L2(U, ν;H) consisting of bounded and
continuous functions such that for each i = 1, 2, . . . there exists a positive number ri > 0
such that the support of ei is disjoint from the ball u : ‖u‖U ≤ ri. To show the existence
of such a set, since H is separable we can assume that H = R. A countable dense subset
of H = L2(U, ν;H) can obviously be taken to have the property that every function φi in
this set has support in a compact subset Ki of ‖z‖U > ri for some ri > 0. We can then
approximate each such function in L2(Ki, ν) by a function in C(Ki) and using Tietze’s
extension theorem extend it to a function in C(U) which is equal to 0 outside of a small
neighborhood of Ki.
5
It is known, see e.g. [25], that if φ(r), r ∈ [t, T ] is an H valued predictable process
then the following isometric identity holds
E∥∥∥∥∫ s
t
∫U\0
φ(r, u)π(dr, du)
∥∥∥∥2
= E∫ s
t
‖φ(r)‖2H dr, s ∈ [t, T ]
provided the right-hand side is finite.
The following technical result on the so called Yosida approximations of stochastic
integrals with respect to a Poissonian random measure is used several times in the sequel.
Proposition 2.1. Let An = nA(nI + A)−1, be the Yosida approximations of A. If a
predictable process φ(r), r ∈ [t, T ] is such that
E∫ T
t
‖φ(r)‖2H dr < +∞,
then the stochastic convolution
ψ(s) =
∫ s
t
∫U\0
e−(s−r)Aφ(r, u)π(dr, du), t ≤ s ≤ T
has a cadlag modification and
limn→∞
E supt≤s≤T
∥∥∥∥∫ s
t
∫U\0
(e−(s−r)An − e−(s−r)A)φ(r, u)π(dr, du)
∥∥∥∥2
= 0. (2.4)
Proof of Proposition 2.1. The proof is similar to the proof of Proposition 3.3 in [33] and
therefore we indicate only main steps. Let the space H and the unitary group S(r), r ∈ Rbe the extensions, respectively, of H and e−rA, r ≥ 0, given by the dilation theorem of
Nagy, see e.g. [25], p.160, and let P be the orthogonal projection of H on H. Then
e−rAh = PS(r)h, h ∈ H, r ≥ 0
and
ψ(s) =
∫ s
t
∫U\0
PS(s− r)φ(r, u)π(dr, du)
= PS(s)
∫ s
t
∫U\0
S(−r)φ(r, u)π(dr, du), s ∈ [t, T ].
However the process
ψ(s) =
∫ s
t
∫U\0
S(−r)φ(r, u)π(dr, du), s ∈ [t, T ]
6
is an H-valued square integrable martingale and therefore has a cadlag modification.
Thus, since ψ(s) = PS(s)ψ(s), P is a bounded operator and S(s) is a C0-semigroup, also
ψ(s) has a cadlag modification. In addition
‖ψ(s)‖ ≤ ‖ψ(s)‖H , s ∈ [t, T ].
By the classical Doob inequality, for square integrable martingales,
E(
supt≤s≤T
‖ψ(s)‖2H
)≤ 4E‖ψ(T )‖2
H≤ 4E
∫ T
t
‖φ(r)‖2H dr.
Consequently
E(
supt≤s≤T
‖ψ(s)‖2
)≤ 4E
∫ T
t
‖φ(s)‖2H ds. (2.5)
To go further, denote by χ1 the space of H-valued predictable processes Y (s), s ∈ [t, T ],
with the norm
‖Y ‖χ1 =
(E∫ T
t
‖Y (s)‖2H ds
)1/2
< +∞
and by χ2 the space of all adapted, H-valued cadlag processes Y (s), equipped with the
norm,
‖Y ‖χ2 =
(E supt≤s≤T
‖Y (s)‖2
)1/2
. (2.6)
Let K, Kn be transformations from χ1 into χ2 given by the formulae
K(φ)(s) =
∫ s
t
∫U\0
e−(s−r)Aφ(r, u)π(dr, du),
Kn(φ)(s) =
∫ s
t
∫U\0
e−(s−r)Anφ(r, u)π(dr, du), s ∈ [t, T ].
The fact that the images of K and Kn are in χ2 follows from (2.5). By estimate (2.5),
‖K‖ ≤ 2, ‖Kn‖ ≤ 2, n = 1, 2, . . . .
Therefore it is enough to prove (2.4) for a dense set in χ1. This can be done exactly as
in [33]. The main tool of the proof is the isometric formula valid without any special
assumptions on the random measure.
3 Control system
The following assumptions will be imposed throughout the paper and will not be repeated
in the statements of the results.
7
3.1 Assumptions
Let B be a bounded, linear, positive (i.e. 〈Bx, x〉 > 0 for every x ∈ H, x 6= 0), self-adjoint
operator on H such that A∗B is bounded on H and
〈(A∗B + c0B)x, x〉 ≥ 0 for all x ∈ H (3.1)
for some c0 ≥ 0 (see [10, 13, 29, 34] for existence of such an operator B and various
examples).
We define the space H−1 to be the completion of H under the norm
‖x‖−1 = ‖B12x‖.
H−1 is a Hilbert space equipped with the inner product
〈x, y〉−1 =⟨B
12x,B
12y⟩.
It is clear that
‖x‖−1 ≤ ‖B12‖‖x‖, x ∈ H. (3.2)
We say that a function u : W → R is B-upper-semicontinuous (respectively, B-
lower-semicontinuous) on W ⊂ [0, T ] × H if whenever tn → t, xn x, Bxn → Bx,
(t, x) ∈ W , then lim supn→+∞ u(tn, xn) ≤ u(t, x) (respectively, lim infn→+∞ u(tn, xn) ≥u(t, x)). The function u is B-continuous on W if it is B-upper-semicontinuous and B-
lower-semicontinuous on W . The condition xn x in the definition of B-upper/lower-
semicontinuity can be replaced by the requirement that the sequence xn is bounded. This
is an easy consequence of the fact that if xn is bounded and Bxn → Bx then xn x
which follows since B is one-to-one.
In the assumptions below C is a generic constant which may change from line to line.
We will assume that:
(i) There exists a Borel measurable function ρ, bounded on bounded sets, such that
inf‖z‖U>r ρ(z) > 0 for every r > 0, and∫U
(ρ(z))2ν(dz) < +∞. (3.3)
(ii) b : H × Λ→ H are continuous and such that
‖b(x, a)− b(y, a)‖ ≤ C‖x− y‖−1, (3.4)
γ : H × Λ × U → H is continuous in x, a, Borel measurable with respect to z and
such that
‖γ(x, a, z)− γ(y, a, z)‖ ≤ Cρ(z)‖x− y‖−1 (3.5)
for all x, y ∈ H, z ∈ U, a ∈ Λ.
8
(iii) f : H × Λ→ R, g : H → R are continuous and such that
|f(x, a)− f(y, a)|+ |g(x)− g(y)| ≤ ω(‖x− y‖−1) (3.6)
for all x, y ∈ H, a ∈ Λ, where ω is a modulus , i.e. a continuous subadditive function
on [0,+∞) such that ω(0) = 0 and ω(a) > 0, a > 0.
(iv)
‖b(0, a)‖, |f(0, a)| ≤ C, (3.7)
‖γ(0, a, z)‖ ≤ Cρ(z) (3.8)
for all a ∈ Λ, z ∈ U .
It follows from (3.5) and (3.8) that
‖γ(x, a, z)‖ ≤ (1 + Cρ(z))‖x‖−1 for x ∈ H.
3.2 Admissible controls
We introduce here the notions of a generalized reference probability space, reference prob-
ability space, and a standard reference probability space from [13], which formalize the
concept of admissible controls.
Definition 3.1. A 5-tuple µ :=(
Ω,F , F tss∈[t,T ] ,P, L)
is called a generalized reference
probability space if (Ω,F , F tss∈[t,T ] ,P) is a filtered probability space, and L is a translated
ν F ts-Levy process.
Definition 3.2. A reference probability space is a generalized reference probability space
µ :=(
Ω,F , F tss∈[t,T ] ,P, L)
, where L(t) = 0, P a.s., and F ts = σ(F t,0s ,N ), where F t,0s =
σ(L(τ) : t ≤ τ ≤ s) is the filtration generated by L, and N is the collection of the P-null
sets in F .
Definition 3.3. A reference probability space µ is called standard if there exists a σ-
field F ′ such that F t,0T ⊂ F ′ ⊂ F , F is the completion of F ′, and (Ω,F ′) is a standard
measurable space (see [13], Definition 1.11).
We recall that if (Ω,F ,P) is standard then for every sub σ-field G ⊂ F there exists
a unique regular conditional probability given G (see [13], Definition 1.44 and Theorem
1.45) p : Ω× F → [0, 1]. We will denote p(ω, ·) (which is a probability measure on F for
every ω) by P(·|G)(ω) or simply by Pω.
9
For a given t ∈ [0, T ] and a reference probability space µ =(
Ω,F , F tss∈[t,T ] ,P, L)
,
the set of admissible controls on µ, denoted by Uµt , will be the collection of all F ts-predictable processes a : [t, T ]× Ω→ Λ. We then define the set of all admissible controls
Ut =⋃µ
Uµt ,
where the union is taken over all reference probability spaces µ on [t, T ]. With a slight
abuse of notation we will often write (Ω,F ,F ts,P, L, a (·)) ∈ Ut instead of a(·) ∈ Ut to
indicate the underlying reference probability space. We also simply write F ts instead of
F tss∈[t,T ] since the notation clearly indicates that the filtration is defined for s ∈ [t, T ]
3.3 Properties of the solutions
Properties of the solutions to equation (1.2) are summarized in the following theorem.
Theorem 3.4. Let 0 ≤ t ≤ t1 < T,(Ω,F , F tss∈[t,T ] ,P, L
)be a generalized reference
probability space. Let a(·) : [t1, T ]×Ω→ Λ be F ts-predictable, and let ξ be F tt1-measurable
and such that E‖ξ‖2 < +∞. We have:
(i) There exists a unique mild solution X(·) = X(·; t1, ξ, a(·)) of (1.2) with X(t1) = ξ.
(ii) The process X(·; t1, ξ, a(·)) satisfies
E[
supt1≤s≤T
‖X(s)‖2
]≤ CT (1 + E‖ξ‖2). (3.9)
(iii) If X(t1) = x ∈ H and Xn(·) is the solution of (1.2) with Xn(t1) = x and with A
replaced by its Yosida approximation An, then
limn→+∞
E[
supt1≤s≤T
‖Xn(s)−X(s)‖2
]= 0. (3.10)
(iv) Let ξi, i = 1, 2, be F tt1-measurable and such that E‖ξi‖2 < +∞. Let Xi(·) = X(·; t1, ξi, a(·)),
i = 1, 2. Then
E[
supt1≤s≤T
‖X1(s)−X2(s)‖2
]≤ CTE‖ξ1 − ξ2‖2. (3.11)
(v) Let xi ∈ H, i = 1, 2. Let Xi(·) = X(·; t1, xi, a(·)), i = 1, 2. Then
supt1≤s≤T
E[‖X1(s)−X2(s)‖2
−1
]≤ CT‖x1 − x2‖2
−1. (3.12)
10
(vi) For all t1 ≤ s ≤ T
E[
supt1≤τ≤s
‖X(τ)− x‖2
]≤ ωT,x(s− t1) (3.13)
for some modulus ωT,x, where X(·) = X(·; t1, x, a(·)).
(vii) If a1(·), a2(·) : [t1, T ]×Ω→ Λ are F ts-predictable and a1(·) = a2(·) dt⊗ P a.s., then
X(·; t1, ξ, a1(·)) = X(·; t1, ξ, a2(·)) on [t1, T ] P a.s..
Proof. The proof is rather standard and we will indicate only the main steps.
Proof of ( i). Let χ be the space of H-valued, adapted, cadlag processes Y (s), s ∈ [t1, T ]
equipped with the norm (2.6) and let K be the following mapping:
K(Y )(s) = S(s− t1)ξ +
∫ s
t1
S(s− r)b(Y (r), a(r)) dr
+
∫ s
t1
∫U\0
S(s− r)γ(Y (r−), a(r), u)π(dr, du)
= S(s− t1)ξ +K1(Y )(s) +K2(Y )(s), s ∈ [t1, T ],
where S denotes the semigroup generated by −A, and K1 and K2 denote the mappings
defined by, respectively, the deterministic and the stochastic integrals. It is easy to show
that if T − t1 is sufficiently small than the transformation K is a contraction on χ and
therefore the equation
Y = K(Y )
has a unique solution. The case of the general interval is obtained by repeating the process
a finite number of steps. Let us prove, for instance, the contraction property for the
mapping K2. If Y1, Y2 ∈ χ, then
‖K2(Y1)−K2(Y2)‖2χ
= E supt1≤s≤T
∥∥∥∥∫ s
t1
∫U\0
S(s− r) [γ(Y1(r−), a(r), u)− γ(Y2(r−), a(r), u)] π(dr, du)
∥∥∥∥2
.
By (2.5), (3.5) and (3.2),
‖K2(Y1)−K2(Y2)‖2χ ≤ 4E
∫ T
t1
∫U
‖γ(Y1(r−), a(r), u)− γ(Y2(r−), a(r), u)‖2 dr ν(du)
≤ 4E∫ T
t1
∫U
C2ρ2(u)‖Y1(r)− Y2(r)‖2−1 dr ν(du)
≤ 4(T − t1)C2
∫U
ρ2(u)ν(du)E supt1≤r≤T
‖Y1(r)− Y2(r)‖2−1
≤ 4M2(T − t1)C2
∫U
ρ2(u)ν(du)‖Y1 − Y2‖χ.
11
Consequently by (3.3), choosing the length of the interval [t1, T ], one can make the
Lipschitz constant of K2 as small as one wishes. The same can be achieved for the trans-
formation K1.
Proof of (ii). Since
X(s) = S(s−t1)ξ+
∫ s
t1
S(s−r)b(X(r), a(r)) dr+
∫ s
t1
∫U\0
S(s−r)γ(X(r−), a(r), u)π(dr, du),
we have
E supt1≤s≤t
‖X(s)‖2
≤ 3
[E‖ξ‖2 + E
(∫ t
t1
‖b(X(r), a(r))‖ dr)2
+ 4E∫ t
t1
∫U
‖γ(X(r−), a(r), u)‖2 dr ν(du)
].
It follows from the conditions imposed on the coefficients that there exists a constant c1,
such that
‖b(x, a)‖2 ≤ c1(1 + ‖x‖2), ‖γ(x, a, u)‖2 ≤ c1ρ2(u)(1 + ‖x‖2), x ∈ H, u ∈ U, a ∈ Λ.
Consequently
E supt1≤s≤t
‖X(s)‖2 ≤ 3E‖ξ‖2 + 3c1
∫ t
t1
(1 + E sup
t1≤s≤r‖X(s)‖2
)dr
+ 12c1
∫U
ρ2(u)ν(du)
∫ t
t1
(1 + E sup
t1≤s≤r‖X(s)‖2
)dr
and thus, for some constants c2, cT , for t1 ≤ t ≤ T ,
E supt1≤s≤t
‖X(s)‖2 ≤ cT (1 + E‖ξ‖2) + c2
∫ t
t1
E supt1≤s≤r
‖X(s)‖2 dr.
The required estimate follows by Gronwall’s inequality.
Proof of (iii). Part (iii) follows from the local inversion theorem [11], page 238, and
Proposition 2.1.
Proof of (iv). The proof uses arguments similar to these used in the proof of (ii).
Proof of (v). By Lemma 5.3 we have
E‖X1(s)−X2(s)‖2−1 = ‖x1 − x2‖2
−1
− 2E∫ s
t1
[〈X1(r)−X2(r), A∗B(X1(r)−X2(r))〉
+ 〈b(X1(r), a(r))− b(X2(r), a(r)), B(X1(r)−X2(r))〉]dr
+ E∫ s
t1
∫U
‖γ(X1(r), a(r), u)− γ(X2(r), a(r), u)‖2−1ν(du)dr
≤ ‖x1 − x2‖2−1 + C1
∫ s
t1
E‖X1(r)−X2(r)‖2−1dr,
(3.14)
12
where we have used (3.1), (3.3), (3.4), (3.5). Therefore, Gronwall’s inequality gives
E‖X1(s)−X2(s)‖2−1 ≤ C‖x1 − x2‖2
−1.
Proof of (vi). The proof is similar to that of (ii). With a different starting inequality
we obtain for a new constant cT
E supt1≤s≤t
‖X(s)− x‖2 ≤ cT
[supt1≤s≤t
‖S(s− t1)x− x‖2 +
∫ t
t1
E(
1 + supt1≤s≤r
‖X(s)‖2
)dr
].
The result follows using (3.9).
Proof of (vii). The proof of (vii) follows the proof of (iv).
3.4 Uniqueness in law for control systems
Let (E, E) be a measurable space, (Ωi,Fi,Pi), i = 1, 2 be two probability spaces, and
Yi : [t, T ] × Ωi → E be two stochastic processes, and let D ⊂ [t, T ]. We say that Y1(·)and Y2(·) have the same laws on D if they have the same finite dimensional distributions
on D, i.e. if for any t ≤ t1 < t2 < ... < tn ≤ T, ti ∈ D, and A ∈ E × E × ... × E (n-fold
product),
P1 (ω1 : (Y1(t1), ..Y1(tn))(ω1) ∈ A) = P2 (ω2 : (Y2(t1), ..Y2(tn))(ω2) ∈ A) .
We denote it by writing LP1(Y1(·)) = LP2(Y2(·)) on D. If we omit the set D it is understood
that the finite dimensional distributions are the same on some set of full measure.
Theorem 3.5. Let µi = (Ωi,Fi,F i,ts ,Pi, Li) , i = 1, 2, be two reference probability spaces
and πi, i = 1, 2, be the Poisson random measures for Li, i = 1, 2. Let ai ∈ Uµit , and ζi ∈L2(Ωi,F i,tt ,Pi), i = 1, 2. Let LP1(a1(·), L1(·), ζ1) = LP2(a2(·), L2(·), ζ2) on some subset D ⊂[0, T ] of full measure. Denote by Xi(·) the unique mild solution of (1.2) in the reference
probability space µi with a(·) = ai(·) and Xi(t) = ξi, i = 1, 2. Then LP1(X1(·), a1(·)) =
LP2(X2(·), a2(·)) on D.
Proof. The result is a direct consequence of the following two propositions and the fact
that solutions of the equation are obtained as limits of iterations of maps that give the
fixed point, like in [13].
Proposition 3.6 ([23], Theorem 8.3). Let H be a separable Hilbert space. Let (Ωi,Fi,Pi), i =
1, 2 two complete probability spaces, and (Ω, F) be a measurable space. Let ξi : Ωi → Ω, i =
1, 2 be two random variables, and fi : [t, T ]× Ωi → H, i = 1, 2 be two stochastic processes
satisfying
P1
(∫ T
t
‖f1(s)‖ds < +∞)
= P2
(∫ T
t
‖f2(s)‖ds < +∞)
= 1,
13
and
LP1 (f1(·), ξ1) = LP2 (f2(·), ξ2) on D
for some subset D ⊂ [t, T ] of full measure. Then
LP1
(∫ ·t
f1(s)ds, ξ1
)= LP2
(∫ ·t
f2(s)ds, ξ2
)on [t, T ]. (3.15)
Proposition 3.7. Let µi = (Ωi,Fi,F i,ts ,Pi, Li) , i = 1, 2, be two reference probability
spaces. Let πi, i = 1, 2, be the Poisson random measures for Li, i = 1, 2. Let Φi : [t, T ] ×Ωi × U → H, i = 1, 2 be two F i,ts -predictable fields such that Φi ∈ L2
µi,T(see [25], page
128). Let (Ω, F) be a measurable space and ξi : Ωi → Ω, i = 1, 2 be two random variables.
Assume that, for some subset D ⊂ [t, T ] of full measure,
(Φ1(·), L1(·), ξ1) , i = 1, 2
have the same finite dimensional distributions on D. Then
LP1
(∫ ·t
∫U\0
Φ1(s, z)π1(ds, dz), ξ1
)= LP2
(∫ ·t
∫U\0
Φ2(s, z)π2(ds, dz), ξ2
)on [t, T ].
(3.16)
Proof. Let e1, e2, . . . be a dense subset of H = L2(U, ν;H) consisting of bounded and
continuous functions such that for each i = 1, 2, . . . there exists a positive number ri > 0
such that the support of ei is disjoint from the ball u : ‖u‖U ≤ ri. For k = 1, 2, . . . and
h ∈ H, define Tk(h) to be the element in e1, . . . , ek which is the closest to h and has
minimal index, and define Ak,j = h : Tk(h) = ej, k = 1, 2, . . . , j = 1, . . . , k. The Ak,j are
disjoint Borel sets and
H =k⋃j=1
Ak,j, Tk(h) =k∑j=1
1Ak,j(h)ej, h ∈ H.
Define
Φki (s) = Tk(Φi(s)), i = 1, 2, k = 1, 2, . . . , s ∈ [t, T ].
It is clear that ((1Ak,j(Φi(·)))j=1,...,k, Li(·), ξi), i = 1, 2, have the same finite dimensional
distributions on D for each k and thus also (Φki (·), Li(·), ξi). Since ‖Tk(h) − h‖H ↓ 0 as
k → +∞ for each h ∈ H,
limk→+∞
E∫ T
t
‖Φki (s)− Φi(s)‖2
H ds = 0,
and therefore, by Doob’s maximal inequality, see (2.5), and the Borel-Cantelli lemma,
some subsequence of the stochastic integrals∫ s
t
∫U\0
Φki (r, u)πi(dr, du), s ∈ [t, T ], i = 1, 2
14
converges uniformly on [t, T ] with probability 1, to the integrals∫ s
t
∫U\0
Φi(r, u)πi(dr, du).
Consequently we can assume that the processes Φi are of the special form
Φi(s, u) =K∑j=1
φji (s)ej(u), i = 1, 2, . . . , s ∈ [t, T ],
where ((φji )j=1,...,K(·), Li(·), ξi) have the same finite dimensional distributions on D and in
addition
0 ≤ φji (s) ≤ 1, i = 1, 2, j = 1, . . . , K, . . . , s ∈ [t, T ]
(recall that φji (·) corresponds to 1Ak,j(Φi(·)) before). To go further we need the following
elementary lemma.
Lemma 3.8. Assume that a real valued function f ∈ L2[a, b]. Define
fn(s) = n
∫[a∨s− 1
n,s]
f(r) dr, a ≤ s ≤ b, n = 1, 2, ...
and for a partition πn = a = tn0 < tn1 < · · · < tnk = b,
fn(s) = fn(tnk), s ∈ (tnk , tnk+1], k = 0, . . . , n− 1.
Then: (i) The fn are continuous functions and
limn
∫ b
a
|fn(s)− f(s)|2 ds = 0.
(ii) If f is continuous and πn is a normal sequence of partitions (i.e. partitions which
are finer with n increasing and such that maxtni − tni−1 : i = 1, ..., k → 0 as n→ +∞),
then
limn
∫ b
a
|fn(s)− f(s)|2 ds = 0.
(iii) If supa≤s≤b |f(s)| = M , then |fn(s)| ≤M , |fn(s)| ≤M .
By Proposition 3.6, deterministic integration leads to processes with the same finite
dimensional distributions. Taking into account the above lemma it is enough to show that
processes (∫ ·t
∫U\0
(J−1∑j=0
1(tj ,tj+1](r)ψji
)e(u)πi(dr, du), Li(·), ξi
)i = 1, 2,
15
have the same finite dimensional distributions on D if t = t0 < t1 < · · · < tJ = T , ψjiare F i,ttj -measurable, e is a bounded, continuous function with the support in ‖u‖U ≥r0, r0 > 0 which is integrable with respect to ν, and (ψ0
i , ψ1i , . . . , ψ
J−1i , Li(·), ξi), i = 1, 2,
have the same finite dimensional distributions on D. Note that for i = 1, 2∫ s
t
∫U\0
(J−1∑j=0
1(tj ,tj+1](r)ψji
)e(u)πi(dr, du)
=J−1∑j=0
ψji
∫U\0
e(u)πi((tj ∧ s, tj+1 ∧ s], du), s ∈ [t, T ],
and therefore to show that these processes have the same distributions it is enough to
prove the following lemma.
Lemma 3.9. Let e ∈ H be a bounded, continuous function with the support in ‖u‖U ≥r0, r0 > 0. Then for arbitrary 0 ≤ a < b ≤ T∫
U\0e(u)πi((a, b], du) = lim
m→+∞
m−1∑q=0
e
(Li
(a+
b− am
(q + 1)
)− Li
(a+
b− am
q
)).
Proof of Lemma 3.9. Let τ ki , k = 1, 2, . . . , Ki be the consecutive moments of all jumps of
the processes Li, i = 1, 2 of the size greater than or equal to r0 which occur in the time
interval (a, b]. Then∫U\0
e(u)πi((a, b], du) =
Ki∑k=1
e(Li(τ
ki )− Li(τ ki −)
).
We fix i. We will show that for almost all ω there exists m0(ω) such that if m ≥ m0(ω)
and ∣∣∣∣Li(a+b− am
(q + 1)
)− Li
(a+
b− am
q
)∣∣∣∣ ≥ r for some q,
then τ ki ∈(a+ b−a
mq, a+ b−a
m(q + 1)
], for some k = 1, 2, . . . , Ki.
Suppose to the contrary that there exists a sequence qm converging to +∞ such that
τ ki /∈(a+
b− am
qm, a+b− am
(qm + 1)
], for k = 1, 2, . . . , Ki
and ∣∣∣∣Li(a+b− am
(qm + 1)
)− Li
(a+
b− am
qm
)∣∣∣∣ ≥ r. (3.17)
Passing to a subsequence we can assume that for some k,(a+
b− am
qm, a+b− am
(qm + 1)
]⊂ [τ ki , τ
k+1]
16
and that
limm→+∞
(a+
b− am
qm
)= c ∈ [τ ki , τ
k+1].
By the cadlag property, c 6= τ ki , τk+1. Moreover, if c ∈ (τ ki , τ
k+1i ), then
limm→+∞
Li
(a+
b− am
qm
)= lim
m→+∞Li
(a+
b− am
(qm + 1)
)= L(c),
which contradicts (3.17).
This ends the proof of Theorem 3.7
3.5 Control problems and reference spaces
We show in this section, see Theorem 3.14 below, that the value function of the control
problem is independent of the reference probability space on which it is considered.
Let t ∈ [0, T ]. The canonical reference probability space on [t, T ] is the 5-tuple
µL := (DU [t, T ],F∗,P∗,Bts,L), where P∗ is the measure on (DU [t, T ],B(DU [t, T ])) (where
B(DU [t, T ]) is the Borel σ-field) such that the mappingL : [t, T ]×DU [t, T ]→ U
L(s, ω) = ω(s)(3.18)
is a ν Bts-Levy process in U , F∗ is the completion of B(DU [t, T ]), and for s ∈ [t, T ],
Bt,0s = σ(L(τ) : t ≤ τ ≤ s), Bts = σ (Bt,0s ,N ∗), where N ∗ are the P∗-null sets.
Lemma 3.10 (see [12], Proposition 7.1). We have
B(DU [t, T ]) = Bt,0T (3.19)
and if (Ω,F ,F ts,P, L) is a reference probability space such that the trajectories of L(·, ω)
are cadlag for every ω ∈ Ω, then
F t,0s = L(· ∧ s)−1(Bt,0T). (3.20)
Proof. The equality (3.19) is proved in [12], Proposition 7.1, and the proof (3.20) follows
the arguments of the proof of Lemma 2.18 in [13].
In particular, since DU [t, T ] is a Polish space, µL is a standard reference probability
space. For more information on the canonical sample space for Levy processes we refer to
[12], Chapter 3 and to [2].
We denote PDU [t,T ][t,T ] to be the sigma field of Bt,0s -predictable sets, i.e. the sigma field
generated by the sets of the form (s, r]×A, t ≤ s < r ≤ T,A ∈ Bt,0s and t×A,A ∈ Bt,0t .
17
For a reference probability space (Ω,F ,F ts,P, L) we denote PΩ[t,T ] to be the sigma field of
F t,0s -predictable sets.
We will assume from now on without loss of generality that the paths of all ν Levy
processes L(·, ω) are cadlag for every ω ∈ Ω.
Lemma 3.11. Let a(·) = (Ω,F ,F ts,P, L, a (·)) ∈ Ut be F t,0s -predictable. Then there exists
a PDU [t,T ][t,T ] /B(Λ)-measurable function F : [t, T ]×DU [t, T ]→ Λ such that
a(s, ω) = F (s, L(·, ω)), for ω ∈ Ω, s ∈ [t, T ]. (3.21)
Proof. Define the process β : [t, T ]× Ω→ [t, T ]×DU [t, T ]
β(τ, ω) = (τ, L(·, ω)).
The sets of the form A1 = (s, r]× ω ∈ Ω : L(η, ω) ∈ B, t ≤ η ≤ s < r ≤ T,B ∈ B(U),
and A2 = t × ω ∈ Ω : L(t, ω) ∈ B, B ∈ B(U), generate PΩ[t,T ]. But (τ, ω) ∈ A1
if and only if τ ∈ (s, r] and L(·, ω) ∈ B1 = ξ ∈ DU [t, T ] : ξ(η) ∈ B ∈ Bt,0s , and
(t, ω) ∈ A2 if and only if L(·, ω) ∈ B2 = ξ ∈ DU [t, T ] : ξ(t) ∈ B ∈ Bt,0t . Therefore,
A1 = β−1((s, r] × B1), A2 = β−1(t × B2). Since the sets of the form (s, r] × ξ ∈DU [t, T ] : ξ(η) ∈ B, t ≤ η ≤ s < r ≤ T,B ∈ B(U), and t × ξ ∈ DU [t, T ] : ξ(t) ∈B, B ∈ B(U), generate PDU [t,T ]
[t,T ] , we have PΩ[t,T ] = β−1(PDU [t,T ]
[t,T ] ). Therefore, by Theorem
1.9 of [13] (or Theorem 1.7 of [35]), there exists a PDU [t,T ][t,T ] /B(Λ)-measurable function
F : [t, T ]×DU [t, T ]→ Λ such that (3.21) is satisfied.
The following corollary follows immediately from Lemma 3.11 and its proof.
Corollary 3.12. Let µi =(Ωi,Fi,F ti,s,Pi, Li
), i = 1, 2, be two reference probability spaces.
Let a1(·) ∈ Uµ1t be F t,01,s-predictable. Let F : [t, T ] × DU [t, T ] → Λ be the function from
Lemma 3.11 satisfying (3.21) for L = L1. Then the process
a2(s, ω2) = F (s, L2(·, ω))
is F t,02,s-predictable (and hence it belongs to Uµ2t ) and LP1(L1(·), a1(·)) = LP2(L2(·), a2(·))on some set D ⊂ [t, T ] of full measure.
Proposition 3.13. Assume that (Ω,F ,F ts,P, L) is a reference probability space. Then:
(i) The fultration F ts is right continuous.
(ii) F t,0T is countably generated (and consequently F tT is countably generated up to sets
of measure zero).
18
(iii) If (Ω,F ,F ts,P, L, a (·)) ∈ Ut, then there exists an F t,0s -predictable process a1(·) such
that a(·) = a1(·), dt⊗ P a.s. on [t, T ]× Ω.
Proof. The proofs of (i) and (ii) are similar to the proof of Lemma 1.94 in [13]. In particular
the right-continuity of the trajectories of L is a sufficient replacement for the continuity
of a Wiener process, see [30], vol. 1, pages 174-175. For the proof of (iii) see the proof of
Lemma 1.99 in [13].
Theorem 3.14. Let t ∈ [0, T ], x ∈ H, µ1 =(Ω1,F1,F t1,s,P1, L1
), µ2 =
(Ω2,F2,F t2,s,P2, L2
)be two reference probability spaces, and a1(·) ∈ Uµ1t . There exists a2(·) ∈ Uµ2t such that
LP1(Xµ1(·; t, x, a1(·)), a1(·)) = LP2(X
µ2(·; t, x, a2(·)), a2(·)).
In particular, for every reference probability space µ,
V µt (x) := inf
a(·)∈UµtJ(t, x; a(·)) = V (t, x).
Proof. The result follows from Corollary 3.12, Proposition 3.13(iii) and Theorem 3.5.
3.6 Conditional control systems
Let us recall that a ν F ts-Levy process regarded as a DU [t, T ] random variable determines
uniquely its distribution, say Pν , on (DU [t, T ],B(DU [t, T ])). Conversely if (Ω,F ,F ts,P)
is a filtered probability space, L is U -valued cadlag process such that, L(t) = 0, P-a.s.,
F ts is the augmentation of the filtration generated by L by the null sets of F , and the
distribution of L is Pν , then (Ω,F ,F ts,P, L) is a reference probability space.
In the sequel we will often use the concept of the regular conditional probability, for
which we refer to [17], p.106, or [15].
Proposition 3.15. Let 0 ≤ t ≤ η < T and let µ = (Ω,F ,F ts,P, L) be a standard reference
probability space (see Definition 3.3). Define, for s ∈ [η, T ], Lη(s) := L(s) − L(η). Let
a(·) ∈ Uµt , and let a1(·) be from Proposition 3.13. Then:
(i) For P a.e. ω0 ∈ Ω, µω0 =(Ω,Fω0 ,Fηω0,s
,Pω0 , Lη)
is a reference probability space,
where Pω0 = P(·|F t,0η )(ω0) is the regular conditional probability, Fω0 is the augmen-
tation of F ′ by the Pω0 null sets, and Fηω0,sis the augmented filtration generated by
Lη.
(ii) For P a.e. ω0 ∈ Ω, F t,0s+ ⊂ Fηω0,sfor every η ≤ s ≤ T .
(iii) For P a.e. ω0 ∈ Ω, aω0(·) :=(Ω,Fω0 ,Fηω0,s
,Pω0 , Lη, a1 (·) |[η,T ]
)∈ Uη.
19
Proof. (i) Let Γj, j = 1, 2, . . . be a determining family of B(DU [t, T ]), and Pν be the
distribution of ν-Levy processes on DU [η, T ]. For arbitrary j = 1, 2, . . ., the event Lη(·) ∈Γj is independent of F tη and therefore
P(Lη(·) ∈ Γj | F tη
)= P (Lη(·) ∈ Γj)
= Pν(Γj), P-a.s.
Moreover
P(Lη(·) ∈ Γj | F tη
)= Pω0 (Lη(·) ∈ Γj) , for P a.s. ω0
Let Ω0 be the set of those ω0 for which
Pω (Lη(·) ∈ Γj) = Pν(Γj), j = 1, 2, . . . .
Then P(Ω0) = 1. Since sets Γj form a determining family, we have
Pω0 (Lη(·) ∈ Γ) = Pν(Γ)
for all Γ ∈ DU [η, T ]. Therefore Lη is a ν Fηω0,s-Levy process in
(Ω,Fω0 ,Fηω0,s
,Pω0
)for P
a.s. ω0 and (i) follows.
The proofs of (ii) and (iii) are identical to the proof of Lemma 2.26 in [13]. Following
this proof we obtain F t,0s ⊂ Fηω0,sfor every η ≤ s ≤ T and then (ii) is the consequence of
the right continuity of the filtration Fηω0,s.
Proposition 3.16. Let 0 ≤ t ≤ η < T, x ∈ H, be fixed. Let µ = (Ω,F ,F ts,P, L) be a
standard reference probability space and let µω0 be as in Proposition 3.15. Let a(·) ∈ Ut be
such that a|[η,T ](·) ∈ Uµω0η for P a.e. ω0. Then there exists a version of Xµ(·; t, x, a(·)) such
that (for that version) for P a.e. ω0, Xµω0 (·; η,Xµ(η), a(·)) = Xµ(·; t, x, a(·)) on [η, T ], Pω0
a.s.
Proof. Denote X(·) = Xµ(·; t, x, a(·)). Let Ω0 be such that P(Ω0) = 1 and X(·, ω) are
cadlag for ω ∈ Ω0. Let sk∞k=1 be a dense set in [t, T ], s1 = T and Ak ∈ F t,0sk be such that
P(Ak) = 1 and X(sk) = ξk on Ak for some F t,0sk -measurable random variable ξk. Define
Ω1 = Ω0 ∩⋂∞k=1 Ak. Then P(Ω1) = 1 and thus Pω0(Ω1) = 1 for P a.e. ω0 which implies
that Ω1 ⊂ Fηω0,sfor P a.e. ω0. The required version, denoted by X1, is defined as follows:
X1(s) = X(s) if s ∈ [t, T ], ω ∈ Ω1, X1(s) = 0 if ω 6∈ Ω1.
The process X1(·) has cadlag trajectories. Since, for ω ∈ Ω1, X1(s) = limsk↓s ξk, X1(·)is σ(F t,0s+,Ω1)-adapted. But, for s ∈ [η, T ], σ(F t,0s+,Ω1) ⊂ Fηω0,s
for P a.e. ω0, so X1(·) is
Fηω0,s-adapted. Since X1(·) is a version of Xµ(·), and thus a version of Xµ(·; η,Xµ(η), a(·))
20
on [η, T ], to conclude that, for P a.e. ω0, Xµω0 (·; η,Xµ(η), a(·)) = X1(·) on [η, T ], Pω0 a.s.,
it is enough to prove that, for P a.e. ω0,∫ s
η
∫U\0
S(s− r)γ(X1(r−), a(r), u)π(dr, du)
in µ is Pω0 a.s. equal to this integral in the reference probability spaces µω0 .
Denote by H the extension of H and by S the unitary group extending eAr, coming
from the dilation theorem, as in the proof of Proposition 2.1. Define
Φ(r, u) = S(−r)γ(X1(r−), a(r), u), t ≤ r ≤ T, u ∈ U.
Then Φ(r) = Φ(r, ·) is an L2(U, ν; H) valued, F t,0s -predictable process. The result will
follow if we can show that the stochastic integral∫ s
η
∫U\0
Φ(r, u)π(dr, du)
in the space µ, is Pω0 a.s. equal to this integral in the reference probability spaces µω0 , for
P a.e. ω0, .
As in the proof of Proposition 3.7 , there exists a sequence of processes Φn(r, u) such
that
Φn(r, u) =mn∑k=0
φnk(τk)(u)1(τk,τk+1](r),
where η = τ0 < τ1, ..., τmn = T, φnk(τk) is F t,0τk -measurable, and
limn→+∞
E∫ T
η
∫U
‖Φ(r, u)− Φn(r, u)‖2Hν(du)dt = 0.
Choosing a subsequence we can assume that for P a.e. ω0
limn→+∞
Eω0
∫ T
η
∫U
‖Φ(r, u)− Φn(r, u)‖2Hν(du)dt = 0 (3.22)
and moreover∞∑n=1
22nE∫ T
η
∫U
‖Φn+1(r, u)− Φn(r, u)‖2Hν(du)dt < +∞.
Therefore, by Doob’s martingale inequality,
P
(supη≤s≤T
∥∥∥∥∫ s
η
∫U\0
Φn+1(r, u)π(dr, du)−∫ s
η
∫U\0
Φn(r, u)π(dr, du)
∥∥∥∥H
≥ 1
2n
)
≤ 22nE∥∥∥∥∫ T
η
∫U\0
(Φn+1(r, u)− Φn(r, u)) π(dr, du)
∥∥∥∥2
H
= 22nE∫ T
η
∫U
‖Φn+1(r, u)− Φn(r, u)‖2Hν(du)dt. (3.23)
21
It thus follows from the Borel-Cantelli lemma that there is a set A ⊂ Ω such that P(A) = 1
and, for every ω ∈ A,
supη≤s≤T
∥∥∥∥∫ s
η
∫U\0
Φn+1(r, u)π(dr, du)−∫ s
η
∫U\0
Φn(r, u)π(dr, du)
∥∥∥∥H
≤ 1
2n(3.24)
for a sufficiently big n. We can also assume that Pω0(A) = 1 for P a.e. ω0.
Now
E
[∞∑n=1
22n
∫ T
η
∫U
‖Φn+1(r, u)− Φn(r, u)‖2Hν(du)dt
]
= E
[∞∑n=1
22nE[∫ T
η
∫U
‖Φn+1(r, u)− Φn(r, u)‖2Hν(du)dt
∣∣∣∣F t,0η ]]
= E
[∞∑n=1
22nEω0
[∫ T
η
∫U
‖Φn+1(r, u)− Φn(r, u)‖2Hν(du)dt
]]< +∞.
Therefore, for P a.e. ω0,
∞∑n=1
22nEω0
[∫ T
η
∫U
‖Φn+1(r, u)− Φn(r, u)‖2Hν(du)dt
]< +∞.
Arguing as before we thus obtain that for P a.e. ω0 there is a set Aω0 ⊂ Ω such that
Pω0(Aω0) = 1 and, for every ω ∈ Aω0 , (3.24) holds if n is big enough. We remark that∫ s
η
∫U\0
Φn(r, u)π(dr, du)
is the same in µ and µω0 . The result now follows since Pω0(A∩Aω0) = 1 and (3.22) holds.
4 Dynamic programming principle
We now establish the Dynamic Programming Principle which is the main technical tool
of our approach. We remind that we assume without loss of generality that for every
reference probability space (Ω,F ,F ts,P, L), the paths L(·, ω) are cadlag for every ω ∈ Ω.
Lemma 4.1. There exists a modulus σ and C ≥ 0 such that
|J(t, x; a(·))− J(t, y; a(·))| ≤ σ(‖x− y‖−1), for all x, y ∈ H, t ∈ [0, T ], a(·) ∈ Ut, (4.1)
and
|J(t, x; a(·))| ≤ C(1 + ‖x‖), for all x ∈ H, t ∈ [0, T ]. (4.2)
22
In particular
|V (t, x)− V (t, y)| ≤ σ(‖x− y‖−1), for all x, y ∈ H, t ∈ [0, T ], (4.3)
|V (t, x)| ≤ C(1 + ‖x‖), for all x ∈ H, t ∈ [0, T ]. (4.4)
Proof. The results follow directly from Theorem 3.4 and assumptions (3.6), (3.7).
Theorem 4.2 (Dynamic Programming Principle). Let 0 ≤ t < η ≤ T, x ∈ H, and denote,
for a(·) ∈ Ut, X(s) := X(s; t, x, a(·)), s ∈ [t, T ]. Then
V (t, x) = infa(·)∈Ut
E[∫ η
t
f (X (s) , a (s)) ds+ V (η,X (η))
]. (4.5)
Proof. With the preparatory results of Section 3 the arguments follow the proof of Theo-
rem 2.24 in [13] with only small changes, however we include the proof for completeness.
If we denote
Ut =
⋃µ
Uµt : µ is a standard reference probability space
.
it is easy to see from Theorem 3.14 that (4.5) will follow if we can prove it with Ut replaced
by Ut. So we need to show that
V (t, x) = infa(·)∈Ut
E[∫ η
t
f (X (s) , a (s)) ds+ V (η,X (η))
]. (4.6)
Step 1. Let a(·) ∈ Uµt ⊂ Ut for some µ = (Ω,F ,F ts,P, L). We have
J (t, x; a(·)) = E[∫ η
t
f (X (s) , a (s)) ds
]+E
[∫ T
η
f (X (s) , a (s)) ds+ g (X (T ))
]. (4.7)
By Theorem 3.4(vii), Proposition 3.13, and Proposition 3.15(iii), we can assume that a(·)is F t,0s -predictable and a(·)|[η,T ] ∈ U
µω0η for P a.e. ω0. Thus, by Proposition 3.16 we can
assume that Xµω0 (·; η,Xµ(η), a(·)) = X(·) for P a.e. ω0.
By the properties of the regular conditional probability, (see for instance [35], Propo-
sition 1.9 or [15], Corollary, page 15), for P a.e. ω0,
Pω0(ω : X(η, ω) = X(η, ω0)) = 1.
23
Therefore we have
E[∫ T
η
f (X (s) , a (s)) ds+ g (X (T ))
]= E
[E[∫ T
η
f (X (s) , a (s)) ds+ g (X (T )) |F t,0η]]
= E[Eω0
[∫ T
η
f (X (s) , a (s)) ds+ g (X (T ))
]]= E [J (η,X(η, ω0); a(·))] ≥ E [V (η,X(η, ω0))] = E [V (η,X(η))] . (4.8)
Thus, using (4.7) and taking the infimum over all a(·) ∈ Ut, we obtain
V (t, x) ≥ infa(·)∈Ut
E[∫ η
t
f (X (s) , a (s)) ds+ V (η,X (η))
].
Step 2. We fix a(·) = (Ω,F ,F ts,P, L, a (·)) ∈ Ut. Using the separability of H, (4.1) and
(4.3), it is easy to see that we can find a partition Djj∈N of H into countable disjoint
Borel subsets such that for every j = 1, 2, ..., all x, x ∈ Dj, and each a(·) ∈ Uη, we have
|J (η, x; a(·))− J (η, x; a(·))|+ |V (η, x)− V (η, x)| < ε
For each j ∈ N we now choose xj ∈ Dj and aj(·) ∈ Uµj for some µj =(Ωj,Fj,Fηj,s,Pj, Lj
)such that
J (η, xj; aj(·)) < V (η, xj) + ε. (4.9)
Let aj,1(·), j ∈ N be the Fη,0j,s -predictable processes from Proposition 3.13 such that
aj,1(·) = aj(·), Pj⊗dt a.e.. Let Fj : [η, T ]×DU ([η, T ])→ Λ be the functions from Lemma
3.11 such that
Fj (s, Lj(·, ω)) = aj,1 (s, ω) , for ω ∈ Ωj, s ∈ [η, T ].
We now set aj (s, ω) = Fj (s, Lη (·, ω)). By Corollary 3.12 and Proposition 3.15 the process
aj(·) is F t,0s -predictable and, for P-a.e. ω0, is Fηω0,s-predictable in the reference probability
spaces µω0 :=(Ω,Fω0 ,Fηω0,s
Pω0 , Lη). Moreover LPω0 (aj(·), Lη(·)) = LPj(aj,1(·), Lj(·)). We
now define a new control aη(·) ∈ Ut in the reference probability space (Ω,F ,F ts,P, L)
aη (s, ω) = a (s, ω)1t≤s≤η + 1s>η∑j∈N
aj (s, ω)1X(η;t,x,a(·))∈Dj. (4.10)
We have (Ω,F ,F ts,P, L, aη(·)) ∈ Ut.Let X(s) = X(s; t, x, aη(·)). Notice that X(s; t, x, aη(·)) = X(s; t, x, a(·)) on [t, η], P
a.e.
24
Denote Oj := ω : X(η; t, x, a(·)) ∈ Dj. Since for P-a.s. ω0, Pω0(ω : X(η, ω) =
X(η, ω0)) = 1, if ω0 ∈ Oj, then Pω0(Ω \ Oj) = 0, which implies that in this case aj(·) =
aη(·) on [η, T ], Pω0 a.s., and thus, for P a.s. ω0, aη|[η,T ] ∈ Uµω0η , and
LPω0 (aη(·), Lη(·)) = LPj(aj,1(·), Lj(·)), j ∈ N. (4.11)
By Proposition 3.16 we can assume that Xµω0 (·; η,Xµ(η), a(·)) = X(·) for P a.e. ω0.
We have
V (t, x) ≤ E[∫ T
t
f (X (s) , aη (s)) ds+ g (X (T ))
]= E
[∫ η
t
f (X (s) , a (s)) ds
]+ E
[∫ T
η
f (X (s) , aη (s)) ds+ g (X (T ))
](4.12)
and
E[∫ T
η
f (X (s) , aη (s)) ds+ g (X (T ))
]= E
[E[∫ T
η
f (X (s) , aη (s)) ds+ g (X (T )) |F t,0η]]
=∑j∈N
∫Oj
Eω0
[ ∫ T
η
f (X (s) , aη (s)) ds+ g (X (T ))
]dP(ω0)
By (4.11) and Theorem 3.5 we obtain
LPω0 (X(·), aη(·)) = LPj(Xµj(·), aj,1(·)), j ∈ N,
where Xµj(s) = Xµj(s; η,X(η; t, x, a(·))(ω0), aj,1(·)). Therefore,
E[∫ T
η
f (X (s) , aη (s)) ds+ g (X (T ))
]=∑j∈N
∫Oj
JPω0 (η,X(η; t, x, a(·))(ω0); aη(·)) dP(ω0)
=∑j∈N
∫Oj
JPj (η,X(η; t, x, a(·))(ω0); aj,1(·)) dP(ω0).
Using (4.9), we get for a.s. ω0 ∈ Oj
JPj (η,X (η; t, x, a(·)) (ω0) ; aj(·)) ≤ JPj(η, xj; aj(·)) + ε
≤ V (η, xj) + 2ε ≤ V (η,X (η; t, x, a(·)) (ω0)) + 3ε,
25
which implies
E[∫ T
η
f (X (s) , aη (s)) ds+ g (X (T ))
]≤ E [V (η,X (η; t, x, a(·)))] + 3ε.
We now use (4.12) and take the infimum over all a(·) ∈ Ut to obtain
V (t, x) ≤ infa(·)∈Ut
E[∫ η
t
f (X (s) , a (s)) ds+ V (η,X (η))
]+ 3ε
and the claim follows since ε was arbitrary.
Lemma 4.3. For every R > 0 there exists a modulus σR such that
|V (t, x)− V (s, x)| ≤ σR(|t− s|), for all t, s ∈ [0, T ], ‖x‖ ≤ R. (4.13)
Proof. Suppose that s > t, ‖x‖ ≤ R. Using (4.5), (3.6), (3.7), (3.9), (3.13) and (4.3) we
have
|V (t, x)− V (s, x)| ≤ supa(·)∈Ut
E[∫ s
t
|f (X (τ) , a (τ)) |ds+ |V (s,X (s))− V (s, x)|]
≤ C1(s− t)(1 + ‖x‖) + supa(·)∈Ut
E [σ (‖X (s)− x‖−1)] ≤ σR(|t− s|)
for some modulus σR.
Once we know that V has the above continuity properties we can repeat the arguments
from [13], Section 3.6.2 (see Theorem 3.67 there) to obtain the dynamic programming
principle in the stopping time formulation. For every a(·) ∈ Ut defined on some reference
probability space µ = (Ω,F ,F ts,P, L), we choose an F ts-stopping time t ≤ τa(·) ≤ T .
We define Vt to be the set of all such pairs(a(·), τa(·)
). We then have the following. If
0 ≤ t < T, x ∈ H, then
V (t, x) = inf(a(·),τa(·))∈Vt
E[∫ τa(·)
t
f (X (s) , a (s)) ds+ V(τa(·), X
(τa(·)
))]. (4.14)
5 Viscosity solutions
We recall the definition of viscosity solution from [34]. Since in the current paper we allow
for unbounded solutions the definition here will be slightly more general than that in [34]
where it was assumed that viscosity solutions were bounded.
Definition 5.1. We will say that a function ψ is a test function if
ψ = ϕ+ δ(t, x)h(‖x‖),
where:
26
(i) ϕt, Dϕ,D2ϕ,A∗Dϕ, δt, Dδ,D
2δ, A∗Dδ are uniformly continuous on (ε, T − ε)×H for
every ε > 0, δ ≥ 0 and is bounded, ϕ is B-lower semicontinuous, δ is B-continuous.
(ii) h is even, h′, h′′ are uniformly continuous on R, h′(r) ≥ 0 for r ∈ (0,+∞).
The above definition implies that |ψ(t, x)| ≤ Cε(1+‖x‖2) on every set (ε, T − ε)×H.
The function δ was introduced in [34] so that the definition of viscosity solution implied
a definition using a “localized” Hamiltonian. The test functions here are slightly different
from these in [34] since in that paper only bounded solutions were studied, and hence
ϕ, h were also assumed to be bounded. However the definition of test functions here is
consistent with the definition in [34], i.e. the test functions used in [34] are also test
functions in the sense of Definition 5.1.
Definition 5.2. A B-upper semicontinuous function u : (0, T ] × H → R is a viscosity
subsolution of (1.1) if u(T, x) ≤ g(x) on H, and, whenever u− ψ has a global maximum
at a point (t, x) for a test function ψ(s, y) = ϕ(s, y) + δ(s, y)h(‖y‖), then
ψt(t, x)− 〈x,A∗Dϕ(t, x) + h(‖x‖)A∗Dδ(t, x)〉+ infa∈Λ
〈b(x, a), Dψ(t, x)〉+ f(x, a)
+
∫U
(ψ(t, x+ γ(x, a, z))− ψ(t, x)− 〈γ(x, a, z), Dψ(t, x)〉) ν(dz)
≥ 0. (5.1)
A B-lower semicontinuous function u : (0, T ) × H → R is a viscosity supersolution
of (1.1) if u(T, x) ≥ g(x) on H, and whenever u + ψ has a global minimum at a point
(t, x) for a test function ψ then
−ψt(t, x) + 〈x,A∗Dϕ(t, x) + h(‖x‖)A∗Dδ(t, x)〉+ infa∈Λ
〈b(x, a),−Dψ(t, x)〉+ f(x, a)
−∫U
(ψ(t, x+ γ(x, a, z))− ψ(t, x)− 〈γ(x, a, z), Dψ(t, x)〉) ν(dz)
≤ 0. (5.2)
A viscosity solution of (1.1) is a function which is both a viscosity subsolution and a
viscosity supersolution.
Lemma 5.3. Let 0 < t < T1 < T , τ be a stopping time such that t ≤ τ ≤ T1, x ∈ H,
a(·) ∈ Ut, and X(·) = X(·; t, x, a(·)) be the solution of (1.2). Let for R > ‖x‖, τR be the
exit time of X(·) from y : ‖y‖ ≤ R and set τ = τ ∧ τR. Let ψ = ϕ + δh(‖ · ‖) be a test
function. Then
Eψ(τ,X(τ)) ≤ ψ(t, x) + E∫ τ
t
[ψt(r,X(r)) + 〈b(X(r), a(r)), Dψ(r,X(r))〉
− 〈X(r), A∗Dϕ(r,X(r)) + h(‖X(r)‖)A∗Dδ(r,X(r))〉]dr
+ E∫ τ
t
∫U
[ψ(r,X(r) + γ(X(r), a(r), z))− ψ(r,X(r))
− 〈Dψ(r,X(r)), γ(X(r), a(r), z)〉]ν(dz)dr.
(5.3)
27
Proof. Let Xn(·) = Xn(·; t, x, a(·)) be the solution of (1.2) with A replaced by its Yosida
approximation An. Denote by τnR the exit time of Xn(·) from y : ‖y‖ ≤ R + 1 and set
τn = τ∧τnR. Applying Ito’s formula to the function ψ(s,Xn(s)) and taking the expectation
we obtain
Eψ(τn, Xn(τn)) = ψ(t, x) + E∫ τn
t
[ψt(r,Xn(r)) + 〈b(Xn(r), a(r)), Dψ(r,Xn(r))〉
− 〈AnXn(r), Dϕ(r,Xn(r)) + h(‖Xn(r)‖)Dδ(r,Xn(r)) + δ(r,Xn(r))h′(‖Xn(r)‖)‖Xn(r)‖
Xn(r)〉]dr
+ E∫ τn
t
∫U
[ψ(r,Xn(r) + γ(Xn(r), a(r), z))− ψ(r,Xn(r))
− 〈Dψ(r,Xn(r)), γ(Xn(r), a(r), z)〉]ν(dz)dr.
Since An is monotone we can drop the term −〈AnXn(r), δ(r,Xn(r))h′(‖X(r)‖)‖X(r)‖ Xn(r)〉 from
the integral above to obtain
Eψ(τn, Xn(τn)) ≤ ψ(t, x) + E∫ τn
t
[ψt(r,Xn(r)) + 〈b(Xn(r), a(r)), Dψ(r,Xn(r))〉
− 〈AnXn(r), Dϕ(r,Xn(r)) + h(‖Xn(r)‖)Dδ(r,Xn(r))〉]dr
+ E∫ τn
t
∫U
[ψ(r,Xn(r) + γ(Xn(r), a(r), z))− ψ(r,Xn(r))
− 〈Dψ(r,Xn(r)), γ(Xn(r), a(r), z)〉]ν(dz)dr.
We now pass to the limit as n→ +∞. Using (3.10) it is easy to see that for P a.s. ω
we have τn(ω) = τ(ω) if n is sufficiently large. Therefore the terms on the right hand side
above converge to the corresponding terms in (5.3) by (3.10) and the Lebesgue dominated
convergence theorem.
Regarding the Eψ(τn, Xn(τn)) term we proceed as follows. Denote Ωnm = ω :
supt≤r≤T ‖Xn(r)‖ ≥ m and Ωm = ω : supt≤r≤T ‖X(r)‖ ≥ mfor m > 0. Since
limm→+∞ lim supn→+∞ P(Ωnm ∪ Ωm)→ 0 and
E[
supt≤r≤T
‖Xn(r)‖21Ωnm∪Ωm
]≤ 2E
[supt≤r≤T
‖Xn(r)−X(r)‖21Ωnm∪Ωm + sup
t≤r≤T‖X(r)‖2
1Ωnm∪Ωm
],
denoting
ρ(m,n) = E[
supt≤r≤T
‖Xn(r)‖21Ωnm∪Ωm + sup
t≤r≤T‖X(r)‖2
1Ωnm∪Ωm
],
we have
limm→+∞
lim supn→+∞
ρ(m,n) = 0.
28
We know that ψ(s, y)| ≤ C(1 + ‖y‖2) for (s, y) ∈ [t1, T1]×H. Let σm be the modulus of
continuity of ψ on [t1, T1]× ‖y‖ ≤ m. Then
E|ψ(τn, Xn(τn))− ψ(τ,X(τ))| ≤ Eminσm(|τn − τ |+ ‖Xn(τn)−X(τ)‖), 2C(1 +m2)
+ E[(
2 + supt≤r≤T
‖Xn(r)‖2 + supt≤r≤T
‖X(r)‖2
)1Ωnm∪Ωm
]Using (3.10) and the Lebesgue dominated convergence theorem we thus obtain
lim supn→+∞
E|ψ(τn, Xn(τn))− ψ(τ,X(τ))| ≤ limm→+∞
lim supn→+∞
(ρ(m,n) + 2P(Ωnm ∪ Ωm)) = 0.
Theorem 5.4. The value function V is a viscosity solution of (1.1).
Proof. We will only show that V is a viscosity supersolution as the proof of the subsolution
part is similar and in fact easier.
Suppose that V +ψ has a global minimum at (t, x) ∈ (0, T )×H for some test function
ψ = ϕ+ δ(t, x)h(‖x‖). We can assume that V (t, x) + ψ(t, x) = 0, so for all (s, y) we have
V (s, y) ≥ −ψ(s, y). It follows from (3.13) that there exist numbers rε > 0, γε > 0 with
the property that rε → 0, γε → 1 as ε→ 0 and such that, denoting
Ω1 = ω ∈ Ω : sups∈[t,t+ε]
‖X(s; t, x, a(·))− x‖ ≤ rε,
we have
P(Ω1) ≥ γε (5.4)
for every a(·) ∈ Ut. (The set Ω1 depends on a(·) but we omit it in the notation.) We set
Ω2 = Ωµε \Ω1. Let τε be the exit time of X(s) from y : ‖y−x‖ ≤ rε, and τε = τε∧(t+ε).
By the dynamic programming principle (4.14), for every 0 < ε < (T−t)/2 there exists
a control aε(·) ∈ Ut defined on some reference probability space µε := (Ωµε ,Fµε ,Fµε,ts ,Pµε , Lµε)such that
V (t, x) + ε2 ≥ E[ ∫ τε
t
f(Xε(s), aε(s))ds+ V (τε, Xε(τε))
], (5.5)
where Xε(s) := X(s; t, x, aε(·)). By Theorem 3.14 we can assume that all control processes
aε(·) are defined on a single reference probability space µ := (Ω,F ,F ts,P, L). Inequality
(5.5), together with the fact that V + ψ has a global minimum at (t, x), implies that
ε2 − ϕ(t, x)− δ(t, x)h(‖x‖) ≥ E[ ∫ τε
t
f(Xε(s), aε(s))ds
− ϕ(τε, Xε(τε))− δ(τε, Xε(τε))h(‖Xε(τε)‖)]. (5.6)
29
Denote
Ψ(s, y, a) = ψt(s, y) + 〈b(y, a), Dψ(s, y)〉 − 〈y, A∗Dϕ(s, y) + h(‖y‖)A∗Dδ(s, y)〉 − f(y, a),
Φε1(s, z) = ψ(s,Xε(s)+γ(Xε(s), aε(s), z))−ψ(s,Xε(s))−〈Dψ(s,Xε(s)), γ(Xε(s), aε(s), z)〉.
It follows from (3.4), (3.6), (3.7) and the definition of test functions that the functions
Ψ(·, ·, a) are uniformly continuous on bounded subsets of [t, (T + t)/2]×H, uniformly for
a ∈ Λ. It thus follows that, if
γ1(ε) = sup|Ψ(s, y, a)−Ψ(t, x, a) : t ≤ s ≤ t+ ε, ‖x− y‖ ≤ rε, a ∈ Λ,
we have limε→0 γ1(ε) = 0. Therefore, by (5.6) and Lemma 5.3, we have
−ε ≤ 1
εE[ ∫ τε
t
Ψ(s,Xε(s), aε(s))
]+
1
εE[ ∫ τε
t
∫U
Φε1(s, z)ν(dz)ds
]≤ 1
εE[ ∫ τε
t
Ψ(t, x, aε(s))
]+
1
εE[ ∫ τε
t
∫U
Φε1(s, z)ν(dz)ds
]+ γ1(ε).
(5.7)
Set
Φε2(s, z) = ψ(t, x+ γ(x, aε(s), z))− ψ(t, x)− 〈Dψ(t, x), γ(x, aε(s), z)〉.
It follows from (3.5) and (3.8) and rε < 1 that
‖γ(Xε(s), a, z)‖ ≤ C(1 + ‖B1/2‖(1 + ‖x‖))ρ(z) = M1ρ(z), t ≤ s < τε, a ∈ Λ. (5.8)
Denote K = supρ(z) : ‖z‖U ≤ 1. Since D2ψ is uniformly continuous on bounded subsets
of [t, (T + t)/2]×H we have
sup‖D2ψ(s, y)‖ : t ≤ s ∈ [t, (T + t)/2], ‖y‖ ≤ 1 + ‖x‖+M1K
= M2 < +∞.
Therefore, recalling that for every s ∈ (0, T ) and w, y ∈ H
ψ(s, w + y) = ψ(s, w) + 〈Dψ(s, w), y〉+
∫ 1
0
∫ 1
0
〈D2ψ(s, w + rσy)y, y〉σdrdσ,
it follows from the above that
|Φε1(s, z)|+ |Φε
2(s, z)| ≤M2M21 (ρ(z))2, t ≤ s < τε, ‖z‖U ≤ 1. (5.9)
Since |ψ(s, y)| ≤ C1(1 + ‖y‖2) for s ∈ [t, (T + t)/2] and Dψ is is uniformly continuous on
bounded subsets of [t, (T + t)/2]×H, using (5.8) it is obvious that
|Φε1(s, z)|+ |Φε
2(s, z)| ≤ C2
(1 + (ρ(z))2
), t ≤ s < τε, ‖z‖U > 1 (5.10)
30
for some constant C2. Moreover, (3.5), (3.8) and the uniform continuity of ψ,Dψ on
bounded subsets of [t, (T + t)/2]×H imply that the functions
(τ, y) 7→ ψ(τ, y + γ(y, a, z))− ψ(τ, y)− 〈Dψ(τ, y, γ(y, a, z)〉
are uniformly continuous on [t, (T + t)/2]× y : ‖y − x‖ ≤ 1, uniformly for a ∈ Λ. Thus
for every z ∈ U there is a modulus σz such that
|Φε1(s, z)− Φε
2(s, z)| ≤ σz (|s− t|+ ‖Xε(s)− x‖) , t ≤ s < τε.
Now
1
εE[ ∫ τε
t
∫U
|Φε1(s, z)−Φε
2(s, z)|ν(dz)ds
]≤∫U
1
ε
∫ t+ε
t
E[1[t,τε)(s)|Φε
1(s, z)− Φε2(s, z)|
]dsν(dz)
and for every z
1
ε
∫ t+ε
t
E[1[t,τε)(s)|Φ1(s, z)− Φ1(s, z)|
]ds ≤ 1
ε
∫ t+ε
t
Eσz (|s− t|+ ‖Xε(s)− x‖) ds→ 0
as ε→ 0 by (3.13). Moreover, by (5.9) and (5.10),
1
ε
∫ t+ε
t
E[1[t,τε)(s)|Φε
1(s, z)− Φε2(s, z)|
]ds ≤
M2M
21 (ρ(z))2, ‖z‖U ≤ 1
C2 (1 + (ρ(z))2) , ‖z‖U > 1.
Therefore by the Lebesgue dominated convergence theorem we conclude
1
εE[ ∫ τε
t
∫U
|Φε1(s, z)− Φε
2(s, z)|ν(dz)ds
]= γ2(ε)→ 0 as ε→ 0.
Plugging this in (5.7) we thus have
− ε− γ1(ε)− γ2(ε) ≤ 1
εE[ ∫ τε
t
(ψt(t, x) + 〈b(x, aε(s)), Dψ(t, x)〉
− 〈x,A∗Dϕ(t, x) + h(‖x‖)A∗Dδ(t, x)〉 − f(x, aε(s))
+
∫U
[ψ(t, x+ γ(x, aε(s), z))− ψ(t, x)− 〈Dψ(t, x), γ(x, aε(s), z)〉] ν(dz)
)ds
]≤ C3Pµε(Ωµε
2 ) +1
εE[ ∫ t+ε
t
(ψt(t, x) + 〈b(x, aε(s)), Dψ(t, x)〉
− 〈x,A∗Dϕ(t, x) + h(‖x‖)A∗Dδ(t, x)〉 − f(x, aε(s))
+
∫U
[ψ(t, x+ γ(x, aε(s), z))− ψ(t, x)− 〈Dψ(t, x), γ(x, aε(s), z)〉] ν(dz)
)ds
]≤ C3(1− γε) + ψt(t, x)− 〈x,A∗Dϕ(t, x) + h(‖x‖)A∗Dδ(t, x)〉
+ supa∈Λ
〈b(x, a), Dψ(t, x)〉 − f(x, a)
+
∫U
[ψ(t, x+ γ(x, a, z))− ψ(t, x)− 〈Dψ(t, x), γ(x, a, z)〉] ν(dz)
. (5.11)
31
It remains to send ε→ 0.
The proof that V is a viscosity subsolution follows basically the same arguments and
is much easier since we choose any a ∈ Λ and we do everything for a fixed control process
a(·) = a and the solution X(·; t, x, a(·)).
Remark 5.5. With some additional effort the main results of the present paper (Theo-
rems 4.2 and 5.4) can be proved for HJB equation corresponding to the discounted gain
functionals:
E[∫ T
t
e−∫ st c(X(τ))dτf(x(s), a(s)) ds+ e−
∫ Tt c(X(τ))dτg(x(T ))
]under the condition that the discount function c is bounded from below, uniformly con-
tinuous in the ‖ · ‖−1 norm on bounded sets of H and has, say at most linear growth at
infinity. One can also assume that the functions f, g are only uniformly continuous in the
‖ ·‖−1 norm on bounded sets of H and have at most linear growth at infinity, uniformly in
a ∈ Λ. In this case the HJB equation (1.1) should be modified by subtracting a zero order
term c(x)u(t, x). Also the comparison theorem of [34] (Theorem 6.2 there) can be proved
under these assumptions if in addition assumption (4.8) of [34] is satisfied. This requires
a modification of its proof which amounts to reversing the order in which parameters ε, δ
go to 0 there, i.e. letting first ε→ 0 and then δ → 0. Moreover the proof of Lemma 5.5 in
[34] needs to be modified too which is more technical.
6 Specific Examples
6.1 HJB for controlled wave equation
We start with a problem of controlling mechanical structure in a random environment.
This example was also discussed in [34], Examples 2.1 and 4.3.
The problems is concerned with the optimal control of the following random equations
in the rectangle [0, T ] × O, where O is a bounded domain in Rd with regular boundary
∂O, and t ∈ [0, T ]:
∂2x(s, ξ)
∂s2= (∆x(s, ξ) + h1(x(s, ξ), a(s))) ds+ k1(x(s−, ξ), a(s))
∂L
ds(s), (6.12)
with initial and boundary conditions:
x(t, ξ) = x(ξ),∂x
∂t(t, ξ) = y(ζ), ξ ∈ O, x(s, ζ) = 0, s ∈ (t, T ), ξ ∈ O.
Here L is a Levy, scalar square integrable martingale, with the characteristic exponent
ψ(z) =
∫R(1− eizu + izu)ν(du), z ∈ R,
32
i.e.
L(s) =
∫ s
t
∫R\0
zπ(dτ, dz).
The position x(s, ξ) and velocity y(x, ξ) = ∂x∂s
(s, ξ) depend on the time and space
coordinates from (t, T )×O. The control process a(·) ∈ Ut, and takes values in a bounded
Polish space Λ. The problem is to minimize the cost functional
E[∫ T
t
∫Of1(x(s, ξ), a(s))dξds+
∫Og1(x(T, ξ))dξ
]for some functions f1 : R× Λ→ R, g1 : R→ R.
Denote the inner product and the norm in L2(O) respectively by 〈·, ·〉0 and ‖ · ‖0.
As in [34] we take as the state space
H =
H10 (O)×
L2(O)
equipped with the inner product
〈X, Y 〉 =⟨(−∆)1/2x, (−∆)1/2x
⟩0
+ 〈y, y〉0 , X =
(xy
), Y =
(xy
)∈ H.
and rewrite (6.12) as a stochastic equation for the pair X(s) :=(x(s)y(s)
),
dX(s) = [−AX(s) + b(X(s), a(s))] ds+∫R\0 γ(X(s−), a(s), z)π(ds, dz),
X(t) = X := ( xy ) ,
where A = − ( 0 I∆ 0 ) is a maximal monotone operator in H with the domain
D(A) =
H10 (O) ∩H2(O)
×H1
0 (O)
,
and
b(X, a)(ξ) :=
(0
h1(x(ξ), a)
), γ(X, a, z)(ξ) := z
(0
k(x, a)(ξ)
):= z
(0
k1(x(ξ), a)
).
We have A∗ = −A. The cost functional is then rewritten as
J(t,X; a(·)) = E[∫ T
t
f(X(s), a(s))dds+ g(X(T ))
],
where f(X, a) :=∫O f1(x(ξ), a)dξ, g(X) :=
∫O g1(x(ξ))dξ.
33
The operator
B =
((−∆)−1/2 0
0 (−∆)−1/2
)is bounded, compact, positive, self-adjoint in H, A∗B is bounded, and (3.1) holds with
any constant c0 ≥ 0. Moreover
‖X‖−1 =(‖(−∆)1/4x‖2
0 + ‖(−∆)−1/4y‖20
)1/2, X =
(xy
)∈ H,
see [34], Example 2.1.
Assume that the functions h1, k1 are uniformly continuous and are Lipschitz contin-
uous with respect to the first variable, uniformly with respect to the second one, and the
functions f1, g1 are uniformly continuous. Then the functions γ, b, f, g, and ρ(z) := |z|,satisfy assumptions (3.3)-(3.8) (see [34], Example 4.3).
Therefore, by our theorem, the value function of the control problem is a viscosity
solution of the following HJB equation:
ut(t,X)− 〈AX,Du(t,X)〉+ infa∈Λ
f(X, a) + 〈b(X, a), Du(t,X)〉
+
∫R
[u(t, x, y + k(x, a)z)− u(t, x, y)− z〈k(x, a), Dyu(t, x, y)〉0] ν(dz)
= 0
u(T,X) = g(X), t ∈ (0, T ), X =
(x
y
)∈ H.
In the equation we used D_y to denote the partial derivative with respect to the y variable,
and deliberately kept the (x, y) notation in the integral part. If in addition f and g are
bounded and Λ is compact then, by [34], the viscosity solution is unique among solutions
in BUC([0, T] × H₋₁).
Let us consider the case when the process L is a standard Wiener process subordinated
by an increasing Levy process with the Laplace exponent ϕ(λ), λ ≥ 0, and the Levy
measure ν₁. The process L is square integrable if

∫₀^{+∞} z² ν₁(dz) < +∞.
If the vector k is independent of the control parameter, the integral operator in the
equation can be identified with a specific pseudo-differential operator. Denote the integral
operator in the equation by I and the Levy measure of L by ν. The operator I acts on
functions of the velocity variable y ∈ L²(O). Assume that k is of norm one. We introduce
new coordinates (y₁, y₂) in the space of velocities y. Namely, we set

y₁ = 〈y, k〉₀,   y₂ = y − 〈y, k〉₀ k.
Then we have

Iv(y₁, y₂) = Iv(y) = ∫_R [ v(y + kz) − v(y) − 〈kz, D_y v(y)〉₀ ] ν(dz)
    = ∫_R [ v(y₁ + z, y₂) − v(y₁, y₂) − z (∂v/∂y₁)(y₁, y₂) ] ν(dz) = I₁ v(y₁, y₂),

where I₁ is the operator generating the process L, acting on functions of the y₁ variable.
Since the characteristic exponent of L is ϕ(|λ|²/2), λ ∈ R, the symbol of I₁ is −ϕ(|λ|²/2).
For more details see [37], page 59. In particular, for tempered stable subordinators with

ν₁(dz) = (c / z^{1+α}) e^{−γz} dz,  z > 0,

and c > 0, γ > 0, 0 ≤ α < 1, one has

ϕ(λ) = −c Γ(−α) [ (γ + λ)^α − γ^α ],  λ > 0,  if α ∈ (0, 1),
ϕ(λ) = c log(1 + λ/γ),  λ > 0,  if α = 0,
see [9], pages 115-116.
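As an illustrative numerical sanity check of the closed form for ϕ in the case α ∈ (0, 1) (the parameter values and code below are ours, not part of the original argument), one can compare it with the defining integral ϕ(λ) = ∫₀^{+∞} (1 − e^{−λz}) ν₁(dz), e.g. in Python:

import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

c, gam, alpha, lam = 1.3, 2.0, 0.5, 0.7   # illustrative parameters, 0 < alpha < 1

def integrand(z):
    # (1 - e^{-lam z}) times the tempered stable density c z^{-1-alpha} e^{-gam z}
    return (1.0 - np.exp(-lam * z)) * c * z ** (-1.0 - alpha) * np.exp(-gam * z)

numeric, _ = quad(integrand, 0.0, np.inf, limit=200)
closed_form = -c * Gamma(-alpha) * ((gam + lam) ** alpha - gam ** alpha)
print(numeric, closed_form)   # the two values should agree to quadrature accuracy

The agreement also confirms that ϕ(λ) ≥ 0, since Γ(−α) < 0 for α ∈ (0, 1).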
6.2 Black-Scholes-Barenblatt equation
The results and methods developed in the present paper, as well as in [34], apply to the
Black-Scholes-Barenblatt (BSB) equation for the so-called super-prices of derivatives on
the bond market. In particular, one can extend the results obtained in [19] from Gaussian
noise to Levy noise. Here we sketch the extension.
We assume, see [14, 19], that the bond price at moment t with maturity t + ξ, denoted
by P(t, ξ), is determined by the so-called forward rates r(t, ξ), ξ ≥ 0, by the formula

P(t, ξ) = e^{−∫₀^ξ r(t,η) dη},  t ≥ 0,  ξ ≥ 0.
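For instance, for a flat forward curve r(t, ·) ≡ r₀ this reduces to the usual constant-rate discount factor P(t, ξ) = e^{−r₀ξ}; conversely, the formula can be inverted as r(t, ξ) = −(∂/∂ξ) log P(t, ξ).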
The forward rates are estimated, continuously in time, by central banks, and various models
have been proposed for their evolution. Generalizing the so-called Heath-Jarrow-Morton
(HJM) model in the Musiela parametrization (HJMM) (see e.g. [3, 25]), we assume that
dr(s, ξ) = ( (∂r/∂ξ)(s, ξ) + F(s, ξ) ) ds + σ(s, ξ) dL(s),     (6.13)

where L is a Levy process with the Laplace exponent ψ = −J, where

J(z) = ∫_R (e^{−zu} − 1 + zu) ν(du),  z ∈ R.     (6.14)

(The Laplace exponent is defined by E[e^{−zL(t)}] = e^{−tψ(z)}.) The drift term F depends on
the volatility σ and the noise term as follows:

F(s, ξ) = (∂/∂ξ) J( ∫₀^ξ σ(s, η) dη ) = J'( ∫₀^ξ σ(s, η) dη ) σ(s, ξ).     (6.15)
Some assumptions must be made about the measure ν and the volatility σ for the above
formal calculations to make sense.
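For orientation (an illustrative choice of ν, not taken from the text): if ν = λδ₁, then by (6.14) and (6.18) below, J(z) = λ(e^{−z} − 1 + z) and J'(z) = λ(1 − e^{−z}), so (6.15) gives

F(s, ξ) = λ ( 1 − e^{−∫₀^ξ σ(s,η) dη} ) σ(s, ξ).

For comparison, the classical Gaussian HJM model corresponds formally to J(z) = z²/2, which recovers the familiar drift F(s, ξ) = σ(s, ξ) ∫₀^ξ σ(s, η) dη.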
If a bond derivative pays at a given time T the amount g(r(T)), then its price at
moment t < T, given by the so-called non-arbitrage pricing, should be equal to

v(t, x) = E[ e^{−∫_t^T r(s,0) ds} g(r(T)) ].
The price is a functional of the rate x(ξ), ξ ≥ 0, observed at the moment t of signing
the contract. Thus r(·, ·), entering (6.16), is a solution to (6.13) with the initial condition
r(t, ξ) = x(ξ), ξ ≥ 0. Since solutions of (6.13) may not be nonnegative, we will assume
(as was done in [19, 36]) that the price is equal to
v(t, x) = E[ e^{−∫_t^T r⁺(s,0) ds} g(r(T)) ].     (6.16)
Suppose that one expects the volatilities to belong to a set Λ of functions of ξ ≥ 0.
The so-called super-price, favorable for the seller of the derivative, can then be
defined as the supremum of the expressions (6.16) over all processes σ(s), s ∈ [t, T], taking
values in Λ. If we think of the processes σ(·) as controls and use the framework described
in this paper, this value, denoted by V(t, x), should be a solution of the associated HJB
equation:

V_t(t, x) − 〈Ax, DV(t, x)〉 − V(t, x) x⁺(0)
    + sup_{σ∈Λ} [ 〈b(σ), DV(t, x)〉 + ∫_R [ V(t, x + zσ) − V(t, x) − z 〈σ, DV(t, x)〉 ] ν(dz) ] = 0,

V(T, x) = g(x),   (t, x) ∈ (0, T) × H.     (6.17)
The scalar product 〈·, ·〉 is taken in the space H of functions defined on [0,+∞) in which
equation (6.13) is considered. Moreover, Ax = −∂x/∂ξ. In the above HJB equation, for a
function σ(ξ), ξ ≥ 0, the drift b(σ) is given by the formula
b(σ)(ξ) = J'( ∫₀^ξ σ(η) dη ) σ(ξ),   ξ ≥ 0,
where

J'(z) = ∫_R u (1 − e^{−zu}) ν(du).     (6.18)
Since J''(z) = ∫_R u² e^{−zu} ν(du) ≥ 0, J' is an increasing, concave function on the interval
where it is finite. It is convenient to take H = H^{1,γ}[0,+∞), γ > 0, the space of absolutely
continuous functions x for which

‖x‖²_{H^{1,γ}} = ∫₀^{+∞} e^{γξ} [ |x(ξ)|² + |x'(ξ)|² ] dξ < +∞.     (6.19)
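For orientation (an illustrative computation): the exponential curve x(ξ) = e^{−λξ} belongs to H^{1,γ}[0,+∞) precisely when 2λ > γ, with

‖x‖²_{H^{1,γ}} = (1 + λ²) ∫₀^{+∞} e^{(γ−2λ)ξ} dξ = (1 + λ²)/(2λ − γ),

while non-zero constant curves are excluded by the exponential weight e^{γξ}.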
We will denote by H^{1,γ}_+[0,+∞) the set of non-negative functions in H^{1,γ}[0,+∞). In
particular, if the possible volatilities σ are in H^{1,γ}_+[0,+∞) and the measure ν is concentrated
on the positive axis, the drift term is well defined. To ensure that equation (6.13) has
a well defined solution for an arbitrary, say bounded in H^{1,γ}_+[0,+∞), control set Λ, it is
enough to show that b is a continuous mapping from H^{1,γ}_+[0,+∞) into H. In the following
proposition we give sufficient conditions for that.
Proposition 6.1. Let Λ be a closed and bounded subset of H^{1,γ}_+[0,+∞). Assume that
ν is concentrated on [0,+∞) and ∫₀^{+∞} z² ν(dz) < +∞. Then b : Λ → H^{1,γ}_+([0,+∞)) is
uniformly continuous.
Proof. It follows from the assumptions that J' and J'' are bounded and continuous on
[0,+∞). Moreover, for x, y ∈ H^{1,γ}([0,+∞)) and ξ ≥ 0,

| ∫₀^ξ x(η) dη − ∫₀^ξ y(η) dη | ≤ ∫₀^ξ e^{−γη/2} e^{γη/2} |x(η) − y(η)| dη
    ≤ ( ∫₀^ξ e^{−γη} dη )^{1/2} ( ∫₀^ξ e^{γη} |x(η) − y(η)|² dη )^{1/2}
    ≤ (1/γ)^{1/2} ‖x − y‖_{H^{1,γ}}.     (6.20)
Thus, for fixed ξ, the functional x ↦ ∫₀^ξ x(η) dη is continuous on H^{1,γ}_+([0,+∞)). In addition,

‖b(x) − b(y)‖²_{H^{1,γ}} = ∫₀^{+∞} e^{γξ} ( J'(∫₀^ξ x(η) dη) x(ξ) − J'(∫₀^ξ y(η) dη) y(ξ) )² dξ
    + ∫₀^{+∞} e^{γξ} [ ( J''(∫₀^ξ x(η) dη) |x(ξ)|² − J''(∫₀^ξ y(η) dη) |y(ξ)|² )
        + J'(∫₀^ξ x(η) dη) x'(ξ) − J'(∫₀^ξ y(η) dη) y'(ξ) ]² dξ

    ≤ 2 [ ∫₀^{+∞} e^{γξ} ( J'(∫₀^ξ x(η) dη) )² (x(ξ) − y(ξ))² dξ
        + ∫₀^{+∞} e^{γξ} ( J'(∫₀^ξ y(η) dη) − J'(∫₀^ξ x(η) dη) )² |y(ξ)|² dξ ]

    + 4 [ ∫₀^{+∞} e^{γξ} ( J''(∫₀^ξ x(η) dη) )² ( |x(ξ)|² − |y(ξ)|² )² dξ
        + ∫₀^{+∞} e^{γξ} ( J''(∫₀^ξ x(η) dη) − J''(∫₀^ξ y(η) dη) )² |y(ξ)|⁴ dξ
        + ∫₀^{+∞} e^{γξ} ( J'(∫₀^ξ x(η) dη) )² (x'(ξ) − y'(ξ))² dξ
        + ∫₀^{+∞} e^{γξ} ( J'(∫₀^ξ x(η) dη) − J'(∫₀^ξ y(η) dη) )² |y'(ξ)|² dξ ].
Examining all the terms, we see that it is enough to prove the following lemma.
Lemma 6.2. If x, y ∈ H^{1,γ}([0,+∞)) then x, y are bounded and:

(i)   sup_{ξ≥0} |x(ξ)| ≤ 2 (1/γ)^{1/2} ‖x‖_{H^{1,γ}},

(ii)  ∫₀^{+∞} e^{γξ} |x(ξ)|⁴ dξ ≤ (4/γ) ‖x‖⁴_{H^{1,γ}},

(iii) ∫₀^{+∞} e^{γξ} ( |x(ξ)|² − |y(ξ)|² )² dξ ≤ (8/γ) ∫₀^{+∞} e^{γξ} (x(ξ) − y(ξ))² dξ ( ‖x‖²_{H^{1,γ}} + ‖y‖²_{H^{1,γ}} ).
Proof. The proof of (i) can be found, e.g., in [3], p. 164. From (i),

|x(ξ)|² ≤ (4/γ) ‖x‖²_{H^{1,γ}},  ξ ∈ [0,+∞),

therefore

e^{γξ} |x(ξ)|⁴ ≤ (4/γ) e^{γξ} |x(ξ)|² ‖x‖²_{H^{1,γ}},  ξ ∈ [0,+∞),

so (ii) follows after integration over [0,+∞). Finally,

∫₀^{+∞} e^{γξ} ( |x(ξ)|² − |y(ξ)|² )² dξ = ∫₀^{+∞} e^{γξ} (x(ξ) − y(ξ))² (x(ξ) + y(ξ))² dξ
    ≤ 2 ∫₀^{+∞} e^{γξ} (x(ξ) − y(ξ))² dξ ( sup_ξ |x(ξ)|² + sup_ξ |y(ξ)|² )
    ≤ (8/γ) ( ∫₀^{+∞} e^{γξ} (x(ξ) − y(ξ))² dξ ) ( ‖x‖²_{H^{1,γ}} + ‖y‖²_{H^{1,γ}} ).
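The three bounds are easy to test numerically; the following Python sketch (illustrative only, with sample functions chosen by us) checks them for x(ξ) = e^{−ξ}, y(ξ) = e^{−2ξ} and γ = 1:

import numpy as np
from scipy.integrate import quad

gam = 1.0
x, dx = (lambda s: np.exp(-s)), (lambda s: -np.exp(-s))
y, dy = (lambda s: np.exp(-2 * s)), (lambda s: -2 * np.exp(-2 * s))

def norm_sq(f, df):
    # weighted H^{1,gamma} norm squared: int_0^inf e^{gam s} (f(s)^2 + f'(s)^2) ds
    return quad(lambda s: np.exp(gam * s) * (f(s) ** 2 + df(s) ** 2), 0, np.inf)[0]

nx2, ny2 = norm_sq(x, dx), norm_sq(y, dy)
# (i): sup |x| <= 2 (1/gam)^{1/2} ||x||   (here sup |x| = x(0) = 1)
assert 1.0 <= 2 * (1 / gam) ** 0.5 * nx2 ** 0.5
# (ii): int e^{gam s} x^4 ds <= (4/gam) ||x||^4
assert quad(lambda s: np.exp(gam * s) * x(s) ** 4, 0, np.inf)[0] <= 4 / gam * nx2 ** 2
# (iii)
lhs = quad(lambda s: np.exp(gam * s) * (x(s) ** 2 - y(s) ** 2) ** 2, 0, np.inf)[0]
rhs = 8 / gam * quad(lambda s: np.exp(gam * s) * (x(s) - y(s)) ** 2, 0, np.inf)[0] * (nx2 + ny2)
assert lhs <= rhs
print("Lemma 6.2 (i)-(iii) hold for this sample")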
We recall that in this example we can take B = ((λI + A)(λI + A*))^{−1/2}, where λ > 0
(see e.g. [13], Section 3.1.1). It therefore follows that, under the assumptions of Proposition
6.1, the drift b and the diffusion coefficient γ(x, σ, z) := zσ satisfy the assumptions
of Section 3.1 with ρ(z) = z. Moreover, observe that the function c(x) = x⁺(0) is the
positive part of the bounded linear functional x ↦ x(0) on H. Thus it is weakly sequentially
continuous and hence uniformly continuous in the ‖·‖₋₁ norm on bounded sets of H. Therefore, if g is uniformly
continuous in the ‖·‖₋₁ norm on bounded sets of H (for instance if g is weakly sequentially
continuous), it follows from Remark 5.5 that the value function V (defined with the supremum
taken over all progressively measurable volatilities (controls) σ(·) with values in Λ and
over all reference probability spaces) is a viscosity solution of the HJB equation (6.17).
References
[1] D. Applebaum, On the infinitesimal generators of Ornstein-Uhlenbeck processes with
jumps in Hilbert space, Potential Anal. 26 (2007), no. 1, 79–100.
[2] D. Applebaum, Levy processes and stochastic calculus, Second edition, Cambridge
Studies in Advanced Mathematics, 116, Cambridge University Press, Cambridge,
2009.
[3] M. Barski and J. Zabczyk, Bond Markets with Levy Factors, book in preparation for
CUP.
[4] G. Barles, R. Buckdahn and E. Pardoux, Backward stochastic differential equations
and integral-partial differential equations, Stochastics Stochastics Rep. 60 (1997), no.
1-2, 57–83.
[5] J. Bertoin, Levy processes, Cambridge University Press, 1996.
[6] I. H. Biswas, On zero-sum stochastic differential games with jump-diffusion driven
state: a viscosity solution framework, SIAM J. Control Optim. 50 (2012), no. 4, 1823–
1858.
[7] I. H. Biswas, E. R. Jakobsen and K. H. Karlsen, Viscosity solutions for a system
of integro-PDEs and connections to optimal switching and control of jump-diffusion
processes, Appl. Math. Optim. 62 (2010), no. 1, 47–80.
[8] R. Buckdahn, Y. Hu, and J. Li, Stochastic representation for solutions of Isaacs'
type integral-partial differential equations, Stochastic Process. Appl. 121 (2011), no.
12, 2715–2750.
[9] R. Cont and P. Tankov, Financial modeling with jump processes, Chapman &
Hall/CRC Financial Mathematics Series, Chapman & Hall/CRC, Boca Raton, FL,
2004.
[10] M. G. Crandall and P. L. Lions, Viscosity solutions of Hamilton-Jacobi equations in
infinite dimensions. IV. Hamiltonians with unbounded linear terms, J. Funct. Anal.
90 (1990), no. 2, 237–283.
[11] G. Da Prato and J. Zabczyk, Stochastic equations in infinite dimensions, Second Edi-
tion Encyclopedia of Mathematics and its Applications, 152, Cambridge University
Press, Cambridge, 2014.
[12] S. N. Ethier and T. G. Kurtz, Markov processes, characterization and convergence,
Wiley Series in Probability and Statistics, John Wiley & Sons, Inc., New York, 1986.
[13] G. Fabbri, F. Gozzi and A. Swiech, Stochastic Optimal Control in Infinite Di-
mensions: Dynamic Programming and HJB Equations, with Chapter 6 by M.
Fuhrman and G. Tessitore, book in preparation. Chapters 1-3 are available at
http://people.math.gatech.edu/∼swiech/FGS-Chapters1-3.pdf.
[14] D. Filipovic, Consistency Problems for Heath–Jarrow–Morton Interest Rate Models.
LNIM 1760, Springer, 2001.
[15] N. Ikeda and S. Watanabe, Stochastic differential equations and diffusion processes,
second edition, North-Holland Mathematical Library, 24, North-Holland, Amster-
dam; Kodansha, Tokyo, 1989.
[16] Y. Ishikawa, Optimal control problem associated with jump processes, Appl. Math.
Optim. 50 (2004), no. 1, 21–65.
[17] O. Kallenberg, Foundations of modern probability, Second Edition, Springer, 2002.
[18] I. Kharroubi, H. Pham, Feynman-Kac representation for Hamilton-Jacobi-Bellman
IPDE, Ann. Probab. 43 (2015), no. 4, 1823–1865.
[19] D. Kelome and A. Swiech, Viscosity solutions of an infinite-dimensional Black-
Scholes-Barenblatt equation, Appl. Math. Optim. 47 (2003), no. 3, 253–278.
[20] S. Koike and A. Swiech, Representation formulas for solutions of Isaacs integro-PDE,
Indiana Univ. Math. J. 62 (2013), no. 5, 1473–1502.
[21] P. Lescot and M. Rockner, Perturbations of generalized Mehler semigroups and ap-
plications to stochastic heat equations with Levy noise and singular drift, Potential
Anal. 20 (2004), no. 4, 317–344.
[22] B. Oksendal and A. Sulem, Applied stochastic control of jump diffusions, Second
edition, Universitext, Springer, Berlin, 2007.
[23] M. Ondrejat, Uniqueness for stochastic evolution equations in Banach spaces, Dis-
sertationes Math. (Rozprawy Mat.) 426 (2004), 63 pp.
[24] S. Peszat, Levy-Ornstein-Uhlenbeck transition semigroup as second quantized opera-
tor, J. Funct. Anal. 260 (2011), no. 12, 3457–3473.
[25] S. Peszat and J. Zabczyk, Stochastic Partial Differential Equations with Levy Noise.
An evolution equation approach, Encyclopedia of Mathematics and its Applications,
113, Cambridge University Press, Cambridge, 2007.
[26] H. Pham, Optimal stopping of controlled jump diffusion processes: a viscosity solution
approach, J. Math. Systems Estim. Control 8 (1998), no. 1, 27 pp.
[27] E. Priola and S. Traca, On the Cauchy problem for non-local Ornstein-Uhlenbeck
operators, Nonlinear Anal. 131 (2016), 182–205.
[28] E. Priola and J. Zabczyk, Liouville theorems for non-local operators, J. Funct. Anal.
216 (2004), no. 2, 455–490.
[29] M. Renardy, Polar decomposition of positive operators and a problem of Crandall and
Lions, Appl. Anal. 57 (1995), no. 3–4, 383–385.
[30] L. C. G. Rogers and D. Williams, Diffusions, Markov processes, and martingales,
Cambridge University Press, Cambridge, 2000.
[31] H. M. Soner, Optimal control with state-space constraint. II, SIAM J. Control Optim.
24 (6) (1986), 1110–1122.
[32] H. M. Soner, Optimal control of jump-Markov processes and viscosity solutions, in
Stochastic differential systems, stochastic control theory and applications (Minneapo-
lis, Minn., 1986), 501–511, IMA Vol. Math. Appl., 10, Springer, New York, 1988.
[33] A. Swiech and J. Zabczyk, Large deviations for stochastic PDE with Levy noise, J.
Funct. Anal. 260 (2011), no. 3, 674–723.
[34] A. Swiech and J. Zabczyk, Uniqueness for integro-PDE in Hilbert spaces, Potential
Anal. 38 (2013), no. 1, 233–259.
[35] J. Yong and X. Y. Zhou, Stochastic controls. Hamiltonian systems and HJB equations,
Applications of Mathematics (New York), 43. Springer-Verlag, New York, 1999.
[36] J. Zabczyk, Bellman’s inclusions and excessive measures, Probab. Math. Statist. 21
(2001), no. 1, 101–122.
[37] J. Zabczyk, Topics in Stochastic Process, Quaderni, SNS, Pisa, 2004.