Detection of Several Obstacles in a Stokes Flow: A mixed approach
Matías GODOY CAMPBELL, Departamento de Ingeniería Matemática, Universidad de Chile
& Institut de Mathématiques de Toulouse, Université Paul Sabatier
Advisors: Fabien CAUBET (IMT UPS) - Carlos CONCA (DIM-CMM UChile)
Séminaire Équipe MOD - Université de Limoges - 13th November, 2015.
Matías GODOY (DIM UChile - IMT UPS) Limoges - 13th Nov. 2015 1 / 44
The inverse problem of object detection: Motivation
Figure: Can we measure ‘something’ on O ⊂ ∂Ω in order to determine the location of the object ω? What awesome things could be inside the bottle? A treasure map? A proof of the Riemann hypothesis?
Outline
1 Introduction
2 Topological Optimization (Definition; Main Result; Main Result Proof; Numerical Results)
3 A Combination with Geometrical Optimization (Definition and Results; Numerical Results)
4 Going Further
The inverse problem of object detection: General idea
Object detection: Basic Idea
We perform one (or several) measurement(s) of relevant quantities on a connected region O ⊂ ∂Ω of the boundary ∂Ω. (The quantities and the regions of measurement may differ; they depend on the problem.)
We minimize an ad-hoc cost functional which penalizes the error in the object's location and/or shape.
Figure: The initial domain and the same domain after the inclusion of an object
Mathematical Model: Posing the Detection Problem
Let us consider the case of an incompressible, stationary fluid:
Stokes system
−νΔu + ∇p = 0 in Ω \ ω
div u = 0 in Ω \ ω
u = f on ∂Ω
u = 0 on ∂ω
Our aim (at least for now) is to reconstruct the object ω* from a measurement on O ⊂ ∂Ω:
σ(u, p)n = g on O ⊂ ∂Ω,
where σ(u, p) = νD(u) − pI = ν(∇u + ∇uᵗ) − pI is the stress tensor.
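As a quick sanity check of the model (our illustration, not from the slides): plane Poiseuille flow u = (1 − y², 0) with pressure p = −2νx is an exact solution of the Stokes system, which finite differences confirm:

```python
import numpy as np

nu = 1.0
h = 1e-4  # finite-difference step

def u(x, y):
    # plane Poiseuille velocity: an exact Stokes solution in a channel
    return np.array([1.0 - y**2, 0.0])

def p(x, y):
    # matching pressure, so that -nu*Lap(u) + grad(p) = 0
    return -2.0 * nu * x

def momentum_residual(x, y):
    res = np.empty(2)
    for c in range(2):
        lap = (u(x + h, y)[c] + u(x - h, y)[c] + u(x, y + h)[c]
               + u(x, y - h)[c] - 4.0 * u(x, y)[c]) / h**2
        dp = ((p(x + h, y) - p(x - h, y)) / (2 * h) if c == 0
              else (p(x, y + h) - p(x, y - h)) / (2 * h))
        res[c] = -nu * lap + dp
    return res

def divergence(x, y):
    return ((u(x + h, y)[0] - u(x - h, y)[0])
            + (u(x, y + h)[1] - u(x, y - h)[1])) / (2 * h)

x0, y0 = 0.3, 0.4
print("momentum residual:", momentum_residual(x0, y0))
print("divergence       :", divergence(x0, y0))
```

Any exact Stokes pair would do here; Poiseuille flow is just the simplest channel profile.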
Mathematical Model: Posing the Detection Problem
The following result ensures that the problem is well defined:
Identifiability Result [Alvarez et al. 2005]
Let (ui, pi), i = 1, 2, be the solutions of the problems defined in Ω \ ωi for each corresponding ωi:
−νΔui + ∇pi = 0 in Ω \ ωi
div ui = 0 in Ω \ ωi
ui = f on ∂Ω
ui = 0 on ∂ωi
and let gi = σ(ui, pi)n on O. If g1 = g2, then ω1 = ω2.
Important hypothesis: the measurements gi are perfect (that is, without error). Also, we only need ∂Ω to be Lipschitz.
Mathematical Model: Posing the Detection Problem
By virtue of the previous result, we consider the following strategy:
Strategy
Minimize, over all admissible ‘objects’ ω, the Kohn-Vogelius functional
JKV(ω) = FKV(uN(ω), uD(ω)) = (ν/2) ∫_{Ω\ω} |D(uN(ω) − uD(ω))|² dx,
where the pairs (uN, pN) ∈ H¹(Ω \ ω) × L²(Ω \ ω) and (uD, pD) ∈ H¹(Ω \ ω) × L²₀(Ω \ ω) solve
(PN): −νΔuN + ∇pN = 0 in Ω \ ω, div uN = 0 in Ω \ ω, σ(uN, pN)n = g on O, uN = f on ∂Ω \ O, uN = 0 on ∂ω,
(PD): −νΔuD + ∇pD = 0 in Ω \ ω, div uD = 0 in Ω \ ω, uD = f on ∂Ω, uD = 0 on ∂ω.
Mathematical Model: Posing the Detection Problem
Notice that we have the following equivalence:
Inverse Problem as a Minimization Problem
Given the data g and f on O and ∂Ω respectively, to determine the position of the desired object we have to solve
(O): JKV(ω*) = min ω∈Dad JKV(ω).
We will use two different criteria to perform this minimization; each one responds to a different objective and therefore calls for a different technique.
References
[1] C. Alvarez, C. Conca, I. Fritz, O. Kavian and J. Ortega. Identification of immersed obstacles via boundary measurements. Inverse Problems. 21 (2005), 1531–1552.
[2] A. Ben Abda, M. Hassine, M. Jaoua and M. Masmoudi. Topological sensitivity analysis for the location of small cavities in Stokes flow. SIAM J. Control Optim. 48 (2010).
Topological Optimization
In our first approach, the minimization is guided by the requirement that we want to detect several objects without knowing their number a priori. To this end we use a tool called the ‘topological derivative’, which defines the way we perform this (topological) optimization:
Assumption
Our admissible domain space is restricted to objects of fixed shape and small size:
Dad = {ωz,ε = z + εω : z ∈ Ω, ω open with 0 ∈ ω, 0 < ε ≪ 1, ωz,ε ⊂⊂ Ω}
Topological Optimization
Topological optimization explores how a cost functional changes when we add an object (of prescribed shape) to the original domain. The objective is to obtain an expression of the form:
Topological asymptotic expansion
JKV(Ω \ ωz,ε) = JKV(Ω) + ξ(ε) · δJKV(z) + o(ξ(ε)), (1)
where ξ(ε) is a positive function of ε which goes to 0 as ε → 0. The term δJKV(z) is called the ‘topological derivative’ of JKV at z ∈ Ω.
The topological gradient δJKV(z) measures the cost of adding the object ωz,ε to the domain Ω. Since ξ(ε) is positive, the strategy should be:
Add ωz,ε where δJKV(z) is most negative.
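A small numeric aside (our illustration, not from the slides): anticipating the main result below, in 2D the rate is ξ(ε) = 1/(−log ε), which goes to 0 only logarithmically:

```python
import math

def xi(eps):
    """Topological rate xi(eps) = 1/(-log eps) from the 2D expansion (1)."""
    return 1.0 / (-math.log(eps))

for eps in (1e-1, 1e-2, 1e-4, 1e-8, 1e-16):
    print(f"eps = {eps:.0e}  ->  xi(eps) = {xi(eps):.4f}")
```

Squaring ε only halves ξ(ε); this slow decay is specific to 2D (in 3D the rate is polynomial in ε) and makes small obstacles hard to distinguish by size.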
Topological Optimization
Let us consider the following notation: Ωz,ε = Ω \ ωz,ε. From now on we focus on the 2D case. We consider problems (PD) and (PN) with ω = ωz,ε, so the solutions depend on ε and we write (uεN, pεN) and (uεD, pεD).
Main result
JKV(Ωz,ε) = JKV(Ω) + (4πν/(−log ε)) (|u0D(z)|² + |u0N(z)|²) + o(1/(−log ε)),
or equivalently: ξ(ε) = 1/(−log ε) and δJKV(z) = 4πν (|u0D(z)|² + |u0N(z)|²),
where the superscript 0 denotes the solutions of problems (PD) and (PN) in the whole domain Ω.
Topological asymptotic expansion proof
The proof consists of two steps:
1 Obtain an asymptotic expansion of uεD and uεN.
2 Estimate JKV(Ωε) − JKV(Ω).
The ideas of the proof will be presented for a centered object (z = 0); the general case follows by a change of variables.
Topological asymptotic expansion proof
The respective solutions uεD ∈ H¹(Ωz,ε) and uεN ∈ H¹(Ωz,ε) of Problems (PD) and (PN) admit the following asymptotic expansion (with ∗ = D and ∗ = N respectively):
uε∗(x) = u0∗(x) + (1/(−log ε)) (C∗(x) − U∗(x)) + o(1/(−log ε)), (3)
where (U∗, P∗) ∈ H¹(Ω) × L²₀(Ω) solves the following Stokes problem defined in the whole domain Ω:
−νΔU∗ + ∇P∗ = 0 in Ω
div U∗ = 0 in Ω
U∗ = C∗ on ∂Ω, (4)
with C∗(x) := −4πν E(x − z) u0∗(z), where E is the fundamental solution of the Stokes equations in R², given (with its associated pressure P) by
E(x) = (1/(4πν)) (−log ‖x‖ I + x xᵗ/‖x‖²), P(x) = x/(2π‖x‖²).
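Assuming the standard normalization above, the pair (E, P) can be sanity-checked by finite differences (our illustration): away from the origin, each column of E is divergence-free and satisfies the homogeneous Stokes equations with the corresponding component of P:

```python
import numpy as np

nu = 1.0  # viscosity

def E(x):
    # fundamental velocity tensor: E(x) = (1/(4 pi nu)) (-log|x| I + x x^T / |x|^2)
    r2 = x @ x
    return (-0.5 * np.log(r2) * np.eye(2) + np.outer(x, x) / r2) / (4 * np.pi * nu)

def P(x):
    # fundamental pressure: P(x) = x / (2 pi |x|^2)
    return x / (2 * np.pi * (x @ x))

h = 1e-4
steps = [np.array([h, 0.0]), np.array([0.0, h])]
x0 = np.array([0.7, 0.3])  # any point away from the singularity at 0

divs, residuals = [], []
for j in range(2):  # column j: flow generated by a point force in direction e_j
    divs.append(sum((E(x0 + steps[i])[i, j] - E(x0 - steps[i])[i, j]) / (2 * h)
                    for i in range(2)))
    for i in range(2):  # momentum residual: -nu * Lap(E_ij) + d_i P_j
        stencil = sum(E(x0 + s)[i, j] + E(x0 - s)[i, j] for s in steps) \
                  - 4.0 * E(x0)[i, j]
        dP = (P(x0 + steps[i])[j] - P(x0 - steps[i])[j]) / (2 * h)
        residuals.append(-nu * stencil / h**2 + dP)

print("max |div|     :", max(abs(d) for d in divs))
print("max |residual|:", max(abs(r) for r in residuals))
```

The same check fails at x0 = 0, where the logarithmic singularity of E sits; this is exactly the term that forces the 2D analysis onto Ω rather than an exterior problem.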
Topological asymptotic expansion proof
Proof Sketch: First of all, let us look at the heuristics behind an expression such as (3). In contrast with the classical 3D technique, the term E(x − z)u0D(z) has a logarithmic part and therefore tends to infinity at infinity; it can only be considered on Ω. Hence we cannot use, as in the 3D case, an approximation via an exterior problem. We follow instead the strategy proposed by Bonnaillie-Noël and Dambrine for the Laplace equation in the plane. Consider the solution (UD, PD) ∈ H¹(Ω) × L²₀(Ω) of Problem (4) with ∗ = D. The idea is to combine this solution with the function CD in order to build a proper corrector: we seek coefficients a(ε) and b(ε) such that the error rεD defined by
uεD(x) = u0D(x) + a(ε)CD(x) + b(ε)UD(x) + rεD(x)
is smaller than RεD := uεD − u0D.
Topological asymptotic expansion proof
Notice that the remainder rεD satisfies
−νΔrεD + ∇prεD = 0 in Ωε
div rεD = 0 in Ωε
rεD = −(a(ε) + b(ε))CD(x) on ∂Ω
rεD = −u0D(x) − a(ε)CD(x) − b(ε)UD(x) on ∂ωε,
where prεD is defined analogously with the pressure terms. Now, imposing rεD(x) = o(1) for x ∈ ∂Ω ∪ ∂ωε, we obtain a linear system in the unknowns (a(ε), b(ε)):
a(ε) + b(ε) = 0,
a(ε)CD(εX) + b(ε)UD(0) = −u0D(0),
which leads to b(ε) = −a(ε) and to an explicit expression for a(ε), of order 1/(−log ε) as ε → 0.
Topological asymptotic expansion proof
Summing up, for ∗ = D, N:
uε∗(x) = u0∗(x) + (1/(−log ε)) (C∗(x) − U∗(x)) + rε∗(x), with ‖rε∗‖_{1,Ωε} = o(1/(−log ε)).
The proof is technical and relies on obtaining explicit (with respect to ε) estimates for the Stokes system in Ωε.
Topological asymptotic expansion proof
We have: JKV(Ωε) − JKV(Ω) = AD + AN,
where AD and AN collect the contributions of the Dirichlet and Neumann solutions respectively; in particular AN contains the term (ν/2) ∫_{ωε} |D(u0N)|².
Proof: (A little bit tricky) integration by parts. Remark: we now have a decoupled expression!
Topological asymptotic expansion proof
Estimate JKV(Ωε) − JKV(Ω)
Let us estimate, for example, AN. First notice that (ν/2) ∫_{ωε} |D(u0N)|² = O(ε²) thanks to regularity. On the other hand, by the asymptotic expansion of uεN, the boundary term on ∂ωε splits into a remainder part and a part involving CN − UN. The first one is easily controlled by duality:
∫_{∂ωε} [σ(rεN, prεN)n] · u0N ≤ ‖σ(rεN, prεN)n‖_{−1/2,∂ωε} ‖u0N‖_{1/2,∂ωε}, which is of lower order.
Topological asymptotic expansion proof
For the main part we write
∫_{∂ωε} [σ(CN − UN, ΠN − PN)n] · u0N = ∫_{∂ωε} [σ(CN, ΠN)n] · u0N(0) − ∫_{∂ωε} [σ(UN, PN)n] · u0N(0) + lower-order terms.
We get the last equality because ∇u0N is uniformly bounded and
∫_{∂ωε} [σ(CN, ΠN)n] = −4πν u0N(0),
because of the definition of the pair (CN, ΠN) = (−4πν E u0N(0), −4πν P · u0N(0)) in terms of the fundamental solution (E, P) of the Stokes equations.
Topological asymptotic expansion proof
It remains to deal with ∫_{∂ωε} [σ(UN, PN)n] · u0N(0); by the definition of the pair (UN, PN) one checks that this contribution is of lower order. Gathering all the estimates we conclude
AN = (4πν/(−log ε)) |u0N(0)|² + o(1/(−log ε)),
and the analogous computation for AD finishes the proof.
Numerical Algorithm
Algorithm
1 Fix an initial shape ω0 = ∅, a maximum number of iterations M, and set i = 1 and k = 0.
2 Solve Problems (PD) and (PN) in Ω \ (∪_{j=0}^{k} ωj).
3 Compute the topological gradient δJKV using the formula from (1): δJKV(P) = 4πν (|u0D(P)|² + |u0N(P)|²) for all P ∈ Ω \ (∪_{j=0}^{k} ωj).
4 Seek P*_{k+1} := argmin {δJKV(P), P ∈ Ω \ (∪_{j=0}^{k} ωj)}.
5 If ‖P*_{k+1} − P_{j0}‖ < r_{k+1} + r_{j0} + 0.01 for some j0 ∈ {1, …, k}, where r_{j0} is the radius of ω_{j0} and r_{k+1} is defined such that there are no intersections, then set r_{j0} = 1.1 r_{j0} and go back to step 2 with i ← i + 1, while i ≤ M.
6 Set ω_{k+1} = B(P*_{k+1}, r_{k+1}), where r_{k+1} is defined such that there are no intersections.
7 While i ≤ M, go back to step 2 with i ← i + 1 and k ← k + 1.
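To make the loop concrete, here is a minimal Python sketch of the insertion step (our illustration). The field `delta_JKV` and the centres in `targets` are mock stand-ins: the true topological gradient would require the two Stokes solves of step 2, and the radius-growth rule of step 5 is omitted:

```python
import numpy as np

targets = [np.array([-0.25, 0.0]), np.array([0.25, 0.1])]  # hypothetical true centres

def delta_JKV(p):
    # Mock topological gradient: in the real algorithm this value comes from
    # the two Stokes solves; here it is simply most negative near the
    # hidden obstacles.
    return -sum(np.exp(-40.0 * np.sum((p - t) ** 2)) for t in targets)

# candidate grid in Omega = [-0.5, 0.5] x [-0.25, 0.25]
xs = np.linspace(-0.5, 0.5, 101)
ys = np.linspace(-0.25, 0.25, 51)
grid = [np.array([x, y]) for x in xs for y in ys]

obstacles = []        # inserted balls (centre, radius)
r_new = 0.08
for _ in range(4):    # at most M iterations
    candidates = [p for p in grid
                  if all(np.linalg.norm(p - c) > r + r_new for c, r in obstacles)]
    p_star = min(candidates, key=delta_JKV)
    if delta_JKV(p_star) > -0.5:   # no strongly negative point left: stop
        break
    obstacles.append((p_star, r_new))

print([c.round(2).tolist() for c, _ in obstacles])
```

On this mock field the loop inserts exactly two balls, one per hidden obstacle, and then stops because no strongly negative value of δJKV remains.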
Numerical Algorithm: Some tests
Framework of the examples: the domain Ω is the rectangle [−0.5, 0.5] × [−0.25, 0.25], g is measured on all faces except the one given by y = 0.25, and we take f = (1, 1)ᵗ. We start with a simple example:
Figure: Detection of ω*1, ω*2 and ω*3
Numerical Algorithm: Some tests
We continue with an example where things go (expectedly) wrong:
Figure: Bad detection of a ‘very big’ object
Numerical Algorithm: Some tests
And we now see an animated example of how our algorithm works, where the object to be detected has a geometry different from our default (circular) shape:
References
[1] J. Sokołowski and A. Żochowski. On the topological derivative in shape optimization. SIAM J. Control Optim., 37(4) (1999), 1251–1272.
[2] V. Bonnaillie-Noël and M. Dambrine. Interactions between moderately close circular inclusions: The Dirichlet-Laplace equation on the plane. Asymptotic Analysis. 84 (2013).
[3] F. Caubet, M. Dambrine, D. Kateb, C. Timimoun. Localization of small obstacles in Stokes flow. Inverse Problems. 28 (2012).
Geometrical Optimization: A complementary task
Up to now, we have been able to detect (small) objects and determine their number and relative locations. We would like to improve our algorithm in the following senses:
1 The numerical simulations suggest the need to improve the computation of the relative location of the objects.
2 The shape of the objects: our topological approach works with a fixed shape, and naturally we cannot expect the real objects to always be circular.
To this end, we will consider a useful and classical tool from geometrical optimization: the shape gradient.
Geometrical Optimization: A complementary task
Hadamard's boundary variation method describes variations of a reference domain (at least Lipschitz) of the form
Ω → Ωθ = (I + θ)(Ω), θ ∈ U,
where U is a set of admissible deformation fields, chosen smooth, small and supported away from ∂Ω. These choices make the perturbations of the domain Ω \ ω apply only to the object ω, i.e. we have
Ω \ ω → (Ω \ ω)θ = (I + θ)(Ω \ ω) = Ω \ ωθ.
Geometrical Optimization: A complementary task
Keeping this in mind, we define:
Shape differentiability
Given a smooth domain Ω, a functional J(Ω) of the domain is shape differentiable at Ω if the function
θ ∈ U ↦ J(Ωθ)
is Fréchet differentiable at 0, i.e. we have the following expansion around 0:
J(Ωθ) = J(Ω) + DJ(Ω)(θ) + o(‖θ‖_U).
DJ(Ω) is called the ‘shape derivative’ of the functional J.
Geometrical Optimization: A complementary task
For our functional, we have the following result:
Shape Gradient for our functional (Caubet et al. (REF))
For θ ∈ U, the Kohn-Vogelius cost functional JKV is differentiable at Ω \ ω in the direction θ, with
DJKV(Ω \ ω) · θ = −∫_{∂ω} (σ(w, q)n) · ∂n uD (θ · n) + (ν/2) ∫_{∂ω} |D(w)|² (θ · n), (5)
where (w, q) is defined by w := uD − uN and q := pD − pN.
We parametrize the boundary of the object ω in polar coordinates:
∂ω = {(x0 + r(t) cos t, y0 + r(t) sin t), t ∈ [0, 2π)}.
Taking into account the ill-posedness of the problem, we regularize by approximating the polar radius r by its truncated Fourier series
rN(t) := aN0 + Σ_{k=1}^{N} (aNk cos(kt) + bNk sin(kt)).
Geometrical Optimization: A complementary task
Then, the unknown shape is entirely defined by the coefficients (x0, y0, ai, bi). Hence, for k = 1, …, N, the corresponding deformation directions are, respectively,
V1 := θx0 := (1, 0)ᵗ, V2 := θy0 := (0, 1)ᵗ, V3 := θa0 := (cos t, sin t)ᵗ,
V3+k := θak := cos(kt) (cos t, sin t)ᵗ, V3+N+k := θbk := sin(kt) (cos t, sin t)ᵗ, t ∈ [0, 2π).
The gradient is then computed component by component using its characterization (formula (5)):
(∇JKV(Ω \ ω))_k = DJKV(Ω \ ω) · Vk, k = 1, …, 2N + 3.
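A minimal sketch of this parametrized descent (our illustration): the functional `J_mock`, a plain pointwise misfit to a hypothetical target shape, replaces JKV, so only the (2N + 3)-parameter representation and the gradient update are illustrated:

```python
import numpy as np

N = 2
theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)

def boundary(params):
    # params = [x0, y0, a0, a1..aN, b1..bN] -> sampled boundary points
    x0, y0, a0 = params[0], params[1], params[2]
    a = params[3:3 + N]
    b = params[3 + N:3 + 2 * N]
    r = a0 + sum(a[k] * np.cos((k + 1) * theta) + b[k] * np.sin((k + 1) * theta)
                 for k in range(N))
    return np.stack([x0 + r * np.cos(theta), y0 + r * np.sin(theta)], axis=1)

# hypothetical target shape (what J_KV would implicitly encode)
target = boundary(np.array([0.1, -0.05, 0.2, 0.03, 0.0, 0.0, 0.02]))

def J_mock(params):
    return 0.5 * np.mean(np.sum((boundary(params) - target) ** 2, axis=1))

params = np.array([0.0, 0.0, 0.15, 0.0, 0.0, 0.0, 0.0])  # initial circle
h, step = 1e-6, 1.0
for _ in range(200):
    # gradient component by component, one deformation direction at a time
    grad = np.array([(J_mock(params + h * e) - J_mock(params - h * e)) / (2 * h)
                     for e in np.eye(2 * N + 3)])
    params -= step * grad

print("final misfit:", J_mock(params))
print("recovered centre and mean radius:", params[:3])
```

Because the boundary is linear in the 2N + 3 coefficients, this mock problem is convex, which the real Kohn-Vogelius landscape need not be; that is one reason the topological step is used to supply a good initial shape.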
Mixed Approach Algorithm
Algorithm
1 Fix a number of iterations M and take the initial shape ω0 (which can have several connected components) given by the previous topological algorithm.
2 Solve problems (PD) and (PN) with ω = ωi.
3 Compute ∇JKV(Ω \ ωi) using formula (5).
4 Update the coefficients defining the shape: ωi+1 = ωi − αi ∇JKV(ωi).
5 Go back to step 2 while i < M.
Note: The number of parameters increases gradually during the algorithm.
Mixed Approach Algorithm: Results
For the unit disk Ω, with f = (n2, −n1)ᵗ and g measured on the whole boundary ∂Ω except the lower right quadrant, we have:
Figure: Detection of squares ω*1 and ω*2 with the combined approach (the initial shape is the one obtained after the “topological step”) and zoom on the improvement brought by the geometrical step for ω*2
References
[1] A. Henrot and M. Pierre. Variation et Optimisation de Formes. Springer, Berlin, 2005.
[2] M. Badra, F. Caubet and M. Dambrine. Detecting an obstacle immersed in a fluid by shape optimization methods. Math. Models Methods Appl. Sci., 21(10) (2011), 2069–2101.
Going Further: Data Completion Problem
Up to now, we have developed an algorithm which combines topological and geometrical optimization. In all our computations we assumed that the data u = f is known on the whole boundary ∂Ω, which in several scenarios is not the case. Our aim now is to develop a method which finds, in a reliable way, when we only have the data u = f on O ⊂ ∂Ω, a suitable completion of f on the rest of ∂Ω (naturally we cannot expect to find the exact one): this is the so-called data completion problem.
This part is joint work with F. CAUBET and J. DARDÉ.
Going Further: Data Completion Problem
For the sake of simplicity, let us present the Laplace case. We now consider the solutions of the following well-posed problems:
Considered systems in the data completion problem
(PN): −ΔuN = 0 in Ω \ ω, ∂n uN = gN on O, uN = 0 on ∂ω, uN = Ψ on ∂Ω \ O,
(PD): −ΔuD = 0 in Ω \ ω, uD = gD on O, uD = 0 on ∂ω, ∂n uD = φ on ∂Ω \ O.
And consider the Kohn-Vogelius functional JKV, now defined on the unknown data (φ, Ψ) ∈ X := H^{−1/2}(∂Ω \ O) × H^{1/2}(∂Ω \ O)/R:
JKV(φ, Ψ) := (1/2) ∫_{Ω\ω} |∇uN(Ψ) − ∇uD(φ)|².
Going Further: Data Completion Problem
Minimization Problem
Unfortunately, this problem is extremely ill-posed (exponentially ill-posed!). In order to perform the minimization we rely on Tikhonov regularization, which adds a penalization term, so we modify the functional to
JεKV(φ, Ψ) = JKV(φ, Ψ) + (ε/2) ‖(w(Ψ), v(φ))‖²_{H¹(Ω\ω)×H¹(Ω\ω)},
where w(Ψ) and v(φ) denote the solutions uN(Ψ) and uD(φ). Notice that this regularization makes JεKV coercive.
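A finite-dimensional analogue of this regularization (our illustration; the Hilbert matrix stands in for the exponentially ill-conditioned operator of the Cauchy problem):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12
# Hilbert matrix: a classical, severely ill-conditioned operator.
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_exact = np.ones(n)
delta = 1e-6
g = A @ x_exact + delta * rng.standard_normal(n)  # noisy data

x_naive = np.linalg.solve(A, g)  # unregularized inversion: unstable
eps = 1e-6                       # Tikhonov parameter
# coercive normal equations (A^T A + eps I) x = A^T g
x_tik = np.linalg.solve(A.T @ A + eps * np.eye(n), A.T @ g)

naive_err = np.linalg.norm(x_naive - x_exact)
tik_err = np.linalg.norm(x_tik - x_exact)
print("error without regularization:", naive_err)
print("error with Tikhonov         :", tik_err)
```

The parameter ε trades noise amplification (of order δ/√ε) against the regularization bias; this trade-off is exactly what the choice ε(δ) below balances.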
Going Further: Data Completion Problem
Minimization of the Regularized Problem
min_{(φ,Ψ)∈X} JεKV(φ, Ψ). Notation: (φε, Ψε) = argmin_{(φ,Ψ)∈X} JεKV(φ, Ψ).
We have proved several results for this problem; the most interesting ones are:
1 For ‘perfect and compatible’ data (gD, gN), we have
uD(φε) → u_exact in H¹(Ω \ ω) as ε → 0.
2 In the case of noisy data (gδD, gδN) such that ‖(gD − gδD, gN − gδN)‖ ≤ 2δ, the approximation error is controlled by a term of order δ/√ε.
This result allows us to choose ε(δ) so as to have convergence even with noisy data!
References
[1] H. Engl, M. Hanke, A. Neubauer. Regularization of Inverse Problems. Kluwer Academic Publishers, 2000.
[2] F. Ben Belgacem and H. El Fekih. On Cauchy's problem: I. A variational Steklov–Poincaré theory. Inverse Problems, (21), 2005.
[3] M. Azaïez, F. Ben Belgacem and H. El Fekih. On Cauchy's problem: II. Completion, regularization and approximation. Inverse Problems, (22), 2006.
Merci pour votre attention
Thank you for your attention
Gracias por su atención