Geodesic Methods in Computer Vision and Graphics

Gabriel Peyré
www.numerical-tours.com

DESCRIPTION

Slides of a presentation at CMAP seminar, Ecole Polytechnique, Palaiseau, France, March 3rd 2011.

TRANSCRIPT

Page 1: Geodesic Method in Computer Vision and Graphics

Geodesic Methods in Computer Vision and Graphics

Gabriel Peyré

www.numerical-tours.com

Page 2: Geodesic Method in Computer Vision and Graphics

Overview

• Riemannian Data Modelling

• Numerical Computations of Geodesics

• Geodesic Image Segmentation

• Geodesic Shape Representation

• Geodesic Meshing

• Inverse Problems with Geodesic Fidelity

Page 3: Geodesic Method in Computer Vision and Graphics

Parametric Surfaces

Parameterized surface: u ∈ R² → ϕ(u) ∈ M.

[Figure: the parameter domain (u₁, u₂), the map ϕ, and the tangent vectors ∂ϕ/∂u₁, ∂ϕ/∂u₂ on the surface.]


Page 6: Geodesic Method in Computer Vision and Graphics

Parametric Surfaces

Parameterized surface: u ∈ R² → ϕ(u) ∈ M.

Curve in parameter domain: t ∈ [0, 1] → γ(t) ∈ D.

Geometric realization: γ̄(t) := ϕ(γ(t)) ∈ M.

For an embedded manifold M ⊂ Rⁿ, first fundamental form:

    I_ϕ = ( ⟨∂ϕ/∂uᵢ, ∂ϕ/∂uⱼ⟩ )_{i,j=1,2}.

Length of a curve:

    L(γ) := ∫₀¹ ||γ̄′(t)|| dt = ∫₀¹ √( γ′(t)ᵀ I_{γ(t)} γ′(t) ) dt.
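The length formula above is easy to check numerically. A minimal sketch, assuming an illustrative graph surface ϕ(u₁, u₂) = (u₁, u₂, sin(πu₁)sin(πu₂)) and the diagonal curve γ(t) = (t, t); both are hypothetical choices, not from the slides:

```python
import numpy as np

def phi(u1, u2):
    # illustrative graph surface z = sin(pi u1) sin(pi u2)
    return np.array([u1, u2, np.sin(np.pi * u1) * np.sin(np.pi * u2)])

def first_fundamental_form(u1, u2, h=1e-6):
    # I_phi[i, j] = <dphi/du_i, dphi/du_j>, via finite differences
    d1 = (phi(u1 + h, u2) - phi(u1 - h, u2)) / (2 * h)
    d2 = (phi(u1, u2 + h) - phi(u1, u2 - h)) / (2 * h)
    J = np.stack([d1, d2], axis=1)   # 3x2 Jacobian of phi
    return J.T @ J

def curve_length(gamma, dgamma, n=2000):
    # L(gamma) = int_0^1 sqrt(gamma'(t)^T I_{gamma(t)} gamma'(t)) dt
    t = np.linspace(0.0, 1.0, n)
    speeds = [np.sqrt(dgamma(s) @ first_fundamental_form(*gamma(s)) @ dgamma(s))
              for s in t]
    return float(np.mean(speeds))    # Riemann sum over [0, 1]

# diagonal of the parameter square: gamma(t) = (t, t), gamma'(t) = (1, 1)
L = curve_length(lambda t: (t, t), lambda t: np.array([1.0, 1.0]))
print(L > np.sqrt(2))   # on the surface the curve is longer than the flat diagonal
```

The surface length exceeds √2, the Euclidean length of the diagonal in the parameter domain, because the fundamental form accounts for the extra stretch of the embedding.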

Page 7: Geodesic Method in Computer Vision and Graphics

Riemannian Manifold

Riemannian manifold: M ⊂ Rⁿ (locally).
Riemannian metric: H(x) ∈ Rⁿˣⁿ, symmetric, positive definite.

Length of a curve γ(t) ∈ M:

    L(γ) := ∫₀¹ √( γ′(t)ᵀ H(γ(t)) γ′(t) ) dt.


Page 13: Geodesic Method in Computer Vision and Graphics

Riemannian Manifold

Riemannian manifold: M ⊂ Rⁿ (locally).
Riemannian metric: H(x) ∈ Rⁿˣⁿ, symmetric, positive definite.

Length of a curve γ(t) ∈ M:

    L(γ) := ∫₀¹ √( γ′(t)ᵀ H(γ(t)) γ′(t) ) dt.

Examples:
- Euclidean space: M = Rⁿ, H(x) = Idₙ.
- 2-D shape: M ⊂ R², H(x) = Id₂.
- Isotropic metric: H(x) = W(x)² Idₙ.
- Image processing: image I, W(x)² = (ε + ||∇I(x)||)⁻¹.
- Parametric surface: H(x) = I_x (first fundamental form).
- DTI imaging: M = [0, 1]³, H(x) = diffusion tensor.
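The image-processing metric W(x)² = (ε + ||∇I(x)||)⁻¹ can be sketched in a few lines; the step image and the value of ε here are assumed placeholders:

```python
import numpy as np

def isotropic_metric(I, eps=1e-2):
    # W(x)^2 = (eps + ||grad I(x)||)^(-1): small across edges, so
    # geodesics are attracted to regions of high gradient
    gy, gx = np.gradient(I)
    return 1.0 / (eps + np.sqrt(gx**2 + gy**2))

# toy image: a vertical step edge
I = np.zeros((64, 64))
I[:, 32:] = 1.0
W2 = isotropic_metric(I)
print(W2[32, 31] < W2[32, 5])   # the metric is smaller on the edge
```

Paths crossing the edge are cheap where W² is small, which is what makes such metrics useful for edge-following segmentation later in the talk.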

Page 14: Geodesic Method in Computer Vision and Graphics

Geodesic Distances

Geodesic distance metric over M ⊂ Rⁿ:

    d_M(x, y) = min_{γ(0)=x, γ(1)=y} L(γ).

Geodesic curve: γ(t) such that L(γ) = d_M(x, y).

Distance map to a starting point x₀ ∈ M: U_{x₀}(x) := d_M(x₀, x).

[Excerpt from the companion paper (ECCV-08 submission ID 1057):]

Fig. 1. Examples of Riemannian metrics (top row) and geodesic distances and curves (bottom row); panels: Euclidean, shape, isotropic, anisotropic, surface. The blue/red colormap indicates the geodesic distance to the starting point. From left to right: Euclidean (H(x) = Id₂ on Ω = [0, 1]²), planar domain (H(x) = Id₂ restricted to a planar domain Ω), isotropic (H(x) = W(x) Id₂ with W computed from the image), Riemannian manifold metric (H(x) is the structure tensor of the image, see equation (8)), and 3D surface (H(x) corresponds to the first fundamental form).

1.1 Riemannian Metric and Geodesic Distance

This paper considers 2D Riemannian manifolds defined over a compact planar domain Ω ⊂ R². At each point x ∈ Ω, one has a tensor H(x) ∈ R²ˣ², a symmetric positive matrix. This tensor field defines a local metric that makes it possible to measure the length of a piecewise smooth curve γ : [0, 1] → Ω as

    L(γ) := ∫₀¹ √( γ′(t)ᵀ H(γ(t)) γ′(t) ) dt.

In image processing and computer vision, the manifold is often the image domain Ω = [0, 1]² equipped with a metric H derived from the image f to process. In computer graphics and numerical analysis, one often deals with complicated planar domains Ω with holes and corners. Figure 1 shows some examples of Riemannian metrics.

An important issue is thus to design, for a specific application, a tensor field H(x) that encodes the important information about the problem to solve. Sections 3 and 4 show computer vision and graphics applications where the tensor field is derived either from a background image or from some user input.

At each location x ∈ Ω, the Riemannian tensor can be diagonalized

    H(x) = λ₁(x) e₁(x) e₁(x)ᵀ + λ₂(x) e₂(x) e₂(x)ᵀ  with 0 < λ₁ ≤ λ₂,  (1)

where e₁, e₂ are two orthogonal (un-oriented) eigenvector fields. A curve γ passing through location γ(t) = x with speed γ′(t) has a shorter local length if γ′(t) is collinear to e₁(x) rather than to another direction. Hence shortest paths (to be defined next) tend to be tangent to the direction field e₁.


Page 17: Geodesic Method in Computer Vision and Graphics

Anisotropy and Geodesics

Tensor eigen-decomposition:

    H(x) = λ₁(x) e₁(x) e₁(x)ᵀ + λ₂(x) e₂(x) e₂(x)ᵀ  with 0 < λ₁ ≤ λ₂.

[Figure: at x ∈ M, the unit ball {η : ηᵀ H(x) η ≤ 1} is an ellipse with semi-axes λ₁(x)^(−1/2) along e₁(x) and λ₂(x)^(−1/2) along e₂(x).]

Local anisotropy of the metric:

    α(x) = (λ₂(x) − λ₁(x)) / (λ₁(x) + λ₂(x)) ∈ [0, 1].

Geodesics tend to follow e₁(x).

[Excerpt from the companion paper (ECCV-08 submission ID 1057):]

Figure 2 shows examples of geodesic curves computed from a single starting point S = {x₁} in the center of the image Ω = [0, 1]² and a set of points on the boundary of Ω. The geodesics are computed for a metric H(x) whose anisotropy α(x) (defined in equation (2)) is increasing, thus making the Riemannian space progressively closer to the Euclidean space.

Fig. 2. Examples of geodesics for a tensor metric with an increasing anisotropy α (see equation (2) for a definition of this parameter); panels: image f, then α = .1, .2, .5, 1. The tensor field H(x) is computed from the structure tensor of f as defined in equation (8); its eigenvalue fields λᵢ(x) are then modified to impose the anisotropy α.

For the particular case of an isotropic metric H(x) = W(x) Id₂, the geodesic distance and the shortest path satisfy

    ||∇ₓU_S|| = W(x)  and  γ′(t) = −∇ₓU_S / ||∇ₓU_S||.  (6)

This corresponds to the Eikonal equation, which has been used to compute minimal paths weighted by W [1].

1.3 Numerical Computation of Geodesic Distances

To make all the previous definitions effective in practical situations, one needs a fast algorithm to compute the geodesic distance map U_S. The Fast Marching algorithm, introduced by Sethian [2], is a numerical procedure that efficiently solves, in O(n log(n)) operations, the discretization of equation (6) in the isotropic case. Several extensions of the Fast Marching have been proposed to solve equation (4) for a generic metric; see for instance Kimmel and Sethian [3] for triangulated meshes, and Spira and Kimmel [4] and Bronstein et al. [5] for parametric manifolds.

We use the Fast Marching method developed by Prados et al. [6], a numerical scheme that computes the geodesic distance over a generic parametric Riemannian manifold in 2D and 3D in O(n log(n)) operations. As any Fast Marching method, it computes the distance U_S by progressively propagating a front, starting from the initial points in S. Figure 3 shows an example of Fast Marching computation with an anisotropic metric (panels: image f, then α = 0, .5, .7, .95). The front propagates faster along the direction of the texture, because the Riemannian tensor is computed following equation (8) so that the principal direction e₁ aligns with the texture patterns.

Page 18: Geodesic Method in Computer Vision and Graphics

Overview

• Riemannian Data Modelling

• Numerical Computations of Geodesics

• Geodesic Image Segmentation

• Geodesic Shape Representation

• Geodesic Meshing

• Inverse Problems with Geodesic Fidelity


Page 21: Geodesic Method in Computer Vision and Graphics

Eikonal Equation and Viscosity Solution

Distance map: U(x) = d(x₀, x).

Theorem: U is the unique viscosity solution of

    ||∇U(x)||_{H(x)⁻¹} = 1  with  U(x₀) = 0,

where ||v||_A = √(v* A v).

Geodesic curve γ between x₁ and x₀: it solves

    γ′(t) = −η_t H(γ(t))⁻¹ ∇U_{x₀}(γ(t)),  γ(0) = x₁,  with η_t > 0.

Example: isotropic metric H(x) = W(x)² Idₙ:

    ||∇U(x)|| = W(x)  and  γ′(t) = −η_t ∇U(γ(t)).
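In the isotropic case, the descent equation γ′(t) = −η_t ∇U(γ(t)) can be discretized directly. A minimal sketch on a grid, assuming U is the exact Euclidean distance map to x₀ = (10, 10), used here as a stand-in for a Fast Marching output:

```python
import numpy as np

def extract_geodesic(U, x1, step=0.5, n_iter=400):
    # gradient descent on the distance map: gamma' = -eta * grad U(gamma),
    # with the gradient read at the nearest grid point
    gy, gx = np.gradient(U)
    path = [np.array(x1, dtype=float)]
    for _ in range(n_iter):
        i, j = int(round(path[-1][0])), int(round(path[-1][1]))
        g = np.array([gy[i, j], gx[i, j]])
        if np.linalg.norm(g) < 1e-9:
            break                      # reached (a grid point at) x0
        path.append(path[-1] - step * g / np.linalg.norm(g))
    return np.array(path)

n = 64
yy, xx = np.mgrid[0:n, 0:n]
U = np.sqrt((yy - 10.0)**2 + (xx - 10.0)**2)   # distance map to x0 = (10, 10)
path = extract_geodesic(U, (50, 50))
print(path[-1])   # lands near (10, 10)
```

With a Euclidean metric the extracted path is the straight segment from x₁ to x₀; with a Fast Marching distance map for a non-constant W it bends along cheap regions of the metric.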


Page 25: Geodesic Method in Computer Vision and Graphics

Discretization

Control (derivative-free) formulation: U(x) = d(x₀, x) is the unique solution of

    U(x) = Γ(U)(x) = min_{y ∈ B(x)} U(y) + d(x, y).

Manifold discretization: triangular mesh.
U discretization: linear finite elements.
H discretization: constant on each triangle.

    Uᵢ = Γ(U)ᵢ = min_{f=(i,j,k)} V_{i,j,k},

    V_{i,j,k} = min_{0 ≤ t ≤ 1} t Uⱼ + (1 − t) Uₖ + ||xᵢ − (t xⱼ + (1 − t) xₖ)||_{H_{ijk}}.

→ explicit solution (solving a quadratic equation).
→ on a regular grid: equivalent to upwind finite differences.
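On a regular grid with an isotropic metric W and spacing h, the minimization above reduces to the standard upwind quadratic (u − a)₊² + (u − b)₊² = (hW)², with a and b the values of the horizontal and vertical neighbors. A sketch of the explicit solve:

```python
import numpy as np

def eikonal_update(a, b, h, W):
    # a, b: current values of the horizontal / vertical neighbors
    a, b = min(a, b), max(a, b)
    if b - a >= h * W:
        # causal one-sided update: information comes from one direction only
        return a + h * W
    # two-sided update: larger root of the upwind quadratic
    return 0.5 * (a + b + np.sqrt(2 * (h * W) ** 2 - (b - a) ** 2))

print(eikonal_update(0.0, 0.0, 1.0, 1.0))   # two-sided: 1/sqrt(2) ≈ 0.707
print(eikonal_update(0.0, 10.0, 1.0, 1.0))  # one-sided: 1.0
```

The case split is exactly the causality condition: the two-sided root is used only when both neighbor values can influence Uᵢ.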


Page 28: Geodesic Method in Computer Vision and Graphics

Numerical Schemes

Fixed point equation: U = Γ(U).

Iterative schemes: Jacobi, Gauss-Seidel, accelerations. [Bornemann and Rasch 2006]

Γ is monotone: U ≤ V ⟹ Γ(U) ≤ Γ(V).
Γ is L∞ contracting: ||Γ(U) − Γ(V)||∞ ≤ ||U − V||∞.

Causality condition: ∀ j ∼ i, Γ(U)ᵢ ≥ Uⱼ.

→ The value of Uᵢ depends only on the neighbors {Uⱼ}ⱼ with Uⱼ ≤ Uᵢ.
→ Compute Γ(U)ᵢ using an optimal ordering.
→ Front propagation, O(N log(N)) operations.

Causality holds for:
- Isotropic H(x) = W(x)² Idₙ on a square grid.
- Surface (first fundamental form) with a triangulation with no obtuse angles.

Page 29: Geodesic Method in Computer Vision and Graphics

Front Propagation

State Sᵢ ∈ {Computed, Front, Far}.

Front ∂F_t, with F_t = {i : Uᵢ ≤ t}.

Algorithm (Far → Front → Computed), at each iteration:

1) Select the front point with minimum Uᵢ.
2) Move it from Front to Computed.
3) Update Uⱼ = Γ(U)ⱼ for its neighbors.
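The three steps above, combined with the upwind local update, give a complete Fast Marching sketch for an isotropic metric on a regular grid (spacing h = 1). The heap realizes the optimal ordering; this is a minimal illustration, not the scheme of Prados et al. used in the talk:

```python
import heapq
import numpy as np

def fast_marching(W, x0):
    # isotropic Fast Marching: W is the grid of weights, x0 the source
    n, m = W.shape
    U = np.full((n, m), np.inf)
    computed = np.zeros((n, m), dtype=bool)
    U[x0] = 0.0
    front = [(0.0, x0)]                          # heap of (value, point)
    while front:
        _, (i, j) = heapq.heappop(front)         # 1) min-U front point
        if computed[i, j]:
            continue
        computed[i, j] = True                    # 2) Front -> Computed
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            y, x = i + di, j + dj
            if 0 <= y < n and 0 <= x < m and not computed[y, x]:
                a = min(U[max(y - 1, 0), x], U[min(y + 1, n - 1), x])
                b = min(U[y, max(x - 1, 0)], U[y, min(x + 1, m - 1)])
                a, b = min(a, b), max(a, b)
                if b - a >= W[y, x] or b == np.inf:
                    v = a + W[y, x]              # one-sided upwind update
                else:                            # two-sided quadratic root
                    v = 0.5 * (a + b + np.sqrt(2 * W[y, x]**2 - (b - a)**2))
                if v < U[y, x]:                  # 3) update neighbors
                    U[y, x] = v
                    heapq.heappush(front, (v, (y, x)))
    return U

U = fast_marching(np.ones((32, 32)), (0, 0))
print(U[0, 10])   # along a grid axis with W = 1, the update chain gives 10.0
```

Each point leaves the heap once with its final value, which is what makes the overall cost O(N log N).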

Page 30: Geodesic Method in Computer Vision and Graphics

Fast Marching on an Image


Page 31: Geodesic Method in Computer Vision and Graphics

Fast Marching on Shapes and Surfaces


Page 32: Geodesic Method in Computer Vision and Graphics

Overview

• Riemannian Data Modelling

• Numerical Computations of Geodesics

• Geodesic Image Segmentation

• Geodesic Shape Representation

• Geodesic Meshing

• Inverse Problems with Geodesic Fidelity


Page 34: Geodesic Method in Computer Vision and Graphics

Isotropic Metric Design

Image-based potential: H(x) = W(x)² Id₂, W(x) = (ε + |f(x) − c|)^α.

Gradient-based potential: W(x) = (ε + ||∇ₓf||)^(−α).

[Panels: image f; metric W(x); distance U_{x₀}(x); geodesics.]


Page 36: Geodesic Method in Computer Vision and Graphics

Isotropic Metric Design: Vessels

Remove the background: f̃ = G_σ ⋆ f − f, with σ ≈ vessel width.

Potential: W = (ε + max(f̃, 0))^(−α).

[Panels: f; f̃; W; 3D volumetric datasets.]
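A sketch of this vessel potential; the Gaussian blur is a naive separable convolution, and σ, ε, α as well as the toy "vessel" image are assumed parameters:

```python
import numpy as np

def gaussian_blur(f, sigma):
    # naive separable Gaussian convolution (truncated at 3 sigma)
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    out = np.apply_along_axis(lambda v: np.convolve(v, g, mode="same"), 0, f)
    return np.apply_along_axis(lambda v: np.convolve(v, g, mode="same"), 1, out)

def vessel_metric(f, sigma=3.0, eps=1e-2, alpha=1.0):
    # f~ = G_sigma * f - f: positive inside dark, thin structures
    residual = gaussian_blur(f, sigma) - f
    return (eps + np.maximum(residual, 0.0)) ** (-alpha)

f = np.ones((64, 64))
f[:, 30:34] = 0.0                 # a dark vertical "vessel"
W = vessel_metric(f)
print(W[32, 31] < W[32, 5])       # the potential is small inside the vessel
```

Inside the dark tube the blurred image exceeds f, so the residual is positive and W is small there, which is exactly what attracts geodesics along the vessel.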

Page 37: Geodesic Method in Computer Vision and Graphics

Overview

• Riemannian Data Modelling

• Numerical Computations of Geodesics

• Geodesic Image Segmentation

• Geodesic Shape Representation

• Geodesic Meshing

• Inverse Problems with Geodesic Fidelity


Page 39: Geodesic Method in Computer Vision and Graphics

Bending Invariant Recognition

Shape articulations: [Zoopraxiscope, 1876]

Surface bendings: geodesic distances d_M(x₁, x₂) are preserved.
[Elad, Kimmel, 2003], [Bronstein et al., 2005]

Page 40: Geodesic Method in Computer Vision and Graphics

2D Shapes

2D shape: connected, closed compact set S ⊂ R², with piecewise-smooth boundary ∂S.

Geodesic distance in S for the uniform metric:

    d_S(x, y) := min_{γ ∈ P(x,y)} L(γ)  where  L(γ) := ∫₀¹ |γ′(t)| dt,

with P(x, y) the curves inside S joining x and y.

[Panels: shape S; geodesics.]


Page 43: Geodesic Method in Computer Vision and Graphics

Distribution of Geodesic Distances

Distribution of distances to a point x: {d_M(x, y)}_{y ∈ M}.

Extract statistical measures:

    a₀(x) = min_y d_M(x, y),
    a₁(x) = median_y d_M(x, y),
    a₂(x) = max_y d_M(x, y).

[Panels: histograms of the distance distribution for three points x, with the min, median and max descriptors a₀, a₁, a₂ marked.]
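Given a full pairwise geodesic distance matrix D (in practice computed by Fast Marching inside the shape), the three descriptors are one-liners; the line-graph metric below is a toy stand-in:

```python
import numpy as np

def distance_descriptors(D):
    # D[i, j] = d_M(x_i, x_j), symmetric with zero diagonal
    n = D.shape[0]
    off = D + np.diag(np.full(n, np.inf))   # exclude d(x, x) = 0 from the min
    a0 = off.min(axis=1)                    # a0(x) = min_y d_M(x, y)
    a1 = np.median(D, axis=1)               # a1(x) = median_y d_M(x, y)
    a2 = D.max(axis=1)                      # a2(x) = max_y d_M(x, y)
    return a0, a1, a2

# toy metric: 5 points on a line, d(x_i, x_j) = |i - j|
idx = np.arange(5)
D = np.abs(idx[:, None] - idx[None, :]).astype(float)
a0, a1, a2 = distance_descriptors(D)
print(a2)   # eccentricity is largest at the endpoints: [4. 3. 2. 3. 4.]
```

Because bendings preserve geodesic distances, these per-point statistics are (approximately) bending invariant, which is what makes them usable for retrieval.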

Page 44: Geodesic Method in Computer Vision and Graphics

Bending Invariant 2D Database

[Plots: average precision vs. average recall, and average recall vs. image rank, for 1D and 4D descriptors.]

Database: [Ling & Jacobs, PAMI 2007].
Our method: (min, med, max). Max only: [Ion et al. 2008].

→ State-of-the-art retrieval rates on this database.

Page 45: Geodesic Method in Computer Vision and Graphics

Perspective: Textured Shapes

Take into account a texture f(x) on the shape.

Compute a saliency field W(x), e.g. an edge detector.

Compute weighted curve lengths: L(γ) := ∫₀¹ W(γ(t)) ||γ′(t)|| dt.

[Panels: image f(x); ||∇f(x)|| (min to max); Euclidean vs. weighted geodesics.]

Page 46: Geodesic Method in Computer Vision and Graphics

Overview

• Riemannian Data Modelling

• Numerical Computations of Geodesics

• Geodesic Image Segmentation

• Geodesic Shape Representation

• Geodesic Meshing

• Inverse Problems with Geodesic Fidelity


Page 49: Geodesic Method in Computer Vision and Graphics

Meshing Images, Shapes and Surfaces

Triangulation (V, F): vertices V = {vᵢ}ᵢ₌₁ᴹ, faces F ⊂ {1, …, M}³.

Image approximation: f_M = Σₘ₌₁ᴹ λₘ ϕₘ, where ϕₘ(vᵢ) = δₘᵢ is affine on each face of F, and

    λ = argmin_μ ||f − Σₘ μₘ ϕₘ||.

Optimal (V, F): NP-hard. There exists (V, F) such that ||f − f_M|| ≤ C_f M⁻².

Domain meshing: conforming to a complicated boundary; capturing PDE solutions (boundary layers, shocks, …).

Page 50: Geodesic Method in Computer Vision and Graphics

Riemannian Sizing Field

Sampling {xᵢ}ᵢ∈I of a manifold.

Distance conforming: ∀ xᵢ ↔ xⱼ, d(xᵢ, xⱼ) ≈ ε.

Triangulation conforming: every triangle Δ = (xᵢ ↔ xⱼ ↔ xₖ) satisfies Δ ⊂ {x : ||x − x_Δ||_{T(x_Δ)} ≤ η}.

Building a triangulation ⟺ ellipsoid packing ⟺ global integration of a local sizing field.

[Figure: at x, the metric ellipse with semi-axes ∼ λ₁(x)^(−1/2) along e₁(x) and ∼ λ₂(x)^(−1/2) along e₂(x).]


Page 54: Geodesic Method in Computer Vision and Graphics

Geodesic Sampling

Sampling {xᵢ}ᵢ∈I of a manifold.

Farthest point algorithm [Peyré, Cohen, 2006]:

    x_{k+1} = argmax_x min_{0 ≤ i ≤ k} d(xᵢ, x)

→ distance conforming.

Geodesic Voronoi: Cᵢ = {x : ∀ j ≠ i, d(xᵢ, x) ≤ d(xⱼ, x)}.

Geodesic Delaunay connectivity: (xᵢ ↔ xⱼ) ⇔ (Cᵢ ∩ Cⱼ ≠ ∅).

→ triangulation conforming if the metric is graded.
→ geodesic Delaunay refinement.

[Panels: metric; sampling; Voronoi; Delaunay.]
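The farthest point rule is a few lines once a distance is available; this sketch uses the Euclidean distance on a point set as a stand-in for the geodesic distance d:

```python
import numpy as np

def farthest_point_sampling(points, k, start=0):
    # x_{k+1} = argmax_x min_{0 <= i <= k} d(x_i, x)
    samples = [start]
    dmin = np.linalg.norm(points - points[start], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dmin))                  # farthest from samples
        samples.append(nxt)
        dmin = np.minimum(dmin, np.linalg.norm(points - points[nxt], axis=1))
    return samples

# points on a segment [0, 1]: FPS first jumps to the far end, then the middle
pts = np.linspace(0, 1, 101)[:, None]
print(farthest_point_sampling(pts, 3))   # → [0, 100, 50]
```

Maintaining the running minimum `dmin` is what keeps each step linear in the number of candidate points; with a geodesic d, each step would instead re-run one Fast Marching propagation from the new sample.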


Page 56: Geodesic Method in Computer Vision and Graphics

Adaptive Meshing

[Panels: texture; metric; uniform vs. adaptive meshes, for an increasing number of samples.]


Page 60: Geodesic Method in Computer Vision and Graphics

Approximation Driven Meshing

Linear approximation f_M with M linear elements.
Minimize the approximation error ||f − f_M||_{Lᵖ}.

L∞-optimal metrics for smooth functions:
- Images: T(x) = |H(x)| (Hessian of f).
- Surfaces: T(x) = |C(x)| (curvature tensor).

For edges and textures: → use the structure tensor. [Peyré et al., 2008]
→ extension to handle boundary approximation. [Peyré et al., 2008]

[Panels: isotropic vs. anisotropic meshing; anisotropic triangulation vs. JPEG2000.]

Page 61: Geodesic Method in Computer Vision and Graphics

Overview

• Riemannian Data Modelling

• Numerical Computations of Geodesics

• Geodesic Image Segmentation

• Geodesic Shape Representation

• Geodesic Meshing

• Inverse Problems with Geodesic Fidelity

with G. Carlier, F. Santambrogio, F. Benmansour


Page 65: Geodesic Method in Computer Vision and Graphics

Variational Minimization with Metrics

Metric T(x) = W(x)² Id_d.

Geodesic distance: d_W(x, y) = min_{γ(0)=x, γ(1)=y} ∫₀¹ W(γ(t)) ||γ′(t)|| dt.

W → d_W(x, y) is concave.

Variational problem:

    min_{W ∈ C}  Σ_{i,j} E_{i,j}(d_W(xᵢ, xⱼ)) + R(W)

C: admissible metrics.
R: regularization (smoothness).
E_{i,j}: interaction functional.

Examples: traffic congestion, shape optimization, seismic imaging, …
    E_{ij}(d) = −ρ_{i,j} d (convex),  E_{ij}(d) = (d − d_{i,j})² (nonconvex).

Compute the gradient of W → d_W(x, y).

Page 66: Geodesic Method in Computer Vision and Graphics

Gradient with Respect to the Metric

[Excerpt from the companion paper:]

If γ is unique, this shows that ξ → d_ξ(x_s, x_t) is differentiable at ξ, and that its gradient δξ(x_s, x_t) is a measure supported along the curve γ. In the case where this geodesic is not unique, this quantity may fail to be differentiable. Yet, the map ξ → d_ξ(x_s, x_t) is anyway concave (as an infimum of quantities linear in ξ), and for each geodesic we get an element of the super-differential through Equation (1.9).

The extraction of geodesics is quite unstable, especially for metrics such that x_s and x_t are connected by many curves of length close to the minimum distance d_ξ(x_s, x_t). It is thus unclear how to discretize in a robust manner the gradient of the geodesic distance directly from the continuous definition (1.9). We propose in this paper an alternative method, where δξ(x_s, x_t) is defined unambiguously as a subgradient of a discretized geodesic distance. Furthermore, this discrete subgradient is computed with a fast Subgradient Marching algorithm.

Figure 1 shows two examples of subgradients, computed with the algorithm detailed in Section 3. Near a degenerate configuration, we can see that the subgradient δξ(x_s, x_t) might be located around several minimal curves.

Figure 1: On the left, δξ(x_s, x_t) and some of its iso-levels for ξ = 1. In the middle, a non-constant metric ξ(x) = 1/(1.5 − exp(−||c − x||)), where c is the center of the domain. On the right, an element of the superdifferential of the geodesic distance with respect to the metric shown in the middle.

Anisotropic metrics. The geodesic distance and its subgradient can be defined for a more general Riemannian metric ξ that depends both on the location γ(t) of the curve and on its local direction γ′(t)/|γ′(t)|. The algorithm presented in this paper extends to this more general setting, thus allowing the design of arbitrary anisotropic Riemannian metrics. This requires more advanced Fast Marching methods, such as the ones developed in [20, 5]; see also [13] for a related extension of our method to 3D meshes. We decided however to restrict our attention to the isotropic case, which has many practical applications.

1.4 Contributions

This paper proposes a projected subgradient descent (1.8) to minimize variational problems that are discretized versions of (1.1). The key ingredient and main contribution of the paper is the Subgradient Marching Algorithm, detailed in Section 3, which computes an element of the subgradient of the geodesic distance with respect to the metric. This algorithm follows the optimal ordering used by the Fast Marching, making the overall process only O(N² log(N)) to compute subgradients of the maps ξ → d_ξ(x_s, x_t) for a fixed x_s and all the grid points x_t.

[Panels: W(x); ∇d_W(x, y).]

Formal derivation: d_{W+εZ}(x, y) = d_W(x, y) + ε ⟨Z, ∇d_W(x, y)⟩ + o(ε), with

    ⟨Z, ∇d_W(x, y)⟩ = ∫₀¹ Z(γ(t)) dt,  γ: geodesic from x to y.
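The concavity of W → d_W(x, y), used throughout this section, can be checked numerically: on a discrete graph the distance is a minimum of functions linear in W. A toy check on a ring graph (an illustrative setting, not the paper's discretization):

```python
import numpy as np

def ring_distance(W, s, t):
    # d_W on a ring of n nodes: min over the two candidate paths
    # (clockwise / counterclockwise), each a sum linear in W
    n = len(W)
    cw = sum(W[(s + k) % n] for k in range(t - s))
    ccw = sum(W[(t + k) % n] for k in range(n - (t - s)))
    return min(cw, ccw)

rng = np.random.default_rng(0)
W1 = rng.uniform(0.5, 2.0, 8)
W2 = rng.uniform(0.5, 2.0, 8)
mid = ring_distance((W1 + W2) / 2, 0, 3)
avg = 0.5 * (ring_distance(W1, 0, 3) + ring_distance(W2, 0, 3))
print(mid >= avg - 1e-12)   # concavity: d at the average >= average of d
```

The inequality holds for any W1, W2, since a minimum of linear functions is concave; this is the discrete counterpart of the "infimum of quantities linear in ξ" argument in the excerpt above.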

Page 67: Geodesic Method in Computer Vision and Graphics

Gradient with Respect to the Metric

31

If γ is unique, this shows that ξ → dξε(xs, xt) is differentiable at ξ, and that its gradient

δξ(xs, xt) is a measure supported along the curve γ. In the case where this geodesic is not

unique, this quantity may fail to be differentiable. Yet, the map ξ → dξ(xs, xt) is anyway

concave (as an infimum of linear quantities in ξ) and for each geodesic we get an element

of the super-differential through Equation (1.9).

The extraction of geodesics is quite unstable, especially for metrics such that xs and xt

are connected by many curves of length close to the minimum distance dξ(xs, xt). It is thus

unclear how to discretize in a robust manner the gradient of the geodesic distance directly

from the continuous definition (1.9). We propose in this paper an alternative method,

where δξ(xs, xt) is defined unambiguously as a subgradient of a discretized geodesic dis-

tance. Furthermore, this discrete subgradient is computed with a fast Subgradient March-

ing algorithm.

Figure 1 shows two examples of subgradients, computed with the algorithm detailed

in Section 3. Near a degenerate configuration, we can see that the subgradient δξ(xs, xt)

might be located around several minimal curves.

xs

xt0

0.1

0.2

0.3

0.4

0.5

0.6

0.7

0.8

1

1.2

1.4

1.6

1.8

2

xs

xt0

0.1

0.2

0.3

0.4

0.5

0.6

0.7

Figure 1: On the left, δξ(xs, xt) and some of its iso-levels for ξ = 1. In the middle, a non

constant metric ξ(x) = 1/(1.5− exp(−c− x)), where c is the center of the domain. On

the right, an element of the superdifferential of the geodesic with respect to the metric

shown in the middle.

Anisotropic metrics. The geodesic distance and its subgradient can be defined for

more general Riemannian metric ξ that depends both on the location γ(t) of the curve

and on its local direction γ(t)/|γ(t)|. The algorithm presented in this paper extends

to this more general setting, thus allowing to design arbitrary anisotropic Riemannian

metric. This requires to use more advanced Fast Marching methods, such as the ones

developed in [20, 5], see also [13] for a related extension of our method to 3D meshes. We

decided however to restrict our attention to the isotropic case, that has many practical

applications.

1.4 Contributions

This paper proposes a projected subgradient descent (1.8) to minimize variational problems

that are discretized versions of (1.1). The key ingredient and the main contribution of

the paper is the Subgradient Marching Algorithm, detailed in Section 3, that computes

an element of the subgradient of the geodesic distance with respect to the metric. This

algorithm follows the optimal ordering used by the Fast Marching, making the overall

process only O(N2 log(N)) to compute subgradients of the maps ξ → dξ(xs, xt) for a fixed

xs and for all the grid points xt.

5

If γ is unique, this shows that ξ → dξε(xs, xt) is differentiable at ξ, and that its gradient

δξ(xs, xt) is a measure supported along the curve γ. In the case where this geodesic is not

unique, this quantity may fail to be differentiable. Yet, the map ξ → dξ(xs, xt) is anyway

concave (as an infimum of linear quantities in ξ) and for each geodesic we get an element

of the super-differential through Equation (1.9).

The extraction of geodesics is quite unstable, especially for metrics such that xs and xt

are connected by many curves of length close to the minimum distance dξ(xs, xt). It is thus

unclear how to discretize in a robust manner the gradient of the geodesic distance directly

from the continuous definition (1.9). We propose in this paper an alternative method,

where δξ(xs, xt) is defined unambiguously as a subgradient of a discretized geodesic dis-

tance. Furthermore, this discrete subgradient is computed with a fast Subgradient March-

ing algorithm.

Figure 1 shows two examples of subgradients, computed with the algorithm detailed

in Section 3. Near a degenerate configuration, we can see that the subgradient δξ(xs, xt)

might be located around several minimal curves.


Figure 1: On the left, δξ(xs, xt) and some of its iso-levels for ξ = 1. In the middle, a non-constant metric ξ(x) = 1/(1.5 − exp(−‖c − x‖)), where c is the center of the domain. On the right, an element of the superdifferential of the geodesic distance with respect to the metric shown in the middle.


[Figure: two metrics W(x) and the corresponding gradients ∇dW(x, y)]

W → dW(x, y) is concave.

Formal derivation: dW+εZ(x, y) = dW(x, y) + ε ⟨Z, ∇dW(x, y)⟩ + o(ε)

⟨Z, ∇dW(x, y)⟩ = ∫₀¹ Z(γ(t)) dt, where γ is the geodesic from x to y.

Problem: W → dW(x, y) is non-smooth if γ is not unique.
→ compute sup-differentials.

Page 68: Geodesic Method in Computer Vision and Graphics

Subgradient Marching

Discretized geodesic distance computed by FM: Ui ≈ dW(x0, xi)

Theorem: W ∈ R^N → Ui ∈ R is concave.

Page 69: Geodesic Method in Computer Vision and Graphics

Subgradient Marching

Discretized geodesic distance computed by FM: Ui ≈ dW(x0, xi)

Fast marching update: Ui ← u, where u = Γ(U)i ∈ R is the solution of

(u − Uj)² + (u − Uk)² = h² Wi²

with xj, xk the grid neighbors of xi.

Theorem: W ∈ R^N → Ui ∈ R is concave.

Page 70: Geodesic Method in Computer Vision and Graphics

Subgradient Marching

Discretized geodesic distance computed by FM: Ui ≈ dW(x0, xi)

Fast marching update: Ui ← u, where u = Γ(U)i ∈ R is the solution of

(u − Uj)² + (u − Uk)² = h² Wi²

with xj, xk the grid neighbors of xi.

Gradient update: ∇Ui ≈ ∇dW(x0, xi)

∇Ui ← (h² Wi δi + αj ∇Uj + αk ∇Uk) / (αj + αk),

where αj = Ui − Uj, αk = Ui − Uk, and δi(s) = δ(i − s) (Dirac).

Theorem: W ∈ R^N → Ui ∈ R is concave.

Page 71: Geodesic Method in Computer Vision and Graphics

Subgradient Marching

Discretized geodesic distance computed by FM: Ui ≈ dW(x0, xi)

Fast marching update: Ui ← u, where u = Γ(U)i ∈ R is the solution of

(u − Uj)² + (u − Uk)² = h² Wi²

with xj, xk the grid neighbors of xi.

Gradient update: ∇Ui ≈ ∇dW(x0, xi)

∇Ui ← (h² Wi δi + αj ∇Uj + αk ∇Uk) / (αj + αk),

where αj = Ui − Uj, αk = Ui − Uk, and δi(s) = δ(i − s) (Dirac).

Theorem: W ∈ R^N → Ui ∈ R is concave.

Theorem: ∇Ui ∈ R^N is a sup-gradient of W → Ui.

Complexity: O(N² log(N)) operations to compute all (∇Ui)i ∈ R^{N×N}.
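The two update rules above can be sketched in code. The following Python implementation is an illustrative sketch under our own assumptions (4-neighbor grid, dense gradient storage, a one-sided fallback when the two-sided update is not causal), not the authors' code:

```python
import heapq
import numpy as np

def subgradient_marching(W, h=1.0, source=(0, 0)):
    """Sketch of Subgradient Marching: Fast Marching on a 2D grid with
    isotropic weights W, propagating for every node both U ~ d_W(source, .)
    and a sup-gradient dU of the concave map W -> U.  Dense gradient
    vectors keep the code readable but limit it to small grids."""
    n, m = W.shape
    INF = np.inf
    U = np.full((n, m), INF)
    dU = np.zeros((n, m, n * m))      # dU[a, b] is a vector over all nodes
    U[source] = 0.0
    done = np.zeros((n, m), dtype=bool)
    heap = [(0.0, source)]

    def accepted(p):
        return 0 <= p[0] < n and 0 <= p[1] < m and done[p]

    while heap:
        _, (i, j) = heapq.heappop(heap)
        if done[i, j]:
            continue
        done[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if not (0 <= a < n and 0 <= b < m) or done[a, b]:
                continue
            # best accepted neighbor along each grid axis
            key = lambda p: U[p] if accepted(p) else INF
            px = min((a - 1, b), (a + 1, b), key=key)
            py = min((a, b - 1), (a, b + 1), key=key)
            Ux, Uy = key(px), key(py)
            hw = h * W[a, b]
            delta = np.zeros(n * m)
            delta[a * m + b] = 1.0    # Dirac delta_i at the current node
            if Ux < INF and Uy < INF and abs(Ux - Uy) <= hw:
                # two-sided update: solve (u - Uj)^2 + (u - Uk)^2 = h^2 W_i^2
                u = 0.5 * (Ux + Uy + np.sqrt(2 * hw ** 2 - (Ux - Uy) ** 2))
                ax, ay = u - Ux, u - Uy
                grad = (h * hw * delta + ax * dU[px] + ay * dU[py]) / (ax + ay)
            else:
                # one-sided fallback: u = min(Uj, Uk) + h W_i
                u0, p0 = (Ux, px) if Ux <= Uy else (Uy, py)
                u = u0 + hw
                grad = dU[p0] + h * delta
            if u < U[a, b]:
                U[a, b] = u
                dU[a, b] = grad
                heapq.heappush(heap, (u, (a, b)))
    return U, dU
```

Since W → dW is 1-homogeneous, the propagated gradients satisfy the Euler relation ⟨∇Ui, W⟩ = Ui, which gives a convenient sanity check on the recursion.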

Page 72: Geodesic Method in Computer Vision and Graphics

Landscape Design

max_{W ∈ C} Σ_{i,j} ρi,j dW(xi, xj),

C = { W : ∫Ω W(x) dx = λ, a ≤ W ≤ b }.

Page 73: Geodesic Method in Computer Vision and Graphics

Landscape Design

k = 100 k = 300 k = 500

Figure 9: Iterations ξ(k) computed for a domain Ω with a hole and with P = 5 landmarks.

Extension of the model. It is possible to modify the energy E defined in (4.3) to mix differently the distances between the points (xs)s. One can for instance minimize

Emin(ξ) = − Σs min_{t≠s} dξ(xs, xt).

This functional is the opposite of a minimum of concave functions, and hence Emin is a convex function. Its minimization forces each landmark to be maximally distant from its closest neighbors.

The subgradient of Emin is computed as

∇ξEmin = − Σs ∇ξUs(x_{t(s)}),

where, for each landmark xs, x_{t(s)} is the closest landmark according to the metric ξ:

t(s) = argmin_{t≠s} dξ(xs, xt).

The projected gradient descent (1.8) converges to a minimum of Emin.

Figure 10, left and center, shows how the metric ξ(k) evolves during the iterations of a projected gradient descent. The graph connecting each xs to its nearest neighbor x_{t(s)} is overlaid. The points xs are clustered on two sides of the domain, so that during the first iterations the graph only connects points within each side. At convergence, the metric is optimized so that points located on two different sides of the domain are also relatively close to each other; this is why the graph then also connects points located on different sides.
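Once the pairwise distances and per-pair subgradients have been produced by Subgradient Marching, assembling a subgradient of Emin reduces to one nearest-neighbor search per landmark. A small sketch (the data layout and names are our assumptions, not the paper's code):

```python
import numpy as np

def emin_subgradient(D, subgrads):
    # Assumed layout: D[s, t] = d_xi(x_s, x_t), and subgrads[s][t] is an
    # element of the subgradient of xi -> d_xi(x_s, x_t) (e.g. as produced
    # by Subgradient Marching).  Returns a subgradient of
    # Emin(xi) = -sum_s min_{t != s} d_xi(x_s, x_t).
    P = D.shape[0]
    g = np.zeros_like(np.asarray(subgrads[0][1], dtype=float))
    for s in range(P):
        # nearest landmark t(s) = argmin_{t != s} d_xi(x_s, x_t)
        t_s = min((t for t in range(P) if t != s), key=lambda t: D[s, t])
        g -= np.asarray(subgrads[s][t_s], dtype=float)
    return g

# toy usage with hand-made distances and constant "subgradient" vectors
D = np.array([[0., 1., 2.], [1., 0., 3.], [2., 3., 0.]])
subgrads = [[np.full(4, 10 * s + t) for t in range(3)] for s in range(3)]
g = emin_subgradient(D, subgrads)
```

Here t(0) = 1, t(1) = 0, t(2) = 0, so the toy output accumulates minus the three selected vectors.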

4.2 Traffic Congestion Equilibria

The simulation of a static traffic congestion is the computation of a Riemannian metric such that agents travel along geodesics between the points (xs)s. The metric is the unique minimizer of (1.1) for the linear interactions (1.4). The regularization J is a discretization of (1.5):

J(ξ) = (1/3) Σ_{(i,j)} |ξi,j|³.


max_{W ∈ C} Σ_{i,j} ρi,j dW(xi, xj),

C = { W : ∫Ω W(x) dx = λ, a ≤ W ≤ b }.

Sub-gradient descent: W^(ℓ+1) = Proj_C( W^(ℓ) + ηℓ Σ_{i,j} ρi,j ∇dW(xi, xj) )

Convergence: Σℓ ηℓ = +∞, Σℓ ηℓ² < +∞.

Page 74: Geodesic Method in Computer Vision and Graphics

k = 100 k = 300 k = 500

Figure 10: Iterations ξ(k) computed during the subgradient descent. The dashed line corresponds to the connection s → t(s) between nearest-neighbor points.

Each weight ws,t is the strength of the traffic between two landmarks xs and xt. The only constraint is that metrics should be positive:

C = { ξ ∈ R^N ; ξi,j ≥ 0 }.

The projection on this constraint set is simply

ΠC(ξ) = max(0, ξ),

and the subgradient descent (1.8) is guaranteed to converge to the solution of (1.1) if one uses for instance ρk = 1/k.

For this application, the subgradient of E is

∇ξE = j(ξ) − Σ_{s,t} ws,t ∇ξUs(xt),   (4.9)

where

j(ξ) = (ξ²_{i,j})_{(i,j)} ∈ R^N.

The subgradient ∇ξUs(xt) at ξ of the mapping ξ → dξ(xs, xt) is computed with the Subgradient Marching algorithm, Table 2, starting the front propagation at the point xs.
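The projected subgradient descent used here can be sketched generically. In the sketch below the energy, subgradient, and projection are placeholders of our own (the toy l1 energy is not from the paper); in the traffic application, grad_E would be assembled as in (4.9) from Subgradient Marching outputs and proj_C would be ΠC(ξ) = max(0, ξ):

```python
import numpy as np

def projected_subgradient_descent(grad_E, proj_C, xi0, n_iter=500):
    # Generic projected subgradient descent, in the spirit of (1.8):
    # grad_E(xi) returns any element of the subgradient of E at xi,
    # proj_C projects onto the constraint set, and the diminishing step
    # 1/k matches the rho_k = 1/k choice mentioned in the text.
    xi = proj_C(np.asarray(xi0, dtype=float))
    for k in range(1, n_iter + 1):
        xi = proj_C(xi - (1.0 / k) * grad_E(xi))
    return xi

# toy usage: minimize the l1 energy E(xi) = ||xi - 1||_1 over {xi >= 0}
grad = lambda xi: np.sign(xi - 1.0)     # a subgradient of the l1 energy
proj = lambda xi: np.maximum(0.0, xi)   # projection onto the positive orthant
xi_star = projected_subgradient_descent(grad, proj, np.full(4, 5.0))
```

The diminishing step is what makes the iteration robust to the non-smoothness: late iterates can only oscillate within a band of width 1/k around the minimizer.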

Numerical example. Figure 11 shows an example of congested metric with a complex domain Ω and four landmarks. The two landmarks x0, x1 are sources of traffic and x2, x3 are targets, so that w0,1 = w2,3 = 0, and we choose the other weights such that

w0,2 + w0,3 = 2(w1,2 + w1,3)   and   w0,2 / w0,3 = w1,2 / w1,3,

so that the traffic intensity going out from x0 is twice the one from x1. One can note the two hollows on each side of the river, which appear because of the inter-side and intra-side crossing traffic.


Landscape Design


max/min generalization: max_{W ∈ C} Σi min_{j≠i} dW(xi, xj).

Page 75: Geodesic Method in Computer Vision and Graphics

Traffic Congestion

Sources (xi)i and destinations (yj)j.

Traffic plan: distribution Q on the set of paths γ.

Traffic ratio xi → yj: Q{ γ : γ(0) = xi, γ(1) = yj } = ρi,j ≥ 0.

Page 76: Geodesic Method in Computer Vision and Graphics

Traffic Congestion

Sources (xi)i and destinations (yj)j.

Traffic plan: distribution Q on the set of paths γ.

Traffic ratio xi → yj: Q{ γ : γ(0) = xi, γ(1) = yj } = ρi,j ≥ 0.

Traffic intensity:

iQ(x) = lim_{ε→0} (1/|Bε(x)|) ∫ ( ∫₀¹ 1_{Bε(x)}(γ(t)) |γ′(t)| dt ) dQ(γ),

where Bε(x) is the ball of radius ε centered at x.

Page 77: Geodesic Method in Computer Vision and Graphics

Traffic Congestion

Sources (xi)i and destinations (yj)j.

Traffic plan: distribution Q on the set of paths γ.

Traffic ratio xi → yj: Q{ γ : γ(0) = xi, γ(1) = yj } = ρi,j ≥ 0.

Traffic intensity:

iQ(x) = lim_{ε→0} (1/|Bε(x)|) ∫ ( ∫₀¹ 1_{Bε(x)}(γ(t)) |γ′(t)| dt ) dQ(γ).

Congested metric: WQ(x) = ϕ(iQ(x)).

Page 78: Geodesic Method in Computer Vision and Graphics

Traffic Congestion

Sources (xi)i and destinations (yj)j.

Traffic plan: distribution Q on the set of paths γ.

Traffic ratio xi → yj: Q{ γ : γ(0) = xi, γ(1) = yj } = ρi,j ≥ 0.

Traffic intensity:

iQ(x) = lim_{ε→0} (1/|Bε(x)|) ∫ ( ∫₀¹ 1_{Bε(x)}(γ(t)) |γ′(t)| dt ) dQ(γ).

Congested metric: WQ(x) = ϕ(iQ(x)).

Wardrop equilibria: Q is distributed on geodesics for WQ:

Q{ γ : L_{WQ}(γ) = d_{WQ}(γ(0), γ(1)) } = 1,

where LW(γ) = ∫₀¹ W(γ(t)) |γ′(t)| dt and dW(x, y) = min_{γ(0)=x, γ(1)=y} LW(γ).

Page 79: Geodesic Method in Computer Vision and Graphics

Convex Formulation

Congested metric W = ϕ(iQ) solution of:

min_{W ≥ 0} ∫ ϕ̃(W(x)) dx − Σ_{i,j} ρi,j dW(xi, yj)

ϕ → ϕ̃ explicit; example: ϕ(i) = √i  →  ϕ̃(w) = w³/3.

Page 80: Geodesic Method in Computer Vision and Graphics

Convex Formulation

Congested metric W = ϕ(iQ) solution of:

min_{W ≥ 0} ∫ ϕ̃(W(x)) dx − Σ_{i,j} ρi,j dW(xi, yj)

ϕ → ϕ̃ explicit; example: ϕ(i) = √i  →  ϕ̃(w) = w³/3.

Unique solution if ϕ̃ is strictly convex.

[Figure 11 panels: (a) 3D view of ξ; (b) flat view of log(ξ), with marked points xs1, xs2, xt1, xt2]

Figure 11: Two sources and two targets, with a river and a bridge, in a symmetric configuration with asymmetric traffic weights.

4.3 Travel Time Tomography

A simple model of seismic data acquisition assumes that geodesic distances are collected between points (xs)s. Pairs of emitters and receivers are denoted as (xs, xt) for (s, t) ∈ Γ.

In our simplified tomography setting, the data acquisition (1.6) computes geodesic distances ds,t for each pair (s, t) ∈ Γ according to an unknown Riemannian metric ξ0.

Geodesic tomography inversion. The recovery is obtained by solving (1.1) with the data fidelity term (1.7) as interaction functional.

We use a discrete Sobolev regularization

J(ξ) = (µ/2) ‖grad ξ‖² = (µ/2) Σ_{(i,j)∈Ω} ‖grad_{i,j} ξ‖²,

where the operator grad is a finite difference discretization of the 2D gradient

grad_{i,j} ξ = (ξ_{i+1,j} − ξ_{i,j}, ξ_{i,j+1} − ξ_{i,j}),

with Neumann conditions on the boundary ∂Ω of the domain. The parameter µ controls the strength of the regularization and should be adapted to the number |Γ| of measurements and to the smoothness of ξ0.
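This discretization can be sketched directly (our illustration; the paper's exact boundary convention may differ): forward differences with Neumann handling, together with a divergence defined as −grad∗, so that ∆ = −grad∗ grad = div ∘ grad:

```python
import numpy as np

def grad2d(xi):
    # Forward-difference discretization of grad_{i,j} xi, with Neumann
    # boundary handling: differences across the boundary are set to zero.
    gx = np.zeros_like(xi)
    gy = np.zeros_like(xi)
    gx[:-1, :] = xi[1:, :] - xi[:-1, :]
    gy[:, :-1] = xi[:, 1:] - xi[:, :-1]
    return gx, gy

def div2d(gx, gy):
    # Discrete divergence chosen so that <grad xi, g> = -<xi, div g>,
    # i.e. div = -grad^*; then Delta = -grad^* grad = div(grad).
    d = np.zeros_like(gx)
    d[0, :] += gx[0, :]
    d[1:-1, :] += gx[1:-1, :] - gx[:-2, :]
    d[-1, :] -= gx[-2, :]
    d[:, 0] += gy[:, 0]
    d[:, 1:-1] += gy[:, 1:-1] - gy[:, :-2]
    d[:, -1] -= gy[:, -2]
    return d

def laplacian(xi):
    # Delta xi = div(grad xi), as used in the energy gradient mu * Delta xi
    return div2d(*grad2d(xi))
```

The adjoint relation ⟨grad u, g⟩ = −⟨u, div g⟩ is the property that makes the gradient of the Sobolev term equal to −µ∆ξ.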

We use the local and global constraints (4.4), where

ξ̲ = min_{(i,j)∈Ω} ξ0_{i,j},   ξ̄ = max_{(i,j)∈Ω} ξ0_{i,j},   and   λ = (1/|Ω|) Σ_{(i,j)∈Ω} ξ0_{i,j}.

The travel time recovery is thus obtained by minimizing

argmin_{ξ∈C} (1/2) Σ_{(s,t)∈Γ} (dξ(xs, xt) − ds,t)² + (µ/2) ‖grad ξ‖².   (4.10)


Algorithm: sub-gradient descent.

Γ-convergence: discrete solution → continuous solution.

Page 81: Geodesic Method in Computer Vision and Graphics

Traveltime Tomography

ξ0 ξ

Figure 12: Examples of travel time tomography recovery.

Subgradient descent recovery. The minimization problem (4.10) is non-convex, but a local minimizer ξ can be computed using the projected gradient descent (1.8). In this case, the gradient of the energy E at a metric ξ reads

∇ξE = µ∆ξ − Σ_{s,t} (dξ(xs, xt) − ds,t) ∇ξUs(xt),

where ∆ = −grad∗ grad is the Laplacian. The subgradient ∇ξUs(xt) of the mapping ξ → dξ(xs, xt) is computed with the Subgradient Marching algorithm, Table 2, starting the propagation from the point xs.

The subgradient descent (1.8) converges to a local minimum ξ of the problem (4.10).

Numerical examples. Figure 12 shows two examples of smooth metrics ξ0 recovered from travel time tomography measurements. In each case, we set ξ̄/ξ̲ = 1.3 and we use ξ(0)_{i,j} = λ as an initial flat metric.

For the first example, we use P = 100 points distributed evenly on the boundary of a square Ω, discretized at N = 150 × 150 points, and each point acts both as emitter and sensor.

For the second example, we use 50 emitter points (xs)_{s=0}^{49} distributed evenly on the boundary of a complicated domain Ω, and 150 sensors (xt)_{t=50}^{199} distributed randomly within the domain. Each emitter is connected to all the sensors, so that (s, t) ∈ Γ if and only if s < 50 and t ≥ 50.
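The connectivity of Γ in this second example is simple to reproduce (a trivial sketch; the variable name Gamma is our own):

```python
# 50 emitters (s < 50) and 150 sensors (t >= 50); every emitter is
# connected to every sensor, so (s, t) is in Gamma iff s < 50 and t >= 50.
Gamma = [(s, t) for s in range(50) for t in range(50, 200)]
```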


Seismic imaging: emitters (xi)i, receivers (yj)j.

Wave propagation: initial conditions localized around xi.

Geometric optics approximation: first hitting time = dW(xi, yj).

[Figure: wave fronts in the unknown medium W0, with emitters (xi)i and receivers (yj)j]

Page 82: Geodesic Method in Computer Vision and Graphics

Traveltime Tomography

min_W Σ_{i,j} |dW(xi, yj) − di,j|² + λ ∫ ‖∇W(x)‖² dx


Seismic imaging: emitters (xi)i, receivers (yj)j.

Wave propagation: initial conditions localized around xi.

Geometric optics approximation: first hitting time = dW(xi, yj).

Measurements: di,j = dW0(xi, yj) + noise.

Non-convex recovery.

[Figure: W0 (unknown) and W (recovered), with emitters (xi)i and receivers (yj)j]

Page 83: Geodesic Method in Computer Vision and Graphics

Conclusion

Riemannian tensors encode geometric features.
→ Size, orientation, anisotropy.

Page 84: Geodesic Method in Computer Vision and Graphics

Conclusion

Riemannian tensors encode geometric features.
→ Size, orientation, anisotropy.

Using geodesic curves: image segmentation.

Using geodesic distances: image and surface meshing.

Page 85: Geodesic Method in Computer Vision and Graphics


Conclusion

Riemannian tensors encode geometric features.
→ Size, orientation, anisotropy.

Using geodesic curves: image segmentation.

Using geodesic distances: image and surface meshing.

Variational minimization: optimization of the metric.
→ inverse problems,
→ surface optimization.