
Robust combinatorial optimization with variable uncertainty

Michael Poss

Heudiasyc UMR CNRS 7253, Université de Technologie de Compiègne

17th Aussois Combinatorial Optimization Workshop

M. Poss (Heudiasyc) Variable uncertainty Aussois 1 / 31

Outline

1 Robust optimization

2 Variable budgeted uncertainty

3 Cost uncertainty



Combinatorial optimization under uncertainty

\min \sum_{i=1}^{n} c_i x_i

\text{s.t.} \sum_{j=1}^{n} a_{ij} x_j \le b_i, \quad i = 1, \dots, m

x \in \{0,1\}^n

Suppose that the parameters (a, b, c) are uncertain:

They vary over time

They must be predicted from historical data

They cannot be measured with enough accuracy

...

Let’s do something clever (and useful)!


How much do we know?

A lot (stochastic programming) ⇔ A little (robust programming)

Robust pr. Uncertain parameters are merely assumed to belong to an uncertainty set U ⇒ one wishes to optimize some worst-case objective over the uncertainty set.

Stochastic pr. Uncertain parameters are precisely described by probability distributions ⇒ one wishes to optimize some expectation, variance, Value-at-Risk, …

Intermediary models exist: distributionally robust optimization, ambiguous chance-constrained programming.


When do we take decisions?

Now All decisions must be taken before the uncertainty is known with precision ⇒ probability constraints, (static) robust optimization.

Delayed Some decisions may be delayed until the uncertainty is revealed ⇒ multi-stage stochastic programming, adjustable robust optimization.


Robust combinatorial optimization

\min \sum_{i=1}^{n} c_i x_i

\text{s.t.} \sum_{j=1}^{n} a_{ij} x_j \le b_i, \quad i = 1, \dots, m, \ \forall a_i \in U_i

x \in \{0,1\}^n

The linear relaxation of this problem is tractable if U_i is defined by conic constraints:

U_i = \{a_i \in \mathbb{R}^n : u\, a_i - v \in K\}.

In particular, polyhedra and polytopes are nice (K = \mathbb{R}^n_+).


Feasibility set

\sum_i a_i x_i \le b \ \ \forall a \in U \quad\Longleftrightarrow\quad \exists\, \alpha \ge 0 : \ \sum_j c_j \alpha_j \le b, \ \ \sum_j u_{ji} \alpha_j \ge x_i, \ i = 1, \dots, n

The feasibility set of the constraint is a polyhedron (thus convex)!

A (very) popular polyhedral uncertainty set is (Bertsimas and Sim, 2004):

U^\Gamma := \left\{ a \in \mathbb{R}^n : a_i = \bar a_i + \delta_i \hat a_i, \ -1 \le \delta_i \le 1, \ \sum_i |\delta_i| \le \Gamma \right\}.

Main reasons for popularity:

Nice computational properties for MIP and combinatorial problems.

Intuitive interpretation.

Probabilistic interpretation.

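The worst case over U^Γ has a simple combinatorial form: for a fixed binary x, the adversary puts full deviation on the Γ items with the largest \hat a_i x_i, plus a fractional pick when Γ is not integral. A minimal sketch of that evaluation (function and data names are illustrative, not from the talk):

```python
def worst_case_lhs(x, a_bar, a_hat, gamma):
    """Worst-case value of sum_i a_i x_i over the budgeted set U^Gamma,
    with a_i = a_bar_i + delta_i * a_hat_i, |delta_i| <= 1, sum |delta_i| <= gamma.
    The inner maximum selects the gamma largest deviations a_hat_i * x_i
    (fractionally for the last one when gamma is not integral)."""
    nominal = sum(ab * xi for ab, xi in zip(a_bar, x))
    devs = sorted((ah * xi for ah, xi in zip(a_hat, x)), reverse=True)
    full, frac = int(gamma), gamma - int(gamma)
    worst = sum(devs[:full])
    if full < len(devs):
        worst += frac * devs[full]
    return nominal + worst
```

With x = (1, 1, 0), nominal costs (1, 2, 3), deviations (0.5, 1.0, 2.0) and Γ = 1, the adversary spends the whole budget on item 2, giving 3 + 1 = 4.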


Consider a vehicle routing problem with uncertain travel times t and time limit T. For simplicity, we suppose t ∈ U^Γ for Γ = 5.

Consider two robust-feasible routes x^1 and x^2: \|x^1\|_1 = 3 and \|x^2\|_1 = 10.

Because x^1 and x^2 are robust-feasible:

\sum_{i : x^1_i = 1} t_i \le T \ \ \forall t \in U^\Gamma, \quad\text{and}\quad \sum_{i : x^2_i = 1} t_i \le T \ \ \forall t \in U^\Gamma,

which becomes (since Γ = 5 ≥ \|x^1\|_1 = 3, all deviations can peak simultaneously for x^1)

\sum_{i : x^1_i = 1} (\bar t_i + \hat t_i) \le T, \quad\text{and}\quad \sum_{i : x^2_i = 1} t_i \le T \ \ \forall t \in U^\Gamma.

For any probability distribution for t supported on the box:

P\left( \sum_{i : x^1_i = 1} t_i > T \right) = 0.

If x^2 is not robust-feasible for Γ = 10, there exist probability distributions with

P\left( \sum_{i : x^2_i = 1} t_i > T \right) > 0.


Outline

1 Robust optimization

2 Variable budgeted uncertainty

3 Cost uncertainty


Robust optimization and probabilistic constraint

Let a_i be random variables and \varepsilon > 0. The chance constraint

P\left(\sum_i a_i x_i > b\right) \le \varepsilon \qquad (1)

leads to very difficult optimization problems in general.

In some situations, we know that (1) can be approximated by

\sum_i a_i x_i \le b \quad \forall a \in U \qquad (2)

for a properly chosen U. These approximations are conservative: any x feasible for (2) is feasible for (1). We must balance conservatism and protection cost ⇒ devise good protection sets U.

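The conservatism of (2) can also be observed numerically: a robust-feasible x never violates the constraint under any distribution supported on the uncertainty box, while a tighter b may leave positive violation probability. A Monte Carlo sanity-check sketch, standard library only (data and names are hypothetical):

```python
import random

def violation_probability(x, a_bar, a_hat, b, trials=20000, seed=0):
    """Monte Carlo estimate of P(sum_i a_i x_i > b) when each a_i is
    independent and uniform on [a_bar_i - a_hat_i, a_bar_i + a_hat_i]
    (a symmetric distribution, as in the Bertsimas-Sim setting)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        lhs = sum((ab + rng.uniform(-1.0, 1.0) * ah) * xi
                  for ab, ah, xi in zip(a_bar, a_hat, x))
        if lhs > b:
            hits += 1
    return hits / trials
```

For x = (1, 1) with \bar a = (1, 1), \hat a = (0.5, 0.5): against b = 3 (the full worst case) the estimated violation probability is 0, while against the nominal b = 2 roughly half the samples violate.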

Robust optimization and probabilistic constraint

What about U^Γ?

Let a_i be random variables, independently and symmetrically distributed in [\bar a_i - \hat a_i, \bar a_i + \hat a_i]. Bertsimas and Sim (2004) prove that if a vector x satisfies the robust constraint

\sum_i a_i x_i \le b \quad \forall a \in U^\Gamma,

then it also satisfies the probabilistic constraint

P\left(\sum_i a_i x_i > b\right) \le \exp\left(\frac{-\Gamma^2}{2n}\right).


Something is wrong ...

From

P\left(\sum_i a_i x_i > b\right) \le \exp\left(\frac{-\Gamma^2}{2n}\right),

we see that choosing \Gamma = (-2\ln(\varepsilon))^{1/2}\, n^{1/2} yields

P\left(\sum_i a_i x_i > b\right) \le \varepsilon.

For many problems (network design, assignment, …), optimal (or feasible) vectors satisfy \|x\|_1 < n^{1/2}
⇒ \Gamma > n^{1/2} already for \varepsilon = 0.5
⇒ for these problems, protecting with probability 0.5 yields protection with probability 0!
⇒ overprotection!


Multifunctions

It is easy to see that the bound from Bertsimas and Sim can be adapted to

P\left(\sum_i a_i x_i > b\right) \le \exp\left(\frac{-\Gamma^2}{2\|x\|_1}\right).

⇒ Γ can be reduced when \|x\|_1 is small.

Let's use multifunctions!

Define

\alpha_\varepsilon(x) = \left(-2\ln(\varepsilon)\,\|x\|_1\right)^{1/2}.

Let x^* be given. If

\sum_i a_i x^*_i \le b \quad \forall a \in U^{\alpha_\varepsilon(x^*)},

then

P\left(\sum_i a_i x^*_i > b\right) \le \exp\left(\frac{-\alpha_\varepsilon(x^*)^2}{2\|x^*\|_1}\right) = \varepsilon.

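The multifunction \alpha_\varepsilon is a one-liner, and by construction the adapted bound evaluated at \alpha_\varepsilon(x) collapses to exactly \varepsilon. A sketch (the function name is illustrative):

```python
import math

def alpha_eps(x, eps):
    """Variable budget alpha_eps(x) = (-2 ln(eps) * ||x||_1)^(1/2)
    for a binary vector x, so that exp(-alpha^2 / (2 ||x||_1)) = eps."""
    card = sum(x)  # ||x||_1 when x is in {0,1}^n
    return math.sqrt(-2.0 * math.log(eps) * card)
```

For \|x\|_1 = 3 and \varepsilon = 0.01 this gives \alpha \approx 5.26, far below the cardinality-blind \Gamma = (-2\ln\varepsilon)^{1/2} n^{1/2} for large n — exactly the overprotection gap of the previous slide.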


New robust model

Let \gamma : \{0,1\}^n \to \mathbb{R}_+ be a non-negative function.

U^{\gamma(x)} := \left\{ a \in \mathbb{R}^n : a_i = \bar a_i + \delta_i \hat a_i, \ -1 \le \delta_i \le 1, \ \sum_i |\delta_i| \le \gamma(x) \right\}.

We have shown that the new model

\sum_i a_i x_i \le b \quad \forall a \in U^{\alpha_\varepsilon(x)}

should be considered instead of the classical model

\sum_i a_i x_i \le b \quad \forall a \in U^\Gamma.


Better bound

The previous bound is loose. Bertsimas and Sim propose a better bound:

P\left(\sum_i a_i x^*_i > b\right) \le B(n, \Gamma) = \frac{1}{2^n}\left[ (1-\mu)\binom{n}{\lfloor\nu\rfloor} + \sum_{l=\lfloor\nu\rfloor+1}^{n} \binom{n}{l} \right],

where \nu = (\Gamma + n)/2 and \mu = \nu - \lfloor\nu\rfloor.

We can make this bound dependent on x by considering B(\|x\|_1, \Gamma).

\beta_\varepsilon(x) is the solution of the equation

B(\|x\|_1, \Gamma) - \varepsilon = 0 \qquad (3)

in the variable \Gamma. We solve (3) numerically.

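Since B(n, Γ) is continuous and non-increasing in Γ on [0, n], equation (3) can be solved by simple bisection. A sketch, assuming `math.comb` (Python ≥ 3.8); function names are illustrative:

```python
import math

def bound_B(n, gamma):
    """Bertsimas-Sim bound B(n, Gamma) on the violation probability:
    (1/2^n) [ (1-mu) C(n, floor(nu)) + sum_{l=floor(nu)+1}^{n} C(n, l) ],
    with nu = (Gamma + n)/2 and mu = nu - floor(nu)."""
    nu = (gamma + n) / 2.0
    fnu = math.floor(nu)
    mu = nu - fnu
    tail = sum(math.comb(n, l) for l in range(fnu + 1, n + 1))
    return ((1 - mu) * math.comb(n, fnu) + tail) / 2 ** n

def beta_eps(card, eps, tol=1e-9):
    """Solve B(card, Gamma) = eps for Gamma by bisection on [0, card];
    B is non-increasing in Gamma, so the root is unique when it exists."""
    lo, hi = 0.0, float(card)
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if bound_B(card, mid) > eps:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

At the endpoints, B(n, n) = 2^{-n} (only the all-up scenario survives) and B(n, 0) > 1/2, so any \varepsilon in between is attainable.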

Tractability. Example: Knapsack problem

\max \sum_{i=1}^{n} c_i x_i

\text{s.t.} \sum_{i=1}^{n} a_i x_i \le b \quad \forall a \in U^{\gamma(x)}

x \in \{0,1\}^n,

which can be rewritten as

\max \sum_{i=1}^{n} c_i x_i

\text{s.t.} \sum_{i=1}^{n} \bar a_i x_i + \max_{0 \le \delta_i \le 1,\ \sum_i \delta_i \le \gamma(x)} \sum_{i=1}^{n} \delta_i \hat a_i x_i \le b,

x \in \{0,1\}^n.

Knapsack problem

Using the dualization approach:

\max \sum_{i=1}^{n} c_i x_i

\text{s.t.} \sum_{i=1}^{n} \bar a_i x_i + z\,\gamma(x) + \sum_{i=1}^{n} p_i \le b,

z + p_i \ge \hat a_i x_i, \quad i = 1, \dots, n,

z, p \ge 0,

x \in \{0,1\}^n.

Non-convex reformulation (because of the product z\,\gamma(x)).

x binary may help.


Dualization

Theorem

Consider the robust constraint

a^T x \le b \quad \forall a \in U^{\gamma(x)}, \ x \in \{0,1\}^n, \qquad (4)

and suppose that \gamma = \gamma_0 + \sum_i \gamma_i x_i is an affine function of x, non-negative for x \in \{0,1\}^n. Then, (4) is equivalent to

\sum_{i=1}^{n} \bar a_i x_i + \gamma_0 z + \sum_{i=1}^{n} \gamma_i w_i + \sum_{i=1}^{n} p_i \le b,

z + p_i \ge \hat a_i x_i, \quad i = 1, \dots, n,

w_i - z \ge -\max_j(\hat a_j)(1 - x_i), \quad i = 1, \dots, n,

p, w, z \ge 0, \ x \in \{0,1\}^n.

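For a fixed binary x the budget \gamma(x) is a scalar, so the protection term behind the theorem reduces to the classical dualized form \min_{z \ge 0}\, \gamma(x) z + \sum_i \max(\hat a_i x_i - z, 0), which by LP duality equals the "pick the \gamma(x) largest deviations" value. A numerical sketch of that equivalence (names are illustrative; the dual optimum is attained at a breakpoint, so scanning z over {0} ∪ {\hat a_i x_i} suffices):

```python
def robust_term_dual(x, a_hat, budget):
    """Protection term via the dual: min over z >= 0 of
    budget*z + sum_i max(a_hat_i * x_i - z, 0). The objective is
    piecewise linear convex in z with breakpoints at the deviations,
    so it is enough to evaluate it there."""
    devs = [ah * xi for ah, xi in zip(a_hat, x)]
    candidates = [0.0] + devs
    return min(budget * z + sum(max(d - z, 0.0) for d in devs)
               for z in candidates)

def robust_term_primal(x, a_hat, budget):
    """Protection term directly: sum of the 'budget' largest deviations,
    fractionally for the last one when budget is not integral."""
    devs = sorted((ah * xi for ah, xi in zip(a_hat, x)), reverse=True)
    k, frac = int(budget), budget - int(budget)
    return sum(devs[:k]) + (frac * devs[k] if k < len(devs) else 0.0)
```

The two values coincide for every 0 ≤ budget ≤ n, which is exactly what the dualization exploits.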

Non-affine functions γ

[Figure: two plots of the budget value against \|x\|_1 = 0, …, 1000; the left compares \beta_{0.01} with \gamma^1, the right compares \beta_{0.01} with \min(\gamma^1, \gamma^2).]


Computational results

Objective

1 Is there a benefit in using U^\beta instead of U^\Gamma?

2 Computational "complexity" of solving the robust counterparts.

Models

We compare the following at \varepsilon = 0.01:

U^\Gamma The classical robust model with budgeted uncertainty.

U^{\gamma^1} Our new model with variable budgeted uncertainty: \gamma^1 over-approximates \beta.

U^{\gamma^1\gamma^2} Our new model with variable budgeted uncertainty: \min(\gamma^1, \gamma^2) over-approximates \beta.


The price of robustness at ε = 0.01

Instances from Bertsimas and Sim (2004)

[Figure: deterministic cost increase in % (0 to 3.5) versus the number of items n = 100, …, 1000, for U^{\gamma^1\gamma^2} (+), U^{\gamma^1} (•), and U^\Gamma (□).]


Computational complexity

model                      U^{\gamma^1}   U^{\gamma^1\gamma^2}   U^{\gamma^1\gamma^2\gamma^3}
time model / time U^\Gamma      1.7            3.4                    6.1
gap model / gap U^\Gamma        0.87           0.98                   1.1

Fixing M to \max_j(\hat a_j) affects the LP relaxation.

If M = 1000, gap U^{\gamma^1} / gap U^\Gamma → 3.9!


Outline

1 Robust optimization

2 Variable budgeted uncertainty

3 Cost uncertainty


Cost uncertainty

Suppose that only the cost coefficients are uncertain:

\min \max_{c \in U} \sum_{i=1}^{n} c_i x_i

\text{s.t.} \sum_{j=1}^{n} a_{ij} x_j \le b_i, \quad i = 1, \dots, m

x \in \{0,1\}^n,

which can be rewritten as

CO^\Gamma \equiv \min_{x \in X} \max_{c \in U^\Gamma} c^T x.

The previous probabilistic approximation leads to a relation between CO^\Gamma and

\min_{x \in X} \mathrm{VaR}_\varepsilon(c^T x).


Value-at-risk

Definition: \mathrm{VaR}_\varepsilon(c^T x) = \inf\{t \mid P(c^T x \le t) \ge 1 - \varepsilon\}.

We see easily that:

CO^\Gamma provides an upper bound on the optimization of \mathrm{VaR}.

The upper bound is very bad for small-cardinality vectors.

Model CO^\gamma overcomes this flaw:

CO^\gamma \equiv \min_{x \in X} \max_{c \in U^{\gamma(x)}} c^T x.


Shortest path problem

[Figure: cost reduction in % (10 to 40) versus \varepsilon = 0.01, …, 0.1, for instances NE1, AL1, MN1, IA1.]


Complexity of the resulting problems

Theorem

When γ is an affine function, CO^γ can be solved by solving n + 1 problems CO and taking the cheapest optimal solution.

Theorem

When γ is a non-decreasing function of \|x\|_1, CO^γ can be solved by solving n cardinality-constrained problems CO^Γ and taking the cheapest optimal solution.

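Both theorems exploit the fact that γ(x) is a fixed scalar once \|x\|_1 is fixed, so the inner maximum again reduces to a sorted-deviation sum per cardinality class. A brute-force sketch of that decomposition on a toy "pick a subset of prescribed size" problem (all names and data are illustrative, not from the talk):

```python
import math
from itertools import combinations

def co_gamma_bruteforce(c_bar, c_hat, gamma, feasible_sizes):
    """Solve CO_gamma for the toy problem X = {subsets whose size is in
    feasible_sizes}: for each cardinality k the budget gamma(k) is a
    constant, so the worst case over U^{gamma(x)} is the nominal cost
    plus the gamma(k) largest deviations of the selected items."""
    n = len(c_bar)
    best = math.inf
    for k in feasible_sizes:
        g = gamma(k)
        full, frac = int(g), g - int(g)
        for subset in combinations(range(n), k):
            devs = sorted((c_hat[i] for i in subset), reverse=True)
            worst = sum(devs[:full])
            if full < len(devs):
                worst += frac * devs[full]
            best = min(best, sum(c_bar[i] for i in subset) + worst)
    return best
```

With nominal costs (1, 2, 3), unit deviations, γ(k) = √k and subsets of size 2, the optimum picks items 1 and 2 for a worst-case cost of 3 + √2.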


Dynamic Programming

We use the notation \Gamma' = \min\left(n, \max_{k=0,\dots,n} \gamma(k)\right).

Theorem

Consider a combinatorial optimization problem that can be solved in O(τ) by dynamic programming. If γ(k) ∈ Z for each k = 0, …, n, then CO^γ can be solved in O(nΓ′τ). Otherwise, CO^γ can be solved in O(n²Γ′τ).

Theorem

Consider a combinatorial optimization problem that can be solved in O(τ) by dynamic programming. Then, CO^Γ can be solved in O(Γτ).

If Γ ∼ n^{1/2}, we get O(n^{1/2}τ), improving over the O(nτ) from Bertsimas and Sim.


Concluding remarks

We introduce a new class of uncertainty models.

They correct the flaw of the Bertsimas and Sim model.

The tractability of the new model is often comparable (or equal) to the traditional model.

Remark: The model can be extended to non-combinatorial problems but tractability becomes an issue.


Thanks for your attention
