TRANSCRIPT
Stochastic Programming Modeling
IMA New Directions Short Course on Mathematical Optimization
Jeff Linderoth
Department of Industrial and Systems Engineering, University of Wisconsin-Madison
August 8, 2016
Jeff Linderoth (UW-Madison) Stochastic Programming Modeling Lecture Notes 1 / 77
Week #2
The first week focused on theory and algorithms for continuous optimization problems where problem parameters are known with certainty.
This week we will focus on two different topics:
1 Stochastic Programming: Used for optimization under data uncertainty
2 Integer Programming: Used for modeling discrete decisions
Today’s Outline
About This Week
About Us
About You
Stochastic Programming
What is it?/Why Should we Do it?
A Newsvendor
Recourse Models and Extensive Form
How to implement in a modeling language
This Week
Resources
Our exercises will be done with AMPL: A Mathematical Programming Language
We added you all to a Dropbox: there you can get AMPL, templates for the exercises, and the lecture slides.
The Dream Team
Optimization “Dream Team”
Monday: Dave Morton, Northwestern, Sample Average Approximation
Tuesday: Shabbir Ahmed, Georgia Tech, Multistage Stochastic Programming
Wednesday: Robert Hildebrand, IBM, Lenstra's Algorithm
Thursday: Santanu Dey, Georgia Tech, Cutting Plane Theory
Friday: Dan Bienstock, Columbia, Mixed Integer Nonlinear Programming
Week Overview—Social Events!
Monday: Stub and Herb’s
Wednesday: Twins Game
Thursday: Surly Brewing Company
Recommended Texts
Stochastic Programming
?: Very good. Requires strong math background
?: A more gentle introduction, but still covers the whole field quite well.
?: FREE! It's in the Dropbox
Integer Programming
?: Classic reference.
?: A more gentle treatment
?: Very nice geometric intuition
?: My (new) favorite book
Course Level/Expectations
We will use AMPL (www.ampl.com) to solve problems and prototype algorithms:
If nothing else, you can get to learn a new language for modeling and solving mathematical optimization problems
We will do a few proofs, but we will not require significant mathematical sophistication beyond a reasonable understanding of LP duality
We assume some basic background in probability theory (no measure theory required) – what is a random variable, expected value, law of large numbers, some basic statistics (CLT)
We will expect some basic linear algebra knowledge
About us...
B.S. (G.E.), UIUC, 1992.
M.S., OR, GA Tech, 1994.
Ph.D., GA Tech, 1998
1998-2000 : MCS, ANL
2000-2002 : Axioma, Inc.
2002-2007 : Lehigh University
Research Areas: Large Scale Optimization, High Performance Computing.
Married. One child, Jacob. Now 13. He is awesome.
Hobbies: Golf, Integer Programming,Human Pyramids.
About Jim...
B.S. (I.E.), UW-Madison, 2001
M.S., OR, GA Tech, 2004.
Ph.D., GA Tech, 2007
2007-2008 : IBM
2008-2016 : UW-Madison
Research Areas: Discrete Optimization, Stochastic Optimization, Applications
Married. Three children: Rowan, Cameron, Remy. They are awesome.
Hobbies: Boxing, Integer Programming, Human Pyramids.
Picture Time
About You – Quiz #1!
1 Name
2 Nationality
3 Education Background.
4 Research Interests/Thesis Topic?
5 (Optimization) Modeling Languages you know: (AMPL, GAMS, Mosel, CVX, . . . )
6 Programming Languages you know: (C, Python, Matlab, Julia, FORTRAN, Java, . . . )
7 Anything specific you hope to accomplish/learn this week?
8 One interesting fact about yourself you think we should know.
9 Do you like human pyramids? :-)
Introduction to SP Background
Stochastic Programming
$64 Question
What does “Programming” mean in “Mathematical Programming”, “Linear Programming”, etc.?
A. Planning.
Mathematical Programming (Optimization) is about decision making, or planning.
Stochastic Programming is about decision making under uncertainty.
View it as “Mathematical Programming with random parameters”
Dealing With Randomness
In most applications of optimization, randomness is ignored
Otherwise, it is dealt with via:
Sensitivity analysis
For large-scale problems, sensitivity analysis is useless
“Careful” determination of instance parameters
No matter how careful you are, you can't get rid of inherent randomness.
Stochastic Programming is the way!¹
¹ This is not necessarily true, but we will assume it to be so for the next two days.
Introduction to SP Newsvendor
Hot Off the Presses
A paperboy (newsvendor) needs to decide how many papers to buy in order to maximize his profit.
He doesn't know at the beginning of the day how many papers he can sell (his demand).
Each newspaper costs c. He can sell each newspaper for a price of s. He can return each unsold newspaper at the end of the day for r. (Note that s > c > r.) The demand (unknown when we purchase papers) is D.
Newsvendor Profit

F(x, D) = { (s − c)x               if x ≤ D
          { sD + r(x − D) − cx     if x > D
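The piecewise profit above is a one-liner to evaluate; a minimal sketch, with illustrative default prices satisfying s > c > r (the specific numbers are assumptions, not part of the definition):

```python
def newsvendor_profit(x, d, s=2.0, c=0.3, r=0.05):
    """F(x, D): buy x papers at unit cost c, sell min(x, d) of them at
    price s, and salvage the unsold x - min(x, d) at price r (s > c > r)."""
    sold = min(x, d)
    return s * sold + r * (x - sold) - c * x

# x <= D: every paper sells, so profit is (s - c) x
# x >  D: profit is sD + r(x - D) - cx
```

Writing the profit as revenue on `sold` plus salvage on the remainder reproduces both branches of the piecewise formula at once.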
Pictures of Function
Marginal profit: (s − c) if we can sell all, i.e. x ≤ D. Marginal loss: (c − r) if we have to salvage, i.e. x > D.
[Figure: F(x, D) as a function of x: it rises with slope s − c up to x = D, then decreases with slope r − c.]
Jeff Linderoth (UW-Madison) Stochastic Programming Modeling Lecture Notes 17 / 77
Introduction to SP Newsvendor
What Should We Do?
Optimize, silly:

max_{x ≥ 0} F(x, D)

http://en.wikipedia.org/wiki/Chewbacca_defense
This problem does not make sense!
You can't optimize something random!
The Function is “Random”
[Figure: two realizations F(x, D1) and F(x, D2) of the profit function, each peaking at its own demand value, x = D1 and x = D2.]
One x can’t simultaneously optimize both functions
(Silly) Idea #1
Suppose D is a random variable with cdf H(t) := P(D ≤ t)
“Silly” Idea: Plan for the Average Case
Let µ := E[D] be the mean value of demand
In this case (proof by picture):

max_{x ≥ 0} F(x, µ) ⇒ x∗ = µ

In this case, the optimal policy is to purchase µ
We will see that this can be far from optimal when your problem takes more uncertainty into account
Idea #2 (Robust): Plan for the Worst Case
Suppose D ∈ [ℓ, u], and we wish to do the best we can given that the worst outcome for our objective will occur:

max_{x ≥ 0} min_{D ∈ [ℓ, u]} F(x, D)

Note that we can write

F(x, D) = min{(s − c)x, D(s − r) + (r − c)x},

so

max_{x ≥ 0} min_{D ∈ [ℓ, u]} F(x, D) = max_{x ≥ 0} min_{D ∈ [ℓ, u]} min{(s − c)x, D(s − r) + (r − c)x}
                                     = max_{x ≥ 0} min{(s − c)x, ℓ(s − r) + (r − c)x}
                                     = max_{x ≥ 0} F(x, ℓ) ⇒ x∗ = ℓ

Robust optimization has many attractive features, but we will not cover it in detail this week.
Idea #3: Maximize Long-Run Profit
The “best” idea
Treat F(x, D) as a proper random variable, and maximize long-run profit, i.e. solve the optimization problem

max_{x ≥ 0} E[F(x, D)].

In this case the objective makes sense: the newsvendor will make a purchase every day
Optimizing for the Newsvendor
Given only knowledge of the random variable D, given as the cdf H_D(t), how many newspapers should the newsvendor buy?
With some old-school calculus (chain rule, Fundamental Theorem of Calculus), one can show that the optimal closed-form solution to the newsvendor problem is

x∗ = H⁻¹((s − c)/(s − r)),

the (s − c)/(s − r) quantile of the distribution H.
It Ain't Always “That Easy”
The newsvendor is about the only stochastic program that admits such a simple “closed form” solution.
In general, we must solve instances numerically (and also approximately)
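The quantile formula is easy to evaluate; a sketch using the standard-library `statistics.NormalDist`, for an illustrative normally distributed demand with µ = 100, σ = 20 (these instance numbers are assumptions for the example):

```python
from statistics import NormalDist

def optimal_order(s, c, r, demand):
    """x* = H^{-1}((s - c)/(s - r)): order the critical-ratio quantile of demand."""
    return demand.inv_cdf((s - c) / (s - r))

# Illustrative instance: s = 2, c = 0.3, r = 0.05, D ~ Normal(100, 20)
x_star = optimal_order(2.0, 0.3, 0.05, NormalDist(mu=100, sigma=20))
```

Here the critical ratio is 1.7/1.95 ≈ 0.87, so the order quantity sits well above the mean demand of 100.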
Simulating (with Scenarios)
newsboy.xls
s = 2, c = 0.3, r = 0.05
Demand: Normally distributed. µ = 100, σ = 20
Mean Value Solution
Buy 100. (Duh!) TRUE long-run profit ≈ 154
Stochastic Solution
Buy 123. TRUE long-run profit ≈ 162
The difference between the two solutions (162− 154) is called thevalue of the stochastic solution.
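These long-run profits are easy to reproduce by Monte Carlo instead of a spreadsheet; a sketch of the same experiment (sample size and seed are arbitrary choices):

```python
import random
from statistics import mean

def profit(x, d, s=2.0, c=0.3, r=0.05):
    sold = max(0.0, min(x, d))   # a negative demand draw sells nothing
    return s * sold + r * (x - sold) - c * x

random.seed(1)
demands = [random.gauss(100, 20) for _ in range(200_000)]

mean_value_profit = mean(profit(100, d) for d in demands)  # plan for average demand
stochastic_profit = mean(profit(123, d) for d in demands)  # critical-ratio quantile
vss_estimate = stochastic_profit - mean_value_profit       # roughly 162 - 154
```

The simulated averages land near 154 and 162, and their gap estimates the value of the stochastic solution.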
Do You Feel Lucky, Punk?
Should we always optimize the randomvariable F (x,D) in expectation?
We may be “risk-averse”:

min_{x ≥ 0} ρ[F(x, D)]

If ρ(a) = E[a]: standard stochastic program
If ρ(a) = E[a] + λV(a) for λ ∈ R, we have a “mean-variance” stochastic program
Risk measures are discussed in thesecond lecture
Another Possible Newsvendor Problem
Suppose the newsvendor is lazy. He just wants to usually make enough money to go to Stub and Herb's, but he doesn't want to hurt his back carrying too many papers.
Chance Constraints
min_{x ≥ 0} { x | P{F(x, D) ≥ b} ≥ 1 − α }

Minimize the number of papers to purchase so that the probability that you make at least b in profit is at least 1 − α. Note that F(x, D) is a random variable.
Jim will discuss this a bit as well
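With a finite scenario list, this chance constraint can be checked by simple enumeration over candidate order quantities; a sketch, where the scenario probabilities, the target b, and α are all made-up numbers for illustration:

```python
def profit(x, d, s=2.0, c=0.3, r=0.05):
    sold = min(x, d)
    return s * sold + r * (x - sold) - c * x

def fewest_papers(scenarios, b, alpha, x_max=1000):
    """Smallest integer x with P{F(x, D) >= b} >= 1 - alpha,
    where scenarios is a list of (probability, demand) pairs."""
    for x in range(x_max + 1):
        if sum(p for p, d in scenarios if profit(x, d) >= b) >= 1 - alpha:
            return x
    return None

# Hypothetical data: demand is 60, 100, or 140 w.p. .25, .5, .25
x = fewest_papers([(0.25, 60), (0.5, 100), (0.25, 140)], b=120, alpha=0.3)
```

Enumeration from 0 upward is safe here because we want the smallest feasible x; no monotonicity of the feasible set is assumed.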
Take Away Message
The “Flaw” of Averages
The flaw of averages occurs when uncertainties are replaced by single “average” numbers in planning.
Joke: Did you hear the one about the statistician who drowned fording a river with an average depth of three feet?
Take-away Message: Point Estimates
If you are planning using point estimates, then you are planning sub-optimally.
It doesn't matter how carefully you choose the point estimate: it is impossible to hedge against future uncertainty by considering only one realization of the uncertainty in your planning process.
Stages
Stages and Decisions
The newsvendor problem is a classical “recourse problem”:
1 We make a decision now (first-period decision)
2 Nature makes a random decision (“stuff” happens)
3 We make a second-period decision that attempts to repair the havoc wrought by nature in (2). (recourse)
Key Idea
The evolution of information is of paramount importance
Newsvendor Again
Newsvendor Profit
F(x, D) = min{(s − c)x, (s − r)D + (r − c)x}

D a random variable with cdf H_D(t)
We showed that

x∗ = H⁻¹((s − c)/(s − r)).

Suppose that Ω = {d_1, d_2, . . . , d_|S|},
so there is a finite set of scenarios S, each with associated probability p_j (Σ_{j∈S} p_j = 1).
Newsvendor SP
Parameters
ds: Demand for newspapers in scenario s
ps: Probability of scenario s
Writing an optimization model for the newsvendor
Variables
x: Number to purchase
ys: Number to sell in scenario s
zs: Number to salvage in scenario s
Newsvendor Stochastic LP
max  −cx + Σ_{s∈S} p_s (q y_s + r z_s)
s.t. y_s ≤ d_s             ∀s ∈ S
     x − y_s − z_s = 0     ∀s ∈ S
     x ≥ 0
     y_s, z_s ≥ 0          ∀s ∈ S
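Because the expected profit is piecewise-linear and concave in x, with breakpoints only at the scenario demands, this toy extensive form can be solved without an LP solver by substituting the optimal recourse (y_s = min(x, d_s), z_s = x − y_s) and checking the breakpoints; a sketch with hypothetical scenario data, writing the selling price s in place of the objective coefficient q:

```python
def expected_profit(x, scenarios, s=2.0, c=0.3, r=0.05):
    """-cx + sum_s p_s (s*y_s + r*z_s) with the optimal recourse
    y_s = min(x, d_s), z_s = x - y_s substituted in."""
    return sum(p * (s * min(x, d) + r * max(x - d, 0))
               for p, d in scenarios) - c * x

# Hypothetical scenarios as (probability, demand) pairs
scenarios = [(0.3, 80), (0.4, 100), (0.3, 120)]
candidates = [0] + [d for _, d in scenarios]   # breakpoints of the concave objective
x_star = max(candidates, key=lambda x: expected_profit(x, scenarios))
```

With these numbers the critical ratio is about 0.87, so the breakpoint search picks the largest demand scenario, matching the quantile rule for the discrete distribution.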
Put Another Way
We could write the objective for the newsvendor problem in the form

E[F(x, D)] = −cx + E[Q(x, D)],

where

Q(x, D) = max_{y ≥ 0, z ≥ 0} { qy + rz | y ≤ D, y + z = x }.

Q(x, D) is the optimal recourse function: given that we have chosen x and observed demand D, what should we do to maximize profit?
It’s Not Always So Easy
For the newsvendor, the recourse function Q(x, D) has a simple closed form:

Q(x, D) = min{sx, sD + r(x − D)}

In general the recourse function may not be simple.
In fact, for two-stage stochastic linear programs, the recourse function will be the optimal value of a linear program.
Scenario Modeling
The most common representation of uncertainty (in stochastic programming) is via a list of scenarios, which are specific representations of how the future will unfold.
Think of these as random variables ξ_1, ξ_2, . . . , ξ_S, with ξ_j ∈ Ξ
What we CAN'T do
Planners often generate a solution for each scenario generated: “what-if” analysis.
Each solution yields a prescription of what should be done if that scenario occurs, but there is no theoretical guidance about the compromise between those prescriptions.
Can we “combine” these prescriptions in a natural way? Stochastic Programming does this!
Farmer Ted Background
Farmer Ted
In this example, the farmer has recourse; that is, he can do something at step (3), not just sell his newspapers.
Farmer Ted can grow Wheat, Corn, or Beans on his 500 acres.
Farmer Ted requires 200 tons of wheat and 240 tons of corn to feed his cattle.
These can be grown on his land or bought from a wholesaler.
More Constraints
Any excess production can be sold for $170/ton (wheat) and $150/ton (corn).
Any shortfall must be bought from the wholesaler at a cost of $238/ton (wheat) and $210/ton (corn).
Farmer Ted can also grow beans:
Beans sell at $36/ton for the first 6000 tons.
Due to economic quotas on bean production, beans in excess of 6000 tons can only be sold at $10/ton.
The Data
500 acres available for planting
                         Wheat   Corn   Beans
Yield (T/acre)             2.5      3      20
Planting Cost ($/acre)     150    230     260
Selling Price ($/T)        170    150     36 (≤ 6000T), 10 (> 6000T)
Purchase Price ($/T)       238    210     N/A
Minimum Requirement (T)    200    240     N/A
Formulate the LP – Decision Variables
xW, xC, xB: acres of Wheat, Corn, Beans planted
wW, wC, wB: tons of Wheat, Corn, Beans sold (at the favorable price)
eB: tons of beans sold at the lower price
yW, yC: tons of Wheat, Corn purchased
Note that Farmer Ted has recourse. After he observes the weather event, he can decide how much of each crop to sell or purchase!
Formulation
max −150xW − 230xC − 260xB − 238yW + 170wW − 210yC + 150wC + 36wB + 10eB
subject to
xW + xC + xB ≤ 500
2.5xW + yW − wW = 200
3xC + yC − wC = 240
20xB − wB − eB = 0
wB ≤ 6000
xW , xC , xB, yW , yC , eB, wW , wC , wB ≥ 0
Solution with (expected) yields
               Wheat   Corn   Beans
Plant (acres)    120     80     300
Production       300    240    6000
Sales            100      0    6000
Purchase           0      0       0
Profit: $118,600
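The $118,600 figure can be checked by pricing the planting plan directly; a sketch that evaluates an arbitrary plan under given yields, using the cost and price data from the slides:

```python
PLANT_COST = {"wheat": 150, "corn": 230, "beans": 260}   # $/acre
SELL = {"wheat": 170, "corn": 150}                        # $/ton, favorable price
BUY = {"wheat": 238, "corn": 210}                         # $/ton from the wholesaler
REQUIRE = {"wheat": 200, "corn": 240}                     # tons needed for the cattle

def farm_profit(plant, yields):
    """Profit of planting plan `plant` (acres) when `yields` (tons/acre) occur."""
    profit = -sum(PLANT_COST[c] * plant[c] for c in plant)
    for c in ("wheat", "corn"):
        surplus = yields[c] * plant[c] - REQUIRE[c]
        # sell the surplus, or (if surplus < 0) buy the shortfall
        profit += SELL[c] * surplus if surplus >= 0 else BUY[c] * surplus
    beans = yields["beans"] * plant["beans"]
    profit += 36 * min(beans, 6000) + 10 * max(beans - 6000, 0)
    return profit

mean_yields = {"wheat": 2.5, "corn": 3.0, "beans": 20.0}
mv_profit = farm_profit({"wheat": 120, "corn": 80, "beans": 300}, mean_yields)
```

This is exactly the recourse evaluation: planting costs now, sales and purchases after the yields are known.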
It’s the Weather, Stupid!
Farmer Ted knows well enough that his yields aren't always precisely Y = (2.5, 3, 20). He decides to run two more scenarios:
Good weather: 1.2Y
Bad weather: 0.8Y
Farmer Ted Making the SuperModel
Creating a Stochastic Model
Here is a general procedure for making a (scenario-based) two-stage stochastic optimization problem:
For a “nominal” state of nature (scenario), formulate an appropriate LP model
Decide which decisions are made before uncertainty is revealed, and which are decided after
All second-stage variables get a “scenario” index
Constraints with scenario indices must hold for all scenarios
Second-stage variables in the objective function should be weighted by the probability of the scenario occurring
What does this mean in our case?
First-stage variables are the x (planting) variables
Second-stage variables are the y, w, e (purchase and sale) variables
We have one copy of the y, w, e for each scenario!
Attach a scenario subscript s = 1, 2, 3 to each of the purchase and sale variables.
1: Good, 2: Average, 3: Bad
wC2: tons of corn sold at the favorable price in scenario 2
eB3: tons of beans sold at the unfavorable price in scenario 3.
Expected Profit
The second-stage cost for each submodel appears in the overall objective function, weighted by the probability that nature will choose that scenario:
−150xW − 230xC − 260xB
+1/3(−238yW1 + 170wW1 − 210yC1 + 150wC1 + 36wB1 + 10eB1)
+1/3(−238yW2 + 170wW2 − 210yC2 + 150wC2 + 36wB2 + 10eB2)
+1/3(−238yW3 + 170wW3 − 210yC3 + 150wC3 + 36wB3 + 10eB3)
Constraints
xW + xC + xB ≤ 500
3xW + yW1 − wW1 = 200
2.5xW + yW2 − wW2 = 200
2xW + yW3 − wW3 = 200
Constraints (cont.)
3.6xC + yC1 − wC1 = 240
3xC + yC2 − wC2 = 240
2.4xC + yC3 − wC3 = 240
24xB − wB1 − eB1 = 0
20xB − wB2 − eB2 = 0
16xB − wB3 − eB3 = 0
wB1, wB2, wB3 ≤ 6000
All vars ≥ 0
Optimal Solution
                      Wheat   Corn   Beans
Plant (acres)           170     80     250

Scenario 1 (Good)
  Production            510    288    6000
  Sales                 310     48    6000
  Purchase                0      0       0

Scenario 2 (Average)
  Production            425    240    5000
  Sales                 225      0    5000
  Purchase                0      0       0

Scenario 3 (Bad)
  Production            340    192    4000
  Sales                 140      0    4000
  Purchase                0     48       0
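This stochastic plan's expected profit can be verified by pricing the plan in each weather scenario with the recourse (sell the surplus, buy the shortfall) applied greedily, which is optimal here; a small re-derivation from the slide data:

```python
def farm_profit(plant, factor):
    """Profit of plan `plant` (acres) when yields are factor * (2.5, 3, 20)."""
    yields = {"wheat": 2.5 * factor, "corn": 3.0 * factor, "beans": 20.0 * factor}
    cost = {"wheat": 150, "corn": 230, "beans": 260}
    profit = -sum(cost[c] * plant[c] for c in plant)
    wheat = yields["wheat"] * plant["wheat"] - 200   # surplus over feed requirement
    corn = yields["corn"] * plant["corn"] - 240
    profit += 170 * wheat if wheat >= 0 else 238 * wheat
    profit += 150 * corn if corn >= 0 else 210 * corn
    beans = yields["beans"] * plant["beans"]
    profit += 36 * min(beans, 6000) + 10 * max(beans - 6000, 0)
    return profit

stoch_plan = {"wheat": 170, "corn": 80, "beans": 250}
# equally likely Good / Average / Bad weather: factors 1.2, 1.0, 0.8
expected = sum(farm_profit(stoch_plan, f) for f in (1.2, 1.0, 0.8)) / 3
```

The three scenario profits average to the $108,390 reported for the stochastic solution.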
Farmer Ted Statistics:VSS
The Value of the Stochastic Solution (VSS)
Suppose we just replaced the “random” quantities (the yields) by their mean values and solved that problem.
Would we get the same expected value for the Farmer's profit?
How can we check?
Solve the “mean-value” problem to get a first-stage solution x.
Fix the first-stage solution at that value x, and solve all the scenarios to see Farmer Ted's profit in each.
Take the weighted (by probability) average of the optimal objective value for each scenario.
Alternatively (and probably faster), we can fix the x variables and solve the stochastic programming problem we created.
Computing FT’s VSS
Mean yields Y = (2.5, 3, 20)
(We already solved this problem).
xW = 120, xC = 80, xB = 300
Fixed Policy – Average Yield Scenario
maximize
−150xW − 230xC − 260xB − 238yW + 170wW − 210yC + 150wC + 36wB + 10eB
subject to
xW = 120
xC = 80
xB = 300
xW + xC + xB ≤ 500
2.5xW + yW − wW = 200
3xC + yC − wC = 240
20xB − wB − eB = 0
wB ≤ 6000
xW , xC , xB , yW , yC , eB , wW , wC , wB ≥ 0
Fixed Policy – Average Yield Scenario Solution
               Wheat   Corn   Beans
Plant (acres)    120     80     300
Production       300    240    6000
Sales            100      0    6000
Purchase           0      0       0
Profit: $118,600
Fixed Policy – Bad Yield Scenario
maximize
−150xW − 230xC − 260xB − 238yW + 170wW − 210yC + 150wC + 36wB + 10eB
subject to
xW = 120
xC = 80
xB = 300
xW + xC + xB ≤ 500
2xW + yW − wW = 200
2.4xC + yC − wC = 240
16xB − wB − eB = 0
wB ≤ 6000
Objective Value: $55,120
Fixed Policy – Good Yield Scenario
maximize
−150xW − 230xC − 260xB − 238yW + 170wW − 210yC + 150wC + 36wB + 10eB
subject to
xW = 120
xC = 80
xB = 300
xW + xC + xB ≤ 500
3xW + yW − wW = 200
3.6xC + yC − wC = 240
24xB − wB − eB = 0
wB ≤ 6000
Objective Value: $148,000
What’s it Worth to Model Randomness?
If Farmer Ted implemented the policy based on using only “average” yields, he would plant xW = 120, xC = 80, xB = 300.

He would expect in the long run to make an average profit of

1/3(118,600) + 1/3(55,120) + 1/3(148,000) = $107,240

If Farmer Ted implemented the policy based on the solution to the stochastic programming problem, he would plant xW = 170, xC = 80, xB = 250.

From this he would expect to make $108,390.
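The arithmetic behind these two expectations is easy to verify; a minimal sketch using only the profit figures from the slides:

```python
# Expected long-run profit of the mean-value policy (plant 120/80/300),
# averaging the three equally likely yield scenarios from the slides.
scenario_profits = [118_600, 55_120, 148_000]   # average, bad, good yields
z_mv = sum(scenario_profits) / 3

# Expected profit of the stochastic-programming policy (plant 170/80/250).
z_sp = 108_390

vss = z_sp - z_mv        # Value of the Stochastic Solution (next slide)
print(z_mv, vss)         # 107240.0 1150.0
```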
VSS
The difference of the values 108,390 − 107,240 is the Value of the Stochastic Solution: $1,150.
It would pay off $1150 per growing season for Farmer Ted to use the“stochastic” solution rather than the “mean value” solution.
$1150 is precisely the “value” of implementing a planting policy basedon the “stochastic solution”, rather than the mean-value solution.
(General) Stochastic Programming
A Stochastic Program

min_{x∈X} f(x) := E_ω[F(x, ξ(ω))]
2 Stage Stochastic LP w/Recourse
F(x, ω) := c^T x + Q(x, ω)
cTx: Pay me now
Q(x, ω): Pay me later
The Recourse Problem
Q(x, ω) := min  q(ω)^T y
           s.t. W(ω) y = h(ω) − T(ω) x
                y ≥ 0
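To make the “pay me now / pay me later” split concrete, here is a minimal Python sketch with hypothetical numbers (not from the lecture): buy x units now at cost c each, and cover any shortfall against the realized demand ω at the higher price q. The recourse LP Q(x, ω) = min{q·y : y ≥ ω − x, y ≥ 0} then has a closed-form value, so f(x) = E_ω[F(x, ω)] can be computed directly over a finite scenario set.

```python
c, q = 1.0, 3.0                                       # now-price vs. later-price
scenarios = [(0.3, 50.0), (0.5, 80.0), (0.2, 120.0)]  # (p_s, omega_s)

def Q(x, omega):
    """Recourse value: cheapest second-stage purchase covering the shortfall."""
    return q * max(omega - x, 0.0)

def f(x):
    """f(x) = c*x + E_omega[Q(x, omega)]."""
    return c * x + sum(p * Q(x, w) for p, w in scenarios)

# Buying nothing now defers all cost to the expensive second stage.
print(f(0.0), f(80.0))    # roughly 237.0 and 104.0
```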
Extensive Form
Two Stage Stochastic Linear Program
Assume Ω = {ω_1, ω_2, . . . , ω_S} ⊆ R^r, with P(ω = ω_s) = p_s for s = 1, 2, . . . , S, and define
T_s := T(ω_s),  h_s := h(ω_s),  q_s := q(ω_s),  W_s := W(ω_s).

min  c^T x + ∑_{s=1}^S p_s Q_s(x)
s.t. Ax ≥ b
     x ∈ R^{n_1}_+

where, for s = 1, . . . , S,

Q_s(x) := Q(x, ω_s) = min  q_s^T y
                      s.t. W_s y = h_s − T_s x
                           y ∈ R^{n_2}_+
Extensive Form
When we have a finite number of scenarios, or if we approximate the problem with a finite number of scenarios², we can write an equivalent extensive form linear program:
min  c^T x + p_1 q_1^T y_1 + p_2 q_2^T y_2 + · · · + p_S q_S^T y_S
s.t. Ax = b
     T_1 x + W_1 y_1 = h_1
     T_2 x + W_2 y_2 = h_2
         ...
     T_S x + W_S y_S = h_S
     x ∈ X,  y_s ∈ Y  for s = 1, . . . , S
² Stay tuned for Dave Morton’s lecture.
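The block structure of the extensive form can be made visible by assembling the constraint matrix [[A, 0, …, 0], [T_1, W_1, 0, …], …, [T_S, 0, …, W_S]] explicitly. A minimal sketch with tiny hypothetical 1×1 blocks (a real implementation would use sparse matrices):

```python
def extensive_form_matrix(A, T, W):
    """Stack [A | 0 ... 0] over the rows [T_s | 0 ... W_s ... 0], s = 1..S.

    A, T[s], W[s] are dense matrices given as lists of rows.  Scenario s's
    recourse variables y_s occupy their own column block, so each
    second-stage row couples x to one y_s only (dual block-angular form).
    """
    S = len(T)
    n2 = len(W[0][0])            # second-stage variables per scenario
    rows = []
    for a_row in A:              # first-stage rows: [A | 0 ... 0]
        rows.append(a_row + [0.0] * (S * n2))
    for s in range(S):           # scenario rows: [T_s | ... W_s ... ]
        for t_row, w_row in zip(T[s], W[s]):
            rows.append(t_row + [0.0] * (s * n2) + w_row
                        + [0.0] * ((S - 1 - s) * n2))
    return rows

A = [[1.0]]                      # Ax = b block
T = [[[2.0]], [[3.0]]]           # T_1, T_2
W = [[[1.0]], [[1.0]]]           # W_1, W_2
for row in extensive_form_matrix(A, T, W):
    print(row)
# [1.0, 0.0, 0.0]
# [2.0, 1.0, 0.0]
# [3.0, 0.0, 1.0]
```

The zero blocks off the W-diagonal are exactly the structure that decomposition methods exploit.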
The Upshot
This is just a larger linear program
It is a larger linear program that also has special structure
Jim explains how to exploit this structure tomorrow
min  c^T x + p_1 q_1^T y_1 + p_2 q_2^T y_2 + · · · + p_S q_S^T y_S
s.t. Ax = b
     T_1 x + W_1 y_1 = h_1
     T_2 x + W_2 y_2 = h_2
         ...
     T_S x + W_S y_S = h_S
     x ∈ X,  y_s ∈ Y  for s = 1, . . . , S
Building the Supermodel
Weird Science
A general technique for creating two-stage recourse problems.
1 Write a nominal (one scenario) model
2 Decide which variables are first stage, and which are second stage
3 Give a scenario index to all second-stage variables and random parameters
4 “Give context” to all scenarios
Facility Location Example
Facility Location and Distribution
Facilities: I
Customers: J
Fixed cost f_i and capacity u_i for facility i ∈ I
Demand d_j for customer j ∈ J
Per-unit delivery cost c_ij, ∀i ∈ I, j ∈ J
min  ∑_{i∈I} f_i x_i + ∑_{i∈I} ∑_{j∈J} c_ij y_ij
s.t. ∑_{i∈I} y_ij ≥ d_j              ∀j ∈ J
     ∑_{j∈J} y_ij − u_i x_i ≤ 0      ∀i ∈ I
     x_i ∈ {0, 1},  y_ij ≥ 0         ∀i ∈ I, ∀j ∈ J
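For intuition, the nominal model can be brute-forced on a tiny hypothetical instance (2 facilities, 2 customers, made-up data). To keep the inner routing trivial, the capacities are chosen loose enough that each customer simply ships from its cheapest open facility:

```python
from itertools import product

f = [10.0, 12.0]                 # fixed costs f_i
u = [100.0, 100.0]               # capacities u_i (non-binding here)
d = [30.0, 40.0]                 # demands d_j (total 70 <= each u_i)
c = [[1.0, 3.0],                 # c[i][j]: unit cost, facility i -> customer j
     [2.0, 1.0]]

best = None
for x in product([0, 1], repeat=len(f)):
    opened = [i for i in range(len(f)) if x[i]]
    if not opened:
        continue                 # no open facility: demand cannot be met
    # capacities don't bind, so each customer uses its cheapest open facility
    ship = sum(d[j] * min(c[i][j] for i in opened) for j in range(len(d)))
    cost = sum(f[i] for i in opened) + ship
    if best is None or cost < best[0]:
        best = (cost, x)

print(best)                      # (92.0, (1, 1)): opening both is cheapest
```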
AMPL for FL
AMPL Code
var x{I} binary;
var y{I,J} >= 0;

minimize Cost:
    sum{i in I} f[i]*x[i] + sum{i in I, j in J} c[i,j]*y[i,j];

subject to MeetDemand{j in J}:
    sum{i in I} y[i,j] >= d[j];

subject to FacCapacity{i in I}:
    sum{j in J} y[i,j] - u[i]*x[i] <= 0;
Evolution of Information
1 Build facilities now
2 Demand becomes known. One of the scenarios S = {d^1, d^2, . . . , d^{|S|}} happens
3 Meet demand from open facilities
First stage variables: xi
Second stage variables: yijs
The SuperModel
min  ∑_{i∈I} f_i x_i + ∑_{s∈S} p_s ∑_{i∈I} ∑_{j∈J} c_ij y_ijs
s.t. ∑_{i∈I} y_ijs ≥ d_js            ∀j ∈ J, ∀s ∈ S
     ∑_{j∈J} y_ijs − u_i x_i ≤ 0     ∀i ∈ I, ∀s ∈ S
     x_i ∈ {0, 1},  y_ijs ≥ 0        ∀i ∈ I, ∀j ∈ J, ∀s ∈ S
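Adding the scenario index s changes only the second stage: x is fixed before demand is known, while the routing adapts per scenario. A minimal sketch on a tiny hypothetical instance (made-up data, capacities chosen non-binding so the per-scenario routing has a simple closed form):

```python
from itertools import product

f = [10.0, 12.0]                      # fixed costs f_i
c = [[1.0, 3.0], [2.0, 1.0]]          # unit delivery costs c[i][j]
p = [0.5, 0.5]                        # scenario probabilities p_s
d = [[30.0, 40.0],                    # d[s][j]: scenario-indexed demand
     [60.0, 10.0]]

def expected_cost(x):
    opened = [i for i in range(len(f)) if x[i]]
    if not opened:
        return float("inf")           # demand can never be met
    fixed = sum(f[i] for i in opened)
    # capacities are loose, so in every scenario each customer ships
    # from its cheapest open facility
    ship = sum(p[s] * sum(d[s][j] * min(c[i][j] for i in opened)
                          for j in range(len(d[s])))
               for s in range(len(p)))
    return fixed + ship

best = min((expected_cost(x), x) for x in product([0, 1], repeat=len(f)))
print(best)                           # (92.0, (1, 1))
```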
Modeling Discussion
Do we always want to meet demand?
Regardless of the outcome ds?
What happens on the off chance that our product is so popular that we can’t possibly meet demand, even if we opened all of the facilities?
Does the world end?
Two Ideas
1 We could penalize not meeting demand of customers.
2 We only want to meet demand “most of the time”. (Chance constraint)
SP Definitions
A 2-stage stochastic optimization problem has complete recourse if, for every scenario and every first-stage vector x, there exists a feasible second-stage solution:

Q_s(x) < +∞  ∀x ∈ R^n, ∀s = 1, . . . , S

A 2-stage stochastic optimization problem has relatively complete recourse if, for every scenario and every feasible first-stage solution, there exists a feasible second-stage solution:

Q_s(x) < +∞  ∀x ∈ X, ∀s = 1, . . . , S
Penalize Shortfall: A Recourse Formulation
min  ∑_{i∈I} f_i x_i + ∑_{s∈S} p_s ( ∑_{i∈I} ∑_{j∈J} c_ij y_ijs + λ ∑_{j∈J} e_js )
s.t. ∑_{i∈I} y_ijs + e_js ≥ d_js         ∀j ∈ J, ∀s ∈ S
     ∑_{j∈J} y_ijs − u_i x_i ≤ 0         ∀i ∈ I, ∀s ∈ S
     x_i ∈ {0, 1},  y_ijs ≥ 0,  e_js ≥ 0  ∀i ∈ I, ∀j ∈ J, ∀s ∈ S
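A minimal sketch of why the shortfall variable buys complete recourse, on a tiny hypothetical instance (2 facilities, 2 customers, 2 scenarios, made-up data, non-binding capacities): with e_js priced at λ per unit, the second stage is feasible for every first-stage x, including opening nothing at all.

```python
f = [10.0, 12.0]                      # fixed costs
c = [[1.0, 3.0], [2.0, 1.0]]          # unit delivery costs c[i][j]
p = [0.5, 0.5]                        # scenario probabilities
d = [[30.0, 40.0], [60.0, 10.0]]      # d[s][j]: scenario demands
lam = 5.0                             # per-unit shortfall penalty (lambda)

def expected_cost(x):
    opened = [i for i in range(len(f)) if x[i]]
    total = sum(f[i] for i in opened)
    for s in range(len(p)):
        for j in range(len(d[s])):
            # capacities don't bind: serve each unit from the cheapest
            # open facility, or pay the penalty if that is cheaper
            # (or if nothing is open at all)
            unit = min([c[i][j] for i in opened] + [lam])
            total += p[s] * d[s][j] * unit
    return total

print(expected_cost((0, 0)))   # 350.0: all demand short, yet still feasible
print(expected_cost((1, 1)))   # 92.0: serving beats paying the penalty
```

Without the e_js variables, x = (0, 0) would make every second-stage problem infeasible, so the original model has only relatively complete recourse at best.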
AMPL Hints
Stop. AMPL Time.
AMPL Hints
1 All chapters of the AMPL book are available for download: http://ampl.com/resources/the-ampl-book/chapter-downloads/
2 You can change the solver with the command option solver cplex; (or replace cplex with baron, conopt, gurobi, knitro, loqo, minos, snopt, or xpress).
3 Use var to declare variables; you may also put >= 0 on the same line if the variables are constrained to be non-negative.
4 Once your AMPL model is complete, you can type model <filename>; at the ampl: prompt. This will tell you if you have syntax errors.
5 If you have syntax errors, fix them, save the file, and type reset; Then go to step 4.
6 If there are no errors, type solve;
AMPL Entities
Data
  Sets: lists of products, materials, etc.
  Parameters: numerical inputs such as costs, etc.
Model
  Variables: the values to be decided upon.
  Objective function.
  Constraints.
Data and model are typically stored in different files!
Template of Typical AMPL File
Define Sets
Define Parameters
Define Variables
Also can define variable bound constraints in this section
Define Objective
Define Constraints
Important AMPL Keywords/Syntax
model file.mod;
data file.dat;
reset;
quit;
set
param
var
maximize (minimize)
subject to
Important AMPL Notes
The # character starts a comment
All statements must end in a semi-colon;
Names must be unique!
A variable and a constraint cannot have the same name
AMPL is case sensitive. Keywords must be in lower case.
Even if the AMPL error message is cryptic, look at the location where it shows an error – this will often help you deduce what is wrong.
Learning Data Input
Look at examples
Look at Chapter 9 of the AMPL book: http://ampl.com/resources/the-ampl-book/chapter-downloads/
Some AMPL Tips
option show_stats 1; shows the problem size
Conclusions
Replacing uncertain parameters with point estimates may lead to sub-optimal planning: the flaw of averages
Two-stage recourse problems: Decision → Event → Decision
The Value of the Stochastic Solution
Creating the extensive form/supermodel
VSS: Value of the Stochastic Solution
Let z_s be the optimal solution value of the stochastic program

    z_s := min_{x∈X} E[F(x, ξ(ω))]

Let x_mv be an optimal solution to the “mean-value” problem:

    x_mv ∈ argmin_{x∈X} F(x, E[ξ(ω)])

Let z_mv be the long-run cost if you plan based on the policy obtained from the “average” scenario:

    z_mv := E[F(x_mv, ξ(ω))]

Value of Stochastic Solution

    vss := z_mv − z_s

Simple HW: Prove vss ≥ 0
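A quick numeric sanity check of vss ≥ 0 on a toy newsvendor-style problem (hypothetical numbers): x units are bought now at cost c, and shortfall against demand ω costs q > c per unit, so F(x, ω) = c·x + q·max(ω − x, 0). Since q > c, the deterministic problem against the mean demand is solved by ordering exactly E[ω].

```python
c, q = 1.0, 3.0
scenarios = [(0.3, 50.0), (0.5, 80.0), (0.2, 120.0)]   # (p_s, omega_s)

def F(x, w):
    return c * x + q * max(w - x, 0.0)

def f(x):                                  # f(x) = E[F(x, xi)]
    return sum(p * F(x, w) for p, w in scenarios)

# z_s: f is piecewise linear with breakpoints at the demand values,
# so the stochastic optimum is attained at one of them.
z_s = min(f(w) for _, w in scenarios)

# Mean-value policy: order exactly the mean demand (optimal here
# because q > c makes shortage, not surplus, the expensive side).
x_mv = sum(p * w for p, w in scenarios)    # mean demand, 79.0
z_mv = f(x_mv)

vss = z_mv - z_s
print(z_s, z_mv, vss)                      # vss is about 1.1, and >= 0
```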