MIT 2.852
Manufacturing Systems Analysis
Lecture 25: Probability
Basic probability, Markov processes, M/M/1 queues, and more

Stanley B. Gershwin
http://web.mit.edu/manuf-sys
Massachusetts Institute of Technology
Spring, 2010

Copyright © 2010 Stanley B. Gershwin.
Probability and Statistics: Trick Question

I flip a coin 100 times, and it shows heads every time.

Question: What is the probability that it will show heads on the next flip?
Probability and Statistics

Probability ≠ Statistics

Probability: a mathematical theory that describes uncertainty.
Statistics: a set of techniques for extracting useful information from data.
Interpretations of Probability: Frequency

The probability that the outcome of an experiment is A is prob(A) if the experiment is performed a large number of times and the fraction of times that the observed outcome is A is prob(A).
Interpretations of Probability: Parallel Universes

The probability that the outcome of an experiment is A is prob(A) if the experiment is performed in each parallel universe and the fraction of universes in which the observed outcome is A is prob(A).
Interpretations of Probability: Betting Odds

The probability that the outcome of an experiment is A is prob(A) = P(A) if, before the experiment is performed, a risk-neutral observer would be willing to bet $1 against more than $(1 − P(A))/P(A).
Interpretations of Probability: State of Belief

The probability that the outcome of an experiment is A is prob(A) if that is the opinion (i.e., belief or state of mind) of an observer before the experiment is performed.
Interpretations of Probability: Abstract Measure

The probability that the outcome of an experiment is A is prob(A) if prob(·) satisfies a certain set of axioms.
Interpretations of Probability: Abstract Measure

Axioms of probability

Let U be a set of samples. Let E1, E2, ... be subsets of U. Let ∅ be the null set (the set that has no elements).

0 ≤ prob(Ei) ≤ 1
prob(U) = 1
prob(∅) = 0
If Ei ∩ Ej = ∅, then prob(Ei ∪ Ej) = prob(Ei) + prob(Ej)
Probability Basics

Subsets of U are called events.
prob(E) is the probability of E.
Probability Basics

If ∪i Ei = U, and Ei ∩ Ej = ∅ for all i and j (i ≠ j), then

Σi prob(Ei) = 1

[Venn diagram: U partitioned into the sets Ei]
Probability Basics: Set Theory

Venn diagrams

[Venn diagram: event A and its complement Ā in the universal set U]

prob(Ā) = 1 − prob(A)
Probability Basics: Set Theory

Venn diagrams

[Venn diagrams: A ∪ B and A ∩ B in the universal set U]

prob(A ∪ B) = prob(A) + prob(B) − prob(A ∩ B)
Probability Basics: Independence

A and B are independent if

prob(A ∩ B) = prob(A) prob(B).
Probability Basics: Conditional Probability

prob(A|B) = prob(A ∩ B) / prob(B)

[Venn diagrams: A ∪ B and A ∩ B in the universal set U]

prob(A ∩ B) = prob(A|B) prob(B).
Probability Basics: Conditional Probability

Example: Throw a die.
A is the event of getting an odd number (1, 3, 5). B is the event of getting a number less than or equal to 3 (1, 2, 3).

Then prob(A) = prob(B) = 1/2 and
prob(A ∩ B) = prob({1, 3}) = 1/3.
Also, prob(A|B) = prob(A ∩ B)/prob(B) = 2/3.
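The die example can be verified by enumerating the six equally likely outcomes (a quick Python sketch; the sets A and B are as defined above):

```python
from fractions import Fraction

U = [1, 2, 3, 4, 5, 6]                  # sample space of one die throw
A = {w for w in U if w % 2 == 1}        # odd numbers: {1, 3, 5}
B = {w for w in U if w <= 3}            # {1, 2, 3}

def prob(E):
    # All outcomes are equally likely.
    return Fraction(len(E), len(U))

prob_A_given_B = prob(A & B) / prob(B)
print(prob(A & B), prob_A_given_B)      # 1/3 2/3
```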
Probability Basics: Conditional Probability

Note: prob(A|B) being large does not mean that B causes A. It only means that if B occurs it is probable that A also occurs. This could be due to A and B having similar causes.

Similarly, prob(A|B) being small does not mean that B prevents A.
Probability Basics: Law of Total Probability

[Venn diagrams: B = C ∪ D in U, with A overlapping both C and D]

Let B = C ∪ D and assume C ∩ D = ∅. We have

prob(A|C) = prob(A ∩ C) / prob(C) and prob(A|D) = prob(A ∩ D) / prob(D).

Also,

prob(C|B) = prob(C ∩ B) / prob(B) = prob(C) / prob(B) because C ∩ B = C.

Similarly, prob(D|B) = prob(D) / prob(B).
Probability Basics: Law of Total Probability

[Venn diagram: A and B in U]

A ∩ B = A ∩ (C ∪ D) = (A ∩ C) ∪ (A ∩ D)

Therefore, since A ∩ C and A ∩ D are disjoint,

prob(A ∩ B) = prob(A ∩ C) + prob(A ∩ D).
Probability Basics: Law of Total Probability

Or,

prob(A|B) prob(B) = prob(A|C) prob(C) + prob(A|D) prob(D),

so

prob(A|B) = prob(A|C) prob(C|B) + prob(A|D) prob(D|B).
Probability Basics: Law of Total Probability

An important case is when C ∪ D = B = U, so that A ∩ B = A. Then

prob(A) = prob(A ∩ C) + prob(A ∩ D) = prob(A|C) prob(C) + prob(A|D) prob(D).

[Venn diagram: D = C̄, with A overlapping both C and D]
Probability Basics: Law of Total Probability

More generally, if A and E1, ..., Ek are events and

Ei ∩ Ej = ∅ for all i ≠ j, and
∪j Ej = U (the universal set)

(i.e., the set of Ej sets is mutually exclusive and collectively exhaustive), then ...

[Venn diagram: A overlapping the partition {Ei} of U]
Probability Basics: Law of Total Probability

Σj prob(Ej) = 1

and

prob(A) = Σj prob(A|Ej) prob(Ej).
Probability Basics: Law of Total Probability

Some useful generalizations:

prob(A|B) = Σj prob(A|B and Ej) prob(Ej|B),

prob(A and B) = Σj prob(A|B and Ej) prob(Ej and B).
Probability Basics: Random Variables

Let V be a vector space. Then a random variable X is a mapping (a function) from U to V.

If ω ∈ U and x = X(ω) ∈ V, then X is a random variable.
Probability Basics: Random Variables

Flip of One Coin

Let U = {H, T}. Let ω = H if we flip a coin and get heads; ω = T if we flip a coin and get tails.

Let X(ω) be the number of times we get heads. Then X(ω) = 0 or 1.

prob(ω = T) = prob(X = 0) = 1/2
prob(ω = H) = prob(X = 1) = 1/2
Probability Basics: Random Variables

Flip of Three Coins

Let U = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}. Let ω = HHH if we flip 3 coins and get 3 heads; ω = HHT if we flip 3 coins and get 2 heads and then tails, etc. The order matters!

prob(ω) = 1/8 for all ω.

Let X be the number of heads. The order does not matter! Then X = 0, 1, 2, or 3.

prob(X = 0) = 1/8; prob(X = 1) = 3/8; prob(X = 2) = 3/8; prob(X = 3) = 1/8.
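These probabilities can be checked by enumerating all eight outcomes (a minimal Python sketch):

```python
from fractions import Fraction
from itertools import product

outcomes = list(product("HT", repeat=3))    # the 8 equally likely sample points

def P(x):
    # prob(X = x), where X is the number of heads
    hits = [w for w in outcomes if w.count("H") == x]
    return Fraction(len(hits), len(outcomes))

dist = {x: P(x) for x in range(4)}
print({x: str(p) for x, p in dist.items()})  # {0: '1/8', 1: '3/8', 2: '3/8', 3: '1/8'}
```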
Probability Basics: Random Variables

Probability Distributions. Let X(ω) be a random variable. Then prob(X(ω) = x) is the probability distribution of X (usually written P(x)). For three coin flips:

[Bar chart of P(x): P(0) = 1/8, P(1) = 3/8, P(2) = 3/8, P(3) = 1/8]
Dynamic Systems

t is the time index, a scalar. It can be discrete or continuous.

X(t) is the state. The state can be scalar or vector; discrete, continuous, or mixed; deterministic or random.

X is a stochastic process if X(t) is a random variable for every t.

The value of X is sometimes written explicitly as X(t, ω) or Xω(t).
Discrete Random VariablesBernoulli
Flip a biased coin. IfXB is Bernoulli, then there is a psuch thatprob(XB = 0) = p.prob(XB = 1) = 1 p.
Discrete Random Variables: Binomial

The sum of n independent Bernoulli random variables XBi with the same parameter p is a binomial random variable Xb:

Xb = XB1 + XB2 + ... + XBn

prob(Xb = x) = [n! / (x!(n − x)!)] p^x (1 − p)^(n−x)
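The pmf can be evaluated directly with math.comb; with n = 3 and p = 1/2 it reproduces the three-coin distribution above (a sketch, not part of the slides):

```python
from fractions import Fraction
from math import comb

def binomial_pmf(x, n, p):
    # n!/(x!(n-x)!) * p^x * (1-p)^(n-x)
    return comb(n, x) * p**x * (1 - p)**(n - x)

n, p = 3, Fraction(1, 2)
pmf = [binomial_pmf(x, n, p) for x in range(n + 1)]
print([str(q) for q in pmf])   # ['1/8', '3/8', '3/8', '1/8']
print(sum(pmf))                # 1
```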
Discrete Random Variables: Geometric

The number of independent Bernoulli random variables XBi tested until the first 0 appears is a geometric random variable Xg:

Xg = min {i : XBi = 0}

To calculate prob(Xg = t): For t = 1, we know prob(XB = 0) = p.

Therefore prob(Xg > 1) = 1 − p.
Discrete Random Variables: Geometric

For t > 1,

prob(Xg > t) = prob(Xg > t | Xg > t − 1) prob(Xg > t − 1) = (1 − p) prob(Xg > t − 1),

so

prob(Xg > t) = (1 − p)^t

and

prob(Xg = t) = (1 − p)^(t−1) p.
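Both formulas can be checked numerically: summing the pmf reproduces the tail (1 − p)^t, and the truncated mean approaches 1/p (a sketch with an arbitrary p = 0.1):

```python
p = 0.1

def pmf(t):
    # prob(Xg = t) = (1 - p)^(t-1) * p for t = 1, 2, ...
    return (1 - p) ** (t - 1) * p

# prob(Xg > t) computed from the pmf matches (1 - p)^t.
for t in range(1, 20):
    tail = 1 - sum(pmf(s) for s in range(1, t + 1))
    assert abs(tail - (1 - p) ** t) < 1e-12

# The expected value sum_t t*pmf(t) converges to 1/p = 10.
mean = sum(t * pmf(t) for t in range(1, 2000))
print(mean)
```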
Discrete Random Variables: Geometric

Alternative view

[Transition diagram: state 1 → state 0 with probability p; self-loop 1 − p at state 1, self-loop 1 at state 0]

Consider a two-state system. The system can go from 1 to 0, but not from 0 to 1.

Let p be the conditional probability that the system is in state 0 at time t + 1, given that it is in state 1 at time t. That is,

p = prob[α(t + 1) = 0 | α(t) = 1].
Discrete Random Variables: Geometric

Let p(α, t) be the probability of the system being in state α at time t. Then, since

p(0, t + 1) = prob[α(t + 1) = 0 | α(t) = 1] prob[α(t) = 1] + prob[α(t + 1) = 0 | α(t) = 0] prob[α(t) = 0]

(Why?) we have

p(0, t + 1) = p p(1, t) + p(0, t),
p(1, t + 1) = (1 − p) p(1, t),

and the normalization equation

p(1, t) + p(0, t) = 1.
Discrete Random Variables: Geometric

Assume that p(1, 0) = 1. Then the solution is

p(0, t) = 1 − (1 − p)^t,
p(1, t) = (1 − p)^t.
Discrete Random Variables: Geometric

Geometric Distribution

[Plot: p(0, t) rising from 0 toward 1 and p(1, t) decaying from 1 toward 0, for t = 0 to 30]
Discrete Random Variables: Geometric

Recall that once the system makes the transition from 1 to 0 it can never go back. The probability that the transition takes place at time t is

prob[α(t) = 0 and α(t − 1) = 1] = (1 − p)^(t−1) p.

The time of the transition from 1 to 0 is said to be geometrically distributed with parameter p. The expected transition time is 1/p. (Prove it!)

Note: If the transition represents a machine failure, then 1/p is the Mean Time to Fail (MTTF). The Mean Time to Repair (MTTR) is similarly calculated.
Discrete Random Variables: Geometric

Memorylessness: if T is the transition time,

prob(T > t + x | T > x) = prob(T > t).
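Memorylessness follows from prob(T > t) = (1 − p)^t and can be verified exactly with rational arithmetic (a minimal sketch with an arbitrary p = 1/4):

```python
from fractions import Fraction

p = Fraction(1, 4)

def tail(t):
    # prob(T > t) for a geometric transition time with parameter p
    return (1 - p) ** t

# prob(T > t + x | T > x) = tail(t + x)/tail(x) equals tail(t) exactly.
checks = [tail(t + x) / tail(x) == tail(t)
          for t in range(10) for x in range(10)]
print(all(checks))  # True
```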
Digression: Difference Equations
Definition

A difference equation is an equation of the form

x(t + 1) = f(x(t), t)

where t is an integer and x(t) is a real or complex vector.

To determine x(t), we must also specify additional information, for example initial conditions:

x(0) = c

Difference equations are similar to differential equations. They are easier to solve numerically because we can iterate the equation to determine x(1), x(2), .... In fact, numerical solutions of differential equations are often obtained by approximating them as difference equations.
Digression: Difference Equations
Special Case

A linear difference equation with constant coefficients is one of the form

x(t + 1) = A x(t)

where A is a square matrix of appropriate dimension.

Solution: x(t) = A^t c

However, this form of the solution is not always convenient.
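The iterated form and the eigenvalue form of the solution can be compared on a small example (a sketch; the 2×2 matrix below is hypothetical, chosen so its eigenvalues are 1 and 0.3):

```python
# x(t+1) = A x(t), x(0) = c
A = [[0.5, 0.2],
     [0.5, 0.8]]
c = [1.0, 0.0]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def iterate(t):
    # x(t) = A^t c by direct iteration
    x = c[:]
    for _ in range(t):
        x = matvec(A, x)
    return x

def closed(t):
    # x(t) = b1*l1^t + b2*l2^t with l1 = 1, l2 = 0.3,
    # b1 = (2/7, 5/7) (eigenvector for l1 = 1), b2 = c - b1
    b1 = [2 / 7, 5 / 7]
    b2 = [c[0] - b1[0], c[1] - b1[1]]
    return [b1[i] * 1.0 ** t + b2[i] * 0.3 ** t for i in range(2)]

x10, y10 = iterate(10), closed(10)
print(x10, y10)
```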
Digression: Difference Equations
Special Case

We can also write

x(t) = b1 λ1^t + b2 λ2^t + ... + bk λk^t

where k is the dimensionality of x, λ1, λ2, ..., λk are scalars, and b1, b2, ..., bk are vectors. The bj satisfy

c = b1 + b2 + ... + bk.

λ1, λ2, ..., λk are the eigenvalues of A and b1, b2, ..., bk are its eigenvectors, but we don't always have to use that explicitly to determine them. This is very similar to the solution of linear differential equations with constant coefficients.
Digression: Difference Equations
Special Case

The typical solution technique is to guess a solution of the form

x(t) = b λ^t

and plug it into the difference equation. We find that λ must satisfy a kth-order polynomial, which gives us the λ's. We also find that b must satisfy a set of linear equations which depends on λ.

Examples and variations will follow.
Markov processes

A Markov process is a stochastic process in which the probability of finding X at some value at time t + δt depends only on the value of X at time t.

Or, let x(s), s ≤ t, be the history of the values of X before time t and let A be a set of possible values of X(t + δt). Then

prob{X(t + δt) ∈ A | X(s) = x(s), s ≤ t} = prob{X(t + δt) ∈ A | X(t) = x(t)}.

In words: if we know what X was at time t, we don't gain any more useful information about X(t + δt) by also knowing what X was at any time earlier than t.
Markov processes: States and transitions
Discrete state, discrete time

States can be numbered 0, 1, 2, 3, ... (or with multiple indices if that is more convenient).

Time can be numbered 0, 1, 2, 3, ... (or 0, Δ, 2Δ, 3Δ, ... if more convenient).

The probability of a transition from j to i in one time unit is often written Pij, where

Pij = prob{X(t + 1) = i | X(t) = j}
Markov processes: States and transitions
Discrete state, discrete time

Transition graph

[Transition graph: states 1 through 7 with example transition probabilities P14, P24, P45, P64]

Pij is a probability. Note that Pii = 1 − Σ_{m, m ≠ i} Pmi.
Markov processes: States and transitions
Discrete state, discrete time

Define pi(t) = prob{X(t) = i}. {pi(t) for all i} is the probability distribution at time t.

Transition equations: pi(t + 1) = Σj Pij pj(t).

Initial condition: pi(0) specified. For example, if we observe that the system is in state j at time 0, then pj(0) = 1 and pi(0) = 0 for all i ≠ j.

Let the current time be 0. The probability distribution at time t > 0 describes our state of knowledge at time 0 about what state the system will be in at time t.

Normalization equation: Σi pi(t) = 1.
Markov processes: States and transitions
Discrete state, discrete time

Steady state: pi = lim_{t→∞} pi(t), if it exists.

Steady-state transition equations: pi = Σj Pij pj.

Steady-state probability distribution: a very important concept, but different from the usual concept of steady state. The system does not stop changing or approach a limit; the probability distribution stops changing and approaches a limit.
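Iterating the transition equations shows the distribution settling to a limit while the system itself keeps jumping between states (a sketch with a hypothetical 3-state chain; each column of P sums to 1, with Pij the probability of a j → i transition):

```python
P = [[0.9, 0.3, 0.0],
     [0.1, 0.5, 0.4],
     [0.0, 0.2, 0.6]]

p = [1.0, 0.0, 0.0]   # pi(0): system observed in state 0 at time 0
for _ in range(2000):
    # pi(t+1) = sum_j Pij pj(t)
    p = [sum(P[i][j] * p[j] for j in range(3)) for i in range(3)]

# The limit satisfies the steady-state equations pi = sum_j Pij pj.
residual = max(abs(p[i] - sum(P[i][j] * p[j] for j in range(3)))
               for i in range(3))
print(p, residual)
```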
Markov processes: States and transitions
Discrete state, discrete time

Steady-state probability distribution: Consider a typical (?) Markov process. Look at a system at time 0. Pick a state. Any state.

The probability of the system being in that state at time 1 is very different from the probability of it being in that state at time 2, which is very different from it being in that state at time 3.

The probability of the system being in that state at time 1000 is very close to the probability of it being in that state at time 1001, which is very close to the probability of it being in that state at time 2000.

Then, the system has reached steady state at time 1000.
Markov processes: States and transitions
Discrete state, discrete time

Transition equations are valid for steady-state and non-steady-state conditions.

(Self-loops suppressed for clarity.)
Markov processes: States and transitions
Discrete state, discrete time

Balance equations (steady-state only). Probability of leaving node i = probability of entering node i:

pi Σ_{m, m ≠ i} Pmi = Σ_{j, j ≠ i} Pij pj

(Prove it!)
Markov processes: Unreliable machine

1 = up; 0 = down.

[Transition diagram: state 1 → state 0 with probability p; state 0 → state 1 with probability r]
Markov processes: Unreliable machine

The probability distribution satisfies

p(0, t + 1) = p(0, t)(1 − r) + p(1, t) p,
p(1, t + 1) = p(0, t) r + p(1, t)(1 − p).
Markov processes: Unreliable machine

Solution. Guess

p(0, t) = a(0) X^t,
p(1, t) = a(1) X^t.

Then

a(0) X^(t+1) = a(0) X^t (1 − r) + a(1) X^t p,
a(1) X^(t+1) = a(0) X^t r + a(1) X^t (1 − p).
Markov processes: Unreliable machine

Solution. Or,

a(0) X = a(0)(1 − r) + a(1) p,
a(1) X = a(0) r + a(1)(1 − p),

or,

X = 1 − r + [a(1)/a(0)] p,
X = [a(0)/a(1)] r + 1 − p,

so

X = 1 − r + rp/(X − 1 + p),

or,

(X − 1 + r)(X − 1 + p) = rp.
Markov processes: Unreliable machine

Solution. Two solutions:

X = 1 and X = 1 − r − p.

If X = 1, a(1)/a(0) = r/p. If X = 1 − r − p, a(1)/a(0) = −1. Therefore

p(0, t) = a1(0) X1^t + a2(0) X2^t = a1(0) + a2(0)(1 − r − p)^t,
p(1, t) = a1(1) X1^t + a2(1) X2^t = a1(0)(r/p) − a2(0)(1 − r − p)^t.
Markov processes: Unreliable machine

Solution. To determine a1(0) and a2(0), note that

p(0, 0) = a1(0) + a2(0),
p(1, 0) = a1(0)(r/p) − a2(0).

Therefore

p(0, 0) + p(1, 0) = 1 = a1(0) + a1(0)(r/p) = a1(0)(r + p)/p,

so

a1(0) = p/(r + p) and a2(0) = p(0, 0) − p/(r + p).
Markov processes: Unreliable machine

Solution. After more simplification and some beautification,

p(0, t) = p(0, 0)(1 − p − r)^t + [p/(r + p)][1 − (1 − p − r)^t],

p(1, t) = p(1, 0)(1 − p − r)^t + [r/(r + p)][1 − (1 − p − r)^t].
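This closed form can be checked against direct iteration of the transition equations (a sketch with arbitrary values p = 0.01, r = 0.1, starting from p(1, 0) = 1):

```python
p_fail, r = 0.01, 0.1
p00, p10 = 0.0, 1.0     # p(0,0) and p(1,0): machine starts up

def closed_form(t):
    x = (1 - p_fail - r) ** t
    return (p00 * x + (p_fail / (r + p_fail)) * (1 - x),
            p10 * x + (r / (r + p_fail)) * (1 - x))

q0, q1 = p00, p10
ok = True
for t in range(200):
    c0, c1 = closed_form(t)
    ok = ok and abs(q0 - c0) < 1e-10 and abs(q1 - c1) < 1e-10
    # p(0,t+1) = p(0,t)(1-r) + p(1,t)p ; p(1,t+1) = p(0,t)r + p(1,t)(1-p)
    q0, q1 = q0 * (1 - r) + q1 * p_fail, q0 * r + q1 * (1 - p_fail)
print(ok)  # True
```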
Markov processes: Unreliable machine

Solution

[Plot: discrete-time unreliable machine, p(0, t) and p(1, t) versus t for t = 0 to 100, converging to their steady-state values]
Markov processes: Unreliable machine

Steady-state solution. As t → ∞,

p(0) → p/(r + p),
p(1) → r/(r + p),

which is the solution of

p(0) = p(0)(1 − r) + p(1) p,
p(1) = p(0) r + p(1)(1 − p).
Markov processes: Unreliable machine

Steady-state solution. If the machine makes one part per time unit when it is operational, the average production rate is

p(1) = r/(r + p) = 1/(1 + p/r).
Markov processes: States and Transitions
Classification of states

A chain is irreducible if and only if each state can be reached from each other state.

Let fij be the probability that, if the system is in state j, it will at some later time be in state i. State i is transient if fii < 1. If a steady-state distribution exists, and i is a transient state, its steady-state probability is 0.
Markov processes: States and Transitions
Classification of states

The states can be uniquely divided into sets T, C1, ..., Cn such that T is the set of all transient states, fij = 1 for i and j in the same set Cm, and fij = 0 for i in some set Cm and j not in that set. If there is only one set C, the chain is irreducible. The sets Cm are called final classes or absorbing classes, and T is the transient class.

Transient states cannot be reached from any other states except possibly other transient states. If state i is in T, there is no state j in any set Cm such that there is a sequence of possible transitions (transitions with nonzero probability) from j to i.
Markov processes: States and Transitions
Classification of states

[Diagram: transient class T with arrows into final classes C1 through C7]
Markov processes: States and Transitions
Discrete state, continuous time

States can be numbered 0, 1, 2, 3, ... (or with multiple indices if that is more convenient).

Time is a real number, defined on (−∞, ∞) or a smaller interval.

The probability of a transition from j to i during [t, t + δt] is approximately λij δt, where δt is small, and

λij δt = prob{X(t + δt) = i | X(t) = j} + o(δt) for j ≠ i.
Markov processes: States and Transitions
Discrete state, continuous time

Transition graph (no self-loops!!!!)

[Transition graph: states 1 through 7 with example transition rates λ14, λ24, λ45, λ64]

λij is a probability rate. λij δt is a probability.
Markov processes: States and Transitions
Discrete state, continuous time

Define pi(t) = prob{X(t) = i}.

It is convenient to define λii = −Σ_{j ≠ i} λji.

Transition equations: dpi(t)/dt = Σj λij pj(t).

Normalization equation: Σi pi(t) = 1.
Markov processes: States and Transitions
Discrete state, continuous time

Steady state: pi = lim_{t→∞} pi(t), if it exists.

Steady-state transition equations: 0 = Σj λij pj.

Steady-state balance equations: pi Σ_{m, m ≠ i} λmi = Σ_{j, j ≠ i} λij pj.

Normalization equation: Σi pi = 1.
Markov processes: States and Transitions
Discrete state, continuous time

Sources of confusion in continuous-time models:

Never draw self-loops in continuous-time Markov process graphs.

Never write 1 − λ14 − λ24 − λ64. Write 1 − (λ14 + λ24 + λ64)δt, or −(λ14 + λ24 + λ64).

λii = −Σ_{j ≠ i} λji is NOT a probability rate and NOT a probability. It is ONLY a convenient notation.
Markov processes: Exponential

Exponential random variable: the time to move from state 1 to state 0.

[Transition diagram: state 1 → state 0 with rate p]

p δt = prob[α(t + δt) = 0 | α(t) = 1] + o(δt).
Markov processes: Exponential

p(0, t + δt) = prob[α(t + δt) = 0 | α(t) = 1] prob[α(t) = 1] + prob[α(t + δt) = 0 | α(t) = 0] prob[α(t) = 0],

or

p(0, t + δt) = p δt p(1, t) + p(0, t) + o(δt),

or

dp(0, t)/dt = p p(1, t).
Markov processes: Exponential

Since p(0, t) + p(1, t) = 1,

dp(1, t)/dt = −p p(1, t).

If p(1, 0) = 1, then

p(1, t) = e^(−pt) and p(0, t) = 1 − e^(−pt).
Markov processes: Exponential

Density function

The probability that the transition takes place in [t, t + δt] is

prob[α(t + δt) = 0 and α(t) = 1] = e^(−pt) p δt.

The exponential density function is p e^(−pt). The time of the transition from 1 to 0 is said to be exponentially distributed with rate p. The expected transition time is 1/p. (Prove it!)
Markov processes: Exponential

Density function

f(t) = p e^(−pt) for t ≥ 0; f(t) = 0 otherwise.
F(t) = 1 − e^(−pt) for t ≥ 0; F(t) = 0 otherwise.

ET = 1/p, VT = 1/p². Therefore, cv = 1.

[Plots: F(t) and f(t) versus t, with t = 1/p marked on each]
Markov processes: Exponential

Density function

Memorylessness: prob(T > t + x | T > x) = prob(T > t).

prob(t ≤ T ≤ t + δt) ≈ p δt for small δt.

If T1, ..., Tn are exponentially distributed random variables with parameters p1, ..., pn and T = min(T1, ..., Tn), then T is an exponentially distributed random variable with parameter p = p1 + ... + pn.
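The minimum-of-exponentials property can be checked by simulation (a sketch with arbitrary rates; random.expovariate draws exponential samples given the rate):

```python
import random

random.seed(0)
rates = [0.5, 1.0, 1.5]          # p1, p2, p3
n = 200_000

# E[min(T1, T2, T3)] should be close to 1/(p1 + p2 + p3) = 1/3.
total = 0.0
for _ in range(n):
    total += min(random.expovariate(rate) for rate in rates)
mean = total / n
print(mean)
```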
Markov processes: Exponential

Density function

[Figure: exponential density function and a small number of actual samples]
Markov processes: Unreliable machine
Continuous time

[Transition diagram: state 1 → state 0 with rate p; state 0 → state 1 with rate r]
Markov processes: Unreliable machine

Continuous time. The probability distribution satisfies

p(0, t + δt) = p(0, t)(1 − r δt) + p(1, t) p δt + o(δt),
p(1, t + δt) = p(0, t) r δt + p(1, t)(1 − p δt) + o(δt),

or

dp(0, t)/dt = −p(0, t) r + p(1, t) p,
dp(1, t)/dt = p(0, t) r − p(1, t) p.
Markov processes: Unreliable machine

Solution

p(0, t) = p/(r + p) + [p(0, 0) − p/(r + p)] e^(−(r+p)t),

p(1, t) = 1 − p(0, t).

As t → ∞,

p(0) → p/(r + p),
p(1) → r/(r + p).
Markov processes: Unreliable machine

Steady-state solution. If the machine makes μ parts per time unit on the average when it is operational, the overall average production rate is

μ p(1) = μ r/(r + p) = μ/(1 + p/r).
Markov processes: The M/M/1 Queue

The simplest model is the M/M/1 queue:

Exponentially distributed inter-arrival times; the mean is 1/λ, where λ is the arrival rate (customers/time). (Poisson arrival process.)

Exponentially distributed service times; the mean is 1/μ, where μ is the service rate (customers/time).

1 server.

Infinite waiting area.
Markov processes: The M/M/1 Queue

Exponential arrivals: If a part arrives at time s, the probability that the next part arrives during the interval [s + t, s + t + δt] is e^(−λt) λ δt + o(δt) ≈ λ δt. λ is the arrival rate.

Exponential service: If an operation is completed at time s and the buffer is not empty, the probability that the next operation is completed during the interval [s + t, s + t + δt] is e^(−μt) μ δt + o(δt) ≈ μ δt. μ is the service rate.
Markov processes: The M/M/1 Queue

Sample path: number of customers in the system as a function of time.

[Plot: n versus t, a step function that increases by 1 at each arrival and decreases by 1 at each service completion]
Markov processes: The M/M/1 Queue

State Space

[Transition chain: states 0, 1, 2, ..., n − 1, n, n + 1, ..., with rate λ from n to n + 1 and rate μ from n + 1 to n]
Markov processes: The M/M/1 Queue

Performance Evaluation. Let p(n, t) be the probability that there are n parts in the system at time t. Then,

p(n, t + δt) = p(n − 1, t) λ δt + p(n + 1, t) μ δt + p(n, t)(1 − (λ δt + μ δt)) + o(δt) for n > 0

and

p(0, t + δt) = p(1, t) μ δt + p(0, t)(1 − λ δt) + o(δt).
Performance Evaluation

Or,

  dp(n, t)/dt = λp(n − 1, t) + μp(n + 1, t) − (λ + μ)p(n, t),  n > 0,
  dp(0, t)/dt = μp(1, t) − λp(0, t).

If a steady-state distribution exists, it satisfies

  0 = λp(n − 1) + μp(n + 1) − (λ + μ)p(n),  n > 0,
  0 = μp(1) − λp(0).

Why "if"?
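The steady-state balance equations can also be solved directly. The sketch below (my own check, not from the slides; λ, μ, and the truncation level N are illustrative) solves them on a chain truncated at N customers and compares the result with the geometric distribution derived on the next slide:

```python
import numpy as np

def mm1_truncated(lam, mu, N):
    """Stationary distribution of the M/M/1 birth-death chain truncated at N."""
    Q = np.zeros((N + 1, N + 1))
    for n in range(N + 1):
        if n < N:
            Q[n, n + 1] = lam          # arrival: n -> n+1
        if n > 0:
            Q[n, n - 1] = mu           # service completion: n -> n-1
        Q[n, n] = -Q[n].sum()          # diagonal makes rows sum to zero
    # Solve pi Q = 0 with sum(pi) = 1: replace one balance equation
    # by the normalization equation.
    A = Q.T.copy()
    A[-1, :] = 1.0
    b = np.zeros(N + 1)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

lam, mu, N = 0.5, 1.0, 60              # illustrative values
pi = mm1_truncated(lam, mu, N)
rho = lam / mu
geometric = (1 - rho) * rho ** np.arange(N + 1)
print(np.abs(pi - geometric).max())    # negligible for N this large
```

Because the truncated chain is a birth-death chain, its stationary distribution is still proportional to ρⁿ, so it agrees with (1 − ρ)ρⁿ up to a tail of order ρ^(N+1).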
Performance Evaluation

Let ρ = λ/μ. These equations are satisfied by

  p(n) = (1 − ρ)ρⁿ,  n ≥ 0,

if ρ < 1. The average number of parts in the system is

  n̄ = Σₙ n p(n) = ρ/(1 − ρ) = λ/(μ − λ).

From Little's law, the average delay experienced by a part is

  W = n̄/λ = 1/(μ − λ).
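These closed forms are easy to sanity-check (a quick sketch of mine; λ and μ below are illustrative):

```python
lam, mu = 0.8, 1.0                     # illustrative rates with rho < 1
rho = lam / mu

# Truncated version of nbar = sum over n of n * p(n); the tail is negligible.
nbar = sum(n * (1 - rho) * rho ** n for n in range(10_000))

print(nbar)                # ~ rho/(1-rho) = lam/(mu-lam) = 4.0
print(nbar / lam)          # Little's law: W = nbar/lam = 1/(mu-lam) = 5.0
```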
Performance Evaluation

(Figure: "Delay in an M/M/1 Queue" — the delay W as a function of the arrival rate λ for 0 ≤ λ ≤ 1; the delay grows without bound as λ approaches μ.)

Define the utilization ρ = λ/μ.

What happens if ρ > 1?
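The delay curve can be reproduced by simulation. The sketch below (my own, not from the slides; the parameter values and customer count are illustrative) estimates the mean time in system with the Lindley recursion for the waiting time, and compares it with 1/(μ − λ):

```python
import random

def mm1_mean_time_in_system(lam, mu, n_customers=200_000, seed=1):
    """Estimate the mean time a customer spends in an M/M/1 system.

    Uses the Lindley recursion W(k+1) = max(0, W(k) + S(k) - A(k+1)),
    where S(k) is a service time and A(k+1) an interarrival time.
    """
    rng = random.Random(seed)
    wait = 0.0            # waiting time of the current customer
    total = 0.0           # accumulated time in system over all customers
    for _ in range(n_customers):
        service = rng.expovariate(mu)
        total += wait + service
        interarrival = rng.expovariate(lam)
        wait = max(0.0, wait + service - interarrival)
    return total / n_customers

lam, mu = 0.5, 1.0                       # illustrative: rho = 0.5
print(mm1_mean_time_in_system(lam, mu))  # close to 1/(mu - lam) = 2.0
```

Sweeping `lam` toward `mu` reproduces the blow-up in the figure.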
Performance Evaluation

(Figure: W as a function of λ for μ = 1 and μ = 2.)

To increase capacity, increase μ. To decrease delay for a given λ, increase μ.
Other Single-Stage Models

Things get more complicated when:
- There are multiple servers.
- There is finite space for queueing.
- The arrival process is not Poisson.
- The service process is not exponential.

Closed formulas and approximations exist for some cases.
Continuous random variables: Philosophical issues
1. Mathematically, continuous and discrete random variables are very different.
2. Quantitatively, however, some continuous models are very close to some discrete models.
3. Therefore, which kind of model to use for a given system is a matter of convenience.

Example: The production process for small metal parts (nuts, bolts, washers, etc.) might better be modeled as a continuous flow than as a large number of discrete parts.
Continuous random variables: Probability density
(Figure: a scatter of sample points showing regions of low density and high density.)

The probability of a two-dimensional random variable being in a small square is the probability density times the area of the square. (Actually, it is more general than this.)
Continuous random variables: Spaces
Continuous random variables can be defined:
- in one-, two-, three-, ..., infinite-dimensional spaces;
- in finite or infinite regions of the spaces.

Continuous random variables can have probability measures with:
- the same dimensionality as the space;
- lower dimensionality than the space;
- a mix of dimensions.
Continuous random variables: Dimensionality
(Figure: a three-machine, two-buffer transfer line M1 → B1 → M2 → B2 → M3, and the two-dimensional state space of the buffer levels (x1, x2).)
Dimensionality

(Figure: the (x1, x2) state space of the two buffers of the line M1 → B1 → M2 → B2 → M3, showing a two-dimensional density in the interior, one-dimensional densities on the boundary edges, and zero-dimensional densities (masses) at the corners.)

Probability distribution of the amount of material in each of the two buffers.
Discrete approximation

(Figure: a discretized version of the same (x1, x2) distribution for the line M1 → B1 → M2 → B2 → M3.)

Probability distribution of the amount of material in each of the two buffers.
Continuous random variables: Example
Problem

Production surplus from an unreliable machine.

(Figure: the two-state machine; in state 1 the production rate is μ, in state 0 it is 0; failures occur at rate p and repairs at rate r.)

Demand rate d < μ r/(r + p). (Why?)

Problem: producing more than has been demanded creates inventory and is wasteful. Producing less reduces revenue or customer goodwill. How can we anticipate and respond to random failures to mitigate these effects?
Solution

We propose a production policy. Later we show that it is a solution to an optimization problem.

Model: the production rate u(t) satisfies 0 ≤ u(t) ≤ μ when the machine is up (α = 1), and u(t) = 0 when it is down (α = 0).
Solution

Surplus, or inventory/backlog:

  dx(t)/dt = u(t) − d.

Production policy: choose Z (the hedging point). Then:
- if α = 1: if x < Z, u = μ; if x = Z, u = d; if x > Z, u = 0;
- if α = 0: u = 0.

(Figure: cumulative production and demand versus time; cumulative demand is the line d t, the policy keeps cumulative production near the hedging line d t + Z, and the surplus x(t) is the gap between the two curves.)

How do we choose Z?
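Before analyzing the policy, it helps to see it run. The following simulation sketch (mine, not from the slides; all parameter values are illustrative) alternates exponentially distributed up and down periods and applies the hedging-point rule stated above:

```python
import random

def simulate_hedging(mu, d, r, p, Z, T=100_000.0, seed=2):
    """Time-average surplus x under the hedging-point policy.

    While up: x rises at rate mu - d until it reaches Z, then holds at Z.
    While down: x falls at rate d.
    Up periods ~ Exp(p); down periods ~ Exp(r).
    """
    rng = random.Random(seed)
    x, t, integral = 0.0, 0.0, 0.0
    up = True
    while t < T:
        dur = rng.expovariate(p if up else r)
        if up:
            # time spent rising toward Z at rate mu - d, then holding at Z
            t_rise = min(dur, max(0.0, (Z - x) / (mu - d)))
            integral += x * t_rise + 0.5 * (mu - d) * t_rise ** 2
            x += (mu - d) * t_rise
            integral += x * (dur - t_rise)
        else:
            # x declines linearly at rate d for the whole down period
            integral += x * dur - 0.5 * d * dur ** 2
            x -= d * dur
        t += dur
        up = not up
    return integral / t

avg_x = simulate_hedging(mu=1.0, d=0.7, r=0.1, p=0.01, Z=10.0)
print(avg_x)   # stays below Z: the surplus hovers near the hedging point
```

Raising Z shifts the whole trajectory upward, which previews the shift property derived at the end of this section.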
Mathematical model

Definitions: f(x, α, t) is a probability density function:

  f(x, α, t)δx = prob(x ≤ X(t) ≤ x + δx and the machine state is α at time t).

prob(Z, α, t) is a probability mass:

  prob(Z, α, t) = prob(x = Z and the machine state is α at time t).

Note that x > Z is transient.
Mathematical model

State Space:

(Figure: the half-line x ≤ Z for each machine state. When α = 1 and x < Z, dx/dt = μ − d; when α = 0, dx/dt = −d; at x = Z with α = 1, the state holds at x = Z.)
Mathematical model

Transitions to α = 1, [x, x + δx]; x < Z:

(Figure: the state can reach (x, 1) either by repair from α = 0, or by remaining up with no failure while x drifts upward.)
Mathematical model

Transitions to α = 0, [x, x + δx]; x < Z:

(Figure: the state can reach (x, 0) either by failure from α = 1, or by remaining down with no repair while x drifts downward.)
Mathematical model

Transitions to α = 1, [x, x + δx]; x < Z:

(Figure: over [t, t + δt], a state at x(t) = x + dδt with α = 0 reaches x at t + δt if a repair occurs; a state at x(t) = x − (μ − d)δt with α = 1 reaches x if no failure occurs.)

  f(x, 1, t + δt)δx = [f(x + dδt, 0, t)δx] rδt + [f(x − (μ − d)δt, 1, t)δx](1 − pδt) + o(δt)o(δx).
Mathematical model

Or,

  f(x, 1, t + δt) = f(x + dδt, 0, t) rδt + f(x − (μ − d)δt, 1, t)(1 − pδt) + o(δt)o(δx)/δx.

In steady state,

  f(x, 1) = f(x + dδt, 0) rδt + f(x − (μ − d)δt, 1)(1 − pδt) + o(δt)o(δx)/δx.
Mathematical model

Expand in a Taylor series:

  f(x, 1) = [f(x, 0) + (df(x, 0)/dx) dδt] rδt
      + [f(x, 1) − (df(x, 1)/dx)(μ − d)δt](1 − pδt)
      + o(δt)o(δx)/δx.
Mathematical model

Multiply out:

  f(x, 1) = f(x, 0) rδt + (df(x, 0)/dx)(d)(r)δt²
      + f(x, 1) − (df(x, 1)/dx)(μ − d)δt
      − f(x, 1) pδt + (df(x, 1)/dx)(μ − d) pδt²
      + o(δt)o(δx)/δx.
Mathematical model

Subtract f(x, 1) from both sides and move one of the terms:

  (df(x, 1)/dx)(μ − d)δt = o(δt)o(δx)/δx
      + f(x, 0) rδt + (df(x, 0)/dx)(d)(r)δt²
      − f(x, 1) pδt + (df(x, 1)/dx)(μ − d) pδt².
Mathematical model

Divide through by δt:

  (df(x, 1)/dx)(μ − d) = o(δt)o(δx)/(δt δx)
      + f(x, 0) r + (df(x, 0)/dx)(d)(r)δt
      − f(x, 1) p + (df(x, 1)/dx)(μ − d) pδt.
Mathematical model

Take the limit as δt → 0:

  (df(x, 1)/dx)(μ − d) = f(x, 0) r − f(x, 1) p.
Transitions to α = 0, [x, x + δx]; x < Z:

(Figure: a state at x(t) = x + dδt with α = 0 reaches x at t + δt if there is no repair; a state at x(t) = x − (μ − d)δt with α = 1 reaches x if a failure occurs.)

  f(x, 0, t + δt)δx = [f(x + dδt, 0, t)δx](1 − rδt) + [f(x − (μ − d)δt, 1, t)δx] pδt + o(δt)o(δx).
Mathematical model

By following essentially the same steps as for the transitions to α = 1, [x, x + δx]; x < Z, we have

  (df(x, 0)/dx) d = f(x, 0) r − f(x, 1) p.

Note:

  (df(x, 1)/dx)(μ − d) = (df(x, 0)/dx) d.

Why?
Mathematical model

Transitions to α = 1, x = Z:

(Figure: with probability 1 − pδt the machine stays up; mass already at (Z, 1) and mass in (Z − (μ − d)δt, Z) with α = 1 both end at (Z, 1) if no failure occurs.)

  P(Z, 1) = P(Z, 1)(1 − pδt) + prob(Z − (μ − d)δt < X < Z, α = 1)(1 − pδt) + o(δt).
Mathematical model

Or,

  P(Z, 1) = P(Z, 1) − P(Z, 1) pδt + f(Z − (μ − d)δt, 1)(μ − d)δt(1 − pδt) + o(δt),

or,

  P(Z, 1) pδt = o(δt) + [f(Z, 1) − (df(Z, 1)/dx)(μ − d)δt](μ − d)δt(1 − pδt),
Mathematical model

Or,

  P(Z, 1) pδt = f(Z, 1)(μ − d)δt + o(δt),

or,

  P(Z, 1) p = f(Z, 1)(μ − d).
Mathematical model

(Figure: the state space again; when α = 1 and x < Z, dx/dt = μ − d; when α = 0, dx/dt = −d.)

P(Z, 0) = 0. Why?
Mathematical model

Transitions to α = 0, Z − dδt < x < Z:

(Figure: mass at (Z, 1) fails with probability pδt and is not repaired during δt, landing just below Z with α = 0.)

  prob(Z − dδt < X < Z, α = 0) = f(Z, 0) dδt + o(δt) = P(Z, 1) pδt + o(δt).
Mathematical model

Or,

  f(Z, 0) d = P(Z, 1) p = f(Z, 1)(μ − d).
Mathematical model

Collecting the equations:

  (df(x, 0)/dx) d = f(x, 0) r − f(x, 1) p
  (df(x, 1)/dx)(μ − d) = f(x, 0) r − f(x, 1) p
  f(Z, 1)(μ − d) = f(Z, 0) d
  0 = −p P(Z, 1) + f(Z, 1)(μ − d)
  1 = P(Z, 1) + ∫ from −∞ to Z of [f(x, 0) + f(x, 1)] dx.
Solution

Solution of the equations:

  f(x, 0) = A e^(bx)
  f(x, 1) = A (d/(μ − d)) e^(bx)
  P(Z, 1) = A (d/p) e^(bZ)
  P(Z, 0) = 0

where

  b = r/d − p/(μ − d)

and A is chosen so that the normalization is satisfied.
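The solution can be verified numerically. The check below (mine; the parameter values are illustrative) confirms that these expressions satisfy the two differential equations, the boundary conditions, and the normalization:

```python
import math

mu, d, r, p, Z = 1.0, 0.7, 0.1, 0.01, 10.0   # illustrative parameters
b = r / d - p / (mu - d)

# Normalization: 1 = P(Z,1) + integral over (-inf, Z] of f(x,0) + f(x,1).
mass = math.exp(b * Z) * (d / p + (1 + d / (mu - d)) / b)
A = 1.0 / mass

def f0(x):
    return A * math.exp(b * x)

def f1(x):
    return A * (d / (mu - d)) * math.exp(b * x)

PZ1 = A * (d / p) * math.exp(b * Z)

x = 3.0
rhs = f0(x) * r - f1(x) * p
print(d * b * f0(x) - rhs)             # ODE for f(x, 0): ~0
print((mu - d) * b * f1(x) - rhs)      # ODE for f(x, 1): ~0
print(f1(Z) * (mu - d) - f0(Z) * d)    # boundary condition: ~0
print(p * PZ1 - f1(Z) * (mu - d))      # boundary condition: ~0
```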
Solution

(Figure: "Density Function -- Controlled Machine" — the density rises exponentially in x up to the hedging point Z, where there is a probability mass.)
Observations

1. Meanings of b:

Mathematical: In order for the solution on the previous slide to make sense, b > 0. Otherwise, the normalization integral (over −∞ < x ≤ Z) cannot be evaluated.
Observations

Intuitive: The average duration of an up period is 1/p. The rate at which x increases (while x < Z) while the machine is up is μ − d. Therefore, the average increase of x during an up period while x < Z is (μ − d)/p.

The average duration of a down period is 1/r. The rate at which x decreases while the machine is down is d. Therefore, the average decrease of x during a down period is d/r.

In order to guarantee that x does not drift toward −∞, we must have (μ − d)/p > d/r.
Observations

If (μ − d)/p > d/r, then p/(μ − d) < r/d, so b > 0.

That is, we must have b > 0 so that there is enough capacity for x to increase on the average when x < Z.
Observations

Also, note that

  b > 0  ⟺  r/d > p/(μ − d)
        ⟺  r(μ − d) > pd
        ⟺  rμ − rd > pd
        ⟺  rμ > rd + pd
        ⟺  μ r/(r + p) > d,

which we assumed.
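This chain of equivalences is pure algebra, but a quick numeric scan (my own check; the grid values are arbitrary) makes it concrete:

```python
def b_value(mu, d, r, p):
    """b = r/d - p/(mu - d), the exponent in the density solution."""
    return r / d - p / (mu - d)

mu = 1.0
for r in (0.05, 0.1, 0.2):
    for p in (0.01, 0.05):
        capacity = mu * r / (r + p)
        for d in (0.3, 0.6, 0.9):
            # b > 0 exactly when demand is below the machine's capacity
            assert (b_value(mu, d, r, p) > 0) == (d < capacity)
print("b > 0 if and only if d < mu*r/(r+p), on this grid")
```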
Observations

2. Let C = A e^(bZ). Then

  f(x, 0) = C e^(−b(Z − x))
  f(x, 1) = C (d/(μ − d)) e^(−b(Z − x))
  P(Z, 1) = C d/p
  P(Z, 0) = 0.

That is, the probability distribution really depends on Z − x. If Z is changed, the distribution shifts without changing its shape.
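The shift property can also be checked numerically (a sketch of mine; parameters illustrative): once A is chosen by normalization, the density at a given gap Z − x is the same for every Z.

```python
import math

mu, d, r, p = 1.0, 0.7, 0.1, 0.01      # illustrative parameters
b = r / d - p / (mu - d)

def normalized_A(Z):
    """A from 1 = P(Z,1) + integral over (-inf, Z] of f(x,0) + f(x,1)."""
    return 1.0 / (math.exp(b * Z) * (d / p + (1 + d / (mu - d)) / b))

def f0(x, Z):
    """Down-state density f(x,0) = A e^(bx) = C e^(-b(Z-x)), C = A e^(bZ)."""
    return normalized_A(Z) * math.exp(b * x)

# The same gap Z - x gives the same density value for different Z.
gap = 8.0
v1 = f0(10.0 - gap, Z=10.0)
v2 = f0(25.0 - gap, Z=25.0)
print(v1, v2)    # equal: the distribution depends only on Z - x
```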
MIT OpenCourseWare
http://ocw.mit.edu

2.852 Manufacturing Systems Analysis
Spring 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms