TRANSCRIPT
© Chair and Institute of Industrial Engineering and Ergonomics, RWTH Aachen University
Simulation of Discrete Event Systems
Exercise to Unit 1
Introduction to Discrete Event Systems
Winter Term 2016/2017
Dr.-Ing. Dipl.-Wirt.-Ing. Sven Tackenberg
Univ.-Prof. Dr.-Ing. Dipl.-Wirt.-Ing. Christopher M. Schlick
Chair and Institute of Industrial Engineering and Ergonomics
Instructor:
Jochen Nelles, M.Sc.
E-Mail: [email protected]
1. Review of Set Theory
2. Review of Probability Theory
3. Exercises
Contents
1. Review of Set Theory
A set is a collection of objects. The objects are called the elements of the set. Sets will be denoted with capital letters and elements with lower case letters. The empty set does not contain any elements and is denoted with the symbol ∅.
Example: A = {a, b, c} ⇒ a ∈ A; d ∉ A
A set can be defined in two ways. First, we can simply enumerate its elements. Second, we can specify the relevant properties characterizing the set elements (generation). In the second case a lower case letter as a variable, e.g. the variable x, represents an arbitrary element of the set and the colon ":" denotes "with the property". Logical associations among properties are denoted with the binary operators "and" (symbol ∧) and "or" (symbol ∨) of Boolean algebra. The unary operator "¬" is used to denote a negation.
Enumeration example: A = {a, b, c}; B = {1, 2, 3, ...}
Generation example: A = {x : x is an English word}
B = {x : x is a prime number ∧ x < 10^6}
C = {x : x is an English word ∨ x is an English sentence}
Review of set theory (I)
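The two ways of defining a set map directly onto Python's set type: enumeration is a set literal, generation is a set comprehension. A minimal sketch of the prime-number example (the helper `is_prime` and the bound 100 are illustrative choices, not from the slide; the slide uses the bound 10^6):

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test; sufficient for small n."""
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

# Enumeration: A = {a, b, c}
A = {"a", "b", "c"}

# Generation: B = {x : x is a prime number ∧ x < 100}
B = {x for x in range(2, 100) if is_prime(x)}

print("a" in A)       # membership test: a ∈ A
print("d" not in A)   # d ∉ A
print(sorted(B)[:5])  # the first few primes below 100
```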
Two sets are equal if each element of the first set can be associated with an equal element of the second set and the numbers of elements (set cardinalities) of both sets are equal. Otherwise they are not equal.
Example: A = {a, b, c}; B = {a, b, c}; C = {a, b, c, d} ⇒ A = B; A ≠ C
A subset B of a set A contains only elements of A. The subset may be the complete set A itself; the binary operators ⊆ and ⊇ are used. A proper (true) subset is a subset with lower cardinality than the reference set; the binary operators ⊂ and ⊃ are used. The empty set is a proper subset of any other set.
Example: A = {a, b, c}; B = {a, b}; C = {a, b, c} ⇒ B ⊂ A; A ⊆ C; A ⊄ B
A set of cardinality 1 is a singleton. A set with a finite cardinality is a finite set; a set with an infinite cardinality is an infinite set. The cardinality of a set is denoted with the unary operator | · |.
Example: A = {a, b, c} ⇒ A is a finite set; |A| = 3
Review of set theory (II)
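Python's comparison operators on sets express these relations directly: `<=` is ⊆, `<` is ⊂, and `len` gives the cardinality. A short sketch using the slide's example sets:

```python
A = {"a", "b", "c"}
B = {"a", "b"}
C = {"a", "b", "c"}

print(B < A)      # proper subset: B ⊂ A -> True
print(A <= C)     # subset: A ⊆ C -> True (here even A == C)
print(set() < A)  # the empty set is a proper subset of any non-empty set
print(len(A))     # cardinality |A| = 3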
If A and B are two sets, a set C can be generated with the help of the following set operators:
- Complement:
C = A^C = {x : x ∈ U ∧ x ∉ A}, where U is the universal set
- Union:
C = A ∪ B = {x : x ∈ A ∨ x ∈ B}
- Intersection:
C = A ∩ B = {x : x ∈ A ∧ x ∈ B}
- Difference:
C = A \ B = {x : x ∈ A ∧ x ∉ B}
- Power set:
2^A = {X : X ⊆ A}, that is, the set of all subsets of A
- Cartesian product:
C = A × B = {(x, y) : x ∈ A ∧ y ∈ B}
Review of set theory (III)
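The first four operators correspond to built-in Python set operations. A minimal sketch; the universal set U is a small example set chosen for illustration:

```python
U = {1, 2, 3, 4, 5, 6}  # universal set (illustrative choice)
A = {1, 2, 3}
B = {3, 4}

complement   = U - A    # A^C = {x ∈ U : x ∉ A}
union        = A | B    # A ∪ B
intersection = A & B    # A ∩ B
difference   = A - B    # A \ B

print(complement)       # {4, 5, 6}
print(union)            # {1, 2, 3, 4}
print(intersection)     # {3}
print(difference)       # {1, 2}
```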
Examples of set operators:
A = {a, b, c} ⇒ 2^A = {∅, {a}, {b}, {c}, {a, b}, {a, c}, {b, c}, A}
A = {a, b}; B = {c, d} ⇒ C = A × B = {(a, c), (a, d), (b, c), (b, d)}
[Venn diagrams: A ∪ B, A ∩ B, A \ B and A^C are shown hatched.]
Review of set theory (IV)
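The power set and the Cartesian product can be sketched with `itertools`. The helper name `powerset` is my own; `itertools.product` computes A × B directly:

```python
from itertools import chain, combinations, product

def powerset(s):
    """All subsets of s, from the empty set up to s itself."""
    items = list(s)
    return [set(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]

A = {"a", "b", "c"}
print(len(powerset(A)))         # |2^A| = 2^3 = 8

A2 = {"a", "b"}
B2 = {"c", "d"}
print(sorted(product(A2, B2)))  # the four pairs of A2 × B2
```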
Review of set theory (V)
Idempotency property:   1a. A ∪ A = A                              1b. A ∩ A = A
Associative property:   2a. (A ∪ B) ∪ C = A ∪ (B ∪ C)              2b. (A ∩ B) ∩ C = A ∩ (B ∩ C)
Commutative property:   3a. A ∪ B = B ∪ A                          3b. A ∩ B = B ∩ A
Distributive property:  4a. A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)        4b. A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
Identity:               5a. A ∪ ∅ = A                              5b. A ∩ U = A
                        6a. A ∪ U = U                              6b. A ∩ ∅ = ∅
Complement:             7a. A ∪ A^C = U                            7b. A ∩ A^C = ∅
                        8a. (A^C)^C = A                            8b. U^C = ∅, ∅^C = U
De Morgan law:          9a. (A ∪ B)^C = A^C ∩ B^C                  9b. (A ∩ B)^C = A^C ∪ B^C
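These identities can be spot-checked on concrete sets (a finite check illustrates the laws, it is not a proof). A sketch with arbitrarily chosen example sets:

```python
U = set(range(10))              # universal set (illustrative choice)
A, B, C = {1, 2, 3}, {3, 4, 5}, {5, 6}

def comp(X):
    """Complement X^C relative to U."""
    return U - X

assert A | (B & C) == (A | B) & (A | C)   # 4a distributive
assert A & (B | C) == (A & B) | (A & C)   # 4b distributive
assert comp(A | B) == comp(A) & comp(B)   # 9a De Morgan
assert comp(A & B) == comp(A) | comp(B)   # 9b De Morgan
print("all identities hold on this example")
```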
2. Review of Probability Theory
In probability theory a random trial is a nondeterministic phenomenon where possible outcomes
of the trial are associated with a probability measure. The set S of all possible outcomes is
called the sample space.
A possible trial outcome as an element of the sample space S is called a sample. A random event
A is a set of samples or a subset of the sample space. An event with cardinality 1 is called an
elementary event.
The empty set and S are random events. The empty set is called the impossible event and
S is called the certain event.
Combined events of random trials can be simply defined bottom-up from elementary events when
using the previously introduced unary or binary set operators.
Example: Throwing a die ⇒ S = {1, 2, 3, 4, 5, 6}
A = {a ∈ S : "even number of faces up"} = {2, 4, 6}
Review of probability theory (I)
Let the sample space S be a finite set, e.g. S = {a1, a2, ..., an}. We can define a probability space if each ai ∈ S can be associated with a real number pi, called the probability of ai, so that the following conditions are fulfilled:
- the pi are non-negative
- the sum of all pi equals 1.
The probability P(A) of an event A is defined as the sum of the probabilities of the elements of A. A compact notation for P({ai}) is often P(ai).
For the discrete probability distribution P defined on the sample space, the following axioms according to Kolmogorov hold:
- For each event A: 0 ≤ P(A) ≤ 1
- P(S) = 1
- If A and B are mutually exclusive, then P(A ∪ B) = P(A) + P(B)
Example: Throwing a fair die ⇒ S = {1, 2, ..., 6}; P(S) = 1
P({"even no. of faces up"}) = P({2, 4, 6}) = P({2}) + P({4}) + P({6}) = 1/2
Review of probability theory (II)
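The fair-die probability space above can be written out explicitly, with the Kolmogorov conditions checked and P(A) computed as a sum of elementary probabilities. A sketch using exact fractions to avoid floating-point noise:

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}
p = {a: Fraction(1, 6) for a in S}   # uniform distribution of a fair die

def P(event):
    """P(A) = sum of the probabilities of the elements of A."""
    return sum(p[a] for a in event)

assert all(p[a] >= 0 for a in S)     # the p_i are non-negative
assert P(S) == 1                     # the sum of all p_i equals 1

A = {2, 4, 6}                        # "even number of faces up"
print(P(A))                          # 1/2
```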
An event E with P(E) > 0 is given. The probability that another event A occurs under the hypothesis of E being given is called the conditional probability of A given E. The functional notation is P(A|E). The conditional probability is defined in the Bayes theorem:
P(A|E) = P(A ∩ E) / P(E)
If the sample space is finite and all outcomes are equally likely, the following equation holds:
P(A|E) = |A ∩ E| / |E|
Example: Throwing two fair dice
S = {1, 2, 3, 4, 5, 6} × {1, 2, 3, 4, 5, 6} = {(1, 1), (1, 2), ..., (6, 6)}
E = {e ∈ S : "sum of faces up is 6"} = {(1, 5), (2, 4), (3, 3), (4, 2), (5, 1)}
A = {a ∈ S : "a 2 is up on one die"}
A ∩ E = {(2, 4), (4, 2)} ⇒ P(A|E) = 2/5 ≠ P(A)
Review of probability theory (III)
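The two-dice example can be verified by enumeration: on a finite uniform sample space, P(A|E) = |A ∩ E| / |E|. A short sketch:

```python
from fractions import Fraction
from itertools import product

S = set(product(range(1, 7), repeat=2))     # all 36 outcomes of two dice
E = {s for s in S if sum(s) == 6}           # "sum of faces up is 6"
A = {s for s in S if 2 in s}                # "a 2 is up on one die"

P_A_given_E = Fraction(len(A & E), len(E))  # |A ∩ E| / |E|
print(P_A_given_E)                          # 2/5
print(Fraction(len(A), len(S)))             # P(A) = 11/36, so P(A|E) ≠ P(A)
```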
For arbitrary events A1, A2, ..., An, the Bayes theorem can be applied recursively and we can derive the chain rule:
P(A1 ∩ A2 ∩ ... ∩ An) = P(A1) · P(A2|A1) · P(A3|A1 ∩ A2) · ... · P(An|A1 ∩ A2 ∩ ... ∩ A(n-1))
An event A is independent of an event B if the observation of B does not influence the probability of A being observed. If the following condition holds, the events A and B are called (statistically) independent:
P(A|B) = P(A)
Three events A, B, and C are called independent if the event pairs are independent and the joint probability can be completely factorized into unconditional probabilities:
(i) P(A ∩ B) = P(A) · P(B);  P(A ∩ C) = P(A) · P(C);  P(B ∩ C) = P(B) · P(C)
(ii) P(A ∩ B ∩ C) = P(A) · P(B) · P(C)
Review of probability theory (IV)
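Independence can be checked via the factorization condition. A sketch with two fair dice, where "first die even" and "second die even" are independent by construction (the example events are my own choice, not from the slide):

```python
from fractions import Fraction
from itertools import product

S = set(product(range(1, 7), repeat=2))   # two fair dice, uniform

def P(event):
    return Fraction(len(event), len(S))

A = {s for s in S if s[0] % 2 == 0}       # "first die shows an even number"
B = {s for s in S if s[1] % 2 == 0}       # "second die shows an even number"

# Factorization holds, so A and B are (statistically) independent.
print(P(A & B) == P(A) * P(B))            # True
```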
Consider the sample space S of a random trial. Obviously, the elements of S do not need to be numbers to apply the rules of probability theory. However, in engineering science the elements of S are usually mapped onto numbers, e.g. the number of faces up for a fair die. This mapping is called a random variable.
A random variable X on a probability space S is a function from S into the real numbers such that the preimages are random events. The range of the random variable X is denoted RX: RX = X(S).
Example: Throwing two dice
According to the previous examples, the sample space of the random trial is the Cartesian product of A = {1, 2, 3, 4, 5, 6} with itself:
S = A × A = {(1, 1), (1, 2), ..., (6, 6)}
The random variable X represents the sum of the faces up of both dice and maps each element of S onto the sum of faces; hence, X is a random variable with the range:
RX = {2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}
Review of probability theory (V)
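The random variable X is literally a function on S, and its range RX = X(S) is the image of that function. A minimal sketch:

```python
from itertools import product

S = set(product(range(1, 7), repeat=2))   # sample space of two dice

def X(s):
    """Random variable: sum of the faces up of both dice."""
    return s[0] + s[1]

R_X = {X(s) for s in S}                   # range R_X = X(S)
print(sorted(R_X))                        # [2, 3, ..., 12]
```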
Given is the range RX = {x1, x2, ..., xn} of a random variable X defined on a finite sample space S. X induces the following probabilities on RX:
pi = P(xi) : sum of the probabilities of all elements in S with the image xi.
The function which maps the point xi onto the probability pi is called the probability distribution of the random variable X. The distribution can easily be specified as a table:
xi : x1 x2 ... xn
pi : p1 p2 ... pn → the sum of the pi must equal 1!
Example: Throwing two fair dice
S = {(1, 1), (1, 2), ..., (6, 6)} with |S| = 36
RX = {2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}
The element (1, 1) has the image 2 and therefore P(2) = p1 = 1/36.
There are two elements of S with image 3: (1, 2), (2, 1), and therefore P(3) = p2 = 2/36.
There are three elements of S with image 4: (1, 3), (2, 2), (3, 1), and therefore P(4) = p3 = 3/36.
Et cetera...
Review of probability theory (VI)
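The induced distribution can be tabulated by counting, for each value xi, the outcomes in S with that image and dividing by |S|. A sketch:

```python
from fractions import Fraction
from itertools import product
from collections import Counter

S = list(product(range(1, 7), repeat=2))    # 36 outcomes of two fair dice
counts = Counter(a + b for a, b in S)       # outcomes per image x_i

# p_i = (number of outcomes with image x_i) / |S|
dist = {x: Fraction(c, len(S)) for x, c in counts.items()}

print(dist[2], dist[3], dist[4])            # 1/36, 2/36, 3/36 (reduced)
assert sum(dist.values()) == 1              # the p_i must sum to 1
```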
Given is a random trial that is repeated, with independent repetitions. The trial has binary outcomes as events, which are called "success" and "failure". We denote with p the probability of observing a success and with q = 1 - p the probability of a failure. These repeated trials are called Bernoulli trials.
According to the following proposition, we can compute the probability of observing k successes in n repeated trials:
P(k, n) = (n over k) p^k q^(n-k) = n! / (k! (n-k)!) · p^k q^(n-k)
The function P(·, ·) denotes the binomial probability distribution with parameters n and p.
Example: n fair coin tosses
The probability after n tosses (n is even!) of observing "heads" n/2 times is:
P(n/2, n) = n! / ((n/2)! (n/2)!) · (1/2)^(n/2) · (1/2)^(n/2) = n! / ((n/2)!)^2 · (1/2)^n
Review of probability theory (VII)
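Both forms of the coin-toss formula can be checked against each other numerically. A sketch using `math.comb` for the binomial coefficient and exact fractions:

```python
from fractions import Fraction
from math import comb, factorial

def P(k, n, p):
    """Binomial probability P(k, n) = C(n, k) p^k q^(n-k)."""
    q = 1 - p
    return comb(n, k) * p**k * q**(n - k)

n = 4                                  # small even n for illustration
half = Fraction(1, 2)
lhs = P(n // 2, n, half)
rhs = Fraction(factorial(n), factorial(n // 2) ** 2) * half**n

print(lhs, lhs == rhs)                 # 3/8 True
```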
[Figure: list plot of the probability P(n/2, n) of observing "heads" n/2 times after n tosses, for even n up to 20; vertical axis from 0 to roughly 0.8. The probability decreases toward 0 as n grows.]
Review of probability theory (VIII)
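The values behind the list plot can be reproduced directly, showing that the probability of exactly n/2 heads shrinks as n grows:

```python
from fractions import Fraction
from math import comb

def p_half(n):
    """P(n/2, n) for a fair coin and even n: C(n, n/2) / 2^n."""
    return Fraction(comb(n, n // 2), 2**n)

for n in (2, 4, 10, 20):
    print(n, float(p_half(n)))   # monotonically decreasing values
```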
Open questions?
1. Prove the De Morgan laws.
2. Develop a state transition diagram of a coin-operated soda machine offering lemonade and water. Each beverage costs 3 euros, and due to occupational safety and hygiene requirements the slot with the plastic cup must be closed while the beverage is being poured.
Short homework