Decision Analysis

IndE 311: Stochastic Models and Decision Analysis, UW Industrial Engineering. Instructor: Prof. Zelda Zabinsky


DESCRIPTION

Presentation for decision analysis

TRANSCRIPT

  • IndE 311: Stochastic Models and Decision Analysis, UW Industrial Engineering

    Instructor: Prof. Zelda Zabinsky

  • Operations Research: The Science of Better

  • Operations Research Modeling Toolset (IndE 310/311/312): Linear Programming, Network Programming, PERT/CPM, Dynamic Programming, Integer Programming, Nonlinear Programming, Game Theory, Decision Analysis, Markov Chains, Queueing Theory, Inventory Theory, Forecasting, Markov Decision Processes, Simulation, Stochastic Programming

  • IndE 311 topics
    Decision analysis: decision making without experimentation, decision making with experimentation, decision trees, utility theory
    Markov chains: modeling, Chapman-Kolmogorov equations, classification of states, long-run properties, first passage times, absorbing states
    Queueing theory: basic structure and modeling, exponential distribution, birth-and-death processes, models based on birth-and-death, models with non-exponential distributions
    Applications of queueing theory: waiting cost functions, decision models

  • Decision Analysis

    Chapter 15

  • Decision Analysis outline: decision making without experimentation, decision making criteria, decision making with experimentation, expected value of experimentation, decision trees, utility theory

  • Decision Making without Experimentation

  • Goferbroke Example: The Goferbroke Company owns a tract of land that may contain oil. A consulting geologist reports 1 chance in 4 of oil. Another company has offered $90k to purchase the land. Goferbroke can instead hold the land and drill for oil at a cost of $100k. If there is oil, expected revenue is $800k; if not, nothing.

    Payoff table:
    Alternative    | Oil    | Dry
    Drill for oil  |        |
    Sell the land  |        |
    Chance         | 1 in 4 | 3 in 4
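
    A minimal Python sketch (illustrative, not part of the slides) setting up the Goferbroke data used in the criteria below; the net payoffs 700, -100, and 90 (in $1000s) and the 0.25 prior are taken from the later tables:

        # Goferbroke payoffs in $1000s, net of the $100k drilling cost.
        PAYOFF = {
            "drill": {"oil": 700, "dry": -100},  # 800 revenue - 100 cost if oil; -100 if dry
            "sell":  {"oil": 90,  "dry": 90},    # the $90k purchase offer
        }
        PRIOR = {"oil": 0.25, "dry": 0.75}       # geologist: 1 chance in 4 of oil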

  • Notation and Terminology
    Actions {a1, a2, ...}: the set of actions the decision maker must choose from. Example:

    States of nature {θ1, θ2, ...}: the possible outcomes of the uncertain event. Example:

  • Notation and Terminology (continued)
    Payoff/loss function L(ai, θk): the payoff/loss incurred by taking action ai when state θk occurs. Example:

    Prior distribution: the distribution representing the relative likelihood of the possible states of nature.

    Prior probabilities P(Θ = θk): the probabilities (given by the prior distribution) of the various states of nature. Example:

  • Decision Making Criteria: the decision can be optimized with respect to several criteria
    Maximin payoff
    Minimax regret
    Maximum likelihood
    Bayes decision rule (expected value)

  • Maximin Payoff Criterion: for each action, find the minimum payoff over all states of nature; then choose the action whose minimum payoff is largest.

    Payoffs ($1000s):
    Action         | Oil  | Dry  | Min payoff
    Drill for oil  | 700  | -100 |
    Sell the land  |  90  |   90 |
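
    A short Python sketch (illustrative) of the maximin computation on the table above:

        # Maximin: pick the action whose worst-case payoff is largest.
        payoff = {"drill": {"oil": 700, "dry": -100}, "sell": {"oil": 90, "dry": 90}}
        worst = {a: min(row.values()) for a, row in payoff.items()}  # drill: -100, sell: 90
        best_action = max(worst, key=worst.get)                      # 'sell'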

  • Minimax Regret Criterion: for each action, find the maximum regret over all states of nature; then choose the action whose maximum regret is smallest.

    Regrets ($1000s):
    Action         | Oil | Dry | Max regret
    Drill for oil  |     |     |
    Sell the land  |     |     |

    Payoffs ($1000s):
    Action         | Oil  | Dry
    Drill for oil  | 700  | -100
    Sell the land  |  90  |   90
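
    A short Python sketch (illustrative) of how the regret table is filled in and the minimax-regret action found:

        # Regret = best payoff achievable under that state minus the payoff of the action taken.
        payoff = {"drill": {"oil": 700, "dry": -100}, "sell": {"oil": 90, "dry": 90}}
        states = ("oil", "dry")
        best_in_state = {s: max(payoff[a][s] for a in payoff) for s in states}            # oil: 700, dry: 90
        regret = {a: {s: best_in_state[s] - payoff[a][s] for s in states} for a in payoff}
        max_regret = {a: max(regret[a].values()) for a in payoff}                          # drill: 190, sell: 610
        best_action = min(max_regret, key=max_regret.get)                                  # 'drill'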

  • Maximum Likelihood Criterion: identify the most likely state of nature; then choose the action with the maximum payoff under that state.

    Payoffs ($1000s):
    Action            | Oil  | Dry
    Drill for oil     | 700  | -100
    Sell the land     |  90  |   90
    Prior probability | 0.25 | 0.75
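
    A short Python sketch (illustrative) of the maximum likelihood criterion on this table:

        # Look only at the most likely state of nature, then take the best action for that state.
        payoff = {"drill": {"oil": 700, "dry": -100}, "sell": {"oil": 90, "dry": 90}}
        prior = {"oil": 0.25, "dry": 0.75}
        likely_state = max(prior, key=prior.get)                           # 'dry'
        best_action = max(payoff, key=lambda a: payoff[a][likely_state])   # 'sell' (90 vs -100)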

  • Bayes Decision Rule (Expected Value Criterion): for each action, compute the expected payoff over all states of nature; then choose the action with the maximum expected payoff.

    Payoffs ($1000s):
    Action            | Oil  | Dry  | Expected payoff
    Drill for oil     | 700  | -100 |
    Sell the land     |  90  |   90 |
    Prior probability | 0.25 | 0.75 |
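
    A short Python sketch (illustrative) of the expected payoffs under the prior:

        # Bayes decision rule: maximize expected payoff with respect to the prior probabilities.
        payoff = {"drill": {"oil": 700, "dry": -100}, "sell": {"oil": 90, "dry": 90}}
        prior = {"oil": 0.25, "dry": 0.75}
        expected = {a: sum(prior[s] * payoff[a][s] for s in prior) for a in payoff}
        # expected: drill -> 0.25*700 + 0.75*(-100) = 100, sell -> 90
        best_action = max(expected, key=expected.get)   # 'drill'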

  • Sensitivity Analysis with the Bayes Decision Rule: what is the minimum probability of oil for which we still choose to drill under the Bayes decision rule?

    Payoffs ($1000s):
    Action            | Oil | Dry  | Expected payoff
    Drill for oil     | 700 | -100 |
    Sell the land     |  90 |   90 |
    Prior probability |  p  | 1-p  |
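
    A quick check of the crossover prior (my own arithmetic from the table above): drilling is preferred when 700p - 100(1 - p) >= 90, i.e. 800p >= 190.

        # Smallest P(oil) at which drilling is chosen under the Bayes decision rule.
        p_star = (90 + 100) / (700 + 100)   # = 190/800 = 0.2375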

  • Decision Making with Experimentation

  • Goferbroke Example (continued): an option is available to conduct a detailed seismic survey to obtain a better estimate of the probability of oil. The survey costs $30k. Possible findings:
    Unfavorable seismic soundings (USS): oil is fairly unlikely
    Favorable seismic soundings (FSS): oil is fairly likely

    Payoffs ($1000s):
    Action            | Oil  | Dry
    Drill for oil     | 700  | -100
    Sell the land     |  90  |   90
    Prior probability | 0.25 | 0.75

  • Posterior Probabilities: perform experiments to obtain better information and improve the estimates of the probabilities of the states of nature. These improved estimates are called posterior probabilities.
    Experimental outcomes: {x1, x2, ...}. Example:

    Cost of experiment: Example:

    Posterior distribution: P(Θ = θk | X = xj)

  • Goferbroke Example (continued): based on past experience,
    if there is oil, then
      the probability that the seismic survey finding is USS is 0.4 = P(USS | oil)
      the probability that the seismic survey finding is FSS is 0.6 = P(FSS | oil)
    if there is no oil, then
      the probability that the seismic survey finding is USS is 0.8 = P(USS | dry)
      the probability that the seismic survey finding is FSS is 0.2 = P(FSS | dry)

  • Bayes' Theorem: calculate the posterior probabilities using Bayes' theorem. Given the likelihoods P(X = xj | Θ = θk) and the priors P(Θ = θk), find

    P(Θ = θk | X = xj) = P(X = xj | Θ = θk) P(Θ = θk) / Σi P(X = xj | Θ = θi) P(Θ = θi)

  • Goferbroke Example (continued): we have
    P(USS | oil) = 0.4    P(FSS | oil) = 0.6    P(oil) = 0.25
    P(USS | dry) = 0.8    P(FSS | dry) = 0.2    P(dry) = 0.75
    The posterior probabilities to find are listed below; a calculation sketch follows the list.

    P(oil | USS) =

    P(oil | FSS) =

    P(dry | USS) =

    P(dry | FSS) =
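
    A minimal Python sketch (illustrative) of the Bayes' theorem calculation for the four posteriors listed above, using the likelihoods and prior from the previous slide:

        # Posterior P(state | finding) via Bayes' theorem.
        prior = {"oil": 0.25, "dry": 0.75}
        likelihood = {"oil": {"USS": 0.4, "FSS": 0.6}, "dry": {"USS": 0.8, "FSS": 0.2}}

        def posterior(state, finding):
            evidence = sum(likelihood[s][finding] * prior[s] for s in prior)   # P(finding)
            return likelihood[state][finding] * prior[state] / evidence

        # posterior("oil", "USS") = 0.1/0.7  ~ 0.143,  posterior("dry", "USS") ~ 0.857
        # posterior("oil", "FSS") = 0.15/0.3 = 0.5,    posterior("dry", "FSS") = 0.5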

  • Goferbroke Example (continued): optimal policies

    If the finding is USS (payoffs in $1000s):
    Action                | Oil | Dry  | Expected payoff
    Drill for oil         | 700 | -100 |
    Sell the land         |  90 |   90 |
    Posterior probability |     |      |

    If the finding is FSS (payoffs in $1000s):
    Action                | Oil | Dry  | Expected payoff
    Drill for oil         | 700 | -100 |
    Sell the land         |  90 |   90 |
    Posterior probability |     |      |

  • The Value of Experimentation: do we need to perform the experiment? As the data above show, the experimental outcome is not always correct; we sometimes have imperfect information. There are two ways to assess the value of information:
    Expected value of perfect information (EVPI): what is the value of a crystal ball that can identify the true state of nature?
    Expected value of experimentation (EVE): is the experiment worth its cost?

  • Expected Value of Perfect Information: suppose we knew the true state of nature; then we would pick the optimal action for that state.
    E[PI] = expected payoff with perfect information =

    Payoffs ($1000s):
    Action            | Oil  | Dry
    Drill for oil     | 700  | -100
    Sell the land     |  90  |   90
    Prior probability | 0.25 | 0.75

  • Expected Value of Perfect Information:
    EVPI = E[PI] - E[OI]
    where E[OI] is the expected payoff with the original information (i.e., without experimentation).
    EVPI for the Goferbroke problem =
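
    A small sketch (illustrative) of the EVPI arithmetic using the prior table above:

        # With perfect information we would drill when there is oil (700) and sell when dry (90).
        prior = {"oil": 0.25, "dry": 0.75}
        payoff = {"drill": {"oil": 700, "dry": -100}, "sell": {"oil": 90, "dry": 90}}
        e_pi = sum(prior[s] * max(payoff[a][s] for a in payoff) for s in prior)  # 0.25*700 + 0.75*90 = 242.5
        e_oi = max(sum(prior[s] * payoff[a][s] for s in prior) for a in payoff)  # 100 (drill)
        evpi = e_pi - e_oi                                                       # 142.5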

  • Expected Value of Experimentation: we are interested in the value of the experiment itself. If that value is greater than the cost of the experiment, the experiment is worthwhile.
    Expected value of experimentation:
    EVE = E[EI] - E[OI]
    where E[EI] is the expected payoff with experimental information.

  • Goferbroke Example (continued): expected value of experimentation
    EVE = E[EI] - E[OI]

    EVE =
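
    A sketch (illustrative) of E[EI] and EVE, combining the survey likelihoods with the payoff table; the result is then compared with the $30k survey cost:

        # Expected payoff with experimental (seismic survey) information.
        prior = {"oil": 0.25, "dry": 0.75}
        payoff = {"drill": {"oil": 700, "dry": -100}, "sell": {"oil": 90, "dry": 90}}
        likelihood = {"oil": {"USS": 0.4, "FSS": 0.6}, "dry": {"USS": 0.8, "FSS": 0.2}}

        e_ei = 0.0
        for finding in ("USS", "FSS"):
            p_finding = sum(likelihood[s][finding] * prior[s] for s in prior)       # P(USS)=0.7, P(FSS)=0.3
            post = {s: likelihood[s][finding] * prior[s] / p_finding for s in prior}
            best = max(sum(post[s] * payoff[a][s] for s in prior) for a in payoff)  # 90 given USS, 300 given FSS
            e_ei += p_finding * best                                                # E[EI] = 0.7*90 + 0.3*300 = 153

        eve = e_ei - 100.0   # E[OI] = 100 (drill under the prior); EVE = 53 > 30, so the survey is worthwhile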

  • Decision Trees

  • Decision Tree: a tool to display the decision problem and the relevant computations.

    Nodes on a decision tree are called __________. Arcs on a decision tree are called ___________.

    Decision forks are represented by a __________. Chance forks are represented by a ___________.

    The outcome is determined by both ___________ and ____________. Outcomes are noted at the end of a path. Payoff information can also be included on a decision tree branch.

  • Goferbroke Example (continued): Decision Tree

  • Analysis Using Decision Trees
    1. Start at the right side of the tree and move left one column at a time. For each column: if it contains chance forks, go to step 2; if it contains decision forks, go to step 3.
    2. At each chance fork, calculate its expected value and record it in bold next to the fork. This value is also the expected value for the branch leading into that fork.
    3. At each decision fork, compare the expected values and choose the alternative on the branch with the best value. Record the choice by putting slash marks through each rejected branch.
    Comments: this is a backward induction procedure. For any decision tree, it always leads to an optimal solution.
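
    A minimal recursive Python sketch of this backward induction (illustrative; the tuple representation of forks is my own, not from the slides):

        # A node is a terminal payoff (number), a chance fork ("chance", [(prob, child), ...]),
        # or a decision fork ("decision", [(label, child), ...]).
        def rollback(node):
            if isinstance(node, (int, float)):      # terminal payoff at the end of a path
                return node
            kind, branches = node
            if kind == "chance":                    # expected value over the outgoing branches
                return sum(p * rollback(child) for p, child in branches)
            if kind == "decision":                  # keep the best branch, "slash" the rest
                return max(rollback(child) for _, child in branches)
            raise ValueError(kind)

        # Goferbroke without the survey: drilling is a chance fork, selling is a terminal 90.
        tree = ("decision", [("drill", ("chance", [(0.25, 700), (0.75, -100)])), ("sell", 90)])
        print(rollback(tree))   # 100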

  • Goferbroke Example (continued): Decision Tree Analysis

  • Painting Problem: a painting at an art gallery is, you think, worth $12,000. The dealer asks $10,000 if you buy today (Wednesday). You can buy now or wait until tomorrow; if the painting has not been sold by then, it can be yours for $8,000. Tomorrow you can again buy or wait until the next day; if it has not been sold by then, it can be yours for $7,000. On any given day, the probability that the painting is sold to someone else is 50%. What is the optimal policy?

  • Drawer Problem: there are two drawers. One drawer contains three gold coins; the other contains one gold and two silver coins. You choose one drawer and are paid $500 for each gold coin and $100 for each silver coin in it. Before choosing, you may pay me $200, and I will draw a randomly selected coin and tell you whether it is gold or silver and which drawer it came from (e.g., a gold coin from drawer 1). What is the optimal decision policy? What are the EVPI and EVE? Should you pay me $200?

  • Utility Theory

  • Validity of the Monetary Value Assumption: so far, when applying the Bayes decision rule, we have assumed that expected monetary value is the appropriate measure. In many situations and applications, this assumption may be inappropriate.

  • Choosing between Lotteries: suppose you were given the option to choose between two lotteries.
    Lottery 1: a 50:50 chance of winning $1,000 or $0
    Lottery 2: receive $50 for certain
    Which one would you pick?
    [Figure: lottery diagrams, $1,000 or $0 with probability 0.5 each versus $50 with probability 1]

  • Choosing between Lotteries: how about between these two?
    Lottery 1: a 50:50 chance of winning $1,000 or $0
    Lottery 2: receive $400 for certain

    Or these two?
    Lottery 1: a 50:50 chance of winning $1,000 or $0
    Lottery 2: receive $700 for certain

    [Figure: lottery diagrams for the $400 and $700 certain alternatives]

  • Utility: think of a capital investment firm deciding whether or not to invest in a company developing a technology that is unproven but has high potential impact. How many people buy insurance? Is buying insurance monetarily sound according to the Bayes decision rule? So is the Bayes rule invalidated? No, because we can apply it to the utility for money when choosing between decisions. We'll focus on the utility for money, but in general it could be the utility for anything (e.g., the consequences of a doctor's actions).

  • A Typical Utility Function for Money
    [Figure: utility curve u(M) versus M, with u values 0 through 4 marked against M = $100, $250, $500, $1,000]
    What does this mean?

  • Decision Maker's Preferences
    Risk-averse: avoids risk; decreasing marginal utility for money
    Risk-neutral: monetary value = utility; linear utility for money
    Risk-seeking (or risk-prone): seeks risk; increasing marginal utility for money
    Or a combination of these

    [Figure: four u(M) versus M curves illustrating the risk-averse, risk-neutral, risk-seeking, and combined cases]

  • Constructing Utility Functions: when utility theory is incorporated into a real decision analysis problem, a utility function must be constructed to fit the preferences and values of the decision maker(s) involved.
    Fundamental property: the decision maker is indifferent between two alternative courses of action that have the same utility.

  • Indifference in Utility: consider two lotteries.

    The example decision maker we discussed earlier would be indifferent between the two lotteries if
    p is 0.25 and X is
    p is 0.50 and X is
    p is 0.75 and X is
    [Figure: lottery diagrams, $1,000 with probability p and $0 with probability 1-p, versus $X for certain]
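
    A short check (my own, assuming the earlier typical utility function pairs u = 0, 1, 2, 3, 4 with M = $0, $100, $250, $500, $1,000): indifference requires u(X) = p*u(1,000) + (1-p)*u(0) = 4p.

        # Invert the assumed utility points to find the certainty equivalent X for each p.
        u_points = {0: 0, 100: 1, 250: 2, 500: 3, 1000: 4}   # assumed reading of the earlier figure
        certainty_equivalent = {u: m for m, u in u_points.items()}
        for p in (0.25, 0.50, 0.75):
            print(p, certainty_equivalent[4 * p])   # 0.25 -> 100, 0.50 -> 250, 0.75 -> 500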

  • Goferbroke Example (with Utility): we need the utility values u(M) for the following possible monetary payoffs.

    Monetary payoff M ($1000s) | Utility u(M)
    -130                       |
    -100                       |
      60                       |
      90                       |
     670                       |
     700                       |

  • Constructing Utility Functions (Goferbroke Example): u(0) is usually set to 0, so u(0) = 0. We ask the decision maker what value of p makes him/her indifferent between the following lotteries:
    [Figure: 700 with probability p and -130 with probability 1-p, versus 0 for certain]

    The decision maker's response is p = 0.2. So

  • Constructing Utility Functions (Goferbroke Example): we now ask the decision maker what value of p makes him/her indifferent between the following lotteries:
    [Figure: 700 with probability p and 0 with probability 1-p, versus 90 for certain]

    The decision maker's response is p = 0.15. So

  • Constructing Utility Functions (Goferbroke Example): we now ask the decision maker what value of p makes him/her indifferent between the following lotteries:
    [Figure: 700 with probability p and 0 with probability 1-p, versus 60 for certain]

    The decision maker's response is p = 0.1. So
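
    A small sketch (my own) of how the three responses pin down utilities once a scale is chosen; the slides only fix u(0) = 0, so u(700) = 1 below is an arbitrary normalization:

        # Indifference conditions from the three lottery questions, with u(0) = 0:
        #   0     = 0.20 * u(700) + 0.80 * u(-130)
        #   u(90) = 0.15 * u(700)
        #   u(60) = 0.10 * u(700)
        u700 = 1.0                          # arbitrary scale choice
        u_minus_130 = -0.20 * u700 / 0.80   # = -0.25 * u(700)
        u_90 = 0.15 * u700
        u_60 = 0.10 * u700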

  • Goferbroke Example (with Utility): Decision Tree

  • Exponential Utility Functions: one of many mathematically prescribed closed-form utility functions.

    It is used for risk-averse decision makers only. It can be used when it is not feasible or desirable for the decision maker to answer lottery questions for all possible outcomes. Its single parameter R is the value such that the decision maker is (approximately) indifferent between
    a 50:50 gamble of gaining R or losing R/2, and gaining or losing nothing for certain.
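
    As a sketch, one common closed form with this single-parameter, risk-averse shape is u(M) = R(1 - exp(-M/R)); the slides do not show the formula, so this form is an assumption here.

        import math

        def exponential_utility(M: float, R: float) -> float:
            """Exponential utility u(M) = R * (1 - exp(-M / R)), with risk-tolerance parameter R."""
            return R * (1.0 - math.exp(-M / R))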