Artificial Intelligence: Uncertainty (Chapter 11)



    Artificial Intelligence: Uncertainty Slide: 2

    Instructor's Information

    LE Thanh Sach, Ph.D.

    Office:
    Department of Computer Science,
    Faculty of Computer Science and Engineering,
    Ho Chi Minh City University of Technology.

    Office Address:
    268 Ly Thuong Kiet Str., Dist. 10, Ho Chi Minh City, Vietnam.

    E-mail: [email protected]
    E-home: http://cse.hcmut.edu.vn/~ltsach/
    Tel: (+84) 83-864-7256 (Ext: 5839)


    Artificial Intelligence: Uncertainty Slide: 3

    Acknowledgment

    The slides in this PPT file are composed using the materials supplied by:

    Prof. Stuart Russell and Prof. Peter Norvig: they are currently at the
    University of California, Berkeley, and are the authors of the book
    Artificial Intelligence: A Modern Approach, which is used as the textbook
    for this course.

    Prof. Tom Lenaerts, from Université Libre de Bruxelles.


    Artificial Intelligence: Uncertainty Slide: 4

    Outline

    Uncertainty

    Probability

    Syntax and Semantics

    Inference

    Independence and Bayes' Rule

    Bayesian Networks


    Artificial Intelligence: Uncertainty Slide: 5

    Uncertainty

    Let action At = leave for airport t minutes before flight.
    Will At get me there on time?

    Problems:
    1. partial observability (road state, other drivers' plans, etc.)
    2. noisy sensors (traffic reports)
    3. uncertainty in action outcomes (flat tire, etc.)
    4. immense complexity of modeling and predicting traffic

    Hence a purely logical approach either
    1. risks falsehood: "A25 will get me there on time", or
    2. leads to conclusions that are too weak for decision making:
       "A25 will get me there on time IF there's no accident on the bridge
       and it doesn't rain and my tires remain intact, etc., etc."

    (A1440 might reasonably be said to get me there on time, BUT I'd have to
    stay overnight in the airport.)


    Artificial Intelligence: Uncertainty Slide: 6

    Methods for handling uncertainty

    Default or nonmonotonic logic:
      Assume my car does not have a flat tire.
      Assume A25 works unless contradicted by evidence.
      Issues: What assumptions are reasonable? How to handle contradiction?

    Rules with fuzzy factors:
      A25 ↦0.3 get there on time
      Sprinkler ↦0.99 WetGrass
      WetGrass ↦0.7 Rain
      Issues: problems with combination, e.g., does Sprinkler cause Rain?

    Probability:
      Model the agent's degree of belief.
      Given the available evidence, A25 will get me there on time with probability 0.04.


    Artificial Intelligence: Uncertainty Slide: 7

    Probability

    Probabilistic assertions summarize effects of
      laziness: failure to enumerate exceptions, qualifications, etc.
      ignorance: lack of relevant facts, initial conditions, etc.

    Subjective probability:

    Probabilities relate propositions to agent's own state of knowledge

    e.g., P(A25 | no reported accidents) = 0.06

    These are not assertions about the world.

    Probabilities of propositions change with new evidence:

      e.g., P(A25 | no reported accidents, 5 a.m.) = 0.15


    Artificial Intelligence: Uncertainty Slide: 8

    Making decisions under uncertainty

    Suppose I believe the following:

      P(A25 gets me there on time | ...)   = 0.04
      P(A90 gets me there on time | ...)   = 0.70
      P(A120 gets me there on time | ...)  = 0.95
      P(A1440 gets me there on time | ...) = 0.9999

    Which action should I choose?

    That depends on my preferences for missing the flight vs. time spent
    waiting, etc.

    Utility theory is used to represent and infer preferences.

    Decision theory = probability theory + utility theory
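    A minimal sketch of this choice as an expected-utility calculation in
    Python. The on-time probabilities are the ones above; the utility numbers
    (cost of missing the flight, cost per minute of waiting) are invented
    purely for illustration and are not part of the lecture.

```python
# Choosing a departure time by maximizing expected utility.
p_on_time = {"A25": 0.04, "A90": 0.70, "A120": 0.95, "A1440": 0.9999}
wait_minutes = {"A25": 25, "A90": 90, "A120": 120, "A1440": 1440}

U_MISS_FLIGHT = -1000.0     # assumed utility of missing the flight
U_PER_MINUTE_WAIT = -1.0    # assumed utility of one minute spent waiting

def expected_utility(action: str) -> float:
    p = p_on_time[action]
    # Expected utility = P(miss) * penalty + cost of the time spent waiting.
    return (1.0 - p) * U_MISS_FLIGHT + wait_minutes[action] * U_PER_MINUTE_WAIT

for a in p_on_time:
    print(a, round(expected_utility(a), 1))
print("Best action under these assumed utilities:",
      max(p_on_time, key=expected_utility))
```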


    Artificial Intelligence: Uncertainty Slide: 9

    Syntax

    Basic element: random variable.

    Similar to propositional logic: possible worlds are defined by an
    assignment of values to random variables.

    Boolean random variables
      e.g., Cavity (do I have a cavity?)

    Discrete random variables
      e.g., Weather is one of <sunny, rainy, cloudy, snow>

    Domain values must be exhaustive and mutually exclusive.


    Artificial Intelligence: Uncertainty Slide: 10

    Syntax

    Elementary proposition: constructed by assignment of a value to a
    random variable:
      e.g., Weather = sunny, Cavity = false (abbreviated as ¬cavity)

    Complex propositions: formed from elementary propositions and
    standard logical connectives:
      e.g., Weather = sunny ∨ Cavity = false


    Artificial Intelligence: Uncertainty Slide: 11

    Syntax

    Atomic event: a complete specification of the state of the world about
    which the agent is uncertain.

    E.g., if the world consists of only two Boolean variables, Cavity and
    Toothache, then there are 4 distinct atomic events:

      Cavity = false ∧ Toothache = false
      Cavity = false ∧ Toothache = true
      Cavity = true  ∧ Toothache = false
      Cavity = true  ∧ Toothache = true

    Atomic events are mutually exclusive and exhaustive


    Artificial Intelligence: Uncertainty Slide: 12

    Axioms of probability

    For any propositions A, B:

      0 ≤ P(A) ≤ 1

      P(true) = 1 and P(false) = 0

      P(A ∨ B) = P(A) + P(B) − P(A ∧ B)


    Artificial Intelligence: Uncertainty Slide: 13

    Prior probability

    Prior or unconditional probabilities of propositions,
      e.g., P(Cavity = true) = 0.1 and P(Weather = sunny) = 0.72,
    correspond to belief prior to the arrival of any (new) evidence.

    A probability distribution gives values for all possible assignments.
    Weather's domain is <sunny, rainy, cloudy, snow>, so

      P(Weather) = <0.72, 0.1, 0.08, 0.1>

    (normalized, i.e., it sums to 1)


    Artificial Intelligence: Uncertainty Slide: 14

    Prior probability

    Joint probability distribution for a set of random variables gives the
    probability of every atomic event on those random variables.

    P(Weather, Cavity) = a 4 × 2 matrix of values:

                       Weather = sunny   rainy   cloudy   snow
      Cavity = true              0.144    0.02    0.016   0.02
      Cavity = false             0.576    0.08    0.064   0.08

    Every question about a domain can be answered by the joint distribution.
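    As a small illustration of "every question can be answered by the joint",
    here is a Python sketch that uses exactly the table above.

```python
# The joint distribution P(Weather, Cavity) from the table above, as a dict.
joint = {
    ("sunny", True): 0.144,  ("rainy", True): 0.02,
    ("cloudy", True): 0.016, ("snow", True): 0.02,
    ("sunny", False): 0.576, ("rainy", False): 0.08,
    ("cloudy", False): 0.064, ("snow", False): 0.08,
}

# Any question about the domain is a sum over the matching atomic events:
p_sunny = sum(p for (w, c), p in joint.items() if w == "sunny")         # 0.72
p_cavity = sum(p for (w, c), p in joint.items() if c)                   # 0.2
p_cavity_given_sunny = sum(
    p for (w, c), p in joint.items() if w == "sunny" and c) / p_sunny   # 0.2
print(p_sunny, p_cavity, round(p_cavity_given_sunny, 3))
```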


    Artificial Intelligence: Uncertainty Slide: 15

    Conditional probability

    Conditional or posterior probabilities,
      e.g., P(cavity | toothache) = 0.8,
    i.e., given that toothache is all I know.

    (Notation for conditional distributions:
     P(Cavity | Toothache) is a 2-element vector of 2-element vectors.)

    If we know more, e.g., cavity is also given, then we have
      P(cavity | toothache, cavity) = 1

    New evidence may be irrelevant, allowing simplification, e.g.,
      P(cavity | toothache, sunny) = P(cavity | toothache) = 0.8

    This kind of inference, sanctioned by domain knowledge, is crucial.


    Artificial Intelligence: Uncertainty Slide: 16

    Conditional probability

    Definition of conditional probability:

      P(a | b) = P(a ∧ b) / P(b),  provided P(b) > 0

    Product rule gives an alternative formulation:

      P(a ∧ b) = P(a, b) = P(a) P(b | a) = P(b) P(a | b)


    Artificial Intelligence: Uncertainty Slide: 17

    Conditional probability

    A general version holds for whole distributions, e.g.,

      P(Weather, Cavity) = P(Weather | Cavity) P(Cavity)

    (View this as a set of 4 × 2 equations, not matrix multiplication.)

    Chain rule is derived by successive application of the product rule:

      P(X1, ..., Xn) = P(X1, ..., Xn-1) P(Xn | X1, ..., Xn-1)
                     = P(X1, ..., Xn-2) P(Xn-1 | X1, ..., Xn-2) P(Xn | X1, ..., Xn-1)
                     = ...
                     = ∏_{i=1}^{n} P(Xi | X1, ..., Xi-1)


    Artificial Intelligence: Uncertainty Slide: 18

    Inference by enumeration

    Start with the joint probability distribution:

    For any proposition φ, sum the atomic events where it is true:

      P(φ) = Σ_{ω : ω ⊨ φ} P(ω)


    Artificial Intelligence: Uncertainty Slide: 19

    Inference by enumeration

    Start with the joint probability distribution:

    For any proposition φ, sum the atomic events where it is true:

      P(φ) = Σ_{ω : ω ⊨ φ} P(ω)

      P(toothache) = 0.108 + 0.012 + 0.016 + 0.064 = 0.2


    Artificial Intelligence: Uncertainty Slide: 21

    Inference by enumeration

    Start with the joint probability distribution:

    Can also compute conditional probabilities:

      P(cavity | toothache) = P(cavity, toothache) / P(toothache)

                            = (0.108 + 0.012) / (0.108 + 0.012 + 0.016 + 0.064)

                            = 0.6
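    A sketch of inference by enumeration in Python. The four Toothache = true
    entries are the numbers summed on these slides; the remaining four entries
    are assumed from the standard textbook table so that the joint sums to 1.

```python
# Inference by enumeration over the joint P(Toothache, Catch, Cavity).
joint = {
    (True, True, True): 0.108,   (True, False, True): 0.012,
    (True, True, False): 0.016,  (True, False, False): 0.064,
    (False, True, True): 0.072,  (False, False, True): 0.008,
    (False, True, False): 0.144, (False, False, False): 0.576,
}  # keys are (toothache, catch, cavity); Toothache=False rows are assumed

def prob(pred):
    """P(phi): sum the atomic events where the proposition pred holds."""
    return sum(p for event, p in joint.items() if pred(event))

p_toothache = prob(lambda e: e[0])                                      # 0.2
p_cavity_given_toothache = prob(lambda e: e[0] and e[2]) / p_toothache  # 0.6
print(p_toothache, p_cavity_given_toothache)
```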


    Artificial Intelligence: Uncertainty Slide: 22

    Inference by enumeration

    Start with the joint probability distribution:

    Can also compute conditional probabilities:

      P(cavity | toothache) + P(¬cavity | toothache) = 1


    Artificial Intelligence: Uncertainty Slide: 23

    Normalization

    Denominator can be viewed as a normalization constant α:

      P(Cavity | toothache) = α P(Cavity, toothache)
        = α [P(Cavity, toothache, catch) + P(Cavity, toothache, ¬catch)]
        = α [<0.108, 0.016> + <0.012, 0.064>]
        = α <0.12, 0.08> = <0.6, 0.4>

    General idea: compute the distribution on the query variable by fixing the
    evidence variables and summing over the hidden variables.


    Artificial Intelligence: Uncertainty Slide: 24

    Inference by enumeration, contd.

    Typically, we are interested in
    the posterior joint distribution of the query variables Y
    given specific values e for the evidence variables E.

    Let the hidden variables be H = X - Y - E.

    Then the required summation of joint entries is done by summing out the
    hidden variables:

      P(Y | E = e) = α P(Y, E = e) = α Σh P(Y, E = e, H = h)

    The terms in the summation are joint entries because Y, E and H together
    exhaust the set of random variables.

    Obvious problems:
    1. Worst-case time complexity O(d^n), where d is the largest arity
    2. Space complexity O(d^n) to store the joint distribution
    3. How to find the numbers for O(d^n) entries?
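    A generic sketch of this "fix the evidence, sum out the hidden variables,
    normalize" procedure; the function and variable names are illustrative,
    not from the slides, and the example joint reuses the dental table (with
    the Toothache = false entries assumed from the textbook).

```python
from itertools import product

def enumerate_query(query_var, evidence, variables, domains, joint):
    """Return P(query_var | evidence) by summing out the hidden variables.

    joint maps a tuple of values (in `variables` order) to its probability;
    evidence is a dict {variable: observed value}.
    """
    hidden = [v for v in variables if v != query_var and v not in evidence]
    dist = {}
    for q_val in domains[query_var]:
        total = 0.0
        for h_vals in product(*(domains[h] for h in hidden)):
            assignment = {**evidence, query_var: q_val, **dict(zip(hidden, h_vals))}
            total += joint[tuple(assignment[v] for v in variables)]
        dist[q_val] = total
    alpha = 1.0 / sum(dist.values())   # the normalization constant
    return {v: alpha * p for v, p in dist.items()}

# Example: P(Cavity | Toothache = true) with the dental joint.
variables = ["Toothache", "Catch", "Cavity"]
domains = {v: [True, False] for v in variables}
joint = {(True, True, True): 0.108,   (True, False, True): 0.012,
         (True, True, False): 0.016,  (True, False, False): 0.064,
         (False, True, True): 0.072,  (False, False, True): 0.008,
         (False, True, False): 0.144, (False, False, False): 0.576}
print(enumerate_query("Cavity", {"Toothache": True}, variables, domains, joint))
# -> {True: 0.6, False: 0.4}
```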


    Artificial Intelligence: Uncertainty Slide: 25

    Independence

    A and B are independent iff

      P(A|B) = P(A)   or   P(B|A) = P(B)   or   P(A, B) = P(A) P(B)

    P(Toothache, Catch, Cavity, Weather)
      = P(Toothache, Catch, Cavity) P(Weather)

    32 entries reduced to 12; for n independent biased coins, O(2^n) → O(n)

    Absolute independence is powerful but rare.

    Dentistry is a large field with hundreds of variables, none of which are
    independent. What to do?


    Artificial Intelligence: Uncertainty Slide: 26

    Conditional independence

    P(Toothache, Cavity, Catch) has 2^3 − 1 = 7 independent entries.

    If I have a cavity, the probability that the probe catches in it doesn't
    depend on whether I have a toothache:
      (1) P(catch | toothache, cavity) = P(catch | cavity)

    The same independence holds if I haven't got a cavity:
      (2) P(catch | toothache, ¬cavity) = P(catch | ¬cavity)

    Catch is conditionally independent of Toothache given Cavity:
      P(Catch | Toothache, Cavity) = P(Catch | Cavity)

    Equivalent statements:
      P(Toothache | Catch, Cavity) = P(Toothache | Cavity)
      P(Toothache, Catch | Cavity) = P(Toothache | Cavity) P(Catch | Cavity)


    Artificial Intelligence: Uncertainty Slide: 27

    Conditional independence contd.

    Write out the full joint distribution using the chain rule:

      P(Toothache, Catch, Cavity)
        = P(Toothache | Catch, Cavity) P(Catch, Cavity)
        = P(Toothache | Catch, Cavity) P(Catch | Cavity) P(Cavity)
        = P(Toothache | Cavity) P(Catch | Cavity) P(Cavity)

    In most cases, the use of conditional independence reduces the size of the
    representation of the joint distribution from exponential in n to linear in n.

    Conditional independence is our most basic and robust form of knowledge
    about uncertain environments.


    Artificial Intelligence: Uncertainty Slide: 28

    Bayes' Rule

    Product rule:

      P(a ∧ b) = P(a, b) = P(a) P(b | a) = P(b) P(a | b)

    Bayes' rule:

      P(a | b) = P(b | a) P(a) / P(b)

    or in distribution form:

      P(A | B) = P(B | A) P(A) / P(B) = α P(B | A) P(A)


    Artificial Intelligence: Uncertainty Slide: 29

    Bayes' Rule

    Usefulness: for assessing diagnostic probability from causal probability:

      P(cause | effect) = P(effect | cause) P(cause) / P(effect)


    Artificial Intelligence: Uncertainty Slide: 30

    Bayes' Rule

    Usefulness:

    Example: let M be meningitis (the cause):
      one person in 10,000 has meningitis, so P(M) = 0.0001.

    Let S be stiff neck (the effect):
      ten people in 100 have a stiff neck, so P(S) = 0.1.

    P(S|M): 80% of people affected by meningitis have a stiff neck, so P(S|M) = 0.8.

      P(M | S) = P(S | M) P(M) / P(S)
               = 0.8 × 0.0001 / 0.1
               = 0.0008

    Note: the posterior probability of meningitis is still very small!
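    The same calculation as a short Python check, using only the numbers from
    the slide.

```python
# Bayes' rule with the numbers from the slide.
p_m = 0.0001        # P(M): one person in 10,000 has meningitis
p_s = 0.1           # P(S): ten people in 100 have a stiff neck
p_s_given_m = 0.8   # P(S | M)

p_m_given_s = p_s_given_m * p_m / p_s
print(p_m_given_s)  # 0.0008 -- the posterior is still very small
```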


    Artificial Intelligence: Uncertainty Slide: 31

    Bayes' Rule and conditional independence

    P(Cavity | toothache ∧ catch)
      = α P(toothache ∧ catch | Cavity) P(Cavity)
      = α P(toothache | Cavity) P(catch | Cavity) P(Cavity)

    This is an example of a naive Bayes model:

      P(Cause, Effect1, ..., Effectn) = P(Cause) ∏i P(Effecti | Cause)

    Total number of parameters is linear in n.
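    A minimal naive Bayes sketch for the dental example. The conditional
    probabilities below are illustrative values consistent with the textbook
    dental joint, not numbers stated on this slide; the point is the
    "multiply the per-effect conditionals, then normalize" pattern.

```python
# Naive Bayes: P(Cause, Effect_1..n) = P(Cause) * prod_i P(Effect_i | Cause).
p_cavity = {True: 0.2, False: 0.8}                    # P(Cavity)      (assumed)
p_toothache_given_cavity = {True: 0.6, False: 0.1}    # P(toothache|C) (assumed)
p_catch_given_cavity = {True: 0.9, False: 0.2}        # P(catch|C)     (assumed)

# Unnormalized P(Cavity | toothache, catch)
unnorm = {c: p_toothache_given_cavity[c] * p_catch_given_cavity[c] * p_cavity[c]
          for c in (True, False)}
alpha = 1.0 / sum(unnorm.values())
posterior = {c: alpha * p for c, p in unnorm.items()}
print(posterior)   # Cavity=True gets most of the posterior mass (about 0.87)
```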


    Artificial Intelligence: Uncertainty Slide: 32

    Bayesian networks

    A simple, graphical notation for conditional independence assertions and
    hence for compact specification of full joint distributions.

    Syntax:
      a set of nodes, one per variable
      a directed, acyclic graph (link ≈ "directly influences")
      a conditional distribution for each node given its parents:
        P(Xi | Parents(Xi))


    Artificial Intelligence: Uncertainty Slide: 33

    Bayesian networks

    In the simplest case, the conditional distribution is represented as a
    conditional probability table (CPT), giving the distribution over Xi for
    each combination of parent values.


    Artificial Intelligence: Uncertainty Slide: 34

    Example

    Topology of the network encodes conditional independence assertions:

      Weather is independent of the other variables.

      Toothache and Catch are conditionally independent given Cavity.


    Artificial Intelligence: Uncertainty Slide: 35

    Example

    I'm at work; neighbor John calls to say my alarm is ringing, but neighbor
    Mary doesn't call. Sometimes the alarm is set off by minor earthquakes.
    Is there a burglar?

    Variables: Burglary, Earthquake, Alarm, JohnCalls, MaryCalls

    Network topology reflects "causal" knowledge:

      A burglar can set the alarm off.
      An earthquake can set the alarm off.
      The alarm can cause Mary to call.
      The alarm can cause John to call.


    Artificial Intelligence: Uncertainty Slide: 36

    Example contd.


    Artificial Intelligence: Uncertainty Slide: 37

    Compactness

    A CPT for Boolean Xi with k Boolean parents has 2^k rows for the
    combinations of parent values.

    Each row requires one number p for Xi = true (the number for Xi = false is
    just 1 − p).

    If each variable has no more than k parents, the complete network requires
    O(n · 2^k) numbers.

    I.e., it grows linearly with n, vs. O(2^n) for the full joint distribution.

    For the burglary net: 1 + 1 + 4 + 2 + 2 = 10 numbers (vs. 2^5 − 1 = 31).


    Artificial Intelligence: Uncertainty Slide: 38

    Semantics

    The full joint distribution is defined as the product of the local
    conditional distributions:

      P(X1, ..., Xn) = ∏_{i=1}^{n} P(Xi | Parents(Xi))

    e.g., P(j ∧ m ∧ a ∧ ¬b ∧ ¬e) = P(j | a) P(m | a) P(a | ¬b, ¬e) P(¬b) P(¬e)
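    A sketch of this product in Python for the burglary network. The CPT
    numbers are the usual textbook values for this network (the slide's CPT
    figure is not reproduced in this transcript), so treat them as assumptions.

```python
# Full-joint entry P(j, m, a, ~b, ~e) as a product of local conditionals.
# CPT values are the standard textbook numbers for the burglary network,
# assumed here because the slide's figure is missing from the transcript.
p_b, p_e = 0.001, 0.002
p_a_given = {(True, True): 0.95, (True, False): 0.94,
             (False, True): 0.29, (False, False): 0.001}   # P(A=true | B, E)
p_j_given_a = {True: 0.90, False: 0.05}                    # P(J=true | A)
p_m_given_a = {True: 0.70, False: 0.01}                    # P(M=true | A)

# P(j ^ m ^ a ^ ~b ^ ~e) = P(j|a) P(m|a) P(a|~b,~e) P(~b) P(~e)
p = (p_j_given_a[True] * p_m_given_a[True]
     * p_a_given[(False, False)] * (1 - p_b) * (1 - p_e))
print(p)   # about 0.00063
```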


    Artificial Intelligence: Uncertainty Slide: 39

    Constructing Bayesian networks

    1. Choose an ordering of variables X1, ..., Xn

    2. For i = 1 to n:
         add Xi to the network
         select parents from X1, ..., Xi-1 such that
           P(Xi | Parents(Xi)) = P(Xi | X1, ..., Xi-1)

    This choice of parents guarantees:

      P(X1, ..., Xn) = ∏_{i=1}^{n} P(Xi | X1, ..., Xi-1)    (chain rule)
                     = ∏_{i=1}^{n} P(Xi | Parents(Xi))      (by construction)


    Artificial Intelligence: Uncertainty Slide: 40

    Suppose we choose the ordering M, J, A, B, E

    P(J | M) = P(J)?

    Example


    Artificial Intelligence: Uncertainty Slide: 41

    Suppose we choose the ordering M, J, A, B, E

    P(J | M) = P(J)?  No
    P(A | J, M) = P(A | J)?
    P(A | J, M) = P(A)?

    Example


    Artificial Intelligence: Uncertainty Slide: 42

    Suppose we choose the ordering M, J, A, B, E

    P(J | M) = P(J)?  No
    P(A | J, M) = P(A | J)?
    P(A | J, M) = P(A)?  No

    P(B | A, J, M) = P(B | A)?
    P(B | A, J, M) = P(B)?

    Example


    Artificial Intelligence: Uncertainty Slide: 43

    Suppose we choose the ordering M, J, A, B, E

    P(J | M) = P(J)?  No
    P(A | J, M) = P(A | J)?
    P(A | J, M) = P(A)?  No

    P(B | A, J, M) = P(B | A)?  Yes
    P(B | A, J, M) = P(B)?  No

    P(E | B, A, J, M) = P(E | A)?
    P(E | B, A, J, M) = P(E | A, B)?

    Example


    Artificial Intelligence: Uncertainty Slide: 44

    Suppose we choose the ordering M, J, A, B, E

    P(J | M) = P(J)?  No
    P(A | J, M) = P(A | J)?
    P(A | J, M) = P(A)?  No

    P(B | A, J, M) = P(B | A)?  Yes
    P(B | A, J, M) = P(B)?  No

    P(E | B, A, J, M) = P(E | A)?  No
    P(E | B, A, J, M) = P(E | A, B)?  Yes

    Example


    Artificial Intelligence: Uncertainty Slide: 45

    Example contd.

    Deciding conditional independence is hard in noncausal directions.
    (Causal models and conditional independence seem hardwired for humans!)
    The network is less compact: 1 + 2 + 4 + 2 + 4 = 13 numbers needed.


    Artificial Intelligence: Uncertainty Slide: 46

    Bayesian Networks - Reasoning

    Example: consider the block-lifting problem.

    B: the battery is charged.

    L: the block is liftable.

    M: the arm moves.

    G: the gauge indicates that the battery is charged


    Artificial Intelligence: Uncertainty Slide: 47

    Bayesian Networks - Reasoning

    Network structure: B and L are root nodes; B is the parent of G,
    and B and L are the parents of M.

    CPTs:

      P(B) = 0.95
      P(L) = 0.7

      P(G|B) = 0.95        P(G|¬B) = 0.1

      P(M|B,L) = 0.9       P(M|B,¬L) = 0.05
      P(M|¬B,L) = 0.0      P(M|¬B,¬L) = 0.0


    Artificial Intelligence: Uncertainty Slide: 48

    Bayesian Networks - Reasoning

    Again, please note:

      p(G, M, B, L) = p(G|M,B,L) p(M|B,L) p(B|L) p(L)
                    = p(G|B) p(M|B,L) p(B) p(L)

    Specification:

      Traditional (full joint table): 16 rows
      Bayesian network: 8 numbers (the CPT entries on the previous slide)
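    A small Python sketch of this factorization, using the CPT values from the
    network slide; it checks that the 16 joint entries, specified by only 8
    CPT numbers, sum to 1.

```python
from itertools import product

# Block-lifting network from the slides: B and L are roots, B -> G, (B, L) -> M.
p_B, p_L = 0.95, 0.7
p_G_given_B = {True: 0.95, False: 0.1}                  # P(G=true | B=b)
p_M_given = {(True, True): 0.9, (True, False): 0.05,
             (False, True): 0.0, (False, False): 0.0}   # P(M=true | B=b, L=l)

def bern(p_true, value):
    """P(X = value) for a Boolean X with P(X = true) = p_true."""
    return p_true if value else 1.0 - p_true

def joint(g, m, b, l):
    """p(G=g, M=m, B=b, L=l) = p(G|B) p(M|B,L) p(B) p(L)."""
    return (bern(p_G_given_B[b], g) * bern(p_M_given[(b, l)], m)
            * bern(p_B, b) * bern(p_L, l))

# The 16 joint entries sum to 1 (up to floating-point rounding).
print(sum(joint(*vals) for vals in product([True, False], repeat=4)))
```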


    Artificial Intelligence: Uncertainty Slide: 49

    Bayesian Networks - Reasoning

    Reasoning: top-down

    Example: if the block is liftable, compute the probability of the arm
    moving, i.e., compute p(M | L).


    Artificial Intelligence: Uncertainty Slide: 50

    Bayesian Networks - Reasoning

    Reasoning: top-down

    Solution:

    Insert the parent node B:
      p(M|L) = p(M,B|L) + p(M,¬B|L)

    Use the chain rule:
      p(M|L) = p(M|B,L) p(B|L) + p(M|¬B,L) p(¬B|L)

    Remove the independent node:
      p(B|L) = p(B)                (B has no parent; B and L are independent)
      p(¬B|L) = p(¬B) = 1 − p(B)


    Artificial Intelligence: Uncertainty Slide: 51

    Bayesian Networks - Reasoning

    Reasoning: top-down

    Solution:

      p(M|L) = p(M|B,L) p(B) + p(M|¬B,L) (1 − p(B))
             = 0.9 × 0.95 + 0.0 × (1 − 0.95)
             = 0.855
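    The same top-down computation as a short Python check, using the CPT
    values from the network slide.

```python
# Top-down query p(M | L): sum the parent B in and out.
p_B = 0.95
p_M_given_B_L = 0.9      # P(M | B, L)
p_M_given_notB_L = 0.0   # P(M | ~B, L)

p_M_given_L = p_M_given_B_L * p_B + p_M_given_notB_L * (1 - p_B)
print(p_M_given_L)  # 0.855
```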


    Artificial Intelligence: Uncertainty Slide: 52

    Bayesian Networks - Reasoning

    Reasoning: bottom-up

    Example: if the arm cannot move (¬M), compute the probability that the
    block is not liftable, i.e., compute p(¬L | ¬M).


    Artificial Intelligence: Uncertainty Slide: 53

    Bayesian Networks - Reasoning

    Reasoning: bottom-up

    Use Bayes' rule:

      p(¬L|¬M) = p(¬M|¬L) p(¬L) / p(¬M)

    Compute p(¬M|¬L) by top-down reasoning (exercise, as on the previous slides):

      p(¬M|¬L) = 0.9525

    p(¬L) = 1 − p(L) = 1 − 0.7 = 0.3

    Therefore:

      p(¬L|¬M) = 0.9525 × 0.3 / p(¬M) = 0.28575 / p(¬M)


    Artificial Intelligence: Uncertainty Slide: 54

    Bayesian Networks - Reasoning

    Reasoning: bottom-up

    Compute the complementary component. We have

      p(¬M|L) = 1 − p(M|L) = 1 − 0.855 = 0.145

      p(L|¬M) = p(¬M|L) p(L) / p(¬M) = 0.145 × 0.7 / p(¬M) = 0.1015 / p(¬M)

    Using p(L|¬M) + p(¬L|¬M) = 1:

      p(¬M) = 0.1015 + 0.28575 = 0.38725

    and therefore

      p(¬L|¬M) = 0.28575 / 0.38725 ≈ 0.738
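    A brute-force enumeration check of this bottom-up query, using only the
    CPT numbers from the network slide.

```python
from itertools import product

# Bottom-up query p(~L | ~M), via the joint p(M, B, L) = p(M|B,L) p(B) p(L).
p_B, p_L = 0.95, 0.7
p_M_given = {(True, True): 0.9, (True, False): 0.05,
             (False, True): 0.0, (False, False): 0.0}   # P(M=true | B=b, L=l)

def bern(p_true, value):
    return p_true if value else 1.0 - p_true

def joint(m, b, l):
    return bern(p_M_given[(b, l)], m) * bern(p_B, b) * bern(p_L, l)

p_notM = sum(joint(False, b, l) for b, l in product([True, False], repeat=2))
p_notM_and_notL = sum(joint(False, b, False) for b in (True, False))
print(p_notM, p_notM_and_notL / p_notM)  # 0.38725 and about 0.738
```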


    Artificial Intelligence: Uncertainty Slide: 55

    Bayesian Networks - Reasoning

    Reasoning: explanation

    Example: if we know ¬B (the battery is not charged) and the arm does not
    move (¬M), compute p(L | ¬B, ¬M).


    Artificial Intelligence: Uncertainty Slide: 56

    Bayesian Networks - Reasoning

    Reasoning: explanation

      p(¬L|¬B,¬M) = p(¬M,¬B|¬L) p(¬L) / p(¬B,¬M)

                  = p(¬M|¬B,¬L) p(¬B|¬L) p(¬L) / p(¬B,¬M)

                  = p(¬M|¬B,¬L) p(¬B) p(¬L) / p(¬B,¬M)        (because B and L are independent)

                  = [1 − p(M|¬B,¬L)] [1 − p(B)] [1 − p(L)] / p(¬B,¬M)

                  = [1 − 0.0] × [1 − 0.95] × [1 − 0.7] / p(¬B,¬M)

                  = 0.015 / p(¬B,¬M)


    Artificial Intelligence: Uncertainty Slide: 57

    Bayesian Networks - Reasoning

    Reasoning: explanation

      p(L|¬B,¬M) = p(¬M,¬B|L) p(L) / p(¬B,¬M)

                 = p(¬M|¬B,L) p(¬B|L) p(L) / p(¬B,¬M)

                 = p(¬M|¬B,L) p(¬B) p(L) / p(¬B,¬M)           (because B and L are independent)

                 = [1 − p(M|¬B,L)] [1 − p(B)] p(L) / p(¬B,¬M)

                 = [1 − 0.0] × [1 − 0.95] × 0.7 / p(¬B,¬M)

                 = 0.035 / p(¬B,¬M)


    Artificial Intelligence: Uncertainty Slide: 58

    Bayesian Networks - Reasoning

    Reasoning: explanation

    Since p(¬L|¬B,¬M) + p(L|¬B,¬M) = 1:

      0.015 / p(¬B,¬M) + 0.035 / p(¬B,¬M) = 1

      p(¬B,¬M) = 0.015 + 0.035 = 0.05

    Hence

      p(¬L|¬B,¬M) = 0.015 / 0.05 = 0.3

      p(L|¬B,¬M) = 0.035 / 0.05 = 0.7
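    The same explanation query checked numerically, with the CPT values from
    the network slide.

```python
# Explanation query p(L | ~B, ~M), using the CPT values from the network slide.
p_B, p_L = 0.95, 0.7
p_M_given_notB_L = 0.0      # P(M | ~B, L)
p_M_given_notB_notL = 0.0   # P(M | ~B, ~L)

p_notM_notB_L = (1 - p_M_given_notB_L) * (1 - p_B) * p_L              # 0.035
p_notM_notB_notL = (1 - p_M_given_notB_notL) * (1 - p_B) * (1 - p_L)  # 0.015
p_notB_notM = p_notM_notB_L + p_notM_notB_notL                        # 0.05

print(p_notM_notB_L / p_notB_notM,     # p(L | ~B, ~M)  = 0.7
      p_notM_notB_notL / p_notB_notM)  # p(~L | ~B, ~M) = 0.3
```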


    Artificial Intelligence: Uncertainty Slide: 59

    Summary

    Probability is a rigorous formalism for uncertain knowledge.

    The joint probability distribution specifies the probability of every
    atomic event.

    Queries can be answered by summing over atomic events.

    For nontrivial domains, we must find a way to reduce the joint size.

    Independence and conditional independence provide the tools.


    Artificial Intelligence: Uncertainty Slide: 60

    Summary

    Bayesian networks provide a natural

    representation for (causally induced) conditional

    independence

    Topology + CPTs = compact representation of

    joint distribution

    Generally easy for domain experts to construct