Uncertain Knowledge Representation

Post on 01-Jan-2016


TRANSCRIPT

<ul><li><p>Uncertain Knowledge Representation</p><p>CPSC 386 Artificial Intelligence. Ellen Walker, Hiram College.</p></li><li><p>Reasoning Under Uncertainty</p><p>We have no way of getting complete information (e.g., limited sensors). We don't have time to wait for complete information (e.g., while driving). We can't know the result of an action until after having done it (e.g., rolling a die). There are too many unlikely events to consider (e.g., "I will drive home, unless my car breaks down, or unless a natural disaster destroys the roads").</p></li><li><p>But...</p><p>A decision must be made! No intelligent system can afford to consider all eventualities, wait until all the data is in and complete, or try all possibilities to see what happens.</p></li><li><p>So...</p><p>We must be able to reason about the likelihood of an event, and we must be able to consider partial information as part of larger decisions. But we are lazy (too many options to list, most of them unlikely) and we are ignorant: there is no complete theory to work from, and all possible observations haven't been made.</p></li><li><p>Quick Overview of Reasoning Systems</p><p>Logic: true or false, nothing in between.
No uncertainty. Non-monotonic logic: true or false, but new information can change it. Probability: degree of belief, but in the end it's either true or false. Fuzzy: degree of belief, allows overlapping of true and false states.</p></li><li><p>Examples</p><p>Logic: "Rain is precipitation." Non-monotonic: "It is currently raining." Probability: "It will rain tomorrow (70% chance)." Fuzzy: "It is raining (0.5 hard / 0.8 very hard / 0.2 a little)."</p></li><li><p>Non-Monotonic Logic</p><p>Once true (or false) does not mean always true (or false); as information arrives, truth values can change (Penelope is a bird, a penguin, a magic penguin). Implementations (you are not responsible for details): Circumscription: Bird(x) and not abnormal(x) -&gt; flies(x); we can assume not abnormal(x) unless we know abnormal(x). Default logic: x is true given that x does not conflict with anything we already know.</p></li><li><p>Truth Maintenance Systems</p><p>These systems allow truth values to be changed during reasoning (belief revision). When we retract a fact, we must also retract any other fact that was derived from it. Penelope is a bird (can fly). Penelope is a penguin (cannot fly). Penelope is magical (can fly). Retract "Penelope is magical" (cannot fly). Retract "Penelope is a penguin" (can fly).</p></li><li><p>Types of TMS</p><p>Justification-based TMS (JTMS): for each fact, track its justification (the facts and rules from which it was derived). When a fact is retracted, retract all facts that have justifications leading back to that fact, unless they have independent justifications. Each sentence is labeled IN or OUT. Assumption-based TMS (ATMS): represent all possible states simultaneously; when a fact is retracted, change state sets. For each fact, keep the list of assumptions that make that fact true; each world state is a set of assumptions.</p></li><li><p>TMS Example (Quine &amp; Ullman 1978)</p><p>Abbot, Babbit &amp; Cabot are murder suspects. Abbot's alibi: at the hotel (register). Babbit's alibi: visiting his brother-in-law (testimony). Cabot's alibi: watching a ski race. Who committed the murder? New evidence comes in: a TV video
shows Cabot at the ski race. Now, who committed the murder?</p></li><li><p>JTMS Example</p><p>Each belief has justifications (+ and -), and we mark each fact as IN or OUT. Suspect Abbot (IN), justified (+) by Beneficiary Abbot (IN) and by Alibi Abbot being OUT.</p></li><li><p>Revised Justification</p><p>Suspect Abbot (OUT); Beneficiary Abbot (IN); Alibi Abbot (IN), justified (+) by Registered (IN) and Far Away (IN), with Forged (OUT).</p></li><li><p>ATMS Example (Partial)</p><p>List all possible assumptions (e.g., A1: the register was forged; A2: the register was not forged). Consider all possible facts (e.g., "Abbot was at the hotel"). For each fact, determine all possible sets of assumptions that would make it valid. E.g., "Abbot was at the hotel" holds in all sets that include A2 but not A1.</p></li><li><p>JTMS vs. ATMS</p><p>JTMS is sequential: with each new fact, update the current set of beliefs. ATMS is pre-compiled: determine the correct set of beliefs for each fact in advance. When you have actual facts, find the set of beliefs that is consistent with all of them (the intersection of the sets for each fact).</p></li><li><p>Probability</p><p>The likelihood of an event occurring, represented as a percentage of observed events over total observations. E.g.:
I have a bag containing red &amp; black balls. I pull 8 balls from the bag (replacing the ball each time); 6 are red and 2 are black. Assume 75% of the balls are red and 25% are black; then the probability of the next ball being red is 75%.</p></li><li><p>More Examples</p><p>There are 52 cards in a deck, 4 suits (2 red, 2 black). What is the probability of picking a red card? (26 red cards) / (52 cards) = 50%. What is the probability of picking 2 red cards? 50% for the first card, (25 red cards) / (51 cards) for the second; multiply for the total result: (26*25) / (52*51).</p></li><li><p>Basic Probability Notation</p><p>Proposition: an assertion like "the card is red". Random variable: describes an event we want to know the outcome of, like ColorofCardPicked; its domain is a set of values such as {red, black}. Unconditional (prior) probability P(A): in the absence of other information. Conditional probability P(A | B): based on specific prior knowledge.</p></li><li><p>Some Important Equations</p><p>P(true) = 1; P(false) = 0; 0 &lt;= P(A) &lt;= 1.</p></li><li><p>Conditional &amp; Unconditional Probabilities in the Example</p><p>Unconditional: P(Color2ndCard = red) = 50%, with no other knowledge. Conditional: P(Color2ndCard = red | Color1stCard = red) = 25/51. Knowing the first card was red gives more information (a lower likelihood of the 2nd card being red). The bar is read "given".</p></li><li><p>Computing Conditional Probabilities</p><p>P(A|B) = P(A ^ B) / P(B). The probability that the 2nd card is red given that the first card was red is (the probability that both cards are red) / (the probability that the 1st card is red).</p><p>P(CarWontStart | NoGas) = P(CarWontStart ^ NoGas) / P(NoGas). P(NoGas | CarWontStart) = P(CarWontStart ^ NoGas) / P(CarWontStart).</p></li><li><p>Product Rule and Independence</p><p>P(A^B) = P(A|B) * P(B) (just an algebraic manipulation). Two events are independent if P(A|B) = P(A). E.g.,
two consecutive coin flips are independent. If events are independent, we can multiply their probabilities: P(A^B) = P(A)*P(B) when A and B are independent.</p></li><li><p>Back to Conditional Probabilities</p><p>P(CarWontStart | NoGas) predicts a symptom based on an underlying cause. These can be generated empirically (drain N gas tanks, see how many cars start). P(NoGas | CarWontStart) is a good example of diagnosis: we have a symptom and want to predict the cause. We can't measure these directly.</p></li><li><p>Bayes' Rule</p><p>P(A^B) = P(A|B) * P(B) = P(B|A) * P(A)</p><p>So</p><p>P(A|B) = P(B|A) * P(A) / P(B)</p><p>This allows us to compute diagnostic probabilities from causal probabilities and prior probabilities!</p></li><li><p>Bayes' Rule for Diagnosis</p><p>P(measles) = 0.1; P(chickenpox) = 0.4; P(allergy) = 0.6. P(spots | measles) = 1.0; P(spots | chickenpox) = 0.5; P(spots | allergy) = 0.2. (Assume the diseases are independent.) What is P(measles | spots)?</p></li><li><p>P(spots)</p><p>P(spots) was not given. We can estimate it with the following (unlikely) assumptions: the three listed diseases are independent, and no one will have two or more of them; and there are no other causes or co-factors for spots, i.e., P(spots | none-of-the-above) = 0. Then we can say that P(spots) = P(spots ^ measles) + P(spots ^ chickenpox) + P(spots ^ allergy) = 0.42.</p></li><li><p>Combining Evidence</p><p>Multiple sources of evidence can lead to the same conclusion (e.g., ache -&gt; flu and fever -&gt; flu), and a chain of evidence can lead to a conclusion (e.g., thermometer reading -&gt; fever -&gt; flu).</p><p>Speaker notes from earlier slides: Birds fly; penguins do not fly; magical penguins do fly. We assume one of the suspects did it, so which of our beliefs should we change? What happens when we discover the register was forged? In general, we track all facts; we can also define a special fact, Contradiction, that is IN when a given set of facts are all OUT, i.e., one of the suspects must have done it! But I could be wrong, of course; the more times I try, the more likely it is that I wasn't just lucky.</p></li></ul>
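The JTMS behavior described in the slides (IN/OUT labels, retractions propagating through justifications) can be sketched in Python. This is a simplified illustration rather than a full JTMS: the class name, the fact strings, and the relabel-to-fixed-point routine are choices made for this sketch, with the facts following the Abbot story.

```python
# Minimal JTMS sketch: a fact is IN if at least one of its justifications has
# all its positive (in_list) antecedents IN and all its negative (out_list)
# antecedents OUT. A premise (empty justification) is always IN.

class JTMS:
    def __init__(self):
        self.justifications = {}   # fact -> list of (in_list, out_list)

    def add(self, fact, in_list=(), out_list=()):
        self.justifications.setdefault(fact, []).append(
            (tuple(in_list), tuple(out_list)))

    def retract(self, fact):
        self.justifications.pop(fact, None)

    def labels(self):
        # Relabel from scratch until a fixed point; fine for small examples.
        status = {f: False for f in self.justifications}
        changed = True
        while changed:
            changed = False
            for fact, justs in self.justifications.items():
                new = any(all(status.get(f, False) for f in ins) and
                          not any(status.get(f, False) for f in outs)
                          for ins, outs in justs)
                if new != status[fact]:
                    status[fact] = new
                    changed = True
        return status               # unknown facts default to OUT

tms = JTMS()
tms.add("Beneficiary Abbot")                      # premise: always IN
tms.add("Suspect Abbot", in_list=["Beneficiary Abbot"],
        out_list=["Alibi Abbot"])                 # IN while the alibi is OUT
print(tms.labels()["Suspect Abbot"])              # Abbot is a suspect

tms.add("Registered")                             # hotel register entry
tms.add("Alibi Abbot", in_list=["Registered"], out_list=["Forged"])
print(tms.labels()["Suspect Abbot"])              # alibi now IN, suspect OUT

tms.add("Forged")                                 # the register was forged
print(tms.labels()["Suspect Abbot"])              # alibi OUT again, suspect IN
```

The last step answers the slide's question about the forged register: adding Forged as a premise knocks Alibi Abbot OUT, which brings Suspect Abbot back IN without touching any other belief.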
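The ATMS side can be sketched the same way, under the simplification that each fact is pre-compiled to the list of assumption environments that support it. The assumption names A1 and A2 come from the register example; the second fact name is invented for the demo.

```python
from itertools import combinations

# Assumptions from the slide: A1 = register was forged, A2 = not forged.
assumptions = {"A1", "A2"}

def environments():
    # All subsets of the assumptions (candidate world states).
    return [set(c) for r in range(len(assumptions) + 1)
            for c in combinations(sorted(assumptions), r)]

# Pre-compile: for each fact, the environments in which it holds.
# "Abbot was at hotel" holds in every environment with A2 but not A1.
support = {
    "Abbot was at hotel":
        [e for e in environments() if "A2" in e and "A1" not in e],
    "Register entry is trustworthy":
        [e for e in environments() if "A1" not in e],
}

def consistent_worlds(facts):
    # Query time: worlds consistent with all observed facts, i.e. the
    # intersection of the pre-compiled support sets.
    return [e for e in environments()
            if all(e in support[f] for f in facts)]

print(consistent_worlds(["Abbot was at hotel",
                         "Register entry is trustworthy"]))  # [{'A2'}]
```

This mirrors the JTMS-vs-ATMS slide: all the work happens up front in `support`, and answering a query is just set intersection.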
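The card arithmetic in the probability slides is easy to check mechanically; this snippet just verifies that the product rule and the conditional-probability definition reproduce the slides' fractions.

```python
from fractions import Fraction

# P(1st card red): 26 red cards out of 52.
p_first_red = Fraction(26, 52)
# P(2nd red | 1st red): 25 red cards left out of 51.
p_second_red_given_first = Fraction(25, 51)

# Product rule: P(A ^ B) = P(A | B) * P(B).
p_both_red = p_second_red_given_first * p_first_red
print(p_both_red)                   # 25/102, i.e. (26*25) / (52*51)

# Definition of conditional probability: P(A | B) = P(A ^ B) / P(B).
print(p_both_red / p_first_red)     # 25/51, recovering the conditional
```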
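The measles question can be answered with the numbers on the slides: under the stated (unlikely) assumptions, the total-probability sum gives P(spots) = 0.42, and Bayes' rule then yields P(measles | spots).

```python
# Priors and causal (symptom-given-disease) probabilities from the slides.
p_disease = {"measles": 0.1, "chickenpox": 0.4, "allergy": 0.6}
p_spots_given = {"measles": 1.0, "chickenpox": 0.5, "allergy": 0.2}

# Total probability, assuming no co-occurring diseases and no other
# causes of spots: P(spots) = sum over d of P(spots | d) * P(d).
p_spots = sum(p_spots_given[d] * p_disease[d] for d in p_disease)
print(round(p_spots, 2))                    # 0.42

# Bayes' rule: P(measles | spots) = P(spots | measles) * P(measles) / P(spots).
p_measles_given_spots = p_spots_given["measles"] * p_disease["measles"] / p_spots
print(round(p_measles_given_spots, 3))      # 0.238
```

Note that P(measles | spots) is only about 0.24 even though P(spots | measles) = 1.0: the low prior on measles dominates, which is exactly why Bayes' rule is needed for diagnosis.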
