
Uploaded by allen-tyler, 11-Jan-2016

Page 1

Topics

• Combining probability and first-order logic– BLOG and DBLOG

• Learning very complex behaviors– ALisp: hierarchical RL with partial programs

• State estimation for neurotrauma patients– Joint w/ Geoff Manley (UCSF), Intel, Omron

• Metareasoning and bounded optimality

• Transfer learning (+ Jordan, Bartlett, MIT, SU, OSU)

• Knowing everything on the web

• Human-level AI

Page 2

[Diagram: bedside data streams — Heart rate, Blood pressure, Oxygen saturation, Pulmonary artery pressure, Intracranial pressure, Temperature, Tissue oxygen, Inspired oxygen, Tidal volume, Peak pressure, End expiratory pressure, Respiratory rate, Ventilation mode, Cardiac output, Sedation level, ICP wave; Nursing documentation: Medications, Treatments, Intake/Output, Vital signs, Blood products, IV fluids]

State estimation: 3x5 index card

Page 3

Patient 2-13

Page 4

Dynamic Bayesian Networks

Page 5

DBNs (continued)

Page 6

Research plan

• DBN model: ~200 core state variables, ~500 sensor-related variables

• Learn model parameter distributions from DB

• Infer patient-specific parameters online

• Goals:

– Improved alarms

– Diagnostic state estimation => improved treatment

– Solve the treatment POMDP

– Structure discovery => better understanding of physiology
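The "infer patient-specific parameters online" step can be made concrete with a particle filter over a toy one-variable DBN. This is a minimal sketch, not the project's ~700-variable model: the random-walk dynamics, the Gaussian sensor noise, and all numeric constants here are illustrative assumptions, not clinical values.

```python
import math
import random

# Toy DBN: latent heart rate hr_t evolves as a random walk; the bedside
# monitor reads it with Gaussian noise. All numbers are illustrative.
def transition(hr):
    return hr + random.gauss(0.0, 2.0)                   # state evolution model

def obs_likelihood(reading, hr):
    return math.exp(-0.5 * ((reading - hr) / 5.0) ** 2)  # Gaussian sensor model

def particle_filter(readings, n=1000):
    particles = [random.gauss(80.0, 10.0) for _ in range(n)]   # prior sample
    for y in readings:
        particles = [transition(p) for p in particles]         # predict
        weights = [obs_likelihood(y, p) for p in particles]    # weight by evidence
        particles = random.choices(particles, weights=weights, k=n)  # resample
        yield sum(particles) / n                               # posterior mean

random.seed(0)
estimates = list(particle_filter([82, 85, 90, 120, 118]))
```

The posterior mean tracks the readings but smooths abrupt jumps, which is the behavior an improved-alarm system would exploit (a sudden sensor spike gets less weight than a sustained trend).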

Page 7

Possible worlds

• Propositional

• First-order + unique names, domain closure

• First-order open-world

[Diagrams: sets of possible worlds over objects A, B, C, D for each of the three cases]
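The difference between the three cases can be made concrete by counting worlds for a tiny vocabulary. The sketch below (my illustration, not from the slides) assumes a single binary relation R: under unique names and domain closure the worlds are just the truth assignments to the ground atoms over a fixed domain, while in the open-world case the number of objects is itself unknown, so there is no finite enumeration.

```python
from itertools import product

domain = ["A", "B", "C", "D"]
atoms = [(x, y) for x in domain for y in domain]   # ground atoms R(x, y)

# Closed-world, unique-names case: a possible world is one truth
# assignment to the 16 ground atoms over the fixed domain {A, B, C, D}.
closed_worlds = list(product([False, True], repeat=len(atoms)))
n_closed = len(closed_worlds)                      # 2**16 = 65536 worlds

# Open-world case: worlds with 1, 2, 3, ... objects all count, so even
# before assigning R there are infinitely many candidate domain sizes.
def worlds_with_n_objects(n):
    return 2 ** (n * n)                            # assignments to R over n objects
```

Summing `worlds_with_n_objects(n)` over all n diverges, which is exactly why open-world models need the measure-theoretic care that BLOG's semantics provides.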

Page 8

Example: Citation Matching

[Lashkari et al 94] Collaborative Interface Agents, Yezdi Lashkari, Max Metral, and Pattie Maes, Proceedings of the Twelfth National Conference on Articial Intelligence, MIT Press, Cambridge, MA, 1994.

Metral M. Lashkari, Y. and P. Maes. Collaborative interface agents. In Conference of the American Association for Artificial Intelligence, Seattle, WA, August 1994.

Are these descriptions of the same object? What authors and papers actually exist, with what attributes? Who wrote which papers?

General problem: raw data -> relational KB

Other examples: multitarget tracking, vision, NLP

Approach: formal language for specifying first-order open-world probability models

Page 9

BLOG generative process

• Number statements describe steps that add some objects to the world

• Dependency statements describe steps that set the value of a function or relation on a tuple of arguments

• Includes setting the referent of a constant symbol (0-ary function)

• Both types may condition on existence and properties of previously added objects

Page 10

BLOG model (simplified)

guaranteed Citation Cit1, Cit2, Cit3, Cit4, Cit5, Cit6, Cit7;

#Researcher ~ NumResearchersPrior();

Name(r) ~ NamePrior();

#Paper(FirstAuthor = r) ~ NumPapersPrior(Position(r));

Title(p) ~ TitlePrior();

PubCited(c) ~ Uniform({Paper p});

Text(c) ~ NoisyCitationGrammar(Name(FirstAuthor(PubCited(c))), Title(PubCited(c)));
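One way to read this model is as a sampling program: number statements add objects, dependency statements fill in attributes. The Python sketch below mirrors that reading with made-up priors; the name list, the paper-count range, and the title-garbling stand-in for NoisyCitationGrammar are all illustrative assumptions, not the actual BLOG distributions.

```python
import random

random.seed(1)
name_prior = ["Y. Lashkari", "M. Metral", "P. Maes"]   # toy NamePrior

# Step 1: number statements decide which objects exist.
n_researchers = random.randint(1, 3)                   # #Researcher ~ prior
researchers = list(range(n_researchers))

# Step 2: dependency statements set attributes of those objects.
name = {r: random.choice(name_prior) for r in researchers}   # Name(r)

papers = []
for r in researchers:                                  # #Paper(FirstAuthor = r)
    for _ in range(random.randint(1, 2)):
        papers.append({"author": r,
                       "title": f"Paper-{len(papers)}"})     # Title(p)

# Step 3: generate the observed citations (Cit1..Cit7 are guaranteed).
citations = []
for c in range(7):
    p = random.choice(papers)                          # PubCited(c) ~ Uniform
    text = f"{name[p['author']]}. {p['title']}"        # noisy-grammar stand-in
    if random.random() < 0.2:
        text = text.replace("Paper", "Papr")           # occasional corruption
    citations.append(text)
```

Inference then runs this process in reverse: given only `citations`, recover how many researchers and papers exist and which citation cites which paper.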

Page 11

Basic results

• Theorem 1: Every well-formed* BLOG model specifies a unique distribution over possible worlds

• The probability of each (finite) world is given by a product of the relevant conditional probabilities from the model

• Theorem 2: For any well-formed BLOG model, there are algorithms (LW, MCMC) that converge to the correct probability for any query, using finite time per sampling step
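The LW (likelihood weighting) algorithm mentioned in Theorem 2 is easy to see on a two-node toy model: sample the unobserved variables from their priors and weight each sample by the probability of the evidence. This is a generic illustration of the estimator, not BLOG's implementation; the Bernoulli parameters are arbitrary.

```python
import random

random.seed(0)

# Toy model: X ~ Bernoulli(0.3); P(Y=1 | X=1) = 0.9, P(Y=1 | X=0) = 0.2.
# Each sample draws X from its prior and weights by P(evidence Y=1 | X).
def lw_estimate(n=100_000):
    num = den = 0.0
    for _ in range(n):
        x = random.random() < 0.3
        w = 0.9 if x else 0.2          # weight = P(evidence | sampled world)
        num += w * x
        den += w
    return num / den                   # estimate of P(X=1 | Y=1)

est = lw_estimate()
# Exact posterior: 0.3*0.9 / (0.3*0.9 + 0.7*0.2) = 0.27/0.41 ≈ 0.659
```

The convergence claim in the theorem is exactly this behavior at scale: as the number of samples grows, the weighted estimate tends to the true posterior, and each sample takes only finite work even when the model has infinitely many potential variables.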

Page 12

Citation Matching Results

Four data sets of ~300-500 citations, referring to ~150-300 papers

[Bar chart: error (fraction of clusters not recovered correctly), 0 to 0.25, on four data sets (Reinforce, Face, Reason, Constraint), comparing Phrase Matching [Lawrence et al. 1999], Generative Model + MCMC [Pasula et al. 2002], and Conditional Random Field [Wellner et al. 2004]]

Page 13

DBLOG

• BLOG allows for temporal models – time is just a logical variable over an infinite set

• Inference works (only finitely many relevant random variables) but is grossly inefficient

• DBLOG includes time as a distinguished type and predecessor as a distinguished function; implements special-purpose inference:

– Particle filter for temporally varying relations

– Decayed MCMC for atemporal relations
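The "decayed" part of decayed MCMC can be sketched in isolation: when the chain is at time T, the proposal picks an earlier timestep to revisit with probability that falls off polynomially in the lag, so recent variables are resampled far more often than old ones. The code below shows only that proposal schedule (the decay exponent 1.5 is an arbitrary choice for illustration), not the full MCMC kernel.

```python
import random

random.seed(0)

# Pick a past timestep t <= T to resample, with probability proportional
# to 1 / (T - t + 1)**alpha: recent steps dominate, old steps decay away.
def pick_timestep(T, alpha=1.5):
    weights = [1.0 / (T - t + 1) ** alpha for t in range(T + 1)]
    return random.choices(range(T + 1), weights=weights, k=1)[0]

picks = [pick_timestep(100) for _ in range(1000)]
recent = sum(1 for t in picks if t > 90)   # most proposals hit the last 10 steps
```

Because the per-step work concentrates on recent variables while still occasionally revisiting old ones, the cost per update stays bounded as T grows, which is what makes the combination with the particle filter practical.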

Page 14

Open Problems

• Inference

– Applying “lifted” inference to BLOG (like Prolog)

– Approximation algorithms for problems with huge/growing numbers of objects

• Knowledge representation

– Hierarchical activity models

– Undirected submodels

– Nonparametric extensions (cf. de Freitas, 2005)