Multistrategy Rule Refinement

Gheorghe Tecuci, Learning Agents Laboratory, Department of Computer Science, George Mason University. [email protected], http://lalab.gmu.edu/. CS 785, Fall 2001.


Page 1: Multistrategy Rule Refinement

G.Tecuci, Learning Agents Laboratory

Learning Agents Laboratory, Department of Computer Science

George Mason University

Gheorghe Tecuci, [email protected], http://lalab.gmu.edu/

CS 785, Fall 2001

Page 2: Multistrategy Rule Refinement

Overview

Integrated modeling, learning, and solving

Illustration of rule refinement in the COA domain

Illustration of rule refinement in other domains

Required reading

The rule refinement method

Characterization of the PVS learning method

Hands-on experience: Problem solving and learning

Page 3: Multistrategy Rule Refinement

The rule refinement method

General presentation of the rule refinement method

Rule refinement with a positive example

Rule refinement with a negative example

The rule refinement problem

Characterization of the learned PVS rule

Page 4: Multistrategy Rule Refinement

The rule refinement problem

GIVEN:

• a plausible version space rule R;

• a positive or a negative example E of the rule (i.e. a correct or an incorrect problem solving episode that has the same IF and THEN tasks as R);

• a knowledge base that includes an object ontology and a set of problem solving rules;

• an expert who understands why the example is positive or negative and can answer the agent’s questions.

DETERMINE:

• an improved rule that covers the example if it is positive, or does not cover the example if it is negative;

• an extended object ontology (if needed for rule refinement).
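The given/determine elements above can be pictured as a small data structure. The sketch below is purely illustrative (the class and field names are invented, not Disciple's actual representation); the bound conditions are kept as plain strings for simplicity:

```python
from dataclasses import dataclass, field

@dataclass
class PVSRule:
    """A plausible version space rule R, as characterized above.

    A real system would use structured condition objects; strings are
    used here only to keep the sketch short.
    """
    if_task: str                 # the IF task of the rule
    then_task: str               # the THEN task of the rule
    upper_bound: str             # plausible upper bound condition U
    lower_bound: str             # plausible lower bound condition L
    positive_examples: list = field(default_factory=list)
    negative_exceptions: list = field(default_factory=list)

# The refinement problem: given a rule R and an example E, produce an
# improved rule (and, if needed, an extended object ontology).
R = PVSRule(
    if_task="Identify the strategic COG candidates ...",
    then_task="A strategic COG relevant factor is a strategic COG candidate ...",
    upper_bound="?O1 IS Force, ?O2 IS Industrial_factor, ?O3 IS Product",
    lower_bound="?O1 IS US_1943, ?O2 IS Industrial_capacity_of_US_1943, ...",
)
```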

Page 5: Multistrategy Rule Refinement

The rule refinement method

General presentation of the rule refinement method

Rule refinement with a positive example

Rule refinement with a negative example

The rule refinement problem

Characterization of the learned PVS rule

Page 6: Multistrategy Rule Refinement

The rule refinement method: general presentation

Let R be a plausible version space rule, U its plausible upper bound condition, L its plausible lower bound condition, and E a new example of the rule.

1. If E is covered by U but it is not covered by L then

• If E is a positive example then L needs to be generalized as little as possible to cover it while remaining less general or at most as general as U.

• If E is a negative example then U needs to be specialized as little as possible to no longer cover it while remaining more general than or at least as general as L. Alternatively, both bounds need to be specialized.

2. If E is covered by L then

• If E is a positive example then R need not be refined.

• If E is a negative example then both U and L need to be specialized as little as possible to no longer cover this example while still covering the known positive examples of the rule. If this is not possible, then E represents a negative exception to the rule.

3. If E is not covered by U then

• If E is a positive example then it represents a positive exception to the rule.

• If E is a negative example then no refinement is necessary.
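The three-case analysis above can be sketched as a small decision procedure. In this sketch the two conditions are modeled simply as sets of examples they cover (an illustrative simplification; actual condition matching is far richer):

```python
def refine_decision(example, is_positive, upper, lower):
    """Classify how a plausible version space rule must be refined.

    upper, lower: the sets of examples covered by the plausible upper
    and lower bound conditions (lower is a subset of upper).
    Returns a string naming the required refinement action.
    """
    in_upper = example in upper
    in_lower = example in lower
    if in_upper and not in_lower:                      # case 1
        return ("generalize lower bound" if is_positive
                else "specialize upper bound")
    if in_lower:                                       # case 2
        return ("no refinement needed" if is_positive
                else "specialize both bounds (or negative exception)")
    return ("positive exception" if is_positive        # case 3
            else "no refinement needed")
```

For instance, with `upper = {"E", "F"}` and `lower = {"F"}`, a positive example `"E"` falls in case 1 and triggers a minimal generalization of the lower bound.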

Page 7: Multistrategy Rule Refinement

1. If E is covered by U but it is not covered by L then

• If E is a positive example then L needs to be generalized as little as possible to cover it while remaining less general or at most as general as U.

The rule refinement method: general presentation

[Diagram: the plausible lower bound (LB) is minimally generalized within the plausible upper bound (UB) to cover the new positive example]

Page 8: Multistrategy Rule Refinement

1. If E is covered by U but it is not covered by L then

• If E is a negative example then U needs to be specialized as little as possible to no longer cover it while remaining more general than or at least as general as L. Alternatively, both bounds need to be specialized.

The rule refinement method: general presentation

[Diagram: the plausible upper bound (UB) is minimally specialized to exclude the new negative example]

Strategy 1: Specialize UB by using a specialization rule (e.g., the descending-the-generalization-hierarchy rule or the specializing-a-numeric-interval rule).

Page 9: Multistrategy Rule Refinement

The rule refinement method: general presentation

[Diagram: an Except-When condition is added so that the rule no longer covers the negative example]

Strategy 2: Find a failure explanation EXw of why E is a wrong problem solving episode.

EXw identifies the features that make E a wrong problem solving episode. The inductive hypothesis is that the correct problem solving episodes should not have these features. EXw is therefore taken as an example of a condition that the correct problem solving episodes should not satisfy: an Except-When condition. This Except-When condition also needs to be learned, based on additional examples. Based on EXw, an initial Except-When plausible version space condition is generated.

Page 10: Multistrategy Rule Refinement

The rule refinement method: general presentation

[Diagram: both bounds of the plausible version space condition are specialized to exclude the negative example]

Strategy 3: Find an additional explanation EXw for the correct problem solving episodes, which is not satisfied by the current wrong problem solving episode.

Specialize both bounds of the plausible version space condition by:
- adding the most general generalization of EXw, corresponding to the examples encountered so far, to the upper bound;
- adding the least general generalization of EXw, corresponding to the examples encountered so far, to the lower bound.

Page 11: Multistrategy Rule Refinement

2. If E is covered by L then

• If E is a positive example then R need not be refined.

The rule refinement method: general presentation

[Diagram: a positive example covered by the lower bound requires no refinement]

Page 12: Multistrategy Rule Refinement

2. If E is covered by L then

• If E is a negative example then both U and L need to be specialized as little as possible to no longer cover this example while still covering the known positive examples of the rule. If this is not possible, then E represents a negative exception to the rule.

The rule refinement method: general presentation

[Diagram: both bounds are specialized so that the negative example covered by the lower bound is excluded]

Strategy 1: Find a failure explanation EXw of why E is a wrong problem solving episode and create an Except-When plausible version space condition, as indicated before.

Page 13: Multistrategy Rule Refinement

3. If E is not covered by U then

• If E is a positive example then it represents a positive exception to the rule.
• If E is a negative example then no refinement is necessary.

The rule refinement method: general presentation

[Diagram: examples not covered by the upper bound: a positive one is a positive exception; a negative one requires no refinement]

Page 14: Multistrategy Rule Refinement

The rule refinement method

General presentation of the rule refinement method

Rule refinement with a positive example

Rule refinement with a negative example

The rule refinement problem

Characterization of the learned PVS rule

Page 15: Multistrategy Rule Refinement

Positive example covered by the upper bound

Condition satisfied by the positive example
?O1 IS Germany_1943
  has_as_industrial_factor ?O2
?O2 IS Industrial_capacity_of_Germany_1943
  is_a_major_generator_of ?O3
?O3 IS War_materiel_and_fuel_of_Germany_1943

less general than

IF
Identify the strategic COG candidates with respect to the industrial civilization of a force
The force is ?O1

THEN
A strategic COG relevant factor is strategic COG candidate for a force
The force is ?O1
The strategic COG relevant factor is ?O2

Plausible Upper Bound Condition
?O1 IS Force
  has_as_industrial_factor ?O2
?O2 IS Industrial_factor
  is_a_major_generator_of ?O3
?O3 IS Product

Plausible Lower Bound Condition
?O1 IS US_1943
  has_as_industrial_factor ?O2
?O2 IS Industrial_capacity_of_US_1943
  is_a_major_generator_of ?O3
?O3 IS War_materiel_and_transports_of_US_1943

explanation
?O1 has_as_industrial_factor ?O2
?O2 is_a_major_generator_of ?O3

Positive example that satisfies the upper bound

IF the task to accomplish is:
Identify the strategic COG candidates with respect to the industrial civilization of a force
The force is Germany_1943

THEN accomplish the task:
A strategic COG relevant factor is strategic COG candidate for a force
The force is Germany_1943
The strategic COG relevant factor is Industrial_capacity_of_Germany_1943

explanation
Germany_1943 has_as_industrial_factor Industrial_capacity_of_Germany_1943
Industrial_capacity_of_Germany_1943 is_a_major_generator_of War_materiel_and_fuel_of_Germany_1943

Page 16: Multistrategy Rule Refinement

Condition satisfied by the positive example
?O1 IS Germany_1943
  has_as_industrial_factor ?O2
?O2 IS Industrial_capacity_of_Germany_1943
  is_a_major_generator_of ?O3
?O3 IS War_materiel_and_fuel_of_Germany_1943

Plausible Upper Bound Condition
?O1 IS Force
  has_as_industrial_factor ?O2
?O2 IS Industrial_factor
  is_a_major_generator_of ?O3
?O3 IS Product

Plausible Lower Bound Condition (from rule)
?O1 IS US_1943
  has_as_industrial_factor ?O2
?O2 IS Industrial_capacity_of_US_1943
  is_a_major_generator_of ?O3
?O3 IS War_materiel_and_transports_of_US_1943

Minimal generalization of the plausible lower bound

New Plausible Lower Bound Condition
?O1 IS Single_state_force
  has_as_industrial_factor ?O2
?O2 IS Industrial_capacity
  is_a_major_generator_of ?O3
?O3 IS Strategically_essential_goods_or_materials

minimal generalization

less general than (or at most as general as)

Page 17: Multistrategy Rule Refinement

Generalization hierarchy of forces

<object>
  Force
    Opposing_force
      Single_state_force (instances: US_1943, Britain_1943, Germany_1943, Italy_1943)
      Multi_state_force (instances: Anglo_allies_1943, European_axis_1943)
      Single_group_force
      Multi_group_force
  Group

Anglo_allies_1943 has component_state US_1943 and Britain_1943; European_axis_1943 has component_state Germany_1943 and Italy_1943.
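The minimal generalization step of the previous slide can be sketched against a hierarchy like this one. The child-to-parent map below is a hypothetical encoding of part of the hierarchy of forces, and the minimal generalization of two concepts is computed as their lowest common ancestor:

```python
# Hypothetical child -> parent encoding of (part of) the hierarchy.
PARENT = {
    "US_1943": "Single_state_force",
    "Germany_1943": "Single_state_force",
    "Britain_1943": "Single_state_force",
    "Italy_1943": "Single_state_force",
    "Anglo_allies_1943": "Multi_state_force",
    "European_axis_1943": "Multi_state_force",
    "Single_state_force": "Force",
    "Multi_state_force": "Force",
    "Force": "<object>",
}

def ancestors(concept):
    """Chain from a concept up to the hierarchy root, inclusive."""
    chain = [concept]
    while concept in PARENT:
        concept = PARENT[concept]
        chain.append(concept)
    return chain

def minimal_generalization(a, b):
    """Least general concept covering both a and b (lowest common ancestor)."""
    b_ancestors = set(ancestors(b))
    return next(c for c in ancestors(a) if c in b_ancestors)
```

For example, the minimal generalization of US_1943 (from the rule's lower bound) and Germany_1943 (from the positive example) is Single_state_force, exactly the generalization shown on the previous slide.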

Page 18: Multistrategy Rule Refinement

Generalized rule

IF
Identify the strategic COG candidates with respect to the industrial civilization of a force
The force is ?O1

explanation
?O1 has_as_industrial_factor ?O2
?O2 is_a_major_generator_of ?O3

Plausible Upper Bound Condition
?O1 IS Force
  has_as_industrial_factor ?O2
?O2 IS Industrial_factor
  is_a_major_generator_of ?O3
?O3 IS Product

Plausible Lower Bound Condition (generalized)
?O1 IS Single_state_force
  has_as_industrial_factor ?O2
?O2 IS Industrial_capacity
  is_a_major_generator_of ?O3
?O3 IS Strategically_essential_goods_or_materials

THEN
A strategic COG relevant factor is strategic COG candidate for a force
The force is ?O1
The strategic COG relevant factor is ?O2

Page 19: Multistrategy Rule Refinement

The rule refinement method

General presentation of the rule refinement method

Rule refinement with a positive example

Rule refinement with a negative example

The rule refinement problem

Characterization of the learned PVS rule

Page 20: Multistrategy Rule Refinement

A negative example covered by the upper bound

IF
Identify the strategic COG candidates with respect to the industrial civilization of a force
The force is ?O1

explanation
?O1 has_as_industrial_factor ?O2
?O2 is_a_major_generator_of ?O3

Plausible Upper Bound Condition
?O1 IS Force
  has_as_industrial_factor ?O2
?O2 IS Industrial_factor
  is_a_major_generator_of ?O3
?O3 IS Product

Plausible Lower Bound Condition
?O1 IS Single_state_force
  has_as_industrial_factor ?O2
?O2 IS Industrial_capacity
  is_a_major_generator_of ?O3
?O3 IS Strategically_essential_goods_or_materials

THEN
A strategic COG relevant factor is strategic COG candidate for a force
The force is ?O1
The strategic COG relevant factor is ?O2

Condition satisfied by the negative example
?O1 IS Italy_1943
  has_as_industrial_factor ?O2
?O2 IS Farm_implement_industry_of_Italy_1943
  is_a_major_generator_of ?O3
?O3 IS Farm_implements_of_Italy_1943

less general than

Negative example that satisfies the upper bound

IF the task to accomplish is:
Identify the strategic COG candidates with respect to the industrial civilization of a force
The force is Italy_1943

THEN accomplish the task:
A strategic COG relevant factor is strategic COG candidate for a force
The force is Italy_1943
The strategic COG relevant factor is Farm_implement_industry_of_Italy_1943

explanation
Italy_1943 has_as_industrial_factor Farm_implement_industry_of_Italy_1943
Farm_implement_industry_of_Italy_1943 is_a_major_generator_of Farm_implements_of_Italy_1943

Page 21: Multistrategy Rule Refinement

Automatic generation of plausible explanations

IF: Identify the strategic COG candidates with respect to the industrial civilization of Italy_1943
THEN: Industrial_capacity_of_Italy_1943 is a strategic COG candidate for Italy_1943

Question: Who or what is a strategically critical industrial civilization element in Italy_1943?
Answer: Industrial_capacity_of_Italy_1943? No!

explanation
Italy_1943 has_as_industrial_factor Farm_implement_industry_of_Italy_1943
Farm_implement_industry_of_Italy_1943 is_a_major_generator_of Farm_implements_of_Italy_1943

The agent generates a list of plausible explanations from which the expert has to select the correct one:

Farm_implements_of_Italy_1943 IS_NOT Strategically_essential_goods_or_materiel
Farm_implement_industry_of_Italy_1943 IS_NOT Industrial_capacity

Page 22: Multistrategy Rule Refinement

Minimal specialization of the plausible upper bound

Plausible Upper Bound Condition (from rule)
?O1 IS Force
  has_as_industrial_factor ?O2
?O2 IS Industrial_factor
  is_a_major_generator_of ?O3
?O3 IS Product

Condition satisfied by the negative example
?O1 IS Italy_1943
  has_as_industrial_factor ?O2
?O2 IS Farm_implement_industry_of_Italy_1943
  is_a_major_generator_of ?O3
?O3 IS Farm_implements_of_Italy_1943

specialization

New Plausible Upper Bound Condition
?O1 IS Force
  has_as_industrial_factor ?O2
?O2 IS Industrial_factor
  is_a_major_generator_of ?O3
?O3 IS Strategically_essential_goods_or_materiel

New Plausible Lower Bound Condition
?O1 IS Single_state_force
  has_as_industrial_factor ?O2
?O2 IS Industrial_capacity
  is_a_major_generator_of ?O3
?O3 IS Strategically_essential_goods_or_materiel

more general than (or at least as general as)
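The specialization step above can be sketched as a search down the generalization hierarchy for the most general concept that still covers the known positive values of ?O3 but excludes the negative one. The child-to-parent map below is a hypothetical encoding of the relevant hierarchy fragment:

```python
# Hypothetical child -> parent encoding of the ?O3 hierarchy fragment.
PARENT = {
    "War_materiel_and_fuel": "Strategically_essential_goods_or_materiel",
    "War_materiel_and_transports": "Strategically_essential_goods_or_materiel",
    "Strategically_essential_goods_or_materiel": "Product",
    "Non_strategically_essential_goods_or_services": "Product",
    "Farm_implements_of_Italy_1943": "Non_strategically_essential_goods_or_services",
    "Product": "<object>",
}

def covers(concept, value):
    """True if value equals concept or lies below it in the hierarchy."""
    while True:
        if value == concept:
            return True
        if value not in PARENT:
            return False
        value = PARENT[value]

def minimal_specialization(bound, positives, negative):
    """Walk down from `bound` toward the positives, stopping at the most
    general concept that still covers every positive but not the negative."""
    if not covers(bound, negative):
        return bound                      # already excludes the negative
    children = [c for c, p in PARENT.items() if p == bound]
    for child in children:
        if all(covers(child, p) for p in positives):
            return minimal_specialization(child, positives, negative)
    return None                           # no such specialization exists
```

With the positives War_materiel_and_fuel and War_materiel_and_transports and the negative Farm_implements_of_Italy_1943, specializing the bound Product yields Strategically_essential_goods_or_materiel, matching the slide.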

Page 23: Multistrategy Rule Refinement

Fragment of the generalization hierarchy

<object>
  Product
    Strategically_essential_goods_or_materiel: War_materiel_and_transports, War_materiel_and_fuel
    Non-strategically_essential_goods_or_services: Farm_implements_of_Italy_1943 (instance)
  Resource_or_infrastructure_element
    Raw_material: Strategic_raw_material
    Strategically_essential_infrastructure_element: Main_airport, Main_seaport, Sole_airport, Sole_seaport
  Strategically_essential_resource_or_infrastructure_element: Strategic_raw_material, Strategically_essential_goods_or_materiel, Strategically_essential_infrastructure_element

[Diagram marks: the specialized upper bound (UB) still covers the positive examples (+) but excludes the negative example]

Page 24: Multistrategy Rule Refinement

Specialized rule

IF
Identify the strategic COG candidates with respect to the industrial civilization of a force
The force is ?O1

explanation
?O1 has_as_industrial_factor ?O2
?O2 is_a_major_generator_of ?O3

Plausible Upper Bound Condition (specialized)
?O1 IS Force
  has_as_industrial_factor ?O2
?O2 IS Industrial_factor
  is_a_major_generator_of ?O3
?O3 IS Strategically_essential_goods_or_materials

Plausible Lower Bound Condition
?O1 IS Single_state_force
  has_as_industrial_factor ?O2
?O2 IS Industrial_capacity
  is_a_major_generator_of ?O3
?O3 IS Strategically_essential_goods_or_materials

THEN
A strategic COG relevant factor is strategic COG candidate for a force
The force is ?O1
The strategic COG relevant factor is ?O2

Page 25: Multistrategy Rule Refinement

The rule refinement method

General presentation of the rule refinement method

Rule refinement with a positive example

Rule refinement with a negative example

The rule refinement problem

Characterization of the learned PVS rule

Page 26: Multistrategy Rule Refinement

Problem solving with PVS rules

The main PVS condition and the Except-When PVS condition jointly determine the status of the rule's conclusion:

- Rule not applicable
- Rule's conclusion is (most likely) incorrect
- Rule's conclusion is not plausible
- Rule's conclusion is plausible
- Rule's conclusion is (most likely) correct
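The decision logic behind these outcomes might be sketched as follows. This is a hedged reconstruction, assuming each condition is supplied as a (lower-bound, upper-bound) pair of predicates over a problem solving episode; the interface is invented for illustration and is not Disciple's actual API:

```python
def apply_pvs_rule(example, main, except_when=None):
    """Decide the status of a PVS rule's conclusion for an example.

    main, except_when: (lower_bound, upper_bound) predicate pairs,
    where the lower bound implies the upper bound.
    """
    main_lower, main_upper = main
    if not main_upper(example):
        return "rule not applicable"
    if except_when is not None:
        ew_lower, ew_upper = except_when
        if ew_lower(example):
            return "conclusion is (most likely) incorrect"
        if ew_upper(example):
            return "conclusion is not plausible"
    if main_lower(example):
        return "conclusion is (most likely) correct"
    return "conclusion is plausible"
```

A toy use, with numeric predicates standing in for condition matching: with main bounds `(x > 5, x > 0)` and Except-When bounds `(x > 100, x > 50)`, an input of 10 satisfies the main lower bound and no Except-When bound, so the conclusion is judged most likely correct.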

Page 27: Multistrategy Rule Refinement

Overview

Integrated modeling, learning, and solving

Illustration of rule refinement in the COA domain

Illustration of rule refinement in other domains

Required reading

The rule refinement method

Characterization of the PVS learning method

Agent teaching: Hands-on experience

Page 28: Multistrategy Rule Refinement

Control of modeling, learning and solving

[Diagram: the Input Task is reduced by Mixed-Initiative Problem Solving using the Ontology + Rules; the expert may Accept a Generated Reduction (triggering Rule Refinement), Reject it (triggering Rule Refinement), or propose a New Reduction (triggering Modeling, Formalization, and Learning, with Task Refinement and Rule Refinement), until a Solution is produced]

Page 29: Multistrategy Rule Refinement

A systematic approach to agent teaching

Identify the strategic COG candidates for the Sicily_1943 scenario

[Diagram: the scenario task is reduced for the opposing alliances Anglo_allies_1943 and European_Axis_1943, then for the individual states US_1943, Britain_1943, Germany_1943, and Italy_1943, and then, for each state, for its controlling element, governing element, civilization, and other factors; numbered labels (1 to 20) indicate the order in which the agent is taught]

Page 30: Multistrategy Rule Refinement

Modeling, learning, problem solving

Identify the strategic COG candidates for the Sicily_1943 scenario

Question: Which is an opposing force in the Sicily_1943 scenario?
Answer: Anglo_allies_1943

Identify the strategic COG candidates for Anglo_allies_1943 (Rule_1)

Question: Is Anglo_allies_1943 a single member force or a multi-member force?
Answer: Anglo_allies_1943 is a multi-member force

Identify the strategic COG candidates for the Anglo_allies_1943, which is a multi-member force (Rule_2)

Identify the strategic COG candidates for European_Axis_1943 (Rule_1)

Question: Is European_Axis_1943 a single member force or a multi-member force?
Answer: European_Axis_1943 is a multi-member force

Identify the strategic COG candidates for the European_Axis_1943, which is a multi-member force (Rule_2)

Page 31: Multistrategy Rule Refinement

Overview

Integrated modeling, learning, and solving

Illustration of rule refinement in the COA domain

Illustration of rule refinement in other domains

Required reading

The rule refinement method

Characterization of the PVS learning method

Hands-on experience: Problem solving and learning

Page 32: Multistrategy Rule Refinement

Agent teaching: hands-on experience

Problem Solving and Rule Refinement

Pages 33–38: Multistrategy Rule Refinement

[Screenshots of the hands-on problem solving and rule refinement session; images only]

Page 39: Multistrategy Rule Refinement

Agent teaching: hands-on experience

Autonomous Problem Solving

Pages 40–42: Multistrategy Rule Refinement

[Screenshots of the autonomous problem solving session; images only]

Page 43: Multistrategy Rule Refinement

Overview

Integrated modeling, learning, and solving

Illustration of rule refinement in the COA domain

Illustration of rule refinement in other domains

Required reading

The rule refinement method

Characterization of the PVS learning method

Hands-on experience: Problem solving and learning

Page 44: Multistrategy Rule Refinement

Illustration of rule refinement in the COA domain

Rule refinement with a positive example: Minimal generalization of the plausible lower bound

Rule refinement with a negative example: Minimal specialization of the plausible upper bound

Rule refinement with a negative example: Adding an Except-When plausible version space condition

Integrated problem solving and learning

Page 45: Multistrategy Rule Refinement

A positive example covered by the upper bound

Rule: R2

IF the task to accomplish is:
Assess-security-wrt-countering-enemy-reconnaissance for-coa ?O1

Question: Is an enemy reconnaissance unit present?
Answer: Yes, the enemy unit ?O2 is performing the action ?O3 which is a reconnaissance action.

Explanation:
?O2 SOVEREIGN-ALLEGIANCE-OF-ORG ?O4 IS RED--SIDE
?O2 TASK ?O3 IS INTELLIGENCE-COLLECTION--MIL-TASK

Main Condition
Plausible Upper Bound
?O1 IS COA-SPECIFICATION-MICROTHEORY
?O2 IS MODERN-MILITARY-UNIT--DEPLOYABLE
  SOVEREIGN-ALLEGIANCE-OF-ORG ?O4
  TASK ?O3
?O3 IS INTELLIGENCE-COLLECTION--MILITARY-TASK
?O4 IS ALLEGIANCE-OF-UNIT

Plausible Lower Bound
?O1 IS COA411
?O2 IS MECHANIZED-INFANTRY-UNIT--MIL-SPECIALTY
  SOVEREIGN-ALLEGIANCE-OF-ORG ?O4
  TASK ?O3
?O3 IS SCREEN1
?O4 IS RED--SIDE

THEN accomplish the task:
Assess-security-when-enemy-recon-is-present for-coa ?O1 for-unit ?O2 for-recon-action ?O3

Positive example that satisfies the upper bound

IF the task to accomplish is:
Assess-security-wrt-countering-enemy-reconnaissance for-coa COA421

THEN accomplish the task:
Assess-security-when-enemy-recon-is-present for-coa COA421 for-unit RED-CSOP2 for-recon-action SCREEN2

Condition satisfied by the positive example (less general than the upper bound)
?O1 IS COA421
?O2 IS MECHANIZED-INFANTRY-UNIT--MIL-SPECIALTY
  SOVEREIGN-ALLEGIANCE-OF-ORG ?O4
  TASK ?O3
?O3 IS SCREEN2
?O4 IS RED--SIDE

Page 46: Multistrategy Rule Refinement

Minimal generalization of the plausible lower bound

Plausible Lower Bound (from rule)
?O1 IS COA411
?O2 IS MECHANIZED-INFANTRY-UNIT--MIL-SPECIALTY
  SOVEREIGN-ALLEGIANCE-OF-ORG ?O4
  TASK ?O3
?O3 IS SCREEN1
?O4 IS RED--SIDE

Plausible Lower Bound (from example)
?O1 IS COA421
?O2 IS MECHANIZED-INFANTRY-UNIT--MIL-SPECIALTY
  SOVEREIGN-ALLEGIANCE-OF-ORG ?O4
  TASK ?O3
?O3 IS SCREEN2
?O4 IS RED--SIDE

minimal generalization

New Plausible Lower Bound
?O1 IS COA-SPECIFICATION-MICROTHEORY
?O2 IS MECHANIZED-INFANTRY-UNIT--MIL-SPECIALTY
  SOVEREIGN-ALLEGIANCE-OF-ORG ?O4
  TASK ?O3
?O3 IS SCREEN-MILITARY-TASK
?O4 IS RED--SIDE

Hierarchy fragments used:
SCREEN1 INSTANCE-OF SCREEN-MILITARY-TASK
SCREEN2 INSTANCE-OF SCREEN-MILITARY-TASK
SCREEN-MILITARY-TASK SUBCLASS-OF INTELLIGENCE-COLLECTION--MILITARY-TASK
COA411 INSTANCE-OF COA-SPECIFICATION-MICROTHEORY
COA421 INSTANCE-OF COA-SPECIFICATION-MICROTHEORY

Page 47: Multistrategy Rule Refinement

Generalized rule

Rule: R2

IF the task to accomplish is:
Assess-security-wrt-countering-enemy-reconnaissance for-coa ?O1

Question: Is an enemy reconnaissance unit present?
Answer: Yes, the enemy unit ?O2 is performing the action ?O3 which is a reconnaissance action.

Explanation:
?O2 SOVEREIGN-ALLEGIANCE-OF-ORG ?O4 IS RED--SIDE
?O2 TASK ?O3 IS INTELLIGENCE-COLLECTION--MIL-TASK

Main Condition
Plausible Upper Bound
?O1 IS COA-SPECIFICATION-MICROTHEORY
?O2 IS MODERN-MILITARY-UNIT--DEPLOYABLE
  SOVEREIGN-ALLEGIANCE-OF-ORG ?O4
  TASK ?O3
?O3 IS INTELLIGENCE-COLLECTION--MILITARY-TASK
?O4 IS ALLEGIANCE-OF-UNIT

Plausible Lower Bound (generalized)
?O1 IS COA-SPECIFICATION-MICROTHEORY
?O2 IS MECHANIZED-INFANTRY-UNIT--MIL-SPECIALTY
  SOVEREIGN-ALLEGIANCE-OF-ORG ?O4
  TASK ?O3
?O3 IS SCREEN-MILITARY-TASK
?O4 IS RED--SIDE

THEN accomplish the task:
Assess-security-when-enemy-recon-is-present for-coa ?O1 for-unit ?O2 for-recon-action ?O3

Page 48: Multistrategy Rule Refinement

Illustration of rule refinement in the COA domain

Rule refinement with a positive example: Minimal generalization of the plausible lower bound

Rule refinement with a negative example: Minimal specialization of the plausible upper bound

Rule refinement with a negative example: Adding an Except-When plausible version space condition

Integrated problem solving and learning

Page 49: Multistrategy Rule Refinement

A negative example covered by the upper bound

Rule: R$ASWCER-001

IF the task to accomplish is:
Assess-security-wrt-countering-enemy-reconnaissance for-coa ?O1

Question: Is an enemy reconnaissance unit present?
Answer: Yes, the enemy unit ?O2 is performing the action ?O3 which is a reconnaissance action.

Explanation:
?O2 SOVEREIGN-ALLEGIANCE-OF-ORG ?O4 IS RED--SIDE
?O2 TASK ?O3 IS INTELLIGENCE-COLLECTION--MIL-TASK

Main Condition
Plausible Upper Bound
?O1 IS COA-SPECIFICATION-MICROTHEORY
?O2 IS MODERN-MILITARY-UNIT--DEPLOYABLE
  SOVEREIGN-ALLEGIANCE-OF-ORG ?O4
  TASK ?O3
?O3 IS INTELLIGENCE-COLLECTION--MILITARY-TASK
?O4 IS ALLEGIANCE-OF-UNIT

Plausible Lower Bound
?O1 IS COA-SPECIFICATION-MICROTHEORY
?O2 IS MECHANIZED-INFANTRY-UNIT--MIL-SPECIALTY
  SOVEREIGN-ALLEGIANCE-OF-ORG ?O4
  TASK ?O3
?O3 IS SCREEN--MILITARY-TASK
?O4 IS RED--SIDE

THEN accomplish the task:
Assess-security-when-enemy-recon-is-present for-coa ?O1 for-unit ?O2 for-recon-action ?O3

Negative example that satisfies the upper bound

IF the task to accomplish is:
Assess-security-wrt-countering-enemy-reconnaissance for-coa COA51

THEN accomplish the task:
Assess-security-when-enemy-recon-is-present for-coa COA51 for-unit BLUE-BATTALION1 for-recon-action SCREEN-RIGHT

Condition satisfied by the negative example (less general than the upper bound)
?O1 IS COA51
?O2 IS BLUE-BATTALION1
  SOVEREIGN-ALLEGIANCE-OF-ORG ?O4
  TASK ?O3
?O3 IS SCREEN-RIGHT
?O4 IS BLUE-SIDE

Page 50: Multistrategy Rule Refinement

Minimal specialization of the plausible upper bound

Hierarchy fragment used:
RED--SIDE SUBCLASS-OF ALLEGIANCE-OF-UNIT
BLUE-SIDE SUBCLASS-OF ALLEGIANCE-OF-UNIT

Plausible Upper Bound (from rule)
?O1 IS COA-SPECIFICATION-MICROTHEORY
?O2 IS MODERN-MILITARY-UNIT--DEPLOYABLE
  SOVEREIGN-ALLEGIANCE-OF-ORG ?O4
  TASK ?O3
?O3 IS INTELLIGENCE-COLLECTION--MIL-TASK
?O4 IS ALLEGIANCE-OF-UNIT

Negative Example
?O1 IS COA51
?O2 IS BLUE-BATTALION1
  SOVEREIGN-ALLEGIANCE-OF-ORG ?O4
  TASK ?O3
?O3 IS SCREEN-RIGHT
?O4 IS BLUE-SIDE

specialization

Specialized Plausible Upper Bound
?O1 IS COA-SPECIFICATION-MICROTHEORY
?O2 IS MECHANIZED-INFANTRY-UNIT--MIL-SPECIALTY
  SOVEREIGN-ALLEGIANCE-OF-ORG ?O4
  TASK ?O3
?O3 IS SCREEN--MILITARY-TASK
?O4 IS RED--SIDE

Page 51: Multistrategy Rule Refinement

G.Tecuci, Learning Agents Laboratory

Rule: R$ASWCER-001

Plausible Upper Bound?O1 IS COA-SPECIFICATION-MICROTHEORY

?O2 IS MODERN-MILITARY-UNIT--DEPLOYABLE

SOVEREIGN-ALLEGIANCE-OF-ORG ?O4

TASK ?O3

?O3 IS INTELLIGENCE-COLLECTION--MIL-TASK?O4 IS ALLEGIANCE-OF-UNIT

IF the task to accomplish is:Assess-security-wrt-countering-enemy-reconnaissance for-coa ?O1

Question: Is an enemy reconnaissance unit present?

Answer: Yes, the enemy unit ?O2 is performing the action ?O3 which is a reconnaissance action.

Explanation:•?O2 SOVEREIGN-ALLEGIANCE-OF-ORG ?O4 IS RED--SIDE•?O2 TASK ?O3 IS INTELLIGENCE-COLLECTION--MIL-TASK

THEN accomplish the task:Assess-security-when-enemy-recon-is-present for-coa ?O1 for-unit ?O2 for-recon-action ?O3

Plausible Lower Bound?O1 IS COA-SPECIFICATION-MICROTHEORY

?O2 IS MECHANIZED-INFANTRY-UNIT--MIL-SPECIALTY

SOVEREIGN-ALLEGIANCE-OF-ORG ?O4

TASK ?O3

?O3 IS SCREEN--MILITARY TASK?O4 IS RED--SIDE

Main Condition

Rule specialization

Negative example that satisfies the upper bound

IF the task to accomplish is:
Assess-security-wrt-countering-enemy-reconnaissance for-coa COA51

THEN accomplish the task:
Assess-security-when-enemy-recon-is-present for-coa COA51 for-unit BLUE-BATTALION1 for-recon-action SCREEN-RIGHT

Failure Explanation:
• BLUE-SIDE is ALLEGIANCE-OF-UNIT but is not RED-SIDE

Explanation:
• BLUE-BATTALION1 SOVEREIGN-ALLEGIANCE-OF-ORG BLUE-SIDE
• BLUE-BATTALION1 TASK SCREEN-RIGHT
• SCREEN-RIGHT IS INTELLIGENCE-COLLECTION--MIL-TASK

The above reduction is incorrect in spite of satisfying the plausible upper bound. Because BLUE-SIDE is not RED-SIDE, the plausible upper bound is minimally specialized (ALLEGIANCE-OF-UNIT is replaced with RED-SIDE).

Page 52: Multistrategy Rule Refinement

Illustration of rule refinement in the COA domain

Rule refinement with a positive example: Minimal generalization of the plausible lower bound

Rule refinement with a negative example: Minimal specialization of the plausible upper bound

Rule refinement with a negative example: Adding an Except-When plausible version space condition

Integrated problem solving and learning
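The first operation listed above, minimal generalization of the plausible lower bound with a new positive example, is the dual of specialization: climb to the least general concept that covers both the current lower bound and the new example. A minimal sketch; ARMORED-UNIT--MIL-SPECIALTY is a hypothetical sibling class added for illustration, not from the slides:

```python
# Minimal generalization of a plausible lower bound (illustrative sketch;
# ARMORED-UNIT--MIL-SPECIALTY is a hypothetical class).
SUBCLASS_OF = {
    "MECHANIZED-INFANTRY-UNIT--MIL-SPECIALTY": "MODERN-MILITARY-UNIT--DEPLOYABLE",
    "ARMORED-UNIT--MIL-SPECIALTY": "MODERN-MILITARY-UNIT--DEPLOYABLE",
    "MODERN-MILITARY-UNIT--DEPLOYABLE": None,
}

def ancestors(concept):
    """Return the concept plus all of its superclasses, bottom-up."""
    chain = []
    while concept is not None:
        chain.append(concept)
        concept = SUBCLASS_OF[concept]
    return chain

def minimally_generalize(lower, positive):
    """Least general concept covering both the current lower bound and
    the class of the new positive example."""
    positive_chain = set(ancestors(positive))
    for concept in ancestors(lower):  # bottom-up: first shared ancestor wins
        if concept in positive_chain:
            return concept

print(minimally_generalize("MECHANIZED-INFANTRY-UNIT--MIL-SPECIALTY",
                           "ARMORED-UNIT--MIL-SPECIALTY"))
# MODERN-MILITARY-UNIT--DEPLOYABLE
```

The generalized lower bound never climbs above the plausible upper bound, so the version space only shrinks as examples accumulate.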

Page 53: Multistrategy Rule Refinement

Generation of the failure explanation

Rule: R$ASWERIP-002

Plausible Upper Bound:
?O1 IS COA-SPECIFICATION-MICROTHEORY
?O2 IS MODERN-MILITARY-UNIT--DEPLOYABLE
?O3 IS INTELLIGENCE-COLLECTION--MILITARY-TASK
?S1 IS “HIGH”

Explanation:
• ?S1 IS ALWAYS “HIGH”

IF the task to accomplish is:
Assess-security-when-enemy-recon-is-present for-coa ?O1 for-unit ?O2 for-recon-action ?O3

Question: Is the enemy unit destroyed?

Answer: No, ?O2 is not countered

Plausible Lower Bound:
?O1 IS COA-SPECIFICATION-MICROTHEORY
?O2 IS MODERN-MILITARY-UNIT--DEPLOYABLE
?O3 IS INTELLIGENCE-COLLECTION--MILITARY-TASK
?S1 IS “HIGH”

THEN accomplish the task:
Report-weakness-in-security-because-enemy-recon-is-not-countered for-coa ?O1 for-unit ?O2 for-recon-action ?O3 with-importance ?S1

Main Condition

Negative example that satisfies the upper bound

IF the task to accomplish is:
Assess-security-when-enemy-recon-is-present for-coa COA411 for-unit RED-CSOP1 for-recon-action SCREEN1

THEN accomplish the task:
Report-weakness-in-security-because-enemy-recon-is-not-countered for-coa COA411 for-unit RED-CSOP1 for-recon-action SCREEN1 with-importance High

Failure Explanation:
• DESTROY1 OBJECT-ACTED-ON RED-CSOP1
• DESTROY1 IS DESTROY-MILITARY-TASK

The above reduction is incorrect because the enemy recon is countered.
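A failure explanation like the one above can be found by search: the agent collects the knowledge-base facts that involve the instances of the wrong reduction, together with the types of any new objects those facts introduce, and proposes them to the expert as candidate explanations. A minimal sketch over hypothetical fact triples; the function name and representation are assumptions, not Disciple's actual code:

```python
# Generating candidate failure explanations (illustrative sketch).
FACTS = [
    ("RED-CSOP1", "TASK", "SCREEN1"),
    ("DESTROY1", "OBJECT-ACTED-ON", "RED-CSOP1"),
    ("DESTROY1", "IS", "DESTROY-MILITARY-TASK"),
]

def candidate_explanations(instance, facts):
    """Collect facts mentioning the instance, plus the types of the
    objects those facts introduce - candidates shown to the expert."""
    related = [f for f in facts if instance in (f[0], f[2]) and f[1] != "IS"]
    out = list(related)
    for s, _, o in related:
        other = o if s == instance else s
        out += [f for f in facts if f[0] == other and f[1] == "IS"]
    return out

for fact in candidate_explanations("RED-CSOP1", FACTS):
    print(fact)
```

For RED-CSOP1 this search surfaces exactly the two DESTROY1 facts of the failure explanation on the slide; the expert then confirms which candidates actually explain the failure.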

Page 54: Multistrategy Rule Refinement

Rule refinement with the failure explanation

Rule: R$ASWERIP-002

Plausible Upper Bound:
?O1 IS COA-SPECIFICATION-MICROTHEORY
?O2 IS MODERN-MILITARY-UNIT--DEPLOYABLE
?O3 IS INTELLIGENCE-COLLECTION--MILITARY-TASK
?S1 IS “HIGH”

Explanation:
• ?S1 IS ALWAYS “HIGH”

IF the task to accomplish is:
Assess-security-when-enemy-recon-is-present for-coa ?O1 for-unit ?O2 for-recon-action ?O3

Question: Is the enemy unit destroyed?

Answer: No, ?O2 is not countered

Plausible Lower Bound:
?O1 IS COA-SPECIFICATION-MICROTHEORY
?O2 IS MODERN-MILITARY-UNIT--DEPLOYABLE
?O3 IS INTELLIGENCE-COLLECTION--MILITARY-TASK
?S1 IS “HIGH”

THEN accomplish the task:
Report-weakness-in-security-because-enemy-recon-is-not-countered for-coa ?O1 for-unit ?O2 for-recon-action ?O3 with-importance ?S1

Failure Explanation:
• ?O4 OBJECT-ACTED-ON ?O2
• ?O4 IS DESTROY-MILITARY-TASK

Plausible Upper Bound:
?O4 IS DESTROY-MILITARY-TASK
     OBJECT-ACTED-ON ?O2

Plausible Lower Bound:
?O4 IS DESTROY1
     OBJECT-ACTED-ON ?O2

Main Condition

Except-When Condition

Both bounds are specialized with an Except-When condition
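The effect of the new Except-When condition can be sketched as follows: the refined rule reduces a task only when its Main condition can be satisfied and no variable binding satisfies the Except-When condition. The fact triples and the pattern matcher below are simplified assumptions, not Disciple's actual engine:

```python
# A rule with Main and Except-When conditions (illustrative sketch).
FACTS = {
    ("RED-CSOP1", "TASK", "SCREEN1"),
    ("DESTROY1", "OBJECT-ACTED-ON", "RED-CSOP1"),
    ("DESTROY1", "IS", "DESTROY-MILITARY-TASK"),
}

def matches(patterns, bindings, facts):
    """True if some extension of `bindings` satisfies every
    (subject, relation, object) pattern; '?'-prefixed terms are variables."""
    if not patterns:
        return True
    s, r, o = patterns[0]
    for fs, fr, fo in facts:
        if fr != r:
            continue
        new = dict(bindings)
        ok = True
        for term, value in ((s, fs), (o, fo)):
            if term.startswith("?"):
                if new.setdefault(term, value) != value:
                    ok = False  # variable already bound to a different value
            elif term != value:
                ok = False      # constant does not match the fact
        if ok and matches(patterns[1:], new, facts):
            return True
    return False

def rule_applies(main, except_when, bindings, facts):
    """Fire only if Main holds and Except-When cannot be satisfied."""
    return matches(main, bindings, facts) and not matches(except_when, bindings, facts)

MAIN = [("?O2", "TASK", "?O3")]
EXCEPT_WHEN = [("?O4", "IS", "DESTROY-MILITARY-TASK"),
               ("?O4", "OBJECT-ACTED-ON", "?O2")]

# RED-CSOP1 performs SCREEN1, but DESTROY1 destroys it: the refined rule
# no longer reports a weakness for this unit.
print(rule_applies(MAIN, EXCEPT_WHEN, {"?O2": "RED-CSOP1"}, FACTS))  # False
```

With a fact base in which no DESTROY task acts on the unit, the same call returns True, reproducing the behavior before the Except-When condition was triggered.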

Page 55: Multistrategy Rule Refinement

Illustration of rule refinement in the COA domain

Rule refinement with a positive example: Minimal generalization of the plausible lower bound

Rule refinement with a negative example: Minimal specialization of the plausible upper bound

Rule refinement with a negative example: Adding an Except-When plausible version space condition

Integrated problem solving and learning

Page 56: Multistrategy Rule Refinement

Assess COA wrt Principle of Security
for-coa COA411

R$ASWERIP-002

Rule Learning

Does the COA include security and counter-recon actions, a security element, a rear element, and identify risks?

Assess security wrt countering enemy reconnaissance
for-coa COA411

I consider enemy reconnaissance

R$ASWCER-001

Rule Learning

R$ACWPOS-001

Rule Learning

Is an enemy reconnaissance unit present?

Assess security when enemy recon is present
for-coa COA411
for-unit RED-CSOP1
for-recon-action SCREEN1

Yes, RED-CSOP1 which is performing the reconnaissance action SCREEN1

Yes, RED-CSOP1 is destroyed by DESTROY1

Is the enemy reconnaissance unit destroyed?

Report strength in security because of countering enemy recon
for-coa COA411
for-unit RED-CSOP1
for-recon-action SCREEN1
for-action DESTROY1
with-importance “high”

Page 57: Multistrategy Rule Refinement

Assess COA wrt Principle of Security
for-coa COA421

Does the COA include security and counter-recon actions, a security element, a rear element, and identify risks?

Assess security wrt countering enemy reconnaissance
for-coa COA421

I consider enemy reconnaissance

Is an enemy reconnaissance unit present?

Assess security when enemy recon is present
for-coa COA421
for-unit RED-CSOP2
for-recon-action SCREEN2

Yes, RED-CSOP2 which is performing the reconnaissance action SCREEN2

No

Is the enemy reconnaissance unit destroyed?

Report weakness in security because enemy recon is not countered
for-coa COA421
for-unit RED-CSOP2
for-recon-action SCREEN2
with-importance “high”

Rule Refinement

R$ACWPOS-001

R$ASWCER-001

Rule Refinement

R$ASWERIP-002

Rule Learning

Page 58: Multistrategy Rule Refinement

Overview

Integrated modeling, learning, and solving

Illustration of rule refinement in the COA domain

Illustration of rule refinement in other domains

Required reading

The rule refinement method

Characterization of the PVS learning method

Hands-on experience: Problem solving and learning

Page 59: Multistrategy Rule Refinement

Illustration of rule refinement in other domains

Illustration in the assessment and tutoring domain

Illustration in the manufacturing domain

Page 60: Multistrategy Rule Refinement

Illustration in the manufacturing domain

G. Tecuci, Building Intelligent Agents, Academic Press, 1998, pp. 21-23, pp. 101-129 (required reading).

See:

Page 61: Multistrategy Rule Refinement

Illustration in the assessment and tutoring domain

G. Tecuci, Building Intelligent Agents, Academic Press, 1998, pp. 27-32, pp. 198-228 (required reading).

See:

Page 62: Multistrategy Rule Refinement

Overview

Integrated modeling, learning, and solving

Illustration of rule refinement in the COA domain

Illustration of rule refinement in other domains

Required reading

The rule refinement method

Characterization of the PVS learning method

Hands-on experience: Problem solving and learning

Page 63: Multistrategy Rule Refinement

Characterization of the PVS rule

Page 64: Multistrategy Rule Refinement

Characterization of the rule learning method

Uses the explanation of the first positive example to generate a much smaller version space than the classical version space method.

Conducts an efficient heuristic search of the version space, guided by explanations, and by the maintenance of a single upper bound condition and a single lower bound condition.

Will always learn a rule, even in the presence of exceptions.

Learns from a few examples and an incomplete knowledge base.

Uses a form of multistrategy learning that synergistically integrates learning from examples, learning from explanations, and learning by analogy, to compensate for the incomplete knowledge.

Uses mixed-initiative reasoning to involve the expert in the learning process.

Is applicable in complex real-world domains, being able to learn within a complex representation language.
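The single upper and lower bound maintained by the method give a learned PVS rule a graded notion of match, which can be sketched as follows. The tiny hierarchy mirrors the bounds shown for ?O2 in the COA rules; ARMORED-UNIT--MIL-SPECIALTY is a hypothetical subclass added for illustration:

```python
# Three-way match against a PVS rule's two bounds (illustrative sketch).
SUBCLASS_OF = {
    "MECHANIZED-INFANTRY-UNIT--MIL-SPECIALTY": "MODERN-MILITARY-UNIT--DEPLOYABLE",
    "ARMORED-UNIT--MIL-SPECIALTY": "MODERN-MILITARY-UNIT--DEPLOYABLE",
    "MODERN-MILITARY-UNIT--DEPLOYABLE": None,
}

def is_a(concept, bound):
    """True if `concept` is `bound` or one of its descendants."""
    while concept is not None:
        if concept == bound:
            return True
        concept = SUBCLASS_OF[concept]
    return False

def pvs_match(concept, lower, upper):
    """Lower-bound matches are taken as correct; matches covered only by
    the upper bound are plausible and are verified with the expert."""
    if is_a(concept, lower):
        return "correct"
    if is_a(concept, upper):
        return "plausible - verify with expert"
    return "no match"

print(pvs_match("MECHANIZED-INFANTRY-UNIT--MIL-SPECIALTY",
                "MECHANIZED-INFANTRY-UNIT--MIL-SPECIALTY",
                "MODERN-MILITARY-UNIT--DEPLOYABLE"))  # correct
print(pvs_match("ARMORED-UNIT--MIL-SPECIALTY",
                "MECHANIZED-INFANTRY-UNIT--MIL-SPECIALTY",
                "MODERN-MILITARY-UNIT--DEPLOYABLE"))  # plausible - verify with expert
```

The middle outcome is what drives the mixed-initiative loop: each expert verdict on an "upper-bound-only" match becomes a new positive or negative example for refinement.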

Page 65: Multistrategy Rule Refinement

Required reading

G. Tecuci, Building Intelligent Agents, Academic Press, 1998, pp. 21-23, pp. 27-32, pp. 101-129, pp. 198-228 (required).

Tecuci G., Boicu M., Bowman M., and Dorin Marcu, with a commentary by Murray Burke, “An Innovative Application from the DARPA Knowledge Bases Programs: Rapid Development of a High Performance Knowledge Base for Course of Action Critiquing,” invited paper for the special IAAI issue of the AI Magazine, Volume 22, No. 2, Summer 2001, pp. 43-61. http://lalab.gmu.edu/publications/data/2001/COA-critiquer.pdf (required).

Boicu M., Tecuci G., Stanescu B., Marcu D. and Cascaval C., "Automatic Knowledge Acquisition from Subject Matter Experts," in Proceedings of the IEEE International Conference on Tools with Artificial Intelligence, Dallas, Texas, November 2001. http://lalab.gmu.edu/publications/data/2001/ICTAI.doc (required).