
Page 1: Software Agent - BDI architecture -

Software Agent - BDI architecture -

Page 2: Software Agent - BDI architecture -

Outline

• BDI Agent

• AgentSpeak(L)

• Summary


Page 3: Software Agent - BDI architecture -

Definition of BDI Architecture

• The Belief-Desire-Intention (BDI) architectures are examples of practical reasoning: the process of deciding, moment by moment, which action to perform in the furtherance of our goals
– Two-front reasoning

• Agents not only strive to achieve their goals but must take time to reflect on whether their goals are still valid, and possibly revise them

• The BDI architecture is an example of balancing reactive behavior with goal-directed behavior

• BDI agents use two important processes:
– Deliberation: deciding what goals we want to achieve
– Means-ends reasoning: deciding how we are going to achieve them

• To differentiate between the three concepts:
– I believe that if I study hard I will pass this course
– I desire to pass this course
– I intend to study hard

Page 4: Software Agent - BDI architecture -

[Figure: beliefs and desires feed into a selection function, which produces an intention]

Page 5: Software Agent - BDI architecture -

Examples of BDI Agents

• Hotel manager
– Duties: supervising rooms (checking occupancy, making reservations), giving cleaning orders, giving repair orders, handling submitted needs
– Model:
• Beliefs – room status and schedule
• Desires – provide reservations to clients
• Intentions – order cleaning / order repairs

• Hotel maid
– Duties: cyclic room cleaning (changing bedclothes, tablecloths and towels, vacuuming, bathroom cleaning, cabinet keeping), room cleaning on the manager's demand, defect submission, taking new stuff (bedclothes, towels, etc.) from the warehouse manager and giving him the used ones
– Model:
• Beliefs – room stuff and device status
• Desires – make rooms clean / detect defects
• Intentions – clean room / submit defects / exchange used stuff with warehouse
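To make the decomposition concrete, here is a minimal sketch in plain Python of how the maid's mental state might be held as data. All names and facts are invented for illustration; this is not part of any BDI framework.

from dataclasses import dataclass, field

# A minimal sketch of the hotel maid's mental state; the facts and
# field contents are illustrative only.
@dataclass
class MaidState:
    beliefs: set = field(default_factory=lambda: {
        ("room_status", 101, "dirty"),
        ("stock", "towels", 4),
    })
    desires: set = field(default_factory=lambda: {
        ("clean", 101),
    })
    intentions: list = field(default_factory=lambda: [
        ("clean_room", 101),   # the current commitment
    ])

state = MaidState()
print(state.intentions[0])  # -> ('clean_room', 101)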

Page 6: Software Agent - BDI architecture -

Role of Intentions

• Intentions drive means-ends reasoning:
– Having formed an intention, I will attempt to achieve it
– This involves deciding how to achieve it
– If one action fails, I will try another approach

• Intentions constrain future deliberation:
– I will not entertain options that are inconsistent with my intentions

• Intentions persist:
– I will not give up my intentions without good reason
– Typically, they persist until I believe either that they have been achieved or that they will never be achieved, or until the reason I had the intention is no longer present

• Intentions influence the beliefs on which future practical reasoning is based:
– If I adopt an intention, I will plan for the future on the assumption that I will achieve that intention

Page 7: Software Agent - BDI architecture -

Balancing Reactive and Goal-directed Behavior

• How often should an agent reconsider its intentions?
– Bold agents: an agent that rarely stops to reconsider will continue attempting to achieve intentions even after they are no longer possible, or after it has no reason for achieving them.
– Cautious agents: an agent that constantly reconsiders will spend insufficient time actually working to achieve its intentions, and hence may never achieve them.

• The main factor affecting how these agents perform in different environments is the rate of world change, r (a toy simulation of this trade-off is sketched below):
– If r is low, bold agents do well compared with cautious ones: while cautious agents waste time reconsidering their intentions, bold agents are busy working towards, and achieving, their goals.
– If r is high, cautious agents outperform bold agents: cautious agents can recognize when their intentions are doomed, and can also take advantage of serendipitous situations and new opportunities.
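A hedged illustration of the trade-off: the following toy Python simulation is invented here (it is not the original testbed behind these results; the scoring, the step counts and the probabilities are all assumptions). A bold agent reconsiders every 50 steps, a cautious one every 2 steps, and intentions silently become worthless at rate r.

import random

def run(reconsider_every, world_change_rate, steps=10_000, work_needed=5):
    """Toy model: an intention needs `work_needed` work steps to complete.
    With probability `world_change_rate` per step the world changes and the
    current intention becomes worthless; only reconsideration detects this."""
    random.seed(0)
    achieved, progress, doomed = 0, 0, False
    for t in range(steps):
        if random.random() < world_change_rate:
            doomed = True                      # the world has moved on
        if t % reconsider_every == 0:          # deliberation step (costs time)
            if doomed:
                progress, doomed = 0, False    # drop intention, adopt a new one
            continue
        if not doomed:                         # work towards the intention
            progress += 1
            if progress == work_needed:
                achieved, progress = achieved + 1, 0
    return achieved

for r in (0.001, 0.05, 0.3):
    print(f"r={r}: bold={run(50, r)}, cautious={run(2, r)}")

At low r the bold agent wins (it wastes few steps deliberating); at high r the cautious agent wins (it quickly notices doomed intentions), matching the slide's claim.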

Page 8: Software Agent - BDI architecture -

Schematic of BDI Architecture

• A belief revision function (brf)

• A set of current beliefs

• An option generation function

• A set of current desires (options)

• A filter function

• A set of current intentions

• An action selection function


Page 9: Software Agent - BDI architecture -

Components of a BDI Agent (1)

• A set of current beliefs
– These represent the information the agent has about its current environment

• A belief revision function (brf)
– Takes the perceptual input and the agent's current beliefs and, using these, determines a new set of beliefs

• An option generation function
– Determines the options available to the agent, using its current beliefs about the environment and its current intentions

• A set of current desires (options)
– These represent possible courses of action available to the agent

Page 10: Software Agent - BDI architecture -

Components of a BDI Agent (2)

• A filter function
– Represents the agent's deliberation process and determines the agent's intentions on the basis of its current beliefs, desires and intentions

• A set of current intentions
– These represent the agent's current focus: those states of affairs that it has committed to trying to bring about

• An action selection function
– Determines an action to perform on the basis of the current intentions

Page 11: Software Agent - BDI architecture -

Formalization of a BDI Agent

• Let Bel, Des, Int be the sets of all possible beliefs, desires and intentions
– We will not consider the contents of these sets (they depend on the purpose of the agent), but often they are logical formulas

• There must be a notion of consistency defined on these sets, e.g.:
– Is an intention to achieve x consistent with the belief y?

• The internal state of a BDI agent at a given instant is a triple (B, D, I), where B ⊆ Bel, D ⊆ Des, I ⊆ Int

• The belief revision function is a mapping

brf : ℘(Bel) × P → ℘(Bel)

(where P is the set of all percepts and ℘ denotes the powerset)
– On the basis of the current percept and the current beliefs, it determines a new set of beliefs

Page 12: Software Agent - BDI architecture -

Functions

• The option generation function is a mapping

options : ℘(Bel) × ℘(Int) → ℘(Des)

– It is responsible for the agent's means-ends reasoning: how to achieve its intentions
– Some of the options feed back, recursively elaborating a hierarchical plan structure until it reaches executable actions

• The agent's deliberation process (filter) is a mapping

filter : ℘(Bel) × ℘(Des) × ℘(Int) → ℘(Int)

– Updates the agent's intentions
– Drops impossible or non-beneficial intentions
– Retains those not yet achieved and expected to be of benefit
– Finally, it adopts new intentions, either to achieve existing intentions or to exploit new opportunities

• N.B. Intentions must come from somewhere, so any filter must satisfy:

∀B ∈ ℘(Bel), ∀D ∈ ℘(Des), ∀I ∈ ℘(Int): filter(B, D, I) ⊆ I ∪ D

Page 13: Software Agent - BDI architecture -

Action Selection Function

• The execute function simply returns any executable intention:

execute : ℘(Int) → A

• The whole action function of a BDI agent is simply:

action : P → A

function action(p : P) : A
begin
    B := brf(B, p)
    D := options(B, I)
    I := filter(B, D, I)
    return execute(I)
end function action
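As a concrete, hedged sketch, here is the control loop above as runnable Python. The bodies of brf, options and filter are deliberately trivial stand-ins (the tiny plan library and all names are invented for illustration); only the loop structure follows the slide.

# A minimal, runnable sketch of the BDI control loop above.
# Bel, Des and Int are modelled as sets of strings.

def brf(beliefs, percept):
    """Belief revision: here we simply add the percept."""
    return beliefs | {percept}

def options(beliefs, intentions):
    """Means-ends reasoning: map each intention to known sub-steps."""
    plan_library = {"passCourse": {"workHard"}}   # illustrative only
    desires = set()
    for i in intentions:
        desires |= plan_library.get(i, set())
    return desires

def filter_(beliefs, desires, intentions):
    """Deliberation: keep old intentions, adopt all options.
    Note filter_(B, D, I) is a subset of I | D, as the slide requires."""
    return intentions | desires

def execute(intentions):
    """Action selection: pick any executable intention."""
    return next(iter(intentions), None)

beliefs, intentions = set(), {"passCourse"}
for percept in ["newSemester", "examSoon"]:
    beliefs = brf(beliefs, percept)
    desires = options(beliefs, intentions)
    intentions = filter_(beliefs, desires, intentions)
    print(percept, "->", execute(intentions))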

Page 14: Software Agent - BDI architecture -

Passing the Course

• A student agent perceives the following beliefs:

Beliefs₁ = brf(Beliefs₀, percept)
         = { workHard → passCourse,
             attendLectures ∧ completeCoursework ∧ review → workHard }

• The agent has an initial intention to pass the course:

Intentions₀ = { passCourse }

• The agent's desires are freshly generated each cycle (they do not persist). The option generation function leads to desires for the means of passing the course:

Desires₁ = options(Beliefs₁, Intentions₀)
         = { workHard, attendLectures, completeCoursework, review }

Page 15: Software Agent - BDI architecture -

Generating Intentions

• The filter function leads to some new intentions being added:

Intentions₁ = filter(Beliefs₁, Desires₁, Intentions₀)
            = { passCourse, workHard, attendLectures, completeCoursework, review }

• One or more of these will then be executed before the agent's deliberation cycle recommences.

Page 16: Software Agent - BDI architecture -

Obtaining New Beliefs

• Suppose the agent perceives new information which leads to his beliefs being revised:

Beliefs₂ = brf(Beliefs₁, { cheat → passCourse, cheat → ¬workHard })
         = { cheat → passCourse,
             cheat → ¬workHard,
             workHard → passCourse,
             attendLectures ∧ completeCoursework ∧ review → workHard }

Page 17: Software Agent - BDI architecture -

Revising Desires and Intentions

• The agent recomputes his current desires:

Desires₂ = options(Beliefs₂, Intentions₁) = { cheat }

• And intentions:

Intentions₂ = filter(Beliefs₂, Desires₂, Intentions₁)
            = { passCourse, cheat }

• The agent drops his original intention to work hard (and its consequences) and adopts a new one: to cheat

Page 18: Software Agent - BDI architecture -

Adding More Beliefs

• Subsequently, the agent perceives that if caught cheating, he will no longer pass the course. What's more, he is certain to be caught:

Beliefs₃ = brf(Beliefs₂, { cheat ∧ caught → ¬passCourse, caught })

• Because the new beliefs lead to an inconsistency, the agent has had to drop his belief that cheat → passCourse:

Beliefs₃ = (Beliefs₂ \ { cheat → passCourse }) ∪ { cheat ∧ caught → ¬passCourse, caught }

Page 19: Software Agent - BDI architecture -

Revising Desires and Intentions

• The agent recomputes his desires and intentions:

Desires₃ = options(Beliefs₃, Intentions₂)
         = { workHard, attendLectures, completeCoursework, review }

Intentions₃ = filter(Beliefs₃, Desires₃, Intentions₂)
            = { passCourse, workHard, attendLectures, completeCoursework, review }

• Because it is no longer consistent to cheat (even though it may be preferable to working hard), the agent drops that intention and re-adopts workHard (and its consequences)
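A hedged Python sketch of this last step: the rule encoding, the consistency check (hard-wired to the passCourse goal) and the fixpoint elaboration of sub-plans are all simplifications invented for illustration, not the slides' formal definitions.

# Simplified replay of the cheat/caught episode. Beliefs are rules
# (frozenset of antecedents, consequent); "~" marks a negated consequent;
# a rule with no antecedents is a plain fact.

Beliefs3 = {
    (frozenset({"cheat", "caught"}), "~passCourse"),
    (frozenset(), "caught"),
    (frozenset({"workHard"}), "passCourse"),
    (frozenset({"attendLectures", "completeCoursework", "review"}),
     "workHard"),
}
Intentions2 = {"passCourse", "cheat"}

def facts(beliefs):
    return {c for a, c in beliefs if not a}

def inconsistent(x, beliefs):
    """x is doomed if some rule derives ~passCourse from x plus known facts."""
    known = facts(beliefs)
    return any(c == "~passCourse" and a <= known | {x} for a, c in beliefs)

def options(beliefs, intentions):
    """Desire the antecedents of rules whose consequent we intend; iterate
    to a fixpoint so sub-plans are elaborated ("options feed back")."""
    desires, frontier = set(), set(intentions)
    while frontier:
        new = {p for a, c in beliefs if c in frontier
               for p in a if not inconsistent(p, beliefs)}
        frontier = new - desires
        desires |= new
    return desires

def filter_(beliefs, desires, intentions):
    return {x for x in intentions | desires if not inconsistent(x, beliefs)}

Desires3 = options(Beliefs3, Intentions2)
Intentions3 = filter_(Beliefs3, Desires3, Intentions2)
print(sorted(Desires3))     # workHard and its sub-steps; cheat is gone
print(sorted(Intentions3))  # passCourse retained, now via workHard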

Page 20: Software Agent - BDI architecture -

AgentSpeak(L)

• A model that shows a one-to-one correspondence between its model theory, proof theory and abstract interpreter
– An attempt to bridge the gap between theory and practice
– A natural extension of logic programming for the BDI agent architecture
– Provides an elegant abstract framework for programming BDI agents
– Based on a restricted first-order language with events and actions
– The behavior of the agent (i.e., its interaction with the environment) is dictated by the programs written in AgentSpeak(L)

Page 21: Software Agent - BDI architecture -

Basic Notions (1)

• A set of base beliefs: facts in the logic-programming sense

• A set of plans: context-sensitive, event-invoked recipes that allow hierarchical decomposition of goals as well as the execution of actions with the purpose of accomplishing a goal

• Belief atom
– A first-order predicate in the usual notation
– Belief atoms and their negations are termed belief literals

• Goal: a state of the system which the agent wants to achieve
– Achievement goals
• predicates prefixed with the operator '!'
• state that the agent wants to achieve a state of the world where the associated predicate is true
• in practice, these initiate the execution of subplans
– Test goals
• predicates prefixed with the operator '?'
• return a unification for the associated predicate with one of the agent's beliefs; a test goal fails if no unification is found

Page 22: Software Agent - BDI architecture -

Basic Notions (2)

• Triggering event
– Defines which events may initiate the execution of a plan
– An event can be:
• internal, when a subgoal needs to be achieved
• external, when generated from belief updates as a result of perceiving the environment
– There are two types of triggering events, related to the addition ('+') and deletion ('-') of attitudes (beliefs or goals)

• Plans: refer to the basic actions that an agent is able to perform on its environment

p ::= te : ct <- h

where:
te – the triggering event (denoting the purpose of the plan)
ct – a conjunction of belief literals representing a context; the context must be a logical consequence of the agent's current beliefs for the plan to be applicable
h – a sequence of basic actions or (sub)goals that the agent has to achieve (or test) when the plan, if applicable, is chosen for execution

Page 23: Software Agent - BDI architecture -

+concert(A,V) : likes(A) <- !book_tickets(A,V).

+!book_tickets(A,V) : ¬busy(phone)
    <- call(V);
       …;
       !choose_seats(A,V).

In the first plan, +concert(A,V) is the triggering event and likes(A) is the context. In the second plan, the triggering event is the addition of an achievement goal (+!book_tickets(A,V)), and call(V) is a basic action.

Page 24: Software Agent - BDI architecture -

Basic Notions (3)

• Intentions
– Plans that the agent has chosen for execution
– Intentions are executed one step at a time
– A step can:
• query or change the beliefs
• perform actions on the external world
• suspend the execution until a certain condition is met
• submit new goals
– The operations performed by a step may generate new events, which, in turn, may start new intentions
– An intention succeeds when all its steps have been completed; it fails when certain conditions are not met or the actions being performed report errors

Page 25: Software Agent - BDI architecture -

Syntax AgentSpeak(L)

ag ::= bs ps
bs ::= at1. … atn.                 (n ≥ 0)
at ::= P(t1, …, tn)                (n ≥ 0)
ps ::= p1 … pn                     (n ≥ 1)
p  ::= te : ct <- h.
te ::= +at | -at | +g | -g
ct ::= true | l1 & … & ln          (n ≥ 1)
h  ::= true | f1; …; fn            (n ≥ 1)
l  ::= at | not(at)
f  ::= A(t1, …, tn) | g | u        (n ≥ 0)
g  ::= !at | ?at
u  ::= +at | -at
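As a hedged illustration of this grammar, plan structures could be represented as plain Python dataclasses. The class names and this whole AST are invented for the sketch; they are not taken from any existing AgentSpeak implementation.

from dataclasses import dataclass
from typing import List, Tuple, Union

# Invented, minimal AST for the grammar above; terms are plain strings.
@dataclass(frozen=True)
class Atom:                 # at ::= P(t1, ..., tn)
    pred: str
    args: Tuple[str, ...] = ()

@dataclass(frozen=True)
class Goal:                 # g ::= !at | ?at
    kind: str               # "!" or "?"
    atom: Atom

@dataclass(frozen=True)
class Event:                # te ::= +at | -at | +g | -g
    op: str                 # "+" or "-"
    body: Union[Atom, Goal]

@dataclass(frozen=True)
class Plan:                 # p ::= te : ct <- h.
    trigger: Event
    context: List[Atom]     # conjunction of belief literals
    body: List[object]      # actions, goals or belief updates

# The concert plan from the earlier slide:
# +concert(A,V) : likes(A) <- !book_tickets(A,V).
plan = Plan(Event("+", Atom("concert", ("A", "V"))),
            context=[Atom("likes", ("A",))],
            body=[Goal("!", Atom("book_tickets", ("A", "V")))])
print(plan.trigger.op, plan.trigger.body.pred)  # -> + concert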

Page 26: Software Agent - BDI architecture -

Informal Semantics (1)

• The interpreter for AgentSpeak(L) manages:
– a set of events
– a set of intentions
– three selection functions

• Events, which may start off the execution of plans that have relevant triggering events, can be:
– External, when originating from perception of the agent's environment (i.e., additions and deletions of beliefs based on perception are external events). External events create new intentions.
– Internal, when generated from the agent's own execution of a plan (i.e., a subgoal in a plan generates an event of type "addition of achievement goal").

• Intentions are particular courses of action to which an agent has committed in order to handle certain events. Each intention is a stack of partially instantiated plans.

Page 27: Software Agent - BDI architecture -

Informal Semantics (2)

• SE (the event selection function)
– selects a single event from the set of events

• SO (the option selection function)
– selects an "option" (i.e., an applicable plan) from a set of applicable plans

• SI (the intention selection function)
– selects one particular intention from the set of intentions

• The selection functions are agent-specific, in the sense that they should make selections based on the agent's characteristics (a sketch of one interpreter cycle follows below)
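A hedged Python sketch of one interpreter cycle built around these three functions. The data structures (plans as dicts, events as pairs) and the trivial "take the first one" selection policies are invented for illustration; a real AgentSpeak(L) interpreter is considerably richer.

# One AgentSpeak(L)-style deliberation cycle, heavily simplified:
# an event is (trigger_name, is_external); an intention is a stack
# of running plan instances.

def SE(events):                      # event selection: FIFO policy
    return events.pop(0)

def SO(applicable):                  # option selection: first applicable
    return applicable[0]

def SI(intentions):                  # intention selection: first intention
    return intentions[0]

def cycle(events, plans, beliefs, intentions):
    """Handle one event, then execute one step of one intention."""
    if events:
        trigger, external = SE(events)
        relevant = [p for p in plans if p["trigger"] == trigger]
        applicable = [p for p in relevant if p["context"](beliefs)]
        if applicable:
            instance = {"body": list(SO(applicable)["body"])}  # fresh copy
            if external:             # external events create new intentions
                intentions.append([instance])
            else:                    # internal ones extend an existing stack
                SI(intentions).append(instance)
    if intentions:
        stack = SI(intentions)
        step = stack[-1]["body"].pop(0)   # run one step of the top plan
        if not stack[-1]["body"]:
            stack.pop()                   # a finished plan leaves the stack
        if not stack:
            intentions.remove(stack)      # a finished intention is dropped
        return step

plans = [{"trigger": "+invite",
          "context": lambda b: "colleague" in b,
          "body": ["!call_forward_busy"]}]
events, intentions = [("+invite", True)], []
print(cycle(events, plans, {"colleague"}, intentions))  # -> !call_forward_busy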

Page 28: Software Agent - BDI architecture -

Informal Semantics: Overview (1)

[Figure: overview of the AgentSpeak(L) interpreter cycle]

Page 29: Software Agent - BDI architecture -

Informal Semantics: Overview (2)

[Figure: overview of the AgentSpeak(L) interpreter cycle, continued]

Page 30: Software Agent - BDI architecture -

Example

ALICE: "During lunch time, forward all calls to Carla. When I am busy, incoming calls from colleagues should be forwarded to Denise."

Page 31: Software Agent - BDI architecture -

Beliefs

• user(alice).
• user(bob).
• user(carla).
• user(denise).
• ~status(alice, idle).
• status(bob, idle).
• colleague(bob).
• lunch_time("11:30").

Page 32: Software Agent - BDI architecture -

Plan

(Beliefs as on the previous slide.)

"During lunch time, forward all calls to Carla."

+invite(X, alice) : lunch_time(T)
    <- !call_forward(alice, X, carla).                    (p1)

"When I am busy, incoming calls from colleagues should be forwarded to Denise."

+invite(X, alice) : colleague(X)
    <- !call_forward_busy(alice, X, denise).              (p2)

+invite(X, Y) : true
    <- connect(X, Y).                                     (p3)

Page 33: Software Agent - BDI architecture -

Plan Example

(Beliefs and plans p1–p3 as above, plus:)

+!call_forward(X, From, To) : invite(From, X)
    <- +invite(From, To); -invite(From, X).               (p4)

+!call_forward_busy(Y, From, To) : invite(From, Y) & not(status(Y, idle))
    <- +invite(From, To); -invite(From, Y).               (p5)


Page 35: Software Agent - BDI architecture -

Execution 1

• A new event is sensed from the environment: +invite(bob, alice) (there is a call for Alice from Bob).
• There are three relevant plans for this event (p1, p2 and p3): the event matches the triggering event of each of the three plans.

Relevant plans and unifiers:
– p1: +invite(X, alice) : lunch_time(T) <- !call_forward(alice, X, carla).   Unifier: {X=bob}
– p2: +invite(X, alice) : colleague(X) <- !call_forward_busy(alice, X, denise).   Unifier: {X=bob}
– p3: +invite(X, Y) : true <- connect(X, Y).   Unifier: {X=bob, Y=alice}
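A hedged sketch of how these unifiers could be computed. This is a toy matcher (ground events only, variables as capitalized strings), invented for illustration; real AgentSpeak interpreters use full unification.

# Toy matcher for this step: unify a plan's trigger pattern with a
# ground event, returning a binding or None.

def match(pattern, event, binding=None):
    binding = dict(binding or {})
    name_p, args_p = pattern
    name_e, args_e = event
    if name_p != name_e or len(args_p) != len(args_e):
        return None
    for p, e in zip(args_p, args_e):
        if p[0].isupper():                      # a variable
            if binding.setdefault(p, e) != e:
                return None                     # clashing binding
        elif p != e:                            # a constant must match
            return None
    return binding

event = ("invite", ("bob", "alice"))
triggers = {"p1": ("invite", ("X", "alice")),
            "p2": ("invite", ("X", "alice")),
            "p3": ("invite", ("X", "Y"))}
for name, trig in triggers.items():
    print(name, match(trig, event))
# p1 {'X': 'bob'}   p2 {'X': 'bob'}   p3 {'X': 'bob', 'Y': 'alice'}

Checking each plan's context under its unifier against the belief base then leaves only p2 applicable, as the next slide shows.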

Page 36: Software Agent - BDI architecture -

Execution 2

• Only the context of plan p2 is satisfied (colleague(bob) holds), so p2 is applicable.
• A new intention based on this plan is created in the set of intentions, because the event was external, generated from the perception of the environment.
• The plan starts to be executed. It adds a new event, this time an internal one: +!call_forward_busy(alice, bob, denise).

Intention 1 (stack):
– +invite(X, alice) : colleague(X) <- !call_forward_busy(alice, X, denise)   Unifier: {X=bob}

Page 37: Software Agent - BDI architecture -

Execution 3

• A plan relevant to this new event is found (p5):

Relevant plan:
– p5: +!call_forward_busy(Y, From, To) : invite(From, Y) & not(status(Y, idle)) <- +invite(From, To); -invite(From, Y).   Unifier: {From=bob, Y=alice, To=denise}

• p5's context condition holds (invite(bob, alice) is believed and alice is not idle), so it becomes an applicable plan and is pushed on top of intention 1 (it was generated by an internal event).

Intention 1 (stack, top first):
– +!call_forward_busy(Y, From, To) : invite(From, Y) & not(status(Y, idle)) <- +invite(From, To); -invite(From, Y)   Unifier: {From=bob, Y=alice, To=denise}
– +invite(X, alice) : colleague(X) <- !call_forward_busy(alice, X, denise)   Unifier: {X=bob}

Page 38: Software Agent - BDI architecture -

Execution 4

• A new internal event is created: +invite(bob, denise).
• Of the three invite-handling plans (p1, p2 and p3), only p3 applies to this event: the triggering events of p1 and p2 require the callee to be alice.
• The plan is pushed on top of the existing intention.

Intention 1 (stack, top first):
– +invite(X, Y) : true <- connect(X, Y)   Unifier: {X=bob, Y=denise}
– +!call_forward_busy(Y, From, To) : invite(From, Y) & not(status(Y, idle)) <- +invite(From, To); -invite(From, Y)   Unifier: {From=bob, Y=alice, To=denise}
– +invite(X, alice) : colleague(X) <- !call_forward_busy(alice, X, denise)   Unifier: {X=bob}

Page 39: Software Agent - BDI architecture -

Execution 5

• On top of the intention is a plan whose body contains an action.
• The action connect(bob, denise) is executed and removed from the intention.
• When all formulas in the body of a plan have been removed (i.e., have been executed), the whole plan is removed from the intention, and so is the achievement goal that generated it.
• The only thing that remains to be done is -invite(bob, alice) (this removes the belief from the belief base).
• This ends a cycle of execution, and the process starts all over again, checking the state of the environment and reacting to events.

Intention 1 (stack, top first):
– +!call_forward_busy(Y, From, To) : invite(From, Y) & not(status(Y, idle)) <- -invite(From, Y)   Unifier: {From=bob, Y=alice, To=denise}
– +invite(X, alice) : colleague(X) <- !call_forward_busy(alice, X, denise)   Unifier: {X=bob}
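Putting the whole trace together: a heavily simplified Python replay of this scenario, hedged accordingly. Plans p2, p3 and p5 are hard-coded as functions (p1, the lunch-time rule, and general unification are omitted for brevity); all helper names are invented.

# Heavily simplified replay of the trace: each plan is a function that
# checks its context against the belief base and produces the belief
# updates / actions of its body.

beliefs = {("colleague", "bob"), ("status", "bob", "idle")}
actions = []

def p3_connect(x, y):
    actions.append(("connect", x, y))

def p5_call_forward_busy(y, frm, to):
    # context: invite(From, Y) & not(status(Y, idle))
    if ("invite", frm, y) in beliefs and ("status", y, "idle") not in beliefs:
        add_invite(frm, to)                  # body: +invite(From, To)
        beliefs.discard(("invite", frm, y))  # body: -invite(From, Y)

def add_invite(x, y):                        # event +invite(X, Y)
    beliefs.add(("invite", x, y))
    if y == "alice" and ("colleague", x) in beliefs:   # p2 applicable
        p5_call_forward_busy("alice", x, "denise")
    else:                                              # p3 applicable
        p3_connect(x, y)

add_invite("bob", "alice")     # external event: Bob calls Alice
print(actions)                                # -> [('connect', 'bob', 'denise')]
print(("invite", "bob", "alice") in beliefs)  # -> False (belief removed)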

Page 40: Software Agent - BDI architecture -

Summary

• BDI Agents
– An example of practical reasoning: the process of deciding, moment by moment, which action to perform in the furtherance of our goals
– An example of balancing reactive behavior with goal-directed behavior

• AgentSpeak(L) has many similarities with traditional logic programming, which favors its becoming a popular language
– It proves quite intuitive for those familiar with logic programming
– It has a neat notation, thus providing quite elegant specifications of BDI agents