Preferring and Updating in Abductive Multi-Agent Systems
Luís Moniz Pereira
CENTRIA, Departamento de Informática
Universidade Nova de Lisboa
Pierangelo Dell’Acqua
Dept. of Science and Technology
Linköping University
Our agents
We propose a LP approach to agents that can:
Reason and react to other agents
Abduce hypotheses to solve goals and to explain observations
Prefer among possible choices
Intend to reason and to act
Update their own knowledge, reactions and goals
Interact by updating the theory of another agent
Decide whether to accept an update depending on the requesting agent
Framework
This framework builds on the work:
Updating Agents - P. Dell’Acqua & L. M. Pereira, MAS’99
Updates plus Preferences - J. J. Alferes & L. M. Pereira, JELIA’00
Updating agents
Updating agent: a rational, reactive agent that can dynamically change its own knowledge and goals:
makes observations
reciprocally updates other agents with goals and rules
thinks a bit (rational)
selects and executes an action (reactive)
Abductive agents
Abductive agent: an agent that can abduce hypotheses to solve goals and to explain observations.
Hypotheses must satisfy the integrity constraints.
Hypotheses abduced in proving a goal G are not permanent: they only hold during the proof of G.
Hypotheses can be committed to by self-updating.
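As an illustration of the idea (not the paper's proof procedure), abduction over a finite set of abducibles can be sketched as a search for hypothesis sets that entail the goal and satisfy the integrity constraints. The propositional rules, goal and constraint below are hypothetical examples:

```python
from itertools import chain, combinations

# Hypothetical propositional program: head derivable from any listed body.
rules = {
    "wet_grass": [["rained"], ["sprinkler_on"]],  # two alternative bodies
}
abducibles = {"rained", "sprinkler_on"}
constraints = [["rained", "sprinkler_on"]]  # false <- rained, sprinkler_on

def derivable(atom, hyps):
    """Derive atom from the hypotheses via the (acyclic) rules."""
    if atom in hyps:
        return True
    return any(all(derivable(b, hyps) for b in body)
               for body in rules.get(atom, []))

def abduce(goal):
    """Return all consistent hypothesis sets explaining the goal, smallest first."""
    subsets = chain.from_iterable(
        combinations(sorted(abducibles), k) for k in range(len(abducibles) + 1))
    found = []
    for hyps in subsets:
        h = set(hyps)
        violates_ic = any(all(derivable(a, h) for a in ic) for ic in constraints)
        if derivable(goal, h) and not violates_ic:
            found.append(h)
    return found

print(abduce("wet_grass"))  # [{'rained'}, {'sprinkler_on'}]
```

Note how the constraint rules out the joint hypothesis: each explanation holds only within the proof of its goal, matching the non-permanence of abduced hypotheses above.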
Updates plus preferences
A logic programming framework that combines two distinct forms of reasoning: preferring and updating.
Updates create new models, while preferences allow us to select among pre-existing models
The priority relation can itself be updated.
A language capable of considering sequences of logic programs that result from the consecutive updates of an initial program, where it is possible to define a priority relation among the rules of all successive programs.
Preferring agents
Agents can express preferences about their own rules and abducibles.
Preferring agent: an agent that is able to prefer beliefs, reactions and abducibles when several alternatives are possible.
Preferences are expressed via priority rules.
Preferences can be updated, possibly on advice from others.
Claim
We argue that our present theory of agents is a rich, integrative and evolvable basis, suitable for engineering configurable, dynamic, self-organizing and self-evolving agent societies.
Thus, the overall emerging structure will be flexible and dynamic: each agent has its own explicit representation of its organization, which is updatable.
Agent’s language
Atomic formulae:
A                objective atoms
not A            default atoms
i:C              projects
i÷C              updates
Formulae:
A ← L1 ∧ … ∧ Ln                      generalized rules
not A ← L1 ∧ … ∧ Ln                  (every Li is an update or an atom)
L1 ∧ … ∧ Ln ⇒ Z                      active rule (Z is a project)
false ← L1 ∧ … ∧ Ln ∧ Z1 ∧ … ∧ Zm    integrity constraint (every Zj is a project)
Agent’s language
A project i:C can take one of the forms:
i : ( A ← L1 ∧ … ∧ Ln )
i : ( not A ← L1 ∧ … ∧ Ln )
i : ( L1 ∧ … ∧ Ln ⇒ Z )
i : ( false ← L1 ∧ … ∧ Ln ∧ Z1 ∧ … ∧ Zm )
i : ( ?- L1 ∧ … ∧ Ln )    goal
Note that a program can be updated with another program, i.e., any rule can be updated.
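One illustrative way (an assumption of this sketch, not the paper's implementation) to encode the syntactic categories above is with plain record types, so that any rule can indeed appear as the content of a project or an update:

```python
from dataclasses import dataclass

# Hypothetical encoding of the agent language's syntactic categories.

@dataclass(frozen=True)
class Rule:                 # A <- L1 ... Ln  (negated_head for: not A <- ...)
    head: str
    body: tuple = ()
    negated_head: bool = False

@dataclass(frozen=True)
class Project:              # i:C -- intention to update agent i with C
    target: str
    content: object

@dataclass(frozen=True)
class ActiveRule:           # L1 ... Ln => Z, with Z a project
    body: tuple
    project: Project

@dataclass(frozen=True)
class Update:               # i÷C -- update proposed by agent i with C
    source: str
    content: object

# A program can be updated with another program: any rule may be update content.
u = Update("father", Rule("fa_nolA"))
print(u.source)  # father
```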
Agents’ knowledge states
Knowledge states represent dynamically evolving states of agents’ knowledge. They undergo change due to updates.
Given the current knowledge state Ps, its successor knowledge state Ps+1 is produced as a result of the occurrence of a set of parallel updates.
Update actions do not modify the current or any of the previous knowledge states. They only affect the successor state: the precondition of the action is evaluated in the current state and the postcondition updates the successor state.
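A minimal sketch of this state discipline (the representation of a state as a tuple of rule sets is an assumption of the example): the successor state is a new value, and all earlier states remain untouched.

```python
# A knowledge state is the sequence of programs P1..Ps; a set of parallel
# updates yields the successor state Ps+1 and never modifies earlier states.
# The facts "rC" and "NrC" are illustrative names.

def successor(state, updates):
    """state: tuple of frozensets of rules; updates: the parallel update set."""
    return state + (frozenset(updates),)

s0 = (frozenset({"rC"}),)        # initial program
s1 = successor(s0, {"NrC"})      # update with a new fact
print(len(s1))                   # 2
print(s0 == (frozenset({"rC"}),))  # True: the previous state is unchanged
```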
Projects and updates
A project j:C denotes the intention of some agent i of proposing to update the theory of agent j with C.
An update i÷C denotes an update, proposed by agent i, of the current theory of some agent j with C.
Priority rules
Let < be a binary predicate symbol whose set of constants includes all the generalized rules:
r1 < r2 means that the rule r1 is preferred to the rule r2 .
A priority rule is a generalized rule defining < .
Prioritized abductive LP
A prioritized abductive LP is a pair (P,A):
- P is a set of generalized rules (possibly including priority rules) and integrity constraints.
- A is a set of objective and default atoms (abducibles).
Agent theory
The initial theory of an agent is a tuple (P,A,R):
- (P,A) is a prioritized abductive LP.
- R is a set of active rules.
An updating program is a finite set of updates.
Let S be a set of natural numbers. We call the elements s ∈ S states.
An agent α at state s, written (α,s), is a pair (T,U):
- T is the initial theory of α.
- U = {U1, …, Us} is a sequence of updating programs.
Multi-agent system
A multi-agent system M = {(α1,s), …, (αn,s)} at state s is a set of agents α1, …, αn, each at state s.
M characterizes a fixed society of evolving agents.
The declarative semantics of M characterizes the relationship among the agents in M and how the system evolves.
The declarative semantics is based on stable models.
Distributed databases and cooperative agents
Assume that we want to minimize the administrative procedure required for changing residence. For example, we may notify the new residence once at a public office p; it is then the responsibility of that office to inform all the relevant offices.
Writing rC for "residence of Carlo" and NrC for "new residence of Carlo", p can be characterized by (P,A,R), where A = {} and
P = { rC,
      reject(rC) ← NrC }
R = { NrC ⇒ t:NrC }
where t is one of the offices to be informed.
Communication and updates allow distinct agents to be integrated.
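The propagation step can be sketched as message passing: when p is updated with the new residence, its active rule projects the same content onto the other office. The class, the agent names and the single rule below are assumptions of this toy example:

```python
# Toy sketch of the office example: an active rule (trigger, target) emits
# the project target:fact whenever the trigger fact is received.

class Office:
    def __init__(self, name, active_rules=()):
        self.name, self.facts = name, set()
        self.active_rules = list(active_rules)  # (trigger, target) pairs
        self.outbox = []                        # pending projects

    def receive(self, fact):
        self.facts.add(fact)
        for trigger, target in self.active_rules:
            if trigger == fact:
                self.outbox.append((target, fact))  # project target:fact

p = Office("p", active_rules=[("NrC", "t")])
t = Office("t")
p.receive("NrC")                      # Carlo notifies office p once
for target, fact in p.outbox:         # p's active rule informs office t
    {"t": t}[target].receive(fact)
print(t.facts)  # {'NrC'}
```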
Representation of conflicting information and preferences
This example models a situation where an agent, Fabio, receives conflicting advice from two reliable authorities.
Let (P,A,R) be the initial theory of Fabio, where A = R = {} and
P = { dont(A) ← fa(noA) ∧ not do(A)    (r1)
      do(A) ← ma(A) ∧ not dont(A)      (r2)
      false ← do(A) ∧ fa(noA)
      false ← dont(A) ∧ ma(A)
      r1 < r2 ← fr
      r2 < r1 ← mr }
(fa = father advises, ma = mother advises, fr = father responsibility, mr = mother responsibility)
Preferences may resolve conflicting information.
Representation of conflicting information and preferences
Suppose that Fabio wants to live alone, represented as lA.
His mother advises him to do so, but the father advises not to do so:
U1 = { mother÷ma(lA), father÷fa(nolA) }
Assuming that there are no rejection clauses, Fabio accepts both updates, and therefore he is still unable to choose either do(lA) or dont(lA) and, as a result, does not perform any action whatsoever.
Representation of conflicting information and preferences
Afterwards, Fabio's parents separate and the judge assigns responsibility over Fabio to the mother:
U2 = { judge÷mr }
Now the situation changes since the second priority rule gives preference to the mother's wishes, and therefore Fabio can happily conclude ”do live alone”.
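A toy reading of the example (not the stable-models semantics of the paper) shows how the priority update breaks the tie; rule names and the decision procedure are assumptions of this sketch:

```python
# r1 derives dont(lA) from the father's advice, r2 derives do(lA) from the
# mother's; when both fire, the priority relation < (as pairs) decides.

def decide(advice, priorities):
    candidates = set()
    if "fa(nolA)" in advice:
        candidates.add("dont(lA)")        # r1 fires
    if "ma(lA)" in advice:
        candidates.add("do(lA)")          # r2 fires
    if len(candidates) < 2:
        return candidates.pop() if candidates else None
    if ("r2", "r1") in priorities:        # mother's rule preferred
        return "do(lA)"
    if ("r1", "r2") in priorities:        # father's rule preferred
        return "dont(lA)"
    return None                           # unresolved conflict: no action

advice = {"fa(nolA)", "ma(lA)"}
print(decide(advice, set()))              # None: Fabio cannot choose
print(decide(advice, {("r2", "r1")}))     # do(lA): after judge÷mr
```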
Updating preferences
Within the theory of an agent, both rules and preferences can be updated.
The updating process is triggered by means of external or internal projects.
Here, internal projects of an agent are used to update its own priority rules.
Updating preferences
Let the theory of George be characterized by :
P = { workLate ← not party    (r1)
      party ← not workLate    (r2)
      money ← workLate        (r3)
      r2 < r1 }               partying is preferred to working until late
R = { beautifulWoman ⇒ george:wishGoOut
      wishGoOut ∧ not money ⇒ george:getMoney
      wishGoOut ∧ money ⇒ beautifulWoman:inviteOut
      getMoney ⇒ george:(r1 < r2)          to get money, George must
      getMoney ⇒ george:(not r2 < r1) }    update his priority rules
A = { }
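George's reactive behaviour can be sketched as facts triggering active rules that emit projects; the function below collapses the state transitions into one step and hard-codes the rules, both assumptions of this illustration:

```python
# Sketch of George's active rules: each satisfied body emits a
# (target, content) project; getMoney makes George self-update
# his priority rules so that working late becomes preferred.

def fire(facts):
    projects = []
    if "beautifulWoman" in facts:
        projects.append(("george", "wishGoOut"))
    if "wishGoOut" in facts and "money" not in facts:
        projects.append(("george", "getMoney"))
    if "wishGoOut" in facts and "money" in facts:
        projects.append(("beautifulWoman", "inviteOut"))
    if "getMoney" in facts:
        projects.append(("george", "r1 < r2"))       # now prefer working late
        projects.append(("george", "not r2 < r1"))
    return projects

print(fire({"beautifulWoman", "wishGoOut"}))
# [('george', 'wishGoOut'), ('george', 'getMoney')]
```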
Applications
Our agent technology has significant potential to contribute to internet applications, e.g.
- information integration
- web-site management
Engineering agent societies
We believe that the theory of our agents is rich and suitable to engineer configurable, dynamic, self-organizing and self-evolving agent societies.
Jennings argues that:
- open, networked systems are characterized by the fact that there is no single controlling organization.
- the computational model of these systems imposes several requirements.
Engineering agent societies
Computational model's requirements:
the individual entities must be active and autonomous;
the individual entities need to be reactive and proactive;
the computational entities need to be capable of interacting with entities that were not foreseen at design time;
any organizational relationships that do exist must be reflected in the behaviour and actions of the agents (i.e., the organizational relationships must be explicitly represented).
Engineering agent societies
Castelfranchi claims that:
- The most effective solution to the problem of social order in multi-agent systems is social modelling.
- It should leave some flexibility and try to deal with emergent and spontaneous forms of organization (that is, decentralized and autonomous social control).
Problem: modelling the feedback from the global results to the local/individual layer.
Introspection and metareasoning for social modelling
To solve this problem we need two ingredients: introspection and metareasoning.
Introspection: to dynamically change the organization and structure of the multi-agent system, agents must be aware (even if only partially) of that structure and must be able to introspect about it.
Metareasoning: by using metareasoning the agent can evaluate the structure, obtain feedback from it, and eventually try to modify it via preferences and updates in a rational way.
Future work
The approach can be extended in several ways:
Dynamically reconfigurable multi-agent systems.
Introspective and metareasoning abilities.
Other rational abilities can be incorporated, e.g., learning.
A proof procedure for preference reasoning, to be incorporated into the current implementation of updates plus abduction.
Conclusion
To have dynamic, flexible agent societies we need suitable agent theories; otherwise, the structure modelling the agent society will be rigid, in the sense that it will not be modifiable by the agents themselves.
We believe that our theory of agents is a suitable basis for achieving this aim.