From Multiagent Systems to Multiagent Societies
Michael Berger
Based on:
1) “Multiagent Systems and Societies of Agents” / Michael N. Huhns and Larry M. Stephens
In Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence (Chapter 2) / Gerhard W. Weiss
2) “Commitments and Conventions: The Foundation of Coordination in Multi-Agent Systems” / Nick R. Jennings
Overview
• Agent and Environment
• Communications
• Interactions
• Commitments and Conventions
Part I: Agent and Environment
Agent - Definition
• An active object with the ability to perceive, reason and act.
• Has explicitly represented knowledge and a mechanism for operating on or drawing inferences from its knowledge.
• Has the ability to communicate.
Environment - Categories
An open environment has the following characteristics:
• Knowable (Accessible): No
• Predictable (Deterministic): No
• Controllable: No
• Historical (non-Episodic): Yes
• Teleological: Yes
• Real-time (Dynamic): Yes
Part II: Communications
Communications - Overview
• Motivation
• Meanings
• Speech Acts
• Message Types and Dialogue Roles
• Communication Protocols
• KQML
• KIF
• Ontologies
Motivation (I)
• Coordination - the extent to which agents avoid extraneous activity.
– Reducing resource contention
– Avoiding livelock / deadlock
– Maintaining safety conditions
• Coherence - how well the system behaves as a unit.
– Determining shared goals
– Pooling knowledge and evidence
Motivation (II)
• Coordination - “not making things worse”.
• Coherence - “making things better”.
• Communication enables the agents to coordinate their actions and behavior, resulting in systems that are more coherent.
Meanings (I)
• Communication - consists of:
– Syntax - how the symbols of communication are structured.
– Semantics - what the symbols denote.
– Pragmatics - how the symbols are interpreted.
• Meaning = Semantics + Pragmatics
Meanings (II)
• Dimensions of meaning:
– Descriptive vs. Prescriptive
– Speaker’s vs. Hearer’s vs. Society’s Perspective
– Semantics vs. Pragmatics
– Contextuality
– Identity
– Cardinality
Speech Acts
• Speech act theory used as basis for analyzing human communication.
• Theory views human natural language as actions.
• Speech acts have three aspects:
– Locution - the physical utterance by the speaker.
– Illocution - the intended meaning of the utterance by the speaker.
– Perlocution - the action that results from the locution.
• “Performative” - Speech acts that have the property that “saying it makes it so” (e.g. promise, report, tell, request, demand).
Message Types and Dialogue Roles
• Two basic message types:
– Assertion
– Query
• Three dialogue roles:
– Master (active)
• Sends queries (questions), receives assertions (answers), sends assertions (fact determinations).
– Slave (passive)
• Receives queries (questions), sends assertions (answers), receives assertions (fact determinations).
– Peer
• Master + Slave
Communication Protocols
• Communication can be:
– Binary (single sender, single receiver)
– N-ary (single sender, many receivers)
• Messages sent using communication protocols are specified by a data structure that contains the following fields (see the sketch below):
– Sender
– Receiver
– Encoding / Decoding functions
– Language of message
– Message content
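For illustration only (not part of the original slides), a minimal Python sketch of such a message structure; the field names, the JSON encoding and the example content are assumptions:

```python
import json
from dataclasses import dataclass

@dataclass
class Message:
    """Fields of a protocol message as listed on the slide."""
    sender: str      # sending agent
    receiver: str    # receiving agent (binary case: a single receiver)
    language: str    # language the content is expressed in
    content: str     # the message content itself

def encode(msg: Message) -> bytes:
    """Encoding function: serialise the message for transport."""
    return json.dumps(vars(msg)).encode()

def decode(raw: bytes) -> Message:
    """Decoding function: rebuild the message on the receiver's side."""
    return Message(**json.loads(raw.decode()))

# Example: Cowboy tells Shadow (in English) that a is broken.
wire = encode(Message("Cowboy", "Shadow", "English", "a is broken"))
print(decode(wire))
```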
Communicating Agents (I)
[Diagram: one agent tells another: "a is broken."]
KQML
• KQML - Knowledge Query and Manipulation Language.
• A basic KQML performative is defined by a structure that contains the following fields:
– Sender
– Receiver
– Language
– Ontology
– Content
• More advanced performatives exist.
• The language is used as a wrapper for other languages - domain independent!
• Forwarding and nesting are possible (see the sketch below).
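A minimal sketch of how a KQML-style performative wraps domain content and how whole messages can be nested for forwarding; plain Python dictionaries stand in for real KQML syntax, and the agent and performative names are illustrative:

```python
# A KQML-style performative represented as a plain dictionary.
ask = {
    "performative": "ask-one",
    "sender": "Cowboy",
    "receiver": "Shadow",
    "language": "KIF",
    "ontology": "Computers",
    "content": "(broken a)",   # wrapped content; KQML itself stays domain independent
}

# Forwarding / nesting: a whole performative becomes the content of another one.
forward = {
    "performative": "forward",
    "sender": "Shadow",
    "receiver": "Mechanic",
    "language": "KQML",
    "ontology": "kqml-ontology",
    "content": ask,            # the original message is nested inside
}

print(forward["content"]["content"])   # -> (broken a)
```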
Communicating Agents (II)
[Diagram:
Message - Sender: Cowboy, Receiver: Shadow, Language: English, Content: "a is broken"
Cowboy's languages: English, Spanish, Basque; Shadow's languages: French
Shadow's reply: "Je ne comprends pas" ("I don't understand")]
KIF
• KIF - Knowledge Interchange Format.
• Prefix version of first-order predicate calculus.
– Example: (or (and (> ?a 6) (> b 5)) (< c 7))
• Possible to encode knowledge about knowledge (second-order) and to describe procedures.
Communicating Agents (III)
[Diagram:
Message - Sender: Cowboy, Receiver: Shadow, Language: KIF, Ontology: Computers, Content: broken(a)
Cowboy's languages: English, Spanish, Basque, KIF; ontologies: Computers, Politics, Sports
Shadow's languages: French, KIF; ontologies: Fashion, Politics, Weather
Shadow's reply: bad(message)]
Ontologies
• Ontology - specification of objects, concepts and relationships in an area of interest (domain).
• Concepts represented in first-order logic as unary predicates. Relationships represented by n-ary predicates.
• Note: predicates refer to classes of objects, not instances of objects.
– Except "instanceof"
• All agents share the same ontology - i.e. all agents use and understand the same "vocabulary"!
Communicating Agents (IV)
[Diagram:
Message - Sender: Cowboy, Receiver: Shadow, Language: KIF, Ontology: Computers, Content: broken(a)
Cowboy's languages: English, Spanish, Basque, KIF; ontologies: Computers, Politics, Sports
Shadow's languages: French, KIF; ontologies: Fashion, Politics, Weather, Computers
Shadow's reply: need_fixing(a)]
Computer Ontology:
instanceof(a, disk)
instanceof(X, disk) AND broken(X) ==> need_fixing(X)
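A toy sketch of the inference Shadow performs with the shared Computers ontology; the tuple-based predicate encoding is an assumption made for this example:

```python
# Shared Computers ontology, encoded as simple facts plus one rule.
facts = {("instanceof", "a", "disk")}

def rule_need_fixing(facts):
    """instanceof(X, disk) AND broken(X) ==> need_fixing(X)"""
    derived = set()
    for (pred, x, cls) in [f for f in facts if f[0] == "instanceof"]:
        if cls == "disk" and ("broken", x) in facts:
            derived.add(("need_fixing", x))
    return derived

# Cowboy's KIF message adds the fact broken(a); Shadow then applies the rule.
facts.add(("broken", "a"))
facts |= rule_need_fixing(facts)
print(("need_fixing", "a") in facts)   # -> True
```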
Part III: Interactions
Interactions - Overview
• Motivation
• Negotiation
• Market Mechanisms
• Contract Net
• Truth Maintenance Systems
• Blackboard Systems
Motivation
• Communication is a necessary condition for coordination and coherence, but not a sufficient one.
• It would help if agents could:
– Determine shared goals
– Avoid unnecessary conflicts
– Pool knowledge and evidence
Negotiation
• Negotiation - a process by which a joint decision is reached by two or more agents, each trying to reach an individual goal.
• Main steps (see the sketch below):
– One of the agents communicates its initial position.
– While no agreement is reached, each agent makes a proposal in its turn. These may include:
• Concessions.
• New alternatives.
– Ends with agreement or disagreement.
• Mechanisms for negotiation may be:
– Environment-centered
– Agent-centered
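The main steps above can be pictured as an alternating-proposal loop. The sketch below is illustrative only: the concession strategy, the price values and the acceptance test are invented and are not a mechanism described on the slides.

```python
def negotiate(agent_a, agent_b, max_rounds=10):
    """Alternating proposals until one side accepts or the rounds run out."""
    proposal = agent_a.initial_position()        # one agent states its initial position
    proposer, responder = agent_a, agent_b
    for _ in range(max_rounds):
        if responder.acceptable(proposal):
            return ("agreement", proposal)
        proposal = responder.counter(proposal)   # concession or new alternative
        proposer, responder = responder, proposer
    return ("disagreement", None)

class PriceAgent:
    """Toy agent negotiating over a single price."""
    def __init__(self, target, limit, step):
        self.target, self.limit, self.step = target, limit, step
    def initial_position(self):
        return self.target
    def acceptable(self, price):
        # Accept any proposal no further from its target than its own limit.
        return abs(price - self.target) <= abs(self.limit - self.target)
    def counter(self, price):
        # Concede a little from its own target towards the other side's proposal.
        return self.target + (price - self.target) * self.step

seller = PriceAgent(target=100, limit=80, step=0.3)
buyer = PriceAgent(target=60, limit=95, step=0.3)
print(negotiate(seller, buyer))   # ends in agreement after a few concessions
```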
Negotiation Mechanisms: Environment-Centered
• Environment designer.
• "How can the rules of the environment be designed so that the agents will interact productively and fairly?"
• A negotiation mechanism would ideally have the following attributes:
– Efficiency
– Stability
– Simplicity
– Distribution
– Symmetry
Negotiation Mechanisms: Agent-Centered
• Agent designer.
• "Given an environment, what is the best strategy for my agent to follow?"
• A large part of the negotiation mechanisms assume that agents are economically rational.
• For example, a negotiation protocol that contains the following terms:
– Deal
– Utility
– Negotiation set
Market Mechanisms (I)
• Everything of interest to the agents
described in terms of prices.
• Two types of agents:
– Consumers
– Producers
• Markets of goods are interconnected.
Market Mechanisms (II)
• A big market will usually reach a competitive equilibrium:
– Consumers bid to maximize utility, subject to
their budget constraints.
– Producers bid to maximize profits, subject to their
technological capability.
– Net demand is zero for all goods.
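One common way such a market can be driven to the zero-net-demand point is tâtonnement-style price adjustment. This mechanism is not described on the slide; the sketch below, with a made-up single-good demand and supply, only illustrates the equilibrium condition.

```python
def tatonnement(demand, supply, price=1.0, rate=0.1, tol=1e-6, max_iter=10_000):
    """Raise the price when net demand is positive, lower it when negative,
    until net demand is (approximately) zero, i.e. competitive equilibrium."""
    for _ in range(max_iter):
        net_demand = demand(price) - supply(price)
        if abs(net_demand) < tol:
            return price
        price += rate * net_demand
    return price

# Toy single-good market: consumers demand less, producers supply more, as price rises.
demand = lambda p: max(0.0, 10.0 - 2.0 * p)   # consumers' aggregate demand
supply = lambda p: 3.0 * p                    # producers' aggregate supply

print(round(tatonnement(demand, supply), 3))  # equilibrium near p = 2 (demand = supply = 6)
```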
Contract Net (I)
• Interaction protocol for cooperative problem solving (sketched below).
• Modeled on the contracting mechanism used by businesses.
• For any assignment, agents are divided ad hoc into managers and contractors.
• Managers:
– Announce a task that needs to be performed.
– Receive and evaluate bids from potential contractors.
– Award a contract to a suitable contractor.
– Receive and synthesize results.
• Contractors:
– Receive task announcements.
– Evaluate their own capability to respond.
– Respond (decline / bid).
– Perform the task if the bid is accepted by the manager.
– Report the task's results.
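A compressed sketch of the announce/bid/award/report cycle; the task format, bid scoring and class names are assumptions made for illustration, not the actual protocol messages.

```python
class Contractor:
    """Potential contractor: evaluates announcements, bids, and performs awarded tasks."""
    def __init__(self, name, skill):
        self.name, self.skill = name, skill
    def bid(self, task):
        # Decline if not capable, otherwise bid with an estimated cost.
        if task["skill"] != self.skill:
            return None
        return {"contractor": self, "cost": task["size"] / 2}
    def perform(self, task):
        return f"{self.name} finished {task['name']}"

class Manager:
    """Manager: announces a task, evaluates bids, awards a contract, collects the result."""
    def assign(self, task, contractors):
        bids = [c.bid(task) for c in contractors]          # announcement and responses
        bids = [b for b in bids if b is not None]
        if not bids:
            return None
        best = min(bids, key=lambda b: b["cost"])          # award to the cheapest bidder
        return best["contractor"].perform(task)            # contractor reports its result

task = {"name": "index-documents", "skill": "indexing", "size": 10}
pool = [Contractor("c1", "indexing"), Contractor("c2", "parsing"), Contractor("c3", "indexing")]
print(Manager().assign(task, pool))   # -> "c1 finished index-documents"
```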
Contract Net (II)
[Diagram: numbered message flow between a manager and contractors illustrating the contract net protocol.]
Truth Maintenance System (I)
• Truth Maintenance System (TMS) - ensures the integrity of an agent's knowledge, and keeps the knowledge base:
– Stable
• Each datum that has a valid justification is believed.
• Each datum that lacks a valid justification and is not in the initial belief set is disbelieved.
– Well-founded
• Permits no set of its beliefs to be mutually dependent.
– Logically consistent
• No datum is both believed and disbelieved.
• Every datum is either believed or disbelieved.
• No datum and its negation are both believed.
TMS Graph
[Diagram: a justification network shared by two agents. Agent 1 holds P(IN), Q(OUT) and T(INTERNAL); Agent 2 holds R(IN), S(OUT), T(EXTERNAL) and U(OUT). Justification links between data are marked + (supporting) and - (opposing).]
Truth Maintenance System (II)
• Every datum is labeled either:
– IN (in initial belief set).
– INTERNAL ("IN" because of local justification).
– EXTERNAL ("IN" because another agent asserts it).
– OUT (disbelieved).
• When a justification is added or removed, the TMS is invoked:
– Some data are unlabeled, including the newly justified datum and its consequences in all agents.
– A new labeling is introduced for all unlabeled data.
– If any affected agent fails to label, backtracking occurs.
• Principle of TMS changes: affect as few agents as possible and as few beliefs as possible.
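A much-simplified, single-agent sketch of the labelling idea: a datum is believed if it is in the initial belief set or has a justification whose positive antecedents are believed and whose negative antecedents are disbelieved. The data and justifications are invented; a real (distributed) TMS also handles well-foundedness, relabelling and backtracking, which this sketch omits.

```python
def label(initial, justifications, data):
    """Compute believed/disbelieved labels by iterating to a fixpoint."""
    believed = set(initial)                      # data labelled IN
    changed = True
    while changed:
        changed = False
        for datum, just_list in justifications.items():
            if datum in believed:
                continue
            for positives, negatives in just_list:
                if (all(p in believed for p in positives)
                        and not any(n in believed for n in negatives)):
                    believed.add(datum)          # datum gains a valid justification
                    changed = True
                    break
    return {d: ("believed" if d in believed else "disbelieved") for d in data}

# P is in the initial belief set; T is justified by P together with not-S; Q has no justification.
data = ["P", "Q", "S", "T"]
justifications = {"T": [(["P"], ["S"])]}         # T holds if P is believed and S is disbelieved
print(label(initial=["P"], justifications=justifications, data=data))
# -> {'P': 'believed', 'Q': 'disbelieved', 'S': 'disbelieved', 'T': 'believed'}
```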
TMS - Example (I)
[Diagram: the initial labeling of the shared network - Agent 1: P(IN), Q(OUT), T(INTERNAL); Agent 2: R(IN), S(OUT), T(EXTERNAL), U(OUT).]
TMS - Example (II)
[Diagram: a new justification (+) is added to the network, invoking the TMS.]
TMS - Example (III)
[Diagram: P and Q lose their labels.]
TMS - Example (IV)
[Diagram: T (in both agents) and U are unlabeled as well.]
TMS - Example (V)
[Diagram: R and S are also unlabeled.]
TMS - Example (VI)
[Diagram: the new labeling - Agent 1: P(OUT), Q(OUT), T(OUT); Agent 2: R(OUT), S(IN), T(OUT), U(IN).]
Blackboard Systems (I)
• Akin to the following metaphor:
– A group of specialists working together on solving a problem.
– A common blackboard allows every specialist to report ("write down") his sub-task results.
– Every specialist may be assisted in his work by information reported on the blackboard.
• Every specialist is called a "knowledge source" (KS).
Blackboard Systems (II)
• Characteristics of blackboard systems (a minimal sketch follows this list):
– Independence of expertise.
– Diversity in problem-solving techniques.
– Flexible representation of blackboard information.
– Common interaction language.
– Event-based activation.
– Need for control.
– Incremental solution generation.
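A minimal sketch of the metaphor: a shared blackboard, knowledge sources that activate when their triggering information appears, and a simple control loop. All names and the example knowledge sources are invented for illustration.

```python
class Blackboard:
    """Shared store that specialists (knowledge sources) read from and write to."""
    def __init__(self):
        self.entries = {}
    def write(self, key, value):
        self.entries[key] = value

class KnowledgeSource:
    def __init__(self, name, needs, produce):
        self.name, self.needs, self.produce = name, needs, produce
    def can_contribute(self, bb):
        # Event-based activation: fire only once the needed entries are on the board.
        return all(k in bb.entries for k in self.needs) and self.name not in bb.entries
    def contribute(self, bb):
        bb.write(self.name, self.produce(bb.entries))

def control_loop(bb, sources):
    """Very simple controller: let ready knowledge sources contribute until none can."""
    progress = True
    while progress:
        progress = False
        for ks in sources:
            if ks.can_contribute(bb):
                ks.contribute(bb)
                progress = True

bb = Blackboard()
bb.write("raw_signal", [1, 4, 2, 5])
sources = [
    KnowledgeSource("peak", ["raw_signal"], lambda e: max(e["raw_signal"])),
    KnowledgeSource("report", ["peak"], lambda e: f"peak value is {e['peak']}"),
]
control_loop(bb, sources)
print(bb.entries["report"])   # -> "peak value is 5"
```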
Part IV: Commitments and Conventions
Distributed Goal Search Model
• Goal solutions are expressed as an AND/OR graph (which is directed and acyclic); see the sketch below.
– High-level goals are root nodes.
– Primitive goals are leaf nodes.
• The graph also contains the resources needed for solving primitive goals.
• Dependencies may exist between different goals or between a goal and its resource.
– Strong vs. weak
– Uni-directional vs. bi-directional
• Note that dependencies from resources to goals may be resolved by adding more instances of the resource.
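A small sketch of how such an AND/OR goal graph might be represented and checked for solvability; the node encoding and the example graph are assumptions made for illustration.

```python
# Each non-leaf goal is either an AND node (all children needed) or an OR node
# (any one child suffices); leaves are primitive goals that need a resource.
graph = {
    "G1":  ("AND", ["G11", "G12"]),
    "G11": ("OR",  ["G111", "G112"]),
    "G12": ("AND", ["G121"]),
}
resources_available = {"G111": True, "G112": False, "G121": True}

def solvable(goal):
    """A primitive goal is solvable if its resource is available;
    an AND/OR goal is solvable according to its children."""
    if goal not in graph:                       # leaf / primitive goal
        return resources_available.get(goal, False)
    kind, children = graph[goal]
    results = [solvable(c) for c in children]
    return all(results) if kind == "AND" else any(results)

print(solvable("G1"))   # -> True: G11 via G111, and G12 via G121
```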
Distributed Goal Search Graph
[Diagram: an AND/OR goal graph. Top-level goals G1 and G2 are decomposed through intermediate sub-goals down to primitive leaf goals, which draw on a shared pool of resources (d1 ... dz). The graph is partitioned between Agent 1 and Agent 2, and strong and weak dependencies link goals and resources.]
Interactions among Agents for Distributed Goal Search
• Defining the goal graph.
• Assigning particular regions of the graph to different agents.
• Controlling decisions about which areas of the
graph to explore.
• Traversing the graph.
• Ensuring that successful traversal of the graph
is reported.
Commitment - Definition
A pledge from one agent to another agent (or itself) to undertake a specified course of action.
Commitments
• Practical reasoning agents employ intentions for choosing a course of action - a kind of "self-commitment".
• In computational problems, different agents commit themselves to solving different sub-goals of a larger goal.
• Agents may inform other agents of the sub-goals to which they are self-committed. In stronger terms, they may commit to other agents about solving these sub-goals.
Motivation for Conventions (I)
• Agents do not have complete knowledge of the goals and intentions of other agents.
• Infeasible to have all agents re-contemplate the goals of other agents at every step:
– Limited computation power
– Limited communication bandwidth
• Infeasible to have one agent or database keep all information about all agents:
– Bottleneck
– Single point of failure
Motivation for Conventions (II)
• If circumstances change, an agent might be working sub-optimally until it asks about them.
– Another agent solves a goal
– Another agent commits itself to a goal
– Another agent drops its commitment to a goal
– Another agent discovers that a goal is no longer attainable
• We would still like to keep a distributed system of agents...
Convention - Definition
A pre-determined description, common to all agents in the system, of the course of action to be taken by an agent, given a specific circumstance or occurrence.
Minimum Convention for Joint Commitments
• Formalism by Cohen and Levesque.
BASIC SOCIAL CONVENTION
REASONS FOR ACTION:
STATUS OF COMMITMENT TO SHARED GOAL CHANGES
STATUS OF COMMITMENT TO REACHING SHARED GOAL IN PRESENT TEAM CONTEXT CHANGES
STATUS OF COMMITMENT OF A TEAM MEMBER TO SHARED GOAL CHANGES
ACTIONS:
R1: IF STATUS OF COMMITMENT TO SHARED GOAL CHANGES OR
STATUS OF COMMITMENT IN PRESENT TEAM CONTEXT CHANGES
THEN INFORM ALL OTHER TEAM MEMBERS OF CHANGE
R2: IF STATUS OF COMMITMENT OF A TEAM MEMBER TO SHARED GOAL CHANGES
THEN DETERMINE WHETHER JOINT COMMITMENT STILL VIABLE
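A sketch of what following this convention could look like in code; the team plumbing below is invented and only mirrors rules R1 and R2 above.

```python
class TeamMember:
    """Agent that follows the basic social convention for a shared goal."""
    def __init__(self, name, team):
        self.name, self.team = name, team
        self.commitment_viable = True

    # R1: if the status of my commitment to the shared goal (or to reaching it
    #     in the present team context) changes, inform all other team members.
    def on_own_commitment_change(self, new_status):
        for member in self.team:
            if member is not self:
                member.on_teammate_commitment_change(self.name, new_status)

    # R2: if a team member's commitment to the shared goal changes,
    #     determine whether the joint commitment is still viable.
    def on_teammate_commitment_change(self, who, new_status):
        if new_status == "dropped":
            self.commitment_viable = self.reassess_joint_commitment()

    def reassess_joint_commitment(self):
        print(f"{self.name}: re-evaluating whether the joint commitment is still viable")
        return False   # placeholder decision

team = []
a, b = TeamMember("A", team), TeamMember("B", team)
team.extend([a, b])
a.on_own_commitment_change("dropped")   # A drops; B reassesses the joint commitment
```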
Convention for Limited-Bandwidth Environments
LIMITED-BANDWIDTH CONVENTION
REASONS FOR ACTION:
COMMITMENT SATISFIED
COMMITMENT UNATTAINABLE
MOTIVATION FOR COMMITMENT NO LONGER PRESENT
ACTIONS:
R1: IF COMMITMENT SATISFIED OR
COMMITMENT UNATTAINABLE OR
MOTIVATION FOR COMMITMENT NO LONGER PRESENT
THEN DROP COMMITMENT
R2: IF COMMITMENT SATISFIED
THEN INFORM ALL AGENTS WORKING ON RELATED GOALS
R3: IF COMMITMENT DROPPED BECAUSE UNATTAINABLE OR
MOTIVATION NOT PRESENT
THEN INFORM ALL AGENTS WORKING ON STRONGLY RELATED GOALS
R4: IF COMMITMENT DROPPED BECAUSE UNATTAINABLE OR
MOTIVATION NOT PRESENT AND COMMUNICATION RESOURCES NOT OVERBURDENED
THEN INFORM ALL AGENTS WORKING ON WEAKLY RELATED GOALS
Convention in Nearly Open Environments (I)
JOINT RESPONSIBILITY SOCIAL CONVENTION
INHERIT: BASIC SOCIAL CONVENTION
REASONS FOR ACTION:
SHARED GOAL IS MET
SHARED GOAL WILL NEVER BE MET
MOTIVATION FOR SHARED GOAL IS NO LONGER PRESENT
AGREED PLAN WILL NOT ACHIEVE DESIRED RESULTS
AGREED PLAN CANNOT BE EXECUTED
AGREED PLAN HAS NOT BEEN EXECUTED PROPERLY
ACTIONS:
R1: IF SHARED GOAL IS MET OR
SHARED GOAL WILL NEVER BE MET OR
MOTIVATION FOR SHARED GOAL IS NO LONGER PRESENT
THEN DROP COMMITMENT TO SHARED GOAL & TO AGREED PLAN
R2: IF AGREED PLAN WILL NOT ACHIEVE DESIRED RESULTS OR
AGREED PLAN CANNOT BE EXECUTED OR
AGREED PLAN HAS NOT BEEN EXECUTED PROPERLY
THEN DROP COMMITMENT TO AGREED PLAN
R3: IF DROPPED JOINT COMMITMENT TO AGREED PLAN AND
CAN RE-PLAN USING SAME AGENTS
THEN DEVELOP AND COMMIT TO NEW PLAN
Convention in Nearly Open Environments (II)
R4: IF DROPPED COMMITMENT TO AGREED PLAN AND
CANNOT RE-PLAN USING SAME AGENTS AND
CAN DEVELOP NEW PLAN USING DIFFERENT TEAM
THEN DROP COMMITMENT TO EXISTING TEAM & COMMIT TO NEW TEAM
R5: IF CANNOT DEVELOP NEW COMMON PLAN
THEN DROP COMMITMENT TO SHARED GOAL & TO AGREED PLAN
Possible Trends in Conventions
• The harsher the environment, the more rules
are needed to determine the agent’s action.
• The harsher the environment, the more
frequent are situations in which the agent
stops and reconsiders objectives.
– Similar to the spectrum between bold agents and
cautious agents (Kinny and Georgeff).
Example for Benefits of Conventions
[Diagram: the same AND/OR goal graph and resources as in the Distributed Goal Search Graph slide, here used to illustrate what happens when commitments are broken.]
Agents without Honor
[Diagram: a series of scenarios on the goal graph in which Agent 1 or Agent 2 reneges on its committed sub-goal (Agent 1 working on G1 sub-goals, Agent 2 on G2 sub-goals), illustrating the disruption each broken commitment causes to the other agent's work.]
Benefits of Conventions
• Provide a degree of predictability to counteract the uncertainty caused by the distribution of control.
• Mitigate the effect of reneged commitments.
• Flexible - can sometimes be made at different levels and thus have varied time horizons.
– The lower the level, the higher the accuracy of information and the larger the required computation and communication bandwidth.
– Lower levels don't always provide a significant contribution.
– Lower levels might cause more constraints.
Conventions vs. Conventions
• Humans also have conventions.
– Not obligatory.
– Others don’t always expect adherence to them.
• Agent conventions are actually rules rather
than conventions.
The Dawn of Society
• In a system of agents/humans acting without conventions, they cannot expect anything from their peers.
• Pre-determined rules/conventions act as a common denominator for all units.
• Conventions that are adhered to allow the system to act more coherently without extra effort from particular units.
– The whole is larger than the sum of its parts.
• Thus a system turns into a society.
– Human societies always have unwritten rules.
– Agent conventions are also called "social rules".
Thank You