Multi-Agent Systems: Overview and Research Directions CMSC 471 December 1 and 3, 2009 Prof. Marie desJardins


TRANSCRIPT

Page 1: Multi-Agent Systems: Overview and Research Directions

Multi-Agent Systems:Overview and Research Directions

CMSC 471

December 1 and 3, 2009

Prof. Marie desJardins

Page 2: Multi-Agent Systems: Overview and Research Directions

Outline

What’s an Agent?
Multi-Agent Systems
  Cooperative multi-agent systems
  Competitive multi-agent systems
MAS Research Directions
  Organizational structures
  Communication limitations
  Learning in multi-agent systems

Page 3: Multi-Agent Systems: Overview and Research Directions

What’s an Agent?

Page 4: Multi-Agent Systems: Overview and Research Directions

What’s an agent?

Weiss, p. 29 [after Wooldridge and Jennings]: “An agent is a computer system that is situated in some environment, and that is capable of autonomous action in this environment in order to meet its design objectives.”

Russell and Norvig, p. 7: “An agent is just something that perceives and acts.”

Rosenschein and Zlotkin, p. 4: “The more complex the considerations that [a] machine takes into account, the more justified we are in considering our computer an ‘agent,’ who acts as our surrogate in an automated encounter.”

Page 5: Multi-Agent Systems: Overview and Research Directions

What’s an agent? II

Ferber, p. 9: “An agent is a physical or virtual entity
  a) Which is capable of acting in an environment,
  b) Which can communicate directly with other agents,
  c) Which is driven by a set of tendencies…,
  d) Which possesses resources of its own,
  e) Which is capable of perceiving its environment…,
  f) Which has only a partial representation of this environment…,
  g) Which possesses skills and can offer services,
  h) Which may be able to reproduce itself,
  i) Whose behavior tends towards satisfying its objectives,
taking account of the resources and skills available to it and depending on its perception, its representations and the communications it receives.”

Page 6: Multi-Agent Systems: Overview and Research Directions

OK, so what’s an environment?

Isn’t any system that has inputs and outputs situated in an environment of sorts?

Page 7: Multi-Agent Systems: Overview and Research Directions

What’s autonomy, anyway?

Jennings and Wooldridge, p. 4: “[In contrast with objects, we] think of agents as

encapsulating behavior, in addition to state. An object does not encapsulate behavior: it has no control over the execution of methods – if an object x invokes a method m on an object y, then y has no control over whether m is executed or not – it just is. In this sense, object y is not autonomous, as it has no control over its own actions…. Because of this distinction, we do not think of agents as invoking methods (actions) on agents – rather, we tend to think of them requesting actions to be performed. The decision about whether to act upon the request lies with the recipient.”

Is an if-then-else statement sufficient to create autonomy?

Page 8: Multi-Agent Systems: Overview and Research Directions

So now what?

If those definitions aren’t useful, is there a useful definition? Should we bother trying to create “agents” at all?

Page 9: Multi-Agent Systems: Overview and Research Directions

Multi-Agent Systems

Page 10: Multi-Agent Systems: Overview and Research Directions

Multi-agent systems

Jennings et al.’s key properties:
  Situated
  Autonomous
  Flexible:
    Responsive to dynamic environment
    Pro-active / goal-directed
    Social interactions with other agents and humans

Research question: How do we design agents to interact effectively to solve a wide range of problems in many different environments?

Page 11: Multi-Agent Systems: Overview and Research Directions

Aspects of multi-agent systems

Cooperative vs. competitive
Homogeneous vs. heterogeneous
Macro vs. micro
Interaction protocols and languages
Organizational structure
Mechanism design / market economics
Learning

Page 12: Multi-Agent Systems: Overview and Research Directions

Topics in multi-agent systems

Cooperative MAS:
  Distributed problem solving: Less autonomy
  Distributed planning: Models for cooperation and teamwork

Competitive or self-interested MAS:
  Distributed rationality: Voting, auctions
  Negotiation: Contract nets

Page 13: Multi-Agent Systems: Overview and Research Directions

Typical (cooperative) MAS domains

Distributed sensor network establishment
Distributed vehicle monitoring
Distributed delivery

Page 14: Multi-Agent Systems: Overview and Research Directions

Distributed sensing

Track vehicle movements using multiple sensors

Distributed sensor network establishment:
  Locate sensors to provide the best coverage
  Centralized vs. distributed solutions

Distributed vehicle monitoring:
  Control sensors and integrate results to track vehicles as they move from one sensor’s “region” to another’s
  Centralized vs. distributed solutions

Page 15: Multi-Agent Systems: Overview and Research Directions

Distributed delivery

Logistics problem: move goods from original locations to destination locations using multiple delivery resources (agents)

Dynamic, partially accessible, nondeterministic environment (goals, situation, agent status)

Centralized vs. distributed solution

Page 16: Multi-Agent Systems: Overview and Research Directions

Cooperative Multi-Agent Systems

Page 17: Multi-Agent Systems: Overview and Research Directions

Distributed problem solving/planning

Cooperative agents, working together to solve complex problems with local information

Partial Global Planning (PGP): A planning-centric distributed architecture
SharedPlans: A formal model for joint activity
Joint Intentions: Another formal model for joint activity
STEAM: Distributed teamwork; influenced by Joint Intentions and SharedPlans

Page 18: Multi-Agent Systems: Overview and Research Directions

Distributed problem solving

Problem solving in the classical AI sense, distributed among multiple agents
That is, formulating a solution/answer to some complex question
Agents may be heterogeneous or homogeneous
DPS implies that agents must be cooperative (or, if self-interested, then rewarded for working together)

Page 19: Multi-Agent Systems: Overview and Research Directions

Requirements for cooperative activity

(Grosz) -- “Bratman (1992) describes three properties that must be met to have ‘shared cooperative activity’:
  Mutual responsiveness
  Commitment to the joint activity
  Commitment to mutual support”

Page 20: Multi-Agent Systems: Overview and Research Directions

Joint intentions

Theoretical framework for joint commitments and communication

Intention: Commitment to perform an action while in a specified mental state

Joint intention: Shared commitment to perform an action while in a specified group mental state

Communication: Required/entailed to establish and maintain mutual beliefs and joint intentions

Page 21: Multi-Agent Systems: Overview and Research Directions

SharedPlans

A SharedPlan for a group action specifies beliefs about how to do the action and its subactions

Formal model captures intentions and commitments towards the performance of individual and group actions

Components of a collaborative plan (p. 5):
  Mutual belief of a (partial) recipe
  Individual intentions-to perform the actions
  Individual intentions-that collaborators succeed in their subactions
  Individual or collaborative plans for subactions

Very similar to joint intentions

Page 22: Multi-Agent Systems: Overview and Research Directions

STEAM: Now we’re getting somewhere!

Implementation of joint intentions theory
Built in the Soar framework
Applied to three “real” domains
Many parallels with SharedPlans

General approach:
  Build up a partial hierarchy of joint intentions
  Monitor team and individual performance
  Communicate when the need is implied by changing mental state and joint intentions

Key extension: Decision-theoretic model of communication selection

Page 23: Multi-Agent Systems: Overview and Research Directions

Competitive Multi-Agent Systems

Page 24: Multi-Agent Systems: Overview and Research Directions

Distributed rationality

Techniques to encourage/coax/force self-interested agents to play fairly in the sandbox

Voting: Everybody’s opinion counts (but how much?)
Auctions: Everybody gets a chance to earn value (but how to do it fairly?)
Contract nets: Work goes to the highest bidder

Issues:
  Global utility
  Fairness
  Stability
  Cheating and lying

Page 25: Multi-Agent Systems: Overview and Research Directions

Pareto optimality

S is a Pareto-optimal solution iff ∀S' (∃x Ux(S') > Ux(S) → ∃y Uy(S') < Uy(S))
i.e., if some agent X is better off in S', then some agent Y must be worse off

Social welfare, or global utility, is the sum of all agents’ utilities: W(S) = Σx Ux(S)
If S maximizes social welfare, it is also Pareto-optimal (but not vice versa)

[Figure: candidate solutions plotted by X’s utility vs. Y’s utility]

Which solutions are Pareto-optimal?

Which solutions maximize global utility (social welfare)?
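A minimal Python sketch of these definitions; the candidate solutions and their utility values below are made up purely for illustration:

```python
# Each solution is a tuple of utilities, one entry per agent (illustrative values only).
solutions = {"S1": (3, 3), "S2": (5, 0), "S3": (0, 5), "S4": (1, 1)}

def dominates(a, b):
    """a Pareto-dominates b: no agent is worse off and some agent is strictly better off."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_optimal(name):
    """S is Pareto-optimal iff no other solution dominates it."""
    u = solutions[name]
    return not any(dominates(v, u) for k, v in solutions.items() if k != name)

social_welfare = {name: sum(u) for name, u in solutions.items()}

print([n for n in solutions if pareto_optimal(n)])      # ['S1', 'S2', 'S3']
print(max(social_welfare, key=social_welfare.get))      # 'S1'
```

Here S1 is the unique social-welfare maximizer, while S1, S2, and S3 are all Pareto-optimal: maximizing social welfare implies Pareto optimality, but not the converse.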

Page 26: Multi-Agent Systems: Overview and Research Directions

Stability

If an agent can always maximize its utility with a particular strategy (regardless of other agents’ behavior) then that strategy is dominant

A set of agent strategies is in Nash equilibrium if each agent’s strategy Si is locally optimal, given the other agents’ strategies
No agent has an incentive to change strategies
Hence this set of strategies is locally stable

Page 27: Multi-Agent Systems: Overview and Research Directions

Prisoner’s Dilemma

              B: Cooperate   B: Defect
A: Cooperate      3, 3          0, 5
A: Defect         5, 0          1, 1

(Payoffs listed as A’s utility, B’s utility.)

Page 28: Multi-Agent Systems: Overview and Research Directions

Prisoner’s Dilemma: Analysis

Pareto-optimal and social welfare maximizing solution: Both agents cooperate

Dominant strategy and Nash equilibrium: Both agents defect

              B: Cooperate   B: Defect
A: Cooperate      3, 3          0, 5
A: Defect         5, 0          1, 1

Why?
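A short Python check of this analysis, using only the payoff matrix shown above:

```python
# Payoffs (A, B) for the Prisoner's Dilemma above.
C, D = "Cooperate", "Defect"
payoff = {(C, C): (3, 3), (C, D): (0, 5),
          (D, C): (5, 0), (D, D): (1, 1)}
strategies = [C, D]

def best_responses_A(b):
    """A's best response(s) when B plays b."""
    best = max(payoff[(a, b)][0] for a in strategies)
    return [a for a in strategies if payoff[(a, b)][0] == best]

def best_responses_B(a):
    """B's best response(s) when A plays a."""
    best = max(payoff[(a, b)][1] for b in strategies)
    return [b for b in strategies if payoff[(a, b)][1] == best]

# Defect is A's best response whatever B does, so it is a dominant strategy.
print({b: best_responses_A(b) for b in strategies})
# {'Cooperate': ['Defect'], 'Defect': ['Defect']}

# A strategy profile is a Nash equilibrium if each strategy is a best response to the other.
nash = [(a, b) for a in strategies for b in strategies
        if a in best_responses_A(b) and b in best_responses_B(a)]
print(nash)   # [('Defect', 'Defect')]
```

Defect is each player’s best response no matter what the other does, so (Defect, Defect) is the only Nash equilibrium, even though (Cooperate, Cooperate) is better for both players.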

Page 29: Multi-Agent Systems: Overview and Research Directions

Voting

How should we rank the possible outcomes, given individual agents’ preferences (votes)?

Six desirable properties (which can’t all simultaneously be satisfied):
  Every combination of votes should lead to a ranking
  Every pair of outcomes should have a relative ranking
  The ranking should be asymmetric and transitive
  The ranking should be Pareto-optimal
  Irrelevant alternatives shouldn’t influence the outcome
  Share the wealth: No agent should always get their way

Page 30: Multi-Agent Systems: Overview and Research Directions

Voting protocols

Plurality voting: The outcome with the highest number of votes wins
  Irrelevant alternatives can change the outcome: the Ross Perot factor

Borda voting: Agents’ rankings are used as weights, which are summed across all agents
  Agents can “spend” high rankings on losing choices, making their remaining votes less influential

Binary voting: Agents rank sequential pairs of choices (“elimination voting”)
  Irrelevant alternatives can still change the outcome
  Very order-dependent
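A small Python sketch comparing plurality and Borda voting on a hypothetical preference profile (the candidates, vote counts, and rankings are invented); it also shows an “irrelevant” third candidate flipping the plurality winner:

```python
from collections import Counter

# Hypothetical preference profile: (number of voters, ranking from best to worst).
profile = [(40, ["A", "B", "C"]),
           (35, ["B", "C", "A"]),
           (25, ["C", "B", "A"])]

def plurality(profile):
    """Winner is the candidate with the most first-place votes."""
    tally = Counter()
    for count, ranking in profile:
        tally[ranking[0]] += count
    return tally.most_common(1)[0][0]

def borda(profile):
    """Each position in a ranking earns points (n-1 for first, 0 for last), summed over voters."""
    scores = Counter()
    for count, ranking in profile:
        n = len(ranking)
        for pos, cand in enumerate(ranking):
            scores[cand] += count * (n - 1 - pos)
    return scores.most_common(1)[0][0]

print(plurality(profile))   # 'A'
print(borda(profile))       # 'B'

# The "Ross Perot factor": dropping candidate C flips the plurality winner to B.
no_c = [(count, [c for c in ranking if c != "C"]) for count, ranking in profile]
print(plurality(no_c))      # 'B'
```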

Page 31: Multi-Agent Systems: Overview and Research Directions

Auctions

Many different types and protocols
All of the common protocols yield Pareto-optimal outcomes
But… bidders can agree to artificially lower prices in order to cheat the auctioneer
  What about when the colluders cheat each other?
  (Now that’s really not playing nicely in the sandbox!)
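As a concrete illustration of the collusion problem, here is a minimal sketch of a sealed-bid second-price (Vickrey) auction in Python; the bidder names and valuations are hypothetical:

```python
def vickrey(bids):
    """Second-price sealed-bid auction: highest bidder wins and pays the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1]
    return winner, price

honest = {"alice": 90, "bob": 70, "carol": 40}
print(vickrey(honest))      # ('alice', 70): the auctioneer receives 70

# If alice and bob collude and bob bids artificially low, the winning price falls.
colluding = {"alice": 90, "bob": 10, "carol": 40}
print(vickrey(colluding))   # ('alice', 40): the auctioneer receives only 40
```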

Page 32: Multi-Agent Systems: Overview and Research Directions

Contract nets

Simple form of negotiation
Announce tasks, receive bids, award contracts
Many variations: directed contracts, timeouts, bundling of contracts, sharing of contracts, …
There are also more sophisticated dialogue-based negotiation models
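A toy sketch of one contract-net round in Python (announce, bid, award); the agent names, cost functions, and task are invented for illustration, and real variants add the timeouts, directed contracts, and other features listed above:

```python
class Contractor:
    """A contractor agent that bids a cost for a task (or declines by returning None)."""
    def __init__(self, name, cost_fn):
        self.name = name
        self.cost_fn = cost_fn

    def bid(self, task):
        return self.cost_fn(task)

def contract_net_round(task, contractors):
    # 1. Announce the task, 2. collect bids, 3. award the contract to the best (cheapest) bidder.
    bids = {c.name: c.bid(task) for c in contractors}
    bids = {name: b for name, b in bids.items() if b is not None}
    winner = min(bids, key=bids.get)
    return winner, bids[winner]

contractors = [Contractor("truck1", lambda t: 12.0),
               Contractor("truck2", lambda t: 8.5),
               Contractor("truck3", lambda t: None)]   # declines to bid
print(contract_net_round("deliver parcel", contractors))   # ('truck2', 8.5)
```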

Page 33: Multi-Agent Systems: Overview and Research Directions

MAS Research Directions

Page 34: Multi-Agent Systems: Overview and Research Directions

Agent organizations

“Large-scale problem solving technologies”
Multiple (human and/or artificial) agents
Goal-directed (goals may be dynamic and/or conflicting)
Affects and is affected by the environment
Has knowledge, culture, memories, history, and capabilities (distinct from individual agents)
Legal standing is distinct from that of a single agent

Q: How are MAS organizations different from human organizations?

Page 35: Multi-Agent Systems: Overview and Research Directions

Organizational structures

Exploit structure of task decomposition
Establish “channels of communication” among agents working on related subtasks

Organizational structure:
  Defines (or describes) roles, responsibilities, and preferences
  Used to identify control and communication patterns:
    Who does what for whom: Where to send which task announcements/allocations
    Who needs to know what: Where to send which partial or complete results

Page 36: Multi-Agent Systems: Overview and Research Directions

Communication models

Theoretical models: Speech act theory

Practical models:
  Shared languages like KIF, KQML, DAML
  Service models like DAML-S
  Social convention protocols

Page 37: Multi-Agent Systems: Overview and Research Directions

Communication strategies

Send only relevant results at the right time
Conserve bandwidth and reduce network congestion and the computational overhead of processing data
Push vs. pull
Reliability of communication (arrival and latency of messages)
Use organizational structures, task decomposition, and/or analysis of each agent’s task to determine relevance

Page 38: Multi-Agent Systems: Overview and Research Directions

Communication structures

Connectivity (network topology) strongly influences the effectiveness of an organization

Changes in connectivity over time can impact team performance:
  Moving out of communication range → coordination failures
  Changes in network structure → reduced (or increased) bandwidth, increased (or reduced) latency

Page 39: Multi-Agent Systems: Overview and Research Directions

Learning in MAS

Emerging field that investigates how teams of agents can learn, individually and as groups

Distributed reinforcement learning: Behave as an individual, receive team feedback, and learn to individually contribute to team performance

Distributed reinforcement learning: Iteratively allocate “credit” for group performance to individual decisions

Genetic algorithms: Evolve a society of agents (survival of the fittest)

Strategy learning: In market environments, learn other agents’ strategies
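A toy Python sketch of the first idea above (agents act individually but learn from a single team-level reward); the task, reward function, and learning rule are illustrative assumptions, not from the slides:

```python
import random

# Three independent learners, each keeping its own action-value estimates.
ACTIONS = [0, 1]
agents = [{a: 0.0 for a in ACTIONS} for _ in range(3)]
alpha, epsilon = 0.1, 0.2

def team_reward(joint_action):
    # Hypothetical coordination task: the team is rewarded only if every agent picks action 1.
    return 1.0 if all(a == 1 for a in joint_action) else 0.0

for _ in range(5000):
    # Each agent acts epsilon-greedily on its own estimates.
    joint = [random.choice(ACTIONS) if random.random() < epsilon
             else max(q, key=q.get) for q in agents]
    r = team_reward(joint)                 # one global feedback signal for the whole team
    for q, a in zip(agents, joint):
        q[a] += alpha * (r - q[a])         # each agent updates only its own action value

print([max(q, key=q.get) for q in agents])  # typically [1, 1, 1]: individual learning, team payoff
```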

Page 40: Multi-Agent Systems: Overview and Research Directions

Adaptive organizational dynamics

Potential for change: Change parameters of the organization over time
  That is, change the structures, add/delete/move agents, …

Adaptation techniques:
  Genetic algorithms
  Neural networks
  Heuristic search / simulated annealing
  Design of new processes and procedures
  Adaptation of individual agents

Page 41: Multi-Agent Systems: Overview and Research Directions

Conclusions and directions

“Agent” means many different things

Different types of “multi-agent systems”:
  Cooperative vs. competitive
  Heterogeneous vs. homogeneous
  Micro vs. macro

Lots of interesting/open research directions:
  Effective cooperation strategies
  “Fair” coordination strategies and protocols
  Learning in MAS
  Resource-limited MAS (communication, …)