Distributed Constraint Optimization

* some slides courtesy of P. Modi http://www.cs.drexel.edu/~pmodi/

Outline

• DCOP and real-world problems

– DiMES

• Algorithms to solve DCOP

– Synchronous Branch and Bound

– ADOPT (distributed search)

– DPOP (dynamic programming)

• DCPOP

• Future work

Distributed Constraint Optimization Problem (DCOP)

• Definition:

– V = A set of variables

– Di = A domain of values for Vi

– U = A set of utility functions on V

• Goal is to optimize global utility

– can also model minimal-cost problems by using negative utilities

• One agent per variable

• Each agent knows Ui, the set of all utility functions that involve Vi (see the sketch below)
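To make the definition concrete, here is a minimal sketch of how a DCOP instance could be represented and evaluated. The variable names, domains, and utility functions are illustrative assumptions, not taken from the slides, and the brute-force search is only there to show what "global utility" means; a real solver would be distributed across agents.

```python
# Minimal DCOP sketch (illustrative only; not from the slides).
from itertools import product

variables = ["v1", "v2", "v3"]                        # V
domains = {"v1": [0, 1], "v2": [0, 1], "v3": [0, 1]}  # Di for each Vi

# U: binary utility functions, keyed by the pair of variables they involve.
utilities = {
    ("v1", "v2"): lambda a, b: 5 if a != b else 0,
    ("v2", "v3"): lambda a, b: 3 if a == b else 1,
}

def global_utility(assignment):
    """Sum of all utility functions under a complete assignment."""
    return sum(u(assignment[x], assignment[y]) for (x, y), u in utilities.items())

# Brute-force search for the assignment that maximizes global utility.
best = max(
    (dict(zip(variables, values)) for values in product(*(domains[v] for v in variables))),
    key=global_utility,
)
print(best, global_utility(best))
```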

DiMES

• Framework for capturing real-world domains involving joint activities

– {R1,...,RN} is a set of resources

– {E1,...,EK} is a set of events

– Some ∆ such that T·∆ = Tlatest – Tearliest, where T is a natural number

– Thus we can characterize the time domain as the set Ŧ = {1,...,T}

– An event, Ek, is then the tuple (Ak, Lk, Vk) where:

• Ak is the subset of resources required by the event

• Lk is the number of contiguous time slots for which the resources Ak are needed

• Vk denotes the value per time slot for the resources in Ak
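As a rough illustration of the time discretization and the event tuple, here is a small sketch; the specific times, slot length ∆, and resource names are assumptions made up for the example.

```python
# Illustrative sketch of the DiMES time discretization and event tuple
# (times, slot length, and resource names are assumptions, not from the slides).
from dataclasses import dataclass

T_earliest, T_latest = 800, 1200     # e.g. 8:00 and 12:00, expressed in minutes
delta = 50                           # slot length ∆ such that T·∆ = Tlatest - Tearliest
T = (T_latest - T_earliest) // delta
time_domain = list(range(1, T + 1))  # Ŧ = {1, ..., T}

@dataclass
class Event:
    resources: frozenset  # Ak: the subset of resources required by the event
    length: int           # Lk: number of contiguous time slots needed
    value: float          # Vk: value per time slot

e1 = Event(resources=frozenset({"R1", "R2"}), length=2, value=10.0)
print(T, time_domain, e1)
```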

DiMES (cont’)

• It was shown in [Maheswaran et al. 2004] that DiMES can be translated into DCOP

– Events are mapped to variables

– The domain for each event is the time slot at which that event will start

– Utility functions are somewhat complex but could be restricted to binary functions

• It was also shown that several resource allocation problems can be represented in DiMES (including distributed sensor networks)

Synchronous Branch and Bound

• Agents are prioritized into a chain (Hirayama97) or tree

• Root chooses a value and sends it to its children

• Children choose values, evaluate the partial solution, and send the partial solution (with cost) to their children

• When the cost exceeds the upper bound, backtrack

• An agent explores all of its values before reporting to its parent (see the sketch below)
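The following is a minimal, centralized simulation of the Synchronous Branch and Bound idea. The variables, cost tables, and the recursive structure are illustrative assumptions; the real algorithm passes the partial solution along the agent chain via messages rather than recursing in one process.

```python
# Centralized simulation of Synchronous Branch and Bound over a fixed ordering.
import math

variables = ["x1", "x2", "x3"]
domains = {v: [0, 1] for v in variables}
# Binary cost functions between constrained variables (illustrative values).
costs = {
    ("x1", "x2"): lambda a, b: 2 if a == b else 0,
    ("x1", "x3"): lambda a, b: 1 if a != b else 3,
    ("x2", "x3"): lambda a, b: 0 if a < b else 2,
}

def partial_cost(assignment):
    """Cost of all constraints whose variables are already assigned."""
    return sum(c(assignment[x], assignment[y])
               for (x, y), c in costs.items()
               if x in assignment and y in assignment)

best_cost, best_solution = math.inf, None

def syncbb(i, assignment):
    """Agent i tries each of its values; backtrack when the bound is exceeded."""
    global best_cost, best_solution
    if i == len(variables):
        best_cost, best_solution = partial_cost(assignment), dict(assignment)
        return
    v = variables[i]
    for value in domains[v]:
        assignment[v] = value
        if partial_cost(assignment) < best_cost:  # prune: cost already too high
            syncbb(i + 1, assignment)
        del assignment[v]

syncbb(0, {})
print(best_cost, best_solution)
```

Pruning happens exactly where the slide says: as soon as the cost of the partial solution reaches the best complete solution found so far, the branch is abandoned.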

SyncBB Example

Pseudotrees

• Solid line = parent/child relationship

• Dashed line = pseudo-parent/pseudo-child relationship

• Common structure used in search procedures to allow parallel processing of independent branches

• A node can only have constraints with nodes in the path to root or with descendants
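Since a depth-first traversal of the constraint graph yields a valid pseudotree (every remaining constraint connects a node to one of its ancestors), a simple construction might look like the following sketch; the constraint graph and function names are illustrative assumptions.

```python
# Sketch: building a pseudotree from a constraint graph by DFS (illustrative).
constraint_graph = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

def dfs_pseudotree(graph, root):
    """Return tree edges (parent/child) and back edges (pseudo-parent links)."""
    parent = {root: None}
    tree_edges, back_edges, seen = [], set(), set()

    def visit(node):
        seen.add(node)
        for neighbor in graph[node]:
            if neighbor not in seen:
                parent[neighbor] = node
                tree_edges.append((node, neighbor))
                visit(neighbor)
            elif neighbor != parent[node]:
                back_edges.add(frozenset((node, neighbor)))  # edge to an ancestor

    visit(root)
    return tree_edges, [tuple(e) for e in back_edges]

tree, back = dfs_pseudotree(constraint_graph, "A")
print("tree edges:", tree)
print("back edges (pseudo-parent links):", back)
```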

ADOPT

• SyncBB backtracks only when suboptimality is proven (the cost of the current partial solution exceeds the upper bound)

• ADOPT’s backtrack condition: when the lower bound gets too high

– backtrack before suboptimality is proven

– solutions may need to be revisited

• Agents are ordered in a pseudotree

• Agents concurrently choose values

– VALUE messages sent down

– COST messages sent up, only to the parent

– THRESHOLD messages sent down, only to children (see the message sketch below)
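As a rough picture of the three message types, here is a sketch of how they might be represented in code; the field names and types are simplified assumptions rather than the algorithm's actual message format.

```python
# Simplified sketch of ADOPT's three message types (fields are illustrative;
# the real algorithm carries more context than shown here).
from dataclasses import dataclass

@dataclass
class ValueMsg:      # sent down the pseudotree to children and pseudo-children
    sender: str
    value: object

@dataclass
class CostMsg:       # sent up, only to the parent
    sender: str
    context: dict    # the partial assignment this bound is valid for
    lower_bound: float
    upper_bound: float

@dataclass
class ThresholdMsg:  # sent down, only to children
    sender: str
    threshold: float

msg = CostMsg(sender="x3", context={"x1": "white"}, lower_bound=2.0, upper_bound=5.0)
print(msg)
```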

ADOPT Example

• Suppose the parent has two values, “white” and “black”

DPOP

• Three-phase algorithm:

1. Pseudotree generation

2. Utility message propagation bottom-up

3. Optimal value assignments top-down

DPOP: Phase 1

DPOP: Phase 2

• Propagation starts at leaves, goes up to root

• Each agent waits for UTIL messages from its children

– does a JOIN

– sends a UTIL message to its parent

• How many total messages in this phase?

DPOP: Phase 2 (cont’)

• UTIL message = maximum utility for all value combinations of parent/pseudo-parents

– includes the maximum utility values for all children (see the sketch below)
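Here is a tiny, self-contained sketch of the JOIN and projection steps on an assumed pseudotree with a root P and two leaf children X and Y; the utility tables and values are invented for illustration.

```python
# Sketch of DPOP Phase 2 on a tiny pseudotree: root P, leaf children X and Y
# (utility tables and names are illustrative assumptions).
domain = {"P": ["a", "b"], "X": ["a", "b"], "Y": ["a", "b"]}

# Binary utility tables, indexed as table[child_value][parent_value].
u_xp = {"a": {"a": 2, "b": 0}, "b": {"a": 1, "b": 4}}
u_yp = {"a": {"a": 1, "b": 3}, "b": {"a": 3, "b": 0}}

def util_message(child, table, parent):
    """The child maximizes its own variable out: one entry per parent value."""
    return {p: max(table[c][p] for c in domain[child]) for p in domain[parent]}

util_x = util_message("X", u_xp, "P")  # UTIL message sent from X to P
util_y = util_message("Y", u_yp, "P")  # UTIL message sent from Y to P

# JOIN at the root: combine the children's tables entry-wise.
joined = {p: util_x[p] + util_y[p] for p in domain["P"]}
print(util_x, util_y, joined)          # {'a': 2, 'b': 4} {'a': 3, 'b': 3} {'a': 5, 'b': 7}
```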

DPOP: Phase 3

• Value Propagation

– After Phase 2, the root has a summary view of the global UTIL information

– Root can then pick the value for itself that gives the best global utility

– This value is sent to all children

– Given the root’s value, children can now choose their own values that optimize the global utility (see the sketch below)

– This process continues until all nodes are assigned a value
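Continuing the Phase 2 example above, the top-down value assignment might look like the following self-contained sketch; the joined UTIL table and the utility tables repeat the assumed numbers from before.

```python
# Sketch of DPOP Phase 3 (value propagation) on the same tiny tree as in the
# Phase 2 sketch; tables and the joined summary are repeated so this runs alone.
domain = {"P": ["a", "b"], "X": ["a", "b"], "Y": ["a", "b"]}
u_xp = {"a": {"a": 2, "b": 0}, "b": {"a": 1, "b": 4}}
u_yp = {"a": {"a": 1, "b": 3}, "b": {"a": 3, "b": 0}}
joined = {"a": 5, "b": 7}  # root's summary of global utility from Phase 2

best_p = max(domain["P"], key=lambda p: joined[p])        # root picks its value ('b')
best_x = max(domain["X"], key=lambda x: u_xp[x][best_p])  # X best-responds to the VALUE message
best_y = max(domain["Y"], key=lambda y: u_yp[y][best_p])  # Y does the same
print({"P": best_p, "X": best_x, "Y": best_y})            # {'P': 'b', 'X': 'b', 'Y': 'a'}
```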

DCOP Algorithm Summary

• ADOPT

– distributed search

– linear size messages

– worst-case exponential number of messages

• with respect to the depth of the pseudotree

• DPOP

– dynamic programming

– worst-case exponential message size

• with respect to the induced width of the pseudotree

– linear number of messages

Can we do better?

• Are pseudotrees the most efficient translation?

– The minimum induced width pseudotree is currently the most efficient known translation

– Finding it is NP-hard and may require global information

• Heuristics are used to produce the pseudotrees

– Current distributed heuristics are all based on some form of DFS or BestFS

– We prove in a recent paper that pseudotrees produced by these heuristics are suboptimal

Cross-Edged Pseudotrees

• Pseudotrees that include edges between nodes in separate branches

• The dashed line is a cross-edge

• This relaxed form of a pseudotree can produce shorter trees, as well as less overlap between constraints

DCPOP

• Our extension to DPOP that correctly handles Cross-Edged Pseudotrees

• We have proved that, using an edge-traversal heuristic (DFS, BestFS), it is impossible to produce a traditional pseudotree that outperforms a well-chosen cross-edged pseudotree

• Edge-traversal heuristics are popular because they are easily done in a distributed fashion and require no sharing of global information

DCPOP (cont’)

• Computation size is closer to the minimum induced width than with DPOP

• Message size can actually be smaller than the minimum induced width

• A new metric, sequential path cost (which reflects the maximal amount of parallelism achieved), also shows improvement

DCPOP Metrics

Future Work

• DCOP mapping for a TAEMS-based task/resource allocation problem

• Full integration of uncertainty characteristics into the DCOP model

• Anytime adaptation with uncertainty