1
Concurrent Counting is harder than Queuing
Costas Busch, Rensselaer Polytechnic Institute
Srikanta Tirthapura, Iowa State University
2
Arbitrary graph
3
Distributed Counting
Some processors request a counter value
(figure: four nodes in the graph issue "count" requests)
4
Distributed Counting
Final state: (figure: the requesting nodes hold the distinct values 1, 2, 3, 4)
5
Distributed Queuing
Some processors perform enqueue operations
(figure: four nodes issue Enq(A), Enq(B), Enq(C), Enq(D))
6
Distributed Queuing
(figure: final state: head and tail pointers mark the queue A, B, C, D; each element stores its predecessor, e.g. A's previous = nil, B's previous = A)
7
Applications

Counting:
- parallel scientific applications
- load balancing (counting networks)

Queuing:
- distributed directories for mobile objects
- distributed mutual exclusion
8
Ordered Multicast

Multicast with the condition: all messages are received at all nodes in the same order.

Either Queuing or Counting will do. Which is more efficient?
9
Queuing vs Counting?
Both problems impose a total order on the requests.
Queuing = finding your predecessor; needs only local knowledge.
Counting = finding your rank; needs global knowledge.
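The predecessor-vs-rank contrast can be made concrete with a small sketch (the helper names here are illustrative, not from the talk): both problems fix the same total order over the requests, but queuing hands each request only its predecessor, while counting hands it its rank.

```python
def queuing_outputs(order):
    """Each request learns only its predecessor (local knowledge)."""
    return {v: (order[i - 1] if i > 0 else None) for i, v in enumerate(order)}

def counting_outputs(order):
    """Each request learns its rank, i.e. its position in the order (global knowledge)."""
    return {v: i + 1 for i, v in enumerate(order)}

order = ["A", "C", "B"]
print(queuing_outputs(order))   # {'A': None, 'C': 'A', 'B': 'C'}
print(counting_outputs(order))  # {'A': 1, 'C': 2, 'B': 3}
```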
10
Problem: Is there a formal sense in which Counting is a harder problem than Queuing?

Reductions don't seem to help.
11
Our Result: Concurrent Counting is harder than Concurrent Queuing on a variety of graphs, including many common interconnection topologies: complete graph, mesh, hypercube, perfect binary trees.
12
Model
Synchronous system G = (V, E); edges have unit delay.

Congestion: each node can process only one message in a single time step.

Concurrent one-shot scenario: a set R ⊆ V of nodes issue queuing (or counting) operations at time zero; no more operations are added later.
13
Cost Model

C_Q(v): the delay until v gets back its queuing result.

Cost of algorithm A on request set R:

    C_Q(A, R) = Σ_{v ∈ R} C_Q(v)

Queuing Complexity:

    min_A max_{R ⊆ V} C_Q(A, R)

Counting Complexity is defined similarly.
14
Lower Bounds on Counting
For arbitrary graphs:        Counting Cost = Ω(n log* n)

For graphs with diameter D:  Counting Cost = Ω(D²)
15
Theorem: For graphs with diameter D: Counting Cost = Ω(D²)

Proof: Consider some arbitrary algorithm for counting.
16
Take a shortest path of length D in the graph.
17
Make the nodes on this path count.
(figure: the path nodes receive the values 1, 2, 3, 4, 5, 6, 7, 8)
18
The node that receives count k decides after at least (k−1)/2 time steps: it needs to be aware of k−1 other processors, and on a path at least one of them is at distance (k−1)/2 or more.
19
Counting Cost ≥ Σ_{k=1}^{D} (k−1)/2 = Ω(D²)

End of Proof
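As a sanity check (a sketch, not part of the paper's proof), the sum above has the closed form D(D−1)/4, which is indeed Ω(D²):

```python
# Each node that receives value k needs at least (k - 1)/2 steps, so over a
# path of length D the total cost is sum_{k=1}^{D} (k - 1)/2 = D(D - 1)/4.
def counting_cost_lower_bound(D):
    return sum((k - 1) / 2 for k in range(1, D + 1))

for D in (10, 100, 1000):
    assert counting_cost_lower_bound(D) == D * (D - 1) / 4  # Omega(D^2)
```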
20
Theorem: For arbitrary graphs: Counting Cost = Ω(n log* n)

Proof: Consider some arbitrary algorithm for counting.
21
It suffices to prove it for a complete graph with n nodes: any algorithm on any graph with n nodes can be simulated on the complete graph.
22
The initial state affects the outcome.
(figure: initial state around node v; red nodes count, blue nodes don't)
23
Final state: (figure: the red (counting) nodes receive the values 1 through 5)
24
A different initial state: (figure: a different set of red (counting) nodes around v)
25
Final state: (figure: now eight red nodes count and receive the values 1 through 8)
26
Let A(v) be the set of nodes whose input may affect the decision of v.
27
Suppose that there is an initial state for which v decides k.

Then: |A(v)| ≥ k
28
(figure: two initial states that agree on A(v) but differ outside it give the same result for v)
29
If |A(v)| < k held, then v would decide less than k.

Thus, |A(v)| ≥ k.
30
Suppose that v decides at time t. We show:

    |A(v)| ≤ 2^2^···^2   (a tower of 2s of height 2t)
31
Suppose that v decides at time t. Since |A(v)| ≥ k while |A(v)| is at most a tower of 2s of height 2t:

    t = Ω(log* k)
32
Cost of node v: Ω(log* k), where k is the value it receives.

If n nodes wish to count:

    Counting Cost ≥ Σ_{k=1}^{n} Ω(log* k) = Ω(n log* n)
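The iterated logarithm log* grows extremely slowly; a minimal sketch (illustrative, base-2 convention assumed) of how the per-node Ω(log* k) costs add up:

```python
import math

def log_star(k):
    """Iterated logarithm: how many times log2 must be applied until the value is <= 1."""
    count = 0
    while k > 1:
        k = math.log2(k)
        count += 1
    return count

n = 1 << 16                 # n = 65536, so log*(n) = 4
total = sum(log_star(k) for k in range(1, n + 1))
# Sigma_{k=1..n} log*(k) is Theta(n log* n); each term is at most log*(n).
assert total <= n * log_star(n)
```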
33
A(v, t): the nodes that affect v up to time t.
B(v, t): the nodes that v affects up to time t.

    a(t) = max_x |A(x, t)|        b(t) = max_x |B(x, t)|
34
From time t to t + 1 the sets can only grow:

    A(v, t) ⊆ A(v, t+1)        B(v, t) ⊆ B(v, t+1)

and a(t) ≥ 1, b(t) ≥ 1.
35
(figure: A(v, t) contained in A(v, t+1))
36
Call a node z eligible to send a message to v at time t + 1 if there is an initial state in which z sends a message to v at that step; its set A(z, t) then joins A(v, t+1).
37
Suppose that A(s, t) ∩ A(z, t) ≠ ∅ for two nodes s, z eligible to send a message to v at time t + 1.

Then there is an initial state in which both send a message to v.
38
However, v can receive only one message at a time.
39
Therefore: A(s, t) ∩ A(z, t) = ∅ for distinct eligible senders s, z.
40
The number of nodes like s (eligible senders to v) is at most a(t) · b(t), where a(t) = max_x |A(x, t)| and b(t) = max_x |B(x, t)|.
41
Therefore (at most a(t) · b(t) eligible senders, each contributing at most a(t) nodes):

    |A(v, t+1)| ≤ |A(v, t)| + a(t) · a(t) · b(t)
42
    a(t+1) ≤ a(t) · (1 + a(t) · b(t))

We can also show:

    b(t+1) ≤ b(t) · (1 + 2 · a(t))

Together these give:

    a(t) ≤ 2^2^···^2   (a tower of 2s of height 2t)

End of Proof
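A quick numerical sketch of the recurrences (with assumed initial values a(1) = b(1) = 1): a(t) explodes, yet stays far below even a fixed-height tower of 2s for small t, which is what forces the Ω(log* k) decision time.

```python
def grow(steps):
    """Iterate a(t+1) = a(t)*(1 + a(t)*b(t)), b(t+1) = b(t)*(1 + 2*a(t))."""
    a, b = 1, 1  # assumed initial values
    for _ in range(steps):
        a, b = a * (1 + a * b), b * (1 + 2 * a)
    return a

def tower(height):
    """2^2^...^2 with 'height' twos."""
    x = 1
    for _ in range(height):
        x = 2 ** x
    return x

# For small t, a(t) remains below tower(5) = 2**65536; in general a(t) is
# bounded by a tower of height O(t), so |A(v)| >= k forces t = Omega(log* k).
for t in range(1, 8):
    assert grow(t) < tower(5)
```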
43
Upper Bound on Queuing
For graphs with spanning trees of constant degree:

    Queuing Cost = O(n log n)

For graphs whose spanning trees are lists or perfect binary trees:

    Queuing Cost = O(n)
44
An arbitrary graph
45
Spanning tree
47
Distributed Queue: A is enqueued first.

    head = tail = A, A's previous = nil

(figure: the tail pointer sits at A's node in the spanning tree)
48
B issues enq(B). The request travels through the tree toward the current tail A; until it arrives, B's previous = ?.
(animation: the enq(B) request moves hop by hop toward the tail)
53
The request reaches A. A informs B: B's previous = A.

    head = A, tail = B
54
Concurrent Enqueue Requests: B and C issue enq(B) and enq(C) at the same time; both requests head toward the current tail A, and both have previous = ? for now.
56
(animation: the two requests race through the tree toward A)
57
enq(C) reaches A first: C's previous = A, and the tail moves to C. The enq(B) request now chases the new tail.
59
enq(B) reaches C. C informs B: B's previous = C.

    head = A, tail = B; queue order A, C, B
61
(figure: final state as above)

The paths of the enqueue requests through the tree determine the queue order.
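The slides' walkthrough can be condensed into a minimal sequential sketch (not the talk's distributed protocol): a tail pointer names the last enqueued item; each request finds the current tail, records it as its predecessor, and becomes the new tail. In the distributed setting the request travels through the spanning tree to reach the tail; here only the pointer updates are modeled.

```python
def run_enqueues(requests):
    previous, tail = {}, None
    for r in requests:       # order in which the requests reach the tail
        previous[r] = tail   # the old tail is r's predecessor (None = nil)
        tail = r             # r becomes the new tail
    return previous, tail

# The scenario from the slides: A first, then enq(C) wins the race, then enq(B).
previous, tail = run_enqueues(["A", "C", "B"])
assert previous == {"A": None, "C": "A", "B": "C"} and tail == "B"
```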
62
Nearest-Neighbor TSP tour on the Spanning tree

(figure: requesting nodes A, B, C, D, E, F; the origin is the first element in the queue)
63
Visit the closest unused node in the tree.
(figure: the tour extends to the nearest unvisited requester)
65
Nearest-Neighbor TSP tour: (figure: the completed tour over A, B, C, D, E, F)
66
For a spanning tree of constant degree:

    Queuing Cost ≤ 2 × Nearest-Neighbor TSP length

[Herlihy, Tirthapura, Wattenhofer, PODC 2001]
67
If a weighted graph satisfies the triangle inequality:

    Nearest-Neighbor TSP length ≤ log n × Optimal TSP length

[Rosenkrantz, Stearns, Lewis, SICOMP 1977]
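To make the comparison tangible, here is a small sketch on an illustrative metric instance (positions on a line, not the distances from the slides): it runs the greedy nearest-neighbor tour and the brute-force optimum side by side.

```python
from itertools import permutations

points = [0, 2, 3, 7, 8, 12]   # positions on a line: a metric, so the
                               # triangle inequality holds automatically

def d(i, j):
    return abs(points[i] - points[j])

def nn_tour_length(start=0):
    """Greedy nearest-neighbor tour, closed back to the start."""
    unvisited, cur, length = set(range(len(points))) - {start}, start, 0
    while unvisited:
        nxt = min(unvisited, key=lambda v: d(cur, v))
        length += d(cur, nxt)
        unvisited.remove(nxt)
        cur = nxt
    return length + d(cur, start)

def optimal_tour_length():
    """Brute-force optimum over all closed tours (fine for 6 nodes)."""
    n = len(points)
    return min(sum(d(p[i], p[(i + 1) % n]) for i in range(n))
               for p in permutations(range(n)))

nn, opt = nn_tour_length(), optimal_tour_length()
assert opt <= nn   # nearest-neighbor is never shorter than the optimum
```

On this particular instance the two coincide; the log n gap of Rosenkrantz, Stearns and Lewis shows up only on adversarial metrics.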
68
(figure: the spanning tree on nodes A through F, and the corresponding weighted graph of pairwise distances between the requesting nodes)
69
The graph of distances satisfies the triangle inequality: for any triangle with edges e₁, e₂, e₃,

    w(e₁) ≤ w(e₂) + w(e₃)

(figure: e.g. the triangle on A, C, D with weights 2, 1, 1)
70
(figure: the Nearest-Neighbor TSP tour on the distance graph; Length = 8)
71
(figure: the Optimal TSP tour on the same graph; Length = 6)
72
It can be shown that:

    Optimal TSP length ≤ 2n    (n = number of nodes in the graph)

since a walk around the spanning tree visits every edge twice.
73
Therefore, for a constant-degree spanning tree:

    Queuing Cost = O(Nearest-Neighbor TSP)
                 = O(Optimal TSP × log n)
                 = O(n log n)
74
For special cases we can do better. If the spanning tree is a list or a balanced binary tree:

    Queuing Cost = O(n)
75
Graphs with a Hamiltonian path have spanning trees which are lists: complete graph, mesh, hypercube. For these:

    Queuing Cost = O(n)        Counting Cost = Ω(n log* n)
76
Theorem: If the spanning tree is a list, then Queuing Cost = O(n).
77
Proof: Queuing Cost = O(Nearest-Neighbor TSP length), so it suffices to bound the Nearest-Neighbor TSP tour on a list.
78
(figure: on the list, when the tour jumps from x over visited nodes to reach y, the hop lengths in each direction keep growing; successive hops in the same direction at least double)
79
Even sides: the hop length doubles each time, so over n nodes the total length of these hops is at most 2n.
80
Odd sides: likewise, total length at most 2n.
81
Even + Odd sides:

    Total length ≤ 2n + 2n = 4n = O(n)

End of proof
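The 4n bound can be checked empirically; a sketch on n unit-spaced list nodes (the tour here is a path, as in the queuing application, and the tie-breaking rule is an assumption of this sketch):

```python
def nn_path_length_on_list(n, start):
    """Nearest-neighbor path over list nodes 0..n-1, starting at 'start'."""
    unvisited, cur, length = set(range(n)) - {start}, start, 0
    while unvisited:
        nxt = min(unvisited, key=lambda v: (abs(v - cur), v))  # ties: lower index
        length += abs(nxt - cur)
        unvisited.remove(nxt)
        cur = nxt
    return length

# From every possible start, the tour length stays within the 4n bound.
for n in (2, 5, 17, 64, 200):
    for start in range(n):
        assert nn_path_length_on_list(n, start) <= 4 * n
```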