Tomography-based Overlay Network Monitoring
Yan Chen, David Bindel, and Randy H. Katz
UC Berkeley
Motivation
• Applications of end-to-end distance monitoring
  – Overlay routing/location
  – Peer-to-peer systems
  – VPN management/provisioning
  – Service redirection/placement
  – Cache-infrastructure configuration
• Requirements for an E2E monitoring system
  – Scalable & efficient: small amount of probing traffic
  – Accurate: captures congestion/failures
  – Incrementally deployable
  – Easy to use
Existing Work
• Static estimation
  – Global Network Positioning (GNP)
• Dynamic monitoring
  – Loss rates: RON (O(n²) measurements)
  – Latency: IDMaps, Dynamic Distance Maps, Isobar
• Latency similarity under normal conditions doesn't imply similar loss rates!
• Network tomography
  – Focuses on inferring the characteristics of physical links rather than E2E paths
  – Limited measurements -> under-constrained system, unidentifiable links
Problem Formulation
Given n end hosts on an overlay network and O(n²) paths, how do we select a minimal subset of paths to monitor so that the loss rates/latencies of all other paths can be inferred?
• Key idea: select a basis set of k paths that completely describes all O(n²) paths (k ≪ O(n²))
  – Select and monitor k linearly independent paths to compute the loss rates of the basis set
  – Infer the loss rates of all other paths
  – Applicable to any additive metric, such as latency
[Figure: end hosts report topology and measurements to the Overlay Network Operation Center.]
Modeling of Path Space
Path loss rate p, link loss rate l. For path 1, which traverses links 1 and 2:

1 − p₁ = (1 − l₁)(1 − l₂)
log(1 − p₁) = log(1 − l₁) + log(1 − l₂) = (1 1 0) · (log(1 − l₁), log(1 − l₂), log(1 − l₃))ᵀ

Put all r = O(n²) paths together (s links in total):

G x = b, where G ∈ {0,1}^{r×s} is the path matrix (G_ij = 1 iff path i traverses link j), x is the s×1 link loss-rate vector with x_j = log(1 − l_j), and b is the r×1 path loss-rate vector with b_i = log(1 − p_i).
[Figure: sample overlay network with end hosts A, B, C, D and physical links 1, 2, 3; path p₁ traverses links 1 and 2, so b₁ = (1 1 0) · x.]
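A minimal numpy sketch of this log-transformed model for path p₁ above (illustration only; the link loss rates are made-up values):

# Sketch of the linear model b = Gx in the log domain (not the authors' code).
import numpy as np

link_loss = np.array([0.01, 0.05, 0.002])   # hypothetical loss rates l1, l2, l3
x = np.log(1.0 - link_loss)                  # x_j = log(1 - l_j)

# Path p1 traverses links 1 and 2, so its row in G is (1, 1, 0).
G_row = np.array([1.0, 1.0, 0.0])
b1 = G_row @ x                               # b_1 = log(1 - p_1)

p1 = 1.0 - np.exp(b1)
assert np.isclose(p1, 1.0 - (1.0 - link_loss[0]) * (1.0 - link_loss[1]))
print(p1)                                    # ~0.0595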
Sample Path Matrix
• x₁ − x₂ is unknown => cannot compute x₁, x₂ individually
• The set of vectors α (1, −1, 0)ᵀ forms the null space of G
• To separate identifiable from unidentifiable components: x = x_G + x_N
• All E2E path vectors lie in the path (row) space, i.e., G x_N = 0
G = [1 1 0; 0 0 1; 1 1 1], so G (x₁, x₂, x₃)ᵀ = (b₁, b₂, b₃)ᵀ and rank(G) = 2

b = G x = G (x_G + x_N) = G x_G + G x_N = G x_G

x_G = (b₁/2, b₁/2, b₂)ᵀ (identifiable), x_N = ((x₁ − x₂)/2, (x₂ − x₁)/2, 0)ᵀ (unidentifiable)
[Figure: the same topology (end hosts A, B, C, D over links 1, 2, 3) with paths b₁, b₂, b₃, and the geometric view of link space: the row space (measured), containing (1, 1, 0), versus the null space (unmeasured), containing (1, −1, 0).]
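For concreteness, a small numpy sketch of the x = x_G + x_N decomposition for this example G (link loss values assumed for illustration); it checks that the measurements only ever see x_G:

# Sketch of splitting x into identifiable and unidentifiable parts.
import numpy as np

G = np.array([[1, 1, 0],      # b1: links 1 and 2
              [0, 0, 1],      # b2: link 3
              [1, 1, 1]],     # b3: links 1, 2 and 3
             dtype=float)     # rank(G) = 2

x = np.log(1.0 - np.array([0.01, 0.05, 0.002]))   # true (unknown) link vector

# x_G is the orthogonal projection of x onto the row space of G;
# pinv(G) @ G is exactly that projector.
x_G = np.linalg.pinv(G) @ (G @ x)
x_N = x - x_G                                     # lies in the null space of G

assert np.allclose(G @ x_N, 0)                    # the null-space part is invisible
assert np.allclose(G @ x, G @ x_G)                # b = G x = G x_G
print(x_G, x_N)                                   # x_N is a multiple of (1, -1, 0)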
Intuition through Topology Virtualization
• Virtual links: minimal path segments whose loss rates are uniquely identifiable
• They can fully describe all paths
• x_G has a similar form to the virtual-link loss rates
[Figure: virtualization examples. Left of each: real links (solid) and all of the overlay paths (dotted) traversing them; right: the resulting virtual links 1′, 2′, … and the path matrix G, with rank(G) = 1, 2, and 3 in the three examples.]
Algorithms
• Select k = rank(G) linearly independent paths to monitor
  – Use a rank-revealing decomposition, e.g., QR with column pivoting
  – Leverage the sparse matrix: O(rk²) time and O(k²) memory
  – E.g., 10 minutes for n = 350 (r = 61,075 and k = 2,958)
• Compute the loss rates of the other paths
  – O(k²) time and O(k²) memory
  – Solve Ḡ x_G = b̄ for x_G, with Ḡ ∈ {0,1}^{k×s} the matrix of monitored paths and b̄ their k×1 vector of measured loss rates; then b = G x_G, with G ∈ {0,1}^{r×s} and b the r×1 loss-rate vector of all paths
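A compact numpy/scipy sketch of the two steps above, on the small example topology from the earlier slides (an illustration with dense linear algebra and assumed loss rates; the actual system uses sparse rank-revealing QR to reach the quoted O(rk²) time):

# Sketch of basis-path selection and loss-rate inference; illustration only.
import numpy as np
from scipy.linalg import qr, lstsq

def select_basis_paths(G):
    """Indices of k = rank(G) linearly independent rows (paths) of G."""
    # QR with column pivoting on G^T reveals which rows of G are independent.
    _, R, piv = qr(G.T, pivoting=True)
    tol = max(G.shape) * np.finfo(float).eps * abs(R[0, 0])
    k = int(np.sum(np.abs(np.diag(R)) > tol))     # numerical rank of G
    return piv[:k]

def infer_all_paths(G, basis_idx, measured_loss):
    """Infer loss rates of all r paths from the k measured basis paths."""
    b_bar = np.log(1.0 - measured_loss)           # b_i = log(1 - p_i)
    x_G, *_ = lstsq(G[basis_idx], b_bar)          # solve G_bar x_G = b_bar
    return 1.0 - np.exp(G @ x_G)                  # b = G x_G for every path

# Example: path 1 -> links 1,2; path 2 -> link 3; path 3 -> links 1,2,3.
G = np.array([[1, 1, 0], [0, 0, 1], [1, 1, 1]], dtype=float)
link_loss = np.array([0.01, 0.05, 0.002])         # hypothetical link loss rates
true_path_loss = 1.0 - np.prod((1.0 - link_loss) ** G, axis=1)

basis = select_basis_paths(G)                      # k = rank(G) = 2 paths
print(infer_all_paths(G, basis, true_path_loss[basis]))  # ~= true_path_loss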
How much measurement is saved?
Is k ≪ O(n²)?
For a power-law Internet topology:
• When the majority of end hosts are on the overlay, the overlay network has O(n) IP links, so k = O(n)
• When only a small portion of end hosts are on the overlay:
  – If the Internet were a pure hierarchical structure (a tree): k = O(n)
  – If the Internet had no hierarchy at all (worst case, a clique): k = O(n²)
  – The Internet has a moderate hierarchical structure [TGJ+02]: k = O(n) (with proof)
• For reasonably large n (e.g., 100), k = O(n log n)
Linear Regression Tests of the Hypothesis
• BRITE router-level topologies: Barabási-Albert, Waxman, and hierarchical models
• Mercator real topology
• Most fit best with k = O(n), except the hierarchical ones, which fit best with k = O(n log n)
[Figures: BRITE 20K-node hierarchical topology; Mercator 284K-node real router topology]
Practical Issues
• Tolerance of topology measurement errors
  – We care about path loss rates rather than the loss rate of any interior link
  – Poor router alias resolution => similar loss rates get assigned to what is really the same link
  – Unidentifiable routers => add virtual links to bypass them
• Measurement load balancing on end hosts
  – Randomly order the paths for the scan and the selection of Ḡ
Topology Changes
• Basic building block: add/remove one path
  – Incremental changes: O(k²) time (vs. O(n²k²) for a full re-scan)
  – Add path: check its linear dependency against the old basis set Ḡ
  – Delete path p: hard when p is in the basis Ḡ
    The essential info described by p: a vector y such that (Ḡ \ p) y = 0 but p · y ≠ 0; if such a y exists, add to Ḡ \ p any path x with x · y ≠ 0
• Add/remove end hosts, routing changes
• Topology is relatively stable on the order of a day => incremental detection
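A minimal sketch of the add-one-path building block (a dense least-squares dependency check for illustration; the actual system presumably updates its QR factorization of Ḡ incrementally):

# Sketch: a new path joins the basis only if it is linearly independent
# of the rows already there.
import numpy as np

def add_path(G_bar, new_row, tol=1e-10):
    """Return (G_bar, added) after trying to add one path row."""
    if G_bar is None or G_bar.size == 0:
        return np.atleast_2d(new_row).astype(float), True
    # Project new_row onto the row space of G_bar; a nonzero residual
    # means the path carries new information.
    coeffs, *_ = np.linalg.lstsq(G_bar.T, new_row, rcond=None)
    residual = new_row - G_bar.T @ coeffs
    if np.linalg.norm(residual) > tol:
        return np.vstack([G_bar, new_row]), True
    return G_bar, False

# Example with the 3-link topology: the third path (links 1,2,3) is dependent
# on the first two, so it is not added to the basis.
G_bar, _ = add_path(None, np.array([1.0, 1.0, 0.0]))
G_bar, _ = add_path(G_bar, np.array([0.0, 0.0, 1.0]))
G_bar, added = add_path(G_bar, np.array([1.0, 1.0, 1.0]))
print(G_bar.shape[0], added)    # 2 False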
Evaluation
• Simulation
  – Topology
    · BRITE: Barabási-Albert, Waxman, hierarchical; 1K – 20K nodes
    · Real topology from Mercator: 284K nodes
  – Fraction of end hosts on the overlay: 1 – 10%
  – Loss rate distribution (90% of links are good)
    · Good link: 0 – 1% loss rate; bad link: 5 – 10% loss rate
    · Good link: 0 – 1% loss rate; bad link: 1 – 100% loss rate
  – Loss model (see the sketch after this list)
    · Bernoulli: independent packet drops
    · Gilbert: bursty packet drops
  – Path loss rate simulated via transmission of 10K packets
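A small sketch of the two loss models (the Gilbert transition probabilities below are assumed for illustration; the slides only specify "bursty drops"):

# Sketch of the Bernoulli and Gilbert loss models used in the simulation.
import numpy as np

rng = np.random.default_rng(0)

def bernoulli_losses(loss_rate, n_pkts=10_000):
    """Each packet is dropped independently with probability loss_rate."""
    return rng.random(n_pkts) < loss_rate

def gilbert_losses(p_good_to_bad, p_bad_to_good, n_pkts=10_000):
    """Two-state Markov (Gilbert) model: packets are dropped in bursts
    while the link sits in the 'bad' state."""
    drops = np.empty(n_pkts, dtype=bool)
    bad = False
    for i in range(n_pkts):
        bad = rng.random() < (1 - p_bad_to_good if bad else p_good_to_bad)
        drops[i] = bad
    return drops

# Simulated path loss rates over 10K packets, as in the evaluation setup.
print(bernoulli_losses(0.03).mean(), gilbert_losses(0.01, 0.3).mean())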
Experiments on PlanetLab
Host distribution by area and domain (51 hosts total):
  US (40): .edu 33, .org 3, .net 2, .gov 1, .us 1
  International (11):
    Europe (6): France 1, Sweden 1, Denmark 1, Germany 1, UK 2
    Asia (2): Taiwan 1, Hong Kong 1
    Canada 2, Australia 1
• 51 hosts, each from a different organization
  – 51 × 50 = 2,550 paths
• Simultaneous loss rate measurement
  – 300 trials, 300 msec each
  – In each trial, send a 40-byte UDP packet to every other host
• Simultaneous topology measurement
  – Traceroute
• Experiments: 6/24 – 6/27
  – 100 experiments in peak hours
PlanetLab Experiment Results
• Loss rate distribution
    [0, 0.05): 95.9% of paths; lossy paths [0.05, 1.0]: 4.1%
    Among lossy paths: [0.05, 0.1): 15.2%, [0.1, 0.3): 31.0%, [0.3, 0.5): 23.9%, [0.5, 1.0): 4.3%, loss rate 1.0: 25.6%
• Metrics
  – Absolute error |p − p′|: average 0.0027 for all paths, 0.0058 for lossy paths
  – Relative error [BDPT02]: F(p, p′) = max(p̂/p̂′, p̂′/p̂), where p̂ = max(p, ε) and p̂′ = max(p′, ε) for a small error threshold ε
  – Lossy path inference: coverage and false positive ratio
• On average k = 872 out of 2,550 paths monitored
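For clarity, a short sketch of the two error metrics as written above (ε is an assumed small threshold, not a value given in the slides):

# Sketch of the accuracy metrics used in the evaluation.
import numpy as np

def absolute_error(p, p_hat):
    return np.abs(p - p_hat)

def relative_error(p, p_hat, eps=1e-3):
    # F(p, p') = max(p~/p'~, p'~/p~) with p~ = max(p, eps)  [BDPT02]
    p_c = np.maximum(p, eps)
    p_hat_c = np.maximum(p_hat, eps)
    return np.maximum(p_c / p_hat_c, p_hat_c / p_c)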
Accuracy Results for One Experiment
• 95% of absolute errors < 0.0014
• 95% of relative errors < 2.1
Accuracy Results for All Experiments
• For each experiment, compute its 95th-percentile absolute and relative errors
• Most have absolute error < 0.0135 and relative error < 2.0
Lossy Path Inference Accuracy
• 90 out of 100 runs have coverage over 85% and a false positive ratio below 10%
• Many of the errors are caused by boundary effects of the 5% lossy-path threshold
Topology/Dynamics Issues
• Out of 13 sets of pair-wise traceroutes …
• On average, 248 out of 2,550 paths have no or incomplete routing information
• No router aliases were resolved
Conclusion: robust against topology measurement errors
• Simulations of adding/removing end hosts and of routing changes also give good results
Conclusions
• A tomography-based overlay network monitoring system
  – Given n end hosts, characterize the O(n²) paths with a basis set of O(n log n) paths
  – Selectively monitor the basis set for its loss rates, then infer the loss rates of all other paths
  – Tolerant of topology measurement errors
  – Adaptive to topology changes
• Both simulation and PlanetLab experiments show promising results
• Built an adaptive overlay streaming media system on top of it
Work in Progress
• Provide it as a continuous service on PlanetLab
• Network diagnostics: which links or path segments are down
• Iterative methods for better speed and scalability
Backup Slides
Sensitivity Test of Sending Frequency
• The number of lossy paths jumps sharply once the sending rate exceeds 12.8 Mbps
Performance Improvement with Overlay
• With single-node relay
• Loss rate improvement
  – Among 10,980 lossy paths:
    · 5,705 paths (52.0%) have their loss rate reduced by 0.05 or more
    · 3,084 paths (28.1%) change from lossy to non-lossy
• Throughput improvement
  – Estimated with: throughput ≈ √1.5 / (rtt · √lossrate)
  – 60,320 paths (24%) have non-zero loss rate, so throughput is computable
  – Among them, 32,939 paths (54.6%) have improved throughput, and 13,734 paths (22.8%) have throughput doubled or more
• Implication: use overlay paths to bypass congestion or failures
[Figure: adaptive overlay streaming scenario across UC Berkeley, UC San Diego, Stanford, and HP Labs. 1. Client sets up a connection to the server. 2. Register a trigger with the Overlay Network Operation Center. 3. Network congestion/failure occurs. 4. The center detects the congestion/failure. 5. Alert + new overlay path. 6. Set up the new path through an overlay relay node. 7. Skip-free streaming media recovery.]
Adaptive Overlay Streaming Media
• Implemented with a Winamp client and a SHOUTcast server
• Congestion introduced with a Packet Shaper
• Skip-free playback: server buffering and rewinding
• Total adaptation time < 4 seconds
Adaptive Streaming Media Architecture
[Figure: adaptive streaming media architecture. A SHOUTcast server (media source) feeds a buffering layer that tracks a byte counter and the calculated concatenation point for clients 1–4. Traffic flows over the Internet through an overlay relay node (overlay layer, path management, TCP/IP layer) to the Winamp client (Winamp video/audio filter, byte counter, overlay layer, TCP/IP layer). The Overlay Network Operation Center sends triggering/alert + new-path messages.]
Conclusions
• A tomography-based overlay network monitoring system
  – Given n end hosts, characterize the O(n²) paths with a basis set of O(n log n) paths
  – Selectively monitor the O(n log n) basis paths to compute their loss rates, then infer the loss rates of all other paths
• Both simulation and real Internet experiments are promising
• Built an adaptive overlay streaming media system on top of the monitoring service
  – Bypasses congestion/failures for smooth playback within seconds