Advanced Network Experiments in Fed4FIRE
Posted on 12-Apr-2017
PUBLIC
ADVANCED NETWORK EXPERIMENTS ON FED4FIRE: PAST, PRESENT AND A LOOK INTO THE FUTURE
DIMITRI STAESSENS
SPARC – SPLIT ARCHITECTURE
SPARC OBJECTIVES: CARRIER GRADE SDN
| Requirements (study topics)                        | Problem & Solution Description | OF Extensions / Implementation | Prototype Integration | Validation / Performance Evaluation |
|----------------------------------------------------|--------------------------------|--------------------------------|-----------------------|-------------------------------------|
| Controller Architecture                            | Yes                            | Yes (namespace mgmt)           | Yes                   | Yes                                 |
| Network Management                                 | Yes                            | No                             | No                    | No                                  |
| Scalability                                        | Yes (numerical validation)     | N/A                            | N/A                   | Yes                                 |
| Openness & Extensibility                           | Yes                            | Yes                            | Yes                   | Yes                                 |
| Service Creation                                   | Yes                            | Yes                            | Yes                   | Yes                                 |
| Virtualization & Isolation                         | Yes                            | Yes                            | Yes                   | Yes                                 |
| Control Channel Bootstrapping & Topology Discovery | Yes                            | N/A                            | Yes                   | Yes                                 |
| OAM                                                | Yes                            | Yes                            | Yes                   | Yes                                 |
| Network Resiliency                                 | Yes                            | N/A                            | Yes                   | Yes                                 |
| Energy-Efficient Networking                        | Yes                            | Yes                            | No                    | No                                  |
| Quality of Service                                 | Yes                            | No                             | No                    | No                                  |
| Multilayer Aspects                                 | Yes                            | No                             | No                    | No                                  |
SCOPE
RESTORATION (figure sequence)
1. Modify flow entry
2. Add new flow entry
3. Delete old flow entry
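The figure sequence above suggests an ordering for restoration updates: entries for the new path are modified or added before entries for the old path are deleted, so packets are never blackholed mid-update. A minimal sketch of that ordering (a hypothetical helper, not the actual SPARC implementation):

```python
def restoration_ops(old_path, new_path):
    """Order flow-table operations for rerouting a flow from old_path to
    new_path: modify entries on shared switches, add entries on new
    switches, and only then delete entries on abandoned switches.
    Paths are lists of switch identifiers."""
    old = set(old_path)
    new = set(new_path)
    ops = []
    # 1. Modify entries on switches that stay on the path.
    ops += [("modify", sw) for sw in new_path if sw in old]
    # 2. Add entries on switches that join the path.
    ops += [("add", sw) for sw in new_path if sw not in old]
    # 3. Only then delete entries on switches that left the path.
    ops += [("delete", sw) for sw in old_path if sw not in new]
    return ops
```

Keeping the deletes last is what makes the update hitless: traffic can follow either path while both sets of entries exist.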
RESILIENCE EXPERIMENT
• 14 OpenFlow nodes (Open vSwitch)
• 14 hosts (not shown), not OpenFlow-aware
• 1 controller
  • separate control LAN
  • restoration application: shortest path, failure notification by switch
• 21 links (1 Gbps)
• 176 "flows": pktgen, UDP traffic, ~300 packets/s
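The restoration application reacts to a switch's failure notification by recomputing shortest paths for the flows that crossed the failed link. A self-contained sketch of that logic (illustrative topology and data structures, not the actual 14-node Virtual Wall setup):

```python
from collections import deque

def shortest_path(links, src, dst):
    """BFS shortest path (hop count) over a list of undirected links."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    prev = {src: None}
    frontier = deque([src])
    while frontier:
        n = frontier.popleft()
        if n == dst:
            path = []
            while n is not None:
                path.append(n)
                n = prev[n]
            return path[::-1]
        for m in adj.get(n, ()):
            if m not in prev:
                prev[m] = n
                frontier.append(m)
    return None

def on_port_status(links, failed_link, flows):
    """React to a port-status notification: drop the failed link and
    recompute paths for every flow that used it.
    `flows` maps (src, dst) -> current path (list of nodes)."""
    live = [l for l in links if set(l) != set(failed_link)]
    return {(s, d): shortest_path(live, s, d)
            for (s, d), path in flows.items()
            if any(set(failed_link) == {path[i], path[i + 1]}
                   for i in range(len(path) - 1))}
```

Only affected flows are recomputed, which keeps the number of flow modifications after a failure proportional to the flows on the failed link rather than to all 176 flows.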
FACILITY: VIRTUAL WALL
EXPERIMENT TIMING
Phases observed on the control channel:
• Connecting switches to the NOX controller ("DP join")
• Establishing flows ("packet-in")
• Normal operation ("echo req/rep")
• Failure ("port-status")
• Restored operation ("echo req/rep")
RESULTS: RESTORATION AND PROTECTION
[Figure: four panels — (A) Restoration Experiment, (B) Protection-Restoration Experiment, (C) Restoration-Protection Experiment, (D) Protection Experiment — each plotting traffic (packets/10 ms, 0–200) against experiment time (−0.4 s to +0.4 s around the failure), with curves for total traffic and for traffic from Berlin. Annotated recovery times: 150 ms, ~120 ms, ~65 ms, < 50 ms.]
CITYFLOW – QOS OVER SDN
CITYFLOW OBJECTIVE: QOS DIFFERENTIATION
[Figure: QoS differentiation across an OpenFlow multi-AS network (AS 65001, AS 65002, AS 65003). Each AS runs a VPS controller with scheduler control; endpoints attach at the edges. The VPS Control Plane Invocation API includes the following functions: Network Service Portfolio Invocation Controller; NSIS Signalling Driver (end-to-end control); IPsphere Driver (inter-AS configuration); RACF / CAC; Network Element Configuration Interface. Applications invoke services over a Business Logic Bus and an Invocation Bus (VPSS), bridging the public Internet and the Future Internet: right of way for high-priority traffic.]
LOW-LEVEL INSTALLATION OF QUEUES IN FORWARDING ENGINES
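With Open vSwitch forwarding engines, installing queues at this low level is typically done with `ovs-vsctl`, creating a `linux-htb` QoS record with per-queue min/max rates and attaching it to a port. A sketch that builds such a command line (the port name and rates are placeholders; consult the OVS documentation for the authoritative syntax):

```python
def ovs_qos_command(port, max_rate, queue_rates):
    """Build an ovs-vsctl invocation that attaches a linux-htb QoS with
    one queue per (min_rate, max_rate) pair to `port`.
    Rates are in bits per second."""
    qrefs = ",".join(f"{i}=@q{i}" for i in range(len(queue_rates)))
    cmd = ["ovs-vsctl", "set", "port", port, "qos=@newqos",
           "--", "--id=@newqos", "create", "qos", "type=linux-htb",
           f"other-config:max-rate={max_rate}", f"queues={qrefs}"]
    for i, (mn, mx) in enumerate(queue_rates):
        cmd += ["--", f"--id=@q{i}", "create", "queue",
                f"other-config:min-rate={mn}",
                f"other-config:max-rate={mx}"]
    return cmd
```

Flows are then steered into a specific queue with an OpenFlow enqueue (later `set-queue`) action, which is what makes the high-priority right of way enforceable per hop.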
OFELIA & CITYFLOW
[Figure: federated topology of five OFELIA islands acting as ASes — i2CAT, ETHZ, CreateNet, TUB and iMinds — each running Open vSwitch, a Floodlight controller and a VPS, interconnected with one another, with RedZinc attached over an ADSL link.]
CITY-SCALE NETWORK EMULATION
DIFFERENTIATED RECOVERY
[Figure: traffic (Mb/s, 0–35) against experiment time in seconds (−100 to 400) for a best-effort and a high-priority flow, with the failure and "failure repaired" events annotated.]
IRATI – CLEAN SLATE NETWORKING
OBJECTIVE: IMPLEMENT RINA POC
RECURSIVE INTERNET ARCHITECTURE
RINA: IRATI OS/LINUX IMPLEMENTATION
Source: S. Vrijders, F. Salvestrini, E. Grasa, M. Tarzan, L. Bergesio, D. Staessens, D. Colle, "Prototyping [RINA], the IRATI project approach", IEEE Network, March 2014.
TESTBEDS: OFELIA
VALIDATION OF ROUTING
VIRTUAL MACHINE NETWORKING
SHIM IPCP OVER HYPERVISOR
Implemented directly in the hypervisor (QEMU / Xen)
VALIDATION OF THE SHIM-HV
PERFORMANCE TEST
PRISTINE – CLEAN SLATE NETWORKING
OBJECTIVES: PROGRAMMABILITY OF RINA
DATNET USE CASE
DISTCLOUD USE CASE
PERFORMANCE ISOLATION IN DATACENTERS
• Custom congestion control in fat-tree topologies.
• Measurements of the performance of flows belonging to different tenants that compete for link bandwidth.
• Measurements of queue status during congestion events.
• Reaction of flows whose rate is reduced to their paid bandwidth, while still sharing any remaining capacity on the link.
• How performance changes under different multipath strategies.
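The reaction described above — squeezing flows back to their paid bandwidth while letting them share leftover capacity — can be sketched as a simple allocation rule (a hypothetical illustration of the policy, not the project's congestion-control algorithm):

```python
def allocate(link_capacity, paid):
    """Split link capacity among tenants: each tenant is guaranteed its
    paid rate (scaled down proportionally if the link is oversubscribed);
    any leftover capacity is shared equally.
    `paid` maps tenant name -> paid rate; rates and capacity share a unit."""
    total_paid = sum(paid.values())
    if total_paid >= link_capacity:
        # Oversubscribed link: scale the guarantees proportionally.
        scale = link_capacity / total_paid
        return {t: r * scale for t, r in paid.items()}
    # Under congestion below full subscription, each tenant keeps its
    # guarantee plus an equal share of the spare capacity.
    leftover = (link_capacity - total_paid) / len(paid)
    return {t: r + leftover for t, r in paid.items()}
```

In the experiments this kind of policy is what the per-queue measurements verify: during congestion no tenant drops below its paid rate, yet the link stays fully utilized.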
PRISTINE: VALIDATION EXPERIMENTS
• Authentication: password-based, asymmetric keys
• Encryption
• Explicit congestion avoidance
• Scalable routing
• Location-independent application names: mapping of application names to node addresses at multiple layers
ARCFIRE – LARGE SCALE RINA EXPERIMENTATION ON FED4FIRE+
SEAMLESS NODE RENUMBERING
Setting up the experiment took 3–4 days of tedious and error-prone work.
Each node changes its address randomly every 30–60 seconds.
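Renumbering can be seamless because applications are identified by location-independent names: an address change only updates the name-to-address mapping consulted at flow allocation, never the applications themselves. A toy sketch of that indirection (illustrative only, not the IRATI/ARCFIRE directory):

```python
class Directory:
    """Toy name-to-address directory: flows are requested by application
    name, so a node picking a new address only updates the mapping."""

    def __init__(self):
        self._addr = {}

    def renumber(self, node, new_address):
        # Called each time a node adopts a new (random) address.
        self._addr[node] = new_address

    def resolve(self, app_to_node, app_name):
        # Flow allocation resolves name -> node -> current address,
        # so callers never hold a stale address.
        return self._addr[app_to_node[app_name]]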
RUMBA FRAMEWORK
Python library for managing RINA experiments on Fed4FIRE.
Testbed plugins and prototype plugins will become available to all Fed4FIRE users.
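The point of the plugin split is that an experiment is described once, as nodes and links, and a testbed plugin then reserves matching resources on the chosen facility. An illustrative mock of that structure (NOT the actual Rumba API; class and method names here are invented for the sketch):

```python
# Illustrative mock of a plugin-based experiment framework; the real
# Rumba API differs.
class Node:
    def __init__(self, name):
        self.name = name

class Experiment:
    """Collect nodes and links, then hand them to a testbed plugin that
    knows how to reserve the resources on its facility."""

    def __init__(self, testbed):
        self.testbed = testbed
        self.nodes = []
        self.links = []

    def add_link(self, a, b):
        for n in (a, b):
            if n not in self.nodes:
                self.nodes.append(n)
        self.links.append((a, b))

    def swap_in(self):
        # Delegate reservation to the testbed plugin.
        return self.testbed.reserve(self.nodes, self.links)

class DummyTestbed:
    """Stand-in for a Fed4FIRE testbed plugin."""
    def reserve(self, nodes, links):
        return f"reserved {len(nodes)} nodes, {len(links)} links"
```

Swapping `DummyTestbed` for a real plugin changes where the experiment runs without touching the experiment description, which is what makes the 3–4 days of manual setup scriptable.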
CONCLUSIONS
FIRE testbeds fill a gap for Future Internet experiments that have one or more of the following requirements:
• Real-time operation
• Performance measurements at small timescales
• Implementations near the hardware
• Advanced OS modifications near the device-driver level
• Advanced architectural concepts
• Advanced virtualization concepts
• A scriptable interface