TRANSCRIPT
DOCSIS 3.1 & SDN: SDN Ground Truth Experiences
David Early, Ph.D., Applied Broadband, Inc.
SDN is easy, right?
SDN and PacketCable Multimedia
[Diagram: SDN layered architecture from PKT-SP-MM-I06-110629. The Application Layer (application support, orchestration) sits above the SDN Control Layer (abstraction, control support), which sits above the Resource Layer (data transport & processing); adjacent layers meet at the application-control and resource-control interfaces.]
OpenDaylight (ODL): Starting Point
OpenDaylight is a collaborative, open source project to advance Software-Defined Networking (SDN). OpenDaylight is a community-led, open, industry-supported framework, consisting of code and blueprints, for accelerating adoption, fostering new innovation, reducing risk and creating a more transparent approach to Software-Defined Networking.
ODL PacketCable Plugin
• Created by CableLabs
• Provides all the necessary components to support basic PCMM:
  • PacketCable PCMM Provider
  • PacketCable PCMM Model
  • Southbound ODL plugin supporting a PCMM/COPS protocol driver
  • PacketCable PCMM RESTCONF Service API
• Limited to SCN-based gates out of the box
• Jumpstart: modify existing code rather than writing from scratch
Ground Truth: Implementation
• Adapt existing applications: telephony, congestion
• Use a common REST interface (a hypothetical sketch follows this list)
  • Based on YANG models
  • Common for all applications
  • Adaptable to unique circumstances
• ODL internal communications between the datastores and the PCMM plugin
• COPS-PR to the CMTS
[Diagram: Telephony, Congestion, and Video applications sit on top of ODL, which speaks COPS-PR southbound to the CMTS.]
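To make the REST interface concrete, here is a minimal sketch of pushing a gate request to ODL's RESTCONF service. The URL path, payload fields, and credentials are illustrative assumptions, not the plugin's documented API; the real structure is defined by the packetcable YANG model.

import requests

# Hypothetical RESTCONF path and payload; consult the packetcable
# YANG model for the actual structure.
ODL = "http://localhost:8181/restconf/config"
GATE_URL = ODL + "/packetcable:qos/apps/app/example-app"

gate = {
    "app": {
        "app-id": "example-app",
        "subscribers": {"subscriber": [{
            "subscriber-id": "10.0.0.42",
            "gates": {"gate": [{
                "gate-id": "voice-gate",
                # SCN-based gate, matching the OOTB plugin support
                "traffic-profile": {"service-class-name": "voice_scn"},
            }]},
        }]},
    }
}

resp = requests.put(GATE_URL, json=gate, auth=("admin", "admin"))
# Note the limitation called out later: ODL answers "200 OK" without
# an explicit PCMM ACK/NACK from the CMTS.
print(resp.status_code)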
Ground Truth
• Successfully set gates in the lab
  • "Off the shelf" open source, minimal modifications
• Premise shows promise
  • Modular approach to scale and flexibility
• Future:
  • Scalability
  • Survivability
  • Manageability
Ground Truth: Current Limitations
Examples:
• Not all PCMM Traffic Profiles and DOCSIS MAC-layer scheduling types are supported yet
• The ODL REST API does not return any explicit (ACK/NACK) operational information to the requesting ODL application, just "200 OK"
Nothing earth-shatteringly bad, but work to be done.
"How do you eat an elephant?"
"One bite at a time":
• Evolutionary approach
  • Do one thing well, then move on
  • Don't boil the ocean
  • Only implement what is needed
  • Leverage other people's work
• Remember scale and flexibility
  • Keep it modular
Thank you
VIRTUAL MACHINE PLACEMENT STRATEGIES FOR VIRTUAL NETWORK FUNCTIONS
Adam Grochowski, Juniper Networks
NFV/VNF Considered
• NFV is gaining traction in the industry.
• OpenStack is the de facto Virtual Infrastructure Manager (VIM).
• NFV workloads differ from traditional cloud workloads.
It’s about the network now.
Nova Workload Filters
Filter examples:
• CPU
• RAM
• Storage
• Availability zone
• IOPS
[Diagram: the Nova scheduler pipeline. Candidate hosts (Host1 through Host4) pass through the Filter stage, the surviving hosts are ranked by the Weight stage, and the best-weighted host is chosen as the good candidate.]
What About the Network?
• OpenStack doesn't care.
• It will "plumb" the network.
• It will also place a VM on a non-viable host.
What We Need
Needed: new network-related attributes
• Port bandwidth
• Requested bandwidth

class NetworkFilter(filters.BaseHostFilter):
    """Network bandwidth utilization filter."""

    def host_passes(self, host_state, filter_properties):
        """Only return hosts with sufficient available BW."""
        instance_type = filter_properties.get('instance_type')
        requested_bw = instance_type['bandwidth_mb']
        # Hypothetical host attribute; a real filter would track
        # available bandwidth per host in host_state.
        return host_state.available_bandwidth_mb >= requested_bw
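For completeness, a sketch of how a custom filter like this might be wired into the scheduler, assuming it is importable as myfilters.NetworkFilter (a hypothetical module name; option names vary by OpenStack release, and newer releases use [filter_scheduler] enabled_filters instead):

# nova.conf
[DEFAULT]
scheduler_available_filters = nova.scheduler.filters.all_filters
scheduler_available_filters = myfilters.NetworkFilter
scheduler_default_filters = RetryFilter,RamFilter,ComputeFilter,NetworkFilter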
It's About the Network Now
Needed: new OpenStack development, the addition of network-based placement, to:
• Provide quality of service for network functions
• Improve server utilization, improve ROI
Thank you.
Assessing Network and Equipment Failures in the New SDN/NFV Architectures
Marlon Roa, Infinera
Transport Network: Change in Management Topology
Legacy management:
• Low bandwidth
• Relaxed latency needs, if any
• Connectivity protection is an objective
  • Failures are not customer-affecting
• HA at the NMS/OSS level
SDN/NFV management:
• High bandwidth & QoS methods
  • Data & control traffic
• Service/data processing latency is critical
  • Forces clusters closer to the nodes
• Connectivity protection is a requirement
• HA at the former NMS level and near key nodes
[Diagram: the management topology change, from a central NMS HA cluster to SDN controller and NFV MANO clusters deployed alongside the NMS.]
"Service continuity is not only a customer expectation, but often a regulatory requirement, as telecommunication networks are considered to be part of critical national infrastructure, and respective legal obligations for service assurance/business continuity are in place" (NFV ETSI [2] specification).
Effects: extended downtime or execution error
SDN Controller / NFV MANO Hardware
[Diagram: SDN Controller HA Clusters and NFV MANO HA Clusters.]
• Sample functions
  – SDN: route, re-route, cross-connect, etc.
  – NFV: functions that are part of the data path
  – Deliver carrier-grade performance
• Failure prevention:
  – SDN/NFV: hardware protection at certain nodes
    • OpenFlow 1.2: slave/master/equal roles
    • No standard protection
  – Reboot/reset recovery complexity
  – Hardware choices:
    • Processor, memory, I/O
    • Operating temperature
    • SW compatibility
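To make the OpenFlow role mechanism concrete, here is a minimal sketch in Python using the Ryu controller framework (our choice for illustration; the talk does not name a controller) of a controller claiming the master role for a switch. A standby instance would request OFPCR_ROLE_SLAVE instead.

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class RoleClaimingController(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def features_handler(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # Claim master; the switch demotes any previous master to slave.
        # generation_id guards against stale role requests after failover.
        req = parser.OFPRoleRequest(dp, ofp.OFPCR_ROLE_MASTER, generation_id=0)
        dp.send_msg(req)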
SDN Controller/Orchestrator NFV MANO connectivity
• Sample functions
  – Heartbeat monitor (a minimal sketch follows this list)
  – Sync backups
  – SDN/NFV coexisting (ETSI/ONF)
  – Controller/Orchestrator as in the NMS/OSS model
  – SDN/NFV HA clusters closer to the nodes
• Failure prevention:
  – HA design: clusters, HA monitors, load balancers, low latency
  – Connectivity redundancy between host servers
    • New sub-network
  – Security of the link (e.g., IPsec) due to remote controllers
  – Link media upgrades to relieve congestion
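A minimal sketch of the heartbeat-monitoring idea, assuming UDP heartbeats between cluster members; the address, port, timeout, and failover hook are illustrative placeholders, not anything specified by the talk:

import socket
import time

PEER = ("192.0.2.10", 9999)   # standby cluster member (example address)
TIMEOUT_S = 3.0               # declare the peer failed after this silence

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 9999))
sock.settimeout(1.0)

last_seen = time.monotonic()
while True:
    sock.sendto(b"heartbeat", PEER)       # advertise our own liveness
    try:
        data, addr = sock.recvfrom(64)    # listen for the peer's beat
        if data == b"heartbeat":
            last_seen = time.monotonic()
    except socket.timeout:
        pass
    if time.monotonic() - last_seen > TIMEOUT_S:
        print("peer silent, trigger failover")  # placeholder hook
        last_seen = time.monotonic()
    time.sleep(1.0)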
SDN/NFV Controller-to-Node Connectivity
• Sample functions
  – Legacy configuration and PM data
  – SDN service-affecting decisions
  – Execution and OAM of VNFs
  – Security for the new customer traffic path
• Failure prevention:
  – Redundant, symmetric, low-latency paths (NFV)
  – Increased management link requirements
    • QoS (control, customer), latency, speed
  – Security of the link (e.g., IPsec)
  – Function download and execution validation
SDN/NFV Deployment and the Cost Increase
• New deployment challenges for the control layer
  – New qualification testing
    • Multi-vendor, non-standard HW & SW
  – New redundant control-layer HW/SW at the nodes
  – New carrier-grade connectivity within nodes and among control clusters
• Non-trivial cost of deployment today. Tomorrow:
  – Industry certification of HW/SW bundles (e.g., MEF with ETH)
  – As SDN & NFV mature, reduced cost and complexity
    • Normalized redundancy of connections to the nodes
    • Best practices for controller resiliency should become clear
Have you estimated the OPEX & CAPEX of an actual SDN & NFV deployment?
THANK YOU
Marlon Roa, Technical Solutions Director, [email protected]
DOCSIS 3.1 Overdrive: Dynamic Optimization Using a Programmable Physical Layer
Jason Schnitzer, Applied Broadband, Inc.
Shannon & DOCSIS
[Diagram: a CMTS connected to a CM across the HFC plant.]
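For reference, the Shannon capacity bound that frames this discussion (a standard result; the per-subcarrier RxMER measurements described below play the role of the SNR estimate):

C = B \log_2\left(1 + \mathrm{SNR}\right)

where C is the achievable channel capacity in bits per second and B is the channel bandwidth in Hz. Profile optimization tries to push each OFDM subcarrier's bit loading toward this limit.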
DOCSIS 3.1 OFDM Profile Optimization
Profile Management Application (PMA)
SOURCE: CableLabs, VNE-TR-SDN-ARCH-V01-150623
D3.1 Programmable Optimization System
Separation of Data and Control Planes
• Data forwarding via DOCSIS resources
• Control plane hosts the application
Abstraction of Network Complexity
• CMTS Vendor Abstraction Layer (CVAL)
• Visibility instrumentation normalization
• Common control interface
Programmatic Interaction
• Common Data Model (YANG-defined)
• RESTCONF API
Optimization Information Model – A “Big Data” Problem
Data model elements: DsRxMer, FEC codewords, RxPower.
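To give a feel for why this is a "big data" problem, a back-of-envelope sketch in Python. All of the numbers here (subcarrier count, CM population, polling interval) are illustrative assumptions, not figures from the talk:

# Rough volume estimate for per-subcarrier downstream RxMER telemetry.
SUBCARRIERS = 7600        # e.g., a 192 MHz OFDM channel at 25 kHz spacing
BYTES_PER_MER = 1         # one MER sample per subcarrier
CMS = 100_000             # cable modems being monitored (assumed)
POLL_SECONDS = 300        # one sweep every 5 minutes (assumed)

bytes_per_day = SUBCARRIERS * BYTES_PER_MER * CMS * (86_400 // POLL_SECONDS)
print(f"{bytes_per_day / 1e9:.1f} GB/day of raw RxMER alone")
# ~218.9 GB/day, before FEC codeword counts and RxPower are added.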
Current Art
Capabilities
• Measurement & control plane
• Complete data model collection in a production network
• Profile control proven in the lab
• Basic optimization logic using simple network policies
Limitations
• Small number of simple profiles
• Proprietary control interfaces (CLI)
• Limited data model (SNMP MIBs)
Future Work
Protocols
• Define standard profile control interfaces on the D3.1 CCAP
• API for OPT-REQ/OPT-RSP
• Formalize the PMA architecture
• OpenDaylight SDN integration
Analysis & Control
• Historical data analysis
• Standard CMTS APIs
• Profile optimization and control in a production D3.1 network
Thank you
Evaluation of Virtualizing the DOCSIS MAC in Software in the Data Center
Li Zhang, Xin Yang, Lifan Xu, Huawei Technologies Co., Ltd.
Virtualized DOCSIS MAC Introduction
• The DOCSIS MAC consists of a data plane, a control plane, and a management plane.
  – The control and management planes are already all software in CCAP implementations;
  – The data plane is mostly hardware (e.g., FPGA) to guarantee high throughput and low latency.
• To evaluate DOCSIS MAC virtualization, we break the DOCSIS MAC down into several functional modules, then build NFV simulation modules to estimate how many generic CPU cores are needed and what performance can be achieved.
• The NFV platform we used is DPDK (Data Plane Development Kit), which provides a set of data-plane libraries for fast packet processing.
• The hardware we used is a Huawei FusionServer RH2288, which provides a 12-core 2.6 GHz 64-bit Intel Xeon processor, 8x10GE NICs, and 128 GB of RAM.
DOCSIS MAC Break Down
[Block diagram: the DOCSIS MAC broken down into functional modules, grouped into four planes. Asterisked modules are only for data traffic and do not apply to D3.1.]
• Downstream data plane: Ethernet RX MAC, Service Flow Classifier, Downstream Scheduler, Bonding Channel Distributor, AES/DES Encryption, DOCSIS Framer, MPEG2 Framer*, DEPI Encapsulate
• Upstream data plane: UEPI Decapsulate, DOCSIS DeFramer, AES/DES Decryption, Concatenation/Fragmentation*, Frame Distributor, Upstream Scheduler, Ethernet TX MAC
• DOCSIS MAC control: Dynamic Service Flow Changes, Multicast Control, Load Balancing, CM Control, Encryption Key Exchange, Upstream Scheduler
• DOCSIS MAC management: Subscriber Management, Channel Management, QoS Profile Management, Modulation Profile Management, DOCSIS OSSI, CM Management
MAC Data Plane Evaluation
Table 1. DS data plane CPU cycles for a 2000-byte packet

DS Data Plane Module           CPU Cycles
Eth RX and Classifier                1948
Downstream Scheduler                 1318
Bonding Channel Distributor           218
AES Encryption                      16384
DOCSIS Framer                         353
MPEG2 Framer slicing                 7631
DEPI tunnel                           651
Total                               28503

Table 2. US data plane CPU cycles for a 2000-byte packet

US Data Plane Module           CPU Cycles
Eth TX                               1552
Upstream Scheduler                    818
Frame Distributor                     318
AES Decryption                      16384
DOCSIS Deframer                      2553
Concatenation Fragmentation          5931
UEPI tunnel                           502
Total                               28058

Table 3. CPU cycles for typical packet lengths

Packet Size     DS Cycles    US Cycles
64 bytes            21065        20672
128 bytes           21103        20768
256 bytes           21814        21410
512 bytes           22602        22244
1024 bytes          24811        24362
1518 bytes          27009        26453
2000 bytes          28503        28058
• The MAC data plane modules are driven by packets, so software forwarding performance depends on packet size.
  – We chose packet sizes of 64, 128, 256, 512, 1024, 1518, and 2000 bytes to count processing cycles;
  – Longer packets achieve higher bit/s throughput; 2000 bytes is the maximum packet size defined in the D3.1 MULPI spec (see the rough per-core estimate after this list).
• AES encryption and decryption is the biggest consumer of CPU cycles, even with the Intel AES-NI library as an acceleration engine.
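As a sanity check on these numbers, a quick calculation of single-core downstream throughput for the best case (2000-byte packets) on the 2.6 GHz Xeon described earlier. This is our own arithmetic on the paper's figures, ignoring any overhead beyond the counted cycles:

CPU_HZ = 2.6e9           # FusionServer RH2288 Xeon clock, from the setup
DS_CYCLES_2000B = 28503  # Table 1 total for a 2000-byte packet
PACKET_BITS = 2000 * 8

pps = CPU_HZ / DS_CYCLES_2000B     # packets one core can process per second
gbps = pps * PACKET_BITS / 1e9
print(f"{pps:,.0f} pps, {gbps:.2f} Gbps per core")
# ~91,218 pps, ~1.46 Gbps per core; small packets cost nearly as many
# cycles each, which is why the mixed-traffic model below needs 3.54
# cores per Gbps.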
MAC MGMT and Control Evaluation
Figure 1. Upstream Scheduler CPU usage
• Unlike the data plane, MAC control and management threads are driven by events, not packets.
• The processing of most modules is not heavy, except for the upstream scheduler.
  – The upstream scheduler operates on a cycle of about 1.5 ms to 2 ms;
  – Figure 1 gives a typical x86 CPU usage curve for different numbers of cable modems; from it, every 1000 CMs needs 0.556 x86 CPU cores.
• The other MAC control and management modules need only 0.218 x86 CPU cores per 1000 CMs.
• In total, the MAC management and control planes need 0.774 x86 CPU cores per 1000 CMs.
Virtualized MAC Evaluation Summary
• If we choose the packet size distribution profile shown in the table below:

Length (bytes)     64     128    256    512    1518
Packets Percent    50%    10%    15%    10%    15%
Bits Percent       8.9%   3.5%   10.6%  14.1%  62.9%

• The CPU core consumption of the whole DOCSIS MAC software can be summed as (evaluated in the sketch after this list):

  Cores = 3.54 * T + 0.556 * C + 0.218 * C

  Where:
  • T is the throughput in Gbps;
  • C is the number of cable modems, in thousands.

• The CPU consumption curve (figure not reproduced here) covers a small hub with 10K HHP, a typical hub with 30K HHP, and a big hub with 50K HHP.
  – Given a hub with 30K subscribers and 80 Gbps of bandwidth, about 300 x86 cores are needed.
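Plugging the quoted hub size into the model (straightforward arithmetic; the function and variable names are ours):

def docsis_mac_cores(throughput_gbps, cm_thousands):
    """x86 cores for a virtualized DOCSIS MAC, per the model above."""
    return (3.54 * throughput_gbps      # data plane
            + 0.556 * cm_thousands      # upstream scheduler
            + 0.218 * cm_thousands)     # other control/management

# Typical hub: 30K subscribers (C = 30), 80 Gbps (T = 80)
print(docsis_mac_cores(80, 30))  # 306.42, i.e. "about 300 cores"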
Conclusion
• There is still a big gap between the CPU and FPGA implementations of the DOCSIS MAC:
  – The power consumption of an x86 CPU MAC is 24 times that of an FPGA MAC;
  – The hardware cost is about 77 times.
• Today's generic x86 servers in the cloud are designed for applications that need huge memory and storage, which access forwarding devices do not actually require.
• DOCSIS MAC virtualization needs a new generic hardware platform for access-network-device NFV.