
Page 1: Networking Field Day 10 Presentation

Hyperscale Networking for All Chapter 2

Prashant Gandhi, VP Products & Strategy; Rob Sherwood, CTO

NFD-10 Presentation, 19 August 2015

Page 2: Networking Field Day 10 Presentation

AGENDA

• 12:30, 30 mins: Prashant Gandhi, Overview; Rob Sherwood, Why SDN Fabrics
• 1:00, 20 mins: Rob Sherwood, Big Cloud Fabric, OpenStack, P+V Demo
• 1:20, 20 mins: Rob Sherwood & Syed Ghayur, Big Cloud Fabric for VMware, Demo
• 1:40, 20 mins: Rob Sherwood & Mostafa Mansour, Big Monitoring Fabric, Demo
• 2:00, 30 mins: Prashant Gandhi, Use Cases & Business Model; Rob Sherwood, What’s Next?

© 2015, BIG SWITCH NETWORKS, INC.

Page 3: Networking Field Day 10 Presentation

OUR PURPOSE: Transform Networking

Why?
• Highly Automated (Faster)
• Simple to Operate, Specialists Not Needed (Simpler)
• Open HW, Vendor Choice (Economical)

How? SDN Fabrics on Open Networking HW

What?
• Big Cloud Fabric for OpenStack, VMware & Data Center Fabric
• Big Monitoring Fabric for Next-gen Network Packet Broker (NPB)

Page 4: Networking Field Day 10 Presentation

SDN FABRICS – PRODUCT JOURNEY (2H13 → 1H14 → 2H14 → 1H15 → 2H15)
(** = Tech Preview; all other items shipping)

BIG CLOUD FABRIC
• BCF 2.0: SDN Clos fabric, 32 leaf / 6 spine, OpenStack; use cases: Cloud, Big Data, VDI; Accton Trident2 HW
• BCF 2.5/2.6: vSphere automation, NSX-v visibility, analytics; Dell Trident2 HW
• BCF 3.0: OpenStack P+V fabric with Switch Light Virtual (VX); VMware: multi-vSphere 6, VIO, vCenter GUI plugin**, vRealize Log Insight**; $599/mo elastic pricing

BIG MONITORING FABRIC* – NEXT-GEN NPB
• BMF 2.0: SDN-based NPB; scale-out; single management pane; economical; Quanta switches
• BMF 3.0: IPv6 matching, tool load balancing, overlapping policies, NPB service chaining; Accton switches
• BMF 4.0: inter-DC tunneling, deeper packet matching, analytics; Dell S4810, S6000
• BMF 4.1/4.5: inline mode, flow generation; Dell S4048, S6000
• BMF innovations: Service Node (de-duplication, packet slicing/mod, multi-10G scale-out; Q3 beta) and 100G switch (Q4 beta)

*Formerly Big Tap Monitoring Fabric

Page 5: Networking Field Day 10 Presentation

Architecture and Vision

Rob Sherwood

5

Page 6: Networking Field Day 10 Presentation

HYPERSCALE DATA CENTER R&D LEADERSHIP
They Are Leading the Charge

Page 7: Networking Field Day 10 Presentation

OUR VISION: Deliver Hyperscale-style Networking to Any Enterprise

Page 8: Networking Field Day 10 Presentation

HYPERSCALE vs. EVERYONE

Hyperscale Approach:
• Massive scale
• Customized solutions, internal development
• White box hardware
• Internal support

Mainstream Requirements:
• Mature, out-of-the-box solution
• Vendor supported solution (one throat to choke: HW & SW)
• White-box & brite-box
• Modular, scale-out architecture
• Operated by existing Network Admin/NetOps team

Page 9: Networking Field Day 10 Presentation

GOOGLE DC NETWORKING PRINCIPLES
Big Switch Architecture: Open SDN Fabric (Project Jupiter)

Google DC networking principle → Big Switch architecture (Open SDN Fabric):
• Merchant Silicon ✓ (merchant silicon based open networking HW)
• Centralized Control ✓ (SDN Controller)
• Clos Topology ✓ (Clos Fabric)

Ref: https://www.youtube.com/watch?v=FaAZAII2x0w

Page 10: Networking Field Day 10 Presentation

BIG SWITCH PORTFOLIO – OPEN SDN FABRICS

• Big Monitoring Fabric Controller and Big Cloud Fabric Controller
• Switch Light™ OS with ONIE boot loader on 1G/10G/40G (Trident-II) open switches
• ONIE: Open Network Install Environment
• See HCL for HW support details

Page 11: Networking Field Day 10 Presentation

SHARED “ONE BIG SWITCH” ARCHITECTURE
Disaggregation of the “MainFrame”

[Diagram: traditional chassis (supervisors, fabric cards, line cards) compared with Big Cloud Fabric: an SDN controller with a hierarchical control plane, spine switches acting as a 10G/40G backplane, and leaf switches in each rack connecting compute workloads, services & connectivity, and physical & virtual workloads at 1G/10G/40G]

• Traditional frame design: single point of management, but proprietary, expensive, lock-in, fixed slots
• Disaggregated frame, one “Big Switch”: open, simple, economical, vendor choice, scale-out

SDN & Clos fabric are necessary for NetFrame disaggregation.

Page 12: Networking Field Day 10 Presentation

BIG SWITCH PORTFOLIO – OPEN SDN FABRICS
Replaces Network Packet Broker or Data Center Switch

[Diagram: Big Monitoring Fabric: production network TAP & SPAN ports feed filter ports on a 1/10/40G Ethernet switch fabric, with service ports (optional NPBs) and delivery ports to visibility tools (network perf monitoring, application monitoring, security tools, VoIP monitoring). Big Cloud Fabric: 1G/10G/40G workloads over a 10G/40G backplane]

Page 13: Networking Field Day 10 Presentation

SWITCH LIGHT ARCHITECTURE
Hierarchical Implementation of Control Plane

Switch Light OS components:
• Base: ONL Linux kernel with LibC on a Debian Wheezy base distribution, plus the ASIC SDK
• Services: SSH, fan control, NTP, syslog, SNMP, CLI
• OpenFlow: Indigo OpenFlow agent (Loxi) with the Indigo/ASIC driver
• Boot & platform: ZTN loader; I2C, GPIO, device trees

Legend: Open Network Linux / BSN open / BSN closed / 3rd-party closed source

Switch Light is our Indigo OpenFlow Agent running on Open Network Linux on x86 or ASIC-based hardware, managed by the Big Switch SDN Controller.

Page 14: Networking Field Day 10 Presentation

FREE PRODUCT TRIAL ONLINE WITH BSN LABS

Both Products


Create free account now at

http://labs.bigswitch.com

Page 15: Networking Field Day 10 Presentation

Big Cloud Fabric

Rob Sherwood

15

Page 16: Networking Field Day 10 Presentation

BIG CLOUD FABRIC
Best Leaf-Spine Clos Fabric for Private Clouds

• OPENSTACK & VMWARE: single programmatic interface for up to a 16-rack fabric
• SDN CONTROLLER (CLI or GUI): full automation for provisioning, HA/resiliency & management
• L2 + L3 CLOS FABRIC managed by the SDN controller: native VM mobility across 640+ servers/nodes
• SWITCH LIGHT OS: Open Network Linux (ONL) based OS for Dell-ON or whitebox switches

[Diagram: leaf/spine switches running Switch Light OS, racks of whitebox switches with other servers & storage]

Page 17: Networking Field Day 10 Presentation

BIG CLOUD FABRIC
Deployment Options & Use Cases

1. P+V Fabric: most resilient, best visibility
2. P Fabric: most automated, best visibility
3. Data Center Fabric (multi-hypervisor, physical workloads): most simple, best visibility

Use Cases: IaaS clouds, Big Data/HPC, VDI, NFV, SDS, …

Page 18: Networking Field Day 10 Presentation

BIG CLOUD FABRIC HARDWARE COMPONENTS
Moving Parts

• Big Cloud Fabric Controller (VMs or physical appliance pair)
• Switch Light OS on spine (2-6 40G bare metal switches)
• Switch Light OS on leaf (10G/40G bare metal switches)
• OpenStack plug-in (optional)
• Switch Light vSwitch (optional)

Page 19: Networking Field Day 10 Presentation

POD-LEVEL DEPLOYMENT
Interoperate with Existing PODs in the Data Center

[Diagram: Internet/WAN behind data center core routers (L3/L2 boundary); each Big Cloud Fabric pod has its own BCF controller, ingress/egress, 40G spine links and 10G leaf links across Rack 1 … Rack N]

Example BCF PODs:
• Private cloud: dev/test
• Analytics (Hadoop)
• VDI
• Server virtualization (vSphere)
• SDN underlay (e.g. NSX)

Page 20: Networking Field Day 10 Presentation

BCF vs. MainFrame

NetFrame benefits (chassis pair) → matched by BCF (logical chassis pair):
• Single pane of glass (Supervisor) ✓ (Controller)
• No login to linecard or fabric card ✓ (Zero Touch Fabric: simple)
• No internal fabric port config, auto Clos within chassis ✓ (Zero Touch Fabric: auto Clos leaf/spine configuration)
• No L2/L3 protocols inside chassis, Sup auto-configures datapath on all linecards ✓ (Controller auto-configures datapath on all switches)
• Auto upgrade of LC & FC via Sup ✓ (16-rack auto upgrade in ~15 min)

NetFrame challenges → BCF advantages:
• Fixed number of slots → scale out (2 racks -> 16 racks)
• Slow to innovate, SUP HW/CPU hard to refresh → high feature velocity (e.g. Fabric Analytics, NSX visibility), leveraging the latest x86 advances
• Vendor lock-in, proprietary HW/SW → vendor choice, open HW (HW/SW disaggregation)
• Expensive → economical (pay as you scale)

Page 21: Networking Field Day 10 Presentation

WHY CUSTOMERS BUY: 1) SIMPLICITY

! tenant
tenant BLUE
  logical-router
    route 0.0.0.0/0 tenant system
    interface segment web
      ip address 10.1.1.254/24
  segment web
    member-port-group pg-bm0 vlan 20

[Diagram: tenant BLUE logical router (with policy, FW/LB service insertion) connecting Segment-Web, Segment-App and Segment-DB (multiple L2 segments with WEB/APP/DB endpoints) to an external core router]

• Application agility (logical networking, provisioning templates)
• Hitless fabric upgrade in ~15 minutes, controller coordinated, for 16 racks / 40 devices
• Zero-touch fabric (REST APIs, GUI, CLI)

Feature: Big Cloud Fabric (vs. box-by-box operation):
• Switch OS install: automatic
• Link aggregation: automatic
• Fabric formation: automatic
• Troubleshooting: fabric-wide
• L4-7 service chaining: declarative (per tenant)
• Add/remove/update fabric: automatic
• Fabric visibility: controller or API
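The tenant configuration on this slide can also be expressed as structured data for the controller's REST interface. A minimal sketch, assuming a hypothetical JSON layout (the dict keys and shape here are illustrative, not the documented BCF REST schema):

```python
import json

# Hypothetical JSON mirror of the "tenant BLUE" CLI config shown above.
# Field names are assumptions for illustration only.
tenant_blue = {
    "name": "BLUE",
    "logical-router": {
        "routes": [{"prefix": "0.0.0.0/0", "next-hop": "tenant system"}],
        "interfaces": [{"segment": "web", "ip-address": "10.1.1.254/24"}],
    },
    "segments": [
        {"name": "web",
         "member-port-groups": [{"port-group": "pg-bm0", "vlan": 20}]},
    ],
}

# Serialize as a REST client would before POSTing to the controller.
payload = json.dumps(tenant_blue, sort_keys=True)
print(len(payload) > 0)
```

Because the GUI and CLI are themselves REST clients (see slide 26), the same object a CLI command produces could be submitted programmatically.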

Page 22: Networking Field Day 10 Presentation

BIG CLOUD FABRIC − LOGICAL VIEW

[Diagram: multiple physical racks of VMs and PMs grouped into tenants sharing one fabric; PM = physical machine, VM = virtual machine]

• Tenant: a logical grouping of L2/L3 networks
• Multi-tenancy: multiple tenants can share the same infrastructure
• Logical Segment (broadcast network): L2 network consisting of logical ports and end-points
• Logical Router (VRF): tenant router for inter-segment or intra-segment routing
• System Router: inter-tenant and external communication
• Core Router: external L3 router (L3 core) for out-of-fabric communication
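The tenant / segment / router hierarchy above can be sketched as a small data model. This is an illustrative toy (class names and the routing rule are assumptions, not BCF's object model):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Segment:
    """L2 broadcast network of logical ports and end-points."""
    name: str
    endpoints: List[str] = field(default_factory=list)  # VM/PM identifiers

@dataclass
class Tenant:
    """Logical grouping of L2/L3 networks with its own logical router."""
    name: str
    segments: List[Segment] = field(default_factory=list)

    def routes_locally(self, src: str, dst: str) -> bool:
        # The tenant logical router (VRF) handles traffic whose endpoints
        # both belong to this tenant; otherwise the system router is needed.
        owned = {ep for seg in self.segments for ep in seg.endpoints}
        return src in owned and dst in owned

blue = Tenant("BLUE", [Segment("web", ["vm-w1"]), Segment("db", ["vm-d1"])])
print(blue.routes_locally("vm-w1", "vm-d1"))  # intra-tenant, logical router
print(blue.routes_locally("vm-w1", "vm-x9"))  # leaves tenant, system router
```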

Page 23: Networking Field Day 10 Presentation

ZERO TOUCH FABRIC: SWITCH/FABRIC BRING-UP
Eliminates Box-by-Box Switch Bring-Up & Provisioning

[Diagram: Big Cloud Fabric Controller with a configuration repository and a Switch Light OS repository; switches boot via ONIE (Open Network Install Environment); LLDP automatically discovers the fabric topology; LACP or LLDP automatically discovers port groups]

BCF steps to install a fabric:
1. Mount physical switches and cable them
2. Install controller(s); add switch MACs and roles on the controller
3. Power up switches
   • Switches download the right image and configuration from the controller
   • Monitor link status on the controller (a single point for the entire fabric)

One-time configuration; no redundant, manual work.

Page 24: Networking Field Day 10 Presentation

UPGRADE DETAILS

Three simple commands:
1. Copy the new image package to both controllers
2. Stage the upgrade on both controllers
3. Launch the upgrade on both controllers

# copy scp://[email protected]:controller-upgrade-bcf-2.0.1.pkg image://
# upgrade stage
# upgrade launch

Controller-coordinated upgrade process (zero touch):
A. Upgrade the standby controller and make it active
B. Upgrade the fabric switches
C. Upgrade the old active controller
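The coordinated ordering (standby first, then switches, then the old active controller) is what keeps the upgrade hitless. A minimal sketch of that ordering, with illustrative names:

```python
# Sketch of the controller-coordinated upgrade ordering described above:
# A) standby controller, B) fabric switches, C) old active controller.
def upgrade_plan(active: str, standby: str, switches: list) -> list:
    steps = [f"upgrade {standby} and fail over (it becomes active)"]
    steps += [f"upgrade switch {s}" for s in switches]
    steps.append(f"upgrade {active} (old active, now standby)")
    return steps

plan = upgrade_plan("ctl-1", "ctl-2", ["leaf-1", "leaf-2", "spine-1"])
for step in plan:
    print(step)
```

At every point one controller remains active, so the fabric is never without a control plane during the ~15-minute window the slides cite.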

Page 25: Networking Field Day 10 Presentation

WHY CUSTOMERS BUY: 2) RESILIENCY @ SCALE
Chaos Monkey resilience testing shows BCF delivers best-in-class HA at scale

Chaos Monkey testing: 42K simulated end-points/VMs of background load and 640+ forced component failures during the "under stress" test runs
• 32 leaf / 6 spine / 16 rack pod
• Controller fail-over every 30 seconds
• Switch fail-over every 8 seconds
• Link fail-over every 4 seconds

Conclusion: 640 component failures in 30 minutes with no impact on application performance.

Page 26: Networking Field Day 10 Presentation

WHY CUSTOMERS BUY: 3) SIMPLE, SIMPLE, SIMPLE

Fabric Trace:
• Verify logical path: segment, logical router, L3 policy, L4-7 device (next hop)
• View simulated topology: source → ingress leaf → spine → egress leaf → destination

Fabric Analytics: easy fine-grain time-series search of log events based on:
• Event state (e.g. failures)
• Configuration change (REST, CLI or GUI)
• Tenant / segment / devices
• End-point (MAC or IP) attachment & detachment

Fabric Programmability:
• Native REST APIs: GUI & CLI are REST clients (consistent & hardened)
• Controller is a single point of API integration (versus tens of boxes)
Benefits:
• No DevOps cost for network automation
• Print REST from CLI/GUI (accelerate DevOps through NetOps)
• Scalable M2M API interaction

dt-controller1# debug rest
***** Enabled display rest mode *****
dt-controller1# show tenant blue
REST-POST: POST http://127.0.0.1:8080/api/v1/data/controller/core/aaa/audit-event {"attribute": [{"value": "show tenant blue", "key": "cmd_args"}], "event-type": "cli.command", "session-cookie": "yx6pjq6cwo5YXZwHsDyw6Z_3Zm5PITwE"}
REST-POST: http://127.0.0.1:8080/api/v1/data/controller/core/aaa/audit-event done 0:00:00.003089
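Because the CLI is itself a REST client, the `debug rest` output on this slide reveals the exact request a script could send. A sketch that only reconstructs the audit-event body from the transcript (the session cookie is a placeholder, and no live request is made):

```python
import json

# URL and field names come from the "debug rest" output above;
# the session cookie is a placeholder, not a real credential.
url = "http://127.0.0.1:8080/api/v1/data/controller/core/aaa/audit-event"
body = {
    "attribute": [{"value": "show tenant blue", "key": "cmd_args"}],
    "event-type": "cli.command",
    "session-cookie": "PLACEHOLDER-SESSION-COOKIE",
}

# An HTTP client would POST this JSON body to `url`.
encoded = json.dumps(body)
print(url)
```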

Page 27: Networking Field Day 10 Presentation

NEW: FABRIC SPAN

Fabric SPAN feature benefits:
• No need for a sniffer on each rack
• SPAN traffic does not compete with production traffic
• Physical or virtual source
• Physical or virtual destination
• Supports SPAN filters

[Diagram: L2 + L3, P + V Clos fabric managed by the SDN controller (Big Cloud Fabric Controller with CLI, GUI, API and BCF Neutron plugin); Switch Light OS on physical switches, Switch Light VX on hypervisors; data traffic and Fabric SPAN traffic delivered to sniffers]

Page 28: Networking Field Day 10 Presentation

NEW: BIG CLOUD FABRIC (P+V FABRIC)
Resilient Networking for OpenStack Clouds

Industry’s 1st P+V SDN fabric on open HW:
• Physical: Switch Light OS on switches; Virtual: Switch Light VX on KVM servers

Resilience for OpenStack:
• Full Neutron integration for L2/L3 networking
• Distributed virtual routing, NAT/PAT

Operational simplicity for P+V:
• Single pane of glass for P+V networks
• L4-L7 service insertion (LBaaS, FWaaS)

Deep P+V visibility:
• P+V visibility & troubleshooting (VM-to-VM path & policy visibility)
• Horizon extensions (fabric visibility, Heat templates)

Components:
• BCF NEUTRON PLUG-IN (new): single programmatic interface for a multi-rack P+V fabric
• P+V SDN CONTROLLER (enhanced): full automation for provisioning, HA/resiliency, management & visibility
• SWITCH LIGHT OS: Open Network Linux (ONL) based OS for Dell-ON or whitebox switches
• SWITCH LIGHT VIRTUAL (new): user-space agent on the OVS kernel module

Page 29: Networking Field Day 10 Presentation

SWITCH LIGHT VIRTUAL ARCHITECTURE
Stock OVS vs. Switch Light Virtual

Stock OVS components:
• User space: ovs-vswitchd (OVS OF 1.x agent), OVS datapath management utility
• Kernel space: OVS kernel flow table
• OpenFlow connection to an OpenFlow controller

Switch Light Virtual components:
• User space: Indigo OF agent with Indigo/kernel driver and the BCF forwarding pipeline
• Kernel space: OVS kernel flow table
• Offloaded protocols: ICMP, DHCP, LLDP, ARP, NAT, LACP, sFlow, in-band
• OpenFlow connection to the BCF Controller

Page 30: Networking Field Day 10 Presentation

EXAMPLE: NETWORK-TEST-AS-A-SERVICE
Tests for physical and logical reachability exposed to app owners

“About 80% of the connectivity trouble tickets we faced were misconfigurations in guest VMs, but we wouldn’t find that out until a few hours had been spent checking the network. Much nicer when app owners can run a network check by themselves before filing the ticket.”

Example integration with OpenStack Horizon built on controller REST APIs (source code available).

Page 31: Networking Field Day 10 Presentation

BCF GUI Demo

Rob Sherwood

32

Page 32: Networking Field Day 10 Presentation

TRY THIS DEMO AT HOME WITH BSN LABS

Both Products


Page 33: Networking Field Day 10 Presentation

Thank You

Page 34: Networking Field Day 10 Presentation

Big Cloud Fabric – VMware

Rob Sherwood, Syed Ghayur

Page 35: Networking Field Day 10 Presentation

BIG CLOUD FABRIC (P-FABRIC)
VMware Solution: Expanded Use Cases, VI-admin Visibility

vSphere 6 Automation (P-Fabric):
• Advanced network automation for vSphere 6 (zero touch): ESX host detection, L2 network creation, vMotion
• VM-level visibility and troubleshooting via the BCF Controller

Multi-vSphere Hosted Cloud (P-Fabric):
• Multiple vCenters (vC-1, vC-2, … vC-N) associated with a single BCF
• One vCenter per tenant
• Advanced automation (zero-touch), deep visibility

VMware Integrated OpenStack (P-Fabric):
• VMware private/public clouds with OpenStack orchestration
• Fully automated, zero-touch fabric
• vSphere & NSX visibility, fabric-wide troubleshooting

Page 36: Networking Field Day 10 Presentation

PREVIEW: VMware vCENTER GUI PLUGIN

• VM-admin has fabric visibility
• ESX host to BCF leaf switch connectivity
• Option to configure BCF’s L2 & L3 networks

Page 37: Networking Field Day 10 Presentation

Demo: BCF + vSphere

Rob Sherwood, Syed Ghayur

Page 38: Networking Field Day 10 Presentation

BIG CLOUD FABRIC - VCENTER INTEGRATION
Integration using vCenter APIs

A. Auto host detection and LAG formation
B. Auto L2 network creation
C. Auto VM learning
D. Network policy migration for vMotion
E. VM-level visibility and troubleshooting

[Diagram: Big Cloud Fabric (P-Clos Edition) with the BCF Controller talking to vCenter via the vCenter API (Big Switch vCenter Extension); virtualized workloads (ESXi) on virtual switches hosting VMs (WEB, APP, DB) behind the L3/L2 fabric]

Page 39: Networking Field Day 10 Presentation

vCenter Integration Details

Page 40: Networking Field Day 10 Presentation

A. HOST DETECTION AND LAG FORMATION

1. BCF registers with vCenter
2. CDP is enabled on the vCenter vSwitch
3. BCF gets a vCenter notification and forms a LAG using CDP on the leaf switch

[Diagram: BCF Controller ↔ vCenter (Big Switch vCenter Extension) over the vCenter API; Host-A and Host-B with virtual switches connected to Big Cloud Fabric (P-Clos Edition) leaf switches; VMs APP-1, APP-2, DB-2]

Page 41: Networking Field Day 10 Presentation

B. L2 NETWORK AUTOMATION

1. A new port-group is created in vCenter
2. BCF gets a vCenter notification and creates an L2 segment on the fabric

[Diagram: same topology as above; the new port-group on the hosts' virtual switches maps to a fabric-wide L2 segment]
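The two steps above follow a simple notification-driven pattern: a vCenter event arrives, and the controller translates it into a fabric-wide action. An illustrative sketch (the event shape and function names are assumptions, not BCF API calls):

```python
# Toy model of the vCenter-notification-to-fabric-action flow described
# above: a "portgroup-created" event yields an L2 segment on the fabric.
fabric_segments = {}

def on_vcenter_event(event: dict) -> None:
    if event["type"] == "portgroup-created":
        name, vlan = event["name"], event["vlan"]
        # One fabric-wide segment; no per-switch, box-by-box VLAN config.
        fabric_segments[name] = {"vlan": vlan, "member-hosts": []}

on_vcenter_event({"type": "portgroup-created", "name": "pg-web", "vlan": 41})
print(fabric_segments)
```

The same pattern covers steps C and D on the following slides: VM creation and vMotion are just other event types handled by the controller.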

Page 42: Networking Field Day 10 Presentation

C. AUTO VM LEARNING

1. vCenter creates a new VM and associates it with a port-group
2. BCF gets a vCenter notification and enables segment membership for the respective host LAG

[Diagram: same topology; new VM WEB-1 appears on Host-A's virtual switch]

Page 43: Networking Field Day 10 Presentation

D. NETWORK POLICY MIGRATION FOR VMOTION: Dynamic VLAN Provisioning

1. Initiate vMotion in vCenter
2. BCF learns the VM’s new location and updates the forwarding table

[Diagram: WEB-1 (VLAN 41) vMotions from Host-A to Host-B; the fabric provisions VLAN 41 toward Host-B]

Page 44: Networking Field Day 10 Presentation

D. NETWORK POLICY MIGRATION FOR VMOTION: Dynamic VLAN Pruning

1. Initiate vMotion in vCenter
2. BCF learns the VM’s new location and updates the forwarding table
3. BCF prunes the VLAN on Host-A (since there are no other VMs in that VLAN)

[Diagram: after WEB-1 (VLAN 41) moves to Host-B, VLAN 41 is removed from Host-A's LAG]
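The pruning rule in step 3 can be stated compactly: a VLAN leaves a host's LAG only when no remaining VM on that host still uses it. A minimal sketch of that decision (illustrative names, not BCF internals):

```python
# Dynamic VLAN pruning rule from the slide: after a VM departs, prune its
# VLAN from the source host's LAG only if no other VM there still uses it.
def vlans_to_prune(host_vms: dict, moved_vm: str) -> set:
    """host_vms maps VM name -> VLAN for VMs on the source host;
    returns the set of VLANs left with no member after moved_vm departs."""
    departed_vlan = host_vms.pop(moved_vm)
    remaining = set(host_vms.values())
    return {departed_vlan} - remaining

# WEB-1 (VLAN 41) vMotions away; APP-1 stays on VLAN 42, so 41 is pruned.
print(vlans_to_prune({"WEB-1": 41, "APP-1": 42}, "WEB-1"))
```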

Page 45: Networking Field Day 10 Presentation

E. VM-LEVEL VISIBILITY & TROUBLESHOOTING

● BCF controller provides vSphere visibility
  ○ Host name, VM name, vSwitch type, vNIC, pNIC, …
  ○ Display Host/VM/vSwitch-to-ToR connectivity
● Identify config inconsistency between vSwitch & ToR
  ○ Alert admin for possible configuration issues
● Fabric Analytics displays the vSphere events, tasks, etc.

Page 46: Networking Field Day 10 Presentation

Big Monitoring Fabric*
Next-gen Network Packet Broker (NPB)

Rob Sherwood, Mostafa Mansour

*Formerly Big Tap Monitoring Fabric

Page 47: Networking Field Day 10 Presentation

EVERY ORGANIZATION NEEDS TO MONITOR...

Tools:
• Application performance monitoring
• Network performance monitoring
• Security monitoring
• Traffic analytics / recorders
• Customer experience monitoring

Traditional NPB-based monitoring:
[Diagram: production network TAP & SPAN ports feed network packet brokers, which deliver traffic from the workloads to visibility tools (network perf monitoring, application perf monitoring, security tools, VoIP monitoring)]

Issues: ✗ Complex (box-by-box) ✗ Proprietary ✗ Expensive

Page 48: Networking Field Day 10 Presentation

BIG MONITORING FABRIC
Best Monitoring Fabric for Pervasive Security & Visibility

[Diagram: Big Tap Controller managing a 1/10/40G Ethernet switch fabric (Switch Light™ OS on Open Network Linux); filter ports receive traffic from production network TAP & SPAN ports in the brownfield network; optional NPB service nodes hang off service ports; delivery ports feed a centralized tool farm (visibility tools, network perf monitoring, application perf monitoring, security tools, VoIP monitoring)]

Page 49: Networking Field Day 10 Presentation

PERVASIVE SECURITY WITH INLINE + OUT OF BAND
Service Chaining with a Single Pane of Glass

[Diagram: Big Tap Controller managing 1/10/40G Ethernet switches at the Internet DMZ; inline tool chains (perimeter firewall, DMZ firewall, web proxy, intrusion prevention, SSL decrypt) with traffic distribution / load sharing between the untrusted zone and the trusted zone (data center / enterprise / campus); ACL-based SPAN feeds a centralized out-of-band tool farm]

Page 50: Networking Field Day 10 Presentation

BIG MONITORING FABRIC: FEATURE COMPARISONS

Big Tap features (vs. legacy NPBs):
• Filtering / aggregation / load balancing
• VM-to-VM traffic monitoring
• 1G/10G/40G (100G on roadmap)
• Event-based policy management / API
• RBAC / TACACS+
• Inter-DC tunneling
• Deeper packet matching
• Service node chaining
• Scale-out, multi-tier fabric
• Specialized functions (de-dup, packet slicing) via Service Node*
• In-line deployment mode
• Flow generation
• Inbuilt packet capture
• Analytics (host/DNS/DHCP tracking)

Leverage existing NPBs efficiently: NPBs or service nodes are optional.
[Diagram: Big Tap Controller; monitoring fabric with service nodes between the production network and the tool farm]

Page 51: Networking Field Day 10 Presentation

NEW: BMF SERVICE NODE

• Popular services: de-duplication, packet slicing/mods (more to come)
• Operational simplicity: single pane of glass for service nodes and fabric; scale-out deployment; service chaining; supports legacy service nodes
• High performance: multi-10G, DPDK optimizations
• Commodity economics: 50% savings over legacy

[Diagram: Big Monitoring Fabric Controller managing a 1/10/40/100G* Ethernet switch fabric; filter ports fed from TAP & SPAN ports on the production network (any vendor); service ports connect Big Switch Service Nodes (de-duplication, packet slicing, scale-out deployment) and 3rd-party service nodes; delivery ports feed visibility tools (network perf monitoring, application perf monitoring, security tools, VoIP monitoring)]

*100G switch: beta in Q4

Page 52: Networking Field Day 10 Presentation

CUSTOMER VALIDATION


“…We have a number of packet analysis tools and we were using G******** to gather packets, but when you want to gather packets from everywhere that price point gets too high…

So we decided to go with a white box solution and Big Tap from Big Switch to gather packets and forward them to the tools as needed. We’re using software-defined networking first in non-production, in our monitoring space, and evaluating where we want to go next. It’s done well for us. We used it through our first peak of tax year 2014, which was in early February…

-Ted Turner, Sr. Network Engineer

http://www.networkworld.com/article/2901382/application-performance-management/when-intuit-s-network-gets-taxed-it-turns-to-riverbed-performance-management-tools.html

Page 53: Networking Field Day 10 Presentation

PERVASIVE MONITORING – TAP/SPAN EVERY RACK
(actual customer diagram)

Customer Use Case: Tier-1 US Financial Services Institution
• Centralized tool farm for 120 racks
• Mix of 1GE, 10GE and 40GE TAPs, SPANs, and tools
• NPB costs were reduced by more than 60% while increasing monitoring network capacity multi-fold

[Diagram legend: filter switch, core switch, delivery switch; centralized tool farm]

Page 54: Networking Field Day 10 Presentation

[Detail view of the same Tier-1 US financial services customer diagram: filter switches, core switches, delivery switches feeding the centralized tool farm]

Page 55: Networking Field Day 10 Presentation

MOBILE / LTE NETWORK MONITORING
Enabling Advanced Monitoring for Large Japanese Mobile Carriers

[Diagram: 3G and 4G (eNodeB) RAN plus mobile core / data center; TAPs and SPANs on the S1-U, S5/S8, S12 and SGi interfaces around the S-GW and P-GW feed filter ports of a 1/10/40G Ethernet switch fabric managed by the Big Tap Controller; service ports with optional NPBs; delivery ports to tools]

• Flexible & deeper packet matching policies based on Tunnel End-point ID (TEID), GTP version, SCTP port number, etc.
• Match inner headers of encapsulated packets like VXLAN, MPLS... (up to 128 bytes)
• Replicate and load balance traffic to any tool

Customer Use Case: Tier-1 Mobile Service Providers in Japan
• Scale-out deployment: 1.5K+ TAPs, growing to 5K+
• Support for matching multiple 3G/4G/LTE protocols
• Load balance traffic to multiple tools (3rd-party/internal)
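To make the "match on TEID" policy concrete, here is a sketch of extracting a TEID from a GTPv1-U header in software. It assumes the standard GTPv1 layout (flags, message type, length, then a 4-byte TEID at offset 4); in the fabric this matching happens in switch hardware, not Python:

```python
import struct

def gtpv1_teid(gtp_header: bytes) -> int:
    """Parse flags, message type, length and TEID from a GTPv1 header."""
    flags, msg_type, length, teid = struct.unpack("!BBHI", gtp_header[:8])
    if (flags >> 5) != 1:  # top 3 bits carry the GTP version
        raise ValueError("not a GTPv1 packet")
    return teid

# Synthetic 8-byte header: version 1, G-PDU, length 16, TEID 0x00ABCDEF.
hdr = bytes([0x30, 0xFF, 0x00, 0x10]) + (0x00ABCDEF).to_bytes(4, "big")
print(hex(gtpv1_teid(hdr)))
```

A per-tunnel monitoring policy would then filter or replicate traffic whose TEID matches a subscriber or bearer of interest.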

Page 56: Networking Field Day 10 Presentation

[Repeat of the preceding mobile/LTE monitoring diagram and packet-matching bullets]

Page 57: Networking Field Day 10 Presentation

REMOTE MONITORING – TAP/SPAN EVERY LOCATION
Centralized Monitoring of Remote DCs of a Large Technology Provider

[Diagram: remote data center(s) with production TAP & SPAN feeding remote filter and tunnel ports; L2-GRE tunnels carry traffic back to the primary data center's monitoring fabric (filter, service and delivery ports, optional NPBs) and its tool farm (visibility tools, network perf monitoring, application perf monitoring, security tools, VoIP monitoring); one centralized Big Tap Controller]

• Leverage existing tools present in the primary DCs
• Support multiple tunnels per interface, per switch
• Support 1G / 10G / 40G tunnel interfaces
• Built-in tunnel management

Customer Use Case: US Advanced Communications Co.
• 10G/40G tunnels connecting 10+ datacenters
• Centralized configuration, management and monitoring

Page 58: Networking Field Day 10 Presentation

[Repeat of the preceding remote monitoring diagram and bullets]

Page 59: Networking Field Day 10 Presentation

DMZ / INLINE SECURITY
Enabling Pervasive Security for a Multinational Energy Corporation

[Diagram: Internet → untrusted zone → Big Tap inline switches (1/10/40G; Switch A, B, C) → DMZ → trusted zone; inline tools (Firewall A, B, C) with traffic distribution / load sharing; SPAN traffic; Big Tap Controllers (HA pair)]

Customer Use Case: Global Energy Corporation
• 10G/40G, line-rate, pervasive security monitoring
• Requirement across multiple datacenters

Page 60: Networking Field Day 10 Presentation

DMZ / INLINE SECURITY (detail)

[Diagram detail: Firewall A <-> Switch A, Firewall B <-> Switch B; SPAN to QRadar; Big Tap Controllers (HA pair) between the untrusted and trusted zones]

Page 61: Networking Field Day 10 Presentation

Big Monitoring Fabric (BMF)
DMZ Security Demo with Inline Mode

Page 62: Networking Field Day 10 Presentation

63

BMF INLINE SETUP
Untrusted

Trusted

INTERNET

DMZ


Page 63: Networking Field Day 10 Presentation

64

BMF INLINE SETUP
Untrusted

Trusted

INTERNET

DMZ

eth 11

eth 12

Create a chain (bump-in-the-wire)


Page 64: Networking Field Day 10 Presentation

65

BMF INLINE SETUP
Untrusted

Trusted

INTERNET

DMZ

eth 11

eth 12

eth 20

Service 1 - FW (interested in all traffic)
eth 21

Create a Service Profile (Tool)


Page 65: Networking Field Day 10 Presentation

66

BMF INLINE SETUP
Untrusted

Trusted

INTERNET

DMZ

eth 11

eth 12

eth 20

Service 1 - FW (interested in all traffic)
eth 21

Create a Service Profile (Tool)

eth 22

eth 23
Service 2 - Proxy (interested in web/HTTP traffic only)

Add more tools


Page 66: Networking Field Day 10 Presentation

67

[Diagram: Big Tap out-of-band tool farm, with filter ports, service ports, and delivery ports]

BMF INLINE SETUP
Untrusted

Trusted

INTERNET

DMZ

eth 22

eth 23
Service 2 - Proxy (interested in web/HTTP traffic only)

eth 20

Service 1 - FW (interested in all traffic)
eth 21

eth 11

eth 12

eth 24

Span

Create an ACL-based SPAN


Page 67: Networking Field Day 10 Presentation

68

[Diagram: Big Tap out-of-band tool farm, with filter ports, service ports, and delivery ports]

BMF INLINE SETUP
Untrusted

Trusted

INTERNET

DMZ

BMF CONTROLLERS (HA PAIR)

eth 22

eth 23
Service 2 - Proxy (interested in web/HTTP traffic only)

eth 20

Service 1 - FW (interested in all traffic)
eth 21

eth 11

eth 12

eth 24

Span


Page 68: Networking Field Day 10 Presentation

69

BMF INLINE SETUP
Untrusted

Trusted

INTERNET

DMZ

BMF INLINE SWITCHES

Service 1 - FW (interested in all traffic)

Service 2 - Proxy (interested in web/HTTP traffic only)

Packet with HTTP header

Packet with UDP header


Page 69: Networking Field Day 10 Presentation

70

BMF INLINE SETUP - SUMMARY
Untrusted

Trusted

INTERNET

DMZ

BMF INLINE SWITCHES

Service 1 - FW (interested in all traffic)

Service 2 - Proxy (interested in web/HTTP traffic only)

1. Create Service Profile for “WEB PROXY” (FW service is pre-configured)
2. Create a Chain and attach the Service Profiles
3. Create a SPAN
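The setup steps above can be sketched as a toy model: a chain is an ordered list of service profiles, and each profile declares the traffic it is interested in. This is a conceptual illustration only; `ServiceProfile` and `Chain` are invented names, not the Big Tap controller's API.

```python
# Conceptual model of BMF inline service chaining: traffic that does
# not match a service's filter bypasses that service and continues
# down the chain unmodified.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

Packet = Dict[str, str]  # toy packet, e.g. {"proto": "HTTP"}

@dataclass
class ServiceProfile:
    name: str
    interested: Callable[[Packet], bool]  # traffic filter

@dataclass
class Chain:
    services: List[ServiceProfile] = field(default_factory=list)

    def path(self, pkt: Packet) -> List[str]:
        """Return the services this packet actually traverses."""
        return [s.name for s in self.services if s.interested(pkt)]

# Mirror the demo: the FW sees everything, the proxy only web/HTTP.
fw = ServiceProfile("FW", lambda p: True)
proxy = ServiceProfile("Proxy", lambda p: p["proto"] == "HTTP")
chain = Chain([fw, proxy])

print(chain.path({"proto": "HTTP"}))  # ['FW', 'Proxy']
print(chain.path({"proto": "UDP"}))   # ['FW']
```

This matches the packet-walk slide: an HTTP packet traverses both services, while a UDP packet is steered only through the firewall.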


Page 70: Networking Field Day 10 Presentation

Use Cases & Business Model
Prashant Gandhi

71

*Formerly Big Tap Monitoring Fabric

Page 71: Networking Field Day 10 Presentation

72

HYPERSCALE → EVERYONE

Hyperscale Approach:
• Massive Scale
• Internal Development
• White Box Hardware
• Internal Support
• Customized Solutions

Mainstream Requirements:
• Mature, out-of-the-box solution
• Vendor-supported solution
• White-box & brite-box
• One throat to choke: HW & SW
• Modular, scale-out architecture
• Operated by existing Network Admin/NetOps team

Page 72: Networking Field Day 10 Presentation

73

BIG MONITORING FABRIC: CUSTOMER USE-CASES*

* List includes Production as well as POC customers

1. Pervasive Monitoring (Monitor Every Rack/Link): Global Software Company, US Fin. Media Co
2. Remote Monitoring (Monitor Every Location): US Adv. Comm. Supplier, US Financial Corp
3. Mobile/LTE Monitoring: Japan Mobile SP, Taiwan Mobile SP, US Mobile SP
4. Pervasive Security (DMZ / Inline Security): Global Drink Co, Global Energy Co


Page 73: Networking Field Day 10 Presentation

74

BIG CLOUD FABRIC: CUSTOMER USE-CASES*

* List includes Production as well as POC customers

1. OpenStack (IaaS Cloud, NFV)
2. VMware vSphere (Server Virtualization, Private Cloud, Hosted Cloud)
3. DC L2/L3 Fabric (Big Data/HPC, VDI)

Customers across use-cases: Japan Fin Serv, EU Cable Operator, Global Credit Card Company, Global Contract Mfg Co, Global Internet Security, EU Telco


Page 74: Networking Field Day 10 Presentation

75

BENEFITS OF NETWORK DISAGGREGATION


Operational Benefits of Big Cloud Fabric*

• 10x Faster Fabric Setup & Installation

• 75% Faster deployment of new applications

• 12x more efficient network diagnostics and troubleshooting

• >50% lower cost of network operations (CapEx & OpEx)

*ACG Research: Operational & Economic Analysis of Big Cloud Fabric compared to present mode of network operations. August 2015

Page 75: Networking Field Day 10 Presentation


INTRODUCING: ELASTIC PRICING FOR NETWORKING
Inspired by Hyperscale Pricing Models

76

Elastic SDN Pricing (HW, SW, Support):
Base Capacity (4 Racks): $199K
Excess Capacity (4 Racks): $599/switch/mo, scaling to 8 Racks
• CapEx Reduction: Pay for average use, not peak
• Price Elasticity: Pay as you grow; pay as you burst
• Business Agility: Eliminate procurement time
• Flexible Terms: Cancel anytime

Current SDN Pricing (HW, SW, Support):
$99K (Starter Kit) for 2 Racks; ~$200K for 4 Racks
• High CapEx: Pay for peak
• Long procurement cycles

HA Fabric: 10G leaf HW, 40G Spine HW, Controllers, SW, Support
2 Racks: 6 switches; 4 Racks: 10 switches; 8 Racks: 19 switches
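As a quick worked example of the elastic model, using the slide's own figures ($199K base for 4 racks, $599 per switch per month for excess, 10 switches at 4 racks and 19 at 8 racks):

```python
# Worked example: cost of bursting from 4 to 8 racks for 6 months
# under elastic pricing (figures taken from the slide).
BASE_4_RACKS = 199_000          # $, base capacity (4 racks, 10 switches)
ELASTIC_PER_SWITCH_MO = 599     # $/switch/month for excess capacity

SWITCHES_4_RACKS = 10           # from the slide's HA fabric sizing
SWITCHES_8_RACKS = 19

burst_switches = SWITCHES_8_RACKS - SWITCHES_4_RACKS  # 9 extra switches
months = 6

elastic_total = BASE_4_RACKS + burst_switches * ELASTIC_PER_SWITCH_MO * months
print(elastic_total)  # 231346: ~$231K for 6 months at 8-rack scale
```

The point of the slide is the comparison: under the up-front model the same burst would require buying the full 8-rack fabric at peak, whether or not the extra capacity is used year-round.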

Page 76: Networking Field Day 10 Presentation


INTRODUCING: ELASTIC PRICING FOR NETWORKING
Inspired by Hyperscale Pricing Models

77

Configuration* (2X capacity) | Switches (Trident-II) | Base Price (1X base capacity) | Elastic Price (1X excess capacity)
8 Racks | 16 leaf, 3 spine | $199K | $599/switch/month
12 Racks | 24 leaf, 5 spine | $299K | $599/switch/month
16 Racks | 32 leaf, 6 spine | $399K | $599/switch/month

*Includes 10G leaf and 40G spine switches, BCF Controllers (HA), SW licenses and Support

Page 77: Networking Field Day 10 Presentation

Thank You

Page 78: Networking Field Day 10 Presentation

Big Future

Rob Sherwood

79


Page 79: Networking Field Day 10 Presentation

BIG SWITCH ARCHITECTURE ADVANTAGE

80

Scale, Resiliency, Standard HW (switch, server): same as a box-by-box network
+ Software Value (Simplicity, Visibility, Automation, Analytics): better than a box-by-box network


Page 80: Networking Field Day 10 Presentation

BIG SWITCH SOFTWARE IP

81

x86 Server: SDN Controller, DPDK Apps, SDN Apps, Analytics (BSN Software IP)
Broadcom Switch: Switch Light
Supported ASICs: Apollo / Firebolt, Trident/Trident+, Trident-II, Tomahawk (*), Trident-II+ (*)

*Future

Page 81: Networking Field Day 10 Presentation

82

BIG SWITCH INNOVATION VELOCITY
Incremental Development → Comprehensive Solution
(Simplicity, Visibility, Automation, Analytics)
• BSN Neutron L2/L3 Plugin
• BSN Docker Plugin
• BCF vCenter Plugin


Page 82: Networking Field Day 10 Presentation

83

OPEN NETWORK LINUX

Hardware Platform: CPU (PowerPC, x86); misc hardware (fans, LED controllers, SFP, sensors, power supplies); packet forwarding chip (ASIC)
ONL Linux Kernel: includes extra drivers (I2C, MUX, mgmt Ethernet, etc.) and platform-specific drivers, including optics
Open Network Linux Platform Abstraction Layer
Platform-specific ASIC drivers: Broadcom SDK (others coming soon)
ASIC APIs: OFDPA, OpenNSL, SAI Interface
Applications: Indigo OpenFlow Agent, OpenRouteCache (ORC), Facebook FBOSS, Quagga, your OFDPA / OpenNSL / SAI app here
Installer: ONIE
OCP Switch Hardware: Facebook Wedge, IM Niagara, Accton 6712, 7712, Dell S6000-ON, Quanta LY6, etc.


Page 83: Networking Field Day 10 Presentation

NEW: BMF SERVICE NODE
• Popular services: de-duplication, packet slicing/mods (more to come)
• Operational simplicity: single pane of glass for service nodes and fabric; scale-out deployment; service chaining; support for legacy service nodes
• High performance: multi-10G, DPDK optimizations
• Commodity economics: 50% savings over legacy
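De-duplication is the headline service: when the same packet is tapped at several points in the production network, the tool farm should see it once. A minimal sketch of the general idea, assuming simple hash-based suppression over a sliding window (illustrative only, not the Service Node's actual algorithm):

```python
# Hash-based packet de-duplication: drop a packet if an identical
# one was seen within the last `window` packets.
import hashlib
from collections import OrderedDict

class Deduplicator:
    def __init__(self, window: int = 1024):
        self.window = window
        self.seen: "OrderedDict[str, None]" = OrderedDict()

    def forward(self, packet: bytes) -> bool:
        """Return True if the packet should be delivered to tools."""
        digest = hashlib.sha256(packet).hexdigest()
        if digest in self.seen:
            return False  # duplicate within the window: suppress it
        self.seen[digest] = None
        if len(self.seen) > self.window:
            self.seen.popitem(last=False)  # evict the oldest entry
        return True

dedup = Deduplicator()
print(dedup.forward(b"flow-1 payload"))  # True (first copy reaches tools)
print(dedup.forward(b"flow-1 payload"))  # False (same frame tapped twice)
```

A real implementation would hash only the invariant parts of the frame (e.g. excluding TTL and checksum fields changed by routing) and bound the window by time as well as count.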

84

3rd Party SERVICE NODES

BIG MONITORING FABRIC CONTROLLER

VISIBILITY TOOLS

NETWORK PERF MONITORING

APPLICATION PERF MONITORING

SECURITY TOOLS

VOIP MONITORING

PRODUCTION NETWORK

(any vendor)

[Diagram: TAP & SPAN ports feed a 1/10/40/100G* Ethernet switch fabric with filter ports, service ports, and delivery ports]

BIG SWITCH SERVICE NODE
• De-duplication
• Packet slicing
• Scale-out deployment

*100G Switch: Beta in Q4

Page 84: Networking Field Day 10 Presentation

85

NEXT-GEN SWITCHES AND HARDWARE

Coming Soon:

100G Support


Page 85: Networking Field Day 10 Presentation

TECH PREVIEW: BCF-READINESS FOR CONTAINERS
Docker Containers Hosted on BCF’s Switch Light Virtual

86

[Diagram: hierarchical control plane with Big Cloud Fabric controllers above spines S1/S2 and leaves R1L1, R1L2, R2L1, R2L2; HOST-1 and HOST-2 each run Switch Light Virtual (SL-VX) hosting containers (WEB1/DB1 and WEB2/DB2) attached via eth1]

• Switch Light Virtual hosts Docker containers
• Controller is a single pane of glass for configuration
• Visibility, analytics, and troubleshooting extended to the Docker environment
• Opportunity to fully automate physical fabric configuration for Docker environments

Controller Configuration & Visibility


Page 86: Networking Field Day 10 Presentation

87

TRY THIS DEMO AT HOME WITH BSN LABS

Both Products


Page 87: Networking Field Day 10 Presentation

Thank You