Big Data clusters and SDN-enabled clouds invite a new approach to data center networking. This session for data center architects explores the transition from traditional scale-up, chassis-based, Layer 2-centric networking to the next generation of scale-out, Layer 3 Clos-based fabrics of fixed switches.
TRANSCRIPT
Global Marketing
Architecting Data Center Networks in the era of Big Data and Cloud
Two approaches to DC Networking: The Same Old, or a Different Approach
• Distributed, scale-out Layer 3 fabrics
• Efficient fixed switches
• Open, industry-standard protocols
• TRILL, OpenFlow, VEPA, SPB
Presenter: Brad Hedlund
Presentation Notes
This is the session summary; this is basically what we’re going to be talking about. There are two fundamental approaches to data center networking moving forward in the era of Big Data and cloud. You can do it the same old way you’ve always done it: a very centralized, scale-up model of Layer 2 networks, where you scale your Big Data and cloud by building a bigger switch (big, monstrous, power-sucking chassis switches) and a flat Layer 2 environment.
Networks that suck for Cloud & Big Data
[Diagram: traditional Core/Dist/Access network topology, yielding a partitioned capacity topology for VMs]
“Data center networks are in my way” -James Hamilton, AWS
Presentation Notes
Vertically scaled Web 1.0. In this era we designed the network for the application traffic, which was largely North/South: getting clients connected to a web server and having that web server respond. Not a lot of East/West traffic between servers, and not a lot of East/West traffic between infrastructure pods. The chassis switch was the best way to scale. The same goes for Virtualization 1.0, circa 2005: we took the same client/server workloads and just virtualized them. Still a majority of North/South traffic.
Networks that Don’t suck for Cloud & Big Data
[Diagram: Spine/Leaf network topology, yielding a uniform capacity topology for VMs; all points equidistant]
Presentation Notes
Here’s the alternative approach: horizontal scaling. We eschew the age-old premise of a 2-switch centralized Layer 2 domain. Here we have a capacity topology that is flat and uniform. All points are equidistant: from any point in the network to any other point, uniform bandwidth and uniform latency. The result is a network that doesn’t suck for cloud and Big Data. We want to be able to place our workloads anywhere in the topology without compromise. We want the cloud orchestration tools to decide workload placement, because they will make the best decision. We don’t want the users to decide.
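The equidistance property can be sketched directly: in a two-tier leaf-spine fabric, every leaf-to-leaf path crosses exactly one spine, so all paths are the same length and the ECMP fan-out equals the spine count. A minimal illustrative sketch (hypothetical names, arbitrary spine count):

```python
# Sketch: every leaf-to-leaf path in a leaf-spine fabric is exactly two
# hops (leaf -> spine -> leaf), so bandwidth and latency are uniform and
# ECMP spreads traffic across one equal-cost path per spine switch.

def leaf_spine_paths(num_spines):
    """Enumerate the equal-cost two-hop paths between two leaves."""
    return [("leaf1->spine%d" % s, "spine%d->leaf2" % s)
            for s in range(1, num_spines + 1)]

paths = leaf_spine_paths(4)
print(len(paths))                       # one ECMP path per spine
print(all(len(p) == 2 for p in paths))  # every path is the same length
```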
Big Data
• Inverse Virtualization
• Workloads orchestrated like cattle
• L2 or L3 network. Does it matter?
[Diagram: Hadoop racks (Name Node, Job Tracker, Secondary NN, Clients, data nodes) connected through a fabric of switches to each other and the World; every node simply needs TCP connectivity]
Presentation Notes
Big Data clusters such as Hadoop are a perfect example of Inverse Virtualization. The cluster of servers collectively work together like one logical server, aggregating all of the CPU cores and disk drives to serve one application. The workload on each individual node is rather insignificant to the operation of the entire cluster. Unlike standard virtualization, there is no concept of workload mobility between nodes. If a node dies, we just restart the work on another node. Or we might start multiple copies of the same task on different nodes and wait for the first one to complete. We treat the tasks running on each node like one large herd of cattle, as opposed to individual precious pets. You can build an L2 or L3 network for a Big Data cluster. It doesn’t really matter. The nodes just need TCP connectivity.
Basic requirements of Cloud (IaaS)
• Secure, Scalable Multi Tenancy
• Location independence
• On Demand virtual networks
[Diagram: a physical network of switches underlying tenant virtual networks of VMs, a firewall (FW), and a load balancer (LB), connected to the World]
Presentation Notes
These are some of the basic requirements we want from a cloud, and that our network architecture needs to provide. We have the physical network and we have the virtual network. Tenants do not see or care about the physical network; they care about their application architecture and the logical network segments that glue the application stack together. There are two prevailing approaches to achieving these requirements:
-Blend the virtual and physical networks
-Abstract the virtual network from the physical
Global Marketing
Blend the Virtual and Physical Networks
• Tenant subnet = Network VLAN
[Diagram: VLANs 10 and 20 spanning the physical switches and the vSwitches of each host, carrying VM traffic]
Presentation Notes
In this model the physical network is responsible for:
-VM forwarding (MAC tables)
-Segmentation and isolation (VLANs)
-Address resolution (ARP)
The physical network is in the way. I need resources from it before I can proceed: resources in the form of an available VLAN and available forwarding table entries, and I need to provision these resources in the physical network for each new service or tenant. The cloud networking tools need to provision both the physical and virtual network, adding complexity.
In this model we abstract the virtual network from the physical network, just like we’ve abstracted the virtual server from the physical server: through encapsulation. Encapsulation is the fundamental enabler of virtualization. Just like a virtual machine is encapsulated into a file, we encapsulate the virtual network traffic into an IP header as it traverses the physical network from source host to destination host. This model is referred to as “Network Virtualization”, or “Overlays”. This is real and available today; companies such as Rackspace have deployed this in production. In this model the physical network (underlay) provides an I/O fabric for the overlay. Setting up the network is a one-time operation. From that point on, the network is out of the way. I don’t need multi-tenancy resources from the network (no VLANs). I don’t need forwarding table entries for every VM instance (no MAC forwarding). I don’t need to provision the physical network for every new service or tenant. The network orchestration tools only need to provision one network, the virtual network. This works to keep the orchestration logic and its implementation simple.
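The encapsulation idea can be sketched in a few lines. This is an illustrative toy model, not the actual VXLAN wire format; the field names are invented for clarity:

```python
# Illustrative sketch of overlay encapsulation (VXLAN-style MAC-in-IP,
# simplified): the tenant's L2 frame is wrapped in a host-to-host IP
# header, so the underlay forwards on host addresses only and never
# needs per-VM MAC entries or VLANs.

def encapsulate(tenant_frame, vni, src_host_ip, dst_host_ip):
    """Wrap a tenant Ethernet frame for transport across the IP underlay."""
    return {
        "outer_src_ip": src_host_ip,  # source hypervisor endpoint
        "outer_dst_ip": dst_host_ip,  # destination hypervisor endpoint
        "vni": vni,                   # tenant segment ID (replaces a VLAN)
        "payload": tenant_frame,      # original VM-to-VM frame, untouched
    }

pkt = encapsulate(b"tenant-frame", vni=5001,
                  src_host_ip="10.0.0.1", dst_host_ip="10.0.0.2")
print(pkt["outer_dst_ip"], pkt["vni"])  # the underlay sees only host IPs
```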
Scale-up centralized Layer 2
• 2-post Rooted Architecture
• Centralized L2/L3
• L2/L3/ARP table scale?
• Scale w/ Bigger Boxes
• Precious Pets
• VLAN Provisioning?
• Broadcasts
[Diagram: a 2-post rooted design; a centralized L2/L3 chassis pair above hosts running vSwitches and VMs]
Presentation Notes
We take very good care of our pets. Money is no object. We buy our pets all the equipment they need to stay healthy: lots of redundant fabric modules, power supplies, supervisor engines, fans, even air filters and protective doors. When they get sick or die it’s a really big deal, causing lots of sadness and heartache. Pets are irreplaceable; there are no substitutes. Remember when Nurse Focker tried to replace the real Jinxy with a fake Jinxy? It didn’t work, did it? No. Firewall/LB inline services can offload ARP responsibilities from switches that have otherwise insufficient ARP table sizes, but that’s a precarious position to take.
Scale-out Layer 3 Leaf/Spine Fabric
• Mesh from Leaf to Spine
• OSPF, ISIS, BGP, TRILL
• ToR w/ 16 uplinks (ECMP)
[Diagram: fabric variants of 768, 3072, and 6144 server ports]
• Non-blocking Spine
• 3:1 @ ToR
• 128 port 2RU Spine
Presentation Notes
1980 server ports is based on 45 ToRs connecting to a pair of 384-port chassis switches. We’ll start by saying goodbye to our precious pets. Sorry Jinxy! Nothing personal, it’s just business. Precious pets belong at home; they don’t belong in our cloud or our Big Data. Instead of treating our switches like precious pets, we can now begin to treat them more like cattle. We can change the way we manage and care for each individual switch in the data center, right-sizing the amount of care required for each switch. If one gets sick and dies, nobody really gets that upset; we just replace the dead switch with a new one. The port count of the Spine switch determines the max # of ToRs. 6144 of 1G = 128 * S55 + 4 * Z9000. With the 6144-port fabric design we have 2048 cables from ToR to Spine; a chassis design would also have the same # of cables. If I VLT every pair of ToRs, we have a 5623-port fabric.
(2) 768-port chassis (Nexus 7018, ~$2M): 4092 ports
(2) 384-port chassis (Nexus 7010, Arista 7508): 1980 ports
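The fabric sizes quoted above can be checked back-of-envelope. Assumptions taken from the slide: 128-port spine switches, and each ToR with 16 x 10G uplinks running 3:1 oversubscribed (48 x 10G server ports per ToR):

```python
# Back-of-envelope check of the leaf/spine port math on this slide.
spine_ports = 128          # ports per spine switch
tor_uplinks = 16           # one uplink to each of 16 spines
oversub = 3                # 3:1 oversubscription at the ToR

max_tors = spine_ports     # each spine port terminates one ToR uplink
server_ports = max_tors * tor_uplinks * oversub
print(max_tors)            # -> 128 ToRs
print(server_ports)        # -> 6144 server ports
```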
Uniform fabric for Cloud & Big Data
[Diagram: a 6144-port uniform fabric hosting virtualized hosts (vSwitches, VMs) and Hadoop racks (Name Node, Job Tracker, Secondary NN, Clients, data nodes), plus storage access: Block I/O, NAS, Object, Hadoop, Database]
Presentation Notes
Storage: Block I/O, File, Object. Database: NoSQL.
-Ceph
-Nexenta
-Dell DX
-OpenStack Swift
-Cassandra
-HBase
Run Hadoop against the cloud storage data: the VM disk image files, the databases, the objects and files. Feed that intelligence back into scheduling algorithms or billing. Make better decisions for your cloud and customers. Provide new Big Data based applications for the cloud tenants. Intelligence as a Service? These opportunities become easier to realize when you have your Big Data and cloud on the same scale-out, all-points-equidistant fabric, where there are no bottlenecks between the two environments.
6144 server ports @ 1G or 10G: 1G = (4) Z9000 + (128) S55; 10G = (16) Z9000 + (128) S4810
(2) 768-port chassis (Nexus 7018, ~$2M): 4092 10G ports (not factoring the F2 16K MAC table)
(2) 384-port chassis (Nexus 7010, Arista 7508): 1980 10G ports (not factoring Arista’s 8K ARP table)
Attaching Services & North/South
[Diagram: firewalls and load balancers attached at the fabric edge, with x86 gateways for North/South traffic to the World, alongside virtualized racks (vSwitches, VMs) and Hadoop racks]
Generic Logical Architecture 1
• Overlay-based L2
• Physical/Static FW
[Diagram: fabric DC router connecting to the World; per-tenant (Green Co., Orange Co.) FW and LB behind L3 NAT; L2 segments for VMs and Big Data]
Generic Logical Architecture 2
• Overlay-based L2
• Virtual/Mobile FW
• Overlay Gateway
[Diagram: as Architecture 1, adding a Pub DMZ; virtual/mobile firewalls and an overlay gateway behind the fabric DC router]
Generic Logical Architecture 3
• No Overlays
• TRILL-based L2
• Virtual/Mobile FW
[Diagram: as Architecture 2, but with TRILL-based L2 in the fabric instead of overlays]
Density: Fixed vs. Chassis
[Chart: 10G ports per RU at line rate (L3), Chassis vs. Fixed, 2008-2014]
Presentation Notes
The density of fixed switches doubles every two years: Moore’s law for the network. The density of chassis switches is improving too, but not at the same rate.
Chassis density (10G per RU):
2008 - 3 (Nexus 7010 w/ 64 ports @ 21RU, M1-32 linecard)
2010 - 34 (Arista 7508 w/ 384 ports @ 11RU)
2012 - no change; Arista 7508 still most dense
2014 - anticipated 96 ports per slot w/ current chassis
Fixed density (ports per switch):
2008 - 24 (Arista 7124, Force10 S2410) @ 1RU
2010 - 48 (Arista 7148) @ 1RU
2012 - 64 (Broadcom Trident) @ 1RU
2014 - anticipated 128 ports @ 1RU
Chassis show a slow rate of innovation: the Nexus M1 to F2 linecard transition took 4 years.
Power: Fixed vs. Chassis
[Chart: max watts per line-rate 10G port (L3), Chassis vs. Fixed, 2010-2014]
Presentation Notes
*Based on the most dense platform for that year.
Chassis power:
2008 - Nexus 7010 w/ 8 x M1-32, power calc = 8400W max (64 ports line rate) = 131W per line-rate port
2010 - Arista 7508 = 6600W max / 384 ports = 17W
2012 - Nexus 7009 w/ 7 x F2 = 4595W max / 336 ports = 13.6W
2014 - anticipated 10.2W (based on a 25% decrease from the prior 2 years)
Fixed power:
2008 - Arista 7124SX = 210W / 24 ports = 8.75W per line-rate port (single chip)
2010 - Arista 7148SX = 760W / 48 ports = 15.8W per line-rate port (multi chip)
2012 - Broadcom Trident+ platforms = 789W (Dell Force10 Z9000) / 128 line-rate ports = 6.1W (multi chip)
2014 - anticipated 2.4W (based on a 60% decrease from the prior 2 years)
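The per-port figures can be re-derived from the platform data above (wattages and port counts as quoted in the notes; small rounding differences are possible):

```python
# Re-deriving watts per line-rate 10G port: max platform watts divided
# by line-rate port count, for the platforms quoted in the notes.
platforms = {
    "Nexus 7010 (2008)":  (8400, 64),
    "Arista 7508 (2010)": (6600, 384),
    "Nexus 7009 (2012)":  (4595, 336),
    "Z9000 (2012)":       (789, 128),
}
for name, (watts, ports) in platforms.items():
    print(name, round(watts / ports, 1), "W/port")
```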
• Non-blocking @ top tiers • Default route @ ToR & Leaf
• Leaf+ToR mesh groups • ~8usec worst case
[Diagram: default routes (0/0) at the lower tiers; /26 server subnets advertised upward]
Presentation Notes
“Mind the Gap” = cable distances of 100-150m with SR optics. Non-blocking Leaf & Spine; oversubscription @ ToR. All points equidistant from a bandwidth perspective, with varying latency: 1usec (best) vs. 8usec (worst) across 24K servers. Not bad! The ToR carries just a default route pointing to the Leaf; the Leaf carries its ToR subset’s specific subnets plus a default pointing to the Spine; the Spine carries all specific subnets.
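The route scoping described in these notes can be sketched as follows (the prefixes and next-hop names are hypothetical, purely for illustration):

```python
# Route scoping in the fabric: ToRs carry only a default, Leafs carry
# their ToRs' specific subnets plus a default to the Spine, and Spines
# carry every specific subnet.
tor_rib   = {"0.0.0.0/0": "leaf"}
leaf_rib  = {"10.1.1.0/26": "tor1",
             "10.1.2.0/26": "tor2",
             "0.0.0.0/0":   "spine"}
spine_rib = {"10.1.1.0/26": "leaf1",
             "10.1.2.0/26": "leaf1",
             "10.2.1.0/26": "leaf2"}

def lookup(rib, prefix):
    # Longest-prefix match reduced to: exact subnet if present, else default.
    return rib.get(prefix, rib.get("0.0.0.0/0"))

print(lookup(tor_rib, "10.2.1.0/26"))    # -> leaf   (default route only)
print(lookup(spine_rib, "10.2.1.0/26"))  # -> leaf2  (specific subnet)
```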
The case for 40G QSFP switch ports
[Diagram: one 40G QSFP port ($1,800) vs. four 10G SFP+ ports ($1K each)]
32 ToRs: $512K (10G SFP+ uplinks) vs. $230K (40G QSFP uplinks)
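Under the prices shown on the slide ($1K per 10G SFP+ port, $1,800 per 40G QSFP port), the 32-ToR totals work out as follows, assuming each ToR needs 160G of uplink bandwidth (16 x 10G vs. 4 x 40G):

```python
# Checking the slide's uplink cost comparison for 32 ToRs.
tors = 32
sfp_total  = tors * 16 * 1000   # 16 x 10G SFP+ uplinks per ToR @ $1K
qsfp_total = tors * 4 * 1800    # 4 x 40G QSFP uplinks per ToR @ $1,800
print(sfp_total, qsfp_total)    # -> 512000 230400
```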
Comparing Fabric efficiencies of Fixed vs Chassis designs
Brad Hedlund May 2012
Total Fabric Power: Chassis vs. Fixed
[Chart: total fabric power (KW), non-blocking, Chassis vs. Fixed, at fabric sizes of 384/2048/4096/8192 ports]
Presentation Notes
The chart above shows that fully constructed non-blocking fabrics built entirely of fixed switches are more power efficient than the typical design likely proposed by a chassis vendor. As the fabric grows, the efficiency gap widens. Given that we already know fixed switches are more power efficient than chassis switches, this data should make sense.
Total Fabric RU: Chassis vs. Fixed
[Chart: total fabric RU, non-blocking, Chassis vs. Fixed, at fabric sizes of 384/2048/4096/8192 ports]
Presentation Notes
Again, the chart above shows a very similar pattern with space efficiency. A fully constructed non-blocking fabric of all fixed switches consumes less data center space than the typical design of chassis switches aggregating fixed switches.
8192 non-blocking Fabric
• 384RU • 153.6KW • 8192 ISL • 192 switches
Presentation Notes
(64) Leaf fixed switches, (128) Spine fixed switches interconnected with 10G providing 8192 line rate 10G access ports at the Leaf layer, and 8192 inter-switch links. (192) switches total, each with a max rated power consumption of 800W.
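The totals above can be verified arithmetically (assuming, per the notes, 2RU and 800W max per fixed switch):

```python
# Verifying the fixed-switch fabric totals: 64 leaf + 128 spine
# switches, each a 2RU, 800W-max fixed switch.
switches = 64 + 128
print(switches)               # -> 192 switches
print(switches * 2)           # -> 384 RU
print(switches * 800 / 1000)  # -> 153.6 KW
```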
8192 non-blocking Fabric
• 608RU • 216.3KW • 8192 ISL • 288 switches
Presentation Notes
Modified Arista 7508 max power down to 5KW, as each will have 6 linecards (not 8). (256) Leaf fixed switches, each at 220W max power and 1RU, with 32 x 10G inter-switch links and 32 x 10G non-blocking fabric access ports. (32) Arista 7508 Spine chassis, each with (6) 48-port 10G linecards for uniform ECMP. Because each 11RU chassis switch is populated with 6 linecards of 8 possible, I’ve factored the power down from the documented max of 6600W to 5000W max. (288) total switches.
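The chassis-design totals can be verified the same way, using the factored-down 5000W spine figure from the notes:

```python
# Verifying the chassis-design totals: 256 x 1RU/220W leaf switches
# plus 32 x 11RU Arista 7508 spines, factored down to 5000W max each
# because only 6 of 8 linecard slots are populated.
ru = 256 * 1 + 32 * 11
kw = (256 * 220 + 32 * 5000) / 1000
print(ru, kw)   # -> 608 216.32
```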
4096 non-blocking Fabric
• 192RU • 76.8KW • 4096 ISL • 96 switches
Presentation Notes
(64) Leaf fixed switches, (32) Spine fixed switches interconnected with 10G providing 4096 line rate 10G access ports at the Leaf layer, and 4096 inter-switch links. (96) switches total, each with a max rated power consumption of 800W.
4096 non-blocking Fabric
• 304RU • 108.1KW • 4096 ISL • 144 switches
Presentation Notes
Modified Arista 7508 max power down to 5KW, as each will have 6 linecards (not 8). (128) Leaf fixed switches, each at 220W max power and 1RU, with 32 x 10G inter-switch links and 32 x 10G non-blocking fabric access ports. (16) Arista 7508 Spine chassis, each with (6) 48-port 10G linecards for uniform ECMP. Because each 11RU chassis switch is populated with 6 linecards of 8 possible, I’ve factored the power down from the documented max of 6600W to 5000W max. (144) total switches.
2048 non-blocking Fabric
• 96RU • 38.4KW • 512 ISL • 48 switches
Presentation Notes
(32) Leaf fixed switches, (16) Spine fixed switches interconnected with 40G providing 2048 line rate 10G access ports at the Leaf layer, and 512 inter-switch links. (48) switches total, each with a max rated power consumption of 800W.
2048 non-blocking Fabric
• 152RU • 54KW • 2048 ISL • 72 switches
Presentation Notes
(8) Arista 7508 Spine (64) Arista 7050S-64 Leaf Modified Arista 7508 max power down to 5KW, as each will have 6 linecards (not 8). (64) Leaf fixed switches each at 220W max power and 1RU, with 32 x 10G inter-switch links, and 32 x 10G non-blocking fabric access ports. (8) Arista 7508 Spine chassis each with (6) 48-port 10G linecards for uniform ECMP. Because each 11RU chassis switch is populated with 6 linecards of 8 possible, I’ve factored down the power from the documented max of 6600W, down to 5000W max. (72) total switches.
384 non-blocking Fabric
• 20RU • 8KW • 96 ISL • 10 switches
Presentation Notes
(6) Leaf fixed switches, (4) Spine fixed switches interconnected with 40G and providing 384 line rate 10G access ports at the Leaf layer, and 96 inter-switch links. (10) switches total, each with a max rated power consumption of 800W.
384 non-blocking Fabric
• 26RU • 18KW • 384 ISL • 14 switches
Presentation Notes
Arista 7504 max power = 2500W; Arista 7050S-64 max power = 220W. (12) Leaf fixed switches, (2) Spine chassis switches interconnected with 10G. Each Leaf switch at 220W max power has 32 x 10G uplinks and 32 x 10G downlinks, for 384 line-rate access ports and 384 inter-switch links (ISL). The (2) chassis switches are 192 x 10G port Arista 7504, each rated at 7RU and 2500W max power.
(2)
(4)
256 non-blocking Fabric
• 12RU • 4.8KW • 64 ISL • 6 switches
256 non-blocking Fabric
• 22RU • 6.7KW • 256 ISL • 10 switches
(2)
(8)
Presentation Notes
Arista 7050S-64 = 220W max Arista 7504 = 2500W max
Total Fabric Power: Chassis vs. Fixed
[Chart: total fabric power (KW), 3:1 oversubscribed, Chassis vs. Fixed, at fabric sizes of 768/1536/3072/6144 ports]
Presentation Notes
The chart above shows that fabrics of up to 6144 ports at 3:1 oversubscription, built entirely of fixed switches, are more power efficient than the typical design likely proposed by even the most power-efficient chassis vendor.
Total Fabric RU: Chassis vs. Fixed
[Chart: total fabric RU, 3:1 oversubscribed, Chassis vs. Fixed, at fabric sizes of 768/1536/3072/6144 ports]
Presentation Notes
The chart above shows that fabrics of up to 6144 ports at 3:1 oversubscription, built entirely of fixed switches, are more space efficient than the typical design likely proposed by even the most space-efficient chassis vendor.
(2)
(16)
768 @ 3:1 oversubscribed Fabric
• 20RU • 7.2KW • 64 ISL • 18 switches
Presentation Notes
S4810 max power = 350W Z9000 max power = 800W
768 @ 3:1 oversubscribed Fabric
• 30RU • 7.5KW • 256 ISL • 18 switches
(2)
(16)
Presentation Notes
Arista 7050S-64 = 220W max Arista 7504 = 2500W max
(2) (4)
1536 @ 3:1 oversubscribed Fabric
(32)
• 40RU • 14.4KW • 128 ISL • 36 switches
Presentation Notes
S4810 max power = 350W Z9000 max power = 800W
1536 @ 3:1 oversubscribed Fabric
• 54RU • 17KW • 512 ISL • 34 switches
Presentation Notes
Arista 7508 max power = 6600W; Arista 7050S-64 max power = 220W. (32) Leaf fixed switches, (2) Spine chassis switches interconnected with 10G. Each Leaf switch at 220W max power has 16 x 10G uplinks and 48 x 10G downlinks, for 1536 access ports oversubscribed 3:1 at the Leaf and 512 inter-switch links (ISL). The (2) chassis switches are Arista 7508, each rated at 11RU and 6600W max power. Each chassis switch has (6) of (8) possible linecards, (256) of (384) possible ports. Therefore, max power for each chassis switch has been factored down to 5000W.
(2) (8)
3072 @ 3:1 oversubscribed Fabric
(64)
• 80RU • 28.8KW • 1024 ISL • 72 switches
Presentation Notes
S4810 max power = 350W Z9000 max power = 800W
3072 @ 3:1 oversubscribed Fabric
• 108RU • 34KW • 1024 ISL • 68 switches
Presentation Notes
Arista 7508 max power = 6600W; Arista 7050S-64 max power = 220W. (64) Leaf fixed switches, (4) Spine chassis switches interconnected with 10G. Each Leaf switch at 220W max power has 16 x 10G uplinks and 48 x 10G downlinks, for 3072 access ports oversubscribed 3:1 at the Leaf and 1024 inter-switch links (ISL). The (4) chassis switches are Arista 7508, each rated at 11RU and 6600W max power. Each chassis switch has (6) of (8) possible linecards, (256) of (384) possible ports. Therefore, max power for each chassis switch has been factored down to 5000W.
(2) (8)
6144 @ 3:1 oversubscribed Fabric
(16)
(128)
• 160RU • 57.6KW • 2048 ISL • 144 switches
6144 @ 3:1 oversubscribed Fabric
• 216RU • 68KW • 2048 ISL • 136 switches
Presentation Notes
(8) Arista 7508 Spine (128) Arista 7050S-64 Leaf Modified Arista 7508 max power down to 5KW, as each will have 6 linecards (not 8). (128) Leaf fixed switches each at 220W max power and 1RU, with 16 x 10G inter-switch links, and 48 x 10G fabric access ports oversubscribed 3:1 at the Leaf. (8) Arista 7508 Spine chassis each with (6) 48-port 10G linecards for uniform ECMP. Because each 11RU chassis switch is populated with 6 linecards of 8 possible, the power is factored down from the documented max of 6600W, down to 5000W max. (136) total switches.
Total Fabric Power: Chassis vs. Fixed
[Chart: total fabric power (KW), 3:1 oversubscribed massive fabrics, Chassis vs. Fixed, at fabric sizes of 12288/24576/49152/98304 ports]
Presentation Notes
The chart above shows that massive fabrics of up to 100K ports at 3:1 oversubscription, built entirely of fixed switches, are almost identical in power efficiency to the typical design likely proposed by even the most power-efficient chassis vendor. At 12K fabric sizes, chassis designs have slightly better power efficiency; at 24K+ fabric sizes the difference is minuscule.
Total Fabric RU: Chassis vs. Fixed
[Chart: total fabric RU, 3:1 oversubscribed massive fabrics, Chassis vs. Fixed, at fabric sizes of 12288/24576/49152/98304 ports]
Presentation Notes
The chart above shows that massive fabrics of up to 100K ports at 3:1 oversubscription, built entirely of fixed switches, are much more space efficient than the typical design likely proposed by even the most dense chassis vendor.
(16) Arista 7508 Spine (256) Arista 7050S-64 Leaf Modified Arista 7508 max power down to 5KW, as each will have 6 linecards (not 8). (256) Leaf fixed switches each at 220W max power and 1RU, with 16 x 10G inter-switch links, and 48 x 10G fabric access ports oversubscribed 3:1 at the Leaf. (16) Arista 7508 Spine chassis each with (6) 48-port 10G linecards for uniform ECMP. Because each 11RU chassis switch is populated with 6 linecards of 8 possible, the power is factored down from the documented max of 6600W, down to 5000W max. (272) total switches.
(32) Arista 7508 Spine (256) Arista 7050S-64 Leaf (512) Arista 7050S-64 ToR Modified Arista 7508 max power down to 5KW, as each will have 6 linecards (not 8). (512) ToR fixed switches each at 220W max power and 1RU, with 16 x 10G inter-switch links, and 48 x 10G fabric access ports oversubscribed 3:1 at the ToR. (256) Leaf Fixed switches. (32) Arista 7508 Spine chassis each with (6) 48-port 10G linecards for uniform ECMP. Because each 11RU chassis switch is populated with 6 linecards of 8 possible, the power is factored down from the documented max of 6600W, down to 5000W max. (800) total switches.
49,152 @ 3:1 oversubscribed Fabric
(256)
(1024)
• 1792RU • 665.6KW • 20480 ISL • 1408 switches
(128)
Presentation Notes
(384) Z9000 @ 2RU, 800W (1024) S4810 @1RU, 350W 40G between ToR & Leaf
49,152 @ 3:1 oversubscribed Fabric
• 2240RU • 657.9KW • 32,768 ISL • 1600 switches
(64)
(1024)
(512)
98,304 @ 3:1 oversubscribed Fabric
(512)
(2048)
• 3584RU • 1331.2KW • 40960 ISL • 2816 switches
(256)
Presentation Notes
(768) Z9000 @ 2RU, 800W (2048) S4810 @1RU, 350W 40G between ToR & Leaf