openstack summit 2015

L.T.H.

SDN, without SDN

Andrew Yongjoon Kong


Page 1: Openstack summit 2015

L.T.H.

SDN, without SDN

Andrew Yongjoon Kong

Page 2: Openstack summit 2015

2nd largest search and portal

Page 3: Openstack summit 2015

The Peaceful Operation

When we're running out of resources (CPU, memory, disk), we just add new (or additional) resources to the existing pool.

[Diagram: the System team and Network team feed new servers in through the CMDB API; nkaos (the baremetal provisioner) turns them into provisioned servers; our team manages them with a Chef server; NSDB keeps the central monitoring tree of switches, routers, and VLANs.]

Page 4: Openstack summit 2015

The Growth(I)

VM creation speed is accelerating

Page 5: Openstack summit 2015

The Growth(II)

Users spend more than 45M krane ($45,000) per month

– and this keeps increasing.

1 krane = 1 Won ($0.001)

• Pricing is similar to AWS EC2

• Network/disk usage is not included

Page 6: Openstack summit 2015

The Growth(III)

Growth is accelerating

- The number of engineers is growing.

- New pilot services and experiments are growing.

- Resource depletion is accelerating → this simply creates more work for the resource-management teams.

[Diagram: the same provisioning pipeline (System team, Network team, CMDB API, NKAOS, Chef server, our team, NSDB central monitoring tree), now with many more new servers flowing through it.]

Page 7: Openstack summit 2015

The Growth(IV)

Scale: the only driving force that disrupts everything.

[Diagram: System team, Network team, CMDB API, NKAOS, our team, NSDB central monitoring tree, and now multiple Chef servers, each fronting its own flood of new servers.]

Page 8: Openstack summit 2015


The Growth – Lessons learned

Growth doesn't come alone

– Infra growth includes both scale-up and scale-out.

– Scale-up includes:

• Adding servers, storage, switches

• Adding more power facilities to supply juice fluently

• This is not that difficult.

– Scale-out includes:

• Adding new datacenters and new availability zones

• This is a nightmare!

This leads to radical changes across everything

– The way of preparing and provisioning

– The way of monitoring, logging, and developing

[Diagram: rack after rack of new servers, each group behind its own Chef server, repeated many times over, illustrating the scale-out explosion.]

Page 9: Openstack summit 2015

The Growth – Lessons learned, OpenStack (2)

Resources for OpenStack eventually become exhausted

– CPU, memory, and storage always experience shortages.

– The shortages are skewed: sometimes CPU is depleted, sometimes storage.

• Most resources can be re-balanced:

• you can migrate clients' VMs (image, volume).

– IPs are also a resource.

• Far more limited than we expected:

– The number of IPs is limited.

– The location of an IP is also limited.

• Managing these resources is becoming a tougher issue.

Page 10: Openstack summit 2015


Neutron Network

We've been using the provider network (VLAN)

– ML2 plugin

– Moved from OVS → Linux bridge

– The network team plans/sets up the networks (the VLANs, IP subnets, gateways)

– Availability zones / Neutron networks are mapped onto those physical networks

[Diagram: each zone (a.k.a. rack) gets its own VLAN (Zone1/VLAN.1, Zone2/VLAN.2, Zone3/VLAN.3); on the KVM hypervisor, the VM's eth0 attaches via tapxxx to Linux bridge brqxxx, which uplinks through the tagged sub-interface eth1.1 on eth1.]
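A provider-VLAN setup like the one above is normally expressed in the ML2 plugin configuration. A minimal sketch, assuming the linuxbridge mechanism driver; the physical network name, interface, and VLAN range are illustrative, not from the talk:

```ini
; /etc/neutron/plugins/ml2/ml2_conf.ini  (illustrative values)
[ml2]
type_drivers = vlan
tenant_network_types = vlan
mechanism_drivers = linuxbridge

[ml2_type_vlan]
; physnet1 carries the per-rack VLANs (e.g. VLAN.1-VLAN.3 above)
network_vlan_ranges = physnet1:1:3

[linux_bridge]
; map the physical network onto the hypervisor uplink
physical_interface_mappings = physnet1:eth1
```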

Page 11: Openstack summit 2015


Resource Imbalance

After running multiple availability zones

– We naturally experience resource imbalance between zones.

– Filter scheduling won't help.

– Migration is the proper solution (adding extra resources is even better).

[Diagram: Zone1 (VLAN.1) has 1 CPU and 1 storage left but no IP; Zone2 (VLAN.2) has 1 IP left but no CPU and no storage; Zone3 (VLAN.3). The request "Hey OpenStack, create 1 VM (1 CPU, 1 IP, 1 storage)" fails at the scheduler because no single zone can satisfy all three.]

Page 12: Openstack summit 2015

Resource Imbalance & Remedies

Develop a network-count filter

– Check the remaining IP count for each zone; treat the IP count as a resource.

– Select the zone which has more IPs left.

– But we then hit harder issues:

• Setting up two or more VLANs (with trunking) on the same Ethernet interface

• leads to heterogeneous policies, which cause complex configurations.

• And still, migrating a VM across zones with its IP unchanged is not possible.

[Diagram: the Zone1 hypervisor now trunks VLAN.1 and VLAN.10 on eth1; vm1 attaches to bridge brqxxx via eth1.1, while another VM attaches to bridge brqYYY via eth1.10.]
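The network-count filter idea above can be sketched in a few lines: drop zones with no free IPs, then prefer the zone with the most left. This is an illustrative toy, not the actual Nova scheduler filter the team wrote; the class and method names are assumptions:

```python
# Toy sketch of the "network count" filter: treat each zone's remaining IP
# count as a schedulable resource. Names are illustrative, not production code.
class NetworkCountFilter:
    def filter_zones(self, free_ips_by_zone):
        """Drop zones with no free IPs, then order the rest by free IP count."""
        candidates = {z: n for z, n in free_ips_by_zone.items() if n > 0}
        return sorted(candidates, key=candidates.get, reverse=True)

f = NetworkCountFilter()
print(f.filter_zones({"zone1": 0, "zone2": 12, "zone3": 3}))  # ['zone2', 'zone3']
```

As the slide notes, this only postpones the problem: it picks a feasible zone but cannot move an existing VM's IP between zones.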

Page 13: Openstack summit 2015

Rationale

Rethinking about Connectivity

[Diagram: two hosts, each a stack of Application → TCP → IPv4 → Ethernet driver. Within the same subnet, the client's ARP table maps the destination IP directly to its MAC inside one broadcast domain. Across different subnets, the client's ARP table maps the router's IP to the router's MAC, and the router acts as the broadcast termination between the two domains.]

Page 14: Openstack summit 2015

Rationale

Rethinking about Connectivity (Overlay)

– It solves the remote link-layer separation issue.

– It still has issues with IP management.

[Diagram: the same two stacks, now joined by a tunnel that stretches a single broadcast domain across the overlay.]

Page 15: Openstack summit 2015

Remedy, Version 2.0

We need to think about these requirements:

– IP movement inter-rack, inter-zone, inter-DC(?)

– IP resource imbalance

– Fault resilience

– Dynamically checking the status of the network

– Simple IP resource planning and management

Page 16: Openstack summit 2015

Router

We see the router as the best candidate

– it dynamically detects and exchanges changes.

– it is highly distributed.

– it has HA.

– the issue is that, most of the time, routing is done over ranges (prefixes), not single addresses.

Page 17: Openstack summit 2015

Finally, we come to routing only the IP

Generally known as a /32 network.

– No L2 (link) consideration is needed anymore (no subnet).

– With a dynamic routing protocol, it can move anywhere.

– Simple IP planning (just think of IP ranges).

– It's a very atomic resource: the VM keeps its IP after migrating across zones.

10.0.0.1/32, i.e. IP 10.0.0.1 with netmask 255.255.255.255
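The /32 notation above can be checked with Python's standard ipaddress module: a /32 "network" contains exactly one address, so there is no subnet left to plan around.

```python
import ipaddress

# A /32 network is a single host: netmask all-ones, exactly one address.
host = ipaddress.ip_network("10.0.0.1/32")
print(host.netmask)        # 255.255.255.255
print(host.num_addresses)  # 1
```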

Page 18: Openstack summit 2015

How it's set up

1. Install the nova/neutron agents.

2. Create a Neutron network (name: freenet, subnet: 10.10.100.0/24).

[Diagram: the compute node (interfaces eth0, eth1) runs nova-compute, neutron-linuxbridge-agent, and neutron-dhcp-agent; the dhcp-server process holds 10.10.100.1.]

Page 19: Openstack summit 2015

How it's set up

1. Install the nova/neutron agents.

2. Create a Neutron network (name: freenet, subnet: 10.10.100.0/24).

3. The user creates a VM.

[Diagram: as above, and the VM now hangs off the Linux bridge with IP 10.10.100.2/32 and GW 10.10.100.1; the controller orchestrates the compute node.]
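Steps 2 and 3 map to ordinary OpenStack CLI calls. A hedged sketch in modern python-openstackclient syntax (the talk predates this client; image and flavor names are placeholders):

```shell
# Create the "freenet" network and its subnet (step 2); names/values from the slide.
openstack network create freenet
openstack subnet create --network freenet --subnet-range 10.10.100.0/24 freenet-subnet
# Boot a VM on it (step 3); image and flavor are illustrative placeholders.
openstack server create --image cirros --flavor m1.tiny --network freenet vm1
```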

Page 20: Openstack summit 2015

How it works

1. Install the nova/neutron agents.

2. Create a Neutron network (name: freenet, subnet: 10.10.100.0/24).

3. The user creates a VM.

4. Update routing (with a dynamic routing protocol).

[Diagram: the compute node (192.1.1.201) has routing table "default GW 192.168.1.1 via eth1; host route dest 10.10.100.2/32 to 10.10.100.1". It advertises the /32 via a dynamic routing protocol, so the upstream router (192.1.1.202) learns "10.10.100.2/32 via 192.1.1.201".]
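Step 4 works because of longest-prefix matching: the advertised /32 host route always beats the default route on the upstream router. A toy lookup, illustrative rather than real router code:

```python
import ipaddress

# Toy longest-prefix-match table mirroring the routing tables in the diagram.
routes = {
    ipaddress.ip_network("0.0.0.0/0"): "192.168.1.1 (default GW)",
    ipaddress.ip_network("10.10.100.2/32"): "192.1.1.201 (host route)",
}

def next_hop(dst):
    """Return the next hop of the most specific route covering dst."""
    addr = ipaddress.ip_address(dst)
    best = max((net for net in routes if addr in net), key=lambda net: net.prefixlen)
    return routes[best]

print(next_hop("10.10.100.2"))  # 192.1.1.201 (host route)
print(next_hop("10.10.100.3"))  # 192.168.1.1 (default GW)
```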

Page 21: Openstack summit 2015

Phase 1

Use RIP and OSPF

– The heterogeneous setup is a burden.

– The default GW is on eth1 even for the compute node, so the management and service networks are mixed.

[Diagram: the same setup; the compute node (192.1.1.201) advertises via RIP, and the fabric (192.1.1.202) runs OSPF.]
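On the compute node, the RIP side of Phase 1 could look like the following Quagga ripd fragment. This is an assumed sketch: the talk does not show the actual config, and the interface name and redistributed route are taken from the diagram:

```
! Hedged Quagga ripd sketch for Phase 1 (illustrative, not the production config)
router rip
 network eth1            ! speak RIP on the service uplink
 redistribute kernel     ! re-advertise kernel host routes such as 10.10.100.2/32
```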

Page 22: Openstack summit 2015

Phase 2

Use BGP and a switch namespace

– Isolate the VMs' traffic using a "switch" network namespace.

– Adopt the same dynamic routing scheme on the compute node.

[Diagram: VM traffic lives in the switch namespace, with its own routing table (default GW x.x.x.x via eth0); the global namespace keeps the management routing table (default GW 192.168.1.1 via eth1; host route 10.10.100.2/32 to 10.10.100.1). The compute node speaks iBGP internally and eBGP to the fabric (192.1.1.202).]
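The "switch namespace" trick can be reproduced with ordinary Linux network namespaces. A minimal sketch, assuming illustrative interface and address names (not the production setup); run as root:

```shell
# Isolate VM/service traffic in its own namespace with its own default route.
ip netns add switch
ip link add veth-sw type veth peer name veth-gl   # veth pair linking the namespaces
ip link set veth-sw netns switch
ip netns exec switch ip addr add 10.10.100.1/32 dev veth-sw
ip netns exec switch ip route add default via 192.1.1.202 dev veth-sw onlink
# The global namespace keeps its management default route on eth1 untouched.
```

This gives each namespace an independent routing table, which is exactly what separates the service default GW from the management one in the diagram.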


Page 24: Openstack summit 2015

What we solved?

[Diagram: three AZs, each compute node running nova-compute, neutron-linuxbridge-agent, and neutron-dhcp-agent behind a Linux bridge and switch namespace; ToRs tor1/tor2/tor3 uplink to routers rt1 to rt6. With vm 10.10.100.2/32 in AZ1, the routing tables converge on "10.10.100.2/32 via tor1" and "via rt1"; when the VM moves, the tables re-converge (e.g. "via tor2", "via rt3"), so the VM keeps its /32 across zones.]

Page 25: Openstack summit 2015

What we solved?

Simple IP planning

– Only IP ranges matter (no more VLAN, IP-subnet, or router planning).

Resource imbalance

– No chance of IP imbalance anymore.

Fault resilience

– If one router goes down, the dynamic routing protocol propagates that to the other routers.

Distributed

– Routing-path decisions are fully distributed; no single point of failure.

– Scale-out by nature.

Page 26: Openstack summit 2015

What do we still have to solve?

Still many issues

– Applying this to physical servers

– Making router setup driven by API (REST, RPC), using a seed BGP speaker (advertising only)

– ACL propagation via API (e.g. FlowSpec)

– Shared-storage-based services

Page 27: Openstack summit 2015

One More Thing

Page 28: Openstack summit 2015

KField Network

Page 29: Openstack summit 2015

CI