Deploying a best of breed OpenStack Compute & Block Storage Cloud
…with ass-kicking VMs to show for it
Adam Carter, Director of Product Management ([email protected])
David Medberry, Cloud Engineer ([email protected])
John Griffith, PTL Cinder ([email protected])
Agenda
• What did we set out to accomplish?
• How did we get there?
  – Compute (Nova) environment
  – Block Storage (Cinder) environment
  – Deployment via Ubuntu Charms
• What do we have to show for it?
• What we learned along the way
• Where to from here
• Q&A
What did we set out to accomplish?
• A blueprint
• A reference architecture
  – For any OpenStack deployer looking to stand up a production-ready compute (Nova) and block storage (Cinder) environment
• For use cases such as
  – IaaS
  – DBaaS
• Emphasizing the attributes of
  – Predictable performance
  – Quality of Service
  – Ease of use
OpenStack Compute Environment
• Folsom on Ubuntu 12.04 via the Ubuntu Cloud Archive
  – http://ubuntu-cloud.archive.canonical.com/
• Multi-node nova-compute and a distinct nova cloud controller
• Messaging node with Horizon, Keystone, MySQL, and RabbitMQ
• Swift
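For reference, enabling the Cloud Archive pocket for Folsom on 12.04 (precise) looked roughly like this; the exact pocket name is an assumption based on the archive's precise-updates/&lt;release&gt; naming convention, so check the archive URL above:

```shell
# Enable the Ubuntu Cloud Archive Folsom pocket on Ubuntu 12.04 (precise).
# The pocket name "precise-updates/folsom" is assumed from the archive layout.
sudo apt-get install -y ubuntu-cloud-keyring
echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/folsom main" | \
  sudo tee /etc/apt/sources.list.d/cloud-archive-folsom.list
sudo apt-get update
```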
Deployment
• Juju/Charms
• Applying a generic load to a 'local' volume is not an ideal Juju use case, but it can be done
• Customizing an image by pre-loading all packages and applications may solve the same problem more efficiently when network bandwidth is the constraint
• Co-locating an Ubuntu mirror in your cloud is always a win (our colo bandwidth was not designed for this many instances)
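A minimal sketch of the Juju workflow for a topology like this one; the charm names match those published in the charm store at the time, but the service placement and relation list here are illustrative, not our exact production layout:

```shell
# Illustrative Juju deployment sketch; not the exact production topology.
juju bootstrap
juju deploy mysql
juju deploy rabbitmq-server
juju deploy keystone
juju deploy nova-cloud-controller
juju deploy nova-compute
juju add-unit nova-compute              # scale out compute nodes
juju deploy cinder
juju add-relation nova-cloud-controller mysql
juju add-relation nova-cloud-controller rabbitmq-server
juju add-relation nova-cloud-controller keystone
juju add-relation nova-compute nova-cloud-controller
juju add-relation cinder nova-cloud-controller
```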
OpenStack Block Storage Environment
• Cinder Block Storage Service
• Folsom version and drivers straight from Ubuntu packages
• Volume provisioning and iSCSI CHAP via the SolidFire OpenStack driver
• QoS attributes controlled outside of OpenStack, via the SolidFire API (today)
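Wiring the SolidFire driver into Cinder amounts to a cinder.conf change along these lines; the option names follow the generic SAN driver options of that era, the driver path is release-dependent, and the address and credentials below are placeholders:

```ini
# Sketch of a Folsom-era cinder.conf fragment for the SolidFire backend.
# Driver path, address, and credentials are illustrative placeholders.
volume_driver = cinder.volume.solidfire.SolidFire
san_ip = 10.0.0.10
san_login = admin
san_password = secret
```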
What do we have to show for it?
[Architecture diagram; switch front-panel port labels omitted. Recoverable labels:]
• SolidFire Five Node SF3010 Cluster
• 2x Intel QSSC-S4R: Ubuntu 12.04 LTS, OpenStack Compute (Nova), KVM hypervisor
• 5x Dell C1100 chassis (2x E5645, 96GB RAM): OpenStack management services (Swift, Keystone, Glance, Cinder, Nova Scheduler and API)
• OpenStack Cinder Block Storage Service
• 10GbE for storage I/O, 1GbE for management traffic; connections are per-chassis (redundant connections for each chassis & network)
• 2x Dell PowerConnect 5548 (10Gbps stack), 2x Force10 S4810 (80Gbps stack, 2x40)
• Estimated 600 – 1200 virtual machines, 210 – 415 IOPs per application instance
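The VM and IOPS ranges above are consistent with dividing a fixed cluster IOPS budget across the instance count; the 50,000-IOPS-per-node figure below is an illustrative assumption for the arithmetic, not a number from the slides:

```python
# Back-of-envelope check: per-node IOPS figure is an assumption, not a spec.
cluster_iops = 5 * 50000  # five-node SF3010 cluster

for vms in (600, 1200):
    per_vm = cluster_iops // vms
    print(vms, "VMs ->", per_vm, "IOPS each")
```

At 600 and 1200 VMs this yields roughly 416 and 208 IOPS per instance, in line with the 210 – 415 range quoted.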
Where to from here
• Blueprint for Grizzly Cinder (this week)
• Finish the reference architecture and publish it
• Cinder development in Grizzly
What we learned along the way…
• Setting up an OpenStack cluster is still challenging, even for people with significant Essex experience: things have moved, options have changed
• It is critical to know the key scaling factors and your use model
• Examine all the Nova and Cinder defaults and adjust them for your use case
• Be prepared, when updating from Essex to Folsom (or Folsom to Grizzly), to encounter config changes you may not know about
• Be ready to fix bugs as you go with a new release (we found Folsom, SolidFire, and Juju bugs as we went)
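One way to surface those unadvertised config changes before upgrading is to diff the sample configs shipped with each release; the file contents below are a toy stand-in for real nova.conf samples:

```shell
# Toy illustration: diff two release sample configs to spot new/renamed options.
printf 'verbose=false\nsql_connection=mysql://db\n' > essex.sample
printf 'verbose=false\nsql_connection=mysql://db\nvolume_api_class=nova\n' > folsom.sample
diff -u essex.sample folsom.sample | grep '^+[^+]'
```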
DEMO TIME…
Any Questions?
Canonical Ubuntu Booth @Canonical
SolidFire Booth E7 @Solidfire