Performance Tuning a Cloud Application: A Real World Case Study
DESCRIPTION
During the OpenStack Icehouse summit in Atlanta, Symantec presented our vision for a Key Value as a Service storage technology utilizing MagnetoDB. Since then, our Cloud Platform Team has rolled the service out in our production environments. Through that process we have learned about the tuning requirements of the solution on bare metal versus hosted VMs within an OpenStack environment. Our initial performance testing was done with MagnetoDB running on bare metal nodes. After migrating the service from bare metal to an OpenStack VM hosted environment, we observed a 50% reduction in performance. This presentation will dig into the details of the performance baselines, the tuning of the Nova Compute servers, the virtual machine settings, and the application itself to increase our performance.

Why the larger community will be interested in this topic: This presentation will dig into the technical details of performance tuning an application running on an OpenStack Nova Compute cluster. We will examine the performance related configuration settings necessary to improve the hosted application from three different angles:
– the underlying compute node operating system configuration
– the hypervisor virtualization layer
– the guest VM and application stack

This presentation will provide a real world analysis of the steps taken. In addition, it will provide an outline for other cloud operators to follow when they work toward performance tuning their own cloud stack.

TRANSCRIPT
Cloud Platform Engineering
Performance Tuning a Cloud Application
Shane Gibson
Sr. Principal Infrastructure Architect
Agenda
• About Symantec and Me
• Key Value as a Service
• The Pesky Problem
• Resolving “The Pesky Problem”
• Performance Tuning Recommendations
• Summary
• Q&A
About Symantec and Me
The Symantec Team
• Cloud Platform Engineering
– We are building a consolidated cloud platform that provides infrastructure and platform services for next generation Symantec products and services
– Starting small, but scaling to tens of thousands of nodes across multiple DCs
– Cool technologies in use: OpenStack, Hadoop, Storm, Cassandra, MagnetoDB
– Strong commitment to giving back to Open Source communities
• Shane Gibson
– Served 4 years in the USMC as a computer geek (mainframes and Unix)
– Unix/Linux SysAdmin, System Architect, Network Architect, Security Architect
– Now Cloud Infrastructure Architect for the CPE group at Symantec
Key Value as a Service (the “cloud” application)
Key Value as a Service: General Architecture
• MagnetoDB is a key value store with OpenStack REST and AWS DynamoDB API compatibility
• Uses a “pluggable” backend storage capability
• Composite service made up of:
– MagnetoDB front-end API and Streaming service
– Cassandra for back-end, key-value based storage
– OpenStack Keystone
– AMQP messaging bus (e.g. RabbitMQ, QPID, ZeroMQ)
– Load balancing capabilities (hardware or LBaaS)
Key Value as a Service: MagnetoDB
– API Services Layer
• Data API
• Streaming API
• Monitoring API
• AWS DynamoDB API
– Keystone and Notifications integrations
– MagnetoDB Database Driver
• Cassandra
Key Value as a Service: Cassandra
– Database storage engine
– Massively linearly scalable
– Highly available with no SPoF
– Other features:
• tunable consistency
• key-value data model
• ring topology
• predictable high performance and fault tolerance
• rack and datacenter awareness
Key Value as a Service: Other Stuff
– Need a load balancing layer of some sort
• LBaaS or hardware
– Keystone service
– AMQP service
• RabbitMQ
Key Value as a Service: Putting it all Together
The Pesky Problem
The Pesky Problem: Deployed on Bare Metal
• Initial deployment of the KVaaS service on bare metal nodes
• Mixed the MagnetoDB API service on the same node as Cassandra
– MagnetoDB CPU vs. Cassandra disk I/O profile
• Cassandra directly managing the disks via JBOD (good!)
• MagnetoDB likes lots of CPU; direct access to 32 (HT) CPUs
– Please don’t start me on a HyperThread CPU count rant
• KVaaS team performance expectations were set by this experience!
The Pesky Problem: Moved to OpenStack Nova
• KVaaS service migrated to a “stock” OpenStack Nova cluster
• Nova Compute nodes set up with RAID 10 ephemeral disks
• OpenContrail used for SDN configuration
• Performance for each VM guest roughly 66% of bare metal
• KVaaS team was unhappy

bare metal: 250 RPS / HT core*
virtualized: 165 RPS / HT core*
The Pesky Problem: Moved to OpenStack Nova, cont.
performance comparison of “list_tables”
* results averaged per core, since the test beds were different
The Pesky Problem: The Goal
• Deploy our KVaaS service … as a flexible and scalable solution
• Ability to use OpenStack APIs to manage the service
• Cloud provider run KVaaS service, or tenant managed service
• Initial deployment planned for the OpenStack Nova platform
– Not a containerization service …
– Though … we are considering it …
• Easier auto-scaling, better service packing, flexibility, etc.
• Explore mixed MagnetoDB/Cassandra vs. separated services
Resolving “The Pesky Problem”
Resolving the “Pesky Problem”: Approach
• Baseline the test environment
– Bare metal deployment and test
– Mimics the original deployment characteristics
• Deploy OpenStack Nova
– Install KVaaS services
• Performance tune each component
– Linux OS and hardware configuration
– KVM hypervisor/Nova Compute performance tuning
– MagnetoDB/Cassandra performance tuning
Resolving the “Pesky Problem”: Testing Tools
• Linux OS and Hardware
– perf, openssl speed, iostat, iozone, iperf, dd (yes, really!), dtrace
• KVM Hypervisor/Nova Compute
– kvm_stat, kvmtrace, perf stat -e 'kvm:*', specvirt
• MagnetoDB/Cassandra
– magnetodb-test-bench, jstat, cstar_perf, cassandra-stress
• General Test Suite
– Phoronix Test Suite
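The dd entry above really can work as a first, rough sequential-write spot check before reaching for iozone or the Phoronix suite. A minimal sketch (path and sizes are illustrative, not from our test bench):

```shell
# Rough sequential-write check with dd. conv=fdatasync forces the data
# to disk before dd reports its rate, so the number is not just
# page-cache speed. Use iozone or similar for real benchmarking.
dd if=/dev/zero of=/tmp/dd-spotcheck bs=1M count=64 conv=fdatasync
rm -f /tmp/dd-spotcheck
echo "spot check complete"
```

dd prints the elapsed time and MB/s on stderr; run it a few times and against the actual data device, not the root filesystem, for anything meaningful.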
Resolving the “Pesky Problem”: Test Architecture
Resolving the “Pesky Problem”: Test Bench
Performance Tuning Recommendations
Performance Tuning Results: Linux OS and Hardware

Recommendations:

Host:
• vhost_net, transparent_hugepages, high_res_timer, hpet, compaction, ksm, cgroups
• task scheduling tweaks (CFS)
• filesystem mount options (noatime, nodiratime, relatime)
• tune wmem and rmem buffers!
• elevator I/O scheduler = deadline

Guest:
• vhost_net or virtio_net, virtio_blk, virtio_balloon, virtio_pci
• paravirtualization!
• disable in-guest system performance gathering; get that info from host hypervisor tools
• elevator I/O scheduler = noop
• give guests as much memory as you can (FS cache!)
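As a concrete sketch of the buffer, mount, and elevator recommendations above (device names and values are illustrative starting points, not our benchmarked settings):

```
# /etc/sysctl.d/90-host-tuning.conf -- larger socket buffers for vhost_net
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216

# /etc/fstab entry for a data volume, skipping atime bookkeeping
/dev/sdb1  /var/lib/nova/instances  ext4  defaults,noatime,nodiratime  0 2

# I/O elevator: deadline on the host, noop inside the guest
#   echo deadline > /sys/block/sda/queue/scheduler    (host)
#   echo noop     > /sys/block/vda/queue/scheduler    (guest)
```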
Performance Tuning Results: Linux OS and Hardware
• 7-10x
• 30%
• 10% less latency, 8x throughput
• 2x throughput
Performance Tuning Results: KVM/Nova Compute

Recommendations:

Host:
• tweak Transparent Huge Pages
• bubble up raw devices if possible (warning: migration/portability)
• multi-queue virtio-net
• SR-IOV if you can dedicate a NIC (warning: see the bubble-up warning!)

Guest:
• qcow2 or raw for guest file backing
• disk partition alignment is still very important
• preallocate metadata (qcow2)
• fallocate the entire guest image if you can (qcow2; you lose the ability to oversubscribe)
• set VM swappiness to zero
• set async I/O to “native”
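A sketch of what the guest-image recommendations can look like in practice (image names, sizes, and paths are illustrative; check your QEMU/libvirt versions for exact option support):

```
# qcow2 with preallocated metadata:
#   qemu-img create -f qcow2 -o preallocation=metadata guest.qcow2 40G
# fully allocated image (gives up thin provisioning / oversubscription):
#   fallocate -l 40G guest.raw

# libvirt disk stanza with native async I/O:
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none' io='native'/>
  <source file='/var/lib/nova/instances/guest.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>

# inside the guest, avoid swapping out the FS cache:
#   sysctl vm.swappiness=0
```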
Performance Tuning Results: KVM/Nova Compute
• 2 to 15% gain
• ~10% gain
• 40+% gain with Host + Guest tuning
• 8% gain in TPM
Performance Tuning Results: MagnetoDB/Cassandra

Recommendations:
• disk: vm.dirty_ratio & vm.dirty_background_ratio – increasing the cache may help write workloads that have ordered writes, or writes in bursty chunks
• “CommitLogDirectory” and “DataFileDirectories” on separate devices for a write performance improvement
• GC tuning of the Java heap/new generation – significant latency decreases
• tune Bloom filters, data caches, and compaction
• use compression for similar “column families”
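For reference, the commit-log/data split, dirty-page ratios, and heap sizing above might look like this (cassandra.yaml uses the snake_case key names shown; all paths and values are illustrative):

```
# cassandra.yaml: commit log and data on separate physical devices
commitlog_directory: /mnt/commitlog/cassandra
data_file_directories:
    - /mnt/data/cassandra

# sysctl: let bursty writes accumulate in cache before flushing
vm.dirty_background_ratio = 10
vm.dirty_ratio = 40

# cassandra-env.sh: heap / new-gen sizing feeds directly into GC latency
#   MAX_HEAP_SIZE="8G"
#   HEAP_NEWSIZE="2G"
```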
Performance Tuning Results: MagnetoDB/Cassandra
• 10x pages
• 25-35% read performance, 5-10% write gains
Summary
Summary: Notes
• “clouds” are best composed of small services that can be independently combined, tuned, and scaled
• human expectations in the transition from bare metal to cloud need to be reset
• an iterative step-by-step approach is best – Test … Tune … Test … Tune … !
• lots of complex pieces in a cloud application
Summary: Notes (continued)
• Compose your services as individual building blocks
• Tune each component/service independently
• Then tune the whole system
• Automation is critical to iterative test/tune strategies!!
• Performance tuning is absolutely worth the investment
• Knowing your work loads is still (maybe even more?) critical
Questions and (hopefully?) Answers
Let’s talk…
Thank you!
Copyright © 2014 Symantec Corporation. All rights reserved. Symantec and the Symantec Logo are trademarks or registered trademarks of Symantec Corporation or its affiliates in the U.S. and other countries. Other names may be trademarks of their respective owners.
This document is provided for informational purposes only and is not intended as advertising. All warranties relating to the information in this document, either express or implied, are disclaimed to the maximum extent allowed by law. The information in this document is subject to change without notice.
Shane [email protected]