
ECONOMICS OF AWS PUBLIC CLOUD & OPENSTACK PRIVATE CLOUD AT SCALE
WHITE PAPER

© 2011-2017 Unitas Global | Case Study - Economics of AWS & Private OpenStack at Scale

CASE STUDY

ECONOMICS OF AWS PUBLIC CLOUD + OPENSTACK PRIVATE CLOUD AT SCALE

Many organizations find themselves trying to choose between different cloud deployment strategies. While much has been written about the technical differences between platforms, far less is available about the financial aspects. In this case study, we’ll look at the pros and cons of both public AWS and private OpenStack-based clouds, focusing on the economic implications of each, particularly for medium- to large-scale environments with long-term workloads.

A note on our methodology: Analysts have frequently compared the cost of consuming public cloud with that of building and operating private cloud. In the case of private cloud, these analyses typically include costs for in-house engineering resources. While these costs certainly exist, for the purpose of this case study we are excluding them and comparing infrastructure/compute resource costs alone. Each organization will have its own cost of in-house engineering resources that should be considered in addition to the cloud infrastructure.

The differences between the public and private cloud economic models result in pros and cons for each. Below we discuss three key differences that create financial implications that must be considered when weighing public vs private cloud for hosting enterprise workloads.

PUBLIC VS PRIVATE CLOUD: Fundamentally Different Economic Models

To consider the implications of different cloud delivery models, we’ll first start by defining each:

PUBLIC CLOUD: Self-service, on-demand compute and storage resources available to anyone. Typically structured in a “pay as you go” model with monthly invoicing for resources provisioned during each billing period (typically counted in hourly increments). Public cloud resources can be increased or decreased at any time, in real-time, with the ability to “scale up” resources with (effectively) no limit.

PRIVATE CLOUD: Self-service, on-demand compute and storage resources available exclusively to one particular organization. Built on dedicated hardware, typically at a fixed monthly cost for the entirety of that hardware. No ability to scale beyond the hardware provided. Private clouds can be built in different ways; namely in-house/on-premise or hosted and managed by a 3rd party.

HYBRID CLOUD: An integrated combination of private and public cloud, with seamless burst and workload portability between the two.


RESERVATION VS CAPACITY

In both public and private clouds, we pay money in exchange for the ability to run workload within the cloud. At the most basic level, it appears we are purchasing the same thing in either – namely computing resources. However, the methods by which these resources are allocated and consumed are somewhat different in public and private clouds.

It is often said that with public cloud, we pay only for what we use – but that isn’t exactly accurate. More specifically, when we launch a virtual machine we begin paying for it immediately. The amount we pay for that virtual machine is the same hour-by-hour, regardless of how much we actually use the computing resources provided by that particular virtual machine. In essence, we are paying for a reservation (or allocation) of computing capacity – not the actual usage of that capacity or the performance derived from it.
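To make the reservation model concrete, here is a minimal billing sketch. The hourly rate is a hypothetical figure for illustration, not a published price:

```python
# Sketch: public cloud billing depends on provisioned hours, not on how
# busy the VM actually was. Rate and hours are assumed for illustration.
HOURLY_RATE = 0.25        # $/hour while the VM is provisioned (hypothetical)
HOURS_IN_MONTH = 730

def monthly_cost(avg_utilization: float) -> float:
    """Cost of one provisioned VM; note that utilization never enters the math."""
    return HOURLY_RATE * HOURS_IN_MONTH

# Whether the VM averages 5% busy or 95% busy, the bill is identical:
print(monthly_cost(0.05), monthly_cost(0.95))  # 182.5 182.5
```

The unused `avg_utilization` parameter is the point of the sketch: the variable that matters most to the consumer never appears in the provider’s pricing formula.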

In a private cloud, we are paying for the full, overall (fixed) capacity of the entire private cloud, regardless of how many virtual machines are provisioned on it, and (similar to public cloud) regardless of how much we utilize those virtual machines. Because the computing, storage, and network hardware are all fully dedicated to the organization using the private cloud, in essence we are paying for the capacity of the private cloud, and the performance it provides.

The distinction between paying for a reservation or allocation of computing resources in a public cloud versus paying for specific capacity and its associated performance in a private cloud is an important one. It is this very distinction that enables public cloud providers to maintain gross margins in an otherwise highly commoditized space.

But why does this matter to us, the cloud consumer? Let’s start by looking at a single virtual machine that we’ve provisioned in a public cloud. We know that running out of resources would be bad for the applications running on it – indeed, the virtual machine itself could crash under the right set of conditions.

Most industry benchmarks suggest that typical virtual machines average around 30% utilization of their available resources. If we graphed a typical virtual machine’s CPU utilization over time, it might look something like Figure 1, below. (Note that we’ve just graphed CPU utilization of the virtual machine for clarity; the same concept applies to memory and storage.)


In this example, we’re reflecting an average of 30% utilization of the virtual machine’s CPU. However, in a public cloud we are paying for the availability of 100% of the virtual machine’s allocated resources, regardless of what the actual utilization of those resources is. This means on average, 70% of the available, paid-for resources are going unused. With a single virtual machine, this isn’t a huge problem. But as the number of machines increases, the economic impact increases geometrically. Consider a physical host that contains five virtual machines, each with its own usage pattern. Together, usage might look something like this:

Figure 1: Virtual machine CPU utilization over time

Figure 2: Five virtual machines – CPU utilization over time
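The gap between the 30% average utilization and the 100% we pay for can be put in dollar terms with a quick back-of-the-envelope calculation. The monthly bill below is a hypothetical figure, not taken from the case study data:

```python
# Rough math on the waste described above: at 30% average utilization,
# 70% of the paid-for capacity goes unused. The bill is hypothetical.
monthly_bill = 10_000.00      # $ paid for provisioned capacity (assumed)
avg_utilization = 0.30        # the industry-benchmark average cited above

unused_fraction = 1.0 - avg_utilization
unused_spend = monthly_bill * unused_fraction
print(f"paid for but unused: ${unused_spend:,.2f}")  # $7,000.00
```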

© 2011-2017 Unitas Global Case Study - Economics of AWS & Private OpenStack at Scale

© 2013-2015 Unitas

CASESTUDY

ECONOMICS OF AWS PUBLIC CLOUD + OPENSTACK PRIVATE CLOUD AT SCALE

Presented this way, it’s difficult to understand what our actual physical resource utilization looks like. However, if we visualize the same five virtual machines as a cumulative total, the picture is clearer:

Figure 3: Five virtual machines – CPU utilization over time (stacked/combined)

With a single virtual machine, you have limited ability to take advantage of the unused resources, as the utilization pattern is relatively “spikey” or unpredictable. With many virtual machines, each with their own “peaks” in usage, the overall average usage becomes more predictable (“flatter”). As the number of virtual machines increases, the usage pattern becomes increasingly predictable. (The so-called “law of large numbers” in part explains this effect.) The more predictable the overall usage is, the more effectively the excess, unused capacity can be utilized (or sold again, as is the case with public cloud). Provisioning or selling this excess capacity is commonly referred to as “thin provisioning” or “oversubscription.” Thus the economic impact isn’t limited to the 70% of unused server capacity, but also includes the lost opportunity cost of that same capacity. This “smoothing” effect is depicted in the following graph:

Figure 4: Five virtual machines – CPU utilization over time (combined)
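The smoothing effect Figures 2 through 4 illustrate can be reproduced with a toy simulation. The utilization distribution below (a clipped Gaussian around the 30% average) is invented for illustration, not drawn from real telemetry:

```python
# Toy model of the "law of large numbers" smoothing described above:
# the peak of the combined load sits closer to its mean as VMs are added.
import random
import statistics

random.seed(42)  # reproducible runs

def peak_to_mean(n_vms: int, samples: int = 1000) -> float:
    """Peak-to-mean ratio of combined utilization for n bursty VMs.
    Each VM's utilization is drawn independently around a 30% mean."""
    totals = [
        sum(min(1.0, max(0.0, random.gauss(0.30, 0.20))) for _ in range(n_vms))
        for _ in range(samples)
    ]
    return max(totals) / statistics.mean(totals)

# More VMs -> a "flatter", more predictable combined curve:
assert peak_to_mean(1) > peak_to_mean(20)
```

A flatter curve is exactly what makes oversubscription safe: the operator can plan against the predictable aggregate rather than each VM’s worst-case spike.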


As the number of virtual machines increases, the effect becomes more pronounced. Consider the same graph as in Figure 4, but now scaled 4X (20 virtual machines instead of 5):


Figure 5: Twenty virtual machines – CPU utilization over time (combined)

As scale increases, actual utilization becomes more predictable, thus providing visibility into under-utilized resources available through thin provisioning. In a public cloud model, this excess thin-provisioned capacity is resold to other clients, increasing the cloud provider’s gross margin as some percentage of the physical hardware capacity is sold to multiple clients. In a private cloud model, this excess capacity is yours to provision additional virtual machines in – at no additional cost. The greater the number of virtual machines, the greater the effect, as resource management becomes more predictable and we can run the systems as efficiently (sometimes called “hot”) as we’re comfortable doing.
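The oversubscription arithmetic is simple to sketch. The 45% peak figure below is illustrative, chosen only to show the mechanics:

```python
# Toy thin-provisioning math: if the combined workload never exceeds ~45%
# of the hardware's capacity, the operator can sell roughly 1 / 0.45 of
# the nominal capacity. The 45% peak is an assumed, illustrative figure.
peak_combined_utilization = 0.45
oversubscription_ratio = 1.0 / peak_combined_utilization
print(f"safe oversubscription: {oversubscription_ratio:.1f}x")  # 2.2x
```

In the public cloud model this headroom becomes the provider’s margin; in the private model it becomes free capacity for your own additional virtual machines.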

The difference between paying for a reservation versus capacity of computing resources results in very significant cost savings when comparing private and public cloud at scale. But there is one more secret of public cloud models hidden in plain sight. History has shown that as computing requirements increase in a cloud environment, the average provisioned size of a virtual machine tends to increase as well. In other words, public cloud consumers buy “beefier” virtual machines (not just more of the same virtual machines) as workload demand increases. Understanding this, smart product managers have priced these larger machines higher than smaller ones on a per-core/per-GB basis. In other words, a virtual machine with eight CPU cores doesn’t cost double the same virtual machine with only four cores – it actually costs more than double.
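A hypothetical price list makes the super-linear pricing pattern visible. These numbers are invented for the sketch, not actual AWS rates:

```python
# Invented price list illustrating super-linear pricing of larger VMs
# (not real AWS rates): doubling the cores more than doubles the price.
hourly_price = {4: 0.20, 8: 0.45}     # CPU cores -> $/hour (hypothetical)

per_core_4 = hourly_price[4] / 4      # 0.05000 $/core-hour
per_core_8 = hourly_price[8] / 8      # 0.05625 $/core-hour

assert hourly_price[8] > 2 * hourly_price[4]   # more than double overall
assert per_core_8 > per_core_4                 # per-core rate rises with size
```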


RESOURCE SPRAWL

While various estimates exist, it’s no secret that large-scale public cloud environments house significant amounts of resources that aren’t actively being used and may even have been forgotten. It’s easy to see how this can happen in a real-world scenario.

Most IT professionals are under constant pressure to meet deadlines or achieve timely milestones of some kind on a regular basis. Today’s business environment is fast-paced, and in a competitive economy, this won’t likely change. As such, there is always some urgent project for the IT team to be working on. Consider the following example:

Mitch is a database administrator for Acme Corp. He supports the critical database backend for the company’s line-of-business applications. One of the applications is reaching the limits of available resources on the virtual machines the database is provisioned on. In addition, the database has been in place for a couple years, and as a result the operating system is in need of an update and newer versions of the database software have features that would be helpful. Mitch decides the best course of action is to provision new, more-powerful virtual machines with the new operating system and database software already installed. He’ll then migrate the database to the new virtual machines, and redirect the front-end application to use the new upgraded database cluster once complete. Once migrated, the applications will be tested to ensure proper operation.

Good IT policy suggests that the old virtual machines and database are left in place for some period of time (often 30 days or longer) in case problems with the new system arise and Mitch needs to “roll back” to the old database. Unfortunately, after the database was up and running on the new virtual machines, Mitch moved on to his next project and forgot about those virtual machines still provisioned within the cloud. The very design of most public cloud portal interfaces makes it easy for old resources to “drop off the radar” and be forgotten.

There’s no alert to remind the user of stranded/unused resources. (Out of sight; out of mind.) The problem is we’re still paying for them even though the processing load is now on the new virtual machines.

This scenario is just one of a multitude of cases where resources are stranded and abandoned in public cloud. Numerous studies have been conducted, and most show that between 15% and 30% of the public cloud resources being paid for are completely unused due to this effect. Naturally, the percentage of stranded, unused but paid-for resources gradually increases the longer an environment exists in public cloud, and the cloud service provider is able to resell the unused resources to other clients.
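Applying that 15-30% range to a hypothetical bill shows why sprawl matters at scale. The monthly spend below is assumed for illustration:

```python
# Estimated cost of stranded resources, using the 15-30% range cited above
# against an assumed (hypothetical) monthly public cloud bill.
monthly_spend = 50_000.0                    # $/month, assumed
low, high = 0.15, 0.30                      # stranded-resource range cited

stranded_low = monthly_spend * low          # ~ $7,500/month
stranded_high = monthly_spend * high        # ~ $15,000/month
print(f"likely stranded spend: ${stranded_low:,.0f}-${stranded_high:,.0f}/month")
```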

The only way to mitigate resource sprawl over time is through a combination of rigorous internal processes that are consistently followed and 3rd party cloud governance tooling. Unfortunately, at the time of this writing the software tools available to help track and eliminate abandoned resources are either relatively immature and not very capable, or are capable but very expensive, negating much of the benefit. We expect that progress will continue to be made in this area, but for now it remains a significant financial detriment to large-scale public cloud, as abandoned or forgotten resources continue to cost money.

Private cloud can help reduce (but not necessarily eliminate) cost related to resource sprawl. In our example above, when Mitch migrated the database the load transferred to the new virtual machines, which means most of the resources associated with the old virtual machines were available for other virtual machines. In essence, those resources are “reclaimed” even though they’re still provisioned. While this may not apply to storage, certainly the more expensive CPU and memory resources are made available for new workload without additional cost.

So while “cleaning out” unused resources in private cloud is still important, it doesn’t have the financial imperative that those same resources in public cloud carry.


FEATURE LOCK-IN

It’s been said that public cloud computing will one day be “utility-like” and indeed, the basic resources (compute, memory, storage) are becoming increasingly commoditized.

As public cloud resources become increasingly commoditized, major providers seek to differentiate their offerings from principal competitors. Unique “special features” and capabilities are one way to add value to a public cloud offering. Many of these special features are designed to save time or ease deployments for developers and those responsible for deploying application workload to public cloud. (Indeed, this is in keeping with the original premise of cloud – namely that managing physical infrastructure in data centers is no longer required.) Providers announce new features at a rapid rate, always seeking to offer the most innovative service to their user base and stay ahead of the competition.

In our earlier example scenario, Mitch was tasked with migrating a database from one cluster to another. While not overly difficult, building a high availability database cluster in accordance with best practices takes time. Common steps might include:

1. Launching new virtual machines for the database
2. Updating software revisions / patching on the virtual machines
3. Installing and configuring selected database software
4. Establishing user roles and accounts
5. Establishing clustering/replication between the nodes
6. Launching new virtual machines for the front-end load balancers
7. Installing load balancing software, updating software
8. Performance testing & tuning

In addition, there are ongoing considerations such as performance management and tuning, database backup and archival, monitoring regular database operation, and regular software updates. Again, none of this is particularly challenging, but it does require time: a simple cluster such as this one can take many hours of work to build, and an ongoing time commitment to maintain. Public clouds offer an alternative to this manual work in the form of a turn-key “Database as a Service” (DBaaS) offering that provides easily-consumed, highly-available, high-performance clustered database capacity.

All of the back-end systems are built and operated by the cloud provider using best practices and abstracted from the user. The user merely needs to click a button (or make an API call) and the database is provided within minutes, ready to go. What would’ve taken many hours of initial work and ongoing maintenance now takes mere seconds. This can provide tremendous value to developers under constant pressure to deliver new applications on tight deadlines.

However, these public cloud special features have a potential downside. Special feature services come at a significant price premium relative to the compute capacity required to support them (even if the underlying software is free), and build a level of dependence on the cloud-specific proprietary tooling. We call this “feature lock-in.” As a developer with critical timelines to hit, the path of least resistance usually wins... and where hours of work can be orchestrated in seconds, most developers will opt to take that path. A multitude of similar features exist in the major public cloud providers. While often a relatively small percentage of overall spend for cloud at scale, they represent some of the resources most difficult to migrate or bring in-house. As previously illustrated, the cost differential for small environments isn’t as significant (as a percent of overall) as for a large environment. Thus the feature lock-in and “as-a-Service” price premium make an increasingly large impact as the cloud footprint grows.

Ease of consumption is a critical factor to consider when thinking about the long-term financial implications of public vs private clouds. Ask yourself this question: if you’re a developer and need resources to work on an application, is it easier to click a few buttons and pay for real-time resources on a credit card (and expense it later), or to submit an internal requisition request for resources? Justify hardware capex internally? Wait for weeks (or months) for it to appear?


ECONOMIC MODEL

So far we’ve covered three main topics:

1. Fundamental cost of basic computing units in cloud (CPU, memory, storage) – on a per-unit basis, these are more cost-effective in private cloud than in public once the environment achieves a certain level of scale. This is due primarily to the reservation versus capacity concept. The cost delta between public and private grows geometrically.

2. Resource sprawl (or “orphaned” resources) keeps costing money even after we’ve moved on to other projects. This grows over time.

3. Special features help differentiate clouds, and many of these cloud conveniences make things easier to do – but they create feature lock-in and come at a premium price point, exacerbating the financial impact of the first two over time.

The net effect of all three is that the larger the environment, the longer it exists, and the more predictable the baseline workload is, the more expensive public cloud is when compared to private cloud.

The following chart is a real-world monthly cost comparison of a SaaS provider’s AWS environment with the equivalent capacity in a Unitas Global OpenStack EPC environment:

Figure 6: Initial monthly cost for sample private cloud compared to equivalent AWS footprint


Over time, the price difference between public and private cloud environments continues to increase. For the same client, we forecasted a 5-year growth trajectory using the data provided. Taking into account the factors described above, the cost savings continued to increase dramatically:

Figure 7: Five-year forecasted costs (private cloud versus AWS)

It’s important to note that the one place where this effect doesn’t result in savings is short-term, highly “spikey” workloads that aren’t likely to be repeated. Outside of that scenario, the cost savings of private cloud at scale are clear.
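The kind of forecast shown in Figure 7 can be sketched with a simple model. Every number below is an invented assumption for illustration; none of them come from the actual client data:

```python
# Minimal 5-year forecast sketch: the public bill compounds with workload
# growth plus sprawl, while the private cloud is a fixed monthly cost that
# steps up once when capacity runs out. All figures are hypothetical.
PUBLIC_START = 40_000.0    # $/month public cloud, year 1 (assumed)
GROWTH = 0.25              # annual workload growth (assumed)
SPRAWL = 0.05              # extra stranded resources per year (assumed)
PRIVATE_START = 30_000.0   # fixed $/month private cloud (assumed)

def public_monthly(year: int) -> float:
    """Public bill compounds with growth and sprawl."""
    return PUBLIC_START * (1 + GROWTH + SPRAWL) ** (year - 1)

def private_monthly(year: int) -> float:
    """Private cost is flat until a one-time 50% capacity expansion."""
    return PRIVATE_START if year <= 2 else PRIVATE_START * 1.5

for yr in range(1, 6):
    gap = public_monthly(yr) - private_monthly(yr)
    print(f"year {yr}: gap ${gap:,.0f}/month in favor of private")
```

Even with the private cloud stepping up in cost, the compounding on the public side makes the monthly gap widen every year, which is the shape Figure 7 depicts.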

NON-ECONOMIC FACTORS

While we’ve focused on the economic implications of public versus private cloud, it’s important to note that there are other significant business differences to consider. These include:

Performance

In a private cloud environment, you have complete control of the hardware and software that comprise your environment, as it is fully dedicated. As such, full transparency into the performance of the underlying hardware is possible. By contrast, in a public cloud the user is completely abstracted from the underlying hardware. While in private cloud there is never any resource contention (unless specifically desired), in a public cloud environment “noisy neighbors” can impact the performance of your resources. By controlling private resources, you can design them to meet the specific performance requirements of your workload, rather than trying to shoehorn your workload into general-purpose infrastructure. Public cloud providers must balance the needs of all customers, designing infrastructure to handle a variety of workload profiles.


Security

In a truly private cloud environment, hardware is fully dedicated to a particular company, organization, or workload. For this reason, borders are well-defined and custom security capabilities can be designed and integrated throughout the cloud infrastructure. In turn, this can make it easier to comply with major industry security standards and pass associated audits and certification processes.

Flexibility

Private dedicated cloud infrastructure affords flexibility not found with public clouds. A truly custom solution provides certain advantages, including the ability to integrate with existing legacy systems, direct network connections to on-premise networks, and specific hardware tailored to workloads. Existing security infrastructure requirements can be incorporated without the need to adapt to pre-packaged public cloud offerings. On the other hand, public cloud provides the most flexible options in terms of consumption, namely that resources can be purchased for very short periods of time and easily increased or decreased.

Ease of Use

Public cloud remains the easiest service to consume, with mature portals and APIs available to provision resources. However, private cloud infrastructure has made great strides in this area, with the likes of OpenStack Horizon providing functionality in private environments equivalent to that of major public cloud providers. Ease of consumption is a significant factor when driving cloud adoption within the organization.

PRIVATE (OR HYBRID) CLOUD WITH UNITAS GLOBAL

The Unitas Global private cloud solution brings together the best benefits of private cloud with those of public clouds:

1. Fully dedicated, custom-designed private cloud infrastructure, selected for specific workloads and optimized for performance.

2. Completely self-service provisioning via an easy-to-use, intuitive interface.

3. Feature flexibility – can build in capabilities like DBaaS to provide the time savings public cloud provides with special services, without the vendor lock-in. (All special features in Unitas private cloud use standard, open-source or open-standard software.)

4. Ability to seamlessly integrate with public clouds for on-demand, short-term “burst” capacity beyond what is provided by the private cloud environment.

5. Compatibility with major public cloud APIs for programmatic provisioning and orchestration.

6. Open standards for data portability.

7. Fully managed – Unitas operates the private cloud using industry best practices; clients can simply consume resources and focus on applications.

Unitas Global believes that combining the advantages of both public and private clouds with the cost savings associated with private Unitas OpenStack provides the best of both worlds for large-scale enterprise IT infrastructure requirements.

Contact the Unitas Global team for more information about Unitas Global Managed OpenStack Private Cloud solutions today!


WWW.UNITASGLOBAL.COM

453 S Spring Street #201 Los Angeles, CA 90013 +1 855.586.4827 [email protected]