

A Survey of Virtualization Performance in Cloud Computing

Matthew Overby, Department of Computer Science, University of Minnesota Duluth

April 2014

PLEASE NOTE: This article was written as coursework and is not peer reviewed.

Abstract—Virtualization is an important component of cloud computing. Many virtualization technologies allow workload consolidation, multiple operating systems, and fault tolerance mechanisms. The number of applications moving to the cloud is increasing, and cloud computing is becoming a prominent distributed computing framework. Virtual machines provide many benefits to these applications. Thus, it is critically important to examine and review the performance effects of virtualization in cloud computing environments.

This survey outlines research and experiments that have been done to test the effects of virtual machines in cloud computing environments. Applications such as Web 2.0 and high performance computing are considered. Benchmarking and experimental tools are introduced. Results from studies have shown that CPU sharing in Amazon EC2 small instances degrades network performance. Live server migration may affect Web 2.0 technologies if operating at peak workloads. HPC benchmarking tools show Xen may have significant variance in performance compared to KVM and VirtualBox.

I. INTRODUCTION

A. Cloud Computing

Cloud computing is paving the way for the future of distributed computing. Online interactive systems like Reddit, Expedia, Pinterest, and many others employ cloud infrastructures like Amazon EC2 to meet user demands [1]. More and more, the benefits of cloud computing draw attention from other fields, like large scale scientific simulations and high performance computing (HPC) [2]. Having computing resources that are seemingly endless, scalable, and outsourced is enticing for a wide range of applications. The term cloud computing often refers to a special case of distributed computing, encapsulating both the hardware and software portions of the overall system. Thus, the cloud represents the fuzzy notion of networked computing resources. Users can allocate chunks of these computing resources depending on the task or demand. But cluster, grid, and other forms of multi-node computing infrastructures have existed for many years. So what sets cloud computing apart from these other environments?

Armbrust et al. attempt to classify cloud computing by three new aspects that separate it from classic distributed computing systems: on-demand computing resources, no up-front commitment, and short-term resource rental [2]. These three aspects revolve around one key point: cloud computing resources are leased by a customer from a provider. Customers can reduce or expand resources without having to invest in hardware. For example, if a lessee runs an online website that sees 3x traffic during the weekends, they only have to pay for those extra resources during those periods of high traffic.
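The pay-per-use arithmetic behind this example can be made concrete. The instance counts and hourly rate below are invented purely for illustration and do not correspond to any real provider's pricing:

```python
# Hypothetical pay-per-use cost for a site that triples its instance
# count on weekends and pays only for hours actually used.
HOURLY_RATE = 0.10          # assumed price per instance-hour (illustrative)
BASE_INSTANCES = 4          # weekday capacity
WEEKEND_INSTANCES = 12      # 3x capacity for weekend traffic

weekday_hours = 5 * 24
weekend_hours = 2 * 24

elastic_cost = (BASE_INSTANCES * weekday_hours
                + WEEKEND_INSTANCES * weekend_hours) * HOURLY_RATE

# Owning enough hardware for the peak, all week, would cost:
peak_provisioned_cost = (WEEKEND_INSTANCES
                         * (weekday_hours + weekend_hours) * HOURLY_RATE)

print(f"elastic: ${elastic_cost:.2f}/week "
      f"vs peak-provisioned: ${peak_provisioned_cost:.2f}/week")
```

Under these made-up numbers, leasing elastically costs roughly half of provisioning for the weekend peak all week, which is the economic argument the paragraph makes.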

Providers offer their computing resources as a cloud in different ways. Typically these resources are offered as a service (aaS) in the forms of infrastructure (IaaS), platform (PaaS), or software (SaaS). Other services such as storage, private, and hybrid clouds are also available [3]. All studies shown in this survey relate to IaaS providers. In general, many cloud environments are able to offer expansive computing capabilities by virtualizing processing, storage, and network resources.

B. Virtualization

Virtual computing resources are often controlled using a hypervisor, or virtual machine (VM) monitor, which sits underneath the operating system layer. The hypervisor allocates resources to individual VMs and controls their execution state. There are two dominant virtualization techniques: full virtualization and paravirtualization. With full virtualization, the VM is given the semblance of acting on the physical machine hardware with an isolated operating system (OS). In this sense, the guest OS is separated from the hypervisor, allowing more secure and solitary computing. With paravirtualization, on the other hand, the guest operating system is modified to know it is being virtualized and cooperates with the hypervisor. This allows the hypervisor to more adequately schedule computing resources to the VM for increased performance. Many hypervisors have different requirements in terms of hardware and software.

Virtual machines provide many benefits, and are often used to better utilize all of the hardware resources available in powerful servers. Some of the major benefits include [4]:

• Workload Consolidation: Virtual machines can be moved and reorganized as units. This allows better machine utilization, and fewer machines need to be active at a time.

• Updated Applications: Operating systems are loaded at the time the virtual machine is initialized. This means software doesn’t need to be manually updated, and users can choose what operating system and software they use. This reduces the burden on the server administrator.

• Simultaneous Operating Systems: In most cases, each virtual machine is running a separate copy of an operating system. This allows a single server to have multiple different operating systems, expanding usability.

• Machine Isolation: Often machine resources are guaranteed for an instance of a virtual machine. This guaranteed resource allocation can provide a higher quality of service than many other time-shared compute environments.

Fig. 1. Full virtualization and paravirtualization.

C. Performance

These benefits are what make virtualization and cloud computing a perfect match. But before an application should move to the cloud, it’s important to characterize the possible performance gains or losses. This is especially relevant when a customer is being billed for computing resources by the hour. In addition, many HPC and Web 2.0 applications rely on performance. Reworking or restructuring applications for the cloud with expectations of being virtualized would only be beneficial if the performance gains are tangible.

II. PERFORMANCE MEASURES

Performance studies of virtualization techniques in cloud computing environments are challenging for a number of reasons. First, there are many aspects of performance that need to be considered, such as networking, CPU utilization, disk I/O speeds, and more. Second, there is rarely a best way to benchmark these computing tasks. Third, the diversity of software such as operating systems and core applications renders many results questionable or inconclusive. There are many different hypervisors, cloud providers, operating systems, and benchmark software suites to choose from. Depending on which hypervisor is considered, there are certain hardware and software requirements that must be adhered to. Without taking into account all of the potential options, a performance study may feel incomplete. Despite these challenges, it is important both for the progress of cloud computing and validation of procedures that these studies exist.

A. Network Performance of Amazon EC2

A prominent study is the paper "The Impact of Virtualization on Network Performance of Amazon EC2 Data Center" by Guohui Wang and T. S. Eugene Ng [5]. By narrowing the scope of what resource is being measured and restricting the environment to a major cloud provider, the authors were able to give a more complete study on network performance. The study focused network measurements on the Amazon Elastic Compute Cloud (EC2), which uses the Xen open source hypervisor. Processing power on Amazon EC2 is broken into "EC2 compute units," in which one compute unit is equivalent to a 1.1 GHz 2007 Intel Xeon processor.

Two instance types were considered in this study, small and medium, in which an instance is a guest virtual machine. Small instances have 1.7 GB memory, 1 EC2 compute unit, and 160 GB of storage. Medium instances have 3.75 GB of memory and 2 EC2 compute units. The major finding of the study was that abnormally unstable network performance skewed measurements for small instances. Medium instances were less affected. The unstable network performance in small instances was most often attributed to processor sharing. Wang et al. considered four primary measurements: CPU consistency, TCP/UDP throughput, packet delay, and packet loss.

CPU consistency: A loop of one million iterations was run to test CPU utilization, in which gettimeofday() was called each iteration and the result stored in memory. They found regular "gaps" in time for the small instances, indicating processor sharing. This can be seen in figure 2.
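The measurement just described can be sketched as a short probe. This is not the authors' actual code; the iteration count and gap threshold below are arbitrary, and a monotonic clock stands in for gettimeofday(). Timestamp gaps far larger than the loop's normal per-iteration cost suggest the virtual CPU was descheduled mid-loop:

```python
import time

def find_scheduling_gaps(iterations=1_000_000, threshold_us=1000):
    """Record a timestamp each iteration and report the gaps larger
    than threshold_us microseconds, which suggest the (virtual) CPU
    was scheduled away from this process mid-loop."""
    stamps = []
    for _ in range(iterations):
        stamps.append(time.monotonic())  # stand-in for gettimeofday()
    gaps = []
    for prev, cur in zip(stamps, stamps[1:]):
        delta_us = (cur - prev) * 1e6
        if delta_us > threshold_us:
            gaps.append(delta_us)
    return gaps

if __name__ == "__main__":
    gaps = find_scheduling_gaps(100_000)
    print(f"{len(gaps)} gaps over 1 ms detected")
```

On a machine with CPU sharing, the returned list would contain the regular "gaps" the study observed; on dedicated hardware it should be nearly empty.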

Fig. 2. Results of iterative timestamps for virtualized and nonvirtualized machines. This figure is from [5].

Fig. 3. TCP and UDP throughput for a small instance over time on Amazon EC2. This figure is from [5].

TCP/UDP throughput: Pairs of instances sent TCP and UDP packets to one another. TCP throughput should achieve 4 Gb/s by hardware standards, and the UDP rate was capped at 1 Gb/s to avoid overflow. They found that medium instances performed as expected, but small instances had much lower TCP throughput than UDP. Throughput is shown for small instances at a higher resolution in figure 3. The authors found that small instances had gaps in connectivity, likely a result of processor sharing. This had a dramatic effect on TCP throughput, but UDP throughput was less affected as "bursts" of packets were sent to maintain the capped link rate.
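A throughput measurement of this kind can be sketched in miniature. The following is not the authors' tooling; it streams bytes over a loopback TCP connection between two threads, where a real experiment would use two instances over the data-center network, and the byte and chunk sizes are arbitrary:

```python
import socket
import threading
import time

def tcp_throughput_mbps(total_bytes=10_000_000, chunk=64 * 1024):
    """Stream total_bytes over a local TCP connection and return
    the achieved throughput in megabits per second."""
    server = socket.socket()
    server.bind(("127.0.0.1", 0))
    server.listen(1)

    def sink():
        conn, _ = server.accept()
        while conn.recv(chunk):   # drain until the sender closes
            pass
        conn.close()

    t = threading.Thread(target=sink)
    t.start()

    client = socket.create_connection(server.getsockname())
    start = time.monotonic()
    remaining = total_bytes
    while remaining > 0:
        n = min(chunk, remaining)
        client.sendall(b"\0" * n)
        remaining -= n
    client.close()
    t.join()                      # wait until the sink has drained everything
    elapsed = time.monotonic() - start
    server.close()
    return (total_bytes * 8) / elapsed / 1e6
```

Run repeatedly over time, dips in the returned rate would correspond to the connectivity gaps the study attributes to processor sharing.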

Packet Delay: Ten ping probes were sent between instances every second, for a total of 5000 probes. By measuring the round trip time (RTT) of each packet, the delay could be determined. They found very large delays in the first set of pings, likely due to the packets being forwarded to a security device. After that, small instances showed major delay variations, likely due to processor sharing.
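The RTT measurement can be sketched with an ordinary UDP echo standing in for ICMP ping (raw ICMP sockets require privileges). The loopback address, probe count, and message contents below are illustrative only:

```python
import socket
import threading
import time

def _echo_server(sock):
    """Echo each datagram back to its sender (stands in for ping's reply)."""
    while True:
        data, addr = sock.recvfrom(64)
        if data == b"stop":
            break
        sock.sendto(data, addr)

def measure_rtts(probes=10, interval_s=0.0):
    """Send `probes` datagrams to a local echo server and record
    each round trip time in milliseconds."""
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))
    threading.Thread(target=_echo_server, args=(server,), daemon=True).start()

    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.connect(server.getsockname())
    rtts = []
    for _ in range(probes):
        start = time.monotonic()
        client.send(b"probe")
        client.recv(64)
        rtts.append((time.monotonic() - start) * 1000.0)
        time.sleep(interval_s)
    client.send(b"stop")
    return rtts
```

In the study's setting, the variance of the collected RTTs (not just the mean) is the telling statistic, since processor sharing shows up as occasional very large delays.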

Packet Loss: Because actual packet loss is typically very low, usually around 2%, the tool BADABING was used. BADABING estimates packet loss by inspecting RTT and network congestion [6]. Because the delay variation in small instances was so high, BADABING was unsuccessful at predicting a reasonable packet loss rate.

Overall, this study showed that CPU sharing can degrade network performance in virtual machines. It also negatively impacts benchmarking tools such as BADABING. Using a medium instance type dramatically reduces these performance degradations.

B. Live Virtual Machine Migration in Web 2.0 Applications

Live Virtual Machine Migration: Live virtual machine migration is an important tool of cloud computing environments. This technique involves moving all the contents of a virtual machine to another physical host. It is especially useful for server management, as it enables online system management as well as workload balancing and consolidation [7].

Fig. 4. Live virtual machine migration on server shutdown, from [8].

The process of live VM migration involves three primarysteps:

1) Precopy memory pages: Memory from the source virtual machine is copied and moved to the destination machine. This is done without stopping the source VM. This is referred to as the warm-up phase.

2) Stop VM on source, start VM on destination: The VM is halted on the source. Memory pages that have been written to after the warm-up phase (dirty pages) are copied to the destination. Then the VM is started on the destination. This is called the stop-and-copy phase. The time between the source VM being halted and the destination VM starting is considered VM downtime.

3) Postcopy memory: The execution state is transferred from source to destination. If the destination VM accesses a page of memory that has not yet been copied, it is pulled from the source VM.
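The iterative pre-copy logic behind steps 1 and 2 can be sketched as a toy simulation. This is a deliberately simplified model, not hypervisor code: guest write activity is supplied as an explicit dirty log, whereas a real hypervisor tracks dirty pages through the MMU:

```python
def live_migrate(source_pages, dirty_log, max_rounds=3):
    """Toy simulation of pre-copy live migration.
    source_pages: dict page_id -> bytes (source VM memory).
    dirty_log: list of sets; dirty_log[i] holds the pages written
               during copy round i (models guest activity).
    Returns (destination_pages, downtime_pages)."""
    dest = {}
    # 1) Warm-up: copy all pages while the source keeps running,
    #    then re-copy whatever was dirtied, for a few rounds.
    pending = set(source_pages)
    for round_idx in range(max_rounds):
        for page in pending:
            dest[page] = source_pages[page]
        dirtied = dirty_log[round_idx] if round_idx < len(dirty_log) else set()
        pending = dirtied
        if not pending:
            break
    # 2) Stop-and-copy: halt the source and copy the remaining
    #    dirty pages; this window is the VM downtime.
    downtime_pages = set(pending)
    for page in downtime_pages:
        dest[page] = source_pages[page]
    # 3) Post-copy demand-fetch of execution state is omitted here.
    return dest, downtime_pages
```

The size of `downtime_pages` illustrates why downtime depends on the guest's write rate: a workload that keeps re-dirtying the same pages never converges during warm-up and pays for those pages in the stop-and-copy window.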

The prominent performance costs of live VM migration are processor downtime in step 2, and bandwidth requirements for copying memory pages. However, many studies have concluded that VM downtime can be reduced to 60 ms or lower [7].

Web 2.0 Applications: Primary adopters of cloud computing were Web 2.0 applications [10]. Web 2.0 applications are online interactive websites such as Facebook, Wordpress, and Blogger. A server is required to process input such as logging in, posting information, and updating profiles. As online applications get more complex, the amount of processing needed increases. Because cloud computing can generate new machines on the fly, more server instances can be allocated as new users interact with a Web 2.0 application [11].

Fig. 5. Three webserver migrations and homepage response times for a maximum workload. This plot is from [9].

Live migration is shown to have variable effects on different applications, and limited studies have shown these effects on Web 2.0 applications [9]. Thus, a study was done by Voorsluys et al. to quantify the performance degradation of VM migration in these applications. They performed the study on six servers with one head node and five virtual machine nodes. Each node had an Intel Xeon 2.33 GHz Quad-core processor, 4 GB of memory, and a 7200 RPM hard drive, and was connected via a gigabit ethernet switch. The head node ran Ubuntu Server 7.10, and each VM node ran Ubuntu Server 8.04 with a paravirtualized kernel. The virtual machine software was Citrix XenServer Enterprise Edition. Apache 2.2.8 was used as the webserver, and MySQL was used for the database. To conduct the study, two testing applications were used: Olio (now retired) [12] and Faban [13]. 10 and 20 minute benchmark tests were run in two settings: one with a static number of 600 users, which was determined in preliminary tests to be the maximum workload of their software and hardware setup; the other with a scaling number of users: 100, 200, 300, 400, and 500.

• Olio: A Web 2.0 application developed by Sun Microsystems. It allows users to log in, log out, load specific pages, search and tag events, and perform other common Web 2.0 activities. Its primary purpose was to help developers test server infrastructure and evaluate the performance of online technologies.

• Faban: A Markov-chain load generator. Faban can simulate users interacting with a web system. These virtual users will log in, interact, and log out. The number of virtual users is customizable, allowing testers to run benchmarking tools on different size workloads.

Service Level Agreements: A Service Level Agreement (SLA) is a provider and customer contract that guarantees a minimum level of service. Typically, the SLA outlines minimum server response times for certain user interactions [14]. SLA violations are a useful performance metric due to their use in real-world applications. SLAs will differ depending on provider and application. Voorsluys et al. defined the metrics in their study as follows:

• Response times were recorded in five-minute windows

• If a response exceeded the maximum allowed, a violation was recorded

• The percent of responses that caused an SLA violation was considered

The maximum allowed response times are shown in figure 6.
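The metric defined by the three bullets above can be sketched directly. The function below is an illustrative reconstruction, not the study's code, and the sample timestamps and the one-second cap used in the usage note are invented:

```python
def sla_violation_pct(response_times, max_allowed, window_s=300):
    """Percent of responses violating the SLA, computed per
    five-minute window.
    response_times: list of (timestamp_s, duration_s) pairs.
    max_allowed: SLA response-time cap in seconds.
    Returns {window_index: violation_percent}."""
    windows = {}
    for ts, duration in response_times:
        w = int(ts // window_s)
        total, bad = windows.get(w, (0, 0))
        # A response slower than the cap counts as one violation.
        windows[w] = (total + 1, bad + (duration > max_allowed))
    return {w: 100.0 * bad / total for w, (total, bad) in windows.items()}
```

For example, with a hypothetical 1.0 s cap, three responses in the first window of which one took 3.0 s would yield about 33% violations for that window and 0% for a later quiet window.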

Migration During Maximum Workload: With a full workload of 600 users, they found the live virtual machine migration of the webserver had 3 seconds of downtime over a 44 second migration. Immediately after the migration, the webserver had to catch up and respond to pending requests for 5 seconds, during which 99th percentile SLA violations occurred. 90th percentile violations only occurred if multiple migrations happened back-to-back. That is, sufficient spacing between migrations mitigates the number of SLA violations. The results of homepage loading time during these migrations can be seen in figure 5.

Fig. 6. Maximum response times for various user actions in seconds. This table is from [9].

Fig. 7. Maximum response times for scaling number of users by action. This table is from [9].

Migration with Scaling Workload: The experiments found that no SLA violations were recorded during a webserver migration if the number of users was less than the maximum. The maximum response time for all users during this experiment is shown in figure 7.

This study showed that live virtual machine migration can impact a Web 2.0 application if it is operating at its maximum workload. However, the experiment was limited in scale. The test was done on limited hardware and a single webserver. It is not clear that scaling these tests to many webservers on a true cloud platform would yield the same results.

C. VM Technologies for High Performance Computing

High Performance Computing: HPC typically refers to complex scientific algorithms which require computations that exceed the capabilities of common desktop hardware. Some examples are climate modeling, genome sequencing, and financial market modeling [16]. Different applications have different needs. Specifically, some are CPU bound and require extensive data processing, while others require much more memory or disk space. Thus, it is necessary to investigate the different VM technologies and their effect on different HPC applications. Benchmarking tools already exist that encapsulate the common functions and needs of many HPC applications, but do not benchmark the applications themselves. Two common benchmarking tools are:

• SPEC OMP: The Standard Performance Evaluation Corporation OpenMP Benchmark Suite assesses the performance of applications that use OpenMP [17]. OpenMP is an API for shared-memory parallel computing.

• HPCC: The High Performance Computing Challenge Benchmark Suite consists of multiple tests that analyze the common functions of real-world HPC applications [18]. Its Linpack component is the benchmark used to rank the "Top500" list of most powerful supercomputers [19].

A study was done by Younge et al. that compared different VM technologies using these two benchmarks [15]. In this study the FutureGrid test bed was used. FutureGrid is a workflow engine that allows researchers to examine cloud based applications on geographically distributed, heterogeneous server infrastructure [20]. This made testing between different VM technologies simpler, as well as offering tools for analyzing performance. Younge et al. chose to run their experiments on the Indiana University Data Center (of FutureGrid) on four compute nodes. Each compute node had two Intel Xeon 5570 Quad-core processors, 24 GB of RAM, and a QDR InfiniBand connection. They used Red Hat Enterprise Server Edition, and each node had a different hypervisor. Because of the hardware limitations imposed by the different hypervisors, each virtual machine was limited to 8 processor cores and 16 GB of RAM. The hypervisors tested in this experiment (one per node) were Xen [21], Kernel-Based Virtual Machine (KVM) [22], Oracle VirtualBox [23], and a control with no hypervisor. VMware was omitted from the experiment due to its user license forbidding performance comparisons with other VM technologies without authorization [15]. The major differences between the VM technologies are shown in figure 8. The experiments consisted of running the benchmarking tools 20 times, and recording the average and variance of performance for each of the hypervisors.

SPEC OMP: The experiments with SPEC showed that KVM performed on par with the native machine. Xen and VirtualBox were shown to have a score that was approximately 8% lower. Unfortunately, the authors did not express in any detail why they believed this was the case.

Floating-Point Operations Per Second: FLOPS is a measurement of how many floating-point operations can be done per second. Typically this is recorded in GFLOPS, or 10^9 FLOPS. For Linpack, the subtest of HPCC that conducts linear algebra performance tests, the experiments found a high degree of variance with Xen. These results are shown in figure 9. On average, all VMs performed about equally well, but underperformed compared to the native machine. For the Fast Fourier Transform, a discrete mathematical solver from HPCC, Xen still showed a high degree of variance. However, Xen underperformed compared to the other hypervisors, which were about equal with native. A possible hypothesis for this observation was that there are adverse effects of Intel's MPI on Xen [15]. These results can be seen in figure 10.
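A GFLOPS figure for a Linpack-style run follows from a simple operation count: solving a dense n x n linear system costs roughly (2/3)n^3 + 2n^2 floating-point operations, the standard Linpack convention. Assuming that formula, the conversion is:

```python
def linpack_gflops(n, seconds):
    """Approximate GFLOPS for solving a dense n-by-n linear system,
    using the standard Linpack operation count (2/3)*n^3 + 2*n^2."""
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    return flops / seconds / 1e9

# e.g. a hypothetical 10,000 x 10,000 solve finishing in 20 s:
# (2/3 * 1e12 + 2e8) / 20 / 1e9, i.e. roughly 33 GFLOPS
```

Benchmarks like HPL report exactly this derived rate rather than raw wall-clock time, which is why results across machines of different sizes are directly comparable.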

Fig. 8. Differences in virtual machine technologies. This table is from [15].

Fig. 9. Average GFLOPS for 20 runs with different VM technologies using Linpack. This plot is from [15].

Fig. 10. Average GFLOPS for 20 runs with different VM technologies using Fast Fourier Transform. This plot is from [15].

PingPong: PingPong is a measurement of communication between processes. One thread will send a message to another. Upon receipt of the message, the other thread will return the message. This was used to measure thread latency (the interval between sending a message and receiving the response) and bandwidth (the number of messages that could be sent per second). The bandwidth experiments showed a larger variance in Xen, and that VirtualBox often well outperformed the other hypervisors and the native machine. This was attributed to the possibility of messages being sent on the same physical processor core, thus taking advantage of the CPU cache. These results can be seen in figure 11. The experiments also showed that Xen had unusually high latencies, while KVM and VirtualBox performed similarly well to the native machine. These results are shown in figure 12.

Fig. 11. Average bandwidth of PingPong test between two processors with different VM technologies. This plot is from [15].

Fig. 12. Average latency of PingPong test between two processors with different VM technologies. This plot is from [15].
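The ping-pong pattern described above can be sketched in miniature with two in-process threads exchanging messages through queues. This is an illustrative stand-in, not the HPCC benchmark, which ping-pongs between MPI ranks; the message count and payload are arbitrary:

```python
import queue
import threading
import time

def pingpong(messages=10_000, payload=b"x"):
    """Two threads exchange messages through a pair of queues.
    Returns (mean one-way latency in microseconds, messages/sec)."""
    to_pong, to_ping = queue.Queue(), queue.Queue()

    def pong():
        for _ in range(messages):
            to_ping.put(to_pong.get())  # bounce each message back

    t = threading.Thread(target=pong)
    t.start()
    start = time.monotonic()
    for _ in range(messages):
        to_pong.put(payload)
        to_ping.get()                   # wait for the reply
    elapsed = time.monotonic() - start
    t.join()
    latency_us = (elapsed / messages / 2) * 1e6  # half the round trip
    rate = messages / elapsed
    return latency_us, rate
```

As in the study, latency and bandwidth are two views of the same exchange: half the mean round trip, and completed round trips per second.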


This is one of the few studies that compares different virtual machine hypervisors. In this regard, the results and conclusions of the authors are interesting. However, many of the findings were not adequately investigated. Unusual effects (such as the high variability in Xen) were not expanded upon. Further tests or analysis would have supported their conclusion that KVM is the ideal hypervisor for HPC. This is especially important when their findings are contrary to other studies [24].

III. CONCLUSIONS

Virtual machine technologies are a critical component of cloud computing. They reduce administration complexity by allowing multiple operating systems, isolated compute environments, and fault tolerance. Workloads can be more easily consolidated, and keeping software updated is no longer a time consuming task. As cloud infrastructure gets more sophisticated, the number of applications moving to the cloud grows. Virtual machines provide many benefits to these applications. Now, more than ever, it is critically important to examine and review the performance effects of virtualization in cloud computing infrastructure.

Amazon EC2 network performance was examined by Guohui Wang and T. S. Eugene Ng. CPU sharing in small instances degrades network performance, and complicates the use of benchmarking tools. Virtual machine migration, an important tool of VM technology, was found to have little impact on Web 2.0 technologies except at peak workloads. With the ease and low cost of spawning new machines in cloud computing, Web 2.0 applications can achieve zero service level agreement violations during migrations. Voorsluys et al. showed this can be done by increasing the number of instances during the VM migration, or by sufficiently spacing the interval between migrations. High performance computing and cloud computing may still need further evaluation. Younge et al. showed HPC benchmarking of different hypervisors, but lacked insight or explanations for the conclusions drawn.

Much of the research that examines the performance of virtualization in cloud computing does not adequately adapt its tests to true cloud environments. It is not clear that small scale tests of fewer than ten VMs are indicative of the performance of hundreds or thousands of virtual machines. Moving the experiments of the many performance tests to real-world cloud systems would be beneficial to future applications.

REFERENCES

[1] "All AWS case studies," https://aws.amazon.com/solutions/case-studies/all, accessed: 2014-05-15.

[2] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, and M. Zaharia, "A view of cloud computing," Commun. ACM, vol. 53, no. 4, pp. 50-58, Apr. 2010. [Online]. Available: http://doi.acm.org/10.1145/1721654.1721672

[3] N. Manohar, "A survey of virtualization techniques in cloud computing," in Proceedings of International Conference on VLSI, Communication, Advanced Devices, Signals and Systems and Networking, ser. Lecture Notes in Electrical Engineering, V. S. Chakravarthi, Y. J. M. Shirur, and R. Prasad, Eds. Springer India, 2013, vol. 258, pp. 461-470.

[4] S. Nanda and T. Chiueh, "A survey on virtualization technologies," RPE Report, pp. 1-42, 2005.

[5] G. Wang and T. S. E. Ng, "The impact of virtualization on network performance of Amazon EC2 data center," in Proceedings of the 29th Conference on Information Communications, ser. INFOCOM'10. Piscataway, NJ, USA: IEEE Press, 2010, pp. 1163-1171. [Online]. Available: http://dl.acm.org/citation.cfm?id=1833515.1833691

[6] J. Sommers, P. Barford, N. Duffield, and A. Ron, "Improving accuracy in end-to-end packet loss measurement," in Proceedings of the 2005 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications, ser. SIGCOMM '05. New York, NY, USA: ACM, 2005, pp. 157-168. [Online]. Available: http://doi.acm.org/10.1145/1080091.1080111

[7] C. Clark, K. Fraser, S. Hand, J. G. Hansen, E. Jul, C. Limpach, I. Pratt, and A. Warfield, "Live migration of virtual machines," in Proceedings of the 2nd Conference on Symposium on Networked Systems Design & Implementation - Volume 2, ser. NSDI'05. Berkeley, CA, USA: USENIX Association, 2005, pp. 273-286. [Online]. Available: http://dl.acm.org/citation.cfm?id=1251203.1251223

[8] https://www.poweradvantage.eaton.com/ipm/default.aspx, accessed: 2014-05-15.

[9] W. Voorsluys, J. Broberg, S. Venugopal, and R. Buyya, "Cost of virtual machine live migration in clouds: A performance evaluation," in Cloud Computing, ser. Lecture Notes in Computer Science, M. Jaatun, G. Zhao, and C. Rong, Eds. Springer Berlin Heidelberg, 2009, vol. 5931, pp. 254-265.

[10] I. Foster, Y. Zhao, I. Raicu, and S. Lu, "Cloud computing and grid computing 360-degree compared," in Grid Computing Environments Workshop, 2008. GCE'08. IEEE, 2008, pp. 1-10.

[11] L. Wang, G. von Laszewski, A. Younge, X. He, M. Kunze, J. Tao, and C. Fu, "Cloud computing: a perspective study," New Generation Computing, vol. 28, no. 2, pp. 137-146, 2010.

[12] "Olio web 2.0 toolkit," http://incubator.apache.org/projects/olio.html, accessed: 2014-05-15.

[13] "Faban load generator," http://faban.org/, accessed: 2014-05-15.

[14] A. Keller and H. Ludwig, "The WSLA framework: Specifying and monitoring service level agreements for web services," Journal of Network and Systems Management, vol. 11, no. 1, pp. 57-81, 2003.

[15] A. J. Younge, R. Henschel, J. T. Brown, G. von Laszewski, J. Qiu, and G. C. Fox, "Analysis of virtualization technologies for high performance computing environments," in Cloud Computing (CLOUD), 2011 IEEE International Conference on. IEEE, 2011, pp. 9-16.

[16] S. C. Ahalt and K. L. Kelley, "Blue-collar computing: HPC for the rest of us," Cluster World, vol. 2, no. 11, 2004.

[17] "Standard Performance Evaluation Corporation," http://www.spec.org/omp/, accessed: 2014-05-15.

[18] "HPC Challenge benchmarking," http://icl.cs.utk.edu/hpcc/, accessed: 2014-05-15.

[19] P. R. Luszczek, D. H. Bailey, J. J. Dongarra, J. Kepner, R. F. Lucas, R. Rabenseifner, and D. Takahashi, "The HPC Challenge (HPCC) benchmark suite," in Proceedings of the 2006 ACM/IEEE Conference on Supercomputing. Citeseer, 2006, p. 213.

[20] "About FutureGrid," https://portal.futuregrid.org/about, accessed: 2014-05-15.

[21] P. Barham, B. Dragovic, K. Fraser, S. Hand, T. Harris, A. Ho, R. Neugebauer, I. Pratt, and A. Warfield, "Xen and the art of virtualization," SIGOPS Oper. Syst. Rev., vol. 37, no. 5, pp. 164-177, Oct. 2003. [Online]. Available: http://doi.acm.org/10.1145/1165389.945462

[22] A. Kivity, Y. Kamay, D. Laor, U. Lublin, and A. Liguori, "kvm: the Linux virtual machine monitor," in Proceedings of the Linux Symposium, vol. 1, 2007, pp. 225-230.

[23] V. Oracle, "VirtualBox user manual," 2011.

[24] T. Deshane, Z. Shepherd, J. Matthews, M. Ben-Yehuda, A. Shah, and B. Rao, "Quantitative comparison of Xen and KVM," Xen Summit, Boston, MA, USA, pp. 1-2, 2008.