Study of secure isolation of virtual machines and
their exposure to hosts in a virtual environment.
Gavin Fitzpatrick
School of Computing, Dublin City University
Dublin, Ireland
Email: [email protected]
Abstract: In this paper we look at the fundamentals of virtualization and how it is defined. The paper also discusses isolation for virtual machines within a single host environment and what types of isolation are present. Type 1 and Type 2 hypervisors are described in detail, highlighting the differences in their architectures. The paper then describes a test platform, testing tools and experiments used to test how well isolation is performed within five chosen platforms. Results are discussed on a test-by-test basis and also on a platform-by-platform basis. Related works are discussed at the end of the paper.
I. INTRODUCTION
Virtualization has been around a long time, since the 1970s; however, x86 virtualization [21] only came about in the 1990s. As defined by Popek and Goldberg [?], an x86 Virtual Machine Monitor (VMM) has three primary characteristics:
1) Fidelity: A VMM must provide an environment for programs identical to that of the original machine.
2) Performance: Programs running within the VMM must only suffer minor decreases in performance; the majority of guest CPU instructions should be executed in hardware without intervention from the VMM.
3) Safety: The VMM must have complete control of the system resources; guests or Virtual Machines (VMs) should not be able to access any resource not allocated to them.
The safety characteristic can be further defined as discussed in [45], where isolation is divided into two dimensions:
1) Resource Isolation: Refers to a VMM's ability to isolate the resource consumption of one VM from that of another VM using appropriate algorithms [20]. Using appropriate scheduling and allocation of machine resources, a VMM can enforce strong resource isolation between VMs competing for the same resources.
2) Namespace Isolation: States how a VMM limits access to its file-system, processes, memory addresses, user ids etc. It also affects two aspects of application programs:
i) Configuration Independence: File names of one VM do not conflict with those of another VM.
ii) Security: One VM cannot modify data belonging to another VM stored on the same host.
Namespace and Resource Isolation [45] may not be such a major risk within a private enterprise where all infrastructure is physically and network secured. However, with the emergence of cloud computing [31] and Infrastructure as a Service (IaaS), companies can now rent infrastructure directly from the cloud, for example Amazon EC2 [1]. This allows a single host to contain multiple VMs from many different organizations. As a result, the isolation landscape discussed in [45] becomes very important, especially if there are any misbehaving VMs running on the platform in question. This paper will look at different hypervisor architectures and discuss resource isolation within these hypervisors. Section II details the testing environment, including both the physical and virtual aspects of the environment. Section III discusses the Type 1 hypervisor architectures used in this paper, and Section IV discusses the Type 2 hypervisor architectures used in this paper. Sections III and IV also discuss how secure isolation is achieved within each architecture where possible. Section V discusses the testing tools used for each experiment. Section VI discusses each experiment and the reasons behind it. Sections VII and VIII discuss results on a test-tool and hypervisor basis. Section IX discusses related work in the field, and finally Section X closes with the conclusion of the paper.
II. TEST ENVIRONMENT
A. Physical Environment
All testing was conducted on a Dell Dimension 530 with a VT-enabled [6] Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz, 3 GB of 677 MHz RAM, a single 100 Mbps NIC and 3 x SATA 7200 rpm hard disk drives, all of which were connected individually:
Disk 1 contains the 64-bit operating systems Ubuntu 10.4, Windows 7 and Windows Server 2008 R2.
i) Disk 1 contained a multiboot GRUB loader which allowed Ubuntu 10.4 and Windows Server 2008 R2 to boot. Windows 7 was loaded into the boot loader within Win2008 using a tool called EasyBCD [?].
Disk 2 contains Citrix XenServer 5.6.0 [2].
Disk 3 contains ESXi 4.1.0 [25].
B. Virtual Machine Environment
As the host contains 3 GB of memory, the number of VMs is restricted by the amount of memory in the system. Some hypervisors do offer memory overcommit techniques [25] [2]; however, to maintain consistency, 4 VMs are loaded on the host machine. Each VM is allocated 1 vCPU, 512 MB of RAM and a bridged virtual network card, so it is visible on the physical network through the host's physical network card. The following four VMs are installed and configured on each hypervisor:
VM1 contains a guest OS of Windows XP SP3.
VM2 contains a guest OS of Windows 2003 Server.
VM3 contains a guest OS of Ubuntu 10.4.
VM4 contains a guest OS of Ubuntu 10.4; this guest is also responsible for running benchmark tests against the host.
There are also 2 additional zombie VMs running on 2 laptops cabled onto the same network. These VMs run on VirtualBox [22] within each laptop. Each of these VMs is installed with mausezahn [9], which is a network traffic generation tool used for network testing. All guest OSs are 32-bit. For the remainder of this paper each guest stated above will be referred to as VMX, with X referring to the number of the VM; the zombies will be referred to as VMZ for the remainder of this paper.
III. TYPE 1 HYPERVISOR - NATIVE OR BARE-METAL
x86 architectures are designed around 4 rings of privilege [30]:
Ring 3: executes user mode; has no direct access to the underlying hardware.
Ring 2: not used by modern operating systems.
Ring 1: not used by modern operating systems.
Ring 0: has full access to the underlying hardware within the host system.
Type 1 hypervisors usually run at Ring 0, or in root mode on hardware-assisted systems [21], and all access to the underlying hardware resources is controlled by the hypervisor. Although in the past Para Virtualization, Binary Virtualization and Hardware Assisted Virtualization represented different VMM architectures, today, with Intel's VT-x [6] and AMD's AMD-V [28] featuring in all new systems, the current Type 1 hypervisors all take advantage of hardware virtualization; both VT-x and AMD-V operate at Ring -1. However, as discussed in [30], there are some notable performance differences:
Software outperforms hardware for workloads that perform I/O, create processes or rapidly switch contexts between guest context and host context.
Hardware outperforms software for workloads rich in system calls.
As stated in [24], Type 1 hypervisors are designed with greater levels of isolation in mind by maintaining separate memory partitions for each guest, while allowing user programs in Ring 3 to execute natively on the CPU.
A. XenServer
The first Type 1 hypervisor is Citrix XenServer 5.6 [2], which is a paravirtualized hypervisor based on the open-source Xen project [13]. The VMM automatically loads a secure Linux OS as Domain 0 within the XenServer. All guest interactions with the hypervisor are managed via Domain 0, which is itself a privileged guest sitting on top of the host [3]. XenServer schedules CPU time slices to guest domains using the Borrowed Virtual Time algorithm [34]; XenServer also offers I/O rings for data transfer from guests to the Xen hypervisor [42]. Network access from guests is controlled via VIFs [42], which have 2 I/O rings associated with them for sending and receiving data using a round-robin algorithm. However, paravirtualization requires modification to guest OSs in order to perform correctly; [42] discusses how many additional lines of code are required to allow an OS to perform safely within this environment, as stated in its Table 2. For testing purposes there were no configuration changes made to the XenServer environment; all guest domains are stored on local disk 2.
B. Hyper-V R2
Microsoft's Hyper-V [44] is also a paravirtualized hypervisor which follows the same architecture as XenServer; however, Microsoft uses partitions instead of domains, therefore a secure version of Windows 2008 R2 is loaded as the root partition, i.e. Domain 0 [26] [3]. Hyper-V's architecture is described in detail by Russinovich [44], who states that the kernel runs at Ring 0 while the VMM runs at Ring -1, which allows full control of the execution of kernel code. The Hyper-V process Hvix64.exe [6] for VT-x, or Hvax64.exe for AMD-V [28], is loaded as part of the Win2008 boot process in order to launch itself into Ring -1. Each child partition is represented by a Vmwp.exe process which manages the state of that child partition. The Vmms.exe process is used to create the Vmwp.exe process. Hyper-V offers two features which allow greater native performance from its guests:
1) Enlightenments: The latest Microsoft operating systems can directly request services from the hypervisor using the Microsoft hyper-call API, allowing near-native performance of guest-executed code on the hypervisor. Hyper-calls can also be used to immediately schedule another virtual processor to access CPUs, reducing the use of spin-locks on multiprocessors.
2) Host Integration Services: Available to Microsoft and Linux guests [8]; when installed, they allow near-native access to hardware devices and consist of 3 components:
Virtual Service Clients (VSC): reside at Ring 0 in the child partition, replace guest device drivers and communicate via the VMB with the hypervisor.
Virtual Machine Bus Driver (VMB): presents a communication channel through which guests within child partitions can communicate with the root partition.
Virtual Service Providers (VSP): reside at Ring 0 in the root partition and initiate requests via the root's device drivers on behalf of guests running within the child partitions.
For testing purposes there were no configuration changes made to the Hyper-V environment or the root partition Win2008 R2; all guest domains are stored on local disk 1 within the Windows 2008 partition.
C. ESXi
VMware's ESXi [25] is considered a bare-metal hypervisor which uses both binary and hardware-assisted virtualization [21]. Unlike XenServer and Hyper-V, ESXi does not use a preloaded Domain 0 guest for host management; all access to and control of the physical resources is handled by the hypervisor. ESXi schedules CPU time slices to the guest OSs using a proportional-share-based algorithm [20] which arbitrates access to the CPU from different guests. All access is defined by shares, which can be customized by the administrator of the host, thereby allowing higher-priority VMs to have more CPU time (shares). Storage I/O is controlled in the same way for guests using Storage I/O Control [14], meaning guests have access to I/O within the host via shares, which again can be controlled by an administrator. Network I/O is controlled via Network I/O Control [11], meaning guests have pre-assigned shares for network access.
For testing purposes there were no configuration changes made to the ESXi 4.1 [25] environment; all guests are stored on local disk 2 within the VMFS partition, and all guests have equal share access to the CPU, I/O and memory.
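As a toy illustration of the proportional-share idea (a sketch of the concept only, not VMware's actual scheduler; the VM names and share values below are hypothetical), each guest receives a slice of CPU time in proportion to its configured shares:

# Toy proportional-share allocation: each VM's slice of a scheduling
# period is its fraction of the total configured shares.
# VM names and share values are hypothetical.

def allocate(period_ms, shares):
    """Split a scheduling period (in ms) proportionally to shares."""
    total = sum(shares.values())
    return {vm: period_ms * s / total for vm, s in shares.items()}

shares = {"VM1": 1000, "VM2": 1000, "VM3": 2000, "VM4": 4000}
print(allocate(100, shares))
# -> {'VM1': 12.5, 'VM2': 12.5, 'VM3': 25.0, 'VM4': 50.0}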
IV. TYPE 2 HYPERVISOR - HOSTED
Type 2 hypervisors are loaded into memory within the host on a non-virtualized OS via Ring 0 drivers [22] [27]. The hypervisor exists as a number of processes within the host OS and is therefore dependent on CPU scheduling within the host OS.
A. VirtualBox
VirtualBox [22] claims to offer native virtualization [23], as guest code can run unmodified directly on the host computer. However, the VirtualBox hypervisor resides within the host OS and does not require VT-x [6] or AMD-V [28] to operate; it will be considered a Type 2 hypervisor for the remainder of the paper, as its architecture is similar to that of VMware's Workstation [47]. VirtualBox runs two processes:
1) Vboxsvc.exe: manages and tracks all VirtualBox processes running within the hosted environment.
2) Virtualbox.exe: this process, running within the host operating system, is responsible for the following functions:
i) Contains a complete guest operating system, including all guest processes and drivers.
ii) Contains a Ring 0 driver which sits inside the host OS and is responsible for:
allocating physical memory for the virtual machine;
switching between the host Ring 3 and guest context.
VirtualBox's argument regarding native virtualization [23] is that the CPU can run in one of four states while guests are running:
1) execute host Ring 3 code (other host processes) or host Ring 0 code;
2) emulate guest code - an emulator steps in to translate this code into usable Ring 3 code;
3) execute guest Ring 3 code natively;
4) execute guest Ring 0 code natively - if VT-x or AMD-V is enabled this is executed at Ring 0; however, if VT-x or AMD-V is disabled or not present, the guest is fooled into running at Ring 1.
VirtualBox 3.2.6 running on the Windows 7 host on Disk 1 with the default configuration was used in this paper.
B. Workstation
VMware's Workstation is another hosted hypervisor which, depending on hardware resources, allows multiple guests to run concurrently on top of the host OS. It also uses processes [47] within the host OS to control and manage its guests. Although there is no official documentation for the Workstation version 6 architecture [?], I have found an article [18] which states in its section 3.2 that Workstation [27] and VirtualBox follow a similar architecture:
vmware.exe - also known as the VMApp; resides in Ring 3 and handles I/O requests from the guests via system calls.
vmware-vmx.exe - also known as the VMX driver; resides at Ring 0 within the host OS, and guests communicate via the VMX driver to the host.
VMM - unknown to the host OS; gets loaded into the kernel at Ring 0 when the VMX driver is executed (i.e. when a guest VM starts up).
Workstation 6.5 was used within the experiments with default settings in place.
V. TESTING TOOLS
There are a number of benchmarking tools which can be used to look at performance metrics within virtual machines, such as VMware's VMmark and the Isolation Benchmark Suite [7]. There are 4 key resources which are shared across all guests within a virtual environment: CPU, memory, disk I/O and network I/O. The VMM or hypervisor is responsible for sharing and scheduling these resources to each VM in a fair manner. However, if one VM is misbehaving, as demonstrated in Section VI, the remaining guests may not receive their fair share of resources from the VMM. As a result I have chosen the following testing tools to look at each resource during each experiment and compare findings.
A. RAMspeed
I used RAMspeed [17] to measure RAM read/write performance within the test VM. The RAMspeed test performs 4 sub-test operations, sketched in the code below:
Copy (A=B): transfers data from B to A.
Scale (A=m*B): modifies B before writing to A.
Add (A=B+C): reads in B and C, adds these values, then writes to A.
Triad (A=m*B+C): a combination of the Scale and Add operations.
10 rounds are performed for both integer and floating-point calculations. Each sub-test is averaged within each round, and an overall average is then taken across the 10 rounds to give an accurate reading.
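A minimal sketch of the four sub-operations (on Python lists rather than the raw memory buffers RAMspeed itself uses; the buffer size N and scale factor m are illustrative):

N, m = 1_000_000, 3.0          # illustrative buffer size and scale factor
B = [1.0] * N
C = [2.0] * N

A = list(B)                            # Copy:  A = B
A = [m * b for b in B]                 # Scale: A = m*B
A = [b + c for b, c in zip(B, C)]      # Add:   A = B + C
A = [m * b + c for b, c in zip(B, C)]  # Triad: A = m*B + C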
B. System Stability Tester
I used systester [?] to benchmark and test CPU performance using 2 different algorithms to calculate Pi to 512K digits:
the Borwein [15] quadratic convergence algorithm, which runs for five consecutive rounds;
the Gauss-Legendre [32] algorithm, which is far more efficient at calculating Pi than Borwein; I therefore decided to run this algorithm ten times for each test to keep the results comparable. A sketch of Gauss-Legendre follows this list.
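For illustration, a minimal Python sketch of the Gauss-Legendre iteration (the underlying algorithm only, not systester's implementation; the precision of the estimate roughly doubles each round):

from decimal import Decimal, getcontext

def gauss_legendre_pi(digits):
    """Approximate Pi to `digits` decimal digits with Gauss-Legendre."""
    getcontext().prec = digits + 10   # a few guard digits
    a = Decimal(1)
    b = Decimal(1) / Decimal(2).sqrt()
    t = Decimal(1) / 4
    p = Decimal(1)
    for _ in range(digits.bit_length() + 2):  # precision ~doubles per round
        a_next = (a + b) / 2
        b = (a * b).sqrt()
        t -= p * (a - a_next) ** 2
        a = a_next
        p *= 2
    return (a + b) ** 2 / (4 * t)

print(gauss_legendre_pi(50))   # e.g. check the first 50 digits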
C. FIO I/O Tool
This tool was used to benchmark I/O [4] to the disk subsystem from within VM4. Ten 32 MB files were written directly to the disk within the host using the libaio engine; each file was filled with random writes of 32 KB blocks, recording IOPS and the maximum average bandwidth. A sketch approximating this workload is shown below.
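FIO itself (with the libaio engine) was used for the actual measurements; the Python sketch below merely approximates the described workload, and the file names are illustrative:

import os
import random

FILE_SIZE = 32 * 1024 * 1024   # ten 32 MB files
BLOCK = 32 * 1024              # 32 KB random writes
block = os.urandom(BLOCK)

for i in range(10):
    with open("testfile%d.bin" % i, "wb") as f:
        f.truncate(FILE_SIZE)
        offsets = list(range(0, FILE_SIZE, BLOCK))
        random.shuffle(offsets)            # randomise the write order
        for off in offsets:
            f.seek(off)
            f.write(block)
        f.flush()
        os.fsync(f.fileno())               # force the data to disk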
D. Ping tests
This test examines network I/O. One hundred ICMP [?] echo requests are sent from the virtual network card inside guest VM4 to three locations:
1) VM2 within the same host;
2) the host's physical network interface;
3) the physical gateway.
These tests measure how efficient the internal virtual networking was during the experiments carried out in this paper. The average response time over the hundred ICMP packet requests is calculated for each test for further review; the sketch below shows the measurement approach.
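A minimal sketch of the measurement, shelling out to the system ping utility on the Linux test guest (the destination addresses are hypothetical placeholders for VM2, the host NIC and the gateway):

import re
import subprocess

def average_rtt(dest, count=100):
    """Average round-trip time parsed from ping's summary line."""
    out = subprocess.run(["ping", "-c", str(count), dest],
                         capture_output=True, text=True).stdout
    # Linux ping prints: "rtt min/avg/max/mdev = a/b/c/d ms"
    m = re.search(r"= [\d.]+/([\d.]+)/", out)
    return float(m.group(1)) if m else None

# hypothetical addresses for VM2, the host NIC and the gateway
for dest in ("192.168.1.12", "192.168.1.10", "192.168.1.1"):
    print(dest, average_rtt(dest), "ms")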
E. Geekbench
Geekbench [5] is a proprietary benchmarking tool for processor and memory performance; tests are scored based on the following factors:
Integer: Blowfish, Text Compress/Decompress.
Floating Point: Primality test, Dot Product.
Memory: Read/Write Sequential, Stdlib Copy/Write.
Stream: Copy, Scale, Add, Triad - similar to the RAMspeed tests.
Each score is combined and an average is taken to represent a single score across all four factors; the higher the value, the better the score.
VI. EXPERIMENTS
There were in total 10 tests performed on all 5 platforms for each experiment described within this section. All tests are performed from VM4 running the Ubuntu 10.4 operating system.
A. System Idle / Control
For reference, a control experiment performed the tests described in Section V on an idle platform with 4 VMs running. Only the testing VM was active during this time. For all other experiments, one or more of the remaining VMs would misbehave, depending on the experiment performed.
B. Experiment 1 (Exp1) - Crashme
As discussed in [41], crashme [33] subjects an OS to a stress test by continually attempting to execute random byte sequences until a failure occurs. Three parallel tests are performed on the misbehaving guest VM1; each test uses one of three random number generators: RAND from the C library, the Mersenne Twister [10], and VNSQ (a variation of the middle-square method). These tests are executed as follows: +1000 666 50 00:30:00
+1000: specifies the size of the random data string in bytes; the + sign states that the storage for these bytes is malloc'd each time.
666: the input seed for the random number generator.
50: how many times to loop before exiting the sub-process normally.
00:30:00: all tests run for a maximum of 1800 seconds, or 30 minutes.
During this period VM1 did not crash, therefore a summary of exit codes for each execution is collected in a log file for further use.
Crashme causes the misbehaving VM to run at 100% CPU utilization. A minimal sketch of the technique follows.
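A minimal crashme-style sketch (Linux-only, via ctypes; crashme itself is a C program with far more bookkeeping, so this only illustrates the core technique of executing random bytes in a child process and logging the resulting signal):

import ctypes
import mmap
import os
import random

random.seed(666)  # the seed from the command line above
page = mmap.mmap(-1, 4096,
                 prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
page.write(bytes(random.getrandbits(8) for _ in range(1000)))  # +1000 bytes

addr = ctypes.addressof(ctypes.c_char.from_buffer(page))
jump = ctypes.CFUNCTYPE(None)(addr)   # treat the random bytes as code

pid = os.fork()
if pid == 0:
    jump()           # child: almost certainly faults (SIGILL/SIGSEGV)
    os._exit(0)      # only reached if the random bytes return cleanly
_, status = os.waitpid(pid, 0)
if os.WIFSIGNALED(status):
    print("child killed by signal", os.WTERMSIG(status))  # exit-code log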
C. Experiment 2 (Exp2) - Fuzz testing
Fuzz [36] is a random input testing tool used on applications; it subjects them to streams of completely random input messages and can be considered an application error-checking tool. Ormandy also uses this testing approach in [41]; however, unlike crashme, which executes its own processes against the CPU, fuzz sends random messages via the message queue of the target application's thread. As a result this causes the target application to misbehave. An article by Symantec [35] investigates how different types of hypervisors can be detected from inside a virtual machine. This is possible due to additional functionality made available via a private guest-to-host channel allowing instructions from the host to pass through to the guest, such as reboot, shutdown and clipboard information. VMware Tools [25] and XenServer Tools [2] are examples of tools which must be installed within the guest for this functionality to exist. However, it was not possible to run the fuzz application against these processes on all platforms, therefore I targeted a common application which existed in VM1 on each platform. Fuzz was run against calc.exe with the following command:
fork -ws -a c:/windows/system32/calc.exe -e 78139
which resulted in a crash of the calc application and CPU usage of 100%. A sketch of the message-queue fuzzing technique follows.
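A minimal Windows-only sketch of message-queue fuzzing in the spirit of fuzz, posting random window messages at the target application (the window title and the message/parameter ranges are illustrative assumptions, not fuzz's actual parameters):

import ctypes
import random
from ctypes import wintypes

user32 = ctypes.windll.user32
user32.PostMessageW.argtypes = [wintypes.HWND, wintypes.UINT,
                                wintypes.WPARAM, wintypes.LPARAM]

hwnd = user32.FindWindowW(None, "Calculator")  # assumed window title
if not hwnd:
    raise SystemExit("target window not found")

random.seed(78139)                     # seed from the command line above
for _ in range(100000):
    msg = random.randint(0, 0x03FF)    # random WM_* message id
    wparam = random.randint(0, 0xFFFF)
    lparam = random.randint(0, 0x7FFFFFFF)
    user32.PostMessageW(hwnd, msg, wparam, lparam)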
D. Experiment 3a (Exp3a) - Forkbomb on 1 VM
Fork bombs are a well-known technique, used by many benchmark tools such as [7], for creating a denial-of-service attack that results in a misbehaving guest. A forkbomb is a parent process which forks new child processes until all resources are exhausted; all allocated memory is consumed by these child processes within the misbehaving VM. This experiment tests what pressure or additional load is placed on the MMU within the VMM of the system; the workload is sketched below.
The first experiment runs a forkbomb on 1 misbehaving guest, VM1; along with VM4 this causes up to 33% of the host memory to be active in the guests, causing a low-to-medium load on the MMU.
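A minimal sketch of the forkbomb workload used in Exp3a-3c (Linux guest; the per-process allocation size is illustrative, and this should only ever be run inside a disposable VM):

import os
import time

ALLOC_MB = 16   # illustrative per-process memory grab

while True:
    if os.fork() == 0:
        hog = bytearray(ALLOC_MB * 1024 * 1024)  # child touches memory,
                                                 # pressuring the MMU/pager
    time.sleep(0.01)  # both parent and children loop, forking again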
E. Experiment 3b (Exp3b) - Forkbomb on 2 VMs
A forkbomb is executed in 2 misbehaving guests, VM1 and VM2. Along with the workload carried out by VM4, the 2 misbehaving guests cause up to 33% of the host memory to be active in the guests, causing a medium load on the MMU.
F. Experiment 3c (Exp3c) - Forkbomb on 3 VMs
The third forkbomb test runs a forkbomb in 3 misbehaving guests, VM1, VM2 and VM3; these, in addition to VM4, cause up to 66% of the host memory to be active in the guests, causing a high load on the MMU.
G. Experiment 4 - DoS attacks
Two zombie machines (VMZ) attack VM2 using the Mausezahn [9] network packet generator. This allows 2 attack scenarios to take place against the host and VM2, which would appear as a misbehaving guest from the VMM's perspective.
Experiment 4a (Exp4a) - DoS attack on port 80: Both VMZ send SYN requests to port 80 of an IIS webserver running within VM2; each SYN packet comes from a random source address, meaning the webserver gets overloaded with SYN requests. VM2's CPU usage jumps to, and holds at, 85%.
Experiment 4b (Exp4b) - DoS attack against all ports using SYN requests with 1 KB padding: Both VMZ send SYN requests to VM2, completely saturating the physical and virtual networks with received traffic of 12000 KB/sec from both VMZ. A sketch of the Exp4a traffic follows.
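A minimal sketch of the Exp4a traffic, written with Scapy rather than mausezahn (the target address is a hypothetical placeholder for VM2; run only on an isolated test network, with root privileges):

from scapy.all import IP, TCP, RandIP, RandShort, send

TARGET = "192.168.1.12"   # hypothetical address of VM2

# SYN packets with a fresh random source address per packet, aimed at
# port 80; loop=1 keeps sending until interrupted.
send(IP(src=RandIP(), dst=TARGET) /
     TCP(sport=RandShort(), dport=80, flags="S"),
     loop=1, verbose=False)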
VII. RESULTS BY TESTING TOOL
This section discusses the results of each experiment performed with each testing tool; note that all platforms are summed and averaged into a single value for each experiment.
A. Ramspeed
Fig. 1. Ramspeed with Integers
Fig. 1 and Fig. 2 show how the RAMspeed tests performed across all platforms for each experiment. There is a clear deterioration in access to memory via the MMU during experiments Exp3b and Exp3c, which involve forkbombs that attack 33% to 50% of the physical memory within the system. Exp4a and Exp4b, which involve attacks on the network and subsequently the I/O subsystem within each platform, also see a large performance drop in RAM access.
Fig. 2. Ramspeed with Floating Point Numbers
B. System Test Suite
Fig. 3. Calculate Pi using Gauss-Legendre
Fig. 4. Calculate Pi using Borwein
System Test Suite is a CPU benchmark that calculates Pi to 512K digits, over 10 rounds using Gauss-Legendre [32] in Fig. 3 and over 5 rounds using Borwein [15] in Fig. 4. Observations:
Both tests clearly illustrate the same loss in CPU performance during Exp4a; however, there is an improvement in Exp4b across all platforms tested.
Gauss [32] shows high CPU isolation over Exp1-3b.
Borwein [15] shows a decrease in CPU performance during Exp1, which runs crashme operations against the CPU.
Both tests show a decrease in performance during the control test, when all VMs are idle apart from VM4.
C. Geekbench
Fig. 5. Geekbench CPU and Memory tests
Geekbench is a proprietary CPU and memory benchmark suite which runs many CPU and memory tests from VM4 on the host. Fig. 5 illustrates scores from all platforms over each experiment. Observations:
The control experiment, which has no misbehaving guests, returns the highest score; however, there is a gradual decline from Exp1 to Exp3b and Exp4b, during which numerous CPU and memory tests take place.
Exp3c (where 50% of the host's physical memory is tested) and Exp4a (where VM2 suffers from high CPU and network I/O) return the lowest scores across all platforms.
Geekbench reinforces the two previous tests in A and B, which show a clear degradation of CPU and memory performance in Exp3c and Exp4a.
D. FIO - I/O
Fig. 6. Random Writes to disk
FIO [4] is an open-source I/O benchmarking tool which can perform a variety of operations on a disk subsystem. Observations of Fig. 6:
There is a clear degradation in performance during Exp3c across all hypervisors.
Fig. 7. Pings to Gateway from VM4
Fig. 8. Pings to Host from VM4
E. Ping Tests
Ping tests are performed from VM4 to the destinations shown in Fig. 7, 8 and 9. Ping observations:
Ping Gateway, Fig. 7: Exp1 shows a 30% increase in ping responses across all platforms; however, all other experiments up to 3c remain close to 1 second. Exp4a and 4b return ping response times greater than 10 seconds with packet loss greater than 50%. There are several factors behind this: as the host network card is being bombarded with network requests, it may not be able to send or receive ICMP packets to the gateway on the physical network.
Ping Host, Fig. 8: All experiments returned ping responses between .3 and .4 seconds; Exp2 and Exp3b exhibited the lowest response times.
Ping VM, Fig. 9: The control experiment across all platforms shows a high ping response time; this could be due to the low level of context switches [30] resulting from the majority of VMs running idle. Exp3b shows an increase in response times due to a forkbomb being launched on VM2. Exp4a and Exp4b were not included due to the failure of some platforms to register any response times from the VM.
Fig. 9. Pings to VM2 from VM4
VIII. RESULTS BY HYPERVISOR
Each hypervisor is compared against the average score across all platforms for each test, as shown in Figs. 1-9.
A. Virtualbox
1) Memory (Fig. 1, 2, 5):
i) Geekbench: tests show a score 2.4% below average across all experiments; it is interesting to point out that Exp1 scores very poorly.
ii) Ramspeed: follows the average trend across all experiments; however, the initial control experiment returns poor results for both integer and floating-point values.
2) CPU (Fig. 3, 4): consistently performs 5.5% below the average trend across all experiments.
3) Disk (Fig. 6): performs 22% below the average trend across all experiments; however, there are spikes in performance during Exp3b and Exp4a.
4) Network: ping response times are consistently slower than average for all experiments apart from Exp4a and 4b.
As noted under the Geekbench test, Exp1 causes a high performance penalty, which would suggest that crashme is treated as emulated guest code and must be translated into usable Ring 3 code. Also, VirtualBox operates at 5.5% below the CPU average across all experiments, which would suggest that a greater amount of Ring 3 code must be translated before it can be processed by the hypervisor. Lower disk I/O is common across all non-paravirtualized platforms.
B. Workstation
1) Memory (Fig. 1, 2, 5):
i) Geekbench: tests show a score 1.1% below average; however, there are big drop-offs in Exp3c, Exp4a and Exp4b.
ii) Ramspeed: follows the average trend across all platforms for all experiments; however, integer values recorded a 3.3% increase over average, while floating-point values recorded a 6% increase over the average trend.
2) CPU (Fig. 3, 4): consistently performs 1.2% below average in all experiments in calculating Pi to 512K digits [32], [15].
3) Disk (Fig. 6): consistently records scores 19% below average for all experiments, keeping in line with the average trend.
4) Network (Fig. 7-9): ping response times are faster than average for the Host and VM tests.
Workstation does not seem to suffer from the same resource isolation issues as VirtualBox, although its disk I/O performance is 18% below average. Memory performance is the highest across all platforms within Workstation when running the 8 experiments presented in this paper. CPU is also slightly below the average; however, as this system is hosted, the hypervisor must also contend with the host operating system for access to the CPU.
C. XenServer
1) Memory (Fig. 1, 2, 5):
i) Geekbench: follows the average trend; however, Exp3c is below average, while all other experiments are slightly above average.
ii) Ramspeed: consistent with the average for all experiments apart from Exp3b to Exp4b, where MB/s trends below average; memory is 4.5% slower than the average scores.
2) CPU (Fig. 3, 4): both algorithms score 3% faster than the average times, as shown in the graphs.
3) Disk (Fig. 6): consistently performs much higher than average, as shown in Fig. 6, with up to 41% greater performance.
4) Network (Fig. 7-9): Host and Gateway ping response times are better than average apart from Exp4a and 4b; VM ping responses are about average, as shown in the graphs.
Although ESXi consistently outperforms all the hypervisors tested, XenServer illustrates excellent disk I/O and good network I/O performance, which can be traced back to its I/O ring architecture [42] and its round-robin-based algorithm, whereby all guest domains get equal access to I/O. XenServer's CPU scheduling [34] can also be seen to offer good resource isolation, as both memory and CPU closely track the average trends.
D. Hyper-V
1) Memory (Fig. 1, 2, 5):
i) Geekbench: overall, Hyper-V follows the average Geekbench scores; however, Exp3b, 3c, 4a and 4b report higher scores than average, while all other experiments report lower-than-average scores.
ii) Ramspeed: overall memory access is 3.4% slower than the average trend. All experiments up to Exp3b show below-average scores.
2) CPU (Fig. 3, 4): for the Pi algorithms, Gauss [32] is identical to the average across all platforms; however, Borwein [15] is 2.5% slower than the average.
3) Disk (Fig. 6): disk write access is 18% faster than the average; however, Exp3b and Exp3c show a marked loss in bandwidth to the disk subsystem.
4) Network (Fig. 7-9): ping responses to both the Host and VM2 are slower than average across all experiments.
Hyper-V and XenServer offer greater I/O performance for all experiments presented in this paper. This performance gain may be explained through Hyper-V's Host Integration Services [44]. Although no attacks were made directly against the I/O subsystem, the forkbombs running within Exp3a, 3b and 3c would cause page file access [16] in order to reduce the level of RAM usage within one of the two Microsoft child partitions. This would cause increased demand on the disk subsystem; as a result, disk performance is reduced on both XenServer and Hyper-V during these experiments. Hyper-V performs below average for memory and CPU isolation.
E. ESXi
1) Memory (Fig. 1, 2, 5):
i) Geekbench: looking at the Geekbench tests across all platforms and experiments, ESXi performs 2.2% above average.
ii) Ramspeed: ESXi scored above average for experiments 1, 2 and 4, but falls slightly below average for the 3 forkbomb experiments; overall, ESXi performs 2.5% better than average.
2) CPU (Fig. 3, 4): consistently outperforms all other platforms in all experiments for both Pi algorithms [32], [15], resulting in performance 5% above average.
3) Disk (Fig. 6): consistently recorded slightly below-average scores for all experiments; however, in experiment 3c there is a very big hit on disk write access, and this trend results in performance 18% below average for write access to the disk subsystem.
4) Network (Fig. 7-9): ping response times are consistently better than average for all experiments apart from Exp4a and 4b.
Clearly VMware's work on the CPU scheduler [20] and Network I/O Control [11] has improved resource isolation within the hypervisor, as CPU and network consistently outperform the other platforms; however, more work regarding isolation is required around the MMU [18] to improve memory access during high-load times. More work is also required to improve I/O access to the disk subsystem [14].
IX. RELATED WORK
There has been quite a lot of related work in this area, due to the isolation of virtual machines becoming a hot topic as part of the current move towards IaaS [31] within the cloud. Similar stress tests were performed in [40] and [39], which look at misbehaving virtual machines in XenServer, VMware, OpenVZ [12] and Solaris Containers [19]. Similar benchmarking work was also performed in [37], where a number of stress tests are performed and analyzed for further study. Another interesting piece of research is [38], which involved stress testing of applications and analysis of the performance of virtual environments. Finally, at a data center level, Intel [46] have been researching performance benchmarking of virtual machines within a multiple-host environment.
X. CONCLUSION
Based on the testing tools and experiments covered in this paper, it is clear that paravirtualization [21] as used in the Hyper-V [44] and XenServer [42] architectures offers higher I/O disk subsystem throughput, via I/O rings [42] within XenServer and Integration Services [8] within Hyper-V, which results in higher I/O resource isolation [45] based on the experiments undertaken. However, VMware's ESXi hypervisor [25] offers higher resource isolation of CPU resources using its CPU scheduler [20], and better network isolation based on its Network I/O Control technology [11]. VMware have also released Storage I/O Control [14], which was tested as part of this paper but proved inefficient based on the experiments undertaken. A new player in the market is KVM [29], which is Red Hat's hypervisor replacement for Xen [13] and offers comparable performance, as shown in [43]. Although both VirtualBox and Workstation are considered Type 2 hypervisors [18], both install Ring 0 drivers into the host OS [22] [27]; as a result, both allow guest-context Ring 3 code to run at host-context Ring 3 with very low levels of emulation or code translation to native Ring 3. Both hypervisors performed well during the resource isolation experiments, as both were close to the average trends across all experiments.
REFERENCES
[1] Amazon Elastic Compute Cloud (Amazon EC2). http://aws.
amazon.com/ec2/.
[2] Citrix XenServer. http://www.citrix.com/English/ps2/products/product.asp?contentID=683148.
[3] Comparing VMware ESXi/ESX and Windows Server 2008 with Hyper-V. http://www.citrix.com/site/resources/dynamic/salesdocs/Citrix_XenServer_Vs_VMware.pdf.
[4] FIO I/O benchmark tool.
[5] Geek Benchmark Suite. http://www.primatelabs.ca/geekbench/.
[6] Intel Virtualization Technology Specification for the IA-32 Intel
Architecture. http://www.intel.com/technology/itj/2006/v10i3/
1-hardware/6-vt-x-vt-i-solutions.htm.
[7] Isolation Benchmark Suite. http://web2.clarkson.edu/class/
cs644/isolation/.
[8] Linux Integration Services RC Released. http://blogs.technet.com/b/iftekhar/archive/2009/07/22/microsoft-hyper-v-r2-linux-integration-services-rc-released.aspx.
[9] Mausezahn. http://packages.ubuntu.com/karmic/mz.
[10] Mersenne Random Number Generator. http://www-personal.umich.edu/~wagnerr/MersenneTwister.html.
[11] Network I/O Control: Architecture, Performance and Best Practices. http://www.vmware.com/files/pdf/techpaper/VMW_Netioc_BestPractices.pdf.
[12] OpenVZ - container-based virtualization for Linux. http://wiki.openvz.org/Main_Page.
[13] OSS - XEN. http://www.xen.org.
[14] Performance Implications of Storage I/O Control in vSphere Environments with Shared Storage. http://www.vmware.com/files/pdf/techpaper/vsp_41_perf_SIOC.pdf.
[15] Quadratic Convergence of Borwein. http://www.pi314.net/eng/
borwein.php.
[16] RAM, virtual memory, pagefile and all that stuff. http://support.microsoft.com/kb/2267427.
[17] RAMspeed, a cache and memory benchmarking tool.
[18] H. Douglas and C. Gehrmann. Secure virtualization and multicore platforms state-of-the-art report.
[19] Solaris containers. http://www.sun.com/software/solaris/ds/
containers.jsp.
[20] The CPU Scheduler within VMware ESX 4. Whitepaper.
[21] Understanding full virtualization, paravirtualization and hardware assist. http://www.vmware.com/files/pdf/VMware_paravirtualization.pdf.
[22] VirtualBox Architecture. http://www.virtualbox.org/wiki/VirtualBox_architecture.
[23] VirtualBox Native Virtualization. http://www.virtualbox.org/wiki/Virtualization.
[24] Hypervisor Type Comparison. http://www.xen.org/files/Marketing/HypervisorTypeComparison.pdf.
[25] VMWare vSphere Hypervisor ESXi. http://www.vmware.com/
products/vsphere-hypervisor/index.html.
[26] Windows 2008 r2. http://www.microsoft.com/
windowsserver2008/en/us/default.aspx.
[27] Workstation Processes. http://www.extremetech.com/article2/0,
2845,1156611,00.asp.
[28] AMD64 Virtualization Codenamed Pacifica Technology: Se-
cure Virtual Machine Architecture Reference Manual, May
2005.
[29] A. Kivity, Y. Kamay, D. Laor, U. Lublin, and A. Liguori. kvm: the Linux virtual machine monitor. 2007.
[30] K. Adams and O. Agesen. A Comparison of Software and Hardware Techniques for x86 Virtualization. 2006. http://www.vmware.com/pdf/asplos235_adams.pdf.
[31] M. Armbrust, A. Fox, R. Griffith, A. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, and M. Zaharia. Above the clouds: A Berkeley view of cloud computing, Feb 2009. http://www.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-28.pdf.
[32] L. Berggren, J. M. Borwein, and P. B. Borwein. Gauss-Legendre algorithm.
[33] G. Carrette. Crashme: Random input testing. http://people.delphiforums.com/gjc/crashme.html.
[34] K. J. Duda and D. R. Cheriton. Borrowed-Virtual-Time (BVT) scheduling: supporting latency-sensitive threads in a general-purpose scheduler. Pages 261-276, December 1999.
[35] P. Ferrie. Attacks on Virtual Machine Emulators. www.symantec.com/avcenter/reference/Virtual_Machine_Threats.pdf.
[36] J. E. Foster and B. P. Miller. An Empirical Study of Robustness
of Windows NT Applications Using Random Testing. Seattle,
2000.
[37] J. Griffin and P. Doyle. Desktop virtualization scaling experiments with VirtualBox. 9th IT and T Conference, 2009. http://arrow.dit.ie/ittpapnin/4.
[38] Y. Koh, R. Knauerhase, P. Brett, Z. Wen, and C. Pu. An analysis of performance interference effects in virtual environments, 2007.
[39] J. Matthews, T. Deshane, W. Hu, J. Owens, M. McCabe, D. Dimatos, and M. Hapuarachchi. Quantifying the performance isolation properties of virtualization systems.
[40] J. Matthews, T. Deshane, W. Hu, J. Owens, M. McCabe, D. Dimatos, and M. Hapuarachchi. Performance isolation of a misbehaving virtual machine with Xen, VMware and Solaris Containers. 2007.
[41] T. Ormandy. An empirical study into the security exposure to hosts of hostile virtualized environments.
[42] P. Barham, B. Dragovic, K. Fraser, S. Hand, T. Harris, A. Ho, R. Neugebauer, I. Pratt, and A. Warfield. Xen and the Art of Virtualization. 2003.
[43] T. Deshane, Z. Shepherd, J. Matthews, M. Ben-Yehuda, A. Shah, and B. Rao. Quantitative Comparison of Xen and KVM. June 2008.
[44] M. Russinovich. Inside Windows Server 2008 Kernel
Changes. http://technet.microsoft.com/en-us/magazine/2008.03.
kernel.aspx.
[45] S. Soltesz, M. Fiuczynski, L. Peterson, M. McCabe, and J. Matthews. Virtual Doppelgänger: On the Performance, Isolation and Scalability of Para- and Paene-Virtualized Systems. http://www.cs.princeton.edu/~mef/research/paenevirtualization.pdf.
[46] O. Tickoo, R. Iyer, R. Illikal, and D. Newell. Modeling virtual machine performance: Challenges and approaches. http://www.sigmetrics.org/sigmetrics/workshops/papers_hotmetrics/session3_1.pdf.
[47] Workstation 6 User Manual. http://www.vmware.com/pdf/ws6_manual.pdf.