CHAPTER 4
ENERGY CURVE MODEL BASED DYNAMIC VM
CONSOLIDATION TECHNIQUE
4.1 INTRODUCTION
This chapter discusses energy modeling for Virtual Machine (VM)
consolidation. Beyond meeting the expectations of an Infrastructure as a
Service (IaaS) model, this work differs from earlier efforts in that the cloud
resource provider supports and offers various types of long-running services,
each of which transforms time-varying CPU utilization into job executions.
A model was required to study the temporal validity of a job executed in a
session on a distributed infrastructure. This temporal validity, grounded in the
timing constraints of an SLA, must be modeled so that it can be studied from
an energy perspective. Hence, from the arrival and service rates, we defined
the multi-informative VM on a host in the previous chapter, and this chapter
presents how the Energy curve is developed through the SLA and VM
migration. To model the energy curve, we define a cost function and its
hypothesis. The chapter examines the simplified problem of determining
when to migrate a VM from an oversubscribed host to a different host. This
helps minimize both the cost of energy consumption and the cost incurred by
the cloud provider due to violation of the QoS requirements defined in the
SLAs. The costs incurred by cloud providers are characterized by parametric
evaluation, and several experiments are therefore introduced. The
experiments use a real workload, and an energy-modeled consolidation was
carried out. The model represents an IaaS cloud in which individual cloud
customers instantiate VMs, and the provider is unaware of the types of jobs
deployed on the VMs. Consolidation of VMs through the SLA and VM
migration is driven by the Energy curve. The efficiency of VM consolidation
lies in maximizing the time intervals between VM migrations from
overloaded hosts. Although VMs experience variable workloads, the
maximum CPU capacity that can be allocated to a VM should be less than the
host's overall maximum CPU capacity.
4.2 AIM AND OBJECTIVES
This chapter presents a set of heuristics for the problem of energy
and performance efficient dynamic VM consolidation, which apply statistical
analysis of the observed history of system behaviour to infer potential future
states. The proposed algorithms consolidate VMs when needed to minimize
energy consumption by reallocating resources under QoS constraints. The
target compute environment is an Infrastructure as a Service (IaaS) cloud,
where the provider is unaware of the applications and workloads served by
the VMs and can only observe them externally. Because of this property,
IaaS scenarios are regarded as application-agnostic. The limitation of
existing techniques is that they lead to sub-optimal decisions and do not
allow the system administrator to explicitly set a QoS objective: the QoS
delivered by such an algorithm can only be adjusted indirectly by tuning
parameters of the applied host overload detection algorithm. In contrast, the
approach proposed in this chapter enables the algorithms and the Energy
Model controller to explicitly specify a QoS goal in terms of a
workload-independent QoS metric. The underlying analytical model allows
the derivation of an optimal randomized control policy for any known
stationary workload and a given state configuration (Beloglazov & Buyya
2012). The literature survey covers legacy algorithms such as Minimum
Migration Time, in which the RAM and bandwidth are taken into
consideration. Once a host overload is detected, the next step is to select
VMs to offload from the host to avoid performance degradation. The
proposed work strategy for the rest of the chapters is shown in Figure 4.1.
Figure 4.1 Proposed Work Strategies
4.3 PROBLEM STATEMENT AND OVERVIEW
The chapter deals with a real world setting where a control
algorithm does not have the complete knowledge of future events, and
therefore, has to deal with an online problem. Online algorithms are
designed for such problems, where the input (the workload) arrives piece by
piece rather than being known in advance. Competitive analysis
characterizes the performance and efficiency of online algorithms; here it is
augmented with machine learning techniques. This has helped
improve the efficiency of the competitive analysis by applying knowledge of
future events. An algorithm's internal state consists of its control and internal
memory, whereas in a real-world setting the algorithm's state is the current
configuration of the system it controls (Beloglazov & Buyya 2012).
There are a few related works reviewed in this chapter that are
close to the proposed research direction, which are, however, different in one
or more aspects. Existing approaches to dynamic VM consolidation are
application-specific, whereas the approach proposed in this chapter is
application-agnostic, which makes it suitable for the IaaS model. Verma et al
(2008) focused on
static and semi-static VM consolidation techniques, as these types of
consolidation are easier to implement in an enterprise environment. In
contrast, this chapter investigates the problem of dynamic consolidation to
take advantage of fine-grained workload variations. Other solutions proposed
in the literature are centralized and do not have a direct way of controlling the
QoS, which are essential characteristics for the next generation data centers
and Cloud computing systems.
The VM allocation problem can be divided into two parts: the
first is the admission of new requests for VM provisioning and the
placement of the VMs on hosts, while the second is the optimization of the
current VM allocation. The first part can be seen as a bin-packing problem
with variable bin sizes and costs. To solve it we apply a modification of the
Best Fit Decreasing (BFD) algorithm, which is shown to use no more than
11/9 · OPT + 1 bins (where OPT is the number of bins used by the optimal
solution) (Yue 1991). In our modification, the Modified Best Fit Decreasing
(MBFD) algorithm, we sort all VMs in decreasing order of their current CPU
utilization and allocate each VM to the host that provides the least increase
of energy consumption due to this allocation. This leverages the
heterogeneity of resources by choosing the most power-efficient nodes first.
The complexity of the allocation part of the MBFD algorithm is n · m, where
n is the number of VMs that must be allocated and m is the number of hosts.
The optimization of the current VM placement is carried out in two steps: in
the first step we select the VMs that must be migrated; in the second step the
chosen VMs are placed on hosts using the MBFD algorithm. To decide when
and which VMs should be migrated, we introduce three double-threshold
VM selection policies. The basic idea is to set upper and lower utilization
thresholds for hosts and to keep the total CPU utilization by all the VMs
allocated to a host between these thresholds. The aim is to preserve free
capacity in order to prevent SLA violations caused by increases in the
utilization by VMs. The difference between the old and new allocations
forms a set of VMs that must be reallocated; the new placement is achieved
using live migration of VMs. In the next section we discuss the proposed
VM selection and placement policies.
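The MBFD allocation described above can be sketched as follows. The linear power model, the host dictionaries, and the MIPS figures are illustrative assumptions for the sketch, not the thesis implementation.

```python
# Sketch of Modified Best Fit Decreasing (MBFD): sort VMs by decreasing
# CPU utilization, then place each on the host that yields the minimum
# increase of power consumption.

def power(used_mips, capacity_mips, p_idle=70.0, p_max=250.0):
    """Assumed linear power model: idle power plus a term proportional
    to CPU utilization (a common simplification in the literature)."""
    return p_idle + (p_max - p_idle) * (used_mips / capacity_mips)

def mbfd(vms, hosts):
    """vms: list of (vm_id, cpu_mips). hosts: list of dicts with
    'capacity' and 'used' (MIPS). Returns {vm_id: host_index}."""
    allocation = {}
    # Sort VMs in decreasing order of current CPU utilization.
    for vm_id, cpu in sorted(vms, key=lambda v: v[1], reverse=True):
        best, best_delta = None, float('inf')
        for i, h in enumerate(hosts):
            if h['used'] + cpu > h['capacity']:
                continue  # host cannot accommodate this VM
            delta = (power(h['used'] + cpu, h['capacity'])
                     - power(h['used'], h['capacity']))
            if delta < best_delta:  # minimum increase of power draw
                best, best_delta = i, delta
        if best is not None:
            hosts[best]['used'] += cpu
            allocation[vm_id] = best
    return allocation

hosts = [{'capacity': 1000, 'used': 0.0}, {'capacity': 2000, 'used': 0.0}]
print(mbfd([("a", 800), ("b", 300)], hosts))
```

With this power model, a given VM causes a smaller power increase on the larger host, so both VMs land on the second host; a richer model would also reflect host heterogeneity in idle and peak power.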
Online Problems without any knowledge of future events can be
solved using Competitive analysis of Online Algorithms (Borodin & El-Yaniv
An online algorithm is presented with a request sequence online, without any
knowledge of future requests; the goal is to serve the entire request sequence
at minimal cost. An adversary generates the requests, and the online
algorithm has to service each request as it arrives.
An online algorithm ALG is c-competitive if there is a constant a,
such that for all finite input sequences I:

ALG(I) ≤ c · OPT(I) + a (4.1)

where ALG(I) is the cost incurred by ALG for the input I; OPT(I) is the cost of an
optimal offline algorithm for the input sequence I; and a is a constant. This
means that for all possible inputs, ALG incurs a cost within the constant factor
c of the optimal offline cost plus a constant a. c can be a function of the
problem parameters, but it must be independent of the input I. If ALG is c-
competitive, it is said that ALG attains a competitive ratio c.
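The bound in Equation (4.1) can be checked numerically. The cost pairs below are made-up sample data, not measurements from the thesis experiments.

```python
# Numerical illustration of c-competitiveness: an online algorithm ALG
# is c-competitive if ALG(I) <= c * OPT(I) + a for every input sequence I.

def is_c_competitive(costs, c, a=0.0):
    """costs: iterable of (alg_cost, opt_cost) pairs, one pair per
    input sequence I. Checks ALG(I) <= c*OPT(I) + a for each pair."""
    return all(alg <= c * opt + a for alg, opt in costs)

samples = [(10.0, 6.0), (7.5, 4.0), (20.0, 11.0)]
print(is_c_competitive(samples, c=2.0))        # every ALG cost within 2x OPT
print(is_c_competitive(samples, c=1.5, a=0.5)) # bound too tight for (10, 6)
```

A check over finitely many samples can only refute a candidate ratio, never prove it; proving c-competitiveness requires an adversary argument like the one in Section 4.4.1.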
Consider a single physical server, or host, and the VMs allocated to
that host. Time is discrete and can be split into n time frames, where each
time frame is 1 second. The resource provider pays the cost of the energy
consumed by the physical server, at a fixed cost of energy per unit time. CPU
performance is the parameter used to express the capacity of the host and the
resources procured by a VM. The CPU utilization of a VM changes
arbitrarily over time, which means that VMs experience variable workloads.
If the aggregate CPU demand exceeds the capacity of the CPU, the host is
oversubscribed; that is, the VMs together demand more than their maximum
allowed CPU performance. The resource provider and the client incur an
SLA violation when the requested VM capacity exceeds the CPU capacity.
An SLA violation causes a penalty to the provider, computed as the product
of the cost of SLA violation per unit of time and the duration of the SLA
violation. At some point in time an SLA violation occurs and continues until
the stopping time. Due to the over-subscription and the variability of the
workload experienced by the VMs, at the time v the overall demand for CPU
performance exceeds the available CPU capacity and does not decrease until
the stopping time. According to the problem definition, a single VM can be
migrated out of the host. This migration leads to a decrease in the demand
for CPU performance and makes it lower than the CPU capacity. We define
the stopping time as the latest of either the end of the VM migration or the
beginning of the SLA violation (Beloglazov & Buyya 2012). A migration
takes some time, during which an extra host is used to accommodate the VM
being migrated; therefore, the total energy consumed during a VM migration
is twice the cost of energy for a single host. The problem is to determine the
time m at which a VM migration should be started to minimize the total cost
comprising the energy cost and the cost caused by an SLA violation, if one
occurs.
4.4 THE COST FUNCTION
The total cost comprises the cost caused by the SLA violation and
the cost of the extra energy consumption. The extra energy consumption is
the energy consumed by the destination host, to which a VM is migrated,
plus the energy consumed by the source host after the beginning of the SLA
violation (Borodin & El-Yaniv 1998). All of the energy consumption is
taken into account except the energy consumed by the source host from the
starting time to the beginning of the SLA violation, since by the problem
definition this part of the energy cannot be eliminated by any algorithm. A
further restriction is that an SLA violation cannot occur before a migration
can be completed; a VM migration can start before or after the SLA
violation, and the outcome also depends on the stopping time. The
algorithms derived here build on the competitive online algorithms of
Beloglazov & Buyya (2012), and we have implemented the MEP algorithm.
Based on this problem statement, we outline the cost function as follows.
The cost function C covers three cases. C1 describes the situation
where the migration completes before the occurrence of the SLA violation,
i.e., the migration starts no later than T before the start of the potential SLA
violation. Here the cost is the cost of energy consumed by the extra host
from the start of the VM migration to the start of the potential SLA
violation. There is no SLA violation cost, since by the problem statement the
stopping time is exactly the start of the potential SLA violation, so the
duration of the SLA violation is zero. C2 describes the situation where the
migration ends after the occurrence of the SLA violation, the migration
having started earlier than T before the start of the SLA violation. C2
contains three terms: (a) the cost of energy consumed by the extra host from
the start of the migration to the start of the SLA violation; (b) the cost of
energy consumed by both the primary host and the extra host from the start
of the SLA violation to the stopping time; and (c) the cost of the SLA
violation from the start of the SLA violation to the end of the VM migration.
C3 describes the situation where the migration starts after the start of the
SLA violation. Here the cost consists of three terms: (a) the cost of energy
consumed by the primary host from the start of the SLA violation to the
stopping time; (b) the cost of energy consumed by the extra host from the
start of the VM migration to the stopping time; and (c) the cost of the SLA
violation from the start of the SLA violation to the stopping time.
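The three cases above can be sketched as a piecewise function. This is one plausible reading of the case analysis, with assumed parameter names: t_m for the migration start, t_v for the start of the potential SLA violation, mig_time for the migration duration T, and unit costs c_p (power) and c_v (SLA violation).

```python
# Piecewise cost of a single VM migration, following the three cases
# C1-C3 described in the text. The stopping time is the latest of the
# migration end and the violation start.

def migration_cost(t_m, t_v, mig_time, c_p=1.0, c_v=1.0):
    t_end = t_m + mig_time
    t_stop = max(t_end, t_v)
    if t_end <= t_v:
        # C1: migration completes before the violation would start;
        # only the extra host's energy during the migration is paid.
        return c_p * mig_time
    if t_m <= t_v:
        # C2: migration starts before but ends after the violation:
        # extra host before the violation, both hosts plus the SLA
        # penalty from the violation start to the stopping time.
        overlap = t_stop - t_v
        return c_p * (t_v - t_m) + 2 * c_p * overlap + c_v * overlap
    # C3: migration starts after the violation has begun: primary host
    # during the violation, extra host during the migration, plus the
    # SLA penalty until the stopping time.
    viol = t_stop - t_v
    return c_p * viol + c_p * (t_stop - t_m) + c_v * viol

print(migration_cost(0.0, 20.0, 10.0))  # C1: 10.0
print(migration_cost(0.0, 5.0, 10.0))   # C2: 5 + 10 + 5 = 20.0
print(migration_cost(8.0, 5.0, 10.0))   # C3: 13 + 10 + 13 = 36.0
```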
With respect to the single VM migration problem (Beloglazov &
Buyya 2012), an SLA violation occurs when the aggregate demand for CPU
performance exceeds the available CPU capacity. The maximum number of
VMs allocated to a host when they demand their maximum CPU capacity is
m. Although VMs experience variable workloads, the maximum CPU
capacity that can be allocated to a VM is Th. The total number of VMs is M.
VMs can be moved between hosts using live migration with a migration
time tm. In this section we investigate the more complex problem of
dynamic VM consolidation with multiple hosts and multiple VMs. There are
n homogeneous hosts, and the capacity of each host is Ah. The cost of power
is Cp, and the cost of SLA violation per unit of time is Cv. Without loss of
generality, we can define Cp = 1 and Cv = s, where s ∈ R+. This is
equivalent to defining Cp = 1/s and Cv = 1.

We assume that when a host is idle, i.e. there are no executing
VMs, it is switched off and consumes no power, or is switched to the sleep
mode with negligible power consumption. We call non-idle hosts active. The
total cost C is defined as shown in Equation (4.2):

C = Σt ( Cp Σi ati + Cv Σj vtj ) (4.2)

where t runs from the initial time t0 over the total time T; ati ∈ {0,1}
indicates whether host i is active at time t; vtj ∈ {0,1} indicates whether host
j is experiencing an SLA violation at time t. The problem is to determine at
what time, which VMs, and where they should be migrated to minimize the
total cost C shown in Equation (4.2).
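Equation (4.2) can be evaluated directly from per-time-step activity and violation indicators. The 0/1 matrices below are toy sample data for illustration.

```python
# Total cost of Equation (4.2): at each time step, every active host
# contributes C_p and every host in SLA violation contributes C_v.

def total_cost(active, violating, c_p=1.0, c_v=2.0):
    """active[t][i] and violating[t][j] are 0/1 indicators per time
    step; the cost is summed over all time steps."""
    return sum(c_p * sum(a_t) + c_v * sum(v_t)
               for a_t, v_t in zip(active, violating))

active    = [[1, 1, 0], [1, 1, 1]]   # 2 time steps, 3 hosts
violating = [[0, 1, 0], [0, 0, 1]]
print(total_cost(active, violating))  # (2 + 2) + (3 + 2) = 9.0
```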
4.4.1 The Optimal Online Deterministic Algorithm
Theorem 1
The upper bound of the competitive ratio of the optimal online
deterministic algorithm for the dynamic VM consolidation problem is
ALGD(I)/OPT(I) ≤ 1 + s·m / (2(m + 1)).
Proof
In the single VM migration problem, the optimal online
deterministic algorithm for the dynamic VM consolidation problem migrates
a VM from a host when an SLA violation occurs at that host. The algorithm
always consolidates VMs to the minimum number of hosts, ensuring that the
allocation does not cause an SLA violation. The omniscient malicious
adversary generates the CPU demand of the VMs in such a way as to load
the hosts as much as reasonably possible without causing an SLA violation,
while keeping as many hosts as possible active, i.e. consuming energy.
However, an SLA violation occurs at a host whenever m + 1 VMs are
allocated to it and these VMs demand their maximum CPU capacity Ah.
Consequently, the maximum number of hosts that can experience an SLA
violation simultaneously is nv (Beloglazov & Buyya 2012).

In the case of a simultaneous SLA violation at nv hosts, the
number of hosts not experiencing an SLA violation is na = n − nv. The
strategy of the adversary is to make the online algorithm keep all the hosts
active all the time, and make nv hosts experience an SLA violation 50% of
the time. To show how this is achieved, we split the time into periods of
length 2tm, so that T = 2k·tm, where k ∈ N. The adversary acts in two phases
in each period. During the first tm, the adversary sets the CPU demand of the
VMs in such a way as to allocate exactly m + 1 VMs to each of the nv hosts
by migrating VMs from the na hosts. As the VM migration time is tm, the
total cost during this phase is tm·n·Cp, since all the hosts are active during
the migrations and there is no SLA violation. During the following tm, the
adversary sets the CPU demand of the VMs to the maximum, causing an
SLA violation at the nv hosts. The online algorithm reacts to the SLA
violation and migrates the necessary number of VMs back to the na hosts.
During this phase the total cost is tm(n·Cp + nv·Cv), as all the hosts are again
active and nv hosts are experiencing an SLA violation. Hence, the total cost
during a period 2tm is defined in Equation (4.3):

C = tm·n·Cp + tm(n·Cp + nv·Cv) = 2tm·n·Cp + tm·nv·Cv (4.3)
Since VM consolidation is done to minimize the number of active
physical hosts, the individual inter-migration time intervals have to be
maximized: over a window of time frames, the mean number of active hosts
is inversely proportional to the efficiency of VM consolidation.
Consolidation of VMs through the SLA and VM migration is driven by the
Energy curve. The efficiency of VM consolidation is conceptualized by
maximizing the time intervals between VM migrations from overloaded
hosts. Although VMs experience variable workloads, the maximum CPU
capacity that can be allocated to a VM should be less than the host's overall
maximum CPU capacity.
4.5 ENERGY CURVE MODEL
Hence, we limit the problem formulation to a single VM migration
as in Figure 4.2, i.e., the time span of a problem instance runs from the end
of the previous VM migration to the end of the next one. Here Tvm is the
time when a VM migration starts; Th is the CPU utilization threshold
defining host oversubscription; Tslav(Tvm, Th) is the time during which the
host has been overloaded, which is a function of Tvm and Th; and Ta is the
total time during which the host has been active. At some point in time, an
SLA violation occurs and continues for the SLAV interval. An SLA
violation in our terms is based on the QoS metrics, where both throughput
and response time delivery are taken into account by the organizing Cloud.
The SLA violation Time per Active Host (SLATAH) covers the hosts that
are active at 100% utilization, and the Performance Degradation due to
Migration (PDM) covers the degradation caused by migrations; the product
of SLATAH and PDM gives our SLAV metric, as shown in Equation (4.4)
(Deboosere et al 2012):

SLATAH = (1/N) Σi Tsi/Tai,  PDM = (1/M) Σj Cdj/Crj,
SLAV = SLATAH × PDM (4.4)
Figure 4.2 The objective time function in terms of SLAV and VM migration
(the actual energy curve, assumed convex, with a linear approximation of the
VMM and SLAV components; TVM marks the VM migration start, Tslav the
SLAV interval, and Tstop the stopping time)
Here N is the number of hosts; Tsi is the total time during which
host i has experienced a utilization of 100%, leading to an SLA violation;
Tai is the total time of host i being in the VM-serving state; M is the number
of VMs; Cdj is the estimate of the performance degradation of VM j caused
by migrations; and Crj is the total CPU capacity requested by VM j during
its lifetime (Beloglazov & Buyya 2012). In other words, due to the
over-subscription and variability of the workload experienced by VMs, at
the time Tslav the overall demand for CPU performance exceeds the
available CPU capacity and does not decrease until Tstop. It is assumed that,
according to the problem definition, a single VM can be migrated out of the
host. This migration leads to a decrease of the demand for CPU performance
and makes it lower than the CPU capacity (Dobber et al 2009). We define
Tstop to be the stopping time, which is equal to the latest of either the end of
the VM migration or the beginning of the SLA violation. A VM migration
takes time Tm. The problem is to determine the target time when a VM
migration should be initiated to minimize the total cost consisting of the
energy cost and the cost caused by an SLA violation, if it takes place.
During a migration an extra host is used to accommodate the VM being
migrated; therefore, the total energy consumed during a VM migration is
twice the cost incurred for one host. Let Tr be the remaining time since the
beginning of the SLA violation, i.e. Tr = Tstop − Tslav.
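The SLATAH and PDM components of Equation (4.4) can be computed from per-host and per-VM records as follows; the numbers are illustrative sample data.

```python
# SLATAH, PDM and the combined SLAV metric of Equation (4.4).

def slatah(hosts):
    """hosts: list of (t_full, t_active) per host: time spent at 100%
    utilization and total active time. Returns the mean ratio."""
    return sum(tf / ta for tf, ta in hosts) / len(hosts)

def pdm(vms):
    """vms: list of (c_degraded, c_requested) per VM: estimated CPU
    degradation due to migration and total CPU requested."""
    return sum(cd / cr for cd, cr in vms) / len(vms)

hosts = [(120.0, 1200.0), (60.0, 600.0)]  # both hosts saturated 10% of the time
vms = [(5.0, 1000.0), (15.0, 1000.0)]     # 0.5% and 1.5% degradation
slav = slatah(hosts) * pdm(vms)
print(slav)  # 0.1 * 0.01 = 0.001
```

Multiplying the two components means the combined SLAV metric is small unless hosts are both frequently saturated and migrations are costly, which is exactly the workload-independent reading used in the text.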
The proposed Energy Curve Model is the product of the objective
time function and the power consumed over that time. As the time of
resource scarcity increases, the energy cost increases, which in turn
increases the power consumed, where the energy cost includes the cost of
normal operation and of the deadline time (Maleki et al 2012). The stopping
time is the latest of either the end of the VM migration or the start of the
SLA violation. A VM migration takes a particular time, during which an
extra host stays active for the migration to complete. This requires two hosts
to stay active for a single VM migration, incurring twice the expense of
power, 2CpTm. Hence we try to minimize the time during which a VM
migration takes place. The sorted, ascending-ordered linear approximation
of the time slope helps minimize the migration time, thereby reducing the
performance degradation due to migration at a particular SLA time per
active host.
4.6 MINIMUM PROCESSING POWER POLICY (MPP)
The energy curve helps to estimate the efficient use of the available
resources; the SLAV and the VM migration time are calculated from it. The
energy consumed by the processor in a data center is measured by our
proposed MPP parameter, the energy per instruction, which is given by the
ratio between the power and the performance. We define the MPP parameter
as the Energy per Instruction Rate Performance (EIRP) for a host having
VMs, as shown in Equation (4.5):

EIRP = P / R (4.5)

where P is the power drawn by the host and R is its delivered instruction
rate (performance).
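The EIRP ratio of Equation (4.5) can be used directly to rank hosts by energy efficiency. The wattage and MIPS values below are illustrative stand-ins; the thesis derives host power from SPECpower benchmark data rather than fixed constants.

```python
# EIRP as the ratio of host power draw to delivered performance:
# lower EIRP means less energy spent per unit of work.

def eirp(power_watts, perf_mips):
    """Energy per unit of work when power is in watts and
    performance in MIPS."""
    return power_watts / perf_mips

# Rank hosts so the most energy-efficient (lowest EIRP) comes first.
hosts = {"g4": (117.0, 1860.0), "g5": (135.0, 2660.0)}
ranked = sorted(hosts, key=lambda h: eirp(*hosts[h]))
print(ranked)  # the "g5" host does more work per watt here
```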
The Energy curve is the extent of time for the SLA violation and
the VM migration, which are the trade-off parameters; this extent of time is
multiplied by the expense of power for which the CPU is active. The
performance of all capable hosts is studied. Our model addresses the major
trade-off that the Cloud suffers, namely the trade-off between power
consumption and performance. Performance and energy consumption
depend on the availability of efficient resources, and scarcity of efficient
resources burdens the time of SLAV and VM migration. The Minimum
Processing Power (MPP) policy migrates, from the selected capable host,
the VM from V = {v1, …, vM} that requires the minimum processing power
to complete a migration relative to the other VMs allocated to that host. The
MEP algorithm incorporates the Energy Curve model explained below. For
every new VM, the sum of the incoming VMs and the VMs in the migratable
VM list is taken into account. Processing VMs, which are the active VMs on
a host, are considered. The energy curve is usually non-linear. The total
energy cost can be leveraged by harnessing the least costs of SLAV and VM
migration. We introduce six steps for achieving this:
1. Select a new host for every migratable VM to reside on; the
host list is the Capable Host list.
2. Compute the energy slope ec of the time curve, as given by
Equation (4.11):

ec = ΔE / Δt (4.11)

3. Perform a linear approximation of the energy slope.
4. Assign the sorted VM with the minimum energy slope to the
EIRP-rated host.
5. Identify minimum-performance hosts using the energy slope.
6. Iterate the MEP in parallel for all the VMs on the hosts;
critical parallel parts are identified, and hence the allocation of
critical VMs to critical hosts is realized in parallel in our
simulation.
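The slope-based selection in the steps above can be sketched as follows. The least-squares slope stands in for the linear approximation of the energy curve, and all names and sample points are illustrative assumptions.

```python
# Energy-slope computation and slope-based VM selection: fit a line to
# each VM's (time, energy) samples and pick the VM with the flattest
# (minimum) slope, in the spirit of the MPP policy steps above.

def energy_slope(samples):
    """samples: list of (t, energy) points on the energy curve.
    Returns the slope of a least-squares linear approximation."""
    n = len(samples)
    mt = sum(t for t, _ in samples) / n
    me = sum(e for _, e in samples) / n
    num = sum((t - mt) * (e - me) for t, e in samples)
    den = sum((t - mt) ** 2 for t, _ in samples)
    return num / den

def pick_vm(vm_curves):
    """vm_curves: {vm_id: [(t, energy), ...]}. Select the VM whose
    energy curve has the minimum slope."""
    return min(vm_curves, key=lambda v: energy_slope(vm_curves[v]))

curves = {"vm1": [(0, 0.0), (1, 3.0), (2, 6.1)],
          "vm2": [(0, 0.0), (1, 1.0), (2, 2.2)]}
print(pick_vm(curves))  # vm2 has the flatter (minimum) energy slope
```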
Since a host keeps handling incoming VMs, host oversubscription
is possible; hence the above policy is used to iterate the following algorithm
to avoid host oversubscription. The repute of a host can vary with time due
to factors such as fluctuating load, malicious behaviour, and power
shutdown. The algorithm allocates the VMs V = {v1, …, vM} to the hosts
H = {h1, …, hn}. The algorithm defined below randomly assigns hosts to
various groups of VMs, considering N heterogeneous physical nodes, the
maximum available hosts (the least individual number of under-loaded hosts
in the host list) and the minimum available hosts (the least individual
number of over-loaded hosts in the host list).
Figure 4.3 The Energy Curve illustration (the lowest-energy point lies
between the VM migration and SLA violation cost curves)
From the above-mentioned MPP policy, we analyzed the VM
selection and host oversubscription algorithms and implemented the
energy-performance-aware algorithms. The pseudo code and flowchart for
the proposed MEP algorithm are shown in Figure 4.4.
Algorithm (Pseudo code): The Minimum Energy Performance Algorithm (MEP)
Input: CapableHostlist, vmlist
Output: MEPhost
foreach host in CapableHostlist do
    deduce the Energy Curve
    do LinearApproximation
    find EnergySlope ec
    update CapableHostlist
sort CapableHostlist by EnergySlope ec
MEPhost = host with min(EnergySlope ec)
MEPhost: add and update NewVMlist (EIRP rated host)
return MEPhost

Figure 4.4 Pseudo code and flowchart for the MEP algorithm
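A runnable Python reading of the MEP pseudocode can look as follows. Representing each capable host by a precomputed energy-slope value is a simplifying assumption for the sketch; the thesis derives the slope from the energy curve of each host.

```python
# Minimal MEP sketch: given the energy slope e_c per capable host,
# return the host with the minimum slope and record the new VM list
# against it, mirroring the pseudocode in Figure 4.4.

def mep(capable_hosts, new_vms):
    """capable_hosts: {host_id: energy_slope e_c}; new_vms: VM ids to
    record on the chosen host. Returns (mep_host, vm_list)."""
    if not capable_hosts:
        return None, []
    # Selecting the minimum-slope host is equivalent to sorting by
    # e_c and taking the first entry.
    mep_host = min(capable_hosts, key=capable_hosts.get)
    vm_list = list(new_vms)  # "add and update NewVMlist" step
    return mep_host, vm_list

host, vms = mep({"h1": 0.8, "h2": 0.3, "h3": 0.5}, ["vm7"])
print(host)  # h2: the minimum energy slope
```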
4.7 RESULTS AND DISCUSSION
The models and systems discussed above have been studied and
the outcomes examined in detail. The consolidation of VMs by the Energy
Curve and the VM selection procedures has reduced energy consumption by
20 to 30%, although it has increased the AvgSLAV. The AvgSLAV
increased because of the consolidation of the VM selection time, which
increases the throughput of VMs processed by a host in a data center, as
indicated in Figure 4.5. The analysis of Energy and Average SLA Violation
is shown in Figure 4.5. The energy efficiency is comparatively improved on
most days of the PlanetLab workload compared to the provisioned systems,
while the Average SLA violation of the system consolidated by our
proposed algorithm lies at about 10 to 14% across the days. A better energy
efficiency was thus achieved alongside an acceptable Average SLA
Violation. For simulation-level evaluations, it is essential to conduct
experiments using real-world traces from a genuine production system. We
have conducted experiments on real traces taken from the CoMon project
(Park & Pai, 2006), a monitoring infrastructure of PlanetLab. During the
simulations, every VM is randomly assigned a workload trace from one of
the VMs of the corresponding day; the workload comprises a day's CPU
usage by more than 1000 VMs from 500 places around the world. The
workload traces are sampled at intervals of 300 seconds, and for our
simulations we have taken traces gathered during March and April 2011.

We have simulated the cloud model for the best possible QoS,
bearing in mind the trade-off between performance and power consumption.
The effect of the VM estimate on the VM migration and VM selection time
has been investigated for a day of real PlanetLab trace, as shown in Figure
4.6, where the data center consists of 800 heterogeneous hosts, 50% of
which are HP ProLiant ML110 G4 servers and the other half HP ProLiant
ML110 G5 servers. The server utilization and power consumed by these
servers are taken from real data from the SPECpower benchmark instead of
an analytical model of a server, which makes the simulation more realistic
(Corporation 2012).
Figure 4.5 For the PlanetLab workload – analysis of Energy Consumption
(kWh) and AvgSLAV (%) for LR MMT and LR MPP

Figure 4.6 For the PlanetLab workload – analysis of Energy Consumption
(kWh) and VM mean selection time (sec) for LR MMT and LR MPP
The VM selection time in our simulation has increased, reflecting
the consolidation of VMs under the proposed algorithm. The energy
consumption due to the workload has been reduced, with an efficiency gain
of 20% to 30%. The energy consumption of the data center workload is
analyzed, and the VM selection time and the time before migration behave
differently due to the consolidation by the proposed algorithm. Figure 4.5
shows that the consolidated VM selection time is almost consistent and
outperforms the fully provisioned algorithms. The VM mean selection time
is less than 0.002 seconds, which helps achieve better performance and a
reduction in energy consumption; on 6th March 2011, the energy
consumption of about 80 kWh is 30 to 40% lower, showing that our
proposed system delivers more efficient results. The VM selection
algorithms MMT and the proposed MPP are coupled with LR for host
overload detection, and the results are discussed below. The VM migration
time depends on when the VM gets migrated from a host through a broker,
which consolidates based on our proposed algorithm. The consolidated
results are shown in Figure 4.5.

The temporal validity of the cloudlets with respect to a VM
considers the maximum RAM requested from a host; compared to the
complexity of the algorithm, this is a trivial task. The simulated data center
runs the algorithm on different PlanetLab workloads for ten days between
March and April 2011. The simulation results for the number of VM
migrations and the average SLA violation are tabulated in Table 4.1; the
trend shows that the average SLA violation of our proposed VM
consolidation method LR MPP is not as low as that of LR MMT.
Table 4.1 For the PlanetLab workload – analysis of the number of VM
migrations and AvgSLAV

PlanetLab trace for 10 days | Number of VM migrations | Average SLA violation (%)
                            | LR MMT  | LR MPP        | LR MMT | LR MPP
03-03-2011, 1052 VM         | 21052   | 1162          | 10.09  | 14.92
06-03-2011,  898 VM         | 21025   | 2106          | 10.15  | 12.04
22-03-2011, 1516 VM         | 21922   | 1830          | 10.61  | 15.09
25-03-2011, 1078 VM         | 26301   | 2494          | 10.14  | 11.58
03-04-2011, 1463 VM         | 25370   | 1487          | 10.34  | 14.96
09-04-2011, 1358 VM         | 20907   | 1498          | 10.71  | 15.59
11-04-2011, 1233 VM         | 30654   | 2770          | 10.41  | 10.49
12-04-2011, 1054 VM         | 26282   | 2484          | 10.24  | 11.94
20-04-2011, 1033 VM         | 18299   | 1203          | 12.53  | 17.73
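The migration counts in Table 4.1 can be summarized as follows; the lists transcribe the table's two migration-count columns.

```python
# Aggregates over Table 4.1: mean VM-migration counts per day and the
# average reduction achieved by LR-MPP relative to LR-MMT.

mmt = [21052, 21025, 21922, 26301, 25370, 20907, 30654, 26282, 18299]
mpp = [1162, 2106, 1830, 2494, 1487, 1498, 2770, 2484, 1203]

mean_mmt = sum(mmt) / len(mmt)
mean_mpp = sum(mpp) / len(mpp)
reduction = 100 * (1 - mean_mpp / mean_mmt)
print(round(mean_mmt), round(mean_mpp), round(reduction, 1))
# LR-MPP cuts the mean number of migrations by roughly 92%
```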
With the consolidation of the VM selection techniques and the
energy curve, the number of VM migrations has to be reduced, or in other
words, consolidation has to be performed efficiently. The average SLAV is
somewhat higher, but this yields an efficient reduction in the number of VM
migrations, as shown in Table 4.1. Comparing the VM selection times of the
consolidation algorithms, the multi-informative VM analysis is shown to
decrease energy consumption and achieve efficient VM consolidation. As
Figure 4.7 shows, the number of VM migrations under the consolidated LR MPP
method is reduced efficiently, and the SLA performance under migration is
also noted for its efficiency. This contributed to the decrease in energy
consumption over a 24-hour simulation for each day with varied VM loads,
compared with LR MMT. The CPU utilization of the PlanetLab workload was
taken for selected days in March and April 2011.
Figure 4.7 For the PlanetLab workload – Analysis for Number of VM migrations and SLAPDM
The SLA performance degradation due to migration (PDM) can be seen in
Figure 4.8. The consolidated LR MPP method outperforms the legacy LR MMT
method, and 63% efficiency on average has been achieved. The mean value of
the sample means of the time before a host is switched to the sleep mode
for the LR MMT algorithm combination is 864 seconds with the 95% CI:
(820, 908). Performance degradation is higher because resources are
utilized more intensively under constraints, rather than some hosts being
overused while other servers are left underused or not used at all.
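The confidence interval quoted above is the usual normal-approximation interval over the sample means. A minimal sketch follows; the sample values used in testing it are illustrative, not the experiment's raw data.

```python
import math

def mean_ci95(samples):
    """Normal-approximation 95% CI for the mean: m +/- 1.96 * s / sqrt(n).
    (For a small number of samples, a Student-t quantile would be the
    more appropriate multiplier.)"""
    n = len(samples)
    m = sum(samples) / n
    var = sum((x - m) ** 2 for x in samples) / (n - 1)  # sample variance
    half = 1.96 * math.sqrt(var / n)
    return m, (m - half, m + half)
```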
From Figure 4.8, the energy consumption of LR MPP is efficient,
but its SLA PDM has increased relative to LR MMT; the efficiency has been
about 47%. The increase in SLA PDM is due to proper consolidation of the
physical resources and the time taken for the VM migration, which attains a
lower SLAV. This migration time induces PDM because of the workload
executed by the host.
Figure 4.8 For the PlanetLab workload – Analysis for Energy consumption and SLAPDM
Table 4.2 For the PlanetLab workload – Analysis for Energy Consumption and SLATAH

PlanetLab Trace         Energy Consumption             SLA time per active host
for 10 days             LR MMT (kWh)   LR MPP (kWh)    LR MMT (%)   LR MPP (%)
03-03-2011, 1052 VM     176.16         106.13          5.21         18.83
06-03-2011,  898 VM     133.84          80.56          5.3          29.8
22-03-2011, 1516 VM     118.72         105.08          4.5          25.23
25-03-2011, 1078 VM     164.47          96.18          5.45         30.27
03-04-2011, 1463 VM     160.02         141.25          4.44         29.48
09-04-2011, 1358 VM     124.61         113.43          4.43         20.05
11-04-2011, 1233 VM     189.71         113.98          5.42         28.13
12-04-2011, 1054 VM     164.26          97.38          5.41         32.11
20-04-2011, 1033 VM     185.58          76.24          5.4          23.15
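Averaging the per-day figures in Table 4.2 reproduces the overall energy saving of LR MPP over LR MMT that is cited in the summary; a quick sanity check in Python:

```python
# Energy consumption (kWh) per trace day, copied from Table 4.2.
lr_mmt = [176.16, 133.84, 118.72, 164.47, 160.02, 124.61, 189.71, 164.26, 185.58]
lr_mpp = [106.13, 80.56, 105.08, 96.18, 141.25, 113.43, 113.98, 97.38, 76.24]

mean_mmt = sum(lr_mmt) / len(lr_mmt)
mean_mpp = sum(lr_mpp) / len(lr_mpp)
# Percentage reduction in mean daily energy consumption.
saving = 100 * (mean_mmt - mean_mpp) / mean_mmt
print(round(saving, 1))  # -> 34.4, consistent with the ~34% figure
```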
It is observed that the SLATAH increases substantially, and this
leads to a decrease in the energy consumption of the physical resources, or
hosts, involved in the experiment, as in Table 4.2. In Figure 4.9, the
number of host shutdowns under the proposed LR MPP method is kept
substantially consistent across the various simulations of the workload
traces, whereas under the legacy LR MMT method the number of host shutdowns
fluctuates throughout the experiment in a data center. Applying the
proposed algorithm in our system gives better results, as shown in
Figure 4.9, and this has led to a reduction in the energy consumed.
Figure 4.9 For the PlanetLab workload – Analysis for Energy Consumption and Number of Host shutdown
The simulated data center environment regulates energy consumption
using the proposed algorithm. The number of host shutdowns has been
analyzed, as shown in Figure 4.9, and the proposed LR MPP method shows
better results. The proposed methods keep the number of host shutdowns at a
consistently lower level, giving better scaling of energy consumption. A
consistent level of host shutdowns is thus possible, and this has led to
reduced energy consumption as well.
As shown in Figure 4.10, the number of VM migrations in the legacy
system is very high for the simulated trace. By applying our LR MPP module,
the system behaves well, consolidating the VMs and scheduling them
efficiently. A drastic confinement of the number of VM migrations is
thereby achieved, which decreases the energy consumed.
Figure 4.10 For the PlanetLab workload – Analysis for Number of VM migrations and Energy Consumption
The mean value of the sample means of the time before a host is
switched to the sleep mode for the LR MPP algorithm combination is 953
seconds with the 95% CI: (902, 1053). The SLATAH and PDM vary with the
energy: in a simulation with 1052 VMs and 800 heterogeneous nodes, the
lower the energy consumption, the higher the SLAV, and this is reflected in
SLATAH. Consolidation of the hosts in turn increases the number of host
shutdowns, increasing the energy requirement of the available hosts, while
the migration time lessens, which increases SLAV.
4.8 SUMMARY
To increase profitability, the cloud resource provider targets an
efficient trade-off between energy and SLA violation. With minimal
resources, we focus on giving quality service to clients, and this is
possible only with an effective resource allocation algorithm. We have
implemented new resource allocation algorithms incorporating host
oversubscription detection and VM selection. Our investigations have
demonstrated that large energy improvements can be achieved compared with
the existing power-aware resource allocation algorithms. This energy
harnessing achieves an efficiency of 34% compared with the LR MMT method,
trading off SLAV while keeping performance within safe limits. The SLA and
QoS measurements drive the trade-off between energy and performance in the
effective dynamic consolidation of VMs. The results have proved superior to
the available conventional strategies from the energy perspective in the
present cloud infrastructure, keeping server usage at an improved level and
avoiding fluctuation of servers or hosts, thereby reducing the energy
consumption of over-provisioned servers. Performance and energy consumption
depend on the availability of efficient resources, and scarcity of
efficient resources burdens the SLAV and VM migration times. Further work
could aim at a better energy-SLAV trade-off so that the cloud environment
becomes more efficient and robust. A limitation of the proposed algorithms
is the inability to explicitly specify a reliability constraint: the
behaviour of the algorithms with respect to reliability can only be
adjusted by tuning their parameters on live hosts. Chapter 5 explores host
overload detection based upon CQR, which permits the explicit specification
of server overloading by using machine learning techniques. We apply the
cloud energy-efficient methods through dynamic consolidation of VMs and
prediction of host oversubscription using machine learning techniques on
the workload trace. The real-time workload trace taken from the CoMon
infrastructure has been used in our simulations. The server utilization and
power consumed by the servers, taken from real data of the SPECpower
benchmark, make our simulation trustworthy.