Investigation of air management and energy performance in a data center in Finland: Case study

Tao Lu, Xiaoshu Lü, Matias Remes, Martti Viljanen
Department of Civil and Structural Engineering, School of Engineering, Aalto University, Rakentajanaukio 4 A, Otaniemi, Espoo, Finland

Energy and Buildings 43 (2011) 3360–3372. Contents lists available at SciVerse ScienceDirect, journal homepage: www.elsevier.com/locate/enbuild. doi:10.1016/j.enbuild.2011.08.034. © 2011 Elsevier B.V. All rights reserved.

Article history: Received 5 May 2011; Accepted 29 August 2011.

Keywords: Data center; Air management; Energy performance; Return Temperature Index (RTI); Supply Heat Index (SHI); Power Usage Effectiveness (PUE); Heat reuse; Energy conservation

Abstract

This is the first research paper on a data center in Finland. The objectives of this study are to evaluate the air management and energy performance of the cooling system and to investigate the possibilities for energy saving and heat reuse in the data center. Field measurements, in particular long-term measurements of IT and facility power, were conducted. Different performance metrics for the cooling system and the power consumption were examined and analysed. Key problem areas and energy saving opportunities were identified, and the electrical end use breakdown was estimated. The results show that the IT equipment intake conditions were within the recommended or allowable ASHRAE ranges. The Power Usage Effectiveness (PUE) value for a typical year was about 1.33. Noticeable recirculation of hot air was not observed, but extreme bypass air was found. The air change rate was set much higher than the value recommended by ASHRAE, and there was no heat recovery system. The air management and heat recovery issues therefore need to be addressed: the Computer Room Air-Conditioning unit (CRAC) fan speeds should be reduced and the ventilation rate should be minimized. Further, a simulated heat recovery system is presented, demonstrating that the data center could potentially provide yearly space and hot water heating for a 30,916 m² non-domestic building.

Corresponding author: Tel.: +358 09 47025306; fax: +358 09 3512724. E-mail addresses: [email protected].fi, tao.lu@tkk.fi (T. Lu).

1. Introduction

A data center (or datacom facility) is a facility housing high-performance computers, storage servers, computer servers, networking or other IT equipment. It provides various services, such as the storage, management, processing and exchange of digital data and information, for Information and Communication Technology (ICT) [1]. Data centers consume huge amounts of energy. The power consumption of a typical rack of servers in a data center is about 30 kW, and it will increase to nearly 70 kW within the next decade due to the introduction of ultra-dense computing architectures [2]. With hundreds of racks per data center, the total energy cost can make up a significant share of the energy consumption in buildings. Indeed, survey data have shown that data centers can be over 40 times as energy intensive as conventional office buildings. In the US, for example, data centers account for 1.5% of the total power consumption [3]. This power consumption is expected to double in five years, and the energy cost is expected to exceed the cost of the original capital investment by 2012 [4].

In typical data centers, the IT equipment converts over 99% of its power into heat and is responsible for over 70% of the total heat load, which needs to be removed in order to ensure acceptable environments for both the room and the equipment. Cooling therefore accounts for a large portion of the energy costs, consuming 25% or more of the total power in data centers [5], so increasing cooling system efficiency is the key to energy efficient data centers.

In general, cooling systems can be classified into two categories: forced-air cooling and liquid cooling. Air cooling is still predominant. In this technique, cold air is pushed through the racks containing IT equipment to remove heat. The aim is to keep the rack (IT equipment) inlet temperature within an acceptable range for reliable operation of the equipment [6,7]. The combination of rather high internal heat dissipation with cold and warm outdoor environments requires a smart solution capable of providing acceptable indoor climate management [8]. Many efforts have been made to advance air cooling systems. Introduced in 1992 by IBM [7], the Hot Aisle/Cold Aisle (HACA) protocol is probably the most popular air cooling technique to date, and the majority of modern data centers still use it. Other alternatives include In-Row Cooling with Hot Aisle Containment [9], Cold Aisle Containment [10], Overhead Cooling [11], and Kool-IT [12]. However, with the growth of heat dissipation in data centers, air cooling and HACA are struggling and many problems have appeared, such as hot spots and oversized cooling equipment. Consequently, liquid cooling is making its way back into data centers. It is considered to be more efficient and the cooling technique of the future, since liquid can carry much more heat than air. In fact, liquid cooling is not new in data centers, but its acceptance has been and continues to be difficult [13].


Fig. 1. The cooling infrastructure with Hot Aisle/Cold Aisle arrangement.

Currently, the evaluation of air cooling performance in data centers often focuses on two aspects: energy performance and air management. Energy performance metrics such as the Power Usage Effectiveness (PUE) [14], the Return Temperature Index (RTI) [15], and the Supply Heat Index (SHI) [16] have been used. Note that energy use in data centers is inherently site specific; it depends heavily upon geography, climate and local environmental conditions. To date, even though a number of case studies have been published, most of them were conducted in the US and other regions [1,8,17–19]. For the Nordic countries, a leading high-technology region, relevant research is still lacking. Karlsson and Moshfegh investigated airflow and temperature patterns in a small data center in Sweden [8], but they concentrated more on air management than on energy performance. Finland is among the most advanced countries in the world in the development and use of both high technology and energy-efficient technology, and is the home of the telecommunications giant NOKIA; it therefore needs local benchmarks and guidelines for designing, operating and retrofitting data centers, which are becoming increasingly important with the rapid development of information technology. To the best of our knowledge, no research paper on data centers in Finland has been available until now. Nevertheless, Finnish companies have made notable contributions to green data centers; one of the biggest achievements is that a company in Finland used sea water to cool a data center and reused the heat dissipated by the IT equipment to heat homes. This achievement was reported widely and won the internationally acclaimed IT-sector Green Enterprise Award in May 2010 [20].

In this paper, we present a case study of a data center in Finland. The objectives of this study are:


• to examine and understand the data center's cooling system design;
• to evaluate the data center's energy performance;
• to investigate the data center's air management;
• to explore the opportunities for improving energy efficiency in cooling and for waste heat reuse.

The paper focuses on cooling; power management issues are not discussed.

2. Data center cooling infrastructure

The most common design for data centers in Finland is a raised floor with racks arranged in a Hot Aisle/Cold Aisle layout (Fig. 1). The computer room air-conditioning unit (CRAC) cools the exhaust heat (i.e. hot air) from the racks and pushes the chilled supply air (i.e. cold air) into the floor plenum. Cold air enters the Cold Aisle through perforated floor tiles (i.e. ventilation tiles) to cool the IT equipment in the racks. Exhaust air from the racks enters the Hot Aisle and finally migrates back to the CRACs. A chilled water system is often employed to support this cooling arrangement (Fig. 2).

The chiller is responsible for supplying chilled water to the CRACs and for rejecting heat outdoors through the dry/fluid cooler. In some countries, a cooling tower is used instead of a dry/fluid cooler. Some studies show that the use of cooling towers may risk outbreaks of bacteria, such as legionella, which may pose a health threat [21]; therefore, many countries tend to replace cooling towers with dry/fluid coolers [21]. The chiller, the CRACs and the air distribution system (e.g. floor plenum, perforated tiles and the HACA arrangement) form a complete cooling system, through which heat is continuously pumped out of the data center.

Fig. 2. Chilled water system.

Table 1
ASHRAE environmental specifications for data centers (all values are inlet conditions, see note c).

Temperature control range
– Allowable: 15–32 °C (Class 1, a); 10–35 °C (Class 2, b)
– Recommended: 18–27 °C
Moisture control range
– Allowable: 20–80% RH, with a maximum dew point of 17.22 °C (Class 1, a) or 21.11 °C (Class 2, b)
– Recommended: 5.5 °C dew point to 60% RH and 15 °C dew point

a Typically a datacom facility with tightly controlled environmental parameters (dew point, temperature, and relative humidity) and mission critical operations; types of datacom facilities are enterprise servers and storage products.
b Typically a datacom space or office or lab environment with some control of environmental parameters (dew point, temperature, and relative humidity) and mission critical operations; types of datacom facilities are small servers, storage products, personal computers, and workstations.
c These conditions are inlet conditions recommended in Thermal Guidelines for Data Processing Environments.

All air distribution systems have supply and return paths. Rasmussen [22] categorizes both paths into three kinds: flooded, locally ducted and fully ducted. Their combinations therefore form nine types of air distribution systems in total. The most common air distribution system is a locally ducted supply path with a flooded return path (see Fig. 1). The goal of a cooling system is to provide an acceptable range of temperature and humidity at the rack inlets via the air distribution system. Today, most environmental specifications refer to the intake conditions. In 2009, ASHRAE [2] issued the latest environmental requirements for data centers, listed in Table 1.

Table 1 shows that the inlet air entering IT equipment should be kept within a range of 18–27 °C and that the moisture level should stay between a minimum dew point of 5.5 °C and a maximum dew point of 15 °C.

2.1. Air management and challenges

The aim of air management is to keep the intake conditions of the IT equipment within the recommended ranges (e.g. ASHRAE) with minimum energy consumption. The preferred approach is to supply cold air as close to the equipment intakes as possible, so that the equipment inlet temperature equals or is near the cold supply air temperature. The cold supply air temperature can then be raised towards the maximum recommended value (e.g. 27 °C, Table 1) so as to maximize chiller energy efficiency. Raising the CRAC cold supply air temperature allows the chiller to operate at a higher evaporator temperature (i.e. chilled water temperature, Fig. 2), which leads to higher energy efficiency [23]. It is estimated that chiller energy savings in the 15–25% range are achievable solely by increasing the cold supply air temperature [24]. Furthermore, good air management allows the CRAC fans to operate at low speed, which saves fan power.

However, with HACA it is possible for hot exhaust air to flow back to the rack inlet and mix with the cold supply air. This mixing can elevate the inlet temperature and result in hot spots. Hot spots (rack inlet conditions that are too hot or too dry) are a common problem in data centers. A recent survey [25] finds that about one in ten racks runs hotter than published reliability standards (e.g. ASHRAE), and that most hot spots occur in computer rooms with light loads. This finding indicates that the underlying problem causing hot spots is not inadequate cooling capacity or high heat density, but poor air management [25].

Table 2
Rating of the Return Temperature Index (RTI).

Rating         RTI (%)
Target         100
Recirculation  >100
Bypass         <100

Fig. 3. Bypass air.

Regarding air management, two major problems are identified and found to be associated with the current cooling system (HACA): bypass air (see Fig. 3) and recirculation air (see Fig. 4).

Bypass air: Bypass air does not participate in cooling the equipment and should be minimized. It may be caused by an excess of supply air or by leakage through cable cutouts. Excessive bypass airflow has been identified as an underlying cause of cooling inefficiency and hot spots. Studies [25] reveal that on average about 59% of the cold air from the CRAC units bypasses the air intakes of the computer equipment, meaning that only 41% of the cold air directly cools the equipment. With so little cold air going into the equipment intakes, heat removal is actually performed by a mixture of bypass air and hot exhaust air, which leads to another problem: recirculation air. Studies also indicate that hot spots can be eliminated by reducing the bypass airflow rate [25].

Fig. 4. Recirculation air.


Recirculation air: Recirculation air, on the other hand, participates in cooling the equipment multiple times and should also be minimized. It may be caused by a deficit of supply air. Recirculation air often raises the equipment inlet temperature, which is considered one of the main reasons for hot spots.

Some practices to combat these two problems, and performance metrics to evaluate and improve air management, have been suggested in [10–12,15,16,26]. The following section gives a brief introduction to these performance metrics.

3. Methodology

First, we introduce three performance metrics and two important equations used in this study.

Performance metrics: Performance metrics are mainly used to evaluate the performance of air management. The RTI is one such metric; it measures the actual utilization of the available airflow. The index is defined as follows [24]:

RTI (%) = [(TReturn − TSupply) / ΔTEquip] × 100    (1)

where TReturn is the CRAC return air temperature (weighted average); TSupply is the CRAC supply air temperature (weighted average); and ΔTEquip is the temperature rise across the electronic equipment (weighted average).

The interpretation of the index is listed in Table 2. An RTI value above 100% suggests mainly recirculation air, which increases the return temperature; a value below 100% suggests mainly bypass air, which bypasses the racks and returns directly to the CRAC, reducing the return temperature.

SHI is another performance metric, used to measure the local magnitude of hot and cold air mixing [16]. SHI is expressed as follows:

SHI = (Ti − Tv) / (To − Tv)    (2)

where Ti is the rack inlet temperature; To is the rack outlet temperature; and Tv is the air temperature from the adjacent plenum vent.

A typical SHI value is less than 0.4, and the smaller the better. A large SHI value often suggests mainly recirculation air, but it also implies that the rack experiences oversupplied air. In an ideal cooling system, all SHI values should be close to each other regardless of the rack heat loads, meaning that there is no mixing between hot and cold air and that racks with high heat loads receive more supply airflow. A cooling system with an extremely uneven SHI distribution indicates that the cold supply air is not distributed properly.
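A short sketch of Eq. (2) follows, computing SHI per rack from assumed inlet, outlet and plenum-vent temperatures; the rack names and temperature values are invented for illustration. An uneven spread of SHI values across racks would point to poorly distributed supply air.

```python
def supply_heat_index(t_inlet, t_outlet, t_vent):
    """SHI per Eq. (2): local magnitude of hot/cold air mixing at a rack."""
    return (t_inlet - t_vent) / (t_outlet - t_vent)

# Assumed per-rack temperatures (degrees C); not measured values.
racks = {"rack A": (15.0, 28.0, 14.5), "rack B": (24.0, 30.0, 14.5)}
for name, (t_in, t_out, t_v) in racks.items():
    shi = supply_heat_index(t_in, t_out, t_v)
    flag = "ok (< 0.4)" if shi < 0.4 else "check for recirculation"
    print(f"{name}: SHI = {shi:.2f} -> {flag}")
```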

The PUE metric was first introduced by the Green Grid [14] for measuring energy efficiency. PUE is defined as:

PUE = Total Facility Power / IT Equipment Power    (3)

IT equipment power includes the load associated with all of the IT equipment, such as computer, storage, and network equipment, along with supplemental equipment. The total facility power includes:

• Power delivery components such as Uninterruptible Power Supplies (UPS), switchgear, generators, Power Distribution Units (PDU), batteries, and distribution losses external to the IT equipment.
• Cooling system components such as chillers, CRACs, pumps, and dry/fluid coolers.
• IT power.


• Other miscellaneous component loads such as data center lighting.

PUE values can range from 1.0 to infinity. Ideally, a PUE value approaching 1.0 would indicate 100% efficiency (i.e. all power used by the IT equipment only) [14]. Currently, there are no comprehensive data showing the statistical distribution of PUE for data centers. Lawrence Berkeley National Laboratory conducted measurements in 22 data centers showing that PUE values generally fall within 1.3–3.0 [14].
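Eq. (3) can be evaluated directly once the facility power is broken into the components listed above; the sketch below uses invented component loads purely to show the arithmetic.

```python
# Invented component powers in kW, for illustration only.
it_power = 500.0
cooling = 120.0         # chiller + CRACs + pumps + dry/fluid coolers
power_delivery = 30.0   # UPS, PDU, switchgear, distribution losses
misc = 5.0              # lighting and other miscellaneous loads

total_facility_power = it_power + cooling + power_delivery + misc
pue = total_facility_power / it_power     # Eq. (3)
print(f"PUE = {pue:.2f}")                 # PUE = 1.31
```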

Energy balance equation:

Total heat load from IT equipment and others = Total cooling power from CRACs    (4)

The total cooling power from the CRACs can be estimated with the following equation:

P = Q · ρ · c · ΔT    (5)

where P is the cooling power (W); Q is the airflow rate of the CRAC (m³/s); ρ is the density of air (1.25 kg/m³ at 20 °C); c is the specific heat capacity of air (1005 J/kg °C at 20 °C); and ΔT is the CRAC temperature difference between return and supply air, TReturn − TSupply (°C).

The cooling power can also be calculated for the chilled water system using the same equation as Eq. (5). In the case of chilled water, Q represents the water flow rate, ρ the density of water (999.1 kg/m³ at 15 °C), c the specific heat capacity of water (4186 J/kg °C at 15 °C), and ΔT the difference between the return and supply water temperatures. Apart from IT power, the CRAC fans, UPS, PDU, etc., also contribute to the total heat load. In most data center studies, Eq. (4) is used to validate the accuracy of the measurement data; it is also applied in the energy end use breakdown analysis. In general, we can assume that the input power to equipment is entirely converted into heat. For instance, if the input power to a CRAC fan is 4.4 kW, the heat released from the fan will be 4.4 kW. Further, if we know the IT power and the input power to the CRAC fans, we can use Eq. (4) to extract the electrical power consumed in the UPS and PDU. Measuring this fixed or IT-power-related share directly would be complicated and time-consuming because it is very difficult to track all heat sources; Eq. (4) thus saves both measurement time and cost.
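To make Eqs. (4) and (5) concrete, the sketch below evaluates the cooling power on the air side and on the chilled-water side and compares it with a simple heat-load estimate. The flow rates, temperature differences and IT load are assumed operating points chosen for illustration (the per-unit CRAC airflow is the rated 8220 dm³/s from Section 4.1), not the measurements reported in Section 5.

```python
def cooling_power_kw(flow_m3s, delta_t, density, specific_heat):
    """Eq. (5): P = Q * rho * c * dT, returned in kW."""
    return flow_m3s * density * specific_heat * delta_t / 1000.0

# Assumed operating point (illustration only).
air_side = cooling_power_kw(flow_m3s=8 * 8.22, delta_t=7.0,
                            density=1.25, specific_heat=1005.0)     # eight CRACs
water_side = cooling_power_kw(flow_m3s=0.025, delta_t=5.5,
                              density=999.1, specific_heat=4186.0)  # chilled water loop
heat_load = 500.0 + 8 * 6.62   # assumed IT load plus CRAC fan heat, per Eq. (4)

print(f"air side:   {air_side:.0f} kW")    # ~578 kW
print(f"water side: {water_side:.0f} kW")  # ~575 kW
print(f"heat load:  {heat_load:.0f} kW")   # ~553 kW; remainder is UPS/PDU and other losses
```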

Fan and pump law equation: The electricity consumption of fans or pumps can be modelled with the following equation [27]:

Wfan or pump = ω · F³    (6)

where ω is a constant and F is the flow rate (m³/s). The unknown constant ω can be eliminated by rewriting Eq. (6) as:

Wfan or pump,1 / Wfan or pump,2 = (F1 / F2)³    (7)

Eq. (7) is more applicable in practice. According to Eq. (7), a fan or pump running at 50% of full speed requires only 12.5% of the power of the same device operating at 100% speed, indicating a great energy saving potential.
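A one-line check of Eq. (7): relative fan (or pump) power scales with the cube of the relative flow, so halving the speed leaves 12.5% of the power. The function below is a generic sketch, not tied to the CRAC units of this study.

```python
def relative_power(flow_ratio: float) -> float:
    """Eq. (7): W1/W2 = (F1/F2)**3 for fans and pumps."""
    return flow_ratio ** 3

for ratio in (1.0, 0.8, 0.5):
    print(f"{ratio:.0%} flow -> {relative_power(ratio):.1%} power")
# 100% flow -> 100.0% power; 80% flow -> 51.2% power; 50% flow -> 12.5% power
```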

An efficient air cooling system has good air management, which utilizes the chilled air maximally with little or no bypass and recirculation air. The investigation of an air cooling system therefore preferably involves estimating performance metrics from measurements in order to evaluate the utilization of cooling power and chilled air. In addition, a good thermal climate is always needed for the IT equipment, so inlet condition (i.e. temperature and humidity) measurements are also preferred. Based on the above considerations, one data center was selected and the following studies were carried out:

• Rack inlet condition examination. The aim is to examine whether the IT equipment is within its operating environment (temperature and humidity, see Table 1). To support this, rack inlet temperature and humidity need to be measured.
• Calculation of the CRAC cooling power. The result is used to validate the measurement data and to perform the energy end use breakdown discussed previously. For this, CRAC supply and return temperature measurements are required; if not all CRACs can be measured, the chilled water supply and return temperatures are needed instead. Furthermore, airflow rates through the perforated floor tiles have to be measured in order to determine the actual airflow rates through the CRACs.
• Evaluation of the RTI and the SHI. The result is used to evaluate air management and to identify existing problems, such as bypass air or recirculation air. To complete the evaluation, air temperatures from the perforated floor tiles and rack inlet and outlet temperatures have to be measured.
• PUE analysis. This is aimed at evaluating the energy performance. Information on IT power and facility power is required.
• Electrical end use breakdown. This is made for the energy efficiency assessment and to illustrate the potential for energy saving and reuse.
• Outdoor air ventilation analysis. Data centers normally do not have internal humidity sources; infiltration and ventilation are the main reasons for changes in the humidity level in the IT environment. It is therefore necessary to minimize the outdoor air ventilation rate so as to decrease the dehumidification and humidification loads, which can consume a huge amount of energy. Supply and exhaust airflow rates of the ventilation system are needed.

In addition, a simulation was conducted to explore the potential for heat reuse in the data center.

4. Case study in the data center

The data center is located in the southern city of Espoo, Finland. The main function of the center is to provide computing and information services for the nearby university and research institutes. The center was built with a raised floor and racks arranged in a Hot Aisle/Cold Aisle layout (see Fig. 1), and it typically operates 24 h per day. The data center also incorporates a fluid-side economizer in the chilled water system for free cooling (Figs. 2 and 5). The design IT power is nearly 1 MW, but currently the data center is operating at about half the designed power (i.e. about 500 kW).

Fig. 5. Fluid-side economizer.

4.1. Structure and cooling system

The data center has two exterior walls (concrete sandwich) and windows (sealed and covered). The interior walls are brick structures. The raised floor is 800 mm above the floor slab. The total floor area is about 420 m², with a room height of 2.9 m from the raised floor to the ceiling, giving a volume of about 1218 m³. The plan view of the data center is presented in Fig. 6. The center is divided into two parts: Main cluster and Examination area. Measurements were made in the latter.

Examination area adopts the conventional Hot Aisle/Cold Aisle protocol with locally ducted supply and flooded return (Fig. 1), although there is only one row of racks (Fig. 6). Main cluster takes a different cooling approach, depicted in Fig. 7. Main cluster is made up of air-cooled supercomputers (Cray XT4 and XT5) where chilled air from the CRACs is ducted directly in and hot air exits at the top (see Fig. 7). Compared to Examination area, this cooling method, also called fully ducted supply [22], provides a perfect separation of hot and cold air, which eliminates hot spots and enhances cooling efficiency. The rated IT power in Main cluster is about 490 kW, consisting of 20 × 11 kW and 30 × 9 kW units. Twenty-two racks (mainly HP and Dell products) reside in Examination area. The power consumption of the IT equipment per rack ranges from a few kilowatts to a maximum of 24 kW at full occupation. The rated IT power in Examination area is estimated at around 200 kW. The characteristics of the structure, IT equipment and cooling system are summarized as follows:

• Structures

Fig. 6. Plan view of the data center. The black arrows denote the approximate locations of the supply and exhaust ventilation ducts. The dimensions in the lower figure are approximate (mm).

Fig. 7. Cooling system configuration in the data center.

Fig. 8. The measurement points of CRACs and IT equipment racks. CRAC1.4 was measured from two points on the center axis and CRAC1.3 from one point. The rack inlet and exhaust air was measured at three different heights, at the centerline of the rack (mm).

– Two exterior walls (concrete sandwich), brick interior walls, sealed and covered windows.
– Floor area 420 m², room height 2.9 m, volume 1218 m³.
– Raised floor 800 mm from the floor slab.
– Hot Aisle/Cold Aisle rack arrangement at Examination area.
• Rated IT power
– Main cluster (Cray supercomputers) 490 kW, Examination area 200 kW (estimated).
• Cooling system specifications
– Air distribution system: fully ducted supply with flooded return for Main cluster, locally ducted supply with flooded return for Examination area.
– Chiller: 1 unit, capacity 950 kW, electrical power 230 kW.
– Pump (cooling circuit): capacity 950 kW, electrical power 7.5 kW, water flow rate 32.6 dm³/s.
– Pump (condenser circuit): capacity 1237 kW, electrical power 15 kW, fluid flow rate 46.5 dm³/s.
– Dry cooler: 2 units; each: capacity 668 kW, electrical power 25.2 kW, fluid 35% ethylene glycol, fluid flow rate 26.6 dm³/s, airflow rate 46,300 dm³/s.
– CRAC: 10 units (2 not in use); each: rated cooling power 107.8 kW, fan electrical power 6.62 kW, airflow rate 8220 dm³/s.
– Free cooling: fluid-side economizer, on when the outdoor temperature is below 8 °C.
• Outdoor air ventilation
– Constant air volume (CAV) system.
• Operating conditions for IT equipment
– Temperature: 10–16 °C (chilled supply air) for the Cray supercomputers at Main cluster; see Table 1 for the IT equipment at Examination area.
– Humidity: 30–65% for the Cray supercomputers at Main cluster; see Table 1 for the IT equipment at Examination area.

Lighting is not considered since it is off most of the time. The fluid used in the condenser loop is an ethylene glycol/water mixture (35% ethylene glycol), which prevents the fluid from freezing, since the outdoor temperature can be near −20 °C in winter in Finland.

4.2. Measurement set up

The measurement subjects, measurement types and types of sensors are listed in Table 3, and the sensor locations are detailed in Fig. 8.

The power performance data of the IT equipment and facilities were obtained from the center's monitoring system. A continuous measurement means that the sensor reads data continuously at a certain interval, while an instantaneous measurement means that readings are taken at a location several times and averaged.
Table 3
Measurements conducted in the data center.

Subject | Measured quantity | Measurement type | Equipment
Computer room air conditioner (CRAC) | T, RH (return and supply air ducts) | Continuous | T/RH logger ST-171 (Clas Ohlson)
IT equipment racks | T, RH (inlet and exhaust air) | Continuous, instantaneous | ST-171, Vaisala HMI41 humidity and temperature meter
Perforated floor tiles | T, RH, air velocity | Instantaneous | Vaisala HMI41, Alnor GGA-65P Thermo-Anemometer
Ventilation | T, RH (supply and exhaust air) | Continuous | ST-171

Fig. 9. Measured return and supply air temperatures of CRAC1.3 (measurement interval = 5 min).

Instantaneous measurements were taken at the middle height (1.2 m from the floor) of the racks. Empty and almost empty racks were not selected for measurement. The airflow rates from the perforated floor tiles were measured with an airflow meter, which measured the air velocities at the center and at some corner holes of the perforated floor tiles; the readings were then averaged. The airflow rates at the perforated floor tiles were calculated from the open areas and the measured average air velocities. Temperature and relative humidity were measured at all duct openings (supply and exhaust) of the outdoor air ventilation system and averaged.

5. Measurement results and discussion

CRACs: Figs. 9 and 10 present the measured return and supply air temperatures for CRAC1.3 and CRAC1.4, respectively.

Perforated floor tiles: Airflow rates at the perforated floor tiles were measured on 27th October 2010. The distribution of airflow rate was quite even, with little variation between individual floor tiles caused by their different open areas (Fig. 11). Fig. 12 presents the temperatures measured from the perforated floor tiles on 27th October 2010.

Fig. 11. Airflow rates at the centers of perforated floor tiles. Calculated from the measured airflow velocities by multiplying with the open areas of the perforated floor tiles.

Fig. 10. Measured return and supply air temperatures of CRAC1.4 (measurement interval = 5 min).

IT equipment racks: Fig. 13 shows the measured temperatures and dew points at one rack's inlet. The temperatures and dew points are average values from three different heights (Fig. 8). Fig. 14 shows the instantaneous temperature and humidity readings from the racks at Examination area (27th October 2010).

Ventilation system: Figs. 15 and 16 present the average temperature and dew point values of the supply and exhaust air measured at the ventilation duct openings in the room.

Power consumption: The IT and facility power consumption was retrieved from the center's monitoring system (see Fig. 17, 1st November 2009–1st November 2010). According to the information received from the site, Main cluster (Fig. 6) constituted about 80% of the total IT power consumption.

5.1. Air management

The measured rack inlet temperature and humidity were in the ranges of 13–15 °C (average values from three different heights) and 7–10 °C (dew point) (Fig. 13). These ranges can also be observed in the instantaneous temperature and humidity measurements on some racks at Examination area (Fig. 14).

Fig. 12. Temperatures at the perforated floor tiles. Measurement point: the center of the tile.

The rack inlet dew point temperatures were within the recommended moisture level, and the inlet temperatures fell within the allowable level but outside the recommended level (18–27 °C) from ASHRAE (Table 1). The low inlet temperatures stem from the operating requirement of the Cray supercomputers at Main cluster (Fig. 7), which require cold supply air within 10–16 °C. Low inlet temperatures often lead to inefficient cooling performance, such as a low Coefficient of Performance (COP) of the chiller and oversupplied cold air [23]. Data not shown in Fig. 13 reveal that the temperature rise from the bottom to the top of the rack inlets (1.5 m difference, see Fig. 8) was less than 0.6 °C, indicating a good separation of hot and cold air.

Performance metrics: The performance metrics SHI and RTI were calculated for Examination area (Fig. 6). All the SHI values were near zero because the measured inlet air temperatures of the racks were either lower than or near the measured air temperatures from the adjacent plenum vents (Figs. 12 and 14). This indicates that the recirculation of hot air was negligible and that the hot and cold air streams were well separated.

The weighted average temperature difference across the racks was computed as 13.56 °C (Fig. 14). CRACs 1.1, 1.3 and 1.5 were known to supply cold air to Examination area (all at full speed). On 27th October 2010 (the same day as the temperature measurements on the racks were taken, see Fig. 14), the site's monitoring system showed that:

• CRAC1.1: 14.2 °C (supply); 20.1 °C (return).
• CRAC1.3: 14.3 °C (supply); 20 °C (return).
• CRAC1.5: 14.3 °C (supply); 19.1 °C (return).

Fig. 13. Measured rack inlet and exhaust air temperatures. The values are averages of three measurement points at the centerline of the racks, at heights 0.3, 1.2 and 1.8 m from the floor level (measurement interval = 5 min).

These data yielded a 5.53 °C weighted average temperature difference for the CRACs. The RTI was then estimated as 41% (= 5.53/13.56, Eq. (1)), suggesting extreme bypass air, i.e. oversupplied cold air. In principle, the CRAC airflow could be reduced by up to 59% (= 1 − 41%). Due to the cubic relationship between fan power and fan flow (Eq. (7)), the fan energy could be reduced by about 93% if Variable Frequency Drives (VFD) were installed. Moreover, let us assume that CRACs 1.1, 1.3 and 1.5 now run at half speed. With an electricity price of 100 €/MWh in Finland, the total cost saving from the three CRAC fans would be about 15,223 € per year (input electrical power 6.62 kW at full speed, Section 4). That is a great saving; note that it comes from Examination area only, and some CRAC fan power savings are also possible in Main cluster (five CRACs in use).
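For transparency, the sketch below reproduces the arithmetic of this paragraph: the RTI from the weighted average temperature differences, the cubic fan-law reduction, and the annual cost saving for three CRAC fans running at half speed (rated 6.62 kW each, electricity at 100 €/MWh, year-round operation).

```python
# Values reported in Section 5.1 of the case study.
dt_cracs = 5.53      # weighted average CRAC return-supply difference (deg C)
dt_racks = 13.56     # weighted average rise across the racks (deg C)
rti = dt_cracs / dt_racks * 100.0                   # ~41% -> heavy bypass

fan_power_kw = 6.62  # rated input power per CRAC fan at full speed
n_fans = 3           # CRACs 1.1, 1.3 and 1.5 serve Examination area
half_speed_power = fan_power_kw * 0.5 ** 3          # Eq. (7): 12.5% of full power
saved_kw = n_fans * (fan_power_kw - half_speed_power)
saved_eur = saved_kw * 8760 * 100 / 1000            # 100 EUR/MWh, 8760 h/year

print(f"RTI = {rti:.0f}%, annual saving = {saved_eur:,.0f} EUR")  # ~41%, ~15,223 EUR
```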

Overall, the calculated performance metrics suggest that enough cooling was provided for Examination area but that cold air was oversupplied. Therefore, the CRAC fan speed can be reduced in order to mitigate the oversupply of cold air and save energy in Examination area. CRAC fans consume a large portion of the cooling energy as they typically operate at all times throughout the year, so reducing this part of the energy use is significant from the energy efficiency point of view.

5.2. Cooling power

Because measurement data were lacking for CRACs 1.1, 1.5, 1.7, 1.8, 1.9 and 1.10, we used the chilled water temperatures and water flow rate from the site's monitoring system to evaluate the total cooling power of the CRACs. The following data were received from the monitoring system on 1st November 2010 (10:00–14:00):

• Supply water temperature (cooling circuit): 9.8 °C (average value).
• Return water temperature (cooling circuit): 15.3 °C (average value).
• Water flow rate (cooling circuit): 25 dm³/s (average value).
• IT power: 494 kW (average value).
• CRAC fans: full speed.

The total cooling power of the CRACs was then calculated using Eq. (5) as 575 kW (= 0.025 × 999.1 × 4186 × (15.3 − 9.8)/1000). The heat released from the IT equipment and CRAC fans was 544 kW (= 494 + 6.22 × 8). Given that, besides the IT equipment and CRAC fans, other sources were also dissipating heat, the estimated total cooling power was in satisfactory energy balance with the total heat load of the center (544 kW or more).
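The energy balance check of this section can be reproduced directly from Eq. (5). The sketch below uses the monitored water-side values quoted above together with the rated fan power from Section 4.1 (6.62 kW per unit), so the fan term differs slightly from the rounded figure in the text.

```python
# Monitored values, 1st November 2010, 10:00-14:00 (Section 5.2).
flow_m3s, t_return, t_supply = 0.025, 15.3, 9.8
rho, cp = 999.1, 4186.0                       # water properties at ~15 C
cooling_kw = flow_m3s * rho * cp * (t_return - t_supply) / 1000.0   # Eq. (5)

it_kw, fan_kw, n_fans = 494.0, 6.62, 8        # rated fan power from Section 4.1
known_heat_kw = it_kw + n_fans * fan_kw       # IT equipment plus CRAC fans

print(f"water-side cooling power:  {cooling_kw:.0f} kW")      # ~575 kW
print(f"known heat sources:        {known_heat_kw:.0f} kW")   # ~547 kW
print(f"residual (UPS, PDU, etc.): {cooling_kw - known_heat_kw:.0f} kW")
```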
Fig. 14. (a) Rack inlet and exhaust air temperatures and (b) dew point readings taken on 27th October 2010. The measurement points are at the center axis of the racks, about 1.2 m from the floor level.


5.3. Energy performance and energy end use breakdown

The PUE value was slightly above 1.2 in winter and about 1.5 in summer (Fig. 17). The difference was most likely caused by free cooling, which was activated when the outside temperature was below 8 °C, according to the information received from the site.

The average IT power was about 492 kW from 1st January 2010 to 1st November 2010 (almost one year, Fig. 17).

Fig. 15. Room supply and exhaust air temperatures measured from the ventilation duct (measurement interval = 5 min).

In order to estimate the electrical end use breakdown for one year, we had to select a reference period in which the average IT power was near 492 kW. The period 10:00–14:00 on 1st November 2010 had an average IT power of 494 kW, which was close enough to 492 kW (see Section 5.2). This period was therefore selected as the reference, and the power information from the site's monitoring system for this period was:

Fig. 16. Room supply and exhaust air dew point temperatures measured from the ventilation duct (measurement interval = 5 min).


Fig. 17. One year's (1st November 2009–1st November 2010) facility and IT power consumption (measurement interval = 1 h).

• IT power: 494 kW (average value).
• Pumps and dry coolers: 17.4 kW (average value).
• CRAC fans: full speed.

The average power of the pumps and dry coolers was estimated by the site based on the equipment's operating conditions (e.g. percentage of full speed) and manufacturer performance data.

The electrical power consumption of the chiller was not measured, but it can be approximated from the variation of the PUE values. As a result, the average electrical power of the chiller was approximated as 149 kW (= 494 × (1.5 − 1.2)). The electrical power consumption of other equipment, such as UPS and PDU, was evaluated as 28 kW (= 494 × (1.5 − 1) − 17.4 − 149 − 6.62 × 8). The power breakdown was thus:

• IT power: 494 kW.
• Chiller: 149 kW.
• CRAC fans: 53 kW (= 6.62 × 8).
• Pumps and dry coolers: 17.4 kW.
• Others: 28 kW.
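The estimates above follow from the measured IT power and the seasonal PUE values; a small sketch of that bookkeeping is given below, using the figures quoted in this section. It reproduces the instantaneous breakdown for the reference period; the yearly split shown in Fig. 18 additionally accounts for the roughly 200 free-cooling days mentioned in the text.

```python
# Figures from Section 5.3 (reference period, 1st November 2010).
it_kw = 494.0
pue_summer, pue_winter = 1.5, 1.2          # chiller on vs. free cooling
pumps_coolers_kw = 17.4
crac_fans_kw = 8 * 6.62                    # eight CRAC units in use

chiller_kw = it_kw * (pue_summer - pue_winter)          # ~149 kW
others_kw = it_kw * (pue_summer - 1.0) - pumps_coolers_kw - chiller_kw - crac_fans_kw
total_kw = it_kw + chiller_kw + crac_fans_kw + pumps_coolers_kw + others_kw

for name, val in [("IT", it_kw), ("chiller", chiller_kw), ("CRAC fans", crac_fans_kw),
                  ("pumps + coolers", pumps_coolers_kw), ("others (UPS/PDU)", others_kw)]:
    print(f"{name:17s} {val:6.1f} kW  ({val / total_kw:5.1%})")
```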

In a typical year in Finland, the temperature is below 8 °C (the free cooling set point) on approximately 200 days. Fig. 18 shows the electrical end use breakdown for one year, with percentages.

Some indications from Fig. 18 are:

• The total power consumption of the CRAC fans is close to that of the chiller, which did not match our original expectation. The main reason is the cold climate in Finland, which gives the center more time for free cooling, while the CRAC fans have to be in operation at all times. Therefore, there is great potential for energy saving from the CRAC fans. The use of VFDs will lower the energy use of the CRAC fans, since the CRAC fan speed is currently fixed (all at full speed).
• A number of case studies reveal that fan energy savings in the 70–90% range and chiller energy savings in the 15–25% range are achievable with effective air management [24]. That means we could possibly save 5.6–7.2% of the facility power from the CRAC fans, but only 1.5–2.5% of the facility power from the chiller in this data center. Because the average CRAC supply air temperature (14.43 °C, Figs. 9 and 10) is already near the maximum operating temperature (16 °C, Section 4.1) of the Cray supercomputers at Main cluster, there is actually not much room to save energy from the chiller. Clearly, the CRAC fans have more energy saving potential than the chiller in this data center.
• The average PUE value (one year) is about 1.33 (= 100/75), which is very close to the average PUE value calculated from the measurements, 1.35 (Fig. 17, 1st November 2009–1st November 2010). We can conclude that the data center is an energy efficient one.
• 21% of the total power consumption (chiller + CRAC fans + pumps + coolers) is from the cooling system.
• About 97% of the total electrical power (chiller + CRAC fans + IT power + others) is converted to waste heat and rejected outdoors each year. This part of the waste heat can actually be reused (Section 6).

Fig. 18. Electrical end use breakdown for a typical year.


5.4. Outdoor air ventilation

The supply and exhaust airflow rates were about 0.9 and 0.8 m³/s (from the center's monitoring system), respectively; that is, the center was over-pressurized (positive pressure relative to the outside). Data centers normally do not have internal humidity sources; infiltration and ventilation are the main reasons for changes in the humidity level in the IT environment. Airtight structures and proper control of outdoor air can significantly reduce the humidification/dehumidification required in a data center. Consequently, design guidelines often require the data center to be maintained at positive pressure relative to the surrounding spaces (over-pressurized) in order to avoid infiltration [7]. The air change rate was 2.66 1/h (= 0.9 × 60 × 60/1218). ASHRAE recommends a minimum air change rate of 0.25 1/h for data centers so as to dilute contaminants. Compared to this guideline value, the air change rate of the data center was far too high. The ventilation rate should be minimized in order to decrease the humidification and dehumidification loads. It is evident that the outdoor air ventilation added moisture to the data center during the measured period (Fig. 16), which gives further reason to minimize the ventilation rate.
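The air change rate comparison can be written out explicitly; the sketch below repeats the calculation with the supply airflow and room volume given above and the ASHRAE minimum of 0.25 1/h cited in the text.

```python
supply_flow_m3s = 0.9        # outdoor air supply (Section 5.4)
room_volume_m3 = 1218.0      # raised floor to ceiling (Section 4.1)
ashrae_min_ach = 0.25        # recommended minimum air change rate (1/h)

ach = supply_flow_m3s * 3600.0 / room_volume_m3
print(f"air change rate: {ach:.2f} 1/h "
      f"({ach / ashrae_min_ach:.0f}x the ASHRAE minimum of {ashrae_min_ach} 1/h)")
# prints ~2.66 1/h, more than ten times the recommended minimum
```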

6. Heat reuse for the data center

The data center has no heat recovery system. The huge amount of heat rejected from the data center could be reused for other purposes, such as heating the rest of the building, supplying hot water or even being sold to energy markets. The potential amount of heat that can be recovered from the data center equals the heat removed from the data center plus the electrical power of the chiller. When free cooling is in use, there is no electrical power for the chiller and only the removed heat is available as potentially reusable heat.

The waste heat from the data center can be collected using an additional condenser (only available when the chiller is in operation) or an extra heat exchanger. In typical buildings, the recovered heat is in the form of hot water at 40–43 °C [28]. Such temperatures are high enough for most applications found in commercial and institutional facilities. However, if free cooling is in operation, the water generated from the rejected heat may be much cooler than 40 °C (e.g. 15–17 °C in our case). Water at such temperatures could possibly be used for preheating cold outdoor air before it enters the ventilation duct, or for preheating domestic water. Nevertheless, by our previous results (Section 5.3), about 97% of the facility power (approximately 5627 MWh per year) can be reused. In Finland, the average energy demand for space heating and water heating is about 182 kWh/m² per year for non-domestic buildings. That means the reused heat could support 30,916 m² (= 5627 × 1000/182) of non-domestic building for yearly heating (space heating + water heating).
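The heated floor area estimate follows directly from the yearly reusable heat and the Finnish average specific heating demand; a brief sketch of that division is given below with the figures used in the text.

```python
facility_energy_mwh = 5627.0      # ~97% of yearly facility energy (Section 5.3)
specific_demand_kwh_m2 = 182.0    # Finnish non-domestic space + water heating, per year

heated_area_m2 = facility_energy_mwh * 1000.0 / specific_demand_kwh_m2
print(f"supportable non-domestic floor area: {heated_area_m2:,.0f} m2")
# approximately 30,900 m2, consistent with the 30,916 m2 reported in the text
```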

. Conclusions

One data center in Finland was examined. A series of field measurements, including a long-term (one year) measurement of IT power and facility power, was conducted to evaluate air management and cooling performance. The accuracy of the measurements was verified using an energy balance. The results show that the inlet conditions (temperature and humidity) of the racks in the data center were all within the ASHRAE recommended or allowable ranges [2]. PUE values were in the range 1.2–1.5, depending on the free-cooling mode. Notable recirculation air was not observed, but a certain degree of bypass air was found in the data center. The ventilation rate was high, at the same level as in offices. A heat recovery system was lacking. All these findings, together with the simulation modelling, give the following possibilities for energy conservation and system improvement:

• Reduce the CRAC fan speed. Calculations show that the data center can save 15,223 € or more per year on the CRAC fans.

• Add a heat recovery system. The simulation results show that the data center could potentially provide yearly space heating and hot water heating for a 30,916 m² non-domestic building.

• Minimize the ventilation rate for the sake of humidity control.

Further work is needed on existing heat recovery systems and on the comparison between data centers with and without heat recovery. The development of a heat recovery system has become our first priority due to its potential in the heating energy market. A liquid cooling study will be another interesting topic.

Acknowledgments

This work has been supported by the Finnish Funding Agency for Technology and Innovation (TEKES) through the DC2F project. The first author was partially supported by the Academy of Finland. The authors would also like to thank Professor Jukka Manner for his advice and guidance.

References

[1] H.S. Sun, S.E. Lee, Case study of data centers' energy performance, Energy and Buildings 38 (2006) 522–533.
[2] ASHRAE, Thermal Guidelines for Data Processing Environments, American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc., Atlanta, 2009.
[3] ENERGY STAR, Report to Congress on Server and Data Center Energy Efficiency, Public Law 109-431, U.S. Environmental Protection Agency, Washington, DC, 2007.
[4] U.S. Department of Energy, Quick Start Guide to Increase Data Center Energy Efficiency, U.S. Department of Energy, Washington, DC, 2010.
[5] K. Kant, Data center evolution: a tutorial on state of the art, issues, and challenges, Computer Networks 53 (2009) 2939–2965.
[6] P.R. Schmidt, E.E. Cruz, M.K. Iyengar, Challenges of data center thermal management, IBM Journal of Research and Development 49 (2005) 709–723.
[7] ASHRAE, Design Considerations for Datacom Equipment Centers, American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc., Atlanta, 2009.
[8] J.F. Karlsson, B. Moshfegh, Investigation of indoor climate and power usage in a data center, Energy and Buildings 37 (2005) 1075–1083.
[9] N. Rasmussen, An Improved Architecture for High-Density Data Centers, APC White Paper 126, 2008.
[10] Emerson Electric Co., Knurr Coolflex, Knurr Technical Document, 2008.
[11] Emerson Electric Co., Liebert XD, Cooling Solutions for High Heat Density Applications, Liebert Technical Document, 2005.
[12] J. Fulton, A Strategic Approach to Data Center Cooling, Afco Systems White Paper, 2007.
[13] D. Rakesh, Choices in liquid cooling for your data center, The Data Center Journal (2009).
[14] The Green Grid, The Green Grid Data Center Power Efficiency Metrics: PUE and DCiE, The Green Grid, 2007.
[15] M.K. Herrlin, Improved Data Center Energy Efficiency and Thermal Performance by Advanced Airflow Analysis, Digital Power Forum 2007, San Francisco, CA, USA, 2007.
[16] C.E. Bash, C.D. Patel, R.K. Sharma, Efficient thermal management of data centers: immediate and long-term research needs, International Journal of HVAC&R Research 9 (2003) 137–152.
[17] LBNL, Data center website of Lawrence Berkeley National Laboratory, http://datacenters.lbl.gov/, 2003.
[18] B. Aebischer, R. Frischknecht, C. Genoud, A. Huser, F. Varone, Energy- and eco-efficiency of data centres, Département de l'intérieur, de l'agriculture et de l'environnement, Service cantonal de l'énergie, Genève, 2003.
[19] DEST, Brief Description of Server Room Activities by Danish Electricity Saving Trust, Danish Electricity Saving Trust (DEST) and Danish Technological Institute (DTU), Denmark, 2004.
[20] Guardian, Helsinki data centre to heat homes, http://www.guardian.co.uk/environment/2010/jul/20/helsinki-data-centre-heat-homes, 2010.
[21] M. Lucas, P.J. Martinez, A. Viedma, Comparative experimental drift study between a dry and adiabatic fluid cooler and a cooling tower, International Journal of Refrigeration 31 (2008) 1169–1175.
[22] N. Rasmussen, Air Distribution Architecture Options for Mission Critical Facilities, APC White Paper 55, 2003.
[23] J. Moore, J. Chase, P. Ranganathan, R. Sharma, Making scheduling "cool": temperature-aware workload placement in data centers, in: Proceedings of the Annual Conference on USENIX Annual Technical Conference, USENIX Association, Berkeley, CA, USA, 2005, pp. 61–75.
[24] M.K. Herrlin, Air Management Research, Application Assessment Report #0912, ANCIS Incorporated, 2010.
[25] F. Robert, P.E. Lars Strong, G.B. Kenneth, Reducing Bypass Airflow is Essential for Eliminating Computer Room Hot Spots, Uptime Institute Inc., 2007.
[26] C.B. Bash, C.D. Patel, R.K. Sharma, Dynamic thermal management of air cooled data centers, in: Proceedings of Thermal and Thermomechanical Phenomena in Electronics Systems, ITHERM'06, The Tenth Intersociety Conference, San Diego, CA, USA, 2006, pp. 445–452.
[27] D.P. Mehta, A. Thumann, Handbook of Energy Engineering, Fairmont Press, Lilburn, GA, 1989.
[28] ASHRAE, ASHRAE Handbook – HVAC Systems and Equipment, American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc., Atlanta, 2009.