Executive Guide: BEST PRACTICES FOR LEADING EDGE DATA CENTERS



Executive Guide:

BEST PRACTICES FOR LEADING EDGE DATA CENTERS


Authors:

Kevin Bell, Chief Analyst, InnovaStrat

Ram Nidumolu, PhD, Founder and CEO, InnovaStrat

Expert Review Panel:

Mark Aggar, Microsoft
Doug Alger, Cisco
Sam Brick, American Express
Cynthia Curtis, CA Technologies
Jay Dietrich, IBM
Ed Kettler, HP
Darren McGann, KPMG
Jane Pompe, Alcoa
David Richter, Kimberly-Clark

Edited By: Amy O'Meara, Corporate Eco Forum


CONTENTS

Introduction

Summary: A Strategic Agenda for Results

I. Key Trends

II. Strategic Considerations for Decisionmakers

III. Best Practices: Upgrading/Retrofitting Existing Data Centers

IV. Best Practices: New Data Centers

V. Best Practices by Domain

Server Utilization & Power Management

Advanced Lighting/Lights-out Operations

Virtualization

Network Topology

Power Distribution

Combined Heat and Power Distributed Generation

Containerized/Modular Data Facilities (CMDF)

Data Center Siting

VI. Performance Metrics & Certification

Appendix A: Common Acronyms

Appendix B: Energy Efficiency Rebates

Appendix C: Case Studies

Existing Data Center Retrofits

New Data Centers

Endnotes


INTRODUCTION

The guide is based on secondary research that included dozens of web-based technical and managerial sources—including research by the Green Grid, Data Center Metrics Taskforce, and other data center consortia or associations—and was reviewed and supplemented by comments from leading data industry experts. It also draws on technical studies that identify best practices in specialized areas of data center design and implementation, as well as several case studies of leading companies.

While hundreds of reports on data centers are published every year, these tend to be highly specialized and beyond the reach of the non-technical business executive. Our goal is to provide a readable and comprehensive description for high-level executives of the key trends, strategic considerations, business case studies, domain-specific best practices, and metrics that are becoming central to data center performance—many of which have been inspired by companies applying the lens of sustainability.

This guide is designed to help senior executives across business functions involved in data center decision making—CIOs, CFOs, COOs, VPs of Buildings and Facilities, Chief Sustainability Officers (CSOs), and others—understand the business implications of rapidly changing data center technologies and provide a roadmap for improving data center performance and sustainability.


SUMMARY: A STRATEGIC AGENDA FOR RESULTS

KEY TECHNOLOGY TRENDS: Stay abreast of key technology trends shaping data center performance, particularly regarding energy consumption and efficiency. Energy consumption by data centers has quadrupled in the last decade despite the doubling in energy efficiency of processors every two years. Data center energy performance is increasingly becoming a key part of corporate responsibility reporting.

CFO-CIO PARTNERSHIP: Recognize and address the challenges inherent in CFO-CIO conversations around data centers. These include the need for CIOs to commit to reducing data center energy costs, and for CFOs to recognize the necessity of upfront investments—and the alignment of incentives—that yield high returns in energy reduction down the road.

IT-OPERATIONS PARTNERSHIP: Develop and implement an explicit approach and process that ensures facilities and IT organizations communicate and interact effectively on data center activities. If these groups are not communicating and working well together, most of the facility-level actions cannot be successfully implemented.

INTERNAL DATA CENTER VS. OUTSOURCED CLOUD COMPUTING: As part of a strategic conversation, first evaluate extensively whether a data center is needed at all, or whether these services can be provided by outsourced cloud computing providers. Often, investments in data centers are made de facto, without explicitly evaluating whether they are actually required.

DATA CENTER PERFORMANCE INCENTIVES: Align organizational incentives to drive energy efficiency in the data center, and take advantage of utility rebates. Incentives to improve IT energy efficiency are often lacking, misaligned, or at cross purposes. Decisionmakers should consider the significant cost savings and productivity opportunities that improvements in IT energy efficiency offer, and empower IT departments to pursue them. Furthermore, develop and implement a plan for leveraging available energy efficiency rebates and incentives provided by utilities or the government for enabling data center investments and performance.


BALANCING AVAILABILITY/UPTIME REQUIREMENTS WITH EFFICIENCY GOALS: Weigh decisions regarding the installation of additional infrastructure against opportunities for maintenance of existing equipment, and seek to minimize energy-intensive redundancies. The push for greater availability (99.99% uptime) often results in a layering of standby infrastructure that can significantly increase energy use.

UPGRADING EXISTING DATA CENTERS:

Before committing to investments in new data facilities, consider a business plan to upgrade or retrofit existing data centers to manage increased demand for processing capacity. These plans need to explicitly incorporate best practices and exemplar case studies for reducing energy spend, increasing energy efficiency, eliminating infrastructure redundancies, and maximizing the utilization of IT assets.

IMPLEMENTING NEW DATA CENTERS:

If the case for implementing new data centers is made, demand and evaluate a business plan for implementing new data centers that incorporates best practices and exemplar case studies. These plans need to explicitly design for power usage effectiveness (PUE), airflow, floor temperature, cooling, power distribution, network architecture, and processor scheduling, among other factors covered in this guide.

LOW INVESTMENT BEST PRACTICES:

In addition to evaluating the case for upgrading or building new data centers, explore strategies that incorporate best practices for opportunities in domains that require low investment or lead to low disruption of data center operations. At a minimum, these should identify the business benefits from energy usage analysis, airflow and temperature set point management, server utilization and power management, and advanced lighting/lights-out operations.

HIGH INVESTMENT BEST PRACTICES:

Demand business plans that incorporate best practices in domains that require high investments, even if these plans are not implemented immediately. At a minimum, these plans should identify the business benefits from applying best practices in virtualization, adaptive dynamic routing, network topology, air and water economizers, power distribution, and distributed generation of combined heat and power (CHP).

NEW FACILITIES DESIGN AND SITING:

Explore best practices in emerging domains that affect entire facilities, such as containerized/modular data (CMD) facilities, and use criteria for locating data centers that go beyond immediate performance benefits to the IT organization. Often, these aspects get left out of data center strategies and execution, even though their impact on data center performance is significant in the long run.

DATA CENTER PERFORMANCE METRICS & CERTIFICATION:

Demand data center strategies and implementation plans that describe how performance will be monitored and improved using metrics, such as those provided by the Data Center Metrics Task Force, the Green Grid, EPA portfolio management of data centers, and the Lawrence Berkeley National Laboratory Data Center Resources, as well as Energy Star ratings and certification. The availability of real-time data for key metrics is essential for alerting and troubleshooting, while daily data can be effective for reporting purposes. Metrics should range from simple systems measurements to efficiency metrics such as PUE and partial PUE (pPUE), as well as consumption metrics for energy, water and other resources. The strategies should explicitly recognize precautions in using these metrics as well as how they fit into the emerging trends.

INCREMENTAL IS NOT ENOUGH:

Do not settle for incremental improvements in data center performance, given the wide variety of successful strategies, best practices, case studies, and opportunities described in this guide. Data center and IT management needs to justify why a more radical approach to improving data center performance cannot be taken. Strategies for improving data center performance need to show the requisite boldness for becoming a key component of the enterprise's sustainability and business strategies.


I. KEY TRENDS

95% of the 250 largest companies in the world now report on their corporate responsibility (CR) activities.1 IT data centers are often highlighted in company CR reports and are a key component of the CR strategy.

Data centers consume up to 40 times more energy than an office building of comparable size,2 and this difference in energy consumption is increasing over time.

The physical installed server base continues to grow despite server consolidation and virtualization, reaching roughly 40 million servers by 2012. Direct energy and cooling costs for those servers have reached roughly $5 billion per year.3

Processor energy efficiency is continuing a fifty-year trend of doubling every two years.4 Moreover, processors and associated memory and data storage equipment footprints are shrinking and becoming denser as they become more efficient.


Energy demand by data centers has quadrupled over the last decade. Heat densities per square foot of processor rack have increased more than an order of magnitude over the same period.5 But more concentrated waste heat from denser physical configurations raises new cooling issues, which increases energy consumption.

More concentrated waste heat from denser physical configurations raises new configuration and capacity issues. Energy and water efficiency in data centers has not kept up with processor efficiency, with serious availability and cost implications for data center operators.6,7,8


Servers are getting smaller, so the energy requirements and heat output per cubic foot are getting higher as people cram more servers into the same space. However, an individual blade consumes far less power and produces less heat than a previous generation of server with equivalent performance.9

In aggregate, the higher energy demand can be offset through the expanded capabilities of server systems to virtualize workloads, increase the densities of processor deployment and operate at higher utilization rates —thereby reducing the overall cost of workload delivered per unit of energy consumed, while also significantly reducing the space and hardware investment required to support the workload.10

System power utilization and temperature specifications are often set arbitrarily low relative to current standards. As a result, power systems are often overbuilt, and efficient cooling options are often unnecessarily constrained.11

Servers and network routers have increasingly sophisticated power management capabilities, including processor sleep and dynamic voltage and frequency management, memory sleep, and I/O power management, which should be enabled.

Implementation of these power management functions can reduce server power use by 20%-70% when no workload is present. Best practice uses of these technologies, combined with best practice server and network architectures, have achieved 85% reductions in power consumption.12


II. STRATEGIC CONSIDERATIONS FOR DECISIONMAKERS

In improving data center performance and managing data center costs, the following strategic considerations should be taken into account by C-level executives and senior IT managers.

CIO AND CFO INTERACTIONS

There’s often a lack of alignment between IT management of data centers and the need for cost control within a corporation. Data center managers are rarely directly responsible for monitoring their energy costs, and CFOs may be unaware of the implications of advances in processing capacity, or emerging best practices in data center infrastructure.

IT – FACILITIES COLLABORATION

The concept of a data center is changing, from a building that happens to have computers in it, to a computer that happens to be a building.14 This blurring of boundaries between physical plant and IT infrastructure requires a consistent approach to address the needs of both IT and facilities. If IT and facilities groups are not communicating and working together, most of the facility-level actions cannot be successfully implemented.

Incentives for IT and facilities managers are often not aligned. In a traditional organization of functional roles, facilities and operations management controls infrastructure design, layout, facilities, and costs (often including energy and water), while IT controls the IT equipment. In many cases, the group paying for the equipment is not the one procuring it.

IT facilities and operations costs are increasing as infrastructure efficiency falls behind processing efficiency.13

Industry experts interviewed for this report repeatedly emphasized that a successful data center implementation requires a deep and coordinated understanding of what needs to be done and why, particularly between the CIO and CFO functions of the company.

The emphasis on the relative importance of efficiency and costs can vary between departments, which may also have vastly different planning horizons.

In many companies, IT and facilities report through different channels, creating two separate information and decision silos.

Many IT managers are poorly informed about heating and cooling infrastructure and opportunities, which is typically considered a facilities function.

Many times, IT does not have real control over replenishment cycles for facilities or IT equipment.

A successful data center implementation requires a coordinated and consistent partnership across traditional manage-ment functions.


CASE STUDY: IBM

• IBM initiated a project to install real-time thermal monitoring systems across its data center portfolio, including delivery services and business recovery data centers and software group IT labs.

• A few sites had forged a strong working partnership between the IT and site operation groups.

• By working in partnership, the organizations were able to combine their IT and facilities system knowledge to implement solutions for a range of energy efficiency and reduction opportunities.

• Some are standard industry solutions while others represent site-specific opportunities.

• By building on the results achieved and the opportunities presented by the real-time thermal management system and the implementation of a global target for data center PUE, IBM implemented this collaborative model across the data center portfolio.

• Some locations embraced the approach while others needed more support for the partnership.

• While this transition is not complete and IBM is still in the early phases of capturing the savings from opportunities revealed by real-time thermal monitoring, the net result has been a significant increase in the implementation of energy conservation projects in IBM’s raised floor portfolio.

• It has also slowed the rate of growth of IBM data center energy use below what it would have been if the two groups had continued to operate separately.


DATA CENTER VS. OUTSOURCED CLOUD COMPUTING

Data centers represent an enormous expenditure of money, resources, and time for organizations. As a result, the decision whether to invest in them in the first place needs to be carefully considered.

The availability of high-speed, reliable networks linked by high performance, readily scalable and geographically distributed cloud services has become a viable alternative to in-house or co-located processors and/or data storage.

It is important to consider the trade-off between local network, processor, and storage energy consumption and the incremental energy required to transport and switch data in a cloud configuration.

In general, private cloud storage is significantly more efficient than public cloud storage for files frequently accessed by local applications, and cloud storage is superior to local storage for files that are not constantly accessed. Processing needs to be located with large data sets for performance and efficiency, regardless of whether it's in the cloud or local.

Cloud processing may be superior to local processing for computationally intensive tasks that do not require frequent real-time changes to the user interface.

In many cases, the right answer is a combination of local infrastructure and cloud services, optimized for the type of processing and storage that is required. 15

IT INCENTIVES

Organizational incentives to achieve energy efficiency in the data center are often lacking, misaligned or at cross-purposes. Because the data center is generally seen as an operational expense for a company as a whole, while the capital expense of IT equipment inside is covered by the individual business unit, the true lifecycle cost of the equipment is not felt: as a result, the purchaser does not pay the full cost of powering and cooling equipment over its lifetime.16

IT staff are often not willing to invest the upfront premium for more efficient equipment since they will not receive the return on investment from reduced operating costs. This problem is further exacerbated in colocation facilities where costs are even further removed. Creating alignment between facilities and IT can begin to address split incentives—for example, passing through power and cooling charges to individual business units. C-level decisionmakers should consider the significant cost savings and productivity opportunities that improvements in IT energy efficiency offer, and empower IT departments to pursue them.17

Energy efficiency rebates or incentives are often offered by utilities or public entities and can be used to offset project costs. Rebates or incentives depend on the project requirements of the specific energy efficiency program, and may be based on measured or calculated energy savings, the incentive/rebate needed to achieve a specified financial return for the project, the number of designated efficient units purchased, or some other relevant criteria.

Incentives offered by utilities may be harder to realize in the case of colocation facilities because of utility policies that restrict incentive payments to anyone other than the customer on record. As a result, companies leasing colocation space often cannot receive incentives for energy efficiency improvements within the facility.18

IT management should review examples of several data center project types, their annual energy savings, and the percent of the project cost covered by the incentive or rebate (Appendix B). For virtualization projects, the rebates/incentives are applied to the cost of the purchase of the IT equipment.

IT departments should embrace the opportunity provided by incentives or rebates, which can provide additional impetus to propose data center consolidation projects.


III. BEST PRACTICES: UPGRADING/RETROFITTING EXISTING DATA CENTERS

KEY STEPS TO A GREEN DATA CENTER: 19

1. Monitoring: Install a basic energy and IT workload metering/reporting system to understand energy use and workload distribution in your data center, and use this data to calculate PUE (a minimal calculation sketch follows this list).

2. Physical Facility and Operations Review: Perform a "recommissioning" review of your facilities to identify operating and efficiency improvements in your existing systems.

3. IT Inventory and Infrastructure Review: Perform a similar "recommissioning" review on your IT systems, identifying current hardware and software systems that can be optimized for improved performance.

4. Disposal and Recycling of Old Equipment: Identify unused, highly underutilized or redundant equipment and shut down, remove, or recycle it.

5. Rebalance Data Center Cooling: Plug penetrations that short circuit cooling air, rebalance floor tiles to deliver cool air to hot spots, move to a cold-aisle/hot-aisle configuration, and eliminate redundant cooling units.

6. Implement Power Management on New Equipment: Ensure that server power management is enabled on newly installed systems.

7. IT Operations Needs Analysis: Building on the IT Inventory and Infrastructure Review, determine hardware and software systems upgrades that will deliver efficiency improvements.

8. Refresh/Upgrade Hardware: Install new, more efficient equipment where warranted—such as replacement of old ventilation equipment, power supplies, and processing and network infrastructure.

9. Virtualization and Consolidation: Consolidate multiple workloads onto high-utilization server and storage configurations.

10. Use Free Cooling Where Possible: Optimize ambient air or water-cooling technologies.

11. Implement Time- and Usage-Based Power Provisioning: Enable dynamic processor provisioning and equipment power controls.
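The PUE calculation in step 1 is simple arithmetic once total facility energy and IT-load energy are metered. The following is a minimal sketch, assuming hypothetical monthly meter readings; the meter names are placeholders, not references to any particular metering product.

```python
# Minimal sketch of step 1 (monitoring): derive PUE from metered energy.
# The meter names and readings below are hypothetical placeholders; real
# inputs would come from the facility's metering/reporting system.

monthly_kwh = {
    "utility_feed": 1_450_000,   # total facility energy at the meter
    "it_equipment": 830_000,     # energy delivered to IT loads (servers, storage, network)
}

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy (dimensionless, >= 1)."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

print(f"PUE: {pue(monthly_kwh['utility_feed'], monthly_kwh['it_equipment']):.2f}")
# -> PUE: 1.75 (a mid-range figure; best-practice facilities approach 1.1)
```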


CASE STUDY: EM 20

Data center retrofits 2004-2009

• Hot and cold aisle design with a goal of eliminating hot spots

• Hot air return plenum for a more efficient removal of heated air

• Filler panels over rack units with no equipment installed

• Floor pillows to reduce wasted air flow around cables

• Selective in-row cooling

• Monthly computational fluid dynamics (CFD) analysis to identify and rectify hot spots with vented floor tiles

• Virtualization consolidated servers by 80%

• Data center energy efficiency increased by 34%, storage utilization increased by 19%, and the consolidated server environment consumes 70% less power and 70% less cooling

• DC efficiency: power from $3.7M to $1.2M, cooling from $2.5M to $0.8M

• Storage: power from $5.8M to $2.6M, cooling from $3.2M to $1.4M

• Virtualization: power from $2.5M to $0.7M, cooling from $1.5M to $0.5M

Total: Power costs reduced from $12.0M to $4.5M, cooling from $7.2M to $2.7M (63% savings each)

Data center retrofit, New York, 2002 21

• 200 kW fuel cells to provide 1.4 MW capacity

• Higher facility reliability

• 1/3 reduction in cooling load due to thermally activated cooling powered by the waste heat of the fuel cell power systems

• Carbon footprint reduction: 5,500 tons/year reduction in CO2 emissions

• $680,000 per year in operating cost savings

CASE STUDY: VERIZON

Data center retrofit 22

• 2,300 m2 data center

• Raise chiller and humidity set-points, repair the water economizer, install VFDs on cooling pumps, improve air handler efficiency, add lighting controls

• Cooling plant improvements: 1,273 MWh/year savings, $150k cost, 1-year payback

• Air management improvements: 243 MWh/year savings, $30k cost, 2.7-year payback

• Lighting and generator set-point: 12 MWh/year savings, $1.5k cost, 3.3-year payback


IV. BEST PRACTICES: NEW DATA CENTERS 23

MEASURE POWER USAGE EFFECTIVENESS (PUE):

Measure often, as seasonal weather variations can have a significant effect on PUE. Use new PUE standards for consistency (see Section VI: Performance Metrics and Certification).

MANAGE AIRFLOW:

Minimize hot and cold air mixing. Size your cooling load to your IT equipment, and be sure that your cooling load is proportionally synchronized to real-time IT loads.

USE FREE COOLING:

Ambient air, evaporative cooling, and thermal reservoirs minimize mechanical cooling.

DESIGN FOR HIGHER RAISED FLOOR TEMPERATURES:

Design the data center to operate at ASHRAE A2 or better and install a real-time thermal monitoring and control system in the data center to effectively manage the higher raised floor temperature.

OPTIMIZE POWER DISTRIBUTION:

Eliminate as many power conversion steps as possible. Minimize UPS losses by choosing high efficiency models. Maintain higher voltages all the way to loads to reduce line losses and minimize transformer losses.


CASE STUDY: MICROSOFT

• 22% improvement in fleet PUE between 2005 and 2007.27 The current data center fleet average is 1.6, and newer modularized data center configurations run at a PUE of 1.2.28 Current best practice Microsoft modular designs run at a PUE of 1.06. 29

Data Center, Quincy, WA

• 46,000 m2 data center 30

• Modular construction 31

• Water treatment plant recycles 4,000,000 m3/year, 80% goes to groundwater recharge 32

Data center, San Antonio, TX

• 44,000 m2 data center 33

• 30,000 m3 of recycled water/year 34

• Tree shading reduces cooling costs 35

Data center, Dublin, Ireland 36

• 100% ambient air cooling

• 38% reduction in annual energy demand, 200,000 m3 reduction in annual water demand

• PUE of 1.25

CASE STUDY: GOOGLE

• Overall results: Data center overhead in Google centers is 14%, versus 100% for standard practice data centers. 24

• As of Q4 2011, the trailing twelve-month (TTM), energy-weighted average PUE for Google data centers with an IT load of at least 5 MW and time-in-operation of at least 6 months is only 1.14, with some individual data centers as low as 1.11. 25

• Efficient Servers: Google servers lose only a little over 15% of the electricity they pull from the wall during power conversion steps, less than half of what is lost in a typical server. They omit parts that aren't needed for Google applications; for example, these servers don't have any graphics chips. Google also optimizes its servers and racks to use the minimum amount of fan power possible, and fans are controlled to spin only as fast as necessary to keep the server temperature below a threshold. The company encourages all of its suppliers to produce components that operate efficiently whether they are idle, operating at full capacity, or at lower usage levels, a property called "energy proportionality." 26

Example - Hamina, Finland:

• Seawater-cooled dual-loop facility completely eliminates the need for chillers.

• Returned seawater is allowed to cool and is mixed with fresh seawater prior to release, minimizing environmental impact. The facility re-uses existing on-site industrial infrastructure, further reducing costs and eliminating incremental land-use impacts.
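The "energy-weighted average PUE" cited in the Google case study can be reproduced from per-facility energy totals. Below is a hedged sketch of one common way to compute it, by summing energies rather than averaging per-facility ratios; the facility figures are hypothetical.

```python
# Hedged sketch: one common way to compute a fleet-wide, energy-weighted
# average PUE over a trailing-twelve-month window. Facility figures are
# hypothetical. Summing numerators and denominators weights each facility
# by its energy, avoiding the distortion of averaging per-facility PUEs.

fleet_ttm = [  # (total facility MWh, IT equipment MWh), trailing twelve months
    (52_000, 46_000),
    (88_000, 77_000),
    (31_000, 27_500),
]

total_mwh = sum(total for total, _ in fleet_ttm)
it_mwh = sum(it for _, it in fleet_ttm)
print(f"Energy-weighted fleet PUE: {total_mwh / it_mwh:.2f}")  # -> 1.14
```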


V. BEST PRACTICES BY DOMAIN

Data center performance can be increased by implementing improvements in a wide variety of domains, as summarized below and detailed subsequently.

LEVEL OF EFFORT: LOW INVESTMENT/LOW DISRUPTION

Energy Usage Analysis: Measure energy use to understand where it is being used. Use dedicated power meters where possible and vendor-supplied calculators to derive energy use estimates.

Airflow and Set Point Management: Plug short circuits, rebalance air flows, and turn off unneeded computer room air conditioning (CRAC) units. Once rebalancing is done, increase the raised floor operating temperature.

Server Utilization & Power Management: Monitor and measure unused servers and storage on a day-to-day basis.

Advanced Lighting/Lights-Out Operations: Deploy digitally controlled solid-state lighting to reduce energy demand.

LEVEL OF EFFORT: HIGH INVESTMENT/HIGH DISRUPTION

Virtualization: Often provides the biggest cost and energy reduction leverage. Approximately 25% of virtualization opportunities are easy, 50% require more planning and effort, while the last 25% are most challenging.

Adaptive Dynamic Routing: Implement two-tier dynamic routing that uses both predictive and reactive dynamic provisioning to allocate workloads.

Network Topology: Deploy virtual networking algorithms to improve efficiency and reduce energy consumption.

Air and Water Economizers: Enhance airside systems with direct or indirect evaporative cooling, and use the cooling tower water loop when it is cooler than the return chilled water.

Power Distribution: Direct Current systems have to be designed into a data center or an expansion section. You cannot safely mix and match AC and DC IT equipment in the data center.

CHP Distributed Generation: Deploy local combined heat and power generation with onsite renewable energy generation, along with optimized DC components.

LEVEL OF EFFORT: NEW OR EXPANDED SYSTEM/FACILITY OPPORTUNITIES

Containerized/Modular Data (CMD) Facilities: Use pre-engineered and pre-fabricated modular units that can be rapidly combined to meet changing customer requirements.

Data Center Siting: Locate data centers using additional criteria such as regional efforts to reduce carbon emissions through renewable energy.


LOW INVESTMENT/DISRUPTION PRACTICES

ENERGY USAGE ANALYSIS

Meter and track key power inputs to understand actual power use across the data center.

When looking to boost efficiency, reduce costs, and generally optimize your data center, first identify bottlenecks, over-provisioned resources, under-utilized equipment, and energy over-consumers.

Established metrics such as Power Usage Effectiveness (PUE)—along with forthcoming metrics from The Green Grid—provide an easy, effective means of pinpointing those areas and avoiding guesswork as to where problems may or may not exist.

Consider opportunities for energy savings you can achieve without conducting a major overhaul to your data center.

For instance:

• Plug cooling air short circuits, rebalance air flows, and turn off unneeded air conditioning units.

• Slightly increase your data center's temperature and relative humidity set points to reduce energy usage and costs without exceeding recommended limits.

Use dedicated power meters and vendor-supplied calculators to fine-tune your equipment resource infrastructure.37

AIRFLOW AND SETPOINT MANAGEMENT 38

New research shows that data center temperature and humidity ranges can be significantly broader than was previously assumed, without compromising system reliability. These expanded ranges have been incorporated into current best practice standards. 39

Changes in temperature and humidity settings alone can significantly reduce cooling and dehumidification loads, while also opening up a variety of cooling options that rely on ambient air instead of chillers.

Plug holes, rebalance air flow, and shut down CRACs first, balancing to the current data center temperature, then move to the higher temperatures. It is best to raise the temperature in 1°C increments to expose hot spots so they can be corrected before raising the temperature again (a procedural sketch follows below).

Benefits of real-time thermal monitoring systems in the data center are significant. They enable the data center team to identify and eliminate hot spots and increase raised floor temperature to increase the use of free cooling and reduce cooling loads while maintaining system reliability. Real-time monitoring improves equipment and installation planning, provides early warning of developing thermal instabilities, and enables dynamic cooling management to match cooling delivery to heat generation as data center workload and energy use fluctuate with time. 40
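The 1°C-increment procedure above can be framed as a simple control loop. The sketch below is illustrative only: read_inlet_temps and set_cooling_setpoint are hypothetical stand-ins for a site's real thermal monitoring and building management interfaces, and the ASHRAE class A2 ceiling is used here as an assumed hot-spot reference.

```python
import time

# Illustrative sketch only: raise the raised-floor setpoint in 1 degC steps,
# pausing to check for hot spots before each further increase, per the text.
# read_inlet_temps() and set_cooling_setpoint() are hypothetical stand-ins
# for a site's real thermal-monitoring and building-management interfaces.

ASHRAE_A2_MAX_C = 35.0   # assumed allowable inlet ceiling for class A2 equipment
HOT_SPOT_MARGIN = 2.0    # flag any inlet within 2 degC of the ceiling

def raise_setpoint(current_c, target_c, read_inlet_temps, set_cooling_setpoint,
                   settle_seconds=3600):
    """Step the cooling setpoint up 1 degC at a time, backing off on hot spots."""
    while current_c < target_c:
        current_c += 1.0
        set_cooling_setpoint(current_c)
        time.sleep(settle_seconds)  # let the room reach steady state
        hot = [t for t in read_inlet_temps()
               if t > ASHRAE_A2_MAX_C - HOT_SPOT_MARGIN]
        if hot:  # stop and fix airflow before raising the temperature again
            current_c -= 1.0
            set_cooling_setpoint(current_c)
            break
    return current_c
```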

SERVER UTILIZATION & POWER MANAGEMENT

Increasing server and storage system utilization and decreasing the number of unnecessary systems has huge potential — and is in fact the key opportunity represented by server virtualization technology.

Recent survey results indicate that only 20% of data center operators monitor unused servers on a day-to-day basis. Operators that do track server utilization data report that an average of 10% of servers are unused, implying an annual waste globally of $19 billion per year and 11 megatons of unnecessary carbon dioxide emissions in 2010. 41 This is true for both physical and virtual servers. 42

Nearly one-third of respondents had never attempted to discover unused servers, and 20% of those who had checked did not know what percentage of their servers was unused.

Ultimately, finding unused servers could avoid the need for a new data center. 43, 44

The effect is amplified in virtualized environments because of a perceived low cost per virtual machine. 45

Given that even heavily virtualized servers are achieving utilizations of only 50%, deploying a server that reduces power use at idle by 80% can reduce server energy use by 40% as compared to a server deployed without power management enabled.
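The 40% figure follows from simple arithmetic under the assumptions the text implies: at 50% utilization the server sits idle roughly half the time, and without power management its idle draw is comparable to its active draw.

```python
# Rough arithmetic behind the 40% figure, under simplifying assumptions the
# text implies: the server is idle about half the time (50% utilization) and,
# without power management, draws roughly the same power idle as active.

active_fraction = 0.5       # share of time doing useful work
idle_fraction = 1 - active_fraction
idle_power_reduction = 0.8  # power management cuts idle draw by 80%

savings = idle_fraction * idle_power_reduction
print(f"Total server energy saved: {savings:.0%}")  # -> 40%
```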

ADVANCED LIGHTING/LIGHTS-OUT OPERATIONS

Lighting loads can be significantly reduced, with concurrent reductions in cooling and O&M costs, through the use of dimmable LED technology throughout data facilities. Daylighting reduces the need for artificial lights. Digitally controlled solid-state lighting minimizes waste heat and energy demand, and reduces maintenance costs.

Digitally controlled lighting sensor networks similar to systems used in modern warehouses can instantly light any area where illumination is required while remaining off the rest of the time.

While lighting is not typically considered a major component of data center energy demand, it is an easy and relatively low-cost fix with significant long-term benefits. However, viewed in the context of traditional facilities such as office space, energy for data center lighting can be significant.


HIGH INVESTMENT/DISRUPTION PRACTICES

VIRTUALIZATION

In most cases, virtualization provides the biggest cost and energy reduction leverage. It is estimated that 25% of the virtualization opportunities are easy, 50% require more planning and effort, and the last 25% may present significant obstacles.

Virtualization is the most important transition occurring in IT today. But it brings a unique set of management and security challenges related to server consolidation, infrastructure optimization, automation and synchronization, and agile responsiveness to dynamic changes in the computing environment. 46

NETWORK TOPOLOGY 47

As data server density and efficiency increase, networking components and topology become much larger issues, requiring optimization in order to reduce network latency, provide dynamic adaptive configurations, and increase energy efficiency.

Virtual networking algorithms can utilize existing equipment to improve data throughput and reduce oversubscription of computing resources.

Initial prototypes run at 94% of theoretical maximum throughput efficiency, with zero oversubscription. A comparable hardware-only system would cost 14 times as much to build. 48

AIR AND WATER ECONOMIZERS

Economizers are cooling technologies that take advantage of favorable outdoor conditions to provide partial or full cooling without using the energy of a refrigeration cycle. There are two main economizer categories: air-side and water-side systems.

Chiller/economizer system optimization can be driven through software systems that enable integration of the powered and free-cooling systems.

Air-side systems: These are systems that may use direct, fresh air blown into the data center with hot air extracted and discharged back outdoors, or they may use an air-to-air heat exchanger. With the air-to-air heat exchanger, cooler outdoor air is used to partially or fully cool the interior data center air.

Airside systems may be enhanced with either direct or indirect evaporative cooling, extending their operating range.

For air-side economizers, if you are doing direct air cooling, it is important to perform a corrosives check on the air stream, especially in urban areas in growth markets, to determine whether the IT systems would be exposed to corrosive gases present in the air stream.

Water-side systems remove heat from the chilled water loop by a heat exchange process with outdoor air. Typically, there may be a heat exchanger that is piped in series with the chilled water return and chiller as well as piped either in series or in parallel with the cooling tower and chiller condenser water circuit.

In water-side systems, when the cooling tower water loop is cooler than the return chilled water, it is used to partially or fully cool the chilled water, thus reducing or eliminating demand on the chiller refrigeration cycle.

The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) standard 90.1 on energy efficiency is adopted by many of the states and municipalities in the United States as their building energy efficiency code.

The recent 2010 version of this standard now includes applicability to data centers and requires economizers as a baseline installation for a large portion of the U.S.

Beyond their ability to save energy, economizers are becoming a regulatory requirement, leading to increased use. 49

Roughly half of US data centers larger than 500 m2 use economizers. 50 On average, these economizers are under-utilized by about 20%, often because organizationally mandated temperature set points are significantly lower than current best practice standards. 51

Roughly 80% of data centers using economizers report modest to major reductions in both energy costs (20% on average) and maintenance costs (7% on average).

As emerging best-practice temperature and humidity set-point standards become more widely implemented, data center PUE could be positively affected.

Water chillers offer additional advantages in areas with high peak-period electric rates, or in areas where large amounts of intermittent zero-carbon electrical generation are available.

Water can be stored and chilled on-site in advance of peak load periods, when the cost of energy is cheaper, and/or when zero carbon renewable resources, such as wind energy, are available.

In many regional electric systems, this ability to shift data center loads brings significant reliability and stability benefits to the entire grid, adding value to the regional grid that data centers can realize in the form of dramatically reduced electrical rates.
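A rough sketch of the economics described in the preceding paragraphs, with hypothetical tariff and load figures: chilled water made overnight at off-peak rates displaces chiller electricity that would otherwise be purchased at peak rates the next afternoon.

```python
# Illustrative economics of on-site chilled-water storage: run chillers at
# night at off-peak rates and displace on-peak chiller electricity the next
# afternoon. All figures below are hypothetical round numbers.

daily_chiller_mwh = 12.0   # chiller electricity shifted to off-peak each day
on_peak_rate = 120.0       # $/MWh during afternoon peaks
off_peak_rate = 45.0       # $/MWh overnight
storage_losses = 0.10      # extra energy to cover standby/thermal losses

cost_on_peak = daily_chiller_mwh * on_peak_rate
cost_off_peak = daily_chiller_mwh * (1 + storage_losses) * off_peak_rate
print(f"Daily savings: ${cost_on_peak - cost_off_peak:,.0f}")  # -> $846
```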


POWER DISTRIBUTION

Properly sequenced best practice Uninterruptible Power Supply (UPS) and inverter components dramatically improve power distribution efficiency. However, the correct equipment is a function of expected data center loading. Higher loading does not necessarily translate to higher efficiency. 52

Electricity is typically delivered over long-distance transmission networks as high-voltage alternating current (AC), transformed to lower voltage AC at the data center interconnection to the power grid and at various intermediate points within the data center, then converted to direct current (DC) at the point of use.

Each of these transformations results in significant energy losses and waste heat. Centralizing and minimizing transformation to DC on a facility-wide scale simplifies the power system and maximizes its efficiency (a worked sketch of compounding conversion losses follows below).

High voltage DC systems reduce inverter and transformer losses, and are more efficient for highly loaded systems. They are also simpler, involving fewer components, implying higher reliability and a lower TCO. 53 However, the cost of DC systems is significantly greater than that of AC systems, and there is an emerging consensus that the efficiency advantage of DC has been reduced with recent generations of AC systems.

Direct Current systems, which many consider more efficient, have to be designed into a data center or an expansion section. It is hard to safely mix and match AC and DC IT equipment in the data center.
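Because per-stage losses compound multiplicatively, removing or upgrading conversion stages has an outsized effect on delivered power. The sketch below illustrates this with hypothetical round-number stage efficiencies; they are not measured values for any particular product.

```python
from math import prod

# Sketch of why eliminating conversion steps matters: losses compound
# multiplicatively along the distribution chain described above. The
# per-stage efficiencies are hypothetical round numbers.

legacy_chain = {            # efficiency of each conversion/distribution stage
    "utility transformer": 0.98,
    "UPS (double conversion)": 0.90,
    "PDU transformer": 0.96,
    "server power supply (AC->DC)": 0.85,
}
consolidated_chain = {      # fewer stages, higher-efficiency components
    "utility transformer": 0.98,
    "high-efficiency UPS": 0.96,
    "server power supply": 0.92,
}

for name, chain in [("legacy", legacy_chain), ("consolidated", consolidated_chain)]:
    eff = prod(chain.values())
    print(f"{name}: {eff:.1%} delivered, {1 - eff:.1%} lost as heat")
# -> legacy: 72.0% delivered; consolidated: 86.6% delivered
```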

COMBINED HEAT AND POWER DISTRIBUTED GENERATION 54, 55

Local self-generation of electricity and process heat by larger users was standard practice earlier in the 20th century, but became less common as centralized generation prices dropped. This long-term trend had reversed by the late 1970s, and distributed generation by data centers can offer significant economic and environmental opportunities.

Local combined heat and power (CHP) and/or on-site renewable energy generation can dramatically reduce the data center carbon footprint, and reduce longer-term pressure on the existing transmission and distribution system as well. In both cases, on-site generation of DC current improves overall system efficiency by an additional 20%-25%.

A typical CHP installation for a data center would use natural gas to power an efficient electric fuel cell generator, reusing the waste heat from the fuel cell to operate absorption chillers for air conditioning.

Data centers have much higher energy utilization intensities (20 to 100 watts per square foot) than typical commercial buildings, with all of this energy being converted to heat as it is used within the facility.

Recovered heat in the form of steam or hot water can be used to power a chiller, which can then be used for facility air conditioning or, less commonly, to feed chilled water to water-cooled racks.

The overall energy efficiency of this CHP process can be up to three times the efficiency of centralized generation, and the use of natural gas significantly reduces the carbon footprint relative to coal-fired generation, and reduces it in all parts of the country during system peaks. 56

Combined with optimization of DC components to maximize efficiency, CHP is a compelling alternative in areas where utility electrical rates or system reliability are a potential issue. 57, 58

NEW OR EXPANDED SYSTEM/FACILITIES PRACTICES

CONTAINERIZED/MODULAR DATA FACILITIES (CMDF)

The concept of agile data facilities is transforming the physical nature of data centers. A new approach to data infrastructure relies on the use of pre-engineered and pre-fabricated modular units that can be rapidly combined, recombined, and upgraded to meet changing customer requirements using best available practices and technology.

Traditional data centers have the advantage of being customizable down to an individual component; CMDF units must be arranged in pre-optimized combinations to work reliably.

On the other hand, CMDF units can be incrementally added to gracefully adjust as demand fluctuates, eliminating capital-intensive swings between over-capacity and under-capacity. However, the unit of scale may be much too large for the average data center operator to take into account.

DATA CENTER SITING

The current wave of new large data center construction is occurring in areas that offer existing energy (and often existing industrial) infrastructure, excellent connections to the national data grid, and reliable energy sources. In many of these areas, data centers are replacing industrial production that has left the area.


Data Center operators can maximize the quantity of renewable energy in their electricity purchases by incorporating the availability of renewable energy into their siting criteria. Considerations can include:

• Locations in countries or regions with abundant renewable sources such as hydropower and installed wind generation. Two examples are Brazil, where the majority of the electricity supply is generated from hydropower, and the Pacific Northwest, which has abundant hydroelectricity, long-term water storage, and the fastest growing next-generation renewable energy sector in the US;

• Locations that have plans for renewable energy development and make arrangements to contract for existing or planned generation facilities;

• Locations which have resources and incentives for on-site renewable energy generation (primarily wind or solar) or on-site co-generation systems; and

• Availability of long-term, fixed-price contracts for renewable energy purchases, which can provide a hedge against future electricity price increases.

The availability of low CO2 or renewable energy sources must be balanced against other factors including network latency, the cost of power and water, taxes, proximity to transportation hubs, clients and required support infrastructure, political stability, and geographic risks (earthquakes, tornados, typhoons, etc).

Data center energy and water efficiency can go a long way towards reducing environmental impact, minimizing long-term costs, and improving reliability. It will become increasingly important over the coming years for new data center builds to take into consideration regional and national efforts to move towards a carbon- and water-neutral energy grid.


VI. PERFORMANCE METRICS & CERTIFICATION

The Data Center Metrics Task Force, an international standards committee formed to synchronize data center sustainability efforts, has developed a series of common definitions and guiding principles for describing data center performance. 59

The US Environmental Protection Agency (EPA) has developed a set of high-efficiency Energy Star certification standards for data facilities. The certification process includes technical assistance, and represents a reasonably current set of proven best practices for both new and updated data facilities.

REAL-TIME METRICS DATA

More accurate real-time data regarding data center metrics is critical because it will enable quick corrective action. For example, real-time data around temperature can lead to a better understanding of how sensitive data center energy consumption is to temperature. This will also help raise the data center's ratings on the ASHRAE standards.

SIMPLE SYSTEMS METRICS

Raised Floor Temperature: Tells you whether you are optimizing cooling in a data center. If temperatures are low, the cooling load is probably oversized.

Equipment Utilization: Most modern storage and server equipment includes dynamic utilization tracking. In general, at low utilization rates (1%-3%), the server can often be eliminated. Utilization in the range of 3% to 10% is a candidate for consolidation, and a system operating at 10%-20% can take on further workload for consolidation. Results will vary between data facilities, and utilization tracking is a powerful tool for identifying consolidation opportunities (a triage sketch follows below).

EFFICIENCY METRICS

Some indices measure the relative energy efficiency of data center processes. For these metrics, a perfectly efficient data center would have an index ratio of 1.
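The utilization bands above translate directly into a consolidation triage rule. A minimal sketch, with hypothetical server names and utilization figures; real inputs would come from the equipment's built-in utilization tracking.

```python
# Sketch of the consolidation triage described above, using the text's
# utilization bands. Server names and figures are hypothetical.

def triage(avg_utilization_pct: float) -> str:
    if avg_utilization_pct <= 3:
        return "candidate for elimination"
    if avg_utilization_pct <= 10:
        return "candidate for consolidation"
    if avg_utilization_pct <= 20:
        return "can absorb consolidated workload"
    return "leave as is"

fleet = {"app-01": 1.8, "app-02": 7.5, "db-01": 14.0, "web-01": 55.0}
for server, util in fleet.items():
    print(f"{server}: {util:>5.1f}% -> {triage(util)}")
```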


Power Usage Effectiveness (PUE) and partial Power Usage Effectiveness (pPUE) 60, 61 are currently the standard metrics for evaluating data center performance.

PUE describes the ratio of total facility energy use to energy that is directly consumed by IT equipment. Within a data center, pPUE identifies the efficiency of individual systems or components within a facility:

PUE = Total Facility Energy / IT Equipment Energy

pPUE (for a zone or subsystem) = Total Energy Within That Boundary / IT Equipment Energy Within That Boundary

pPUE can be determined in terms of physical or logical components and is useful for prioritizing data center optimization strategies. Data Center infrastructure Efficiency (DCiE) is the reciprocal of PUE, describing IT equipment power as a percentage of total data center energy demand. 62

Recent studies estimate that the national average PUE in 2009 was slightly over 2, 63, 64 suggesting that less than half of the energy currently used by data centers is actually devoted to processing data.

However, technological improvement has been rapid, and current best practice data centers are approaching PUEs in the range of 1.1.

While there is no effective measure for integrating server workload into the PUE or DCiE calculation, workload can be represented by the IT equipment utilization measurements. Improvements in workload efficiency can be attained by setting and working to system utilization targets.

Energy Reuse Effectiveness (ERE) and Energy Reuse Factor (ERF) are extensions of PUE which provide a metric that cleanly accounts for net transfers of thermal or electric energy from one data center zone to another, or net transfers of thermal or electric energy to an outside energy consumer. 65 ERE and ERF are directly related to PUE, but facilitate a fuller accounting of net energy flows into and out of a data facility.
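The relationships among PUE, DCiE, and ERE described above can be captured as simple ratios. A minimal sketch with hypothetical annual energy figures; the ERE expression follows the Green Grid extension cited in the text, crediting energy exported for reuse.

```python
# Hedged sketch of the metric relationships defined in this section.
# All energy figures are hypothetical annual kWh.

def pue(total_facility: float, it_equipment: float) -> float:
    return total_facility / it_equipment

def dcie(total_facility: float, it_equipment: float) -> float:
    """DCiE is the reciprocal of PUE, expressed as a fraction of total energy."""
    return it_equipment / total_facility

def ere(total_facility: float, reused: float, it_equipment: float) -> float:
    """ERE credits energy exported for reuse outside the data center."""
    return (total_facility - reused) / it_equipment

total, it_load, reused = 10_000_000, 5_500_000, 900_000
print(f"PUE  = {pue(total, it_load):.2f}")          # -> 1.82
print(f"DCiE = {dcie(total, it_load):.0%}")         # -> 55%
print(f"ERE  = {ere(total, reused, it_load):.2f}")  # -> 1.65
```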


PRECAUTIONS IN USING EFFICIENCY METRICS

PUE and its associated metrics have become industry standards. However, while PUE is a useful high-level indicator, it is based on a physical equipment view of the data center (e.g. IT equipment, chillers), instead of a functional one (IT processing, cooling). As such, PUE fails to capture some of the fundamental IT infrastructure transformations that are underway.

To accurately compare PUE between facilities, it is important to note what time scale is used (trailing twelve months, quarterly, single point in time) and whether it is for a single site or multiple sites. However, this information is seldom reported.

A large variety of new cooling options are emerging for data centers. At the same time, the traditional standalone processor box is being replaced by shared arrays of individual components. In many new rack configurations, fans and power supplies no longer exist in the context of a processor board.

In some cases, because the power and cooling load associated with a stand-alone server box cannot be isolated, PUE calculations do not account for them, and the effect of internalizing these cooling and power supply loads may actually increase PUE, even in cases where the overall data center energy demand may be lower.

With modular server configurations such as server blades, it may be easier to factor the servers' PSU and fans into the PUE equation (though this is rarely done in practice). But this might make the PUE worse, though the data center energy efficiency would remain the same.

Data operations are increasingly becoming virtualized across multiple physical locations. PUE calculations for a single physical location will not accurately account for virtualized networking energy loads over multiple data centers, or even for the average energy demand per unit of useful processing across a data center array.

Changes in aspects of the facility, such as commissioning of a new class of servers, can produce apparent changes in the measured results for another aspect of the facility, which are not captured well by PUE and DCiE metrics.

Neither PUE nor DCiE provide any guidance or insight into the operation or productivity of IT equipment, an area that The Green Grid is investigating for additional metrics and approaches.

Frequently, changes in the deployment or operation of IT equipment will affect PUE/DCiE calculations; e.g., server virtualization and increased server utilization in data centers may allow for a reduction in the number of servers and overall IT power load, but lead to an increase in PUE (or a decrease in DCiE). Here, the overhead in power distribution and cooling has not changed, but the reduction in overall IT load results in a seemingly poorer result. However, the higher PUE value points to the need to reduce cooling delivery when the quantity of IT equipment used in the data center is reduced.


CONSUMPTION METRICS

Some sustainability indices measure resource consumption per unit of energy for critical resources such as carbon and water. A zero net impact data center would have an index ratio of zero. 66

Carbon Usage Effectiveness (CUE) 67 is based on the total carbon impact of the entire energy production cycle: CUE = total carbon emissions caused by the data center's energy use (kgCO2eq) divided by IT equipment energy (kWh).

Water Usage Effectiveness (WUE) 68 is measured at the data center: WUE = annual site water usage (liters) divided by IT equipment energy (kWh). WUEsource extends this by including water consumed across the entire energy production cycle, such as the water used to generate the electricity delivered to the site.
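A minimal sketch of how these consumption metrics are computed, using hypothetical annual figures (the emission factor and water volume below are illustrative, not benchmarks):

```python
# Hypothetical annual figures for one facility.
it_energy_kwh = 10_000_000      # IT equipment energy
total_energy_kwh = 15_000_000   # total facility energy
grid_kg_co2_per_kwh = 0.45      # illustrative grid emission factor
site_water_liters = 20_000_000  # on-site water (cooling, humidification)

pue = total_energy_kwh / it_energy_kwh                          # 1.50
cue = (total_energy_kwh * grid_kg_co2_per_kwh) / it_energy_kwh  # kgCO2eq/kWh
wue = site_water_liters / it_energy_kwh                         # L/kWh

print(f"PUE {pue:.2f} | CUE {cue:.3f} kgCO2eq/kWh | WUE {wue:.2f} L/kWh")
```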

THE EVOLUTION OF DATA CENTER METRICS

New and refined metrics are being developed to more accurately reflect the emerging energy usage patterns of data centers, but this remains an area of current research and implementation. We have only begun to scratch the surface of this huge and important opportunity. Two of the most important approaches to metrics currently being evaluated are: 69

• Data Center Performance Per Energy (DPPE), which evaluates efficiency as a composite of data center load, IT equipment efficiency, facility energy efficiency, and the environmental energy and water footprint of the energy that the data center consumes. 70


• The Green Grid Productivity Indicator, which factors in multiple attributes and combines DCiE with utilization levels of server, storage, network, and total data center capacity. 71

A flexible data center can adaptively respond to grid conditions and the carbon profile of available energy resources, potentially facilitating the regional grid operator's ability to reduce system peaks in energy demand and to meet the demand profile with more ecologically sustainable and cost-stable generation sources.

• This will require metrics and management systems that enable transferring and balancing workload between data centers in different grid regions, reducing peak loads at one data center and filling in workload at data centers with available baseload renewable energy.

• Achieving this objective will require the development of advanced data center management techniques to manage the workload and ensure the reliability of workload completion, as well as the management of grid attributes; the sketch below illustrates the placement idea in its simplest form.
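As a minimal sketch of this idea (the site names, capacities, and carbon intensities below are all hypothetical), a scheduler might place deferrable work at whichever data center currently has both spare capacity and the cleanest grid:

```python
# Hypothetical sketch of carbon-aware workload placement.
data_centers = [
    # (site, spare_capacity_kw, grid_carbon_g_co2_per_kwh)
    ("us-west", 400, 250),
    ("us-east", 900, 520),
    ("nordics", 700, 60),
]

def place(job_kw: float):
    """Greedy placement: cleanest grid among sites with enough headroom."""
    candidates = [dc for dc in data_centers if dc[1] >= job_kw]
    if not candidates:
        return None  # defer the job until capacity or cleaner energy appears
    return min(candidates, key=lambda dc: dc[2])[0]

print(place(500))  # -> "nordics"
```

A production system would also have to weigh data locality, latency, service level agreements, and the workload-completion reliability concerns noted above.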

Many data center operators are looking more closely at when energy is consumed and how it is generated, as well as how much is being consumed, with an eye toward increasing purchases of renewable energy. The Green Grid's Carbon Usage Effectiveness metric covers the important aspect of energy generation (both how and when). Future metrics can work to specifically identify actions taken by data center operators to incorporate the following practices:

• Using on-site, self-generated low carbon and zero carbon resources.

• Taking steps to reduce demand on the grid, such as chilling water stored on-site at night in order to eliminate real-time HVAC demand on a hot summer afternoon.

• Maximizing overall energy efficiency by pairing data centers with compatible cogeneration-friendly manufacturing or commercial loads.

• Contracting for direct purchases of renewable energy for the data center facility.


APPENDIX A: COMMONLY USED ACRONYMS

AHU Air handling unit

ASHRAE American Society of Heating Refrigerating and Air Conditioning Engineers

BAS Building automation system

BCS Building control system

BLCC Building life cycle cost

BTU British thermal unit

CDD Cooling degree days

Cfm Cubic feet per minute

CHP Combined heat and power

CMDF Containerized/modular data facility

COP Coefficient of performance

CUE Carbon usage effectiveness

DCE Data center efficiency

DCcE Data center compute efficiency

DCiE Data center infrastructure efficiency

DCN Data center network

DX Direct expansion air conditioning

EER Energy efficiency ratio

EUI Energy use intensity

HDD Heating degree days

HHV Higher heating value

HVAC Heating, ventilation, and air conditioning

ICT Information and communications technology

IR Infrared

kWh Kilowatt hour (1 kWh = 3412 BTU)

LBNL Lawrence Berkeley National Laboratory

LED Light emitting diode

LEED Leadership in Energy and Environmental Design

MW Megawatt (1 MW = 1000 kW)

MWh Megawatt hour (1 MWh = 1000 kWh)

O&M Operations and maintenance

PDU Power distribution unit

PM Preventive maintenance

PSU Power supply unit

pPUE Partial power usage effectiveness

PUE Power usage effectiveness

PV Photovoltaic

QoS Quality of service

RTU Rooftop unit

ScE Server compute efficiency

SLA Service level agreement

UPS Uninterruptible power supply

UV Ultraviolet

VAV Variable air volume

WUE Water usage effectiveness


APPENDIX B: ENERGY EFFICIENCY REBATES 72

The rebate data covers six efficiency projects. Project titles and descriptions, with annual energy savings (MWh/yr):

• X Series Virtualization: Eliminated existing servers and consolidated workload onto virtual machines, reducing the number of servers by more than 85%. Savings: 892 MWh/yr.

• Storage Refresh: Removed and replaced existing equipment, reducing the number of machines by more than two-thirds. Savings: 430 MWh/yr.

• Z Series Storage: Removed and replaced existing storage devices with new equipment, reducing the number of machines by more than 85%. Savings: 1,150 MWh/yr.

• Z Series Processor: Old processors were scrapped and replaced with new Z-10 servers (70% processor reduction). Savings: 250 MWh/yr.

• X Series Virtualization: Eliminated existing servers and consolidated workload onto virtual machines, reducing the number of servers by more than 75%. Savings: 360 MWh/yr.

• Raised Floor Cooling Management: Installed a thermal monitoring system and integrated computer room air conditioning system control. Savings: 436 MWh/yr.

Utility rebates covered between 3% and 25% of project cost across these projects (reported values: 3%, 10%, 16%, 7%, and 25%).


APPENDIX C: CASE STUDIES

EXISTING DATA CENTER RETROFITS

INTEL 73

• 900 heavily utilized servers

• 100% air exchange at up to 90 degrees F, no humidity control and minimal air filtration

• 67% estimated power savings using an ambient air economizer 91% of the time; $2.9M in annual savings

QUALCOMM 74

• Installed a 4.5 MW CHP system for its 1 million square foot campus and 100 m2 data center

• Water heated by the turbine exhaust drives a 1,200 ton chiller that provides cooling, supplying approximately 85 percent of the building's power and cooling loads

• The CHP system reduces carbon footprint by 12 percent and NOx emissions by 91 percent

• Estimated payback period is four years

SODEXO 75

• Increased rack density, $2M/year in savings

• CPU aggregation allowed a 25:1 reduction in physical servers

• Data center power utilization remains slightly below 2007 levels despite a doubling of server capacity

SYBASE 76

• 1490 m2 data center

• New high efficiency chiller, custom controls, air management improvements, lighting

• Cooling plant improvements: 551 MWh/year, $516k, 8 year payback

• Air management improvements: 1506 MWh/year, $177k, 1 year payback

• Lighting: 238 MWh, $17k, 0.6 year payback

• Total savings: 1,542 MWh/year, $181.5k, 1.3 year payback


NEW DATA CENTERS

CISCO

• Located in Allen, TX

• 27,700 square foot data center (including all data areas)

• LEED-Gold certified

• Employs rotary UPS rather than battery-based UPS

• Airside economizer system—56% of the time on outside air, saving $600,000 per year

• Increased IT equipment inlet air temperature to 77° +/- 2° F

• Use of unified computing hardware streamlined cable and electrical conduits, reduced supporting physical infrastructure and enabled elimination of raised access flooring

• Chimney rack hot air isolation, increasing cooling efficiency

• Ground source heat pump system with economizer for the office area

• Evaporative system humidification for data halls and cooling for rotary UPS space

• Non-chemical condenser water treatment (recycled to retention pond)

• Variable frequency drives (VFDs) on major mechanical equipment

• Non-petroleum based (vegetable) substation oil

• Reduced electrical system transformation with medium voltage building distribution (4160V)

• Photovoltaic array on the roof (100 kW)

• LED interior and exterior lighting with occupancy sensors

• Low E glass windows, water efficient plumbing fixtures and high albedo roofing

• PUE = 1.35 annual average at full load

WALT DISNEY 77

• Southeastern US

• 3,000 m2 data center with 900 server racks

• Dynamic monitoring and set-point adjustments, variable-speed air handlers, and system rebalancing delivered the bulk of the savings

• Energy consumption dropped by 9.1%, PUE improved by 15%. $500k investment shows a simple payback of 20 months


DEUTSCHE BANK

• Made a public commitment to deliver a 100% carbon neutral IT footprint by 2012

• New eco data center extension, New York City area

• 1.25 MW of critical load capability, and up to 5000 blade servers

• Primary cooling system is airside economization (mixing outside air and return air), with evaporative cooling, able to run chiller-free 98% of the year despite the heat of the New York summertime

• Measured PUE average of 1.18

EBAY 78

• Located in the southwestern U.S.

• Alignment of IT and Facilities enabled a metrics-based approach to drive data center design and the server procurement process, guided by The Green Grid's Data Center Maturity Model

• Designed for a site-annualized PUE of less than 1.2

• Best-case site PUEL3,HC of 1.26 at 30–35 percent site load, and container pPUEL3,HC values as low as 1.018 (January 2012) and 1.046 (August 2011)

• Water-side economizer cooling 100% of the time, with chillers only for backup, even at 49ºC desert temperatures

• 80 percent of servers deployed in lower-cost Uptime Institute-defined Tier II space

• Server rollout process optimized for rack-at-a-time and container-at-a-time deployments, minimizing footprint and maximizing modularity, flexibility, and scalability

FACEBOOK 79

• Located in Prineville, OR

• New data servers will have 19% higher throughput, cost 10% less to build, and eliminate the need for several tons of raw materials per data center

• When matched to a best practice data center building design using evaporative cooling, power savings grow to 38%, and cost savings to 24%

• PUE is reduced to ≈ 1.07 80

• Evaporative cooling system: Water is evaporated to cool incoming air during months where ambient air temperatures are above data center temperatures, 30% to 40% of the year

• Airside economizer: The facility is cooled by ambient air 60% to 70% of the year

• Re-use of server heat: A portion of excess heat from server racks is diverted to heat offices during the winter

• UPS system reduces electricity usage by 12 percent


GREEN GRID MEMBER CORPORATION 82

• 3.3 MW, 3066 m2 data center installed variable speed fans, improved hot/cold aisle isolation, and modified sensors and temperature setpoints

• Total annual energy savings of 9.1% and cash savings of $300k/year

• Total cost was $505k, for a 1.7 year payback

FEDERAL EXPRESS

• By 2009, data center projects including consolidation yielded a reduction in energy consumption of 60 percent from 2005 levels 83

• Located in Colorado Springs, CO 81

• 15,400 m2

• 1.28 PUE

• Cooled by ambient air 60% of the year

• 75% of construction waste diverted from landfill

• Rainwater collection to reduce water consumption

• High reflectivity roof, enclosed cold aisles, lighting controls, recycled building (former distribution center)

HEWLETT-PACKARD

• New data center located in Wynyard, England 84

• Ambient air cooling, annual energy savings of 40%

• 33,400 m2, PUE of 1.2

• HP EcoPOD modular data centers combine fast provisioning (~12 weeks) and high efficiency

• PUE of 1.05-1.2 depending on selected cooling solution and site

IBM

• Pilot 1,100 m2 data center at Syracuse University 85

• 100% on-site power generation

• IBM consolidated server clusters by a 6:1 ratio and improved processor efficiency, octupling processing capacity at a cost of $1M instead of the originally budgeted $50M 86


LUCASFILM

• 1,250 m2 data center 87

• Measures: remove redundant UPS, reduce unused server uptime, staged chillers, better air flow, water-side economizer, lighting controls

• Remove redundant UPS: 109 MWh/year and $12k/year savings at zero cost

• Reduce unused servers online: 273 MWh/year, $10k, 0.3 year payback

• Stage chillers: 92 MWh/year, $4k, 0.4 year payback

• Switched bypass UPS: $100k, 1 year payback

• Improve airflow: 806 MWh, $113k, 1.3 year payback

• Water side economizer: $200k, 2 year payback

• Lighting controls: 10 MWh, $2.5k, 2.1 year payback

• Total: 3,109 MWh, $429k, 1.2 year payback

YAHOO!

• Located in New York 88

• 11,100 m2

• Ambient air cooling 99% of the time; the design reduces cooling load from 25% of total data center loads to 1%

• 40% more efficient

• 95% reduction in water demand

• PUE of 1.08


ENDNOTES

1. KPMG International Survey of Corporate Responsibility Reporting 2011.
2. Darrow 2009:1
3. Ebbers 2011:2
4. Koomey 2011:49
5. Bean 2009:3. There are many other references for these estimates as well.
6. Aperture 2006
7. Brill 2007a
8. Koomey 2009
9. Mark Aggar, 2012
10. Ebbers 2011:3, direct quote
11. Cupps 2011
12. Abts 2010
13. Brill 2007a
14. Barroso 2011
15. Baliga 2011
16. ACEEE 2010
17. Aggar 2011
18. ACEEE 2010
19. Hartog 2008
20. Smart2020 2011
21. Darrow 2009:41-42
22. US DOE 2008
23. Google 2011f
24. Google 2012
25. Google 2012
26. Smart2020 2011c, direct quote
27. Bhandarkar 2010a
28. Bishop 2009
29. Miller 2010d
30. Gilbert 2009
31. Sustainable Computing 2008
32. Miller 2011a
33. Engineering Economics 2011
34. Engineering Economics 2011
35. Ajenstat 2008
36. Cloud Computing 2009
37. Bednar 2009:11-12. Direct quote.
38. Blough 2011
39. ASHRAE 2011
40. IBM 2012
41. Blackburn 2010a:2
42. Blackburn 2010a:7
43. Hawkins 2010:6
44. Hawkins 2011:4
45. Hawkins 2011:7
46. Gosai 2011
47. Curtis 2011
48. Greenberg 2009
49. Kaiser 2011:4-5
50. Kaiser 2011:7
51. Kaiser 2011:9
52. Green Grid 2008
53. Green Grid 2010
54. Darrow 2009
55. Darrow 2009:3-20
56. Darrow 2009:35-36
57. Darrow 2009:27
58. Darrow 2009:27
59. Azevedo 2011a:5
60. Azevedo 2011a:12
61. Data Center Metrics Task Force 2011. This document includes details and sample calculations for determining PUE.
62. Belady 2007:4
63. ASHRAE 2010:3
64. Bednar 2009:5
65. Patterson 2010
66. In theory, data centers that are net producers of zero-carbon energy or reusable water would have a negative consumption index.
67. Azevedo 2011:4
68. Azevedo 2011:5
69. Azevedo 2011:23
70. Green IT Promotion Council 2011a
71. Belady 2008
72. IBM 2012
73. Intel 2008
74. Darrow 2009:43-44
75. Porzio 2011
76. US DOE 2009
77. Hazelrigg 2011
78. Green Grid 2012
79. Heiliger 2011
80. Frachtenberg 2011
81. GreenerComputing 2011
82. Brey 2011
83. CEF 2011
84. CEF 2011
85. CEF 2011
86. Greenbiz 2010
87. US DOE 2008a
88. US DOE 2011f


REFERENCES

(Abts 2010) D. Abts, M. R. Marty, P. M. Wells, P. Klauser and H. Liu, Energy Proportional Datacenter Networks, 37th International Symposium on Computer Architecture, Saint-Malo, France, June 19-23 2010, 10 pp.

(ACEEE 2010) American Council for an Energy Efficient Economy, Summer Study on Energy Efficiency in Buildings, 2010, 112 pp.

(Aggar 2011) M. Aggar, The IT Energy Efficiency Imperative, Microsoft, June 2011, 28 pp.

(Aperture 2006) Aperture Research Institute, Organizations Struggle With Data Center Capacity Management, Stamford, CT, November 23 2006, 7 pp.

(Ardisson 2009) J. Ardisson, Boosting Energy Efficiency, Green Grid Technical Forum, October 2009, 20 pp.

(ASHRAE 2010a) ASHRAE, Real-Time Energy Consumption Measurements in Data Centers, Green Grid Technical Forum, 2010, 72 pp.

(ASHRAE 2011) ASHRAE, 2011 Thermal Guidelines for Data Processing Environments – Expanded Data Center Classes and Usage Guidance, TC 9.9 2011, 45 pp.

(Azevedo 2011b) D. Azevedo, Global Harmonization of Data Center Energy Efficiency Metrics, Green Grid, Beaverton OR USA, 2011, 27 pp.

(Azevedo 2011) D. Azevedo, C. Belady, M. Patterson and J. Pouchet, Using CUE and WUE to Improve Operations in Your Data Center, Green Grid Technical Forum, 2011, 6 pp.

(Azevedo 2011a) D. Azevedo, J. Cooley, M. Patterson and M. Blackburn, Data Center Efficiency Metrics: mPUE, Partial PUE, ERE, DCcE, Green Grid Technical Forum, 2011, 37 pp.

(Baliga 2011) J. Baliga, R. W. A. Ayre, K. Hinton and R. S. Tucker, Green Cloud Computing: Balancing Energy in Processing, Storage, and Transport, Proceedings of the IEEE, v99.1 2011, 149-167.

(Barroso 2011) L. A. Barroso and U. Hölzle, The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines, Synthesis Lectures on Computer Architecture, Madison, WI USA, University of Wisconsin Synthesis Lectures on Computer Architecture Number 6 2011, 120 pp.

(Bean 2009) J. Bean, R. Bednar, R. Jones, et.al., Proper Sizing of IT Power and Cooling Loads, Green Grid White Paper 23, Beaverton, OR USA, 2009, 10 pp.

(Bednar 2009) R. Bednar, J. Winiecki and K. Winkler, Energy Measurement Survey Results Analysis Green Grid White Paper 26, Beaverton, OR USA, 2009, 13 pp.

(Belady 2007) C. Belady, A. Rawson, J. Pfleuger and T. Cader, Green Grid Data Center Power Metrics: PUE and DCiE, Green Grid White Paper 6, Beaverton, OR USA, 2007, 9 pp.

(Belady 2010) C. Belady, M. Patterson, J. Pouchet and R. Tipley, Carbon Usage Effectiveness (CUE): A Green Grid Data Center Sustainability Metric, Green Grid White Paper 32, Beaverton, OR USA, 2010, 8 pp.

(Blackburn 2010) M. Blackburn, D. Avezedo, A. Hawkins, Z. Ortiz, R. Tipley and S. V. D. Berghe, The Green Grid Data Center Compute Efficiency Metric: DCcE, Green Grid White Paper 34, Beaverton, OR USA, 2010, 15 pp.

(Blackburn 2010a) M. Blackburn and A. Hawkins, Unused Servers Survey Results Analysis, Green Grid White Paper 28, Beaverton, OR USA, 12 pp.


(Blough 2011) B. Blough, J. Bean, R. Jones, M. Patterson, R. Jones and R. Salvatore, Qualitative Analysis of Cooling Architectures for Data Centers, Green Grid White Paper 30, Beaverton, OR USA, 2011, 30 pp.

(Bouley 2010) D. Bouley, Impact of Virtualization on Data Center Physical Infrastructure, Green Grid White Paper 27, Beaverton, OR USA, 2010, 10 pp.

(Brey 2010) T. Brey, Impact of Virtualization on Data Center Physical Infrastructure - White Paper #27 Green Grid Technical Forum, 2010, 17 pp.

(Brey 2011) T. Brey, P. Lembke, J. Prisco, K. Abbott, D. Cortese, K. Hazelrigg, J. Larson, S. Shaffer, T. North and T. Darby, Case Study: The ROI of Cooling System Energy Efficiency Upgrades, Green Grid White Paper 39, Beaverton, OR USA, 2011, 42 pp.

(Brill 2007 ) K. G. Brill, Data Center Energy Efficiency and Productivity, Uptime Institute 2007, 10 pp.

(Brill 2007a) K. G. Brill, The Invisible Crisis in the Data Center: The Economic Meltdown of Moore’s Law, Uptime Institute 2007, 8 pp.

(Buyya 2010) R. Buyya, A. Beloglazov and J. Abawajy, Energy-Efficient Management of Data Center Resources for Cloud Computing: A Vision, Architectural Elements, and Open Challenges, 2010.

(CA Associates 2011) CA Associates, StratITsphere Enhances Competitive Advantage with Proactive Data Center Power Management, 2011, 6 pp.

(Climate Group 2008) Climate Group, SMART 2020: Enabling the Low Carbon Economy in the Information Age, October 2008, 87 pp.

(Coors 2010) L. Coors, View From the AC, Green Grid Technical Forum, 2010, 18 pp.

(Crawford 2009) T. Crawford and R. Donaldson, Alignment Track Results, Data Center Pulse Summit, February 2009, 12 pp.

(Cupps 2011) K. Cupps and M. Zosel, Workshop Report: The 4th Workshop on HPC Best Practices: Power Management - September 28-29, 2010, San Francisco, Lawrence Livermore National Laboratory LLNL-AR-472771, 2011, 56 pp.

(Curtis 2011) A. R. Curtis, T. Carpenter, M. Elsheikh, A. López-Ortiz and S. Keshav, REWIRE: An Optimization-based Framework for Data Center Network Design, University of Waterloo Tech Report CS-2011-21, Waterloo, Ontario Canada, 2011, 12 pp.

(Darrow 2009) K. Darrow and B. Hedman, Opportunities for Combined Heat and Power in Data Centers, Oak Ridge, TN USA, Oak Ridge National Laboratory Subcontract Number: 4000021512, March 2009, 64 pp.

(Data Center Metrics Task Force 2010a) Data Center Metrics Task Force, Harmonizing Global Metrics for Data Center Energy Efficiency, February, 2010, 2 pp.

(Data Center Metrics Task Force 2011) Data Center Metrics Task Force, Recommendations for Measuring and Reporting Overall Data Center Efficiency: Version 2 – Measuring PUE for Data Centers, Data Center Metrics Task Force May 2011, 14 pp.

(Data Center Metrics Task Force 2011a) Data Center Metrics Task Force, Harmonizing Global Metrics for Data Center Energy Efficiency Global Taskforce Reaches Agreement on Measurement Protocols for PUE – Continues Discussion of Additional Energy Efficiency Metrics, Data Center Metrics Task Force May 2011, 12 pp.


(Delforge 2011) P. Delforge, C. Hankins, C. Joseph, D. Nelson and M. Monroe, Beyond Energy: The Sustainable Data Center, Green Grid Technical Forum, 2011

(Ebbers 2011) M. Ebbers, M. Archibald, C. F. F. d. Fonseca, M. Griffel, V. Para and M. Searcy, Smarter Data Centers: Achieving Greater Efficiency, IBM International Technical Support Organization Redpaper REDP-4413-01, Raleigh, NC USA, October 21 2011, 138 pp.

(Fiehrer 2010) K. M. Fiehrer, Climate Savers Computing Initiative, Green Grid Technical Forum, 2010, 31 pp.

(Frachtenberg 2011) E. Frachtenberg, A. Heydari, H. Li, A. Michael, J. Na, A. Nisbet and P. Sarti, High-Efficiency Server Design, SC 11, Seattle, WA USA, November 2011, 11 pp.

(Garfield 2010) D. Garfield, A. Sullivan, A. Fanera, P. Scheiling and Z. Limbuwala, Global Regulatory and Legislative Trends, Green Grid Technical Forum, February, 2010, 32 pp.

(Garg 2011) S. K. Garg, C. S. Yeo and R. Buyya, Green Cloud Framework For Improving Carbon of Clouds, Euro Par 17th International Conference on Parallel Processing, Bordeaux France, 2011, 12 pp.

(Gandhi 2011) A. Gandhi, Y. Chen, D. Gmach, M. Arlitt and M. Marwah, Minimizing Data Center SLA Violations and Power Consumption via Hybrid Resource Provisioning, Hewlett-Packard HPL-2011-81 2011, 8 pp.

(Google 2012) Google, Data centers>Inside the data center>Efficiency>Power Usage Efficiency, Official Website, http://www.google.com/about/datacenters/inside/efficiency/power-usage.html

(Gosai 2011) B. Gosai, Building the Next-Generation Data Center – A Detailed Guide, CA Technologies, February 2011, 24 pp.

(Greenberg 2009) A. Greenberg, J. R. Hamilton, et al., VL2: A Scalable and Flexible Data Center Network, Communications of the ACM, 2009, pp. 95-104.

(Green Grid 2008) The Green Grid, Quantitative Efficiency Analysis of Power Distribution Configurations for Data Centers, Green Grid White Paper 16, Beaverton, OR USA, 2008, 35 pp.

(Green Grid 2010) The Green Grid, Issues Relating to the Adoption of Higher Voltage Direct Current Power in the Data Center, Green Grid White Paper 31, Beaverton, OR USA, 2010, 25 pp.

(Green Grid 2011) The Green Grid, Data Center Maturity Model, 2011, 1 page

(Green Grid 2012) The Green Grid, Breaking New Ground on Data Center Efficiency, Case Study, Beaverton, OR, 2012, 20 pp.

(Haas 2009) J. Haas, J. Froedge, et al., Usage and Public Reporting Guidelines for the Green Grid's Infrastructure Metrics (PUE/DCiE), Green Grid White Paper 22, Beaverton, OR USA, 2009, 15 pp.

(Haas 2010) J. Haas and G. Navarro, Data Center Design Guide Work Group Overview and Status, Green Grid Technical Forum, 2010, 20 pp.

(Haas 2011) J. Haas, J. Woodbury and E. Shutter, Using the Data Center Design Guide to Improve the Efficiency of Your Data Centers, Green Grid Technical Forum, 2011, 21 pp.

(Hazelrigg 2011) K. Hazelrigg, ROI of Cooling Energy Efficiency Upgrades, Green Grid Technical Forum, 2011, 24 pp.


(Halperin 2011) D. Halperin, S. Kandula, J. Padhye, P. Bahl and D. Wetherall, Augmenting Data Center Networks With Multi-Gigabit Wireless Links, SIGCOMM 2011, Toronto, Ontario Canada, 2011, 12 pp.

(Hartog 2008) L. Hartog, Checklist: 12 Steps to a Greener Datacenter, itmanagement.com, 2008, 6 pp.

(Hawkins 2010) A. Hawkins, Unused Servers: Cost Savings and Increased Efficiency For Free?, Green Grid Technical Forum, 2010, 15 pp.

(Hawkins 2011) A. Hawkins, Determining the Implications of Unused Servers and How They Can be Addressed, Green Grid Technical Forum, 2011, 15 pp.

(Hinton 2011) K. Hinton, J. Baliga, M. Feng, R. Ayre and R. S. Tucker, Power Consumption and Energy Efficiency in the Internet, IEEE Network, March/April 2011, pp.6-12.

(Jagu 2011) C. Jagu, Corporate Social Responsibility and the Energy Efficient Data Center, Green Grid Technical Forum, 2011, 43 pp.

(Kaiser 2011) J. Kaiser, J. Bean, T. Harvey, M. Patterson and J. Winiecki, Survey Results: Data Center Economizer Use, Green Grid White Paper 41, Beaverton, OR USA, 2011, 19 pp.

(Koomey 2007) J. G. Koomey, Data Center Electricity Use: What We Know, EPA Data Center Stakeholder Workshop, Santa Clara, CA USA, February 16 2007, 15 pp.

(Koomey 2007a) J. G. Koomey, E. Mills, B. Tschudi, D. Sartor and B. Nordman, Promoting Efficiency in Data Centers, EPA Data Center Stakeholder Workshop, Santa Clara, CA USA, February 15 2007, 4 pp.

(Koomey 2008) J. G. Koomey, Worldwide Electricity Used in Data Centers, Environmental Research Letters, #3 2008, pp.1-9.

(Koomey 2009) J. G. Koomey, C. Belady, M. Patterson, A. Santos and K.-D. Lange, Assessing Trends Over Time in Performance, Costs, and Energy Use for Servers, Intel Corporation 2009, 57 pp.

(Koomey 2011) J. G. Koomey, S. Berard, M. Sanchez and H. Wong, Implications of Historical Trends in the Electrical Efficiency of Computing, IEEE Annals of the History of Computing, July-September 2011, pp.46-54.

(KPMG 2011) KPMG, KPMG International Survey of Corporate Responsibility Reporting 2011, 2011, 36 pp.

(Laitner 2011) J. A. Laitner, ICT and the Economic Imperative of Energy Efficiency, Green Grid Technical Forum, San Jose, CA USA, March 2011, 28 pp.

(Lang 2007) K.-D. Lange, Benchmarking the Energy Efficiency of Servers, EPA Technical Workshop on Energy Efficient Servers and Datacenters, Santa Clara, CA USA, February 2007, 11 pp.

(LBL 2007) Lawrence Berkeley Laboratories, Data Center Data, EPA Technical Workshop on Energy Efficient Servers and Datacenters, Santa Clara, CA USA, 2007, 4 pp.

(Long 2009) B. Long, J. Freeman and M. Patterson, Assessment of EPA Mid-Tier Data Center at Potomac Yard, Green Grid Technical Forum, 2009, 21 pp.

(Loper 2007) J. Loper and S. Parr, Improving Data Center Efficiency: Some Policy Possibilities, in Energy Efficiency in Data Centers: A New Policy Frontier, Alliance to Save Energy 2007, 9 pp.


(Masanet 2007) E. Masanet, EPA Study Overview, EPA Technical Workshop on Energy Efficient Servers and Data Centers, Santa Clara, CA USA, February 2007, 24 pp.

(Masero 2011) S. Masero, Powering the cloud: maintaining service availability and reducing data center costs with effective power management, in Optimizing Data Center and Cloud Operations in an Energy Constrained World, CA Technologies, November 2011, 10 pp.

(Meijer 2010) G. I. Meijer, Cooling Energy-Hungry Data Centers, Science, 328 November 26 2010, pp. 318-319.

(Meisner 2011a) D. Meisner, C. M. Sadler, L. A. Barroso, W.-D. Weber and T. F. Wenisch, Power Management of Online Data-Intensive Services, International Symposium on Computer Architecture, San Jose, CA USA, June 2011, 12 pp.

(Meisner 2011) D. Meisner and T. F. Wenisch, Does Low-Power Design Imply Energy Efficiency for Data Centers?, 17th IEEE/ACM international Symposium on Low-Power Electronics and Design, 2011, 6 pp.

(Monroe 2009a) M. Monroe, Productivity Proxy Proposals Feedback - Interim Results, Green Grid Technical Forum, 2009, 9 pp.

(Monroe 2009) M. Monroe and M. Khattar, Data Center Metrics Track Results, Data Center Pulse Summit, 2009, 15 pp.

(Monroe 2009b) M. Monroe and J. Tuccillo, Data Center Energy: Going Forward, Data Center World, Las Vegas, NV USA, 2009, 29 pp.

(Monroe 2010) M. Monroe and G. Navarro, Free Cooling Tool and Power Configuration Efficiency Estimator, Green Grid Technical Forum, San Jose, CA USA, 2010, 34 pp.

(Monroe 2010a) M. Monroe and J. Pfleuger, Productivity Proxies Update, Green Grid Technical Forum, San Jose, CA USA, 2010, 44 pp.

(Mudigonda 2011) J. Mudigonda, P. Yalagandula and J. C. Mogul, Taming the Flying Cable Monster: A Topology Design and Optimization Framework for Data-Center Networks, Palo Alto, CA USA, Hewlett-Packard Laboratories HPL-2011-30, September 6 2011, 15 pp.

(Nelson 2009) D. Nelson, Top 10 Update: Reporting out the pulse of the top requests from the Data Center end user community, Data Center Pulse Summit, November 2009, 28 pp.

(Nelson 2009a) D. Nelson, Top 10 Track Results, Data Center Pulse Summit, February 18 2009, 17 pp.

(Nelson 2009b) D. Nelson, Stack Framework, Data Center Pulse Summit, November 15 2009, 14 pp.

(Nelson 2011) D. Nelson, Data Center Pulse Readout, Green Grid Technical Forum, San Jose, CA USA, 2011, 24 pp.

(Nguyen 2011) K. K. Nguyen, M. Cheriet, M. Lemay, B. Saint-Arnaud, V. Reijs, A. Mackarel, P. Minoves, A. Pastrama and W. V. Heddeghem, Renewable Energy Provisioning for ICT Services in a Future Internet, Future Internet Assembly, 2011, pp. 421-431.

(Ourghanlian 2010) B. Ourghanlian, Improving Energy Efficiency: An End User Perspective, Green Grid EMEA Technical Forum 2010, San Jose, CA USA, 2010, 75 pp.


(Patterson 2010a) M. Patterson, The Green Grid EPA Data Center Assessment, Green Grid Technical Forum, 2010, 21 pp.

(Patterson 2010) M. Patterson, B. Tschudi, O. VanGeet, J. Cooley and D. Azevedo, ERE: A Metric for Measuring the Benefit of Reuse Energy From a Data Center, Green Grid White Paper 29, Beaverton, OR USA, 2010, 15 pp.

(Patterson 2011) M. Patterson, D. Azevedo, C. Belady and J. Pouchet, Water Usage Effectiveness (WUE): A Green Grid Data Center Sustainability Metric, Green Grid White Paper 35, Beaverton, OR USA, 2011, 12 pp.

(Pflueger 2010a) J. Pflueger, Developing a Toolbox for Data Center Planning, Design and Operations, Green Grid Technical Forum, February 2010, 20 pp.

(Pflueger 2010) J. Pflueger, A Roadmap for the Adoption of Power-Related Features in Servers, Green Grid White Paper 33, December 2010, 62 pp.

(PGE 2006) Pacific Gas and Electric, High Performance Data Centers: A Design Guidelines Sourcebook, San Francisco CA USA, 2006, 63 pp.

(Porzio 2011) T. Porzio, SODEXO: Deploying Virtualization Technology to the Datacenter, Corporate Eco Forum 2011, 2 pp.

(Ranganathan 2010) P. Ranganathan, Recipe for Efficiency: Principles of Power-Aware Computing, Communications of the ACM, April 2010, pp. 60-67.

(Rasmussen 2007) N. Rasmussen, Calculating Total Cooling Requirements for Data Centers, APC White Paper 25, 2007, 8 pp.

(Reese 2009) P. Reese and S. Devito, Fanless Server Track Results, Data Center Pulse February 2009, 7 pp.

(Rodriguez 2009) J. Rodriguez and G. Hay, Cloud Computing Track Results, Data Center Pulse February 2009, 7 pp.

(Schutter 2010) E. Schutter, End User Birds of a Feather Session Readout, Green Grid Technical Forum, February 2010, 14 pp.

(Schutter 2011) E. Schutter, The Collision Course of Data Center Site Selection and Sustainability, Green Grid Technical Forum, 2011, 10 pp.

(Seese 2009) R. Seese and M. Ryan, Power Trip Track Results, Data Center Pulse Summit, February 18 2009, 6 pp.

(Seymour 2005) M. Seymour, Virtual Data Center Design: A Blueprint for Success, Future Facilities LTD, December 2005, 12 pp.

(SVLG 2008) Silicon Valley Leadership Group, Case Study: Sun Microsystems Energy-Efficient Modular Cooling Systems, July 2008, 4 pp.

(Singh 2010) H. Singh, Improving Energy Efficiency with The Green Grid, Thomson Reuters December 2010, 13 pp.

(Singh 2011b) H. Singh, Plotting a Path to Sustainability with The Green Grid’s Data Center Maturity Model, Green Grid Technical Forum, March 2011, 70 pp.


(Singh 2011a) H. Singh, Data Center Maturity Model, Green Grid White Paper 36, February 2011, 13 pp.

(Smith 2009) J. Smith, Two Years Later: DCiE and PUE as Metrics at Digital Realty Trust, Digital Realty Trust, April 2009, 10 pp.

(Smith 2009a) V. Smith and C. R. Ellis, The Green Grid Energy Policy Research for Data Centers, Green Grid White Paper 25, November 2009, 60 pp.

(Stokes 2011) A. Stokes and P. J. Simmons, Key Takeaways from May 13 Virtual Roundtable on Data Centers, Corporate Eco Forum June 2011, 3 pp.

(Strutt 2011) S. Strutt, The Effect of Data Centre Environment on IT Reliability & Energy Consumption, Green Grid Technical Forum, October 2011, 20 pp.

(Sullivan 2010) A. Sullivan, Energy Star for Data Centers, Green Grid Technical Forum, February 2010, 39 pp.

(Sullivan 2011) G. P. Sullivan, W. D. Hunt, R. Pugh, W. F. Sandusky, T. M. Koehler and B. K. Boyd, Metering Best Practices: A Guide to Achieving Utility Resource Efficiency, Release 2.0, Richland WA USA, Pacific Northwest National Laboratory PNNL-17221 August 2011, 225 pp.

(Tang 2012) C.-J. Tang, M.-R. Dai, H.-C. He and C.-C. Chuang, Evaluating Energy Efficiency of Data Centers with Generating Cost and Service Demand, Bulletin of Networking, Computing, Systems, and Software, January 2012, 16-20.

(Tayeb 2011) J. Tayeb, Utilizing the SDK Energy Checker, Green Grid Technical Forum, San Jose CA USA, March 2011, 37 pp.

(Thacker 2010) C. P. Thacker, Improving the Future by Examining the Past, ACM IEEE International Symposium on Computer Architecture, Saint-Malo, France, June 2010, 27 pp.

(Thiele 2009) M. Thiele, Data Center Certification Track Results, Data Center Pulse Summit, 2009, 10 pp.

(Tschudi 2008) W. Tschudi, Data Center Assessments to Identify Efficiency Opportunities, Lawrence Berkeley National Laboratory, November 2008, 30 pp.

(US DOE 2001) US Department of Energy, Greening Federal Facilities, Washington DC USA, DOE/GO-102001-1165 2001, 211 pp.

(US DOE 2008) US Department of Energy, DOE Assessment Identifies 30% Energy Savings for Broadband and Wireless Communication Company, Washington DC USA, DOE/GO-102008-2644 December 2008, 2 pp.

(US DOE 2008a) US Department of Energy, DOE Assessment Evaluates Energy Performance of Film and Entertainment Company Data Center, Washington DC USA, DOE/GO-102008-2645, October 2008, 2 pp.

(US DOE 2008b) US Department of Energy, Energy Efficiency in Data Centers: Recommendations for Government-Industry Coordination, Washington DC USA, October 16 2008, 41 pp.

(US DOE 2009) US Department of Energy, Database Technology Company Saves $262,000 Annually, Washington DC USA, DOE/GO-102009-2850 July 2009, 2 pp.


(US DOE 2009a) US Department of Energy, U.S. Data Centers Save Energy Now, 2009, 15 pp.

(US DOE 2010) US Department of Energy, Wireless Sensors Improve Data Center Energy Efficiency, DOE/Technical Case Study Bulletin CSO 20029, Washington DC USA, September 2010, 8 pp.

(US DOE 2011) US Department of Energy, Department of Energy Laboratories: Leadership in Green IT, DOE/GO-102011-3295 2011, 48 pp.

(US DOE 2011a) US Department of Energy, High-Efficiency, Wideband Three-Phase Rectifiers and Adaptive Rectifier Management for Telecom Central Office and Large Data Center Applications, DOE/EE-0500 May 2011, 2 pp.

(US DOE 2011b) US Department of Energy, Data Center Transformation from “Always On” to “Always Available”, Washington DC USA, DOE/EE-0494 May 2011, 2 pp.

(US DOE 2011c) US Department of Energy, Advanced Refrigerant-based Cooling Technologies for Information and Communications Infrastructure (ARCTIC), Washington DC USA, 2011, 2 pp.

(US DOE 2011d) US Department of Energy, Dynamic Energy Consumption Management of Routing Telecom and Data Centers through Real-Time Optimal Control, Washington DC USA, DOE/EE-0496 May 2011, 2 pp.

(US DOE 2011e) US Department of Energy, Adaptive Environmentally Contained Power and Cooling Information Technology (IT) Infrastructure for Data Centers, Washington DC USA, DOE/EE-0492 2011, 2 pp.

(US DOE 2011f) US Department of Energy, Yahoo! Compute Coop Next Generation Passive Cooling Design for Data Centers, Washington DC USA, DOE/EE-0504 May 2011, 2 pp.

(US DOE 2011g) US Department of Energy, Development of a Very Dense Liquid Cooled Compute Platform, Washington DC USA, DOE/EE-0495 May 2011, 2 pp.

(US DOE 2011h) US Department of Energy, A Measurement–Management Technology for Improving Energy, Washington DC USA, DOE/EE-0491 May 2011, 2 pp.

(US DOE 2011i) US Department of Energy, Integrated DC-DC Conversion for Energy-Efficient Multicore Microprocessors, Washington DC USA, DOE/EE-0501 May 2011, 2 pp.

(US DOE 2011j) US Department of Energy, Information and Communication Technology Portfolio: Improving Energy Efficiency and Productivity in America’s Telecommunication Systems and Data Centers, Washington DC USA, DOE/EE-0390 March 2011, 12 pp.

(US DOE 2011k) US Department of Energy, Economizer-based Data Center Liquid Cooling with Advanced Metal Interfaces, Washington DC USA, DOE/EE-0497 May 2011, 2 pp.

(US DOE 2011l) US Department of Energy, Federspiel Controls’ Data Center Energy Efficient Cooling Control System, Washington DC USA, DOE/EE-0499 May 2011, 2 pp.

(US DOE 2011m) US Department of Energy, Power Minimization Techniques for Networked Data Centers, Washington DC USA, DOE/EE-0502 May 2011, 2 pp.

(US DOE 2011n) US Department of Energy, Energy Efficiency of Data Networks through Rate Adaptation, Washington DC USA, DOE/EE-0498 May 2011, 2 pp.

(US DOE 2011o) US Department of Energy, SeaMicro Volume Server Power Reduction, Washington DC USA, DOE/EE-0503 May 2011, 2 pp.


(US DOE 2011p) US Department of Energy, US Data Centers: Save Energy Now, Washington DC USA, 2011, 15 pp.

(US EPA 2007) US Environmental Protection Agency, Report to Congress on Server and Data Center Energy Efficiency: Public Law 109-431, Washington DC USA, August 2007, 133 pp.

(US EPA 2008) US Environmental Protection Agency, Quick Start Guide to Increase Data Center Energy Efficiency, Washington DC USA, October 2008, 2 pp.

(US EPA 2011) US Environmental Protection Agency, Portfolio Manager: Quick Reference Guide for Data Centers, Washington DC USA, August 2011, 2 pp.

(Wang 2011) L. Wang, S. U. Khan and J. Dayal, Thermal Aware Workload Placement With Task- Temperature Profiles in a Data Center, Journal of Supercomputing, June 7 2011, 24 pp.

(Warner 2006) M. Warner and S. Hall, The Virtual Facility: Identifying and Overcoming Design and Operational Deficiencies in the Modern Mission Critical Facility, Future Facilities LTD, September 2006, 10 pp.

(Webb 2009) M. Webb, Smart 2020: Pathways to Scale, Climate Group, December 2009, 5 pp.

(Winkler 2008) K. Winkler, Data Center Study Baseline Report, Green Grid White Paper 8, Beaverton OR USA, March 2008, 29 pp.

(Wong 2011) H. M. L. Wong, DC Pro IT Tool / DOE Data Center Energy Practitioner (DCEP), Green Grid Technical Forum, Beaverton OR USA, March, 2011, 38 pp.

(Yamamura 2010) H. Yamamura, Issues Relating to the Adoption of Higher Voltage Direct Current Power in the Data Center, Green Grid Technical Forum, Beaverton OR USA, December, 2010, 18 pp.

(Zatz 2011) M. Zatz, EPA Energy Star Rating for Data Centers –Experience from the First Six Months, Green Grid Forum, Beaverton OR USA, 2011, 40 pp.

(Ziff-Davis 2011) Ziff-Davis, 12 Steps to a Greener Datacenter, Ziff-Davis, version 120711, December 2011, 4 pp.