

Research

Publication Date: 5 August 2011 ID Number: G00213184

© 2011 Gartner, Inc. and/or its affiliates. All rights reserved. Gartner is a registered trademark of Gartner, Inc. or its affiliates. This publication may not be reproduced or distributed in any form without Gartner's prior written permission. The information contained in this publication has been obtained from sources believed to be reliable. Gartner disclaims all warranties as to the accuracy, completeness or adequacy of such information and shall have no liability for errors, omissions or inadequacies in such information. This publication consists of the opinions of Gartner's research organization and should not be construed as statements of fact. The opinions expressed herein are subject to change without notice. Although Gartner research may include a discussion of related legal issues, Gartner does not provide legal advice or services and its research should not be construed or used as such. Gartner is a public company, and its shareholders may include firms and funds that have financial interests in entities covered in Gartner research. Gartner's Board of Directors may include senior managers of these firms or funds. Gartner research is produced independently by its research organization without input or influence from these firms, funds or their managers. For further information on the independence and integrity of Gartner research, see "Guiding Principles on Independence and Objectivity" on its website, http://www.gartner.com/technology/about/ombudsman/omb_guide2.jsp

Best Practices for Data Center Costs and Design

David J. Cappuccio, John R. Phelps

Data center facilities rarely meet the operational and capacity requirements of their initial designs. The combination of new technologies (e.g., blade servers, which require substantial incremental power and cooling capacity), pressures to consolidate multiple data centers into fewer locations, the need for incremental space, changes in operational procedures, green IT initiatives and rising energy costs, and potential changes in safety and security regulations imposes constant facilities changes in the modern data center. The overarching rule in data center facilities is to design for flexibility, efficiency and scalability. Here, we combine many of the best practices in data center design, and attempt to put a financial value on them.

Key Findings

New data center designs should focus on creating the most efficient use of floor space, and the power and cooling delivered to it.

Multizoned designs provide increased flexibility, and can reduce operating costs by up to 20% by reducing overall mechanical/electrical requirements.

Selecting the right tier level(s) will be the key cost driver, as mechanical/electrical redundancy contributes upward of 40% of the overall cost to build.

Recommendations

Use the rack unit as the basis for estimating power and space requirements.

Design for vertical density, ensuring optimal compute capacity per kilowatt.

Design multiple power zones to increase flexibility and reduce upfront capital costs.

Take a comprehensive view of location and site selection. Consider utility infrastructure, labor and real estate markets, and public incentives.

Opt for single-story, industrial-type buildings, except for special design considerations (e.g., second-floor natural air cooling plenum).

Weigh the cost of the chosen level of redundancy (tier level) against the cost of downtime, and consider multitier designs.

Establish and maintain close communication and liaison between the data center facilities organization and the IT organization.


TABLE OF CONTENTS

Analysis
    The Keys to Successful Data Center Design: Flexibility and Scalability
        Density Versus Capacity
        Emphasize Airflow Design
        Install Only the Power You Need
        Use Power Capping Management Technologies
        Use Zones to Reduce Build and Electrical Costs
        Rack Layout
        Rack Units
    Availability Design Criteria
        Critical Building Systems
        Location
        Site Selection
        Architecture
        Power Distribution
        Power Supply
        Mechanical Systems
        Security Systems
        Raised-Access Floor (RAF)
        Fire Detection and Suppression
    Data Center Construction Costs
        Sizing Estimates
        Data Center Cost Estimates
    Data Center Facilities Management Options
        Model 1: Assign Data Center Facilities Management Responsibility to the Corporate Facilities Organization
        Model 2: The Management of Data Center Facilities Is an IT Organizational Responsibility
        Model 3: A Matrix Model, in Which IT Facilities Staff Reports to the IT Organization and to the Corporate Facilities Organization
Recommended Reading

LIST OF TABLES

Table 1. Total Space and Power for IT (Base Case) — 200 Racks

LIST OF FIGURES

Figure 1. The Resulting Power Demands of Rack Layouts
Figure 2. Design Criteria: Scope and Redundancy
Figure 3. Critical Building Systems
Figure 4. Data Center Space Estimates by Tier Level
Figure 5. Data Center Estimated Build Costs
Figure 6. Major Construction Category Costs


ANALYSIS

The Keys to Successful Data Center Design: Flexibility and Scalability

Data center facilities rarely achieve the operational and capacity requirements specified in their designs. The key to a sustainable data center facility is to consider it an integrated system in which each component must be considered in the context of flexibility and scalability. Planners need to anticipate the following "forces of change":

New equipment

Consolidations and expansions

New redundancy requirements

Incremental power requirements

Sustainability and green IT

Incremental cooling demands

Constrained floor space

New safety and security regulations

Changes in operational procedures

Changes in mission

Cost pressures

The overarching rule in data center facilities is to design for flexibility to change as technology changes, for efficiency in operations of the facility itself (e.g., energy and airflow), and for vertical scalability within the floor space, providing the greatest compute capacity per square foot, or per kilowatt (kW), possible over time. This rule embraces several key principles covering site location, building selection, floor layout, electrical system design, mechanical design and modularity that enable the data center to change and adapt as needed, with minimal renovation and changes to basic building systems.

This research delves into specific guidelines for achieving a high level of flexibility and scalability in the data center. These best practices address site location, building selection, and principles in the design and provisioning of critical facilities systems.

Density Versus Capacity

Power density and capacity are two of the most crucial considerations in data center planning. As a rule, plan for the data center to scale within zones and to increase capacity on a modular basis. Power zones are designed to support differing workload types at different power and cooling densities, thus negating the need to overprovision the entire floor to support maximum power and cooling. Assess the trade-offs between space and power in the total cost of the new facility. Provide an average space of 30 square feet for each rack, inclusive of aisle ways, door swing space and maintenance areas.


Many data centers are overdesigned for electrical capacity because of concerns about meeting the incremental power and cooling demands of modern server equipment, such as blade servers. Data center designers must be sure to plan for sufficient capacity relative to patch panels, conduits and intermediate distribution feeds, as well as pay attention to equipment densities relative to initial and longer-term electrical power capacities.

Emphasize Airflow Design

Airflow is the single most important factor in determining cooling efficiency, and hence cooling capacity. Doubling the speed of a fan causes roughly an eightfold increase in its power consumption, because fan power scales with the cube of fan speed, so airflow designs should minimize fan speed and the complexity of the flow path. Short air paths that discourage the mixing of hot and cold air are a design best practice, and the advantage grows as rack power density increases. Full enclosure of hot aisles can result in substantial improvements in energy efficiency.
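The cubic relationship between fan speed and power can be checked with a short calculation. This is a minimal sketch of the fan affinity law behind the statement above; the baseline fan power is an illustrative assumption, not a figure from this research.

# Fan affinity law: fan power scales with the cube of fan speed.
# The 500W baseline is an illustrative assumption.
def fan_power(base_power_w, speed_ratio):
    """Estimate fan power after scaling speed by speed_ratio (new/old)."""
    return base_power_w * speed_ratio ** 3

base = 500.0  # watts at nominal speed (assumed)
for ratio in (0.5, 1.0, 2.0):
    print(f"speed x{ratio}: ~{fan_power(base, ratio):.0f} W")
# Doubling fan speed (x2.0) yields roughly eight times the baseline power,
# which is why designs should minimize the fan speed required to move air.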

Install Only the Power You Need

Once the decision has been made on the size of a data center, the most critical analysis must take place: How much power do you need on the raised floor? Data centers designed more than seven years ago average 40 watts (W) to 60W per square foot, whereas newer buildings in the design phase specify substantially more power, mainly because of the needs of high-density equipment, particularly densely packed racks. It's common to see designs for 200W per square foot and upward. The fear of underprovisioning a new building, rather than a more logical approach, is driving these decisions.

Consider this: Current high-density racks can easily demand 10, 15 or even 25 kW per rack (at roughly 30 square feet per rack, that is 333W, 500W and 833W per square foot, respectively); however, designing to this target assumes that all workloads are similar. Most racks don't come close to the theoretical maximums, and Gartner believes that high-density planning estimates of 10 to 12 kW per rack are reasonable, but only for a subset of your floor space.
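The per-square-foot figures above follow from the planning allowance of roughly 30 square feet per rack noted earlier. A minimal sketch of that conversion (the 30-square-foot allowance is the only input assumed):

# Convert per-rack power density to watts per square foot, assuming the
# ~30 sq ft per rack planning allowance (aisles, door swing and maintenance
# space included) discussed earlier in this research.
SQ_FT_PER_RACK = 30

def watts_per_sq_ft(kw_per_rack, sq_ft_per_rack=SQ_FT_PER_RACK):
    return kw_per_rack * 1000 / sq_ft_per_rack

for kw in (10, 15, 25):
    print(f"{kw} kW/rack -> ~{watts_per_sq_ft(kw):.0f} W/sq ft")
# Prints roughly 333, 500 and 833 W/sq ft, matching the figures above.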

A more reasoned approach would be to consider density zones within the new floor space. Simply put, density zones make the assumption that workloads can be partitioned into low-, medium- and high-density areas, and the power distribution and cooling capacity for those areas can be engineered accordingly. In addition, future upgrades can be applied at the zone level, instead of upgrading the entire data center.

Use Power Capping Management Technologies

Modern Xeon two-socket servers range from 50W idle power to 250W active power, under a 750W nameplate rating. This range makes it difficult to provision power, as provisioning for 250W could easily represent 100% overprovisioning. Power capping tools are a potential solution, as they allow automatic power limits to be set on servers to protect against surprise overloads. This approach requires careful cooperation between facilities and equipment managers.
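To see why capping matters for provisioning, compare how many such servers fit into a fixed rack power budget when provisioned at nameplate versus at an enforced cap. The 10 kW rack budget and the 300W cap below are illustrative assumptions; the 750W nameplate and roughly 250W active draw come from the discussion above.

# Compare rack provisioning at nameplate rating versus at an enforced power cap.
# The 10 kW rack budget and 300W cap are assumptions for illustration only.
RACK_BUDGET_W = 10_000   # assumed provisioned power per rack
NAMEPLATE_W = 750        # vendor nameplate rating cited above
CAP_W = 300              # assumed enforced cap, above the ~250W active draw cited above

servers_at_nameplate = RACK_BUDGET_W // NAMEPLATE_W   # 13 servers
servers_at_cap = RACK_BUDGET_W // CAP_W               # 33 servers

print(f"Servers per rack at nameplate: {servers_at_nameplate}")
print(f"Servers per rack with capping: {servers_at_cap}")
# Capping lets facilities provision to realistic draw, while the hardware-enforced
# limit protects branch circuits against surprise overloads.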

Use Zones to Reduce Build and Electrical Costs

Most customers that have considered the density zone approach have found that truly high-density applications comprise, on average, approximately 10% to 15% of the total, while medium-density requirements are about 20%, with the rest of the workload allocated to low density. Using this design principle on a 9,000-square-foot data center would yield a lower cost point initially, and an ongoing reduction in operating costs. On average, a traditional 9,000-square-foot data center designed to support 8 kW per rack would cost $20.9 million (with 2N redundancy).

Assuming 10% of the floor space was designed for high density (15 kW per rack), 20% for medium density (8 kW per rack) and the rest for low or normal density (5 kW per rack), the overall building cost would be reduced to $18.4 million. At a 75% load, the yearly electrical costs would be approximately $1.37 million in the traditional design, versus $1.1 million using the zoned approach — a reduction of roughly $250,000 per year in operating costs.
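The mechanism behind these savings can be sketched as follows. The rack count, load factor, cooling overhead and electricity rate are illustrative assumptions rather than Gartner's published inputs, although with the values shown the results land close to the figures cited above.

# Sketch of why zoning lowers provisioned power and annual electrical cost.
# Assumptions (not Gartner's published inputs): 300 racks (9,000 sq ft at
# ~30 sq ft/rack), 75% load, 1W of cooling/overhead per IT watt, $0.0435/kWh.
HOURS_PER_YEAR = 8_760
RATE_PER_KWH = 0.0435
OVERHEAD_PER_IT_WATT = 1.0
LOAD_FACTOR = 0.75
RACKS = 300

uniform_it_kw = RACKS * 8                                              # every rack at 8 kW
zoned_it_kw = 0.10 * RACKS * 15 + 0.20 * RACKS * 8 + 0.70 * RACKS * 5  # zone mix above

def annual_cost(it_kw):
    total_kw = it_kw * LOAD_FACTOR * (1 + OVERHEAD_PER_IT_WATT)
    return total_kw * HOURS_PER_YEAR * RATE_PER_KWH

print(f"Uniform design: {uniform_it_kw:.0f} kW IT, ~${annual_cost(uniform_it_kw):,.0f}/year")
print(f"Zoned design:   {zoned_it_kw:.0f} kW IT, ~${annual_cost(zoned_it_kw):,.0f}/year")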

If the zone requirements changed and more high-density floor space was needed, then scaling up the power distribution units (PDUs) would be a simple method of increasing power. Adding more on-floor computer room air-conditioning (CRAC) units would help address cooling issues.

However, you will need additional power for air-conditioning, humidification, lighting, and uninterruptible power supply (UPS) and transformer losses. This additional demand could add another 1 to 1.5 times the IT load to the overall electrical requirement, depending on equipment spacing and air-handling efficiencies.

Rack Layout

Gartner recommends that the data center be designed to scale individual racks or zones as needed, adding incremental capacity on a modular basis. Focus on density versus capacity, and consider segregating workloads into hot-aisle or cold-aisle containment areas for improved cooling efficiency.

Several modern blade chassis can be packed into a single-rack enclosure, resulting in power demands of 18 to 20 kW per rack. The primary issue with densely packing the layout with high-density servers is that, although it is efficient from a space standpoint, it creates heat problems that require incremental cooling, which can incur significant incremental electrical costs if not done properly. The zoned approach (see Figure 1) is becoming common in designs today, and, in some cases, higher-density zones are either segmented for special cooling or use in-rack cooling techniques.

Figure 1. The Resulting Power Demands of Rack Layouts

Source: Gartner (August 2011)

Rack Units

Use the rack unit as the primary planning factor for estimating the space and power requirements of the data center. Each rack configuration should reflect total power, space and floor loading demands. Strive for an average of 4 to 5 kW per rack for low-density areas. Medium- and high-density areas should be determined by an application and performance mapping exercise, and should focus on the power needs and the percentage of total floor space needed.

Traditionally, facilities planners used space planning factors, such as square feet per rack or watts per square foot, to estimate data center capacity. The problem with this approach is that it fails to capture the energy intensity associated with high-density rack configurations. An overall watts-per-square-foot calculation fails to recognize the diversity from one rack configuration to the next, where significant hot spots can require incremental air-conditioning capacity. Gartner recommends planning the data center on a rack unit basis (see Table 1). This technique requires that each rack configuration be calculated from a total-wattage standpoint, in terms of equipment power and incremental power for air-conditioning.

Table 1. Total Space and Power for IT (Base Case) — 200 Racks

Zone      Space   kW/Rack   Racks   Floor Space (sq ft)   IT Load (kW)   Cooling (kW)   Total (kW)
High       10%       15        20                   600            210            315          525
Medium     20%        8        40                 1,200            224            224          448
Low        70%        5       140                 4,200            490            294          784
Total     100%                200                 6,000            924            833        1,757

Source: Gartner (August 2011)

By aggregating the total rack population, you can calculate total kW and space. In the example in Table 1, a simple rack configuration of 200 racks results in a total space requirement of 6,000 square feet and a total power requirement of 924 kW of IT load and 833 kW of cooling load. As power density per rack increases, the amount of cooling required also increases. A general rule of thumb is that, below 6 kW per rack, you can assume that 0.6W of cooling will be needed for every watt of IT load. With between 6 and 10 kW per rack, assume a 1-to-1 ratio of cooling load to IT load. Above 10 kW per rack, assume 1.5W of cooling for every watt of IT load (the higher heat output requires denser airflow, and thus more cooling wattage).

To determine the total generator load, you would have to factor in conversion losses (AC to DC) throughout the system. Depending on design, these would average between 8% and 14%. In the example above, assuming an 11% conversion loss, the total requirement would be 1,950 kW, or a generator capacity of 2,438 kilovolt-amperes (kVA), assuming a 0.8 power factor. Another alternative would be to consider multiple smaller generators, rather than one or two very large ones, to support the load. Smaller systems often have shorter lead times and, depending on market conditions, can be less expensive per kW delivered.
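The arithmetic behind Table 1 and the generator estimate can be reproduced with a short script. The per-zone IT loads come from Table 1; the cooling ratios, the 11% conversion loss and the 0.8 power factor are the rules of thumb stated above.

# Reproduce the cooling and generator arithmetic from Table 1 and the text above.
# Cooling ratio rule of thumb: <6 kW/rack -> 0.6, 6-10 kW/rack -> 1.0, >10 kW/rack -> 1.5.
def cooling_ratio(kw_per_rack):
    if kw_per_rack < 6:
        return 0.6
    if kw_per_rack <= 10:
        return 1.0
    return 1.5

zones = [("High", 15, 210), ("Medium", 8, 224), ("Low", 5, 490)]  # (zone, kW/rack, IT kW) from Table 1

it_total = sum(it for _, _, it in zones)                              # 924 kW
cooling_total = sum(it * cooling_ratio(kw) for _, kw, it in zones)    # 833 kW
total_kw = it_total + cooling_total                                   # 1,757 kW

CONVERSION_LOSS = 0.11  # 11% AC/DC conversion loss, per the text
POWER_FACTOR = 0.8

generator_kw = total_kw * (1 + CONVERSION_LOSS)    # ~1,950 kW
generator_kva = generator_kw / POWER_FACTOR        # ~2,438 kVA

print(f"IT: {it_total} kW, cooling: {cooling_total:.0f} kW, total: {total_kw:.0f} kW")
print(f"Generator requirement: ~{generator_kw:.0f} kW (~{generator_kva:.0f} kVA)")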

Availability Design Criteria

The first critical decision that needs to be made prior to beginning a data center design project is what tier classification will be needed (see Figure 2). While many architecture and engineering (A&E) companies use The Uptime Institute's tier classifications or the TIA-942 Telecommunications Infrastructure Standard for Data Centers as a baseline, variations abound. Some companies (e.g., Bruns-Pak) have even created their own classifications to get to a more granular level of detail. Essentially, the decision becomes a cost-versus-risk analysis, based on the types of workload that the new data center will support, and the financial impact to your company if an extended outage were to occur. The recent trend is toward higher availability and high resilience to outside factors and, thus, the move toward Tier 3 and Tier 4 sites with high levels of redundancy built in. While Tier 4 sites are optimal for availability, the cost of getting to Tier 4 is significant, and the overall energy efficiency of the site will be much lower because of the added redundancy. A secondary trend is beginning to emerge in which clients are considering building multiple tiers within the same site, essentially segmenting applications and service levels by tier, and then designing the building to suit. This is a newer concept, but it is architecturally feasible and can reduce upfront capital costs substantially.


Figure 2. Design Criteria: Scope and Redundancy

Source: TIA-942, Telecommunications Infrastructure Standard for Data Centers

Critical Building Systems

Planners should design and implement the data center as an integrated system that optimizes electrical power, space allocation and mechanical systems. Five critical building systems require explicit engineering for a new or upgraded data center (see Figure 3):

Power source — Power distribution, including UPS, backup generators (e.g., diesel or natural gas), PDUs and intermediate distribution units.

Heating, ventilation and air-conditioning (HVAC) systems — These may include rooftop units and distributed units that provide localized air cooling. Either overhead or underfloor distribution can be an effective means of delivering air evenly throughout the IT area. Additional cooling and air handling may be required between racks. Perhaps the single greatest issue in the contemporary data center is maintaining adequate cooling and air movement, given the intense heat generated by modern blade servers and storage devices.

Fire protection systems — These include detection and abatement systems that will most likely combine preaction wet-pipe systems with gaseous clean-agent systems (such as FM-200 and Inergen) for sensitive areas.

Security systems — Local and central watch, entrance and egress, monitoring.

Raised floor systems — Traditional data center design has always called for a raised floor to provide an area for power feeds, cabling (in older installations) and a plenum for forced airflow to support under-rack airflow/cooling. These floors have continued to become higher and higher, from an average of 12 to 18 inches in the last generation to 24, 26 and even 36 inches today, depending on power and cooling requirements and the tier level being designed. However, many of the traditional benefits of a raised floor are no longer valid; in fact, if designed properly, a data center can be even more efficient and simpler to configure (and reconfigure) if designed on a slab floor instead.


The days of bulky cabling are gone, replaced by thin copper and fiber optics distributed in cable trays just above the racks. Dedicated power circuits were once the design point for underfloor use; these cables, once installed, were rarely changed again. In today's dynamic world of high-density, rapidly changing environments, the need to move and upgrade power connections quickly has moved the physical location of power connectivity overhead. Also, in traditional data centers, the overall cooling requirements did not change dramatically over time. An "ecosystem" was created that forced enough cool air under the raised floor to provide adequate support for all the planned equipment (which had static requirements). In today's world, the equipment racks require so much power that, in many cases, underfloor cooling is no longer adequate, and the creation of power zones has also forced companies to look at in-row and in-rack cooling solutions as the preferred way to cool new equipment.

Figure 3. Critical Building Systems

CHP: combined heat and power

Source: Gartner (August 2011)

Location

Location will affect operational and cost efficiencies significantly. In many cases, location criteria will be constrained by operational or technical requirements. For example, backup data centers that require synchronous replication will be limited to a radius of 30 to 50 miles from the primary data center. WAN costs may also limit the scope of the site search. However, many data centers can be located in distant locations; therefore, measures should be taken to optimize the location selection. Critical factors include:

Labor markets


Public incentives

Communication infrastructure

Electrical services

Taxes (personal, property)

Proximity to public transit

Real estate markets

Proximity to services/suppliers

Quality of life

Security and public safety

Operational considerations

Site Selection

Selecting a data center site is a complex process involving a large number of variables. In many instances, it is prudent to use a site selection consultant who has the experience, contacts and evaluation tools necessary to undertake a complex site selection process. A best practice is to work through an internal process that establishes the most critical selection criteria, and that assigns weights or degrees of importance to the criteria. The results of this process can be used as a foundation for discussions with prospective consultants.

The following are the major criteria that should be considered in the site selection process:

HR issues — relate to the labor profile in the site area. This is a very important criterion if the data center will be fully staffed, and it decreases in importance (weighting) as the number of people working in the data center decreases. It becomes a nonfactor if you plan to run a "lights out" data center. Factors such as the skills available in the local market, prevailing labor rates and commuting distances are some of the important considerations.

Network connectivity — is important to ensure that response times are sufficient for all areas of the company that require access to the data center systems. Today, high-speed connectivity is available in most areas in most countries; however, you should verify the availability to ensure there are no surprises. This is especially important as companies become global in scope and as their data centers must support many continents. Factors such as distance to fiber trunk lines, availability of multiple carriers, network reliability and options for dark fiber are some of the important considerations to be examined.

Utilities — are a critical consideration, particularly in an era of rising energy rates and demand. Some areas of the world may not have enough capacity to supply the needs of a large data center. Other resources, such as water and natural gas, may also be important. Factors such as electricity rates and capacity, reliability of the power supply, access to dual power grids, and projected power rates are examples of important considerations to be examined.

Environmental concerns — are a relatively new set of site selection criteria, driven by the growing emphasis on greenhouse gas emissions and other environmental issues. Factors such as the site's suitability for free cooling technologies, whether the location makes renewable energy solutions (wind, solar, etc.) possible, and the local utility's use of renewable power are examples of important considerations to be examined.

Data center and staff security/safety/access — is another extremely important criterion, as it addresses the security of the data center site, and safety and accessibility for resident staff. Natural disasters not only can endanger staff, but can also greatly impair the data center's ability to continue operating. Accessibility can also be lost if, for example, a single bridge or road serving the site is closed, which creates problems for data center operations.

Public service access — is a criterion that is often overlooked. It concerns the degree to which the targeted site can be reached quickly by first responders, such as fire, police and ambulance services. This could be an issue if the site is remote or rural, where normal public services are sparse and not readily accessible.

Business services — relate to those services required to support data center operations, such as access to IT vendors, facility equipment and maintenance vendors, telecom services, and other logistical services, such as trucking or delivery services.

Quality of life factors — are those in the site area that relate to the employees staffing the data center. If the data center will have a significant complement of technical, professional and managerial staff, then it is wise to consider quality-of-life factors. This is particularly relevant if other operations will be colocated at the data center, such as network operations, test and development staff, help desk, or one or more departments from the IT organization. This area decreases in importance (weighting) as the number of people to be housed in the data center goes down. It becomes a nonfactor if you have plans to run a lights-out data center. Factors such as affordable housing, property and state and local income tax rates, recreational and cultural facilities, and the quality of local schools are examples of important considerations to be examined.

Incentives — are important because many regions offer economic incentives to attract a "high end" operation, such as a data center, to their communities. This is particularly true if a significant head count will be residing at the data center. Incentives might include tax abatements, energy credits, investment credits and grants. Ensure that special tax rates will not be short term and vanish after you have invested in the area. As part of the site search, IT managers should contact the local economic development agency in the target area to determine what incentives might apply. An ancillary issue relates to the ease of dealing with local authorities relative to zoning, permits, etc. Municipalities in certain parts of the world are extremely bureaucratic, so this should be considered in the incentives category.

Real estate market conditions — vary widely from area to area; therefore, it's important to establish the market profile for the targeted regions under consideration.

Building and acquisition parameters — entail a range of considerations, including the type of building (industrial, on-grade, high-bay construction is preferred), rental rates, terms, options and other elements of the building specification, as well as lease or purchase contract terms.

Architecture

The type of building you select can significantly affect occupancy costs, security, expansion and operational flexibility. As a general rule:


Avoid multistory office buildings with small floor plates.

In most cases, opt for single-story, windowless, industrial-type buildings with large open floor plates. These offer lower rental and operating costs, better physical security, and more flexible space configuration.

Consider buildings with floor plates that have wide column spacing — ideally, 40 feet.

Prefer buildings with higher floor-to-floor clearances — 13 to 14 feet from structural slab to lowest structural member.

Avoid an above-grade location of the IT floor, or, if unavoidable, ensure that the building has adequate and multiple riser capacity for primary/emergency power distribution, diverse fiber entries and other vertical services.

Target buildings that have dual electrical grid service and dual communication connectivity. Avoid issues such as vibration and electromagnetic interference from power utility lines.

In terms of security, provide multiple levels of secured access within the data center, and for specialized, highly secure areas (such as tape vaults), use card key systems in combination with biometric authentication.

Other architecture considerations include:

Large column bays (30 feet by 50 feet is good)

If a raised floor is used, a load factor of at least 150 to 200 pounds per square foot

Minimum 13.5 feet clear from structural slab to lowest structural member

If needed, efficient floor plates (that is, rectangular, square or side core)

Minimal windows; hardened facilities preferred

Level roof without skylights

Loading docks for equipment delivery access

Compactor for rubbish removal

Single point of entry and sufficient setback of building for perimeter security purposes

For multistory buildings only: adequate riser space for primary/emergency power, HVAC, diverse fiber entries and other vertical services

Rooftop acoustic and aesthetic screening for mechanical equipment

Power Distribution

The electrical power plant and distribution system design is crucial to data center reliability and operational efficiency. Blade server technology creates enormous power demands to energize the servers and to support incremental air-conditioning.

Several fundamental principles should serve as the foundation for the electrical system design. These include:

Maintenance and emergency shutdown switches at all entry points in the facility


A grounding system that complies with the National Electrical Code, Article 250 or local codes if applicable

The provision of a signal reference grid (SRG) to reduce high-frequency impedance

The use of heavier (larger-capacity) wiring to accommodate future electrical expansion

The use of scalable PDUs to integrate circuit breakers and equipment connections

The use of power conditioning equipment to integrate with the UPS

Consider using cable trays above the floor to separate signal cables from electrical cables; if a raised floor is used, position electrical cable trays just below the floor surface to preserve airflow throughout the underfloor plenum.

Other power distribution best practices include:

Be aware of overall power requirements (that is, use rack unit measure).

Strive for multiple utility feeds.

Provide for maintenance bypass and emergency shutdown.

Determine if equipment requires single-phase or three-phase power.

Plan for ambient air temperatures between 76°F and 78°F, or in compliance with the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) standards.

Maintain relative humidity levels to minimize electrostatic discharge.

Be mindful of electromagnetic interference (EMI); conduct a study to determine if shielding or other preventive measures are required.

Power Supply

For high availability, data centers should strive to provide at least three lines of defense for backup power: multiple feeds from the public utility; UPS systems; and generators to sustain power through longer-term outages.

The modern UPS provides data centers with emergency backup power to keep IT equipment running over short power disruptions, bridge the time needed to start backup generators for longer power disruptions or allow time for controlled shutdown if no generators are available. UPSs have also taken over the role of providing conditioned power and isolating building power from noise and waveform distortion caused by the equipment. When looking to purchase a UPS, multiple factors must be examined to ensure that it meets the requirements of your data center strategy, while also meeting total cost of ownership (TCO) goals. Several basic principles should guide the size and capability of the UPS system. It should:

Be sized to energize all computer equipment and other electrical devices (such as emergency lighting and security devices) for 100% of the power demand for no less than twice the average start time of your generators. If quick-start generators are expected to transfer load within six seconds, the UPS load should support at least 12 seconds. Current generation flywheel/generator combinations support this technique, as well as traditional battery-based UPS systems.


Be sized for peak load or fault overload conditions. This relates to the surge in power demand when the equipment is first energized. As a rule of thumb, size the UPS for 150% of operating demand.

Be continuously operational to filter and condition the power.

Be of high energy efficiency over a broad range of possible loads (20% to 100%).

In most instances, not be sized to support HVAC systems.

If a UPS is not used, then surge protection should be provided at the panels with a stand-alone isolation/regulation transformer. Install a diesel generator to sustain power through longer-term outages. For high levels of fault tolerance, an additional backup generator will be required. Consider local ordinances and codes relating to fuel tank location and noise abatement. Also, have a plan to monitor the stored fuel to ensure it does not degrade over time and that you have enough fuel on hand to cover long outages.

Periodically test the generators to ensure their operational integrity. Specify the UPS and backup generator capacity to meet total peak load. Depending on the total generator load, some local power companies may offer to buy time on the generators during peak demand days, which could be used to offset some operational costs.
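The sizing rules in this section can be combined into a simple check. The 800 kW operating demand and six-second generator start time below are illustrative assumptions; the 150% surge margin and the ride-through of at least twice the generator start time come from the guidance above.

# Apply the UPS sizing rules of thumb from this section.
# The operating demand and generator start time are illustrative assumptions.
operating_demand_kw = 800     # assumed steady-state critical load
generator_start_s = 6         # assumed quick-start generator transfer time
SURGE_FACTOR = 1.5            # size UPS for ~150% of operating demand

ups_capacity_kw = operating_demand_kw * SURGE_FACTOR
ups_ride_through_s = 2 * generator_start_s   # at least twice the generator start time

print(f"UPS capacity:     >= {ups_capacity_kw:.0f} kW")
print(f"UPS ride-through: >= {ups_ride_through_s} seconds at full load")
# The backup generators should be specified for the total peak load, including
# mechanical loads that the UPS does not carry.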

Mechanical Systems

Because cooling the data center has become a crucial issue, when designing an HVAC system, follow these key guidelines:

Ensure an ambient temperature between 76°F and 80°F, in alignment with ASHRAE recommendations.

Maintain a relative humidity of 45% to 50%.

Depending on location, integration of air-side and water-side economizers is highly recommended.

A preliminary computational fluid dynamics (CFD) analysis should be done early in the design phase.

Ensure that there is bottom-to-top airflow for raised floor systems, and top-down airflow for slab systems.

Strive for redundant systems by installing multiple HVAC units (as opposed to relying on a single centralized chiller).

Maintain a static pressure within the raised-floor plenum that is slightly (on the order of 5%) greater than that of the data center space above it.

Selectively position perforated tiles in the raised floor to direct chilled air into the rack area.

For high-density zones, hot- or cold-aisle containment should be considered to improve cooling efficiencies.

Seal all penetrations in the raised floor to maintain a constant static pressure, and use blanking panels within the racks.


Establish a vapor barrier throughout the perimeter of the data center to minimize condensation. Use spot cooling or special rack enclosures for hot spots in the data center layout.

Design the airflow to maximize the flow of chilled air across and through the equipment racks. With raised floors, this requires that chilled air flow from bottom to top and from front to back through the racks. Alternating aisles between cold aisle and hot aisle facilitates more efficient temperature control.

Security Systems

IT management needs to clearly define an IT security policy and strategy. An integral part of the IT security policy should be the data center's physical security. Different security zones should be established using the concentric circle concept. For example, the first level of security would be the grounds, then the building, then the data center and then the machine room. Inside the machine room, you might consider side-by-side access levels (for example, someone can access the server area, but not the storage or communication area).

When looking at the methods to grant access to different areas, you should use a combination of methods (at least two), based on:

What the person has possession of (picture ID, key, magnetic card, etc.)

What the person knows (password, PIN, etc.)

Who the person is, based on biometric data (fingerprint, palm print, retina scan, etc.)

Also, access may be based on time of day, temporary need or job title.
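The "at least two methods" rule amounts to requiring credentials from at least two of the three factor categories listed above. A minimal sketch of that check follows; the credential names and sample requests are illustrative.

# Check that an access request presents at least two distinct factor categories
# (possession, knowledge, biometric), per the guidance above.
FACTOR_CATEGORIES = {
    "picture_id": "possession", "key": "possession", "magnetic_card": "possession",
    "password": "knowledge", "pin": "knowledge",
    "fingerprint": "biometric", "palm_print": "biometric", "retina_scan": "biometric",
}

def meets_two_factor_rule(presented):
    categories = {FACTOR_CATEGORIES[c] for c in presented if c in FACTOR_CATEGORIES}
    return len(categories) >= 2

print(meets_two_factor_rule(["magnetic_card", "pin"]))         # True: possession + knowledge
print(meets_two_factor_rule(["password", "pin"]))              # False: both are knowledge factors
print(meets_two_factor_rule(["picture_id", "retina_scan"]))    # True: possession + biometric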

In the end, good physical security depends on the effort and accountability of your staff. They should be trained in basic security actions; in secure areas, for example, they should ask people they do not recognize, and who appear to have no badge or escort, what they need. They should also be aware of and prevent anyone from "piggybacking" access behind them.

Raised-Access Floor (RAF)

Although a raised floor was a symbol for and part of the definition of a data center in the past, it should no longer be considered mandatory. The new realities of the data center have changed the playing field, and companies need to examine all the pros and cons of a raised floor to make an informed, not a historical, decision. We believe that, by 2015, more than 50% of all new data center builds will not use a raised floor, up from 3% to 5% today.

Raised floors have traditionally been used to provide:

Space for running bulky data cables

Space for running large power cables and connectors

Shorter, direct cable runs

A clean look for showplace data centers

A common ground for equipment

Space for liquid cooling piping

A cold air plenum


Changing technologies and realities have removed the need for most of the above reasons for a raised floor, especially the newer cooling point solutions, such as in-row and in-rack cooling. Polls taken during the past two years show that the adoption of in-row and in-rack cooling continues to grow in the data center. At the end of 2010, 52% of poll respondents were using some in-row and/or in-rack cooling.

Eliminating the raised floor could remove most, if not all, of the following problems that clients have experienced with raised floors:

Cost

Load factor considerations

Stability, especially in earthquake zones

Low ceilings due to added height of raised floors

Restricted air flow due to obstructions such as cables under the raised floor

Problems keeping area under the flooring clean

Recabling problems due to raised floor support bars every few feet

Safety hazards when floor tiles are removed for maintenance work

Added fire detection/suppression needs to cover the underfloor area

If a raised floor is desired, then specify a height appropriate to the overall data center size, and consider using cast aluminum floor tiles to ensure maximum floor loading capability. Some guidelines for raised-floor heights are as follows (a short sizing sketch appears after the list):

Facilities with less than 1,000 square feet — 12-inch RAF height

Facilities with 1,000 to 5,000 square feet — 12- to 18-inch RAF height

Facilities with 5,000 to 10,000 square feet — 18- to 24-inch RAF height

Facilities with 10,000 square feet and above — 24-inch to 36-inch RAF height
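As a convenience, the height guidelines above can be captured in a small helper that maps planned floor area to the recommended raised-floor height range; the thresholds are exactly those listed, and only the sample sizes are illustrative.

# Map planned data center floor area (sq ft) to the recommended
# raised-access floor height range from the guidelines above.
def recommended_raf_height(sq_ft):
    if sq_ft < 1_000:
        return "12-inch RAF"
    if sq_ft <= 5_000:
        return "12- to 18-inch RAF"
    if sq_ft <= 10_000:
        return "18- to 24-inch RAF"
    return "24- to 36-inch RAF"

for size in (800, 3_000, 8_000, 15_000):   # sample sizes for illustration
    print(f"{size:>6} sq ft -> {recommended_raf_height(size)}")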

When designing a data center with raised floors, consider some of the newer raised-floor technologies, such as:

Directional airflow via moveable, perforated tiles

Variable-air-volume (VAV) dampers

Underfloor fans to increase air volume

Cold-air isolation systems

Enhanced reinforcement to support heavier loads

Air sealing grommets

New tile materials


Fire Detection and Suppression

Without a good fire prevention plan, a fire can have a devastating impact on the data center, both from the fire itself and from the suppression technique. Data center management must work closely with the facilities department to ensure that fire threats are minimized with good planning, training and procedures, and to ensure the installation of proper fire detection and suppression equipment. A good fire protection plan is more than just selecting a fire suppression system. It also includes elements such as fire prevention, fire detection and first-response planning.

A comprehensive fire protection plan should include the following factors:

Fire prevention — The best way to protect a data center from a fire is to prevent it from ever happening.

Fire detection — As a fire passes through its four stages (precombustion, visible smoke, flame and intense heat), it becomes harder to extinguish, causes more damage and poses a greater potential for loss of life. The goal of fire detection is to discover a fire as early as possible (in the precombustion stage). The best photoelectric detector for a data center is an air sampling smoke detector. Because of its sensitivity (up to 1,000 times more sensitive than ordinary photoelectric or ionization detectors), it is sometimes referred to as a very early smoke detector (VESD). Placement of the sensors must take into consideration raised floors, suspended ceilings and hot-aisle/cold-aisle containment areas.

First response — In a large percentage of cases, fires in the data center can be extinguished before any automatic fire suppression equipment is activated. The best defense is usually a properly trained person with a proper and charged fire extinguisher. Note that retail, powder-based extinguishers release corrosive dust and should not be allowed in a data center.

Control system — The control system can range from a simple setup that sounds an alarm and activates the suppression system when a single detector trips, to a very intelligent system. Intelligent control systems can be programmed to vary the sensitivity of the detectors, insert time delays between the alarm and the activation of fire suppression (which allows people to try to put out the fire first), withhold the alarm and the activation of the fire suppression system until two different sensors detect a fire (which prevents a single faulty sensor from triggering the alarm and suppression system), or take other actions based on programmed scripts (a simple sketch of this logic appears after this list).

Evacuation — No doors should be left open to the data center after evacuation, as most fire suppression systems that use clean agents use a total flooding method, which requires the area to be sealed as tightly as possible. After evacuation, all personnel should know to go to a designated location, so it can be determined that all staff have exited the data center. Fire drills should be run at least every six months to ensure that staff can exit the building safely and promptly.

Fire suppression — A large number of fire suppression agents are now on the market to replace Halon for fire suppression in the data center. Data center management and facilities management need to work together to understand the impact that their fire suppression choice will have on long-term cost, cleanliness, greenness and evacuation. Even if, by law, you're required to have water sprinklers in your facility, you need to augment the sprinkler system with a clean-agent fire suppression system that is set to try to extinguish a fire before the sprinkler system activates, to avoid the damage that water will cause to a data center's equipment and media. The two most popular clean-agent systems use Inergen (IG-541) or FM-200 (HFC-227ea). Also popular as clean-agent systems are ECARO-25 (HFC-125) and Novec 1230 (FK-5-1-12). ECARO-25 can be used as a direct, drop-in replacement using the same original Halon equipment that might have been previously installed.

Continuation/recovery of processing — Part of your contingency planning needs to cover the actions that must take place when a fire alarm is activated, along with the evacuation and activation of the fire suppression system. When should you initiate failover to a backup site? When should you shut down and power down systems? When is it possible to continue running the systems? These are questions that should be addressed as part of your contingency planning.

Resetting for new coverage — Sometimes overlooked is the process of resetting your system to be ready to detect and suppress another fire. Recharging handheld and wheeled fire extinguishers is a high priority, as is the replenishment of your data center fire suppression system.
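As an illustration of the intelligent control behavior described under "Control system" above, the sketch below releases the suppression agent only when two independent detectors agree and a delay has elapsed so that staff can attempt a first response. The detector names and timing values are illustrative assumptions.

# Sketch of intelligent fire-control logic: require two independent detectors in
# alarm before releasing the agent, and delay release to allow first response.
# Detector names and the 60-second delay are illustrative assumptions.
def should_release_agent(detector_alarms, seconds_since_first_alarm, release_delay_s=60.0):
    """detector_alarms maps detector name -> True if that detector is in alarm."""
    confirmed = sum(detector_alarms.values()) >= 2   # ignore a single faulty sensor
    return confirmed and seconds_since_first_alarm >= release_delay_s

alarms = {"aspirating_zone_1": True, "aspirating_zone_2": False, "ceiling_photoelectric": True}

print(should_release_agent(alarms, seconds_since_first_alarm=30))  # False: still in the delay window
print(should_release_agent(alarms, seconds_since_first_alarm=90))  # True: two detectors agree, delay elapsed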

Data Center Construction Costs

Sizing Estimates

In the following examples, we have developed a typical data center build estimate, based on multiple client inquiries during the past year. These are averages based on dozens of builds of different sizes and complexities, but we feel they are representative of the market today. The first step in design is to determine how much space will be needed, both for IT and for the building as a whole. Figure 4 represents the average sizes needed to support an 8,000-square-foot data center (IT space), which also houses 32 office staff, three food service staff and a network operations center (NOC) with five people. The space is broken out by tier, because different tiers have different requirements for mechanical redundancy, and thus varying requirements for floor space.

Figure 4. Data Center Space Estimates by Tier Level

Source: Gartner (August 2011)


Data Center Cost Estimates

The average construction costs for the data center referenced above are shown in Figure 5. Again, these are average costs and will vary by region and, more importantly, by the power and cooling configuration integrated into the design. This example represents an 8,000-square-foot data center that supports three power zones within the building. The high-density zone is 10% of the floor space and supports 15 kW per rack. Medium density represents 20% of the floor space and supports 8 kW per rack, and the remainder is low density and supports an average of 5 kW per rack. From a watts-per-square-foot perspective, this equates to approximately 360W, 170W and 110W, respectively. The overall power requirement is 2.4 megawatts (MW), of which 1.1 MW is IT load.
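The roughly 1.1 MW IT load quoted above follows from the zone percentages and the approximate watts-per-square-foot figures; a minimal check of that arithmetic:

# Check the approximate IT load for the 8,000 sq ft example using the zone
# percentages and W/sq ft figures given above.
TOTAL_SQ_FT = 8_000
zones = [   # (share of floor space, approx. W per sq ft)
    (0.10, 360),   # high density, ~15 kW/rack
    (0.20, 170),   # medium density, ~8 kW/rack
    (0.70, 110),   # low density, ~5 kW/rack
]

it_load_w = sum(share * TOTAL_SQ_FT * w_per_sq_ft for share, w_per_sq_ft in zones)
print(f"Approximate IT load: {it_load_w / 1_000:.0f} kW, close to the ~1.1 MW cited above")
# 800*360 + 1,600*170 + 5,600*110 = 1,176,000 W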


Figure 5. Data Center Estimated Build Costs

Source: Gartner (August 2011)

A breakdown of costs by major category is shown in Figure 6 below:


Figure 6. Major Construction Category Costs

MES — mechanical electrical systems

Source: Gartner (August 2011)

Data Center Facilities Management Options

There are three organizational models for managing data center facilities. No one model is preferred; the selection will depend on the degree of facilities management expertise within the IT organization and the company's organizational philosophy.

Model 1: Assign Data Center Facilities Management Responsibility to the Corporate Facilities Organization

In this model, the IT organization is the customer of the facilities management organization. IT specifies the facilities requirements, service levels and budgets. The corporate facilities organization typically assigns personnel to the data center and manages day-to-day facilities operations, including critical building systems, maintenance, repair and the contracting for specialized services, including physical security, cleaning, grounds maintenance, shipping and receiving. This structure leverages the professional expertise of the corporate facilities staff, and ensures close alignment with facilities standards and policies.


Model 2: The Management of Data Center Facilities Is an IT Organizational Responsibility

In this model, facilities professionals report to the data center operations manager and handle the day-to-day facilities management tasks. In most cases, the IT facilities personnel will work closely with corporate facilities and coordinate on issues of facilities standards, security standards, environmental, health and safety standards, and other corporate policies and guidelines. The IT facilities staff will also seek specialized services from the corporate facilities staff relating to site selection for new data centers, building construction or leasing, and specialized engineering services relating to electrical and mechanical systems design and engineering. This model gives the IT organization maximum control over its data center facilities operations.

Model 3: A Matrix Model, in Which IT Facilities Staff Reports to the IT Organization and to the Corporate Facilities Organization

In the matrix structure, the IT facilities staff is accountable to the IT organization, relative to service performance, infrastructure availability and efficiency, and facilities budget management. The IT facilities organization is accountable to the corporate facilities organization, relative to individual employee performance, career development, training, and maintaining adherence to corporate facilities standards and policies. The matrix structure captures the benefits of the first two models; however, as with any matrix structure, it introduces the potential for organizational conflict over resource allocation and operational priorities, as well as for disputes over facilities investment and levels of redundancy.

In all cases, it is best to maintain close ties between the IT organization and the corporate facilities organization. This will require periodic joint staff meetings, project review meetings and multidiscipline project teams, particularly for data center relocation or expansion projects. It is particularly important that facilities professionals work closely with IT procurement specialists in reviewing vendor product specifications relating to power requirements, redundancy features, floor loading factors, and issues relating to space and cooling. With the advent of high-density servers, problems with heat and power demands can wreak havoc with facilities power and cooling systems.

RECOMMENDED READING

Some documents may not be available as part of your current Gartner subscription.

"Choosing a Data Center Site: "Location, Location, Location"

"Containers and Modules: Is This the Future of the Data Center?"

"Shrinking Data Centers: Your Next Data Center Will Be Smaller Than You Think"

"Data Center Decisions: Build, Retrofit or Colocate; Why Not a Hybrid Approach"

"What to Consider When Designing Next-Generation Data Centers"

"Critical Factors in Choosing a Data Center UPS"

"Data Center Fire Suppression Options Are Cost, Toxicity, Green and Clean"

"Data Center Fire Protection Is a Critical Part of Business Continuity"

"Data Center Facility Location Selection Criteria"

