European Code Best Practices v4.0.5-r1


  • 8/13/2019 European COde Best Practices v4 0 5-r1

    1/50

    Version 4.0.5 Public

EUROPEAN COMMISSION
DIRECTORATE-GENERAL JOINT RESEARCH CENTRE
Institute for Energy and Transport
Renewable Energy Unit

2013 Best Practices for the EU Code of Conduct on Data Centres


    1 Document Information

    1.1 Version History

Version¹ | Description | Version Updates | Date
4.0.5 | 2013 Review draft | Introduced charging tariffs for consideration | 25 Mar 2013
4.0.4 | 2013 Review draft | Corrected IEC 62040-3 in section 3.2.2 | 28 Feb 2013
4.0.3 | 2013 Review draft | Corrected versions, dates and numbering | 19 Feb 2013
4.0.2 | 2013 Review draft | Modified references to Main Document | 23 Jan 2013
4.0.1 | 2013 Update draft | Comments from 2012 stakeholders meeting | 13 Dec 2012
3.0.8 | 2012 Release | Final changes from stakeholder call | 15 Dec 2011
3.0.0 | 2011 Release | None | 28 Feb 2011

    Version History

    1.2 Release History

Version | Description | Authoriser | Date
4.0.5 | 2013 Release | Mark Acton | 30 May 2013
3.0.8 | 2012 Release | Liam Newcombe | 15 Dec 2011
3.0.1 | 2011 Release Fix | Liam Newcombe | 01 March 2011
3.0.0 | 2011 Release | Liam Newcombe | 28 Feb 2011
2.0.0 | 2010 Release | Liam Newcombe | 19 Nov 2009
1.0.0 | First Release | Liam Newcombe | 23 Oct 2008

    Release History

¹ Version numbering standard: integer number for release version, first decimal point for major revisions, second decimal point for minor revisions.


    2 Introduction

This document is a companion to the EU Code of Conduct on Data Centres and provides the full list of identified best practices for data centre operators, as referenced in the Code of Conduct Participant and Endorser Guidelines documents.

    2.1 Role of Best Practices

This Best Practice supplement to the Code of Conduct is provided as an education and reference document as part of the Code of Conduct to assist data centre operators in identifying and implementing measures to improve the energy efficiency of their data centres. A broad group of expert reviewers from operators, vendors, consultants, academics, professional and national bodies have contributed to and reviewed the best practices.

This best practice supplement is a full list of the identified and recognised data centre energy efficiency best practices within the Code of Conduct. The best practice list provides a common terminology and frame of reference for describing an energy efficiency practice, to assist Participants and Endorsers in avoiding doubt or confusion over terminology. Customers or suppliers of IT services may also find it useful to request or provide a list of Code of Conduct practices implemented in a data centre to assist in procurement of services that meet their environmental or sustainability standards.

    2.2 Expected Minimum Practices

To help ensure that Participants to the Code of Conduct are recognised as having committed to a useful and substantial level of energy saving effort, a subset of the best practices are identified in this document as being the expected minimum level of energy saving activity for Participant status.

The less disruptive or intrusive of the practices are identified as being applied to the existing data centre and IT equipment, retrospectively where necessary. It is accepted that a number of the practices identified as expected are inappropriate or present an unnecessary burden when applied to an existing running data centre. These practices are identified as being expected either when new IT equipment or software is sourced and deployed, or during a retrofit of the facility. These practices provide substantial benefits and are intended to achieve efficiency improvements through the natural churn of equipment and facilities.

All expected practices should be applied to any data centre constructed from 2011 onwards, specifically all practices marked as Entire data centre, New software, New IT equipment and New build or retrofit which are within the applicant's control.

Practices are marked in the expected column as:

Category | Description
Entire Data Centre | Expected to be applied to all existing IT, Mechanical and Electrical equipment within the data centre
New Software | Expected during any new software install or upgrade
New IT Equipment | Expected for new or replacement IT equipment
New build or retrofit | Expected for any data centre built or undergoing a significant refit of the M&E equipment from 2010 onwards
Optional practices | Practices without a background colour are optional for participants

Note that existing IT equipment moved from another data centre is not expected to comply with the New IT Equipment or New Software practices. New or replacement IT equipment excludes the direct replacement of failed hardware with like for like as part of normal operations. New software install or upgrade refers to major upgrades of software and not the application of service packs and patches in normal management and use.

Retrofit is intended to describe major disruptive works in the data centre which present the opportunity at little incremental cost to implement these additional practices. Examples of retrofit would be (a) when the power to the data floor is shut off and the IT equipment and racks removed, it is expected that practice 5.1.1 Contained hot or cold aisle would be implemented; (b) if the CRAC units are being upgraded or replaced, it is expected that practice 5.5.1 Variable speed fans would be implemented as part of this change.

    2.3 Application and Assessment

The best practices form part of the application and assessment for Participant status. This process is described in the Participant and Endorser Guidelines documents.

    2.4 Value of Practices

Each practice has been assigned a qualitative value to indicate the level of benefit to be expected from an action and the relative priorities that should be applied to them. These values are from 1 to 5, with 5 indicating the maximum value. These values are not intended to be totalled to provide an overall operator score and should not be mistaken for quantitative measures. This would require large scale data on the effects of each practice or technology, which is not yet available, as well as a complex system of scoring representing the combinational increase or reduction of individual practice values within that specific facility.

    2.5 Applicability of Expected Practices

It is understood that not all operators will be able to implement all of the expected practices in their facilities due to physical, logistical, planning or other constraints. In these instances an explanation of why the expected action is not applicable or practical should be provided in the "Reason why this practice cannot be implemented in this data centre" column in the reporting form. Alternative best practices from the supplement may be identified as direct replacements if they result in similar energy savings.

    2.6 Type of Applicant

Each applicant should identify the type of operator that best describes their activity within the data centre for which they are completing the form, on the Data Centre Information tab, as:


Type | Description
Operator | Operates the entire data centre from the physical building through to the consumption of the IT services delivered.
Colo provider | Operates the data centre for the primary purpose of selling space, power and cooling capacity to customers who will install and manage IT hardware.
Colo customer | Owns and manages IT equipment located in a data centre in which they purchase managed space, power and cooling capacity.
Managed service provider (MSP) | Owns and manages the data centre space, power, cooling, IT equipment and some level of software for the purpose of delivering IT services to customers. This would include traditional IT outsourcing.
Managed service provider in Colo | A managed service provider which purchases space, power or cooling in this data centre.

    Table 2-1 Types of applicants

The type of operator serves two purposes: first, it assists the secretariat in the assessment of an application and second, it will be included in the listing for data centres which achieve Participant status on the Code of Conduct website.

    2.7 Applicants who do not control the entire data centre

It is understood that not all operators are responsible for all aspects of the IT environment defined within the best practices. This is not a barrier to Participant status but the operator should sign as a Participant and act as an Endorser for those practices outside of their control.

The following sections are included to provide guidance to operators with partial control of the data centre on which practices they are expected to Implement and which they are expected to Endorse.

It is suggested that you download the application form, select your type of operator and then your areas of responsibility whilst reading this document, to understand how this categorisation guides practice implementation.

    2.7.1 Guidance to operators with partial control of the data centre

The best practice tab of the reporting form provides guidance for each of the minimum expected practices on whether these are considered to apply to each of these example types of operator, in which cases responsibility is to be shared, and how that may be implemented. This may be found in the columns labelled "Guidance to operators with partial control of the data centre".


    2.7.2 Areas of Responsibility

Operators' areas of responsibility are defined as:

Area | Description
Physical building | The building including security, location and maintenance.
Mechanical and electrical plant | The selection, installation, configuration, maintenance and management of the mechanical and electrical plant.
Data floor | The installation, configuration, maintenance and management of the main data floor where IT equipment is installed. This includes the floor (raised in some cases), positioning of CRAC units and PDUs, and basic layout of cabling systems (under floor or overhead).
Racks | The installation, configuration, maintenance and management of the racks into which rack mount IT equipment is installed.
IT equipment | The selection, installation, configuration, maintenance and management of the physical IT equipment.
Operating System / Virtualisation | The selection, installation, configuration, maintenance and management of the Operating System and virtualisation (both client and hypervisor) software installed on the IT equipment. This includes monitoring clients, hardware management agents etc.
Software | The selection, installation, configuration, maintenance and management of the application software installed on the IT equipment.
Business practices | The determination and communication of the business requirements for the data centre including the importance of systems, reliability, availability and maintainability specifications, and data management processes.

    Table 2-2 Areas of responsibility

An example of Participant responsibility would be a colocation provider who does not control the IT equipment; such a provider should actively endorse the practices relating to IT equipment to their customers. This might include the provision of services to assist customers in adopting those practices. Equally, an IT operator using colocation should request their colocation provider to implement the practices relating to the facility.

    An applicant should mark their responsibility for each of these areas on the Data CentreInformation tab of the reporting form as Y, N, or Partial.

Note that these boundaries of responsibility do not apply within organisations. An applicant is considered to control an area if a parent, subsidiary or group company owns or controls the area. For example, if another division of the same group of companies operates a colo facility within which the applicant operates equipment as a service provider, this is considered to be a managed service provider with responsibility for the physical building, mechanical and electrical plant, data floor and racks, not a managed service provider in colo.


    2.7.3 Implement or Endorse

Each operator should determine which of the practices apply to them based on their areas of responsibility. The table below provides an overview for common types of Participant:

Area | Operator | Colo provider | Colo customer | MSP in Colo | MSP
Physical building | Implement | Implement | Endorse | Endorse | Implement
Mechanical & electrical plant | Implement | Implement | Endorse | Endorse | Implement
Data floor and air flow | Implement | Implement & Endorse | Implement & Endorse | Implement | Implement
Racks and rack air flow | Implement | Implement & Endorse | Implement & Endorse | Implement | Implement
IT equipment | Implement | Endorse | Implement | Implement | Implement
Operating System & Virtualisation | Implement | Endorse | Implement | Implement | Implement
Software | Implement | Endorse | Implement | Implement & Endorse | Implement & Endorse
Business practices | Implement | Endorse | Implement | Endorse | Endorse

    Table 2-3 Areas of responsibility for common applicant types

The reporting form contains logic to assist Applicants in determining which of the Expected Practices they should Endorse and which they should Implement based upon the areas of the data centre that are within their control. An Applicant should select Y, N or Partial for each of the identified areas of control on the Data Centre Information tab of the reporting form. The form will then mark each Expected Practice with I (Implement), E (Endorse) or I & E (Implement and Endorse) in the columns labelled "Expected Status based on responsibility areas" to provide guidance to the applicant.

There are many instances where the responsibility for a practice will need to be shared between supplier and customer, for example the installation of IT equipment in the correct orientation in a hot / cold aisle layout data centre. In this case both parties should Implement the practice themselves and Endorse it to the other party(ies).
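The form logic described above can be sketched as a simple lookup over Table 2-3. The table contents below are taken directly from Table 2-3; the function and structure names are illustrative assumptions, not part of the Code or the reporting form:

```python
# Sketch of the reporting form logic: given an applicant type, mark each
# practice area I (Implement), E (Endorse) or I&E (Implement & Endorse).
# The mapping reproduces Table 2-3; names are illustrative, not official.

TABLE_2_3 = {
    # area: (Operator, Colo provider, Colo customer, MSP in Colo, MSP)
    "Physical building":              ("I", "I",   "E",   "E",   "I"),
    "Mechanical & electrical plant":  ("I", "I",   "E",   "E",   "I"),
    "Data floor and air flow":        ("I", "I&E", "I&E", "I",   "I"),
    "Racks and rack air flow":        ("I", "I&E", "I&E", "I",   "I"),
    "IT equipment":                   ("I", "E",   "I",   "I",   "I"),
    "Operating System & Virtualisation": ("I", "E", "I",  "I",   "I"),
    "Software":                       ("I", "E",   "I",   "I&E", "I&E"),
    "Business practices":             ("I", "E",   "I",   "E",   "E"),
}
APPLICANT_TYPES = ("Operator", "Colo provider", "Colo customer",
                   "MSP in Colo", "MSP")

def expected_status(applicant_type: str, area: str) -> str:
    """Return I, E or I&E for one practice area and applicant type."""
    column = APPLICANT_TYPES.index(applicant_type)
    return TABLE_2_3[area][column]
```

A fuller sketch would also apply the Y / N / Partial area-of-control answers from the Data Centre Information tab, downgrading Implement to Endorse for areas outside the applicant's control.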


    3 Data Centre Utilisation, Management and Planning

It is important to develop a holistic strategy and management approach to the data centre. This will enable the Participant to effectively deliver reliability, economic, utilisation and environmental benefits.

3.1 Involvement of Organisational Groups

Ineffective communication between the disciplines working in the data centre is a major driver of inefficiency as well as capacity and reliability issues.

3.1.1 Group involvement (Expected: Entire Data Centre; Value: 5)
Establish an approval board containing representatives from all disciplines (software, IT, M&E). Require the approval of this group for any significant decision to ensure that the impacts of the decision have been properly understood and an effective solution reached. For example, this could include the definition of standard IT hardware lists through considering the M&E implications of different types of hardware. This group could be seen as the functional equivalent of a change board.

    3.2 General Policies

    These policies apply to all aspects of the data centre and its operation.

3.2.1 Consider the embedded energy in devices (Expected: Entire Data Centre; Value: 3)
Carry out an audit of existing equipment to maximise any unused existing capability by ensuring that all areas of optimisation, consolidation and aggregation are identified prior to new material investment.

3.2.2 Mechanical and electrical equipment environmental operating ranges (Expected: Optional; Value: 3)
Recommend the selection and deployment of mechanical and electrical equipment which does not itself require refrigeration. Note that this refers to mechanical compressors and heat pumps: any device which uses energy to raise the temperature of the rejected heat.

3.2.3 Lifecycle Analysis (Expected: Optional; Value: 2)
Introduce a plan for Lifecycle Assessment (LCA) in accordance with emerging EU Guidelines and internationally standardised methodology (ISO 14040 ff). http://lct.jrc.ec.europa.eu/index_jrc


    4 IT Equipment and Services

The IT equipment creates the demand for power and cooling in the data centre; any reductions in power and cooling used by or provisioned for the IT equipment will have magnified effects at the utility energy supply.

Note that the specifications of IT equipment operating temperature and humidity ranges in this section do not indicate that the data floor should be immediately operated at the upper bound of these ranges; this is addressed in section 5.3. The purpose of the equipment environmental specifications in this section is to ensure that new equipment is capable of operating under the wider ranges of temperature and humidity, thus allowing greater flexibility in operating temperature and humidity to the operator.

    4.1 Selection and Deployment of New IT Equipment

Once IT equipment is purchased and installed in the data centre it typically spends several years in the data centre consuming power and creating heat. The appropriate selection of hardware and deployment methods can provide significant long term savings.

4.1.1 IT hardware - Power (Expected: New IT Equipment; Value: 5)
Include the energy efficiency performance of the IT device as a high priority decision factor in the tender process. This may be through the use of Energy Star or SPECPower (http://www.spec.org/power_ssj2008/results/) type standard metrics, or through application or deployment specific user metrics more closely aligned to the target environment, which may include service level or reliability components. The power consumption of the device at the expected utilisation or applied workload should be considered in addition to peak performance per Watt figures.

4.1.2 New IT hardware - Restricted (legacy) operating temperature and humidity range (Expected: New IT Equipment; Value: 4)
Where no equipment of the type being procured meets the operating temperature and humidity range of practice 4.1.3, then equipment supporting at a minimum the restricted (legacy) range of 15°C - 32°C inlet temperature (59°F - 89.6°F) and humidity from 20% to 80% relative humidity and below 17°C maximum dew point (62.6°F) may be procured.
This range is defined as the ASHRAE Allowable range for Class A1 equipment. Class A1 equipment is typically defined as Enterprise class servers (including mainframes) and storage products such as tape devices and libraries.
To support the restrictive range of operation, equipment should be installed in a separate area of the data floor in order to facilitate the segregation of environmental controls as described in practices 5.1.10 and 5.1.12.
In certain cases older technology equipment must be procured due to compatibility and application validation requirements, such as for air traffic control systems. These systems should be considered as a subset of this practice and installed in such a manner so as not to restrict the operation of other equipment as above.
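As an illustration only, the restricted (legacy) range above can be expressed as a simple acceptance check. The limits are those quoted in practice 4.1.2; the function name is an assumption for the sketch, not part of the Code:

```python
# Sketch: check whether measured inlet conditions fall inside the
# restricted (legacy) ASHRAE Class A1 allowable range quoted in 4.1.2.

A1_TEMP_RANGE_C = (15.0, 32.0)   # inlet temperature, degrees C
A1_RH_RANGE_PCT = (20.0, 80.0)   # relative humidity, percent
A1_MAX_DEW_POINT_C = 17.0        # maximum dew point, degrees C

def within_a1_allowable(temp_c: float, rh_pct: float,
                        dew_point_c: float) -> bool:
    """True if the inlet conditions satisfy the Class A1 allowable range."""
    return (A1_TEMP_RANGE_C[0] <= temp_c <= A1_TEMP_RANGE_C[1]
            and A1_RH_RANGE_PCT[0] <= rh_pct <= A1_RH_RANGE_PCT[1]
            and dew_point_c < A1_MAX_DEW_POINT_C)
```

Such a check could be applied to the monitored inlet conditions of the segregated legacy area described above.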


4.1.7 Provision to the as configured power (Expected: New IT Equipment; Value: 3)
Provision power and cooling only to the as-configured power draw capability of the equipment, not the PSU or nameplate rating. Note that this may require changes to the provisioning if the IT equipment is upgraded internally.
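A minimal sketch of the provisioning arithmetic behind this practice. The 750 W nameplate and per-component draws below are invented illustration values, not vendor data:

```python
# Sketch of practice 4.1.7: provision to the as-configured power draw
# rather than the PSU or nameplate rating. All figures are hypothetical.

def as_configured_power_w(component_draws_w, overhead_factor=1.0):
    """Sum the declared draw of the components actually fitted."""
    return sum(component_draws_w) * overhead_factor

nameplate_w = 750.0                     # PSU rating (hypothetical)
fitted = [95.0, 95.0, 8.0, 8.0, 20.0]   # CPUs, DIMM banks, fans (hypothetical)
provisioned = as_configured_power_w(fitted)
# Provisioning to the as-configured draw (226 W here) rather than the
# 750 W nameplate frees the difference for other equipment; as the
# practice notes, this must be re-checked after any internal upgrade.
```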

4.1.8 Energy Star compliant hardware (Expected: New IT Equipment from 2014; Value: 3)
The Energy Star labelling programs for IT equipment should be used as a guide to server selection where and when available for that class of equipment. Operators who are able to determine the in use energy efficiency of hardware through more advanced or effective analysis should select the most efficient equipment for their scenario.
This practice should be in line with the proposed Version 2.0 Energy Star Computer Server Specification when available. Additionally, ensure the use of the Energy Star Data Center Storage Specification V1.0 when released.

4.1.9 Energy & temperature reporting hardware (Expected: New IT Equipment from 2014; Value: 3)
Select equipment with power and inlet temperature reporting capabilities, preferably reporting energy used as a counter in addition to power as a gauge. Where applicable, industry standard reporting approaches should be used such as IPMI, DCMI and SMASH.
To assist in the implementation of temperature and energy monitoring across a broad range of data centres, all devices with an IP interface should support one of:
- SNMP polling of inlet temperature and power draw. Note that event based SNMP traps and SNMP configuration are not required.
- IPMI polling of inlet temperature and power draw (subject to inlet temperature being included as per IPMI 2.0 rev 4).
- An interface protocol from which the operator's existing monitoring platform is able to retrieve inlet temperature and power draw data without the purchase of additional licenses from the equipment vendor.
The intent of this practice is to provide energy and environmental monitoring of the data centre through normal equipment churn.
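Whichever of the polling options above a device supports, the monitoring platform ends up with the same two readings per device. A minimal sketch of normalising them into one record shape, in which the poller callables are stand-ins for a real SNMP or IPMI client (names and fields are illustrative assumptions):

```python
# Sketch of the monitoring intent in 4.1.9: normalise inlet temperature
# and power draw readings from devices polled over SNMP or IPMI into a
# single record shape. Poller callables are stand-ins, not a real client.

from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class Reading:
    device: str
    inlet_temp_c: float
    power_draw_w: float

def poll_device(name: str,
                poller: Callable[[], Tuple[float, float]]) -> Reading:
    """Wrap whichever protocol the device supports behind one interface."""
    temp_c, power_w = poller()
    return Reading(device=name, inlet_temp_c=temp_c, power_draw_w=power_w)

# Stand-in pollers; a real deployment would call an SNMP or IPMI library.
snmp_poll = lambda: (23.5, 310.0)
ipmi_poll = lambda: (24.1, 287.5)

readings = [poll_device("rack1-srv01", snmp_poll),
            poll_device("rack1-srv02", ipmi_poll)]
```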

4.1.10 Control of equipment energy use (Expected: Optional; Value: 5)
Select equipment which provides mechanisms to allow the external control of its energy use. An example of this would be the ability to externally restrict a server's maximum energy use or trigger the shutdown of components, entire systems or sub-systems.


4.1.11 Select free standing equipment suitable for the data centre - Air flow direction (Expected: New IT Equipment; Value: 4)
When selecting equipment which is free standing or supplied in custom racks, the air flow direction of the enclosures should match the air flow design in that area of the data centre. This is commonly front to rear or front to top. Specifically, the equipment should match the hot / cold aisle layout or containment scheme implemented in the facility.
Uncorrected equipment with non standard air flow will compromise the air flow management of the data centre and therefore restrict temperature set points. It is possible to mitigate this compromise by segregating such equipment as per practice 5.1.10.

4.1.12 Operating temperature range - Liquid cooled IT equipment (Expected: Optional; Value: 4)
Devices whose primary cooling method is not air (directly liquid cooled) are not subject to the air environmental requirements specified in 4.1.3. These devices should be able to operate with supply coolant liquid temperatures equal to the air temperatures specified in 4.1.3: 10°C to 35°C (50°F to 95°F).
As described in 5.6.4, this practice applies to devices which deliver cooling fluid directly to the heat removal system of the components, such as water cooled heat sinks or heat pipes, and not to the delivery of cooling liquid to an internal mechanical refrigeration plant or in chassis air cooling systems, which are required to deliver coolant liquid or air to the IT components within the range specified.


4.1.13 IT equipment power against inlet temperature (Expected: New IT Equipment from 2014; Value: 4)
When selecting new IT equipment, require the vendor to supply at minimum: either the total system power or cooling fan power for temperatures covering the full allowable inlet temperature range for the equipment, under 100% load on a specified benchmark such as SPECPower (http://www.spec.org/power_ssj2008/). Data should be provided for 5°C or smaller steps of inlet temperature.
Optional but recommended: total system power covering the full allowable inlet temperature range under 0% and 50% load on the selected benchmark.
These sets of data, shown easily in a single table and single chart, will allow a data centre operator to select equipment to meet their chosen operating temperature range without significant increase in power consumption. This practice is intended to improve the thermal performance of IT equipment by allowing operators to avoid devices with compromised cooling designs and creating a market pressure toward devices which operate equally well at increased intake temperature.
This practice is likely to be modified to use the same measurements as proposed in the Version 2.0 Energy Star Computer Server Specification. Additionally, ensure the use of the Energy Star Data Center Storage Specification V1.0 when released.
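A sketch of how the requested vendor data might be used to compare two candidate devices at an intended operating temperature. The device names and power figures in 5°C steps at 100% load are invented for illustration:

```python
# Sketch of using the vendor data required by 4.1.13: power at 100% load
# in 5 degree C inlet steps, compared at a chosen operating temperature.
# All device names and figures below are hypothetical.

power_at_100pct_load_w = {
    # inlet temp C: {device: total system power, W}
    20: {"server_a": 300, "server_b": 295},
    25: {"server_a": 305, "server_b": 315},
    30: {"server_a": 312, "server_b": 345},
    35: {"server_a": 322, "server_b": 410},  # server_b's fans ramp hard
}

def best_at(inlet_c: int) -> str:
    """Pick the device with the lowest draw at the chosen inlet temp."""
    row = power_at_100pct_load_w[inlet_c]
    return min(row, key=row.get)
```

In this invented data the cheapest device at 20°C is not the cheapest at 35°C, which is exactly the compromised-cooling-design effect the practice is intended to expose.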

4.1.14 New IT hardware - Extended operating temperature and humidity range (Expected: Optional; Value: 3)
Include the operating temperature and humidity ranges at the air intake of new equipment as high priority decision factors in the tender process. Consider equipment which operates under a wider range of intake temperature and humidity, such as that defined in ASHRAE Class A4 or ETSI EN 3.1. This extended range allows operators to eliminate the capital cost of providing any cooling capability in hotter climate regions.
Many vendors provide equipment whose intake temperature and humidity ranges exceed the minimum sets represented by the described classes in one or more parameters. Operators should request the actual supported range from their vendor(s) and determine whether this presents an opportunity for additional energy or cost savings through extending the operating temperature or humidity range in all or part of their data centre.


    4.2 Deployment of New IT Services

The service architecture, software and deployment of IT services have an impact at least as great as that of the IT hardware.

4.2.1 Deploy using Grid and Virtualisation technologies (Expected: New IT Equipment; Value: 5)
Processes should be put in place to require senior business approval for any new service that requires dedicated hardware and will not run on a resource sharing platform. This applies to servers, storage and networking aspects of the service.

4.2.2 Reduce IT hardware resilience level (Expected: New IT Equipment; Value: 4)
Determine the business impact of service incidents for each deployed service and deploy only the level of hardware resilience actually justified.

4.2.3 Reduce hot / cold standby equipment (Expected: New IT Equipment; Value: 4)
Determine the business impact of service incidents for each IT service and deploy only the level of Business Continuity / Disaster Recovery standby IT equipment and resilience that is actually justified by the business impact.

4.2.4 Select efficient software (Expected: New Software; Value: 4)
Make the energy use performance of the software a primary selection factor. Whilst forecasting and measurement tools and methods are still being developed, approximations can be used such as the (under load) power draw of the hardware required to meet performance and availability targets. This is an extension of existing capacity planning and benchmarking processes. See Further development of software efficiency definitions in section 11.

4.2.5 Develop efficient software (Expected: New Software; Value: 4)
Make the energy use performance of the software a major success factor of the project. Whilst forecasting and measurement tools and methods are still being developed, approximations can be used such as the (under load) power draw of the hardware required to meet performance and availability targets. This is an extension of existing capacity planning and benchmarking processes. Performance optimisation should not be seen as a low impact area to reduce the project budget. See Further development of software efficiency definitions in section 11.

4.2.6 Incentives to develop efficient software (Expected: Optional; Value: 4)
If outsourcing software development, include the energy use of the software in the bonus / penalty clauses of the contract. Whilst forecasting and measurement tools and methods are still being developed, approximations can be used such as the (under load) power draw of the hardware required to meet performance and availability targets. This is an extension of existing capacity planning and benchmarking processes. Performance optimisation should not be seen as a low impact area to reduce the project budget. See Further development of software efficiency definitions in section 11.


4.2.7 Eliminate traditional 2N hardware clusters (Expected: Optional; Value: 4)
Determine the business impact of short service incidents for each deployed service and replace traditional active / passive server hardware clusters with fast recovery approaches such as restarting virtual machines elsewhere. (This does not refer to grid or High Performance Compute clusters.)


    4.3 Management of Existing IT Equipment and Services

It is common to focus on new services and equipment being installed into the data centre but there are also substantial opportunities to achieve energy and cost reductions from within the existing service and physical estate.

4.3.1 Audit existing physical and service estate (Expected: Optional; Value: 4)
Audit the existing physical and logical estate to establish what equipment is in place and what service(s) it delivers. Consider the implementation of an ITIL type Configuration Management Database and Service Catalogue.

4.3.2 Decommission unused services (Expected: Entire Data Centre; Value: 5)
Completely decommission and remove the supporting hardware for unused services.

4.3.3 Virtualise and archive legacy services (Expected: Optional; Value: 5)
Servers which cannot be decommissioned for compliance or other reasons but which are not used on a regular basis should be virtualised and then the disk images archived to a low power media. These services can then be brought online when actually required.

4.3.4 Consolidation of existing services (Expected: Optional; Value: 5)
Existing services that do not achieve high utilisation of their hardware should be consolidated through the use of resource sharing technologies to improve the use of physical resources. This applies to servers, storage and networking devices.

4.3.5 Decommission low business value services

Identify services whose business value is low and does not justify the financial or environmental cost; decommission or archive these services.

Optional 4

4.3.6 Shut down idle equipment

Servers, networking and storage equipment that is idle for significant time and cannot be virtualised and archived should be shut down or put into a low power sleep state. It may be necessary to validate the ability of legacy applications and hardware to survive these state changes without loss of function or reliability.

Optional 3
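Identifying equipment that is "idle for significant time" benefits from a concrete rule. The sketch below flags shutdown candidates from utilisation history; the 5% threshold, the server names and the sample figures are illustrative assumptions, not part of this Code.

```python
# Sketch: flag servers for shutdown when average CPU utilisation stays
# below a threshold over an observation window. The threshold, server
# names and sample figures are illustrative assumptions.

IDLE_THRESHOLD_PCT = 5.0  # assumed cut-off for "idle for significant time"

def idle_candidates(samples: dict[str, list[float]]) -> list[str]:
    """Return servers whose mean utilisation is below the threshold."""
    return sorted(
        name for name, util in samples.items()
        if sum(util) / len(util) < IDLE_THRESHOLD_PCT
    )

weekly_cpu_samples = {
    "legacy-app-01": [1.2, 0.8, 2.1, 0.5],    # near idle: shutdown candidate
    "db-prod-01": [55.0, 61.3, 48.9, 70.2],   # busy: keep running
}

print(idle_candidates(weekly_cpu_samples))  # ['legacy-app-01']
```

In practice the candidate list would feed a manual review, since the practice notes that legacy applications may not survive state changes.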

4.3.7 Control of system energy use

Consider resource management systems capable of analysing and optimising where, when and how IT workloads are executed and their consequent energy use. This may include technologies that allow remote deployment or delayed execution of jobs or the movement of jobs within the infrastructure to enable shutdown of components, entire systems or sub-systems. The desired outcome is to provide the ability to limit localised heat output or constrain system power draw to a fixed limit, at a data centre, row, rack or sub-DC level.

Optional 4


    4.4 Data Management

Storage is a major growth area in both cost and energy consumption within the data centre. It is generally recognised that a significant proportion of the data stored is unnecessary, duplicated or does not require high performance access, and that this represents an organisational challenge. Some sectors have a particular issue due to very broad and non-specific data retention directions from governments or regulating bodies. Where there is little structure to the data storage, implementation of these regulations can cause large volumes of data not required by the regulations to be unnecessarily heavily protected and archived.

    No Name Description Expected Value

4.4.1 Data management policy

Develop a data management policy to define which data should be kept, for how long and at what level of protection. Communicate the policy to users and enforce it. Particular care should be taken to understand the impact of any data retention requirements.

Entire Data Centre 3

4.4.2 Separate user logical data storage areas by retention and protection policy

Provide users with multiple data storage areas which are clearly identified by their retention policy and level of data protection. Communicate this policy to users to enable them to store data in an area which matches the required levels of protection and retention. This is particularly valuable where strong retention requirements exist as it allows data subject to those requirements to be separated at source, presenting substantial opportunities for cost and energy savings. Where possible automate the application of these policies.

Optional 3

4.4.3 Separate physical data storage areas by protection and performance requirements

Create a tiered storage environment utilising multiple media types delivering the required combinations of performance, capacity and resilience. Implement clear guidelines on usage of storage tiers with defined SLAs for performance and availability. Consider a tiered charging model based on usage at each tier.

Optional 4

4.4.4 Select lower power storage devices

When selecting storage hardware evaluate the energy efficiency in terms of the service delivered per Watt between options. This may be deployment specific and should include the achieved performance and storage volume per Watt as well as additional factors where appropriate, such as the achieved levels of data protection, performance, availability and recovery capability required to meet the business service level requirements defined in the data management policy.

Evaluate both the in-use power draw and the peak power of the storage device(s) as configured; both impact per-device cost and energy consumption through provisioning.

Additionally ensure the use of the Energy Star Data Center Storage Specification V1.0 when released.

Optional 3
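Service delivered per Watt can be compared with straightforward arithmetic. In the sketch below the two candidate arrays and their capacity, IOPS and power figures are invented; a real evaluation should use measured in-use and peak draw for the configuration actually being purchased.

```python
# Sketch: compare candidate storage arrays on capacity and performance
# delivered per Watt. All figures are invented for illustration; use
# measured in-use and peak draw for the actual configuration.

arrays = {
    "array-a": {"usable_tb": 200, "iops": 50_000, "watts": 1_800},
    "array-b": {"usable_tb": 150, "iops": 60_000, "watts": 1_200},
}

for name, a in arrays.items():
    tb_per_watt = a["usable_tb"] / a["watts"]    # capacity efficiency
    iops_per_watt = a["iops"] / a["watts"]       # performance efficiency
    print(f"{name}: {tb_per_watt:.3f} TB/W, {iops_per_watt:.1f} IOPS/W")
```

Which metric dominates depends on the deployment: an archive tier would weight TB/W, a transactional tier IOPS/W.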

4.4.5 Reduce total data volume

Implement an effective data identification and management policy and process to reduce the total volume of data stored. Consider implementing clean-up days where users delete unnecessary data from storage.

Optional 4


4.4.6 Reduce total storage volume

Implement the data management policy to reduce the number of copies of data, both logical and physical (mirrors). Implement storage subsystem space saving features, such as space efficient snapshots / copies or compression. Implement storage subsystem thin provisioning features where possible.

Optional 4
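The saving from thin provisioning can be estimated with simple arithmetic, since only written blocks consume physical capacity. The volume sizes and the 40% written fraction below are illustrative assumptions.

```python
# Sketch: capacity avoided by thin provisioning, where only written
# blocks consume physical space. Volume sizes and the written fraction
# are illustrative assumptions.

def thin_saving_tb(allocated_tb: list[float], written_frac: float) -> float:
    """Physical TB avoided versus thick provisioning the same volumes."""
    total = sum(allocated_tb)
    return total - total * written_frac

# Ten 2 TB volumes, on average 40% actually written:
print(thin_saving_tb([2.0] * 10, 0.40))  # 12.0 TB not physically consumed
```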


    5 Cooling

Cooling of the Data Centre is frequently the largest energy loss in the facility and as such represents a significant opportunity to improve efficiency.

5.1 Air Flow Management and Design

The objective of air flow management is to minimise bypass air, which returns to the cooling (CRAC / CRAH) units without performing cooling, and the resultant recirculation and mixing of cool and hot air which increases equipment intake temperatures. To compensate, cooling unit air supply temperatures are frequently reduced or air flow volumes increased, which has an energy penalty. Addressing these issues will deliver more uniform equipment inlet temperatures and allow set points to be increased (with the associated energy savings) without the risk of equipment overheating. Implementation of air management actions alone does not result in an energy saving; they are enablers which need to be tackled before set points can be raised.
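The energy penalty of bypass and recirculation follows from the sensible heat equation: the air volume needed for a given IT load is inversely proportional to the supply-to-return delta-T, so mixing that halves the effective delta-T doubles the required flow. The sketch below assumes standard air properties and an illustrative 10 kW load.

```python
# Sketch: cooling air volume required for a given IT load scales
# inversely with the supply-to-return delta-T, so mixing that halves
# the effective delta-T doubles the required flow. Standard air
# properties assumed; the 10 kW load is illustrative.

RHO = 1.2    # air density, kg/m^3 (approx. at sea level, 20 C)
CP = 1005.0  # specific heat capacity of air, J/(kg*K)

def required_airflow_m3s(it_load_kw: float, delta_t_k: float) -> float:
    """Volumetric air flow (m^3/s) to remove it_load_kw at a given delta-T."""
    return it_load_kw * 1000.0 / (RHO * CP * delta_t_k)

print(round(required_airflow_m3s(10.0, 12.0), 2))  # 0.69 m^3/s, good separation
print(round(required_airflow_m3s(10.0, 6.0), 2))   # 1.38 m^3/s, heavy mixing
```

Since fan power rises steeply with delivered volume, halving the effective delta-T costs considerably more than twice the fan energy.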

    No Name Description Expected Value

5.1.1 Design - Contained hot or cold air

There are a number of design concepts whose basic intent is to contain and separate the cold air from the heated return air on the data floor:

Hot aisle containment

Cold aisle containment

Contained rack supply, room return

Room supply, contained rack return (inc. rack chimneys)

Contained rack supply, contained rack return

This action is expected for air cooled facilities over 1 kW per square metre power density.

Note that the in-rack cooling options are only considered to be containment where the entire data floor area is cooled in-rack, not in mixed environments where they return cooled air for remix with other air flow.

Note that failure to contain air flow results in both a reduction in achievable cooling efficiency and an increase in risk. Changes in IT hardware and IT management tools mean that the air flow and heat output of IT devices is no longer constant and may vary rapidly due to power management and workload allocation tools. This may result in rapid changes to data floor air flow pattern and IT equipment intake temperature which cannot be easily predicted or prevented.

New build or retrofit 5

5.1.2 Rack air flow management - Blanking plates

Installation of blanking plates where there is no equipment to reduce hot air re-circulating through gaps in the rack. This reduces air heated by one device being ingested by another device, increasing intake temperature and reducing efficiency.

Entire Data Centre 3


5.1.3 Rack air flow management - Other openings

Installation of aperture brushes (draught excluders) or cover plates to cover all air leakage opportunities in each rack. This includes:

Floor openings at the base of the rack

Gaps at the sides, top and bottom of the rack between equipment or mounting rails and the perimeter of the rack

New build or retrofit 3

5.1.4 Raised floor air flow management

Close all unwanted apertures in the raised floor. Review placement and opening factors of vented tiles to reduce bypass. Maintain unbroken rows of cabinets to prevent re-circulated air; where necessary fill gaps with empty, fully blanked racks. Managing unbroken rows is especially important in hot and cold aisle environments. Any opening between the aisles will degrade the separation of hot and cold air.

Entire Data Centre 3

5.1.5 Design - Return plenums

Consider the use of return plenums to return heated air from the IT equipment to the air conditioning units.

Optional 3

5.1.6 Design - Contained hot or cold air (retrofit)

Where hot / cold aisle separation is already in use but there is no containment of hot or cold air it is possible to retrofit to provide basic separation, for example using curtains. Care should be taken to understand implications for fire systems.

Optional 3

5.1.7 Raised floor air flow management - Obstructions

Review the placement and level of obstruction created by cabling, cable trays and other structures in the air flow paths; these obstruct airflow and create turbulence, increasing the resistance and the energy requirements of air movement, and may increase velocities, causing negative pressure. The use of overhead cabling trays can substantially reduce these losses.

Optional 2

5.1.8 Design - Hot / cold aisle

As the power densities and air flow volumes of IT equipment have increased it has become necessary to ensure that equipment shares an air flow direction, within the rack, in adjacent racks and across aisles.

The hot / cold aisle concept aligns equipment air flow to create aisles between racks that are fed cold air, from which all of the equipment draws intake air, in conjunction with hot aisles, with no cold air feed, to which all equipment exhausts air.

New IT Equipment and New build or retrofit 3

5.1.9 Design - Raised floor or suspended ceiling height

It is common to use the voids in the raised floor, suspended ceiling or both in a data centre to feed cold air to equipment or extract hot air from the equipment. Where they are used, increasing the size of these spaces can reduce the fan losses of moving the air.

Optional 3


5.1.10 Equipment segregation

Deploy groups of equipment with substantially different environmental requirements and / or equipment airflow direction in a separate area. Where the equipment has different environmental requirements it is preferable to provide separate environmental controls.

The objective of this practice is to address the issue of the data centre cooling plant settings being constrained by the equipment with the most restrictive environmental range or poor air flow control, as this compromises the efficiency of the entire data centre.

This practice applies to IT, mechanical and electrical equipment installed in the data centre.

New IT Equipment and New build or retrofit 3

5.1.11 Provide adequate free area on rack doors

Where doors are necessary, solid doors should be replaced with partially perforated doors to ensure adequate cooling airflow. Solid doors impede the cooling airflow and may promote recirculation within the enclosed cabinet, further increasing the equipment intake temperature.

New IT Equipment and New build or retrofit 3

5.1.12 Separate environmental zones

Where a data centre houses both IT equipment compliant with the extended range of practice 4.1.3 and other equipment which requires more restrictive temperature or humidity control, separate areas should be provided. These areas should have separate environmental controls and may use separate cooling systems to facilitate optimisation of the cooling efficiency of each zone.

Examples are equipment which:

Requires tighter environmental controls to maintain battery capacity and lifetime, such as UPS

Requires tighter environmental controls to meet archival criteria, such as tape

Requires tighter environmental controls to meet long warranty durations (10+ year)

The objective of this practice is to avoid the need to set the data centre cooling plant for the equipment with the most restrictive environmental range, thereby compromising the efficiency of the entire data centre.

New build or retrofit 4


    5.2 Cooling Management

The data centre is not a static system and the cooling systems should be tuned in response to changes in the facility thermal load.

    No Name Description Expected Value

5.2.1 Scalable or modular installation and use of cooling equipment

Cooling plant should be installed in a modular fashion allowing operators to shut down unnecessary equipment. This should then be part of the review at each cooling load change. Design to maximise the part load efficiency as described in 3.3.

Optional 3

5.2.2 Shut down unnecessary cooling equipment

If the facility is not yet fully populated or space has been cleared through consolidation, non-variable plant such as fixed speed fan CRAC units can be turned off in the empty areas.

Note that this should not be applied in cases where operating more plant at lower load is more efficient, e.g. variable speed drive CRAC units.

Optional 3

5.2.3 Review of cooling before IT equipment changes

The availability of cooling, including the placement and flow of vented tiles, should be reviewed before each IT equipment change to optimise the use of cooling resources.

Entire Data Centre 2

5.2.4 Review of cooling strategy

Periodically review the IT equipment and cooling deployment against strategy.

Entire Data Centre 2

5.2.5 Review CRAC / AHU settings

Ensure that CRAC units in occupied areas have appropriate and consistent temperature and relative humidity settings to avoid units working against each other.

For example, many CRAC units now have the option to connect their controls and run together when installed in the same area.

Care should be taken to understand and avoid any potential new failure modes or single points of failure that may be introduced.

Optional 3

5.2.6 Dynamic control of building cooling

It is possible to implement control systems that take many factors, including cooling load, data floor air temperature and external air temperature, into account to optimise the cooling system (e.g. chilled water loop temperature) in real time.

Optional 3

5.2.7 Effective regular maintenance of cooling plant

Effective regular maintenance of the cooling system, in order to conserve or achieve a "like new" condition, is essential to maintain the designed cooling efficiency of the data centre.

Examples are: belt tension, condenser coil fouling (water or air side), evaporator fouling, filter changes etc.

Entire Data Centre 2


5.3.2 Review and increase the working humidity range

Reduce the lower humidity set point(s) of the data centre within the ASHRAE Class A2 range (20% relative humidity) to remove de-humidification losses.

Review and if practical increase the upper humidity set point(s) of the data floor within the current humidity range of 21°C (69.8°F) dew point & 80% RH to decrease the dehumidification loads within the facility.

The current, relevant standard is the ASHRAE Class A2 allowable range for Data Centers (needs published URL or replacement).

Note that some data centres may contain equipment with legacy environmental ranges as defined in 4.1.2; the humidity range for these facilities will be restricted by this equipment until segregation can be achieved as described in 5.1.12.

Entire Data Centre 4
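A proposed data floor condition can be checked against the 21°C dew point / 80% RH upper limit using the Magnus approximation for dew point. The coefficients below are commonly used Magnus values and the 27°C / 60% RH sample condition is illustrative.

```python
# Sketch: check a proposed data floor condition against the upper
# limit cited above (21 C dew point and 80% RH), using the Magnus
# approximation for dew point. The coefficients are common Magnus
# values; the 27 C / 60% RH sample condition is illustrative.
from math import log

def dew_point_c(temp_c: float, rh_pct: float) -> float:
    """Magnus approximation of dew point temperature in Celsius."""
    a, b = 17.62, 243.12
    gamma = a * temp_c / (b + temp_c) + log(rh_pct / 100.0)
    return b * gamma / (a - gamma)

def within_upper_limit(temp_c: float, rh_pct: float) -> bool:
    """True if the condition respects 21 C dew point and 80% RH."""
    return dew_point_c(temp_c, rh_pct) <= 21.0 and rh_pct <= 80.0

print(round(dew_point_c(27.0, 60.0), 1))  # 18.6
print(within_upper_limit(27.0, 60.0))     # True
```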

5.3.3 Expanded IT equipment inlet environmental conditions (temperature and humidity)

Where appropriate and effective, Data Centres can be designed and operated within air inlet temperature and relative humidity ranges of 5°C to 40°C and 5% to 80% RH non-condensing respectively, and under exceptional conditions up to +45°C, as described in ETSI EN 300 019, Class 3.1.

Note that using the full range up to 40°C or 45°C will allow for the complete elimination of refrigeration in most climates, allowing the operator to eliminate the capital and maintenance cost of the refrigeration plant.

Optional 5

5.3.4 Review and if possible raise chilled water temperature

Review and if possible increase the chilled water temperature set points to maximise the use of free cooling economisers and reduce compressor energy consumption.

Where a DX system is used the evaporator temperatures should be reviewed.

Electronic expansion valves (EEVs) allow better control and permit higher evaporator temperatures than thermostatic expansion valves (TEVs).

Entire Data Centre 4

5.3.5 Control to a humidity range

Controlling humidity within a wide range of humidity ratio or relative humidity can reduce humidification and dehumidification loads.

Optional 3

5.3.6 Industrial Space

The data centre should be considered as an industrial space, designed, built and operated with the single primary objective of delivering high availability IT services reliably and efficiently.

This objective should not be compromised by the need for human comfort other than to comply with local statutory requirements and law.

Data Centres are technical spaces, not office space, and should therefore only require control of make-up air volumes and environmental conditions according to sensible warehouse or industrial levels rather than for seated human comfort.

Entire Data Centre from 2014 3


5.4 Cooling Plant

The cooling plant typically represents the major part of the energy used in the cooling system. This is also the area with the greatest variation in technologies.

5.4.1 Free and Economised Cooling

Free or economised cooling designs use cool ambient conditions to meet part or all of the facility's cooling requirements, so that compressor work for cooling is reduced or removed, which can result in significant energy reduction. Economised cooling can be retrofitted to some facilities. The opportunities for the utilisation of free cooling are increased in cooler and dryer climates and where increased temperature set points are used. Where refrigeration plant can be reduced in size (or eliminated), operating and capital cost are reduced, including that of supporting electrical infrastructure.
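The achievable economiser hours for a site can be estimated by counting how often the external temperature falls at or below the supply set point. The temperature series and set points in this sketch are toy values; a real study would use local hourly (bin) weather data.

```python
# Sketch: estimate the fraction of the year an economiser can carry
# the load by counting hours with external temperature at or below the
# supply set point. The temperature series and set points are toy
# values; use local hourly weather data for a real study.

def economiser_fraction(hourly_temps_c: list[float], setpoint_c: float) -> float:
    """Share of hours where free cooling alone can meet the set point."""
    free = sum(1 for t in hourly_temps_c if t <= setpoint_c)
    return free / len(hourly_temps_c)

temps = [2, 8, 14, 19, 23, 27, 25, 18, 12, 6]  # illustrative samples

# Raising the set point increases achievable economiser hours:
print(economiser_fraction(temps, 18.0))  # 0.6
print(economiser_fraction(temps, 24.0))  # 0.8
```

This makes the interaction with the temperature practices explicit: each degree added to the set point converts some compressor hours into free cooling hours.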

    No Name Description Expected Value

5.4.1.1 Direct air free cooling

External air is used to cool the facility. Refrigeration systems are present to deal with humidity and high external temperatures if necessary. Exhaust air is re-circulated and mixed with intake air to control supply air temperature and humidity.

This design tends to have the lowest temperature difference between external temperature and IT supply air.

Note that in many cases full mechanical cooling / refrigeration capacity is required as a backup to allow operation during periods of high airborne pollution.

Note that the IT equipment is likely to be exposed to a large humidity range to allow direct air side economisation to work effectively. The achievable economiser hours are directly constrained by the chosen upper humidity limit.

Optional 5

5.4.1.2 Indirect air free cooling

Re-circulated air within the facility is primarily passed through an air to air heat exchanger against external air (which may have adiabatic cooling) to remove heat to the atmosphere. A variation of this is the thermal wheel, a quasi-indirect free cooling system.

This design tends to have a low temperature difference between external temperature and IT supply air.

Note that the operating IT equipment humidity range may be well controlled at negligible energy cost in this type of design.

Optional 5

5.4.1.3 Direct water free cooling

Chilled water is cooled by the external ambient air via a free cooling coil. This may be achieved by dry coolers or by evaporative assistance through spray onto the dry coolers.

This design tends to have a medium temperature difference between external temperature and IT supply air.

Note that the operating IT equipment humidity range may be well controlled at negligible energy cost in this type of design.

Optional 5


5.4.1.4 Indirect water free cooling

Chilled water is cooled by the external ambient conditions via a heat exchanger which is used between the condenser and chilled water circuits. This may be achieved by cooling towers or dry coolers; the dry coolers may have evaporative assistance through spray onto the coolers.

This design tends to have a higher temperature difference between external temperature and IT supply air, restricting the economiser hours available and increasing energy overhead.

Note that the operating IT equipment humidity range may be well controlled at negligible energy cost in this type of design.

Optional 4


    5.4.2 High Efficiency Cooling Plant

When refrigeration is used as part of the cooling system design, high efficiency cooling plant should be selected. Designs should operate efficiently at system level and employ efficient components. This demands an effective control strategy which optimises efficient operation without compromising reliability.

Even in designs where the refrigeration is expected to run for very few hours per year, the cost savings in infrastructure electrical capacity and utility power availability or peak demand fees justify the selection of high efficiency plant.

    No Name Description Expected Value

5.4.2.1 Chillers with high COP

Where refrigeration² is installed, make the Coefficient Of Performance of chiller systems through their likely working range a high priority decision factor during procurement of new plant.

New build or retrofit 3

5.4.2.2 Cooling system operating temperatures

Evaluate the opportunity to decrease the condensing temperature or increase the evaporating temperature; reducing the delta-T between these temperatures means less work is required in the cooling cycle and hence improved efficiency. These temperatures are dependent on required IT equipment intake air temperatures and the quality of air flow management (see Temperature and Humidity Settings).

Entire Data Centre 3
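The benefit of reducing the lift between evaporating and condensing temperature can be illustrated with the ideal (Carnot) COP. Real chiller COP sits well below this bound but follows the same trend; the temperatures used here are illustrative.

```python
# Sketch: the ideal (Carnot) COP shows why a smaller lift between
# evaporating and condensing temperature improves efficiency. Real
# chiller COP sits well below this bound but follows the same trend;
# the temperatures are illustrative.

def carnot_cop(evap_c: float, cond_c: float) -> float:
    """Upper bound on refrigeration COP for the given temperatures."""
    t_evap_k = evap_c + 273.15
    t_cond_k = cond_c + 273.15
    return t_evap_k / (t_cond_k - t_evap_k)

print(round(carnot_cop(10.0, 40.0), 1))  # 9.4 with a 30 K lift
print(round(carnot_cop(15.0, 35.0), 1))  # 14.4 with a 20 K lift
```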

5.4.2.3 Efficient part load operation

Optimise the facility for the partial load it will experience for most of its operational time rather than the maximum load, e.g. sequence chillers, operate cooling towers with shared load for increased heat exchange area.

New build or retrofit 3

5.4.2.4 Variable speed drives for compressors, pumps and fans

Reduced energy consumption for these components in the part load condition where they operate for much of the time.

Consider new or retrofit Electronically Commutated (EC) motors, which are significantly more energy efficient than traditional AC motors across a wide range of speeds.

In addition to installing variable speed drives it is critical to include the ability to properly control the speed according to demand. It is of limited value to install drives which are manually set at a constant speed or have limited control settings.

New build or retrofit 2

² Note that this refers to mechanical compressors and heat pumps: any device which uses energy to raise the temperature of the rejected heat.


5.4.2.5 Select systems which facilitate the use of economisers

Cooling designs should be chosen which allow the use of as much free cooling as is possible according to the physical site constraints and the local climatic or regulatory conditions that may be applicable.

Select systems which facilitate the use of cooling economisers. In some data centres it may be possible to use air side economisers; others may not have sufficient available space and may require a chilled liquid cooling system to allow the effective use of economised cooling.

New build or retrofit 4

5.4.2.6 Do not share data centre chilled water system with comfort cooling

In buildings which are principally designed to provide an appropriate environment for IT equipment, and that have cooling systems designed to remove heat from technical spaces, do not share chilled water systems with human comfort cooling in other parts of the building.

The temperature required to achieve latent cooling for comfort cooling is substantially below that required for sensible cooling of the data centre and will compromise the efficiency of the data centre cooling system.

If comfort cooling remains a requirement, consider the use of heat pumps to provide either cooling or heating for office area comfort.

New build or retrofit 4

5.4.2.7 Do not allow non IT equipment to dictate cooling system set-points

Where other equipment requires a more restrictive temperature or humidity control range than the IT equipment, this should not be permitted to dictate the set points of the cooling system responsible for the IT equipment.

Optional 4


5.6.6 Do not control humidity at CRAC unit

The only humidity control that should be present in the data centre is that on fresh make-up air coming into the building, not on re-circulating air within the equipment rooms. Humidity control at the CRAC unit is unnecessary and undesirable.

Humidity control should be centralised. Do not install humidity control at the CRAC unit on re-circulating air; instead control the specific humidity of the make-up air at the supply AHU. The chilled water loop or DX evaporator temperature should in any case be too high to provide de-humidification.

When purchasing new CRAC units select models which are not equipped with humidity control capability, including any reheat capability; this will reduce both capital and on-going maintenance costs.

New build or retrofit 4


    5.7 Reuse of Data Centre Waste Heat

Data Centres produce significant quantities of waste heat; whilst this is typically at a relatively low temperature, there are some applications for reuse of this energy. As IT equipment utilisation is increased through consolidation and virtualisation the exhaust temperature is likely to increase, which will provide greater opportunity for waste heat to be re-used. Directly liquid cooled IT equipment is likely to provide a further improvement in the return temperature of coolant.

    No Name Description Expected Value

5.7.1 Waste heat re-use

It may be possible to provide low grade heating to industrial space, or to other targets such as adjacent office space fresh air, directly from heat rejected from the data centre. This can reduce energy use elsewhere.

Optional 3

5.7.2 Heat pump assisted waste heat re-use

Where it is not possible to directly re-use the waste heat from the data centre due to the temperature being too low, it can still be economic to use additional heat pumps to raise the temperature to a useful point. This can supply office, district and other heating.

Optional 2
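A first-pass economic check for heat pump assisted re-use compares the cost of a delivered kWh of heat against the existing heat source. The electricity price, the comparison price and the COP of 4 below are illustrative assumptions.

```python
# Sketch: first-pass economics of heat pump assisted re-use. With a
# COP of 4, each kWh of electricity delivers 4 kWh of heat. The COP
# and the energy prices are illustrative assumptions.

def heat_cost_per_kwh(elec_price_per_kwh: float, cop: float) -> float:
    """Cost of one kWh of delivered heat via a heat pump."""
    return elec_price_per_kwh / cop

existing_heat_cost = 0.08  # e.g. gas boiler heat, EUR/kWh (assumed)
pumped_heat_cost = heat_cost_per_kwh(0.20, 4.0)

print(pumped_heat_cost)                       # 0.05
print(pumped_heat_cost < existing_heat_cost)  # True: re-use is economic
```

The achievable COP rises as the source (waste heat) temperature approaches the required delivery temperature, which is why higher return temperatures improve the case for re-use.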

5.7.3 Use data floor waste heat to warm generator and fuel storage areas

Reduce or eliminate the electrical preheat loads for generators and fuel storage by using warm exhaust air from the data floor to maintain temperature in the areas housing generators and fuel storage tanks.

Optional 1


6 Data Centre Power Equipment

The other major part of the facility infrastructure is the power conditioning and delivery system. This normally includes uninterruptible power supplies, power distribution units and cabling, but may also include backup generators and other equipment.

    6.1 Selection and Deployment of New Power Equipment

Power delivery equipment has a substantial impact upon the efficiency of the data centre and tends to stay in operation for many years once installed. Careful selection of the power equipment at design time can deliver substantial savings through the lifetime of the facility.

    No Name Description Expected Value

6.1.1 Modular UPS deployment

It is now possible to purchase modular (scalable) UPS systems across a broad range of power delivery capacities. Physical installation, transformers and cabling are prepared to meet the design electrical load of the facility, but the sources of inefficiency (such as switching units and batteries) are installed, as required, in modular units. This substantially reduces both the capital cost and the fixed overhead losses of these systems. In low power environments these may be frames with plug-in modules whilst in larger environments these are more likely to be entire UPS units.

New build or retrofit 3
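The effect of fixed overhead losses at part load can be shown with a simple fixed-plus-proportional loss model. The 2% fixed and 3% proportional loss coefficients and the load figures below are invented for illustration.

```python
# Sketch: fixed-plus-proportional UPS loss model showing why modular
# deployment improves part-load efficiency. The 2% fixed and 3%
# proportional loss coefficients and the loads are invented.

def ups_efficiency(load_kw: float, rated_kw: float,
                   fixed_loss_frac: float = 0.02,
                   prop_loss_frac: float = 0.03) -> float:
    """Output/input efficiency given fixed and load-proportional losses."""
    fixed_loss = fixed_loss_frac * rated_kw   # paid regardless of load
    prop_loss = prop_loss_frac * load_kw      # scales with load
    return load_kw / (load_kw + fixed_loss + prop_loss)

# 100 kW IT load on one oversized 500 kW frame vs 200 kW of installed modules:
print(round(ups_efficiency(100, 500), 3))  # 0.885
print(round(ups_efficiency(100, 200), 3))  # 0.935
```

Installing only the modules the current load requires shrinks the fixed term, which dominates losses at low utilisation.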

6.1.2 High efficiency UPS

High efficiency UPS systems should be selected, of any technology including electronic or rotary, to meet site requirements.

New build or retrofit 3

6.1.3 Use efficient UPS operating modes

UPS should be deployed in their most efficient operating modes, such as line interactive.

Technologies such as Rotary and High Voltage DC (direct current) can also show improved efficiency as there is no dual conversion requirement. This is particularly relevant for any UPS system feeding mechanical loads, e.g. CRAC fans.

New build or retrofit 2

6.1.4 Code of Conduct compliant UPS

Select UPS systems compliant with the EU Code of Conduct for UPS where that UPS technology is included.

Rotary UPS are not included in the UPS Code of Conduct. Note that this does not suggest that rotary UPS should not be used, simply that rotary technology is not currently covered by the UPS Code of Conduct.

This practice should be implemented in line with International Standard IEC 62040-3 (ed2 - 2011-03). A UPS conforming to this standard shall be able to perform as rated when operating within the following minimum ambient ranges:

Temperature 0°C to +40°C

Relative Humidity 20% to 80%

Optional 2

6.1.5 Power Factor Correction

Monitor, understand and manage the consequences of the Power Factors of both the Mechanical and Electrical infrastructure and installed IT equipment within the data centre.

Optional 3
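One concrete management action is sizing capacitive correction to raise a poor power factor. The sketch below uses the standard kVAR sizing formula; the 400 kW load and the 0.80 measured / 0.95 target power factors are illustrative assumptions.

```python
# Sketch: sizing capacitive correction to raise power factor, one
# concrete way to act on this practice. The 400 kW load and the 0.80
# measured / 0.95 target power factors are illustrative.
from math import acos, tan

def correction_kvar(real_kw: float, pf_now: float, pf_target: float) -> float:
    """Reactive power (kVAR) of correction needed to reach pf_target."""
    return real_kw * (tan(acos(pf_now)) - tan(acos(pf_target)))

print(round(correction_kvar(400.0, 0.80, 0.95), 1))  # 168.5
```

Improving power factor reduces the apparent power (kVA) drawn for the same real load, freeing capacity in transformers, cabling and the utility supply.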


    8 Data Centre Building

The location and physical layout of the data centre building is important to achieving flexibility and efficiency. Technologies such as fresh air cooling require significant physical plant space and air duct space that may not be available in an existing building.

    8.1 Building Physical Layout

The physical layout of the building can present fundamental constraints on the applicable technologies and achievable efficiencies.

    No Name Description Expected Value

8.1.1 Locate M&E plant outside the cooled area

Heat generating Mechanical and Electrical plant such as UPS units should be located outside the cooled areas of the data centre wherever possible to reduce the loading on the data centre cooling plant.

    Optional 2

8.1.2 Select a building with sufficient ceiling height

Insufficient ceiling height will obstruct the use of efficient air cooling technologies such as raised floor, suspended ceiling or ducts in the data centre.

    Optional 3

8.1.3 Facilitate the use of economisers

The physical layout of the building should not obstruct the use of economisers (either air or water).

    Optional 3

8.1.4 Location and orientation of plant equipment

Cooling equipment, particularly dry or adiabatic coolers, should be located in an area of free air movement to avoid trapping it in a local hot spot. Ideally this equipment should also be located in a position on the site where the waste heat does not affect other buildings and create further demand for air conditioning.

    Optional 2

8.1.5 Minimise direct solar heating

Minimise solar heating of the cooled areas of the data centre by providing shade or increasing the albedo (reflectivity) of the building through the use of light coloured roof and wall surfaces. Shade may be constructed, or provided by trees or green roof systems.

    Optional 2


    8.2 Building Geographic Location

Whilst some operators may have no choice of the geographic location for a data centre, it nevertheless impacts achievable efficiency, primarily through the impact of external climate.

No Name Description Expected Value

8.2.1 Locate the Data Centre where waste heat can be reused

Locating the data centre where there are available uses for waste heat can save substantial energy. Heat recovery can be used to heat office or industrial space, hydroponic farming and even swimming pools.

    Optional 2

8.2.2 Locate the Data Centre in an area of low ambient temperature

Free and economised cooling technologies are more effective in areas of low ambient external temperature and/or humidity.

Note that most temperate climates, including much of Northern, Western and Central Europe, present significant opportunity for economised cooling and zero refrigeration.

    Optional 3

8.2.3 Avoid locating the data centre in high ambient humidity areas

Free cooling is particularly impacted by high external humidity as dehumidification becomes necessary; many economiser technologies (such as evaporative cooling) are also less effective.

    Optional 1

8.2.4 Locate near a source of free cooling

Locating the data centre near a source of free cooling such as a river, subject to local environmental regulation.

    Optional 3

8.2.5 Co-locate with power source

Locating the data centre close to the power generating plant can reduce transmission losses and provide the opportunity to operate sorption chillers from power source waste heat.

    Optional 2

    8.3 Water sources

Data centres can use a significant quantity of water in cooling and humidity control; the use of low energy intensity water sources can reduce the effective energy consumption of the data centre.

    No Name Description Expected Value

    8.3.1 Capture rainwater

Capture and storage of rain water for evaporative cooling or other non-potable purposes may reduce overall energy consumption.

    Optional 1

8.3.2 Other water sources

Use of other local non-utility water sources for evaporative cooling or other non-potable purposes may reduce overall energy consumption.

    Optional 1

8.3.3 Metering of water consumption

The site should meter water consumption from all sources. The site should seek to use this data to manage and reduce overall water consumption.

Note that water consumption cannot be directly compared with energy efficiency (PUE) unless the energy intensity of the water source is understood. Comparing water consumption between buildings is therefore not useful.

    Optional 1


9 Monitoring

The development and implementation of an energy monitoring and reporting management strategy is core to operating an efficient data centre.

    9.1 Energy Use and Environmental Measurement

Most data centres currently have little or no energy use or environmental measurement capability; many do not even have a separate utility meter or bill. The ability to measure energy use and factors impacting energy use is a prerequisite to identifying and justifying improvements. It should also be noted that measurement and reporting of a parameter may also include alarms and exceptions if that parameter passes outside of the acceptable or expected operating range.

    No Name Description Expected Value

9.1.1 Incoming energy consumption meter

Install metering equipment capable of measuring the total energy use of the data centre, including all power conditioning, distribution and cooling systems. This should be separate from any non data centre building loads. Note that this is required for CoC reporting.

Entire Data Centre

    3

9.1.2 IT Energy consumption meter

Install metering equipment capable of measuring the total energy delivered to IT systems, including power distribution units. This may also include other power feeds where non UPS protected power is delivered to the racks. Note that this is required for CoC reporting.

Entire Data Centre

    3

9.1.3 Room level metering of supply air temperature and humidity

Install metering equipment at room level capable of indicating the supply air temperature and humidity for the IT equipment.

    Optional 2

9.1.4 CRAC / AHU unit level metering of supply or return air temperature

Collect data from CRAC / AHU units on supply and return (dependent upon operating mode) air temperature.

    Optional 3

9.1.5 PDU level metering of IT Energy consumption

Improve visibility of IT energy consumption by metering at the Power Distribution Unit inputs or outputs.

    Optional 3

9.1.6 PDU level metering of Mechanical and Electrical energy consumption

Improve visibility of data centre infrastructure overheads.

    Optional 3

9.1.7 Row or Rack level metering of temperature

Improve visibility of air supply temperature in existing hot / cold aisle environments with air flow management issues.

Note that this is not normally necessary in a contained air flow environment as air temperatures tend to be more stable and better controlled.

    Optional 2


9.1.8 IT Device level metering of temperature

Improve granularity and reduce metering cost by using built in device level metering of intake and / or exhaust air temperature as well as key internal component temperatures.

Note that most new servers provide this feature as part of the basic chipset functionality.

    Optional 4

9.1.9 IT Device level metering of energy consumption

Improve granularity and reduce metering cost by using built in IT device level metering of energy consumption.

Note that most new servers provide this feature as part of the basic chipset functionality.

    Optional 4

    9.2 Energy Use and Environmental Collection and Logging

Once data on energy use and environmental (temperature and humidity) conditions is available through the installation of measurement devices, it needs to be collected and logged.

No Name Description Expected Value

9.2.1 Periodic manual readings

Entry level energy, temperature and humidity (dry bulb temperature, relative humidity and dew point temperature) reporting can be performed with periodic manual readings of measurement and metering equipment. This should occur at regular times, ideally at peak load.

Note that energy reporting is required by the CoC reporting requirements; also that automated readings are considered to be a replacement for this practice when applying for Participant status.

Entire Data Centre

    3

9.2.2 Automated daily readings

Automated daily readings enable more effective management of energy use.

Supersedes Periodic manual readings.

    Optional 4

9.2.3 Automated hourly readings

Automated hourly readings enable effective assessment of how IT energy use varies with IT workload.

Supersedes Periodic manual readings and Automated daily readings.

    Optional 4

8.1 Achieved economised cooling hours - new build DC

Require collection and logging of full economiser, partial economiser and full mechanical hours throughout the year.

The intent being to record the amount of time and energy spent running mechanical cooling versus relying on free cooling.

The site design, cooling system operational set-points and IT equipment environmental control ranges should allow the data centre to operate with no refrigeration for the IT cooling load for a significant part of the year, as evaluated against a Typical Meteorological Year for the site.

Note that this refers to mechanical compressors and heat pumps: any device which uses energy to raise the temperature of the rejected heat.

New Build From 2015

    3
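One way to produce the required log is to classify each recorded interval from the cooling plant state. This sketch assumes two boolean flags per hourly sample (economiser active, compressor active); the flags and sample data are hypothetical stand-ins for BMS historian records.

```python
# Sketch: classifying logged hours as full economiser, partial economiser
# or full mechanical. The sample records and flags are hypothetical;
# real data would come from the BMS historian.
from collections import Counter

def classify_hour(economiser_on, compressor_on):
    if economiser_on and not compressor_on:
        return "full economiser"
    if economiser_on and compressor_on:
        return "partial economiser"
    return "full mechanical"

samples = [  # one (economiser_on, compressor_on) record per logged hour
    (True, False), (True, True), (False, True), (True, False),
]

hours = Counter(classify_hour(e, c) for e, c in samples)
print(dict(hours))
# {'full economiser': 2, 'partial economiser': 1, 'full mechanical': 1}
```

Summed over a full year, these three counts are exactly the figures the practice asks to be recorded.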


    9.3 Energy Use and Environmental Reporting

Energy use and environmental (temperature and humidity) data needs to be reported to be of use in managing the energy efficiency of the facility.

    No Name Description Expected Value

9.3.1 Written report

Entry level reporting consists of periodic written reports on energy consumption and environmental ranges. This should include determining the averaged DCiE or PUE over the reporting period.

Note that this is required by the CoC reporting requirements; also that this report may be produced by an automated system.

Entire Data Centre

    3
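The averaged figures in the written report follow directly from the two meters in practices 9.1.1 and 9.1.2. A minimal sketch, with invented kWh totals:

```python
# Sketch: averaged PUE and DCiE over a reporting period.
# The kWh totals are invented example meter readings.

def pue(total_facility_kwh, it_kwh):
    """Power Usage Effectiveness: total facility energy / IT energy."""
    return total_facility_kwh / it_kwh

def dcie(total_facility_kwh, it_kwh):
    """DCiE: IT energy as a percentage of total facility energy (1/PUE)."""
    return 100.0 * it_kwh / total_facility_kwh

total_kwh = 180_000.0  # incoming meter (practice 9.1.1), one month
it_kwh = 100_000.0     # IT meter (practice 9.1.2), same period

print(f"PUE = {pue(total_kwh, it_kwh):.2f}")     # PUE = 1.80
print(f"DCiE = {dcie(total_kwh, it_kwh):.1f}%")  # DCiE = 55.6%
```

Averaging over energy (kWh) rather than spot power readings is what makes the reported figure representative of the whole period.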

9.3.2 Energy and environmental reporting console

An automated energy and environmental reporting console to allow M&E staff to monitor the energy use and efficiency of the facility provides enhanced capability. Averaged and instantaneous DCiE or PUE are reported.

Supersedes Written report.

    Optional 3

9.3.3 Integrated IT energy and environmental reporting console

An integrated energy and environmental reporting capability in the main IT reporting console allows integrated management of energy use and comparison of IT workload with energy use.

Averaged, instantaneous and working range DCiE or PUE are reported and related to IT workload.

Supersedes Written report and Energy and environmental reporting console. This reporting may be enhanced by the integration of effective physical and logical asset and configuration data.

    Optional 4

8.1 Achieved economised cooling hours - new build DC

Require reporting to log full economiser, partial economiser and full mechanical hours throughout the year.

The intent being to record the amount of time and energy spent running mechanical cooling versus relying on free cooling.

The site design, cooling system operational set-points and IT equipment environmental control ranges should allow the data centre to operate with no refrigeration for the IT cooling load for a significant part of the year, as evaluated against a Typical Meteorological Year for the site.

Note that this refers to mechanical compressors and heat pumps: any device which uses energy to raise the temperature of the rejected heat.

New Build From 2015

    3


    9.4 IT Reporting

Utilisation of the IT equipment is a key factor in optimising the energy efficiency of the data centre. Consider reporting aggregated data relevant to specific internal business needs. This practice will remain optional while effective open metrics and reporting mechanisms remain under development.

    No Name Description Expected Value

9.4.1 Server Utilisation

Logging and internal reporting of the processor utilisation of the IT server estate, overall or grouped by service / location. Whilst effective metrics and reporting mechanisms are still under development, a basic level of reporting can be highly informative.

    Optional 3
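A basic grouped report of the kind described can be produced with simple aggregation. The sample records and field names below are invented; real figures would come from the monitoring platform's polled CPU counters.

```python
# Sketch: mean processor utilisation grouped by service.
# Sample records and field names are hypothetical.
from collections import defaultdict

samples = [
    {"service": "web", "host": "web01", "cpu_pct": 12.0},
    {"service": "web", "host": "web02", "cpu_pct": 18.0},
    {"service": "db",  "host": "db01",  "cpu_pct": 55.0},
    {"service": "db",  "host": "db02",  "cpu_pct": 45.0},
]

def mean_utilisation_by_service(records):
    grouped = defaultdict(list)
    for r in records:
        grouped[r["service"]].append(r["cpu_pct"])
    return {svc: sum(v) / len(v) for svc, v in grouped.items()}

report = mean_utilisation_by_service(samples)
print(report)  # {'web': 15.0, 'db': 50.0}
```

Even this crude average is enough to flag chronically idle estates that are candidates for consolidation.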

9.4.2 Network Utilisation

Logging and internal reporting of the proportion of the network capacity utilised, overall or grouped by service / location. Whilst effective metrics and reporting mechanisms are still under development, a basic level of reporting can be highly informative.

    Optional 3

9.4.3 Storage Utilisation

Logging and internal reporting of the proportion of the storage capacity and performance utilised, overall or grouped by service / location. Whilst effective metrics and reporting mechanisms are still under development, a basic level of reporting can be highly informative.

The meaning of utilisation can vary depending on what is considered available capacity (e.g., ports, raw v. usable data storage) and what is considered used (e.g., allocation versus active usage). Ensure the definition used in these reports is clear and consistent.

Note that mixed incentives are possible here through the use of technologies such as de-duplication.

    Optional 3
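The note on definitions is easy to see numerically: the same array gives very different "utilisation" figures depending on the chosen numerator and denominator. All capacities below are invented example values.

```python
# Sketch: how the definition of storage utilisation changes the reported
# figure. All TB values are invented examples.
raw_tb = 100.0       # total disk purchased
usable_tb = 70.0     # after RAID / spares overhead
allocated_tb = 60.0  # provisioned to hosts
written_tb = 25.0    # blocks actually holding data

metrics = {
    "allocated / usable": allocated_tb / usable_tb,  # ~86%
    "written / usable": written_tb / usable_tb,      # ~36%
    "written / raw": written_tb / raw_tb,            # 25%
}
for name, value in metrics.items():
    print(f"{name}: {value:.0%}")
```

The same array thus reads anywhere from a quarter to nearly nine-tenths "used", which is why the report must state its definition.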


10 Practices to become minimum expected

The following practices are planned to become minimum expected practices in future updates of the Code. The update year of the Code in which the practices will become expected is shown in the table.

    No Name Description Expected Year

3.2.2 Mechanical and electrical equipment environmental operating ranges

Recommend the selection and deployment of mechanical and electrical equipment which does not itself require refrigeration.

This practice should be implemented in line with IEC 62040-3.

Note that this refers to mechanical compressors and heat pumps: any device which uses energy to raise the temperature of the rejected heat.

New build or retrofit

    2014

4.1.3 New IT hardware - Expected operating temperature and humidity range

Include the operating temperature and humidity ranges at the air intake of new equipment as high priority decision factors in the tender process. Equipment should be able to withstand and be within warranty for the full range of 10°C to 40°C inlet temperature (41°F to 104°F) and humidity within -12°C (14°F) lower dew point or 8% relative humidity to 85% relative humidity or 24°C (75.2°F) dew point. This is defined by the ASHRAE Class A3 allowable temperature and humidity range.

Vendors are required to publish (not make available on request) any restriction to the operating hours within this range for any model or range which restricts warranty to less than continuous operation within the allowable range.

To address equipment types which cannot be procured to meet this specification, exclusions and mitigation measures are provided in practices 4.1.2 for new IT equipment, 5.1.10 for existing data centres and 5.1.12 for new build data centres. Directly liquid cooled IT devices are addressed in practice 4.1.12.

New IT Equipment

    2014
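A procurement checklist could encode the range quoted above as a simple check. This sketch is an assumption-laden reading of the practice text (in particular of the "or" in the lower humidity bound, taken here as: acceptable if either criterion is met); dew point is expected from the sensor and no psychrometric conversion is attempted.

```python
# Sketch: checking a measured inlet condition against the range quoted in
# this practice (ASHRAE Class A3 allowable). The handling of the "or" in
# the lower humidity bound is an interpretation, not a definitive reading.

def within_quoted_range(inlet_c, rh_pct, dew_point_c):
    if not 10.0 <= inlet_c <= 40.0:
        return False  # outside 10C to 40C inlet temperature
    if dew_point_c < -12.0 and rh_pct < 8.0:
        return False  # below both lower humidity criteria
    if rh_pct > 85.0 or dew_point_c > 24.0:
        return False  # above either upper humidity limit
    return True

print(within_quoted_range(35.0, 40.0, 15.0))  # True
print(within_quoted_range(42.0, 40.0, 15.0))  # False: inlet too hot
print(within_quoted_range(20.0, 90.0, 15.0))  # False: RH above 85%
```

Running candidate equipment specifications (and site sensor logs) through a check like this makes the tender criterion mechanical rather than a judgement call.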

4.1.8 Energy Star IT hardware

The Energy Star Labelling programs for IT equipment should be used as a guide to server selection where and when available for that class of equipment. Operators who are able to determine the in use energy efficiency of hardware through more advanced or effective analysis should select the most efficient equipment for their scenario.

This practice should be in line with the proposed Version 2.0 Energy Star Computer Server Specification when available.

Additionally ensure the use of the Energy Star Data Center Storage Specification V1.0 when released.

New IT Equipment

    2014


4.1.9 Energy & temperature reporting hardware

Select equipment with power and inlet temperature reporting capabilities, preferably reporting energy used as a counter in addition to power as a gauge. Where applicable, industry standard reporting approaches should be used such as IPMI, DCMI and SMASH.

To assist in the implementation of temperature and energy monitoring across a broad range of data centres, all devices with an IP interface should support one of:

SNMP polling of inlet temperature and power draw. Note that event based SNMP traps and SNMP configuration are not required.

IPMI polling of inlet temperature and power draw (subject to inlet temperature being included as per IPMI 2.0 rev 4).

An interface protocol from which the operator's existing monitoring platform is able to retrieve inlet temperature and power draw data without the purchase of additional licenses from the equipment vendor.

The intent of this practice is to provide energy and environmental monitoring of the data centre through normal equipment churn.

New IT equipment

    2014
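As one hedged illustration of the polling options listed, inlet temperature and power draw can be read over IPMI with the ipmitool CLI. The sensor names here ("Inlet Temp", "Pwr Consumption") vary by vendor and are assumptions; `ipmitool sensor list` shows a device's real names.

```python
# Sketch: IPMI polling of inlet temperature and power draw via ipmitool.
# Sensor names are vendor specific and assumed here for illustration.
import subprocess

def parse_reading(output):
    """Parse `ipmitool sensor reading` output of the form 'Name | value'."""
    try:
        return float(output.split("|")[1].strip())
    except (IndexError, ValueError):
        return None  # sensor missing or non-numeric reading

def read_sensor(host, user, password, sensor_name):
    out = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host, "-U", user,
         "-P", password, "sensor", "reading", sensor_name],
        capture_output=True, text=True, check=True).stdout
    return parse_reading(out)

# Example (hypothetical BMC address and credentials):
# inlet_c = read_sensor("10.0.0.5", "admin", "secret", "Inlet Temp")
# power_w = read_sensor("10.0.0.5", "admin", "secret", "Pwr Consumption")
```

Because these readings arrive with each generation of replaced hardware, coverage grows through normal equipment churn as the practice intends.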

4.1.13 IT equipment power against inlet temperature

When selecting new IT equipment require the vendor to supply at minimum:

Either the total system power or cooling fan power for temperatures covering the full allowable inlet temperature range for the equipment under 100% load on a specified benchmark such as SPECPower (http://www.spec.org/power_ssj2008/). Data should be provided for 5°C or smaller steps of inlet temperature.

Optional but recommended:

Total system power covering the full allowable inlet temperature range under 0% and 50% load on the selected benchmark.

These sets of data, shown easily in a single table and single chart, will allow a data centre operator to select equipment to meet their chosen operating temperature range without significant increase in power consumption.

This practice is intended to improve the thermal performance of IT equipment by allowing operators to avoid devices with compromised cooling designs and creating a market pressure toward devices which operate equally well at increased intake temperature.

This practice is likely to be modified to use the same measurements as proposed in the Version 2.0 Energy Star Computer Server Specification.

Additionally ensure the use of the Energy Star Data Center Storage Specification V1.0 when released.

New IT Equipment

    2014
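The single table the practice asks for makes comparison between candidate devices straightforward. The wattage figures below are invented to show the comparison, not measured data.

```python
# Sketch: comparing candidate servers from the vendor-supplied table of
# total system power at 100% load across 5C inlet temperature steps.
# All wattages are invented illustrative values.
device_a = {20: 350, 25: 352, 30: 356, 35: 365, 40: 380}  # W at 100% load
device_b = {20: 345, 25: 350, 30: 370, 35: 410, 40: 470}

def power_penalty(profile, base_c=20, target_c=40):
    """Fractional power increase between base and target inlet temperatures."""
    return (profile[target_c] - profile[base_c]) / profile[base_c]

for name, profile in (("A", device_a), ("B", device_b)):
    print(f"device {name}: +{power_penalty(profile):.1%} at 40C vs 20C")
# device A: +8.6% at 40C vs 20C
# device B: +36.2% at 40C vs 20C
```

A device whose power climbs steeply with intake temperature (B here) would undermine an elevated-set-point cooling strategy even if its nameplate power looks similar, which is the compromised cooling design the practice aims to expose.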
