
Design of Smart Data Centres and Offices
Efficient Energy Solutions for the A*STAR Computational Resource Centre and the National Supercomputing Centre (A*CRC and NSCC)

Alan Davis
A*CRC Director of Computing Systems
NSCC Chairman of the Joint Storage Committee

A*STAR – Agency for Science, Technology and Research

[Organisation chart: Biomedical Research Council (BMRC, 8 research units); Science & Engineering Research Council (SERC, 10 research units); Joint Council Office (JCO); A*STAR Graduate Academy (scholarships); ETPL (commercialisation)]

• More than 5,400 staff, including over 4,500 researchers, engineers and technical support staff
• More than 40% of staff come from 60 countries

Mission: We advance science and develop innovative technology to further economic growth and improve lives.

A*CRC – A*STAR Computational Resource Centre

A user facility that provides HPC resources for the entire A*STAR research community.

Chairman: Dr Tan Tin Wee; CIO: Dr John Kahn

A*CRC – Additional Activities
• Study state-of-the-art HPC technologies
• Engage in forward-trends discussions with vendors
• Observe and study best practices and trends
• Implement the best technological solutions for our users' needs


National Petascale Facility

The National Supercomputing Centre (NSCC) Singapore is a national petascale computational facility established to support the high performance science and engineering computing needs of the academic, research and industrial communities in Singapore.

Director: Dr Tan Tin Wee

A*CRC-NSCC Data Centre @ Fusionopolis

[Building photos and floor plan: Level 17]

National Petascale Facility – NSCC Stakeholders

[Logos of stakeholder institutions]

NSCC Supercomputer Architecture

[Architecture diagram: base compute nodes (1,160), accelerated nodes (128) and tiered storage (13 PB) on a fully non-blocking InfiniBand network, with a separate Ethernet network; NSCC login nodes, plus VPN-connected remote login nodes at NUS, NTU and A*STAR and a FAT node at GIS]

HPC Hardware

EDR Interconnect
• EDR (100 Gb/s) fat tree
• 40 Gb/s InfiniBand connection to remote stakeholder campuses (NUS, NTU, GIS)

13 PB Storage
• HSM tiered, 3 tiers (Lustre, GPFS, WOS)
• 500 GB/s IME burst buffer

1 PFLOP System
• 1,288 nodes (24 cores, 2.6 GHz E5-2690 v3)
• 128 GB DDR4 RAM per node
• 10 large-memory nodes (6 TB, 2 TB, 1 TB)
(The headline figure is cross-checked in the sketch below.)
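As a rough cross-check of the "1 PFLOP" headline, peak double-precision throughput follows from the node count and per-core figures above. A minimal sketch, assuming 16 DP FLOPs per cycle per core (the AVX2 FMA rate of Haswell-class parts such as the E5-2690 v3):

```python
# Peak DP throughput = nodes x cores x clock x FLOPs/cycle.
# Assumption: 16 DP FLOPs/cycle/core (2 FMA units x 4-wide AVX2 x 2 ops),
# typical of Haswell-class Xeons like the E5-2690 v3.
NODES = 1288            # 1,160 base + 128 accelerated hosts, per the slides
CORES_PER_NODE = 24     # dual 12-core sockets
CLOCK_HZ = 2.6e9        # nominal (non-AVX) clock
FLOPS_PER_CYCLE = 16

peak = NODES * CORES_PER_NODE * CLOCK_HZ * FLOPS_PER_CYCLE
print(f"Theoretical peak: {peak / 1e15:.2f} PFLOPS")  # ~1.29 PFLOPS
```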

Data Centre Upgrade
• Location: Level 17, North Tower
• Layout as of the beginning of the project

[Floor plan showing the A*CRC DC and NSCC DC areas]

Data Centre Upgrade – Challenges
• Electrical upgrades to provide higher capacity to NSCC and A*CRC:
 – Extensive discussions to obtain approval from the building owner (JTC) and the LEWs to bring additional dedicated power to NSCC on the 17th floor
 – Required one weekend of electrical shutdown across 13 floors of the Fusionopolis building and its systems, affecting:
   • A*STAR Research Institutes
   • The A*CRC DC and the A*STAR Corporate IT DC
   • Commercial entities such as Fitness First
 – Works were performed overnight to avoid disruption to these entities
 – Rehearsals and detailed planning ensured safety and smoothness of operations
• Elevators – limited size restricted moving large equipment up to the 17th floor
• Implementation of an efficient cooling system

A*CRC Data Centre
• Total DC space: 210 sqm
• Shared facilities with A*STAR and NSCC (switchboards, UPS, generator set, …)


NSCC Data Centre – Cooling System

A combination of three cooling systems to achieve maximum efficiency:
• Liquid cooling: warm-water cooling direct-to-chip (HPC racks)
• Chilled-water cooling: rear-door heat exchangers (storage racks)
• Air cooling: Computer Room Air Handler (CRAH) units

Warm-water dry coolers and pumps are located on L18S.

NSCC Data Centre – Green & Eco-Friendly

Warm-water cooling for CPUs:
• First free-cooling system in Singapore and South-East Asia
• Water is maintained at 40°C: it enters the racks at 40°C and exits at 45°C
• Equipment on a technical floor (the 18th) cools the water back down using only fans
• The system can easily be extended for future expansion
(The heat-removal arithmetic is sketched below.)
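The 5°C rise across the racks ties heat removal to loop flow via Q = ṁ·c·ΔT. A minimal sketch with an illustrative flow rate (the actual loop flow is not stated on the slide):

```python
# Heat carried by the warm-water loop: Q = m_dot * c_p * dT.
# Inlet/outlet temperatures are from the slide; the flow rate
# below is an illustrative assumption, not a facility spec.
C_P_WATER = 4186.0          # J/(kg*K), specific heat of water
T_IN, T_OUT = 40.0, 45.0    # deg C, rack inlet and outlet
FLOW_KG_S = 10.0            # kg/s, assumed loop flow rate

heat_kw = FLOW_KG_S * C_P_WATER * (T_OUT - T_IN) / 1e3
print(f"Heat removed: {heat_kw:.0f} kW")  # ~209 kW at 10 kg/s
```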

Green Data Centre:
• PUE of 1.4 (the Singapore average is above 2.5)
• Around $1M in electricity savings every year
(The savings arithmetic is sketched below.)
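The savings claim follows from the definition PUE = total facility energy / IT energy, so facility overhead scales as IT load × (PUE − 1). A minimal sketch comparing PUE 1.4 against the 2.5 Singapore average, with the IT load and tariff as assumptions (neither appears on the slide):

```python
# PUE = total facility power / IT power; overhead = IT * (PUE - 1).
# IT load and electricity tariff are illustrative assumptions.
IT_LOAD_KW = 800            # assumed average IT load
TARIFF_SGD_PER_KWH = 0.20   # assumed tariff
HOURS_PER_YEAR = 8760

def annual_cost_sgd(pue: float) -> float:
    return IT_LOAD_KW * pue * HOURS_PER_YEAR * TARIFF_SGD_PER_KWH

saving = annual_cost_sgd(2.5) - annual_cost_sgd(1.4)
print(f"Annual saving vs PUE 2.5: S${saving:,.0f}")  # ~S$1.5M at these inputs
```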

Cool-Central® Liquid Cooling Technology

Direct-to-Chip Cooling Technology
• Direct-to-chip hot-water (40°C / 104°F) based Cool-Central® Liquid Cooling captures 60-80% of the servers' heat
• Helps reduce data centre cooling costs by over 50% and allows for 2.5-5x higher data centre density
(The heat split is sketched below.)

[Photo: Fujitsu PRIMERGY CX400]
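The 60-80% capture fraction fixes how much heat is left for the rear-door and CRAH systems. A minimal sketch with an assumed per-rack power (not a figure from the slide):

```python
# Split rack heat between the liquid loop and residual air cooling.
# The 60-80% capture range is from the slide; rack power is assumed.
RACK_POWER_KW = 40.0  # assumed per-rack IT load

for captured in (0.60, 0.80):
    to_air = RACK_POWER_KW * (1.0 - captured)
    print(f"{captured:.0%} captured -> {to_air:.0f} kW to air cooling")
```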

Network Operations Centre
• Monitors the technical operations of the data centre complex: relative humidity, temperature, CPU utilisation, power loads, etc.
• Integrated DCIM-BMS
• Schneider Electric StruxureWare Data Center Expert
(A sketch of such threshold checks follows.)
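The NOC checks listed above reduce to threshold tests over periodic telemetry. A minimal sketch of such a check; the metric names and limits are hypothetical and are not the StruxureWare API:

```python
# Threshold checks over data centre telemetry. Metric names and
# limits are illustrative, not StruxureWare Data Center Expert's.
LIMITS = {
    "temperature_c": (18.0, 27.0),          # ASHRAE-style envelope
    "relative_humidity_pct": (20.0, 80.0),
    "power_load_kw": (0.0, 1000.0),
    "cpu_utilisation_pct": (0.0, 95.0),
}

def alerts(reading: dict) -> list[str]:
    """Return a message for every metric outside its allowed band."""
    out = []
    for metric, (lo, hi) in LIMITS.items():
        value = reading.get(metric)
        if value is not None and not (lo <= value <= hi):
            out.append(f"{metric}={value} outside [{lo}, {hi}]")
    return out

print(alerts({"temperature_c": 29.5, "relative_humidity_pct": 55.0}))
```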

Evolution of Building Energy Management

Sweep timers → motion sensors & photocell sensors → RF motion sensors → IoT technologies

Enlighted System: How It Works

Sensor unit → gateway (wireless IEEE 802.15.4 with AES 128-bit encryption) → Energy Manager (Ethernet) → real-time interactive data

Few components, a wireless network, and quick, non-disruptive installation make Enlighted a simple and advanced way of managing a building's physical environment. The lighting layout of a building hosts and powers the sensor network. Each sensor covers 10 m², pings the environment 65 times a minute, and collects data on occupancy, temperature and ambient light. A roll-up of these readings is sketched below.
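At 65 pings a minute per sensor, readings are naturally rolled up before analysis. A minimal sketch of a per-minute roll-up over the occupancy/temperature/light triple described above; the record layout is a hypothetical stand-in, not Enlighted's actual data model:

```python
# Roll one minute of raw sensor samples (65 pings) into summary stats.
# Fields mirror the slide's occupancy, temperature and ambient light;
# the layout itself is hypothetical.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Sample:
    occupied: bool
    temperature_c: float
    ambient_light_lux: float

def minute_summary(samples: list[Sample]) -> dict:
    return {
        "occupancy_ratio": sum(s.occupied for s in samples) / len(samples),
        "avg_temperature_c": mean(s.temperature_c for s in samples),
        "avg_light_lux": mean(s.ambient_light_lux for s in samples),
    }

minute = [Sample(True, 23.1, 310.0)] * 40 + [Sample(False, 23.0, 300.0)] * 25
print(minute_summary(minute))  # 65 samples ~= one minute of data
```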

A*CRC / NSCC Lighting Floor Plan – Open Office

A*CRC Lighting Floor Plan – DC

A*CRC Lighting Savings Dashboard – Open Offices

[Dashboard chart: baseline usage vs actual usage, with savings broken down into task-tuning, daylight-harvesting and occupancy savings; approximately 1.0 MWh saved]

A*CRC Lighting Savings Dashboard – DC

Space Utilization: Heat Map

Space Utilization: Motion Trails

Area-wise Utilization and Occupancy Rates

Acknowledgements

NSCC – Dr Tan Tin Wee
A*CRC – Mr Lim Ching Kwang, Mr Edmund Ho, Mr Berny Heng

Back-Up Slides

HPC Hardware – Compute Nodes

• Large-memory nodes
 – 10 nodes configured with high memory
 – Fujitsu Server PRIMERGY RX4770 M2
 – Intel® Xeon® CPU E7-4830 v3 @ 2.10 GHz
 – 5× 1 TB, 4× 2 TB and 1× 6 TB memory configurations
 – EDR InfiniBand

• Standard compute nodes
 – 1,160 nodes
 – Fujitsu Server PRIMERGY CX2550 M1
 – 27,840 CPU cores
 – Intel® Xeon® CPU E5-2690 v3 @ 2.60 GHz
 – 128 GB RAM per server
 – EDR InfiniBand
 – Liquid cooling system
 – High-density servers

HPC Hardware – Storage

• Tier 0 – Burst buffer: 265 TB at 500 GB/s (Infinite Memory Engine)
• Tier 0 – Scratch FS: 4 PB at 210 GB/s (EXAScaler Lustre® storage)
• Tier 1 – Home FS and Project FS: 4 PB at 100 GB/s (GRIDScaler GPFS® storage)
• Tier 2 – Archive: 5 PB at 20 TB/h (WOS Active Archive, managed via HSM)
(A placement-policy sketch follows.)
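HSM shuttles data between the tiers above according to placement policy, typically driven by access recency. A minimal sketch of such a rule; the thresholds are assumptions, not NSCC's actual policy:

```python
# Choose a storage tier from last-access age. Tier names mirror the
# slide (IME, Lustre, GPFS, WOS); the day thresholds are assumed.
def place(days_since_access: float, in_active_job: bool) -> str:
    if in_active_job:
        return "tier0-burst-buffer (IME)"
    if days_since_access < 30:
        return "tier0-scratch (Lustre)"
    if days_since_access < 180:
        return "tier1-home/project (GPFS)"
    return "tier2-archive (WOS)"

print(place(2, in_active_job=False))    # tier0-scratch (Lustre)
print(place(365, in_active_job=False))  # tier2-archive (WOS)
```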

SingAREN Network Topology

[Topology diagram: the SingAREN Lightwave Internet Exchange (SLIX) interconnects A*STAR (STAR-N), GIS, NTU and NUS; the SingAREN Open Exchange Point links to Internet2 (USA), NICT (Japan), NII (Japan), TEIN (EU) and other international connections, alongside RP, SP, ESSEC, SGIX, SOX, StarHub, Google and Microsoft]

Singapore International Connectivity

[Map: international links from Singapore to London (UK) and Los Angeles (USA); a 100 Gb/s link and a 10 Gb/s link, each co-funded with partners shown on the original slide]