HPCC Mid-Morning Break Dirk Colbry, Ph.D. Research Specialist Institute for Cyber Enabled Discovery Introduction to the new GPU (GFX) cluster


Page 1:

HPCC Mid-Morning Break

Dirk Colbry, Ph.D.

Research Specialist

Institute for Cyber Enabled Discovery

Introduction to the new GPU (GFX) cluster

Page 2:

Naming

• dev – developer node.
• gfx – graphics node.
• GPU – Graphics Processing Unit. I use this term interchangeably with gfx.
• nvx – nVidia graphics node. Internal prefix used on the new cluster.

Page 3:

HPCC System Diagram

Page 4:

System Diagram of GFX

• dev-gfx08 – (formerly gfx-000)
• dev-gfx10 – new developer node
• gfx10 – new 32-node GPU cluster

Page 5:

dev-gfx08 (formerly gfx-000)

• Single quad-core 2.4 GHz Intel processor
• 8 GB of CPU RAM
• Three nVidia GTX 280 video cards:
  1 GB of RAM per card
  240 CUDA processing cores per card
  1.3 GHz processor clock speed
• Total of 724 cores on a single machine
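The "724 cores" figure combines the GPU and CPU cores listed above; a quick sanity check:

```python
# Where dev-gfx08's "724 cores" comes from: three GTX 280 cards
# at 240 CUDA cores each, plus the 4 cores of the quad-core CPU.
gpu_cards = 3
cuda_cores_per_card = 240
cpu_cores = 4

total_cores = gpu_cards * cuda_cores_per_card + cpu_cores
print(total_cores)  # 724
```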

Page 6:

New gfx10 hardware

• 32-node cluster + 1 developer node
  CPU: 2x Intel Xeon E5530 quad-core, 2.40 GHz
  Memory: 18 GB of RAM (~2 GB per core)
  Hard drive: 200 GB disk for local scratch
  Network: Ethernet only (no InfiniBand)
  GPU: two nVidia Tesla M1060 GPUs
• Note: 21 of these nodes are buy-in nodes and are set aside for primary use by their owners.
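A rough picture of the total capacity these node counts imply (node and per-GPU figures from this slide and the next; the aggregate numbers are simple multiplication, not a benchmarked result):

```python
# Aggregate capacity of the gfx10 cluster implied by the slides:
# 32 compute nodes, 2 Tesla M1060s per node, 240 CUDA cores per GPU,
# 933 single-precision gigaflops quoted per GPU.
nodes = 32
gpus_per_node = 2
cores_per_gpu = 240
sp_gflops_per_gpu = 933

total_gpus = nodes * gpus_per_node
total_cuda_cores = total_gpus * cores_per_gpu
peak_sp_tflops = total_gpus * sp_gflops_per_gpu / 1000

print(total_gpus)        # 64
print(total_cuda_cores)  # 15360
print(peak_sp_tflops)    # 59.712
```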

Page 7:

Each nVidia Tesla M1060

• Number of streaming processor cores: 240
• Frequency of processor cores: 1.3 GHz
• Single-precision peak floating point performance: 933 gigaflops
• Double-precision peak floating point performance: 78 gigaflops
• Dedicated memory: 4 GB GDDR3
• Memory speed: 800 MHz
• Memory interface: 512-bit
• Memory bandwidth: 102 GB/sec
• System interface: PCIe
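These spec-sheet numbers hang together; a back-of-the-envelope check (the 2x factor is GDDR3's double data rate, and the 3 flops per core per cycle is the Tesla-era MAD+MUL counting convention, an assumption not stated on the slide):

```python
# Memory bandwidth: clock x double-data-rate x bus width in bytes.
mem_clock_hz = 800e6    # 800 MHz memory clock
bus_width_bits = 512    # 512-bit memory interface
ddr_factor = 2          # GDDR3 transfers on both clock edges

bandwidth_gb_s = mem_clock_hz * ddr_factor * (bus_width_bits / 8) / 1e9
print(bandwidth_gb_s)   # 102.4, matching the quoted ~102 GB/sec

# Single-precision peak: cores x clock x flops-per-cycle.
cores = 240
core_clock_ghz = 1.3
flops_per_cycle = 3     # assumed MAD + MUL per core per cycle

peak_sp_gflops = cores * core_clock_ghz * flops_per_cycle
print(peak_sp_gflops)   # 936.0, close to the quoted 933 gigaflops
```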

Page 8:

Installed Software on HPCC

• CUDA toolkit 2.2 and 2.3 – for programming in C/C++ and Fortran
• PGI compilers with graphics accelerator support
• cublas – CUDA version of the BLAS libraries
• cufft – CUDA version of the FFT libraries
• pycuda – Python CUDA interface
• OpenMM/GROMACS – molecular dynamics programs optimized for GPUs

Page 9:

Other Available Software

• OpenCL C/C++ interface
• Jacket MATLAB GPU wrapper
• Lattice Boltzmann PDE solver
• OpenVIDIA machine vision
• Many, many others
• CUDA Zone – ~90 thousand CUDA developers; lots of software examples, developer forums, tutorials
• http://www.nvidia.com/object/cuda_home.html

Page 10:

Running on the new cluster

• We are still researching the best way to run on the GPU cluster.

• Looking for beta testers: you will probably add the gfx10 attribute to your submission script. Please sign up and we will add you to a mailing list and give you access to the gfx10 queue.

Page 11:

Example script

#!/bin/bash --login
#PBS -l nodes=1:ppn=1:gfx10,walltime=01:00:00

module load cuda

myprogram myarguments
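A slightly fuller sketch of the same job script, assuming a standard PBS/TORQUE setup. The gfx10 attribute and `module load cuda` come from the slides; the job name, output merging, and working-directory change are common PBS conventions added here for illustration, and `myprogram myarguments` remains a placeholder for your own executable.

```shell
#!/bin/bash --login
# Request 1 node / 1 processor on a gfx10 node for one hour
# (gfx10 is the attribute the slides say beta testers will add).
#PBS -l nodes=1:ppn=1:gfx10,walltime=01:00:00
#PBS -N my_gpu_job    # job name (placeholder)
#PBS -j oe            # merge stdout and stderr into one output file

# Load the CUDA toolkit installed on the HPCC
module load cuda

# Run from the directory the job was submitted from
cd "$PBS_O_WORKDIR"

# Replace with your actual program and arguments
myprogram myarguments
```

Submit it with `qsub`, e.g. `qsub myscript.qsub`, once you have access to the gfx10 queue.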