High Performance Computing Workshop
HPC 101
Dr. Charles J Antonelli
LSAIT ARS
June, 2014
Credits
Contributors:
Brock Palen (CAEN HPC)
Jeremy Hallum (MSIS)
Tony Markel (MSIS)
Bennet Fauber (CAEN HPC)
Mark Montague (LSAIT ARS)
Nancy Herlocher (LSAIT ARS)
LSAIT ARS
CAEN HPC
Roadmap
High Performance Computing
Flux Architecture
Flux Mechanics
Flux Batch Operations
Introduction to Scheduling
High Performance Computing
Cluster HPC
A computing cluster is a number of computing nodes connected together via special hardware and software that together can solve large problems.
A cluster is much less expensive than a single supercomputer (e.g., a mainframe).
Using clusters effectively requires support in scientific software applications (e.g., Matlab's Parallel Computing Toolbox, or R's snow library), or custom code.
Programming Models
Two basic parallel programming models:
Message-passing
The application consists of several processes running on different nodes and communicating with each other over the network.
Used when the data are too large to fit on a single node, and simple synchronization is adequate.
"Coarse-grained parallelism"
Implemented using MPI (Message Passing Interface) libraries.
Multi-threaded
The application consists of a single process containing several parallel threads that communicate with each other using synchronization primitives.
Used when the data can fit into a single process, and the communication overhead of the message-passing model is intolerable.
"Fine-grained parallelism" or "shared-memory parallelism"
Implemented using OpenMP (Open Multi-Processing) compilers and libraries.
Both models can be combined in one application: MPI between nodes and OpenMP within a node (see the launch sketch below).
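How a job is launched differs between the two models. A minimal sketch follows; the program names mpi_prog, omp_prog, and hybrid_prog are hypothetical placeholders, not programs from this workshop.

# Message-passing (MPI): several processes, possibly on different nodes,
# started by mpirun and communicating over the network.
mpirun -np 16 ./mpi_prog

# Multi-threaded (OpenMP): one process; the thread count is set through
# the OMP_NUM_THREADS environment variable.
export OMP_NUM_THREADS=12
./omp_prog

# Both combined: one MPI process per node, each running OpenMP threads.
export OMP_NUM_THREADS=12
mpirun -np 2 ./hybrid_prog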
Amdahl’s Law
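The slide presents Amdahl's Law as a chart. In its standard equation form (a textbook statement, not reproduced from the slide): if a fraction P of a program's work can be parallelized and the remaining 1 - P must run serially, the speedup on N processors is

S(N) = 1 / ((1 - P) + P / N)

and as N grows without bound, S(N) approaches 1 / (1 - P). For example, if 95% of the work parallelizes (P = 0.95), the speedup can never exceed 20, however many cores are used (see References 1-3).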
Flux Architecture
Flux
Flux is a university-wide shared computational discovery / high-performance computing service.
Provided by Advanced Research Computing at U-M
Operated by CAEN HPC
Procurement, licensing, and billing by U-M ITS
Interdisciplinary since 2010
http://arc.research.umich.edu/resources-services/flux/
The Flux cluster
[Diagram: login nodes, compute nodes, storage, and a data transfer node, connected together into the cluster.]
A Flux node
12-16 Intel cores
48-64 GB RAM
Local disk
Ethernet and InfiniBand interconnects
A Large Memory Flux node
32-40 Intel cores
1 TB RAM
Local disk
Ethernet and InfiniBand interconnects
Coming soon: A Flux GPU node
16 Intel cores
64 GB RAM
Local disk
8 GPUs, each containing 2,688 GPU cores
Flux software
Licensed and open software: Abacus, BLAST, BWA, bowtie, ANSYS, Java, Mason, Mathematica, Matlab, R, RSEM, STATA SE, …
See http://cac.engin.umich.edu/resources
C, C++, and Fortran compilers: Intel (default), PGI, and GNU toolchains
You can choose software using the module command.
Flux network
All Flux nodes are interconnected via InfiniBand and a campus-wide private Ethernet network.
The Flux login nodes are also connected to the campus backbone network.
The Flux data transfer node is connected to the campus backbone network over a 10 Gbps link.
This means:
The Flux login nodes can access the Internet; the Flux compute nodes cannot.
If InfiniBand is not available for a compute node, code on that node will fall back to Ethernet communications.
Flux data
Lustre filesystem mounted on /scratch on all login, compute, and transfer nodes
640 TB of short-term storage for batch jobs
Large, fast, short-term
NFS filesystems mounted on /home and /home2 on all nodes
80 GB of storage per user for development & testing
Small, slow, long-term
Flux data
Flux does not provide large, long-term storage. Alternatives:
Value Storage (NFS)
$20.84 / TB / month (replicated, no backups)
$10.42 / TB / month (non-replicated, no backups)
LSA Large Scale Research Storage
2 TB free to researchers (replicated, no backups): faculty members, lecturers, postdocs, GSI/GSRA
Additional storage: $30 / TB / year (replicated, no backups)
Departmental server
CAEN can mount your storage on the login nodes
Copying data
Three ways to copy data to/from Flux:
From Linux or Mac OS X, use scp:
scp localfile [email protected]:remotefile
scp [email protected]:remotefile localfile
scp -r localdir [email protected]:remotedir
From Windows, use WinSCP (U-M Blue Disc): http://www.itcs.umich.edu/bluedisc/
Use Globus Connect
Globus Connect
Features:
High-speed data transfer, much faster than scp or sftp
Reliable & persistent
Minimal client software: Mac OS X, Linux, Windows
GridFTP Endpoints:
Gateways through which data flow
Exist for XSEDE, OSG, …
UMich: umich#flux, umich#nyx
Add your own client endpoint!
Add your own server endpoint: contact [email protected]
More information: http://cac.engin.umich.edu/resources/login-nodes/globus-gridftp
Flux Mechanics
Using Flux
Three basic requirements to use Flux:
1. A Flux account
2. A Flux allocation
3. An MToken (or a Software Token)
Using Flux: 1. A Flux account
Allows login to the Flux login nodes
Develop, compile, and test code
Available to members of the U-M community, free
Get an account by visiting https://www.engin.umich.edu/form/cacaccountapplication
Using Flux: 2. A Flux allocation
Allows you to run jobs on the compute nodes
Some units cost-share Flux rates:
Regular Flux: $11.72/core/month; LSA, Engineering, Medical School: $6.60/month
Large Memory Flux: $23.82/core/month; LSA, Engineering, Medical School: $13.30/month
GPU Flux: $107.10 per 2 CPU cores and 1 GPU per month; LSA, Engineering, Medical School: $60/month
Flux Operating Environment: $113.25/node/month; LSA, Engineering, Medical School: $63.50/month
Flux pricing: http://arc.research.umich.edu/flux/hardware-services/
Rackham grants are available for graduate students
Details: http://arc.research.umich.edu/resources-services/flux/flux-pricing/
To inquire about Flux allocations, please email [email protected]
Using Flux: 3. An MToken (or a Software Token)
Required for access to the login nodes
Improves cluster security by requiring a second means of proving your identity
You can use either an MToken or an application for your mobile device (called a Software Token) for this
Information on obtaining and using these tokens: http://cac.engin.umich.edu/resources/login-nodes/tfa
Logging in to Flux
ssh flux-login.engin.umich.edu
MToken (or Software Token) required
You will be randomly connected to a Flux login node, currently flux-login1 or flux-login2
Firewalls restrict access to flux-login. To connect successfully, either:
Physically connect your ssh client platform to the U-M campus wired or MWireless network, or
Use VPN software on your client platform, or
Use ssh to log in to an ITS login node (login.itd.umich.edu), and ssh to flux-login from there
Modules
The module command allows you to specify what versions of software you want to use:
module list          -- Show loaded modules
module load name     -- Load module name for use
module avail         -- Show all available modules
module avail name    -- Show versions of module name*
module unload name   -- Unload module name
module               -- List all options
Enter these commands at any time during your session.
A configuration file allows default module commands to be executed at login:
Put module commands in the file ~/privatemodules/default
Don't put module commands in your .bashrc / .bash_profile
A short example session is sketched below.
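A typical interactive use of module might look like the following; the specific version string R/3.1.0 is a hypothetical example, so run module avail R to see what is actually installed.

module avail R       # list the R versions installed on Flux
module load R        # load the default R version
module list          # confirm which modules are loaded
module unload R      # unload it again
module load R/3.1.0  # load a specific version (hypothetical version string)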
Flux environment
The Flux login nodes have the standard GNU/Linux toolkit:
make, autoconf, awk, sed, perl, python, java, emacs, vi, nano, …
Watch out for source code or data files written on non-Linux systems.
Use these tools to analyze and convert source files to Linux format (see the example below):
file
dos2unix
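As a quick illustration (the filename data.csv is a hypothetical placeholder):

file data.csv        # reports the file type; DOS-format files show "with CRLF line terminators"
dos2unix data.csv    # converts CRLF (DOS) line endings to LF (Unix) in place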
Lab 1
Task: Invoke R interactively on the login node
module load R
module list
R
q()
Please run only very small computations on the Flux login nodes, e.g., for testing.
Lab 2
Task: Run R in batch mode
module load R
Copy sample code to your login directory:
cd
cp ~cja/hpc-sample-code.tar.gz .
tar -zxvf hpc-sample-code.tar.gz
cd ./hpc-sample-code
Examine Rbatch.pbs and Rbatch.R
Edit Rbatch.pbs with your favorite Linux editor:
Change the #PBS -M email address to your own
(A sketch of what a script like Rbatch.pbs typically contains appears below.)
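The real Rbatch.pbs is supplied in the tarball; the following is only a hedged sketch of what such a script usually contains, with placeholder values for the allocation, email address, and resources.

#PBS -N Rbatch
#PBS -V
#PBS -A FluxTraining_flux                  # placeholder allocation
#PBS -l qos=flux
#PBS -q flux
#PBS -l procs=1,pmem=1gb,walltime=00:10:00
#PBS -M youremailaddress                   # replace with your own address
#PBS -m abe
#PBS -j oe

# Run the R program in batch mode from the submission directory
cd $PBS_O_WORKDIR
R CMD BATCH --no-save Rbatch.R Rbatch.out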
Lab 2
Task: Run R in batch mode
Submit your job to Flux:
qsub Rbatch.pbs
Watch the progress of your job:
qstat -u uniqname
where uniqname is your own uniqname
When complete, look at the job's output:
less Rbatch.out
Copy your results to your local workstation (change uniqname to your own uniqname):
scp [email protected]:hpc-sample-code/Rbatch.out Rbatch.out
Lab 3
Task: Use the multicore package
The multicore package allows you to use multiple cores on the same node
module load R
cd ~/hpc-sample-code
Examine Rmulti.pbs and Rmulti.R
Edit Rmulti.pbs with your favorite Linux editor:
Change the #PBS -M email address to your own
Lab 3
Task: Use the multicore package
Submit your job to Flux:
qsub Rmulti.pbs
Watch the progress of your job:
qstat -u uniqname
where uniqname is your own uniqname
When complete, look at the job's output:
less Rmulti.out
Copy your results to your local workstation (change uniqname to your own uniqname):
scp [email protected]:hpc-sample-code/Rmulti.out Rmulti.out
Compiling Code
Assuming default module settings:
Use mpicc/mpiCC/mpif90 for MPI code
Use icc/icpc/ifort with -openmp for OpenMP code
Serial code, Fortran 90:
ifort -O3 -ipo -no-prec-div -xHost -o prog prog.f90
Serial code, C:
icc -O3 -ipo -no-prec-div -xHost -o prog prog.c
MPI parallel code:
mpicc -O3 -ipo -no-prec-div -xHost -o prog prog.c
mpirun -np 2 ./prog
(An OpenMP compile-and-run example appears below.)
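For completeness, compiling and running an OpenMP program might look like this; prog.c is a placeholder, and -openmp is the OpenMP flag of the Intel 14 compilers that are the Flux default.

icc -O3 -ipo -no-prec-div -xHost -openmp -o prog prog.c
export OMP_NUM_THREADS=12    # run with 12 threads on one node
./prog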
Lab 4
Task: Compile and execute simple programs on the Flux login node
Copy sample code to your login directory:
cd
cp ~brockp/cac-intro-code.tar.gz .
tar -xvzf cac-intro-code.tar.gz
cd ./cac-intro-code
Examine, compile & execute helloworld.f90:
ifort -O3 -ipo -no-prec-div -xHost -o f90hello helloworld.f90
./f90hello
Examine, compile & execute helloworld.c:
icc -O3 -ipo -no-prec-div -xHost -o chello helloworld.c
./chello
Examine, compile & execute MPI parallel code:
mpicc -O3 -ipo -no-prec-div -xHost -o c_ex01 c_ex01.c
mpirun -np 2 ./c_ex01
Makefiles
The make command automates your code compilation process.
It uses a makefile to specify dependencies between source and object files.
The sample directory contains a sample makefile.
To compile c_ex01:  make c_ex01
To compile all programs in the directory:  make
To remove all compiled programs:  make clean
To make all the programs using 8 compiles in parallel:  make -j8
(A minimal makefile sketch follows.)
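This is not the makefile shipped with cac-intro-code, only a minimal sketch of the idea: each target lists the files it depends on, and make rebuilds a target only when a dependency is newer than it (recipe lines must be indented with a tab).

CC = mpicc
CFLAGS = -O3 -ipo -no-prec-div -xHost

all: c_ex01

# c_ex01 is rebuilt only if c_ex01.c has changed since the last build
c_ex01: c_ex01.c
	$(CC) $(CFLAGS) -o c_ex01 c_ex01.c

clean:
	rm -f c_ex01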
Flux Batch Operations
Portable Batch System
All production runs are run on the compute nodes using the Portable Batch System (PBS).
PBS manages all aspects of cluster job execution except job scheduling.
Flux uses the Torque implementation of PBS.
Flux uses the Moab scheduler for job scheduling.
Torque and Moab work together to control access to the compute nodes.
PBS puts jobs into queues.
Flux has a single queue, named flux.
Cluster workflow
You create a batch script and submit it to PBS.
PBS schedules your job, and it enters the flux queue.
When its turn arrives, your job executes the batch script.
Your script has access to any applications or data stored on the Flux cluster.
When your job completes, anything it sent to standard output and standard error is saved and returned to you.
You can check on the status of your job at any time, or delete it if it's not doing what you want.
A short time after your job completes, it disappears from the queue.
Basic batch commands
Once you have a script, submit it:
qsub scriptfile
$ qsub singlenode.pbs
6023521.nyx.engin.umich.edu
You can check on the job status:
qstat jobid
qstat -u user
$ qstat -u cja
nyx.engin.umich.edu:
                                                                Req'd  Req'd   Elap
Job ID               Username Queue    Jobname          SessID NDS TSK Memory  Time  S Time
-------------------- -------- -------- ---------------- ------ --- --- ------ ----- - -----
6023521.nyx.engi     cja      flux     hpc101i              --   1   1     -- 00:05 Q    --
To delete your job:
qdel jobid
$ qdel 6023521
$
Loosely-coupled batch script
#PBS -N yourjobname
#PBS -V
#PBS -A youralloc_flux
#PBS -l qos=flux
#PBS -q flux
#PBS -l procs=12,pmem=1gb,walltime=01:00:00
#PBS -M youremailaddress
#PBS -m abe
#PBS -j oe

# Your Code Goes Below:
cd $PBS_O_WORKDIR
mpirun ./c_ex01
Tightly-coupled batch script
#PBS -N yourjobname
#PBS -V
#PBS -A youralloc_flux
#PBS -l qos=flux
#PBS -q flux
#PBS -l nodes=1:ppn=12,mem=47gb,walltime=02:00:00
#PBS -M youremailaddress
#PBS -m abe
#PBS -j oe

# Your Code Goes Below:
cd $PBS_O_WORKDIR
matlab -nodisplay -r script
Lab 5
Task: Run an MPI job on 8 cores
Compile c_ex05:
cd ~/cac-intro-code
make c_ex05
Edit the file run with your favorite Linux editor:
Change the #PBS -M address to your own (I don't want Brock to get your email!)
Change the #PBS -A allocation to FluxTraining_flux, or to your own allocation, if desired
Change the #PBS -l allocation to flux
Submit your job:
qsub run
PBS attributes
As always, man qsub is your friend.
-N : sets the job name; can't start with a number
-V : copy shell environment to compute node
-A youralloc_flux : sets the allocation you are using
-l qos=flux : sets the quality of service parameter
-q flux : sets the queue you are submitting to
-l : requests resources, like number of cores or nodes
-M : whom to email; can be multiple addresses
-m : when to email: a=job abort, b=job begin, e=job end
-j oe : join STDOUT and STDERR to a common file
-I : allow interactive use
-X : allow X GUI use
PBS resources (1)
A resource (-l) can specify:
Request wallclock (that is, running) time:
-l walltime=HH:MM:SS
Request C MB of memory per core:
-l pmem=Cmb
Request T MB of memory for the entire job:
-l mem=Tmb
Request M cores on arbitrary node(s):
-l procs=M
Request a token to use licensed software:
-l gres=stata:1
-l gres=matlab
-l gres=matlab%Communication_toolbox
(A combined example appears below.)
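Several of these requests can be combined in one job; a hedged example with placeholder values:

# 16 cores anywhere, 2 GB of memory per core, 4 hours of walltime, plus one Stata license token
#PBS -l procs=16,pmem=2gb,walltime=04:00:00
#PBS -l gres=stata:1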
PBS resources (2)
A resource (-l) can specify:
For multithreaded code:
Request M nodes with at least N cores per node:
-l nodes=M:ppn=N
Request M cores with exactly N cores per node (note the difference in syntax and semantics compared with ppn!):
-l nodes=M,tpn=N
(you'll only use this for specific algorithms; see the comparison below)
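To make the distinction concrete, a hedged comparison with arbitrary numbers:

# nodes:ppn form -- 2 nodes, at least 12 cores on each node
#PBS -l nodes=2:ppn=12

# nodes,tpn form -- 24 cores total, placed exactly 12 per node
#PBS -l nodes=24,tpn=12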
Interactive jobs
You can submit jobs interactively:
qsub -I -X -V -l procs=2 -l walltime=15:00 -A youralloc_flux -l qos=flux -q flux
This queues a job as usual.
Your terminal session will be blocked until the job runs.
When your job runs, you'll get an interactive shell on one of your nodes.
Invoked commands will have access to all of your nodes.
When you exit the shell, your job is deleted.
Interactive jobs allow you to:
Develop and test on cluster node(s)
Execute GUI tools on a cluster node
Utilize a parallel debugger interactively
Lab 6
Task: Run an interactive job
Enter this command (all on one line):
qsub -I -V -l procs=1 -l walltime=30:00 -A FluxTraining_flux -l qos=flux -q flux
When your job starts, you'll get an interactive shell.
Copy and paste the batch commands from the "run" file, one at a time, into this shell.
Experiment with other commands.
After thirty minutes, your interactive shell will be killed.
Lab 7
Task: Run Matlab interactively
module load matlab
Start an interactive PBS session:
qsub -I -V -l procs=2 -l walltime=30:00 -A FluxTraining_flux -l qos=flux -q flux
Run Matlab in the interactive PBS session:
matlab -nodisplay
Introduction to Scheduling
The Scheduler (1/3)
Flux scheduling policies:
The job's queue determines the set of nodes you run on.
The job's account and qos determine the allocation to be charged.
If you specify an inactive allocation, your job will never run.
The job's resource requirements help determine when the job becomes eligible to run.
If you ask for unavailable resources, your job will wait until they become free.
There is no pre-emption.
The Scheduler (2/3)
Flux scheduling policies:
If there is competition for resources among eligible jobs in the allocation or in the cluster, two things help determine when you run:
How long you have waited for the resource
How much of the resource you have used so far (this is called "fairshare")
The scheduler will reserve nodes for a job with sufficient priority.
This is intended to prevent starving jobs with large resource requirements.
The Scheduler (3/3)
Flux scheduling policies:
If there is room for shorter jobs in the gaps of the schedule, the scheduler will fit smaller jobs into those gaps.
This is called "backfill".
[Diagram: jobs laid out on a cores-versus-time grid, with small jobs backfilled into the gaps.]
Gaining insight
There are several commands you can run to get some insight into the scheduler's actions:
freenodes : shows the number of free nodes and cores currently available
mdiag -a youralloc_name : shows resources defined for your allocation and who can run against it
showq -w acct=yourallocname : shows jobs using your allocation (running/idle/blocked)
checkjob jobid : can show why your job might not be starting
showstart -e all jobid : gives you a coarse estimate of job start time; use the smallest value returned
Some Flux Resources
U-M Advanced Research Computing Flux pages: http://arc.research.umich.edu/resources-services/flux/
CAEN HPC Flux pages: http://cac.engin.umich.edu/
CAEN HPC YouTube channel: http://www.youtube.com/user/UMCoECAC
For assistance: [email protected]
Read by a team of people including unit support staff
Cannot help with programming questions, but can help with operational Flux and basic usage questions
Summary
The Flux cluster is just a collection of similar Linux machines connected together to run your code, much faster than your desktop can.
Command-line scripts are queued by a batch system and executed when resources become available.
Some important commands are:
qsub
qstat -u username
qdel jobid
checkjob
Develop and test, then submit your jobs in bulk and let the scheduler optimize their execution.
Any Questions?
Charles J. Antonelli
LSAIT Advocacy and Research Support
[email protected]
http://www.umich.edu/~cja
734 763 0607
References
1. Gene M. Amdahl, "Validity of the Single Processor Approach to Achieving Large Scale Computing Capabilities," reprinted from the AFIPS Conference Proceedings, Vol. 30 (Atlantic City, N.J., Apr. 18-20), AFIPS Press, Reston, Va., 1967, pp. 483-485, in IEEE Solid-State Circuits Society Newsletter, ISSN 1098-4232, vol. 12, issue 3, pp. 19-20, 2007. DOI 10.1109/N-SSC.2007.4785615 (accessed June 2014).
2. J. L. Gustafson, "Reevaluating Amdahl's Law," Communications of the ACM, vol. 31, issue 5, pp. 532-533, May 1988. http://www.johngustafson.net/pubs/pub13/amdahl.pdf (accessed June 2014).
3. Mark D. Hill and Michael R. Marty, "Amdahl's Law in the Multicore Era," IEEE Computer, vol. 41, no. 7, pp. 33-38, July 2008. http://research.cs.wisc.edu/multifacet/papers/ieeecomputer08_amdahl_multicore.pdf (accessed June 2014).
4. Flux Hardware, http://arc.research.umich.edu/flux/hardware-services/ (accessed June 2014).
5. InfiniBand, http://en.wikipedia.org/wiki/InfiniBand (accessed June 2014).
6. Lustre file system, http://wiki.lustre.org/index.php/Main_Page (accessed June 2014).
7. Supported Flux software, http://arc.research.umich.edu/flux-and-other-hpc-resources/flux/software-library/ (accessed June 2014).
8. Intel C and C++ Compiler 14 User and Reference Guide, https://software.intel.com/en-us/compiler_14.0_ug_c (accessed June 2014).
9. Intel Fortran Compiler 14 User and Reference Guide, https://software.intel.com/en-us/compiler_14.0_ug_f (accessed June 2014).
10. Torque Administrator's Guide, http://docs.adaptivecomputing.com/torque/4-2-8/torqueAdminGuide-4.2.8.pdf (accessed June 2014).
11. Jurg van Vliet & Flavia Paganelli, Programming Amazon EC2, O'Reilly Media, 2011. ISBN 978-1-449-39368-7.