Using the Argo Cluster
Paul Sexton
CS 566
February 6, 2006
Logging in
The address is argo-new.cc.uic.edu.
Use ssh to log in, using your ACCC username.
Ex: ssh psexto2@argo-new.cc.uic.edu
Pay attention to the login message; this is how the admins will communicate information to you.
Environment Variables
Add the following to your .bash_profile:
# For the PGI compilers
export PGI_HOME=/usr/common/pgi-6.0-0/linux86/6.0
export LIB=$PGI_HOME/lib:$PGI_HOME/liblf:/lib:/usr/lib
# For the MPI libraries
export MPICH2_HOME=/usr/common/mpich2-1.0.1
export LD_LIBRARY_PATH=$MPICH2_HOME/lib:$LD_LIBRARY_PATH
export PATH=$MPICH2_HOME/bin:$PATH
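
To apply these settings in your current shell without logging out and back in, you can re-read the file (standard bash behavior, not specific to argo):

source ~/.bash_profile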
Compiling with GCC
MPICH provides wrapper scripts for C, C++, and Fortran:
Use mpicc in place of gcc.
Use mpicxx in place of g++.
Use mpif77 in place of g77.
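
For example, building a program from a single source file might look like this (the file and program names here are just illustrations):

mpicc -O2 myprogram.c -o myprogram
mpicxx -O2 myprogram.cpp -o myprogram
mpif77 -O2 myprogram.f -o myprogram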
Compiling with PGI
Use pgcc in place of cc/gcc.
Use pgCC in place of CC/g++.
Use pgf77 in place of f77/g77.
No wrapper scripts are provided, so you need to add the include and lib paths manually:
Compiling with PGI (cont.)
For C or C++, add the following to your compiler flags:
-I$MPICH2_HOME/include -L$MPICH2_HOME/lib -lmpich
For Fortran, also add: -lfmpich
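
Putting it together, a full PGI compile line might look like the following sketch, assuming MPICH2_HOME is set as shown earlier (the source and program names are illustrative):

pgcc -I$MPICH2_HOME/include myprogram.c -o myprogram -L$MPICH2_HOME/lib -lmpich
pgf77 -I$MPICH2_HOME/include myprogram.f -o myprogram -L$MPICH2_HOME/lib -lfmpich -lmpich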
Running an MPI program
Normally, to run an MPI program, you’d use the mpirun/mpiexec command.
Ex: mpirun -np 4 myprogram
Don’t do this on argo.
Running an MPI program on Argo
Edit and compile your code on the master node, and run it on the slave nodes, using PBS / Torque.
The important commands (a typical workflow is sketched below):
qsub
qdel
qstat
qnodes
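
A rough sketch of the typical cycle, using the example script name and job id from the slides that follow:

qsub my_script     # submit the job; note the job id it returns
qstat 3208         # check on the job while it runs
qdel 3208          # remove the job early if something went wrong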
qsub - submit a job
qsub [options] script_file
script_file is a shell script that gives the path to your executable and the command to run it.
Do not call qsub on your program directly!
Returns a status message giving you the job id.
Ex: “3208.argo-new.cc.uic.edu”
A qsub script
For example, imagine your program is called “foo.exe”.
The contents of the script file would be:
mpiexec -n 4 /path/to/foo/foo.exe
where “/path/to/foo/” is the path to the executable’s directory.
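
A complete script file (“my_script”, say) might look like this minimal sketch. The #PBS line is an assumption on my part; it embeds the same -l nodes option that “More on qsub” below passes on the command line, while the slides themselves only show the mpiexec line:

#!/bin/bash
#PBS -l nodes=4
mpiexec -n 4 /path/to/foo/foo.exe

You would then submit it with “qsub my_script”.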
qdel - remove a job
qdel job_id
job_id is the number qsub returned.
Ex: qdel 3208
qstat - queue status
qstat job_id
To check on the status of a job when you know the number.
Ex: qstat 3208

qstat -u userid
To check on the status of all of your jobs.
Ex: qstat -u psexto2

qstat
To obtain the status of all jobs running on argo.
qnodes - node status
Shows the availability of all 64 nodes and how many jobs each is currently running.
Takes no arguments. No man page entry, for some reason.
Viewing the output
Standard output and standard error are both redirected to files:
[script name].{e,o}[job id]
Ex:
stderr in my_script.e3208
stdout in my_script.o3208
The .e file should usually be empty.
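
Once the job finishes, you can inspect these files with ordinary shell tools, e.g.:

cat my_script.o3208    # the program’s output
cat my_script.e3208    # should usually be empty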
More on qsub
There are numerous options to qsub, but the most common / useful one is “-l nodes”:
To set the number of nodes:
qsub -l nodes=4 my_script
To pick the nodes manually:
qsub -l nodes=argo9-1+argo9-2+argo9-3 my_script
To run only on a subset of nodes:
qsub -l nodes=6:cpu.xeon my_script
qsub -l nodes=8:cpu.amd my_script
qsub -l nodes=8:cpu.smp my_script
And that’s about all there is…
Final Notes:
The Intel compilers are also available.
The PGI tools include a debugger, PGDBG, and a profiler, PGPROF. These are highly recommended for debugging and profiling parallel code.
Limit yourself to 3 jobs at a time.