MPI Use Guide
- Overview of installed MPI libraries
- Using the MPI libraries to compile your code
- Running your job
This page describes how to use the MPI libraries installed on the HPC cluster.
Overview of installed MPI libraries
There are multiple MPI libraries installed on the cluster, each compiled in
two ways (see below). The SLURM scheduler also provides integrated support for
most MPI versions.
There are five different MPI libraries installed:
- OpenMPI version 1.6.4
- MVAPICH2 version 1.9
- MPICH version 3.0.4
- MPICH version 3.1
- MPICH2 version 1.5
IMPORTANT: If your software can use OpenMPI or MVAPICH2, these are the recommended MPI libraries
for CHTC's HPC Cluster, and they will perform fastest on the cluster's Infiniband networking.
MPICH and MPICH2 do not use Infiniband by default and will perform slower than OpenMPI or MVAPICH2;
however, we have configured them to perform as well as they would on an Ethernet-only cluster,
so they will still work if your software only runs with MPICH or MPICH2.
Each of these MPI libraries is available in two compiled modes:
- compiled with generic GCC compilers
- compiled with Intel Composer XE and Intel MPI Library Development Kit 4.1 compilers
MPI libraries compiled with GCC compilers are located on all SLURM nodes in the following directory:
MPI libraries compiled with Intel compilers are located on all SLURM nodes in the following directory:
Using the MPI libraries to compile your code
In order to successfully compile and run your code using these MPI libraries, you need to set a few environment variables. To set these variables, use the Environment Modules package (http://modules.sourceforge.net). This package is easy to use and automatically sets the environment variables needed for the flavor and version of MPI that you choose.
First, run the following command to see the available modules. The output will look similar to this:
[alice@service]$ module avail
---------------------------------------- /etc/modulefiles ----------------------------------------
mpi/gcc/mpich-3.0.4 mpi/gcc/openmpi-1.6.4 mpi/intel/mvapich2-1.9
mpi/gcc/mvapich2-1.9 mpi/intel/mpich-3.0.4 mpi/intel/openmpi-1.6.4
As you can see, the MPI libraries compiled with GCC compilers are listed under mpi/gcc/ and the MPI libraries compiled with Intel compilers are listed under mpi/intel/.
To load a module, let's say OpenMPI compiled with GCC compilers, simply run this command:
[alice@service]$ module load mpi/gcc/openmpi-1.6.4
Now all necessary environment variables are set correctly and you can go ahead and compile your code!
If you loaded the wrong module, let's say MPICH compiled with Intel compilers, you can unload it by running:
[alice@service]$ module unload mpi/intel/mpich-3.0.4
NOTE: Before using any of the MPI libraries under mpi/intel/ you first need to load the compile/intel module.
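As a concrete sketch of the workflow above (the source file name `hello.c` and the compiler flags are placeholders, not part of this guide), compiling an MPI program after loading a module might look like:

```shell
# Load the GCC-compiled OpenMPI module; this puts the MPI compiler
# wrappers (mpicc, mpicxx, mpif90) on your PATH and sets the
# environment variables they need.
module load mpi/gcc/openmpi-1.6.4

# Compile a C source file with the MPI wrapper around gcc
# ("hello.c" is a hypothetical example file)
mpicc -O2 -o hello hello.c
```

The wrapper adds the correct MPI include and library flags for you; `mpicxx` and `mpif90` do the same for C++ and Fortran code.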
Running your job
NOTE: To run your job, you need the same module loaded as when you compiled your code. When you log out of your terminal, all loaded modules are automatically unloaded.
To see if you have a module loaded run this command:
[alice@service]$ module list
Once you have confirmed that the same module is loaded as when you compiled your code, you can go ahead and run your job.
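Putting the pieces together, a submit script might look like the following sketch (the job name, node counts, and binary name are assumptions for illustration, not values from this guide):

```shell
#!/bin/bash
#SBATCH --job-name=mpi-test       # job name (assumed)
#SBATCH --nodes=2                 # number of nodes (assumed)
#SBATCH --ntasks-per-node=16      # MPI ranks per node (assumed)

# Load the same module used at compile time; modules from your
# login session are not inherited by the batch job.
module load mpi/gcc/openmpi-1.6.4

# Launch the MPI program ("hello" is a placeholder binary name)
mpirun ./hello
```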
See http://chtc.cs.wisc.edu/HPCuseguide.shtml for information on how to run your job.