MPI Use Guide

This page describes how to use the MPI libraries installed on the HPC cluster.

Overview of installed MPI libraries

There are multiple MPI libraries installed on the cluster, each compiled in two ways (see below). The SLURM scheduler also has integrated libraries for most MPI versions, which are described in SLURM's own documentation.

There are five different MPI libraries installed:

  • OpenMPI version 1.6.4
  • MVAPICH2 version 1.9
  • MPICH version 3.0.4
  • MPICH version 3.1
  • MPICH2 version 1.5

IMPORTANT: If your software can use OpenMPI or MVAPICH2, these are the recommended MPI libraries for CHTC's HPC Cluster; they will perform the fastest because they use the cluster's InfiniBand networking. MPICH and MPICH2 do not use InfiniBand by default and will therefore perform slower than OpenMPI or MVAPICH2. However, we have configured them to work as they would on an Ethernet-only cluster, so they will still work if your software only runs with MPICH or MPICH2.

Each of these MPI libraries is available in two compiled modes:

  • compiled with generic GCC compilers
  • compiled with Intel Composer XE and Intel MPI Library Development Kit 4.1 compilers

MPI libraries compiled with GCC compilers are located on all SLURM nodes in the following directory:

/usr/mpi/gcc/

MPI libraries compiled with Intel compilers are located on all SLURM nodes in the following directory:

/usr/mpi/intel/
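
If you want to double-check which builds are present, you can simply list these directories (this assumes the node you are logged into uses the same layout as the SLURM execute nodes):

[alice@service]$ ls /usr/mpi/gcc/ /usr/mpi/intel/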

Using the MPI libraries to compile your code

In order to successfully compile and run your code with these MPI libraries, you need to set a few environment variables. To set them, use the Environment Modules package (http://modules.sourceforge.net). The package is easy to use and automatically sets the environment variables needed for the flavor and version of MPI that you choose.

First, run the following command to see the available modules:

[alice@service]$ module avail

When you run the above command, you will see output similar to this:

[alice@service]$ module avail
---------------------------------------- /etc/modulefiles ----------------------------------------
mpi/gcc/mpich-3.0.4                 mpi/gcc/openmpi-1.6.4               mpi/intel/mvapich2-1.9
mpi/gcc/mvapich2-1.9                mpi/intel/mpich-3.0.4               mpi/intel/openmpi-1.6.4

As you can see, the MPI libraries compiled with GCC compilers are listed under mpi/gcc/ and the MPI libraries compiled with Intel compilers are listed under mpi/intel/.

To load a module, let's say OpenMPI compiled with GCC compilers, simply run this command:

[alice@service]$ module load mpi/gcc/openmpi-1.6.4

Now all necessary environment variables are set correctly, and you can go ahead and compile your code!
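
For example, if your program were a single C source file (the file name hello_mpi.c below is only a hypothetical placeholder), you could compile it with the mpicc wrapper that the loaded module places on your PATH (use mpif90 or mpicxx instead for Fortran or C++ code):

[alice@service]$ mpicc -o hello_mpi hello_mpi.c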

If you loaded the wrong module, let's say MPICH compiled with Intel compilers, you can unload it by running:

[alice@service]$ module unload mpi/intel/mpich-3.0.4

NOTE: Before using any of the MPI libraries under mpi/intel/ you first need to load the compile/intel module.
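
For example, to use OpenMPI compiled with the Intel compilers, load both modules in order:

[alice@service]$ module load compile/intel
[alice@service]$ module load mpi/intel/openmpi-1.6.4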

Running your job

NOTE: To run your job, you need the same module loaded as when you compiled your code. When you log out of your terminal, all loaded modules are automatically unloaded.

To see whether you have a module loaded, run this command:

[alice@service]$ module list
  • If you receive the following output, your module has been unloaded, and you will need to load it again. (See previous section for help)
    No Modulefiles Currently Loaded.
  • If you receive the following output (assuming you loaded OpenMPI compiled with GCC compilers), your module is still loaded and your job can be run.
    Currently Loaded Modulefiles:
      1) mpi/gcc/openmpi-1.6.4
    

After you make sure that you have the same module loaded as when you compiled your code, you can go ahead and run your job.
See http://chtc.cs.wisc.edu/HPCuseguide.shtml for information on how to run your job.
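
As a rough sketch only (partition names, resource requests, and the exact submission procedure are covered in the guide linked above), a SLURM submit script for an MPI job typically reloads the same module and then launches the executable with mpirun. The executable name hello_mpi below is hypothetical:

#!/bin/bash
#SBATCH --job-name=mpi_test    # name shown in the queue
#SBATCH --ntasks=32            # total number of MPI processes
#SBATCH --time=01:00:00        # wall-clock limit (HH:MM:SS)

# Load the same module that was used to compile the code
module load mpi/gcc/openmpi-1.6.4

# Launch the MPI program with one process per allocated task
mpirun -n $SLURM_NTASKS ./hello_mpi

Submitting this script with sbatch asks SLURM to allocate the requested tasks before running it.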