Running MPI Jobs in CHTC

This guide describes when and how to run multi-core jobs, programmed with OpenMP or MPI, on CHTC's high throughput computing (HTC) system.

To best understand the information below, users should already have an understanding of how to submit and manage HTCondor jobs on CHTC's HTC system.

Overview

Before you begin, review the discussion of MPI requirements and use cases below to make sure that our multi-core OpenMP/MPI capabilities are the right solution for your computing problem. If you have any questions, contact a CHTC facilitator at chtc@cs.wisc.edu.

Once you know that you need to run multi-core jobs that use OpenMP or MPI on our HTC system, you will need to do the following:

  1. Compile your code using our OpenMP/MPI module system
  2. Create a script that loads the OpenMP or MPI module you used for compiling and then runs your code
  3. Make sure your submit file has certain key requirements

If your MPI program is especially large (more than 20 MB, compiled), or if it can only run from the exact location to which it was installed, you may also need to use CHTC's Gluster file share for your software; see our additional notes on Using Gluster for Software, below, for more details. Otherwise, all software and input files smaller than about 20 MB should be delivered with HTCondor's "transfer_input_files" feature in the submit file.

Requirements and Use Cases

Most jobs on CHTC's HTC system are run on one CPU (sometimes called a "processor", or "core") and can be executed without any special system libraries. However, in some cases, it may be advantageous to run a single program on multiple CPUs (also called multi-core), in order to speed up single computations that cannot be broken up as independent jobs. If you have questions about the advantages and disadvantages of running multi-core jobs versus single-core jobs, contact one of CHTC's facilitators at chtc@cs.wisc.edu.

Running on multiple CPUs is often enabled by two special types of programming called OpenMP and MPI. For your OpenMP and MPI jobs to compile and run, CHTC has a certain set of OpenMP/MPI packages installed to a shared location that can be accessed from within jobs (see below). To see which OpenMP and MPI packages are supported, you can type the following on an HTC submit server:

[alice@submit]$ module avail

(Make sure to compile according to the instructions below, and not on a submit server.)

Compiling Code

You can compile your program yourself within an interactive job on one of our compiling servers. (Do not compile code on the submit server, as doing so may cause performance issues.) The interactive job is essentially a regular HTCondor job, but without an executable; you are the one running the commands instead (in this case, to compile the program). If you are compiling your code within Gluster, you'll still be able to do so within an interactive job.

Instructions for submitting an interactive build/compile job are here: http://chtc.cs.wisc.edu/inter-submit.shtml
The only line in the submit file that you need to change is transfer_input_files to reflect all the source files on which your program depends.
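For reference, a build submit file generally looks something like the sketch below. The source file names are placeholders for your own files, and the build-job attribute and requirements lines should be copied from the interactive-build guide linked above; the ones shown here are assumptions about what that guide specifies.

# build.sub
# Sketch of a submit file for an interactive build job (see the guide linked
# above for the authoritative version; the file names below are placeholders)
universe = vanilla
log = build_$(Cluster).log

# The line you will need to change: your source code and build dependencies
transfer_input_files = myprogram.c, helper_functions.c, Makefile

# Build-job attribute and requirement: copy the exact lines from the
# interactive-build guide; these are an assumed example
+IsBuildJob = true
requirements = ( IsBuildSlot == true )

request_cpus = 1
request_memory = 2GB
request_disk = 2GB

queue

Such a file is typically submitted with condor_submit's interactive flag (for example, condor_submit -i build.sub), which starts a session on a build server once the job matches.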

Once your interactive job begins on one of our compiling servers, you can find which OpenMP/MPI modules are available to you by typing:

[alice@build]$ module avail

Choose the module you want to use and load it with the following command:

[alice@build]$ module load mpi_module

where mpi_module is replaced with the name of the OpenMP or MPI module you'd like to use.

After loading the module, compile your program. If your program is organized in directories, make sure to create a tar.gz file of anything you want copied back to the submit server; only *files* (not directories) created during the interactive job are copied back automatically. Once you type exit, the interactive job will end, and those files will be copied back to the submit location for you.
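As a concrete illustration, compiling a small C-based MPI code and packaging a build directory inside the interactive job might look like the following (myprogram.c and myprogram_dir are placeholders, and the mpicc wrapper assumes a C code; use mpicxx or mpif90 for C++ or Fortran):

[alice@build]$ cd myprogram_dir
[alice@build]$ mpicc -O2 -o myprogram myprogram.c
[alice@build]$ cd ..
[alice@build]$ tar -czf myprogram_dir.tar.gz myprogram_dir/
[alice@build]$ exit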

Script For Running OpenMP/MPI Jobs

To run your newly compiled program within a job, you need to write a script that loads an OpenMP or MPI module and then runs the program, like so:

#!/bin/bash

# Command to enable modules, and then load an appropriate OpenMP/MPI module
. /etc/profile.d/modules.sh
module load mpi_module

# Command to run your OpenMP/MPI program
# (This example uses mpirun; other programs
# may use mpiexec or other commands)
mpirun -np 8 myprogram

Replace mpi_module with the name of the module you used to compile your code, myprogram with the name of your program, and the 8 after -np with the number of CPUs you want the program to use (this should match the request_cpus value in your submit file). There may be additional options or flags necessary to run your particular program; make sure to check the program's documentation about running multi-core processes.
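If your program is parallelized with OpenMP rather than MPI, the same kind of script applies, but the CPU count is usually passed through the OMP_NUM_THREADS environment variable instead of an mpirun flag. A minimal sketch (myprogram is again a placeholder, and some programs also accept their own thread-count option):

#!/bin/bash

# Command to enable modules, and then load the module used for compiling
. /etc/profile.d/modules.sh
module load mpi_module

# Tell OpenMP how many threads to use (should match request_cpus), then run
export OMP_NUM_THREADS=8
./myprogram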

Submit File Requirements

There are several important requirements to consider when writing a submit file for multicore jobs. They are shown in the sample submit file below and include:

  • Require Gluster for the OpenMP or MPI modules. Make sure that you include a requirements statement for our Gluster file share; this is where the OpenMP/MPI modules live.
  • Request *accurate* CPUs and memory. Run at least one test job and look at the log file produced by HTCondor to determine how much memory and disk space your multi-core jobs actually use. Requesting too much memory causes two issues: your jobs will match more slowly, and they will waste resources that could be used by others. Also, the fewer CPUs your jobs request, the sooner you'll have more jobs running. Jobs requesting 16 CPUs or fewer will do best, as nearly all of CHTC's servers have at least that many, but you can request and use up to 36 CPUs per job.
  • The script you wrote above (shown as run_mpi.sh below) should be your submit file "executable", and your compiled program and any input files should be listed in transfer_input_files.
  • Use the getenv = true statement to set up the job's running environment.

A sample submit file for multi-core jobs is given below:

# multicore.sub
# A sample submit file for running a single multicore (8 cores) job

universe = vanilla
log = mc_$(Cluster).log
output = mc_$(Cluster).out
error = mc_$(Cluster).err

executable = run_mpi.sh
# arguments = (if you want to pass any to the shell script)
should_transfer_files = YES
when_to_transfer_output = ON_EXIT
transfer_input_files = input_files, myprogram

requirements = ( Target.HasGluster == true )
getenv = true

request_cpus = 8
request_memory = 8GB
request_disk = 2GB

queue

After the submit file is complete, you can submit your jobs using condor_submit.
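For example, using the sample submit file above:

[alice@submit]$ condor_submit multicore.sub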

Using Gluster for Software

If your compiled program is large (over 20 MB) or must run from the same location where it was installed, you may need to use our web proxy (for large software) or our Gluster file share (for install-location requirements) instead of transferring the software with every job, though these should be used only as a last resort. See our File Availability guide for more details. If your software is larger than a few GB, or will not run correctly when moved to a different directory from where it was first installed, use Gluster with the following changes to the procedure described above:

  • Request a Gluster directory from CHTC, by emailing chtc@cs.wisc.edu with a description of why you need it.
  • Copy all of your source files into your Gluster directory before submitting the interactive compile job. When submitting the interactive job, don't include your source files in the transfer_input_files line. Instead, once the interactive job starts, move into your Gluster directory:
    $ cd /mnt/gluster/NetId/path/to/source_code
    From there, follow the commands about loading modules and compiling as above. Once you're finished, type exit to end the interactive job.
  • In your job's executable script, instead of listing the name of the compiled program after your mpirun or mpiexec command, list the full path to that program, like so:
    With file transfer:

    #!/bin/bash
    # Activate modules and load the appropriate module
    . /etc/profile.d/modules.sh
    module load mpi_module
    mpirun -np 8 myprogram

    From Gluster:

    #!/bin/bash
    # Activate modules and load the appropriate module
    . /etc/profile.d/modules.sh
    module load mpi_module
    mpirun -np 8 /mnt/gluster/NetId/path/to/myprogram
  • In your submit file, do not include the name of your compiled executable in "transfer_input_files". It will be referenced directly from your script, as seen in the "From Gluster" examples above and in the consolidated sketch after this list.

    With file transfer:
    transfer_input_files = input_files,myprogram

    From Gluster:
    transfer_input_files = input_files

  • Once you've compiled your code within Gluster and written your script and submit file, you can submit your job from your /home/ directory as normal, using condor_submit.
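Putting these changes together, the Gluster version of the earlier sample submit file is identical except that the compiled program no longer appears in transfer_input_files (a sketch using the same placeholder names as above):

# multicore_gluster.sub
# Same as multicore.sub above, except the compiled program is read from
# Gluster by run_mpi.sh instead of being transferred with the job

universe = vanilla
log = mc_$(Cluster).log
output = mc_$(Cluster).out
error = mc_$(Cluster).err

executable = run_mpi.sh
should_transfer_files = YES
when_to_transfer_output = ON_EXIT
transfer_input_files = input_files

requirements = ( Target.HasGluster == true )
getenv = true

request_cpus = 8
request_memory = 8GB
request_disk = 2GB

queue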

Other Installations

Your software may require newer versions of OpenMP/MPI libraries than those available via our modules. If this is the case, send an email to chtc@cs.wisc.edu to find out whether we can install that library into the module system.