Compiling code and using toolchains

A significant portion of the COSMOS software stack is built using the EasyBuild software framework. This framework provides so-called toolchains, bundles of compilers and supporting libraries that are used to build software. LUNARC recommends using toolchains when building software, including when you compile your own software outside the EasyBuild framework.

Currently provided toolchains

Toolchains for CPU nodes

  • GCC: GCC
  • foss: GCC, OpenMPI, OpenBLAS, FFTW, BLACS, ScaLAPACK
  • gompi: GCC, OpenMPI
  • gomkl: GCC, OpenMPI, MKL
  • gfbf: GCC, FlexiBLAS, FFTW (no MPI)
  • intel: icc, ifort, Intel MPI, MKL
  • iimpi: icc, ifort, Intel MPI

CUDA based toolchains for GPU nodes

  • gcccuda: GCC, CUDA
  • gompic: GCC, CUDA, OpenMPI
  • goolfc: GCC, CUDA, OpenMPI, OpenBLAS, FFTW, BLACS, ScaLAPACK
  • iccifortcuda: icc, ifort, CUDA
  • iimpic: icc, ifort, CUDA, Intel MPI
  • intelcuda: icc, ifort, CUDA, Intel MPI, MKL

If you require additional toolchains, contact LUNARC support to discuss your requirements.

Selecting a toolchain

The choice of toolchains above can be overwhelming, in particular for new users. We recommend first choosing a toolchain and then selecting a version. Good choices for general use are the toolchains:

  • foss, if you want to use the GCC compiler suite
  • gomkl, if you want to use the GCC compiler suite with Intel's MKL performance library
  • intel, if you want to use the Intel compiler suite

Example: To check which foss versions are available, run

  module avail foss

and you get output similar to

------------------ /sw/easybuild/modules/all/Core ------------------
   foss/2015a    foss/2015b    foss/2016a (D)

  Where:
   D:  Default Module

Use "module spider" to find all possible modules.
Use "module keyword key1 key2 ..." to search for all possible
modules matching any of the "keys".

This shows you that three versions of the foss toolchain are available, with version 2016a being the default. The version numbering at the time of writing is a time stamp: version 2022a was released at the beginning of 2022, 2022b in the middle of 2022, and 2023a at the beginning of 2023. If you load, e.g., the foss/2023a module

module load foss/2023a

it will load several modules for you, including compilers, libraries and utilities. The command

module list

will now show you which compiler and library versions are in use. Please note that after loading a toolchain:

  • Several modules become available that rely on components inside this specific toolchain
  • The defaults of many modules change to versions that were built with the components of the selected toolchain

Selecting a version of the intel or the gomkl toolchain is very similar to selecting a foss module; just replace foss with intel or gomkl in the above examples.

Compiling serial code using a toolchain

Once a toolchain module is loaded, compiling serial code works as on earlier LUNARC services. Depending on which toolchain you have loaded, use the following commands to compile.

For GCC-based toolchains (e.g. foss, gompi, gomkl):

  • gcc: C compiler
  • g++: C++ compiler
  • gfortran: Fortran compiler

For Intel-based toolchains (e.g. intel, iimpi):

  • icc: C compiler
  • icpc: C++ compiler
  • ifort: Fortran compiler

In all cases, do not forget about compiler options, in particular optimisation flags. The toolchain used for compiling must also be loaded when executing the code.

Compiling MPI code using a toolchain

The commands you use to compile MPI code depend on the MPI library and the compiler you intend to use.

Toolchains using OpenMPI

When using a toolchain utilising OpenMPI (e.g. foss, gompi, gomkl) use:

  • mpicc: MPI compiler for C code
  • mpicxx or mpic++: MPI compiler for C++ code
  • mpifort: MPI compiler for Fortran code

Inside your Slurm job script, executables built with OpenMPI need to be started with the mpirun command. For MPI jobs not using threads, we recommend using task binding. A simple job script for standard MPI jobs (no threads, i.e. no OpenMP) is available on COSMOS:

/sw/pkg/submissionsScripts/script_openmpi.sh

If you use this, you will need to modify it for your own needs. A more detailed guide on the submission system is available.
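The script shipped at the path above is the authoritative template; as a rough sketch of its shape, an OpenMPI job script could look like the following (job name, task count, time limit, toolchain version and executable name are all placeholders you must adapt):

```shell
#!/bin/bash
#SBATCH --job-name=mpi_test        # job name (placeholder)
#SBATCH --ntasks=48                # number of MPI tasks (placeholder)
#SBATCH --time=01:00:00            # wall-time limit (placeholder)

# Load the same toolchain that was used to compile the executable
module load foss/2023a

# OpenMPI executables are started with mpirun; --bind-to core enables
# the task binding recommended for MPI jobs that do not use threads
mpirun --bind-to core ./my_mpi_program
```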

Info

In the latest OpenMPI releases the commands mpif77 and mpif90 have been deprecated. Fortran users should switch to mpifort.

Toolchains using the Intel compiler and Intel MPI library

When using a toolchain utilising the Intel MPI library and the Intel compiler (e.g. intel, iimpi) use:

  • mpiicc: MPI compiler for C code
  • mpiicpc: MPI compiler for C++ code
  • mpiifort: MPI compiler for Fortran code

Inside your Slurm job script, executables built with the Intel MPI library need to be started with the srun command. Task binding is still under investigation and not yet available. A simple job script for standard MPI jobs is available on COSMOS:

/sw/pkg/submissionsScripts/script_intelmpi.sh

If you use this, you will need to modify it for your own needs. A more detailed guide on the submission system is available.
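Again, the script at the path above is the template to start from; a rough sketch of an Intel MPI job script follows, with all Slurm settings, the toolchain version and the executable name as placeholders to adapt:

```shell
#!/bin/bash
#SBATCH --job-name=mpi_test        # job name (placeholder)
#SBATCH --ntasks=48                # number of MPI tasks (placeholder)
#SBATCH --time=01:00:00            # wall-time limit (placeholder)

# Load the same toolchain that was used to compile the executable
module load intel/2023a

# Executables built with Intel MPI are started with srun under Slurm
srun ./my_mpi_program
```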

Toolchains using the GCC compiler and Intel MPI library

When using a toolchain utilising the Intel MPI library and the GCC compiler (e.g. gimkl) use:

  • mpigcc: MPI compiler for C code
  • mpigxx: MPI compiler for C++ code
  • mpif90: MPI compiler for Fortran 95 code

Executables built using this setup are also started with the srun command from inside a job script, as described above.


Author: (LUNARC)

Last Updated: 2024-06-18