
This page collects tips and hints for pilot users until the actual manuals are ready. The manuals are being written in parallel.

Sisu (Cray XC30)

The actual manual is still being written and is unfortunately quite incomplete. Also, the official Cray XC30 documentation has not yet been made public by Cray. The most up-to-date information is available through the built-in help, with the syntax "command -h" (e.g. aprun -h) or "man command". Useful pages: man mpi_intro, man salloc, man sbatch, man ftn, ...

Batch jobs

To submit a parallel (MPI-only) job, use a batch script along these lines:

#!/bin/bash
#SBATCH -n 3072
#SBATCH -t 00:45:00
#SBATCH -o ocp2k.%j
#SBATCH -e ecp2k.%j
#SBATCH --ntasks-per-node=16

aprun -B /appl/chem/cp2k/2013-01-31/cp2k.popt H2O-1024.inp > H2O-1024_3072.out

The job is submitted with

sbatch job_script

and monitored with

squeue -u $USER

The current batch queue configuration can be found with

sinfo



Installing applications

Home directories are on the same Lustre file system as the working area ($WRKDIR) and are available on the compute nodes; this differs from Vuori and Louhi. On both Sisu and Taito the environment variable $USERAPPL points to a subdirectory of your home directory where you can install your own applications. Applications installed by CSC are found under $APPLDIR, which points to /appl; note that this is a different directory on Sisu and on Taito. Applications are still being installed and will become available under /appl. If you compile your own code, you can share your experience on the email list, or add your comments, performance data (and makefile or equivalent) here.
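As a minimal sketch, installing a self-built program under $USERAPPL could look like the following. The program name "myapp" is hypothetical, a tiny C source is generated just for illustration, and the script falls back to a temporary directory if $USERAPPL is not set:

```shell
# Hedged sketch: install a self-built program under $USERAPPL.
USERAPPL="${USERAPPL:-$(mktemp -d)}"     # fall back to a temp dir for illustration
mkdir -p "$USERAPPL/myapp/bin"

# Generate a tiny stand-in program (normally this would be your own build).
cat > "$USERAPPL/myapp.c" <<'EOF'
#include <stdio.h>
int main(void) { puts("myapp installed"); return 0; }
EOF
cc "$USERAPPL/myapp.c" -o "$USERAPPL/myapp/bin/myapp"

# Make the installed binary visible on the command line.
export PATH="$USERAPPL/myapp/bin:$PATH"
myapp    # prints "myapp installed"
```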

Use $TMPDIR (/tmp/username) for compiling code. Compilation generates many small-file operations, which is a difficult load for Lustre.
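A minimal sketch of this workflow, assuming $TMPDIR points at a node-local disk as described above (the source file here is generated only for illustration):

```shell
# Hedged sketch: compile in node-local $TMPDIR instead of on Lustre,
# then copy the finished binary back.
BUILD=$(mktemp -d "${TMPDIR:-/tmp}/build.XXXXXX")
cd "$BUILD"

cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { puts("built in TMPDIR"); return 0; }
EOF

cc -O2 hello.c -o hello   # compiler scratch files stay on the local disk
./hello                   # prints "built in TMPDIR"

# cp hello $WRKDIR/       # copy only the finished binary back to Lustre
cd /
rm -rf "$BUILD"           # clean up the local build directory
```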

Taito (HP supercluster)

Program development environment

All installed MPI libraries (Intel MPI, MVAPICH2, OpenMPI) have both GNU (gcc) and Intel compiler wrappers.

  • Fortran compiler: f95
  • C compiler: cc
  • C++ compiler: CC
  • Fortran mpi wrapper: mpif90
  • C mpi wrapper: mpicc
  • C++ mpi wrapper: mpiCC
  • Default programming environment: 1) intel/12.1.5, 2) mkl/10.3.11, 3) intelmpi/4.0.3. In this default case the compiler wrappers above refer to the Intel compilers.
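As a hedged example of using these wrappers, a serial C program can be compiled and run as below (the source file is generated only for illustration; with the default modules, cc refers to the Intel C compiler):

```shell
# Compile and run a serial C program with the cc wrapper.
cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { puts("hello from Taito"); return 0; }
EOF
cc -O2 hello.c -o hello
./hello    # prints "hello from Taito"

# MPI codes are compiled analogously with the MPI wrappers listed above, e.g.:
#   mpicc  -O2 hello_mpi.c   -o hello_mpi
#   mpif90 -O2 hello_mpi.f90 -o hello_mpi
```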

Compiler tips


  • aggressive compilation set (-xhost = use the highest instruction set available on the processor of the compilation host)
    • -xhost -O3 -opt-prefetch -unroll-aggressive -no-prec-div -fp-model fast=2
  • basic compilation set
    • -xhost -O2
  • disable unsafe floating point optimizations
    • -fp-model precise -fp-model source
  • Put automatic arrays on the heap instead of the stack (useful if automatic arrays may need a lot of memory; Intel Fortran allocates automatic arrays on the stack by default, while GNU Fortran allocates them on the heap).
    • -heap-arrays
  • google: quick optimization reference intel compiler
  • Linking to Intel MKL libraries, useful helper application:
  • Interprocedural optimization
    • the linked document has sections such as Overview, Using IPO, IPO for Large Programs, and IPO-related performance issues
  • Intel documents (not the newest, but they cover e.g. profile-guided optimization for the Intel compiler). No MPI-related material; OpenMP yes.
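As a sketch, the flag sets above could be collected into a Makefile fragment. The variable names and the choice of FC = ifort are assumptions; adapt them to your own build system:

```makefile
# Assumed: Intel Fortran compiler (ifort) on Taito
FC           = ifort
# aggressive set (fast, but may change floating-point results)
FFLAGS_FAST  = -xhost -O3 -opt-prefetch -unroll-aggressive -no-prec-div -fp-model fast=2
# basic set
FFLAGS_BASIC = -xhost -O2
# disable unsafe floating-point optimizations
FFLAGS_SAFE  = -fp-model precise -fp-model source
# automatic arrays on the heap instead of the stack
FFLAGS_HEAP  = -heap-arrays
```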

Batch job example

SLURM batch job example:

#!/bin/bash
### job script example

## name of your job
#SBATCH -J my_jobname
## system error message output file
#SBATCH -e my_output_err_%j
## system message output file
#SBATCH -o my_output_%j
## a per-process (soft) memory limit; given in MB (e.g. 1000 = 1 GB)
#SBATCH --mem-per-cpu=1000
## maximum duration of the job, wallclock time hh:mm:ss
#SBATCH -t 00:01:00
## the number of processes (number of cores)
#SBATCH -n 24
## parallel/serial queue
#SBATCH -p pappa
## gcc/4.7.1 and mvapich2/1.8.1 as the programming environment
module swap intel gcc/4.7.1
module swap intelmpi mvapich2/1.8.1
## run my MPI executable
srun ./my_mpi_program

Submit a batch job

sbatch my_job_script

See more examples in the Vuori User's Guide.
