Once Spring has been installed, it is ready to run in a multi-CPU environment. Spring uses mpi4py to parallelize its computing tasks and can be run on high-performance computing clusters.
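As an illustration of how MPI-style parallelization divides work, here is a minimal sketch (not Spring's actual code) of partitioning a list of tasks as evenly as possible across a given number of CPUs, the way a master process would assign work to MPI ranks:

```python
def partition_tasks(tasks, cpu_count):
    """Split tasks into cpu_count nearly equal chunks, the way a
    master process would assign work to MPI ranks."""
    quotient, remainder = divmod(len(tasks), cpu_count)
    chunks = []
    start = 0
    for rank in range(cpu_count):
        # the first `remainder` ranks each receive one extra task
        size = quotient + (1 if rank < remainder else 0)
        chunks.append(tasks[start:start + size])
        start += size
    return chunks

# 10 tasks on 4 CPUs -> chunk sizes 3, 3, 2, 2
print([len(c) for c in partition_tasks(list(range(10)), 4)])
```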

How to launch multi-CPU jobs

Single multi-CPU machine

The programs can be launched from the GUI or from the command line, and the number of available CPUs can be specified with the parameter “Number of CPUs”.

Computer cluster with multi-CPU nodes

A Spring job can be spread across multiple machines that belong to a computer cluster. If you have set up passwordless SSH (see the OpenMPI FAQ entry), the program will internally distribute the workload to the assigned nodes. Spring has been tested with the SLURM, PBS, LSF and Sun Grid Engine submission systems. In the Spring GUI, save a parameter file as a Python file for cluster submission.
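Passwordless SSH can typically be set up along the following lines; the hostname and user below are placeholders, and the OpenMPI FAQ entry covers the details for your environment:

```shell
# generate a key pair if you do not already have one (accept the defaults,
# leaving the passphrase empty for passwordless login)
ssh-keygen -t ed25519

# copy the public key to each compute node (user@node01 is a placeholder)
ssh-copy-id user@node01

# verify: this should print the remote hostname without prompting for a password
ssh user@node01 hostname
```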

Resource requests used in the example submission commands below:

    Memory per CPU           2600 MB
    Temporary disk space     10 GB


  • SLURM submission system:

% sbatch -N 30 --mem-per-cpu=2600 --tmp=10000
  • PBS submission system:

% qsub -r n -q queue_name -l select=30:ncpus=1:mem=2600mb
  • LSF submission system:

% bsub -q queue_name -M 100000 -R "select[tmp>10000]" -R "rusage[mem=2600]" -o ref009.log -n 30
  • Sun Grid Engine submission system:

% qsub -V -cwd -pe openmpi 30
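For SLURM, the flags above can also be collected in a batch script. The sketch below mirrors the example command; `my_spring_job.py` is a placeholder for the parameter file saved from the Spring GUI, and the actual launch line depends on your installation:

```shell
#!/bin/bash
#SBATCH -N 30                  # 30 nodes, matching the example above
#SBATCH --mem-per-cpu=2600     # 2600 MB of memory per CPU
#SBATCH --tmp=10000            # 10 GB of node-local scratch space
#SBATCH -o spring_job.log      # job output log

# my_spring_job.py stands for the parameter file saved from the Spring GUI;
# replace the line below with the launch command for your installation
python my_spring_job.py
```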

A note on the “Temporary directory”

Spring deposits gigabytes of temporary files in the specified temporary space. This should ideally be a directory local to the host node, to benefit from fast disk access and to avoid excessive network file traffic. The directory should have a minimum of 5 GB available per node. If Spring crashes, it will attempt to clear these temporary directories; depending on the nature of the aborting process, this is not always possible. Therefore, monitor the disk usage of your temporary directory space and run cleanup scripts regularly.