LAMMPS

LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) is a molecular dynamics application for large-scale parallel simulations of solid-state materials, soft matter and mesoscopic systems.

LAMMPS is available as a module on Apocrita.

Use Spack for additional variants

Simple variants of LAMMPS have been installed by the Apocrita ITSR Apps Team. Advanced users may want to install additional variants themselves via Spack.
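
A minimal sketch of a self-install, assuming a working Spack setup (any variant flags you need depend on the Spack package recipe):

# Inspect the variants offered by the Spack lammps package, then install a
# build, appending whichever variant flags are required
spack info lammps
spack install lammps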

Versions

Regular and GPU-accelerated versions have been installed on Apocrita.

Usage

To run the required version, load one of the following modules:

  • For LAMMPS (non-GPU), load lammps/<version>
  • For GPU-accelerated LAMMPS versions, load lammps-gpu/<version>
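
For example, to list the installed versions and then load a specific one (<version> is a placeholder):

$ module avail lammps
$ module load lammps/<version>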

To run the default installed version of LAMMPS, simply load the lammps module:

$ module load lammps
$ mpirun -- \
    lmp -help

Usage example: lmp -var t 300 -echo screen -in in.alloy
...

For full usage documentation, pass the -help option.

Example jobs

Make sure you have an accompanying data file

The examples below specify an input file only, but will often need an accompanying data file (e.g. data.file) to be present in the same directory as well. For more examples, see the example benchmarks in the official LAMMPS GitHub repository.
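
As an illustrative check (the file names here are hypothetical), an input script that reads a data file, for example via a read_data command, expects that file to be present in the same directory:

$ ls
data.file  in.file  job_script.sh
$ grep read_data in.file
read_data data.file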

Serial job

Here is an example job running on 4 cores and 16GB of total memory:

#!/bin/bash
#SBATCH -n 4               # (or --ntasks=4) Request 4 cores
#SBATCH --mem-per-cpu=4G   # Request 4GB RAM per core (16G total)
#SBATCH -t 1:0:0           # Request 1 hour runtime

module load lammps

# Slurm knows how many tasks to use for mpirun, detected automatically from
# ${SLURM_NTASKS}. Use -- to ensure arguments are passed to the application
# and not mpirun
mpirun -- \
  lmp \
  -in in.file \
  -log output.log
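
Assuming the script above has been saved as lammps_serial.sh (the file name is illustrative), submit it with sbatch:

$ sbatch lammps_serial.sh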

Parallel job

Here is an example job running on 96 cores across 2 nodes with MPI:

#!/bin/bash
#SBATCH -N 2             # (or --nodes=2) Request 2 nodes
#SBATCH -n 96            # (or --ntasks=96) Request 96 cores
#SBATCH -t 240:0:0       # Request 240 hours runtime
#SBATCH -p parallel      # (or --partition=parallel) Request parallel partition
#SBATCH --exclusive      # Request exclusive use of the allocated nodes
#SBATCH --mem=0          # Request all available memory on each node

module load lammps

# Slurm knows how many tasks to use for mpirun, detected automatically from
# ${SLURM_NTASKS}. Use -- to ensure arguments are passed to the application
# and not mpirun
mpirun -- \
  lmp \
  -in in.file \
  -log output.log

GPU jobs

Use the requested number of GPUs

Always pass the $SLURM_GPUS_ON_NODE environment variable to the -pk gpu switch, as shown below. Failing to do so will result in errors such as:

ERROR: Could not find/initialize a specified accelerator device (src/src/GPU/gpu_extra.h:57)

Here is an example job running on 1 GPU:

#!/bin/bash
#SBATCH -n 8               # (or --ntasks=8) Request 8 cores
#SBATCH --mem-per-cpu=11G  # Request 11GB RAM per core (88G total)
#SBATCH -p gpu             # (or --partition=gpu) Request gpu partition
#SBATCH --cpus-per-gpu=8   # 8 cores per GPU
#SBATCH --gres=gpu:1       # Request 1 GPU of any type
#SBATCH -t 240:0:0         # Request 240 hours runtime

# Load the GPU-accelerated version
module load lammps-gpu

# Slurm knows how many tasks to use for mpirun, detected automatically from
# ${SLURM_NTASKS}. Use -- to ensure arguments are passed to the application
# and not mpirun
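# -sf gpu applies the GPU suffix to supported styles; -pk gpu sets the number
# of GPUs to however many Slurm allocated on this node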
mpirun -- \
  lmp \
  -sf gpu \
  -pk gpu ${SLURM_GPUS_ON_NODE} \
  -in in.lc \
  -log in.lc.log

Here is an example job running on 2 GPUs:

#!/bin/bash
#SBATCH -n 16              # (or --ntasks=16) Request 16 cores
#SBATCH --mem-per-cpu=11G  # Request 11GB RAM per core (176G total)
#SBATCH -p gpu             # (or --partition=gpu) Request gpu partition
#SBATCH --cpus-per-gpu=8   # 8 cores per GPU
#SBATCH --gres=gpu:2       # Request 2 GPUs of any type
#SBATCH -t 240:0:0         # Request 240 hours runtime

# Load the GPU-accelerated version
module load lammps-gpu

# Slurm knows how many tasks to use for mpirun, detected automatically from
# ${SLURM_NTASKS}. Use -- to ensure arguments are passed to the application
# and not mpirun
mpirun -- \
  lmp \
  -sf gpu \
  -pk gpu ${SLURM_GPUS_ON_NODE} \
  -in in.lc \
  -log in.lc.log

References