These are input scripts used to run versions of several of the
benchmarks in the top-level bench directory using the GPU accelerator
package. The results of running these scripts on two different machines
(a desktop with 2 Tesla GPUs and the ORNL Titan supercomputer) are shown
in the "GPU (Fermi)" section of the Benchmark page of the LAMMPS WWW
site: lammps.sandia.gov/bench.

Examples are shown below of how to run these scripts. This assumes
you have built 3 executables with the GPU package installed, e.g.

lmp_linux_single
lmp_linux_mixed
lmp_linux_double

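The three executables correspond to double-, mixed-, and single-precision
builds of the GPU library. As a rough sketch only (the Makefile names
below are illustrative; the CUDA path and precision are set in the
lib/gpu Makefile for your machine), one such executable can be built
with the traditional make procedure:

cd lammps/lib/gpu
make -f Makefile.linux       # build the GPU support library
cd ../../src
make yes-gpu                 # install the GPU package
make linux                   # build the executable, e.g. lmp_linux
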
------------------------------------------------------------------------

To run on just CPUs (without using the GPU styles), do something like
the following:

mpirun -np 1 lmp_linux_double -v x 8 -v y 8 -v z 8 -v t 100 < in.lj
mpirun -np 12 lmp_linux_double -v x 16 -v y 16 -v z 16 -v t 100 < in.eam

The "xyz" settings determine the problem size. The "t" setting
|
|
determines the number of timesteps.
|
|
|
|
These mpirun commands run on a single node. To run on multiple
|
|
nodes, scale up the "-np" setting.
|
|
|
|
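For example, a run spread over two 12-core nodes could use -np 24; the
exact node allocation is handled by your batch system or MPI launcher,
so this command is only illustrative:

mpirun -np 24 lmp_linux_double -v x 16 -v y 16 -v z 16 -v t 100 < in.eam
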
------------------------------------------------------------------------

To run with the GPU package, do something like the following:

mpirun -np 12 lmp_linux_single -sf gpu -v x 32 -v y 32 -v z 64 -v t 100 < in.lj
mpirun -np 8 lmp_linux_mixed -sf gpu -pk gpu 2 -v x 32 -v y 32 -v z 64 -v t 100 < in.eam

The "xyz" settings determine the problem size. The "t" setting
|
|
determines the number of timesteps. The "np" setting determines how
|
|
many MPI tasks (per node) the problem will run on. The numeric
|
|
argument to the "-pk" setting is the number of GPUs (per node); 1 GPU
|
|
is the default. Note that you can use more MPI tasks than GPUs (per
|
|
node) with the GPU package.
|
|
|
|
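For instance, this (illustrative) command shares 2 GPUs among 12 MPI
tasks on a single node:

mpirun -np 12 lmp_linux_single -sf gpu -pk gpu 2 -v x 32 -v y 32 -v z 64 -v t 100 < in.lj
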
These mpirun commands run on a single node. To run on multiple nodes,
scale up the "-np" setting, and control the number of MPI tasks per
node via a "-ppn" setting (or your MPI launcher's equivalent).

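As an illustration only (assuming an MPI launcher that accepts -ppn,
such as MPICH or Intel MPI), a 2-node run with 12 MPI tasks and 2 GPUs
per node could look like:

mpirun -np 24 -ppn 12 lmp_linux_mixed -sf gpu -pk gpu 2 -v x 32 -v y 32 -v z 64 -v t 100 < in.lj
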
------------------------------------------------------------------------

If the script has "titan" in its name, it was run on the Titan
supercomputer at ORNL.