README

These are input scripts used to run versions of several of the
benchmarks in the top-level bench directory using the GPU accelerator
package.  The results of running these scripts on two different machines
(a desktop with 2 Tesla GPUs and the ORNL Titan supercomputer) are shown
on the "GPU (Fermi)" section of the Benchmark page of the LAMMPS WWW
site: lammps.sandia.gov/bench.

Examples of how to run these scripts are shown below.  They assume
you have built 3 executables with the GPU package installed, one for
each supported precision level, e.g.

lmp_linux_single
lmp_linux_mixed
lmp_linux_double
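
One way to produce these executables with the traditional make build
is sketched below.  The makefile names and the precision flag are
assumptions; adjust them for your compiler, CUDA toolkit, and platform.

# build the GPU library; set CUDA_PRECISION in the chosen Makefile to
# -D_SINGLE_SINGLE, -D_SINGLE_DOUBLE, or -D_DOUBLE_DOUBLE for the
# single, mixed, and double precision variants
cd lammps/lib/gpu
make -f Makefile.linux

# install the GPU package and build the LAMMPS executable
cd ../../src
make yes-gpu
make linux          # produces lmp_linux; rename it per precision level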

------------------------------------------------------------------------

To run on just CPUs (without using the GPU styles),
do something like the following:

mpirun -np 1 lmp_linux_double -v x 8 -v y 8 -v z 8 -v t 100 < in.lj
mpirun -np 12 lmp_linux_double -v x 16 -v y 16 -v z 16 -v t 100 < in.eam

The "xyz" settings determine the problem size.  The "t" setting
determines the number of timesteps.
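
For reference, the input scripts consume these command-line variables
roughly as in the sketch below; the scale factor (20 lattice cells per
unit of x, y, z) and the exact lines vary from script to script.

variable        x index 1
variable        y index 1
variable        z index 1
variable        t index 100

variable        xx equal 20*$x
variable        yy equal 20*$y
variable        zz equal 20*$z

region          box block 0 ${xx} 0 ${yy} 0 ${zz}
run             $t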

These mpirun commands run on a single node.  To run on multiple
nodes, scale up the "-np" setting.
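
For example, a run on 4 nodes with 12 cores each might look like the
following; how nodes are requested depends on your MPI installation
and batch system.

mpirun -np 48 lmp_linux_double -v x 32 -v y 32 -v z 32 -v t 100 < in.eam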

------------------------------------------------------------------------

To run with the GPU package, do something like the following:

mpirun -np 12 lmp_linux_single -sf gpu -v x 32 -v y 32 -v z 64 -v t 100 < in.lj
mpirun -np 8 lmp_linux_mixed -sf gpu -pk gpu 2 -v x 32 -v y 32 -v z 64 -v t 100 < in.eam

The "xyz" settings determine the problem size.  The "t" setting
determines the number of timesteps.  The "-np" setting determines how
many MPI tasks (per node) the problem will run on.  The numeric
argument to the "-pk gpu" switch is the number of GPUs (per node); 1
GPU is the default.  Note that you can use more MPI tasks than GPUs
(per node) with the GPU package.
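
The "-sf gpu" and "-pk gpu" command-line switches have the same effect
as putting the corresponding suffix and package commands at the top of
the input script, e.g.

package gpu 2
suffix gpu

See the doc pages for the package and suffix commands for the
additional options they accept.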

These mpirun commands run on a single node.  To run on multiple nodes,
scale up the "-np" setting, and control the number of MPI tasks per
node via a "-ppn" setting.

------------------------------------------------------------------------

If the script has "titan" in its name, it was run on the Titan
supercomputer at ORNL.
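
On a Cray machine like Titan, the launcher is aprun rather than
mpirun.  A hypothetical launch of one of these scripts (the executable
name and core counts below are placeholders) might look like:

aprun -n 32 -N 16 lmp_titan -sf gpu -v x 64 -v y 64 -v z 64 -v t 100 < in.lj.titan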