git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@12485 f3b2605a-c512-4ea7-a41b-209d697bcdaa

sjplimp 2014-09-11 16:00:27 +00:00
parent 17a7fb2c98
commit 705237ae9e
4 changed files with 27 additions and 95 deletions

View File

@@ -16,38 +16,45 @@ lmp_linux_double
 The precision (single, mixed, double) refers to the GPU and USER-CUDA
 package precision.  See the README files in the lib/gpu and lib/cuda
 directories for instructions on how to build the packages with
-different precisions.  The doc/Section_accelerate.html file also has a
-summary description.
+different precisions.  The GPU and USER-CUDA sub-sections of the
+doc/Section_accelerate.html file also describe this process.
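
Precision is baked in when the GPU library is built.  A minimal sketch
of selecting it, assuming the stock lib/gpu/Makefile.linux and its
CUDA_PRECISION macro (both names are assumptions that can vary by
LAMMPS version):

cd lib/gpu
# edit Makefile.linux and pick one precision before building:
#   CUDA_PRECISION = -D_SINGLE_SINGLE   (single)
#   CUDA_PRECISION = -D_SINGLE_DOUBLE   (mixed)
#   CUDA_PRECISION = -D_DOUBLE_DOUBLE   (double)
make -f Makefile.linux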
------------------------------------------------------------------------
If the script has "cpu" in its name, it is meant to be run in CPU-only To run on just CPUs (without using the GPU or USER-CUDA styles),
mode (without using the GPU or USER-CUDA styles). For example: do something like the following:
mpirun -np 1 ../lmp_linux_double -v x 8 -v y 8 -v z 8 -v t 100 < in.lj.cpu mpirun -np 1 lmp_linux_double -v x 8 -v y 8 -v z 8 -v t 100 < in.lj
mpirun -np 12 ../lmp_linux_double -v x 16 -v y 16 -v z 16 -v t 100 < in.lj.cpu mpirun -np 12 lmp_linux_double -v x 16 -v y 16 -v z 16 -v t 100 < in.lj
The "xyz" settings determine the problem size. The "t" setting The "xyz" settings determine the problem size. The "t" setting
determines the number of timesteps. determines the number of timesteps.
These mpirun commands run on a single node. To run on multiple
nodes, scale up the "-np" setting.
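
For instance, the same benchmark spread across two 12-core nodes might
look like the following; the node and core counts are illustrative
assumptions, with task placement left to the launcher:

mpirun -np 24 lmp_linux_double -v x 32 -v y 32 -v z 32 -v t 100 < in.lj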
------------------------------------------------------------------------
If the script has "gpu" in its name, it is meant to be run using To run with the GPU package, do something like the following:
the GPU package. For example:
mpirun -np 12 ../lmp_linux_single -sf gpu -v g 1 -v x 32 -v y 32 -v z 64 -v t 100 < in.lj.gpu mpirun -np 12 lmp_linux_single -sf gpu -pk gpu 1 -v x 32 -v y 32 -v z 64 -v t 100 < in.lj
mpirun -np 8 lmp_linux_mixed -sf gpu -pk gpu 2 -v x 32 -v y 32 -v z 64 -v t 100 < in.lj
mpirun -np 8 ../lmp_linux_mixed -sf gpu -v g 2 -v x 32 -v y 32 -v z 64 -v t 100 < in.lj.gpu
The "xyz" settings determine the problem size. The "t" setting The "xyz" settings determine the problem size. The "t" setting
determines the number of timesteps. The "np" setting determines how determines the number of timesteps. The "np" setting determines how
many MPI tasks per compute node the problem will run on, and the "g" many MPI tasks (per node) the problem will run on, The numeric
setting determines how many GPUs per compute node the problem will run argument to the "-pk" setting is the number of GPUs (per node). Note
on, i.e. 1 or 2 in this case. Note that you can use more MPI tasks that you can use more MPI tasks than GPUs (per node) with the GPU
than GPUs (both per compute node) with the GPU package. package.
These mpirun commands run on a single node. To run on multiple
nodes, scale up the "-np" setting, and control the number of
MPI tasks per node via a "-ppn" setting.
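
As an illustration, two nodes with 2 GPUs and 12 MPI tasks each might
be launched as below; "-ppn" is MPICH-style syntax, so the exact
per-node flag is an assumption that depends on your MPI launcher
(OpenMPI, for example, uses "--npernode" instead):

mpirun -np 24 -ppn 12 lmp_linux_mixed -sf gpu -pk gpu 2 -v x 64 -v y 64 -v z 64 -v t 100 < in.lj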
------------------------------------------------------------------------
-If the script has "cuda" in its name, it is meant to be run using
-the USER-CUDA package.  For example:
+To run with the USER-CUDA package, do something like the following:
@@ -62,7 +69,10 @@ setting determines how many GPUs per compute node the problem will run
 on, i.e. 1 or 2 in this case.  For the USER-CUDA package, the number
 of MPI tasks and GPUs (both per compute node) must be equal.

+These mpirun commands run on a single node.  To run on multiple
+nodes, scale up the "-np" setting.
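
The USER-CUDA launch lines themselves fall between the two hunks shown
here.  A sketch of what one presumably looks like, assuming the "-c on"
switch that enables USER-CUDA and the "-pk cuda" package switch (both
assumptions; see doc/Section_accelerate.html for your version), with
"-np" matching the GPU count as required above:

mpirun -np 2 lmp_linux_single -c on -sf cuda -pk cuda 2 -v x 32 -v y 32 -v z 64 -v t 100 < in.lj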
------------------------------------------------------------------------
If the script has "titan" in its name, it was run on the Titan supercomputer If the script has "titan" in its name, it was run on the Titan
at ORNL. supercomputer at ORNL.

View File

@@ -1,22 +0,0 @@
# 3d Lennard-Jones melt

units           lj
atom_style      atomic

lattice         fcc 0.8442
region          box block 0 $x 0 $y 0 $z
create_box      1 box
create_atoms    1 box
mass            1 1.0

velocity        all create 1.44 87287 loop geom

pair_style      lj/cut 2.5
pair_coeff      1 1 1.0 1.0 2.5

neighbor        0.3 bin
neigh_modify    delay 0 every 20 check no

fix             1 all nve

run             $t
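
As written, this deck errors unless x, y, z, and t are supplied via
"-v" on the command line, as in the mpirun examples above.  One
idiomatic way to give it standalone defaults (a sketch, not part of
the original file) is index-style variables, which command-line "-v"
settings silently override:

variable        x index 8
variable        y index 8
variable        z index 8
variable        t index 100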

View File

@@ -1,27 +0,0 @@
# 3d Lennard-Jones melt
# set variable g = 1/2 for 1/2 GPUs

if "$g == 1" then "package cuda gpu/node 1"
if "$g == 2" then "package cuda gpu/node 2"

units           lj
atom_style      atomic

lattice         fcc 0.8442
region          box block 0 $x 0 $y 0 $z
create_box      1 box
create_atoms    1 box
mass            1 1.0

velocity        all create 1.44 87287 loop geom

pair_style      lj/cut 2.5
pair_coeff      1 1 1.0 1.0 2.5

neighbor        0.3 bin
neigh_modify    delay 0 every 20 check no

fix             1 all nve

run             $t
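
The "$g" if-lines are exactly what the new command-line switches make
unnecessary: the updated README selects the GPU count outside the
script.  Under the revamped package-command syntax the in-script
equivalent is presumably a single line like the following (syntax
assumed from the "-pk cuda" switch; verify against the package doc
page for your version):

package         cuda 2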

View File

@@ -1,29 +0,0 @@
# 3d Lennard-Jones melt
# newton off is required for GPU package
# set variable g = 1/2 for 1/2 GPUs

newton          off

if "$g == 1" then "package gpu force/neigh 0 0 1"
if "$g == 2" then "package gpu force/neigh 0 1 1"

units           lj
atom_style      atomic

lattice         fcc 0.8442
region          box block 0 $x 0 $y 0 $z
create_box      1 box
create_atoms    1 box
mass            1 1.0

velocity        all create 1.44 87287 loop geom

pair_style      lj/cut 2.5
pair_coeff      1 1 1.0 1.0 2.5

neighbor        0.3 bin
neigh_modify    delay 0 every 20 check no

fix             1 all nve

run             $t
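
Likewise, "-pk gpu 2" in the new README stands in for both "$g"
if-lines here, and "-sf gpu" applies the suffix styles that used to
make this a gpu-specific script.  The in-script equivalent under the
revamped syntax is presumably a single line; the old positional
arguments (mode, first/last GPU ID, split fraction) became optional
keywords, so this is an assumption to verify against the package doc
page:

package         gpu 2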