git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@12493 f3b2605a-c512-4ea7-a41b-209d697bcdaa
parent 978269a4ce · commit 3564238044
@@ -1,15 +1,18 @@
-These example scripts can be run with the USER-CUDA
-package, assuming you built LAMMPS with the package.
+These example scripts can be run with the USER-CUDA package, assuming
+you built LAMMPS with the package and the precision you want.
 
+Note that these benchmark problems are identical to those in the
+examples/cuda directory which use the USER-CUDA package.
+
 You can run any of the scripts as follows.  You can also reset the
 x,y,z variables in the command line to change the size of the problem.
 
-With USER-CUDA on 1 GPU:
+With the USER-CUDA package on 1 GPU:
 
 lmp_machine -c on -sf cuda < in.cuda.melt.2.5
-lmp_machine -c on -sf cuda < in.cuda.phosphate
+lmp_machine -c on -sf cuda -v x 6 -v y 6 -v z 6 < in.cuda.phosphate
 
-With USER-CUDA on 2 GPUs:
+With the USER-CUDA package on 2 GPUs:
 
 mpirun -np 2 lmp_machine -c on -sf cuda -pk cuda 2 < in.cuda.melt.2.5
 mpirun -np 2 lmp_machine -c on -sf cuda -pk cuda 2 < in.cuda.rhodo
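The launch lines in this hunk compose from independent pieces: `-c on -sf cuda` enables the package and applies the cuda suffix to styles, and `-v x … -v y … -v z …` sets index variables that the input script uses to size the problem. A minimal sketch of how such a command line assembles, only echoing the result rather than running LAMMPS (the binary name `lmp_machine` and the 6x6x6 sizes are taken from the README; the variable names here are my own):

```shell
#!/bin/sh
# Sketch: compose a USER-CUDA launch line from its parts.
# We echo the command instead of executing it, so no LAMMPS build is needed.

BIN=lmp_machine                    # LAMMPS binary (name assumed from the README)
CUDA_OPTS="-c on -sf cuda"         # enable the package, apply the cuda style suffix
SIZE_OPTS="-v x 6 -v y 6 -v z 6"   # index variables the input script reads for sizing

echo "$BIN $CUDA_OPTS $SIZE_OPTS < in.cuda.phosphate"
```

Dropping `SIZE_OPTS` or changing its values resizes the problem without editing the input script.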
@@ -18,4 +21,7 @@ CPU-only:
 
 lmp_machine < in.cuda.melt.2.5
 mpirun -np 4 lmp_machine < in.cuda.melt.5.0
-mpirun -np 8 lmp_machine < in.cuda.rhodo
+mpirun -np 8 lmp_machine -v x 1 -v y 1 -v z 2 < in.cuda.rhodo
+
+Note that with the USER-CUDA package you must ensure the number of MPI
+tasks equals the number of GPUs (both per node).
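The added note above constrains the launch: with USER-CUDA, each MPI task drives exactly one GPU, so the `-np` count must match the total GPU count. A small sketch of that rule, deriving `-np` from assumed example values (`GPUS_PER_NODE=2`, `NODES=1` are placeholders, not from the README):

```shell
#!/bin/sh
# Sketch of the USER-CUDA rule: MPI tasks per node == GPUs per node.
# GPUS_PER_NODE and NODES are assumed example values; adjust for your machine.

GPUS_PER_NODE=2
NODES=1
NP=$((GPUS_PER_NODE * NODES))   # total MPI tasks = total GPUs

echo "mpirun -np $NP lmp_machine -c on -sf cuda -pk cuda $GPUS_PER_NODE < in.cuda.rhodo"
```

Deriving `-np` this way keeps the task count and the `-pk cuda` GPU count consistent by construction.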
@@ -27,5 +27,5 @@ neigh_modify delay 0 every 20 check no
 
 fix 1 all nve
 
 thermo 100
-run 5000
+run 1000
@@ -27,5 +27,5 @@ neigh_modify delay 0 every 20 check no
 
 fix 1 all nve
 
 thermo 100
-run 5000
+run 1000
@@ -24,10 +24,8 @@ kspace_style pppm 1e-5
 
 neighbor 2.0 bin
 
 thermo 100
 
 timestep 0.001
-
 fix 1 all npt temp 400 400 0.01 iso 1000.0 1000.0 1.0
-
-run 1000
+run 200
@@ -23,10 +23,11 @@ replicate $x $y $z
 fix 1 all shake 0.0001 5 0 m 1.0 a 232
 fix 2 all npt temp 300.0 300.0 100.0 &
     z 0.0 0.0 1000.0 mtk no pchain 0 tchain 1
 
 special_bonds charmm
 
 thermo 100
 thermo_style multi
 timestep 2.0
 
-run 1000
+run 200
+