From f6151f67354d145b0f9d0da9cc76c66e15a98ea2 Mon Sep 17 00:00:00 2001
From: sjplimp
Date: Mon, 2 May 2011 15:01:49 +0000
Subject: [PATCH] git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@6051 f3b2605a-c512-4ea7-a41b-209d697bcdaa
---
 doc/Section_commands.html |  16 ++--
 doc/Section_commands.txt  |   4 +
 doc/Section_errors.html   |  29 ++++--
 doc/Section_errors.txt    |  29 ++++--
 doc/Section_intro.html    |   8 ++
 doc/Section_intro.txt     |   8 ++
 doc/Section_start.html    | 197 ++++++++++++++++++--------------------
 doc/Section_start.txt     | 197 ++++++++++++++++++--------------------
 doc/fix_gpu.html          |  18 ++--
 doc/fix_gpu.txt           |  18 ++--
 doc/kspace_style.html     |  37 ++++++-
 doc/kspace_style.txt      |  37 ++++++-
 doc/pair_coeff.html       |   2 +
 doc/pair_coeff.txt        |   2 +
 doc/pair_lj_expand.html   |  33 ++++++-
 doc/pair_lj_expand.txt    |  31 +++++-
 doc/pair_morse.html       |  34 ++++++-
 doc/pair_morse.txt        |  31 +++++-
 doc/pair_style.html       |   2 +
 doc/pair_style.txt        |   2 +
 20 files changed, 478 insertions(+), 257 deletions(-)

diff --git a/doc/Section_commands.html b/doc/Section_commands.html
index 70bc1e8857..5f996268de 100644
--- a/doc/Section_commands.html
+++ b/doc/Section_commands.html
@@ -399,12 +399,13 @@ potentials. Click on the style itself for a full description:
 lj/charmm/coul/long/gpu lj/charmm/coul/long/opt lj/class2 lj/class2/coul/cut
 lj/class2/coul/long lj/cut lj/cut/gpu lj/cut/opt
 lj/cut/coul/cut lj/cut/coul/cut/gpu lj/cut/coul/debye lj/cut/coul/long
-lj/cut/coul/long/gpu lj/cut/coul/long/tip4p lj/expand lj/gromacs
-lj/gromacs/coul/gromacs lj/smooth lj96/cut lj96/cut/gpu
-lubricate meam morse morse/opt
-peri/lps peri/pmb reax resquared
-soft sw table tersoff
-tersoff/zbl yukawa yukawa/colloid
+lj/cut/coul/long/gpu lj/cut/coul/long/tip4p lj/expand lj/expand/gpu
+lj/gromacs lj/gromacs/coul/gromacs lj/smooth lj96/cut
+lj96/cut/gpu lubricate meam morse
+morse/gpu morse/opt peri/lps peri/pmb
+reax resquared soft sw
+table tersoff tersoff/zbl yukawa
+yukawa/colloid

These are pair styles contributed by users, which can be used if @@ -483,7 +484,8 @@ description: Kspace solvers. Click on the style itself for a full description:

-ewald pppm pppm/tip4p
+ewald pppm pppm/gpu/single pppm/gpu/double
+pppm/tip4p

These are Kspace solvers contributed by users, which can be used if diff --git a/doc/Section_commands.txt b/doc/Section_commands.txt index 1a18b8b9b2..1c58401303 100644 --- a/doc/Section_commands.txt +++ b/doc/Section_commands.txt @@ -611,6 +611,7 @@ potentials. Click on the style itself for a full description: "lj/cut/coul/long/gpu"_pair_lj.html, "lj/cut/coul/long/tip4p"_pair_lj.html, "lj/expand"_pair_lj_expand.html, +"lj/expand/gpu"_pair_lj_expand.html, "lj/gromacs"_pair_gromacs.html, "lj/gromacs/coul/gromacs"_pair_gromacs.html, "lj/smooth"_pair_lj_smooth.html, @@ -619,6 +620,7 @@ potentials. Click on the style itself for a full description: "lubricate"_pair_lubricate.html, "meam"_pair_meam.html, "morse"_pair_morse.html, +"morse/gpu"_pair_morse.html, "morse/opt"_pair_morse.html, "peri/lps"_pair_peri.html, "peri/pmb"_pair_peri.html, @@ -728,6 +730,8 @@ Kspace solvers. Click on the style itself for a full description: "ewald"_kspace_style.html, "pppm"_kspace_style.html, +"pppm/gpu/single"_kspace_style.html, +"pppm/gpu/double"_kspace_style.html, "pppm/tip4p"_kspace_style.html :tb(c=4,ea=c,w=100) These are Kspace solvers contributed by users, which can be used if diff --git a/doc/Section_errors.html b/doc/Section_errors.html index 8cd9ed6e46..b90784e0c0 100644 --- a/doc/Section_errors.html +++ b/doc/Section_errors.html @@ -173,6 +173,10 @@ the bond topologies you have defined. neighbors for each atom. This likely means something is wrong with the bond topologies you have defined. +

Accelerated style in input script but no fix gpu + +
GPU acceleration requires fix gpu in the input script. +
All angle coeffs are not set
All angle coefficients must be set in the data file or by the @@ -1240,9 +1244,9 @@ non-periodic z dimension. unless you use the kspace_modify command to define a 2d slab with a non-periodic z dimension. -
Cannot use pair hybrid with multiple GPU pair styles +
Cannot use pair hybrid with GPU neighbor builds -
Self-explanatory. +
See documentation for fix gpu.
Cannot use pair tail corrections with 2d simulations @@ -1843,7 +1847,7 @@ does not exist.
Self-explanatory. -
Could not find or initialize a specified accelerator device +
Could not find/initialize a specified accelerator device
Your GPU setup is invalid. @@ -2123,6 +2127,10 @@ model. used. Most likely, one or more atoms have been blown out of the simulation box to a great distance. +
Double precision is not supported on this accelerator. + +
In this case, you must compile the GPU library for single precision. +
Dump cfg and fix not computed at compatible times
The fix must produce per-atom quantities on timesteps that dump cfg @@ -2355,6 +2363,10 @@ smaller simulation or on more processors.
Self-explanatory. +
Fix gpu split must be positive for hybrid pair styles. + +
See documentation for fix gpu. +
Fix ID for compute atom/molecule does not exist
Self-explanatory. @@ -3227,6 +3239,11 @@ this fix.
This is the way the fix must be defined in your input script. +
GPU library not compiled for this accelerator + +
The GPU library was not built for your accelerator. Check the arch flag in +lib/gpu. +
Gmask function in equal-style variable formula
Gmask is per-atom operation. @@ -3509,7 +3526,7 @@ simulation box.
Eigensolve for rigid body was not sufficiently accurate. -
Insufficient memory on accelerator (or no fix gpu) +
Insufficient memory on accelerator.
Self-explanatory. @@ -4587,10 +4604,6 @@ contain the same atom.
Any rigid body defined by the fix rigid command must contain 2 or more atoms. -
Out of memory on GPGPU - -
You are attempting to run with too many atoms on the GPU. -
Out of range atoms - cannot compute PPPM
One or more atoms are attempting to map their charge to a PPPM grid diff --git a/doc/Section_errors.txt b/doc/Section_errors.txt index d94e8a9be7..0e2b2e804b 100644 --- a/doc/Section_errors.txt +++ b/doc/Section_errors.txt @@ -170,6 +170,10 @@ An inconsistency was detected when computing the number of 1-4 neighbors for each atom. This likely means something is wrong with the bond topologies you have defined. :dd +{Accelerated style in input script but no fix gpu} :dt + +GPU acceleration requires fix gpu in the input script. :dd + {All angle coeffs are not set} :dt All angle coefficients must be set in the data file or by the @@ -1237,9 +1241,9 @@ For kspace style pppm, all 3 dimensions must have periodic boundaries unless you use the kspace_modify command to define a 2d slab with a non-periodic z dimension. :dd -{Cannot use pair hybrid with multiple GPU pair styles} :dt +{Cannot use pair hybrid with GPU neighbor builds} :dt -Self-explanatory. :dd +See documentation for fix gpu. :dd {Cannot use pair tail corrections with 2d simulations} :dt @@ -1840,7 +1844,7 @@ The compute ID for computing temperature does not exist. :dd Self-explanatory. :dd -{Could not find or initialize a specified accelerator device} :dt +{Could not find/initialize a specified accelerator device} :dt Your GPU setup is invalid. :dd @@ -2120,6 +2124,10 @@ The domain has become extremely large so that neighbor bins cannot be used. Most likely, one or more atoms have been blown out of the simulation box to a great distance. :dd +{Double precision is not supported on this accelerator.} :dt + +In this case, you must compile the GPU library for single precision. :dd + {Dump cfg and fix not computed at compatible times} :dt The fix must produce per-atom quantities on timesteps that dump cfg @@ -2352,6 +2360,10 @@ This is not allowed. Make your SRD bin size smaller. :dd Self-explanatory. :dd +{Fix gpu split must be positive for hybrid pair styles.} :dt + +See documentation for fix gpu. 
:dd + {Fix ID for compute atom/molecule does not exist} :dt Self-explanatory. :dd @@ -3224,6 +3236,11 @@ When using a "*" in the restart file name, no matching file was found. :dd This is the way the fix must be defined in your input script. :dd +{GPU library not compiled for this accelerator} :dt + +The GPU library was not built for your accelerator. Check the arch flag in +lib/gpu. :dd + {Gmask function in equal-style variable formula} :dt Gmask is per-atom operation. :dd @@ -3506,7 +3523,7 @@ Eigensolve for rigid body was not sufficiently accurate. :dd Eigensolve for rigid body was not sufficiently accurate. :dd -{Insufficient memory on accelerator (or no fix gpu)} :dt +{Insufficient memory on accelerator. } :dt Self-explanatory. :dd @@ -4584,10 +4601,6 @@ contain the same atom. :dd Any rigid body defined by the fix rigid command must contain 2 or more atoms. :dd -{Out of memory on GPGPU} :dt - -You are attempting to run with too many atoms on the GPU. :dd - {Out of range atoms - cannot compute PPPM} :dt One or more atoms are attempting to map their charge to a PPPM grid diff --git a/doc/Section_intro.html b/doc/Section_intro.html index f9b00bb689..bce1a9d718 100644 --- a/doc/Section_intro.html +++ b/doc/Section_intro.html @@ -505,6 +505,14 @@ the list.
+ + + + + + + +

diff --git a/doc/Section_intro.txt b/doc/Section_intro.txt
index e4c26c8aab..a8e46df996 100644
--- a/doc/Section_intro.txt
+++ b/doc/Section_intro.txt
@@ -490,6 +490,14 @@ the list.

:link(sjp,http://www.sandia.gov/~sjplimp)

+pppm GPU single and double : Mike Brown (ORNL)
+pair_style lj/cut/expand : Inderaj Bains (NVIDIA)
+temperature accelerated dynamics (TAD) : Aidan Thompson (Sandia)
+pair reax/c and fix qeq/reax : Metin Aktulga (Purdue, now LBNL)
+DREIDING force field, pair_style hbond/dreiding, etc : Tod Pascal (Caltech)
+fix adapt and compute ti for thermodynamic integration for free energies : Sai Jayaraman (Sandia)
+pair born and pair gauss : Sai Jayaraman (Sandia)
+stochastic rotation dynamics (SRD) via fix srd : Jeremy Lechman (Sandia) and Pieter in 't Veld (BASF)
 ipp Perl script tool : Reese Jones (Sandia)
 eam_database and createatoms tools : Xiaowang Zhou (Sandia)
 electron force field (eFF) : Andres Jaramillo-Botero and Julius Su (Caltech)

diff --git a/doc/Section_start.html b/doc/Section_start.html
index a83aaa0ad5..08287e3377 100644
--- a/doc/Section_start.html
+++ b/doc/Section_start.html
@@ -994,143 +994,130 @@ processing units (GPUs). We plan to add more over time. Currently,
they only support NVIDIA GPU cards. To use them you need to install
certain NVIDIA CUDA software on your system:

-
  • Check if you have an NVIDIA card: cat /proc/driver/nvidia/cards/0 -
  • Go to http://www.nvidia.com/object/cuda_get.html -
  • Install a driver and toolkit appropriate for your system (SDK is not necessary) -
  • Follow the instructions in README in lammps/lib/gpu to build the library. -
  • Run lammps/lib/gpu/nvc_get_devices to list supported devices and properties +
    • Check if you have an NVIDIA card: cat /proc/driver/nvidia/cards/0
    • Go to http://www.nvidia.com/object/cuda_get.html
    • Install a driver and toolkit appropriate for your system (SDK is not necessary)
    • Follow the instructions in README in lammps/lib/gpu to build the library.
    • Run lammps/lib/gpu/nvc_get_devices to list supported devices and properties

    GPU configuration

    When using GPUs, you are restricted to one physical GPU per LAMMPS -process. Multiple processes can share a single GPU and in many cases it -will be more efficient to run with multiple processes per GPU. Any GPU -accelerated style requires that fix gpu be used in the -input script to select and initialize the GPUs. The format for the fix -is: +process. Multiple processes can share a single GPU and in many cases +it will be more efficient to run with multiple processes per GPU. Any +GPU accelerated style requires that fix gpu be used in +the input script to select and initialize the GPUs. The format for the +fix is:

    fix name all gpu mode first last split 
     

    where name is the name for the fix. The gpu fix must be the first -fix specified for a given run, otherwise the program will exit -with an error. The gpu fix will not have any effect on runs -that do not use GPU acceleration; there should be no problem -with specifying the fix first in any input script. +fix specified for a given run, otherwise the program will exit with an +error. The gpu fix will not have any effect on runs that do not use +GPU acceleration; there should be no problem with specifying the fix +first in any input script.

    -

    mode can be either "force" or "force/neigh". In the former, -neighbor list calculation is performed on the CPU using the -standard LAMMPS routines. In the latter, the neighbor list -calculation is performed on the GPU. The GPU neighbor list -can be used for better performance, however, it -should not be used with a triclinic box. +

    mode can be either "force" or "force/neigh". In the former, neighbor
+list calculation is performed on the CPU using the standard LAMMPS
+routines. In the latter, the neighbor list calculation is performed on
+the GPU. The GPU neighbor list can be used for better performance;
+however, it cannot be used with a triclinic box or with
+hybrid pair styles.

    -

    There are cases when it might be more efficient to select the CPU for neighbor -list builds. If a non-GPU enabled style requires a neighbor list, it will also -be built using CPU routines. Redundant CPU and GPU neighbor list calculations -will typically be less efficient. For hybrid pair -styles, GPU calculated neighbor lists might be less efficient because -no particles will be skipped in a given neighbor list. +

    There are cases when it might be more efficient to select the CPU for +neighbor list builds. If a non-GPU enabled style requires a neighbor +list, it will also be built using CPU routines. Redundant CPU and GPU +neighbor list calculations will typically be less efficient.

    -

    first is the ID (as reported by lammps/lib/gpu/nvc_get_devices) -of the first GPU that will be used on each node. last is the -ID of the last GPU that will be used on each node. If you have -only one GPU per node, first and last will typically both be -0. Selecting a non-sequential set of GPU IDs (e.g. 0,1,3) -is not currently supported. +

    first is the ID (as reported by lammps/lib/gpu/nvc_get_devices) of +the first GPU that will be used on each node. last is the ID of the +last GPU that will be used on each node. If you have only one GPU per +node, first and last will typically both be 0. Selecting a +non-sequential set of GPU IDs (e.g. 0,1,3) is not currently supported.

    -

    split is the fraction of particles whose forces, torques, -energies, and/or virials will be calculated on the GPU. This -can be used to perform CPU and GPU force calculations -simultaneously. If split is negative, the software will -attempt to calculate the optimal fraction automatically -every 25 timesteps based on CPU and GPU timings. Because the GPU speedups -are dependent on the number of particles, automatic calculation of the -split can be less efficient, but typically results in loop times -within 20% of an optimal fixed split. +

    split is the fraction of particles whose forces, torques, energies, +and/or virials will be calculated on the GPU. This can be used to +perform CPU and GPU force calculations simultaneously. If split is +negative, the software will attempt to calculate the optimal fraction +automatically every 25 timesteps based on CPU and GPU timings. Because +the GPU speedups are dependent on the number of particles, automatic +calculation of the split can be less efficient, but typically results +in loop times within 20% of an optimal fixed split.

    -

    If you have two GPUs per node, 8 CPU cores per node, and -would like to run on 4 nodes with dynamic balancing of -force calculation across CPU and GPU cores, the fix -might be +

    If you have two GPUs per node, 8 CPU cores per node, and would like to +run on 4 nodes with dynamic balancing of force calculation across CPU +and GPU cores, the fix might be

    fix 0 all gpu force/neigh 0 1 -1 
     
    -

    with LAMMPS run on 32 processes. In this case, all -CPU cores and GPU devices on the nodes would be utilized. -Each GPU device would be shared by 4 CPU cores. The -CPU cores would perform force calculations for some -fraction of the particles at the same time the GPUs -performed force calculation for the other particles. +

    with LAMMPS run on 32 processes. In this case, all CPU cores and GPU +devices on the nodes would be utilized. Each GPU device would be +shared by 4 CPU cores. The CPU cores would perform force calculations +for some fraction of the particles at the same time the GPUs performed +force calculation for the other particles.

    -

    Because of the large number of cores on each GPU -device, it might be more efficient to run on fewer -processes per GPU when the number of particles per process -is small (100's of particles); this can be necessary -to keep the GPU cores busy. +

    Because of the large number of cores on each GPU device, it might be +more efficient to run on fewer processes per GPU when the number of +particles per process is small (100's of particles); this can be +necessary to keep the GPU cores busy.

    GPU input script

    -

    In order to use GPU acceleration in LAMMPS, -fix_gpu -should be used in order to initialize and configure the -GPUs for use. Additionally, GPU enabled styles must be -selected in the input script. Currently, -this is limited to a few pair styles. -Some GPU-enabled styles have additional restrictions -listed in their documentation. +

    In order to use GPU acceleration in LAMMPS, fix_gpu +should be used in order to initialize and configure the GPUs for +use. Additionally, GPU enabled styles must be selected in the input +script. Currently, this is limited to a few pair +styles and PPPM. Some GPU-enabled styles have +additional restrictions listed in their documentation.
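The configuration described above can be sketched as a complete input fragment. This is only an illustration, not part of the patch: the style names (lj/cut/coul/long/gpu, pppm/gpu/single) come from the tables in this patch, while the data file name, cutoffs, and coefficients are hypothetical placeholders.

```
# hypothetical minimal GPU input sketch -- values are illustrative only
units         lj
atom_style    charge
read_data     data.example            # hypothetical data file

# fix gpu must be the first fix; one GPU per node (IDs 0..0), dynamic split
fix           0 all gpu force/neigh 0 0 -1

# GPU-enabled pair style and GPU-enabled PPPM from the tables above
pair_style    lj/cut/coul/long/gpu 2.5
pair_coeff    * * 1.0 1.0
kspace_style  pppm/gpu/single 1.0e-4

run           100
```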

    GPU asynchronous pair computation

    -

    The GPU accelerated pair styles can be used to perform -pair style force calculation on the GPU while other -calculations are -performed on the CPU. One method to do this is to specify -a split in the gpu fix as described above. In this case, -force calculation for the pair style will also be performed -on the CPU. +

    The GPU accelerated pair styles can be used to perform pair style +force calculation on the GPU while other calculations are performed on +the CPU. One method to do this is to specify a split in the gpu fix +as described above. In this case, force calculation for the pair +style will also be performed on the CPU.

    -

    When the CPU work in a GPU pair style has finished, -the next force computation will begin, possibly before the -GPU has finished. If split is 1.0 in the gpu fix, the next -force computation will begin almost immediately. This can -be used to run a hybrid GPU pair style at -the same time as a hybrid CPU pair style. In this case, the -GPU pair style should be first in the hybrid command in order to -perform simultaneous calculations. This also -allows bond, angle, -dihedral, improper, -and long-range force -computations to be run simultaneously with the GPU pair style. -Once all CPU force computations have completed, the gpu fix -will block until the GPU has finished all work before continuing -the run. +

    When the CPU work in a GPU pair style has finished, the next force +computation will begin, possibly before the GPU has finished. If +split is 1.0 in the gpu fix, the next force computation will begin +almost immediately. This can be used to run a +hybrid GPU pair style at the same time as a hybrid +CPU pair style. In this case, the GPU pair style should be first in +the hybrid command in order to perform simultaneous calculations. This +also allows bond, angle, +dihedral, improper, and +long-range force computations to be run +simultaneously with the GPU pair style. Once all CPU force +computations have completed, the gpu fix will block until the GPU has +finished all work before continuing the run.
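The simultaneous GPU/CPU computation described above can be sketched as follows. This is an illustrative fragment, not from the patch; the atom types and coefficients are hypothetical. mode force is used because GPU neighbor list builds cannot be combined with hybrid pair styles.

```
# hypothetical sketch: GPU sub-style runs while the CPU sub-style computes
fix         0 all gpu force 0 0 1.0              # split 1.0: CPU proceeds while GPU works
pair_style  hybrid lj/cut/gpu 2.5 lj/cut 2.5     # GPU pair style listed first
pair_coeff  1 1 lj/cut/gpu 1.0 1.0
pair_coeff  2 2 lj/cut 1.0 1.0
```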

    GPU timing

    GPU accelerated pair styles can perform computations asynchronously -with CPU computations. The "Pair" time reported by LAMMPS -will be the maximum of the time required to complete the CPU -pair style computations and the time required to complete the GPU -pair style computations. Any time spent for GPU-enabled pair styles -for computations that run simultaneously with bond, -angle, dihedral, -improper, and long-range calculations -will not be included in the "Pair" time. +with CPU computations. The "Pair" time reported by LAMMPS will be the +maximum of the time required to complete the CPU pair style +computations and the time required to complete the GPU pair style +computations. Any time spent for GPU-enabled pair styles for +computations that run simultaneously with bond, +angle, dihedral, +improper, and long-range +calculations will not be included in the "Pair" time.

    -

    When mode for the gpu fix is force/neigh, -the time for neighbor list calculations on the GPU will be added -into the "Pair" time, not the "Neigh" time. A breakdown of the -times required for various tasks on the GPU (data copy, neighbor -calculations, force computations, etc.) are output only -with the LAMMPS screen output at the end of each run. These timings represent -total time spent on the GPU for each routine, regardless of asynchronous -CPU calculations. +

    When mode for the gpu fix is force/neigh, the time for neighbor list
+calculations on the GPU will be added into the "Pair" time, not the
+"Neigh" time. A breakdown of the times required for various tasks on
+the GPU (data copy, neighbor calculations, force computations, etc.)
+is output only with the LAMMPS screen output at the end of each
+run. These timings represent total time spent on the GPU for each
+routine, regardless of asynchronous CPU calculations.

    GPU single vs double precision

    -

    See the lammps/lib/gpu/README file for instructions on how to build -the LAMMPS gpu library for single, mixed, and double precision. The latter -requires that your GPU card supports double precision. +

    See the lammps/lib/gpu/README file for instructions on how to build +the LAMMPS gpu library for single, mixed, and double precision. The +latter requires that your GPU card supports double precision.


    diff --git a/doc/Section_start.txt b/doc/Section_start.txt index 4b4d96693f..fbdd015ab4 100644 --- a/doc/Section_start.txt +++ b/doc/Section_start.txt @@ -984,143 +984,130 @@ processing units (GPUs). We plan to add more over time. Currently, they only support NVIDIA GPU cards. To use them you need to install certain NVIDIA CUDA software on your system: -Check if you have an NVIDIA card: cat /proc/driver/nvidia/cards/0 -Go to http://www.nvidia.com/object/cuda_get.html -Install a driver and toolkit appropriate for your system (SDK is not necessary) -Follow the instructions in README in lammps/lib/gpu to build the library. -Run lammps/lib/gpu/nvc_get_devices to list supported devices and properties :ul +Check if you have an NVIDIA card: cat /proc/driver/nvidia/cards/0 Go +to http://www.nvidia.com/object/cuda_get.html Install a driver and +toolkit appropriate for your system (SDK is not necessary) Follow the +instructions in README in lammps/lib/gpu to build the library. Run +lammps/lib/gpu/nvc_get_devices to list supported devices and +properties :ul GPU configuration :h4 When using GPUs, you are restricted to one physical GPU per LAMMPS -process. Multiple processes can share a single GPU and in many cases it -will be more efficient to run with multiple processes per GPU. Any GPU -accelerated style requires that "fix gpu"_fix_gpu.html be used in the -input script to select and initialize the GPUs. The format for the fix -is: +process. Multiple processes can share a single GPU and in many cases +it will be more efficient to run with multiple processes per GPU. Any +GPU accelerated style requires that "fix gpu"_fix_gpu.html be used in +the input script to select and initialize the GPUs. The format for the +fix is: fix {name} all gpu {mode} {first} {last} {split} :pre where {name} is the name for the fix. The gpu fix must be the first -fix specified for a given run, otherwise the program will exit -with an error. 
The gpu fix will not have any effect on runs
-that do not use GPU acceleration; there should be no problem
-with specifying the fix first in any input script.
+fix specified for a given run, otherwise the program will exit with an
+error. The gpu fix will not have any effect on runs that do not use
+GPU acceleration; there should be no problem with specifying the fix
+first in any input script.

-{mode} can be either "force" or "force/neigh". In the former,
-neighbor list calculation is performed on the CPU using the
-standard LAMMPS routines. In the latter, the neighbor list
-calculation is performed on the GPU. The GPU neighbor list
-can be used for better performance, however, it
-should not be used with a triclinic box.
+{mode} can be either "force" or "force/neigh". In the former, neighbor
+list calculation is performed on the CPU using the standard LAMMPS
+routines. In the latter, the neighbor list calculation is performed on
+the GPU. The GPU neighbor list can be used for better performance;
+however, it cannot be used with a triclinic box or with
+"hybrid"_pair_hybrid.html pair styles.

-There are cases when it might be more efficient to select the CPU for neighbor
-list builds. If a non-GPU enabled style requires a neighbor list, it will also
-be built using CPU routines. Redundant CPU and GPU neighbor list calculations
-will typically be less efficient. For "hybrid"_pair_hybrid.html pair
-styles, GPU calculated neighbor lists might be less efficient because
-no particles will be skipped in a given neighbor list.
+There are cases when it might be more efficient to select the CPU for
+neighbor list builds. If a non-GPU enabled style requires a neighbor
+list, it will also be built using CPU routines. Redundant CPU and GPU
+neighbor list calculations will typically be less efficient.

-{first} is the ID (as reported by lammps/lib/gpu/nvc_get_devices)
-of the first GPU that will be used on each node.
{last} is the -ID of the last GPU that will be used on each node. If you have -only one GPU per node, {first} and {last} will typically both be -0. Selecting a non-sequential set of GPU IDs (e.g. 0,1,3) -is not currently supported. +{first} is the ID (as reported by lammps/lib/gpu/nvc_get_devices) of +the first GPU that will be used on each node. {last} is the ID of the +last GPU that will be used on each node. If you have only one GPU per +node, {first} and {last} will typically both be 0. Selecting a +non-sequential set of GPU IDs (e.g. 0,1,3) is not currently supported. -{split} is the fraction of particles whose forces, torques, -energies, and/or virials will be calculated on the GPU. This -can be used to perform CPU and GPU force calculations -simultaneously. If {split} is negative, the software will -attempt to calculate the optimal fraction automatically -every 25 timesteps based on CPU and GPU timings. Because the GPU speedups -are dependent on the number of particles, automatic calculation of the -split can be less efficient, but typically results in loop times -within 20% of an optimal fixed split. +{split} is the fraction of particles whose forces, torques, energies, +and/or virials will be calculated on the GPU. This can be used to +perform CPU and GPU force calculations simultaneously. If {split} is +negative, the software will attempt to calculate the optimal fraction +automatically every 25 timesteps based on CPU and GPU timings. Because +the GPU speedups are dependent on the number of particles, automatic +calculation of the split can be less efficient, but typically results +in loop times within 20% of an optimal fixed split. 
-If you have two GPUs per node, 8 CPU cores per node, and -would like to run on 4 nodes with dynamic balancing of -force calculation across CPU and GPU cores, the fix -might be +If you have two GPUs per node, 8 CPU cores per node, and would like to +run on 4 nodes with dynamic balancing of force calculation across CPU +and GPU cores, the fix might be fix 0 all gpu force/neigh 0 1 -1 :pre -with LAMMPS run on 32 processes. In this case, all -CPU cores and GPU devices on the nodes would be utilized. -Each GPU device would be shared by 4 CPU cores. The -CPU cores would perform force calculations for some -fraction of the particles at the same time the GPUs -performed force calculation for the other particles. +with LAMMPS run on 32 processes. In this case, all CPU cores and GPU +devices on the nodes would be utilized. Each GPU device would be +shared by 4 CPU cores. The CPU cores would perform force calculations +for some fraction of the particles at the same time the GPUs performed +force calculation for the other particles. -Because of the large number of cores on each GPU -device, it might be more efficient to run on fewer -processes per GPU when the number of particles per process -is small (100's of particles); this can be necessary -to keep the GPU cores busy. +Because of the large number of cores on each GPU device, it might be +more efficient to run on fewer processes per GPU when the number of +particles per process is small (100's of particles); this can be +necessary to keep the GPU cores busy. GPU input script :h4 -In order to use GPU acceleration in LAMMPS, -"fix_gpu"_fix_gpu.html -should be used in order to initialize and configure the -GPUs for use. Additionally, GPU enabled styles must be -selected in the input script. Currently, -this is limited to a few "pair styles"_pair_style.html. -Some GPU-enabled styles have additional restrictions -listed in their documentation. 
+In order to use GPU acceleration in LAMMPS, "fix_gpu"_fix_gpu.html +should be used in order to initialize and configure the GPUs for +use. Additionally, GPU enabled styles must be selected in the input +script. Currently, this is limited to a few "pair +styles"_pair_style.html and PPPM. Some GPU-enabled styles have +additional restrictions listed in their documentation. GPU asynchronous pair computation :h4 -The GPU accelerated pair styles can be used to perform -pair style force calculation on the GPU while other -calculations are -performed on the CPU. One method to do this is to specify -a {split} in the gpu fix as described above. In this case, -force calculation for the pair style will also be performed -on the CPU. +The GPU accelerated pair styles can be used to perform pair style +force calculation on the GPU while other calculations are performed on +the CPU. One method to do this is to specify a {split} in the gpu fix +as described above. In this case, force calculation for the pair +style will also be performed on the CPU. -When the CPU work in a GPU pair style has finished, -the next force computation will begin, possibly before the -GPU has finished. If {split} is 1.0 in the gpu fix, the next -force computation will begin almost immediately. This can -be used to run a "hybrid"_pair_hybrid.html GPU pair style at -the same time as a hybrid CPU pair style. In this case, the -GPU pair style should be first in the hybrid command in order to -perform simultaneous calculations. This also -allows "bond"_bond_style.html, "angle"_angle_style.html, -"dihedral"_dihedral_style.html, "improper"_improper_style.html, -and "long-range"_kspace_style.html force -computations to be run simultaneously with the GPU pair style. -Once all CPU force computations have completed, the gpu fix -will block until the GPU has finished all work before continuing -the run. 
+When the CPU work in a GPU pair style has finished, the next force +computation will begin, possibly before the GPU has finished. If +{split} is 1.0 in the gpu fix, the next force computation will begin +almost immediately. This can be used to run a +"hybrid"_pair_hybrid.html GPU pair style at the same time as a hybrid +CPU pair style. In this case, the GPU pair style should be first in +the hybrid command in order to perform simultaneous calculations. This +also allows "bond"_bond_style.html, "angle"_angle_style.html, +"dihedral"_dihedral_style.html, "improper"_improper_style.html, and +"long-range"_kspace_style.html force computations to be run +simultaneously with the GPU pair style. Once all CPU force +computations have completed, the gpu fix will block until the GPU has +finished all work before continuing the run. GPU timing :h4 GPU accelerated pair styles can perform computations asynchronously -with CPU computations. The "Pair" time reported by LAMMPS -will be the maximum of the time required to complete the CPU -pair style computations and the time required to complete the GPU -pair style computations. Any time spent for GPU-enabled pair styles -for computations that run simultaneously with "bond"_bond_style.html, -"angle"_angle_style.html, "dihedral"_dihedral_style.html, -"improper"_improper_style.html, and "long-range"_kspace_style.html calculations -will not be included in the "Pair" time. +with CPU computations. The "Pair" time reported by LAMMPS will be the +maximum of the time required to complete the CPU pair style +computations and the time required to complete the GPU pair style +computations. Any time spent for GPU-enabled pair styles for +computations that run simultaneously with "bond"_bond_style.html, +"angle"_angle_style.html, "dihedral"_dihedral_style.html, +"improper"_improper_style.html, and "long-range"_kspace_style.html +calculations will not be included in the "Pair" time. 
-When {mode} for the gpu fix is force/neigh, -the time for neighbor list calculations on the GPU will be added -into the "Pair" time, not the "Neigh" time. A breakdown of the -times required for various tasks on the GPU (data copy, neighbor -calculations, force computations, etc.) are output only -with the LAMMPS screen output at the end of each run. These timings represent -total time spent on the GPU for each routine, regardless of asynchronous -CPU calculations. +When {mode} for the gpu fix is force/neigh, the time for neighbor list +calculations on the GPU will be added into the "Pair" time, not the +"Neigh" time. A breakdown of the times required for various tasks on +the GPU (data copy, neighbor calculations, force computations, etc.) +is output only with the LAMMPS screen output at the end of each +run. These timings represent total time spent on the GPU for each +routine, regardless of asynchronous CPU calculations. GPU single vs double precision :h4 -See the lammps/lib/gpu/README file for instructions on how to build -the LAMMPS gpu library for single, mixed, and double precision. The latter -requires that your GPU card supports double precision. +See the lammps/lib/gpu/README file for instructions on how to build +the LAMMPS gpu library for single, mixed, and double precision. The +latter requires that your GPU card supports double precision. :line diff --git a/doc/fix_gpu.html b/doc/fix_gpu.html index 72839bc0d1..f71a8e8a4a 100644 --- a/doc/fix_gpu.html +++ b/doc/fix_gpu.html @@ -48,14 +48,13 @@ should not be any problems with specifying this fix first in input scripts.

    mode specifies where neighbor list calculations will be performed. If mode is force, neighbor list calculation is performed on the CPU. If mode is force/neigh, neighbor list calculation is -performed on the GPU. GPU neighbor -list calculation currently cannot be used with a triclinic box. +performed on the GPU. GPU neighbor list calculation currently cannot be +used with a triclinic box. GPU neighbor list calculation currently +cannot be used with hybrid pair styles. GPU neighbor lists are not compatible with styles that are not GPU-enabled. When a non-GPU enabled style requires a neighbor list, it will also be built using CPU routines. In these cases, it will typically be more efficient -to only use CPU neighbor list builds. For hybrid pair -styles, GPU calculated neighbor lists might be less efficient because -no particles will be skipped in a given neighbor list. +to only use CPU neighbor list builds.

    first and last specify the GPUs that will be used for simulation. On each node, the GPU IDs in the inclusive range from first to last will @@ -77,7 +76,8 @@ style.

    In order to use GPU acceleration, a GPU enabled style must be selected in the input script in addition to this fix. Currently, -this is limited to a few pair styles. +this is limited to a few pair styles and +the PPPM kspace style.

More details about these settings and various possible hardware configurations are in this section of the @@ -95,8 +95,10 @@ the run command.

    Restrictions:

    The fix must be the first fix specified for a given run. The force/neigh -mode should not be used with a triclinic box or GPU-enabled pair styles -that need special_bonds settings. +mode should not be used with a triclinic box or hybrid +pair styles. +

    +

    split must be positive when using hybrid pair styles.

    Currently, group-ID must be all.

    diff --git a/doc/fix_gpu.txt b/doc/fix_gpu.txt index 88fa6f5414..df8fbadb8f 100644 --- a/doc/fix_gpu.txt +++ b/doc/fix_gpu.txt @@ -39,14 +39,13 @@ should not be any problems with specifying this fix first in input scripts. {mode} specifies where neighbor list calculations will be performed. If {mode} is force, neighbor list calculation is performed on the CPU. If {mode} is force/neigh, neighbor list calculation is -performed on the GPU. GPU neighbor -list calculation currently cannot be used with a triclinic box. +performed on the GPU. GPU neighbor list calculation currently cannot be +used with a triclinic box. GPU neighbor list calculation currently +cannot be used with "hybrid"_pair_hybrid.html pair styles. GPU neighbor lists are not compatible with styles that are not GPU-enabled. When a non-GPU enabled style requires a neighbor list, it will also be built using CPU routines. In these cases, it will typically be more efficient -to only use CPU neighbor list builds. For "hybrid"_pair_hybrid.html pair -styles, GPU calculated neighbor lists might be less efficient because -no particles will be skipped in a given neighbor list. +to only use CPU neighbor list builds. {first} and {last} specify the GPUs that will be used for simulation. On each node, the GPU IDs in the inclusive range from {first} to {last} will @@ -68,7 +67,8 @@ style. In order to use GPU acceleration, a GPU enabled style must be selected in the input script in addition to this fix. Currently, -this is limited to a few "pair styles"_pair_style.html. +this is limited to a few "pair styles"_pair_style.html and +the PPPM "kspace style"_kspace_style.html. More details about these settings and various possible hardware configuration are in "this section"_Section_start.html#2_8 of the @@ -86,8 +86,10 @@ the "run"_run.html command. [Restrictions:] The fix must be the first fix specified for a given run. 
The force/neigh -{mode} should not be used with a triclinic box or GPU-enabled pair styles -that need "special_bonds"_special_bonds.html settings. +{mode} should not be used with a triclinic box or "hybrid"_pair_hybrid.html +pair styles. + +{split} must be positive when using "hybrid"_pair_hybrid.html pair styles. Currently, group-ID must be all. diff --git a/doc/kspace_style.html b/doc/kspace_style.html index 57c035f570..30b0bcbc1b 100644 --- a/doc/kspace_style.html +++ b/doc/kspace_style.html @@ -15,7 +15,7 @@

    kspace_style style value 
     
    -
    • style = none or ewald or pppm or pppm/tip4p or ewald/n +
      • style = none or ewald or pppm or pppm/tip4p or ewald/n or pppm/gpu/single or pppm/gpu/double
          none value = none
           ewald value = precision
        @@ -25,6 +25,10 @@
           pppm/tip4p value = precision
             precision = desired accuracy
           ewald/n value = precision
        +    precision = desired accuracy
        +  pppm/gpu/single value = precision
        +    precision = desired accuracy
        +  pppm/gpu/double value = precision
             precision = desired accuracy 
         
        @@ -72,6 +76,11 @@ long-range potentials.

        Currently, only the ewald/n style can be used with non-orthogonal (triclinic symmetry) simulation boxes.

        +

The pppm/gpu/single and pppm/gpu/double styles are GPU-enabled +versions of pppm. See more details below. +

        +
        +

        When a kspace style is used, a pair style that includes the short-range correction to the pairwise Coulombic or other 1/r^N forces must also be selected. For Coulombic interactions, these styles are @@ -88,6 +97,27 @@ of K-space vectors for style ewald or the FFT grid size for style

        See the kspace_modify command for additional options of the K-space solvers that can be set.

        +
        + +

The pppm/gpu/single style performs single precision +charge assignment and force interpolation calculations on the GPU. +The pppm/gpu/double style performs the mesh calculations on the GPU +in double precision. FFT solves are calculated on the CPU in both +cases. If either pppm/gpu/single or pppm/gpu/double is used with +a GPU-enabled pair style, part of the PPPM calculation can be performed +concurrently on the GPU while other calculations for non-bonded and +bonded force calculation are performed on the CPU. +

        +

        More details about GPU settings and various possible hardware +configurations are in this section of the +manual. +

        +

        Additional requirements in your input script to run with GPU-enabled +PPPM styles are as follows: +

        +

        fix gpu must be used. The fix controls +the essential GPU selection and initialization steps. +
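A minimal fragment combining a GPU-enabled PPPM style with a GPU pair style might look as follows (a sketch only; the accuracy value, cutoff, fix ID, and GPU IDs are illustrative):

```
fix           0 all gpu force/neigh 0 0 1.0
pair_style    lj/cut/coul/long/gpu 10.0
kspace_style  pppm/gpu/single 1.0e-4
```

With this arrangement, part of the PPPM mesh work can run on the GPU concurrently with the CPU-side calculations, as described above.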

        Restrictions:

        A simulation must be 3d and periodic in all dimensions to use an Ewald @@ -103,6 +133,11 @@ LAMMPS section for more info. enabled if LAMMPS was built with that package. See the Making LAMMPS section for more info.

        +

        The pppm/gpu/single and pppm/gpu/double styles are part of the +"gpu" package. They are only enabled if LAMMPS was built with that +package. See the Making LAMMPS section for +more info. +

        When using a long-range pairwise TIP4P potential, you must use kspace style pppm/tip4p and vice versa.

        diff --git a/doc/kspace_style.txt b/doc/kspace_style.txt index b6b12696d2..217978c193 100644 --- a/doc/kspace_style.txt +++ b/doc/kspace_style.txt @@ -12,7 +12,7 @@ kspace_style command :h3 kspace_style style value :pre -style = {none} or {ewald} or {pppm} or {pppm/tip4p} or {ewald/n} :ulb,l +style = {none} or {ewald} or {pppm} or {pppm/tip4p} or {ewald/n} or {pppm/gpu/single} or {pppm/gpu/double} :ulb,l {none} value = none {ewald} value = precision precision = desired accuracy @@ -21,6 +21,10 @@ style = {none} or {ewald} or {pppm} or {pppm/tip4p} or {ewald/n} :ulb,l {pppm/tip4p} value = precision precision = desired accuracy {ewald/n} value = precision + precision = desired accuracy + {pppm/gpu/single} value = precision + precision = desired accuracy + {pppm/gpu/double} value = precision precision = desired accuracy :pre :ule @@ -67,6 +71,11 @@ long-range potentials. Currently, only the {ewald/n} style can be used with non-orthogonal (triclinic symmetry) simulation boxes. +The {pppm/gpu/single} and {pppm/gpu/double} styles are GPU-enabled +version of {pppm}. See more details below. + +:line + When a kspace style is used, a pair style that includes the short-range correction to the pairwise Coulombic or other 1/r^N forces must also be selected. For Coulombic interactions, these styles are @@ -83,6 +92,27 @@ of K-space vectors for style {ewald} or the FFT grid size for style See the "kspace_modify"_kspace_modify.html command for additional options of the K-space solvers that can be set. +:line + +The {pppm/gpu/single} style performs single precision +charge assignment and force interpolation calculations on the GPU. +The {pppm/gpu/double} style performs the mesh calculations on the GPU +in double precision. FFT solves are calculated on the CPU in both +cases. 
If either {pppm/gpu/single} or {pppm/gpu/double} are used with +a GPU-enabled pair style, part of the PPPM calculation can be performed +concurrently on the GPU while other calculations for non-bonded and +bonded force calculation are performed on the CPU. + +More details about GPU settings and various possible hardware +configurations are in "this section"_Section_start.html#2_8 of the +manual. + +Additional requirements in your input script to run with GPU-enabled +PPPM styles are as follows: + +"fix gpu"_fix_gpu.html must be used. The fix controls +the essential GPU selection and initialization steps. + [Restrictions:] A simulation must be 3d and periodic in all dimensions to use an Ewald @@ -98,6 +128,11 @@ The {ewald/n} style is part of the "user-ewaldn" package. It is only enabled if LAMMPS was built with that package. See the "Making LAMMPS"_Section_start.html#2_3 section for more info. +The {pppm/gpu/single} and {pppm/gpu/double} styles are part of the +"gpu" package. They are only enabled if LAMMPS was built with that +package. See the "Making LAMMPS"_Section_start.html#2_3 section for +more info. + When using a long-range pairwise TIP4P potential, you must use kspace style {pppm/tip4p} and vice versa. diff --git a/doc/pair_coeff.html b/doc/pair_coeff.html index fa98d3addd..0f54432555 100644 --- a/doc/pair_coeff.html +++ b/doc/pair_coeff.html @@ -134,6 +134,7 @@ the pair_style command, and coefficients specified by the associated
      • pair_style lj/cut/coul/long/gpu - GPU-enabled version of LJ with long-range Coulomb
      • pair_style lj/cut/coul/long/tip4p - LJ with long-range Coulomb for TIP4P water
      • pair_style lj/expand - Lennard-Jones for variable size particles +
      • pair_style lj/expand/gpu - GPU-enabled version of lj/expand
      • pair_style lj/gromacs - GROMACS-style Lennard-Jones potential
      • pair_style lj/gromacs/coul/gromacs - GROMACS-style LJ and Coulombic potential
      • pair_style lj/smooth - smoothed Lennard-Jones potential @@ -142,6 +143,7 @@ the pair_style command, and coefficients specified by the associated
      • pair_style lubricate - hydrodynamic lubrication forces
      • pair_style meam - modified embedded atom method (MEAM)
      • pair_style morse - Morse potential +
      • pair_style morse/gpu - GPU-enabled version of Morse potential
      • pair_style morse/opt - optimized version of Morse potential
      • pair_style peri/lps - peridynamic LPS potential
      • pair_style peri/pmb - peridynamic PMB potential diff --git a/doc/pair_coeff.txt b/doc/pair_coeff.txt index baf95341db..308e35329c 100644 --- a/doc/pair_coeff.txt +++ b/doc/pair_coeff.txt @@ -131,6 +131,7 @@ the pair_style command, and coefficients specified by the associated "pair_style lj/cut/coul/long/gpu"_pair_lj.html - GPU-enabled version of LJ with long-range Coulomb "pair_style lj/cut/coul/long/tip4p"_pair_lj.html - LJ with long-range Coulomb for TIP4P water "pair_style lj/expand"_pair_lj_expand.html - Lennard-Jones for variable size particles +"pair_style lj/expand/gpu"_pair_lj_expand.html - GPU-enabled version of lj/expand "pair_style lj/gromacs"_pair_gromacs.html - GROMACS-style Lennard-Jones potential "pair_style lj/gromacs/coul/gromacs"_pair_gromacs.html - GROMACS-style LJ and Coulombic potential "pair_style lj/smooth"_pair_lj_smooth.html - smoothed Lennard-Jones potential @@ -139,6 +140,7 @@ the pair_style command, and coefficients specified by the associated "pair_style lubricate"_pair_lubricate.html - hydrodynamic lubrication forces "pair_style meam"_pair_meam.html - modified embedded atom method (MEAM) "pair_style morse"_pair_morse.html - Morse potential +"pair_style morse/gpu"_pair_morse.html - GPU-enabled version of Morse potential "pair_style morse/opt"_pair_morse.html - optimized version of Morse potential "pair_style peri/lps"_pair_peri.html - peridynamic LPS potential "pair_style peri/pmb"_pair_peri.html - peridynamic PMB potential diff --git a/doc/pair_lj_expand.html b/doc/pair_lj_expand.html index 8dfb3d2068..9e766d3f4b 100644 --- a/doc/pair_lj_expand.html +++ b/doc/pair_lj_expand.html @@ -11,10 +11,14 @@

        pair_style lj/expand command

        +

        pair_style lj/expand/gpu command +

        Syntax:

        pair_style lj/expand cutoff 
         
        +
        pair_style lj/expand/gpu cutoff 
        +
        • cutoff = global cutoff for lj/expand interactions (distance units)

        Examples: @@ -49,6 +53,29 @@ commands, or by mixing as described below:

        The delta values can be positive or negative. The last coefficient is optional. If not specified, the global LJ cutoff is used.

        +

        Style lj/expand/gpu is a GPU-enabled version of style lj/expand. +See more details below. +

        +
        + +

The lj/expand/gpu style is identical to the lj/expand style, +except that each processor off-loads its pairwise calculations to a +GPU chip. Depending on the hardware available on your system, this can provide a +speed-up. See the Running on GPUs section of +the manual for more details about hardware and software requirements +for using GPUs. +

        +

More details about these settings and various possible hardware +configurations are in this section of the +manual. +

        +

        Additional requirements in your input script to run with GPU-enabled styles +are as follows: +

        +

        The newton pair setting must be off and +fix gpu must be used. The fix controls +the essential GPU selection and initialization steps. +
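The requirements above can be sketched as an input fragment (a sketch only; the fix ID, GPU IDs, cutoff, and coefficients are illustrative):

```
newton      off                         # required for GPU-enabled pair styles
fix         0 all gpu force/neigh 0 0 1.0   # must be the first fix in the run
pair_style  lj/expand/gpu 2.5
pair_coeff  1 1 1.0 1.0 0.5             # epsilon sigma delta; global cutoff used
```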


        Mixing, shift, table, tail correction, restart, rRESPA info: @@ -80,7 +107,11 @@ to be specified in an input script that reads a restart file.


        -

        Restrictions: none +

        Restrictions: +

        +

        The lj/expand/gpu style is part of the "gpu" package. It is only +enabled if LAMMPS was built with that package. See the Making +LAMMPS section for more info.

        Related commands:

        diff --git a/doc/pair_lj_expand.txt b/doc/pair_lj_expand.txt index 3c82f5b944..96487df87e 100644 --- a/doc/pair_lj_expand.txt +++ b/doc/pair_lj_expand.txt @@ -7,10 +7,12 @@ :line pair_style lj/expand command :h3 +pair_style lj/expand/gpu command :h3 [Syntax:] pair_style lj/expand cutoff :pre +pair_style lj/expand/gpu cutoff :pre cutoff = global cutoff for lj/expand interactions (distance units) :ul @@ -46,6 +48,29 @@ cutoff (distance units) :ul The delta values can be positive or negative. The last coefficient is optional. If not specified, the global LJ cutoff is used. +Style {lj/expand/gpu} is a GPU-enabled version of style {lj/expand}. +See more details below. + +:line + +The {lj/expand/gpu} style is identical to the {lj/expand} style, +except that each processor off-loads its pairwise calculations to a +GPU chip. Depending on the hardware available on your system this can provide a +speed-up. See the "Running on GPUs"_Section_start.html#2_8 section of +the manual for more details about hardware and software requirements +for using GPUs. + +More details about these settings and various possible hardware +configuration are in "this section"_Section_start.html#2_8 of the +manual. + +Additional requirements in your input script to run with GPU-enabled styles +are as follows: + +The "newton pair"_newton.html setting must be {off} and +"fix gpu"_fix_gpu.html must be used. The fix controls +the essential GPU selection and initialization steps. + :line [Mixing, shift, table, tail correction, restart, rRESPA info]: @@ -77,7 +102,11 @@ This pair style can only be used via the {pair} keyword of the :line -[Restrictions:] none +[Restrictions:] + +The {lj/expand/gpu} style is part of the "gpu" package. It is only +enabled if LAMMPS was built with that package. See the "Making +LAMMPS"_Section_start.html#2_3 section for more info. 
[Related commands:] diff --git a/doc/pair_morse.html b/doc/pair_morse.html index e5183ef53e..0f505c5d28 100644 --- a/doc/pair_morse.html +++ b/doc/pair_morse.html @@ -11,12 +11,18 @@

        pair_style morse command

        +

        pair_style morse/gpu command +

        pair_style morse/opt command

        Syntax:

        pair_style morse cutoff 
         
        +
        pair_style morse/gpu cutoff 
        +
        +
        pair_style morse/opt cutoff 
        +
        • cutoff = global cutoff for Morse interactions (distance units)

        Examples: @@ -53,6 +59,29 @@ give identical answers. Depending on system size and the processor you are running on, it may be 5-25% faster (for the pairwise portion of the run time).

        +

        Style morse/gpu is a GPU-enabled version of style morse. +See more details below. +

        +
        + +

The morse/gpu style is identical to the morse style, +except that each processor off-loads its pairwise calculations to a +GPU chip. Depending on the hardware available on your system, this can provide a +speed-up. See the Running on GPUs section of +the manual for more details about hardware and software requirements +for using GPUs. +

        +

More details about these settings and various possible hardware +configurations are in this section of the +manual. +

        +

        Additional requirements in your input script to run with GPU-enabled styles +are as follows: +

        +

        The newton pair setting must be off and +fix gpu must be used. The fix controls +the essential GPU selection and initialization steps. +
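The requirements above can be sketched as an input fragment (a sketch only; the fix ID, GPU IDs, cutoff, and coefficients are illustrative):

```
newton      off                         # required for GPU-enabled pair styles
fix         0 all gpu force/neigh 0 0 1.0   # must be the first fix in the run
pair_style  morse/gpu 2.5
pair_coeff  * * 100.0 2.0 1.5           # D0 alpha r0; global cutoff used
```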


        Mixing, shift, table, tail correction, restart, rRESPA info: @@ -82,8 +111,9 @@ to be specified in an input script that reads a restart file.

        Restrictions:

        -

        The morse/opt style is part of the "opt" package. It is only -enabled if LAMMPS was built with that package. See the Making +

        The morse/opt style is part of the "opt" package. The morse/gpu +style is part of the "gpu" package. They are only +enabled if LAMMPS was built with those packages. See the Making LAMMPS section for more info.

        Related commands: diff --git a/doc/pair_morse.txt b/doc/pair_morse.txt index 1c1799c242..8e23d84767 100644 --- a/doc/pair_morse.txt +++ b/doc/pair_morse.txt @@ -7,11 +7,14 @@ :line pair_style morse command :h3 +pair_style morse/gpu command :h3 pair_style morse/opt command :h3 [Syntax:] pair_style morse cutoff :pre +pair_style morse/gpu cutoff :pre +pair_style morse/opt cutoff :pre cutoff = global cutoff for Morse interactions (distance units) :ul @@ -49,6 +52,29 @@ give identical answers. Depending on system size and the processor you are running on, it may be 5-25% faster (for the pairwise portion of the run time). +Style {morse/gpu} is a GPU-enabled version of style {morse}. +See more details below. + +:line + +The {morse/gpu} style is identical to the {morse} style, +except that each processor off-loads its pairwise calculations to a +GPU chip. Depending on the hardware available on your system this can provide a +speed-up. See the "Running on GPUs"_Section_start.html#2_8 section of +the manual for more details about hardware and software requirements +for using GPUs. + +More details about these settings and various possible hardware +configuration are in "this section"_Section_start.html#2_8 of the +manual. + +Additional requirements in your input script to run with GPU-enabled styles +are as follows: + +The "newton pair"_newton.html setting must be {off} and +"fix gpu"_fix_gpu.html must be used. The fix controls +the essential GPU selection and initialization steps. + :line [Mixing, shift, table, tail correction, restart, rRESPA info]: @@ -78,8 +104,9 @@ These pair styles can only be used via the {pair} keyword of the [Restrictions:] -The {morse/opt} style is part of the "opt" package. It is only -enabled if LAMMPS was built with that package. See the "Making +The {morse/opt} style is part of the "opt" package. The {morse/gpu} +style is part of the "gpu" package. They are only +enabled if LAMMPS was built with those packages. 
See the "Making LAMMPS"_Section_start.html#2_3 section for more info. [Related commands:] diff --git a/doc/pair_style.html b/doc/pair_style.html index 450428a7bc..862a22d7cc 100644 --- a/doc/pair_style.html +++ b/doc/pair_style.html @@ -136,6 +136,7 @@ the pair_style command, and coefficients specified by the associated

      • pair_style lj/cut/coul/long/gpu - GPU-enabled version of LJ with long-range Coulomb
      • pair_style lj/cut/coul/long/tip4p - LJ with long-range Coulomb for TIP4P water
      • pair_style lj/expand - Lennard-Jones for variable size particles +
      • pair_style lj/expand/gpu - GPU-enabled version of lj/expand
      • pair_style lj/gromacs - GROMACS-style Lennard-Jones potential
      • pair_style lj/gromacs/coul/gromacs - GROMACS-style LJ and Coulombic potential
      • pair_style lj/smooth - smoothed Lennard-Jones potential @@ -144,6 +145,7 @@ the pair_style command, and coefficients specified by the associated
      • pair_style lubricate - hydrodynamic lubrication forces
      • pair_style meam - modified embedded atom method (MEAM)
      • pair_style morse - Morse potential +
      • pair_style morse/gpu - GPU-enabled version of Morse potential
      • pair_style morse/opt - optimized version of Morse potential
      • pair_style peri/lps - peridynamic LPS potential
      • pair_style peri/pmb - peridynamic PMB potential diff --git a/doc/pair_style.txt b/doc/pair_style.txt index 0db8457ea5..1943b32c99 100644 --- a/doc/pair_style.txt +++ b/doc/pair_style.txt @@ -133,6 +133,7 @@ the pair_style command, and coefficients specified by the associated "pair_style lj/cut/coul/long/gpu"_pair_lj.html - GPU-enabled version of LJ with long-range Coulomb "pair_style lj/cut/coul/long/tip4p"_pair_lj.html - LJ with long-range Coulomb for TIP4P water "pair_style lj/expand"_pair_lj_expand.html - Lennard-Jones for variable size particles +"pair_style lj/expand/gpu"_pair_lj_expand.html - GPU-enabled version of lj/expand "pair_style lj/gromacs"_pair_gromacs.html - GROMACS-style Lennard-Jones potential "pair_style lj/gromacs/coul/gromacs"_pair_gromacs.html - GROMACS-style LJ and Coulombic potential "pair_style lj/smooth"_pair_lj_smooth.html - smoothed Lennard-Jones potential @@ -141,6 +142,7 @@ the pair_style command, and coefficients specified by the associated "pair_style lubricate"_pair_lubricate.html - hydrodynamic lubrication forces "pair_style meam"_pair_meam.html - modified embedded atom method (MEAM) "pair_style morse"_pair_morse.html - Morse potential +"pair_style morse/gpu"_pair_morse.html - GPU-enabled version of Morse potential "pair_style morse/opt"_pair_morse.html - optimized version of Morse potential "pair_style peri/lps"_pair_peri.html - peridynamic LPS potential "pair_style peri/pmb"_pair_peri.html - peridynamic PMB potential
pppm GPU single and double Mike Brown (ORNL)
pair_style lj/cut/expand Inderaj Bains (NVIDIA)
temperature accelerated dynamics (TAD) Aidan Thompson (Sandia)
pair reax/c and fix qeq/reax Metin Aktulga (Purdue, now LBNL)
DREIDING force field, pair_style hbond/dreiding, etc Tod Pascal (CalTech)
fix adapt and compute ti for thermodynamic integration for free energies Sai Jayaraman (Sandia)
pair born and pair gauss Sai Jayaraman (Sandia)
stochastic rotation dynamics (SRD) via fix srd Jeremy Lechman (Sandia) and Pieter in 't Veld (BASF)
ipp Perl script tool Reese Jones (Sandia)
eam_database and createatoms tools Xiaowang Zhou (Sandia)
electron force field (eFF) Andres Jaramillo-Botero and Julius Su (Caltech)