/* ----------------------------------------------------------------------
   LAMMPS - Large-scale Atomic/Molecular Massively Parallel Simulator
   http://lammps.sandia.gov, Sandia National Laboratories
   Steve Plimpton, sjplimp@sandia.gov

   Copyright (2003) Sandia Corporation.  Under the terms of Contract
   DE-AC04-94AL85000 with Sandia Corporation, the U.S. Government retains
   certain rights in this software.  This software is distributed under
   the GNU General Public License.

   See the README file in the top-level LAMMPS directory.
------------------------------------------------------------------------- */

/* ----------------------------------------------------------------------
   Contributing authors: Mike Brown (ORNL), brownw@ornl.gov
                         Peng Wang (Nvidia), penwang@nvidia.com
                         Inderaj Bains (NVIDIA), ibains@nvidia.com
                         Paul Crozier (SNL), pscrozi@sandia.gov
------------------------------------------------------------------------- */

GENERAL NOTES

This library, libgpu.a, provides routines for GPU acceleration of
LAMMPS pair styles.  Compilation of this library requires installing
the CUDA GPU driver and CUDA toolkit for your operating system.  In
addition to the LAMMPS library, the binary nvc_get_devices will also
be built; this can be used to query the names and properties of the
GPU devices on your system.  A Makefile for OpenCL compilation is
provided, but OpenCL use is not currently supported by the developers.

NOTE: Installation of the CUDA SDK is not required.

Current pair styles supporting GPU acceleration (a usage sketch
follows the list):

 1. lj/cut
 2. lj96/cut
 3. lj/expand
 4. lj/cut/coul/cut
 5. lj/cut/coul/long
 6. lj/charmm/coul/long
 7. morse
 8. cg/cmm
 9. cg/cmm/coul/long
10. gayberne
11. pppm

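As an illustration only (a hedged sketch, not definitive syntax: the
exact invocation of the accelerated styles is version-dependent and
documented in the LAMMPS user manual), an input script selects one of
these styles by using its /gpu variant in place of the CPU style:

  # hypothetical input-script fragment for accelerated lj/cut
  pair_style  lj/cut/gpu 2.5
  pair_coeff  * * 1.0 1.0
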
MULTIPLE LAMMPS PROCESSES

Multiple LAMMPS MPI processes can share GPUs on the system, but
multiple GPUs cannot be utilized by a single MPI process.  In many
cases, the best performance will be obtained by running as many MPI
processes as there are CPU cores available, with the condition that
the number of MPI processes is an integer multiple of the number of
GPUs being used.  See the LAMMPS user manual for details on running
with GPU acceleration.

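For example, a hedged sketch for a node with 8 CPU cores and 2 GPUs;
the binary name follows from the build example at the end of this
file, and the input file name is a placeholder:

  # 8 MPI processes sharing 2 GPUs; 8 is an integer multiple of 2
  mpirun -np 8 ./lmp_linux -in in.lj
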
BUILDING AND PRECISION MODES

To build, edit the CUDA_ARCH, CUDA_PRECISION, and CUDA_HOME variables
in one of the Makefiles.  CUDA_ARCH should be set based on the compute
capability of your GPU; this can be verified by running the
nvc_get_devices executable after the build is complete.  Additionally,
the GPU package must be installed and compiled for LAMMPS.  This may
require editing the gpu_SYSPATH variable in the LAMMPS makefile.

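For illustration, plausible settings in Makefile.linux, assuming a
card with compute capability 1.3 and a default toolkit location; both
values must be adjusted for your system:

  CUDA_HOME      = /usr/local/cuda   # toolkit location (assumption)
  CUDA_ARCH      = -arch=sm_13       # card with compute capability 1.3
  CUDA_PRECISION = -D_SINGLE_DOUBLE  # precision mode (see below)
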
Please note that the GPU library accesses the CUDA driver library
directly, so it needs to be linked not only against the CUDA runtime
library (libcudart.so) that ships with the CUDA toolkit, but also
against the CUDA driver library (libcuda.so) that ships with the
Nvidia driver.  If you are compiling LAMMPS on the head node of a GPU
cluster, this library may not be installed, so you may need to copy it
over from one of the compute nodes (best into this directory).

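For example, a hedged sketch of copying the driver library over from
a compute node; the hostname is a placeholder and the library path
varies by Linux distribution:

  # run from ~/lammps/lib/gpu on the head node
  scp node001:/usr/lib64/libcuda.so* .
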
The gpu library supports three precision modes as determined by the
CUDA_PRECISION variable:

CUDA_PRECISION = -D_SINGLE_SINGLE  # Single precision for all calculations
CUDA_PRECISION = -D_DOUBLE_DOUBLE  # Double precision for all calculations
CUDA_PRECISION = -D_SINGLE_DOUBLE  # Accumulation of forces, etc. in double

NOTE: PPPM acceleration can only be run on GPUs with compute
capability >= 1.1.  You will get the error "GPU library not compiled
for this accelerator." when attempting to run PPPM on a GPU with
compute capability 1.0.

NOTE: Double precision is only supported on certain GPUs (with
compute capability >= 1.3).  If you compile the GPU library for a GPU
with compute capability 1.1 or 1.2, then only single-precision FFTs
are supported, i.e. LAMMPS has to be compiled with -DFFT_SINGLE.  For
details on configuring FFT support in LAMMPS, see
http://lammps.sandia.gov/doc/Section_start.html#2_2_4

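For illustration, assuming FFTW is the FFT library, the flag is added
to the FFT settings in the LAMMPS machine Makefile; the FFT link line
may also need to point at a single-precision FFTW build, as described
on the manual page above:

  FFT_INC = -DFFT_FFTW -DFFT_SINGLE   # single-precision FFTs
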
NOTE: For Tesla and other graphics cards with compute capability >= 1.3,
make sure that -arch=sm_13 is set on the CUDA_ARCH line.

NOTE: For Fermi, make sure that -arch=sm_20 is set on the CUDA_ARCH
line.

NOTE: The gayberne/gpu pair style will only be installed if the
ASPHERE package has been installed.

NOTE: The cg/cmm/gpu and cg/cmm/coul/long/gpu pair styles will only
be installed if the USER-CG-CMM package has been installed.

NOTE: The lj/cut/coul/long/gpu, cg/cmm/coul/long/gpu, and pppm/gpu
styles will only be installed if the KSPACE package has been
installed.

NOTE: The lj/charmm/coul/long/gpu style will only be installed if the
MOLECULE package has been installed.

EXAMPLE BUILD PROCESS

cd ~/lammps/lib/gpu
emacs Makefile.linux
make -f Makefile.linux
./nvc_get_devices
cd ../../src
emacs ./MAKE/Makefile.linux
make yes-asphere
make yes-kspace
make yes-gpu
make linux