cudpp_mini
geryon
Makefile.fermi
Makefile.lammps
Makefile.lens
Makefile.lincoln
Makefile.linux
Makefile.linux_opencl
Makefile.longhorn
Makefile.mac
Makefile.mac_opencl
Makefile.serial
Makefile.serial_opencl
Nvidia.makefile
Opencl.makefile
README
lal_answer.cpp
lal_answer.h
lal_atom.cpp
lal_atom.cu
lal_atom.h
lal_aux_fun1.h
lal_balance.h
lal_base_atomic.cpp
lal_base_atomic.h
lal_base_charge.cpp
lal_base_charge.h
lal_base_ellipsoid.cpp
lal_base_ellipsoid.h
lal_buck.cpp
lal_buck.cu
lal_buck.h
lal_buck_coul.cpp
lal_buck_coul.cu
lal_buck_coul.h
lal_buck_coul_ext.cpp
lal_buck_coul_long.cpp
lal_buck_coul_long.cu
lal_buck_coul_long.h
lal_buck_coul_long_ext.cpp
lal_buck_ext.cpp
lal_cg_cmm.cpp
lal_cg_cmm.cu
lal_cg_cmm.h
lal_cg_cmm_ext.cpp
lal_cg_cmm_long.cpp
lal_cg_cmm_long.cu
lal_cg_cmm_long.h
lal_cg_cmm_long_ext.cpp
lal_charmm_long.cpp
lal_charmm_long.cu
lal_charmm_long.h
lal_charmm_long_ext.cpp
lal_coul_long.cpp
lal_coul_long.cu
lal_coul_long.h
lal_coul_long_ext.cpp
lal_device.cpp
lal_device.cu
lal_device.h
lal_eam.cpp
lal_eam.cu
lal_eam.h
lal_eam_ext.cpp
lal_ellipsoid_extra.h
lal_ellipsoid_nbor.cu
lal_gayberne.cpp
lal_gayberne.cu
lal_gayberne.h
lal_gayberne_ext.cpp
lal_gayberne_lj.cu
lal_lj.cpp
lal_lj.cu
lal_lj.h
lal_lj96.cpp
lal_lj96.cu
lal_lj96.h
lal_lj96_ext.cpp
lal_lj_class2_long.cpp
lal_lj_class2_long.cu
lal_lj_class2_long.h
lal_lj_class2_long_ext.cpp
lal_lj_coul.cpp
lal_lj_coul.cu
lal_lj_coul.h
lal_lj_coul_ext.cpp
lal_lj_coul_long.cpp
lal_lj_coul_long.cu
lal_lj_coul_long.h
lal_lj_coul_long_ext.cpp
lal_lj_expand.cpp
lal_lj_expand.cu
lal_lj_expand.h
lal_lj_expand_ext.cpp
lal_lj_ext.cpp
lal_morse.cpp
lal_morse.cu
lal_morse.h
lal_morse_ext.cpp
lal_neighbor.cpp
lal_neighbor.h
lal_neighbor_cpu.cu
lal_neighbor_gpu.cu
lal_neighbor_shared.cpp
lal_neighbor_shared.h
lal_pppm.cpp
lal_pppm.cu
lal_pppm.h
lal_pppm_ext.cpp
lal_precision.h
lal_preprocessor.h
lal_re_squared.cpp
lal_re_squared.cu
lal_re_squared.h
lal_re_squared_ext.cpp
lal_re_squared_lj.cu
lal_table.cpp
lal_table.cu
lal_table.h
lal_table_ext.cpp
lal_yukawa.cpp
lal_yukawa.cu
lal_yukawa.h
lal_yukawa_ext.cpp
README
--------------------------------
LAMMPS ACCELERATOR LIBRARY
--------------------------------

                  W. Michael Brown (ORNL)
                  Peng Wang (NVIDIA)
                  Axel Kohlmeyer (Temple)
                  Steve Plimpton (SNL)
                  Inderaj Bains (NVIDIA)

-------------------------------------------------------------------

This directory has source files to build a library that LAMMPS links
against when using the GPU package.

When you are done building this library, two files should exist in
this directory:

libgpu.a           the library LAMMPS will link against
Makefile.lammps    settings the LAMMPS Makefile will import

The latter file will have settings like this (a setting can be omitted
if blank):

gpu_SYSINC =
gpu_SYSLIB = -lcudart -lcuda
gpu_SYSPATH = -L/usr/local/cuda/lib64

SYSINC is for settings needed to compile LAMMPS source files
SYSLIB is for additional system libraries needed by this package
SYSPATH is the path(s) to where those libraries are

You must ensure these settings are correct for your system, else the
LAMMPS build will likely fail.

-------------------------------------------------------------------

GENERAL NOTES
--------------------------------

This library, libgpu.a, provides routines for GPU acceleration of
certain LAMMPS styles and neighbor list builds. Compilation of this
library requires installing the CUDA GPU driver and CUDA toolkit for
your operating system. Installation of the CUDA SDK is not necessary.

In addition to the LAMMPS library, the binary nvc_get_devices will
also be built. This can be used to query the names and properties of
the GPU devices on your system. A Makefile for OpenCL compilation is
provided, but support for OpenCL use is not currently provided by the
developers.

Details of the implementation are provided in:

Brown, W.M., Wang, P., Plimpton, S.J., Tharrington, A.N. Implementing
Molecular Dynamics on Hybrid High Performance Computers - Short Range
Forces. Computer Physics Communications. 2011. 182: p. 898-911.

and

Brown, W.M., Kohlmeyer, A., Plimpton, S.J., Tharrington, A.N.
Implementing Molecular Dynamics on Hybrid High Performance Computers -
Particle-Particle Particle-Mesh. Computer Physics Communications. 2011.
In press.

Current styles supporting GPU acceleration:

 1. lj/cut
 2. lj96/cut
 3. lj/expand
 4. lj/cut/coul/cut
 5. lj/cut/coul/long
 6. lj/charmm/coul/long
 7. lj/class2
 8. lj/class2/coul/long
 9. morse
10. cg/cmm
11. cg/cmm/coul/long
12. coul/long
13. gayberne
14. resquared
15. pppm

MULTIPLE LAMMPS PROCESSES
--------------------------------

Multiple LAMMPS MPI processes can share GPUs on the system, but
multiple GPUs cannot be utilized by a single MPI process. In many
cases, the best performance will be obtained by running as many MPI
processes as there are CPU cores available, with the condition that
the number of MPI processes is an integer multiple of the number of
GPUs being used. See the LAMMPS user manual for details on running
with GPU acceleration.

BUILDING AND PRECISION MODES
--------------------------------

To build, edit the CUDA_ARCH, CUDA_PRECISION, and CUDA_HOME variables
in one of the Makefiles. CUDA_ARCH should be set based on the compute
capability of your GPU; this can be verified by running the
nvc_get_devices executable after the build is complete. Additionally,
the GPU package must be installed and compiled for LAMMPS. This may
require editing the gpu_SYSPATH variable in the LAMMPS makefile.

Please note that the GPU library accesses the CUDA driver library
directly, so it needs to be linked not only to the CUDA runtime
library (libcudart.so) that ships with the CUDA toolkit, but also to
the CUDA driver library (libcuda.so) that ships with the NVIDIA
driver. If you are compiling LAMMPS on the head node of a GPU cluster,
this library may not be installed, so you may need to copy it over
from one of the compute nodes (best into this directory).
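To make the build variables concrete, a hypothetical fragment of an
edited Makefile.linux might look like the following. The values, the
toolkit path, and the library/link lines are illustrative assumptions;
set them according to your GPU, driver, and CUDA toolkit install:

```make
# Sketch only: adjust every value for your own system.
CUDA_HOME      = /usr/local/cuda        # assumed CUDA toolkit location
CUDA_ARCH      = -arch=sm_13            # e.g. compute capability 1.3 hardware
CUDA_PRECISION = -D_SINGLE_DOUBLE       # single-precision forces, double accumulation

# Link both the CUDA runtime (from the toolkit) and the CUDA driver
# library; copy libcuda.so into this directory when building on a
# head node without the driver installed.
gpu_SYSLIB  = -lcudart -lcuda
gpu_SYSPATH = -L$(CUDA_HOME)/lib64 -L.
```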
The gpu library supports 3 precision modes, as determined by the
CUDA_PRECISION variable:

CUDA_PRECISION = -D_SINGLE_SINGLE  # Single precision for all calculations
CUDA_PRECISION = -D_DOUBLE_DOUBLE  # Double precision for all calculations
CUDA_PRECISION = -D_SINGLE_DOUBLE  # Accumulation of forces, etc. in double

NOTE: PPPM acceleration can only be run on GPUs with compute
capability >= 1.1. You will get the error "GPU library not compiled
for this accelerator." when attempting to run PPPM on a GPU with
compute capability 1.0.

NOTE: Double precision is only supported on certain GPUs (with
compute capability >= 1.3). If you compile the GPU library for a GPU
with compute capability 1.1 or 1.2, then only single precision FFTs
are supported, i.e. LAMMPS has to be compiled with -DFFT_SINGLE. For
details on configuring FFT support in LAMMPS, see
http://lammps.sandia.gov/doc/Section_start.html#2_2_4

NOTE: For graphics cards with compute capability >= 1.3 (e.g. Tesla
C1060), make sure that -arch=sm_13 is set on the CUDA_ARCH line.

NOTE: For newer graphics cards (a.k.a. "Fermi", e.g. Tesla C2050),
make sure that either -arch=sm_20 or -arch=sm_21 is set on the
CUDA_ARCH line, depending on hardware and CUDA toolkit version.

NOTE: The gayberne/gpu pair style will only be installed if the
ASPHERE package has been installed.

NOTE: The cg/cmm/gpu and cg/cmm/coul/long/gpu pair styles will only
be installed if the USER-CG-CMM package has been installed.

NOTE: The lj/cut/coul/long/gpu, cg/cmm/coul/long/gpu, coul/long/gpu,
lj/charmm/coul/long/gpu, and pppm/gpu styles will only be installed
if the KSPACE package has been installed.

EXAMPLE BUILD PROCESS
--------------------------------

cd ~/lammps/lib/gpu
emacs Makefile.linux
make -f Makefile.linux
./nvc_get_devices
cd ../../src
emacs ./MAKE/Makefile.linux
make yes-asphere
make yes-kspace
make yes-gpu
make linux
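The process-count advice in the MULTIPLE LAMMPS PROCESSES section
above can be sanity-checked with a few lines of shell. The core and
GPU counts here (12 and 3) are hypothetical placeholders, not
recommendations:

```shell
# Hypothetical node: 12 CPU cores, 3 GPUs (adjust for your system)
CORES=12
GPUS=3
NP=$CORES   # run one MPI process per core

# Best performance usually requires NP to be an integer multiple of GPUS
if [ $(( NP % GPUS )) -eq 0 ]; then
  echo "ok: $NP MPI processes, $(( NP / GPUS )) per GPU"
else
  echo "adjust NP: $NP is not a multiple of $GPUS" >&2
fi
```

With counts like these one could then launch, for example,
`mpirun -np 12 ./lmp_linux -in in.script` (lmp_linux is the binary
produced by `make linux` above; the input script name is
hypothetical).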