README
This directory has source files to build a library that LAMMPS links
against when using the GPU package.

When you are done building this library, two files should exist in this
directory:

libgpu.a          the library LAMMPS will link against
Makefile.lammps   settings the LAMMPS Makefile will import

The latter file will have settings like this (settings that are blank
can be omitted):

gpu_SYSINC =
gpu_SYSLIB = -lcudart -lcuda
gpu_SYSPATH = -L/usr/local/cuda/lib64

SYSINC is for settings needed to compile LAMMPS source files
SYSLIB is for additional system libraries needed by this package
SYSPATH is the path(s) to where those libraries are

You must ensure these settings are correct for your system, else the
LAMMPS build will likely fail.

-------------------------------------------------------------------------

Contributing authors:
Mike Brown (ORNL), brownw@ornl.gov
Peng Wang (Nvidia), penwang@nvidia.com
Inderaj Bains (NVIDIA), ibains@nvidia.com
Paul Crozier (SNL), pscrozi@sandia.gov

-------------------------------------------------------------------------

                            GENERAL NOTES

This library, libgpu.a, provides routines for GPU acceleration of LAMMPS
pair styles. Compilation of this library requires installing the CUDA
GPU driver and CUDA toolkit for your operating system. In addition to
the LAMMPS library, the binary nvc_get_devices will also be built. This
can be used to query the names and properties of GPU devices on your
system. A Makefile for OpenCL compilation is included, but the
developers do not currently support OpenCL use.

NOTE: Installation of the CUDA SDK is not required.

Current pair styles supporting GPU acceleration:

  1. lj/cut
  2. lj96/cut
  3. lj/expand
  4. lj/cut/coul/cut
  5. lj/cut/coul/long
  6. lj/charmm/coul/long
  7. morse
  8. cg/cmm
  9. cg/cmm/coul/long
 10. coul/long
 11. gayberne
 12. pppm

                       MULTIPLE LAMMPS PROCESSES

Multiple LAMMPS MPI processes can share GPUs on the system, but multiple
GPUs cannot be utilized by a single MPI process. In many cases, the best
performance will be obtained by running as many MPI processes as there
are CPU cores, provided that the number of MPI processes is an integer
multiple of the number of GPUs being used. See the LAMMPS user manual
for details on running with GPU acceleration.

                     BUILDING AND PRECISION MODES

To build, edit the CUDA_ARCH, CUDA_PRECISION, and CUDA_HOME variables in
one of the Makefiles. CUDA_ARCH should be set based on the compute
capability of your GPU. This can be verified by running the
nvc_get_devices executable after the build is complete. Additionally,
the GPU package must be installed and compiled for LAMMPS. This may
require editing the gpu_SYSPATH variable in the LAMMPS makefile.

Please note that the GPU library accesses the CUDA driver library
directly, so it needs to be linked not only to the CUDA runtime library
(libcudart.so) that ships with the CUDA toolkit, but also to the CUDA
driver library (libcuda.so) that ships with the Nvidia driver. If you
are compiling LAMMPS on the head node of a GPU cluster, this library may
not be installed, so you may need to copy it over from one of the
compute nodes (best into this directory).

The gpu library supports 3 precision modes as determined by the
CUDA_PRECISION variable:

CUDA_PRECISION = -D_SINGLE_SINGLE  # Single precision for all calculations
CUDA_PRECISION = -D_DOUBLE_DOUBLE  # Double precision for all calculations
CUDA_PRECISION = -D_SINGLE_DOUBLE  # Accumulation of forces, etc. in double
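As an example, an edit of Makefile.linux for a compute capability 1.3
card (e.g. Tesla C1060) using mixed precision might set these three
variables as follows; the CUDA_HOME path is a placeholder and should
point at your actual CUDA toolkit installation:

CUDA_HOME = /usr/local/cuda
CUDA_ARCH = -arch=sm_13
CUDA_PRECISION = -D_SINGLE_DOUBLE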
NOTE: PPPM acceleration can only be run on GPUs with compute capability
      >= 1.1. You will get the error "GPU library not compiled for this
      accelerator." when attempting to run PPPM on a GPU with compute
      capability 1.0.

NOTE: Double precision is only supported on certain GPUs (with compute
      capability >= 1.3). If you compile the GPU library for a GPU with
      compute capability 1.1 or 1.2, then only single precision FFTs
      are supported, i.e. LAMMPS has to be compiled with -DFFT_SINGLE.
      For details on configuring FFT support in LAMMPS, see
      http://lammps.sandia.gov/doc/Section_start.html#2_2_4

NOTE: For graphics cards with compute capability >= 1.3 (e.g. Tesla
      C1060), make sure that -arch=sm_13 is set on the CUDA_ARCH line.

NOTE: For newer graphics cards (a.k.a. "Fermi", e.g. Tesla C2050), make
      sure that either -arch=sm_20 or -arch=sm_21 is set on the
      CUDA_ARCH line, depending on hardware and CUDA toolkit version.

NOTE: The gayberne/gpu pair style will only be installed if the ASPHERE
      package has been installed.

NOTE: The cg/cmm/gpu and cg/cmm/coul/long/gpu pair styles will only be
      installed if the USER-CG-CMM package has been installed.

NOTE: The lj/cut/coul/long/gpu, cg/cmm/coul/long/gpu, coul/long/gpu,
      lj/charmm/coul/long/gpu, and pppm/gpu styles will only be
      installed if the KSPACE package has been installed.

                        EXAMPLE BUILD PROCESS

cd ~/lammps/lib/gpu
emacs Makefile.linux
make -f Makefile.linux
./nvc_get_devices
cd ../../src
emacs ./MAKE/Makefile.linux
make yes-asphere
make yes-kspace
make yes-gpu
make linux
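As an illustration of the guideline in MULTIPLE LAMMPS PROCESSES above,
on a hypothetical node with 8 CPU cores and 2 GPUs the resulting binary
could be launched with 8 MPI processes, keeping the process count an
integer multiple of the GPU count. The input file name is a placeholder;
see the LAMMPS user manual for the input-script settings that enable GPU
acceleration:

mpirun -np 8 ./lmp_linux < in.script    # 8 processes sharing 2 GPUs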