git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@7018 f3b2605a-c512-4ea7-a41b-209d697bcdaa

This commit is contained in:
sjplimp 2011-09-30 16:56:12 +00:00
parent 4c107e07a4
commit 7d2dd1a90f
8 changed files with 42 additions and 44 deletions

View File

@ -455,11 +455,11 @@ package</A>.
<TR ALIGN="center"><TD ><A HREF = "pair_charmm.html">lj/charmm/coul/charmm/implicit/cuda</A></TD><TD ><A HREF = "pair_charmm.html">lj/charmm/coul/long/cuda</A></TD><TD ><A HREF = "pair_charmm.html">lj/charmm/coul/long/gpu</A></TD><TD ><A HREF = "pair_charmm.html">lj/charmm/coul/long/opt</A></TD></TR>
<TR ALIGN="center"><TD ><A HREF = "pair_class2.html">lj/class2/coul/cut/cuda</A></TD><TD ><A HREF = "pair_class2.html">lj/class2/coul/long/cuda</A></TD><TD ><A HREF = "pair_class2.html">lj/class2/coul/long/gpu</A></TD><TD ><A HREF = "pair_class2.html">lj/class2/cuda</A></TD></TR>
<TR ALIGN="center"><TD ><A HREF = "pair_class2.html">lj/class2/gpu</A></TD><TD ><A HREF = "pair_lj.html">lj/cut/coul/cut/cuda</A></TD><TD ><A HREF = "pair_lj.html">lj/cut/coul/cut/gpu</A></TD><TD ><A HREF = "pair_lj.html">lj/cut/coul/debye/cuda</A></TD></TR>
<TR ALIGN="center"><TD ><A HREF = "pair_lj.html">lj/cut/coul/long/cuda</A></TD><TD ><A HREF = "pair_lj.html">lj/cut/coul/long/gpu</A></TD><TD ><A HREF = "pair_lj.html">lj/cut/cuda</A></TD><TD ><A HREF = "pair_lj.html">lj/cut/experimental/cuda</A></TD></TR>
<TR ALIGN="center"><TD ><A HREF = "pair_lj.html">lj/cut/gpu</A></TD><TD ><A HREF = "pair_lj.html">lj/cut/opt</A></TD><TD ><A HREF = "pair_lj_expand.html">lj/expand/cuda</A></TD><TD ><A HREF = "pair_lj_expand.html">lj/expand/gpu</A></TD></TR>
<TR ALIGN="center"><TD ><A HREF = "pair_gromacs.html">lj/gromacs/coul/gromacs/cuda</A></TD><TD ><A HREF = "pair_gromacs.html">lj/gromacs/cuda</A></TD><TD ><A HREF = "pair_lj_smooth.html">lj/smooth/cuda</A></TD><TD ><A HREF = "pair_lj96.html">lj96/cut/cuda</A></TD></TR>
<TR ALIGN="center"><TD ><A HREF = "pair_lj96.html">lj96/cut/gpu</A></TD><TD ><A HREF = "pair_morse.html">morse/cuda</A></TD><TD ><A HREF = "pair_morse.html">morse/gpu</A></TD><TD ><A HREF = "pair_morse.html">morse/opt</A></TD></TR>
<TR ALIGN="center"><TD ><A HREF = "pair_resquared.html">resquared/gpu</A>
<TR ALIGN="center"><TD ><A HREF = "pair_lj.html">lj/cut/coul/long/cuda</A></TD><TD ><A HREF = "pair_lj.html">lj/cut/coul/long/gpu</A></TD><TD ><A HREF = "pair_lj.html">lj/cut/coul/long/opt</A></TD><TD ><A HREF = "pair_lj.html">lj/cut/coul/long/tip4p/opt</A></TD></TR>
<TR ALIGN="center"><TD ><A HREF = "pair_lj.html">lj/cut/cuda</A></TD><TD ><A HREF = "pair_lj.html">lj/cut/experimental/cuda</A></TD><TD ><A HREF = "pair_lj.html">lj/cut/gpu</A></TD><TD ><A HREF = "pair_lj.html">lj/cut/opt</A></TD></TR>
<TR ALIGN="center"><TD ><A HREF = "pair_lj_expand.html">lj/expand/cuda</A></TD><TD ><A HREF = "pair_lj_expand.html">lj/expand/gpu</A></TD><TD ><A HREF = "pair_gromacs.html">lj/gromacs/coul/gromacs/cuda</A></TD><TD ><A HREF = "pair_gromacs.html">lj/gromacs/cuda</A></TD></TR>
<TR ALIGN="center"><TD ><A HREF = "pair_lj_smooth.html">lj/smooth/cuda</A></TD><TD ><A HREF = "pair_lj96.html">lj96/cut/cuda</A></TD><TD ><A HREF = "pair_lj96.html">lj96/cut/gpu</A></TD><TD ><A HREF = "pair_morse.html">morse/cuda</A></TD></TR>
<TR ALIGN="center"><TD ><A HREF = "pair_morse.html">morse/gpu</A></TD><TD ><A HREF = "pair_morse.html">morse/opt</A></TD><TD ><A HREF = "pair_resquared.html">resquared/gpu</A>
</TD></TR></TABLE></DIV>
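<P>These accelerated variants are drop-in replacements for their base
styles in an input script.  A minimal sketch of selecting one of the
new opt styles, assuming a charged system; the cutoff, coefficients,
and kspace accuracy below are illustrative values only:
</P>
<PRE>pair_style   lj/cut/coul/long/opt 10.0     # opt variant of lj/cut/coul/long
pair_coeff   1 1 0.2339 3.5                # illustrative epsilon and sigma
kspace_style pppm 1.0e-4                   # coul/long styles require a kspace solver 
</PRE>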
<HR>

View File

@ -728,6 +728,8 @@ package"_Section_accelerate.html.
"lj/cut/coul/debye/cuda"_pair_lj.html,
"lj/cut/coul/long/cuda"_pair_lj.html,
"lj/cut/coul/long/gpu"_pair_lj.html,
"lj/cut/coul/long/opt"_pair_lj.html,
"lj/cut/coul/long/tip4p/opt"_pair_lj.html,
"lj/cut/cuda"_pair_lj.html,
"lj/cut/experimental/cuda"_pair_lj.html,
"lj/cut/gpu"_pair_lj.html,

View File

@ -248,14 +248,15 @@ vendor-provided MPI which the compiler has no trouble finding.
file (MPI_INC) and the MPI library file (MPI_PATH) are found and the
name of the library file (MPI_LIB).
</P>
<P>If you are installing MPI yourself, we recommend Argonne's MPICH 1.2
or 2.0 or OpenMPI. MPICH can be downloaded from the <A HREF = "http://www-unix.mcs.anl.gov/mpi">Argonne MPI
site</A>. OpenMPI can be downloaded the
<A HREF = "http://www.open-mpi.org">OpenMPI site</A>. LAM MPI should also work. If
you are running on a big parallel platform, your system people or the
vendor should have already installed a version of MPI, which will be
faster than MPICH or OpenMPI or LAM, so find out how to build and link
with it. If you use MPICH or OpenMPI or LAM, you will have to
<P>If you are installing MPI yourself, we recommend Argonne's MPICH2
or OpenMPI. MPICH2 can be downloaded from the <A HREF = "http://www.mcs.anl.gov/research/projects/mpich2/">Argonne MPI
site</A>. OpenMPI can
be downloaded from the <A HREF = "http://www.open-mpi.org">OpenMPI site</A>.
Other MPI packages should also work. If you are running on a big
parallel platform, your system people or the vendor should have
already installed a version of MPI, which is likely to be faster
than a self-installed MPICH or OpenMPI, so find out how to build
and link with it. If you use MPICH or OpenMPI, you will have to
configure and build it for your platform. The MPI configure script
should have compiler options to enable you to use the same compiler
you are using for the LAMMPS build, which can avoid problems that can

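<P>As an example, the following shell commands sketch how a
self-installed MPICH2 might be configured to use the same compilers
as the LAMMPS build; the installation prefix and compiler names are
illustrative assumptions, not required values:
</P>
<PRE>./configure --prefix=$HOME/mpich2 CC=gcc CXX=g++   # match the LAMMPS compilers
make
make install 
</PRE>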
View File

@ -243,14 +243,15 @@ Failing this, with these 3 variables you can specify where the mpi.h
file (MPI_INC) and the MPI library file (MPI_PATH) are found and the
name of the library file (MPI_LIB).
If you are installing MPI yourself, we recommend Argonne's MPICH 1.2
or 2.0 or OpenMPI. MPICH can be downloaded from the "Argonne MPI
site"_http://www-unix.mcs.anl.gov/mpi. OpenMPI can be downloaded the
"OpenMPI site"_http://www.open-mpi.org. LAM MPI should also work. If
you are running on a big parallel platform, your system people or the
vendor should have already installed a version of MPI, which will be
faster than MPICH or OpenMPI or LAM, so find out how to build and link
with it. If you use MPICH or OpenMPI or LAM, you will have to
If you are installing MPI yourself, we recommend Argonne's MPICH2
or OpenMPI. MPICH2 can be downloaded from the "Argonne MPI
site"_http://www.mcs.anl.gov/research/projects/mpich2/. OpenMPI can
be downloaded from the "OpenMPI site"_http://www.open-mpi.org.
Other MPI packages should also work. If you are running on a big
parallel platform, your system people or the vendor should have
already installed a version of MPI, which is likely to be faster
than a self-installed MPICH or OpenMPI, so find out how to build
and link with it. If you use MPICH or OpenMPI, you will have to
configure and build it for your platform. The MPI configure script
should have compiler options to enable you to use the same compiler
you are using for the LAMMPS build, which can avoid problems that can

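As an example, if the MPI files are not found automatically, the
three Makefile variables can point directly at a self-installed
MPICH2.  A hypothetical sketch, assuming MPICH2 was installed under
${HOME}/mpich2 (the path and library list are illustrative
assumptions):

MPI_INC  = -I${HOME}/mpich2/include
MPI_PATH = -L${HOME}/mpich2/lib
MPI_LIB  = -lmpich -lpthread :pre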
View File

@ -50,10 +50,11 @@ visualization program</A>, so that it can monitor the progress of the
simulation and interactively apply forces to selected atoms.
</P>
<P>If LAMMPS is compiled with the preprocessor flag -DLAMMPS_ASYNC_IMD
then fix imd will use posix threads to spawn a thread on MPI rank 0 in
order to offload data reading and writing from the main execution
thread and potentiall lower the inferred latencies for slow
communication links. This feature has only been tested under linux.
then fix imd will use POSIX threads to spawn an IMD communication
thread on MPI rank 0 in order to offload data reading and writing
from the main execution thread and potentially hide the latency
of slow communication links. This feature has only been
tested under Linux.
</P>
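<P>For example, the flag can be added to the compiler settings in the
machine Makefile used to build LAMMPS.  A minimal sketch, assuming a
stock LAMMPS machine Makefile with an LMP_INC variable for optional
preprocessor defines (adjust to your own build setup):
</P>
<PRE>LMP_INC = -DLAMMPS_ASYNC_IMD 
</PRE>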
<P>There are example scripts for using this package with LAMMPS in
examples/USER/imd. Additional examples and a driver for use with the
@ -155,13 +156,6 @@ This fix is not invoked during <A HREF = "minimize.html">energy minimization</A>
LAMMPS was built with that package. See the <A HREF = "Section_start.html#start_3">Making
LAMMPS</A> section for more info.
</P>
<P>On platforms that support multi-threading, this fix can be compiled in
a way that the coordinate transfers to the IMD client can be handled
from a separate thread, when LAMMPS is compiled with the
-DLAMMPS_ASYNC_IMD preprocessor flag. This should to keep MD loop
times low and transfer rates high, especially for systems with many
atoms and for slow connections.
</P>
<P>When used in combination with VMD, a topology or coordinate file has
to be loaded, which matches (in number and ordering of atoms) the
group the fix is applied to. The fix internally sorts atom IDs by

View File

@ -42,10 +42,11 @@ visualization program"_VMD, so that it can monitor the progress of the
simulation and interactively apply forces to selected atoms.
If LAMMPS is compiled with the preprocessor flag -DLAMMPS_ASYNC_IMD
then fix imd will use posix threads to spawn a thread on MPI rank 0 in
order to offload data reading and writing from the main execution
thread and potentiall lower the inferred latencies for slow
communication links. This feature has only been tested under linux.
then fix imd will use POSIX threads to spawn an IMD communication
thread on MPI rank 0 in order to offload data reading and writing
from the main execution thread and potentially hide the latency
of slow communication links. This feature has only been
tested under Linux.
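Independent of the threading support, a typical invocation of the fix
specifies the port the IMD client connects to.  A hypothetical
example; the port number and keyword values below are illustrative
only (see the Syntax section of this page for the full option list):

fix vmd all imd 5678 trate 10 unwrap on :pre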
There are example scripts for using this package with LAMMPS in
examples/USER/imd. Additional examples and a driver for use with the
@ -125,7 +126,7 @@ If IMD control messages are received, a line of text describing the
message and its effect will be printed to the LAMMPS output screen, if
screen output is active.
:link(VMD,http://www.ks.uiuc.edu/Research/vmd)x
:link(VMD,http://www.ks.uiuc.edu/Research/vmd)
:link(imdvmd,http://www.ks.uiuc.edu/Research/vmd/imd/)
:link(vrpnicms,http://sites.google.com/site/akohlmey/software/vrpn-icms)
@ -145,13 +146,6 @@ This fix is part of the USER-MISC package. It is only enabled if
LAMMPS was built with that package. See the "Making
LAMMPS"_Section_start.html#start_3 section for more info.
On platforms that support multi-threading, this fix can be compiled in
a way that the coordinate transfers to the IMD client can be handled
from a separate thread, when LAMMPS is compiled with the
-DLAMMPS_ASYNC_IMD preprocessor flag. This should to keep MD loop
times low and transfer rates high, especially for systems with many
atoms and for slow connections.
When used in combination with VMD, a topology or coordinate file has
to be loaded, which matches (in number and ordering of atoms) the
group the fix is applied to. The fix internally sorts atom IDs by

View File

@ -35,8 +35,12 @@
</H3>
<H3>pair_style lj/cut/coul/long/gpu command
</H3>
<H3>pair_style lj/cut/coul/long/opt command
</H3>
<H3>pair_style lj/cut/coul/long/tip4p command
</H3>
<H3>pair_style lj/cut/coul/long/tip4p/opt command
</H3>
<P><B>Syntax:</B>
</P>
<PRE>pair_style style args

View File

@ -19,7 +19,9 @@ pair_style lj/cut/coul/debye/cuda command :h3
pair_style lj/cut/coul/long command :h3
pair_style lj/cut/coul/long/cuda command :h3
pair_style lj/cut/coul/long/gpu command :h3
pair_style lj/cut/coul/long/opt command :h3
pair_style lj/cut/coul/long/tip4p command :h3
pair_style lj/cut/coul/long/tip4p/opt command :h3
[Syntax:]
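As a usage sketch for the new tip4p variants: the TIP4P styles take
the O and H atom types, the O-H bond and H-O-H angle types, the O-M
distance, and the cutoff(s) as arguments.  A hypothetical example in
which all numeric values are illustrative only:

pair_style lj/cut/coul/long/tip4p/opt 1 2 1 1 0.15 12.0 :pre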