git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@14724 f3b2605a-c512-4ea7-a41b-209d697bcdaa

sjplimp 2016-03-07 17:29:34 +00:00
parent d6a67c2849
commit 0e5c36676f
26 changed files with 3428 additions and 2298 deletions


@ -260,7 +260,11 @@ it gives quick access to documentation for all LAMMPS commands.</p>
<li class="toctree-l2"><a class="reference internal" href="Section_howto.html#drude-induced-dipoles">6.27. Drude induced dipoles</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="Section_example.html">7. Example problems</a></li>
<li class="toctree-l1"><a class="reference internal" href="Section_example.html">7. Example problems</a><ul>
<li class="toctree-l2"><a class="reference internal" href="Section_example.html#lowercase-directories">7.1. Lowercase directories</a></li>
<li class="toctree-l2"><a class="reference internal" href="Section_example.html#uppercase-directories">7.2. Uppercase directories</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="Section_perf.html">8. Performance &amp; scalability</a></li>
<li class="toctree-l1"><a class="reference internal" href="Section_tools.html">9. Additional tools</a><ul>
<li class="toctree-l2"><a class="reference internal" href="Section_tools.html#amber2lmp-tool">9.1. amber2lmp tool</a></li>


@ -80,7 +80,11 @@
<li class="toctree-l1"><a class="reference internal" href="Section_packages.html">4. Packages</a></li>
<li class="toctree-l1"><a class="reference internal" href="Section_accelerate.html">5. Accelerating LAMMPS performance</a></li>
<li class="toctree-l1"><a class="reference internal" href="Section_howto.html">6. How-to discussions</a></li>
<li class="toctree-l1 current"><a class="current reference internal" href="">7. Example problems</a></li>
<li class="toctree-l1 current"><a class="current reference internal" href="">7. Example problems</a><ul>
<li class="toctree-l2"><a class="reference internal" href="#lowercase-directories">7.1. Lowercase directories</a></li>
<li class="toctree-l2"><a class="reference internal" href="#uppercase-directories">7.2. Uppercase directories</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="Section_perf.html">8. Performance &amp; scalability</a></li>
<li class="toctree-l1"><a class="reference internal" href="Section_tools.html">9. Additional tools</a></li>
<li class="toctree-l1"><a class="reference internal" href="Section_modify.html">10. Modifying &amp; extending LAMMPS</a></li>
@ -137,20 +141,19 @@
<div class="section" id="example-problems">
<h1>7. Example problems<a class="headerlink" href="#example-problems" title="Permalink to this headline"></a></h1>
<p>The LAMMPS distribution includes an examples sub-directory with
several sample problems. Each problem is in a sub-directory of its
own. Most are 2d models so that they run quickly, requiring at most a
couple of minutes to run on a desktop machine. Each problem has an
input script (in.*) and produces a log file (log.*) and dump file
(dump.*) when it runs. Some use a data file (data.*) of initial
coordinates as additional input. A few sample log file outputs on
different machines and different numbers of processors are included in
the directories to compare your answers to. E.g. a log file like
log.crack.foo.P means it ran on P processors of machine &#8220;foo&#8221;.</p>
<p>For examples that use input data files, many of them were produced by
<a class="reference external" href="http://pizza.sandia.gov">Pizza.py</a> or setup tools described in the
<a class="reference internal" href="Section_tools.html"><em>Additional Tools</em></a> section of the LAMMPS
documentation and provided with the LAMMPS distribution.</p>
<p>The LAMMPS distribution includes an examples sub-directory with many
sample problems. Many are 2d models that run quickly and are
straightforward to visualize, requiring at most a couple of minutes to
run on a desktop machine. Each problem has an input script (in.*) and
produces a log file (log.*) when it runs. Some use a data file
(data.*) of initial coordinates as additional input. A few sample log
files, run on different machines and on different numbers of
processors, are included in the directories for you to compare your
answers against. E.g. a log file like log.date.crack.foo.P means the
&#8220;crack&#8221; example was run on P processors of machine &#8220;foo&#8221; on that
date (i.e. with that version of LAMMPS).</p>
<p>Many of the input files have commented-out lines for creating dump
files and image files.</p>
<p>If you uncomment the <a class="reference internal" href="dump.html"><em>dump</em></a> command in the input script, a
text dump file will be produced, which can be animated by various
<a class="reference external" href="http://lammps.sandia.gov/viz.html">visualization programs</a>. It can
@ -160,69 +163,77 @@ script, and assuming you have built LAMMPS with a JPG library, JPG
snapshot images will be produced when the simulation runs. They can
be quickly post-processed into a movie using commands described on the
<a class="reference internal" href="dump_image.html"><em>dump image</em></a> doc page.</p>
<p>Animations of many of these examples can be viewed on the Movies
section of the <a class="reference external" href="http://lammps.sandia.gov">LAMMPS WWW Site</a>.</p>
<p>These are the sample problems in the examples sub-directories:</p>
<p>Animations of many of the examples can be viewed on the Movies section
of the <a class="reference external" href="http://lammps.sandia.gov">LAMMPS web site</a>.</p>
<p>There are two kinds of sub-directories in the examples dir. Lowercase
dirs contain one or a few simple, quick-to-run problems. Uppercase
dirs contain up to several complex scripts that illustrate a
particular kind of simulation method or model. Some of these run for
longer times, e.g. to measure a particular quantity.</p>
<p>Lists of both kinds of directories are given below.</p>
<hr class="docutils" />
<div class="section" id="lowercase-directories">
<h2>7.1. Lowercase directories<a class="headerlink" href="#lowercase-directories" title="Permalink to this headline"></a></h2>
<table border="1" class="docutils">
<colgroup>
<col width="15%" />
<col width="85%" />
<col width="16%" />
<col width="84%" />
</colgroup>
<tbody valign="top">
<tr class="row-odd"><td>balance</td>
<tr class="row-odd"><td>accelerate</td>
<td>run with various acceleration options (OpenMP, GPU, Phi)</td>
</tr>
<tr class="row-even"><td>balance</td>
<td>dynamic load balancing, 2d system</td>
</tr>
<tr class="row-even"><td>body</td>
<tr class="row-odd"><td>body</td>
<td>body particles, 2d system</td>
</tr>
<tr class="row-odd"><td>colloid</td>
<tr class="row-even"><td>colloid</td>
<td>big colloid particles in a small particle solvent, 2d system</td>
</tr>
<tr class="row-even"><td>comb</td>
<tr class="row-odd"><td>comb</td>
<td>models using the COMB potential</td>
</tr>
<tr class="row-even"><td>coreshell</td>
<td>core/shell model using CORESHELL package</td>
</tr>
<tr class="row-odd"><td>crack</td>
<td>crack propagation in a 2d solid</td>
</tr>
<tr class="row-even"><td>cuda</td>
<td>use of the USER-CUDA package for GPU acceleration</td>
</tr>
<tr class="row-odd"><td>dipole</td>
<tr class="row-odd"><td>deposit</td>
<td>deposit atoms and molecules on a surface</td>
</tr>
<tr class="row-even"><td>dipole</td>
<td>point dipolar particles, 2d system</td>
</tr>
<tr class="row-even"><td>dreiding</td>
<tr class="row-odd"><td>dreiding</td>
<td>methanol via Dreiding FF</td>
</tr>
<tr class="row-odd"><td>eim</td>
<tr class="row-even"><td>eim</td>
<td>NaCl using the EIM potential</td>
</tr>
<tr class="row-even"><td>ellipse</td>
<tr class="row-odd"><td>ellipse</td>
<td>ellipsoidal particles in spherical solvent, 2d system</td>
</tr>
<tr class="row-odd"><td>flow</td>
<tr class="row-even"><td>flow</td>
<td>Couette and Poiseuille flow in a 2d channel</td>
</tr>
<tr class="row-even"><td>friction</td>
<tr class="row-odd"><td>friction</td>
<td>frictional contact of spherical asperities between 2d surfaces</td>
</tr>
<tr class="row-odd"><td>gpu</td>
<td>use of the GPU package for GPU acceleration</td>
</tr>
<tr class="row-even"><td>hugoniostat</td>
<td>Hugoniostat shock dynamics</td>
</tr>
<tr class="row-odd"><td>indent</td>
<td>spherical indenter into a 2d solid</td>
</tr>
<tr class="row-even"><td>intel</td>
<td>use of the USER-INTEL package for CPU or Intel(R) Xeon Phi(TM) coprocessor</td>
</tr>
<tr class="row-odd"><td>kim</td>
<tr class="row-even"><td>kim</td>
<td>use of potentials in Knowledge Base for Interatomic Models (KIM)</td>
</tr>
<tr class="row-even"><td>line</td>
<td>line segment particles in 2d rigid bodies</td>
</tr>
<tr class="row-odd"><td>meam</td>
<td>MEAM test for SiC and shear (same as shear examples)</td>
</tr>
@ -262,69 +273,108 @@ section of the <a class="reference external" href="http://lammps.sandia.gov">LAM
<tr class="row-odd"><td>prd</td>
<td>parallel replica dynamics of vacancy diffusion in bulk Si</td>
</tr>
<tr class="row-even"><td>qeq</td>
<tr class="row-even"><td>python</td>
<td>using embedded Python in a LAMMPS input script</td>
</tr>
<tr class="row-odd"><td>qeq</td>
<td>use of the QEQ package for charge equilibration</td>
</tr>
<tr class="row-odd"><td>reax</td>
<tr class="row-even"><td>reax</td>
<td>RDX and TATB models using the ReaxFF</td>
</tr>
<tr class="row-even"><td>rigid</td>
<tr class="row-odd"><td>rigid</td>
<td>rigid bodies modeled as independent or coupled</td>
</tr>
<tr class="row-odd"><td>shear</td>
<tr class="row-even"><td>shear</td>
<td>sideways shear applied to 2d solid, with and without a void</td>
</tr>
<tr class="row-even"><td>snap</td>
<tr class="row-odd"><td>snap</td>
<td>NVE dynamics for BCC tantalum crystal using SNAP potential</td>
</tr>
<tr class="row-odd"><td>srd</td>
<tr class="row-even"><td>srd</td>
<td>stochastic rotation dynamics (SRD) particles as solvent</td>
</tr>
<tr class="row-odd"><td>streitz</td>
<td>use of Streitz/Mintmire potential with charge equilibration</td>
</tr>
<tr class="row-even"><td>tad</td>
<td>temperature-accelerated dynamics of vacancy diffusion in bulk Si</td>
</tr>
<tr class="row-odd"><td>tri</td>
<td>triangular particles in rigid bodies</td>
<tr class="row-odd"><td>vashishta</td>
<td>use of the Vashishta potential</td>
</tr>
</tbody>
</table>
<p>vashishta: models using the Vashishta potential</p>
<p>Here is how you might run and visualize one of the sample problems:</p>
<p>Here is how you can run and visualize one of the sample problems:</p>
<div class="highlight-python"><div class="highlight"><pre>cd indent
cp ../../src/lmp_linux . # copy LAMMPS executable to this dir
lmp_linux -in in.indent # run the problem
</pre></div>
</div>
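<p>If LAMMPS was built with MPI, the same problem can also be run in
parallel; for example (assuming your MPI installation provides an
mpirun launcher):</p>
<div class="highlight-python"><div class="highlight"><pre>mpirun -np 4 lmp_linux -in in.indent   # run the problem on 4 processors
</pre></div>
</div>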
<p>Running the simulation produces the files <em>dump.indent</em> and
<em>log.lammps</em>. You can visualize the dump file as follows:</p>
<div class="highlight-python"><div class="highlight"><pre>../../tools/xmovie/xmovie -scale dump.indent
</pre></div>
</div>
<em>log.lammps</em>. You can visualize the dump file of snapshots with a
variety of 3rd-party tools highlighted on the
<a class="reference external" href="http://lammps.sandia.gov/viz.html">Visualization</a> page of the LAMMPS
web site.</p>
<p>If you uncomment the <a class="reference internal" href="dump_image.html"><em>dump image</em></a> line(s) in the input
script a series of JPG images will be produced by the run. These can
be viewed individually or turned into a movie or animated by tools
like ImageMagick or QuickTime or various Windows-based tools. See the
script, a series of JPG images will be produced by the run (assuming
you built LAMMPS with JPG support; see <a class="reference internal" href="Section_start.html"><em>Section start 2.2</em></a> for details). These can be viewed
individually or turned into a movie or animation by tools like
ImageMagick or QuickTime or various Windows-based tools. See the
<a class="reference internal" href="dump_image.html"><em>dump image</em></a> doc page for more details. E.g. this
ImageMagick command would create a GIF file suitable for viewing in a
browser.</p>
<div class="highlight-python"><div class="highlight"><pre>% convert -loop 1 *.jpg foo.gif
</pre></div>
</div>
</div>
<hr class="docutils" />
<p>There is also a COUPLE directory with examples of how to use LAMMPS as
a library, either by itself or in tandem with another code or library.
See the COUPLE/README file to get started.</p>
<p>There is also an ELASTIC directory with an example script for
computing elastic constants at zero temperature, using an Si example. See
the ELASTIC/in.elastic file for more info.</p>
<p>There is also an ELASTIC_T directory with an example script for
computing elastic constants at finite temperature, using an Si example. See
the ELASTIC_T/in.elastic file for more info.</p>
<p>There is also a USER directory which contains subdirectories of
user-provided examples for user packages. See the README files in
those directories for more info. See the
<a class="reference internal" href="Section_start.html"><em>Section_start.html</em></a> file for more info about user
packages.</p>
<div class="section" id="uppercase-directories">
<h2>7.2. Uppercase directories<a class="headerlink" href="#uppercase-directories" title="Permalink to this headline"></a></h2>
<table border="1" class="docutils">
<colgroup>
<col width="10%" />
<col width="90%" />
</colgroup>
<tbody valign="top">
<tr class="row-odd"><td>ASPHERE</td>
<td>various aspherical particle models, using ellipsoids, rigid bodies, line/triangle particles, etc</td>
</tr>
<tr class="row-even"><td>COUPLE</td>
<td>examples of how to use LAMMPS as a library</td>
</tr>
<tr class="row-odd"><td>DIFFUSE</td>
<td>compute diffusion coefficients via several methods</td>
</tr>
<tr class="row-even"><td>ELASTIC</td>
<td>compute elastic constants at zero temperature</td>
</tr>
<tr class="row-odd"><td>ELASTIC_T</td>
<td>compute elastic constants at finite temperature</td>
</tr>
<tr class="row-even"><td>KAPPA</td>
<td>compute thermal conductivity via several methods</td>
</tr>
<tr class="row-odd"><td>MC</td>
<td>using LAMMPS in a Monte Carlo mode to relax the energy of a system</td>
</tr>
<tr class="row-even"><td>USER</td>
<td>examples for USER packages and USER-contributed commands</td>
</tr>
<tr class="row-odd"><td>VISCOSITY</td>
<td>compute viscosity via several methods</td>
</tr>
</tbody>
</table>
<p>Nearly all of these directories have README files which give more
details on how to understand and use their contents.</p>
<p>The USER directory has a large number of sub-directories which
correspond by name to a USER package. They contain scripts that
illustrate how to use the command(s) provided in that package. Many
of the sub-directories have their own README files which give further
instructions. See the <a class="reference internal" href="Section_packages.html"><em>Section packages</em></a> doc
page for more info on specific USER packages.</p>
</div>
</div>


@ -8,21 +8,20 @@
7. Example problems :h3
The LAMMPS distribution includes an examples sub-directory with
several sample problems. Each problem is in a sub-directory of its
own. Most are 2d models so that they run quickly, requiring at most a
couple of minutes to run on a desktop machine. Each problem has an
input script (in.*) and produces a log file (log.*) and dump file
(dump.*) when it runs. Some use a data file (data.*) of initial
coordinates as additional input. A few sample log file outputs on
different machines and different numbers of processors are included in
the directories to compare your answers to. E.g. a log file like
log.crack.foo.P means it ran on P processors of machine "foo".
The LAMMPS distribution includes an examples sub-directory with many
sample problems. Many are 2d models that run quickly and are
straightforward to visualize, requiring at most a couple of minutes to
run on a desktop machine. Each problem has an input script (in.*) and
produces a log file (log.*) when it runs. Some use a data file
(data.*) of initial coordinates as additional input. A few sample log
files, run on different machines and on different numbers of
processors, are included in the directories for you to compare your
answers against. E.g. a log file like log.date.crack.foo.P means the
"crack" example was run on P processors of machine "foo" on that date
(i.e. with that version of LAMMPS).
For examples that use input data files, many of them were produced by
"Pizza.py"_http://pizza.sandia.gov or setup tools described in the
"Additional Tools"_Section_tools.html section of the LAMMPS
documentation and provided with the LAMMPS distribution.
Many of the input files have commented-out lines for creating dump
files and image files.
If you uncomment the "dump"_dump.html command in the input script, a
text dump file will be produced, which can be animated by various
@ -36,29 +35,39 @@ snapshot images will be produced when the simulation runs. They can
be quickly post-processed into a movie using commands described on the
"dump image"_dump_image.html doc page.
Animations of many of these examples can be viewed on the Movies
section of the "LAMMPS WWW Site"_lws.
Animations of many of the examples can be viewed on the Movies section
of the "LAMMPS web site"_lws.
These are the sample problems in the examples sub-directories:
There are two kinds of sub-directories in the examples dir. Lowercase
dirs contain one or a few simple, quick-to-run problems. Uppercase
dirs contain up to several complex scripts that illustrate a
particular kind of simulation method or model. Some of these run for
longer times, e.g. to measure a particular quantity.
Lists of both kinds of directories are given below.
:line
Lowercase directories :h4
accelerate: run with various acceleration options (OpenMP, GPU, Phi)
balance: dynamic load balancing, 2d system
body: body particles, 2d system
colloid: big colloid particles in a small particle solvent, 2d system
comb: models using the COMB potential
coreshell: core/shell model using CORESHELL package
crack: crack propagation in a 2d solid
cuda: use of the USER-CUDA package for GPU acceleration
deposit: deposit atoms and molecules on a surface
dipole: point dipolar particles, 2d system
dreiding: methanol via Dreiding FF
eim: NaCl using the EIM potential
ellipse: ellipsoidal particles in spherical solvent, 2d system
flow: Couette and Poiseuille flow in a 2d channel
friction: frictional contact of spherical asperities between 2d surfaces
gpu: use of the GPU package for GPU acceleration
hugoniostat: Hugoniostat shock dynamics
indent: spherical indenter into a 2d solid
intel: use of the USER-INTEL package for CPU or Intel(R) Xeon Phi(TM) coprocessor
kim: use of potentials in Knowledge Base for Interatomic Models (KIM)
line: line segment particles in 2d rigid bodies
meam: MEAM test for SiC and shear (same as shear examples)
melt: rapid melt of 3d LJ system
micelle: self-assembly of small lipid-like molecules into 2d bilayers
@ -72,31 +81,35 @@ peptide: dynamics of a small solvated peptide chain (5-mer)
peri: Peridynamic model of cylinder impacted by indenter
pour: pouring of granular particles into a 3d box, then chute flow
prd: parallel replica dynamics of vacancy diffusion in bulk Si
python: using embedded Python in a LAMMPS input script
qeq: use of the QEQ package for charge equilibration
reax: RDX and TATB models using the ReaxFF
rigid: rigid bodies modeled as independent or coupled
shear: sideways shear applied to 2d solid, with and without a void
snap: NVE dynamics for BCC tantalum crystal using SNAP potential
srd: stochastic rotation dynamics (SRD) particles as solvent
streitz: use of Streitz/Mintmire potential with charge equilibration
tad: temperature-accelerated dynamics of vacancy diffusion in bulk Si
tri: triangular particles in rigid bodies :tb(s=:)
vashishta: models using the Vashishta potential
vashishta: use of the Vashishta potential :tb(s=:)
Here is how you might run and visualize one of the sample problems:
Here is how you can run and visualize one of the sample problems:
cd indent
cp ../../src/lmp_linux . # copy LAMMPS executable to this dir
lmp_linux -in in.indent # run the problem :pre
Running the simulation produces the files {dump.indent} and
{log.lammps}. You can visualize the dump file as follows:
../../tools/xmovie/xmovie -scale dump.indent :pre
{log.lammps}. You can visualize the dump file of snapshots with a
variety of 3rd-party tools highlighted on the
"Visualization"_http://lammps.sandia.gov/viz.html page of the LAMMPS
web site.
If you uncomment the "dump image"_dump_image.html line(s) in the input
script a series of JPG images will be produced by the run. These can
be viewed individually or turned into a movie or animated by tools
like ImageMagick or QuickTime or various Windows-based tools. See the
script, a series of JPG images will be produced by the run (assuming
you built LAMMPS with JPG support; see "Section start
2.2"_Section_start.html for details). These can be viewed
individually or turned into a movie or animation by tools like
ImageMagick or QuickTime or various Windows-based tools. See the
"dump image"_dump_image.html doc page for more details. E.g. this
ImageMagick command would create a GIF file suitable for viewing in a
browser.
@ -105,20 +118,24 @@ browser.
:line
There is also a COUPLE directory with examples of how to use LAMMPS as
a library, either by itself or in tandem with another code or library.
See the COUPLE/README file to get started.
Uppercase directories :h4
There is also an ELASTIC directory with an example script for
computing elastic constants at zero temperature, using an Si example. See
the ELASTIC/in.elastic file for more info.
ASPHERE: various aspherical particle models, using ellipsoids, rigid bodies, line/triangle particles, etc
COUPLE: examples of how to use LAMMPS as a library
DIFFUSE: compute diffusion coefficients via several methods
ELASTIC: compute elastic constants at zero temperature
ELASTIC_T: compute elastic constants at finite temperature
KAPPA: compute thermal conductivity via several methods
MC: using LAMMPS in a Monte Carlo mode to relax the energy of a system
USER: examples for USER packages and USER-contributed commands
VISCOSITY: compute viscosity via several methods :tb(s=:)
There is also an ELASTIC_T directory with an example script for
computing elastic constants at finite temperature, using an Si example. See
the ELASTIC_T/in.elastic file for more info.
Nearly all of these directories have README files which give more
details on how to understand and use their contents.
There is also a USER directory which contains subdirectories of
user-provided examples for user packages. See the README files in
those directories for more info. See the
"Section_start.html"_Section_start.html file for more info about user
packages.
The USER directory has a large number of sub-directories which
correspond by name to a USER package. They contain scripts that
illustrate how to use the command(s) provided in that package. Many
of the sub-directories have their own README files which give further
instructions. See the "Section packages"_Section_packages.html doc
page for more info on specific USER packages.

File diff suppressed because it is too large

File diff suppressed because it is too large


@ -137,12 +137,17 @@
<div class="section" id="performance-scalability">
<h1>8. Performance &amp; scalability<a class="headerlink" href="#performance-scalability" title="Permalink to this headline"></a></h1>
<p>LAMMPS performance on several prototypical benchmarks and machines is
discussed on the Benchmarks page of the <a class="reference external" href="http://lammps.sandia.gov">LAMMPS WWW Site</a> where
CPU timings and parallel efficiencies are listed. Here, the
benchmarks are described briefly and some useful rules of thumb about
their performance are highlighted.</p>
<p>These are the 5 benchmark problems:</p>
<p>Current LAMMPS performance is discussed on the Benchmarks page of the
<a class="reference external" href="http://lammps.sandia.gov">LAMMPS WWW Site</a> where CPU timings and parallel efficiencies are
listed. The page has several sections, which are briefly described
below:</p>
<ul class="simple">
<li>CPU performance on 5 standard problems, strong and weak scaling</li>
<li>GPU and Xeon Phi performance on the same and related problems</li>
<li>Comparison of cost of interatomic potentials</li>
<li>Performance of huge, billion-atom problems</li>
</ul>
<p>The 5 standard problems are as follows:</p>
<ol class="arabic simple">
<li>LJ = atomic fluid, Lennard-Jones potential with 2.5 sigma cutoff (55</li>
</ol>
@ -161,73 +166,46 @@ field with a 10 Angstrom LJ cutoff (440 neighbors per atom),
particle-particle particle-mesh (PPPM) for long-range Coulombics, NPT
integration</li>
</ol>
<p>The input files for running the benchmarks are included in the LAMMPS
distribution, as are sample output files. Each of the 5 problems has
32,000 atoms and runs for 100 timesteps. Each can be run as a serial
benchmarks (on one processor) or in parallel. In parallel, each
benchmark can be run as a fixed-size or scaled-size problem. For
fixed-size benchmarking, the same 32K atom problem is run on various
numbers of processors. For scaled-size benchmarking, the model size
is increased with the number of processors. E.g. on 8 processors, a
256K-atom problem is run; on 1024 processors, a 32-million atom
problem is run, etc.</p>
<p>A useful metric from the benchmarks is the CPU cost per atom per
timestep. Since LAMMPS performance scales roughly linearly with
problem size and timesteps, the run time of any problem using the same
model (atom style, force field, cutoff, etc) can then be estimated.
For example, on a 1.7 GHz Pentium desktop machine (Intel icc compiler
under Red Hat Linux), the CPU run-time in seconds/atom/timestep for
the 5 problems is</p>
<table border="1" class="docutils">
<colgroup>
<col width="25%" />
<col width="14%" />
<col width="14%" />
<col width="14%" />
<col width="14%" />
<col width="17%" />
</colgroup>
<tbody valign="top">
<tr class="row-odd"><td>Problem:</td>
<td>LJ</td>
<td>Chain</td>
<td>EAM</td>
<td>Chute</td>
<td>Rhodopsin</td>
</tr>
<tr class="row-even"><td>CPU/atom/step:</td>
<td>4.55E-6</td>
<td>2.18E-6</td>
<td>9.38E-6</td>
<td>2.18E-6</td>
<td>1.11E-4</td>
</tr>
<tr class="row-odd"><td>Ratio to LJ:</td>
<td>1.0</td>
<td>0.48</td>
<td>2.06</td>
<td>0.48</td>
<td>24.5</td>
</tr>
</tbody>
</table>
<p>The ratios mean that if the atomic LJ system has a normalized cost of
1.0, the bead-spring chains and granular systems run 2x faster, while
the EAM metal and solvated protein models run 2x and 25x slower
respectively. The bulk of these cost differences is due to the
expense of computing a particular pairwise force field for a given
number of neighbors per atom.</p>
<p>Performance on a parallel machine can also be predicted from the
one-processor timings if the parallel efficiency can be estimated.
The communication bandwidth and latency of a particular parallel
machine affects the efficiency. On most machines LAMMPS will give
fixed-size parallel efficiencies on these benchmarks above 50% so long
as the atoms/processor count is a few 100 or greater - i.e. on 64 to
128 processors. Likewise, scaled-size parallel efficiencies will
typically be 80% or greater up to very large processor counts. The
benchmark data on the <a class="reference external" href="http://lammps.sandia.gov">LAMMPS WWW Site</a> gives specific examples on
some different machines, including a run of 3/4 of a billion LJ atoms
on 1500 processors that ran at 85% parallel efficiency.</p>
<p>Input files for these 5 problems are provided in the bench directory
of the LAMMPS distribution. Each has 32,000 atoms and runs for 100
timesteps. The size of the problem (number of atoms) can be varied
using command-line switches as described in the bench/README file.
This is an easy way to test performance and either strong or weak
scalability on your machine.</p>
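<p>As a sketch of what such runs might look like (the -var names below
assume the x,y,z replication variables used by the stock bench input
scripts such as bench/in.lj; see bench/README for the exact switches):</p>
<div class="highlight-python"><div class="highlight"><pre>cd bench
lmp_linux -in in.lj                                            # 32K atoms on 1 core
mpirun -np 8 lmp_linux -in in.lj                               # same 32K atoms on 8 cores (strong scaling)
mpirun -np 8 lmp_linux -var x 2 -var y 2 -var z 2 -in in.lj    # 8x larger problem on 8 cores (weak scaling)
</pre></div>
</div>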
<p>The bench directory includes a few log.* files that show performance
of these 5 problems on 1 or 4 cores of a Linux desktop. The bench/FERMI
and bench/KEPLER dirs have input files and scripts and instructions
for running the same (or similar) problems using OpenMP or GPU or Xeon
Phi acceleration options. See the README files in those dirs and the
<a class="reference internal" href="Section_accelerate.html"><em>Section accelerate</em></a> doc pages for
instructions on how to build LAMMPS and run on that kind of hardware.</p>
<p>The bench/POTENTIALS directory has input files which correspond to the
table of results on the
<span class="xref std std-ref">Potentials</span> section of
the Benchmarks web page. So you can also run those test problems on
your machine.</p>
<p>The <span class="xref std std-ref">billion-atom</span> section
of the Benchmarks web page has performance data for very large
benchmark runs of simple Lennard-Jones (LJ) models, which use the
bench/in.lj input script.</p>
<hr class="docutils" />
<p>For all the benchmarks, a useful metric is the CPU cost per atom per
timestep. Since performance scales roughly linearly with problem size
and timesteps for all LAMMPS models (i.e. interatomic or coarse-grained
potentials), the run time of any problem using the same model (atom
style, force field, cutoff, etc) can then be estimated.</p>
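<p>For example, assuming a hypothetical cost of 5.0E-6 CPU
seconds/atom/timestep measured on one core, a 1-million-atom run for
100,000 timesteps would take roughly</p>
<div class="highlight-python"><div class="highlight"><pre>5.0E-6 sec/atom/step x 1.0E6 atoms x 1.0E5 steps = 5.0E5 core-seconds, or about 140 core-hours
</pre></div>
</div>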
<p>Performance on a parallel machine can also be predicted from one-core
or one-node timings if the parallel efficiency can be estimated. The
communication bandwidth and latency of a particular parallel machine
affects the efficiency. On most machines LAMMPS will give parallel
efficiencies on these benchmarks above 50% so long as the number of
atoms/core is a few 100 or greater, and closer to 100% for large
numbers of atoms/core. This is for all-MPI mode with one MPI task per
core. For nodes with accelerator options or hardware (OpenMP, GPU,
Phi), you should first measure single node performance. Then you can
estimate parallel performance for multi-node runs using the same logic
as for all-MPI mode, except that now you will typically need many more
atoms/node to achieve good scalability.</p>
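<p>As a rough illustration of that logic (with hypothetical numbers): if
the one-core time for a problem is 5.0E5 seconds and the expected
parallel efficiency on 64 cores is 80%, the predicted run time is</p>
<div class="highlight-python"><div class="highlight"><pre>T(64) = T(1) / (64 x 0.80) = 5.0E5 / 51.2 = ~9800 seconds
</pre></div>
</div>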
</div>


@ -8,13 +8,17 @@
8. Performance & scalability :h3
LAMMPS performance on several prototypical benchmarks and machines is
discussed on the Benchmarks page of the "LAMMPS WWW Site"_lws where
CPU timings and parallel efficiencies are listed. Here, the
benchmarks are described briefly and some useful rules of thumb about
their performance are highlighted.
Current LAMMPS performance is discussed on the Benchmarks page of the
"LAMMPS WWW Site"_lws where CPU timings and parallel efficiencies are
listed. The page has several sections, which are briefly described
below:
These are the 5 benchmark problems:
CPU performance on 5 standard problems, strong and weak scaling
GPU and Xeon Phi performance on the same and related problems
Comparison of cost of interatomic potentials
Performance of huge, billion-atom problems :ul
The 5 standard problems are as follows:
LJ = atomic fluid, Lennard-Jones potential with 2.5 sigma cutoff (55
neighbors per atom), NVE integration :olb,l
@ -34,44 +38,49 @@ field with a 10 Angstrom LJ cutoff (440 neighbors per atom),
particle-particle particle-mesh (PPPM) for long-range Coulombics, NPT
integration :ole,l
The input files for running the benchmarks are included in the LAMMPS
distribution, as are sample output files. Each of the 5 problems has
32,000 atoms and runs for 100 timesteps. Each can be run as a serial
benchmarks (on one processor) or in parallel. In parallel, each
benchmark can be run as a fixed-size or scaled-size problem. For
fixed-size benchmarking, the same 32K atom problem is run on various
numbers of processors. For scaled-size benchmarking, the model size
is increased with the number of processors. E.g. on 8 processors, a
256K-atom problem is run; on 1024 processors, a 32-million atom
problem is run, etc.
Input files for these 5 problems are provided in the bench directory
of the LAMMPS distribution. Each has 32,000 atoms and runs for 100
timesteps. The size of the problem (number of atoms) can be varied
using command-line switches as described in the bench/README file.
This is an easy way to test performance and either strong or weak
scalability on your machine.
A useful metric from the benchmarks is the CPU cost per atom per
timestep. Since LAMMPS performance scales roughly linearly with
problem size and timesteps, the run time of any problem using the same
model (atom style, force field, cutoff, etc) can then be estimated.
For example, on a 1.7 GHz Pentium desktop machine (Intel icc compiler
under Red Hat Linux), the CPU run-time in seconds/atom/timestep for
the 5 problems is
The bench directory includes a few log.* files that show performance
of these 5 problems on 1 or 4 cores of a Linux desktop. The bench/FERMI
and bench/KEPLER dirs have input files and scripts and instructions
for running the same (or similar) problems using OpenMP or GPU or Xeon
Phi acceleration options. See the README files in those dirs and the
"Section accelerate"_Section_accelerate.html doc pages for
instructions on how to build LAMMPS and run on that kind of hardware.
Problem:, LJ, Chain, EAM, Chute, Rhodopsin
CPU/atom/step:, 4.55E-6, 2.18E-6, 9.38E-6, 2.18E-6, 1.11E-4
Ratio to LJ:, 1.0, 0.48, 2.06, 0.48, 24.5 :tb(ea=c,ca1=r)
The bench/POTENTIALS directory has input files which correspond to the
table of results on the
"Potentials"_http://lammps.sandia.gov/bench.html#potentials section of
the Benchmarks web page. So you can also run those test problems on
your machine.
The ratios mean that if the atomic LJ system has a normalized cost of
1.0, the bead-spring chains and granular systems run 2x faster, while
the EAM metal and solvated protein models run 2x and 25x slower
respectively. The bulk of these cost differences is due to the
expense of computing a particular pairwise force field for a given
number of neighbors per atom.
The "billion-atom"_http://lammps.sandia.gov/bench.html#billion section
of the Benchmarks web page has performance data for very large
benchmark runs of simple Lennard-Jones (LJ) models, which use the
bench/in.lj input script.
Performance on a parallel machine can also be predicted from the
one-processor timings if the parallel efficiency can be estimated.
The communication bandwidth and latency of a particular parallel
machine affects the efficiency. On most machines LAMMPS will give
fixed-size parallel efficiencies on these benchmarks above 50% so long
as the atoms/processor count is a few 100 or greater - i.e. on 64 to
128 processors. Likewise, scaled-size parallel efficiencies will
typically be 80% or greater up to very large processor counts. The
benchmark data on the "LAMMPS WWW Site"_lws gives specific examples on
some different machines, including a run of 3/4 of a billion LJ atoms
on 1500 processors that ran at 85% parallel efficiency.
:line
For all the benchmarks, a useful metric is the CPU cost per atom per
timestep. Since performance scales roughly linearly with problem size
and timesteps for all LAMMPS models (i.e. interatomic or coarse-grained
potentials), the run time of any problem using the same model (atom
style, force field, cutoff, etc) can then be estimated.
Performance on a parallel machine can also be predicted from one-core
or one-node timings if the parallel efficiency can be estimated. The
communication bandwidth and latency of a particular parallel machine
affects the efficiency. On most machines LAMMPS will give parallel
efficiencies on these benchmarks above 50% so long as the number of
atoms/core is a few 100 or greater, and closer to 100% for large
numbers of atoms/core. This is for all-MPI mode with one MPI task per
core. For nodes with accelerator options or hardware (OpenMP, GPU,
Phi), you should first measure single node performance. Then you can
estimate parallel performance for multi-node runs using the same logic
as for all-MPI mode, except that now you will typically need many more
atoms/node to achieve good scalability.


@ -1,21 +1,20 @@
Example problems
================
The LAMMPS distribution includes an examples sub-directory with
several sample problems. Each problem is in a sub-directory of its
own. Most are 2d models so that they run quickly, requiring at most a
couple of minutes to run on a desktop machine. Each problem has an
input script (in.*) and produces a log file (log.*) and dump file
(dump.*) when it runs. Some use a data file (data.*) of initial
coordinates as additional input. A few sample log file outputs on
different machines and different numbers of processors are included in
the directories to compare your answers to. E.g. a log file like
log.crack.foo.P means it ran on P processors of machine "foo".
The LAMMPS distribution includes an examples sub-directory with many
sample problems. Many are 2d models that run quickly and are
straightforward to visualize, requiring at most a couple of minutes to
run on a desktop machine. Each problem has an input script (in.*) and
produces a log file (log.*) when it runs. Some use a data file
(data.*) of initial coordinates as additional input. A few sample log
files, run on different machines and on different numbers of
processors, are included in the directories for you to compare your
answers against. E.g. a log file like log.date.crack.foo.P means the
"crack" example was run on P processors of machine "foo" on that date
(i.e. with that version of LAMMPS).
For examples that use input data files, many of them were produced by
`Pizza.py <http://pizza.sandia.gov>`_ or setup tools described in the
:doc:`Additional Tools <Section_tools>` section of the LAMMPS
documentation and provided with the LAMMPS distribution.
Many of the input files have commented-out lines for creating dump
files and image files.
If you uncomment the :doc:`dump <dump>` command in the input script, a
text dump file will be produced, which can be animated by various
@ -28,94 +27,109 @@ snapshot images will be produced when the simulation runs. They can
be quickly post-processed into a movie using commands described on the
:doc:`dump image <dump_image>` doc page.
Animations of many of these examples can be viewed on the Movies
section of the `LAMMPS WWW Site <lws_>`_.
Animations of many of the examples can be viewed on the Movies section
of the `LAMMPS web site <lws_>`_.
These are the sample problems in the examples sub-directories:
There are two kinds of sub-directories in the examples dir. Lowercase
dirs contain one or a few simple, quick-to-run problems. Uppercase
dirs contain up to several complex scripts that illustrate a
particular kind of simulation method or model. Some of these run for
longer times, e.g. to measure a particular quantity.
+-------------+----------------------------------------------------------------------------+
| balance | dynamic load balancing, 2d system |
+-------------+----------------------------------------------------------------------------+
| body | body particles, 2d system |
+-------------+----------------------------------------------------------------------------+
| colloid | big colloid particles in a small particle solvent, 2d system |
+-------------+----------------------------------------------------------------------------+
| comb | models using the COMB potential |
+-------------+----------------------------------------------------------------------------+
| crack | crack propagation in a 2d solid |
+-------------+----------------------------------------------------------------------------+
| cuda | use of the USER-CUDA package for GPU acceleration |
+-------------+----------------------------------------------------------------------------+
| dipole | point dipolar particles, 2d system |
+-------------+----------------------------------------------------------------------------+
| dreiding | methanol via Dreiding FF |
+-------------+----------------------------------------------------------------------------+
| eim | NaCl using the EIM potential |
+-------------+----------------------------------------------------------------------------+
| ellipse | ellipsoidal particles in spherical solvent, 2d system |
+-------------+----------------------------------------------------------------------------+
| flow | Couette and Poiseuille flow in a 2d channel |
+-------------+----------------------------------------------------------------------------+
| friction | frictional contact of spherical asperities between 2d surfaces |
+-------------+----------------------------------------------------------------------------+
| gpu | use of the GPU package for GPU acceleration |
+-------------+----------------------------------------------------------------------------+
| hugoniostat | Hugoniostat shock dynamics |
+-------------+----------------------------------------------------------------------------+
| indent | spherical indenter into a 2d solid |
+-------------+----------------------------------------------------------------------------+
| intel | use of the USER-INTEL package for CPU or Intel(R) Xeon Phi(TM) coprocessor |
+-------------+----------------------------------------------------------------------------+
| kim | use of potentials in Knowledge Base for Interatomic Models (KIM) |
+-------------+----------------------------------------------------------------------------+
| line | line segment particles in 2d rigid bodies |
+-------------+----------------------------------------------------------------------------+
| meam | MEAM test for SiC and shear (same as shear examples) |
+-------------+----------------------------------------------------------------------------+
| melt | rapid melt of 3d LJ system |
+-------------+----------------------------------------------------------------------------+
| micelle | self-assembly of small lipid-like molecules into 2d bilayers |
+-------------+----------------------------------------------------------------------------+
| min | energy minimization of 2d LJ melt |
+-------------+----------------------------------------------------------------------------+
| msst | MSST shock dynamics |
+-------------+----------------------------------------------------------------------------+
| nb3b | use of nonbonded 3-body harmonic pair style |
+-------------+----------------------------------------------------------------------------+
| neb | nudged elastic band (NEB) calculation for barrier finding |
+-------------+----------------------------------------------------------------------------+
| nemd | non-equilibrium MD of 2d sheared system |
+-------------+----------------------------------------------------------------------------+
| obstacle | flow around two voids in a 2d channel |
+-------------+----------------------------------------------------------------------------+
| peptide | dynamics of a small solvated peptide chain (5-mer) |
+-------------+----------------------------------------------------------------------------+
| peri | Peridynamic model of cylinder impacted by indenter |
+-------------+----------------------------------------------------------------------------+
| pour | pouring of granular particles into a 3d box, then chute flow |
+-------------+----------------------------------------------------------------------------+
| prd | parallel replica dynamics of vacancy diffusion in bulk Si |
+-------------+----------------------------------------------------------------------------+
| qeq | use of the QEQ package for charge equilibration |
+-------------+----------------------------------------------------------------------------+
| reax | RDX and TATB models using the ReaxFF |
+-------------+----------------------------------------------------------------------------+
| rigid | rigid bodies modeled as independent or coupled |
+-------------+----------------------------------------------------------------------------+
| shear | sideways shear applied to 2d solid, with and without a void |
+-------------+----------------------------------------------------------------------------+
| snap | NVE dynamics for BCC tantalum crystal using SNAP potential |
+-------------+----------------------------------------------------------------------------+
| srd | stochastic rotation dynamics (SRD) particles as solvent |
+-------------+----------------------------------------------------------------------------+
| tad | temperature-accelerated dynamics of vacancy diffusion in bulk Si |
+-------------+----------------------------------------------------------------------------+
| tri | triangular particles in rigid bodies |
+-------------+----------------------------------------------------------------------------+
Lists of both kinds of directories are given below.
vashishta: models using the Vashishta potential
Here is how you might run and visualize one of the sample problems:
----------
Lowercase directories
---------------------
+-------------+------------------------------------------------------------------+
| accelerate | run with various acceleration options (OpenMP, GPU, Phi) |
+-------------+------------------------------------------------------------------+
| balance | dynamic load balancing, 2d system |
+-------------+------------------------------------------------------------------+
| body | body particles, 2d system |
+-------------+------------------------------------------------------------------+
| colloid | big colloid particles in a small particle solvent, 2d system |
+-------------+------------------------------------------------------------------+
| comb | models using the COMB potential |
+-------------+------------------------------------------------------------------+
| coreshell | core/shell model using CORESHELL package |
+-------------+------------------------------------------------------------------+
| crack | crack propagation in a 2d solid |
+-------------+------------------------------------------------------------------+
| cuda | use of the USER-CUDA package for GPU acceleration |
+-------------+------------------------------------------------------------------+
| deposit | deposit atoms and molecules on a surface |
+-------------+------------------------------------------------------------------+
| dipole | point dipolar particles, 2d system |
+-------------+------------------------------------------------------------------+
| dreiding | methanol via Dreiding FF |
+-------------+------------------------------------------------------------------+
| eim | NaCl using the EIM potential |
+-------------+------------------------------------------------------------------+
| ellipse | ellipsoidal particles in spherical solvent, 2d system |
+-------------+------------------------------------------------------------------+
| flow | Couette and Poiseuille flow in a 2d channel |
+-------------+------------------------------------------------------------------+
| friction | frictional contact of spherical asperities between 2d surfaces |
+-------------+------------------------------------------------------------------+
| hugoniostat | Hugoniostat shock dynamics |
+-------------+------------------------------------------------------------------+
| indent | spherical indenter into a 2d solid |
+-------------+------------------------------------------------------------------+
| kim | use of potentials in Knowledge Base for Interatomic Models (KIM) |
+-------------+------------------------------------------------------------------+
| meam | MEAM test for SiC and shear (same as shear examples) |
+-------------+------------------------------------------------------------------+
| melt | rapid melt of 3d LJ system |
+-------------+------------------------------------------------------------------+
| micelle | self-assembly of small lipid-like molecules into 2d bilayers |
+-------------+------------------------------------------------------------------+
| min | energy minimization of 2d LJ melt |
+-------------+------------------------------------------------------------------+
| msst | MSST shock dynamics |
+-------------+------------------------------------------------------------------+
| nb3b | use of nonbonded 3-body harmonic pair style |
+-------------+------------------------------------------------------------------+
| neb | nudged elastic band (NEB) calculation for barrier finding |
+-------------+------------------------------------------------------------------+
| nemd | non-equilibrium MD of 2d sheared system |
+-------------+------------------------------------------------------------------+
| obstacle | flow around two voids in a 2d channel |
+-------------+------------------------------------------------------------------+
| peptide | dynamics of a small solvated peptide chain (5-mer) |
+-------------+------------------------------------------------------------------+
| peri | Peridynamic model of cylinder impacted by indenter |
+-------------+------------------------------------------------------------------+
| pour | pouring of granular particles into a 3d box, then chute flow |
+-------------+------------------------------------------------------------------+
| prd | parallel replica dynamics of vacancy diffusion in bulk Si |
+-------------+------------------------------------------------------------------+
| python | using embedded Python in a LAMMPS input script |
+-------------+------------------------------------------------------------------+
| qeq | use of the QEQ package for charge equilibration |
+-------------+------------------------------------------------------------------+
| reax | RDX and TATB models using the ReaxFF |
+-------------+------------------------------------------------------------------+
| rigid | rigid bodies modeled as independent or coupled |
+-------------+------------------------------------------------------------------+
| shear | sideways shear applied to 2d solid, with and without a void |
+-------------+------------------------------------------------------------------+
| snap | NVE dynamics for BCC tantalum crystal using SNAP potential |
+-------------+------------------------------------------------------------------+
| srd | stochastic rotation dynamics (SRD) particles as solvent |
+-------------+------------------------------------------------------------------+
| streitz | use of Streitz/Mintmire potential with charge equilibration |
+-------------+------------------------------------------------------------------+
| tad | temperature-accelerated dynamics of vacancy diffusion in bulk Si |
+-------------+------------------------------------------------------------------+
| vashishta | use of the Vashishta potential |
+-------------+------------------------------------------------------------------+
Here is how you can run and visualize one of the sample problems:
.. parsed-literal::
@ -124,16 +138,16 @@ Here is how you might run and visualize one of the sample problems:
lmp_linux -in in.indent # run the problem
Running the simulation produces the files *dump.indent* and
*log.lammps*. You can visualize the dump file as follows:
.. parsed-literal::
../../tools/xmovie/xmovie -scale dump.indent
*log.lammps*. You can visualize the dump file of snapshots with a
variety of 3rd-party tools highlighted on the
`Visualization <http://lammps.sandia.gov/viz.html>`_ page of the LAMMPS
web site.
If you uncomment the :doc:`dump image <dump_image>` line(s) in the input
script a series of JPG images will be produced by the run. These can
be viewed individually or turned into a movie or animated by tools
like ImageMagick or QuickTime or various Windows-based tools. See the
script, a series of JPG images will be produced by the run (assuming
you built LAMMPS with JPG support; see :doc:`Section start 2.2 <Section_start>` for details). These can be viewed
individually or turned into a movie or animation by tools like
ImageMagick or QuickTime or various Windows-based tools. See the
:doc:`dump image <dump_image>` doc page for more details. E.g. this
ImageMagick command would create a GIF file suitable for viewing in a
browser.
@ -146,23 +160,38 @@ browser.
----------
There is also a COUPLE directory with examples of how to use LAMMPS as
a library, either by itself or in tandem with another code or library.
See the COUPLE/README file to get started.
Uppercase directories
---------------------
There is also an ELASTIC directory with an example script for
computing elastic constants at zero temperature, using an Si example. See
the ELASTIC/in.elastic file for more info.
+-----------+--------------------------------------------------------------------------------------------------+
| ASPHERE | various aspherical particle models, using ellipsoids, rigid bodies, line/triangle particles, etc |
+-----------+--------------------------------------------------------------------------------------------------+
| COUPLE | examples of how to use LAMMPS as a library |
+-----------+--------------------------------------------------------------------------------------------------+
| DIFFUSE | compute diffusion coefficients via several methods |
+-----------+--------------------------------------------------------------------------------------------------+
| ELASTIC | compute elastic constants at zero temperature |
+-----------+--------------------------------------------------------------------------------------------------+
| ELASTIC_T | compute elastic constants at finite temperature |
+-----------+--------------------------------------------------------------------------------------------------+
| KAPPA | compute thermal conductivity via several methods |
+-----------+--------------------------------------------------------------------------------------------------+
| MC | using LAMMPS in a Monte Carlo mode to relax the energy of a system |
+-----------+--------------------------------------------------------------------------------------------------+
| USER | examples for USER packages and USER-contributed commands |
+-----------+--------------------------------------------------------------------------------------------------+
| VISCOSITY | compute viscosity via several methods |
+-----------+--------------------------------------------------------------------------------------------------+
There is also an ELASTIC_T directory with an example script for
computing elastic constants at finite temperature, using an Si example. See
the ELASTIC_T/in.elastic file for more info.
Nearly all of these directories have README files which give more
details on how to understand and use their contents.
There is also a USER directory which contains subdirectories of
user-provided examples for user packages. See the README files in
those directories for more info. See the
:doc:`Section_start.html <Section_start>` file for more info about user
packages.
The USER directory has a large number of sub-directories which
correspond by name to a USER package. They contain scripts that
illustrate how to use the command(s) provided in that package. Many
of the sub-directories have their own README files which give further
instructions. See the :doc:`Section packages <Section_packages>` doc
page for more info on specific USER packages.
.. _lws: http://lammps.sandia.gov

File diff suppressed because it is too large


@ -1,13 +1,17 @@
Performance & scalability
=========================
LAMMPS performance on several prototypical benchmarks and machines is
discussed on the Benchmarks page of the `LAMMPS WWW Site <lws_>`_ where
CPU timings and parallel efficiencies are listed. Here, the
benchmarks are described briefly and some useful rules of thumb about
their performance are highlighted.
Current LAMMPS performance is discussed on the Benchmarks page of the
`LAMMPS WWW Site <lws_>`_ where CPU timings and parallel efficiencies are
listed. The page has several sections, which are briefly described
below:
These are the 5 benchmark problems:
* CPU performance on 5 standard problems, strong and weak scaling
* GPU and Xeon Phi performance on the same and related problems
* Comparison of cost of interatomic potentials
* Performance of huge, billion-atom problems
The 5 standard problems are as follows:
#. LJ = atomic fluid, Lennard-Jones potential with 2.5 sigma cutoff (55
neighbors per atom), NVE integration
@ -22,51 +26,54 @@ These are the 5 benchmark problems:
field with a 10 Angstrom LJ cutoff (440 neighbors per atom),
particle-particle particle-mesh (PPPM) for long-range Coulombics, NPT
integration
The input files for running the benchmarks are included in the LAMMPS
distribution, as are sample output files. Each of the 5 problems has
32,000 atoms and runs for 100 timesteps. Each can be run as a serial
benchmarks (on one processor) or in parallel. In parallel, each
benchmark can be run as a fixed-size or scaled-size problem. For
fixed-size benchmarking, the same 32K atom problem is run on various
numbers of processors. For scaled-size benchmarking, the model size
is increased with the number of processors. E.g. on 8 processors, a
256K-atom problem is run; on 1024 processors, a 32-million atom
problem is run, etc.
Input files for these 5 problems are provided in the bench directory
of the LAMMPS distribution. Each has 32,000 atoms and runs for 100
timesteps. The size of the problem (number of atoms) can be varied
using command-line switches as described in the bench/README file.
This is an easy way to test performance and either strong or weak
scalability on your machine.
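For example, assuming the replication variables are named x, y, and z
as described in bench/README (check that file for your version), the
following hypothetical command lines run the baseline 32K-atom LJ
problem and an 8x larger copy on 8 MPI tasks:

.. parsed-literal::

   mpirun -np 8 lmp_mpi -in in.lj                              # baseline 32K-atom problem
   mpirun -np 8 lmp_mpi -var x 2 -var y 2 -var z 2 -in in.lj   # 2x2x2 replication = 256K atoms

The -var command-line switch overrides index-style variables in the
input script, so no editing of in.lj is needed.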
A useful metric from the benchmarks is the CPU cost per atom per
timestep. Since LAMMPS performance scales roughly linearly with
problem size and timesteps, the run time of any problem using the same
model (atom style, force field, cutoff, etc) can then be estimated.
For example, on a 1.7 GHz Pentium desktop machine (Intel icc compiler
under Red Hat Linux), the CPU run-time in seconds/atom/timestep for
the 5 problems is
The bench directory includes a few log.* files that show performance
of these 5 problems on 1 or 4 cores of a Linux desktop. The bench/FERMI
and bench/KEPLER dirs have input files, scripts, and instructions
for running the same (or similar) problems using OpenMP or GPU or Xeon
Phi acceleration options. See the README files in those dirs and the
:doc:`Section accelerate <Section_accelerate>` doc pages for
instructions on how to build LAMMPS and run on that kind of hardware.
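As a minimal sketch (assuming LAMMPS was built with the USER-OMP and
GPU packages; the FERMI and KEPLER scripts may use different settings),
run lines of this general form exercise the same in.lj benchmark with
threading or GPU acceleration:

.. parsed-literal::

   mpirun -np 4 lmp_mpi -sf omp -pk omp 4 -in in.lj     # 4 MPI tasks x 4 OpenMP threads
   mpirun -np 16 lmp_mpi -sf gpu -pk gpu 2 -in in.lj    # 16 MPI tasks sharing 2 GPUs per node

See the dirs and doc pages above for the exact switches used in the
provided benchmark scripts.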
+----------------+---------+---------+---------+---------+-----------+
| Problem: | LJ | Chain | EAM | Chute | Rhodopsin |
+----------------+---------+---------+---------+---------+-----------+
| CPU/atom/step: | 4.55E-6 | 2.18E-6 | 9.38E-6 | 2.18E-6 | 1.11E-4 |
+----------------+---------+---------+---------+---------+-----------+
| Ratio to LJ: | 1.0 | 0.48 | 2.06 | 0.48 | 24.5 |
+----------------+---------+---------+---------+---------+-----------+
The bench/POTENTIALS directory has input files which correspond to the
table of results in the
:ref:`Potentials <potentials>` section of
the Benchmarks web page, so you can also run those test problems on
your machine.
The ratios mean that if the atomic LJ system has a normalized cost of
1.0, the bead-spring chains and granular systems run 2x faster, while
the EAM metal and solvated protein models run 2x and 25x slower
respectively. The bulk of these cost differences is due to the
expense of computing a particular pairwise force field for a given
number of neighbors per atom.
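As a rough worked example using the table above (timings on current
hardware will differ, so treat this only as an illustration of the
estimate), the cost of a larger LJ run on that same single processor
is:

.. parsed-literal::

   run time ~ (CPU cost/atom/step) x (atoms) x (timesteps)
            = 4.55e-6 s x 1,000,000 atoms x 10,000 steps
            = 45,500 s, or about 12.6 hours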
The :ref:`billion-atom <billion>` section
of the Benchmarks web page has performance data for very large
benchmark runs of simple Lennard-Jones (LJ) models, which use the
bench/in.lj input script.
Performance on a parallel machine can also be predicted from the
one-processor timings if the parallel efficiency can be estimated.
The communication bandwidth and latency of a particular parallel
machine affects the efficiency. On most machines LAMMPS will give
fixed-size parallel efficiencies on these benchmarks above 50% so long
as the atoms/processor count is a few 100 or greater - i.e. on 64 to
128 processors. Likewise, scaled-size parallel efficiencies will
typically be 80% or greater up to very large processor counts. The
benchmark data on the `LAMMPS WWW Site <lws_>`_ gives specific examples on
some different machines, including a run of 3/4 of a billion LJ atoms
on 1500 processors that ran at 85% parallel efficiency.
----------
For all the benchmarks, a useful metric is the CPU cost per atom per
timestep. Since performance scales roughly linearly with problem size
and timesteps for all LAMMPS models (i.e. interatomic or coarse-grained
potentials), the run time of any problem using the same model (atom
style, force field, cutoff, etc) can then be estimated.
Performance on a parallel machine can also be predicted from one-core
or one-node timings if the parallel efficiency can be estimated. The
communication bandwidth and latency of a particular parallel machine
affects the efficiency. On most machines LAMMPS will give parallel
efficiencies on these benchmarks above 50% so long as the number of
atoms/core is a few 100 or greater, and closer to 100% for large
numbers of atoms/core. This is for all-MPI mode with one MPI task per
core. For nodes with accelerator options or hardware (OpenMP, GPU,
Phi), you should first measure single node performance. Then you can
estimate parallel performance for multi-node runs using the same logic
as for all-MPI mode, except that now you will typically need many more
atoms/node to achieve good scalability.
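A minimal sketch of that kind of estimate, assuming the hypothetical
45,500 s one-core LJ timing from the example above and a guessed 80%
parallel efficiency on 64 cores:

.. parsed-literal::

   parallel run time ~ one-core time / (cores x efficiency)
                     = 45,500 s / (64 x 0.8)
                     ~ 890 s, or about 15 minutes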
.. _lws: http://lammps.sandia.gov

View File

@ -49,11 +49,13 @@ Here is a quick overview of how to use the KOKKOS package
for CPU acceleration, assuming one or more 16-core nodes.
More details follow.
use a C++11 compatible compiler
make yes-kokkos
make mpi KOKKOS_DEVICES=OpenMP # build with the KOKKOS package
make kokkos_omp # or Makefile.kokkos_omp already has variable set
Make.py -v -p kokkos -kokkos omp -o mpi -a file mpi # or one-line build via Make.py
.. parsed-literal::
use a C++11 compatible compiler
make yes-kokkos
make mpi KOKKOS_DEVICES=OpenMP # build with the KOKKOS package
make kokkos_omp # or Makefile.kokkos_omp already has variable set
Make.py -v -p kokkos -kokkos omp -o mpi -a file mpi # or one-line build via Make.py
.. parsed-literal::
@ -72,12 +74,14 @@ details follow.
discuss use of NVCC, which Makefiles to examine
use a C++11 compatible compiler
KOKKOS_DEVICES = Cuda, OpenMP
KOKKOS_ARCH = Kepler35
make yes-kokkos
make machine
Make.py -p kokkos -kokkos cuda arch=31 -o kokkos_cuda -a file kokkos_cuda
.. parsed-literal::
use a C++11 compatible compiler
KOKKOS_DEVICES = Cuda, OpenMP
KOKKOS_ARCH = Kepler35
make yes-kokkos
make machine
Make.py -p kokkos -kokkos cuda arch=31 -o kokkos_cuda -a file kokkos_cuda
.. parsed-literal::
@ -101,11 +105,13 @@ for the Intel Phi:
make machine
Make.py -p kokkos -kokkos phi -o kokkos_phi -a file mpi
host=MIC, Intel Phi with 61 cores (240 threads/phi via 4x hardware threading):
mpirun -np 1 lmp_g++ -k on t 240 -sf kk -in in.lj # 1 MPI task on 1 Phi, 1*240 = 240
mpirun -np 30 lmp_g++ -k on t 8 -sf kk -in in.lj # 30 MPI tasks on 1 Phi, 30*8 = 240
mpirun -np 12 lmp_g++ -k on t 20 -sf kk -in in.lj # 12 MPI tasks on 1 Phi, 12*20 = 240
mpirun -np 96 -ppn 12 lmp_g++ -k on t 20 -sf kk -in in.lj # ditto on 8 Phis
.. parsed-literal::
host=MIC, Intel Phi with 61 cores (240 threads/phi via 4x hardware threading):
mpirun -np 1 lmp_g++ -k on t 240 -sf kk -in in.lj # 1 MPI task on 1 Phi, 1*240 = 240
mpirun -np 30 lmp_g++ -k on t 8 -sf kk -in in.lj # 30 MPI tasks on 1 Phi, 30*8 = 240
mpirun -np 12 lmp_g++ -k on t 20 -sf kk -in in.lj # 12 MPI tasks on 1 Phi, 12*20 = 240
mpirun -np 96 -ppn 12 lmp_g++ -k on t 20 -sf kk -in in.lj # ditto on 8 Phis
**Required hardware/software:**

View File

@ -43,9 +43,9 @@ Description
"""""""""""
Styles *brownian* and *brownian/poly* compute Brownian forces and
torques on finite-size particles. The former requires monodisperse
spherical particles; the latter allows for polydisperse spherical
particles.
torques on finite-size spherical particles. The former requires
monodisperse spherical particles; the latter allows for polydisperse
spherical particles.
These pair styles are designed to be used with either the :doc:`pair_style lubricate <pair_lubricate>` or :doc:`pair_style lubricateU <pair_lubricateU>` commands to provide thermostatting
when dissipative lubrication forces are acting. Thus the parameters
@ -134,8 +134,8 @@ Restrictions
""""""""""""
These styles are part of the FLD package. They are only enabled if
LAMMPS was built with that package. See the :ref:`Making LAMMPS <2_3>` section for more info.
These styles are part of the COLLOID package. They are only enabled
if LAMMPS was built with that package. See the :ref:`Making LAMMPS <2_3>` section for more info.
Only spherical monodisperse particles are allowed for pair_style
brownian.

View File

@ -47,8 +47,8 @@ Description
"""""""""""
Styles *lubricate* and *lubricate/poly* compute hydrodynamic
interactions between mono-disperse spherical particles in a pairwise
fashion. The interactions have 2 components. The first is
interactions between mono-disperse finite-size spherical particles in
a pairwise fashion. The interactions have 2 components. The first is
Ball-Melrose lubrication terms via the formulas in :ref:`(Ball and Melrose) <Ball>`
.. image:: Eqs/pair_lubricate.jpg
@ -207,8 +207,8 @@ Restrictions
""""""""""""
These styles are part of the FLD package. They are only enabled if
LAMMPS was built with that package. See the :ref:`Making LAMMPS <2_3>` section for more info.
These styles are part of the COLLOID package. They are only enabled
if LAMMPS was built with that package. See the :ref:`Making LAMMPS <2_3>` section for more info.
Only spherical monodisperse particles are allowed for pair_style
lubricate.

View File

@ -34,8 +34,9 @@ Description
"""""""""""
Styles *lubricateU* and *lubricateU/poly* compute velocities and
angular velocities such that the hydrodynamic interaction balances the
force and torque due to all other types of interactions.
angular velocities for finite-size spherical particles such that the
hydrodynamic interaction balances the force and torque due to all
other types of interactions.
The interactions have 2 components. The first is
Ball-Melrose lubrication terms via the formulas in :ref:`(Ball and Melrose) <Ball>`
@ -187,8 +188,8 @@ Restrictions
""""""""""""
These styles are part of the FLD package. They are only enabled if
LAMMPS was built with that package. See the :ref:`Making LAMMPS <2_3>` section for more info.
These styles are part of the COLLOID package. They are only enabled
if LAMMPS was built with that package. See the :ref:`Making LAMMPS <2_3>` section for more info.
Currently, these pair styles assume that all other types of
forces/torques on the particles have already been computed when

View File

@ -25,7 +25,7 @@ Syntax
pair_style style
style = *peri/pmb* or *peri/lps* or *peri/ves* or *peri/eps*:ul
* style = *peri/pmb* or *peri/lps* or *peri/ves* or *peri/eps*
Examples
""""""""

View File

@ -166,11 +166,13 @@ produce an executable compatible with specific hardware.</p>
<p>Here is a quick overview of how to use the KOKKOS package
for CPU acceleration, assuming one or more 16-core nodes.
More details follow.</p>
<p>use a C++11 compatible compiler
<div class="highlight-python"><div class="highlight"><pre>use a C++11 compatible compiler
make yes-kokkos
make mpi KOKKOS_DEVICES=OpenMP # build with the KOKKOS package
make kokkos_omp # or Makefile.kokkos_omp already has variable set
Make.py -v -p kokkos -kokkos omp -o mpi -a file mpi # or one-line build via Make.py</p>
Make.py -v -p kokkos -kokkos omp -o mpi -a file mpi # or one-line build via Make.py
</pre></div>
</div>
<div class="highlight-python"><div class="highlight"><pre>mpirun -np 16 lmp_mpi -k on -sf kk -in in.lj # 1 node, 16 MPI tasks/node, no threads
mpirun -np 2 -ppn 1 lmp_mpi -k on t 16 -sf kk -in in.lj # 2 nodes, 1 MPI task/node, 16 threads/task
mpirun -np 2 lmp_mpi -k on t 8 -sf kk -in in.lj # 1 node, 2 MPI tasks/node, 8 threads/task
@ -186,12 +188,14 @@ mpirun -np 32 -ppn 4 lmp_mpi -k on t 4 -sf kk -in in.lj # 8 nodes, 4 MPI tasks
assuming one or more nodes, each with 16 cores and a GPU. More
details follow.</p>
<p>discuss use of NVCC, which Makefiles to examine</p>
<p>use a C++11 compatible compiler
<div class="highlight-python"><div class="highlight"><pre>use a C++11 compatible compiler
KOKKOS_DEVICES = Cuda, OpenMP
KOKKOS_ARCH = Kepler35
make yes-kokkos
make machine
Make.py -p kokkos -kokkos cuda arch=31 -o kokkos_cuda -a file kokkos_cuda</p>
Make.py -p kokkos -kokkos cuda arch=31 -o kokkos_cuda -a file kokkos_cuda
</pre></div>
</div>
<div class="highlight-python"><div class="highlight"><pre>mpirun -np 1 lmp_cuda -k on t 6 -sf kk -in in.lj # one MPI task, 6 threads on CPU
mpirun -np 4 -ppn 1 lmp_cuda -k on t 6 -sf kk -in in.lj # ditto on 4 nodes
</pre></div>
@ -210,11 +214,13 @@ make machine
Make.py -p kokkos -kokkos phi -o kokkos_phi -a file mpi
</pre></div>
</div>
<p>host=MIC, Intel Phi with 61 cores (240 threads/phi via 4x hardware threading):
<div class="highlight-python"><div class="highlight"><pre>host=MIC, Intel Phi with 61 cores (240 threads/phi via 4x hardware threading):
mpirun -np 1 lmp_g++ -k on t 240 -sf kk -in in.lj # 1 MPI task on 1 Phi, 1*240 = 240
mpirun -np 30 lmp_g++ -k on t 8 -sf kk -in in.lj # 30 MPI tasks on 1 Phi, 30*8 = 240
mpirun -np 12 lmp_g++ -k on t 20 -sf kk -in in.lj # 12 MPI tasks on 1 Phi, 12*20 = 240
mpirun -np 96 -ppn 12 lmp_g++ -k on t 20 -sf kk -in in.lj # ditto on 8 Phis</p>
mpirun -np 96 -ppn 12 lmp_g++ -k on t 20 -sf kk -in in.lj # ditto on 8 Phis
</pre></div>
</div>
<p><strong>Required hardware/software:</strong></p>
<p>Kokkos support within LAMMPS must be built with a C++11 compatible
compiler. If using gcc, version 4.8.1 or later is required.</p>

View File

@ -61,16 +61,13 @@ use a C++11 compatible compiler
make yes-kokkos
make mpi KOKKOS_DEVICES=OpenMP # build with the KOKKOS package
make kokkos_omp # or Makefile.kokkos_omp already has variable set
Make.py -v -p kokkos -kokkos omp -o mpi -a file mpi # or one-line build via Make.py
Make.py -v -p kokkos -kokkos omp -o mpi -a file mpi # or one-line build via Make.py :pre
mpirun -np 16 lmp_mpi -k on -sf kk -in in.lj # 1 node, 16 MPI tasks/node, no threads
mpirun -np 2 -ppn 1 lmp_mpi -k on t 16 -sf kk -in in.lj # 2 nodes, 1 MPI task/node, 16 threads/task
mpirun -np 2 lmp_mpi -k on t 8 -sf kk -in in.lj # 1 node, 2 MPI tasks/node, 8 threads/task
mpirun -np 32 -ppn 4 lmp_mpi -k on t 4 -sf kk -in in.lj # 8 nodes, 4 MPI tasks/node, 4 threads/task :pre
specify variables and settings in your Makefile.machine that enable OpenMP, GPU, or Phi support
include the KOKKOS package and build LAMMPS
enable the KOKKOS package and its hardware options via the "-k on" command-line switch
use KOKKOS styles in your input script :ul
@ -86,7 +83,7 @@ KOKKOS_DEVICES = Cuda, OpenMP
KOKKOS_ARCH = Kepler35
make yes-kokkos
make machine
Make.py -p kokkos -kokkos cuda arch=31 -o kokkos_cuda -a file kokkos_cuda
Make.py -p kokkos -kokkos cuda arch=31 -o kokkos_cuda -a file kokkos_cuda :pre
mpirun -np 1 lmp_cuda -k on t 6 -sf kk -in in.lj # one MPI task, 6 threads on CPU
mpirun -np 4 -ppn 1 lmp_cuda -k on t 6 -sf kk -in in.lj # ditto on 4 nodes :pre
@ -108,8 +105,7 @@ host=MIC, Intel Phi with 61 cores (240 threads/phi via 4x hardware threading):
mpirun -np 1 lmp_g++ -k on t 240 -sf kk -in in.lj # 1 MPI task on 1 Phi, 1*240 = 240
mpirun -np 30 lmp_g++ -k on t 8 -sf kk -in in.lj # 30 MPI tasks on 1 Phi, 30*8 = 240
mpirun -np 12 lmp_g++ -k on t 20 -sf kk -in in.lj # 12 MPI tasks on 1 Phi, 12*20 = 240
mpirun -np 96 -ppn 12 lmp_g++ -k on t 20 -sf kk -in in.lj # ditto on 8 Phis
mpirun -np 96 -ppn 12 lmp_g++ -k on t 20 -sf kk -in in.lj # ditto on 8 Phis :pre
[Required hardware/software:]

View File

@ -164,9 +164,9 @@ pair_coeff * *
<div class="section" id="description">
<h2>Description<a class="headerlink" href="#description" title="Permalink to this headline"></a></h2>
<p>Styles <em>brownian</em> and <em>brownian/poly</em> compute Brownian forces and
torques on finite-size particles. The former requires monodisperse
spherical particles; the latter allows for polydisperse spherical
particles.</p>
torques on finite-size spherical particles. The former requires
monodisperse spherical particles; the latter allows for polydisperse
spherical particles.</p>
<p>These pair styles are designed to be used with either the <a class="reference internal" href="pair_lubricate.html"><em>pair_style lubricate</em></a> or <a class="reference internal" href="pair_lubricateU.html"><em>pair_style lubricateU</em></a> commands to provide thermostatting
when dissipative lubrication forces are acting. Thus the parameters
<em>mu</em>, <em>flaglog</em>, <em>flagfld</em>, <em>cutinner</em>, and <em>cutoff</em> should be
@ -226,8 +226,8 @@ to be specified in an input script that reads a restart file.</p>
<hr class="docutils" />
<div class="section" id="restrictions">
<h2>Restrictions<a class="headerlink" href="#restrictions" title="Permalink to this headline"></a></h2>
<p>These styles are part of the FLD package. They are only enabled if
LAMMPS was built with that package. See the <span class="xref std std-ref">Making LAMMPS</span> section for more info.</p>
<p>These styles are part of the COLLOID package. They are only enabled
if LAMMPS was built with that package. See the <span class="xref std std-ref">Making LAMMPS</span> section for more info.</p>
<p>Only spherical monodisperse particles are allowed for pair_style
brownian.</p>
<p>Only spherical particles are allowed for pair_style brownian/poly.</p>

View File

@ -35,9 +35,9 @@ pair_coeff * * :pre
[Description:]
Styles {brownian} and {brownian/poly} compute Brownian forces and
torques on finite-size particles. The former requires monodisperse
spherical particles; the latter allows for polydisperse spherical
particles.
torques on finite-size spherical particles. The former requires
monodisperse spherical particles; the latter allows for polydisperse
spherical particles.
These pair styles are designed to be used with either the "pair_style
lubricate"_pair_lubricate.html or "pair_style
@ -121,8 +121,8 @@ This pair style can only be used via the {pair} keyword of the
[Restrictions:]
These styles are part of the FLD package. They are only enabled if
LAMMPS was built with that package. See the "Making
These styles are part of the COLLOID package. They are only enabled
if LAMMPS was built with that package. See the "Making
LAMMPS"_Section_start.html#2_3 section for more info.
Only spherical monodisperse particles are allowed for pair_style

View File

@ -166,8 +166,8 @@ fix 1 all adapt 1 pair lubricate mu * * v_mu
<div class="section" id="description">
<h2>Description<a class="headerlink" href="#description" title="Permalink to this headline"></a></h2>
<p>Styles <em>lubricate</em> and <em>lubricate/poly</em> compute hydrodynamic
interactions between mono-disperse spherical particles in a pairwise
fashion. The interactions have 2 components. The first is
interactions between mono-disperse finite-size spherical particles in
a pairwise fashion. The interactions have 2 components. The first is
Ball-Melrose lubrication terms via the formulas in <a class="reference internal" href="pair_lubricateU.html#ball"><span>(Ball and Melrose)</span></a></p>
<img alt="_images/pair_lubricate.jpg" class="align-center" src="_images/pair_lubricate.jpg" />
<p>which represents the dissipation W between two nearby particles due to
@ -285,8 +285,8 @@ to be specified in an input script that reads a restart file.</p>
<hr class="docutils" />
<div class="section" id="restrictions">
<h2>Restrictions<a class="headerlink" href="#restrictions" title="Permalink to this headline"></a></h2>
<p>These styles are part of the FLD package. They are only enabled if
LAMMPS was built with that package. See the <span class="xref std std-ref">Making LAMMPS</span> section for more info.</p>
<p>These styles are part of the COLLOID package. They are only enabled
if LAMMPS was built with that package. See the <span class="xref std std-ref">Making LAMMPS</span> section for more info.</p>
<p>Only spherical monodisperse particles are allowed for pair_style
lubricate.</p>
<p>Only spherical particles are allowed for pair_style lubricate/poly.</p>

View File

@ -38,8 +38,8 @@ fix 1 all adapt 1 pair lubricate mu * * v_mu :pre
[Description:]
Styles {lubricate} and {lubricate/poly} compute hydrodynamic
interactions between mono-disperse spherical particles in a pairwise
fashion. The interactions have 2 components. The first is
interactions between mono-disperse finite-size spherical particles in
a pairwise fashion. The interactions have 2 components. The first is
Ball-Melrose lubrication terms via the formulas in "(Ball and
Melrose)"_#Ball
@ -190,8 +190,8 @@ This pair style can only be used via the {pair} keyword of the
[Restrictions:]
These styles are part of the FLD package. They are only enabled if
LAMMPS was built with that package. See the "Making
These styles are part of the COLLOID package. They are only enabled
if LAMMPS was built with that package. See the "Making
LAMMPS"_Section_start.html#2_3 section for more info.
Only spherical monodisperse particles are allowed for pair_style

View File

@ -154,8 +154,9 @@ pair_coeff * *
<div class="section" id="description">
<h2>Description<a class="headerlink" href="#description" title="Permalink to this headline"></a></h2>
<p>Styles <em>lubricateU</em> and <em>lubricateU/poly</em> compute velocities and
angular velocities such that the hydrodynamic interaction balances the
force and torque due to all other types of interactions.</p>
angular velocities for finite-size spherical particles such that the
hydrodynamic interaction balances the force and torque due to all
other types of interactions.</p>
<p>The interactions have 2 components. The first is
Ball-Melrose lubrication terms via the formulas in <a class="reference internal" href="#ball"><span>(Ball and Melrose)</span></a></p>
<img alt="_images/pair_lubricate.jpg" class="align-center" src="_images/pair_lubricate.jpg" />
@ -272,8 +273,8 @@ to be specified in an input script that reads a restart file.</p>
<hr class="docutils" />
<div class="section" id="restrictions">
<h2>Restrictions<a class="headerlink" href="#restrictions" title="Permalink to this headline"></a></h2>
<p>These styles are part of the FLD package. They are only enabled if
LAMMPS was built with that package. See the <span class="xref std std-ref">Making LAMMPS</span> section for more info.</p>
<p>These styles are part of the COLLOID package. They are only enabled
if LAMMPS was built with that package. See the <span class="xref std std-ref">Making LAMMPS</span> section for more info.</p>
<p>Currently, these pair styles assume that all other types of
forces/torques on the particles have already been computed when
it is invoked. This requires this style to be defined as the last of

View File

@ -31,8 +31,9 @@ pair_coeff * * :pre
[Description:]
Styles {lubricateU} and {lubricateU/poly} compute velocities and
angular velocities such that the hydrodynamic interaction balances the
force and torque due to all other types of interactions.
angular velocities for finite-size spherical particles such that the
hydrodynamic interaction balances the force and torque due to all
other types of interactions.
The interactions have 2 components. The first is
Ball-Melrose lubrication terms via the formulas in "(Ball and
@ -177,8 +178,8 @@ This pair style can only be used via the {pair} keyword of the
[Restrictions:]
These styles are part of the FLD package. They are only enabled if
LAMMPS was built with that package. See the "Making
These styles are part of the COLLOID package. They are only enabled
if LAMMPS was built with that package. See the "Making
LAMMPS"_Section_start.html#2_3 section for more info.
Currently, these pair styles assume that all other types of

View File

@ -146,7 +146,9 @@
<div class="highlight-python"><div class="highlight"><pre>pair_style style
</pre></div>
</div>
<p>style = <em>peri/pmb</em> or <em>peri/lps</em> or <em>peri/ves</em> or <em>peri/eps</em>:ul</p>
<ul class="simple">
<li>style = <em>peri/pmb</em> or <em>peri/lps</em> or <em>peri/ves</em> or <em>peri/eps</em></li>
</ul>
</div>
<div class="section" id="examples">
<h2>Examples<a class="headerlink" href="#examples" title="Permalink to this headline"></a></h2>

View File

@ -17,7 +17,7 @@ pair_style peri/eps command :h3
pair_style style :pre
style = {peri/pmb} or {peri/lps} or {peri/ves} or {peri/eps}:ul
style = {peri/pmb} or {peri/lps} or {peri/ves} or {peri/eps} :ul
[Examples:]

File diff suppressed because one or more lines are too long