git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@8261 f3b2605a-c512-4ea7-a41b-209d697bcdaa

This commit is contained in:
sjplimp 2012-06-12 22:54:32 +00:00
parent ae7eb10d55
commit 5c3adddd85
5 changed files with 207 additions and 153 deletions

View File

@ -20,7 +20,7 @@ balancing has not yet been released.
</PRE>
<LI>one or more keyword/arg pairs may be appended
</UL>
<LI>keyword = <I>x</I> or <I>y</I> or <I>z</I> or <I>dynamic</I>
<LI>keyword = <I>x</I> or <I>y</I> or <I>z</I> or <I>dynamic</I> or <I>out</I>
<PRE> <I>x</I> args = <I>uniform</I> or Px-1 numbers between 0 and 1
<I>uniform</I> = evenly spaced cuts between processors in x dimension
@ -31,19 +31,20 @@ balancing has not yet been released.
<I>z</I> args = <I>uniform</I> or Pz-1 numbers between 0 and 1
<I>uniform</I> = evenly spaced cuts between processors in z dimension
numbers = Pz-1 ascending values between 0 and 1, Pz - # of processors in z dimension
<I>dynamic</I> args = Nrepeat Niter dimstr thresh
Nrepeat = # of times to repeat dimstr sequence
<I>dynamic</I> args = dimstr Niter thresh
dimstr = sequence of letters containing "x" or "y" or "z", each not more than once
Niter = # of times to iterate within each dimension of dimstr sequence
dimstr = sequence of letters containing "x" or "y" or "z"
thresh = stop balancing when this imbalance threshold is reached
<I>out</I> arg = filename
filename = output file to write each processor's sub-domain to
</PRE>
</UL>
<P><B>Examples:</B>
</P>
<PRE>balance x uniform y 0.4 0.5 0.6
balance dynamic 1 5 xzx 1.1
balance dynamic 5 10 x 1.0
balance dynamic xz 5 1.1
balance dynamic x 20 1.0 out tmp.balance
</PRE>
<P><B>Description:</B>
</P>
@ -67,14 +68,13 @@ very different numbers of particles per processor. This can lead to
poor performance in a scalability sense, when the simulation is run in
parallel.
</P>
<P>Note that the <A HREF = "processors.html">processors</A> command gives you some
control over how the box volume is split across processors.
Specifically, for a Px by Py by Pz grid of processors, it lets you
choose Px, Py, and Pz, subject to the constraint that Px * Py * Pz =
P, the total number of processors. This can be sufficient to achieve
good load-balance for some models on some processor counts. However,
all the processor sub-domains will still be the same shape and have
the same volume.
<P>Note that the <A HREF = "processors.html">processors</A> command gives you control
over how the box volume is split across processors. Specifically, for
a Px by Py by Pz grid of processors, it chooses or lets you choose Px,
Py, and Pz, subject to the constraint that Px * Py * Pz = P, the total
number of processors. This is sufficient to achieve good load-balance
for many models on many processor counts. However, all the processor
sub-domains will still be the same shape and have the same volume.
</P>
<P>This command does not alter the topology of the Px by Py by Pz grid or
processors. But it shifts the cutting planes between processors (in
@ -141,48 +141,77 @@ partitioning, which could be uniform or the result of a previous
balance command.
</P>
<P>The <I>dimstr</I> argument is a string of characters, each of which must be
an "x" or "y" or "z". The characters can appear in any order, and can
be repeated as many times as desired. These are all valid <I>dimstr</I>
arguments: "x" or "xyzyx" or "yyyzzz".
an "x" or "y" or "z". Each character can appear zero or one time,
since there is no advantage to balancing on a dimension more than
once. You should normally only list dimensions where you expect there
to be a density variation in the particles.
</P>
<P>Balancing proceeds by adjusting the cutting planes in each of the
dimensions listed in <I>dimstr</I>, one dimension at a time. The entire
sequence of dimensions is repeated <I>Nrepeat</I> times. For a single
dimension, the balancing operation (described below) is iterated on
<I>Niter</I> times. After each dimension finishes, the imbalance factor is
re-computed, and the balancing operation halts if the <I>thresh</I>
dimensions listed in <I>dimstr</I>, one dimension at a time. For a single
dimension, the balancing operation (described below) is iterated on up
to <I>Niter</I> times. After each dimension finishes, the imbalance factor
is re-computed, and the balancing operation halts if the <I>thresh</I>
criterion is met.
</P>
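<P>The control flow described above can be sketched as follows. This is
an illustrative Python sketch, not LAMMPS source; adjust_dimension()
and imbalance() are hypothetical stand-ins for the per-dimension
balancing operation and the imbalance-factor computation:
</P>

```python
# Illustrative sketch (not LAMMPS source) of the "balance dynamic" loop:
# one pass per dimension listed in dimstr, Niter inner iterations per
# dimension, halting once the imbalance factor reaches thresh.
def balance_dynamic(dimstr, niter, thresh, adjust_dimension, imbalance):
    for dim in dimstr:            # e.g. "xz": balance x cuts, then z cuts
        for _ in range(niter):    # iterate the per-dimension balancer
            adjust_dimension(dim)
        if imbalance() <= thresh: # re-check after each dimension finishes
            break
```

<P>For example, the command "balance dynamic xz 5 1.1" corresponds to
calling balance_dynamic("xz", 5, 1.1, ...) in this sketch.
</P>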
<P>The interplay between <I>Nrepeat</I>, <I>Niter</I>, and <I>dimstr</I> means that
these commands do essentially the same thing, the only difference
being how often the imbalance factor is computed and checked against
the threshold:
<P>A rebalance operation in a single dimension is performed using a
recursive multisectioning algorithm, where the position of each
cutting plane (line in 2d) in the dimension is adjusted independently.
This is similar to a recursive bisectioning (RCB) for a single value,
except that the bounds used for each bisectioning take advantage of
information from neighboring cuts if possible. At each iteration, the
count of particles on either side of each plane is tallied. If the
counts do not match the target value for the plane, the position of
the cut is adjusted. As the recursion progresses, the count of
particles on either side of the plane gets closer to the target value.
</P>
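<P>The per-cut adjustment can be viewed as a bisection on the particle
count. Below is a minimal Python sketch, assuming a single cut and
directly available particle coordinates; it is not the LAMMPS
implementation:
</P>

```python
# Minimal sketch (not the LAMMPS implementation): move one cutting plane
# by bisection until roughly `target` particles lie below the cut.
def adjust_cut(coords, lo, hi, target, niter=10):
    """Bisect the cut position between bounds lo and hi."""
    cut = 0.5 * (lo + hi)
    for _ in range(niter):
        count = sum(1 for x in coords if x < cut)  # particles below the cut
        if count == target:
            break                # cut matches its target count exactly
        if count > target:
            hi = cut             # too many below: move the cut down
        else:
            lo = cut             # too few below: move the cut up
        cut = 0.5 * (lo + hi)
    return cut
```

<P>With 100 uniformly spaced particles in [0,1) and a target of 25
particles below the cut, the cut converges near 0.25.
</P>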
<PRE>balance y dynamic 5 10 x 1.2
balance y dynamic 1 10 xxxxx 1.2
balance y dynamic 50 1 x 1.2
<HR>
<P>The <I>out</I> keyword writes a text file to the specified <I>filename</I> with
the results of the balancing operation. The file contains the bounds
of the sub-domain for each processor after the balancing operation
completes. The format of the file is compatible with the
<A HREF = "pizza">Pizza.py</A> <I>mdump</I> tool which has support for manipulating and
visualizing mesh files. An example is shown here for a balancing by 4
processors for a 2d problem:
</P>
<PRE>ITEM: TIMESTEP
0
ITEM: NUMBER OF SQUARES
4
ITEM: SQUARES
1 1 1 2 7 6
2 2 2 3 8 7
3 3 3 4 9 8
4 4 4 5 10 9
ITEM: TIMESTEP
0
ITEM: NUMBER OF NODES
10
ITEM: BOX BOUNDS
-153.919 184.703
0 15.3919
-0.769595 0.769595
ITEM: NODES
1 1 -153.919 0 0
2 1 7.45545 0 0
3 1 14.7305 0 0
4 1 22.667 0 0
5 1 184.703 0 0
6 1 -153.919 15.3919 0
7 1 7.45545 15.3919 0
8 1 14.7305 15.3919 0
9 1 22.667 15.3919 0
10 1 184.703 15.3919 0
</PRE>
<P>A rebalance operation in a single dimension is performed using an
iterative "diffusive" load-balancing algorithm <A HREF = "#Cybenko">(Cybenko)</A>.
One iteration on a dimension (which is repeated <I>Niter</I> times) works
as follows. Assume there are Px processors in the x dimension. This
defines Px slices of the simulation, each of which contains Py*Pz
processors. The task is to adjust the position of the Px-1 cuts
between slices, leaving the end cuts unchanged (left and right edges
of the simulation box).
<P>The "SQUARES" lists the node IDs of the 4 vertices in a rectangle for
each processor (1 to 4). The first SQUARE 1 (for processor 0) is a
rectangle of type 1 (equal to SQUARE ID) and contains vertices
1,2,7,6. The coordinates of all the vertices are listed in the NODES
section. Note that the 4 sub-domains share vertices, so there are
only 10 unique vertices in total.
</P>
<P>The iteration begins by calculating the number of atoms within each of
the Px slices. Then for each slice, its atom count is compared to its
neighbors. If a slice has more atoms than its left (or right)
neighbor, the cut is moved towards the center of the slice,
effectively shrinking the width of the slice and migrating atoms to
the other slice. The distance to move the cut is a function of the
"density" of atoms in the donor slice and the difference in counts
between the 2 slices. A damping factor is also applied to avoid
oscillations in the position of the cutting plane as iterations
proceed. Hence the "diffusive" nature of the algorithm as work
(atoms) effectively diffuses from highly loaded processors to
less-loaded processors.
<P>For a 3d problem, the syntax is similar with "SQUARES" replaced by
"CUBES", and 8 vertices listed for each processor, instead of 4.
</P>
<HR>
@ -200,10 +229,4 @@ appear in <I>dimstr</I> for the <I>dynamic</I> keyword.
</P>
<P><B>Default:</B> none
</P>
<HR>
<A NAME = "Cybenko"></A>
<P><B>(Cybenko)</B> Cybenko, J Par Dist Comp, 7, 279-301 (1989).
</P>
</HTML>

View File

@ -16,7 +16,7 @@ balancing has not yet been released.
balance keyword args ... :pre
one or more keyword/arg pairs may be appended :ule,l
keyword = {x} or {y} or {z} or {dynamic} :l
keyword = {x} or {y} or {z} or {dynamic} or {out} :l
{x} args = {uniform} or Px-1 numbers between 0 and 1
{uniform} = evenly spaced cuts between processors in x dimension
numbers = Px-1 ascending values between 0 and 1, Px - # of processors in x dimension
@ -26,18 +26,19 @@ keyword = {x} or {y} or {z} or {dynamic} :l
{z} args = {uniform} or Pz-1 numbers between 0 and 1
{uniform} = evenly spaced cuts between processors in z dimension
numbers = Pz-1 ascending values between 0 and 1, Pz - # of processors in z dimension
{dynamic} args = Nrepeat Niter dimstr thresh
Nrepeat = # of times to repeat dimstr sequence
{dynamic} args = dimstr Niter thresh
dimstr = sequence of letters containing "x" or "y" or "z", each not more than once
Niter = # of times to iterate within each dimension of dimstr sequence
dimstr = sequence of letters containing "x" or "y" or "z"
thresh = stop balancing when this imbalance threshold is reached :pre
thresh = stop balancing when this imbalance threshold is reached
{out} arg = filename
filename = output file to write each processor's sub-domain to :pre
:ule
[Examples:]
balance x uniform y 0.4 0.5 0.6
balance dynamic 1 5 xzx 1.1
balance dynamic 5 10 x 1.0 :pre
balance dynamic xz 5 1.1
balance dynamic x 20 1.0 out tmp.balance :pre
[Description:]
@ -61,14 +62,13 @@ very different numbers of particles per processor. This can lead to
poor performance in a scalability sense, when the simulation is run in
parallel.
Note that the "processors"_processors.html command gives you some
control over how the box volume is split across processors.
Specifically, for a Px by Py by Pz grid of processors, it lets you
choose Px, Py, and Pz, subject to the constraint that Px * Py * Pz =
P, the total number of processors. This can be sufficient to achieve
good load-balance for some models on some processor counts. However,
all the processor sub-domains will still be the same shape and have
the same volume.
Note that the "processors"_processors.html command gives you control
over how the box volume is split across processors. Specifically, for
a Px by Py by Pz grid of processors, it chooses or lets you choose Px,
Py, and Pz, subject to the constraint that Px * Py * Pz = P, the total
number of processors. This is sufficient to achieve good load-balance
for many models on many processor counts. However, all the processor
sub-domains will still be the same shape and have the same volume.
This command does not alter the topology of the Px by Py by Pz grid or
processors. But it shifts the cutting planes between processors (in
@ -135,48 +135,77 @@ partitioning, which could be uniform or the result of a previous
balance command.
The {dimstr} argument is a string of characters, each of which must be
an "x" or "y" or "z". The characters can appear in any order, and can
be repeated as many times as desired. These are all valid {dimstr}
arguments: "x" or "xyzyx" or "yyyzzz".
an "x" or "y" or "z". Each character can appear zero or one time,
since there is no advantage to balancing on a dimension more than
once. You should normally only list dimensions where you expect there
to be a density variation in the particles.
Balancing proceeds by adjusting the cutting planes in each of the
dimensions listed in {dimstr}, one dimension at a time. The entire
sequence of dimensions is repeated {Nrepeat} times. For a single
dimension, the balancing operation (described below) is iterated on
{Niter} times. After each dimension finishes, the imbalance factor is
re-computed, and the balancing operation halts if the {thresh}
dimensions listed in {dimstr}, one dimension at a time. For a single
dimension, the balancing operation (described below) is iterated on up
to {Niter} times. After each dimension finishes, the imbalance factor
is re-computed, and the balancing operation halts if the {thresh}
criterion is met.
The interplay between {Nrepeat}, {Niter}, and {dimstr} means that
these commands do essentially the same thing, the only difference
being how often the imbalance factor is computed and checked against
the threshold:
A rebalance operation in a single dimension is performed using a
recursive multisectioning algorithm, where the position of each
cutting plane (line in 2d) in the dimension is adjusted independently.
This is similar to a recursive bisectioning (RCB) for a single value,
except that the bounds used for each bisectioning take advantage of
information from neighboring cuts if possible. At each iteration, the
count of particles on either side of each plane is tallied. If the
counts do not match the target value for the plane, the position of
the cut is adjusted. As the recursion progresses, the count of
particles on either side of the plane gets closer to the target value.
balance y dynamic 5 10 x 1.2
balance y dynamic 1 10 xxxxx 1.2
balance y dynamic 50 1 x 1.2 :pre
:line
A rebalance operation in a single dimension is performed using an
iterative "diffusive" load-balancing algorithm "(Cybenko)"_#Cybenko.
One iteration on a dimension (which is repeated {Niter} times) works
as follows. Assume there are Px processors in the x dimension. This
defines Px slices of the simulation, each of which contains Py*Pz
processors. The task is to adjust the position of the Px-1 cuts
between slices, leaving the end cuts unchanged (left and right edges
of the simulation box).
The {out} keyword writes a text file to the specified {filename} with
the results of the balancing operation. The file contains the bounds
of the sub-domain for each processor after the balancing operation
completes. The format of the file is compatible with the
"Pizza.py"_pizza {mdump} tool which has support for manipulating and
visualizing mesh files. An example is shown here for a balancing by 4
processors for a 2d problem:
The iteration begins by calculating the number of atoms within each of
the Px slices. Then for each slice, its atom count is compared to its
neighbors. If a slice has more atoms than its left (or right)
neighbor, the cut is moved towards the center of the slice,
effectively shrinking the width of the slice and migrating atoms to
the other slice. The distance to move the cut is a function of the
"density" of atoms in the donor slice and the difference in counts
between the 2 slices. A damping factor is also applied to avoid
oscillations in the position of the cutting plane as iterations
proceed. Hence the "diffusive" nature of the algorithm as work
(atoms) effectively diffuses from highly loaded processors to
less-loaded processors.
ITEM: TIMESTEP
0
ITEM: NUMBER OF SQUARES
4
ITEM: SQUARES
1 1 1 2 7 6
2 2 2 3 8 7
3 3 3 4 9 8
4 4 4 5 10 9
ITEM: TIMESTEP
0
ITEM: NUMBER OF NODES
10
ITEM: BOX BOUNDS
-153.919 184.703
0 15.3919
-0.769595 0.769595
ITEM: NODES
1 1 -153.919 0 0
2 1 7.45545 0 0
3 1 14.7305 0 0
4 1 22.667 0 0
5 1 184.703 0 0
6 1 -153.919 15.3919 0
7 1 7.45545 15.3919 0
8 1 14.7305 15.3919 0
9 1 22.667 15.3919 0
10 1 184.703 15.3919 0 :pre
The "SQUARES" lists the node IDs of the 4 vertices in a rectangle for
each processor (1 to 4). The first SQUARE 1 (for processor 0) is a
rectangle of type 1 (equal to SQUARE ID) and contains vertices
1,2,7,6. The coordinates of all the vertices are listed in the NODES
section. Note that the 4 sub-domains share vertices, so there are
only 10 unique vertices in total.
For a 3d problem, the syntax is similar with "SQUARES" replaced by
"CUBES", and 8 vertices listed for each processor, instead of 4.
:line
@ -193,8 +222,3 @@ appear in {dimstr} for the {dynamic} keyword.
"processors"_processors.html, "fix balance"_fix_balance.html
[Default:] none
:line
:link(Cybenko)
[(Cybenko)] Cybenko, J Par Dist Comp, 7, 279-301 (1989).

View File

@ -13,6 +13,8 @@
</H3>
<H3><A HREF = "dump_image.html">dump image</A> command
</H3>
<H3><A HREF = "dump_molfile.html">dump molfile</A> command
</H3>
<P><B>Syntax:</B>
</P>
<PRE>dump ID group-ID style N file args
@ -21,7 +23,7 @@
<LI>group-ID = ID of the group of atoms to be dumped
<LI>style = <I>atom</I> or <I>cfg</I> or <I>dcd</I> or <I>xtc</I> or <I>xyz</I> or <I>image</I> or <I>local</I> or <I>custom</I>
<LI>style = <I>atom</I> or <I>cfg</I> or <I>dcd</I> or <I>xtc</I> or <I>xyz</I> or <I>image</I> or <I>molfile</I> or <I>local</I> or <I>custom</I>
<LI>N = dump every this many timesteps
@ -37,6 +39,8 @@
</PRE>
<PRE> <I>image</I> args = discussed on <A HREF = "dump_image.html">dump image</A> doc page
</PRE>
<PRE> <I>molfile</I> args = discussed on <A HREF = "dump_molfile.html">dump molfile</A> doc page
</PRE>
<PRE> <I>local</I> args = list of local attributes
possible attributes = index, c_ID, c_ID[N], f_ID, f_ID[N]
index = enumeration of local values
@ -133,9 +137,9 @@ parallel, because data for a single snapshot is collected from
multiple processors.
</P>
<P>For the <I>atom</I>, <I>custom</I>, <I>cfg</I>, and <I>local</I> styles, sorting is off by
default. For the <I>dcd</I>, <I>xtc</I>, and <I>xyz</I> styles, sorting by atom ID
is on by default. See the <A HREF = "dump_modify.html">dump_modify</A> doc page for
details.
default. For the <I>dcd</I>, <I>xtc</I>, <I>xyz</I>, and <I>molfile</I> styles, sorting by
atom ID is on by default. See the <A HREF = "dump_modify.html">dump_modify</A> doc
page for details.
</P>
<HR>
@ -290,11 +294,11 @@ from using the (numerical) atom type to an element name (or some
other label). This will help many visualization programs to guess
bonds and colors.
</P>
<P>Note that DCD, XTC, and XYZ formatted files can be read directly by
<A HREF = "http://www.ks.uiuc.edu/Research/vmd">VMD</A> (a popular molecular viewing
program). See <A HREF = "Section_tools.html#vmd">Section tools</A> of the manual
and the tools/lmp2vmd/README.txt file for more information about
support in VMD for reading and visualizing LAMMPS dump files.
<P>Note that <I>atom</I>, <I>custom</I>, <I>dcd</I>, <I>xtc</I>, and <I>xyz</I> style dump files can
be read directly by <A HREF = "http://www.ks.uiuc.edu/Research/vmd">VMD</A> (a popular
molecular viewing program). See <A HREF = "Section_tools.html#vmd">Section tools</A>
of the manual and the tools/lmp2vmd/README.txt file for more information
about support in VMD for reading and visualizing LAMMPS dump files.
</P>
<HR>

View File

@ -8,6 +8,7 @@
dump command :h3
"dump image"_dump_image.html command :h3
"dump molfile"_dump_molfile.html command :h3
[Syntax:]
@ -15,7 +16,7 @@ dump ID group-ID style N file args :pre
ID = user-assigned name for the dump :ulb,l
group-ID = ID of the group of atoms to be dumped :l
style = {atom} or {cfg} or {dcd} or {xtc} or {xyz} or {image} or {local} or {custom} :l
style = {atom} or {cfg} or {dcd} or {xtc} or {xyz} or {image} or {molfile} or {local} or {custom} :l
N = dump every this many timesteps :l
file = name of file to write dump info to :l
args = list of arguments for a particular style :l
@ -27,6 +28,8 @@ args = list of arguments for a particular style :l
{image} args = discussed on "dump image"_dump_image.html doc page :pre
{molfile} args = discussed on "dump molfile"_dump_molfile.html doc page :pre
{local} args = list of local attributes
possible attributes = index, c_ID, c_ID\[N\], f_ID, f_ID\[N\]
index = enumeration of local values
@ -122,9 +125,9 @@ parallel, because data for a single snapshot is collected from
multiple processors.
For the {atom}, {custom}, {cfg}, and {local} styles, sorting is off by
default. For the {dcd}, {xtc}, and {xyz} styles, sorting by atom ID
is on by default. See the "dump_modify"_dump_modify.html doc page for
details.
default. For the {dcd}, {xtc}, {xyz}, and {molfile} styles, sorting by
atom ID is on by default. See the "dump_modify"_dump_modify.html doc
page for details.
:line
@ -279,11 +282,11 @@ from using the (numerical) atom type to an element name (or some
other label). This will help many visualization programs to guess
bonds and colors.
Note that DCD, XTC, and XYZ formatted files can be read directly by
"VMD"_http://www.ks.uiuc.edu/Research/vmd (a popular molecular viewing
program). See "Section tools"_Section_tools.html#vmd of the manual
and the tools/lmp2vmd/README.txt file for more information about
support in VMD for reading and visualizing LAMMPS dump files.
Note that {atom}, {custom}, {dcd}, {xtc}, and {xyz} style dump files can
be read directly by "VMD"_http://www.ks.uiuc.edu/Research/vmd (a popular
molecular viewing program). See "Section tools"_Section_tools.html#vmd
of the manual and the tools/lmp2vmd/README.txt file for more information
about support in VMD for reading and visualizing LAMMPS dump files.
:line

View File

@ -1,16 +1,15 @@
This directory used to contain utility scripts for using VMD to
visualize and analyze LAMMPS trajectories. As of April 2010 all of the
scripts and many additional features have been merged into the
topotools plugin that is shipped with VMD. Please see
http://www.ks.uiuc.edu/Research/vmd/plugins/topotools and
http://sites.google.com/site/akohlmey/software/topotools for more
details about the latest version of VMD.
scripts and many additional features have been merged into the topotools
plugin that is bundled with VMD. Updates between VMD releases are here:
http://sites.google.com/site/akohlmey/software/topotools
This page also contains detailed documentation and some tutorials.
The scripts within VMD are maintained by Axel Kohlmeyer
<akohlmey@gmail.com>; please contact him through the LAMMPS mailing
list in case of problems.
These scripts within VMD and the plugin for native LAMMPS dump files
are maintained by Axel Kohlmeyer <akohlmey@gmail.com>; please contact
him through the LAMMPS mailing list in case of problems.
Below are a few comment on support for LAMMPS in VMD.
Below are a few comments on support for LAMMPS in VMD.
-------------------------
@ -20,25 +19,26 @@ Below are a few comment on support for LAMMPS in VMD.
that LAMMPS can generate. Supported are: atom (text mode), custom
(text mode, only some fields are directly supported, please see
below for more details), dcd, xyz and xtc. Cfg and binary native
dump files are not supported (04/2010).
dump files are not supported (06/2012). The new molfile dump style
additionally allows using VMD molfile plugins to write dumps in any
format that is supported by VMD.
VMD requires all frames of a file to have the same number of
However VMD requires all frames of a file to have the same number of
atoms. If the number of atoms changes between two frames, the file
reader will stop. The topotools plugin has a special scripted file
reader for .xyz files that can generate the necessary padding so
that the file can still be read into VMD. Whether an atom is real
or "invisible" is then flagged in the "user" field. For efficiency
reader for .xyz files that can generate the necessary padding so that
the file can still be read into VMD. Whether an atom is real or
"invisible" is then flagged in the "user" field. For efficiency
reasons this script will not preserve atom identity between frames.
2. Topology files, a.k.a. as "data" files
The topotools plugin also contains a read and write option for
LAMMPS data files. This reader will try to preserve as much
information as possible and will also store useful information as
comments upon writing. It does not store or read coefficient data
and it cannot handle class2 force fields. In combination with
other functionality in topotools complete topologies for rather
complicated systems can be built.
The topotools plugin also contains a read and write option for LAMMPS
data files. This reader will try to preserve as much information as
possible and will also store useful information as comments upon
writing. It does not store or read coefficient data. In combination
with other functionality in topotools complete topologies for rather
complicated systems for LAMMPS can be built with VMD scripting.
3. Reading custom data fields into VMD