new MESSAGE package for client/server/coupling
parent 5c21d2aff9, commit 2f55981224
@ -482,6 +482,7 @@ in the command's documentation.
"lattice"_lattice.html,
"log"_log.html,
"mass"_mass.html,
"message"_message.html,
"minimize"_minimize.html,
"min_modify"_min_modify.html,
"min_style"_min_style.html,
@ -513,6 +514,7 @@ in the command's documentation.
"restart"_restart.html,
"run"_run.html,
"run_style"_run_style.html,
"server"_server.html,
"set"_set.html,
"shell"_shell.html,
"special_bonds"_special_bonds.html,
@ -574,6 +576,7 @@ USER-INTEL, k = KOKKOS, o = USER-OMP, t = OPT.
"bond/create"_fix_bond_create.html,
"bond/swap"_fix_bond_swap.html,
"box/relax"_fix_box_relax.html,
"client/md"_fix_client_md.html,
"cmap"_fix_cmap.html,
"controller"_fix_controller.html,
"deform (k)"_fix_deform.html,
@ -678,8 +681,6 @@ USER-INTEL, k = KOKKOS, o = USER-OMP, t = OPT.
"vector"_fix_vector.html,
"viscosity"_fix_viscosity.html,
"viscous"_fix_viscous.html,
"wall/body/polygon"_fix_wall_body_polygon.html,
"wall/body/polyhedron"_fix_wall_body_polyhedron.html,
"wall/colloid"_fix_wall.html,
"wall/gran"_fix_wall_gran.html,
"wall/gran/region"_fix_wall_gran_region.html,
@ -932,9 +933,7 @@ KOKKOS, o = USER-OMP, t = OPT.
"airebo (oi)"_pair_airebo.html,
"airebo/morse (oi)"_pair_airebo.html,
"beck (go)"_pair_beck.html,
"body/nparticle"_pair_body_nparticle.html,
"body/rounded/polygon"_pair_body_rounded_polygon.html,
"body/rounded/polyhedron"_pair_body_rounded_polyhedron.html,
"body"_pair_body.html,
"bop"_pair_bop.html,
"born (go)"_pair_born.html,
"born/coul/dsf"_pair_born.html,
@ -37,7 +37,8 @@ This section describes how to perform common tasks using LAMMPS.
6.25 "Polarizable models"_#howto_25
6.26 "Adiabatic core/shell model"_#howto_26
6.27 "Drude induced dipoles"_#howto_27
6.28 "Magnetic spins"_#howto_28 :all(b)
6.28 "Magnetic spins"_#howto_28
6.29 "Using LAMMPS in client/server mode"_#howto_29 :all(b)

The example input scripts included in the LAMMPS distribution and
highlighted in "Section 7"_Section_example.html also show how to
@ -663,7 +664,7 @@ atoms and pass those forces to LAMMPS. Or a continuum finite element
nodal points, compute a FE solution, and return interpolated forces on
MD atoms.

LAMMPS can be coupled to other codes in at least 3 ways.  Each has
LAMMPS can be coupled to other codes in at least 4 ways.  Each has
advantages and disadvantages, which you'll have to think about in the
context of your application.
@ -752,7 +753,8 @@ LAMMPS and half to the other code and run both codes simultaneously
before syncing them up periodically.  Or it might instantiate multiple
instances of LAMMPS to perform different calculations.

:line

(4) Couple LAMMPS with another code in a client/server mode.  This is
described in a separate "howto section"_#howto_29.

6.11 Visualizing LAMMPS snapshots :link(howto_11),h4
@ -2955,6 +2957,130 @@ property/atom"_compute_property_atom.html. It enables output of all the
per-atom magnetic quantities, typically the orientation of a given
magnetic spin or the magnetic force acting on it.

:line

6.29 Using LAMMPS in client/server mode :link(howto_29),h4

Client/server coupling of two codes is where one code is the "client"
and sends request messages to a "server" code.  The server responds to
each request with a reply message.  This enables the two codes to work
in tandem to perform a simulation.  LAMMPS can act as either a client
or server code.

Some advantages of client/server coupling are that the two codes run
as stand-alone executables; they are not linked together.  Thus
neither code needs to have a library interface.  This often makes it
easier to run the two codes on different numbers of processors.  If a
message protocol (format and content) is defined for a particular kind
of simulation, then in principle any code that implements the
client-side protocol can be used in tandem with any code that
implements the server-side protocol, without the two codes needing to
know anything more specific about each other.

A simple example of client/server coupling is where LAMMPS is the
client code performing MD timestepping.  Each timestep it sends a
message to a server quantum code containing current coords of all the
atoms.  The quantum code computes energy and forces based on the
coords.  It returns them as a message to LAMMPS, which completes the
timestep.

Alternate methods for code coupling with LAMMPS are described in "this
section"_#howto_10.

LAMMPS support for client/server coupling is in its "MESSAGE
package"_Section_packages.html#MESSAGE, which implements several
commands that enable LAMMPS to act as a client or server, as discussed
below.  The MESSAGE package also wraps a client/server library called
CSlib which enables two codes to exchange messages in different ways,
either via files, a socket, or MPI.  The CSlib is provided with LAMMPS
in the lib/message dir.  It has its own
"website"_http://cslib.sandia.gov (as of Aug 2018) with documentation
and test programs.

NOTE: For client/server coupling to work between LAMMPS and another
code, the other code also has to use the CSlib.  This can sometimes be
done without any modifications to the other code by simply wrapping it
with a Python script that exchanges CSlib messages with LAMMPS and
prepares input for or processes output from the other code.  The other
code also has to implement a matching protocol for the format and
content of messages that LAMMPS exchanges with it.
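
As an illustration only, here is a minimal sketch of what such a
wrapper loop might look like, in the same Python-style pseudo code
used by the server doc pages.  The cs.recv() and cs.unpack() calls and
the run_other_code() helper are hypothetical stand-ins for whatever
the wrapped code and the actual CSlib interface provide; they are not
part of any defined protocol:

while True:                       # wrapper around the other code
  msgID,nfield = cs.recv()        # wait for a request from LAMMPS
  if msgID < 0: break             # client signaled it is done
  x = cs.unpack(COORDS)           # pull coords out of the message
  f,eng = run_other_code(x)       # prepare input, run code, parse output
  cs.send(msgID,2)                # reply with forces and energy
  cs.pack(FORCES,3*natoms,f)
  cs.pack(ENERGY,1,eng) :pre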

These are the commands currently in the MESSAGE package for two
protocols, MD and MC (Monte Carlo).  New protocols can easily be
defined and added to this directory, where LAMMPS acts as either the
client or server.

"message"_message.html
"fix client md"_fix_client_md.html = LAMMPS is a client for running MD
"server md"_server_md.html = LAMMPS is a server for computing MD forces
"server mc"_server_mc.html = LAMMPS is a server for computing a Monte Carlo energy

The server doc files give details of the message protocols
for data that is exchanged between the client and server.

These example directories illustrate how to use LAMMPS as either a
client or server code:

examples/message
examples/COUPLE/README
examples/COUPLE/lammps_mc
examples/COUPLE/lammps_vasp :ul

The examples/message dir couples a client instance of LAMMPS to a
server instance of LAMMPS.

The lammps_mc dir shows how to couple LAMMPS as a server to a simple
Monte Carlo client code as the driver.

The lammps_vasp dir shows how to couple LAMMPS as a client code
running MD timestepping to VASP acting as a server providing quantum
DFT forces, thru a Python wrapper script on VASP.

Here is how to launch a client and server code together for any of the
4 modes of message exchange that the "message"_message.html command
and the CSlib support.  Here LAMMPS is used as both the client and
server code.  Another code could be substituted for either.

The examples below show launching both codes from the same window (or
batch script), using the "&" character to launch the first code in the
background.  For all modes except {mpi/one}, you could also launch the
codes in separate windows on your desktop machine.  It does not
matter whether you launch the client or server first.

In these examples either code can be run on one or more processors.
If running in a non-MPI mode ({file} or {zmq}) you can launch a code on a
single processor without using mpirun.

IMPORTANT: If you run in {mpi/two} mode, you must launch both codes via
mpirun, even if one or both of them runs on a single processor.  This
is so that MPI can figure out how to connect both MPI processes
together to exchange MPI messages between them.

For message exchange in {file}, {zmq}, or {mpi/two} modes:

% mpirun -np 1 lmp_mpi -log log.client < in.client &
% mpirun -np 2 lmp_mpi -log log.server < in.server :pre

% mpirun -np 4 lmp_mpi -log log.client < in.client &
% mpirun -np 1 lmp_mpi -log log.server < in.server :pre

% mpirun -np 2 lmp_mpi -log log.client < in.client &
% mpirun -np 4 lmp_mpi -log log.server < in.server :pre

For message exchange in {mpi/one} mode:

Launch both codes in a single mpirun command:

mpirun -np 2 lmp_mpi -mpi 2 -in in.message.client -log log.client : -np 4 lmp_mpi -mpi 2 -in in.message.server -log log.server :pre

The two -np values determine how many procs the client and the server
run on.

A LAMMPS executable run in this manner must use the -mpi P
command-line option as its first option, where P is the number of
processors the first code in the mpirun command (client or server) is
running on.

:line
:line
@ -100,6 +100,7 @@ Package, Description, Doc page, Example, Library
"MANYBODY"_#MANYBODY, many-body potentials, "pair_style tersoff"_pair_tersoff.html, shear, -
"MC"_#MC, Monte Carlo options, "fix gcmc"_fix_gcmc.html, -, -
"MEAM"_#MEAM, modified EAM potential, "pair_style meam"_pair_meam.html, meam, int
"MESSAGE"_#MESSAGE, client/server messaging, "message"_message.html, message, int
"MISC"_#MISC, miscellaneous single-file commands, -, -, -
"MOLECULE"_#MOLECULE, molecular system force fields, "Section 6.6.3"_Section_howto.html#howto_3, peptide, -
"MPIIO"_#MPIIO, MPI parallel I/O dump and restart, "dump"_dump.html, -, -
@ -879,6 +880,52 @@ examples/meam :ul

:line

MESSAGE package :link(MESSAGE),h4

[Contents:]

Commands to use LAMMPS as either a client or server
and couple it to another application.

[Install or un-install:]

Before building LAMMPS with this package, you must first build the
CSlib library in lib/message.  You can do this manually if you prefer;
follow the instructions in lib/message/README.  You can also do it in
one step from the lammps/src dir, using commands like these, which
simply invoke the lib/message/Install.py script with the specified
args:

make lib-message                   # print help message
make lib-message args="-m -z"      # build with MPI and socket (ZMQ) support
make lib-message args="-s"         # build as serial lib with no ZMQ support :pre

The build should produce two files: lib/message/cslib/src/libmessage.a
and lib/message/Makefile.lammps.  The latter is copied from an
existing Makefile.lammps.* and has settings to link with the
open-source ZMQ library if requested in the build.

You can then install/un-install the package and build LAMMPS in the
usual manner:

make yes-message
make machine :pre

make no-message
make machine :pre

[Supporting info:]

src/MESSAGE: filenames -> commands
lib/message/README
"message"_message.html
"fix client/md"_fix_client_md.html
"server md"_server_md.html
"server mc"_server_mc.html
examples/message :ul

:line

MISC package :link(MISC),h4

[Contents:]
@ -1204,6 +1204,7 @@ letter abbreviation can be used:
-i or -in
-k or -kokkos
-l or -log
-m or -mpi
-nc or -nocite
-pk or -package
-p or -partition
@ -1351,6 +1352,24 @@ specified file is "none", then no log files are created. Using a
"log"_log.html command in the input script will override this setting.
Option -plog will override the name of the partition log files file.N.

-mpi P :pre

If used, this must be the first command-line argument after the LAMMPS
executable name.  It is only used when running LAMMPS in client/server
mode with the {mpi/one} mode of messaging provided by the
"message"_message.html command and the CSlib library LAMMPS links with
from the lib/message directory.  See the "message"_message.html
command for more details.

In the {mpi/one} mode of messaging, both executables (the client and the
server) are launched by one mpirun command.  P should be specified as
the number of processors (MPI tasks) the first executable is running
on (could be the client or the server code).

This information is required so that both codes can shrink the
MPI_COMM_WORLD communicator they are part of to the subset of
processors they are actually running on.
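
For readers unfamiliar with communicator splitting, here is a minimal
illustrative sketch of the idea using mpi4py (which is not required by
LAMMPS or the CSlib; the value of P and the "first P ranks" color
convention are assumptions for illustration only):

from mpi4py import MPI

P = 2                                       # procs the first executable runs on
me = MPI.COMM_WORLD.Get_rank()
color = 0 if me < P else 1                  # first P ranks = first code
subcomm = MPI.COMM_WORLD.Split(color, me)   # each code's private communicator :pre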

-nocite :pre

Disable writing the log.cite file which is normally written to list
@ -0,0 +1,105 @@
"LAMMPS WWW Site"_lws - "LAMMPS Documentation"_ld - "LAMMPS Commands"_lc :c

:link(lws,http://lammps.sandia.gov)
:link(ld,Manual.html)
:link(lc,Section_commands.html#comm)

:line

fix client/md command :h3

[Syntax:]

fix ID group-ID client/md :pre

ID, group-ID are documented in "fix"_fix.html command
client/md = style name of this fix command :ul

[Examples:]

fix 1 all client/md :pre

[Description:]

This fix style enables LAMMPS to run as a "client" code and
communicate each timestep with a separate "server" code to perform an
MD simulation together.

"This section"_Section_howto.html#howto_29 gives an overview of
client/server coupling of LAMMPS with another code where one code is
the "client" and sends request messages to a "server" code.  The
server responds to each request with a reply message.  This enables
the two codes to work in tandem to perform a simulation.

When using this fix, LAMMPS (as the client code) passes the current
coordinates of all particles to the server code each timestep, which
computes their interaction, and returns the energy, forces, and virial
for the interacting particles to LAMMPS, so it can complete the
timestep.

The server code could be a quantum code, or another classical MD code
which encodes a force field (pair_style in LAMMPS lingo) which LAMMPS
does not have.  In the quantum case, this fix is a mechanism for
running {ab initio} MD with quantum forces.

The group associated with this fix is ignored.

The protocol for message format and content that LAMMPS exchanges with
the server code is defined on the "server md"_server_md.html doc page.

Note that when using LAMMPS in this mode, your LAMMPS input script
should not normally contain force field commands, like a
"pair_style"_pair_style.html, "bond_style"_bond_style.html, or
"kspace_style"_kspace_style.html command.  However it is possible for
a server code to only compute a portion of the full force-field, while
LAMMPS computes the remaining part.  Your LAMMPS script can also
specify boundary conditions or force constraints in the usual way,
which will be added to the per-atom forces returned by the server
code.

See the examples/message dir for example scripts where LAMMPS is used
as both the "client" and the "server" code for this kind of
client/server MD simulation.  The examples/message/README file
explains how to launch LAMMPS and another code in tandem to perform a
coupled simulation.
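
As a hedged sketch of the overall shape of a client script (condensed
from the pattern the examples/message scripts follow; the data file
name, messaging mode, and run length below are illustrative, not
prescribed):

message client md zmq localhost:5555    # must come before the fix

units           lj                      # system setup as usual
atom_style      atomic
read_data       data.lj                 # hypothetical data file

fix             1 all client/md         # forces come from the server
fix_modify      1 energy yes            # include server energy in thermo output
run             100 :pre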

:line

[Restart, fix_modify, output, run start/stop, minimize info:]

No information about this fix is written to "binary restart
files"_restart.html.

The "fix_modify"_fix_modify.html {energy} option is supported by this
fix to add the potential energy computed by the server application to
the system's potential energy as part of "thermodynamic
output"_thermo_style.html.

The "fix_modify"_fix_modify.html {virial} option is supported by this
fix to add the server application's contribution to the system's
virial as part of "thermodynamic output"_thermo_style.html.  The
default is {virial yes}.

This fix computes a global scalar which can be accessed by various
"output commands"_Section_howto.html#howto_15.  The scalar is the
potential energy discussed above.  The scalar value calculated by this
fix is "extensive".

No parameter of this fix can be used with the {start/stop} keywords of
the "run"_run.html command.  This fix is not invoked during "energy
minimization"_minimize.html.

[Restrictions:]

This fix is part of the MESSAGE package.  It is only enabled if LAMMPS
was built with that package.  See the "Making
LAMMPS"_Section_start.html#start_3 section for more info.

A script that uses this command must also use the
"message"_message.html command to set up the messaging protocol with
the other server code.

[Related commands:]

"message"_message.html, "server"_server.html

[Default:] none
@ -67,6 +67,7 @@ label.html
lattice.html
log.html
mass.html
message.html
min_modify.html
min_style.html
minimize.html
@ -94,6 +95,9 @@ reset_timestep.html
restart.html
run.html
run_style.html
server.html
server_mc.html
server_md.html
set.html
shell.html
special_bonds.html
@ -140,7 +144,8 @@ fix_bond_break.html
fix_bond_create.html
fix_bond_react.html
fix_bond_swap.html
fix_box_relax.html
fix_client_md.html
fix_cmap.html
fix_colvars.html
fix_controller.html
@ -283,8 +288,6 @@ fix_vector.html
fix_viscosity.html
fix_viscous.html
fix_wall.html
fix_wall_body_polygon.html
fix_wall_body_polyhedron.html
fix_wall_ees.html
fix_wall_gran.html
fix_wall_gran_region.html
@ -426,9 +429,8 @@ pair_agni.html
pair_airebo.html
pair_awpmd.html
pair_beck.html
pair_body_nparticle.html
pair_body.html
pair_body_rounded_polygon.html
pair_body_rounded_polyhedron.html
pair_bop.html
pair_born.html
pair_brownian.html
@ -0,0 +1,159 @@
"LAMMPS WWW Site"_lws - "LAMMPS Documentation"_ld - "LAMMPS Commands"_lc :c

:link(lws,http://lammps.sandia.gov)
:link(ld,Manual.html)
:link(lc,Section_commands.html#comm)

:line

message command :h3

[Syntax:]

message which protocol mode arg :pre

which = {client} or {server} :ulb,l
protocol = {md} or {mc} :l
mode = {file} or {zmq} or {mpi/one} or {mpi/two} :l
  {file} arg = filename
    filename = file used for message exchanges
  {zmq} arg = socket-ID
    socket-ID for client = localhost:5555, see description below
    socket-ID for server = *:5555, see description below
  {mpi/one} arg = none
  {mpi/two} arg = filename
    filename = file used to establish communication between 2 MPI jobs :pre
:ule

[Examples:]

message client md file tmp.couple
message server md file tmp.couple :pre

message client md zmq localhost:5555
message server md zmq *:5555 :pre

message client md mpi/one
message server md mpi/one :pre

message client md mpi/two tmp.couple
message server md mpi/two tmp.couple :pre

[Description:]

Establish a messaging protocol between LAMMPS and another code for the
purpose of client/server coupling.

"This section"_Section_howto.html#howto_29 gives an overview of
client/server coupling of LAMMPS with another code where one code is
the "client" and sends request messages to a "server" code.  The
server responds to each request with a reply message.  This enables
the two codes to work in tandem to perform a simulation.

:line

The {which} argument defines LAMMPS to be the client or the server.

:line

The {protocol} argument defines the format and content of messages
that will be exchanged between the two codes.  The current options
are:

md = run dynamics with another code
mc = perform Monte Carlo moves with another code :ul

For protocol {md}, LAMMPS can be either a client or server.  See the
"server md"_server_md.html doc page for details on the protocol.

For protocol {mc}, LAMMPS can be the server.  See the "server
mc"_server_mc.html doc page for details on the protocol.

:line

The {mode} argument specifies how messages are exchanged between the
client and server codes.  Both codes must use the same mode and use
consistent parameters.

For mode {file}, the 2 codes communicate via binary files.  They must
use the same filename, which is actually a file prefix.  Several files
with that prefix will be created and deleted as a simulation runs.
The filename can include a path.  Both codes must be able to access
the path/file in a common filesystem.

For mode {zmq}, the 2 codes communicate via a socket on the server
code's machine.  The client specifies an IP address (IPv4 format) or
the DNS name of the machine the server code is running on, followed by
a port ID for the socket, separated by a colon.  E.g.

localhost:5555        # client and server running on same machine
192.168.1.1:5555      # server is 192.168.1.1
deptbox.uni.edu:5555  # server is deptbox.uni.edu :pre

The server specifies "*:5555" where "*" represents all available
interfaces on the server's machine, and the port ID must match
what the client specifies.

NOTE: The port ID can be any otherwise unused port on the server's
machine; unprivileged processes are typically limited to ports
1024-65535.

NOTE: Additional explanation is needed here about how to use the {zmq}
mode on a parallel machine, e.g. a cluster with many nodes.

For mode {mpi/one}, the 2 codes communicate via MPI and are launched
by the same mpirun command, e.g. with this syntax for OpenMPI:

mpirun -np 2 lmp_mpi -mpi 2 -in in.client -log log.client : -np 4 othercode args   # LAMMPS is client
mpirun -np 2 othercode args : -np 4 lmp_mpi -mpi 2 -in in.server                   # LAMMPS is server :pre

Note the use of the "-mpi P" command-line argument with LAMMPS.  See
the "command-line args"_Section_start.html#start_6 doc page for
further explanation.

For mode {mpi/two}, the 2 codes communicate via MPI, but are launched
by 2 separate mpirun commands.  The specified {filename} argument is a
file the 2 MPI processes will use to exchange info so that an MPI
inter-communicator can be established to enable the 2 codes to send
MPI messages to each other.  Both codes must be able to access the
path/file in a common filesystem.
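
For example, mirroring the launch pattern shown in "this
section"_Section_howto.html#howto_29 (the script and log file names
here are placeholders):

% mpirun -np 2 lmp_mpi -log log.client < in.client &
% mpirun -np 4 lmp_mpi -log log.server < in.server :pre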

:line

Normally, the message command should be used at the top of a LAMMPS
input script.  It performs an initial handshake with the other code to
set up messaging and to verify that both codes are using the same
message protocol and mode.  Assuming both codes are launched at
(nearly) the same time, the other code should perform the same kind of
initialization.

If LAMMPS is the client code, it will begin sending messages when a
LAMMPS client command begins its operation.  E.g. for the "fix
client/md"_fix_client_md.html command, it is when a "run"_run.html
command is executed.

If LAMMPS is the server code, it will begin receiving messages when
the "server"_server.html command is invoked.

A fix client command will terminate its messaging with the server when
LAMMPS ends, or the fix is deleted via the "unfix"_unfix.html command.
The server command will terminate its messaging with the client when
the client signals it.  Then the remainder of the LAMMPS input script
will be processed.

If both codes do something similar, a new round of client/server
messaging can be initiated after termination by issuing a 2nd message
command in your LAMMPS input script, followed by a new fix client or
server command.

:line

[Restrictions:]

This command is part of the MESSAGE package.  It is only enabled if
LAMMPS was built with that package.  See the "Making
LAMMPS"_Section_start.html#start_3 section for more info.

[Related commands:]

"server"_server.html, "fix client/md"_fix_client_md.html

[Default:] none
@ -0,0 +1,71 @@
"LAMMPS WWW Site"_lws - "LAMMPS Documentation"_ld - "LAMMPS Commands"_lc :c

:link(lws,http://lammps.sandia.gov)
:link(ld,Manual.html)
:link(lc,Section_commands.html#comm)

:line

server command :h3

[Syntax:]

server protocol :pre

protocol = {md} or {mc} :ul

[Examples:]

server md :pre

[Description:]

This command starts LAMMPS running in "server" mode, where it receives
messages from a separate "client" code and responds by sending a reply
message back to the client.  The specified {protocol} determines the
format and content of messages LAMMPS expects to receive and how it
responds.

"This section"_Section_howto.html#howto_29 gives an overview of
client/server coupling of LAMMPS with another code where one code is
the "client" and sends request messages to a "server" code.  The
server responds to each request with a reply message.  This enables
the two codes to work in tandem to perform a simulation.

When this command is invoked, LAMMPS will run in server mode in an
endless loop, waiting for messages from the client code.  The client
signals when it is done sending messages to LAMMPS, at which point the
loop will exit, and the remainder of the LAMMPS script will be
processed.

The {protocol} argument defines the format and content of messages
that will be exchanged between the two codes.  The current options
are:

"md"_server_md.html = run dynamics with another code
"mc"_server_mc.html = perform Monte Carlo moves with another code :ul

For protocol {md}, LAMMPS can be either a client (via the "fix
client/md"_fix_client_md.html command) or server.  See the "server
md"_server_md.html doc page for details on the protocol.

For protocol {mc}, LAMMPS can be the server.  See the "server
mc"_server_mc.html doc page for details on the protocol.

:line

[Restrictions:]

This command is part of the MESSAGE package.  It is only enabled if
LAMMPS was built with that package.  See the "Making
LAMMPS"_Section_start.html#start_3 section for more info.

A script that uses this command must also use the
"message"_message.html command to set up the messaging protocol with
the other client code.

[Related commands:]

"message"_message.html, "fix client/md"_fix_client_md.html

[Default:] none
@ -0,0 +1,112 @@
"LAMMPS WWW Site"_lws - "LAMMPS Documentation"_ld - "LAMMPS Commands"_lc :c

:link(lws,http://lammps.sandia.gov)
:link(ld,Manual.html)
:link(lc,Section_commands.html#comm)

:line

server mc command :h3

[Syntax:]

server mc :pre

mc = the protocol argument to the "server"_server.html command

[Examples:]

server mc :pre

[Description:]

This command starts LAMMPS running in "server" mode, where it will
expect messages from a separate "client" code that match the {mc}
protocol for format and content explained below.  For each message
LAMMPS receives it will send a message back to the client.

"This section"_Section_howto.html#howto_29 gives an overview of
client/server coupling of LAMMPS with another code where one code is
the "client" and sends request messages to a "server" code.  The
server responds to each request with a reply message.  This enables
the two codes to work in tandem to perform a simulation.

When this command is invoked, LAMMPS will run in server mode in an
endless loop, waiting for messages from the client code.  The client
signals when it is done sending messages to LAMMPS, at which point the
loop will exit, and the remainder of the LAMMPS script will be
processed.

See an example of how this command is used in
examples/COUPLE/lammps_mc/in.server.

:line

When using this command, LAMMPS (as the server code) receives
instructions from a Monte Carlo (MC) driver to displace random atoms,
compute the energy before and after displacement, and run dynamics to
equilibrate the system.

The MC driver performs the random displacements on random atoms,
accepts or rejects the move in an MC sense, and orchestrates the MD
runs.

The format and content of the exchanged messages are explained here in
a conceptual sense.  Python-style pseudo code for the library calls to
the CSlib is shown, which performs the actual message exchange between
the two codes.  See the "CSlib website"_http://cslib.sandia.gov doc
pages for more details on the actual library syntax (as of Aug 2018).
The "cs" object in this pseudo code is an instance of the CSlib that
both the client and server codes store.

See the src/MESSAGE/server_mc.cpp file for details on how LAMMPS uses
these messages.  See the examples/COUPLE/lammps_mc/mc.cpp file for an
example of how an MC driver code can use these messages.

Let NATOMS=1, EINIT=2, DISPLACE=3, ACCEPT=4, RUN=5.

[Client sends one of these kinds of message]:

cs.send(NATOMS,0)       # msgID = 1 with no fields :pre

cs.send(EINIT,0)        # msgID = 2 with no fields :pre

cs.send(DISPLACE,2)     # msgID = 3 with 2 fields
cs.pack(1,1,ID)         # 1st field = ID of atom to displace
cs.pack(2,3,xnew)       # 2nd field = new xyz coords of displaced atom :pre

cs.send(ACCEPT,1)       # msgID = 4 with 1 field
cs.pack(1,1,flag)       # 1st field = accept/reject flag :pre

cs.send(RUN,1)          # msgID = 5 with 1 field
cs.pack(1,1,nsteps)     # 1st field = # of timesteps to run MD :pre

[Server replies]:

cs.send(NATOMS,1)       # msgID = 1 with 1 field
cs.pack(1,1,Natoms)     # 1st field = number of atoms :pre

cs.send(EINIT,2)        # msgID = 2 with 2 fields
cs.pack(1,1,poteng)     # 1st field = potential energy of system
cs.pack(2,3*Natoms,x)   # 2nd field = 3N coords of Natoms :pre

cs.send(DISPLACE,1)     # msgID = 3 with 1 field
cs.pack(1,1,poteng)     # 1st field = new potential energy of system :pre

cs.send(ACCEPT,0)       # msgID = 4 with no fields :pre

cs.send(RUN,0)          # msgID = 5 with no fields :pre

:line

[Restrictions:]

This command is part of the MESSAGE package.  It is only enabled if
LAMMPS was built with that package.  See the "Making
LAMMPS"_Section_start.html#start_3 section for more info.

[Related commands:]

"message"_message.html

[Default:] none
@ -0,0 +1,122 @@
"LAMMPS WWW Site"_lws - "LAMMPS Documentation"_ld - "LAMMPS Commands"_lc :c

:link(lws,http://lammps.sandia.gov)
:link(ld,Manual.html)
:link(lc,Section_commands.html#comm)

:line

server md command :h3

[Syntax:]

server md :pre

md = the protocol argument to the "server"_server.html command

[Examples:]

server md :pre

[Description:]

This command starts LAMMPS running in "server" mode, where it will
expect messages from a separate "client" code that match the {md}
protocol for format and content explained below.  For each message
LAMMPS receives it will send a message back to the client.

"This section"_Section_howto.html#howto_29 gives an overview of
client/server coupling of LAMMPS with another code where one code is
the "client" and sends request messages to a "server" code.  The
server responds to each request with a reply message.  This enables
the two codes to work in tandem to perform a simulation.

When this command is invoked, LAMMPS will run in server mode in an
endless loop, waiting for messages from the client code.  The client
signals when it is done sending messages to LAMMPS, at which point the
loop will exit, and the remainder of the LAMMPS script will be
processed.

See an example of how this command is used in
examples/message/in.message.server.

:line

When using this command, LAMMPS (as the server code) receives the
current coordinates of all particles from the client code each
timestep, computes their interaction, and returns the energy, forces,
and virial for the interacting particles to the client code, so it can
complete the timestep.  This command could also be used with a client
code that performs energy minimization, using the server to compute
forces and energy each iteration of its minimizer.

When using the "fix client/md" command, LAMMPS (as the client code)
does the timestepping and receives needed energy, forces, and virial
values from the server code.

The format and content of the exchanged messages are explained here in
a conceptual sense.  Python-style pseudo code for the library calls to
the CSlib is shown, which performs the actual message exchange between
the two codes.  See the "CSlib website"_http://cslib.sandia.gov doc
pages for more details on the actual library syntax (as of Aug 2018).
The "cs" object in this pseudo code is an instance of the CSlib that
both the client and server codes store.

See the src/MESSAGE/server_md.cpp and src/MESSAGE/fix_client_md.cpp
files for details on how LAMMPS uses these messages.  See the
examples/COUPLE/lammps_vasp/vasp_wrapper.py file for an example of how
a quantum code (VASP) can use these messages.

The following code uses these values, defined as enums in LAMMPS:

enum{SETUP=1,STEP};
enum{UNITS=1,DIM,NATOMS,NTYPES,BOXLO,BOXHI,BOXTILT,TYPES,COORDS,CHARGE};
enum{FORCES=1,ENERGY,VIRIAL}; :pre

[Client sends 2 kinds of messages]:

# required fields: NATOMS, NTYPES, BOXLO, BOXHI, TYPES, COORDS
# optional fields: others in 2nd enum above :pre

cs.send(SETUP,nfields)        # msgID = 1 with nfields :pre

cs.pack_string(UNITS,units)   # units = "lj", "real", "metal", etc
cs.pack_int(NATOMS,natoms)    # total number of atoms
cs.pack_int(NTYPES,ntypes)    # number of atom types
cs.pack(BOXLO,3,boxlo)        # 3-vector of lower box bounds
cs.pack(BOXHI,3,boxhi)        # 3-vector of upper box bounds
cs.pack(BOXTILT,3,boxtilt)    # 3-vector of tilt factors for triclinic boxes
cs.pack(TYPES,natoms,type)    # vector of per-atom types
cs.pack(COORDS,3*natoms,x)    # vector of 3N atom coords
cs.pack(CHARGE,natoms,q)      # vector of per-atom charge :pre

# required fields: COORDS
# optional fields: BOXLO, BOXHI, BOXTILT :pre

cs.send(STEP,nfields)         # msgID = 2 with nfields :pre

cs.pack(COORDS,3*natoms,x)    # vector of 3N atom coords
cs.pack(BOXLO,3,boxlo)        # 3-vector of lower box bounds
cs.pack(BOXHI,3,boxhi)        # 3-vector of upper box bounds
cs.pack(BOXTILT,3,boxtilt)    # 3-vector of tilt factors for triclinic boxes :pre

[Server replies to either kind of message]:

cs.send(msgID,3)              # msgID = SETUP or STEP with 3 fields
cs.pack(FORCES,3*Natoms,f)    # vector of 3N forces on atoms
cs.pack(ENERGY,1,poteng)      # total potential energy of system
cs.pack(VIRIAL,6,virial)      # global virial tensor (6-vector) :pre
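
A hedged sketch of the server-side loop, in the same pseudo code
(cs.recv(), cs.unpack(), cs.unpack_int(), and compute_forces() are
illustrative assumptions for how fields are read back, not a defined
API):

while True:
  msgID,nfield = cs.recv()              # block until the client sends a request
  if msgID < 0: break                   # client is done; exit the server loop
  if msgID == SETUP:                    # define the system from SETUP fields
    natoms = cs.unpack_int(NATOMS)
    x = cs.unpack(COORDS)
  elif msgID == STEP:                   # new coords for an existing system
    x = cs.unpack(COORDS)
  f,poteng,virial = compute_forces(x)   # evaluate the force field
  cs.send(msgID,3)                      # reply with forces, energy, virial
  cs.pack(FORCES,3*natoms,f)
  cs.pack(ENERGY,1,poteng)
  cs.pack(VIRIAL,6,virial) :pre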

:line

[Restrictions:]

This command is part of the MESSAGE package.  It is only enabled if
LAMMPS was built with that package.  See the "Making
LAMMPS"_Section_start.html#start_3 section for more info.

[Related commands:]

"message"_message.html, "fix client/md"_fix_client_md.html

[Default:] none
@ -10,6 +10,7 @@ See these sections of the LAMMPS manual for details:

2.5 Building LAMMPS as a library (doc/Section_start.html#start_5)
6.10 Coupling LAMMPS to other codes (doc/Section_howto.html#howto_10)
6.29 Using LAMMPS in client/server mode (doc/Section_howto.html#howto_29)

In all of the examples included here, LAMMPS must first be built as a
library.  Basically, in the src dir you type one of
@ -33,9 +34,11 @@ These are the sub-directories included in this directory:

simple          simple example of driver code calling LAMMPS as a lib
multiple        example of driver code calling multiple instances of LAMMPS
lammps_mc       client/server coupling Monte Carlo with LAMMPS MD
lammps_quest    MD with quantum forces, coupling to Quest DFT code
lammps_spparks  grain-growth Monte Carlo with strain via MD,
                coupling to SPPARKS kinetic MC code
lammps_vasp     client/server coupling LAMMPS MD with VASP quantum DFT
library         collection of useful inter-code communication routines
fortran         a simple wrapper on the LAMMPS library API that
                can be called from Fortran
@ -0,0 +1,34 @@
# Makefile for MC

SHELL = /bin/sh

SRC = mc.cpp random_park.cpp
OBJ = $(SRC:.cpp=.o)

# change this line to the path of the CSlib src dir on your machine

CSLIB = /home/sjplimp/lammps/lib/message/cslib/src

# compiler/linker settings

CC =        g++
CCFLAGS =   -g -O3 -I$(CSLIB)
LINK =      g++
LINKFLAGS = -g -O -L$(CSLIB)

# targets

mc: $(OBJ)
# use this line if you built the CSlib within lib/message with ZMQ support
# note this is using the serial (no-mpi) version of the CSlib
	$(LINK) $(LINKFLAGS) $(OBJ) -lcsnompi -lzmq -o mc
# use this line if you built the CSlib without ZMQ support
#	$(LINK) $(LINKFLAGS) $(OBJ) -lcsnompi -o mc

clean:
	@rm -f *.o mc

# rules

%.o:%.cpp
	$(CC) $(CCFLAGS) -c $<
@ -0,0 +1,106 @@
Sample Monte Carlo (MC) wrapper on LAMMPS via client/server coupling

See the MESSAGE package (doc/Section_packages.html#MESSAGE)
and doc/Section_howto.html#howto_29 for more details on how
client/server coupling works in LAMMPS.

In this dir, the mc.cpp/h files are a standalone "client" MC code.  It
should be run on a single processor, though it could become a parallel
program at some point.  LAMMPS is also run as a standalone executable
as a "server" on as many processors as desired using its "server mc"
command; see its doc page for details.

Messages are exchanged between MC and LAMMPS via a client/server
library (CSlib), which is included in the LAMMPS distribution in
lib/message.  As explained below you can choose to exchange data
between the two programs either via files or sockets (ZMQ).  If the MC
program became parallel, data could also be exchanged via MPI.

The MC code makes simple MC moves, by displacing a single random atom
by a small random amount.  It uses LAMMPS to calculate the energy
change, and to run dynamics between MC moves.
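
For reference, the accept/reject decision such a driver makes is the
standard Metropolis criterion.  Here is a minimal sketch in Python
(the variable names mirror the in.mc parameters described below; this
is an illustration, not the mc.cpp source):

    import math, random

    def metropolis(eold, enew, temperature, rng=random.random):
        """Accept a trial move with probability min(1, exp(-dE/kT)).
        LJ reduced units, so kB = 1."""
        if enew <= eold:
            return True          # downhill moves are always accepted
        return rng() < math.exp(-(enew - eold) / temperature)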

----------------

Build LAMMPS and the MC client code

First, build LAMMPS with its MESSAGE package installed:

cd lammps/lib/message
python Install.py -m -z   # build CSlib with MPI and ZMQ support
cd lammps/src
make yes-message
make mpi

You can leave off the -z if you do not have ZMQ on your system.

Next build the MC client code:

First edit the Makefile in this dir.  The CSLIB variable should be the
path to where the LAMMPS lib/message dir is on your system.  If you
built the CSlib without ZMQ support you will also need to
comment/uncomment two lines.  Then you can just type "make" and you
should get an "mc" executable.

----------------

To run in client/server mode:

Both the client (MC) and server (LAMMPS) must use the same messaging
mode, namely file or zmq.  This is an argument to the MC code; it can
be selected by setting the "mode" variable when you run LAMMPS.  The
default mode = file.

Here we assume LAMMPS was built to run in parallel, and the MESSAGE
package was installed with socket (ZMQ) support.  This means either of
the messaging modes can be used and LAMMPS can be run in serial or
parallel.  The MC code is always run in serial.

When you run, the server should print out thermodynamic info
for every MD run it performs (between MC moves).  The client
will print nothing until the simulation ends, then it will
print stats about the accepted MC moves.

The examples below are commands you should use in two different
terminal windows.  The order of the two commands (client or server
launch) does not matter.  You can run them both in the same window if
you append a "&" character to the first one to run it in the
background.

--------------

File mode of messaging:

% mpirun -np 1 mc in.mc file tmp.couple
% mpirun -np 1 lmp_mpi -v mode file < in.mc.server

% mpirun -np 1 mc in.mc file tmp.couple
% mpirun -np 4 lmp_mpi -v mode file < in.mc.server

ZMQ mode of messaging:

% mpirun -np 1 mc in.mc zmq localhost:5555
% mpirun -np 1 lmp_mpi -v mode zmq < in.mc.server

% mpirun -np 1 mc in.mc zmq localhost:5555
% mpirun -np 4 lmp_mpi -v mode zmq < in.mc.server

--------------

The input script for the MC program is in.mc.  You can edit it to run
longer simulations.

500     nsteps = total # of steps of MD
100     ndynamics = # of MD steps between MC moves
0.1     delta = displacement size of MC move
1.0     temperature = used in MC Boltzmann factor
12345   seed = random number seed

--------------

The problem size that LAMMPS is computing the MC energy for and
running dynamics on is set by the x,y,z variables in the LAMMPS
in.mc.server script.  The default size is 500 particles.  You can
adjust the size as follows:

lmp_mpi -v x 10 -v y 10 -v z 20   # 8000 particles
@ -0,0 +1,7 @@
# MC params

500     nsteps
100     ndynamics
0.1     delta
1.0     temperature
12345   seed
@ -0,0 +1,36 @@
# 3d Lennard-Jones Monte Carlo server script

variable        mode index file

if "${mode} == file" then &
  "message server mc file tmp.couple" &
elif "${mode} == zmq" &
  "message server mc zmq *:5555"

variable        x index 5
variable        y index 5
variable        z index 5

units           lj
atom_style      atomic
atom_modify     map yes

lattice         fcc 0.8442
region          box block 0 $x 0 $y 0 $z
create_box      1 box
create_atoms    1 box
mass            1 1.0

pair_style      lj/cut 2.5
pair_coeff      1 1 1.0 1.0 2.5

neighbor        0.3 bin
neigh_modify    delay 0 every 20 check no

velocity        all create 1.44 87287 loop geom

fix             1 all nve

thermo          50

server          mc
@ -0,0 +1,254 @@
LAMMPS (16 Jul 2018)
# 3d Lennard-Jones Monte Carlo server script

variable        mode index file

if "${mode} == file" then "message server mc file tmp.couple" elif "${mode} == zmq" "message server mc zmq *:5555"
message server mc file tmp.couple
variable        x index 5
variable        y index 5
variable        z index 5

units           lj
atom_style      atomic
atom_modify     map yes

lattice         fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region          box block 0 $x 0 $y 0 $z
region          box block 0 5 0 $y 0 $z
region          box block 0 5 0 5 0 $z
region          box block 0 5 0 5 0 5
create_box      1 box
Created orthogonal box = (0 0 0) to (8.39798 8.39798 8.39798)
  1 by 1 by 1 MPI processor grid
create_atoms    1 box
Created 500 atoms
  Time spent = 0.000633001 secs
mass            1 1.0

pair_style      lj/cut 2.5
pair_coeff      1 1 1.0 1.0 2.5

neighbor        0.3 bin
neigh_modify    delay 0 every 20 check no

velocity        all create 1.44 87287 loop geom

fix             1 all nve

thermo          50

server          mc
run 0
Neighbor list info ...
  update every 20 steps, delay 0 steps, check no
  max neighbors/atom: 2000, page size: 100000
  master list distance cutoff = 2.8
  ghost atom cutoff = 2.8
  binsize = 1.4, bins = 6 6 6
  1 neighbor lists, perpetual/occasional/extra = 1 0 0
  (1) pair lj/cut, perpetual
      attributes: half, newton on
      pair build: half/bin/atomonly/newton
      stencil: half/bin/3d/newton
      bin: standard
Per MPI rank memory allocation (min/avg/max) = 2.658 | 2.658 | 2.658 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 -6.7733681 0 -4.6176881 -5.0221006
Loop time of 1.90735e-06 on 1 procs for 0 steps with 500 atoms

52.4% CPU use with 1 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0 | 0 | 0 | 0.0 | 0.00
Output | 0 | 0 | 0 | 0.0 | 0.00
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 1.907e-06 | | |100.00

Nlocal: 500 ave 500 max 500 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 1956 ave 1956 max 1956 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 19500 ave 19500 max 19500 min
Histogram: 1 0 0 0 0 0 0 0 0 0

Total # of neighbors = 19500
Ave neighs/atom = 39
Neighbor list builds = 0
Dangerous builds not checked
run 0
Per MPI rank memory allocation (min/avg/max) = 2.658 | 2.658 | 2.658 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 -6.7723127 0 -4.6166327 -5.015531
Loop time of 2.14577e-06 on 1 procs for 0 steps with 500 atoms

46.6% CPU use with 1 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0 | 0 | 0 | 0.0 | 0.00
Output | 0 | 0 | 0 | 0.0 | 0.00
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 2.146e-06 | | |100.00

Nlocal: 500 ave 500 max 500 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 1956 ave 1956 max 1956 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 19501 ave 19501 max 19501 min
Histogram: 1 0 0 0 0 0 0 0 0 0

Total # of neighbors = 19501
Ave neighs/atom = 39.002
Neighbor list builds = 0
Dangerous builds not checked
Per MPI rank memory allocation (min/avg/max) = 2.658 | 2.658 | 2.658 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 -6.7723127 0 -4.6166327 -5.015531
50 0.70239211 -5.6763152 0 -4.6248342 0.59544428
100 0.7565013 -5.757431 0 -4.6249485 0.21982657
run 0
Per MPI rank memory allocation (min/avg/max) = 2.658 | 2.658 | 2.658 Mbytes
Step Temp E_pair E_mol TotEng Press
100 0.7565013 -5.7565768 0 -4.6240944 0.22436405
Loop time of 1.90735e-06 on 1 procs for 0 steps with 500 atoms

157.3% CPU use with 1 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0 | 0 | 0 | 0.0 | 0.00
Output | 0 | 0 | 0 | 0.0 | 0.00
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 1.907e-06 | | |100.00

Nlocal: 500 ave 500 max 500 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 1939 ave 1939 max 1939 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 18757 ave 18757 max 18757 min
Histogram: 1 0 0 0 0 0 0 0 0 0

Total # of neighbors = 18757
Ave neighs/atom = 37.514
Neighbor list builds = 0
Dangerous builds not checked
Per MPI rank memory allocation (min/avg/max) = 2.658 | 2.658 | 2.658 Mbytes
Step Temp E_pair E_mol TotEng Press
100 0.7565013 -5.757431 0 -4.6249485 0.21982657
150 0.76110797 -5.7664315 0 -4.6270529 0.16005254
200 0.73505651 -5.7266069 0 -4.6262273 0.34189744
run 0
Per MPI rank memory allocation (min/avg/max) = 2.658 | 2.658 | 2.658 Mbytes
Step Temp E_pair E_mol TotEng Press
200 0.73505651 -5.7181381 0 -4.6177585 0.37629943
Loop time of 9.53674e-07 on 1 procs for 0 steps with 500 atoms

209.7% CPU use with 1 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0 | 0 | 0 | 0.0 | 0.00
Output | 0 | 0 | 0 | 0.0 | 0.00
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 9.537e-07 | | |100.00

Nlocal: 500 ave 500 max 500 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 1899 ave 1899 max 1899 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 18699 ave 18699 max 18699 min
Histogram: 1 0 0 0 0 0 0 0 0 0

Total # of neighbors = 18699
Ave neighs/atom = 37.398
Neighbor list builds = 0
Dangerous builds not checked
Per MPI rank memory allocation (min/avg/max) = 2.658 | 2.658 | 2.658 Mbytes
Step Temp E_pair E_mol TotEng Press
200 0.73505651 -5.7266069 0 -4.6262273 0.34189744
250 0.73052476 -5.7206316 0 -4.627036 0.39287516
300 0.76300831 -5.7675007 0 -4.6252773 0.16312925
run 0
Per MPI rank memory allocation (min/avg/max) = 2.658 | 2.658 | 2.658 Mbytes
Step Temp E_pair E_mol TotEng Press
300 0.76300831 -5.768304 0 -4.6260806 0.15954325
Loop time of 9.53674e-07 on 1 procs for 0 steps with 500 atoms

314.6% CPU use with 1 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0 | 0 | 0 | 0.0 | 0.00
Output | 0 | 0 | 0 | 0.0 | 0.00
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 9.537e-07 | | |100.00

Nlocal: 500 ave 500 max 500 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 1903 ave 1903 max 1903 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 18715 ave 18715 max 18715 min
Histogram: 1 0 0 0 0 0 0 0 0 0

Total # of neighbors = 18715
Ave neighs/atom = 37.43
Neighbor list builds = 0
Dangerous builds not checked
Per MPI rank memory allocation (min/avg/max) = 2.658 | 2.658 | 2.658 Mbytes
Step Temp E_pair E_mol TotEng Press
300 0.76300831 -5.768304 0 -4.6260806 0.15954325
350 0.72993309 -5.7193261 0 -4.6266162 0.3358374
400 0.72469448 -5.713463 0 -4.6285954 0.44859547
run 0
Per MPI rank memory allocation (min/avg/max) = 2.658 | 2.658 | 2.658 Mbytes
Step Temp E_pair E_mol TotEng Press
400 0.72469448 -5.7077332 0 -4.6228655 0.47669832
Loop time of 9.53674e-07 on 1 procs for 0 steps with 500 atoms

314.6% CPU use with 1 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0 | 0 | 0 | 0.0 | 0.00
Output | 0 | 0 | 0 | 0.0 | 0.00
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 9.537e-07 | | |100.00

Nlocal: 500 ave 500 max 500 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 1899 ave 1899 max 1899 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 18683 ave 18683 max 18683 min
Histogram: 1 0 0 0 0 0 0 0 0 0

Total # of neighbors = 18683
Ave neighs/atom = 37.366
Neighbor list builds = 0
Dangerous builds not checked
Per MPI rank memory allocation (min/avg/max) = 2.658 | 2.658 | 2.658 Mbytes
Step Temp E_pair E_mol TotEng Press
400 0.72469448 -5.713463 0 -4.6285954 0.44859547
450 0.75305735 -5.7518283 0 -4.6245015 0.34658587
500 0.73092571 -5.7206337 0 -4.6264379 0.43715809
Total wall time: 0:00:02
@ -0,0 +1,254 @@
LAMMPS (16 Jul 2018)
# 3d Lennard-Jones Monte Carlo server script

variable mode index file

if "${mode} == file" then "message server mc file tmp.couple" elif "${mode} == zmq" "message server mc zmq *:5555"
message server mc file tmp.couple
variable x index 5
variable y index 5
variable z index 5

units lj
atom_style atomic
atom_modify map yes

lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 $x 0 $y 0 $z
region box block 0 5 0 $y 0 $z
region box block 0 5 0 5 0 $z
region box block 0 5 0 5 0 5
create_box 1 box
Created orthogonal box = (0 0 0) to (8.39798 8.39798 8.39798)
  1 by 2 by 2 MPI processor grid
create_atoms 1 box
Created 500 atoms
  Time spent = 0.000604868 secs
mass 1 1.0

pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0 2.5

neighbor 0.3 bin
neigh_modify delay 0 every 20 check no

velocity all create 1.44 87287 loop geom

fix 1 all nve

thermo 50

server mc
run 0
Neighbor list info ...
  update every 20 steps, delay 0 steps, check no
  max neighbors/atom: 2000, page size: 100000
  master list distance cutoff = 2.8
  ghost atom cutoff = 2.8
  binsize = 1.4, bins = 6 6 6
  1 neighbor lists, perpetual/occasional/extra = 1 0 0
  (1) pair lj/cut, perpetual
      attributes: half, newton on
      pair build: half/bin/atomonly/newton
      stencil: half/bin/3d/newton
      bin: standard
Per MPI rank memory allocation (min/avg/max) = 2.619 | 2.619 | 2.619 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 -6.7733681 0 -4.6176881 -5.0221006
Loop time of 3.09944e-06 on 4 procs for 0 steps with 500 atoms

72.6% CPU use with 4 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0 | 0 | 0 | 0.0 | 0.00
Output | 0 | 0 | 0 | 0.0 | 0.00
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 3.099e-06 | | |100.00

Nlocal: 125 ave 125 max 125 min
Histogram: 4 0 0 0 0 0 0 0 0 0
Nghost: 1099 ave 1099 max 1099 min
Histogram: 4 0 0 0 0 0 0 0 0 0
Neighs: 4875 ave 4875 max 4875 min
Histogram: 4 0 0 0 0 0 0 0 0 0

Total # of neighbors = 19500
Ave neighs/atom = 39
Neighbor list builds = 0
Dangerous builds not checked
run 0
Per MPI rank memory allocation (min/avg/max) = 2.619 | 2.619 | 2.619 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 -6.7723127 0 -4.6166327 -5.015531
Loop time of 3.33786e-06 on 4 procs for 0 steps with 500 atoms

119.8% CPU use with 4 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0 | 0 | 0 | 0.0 | 0.00
Output | 0 | 0 | 0 | 0.0 | 0.00
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 3.338e-06 | | |100.00

Nlocal: 125 ave 125 max 125 min
Histogram: 4 0 0 0 0 0 0 0 0 0
Nghost: 1099 ave 1099 max 1099 min
Histogram: 4 0 0 0 0 0 0 0 0 0
Neighs: 4875.25 ave 4885 max 4866 min
Histogram: 1 0 0 0 2 0 0 0 0 1

Total # of neighbors = 19501
Ave neighs/atom = 39.002
Neighbor list builds = 0
Dangerous builds not checked
Per MPI rank memory allocation (min/avg/max) = 2.619 | 2.619 | 2.619 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 -6.7723127 0 -4.6166327 -5.015531
50 0.70210225 -5.6759068 0 -4.6248598 0.59609192
100 0.75891559 -5.7611234 0 -4.6250267 0.20841608
run 0
Per MPI rank memory allocation (min/avg/max) = 2.619 | 2.619 | 2.619 Mbytes
Step Temp E_pair E_mol TotEng Press
100 0.75891559 -5.7609392 0 -4.6248426 0.20981291
Loop time of 3.51667e-06 on 4 procs for 0 steps with 500 atoms

113.7% CPU use with 4 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0 | 0 | 0 | 0.0 | 0.00
Output | 0 | 0 | 0 | 0.0 | 0.00
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 3.517e-06 | | |100.00

Nlocal: 125 ave 126 max 124 min
Histogram: 2 0 0 0 0 0 0 0 0 2
Nghost: 1085.25 ave 1089 max 1079 min
Histogram: 1 0 0 0 0 1 0 0 0 2
Neighs: 4690.25 ave 4996 max 4401 min
Histogram: 1 0 0 1 0 1 0 0 0 1

Total # of neighbors = 18761
Ave neighs/atom = 37.522
Neighbor list builds = 0
Dangerous builds not checked
Per MPI rank memory allocation (min/avg/max) = 2.619 | 2.619 | 2.619 Mbytes
Step Temp E_pair E_mol TotEng Press
100 0.75891559 -5.7609392 0 -4.6248426 0.20981291
150 0.75437991 -5.7558622 0 -4.6265555 0.20681722
200 0.73111257 -5.7193748 0 -4.6248993 0.35230715
run 0
Per MPI rank memory allocation (min/avg/max) = 2.619 | 2.619 | 2.619 Mbytes
Step Temp E_pair E_mol TotEng Press
200 0.73111257 -5.7143906 0 -4.6199151 0.37126023
Loop time of 2.92063e-06 on 4 procs for 0 steps with 500 atoms

119.8% CPU use with 4 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0 | 0 | 0 | 0.0 | 0.00
Output | 0 | 0 | 0 | 0.0 | 0.00
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 2.921e-06 | | |100.00

Nlocal: 125 ave 126 max 123 min
Histogram: 1 0 0 0 0 0 1 0 0 2
Nghost: 1068.5 ave 1076 max 1063 min
Histogram: 2 0 0 0 0 0 1 0 0 1
Neighs: 4674.75 ave 4938 max 4419 min
Histogram: 1 0 0 0 1 1 0 0 0 1

Total # of neighbors = 18699
Ave neighs/atom = 37.398
Neighbor list builds = 0
Dangerous builds not checked
Per MPI rank memory allocation (min/avg/max) = 2.619 | 2.619 | 2.619 Mbytes
Step Temp E_pair E_mol TotEng Press
200 0.73111257 -5.7193748 0 -4.6248993 0.35230715
250 0.73873144 -5.7312505 0 -4.6253696 0.33061033
300 0.76392796 -5.7719207 0 -4.6283206 0.18197874
run 0
Per MPI rank memory allocation (min/avg/max) = 2.619 | 2.619 | 2.619 Mbytes
Step Temp E_pair E_mol TotEng Press
300 0.76392796 -5.7725589 0 -4.6289588 0.17994628
Loop time of 3.39746e-06 on 4 procs for 0 steps with 500 atoms

117.7% CPU use with 4 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0 | 0 | 0 | 0.0 | 0.00
Output | 0 | 0 | 0 | 0.0 | 0.00
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 3.397e-06 | | |100.00

Nlocal: 125 ave 128 max 121 min
Histogram: 1 0 0 0 0 1 0 1 0 1
Nghost: 1069 ave 1080 max 1055 min
Histogram: 1 0 0 0 0 0 2 0 0 1
Neighs: 4672 ave 4803 max 4600 min
Histogram: 2 0 0 1 0 0 0 0 0 1

Total # of neighbors = 18688
Ave neighs/atom = 37.376
Neighbor list builds = 0
Dangerous builds not checked
Per MPI rank memory allocation (min/avg/max) = 2.619 | 2.619 | 2.619 Mbytes
Step Temp E_pair E_mol TotEng Press
300 0.76392796 -5.7725589 0 -4.6289588 0.17994628
350 0.71953041 -5.7041632 0 -4.6270261 0.44866153
400 0.7319047 -5.7216051 0 -4.6259438 0.46321355
run 0
Per MPI rank memory allocation (min/avg/max) = 2.619 | 2.619 | 2.619 Mbytes
Step Temp E_pair E_mol TotEng Press
400 0.7319047 -5.7158168 0 -4.6201554 0.49192039
Loop time of 3.39746e-06 on 4 procs for 0 steps with 500 atoms

117.7% CPU use with 4 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0 | 0 | 0 | 0.0 | 0.00
Output | 0 | 0 | 0 | 0.0 | 0.00
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 3.397e-06 | | |100.00

Nlocal: 125 ave 132 max 118 min
Histogram: 1 0 0 0 0 2 0 0 0 1
Nghost: 1057.5 ave 1068 max 1049 min
Histogram: 1 0 0 1 1 0 0 0 0 1
Neighs: 4685.75 ave 5045 max 4229 min
Histogram: 1 0 0 1 0 0 0 0 0 2

Total # of neighbors = 18743
Ave neighs/atom = 37.486
Neighbor list builds = 0
Dangerous builds not checked
Per MPI rank memory allocation (min/avg/max) = 2.619 | 2.619 | 2.619 Mbytes
Step Temp E_pair E_mol TotEng Press
400 0.7319047 -5.7216051 0 -4.6259438 0.46321355
450 0.74503154 -5.7405318 0 -4.6252196 0.33211879
500 0.70570501 -5.6824439 0 -4.6260035 0.62020788
Total wall time: 0:00:02
@ -0,0 +1,254 @@
LAMMPS (16 Jul 2018)
# 3d Lennard-Jones Monte Carlo server script

variable mode index file

if "${mode} == file" then "message server mc file tmp.couple" elif "${mode} == zmq" "message server mc zmq *:5555"
message server mc zmq *:5555
variable x index 5
variable y index 5
variable z index 5

units lj
atom_style atomic
atom_modify map yes

lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 $x 0 $y 0 $z
region box block 0 5 0 $y 0 $z
region box block 0 5 0 5 0 $z
region box block 0 5 0 5 0 5
create_box 1 box
Created orthogonal box = (0 0 0) to (8.39798 8.39798 8.39798)
  1 by 1 by 1 MPI processor grid
create_atoms 1 box
Created 500 atoms
  Time spent = 0.000612974 secs
mass 1 1.0

pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0 2.5

neighbor 0.3 bin
neigh_modify delay 0 every 20 check no

velocity all create 1.44 87287 loop geom

fix 1 all nve

thermo 50

server mc
run 0
Neighbor list info ...
  update every 20 steps, delay 0 steps, check no
  max neighbors/atom: 2000, page size: 100000
  master list distance cutoff = 2.8
  ghost atom cutoff = 2.8
  binsize = 1.4, bins = 6 6 6
  1 neighbor lists, perpetual/occasional/extra = 1 0 0
  (1) pair lj/cut, perpetual
      attributes: half, newton on
      pair build: half/bin/atomonly/newton
      stencil: half/bin/3d/newton
      bin: standard
Per MPI rank memory allocation (min/avg/max) = 2.658 | 2.658 | 2.658 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 -6.7733681 0 -4.6176881 -5.0221006
Loop time of 2.14577e-06 on 1 procs for 0 steps with 500 atoms

46.6% CPU use with 1 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0 | 0 | 0 | 0.0 | 0.00
Output | 0 | 0 | 0 | 0.0 | 0.00
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 2.146e-06 | | |100.00

Nlocal: 500 ave 500 max 500 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 1956 ave 1956 max 1956 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 19500 ave 19500 max 19500 min
Histogram: 1 0 0 0 0 0 0 0 0 0

Total # of neighbors = 19500
Ave neighs/atom = 39
Neighbor list builds = 0
Dangerous builds not checked
run 0
Per MPI rank memory allocation (min/avg/max) = 2.658 | 2.658 | 2.658 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 -6.7723127 0 -4.6166327 -5.015531
Loop time of 1.90735e-06 on 1 procs for 0 steps with 500 atoms

157.3% CPU use with 1 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0 | 0 | 0 | 0.0 | 0.00
Output | 0 | 0 | 0 | 0.0 | 0.00
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 1.907e-06 | | |100.00

Nlocal: 500 ave 500 max 500 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 1956 ave 1956 max 1956 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 19501 ave 19501 max 19501 min
Histogram: 1 0 0 0 0 0 0 0 0 0

Total # of neighbors = 19501
Ave neighs/atom = 39.002
Neighbor list builds = 0
Dangerous builds not checked
Per MPI rank memory allocation (min/avg/max) = 2.658 | 2.658 | 2.658 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 -6.7723127 0 -4.6166327 -5.015531
50 0.70239211 -5.6763152 0 -4.6248342 0.59544428
100 0.7565013 -5.757431 0 -4.6249485 0.21982657
run 0
Per MPI rank memory allocation (min/avg/max) = 2.658 | 2.658 | 2.658 Mbytes
Step Temp E_pair E_mol TotEng Press
100 0.7565013 -5.7565768 0 -4.6240944 0.22436405
Loop time of 9.53674e-07 on 1 procs for 0 steps with 500 atoms

209.7% CPU use with 1 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0 | 0 | 0 | 0.0 | 0.00
Output | 0 | 0 | 0 | 0.0 | 0.00
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 9.537e-07 | | |100.00

Nlocal: 500 ave 500 max 500 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 1939 ave 1939 max 1939 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 18757 ave 18757 max 18757 min
Histogram: 1 0 0 0 0 0 0 0 0 0

Total # of neighbors = 18757
Ave neighs/atom = 37.514
Neighbor list builds = 0
Dangerous builds not checked
Per MPI rank memory allocation (min/avg/max) = 2.658 | 2.658 | 2.658 Mbytes
Step Temp E_pair E_mol TotEng Press
100 0.7565013 -5.757431 0 -4.6249485 0.21982657
150 0.76110797 -5.7664315 0 -4.6270529 0.16005254
200 0.73505651 -5.7266069 0 -4.6262273 0.34189744
run 0
Per MPI rank memory allocation (min/avg/max) = 2.658 | 2.658 | 2.658 Mbytes
Step Temp E_pair E_mol TotEng Press
200 0.73505651 -5.7181381 0 -4.6177585 0.37629943
Loop time of 9.53674e-07 on 1 procs for 0 steps with 500 atoms

209.7% CPU use with 1 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0 | 0 | 0 | 0.0 | 0.00
Output | 0 | 0 | 0 | 0.0 | 0.00
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 9.537e-07 | | |100.00

Nlocal: 500 ave 500 max 500 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 1899 ave 1899 max 1899 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 18699 ave 18699 max 18699 min
Histogram: 1 0 0 0 0 0 0 0 0 0

Total # of neighbors = 18699
Ave neighs/atom = 37.398
Neighbor list builds = 0
Dangerous builds not checked
Per MPI rank memory allocation (min/avg/max) = 2.658 | 2.658 | 2.658 Mbytes
Step Temp E_pair E_mol TotEng Press
200 0.73505651 -5.7266069 0 -4.6262273 0.34189744
250 0.73052476 -5.7206316 0 -4.627036 0.39287516
300 0.76300831 -5.7675007 0 -4.6252773 0.16312925
run 0
Per MPI rank memory allocation (min/avg/max) = 2.658 | 2.658 | 2.658 Mbytes
Step Temp E_pair E_mol TotEng Press
300 0.76300831 -5.768304 0 -4.6260806 0.15954325
Loop time of 0 on 1 procs for 0 steps with 500 atoms

0.0% CPU use with 1 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0 | 0 | 0 | 0.0 | 0.00
Output | 0 | 0 | 0 | 0.0 | 0.00
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 0 | | | 0.00

Nlocal: 500 ave 500 max 500 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 1903 ave 1903 max 1903 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 18715 ave 18715 max 18715 min
Histogram: 1 0 0 0 0 0 0 0 0 0

Total # of neighbors = 18715
Ave neighs/atom = 37.43
Neighbor list builds = 0
Dangerous builds not checked
Per MPI rank memory allocation (min/avg/max) = 2.658 | 2.658 | 2.658 Mbytes
Step Temp E_pair E_mol TotEng Press
300 0.76300831 -5.768304 0 -4.6260806 0.15954325
350 0.72993309 -5.7193261 0 -4.6266162 0.3358374
400 0.72469448 -5.713463 0 -4.6285954 0.44859547
run 0
Per MPI rank memory allocation (min/avg/max) = 2.658 | 2.658 | 2.658 Mbytes
Step Temp E_pair E_mol TotEng Press
400 0.72469448 -5.7077332 0 -4.6228655 0.47669832
Loop time of 0 on 1 procs for 0 steps with 500 atoms

0.0% CPU use with 1 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0 | 0 | 0 | 0.0 | 0.00
Output | 0 | 0 | 0 | 0.0 | 0.00
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 0 | | | 0.00

Nlocal: 500 ave 500 max 500 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 1899 ave 1899 max 1899 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 18683 ave 18683 max 18683 min
Histogram: 1 0 0 0 0 0 0 0 0 0

Total # of neighbors = 18683
Ave neighs/atom = 37.366
Neighbor list builds = 0
Dangerous builds not checked
Per MPI rank memory allocation (min/avg/max) = 2.658 | 2.658 | 2.658 Mbytes
Step Temp E_pair E_mol TotEng Press
400 0.72469448 -5.713463 0 -4.6285954 0.44859547
450 0.75305735 -5.7518283 0 -4.6245015 0.34658587
500 0.73092571 -5.7206337 0 -4.6264379 0.43715809
Total wall time: 0:00:00
@ -0,0 +1,254 @@
LAMMPS (16 Jul 2018)
# 3d Lennard-Jones Monte Carlo server script

variable mode index file

if "${mode} == file" then "message server mc file tmp.couple" elif "${mode} == zmq" "message server mc zmq *:5555"
message server mc zmq *:5555
variable x index 5
variable y index 5
variable z index 5

units lj
atom_style atomic
atom_modify map yes

lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 $x 0 $y 0 $z
region box block 0 5 0 $y 0 $z
region box block 0 5 0 5 0 $z
region box block 0 5 0 5 0 5
create_box 1 box
Created orthogonal box = (0 0 0) to (8.39798 8.39798 8.39798)
  1 by 2 by 2 MPI processor grid
create_atoms 1 box
Created 500 atoms
  Time spent = 0.000566006 secs
mass 1 1.0

pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0 2.5

neighbor 0.3 bin
neigh_modify delay 0 every 20 check no

velocity all create 1.44 87287 loop geom

fix 1 all nve

thermo 50

server mc
run 0
Neighbor list info ...
  update every 20 steps, delay 0 steps, check no
  max neighbors/atom: 2000, page size: 100000
  master list distance cutoff = 2.8
  ghost atom cutoff = 2.8
  binsize = 1.4, bins = 6 6 6
  1 neighbor lists, perpetual/occasional/extra = 1 0 0
  (1) pair lj/cut, perpetual
      attributes: half, newton on
      pair build: half/bin/atomonly/newton
      stencil: half/bin/3d/newton
      bin: standard
Per MPI rank memory allocation (min/avg/max) = 2.619 | 2.619 | 2.619 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 -6.7733681 0 -4.6176881 -5.0221006
Loop time of 4.29153e-06 on 4 procs for 0 steps with 500 atoms

99.0% CPU use with 4 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0 | 0 | 0 | 0.0 | 0.00
Output | 0 | 0 | 0 | 0.0 | 0.00
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 4.292e-06 | | |100.00

Nlocal: 125 ave 125 max 125 min
Histogram: 4 0 0 0 0 0 0 0 0 0
Nghost: 1099 ave 1099 max 1099 min
Histogram: 4 0 0 0 0 0 0 0 0 0
Neighs: 4875 ave 4875 max 4875 min
Histogram: 4 0 0 0 0 0 0 0 0 0

Total # of neighbors = 19500
Ave neighs/atom = 39
Neighbor list builds = 0
Dangerous builds not checked
run 0
Per MPI rank memory allocation (min/avg/max) = 2.619 | 2.619 | 2.619 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 -6.7723127 0 -4.6166327 -5.015531
Loop time of 3.57628e-06 on 4 procs for 0 steps with 500 atoms

97.9% CPU use with 4 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0 | 0 | 0 | 0.0 | 0.00
Output | 0 | 0 | 0 | 0.0 | 0.00
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 3.576e-06 | | |100.00

Nlocal: 125 ave 125 max 125 min
Histogram: 4 0 0 0 0 0 0 0 0 0
Nghost: 1099 ave 1099 max 1099 min
Histogram: 4 0 0 0 0 0 0 0 0 0
Neighs: 4875.25 ave 4885 max 4866 min
Histogram: 1 0 0 0 2 0 0 0 0 1

Total # of neighbors = 19501
Ave neighs/atom = 39.002
Neighbor list builds = 0
Dangerous builds not checked
Per MPI rank memory allocation (min/avg/max) = 2.619 | 2.619 | 2.619 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 -6.7723127 0 -4.6166327 -5.015531
50 0.70210225 -5.6759068 0 -4.6248598 0.59609192
100 0.75891559 -5.7611234 0 -4.6250267 0.20841608
run 0
Per MPI rank memory allocation (min/avg/max) = 2.619 | 2.619 | 2.619 Mbytes
Step Temp E_pair E_mol TotEng Press
100 0.75891559 -5.7609392 0 -4.6248426 0.20981291
Loop time of 3.09944e-06 on 4 procs for 0 steps with 500 atoms

121.0% CPU use with 4 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0 | 0 | 0 | 0.0 | 0.00
Output | 0 | 0 | 0 | 0.0 | 0.00
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 3.099e-06 | | |100.00

Nlocal: 125 ave 126 max 124 min
Histogram: 2 0 0 0 0 0 0 0 0 2
Nghost: 1085.25 ave 1089 max 1079 min
Histogram: 1 0 0 0 0 1 0 0 0 2
Neighs: 4690.25 ave 4996 max 4401 min
Histogram: 1 0 0 1 0 1 0 0 0 1

Total # of neighbors = 18761
Ave neighs/atom = 37.522
Neighbor list builds = 0
Dangerous builds not checked
Per MPI rank memory allocation (min/avg/max) = 2.619 | 2.619 | 2.619 Mbytes
Step Temp E_pair E_mol TotEng Press
100 0.75891559 -5.7609392 0 -4.6248426 0.20981291
150 0.75437991 -5.7558622 0 -4.6265555 0.20681722
200 0.73111257 -5.7193748 0 -4.6248993 0.35230715
run 0
Per MPI rank memory allocation (min/avg/max) = 2.619 | 2.619 | 2.619 Mbytes
Step Temp E_pair E_mol TotEng Press
200 0.73111257 -5.7143906 0 -4.6199151 0.37126023
Loop time of 2.14577e-06 on 4 procs for 0 steps with 500 atoms

139.8% CPU use with 4 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0 | 0 | 0 | 0.0 | 0.00
Output | 0 | 0 | 0 | 0.0 | 0.00
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 2.146e-06 | | |100.00

Nlocal: 125 ave 126 max 123 min
Histogram: 1 0 0 0 0 0 1 0 0 2
Nghost: 1068.5 ave 1076 max 1063 min
Histogram: 2 0 0 0 0 0 1 0 0 1
Neighs: 4674.75 ave 4938 max 4419 min
Histogram: 1 0 0 0 1 1 0 0 0 1

Total # of neighbors = 18699
Ave neighs/atom = 37.398
Neighbor list builds = 0
Dangerous builds not checked
Per MPI rank memory allocation (min/avg/max) = 2.619 | 2.619 | 2.619 Mbytes
Step Temp E_pair E_mol TotEng Press
200 0.73111257 -5.7193748 0 -4.6248993 0.35230715
250 0.73873144 -5.7312505 0 -4.6253696 0.33061033
300 0.76392796 -5.7719207 0 -4.6283206 0.18197874
run 0
Per MPI rank memory allocation (min/avg/max) = 2.619 | 2.619 | 2.619 Mbytes
Step Temp E_pair E_mol TotEng Press
300 0.76392796 -5.7725589 0 -4.6289588 0.17994628
Loop time of 1.90735e-06 on 4 procs for 0 steps with 500 atoms

157.3% CPU use with 4 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0 | 0 | 0 | 0.0 | 0.00
Output | 0 | 0 | 0 | 0.0 | 0.00
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 1.907e-06 | | |100.00

Nlocal: 125 ave 128 max 121 min
Histogram: 1 0 0 0 0 1 0 1 0 1
Nghost: 1069 ave 1080 max 1055 min
Histogram: 1 0 0 0 0 0 2 0 0 1
Neighs: 4672 ave 4803 max 4600 min
Histogram: 2 0 0 1 0 0 0 0 0 1

Total # of neighbors = 18688
Ave neighs/atom = 37.376
Neighbor list builds = 0
Dangerous builds not checked
Per MPI rank memory allocation (min/avg/max) = 2.619 | 2.619 | 2.619 Mbytes
Step Temp E_pair E_mol TotEng Press
300 0.76392796 -5.7725589 0 -4.6289588 0.17994628
350 0.71953041 -5.7041632 0 -4.6270261 0.44866153
400 0.7319047 -5.7216051 0 -4.6259438 0.46321355
run 0
Per MPI rank memory allocation (min/avg/max) = 2.619 | 2.619 | 2.619 Mbytes
Step Temp E_pair E_mol TotEng Press
400 0.7319047 -5.7158168 0 -4.6201554 0.49192039
Loop time of 2.14577e-06 on 4 procs for 0 steps with 500 atoms

151.5% CPU use with 4 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0 | 0 | 0 | 0.0 | 0.00
Output | 0 | 0 | 0 | 0.0 | 0.00
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 2.146e-06 | | |100.00

Nlocal: 125 ave 132 max 118 min
Histogram: 1 0 0 0 0 2 0 0 0 1
Nghost: 1057.5 ave 1068 max 1049 min
Histogram: 1 0 0 1 1 0 0 0 0 1
Neighs: 4685.75 ave 5045 max 4229 min
Histogram: 1 0 0 1 0 0 0 0 0 2

Total # of neighbors = 18743
Ave neighs/atom = 37.486
Neighbor list builds = 0
Dangerous builds not checked
Per MPI rank memory allocation (min/avg/max) = 2.619 | 2.619 | 2.619 Mbytes
Step Temp E_pair E_mol TotEng Press
400 0.7319047 -5.7216051 0 -4.6259438 0.46321355
450 0.74503154 -5.7405318 0 -4.6252196 0.33211879
500 0.70570501 -5.6824439 0 -4.6260035 0.62020788
Total wall time: 0:00:00
@ -0,0 +1,261 @@
/* ----------------------------------------------------------------------
   LAMMPS - Large-scale Atomic/Molecular Massively Parallel Simulator
   http://lammps.sandia.gov, Sandia National Laboratories
   Steve Plimpton, sjplimp@sandia.gov
------------------------------------------------------------------------- */

// MC code used with LAMMPS in client/server mode
// MC is the client, LAMMPS is the server

// Syntax: mc infile mode modearg
//         mode = file, zmq
//         modearg = filename for file, localhost:5555 for zmq

#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include "cslib.h"
using namespace CSLIB_NS;

#include "mc.h"
#include "random_park.h"

void error(const char *);
CSlib *cs_create(char *, char *);

#define MAXLINE 256

/* ---------------------------------------------------------------------- */

// main program

int main(int narg, char **arg)
{
  if (narg != 4) {
    error("Syntax: mc infile mode modearg");
    exit(1);
  }

  // initialize CSlib

  CSlib *cs = cs_create(arg[2],arg[3]);

  // create MC class and perform run

  MC *mc = new MC(arg[1],cs);
  mc->run();

  // final MC stats

  int naccept = mc->naccept;
  int nattempt = mc->nattempt;

  printf("------ MC stats ------\n");
  printf("MC attempts = %d\n",nattempt);
  printf("MC accepts = %d\n",naccept);
  printf("Acceptance ratio = %g\n",1.0*naccept/nattempt);

  // clean up

  delete cs;
  delete mc;
}

/* ---------------------------------------------------------------------- */

void error(const char *str)
{
  printf("ERROR: %s\n",str);
  exit(1);
}

/* ---------------------------------------------------------------------- */

CSlib *cs_create(char *mode, char *arg)
{
  CSlib *cs = new CSlib(0,mode,arg,NULL);

  // initial handshake to agree on protocol

  cs->send(0,1);
  cs->pack_string(1,(char *) "mc");

  int msgID,nfield;
  int *fieldID,*fieldtype,*fieldlen;
  msgID = cs->recv(nfield,fieldID,fieldtype,fieldlen);

  return cs;
}

// ----------------------------------------------------------------------
// MC class
// ----------------------------------------------------------------------

MC::MC(char *mcfile, CSlib *cs_caller)
{
  cs = cs_caller;

  // setup MC params

  options(mcfile);

  // random # generator

  random = new RanPark(seed);
}

/* ---------------------------------------------------------------------- */

MC::~MC()
{
  free(x);
  delete random;
}

/* ---------------------------------------------------------------------- */

void MC::run()
{
  int iatom,accept,msgID,nfield;
  double pe_initial,pe_final,edelta;
  double dx,dy,dz;
  double xold[3],xnew[3];
  int *fieldID,*fieldtype,*fieldlen;

  enum{NATOMS=1,EINIT,DISPLACE,ACCEPT,RUN};

  // one-time request for atom count from MD
  // allocate 1d coord buffer

  cs->send(NATOMS,0);

  msgID = cs->recv(nfield,fieldID,fieldtype,fieldlen);
  natoms = cs->unpack_int(1);

  x = (double *) malloc(3*natoms*sizeof(double));

  // loop over MC moves

  naccept = nattempt = 0;

  for (int iloop = 0; iloop < nloop; iloop++) {

    // request current energy from MD
    // recv energy, coords from MD

    cs->send(EINIT,0);

    msgID = cs->recv(nfield,fieldID,fieldtype,fieldlen);
    pe_initial = cs->unpack_double(1);
    double *x = (double *) cs->unpack(2);

    // perform simple MC event
    // displace a single atom by random amount

    iatom = (int) natoms*random->uniform();
    xold[0] = x[3*iatom+0];
    xold[1] = x[3*iatom+1];
    xold[2] = x[3*iatom+2];

    dx = 2.0*delta*random->uniform() - delta;
    dy = 2.0*delta*random->uniform() - delta;
    dz = 2.0*delta*random->uniform() - delta;

    xnew[0] = xold[0] + dx;
    xnew[1] = xold[1] + dy;
    xnew[2] = xold[2] + dz;

    // send atom ID and its new coords to MD
    // recv new energy

    cs->send(DISPLACE,2);
    cs->pack_int(1,iatom+1);
    cs->pack(2,4,3,xnew);

    msgID = cs->recv(nfield,fieldID,fieldtype,fieldlen);
    pe_final = cs->unpack_double(1);

    // decide whether to accept/reject MC event

    if (pe_final <= pe_initial) accept = 1;
    else if (temperature == 0.0) accept = 0;
    else if (random->uniform() >
             exp(natoms*(pe_initial-pe_final)/temperature)) accept = 0;
    else accept = 1;

    nattempt++;
    if (accept) naccept++;

    // send accept (1) or reject (0) flag to MD

    cs->send(ACCEPT,1);
    cs->pack_int(1,accept);

    msgID = cs->recv(nfield,fieldID,fieldtype,fieldlen);

    // send dynamics timesteps

    cs->send(RUN,1);
    cs->pack_int(1,ndynamics);

    msgID = cs->recv(nfield,fieldID,fieldtype,fieldlen);
  }

  // send exit message to MD

  cs->send(-1,0);
  msgID = cs->recv(nfield,fieldID,fieldtype,fieldlen);
}

/* ---------------------------------------------------------------------- */

void MC::options(char *filename)
{
  // default params

  nsteps = 0;
  ndynamics = 100;
  delta = 0.1;
  temperature = 1.0;
  seed = 12345;

  // read and parse file

  FILE *fp = fopen(filename,"r");
  if (fp == NULL) error("Could not open MC file");

  char line[MAXLINE];
  char *keyword,*value;
  char *eof = fgets(line,MAXLINE,fp);

  while (eof) {
    if (line[0] == '#') {          // comment line
      eof = fgets(line,MAXLINE,fp);
      continue;
    }

    value = strtok(line," \t\n\r\f");
    if (value == NULL) {           // blank line
      eof = fgets(line,MAXLINE,fp);
      continue;
    }

    keyword = strtok(NULL," \t\n\r\f");
    if (keyword == NULL) error("Missing keyword in MC file");

    if (strcmp(keyword,"nsteps") == 0) nsteps = atoi(value);
    else if (strcmp(keyword,"ndynamics") == 0) ndynamics = atoi(value);
    else if (strcmp(keyword,"delta") == 0) delta = atof(value);
    else if (strcmp(keyword,"temperature") == 0) temperature = atof(value);
    else if (strcmp(keyword,"seed") == 0) seed = atoi(value);
    else error("Unknown param in MC file");

    eof = fgets(line,MAXLINE,fp);
  }

  // derived params

  nloop = nsteps/ndynamics;
}
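The accept/reject decision in MC::run() is the standard Metropolis criterion.
A short Python restatement of that logic (an illustration only, assuming the
energies LAMMPS returns are normalized per atom, so the natoms factor converts
the difference to a total energy for the Boltzmann weight in lj units, kB = 1):

  import math, random

  def accept_move(pe_initial, pe_final, natoms, temperature):
    # downhill moves are always accepted
    if pe_final <= pe_initial: return True
    # at T = 0 any uphill move is rejected
    if temperature == 0.0: return False
    # otherwise accept with Boltzmann probability, as in MC::run()
    boltz = math.exp(natoms*(pe_initial - pe_final)/temperature)
    return random.random() <= boltz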
@ -0,0 +1,40 @@
/* ----------------------------------------------------------------------
   LAMMPS - Large-scale Atomic/Molecular Massively Parallel Simulator
   http://lammps.sandia.gov, Sandia National Laboratories
   Steve Plimpton, sjplimp@sandia.gov
------------------------------------------------------------------------- */

#ifndef MC_H
#define MC_H

/* ---------------------------------------------------------------------- */

class MC {
 public:
  int naccept;           // # of accepted MC events
  int nattempt;          // # of attempted MC events

  MC(char *, class CSlib *);
  ~MC();
  void run();

 private:
  int nsteps;            // total # of MD steps
  int ndynamics;         // steps in one short dynamics run
  int nloop;             // nsteps/ndynamics
  int natoms;            // # of MD atoms

  double delta;          // MC displacement distance
  double temperature;    // MC temperature for Boltzmann criterion
  double *x;             // atom coords as 3N 1d vector
  double energy;         // global potential energy

  int seed;              // RNG seed
  class RanPark *random;

  class CSlib *cs;       // messaging library

  void options(char *);
};

#endif
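For reference, the same request/respond cycle can be sketched against the
CSlib Python wrapper (a sketch only, using the calls demonstrated by
vasp_wrap.py later in this commit; the message IDs match the enum in
MC::run(), and the loop parameters here stand in for values read from the
MC input file):

  from cslib import CSlib
  import math, random

  NATOMS,EINIT,DISPLACE,ACCEPT,RUN = range(1,5+1)
  nloop,ndynamics,delta,temperature = 5,100,0.1,1.0  # defaults from MC::options()

  cs = CSlib(0,"zmq","localhost:5555",None)   # 0 = client, 1 = server

  cs.send(0,1)                      # handshake: agree on the "mc" protocol
  cs.pack_string(1,"mc")
  cs.recv()

  cs.send(NATOMS,0)                 # one-time request for the atom count
  msgID,nfield,fieldID,fieldtype,fieldlen = cs.recv()
  natoms = cs.unpack_int(1)

  for iloop in range(nloop):
    cs.send(EINIT,0)                # request current energy + coords from MD
    msgID,nfield,fieldID,fieldtype,fieldlen = cs.recv()
    pe_initial = cs.unpack_double(1)
    x = cs.unpack(2,1)              # two-arg unpack mirrors vasp_wrap.py usage

    iatom = int(natoms*random.random())   # displace one random atom
    xnew = [x[3*iatom+k] + 2.0*delta*random.random() - delta for k in range(3)]

    cs.send(DISPLACE,2)             # propose the move, receive the new energy
    cs.pack_int(1,iatom+1)
    cs.pack(2,4,3,xnew)             # field 2, type 4 = double, 3 values
    msgID,nfield,fieldID,fieldtype,fieldlen = cs.recv()
    pe_final = cs.unpack_double(1)

    if pe_final <= pe_initial:      # Metropolis test, as in MC::run()
      accept = 1
    elif random.random() <= math.exp(natoms*(pe_initial-pe_final)/temperature):
      accept = 1
    else:
      accept = 0

    cs.send(ACCEPT,1)               # tell MD to keep or revert the move
    cs.pack_int(1,accept)
    cs.recv()

    cs.send(RUN,1)                  # run ndynamics steps of dynamics
    cs.pack_int(1,ndynamics)
    cs.recv()

  cs.send(-1,0)                     # all-done message to MD
  cs.recv()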
@ -0,0 +1,72 @@
/* ----------------------------------------------------------------------
   LAMMPS - Large-scale Atomic/Molecular Massively Parallel Simulator
   http://lammps.sandia.gov, Sandia National Laboratories
   Steve Plimpton, sjplimp@sandia.gov

   Copyright (2003) Sandia Corporation.  Under the terms of Contract
   DE-AC04-94AL85000 with Sandia Corporation, the U.S. Government retains
   certain rights in this software.  This software is distributed under
   the GNU General Public License.

   See the README file in the top-level LAMMPS directory.
------------------------------------------------------------------------- */

// Park/Miller RNG

#include <math.h>
#include "random_park.h"
//#include "error.h"

#define IA 16807
#define IM 2147483647
#define AM (1.0/IM)
#define IQ 127773
#define IR 2836

/* ---------------------------------------------------------------------- */

RanPark::RanPark(int seed_init)
{
  //if (seed_init <= 0)
  //  error->one(FLERR,"Invalid seed for Park random # generator");
  seed = seed_init;
  save = 0;
}

/* ----------------------------------------------------------------------
   uniform RN
------------------------------------------------------------------------- */

double RanPark::uniform()
{
  int k = seed/IQ;
  seed = IA*(seed-k*IQ) - IR*k;
  if (seed < 0) seed += IM;
  double ans = AM*seed;
  return ans;
}

/* ----------------------------------------------------------------------
   gaussian RN
------------------------------------------------------------------------- */

double RanPark::gaussian()
{
  double first,v1,v2,rsq,fac;

  if (!save) {
    do {
      v1 = 2.0*uniform()-1.0;
      v2 = 2.0*uniform()-1.0;
      rsq = v1*v1 + v2*v2;
    } while ((rsq >= 1.0) || (rsq == 0.0));
    fac = sqrt(-2.0*log(rsq)/rsq);
    second = v1*fac;
    first = v2*fac;
    save = 1;
  } else {
    first = second;
    save = 0;
  }
  return first;
}
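The uniform() update above is the classic Park/Miller "minimal standard"
generator, seed <- (16807*seed) mod (2^31 - 1), computed with Schrage's
factorization so the intermediate products never overflow 32-bit signed
arithmetic. A quick Python cross-check of one update (illustrative only):

  IA,IM,IQ,IR = 16807,2147483647,127773,2836

  def park_miller(seed):
    k = seed // IQ
    seed = IA*(seed - k*IQ) - IR*k
    if seed < 0: seed += IM
    return seed, seed/float(IM)   # (new seed, uniform RN in (0,1))

  seed = 12345                    # default seed from MC::options()
  seed, rn = park_miller(seed)
  assert seed == (16807*12345) % 2147483647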
@ -0,0 +1,28 @@
/* ----------------------------------------------------------------------
   LAMMPS - Large-scale Atomic/Molecular Massively Parallel Simulator
   http://lammps.sandia.gov, Sandia National Laboratories
   Steve Plimpton, sjplimp@sandia.gov

   Copyright (2003) Sandia Corporation.  Under the terms of Contract
   DE-AC04-94AL85000 with Sandia Corporation, the U.S. Government retains
   certain rights in this software.  This software is distributed under
   the GNU General Public License.

   See the README file in the top-level LAMMPS directory.
------------------------------------------------------------------------- */

#ifndef RANPARK_H
#define RANPARK_H

class RanPark {
 public:
  RanPark(int);
  double uniform();
  double gaussian();

 private:
  int seed,save;
  double second;
};

#endif
@ -0,0 +1,53 @@
# Startparameter for this run:
NWRITE = 2 write-flag & timer
PREC = normal normal or accurate (medium, high low for compatibility)
ISTART = 0 job : 0-new 1-cont 2-samecut
ICHARG = 2 charge: 1-file 2-atom 10-const
ISPIN = 1 spin polarized calculation?
LSORBIT = F spin-orbit coupling
INIWAV = 1 electr: 0-lowe 1-rand 2-diag

# Electronic Relaxation 1
ENCUT = 600.0 eV #Plane wave energy cutoff
ENINI = 600.0 initial cutoff
NELM = 100; NELMIN= 2; NELMDL= -5 # of ELM steps
EDIFF = 0.1E-05 stopping-criterion for ELM
# Ionic relaxation
EDIFFG = 0.1E-02 stopping-criterion for IOM
NSW = 0 number of steps for IOM
NBLOCK = 1; KBLOCK = 1 inner block; outer block
IBRION = -1 ionic relax: 0-MD 1-quasi-New 2-CG #No ion relaxation with -1
NFREE = 0 steps in history (QN), initial steepest desc. (CG)
ISIF = 2 stress and relaxation # 2: F-yes Sts-yes RlxIon-yes cellshape-no cellvol-no
IWAVPR = 10 prediction: 0-non 1-charg 2-wave 3-comb # 10: TMPCAR stored in memory rather than file

POTIM = 0.5000 time-step for ionic-motion
TEBEG = 3500.0; TEEND = 3500.0 temperature during run # Finite Temperature variables if AI-MD is on
SMASS = -3.00 Nose mass-parameter (am)
estimated Nose-frequenzy (Omega) = 0.10E-29 period in steps =****** mass= -0.366E-27a.u.
PSTRESS= 0.0 pullay stress

# DOS related values:
EMIN = 10.00; EMAX =-10.00 energy-range for DOS
EFERMI = 0.00
ISMEAR = 0; SIGMA = 0.10 broadening in eV -4-tet -1-fermi 0-gaus

# Electronic relaxation 2 (details)
IALGO = 48 algorithm

# Write flags
LWAVE = T write WAVECAR
LCHARG = T write CHGCAR
LVTOT = F write LOCPOT, total local potential
LVHAR = F write LOCPOT, Hartree potential only
LELF = F write electronic localiz. function (ELF)

# Dipole corrections
LMONO = F monopole corrections only (constant potential shift)
LDIPOL = F correct potential (dipole corrections)
IDIPOL = 0 1-x, 2-y, 3-z, 4-all directions
EPSILON= 1.0000000 bulk dielectric constant

# Exchange correlation treatment:
GGA = -- GGA type

@ -0,0 +1,6 @@
K-Points
0
Monkhorst Pack
15 15 15
0 0 0

@ -0,0 +1,11 @@
W unit cell
1.0
3.16 0.00000000 0.00000000
0.00000000 3.16 0.00000000
0.00000000 0.00000000 3.16
W
2
Direct
0.00000000 0.00000000 0.00000000
0.50000000 0.50000000 0.50000000

@ -0,0 +1,90 @@
Sample LAMMPS MD wrapper on VASP quantum DFT via client/server
coupling

See the MESSAGE package (doc/Section_messages.html#MESSAGE) and
Section_howto.html#howto10 for more details on how client/server
coupling works in LAMMPS.

In this dir, vasp_wrap.py is a wrapper on the VASP quantum DFT
code so it can work as a "server" code which LAMMPS drives as a
"client" code to perform ab initio MD. LAMMPS performs the MD
timestepping, sends VASP a current set of coordinates each timestep,
VASP computes forces and energy and virial and returns that info to
LAMMPS.

Messages are exchanged between LAMMPS and VASP via a client/server
library (CSlib), which is included in the LAMMPS distribution in
lib/message. As explained below you can choose to exchange data
between the two programs either via files or sockets (ZMQ). If the
vasp_wrap.py program became parallel, or the CSlib library calls were
integrated into VASP directly, then data could also be exchanged via
MPI.

----------------

Build LAMMPS with its MESSAGE package installed:

cd lammps/lib/message
python Install.py -m -z # build CSlib with MPI and ZMQ support
cd lammps/src
make yes-message
make mpi

You can leave off the -z if you do not have ZMQ on your system.

----------------

To run in client/server mode:

Both the client (LAMMPS) and server (vasp_wrap.py) must use the same
messaging mode, namely file or zmq. This is an argument to the
vasp_wrap.py code; it can be selected by setting the "mode" variable
when you run LAMMPS. The default mode = file.

Here we assume LAMMPS was built to run in parallel, and the MESSAGE
package was installed with socket (ZMQ) support. This means either of
the messaging modes can be used and LAMMPS can be run in serial or
parallel. The vasp_wrap.py code is always run in serial, but it
launches VASP from Python via an mpirun command which can run VASP
itself in parallel.

When you run, the server should print out thermodynamic info every
timestep which corresponds to the forces and virial computed by VASP.
VASP will also generate output files each timestep. The vasp_wrap.py
script could be generalized to archive these.

The examples below are commands you should use in two different
terminal windows. The order of the two commands (client or server
launch) does not matter. You can run them both in the same window if
you append a "&" character to the first one to run it in the
background.

--------------

File mode of messaging:

% mpirun -np 1 lmp_mpi -v mode file < in.client.W
% python vasp_wrap.py file POSCAR_W

% mpirun -np 2 lmp_mpi -v mode file < in.client.W
% python vasp_wrap.py file POSCAR_W

ZMQ mode of messaging:

% mpirun -np 1 lmp_mpi -v mode zmq < in.client.W
% python vasp_wrap.py zmq POSCAR_W

% mpirun -np 2 lmp_mpi -v mode zmq < in.client.W
% python vasp_wrap.py zmq POSCAR_W

--------------

The provided data.W file (for LAMMPS) and POSCAR_W file (for VASP) are
for a simple 2-atom unit cell of bcc tungsten (W). You could
replicate this with LAMMPS to create a larger system. The
vasp_wrap.py script needs to be generalized to create an appropriate
POSCAR_W file for a larger box.

VASP input files include the sample INCAR and KPOINTS files provided.
A POTCAR file is also needed, which should come from your VASP package
installation.
@ -0,0 +1,15 @@
LAMMPS W data file

2 atoms

1 atom types

0.0 3.16 xlo xhi
0.0 3.16 ylo yhi
0.0 3.16 zlo zhi

Atoms

1 1 0.000 0.000 0.000
2 1 1.58 1.58 1.58

@ -0,0 +1,34 @@
# small W unit cell for use with VASP

variable mode index file

if "${mode} == file" then &
  "message client md file tmp.couple" &
elif "${mode} == zmq" &
  "message client md zmq localhost:5555"

variable x index 1
variable y index 1
variable z index 1

units metal
atom_style atomic
atom_modify sort 0 0.0 map yes

read_data data.W
mass 1 183.85

replicate $x $y $z

velocity all create 300.0 87287 loop geom

neighbor 0.3 bin
neigh_modify delay 0 every 10 check no

fix 1 all nve
fix 2 all client/md
fix_modify 2 energy yes

thermo 1
run 3

@ -0,0 +1,63 @@
LAMMPS (20 Apr 2018)
# small W unit cell for use with VASP

#message client aimd file tmp.couple
message client aimd zmq localhost:5555
#message client aimd mpi/two tmp.couple
#message client aimd mpi/one tmp.couple

units real
atom_style atomic
atom_modify sort 0 0.0 map yes

read_data data.W
  orthogonal box = (0 0 0) to (3.16 3.16 3.16)
  1 by 1 by 1 MPI processor grid
  reading atoms ...
  2 atoms
mass 1 1.0

#velocity all create 300.0 87287 loop geom

neighbor 0.3 bin
neigh_modify delay 0 every 10 check no

fix 1 all nve
fix 2 all message/aimd
fix_modify 2 energy yes

thermo 1
run 2
Per MPI rank memory allocation (min/avg/max) = 1.8 | 1.8 | 1.8 Mbytes
Step Temp E_pair E_mol TotEng Press
0 0 0 0 -48.069571 -172694.2
1 0.063865861 0 0 -48.069381 -172693.93
2 0.25546344 0 0 -48.06881 -172693.1
Loop time of 0.281842 on 1 procs for 2 steps with 2 atoms

Performance: 0.613 ns/day, 39.145 hours/ns, 7.096 timesteps/s
0.0% CPU use with 1 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 3.0994e-06 | 3.0994e-06 | 3.0994e-06 | 0.0 | 0.00
Output | 8.9169e-05 | 8.9169e-05 | 8.9169e-05 | 0.0 | 0.03
Modify | 0.28174 | 0.28174 | 0.28174 | 0.0 | 99.97
Other | | 5.96e-06 | | | 0.00

Nlocal: 2 ave 2 max 2 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 7 ave 7 max 7 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 0 ave 0 max 0 min
Histogram: 1 0 0 0 0 0 0 0 0 0

Total # of neighbors = 0
Ave neighs/atom = 0
Neighbor list builds = 0
Dangerous builds not checked

Total wall time: 0:00:06
@ -0,0 +1,234 @@
|
|||
#!/usr/bin/env python
|
||||
|
||||
# ----------------------------------------------------------------------
|
||||
# LAMMPS - Large-scale Atomic/Molecular Massively Parallel Simulator
|
||||
# http://lammps.sandia.gov, Sandia National Laboratories
|
||||
# Steve Plimpton, sjplimp@sandia.gov
|
||||
# ----------------------------------------------------------------------
|
||||
|
||||
# Syntax: vasp_wrap.py file/zmq POSCARfile
|
||||
|
||||
# wrapper on VASP to act as server program using CSlib
|
||||
# receives message with list of coords from client
|
||||
# creates VASP inputs
|
||||
# invokes VASP to calculate self-consistent energy of that config
|
||||
# reads VASP outputs
|
||||
# sends message with energy, forces, virial to client
|
||||
|
||||
# NOTES:
|
||||
# check to insure basic VASP input files are in place?
|
||||
# worry about archiving VASP input/output in special filenames or dirs?
|
||||
# how to get ordering (by type) of VASP atoms vs LAMMPS atoms
|
||||
# create one initial permutation vector?
|
||||
# could make syntax for launching VASP more flexible
|
||||
# e.g. command-line arg for # of procs
|
||||
|
||||
import sys
|
||||
import commands
|
||||
import xml.etree.ElementTree as ET
|
||||
from cslib import CSlib
|
||||
|
||||
vaspcmd = "srun -N 1 --ntasks-per-node=4 " + \
|
||||
"-n 4 /projects/vasp/2017-build/cts1/vasp5.4.4/vasp_tfermi/bin/vasp_std"
|
||||
|
||||
# enums matching FixClientMD class in LAMMPS
|
||||
|
||||
SETUP,STEP = range(1,2+1)
|
||||
UNITS,DIM,NATOMS,NTYPES,BOXLO,BOXHI,BOXTILT,TYPES,COORDS,CHARGE = range(1,10+1)
|
||||
FORCES,ENERGY,VIRIAL = range(1,3+1)
|
||||
|
||||
# -------------------------------------
|
||||
# functions
|
||||
|
||||
# error message and exit
|
||||
|
||||
def error(txt):
|
||||
print "ERROR:",txt
|
||||
sys.exit(1)
|
||||
|
||||
# -------------------------------------
|
||||
# read initial VASP POSCAR file to setup problem
|
||||
# return natoms,ntypes,box
|
||||
|
||||
def vasp_setup(poscar):
|
||||
|
||||
ps = open(poscar,'r').readlines()
|
||||
|
||||
# box size
|
||||
|
||||
words = ps[2].split()
|
||||
xbox = float(words[0])
|
||||
words = ps[3].split()
|
||||
ybox = float(words[1])
|
||||
words = ps[4].split()
|
||||
zbox = float(words[2])
|
||||
box = [xbox,ybox,zbox]
|
||||
|
||||
ntypes = 0
|
||||
natoms = 0
|
||||
words = ps[6].split()
|
||||
for word in words:
|
||||
if word == '#': break
|
||||
ntypes += 1
|
||||
natoms += int(word)
|
||||
|
||||
return natoms,ntypes,box
|
||||
|
||||
# -------------------------------------
|
||||
# write a new POSCAR file for VASP
|
||||
|
||||
def poscar_write(poscar,natoms,ntypes,types,coords,box):
|
||||
|
||||
psold = open(poscar,'r').readlines()
|
||||
psnew = open("POSCAR",'w')
|
||||
|
||||
# header, including box size
|
||||
|
||||
print >>psnew,psold[0],
|
||||
print >>psnew,psold[1],
|
||||
print >>psnew,"%g 0.0 0.0" % box[0]
|
||||
print >>psnew,"0.0 %g 0.0" % box[1]
|
||||
print >>psnew,"0.0 0.0 %g" % box[2]
|
||||
print >>psnew,psold[5],
|
||||
print >>psnew,psold[6],
|
||||
|
||||
# per-atom coords
|
||||
# grouped by types
|
||||
|
||||
print >>psnew,"Cartesian"
|
||||
|
||||
for itype in range(1,ntypes+1):
|
||||
for i in range(natoms):
|
||||
if types[i] != itype: continue
|
||||
x = coords[3*i+0]
|
||||
y = coords[3*i+1]
|
||||
z = coords[3*i+2]
|
||||
aline = " %g %g %g" % (x,y,z)
|
||||
print >>psnew,aline
|
||||
|
||||
psnew.close()
|
||||
|
||||
# -------------------------------------
|
||||
# read a VASP output vasprun.xml file
|
||||
# uses ElementTree module
|
||||
# see https://docs.python.org/2/library/xml.etree.elementtree.html
|
||||
|
||||
def vasprun_read():
|
||||
tree = ET.parse('vasprun.xml')
|
||||
root = tree.getroot()
|
||||
|
||||
#fp = open("vasprun.xml","r")
|
||||
#root = ET.parse(fp)
|
||||
|
||||
scsteps = root.findall('calculation/scstep')
|
||||
energy = scsteps[-1].find('energy')
|
||||
for child in energy:
|
||||
if child.attrib["name"] == "e_0_energy":
|
||||
eout = float(child.text)
|
||||
|
||||
fout = []
|
||||
sout = []
|
||||
|
||||
varrays = root.findall('calculation/varray')
|
||||
for varray in varrays:
|
||||
if varray.attrib["name"] == "forces":
|
||||
forces = varray.findall("v")
|
||||
for line in forces:
|
||||
fxyz = line.text.split()
|
||||
fxyz = [float(value) for value in fxyz]
|
||||
fout += fxyz
|
||||
if varray.attrib["name"] == "stress":
|
||||
tensor = varray.findall("v")
|
||||
stensor = []
|
||||
for line in tensor:
|
||||
sxyz = line.text.split()
|
||||
sxyz = [float(value) for value in sxyz]
|
||||
stensor.append(sxyz)
|
||||
sxx = stensor[0][0]
|
||||
syy = stensor[1][1]
|
||||
szz = stensor[2][2]
|
||||
sxy = 0.5 * (stensor[0][1] + stensor[1][0])
|
||||
sxz = 0.5 * (stensor[0][2] + stensor[2][0])
|
||||
syz = 0.5 * (stensor[1][2] + stensor[2][1])
|
||||
sout = [sxx,syy,szz,sxy,sxz,syz]
|
||||
|
||||
#fp.close()
|
||||
|
||||
return eout,fout,sout

# -------------------------------------
# main program

# command-line args

if len(sys.argv) != 3:
  print "Syntax: python vasp_wrap.py file/zmq POSCARfile"
  sys.exit(1)

mode = sys.argv[1]
poscar_template = sys.argv[2]

if mode == "file": cs = CSlib(1,mode,"tmp.couple",None)
elif mode == "zmq": cs = CSlib(1,mode,"*:5555",None)
else:
  print "Syntax: python vasp_wrap.py file/zmq POSCARfile"
  sys.exit(1)

natoms,ntypes,box = vasp_setup(poscar_template)

# initial message for MD protocol

msgID,nfield,fieldID,fieldtype,fieldlen = cs.recv()
if msgID != 0: error("Bad initial client/server handshake")
protocol = cs.unpack_string(1)
if protocol != "md": error("Mismatch in client/server protocol")
cs.send(0,0)

# endless server loop

while 1:

  # recv message from client
  # msgID < 0 = all-done message

  msgID,nfield,fieldID,fieldtype,fieldlen = cs.recv()
  if msgID < 0: break

  # could generalize this to be more like ServerMD class
  # allow for box size, atom types, natoms, etc

  # unpack coords from client
  # create VASP input
  # NOTE: generalize this for general list of atom types

  coords = cs.unpack(COORDS,1)
  #types = cs.unpack(2)
  types = 2*[1]

  poscar_write(poscar_template,natoms,ntypes,types,coords,box)

  # invoke VASP

  print "Launching VASP ..."
  print vaspcmd
  out = commands.getoutput(vaspcmd)
  print out

  # process VASP output

  energy,forces,virial = vasprun_read()

  # return forces, energy, virial to client

  cs.send(msgID,3)
  cs.pack(FORCES,4,3*natoms,forces)
  cs.pack_double(ENERGY,energy)
  cs.pack(VIRIAL,4,6,virial)

# final reply to client

cs.send(0,0)

# clean-up

del cs
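
For orientation, the client side of this exchange (normally LAMMPS itself,
via its fix client/md) follows the mirror-image sequence. The fragment below
is an illustrative sketch only, not part of the distribution: it assumes the
CSlib Python API used above, that a first constructor argument of 0 selects
client mode (1 = server, as in vasp_wrap.py), and that pack_string() and
unpack_double() exist as counterparts of the unpack_string() and
pack_double() calls shown in the script.

# hypothetical minimal client for the "md" protocol (sketch)
cs = CSlib(0,"zmq","localhost:5555",None)   # assumed: 0 = client mode

cs.send(0,1)                       # msgID 0 = handshake, 1 field
cs.pack_string(1,"md")             # assumed counterpart of unpack_string()
msgID,nfield,fieldID,fieldtype,fieldlen = cs.recv()

cs.send(1,1)                       # one force request: send coords
cs.pack(COORDS,4,3*natoms,coords)  # same COORDS field ID as the server
msgID,nfield,fieldID,fieldtype,fieldlen = cs.recv()
forces = cs.unpack(FORCES,1)
energy = cs.unpack_double(ENERGY)  # assumed counterpart of pack_double()
virial = cs.unpack(VIRIAL,1)

cs.send(-1,0)                      # negative msgID ends the server loop
del cs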

@ -82,6 +82,7 @@ kim: use of potentials in Knowledge Base for Interatomic Models (KIM)
latte: use of LATTE density-functional tight-binding quantum code
meam: MEAM test for SiC and shear (same as shear examples)
melt: rapid melt of 3d LJ system
message: client/server coupling of 2 codes
micelle: self-assembly of small lipid-like molecules into 2d bilayers
min: energy minimization of 2d LJ melt
mscg: parameterize a multi-scale coarse-graining (MSCG) model
@ -0,0 +1,102 @@
This dir contains scripts that demonstrate how to use LAMMPS as both a
client and server code to run a simple MD simulation. LAMMPS as a
client performs the MD timestepping. LAMMPS as a server provides the
energy and forces between interacting particles. Every timestep the
LAMMPS client sends a message to the LAMMPS server and receives a
response message in return.

Another code could replace LAMMPS as the client, e.g. another MD code
which wants to use a LAMMPS potential. Another code could replace
LAMMPS as the server, e.g. a quantum code computing quantum forces, so
that ab initio MD could be performed. See an example of the latter in
examples/COUPLE/lammps_vasp.

See the MESSAGE package (doc/Section_messages.html#MESSAGE)
and Section_howto.html#howto_29 for more details on how
client/server coupling works in LAMMPS.

--------------

Note that you can adjust the problem size run by these scripts by
setting the "x,y,z" variables when you run LAMMPS. The default problem
size is x = y = z = 5, which is 500 particles.

lmp_mpi -v x 10 -v y 10 -v z 20   # 8000 particles

This applies to either in.message or in.message.client.

The client and server scripts define a "mode" variable
which can be set to file, zmq, mpione, or mpitwo,
as illustrated below.

--------------

To run this problem in the traditional way (no client/server coupling)
do one of these:

% lmp_serial < in.message
% mpirun -np 4 lmp_mpi < in.message

--------------

To run in client/server mode:

Both the client and server script must use the same messaging mode.
This can be selected by setting the "mode" variable when you run
LAMMPS. The default mode = file. The other options for the mode
variable are zmq, mpione, and mpitwo.

Here we assume LAMMPS was built to run in parallel, and the MESSAGE
package was installed with socket (ZMQ) support. This means any of
the 4 messaging modes can be used.

The next sections illustrate how to launch LAMMPS twice, once as a
client, once as a server, for each of the messaging modes.

In all cases, the client should print out thermodynamic info for 50
steps. The server should print out setup info, then print nothing until
the client exits, at which point the server should also exit.

The examples below show launching LAMMPS twice from the same window
(or batch script), using the "&" character to launch the first time in
the background. For all modes except {mpi/one}, you could also launch
twice in separate windows on your desktop machine. It does not matter
whether you launch the client or server first.

In these examples either the client or server can be run on one or
more processors. If running in a non-MPI mode (file or zmq) you can
launch LAMMPS on a single processor without using mpirun.

IMPORTANT: If you run in mpi/two mode, you must launch LAMMPS both
times via mpirun, even if one or both of them runs on a single
processor. This is so that MPI can figure out how to connect both MPI
processes together to exchange MPI messages between them.

--------------

File or ZMQ or mpi/two modes of messaging:

% mpirun -np 1 lmp_mpi -v mode file -log log.client < in.message.client &
% mpirun -np 2 lmp_mpi -v mode file -log log.server < in.message.server

% mpirun -np 4 lmp_mpi -v mode zmq -log log.client < in.message.client &
% mpirun -np 1 lmp_mpi -v mode zmq -log log.server < in.message.server

% mpirun -np 2 lmp_mpi -v mode mpitwo -log log.client < in.message.client &
% mpirun -np 4 lmp_mpi -v mode mpitwo -log log.server < in.message.server

--------------

Mpi/one mode of messaging:

Launch LAMMPS twice in a single mpirun command:

mpirun -np 2 lmp_mpi -mpi 2 -in in.message.client -v mode mpione -log log.client : -np 4 lmp_mpi -mpi 2 -in in.message.server -v mode mpione -log log.server

The two -np values determine how many procs the client and the server
run on.

A LAMMPS executable run in this manner must use the -mpi P
command-line option as its first option, where P is the number of
processors the first code in the mpirun command (client or server) is
running on.

@ -0,0 +1,27 @@
# 3d Lennard-Jones melt - no client/server mode

variable x index 5
variable y index 5
variable z index 5

units lj
atom_style atomic

lattice fcc 0.8442
region box block 0 $x 0 $y 0 $z
create_box 1 box
create_atoms 1 box
mass 1 1.0

velocity all create 1.44 87287 loop geom

pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0 2.5

neighbor 0.3 bin
neigh_modify delay 0 every 1 check yes

fix 1 all nve

thermo 10
run 50
@ -0,0 +1,38 @@
# 3d Lennard-Jones melt - client script

variable mode index file

if "${mode} == file" then &
  "message client md file tmp.couple" &
elif "${mode} == zmq" &
  "message client md zmq localhost:5555" &
elif "${mode} == mpione" &
  "message client md mpi/one" &
elif "${mode} == mpitwo" &
  "message client md mpi/two tmp.couple"

variable x index 5
variable y index 5
variable z index 5

units lj
atom_style atomic
atom_modify sort 0 0.0 map yes

lattice fcc 0.8442
region box block 0 $x 0 $y 0 $z
create_box 1 box
create_atoms 1 box
mass 1 1.0

velocity all create 1.44 87287 loop geom

neighbor 0.3 bin
neigh_modify delay 0 every 1 check yes

fix 1 all nve
fix 2 all client/md
fix_modify 2 energy yes

thermo 10
run 50
@ -0,0 +1,29 @@
# 3d Lennard-Jones melt - server script

variable mode index file

if "${mode} == file" then &
  "message server md file tmp.couple" &
elif "${mode} == zmq" &
  "message server md zmq *:5555" &
elif "${mode} == mpione" &
  "message server md mpi/one" &
elif "${mode} == mpitwo" &
  "message server md mpi/two tmp.couple"

units lj
atom_style atomic
atom_modify map yes

lattice fcc 0.8442
region box block 0 1 0 1 0 1
create_box 1 box
mass * 1.0 # masses not used by server

pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0 2.5

neighbor 0.3 bin
neigh_modify delay 0 every 1 check yes

server md
@ -0,0 +1,76 @@
LAMMPS (16 Jul 2018)
# 3d Lennard-Jones melt - client script

variable mode index file

if "${mode} == file" then "message client md file tmp.couple" elif "${mode} == zmq" "message client md zmq localhost:5555" elif "${mode} == mpione" "message client md mpi/one" elif "${mode} == mpitwo" "message client md mpi/two tmp.couple"
message client md file tmp.couple

variable x index 5
variable y index 5
variable z index 5

units lj
atom_style atomic
atom_modify sort 0 0.0 map yes

lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 $x 0 $y 0 $z
region box block 0 5 0 $y 0 $z
region box block 0 5 0 5 0 $z
region box block 0 5 0 5 0 5
create_box 1 box
Created orthogonal box = (0 0 0) to (8.39798 8.39798 8.39798)
1 by 1 by 1 MPI processor grid
create_atoms 1 box
Created 500 atoms
Time spent = 0.00067687 secs
mass 1 1.0

velocity all create 1.44 87287 loop geom

neighbor 0.3 bin
neigh_modify delay 0 every 1 check yes

fix 1 all nve
fix 2 all client/md
fix_modify 2 energy yes

thermo 10
run 50
Per MPI rank memory allocation (min/avg/max) = 2.303 | 2.303 | 2.303 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 0 0 -4.6176881 -5.0221006
10 1.1347688 0 0 -4.6166043 -2.6072847
20 0.628166 0 0 -4.62213 1.0186262
30 0.73767593 0 0 -4.6254647 0.49629637
40 0.69517962 0 0 -4.6253506 0.69303877
50 0.70150496 0 0 -4.6259832 0.59551518
Loop time of 5.12413 on 1 procs for 50 steps with 500 atoms

Performance: 4215.352 tau/day, 9.758 timesteps/s
0.1% CPU use with 1 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 9.7752e-06 | 9.7752e-06 | 9.7752e-06 | 0.0 | 0.00
Comm | 0.0001719 | 0.0001719 | 0.0001719 | 0.0 | 0.00
Output | 0.00022697 | 0.00022697 | 0.00022697 | 0.0 | 0.00
Modify | 5.1232 | 5.1232 | 5.1232 | 0.0 | 99.98
Other | | 0.0004876 | | | 0.01

Nlocal: 500 ave 500 max 500 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 0 ave 0 max 0 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 0 ave 0 max 0 min
Histogram: 1 0 0 0 0 0 0 0 0 0

Total # of neighbors = 0
Ave neighs/atom = 0
Neighbor list builds = 4
Dangerous builds = 0
Total wall time: 0:00:19

@ -0,0 +1,76 @@
LAMMPS (16 Jul 2018)
# 3d Lennard-Jones melt - client script

variable mode index file

if "${mode} == file" then "message client md file tmp.couple" elif "${mode} == zmq" "message client md zmq localhost:5555" elif "${mode} == mpione" "message client md mpi/one" elif "${mode} == mpitwo" "message client md mpi/two tmp.couple"
message client md file tmp.couple

variable x index 5
variable y index 5
variable z index 5

units lj
atom_style atomic
atom_modify sort 0 0.0 map yes

lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 $x 0 $y 0 $z
region box block 0 5 0 $y 0 $z
region box block 0 5 0 5 0 $z
region box block 0 5 0 5 0 5
create_box 1 box
Created orthogonal box = (0 0 0) to (8.39798 8.39798 8.39798)
1 by 1 by 2 MPI processor grid
create_atoms 1 box
Created 500 atoms
Time spent = 0.000554085 secs
mass 1 1.0

velocity all create 1.44 87287 loop geom

neighbor 0.3 bin
neigh_modify delay 0 every 1 check yes

fix 1 all nve
fix 2 all client/md
fix_modify 2 energy yes

thermo 10
run 50
Per MPI rank memory allocation (min/avg/max) = 2.302 | 2.302 | 2.302 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 0 0 -4.6176881 -5.0221006
10 1.1347688 0 0 -4.6166043 -2.6072847
20 0.628166 0 0 -4.62213 1.0186262
30 0.73767593 0 0 -4.6254647 0.49629637
40 0.69517962 0 0 -4.6253506 0.69303877
50 0.70150496 0 0 -4.6259832 0.59551518
Loop time of 5.07392 on 2 procs for 50 steps with 500 atoms

Performance: 4257.065 tau/day, 9.854 timesteps/s
50.1% CPU use with 2 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 2.1458e-06 | 4.0531e-06 | 5.9605e-06 | 0.0 | 0.00
Comm | 0.00022864 | 0.00023806 | 0.00024748 | 0.0 | 0.00
Output | 0.00020814 | 0.00051165 | 0.00081515 | 0.0 | 0.01
Modify | 5.0659 | 5.0695 | 5.073 | 0.2 | 99.91
Other | | 0.003713 | | | 0.07

Nlocal: 250 ave 255 max 245 min
Histogram: 1 0 0 0 0 0 0 0 0 1
Nghost: 0 ave 0 max 0 min
Histogram: 2 0 0 0 0 0 0 0 0 0
Neighs: 0 ave 0 max 0 min
Histogram: 2 0 0 0 0 0 0 0 0 0

Total # of neighbors = 0
Ave neighs/atom = 0
Neighbor list builds = 4
Dangerous builds = 0
Total wall time: 0:00:07

@ -0,0 +1,76 @@
LAMMPS (16 Jul 2018)
# 3d Lennard-Jones melt - client script

variable mode index file

if "${mode} == file" then "message client md file tmp.couple" elif "${mode} == zmq" "message client md zmq localhost:5555" elif "${mode} == mpione" "message client md mpi/one" elif "${mode} == mpitwo" "message client md mpi/two tmp.couple"
message client md mpi/one

variable x index 5
variable y index 5
variable z index 5

units lj
atom_style atomic
atom_modify sort 0 0.0 map yes

lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 $x 0 $y 0 $z
region box block 0 5 0 $y 0 $z
region box block 0 5 0 5 0 $z
region box block 0 5 0 5 0 5
create_box 1 box
Created orthogonal box = (0 0 0) to (8.39798 8.39798 8.39798)
1 by 1 by 1 MPI processor grid
create_atoms 1 box
Created 500 atoms
Time spent = 0.000674009 secs
mass 1 1.0

velocity all create 1.44 87287 loop geom

neighbor 0.3 bin
neigh_modify delay 0 every 1 check yes

fix 1 all nve
fix 2 all client/md
fix_modify 2 energy yes

thermo 10
run 50
Per MPI rank memory allocation (min/avg/max) = 2.303 | 2.303 | 2.303 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 0 0 -4.6176881 -5.0221006
10 1.1347688 0 0 -4.6166043 -2.6072847
20 0.628166 0 0 -4.62213 1.0186262
30 0.73767593 0 0 -4.6254647 0.49629637
40 0.69517962 0 0 -4.6253506 0.69303877
50 0.70150496 0 0 -4.6259832 0.59551518
Loop time of 0.0424271 on 1 procs for 50 steps with 500 atoms

Performance: 509109.009 tau/day, 1178.493 timesteps/s
99.9% CPU use with 1 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 8.1062e-06 | 8.1062e-06 | 8.1062e-06 | 0.0 | 0.02
Comm | 8.2016e-05 | 8.2016e-05 | 8.2016e-05 | 0.0 | 0.19
Output | 0.00010991 | 0.00010991 | 0.00010991 | 0.0 | 0.26
Modify | 0.042014 | 0.042014 | 0.042014 | 0.0 | 99.03
Other | | 0.0002129 | | | 0.50

Nlocal: 500 ave 500 max 500 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 0 ave 0 max 0 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 0 ave 0 max 0 min
Histogram: 1 0 0 0 0 0 0 0 0 0

Total # of neighbors = 0
Ave neighs/atom = 0
Neighbor list builds = 4
Dangerous builds = 0
Total wall time: 0:00:00

@ -0,0 +1,76 @@
LAMMPS (16 Jul 2018)
# 3d Lennard-Jones melt - client script

variable mode index file

if "${mode} == file" then "message client md file tmp.couple" elif "${mode} == zmq" "message client md zmq localhost:5555" elif "${mode} == mpione" "message client md mpi/one" elif "${mode} == mpitwo" "message client md mpi/two tmp.couple"
message client md mpi/one

variable x index 5
variable y index 5
variable z index 5

units lj
atom_style atomic
atom_modify sort 0 0.0 map yes

lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 $x 0 $y 0 $z
region box block 0 5 0 $y 0 $z
region box block 0 5 0 5 0 $z
region box block 0 5 0 5 0 5
create_box 1 box
Created orthogonal box = (0 0 0) to (8.39798 8.39798 8.39798)
1 by 1 by 2 MPI processor grid
create_atoms 1 box
Created 500 atoms
Time spent = 0.000527859 secs
mass 1 1.0

velocity all create 1.44 87287 loop geom

neighbor 0.3 bin
neigh_modify delay 0 every 1 check yes

fix 1 all nve
fix 2 all client/md
fix_modify 2 energy yes

thermo 10
run 50
Per MPI rank memory allocation (min/avg/max) = 2.302 | 2.302 | 2.302 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 0 0 -4.6176881 -5.0221006
10 1.1347688 0 0 -4.6166043 -2.6072847
20 0.628166 0 0 -4.62213 1.0186262
30 0.73767593 0 0 -4.6254647 0.49629637
40 0.69517962 0 0 -4.6253506 0.69303877
50 0.70150496 0 0 -4.6259832 0.59551518
Loop time of 0.027467 on 2 procs for 50 steps with 500 atoms

Performance: 786397.868 tau/day, 1820.365 timesteps/s
99.9% CPU use with 2 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 4.0531e-06 | 4.1723e-06 | 4.2915e-06 | 0.0 | 0.02
Comm | 0.00017691 | 0.00018024 | 0.00018358 | 0.0 | 0.66
Output | 9.3222e-05 | 0.00012612 | 0.00015903 | 0.0 | 0.46
Modify | 0.026678 | 0.02676 | 0.026841 | 0.0 | 97.42
Other | | 0.0003968 | | | 1.44

Nlocal: 250 ave 255 max 245 min
Histogram: 1 0 0 0 0 0 0 0 0 1
Nghost: 0 ave 0 max 0 min
Histogram: 2 0 0 0 0 0 0 0 0 0
Neighs: 0 ave 0 max 0 min
Histogram: 2 0 0 0 0 0 0 0 0 0

Total # of neighbors = 0
Ave neighs/atom = 0
Neighbor list builds = 4
Dangerous builds = 0
Total wall time: 0:00:00

@ -0,0 +1,76 @@
LAMMPS (16 Jul 2018)
# 3d Lennard-Jones melt - client script

variable mode index file

if "${mode} == file" then "message client md file tmp.couple" elif "${mode} == zmq" "message client md zmq localhost:5555" elif "${mode} == mpione" "message client md mpi/one" elif "${mode} == mpitwo" "message client md mpi/two tmp.couple"
message client md mpi/two tmp.couple

variable x index 5
variable y index 5
variable z index 5

units lj
atom_style atomic
atom_modify sort 0 0.0 map yes

lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 $x 0 $y 0 $z
region box block 0 5 0 $y 0 $z
region box block 0 5 0 5 0 $z
region box block 0 5 0 5 0 5
create_box 1 box
Created orthogonal box = (0 0 0) to (8.39798 8.39798 8.39798)
1 by 1 by 1 MPI processor grid
create_atoms 1 box
Created 500 atoms
Time spent = 0.000490904 secs
mass 1 1.0

velocity all create 1.44 87287 loop geom

neighbor 0.3 bin
neigh_modify delay 0 every 1 check yes

fix 1 all nve
fix 2 all client/md
fix_modify 2 energy yes

thermo 10
run 50
Per MPI rank memory allocation (min/avg/max) = 2.303 | 2.303 | 2.303 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 0 0 -4.6176881 -5.0221006
10 1.1347688 0 0 -4.6166043 -2.6072847
20 0.628166 0 0 -4.62213 1.0186262
30 0.73767593 0 0 -4.6254647 0.49629637
40 0.69517962 0 0 -4.6253506 0.69303877
50 0.70150496 0 0 -4.6259832 0.59551518
Loop time of 0.0624809 on 1 procs for 50 steps with 500 atoms

Performance: 345705.501 tau/day, 800.244 timesteps/s
40.4% CPU use with 1 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 7.391e-06 | 7.391e-06 | 7.391e-06 | 0.0 | 0.01
Comm | 8.5831e-05 | 8.5831e-05 | 8.5831e-05 | 0.0 | 0.14
Output | 0.00011873 | 0.00011873 | 0.00011873 | 0.0 | 0.19
Modify | 0.062024 | 0.062024 | 0.062024 | 0.0 | 99.27
Other | | 0.0002449 | | | 0.39

Nlocal: 500 ave 500 max 500 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 0 ave 0 max 0 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 0 ave 0 max 0 min
Histogram: 1 0 0 0 0 0 0 0 0 0

Total # of neighbors = 0
Ave neighs/atom = 0
Neighbor list builds = 4
Dangerous builds = 0
Total wall time: 0:00:07

@ -0,0 +1,76 @@
LAMMPS (16 Jul 2018)
# 3d Lennard-Jones melt - client script

variable mode index file

if "${mode} == file" then "message client md file tmp.couple" elif "${mode} == zmq" "message client md zmq localhost:5555" elif "${mode} == mpione" "message client md mpi/one" elif "${mode} == mpitwo" "message client md mpi/two tmp.couple"
message client md mpi/two tmp.couple

variable x index 5
variable y index 5
variable z index 5

units lj
atom_style atomic
atom_modify sort 0 0.0 map yes

lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 $x 0 $y 0 $z
region box block 0 5 0 $y 0 $z
region box block 0 5 0 5 0 $z
region box block 0 5 0 5 0 5
create_box 1 box
Created orthogonal box = (0 0 0) to (8.39798 8.39798 8.39798)
1 by 1 by 2 MPI processor grid
create_atoms 1 box
Created 500 atoms
Time spent = 0.000692129 secs
mass 1 1.0

velocity all create 1.44 87287 loop geom

neighbor 0.3 bin
neigh_modify delay 0 every 1 check yes

fix 1 all nve
fix 2 all client/md
fix_modify 2 energy yes

thermo 10
run 50
Per MPI rank memory allocation (min/avg/max) = 2.302 | 2.302 | 2.302 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 0 0 -4.6176881 -5.0221006
10 1.1347688 0 0 -4.6166043 -2.6072847
20 0.628166 0 0 -4.62213 1.0186262
30 0.73767593 0 0 -4.6254647 0.49629637
40 0.69517962 0 0 -4.6253506 0.69303877
50 0.70150496 0 0 -4.6259832 0.59551518
Loop time of 0.0186305 on 2 procs for 50 steps with 500 atoms

Performance: 1159388.887 tau/day, 2683.771 timesteps/s
50.7% CPU use with 2 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 2.861e-06 | 3.8147e-06 | 4.7684e-06 | 0.0 | 0.02
Comm | 0.00017023 | 0.00017631 | 0.00018239 | 0.0 | 0.95
Output | 0.00010896 | 0.00013852 | 0.00016809 | 0.0 | 0.74
Modify | 0.017709 | 0.017821 | 0.017933 | 0.1 | 95.66
Other | | 0.0004908 | | | 2.63

Nlocal: 250 ave 255 max 245 min
Histogram: 1 0 0 0 0 0 0 0 0 1
Nghost: 0 ave 0 max 0 min
Histogram: 2 0 0 0 0 0 0 0 0 0
Neighs: 0 ave 0 max 0 min
Histogram: 2 0 0 0 0 0 0 0 0 0

Total # of neighbors = 0
Ave neighs/atom = 0
Neighbor list builds = 4
Dangerous builds = 0
Total wall time: 0:00:05

@ -0,0 +1,76 @@
LAMMPS (16 Jul 2018)
# 3d Lennard-Jones melt - client script

variable mode index file

if "${mode} == file" then "message client md file tmp.couple" elif "${mode} == zmq" "message client md zmq localhost:5555" elif "${mode} == mpione" "message client md mpi/one" elif "${mode} == mpitwo" "message client md mpi/two tmp.couple"
message client md zmq localhost:5555

variable x index 5
variable y index 5
variable z index 5

units lj
atom_style atomic
atom_modify sort 0 0.0 map yes

lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 $x 0 $y 0 $z
region box block 0 5 0 $y 0 $z
region box block 0 5 0 5 0 $z
region box block 0 5 0 5 0 5
create_box 1 box
Created orthogonal box = (0 0 0) to (8.39798 8.39798 8.39798)
1 by 1 by 1 MPI processor grid
create_atoms 1 box
Created 500 atoms
Time spent = 0.000747919 secs
mass 1 1.0

velocity all create 1.44 87287 loop geom

neighbor 0.3 bin
neigh_modify delay 0 every 1 check yes

fix 1 all nve
fix 2 all client/md
fix_modify 2 energy yes

thermo 10
run 50
Per MPI rank memory allocation (min/avg/max) = 2.303 | 2.303 | 2.303 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 0 0 -4.6176881 -5.0221006
10 1.1347688 0 0 -4.6166043 -2.6072847
20 0.628166 0 0 -4.62213 1.0186262
30 0.73767593 0 0 -4.6254647 0.49629637
40 0.69517962 0 0 -4.6253506 0.69303877
50 0.70150496 0 0 -4.6259832 0.59551518
Loop time of 0.0769799 on 1 procs for 50 steps with 500 atoms

Performance: 280592.815 tau/day, 649.520 timesteps/s
12.9% CPU use with 1 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 6.1989e-06 | 6.1989e-06 | 6.1989e-06 | 0.0 | 0.01
Comm | 9.5129e-05 | 9.5129e-05 | 9.5129e-05 | 0.0 | 0.12
Output | 0.00011516 | 0.00011516 | 0.00011516 | 0.0 | 0.15
Modify | 0.076471 | 0.076471 | 0.076471 | 0.0 | 99.34
Other | | 0.0002928 | | | 0.38

Nlocal: 500 ave 500 max 500 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 0 ave 0 max 0 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 0 ave 0 max 0 min
Histogram: 1 0 0 0 0 0 0 0 0 0

Total # of neighbors = 0
Ave neighs/atom = 0
Neighbor list builds = 4
Dangerous builds = 0
Total wall time: 0:00:08

@ -0,0 +1,76 @@
LAMMPS (16 Jul 2018)
# 3d Lennard-Jones melt - client script

variable mode index file

if "${mode} == file" then "message client md file tmp.couple" elif "${mode} == zmq" "message client md zmq localhost:5555" elif "${mode} == mpione" "message client md mpi/one" elif "${mode} == mpitwo" "message client md mpi/two tmp.couple"
message client md zmq localhost:5555

variable x index 5
variable y index 5
variable z index 5

units lj
atom_style atomic
atom_modify sort 0 0.0 map yes

lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 $x 0 $y 0 $z
region box block 0 5 0 $y 0 $z
region box block 0 5 0 5 0 $z
region box block 0 5 0 5 0 5
create_box 1 box
Created orthogonal box = (0 0 0) to (8.39798 8.39798 8.39798)
1 by 1 by 2 MPI processor grid
create_atoms 1 box
Created 500 atoms
Time spent = 0.000608921 secs
mass 1 1.0

velocity all create 1.44 87287 loop geom

neighbor 0.3 bin
neigh_modify delay 0 every 1 check yes

fix 1 all nve
fix 2 all client/md
fix_modify 2 energy yes

thermo 10
run 50
Per MPI rank memory allocation (min/avg/max) = 2.302 | 2.302 | 2.302 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 0 0 -4.6176881 -5.0221006
10 1.1347688 0 0 -4.6166043 -2.6072847
20 0.628166 0 0 -4.62213 1.0186262
30 0.73767593 0 0 -4.6254647 0.49629637
40 0.69517962 0 0 -4.6253506 0.69303877
50 0.70150496 0 0 -4.6259832 0.59551518
Loop time of 0.0453095 on 2 procs for 50 steps with 500 atoms

Performance: 476720.759 tau/day, 1103.520 timesteps/s
55.6% CPU use with 2 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 2.1458e-06 | 4.0531e-06 | 5.9605e-06 | 0.0 | 0.01
Comm | 0.0001595 | 0.00015998 | 0.00016046 | 0.0 | 0.35
Output | 8.893e-05 | 0.00011587 | 0.00014281 | 0.0 | 0.26
Modify | 0.044439 | 0.044582 | 0.044724 | 0.1 | 98.39
Other | | 0.0004481 | | | 0.99

Nlocal: 250 ave 255 max 245 min
Histogram: 1 0 0 0 0 0 0 0 0 1
Nghost: 0 ave 0 max 0 min
Histogram: 2 0 0 0 0 0 0 0 0 0
Neighs: 0 ave 0 max 0 min
Histogram: 2 0 0 0 0 0 0 0 0 0

Total # of neighbors = 0
Ave neighs/atom = 0
Neighbor list builds = 4
Dangerous builds = 0
Total wall time: 0:00:04

@ -0,0 +1,83 @@
LAMMPS (16 Jul 2018)
# 3d Lennard-Jones melt - no client/server mode

variable x index 5
variable y index 5
variable z index 5

units lj
atom_style atomic

lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 $x 0 $y 0 $z
region box block 0 5 0 $y 0 $z
region box block 0 5 0 5 0 $z
region box block 0 5 0 5 0 5
create_box 1 box
Created orthogonal box = (0 0 0) to (8.39798 8.39798 8.39798)
1 by 1 by 1 MPI processor grid
create_atoms 1 box
Created 500 atoms
Time spent = 0.000540972 secs
mass 1 1.0

velocity all create 1.44 87287 loop geom

pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0 2.5

neighbor 0.3 bin
neigh_modify delay 0 every 1 check yes

fix 1 all nve

thermo 10
run 50
Neighbor list info ...
update every 1 steps, delay 0 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 2.8
ghost atom cutoff = 2.8
binsize = 1.4, bins = 6 6 6
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair lj/cut, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Per MPI rank memory allocation (min/avg/max) = 3.143 | 3.143 | 3.143 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 -6.7733681 0 -4.6176881 -5.0221006
10 1.1347688 -6.3153532 0 -4.6166043 -2.6072847
20 0.628166 -5.5624945 0 -4.62213 1.0186262
30 0.73767593 -5.7297655 0 -4.6254647 0.49629637
40 0.69517962 -5.6660345 0 -4.6253506 0.69303877
50 0.70150496 -5.6761362 0 -4.6259832 0.59551518
Loop time of 0.037292 on 1 procs for 50 steps with 500 atoms

Performance: 579212.643 tau/day, 1340.770 timesteps/s
99.9% CPU use with 1 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0.028156 | 0.028156 | 0.028156 | 0.0 | 75.50
Neigh | 0.0069656 | 0.0069656 | 0.0069656 | 0.0 | 18.68
Comm | 0.0011504 | 0.0011504 | 0.0011504 | 0.0 | 3.08
Output | 0.00013399 | 0.00013399 | 0.00013399 | 0.0 | 0.36
Modify | 0.00049257 | 0.00049257 | 0.00049257 | 0.0 | 1.32
Other | | 0.0003934 | | | 1.05

Nlocal: 500 ave 500 max 500 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 1946 ave 1946 max 1946 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 18820 ave 18820 max 18820 min
Histogram: 1 0 0 0 0 0 0 0 0 0

Total # of neighbors = 18820
Ave neighs/atom = 37.64
Neighbor list builds = 4
Dangerous builds = 0
Total wall time: 0:00:00

@ -0,0 +1,83 @@
LAMMPS (16 Jul 2018)
# 3d Lennard-Jones melt - no client/server mode

variable x index 5
variable y index 5
variable z index 5

units lj
atom_style atomic

lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 $x 0 $y 0 $z
region box block 0 5 0 $y 0 $z
region box block 0 5 0 5 0 $z
region box block 0 5 0 5 0 5
create_box 1 box
Created orthogonal box = (0 0 0) to (8.39798 8.39798 8.39798)
1 by 2 by 2 MPI processor grid
create_atoms 1 box
Created 500 atoms
Time spent = 0.000635862 secs
mass 1 1.0

velocity all create 1.44 87287 loop geom

pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0 2.5

neighbor 0.3 bin
neigh_modify delay 0 every 1 check yes

fix 1 all nve

thermo 10
run 50
Neighbor list info ...
update every 1 steps, delay 0 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 2.8
ghost atom cutoff = 2.8
binsize = 1.4, bins = 6 6 6
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair lj/cut, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Per MPI rank memory allocation (min/avg/max) = 3.109 | 3.109 | 3.109 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 -6.7733681 0 -4.6176881 -5.0221006
10 1.1347688 -6.3153532 0 -4.6166043 -2.6072847
20 0.628166 -5.5624945 0 -4.62213 1.0186262
30 0.73767593 -5.7297655 0 -4.6254647 0.49629637
40 0.69517962 -5.6660345 0 -4.6253506 0.69303877
50 0.70150496 -5.6761362 0 -4.6259832 0.59551518
Loop time of 0.0152688 on 4 procs for 50 steps with 500 atoms

Performance: 1414649.236 tau/day, 3274.651 timesteps/s
99.9% CPU use with 4 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0.006639 | 0.007916 | 0.0083909 | 0.8 | 51.84
Neigh | 0.0015991 | 0.0018443 | 0.0019469 | 0.3 | 12.08
Comm | 0.0041771 | 0.0047471 | 0.0063298 | 1.3 | 31.09
Output | 9.6798e-05 | 0.00012475 | 0.00019407 | 0.0 | 0.82
Modify | 0.00015974 | 0.0001967 | 0.00023103 | 0.0 | 1.29
Other | | 0.0004399 | | | 2.88

Nlocal: 125 ave 128 max 121 min
Histogram: 1 0 0 0 1 0 0 0 1 1
Nghost: 1091 ave 1094 max 1089 min
Histogram: 1 0 1 0 1 0 0 0 0 1
Neighs: 4705 ave 4849 max 4648 min
Histogram: 2 1 0 0 0 0 0 0 0 1

Total # of neighbors = 18820
Ave neighs/atom = 37.64
Neighbor list builds = 4
Dangerous builds = 0
Total wall time: 0:00:00

@ -0,0 +1,44 @@
LAMMPS (16 Jul 2018)
# 3d Lennard-Jones melt - server script

variable mode index file

if "${mode} == file" then "message server md file tmp.couple" elif "${mode} == zmq" "message server md zmq *:5555" elif "${mode} == mpione" "message server md mpi/one" elif "${mode} == mpitwo" "message server md mpi/two tmp.couple"
message server md file tmp.couple

units lj
atom_style atomic
atom_modify map yes

lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 1 0 1 0 1
create_box 1 box
Created orthogonal box = (0 0 0) to (1.6796 1.6796 1.6796)
1 by 1 by 1 MPI processor grid
mass * 1.0 # masses not used by server

pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0 2.5

neighbor 0.3 bin
neigh_modify delay 0 every 1 check yes

server md
1 by 1 by 1 MPI processor grid
WARNING: No fixes defined, atoms won't move (../verlet.cpp:55)
Neighbor list info ...
update every 1 steps, delay 0 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 2.8
ghost atom cutoff = 2.8
binsize = 1.4, bins = 6 6 6
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair lj/cut, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Server MD calls = 51
Server MD reneighborings 5
Total wall time: 0:00:05

@ -0,0 +1,44 @@
LAMMPS (16 Jul 2018)
# 3d Lennard-Jones melt - server script

variable mode index file

if "${mode} == file" then "message server md file tmp.couple" elif "${mode} == zmq" "message server md zmq *:5555" elif "${mode} == mpione" "message server md mpi/one" elif "${mode} == mpitwo" "message server md mpi/two tmp.couple"
message server md file tmp.couple

units lj
atom_style atomic
atom_modify map yes

lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 1 0 1 0 1
create_box 1 box
Created orthogonal box = (0 0 0) to (1.6796 1.6796 1.6796)
1 by 2 by 2 MPI processor grid
mass * 1.0 # masses not used by server

pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0 2.5

neighbor 0.3 bin
neigh_modify delay 0 every 1 check yes

server md
1 by 2 by 2 MPI processor grid
WARNING: No fixes defined, atoms won't move (../verlet.cpp:55)
Neighbor list info ...
update every 1 steps, delay 0 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 2.8
ghost atom cutoff = 2.8
binsize = 1.4, bins = 6 6 6
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair lj/cut, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Server MD calls = 51
Server MD reneighborings 5
Total wall time: 0:00:05

@ -0,0 +1,44 @@
LAMMPS (16 Jul 2018)
# 3d Lennard-Jones melt - server script

variable mode index file

if "${mode} == file" then "message server md file tmp.couple" elif "${mode} == zmq" "message server md zmq *:5555" elif "${mode} == mpione" "message server md mpi/one" elif "${mode} == mpitwo" "message server md mpi/two tmp.couple"
message server md mpi/one

units lj
atom_style atomic
atom_modify map yes

lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 1 0 1 0 1
create_box 1 box
Created orthogonal box = (0 0 0) to (1.6796 1.6796 1.6796)
1 by 1 by 1 MPI processor grid
mass * 1.0 # masses not used by server

pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0 2.5

neighbor 0.3 bin
neigh_modify delay 0 every 1 check yes

server md
1 by 1 by 1 MPI processor grid
WARNING: No fixes defined, atoms won't move (../verlet.cpp:55)
Neighbor list info ...
update every 1 steps, delay 0 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 2.8
ghost atom cutoff = 2.8
binsize = 1.4, bins = 6 6 6
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair lj/cut, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Server MD calls = 51
Server MD reneighborings 5
Total wall time: 0:00:00

@ -0,0 +1,44 @@
LAMMPS (16 Jul 2018)
# 3d Lennard-Jones melt - server script

variable mode index file

if "${mode} == file" then "message server md file tmp.couple" elif "${mode} == zmq" "message server md zmq *:5555" elif "${mode} == mpione" "message server md mpi/one" elif "${mode} == mpitwo" "message server md mpi/two tmp.couple"
message server md mpi/one

units lj
atom_style atomic
atom_modify map yes

lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 1 0 1 0 1
create_box 1 box
Created orthogonal box = (0 0 0) to (1.6796 1.6796 1.6796)
1 by 2 by 2 MPI processor grid
mass * 1.0 # masses not used by server

pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0 2.5

neighbor 0.3 bin
neigh_modify delay 0 every 1 check yes

server md
1 by 2 by 2 MPI processor grid
WARNING: No fixes defined, atoms won't move (../verlet.cpp:55)
Neighbor list info ...
update every 1 steps, delay 0 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 2.8
ghost atom cutoff = 2.8
binsize = 1.4, bins = 6 6 6
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair lj/cut, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Server MD calls = 51
Server MD reneighborings 5
Total wall time: 0:00:00

@ -0,0 +1,44 @@
LAMMPS (16 Jul 2018)
# 3d Lennard-Jones melt - server script

variable mode index file

if "${mode} == file" then "message server md file tmp.couple" elif "${mode} == zmq" "message server md zmq *:5555" elif "${mode} == mpione" "message server md mpi/one" elif "${mode} == mpitwo" "message server md mpi/two tmp.couple"
message server md mpi/two tmp.couple

units lj
atom_style atomic
atom_modify map yes

lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 1 0 1 0 1
create_box 1 box
Created orthogonal box = (0 0 0) to (1.6796 1.6796 1.6796)
1 by 1 by 1 MPI processor grid
mass * 1.0 # masses not used by server

pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0 2.5

neighbor 0.3 bin
neigh_modify delay 0 every 1 check yes

server md
1 by 1 by 1 MPI processor grid
WARNING: No fixes defined, atoms won't move (../verlet.cpp:55)
Neighbor list info ...
update every 1 steps, delay 0 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 2.8
ghost atom cutoff = 2.8
binsize = 1.4, bins = 6 6 6
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair lj/cut, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Server MD calls = 51
Server MD reneighborings 5
Total wall time: 0:00:00

@ -0,0 +1,44 @@
LAMMPS (16 Jul 2018)
# 3d Lennard-Jones melt - server script

variable mode index file

if "${mode} == file" then "message server md file tmp.couple" elif "${mode} == zmq" "message server md zmq *:5555" elif "${mode} == mpione" "message server md mpi/one" elif "${mode} == mpitwo" "message server md mpi/two tmp.couple"
message server md mpi/two tmp.couple

units lj
atom_style atomic
atom_modify map yes

lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 1 0 1 0 1
create_box 1 box
Created orthogonal box = (0 0 0) to (1.6796 1.6796 1.6796)
1 by 2 by 2 MPI processor grid
mass * 1.0 # masses not used by server

pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0 2.5

neighbor 0.3 bin
neigh_modify delay 0 every 1 check yes

server md
1 by 2 by 2 MPI processor grid
WARNING: No fixes defined, atoms won't move (../verlet.cpp:55)
Neighbor list info ...
update every 1 steps, delay 0 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 2.8
ghost atom cutoff = 2.8
binsize = 1.4, bins = 6 6 6
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair lj/cut, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Server MD calls = 51
Server MD reneighborings 5
Total wall time: 0:00:00

@ -0,0 +1,44 @@
LAMMPS (16 Jul 2018)
# 3d Lennard-Jones melt - server script

variable mode index file

if "${mode} == file" then "message server md file tmp.couple" elif "${mode} == zmq" "message server md zmq *:5555" elif "${mode} == mpione" "message server md mpi/one" elif "${mode} == mpitwo" "message server md mpi/two tmp.couple"
message server md zmq *:5555

units lj
atom_style atomic
atom_modify map yes

lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 1 0 1 0 1
create_box 1 box
Created orthogonal box = (0 0 0) to (1.6796 1.6796 1.6796)
1 by 1 by 1 MPI processor grid
mass * 1.0 # masses not used by server

pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0 2.5

neighbor 0.3 bin
neigh_modify delay 0 every 1 check yes

server md
1 by 1 by 1 MPI processor grid
WARNING: No fixes defined, atoms won't move (../verlet.cpp:55)
Neighbor list info ...
update every 1 steps, delay 0 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 2.8
ghost atom cutoff = 2.8
binsize = 1.4, bins = 6 6 6
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair lj/cut, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Server MD calls = 51
Server MD reneighborings 5
Total wall time: 0:00:00

@ -0,0 +1,44 @@
LAMMPS (16 Jul 2018)
# 3d Lennard-Jones melt - server script

variable mode index file

if "${mode} == file" then "message server md file tmp.couple" elif "${mode} == zmq" "message server md zmq *:5555" elif "${mode} == mpione" "message server md mpi/one" elif "${mode} == mpitwo" "message server md mpi/two tmp.couple"
message server md zmq *:5555

units lj
atom_style atomic
atom_modify map yes

lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 1 0 1 0 1
create_box 1 box
Created orthogonal box = (0 0 0) to (1.6796 1.6796 1.6796)
1 by 2 by 2 MPI processor grid
mass * 1.0 # masses not used by server

pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0 2.5

neighbor 0.3 bin
neigh_modify delay 0 every 1 check yes

server md
1 by 2 by 2 MPI processor grid
WARNING: No fixes defined, atoms won't move (../verlet.cpp:55)
Neighbor list info ...
update every 1 steps, delay 0 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 2.8
ghost atom cutoff = 2.8
binsize = 1.4, bins = 6 6 6
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair lj/cut, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Server MD calls = 51
Server MD reneighborings 5
Total wall time: 0:00:00

@ -35,6 +35,8 @@ linalg set of BLAS and LAPACK routines needed by USER-ATC package
          from Axel Kohlmeyer (Temple U)
meam      modified embedded atom method (MEAM) potential, MEAM package
          from Greg Wagner (Sandia)
message   client/server communication library via MPI, sockets, files
          from Steve Plimpton (Sandia)
molfile   hooks to VMD molfile plugins, used by the USER-MOLFILE package
          from Axel Kohlmeyer (Temple U) and the VMD development team
mscg      hooks to the MSCG library, used by fix_mscg command
@ -0,0 +1,118 @@
#!/usr/bin/env python

# Install.py tool to build the CSlib library
# used to automate the steps described in the README file in this dir

from __future__ import print_function
import sys,os,re,subprocess

# help message

help = """
Syntax from src dir: make lib-message args="-m"
                 or: make lib-message args="-s -z"
Syntax from lib dir: python Install.py -m
                 or: python Install.py -s -z

specify zero or more options, order does not matter

-m = parallel build of CSlib library
-s = serial build of CSlib library
-z = build CSlib library with ZMQ socket support, default = no ZMQ support

Example:

make lib-message args="-m -z"   # build parallel CSlib with ZMQ support
make lib-message args="-s"      # build serial CSlib with no ZMQ support
"""

# print error message or help

def error(str=None):
  if not str: print(help)
  else: print("ERROR",str)
  sys.exit()

# expand to full path name
# process leading '~' or relative path

def fullpath(path):
  return os.path.abspath(os.path.expanduser(path))

def which(program):
  def is_exe(fpath):
    return os.path.isfile(fpath) and os.access(fpath, os.X_OK)

  fpath, fname = os.path.split(program)
  if fpath:
    if is_exe(program):
      return program
  else:
    for path in os.environ["PATH"].split(os.pathsep):
      path = path.strip('"')
      exe_file = os.path.join(path, program)
      if is_exe(exe_file):
        return exe_file

  return None

# parse args

args = sys.argv[1:]
nargs = len(args)
if nargs == 0: error()

mpiflag = False
serialflag = False
zmqflag = False

iarg = 0
while iarg < nargs:
  if args[iarg] == "-m":
    mpiflag = True
    iarg += 1
  elif args[iarg] == "-s":
    serialflag = True
    iarg += 1
  elif args[iarg] == "-z":
    zmqflag = True
    iarg += 1
  else: error()

if (not mpiflag and not serialflag):
  error("Must use either -m or -s flag")

if (mpiflag and serialflag):
  error("Cannot use -m and -s flag at the same time")

# build CSlib
# copy resulting lib to cslib/src/libmessage.a
# copy appropriate Makefile.lammps.* to Makefile.lammps

print("Building CSlib ...")
srcdir = fullpath("./cslib/src")

if mpiflag and zmqflag:
  cmd = "cd %s; make lib_parallel" % srcdir
elif mpiflag and not zmqflag:
  cmd = "cd %s; make lib_parallel zmq=no" % srcdir
elif not mpiflag and zmqflag:
  cmd = "cd %s; make lib_serial" % srcdir
elif not mpiflag and not zmqflag:
  cmd = "cd %s; make lib_serial zmq=no" % srcdir

print(cmd)
txt = subprocess.check_output(cmd,stderr=subprocess.STDOUT,shell=True)
print(txt.decode('UTF-8'))

if mpiflag: cmd = "cd %s; cp libcsmpi.a libmessage.a" % srcdir
else: cmd = "cd %s; cp libcsnompi.a libmessage.a" % srcdir
print(cmd)
txt = subprocess.check_output(cmd,stderr=subprocess.STDOUT,shell=True)
print(txt.decode('UTF-8'))

if zmqflag: cmd = "cp Makefile.lammps.zmq Makefile.lammps"
else: cmd = "cp Makefile.lammps.nozmq Makefile.lammps"
print(cmd)
txt = subprocess.check_output(cmd,stderr=subprocess.STDOUT,shell=True)
print(txt.decode('UTF-8'))
@ -0,0 +1,5 @@
# Settings that the LAMMPS build will import when this package library is used

message_SYSINC =
message_SYSLIB =
message_SYSPATH =

@ -0,0 +1,5 @@
# Settings that the LAMMPS build will import when this package library is used

message_SYSINC =
message_SYSLIB = -lzmq
message_SYSPATH =
@ -0,0 +1,51 @@
This directory contains the CSlib library which is required
to use the MESSAGE package and its client/server commands
in a LAMMPS input script.

The CSlib library is included in the LAMMPS distribution. A fuller
version including documentation and test programs is available at
http://cslib.sandia.gov (by Aug 2018) and was developed by Steve
Plimpton at Sandia National Laboratories.

You can type "make lib-message" from the src directory to see help on
how to build this library via make commands, or you can do the same
thing by typing "python Install.py" from within this directory, or you
can do it manually by following the instructions below.

The CSlib can optionally be built with support for sockets using
the open-source ZeroMQ (ZMQ) library. If it is not installed
on your system, it is easy to download and install.

Go to this website: http://zeromq.org

-----------------

Instructions:

1. Compile CSlib from within cslib/src with one of the following:
   % make lib_parallel          # build parallel library with ZMQ socket support
   % make lib_serial            # build serial library with ZMQ support
   % make lib_parallel zmq=no   # build parallel lib with no ZMQ support
   % make lib_serial zmq=no     # build serial lib with no ZMQ support

2. Copy the produced cslib/src/libcsmpi.a or libcsnompi.a file to
   cslib/src/libmessage.a

3. Copy either lib/message/Makefile.lammps.zmq or Makefile.lammps.nozmq
   to lib/message/Makefile.lammps, depending on whether you
   build the library with ZMQ support or not.
   If your ZMQ library is not in a place your shell path finds,
   you can set the INCLUDE and PATH variables in Makefile.lammps
   to point to the dirs where the ZMQ include and library files are.
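
Taken together, a manual parallel build with ZMQ support (one path
through steps 1-3 above) looks like this, starting from lib/message;
substitute the other make targets and Makefile.lammps.nozmq for the
other cases:

% cd cslib/src
% make lib_parallel
% cp libcsmpi.a libmessage.a
% cd ../..
% cp Makefile.lammps.zmq Makefile.lammps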

-----------------

When these steps are complete you can build LAMMPS
with the MESSAGE package installed:

% cd lammps/src
% make yes-message
% make mpi (or whatever target you wish)

Note that if you download and unpack a new LAMMPS tarball, you will
need to re-build the CSlib in this dir.
|
|
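As a concrete pass through steps 1-3 above, building with both MPI and
ZMQ support (assuming mpicxx and libzmq are available) looks like:

% cd lib/message/cslib/src
% make lib_parallel              # produces libcsmpi.a
% cp libcsmpi.a libmessage.a
% cd ../..
% cp Makefile.lammps.zmq Makefile.lammps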
@@ -0,0 +1,501 @@
GNU LESSER GENERAL PUBLIC LICENSE

Version 2.1, February 1999

Copyright (C) 1991, 1999 Free Software Foundation, Inc.
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.

Preamble

The licenses for most software are designed to take away your freedom
to share and change it. By contrast, the GNU General Public Licenses
are intended to guarantee your freedom to share and change free
software--to make sure the software is free for all its users.

This license, the Lesser General Public License, applies to some
specially designated software packages--typically libraries--of the
Free Software Foundation and other authors who decide to use it. You
can use it too, but we suggest you first think carefully about whether
this license or the ordinary General Public License is the better
strategy to use in any particular case, based on the explanations
below.

When we speak of free software, we are referring to freedom of use,
not price. Our General Public Licenses are designed to make sure that
you have the freedom to distribute copies of free software (and charge
for this service if you wish); that you receive source code or can get
it if you want it; that you can change the software and use pieces of
it in new free programs; and that you are informed that you can do
these things.

To protect your rights, we need to make restrictions that forbid
distributors to deny you these rights or to ask you to surrender these
rights. These restrictions translate to certain responsibilities for
you if you distribute copies of the library or if you modify it.

For example, if you distribute copies of the library, whether gratis
or for a fee, you must give the recipients all the rights that we gave
you. You must make sure that they, too, receive or can get the source
code. If you link other code with the library, you must provide
complete object files to the recipients, so that they can relink them
with the library after making changes to the library and recompiling
it. And you must show them these terms so they know their rights.

We protect your rights with a two-step method: (1) we copyright the
library, and (2) we offer you this license, which gives you legal
permission to copy, distribute and/or modify the library.

To protect each distributor, we want to make it very clear that there
is no warranty for the free library. Also, if the library is modified
by someone else and passed on, the recipients should know that what
they have is not the original version, so that the original author's
reputation will not be affected by problems that might be introduced
by others.

Finally, software patents pose a constant threat to the existence of
any free program. We wish to make sure that a company cannot
effectively restrict the users of a free program by obtaining a
restrictive license from a patent holder. Therefore, we insist that
any patent license obtained for a version of the library must be
consistent with the full freedom of use specified in this license.

Most GNU software, including some libraries, is covered by the
ordinary GNU General Public License. This license, the GNU Lesser
General Public License, applies to certain designated libraries, and
is quite different from the ordinary General Public License. We use
this license for certain libraries in order to permit linking those
libraries into non-free programs.

When a program is linked with a library, whether statically or using a
shared library, the combination of the two is legally speaking a
combined work, a derivative of the original library. The ordinary
General Public License therefore permits such linking only if the
entire combination fits its criteria of freedom. The Lesser General
Public License permits more lax criteria for linking other code with
the library.

We call this license the "Lesser" General Public License because it
does Less to protect the user's freedom than the ordinary General
Public License. It also provides other free software developers Less
of an advantage over competing non-free programs. These disadvantages
are the reason we use the ordinary General Public License for many
libraries. However, the Lesser license provides advantages in certain
special circumstances.

For example, on rare occasions, there may be a special need to
encourage the widest possible use of a certain library, so that it
becomes a de-facto standard. To achieve this, non-free programs must
be allowed to use the library. A more frequent case is that a free
library does the same job as widely used non-free libraries. In this
case, there is little to gain by limiting the free library to free
software only, so we use the Lesser General Public License.

In other cases, permission to use a particular library in non-free
programs enables a greater number of people to use a large body of
free software. For example, permission to use the GNU C Library in
non-free programs enables many more people to use the whole GNU
operating system, as well as its variant, the GNU/Linux operating
system.

Although the Lesser General Public License is Less protective of the
users' freedom, it does ensure that the user of a program that is
linked with the Library has the freedom and the wherewithal to run
that program using a modified version of the Library.

The precise terms and conditions for copying, distribution and
modification follow. Pay close attention to the difference between a
"work based on the library" and a "work that uses the library". The
former contains code derived from the library, whereas the latter must
be combined with the library in order to run.

TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION

0. This License Agreement applies to any software library or other
program which contains a notice placed by the copyright holder or
other authorized party saying it may be distributed under the terms of
this Lesser General Public License (also called "this License"). Each
licensee is addressed as "you".

A "library" means a collection of software functions and/or data
prepared so as to be conveniently linked with application programs
(which use some of those functions and data) to form executables.

The "Library", below, refers to any such software library or work
which has been distributed under these terms. A "work based on the
Library" means either the Library or any derivative work under
copyright law: that is to say, a work containing the Library or a
portion of it, either verbatim or with modifications and/or translated
straightforwardly into another language. (Hereinafter, translation is
included without limitation in the term "modification".)

"Source code" for a work means the preferred form of the work for
making modifications to it. For a library, complete source code means
all the source code for all modules it contains, plus any associated
interface definition files, plus the scripts used to control
compilation and installation of the library.

Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running a program using the Library is not restricted, and output from
such a program is covered only if its contents constitute a work based
on the Library (independent of the use of the Library in a tool for
writing it). Whether that is true depends on what the Library does and
what the program that uses the Library does.

1. You may copy and distribute verbatim copies of the Library's
complete source code as you receive it, in any medium, provided that
you conspicuously and appropriately publish on each copy an
appropriate copyright notice and disclaimer of warranty; keep intact
all the notices that refer to this License and to the absence of any
warranty; and distribute a copy of this License along with the
Library.

You may charge a fee for the physical act of transferring a copy, and
you may at your option offer warranty protection in exchange for a
fee.

2. You may modify your copy or copies of the Library or any portion of
it, thus forming a work based on the Library, and copy and distribute
such modifications or work under the terms of Section 1 above,
provided that you also meet all of these conditions:

  a) The modified work must itself be a software library.

  b) You must cause the files modified to carry prominent notices
  stating that you changed the files and the date of any change.

  c) You must cause the whole of the work to be licensed at no
  charge to all third parties under the terms of this License.

  d) If a facility in the modified Library refers to a function or a
  table of data to be supplied by an application program that uses
  the facility, other than as an argument passed when the facility
  is invoked, then you must make a good faith effort to ensure that,
  in the event an application does not supply such function or
  table, the facility still operates, and performs whatever part of
  its purpose remains meaningful.

  (For example, a function in a library to compute square roots has
  a purpose that is entirely well-defined independent of the
  application. Therefore, Subsection 2d requires that any
  application-supplied function or table used by this function must
  be optional: if the application does not supply it, the square
  root function must still compute square roots.)

These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Library,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Library, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote
it.

Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Library.

In addition, mere aggregation of another work not based on the Library
with the Library (or with a work based on the Library) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.

3. You may opt to apply the terms of the ordinary GNU General Public
License instead of this License to a given copy of the Library. To do
this, you must alter all the notices that refer to this License, so
that they refer to the ordinary GNU General Public License, version 2,
instead of to this License. (If a newer version than version 2 of the
ordinary GNU General Public License has appeared, then you can specify
that version instead if you wish.) Do not make any other change in
these notices.

Once this change is made in a given copy, it is irreversible for that
copy, so the ordinary GNU General Public License applies to all
subsequent copies and derivative works made from that copy.

This option is useful when you wish to copy part of the code of the
Library into a program that is not a library.

4. You may copy and distribute the Library (or a portion or derivative
of it, under Section 2) in object code or executable form under the
terms of Sections 1 and 2 above provided that you accompany it with
the complete corresponding machine-readable source code, which must be
distributed under the terms of Sections 1 and 2 above on a medium
customarily used for software interchange.

If distribution of object code is made by offering access to copy from
a designated place, then offering equivalent access to copy the source
code from the same place satisfies the requirement to distribute the
source code, even though third parties are not compelled to copy the
source along with the object code.

5. A program that contains no derivative of any portion of the
Library, but is designed to work with the Library by being compiled or
linked with it, is called a "work that uses the Library". Such a work,
in isolation, is not a derivative work of the Library, and therefore
falls outside the scope of this License.

However, linking a "work that uses the Library" with the Library
creates an executable that is a derivative of the Library (because it
contains portions of the Library), rather than a "work that uses the
library". The executable is therefore covered by this License. Section
6 states terms for distribution of such executables.

When a "work that uses the Library" uses material from a header file
that is part of the Library, the object code for the work may be a
derivative work of the Library even though the source code is
not. Whether this is true is especially significant if the work can be
linked without the Library, or if the work is itself a library. The
threshold for this to be true is not precisely defined by law.

If such an object file uses only numerical parameters, data structure
layouts and accessors, and small macros and small inline functions
(ten lines or less in length), then the use of the object file is
unrestricted, regardless of whether it is legally a derivative
work. (Executables containing this object code plus portions of the
Library will still fall under Section 6.)

Otherwise, if the work is a derivative of the Library, you may
distribute the object code for the work under the terms of Section
6. Any executables containing that work also fall under Section 6,
whether or not they are linked directly with the Library itself.

6. As an exception to the Sections above, you may also combine or link
a "work that uses the Library" with the Library to produce a work
containing portions of the Library, and distribute that work under
terms of your choice, provided that the terms permit modification of
the work for the customer's own use and reverse engineering for
debugging such modifications.

You must give prominent notice with each copy of the work that the
Library is used in it and that the Library and its use are covered by
this License. You must supply a copy of this License. If the work
during execution displays copyright notices, you must include the
copyright notice for the Library among them, as well as a reference
directing the user to the copy of this License. Also, you must do one
of these things:

  a) Accompany the work with the complete corresponding
  machine-readable source code for the Library including whatever
  changes were used in the work (which must be distributed under
  Sections 1 and 2 above); and, if the work is an executable linked
  with the Library, with the complete machine-readable "work that
  uses the Library", as object code and/or source code, so that the
  user can modify the Library and then relink to produce a modified
  executable containing the modified Library. (It is understood that
  the user who changes the contents of definitions files in the
  Library will not necessarily be able to recompile the application
  to use the modified definitions.)

  b) Use a suitable shared library mechanism for linking with the
  Library. A suitable mechanism is one that (1) uses at run time a
  copy of the library already present on the user's computer system,
  rather than copying library functions into the executable, and (2)
  will operate properly with a modified version of the library, if
  the user installs one, as long as the modified version is
  interface-compatible with the version that the work was made with.

  c) Accompany the work with a written offer, valid for at least
  three years, to give the same user the materials specified in
  Subsection 6a, above, for a charge no more than the cost of
  performing this distribution.

  d) If distribution of the work is made by offering access to copy
  from a designated place, offer equivalent access to copy the above
  specified materials from the same place.

  e) Verify that the user has already received a copy of these
  materials or that you have already sent this user a copy.

For an executable, the required form of the "work that uses the
Library" must include any data and utility programs needed for
reproducing the executable from it. However, as a special exception,
the materials to be distributed need not include anything that is
normally distributed (in either source or binary form) with the major
components (compiler, kernel, and so on) of the operating system on
which the executable runs, unless that component itself accompanies
the executable.

It may happen that this requirement contradicts the license
restrictions of other proprietary libraries that do not normally
accompany the operating system. Such a contradiction means you cannot
use both them and the Library together in an executable that you
distribute.

7. You may place library facilities that are a work based on the
Library side-by-side in a single library together with other library
facilities not covered by this License, and distribute such a combined
library, provided that the separate distribution of the work based on
the Library and of the other library facilities is otherwise
permitted, and provided that you do these two things:

  a) Accompany the combined library with a copy of the same work
  based on the Library, uncombined with any other library
  facilities. This must be distributed under the terms of the
  Sections above.

  b) Give prominent notice with the combined library of the fact
  that part of it is a work based on the Library, and explaining
  where to find the accompanying uncombined form of the same work.

8. You may not copy, modify, sublicense, link with, or distribute the
Library except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense, link with, or distribute the
Library is void, and will automatically terminate your rights under
this License. However, parties who have received copies, or rights,
from you under this License will not have their licenses terminated so
long as such parties remain in full compliance.

9. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Library or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Library (or any work based on the
Library), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Library or works based on it.

10. Each time you redistribute the Library (or any work based on the
Library), the recipient automatically receives a license from the
original licensor to copy, distribute, link with or modify the Library
subject to these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted
herein. You are not responsible for enforcing compliance by third
parties with this License.

11. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Library at all. For example, if a patent
license would not permit royalty-free redistribution of the Library by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Library.

If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply, and the section as a whole is intended to apply in other
circumstances.

It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.

This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.

12. If the distribution and/or use of the Library is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Library under this License
may add an explicit geographical distribution limitation excluding
those countries, so that distribution is permitted only in or among
countries not thus excluded. In such case, this License incorporates
the limitation as if written in the body of this License.

13. The Free Software Foundation may publish revised and/or new
versions of the Lesser General Public License from time to time. Such
new versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns.

Each version is given a distinguishing version number. If the Library
specifies a version number of this License which applies to it and
"any later version", you have the option of following the terms and
conditions either of that version or of any later version published by
the Free Software Foundation. If the Library does not specify a
license version number, you may choose any version ever published by
the Free Software Foundation.

14. If you wish to incorporate parts of the Library into other free
programs whose distribution conditions are incompatible with these,
write to the author to ask for permission. For software which is
copyrighted by the Free Software Foundation, write to the Free
Software Foundation; we sometimes make exceptions for this. Our
decision will be guided by the two goals of preserving the free status
of all derivatives of our free software and of promoting the sharing
and reuse of software generally.

NO WARRANTY

15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO
WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE
LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS
AND/OR OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF
ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE
LIBRARY IS WITH YOU. SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME
THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN
WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY
AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU
FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR
CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE
LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING
RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A
FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF
SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH
DAMAGES.

END OF TERMS AND CONDITIONS

How to Apply These Terms to Your New Libraries

If you develop a new library, and you want it to be of the greatest
possible use to the public, we recommend making it free software that
everyone can redistribute and change. You can do so by permitting
redistribution under these terms (or, alternatively, under the terms
of the ordinary General Public License).

To apply these terms, attach the following notices to the library. It
is safest to attach them to the start of each source file to most
effectively convey the exclusion of warranty; and each file should
have at least the "copyright" line and a pointer to where the full
notice is found.

  one line to give the library's name and an idea of what it does.
  Copyright (C) year name of author

  This library is free software; you can redistribute it and/or modify
  it under the terms of the GNU Lesser General Public License as
  published by the Free Software Foundation; either version 2.1 of the
  License, or (at your option) any later version.

  This library is distributed in the hope that it will be useful, but
  WITHOUT ANY WARRANTY; without even the implied warranty of
  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
  Lesser General Public License for more details.

  You should have received a copy of the GNU Lesser General Public
  License along with this library; if not, write to the Free Software
  Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
  02110-1301 USA

Also add information on how to contact you by electronic and paper mail.

You should also get your employer (if you work as a programmer) or
your school, if any, to sign a "copyright disclaimer" for the library,
if necessary. Here is a sample; alter the names:

  Yoyodyne, Inc., hereby disclaims all copyright interest in
  the library `Frob' (a library for tweaking knobs) written
  by James Random Hacker.

  signature of Ty Coon, 1 April 1990
  Ty Coon, President of Vice

That's all there is to it!
@@ -0,0 +1,21 @@
This is the July 2018 version of the Client/Server messaging library
(CSlib).  Only the source directory and license file are included here
as part of the LAMMPS distribution.  The full CSlib distribution,
including documentation and test codes, can be found at the website:
http://cslib.sandia.gov (as of Aug 2018).

The contact author is

Steve Plimpton
Sandia National Laboratories
sjplimp@sandia.gov
http://www.sandia.gov/~sjplimp

The CSlib is distributed as open-source code under the GNU LGPL
license.  See the accompanying LICENSE file.

This directory contains the following:

README     this file
LICENSE    GNU LGPL license
src        source files for library
@@ -0,0 +1,107 @@
# Makefile for CSlib = client/server messaging library
# type "make help" for options

SHELL = /bin/sh

# ----------------------------------------
# should only need to change this section
# compiler/linker settings
# ----------------------------------------

CC = g++
CCFLAGS = -g -O3 -DZMQ_$(ZMQ) -DMPI_$(MPI)
SHFLAGS = -fPIC
ARCHIVE = ar
ARCHFLAGS = -rc
SHLIBFLAGS = -shared

# files

LIB = libcsmpi.a
SHLIB = libcsmpi.so
SRC = $(wildcard *.cpp)
INC = $(wildcard *.h)
OBJ = $(SRC:.cpp=.o)

# build with ZMQ support or not

zmq = yes
ZMQ = $(shell echo $(zmq) | tr a-z A-Z)

ifeq ($(ZMQ),YES)
  ZMQLIB = -lzmq
else
  CCFLAGS += -I./STUBS_ZMQ
endif

# build with MPI support or not

mpi = yes
MPI = $(shell echo $(mpi) | tr a-z A-Z)

ifeq ($(MPI),YES)
  CC = mpicxx
else
  CCFLAGS += -I./STUBS_MPI
  LIB = libcsnompi.a
  SHLIB = libcsnompi.so
endif

# targets

shlib:	shlib_parallel shlib_serial

lib:	lib_parallel lib_serial

all:	shlib lib

help:
	@echo 'make                 default = shlib'
	@echo 'make shlib           build 2 shared CSlibs: parallel & serial'
	@echo 'make lib             build 2 static CSlibs: parallel & serial'
	@echo 'make all             build 4 CSlibs: shlib and lib'
	@echo 'make shlib_parallel  build shared parallel CSlib'
	@echo 'make shlib_serial    build shared serial CSlib'
	@echo 'make lib_parallel    build static parallel CSlib'
	@echo 'make lib_serial      build static serial CSlib'
	@echo 'make ... zmq=no      build w/out ZMQ support'
	@echo 'make clean           remove all *.o files'
	@echo 'make clean-all       remove *.o and lib files'
	@echo 'make tar             create a tarball, 2 levels up'

shlib_parallel:
	$(MAKE) clean
	$(MAKE) shared zmq=$(zmq) mpi=yes

shlib_serial:
	$(MAKE) clean
	$(MAKE) shared zmq=$(zmq) mpi=no

lib_parallel:
	$(MAKE) clean
	$(MAKE) static zmq=$(zmq) mpi=yes

lib_serial:
	$(MAKE) clean
	$(MAKE) static zmq=$(zmq) mpi=no

static: $(OBJ)
	$(ARCHIVE) $(ARCHFLAGS) $(LIB) $(OBJ)

shared: $(OBJ)
	$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) -o $(SHLIB) $(OBJ) $(ZMQLIB)

clean:
	@rm -f *.o *.pyc

clean-all:
	@rm -f *.o *.pyc lib*.a lib*.so

tar:
	cd ../..; tar cvf cslib.tar cslib/README cslib/LICENSE \
	  cslib/doc cslib/src cslib/test

# rules

%.o:%.cpp
	$(CC) $(CCFLAGS) $(SHFLAGS) -c $<
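For example, a static serial build without ZMQ (which compiles against
the stub headers shown below) and a link against it might look like the
following; myapp.cpp is a hypothetical calling program:

% make lib_serial zmq=no         # produces libcsnompi.a
% g++ -I. myapp.cpp -L. -lcsnompi -o myapp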
@@ -0,0 +1,95 @@
/* -*- c++ -*- ----------------------------------------------------------
   LAMMPS - Large-scale Atomic/Molecular Massively Parallel Simulator
   http://lammps.sandia.gov, Sandia National Laboratories
   Steve Plimpton, sjplimp@sandia.gov

   Copyright (2003) Sandia Corporation.  Under the terms of Contract
   DE-AC04-94AL85000 with Sandia Corporation, the U.S. Government retains
   certain rights in this software.  This software is distributed under
   the GNU General Public License.

   See the README file in the top-level LAMMPS directory.
------------------------------------------------------------------------- */

// MPI constants and dummy functions

#ifndef MPI_DUMMY_H
#define MPI_DUMMY_H

#include <stdlib.h>
#include <stdint.h>
#include <string.h>

namespace CSLIB_NS {

typedef int MPI_Comm;
typedef int MPI_Fint;
typedef int MPI_Datatype;
typedef int MPI_Status;
typedef int MPI_Op;
typedef int MPI_Info;

#define MPI_COMM_WORLD 0
#define MPI_MAX_PORT_NAME 0
#define MPI_INFO_NULL 0
#define MPI_INT 1
#define MPI_LONG_LONG 2
#define MPI_FLOAT 3
#define MPI_DOUBLE 4
#define MPI_CHAR 5
#define MPI_SUM 0

static void MPI_Init(int *, char ***) {}
static MPI_Comm MPI_Comm_f2c(MPI_Comm world) {return world;}
static void MPI_Comm_rank(MPI_Comm, int *) {}
static void MPI_Comm_size(MPI_Comm, int *) {}

static void MPI_Open_port(MPI_Info, char *) {}
static void MPI_Close_port(const char *) {}
static void MPI_Comm_accept(const char *, MPI_Info, int,
                            MPI_Comm, MPI_Comm *) {}
static void MPI_Comm_connect(const char *, MPI_Info, int,
                             MPI_Comm, MPI_Comm *) {}

static void MPI_Comm_split(MPI_Comm, int, int, MPI_Comm *) {}
static void MPI_Comm_free(MPI_Comm *) {}

static void MPI_Send(const void *, int, MPI_Datatype, int, int, MPI_Comm) {}
static void MPI_Recv(void *, int, MPI_Datatype, int, int,
                     MPI_Comm, MPI_Status *) {}

static void MPI_Allreduce(const void *in, void *out, int, MPI_Datatype type,
                          MPI_Op op, MPI_Comm)
{
  if (type == MPI_INT) *((int *) out) = *((int *) in);
}
static void MPI_Scan(const void *in, void *out, int, MPI_Datatype intype,
                     MPI_Op op, MPI_Comm)
{
  if (intype == MPI_INT) *((int *) out) = *((int *) in);
}

static void MPI_Bcast(void *, int, MPI_Datatype, int, MPI_Comm) {}
static void MPI_Allgather(const void *in, int incount, MPI_Datatype intype,
                          void *out, int, MPI_Datatype, MPI_Comm)
{
  // assuming incount = 1
  if (intype == MPI_INT) *((int *) out) = *((int *) in);
}
static void MPI_Allgatherv(const void *in, int incount, MPI_Datatype intype,
                           void *out, const int *, const int *,
                           MPI_Datatype, MPI_Comm)
{
  if (intype == MPI_INT) memcpy(out,in,incount*sizeof(int));
  else if (intype == MPI_LONG_LONG) memcpy(out,in,incount*sizeof(int64_t));
  else if (intype == MPI_FLOAT) memcpy(out,in,incount*sizeof(float));
  else if (intype == MPI_DOUBLE) memcpy(out,in,incount*sizeof(double));
  else if (intype == MPI_CHAR) memcpy(out,in,incount*sizeof(char));
}

static void MPI_Abort(MPI_Comm, int) {exit(1);}
static void MPI_Finalize() {}

}

#endif
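The dummy declarations above let code written against the MPI API compile
serially, with no MPI installation present.  A minimal sketch of how a
serial build uses them (a hypothetical demo.cpp compiled with
-I./STUBS_MPI; note the CSLIB_NS namespace exists only in this stub
version, not in a real MPI header):

#include <stdio.h>
#include "mpi.h"           // resolves to STUBS_MPI/mpi.h in a serial build
using namespace CSLIB_NS;  // the stubs live in this namespace

int main(int argc, char **argv)
{
  MPI_Init(&argc,&argv);
  int me = 0, nprocs = 1;            // stub calls leave these defaults alone
  MPI_Comm_rank(MPI_COMM_WORLD,&me);
  MPI_Comm_size(MPI_COMM_WORLD,&nprocs);
  printf("proc %d of %d\n",me,nprocs);
  MPI_Finalize();
  return 0;
}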
@@ -0,0 +1,35 @@
/* -*- c++ -*- ----------------------------------------------------------
   LAMMPS - Large-scale Atomic/Molecular Massively Parallel Simulator
   http://lammps.sandia.gov, Sandia National Laboratories
   Steve Plimpton, sjplimp@sandia.gov

   Copyright (2003) Sandia Corporation.  Under the terms of Contract
   DE-AC04-94AL85000 with Sandia Corporation, the U.S. Government retains
   certain rights in this software.  This software is distributed under
   the GNU General Public License.

   See the README file in the top-level LAMMPS directory.
------------------------------------------------------------------------- */

// ZMQ constants and dummy functions

#ifndef ZMQ_DUMMY_H
#define ZMQ_DUMMY_H

namespace CSLIB_NS {

#define ZMQ_REQ 0
#define ZMQ_REP 0

static void *zmq_ctx_new() {return NULL;}
static void *zmq_connect(void *, char *) {return NULL;}
static int zmq_bind(void *, char *) {return 0;}
static void *zmq_socket(void *, int) {return NULL;}
static void zmq_close(void *) {}
static void zmq_ctx_destroy(void *) {}
static void zmq_send(void *, void *, int, int) {}
static void zmq_recv(void *, void *, int, int) {}

}

#endif
@@ -0,0 +1,768 @@
/* ----------------------------------------------------------------------
   CSlib - Client/server library for code coupling
   http://cslib.sandia.gov, Sandia National Laboratories
   Steve Plimpton, sjplimp@sandia.gov

   Copyright 2018 National Technology & Engineering Solutions of
   Sandia, LLC (NTESS).  Under the terms of Contract DE-NA0003525 with
   NTESS, the U.S. Government retains certain rights in this software.
   This software is distributed under the GNU Lesser General Public
   License (LGPL).

   See the README file in the top-level CSlib directory.
------------------------------------------------------------------------- */

#include <mpi.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <limits.h>

#include "cslib.h"
#include "msg_file.h"
#include "msg_zmq.h"
#include "msg_mpi_one.h"
#include "msg_mpi_two.h"

using namespace CSLIB_NS;

#define MAXTYPE 5       // # of defined field data types

/* ---------------------------------------------------------------------- */

CSlib::CSlib(int csflag, const char *mode, const void *ptr, const void *pcomm)
{
  if (pcomm) myworld = (uint64_t) *((MPI_Comm *) pcomm);
  else myworld = 0;

#ifdef MPI_NO
  if (pcomm)
    error_all("constructor(): CSlib invoked with MPI_Comm "
              "but built w/out MPI support");
#endif
#ifdef MPI_YES       // NOTE: this could be OK to allow ??
                     // would allow a parallel app to invoke CSlib
                     //   in parallel and/or in serial
  if (!pcomm)
    error_all("constructor(): CSlib invoked w/out MPI_Comm "
              "but built with MPI support");
#endif

  client = server = 0;
  if (csflag == 0) client = 1;
  else if (csflag == 1) server = 1;
  else error_all("constructor(): Invalid client/server arg");

  if (pcomm == NULL) {
    me = 0;
    nprocs = 1;

    if (strcmp(mode,"file") == 0) msg = new MsgFile(csflag,ptr);
    else if (strcmp(mode,"zmq") == 0) msg = new MsgZMQ(csflag,ptr);
    else if (strcmp(mode,"mpi/one") == 0)
      error_all("constructor(): No mpi/one mode for serial lib usage");
    else if (strcmp(mode,"mpi/two") == 0)
      error_all("constructor(): No mpi/two mode for serial lib usage");
    else error_all("constructor(): Unknown mode");

  } else if (pcomm) {
    MPI_Comm world = (MPI_Comm) myworld;
    MPI_Comm_rank(world,&me);
    MPI_Comm_size(world,&nprocs);

    if (strcmp(mode,"file") == 0) msg = new MsgFile(csflag,ptr,world);
    else if (strcmp(mode,"zmq") == 0) msg = new MsgZMQ(csflag,ptr,world);
    else if (strcmp(mode,"mpi/one") == 0) msg = new MsgMPIOne(csflag,ptr,world);
    else if (strcmp(mode,"mpi/two") == 0) msg = new MsgMPITwo(csflag,ptr,world);
    else error_all("constructor(): Unknown mode");
  }

  maxfield = 0;
  fieldID = fieldtype = fieldlen = fieldoffset = NULL;
  maxheader = 0;
  header = NULL;
  maxbuf = 0;
  buf = NULL;

  recvcounts = displs = NULL;
  maxglobal = 0;
  allids = NULL;
  maxfieldbytes = 0;
  fielddata = NULL;

  pad = "\0\0\0\0\0\0\0";    // just length 7 since will have trailing NULL

  nsend = nrecv = 0;
}

/* ---------------------------------------------------------------------- */

CSlib::~CSlib()
{
  deallocate_fields();
  sfree(header);
  sfree(buf);

  sfree(recvcounts);
  sfree(displs);
  sfree(allids);
  sfree(fielddata);

  delete msg;
}

/* ---------------------------------------------------------------------- */

void CSlib::send(int msgID_caller, int nfield_caller)
{
  if (nfield_caller < 0) error_all("send(): Invalid nfield");

  msgID = msgID_caller;
  nfield = nfield_caller;
  allocate_fields();

  fieldcount = 0;
  nbuf = 0;

  if (fieldcount == nfield) send_message();
}

/* ---------------------------------------------------------------------- */

void CSlib::pack_int(int id, int value)
{
  pack(id,1,1,&value);
}

/* ---------------------------------------------------------------------- */

void CSlib::pack_int64(int id, int64_t value)
{
  pack(id,2,1,&value);
}

/* ---------------------------------------------------------------------- */

void CSlib::pack_float(int id, float value)
{
  pack(id,3,1,&value);
}

/* ---------------------------------------------------------------------- */

void CSlib::pack_double(int id, double value)
{
  pack(id,4,1,&value);
}

/* ---------------------------------------------------------------------- */

void CSlib::pack_string(int id, char *value)
{
  pack(id,5,strlen(value)+1,value);
}

/* ---------------------------------------------------------------------- */

void CSlib::pack(int id, int ftype, int flen, void *data)
{
  if (find_field(id,fieldcount) >= 0)
    error_all("pack(): Reuse of field ID");
  if (ftype < 1 || ftype > MAXTYPE) error_all("pack(): Invalid ftype");
  if (flen < 0) error_all("pack(): Invalid flen");

  fieldID[fieldcount] = id;
  fieldtype[fieldcount] = ftype;
  fieldlen[fieldcount] = flen;

  int nbytes,nbytesround;
  onefield(ftype,flen,nbytes,nbytesround);

  memcpy(&buf[nbuf],data,nbytes);
  memcpy(&buf[nbuf+nbytes],pad,nbytesround-nbytes);
  nbuf += nbytesround;

  fieldcount++;
  if (fieldcount == nfield) send_message();
}

/* ---------------------------------------------------------------------- */

void CSlib::pack_parallel(int id, int ftype,
                          int nlocal, int *ids, int nper, void *data)
{
  int i,j,k,m;

  if (find_field(id,fieldcount) >= 0)
    error_all("pack_parallel(): Reuse of field ID");
  if (ftype < 1 || ftype > MAXTYPE) error_all("pack_parallel(): Invalid ftype");
  if (nlocal < 0) error_all("pack_parallel(): Invalid nlocal");
  if (nper < 1) error_all("pack_parallel(): Invalid nper");

  MPI_Comm world = (MPI_Comm) myworld;

  // NOTE: check for overflow of maxglobal and flen

  int nglobal;
  MPI_Allreduce(&nlocal,&nglobal,1,MPI_INT,MPI_SUM,world);
  int flen = nper*nglobal;

  fieldID[fieldcount] = id;
  fieldtype[fieldcount] = ftype;
  fieldlen[fieldcount] = flen;

  // nlocal datums, each of nper length, from all procs
  // final data in buf = datums for all natoms, ordered by ids

  if (recvcounts == NULL) {
    recvcounts = (int *) smalloc(nprocs*sizeof(int));
    displs = (int *) smalloc(nprocs*sizeof(int));
  }

  MPI_Allgather(&nlocal,1,MPI_INT,recvcounts,1,MPI_INT,world);

  displs[0] = 0;
  for (int iproc = 1; iproc < nprocs; iproc++)
    displs[iproc] = displs[iproc-1] + recvcounts[iproc-1];

  if (ids && nglobal > maxglobal) {
    sfree(allids);
    maxglobal = nglobal;
    // NOTE: maxglobal*sizeof(int) could overflow int
    allids = (int *) smalloc(maxglobal*sizeof(int));
  }

  MPI_Allgatherv(ids,nlocal,MPI_INT,allids,
                 recvcounts,displs,MPI_INT,world);

  int nlocalsize = nper*nlocal;
  MPI_Allgather(&nlocalsize,1,MPI_INT,recvcounts,1,MPI_INT,world);

  displs[0] = 0;
  for (int iproc = 1; iproc < nprocs; iproc++)
    displs[iproc] = displs[iproc-1] + recvcounts[iproc-1];

  int nbytes,nbytesround;
  onefield(ftype,flen,nbytes,nbytesround);

  if (ftype == 1) {
    int *alldata;
    if (ids) {
      if (nbytes > maxfieldbytes) {
        sfree(fielddata);
        maxfieldbytes = nbytes;
        fielddata = (char *) smalloc(maxfieldbytes);
      }
      alldata = (int *) fielddata;
    } else alldata = (int *) &buf[nbuf];
    MPI_Allgatherv(data,nlocalsize,MPI_INT,alldata,
                   recvcounts,displs,MPI_INT,world);
    if (ids) {
      int *bufptr = (int *) &buf[nbuf];
      m = 0;
      for (i = 0; i < nglobal; i++) {
        j = (allids[i]-1) * nper;
        if (nper == 1) bufptr[j] = alldata[m++];
        else
          for (k = 0; k < nper; k++)
            bufptr[j++] = alldata[m++];
      }
    }

  } else if (ftype == 2) {
    int64_t *alldata;
    if (ids) {
      if (nbytes > maxfieldbytes) {
        sfree(fielddata);
        maxfieldbytes = nbytes;
        fielddata = (char *) smalloc(maxfieldbytes);
      }
      alldata = (int64_t *) fielddata;
    } else alldata = (int64_t *) &buf[nbuf];
    // NOTE: may be just MPI_LONG on some machines
    MPI_Allgatherv(data,nlocalsize,MPI_LONG_LONG,alldata,
                   recvcounts,displs,MPI_LONG_LONG,world);
    if (ids) {
      int64_t *bufptr = (int64_t *) &buf[nbuf];
      m = 0;
      for (i = 0; i < nglobal; i++) {
        j = (allids[i]-1) * nper;
        if (nper == 1) bufptr[j] = alldata[m++];
        else
          for (k = 0; k < nper; k++)
            bufptr[j++] = alldata[m++];
      }
    }

  } else if (ftype == 3) {
    float *alldata;
    if (ids) {
      if (nbytes > maxfieldbytes) {
        sfree(fielddata);
        maxfieldbytes = nbytes;
        fielddata = (char *) smalloc(maxfieldbytes);
      }
      alldata = (float *) fielddata;
    } else alldata = (float *) &buf[nbuf];
    MPI_Allgatherv(data,nlocalsize,MPI_FLOAT,alldata,
                   recvcounts,displs,MPI_FLOAT,world);
    if (ids) {
      float *bufptr = (float *) &buf[nbuf];
      m = 0;
      for (i = 0; i < nglobal; i++) {
        j = (allids[i]-1) * nper;
        if (nper == 1) bufptr[j] = alldata[m++];
        else
          for (k = 0; k < nper; k++)
            bufptr[j++] = alldata[m++];
      }
    }

  } else if (ftype == 4) {
    double *alldata;
    if (ids) {
      if (nbytes > maxfieldbytes) {
        sfree(fielddata);
        maxfieldbytes = nbytes;
        fielddata = (char *) smalloc(maxfieldbytes);
      }
      alldata = (double *) fielddata;
    } else alldata = (double *) &buf[nbuf];
    MPI_Allgatherv(data,nlocalsize,MPI_DOUBLE,alldata,
                   recvcounts,displs,MPI_DOUBLE,world);
    if (ids) {
      double *bufptr = (double *) &buf[nbuf];
      m = 0;
      for (i = 0; i < nglobal; i++) {
        j = (allids[i]-1) * nper;
        if (nper == 1) bufptr[j] = alldata[m++];
        else
          for (k = 0; k < nper; k++)
            bufptr[j++] = alldata[m++];
      }
    }

  /* eventually ftype = BYTE, but not yet
  } else if (ftype == 5) {
    char *alldata;
    if (ids) {
      if (nbytes > maxfieldbytes) {
        sfree(fielddata);
        maxfieldbytes = nbytes;
        fielddata = (char *) smalloc(maxfieldbytes);
      }
      alldata = (char *) fielddata;
    } else alldata = (char *) &buf[nbuf];
    MPI_Allgatherv(data,nlocalsize,MPI_CHAR,alldata,
                   recvcounts,displs,MPI_CHAR,world);
    if (ids) {
      char *bufptr = (char *) &buf[nbuf];
      m = 0;
      for (i = 0; i < nglobal; i++) {
        j = (allids[i]-1) * nper;
        memcpy(&bufptr[j],&alldata[m],nper);
        m += nper;
      }
    }
  */
  }

  memcpy(&buf[nbuf+nbytes],pad,nbytesround-nbytes);
  nbuf += nbytesround;

  fieldcount++;
  if (fieldcount == nfield) send_message();
}
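
/* Example (illustration only, not part of the original file): for
   pack_parallel() with nper = 2 and 1-based IDs, the per-processor
   contributions
     proc 0: ids = [3,1], data = [a3,b3,a1,b1]
     proc 1: ids = [2],   data = [a2,b2]
   are gathered and then reordered by ID into buf as
     [a1,b1,a2,b2,a3,b3]
   since datum i is copied to offset (allids[i]-1)*nper. */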

/* ---------------------------------------------------------------------- */

void CSlib::send_message()
{
  // setup header message

  int m = 0;
  header[m++] = msgID;
  header[m++] = nfield;
  for (int ifield = 0; ifield < nfield; ifield++) {
    header[m++] = fieldID[ifield];
    header[m++] = fieldtype[ifield];
    header[m++] = fieldlen[ifield];
  }

  msg->send(nheader,header,nbuf,buf);
  nsend++;
}

/* ---------------------------------------------------------------------- */

int CSlib::recv(int &nfield_caller, int *&fieldID_caller,
                int *&fieldtype_caller, int *&fieldlen_caller)
{
  msg->recv(maxheader,header,maxbuf,buf);
  nrecv++;

  // unpack header message

  int m = 0;
  msgID = header[m++];
  nfield = header[m++];
  allocate_fields();

  int nbytes,nbytesround;

  nbuf = 0;
  for (int ifield = 0; ifield < nfield; ifield++) {
    fieldID[ifield] = header[m++];
    fieldtype[ifield] = header[m++];
    fieldlen[ifield] = header[m++];
    fieldoffset[ifield] = nbuf;
    onefield(fieldtype[ifield],fieldlen[ifield],nbytes,nbytesround);
    nbuf += nbytesround;
  }

  // return message parameters

  nfield_caller = nfield;
  fieldID_caller = fieldID;
  fieldtype_caller = fieldtype;
  fieldlen_caller = fieldlen;

  return msgID;
}

/* ---------------------------------------------------------------------- */

int CSlib::unpack_int(int id)
{
  int ifield = find_field(id,nfield);
  if (ifield < 0) error_all("unpack_int(): Unknown field ID");
  if (fieldtype[ifield] != 1) error_all("unpack_int(): Mis-match of ftype");
  if (fieldlen[ifield] != 1) error_all("unpack_int(): Flen is not 1");

  int *ptr = (int *) unpack(id);
  return *ptr;
}

/* ---------------------------------------------------------------------- */

int64_t CSlib::unpack_int64(int id)
{
  int ifield = find_field(id,nfield);
  if (ifield < 0) error_all("unpack_int64(): Unknown field ID");
  if (fieldtype[ifield] != 2) error_all("unpack_int64(): Mis-match of ftype");
  if (fieldlen[ifield] != 1) error_all("unpack_int64(): Flen is not 1");

  int64_t *ptr = (int64_t *) unpack(id);
  return *ptr;
}

/* ---------------------------------------------------------------------- */

float CSlib::unpack_float(int id)
{
  int ifield = find_field(id,nfield);
  if (ifield < 0) error_all("unpack_float(): Unknown field ID");
  if (fieldtype[ifield] != 3) error_all("unpack_float(): Mis-match of ftype");
  if (fieldlen[ifield] != 1) error_all("unpack_float(): Flen is not 1");

  float *ptr = (float *) unpack(id);
  return *ptr;
}

/* ---------------------------------------------------------------------- */

double CSlib::unpack_double(int id)
{
  int ifield = find_field(id,nfield);
  if (ifield < 0) error_all("unpack_double(): Unknown field ID");
  if (fieldtype[ifield] != 4) error_all("unpack_double(): Mis-match of ftype");
  if (fieldlen[ifield] != 1) error_all("unpack_double(): Flen is not 1");

  double *ptr = (double *) unpack(id);
  return *ptr;
}

/* ---------------------------------------------------------------------- */

char *CSlib::unpack_string(int id)
{
  int ifield = find_field(id,nfield);
  if (ifield < 0) error_all("unpack_string(): Unknown field ID");
  if (fieldtype[ifield] != 5) error_all("unpack_string(): Mis-match of ftype");

  char *ptr = (char *) unpack(id);
  return ptr;
}
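
/* Example (illustration only, not part of the original file): a minimal
   client-side exchange using the pack/unpack API above; the "zmq" mode
   and the "localhost:5555" port string are assumptions for the sketch:

     CSlib cs(0,"zmq","localhost:5555",NULL); // csflag 0 = client, serial
     cs.send(1,2);                        // msg ID 1, 2 fields to follow
     cs.pack_int(1,100);                  // field ID 1
     cs.pack_string(2,(char *) "hello");  // field ID 2; final pack sends
     int nfield,*fieldID,*fieldtype,*fieldlen;
     int msgID = cs.recv(nfield,fieldID,fieldtype,fieldlen);
     int value = cs.unpack_int(1);        // field ID 1 of the reply
*/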

/* ---------------------------------------------------------------------- */

void *CSlib::unpack(int id)
{
  int ifield = find_field(id,nfield);
  if (ifield < 0) error_all("unpack(): Unknown field ID");
  return &buf[fieldoffset[ifield]];
}

/* ---------------------------------------------------------------------- */

void CSlib::unpack(int id, void *data)
{
  int ifield = find_field(id,nfield);
  if (ifield < 0) error_all("unpack(): Unknown field ID");

  int ftype = fieldtype[ifield];
  int nbytes = fieldlen[ifield];
  if (ftype == 1) nbytes *= sizeof(int);
  else if (ftype == 2) nbytes *= sizeof(int64_t);
  else if (ftype == 3) nbytes *= sizeof(float);
  else if (ftype == 4) nbytes *= sizeof(double);
  memcpy(data,&buf[fieldoffset[ifield]],nbytes);
}

/* ---------------------------------------------------------------------- */

void CSlib::unpack_parallel(int id, int nlocal, int *ids, int nper, void *data)
{
  int i,j,k,m;

  int ifield = find_field(id,nfield);
  if (ifield < 0) error_all("unpack_parallel(): Unknown field ID");
  if (nlocal < 0) error_all("unpack_parallel(): Invalid nlocal");
  if (nper < 1) error_all("unpack_parallel(): Invalid nper");

  MPI_Comm world = (MPI_Comm) myworld;

  int upto;
  if (!ids) {
    MPI_Scan(&nlocal,&upto,1,MPI_INT,MPI_SUM,world);
    upto -= nlocal;
  }

  if (fieldtype[ifield] == 1) {
    int *local = (int *) data;
    int *global = (int *) &buf[fieldoffset[ifield]];
    if (!ids) memcpy(local,&global[nper*upto],nper*nlocal*sizeof(int));
    else {
      m = 0;
      for (i = 0; i < nlocal; i++) {
        j = (ids[i]-1) * nper;
        if (nper == 1) local[m++] = global[j];
        else
          for (k = 0; k < nper; k++)
            local[m++] = global[j++];
      }
    }

  } else if (fieldtype[ifield] == 2) {
    int64_t *local = (int64_t *) data;
    int64_t *global = (int64_t *) &buf[fieldoffset[ifield]];
    if (!ids) memcpy(local,&global[nper*upto],nper*nlocal*sizeof(int64_t));
    else {
      m = 0;
      for (i = 0; i < nlocal; i++) {
        j = (ids[i]-1) * nper;
        if (nper == 1) local[m++] = global[j];
        else
          for (k = 0; k < nper; k++)
            local[m++] = global[j++];
      }
    }

  } else if (fieldtype[ifield] == 3) {
    float *local = (float *) data;
    float *global = (float *) &buf[fieldoffset[ifield]];
    if (!ids) memcpy(local,&global[nper*upto],nper*nlocal*sizeof(float));
    else {
      m = 0;
      for (i = 0; i < nlocal; i++) {
        j = (ids[i]-1) * nper;
        if (nper == 1) local[m++] = global[j];
        else
          for (k = 0; k < nper; k++)
            local[m++] = global[j++];
      }
    }

  } else if (fieldtype[ifield] == 4) {
    double *local = (double *) data;
    double *global = (double *) &buf[fieldoffset[ifield]];
    if (!ids) memcpy(local,&global[nper*upto],nper*nlocal*sizeof(double));
    else {
      m = 0;
      for (i = 0; i < nlocal; i++) {
        j = (ids[i]-1) * nper;
        if (nper == 1) local[m++] = global[j];
        else
          for (k = 0; k < nper; k++)
            local[m++] = global[j++];
      }
    }

  /* eventually ftype = BYTE, but not yet
  } else if (fieldtype[ifield] == 5) {
    char *local = (char *) data;
    char *global = (char *) &buf[fieldoffset[ifield]];
    if (!ids) memcpy(local,&global[nper*upto],nper*nlocal*sizeof(char));
    else {
      m = 0;
      for (i = 0; i < nlocal; i++) {
        j = (ids[i]-1) * nper;
        memcpy(&local[m],&global[j],nper);
        m += nper;
      }
    }
  */
  }
}

/* ---------------------------------------------------------------------- */

int CSlib::extract(int flag)
{
  if (flag == 1) return nsend;
  if (flag == 2) return nrecv;
  error_all("extract(): Invalid flag");
  return 0;
}

/* ---------------------------------------------------------------------- */

void CSlib::onefield(int ftype, int flen, int &nbytes, int &nbytesround)
{
  int64_t bigbytes,bigbytesround;
  int64_t biglen = flen;

  if (ftype == 1) bigbytes = biglen * sizeof(int);
  else if (ftype == 2) bigbytes = biglen * sizeof(int64_t);
  else if (ftype == 3) bigbytes = biglen * sizeof(float);
  else if (ftype == 4) bigbytes = biglen * sizeof(double);
  else if (ftype == 5) bigbytes = biglen * sizeof(char);
  bigbytesround = roundup(bigbytes,8);

  if (nbuf + bigbytesround > INT_MAX)
    error_all("pack(): Message size exceeds 32-bit integer limit");

  nbytes = (int) bigbytes;
  nbytesround = (int) bigbytesround;
  if (nbuf + nbytesround > maxbuf) {
    maxbuf = nbuf + nbytesround;
    buf = (char *) srealloc(buf,maxbuf);
  }
}
|
||||
|
||||
/* ---------------------------------------------------------------------- */
|
||||
|
||||
int CSlib::find_field(int id, int n)
|
||||
{
|
||||
int ifield;
|
||||
for (ifield = 0; ifield < n; ifield++)
|
||||
if (id == fieldID[ifield]) return ifield;
|
||||
return -1;
|
||||
}
|
||||
|
||||
/* ---------------------------------------------------------------------- */
|
||||
|
||||
void CSlib::allocate_fields()
|
||||
{
|
||||
int64_t bigbytes = (2 + 3*((int64_t) nfield)) * sizeof(int);
|
||||
if (bigbytes > INT_MAX)
|
||||
error_all("send(): Message header size exceeds 32-bit integer limit");
|
||||
|
||||
nheader = 2;
|
||||
nheader += 3 * nfield;
|
||||
|
||||
if (nfield > maxfield) {
|
||||
deallocate_fields();
|
||||
maxfield = nfield;
|
||||
fieldID = new int[maxfield];
|
||||
fieldtype = new int[maxfield];
|
||||
fieldlen = new int[maxfield];
|
||||
fieldoffset = new int[maxfield];
|
||||
}
|
||||
|
||||
if (nheader > maxheader) {
|
||||
sfree(header);
|
||||
maxheader = nheader;
|
||||
header = (int *) smalloc(maxheader*sizeof(int));
|
||||
}
|
||||
}
|
||||
|
||||
/* ---------------------------------------------------------------------- */
|
||||
|
||||
void CSlib::deallocate_fields()
|
||||
{
|
||||
delete [] fieldID;
|
||||
delete [] fieldtype;
|
||||
delete [] fieldlen;
|
||||
delete [] fieldoffset;
|
||||
}
|
||||
|
||||
/* ---------------------------------------------------------------------- */
|
||||
|
||||
void *CSlib::smalloc(int nbytes)
|
||||
{
|
||||
if (nbytes == 0) return NULL;
|
||||
void *ptr = malloc(nbytes);
|
||||
if (ptr == NULL) {
|
||||
char str[128];
|
||||
sprintf(str,"malloc(): Failed to allocate %d bytes",nbytes);
|
||||
error_one(str);
|
||||
}
|
||||
return ptr;
|
||||
}
|
||||
|
||||
/* ---------------------------------------------------------------------- */
|
||||
|
||||
void *CSlib::srealloc(void *ptr, int nbytes)
|
||||
{
|
||||
if (nbytes == 0) {
|
||||
sfree(ptr);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
ptr = realloc(ptr,nbytes);
|
||||
if (ptr == NULL) {
|
||||
char str[128];
|
||||
sprintf(str,"realloc(): Failed to reallocate %d bytes",nbytes);
|
||||
error_one(str);
|
||||
}
|
||||
return ptr;
|
||||
}
|
||||
|
||||
/* ---------------------------------------------------------------------- */
|
||||
|
||||
void CSlib::sfree(void *ptr)
|
||||
{
|
||||
if (ptr == NULL) return;
|
||||
free(ptr);
|
||||
}
|
||||
|
||||
/* ---------------------------------------------------------------------- */
|
||||
|
||||
void CSlib::error_all(const char *str)
|
||||
{
|
||||
if (me == 0) printf("CSlib ERROR: %s\n",str);
|
||||
MPI_Comm world = (MPI_Comm) myworld;
|
||||
MPI_Abort(world,1);
|
||||
}
|
||||
|
||||
/* ---------------------------------------------------------------------- */
|
||||
|
||||
void CSlib::error_one(const char *str)
|
||||
{
|
||||
printf("CSlib ERROR: %s\n",str);
|
||||
MPI_Comm world = (MPI_Comm) myworld;
|
||||
MPI_Abort(world,1);
|
||||
}
|
||||
|
||||
/* ----------------------------------------------------------------------
|
||||
round N up to multiple of nalign and return it
|
||||
NOTE: see mapreduce/src/keyvalue.cpp for doing this as uint64_t
|
||||
------------------------------------------------------------------------- */
|
||||
|
||||
int64_t CSlib::roundup(int64_t n, int nalign)
|
||||
{
|
||||
if (n % nalign == 0) return n;
|
||||
n = (n/nalign + 1) * nalign;
|
||||
return n;
|
||||
}
|
|
@ -0,0 +1,87 @@
/* ----------------------------------------------------------------------
   CSlib - Client/server library for code coupling
   http://cslib.sandia.gov, Sandia National Laboratories
   Steve Plimpton, sjplimp@sandia.gov

   Copyright 2018 National Technology & Engineering Solutions of
   Sandia, LLC (NTESS). Under the terms of Contract DE-NA0003525 with
   NTESS, the U.S. Government retains certain rights in this software.
   This software is distributed under the GNU Lesser General Public
   License (LGPL).

   See the README file in the top-level CSlib directory.
------------------------------------------------------------------------- */

#ifndef CSLIB_H
#define CSLIB_H

#include <stdint.h>

namespace CSLIB_NS {

class CSlib {
 public:
  int nsend,nrecv;

  CSlib(int, const char *, const void *, const void *);
  ~CSlib();

  void send(int, int);

  void pack_int(int, int);
  void pack_int64(int, int64_t);
  void pack_float(int, float);
  void pack_double(int, double);
  void pack_string(int, char *);
  void pack(int, int, int, void *);
  void pack_parallel(int, int, int, int *, int, void *);

  int recv(int &, int *&, int *&, int *&);

  int unpack_int(int);
  int64_t unpack_int64(int);
  float unpack_float(int);
  double unpack_double(int);
  char *unpack_string(int);
  void *unpack(int);
  void unpack(int, void *);
  void unpack_parallel(int, int, int *, int, void *);

  int extract(int);

 private:
  uint64_t myworld;    // really MPI_Comm, but avoids use of mpi.h in this file
                       // so apps can include this file w/ no MPI on system
  int me,nprocs;
  int client,server;
  int nfield,maxfield;
  int msgID,fieldcount;
  int nheader,maxheader;
  int nbuf,maxbuf;
  int maxglobal,maxfieldbytes;
  int *fieldID,*fieldtype,*fieldlen,*fieldoffset;
  int *header;
  int *recvcounts,*displs;   // nprocs size for Allgathers
  int *allids;               // nglobal size for pack_parallel()
  char *buf;                 // maxbuf size for msg with all fields
  char *fielddata;           // maxfieldbytes size for one global field
  const char *pad;

  class Msg *msg;

  void send_message();
  void onefield(int, int, int &, int &);
  int find_field(int, int);
  void allocate_fields();
  void deallocate_fields();
  int64_t roundup(int64_t, int);
  void *smalloc(int);
  void *srealloc(void *, int);
  void sfree(void *);
  void error_all(const char *);
  void error_one(const char *);
};

}

#endif
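The header above is the whole public API of the class. As a quick orientation, here is a minimal client-side sketch in C++; it is not part of this commit, and the mode string, port, field IDs, and message IDs are illustrative assumptions. Note that in this library send() declares the message up front and the final pack call flushes it.

// Minimal client-side sketch of the CSlib class API (illustrative only).
// Assumes a serial (no-MPI) build, "zmq" mode, and a server that replies
// with one double field; IDs and the port are invented for the example.

#include <cstdio>
#include "cslib.h"
using namespace CSLIB_NS;

int main()
{
  // csflag = 0 means act as client; last arg NULL means no MPI communicator
  CSlib *cs = new CSlib(0,"zmq","localhost:5555",NULL);

  // declare msg ID 1 with 1 field; packing the last field triggers the send
  cs->send(1,1);
  cs->pack_double(1,3.14159);

  // wait for the server's reply and unpack the same field ID
  int nfield,*fieldID,*fieldtype,*fieldlen;
  int msgID = cs->recv(nfield,fieldID,fieldtype,fieldlen);
  double value = cs->unpack_double(1);
  printf("reply msgID %d: %g\n",msgID,value);

  delete cs;
  return 0;
}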
@ -0,0 +1,362 @@
# ------------------------------------------------------------------------
# CSlib - Client/server library for code coupling
# http://cslib.sandia.gov, Sandia National Laboratories
# Steve Plimpton, sjplimp@sandia.gov
#
# Copyright 2018 National Technology & Engineering Solutions of
# Sandia, LLC (NTESS). Under the terms of Contract DE-NA0003525 with
# NTESS, the U.S. Government retains certain rights in this software.
# This software is distributed under the GNU Lesser General Public
# License (LGPL).
#
# See the README file in the top-level CSlib directory.
# -------------------------------------------------------------------------

# Python wrapper on CSlib library via ctypes

# ctypes and Numpy data types:
# 32-bit int = c_int = np.intc = np.int32
# 64-bit int = c_longlong = np.int64
# 32-bit floating point = c_float = np.float32
# 64-bit floating point = c_double = np.float = np.float64

import sys,traceback
from ctypes import *

# Numpy and mpi4py packages may not exist

try:
  import numpy as np
  numpyflag = 1
except:
  numpyflag = 0

try:
  from mpi4py import MPI
  mpi4pyflag = 1
except:
  mpi4pyflag = 0

# wrapper class

class CSlib:

  # instantiate CSlib thru its C-interface

  def __init__(self,csflag,mode,ptr,comm):

    # load the CSlib dynamic library: MPI or no-MPI version

    try:
      if comm: self.lib = CDLL("libcsmpi.so",RTLD_GLOBAL)
      else: self.lib = CDLL("libcsnompi.so",RTLD_GLOBAL)
    except:
      etype,value,tb = sys.exc_info()
      traceback.print_exception(etype,value,tb)
      raise OSError,"Could not load CSlib dynamic library"

    # define ctypes API for each library method

    self.lib.cslib_open.argtypes = [c_int,c_char_p,c_void_p,c_void_p,
                                    POINTER(c_void_p)]
    self.lib.cslib_open.restype = None

    self.lib.cslib_close.argtypes = [c_void_p]
    self.lib.cslib_close.restype = None

    self.lib.cslib_send.argtypes = [c_void_p,c_int,c_int]
    self.lib.cslib_send.restype = None

    self.lib.cslib_pack_int.argtypes = [c_void_p,c_int,c_int]
    self.lib.cslib_pack_int.restype = None

    self.lib.cslib_pack_int64.argtypes = [c_void_p,c_int,c_longlong]
    self.lib.cslib_pack_int64.restype = None

    self.lib.cslib_pack_float.argtypes = [c_void_p,c_int,c_float]
    self.lib.cslib_pack_float.restype = None

    self.lib.cslib_pack_double.argtypes = [c_void_p,c_int,c_double]
    self.lib.cslib_pack_double.restype = None

    self.lib.cslib_pack_string.argtypes = [c_void_p,c_int,c_char_p]
    self.lib.cslib_pack_string.restype = None

    self.lib.cslib_pack.argtypes = [c_void_p,c_int,c_int,c_int,c_void_p]
    self.lib.cslib_pack.restype = None

    self.lib.cslib_pack_parallel.argtypes = [c_void_p,c_int,c_int,c_int,
                                             POINTER(c_int),c_int,c_void_p]
    self.lib.cslib_pack_parallel.restype = None

    self.lib.cslib_recv.argtypes = [c_void_p,POINTER(c_int),
                                    POINTER(POINTER(c_int)),
                                    POINTER(POINTER(c_int)),
                                    POINTER(POINTER(c_int))]
    self.lib.cslib_recv.restype = c_int

    self.lib.cslib_unpack_int.argtypes = [c_void_p,c_int]
    self.lib.cslib_unpack_int.restype = c_int

    self.lib.cslib_unpack_int64.argtypes = [c_void_p,c_int]
    self.lib.cslib_unpack_int64.restype = c_longlong

    self.lib.cslib_unpack_float.argtypes = [c_void_p,c_int]
    self.lib.cslib_unpack_float.restype = c_float

    self.lib.cslib_unpack_double.argtypes = [c_void_p,c_int]
    self.lib.cslib_unpack_double.restype = c_double

    self.lib.cslib_unpack_string.argtypes = [c_void_p,c_int]
    self.lib.cslib_unpack_string.restype = c_char_p

    # override return in unpack()
    self.lib.cslib_unpack.argtypes = [c_void_p,c_int]
    self.lib.cslib_unpack.restype = c_void_p

    self.lib.cslib_unpack_data.argtypes = [c_void_p,c_int,c_void_p]
    self.lib.cslib_unpack_data.restype = None

    # override last arg in unpack_parallel()
    self.lib.cslib_unpack_parallel.argtypes = [c_void_p,c_int,c_int,
                                               POINTER(c_int),c_int,c_void_p]
    self.lib.cslib_unpack_parallel.restype = None

    self.lib.cslib_extract.argtypes = [c_void_p,c_int]
    self.lib.cslib_extract.restype = c_int

    # create an instance of CSlib with or w/out MPI communicator

    self.cs = c_void_p()

    if not comm:
      self.lib.cslib_open(csflag,mode,ptr,None,byref(self.cs))
    elif not mpi4pyflag:
      print "Cannot pass MPI communicator to CSlib w/out mpi4py package"
      sys.exit()
    else:
      address = MPI._addressof(comm)
      comm_ptr = c_void_p(address)
      if mode == "mpi/one":
        address = MPI._addressof(ptr)
        ptrcopy = c_void_p(address)
      else: ptrcopy = ptr
      self.lib.cslib_open(csflag,mode,ptrcopy,comm_ptr,byref(self.cs))

  # destroy instance of CSlib

  def __del__(self):
    if self.cs: self.lib.cslib_close(self.cs)

  def close(self):
    self.lib.cslib_close(self.cs)
    self.lib = None

  # send a message

  def send(self,msgID,nfield):
    self.nfield = nfield
    self.lib.cslib_send(self.cs,msgID,nfield)

  # pack one field of message

  def pack_int(self,id,value):
    self.lib.cslib_pack_int(self.cs,id,value)

  def pack_int64(self,id,value):
    self.lib.cslib_pack_int64(self.cs,id,value)

  def pack_float(self,id,value):
    self.lib.cslib_pack_float(self.cs,id,value)

  def pack_double(self,id,value):
    self.lib.cslib_pack_double(self.cs,id,value)

  def pack_string(self,id,value):
    self.lib.cslib_pack_string(self.cs,id,value)

  def pack(self,id,ftype,flen,data):
    cdata = self.data_convert(ftype,flen,data)
    self.lib.cslib_pack(self.cs,id,ftype,flen,cdata)

  def pack_parallel(self,id,ftype,nlocal,ids,nper,data):
    cids = self.data_convert(1,nlocal,ids)
    cdata = self.data_convert(ftype,nper*nlocal,data)
    self.lib.cslib_pack_parallel(self.cs,id,ftype,nlocal,cids,nper,cdata)

  # convert input data to a ctypes vector to pass to CSlib

  def data_convert(self,ftype,flen,data):

    # tflag = type of data
    # tflag = 1 if data is list or tuple
    # tflag = 2 if data is Numpy array
    # tflag = 3 if data is ctypes vector
    # same usage of tflag as in unpack function

    txttype = str(type(data))
    if "numpy" in txttype: tflag = 2
    elif "c_" in txttype: tflag = 3
    else: tflag = 1

    # create ctypes vector out of data to pass to lib
    # cdata = ctypes vector to return
    # NOTE: error check on ftype and tflag everywhere, also flen

    if ftype == 1:
      if tflag == 1: cdata = (flen * c_int)(*data)
      elif tflag == 2: cdata = data.ctypes.data_as(POINTER(c_int))
      elif tflag == 3: cdata = data
    elif ftype == 2:
      if tflag == 1: cdata = (flen * c_longlong)(*data)
      elif tflag == 2: cdata = data.ctypes.data_as(POINTER(c_longlong))
      elif tflag == 3: cdata = data
    elif ftype == 3:
      if tflag == 1: cdata = (flen * c_float)(*data)
      elif tflag == 2: cdata = data.ctypes.data_as(POINTER(c_float))
      elif tflag == 3: cdata = data
    elif ftype == 4:
      if tflag == 1: cdata = (flen * c_double)(*data)
      elif tflag == 2: cdata = data.ctypes.data_as(POINTER(c_double))
      elif tflag == 3: cdata = data

    return cdata

  # receive a message

  def recv(self):
    self.lib.cslib_recv.restype = c_int
    nfield = c_int()
    fieldID = POINTER(c_int)()
    fieldtype = POINTER(c_int)()
    fieldlen = POINTER(c_int)()
    msgID = self.lib.cslib_recv(self.cs,byref(nfield),
                                byref(fieldID),byref(fieldtype),
                                byref(fieldlen))

    # copy returned C args to native Python int and lists
    # store them in class so unpack() methods can access the info

    self.nfield = nfield = nfield.value
    self.fieldID = fieldID[:nfield]
    self.fieldtype = fieldtype[:nfield]
    self.fieldlen = fieldlen[:nfield]

    return msgID,self.nfield,self.fieldID,self.fieldtype,self.fieldlen

  # unpack one field of message
  # tflag = type of data to return
  # 3 = ctypes vector is default, since no conversion required

  def unpack_int(self,id):
    return self.lib.cslib_unpack_int(self.cs,id)

  def unpack_int64(self,id):
    return self.lib.cslib_unpack_int64(self.cs,id)

  def unpack_float(self,id):
    return self.lib.cslib_unpack_float(self.cs,id)

  def unpack_double(self,id):
    return self.lib.cslib_unpack_double(self.cs,id)

  def unpack_string(self,id):
    return self.lib.cslib_unpack_string(self.cs,id)

  def unpack(self,id,tflag=3):
    index = self.fieldID.index(id)

    # reset data type of return so can morph by tflag
    # cannot do this for the generic c_void_p returned by CSlib

    if self.fieldtype[index] == 1:
      self.lib.cslib_unpack.restype = POINTER(c_int)
    elif self.fieldtype[index] == 2:
      self.lib.cslib_unpack.restype = POINTER(c_longlong)
    elif self.fieldtype[index] == 3:
      self.lib.cslib_unpack.restype = POINTER(c_float)
    elif self.fieldtype[index] == 4:
      self.lib.cslib_unpack.restype = POINTER(c_double)
    #elif self.fieldtype[index] == 5:
    #  self.lib.cslib_unpack.restype = POINTER(c_char)

    cdata = self.lib.cslib_unpack(self.cs,id)

    # tflag = user-requested type of data to return
    # tflag = 1 to return data as list
    # tflag = 2 to return data as Numpy array
    # tflag = 3 to return data as ctypes vector
    # same usage of tflag as in pack functions
    # tflag = 2,3 should NOT perform a data copy

    if tflag == 1:
      data = cdata[:self.fieldlen[index]]
    elif tflag == 2:
      if numpyflag == 0:
        print "Cannot return Numpy array w/out numpy package"
        sys.exit()
      data = np.ctypeslib.as_array(cdata,shape=(self.fieldlen[index],))
    elif tflag == 3:
      data = cdata

    return data

  # NOTE: incomplete stub
  # should handle data array like pack() or unpack_parallel() ??

  def unpack_data(self,id,tflag=3):
    index = self.fieldID.index(id)

  # unpack one field of message in parallel
  # tflag = type of data to return
  # 3 = ctypes vector is default, since no conversion required
  # NOTE: allow direct use of user array (e.g. Numpy), if user provides data arg?
  #       as opposed to creating this cdata
  #       does that make any performance difference ?
  #       e.g. should we allow CSlib to populate an existing Numpy array's memory

  def unpack_parallel(self,id,nlocal,ids,nper,tflag=3):
    cids = self.data_convert(1,nlocal,ids)

    # allocate memory for the returned data
    # pass cdata ptr to the memory to CSlib unpack_parallel()
    # this resets data type of last unpack_parallel() arg

    index = self.fieldID.index(id)
    if self.fieldtype[index] == 1: cdata = (nper*nlocal * c_int)()
    elif self.fieldtype[index] == 2: cdata = (nlocal*nper * c_longlong)()
    elif self.fieldtype[index] == 3: cdata = (nlocal*nper * c_float)()
    elif self.fieldtype[index] == 4: cdata = (nlocal*nper * c_double)()
    #elif self.fieldtype[index] == 5: cdata = (nlocal*nper * c_char)()

    self.lib.cslib_unpack_parallel(self.cs,id,nlocal,cids,nper,cdata)

    # tflag = user-requested type of data to return
    # tflag = 1 to return data as list
    # tflag = 2 to return data as Numpy array
    # tflag = 3 to return data as ctypes vector
    # same usage of tflag as in pack functions

    if tflag == 1:
      data = cdata[:nper*nlocal]
    elif tflag == 2:
      if numpyflag == 0:
        print "Cannot return Numpy array w/out numpy package"
        sys.exit()
      # NOTE: next line gives ctypes warning for fieldtype = 2 = 64-bit int
      # not sure why, reported as bug between ctypes and Numpy here:
      # https://stackoverflow.com/questions/4964101/pep-3118-
      #   warning-when-using-ctypes-array-as-numpy-array
      # but why not same warning when just using unpack() ??
      # in Python these lines give same warning:
      # >>> import ctypes,numpy
      # >>> a = (10 * ctypes.c_longlong)()
      # >>> b = numpy.ctypeslib.as_array(a)
      data = np.ctypeslib.as_array(cdata,shape=(nlocal*nper,))
    elif tflag == 3:
      data = cdata

    return data

  # extract a library value

  def extract(self,flag):
    return self.lib.cslib_extract(self.cs,flag)
@ -0,0 +1,239 @@
/* ----------------------------------------------------------------------
   CSlib - Client/server library for code coupling
   http://cslib.sandia.gov, Sandia National Laboratories
   Steve Plimpton, sjplimp@sandia.gov

   Copyright 2018 National Technology & Engineering Solutions of
   Sandia, LLC (NTESS). Under the terms of Contract DE-NA0003525 with
   NTESS, the U.S. Government retains certain rights in this software.
   This software is distributed under the GNU Lesser General Public
   License (LGPL).

   See the README file in the top-level CSlib directory.
------------------------------------------------------------------------- */

// C style library interface to CSlib class

#include <mpi.h>
#include <string.h>
#include <stdio.h>
#include <stdlib.h>

#include "cslib_wrap.h"
#include "cslib.h"

using namespace CSLIB_NS;

// ----------------------------------------------------------------------

void cslib_open(int csflag, const char *mode, const void *ptr,
                const void *pcomm, void **csptr)
{
  CSlib *cs = new CSlib(csflag,mode,ptr,pcomm);
  *csptr = (void *) cs;
}

// ----------------------------------------------------------------------

void cslib_open_fortran(int csflag, const char *mode, const char *str,
                        const void *pcomm, void **csptr)
{
  MPI_Comm ccomm;
  void *pccomm = NULL;

  if (pcomm) {
    MPI_Fint *fcomm = (MPI_Fint *) pcomm;
    ccomm = MPI_Comm_f2c(*fcomm);
    pccomm = &ccomm;
  }

  CSlib *cs = new CSlib(csflag,mode,str,pccomm);
  *csptr = (void *) cs;
}

// ----------------------------------------------------------------------

void cslib_open_fortran_mpi_one(int csflag, const char *mode,
                                const void *pboth, const void *pcomm,
                                void **csptr)
{
  MPI_Comm ccomm,cboth;
  void *pccomm,*pcboth;

  MPI_Fint *fcomm = (MPI_Fint *) pcomm;
  ccomm = MPI_Comm_f2c(*fcomm);
  pccomm = &ccomm;

  MPI_Fint *fboth = (MPI_Fint *) pboth;
  cboth = MPI_Comm_f2c(*fboth);
  pcboth = &cboth;

  CSlib *cs = new CSlib(csflag,mode,pcboth,pccomm);
  *csptr = (void *) cs;
}

// ----------------------------------------------------------------------

void cslib_close(void *ptr)
{
  CSlib *cs = (CSlib *) ptr;
  delete cs;
}

// ----------------------------------------------------------------------

void cslib_send(void *ptr, int msgID, int nfield)
{
  CSlib *cs = (CSlib *) ptr;
  cs->send(msgID,nfield);
}

// ----------------------------------------------------------------------

void cslib_pack_int(void *ptr, int id, int value)
{
  CSlib *cs = (CSlib *) ptr;
  cs->pack_int(id,value);
}

// ----------------------------------------------------------------------

void cslib_pack_int64(void *ptr, int id, int64_t value)
{
  CSlib *cs = (CSlib *) ptr;
  cs->pack_int64(id,value);
}

// ----------------------------------------------------------------------

void cslib_pack_float(void *ptr, int id, float value)
{
  CSlib *cs = (CSlib *) ptr;
  cs->pack_float(id,value);
}

// ----------------------------------------------------------------------

void cslib_pack_double(void *ptr, int id, double value)
{
  CSlib *cs = (CSlib *) ptr;
  cs->pack_double(id,value);
}

// ----------------------------------------------------------------------

void cslib_pack_string(void *ptr, int id, char *value)
{
  CSlib *cs = (CSlib *) ptr;
  cs->pack_string(id,value);
}

// ----------------------------------------------------------------------

void cslib_pack(void *ptr, int id, int ftype, int flen, void *data)
{
  CSlib *cs = (CSlib *) ptr;
  cs->pack(id,ftype,flen,data);
}

// ----------------------------------------------------------------------

void cslib_pack_parallel(void *ptr, int id, int ftype,
                         int nlocal, int *ids, int nper, void *data)
{
  CSlib *cs = (CSlib *) ptr;
  cs->pack_parallel(id,ftype,nlocal,ids,nper,data);
}

// ----------------------------------------------------------------------

int cslib_recv(void *ptr, int *nfield_caller,
               int **fieldID_caller, int **fieldtype_caller,
               int **fieldlen_caller)
{
  CSlib *cs = (CSlib *) ptr;

  int nfield;
  int *fieldID,*fieldtype,*fieldlen;
  int msgID = cs->recv(nfield,fieldID,fieldtype,fieldlen);

  *nfield_caller = nfield;
  *fieldID_caller = fieldID;
  *fieldtype_caller = fieldtype;
  *fieldlen_caller = fieldlen;

  return msgID;
}

// ----------------------------------------------------------------------

int cslib_unpack_int(void *ptr, int id)
{
  CSlib *cs = (CSlib *) ptr;
  return cs->unpack_int(id);
}

// ----------------------------------------------------------------------

int64_t cslib_unpack_int64(void *ptr, int id)
{
  CSlib *cs = (CSlib *) ptr;
  return cs->unpack_int64(id);
}

// ----------------------------------------------------------------------

float cslib_unpack_float(void *ptr, int id)
{
  CSlib *cs = (CSlib *) ptr;
  return cs->unpack_float(id);
}

// ----------------------------------------------------------------------

double cslib_unpack_double(void *ptr, int id)
{
  CSlib *cs = (CSlib *) ptr;
  return cs->unpack_double(id);
}

// ----------------------------------------------------------------------

char *cslib_unpack_string(void *ptr, int id)
{
  CSlib *cs = (CSlib *) ptr;
  return cs->unpack_string(id);
}

// ----------------------------------------------------------------------

void *cslib_unpack(void *ptr, int id)
{
  CSlib *cs = (CSlib *) ptr;
  return cs->unpack(id);
}

// ----------------------------------------------------------------------

void cslib_unpack_data(void *ptr, int id, void *data)
{
  CSlib *cs = (CSlib *) ptr;
  cs->unpack(id,data);
}

// ----------------------------------------------------------------------

void cslib_unpack_parallel(void *ptr, int id, int nlocal, int *ids,
                           int nper, void *data)
{
  CSlib *cs = (CSlib *) ptr;
  cs->unpack_parallel(id,nlocal,ids,nper,data);
}

// ----------------------------------------------------------------------

int cslib_extract(void *ptr, int flag)
{
  CSlib *cs = (CSlib *) ptr;
  return cs->extract(flag);
}
@ -0,0 +1,147 @@
! ISO_C_binding wrapper on CSlib C interface

module cslib_wrap

interface
  subroutine cslib_open_fortran(csflag,mode,str,pcomm,ptr) bind(c)
    use iso_c_binding
    integer(c_int), value :: csflag
    character(c_char) :: mode(*),str(*)
    type(c_ptr), value :: pcomm
    type(c_ptr) :: ptr
  end subroutine cslib_open_fortran

  subroutine cslib_open_fortran_mpi_one(csflag,mode,pboth,pcomm,ptr) bind(c)
    use iso_c_binding
    integer(c_int), value :: csflag
    character(c_char) :: mode(*)
    type(c_ptr), value :: pboth,pcomm
    type(c_ptr) :: ptr
  end subroutine cslib_open_fortran_mpi_one

  subroutine cslib_close(ptr) bind(c)
    use iso_c_binding
    type(c_ptr), value :: ptr
  end subroutine cslib_close

  subroutine cslib_send(ptr,msgID,nfield) bind(c)
    use iso_c_binding
    type(c_ptr), value :: ptr
    integer(c_int), value :: msgID,nfield
  end subroutine cslib_send

  subroutine cslib_pack_int(ptr,id,value) bind(c)
    use iso_c_binding
    type(c_ptr), value :: ptr
    integer(c_int), value :: id
    integer(c_int), value :: value
  end subroutine cslib_pack_int

  subroutine cslib_pack_int64(ptr,id,value) bind(c)
    use iso_c_binding
    type(c_ptr), value :: ptr
    integer(c_int), value :: id
    integer(c_int64_t), value :: value
  end subroutine cslib_pack_int64

  subroutine cslib_pack_float(ptr,id,value) bind(c)
    use iso_c_binding
    type(c_ptr), value :: ptr
    integer(c_int), value :: id
    real(c_float), value :: value
  end subroutine cslib_pack_float

  subroutine cslib_pack_double(ptr,id,value) bind(c)
    use iso_c_binding
    type(c_ptr), value :: ptr
    integer(c_int), value :: id
    real(c_double), value :: value
  end subroutine cslib_pack_double

  subroutine cslib_pack_string(ptr,id,value) bind(c)
    use iso_c_binding
    type(c_ptr), value :: ptr
    integer(c_int), value :: id
    character(c_char) :: value(*)
  end subroutine cslib_pack_string

  subroutine cslib_pack(ptr,id,ftype,flen,data) bind(c)
    use iso_c_binding
    type(c_ptr), value :: ptr
    integer(c_int), value :: id,ftype,flen
    type(c_ptr), value :: data
  end subroutine cslib_pack

  subroutine cslib_pack_parallel(ptr,id,ftype,nlocal,ids,nper,data) bind(c)
    use iso_c_binding
    type(c_ptr), value :: ptr
    integer(c_int), value :: id,ftype,nlocal,nper
    type(c_ptr), value :: ids,data
  end subroutine cslib_pack_parallel

  function cslib_recv(ptr,nfield,fieldID,fieldtype,fieldlen) bind(c)
    use iso_c_binding
    integer(c_int) :: cslib_recv
    type(c_ptr), value :: ptr
    integer(c_int) :: nfield
    type(c_ptr) :: fieldID,fieldtype,fieldlen
  end function cslib_recv

  function cslib_unpack_int(ptr,id) bind(c)
    use iso_c_binding
    integer(c_int) :: cslib_unpack_int
    type(c_ptr), value :: ptr
    integer(c_int), value :: id
  end function cslib_unpack_int

  function cslib_unpack_int64(ptr,id) bind(c)
    use iso_c_binding
    integer(c_int64_t) :: cslib_unpack_int64
    type(c_ptr), value :: ptr
    integer(c_int), value :: id
  end function cslib_unpack_int64

  function cslib_unpack_float(ptr,id) bind(c)
    use iso_c_binding
    real(c_float) :: cslib_unpack_float
    type(c_ptr), value :: ptr
    integer(c_int), value :: id
  end function cslib_unpack_float

  function cslib_unpack_double(ptr,id) bind(c)
    use iso_c_binding
    real(c_double) :: cslib_unpack_double
    type(c_ptr), value :: ptr
    integer(c_int), value :: id
  end function cslib_unpack_double

  function cslib_unpack_string(ptr,id) bind(c)
    use iso_c_binding
    type(c_ptr) :: cslib_unpack_string
    type(c_ptr), value :: ptr
    integer(c_int), value :: id
  end function cslib_unpack_string

  function cslib_unpack(ptr,id) bind(c)
    use iso_c_binding
    type(c_ptr) :: cslib_unpack
    type(c_ptr), value :: ptr
    integer(c_int), value :: id
  end function cslib_unpack

  subroutine cslib_unpack_parallel(ptr,id,nlocal,ids,nper,data) bind(c)
    use iso_c_binding
    type(c_ptr), value :: ptr
    integer(c_int), value :: id,nlocal,nper
    type(c_ptr), value :: ids,data
  end subroutine cslib_unpack_parallel

  function cslib_extract(ptr,flag) bind(c)
    use iso_c_binding
    integer(c_int) :: cslib_extract
    type(c_ptr), value :: ptr
    integer(c_int), value :: flag
  end function cslib_extract
end interface

end module cslib_wrap
@ -0,0 +1,54 @@
/* ----------------------------------------------------------------------
   CSlib - Client/server library for code coupling
   http://cslib.sandia.gov, Sandia National Laboratories
   Steve Plimpton, sjplimp@sandia.gov

   Copyright 2018 National Technology & Engineering Solutions of
   Sandia, LLC (NTESS). Under the terms of Contract DE-NA0003525 with
   NTESS, the U.S. Government retains certain rights in this software.
   This software is distributed under the GNU Lesser General Public
   License (LGPL).

   See the README file in the top-level CSlib directory.
------------------------------------------------------------------------- */

/* C style library interface to CSlib class
   ifdefs allow this file to be included in a C program
*/

#include <stdint.h>    /* for int64_t in the prototypes below */

#ifdef __cplusplus
extern "C" {
#endif

void cslib_open(int, const char *, const void *, const void *, void **);
void cslib_open_fortran(int, const char *, const char *, const void *, void **);
void cslib_open_fortran_mpi_one(int, const char *, const void *,
                                const void *, void **);
void cslib_close(void *);

void cslib_send(void *, int, int);

void cslib_pack_int(void *, int, int);
void cslib_pack_int64(void *, int, int64_t);
void cslib_pack_float(void *, int, float);
void cslib_pack_double(void *, int, double);
void cslib_pack_string(void *, int, char *);
void cslib_pack(void *, int, int, int, void *);
void cslib_pack_parallel(void *, int, int, int, int *, int, void *);

int cslib_recv(void *, int *, int **, int **, int **);

int cslib_unpack_int(void *, int);
int64_t cslib_unpack_int64(void *, int);
float cslib_unpack_float(void *, int);
double cslib_unpack_double(void *, int);
char *cslib_unpack_string(void *, int);
void *cslib_unpack(void *, int);
void cslib_unpack_data(void *, int, void *);
void cslib_unpack_parallel(void *, int, int, int *, int, void *);

int cslib_extract(void *, int);

#ifdef __cplusplus
}
#endif
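For comparison with the C++ sketch shown after cslib.h above, here is a minimal server-side sketch using only the C interface declared in this header. It is illustrative, not part of this commit: "file" mode, a rendezvous file named tmp.couple, and a single int field with ID 1 echoed back to the client are all assumptions.

/* Minimal server-side sketch of the C interface (illustrative only). */

#include <stdio.h>
#include "cslib_wrap.h"

int main()
{
  void *cs;
  int msgID,nfield;
  int *fieldID,*fieldtype,*fieldlen;

  /* csflag = 1 means act as server; no MPI communicator passed */
  cslib_open(1,"file","tmp.couple",NULL,&cs);

  msgID = cslib_recv(cs,&nfield,&fieldID,&fieldtype,&fieldlen);
  int value = cslib_unpack_int(cs,1);
  printf("msg %d with %d field(s), field 1 = %d\n",msgID,nfield,value);

  /* reply: declare 1 field, then pack it, which triggers the send */
  cslib_send(cs,msgID,1);
  cslib_pack_int(cs,1,value);

  cslib_close(cs);
  return 0;
}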
@ -0,0 +1,110 @@
/* ----------------------------------------------------------------------
   CSlib - Client/server library for code coupling
   http://cslib.sandia.gov, Sandia National Laboratories
   Steve Plimpton, sjplimp@sandia.gov

   Copyright 2018 National Technology & Engineering Solutions of
   Sandia, LLC (NTESS). Under the terms of Contract DE-NA0003525 with
   NTESS, the U.S. Government retains certain rights in this software.
   This software is distributed under the GNU Lesser General Public
   License (LGPL).

   See the README file in the top-level CSlib directory.
------------------------------------------------------------------------- */

#include <mpi.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

#include "msg.h"

using namespace CSLIB_NS;

/* ---------------------------------------------------------------------- */

Msg::Msg(int csflag, const void *ptr, MPI_Comm cworld)
{
  world = cworld;
  MPI_Comm_rank(world,&me);
  MPI_Comm_size(world,&nprocs);

  init(csflag);
}

/* ---------------------------------------------------------------------- */

Msg::Msg(int csflag, const void *ptr)
{
  world = 0;
  me = 0;
  nprocs = 1;

  init(csflag);
}

/* ---------------------------------------------------------------------- */

void Msg::init(int csflag)
{
  client = server = 0;
  if (csflag == 0) client = 1;
  else if (csflag == 1) server = 1;

  nsend = nrecv = 0;
}

/* ---------------------------------------------------------------------- */

void Msg::allocate(int nheader, int &maxheader, int *&header,
                   int nbuf, int &maxbuf, char *&buf)
{
  if (nheader > maxheader) {
    sfree(header);
    maxheader = nheader;
    header = (int *) smalloc(maxheader*sizeof(int));
  }

  if (nbuf > maxbuf) {
    sfree(buf);
    maxbuf = nbuf;
    buf = (char *) smalloc(maxbuf*sizeof(char));
  }
}

/* ---------------------------------------------------------------------- */

void *Msg::smalloc(int nbytes)
{
  if (nbytes == 0) return NULL;
  void *ptr = (void *) malloc(nbytes);
  if (ptr == NULL) {
    char str[128];
    sprintf(str,"Failed to allocate %d bytes",nbytes);
    error_one(str);    // was missing: report and abort on failed malloc
  }
  return ptr;
}

/* ---------------------------------------------------------------------- */

void Msg::sfree(void *ptr)
{
  if (ptr == NULL) return;
  free(ptr);
}

/* ---------------------------------------------------------------------- */

void Msg::error_all(const char *str)
{
  if (me == 0) printf("CSlib ERROR: %s\n",str);
  MPI_Abort(world,1);
}

/* ---------------------------------------------------------------------- */

void Msg::error_one(const char *str)
{
  printf("CSlib ERROR: %s\n",str);
  MPI_Abort(world,1);
}
@ -0,0 +1,52 @@
/* ----------------------------------------------------------------------
   CSlib - Client/server library for code coupling
   http://cslib.sandia.gov, Sandia National Laboratories
   Steve Plimpton, sjplimp@sandia.gov

   Copyright 2018 National Technology & Engineering Solutions of
   Sandia, LLC (NTESS). Under the terms of Contract DE-NA0003525 with
   NTESS, the U.S. Government retains certain rights in this software.
   This software is distributed under the GNU Lesser General Public
   License (LGPL).

   See the README file in the top-level CSlib directory.
------------------------------------------------------------------------- */

#ifndef MSG_H
#define MSG_H

#include <mpi.h>

namespace CSLIB_NS {

class Msg {
 public:
  int nsend,nrecv;
  MPI_Comm world;

  Msg(int, const void *, MPI_Comm);
  Msg(int, const void *);
  virtual ~Msg() {}
  virtual void send(int, int *, int, char *) = 0;
  virtual void recv(int &, int *&, int &, char *&) = 0;

 protected:
  int me,nprocs;
  int client,server;

  int nfield;
  int *fieldID,*fieldtype,*fieldlen;
  int lengths[2];

  void init(int);
  void allocate(int, int &, int *&, int, int &, char *&);
  void *smalloc(int);
  void sfree(void *);
  void error_all(const char *);
  void error_one(const char *);
};

}

#endif
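The two pure virtual methods above are the entire contract a transport must satisfy: serialize a header plus a byte buffer on send, and return (possibly reallocated) copies of both on recv. As an illustration only, not part of this commit, a hypothetical in-memory loopback transport built on this interface could look like the following; it is handy for exercising CSlib pack/unpack logic in a single process.

// Hypothetical loopback transport (illustrative sketch, not in this commit)

#include <string.h>
#include "msg.h"

namespace CSLIB_NS {

class MsgLoop : public Msg {
 public:
  MsgLoop(int csflag, const void *ptr) : Msg(csflag,ptr),
    sheader(NULL), sbuf(NULL) {}
  ~MsgLoop() { sfree(sheader); sfree(sbuf); }

  // stash a copy of the outgoing message
  void send(int nheader, int *header, int nbuf, char *buf) {
    lengths[0] = nheader;
    lengths[1] = nbuf;
    sfree(sheader); sfree(sbuf);
    sheader = (int *) smalloc(nheader*sizeof(int));
    sbuf = (char *) smalloc(nbuf);
    memcpy(sheader,header,nheader*sizeof(int));
    memcpy(sbuf,buf,nbuf);
  }

  // hand the stashed message back, growing caller buffers as needed
  void recv(int &maxheader, int *&header, int &maxbuf, char *&buf) {
    allocate(lengths[0],maxheader,header,lengths[1],maxbuf,buf);
    memcpy(header,sheader,lengths[0]*sizeof(int));
    memcpy(buf,sbuf,lengths[1]);
  }

 private:
  int *sheader;
  char *sbuf;
};

}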
@ -0,0 +1,143 @@
/* ----------------------------------------------------------------------
   CSlib - Client/server library for code coupling
   http://cslib.sandia.gov, Sandia National Laboratories
   Steve Plimpton, sjplimp@sandia.gov

   Copyright 2018 National Technology & Engineering Solutions of
   Sandia, LLC (NTESS). Under the terms of Contract DE-NA0003525 with
   NTESS, the U.S. Government retains certain rights in this software.
   This software is distributed under the GNU Lesser General Public
   License (LGPL).

   See the README file in the top-level CSlib directory.
------------------------------------------------------------------------- */

#include <mpi.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <stdint.h>
#include <unistd.h>

#include "msg_file.h"

using namespace CSLIB_NS;

#define MAXLINE 256
#define SLEEP 0.1       // delay in CPU secs to check for message file

/* ---------------------------------------------------------------------- */

MsgFile::MsgFile(int csflag, const void *ptr, MPI_Comm cworld) :
  Msg(csflag, ptr, cworld)
{
  char *filename = (char *) ptr;
  init(filename);
}

/* ---------------------------------------------------------------------- */

MsgFile::MsgFile(int csflag, const void *ptr) : Msg(csflag, ptr)
{
  char *filename = (char *) ptr;
  init(filename);
}

/* ---------------------------------------------------------------------- */

MsgFile::~MsgFile()
{
  delete [] fileroot;
}

/* ---------------------------------------------------------------------- */

void MsgFile::init(char *filename)
{
  int n = strlen(filename) + 1;
  fileroot = new char[n];
  strcpy(fileroot,filename);
}

/* ---------------------------------------------------------------------- */

void MsgFile::send(int nheader, int *header, int nbuf, char *buf)
{
  char filename[MAXLINE];

  lengths[0] = nheader;
  lengths[1] = nbuf;

  if (me == 0) {
    if (client) sprintf(filename,"%s.%s",fileroot,"client");
    else if (server) sprintf(filename,"%s.%s",fileroot,"server");

    fp = fopen(filename,"wb");
    if (!fp) error_one("send(): Could not open send message file");
    fwrite(lengths,sizeof(int),2,fp);
    fwrite(header,sizeof(int),nheader,fp);
    fwrite(buf,1,nbuf,fp);
    fclose(fp);
  }

  // create empty signal file

  if (me == 0) {
    if (client) sprintf(filename,"%s.%s",fileroot,"client.signal");
    else if (server) sprintf(filename,"%s.%s",fileroot,"server.signal");
    fp = fopen(filename,"w");
    fclose(fp);
  }
}

/* ---------------------------------------------------------------------- */

void MsgFile::recv(int &maxheader, int *&header, int &maxbuf, char *&buf)
{
  char filename[MAXLINE];

  // wait until signal file exists to open message file

  if (me == 0) {
    if (client) sprintf(filename,"%s.%s",fileroot,"server.signal");
    else if (server) sprintf(filename,"%s.%s",fileroot,"client.signal");

    int delay = (int) (1000000 * SLEEP);
    while (1) {
      fp = fopen(filename,"r");
      if (fp) break;
      usleep(delay);
    }
    fclose(fp);

    if (client) sprintf(filename,"%s.%s",fileroot,"server");
    else if (server) sprintf(filename,"%s.%s",fileroot,"client");
    fp = fopen(filename,"rb");
    if (!fp) error_one("recv(): Could not open recv message file");
  }

  // read and broadcast data

  if (me == 0) fread(lengths,sizeof(int),2,fp);
  if (nprocs > 1) MPI_Bcast(lengths,2,MPI_INT,0,world);

  int nheader = lengths[0];
  int nbuf = lengths[1];
  allocate(nheader,maxheader,header,nbuf,maxbuf,buf);

  if (me == 0) fread(header,sizeof(int),nheader,fp);
  if (nprocs > 1) MPI_Bcast(header,nheader,MPI_INT,0,world);

  if (me == 0) fread(buf,1,nbuf,fp);
  if (nprocs > 1) MPI_Bcast(buf,nbuf,MPI_CHAR,0,world);

  // delete both message and signal file

  if (me == 0) {
    fclose(fp);
    unlink(filename);
    if (client) sprintf(filename,"%s.%s",fileroot,"server.signal");
    else if (server) sprintf(filename,"%s.%s",fileroot,"client.signal");
    unlink(filename);
  }
}
@ -0,0 +1,40 @@
/* ----------------------------------------------------------------------
   CSlib - Client/server library for code coupling
   http://cslib.sandia.gov, Sandia National Laboratories
   Steve Plimpton, sjplimp@sandia.gov

   Copyright 2018 National Technology & Engineering Solutions of
   Sandia, LLC (NTESS). Under the terms of Contract DE-NA0003525 with
   NTESS, the U.S. Government retains certain rights in this software.
   This software is distributed under the GNU Lesser General Public
   License (LGPL).

   See the README file in the top-level CSlib directory.
------------------------------------------------------------------------- */

#ifndef MSG_FILE_H
#define MSG_FILE_H

#include <stdio.h>
#include "msg.h"

namespace CSLIB_NS {

class MsgFile : public Msg {
 public:
  MsgFile(int, const void *, MPI_Comm);
  MsgFile(int, const void *);
  ~MsgFile();
  void send(int, int *, int, char *);
  void recv(int &, int *&, int &, char *&);

 private:
  char *fileroot;
  FILE *fp;

  void init(char *);
};

}

#endif
@ -0,0 +1,82 @@
/* ----------------------------------------------------------------------
   CSlib - Client/server library for code coupling
   http://cslib.sandia.gov, Sandia National Laboratories
   Steve Plimpton, sjplimp@sandia.gov

   Copyright 2018 National Technology & Engineering Solutions of
   Sandia, LLC (NTESS). Under the terms of Contract DE-NA0003525 with
   NTESS, the U.S. Government retains certain rights in this software.
   This software is distributed under the GNU Lesser General Public
   License (LGPL).

   See the README file in the top-level CSlib directory.
------------------------------------------------------------------------- */

#include <mpi.h>
#include <string.h>
#include <stdlib.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#include "msg_mpi_one.h"

using namespace CSLIB_NS;

/* ---------------------------------------------------------------------- */

MsgMPIOne::MsgMPIOne(int csflag, const void *ptr, MPI_Comm cworld) :
  Msg(csflag, ptr, cworld)
{
  // NOTE: ideally would skip this call if mpi/two
  init(ptr);
}

/* ---------------------------------------------------------------------- */

void MsgMPIOne::init(const void *ptr)
{
  MPI_Comm *pbothcomm = (MPI_Comm *) ptr;
  bothcomm = *pbothcomm;

  if (client) {
    MPI_Comm_size(world,&nprocs);
    otherroot = nprocs;
  } else if (server) {
    otherroot = 0;
  }
}

/* ---------------------------------------------------------------------- */

void MsgMPIOne::send(int nheader, int *header, int nbuf, char *buf)
{
  lengths[0] = nheader;
  lengths[1] = nbuf;

  if (me == 0) {
    MPI_Send(lengths,2,MPI_INT,otherroot,0,bothcomm);
    MPI_Send(header,nheader,MPI_INT,otherroot,0,bothcomm);
    MPI_Send(buf,nbuf,MPI_CHAR,otherroot,0,bothcomm);
  }
}

/* ---------------------------------------------------------------------- */

void MsgMPIOne::recv(int &maxheader, int *&header, int &maxbuf, char *&buf)
{
  MPI_Status status;

  if (me == 0) MPI_Recv(lengths,2,MPI_INT,otherroot,0,bothcomm,&status);
  if (nprocs > 1) MPI_Bcast(lengths,2,MPI_INT,0,world);

  int nheader = lengths[0];
  int nbuf = lengths[1];
  allocate(nheader,maxheader,header,nbuf,maxbuf,buf);

  if (me == 0) MPI_Recv(header,nheader,MPI_INT,otherroot,0,bothcomm,&status);
  if (nprocs > 1) MPI_Bcast(header,nheader,MPI_INT,0,world);

  if (me == 0) MPI_Recv(buf,nbuf,MPI_CHAR,otherroot,0,bothcomm,&status);
  if (nprocs > 1) MPI_Bcast(buf,nbuf,MPI_CHAR,0,world);
}
@ -0,0 +1,38 @@
/* ----------------------------------------------------------------------
   CSlib - Client/server library for code coupling
   http://cslib.sandia.gov, Sandia National Laboratories
   Steve Plimpton, sjplimp@sandia.gov

   Copyright 2018 National Technology & Engineering Solutions of
   Sandia, LLC (NTESS). Under the terms of Contract DE-NA0003525 with
   NTESS, the U.S. Government retains certain rights in this software.
   This software is distributed under the GNU Lesser General Public
   License (LGPL).

   See the README file in the top-level CSlib directory.
------------------------------------------------------------------------- */

#ifndef MSG_MPI_ONE_H
#define MSG_MPI_ONE_H

#include "msg.h"

namespace CSLIB_NS {

class MsgMPIOne : public Msg {
 public:
  MsgMPIOne(int, const void *, MPI_Comm);
  virtual ~MsgMPIOne() {}
  void send(int, int *, int, char *);
  void recv(int &, int *&, int &, char *&);

 protected:
  MPI_Comm bothcomm;
  int otherroot;

  void init(const void *);
};

}

#endif
@ -0,0 +1,81 @@
/* ----------------------------------------------------------------------
   CSlib - Client/server library for code coupling
   http://cslib.sandia.gov, Sandia National Laboratories
   Steve Plimpton, sjplimp@sandia.gov

   Copyright 2018 National Technology & Engineering Solutions of
   Sandia, LLC (NTESS). Under the terms of Contract DE-NA0003525 with
   NTESS, the U.S. Government retains certain rights in this software.
   This software is distributed under the GNU Lesser General Public
   License (LGPL).

   See the README file in the top-level CSlib directory.
------------------------------------------------------------------------- */

#include <mpi.h>
#include <string.h>
#include <stdlib.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#include "msg_mpi_two.h"

using namespace CSLIB_NS;

/* ---------------------------------------------------------------------- */

MsgMPITwo::MsgMPITwo(int csflag, const void *ptr, MPI_Comm cworld) :
  MsgMPIOne(csflag, ptr, cworld)
{
  char *filename = (char *) ptr;
  init(filename);
}

/* ---------------------------------------------------------------------- */

MsgMPITwo::~MsgMPITwo()
{
  // free the inter comm that spans both client and server

  MPI_Comm_free(&bothcomm);
  MPI_Close_port(port);
}

/* ---------------------------------------------------------------------- */

void MsgMPITwo::init(char *filename)
{
  if (client) {
    if (me == 0) {
      FILE *fp = NULL;
      while (!fp) {
        fp = fopen(filename,"r");
        if (!fp) sleep(1);
      }
      fgets(port,MPI_MAX_PORT_NAME,fp);
      //printf("Client port: %s\n",port);
      fclose(fp);
    }

    MPI_Bcast(port,MPI_MAX_PORT_NAME,MPI_CHAR,0,world);
    MPI_Comm_connect(port,MPI_INFO_NULL,0,world,&bothcomm);
    //if (me == 0) printf("CLIENT comm connect\n");
    if (me == 0) unlink(filename);

  } else if (server) {
    MPI_Open_port(MPI_INFO_NULL,port);

    if (me == 0) {
      //printf("Server name: %s\n",port);
      FILE *fp = fopen(filename,"w");
      fprintf(fp,"%s",port);
      fclose(fp);
    }

    MPI_Comm_accept(port,MPI_INFO_NULL,0,world,&bothcomm);
    //if (me == 0) printf("SERVER comm accept\n");
  }

  otherroot = 0;
}
@ -0,0 +1,35 @@
/* ----------------------------------------------------------------------
   CSlib - Client/server library for code coupling
   http://cslib.sandia.gov, Sandia National Laboratories
   Steve Plimpton, sjplimp@sandia.gov

   Copyright 2018 National Technology & Engineering Solutions of
   Sandia, LLC (NTESS). Under the terms of Contract DE-NA0003525 with
   NTESS, the U.S. Government retains certain rights in this software.
   This software is distributed under the GNU Lesser General Public
   License (LGPL).

   See the README file in the top-level CSlib directory.
------------------------------------------------------------------------- */

#ifndef MSG_MPI_TWO_H
#define MSG_MPI_TWO_H

#include "msg_mpi_one.h"

namespace CSLIB_NS {

class MsgMPITwo : public MsgMPIOne {
 public:
  MsgMPITwo(int, const void *, MPI_Comm);
  ~MsgMPITwo();

 private:
  char port[MPI_MAX_PORT_NAME];

  void init(char *);
};

}

#endif
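With the file and MPI transports defined above, and the ZMQ variant that follows, the commit provides four interchangeable Msg subclasses. The sketch below shows how they plausibly map onto the mode strings used by the wrappers in this commit ("file", "zmq", "mpi/one", "mpi/two"); the actual dispatch lives in the CSlib constructor, so treat this as an illustrative reconstruction, not the library's code.

// Illustrative transport selection by mode string (hypothetical helper)

#include <string.h>
#include "msg_file.h"
#include "msg_zmq.h"
#include "msg_mpi_one.h"
#include "msg_mpi_two.h"

using namespace CSLIB_NS;

static Msg *make_transport(int csflag, const char *mode,
                           const void *ptr, MPI_Comm world)
{
  if (strcmp(mode,"file") == 0)        // ptr = filename for msg exchange
    return new MsgFile(csflag,ptr,world);
  if (strcmp(mode,"zmq") == 0)         // ptr = "host:port" socket string
    return new MsgZMQ(csflag,ptr,world);
  if (strcmp(mode,"mpi/one") == 0)     // ptr = MPI_Comm spanning both codes
    return new MsgMPIOne(csflag,ptr,world);
  if (strcmp(mode,"mpi/two") == 0)     // ptr = filename for port handshake
    return new MsgMPITwo(csflag,ptr,world);
  return NULL;                         // unknown mode
}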
@ -0,0 +1,140 @@
/* ----------------------------------------------------------------------
   CSlib - Client/server library for code coupling
   http://cslib.sandia.gov, Sandia National Laboratories
   Steve Plimpton, sjplimp@sandia.gov

   Copyright 2018 National Technology & Engineering Solutions of
   Sandia, LLC (NTESS). Under the terms of Contract DE-NA0003525 with
   NTESS, the U.S. Government retains certain rights in this software.
   This software is distributed under the GNU Lesser General Public
   License (LGPL).

   See the README file in the top-level CSlib directory.
------------------------------------------------------------------------- */

#include <mpi.h>
#include <zmq.h>
#include <string.h>
#include <stdlib.h>
#include <stdint.h>
#include <stdio.h>

#include "msg_zmq.h"

using namespace CSLIB_NS;

/* ---------------------------------------------------------------------- */

MsgZMQ::MsgZMQ(int csflag, const void *ptr, MPI_Comm cworld) :
  Msg(csflag, ptr, cworld)
{
  char *port = (char *) ptr;
  init(port);
}

MsgZMQ::MsgZMQ(int csflag, const void *ptr) : Msg(csflag, ptr)
{
  char *port = (char *) ptr;
  init(port);
}

/* ---------------------------------------------------------------------- */

MsgZMQ::~MsgZMQ()
{
  if (me == 0) {
    zmq_close(socket);
    zmq_ctx_destroy(context);
  }
}

/* ---------------------------------------------------------------------- */

void MsgZMQ::init(char *port)
{
#ifdef ZMQ_NO
  error_all("constructor(): Library not built with ZMQ support");
#endif

  if (me == 0) {
    int n = strlen(port) + 8;
    char *socket_name = new char[n];
    strcpy(socket_name,"tcp://");
    strcat(socket_name,port);

    if (client) {
      context = zmq_ctx_new();
      socket = zmq_socket(context,ZMQ_REQ);
      zmq_connect(socket,socket_name);
    } else if (server) {
      context = zmq_ctx_new();
      socket = zmq_socket(context,ZMQ_REP);
      int rc = zmq_bind(socket,socket_name);
      if (rc) error_one("constructor(): Server could not make socket connection");
    }

    delete [] socket_name;
  }
}

/* ----------------------------------------------------------------------
   client/server sockets (REQ/REP) must follow this protocol:
     client sends request (REQ) which server receives
     server sends response (REP) which client receives
     every exchange is of this form, server cannot initiate a send
   thus each ZMQ send below has a following ZMQ recv, except last one
   if client calls send(), it will next call recv()
   if server calls send(), it will next call recv() from its wait loop
   in either case, recv() issues a ZMQ recv to match last ZMQ send here
------------------------------------------------------------------------- */

void MsgZMQ::send(int nheader, int *header, int nbuf, char *buf)
{
  lengths[0] = nheader;
  lengths[1] = nbuf;

  if (me == 0) {
    zmq_send(socket,lengths,2*sizeof(int),0);
    zmq_recv(socket,NULL,0,0);
  }

  if (me == 0) {
    zmq_send(socket,header,nheader*sizeof(int),0);
    zmq_recv(socket,NULL,0,0);
  }

  if (me == 0) zmq_send(socket,buf,nbuf,0);
}

/* ----------------------------------------------------------------------
   client/server sockets (REQ/REP) must follow this protocol:
     client sends request (REQ) which server receives
     server sends response (REP) which client receives
     every exchange is of this form, server cannot initiate a send
   thus each ZMQ recv below has a following ZMQ send, except last one
   if client calls recv(), it will next call send() to ping server again,
   if server calls recv(), it will next call send() to respond to client
   in either case, send() issues a ZMQ send to match last ZMQ recv here
------------------------------------------------------------------------- */

void MsgZMQ::recv(int &maxheader, int *&header, int &maxbuf, char *&buf)
{
  if (me == 0) {
    zmq_recv(socket,lengths,2*sizeof(int),0);
    zmq_send(socket,NULL,0,0);
  }
  if (nprocs > 1) MPI_Bcast(lengths,2,MPI_INT,0,world);

  int nheader = lengths[0];
  int nbuf = lengths[1];
  allocate(nheader,maxheader,header,nbuf,maxbuf,buf);

  if (me == 0) {
    zmq_recv(socket,header,nheader*sizeof(int),0);
    zmq_send(socket,NULL,0,0);
  }
  if (nprocs > 1) MPI_Bcast(header,nheader,MPI_INT,0,world);

  if (me == 0) zmq_recv(socket,buf,nbuf,0);
  if (nprocs > 1) MPI_Bcast(buf,nbuf,MPI_CHAR,0,world);
}
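Because the REQ/REP exchange above is plain ZMQ, a coupled code does not need CSlib at all on its side of the socket. The following is a minimal sketch of a bare client performing one complete exchange against a MsgZMQ peer; the endpoint tcp://localhost:5555, the single placeholder header int, and the empty data buffers are illustrative assumptions, not part of CSlib, and a real peer must pack whatever header layout its CSlib counterpart expects.

#include <zmq.h>

int main()
{
  void *context = zmq_ctx_new();
  void *socket = zmq_socket(context,ZMQ_REQ);
  zmq_connect(socket,"tcp://localhost:5555");  // hypothetical endpoint

  // request: lengths, then header, then data buffer, reading an
  // empty ack after the first two parts, mirroring MsgZMQ::send()
  int lengths[2] = {1,0};                      // 1 header int, 0 data bytes
  int header[1] = {0};                         // placeholder header contents
  zmq_send(socket,lengths,2*sizeof(int),0);
  zmq_recv(socket,NULL,0,0);
  zmq_send(socket,header,sizeof(int),0);
  zmq_recv(socket,NULL,0,0);
  zmq_send(socket,NULL,0,0);                   // empty data buffer

  // response: the mirror image, as in MsgZMQ::recv(); a longer
  // server header would be silently truncated in this sketch
  zmq_recv(socket,lengths,2*sizeof(int),0);
  zmq_send(socket,NULL,0,0);
  zmq_recv(socket,header,sizeof(int),0);
  zmq_send(socket,NULL,0,0);
  zmq_recv(socket,NULL,0,0);                   // reply buffer assumed empty

  zmq_close(socket);
  zmq_ctx_destroy(context);
  return 0;
}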
@ -0,0 +1,38 @@
/* ----------------------------------------------------------------------
   CSlib - Client/server library for code coupling
   http://cslib.sandia.gov, Sandia National Laboratories
   Steve Plimpton, sjplimp@sandia.gov

   Copyright 2018 National Technology & Engineering Solutions of
   Sandia, LLC (NTESS). Under the terms of Contract DE-NA0003525 with
   NTESS, the U.S. Government retains certain rights in this software.
   This software is distributed under the GNU Lesser General Public
   License (LGPL).

   See the README file in the top-level CSlib directory.
------------------------------------------------------------------------- */

#ifndef MSG_ZMQ_H
#define MSG_ZMQ_H

#include "msg.h"

namespace CSLIB_NS {

class MsgZMQ : public Msg {
 public:
  MsgZMQ(int, const void *, MPI_Comm);
  MsgZMQ(int, const void *);
  ~MsgZMQ();
  void send(int, int *, int, char *);
  void recv(int &, int *&, int &, char *&);

 private:
  void *context,*socket;

  void init(char *);
};

}

#endif
@ -0,0 +1,67 @@
# Install/unInstall package files in LAMMPS
# mode = 0/1/2 for uninstall/install/update

mode=$1

# arg1 = file, arg2 = file it depends on

# enforce using portable C locale
LC_ALL=C
export LC_ALL

action () {
  if (test $mode = 0) then
    rm -f ../$1
  elif (! cmp -s $1 ../$1) then
    if (test -z "$2" || test -e ../$2) then
      cp $1 ..
      if (test $mode = 2) then
        echo "  updating src/$1"
      fi
    fi
  elif (test -n "$2") then
    if (test ! -e ../$2) then
      rm -f ../$1
    fi
  fi
}

# all package files with no dependencies

for file in *.cpp *.h; do
  test -f ${file} && action $file
done

# edit 2 Makefile.package files to include/exclude package info

if (test $1 = 1) then

  if (test -e ../Makefile.package) then
    sed -i -e 's/[^ \t]*message[^ \t]* //' ../Makefile.package
    sed -i -e 's|^PKG_INC =[ \t]*|&-I../../lib/message/cslib/src |' ../Makefile.package
    sed -i -e 's|^PKG_PATH =[ \t]*|&-L../../lib/message/cslib/src |' ../Makefile.package
    sed -i -e 's|^PKG_LIB =[ \t]*|&-lmessage |' ../Makefile.package
    sed -i -e 's|^PKG_SYSINC =[ \t]*|&$(message_SYSINC) |' ../Makefile.package
    sed -i -e 's|^PKG_SYSLIB =[ \t]*|&$(message_SYSLIB) |' ../Makefile.package
    sed -i -e 's|^PKG_SYSPATH =[ \t]*|&$(message_SYSPATH) |' ../Makefile.package
  fi

  if (test -e ../Makefile.package.settings) then
    sed -i -e '/^include.*message.*$/d' ../Makefile.package.settings
    # multiline form needed for BSD sed on Macs
    sed -i -e '4 i \
include ..\/..\/lib\/message\/Makefile.lammps
' ../Makefile.package.settings
  fi

elif (test $1 = 0) then

  if (test -e ../Makefile.package) then
    sed -i -e 's/[^ \t]*message[^ \t]* //' ../Makefile.package
  fi

  if (test -e ../Makefile.package.settings) then
    sed -i -e '/^include.*message.*$/d' ../Makefile.package.settings
  fi

fi
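This script follows the usual LAMMPS package convention and is normally invoked by the build system rather than by hand: a make yes-message in the src directory is expected to run it with mode=1, and make no-message with mode=0 (the exact make targets are an assumption here, mirroring the other packages).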
@ -0,0 +1,270 @@
/* ----------------------------------------------------------------------
   LAMMPS - Large-scale Atomic/Molecular Massively Parallel Simulator
   http://lammps.sandia.gov, Sandia National Laboratories
   Steve Plimpton, sjplimp@sandia.gov

   Copyright (2003) Sandia Corporation. Under the terms of Contract
   DE-AC04-94AL85000 with Sandia Corporation, the U.S. Government retains
   certain rights in this software. This software is distributed under
   the GNU General Public License.

   See the README file in the top-level LAMMPS directory.
------------------------------------------------------------------------- */

#include <climits>
#include <cstdio>
#include <cstring>
#include "fix_client_md.h"
#include "cslib.h"
#include "atom.h"
#include "domain.h"
#include "memory.h"
#include "error.h"

#include "comm.h"
#include "update.h"

using namespace LAMMPS_NS;
using namespace CSLIB_NS;
using namespace FixConst;

enum{SETUP=1,STEP};
enum{UNITS=1,DIM,NATOMS,NTYPES,BOXLO,BOXHI,BOXTILT,TYPES,COORDS,CHARGE};
enum{FORCES=1,ENERGY,VIRIAL};

/* ---------------------------------------------------------------------- */

FixClientMD::FixClientMD(LAMMPS *lmp, int narg, char **arg) :
  Fix(lmp, narg, arg)
{
  if (lmp->clientserver != 1)
    error->all(FLERR,"Fix client/md requires LAMMPS be running as a client");
  if (!atom->map_style) error->all(FLERR,"Fix client/md requires atom map");

  if (sizeof(tagint) != 4)
    error->all(FLERR,"Fix client/md requires 4-byte atom IDs");

  scalar_flag = 1;
  global_freq = 1;
  extscalar = 1;
  virial_flag = 1;
  thermo_virial = 1;

  maxatom = 0;
  xpbc = NULL;
}

/* ---------------------------------------------------------------------- */

FixClientMD::~FixClientMD()
{
  memory->destroy(xpbc);

  CSlib *cs = (CSlib *) lmp->cslib;

  // all-done message to server

  cs->send(-1,0);

  int nfield;
  int *fieldID,*fieldtype,*fieldlen;
  int msgID = cs->recv(nfield,fieldID,fieldtype,fieldlen);

  // clean-up

  delete cs;
  lmp->cslib = NULL;
}

/* ---------------------------------------------------------------------- */

int FixClientMD::setmask()
{
  int mask = 0;
  mask |= POST_FORCE;
  mask |= MIN_POST_FORCE;
  mask |= THERMO_ENERGY;
  return mask;
}

/* ---------------------------------------------------------------------- */

void FixClientMD::init()
{
  if (3*atom->natoms > INT_MAX)
    error->all(FLERR,"Fix client/md max atoms is 1/3 of 2^31");
}

/* ---------------------------------------------------------------------- */

void FixClientMD::setup(int vflag)
{
  CSlib *cs = (CSlib *) lmp->cslib;

  // required fields: NATOMS, NTYPES, BOXLO, BOXHI, TYPES, COORDS
  // optional fields: others in enum above

  int nfields = 6;
  if (domain->dimension == 2) nfields++;
  if (domain->triclinic) nfields++;
  if (atom->q_flag) nfields++;

  cs->send(SETUP,nfields);

  cs->pack_int(NATOMS,atom->natoms);
  cs->pack_int(NTYPES,atom->ntypes);
  cs->pack(BOXLO,4,3,domain->boxlo);
  cs->pack(BOXHI,4,3,domain->boxhi);
  cs->pack_parallel(TYPES,1,atom->nlocal,atom->tag,1,atom->type);
  pack_coords();
  cs->pack_parallel(COORDS,4,atom->nlocal,atom->tag,3,xpbc);

  if (domain->dimension == 2) cs->pack_int(DIM,domain->dimension);
  if (domain->triclinic) {
    double boxtilt[3];
    boxtilt[0] = domain->xy;
    if (domain->dimension == 3) {
      boxtilt[1] = domain->xz;
      boxtilt[2] = domain->yz;
    } else boxtilt[1] = boxtilt[2] = 0.0;
    cs->pack(BOXTILT,4,3,boxtilt);
  }
  if (atom->q_flag)
    cs->pack_parallel(CHARGE,4,atom->nlocal,atom->tag,1,atom->q);

  // receive initial forces, energy, virial

  receive_fev(vflag);
}

/* ---------------------------------------------------------------------- */

void FixClientMD::min_setup(int vflag)
{
  setup(vflag);
}

/* ---------------------------------------------------------------------- */

void FixClientMD::post_force(int vflag)
{
  // energy and virial setup

  if (vflag) v_setup(vflag);
  else evflag = 0;

  // required fields: COORDS
  // optional fields: BOXLO, BOXHI, BOXTILT

  // send coords

  CSlib *cs = (CSlib *) lmp->cslib;

  int nfields = 1;
  if (domain->box_change) nfields += 2;
  if (domain->box_change && domain->triclinic) nfields++;

  cs->send(STEP,nfields);

  pack_coords();
  cs->pack_parallel(COORDS,4,atom->nlocal,atom->tag,3,xpbc);

  if (domain->box_change) {
    cs->pack(BOXLO,4,3,domain->boxlo);
    cs->pack(BOXHI,4,3,domain->boxhi);
    if (domain->triclinic) {
      double boxtilt[3];
      boxtilt[0] = domain->xy;
      if (domain->dimension == 3) {
        boxtilt[1] = domain->xz;
        boxtilt[2] = domain->yz;
      } else boxtilt[1] = boxtilt[2] = 0.0;
      cs->pack(BOXTILT,4,3,boxtilt);
    }
  }

  // recv forces, energy, virial

  receive_fev(vflag);
}

/* ---------------------------------------------------------------------- */

void FixClientMD::min_post_force(int vflag)
{
  post_force(vflag);
}

/* ----------------------------------------------------------------------
   potential energy from QM code
------------------------------------------------------------------------- */

double FixClientMD::compute_scalar()
{
  return eng;
}

/* ----------------------------------------------------------------------
   pack local coords into xpbc, enforcing PBC
------------------------------------------------------------------------- */

void FixClientMD::pack_coords()
{
  double **x = atom->x;
  int nlocal = atom->nlocal;

  if (nlocal > maxatom) {
    memory->destroy(xpbc);
    maxatom = atom->nmax;
    memory->create(xpbc,3*maxatom,"message:xpbc");
  }

  memcpy(xpbc,&x[0][0],3*nlocal*sizeof(double));

  int j = 0;
  for (int i = 0; i < nlocal; i++) {
    domain->remap(&xpbc[j]);
    j += 3;
  }
}

/* ----------------------------------------------------------------------
   receive message from server with forces, energy, virial
------------------------------------------------------------------------- */

void FixClientMD::receive_fev(int vflag)
{
  CSlib *cs = (CSlib *) lmp->cslib;

  int nfield;
  int *fieldID,*fieldtype,*fieldlen;

  int msgID = cs->recv(nfield,fieldID,fieldtype,fieldlen);

  double *forces = (double *) cs->unpack(FORCES);
  double **f = atom->f;
  int nlocal = atom->nlocal;
  bigint natoms = atom->natoms;
  int m;

  int j = 0;
  for (tagint id = 1; id <= natoms; id++) {
    m = atom->map(id);
    if (m < 0 || m >= nlocal) j += 3;
    else {
      f[m][0] += forces[j++];
      f[m][1] += forces[j++];
      f[m][2] += forces[j++];
    }
  }

  eng = cs->unpack_double(ENERGY);

  if (vflag) {
    double *v = (double *) cs->unpack(VIRIAL);
    double invnprocs = 1.0 / comm->nprocs;
    for (int i = 0; i < 6; i++)
      virial[i] = invnprocs*v[i];
  }
}
@ -0,0 +1,62 @@
/* -*- c++ -*- ----------------------------------------------------------
   LAMMPS - Large-scale Atomic/Molecular Massively Parallel Simulator
   http://lammps.sandia.gov, Sandia National Laboratories
   Steve Plimpton, sjplimp@sandia.gov

   Copyright (2003) Sandia Corporation. Under the terms of Contract
   DE-AC04-94AL85000 with Sandia Corporation, the U.S. Government retains
   certain rights in this software. This software is distributed under
   the GNU General Public License.

   See the README file in the top-level LAMMPS directory.
------------------------------------------------------------------------- */

#ifdef FIX_CLASS

FixStyle(client/md,FixClientMD)

#else

#ifndef LMP_FIX_CLIENT_MD_H
#define LMP_FIX_CLIENT_MD_H

#include "fix.h"

namespace LAMMPS_NS {

class FixClientMD : public Fix {
 public:
  FixClientMD(class LAMMPS *, int, char **);
  ~FixClientMD();
  int setmask();
  void init();
  void setup(int);
  void min_setup(int);
  void post_force(int);
  void min_post_force(int);
  double compute_scalar();

 private:
  void *cslib;
  int maxatom;
  double eng;
  double *xpbc;

  void pack_coords();
  void receive_fev(int);
};

}

#endif
#endif

/* ERROR/WARNING messages:

E: Illegal ... command

Self-explanatory. Check the input script syntax and compare to the
documentation for the command. You can use -echo screen as a
command-line option when running LAMMPS to see the offending line.

*/
@ -0,0 +1,90 @@
/* ----------------------------------------------------------------------
   LAMMPS - Large-scale Atomic/Molecular Massively Parallel Simulator
   http://lammps.sandia.gov, Sandia National Laboratories
   Steve Plimpton, sjplimp@sandia.gov

   Copyright (2003) Sandia Corporation. Under the terms of Contract
   DE-AC04-94AL85000 with Sandia Corporation, the U.S. Government retains
   certain rights in this software. This software is distributed under
   the GNU General Public License.

   See the README file in the top-level LAMMPS directory.
------------------------------------------------------------------------- */

#include <string.h>
#include "message.h"
#include "error.h"

// CSlib interface

#include "cslib.h"

using namespace LAMMPS_NS;
using namespace CSLIB_NS;

// customize by adding a new server protocol enum

enum{MD,MC};

/* ---------------------------------------------------------------------- */

void Message::command(int narg, char **arg)
{
  if (narg < 3) error->all(FLERR,"Illegal message command");

  int clientserver;
  if (strcmp(arg[0],"client") == 0) clientserver = 1;
  else if (strcmp(arg[0],"server") == 0) clientserver = 2;
  else error->all(FLERR,"Illegal message command");
  lmp->clientserver = clientserver;

  // customize by adding a new server protocol

  int protocol;
  if (strcmp(arg[1],"md") == 0) protocol = MD;
  else if (strcmp(arg[1],"mc") == 0) protocol = MC;
  else error->all(FLERR,"Unknown message protocol");

  // instantiate CSlib with chosen communication mode

  if (strcmp(arg[2],"file") == 0 || strcmp(arg[2],"zmq") == 0 ||
      strcmp(arg[2],"mpi/two") == 0) {
    if (narg != 4) error->all(FLERR,"Illegal message command");
    lmp->cslib = new CSlib(clientserver-1,arg[2],arg[3],&world);

  } else if (strcmp(arg[2],"mpi/one") == 0) {
    if (narg != 3) error->all(FLERR,"Illegal message command");
    if (!lmp->cscomm)
      error->all(FLERR,"Message mpi/one mode, but -mpi cmdline arg not used");
    lmp->cslib = new CSlib(clientserver-1,arg[2],&lmp->cscomm,&world);

  } else error->all(FLERR,"Illegal message command");

  // perform initial handshake between client and server
  // other code being coupled to must perform similar operation
  // client sends protocol with msgID = 0
  // server matches it and replies

  CSlib *cs = (CSlib *) lmp->cslib;

  if (clientserver == 1) {
    cs->send(0,1);
    cs->pack_string(1,arg[1]);

    int nfield;
    int *fieldID,*fieldtype,*fieldlen;
    int msgID = cs->recv(nfield,fieldID,fieldtype,fieldlen);
    if (msgID != 0) error->one(FLERR,"Bad initial client/server handshake");

  } else {
    int nfield;
    int *fieldID,*fieldtype,*fieldlen;
    int msgID = cs->recv(nfield,fieldID,fieldtype,fieldlen);
    if (msgID != 0) error->one(FLERR,"Bad initial client/server handshake");
    char *pstr = cs->unpack_string(1);
    if (strcmp(pstr,arg[1]) != 0)
      error->one(FLERR,"Mismatch in client/server protocol");

    cs->send(0,0);
  }
}
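A non-LAMMPS code coupled on the client side can perform the same handshake directly through the CSlib calls used above. A minimal sketch, assuming the code links against CSlib and the LAMMPS side was started as a zmq server; the port localhost:5555 is illustrative, not prescribed:

#include <mpi.h>
#include "cslib.h"

using namespace CSLIB_NS;

int main(int narg, char **arg)
{
  MPI_Init(&narg,&arg);
  MPI_Comm world = MPI_COMM_WORLD;

  // csflag = 0 for client; mode and port mirror the message
  // command issued on the LAMMPS server side
  CSlib *cs = new CSlib(0,"zmq",(void *) "localhost:5555",&world);

  // handshake: msgID = 0, one string field naming the protocol
  cs->send(0,1);
  cs->pack_string(1,(char *) "md");

  int nfield;
  int *fieldID,*fieldtype,*fieldlen;
  int msgID = cs->recv(nfield,fieldID,fieldtype,fieldlen);
  if (msgID != 0) MPI_Abort(world,1);   // handshake failed

  delete cs;
  MPI_Finalize();
  return 0;
}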
@ -0,0 +1,40 @@
/* -*- c++ -*- ----------------------------------------------------------
   LAMMPS - Large-scale Atomic/Molecular Massively Parallel Simulator
   http://lammps.sandia.gov, Sandia National Laboratories
   Steve Plimpton, sjplimp@sandia.gov

   Copyright (2003) Sandia Corporation. Under the terms of Contract
   DE-AC04-94AL85000 with Sandia Corporation, the U.S. Government retains
   certain rights in this software. This software is distributed under
   the GNU General Public License.

   See the README file in the top-level LAMMPS directory.
------------------------------------------------------------------------- */

#ifdef COMMAND_CLASS

CommandStyle(message,Message)

#else

#ifndef LMP_MESSAGE_H
#define LMP_MESSAGE_H

#include "pointers.h"

namespace LAMMPS_NS {

class Message : protected Pointers {
 public:
  Message(class LAMMPS *lmp) : Pointers(lmp) {};
  void command(int, char **);
};

}

#endif
#endif

/* ERROR/WARNING messages:

*/
@ -0,0 +1,50 @@
/* ----------------------------------------------------------------------
   LAMMPS - Large-scale Atomic/Molecular Massively Parallel Simulator
   http://lammps.sandia.gov, Sandia National Laboratories
   Steve Plimpton, sjplimp@sandia.gov

   Copyright (2003) Sandia Corporation. Under the terms of Contract
   DE-AC04-94AL85000 with Sandia Corporation, the U.S. Government retains
   certain rights in this software. This software is distributed under
   the GNU General Public License.

   See the README file in the top-level LAMMPS directory.
------------------------------------------------------------------------- */

#include <string.h>
#include "server.h"
#include "error.h"

// customize by adding a new server protocol include and enum

#include "server_md.h"
#include "server_mc.h"

using namespace LAMMPS_NS;

enum{MD,MC};

/* ---------------------------------------------------------------------- */

void Server::command(int narg, char **arg)
{
  if (narg != 1) error->all(FLERR,"Illegal server command");

  if (lmp->clientserver != 2)
    error->all(FLERR,"Message command not used to setup LAMMPS as a server");

  // customize by adding a new server protocol

  int protocol;
  if (strcmp(arg[0],"md") == 0) protocol = MD;
  else if (strcmp(arg[0],"mc") == 0) protocol = MC;
  else error->all(FLERR,"Unknown message protocol");

  if (protocol == MD) {
    ServerMD *server = new ServerMD(lmp);
    server->loop();
  } else if (protocol == MC) {
    ServerMC *server = new ServerMC(lmp);
    server->loop();
  }
}
@ -0,0 +1,40 @@
/* -*- c++ -*- ----------------------------------------------------------
   LAMMPS - Large-scale Atomic/Molecular Massively Parallel Simulator
   http://lammps.sandia.gov, Sandia National Laboratories
   Steve Plimpton, sjplimp@sandia.gov

   Copyright (2003) Sandia Corporation. Under the terms of Contract
   DE-AC04-94AL85000 with Sandia Corporation, the U.S. Government retains
   certain rights in this software. This software is distributed under
   the GNU General Public License.

   See the README file in the top-level LAMMPS directory.
------------------------------------------------------------------------- */

#ifdef COMMAND_CLASS

CommandStyle(server,Server)

#else

#ifndef LMP_SERVER_H
#define LMP_SERVER_H

#include "pointers.h"

namespace LAMMPS_NS {

class Server : protected Pointers {
 public:
  Server(class LAMMPS *lmp) : Pointers(lmp) {};
  void command(int, char **);
};

}

#endif
#endif

/* ERROR/WARNING messages:

*/
@ -0,0 +1,148 @@
/* ----------------------------------------------------------------------
   LAMMPS - Large-scale Atomic/Molecular Massively Parallel Simulator
   http://lammps.sandia.gov, Sandia National Laboratories
   Steve Plimpton, sjplimp@sandia.gov

   Copyright (2003) Sandia Corporation. Under the terms of Contract
   DE-AC04-94AL85000 with Sandia Corporation, the U.S. Government retains
   certain rights in this software. This software is distributed under
   the GNU General Public License.

   See the README file in the top-level LAMMPS directory.
------------------------------------------------------------------------- */

#include "server_mc.h"
#include "atom.h"
#include "update.h"
#include "integrate.h"
#include "input.h"
#include "output.h"
#include "thermo.h"
#include "error.h"

// CSlib interface

#include "cslib.h"

using namespace LAMMPS_NS;
using namespace CSLIB_NS;

enum{NATOMS=1,EINIT,DISPLACE,ACCEPT,RUN};

/* ---------------------------------------------------------------------- */

ServerMC::ServerMC(LAMMPS *lmp) : Pointers(lmp) {}

/* ---------------------------------------------------------------------- */

void ServerMC::loop()
{
  int m;
  double xold[3];
  tagint atomid;

  CSlib *cs = (CSlib *) lmp->cslib;

  // require atom map

  if (!atom->map_style) error->all(FLERR,"Server mode requires atom map");

  // initialize LAMMPS for dynamics

  input->one("run 0");

  //update->whichflag = 1;
  //lmp->init();

  // loop on messages
  // receive a message, process it, send return message if necessary

  int msgID,nfield;
  int *fieldID,*fieldtype,*fieldlen;

  while (1) {
    msgID = cs->recv(nfield,fieldID,fieldtype,fieldlen);
    if (msgID < 0) break;

    if (msgID == NATOMS) {

      cs->send(msgID,1);
      cs->pack_int(1,atom->natoms);

    } else if (msgID == EINIT) {

      double dval;
      output->thermo->evaluate_keyword((char *) "pe",&dval);

      cs->send(msgID,2);
      cs->pack_double(1,dval);
      double *coords = NULL;
      if (atom->nlocal) coords = &atom->x[0][0];
      cs->pack_parallel(2,4,atom->nlocal,atom->tag,3,coords);

    } else if (msgID == DISPLACE) {

      atomid = cs->unpack_int(1);
      double *xnew = (double *) cs->unpack(2);
      double **x = atom->x;

      m = atom->map(atomid);
      if (m >= 0 && m < atom->nlocal) {
        xold[0] = x[m][0];
        xold[1] = x[m][1];
        xold[2] = x[m][2];
        x[m][0] = xnew[0];
        x[m][1] = xnew[1];
        x[m][2] = xnew[2];
      }

      input->one("run 0");
      double dval;
      output->thermo->evaluate_keyword((char *) "pe",&dval);

      cs->send(msgID,1);
      cs->pack_double(1,dval);

    } else if (msgID == ACCEPT) {

      int accept = cs->unpack_int(1);
      double **x = atom->x;

      if (!accept) {
        m = atom->map(atomid);
        if (m >= 0 && m < atom->nlocal) {
          x[m][0] = xold[0];
          x[m][1] = xold[1];
          x[m][2] = xold[2];
        }
      }

      cs->send(msgID,0);

    } else if (msgID == RUN) {

      int nsteps = cs->unpack_int(1);

      //input->one("run 100");

      update->nsteps = nsteps;
      update->firststep = update->ntimestep;
      update->laststep = update->ntimestep + nsteps;

      update->integrate->setup(1);
      update->integrate->run(nsteps);

      cs->send(msgID,0);

    } else error->all(FLERR,"Server received unrecognized message");
  }

  // final reply to client

  cs->send(0,0);

  // clean up

  delete cs;
  lmp->cslib = NULL;
}
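Seen from the client, the loop above is a small remote-procedure protocol keyed on msgID. A minimal sketch of one client-side trial move, assuming cs is an already-connected and handshaken CSlib instance; the atom ID, coordinates, and downhill-only acceptance rule are illustrative:

#include "cslib.h"

using namespace CSLIB_NS;

// one MC trial move driven from the client side; cs is an already
// connected CSlib instance, pe_old the current potential energy
double trial_move(CSlib *cs, double pe_old)
{
  int nfield;
  int *fieldID,*fieldtype,*fieldlen;

  // propose: move atom 7 to a new position (illustrative values)
  double xnew[3] = {1.0, 2.0, 3.0};
  cs->send(3,2);                 // msgID 3 = DISPLACE, 2 fields
  cs->pack_int(1,7);             // field 1 = atom ID
  cs->pack(2,4,3,xnew);          // field 2 = new coords, 3 doubles
  cs->recv(nfield,fieldID,fieldtype,fieldlen);
  double pe_new = cs->unpack_double(1);

  // accept downhill moves only (Metropolis factor omitted for brevity)
  int accept = (pe_new < pe_old) ? 1 : 0;
  cs->send(4,1);                 // msgID 4 = ACCEPT, 1 field
  cs->pack_int(1,accept);
  cs->recv(nfield,fieldID,fieldtype,fieldlen);

  return accept ? pe_new : pe_old;
}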
@ -0,0 +1,29 @@
/* -*- c++ -*- ----------------------------------------------------------
   LAMMPS - Large-scale Atomic/Molecular Massively Parallel Simulator
   http://lammps.sandia.gov, Sandia National Laboratories
   Steve Plimpton, sjplimp@sandia.gov

   Copyright (2003) Sandia Corporation. Under the terms of Contract
   DE-AC04-94AL85000 with Sandia Corporation, the U.S. Government retains
   certain rights in this software. This software is distributed under
   the GNU General Public License.

   See the README file in the top-level LAMMPS directory.
------------------------------------------------------------------------- */

#ifndef LMP_SERVER_MC_H
#define LMP_SERVER_MC_H

#include "pointers.h"

namespace LAMMPS_NS {

class ServerMC : protected Pointers {
 public:
  ServerMC(class LAMMPS *);
  void loop();
};

}

#endif
@ -0,0 +1,333 @@
/* ----------------------------------------------------------------------
   LAMMPS - Large-scale Atomic/Molecular Massively Parallel Simulator
   http://lammps.sandia.gov, Sandia National Laboratories
   Steve Plimpton, sjplimp@sandia.gov

   Copyright (2003) Sandia Corporation. Under the terms of Contract
   DE-AC04-94AL85000 with Sandia Corporation, the U.S. Government retains
   certain rights in this software. This software is distributed under
   the GNU General Public License.

   See the README file in the top-level LAMMPS directory.
------------------------------------------------------------------------- */

#include "mpi.h"
#include <climits>
#include <cstring>
#include "server_md.h"
#include "atom.h"
#include "atom_vec.h"
#include "update.h"
#include "integrate.h"
#include "kspace.h"
#include "force.h"
#include "pair.h"
#include "neighbor.h"
#include "comm.h"
#include "domain.h"
#include "error.h"

// CSlib interface

#include "cslib.h"

using namespace LAMMPS_NS;
using namespace CSLIB_NS;

enum{SETUP=1,STEP};
enum{UNITS=1,DIM,NATOMS,NTYPES,BOXLO,BOXHI,BOXTILT,TYPES,COORDS,CHARGE};
enum{FORCES=1,ENERGY,VIRIAL};

// NOTE: features that could be added to this interface
// allow client to set periodicity vs shrink-wrap
//   currently just assume server is same as client
// test that triclinic boxes actually work
// send new box size/shape every step, for NPT client
// unit check between client/server with unit conversion if needed
// option for client to send other per-atom quantities, e.g. rmass
// more precise request of energy/virial (global or peratom) by client
// maybe Verlet should have a single(eflag,vflag) method to more easily comply

/* ---------------------------------------------------------------------- */

ServerMD::ServerMD(LAMMPS *lmp) : Pointers(lmp)
{
  if (domain->box_exist == 0)
    error->all(FLERR,"Server command before simulation box is defined");

  if (!atom->map_style) error->all(FLERR,"Server md mode requires atom map");
  if (atom->tag_enable == 0) error->all(FLERR,"Server md mode requires atom IDs");
  if (sizeof(tagint) != 4) error->all(FLERR,"Server md requires 4-byte atom IDs");
}

/* ---------------------------------------------------------------------- */

void ServerMD::loop()
{
  int j,m;

  // cs = instance of CSlib

  CSlib *cs = (CSlib *) lmp->cslib;

  // counters

  int forcecalls = 0;
  int neighcalls = 0;

  // loop on messages
  // receive a message, process it, send return message

  int msgID,nfield;
  int *fieldID,*fieldtype,*fieldlen;

  while (1) {
    msgID = cs->recv(nfield,fieldID,fieldtype,fieldlen);
    if (msgID < 0) break;

    // SETUP call at beginning of each run
    // required fields: NATOMS, NTYPES, BOXLO, BOXHI, TYPES, COORDS
    // optional fields: others in enum above

    if (msgID == SETUP) {

      int natoms = -1;
      int ntypes = -1;
      double *boxlo = NULL;
      double *boxhi = NULL;
      double *boxtilt = NULL;
      int *types = NULL;
      double *coords = NULL;
      double *charge = NULL;

      for (int ifield = 0; ifield < nfield; ifield++) {
        if (fieldID[ifield] == UNITS) {
          char *units = cs->unpack_string(UNITS);
          if (strcmp(units,update->unit_style) != 0)
            error->all(FLERR,"Server md units mis-match with client");
        } else if (fieldID[ifield] == DIM) {
          int dim = cs->unpack_int(DIM);
          if (dim != domain->dimension)
            error->all(FLERR,"Server md dimension mis-match with client");
        } else if (fieldID[ifield] == NATOMS) {
          natoms = cs->unpack_int(NATOMS);
          if (3*natoms > INT_MAX)
            error->all(FLERR,"Server md max atoms is 1/3 of 2^31");
        } else if (fieldID[ifield] == NTYPES) {
          ntypes = cs->unpack_int(NTYPES);
          if (ntypes != atom->ntypes)
            error->all(FLERR,"Server md ntypes mis-match with client");
        } else if (fieldID[ifield] == BOXLO) {
          boxlo = (double *) cs->unpack(BOXLO);
        } else if (fieldID[ifield] == BOXHI) {
          boxhi = (double *) cs->unpack(BOXHI);
        } else if (fieldID[ifield] == BOXTILT) {
          boxtilt = (double *) cs->unpack(BOXTILT);
        } else if (fieldID[ifield] == TYPES) {
          types = (int *) cs->unpack(TYPES);
        } else if (fieldID[ifield] == COORDS) {
          coords = (double *) cs->unpack(COORDS);
        } else if (fieldID[ifield] == CHARGE) {
          charge = (double *) cs->unpack(CHARGE);
        } else error->all(FLERR,"Server md setup field unknown");
      }

      if (natoms < 0 || ntypes < 0 || !boxlo || !boxhi || !types || !coords)
        error->all(FLERR,"Required server md setup field not received");

      if (charge && atom->q_flag == 0)
        error->all(FLERR,"Server md does not define atom attribute q");

      // reset box, global and local
      // reset proc decomposition

      box_change(boxlo,boxhi,boxtilt);

      domain->set_initial_box();
      domain->set_global_box();
      comm->set_proc_grid();
      domain->set_local_box();

      // clear all atoms

      atom->nlocal = 0;
      atom->nghost = 0;

      // add atoms that are in my sub-box

      int nlocal = 0;
      for (int i = 0; i < natoms; i++) {
        if (!domain->ownatom(i+1,&coords[3*i],NULL,0)) continue;
        atom->avec->create_atom(types[i],&coords[3*i]);
        atom->tag[nlocal] = i+1;
        if (charge) atom->q[nlocal] = charge[i];
        nlocal++;
      }

      int ntotal;
      MPI_Allreduce(&atom->nlocal,&ntotal,1,MPI_INT,MPI_SUM,world);
      if (ntotal != natoms)
        error->all(FLERR,"Server md atom count does not match client");

      atom->map_init();
      atom->map_set();
      atom->natoms = natoms;

      // perform system setup() for dynamics
      // also OK for minimization, since client runs minimizer
      // return forces, energy, virial to client

      update->whichflag = 1;
      lmp->init();
      update->integrate->setup_minimal(1);
      neighcalls++;
      forcecalls++;

      send_fev(msgID);

    // STEP call at each timestep of run or minimization
    // required fields: COORDS
    // optional fields: BOXLO, BOXHI, BOXTILT

    } else if (msgID == STEP) {

      double *boxlo = NULL;
      double *boxhi = NULL;
      double *boxtilt = NULL;
      double *coords = NULL;

      for (int ifield = 0; ifield < nfield; ifield++) {
        if (fieldID[ifield] == BOXLO) {
          boxlo = (double *) cs->unpack(BOXLO);
        } else if (fieldID[ifield] == BOXHI) {
          boxhi = (double *) cs->unpack(BOXHI);
        } else if (fieldID[ifield] == BOXTILT) {
          boxtilt = (double *) cs->unpack(BOXTILT);
        } else if (fieldID[ifield] == COORDS) {
          coords = (double *) cs->unpack(COORDS);
        } else error->all(FLERR,"Server md step field unknown");
      }

      if (!coords)
        error->all(FLERR,"Required server md step field not received");

      // change box size/shape, only if both box lo/hi received

      if (boxlo && boxhi) box_change(boxlo,boxhi,boxtilt);

      // assign received coords to owned atoms
      // closest_image() ensures xyz matches current server PBC

      double **x = atom->x;
      int nlocal = atom->nlocal;
      int nall = atom->natoms;

      j = 0;
      for (tagint id = 1; id <= nall; id++) {
        m = atom->map(id);
        if (m < 0 || m >= nlocal) j += 3;
        else {
          domain->closest_image(x[m],&coords[j],x[m]);
          j += 3;
        }
      }

      // if no need to reneighbor, just ghost comm
      // else setup_minimal(1) which includes reneigh
      // setup_minimal computes forces for 0 or 1

      int nflag = neighbor->decide();
      if (nflag == 0) {
        comm->forward_comm();
        update->integrate->setup_minimal(0);
      } else {
        update->integrate->setup_minimal(1);
        neighcalls++;
      }

      forcecalls++;

      send_fev(msgID);

    } else error->all(FLERR,"Server MD received unrecognized message");
  }

  // final reply to client

  cs->send(0,0);

  // stats

  if (comm->me == 0) {
    if (screen) {
      fprintf(screen,"Server MD calls = %d\n",forcecalls);
      fprintf(screen,"Server MD reneighborings = %d\n",neighcalls);
    }
    if (logfile) {
      fprintf(logfile,"Server MD calls = %d\n",forcecalls);
      fprintf(logfile,"Server MD reneighborings = %d\n",neighcalls);
    }
  }

  // clean up

  delete cs;
  lmp->cslib = NULL;
}

/* ----------------------------------------------------------------------
   box change due to received message
------------------------------------------------------------------------- */

void ServerMD::box_change(double *boxlo, double *boxhi, double *boxtilt)
{
  domain->boxlo[0] = boxlo[0];
  domain->boxhi[0] = boxhi[0];
  domain->boxlo[1] = boxlo[1];
  domain->boxhi[1] = boxhi[1];
  if (domain->dimension == 3) {
    domain->boxlo[2] = boxlo[2];
    domain->boxhi[2] = boxhi[2];
  }

  if (boxtilt) {
    if (!domain->triclinic)
      error->all(FLERR,"Server md not setup for triclinic box");
    domain->xy = boxtilt[0];
    if (domain->dimension == 3) {
      domain->xz = boxtilt[1];
      domain->yz = boxtilt[2];
    }
  }
}

/* ----------------------------------------------------------------------
   send return message with forces, energy, pressure tensor
   pressure tensor should be just pair style virial
------------------------------------------------------------------------- */

void ServerMD::send_fev(int msgID)
{
  CSlib *cs = (CSlib *) lmp->cslib;

  cs->send(msgID,3);

  double *forces = NULL;
  if (atom->nlocal) forces = &atom->f[0][0];
  cs->pack_parallel(FORCES,4,atom->nlocal,atom->tag,3,forces);

  double eng = force->pair->eng_vdwl + force->pair->eng_coul;
  double engall;
  MPI_Allreduce(&eng,&engall,1,MPI_DOUBLE,MPI_SUM,world);
  cs->pack_double(ENERGY,engall);

  double v[6],vall[6];
  for (int i = 0; i < 6; i++)
    v[i] = force->pair->virial[i];
  if (force->kspace)
    for (int i = 0; i < 6; i++)
      v[i] += force->kspace->virial[i];

  MPI_Allreduce(&v,&vall,6,MPI_DOUBLE,MPI_SUM,world);
  cs->pack(VIRIAL,4,6,vall);
}
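From a non-LAMMPS MD client, each timestep of this protocol is one send/recv pair. A minimal sketch, assuming a single-rank client whose coords array is ordered by atom ID 1..natoms so that a plain pack() can stand in for pack_parallel() (that substitution is itself an assumption of this sketch), with field IDs mirroring the enums above (STEP=2, COORDS=9; FORCES=1, ENERGY=2, VIRIAL=3):

#include "cslib.h"

using namespace CSLIB_NS;

// one MD step: ship coords to the server, harvest forces/energy/virial
// coords and forces are caller-owned arrays of length 3*natoms
void md_step(CSlib *cs, int natoms, double *coords,
             double *forces, double &energy, double *virial6)
{
  cs->send(2,1);                       // msgID 2 = STEP, 1 field
  cs->pack(9,4,3*natoms,coords);       // field 9 = COORDS, type 4 = double

  int nfield;
  int *fieldID,*fieldtype,*fieldlen;
  cs->recv(nfield,fieldID,fieldtype,fieldlen);

  double *f = (double *) cs->unpack(1);            // FORCES
  for (int i = 0; i < 3*natoms; i++) forces[i] = f[i];
  energy = cs->unpack_double(2);                   // ENERGY
  double *v = (double *) cs->unpack(3);            // VIRIAL
  for (int i = 0; i < 6; i++) virial6[i] = v[i];
}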
@ -0,0 +1,33 @@
/* -*- c++ -*- ----------------------------------------------------------
   LAMMPS - Large-scale Atomic/Molecular Massively Parallel Simulator
   http://lammps.sandia.gov, Sandia National Laboratories
   Steve Plimpton, sjplimp@sandia.gov

   Copyright (2003) Sandia Corporation. Under the terms of Contract
   DE-AC04-94AL85000 with Sandia Corporation, the U.S. Government retains
   certain rights in this software. This software is distributed under
   the GNU General Public License.

   See the README file in the top-level LAMMPS directory.
------------------------------------------------------------------------- */

#ifndef LMP_SERVER_MD_H
#define LMP_SERVER_MD_H

#include "pointers.h"

namespace LAMMPS_NS {

class ServerMD : protected Pointers {
 public:
  ServerMD(class LAMMPS *);
  void loop();

 private:
  void box_change(double *, double *, double *);
  void send_fev(int);
};

}

#endif
@ -53,7 +53,7 @@ endif
# PACKEXT = subset that require an external (downloaded) library

PACKAGE = asphere body class2 colloid compress coreshell dipole gpu \
          granular kim kokkos kspace latte manybody mc meam misc \
          granular kim kokkos kspace latte manybody mc meam message misc \
          molecule mpiio mscg opt peri poems \
          python qeq reax replica rigid shock snap spin srd voronoi
@ -65,14 +65,14 @@ PACKUSER = user-atc user-awpmd user-bocs user-cgdna user-cgsdk user-colvars \
           user-quip user-reaxc user-smd user-smtbq user-sph user-tally \
           user-uef user-vtk

PACKLIB = compress gpu kim kokkos latte meam mpiio mscg poems \
PACKLIB = compress gpu kim kokkos latte meam message mpiio mscg poems \
          python reax voronoi \
          user-atc user-awpmd user-colvars user-h5md user-lb user-molfile \
          user-netcdf user-qmmm user-quip user-smd user-vtk

PACKSYS = compress mpiio python user-lb

PACKINT = gpu kokkos meam poems reax user-atc user-awpmd user-colvars
PACKINT = gpu kokkos meam message poems reax user-atc user-awpmd user-colvars

PACKEXT = kim mscg voronoi \
          user-h5md user-molfile user-netcdf user-qmmm user-quip \
@ -1533,7 +1533,8 @@ void Atom::set_mass(const char *file, int line, int narg, char **arg)
}

/* ----------------------------------------------------------------------
   set all masses as read in from restart file
   set all masses
   called from reading of restart file, also from ServerMD
------------------------------------------------------------------------- */

void Atom::set_mass(double *values)
@ -1208,13 +1208,9 @@ int Domain::closest_image(double *pos, int j)
/* ----------------------------------------------------------------------
   find and return Xj image = periodic image of Xj that is closest to Xi
   for triclinic, add/subtract tilt factors in other dims as needed
   not currently used (Jan 2017):
     used to be called by pair TIP4P styles but no longer,
     due to use of other closest_image() method
------------------------------------------------------------------------- */

void Domain::closest_image(const double * const xi, const double * const xj,
                           double * const xjimage)
void Domain::closest_image(double *xi, double *xj, double *xjimage)
{
  double dx = xj[0] - xi[0];
  double dy = xj[1] - xi[1];
@ -116,8 +116,7 @@ class Domain : protected Pointers {
  void minimum_image_once(double *);
  int closest_image(int, int);
  int closest_image(double *, int);
  void closest_image(const double * const, const double * const,
                     double * const);
  void closest_image(double *, double *, double *);
  void remap(double *, imageint &);
  void remap(double *);
  void remap_near(double *, double *);
src/lammps.cpp
@ -47,8 +47,8 @@
#include "accelerator_omp.h"
#include "timer.h"
#include "python.h"
#include "memory.h"
#include "version.h"
#include "memory.h"
#include "error.h"

#include "lmpinstalledpkgs.h"
@ -73,12 +73,50 @@ LAMMPS::LAMMPS(int narg, char **arg, MPI_Comm communicator)
  output = NULL;
  python = NULL;

  clientserver = 0;
  cslib = NULL;
  cscomm = 0;

  screen = NULL;
  logfile = NULL;
  infile = NULL;

  initclock = MPI_Wtime();

  // check if -mpi is first arg
  // if so, then 2 apps were launched with one mpirun command
  // this means passed communicator (e.g. MPI_COMM_WORLD) is bigger than LAMMPS
  // e.g. for client/server coupling with another code
  // communicator needs to shrink to be just LAMMPS
  // syntax: -mpi P1 means P1 procs for one app, P-P1 procs for second app
  // LAMMPS could be either app, based on proc IDs
  // do the following:
  //   perform an MPI_Comm_split() to create a new LAMMPS-only subcomm
  //   NOTE: assuming other app is doing the same thing, else will hang!
  //   re-create universe with subcomm
  //   store full two-app comm in cscomm
  //   cscomm is used by CSLIB package to exchange messages w/ other app
  // eventually should extend to N > 2 apps launched with one mpirun command

  int iarg = 1;
  if (narg-iarg >= 2 && (strcmp(arg[iarg],"-mpi") == 0 ||
                         strcmp(arg[iarg],"-m") == 0)) {
    int me,nprocs;
    MPI_Comm_rank(communicator,&me);
    MPI_Comm_size(communicator,&nprocs);
    int p1 = atoi(arg[iarg+1]);
    if (p1 <= 0 || p1 >= nprocs)
      error->universe_all(FLERR,"Invalid command-line argument");
    int which = 0;
    if (me >= p1) which = 1;
    MPI_Comm subcomm;
    MPI_Comm_split(communicator,which,me,&subcomm);
    cscomm = communicator;
    communicator = subcomm;
    delete universe;
    universe = new Universe(this,communicator);
  }

  // parse input switches

  int inflag = 0;
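As a concrete illustration (the executable names and proc counts here are hypothetical): a single MPMD-style launch such as mpirun -np 4 lmp_mpi -mpi 4 -in in.client : -np 2 other_app would give the first 4 ranks to LAMMPS and the remaining 2 to the other code; both executables must perform the matching MPI_Comm_split() for the split to complete rather than hang.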
@ -107,59 +145,30 @@ LAMMPS::LAMMPS(int narg, char **arg, MPI_Comm communicator)
  int *pfirst = NULL;
  int *plast = NULL;

  int iarg = 1;
  iarg = 1;
  while (iarg < narg) {
    if (strcmp(arg[iarg],"-partition") == 0 ||
        strcmp(arg[iarg],"-p") == 0) {
      universe->existflag = 1;

    if (strcmp(arg[iarg],"-echo") == 0 ||
        strcmp(arg[iarg],"-e") == 0) {
      if (iarg+2 > narg)
        error->universe_all(FLERR,"Invalid command-line argument");
      iarg++;
      while (iarg < narg && arg[iarg][0] != '-') {
        universe->add_world(arg[iarg]);
        iarg++;
      }
      iarg += 2;

    } else if (strcmp(arg[iarg],"-help") == 0 ||
               strcmp(arg[iarg],"-h") == 0) {
      if (iarg+1 > narg)
        error->universe_all(FLERR,"Invalid command-line argument");
      helpflag = 1;
      citeflag = 0;
      iarg += 1;

    } else if (strcmp(arg[iarg],"-in") == 0 ||
               strcmp(arg[iarg],"-i") == 0) {
      if (iarg+2 > narg)
        error->universe_all(FLERR,"Invalid command-line argument");
      inflag = iarg + 1;
      iarg += 2;
    } else if (strcmp(arg[iarg],"-screen") == 0 ||
               strcmp(arg[iarg],"-sc") == 0) {
      if (iarg+2 > narg)
        error->universe_all(FLERR,"Invalid command-line argument");
      screenflag = iarg + 1;
      iarg += 2;
    } else if (strcmp(arg[iarg],"-log") == 0 ||
               strcmp(arg[iarg],"-l") == 0) {
      if (iarg+2 > narg)
        error->universe_all(FLERR,"Invalid command-line argument");
      logflag = iarg + 1;
      iarg += 2;
    } else if (strcmp(arg[iarg],"-var") == 0 ||
               strcmp(arg[iarg],"-v") == 0) {
      if (iarg+3 > narg)
        error->universe_all(FLERR,"Invalid command-line argument");
      iarg += 3;
      while (iarg < narg && arg[iarg][0] != '-') iarg++;
    } else if (strcmp(arg[iarg],"-echo") == 0 ||
               strcmp(arg[iarg],"-e") == 0) {
      if (iarg+2 > narg)
        error->universe_all(FLERR,"Invalid command-line argument");
      iarg += 2;
    } else if (strcmp(arg[iarg],"-pscreen") == 0 ||
               strcmp(arg[iarg],"-ps") == 0) {
      if (iarg+2 > narg)
        error->universe_all(FLERR,"Invalid command-line argument");
      partscreenflag = iarg + 1;
      iarg += 2;
    } else if (strcmp(arg[iarg],"-plog") == 0 ||
               strcmp(arg[iarg],"-pl") == 0) {
      if (iarg+2 > narg)
        error->universe_all(FLERR,"Invalid command-line argument");
      partlogflag = iarg + 1;
      iarg += 2;

    } else if (strcmp(arg[iarg],"-kokkos") == 0 ||
               strcmp(arg[iarg],"-k") == 0) {
      if (iarg+2 > narg)
@ -172,6 +181,26 @@ LAMMPS::LAMMPS(int narg, char **arg, MPI_Comm communicator)
      kkfirst = iarg;
      while (iarg < narg && arg[iarg][0] != '-') iarg++;
      kklast = iarg;

    } else if (strcmp(arg[iarg],"-log") == 0 ||
               strcmp(arg[iarg],"-l") == 0) {
      if (iarg+2 > narg)
        error->universe_all(FLERR,"Invalid command-line argument");
      logflag = iarg + 1;
      iarg += 2;

    } else if (strcmp(arg[iarg],"-mpi") == 0 ||
               strcmp(arg[iarg],"-m") == 0) {
      if (iarg+2 > narg)
        error->universe_all(FLERR,"Invalid command-line argument");
      if (iarg != 1) error->universe_all(FLERR,"Invalid command-line argument");
      iarg += 2;

    } else if (strcmp(arg[iarg],"-nocite") == 0 ||
               strcmp(arg[iarg],"-nc") == 0) {
      citeflag = 0;
      iarg++;

    } else if (strcmp(arg[iarg],"-package") == 0 ||
               strcmp(arg[iarg],"-pk") == 0) {
      if (iarg+2 > narg)
@ -188,6 +217,69 @@ LAMMPS::LAMMPS(int narg, char **arg, MPI_Comm communicator)
        else break;
      }
      plast[npack++] = iarg;

    } else if (strcmp(arg[iarg],"-partition") == 0 ||
               strcmp(arg[iarg],"-p") == 0) {
      universe->existflag = 1;
      if (iarg+2 > narg)
        error->universe_all(FLERR,"Invalid command-line argument");
      iarg++;
      while (iarg < narg && arg[iarg][0] != '-') {
        universe->add_world(arg[iarg]);
        iarg++;
      }

    } else if (strcmp(arg[iarg],"-plog") == 0 ||
               strcmp(arg[iarg],"-pl") == 0) {
      if (iarg+2 > narg)
        error->universe_all(FLERR,"Invalid command-line argument");
      partlogflag = iarg + 1;
      iarg += 2;

    } else if (strcmp(arg[iarg],"-pscreen") == 0 ||
               strcmp(arg[iarg],"-ps") == 0) {
      if (iarg+2 > narg)
        error->universe_all(FLERR,"Invalid command-line argument");
      partscreenflag = iarg + 1;
      iarg += 2;

    } else if (strcmp(arg[iarg],"-reorder") == 0 ||
               strcmp(arg[iarg],"-ro") == 0) {
      if (iarg+3 > narg)
        error->universe_all(FLERR,"Invalid command-line argument");
      if (universe->existflag)
        error->universe_all(FLERR,"Cannot use -reorder after -partition");
      universe->reorder(arg[iarg+1],arg[iarg+2]);
      iarg += 3;

    } else if (strcmp(arg[iarg],"-restart") == 0 ||
               strcmp(arg[iarg],"-r") == 0) {
      if (iarg+3 > narg)
        error->universe_all(FLERR,"Invalid command-line argument");
      restartflag = 1;
      rfile = arg[iarg+1];
      dfile = arg[iarg+2];
      // check for restart remap flag
      if (strcmp(dfile,"remap") == 0) {
        if (iarg+4 > narg)
          error->universe_all(FLERR,"Invalid command-line argument");
        restartremapflag = 1;
        dfile = arg[iarg+3];
        iarg++;
      }
      iarg += 3;
      // delimit any extra args for the write_data command
      wdfirst = iarg;
      while (iarg < narg && arg[iarg][0] != '-') iarg++;
      wdlast = iarg;

    } else if (strcmp(arg[iarg],"-screen") == 0 ||
               strcmp(arg[iarg],"-sc") == 0) {
      if (iarg+2 > narg)
        error->universe_all(FLERR,"Invalid command-line argument");
      screenflag = iarg + 1;
      iarg += 2;

    } else if (strcmp(arg[iarg],"-suffix") == 0 ||
               strcmp(arg[iarg],"-sf") == 0) {
      if (iarg+2 > narg)
@ -213,45 +305,14 @@ LAMMPS::LAMMPS(int narg, char **arg, MPI_Comm communicator)
        strcpy(suffix,arg[iarg+1]);
        iarg += 2;
      }
    } else if (strcmp(arg[iarg],"-reorder") == 0 ||
               strcmp(arg[iarg],"-ro") == 0) {

    } else if (strcmp(arg[iarg],"-var") == 0 ||
               strcmp(arg[iarg],"-v") == 0) {
      if (iarg+3 > narg)
        error->universe_all(FLERR,"Invalid command-line argument");
      if (universe->existflag)
        error->universe_all(FLERR,"Cannot use -reorder after -partition");
      universe->reorder(arg[iarg+1],arg[iarg+2]);
      iarg += 3;
    } else if (strcmp(arg[iarg],"-restart") == 0 ||
               strcmp(arg[iarg],"-r") == 0) {
      if (iarg+3 > narg)
        error->universe_all(FLERR,"Invalid command-line argument");
      restartflag = 1;
      rfile = arg[iarg+1];
      dfile = arg[iarg+2];
      // check for restart remap flag
      if (strcmp(dfile,"remap") == 0) {
        if (iarg+4 > narg)
          error->universe_all(FLERR,"Invalid command-line argument");
        restartremapflag = 1;
        dfile = arg[iarg+3];
        iarg++;
      }
      iarg += 3;
      // delimit any extra args for the write_data command
      wdfirst = iarg;
      while (iarg < narg && arg[iarg][0] != '-') iarg++;
      wdlast = iarg;
    } else if (strcmp(arg[iarg],"-nocite") == 0 ||
               strcmp(arg[iarg],"-nc") == 0) {
      citeflag = 0;
      iarg++;
    } else if (strcmp(arg[iarg],"-help") == 0 ||
               strcmp(arg[iarg],"-h") == 0) {
      if (iarg+1 > narg)
        error->universe_all(FLERR,"Invalid command-line argument");
      helpflag = 1;
      citeflag = 0;
      iarg += 1;

    } else error->universe_all(FLERR,"Invalid command-line argument");
  }
@ -595,6 +656,14 @@ LAMMPS::~LAMMPS()
  delete [] suffix;
  delete [] suffix2;

  // free the MPI comm created by -mpi command-line arg
  // it was passed to universe as if original universe world
  // may have been split later by partitions, universe will free the splits
  // free a copy of uorig here, so check in universe destructor will still work

  MPI_Comm copy = universe->uorig;
  if (cscomm) MPI_Comm_free(&copy);

  delete input;
  delete universe;
  delete error;
@ -51,6 +51,10 @@ class LAMMPS {
  int num_package;             // number of cmdline package commands
  int cite_enable;             // 1 if generating log.cite, 0 if disabled

  int clientserver;            // 0 = neither, 1 = client, 2 = server
  void *cslib;                 // client/server messaging via CSlib
  MPI_Comm cscomm;             // MPI comm for client+server in mpi/one mode

  class KokkosLMP *kokkos;     // KOKKOS accelerator class
  class AtomKokkos *atomKK;    // KOKKOS version of Atom class
  class MemoryKokkos *memoryKK;   // KOKKOS version of Memory class