git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@4807 f3b2605a-c512-4ea7-a41b-209d697bcdaa

sjplimp 2010-09-16 19:32:21 +00:00
parent d256c3afd4
commit ad960dcd5b
38 changed files with 3056 additions and 6 deletions

README (1 change)

@ -26,6 +26,7 @@ and directories:
README     this file
LICENSE    the GNU General Public License (GPL)
bench      benchmark problems
couple     code coupling examples, calling LAMMPS as a library
doc        documentation
examples   simple test problems
lib        libraries LAMMPS can be linked with

couple/README (new file, 199 lines)

@ -0,0 +1,199 @@
This directory has examples of how to use LAMMPS as a library, either
by itself or in tandem with another code or library.

This directory is meant to illustrate what is possible when coupling
codes or calling LAMMPS as a library. All of the examples provided
are for demonstration purposes only; the physics they calculate is
too simple for modeling a realistic problem.

See these sections of the LAMMPS manual for details:

2.4 Building LAMMPS as a library (doc/Section_start.html#2_4)
4.10 Coupling LAMMPS to other codes (doc/Section_howto.html#4_10)
In all of the examples included here, LAMMPS must first be built as a
library. Basically, you type something like

make makelib
make -f Makefile.lib g++

in the LAMMPS src directory to create liblmp_g++.a.

The library interface to LAMMPS is in src/library.cpp. Routines can
easily be added to this file so an external program can perform
whatever LAMMPS tasks are desired.
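For a rough picture of what calling LAMMPS through this interface
looks like, here is a minimal sketch (not part of this commit; the
exact set of functions available depends on your LAMMPS version):

#include "mpi.h"
#include "library.h"                        // LAMMPS library interface

int main(int narg, char **arg)
{
  MPI_Init(&narg,&arg);
  void *lmp;
  lammps_open(0,NULL,MPI_COMM_WORLD,&lmp);  // instantiate LAMMPS
  lammps_file(lmp,"in.lammps");             // read an input script
  lammps_command(lmp,"run 100");            // issue a single command
  lammps_close(lmp);                        // destroy the instance
  MPI_Finalize();
  return 0;
}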
--------------------------------
These are the sub-directories included in this directory:

lammps_quest    MD with quantum forces, coupling to Quest DFT code
lammps_spparks  grain-growth Monte Carlo with strain via MD,
                  coupling to SPPARKS kinetic MC code
library         collection of useful inter-code communication routines
simple          simple example of driver code calling LAMMPS as library
--------------------------------
The "simple" directory has its own README on how to proceed. The
driver code is provided both in C++ and in C. It simply runs a LAMMPS
simulation, extracts atom coordinates, changes a coordinate, passes
the coordinates back to LAMMPS, and runs some more.
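In outline (a condensed sketch, not the actual file; these
coordinate-exchange calls are provided by src/library.cpp), the driver
does something like:

  void *lmp;
  lammps_open(0,NULL,MPI_COMM_WORLD,&lmp);
  lammps_file(lmp,"in.lammps");        // setup, then run a bit
  lammps_command(lmp,"run 10");

  int natoms = lammps_get_natoms(lmp);
  double *x = (double *) malloc(3*natoms*sizeof(double));
  lammps_get_coords(lmp,x);            // gather coords from all procs
  x[0] += 0.1;                         // change one coordinate
  lammps_put_coords(lmp,x);            // scatter them back
  free(x);

  lammps_command(lmp,"run 10");        // run some more
  lammps_close(lmp);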
--------------------------------
The "library" directory has a small collection of routines, useful for
exchanging data between 2 codes being run together as a coupled
application. It is used by the LAMMPS <-> Quest and LAMMPS <->
SPPARKS applications in the other 2 directories.
The library dir has a Makefile (which you may need to edit for your
box). If you type

make -f Makefile.g++

you should create libcouple.a, which the other coupled applications
link to.
Note that the library uses MPI, so the Makefile you use needs to
include a path to the MPI include file, if it is not someplace
the compiler will find it.
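For example, you might add something like this to the Makefile (the
path is illustrative):

CCFLAGS = -g -O -I/usr/local/mpich/include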
--------------------------------
The "lammps_quest" directory has an application that runs classical MD
via LAMMPS, but uses quantum forces calculated by the Quest DFT
(density functional) code in place of the usual classical MD forces
calculated by a pair style in LAMMPS.
lmpqst.cpp    main program
                it links LAMMPS as a library
                it invokes Quest as an executable
in.lammps     LAMMPS input script, without the run command
si_111.in     Quest input script for an 8-atom Si unit cell
lmppath.h     contains path to LAMMPS home directory
qstexe.h      contains full pathname to Quest executable
After editing the Makefile, lmppath.h, and qstexe.h to make them
suitable for your box, type

make -f Makefile.g++

and you should get the lmpqst executable.
NOTE: To run this coupled application, you must, of course, have Quest
built on your system. Its WWW site is http://dft.sandia.gov/Quest.
It is not an open-source code, but you can contact its authors to
obtain a copy.
You can run lmpqst in serial or parallel as:
% lmpqst Niter in.lammps in.quest
% mpirun -np 4 lmpqst Niter in.lammps in.quest
where
Niter = # of MD iterations
in.lammps = LAMMPS input script
in.quest = Quest input script
The log files included in this directory are from this run:
% lmpqst 10 in.lammps si_111.in
This application is an example of a coupling where the driver code
(lmpqst) runs one code (LAMMPS) as an outer code and has it call the
other code (Quest) as an inner code. Specifically, the driver
(lmpqst) invokes one code (LAMMPS) to perform its timestep loop, and
LAMMPS grabs information from the other code (Quest) during each
timestep. This is done in LAMMPS using the fix external command,
which makes a "callback" to the driver application (lmpqst), which in
turn invokes Quest with new atom coordinates, lets Quest compute
forces, and returns those forces to the LAMMPS fix external.
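The key wiring, condensed from the lmpqst.cpp listing below (the
callback signature is the one fix external expects):

  // callback invoked by fix external whenever LAMMPS needs forces
  void quest_callback(void *ptr, int ntimestep, int nlocal,
                      int *id, double **x, double **f);

  int ifix = lmp->modify->find_fix("2");      // fix 2 = fix external
  FixExternal *fix = (FixExternal *) lmp->modify->fix[ifix];
  fix->set_callback(quest_callback,&info);    // register the callback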
The driver code launches LAMMPS in parallel, but Quest is run on only
a single processor. It would be possible to change this by using a
parallel build of Quest.
Since Quest does not currently have a library interface, the driver
code interfaces with Quest via input and output files.
Note that essentially 100% of the run time for this coupled
application is spent in Quest, as the quantum calculation of forces
dominates the calculation.
You can look at the log files in the directory to see sample LAMMPS
output for this simulation. Dump files produced by LAMMPS are stored
as dump.md.
--------------------------------
The "lammps_spparks" directory has an application that models grain
growth in the presence of strain. The grain growth is simulated by a
Potts model in a kinetic Monte Carlo code SPPARKS. Clusters of like
spins on a lattice represent grains. The Hamiltonian for the energy
due of a collection of spins includes a strain term and is described
on this page in the SPPARKS documentation:
http://www.sandia.gov/~sjplimp/spparks/doc/app_potts_strain.html. The
strain is computed by LAMMPS as a particle displacement where pairs of
atoms across a grain boundary are of different types and thus push off
from each other due to a Lennard-Jones sigma between particles of
different types that is larger than the sigma between particles of the
same type (interior to grains).
lmpspk.cpp    main program
                it links LAMMPS and SPPARKS as libraries
in.spparks    SPPARKS input script, without the run command
lmppath.h     contains path to LAMMPS home directory
spkpath.h     contains path to SPPARKS home directory
After editing the Makefile, lmppath.h, and spkpath.h to make them
suitable for your box, type

make -f Makefile.g++

and you should get the lmpspk executable.
NOTE: To build and run this coupled application, you must, of course,
have SPPARKS built on your system. Its WWW site is
http://www.sandia.gov/~sjplimp/spparks.html. It is an open-source
code, written by two of the LAMMPS authors.
You can run lmpspk in serial or parallel as:
% lmpspk Niter Ndelta Sfactor in.spparks
% mpirun -np 4 lmpspk Niter Ndelta Sfactor in.spparks
where
Niter = # of outer iterations
Ndelta = time to run MC in each iteration
Sfactor = multiplier on strain effect
in.spparks = SPPARKS input script
The log files included in this directory are from this run:
% lmpspk 5 10.0 5 in.spparks
This application is an example of a coupling where the driver code
(lmpspk) alternates back and forth between the 2 applications (LAMMPS
and SPPARKS). In each outer iteration of the driver code, the
following tasks are performed, as sketched below. One code (SPPARKS)
is invoked for a few Monte Carlo steps. Some of its output (spin
state) is passed to the other code (LAMMPS) as input (atom types).
Then the other code (LAMMPS) is invoked for a few timesteps. Some of
its output (atom coords) is massaged to become an input (per-atom
strain) for the original code (SPPARKS).
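Condensed from the lmpspk.cpp listing below (Many2Many is the coupling
library's ID-matched data-exchange class):

  for (int iter = 0; iter < niter; iter++) {
    spk->input->one(str);                     // str = "run Ndelta" of MC
    spk2lmp->setup(nlocal_spparks,id_spparks,nlocal_lammps,id_lammps);
    spk2lmp->exchange(spins,type);            // spins -> atom types
    lmp->input->one("minimize 0.001 0.001 10 1000");
    lmp2spk->setup(nlocal_lammps,id_lammps,nlocal_spparks,id_spparks);
    lmp2spk->exchange(displace,strain);       // displacements -> strain
  }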
The driver code launches both SPPARKS and LAMMPS in parallel and they
both decompose their spatial domains in the same manner. The datums
in SPPARKS (lattice sites) are the same as the datums in LAMMPS
(coarse-grained particles). If this were not the case, more
sophisticated inter-code communication would be required.
You can look at the log files in the directory to see sample LAMMPS
and SPPARKS output for this simulation. Dump files produced by the
run are stored as dump.mc and dump.md. The image*.png files show
snapshots from both the LAMMPS and SPPARKS output. Note that the
in.lammps and data.lammps files are not inputs; they are generated by
the lmpspk driver.


@ -0,0 +1,47 @@
# Makefile for MD with quantum forces via LAMMPS <-> Quest coupling
SHELL = /bin/sh
# System-specific settings
LAMMPS = /home/sjplimp/lammps
CC = g++
CCFLAGS = -g -O -DMPICH_IGNORE_CXX_SEEK -I../library
DEPFLAGS = -M
LINK = g++
LINKFLAGS = -g -O -L../library -L${LAMMPS}/src
USRLIB = -lcouple -llmp_g++
SYSLIB = -lfftw -lmpich -lpthread
ARCHIVE = ar
ARFLAGS = -rc
SIZE = size
# Files
EXE = lmpqst
SRC = $(wildcard *.cpp)
INC = $(wildcard *.h)
OBJ = $(SRC:.cpp=.o)
# Targets

$(EXE):	$(OBJ)
	$(LINK) $(LINKFLAGS) $(OBJ) $(USRLIB) $(SYSLIB) -o $(EXE)
	$(SIZE) $(EXE)

clean:
	rm $(EXE) *.o

# Compilation rules

%.o:%.cpp
	$(CC) $(CCFLAGS) -c $<

%.d:%.cpp
	$(CC) $(CCFLAGS) $(DEPFLAGS) $< > $@

# Individual dependencies

DEPENDS = $(OBJ:.o=.d)
include $(DEPENDS)


@ -0,0 +1,20 @@
# LAMMPS input for coupling MD/Quantum
units metal
dimension 3
atom_style atomic
atom_modify sort 0 0.0
lattice diamond 5.43
region box block 0 1 0 1 0 1
create_box 1 box
create_atoms 1 box
mass 1 28.08
velocity all create 300.0 87293 loop geom
fix 1 all nve
fix 2 all external
dump 1 all custom 1 dump.md id type x y z fx fy fz
thermo 1


@ -0,0 +1 @@
#define LMPPATH /home/sjplimp/lammps


@ -0,0 +1,262 @@
// lmpqst = umbrella driver to couple LAMMPS + Quest
// for MD using quantum forces
// Syntax: lmpqst Niter in.lammps in.quest
// Niter = # of MD iterations
// in.lammps = LAMMPS input script
// in.quest = Quest input script
#include "mpi.h"
#include "stdio.h"
#include "stdlib.h"
#include "string.h"
#include "many2one.h"
#include "one2many.h"
#include "files.h"
#include "memory.h"
#include "error.h"
#define QUOTE_(x) #x
#define QUOTE(x) QUOTE_(x)
#include "lmppath.h"
#include QUOTE(LMPPATH/src/lammps.h)
#include QUOTE(LMPPATH/src/library.h)
#include QUOTE(LMPPATH/src/input.h)
#include QUOTE(LMPPATH/src/modify.h)
#include QUOTE(LMPPATH/src/fix.h)
#include QUOTE(LMPPATH/src/fix_external.h)
#include "qstexe.h"
using namespace LAMMPS_NS;
#define ANGSTROM_per_BOHR 0.529
#define EV_per_RYDBERG 13.6056923
void quest_callback(void *, int, int, int *, double **, double **);
struct Info {
int me;
Memory *memory;
LAMMPS *lmp;
char *quest_input;
};
/* ---------------------------------------------------------------------- */
int main(int narg, char **arg)
{
int n;
char str[128];
// setup MPI
MPI_Init(&narg,&arg);
MPI_Comm comm = MPI_COMM_WORLD;
int me,nprocs;
MPI_Comm_rank(comm,&me);
MPI_Comm_size(comm,&nprocs);
Memory *memory = new Memory(comm);
Error *error = new Error(comm);
// command-line args
if (narg != 4) error->all("Syntax: lmpqst Niter in.lammps in.quest");
int niter = atoi(arg[1]);
n = strlen(arg[2]) + 1;
char *lammps_input = new char[n];
strcpy(lammps_input,arg[2]);
n = strlen(arg[3]) + 1;
char *quest_input = new char[n];
strcpy(quest_input,arg[3]);
// instantiate LAMMPS
LAMMPS *lmp = new LAMMPS(0,NULL,MPI_COMM_WORLD);
// create simulation in LAMMPS from in.lammps
lmp->input->file(lammps_input);
// make info available to callback function
Info info;
info.me = me;
info.memory = memory;
info.lmp = lmp;
info.quest_input = quest_input;
// set callback to Quest inside fix external
int ifix = lmp->modify->find_fix("2");
FixExternal *fix = (FixExternal *) lmp->modify->fix[ifix];
fix->set_callback(quest_callback,&info);
// run LAMMPS for Niter
// each time it needs forces, it will invoke quest_callback
sprintf(str,"run %d",niter);
lmp->input->one(str);
// clean up
delete lmp;
delete memory;
delete error;
delete [] lammps_input;
delete [] quest_input;
MPI_Finalize();
}
/* ----------------------------------------------------------------------
callback to Quest with atom IDs and coords from each proc
invoke Quest to compute forces, load them into f for LAMMPS to use
f can be NULL if proc owns no atoms
------------------------------------------------------------------------- */
void quest_callback(void *ptr, int ntimestep, int nlocal,
int *id, double **x, double **f)
{
int i,j;
char str[128];
Info *info = (Info *) ptr;
// boxlines = LAMMPS box size converted into Quest lattice vectors
char **boxlines = NULL;
if (info->me == 0) {
boxlines = new char*[3];
for (i = 0; i < 3; i++) boxlines[i] = new char[128];
}
double boxxlo = *((double *) lammps_extract_global(info->lmp,"boxxlo"));
double boxxhi = *((double *) lammps_extract_global(info->lmp,"boxxhi"));
double boxylo = *((double *) lammps_extract_global(info->lmp,"boxylo"));
double boxyhi = *((double *) lammps_extract_global(info->lmp,"boxyhi"));
double boxzlo = *((double *) lammps_extract_global(info->lmp,"boxzlo"));
double boxzhi = *((double *) lammps_extract_global(info->lmp,"boxzhi"));
double xprd = (boxxhi-boxxlo)/ANGSTROM_per_BOHR;
double yprd = (boxyhi-boxylo)/ANGSTROM_per_BOHR;
double zprd = (boxzhi-boxzlo)/ANGSTROM_per_BOHR;
if (info->me == 0) {
sprintf(boxlines[0],"%g %g %g\n",xprd,0.0,0.0);
sprintf(boxlines[1],"%g %g %g\n",0.0,yprd,0.0);
sprintf(boxlines[2],"%g %g %g\n",0.0,0.0,zprd);
}
// xlines = x for atoms on each proc converted to text lines
// xlines is suitable for insertion into Quest input file
// convert LAMMPS Angstroms to Quest bohr
int natoms;
MPI_Allreduce(&nlocal,&natoms,1,MPI_INT,MPI_SUM,MPI_COMM_WORLD);
Many2One *lmp2qst = new Many2One(MPI_COMM_WORLD);
lmp2qst->setup(nlocal,id,natoms);
char **xlines = NULL;
double **xquest = NULL;
if (info->me == 0) {
xquest = info->memory->create_2d_double_array(natoms,3,"lmpqst:xquest");
xlines = new char*[natoms];
for (i = 0; i < natoms; i++) xlines[i] = new char[128];
}
if (info->me == 0) lmp2qst->gather(&x[0][0],3,&xquest[0][0]);
else lmp2qst->gather(&x[0][0],3,NULL);
if (info->me == 0) {
for (i = 0; i < natoms; i++) {
xquest[i][0] /= ANGSTROM_per_BOHR;
xquest[i][1] /= ANGSTROM_per_BOHR;
xquest[i][2] /= ANGSTROM_per_BOHR;
}
for (i = 0; i < natoms; i++) {
sprintf(xlines[i],"%d %d %g %g %g\n",i+1,1,
xquest[i][0],xquest[i][1],xquest[i][2]);
}
}
// one-processor tasks:
// whack all lcao.* files
// cp quest_input to lcao.in
// replace atom coords section of lcao.in with new atom coords
// run Quest on one proc, save screen output to file
// flines = atom forces extracted from Quest screen file
// fquest = atom forces
// convert Quest Ryd/bohr to LAMMPS eV/Angstrom
char **flines = NULL;
double **fquest = NULL;
if (info->me == 0) {
fquest = info->memory->create_2d_double_array(natoms,3,"lmpqst:fquest");
flines = new char*[natoms];
for (i = 0; i < natoms; i++) flines[i] = new char[128];
}
if (info->me == 0) {
system("rm lcao.*");
sprintf(str,"cp %s lcao.in",info->quest_input);
system(str);
sprintf(str,"cp %s lcao.x",QUOTE(QUEST));
system(str);
replace("lcao.in","primitive lattice vectors",3,boxlines);
replace("lcao.in","atom, type, position vector",natoms,xlines);
system("lcao.x > lcao.screen");
extract("lcao.screen","atom x force "
"y force z force",natoms,flines);
int itmp;
for (i = 0; i < natoms; i++)
sscanf(flines[i],"%d %lg %lg %lg",&itmp,
&fquest[i][0],&fquest[i][1],&fquest[i][2]);
for (i = 0; i < natoms; i++) {
fquest[i][0] *= EV_per_RYDBERG / ANGSTROM_per_BOHR;
fquest[i][1] *= EV_per_RYDBERG / ANGSTROM_per_BOHR;
fquest[i][2] *= EV_per_RYDBERG / ANGSTROM_per_BOHR;
}
}
// convert fquest on one proc into f for atoms on each proc
One2Many *qst2lmp = new One2Many(MPI_COMM_WORLD);
qst2lmp->setup(natoms,nlocal,id);
double *fvec = NULL;
if (f) fvec = &f[0][0];
if (info->me == 0) qst2lmp->scatter(&fquest[0][0],3,fvec);
else qst2lmp->scatter(NULL,3,fvec);
// clean up
// some data only exists on proc 0
delete lmp2qst;
delete qst2lmp;
info->memory->destroy_2d_double_array(xquest);
info->memory->destroy_2d_double_array(fquest);
if (boxlines) {
for (i = 0; i < 3; i++) delete [] boxlines[i];
delete [] boxlines;
}
if (xlines) {
for (i = 0; i < natoms; i++) delete [] xlines[i];
delete [] xlines;
}
if (flines) {
for (i = 0; i < natoms; i++) delete [] flines[i];
delete [] flines;
}
}


@ -0,0 +1 @@
#define QUEST /home/sjplimp/csrf/quest/src/lcao.x


@ -0,0 +1,161 @@
do setup
do iters
do force
no relax
setup data
title
Si 1x1x1 unit cell
functional
PBE
dimensions of system (0=cluster ... 3=bulk)
3
primitive lattice vectors
10.261212 0.000000 0.000000
0.000000 10.261212 0.000000
0.000000 0.000000 10.261212
grid dimensions
10 10 10
atom types
1
type number, label:
1 Si_pbe
notes5
Originally constructed by Peter A. Schultz, 12Apr01
potential generated by new Hamann program PUNSLDX
Cite use with: D.R. Hamann, unpublished.
Potential: "standard" setting out to l=2
Basis: amended Jun05 for better (2d/1d not 1d/1d) d-function
effective nuclear charge (s2p2 to 10.0)
4.00000000d+00
pseudopotentials: Lmax, and effective gaussian range
2 0.86000000d+00
functional type used in generating potential:
PBE
radial mesh: number of points for local and non-local pot integrals
80 67
mesh points for nuclear potential; ham2dh
0.02500000 0.02696978 0.02909477 0.03138719 0.03386023 0.03652812
0.03940622 0.04251109 0.04586060 0.04947402 0.05337215 0.05757741
0.06211402 0.06700807 0.07228773 0.07798338 0.08412779 0.09075634
0.09790716 0.10562140 0.11394345 0.12292121 0.13260635 0.14305458
0.15432605 0.16648562 0.17960325 0.19375443 0.20902061 0.22548964
0.24325628 0.26242278 0.28309943 0.30540522 0.32946852 0.35542780
0.38343245 0.41364362 0.44623518 0.48139466 0.51932441 0.56024270
0.60438500 0.65200533 0.70337773 0.75879783 0.81858456 0.88308197
0.95266121 1.02772271 1.10869840 1.19605428 1.29029305 1.39195702
1.50163124 1.61994684 1.74758469 1.88527930 2.03382306 2.19407079
2.36694466 2.55343950 2.75462852 2.97166951 3.20581145 3.45840177
3.73089402 4.02485632 4.34198031 4.68409093 5.05315693 5.45130215
5.88081777 6.34417553 6.84404189 7.38329340 7.96503329 8.59260927
9.26963282 10.00000000
radwts: weights for radial points
0.00189603 0.00204542 0.00220659 0.00238045 0.00256800 0.00277034
0.00298862 0.00322410 0.00347813 0.00375218 0.00404781 0.00436675
0.00471081 0.00508198 0.00548240 0.00591436 0.00638036 0.00688308
0.00742541 0.00801047 0.00864162 0.00932251 0.01005704 0.01084945
0.01170429 0.01262649 0.01362135 0.01469459 0.01585240 0.01710143
0.01844888 0.01990249 0.02147064 0.02316234 0.02498733 0.02695611
0.02908002 0.03137128 0.03384307 0.03650961 0.03938625 0.04248955
0.04583736 0.04944895 0.05334510 0.05754823 0.06208254 0.06697411
0.07225109 0.07794385 0.08408515 0.09071034 0.09785753 0.10556786
0.11388570 0.12285891 0.13253914 0.14298208 0.15424783 0.16640123
0.17951222 0.19365623 0.20891467 0.22537535 0.24313298 0.26228977
0.28295594 0.30525043 0.32930153 0.35524766 0.38323811 0.41343397
0.44600900 0.48115067 0.51906119 0.55995874 0.60407867 0.65167486
0.70302122 0.75841323
non-local potential: l,potential*integration weight
0 0.62022930 0.62128855 0.62243016 0.62366033 0.62498568 0.62641328
0.62795061 0.62960563 0.63138673 0.63330275 0.63536294 0.63757692
0.63995464 0.64250630 0.64524218 0.64817253 0.65130735 0.65465605
0.65822713 0.66202767 0.66606269 0.67033437 0.67484108 0.67957602
0.68452576 0.68966817 0.69497006 0.70038419 0.70584566 0.71126756
0.71653578 0.72150290 0.72598113 0.72973436 0.73246932 0.73382636
0.73337030 0.73058243 0.72485505 0.71549107 0.70171167 0.68267654
0.65752236 0.62542611 0.58570073 0.53792896 0.48213811 0.41900888
0.35009536 0.27800640 0.20646172 0.14009458 0.08384960 0.04186877
0.01596164 0.00423035 0.00115036 0.00066636 0.00047879 0.00029939
0.00016329 0.00007995 0.00003517 0.00001362 0.00000445 0.00000111
0.00000016 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
0.00000000 0.00000000
non-local potential: l,potential*integration weight
1 0.59551624 0.59463303 0.59368033 0.59265268 0.59154422 0.59034862
0.58905906 0.58766819 0.58616811 0.58455033 0.58280567 0.58092430
0.57889565 0.57670833 0.57435015 0.57180802 0.56906791 0.56611482
0.56293268 0.55950435 0.55581158 0.55183493 0.54755377 0.54294628
0.53798942 0.53265896 0.52692951 0.52077458 0.51416671 0.50707751
0.49947790 0.49133817 0.48262822 0.47331766 0.46337588 0.45277197
0.44147437 0.42945016 0.41666374 0.40307468 0.38863443 0.37328165
0.35693601 0.33949042 0.32080256 0.30068740 0.27891443 0.25521609
0.22931791 0.20100526 0.17024474 0.13737521 0.10336405 0.07007167
0.04035673 0.01767907 0.00470635 0.00076638 0.00047880 0.00029939
0.00016329 0.00007995 0.00003517 0.00001362 0.00000445 0.00000111
0.00000016 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
0.00000000 0.00000000
non-local potential: l,potential*integration weight
2 0.56305372 0.55961728 0.55591134 0.55191498 0.54760572 0.54295941
0.53795013 0.53255008 0.52672947 0.52045641 0.51369682 0.50641433
0.49857022 0.49012333 0.48103004 0.47124429 0.46071759 0.44939919
0.43723624 0.42417413 0.41015690 0.39512792 0.37903070 0.36181001
0.34341340 0.32379300 0.30290805 0.28072780 0.25723539 0.23243242
0.20634465 0.17902876 0.15058041 0.12114359 0.09092117 0.06018665
0.02929636 -0.00129833 -0.03104046 -0.05926034 -0.08517498 -0.10789810
-0.12646610 -0.13988656 -0.14721657 -0.14767751 -0.14080976 -0.12666296
-0.10600305 -0.08049270 -0.05276798 -0.02629475 -0.00486427 0.00837657
0.01228139 0.00892332 0.00342796 0.00074936 0.00047880 0.00029939
0.00016329 0.00007995 0.00003517 0.00001362 0.00000445 0.00000111
0.00000016 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
0.00000000 0.00000000
number of radial functions **** Si PBE Ham-II basis 20Feb01-PAS ****
5
angular momentum, number of alphas
0 4
alphas - s - 4s/2/4s407 (bulk Si dzp Eopt+reopt c1)
0.10460000d+00 0.27226300d+00 1.30050800d+00 2.60103000d+00
wave function coefficients
0.20995300d+00 0.55978200d+00 -0.99128200d+00 0.33487100d+00
angular momentum, number of alphas
1 3
alphas - p - 3p/2/3p492 (bulk Si dzp Eopt + reopt c1)
0.09424100d+00 0.31767900d+00 1.56114500d+00
wave function coefficients
0.06761600d+00 0.31821200d+00 -0.06638300d+00
angular momentum, number of alphas
0 1
alphas - s - second zeta s polarization
0.10460000d+00
wave function coefficients
1.00000000d+00
angular momentum, number of alphas
1 1
alphas - p - second zeta p polarization
0.09424100d+00
wave function coefficients
1.00000000d+00
angular momentum, number of alphas
2 2
alphas - d - angular polarization (dzp Eopt)
0.32000000d+00 1.40000000d+00
wave function coefficients
0.31557000d+00 1.00000000d+00
shell occupancies for this silicon, Si: s(2.00)p(2.00)
2.00000000 2.00000000 0.00000000 0.00000000 0.00000000 0.00000000
end atom file
number of atoms in unit cell
8
atom, type, position vector
1 1 0.0000000000 0.0000000000 0.0000000000
2 1 5.1306060590 5.1306060590 0.0000000000
3 1 5.1306060590 0.0000000000 5.1306060590
4 1 0.0000000000 5.1306060590 5.1306060590
5 1 2.5653030295 2.5653030295 2.5653030295
6 1 7.6959090885 7.6959090885 2.5653030295
7 1 7.6959090885 2.5653030295 7.6959090885
8 1 2.5653030295 7.6959090885 7.6959090885
kgrid
0 0 0
end setup phase data
run phase input data
end of run phase data


@ -0,0 +1,48 @@
# Makefile for grain growth via LAMMPS <-> SPPARKS coupling
SHELL = /bin/sh
# System-specific settings
LAMMPS = /home/sjplimp/lammps
SPPARKS = /home/sjplimp/spparks
CC = g++
CCFLAGS = -g -O -DMPICH_IGNORE_CXX_SEEK -I../library
DEPFLAGS = -M
LINK = g++
LINKFLAGS = -g -O -L../library -L${LAMMPS}/src -L${SPPARKS}/src
USRLIB = -lcouple -llmp_g++ -lspk_g++
SYSLIB = -lfftw -lmpich -lpthread
ARCHIVE = ar
ARFLAGS = -rc
SIZE = size
# Files
EXE = lmpspk
SRC = $(wildcard *.cpp)
INC = $(wildcard *.h)
OBJ = $(SRC:.cpp=.o)
# Targets

lmpspk:	$(OBJ)
	$(LINK) $(LINKFLAGS) $(OBJ) $(USRLIB) $(SYSLIB) -o $(EXE)
	$(SIZE) $(EXE)

clean:
	rm $(EXE) *.o

# Compilation rules

%.o:%.cpp
	$(CC) $(CCFLAGS) -c $<

%.d:%.cpp
	$(CC) $(CCFLAGS) $(DEPFLAGS) $< > $@

# Individual dependencies

DEPENDS = $(OBJ:.o=.d)
include $(DEPENDS)


@ -0,0 +1,23 @@
# SPPARKS input for coupling MD/MC
seed 56789
app_style potts/strain 100
dimension 2
lattice sq/8n 1.0
region box block 0 50 0 50 -0.5 0.5
create_box box
create_sites box
set site range 1 100
set d1 value 0.0
sector yes
solve_style tree
diag_style energy
temperature 1.0
stats 10.0
dump 1 10.0 dump.mc


@ -0,0 +1 @@
#define LMPPATH /home/sjplimp/lammps


@ -0,0 +1,223 @@
// lmpspk = umbrella driver to couple LAMMPS + SPPARKS
// for a strain-induced grain growth model
// Syntax: lmpspk Niter Ndelta Sfactor in.spparks
// Niter = # of outer iterations
// Ndelta = time to run MC in each iteration
// Sfactor = multiplier on strain effect
// in.spparks = SPPARKS input script
#include "mpi.h"
#include "stdio.h"
#include "stdlib.h"
#include "string.h"
#include "lammps_data_write.h"
#include "many2many.h"
#include "memory.h"
#include "error.h"
#define QUOTE_(x) #x
#define QUOTE(x) QUOTE_(x)
#include "spkpath.h"
#include QUOTE(SPKPATH/src/spparks.h)
#include QUOTE(SPKPATH/src/library.h)
#include QUOTE(SPKPATH/src/input.h)
#include "lmppath.h"
#include QUOTE(LMPPATH/src/lammps.h)
#include QUOTE(LMPPATH/src/library.h)
#include QUOTE(LMPPATH/src/input.h)
#include QUOTE(LMPPATH/src/modify.h)
#include QUOTE(LMPPATH/src/compute.h)
using namespace SPPARKS_NS;
using namespace LAMMPS_NS;
/* ---------------------------------------------------------------------- */
int main(int narg, char **arg)
{
int i,n;
char str[128];
// setup MPI
MPI_Init(&narg,&arg);
MPI_Comm comm = MPI_COMM_WORLD;
int me,nprocs;
MPI_Comm_rank(comm,&me);
MPI_Comm_size(comm,&nprocs);
Memory *memory = new Memory(comm);
Error *error = new Error(comm);
// command-line args
if (narg != 5)
error->all("Syntax: lmpspk Niter Ndelta Sfactor in.spparks");
int niter = atoi(arg[1]);
double delta = atof(arg[2]);
double sfactor = atof(arg[3]);
n = strlen(arg[4]) + 1;
char *spparks_input = new char[n];
strcpy(spparks_input,arg[4]);
// instantiate LAMMPS and SPPARKS
SPPARKS *spk = new SPPARKS(0,NULL,MPI_COMM_WORLD);
LAMMPS *lmp = new LAMMPS(0,NULL,MPI_COMM_WORLD);
// create simulation in SPPARKS from in.spparks
spk->input->file(spparks_input);
// extract permanent info from SPPARKS
int dimension,nglobal,nlocal_spparks,nspins;
double boxxlo,boxxhi,boxylo,boxyhi,boxzlo,boxzhi;
int *id_spparks,*spins;
double **xyz;
double *strain;
dimension = *((int *) spparks_extract(spk,"dimension"));
nglobal = *((int *) spparks_extract(spk,"nglobal"));
nlocal_spparks = *((int *) spparks_extract(spk,"nlocal"));
boxxlo = *((double *) spparks_extract(spk,"boxxlo"));
boxxhi = *((double *) spparks_extract(spk,"boxxhi"));
boxylo = *((double *) spparks_extract(spk,"boxylo"));
boxyhi = *((double *) spparks_extract(spk,"boxyhi"));
if (dimension == 3) {
boxzlo = *((double *) spparks_extract(spk,"boxzlo"));
boxzhi = *((double *) spparks_extract(spk,"boxzhi"));
} else {
boxzlo = -0.5;
boxzhi = 0.5;
}
id_spparks = (int *) spparks_extract(spk,"id");
spins = (int *) spparks_extract(spk,"site");
xyz = (double **) spparks_extract(spk,"xyz");
nspins = *((int *) spparks_extract(spk,"nspins"));
strain = (double *) spparks_extract(spk,"strain");
// write a LAMMPS input script using SPPARKS params
if (me == 0) {
FILE *fp = fopen("in.lammps","w");
if (fp == NULL) error->one("Could not create LAMMPS input script");
fprintf(fp,"units lj\n");
sprintf(str,"dimension %d\n",dimension);
fprintf(fp,str);
fprintf(fp,"atom_style atomic\n\n");
fprintf(fp,"read_data data.lammps\n");
fprintf(fp,"mass * 1.0\n\n");
fprintf(fp,"pair_style lj/cut 2.5\n");
fprintf(fp,"pair_coeff * * 1.0 1.2\n");
for (i = 0; i < nspins; i++) {
sprintf(str,"pair_coeff %d %d 1.0 1.0\n",i+1,i+1);
fprintf(fp,str);
}
fprintf(fp,"\n");
fprintf(fp,"compute da all displace/atom\n\n");
fprintf(fp,"dump 1 all atom 10 dump.md\n");
fprintf(fp,"thermo 1\n");
fclose(fp);
}
// write a LAMMPS data file using SPPARKS data
LAMMPSDataWrite *lwd = new LAMMPSDataWrite(MPI_COMM_WORLD);
lwd->file("data.lammps");
lwd->header("%d atoms",nglobal);
lwd->header("%d atom types",nspins);
lwd->header("%g %g xlo xhi",boxxlo,boxxhi);
lwd->header("%g %g ylo yhi",boxylo,boxyhi);
lwd->header("%g %g zlo zhi",boxzlo,boxzhi);
lwd->atoms(nlocal_spparks);
lwd->atoms(id_spparks);
lwd->atoms(spins);
lwd->atoms(3,xyz);
lwd->execute();
delete lwd;
// create simulation in LAMMPS from created input script
lmp->input->file("in.lammps");
// create transfer operators
Many2Many *spk2lmp = new Many2Many(MPI_COMM_WORLD);
Many2Many *lmp2spk = new Many2Many(MPI_COMM_WORLD);
// timestep loop
// run SPPARKS for delta time
// use SPPARKS spins to reset LAMMPS atom types
// perform LAMMPS minimization
// use atom displacements to reset strain values in SPPARKS
// realloc displace as necessary since nlocal_lammps may change
// re-create both xfers every iteration since LAMMPS may migrate atoms
int nmax = 0;
double *displace = NULL;
int nlocal_lammps;
int *id_lammps,*type;
double **displace_lammps;
for (int iter = 0; iter < niter; iter++) {
sprintf(str,"run %g",delta);
spk->input->one(str);
nlocal_lammps = *((int *) lammps_extract_global(lmp,"nlocal"));
id_lammps = (int *) lammps_extract_atom(lmp,"id");
type = (int *) lammps_extract_atom(lmp,"type");
spk2lmp->setup(nlocal_spparks,id_spparks,nlocal_lammps,id_lammps);
spk2lmp->exchange(spins,type);
lmp->input->one("minimize 0.001 0.001 10 1000");
nlocal_lammps = *((int *) lammps_extract_global(lmp,"nlocal"));
id_lammps = (int *) lammps_extract_atom(lmp,"id");
displace_lammps = (double **) lammps_extract_compute(lmp,"da",1,2);
if (nlocal_lammps > nmax) {
memory->sfree(displace);
nmax = nlocal_lammps;
displace = (double *)
memory->smalloc(nmax*sizeof(double),"lmpspk:displace");
}
for (i = 0; i < nlocal_lammps; i++)
displace[i] = sfactor*displace_lammps[i][3];
lmp2spk->setup(nlocal_lammps,id_lammps,nlocal_spparks,id_spparks);
lmp2spk->exchange(displace,strain);
}
memory->sfree(displace);
// clean up
delete spk2lmp;
delete lmp2spk;
delete spk;
delete lmp;
delete [] spparks_input;
delete memory;
delete error;
MPI_Finalize();
}


@ -0,0 +1 @@
#define SPKPATH /home/sjplimp/spparks


@ -0,0 +1,42 @@
# Makefile for coupling library
SHELL = /bin/sh
# System-specific settings
CC = g++
CCFLAGS = -g -O -DMPICH_IGNORE_CXX_SEEK
DEPFLAGS = -M
LINK = g++
LINKFLAGS = -g -O
ARCHIVE = ar
ARFLAGS = -rc
SIZE = size
# Files
LIB = libcouple.a
SRC = $(wildcard *.cpp)
INC = $(wildcard *.h)
OBJ = $(SRC:.cpp=.o)
# Targets

lib:	$(OBJ)
	$(ARCHIVE) $(ARFLAGS) $(LIB) $(OBJ)

clean:
	rm $(LIB) *.o *.d

# Compilation rules

%.o:%.cpp
	$(CC) $(CCFLAGS) -c $<

%.d:%.cpp
	$(CC) $(CCFLAGS) $(DEPFLAGS) $< > $@

# Individual dependencies

DEPENDS = $(OBJ:.o=.d)
include $(DEPENDS)

couple/library/error.cpp (new file, 42 lines)

@ -0,0 +1,42 @@
#include "mpi.h"
#include "stdlib.h"
#include "stdio.h"
#include "error.h"
/* ---------------------------------------------------------------------- */
Error::Error(MPI_Comm caller)
{
comm = caller;
MPI_Comm_rank(comm,&me);
}
/* ----------------------------------------------------------------------
called by all procs
------------------------------------------------------------------------- */
void Error::all(const char *str)
{
if (me == 0) printf("ERROR: %s\n",str);
MPI_Finalize();
exit(1);
}
/* ----------------------------------------------------------------------
called by one proc
------------------------------------------------------------------------- */
void Error::one(const char *str)
{
printf("ERROR on proc %d: %s\n",me,str);
MPI_Abort(comm,1);
}
/* ----------------------------------------------------------------------
called by one proc
------------------------------------------------------------------------- */
void Error::warning(const char *str)
{
printf("WARNING: %s\n",str);
}

couple/library/error.h (new file, 19 lines)

@ -0,0 +1,19 @@
#ifndef ERROR_H
#define ERROR_H
#include "mpi.h"
class Error {
public:
Error(MPI_Comm);
void all(const char *);
void one(const char *);
void warning(const char *);
private:
MPI_Comm comm;
int me;
};
#endif

couple/library/files.cpp (new file, 48 lines)

@ -0,0 +1,48 @@
#include "stdio.h"
#include "string.h"
#include "files.h"
#define MAXLINE 256
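/* ----------------------------------------------------------------------
   text-file utilities used by the Quest coupling:
   replace() = overwrite the n lines after the line containing header
   extract() = copy out the n lines after the line containing header
   both assume the caller has allocated lines[0..n-1]
------------------------------------------------------------------------- */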
/* ---------------------------------------------------------------------- */
void replace(char *file, char *header, int n, char **lines)
{
FILE *fpr = fopen(file,"r");
FILE *fpw = fopen("tmp.file","w");
char line[MAXLINE];
while (fgets(line,MAXLINE,fpr)) {
if (strstr(line,header)) {
fprintf(fpw,"%s",line);
for (int i = 0; i < n; i++) {
fgets(line,MAXLINE,fpr);
fprintf(fpw,"%s",lines[i]);
}
} else fprintf(fpw,"%s",line);
}
fclose(fpr);
fclose(fpw);
rename("tmp.file",file);
}
/* ---------------------------------------------------------------------- */
void extract(char *file, char *header, int n, char **lines)
{
FILE *fp = fopen(file,"r");
char line[MAXLINE];
while (fgets(line,MAXLINE,fp)) {
if (strstr(line,header)) {
for (int i = 0; i < n; i++) {
fgets(line,MAXLINE,fp);
sprintf(lines[i],"%s",line);
}
break;
}
}
fclose(fp);
}

couple/library/files.h (new file, 2 lines)

@ -0,0 +1,2 @@
void replace(char *, char *, int, char **);
void extract(char *, char *, int, char **);


@ -0,0 +1,393 @@
#include "stdio.h"
#include "stdlib.h"
#include "string.h"
#include "irregular.h"
#include "memory.h"
#include "error.h"
#define MAX(A,B) (((A) > (B)) ? (A) : (B))
enum{UNSET,SET};
enum{NONE,SAME,VARYING};
/* ---------------------------------------------------------------------- */
Irregular::Irregular(MPI_Comm caller)
{
comm = caller;
MPI_Comm_rank(comm,&me);
MPI_Comm_size(comm,&nprocs);
memory = new Memory(comm);
error = new Error(comm);
init();
patternflag = UNSET;
sizestyle = NONE;
}
/* ---------------------------------------------------------------------- */
Irregular::~Irregular()
{
deallocate();      // free buffers first: deallocate() uses memory
delete memory;
delete error;
}
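/* ----------------------------------------------------------------------
   typical usage (illustrative):
     Irregular irr(comm);
     irr.pattern(n,proclist);                    // who gets each datum
     int nbytes = irr.size(sizeof(double));      // fixed-size datums
     irr.exchange((char *) sbuf,(char *) rbuf);  // do the communication
------------------------------------------------------------------------- */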
/* ----------------------------------------------------------------------
n = # of datums contributed by this proc
proclist = which proc each datum is to be sent to
------------------------------------------------------------------------- */
void Irregular::pattern(int n, int *proclist)
{
// free any previous irregular info
deallocate();
init();
patternflag = SET;
sizestyle = NONE;
ndatumsend = n;
// list = 1 for procs I send to, including self
// nrecv = # of messages I receive, not including self
// self = 0 if no data for self, 1 if there is
int *list = new int[nprocs];
int *counts = new int[nprocs];
for (int i = 0; i < nprocs; i++) {
list[i] = 0;
counts[i] = 1;
}
for (int i = 0; i < n; i++) list[proclist[i]] = 1;
MPI_Reduce_scatter(list,&nrecv,counts,MPI_INT,MPI_SUM,comm);
self = 0;
if (list[me]) self = 1;
if (self) nrecv--;
// storage for recv info, not including self
recvproc = new int[nrecv];
recvcount = new int[nrecv];
recvsize = new int[nrecv];
request = new MPI_Request[nrecv];
status = new MPI_Status[nrecv];
// list = # of datums to send to each proc, including self
// nsend = # of messages I send, not including self
for (int i = 0; i < nprocs; i++) list[i] = 0;
for (int i = 0; i < n; i++) list[proclist[i]]++;
nsend = 0;
for (int i = 0; i < nprocs; i++) if (list[i] > 0) nsend++;
if (self) nsend--;
// storage for send info, including self
sendproc = new int[nsend+self];
sendcount = new int[nsend+self];
sendsize = new int[nsend+self];
sendindices = (int *) memory->smalloc(n*sizeof(int),"sendindices");
// setup sendprocs and sendcounts, including self
// each proc begins with iproc > me, and continues until iproc = me
// list ends up with pointer to which send that proc is associated with
int iproc = me;
int isend = 0;
for (int i = 0; i < nprocs; i++) {
iproc++;
if (iproc == nprocs) iproc = 0;
if (list[iproc] > 0) {
sendproc[isend] = iproc;
sendcount[isend] = list[iproc];
list[iproc] = isend;
isend++;
}
}
// post all receives for datum counts
for (int i = 0; i < nrecv; i++)
MPI_Irecv(&recvcount[i],1,MPI_INT,MPI_ANY_SOURCE,0,comm,&request[i]);
// barrier to ensure receives are posted
MPI_Barrier(comm);
// send each datum count, packing buf with needed datums
for (int i = 0; i < nsend; i++)
MPI_Send(&sendcount[i],1,MPI_INT,sendproc[i],0,comm);
// ensure all MPI_ANY_SOURCE messages are received
// set recvproc
if (nrecv) MPI_Waitall(nrecv,request,status);
for (int i = 0; i < nrecv; i++) recvproc[i] = status[i].MPI_SOURCE;
// ndatumrecv = total datums received, including self
ndatumrecv = 0;
for (int i = 0; i < nrecv; i++)
ndatumrecv += recvcount[i];
if (self) ndatumrecv += sendcount[nsend];
// setup sendindices, including self
// counts = offset into sendindices for each proc I send to
// let sc0 = sendcount[0], sc1 = sendcount[1], etc
// sendindices[0:sc0-1] = indices of datums in 1st message
// sendindices[sc0:sc0+sc1-1] = indices of datums in 2nd message, etc
counts[0] = 0;
for (int i = 1; i < nsend+self; i++)
counts[i] = counts[i-1] + sendcount[i-1];
for (int i = 0; i < n; i++) {
isend = list[proclist[i]];
sendindices[counts[isend]++] = i;
}
// clean up
delete [] counts;
delete [] list;
}
/* ----------------------------------------------------------------------
n = size of each received datum
return total size in bytes of received data on this proc
------------------------------------------------------------------------- */
int Irregular::size(int n)
{
if (patternflag == UNSET) error->all("Cannot size without pattern");
sizestyle = SAME;
nsize = n;
nsendmax = 0;
for (int i = 0; i < nsend+self; i++) {
sendsize[i] = nsize * sendcount[i];
if (i < nsend) nsendmax = MAX(nsendmax,sendsize[i]);
}
for (int i = 0; i < nrecv; i++) recvsize[i] = nsize * recvcount[i];
nbytesrecv = nsize * ndatumrecv;
return nbytesrecv;
}
/* ----------------------------------------------------------------------
slength,rlength = size of each datum to send and recv
soffset = offset into eventual buffer of send data for each datum
soffset can be NULL, in which case will build sendoffset from slength
return total size in bytes of received data on this proc
------------------------------------------------------------------------- */
int Irregular::size(int *slength, int *soffset, int *rlength)
{
if (patternflag == UNSET) error->all("Cannot size without pattern");
sizestyle = VARYING;
// store local copy of pointers to send lengths/offsets
// if soffset not provided, create local copy from slength
sendsizedatum = slength;
if (soffset == NULL) {
sendoffsetflag = 1;
sendoffset = (int *) memory->smalloc(ndatumsend*sizeof(int),"sendoffset");
if (ndatumsend) sendoffset[0] = 0;
for (int i = 1; i < ndatumsend; i++)
sendoffset[i] = sendoffset[i-1] + sendsizedatum[i-1];
} else {
if (sendoffsetflag) memory->sfree(sendoffset);
sendoffsetflag = 0;
sendoffset = soffset;
}
nsendmax = 0;
int m = 0;
for (int i = 0; i < nsend+self; i++) {
sendsize[i] = 0;
for (int j = 0; j < sendcount[i]; j++)
sendsize[i] += sendsizedatum[sendindices[m++]];
if (i < nsend) nsendmax = MAX(nsendmax,sendsize[i]);
}
nbytesrecv = 0;
m = 0;
for (int i = 0; i < nrecv; i++) {
recvsize[i] = 0;
for (int j = 0; j < recvcount[i]; j++) recvsize[i] += rlength[m++];
nbytesrecv += recvsize[i];
}
if (self) nbytesrecv += sendsize[nsend];
return nbytesrecv;
}
/* ----------------------------------------------------------------------
wrapper on 2 versions of exchange
------------------------------------------------------------------------- */
void Irregular::exchange(char *sendbuf, char *recvbuf)
{
if (sizestyle == SAME) exchange_same(sendbuf,recvbuf);
else if (sizestyle == VARYING) exchange_varying(sendbuf,recvbuf);
else error->all("Irregular size was not set");
}
/* ----------------------------------------------------------------------
sendbuf = data to send
recvbuf = buffer to recv all data into
requires nsize,nsendmax,recvsize,sendsize be setup by size(int)
------------------------------------------------------------------------- */
void Irregular::exchange_same(char *sendbuf, char *recvbuf)
{
// post all receives
int recvoffset = 0;
for (int irecv = 0; irecv < nrecv; irecv++) {
MPI_Irecv(&recvbuf[recvoffset],recvsize[irecv],MPI_BYTE,
recvproc[irecv],0,comm,&request[irecv]);
recvoffset += recvsize[irecv];
}
// malloc buf for largest send
char *buf = (char *) memory->smalloc(nsendmax,"buf");
// barrier to ensure receives are posted
MPI_Barrier(comm);
// send each message, packing buf with needed datums
int m = 0;
for (int isend = 0; isend < nsend; isend++) {
int bufoffset = 0;
for (int i = 0; i < sendcount[isend]; i++) {
memcpy(&buf[bufoffset],&sendbuf[nsize*sendindices[m++]],nsize);
bufoffset += nsize;
}
MPI_Send(buf,sendsize[isend],MPI_BYTE,sendproc[isend],0,comm);
}
// copy self data directly from sendbuf to recvbuf
if (self)
for (int i = 0; i < sendcount[nsend]; i++) {
memcpy(&recvbuf[recvoffset],&sendbuf[nsize*sendindices[m++]],nsize);
recvoffset += nsize;
}
// free send buffer
memory->sfree(buf);
// wait on all incoming messages
if (nrecv) MPI_Waitall(nrecv,request,status);
}
/* ----------------------------------------------------------------------
sendbuf = data to send
recvbuf = buffer to recv all data into
requires nsendmax,recvsize,sendsize,sendoffset,sendsizedatum
be setup by size(int *, int *)
------------------------------------------------------------------------- */
void Irregular::exchange_varying(char *sendbuf, char *recvbuf)
{
// post all receives
int recvoffset = 0;
for (int irecv = 0; irecv < nrecv; irecv++) {
MPI_Irecv(&recvbuf[recvoffset],recvsize[irecv],MPI_BYTE,
recvproc[irecv],0,comm,&request[irecv]);
recvoffset += recvsize[irecv];
}
// malloc buf for largest send
char *buf = (char *) memory->smalloc(nsendmax,"buf");
// barrier to ensure receives are posted
MPI_Barrier(comm);
// send each message, packing buf with needed datums
int index;
int m = 0;
for (int isend = 0; isend < nsend; isend++) {
int bufoffset = 0;
for (int i = 0; i < sendcount[isend]; i++) {
index = sendindices[m++];
memcpy(&buf[bufoffset],&sendbuf[sendoffset[index]],sendsizedatum[index]);
bufoffset += sendsizedatum[index];
}
MPI_Send(buf,sendsize[isend],MPI_BYTE,sendproc[isend],0,comm);
}
// copy self data directly from sendbuf to recvbuf
if (self)
for (int i = 0; i < sendcount[nsend]; i++) {
index = sendindices[m++];
memcpy(&recvbuf[recvoffset],&sendbuf[sendoffset[index]],
sendsizedatum[index]);
recvoffset += sendsizedatum[index];
}
// free send buffer
memory->sfree(buf);
// wait on all incoming messages
if (nrecv) MPI_Waitall(nrecv,request,status);
}
/* ---------------------------------------------------------------------- */
void Irregular::init()
{
sendoffsetflag = 0;
sendproc = sendcount = sendsize = sendindices = NULL;
sendoffset = NULL;
recvproc = recvcount = recvsize = NULL;
request = NULL;
status = NULL;
}
/* ---------------------------------------------------------------------- */
void Irregular::deallocate()
{
delete [] sendproc;
delete [] sendcount;
delete [] sendsize;
memory->sfree(sendindices);
if (sendoffsetflag) memory->sfree(sendoffset);
delete [] recvproc;
delete [] recvcount;
delete [] recvsize;
delete [] request;
delete [] status;
}


@ -0,0 +1,58 @@
#ifndef IRREGULAR_H
#define IRREGULAR_H
#include "mpi.h"
class Irregular {
public:
Irregular(MPI_Comm);
~Irregular();
void pattern(int, int *);
int size(int);
int size(int *, int *, int *);
void exchange(char *, char *);
private:
int me,nprocs;
int patternflag; // UNSET,SET
int sizestyle; // NONE,SAME,VARYING
int self; // 0 = no data to copy to self, 1 = yes
int ndatumsend; // # of datums to send w/ self
int ndatumrecv; // # of datums to recv w/ self
int nbytesrecv; // total bytes in received data w/ self
int nsend; // # of messages to send w/out self
int nrecv; // # of messages to recv w/out self
int nsendmax; // # of bytes in largest send message, w/out self
int *sendproc; // list of procs to send to w/out self
int *sendcount; // # of datums to send to each proc w/ self
int *sendsize; // # of bytes to send to each proc w/ self
int *sendindices; // indices of datums to send to each proc w/ self
int nsize; // size of every datum in bytes (SAME)
int *sendsizedatum; // bytes in each datum to send w/ self (VARYING)
int *sendoffset; // byte offset to where each datum starts w/ self
int sendoffsetflag; // 1 if allocated sendoffset, 0 if passed in
int *recvproc; // list of procs to recv from w/out self
int *recvcount; // # of datums to recv from each proc w/out self
int *recvsize; // # of bytes to recv from each proc w/out self
MPI_Request *request; // MPI requests for posted recvs
MPI_Status *status; // MPI statuses for Waitall
MPI_Comm comm; // MPI communicator for all communication
class Memory *memory;
class Error *error;
void exchange_same(char *, char *);
void exchange_varying(char *, char *);
void init();
void deallocate();
};
#endif


@ -0,0 +1,249 @@
#include "stdlib.h"
#include "string.h"
#include "lammps_data_write.h"
#include "memory.h"
#include "error.h"
#define DELTA 4
enum{INT,DOUBLE,DOUBLE2};
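/* ----------------------------------------------------------------------
   typical usage (illustrative), as in lmpspk.cpp:
     LAMMPSDataWrite *lwd = new LAMMPSDataWrite(MPI_COMM_WORLD);
     lwd->file("data.lammps");
     lwd->header("%d atoms",natoms);   // one call per header line
     lwd->atoms(nlocal);               // per-atom count, then vectors
     lwd->execute();                   // send to proc 0, which writes file
------------------------------------------------------------------------- */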
/* ---------------------------------------------------------------------- */
LAMMPSDataWrite::LAMMPSDataWrite(MPI_Comm caller_comm) : Send2One(caller_comm)
{
outfile = NULL;
nheader = maxheader = 0;
format = NULL;
headtype = NULL;
ihead = NULL;
dhead = NULL;
ddhead = NULL;
nper = maxper = 0;
atomtype = NULL;
ivec = NULL;
dvec = NULL;
stride = NULL;
}
/* ---------------------------------------------------------------------- */
LAMMPSDataWrite::~LAMMPSDataWrite()
{
delete [] outfile;
for (int i = 0; i < nheader; i++) delete [] format[i];
memory->sfree(format);
memory->sfree(headtype);
memory->sfree(ihead);
memory->sfree(dhead);
memory->destroy_2d_double_array(ddhead);
memory->sfree(atomtype);
memory->sfree(ivec);
memory->sfree(dvec);
memory->sfree(stride);
}
/* ---------------------------------------------------------------------- */
void LAMMPSDataWrite::pre()
{
if (me == 0) {
fp = fopen(outfile,"w");
if (fp == NULL)
error->one("Could not open data_write output file");
}
if (me == 0) {
fprintf(fp,"%s","LAMMPS data file\n\n");
for (int i = 0; i < nheader; i++)
if (headtype[i] == INT) fprintf(fp,format[i],ihead[i]);
else if (headtype[i] == DOUBLE) fprintf(fp,format[i],dhead[i]);
else if (headtype[i] == DOUBLE2) fprintf(fp,format[i],
ddhead[i][0],ddhead[i][1]);
fprintf(fp,"\nAtoms\n\n");
}
}
/* ---------------------------------------------------------------------- */
int LAMMPSDataWrite::size()
{
return nper*nlocal*sizeof(double);
}
/* ---------------------------------------------------------------------- */
void LAMMPSDataWrite::pack(char *cbuf)
{
int i,j;
double *dbuf = (double *) cbuf;
int m = 0;
for (i = 0; i < nlocal; i++)
for (j = 0; j < nper; j++) {
// debug output for the first atom - can delete eventually
if (i == 0) {
if (atomtype[j] == INT)
printf("AT %d %d %p %d\n",
atomtype[j],stride[j],ivec[j],ivec[j][0]);
else
printf("AT %d %d %p %g\n",
atomtype[j],stride[j],dvec[j],dvec[j][0]);
}
if (atomtype[j] == INT) dbuf[m++] = ivec[j][i*stride[j]];
else if (atomtype[j] == DOUBLE) dbuf[m++] = dvec[j][i*stride[j]];
}
}
/* ---------------------------------------------------------------------- */
void LAMMPSDataWrite::process(int nbuf, char *cbuf)
{
int i,j;
double *dbuf = (double *) cbuf;
int n = nbuf/nper/sizeof(double);
int m = 0;
for (i = 0; i < n; i++) {
for (j = 0; j < nper; j++) {
double dvalue = dbuf[m++];
if (atomtype[j] == INT) fprintf(fp,"%d ",static_cast<int> (dvalue));
else if (atomtype[j] == DOUBLE) fprintf(fp,"%g ",dvalue);
}
fprintf(fp,"\n");
}
}
/* ---------------------------------------------------------------------- */
void LAMMPSDataWrite::post()
{
if (me == 0) fclose(fp);
}
/* ---------------------------------------------------------------------- */
void LAMMPSDataWrite::file(char *fname)
{
int n = strlen(fname) + 1;
outfile = new char[n];
strcpy(outfile,fname);
}
/* ---------------------------------------------------------------------- */
void LAMMPSDataWrite::header(char *str, int ivalue)
{
if (nheader == maxheader) grow_header();
int n = strlen(str) + 2;
format[nheader] = new char[n];
strcpy(format[nheader],str);
format[nheader][n-2] = '\n';
format[nheader][n-1] = '\0';
headtype[nheader] = INT;
ihead[nheader] = ivalue;
nheader++;
}
/* ---------------------------------------------------------------------- */
void LAMMPSDataWrite::header(char *str, double dvalue)
{
if (nheader == maxheader) grow_header();
int n = strlen(str) + 2;
format[nheader] = new char[n];
strcpy(format[nheader],str);
format[nheader][n-2] = '\n';
format[nheader][n-1] = '\0';
headtype[nheader] = DOUBLE;
dhead[nheader] = dvalue;
nheader++;
}
/* ---------------------------------------------------------------------- */
void LAMMPSDataWrite::header(char *str, double dvalue1, double dvalue2)
{
if (nheader == maxheader) grow_header();
int n = strlen(str) + 2;
format[nheader] = new char[n];
strcpy(format[nheader],str);
format[nheader][n-2] = '\n';
format[nheader][n-1] = '\0';
headtype[nheader] = DOUBLE2;
ddhead[nheader][0] = dvalue1;
ddhead[nheader][1] = dvalue2;
nheader++;
}
/* ---------------------------------------------------------------------- */
void LAMMPSDataWrite::atoms(int n)
{
nlocal = n;
}
/* ---------------------------------------------------------------------- */
void LAMMPSDataWrite::atoms(int *vec)
{
if (nper == maxper) grow_peratom();
atomtype[nper] = INT;
ivec[nper] = vec;
stride[nper] = 1;
nper++;
}
/* ---------------------------------------------------------------------- */
void LAMMPSDataWrite::atoms(double *vec)
{
if (nper == maxper) grow_peratom();
atomtype[nper] = DOUBLE;
dvec[nper] = vec;
stride[nper] = 1;
nper++;
}
/* ---------------------------------------------------------------------- */
void LAMMPSDataWrite::atoms(int n, double **vec)
{
while (nper+n >= maxper) grow_peratom();
for (int i = 0; i < n; i++) {
atomtype[nper] = DOUBLE;
dvec[nper] = &vec[0][i];
stride[nper] = n;
nper++;
}
}
/* ---------------------------------------------------------------------- */
void LAMMPSDataWrite::grow_header()
{
int n = maxheader + DELTA;
format = (char **) memory->srealloc(format,n*sizeof(char *),"ldw:format");
headtype = (int *) memory->srealloc(headtype,n*sizeof(int),"ldw:headtype");
ihead = (int *) memory->srealloc(ihead,n*sizeof(int),"ldw:ihead");
dhead = (double *) memory->srealloc(dhead,n*sizeof(double),"ldw:dhead");
ddhead = memory->grow_2d_double_array(ddhead,n,2,"ldw:ddhead");
maxheader = n;
}
/* ---------------------------------------------------------------------- */
void LAMMPSDataWrite::grow_peratom()
{
int n = maxper + DELTA;
atomtype = (int *) memory->srealloc(atomtype,n*sizeof(int),"ldw:atomtype");
ivec = (int **) memory->srealloc(ivec,n*sizeof(int *),"ldw:ivec");
dvec = (double **) memory->srealloc(dvec,n*sizeof(double *),"ldw:dvec");
stride = (int *) memory->srealloc(stride,n*sizeof(int),"ldw:stride");
maxper = n;
}


@ -0,0 +1,48 @@
#ifndef LAMMPS_DATA_WRITE_H
#define LAMMPS_DATA_WRITE_H
#include "send2one.h"
#include "stdio.h"
class LAMMPSDataWrite : public Send2One {
public:
LAMMPSDataWrite(MPI_Comm);
~LAMMPSDataWrite();
void pre();
int size();
void pack(char *);
void process(int, char *);
void post();
void file(char *);
void header(char *, int);
void header(char *, double);
void header(char *, double, double);
void atoms(int);
void atoms(int *);
void atoms(double *);
void atoms(int, double **);
private:
char *outfile;
int nlocal;
FILE *fp;
int nheader,maxheader;
char **format;
int *headtype,*ihead;
double *dhead;
double **ddhead;
int nper,maxper;
int *atomtype;
int **ivec;
double **dvec;
int *stride;
void grow_header();
void grow_peratom();
};
#endif


@ -0,0 +1,316 @@
#include "mpi.h"
#include "many2many.h"
#include "irregular.h"
#include "memory.h"
#include "error.h"
#include <map>
#define MAX(A,B) (((A) > (B)) ? (A) : (B))
/* ---------------------------------------------------------------------- */
Many2Many::Many2Many(MPI_Comm caller)
{
comm = caller;
MPI_Comm_rank(comm,&me);
MPI_Comm_size(comm,&nprocs);
memory = new Memory(comm);
error = new Error(comm);
src_own = dest_own = NULL;
src_off = dest_off = NULL;
src_iwork = dest_iwork = NULL;
src_dwork = dest_dwork = NULL;
irregular = NULL;
}
/* ---------------------------------------------------------------------- */
Many2Many::~Many2Many()
{
deallocate();      // free buffers first: deallocate() uses memory
delete memory;
delete error;
}
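/* ----------------------------------------------------------------------
   typical usage (illustrative):
     Many2Many m2m(MPI_COMM_WORLD);
     m2m.setup(nsrc,id_src,ndest,id_dest);   // match IDs once
     m2m.exchange(src,dest);                 // exchange int or double data
------------------------------------------------------------------------- */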
/* ----------------------------------------------------------------------
create a many2many pattern, deallocating any previous pattern
each proc will contribute nsrc items with IDs listed in id_src
each proc will receive ndest items with IDs listed in id_dest
only sets up communication via rendezvous algorithm and Irregular class
if id_src does not match id_dest on all procs
------------------------------------------------------------------------- */
void Many2Many::setup(int nsrc, int *id_src, int ndest, int *id_dest)
{
int i,j,isrc,idest,nsend,nrecv;
int *proclist,*work;
std::map<int,int> hash;
std::map<int,int>::iterator loc;
// free any previous many2many info
deallocate();
// allocate on-proc and off-proc index lists
src_own =
(int *) memory->smalloc(nsrc*sizeof(int),"many2many:src_own");
dest_own =
(int *) memory->smalloc(ndest*sizeof(int),"many2many:dest_own");
src_off =
(int *) memory->smalloc(nsrc*sizeof(int),"many2many:src_off");
dest_off =
(int *) memory->smalloc(ndest*sizeof(int),"many2many:dest_off");
// store destination IDs in hash
for (int i = 0; i < ndest; i++)
hash.insert(std::pair<int,int> (id_dest[i],i));
// src_own, dest_own = list of IDs in both src and dest
// nsrc_off = list of IDs in src owned by other procs
nown = nsrc_off = 0;
for (i = 0; i < nsrc; i++) {
loc = hash.find(id_src[i]);
if (loc != hash.end()) {
src_own[nown] = i;
dest_own[nown] = loc->second;
nown++;
} else src_off[nsrc_off++] = i;
}
// all done if all procs have one-to-one mapping of src and dest IDs
// else figure out irregular comm needed
int flag = 0;
if (nown == nsrc && nown == ndest) flag = 1;
int flagall;
MPI_Allreduce(&flag,&flagall,1,MPI_INT,MPI_MIN,comm);
if (flagall) return;
// ndest_off = list of IDs in dest owned by other procs
work = (int *) memory->smalloc(ndest*sizeof(int),"many2many:work");
for (i = 0; i < ndest; i++) work[i] = 0;
for (i = 0; i < nown; i++) work[dest_own[i]] = 1;
ndest_off = 0;
for (i = 0; i < ndest; i++)
if (work[i] == 0) dest_off[ndest_off++] = i;
memory->sfree(work);
// realloc off-proc arrays to smaller size
src_off = (int *)
memory->srealloc(src_off,nsrc_off*sizeof(int),"many2many:src_off");
dest_off = (int *)
memory->srealloc(dest_off,ndest_off*sizeof(int),"many2many:dest_off");
// send off-proc src and dest Datums to 3rd-party proc via irregular comm
// proc = ID % nprocs
nsend = nsrc_off + ndest_off;
proclist = new int[nsend];
Datum1 *send1 = new Datum1[nsend];
for (i = 0; i < nsrc_off; i++) {
proclist[i] = id_src[src_off[i]] % nprocs;
send1[i].id = id_src[src_off[i]];
send1[i].proc = me;
send1[i].index = src_off[i];
}
for (i = 0, j = nsrc_off; i < ndest_off; i++, j++) {
proclist[j] = id_dest[dest_off[i]] % nprocs;
send1[j].id = -id_dest[dest_off[i]];
send1[j].proc = me;
send1[j].index = dest_off[i];
}
irregular = new Irregular(comm);
irregular->pattern(nsend,proclist);
nrecv = irregular->size(sizeof(Datum1)) / sizeof(Datum1);
Datum1 *recv1 = new Datum1[nrecv];
irregular->exchange((char *) send1, (char *) recv1);
delete irregular;
delete [] proclist;
// as 3rd-party proc, now have matching pairs of off-proc IDs
// store src IDs (which are positive) in hash
// loop over dest IDs (which are negative) to find matches
// send match info back to src procs via a 2nd irregular comm
nsend = nrecv/2;
proclist = new int[nsend];
Datum2 *send2 = new Datum2[nsend];
nsend = 0;
hash.clear();
for (isrc = 0; isrc < nrecv; isrc++)
if (recv1[isrc].id > 0)
hash.insert(std::pair<int,int> (recv1[isrc].id,isrc));
for (idest = 0; idest < nrecv; idest++)
if (recv1[idest].id < 0) {
loc = hash.find(-recv1[idest].id);
if (loc != hash.end()) {
isrc = loc->second;
proclist[nsend] = recv1[isrc].proc;
send2[nsend].slocal = recv1[isrc].index;
send2[nsend].dlocal = recv1[idest].index;
send2[nsend].dproc = recv1[idest].proc;
nsend++;
} else error->one("Did not receive matching src/dest ID");
}
irregular = new Irregular(comm);
irregular->pattern(nsend,proclist);
nrecv = irregular->size(sizeof(Datum2)) / sizeof(Datum2);
Datum2 *recv2 = new Datum2[nrecv];
irregular->exchange((char *) send2, (char *) recv2);
delete irregular;
delete [] proclist;
// use list of received src->dest Datums to build final irregular comm
// irregular comm will communicate off-proc info from src to dest directly
// work = local indices of dest IDs to send initially
nsend = nrecv;
proclist = new int[nsend];
work = new int[nsend];
for (i = 0; i < nrecv; i++) {
src_off[i] = recv2[i].slocal;
work[i] = recv2[i].dlocal;
proclist[i] = recv2[i].dproc;
}
irregular = new Irregular(comm);
irregular->pattern(nsend,proclist);
// send receiver's local indices
// receiver stores them as indirection list in dest_off
nrecv = irregular->size(sizeof(int)) / sizeof(int);
irregular->exchange((char *) work, (char *) dest_off);
delete [] proclist;
delete [] work;
// create work arrays for data exchange of int/double data
src_iwork =
(int *) memory->smalloc(nsrc_off*sizeof(int),"many2many:src_iwork");
dest_iwork =
(int *) memory->smalloc(ndest_off*sizeof(int),"many2many:dest_iwork");
src_dwork =
(double *) memory->smalloc(nsrc_off*sizeof(double),"many2many:src_dwork");
dest_dwork =
(double *) memory->smalloc(ndest_off*sizeof(double),
"many2many:dest_dwork");
// clean up
delete [] send1;
delete [] recv1;
delete [] send2;
delete [] recv2;
// debug checks for full coverage of src/dest - can delete eventually
work = new int[MAX(nsrc,ndest)];
for (i = 0; i < nsrc; i++) work[i] = 0;
for (i = 0; i < nown; i++) work[src_own[i]]++;
for (i = 0; i < nsrc_off; i++) work[src_off[i]]++;
for (i = 0; i < nsrc; i++)
if (work[i] != 1) {
char str[128];
sprintf(str,"BAD SRC %d: %d %d\n",me,i,work[i]);
error->one(str);
}
for (i = 0; i < ndest; i++) work[i] = 0;
for (i = 0; i < nown; i++) work[dest_own[i]]++;
for (i = 0; i < ndest_off; i++) work[dest_off[i]]++;
for (i = 0; i < ndest; i++)
if (work[i] != 1) {
char str[128];
sprintf(str,"BAD DEST %d: %d %d\n",me,i,work[i]);
error->one(str);
}
delete [] work;
}
/* ----------------------------------------------------------------------
transfer one src entity to dest entity, matched by IDs in setup()
operates on an int vector
------------------------------------------------------------------------- */
void Many2Many::exchange(int *src, int *dest)
{
int i;
// copy on-proc info
for (i = 0; i < nown; i++)
dest[dest_own[i]] = src[src_own[i]];
// communicate off-proc info
// use src_off and dest_off to pack/unpack data
if (irregular) {
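// size() must be called before exchange() so the stored pattern
// knows the per-datum byte count; it returns total bytes to receive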
int nrecv = irregular->size(sizeof(int)) / sizeof(int);
for (i = 0; i < nsrc_off; i++) src_iwork[i] = src[src_off[i]];
irregular->exchange((char *) src_iwork, (char *) dest_iwork);
for (i = 0; i < ndest_off; i++) dest[dest_off[i]] = dest_iwork[i];
}
}
/* ----------------------------------------------------------------------
transfer one src entity to dest entity, matched by IDs in setup()
operates on a double vector
------------------------------------------------------------------------- */
void Many2Many::exchange(double *src, double *dest)
{
int i;
// copy on-proc info
for (i = 0; i < nown; i++)
dest[dest_own[i]] = src[src_own[i]];
// communicate off-proc info
// use src_off and dest_off to pack/unpack data
if (irregular) {
int nrecv = irregular->size(sizeof(double)) / sizeof(double);
for (i = 0; i < nsrc_off; i++) src_dwork[i] = src[src_off[i]];
irregular->exchange((char *) src_dwork, (char *) dest_dwork);
for (i = 0; i < ndest_off; i++) dest[dest_off[i]] = dest_dwork[i];
}
}
/* ---------------------------------------------------------------------- */
void Many2Many::deallocate()
{
memory->sfree(src_own);
memory->sfree(dest_own);
memory->sfree(src_off);
memory->sfree(dest_off);
memory->sfree(src_iwork);
memory->sfree(dest_iwork);
memory->sfree(src_dwork);
memory->sfree(dest_dwork);
delete irregular;
}

47
couple/library/many2many.h Normal file

@@ -0,0 +1,47 @@
#ifndef MANY2MANY_H
#define MANY2MANY_H
#include "mpi.h"
class Many2Many {
public:
Many2Many(MPI_Comm);
~Many2Many();
void setup(int, int *, int, int *);
void exchange(int *, int *);
void exchange(double *, double *);
protected:
int me,nprocs;
MPI_Comm comm;
class Memory *memory;
class Error *error;
int nown; // # of IDs common to src and dest
int nsrc_off,ndest_off; // # of off-processor IDs
int *src_own,*dest_own; // indices of the owned IDs
int *src_off,*dest_off; // indices of the off-proc IDs
int *src_iwork,*dest_iwork; // work arrays for comm of ints
double *src_dwork,*dest_dwork; // work arrays for comm of doubles
class Irregular *irregular; // irregular comm from src->dest
struct Datum1 {
int id; // src or dest global ID
int proc; // owning proc
int index; // local index on owning proc
};
struct Datum2 {
int slocal; // local index of src ID on sending proc
int dlocal; // local index of dest ID on receiving proc
int dproc; // receiving proc
};
void deallocate();
};
#endif
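
A minimal usage sketch of Many2Many (a hypothetical driver, not part of
this commit; assumes compilation against libcouple.a and MPI):

#include "mpi.h"
#include "many2many.h"

int main(int narg, char **arg)
{
  MPI_Init(&narg,&arg);
  // src owns values for global IDs {1,2,3}; dest wants them as {3,1,2}
  int id_src[3] = {1,2,3};
  int id_dest[3] = {3,1,2};
  double src[3] = {10.0,20.0,30.0};
  double dest[3];
  Many2Many *m2m = new Many2Many(MPI_COMM_WORLD);
  m2m->setup(3,id_src,3,id_dest);
  m2m->exchange(src,dest);      // dest is now {30.0,10.0,20.0}
  delete m2m;
  MPI_Finalize();
}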

103
couple/library/many2one.cpp Normal file

@@ -0,0 +1,103 @@
#include "mpi.h"
#include "stdio.h"
#include "stdlib.h"
#include "many2one.h"
#include "memory.h"
/* ---------------------------------------------------------------------- */
Many2One::Many2One(MPI_Comm caller_comm)
{
comm = caller_comm;
MPI_Comm_rank(comm,&me);
MPI_Comm_size(comm,&nprocs);
memory = new Memory(comm);
if (me == 0) {
counts = new int[nprocs];
multicounts = new int[nprocs];
displs = new int[nprocs];
multidispls = new int[nprocs];
} else counts = multicounts = displs = multidispls = NULL;
idall = NULL;
}
/* ---------------------------------------------------------------------- */
Many2One::~Many2One()
{
memory->sfree(idall);
delete memory;
delete [] counts;
delete [] multicounts;
delete [] displs;
delete [] multidispls;
}
/* ---------------------------------------------------------------------- */
void Many2One::setup(int nsrc_in, int *id, int ndest)
{
nsrc = nsrc_in;
MPI_Allreduce(&nsrc,&nall,1,MPI_INT,MPI_SUM,comm);
MPI_Gather(&nsrc,1,MPI_INT,counts,1,MPI_INT,0,comm);
if (me == 0) {
displs[0] = 0;
for (int i = 1; i < nprocs; i++)
displs[i] = displs[i-1] + counts[i-1];
}
// gather IDs into idall
idall = NULL;
if (me == 0)
idall = (int *) memory->smalloc(nall*sizeof(int),"many2one:idall");
MPI_Gatherv(id,nsrc,MPI_INT,idall,counts,displs,MPI_INT,0,comm);
}
/* ---------------------------------------------------------------------- */
void Many2One::gather(double *src, int n, double *dest)
{
int i,j,ii,jj,m;
if (me == 0)
for (i = 0; i < nprocs; i++) {
multicounts[i] = n*counts[i];
multidispls[i] = n*displs[i];
}
// allgather src into desttmp
double *desttmp = NULL;
if (me == 0)
desttmp = (double *) memory->smalloc(n*nall*sizeof(double),
"many2one:idsttmp");
MPI_Gatherv(src,n*nsrc,MPI_DOUBLE,desttmp,multicounts,multidispls,
MPI_DOUBLE,0,comm);
// use idall to move datums from desttmp to dest
if (me == 0) {
if (n == 1)
for (i = 0; i < nall; i++) {
j = idall[i] - 1;
dest[j] = desttmp[i];
}
else
for (i = 0; i < nall; i++) {
j = idall[i] - 1;
ii = n*i;
jj = n*j;
for (m = 0; m < n; m++)
dest[jj++] = desttmp[ii++];
}
}
memory->sfree(desttmp);
}

25
couple/library/many2one.h Normal file

@@ -0,0 +1,25 @@
#ifndef MANY2ONE_H
#define MANY2ONE_H
#include "mpi.h"
class Many2One {
public:
Many2One(MPI_Comm);
~Many2One();
void setup(int, int *, int);
void gather(double *, int, double *);
protected:
int me,nprocs;
MPI_Comm comm;
class Memory *memory;
int nsrc,nall;
int *counts,*multicounts;
int *displs,*multidispls;
int *idall;
};
#endif
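
A minimal usage sketch of Many2One (a hypothetical driver, not part of
this commit): each proc contributes one value tagged with a global ID,
and proc 0 receives all values ordered by ID.

#include "mpi.h"
#include "many2one.h"

int main(int narg, char **arg)
{
  MPI_Init(&narg,&arg);
  int me,nprocs;
  MPI_Comm_rank(MPI_COMM_WORLD,&me);
  MPI_Comm_size(MPI_COMM_WORLD,&nprocs);
  int id = me + 1;                    // global IDs run from 1 to nprocs
  double mine = 100.0*(me+1);
  double *all = NULL;
  if (me == 0) all = new double[nprocs];
  Many2One *m2o = new Many2One(MPI_COMM_WORLD);
  m2o->setup(1,&id,nprocs);
  m2o->gather(&mine,1,all);           // on proc 0: all[i] = 100*(i+1)
  delete m2o;
  if (me == 0) delete [] all;
  MPI_Finalize();
}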

120
couple/library/memory.cpp Normal file

@@ -0,0 +1,120 @@
#include "mpi.h"
#include "stdlib.h"
#include "stdio.h"
#include "memory.h"
#include "error.h"
/* ---------------------------------------------------------------------- */
Memory::Memory(MPI_Comm comm)
{
error = new Error(comm);
}
/* ---------------------------------------------------------------------- */
Memory::~Memory()
{
delete error;
}
/* ----------------------------------------------------------------------
safe malloc
------------------------------------------------------------------------- */
void *Memory::smalloc(int n, const char *name)
{
if (n == 0) return NULL;
void *ptr = malloc(n);
if (ptr == NULL) {
char str[128];
sprintf(str,"Failed to allocate %d bytes for array %s",n,name);
error->one(str);
}
return ptr;
}
/* ----------------------------------------------------------------------
safe free
------------------------------------------------------------------------- */
void Memory::sfree(void *ptr)
{
if (ptr == NULL) return;
free(ptr);
}
/* ----------------------------------------------------------------------
safe realloc
------------------------------------------------------------------------- */
void *Memory::srealloc(void *ptr, int n, const char *name)
{
if (n == 0) {
sfree(ptr);
return NULL;
}
ptr = realloc(ptr,n);
if (ptr == NULL) {
char str[128];
sprintf(str,"Failed to reallocate %d bytes for array %s",n,name);
error->one(str);
}
return ptr;
}
/* ----------------------------------------------------------------------
create a 2d double array
------------------------------------------------------------------------- */
double **Memory::create_2d_double_array(int n1, int n2, const char *name)
{
double *data = (double *) smalloc(n1*n2*sizeof(double),name);
double **array = (double **) smalloc(n1*sizeof(double *),name);
int n = 0;
for (int i = 0; i < n1; i++) {
array[i] = &data[n];
n += n2;
}
return array;
}
/* ----------------------------------------------------------------------
grow or shrink 1st dim of a 2d double array
last dim must stay the same
if either dim is 0, return NULL
------------------------------------------------------------------------- */
double **Memory::grow_2d_double_array(double **array,
int n1, int n2, const char *name)
{
if (array == NULL) return create_2d_double_array(n1,n2,name);
double *data = (double *) srealloc(array[0],n1*n2*sizeof(double),name);
array = (double **) srealloc(array,n1*sizeof(double *),name);
int n = 0;
for (int i = 0; i < n1; i++) {
array[i] = &data[n];
n += n2;
}
return array;
}
/* ----------------------------------------------------------------------
free a 2d double array
------------------------------------------------------------------------- */
void Memory::destroy_2d_double_array(double **array)
{
if (array == NULL) return;
sfree(array[0]);
sfree(array);
}

23
couple/library/memory.h Normal file

@@ -0,0 +1,23 @@
#ifndef MEMORY_H
#define MEMORY_H
#include "mpi.h"
class Memory {
public:
Memory(MPI_Comm);
~Memory();
void *smalloc(int n, const char *);
void sfree(void *);
void *srealloc(void *, int n, const char *name);
double **create_2d_double_array(int, int, const char *);
double **grow_2d_double_array(double **, int, int, const char *);
void destroy_2d_double_array(double **);
private:
class Error *error;
};
#endif
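
A short sketch of the 2d-array helpers (a hypothetical snippet, not part
of this commit); rows share one contiguous data block, so array[0] can
be passed to MPI or realloc'd as a single chunk:

#include "mpi.h"
#include "memory.h"

int main(int narg, char **arg)
{
  MPI_Init(&narg,&arg);
  Memory *memory = new Memory(MPI_COMM_WORLD);
  double **x = memory->create_2d_double_array(10,3,"demo:x");
  x[4][2] = 1.5;                                      // row-major storage
  x = memory->grow_2d_double_array(x,20,3,"demo:x");  // first 10 rows kept
  memory->destroy_2d_double_array(x);
  delete memory;
  MPI_Finalize();
}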

76
couple/library/one2many.cpp Normal file

@@ -0,0 +1,76 @@
#include "mpi.h"
#include "one2many.h"
#include "memory.h"
#include <map>
/* ---------------------------------------------------------------------- */
One2Many::One2Many(MPI_Comm caller_comm)
{
comm = caller_comm;
MPI_Comm_rank(comm,&me);
MPI_Comm_size(comm,&nprocs);
memory = new Memory(comm);
hash = new std::map<int,int>();
}
/* ---------------------------------------------------------------------- */
One2Many::~One2Many()
{
delete memory;
delete hash;
}
/* ---------------------------------------------------------------------- */
void One2Many::setup(int nsrc_in, int ndest, int *id)
{
nsrc = nsrc_in;
// store my local IDs in hash
hash->clear();
for (int i = 0; i < ndest; i++)
hash->insert(std::pair<int,int> (id[i],i));
}
/* ---------------------------------------------------------------------- */
void One2Many::scatter(double *src, int n, double *dest)
{
int i,j,k,m;
// allocate src on procs that don't have it
int flag = 0;
if (src == NULL) {
src = (double *) memory->smalloc(n*nsrc*sizeof(double),"one2many:src");
flag = 1;
}
// broadcast src from 0 to other procs
MPI_Bcast(src,n*nsrc,MPI_DOUBLE,0,comm);
// each proc loops over entire src
// if I own the global ID, copy src values into dest
std::map<int,int>::iterator loc;
for (m = 1; m <= nsrc; m++) {
loc = hash->find(m);
if (loc == hash->end()) continue;
i = n*loc->second;
j = n*(m-1);
if (n == 1) dest[i] = src[j];
else
for (k = 0; k < n; k++)
dest[i++] = src[j++];
}
// free locally allocated src
if (flag) memory->sfree(src);
}

24
couple/library/one2many.h Normal file

@@ -0,0 +1,24 @@
#ifndef ONE2MANY_H
#define ONE2MANY_H
#include "mpi.h"
#include <map>
class One2Many {
public:
One2Many(MPI_Comm);
~One2Many();
void setup(int, int, int *);
void scatter(double *, int, double *);
protected:
int me,nprocs;
MPI_Comm comm;
class Memory *memory;
std::map<int,int> *hash;
int nsrc;
};
#endif
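
A minimal usage sketch of One2Many (a hypothetical driver, not part of
this commit): proc 0 holds one value per global ID, and each proc
receives the values for the IDs it owns.

#include "mpi.h"
#include "one2many.h"

int main(int narg, char **arg)
{
  MPI_Init(&narg,&arg);
  int me,nprocs;
  MPI_Comm_rank(MPI_COMM_WORLD,&me);
  MPI_Comm_size(MPI_COMM_WORLD,&nprocs);
  int id = me + 1;              // this proc owns global ID me+1
  double *src = NULL;
  double dest[1];
  if (me == 0) {                // only proc 0 provides src, ordered by ID
    src = new double[nprocs];
    for (int i = 0; i < nprocs; i++) src[i] = 100.0*(i+1);
  }
  One2Many *o2m = new One2Many(MPI_COMM_WORLD);
  o2m->setup(nprocs,1,&id);
  o2m->scatter(src,1,dest);     // dest[0] = 100*(me+1) on every proc
  delete o2m;
  if (me == 0) delete [] src;
  MPI_Finalize();
}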

83
couple/library/send2one.cpp Normal file

@@ -0,0 +1,83 @@
#include "mpi.h"
#include "stdlib.h"
#include "stdio.h"
#include "send2one.h"
#include "memory.h"
#include "error.h"
/* ---------------------------------------------------------------------- */
Send2One::Send2One(MPI_Comm caller_comm)
{
comm = caller_comm;
MPI_Comm_rank(comm,&me);
MPI_Comm_size(comm,&nprocs);
memory = new Memory(comm);
error = new Error(comm);
buf = NULL;
maxbuf = 0;
}
/* ---------------------------------------------------------------------- */
Send2One::~Send2One()
{
memory->sfree(buf);
delete memory;
delete error;
}
/* ---------------------------------------------------------------------- */
void Send2One::execute()
{
int nme,nmax,nsize,ping;
MPI_Status status;
MPI_Request request;
// pre-processing before ping loop
pre();
// nme = size of data I contribute, in bytes
// nmax = max size of data on any proc, in bytes
// reallocate buf if necessary
nme = size();
MPI_Allreduce(&nme,&nmax,1,MPI_INT,MPI_MAX,comm);
if (nmax > maxbuf) {
maxbuf = nmax;
memory->sfree(buf);
buf = (char *) memory->smalloc(maxbuf,"send2one:buf");
}
// pack my data into buf
pack(buf);
// proc 0 pings each proc, receives its data
// all other procs wait for ping, send their data to proc 0
// invoke process() to work with data
if (me == 0) {
for (int iproc = 0; iproc < nprocs; iproc++) {
if (iproc) {
MPI_Irecv(buf,maxbuf,MPI_CHAR,iproc,0,comm,&request);
MPI_Send(&ping,0,MPI_INT,iproc,0,comm);
MPI_Wait(&request,&status);
MPI_Get_count(&status,MPI_CHAR,&nsize);
} else nsize = nme;
process(nsize,buf);
}
} else {
MPI_Recv(&ping,0,MPI_INT,0,0,comm,&status);
MPI_Rsend(buf,nme,MPI_CHAR,0,0,comm);
}
post();
}

29
couple/library/send2one.h Normal file

@@ -0,0 +1,29 @@
#ifndef SEND2ONE_H
#define SEND2ONE_H
#include "mpi.h"
class Send2One {
public:
Send2One(MPI_Comm);
virtual ~Send2One();
void execute();
protected:
int me,nprocs;
MPI_Comm comm;
class Memory *memory;
class Error *error;
int maxbuf;
char *buf;
virtual void pre() = 0;
virtual int size() = 0;
virtual void pack(char *) = 0;
virtual void process(int, char *) = 0;
virtual void post() = 0;
};
#endif
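
Since Send2One is abstract, a caller derives from it and implements the
five hooks. A minimal sketch (hypothetical, not part of this commit)
that funnels each proc's rank to proc 0 for printing:

#include "mpi.h"
#include "stdio.h"
#include "string.h"
#include "send2one.h"

class RankPrinter : public Send2One {
 public:
  RankPrinter(MPI_Comm caller_comm) : Send2One(caller_comm) {}
 protected:
  void pre() {}
  int size() { return sizeof(int); }                 // bytes I contribute
  void pack(char *cbuf) { memcpy(cbuf,&me,sizeof(int)); }
  void process(int nbytes, char *cbuf) {             // runs on proc 0 only
    int rank;
    memcpy(&rank,cbuf,sizeof(int));
    printf("proc 0 received %d bytes from proc %d\n",nbytes,rank);
  }
  void post() {}
};

// usage, on all procs: RankPrinter rp(MPI_COMM_WORLD); rp.execute();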

58
couple/simple/README Normal file

@@ -0,0 +1,58 @@
This directory has a simple C and C++ code that shows how LAMMPS can
be linked to a driver application as a library. The purpose is to
illustrate how another code could perform computations while using
LAMMPS to perform MD, or how an umbrella code or script could call
both LAMMPS and some other code to perform a coupled calculation.
simple.cpp is the C++ driver
simple.c is the C driver
The 2 codes do the same thing, so you can compare them to see how to
drive LAMMPS in this manner. The C driver is similar in spirit to
what one could use from a Fortran program or scripting language.
You can then build either driver code with a compile line something
like this, which includes paths to the LAMMPS library interface, MPI,
and FFTW (assuming you built LAMMPS as a library with its PPPM
solver).
This builds the C++ driver with the LAMMPS library using a C++ compiler:
g++ -I/home/sjplimp/lammps/src -c simple.cpp
g++ -L/home/sjplimp/lammps/src simple.o \
-llmp_g++ -lfftw -lmpich -lpthread -o simpleCC
This builds the C driver with the LAMMPS library using a C compiler:
gcc -I/home/sjplimp/lammps/src -c simple.c
gcc -L/home/sjplimp/lammps/src simple.o \
-llmp_g++ -lfftw -lmpich -lpthread -lstdc++ -o simpleC
You then run simpleCC or simpleC on a parallel machine on some number
of processors Q with 2 arguments:
mpirun -np Q simpleCC P in.lj
P is the number of procs you want LAMMPS to run on (must be <= Q) and
in.lj is a LAMMPS input script.
The driver will launch LAMMPS on P procs, read the input script a line
at a time, and pass each command line to LAMMPS. The final line of
the script is a "run" command, so LAMMPS will run the problem.
The driver then requests all the atom coordinates from LAMMPS, moves
one of the atoms a small amount "epsilon", passes the coordinates back
to LAMMPS, and runs LAMMPS again. If you look at the output, you
should see a small energy change between runs, due to the moved atom.
The C driver is calling C-style routines in the src/library.cpp file
of LAMMPS. You could add any functions you wish to this file to
manipulate LAMMPS data however you wish.
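For example, a new routine added to src/library.cpp might look like the
following (a hypothetical sketch, not a function the file provides;
assumes update.h is included there):

  /* return the current timestep (hypothetical addition) */
  int lammps_get_ntimestep(void *ptr)
  {
    LAMMPS *lmp = (LAMMPS *) ptr;
    return lmp->update->ntimestep;
  }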
The C++ driver does the same thing, except that it instantiates LAMMPS
as an object first. Some of the functions in src/library.cpp can be
invoked directly as methods within appropriate LAMMPS classes, which
is what the driver does. Any public LAMMPS class method could be
called from the driver this way. However the get/put functions are
only implemented in src/library.cpp, so the C++ driver calls them as
C-style functions.

24
couple/simple/in.lj Normal file

@@ -0,0 +1,24 @@
# 3d Lennard-Jones melt
units lj
atom_style atomic
atom_modify map array
lattice fcc 0.8442
region box block 0 4 0 4 0 4
create_box 1 box
create_atoms 1 box
mass 1 1.0
velocity all create 1.44 87287 loop geom
pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0 2.5
neighbor 0.3 bin
neigh_modify delay 0 every 20 check no
fix 1 all nve
run 10

116
couple/simple/simple.c Normal file

@@ -0,0 +1,116 @@
/* ----------------------------------------------------------------------
LAMMPS - Large-scale Atomic/Molecular Massively Parallel Simulator
www.cs.sandia.gov/~sjplimp/lammps.html
Steve Plimpton, sjplimp@sandia.gov, Sandia National Laboratories
Copyright (2003) Sandia Corporation. Under the terms of Contract
DE-AC04-94AL85000 with Sandia Corporation, the U.S. Government retains
certain rights in this software. This software is distributed under
the GNU General Public License.
See the README file in the top-level LAMMPS directory.
------------------------------------------------------------------------- */
/* c_driver = simple example of how an umbrella program
can invoke LAMMPS as a library on some subset of procs
Syntax: c_driver P in.lammps
P = # of procs to run LAMMPS on
must be <= # of procs the driver code itself runs on
in.lammps = LAMMPS input script
See README for compilation instructions */
#include "stdio.h"
#include "stdlib.h"
#include "string.h"
#include "mpi.h"
#include "library.h" /* this is a LAMMPS include file */
int main(int narg, char **arg)
{
/* setup MPI and various communicators
driver runs on all procs in MPI_COMM_WORLD
comm_lammps only has 1st P procs (could be all or any subset) */
MPI_Init(&narg,&arg);
if (narg != 3) {
printf("Syntax: c_driver P in.lammps\n");
exit(1);
}
int me,nprocs;
MPI_Comm_rank(MPI_COMM_WORLD,&me);
MPI_Comm_size(MPI_COMM_WORLD,&nprocs);
int nprocs_lammps = atoi(arg[1]);
if (nprocs_lammps > nprocs) {
if (me == 0)
printf("ERROR: LAMMPS cannot use more procs than available\n");
MPI_Abort(MPI_COMM_WORLD,1);
}
int lammps;
if (me < nprocs_lammps) lammps = 1;
else lammps = MPI_UNDEFINED;
MPI_Comm comm_lammps;
MPI_Comm_split(MPI_COMM_WORLD,lammps,0,&comm_lammps);
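/* procs given color MPI_UNDEFINED receive comm_lammps = MPI_COMM_NULL */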
/* open LAMMPS input script */
FILE *fp;
if (me == 0) {
fp = fopen(arg[2],"r");
if (fp == NULL) {
printf("ERROR: Could not open LAMMPS input script\n");
MPI_Abort(MPI_COMM_WORLD,1);
}
}
/* run the input script thru LAMMPS one line at a time until end-of-file
driver proc 0 reads a line, Bcasts it to all procs
(could just send it to proc 0 of comm_lammps and let it Bcast)
all LAMMPS procs call lammps_command() on the line */
void *ptr;
if (lammps == 1) lammps_open(0,NULL,comm_lammps,&ptr);
int n;
char line[1024];
while (1) {
if (me == 0) {
if (fgets(line,1024,fp) == NULL) n = 0;
else n = strlen(line) + 1;
if (n == 0) fclose(fp);
}
MPI_Bcast(&n,1,MPI_INT,0,MPI_COMM_WORLD);
if (n == 0) break;
MPI_Bcast(line,n,MPI_CHAR,0,MPI_COMM_WORLD);
if (lammps == 1) lammps_command(ptr,line);
}
/* run 10 more steps
get coords from LAMMPS
change coords of 1st atom
put coords back into LAMMPS
run a single step with changed coords */
if (lammps == 1) {
lammps_command(ptr,"run 10");
int natoms = lammps_get_natoms(ptr);
double *x = (double *) malloc(3*natoms*sizeof(double));
lammps_get_coords(ptr,x);
double epsilon = 0.1;
x[0] += epsilon;
lammps_put_coords(ptr,x);
free(x);
lammps_command(ptr,"run 1");
}
if (lammps == 1) lammps_close(ptr);
/* close down MPI */
MPI_Finalize();
}

121
couple/simple/simple.cpp Normal file

@@ -0,0 +1,121 @@
/* ----------------------------------------------------------------------
LAMMPS - Large-scale Atomic/Molecular Massively Parallel Simulator
www.cs.sandia.gov/~sjplimp/lammps.html
Steve Plimpton, sjplimp@sandia.gov, Sandia National Laboratories
Copyright (2003) Sandia Corporation. Under the terms of Contract
DE-AC04-94AL85000 with Sandia Corporation, the U.S. Government retains
certain rights in this software. This software is distributed under
the GNU General Public License.
See the README file in the top-level LAMMPS directory.
------------------------------------------------------------------------- */
// c++_driver = simple example of how an umbrella program
// can invoke LAMMPS as a library on some subset of procs
// Syntax: c++_driver P in.lammps
// P = # of procs to run LAMMPS on
// must be <= # of procs the driver code itself runs on
// in.lammps = LAMMPS input script
// See README for compilation instructions
#include "stdio.h"
#include "stdlib.h"
#include "string.h"
#include "mpi.h"
#include "lammps.h" // these are LAMMPS include files
#include "input.h"
#include "atom.h"
#include "library.h"
using namespace LAMMPS_NS;
int main(int narg, char **arg)
{
// setup MPI and various communicators
// driver runs on all procs in MPI_COMM_WORLD
// comm_lammps only has 1st P procs (could be all or any subset)
MPI_Init(&narg,&arg);
if (narg != 3) {
printf("Syntax: c++_driver P in.lammps\n");
exit(1);
}
int me,nprocs;
MPI_Comm_rank(MPI_COMM_WORLD,&me);
MPI_Comm_size(MPI_COMM_WORLD,&nprocs);
int nprocs_lammps = atoi(arg[1]);
if (nprocs_lammps > nprocs) {
if (me == 0)
printf("ERROR: LAMMPS cannot use more procs than available\n");
MPI_Abort(MPI_COMM_WORLD,1);
}
int lammps;
if (me < nprocs_lammps) lammps = 1;
else lammps = MPI_UNDEFINED;
MPI_Comm comm_lammps;
MPI_Comm_split(MPI_COMM_WORLD,lammps,0,&comm_lammps);
// open LAMMPS input script
FILE *fp;
if (me == 0) {
fp = fopen(arg[2],"r");
if (fp == NULL) {
printf("ERROR: Could not open LAMMPS input script\n");
MPI_Abort(MPI_COMM_WORLD,1);
}
}
// run the input script thru LAMMPS one line at a time until end-of-file
// driver proc 0 reads a line, Bcasts it to all procs
// (could just send it to proc 0 of comm_lammps and let it Bcast)
// all LAMMPS procs call input->one() on the line
LAMMPS *lmp;
if (lammps == 1) lmp = new LAMMPS(0,NULL,comm_lammps);
int n;
char line[1024];
while (1) {
if (me == 0) {
if (fgets(line,1024,fp) == NULL) n = 0;
else n = strlen(line) + 1;
if (n == 0) fclose(fp);
}
MPI_Bcast(&n,1,MPI_INT,0,MPI_COMM_WORLD);
if (n == 0) break;
MPI_Bcast(line,n,MPI_CHAR,0,MPI_COMM_WORLD);
if (lammps == 1) lmp->input->one(line);
}
// run 10 more steps
// get coords from LAMMPS
// change coords of 1st atom
// put coords back into LAMMPS
// run a single step with changed coords
if (lammps == 1) {
lmp->input->one("run 10");
int natoms = static_cast<int> (lmp->atom->natoms);
double *x = new double[3*natoms];
lammps_get_coords(lmp,x); // no LAMMPS class function for this
double epsilon = 0.1;
x[0] += epsilon;
lammps_put_coords(lmp,x); // no LAMMPS class function for this
delete [] x;
lmp->input->one("run 1");
}
if (lammps == 1) delete lmp;
// close down MPI
MPI_Finalize();
}


@@ -62,10 +62,6 @@ You can visualize the dump file as follows:
------------------------------------------
There is also a COUPLE directory which has examples of how to link to
LAMMPS as a library and drive it from a wrapper program. See the
README file for more info.
There is also an ELASTIC directory with an example script for
computing elastic constants, using a zero temperature Si example. See
the in.elastic file for more info.

File diff suppressed because one or more lines are too long