forked from lijiext/lammps
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@12503 f3b2605a-c512-4ea7-a41b-209d697bcdaa
parent 1c68c2b3c0
commit 40da763955
@@ -136,8 +136,8 @@ library.
 </P>
 <P>The mpirun or mpiexec command sets the total number of MPI tasks used
 by LAMMPS (one or multiple per compute node) and the number of MPI
-tasks used per node. E.g. the mpirun command does this via its -np
-and -ppn switches.
+tasks used per node. E.g. the mpirun command in MPICH does this via
+its -np and -ppn switches. Ditto OpenMPI via -np and -npernode.
 </P>
 <P>When using the USER-CUDA package, you must use exactly one MPI task
 per physical GPU.
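The rewording above names the per-launcher switches explicitly; a minimal launch sketch for both MPI stacks (the binary name lmp and input file in.lj are assumed for illustration, not taken from this commit):

```shell
# Run 8 MPI tasks total, 2 per node, across 4 nodes.

# MPICH: -np sets the total task count, -ppn the tasks per node
mpirun -np 8 -ppn 2 lmp -in in.lj

# OpenMPI: -np sets the total task count, -npernode the tasks per node
mpirun -np 8 -npernode 2 lmp -in in.lj
```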
@@ -133,8 +133,8 @@ library.
 
 The mpirun or mpiexec command sets the total number of MPI tasks used
 by LAMMPS (one or multiple per compute node) and the number of MPI
-tasks used per node. E.g. the mpirun command does this via its -np
-and -ppn switches.
+tasks used per node. E.g. the mpirun command in MPICH does this via
+its -np and -ppn switches. Ditto OpenMPI via -np and -npernode.
 
 When using the USER-CUDA package, you must use exactly one MPI task
 per physical GPU.
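Because USER-CUDA requires exactly one MPI task per physical GPU, the task counts follow directly from the GPU counts; a hedged sketch (the 2-node/2-GPU layout and the lmp binary name are assumptions for illustration):

```shell
# 2 nodes with 2 GPUs each -> 4 MPI tasks total, 2 per node,
# so each task drives exactly one GPU
mpirun -np 4 -ppn 2 lmp -c on -sf cuda -in in.lj
```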
@@ -132,8 +132,8 @@ re-compiled and linked to the new GPU library.
 </P>
 <P>The mpirun or mpiexec command sets the total number of MPI tasks used
 by LAMMPS (one or multiple per compute node) and the number of MPI
-tasks used per node. E.g. the mpirun command does this via its -np
-and -ppn switches.
+tasks used per node. E.g. the mpirun command in MPICH does this via
+its -np and -ppn switches. Ditto OpenMPI via -np and -npernode.
 </P>
 <P>When using the GPU package, you cannot assign more than one GPU to a
 single MPI task. However multiple MPI tasks can share the same GPU,
@@ -129,8 +129,8 @@ re-compiled and linked to the new GPU library.
 
 The mpirun or mpiexec command sets the total number of MPI tasks used
 by LAMMPS (one or multiple per compute node) and the number of MPI
-tasks used per node. E.g. the mpirun command does this via its -np
-and -ppn switches.
+tasks used per node. E.g. the mpirun command in MPICH does this via
+its -np and -ppn switches. Ditto OpenMPI via -np and -npernode.
 
 When using the GPU package, you cannot assign more than one GPU to a
 single MPI task. However multiple MPI tasks can share the same GPU,
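Since the GPU package lets multiple MPI tasks share a GPU, the per-node task count can exceed the GPU count; a sketch under assumed node geometry (8 cores and 2 GPUs per node; binary and input names are illustrative):

```shell
# 8 MPI tasks per node sharing 2 GPUs (4 tasks per GPU);
# "-pk gpu 2" tells the GPU package to use 2 GPUs per node
mpirun -np 8 -ppn 8 lmp -sf gpu -pk gpu 2 -in in.lj
```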
@@ -117,8 +117,8 @@ higher is recommended.
 </P>
 <P>The mpirun or mpiexec command sets the total number of MPI tasks used
 by LAMMPS (one or multiple per compute node) and the number of MPI
-tasks used per node. E.g. the mpirun command does this via its -np
-and -ppn switches.
+tasks used per node. E.g. the mpirun command in MPICH does this via
+its -np and -ppn switches. Ditto OpenMPI via -np and -npernode.
 </P>
 <P>If LAMMPS was also built with the USER-OMP package, you need to choose
 how many OpenMP threads per MPI task will be used by the USER-OMP
@@ -114,8 +114,8 @@ higher is recommended.
 
 The mpirun or mpiexec command sets the total number of MPI tasks used
 by LAMMPS (one or multiple per compute node) and the number of MPI
-tasks used per node. E.g. the mpirun command does this via its -np
-and -ppn switches.
+tasks used per node. E.g. the mpirun command in MPICH does this via
+its -np and -ppn switches. Ditto OpenMPI via -np and -npernode.
 
 If LAMMPS was also built with the USER-OMP package, you need to choose
 how many OpenMP threads per MPI task will be used by the USER-OMP
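One common way to pick the OpenMP thread count per MPI task for USER-OMP is the environment-variable route; a sketch assuming 8-core nodes (binary and input names are illustrative, not from this commit):

```shell
# 2 MPI tasks per node, 4 OpenMP threads each = 8 cores per node
export OMP_NUM_THREADS=4
mpirun -np 2 -ppn 2 lmp -sf omp -in in.lj
```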
@@ -177,8 +177,8 @@ double precision.
 </P>
 <P>The mpirun or mpiexec command sets the total number of MPI tasks used
 by LAMMPS (one or multiple per compute node) and the number of MPI
-tasks used per node. E.g. the mpirun command does this via its -np
-and -ppn switches.
+tasks used per node. E.g. the mpirun command in MPICH does this via
+its -np and -ppn switches. Ditto OpenMPI via -np and -npernode.
 </P>
 <P>When using KOKKOS built with host=OMP, you need to choose how many
 OpenMP threads per MPI task will be used (via the "-k" command-line
@@ -174,8 +174,8 @@ double precision.
 
 The mpirun or mpiexec command sets the total number of MPI tasks used
 by LAMMPS (one or multiple per compute node) and the number of MPI
-tasks used per node. E.g. the mpirun command does this via its -np
-and -ppn switches.
+tasks used per node. E.g. the mpirun command in MPICH does this via
+its -np and -ppn switches. Ditto OpenMPI via -np and -npernode.
 
 When using KOKKOS built with host=OMP, you need to choose how many
 OpenMP threads per MPI task will be used (via the "-k" command-line
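For KOKKOS with host=OMP the thread count is given on the LAMMPS command line via the "-k" switch rather than an environment variable; a sketch (task/thread counts and file names are illustrative):

```shell
# 2 MPI tasks per node; "-k on t 4" requests 4 OpenMP threads per task,
# "-sf kk" applies the KOKKOS suffix to supported styles
mpirun -np 2 -ppn 2 lmp -k on t 4 -sf kk -in in.lj
```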
@@ -56,8 +56,8 @@ Intel compilers the CCFLAGS setting also needs to include "-restrict".
 </P>
 <P>The mpirun or mpiexec command sets the total number of MPI tasks used
 by LAMMPS (one or multiple per compute node) and the number of MPI
-tasks used per node. E.g. the mpirun command does this via its -np
-and -ppn switches.
+tasks used per node. E.g. the mpirun command in MPICH does this via
+its -np and -ppn switches. Ditto OpenMPI via -np and -npernode.
 </P>
 <P>You need to choose how many threads per MPI task will be used by the
 USER-OMP package. Note that the product of MPI tasks * threads/task
@@ -53,8 +53,8 @@ Intel compilers the CCFLAGS setting also needs to include "-restrict".
 
 The mpirun or mpiexec command sets the total number of MPI tasks used
 by LAMMPS (one or multiple per compute node) and the number of MPI
-tasks used per node. E.g. the mpirun command does this via its -np
-and -ppn switches.
+tasks used per node. E.g. the mpirun command in MPICH does this via
+its -np and -ppn switches. Ditto OpenMPI via -np and -npernode.
 
 You need to choose how many threads per MPI task will be used by the
 USER-OMP package. Note that the product of MPI tasks * threads/task
@@ -483,10 +483,10 @@ USER-OMP package.
 each MPI task. For example, if your system has nodes with dual
 quad-core processors, it has a total of 8 cores per node. You could
 use two MPI tasks per node (e.g. using the -ppn option of the mpirun
-command), and set <I>Nthreads</I> = 4. This would use all 8 cores on each
-node. Note that the product of MPI tasks * threads/task should not
-exceed the physical number of cores (on a node), otherwise performance
-will suffer.
+command in MPICH or -npernode in OpenMPI), and set <I>Nthreads</I> = 4.
+This would use all 8 cores on each node. Note that the product of MPI
+tasks * threads/task should not exceed the physical number of cores
+(on a node), otherwise performance will suffer.
 </P>
 <P>Setting <I>Nthread</I> = 0 instructs LAMMPS to use whatever value is the
 default for the given OpenMP environment. This is usually determined
@@ -482,10 +482,10 @@ The {Nthread} argument sets the number of OpenMP threads allocated for
 each MPI task. For example, if your system has nodes with dual
 quad-core processors, it has a total of 8 cores per node. You could
 use two MPI tasks per node (e.g. using the -ppn option of the mpirun
-command), and set {Nthreads} = 4. This would use all 8 cores on each
-node. Note that the product of MPI tasks * threads/task should not
-exceed the physical number of cores (on a node), otherwise performance
-will suffer.
+command in MPICH or -npernode in OpenMPI), and set {Nthreads} = 4.
+This would use all 8 cores on each node. Note that the product of MPI
+tasks * threads/task should not exceed the physical number of cores
+(on a node), otherwise performance will suffer.
 
 Setting {Nthread} = 0 instructs LAMMPS to use whatever value is the
 default for the given OpenMP environment. This is usually determined
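The 2-tasks-times-4-threads arithmetic in the hunk above generalizes to a simple oversubscription check; a runnable sketch (variable names are illustrative, not LAMMPS settings):

```shell
# Verify that MPI tasks/node * OpenMP threads/task fits the cores/node.
CORES_PER_NODE=8    # dual quad-core node from the example
TASKS_PER_NODE=2    # e.g. mpirun -ppn 2 (MPICH) or -npernode 2 (OpenMPI)
NTHREADS=4          # OpenMP threads per MPI task (Nthreads)
USED=$((TASKS_PER_NODE * NTHREADS))
if [ "$USED" -le "$CORES_PER_NODE" ]; then
    echo "OK: using $USED of $CORES_PER_NODE cores per node"
else
    echo "oversubscribed: $USED > $CORES_PER_NODE cores per node"
fi
```

With the example's numbers this prints the "OK" branch, since 2 * 4 = 8 exactly fills the 8 cores.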