!45400 modify format

Merge pull request !45400 from 俞涵/code_docs_1110
i-robot 2022-11-11 01:49:28 +00:00 committed by Gitee
commit 47e0db0c53
4 changed files with 22 additions and 17 deletions


@@ -2,10 +2,11 @@ mindspore.communication
 ========================
 Collective communication interfaces.
-Note that the collective communication interfaces require communication environment variables to be set in advance. Ascend users need to configure a rank_table and set rank_id and device_id; for the relevant tutorial, see the
-`Ascend tutorial <https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/train_ascend.html>`_ .
-GPU users need to configure a host_file and mpi in advance; for the relevant tutorial, see the
-`GPU tutorial <https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/train_gpu.html>`_ .
+Note that the collective communication interfaces require communication environment variables to be set in advance.
+For Ascend devices, users need to prepare a rank table and set rank_id and device_id; see the `Ascend tutorial <https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/train_ascend.html#准备环节>`_ for details.
+For GPU devices, users need to prepare a host file and mpi; see the `GPU tutorial <https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/train_gpu.html#准备环节>`_ for details.
+CPU is not supported yet.
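To make "preset the environment variables" concrete, the sketch below shows what a single Ascend worker typically has set before initialization. The variable names follow the rank-table convention from the linked tutorial; the values and the file path are placeholders, not a definitive recipe.

    # Sketch only: per-process environment for one Ascend worker, normally
    # exported by a launch script rather than set in Python. Values are
    # placeholders; see the linked Ascend tutorial for the real procedure.
    import os

    os.environ["RANK_TABLE_FILE"] = "/path/to/rank_table.json"  # hypothetical path
    os.environ["RANK_SIZE"] = "8"    # total number of devices in the job
    os.environ["RANK_ID"] = "0"      # this process's global rank
    os.environ["DEVICE_ID"] = "0"    # local Ascend card index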


@@ -13,12 +13,17 @@
 # limitations under the License.
 # ============================================================================
 """
-Collective communication interface. Note the API in the file needs to preset communication environment variables. For
-the Ascend cards, users need to prepare the rank table, set rank_id and device_id. Please see the `Ascend tutorial \
-<https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/train_ascend.html>`_ for more details.
-For the GPU device, users need to prepare the host file and mpi, please see the `GPU tutorial \
-<https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/train_gpu.html>`_
+Collective communication interface.
+Note the API in the file needs to preset communication environment variables.
+For the Ascend devices, users need to prepare the rank table, set rank_id and device_id.
+Please see the `Ascend tutorial
+<https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html#preparations>`_
+for more details.
+For the GPU devices, users need to prepare the host file and mpi, please see the `GPU tutorial
+<https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_gpu.html#preparation>`_ .
 """
 from mindspore.communication.management import GlobalComm, init, release, get_rank, \
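As a usage sketch of the interface this module documents (hedged: it assumes the process was launched as the tutorials above describe, e.g. via mpirun on GPU or with the rank-table variables set on Ascend):

    # Minimal sketch: initialize the collective backend, then query this
    # process's rank and the world size. init() with no argument selects the
    # backend for the current device target (HCCL on Ascend, NCCL on GPU).
    import mindspore as ms
    from mindspore.communication import init, get_rank, get_group_size

    ms.set_context(mode=ms.GRAPH_MODE)
    init()
    print("rank:", get_rank(), "of", get_group_size())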


@@ -1881,8 +1881,8 @@ def bitwise_and(x, y):
     Args:
         x (Tensor): The first input tensor with shape :math:`(N,*)` where :math:`*` means
-            any number of additional dimensions. The supported data types are:
-            int8, uint8, int16, uint16, int32, uint32, int64 and uint64.
+            any number of additional dimensions. The supported data types are:
+            int8, uint8, int16, uint16, int32, uint32, int64 and uint64.
         y (Tensor): The second input tensor with the same dtype as `x`.
     Returns:
@@ -1919,8 +1919,8 @@ def bitwise_or(x, y):
     Args:
         x (Tensor): The first input tensor with shape :math:`(N,*)` where :math:`*` means
-            any number of additional dimensions. The supported data types are:
-            int8, uint8, int16, uint16, int32, uint32, int64 and uint64.
+            any number of additional dimensions. The supported data types are:
+            int8, uint8, int16, uint16, int32, uint32, int64 and uint64.
         y (Tensor): The second input tensor with the same dtype as `x`.
     Returns:
@@ -1957,8 +1957,8 @@ def bitwise_xor(x, y):
     Args:
         x (Tensor): The first input tensor with shape :math:`(N,*)` where :math:`*` means
-            any number of additional dimensions. The supported data types are:
-            int8, uint8, int16, uint16, int32, uint32, int64 and uint64.
+            any number of additional dimensions. The supported data types are:
+            int8, uint8, int16, uint16, int32, uint32, int64 and uint64.
         y (Tensor): The second input tensor with the same dtype as `x`.
     Returns:
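All three hunks above adjust the same Args block, so one usage sketch covers them. A minimal example of the documented functional forms; the expected outputs in the comments follow from elementwise bit arithmetic.

    # Elementwise bitwise ops on int32 tensors, as documented above.
    import numpy as np
    from mindspore import Tensor, ops

    x = Tensor(np.array([0, 1, 1], np.int32))
    y = Tensor(np.array([1, 1, 0], np.int32))
    print(ops.bitwise_and(x, y))  # [0 1 0]
    print(ops.bitwise_or(x, y))   # [1 1 1]
    print(ops.bitwise_xor(x, y))  # [1 0 1]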


@@ -7053,9 +7053,8 @@ class Dropout3D(PrimitiveWithInfer):
     Dropout3D can improve the independence between channel feature maps.
     Note:
-        The keep probability :math:`keep\_prob` is equal to :math:`1 - p` in :func:`mindspore.ops.dropout2d`.
+        The keep probability :math:`keep\_prob` is equal to :math:`1 - p` in :func:`mindspore.ops.dropout3d`.
     Refer to :func:`mindspore.ops.dropout3d` for more detail.
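To illustrate the relationship the corrected note states, a short sketch of keep_prob versus p (shapes are illustrative, and the exact return details of the functional form can vary between versions, so treat this as a sketch rather than a definitive example):

    # With p = 0.2, each channel of the 5-D input is zeroed out with
    # probability 0.2, i.e. kept with probability keep_prob = 1 - p = 0.8.
    import numpy as np
    import mindspore as ms
    from mindspore import Tensor, ops

    x = Tensor(np.ones((1, 4, 2, 3, 3)), ms.float32)  # (N, C, D, H, W)
    out = ops.dropout3d(x, p=0.2)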