forked from mindspore-Ecosystem/mindspore
commit d5aa0b0588
@@ -2,7 +2,7 @@ mindspore.communication
========================

Collective communication interfaces.

Note that the collective communication interfaces need the environment variables to be preset.
Note that the collective communication interfaces need the communication environment variables to be configured in advance.

For Ascend devices, users need to prepare the rank table and set rank_id and device_id. For details, see the `Ascend tutorial <https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/train_ascend.html#准备环节>`_ .

@@ -25,7 +25,6 @@ mindspore.communication
        - HCCL stands for Huawei Collective Communication Library.
        - NCCL stands for NVIDIA Collective Communication Library.
        - MCCL stands for MindSpore Collective Communication Library.
        - Before running the following examples, users need to preset the communication environment variables; see the docstring of mindspore.communication.

    Parameters:
        - **backend_name** (str) - Name of the distributed backend, HCCL or NCCL. HCCL should be used on the Ascend hardware platform and NCCL on the GPU hardware platform. If not set, the backend is inferred from the hardware platform type (device_target). Default: None.

@@ -34,24 +33,32 @@ mindspore.communication
        - **TypeError** - The parameter `backend_name` is not a string.
        - **RuntimeError** - 1) The hardware device type is invalid; 2) the backend service is invalid; 3) distributed computation initialization fails; 4) with the HCCL backend, the HCCL service is initialized without the environment variable `RANK_ID` or `MINDSPORE_HCCL_CONFIG_PATH` being set.

    Examples:

    .. note::
        .. include:: ops/mindspore.ops.comm_note.rst
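The Examples block added above boils down to the following minimal sketch (assuming the communication environment variables described earlier are already configured); `init` infers the backend from `device_target`:

>>> import mindspore as ms
>>> from mindspore.communication import init
>>> ms.set_context(device_target="Ascend")  # the backend is inferred from device_target
>>> init()                                  # equivalent to init("hccl") here, or init("nccl") on GPU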
.. py:function:: mindspore.communication.release()

    Release distributed resources, e.g. the `HCCL` or `NCCL` service.

    .. note::
        - The `release` method should be used after the `init` method.
        - Before running the following examples, users need to preset the communication environment variables; see the docstring of mindspore.communication.

    Raises:
        - **RuntimeError** - If releasing distributed resources fails.

    Examples:

    .. note::
        .. include:: ops/mindspore.ops.comm_note.rst

.. py:function:: mindspore.communication.get_rank(group=GlobalComm.WORLD_COMM_GROUP)

    Get the rank ID of the current device in the specified communication group.

    .. note::
        - The `get_rank` method should be used after the `init` method.
        - Before running the following examples, users need to preset the communication environment variables; see the docstring of mindspore.communication.

    Parameters:
        - **group** (str) - The communication group to work on, normally created by the `create_group` method; otherwise the default group is used. Default: `GlobalComm.WORLD_COMM_GROUP` .

@@ -64,13 +71,17 @@ mindspore.communication
        - **ValueError** - If the backend is unavailable.
        - **RuntimeError** - If the `HCCL` or `NCCL` service is unavailable.

    Examples:

    .. note::
        .. include:: ops/mindspore.ops.comm_note.rst

.. py:function:: mindspore.communication.get_group_size(group=GlobalComm.WORLD_COMM_GROUP)

    Get the rank_size of the specified communication group instance.

    .. note::
        - The `get_group_size` method should be used after the `init` method.
        - Before running the following examples, users need to preset the communication environment variables; see the docstring of mindspore.communication.

    Parameters:
        - **group** (str) - Name of the specified work group instance (created by the create_group method); the supported data type is str. Default: `WORLD_COMM_GROUP` .

@@ -83,6 +94,11 @@ mindspore.communication
        - **ValueError** - If the backend is unavailable.
        - **RuntimeError** - If the `HCCL` or `NCCL` service is unavailable.

    Examples:

    .. note::
        .. include:: ops/mindspore.ops.comm_note.rst
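A short usage sketch for `get_rank` and `get_group_size` as documented above, assuming `init()` has been called inside a properly configured distributed job:

>>> from mindspore.communication import init, get_rank, get_group_size
>>> init()
>>> rank_id = get_rank()          # rank of this device in the world group
>>> rank_size = get_group_size()  # total number of devices in the world group
>>> print(rank_id, rank_size)     # for example "0 8" on the first of eight devices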
.. py:function:: mindspore.communication.get_world_rank_from_group_rank(group, group_rank_id)

    Get the global rank ID in the world communication group from a rank ID in the specified communication group.

@@ -91,7 +107,6 @@ mindspore.communication
        - The GPU version of MindSpore does not support this method.
        - The parameter `group` cannot be `hccl_world_group`.
        - The `get_world_rank_from_group_rank` method should be used after the `init` method.
        - Before running the following examples, users need to preset the communication environment variables; see the docstring of mindspore.communication.

    Parameters:
        - **group** (str) - Name of the communication group to work on, normally created by the `create_group` method.

@@ -105,6 +120,11 @@ mindspore.communication
        - **ValueError** - If the parameter `group` is `hccl_world_group` or the backend is unavailable.
        - **RuntimeError** - If the `HCCL` service is unavailable, or the GPU version of MindSpore is used.

    Examples:

    .. note::
        .. include:: ops/mindspore.ops.comm_note.rst

.. py:function:: mindspore.communication.get_group_rank_from_world_rank(world_rank_id, group)

    Get the rank ID in the specified user communication group from the global rank ID in the world communication group.

@@ -113,7 +133,6 @@ mindspore.communication
        - The GPU version of MindSpore does not support this method.
        - The parameter `group` cannot be `hccl_world_group`.
        - The `get_group_rank_from_world_rank` method should be used after the `init` method.
        - Before running the following examples, users need to preset the communication environment variables; see the docstring of mindspore.communication.

    Parameters:
        - **world_rank_id** (`int`) - The global rank ID in the world communication group.

@@ -127,6 +146,11 @@ mindspore.communication
        - **ValueError** - If the parameter `group` is `hccl_world_group` or the backend is unavailable.
        - **RuntimeError** - If the `HCCL` service is unavailable, or the GPU version of MindSpore is used.

    Examples:

    .. note::
        .. include:: ops/mindspore.ops.comm_note.rst

.. py:function:: mindspore.communication.create_group(group, rank_ids)

    Create a user-defined communication group instance.

@@ -137,7 +161,6 @@ mindspore.communication
        - The list rank_ids must not contain duplicate values.
        - The `create_group` method should be used after the `init` method.
        - If the job is not started with mpirun, only the global single communication group is supported in PyNative mode.
        - Before running the following examples, users need to preset the communication environment variables; see the docstring of mindspore.communication.

    Parameters:
        - **group** (str) - Name of the user-defined communication group instance to be created; the supported data type is str.

@@ -148,6 +171,11 @@ mindspore.communication
        - **ValueError** - If the length of the list rank_ids is less than 1, the list rank_ids contains duplicate values, or the backend is invalid.
        - **RuntimeError** - If the `HCCL` service is unavailable, or the GPU version of MindSpore is used.

    Examples:

    .. note::
        .. include:: ops/mindspore.ops.comm_note.rst
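To make the group handling above concrete, here is a hedged sketch (the group name "0-3" and the rank list are purely illustrative) that builds a sub-group on Ascend and maps a global rank ID into it:

>>> from mindspore import set_context
>>> from mindspore.communication.management import init, create_group, get_group_rank_from_world_rank
>>> set_context(device_target="Ascend")
>>> init()
>>> group = "0-3"                 # illustrative group name
>>> rank_ids = [0, 1, 2, 3]       # global ranks that form the sub-group
>>> create_group(group, rank_ids)
>>> get_group_rank_from_world_rank(2, group)   # world rank 2 sits at index 2 of rank_ids
2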
.. py:function:: mindspore.communication.get_local_rank(group=GlobalComm.WORLD_COMM_GROUP)

    Get the local rank ID of the current device in the specified communication group.

@@ -155,7 +183,6 @@ mindspore.communication
    .. note::
        - The GPU version of MindSpore does not support this method.
        - The `get_local_rank` method should be used after the `init` method.
        - Before running the following examples, users need to preset the communication environment variables; see the docstring of mindspore.communication.

    Parameters:
        - **group** (`str`) - The communication group to work on, normally created by the `create_group` method; otherwise the default group name is used. Default: `WORLD_COMM_GROUP` .

@@ -168,6 +195,11 @@ mindspore.communication
        - **ValueError** - If the backend is unavailable.
        - **RuntimeError** - If the `HCCL` service is unavailable, or the GPU version of MindSpore is used.

    Examples:

    .. note::
        .. include:: ops/mindspore.ops.comm_note.rst

.. py:function:: mindspore.communication.get_local_rank_size(group=GlobalComm.WORLD_COMM_GROUP)

    Get the local rank size of the specified communication group.

@@ -175,7 +207,6 @@ mindspore.communication
    .. note::
        - The GPU version of MindSpore does not support this method.
        - The `get_local_rank_size` method should be used after the `init` method.
        - Before running the following examples, users need to preset the communication environment variables; see the docstring of mindspore.communication.

    Parameters:
        - **group** (str) - Name of the communication group to work on, normally created by the `create_group` method, or `WORLD_COMM_GROUP` by default.

@@ -188,6 +219,11 @@ mindspore.communication
        - **ValueError** - If the backend is unavailable.
        - **RuntimeError** - If the `HCCL` service is unavailable, or the GPU version of MindSpore is used.

    Examples:

    .. note::
        .. include:: ops/mindspore.ops.comm_note.rst

.. py:function:: mindspore.communication.destroy_group(group)

    Destroy the user communication group.

@@ -16,6 +16,7 @@ mindspore.ops
    mindspore.ops.adaptive_avg_pool2d
    mindspore.ops.adaptive_avg_pool3d
    mindspore.ops.adaptive_max_pool1d
    mindspore.ops.adaptive_max_pool2d
    mindspore.ops.adaptive_max_pool3d
    mindspore.ops.avg_pool1d
    mindspore.ops.avg_pool2d

@@ -359,6 +360,7 @@ Reduction functions
    mindspore.ops.matrix_solve
    mindspore.ops.mm
    mindspore.ops.ger
    mindspore.ops.orgqr
    mindspore.ops.renorm
    mindspore.ops.tensor_dot

@@ -413,7 +415,6 @@ Array operations
    :nosignatures:
    :template: classtemplate.rst

    mindspore.ops.adaptive_max_pool2d
    mindspore.ops.affine_grid
    mindspore.ops.arange
    mindspore.ops.argsort

@@ -10,7 +10,7 @@ mindspore.nn.MaxUnpool1d

    .. math::
        \begin{array}{ll} \\
        H_{out} = (H{in} - 1) \times stride[0] - 2 \times padding[0] + kernel_size[0] \\
        H_{out} = (H{in} - 1) \times stride[0] - 2 \times padding[0] + kernel\_size[0] \\
        \end{array}

    Parameters:

@@ -10,8 +10,8 @@ mindspore.nn.MaxUnpool2d

    .. math::
        \begin{array}{ll} \\
        H_{out} = (H{in} - 1) \times stride[0] - 2 \times padding[0] + kernel_size[0] \\
        W_{out} = (W{in} - 1) \times stride[1] - 2 \times padding[1] + kernel_size[1] \\
        H_{out} = (H{in} - 1) \times stride[0] - 2 \times padding[0] + kernel\_size[0] \\
        W_{out} = (W{in} - 1) \times stride[1] - 2 \times padding[1] + kernel\_size[1] \\
        \end{array}

    Parameters:

@@ -10,9 +10,9 @@ mindspore.nn.MaxUnpool3d

    .. math::
        \begin{array}{ll} \\
        D_{out} = (D{in} - 1) \times stride[0] - 2 \times padding[0] + kernel_size[0] \\
        H_{out} = (H{in} - 1) \times stride[1] - 2 \times padding[1] + kernel_size[1] \\
        W_{out} = (W{in} - 1) \times stride[2] - 2 \times padding[2] + kernel_size[2] \\
        D_{out} = (D{in} - 1) \times stride[0] - 2 \times padding[0] + kernel\_size[0] \\
        H_{out} = (H{in} - 1) \times stride[1] - 2 \times padding[1] + kernel\_size[1] \\
        W_{out} = (W{in} - 1) \times stride[2] - 2 \times padding[2] + kernel\_size[2] \\
        \end{array}

    Parameters:
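The three diffs above make the same escaping fix to the output-size formula; the corrected expression itself can be checked with a tiny sketch (illustrative numbers only):

>>> def unpool_out_size(size_in, stride, padding, kernel_size):
...     # H_out = (H_in - 1) * stride - 2 * padding + kernel_size
...     return (size_in - 1) * stride - 2 * padding + kernel_size
>>> unpool_out_size(4, 2, 0, 2)   # a length-4 feature map unpooled with kernel 2, stride 2
8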
@@ -5,7 +5,7 @@ mindspore.ops.Custom

    `Custom` is the unified interface for user-defined operators in MindSpore. Users can use it to define operators that are not yet covered by MindSpore's built-in operator library.
    Depending on the input function, you can create multiple custom operators and use them in a neural network.
    For a detailed description and introduction of custom operators, including how to write the parameters correctly, see the `tutorial <https://www.mindspore.cn/tutorials/experts/zh-CN/master/operation/op_custom.html>`_ .
    For a detailed description and introduction of custom operators, including how to write the parameters correctly, see the `Custom Operators tutorial <https://www.mindspore.cn/tutorials/experts/zh-CN/master/operation/op_custom.html>`_ .

    .. warning::
        This is an experimental API that may be deleted or changed in the future.

@@ -6,9 +6,9 @@ mindspore.ops.ExtractVolumePatches
    Extract patches from the input and place them in the "depth" output dimension, where "depth" is the second dimension of the output.

    Parameters:
        - **kernel_size** (Union[int, tuple[int], list[int]]) - A list of ints of length 3 or 5. The size of the sliding window for each dimension of the input. Must be: [1, 1, k_d, k_h, k_w] or [k_d, k_h, k_w]. If k_d = k_h = k_w, an integer can be passed.
        - **kernel_size** (Union[int, tuple[int], list[int]]) - A list of ints of length 3 or 5. The size of the sliding window for each dimension of the input. Must be: :math:`[1, 1, k_d, k_h, k_w]` or :math:`[k_d, k_h, k_w]` . If :math:`k_d = k_h = k_w` , an integer can be passed.
        - **strides** (Union[int, tuple[int], list[int]]) - A list of ints of length 3 or 5.
          How far apart the centers of two consecutive patches are in the input. Must be: [1, 1, s_d, s_h, s_w] or [s_d, s_h, s_w]. If s_d = s_h = s_w, an integer can be passed.
          How far apart the centers of two consecutive patches are in the input. Must be: :math:`[1, 1, s_d, s_h, s_w]` or :math:`[s_d, s_h, s_w]` . If :math:`s_d = s_h = s_w` , an integer can be passed.
        - **padding** (str) - The type of padding algorithm to use. Possible values are "SAME" and "VALID".

    Inputs:

@@ -3,6 +3,6 @@ mindspore.ops.InplaceSub

.. py:class:: mindspore.ops.InplaceSub(indices)

    Subtract `input_v` from specific rows of `x`. That is, compute `y` = `x`; y[i,] -= `input_v` .
    Subtract `input_v` from specific rows of `x`. That is, compute :math:`y = x`; :math:`y[i,] -= input\_v` .

    For more details, see :func:`mindspore.ops.inplace_sub`.
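A minimal NumPy sketch of the semantics stated above (NumPy is used here only to illustrate the row-wise update; it is not the operator's implementation):

>>> import numpy as np
>>> x = np.array([[1., 2.], [3., 4.], [5., 6.]])
>>> input_v = np.array([[0.5, 1.0], [1.0, 1.5]])
>>> indices = (0, 2)                   # rows of x to update
>>> y = x.copy()                       # y = x
>>> y[list(indices), :] -= input_v     # y[i,] -= input_v
>>> # y is now [[0.5, 1.0], [3.0, 4.0], [4.0, 4.5]] and x is unchanged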
@@ -7,7 +7,7 @@ mindspore.ops.ListDiff

    Given a list `x` and a list `y`, this operation returns a list `out` with all values that are in `x` but not in `y`.
    The returned list `out` is sorted in the same order in which the values appear in `x` (duplicates are preserved). This operation also returns a list `idx`
    that gives the position in `x` of each element of `out`. That is: :math:`out[i] = x[idx[i]] for i in [0, 1, ..., len(out) - 1]` .
    that gives the position in `x` of each element of `out`. That is: :code:`out[i] = x[idx[i]] for i in [0, 1, ..., len(out) - 1]` .

    Parameters:
        - **out_idx** (:class:`mindspore.dtype`, optional) - The data type of `idx`. Possible values: `mindspore.dtype.int32` and `mindspore.dtype.int64` . Default: `mindspore.dtype.int32` .
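The relation between `out` and `idx` can be reproduced with plain Python (purely an illustration, not the operator itself):

>>> x = [1, 2, 3, 4, 5, 6]
>>> y = [1, 3, 5]
>>> idx = [i for i, v in enumerate(x) if v not in y]
>>> out = [x[i] for i in idx]      # so out[i] == x[idx[i]]
>>> out, idx
([2, 4, 6], [1, 3, 5])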
@@ -5,7 +5,7 @@ mindspore.ops.Lstsq

    Computes the solutions of the least squares problem or the minimum norm problem of the full-rank matrix `x` :math:`(m \times n)` and the full-rank matrix `a` :math:`(m \times k)` .

    If :math:`m \geq n` , `lstsq` solves the least-squares problem:
    If :math:`m \geq n` , `Lstsq` solves the least-squares problem:

    .. math::

@@ -13,7 +13,7 @@ mindspore.ops.Lstsq
        \min_y & \|xy-a\|_2.
        \end{array}

    If :math:`m < n` , `lstsq` solves the least-norm problem:
    If :math:`m < n` , `Lstsq` solves the least-norm problem:

    .. math::
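As a point of reference for the overdetermined case above, the same least-squares problem can be checked with NumPy (illustration only, not MindSpore's implementation):

>>> import numpy as np
>>> x = np.array([[1., 1.], [1., 2.], [1., 3.]])   # m = 3 >= n = 2, full rank
>>> a = np.array([[1.], [2.], [3.]])               # m x k with k = 1
>>> y, *_ = np.linalg.lstsq(x, a, rcond=None)      # minimizes ||x y - a||_2
>>> bool(np.allclose(x @ y, a))                    # this particular system is consistent
True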
@@ -26,6 +26,6 @@ mindspore.ops.LuUnpack
        - **ValueError** - If the dimension of `LU_data` is less than 2.
        - **ValueError** - If the dimension of `LU_pivots` is less than 1.
        - **ValueError** - If the size of the last dimension of `LU_pivots` is not equal to the smaller of the last two dimensions of `LU_data`.
        - **ValueError** - If the batch dimensions of `lu_data` and `LU_pivots` do not match.
        - **ValueError** - If the batch dimensions of `LU_data` and `LU_pivots` do not match.
        - **ValueError** - On the CPU platform, if the values of `LU_pivots` are not in the range :math:`[1, LU\_data.shape[-2]]` .
        - **RuntimeError** - On the Ascend platform, if the values of `LU_pivots` are not in the range :math:`[1, LU\_data.shape[-2]]` .

@@ -14,7 +14,7 @@
        - The input data does not support 0.
        - When the output contains more than 2048 elements, this operator cannot guarantee the accuracy requirement of double thousandths.
        - Due to architecture differences, the results generated on NPU and CPU may be inconsistent.
        - If shape is expressed as :math:`(D1、D2...、Dn)` , then D1\*D2... \*DN<=1000000,n<=8.
        - If shape is expressed as :math:`(D1、D2...、Dn)` , then :math:`D1*D2... *DN<=1000000,n<=8` .

    Inputs:
        - **x** (Union[Tensor, numbers.Number, bool]) - The first input is a number, a bool, or a Tensor of numeric data type.
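The size constraint in the last bullet can be checked up front with a one-liner (illustrative shape):

>>> import math
>>> shape = (8, 50, 100)                                # illustrative input shape
>>> math.prod(shape) <= 1000000 and len(shape) <= 8     # D1*D2*...*Dn <= 1000000, n <= 8
True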
@@ -5,7 +5,7 @@ mindspore.ops.NeighborExchangeV2

    NeighborExchangeV2 is a collective communication operation.

    It sends data from the local rank to the ranks specified in `send_rank_ids`, while receiving data from `recv_rank_ids`. See `MindSpore <https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/communicate_ops.html#neighborexchangev2>`_ to learn how data is exchanged between neighboring devices.
    It sends data from the local rank to the ranks specified in `send_rank_ids`, while receiving data from `recv_rank_ids`. See `Distributed Collective Communication Primitives - NeighborExchangeV2 <https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/communicate_ops.html#neighborexchangev2>`_ to learn how data is exchanged between neighboring devices.

    .. note::
        Before running the following examples, users need to preset the environment variables; see the details in the `NeighborExchangeV2 data exchange <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.ops.html#通信算子>`_ section of the official website.

@@ -8,7 +8,7 @@ mindspore.ops.NextAfter
    For example, take two numbers :math:`a` and :math:`b` of data type float32, and let :math:`eps` be the representable value increment of the float32 data type. If :math:`a < b` , the next representable value of :math:`a` towards :math:`b` is :math:`a+eps` , and the next representable value of :math:`b` towards :math:`a` is :math:`b-eps` .

    .. math::
        out_{i} = nextafter{x1_{i}, x2_{i}}
        out_{i} = nextafter({x1_{i}, x2_{i}})

    For more details, see `A Self Regularized Non-Monotonic Neural Activation Function <https://arxiv.org/abs/1908.08681>`_ .
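The definition can be sanity-checked with the Python standard library (`math.nextafter` requires Python 3.9+; this only illustrates the definition, it is not the operator):

>>> import math
>>> a, b = 1.0, 2.0
>>> math.nextafter(a, b) > a      # next representable value of a towards b
True
>>> math.nextafter(b, a) < b      # next representable value of b towards a
True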
@@ -3,7 +3,7 @@ mindspore.ops.ReduceScatter

.. py:class:: mindspore.ops.ReduceScatter(op=ReduceOp.SUM, group=GlobalComm.WORLD_COMM_GROUP)

    Reduces and scatters tensors from the specified communication group. For more details, see `ReduceScatter <https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/communicate_ops.html#reducescatter>`_ .
    Reduces and scatters tensors from the specified communication group. For more details, see `Distributed Collective Communication Primitives - ReduceScatter <https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/communicate_ops.html#reducescatter>`_ .

    .. note::
        The tensors must have the same shape and format in all processes of the collective.
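Conceptually, reduce-scatter sums the tensors held by all ranks and then leaves each rank with one equal slice of the result; a NumPy sketch of that semantics for two ranks (illustration only, no real communication happens):

>>> import numpy as np
>>> rank_inputs = [np.array([1., 2., 3., 4.]),      # tensor held by rank 0
...                np.array([10., 20., 30., 40.])]  # tensor held by rank 1
>>> reduced = np.sum(rank_inputs, axis=0)           # element-wise SUM across ranks
>>> chunks = np.split(reduced, len(rank_inputs))    # rank i keeps chunks[i]
>>> # rank 0 ends up with [11., 22.], rank 1 with [33., 44.]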
@@ -5,7 +5,7 @@ mindspore.ops.SegmentMean

    Computes the mean along segments of a tensor.

    Computes a tensor such that :math:`output_i=\mean_j(input\_x_j)` , where the mean is taken over :math:`j` such that :math:`segment\_ids[j] == i` . If the segment for a given ID :math:`i` is empty, then :math:`output[i] = 0` .
    Computes a tensor such that :math:`output_i=mean_j(input\_x_j)` , where the mean is taken over :math:`j` such that :math:`segment\_ids[j] == i` . If the segment for a given ID :math:`i` is empty, then :math:`output[i] = 0` .

    .. warning::
        If the data type of `input_x` is complex, its gradient cannot be computed.
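A NumPy sketch of the definition above (illustration only; as in the formula, an empty segment yields 0):

>>> import numpy as np
>>> input_x = np.array([1., 2., 3., 4., 10.])
>>> segment_ids = np.array([0, 0, 0, 1, 3])     # segment 2 is empty
>>> num_segments = segment_ids.max() + 1
>>> output = [input_x[segment_ids == i].mean() if np.any(segment_ids == i) else 0.
...           for i in range(num_segments)]
>>> # output == [2.0, 4.0, 0.0, 10.0]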
@@ -3,7 +3,7 @@ mindspore.ops.inplace_sub

.. py:function:: mindspore.ops.inplace_sub(x, v, indices)

    Subtract `v` from `x` according to the indices `indices`. Compute `y` = `x`; y[i,] -= `v`.
    Subtract `v` from `x` according to the indices `indices`. Compute :math:`y = x`; :math:`y[i,] -= input\_v` .

    .. note::
        `indices` can only index along the left-most axis.

@@ -10,7 +10,7 @@ mindspore.ops.max_unpool1d

    .. math::
        \begin{array}{ll} \\
        H_{out} = (H{in} - 1) \times stride[0] - 2 \times padding[0] + kernel_size[0] \\
        H_{out} = (H{in} - 1) \times stride[0] - 2 \times padding[0] + kernel\_size[0] \\
        \end{array}

    Parameters:

@@ -10,8 +10,8 @@ mindspore.ops.max_unpool2d

    .. math::
        \begin{array}{ll} \\
        H_{out} = (H{in} - 1) \times stride[0] - 2 \times padding[0] + kernel_size[0] \\
        W_{out} = (W{in} - 1) \times stride[1] - 2 \times padding[1] + kernel_size[1] \\
        H_{out} = (H{in} - 1) \times stride[0] - 2 \times padding[0] + kernel\_size[0] \\
        W_{out} = (W{in} - 1) \times stride[1] - 2 \times padding[1] + kernel\_size[1] \\
        \end{array}

    Parameters:

@@ -10,9 +10,9 @@ mindspore.ops.max_unpool3d

    .. math::
        \begin{array}{ll} \\
        D_{out} = (D{in} - 1) \times stride[0] - 2 \times padding[0] + kernel_size[0] \\
        H_{out} = (H{in} - 1) \times stride[1] - 2 \times padding[1] + kernel_size[1] \\
        W_{out} = (W{in} - 1) \times stride[2] - 2 \times padding[2] + kernel_size[2] \\
        D_{out} = (D{in} - 1) \times stride[0] - 2 \times padding[0] + kernel\_size[0] \\
        H_{out} = (H{in} - 1) \times stride[1] - 2 \times padding[1] + kernel\_size[1] \\
        W_{out} = (W{in} - 1) \times stride[2] - 2 \times padding[2] + kernel\_size[2] \\
        \end{array}

    Parameters:

@@ -16,6 +16,7 @@ Neural Network
    mindspore.ops.adaptive_avg_pool2d
    mindspore.ops.adaptive_avg_pool3d
    mindspore.ops.adaptive_max_pool1d
    mindspore.ops.adaptive_max_pool2d
    mindspore.ops.adaptive_max_pool3d
    mindspore.ops.avg_pool1d
    mindspore.ops.avg_pool2d

@@ -359,6 +360,7 @@ Linear Algebraic Functions
    mindspore.ops.matrix_solve
    mindspore.ops.mm
    mindspore.ops.ger
    mindspore.ops.orgqr
    mindspore.ops.renorm
    mindspore.ops.tensor_dot

@@ -413,7 +415,6 @@ Array Operation
    :nosignatures:
    :template: classtemplate.rst

    mindspore.ops.adaptive_max_pool2d
    mindspore.ops.affine_grid
    mindspore.ops.arange
    mindspore.ops.argsort

@@ -15,7 +15,7 @@
"""
Collective communication interface.

Note the API in the file needs to preset communication environment variables.
Note that the APIs in the following list need to preset communication environment variables.

For the Ascend devices, users need to prepare the rank table, set rank_id and device_id.
Please see the `Ascend tutorial

@@ -96,8 +96,6 @@ def init(backend_name=None):
        - The full name of HCCL is Huawei Collective Communication Library.
        - The full name of NCCL is NVIDIA Collective Communication Library.
        - The full name of MCCL is MindSpore Collective Communication Library.
        - The user needs to preset communication environment variables
          before running the following example, please see the docstring of the mindspore.communication.

    Args:
        backend_name (str): Backend, using HCCL/NCCL/MCCL. HCCL should be used for Ascend hardware platforms and

@@ -114,6 +112,17 @@ def init(backend_name=None):
        ``Ascend`` ``GPU``

    Examples:
        .. note::
            Before running the following examples, you need to configure the communication environment variables.

            For the Ascend devices, users need to prepare the rank table, set rank_id and device_id.
            Please see the `Ascend tutorial
            <https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html#preparations>`_
            for more details.

            For the GPU devices, users need to prepare the host file and mpi, please see the `GPU tutorial
            <https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_gpu.html#preparation>`_ .

        >>> from mindspore.communication import init
        >>> init()
    """
@@ -169,8 +178,7 @@ def release():
    Release distributed resource. e.g. HCCL/NCCL.

    Note:
        This method should be used after init(). The user needs to preset communication environment variables
        before running the following example, please see the docstring of the mindspore.communication.
        This method should be used after init().

    Raises:
        RuntimeError: If failed to release distributed resource.

@@ -179,6 +187,17 @@ def release():
        ``Ascend`` ``GPU``

    Examples:
        .. note::
            Before running the following examples, you need to configure the communication environment variables.

            For the Ascend devices, users need to prepare the rank table, set rank_id and device_id.
            Please see the `Ascend tutorial
            <https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html#preparations>`_
            for more details.

            For the GPU devices, users need to prepare the host file and mpi, please see the `GPU tutorial
            <https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_gpu.html#preparation>`_ .

        >>> from mindspore.communication import init, release
        >>> init()
        >>> release()
@@ -191,8 +210,7 @@ def get_rank(group=GlobalComm.WORLD_COMM_GROUP):
    Get the rank ID for the current device in the specified collective communication group.

    Note:
        This method should be used after init(). The user needs to preset communication environment variables
        before running the following example, please see the docstring of the mindspore.communication.
        This method should be used after init().

    Args:
        group (str): The communication group to work on. Normally, the group should be created by create_group,

@@ -210,6 +228,17 @@ def get_rank(group=GlobalComm.WORLD_COMM_GROUP):
        ``Ascend`` ``GPU``

    Examples:
        .. note::
            Before running the following examples, you need to configure the communication environment variables.

            For the Ascend devices, users need to prepare the rank table, set rank_id and device_id.
            Please see the `Ascend tutorial
            <https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html#preparations>`_
            for more details.

            For the GPU devices, users need to prepare the host file and mpi, please see the `GPU tutorial
            <https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_gpu.html#preparation>`_ .

        >>> from mindspore.communication import init, get_rank
        >>> init()
        >>> rank_id = get_rank()
@@ -228,8 +257,7 @@ def get_local_rank(group=GlobalComm.WORLD_COMM_GROUP):

    Note:
        GPU version of MindSpore doesn't support this method.
        This method should be used after init(). The user needs to preset communication environment variables
        before running the following example, please see the docstring of the mindspore.communication.
        This method should be used after init().

    Args:
        group (str): The communication group to work on. Normally, the group should be created by create_group,

@@ -247,6 +275,17 @@ def get_local_rank(group=GlobalComm.WORLD_COMM_GROUP):
        ``Ascend``

    Examples:
        .. note::
            Before running the following examples, you need to configure the communication environment variables.

            For the Ascend devices, users need to prepare the rank table, set rank_id and device_id.
            Please see the `Ascend tutorial
            <https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html#preparations>`_
            for more details.

            For the GPU devices, users need to prepare the host file and mpi, please see the `GPU tutorial
            <https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_gpu.html#preparation>`_ .

        >>> import mindspore as ms
        >>> from mindspore.communication.management import init, get_rank, get_local_rank
        >>> ms.set_context(device_target="Ascend")
@@ -268,8 +307,7 @@ def get_group_size(group=GlobalComm.WORLD_COMM_GROUP):
    Get the rank size of the specified collective communication group.

    Note:
        This method should be used after init(). The user needs to preset communication environment variables before
        running the following example, please see the docstring of the mindspore.communication.
        This method should be used after init().

    Args:
        group (str): The communication group to work on. Normally, the group should be created by create_group,

@@ -287,6 +325,17 @@ def get_group_size(group=GlobalComm.WORLD_COMM_GROUP):
        ``Ascend`` ``GPU``

    Examples:
        .. note::
            Before running the following examples, you need to configure the communication environment variables.

            For the Ascend devices, users need to prepare the rank table, set rank_id and device_id.
            Please see the `Ascend tutorial
            <https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html#preparations>`_
            for more details.

            For the GPU devices, users need to prepare the host file and mpi, please see the `GPU tutorial
            <https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_gpu.html#preparation>`_ .

        >>> import mindspore as ms
        >>> from mindspore.communication.management import init, get_group_size
        >>> ms.set_auto_parallel_context(device_num=8)
@@ -307,8 +356,7 @@ def get_local_rank_size(group=GlobalComm.WORLD_COMM_GROUP):

    Note:
        GPU version of MindSpore doesn't support this method.
        This method should be used after init(). The user needs to preset communication environment variables before
        running the following example, please see the docstring of the mindspore.communication.
        This method should be used after init().

    Args:
        group (str): The communication group to work on. The group is created by create_group

@@ -326,6 +374,17 @@ def get_local_rank_size(group=GlobalComm.WORLD_COMM_GROUP):
        ``Ascend``

    Examples:
        .. note::
            Before running the following examples, you need to configure the communication environment variables.

            For the Ascend devices, users need to prepare the rank table, set rank_id and device_id.
            Please see the `Ascend tutorial
            <https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html#preparations>`_
            for more details.

            For the GPU devices, users need to prepare the host file and mpi, please see the `GPU tutorial
            <https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_gpu.html#preparation>`_ .

        >>> import mindspore as ms
        >>> from mindspore.communication.management import init, get_local_rank_size
        >>> ms.set_context(device_target="Ascend")
@@ -349,8 +408,7 @@ def get_world_rank_from_group_rank(group, group_rank_id):
    Note:
        GPU version of MindSpore doesn't support this method.
        The parameter group should not be "hccl_world_group".
        This method should be used after init(). The user needs to preset communication environment variables
        before running the following example, please see the docstring of the mindspore.communication.
        This method should be used after init().

    Args:
        group (str): The communication group to work on. The group is created by create_group.

@@ -368,6 +426,17 @@ def get_world_rank_from_group_rank(group, group_rank_id):
        ``Ascend``

    Examples:
        .. note::
            Before running the following examples, you need to configure the communication environment variables.

            For the Ascend devices, users need to prepare the rank table, set rank_id and device_id.
            Please see the `Ascend tutorial
            <https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html#preparations>`_
            for more details.

            For the GPU devices, users need to prepare the host file and mpi, please see the `GPU tutorial
            <https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_gpu.html#preparation>`_ .

        >>> from mindspore import set_context
        >>> from mindspore.communication.management import init, create_group, get_world_rank_from_group_rank
        >>> set_context(device_target="Ascend")
@@ -394,8 +463,6 @@ def get_group_rank_from_world_rank(world_rank_id, group):
        GPU version of MindSpore doesn't support this method.
        The parameter group should not be "hccl_world_group".
        This method should be used after init().
        The user needs to preset communication environment variables before running the following example, please see
        the docstring of the mindspore.managerment.

    Args:
        world_rank_id (int): A rank ID in the world communication group.

@@ -413,6 +480,17 @@ def get_group_rank_from_world_rank(world_rank_id, group):
        ``Ascend``

    Examples:
        .. note::
            Before running the following examples, you need to configure the communication environment variables.

            For the Ascend devices, users need to prepare the rank table, set rank_id and device_id.
            Please see the `Ascend tutorial
            <https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html#preparations>`_
            for more details.

            For the GPU devices, users need to prepare the host file and mpi, please see the `GPU tutorial
            <https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_gpu.html#preparation>`_ .

        >>> from mindspore import set_context
        >>> from mindspore.communication.management import init, create_group, get_group_rank_from_world_rank
        >>> set_context(device_target="Ascend")
@@ -439,8 +517,6 @@ def create_group(group, rank_ids):
        The size of rank_ids should be larger than 1, rank_ids should not have duplicate data.
        This method should be used after init().
        Only support global single communication group in PyNative mode if you do not start with mpirun.
        The user needs to preset communication environment variables before running the following example, please see
        the docstring of the mindspore.managerment.

    Args:
        group (str): The name of the communication group to be created.

@@ -455,6 +531,17 @@ def create_group(group, rank_ids):
        ``Ascend``

    Examples:
        .. note::
            Before running the following examples, you need to configure the communication environment variables.

            For the Ascend devices, users need to prepare the rank table, set rank_id and device_id.
            Please see the `Ascend tutorial
            <https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html#preparations>`_
            for more details.

            For the GPU devices, users need to prepare the host file and mpi, please see the `GPU tutorial
            <https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_gpu.html#preparation>`_ .

        >>> from mindspore import set_context
        >>> import mindspore.ops as ops
        >>> from mindspore.communication.management import init, create_group

@@ -1474,7 +1474,7 @@ class MaxUnpool1d(Cell):

    .. math::
        \begin{array}{ll} \\
        H_{out} = (H{in} - 1) \times stride[0] - 2 \times padding[0] + kernel_size[0] \\
        H_{out} = (H{in} - 1) \times stride[0] - 2 \times padding[0] + kernel\_size[0] \\
        \end{array}

    Args:

@@ -1554,8 +1554,8 @@ class MaxUnpool2d(Cell):

    .. math::
        \begin{array}{ll} \\
        H_{out} = (H{in} - 1) \times stride[0] - 2 \times padding[0] + kernel_size[0] \\
        W_{out} = (W{in} - 1) \times stride[1] - 2 \times padding[1] + kernel_size[1] \\
        H_{out} = (H{in} - 1) \times stride[0] - 2 \times padding[0] + kernel\_size[0] \\
        W_{out} = (W{in} - 1) \times stride[1] - 2 \times padding[1] + kernel\_size[1] \\
        \end{array}

    Args:

@@ -1644,9 +1644,9 @@ class MaxUnpool3d(Cell):

    .. math::
        \begin{array}{ll} \\
        D_{out} = (D{in} - 1) \times stride[0] - 2 \times padding[0] + kernel_size[0] \\
        H_{out} = (H{in} - 1) \times stride[1] - 2 \times padding[1] + kernel_size[1] \\
        W_{out} = (W{in} - 1) \times stride[2] - 2 \times padding[2] + kernel_size[2] \\
        D_{out} = (D{in} - 1) \times stride[0] - 2 \times padding[0] + kernel\_size[0] \\
        H_{out} = (H{in} - 1) \times stride[1] - 2 \times padding[1] + kernel\_size[1] \\
        W_{out} = (W{in} - 1) \times stride[2] - 2 \times padding[2] + kernel\_size[2] \\
        \end{array}

    Args:

@@ -1475,8 +1475,8 @@ def inplace_index_add(var, indices, updates, axis):


def inplace_sub(x, v, indices):
    """
    Subtracts `v` into specified rows of `x`. Computes `y` = `x`; y[i,] -= `v`.
    r"""
    Subtracts `v` into specified rows of `x`. Computes :math:`y = x`; :math:`y[i,] -= input\_v`.

    Note:
        `indices` refers to the left-most dimension.

@@ -698,7 +698,7 @@ def max_unpool1d(x, indices, kernel_size, stride=None, padding=0, output_size=No

    .. math::
        \begin{array}{ll} \\
        H_{out} = (H{in} - 1) \times stride[0] - 2 \times padding[0] + kernel_size[0] \\
        H_{out} = (H{in} - 1) \times stride[0] - 2 \times padding[0] + kernel\_size[0] \\
        \end{array}

    Args:

@@ -797,8 +797,8 @@ def max_unpool2d(x, indices, kernel_size, stride=None, padding=0, output_size=No

    .. math::
        \begin{array}{ll} \\
        H_{out} = (H{in} - 1) \times stride[0] - 2 \times padding[0] + kernel_size[0] \\
        W_{out} = (W{in} - 1) \times stride[1] - 2 \times padding[1] + kernel_size[1] \\
        H_{out} = (H{in} - 1) \times stride[0] - 2 \times padding[0] + kernel\_size[0] \\
        W_{out} = (W{in} - 1) \times stride[1] - 2 \times padding[1] + kernel\_size[1] \\
        \end{array}

    Args:

@@ -898,9 +898,9 @@ def max_unpool3d(x, indices, kernel_size, stride=None, padding=0, output_size=No

    .. math::
        \begin{array}{ll} \\
        D_{out} = (D{in} - 1) \times stride[0] - 2 \times padding[0] + kernel_size[0] \\
        H_{out} = (H{in} - 1) \times stride[1] - 2 \times padding[1] + kernel_size[1] \\
        W_{out} = (W{in} - 1) \times stride[2] - 2 \times padding[2] + kernel_size[2] \\
        D_{out} = (D{in} - 1) \times stride[0] - 2 \times padding[0] + kernel\_size[0] \\
        H_{out} = (H{in} - 1) \times stride[1] - 2 \times padding[1] + kernel\_size[1] \\
        W_{out} = (W{in} - 1) \times stride[2] - 2 \times padding[2] + kernel\_size[2] \\
        \end{array}

    Args:

@@ -1553,12 +1553,12 @@ def coo_log1p(x: COOTensor) -> COOTensor:


def csr_round(x: CSRTensor) -> CSRTensor:
    """
    r"""
    Returns half to even of a csr_tensor element-wise.

    .. math::

        out_i \approx x_i
        out_i \\approx x_i

    Args:
        x (CSRTensor): The input csr_tensor.

@@ -1995,12 +1995,12 @@ def coo_isinf(x: COOTensor) -> COOTensor:


def csr_atanh(x: CSRTensor) -> CSRTensor:
    """
    r"""
    Computes inverse hyperbolic tangent of the input element-wise.

    .. math::

        out_i = \tanh^{-1}(x_{i})
        out_i = \\tanh^{-1}(x_{i})

    .. warning::
        This is an experimental prototype that is subject to change and/or deletion.

@@ -5190,8 +5190,7 @@ class BatchToSpace(PrimitiveWithInfer):
            Each list contains 2 integers.
            All values must be not less than 0. crops[i] specifies the crop values for the spatial dimension i, which
            corresponds to the input dimension i+2. It is required that

            :math:`input\_shape[i+2]*block\_size > crops[i][0]+crops[i][1]`
            :math:`input\_shape[i+2]*block\_size > crops[i][0]+crops[i][1]` .

    Inputs:
        - **input_x** (Tensor) - The input tensor. It must be a 4-D tensor, dimension 0 must be divisible by

|
|||
is sorted in the same order that the numbers appear in `x` (duplicates are
|
||||
preserved). This operation also returns a list `idx` that represents the
|
||||
position of each `out` element in `x`. In other words:
|
||||
:math:`out[i] = x[idx[i]] for i in [0, 1, ..., len(out) - 1]` .
|
||||
:code:`out[i] = x[idx[i]] for i in [0, 1, ..., len(out) - 1]` .
|
||||
|
||||
Args:
|
||||
out_idx (:class:`mindspore.dtype`, optional): The dtype of `idx`,
|
||||
|
@ -6723,11 +6722,11 @@ class ExtractVolumePatches(Primitive):
|
|||
|
||||
Args:
|
||||
kernel_size (Union[int, tuple[int], list[int]]): A list of ints which's length is 3 or 5.
|
||||
The size of the sliding window for each dimension of input. Must be: [1, 1, k_d, k_h, k_w] or
|
||||
[k_d, k_h, k_w]. If k_d = k_h = k_w, you can enter an integer.
|
||||
The size of the sliding window for each dimension of input. Must be: :math:`[1, 1, k_d, k_h, k_w]` or
|
||||
:math:`[k_d, k_h, k_w]`. If :math:`k_d = k_h = k_w`, you can enter an integer.
|
||||
strides (Union[int, tuple[int], list[int]]): A list of ints which's length is 3 or 5.
|
||||
How far the centers of two consecutive patches are in input. Must be: [1, 1, s_d, s_h, s_w] or
|
||||
[s_d, s_h, s_w]. If s_d = s_h = s_w, you can enter an integer.
|
||||
How far the centers of two consecutive patches are in input. Must be: :math:`[1, 1, s_d, s_h, s_w]` or
|
||||
:math:`[s_d, s_h, s_w]`. If :math:`s_d = s_h = s_w`, you can enter an integer.
|
||||
padding (str): A string from: "SAME", "VALID". The type of padding algorithm to use.
|
||||
|
||||
Inputs:
|
||||
|
@ -6859,7 +6858,7 @@ class Lstsq(Primitive):
|
|||
Computes the solutions of the least squares and minimum norm problems of full-rank
|
||||
matrix `x` of size :math:`(m \times n)` and matrix `a` of size :math:`(m \times k)`.
|
||||
|
||||
If :math:`m \geq n`, `lstsq` solves the least-squares problem:
|
||||
If :math:`m \geq n`, `Lstsq` solves the least-squares problem:
|
||||
|
||||
.. math::
|
||||
|
||||
|
@ -6867,7 +6866,7 @@ class Lstsq(Primitive):
|
|||
\min_y & \|xy-a\|_2.
|
||||
\end{array}
|
||||
|
||||
If :math:`m < n`, `lstsq` solves the least-norm problem:
|
||||
If :math:`m < n`, `Lstsq` solves the least-norm problem:
|
||||
|
||||
.. math::
|
||||
|
||||
|
@ -7680,7 +7679,7 @@ class SegmentMean(Primitive):
|
|||
r"""
|
||||
Computes the mean along segments of a tensor.
|
||||
|
||||
Computes a tensor such that :math:`output_i = \mean_j input\_x_j` where mean is over :math:`j` such that
|
||||
Computes a tensor such that :math:`output_i = mean_j(input\_x_j)` where mean is over :math:`j` such that
|
||||
:math:`segment\_ids[j] == i`. If the mean is empty for a given segment ID :math:`i`, :math:`output[i] = 0`.
|
||||
|
||||
.. warning::
|
||||
|
|
|
@@ -409,7 +409,7 @@ class _HostAllGather(PrimitiveWithInfer):
class ReduceScatter(Primitive):
    r"""
    Reduces and scatters tensors from the specified communication group.
    For more details about it, please refer to `ReduceScatter \
    For more details about it, please refer to `Distributed Set Communication Primitives - ReduceScatter \
    <https://www.mindspore.cn/tutorials/experts/en/master/parallel/communicate_ops.html#reducescatter>`_ .

    Note:

|
|||
|
||||
NeighborExchangeV2 sends data from the local rank to ranks in the `send_rank_ids`,
|
||||
as while receive data from `recv_rank_ids`. Please refer to
|
||||
`NeighborExchangeV2 data exchange rules \
|
||||
`Distributed Set Communication Primitives - NeighborExchangeV2 \
|
||||
<https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/communicate_ops.html#neighborexchangev2>`_
|
||||
to learn about how the data is exchanged between neighborhood devices.
|
||||
|
||||
|
|
|
@@ -149,6 +149,9 @@ class Custom(ops.PrimitiveWithInfer):
    primitives. You can construct a `Custom` object with a predefined function, which describes the computation
    logic of a user defined operator. You can also construct another `Custom` object with another predefined
    function if needed. Then these `Custom` objects can be directly used in neural networks.
    Detailed description and introduction of user-defined operators, including correct writing of parameters,
    please refer to `Custom Operators Tutorial
    <https://www.mindspore.cn/tutorials/experts/en/master/operation/op_custom.html>`_ .

    .. warning::
        This is an experimental prototype that is subject to change.

@@ -1996,8 +1996,8 @@ class InplaceIndexAdd(Primitive):


class InplaceSub(Primitive):
    """
    Subtracts `v` into specified rows of `x`. Computes `y` = `x`; y[i,] -= `v`.
    r"""
    Subtracts `v` into specified rows of `x`. Computes :math:`y = x`; :math:`y[i,] -= input\_v`.

    Refer to :func:`mindspore.ops.inplace_sub` for more details.

@@ -3367,7 +3367,7 @@ class Mod(_MathBinaryOp):
        - When the elements of input exceed 2048, the accuracy of operator cannot guarantee the requirement of
          double thousandths in the mini form.
        - Due to different architectures, the calculation results of this operator on NPU and CPU may be inconsistent.
        - If shape is expressed as (D1,D2... ,Dn), then D1\*D2... \*DN<=1000000,n<=8.
        - If shape is expressed as :math:`(D1,D2... ,Dn)`, then :math:`D1*D2... *DN<=1000000,n<=8`.

    Inputs:
        - **x** (Union[Tensor, numbers.Number, bool]) - The first input is a number, a bool

|
|||
|
||||
.. math::
|
||||
|
||||
out_{i} = nextafter{x1_{i}, x2_{i}}
|
||||
out_{i} = nextafter({x1_{i}, x2_{i}})
|
||||
|
||||
Inputs:
|
||||
- **x1** (Tensor) - The shape of tensor is
|
||||
|
|