q3 collective review part5 master

This commit is contained in:
lilinjie 2022-11-07 21:22:15 +08:00
parent 550ff897c2
commit 366136aca5
19 changed files with 128 additions and 54 deletions

View File

@@ -3,12 +3,12 @@ mindspore.ops.Conv2DTranspose
.. py:class:: mindspore.ops.Conv2DTranspose(out_channel, kernel_size, pad_mode='valid', pad=0, pad_list=None, mode=1, stride=1, dilation=1, group=1, data_format='NCHW')
Computes a 2D transposed convolution, also known as deconvolution (although it is not an actual deconvolution).
Computes a 2D transposed convolution, also known as deconvolution, although it is not an actual deconvolution: it cannot completely restore the data of the input matrix, but it can restore its shape.
Args:
- **out_channel** (int) - The number of output channels.
- **kernel_size** (Union[int, tuple[int]]) - The size of the convolution kernel.
- **pad_mode** (str) - The padding mode. It can be "valid", "same" or "pad". Default: "valid".
- **pad_mode** (str) - The padding mode. It can be "valid", "same" or "pad". Default: "valid". Please refer to :class:`mindspore.nn.Conv2DTranspose` for more rules on how `pad_mode` is used.
- **pad** (Union[int, tuple[int]]) - The padding value to be filled. Default: 0. If `pad` is an integer, the paddings of top, bottom, left and right are all equal to `pad`. If `pad` is a tuple of four integers, the paddings of top, bottom, left and right are equal to pad[0], pad[1], pad[2] and pad[3] respectively.
- **pad_list** (Union[str, None]) - The convolution paddings (top, bottom, left, right). Default: None, meaning this parameter is not used.
- **mode** (int) - Specifies different convolution modes. This value is currently not used. Default: 1.
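A minimal usage sketch, assuming the operator is called with three inputs (the gradient Tensor, the weight and the original input size) and that the all-ones shapes here are representative; only the shape of x is recovered, not its values:
>>> import numpy as np
>>> import mindspore
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> dout = Tensor(np.ones([10, 32, 30, 30]), mindspore.float32)
>>> weight = Tensor(np.ones([32, 32, 3, 3]), mindspore.float32)
>>> x = Tensor(np.ones([10, 32, 32, 32]), mindspore.float32)
>>> conv2d_transpose = ops.Conv2DTranspose(out_channel=32, kernel_size=3)
>>> output = conv2d_transpose(dout, weight, ops.shape(x))
>>> print(output.shape)
(10, 32, 32, 32)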

View File

@@ -3,7 +3,7 @@ mindspore.ops.ExtractVolumePatches
.. py:class:: mindspore.ops.ExtractVolumePatches(kernel_size, strides, padding)
Extracts patches from the input and puts them in the "depth" output dimension.
Extracts patches from the input and puts them in the "depth" output dimension, where "depth" is the second dimension of the output.
Args:
- **kernel_size** (Union[int, tuple[int], list[int]]) - A list of ints with length 3 or 5, giving the size of the sliding window in each dimension of the input. Must be [1, 1, k_d, k_h, k_w] or [k_d, k_h, k_w]. If k_d = k_h = k_w, an integer can be passed.
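A hedged sketch of the "depth" rule above, assuming `padding` accepts "VALID" and that the depth dimension becomes C * k_d * k_h * k_w:
>>> import numpy as np
>>> import mindspore
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> op = ops.ExtractVolumePatches(kernel_size=(1, 1, 2, 2, 2), strides=(1, 1, 1, 1, 1), padding="VALID")
>>> x = Tensor(np.random.rand(1, 1, 3, 3, 3), mindspore.float16)
>>> print(op(x).shape)
(1, 8, 2, 2, 2)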

View File

@@ -25,3 +25,4 @@ mindspore.ops.Multinomial
- **TypeError** - If `x` is not a Tensor whose dtype is float16, float32 or float64.
- **TypeError** - If `num_sample` is not an int.
- **TypeError** - If `dtype` is not int32 or int64.
- **ValueError** - If `seed` or `seed2` is less than zero.

View File

@@ -3,19 +3,30 @@ mindspore.ops.NeighborExchangeV2
.. py:class:: mindspore.ops.NeighborExchangeV2(send_rank_ids, recv_rank_ids, send_lens, recv_lens, data_format, group=GlobalComm.WORLD_COMM_GROUP)
NeighborExchangeV2 is a collective communication function.
NeighborExchangeV2 is a collective communication operation.
Sends data from the local rank to the ranks specified in `send_rank_ids`, while receiving data from `recv_rank_ids`.
Sends data from the local rank to the ranks specified in `send_rank_ids`, while receiving data from `recv_rank_ids`. Please refer to `MindSpore <https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/communicate_ops.html#neighborexchangev2>`_ to learn how the data is exchanged between neighboring devices.
.. note::
Before running the following example, the user needs to preset the environment variables; please see the details on the official `MindSpore <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.ops.html#通信算子>`_ website.
Before running the following example, the user needs to preset the environment variables; please see `NeighborExchangeV2 data exchange <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.ops.html#通信算子>`_ on the official website for details.
A fully connected network is required; every device must have the same VLAN ID, and the IPs and masks must be in the same subnet. See the `notes on distributed collective communication primitives <https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/communicate_ops.html#注意事项>`_ .
Args:
- **send_rank_ids** (list(int)) - Ranks to which data is sent. The 8 rank_ids represent, for each of the 8 directions, the rank the data in that direction is sent to; if no data is sent in some direction, set it to -1.
- **recv_rank_ids** (list(int)) - Ranks from which data is received. The 8 rank_ids represent, for each of the 8 directions, the rank the data in that direction is received from; if no data is received in some direction, set it to -1.
- **send_lens** (list(int)) - Lengths of the data sent to `send_rank_ids`; the 4 numbers represent the lengths in the 4 directions [top, bottom, left, right].
- **recv_lens** (list(int)) - Lengths of the data received from `recv_rank_ids`; the 4 numbers represent the lengths in the 4 directions [top, bottom, left, right].
- **send_lens** (list(int)) - Lengths of the data sent to `send_rank_ids`; the 4 numbers represent the lengths in the 4 directions [send_top, send_bottom, send_left, send_right].
- **recv_lens** (list(int)) - Lengths of the data received from `recv_rank_ids`; the 4 numbers represent the lengths in the 4 directions [recv_top, recv_bottom, recv_left, recv_right].
- **data_format** (str) - Data format; only NCHW is supported now.
- **group** (str, optional) - The communication group to work on. Default: "GlobalComm.WORLD_COMM_GROUP", i.e. "hccl_world_group" on Ascend and "nccl_world_group" on GPU.
Inputs:
- **input_x** (Tensor) - The input Tensor before the exchange, with shape :math:`(N, C, H, W)`.
Outputs:
The output Tensor after the exchange. If the input shape is :math:`(N, C, H, W)`, the output shape is :math:`(N, C, H+recv\_top+recv\_bottom, W+recv\_left+recv\_right)`.
Raises:
- **TypeError** - If `group` is not a string, or if any of `send_rank_ids`, `recv_rank_ids`, `send_lens` and `recv_lens` is not a list.
- **ValueError** - If `send_rank_ids` or `recv_rank_ids` has a value less than -1 or has repeated values.
- **ValueError** - If `send_lens` or `recv_lens` has a value less than zero.
- **ValueError** - If `data_format` is not "NCHW".
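A minimal two-device sketch, assuming the environment variables above are preset and the same script is launched on both ranks; the rank ids, lengths and the 8-direction indexing (index 4 taken as "bottom") are illustrative assumptions:
>>> # rank 0 sends its bottom row to rank 1 and receives one row back
>>> import numpy as np
>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> from mindspore.communication import init
>>> init()
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.exchange = ops.NeighborExchangeV2(send_rank_ids=[-1, -1, -1, -1, 1, -1, -1, -1],
...                                                recv_rank_ids=[-1, -1, -1, -1, 1, -1, -1, -1],
...                                                send_lens=[0, 1, 0, 0],
...                                                recv_lens=[0, 1, 0, 0],
...                                                data_format="NCHW")
...     def construct(self, x):
...         return self.exchange(x)
...
>>> x = Tensor(np.ones([1, 1, 2, 2]), ms.float32)
>>> print(Net()(x).shape)  # H grows by recv_top + recv_bottom = 0 + 1
(1, 1, 3, 2)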

View File

@@ -3,7 +3,7 @@ mindspore.ops.ReduceScatter
.. py:class:: mindspore.ops.ReduceScatter(op=ReduceOp.SUM, group=GlobalComm.WORLD_COMM_GROUP)
Reduces and scatters tensors from the specified communication group.
Reduces and scatters tensors from the specified communication group. For more details, please refer to `ReduceScatter <https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/communicate_ops.html#reducescatter>`_ .
.. note::
In all processes of the collection, the Tensor must have the same shape and format.
@@ -14,6 +14,13 @@ mindspore.ops.ReduceScatter
- **op** (str) - Specifies the reduction operation applied to the elements, like SUM and MAX. Default: ReduceOp.SUM.
- **group** (str) - The communication group to work on. Default: "GlobalComm.WORLD_COMM_GROUP".
Inputs:
- **input_x** (Tensor) - The input Tensor, assumed to have shape :math:`(N, *)`, where `*` means any number of additional dimensions. N must be divisible by rank_size, where rank_size is the number of devices in the current communication group.
Outputs:
Tensor, with the same dtype as `input_x` and a shape of :math:`(N/rank\_size, *)`.
Raises:
- **TypeError** - If `op` or `group` is not a string.
- **ValueError** - If the first dimension of the input cannot be divided by rank size, where rank size is the number of devices communicating in the communication group.
- **ValueError** - If the first dimension of the input cannot be divided by rank_size.
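A minimal sketch, assuming two devices launched together so that rank_size is 2; each rank then gets a :math:`(4, 8)` slice of the summed :math:`(8, 8)` input:
>>> import numpy as np
>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> from mindspore.communication import init
>>> from mindspore.ops import ReduceOp
>>> init()
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.reducescatter = ops.ReduceScatter(ReduceOp.SUM)
...     def construct(self, x):
...         return self.reducescatter(x)
...
>>> x = Tensor(np.ones([8, 8]), ms.float32)
>>> print(Net()(x).shape)  # N / rank_size = 8 / 2
(4, 8)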

View File

@@ -151,4 +151,4 @@
The data arrangement of operator input/output Tensors in the dynamic shape scenario.
Args:
- **unknown_shape_formats** (list) - Describes the data arrangement of operator input/output Tensors in the dynamic shape scenario. Default: (), meaning dynamic shape is not supported.
- **unknown_shape_formats** (list) - Describes the data arrangement of operator input/output Tensors in the dynamic shape scenario.

View File

@@ -8,7 +8,7 @@ mindspore.ops.bmm
.. math::
\text{output}[..., :, :] = \text{matrix}(x[..., :, :]) * \text{matrix}(y[..., :, :])
The first input Tensor must not be less than `3`, and the second input must not be less than `2`.
The dim of `input_x` cannot be less than `3`, and the dim of `mat2` cannot be less than `2`.
Args:
- **input_x** (Tensor) - The first Tensor to be multiplied. Its shape is :math:`(*B, N, C)`, where :math:`*B` represents the batch size, which can be multi-dimensional, and :math:`N` and :math:`C` are the sizes of the last two dimensions.
@@ -18,4 +18,6 @@ mindspore.ops.bmm
Tensor, the shape of the output Tensor is :math:`(*B, N, M)`.
Raises:
- **ValueError** - If the length of the shape of `input_x` is not equal to the length of the shape of `mat2`, or the length of the shape of `input_x` is less than `3`.
- **ValueError** - If the dim of `input_x` is less than `3` or the dim of `mat2` is less than 2.
- **ValueError** - If the length of the third dimension of `input_x` is not equal to the length of the second dimension of `mat2`.
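A small shape sketch (with all-ones inputs, every output element equals C = 3):
>>> import numpy as np
>>> import mindspore
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> input_x = Tensor(np.ones([2, 4, 1, 3]), mindspore.float32)  # (*B, N, C)
>>> mat2 = Tensor(np.ones([2, 4, 3, 4]), mindspore.float32)     # (*B, C, M)
>>> print(ops.bmm(input_x, mat2).shape)
(2, 4, 1, 4)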

View File

@@ -8,7 +8,7 @@
Args:
- **input** (Tensor) - The input Tensor.
- **other** (Tensor) - The input Tensor.
- **other** (Tensor) - The other Tensor; its dtype and shape must be the same as `input`, and the size of their `dim` dimension should be 3.
- **dim** (int) - The dimension along which to apply the cross product. Default: None.
Returns:
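A hedged sketch of the size-3 constraint on the `dim` dimension (the values follow the standard 3-D cross product):
>>> import numpy as np
>>> import mindspore
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> a = Tensor(np.array([[1, 2, 3], [1, 2, 3]]), mindspore.float32)
>>> b = Tensor(np.array([[4, 5, 6], [4, 5, 6]]), mindspore.float32)
>>> print(ops.cross(a, b, dim=1))
[[-3.  6. -3.]
 [-3.  6. -3.]]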

View File

@@ -13,3 +13,6 @@ mindspore.ops.custom_info_register
Returns:
Function, returns a decorator for op info register.
Raises:
- **TypeError** - If `reg_info` is not a tuple.

View File

@@ -51,11 +51,11 @@ mindspore.ops.strided_slice
- **begin** (tuple[int]) - The index at which slicing starts. Only int values greater than or equal to 0 are supported.
- **end** (tuple[int]) - The index at which slicing ends. Only int values greater than 0 are supported.
- **strides** (tuple[int]) - The slicing stride for each dimension, given as a tuple whose values must all be int. The elements of `strides` must be non-zero; negative values cause reverse slicing.
- **begin_mask** (int, optional) - The mask for the start index of the slice. Binary flags mark the dimensions of the input Tensor: if bit i is set to 1, `begin[i]` is ignored and the start index of that dimension is 0. Default: 0.
- **end_mask** (int, optional) - The mask for the end index of the slice, analogous to `begin_mask`. Binary flags mark the dimensions of the input Tensor: if bit i is set to 1, `end[i]` is ignored and the end index of that dimension takes the maximum value, i.e. slicing goes as far as possible in that dimension. Default: 0.
- **ellipsis_mask** (int, optional) - Dimensions whose bit is not 0 do not need to be sliced. An int mask. Default: 0.
- **new_axis_mask** (int, optional) - The mask for dimensions newly added by the slice. If bit i is 1, `begin[i]`, `end[i]` and `strides[i]` are ignored and a dimension of size 1 is added at position i. An int mask. Default: 0.
- **shrink_axis_mask** (int, optional) - The mask for dimensions shrunk by the slice. If bit i is set to 1, dimension i is shrunk to 1. An int mask. Default: 0.
- **begin_mask** (int, optional) - The mask for the start index of the slice. Default: 0.
- **end_mask** (int, optional) - The mask for the end index of the slice. Default: 0.
- **ellipsis_mask** (int, optional) - A dimension mask; a bit value of 1 means that dimension does not need to be sliced. An int mask. Default: 0.
- **new_axis_mask** (int, optional) - The mask for dimensions newly added by the slice. Default: 0.
- **shrink_axis_mask** (int, optional) - The mask for dimensions shrunk by the slice. An int mask. Default: 0.
Returns:
Returns the sliced Tensor extracted according to the start index, end index and strides.
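A minimal mask-free sketch, equivalent to the pythonic slice input_x[0:2, 1:3, 0:4:2]:
>>> import numpy as np
>>> import mindspore
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> input_x = Tensor(np.arange(2 * 3 * 4).reshape(2, 3, 4), mindspore.int64)
>>> output = ops.strided_slice(input_x, (0, 1, 0), (2, 3, 4), (1, 1, 2))
>>> print(output.shape)
(2, 2, 2)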

View File

@@ -7,9 +7,12 @@
`x``y` 的输入遵循隐式类型转换规则使数据类型一致。输入必须是两个Tensor或一个Tensor和一个Scalar。当输入是两个Tensor时它们的数据类型不能同时为bool它们的shape可以广播。当输入是一个Tensor和一个Scalar时Scalar只能是一个常量。
.. note::
`x``y` 数据类型都为复数的时候, 须同时为complex64或者complex128。
参数:
- **x** (Union[Tensor, Number, bool]) - floatcomplex或bool类型的Tensor。
- **y** (Union[Tensor, Number, bool]) - float、complex或bool类型的Tensor。`x``y` 不能同时为bool类型。
- **x** (Union[Tensor, Number, bool]) - number.Number或bool类型的Tensor或者是一个bool或者number。
- **y** (Union[Tensor, Number, bool]) - number.Number或bool类型的Tensor或者是一个bool或者number `x``y` 不能同时为bool类型。
返回:
Tensorshape与广播后的shape相同数据类型为两个输入中精度较高或数据类型较高的类型。

View File

@@ -1127,7 +1127,9 @@ def strided_slice(input_x,
must not be greater than the dim of `input_x`.
During the slicing process, the fragment (end-begin)/strides are extracted from each dimension.
Example: For a 5*6*7 Tensor `input_x`, set `begin`, `end` and `strides` to (1, 3, 2), (3, 5, 6),
Example: For Tensor `input_x` with shape :math:`(5, 6, 7)`,
set `begin`, `end` and `strides` to (1, 3, 2), (3, 5, 6),
(1, 1, 2) respectively, then elements from index 1 to 3 are extracted for dim 0, index 3 to 5
are extracted for dim 1 and index 2 to 6 with a stride of 2 are extracted for dim 2; this
process is equivalent to a pythonic slice `input_x[1:3, 3:5, 2:6:2]`.
@@ -1135,14 +1137,17 @@ def strided_slice(input_x,
If the length of `begin`, `end` and `strides` is smaller than the dim of `input_x`,
then all elements are extracted from the missing dims; it behaves as if the missing
entries of `begin`, `end` and `strides` were filled with zeros, the size of that missing dim, and ones.
Example: For a 5*6*7 Tensor `input_x`, set `begin`, `end` and `strides` to (1, 3),
Example: For Tensor `input_x` with shape :math:`(5, 6, 7)`,
set `begin`, `end` and `strides` to (1, 3),
(3, 5), (1, 1) respectively, then elements from index 1 to 3 are extracted
for dim 0, index 3 to 5 are extracted for dim 1 and index 0 to 7 are extracted
for dim 2; this process is equivalent to a pythonic slice `input_x[1:3, 3:5, 0:7]`.
Here's how a mask works:
For each specific mask, it will be converted to a binary representation internally, and then
the result is reversed to start the calculation. For a 5*6*7 Tensor with a given mask value of 3 which
the result is reversed to start the calculation. For Tensor `input_x` with
shape :math:`(5, 6, 7)`, given a mask value of 3 which
can be represented as 0b011. Reversing that, we get 0b110, which implies that the first and second dim of the
original Tensor will be affected by this mask. See the examples below; for simplicity, all masks mentioned
below are in their reversed binary form:
@@ -1151,7 +1156,7 @@ def strided_slice(input_x,
If the ith bit of `begin_mask` is 1, `begin[i]` is ignored and the fullest
possible range in that dimension is used instead. `end_mask` is analogous,
except with the end range. For a 5*6*7*8 Tensor `input_x`, if `begin_mask`
except with the end range. For Tensor `input_x` with shape :math:`(5, 6, 7, 8)`, if `begin_mask`
is 0b110, `end_mask` is 0b011, the slice `input_x[0:3, 0:6, 2:7:2]` is produced.
- `ellipsis_mask`
@@ -1166,14 +1171,15 @@ def strided_slice(input_x,
If the ith bit of `new_axis_mask` is 1, `begin`, `end` and `strides` are
ignored and a new length 1 dimension is added at the specified position
in the output Tensor. For a 5*6*7 Tensor `input_x`, if `new_axis_mask`
in the output Tensor. For Tensor `input_x` with shape :math:`(5, 6, 7)`, if `new_axis_mask`
is 0b110, a new dim is added to the second dim, which will produce
a Tensor with shape :math:`(5, 1, 6, 7)`.
- `shrink_axis_mask`
If the ith bit of `shrink_axis_mask` is 1, `begin`, `end` and `strides`
are ignored and dimension i will be shrunk to 0. For a 5*6*7 Tensor `input_x`,
are ignored and dimension i will be shrunk to 0.
For Tensor `input_x` with shape :math:`(5, 6, 7)`,
if `shrink_axis_mask` is 0b010, it is equivalent to slice `x[:, 5, :]`
and results in an output shape of :math:`(5, 7)`.
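As a hedged illustration of the mask arithmetic above (bit i of a mask, read from the reversed binary form, controls dim i; the printed shape is an assumption derived from those rules):
>>> import numpy as np
>>> import mindspore
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> x = Tensor(np.arange(5 * 6 * 7).reshape(5, 6, 7), mindspore.int64)
>>> # begin_mask=0b110 ignores begin[1] and begin[2]; end_mask=0b011 ignores end[0] and end[1],
>>> # so this call is equivalent to the pythonic slice x[1:5, 0:6, 0:7:2]
>>> out = ops.strided_slice(x, (1, 3, 2), (3, 5, 7), (1, 1, 2), begin_mask=0b110, end_mask=0b011)
>>> print(out.shape)
(4, 6, 4)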

View File

@@ -687,8 +687,7 @@ def floor_div(x, y):
x (Union[Tensor, Number, bool]): The first input is a number or
a bool or a tensor whose data type is number or bool.
y (Union[Tensor, Number, bool]): The second input is a number or
a bool when the first input is a tensor or a tensor whose data type is number or bool.
a bool when the first input is a tensor, or it can be a tensor whose data type is number or bool.
Returns:
Tensor, the shape is the same as the one after broadcasting,
and the data type is the one with higher precision or higher digits among the two inputs.
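A short sketch of the flooring behaviour (note that -1 divided by 3 floors to -1, not 0):
>>> import numpy as np
>>> import mindspore
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> x = Tensor(np.array([2, 4, -1]), mindspore.int32)
>>> y = Tensor(np.array([3, 3, 3]), mindspore.int32)
>>> print(ops.floor_div(x, y))
[ 0  1 -1]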
@@ -789,7 +788,7 @@ def floor_mod(x, y):
x (Union[Tensor, Number, bool]): The first input is a number or
a bool or a tensor whose data type is number or bool.
y (Union[Tensor, Number, bool]): The second input is a number or
a bool when the first input is a tensor or a tensor whose data type is number or bool.
a bool when the first input is a tensor, or it can be a tensor whose data type is number or bool.
Returns:
Tensor, the shape is the same as the one after broadcasting,
@@ -2552,7 +2551,7 @@ def less(x, y):
x (Union[Tensor, Number, bool]): The first input is a number or
a bool or a tensor whose data type is number or bool.
y (Union[Tensor, Number, bool]): The second input is a number or
a bool when the first input is a tensor or a tensor whose data type is number or bool.
a bool when the first input is a tensor, or it can be a tensor whose data type is number or bool.
Returns:
Tensor, the shape is the same as the one after broadcasting, and the data type is bool.
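A matching sketch for less, which compares element-wise after broadcasting and returns bool:
>>> import numpy as np
>>> import mindspore
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> y = Tensor(np.array([2, 2, 2]), mindspore.int32)
>>> print(ops.less(x, y))
[ True False False]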
@@ -5460,7 +5459,7 @@ def bmm(input_x, mat2):
\text{output}[..., :, :] = \text{matrix}(input_x[..., :, :]) * \text{matrix}(mat2[..., :, :])
The first input tensor must be not less than `3` and the second input must be not less than `2`.
The dim of `input_x` can not be less than `3` and the dim of `mat2` can not be less than `2`.
Args:
input_x (Tensor): The first tensor to be multiplied. The shape of the tensor is :math:`(*B, N, C)`,
@@ -5472,8 +5471,9 @@ def bmm(input_x, mat2):
Tensor, the shape of the output tensor is :math:`(*B, N, M)`.
Raises:
ValueError: If length of shape of `input_x` is not equal to length of shape of `mat2` or
length of shape of `input_x` is less than `3`.
ValueError: If dim of `input_x` is less than `3` or dim of `mat2` is less than `2`.
ValueError: If the length of the third dim of `input_x` is not equal to
the length of the second dim of `mat2`.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
@@ -5740,12 +5740,13 @@ def xdivy(x, y):
When the inputs are one tensor and one scalar,
the scalar could only be a constant.
.. note::
When `x` and `y` are both of datatype complex, they should be both complex64 or complex128 at the same time.
Args:
x (Union[Tensor, Number, bool]): The first input is a number, or a bool,
or a tensor whose data type is float16, float32, float64, complex64, complex128 or bool.
y (Union[Tensor, Number, bool]): The second input is a number,
or a bool when the first input is a tensor, or a tensor whose data type is float16,
float32, float64, complex64, complex128 or bool.
x (Union[Tensor, Number, bool]): Tensor of datatype number.Number or bool, or it can be a bool or number.
y (Union[Tensor, Number, bool]): Tensor of datatype number.Number or bool, or it can be a bool or number.
`x` and `y` cannot both be bool at the same time.
Returns:
Tensor, the shape is the same as the one after broadcasting,
@@ -6486,7 +6487,7 @@ def cross(input, other, dim=None):
Args:
input (Tensor): input is a tensor.
other (Tensor): other is a tensor. input `other` must have the same shape and type as input `input`, and
other (Tensor): The other Tensor; `other` must have the same shape and type as input `input`, and
the size of their `dim` dimension should be `3`.
dim (int): dimension to apply cross product in. Default: None.

View File

@@ -105,6 +105,9 @@ def custom_info_register(*reg_info):
Returns:
Function, returns a decorator for op info register.
Raises:
TypeError: If `reg_info` is not a tuple.
Examples:
>>> from mindspore.ops import custom_info_register, CustomRegOp, DataType
>>> custom_func_ascend_info = CustomRegOp() \
@@ -520,7 +523,6 @@ class TBERegOp(RegOp):
Args:
unknown_shape_formats (list): Description data arrangement of operator input / output tensor in dynamic
shape scene.
Default: (), means dynamic shape is not supported.
"""
RegOp._is_list(unknown_shape_formats)
self.unknown_shape_formats_.append(unknown_shape_formats)

View File

@@ -6824,6 +6824,7 @@ class TensorScatterElements(Primitive):
class ExtractVolumePatches(Primitive):
r"""
Extract patches from input and put them in the "depth" output dimension.
"depth" dimension is the second dim of output.
Args:
kernel_size (Union[int, tuple[int], list[int]]): A list of ints whose length is 3 or 5.

View File

@@ -383,8 +383,10 @@ class _HostAllGather(PrimitiveWithInfer):
class ReduceScatter(Primitive):
"""
r"""
Reduces and scatters tensors from the specified communication group.
For more details about it, please refer to `ReduceScatter \
<https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/communicate_ops.html#reducescatter>`_ .
Note:
The tensors must have the same shape and format in all processes of the collection. The user needs to preset
@@ -397,10 +399,17 @@ class ReduceScatter(Primitive):
like SUM and MAX. Default: ReduceOp.SUM.
group (str): The communication group to work on. Default: "GlobalComm.WORLD_COMM_GROUP".
Inputs:
- **input_x** (Tensor) - Input Tensor, suppose it has a shape :math:`(N, *)`, where `*`
means any number of additional dimensions. N must be divisible by rank_size.
rank_size refers to the number of cards in the communication group.
Outputs:
Tensor, it has the same dtype as `input_x` with a shape of :math:`(N/rank\_size, *)`.
Raises:
TypeError: If `op` or `group` is not a string.
ValueError: If the first dimension of the input cannot be divided by the rank size. Rank size refers to the
number of cards in the communication group.
ValueError: If the first dimension of the input cannot be divided by the rank_size.
Supported Platforms:
``Ascend`` ``GPU``
@@ -754,7 +763,7 @@ class AlltoAll(PrimitiveWithInfer):
:math:`y_{concat\_dim} = x_{concat\_dim} * split\_count`
:math:`y_other = x_other`.
:math:`y_{other} = x_{other}`.
Raises:
TypeError: If group is not a string.
@@ -824,11 +833,14 @@ class AlltoAll(PrimitiveWithInfer):
class NeighborExchangeV2(Primitive):
"""
NeighborExchangeV2 is a collective operation.
r"""
NeighborExchangeV2 is a collective communication operation.
NeighborExchangeV2 sends data from the local rank to ranks in the `send_rank_ids`,
as while receive data from `recv_rank_ids`.
as while receive data from `recv_rank_ids`. Please refer to
`NeighborExchangeV2 data exchange rules \
<https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/communicate_ops.html#neighborexchangev2>`_
to learn about how the data is exchanged between neighborhood devices.
Note:
The user needs to preset
@@ -846,13 +858,27 @@ class NeighborExchangeV2(Primitive):
recv_rank_ids (list(int)): Ranks from which the data is received. 8 rank_ids represent 8 directions;
if no data is received from one direction, set it to -1.
send_lens (list(int)): Data lens which send to the send_rank_ids, 4 numbers represent the lens of
[top, bottom, left, right].
[send_top, send_bottom, send_left, send_right].
recv_lens (list(int)): Data lens which received from recv_rank_ids, 4 numbers represent the lens of
[top, bottom, left, right].
[recv_top, recv_bottom, recv_left, recv_right].
data_format (str): Data format, only support NCHW now.
group (str, optional): The communication group to work on. Default: "GlobalComm.WORLD_COMM_GROUP", which means
"hccl_world_group" in Ascend, and "nccl_world_group" in GPU.
Inputs:
- **input_x** (Tensor) - The Tensor before being exchanged. It has a shape of :math:`(N, C, H, W)`.
Outputs:
The Tensor after being exchanged. If input shape is :math:`(N, C, H, W)`, output shape is
:math:`(N, C, H+recv\_top+recv\_bottom, W+recv\_left+recv\_right)`.
Raises:
TypeError: If `group` is not a string or any one of `send_rank_ids`,
`recv_rank_ids`, `send_lens`, `recv_lens` is not a list.
ValueError: If `send_rank_ids` or `recv_rank_ids` has a value less than -1 or has repeated values.
ValueError: If `send_lens` or `recv_lens` has a value less than 0.
ValueError: If `data_format` is not "NCHW".
Supported Platforms:
``Ascend``

View File

@@ -70,8 +70,10 @@ class ScalarSummary(Primitive):
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> import mindspore
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>>
>>>
>>> class SummaryDemo(nn.Cell):
@@ -178,8 +180,10 @@ class TensorSummary(Primitive):
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> import mindspore
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>>
>>>
>>> class SummaryDemo(nn.Cell):
@@ -227,8 +231,10 @@ class HistogramSummary(PrimitiveWithInfer):
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> import mindspore
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>>
>>>
>>> class SummaryDemo(nn.Cell):
@@ -242,7 +248,9 @@ class HistogramSummary(PrimitiveWithInfer):
... name = "x"
... self.summary(name, x)
... return x
...
>>> summary = SummaryDemo()(Tensor([1, 2]), Tensor([3, 4]))
>>> print(summary)
Tensor(shape=[2], dtype=Int64, value= [3, 5])
"""
@prim_attr_register

View File

@@ -2651,13 +2651,15 @@ class MaxPool3DWithArgmax(Primitive):
class Conv2DTranspose(Conv2DBackpropInput):
"""
Compute a 2D transposed convolution, which is also known as a deconvolution
(although it is not an actual deconvolution).
Calculates a 2D transposed convolution, which can be regarded as Conv2d for the gradient of the input.
It is also called deconvolution, although it is not an actual deconvolution, because it cannot
completely restore the original input data, but it can restore the shape of the original input.
Args:
out_channel (int): The dimensionality of the output space.
kernel_size (Union[int, tuple[int]]): The size of the convolution window.
pad_mode (str): Modes to fill padding. It could be "valid", "same", or "pad". Default: "valid".
Please refer to :class:`mindspore.nn.Conv2DTranspose` for more specifications about `pad_mode`.
pad (Union[int, tuple[int]]): The pad value to be filled. Default: 0. If `pad` is an integer, the paddings of
top, bottom, left and right are the same, equal to pad. If `pad` is a tuple of four integers, the
padding of top, bottom, left and right equal to pad[0], pad[1], pad[2], and pad[3] correspondingly.

View File

@@ -795,7 +795,8 @@ class Multinomial(Primitive):
TypeError: If neither `seed` nor `seed2` is an int.
TypeError: If `x` is not a Tensor whose dtype is float16, float32, float64.
TypeError: If dtype of `num_samples` is not int.
TypeError: If dtype is not int32 or int64.
TypeError: If `dtype` is not int32 or int64.
ValueError: If `seed` or `seed2` is less than 0.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``