change_same_comment_as_torch

yide12 2023-02-06 16:55:54 +08:00
parent 83d047b2d6
commit 4b09b17757
17 changed files with 98 additions and 90 deletions


@ -9,7 +9,7 @@ mindspore.nn.CTCLoss
Args:
- **blank** (int) - The blank label. Default: 0.
- **reduction** (str) - Specifies how the output is computed. Its value must be one of 'none', 'mean' or 'sum'. Default: 'mean'.
- **reduction** (str) - Apply the reduction method to the output. Its value must be one of 'none', 'mean' or 'sum'. Default: 'mean'.
- **zero_infinity** (bool) - Whether to set infinite losses and the associated gradients to zero. Default: False.
Inputs:
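A minimal usage sketch for the CTCLoss interface documented above; the shapes, dtypes, and random data below are illustrative assumptions rather than values from the original page:

import numpy as np
import mindspore
from mindspore import Tensor, nn

# Assumed sizes: T time steps, batch size N, C classes (class 0 is the blank), target length S.
T, N, C, S = 16, 2, 5, 4
log_probs = nn.LogSoftmax(axis=-1)(Tensor(np.random.randn(T, N, C), mindspore.float32))
targets = Tensor(np.random.randint(1, C, (N, S)), mindspore.int32)      # labels drawn from 1..C-1
input_lengths = Tensor(np.full(N, T), mindspore.int32)
target_lengths = Tensor(np.full(N, S), mindspore.int32)

ctc_loss = nn.CTCLoss(blank=0, reduction='mean', zero_infinity=False)
loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)      # a scalar under 'mean' reduction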


@ -3,8 +3,7 @@ mindspore.nn.FractionalMaxPool2d
.. py:class:: mindspore.nn.FractionalMaxPool2d(kernel_size, output_size=None, output_ratio=None, return_indices=False, _random_samples=None)
Applies 2D fractional max pooling to an input composed of multiple input planes. The max pooling operation is applied in :math:`(kH_{in}, kW_{in})` regions with a stochastic step size determined by the output shape. For any input shape, the specified output shape is :math:`(H, W)`. The number of output features equals the number of input planes.
Applying 2D fractional max pooling to an input Tensor can be regarded as composing a 2D plane.
Applies 2D fractional max pooling to the input `input_x`. The shape of the output Tensor can be determined by either `output_size` or `output_ratio`, and the step size is determined by `_random_samples`. `output_size` and `output_ratio` cannot be used at the same time.
Fractional max pooling is described in detail in `Fractional Max-Pooling <https://arxiv.org/pdf/1412.6071>`_ .
@ -12,7 +11,7 @@ mindspore.nn.FractionalMaxPool2d
- **kernel_size** (Union[int, tuple[int]]) - Specifies the size of the pooling kernel. If it is an int, it represents both the height and width of the kernel. If it is a tuple, it must contain two positive integers that represent the height and width of the kernel respectively. The value must be a positive integer.
- **output_size** (Union[int, tuple[int]], optional) - The target output shape. If it is an integer, it represents both the height and width of the target output. If it is a tuple, it must contain two integers that represent the height and width of the target output respectively. Default: None.
- **output_ratio** (Union[float, tuple[float]], optional) - The ratio of the target output shape to the input shape. The output shape is determined by the input shape and `output_ratio`. Supported data types: float16, float32, double. Values must be between 0 and 1. Default: None.
- **return_indices** (bool, optional) - If `True`, returns the indices of the maximum values along with the outputs of fractional max pooling. Default: False.
- **return_indices** (bool, optional) - Whether to return the indices of the maximum values. Default: False.
- **_random_samples** (Tensor, optional) - The random step of fractional max pooling, which is a 3D Tensor. Supported data types: float16, float32, double. Values are between 0 and 1. A Tensor of shape :math:`(N, C, 2)`. Default: None.
Inputs:
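An illustrative sketch of the FractionalMaxPool2d cell described above; the input shape and pooling settings are assumptions chosen for demonstration:

import numpy as np
import mindspore
from mindspore import Tensor, nn

x = Tensor(np.random.randn(1, 3, 16, 16), mindspore.float32)    # assumed (N, C, H_in, W_in) input
pool = nn.FractionalMaxPool2d(kernel_size=2, output_size=(8, 8))
out = pool(x)
print(out.shape)                                                 # expected: (1, 3, 8, 8)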


@ -3,7 +3,7 @@ mindspore.nn.FractionalMaxPool3d
.. py:class:: mindspore.nn.FractionalMaxPool3d(kernel_size, output_size=None, output_ratio=None, return_indices=False, _random_samples=None)
Applies 3D fractional max pooling to an input composed of multiple input planes. The max pooling operation is applied in :math:`(kD_{in}, kH_{in}, kW_{in})` regions with a stochastic step size determined by the output shape. The number of output features equals the number of input planes.
Applies 3D fractional max pooling to the input `input_x`. The shape of the output Tensor can be determined by either `output_size` or `output_ratio`, and the step size is determined by `_random_samples`. `output_size` and `output_ratio` cannot be used at the same time.
Fractional max pooling is described in detail in `Fractional MaxPooling by Ben Graham <https://arxiv.org/abs/1412.6071>`_ .
@ -13,7 +13,7 @@ mindspore.nn.FractionalMaxPool3d
- **kernel_size** (Union[int, tuple[int]]) - Specifies the size of the pooling kernel. If it is an int, it represents the depth, height and width of the kernel. If it is a tuple, it must contain three positive integers that represent the depth, height and width of the kernel respectively. The value must be a positive integer.
- **output_size** (Union[int, tuple[int]], optional) - The target output size. If it is an integer, it represents the depth, height and width of the target output. If it is a tuple, it must contain three integers that represent the depth, height and width of the target output respectively. Default: None.
- **output_ratio** (Union[float, tuple[float]], optional) - The ratio of the target output shape to the input shape. The output shape is determined by the input shape and `output_ratio`. Supported data types: float16, float32, double. Values must be between 0 and 1. Default: None.
- **return_indices** (bool, optional) - If `True`, returns the indices of the maximum values along with the outputs of fractional max pooling. Default: False.
- **return_indices** (bool, optional) - Whether to return the indices of the maximum values. Default: False.
- **_random_samples** (Tensor, optional) - The random step. Supported data types: float16, float32, double. A Tensor of shape :math:`(N, C, 3)`. Values are between 0 and 1. Default: None.
Inputs:
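A similar hedged sketch for the 3D variant; the 5D input shape and the output ratio are assumed values:

import numpy as np
import mindspore
from mindspore import Tensor, nn

x = Tensor(np.random.randn(1, 2, 8, 16, 16), mindspore.float32)   # assumed (N, C, D_in, H_in, W_in) input
pool = nn.FractionalMaxPool3d(kernel_size=2, output_ratio=(0.5, 0.5, 0.5))
out = pool(x)
print(out.shape)                                                   # expected: (1, 2, 4, 8, 8)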


@ -3,12 +3,15 @@ mindspore.nn.PixelShuffle
.. py:class:: mindspore.nn.PixelShuffle(upscale_factor)
Applies the PixelShuffle algorithm to an input composed of multiple input planes. Efficient sub-pixel convolution with a stride of :math:`1/r` is applied on the planes. For a detailed introduction to the PixelShuffle algorithm, refer to `Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network <https://arxiv.org/abs/1609.05158>`_ .
Applies the pixel shuffle operation to `x`, which implements sub-pixel convolution with a stride of :math:`1/r` . For a detailed introduction to the PixelShuffle algorithm, refer to `Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network <https://arxiv.org/abs/1609.05158>`_ .
Typically, the input has shape :math:`(*, C \times r^2, H, W)` and the output has shape :math:`(*, C, H \times r, W \times r)` , where `r` is an upscale factor and `*` is zero or more batch dimensions.
.. note::
On Ascend, the dimension of the input Tensor must be less than 7.
Args:
- **upscale_factor** (int) - The factor by which to increase the spatial resolution, which is a positive integer.
- **upscale_factor** (int) - The factor used to shuffle the input Tensor, which is a positive integer. `upscale_factor` is the :math:`r` mentioned above.
Inputs:
- **x** (Tensor) - A Tensor of shape :math:`(*, C \times r^2, H, W)` . The dimension of the input Tensor must be greater than 2, and the length of the third-to-last dimension must be divisible by the square of `upscale_factor`.
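A short sketch of the PixelShuffle cell with an assumed input whose channel dimension equals :math:`C \times r^2 = 9` for :math:`r = 3`:

import numpy as np
import mindspore
from mindspore import Tensor, nn

x = Tensor(np.zeros((1, 9, 4, 4)), mindspore.float32)   # C * r^2 = 9, so C = 1 when r = 3
pixel_shuffle = nn.PixelShuffle(3)
print(pixel_shuffle(x).shape)                           # (1, 1, 12, 12)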


@ -3,12 +3,12 @@ mindspore.nn.PixelUnshuffle
.. py:class:: mindspore.nn.PixelUnshuffle(downscale_factor)
Applies the PixelUnshuffle algorithm to an input composed of multiple input planes. For a detailed introduction to the PixelUnshuffle algorithm, refer to `Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network <https://arxiv.org/abs/1609.05158>`_ .
Applies the pixel unshuffle operation to `x`, which is the inverse of the pixel shuffle operation. For a detailed introduction to the PixelUnshuffle algorithm, refer to `Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network <https://arxiv.org/abs/1609.05158>`_ .
Typically, the input has shape :math:`(*, C, H \times r, W \times r)` and the output has shape :math:`(*, C \times r^2, H, W)` , where `r` is a downscale factor and `*` is zero or more batch dimensions.
Args:
- **downscale_factor** (int) - The factor by which to decrease the spatial resolution, which is a positive integer.
- **downscale_factor** (int) - The factor used to unshuffle the input Tensor, which is a positive integer. `downscale_factor` is the :math:`r` mentioned above.
Inputs:
- **x** (Tensor) - A Tensor of shape :math:`(*, C, H \times r, W \times r)` . The dimension of the input Tensor must be greater than 2, and the lengths of the last and second-to-last dimensions must be divisible by `downscale_factor`.
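A short sketch of the PixelUnshuffle cell on an assumed input whose spatial sizes are divisible by :math:`r = 3`:

import numpy as np
import mindspore
from mindspore import Tensor, nn

x = Tensor(np.zeros((1, 1, 12, 12)), mindspore.float32)   # H and W are both divisible by r = 3
pixel_unshuffle = nn.PixelUnshuffle(3)
print(pixel_unshuffle(x).shape)                           # (1, 9, 4, 4)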


@ -3,18 +3,18 @@ mindspore.ops.diagflat
.. py:function:: mindspore.ops.diagflat(x, offset=0)
Creates a 2-D Tensor whose diagonal is the flattened input.
Creates a 2-D Tensor whose diagonal is the flattened `x` .
Args:
- **x** (Tensor) - The input Tensor, which is flattened and set as the diagonal of the output.
- **offset** (int, optional) - `offset` controls which diagonal to choose. Default: 0.
- When `offset` is 0, the chosen diagonal is the main diagonal.
- When `offset` is greater than 0, the chosen diagonal is above the main diagonal.
- When `offset` is less than 0, the chosen diagonal is below the main diagonal.
- When `offset` is a positive integer, the chosen diagonal is above the main diagonal.
- When `offset` is a negative integer, the chosen diagonal is below the main diagonal.
Returns:
A 2-D Tensor.
A 2-D Tensor whose diagonal is the flattened `x` .
Raises:
- **TypeError** - `x` is not a Tensor.
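A small sketch showing how `offset` shifts the diagonal; the input values are arbitrary assumptions:

import numpy as np
import mindspore
from mindspore import Tensor, ops

x = Tensor(np.array([[1, 2], [3, 4]]), mindspore.float32)
print(ops.diagflat(x).shape)              # (4, 4): the flattened x sits on the main diagonal
print(ops.diagflat(x, offset=1).shape)    # (5, 5): the flattened x sits above the main diagonal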


@ -3,8 +3,7 @@ mindspore.ops.fractional_max_pool2d
.. py:function:: mindspore.ops.fractional_max_pool2d(input_x, kernel_size, output_size=None, output_ratio=None, return_indices=False, _random_samples=None)
Applies 2D fractional max pooling to an input composed of multiple input planes. The max pooling operation is applied in :math:`(kH_{in}, kW_{in})` regions with a stochastic step size determined by the output shape. For any input shape, the specified output shape is :math:`(H, W)`. The number of output features equals the number of input planes.
Applying 2D fractional max pooling to an input Tensor can be regarded as composing a 2D plane.
Applies 2D fractional max pooling to the input `input_x`. The shape of the output Tensor can be determined by either `output_size` or `output_ratio`, and the step size is randomized by `_random_samples`. `output_size` and `output_ratio` cannot be used at the same time.
Fractional max pooling is described in detail in `Fractional Max-Pooling <https://arxiv.org/pdf/1412.6071>`_ .
@ -13,7 +12,7 @@ mindspore.ops.fractional_max_pool2d
- **kernel_size** (Union[int, tuple[int]]) - Specifies the size of the pooling kernel. If it is an int, it represents both the height and width of the kernel. If it is a tuple, it must contain two positive int values that represent the height and width of the kernel respectively. The value must be a positive int.
- **output_size** (Union[int, tuple[int]], optional) - The target output shape. If it is an int, it represents both the height and width of the target output. If it is a tuple, it must contain two int values that represent the height and width of the target output respectively. Default: None.
- **output_ratio** (Union[float, tuple[float]], optional) - The ratio of the target output shape to the input shape. The output shape is determined by the input shape and `output_ratio`. Supported data types: float16, float32, double. Values are in the range (0, 1). Default: None.
- **return_indices** (bool, optional) - If `True`, returns the indices of the maximum values along with the outputs of fractional max pooling. Default: False.
- **return_indices** (bool, optional) - Whether to return the indices of the maximum values. Default: False.
- **_random_samples** (Tensor, optional) - The random step of fractional max pooling, which is a 3D Tensor. Supported data types: float16, float32, double. Values are in the range (0, 1). A Tensor of shape :math:`(N, C, 2)`. Default: None.
Returns:
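An illustrative call of the functional interface above; the input shape and `output_ratio` are assumptions for demonstration:

import numpy as np
import mindspore
from mindspore import Tensor, ops

x = Tensor(np.random.randn(1, 3, 16, 16), mindspore.float32)         # assumed (N, C, H_in, W_in) input
out = ops.fractional_max_pool2d(x, kernel_size=2, output_ratio=(0.5, 0.5))
print(out.shape)                                                      # expected: (1, 3, 8, 8)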


@ -3,7 +3,7 @@ mindspore.ops.fractional_max_pool3d
.. py:function:: mindspore.ops.fractional_max_pool3d(input_x, kernel_size, output_size=None, output_ratio=None, return_indices=False, _random_samples=None)
Applies 3D fractional max pooling to an input composed of multiple input planes. The max pooling operation is applied in :math:`(kD_{in}, kH_{in}, kW_{in})` regions with a stochastic step size determined by the output shape. The number of output features equals the number of input planes.
Applies 3D fractional max pooling to the input `input_x`. The shape of the output Tensor can be determined by either `output_size` or `output_ratio`, and the step size is randomized by `_random_samples`. `output_size` and `output_ratio` cannot be used at the same time.
Fractional max pooling is described in detail in `Fractional MaxPooling by Ben Graham <https://arxiv.org/abs/1412.6071>`_ .
@ -14,7 +14,7 @@ mindspore.ops.fractional_max_pool3d
- **kernel_size** (Union[int, tuple[int]]) - Specifies the size of the pooling kernel. If it is an int, it represents the depth, height and width of the kernel. If it is a tuple, it must contain three positive int values that represent the depth, height and width of the kernel respectively. The value must be a positive int.
- **output_size** (Union[int, tuple[int]], optional) - The target output size. If it is an int, it represents the depth, height and width of the target output. If it is a tuple, it must contain three int values that represent the depth, height and width of the target output respectively. Default: None.
- **output_ratio** (Union[float, tuple[float]], optional) - The ratio of the target output shape to the input shape. The output shape is determined by the input shape and `output_ratio`. Supported data types: float16, float32, double. Values are in the range (0, 1). Default: None.
- **return_indices** (bool, optional) - If `True`, returns the indices of the maximum values along with the outputs of fractional max pooling. Default: False.
- **return_indices** (bool, optional) - Whether to return the indices of the maximum values. Default: False.
- **_random_samples** (Tensor, optional) - The random step. Supported data types: float16, float32, double. A Tensor of shape :math:`(N, C, 3)`. Values are in the range (0, 1). Default: None.
Returns:
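An illustrative call of the 3D functional interface; the shapes here are assumed:

import numpy as np
import mindspore
from mindspore import Tensor, ops

x = Tensor(np.random.randn(1, 2, 8, 8, 8), mindspore.float32)        # assumed (N, C, D_in, H_in, W_in) input
out = ops.fractional_max_pool3d(x, kernel_size=2, output_size=(4, 4, 4))
print(out.shape)                                                      # expected: (1, 2, 4, 4, 4)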


@ -3,13 +3,16 @@ mindspore.ops.pixel_shuffle
.. py:function:: mindspore.ops.pixel_shuffle(x, upscale_factor)
Applies the pixel_shuffle algorithm to an input composed of multiple input planes. Efficient sub-pixel convolution with a stride of :math:`1/r` is applied on the planes. For a detailed introduction to the pixel_shuffle algorithm, refer to `Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network <https://arxiv.org/abs/1609.05158>`_ .
Applies the pixel shuffle operation to the input `x`, which implements sub-pixel convolution with a stride of :math:`1/r` . For a detailed introduction to the pixel_shuffle algorithm, refer to `Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network <https://arxiv.org/abs/1609.05158>`_ .
Typically, `x` has shape :math:`(*, C \times r^2, H, W)` and the output has shape :math:`(*, C, H \times r, W \times r)` , where `r` is an upscale factor and `*` is zero or more batch dimensions.
.. note::
On Ascend, the dimension of the input Tensor must be less than 7.
Args:
- **x** (Tensor) - A Tensor of shape :math:`(*, C \times r^2, H, W)` . The dimension of `x` must be greater than 2, and the length of the third-to-last dimension must be divisible by the square of `upscale_factor`.
- **upscale_factor** (int) - The factor by which to increase the spatial resolution, which is a positive integer.
- **upscale_factor** (int) - The factor used to shuffle the input Tensor, which is a positive integer. `upscale_factor` is the :math:`r` mentioned above.
Returns:
- **output** (Tensor) - A Tensor of shape :math:`(*, C, H \times r, W \times r)` .
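A quick sketch of the functional pixel shuffle with an assumed :math:`r = 2` input:

import numpy as np
import mindspore
from mindspore import Tensor, ops

x = Tensor(np.arange(16).reshape(1, 4, 2, 2), mindspore.float32)   # C * r^2 = 4 with r = 2
print(ops.pixel_shuffle(x, 2).shape)                               # (1, 1, 4, 4)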


@ -3,13 +3,13 @@ mindspore.ops.pixel_unshuffle
.. py:function:: mindspore.ops.pixel_unshuffle(x, downscale_factor)
Applies the pixel_unshuffle algorithm to an input composed of multiple input planes. For a detailed introduction to the pixel_unshuffle algorithm, refer to `Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network <https://arxiv.org/abs/1609.05158>`_ .
Applies the pixel unshuffle operation to the input `x`, which is the inverse of the pixel shuffle operation. For a detailed introduction to the pixel_unshuffle algorithm, refer to `Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network <https://arxiv.org/abs/1609.05158>`_ .
Typically, `x` has shape :math:`(*, C, H \times r, W \times r)` and the output has shape :math:`(*, C \times r^2, H, W)` , where `r` is a downscale factor and `*` is zero or more batch dimensions.
Args:
- **x** (Tensor) - A Tensor of shape :math:`(*, C, H \times r, W \times r)` . The dimension of `x` must be greater than 2, and the lengths of the last and second-to-last dimensions must be divisible by `downscale_factor`.
- **downscale_factor** (int) - The factor by which to decrease the spatial resolution, which is a positive integer.
- **downscale_factor** (int) - The factor used to unshuffle the input Tensor, which is a positive integer. `downscale_factor` is the :math:`r` mentioned above.
Returns:
- **output** (Tensor) - A Tensor of shape :math:`(*, C \times r^2, H, W)` .
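A quick sketch of the functional pixel unshuffle, the inverse transformation of the sketch above:

import numpy as np
import mindspore
from mindspore import Tensor, ops

x = Tensor(np.arange(16).reshape(1, 1, 4, 4), mindspore.float32)   # H and W are divisible by r = 2
print(ops.pixel_unshuffle(x, 2).shape)                             # (1, 4, 2, 2)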


@ -3,7 +3,7 @@ mindspore.ops.threshold
.. py:function:: mindspore.ops.threshold(input_x, thr, value)
The threshold activation function computes the output element-wise.
Returns each element of `input_x` after thresholding by `thr` as a Tensor.
The threshold is defined as:


@ -532,16 +532,20 @@ class CentralCrop(Cell):
class PixelShuffle(Cell):
r"""
Applies a pixelshuffle operation over an input signal composed of several input planes. This is useful for
implementing efficient sub-pixel convolution with a stride of :math:`1/r`. For more details, refer to
Applies the PixelShuffle operation over input `x`, which implements sub-pixel convolutions
with stride :math:`1/r` . For more details, refer to
`Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network
<https://arxiv.org/abs/1609.05158>`_ .
Typically, the input is of shape :math:`(*, C \times r^2, H, W)` , and the output is of shape
:math:`(*, C, H \times r, W \times r)`, where r is an upscale factor and * is zero or more batch dimensions.
Note:
The dimension of input Tensor on Ascend should be less than 7.
Args:
upscale_factor (int): factor to increase spatial resolution by, and is a positive integer.
upscale_factor (int): factor to shuffle the input, which is a positive integer.
`upscale_factor` is the above-mentioned :math:`r`.
Inputs:
- **x** (Tensor) - Tensor of shape :math:`(*, C \times r^2, H, W)` . The dimension of `x` is larger than 2, and
@ -559,12 +563,12 @@ class PixelShuffle(Cell):
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> input_x = np.arange(3 * 2 * 9 * 4 * 4).reshape((3, 2, 9, 4, 4))
>>> input_x = np.arange(3 * 2 * 8 * 4 * 4).reshape((3, 2, 8, 4, 4))
>>> input_x = mindspore.Tensor(input_x, mindspore.dtype.int32)
>>> pixel_shuffle = nn.PixelShuffle(3)
>>> pixel_shuffle = nn.PixelShuffle(2)
>>> output = pixel_shuffle(input_x)
>>> print(output.shape)
(3, 2, 1, 12, 12)
(3, 2, 2, 8, 8)
"""
def __init__(self, upscale_factor):
super(PixelShuffle, self).__init__()
@ -576,15 +580,16 @@ class PixelShuffle(Cell):
class PixelUnshuffle(Cell):
r"""
Applies a pixelunshuffle operation over an input signal composed of several input planes. For more details, refer to
`Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network
Applies the PixelUnshuffle operation over input `x`, which is the inverse of PixelShuffle. For more details, refer
to `Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network
<https://arxiv.org/abs/1609.05158>`_ .
Typically, the input is of shape :math:`(*, C, H \times r, W \times r)` , and the output is of shape
:math:`(*, C \times r^2, H, W)` , where r is a downscale factor and * is zero or more batch dimensions.
Args:
downscale_factor (int): factor to decrease spatial resolution by, and is a positive integer.
downscale_factor (int): factor to unshuffle the input, which is a positive integer.
`downscale_factor` is the above-mentioned :math:`r`.
Inputs:
- **x** (Tensor) - Tensor of shape :math:`(*, C, H \times r, W \times r)` . The dimension of `x` is larger than
@ -602,12 +607,12 @@ class PixelUnshuffle(Cell):
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> pixel_unshuffle = nn.PixelUnshuffle(3)
>>> input_x = np.arange(12 * 12).reshape((1, 1, 12, 12))
>>> pixel_unshuffle = nn.PixelUnshuffle(2)
>>> input_x = np.arange(8 * 8).reshape((1, 1, 8, 8))
>>> input_x = mindspore.Tensor(input_x, mindspore.dtype.int32)
>>> output = pixel_unshuffle(input_x)
>>> print(output.shape)
(1, 9, 4, 4)
(1, 4, 4, 4)
"""
def __init__(self, downscale_factor):
super(PixelUnshuffle, self).__init__()


@ -1190,12 +1190,11 @@ class AdaptiveMaxPool3d(Cell):
class FractionalMaxPool2d(Cell):
r"""
Applies a 2D fractional max pooling to an input signal composed of multiple input planes.
The max-pooling operation is applied in kH × kW regions by a stochastic step size determined by
the target output size. For any input size, the size of the specified output is H x W. The number
of output features is equal to the number of input planes.
Applies the 2D FractionalMaxPool operation over `input_x`. The output Tensor shape can be determined by either
`output_size` or `output_ratio`, and the step size is determined by `_random_samples`.
`output_size` or `output_ratio` cannot be used at the same time.
Fractional MaxPooling is described in the paper `Fractional Max-Pooling <https://arxiv.org/pdf/1412.6071>`_.
Refer to the paper `Fractional MaxPooling by Ben Graham <https://arxiv.org/abs/1412.6071>`_ for more details.
Args:
kernel_size (Union[int, tuple[int]]): The size of kernel used to take the maximum value,
@ -1211,8 +1210,7 @@ class FractionalMaxPool2d(Cell):
Specifying the size of the output tensor by using a ratio of the input size.
Data type : float16, float32, double, and value is between (0, 1).
Default: None.
return_indices (bool, optional): If `return_indices` is True, the indices of max value would be output.
Default: False.
return_indices (bool, optional): Whether to return the indices of max value. Default: False.
_random_samples (Tensor, optional): The random step of FractionalMaxPool2d, which is a 3D tensor.
Tensor of data type : float16, float32, double, and value is between (0, 1).
Supported shape :math:`(N, C, 2)`.
@ -1293,9 +1291,9 @@ class FractionalMaxPool2d(Cell):
class FractionalMaxPool3d(Cell):
r"""
This operator applies a 3D fractional max pooling over an input signal composed of several input planes.
The max-pooling operation is applied in kD x kH x kW regions by a stochastic step size determined
by the target output size. The number of output features is equal to the number of input planes.
Applies the 3D FractionalMaxPool operation over `input_x`. The output Tensor shape can be determined by either
`output_size` or `output_ratio`, and the step size is determined by `_random_samples`.
`output_size` or `output_ratio` cannot be used at the same time.
Refer to the paper `Fractional MaxPooling by Ben Graham <https://arxiv.org/abs/1412.6071>`_ for more details.
@ -1316,8 +1314,7 @@ class FractionalMaxPool3d(Cell):
Specifying the size of the output tensor by using a ratio of the input size.
Data type : float16, float32, double, and value is between (0, 1).
Default: None.
return_indices (bool, optional): If `return_indices` is True, the indices of max value would be output.
Default: False.
return_indices (bool, optional): Whether to return the indices of max value. Default: False.
_random_samples (Tensor, optional): The random step of FractionalMaxPool3d, which is a 3D tensor.
Tensor of data type : float16, float32, double, and value is between (0, 1).
Supported shape :math:`(N, C, 3)`


@ -2302,13 +2302,13 @@ class CTCLoss(LossBase):
Recurrent Neural Networks <http://www.cs.toronto.edu/~graves/icml_2006.pdf>`_ .
Args:
blank (int): The blank label. Default: 0.
reduction (str): Apply specific reduction method to the output: 'none', 'mean', or 'sum'. Default: 'mean'.
zero_infinity (bool): Whether to set infinite loss and correlation gradient to zero. Default: False.
blank (int): The blank tag. Default: 0.
reduction (str): Specifies the reduction method applied to the output: 'none', 'mean', or 'sum'. Default: 'mean'.
zero_infinity (bool): Whether to set infinite losses and the associated gradients to zero. Default: False.
Inputs:
- **log_probs** (Tensor) - A tensor of shape (T, N, C) or (T, C), where T is input length, N is batch size and
C is number of classes (including blank). T, N and C are positive integers.
- **log_probs** (Tensor) - A tensor of shape (T, N, C) or (T, C), where T is the length of the input,
N is the batch size and C is the number of classes. T, N and C are positive integers.
- **targets** (Tensor) - A tensor of shape (N, S) or (sum( `target_lengths` )), where S is the max target length.
It means the target sequences.
- **input_lengths** (Union[tuple, Tensor]) - A tuple or Tensor of shape (N). It means the lengths of the input.


@ -502,7 +502,7 @@ def ravel(x):
>>> x = Tensor(np.array([[0, 1], [2, 1]]).astype(np.float32))
>>> output = ops.ravel(x)
>>> print(output)
[0. 1. 2. 1]
[0. 1. 2. 1.]
>>> print(output.shape)
(4,)
"""
@ -4887,18 +4887,18 @@ def diag(input_x):
def diagflat(x, offset=0):
r"""
Creates a two-dimensional Tensor with the flattened input as a diagonal.
Creates a 2-D Tensor whose diagonal is the flattened `x` .
Args:
x (Tensor): Input Tensor, which is flattened and set as the diagonal of the output.
offset (int, optional): `offset` controls which diagonal to consider. Default: 0.
offset (int, optional): `offset` controls which diagonal to choose. Default: 0.
- When `offset` is zero, the diagonal chosen is the main diagonal.
- When `offset` is greater than zero, the diagonal chosen is above the main diagonal.
- When `offset` is less than zero, the diagonal chosen is below the main diagonal.
- When `offset` is a positive integer, the diagonal chosen is above the main diagonal.
- When `offset` is a negative integer, the diagonal chosen is below the main diagonal.
Returns:
The 2-D Tensor.
The 2-D Tensor, whose diagonal is the flattened `x`.
Raises:
TypeError: If `x` is not a tensor.


@ -9717,9 +9717,10 @@ def isposinf(x):
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> output = ops.isposinf(Tensor([-float("inf"), float("inf"), 1.2], mstype.float32))
>>> output = ops.isposinf(Tensor([[-float("inf"), float("inf")], [1, float("inf")]], mstype.float32))
>>> print(output)
[False True False]
[[False True]
[False True]]
"""
if not isinstance(x, (Tensor, Tensor_)):
raise TypeError(f"For isposinf, the input x must be a Tensor, but got {type(x)}")
@ -9743,9 +9744,10 @@ def isneginf(x):
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> output = ops.isneginf(Tensor([-float("inf"), float("inf"), 1.2], mstype.float32))
>>> output = ops.isneginf(Tensor([[-float("inf"), float("inf")], [1, -float("inf")]], mstype.float32))
>>> print(output)
[ True False False]
[[ True False]
[False True]]
"""
if not isinstance(x, (Tensor, Tensor_)):
raise TypeError(f"For isneginf, the input x must be a Tensor, but got {type(x)}")


@ -1455,13 +1455,11 @@ def _check_float_range_inc_right(arg_value, lower_limit, upper_limit, arg_name=N
def fractional_max_pool2d(input_x, kernel_size, output_size=None, output_ratio=None, return_indices=False,
_random_samples=None):
r"""
Applies a 2D fractional max pooling to an input signal.
The input is composed of multiple input planes.
The max-pooling operation is applied in kH × kW regions by a stochastic step size determined by
the target output size. For any input size, the size of the specified output is H x W. The number
of output features is equal to the number of input planes.
Applies the 2D FractionalMaxPool operation over `input_x`. The output Tensor shape can be determined by either
`output_size` or `output_ratio`, and the step size is determined by `_random_samples`.
`output_size` or `output_ratio` cannot be used at the same time.
Fractional MaxPooling is described in the paper `Fractional Max-Pooling <https://arxiv.org/pdf/1412.6071>`_.
Refer to the paper `Fractional MaxPooling by Ben Graham <https://arxiv.org/abs/1412.6071>`_ for more details.
Args:
input_x (Tensor): Tensor of shape :math:`(N, C, H_{in}, W_{in})`,
@ -1479,8 +1477,7 @@ def fractional_max_pool2d(input_x, kernel_size, output_size=None, output_ratio=N
Specifying the size of the output tensor by using a ratio of the input size.
Data type: float16, float32, double, and value is between (0, 1).
Default: None.
return_indices (bool, optional): If `return_indices` is True, the indices of max value would be output.
Default: False.
return_indices (bool, optional): Whether to return the indices of max value. Default: False.
_random_samples (Tensor, optional): The random step of FractionalMaxPool2d, which is a 3D tensor.
Tensor of data type: float16, float32, double, and value is between (0, 1).
Supported shape :math:`(N, C, 2)`.
@ -1556,10 +1553,9 @@ def fractional_max_pool2d(input_x, kernel_size, output_size=None, output_ratio=N
def fractional_max_pool3d(input_x, kernel_size, output_size=None, output_ratio=None, return_indices=False,
_random_samples=None):
r"""
This operator applies a 3D fractional max pooling over an input signal.
The input is composed of several input planes.
The max-pooling operation is applied in kD x kH x kW regions by a stochastic step size determined
by the target output size. The number of output features is equal to the number of input planes.
Applies the 3D FractionalMaxPool operation over `input_x`. The output Tensor shape can be determined by either
`output_size` or `output_ratio`, and the step size is determined by `_random_samples`.
`output_size` or `output_ratio` cannot be used at the same time.
Refer to the paper `Fractional MaxPooling by Ben Graham <https://arxiv.org/abs/1412.6071>`_ for more details.
@ -1583,8 +1579,7 @@ def fractional_max_pool3d(input_x, kernel_size, output_size=None, output_ratio=N
Specifying the size of the output tensor by using a ratio of the input size.
Data type: float16, float32, double, and value is between (0, 1).
Default: None.
return_indices (bool, optional): If `return_indices` is True, the indices of max value would be output.
Default: False.
return_indices (bool, optional): Whether to return the indices of max value. Default: False.
_random_samples (Tensor, optional): The random step of FractionalMaxPool3d, which is a 3D tensor.
Tensor of data type: float16, float32, double, and value is between (0, 1).
Supported shape :math:`(N, C, 3)`.
@ -3414,7 +3409,7 @@ def smooth_l1_loss(logits, labels, beta=1.0, reduction='none'):
def threshold(input_x, thr, value):
r"""
thresholds each element of the input Tensor.
Returns each element of `input_x` after thresholding by `thr` as a Tensor.
The formula is defined as follows:
@ -3427,7 +3422,7 @@ def threshold(input_x, thr, value):
Args:
input_x (Tensor): The input of threshold with data type of float16 or float32.
thr (Union[int, float]): The value to threshold at.
thr (Union[int, float]): The value of the threshold.
value (Union[int, float]): The value to replace with when the element is less than the threshold.
Returns:
@ -3442,10 +3437,10 @@ def threshold(input_x, thr, value):
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> inputs = mindspore.Tensor([0.0, 0.2, 0.3], mindspore.float32)
>>> outputs = ops.threshold(inputs, 0.1, 20)
>>> inputs = mindspore.Tensor([0.0, 2, 3], mindspore.float32)
>>> outputs = ops.threshold(inputs, 1, 100)
>>> print(outputs)
[ 20.0 0.2 0.3]
[100. 2. 3.]
"""
_check_is_tensor('input_x', input_x, "threshold")
_check_value_type("thr", thr, [float, int], "threshold")
@ -4984,18 +4979,22 @@ def _check_positive_int(arg_value, arg_name=None, prim_name=None):
def pixel_shuffle(x, upscale_factor):
r"""
Applies a pixel_shuffle operation over an input signal composed of several input planes. This is useful for
implementing efficient sub-pixel convolution with a stride of :math:`1/r`. For more details, refer to
Applies the PixelShuffle operation over input `x`, which implements sub-pixel convolutions
with stride :math:`1/r` . For more details, refer to
`Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network
<https://arxiv.org/abs/1609.05158>`_ .
Typically, the `x` is of shape :math:`(*, C \times r^2, H, W)` , and the output is of shape
:math:`(*, C, H \times r, W \times r)`, where `r` is an upscale factor and `*` is zero or more batch dimensions.
Note:
The dimension of input Tensor on Ascend should be less than 7.
Args:
x (Tensor): Tensor of shape :math:`(*, C \times r^2, H, W)` . The dimension of `x` is larger than 2, and the
length of third to last dimension can be divisible by `upscale_factor` squared.
upscale_factor (int): factor to increase spatial resolution by, and is a positive integer.
upscale_factor (int): factor to shuffle the input Tensor, which is a positive integer.
`upscale_factor` is the above-mentioned :math:`r`.
Returns:
- **output** (Tensor) - Tensor of shape :math:`(*, C, H \times r, W \times r)` .
@ -5040,7 +5039,7 @@ def pixel_shuffle(x, upscale_factor):
def pixel_unshuffle(x, downscale_factor):
r"""
Applies a pixel_unshuffle operation over an input signal composed of several input planes. For more details, refer
Applies the PixelUnshuffle operation over input `x`, which is the inverse of PixelShuffle. For more details, refer
to `Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network
<https://arxiv.org/abs/1609.05158>`_ .
@ -5050,7 +5049,8 @@ def pixel_unshuffle(x, downscale_factor):
Args:
x (Tensor): Tensor of shape :math:`(*, C, H \times r, W \times r)` . The dimension of `x` is larger than 2,
and the length of second to last dimension or last dimension can be divisible by `downscale_factor` .
downscale_factor (int): factor to decrease spatial resolution by, and is a positive integer.
downscale_factor (int): factor to unshuffle the input Tensor, which is a positive integer.
`downscale_factor` is the above-mentioned :math:`r`.
Returns:
- **output** (Tensor) - Tensor of shape :math:`(*, C \times r^2, H, W)` .
@ -5064,11 +5064,11 @@ def pixel_unshuffle(x, downscale_factor):
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> input_x = np.arange(12 * 12).reshape((1, 1, 12, 12))
>>> input_x = np.arange(8 * 8).reshape((1, 1, 8, 8))
>>> input_x = mindspore.Tensor(input_x, mindspore.dtype.int32)
>>> output = ops.pixel_unshuffle(input_x, 3)
>>> output = ops.pixel_unshuffle(input_x, 2)
>>> print(output.shape)
(1, 9, 4, 4)
(1, 4, 4, 4)
"""
_check_positive_int(downscale_factor, "downscale_factor")
idx = x.shape