refactor adaptive_max_pool1d to use 2d

fandawei 2023-02-17 11:32:08 +08:00
parent 7cf3697fa6
commit c87cdd2e13
14 changed files with 485 additions and 313 deletions


@@ -15,16 +15,15 @@ mindspore.nn.AdaptiveAvgPool1d
- **output_size** (int) - The target output size :math:`L_{out}`.
Inputs:
- **x** (Tensor) - Tensor of shape :math:`(N, C_{in}, L_{in})`, with float16 or float32 data type.
- **input** (Tensor) - Tensor of shape :math:`(N, C_{in}, L_{in})`, with float16 or float32 data type.
Outputs:
Tensor of shape :math:`(N, C_{in}, L_{out})`, with the same data type as `x`.
Tensor of shape :math:`(N, C_{in}, L_{out})`, with the same data type as `input`.
Raises:
- **TypeError** - `output_size` is not an int.
- **TypeError** - `x` is neither float16 nor float32.
- **TypeError** - `input` is neither float16 nor float32.
- **ValueError** - `output_size` is less than 1.
- **ValueError** - The length of the shape of `x` is not equal to 3.
- **ValueError** - The last dimension of `x` is smaller than `output_size`.
- **ValueError** - The last dimension of `x` is not divisible by `output_size`.
- **ValueError** - The length of the shape of `input` is not equal to 3.
- **ValueError** - The last dimension of `input` is smaller than `output_size`.
- **ValueError** - The last dimension of `input` is not divisible by `output_size`.
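For orientation, a minimal usage sketch consistent with the nn.AdaptiveAvgPool1d docstring example later in this commit (assumes a standard MindSpore install; L_in = 6 is divisible by output_size = 2, as required above):

import numpy as np
import mindspore
from mindspore import Tensor, nn

pool = nn.AdaptiveAvgPool1d(output_size=2)
input = Tensor(np.arange(18).reshape(1, 3, 6), mindspore.float32)
# Each of the 2 outputs per channel averages a window of L_in / output_size = 3 elements.
print(pool(input).shape)  # (1, 3, 2)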


@@ -21,13 +21,13 @@ mindspore.nn.AdaptiveAvgPool2d
- **output_size** (Union[int, tuple]) - The target output size is H x W. It can be a tuple of two ints H and W, a single int for the same H and W, or None, in which case the output size is the same as the input size.
Inputs:
- **x** (Tensor) - The input of AdaptiveAvgPool2d, which is a 3D or 4D Tensor, with float16, float32 or float64 data type.
- **input** (Tensor) - The input of AdaptiveAvgPool2d, which is a 3D or 4D Tensor, with float16, float32 or float64 data type.
Outputs:
Tensor, of shape :math:`(N, C_{out}, H_{out}, W_{out})`.
Raises:
- **ValueError** - `output_size` is a tuple and its length is not 2.
- **TypeError** - `x` is not a Tensor.
- **TypeError** - The dtype of `x` is not float16, float32 or float64.
- **ValueError** - The dimension of `x` is less than or equal to the dimension of `output_size`.
- **TypeError** - `input` is not a Tensor.
- **TypeError** - The dtype of `input` is not float16, float32 or float64.
- **ValueError** - The dimension of `input` is less than or equal to the dimension of `output_size`.


@@ -5,13 +5,13 @@ mindspore.nn.AdaptiveAvgPool3d
Applies a 3D adaptive average pooling over an input Tensor. That is, for any input size, the size of the specified output is :math:`(D, H, W)`, while the number of input and output features stays the same.
Suppose the last three dimensions of the input `x` have sizes :math:`(inD, inH, inW)`; then the last three dimensions of the output have sizes :math:`(outD, outH, outW)`. The operation is as follows:
Suppose the last three dimensions of the input `input` have sizes :math:`(inD, inH, inW)`; then the last three dimensions of the output have sizes :math:`(outD, outH, outW)`. The operation is as follows:
.. math::
\begin{array}{ll} \\
\forall \quad od \in [0,outD-1], oh \in [0,outH-1], ow \in [0,outW-1]\\
output[od,oh,ow] = \\
\qquad mean(x[istartD:iendD+1,istartH:iendH+1,istartW:iendW+1])\\
\qquad mean(input[istartD:iendD+1,istartH:iendH+1,istartW:iendW+1])\\
where,\\
\qquad istartD= \left\lceil \frac{od * inD}{outD} \right\rceil \\
\qquad iendD=\left\lfloor \frac{(od+1)* inD}{outD} \right\rfloor \\
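To make the pooling concrete, a minimal shape check in the spirit of the adaptive_avg_pool3d examples later in this diff (assumes a standard MindSpore install):

import numpy as np
import mindspore
from mindspore import Tensor, nn

# 5D input (N, C, D, H, W); output_size=(3, 3, 4) fixes the last three dims.
pool = nn.AdaptiveAvgPool3d(output_size=(3, 3, 4))
input = Tensor(np.random.randn(4, 3, 5, 6, 7), mindspore.float32)
print(pool(input).shape)  # (4, 3, 3, 3, 4)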
@@ -25,13 +25,13 @@ mindspore.nn.AdaptiveAvgPool3d
- **output_size** (Union[int, tuple]) - The target output size. It can be a tuple :math:`(D, H, W)`, or an int D for :math:`(D, D, D)`. :math:`D`, :math:`H` and :math:`W` can be an int or None, where None means the output size is the same as that of the corresponding input dimension.
Inputs:
- **x** (Tensor) - The input of AdaptiveAvgPool3d, which is a 4D or 5D Tensor, with float16, float32 or float64 data type.
- **input** (Tensor) - The input of AdaptiveAvgPool3d, which is a 4D or 5D Tensor, with float16, float32 or float64 data type.
Outputs:
Tensor, with the same data type as `x`.
Tensor, with the same data type as `input`.
Raises:
- **TypeError** - `x` is not a Tensor.
- **TypeError** - The dtype of `x` is not float16, float32 or float64.
- **ValueError** - The dimension of `x` is not 4D or 5D.
- **TypeError** - `input` is not a Tensor.
- **TypeError** - The dtype of `input` is not float16, float32 or float64.
- **ValueError** - The dimension of `input` is not 4D or 5D.
- **ValueError** - The value of `output_size` is not positive.


@@ -5,25 +5,26 @@ mindspore.nn.AdaptiveMaxPool1d
Applies a 1D adaptive max pooling over an input Tensor which can be regarded as a composition of 1D input planes.
Typically, the input is of shape :math:`(N_{in}, C_{in}, L_{in})`; AdaptiveMaxPool1d computes the regional maximum along the :math:`L_{in}` dimension,
and the output is of shape :math:`(N_{in}, C_{in}, L_{out})`, where :math:`L_{out}` is `output_size`.
Typically, the input is of shape :math:`(N_{in}, C_{in}, L_{in})` or :math:`(C_{in}, L_{in})`;
the output is of shape :math:`(N_{in}, C_{in}, L_{out})` or :math:`(C_{in}, L_{out})`, where :math:`L_{out}` is defined by `output_size`.
.. note::
:math:`L_{in}` must be divisible by `output_size`.
The Ascend platform does not support the `return_indices` parameter.
Parameters:
- **output_size** (int) - The target output size :math:`L_{out}`.
- **return_indices** (bool) - If True, also output the indices of the max values. Default: False.
Inputs:
- **x** (Tensor) - Tensor of shape :math:`(N, C_{in}, L_{in})`, with float16 or float32 data type.
- **input** (Tensor) - Tensor of shape :math:`(N_{in}, C_{in}, L_{in})` or :math:`(C_{in}, L_{in})`, with float16 or float32 data type.
Outputs:
Tensor of shape :math:`(N, C_{in}, L_{out})`, with the same data type as `x`.
Tensor, with the same data type as `input`.
The output shape is :math:`(N_{in}, C_{in}, L_{out})` or :math:`(C_{in}, L_{out})`.
Raises:
- **TypeError** - `x` is neither float16 nor float32.
- **TypeError** - `output_size` is not an int.
- **ValueError** - `output_size` is less than 1.
- **ValueError** - The last dimension of `x` is smaller than `output_size`.
- **ValueError** - The last dimension of `x` is not divisible by `output_size`.
- **ValueError** - The length of the shape of `x` is not equal to 3.
- **TypeError** - `input` is not a Tensor.
- **TypeError** - `output_size` is not an int.
- **TypeError** - `return_indices` is not a bool.
- **ValueError** - `output_size` is less than 1.
- **ValueError** - The dimension of `input` is not equal to 2 or 3.
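A minimal usage sketch mirroring the 2D-input test added later in this commit (non-Ascend backends, since `return_indices` is used):

import numpy as np
from mindspore import Tensor, nn

# A 2D input (C_in, L_in) is now accepted, and L_in = 4 no longer needs
# to be divisible by output_size = 3.
pool = nn.AdaptiveMaxPool1d(output_size=3, return_indices=True)
x = Tensor(np.arange(12).reshape(3, 4).astype(np.float32))
values, indices = pool(x)
print(values.shape, indices.shape)  # (3, 3) (3, 3)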


@@ -18,23 +18,23 @@ mindspore.nn.AdaptiveMaxPool2d
\end{align}
.. note::
On the Ascend platform, only float16 data is supported for the `input_x` input.
On the Ascend platform, only float16 data is supported for the `input` input.
Parameters:
- **output_size** (Union[int, tuple]) - The target output size is H x W. It can be a tuple of two ints H and W, a single int for the same H and W, or None, in which case the output size is the same as the input size.
- **return_indices** (bool) - If True, also output the indices of the max values. Default: False.
Inputs:
- **input_x** (Tensor) - The input of AdaptiveMaxPool2d, which is a 3D or 4D Tensor, with float16, float32 or float64 data type.
- **input** (Tensor) - The input of AdaptiveMaxPool2d, which is a 3D or 4D Tensor, with float16, float32 or float64 data type.
Outputs:
Tensor, with the same data type as `input_x`.
The output shape is `input_x_shape[:len(input_x_shape) - len(out_shape)] + out_shape`.
Tensor, with the same data type as `input`.
The output shape is `input_shape[:len(input_shape) - len(out_shape)] + out_shape`.
Raises:
- **TypeError** - `input_x` is not a Tensor.
- **TypeError** - The data in `input_x` is not float16, float32 or float64.
- **TypeError** - `input` is not a Tensor.
- **TypeError** - The data in `input` is not float16, float32 or float64.
- **TypeError** - `output_size` is not an int or a tuple.
- **TypeError** - `return_indices` is not a bool.
- **ValueError** - `output_size` is a tuple but its size is not 2.
- **ValueError** - The dimension of `input_x` is not CHW or NCHW.
- **ValueError** - The dimension of `input` is not CHW or NCHW.
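The output-shape rule above is ordinary Python slicing; a quick illustration with hypothetical sizes:

# input_shape[:len(input_shape) - len(out_shape)] + out_shape
input_shape = (2, 3, 9, 9)   # NCHW input
out_shape = (4, 5)           # target H x W
output_shape = input_shape[:len(input_shape) - len(out_shape)] + out_shape
print(output_shape)  # (2, 3, 4, 5): leading dims kept, trailing dims replaced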


@@ -10,14 +10,14 @@ mindspore.nn.AdaptiveMaxPool3d
- **return_indices** (bool) - If `return_indices` is True, the indices of the max values are output as well; otherwise they are not. Default: False.
Inputs:
- **x** (Tensor) - Tensor of shape :math:`(C, D, H, W)` or :math:`(N, C, D, H, W)`, with int8, int16, int32, int64, uint8, uint16, uint32, uint64, float16, float32 or float64 data type.
- **input** (Tensor) - Tensor of shape :math:`(C, D, H, W)` or :math:`(N, C, D, H, W)`, with int8, int16, int32, int64, uint8, uint16, uint32, uint64, float16, float32 or float64 data type.
Outputs:
- **y** (Tensor) - Tensor, with the same data type and number of dimensions as the input `x`.
- **y** (Tensor) - Tensor, with the same data type and number of dimensions as the input `input`.
- **argmax** (Tensor) - Tensor, the indices of the max values, with int32 data type and the same shape as `y`. It is returned only when `return_indices` is True.
Raises:
- **TypeError** - `x` is not a Tensor.
- **ValueError** - The dimension of `x` is not 4D or 5D.
- **TypeError** - The dtype of `x` is not one of int8, int16, int32, int64, uint8, uint16, uint32, uint64, float16, float32 or float64.
- **TypeError** - `input` is not a Tensor.
- **ValueError** - The dimension of `input` is not 4D or 5D.
- **TypeError** - The dtype of `input` is not one of int8, int16, int32, int64, uint8, uint16, uint32, uint64, float16, float32 or float64.
- **ValueError** - `output_size` is neither an int nor a tuple with shape (3,).


@@ -1,7 +1,7 @@
mindspore.ops.adaptive_avg_pool1d
=================================
.. py:function:: mindspore.ops.adaptive_avg_pool1d(input_x, output_size)
.. py:function:: mindspore.ops.adaptive_avg_pool1d(input, output_size)
Applies a 1D adaptive average pooling over an input Tensor that can be regarded as a composition of 1D planes.
@@ -13,17 +13,17 @@ mindspore.ops.adaptive_avg_pool1d
:math:`L_{in}` must be divisible by `output_size`.
Parameters:
- **input_x** (Tensor) - The input, of shape :math:`(N_{in}, C_{in}, L_{in})`, with float16 or float32 data type.
- **input** (Tensor) - The input, of shape :math:`(N_{in}, C_{in}, L_{in})`, with float16 or float32 data type.
- **output_size** (int) - The size :math:`L_{out}`.
Returns:
Tensor, with the same data type as `input_x`.
Tensor, with the same data type as `input`.
The output shape is :math:`(N_{in}, C_{in}, L_{out})`.
Raises:
- **TypeError** - `output_size` is not an int.
- **TypeError** - The dtype of `input_x` is not float16 or float32.
- **TypeError** - The dtype of `input` is not float16 or float32.
- **ValueError** - `output_size` is less than 1.
- **ValueError** - The dimension of `input_x` is not equal to 3.
- **ValueError** - The last dimension of `input_x` is smaller than `output_size`.
- **ValueError** - The last dimension of `input_x` is not divisible by `output_size`.
- **ValueError** - The dimension of `input` is not equal to 3.
- **ValueError** - The last dimension of `input` is smaller than `output_size`.
- **ValueError** - The last dimension of `input` is not divisible by `output_size`.


@@ -1,11 +1,11 @@
mindspore.ops.adaptive_avg_pool3d
=================================
.. py:function:: mindspore.ops.adaptive_avg_pool3d(input_x, output_size)
.. py:function:: mindspore.ops.adaptive_avg_pool3d(input, output_size)
Applies a 3D adaptive average pooling over an input Tensor composed of multiple planes. For any input size, the size of the specified output is :math:`(D, H, W)`, while the number of input and output features stays the same.
Suppose the last three dimensions of `input_x` have sizes :math:`(inD, inH, inW)`; then the last three dimensions of the output have sizes :math:`(outD, outH, outW)`. The operation is as follows:
Suppose the last three dimensions of `input` have sizes :math:`(inD, inH, inW)`; then the last three dimensions of the output have sizes :math:`(outD, outH, outW)`. The operation is as follows:
.. math::
\begin{array}{ll} \\
@@ -22,14 +22,14 @@ mindspore.ops.adaptive_avg_pool3d
\end{array}
Parameters:
- **input_x** (Tensor) - The input of adaptive_avg_pool3d, which is a 4D or 5D Tensor.
- **input** (Tensor) - The input of adaptive_avg_pool3d, which is a 4D or 5D Tensor.
- **output_size** (Union[int, tuple]) - The target output size. It can be a tuple :math:`(D, H, W)`, or an int D for :math:`(D, D, D)`. :math:`D`, :math:`H` and :math:`W` can be an int or None, where None means the output size is the same as that of the corresponding input dimension.
Returns:
Tensor, with the same data type as `input_x`.
Tensor, with the same data type as `input`.
Raises:
- **TypeError** - `input_x` is not a Tensor.
- **TypeError** - The dtype of `input_x` is not float16, float32 or float64.
- **ValueError** - The dimension of `input_x` is not 4D or 5D.
- **TypeError** - `input` is not a Tensor.
- **TypeError** - The dtype of `input` is not float16, float32 or float64.
- **ValueError** - The dimension of `input` is not 4D or 5D.
- **ValueError** - The value of `output_size` is not positive.


@@ -1,28 +1,28 @@
mindspore.ops.adaptive_max_pool1d
=================================
.. py:function:: mindspore.ops.adaptive_max_pool1d(input_x, output_size)
.. py:function:: mindspore.ops.adaptive_max_pool1d(input_x, output_size, return_indices=False)
Applies a 1D adaptive max pooling over an input Tensor that can be regarded as a composition of 1D planes.
Typically, the input is of shape :math:`(N_{in}, C_{in}, L_{in})`; adaptive_max_pool1d outputs the regional maximum along the :math:`L_{in}` dimension,
and the output is of shape :math:`(N_{in}, C_{in}, L_{out})`, where :math:`L_{out}` is defined by `output_size`.
Typically, the input is of shape :math:`(N_{in}, C_{in}, L_{in})` or :math:`(C_{in}, L_{in})`;
the output is of shape :math:`(N_{in}, C_{in}, L_{out})` or :math:`(C_{in}, L_{out})`, where :math:`L_{out}` is defined by `output_size`.
.. note::
:math:`L_{in}` must be divisible by `output_size`.
The Ascend platform does not support the `return_indices` parameter.
Parameters:
- **input_x** (Tensor) - The input, of shape :math:`(N_{in}, C_{in}, L_{in})`, with float16 or float32 data type.
- **input_x** (Tensor) - The input, of shape :math:`(N_{in}, C_{in}, L_{in})` or :math:`(C_{in}, L_{in})`, with float16 or float32 data type.
- **output_size** (int) - The size :math:`L_{out}`.
- **return_indices** (bool) - If True, also output the indices of the max values. Default: False.
Returns:
Tensor, with the same data type as `input_x`.
The output shape is :math:`(N_{in}, C_{in}, L_{out})`.
The output shape is :math:`(N_{in}, C_{in}, L_{out})` or :math:`(C_{in}, L_{out})`.
Raises:
- **TypeError** - `input_x` is not a Tensor.
- **TypeError** - `output_size` is not an int.
- **TypeError** - The dtype of `input_x` is not float16 or float32.
- **TypeError** - `return_indices` is not a bool.
- **ValueError** - `output_size` is less than 1.
- **ValueError** - The dimension of `input_x` is not equal to 3.
- **ValueError** - The last dimension of `input_x` is smaller than `output_size`.
- **ValueError** - The last dimension of `input_x` is not divisible by `output_size`.
- **ValueError** - The dimension of `input_x` is not equal to 2 or 3.
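A short sketch of the extended functional API, mirroring the 3D test added later in this commit (non-Ascend backends, since `return_indices` is used):

import numpy as np
from mindspore import Tensor, ops

x = Tensor(np.arange(16).reshape(2, 2, 4).astype(np.float32))
out, idx = ops.adaptive_max_pool1d(x, output_size=3, return_indices=True)
print(out.shape, idx.shape)  # (2, 2, 3) (2, 2, 3)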


@@ -1,21 +1,21 @@
mindspore.ops.adaptive_max_pool3d
=================================
.. py:function:: mindspore.ops.adaptive_max_pool3d(x, output_size, return_indices=False)
.. py:function:: mindspore.ops.adaptive_max_pool3d(input, output_size, return_indices=False)
Applies a 3D adaptive max pooling over an input Tensor composed of multiple planes. For any input size, the size of the specified output is :math:`(D, H, W)`, while the number of input and output features stays the same.
Parameters:
- **x** (Tensor) - Tensor of shape :math:`(C, D, H, W)` or :math:`(N, C, D, H, W)`, with int8, int16, int32, int64, uint8, uint16, uint32, uint64, float16, float32 or float64 data type.
- **input** (Tensor) - Tensor of shape :math:`(C, D, H, W)` or :math:`(N, C, D, H, W)`, with int8, int16, int32, int64, uint8, uint16, uint32, uint64, float16, float32 or float64 data type.
- **output_size** (Union[int, tuple]) - The target output size. It can be a tuple :math:`(D, H, W)`, or an int D for :math:`(D, D, D)`. :math:`D`, :math:`H` and :math:`W` can be an int or None, where None means the output size is the same as that of the corresponding input dimension.
- **return_indices** (bool, optional) - If `return_indices` is True, the indices of the max values are output as well; otherwise they are not. Default: False.
Returns:
- **y** (Tensor) - Tensor, with the same data type and number of dimensions as the input `x`.
- **y** (Tensor) - Tensor, with the same data type and number of dimensions as the input `input`.
- **argmax** (Tensor) - Tensor, the indices of the max values, with int32 data type and the same shape as `y`. It is returned only when `return_indices` is True.
Raises:
- **TypeError** - `x` is not a Tensor.
- **ValueError** - The dimension of `x` is not 4D or 5D.
- **TypeError** - The dtype of `x` is not one of int8, int16, int32, int64, uint8, uint16, uint32, uint64, float16, float32 or float64.
- **TypeError** - `input` is not a Tensor.
- **ValueError** - The dimension of `input` is not 4D or 5D.
- **TypeError** - The dtype of `input` is not one of int8, int16, int32, int64, uint8, uint16, uint32, uint64, float16, float32 or float64.
- **ValueError** - `output_size` is neither an int nor a tuple with shape (3,).


@@ -773,18 +773,18 @@ class AdaptiveAvgPool1d(Cell):
output_size (int): the target output size :math:`L_{out}`.
Inputs:
- **x** (Tensor) - Tensor of shape :math:`(N, C_{in}, L_{in})`, with float16 or float32 data type.
- **input** (Tensor) - Tensor of shape :math:`(N, C_{in}, L_{in})`, with float16 or float32 data type.
Outputs:
Tensor of shape :math:`(N, C_{in}, L_{out})`, has the same type as `x`.
Tensor of shape :math:`(N, C_{in}, L_{out})`, has the same type as `input`.
Raises:
TypeError: If `output_size` is not an int.
TypeError: If `x` is neither float16 nor float32.
TypeError: If `input` is neither float16 nor float32.
ValueError: If `output_size` is less than 1.
ValueError: If length of shape of `x` is not equal to 3.
ValueError: If the last dimension of `x` is smaller than `output_size`.
ValueError: If the last dimension of `x` is not divisible by `output_size`.
ValueError: If length of shape of `input` is not equal to 3.
ValueError: If the last dimension of `input` is smaller than `output_size`.
ValueError: If the last dimension of `input` is not divisible by `output_size`.
Supported Platforms:
@@ -795,8 +795,8 @@ class AdaptiveAvgPool1d(Cell):
>>> from mindspore import Tensor, nn
>>> import numpy as np
>>> pool = nn.AdaptiveAvgPool1d(output_size=2)
>>> x = Tensor(np.random.randint(0, 10, [1, 3, 6]), mindspore.float32)
>>> output = pool(x)
>>> input = Tensor(np.random.randint(0, 10, [1, 3, 6]), mindspore.float32)
>>> output = pool(input)
>>> result = output.shape
>>> print(result)
(1, 3, 2)
@@ -813,22 +813,22 @@ class AdaptiveAvgPool1d(Cell):
self.output_size = output_size
self.dtype = P.DType()
def construct(self, x):
_adaptive_dtype_check(self.dtype(x), self.cls_name)
def construct(self, input):
_adaptive_dtype_check(self.dtype(input), self.cls_name)
_, _, width = self.shape(x)
_, _, width = self.shape(input)
stride = width // self.output_size
kernel_size = width - (self.output_size - 1) * stride
stride = (1, width // self.output_size)
kernel_size = (1, kernel_size)
x = self.expand(x, 2)
input = self.expand(input, 2)
avg_pool = P.AvgPool(kernel_size=kernel_size, strides=stride)
x = avg_pool(x)
x = self.squeeze(x)
input = avg_pool(input)
input = self.squeeze(input)
return x
return input
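For intuition, the kernel and stride arithmetic above reduces the adaptive pool to a plain fixed-window AvgPool; a quick check of the numbers (plain Python, illustrative only):

width, output_size = 6, 2
stride = width // output_size                     # 3
kernel_size = width - (output_size - 1) * stride  # 6 - 1 * 3 = 3
# An AvgPool with kernel (1, 3) and stride (1, 3) over a (1, width) plane
# yields exactly output_size values per channel, which is why L_in must be
# divisible by output_size for this reduction.
print(stride, kernel_size)  # 3 3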
class AdaptiveAvgPool2d(Cell):
@@ -856,7 +856,7 @@ class AdaptiveAvgPool2d(Cell):
If it is None, it means the output size is the same as the input size.
Inputs:
- **x** (Tensor) - The input of AdaptiveAvgPool2d, which is a 3D or 4D tensor,
- **input** (Tensor) - The input of AdaptiveAvgPool2d, which is a 3D or 4D tensor,
with float16, float32 or float64 data type.
Outputs:
@@ -864,9 +864,9 @@ class AdaptiveAvgPool2d(Cell):
Raises:
ValueError: If `output_size` is a tuple and the length of `output_size` is not 2.
TypeError: If `x` is not a Tensor.
TypeError: If dtype of `x` is not float16, float32 or float64.
ValueError: If the dimension of `x` is less than or equal to the dimension of `output_size`.
TypeError: If `input` is not a Tensor.
TypeError: If dtype of `input` is not float16, float32 or float64.
ValueError: If the dimension of `input` is less than or equal to the dimension of `output_size`.
Supported Platforms:
``GPU``
@@ -887,8 +887,8 @@ class AdaptiveAvgPool2d(Cell):
super(AdaptiveAvgPool2d, self).__init__()
self.adaptive_avgpool2d = P.AdaptiveAvgPool2D(output_size)
def construct(self, x):
return self.adaptive_avgpool2d(x)
def construct(self, input):
return self.adaptive_avgpool2d(input)
class AdaptiveAvgPool3d(Cell):
@@ -897,14 +897,14 @@ class AdaptiveAvgPool3d(Cell):
That is, for any input size, the size of the specified output is :math:`(D, H, W)`.
The number of output features is equal to the number of input planes.
Suppose the last 3 dimension sizes of `x` are :math:`(inD, inH, inW)`; then the last 3 dimension sizes of the output are
Suppose the last 3 dimension sizes of `input` are :math:`(inD, inH, inW)`; then the last 3 dimension sizes of the output are
:math:`(outD, outH, outW)`.
.. math::
\begin{array}{ll} \\
\forall \quad od \in [0,outD-1], oh \in [0,outH-1], ow \in [0,outW-1]\\
output[od,oh,ow] = \\
\qquad mean(x[istartD:iendD+1,istartH:iendH+1,istartW:iendW+1])\\
\qquad mean(input[istartD:iendD+1,istartH:iendH+1,istartW:iendW+1])\\
where,\\
\qquad istartD= \left\lceil \frac{od * inD}{outD} \right\rceil \\
\qquad iendD=\left\lfloor \frac{(od+1)* inD}{outD} \right\rfloor \\
@@ -920,20 +920,20 @@ class AdaptiveAvgPool3d(Cell):
which means the output size is the same as that of the input.
Inputs:
- **x** (Tensor) - The input of AdaptiveAvgPool3d, which is a 5D or 4D Tensor,
- **input** (Tensor) - The input of AdaptiveAvgPool3d, which is a 5D or 4D Tensor,
with float16, float32 or float64 data type.
Outputs:
Tensor, with the same type as the `x`.
Tensor, with the same type as the `input`.
Raises:
TypeError: If `x` is not a Tensor.
TypeError: If dtype of `x` is not float16, float32 or float64.
ValueError: If the dimension of `x` is not 4D or 5D.
TypeError: If `input` is not a Tensor.
TypeError: If dtype of `input` is not float16, float32 or float64.
ValueError: If the dimension of `input` is not 4D or 5D.
ValueError: If `output_size` value is not positive.
Supported Platforms:
``GPU``
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> # case 1: output_size=(3, 3, 4)
@@ -967,8 +967,8 @@ class AdaptiveAvgPool3d(Cell):
super(AdaptiveAvgPool3d, self).__init__()
self.adaptive_avg_pool3d = AdaptiveAvgPool3D(output_size)
def construct(self, x):
return self.adaptive_avg_pool3d(x)
def construct(self, input):
return self.adaptive_avg_pool3d(input)
class AdaptiveMaxPool1d(Cell):
@@ -976,30 +976,31 @@ class AdaptiveMaxPool1d(Cell):
Applies a 1D adaptive maximum pooling over an input Tensor which can be regarded as
a composition of 1D input planes.
Typically, the input is of shape :math:`(N_{in}, C_{in}, L_{in})`,
AdaptiveMaxPool1d outputs regional maximum in the :math:`L_{in}`-dimension. The output is of
shape :math:`(N_{in}, C_{in}, L_{out})`, where :math:`L_{out}` is defined by `output_size`.
Typically, the input is of shape :math:`(N_{in}, C_{in}, L_{in})` or :math:`(C_{in}, L_{in})`.
The output is of shape :math:`(N_{in}, C_{in}, L_{out})` or :math:`(C_{in}, L_{out})`,
where :math:`L_{out}` is defined by `output_size`.
Note:
:math:`L_{in}` must be divisible by `output_size`.
Ascend platform does not support the `return_indices` parameter.
Args:
output_size (int): the target output size :math:`L_{out}`.
return_indices (bool): If `return_indices` is True, the indices of max value would be output.
Default: False.
Inputs:
- **x** (Tensor) - Tensor of shape :math:`(N, C_{in}, L_{in})`, with float16 or float32 data type.
- **input** (Tensor) - Tensor of shape :math:`(N_{in}, C_{in}, L_{in})` or :math:`(C_{in}, L_{in})`, with
float16 or float32 data type.
Outputs:
Tensor of shape :math:`(N, C_{in}, L_{out})`, has the same type as `x`.
Tensor of shape :math:`(N_{in}, C_{in}, L_{out})` or :math:`(C_{in}, L_{out})`, has the same type as `input`.
Raises:
TypeError: If `x` is neither float16 nor float32.
TypeError: If `input` is not a Tensor.
TypeError: If `output_size` is not an int.
TypeError: If `return_indices` is not a bool.
ValueError: If `output_size` is less than 1.
ValueError: If the last dimension of `x` is smaller than `output_size`.
ValueError: If the last dimension of `x` is not divisible by `output_size`.
ValueError: If length of shape of `x` is not equal to 3.
ValueError: If dimension of `input` is not equal to 2 or 3.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
@@ -1009,40 +1010,22 @@ class AdaptiveMaxPool1d(Cell):
>>> from mindspore import Tensor, nn
>>> import numpy as np
>>> pool = nn.AdaptiveMaxPool1d(output_size=3)
>>> x = Tensor(np.random.randint(0, 10, [1, 3, 6]), mindspore.float32)
>>> output = pool(x)
>>> input = Tensor(np.random.randint(0, 10, [1, 3, 6]), mindspore.float32)
>>> output = pool(input)
>>> result = output.shape
>>> print(result)
(1, 3, 3)
"""
def __init__(self, output_size):
def __init__(self, output_size, return_indices=False):
"""Initialize AdaptiveMaxPool1d."""
super(AdaptiveMaxPool1d, self).__init__()
validator.check_int(output_size, 1, Rel.GE, "output_size", self.cls_name)
validator.check_value_type('output_size', output_size, [int], self.cls_name)
self.expand = P.ExpandDims()
self.squeeze = P.Squeeze(2)
self.output_size = output_size
self.shape = F.shape
self.dtype = P.DType()
self.return_indices = return_indices
def construct(self, x):
_adaptive_dtype_check(self.dtype(x), self.cls_name)
_, _, width = self.shape(x)
stride = width // self.output_size
kernel_size = width - (self.output_size - 1) * stride
stride = (1, width // self.output_size)
kernel_size = (1, kernel_size)
max_pool = P.MaxPool(kernel_size=kernel_size, strides=stride)
x = self.expand(x, 2)
x = max_pool(x)
x = self.squeeze(x)
return x
def construct(self, input):
input = ops.adaptive_max_pool1d(input, self.output_size, self.return_indices)
return input
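The construct above now simply delegates to the functional API; a minimal sketch of that equivalence (illustrative):

import numpy as np
from mindspore import Tensor, nn, ops

x = Tensor(np.arange(16).reshape(2, 2, 4).astype(np.float32))
net = nn.AdaptiveMaxPool1d(output_size=3)
# The Cell and ops.adaptive_max_pool1d now share one code path.
assert np.allclose(net(x).asnumpy(), ops.adaptive_max_pool1d(x, 3).asnumpy())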
class AdaptiveMaxPool2d(Cell):
@@ -1067,7 +1050,7 @@ class AdaptiveMaxPool2d(Cell):
\end{align}
Note:
Ascend platform only supports float16 type for input_x.
Ascend platform only supports float16 type for input.
Args:
output_size (Union[int, tuple]): The target output size is H x W.
@@ -1078,31 +1061,31 @@ class AdaptiveMaxPool2d(Cell):
Default: False.
Inputs:
- **input_x** (Tensor) - The input of AdaptiveMaxPool2d, which is a 3D or 4D tensor,
- **input** (Tensor) - The input of AdaptiveMaxPool2d, which is a 3D or 4D tensor,
with float16, float32 or float64 data type.
Outputs:
Tensor, with the same type as the `input_x`.
Shape of the output is `input_x_shape[:len(input_x_shape) - len(out_shape)] + out_shape`.
Tensor, with the same type as the `input`.
Shape of the output is `input_shape[:len(input_shape) - len(out_shape)] + out_shape`.
Raises:
TypeError: If `output_size` is not int or tuple.
TypeError: If `input_x` is not a tensor.
TypeError: If `input` is not a tensor.
TypeError: If `return_indices` is not a bool.
TypeError: If dtype of `input_x` is not float16, float32 or float64.
TypeError: If dtype of `input` is not float16, float32 or float64.
ValueError: If `output_size` is a tuple and the length of `output_size` is not 2.
ValueError: If the dimension of `input_x` is not NCHW or CHW.
ValueError: If the dimension of `input` is not NCHW or CHW.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> # case 1: output_size=(None, 2)
>>> input_x = Tensor(np.array([[[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
>>> input = Tensor(np.array([[[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
... [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
... [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]]]), mindspore.float32)
>>> adaptive_max_pool_2d = nn.AdaptiveMaxPool2d((None, 2))
>>> output = adaptive_max_pool_2d(input_x)
>>> output = adaptive_max_pool_2d(input)
>>> print(output)
[[[[2. 3.]
[5. 6.]
@@ -1115,7 +1098,7 @@ class AdaptiveMaxPool2d(Cell):
[8. 9.]]]]
>>> # case 2: output_size=2
>>> adaptive_max_pool_2d = nn.AdaptiveMaxPool2d(2)
>>> output = adaptive_max_pool_2d(input_x)
>>> output = adaptive_max_pool_2d(input)
>>> print(output)
[[[[5. 6.]
[8. 9.]]
@@ -1125,7 +1108,7 @@ class AdaptiveMaxPool2d(Cell):
[8. 9.]]]]
>>> # case 3: output_size=(1, 2)
>>> adaptive_max_pool_2d = nn.AdaptiveMaxPool2d((1, 2))
>>> output = adaptive_max_pool_2d(input_x)
>>> output = adaptive_max_pool_2d(input)
>>> print(output)
[[[[8. 9.]]
[[8. 9.]]
@@ -1139,8 +1122,8 @@ class AdaptiveMaxPool2d(Cell):
self.adaptive_max_pool2d = AdaptiveMaxPool2D(output_size)
self.return_indices = return_indices
def construct(self, input_x):
output = self.adaptive_max_pool2d(input_x)
def construct(self, input):
output = self.adaptive_max_pool2d(input)
if self.return_indices:
return output
return output[0]
@@ -1162,18 +1145,18 @@ class AdaptiveMaxPool3d(Cell):
Default: False.
Inputs:
- **x** (Tensor) - Tensor, has shape of :math:`(C, D, H, W)` or :math:`(N, C, D, H, W)`. The supported dtypes
are int8, int16, int32, int64, uint8, uint16, uint32, uint64, float16, float32 and float64.
- **input** (Tensor) - Tensor, has shape of :math:`(C, D, H, W)` or :math:`(N, C, D, H, W)`. The supported
dtypes are int8, int16, int32, int64, uint8, uint16, uint32, uint64, float16, float32 and float64.
Outputs:
- **y** (Tensor) - Tensor, has the same number of dims and data type as the `x` .
- **y** (Tensor) - Tensor, has the same number of dims and data type as the `input` .
- **argmax** (Tensor) - Tensor, the indices of the maximum values along with the outputs, has the same shape as
`y` and a dtype of int32. Return this only when `return_indices` is True.
Raises:
TypeError: If `x` is not a Tensor.
ValueError: If the dimensions number of `x` is not 4 or 5.
TypeError: If dtype of `x` is not int8, int16, int32, int64, uint8, uint16, uint32, uint64,
TypeError: If `input` is not a Tensor.
ValueError: If the dimensions number of `input` is not 4 or 5.
TypeError: If dtype of `input` is not int8, int16, int32, int64, uint8, uint16, uint32, uint64,
float16, float32 or float64.
ValueError: If `output_size` is neither an int nor a tuple with shape (3,).
@@ -1181,10 +1164,10 @@ class AdaptiveMaxPool3d(Cell):
``GPU`` ``CPU``
Examples:
>>> x = Tensor(np.arange(0,36).reshape((1, 3, 3, 4)).astype(np.float32))
>>> input = Tensor(np.arange(0,36).reshape((1, 3, 3, 4)).astype(np.float32))
>>> output_size = (1, 1, 2)
>>> net = nn.AdaptiveMaxPool3d(output_size, True)
>>> output = net(x)
>>> output = net(input)
>>> print(output[0].asnumpy())
[[[[33. 35.]]]]
>>> print(output[1].asnumpy())
@@ -1198,8 +1181,8 @@ class AdaptiveMaxPool3d(Cell):
self.return_indices = return_indices
self.adaptive_max_pool3d = AdaptiveMaxPool3D()
def construct(self, x):
output = self.adaptive_max_pool3d(x, self.output_size)
def construct(self, input):
output = self.adaptive_max_pool3d(input, self.output_size)
if self.return_indices:
return output
return output[0]


@@ -48,7 +48,7 @@ scalar_to_tensor_ = P.ScalarToTensor()
sigmoid_ = NN_OPS.Sigmoid()
def adaptive_avg_pool2d(input_x, output_size):
def adaptive_avg_pool2d(input, output_size):
r"""
This operator applies a 2D adaptive average pooling to an input signal composed of multiple input planes.
That is, for any input size, the size of the specified output is H x W.
@@ -70,16 +70,16 @@ def adaptive_avg_pool2d(input_x, output_size):
\end{align}
Args:
input_x (Tensor): The input of adaptive_avg_pool2d, which is a 3D or 4D tensor,
input (Tensor): The input of adaptive_avg_pool2d, which is a 3D or 4D tensor,
with float16, float32 or float64 data type.
output_size (Union[int, tuple]): The target output size is H x W.
`output_size` can be a tuple consisting of int type H and W, or a single H for H x H, or None.
If it is None, it means the output size is the same as the input size.
Returns:
Tensor, with the same type as the `input_x`.
Tensor, with the same type as the `input`.
Shape of the output is `input_x_shape[:len(input_x_shape) - len(out_shape)] + out_shape`.
Shape of the output is `input_shape[:len(input_shape) - len(out_shape)] + out_shape`.
.. math::
@@ -93,19 +93,19 @@ def adaptive_avg_pool2d(input_x, output_size):
Raises:
ValueError: If `output_size` is a tuple and the length of `output_size` is not 2.
TypeError: If `input_x` is not a Tensor.
TypeError: If dtype of `input_x` is not float16, float32 or float64.
ValueError: If the dimension of `input_x` is less than or equal to the dimension of `output_size`.
TypeError: If `input` is not a Tensor.
TypeError: If dtype of `input` is not float16, float32 or float64.
ValueError: If the dimension of `input` is less than or equal to the dimension of `output_size`.
Supported Platforms:
``GPU``
Examples:
>>> # case 1: output_size=(None, 2)
>>> input_x = Tensor(np.array([[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
>>> input = Tensor(np.array([[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
... [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
... [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]]), mindspore.float32)
>>> output = ops.adaptive_avg_pool2d(input_x, (None, 2))
>>> output = ops.adaptive_avg_pool2d(input, (None, 2))
>>> print(output)
[[[1.5 2.5]
[4.5 5.5]
@@ -117,7 +117,7 @@ def adaptive_avg_pool2d(input_x, output_size):
[4.5 5.5]
[7.5 8.5]]]
>>> # case 2: output_size=2
>>> output = ops.adaptive_avg_pool2d(input_x, 2)
>>> output = ops.adaptive_avg_pool2d(input, 2)
>>> print(output)
[[[3. 4.]
[6. 7.]]
@@ -126,17 +126,17 @@ def adaptive_avg_pool2d(input_x, output_size):
[[3. 4.]
[6. 7.]]]
>>> # case 3: output_size=(1, 2)
>>> output = ops.adaptive_avg_pool2d(input_x, (1, 2))
>>> output = ops.adaptive_avg_pool2d(input, (1, 2))
>>> print(output)
[[[4.5 5.5]]
[[4.5 5.5]]
[[4.5 5.5]]]
"""
adaptive_avgpool2d_ = _get_cache_prim(P.AdaptiveAvgPool2D)(output_size)
return adaptive_avgpool2d_(input_x)
return adaptive_avgpool2d_(input)
def adaptive_avg_pool3d(input_x, output_size):
def adaptive_avg_pool3d(input, output_size):
r"""
This operator applies a 3D adaptive average pooling to an input signal composed of multiple input planes.
That is, for any input size, the size of the specified output is :math:`(D, H, W)`.
@@ -160,48 +160,48 @@ def adaptive_avg_pool3d(input_x, output_size):
\end{array}
Args:
input_x (Tensor): The input of adaptive_avg_pool3d, which is a 5D or 4D Tensor.
input (Tensor): The input of adaptive_avg_pool3d, which is a 5D or 4D Tensor.
output_size (Union[int, tuple]): The target output size. `output_size` can be a tuple :math:`(D, H, W)`,
or an int D for :math:`(D, D, D)`. :math:`D`, :math:`H` and :math:`W` can be int or None
which means the output size is the same as that of the input.
Returns:
Tensor, with the same type as the `input_x`.
Tensor, with the same type as the `input`.
Raises:
TypeError: If `input_x` is not a Tensor.
TypeError: If dtype of `input_x` is not float16, float32 or float64.
ValueError: If the dimension of `input_x` is not 4D or 5D.
TypeError: If `input` is not a Tensor.
TypeError: If dtype of `input` is not float16, float32 or float64.
ValueError: If the dimension of `input` is not 4D or 5D.
ValueError: If `output_size` value is not positive.
Supported Platforms:
``GPU`` ``CPU``
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> # case 1: output_size=(3, 3, 4)
>>> output_size=(3, 3, 4)
>>> input_x_val = np.random.randn(4, 3, 5, 6, 7)
>>> input_x = Tensor(input_x_val, mindspore.float32)
>>> output = ops.adaptive_avg_pool3d(input_x, output_size)
>>> input_val = np.random.randn(4, 3, 5, 6, 7)
>>> input = Tensor(input_val, mindspore.float32)
>>> output = ops.adaptive_avg_pool3d(input, output_size)
>>> print(output.shape)
(4, 3, 3, 3, 4)
>>> # case 2: output_size=4
>>> output_size=5
>>> input_x_val = np.random.randn(2, 3, 8, 6, 12)
>>> input_x = Tensor(input_x_val, mindspore.float32)
>>> output = ops.adaptive_avg_pool3d(input_x, output_size)
>>> input_val = np.random.randn(2, 3, 8, 6, 12)
>>> input = Tensor(input_val, mindspore.float32)
>>> output = ops.adaptive_avg_pool3d(input, output_size)
>>> print(output.shape)
(2, 3, 5, 5, 5)
>>> # case 3: output_size=(None, 4, 5)
>>> output_size=(None, 4, 5)
>>> input_x_val = np.random.randn(4, 1, 9, 10, 8)
>>> input_x = Tensor(input_x_val, mindspore.float32)
>>> output = ops.adaptive_avg_pool3d(input_x, output_size)
>>> input_val = np.random.randn(4, 1, 9, 10, 8)
>>> input = Tensor(input_val, mindspore.float32)
>>> output = ops.adaptive_avg_pool3d(input, output_size)
>>> print(output.shape)
(4, 1, 9, 4, 5)
"""
adaptive_avg_pool3d_ = _get_cache_prim(NN_OPS.AdaptiveAvgPool3D)(output_size)
return adaptive_avg_pool3d_(input_x)
return adaptive_avg_pool3d_(input)
@constexpr
@@ -538,13 +538,80 @@ def avg_pool3d(input_x, kernel_size=1, stride=1, padding=0, ceil_mode=False, cou
return avg_pool_op(input_x)
@constexpr
def _check_adaptive_max_pool1d_output_size(output_size):
"""Check the output_size value in adaptive_max_pool1d op."""
validator.check_int(output_size, 1, Rel.GE, "output_size", 'adaptive_max_pool1d')
validator.check_value_type('output_size', output_size, [int], 'adaptive_max_pool1d')
def adaptive_max_pool1d(input, output_size, return_indices=False):
r"""
Apply 1D adaptive max pooling to a 1-D signal with batch and channel dimensions.
Typically, the input is of shape :math:`(N_{in}, C_{in}, L_{in})` or :math:`(C_{in}, L_{in})`.
The output is of shape :math:`(N_{in}, C_{in}, L_{out})` or :math:`(C_{in}, L_{out})`,
where :math:`L_{out}` is defined by `output_size`.
Note:
Ascend platform does not support the `return_indices` parameter.
Args:
input (Tensor): Tensor of shape :math:`(N_{in}, C_{in}, L_{in})` or :math:`(C_{in}, L_{in})`.
output_size (int): the target output size :math:`L_{out}`.
return_indices (bool): If `return_indices` is True, the indices of max value would be output.
Default: False.
Returns:
Tensor of shape :math:`(N_{in}, C_{in}, L_{out})` or :math:`(C_{in}, L_{out})`, has the same type as `input`.
Raises:
TypeError: If `input` is not a Tensor.
TypeError: If `output_size` is not an int.
TypeError: If `return_indices` is not a bool.
ValueError: If `output_size` is less than 1.
ValueError: If dimension of `input` is not equal to 3 or 2.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> input = Tensor(np.random.randint(0, 10, [1, 3, 6]), mindspore.float32)
>>> output = ops.adaptive_max_pool1d(input, output_size=2)
>>> print(output.shape)
(1, 3, 2)
"""
if not isinstance(input, Tensor):
raise TypeError(f"For adaptive_max_pool1d, the type of 'input' must be Tensor, but got {type(input)}.")
_check_adaptive_max_pool1d_output_size(output_size)
_check_value_type("return_indices", return_indices, bool, "adaptive_max_pool1d")
dim = input.ndim
if dim not in (2, 3):
raise ValueError("For adaptive_max_pool1d input must have 2 or 3 dim, but got {}.".format(dim))
output_size = (output_size, 1)
input = input.unsqueeze(-1)
if dim == 2:
input = input.unsqueeze(0)
_adaptive_max_pool2d = _get_cache_prim(NN_OPS.AdaptiveMaxPool2D)(output_size)
out, indices = _adaptive_max_pool2d(input)
out = out.squeeze(-1)
if dim == 2:
out = out.squeeze(0)
if return_indices:
indices = indices.squeeze(-1)
if dim == 2:
indices = indices.squeeze(0)
out = (out, indices)
return out
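Since the dim == 2 path unsqueezes a batch dimension and squeezes it back out, batched and unbatched inputs agree; a small sketch of that symmetry (illustrative):

import numpy as np
from mindspore import Tensor, ops

x2d = Tensor(np.arange(12).reshape(3, 4).astype(np.float32))     # (C, L)
x3d = Tensor(np.arange(12).reshape(1, 3, 4).astype(np.float32))  # (N, C, L)
out2d = ops.adaptive_max_pool1d(x2d, 3)
out3d = ops.adaptive_max_pool1d(x3d, 3)
print(out2d.shape, out3d.shape)  # (3, 3) (1, 3, 3)
assert np.allclose(out2d.asnumpy(), out3d.asnumpy()[0])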
@constexpr
def _check_adaptive_max_pool2d(return_indices):
"""check the type of return_indices"""
validator.check_value_type("return_indices", return_indices, bool, "adaptive_max_pool2d")
def adaptive_max_pool2d(input_x, output_size, return_indices=False):
def adaptive_max_pool2d(input, output_size, return_indices=False):
r"""
This operator applies a 2D adaptive max pooling to an input signal composed of multiple input planes.
That is, for any input size, the size of the specified output is H x W.
@@ -564,10 +631,10 @@ def adaptive_max_pool2d(input_x, output_size, return_indices=False):
\end{align}
Note:
Ascend platform only supports float16 type for input_x.
Ascend platform only supports float16 type for input.
Args:
input_x (Tensor): The input of adaptive_max_pool2d, which is a 3D or 4D tensor,
input (Tensor): The input of adaptive_max_pool2d, which is a 3D or 4D tensor,
with float16, float32 or float64 data type.
output_size (Union[int, tuple]): The target output size is H x W.
@@ -578,27 +645,27 @@ def adaptive_max_pool2d(input_x, output_size, return_indices=False):
Default: False.
Returns:
Tensor, with the same type as the `input_x`.
Tensor, with the same type as the `input`.
Shape of the output is `input_x_shape[:len(input_x_shape) - len(out_shape)] + out_shape`.
Shape of the output is `input_shape[:len(input_shape) - len(out_shape)] + out_shape`.
Raises:
TypeError: If `output_size` is not int or tuple.
TypeError: If `input_x` is not a tensor.
TypeError: If `input` is not a tensor.
TypeError: If `return_indices` is not a bool.
TypeError: If dtype of `input_x` is not float16, float32 or float64.
TypeError: If dtype of `input` is not float16, float32 or float64.
ValueError: If `output_size` is a tuple and the length of `output_size` is not 2.
ValueError: If the dimension of `input_x` is not NCHW or CHW.
ValueError: If the dimension of `input` is not NCHW or CHW.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> # case 1: output_size=(None, 2)
>>> input_x = Tensor(np.array([[[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
>>> input = Tensor(np.array([[[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
... [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
... [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]]]), mindspore.float32)
>>> output = ops.adaptive_max_pool2d(input_x, (None, 2))
>>> output = ops.adaptive_max_pool2d(input, (None, 2))
>>> print(output)
[[[[2. 3.]
[5. 6.]
@@ -610,7 +677,7 @@ def adaptive_max_pool2d(input_x, output_size, return_indices=False):
[5. 6.]
[8. 9.]]]]
>>> # case 2: output_size=2
>>> output = ops.adaptive_max_pool2d(input_x, 2)
>>> output = ops.adaptive_max_pool2d(input, 2)
>>> print(output)
[[[[5. 6.]
[8. 9.]]
@@ -619,7 +686,7 @@ def adaptive_max_pool2d(input_x, output_size, return_indices=False):
[[5. 6.]
[8. 9.]]]]
>>> # case 3: output_size=(1, 2)
>>> output = ops.adaptive_max_pool2d(input_x, (1, 2))
>>> output = ops.adaptive_max_pool2d(input, (1, 2))
>>> print(output)
[[[[8. 9.]]
[[8. 9.]]
@@ -627,12 +694,12 @@ def adaptive_max_pool2d(input_x, output_size, return_indices=False):
"""
_check_adaptive_max_pool2d(return_indices)
_adaptive_max_pool2d = _get_cache_prim(NN_OPS.AdaptiveMaxPool2D)(output_size)
out = _adaptive_max_pool2d(input_x)
out = _adaptive_max_pool2d(input)
output = out if return_indices else out[0]
return output
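With `return_indices=True` the function returns the (values, indices) pair unpacked above; a short self-contained sketch (illustrative):

import numpy as np
import mindspore
from mindspore import Tensor, ops

input = Tensor(np.arange(9).reshape(1, 1, 3, 3), mindspore.float32)
output, indices = ops.adaptive_max_pool2d(input, (1, 2), True)
# indices locates each max within its plane and has the same shape as output.
print(output.shape, indices.shape)  # (1, 1, 1, 2) (1, 1, 1, 2)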
def adaptive_max_pool3d(x, output_size, return_indices=False):
def adaptive_max_pool3d(input, output_size, return_indices=False):
r"""
Applies a 3D adaptive max pooling over an input signal composed of several input planes.
@@ -640,7 +707,7 @@ def adaptive_max_pool3d(x, output_size, return_indices=False):
The number of output features is equal to the number of input planes.
Args:
x (Tensor): Tensor, with shape :math:`(C, D, H, W)` or :math:`(N, C, D, H, W)`, which supports int8, int16,
input (Tensor): Tensor, with shape :math:`(C, D, H, W)` or :math:`(N, C, D, H, W)`, which supports int8, int16,
int32, int64, uint8, uint16, uint32, uint64, float16, float32 or float64 data type.
output_size (Union[int, tuple]): The target output size. `output_size` can be a tuple :math:`(D, H, W)`,
or an int D for :math:`(D, D, D)`. :math:`D`, :math:`H` and :math:`W` can be int or None
@@ -649,14 +716,14 @@ def adaptive_max_pool3d(x, output_size, return_indices=False):
else would not be output. Default: False.
Returns:
- **y** (Tensor) - Tensor, with the same number of dims and data type as the `x`.
- **y** (Tensor) - Tensor, with the same number of dims and data type as the `input`.
- **argmax** (Tensor) - Tensor, the indices of the max values, which has the same shape as
`y` and whose data type is int32. It is output only when `return_indices` is True.
Raises:
TypeError: If `x` is not a Tensor.
ValueError: If the dimensions number of `x` is not 4 or 5.
TypeError: If dtype of `x` is not int8, int16, int32, int64, uint8, uint16, uint32, uint64,
TypeError: If `input` is not a Tensor.
ValueError: If the dimensions number of `input` is not 4 or 5.
TypeError: If dtype of `input` is not int8, int16, int32, int64, uint8, uint16, uint32, uint64,
float16, float32 or float64.
ValueError: If `output_size` is neither an int nor a tuple with shape (3,).
@@ -664,9 +731,9 @@ def adaptive_max_pool3d(x, output_size, return_indices=False):
``GPU`` ``CPU``
Examples:
>>> x = Tensor(np.arange(0,36).reshape((1, 3, 3, 4)).astype(np.float32))
>>> input = Tensor(np.arange(0,36).reshape((1, 3, 3, 4)).astype(np.float32))
>>> output_size = (1, 1, 2)
>>> output = ops.adaptive_max_pool3d(x, output_size, True)
>>> output = ops.adaptive_max_pool3d(input, output_size, True)
>>> print(output[0].asnumpy())
[[[[33. 35.]]]]
>>> print(output[1].asnumpy())
@@ -674,7 +741,7 @@ def adaptive_max_pool3d(x, output_size, return_indices=False):
"""
adaptive_max_pool3d_ = _get_cache_prim(NN_OPS.AdaptiveMaxPool3D)()
output_size_ = Tensor(output_size, dtype=mstype.int32)
out = adaptive_max_pool3d_(x, output_size_)
out = adaptive_max_pool3d_(input, output_size_)
output = out if return_indices else out[0]
return output
@@ -4536,7 +4603,7 @@ def huber_loss(x, target, reduction='mean', delta=1.0):
return _get_loss(loss, reduction, "huber_loss")
def adaptive_avg_pool1d(input_x, output_size):
def adaptive_avg_pool1d(input, output_size):
r"""
Applies a 1D adaptive average pooling over an input Tensor which can be regarded as a composition of 1D input
planes.
@@ -4549,34 +4616,34 @@ def adaptive_avg_pool1d(input_x, output_size):
:math:`L_{in}` must be divisible by `output_size`.
Args:
input_x (Tensor): Tensor of shape :math:`(N, C_{in}, L_{in})`, with float16 or float32 data type.
input (Tensor): Tensor of shape :math:`(N, C_{in}, L_{in})`, with float16 or float32 data type.
output_size (int): the target output size :math:`L_{out}`.
Returns:
Tensor of shape :math:`(N, C_{in}, L_{out})`, has the same type as `input_x`.
Tensor of shape :math:`(N, C_{in}, L_{out})`, has the same type as `input`.
Raises:
TypeError: If `output_size` is not an int.
TypeError: If `input_x` is neither float16 nor float32.
TypeError: If `input` is neither float16 nor float32.
ValueError: If `output_size` is less than 1.
ValueError: If length of shape of `input_x` is not equal to 3.
ValueError: If the last dimension of `input_x` is smaller than `output_size`.
ValueError: If the last dimension of `input_x` is not divisible by `output_size`.
ValueError: If length of shape of `input` is not equal to 3.
ValueError: If the last dimension of `input` is smaller than `output_size`.
ValueError: If the last dimension of `input` is not divisible by `output_size`.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> input_x = Tensor(np.random.randint(0, 10, [1, 3, 6]), mindspore.float32)
>>> output = ops.adaptive_avg_pool1d(input_x, output_size=2)
>>> input = Tensor(np.random.randint(0, 10, [1, 3, 6]), mindspore.float32)
>>> output = ops.adaptive_avg_pool1d(input, output_size=2)
>>> print(output.shape)
(1, 3, 2)
"""
if not isinstance(input_x, (Tensor, Tensor_)):
raise TypeError("For adaptive_avg_pool1d, the input input_x must be tensor")
if not isinstance(input, (Tensor, Tensor_)):
raise TypeError("For adaptive_avg_pool1d, the input input must be tensor")
x_in_shape = input_x.shape
x_dtype = _get_cache_prim(P.DType)()(input_x)
x_in_shape = input.shape
x_dtype = _get_cache_prim(P.DType)()(input)
validator.check_int(output_size, 1, Rel.GE, "output_size", 'adaptive_avg_pool1d')
validator.check_value_type('output_size', output_size, [int], 'adaptive_avg_pool1d')
@@ -4590,7 +4657,7 @@ def adaptive_avg_pool1d(input_x, output_size):
raise ValueError("For adaptive_avg_pool1d input's last dimension must be divisible by "
"output size {}, but got {}.".format(output_size, x_in_shape[2]))
if x_dtype not in [mstype.float16, mstype.float32]:
raise TypeError("For adaptive_avg_pool1d, the input_x dtype must be float16 or float32, "
raise TypeError("For adaptive_avg_pool1d, the input dtype must be float16 or float32, "
"but got {}.".format(x_dtype))
expand_ = _get_cache_prim(P.ExpandDims)()
@@ -4604,93 +4671,11 @@ def adaptive_avg_pool1d(input_x, output_size):
avg_pool_ = _get_cache_prim(P.AvgPool)(kernel_size=kernel_size, strides=stride)
input_x = expand_(input_x, 2)
input_x = avg_pool_(input_x)
input_x = squeeze_(input_x)
input = expand_(input, 2)
input = avg_pool_(input)
input = squeeze_(input)
return input_x
@constexpr
def _check_adaptive_max_pool1d_output_size(output_size):
"""Check the output_size value in adaptive_max_pool1d op."""
validator.check_int(output_size, 1, Rel.GE, "output_size", 'adaptive_max_pool1d')
validator.check_value_type('output_size', output_size, [int], 'adaptive_max_pool1d')
def adaptive_max_pool1d(input_x, output_size):
r"""
Applies a 1D adaptive maximum pooling over an input Tensor which can be regarded as
a composition of 1D input planes.
Typically, the input is of shape :math:`(N_{in}, C_{in}, L_{in})`,
adaptive_max_pool1d outputs regional maximum in the :math:`L_{in}`-dimension. The output is of
shape :math:`(N_{in}, C_{in}, L_{out})`, where :math:`L_{out}` is defined by `output_size`.
Note:
:math:`L_{in}` must be divisible by `output_size`.
Args:
input_x (Tensor): Tensor of shape :math:`(N, C_{in}, L_{in})`, with float16 or float32 data type.
output_size (int): the target output size :math:`L_{out}`.
Returns:
Tensor of shape :math:`(N, C_{in}, L_{out})`, has the same type as `input_x`.
Raises:
TypeError: If `input_x` is neither float16 nor float32.
TypeError: If `output_size` is not an int.
ValueError: If `output_size` is less than 1.
ValueError: If the last dimension of `input_x` is smaller than `output_size`.
ValueError: If the last dimension of `input_x` is not divisible by `output_size`.
ValueError: If length of shape of `input_x` is not equal to 3.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> input_x = Tensor(np.random.randint(0, 10, [1, 3, 6]), mindspore.float32)
>>> output = ops.adaptive_max_pool1d(input_x, output_size=2)
>>> print(output.shape)
(1, 3, 2)
"""
if not isinstance(input_x, (Tensor, Tensor_)):
raise TypeError("For adaptive_max_pool1d, the input input_x must be tensor")
_check_adaptive_max_pool1d_output_size(output_size)
x_in_shape = input_x.shape
x_dtype = _get_cache_prim(P.DType)()(input_x)
if len(x_in_shape) != 3:
raise ValueError("For adaptive_max_pool1d input must have 3 dim, but got {}.".format(len(x_in_shape)))
if x_in_shape[2] < output_size:
raise ValueError("For adaptive_max_pool1d input's last dimension must be greater or equal to "
"output size {}, but got {}.".format(output_size, x_in_shape[2]))
if x_in_shape[2] % output_size != 0:
raise ValueError("For adaptive_max_pool1d input's last dimension must be divisible by "
"output size {}, but got {}.".format(output_size, x_in_shape[2]))
if x_dtype not in [mstype.float16, mstype.float32]:
raise TypeError("For adaptive_max_pool1d, the input_x dtype must be float16 or float32, "
"but got {}.".format(x_dtype))
expand_ = _get_cache_prim(P.ExpandDims)()
squeeze_ = _get_cache_prim(P.Squeeze)(2)
width = x_in_shape[2]
stride = width // output_size
kernel_size = width - (output_size - 1) * stride
stride = (1, width // output_size)
kernel_size = (1, kernel_size)
max_pool_ = _get_cache_prim(P.MaxPool)(kernel_size=kernel_size, strides=stride)
input_x = expand_(input_x, 2)
input_x = max_pool_(input_x)
input_x = squeeze_(input_x)
return input_x
return input
def batch_norm(input_x, running_mean, running_var, weight, bias, training=False, momentum=0.1, eps=1e-5):


@@ -0,0 +1,105 @@
# Copyright 2023 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
import pytest
import numpy as np
import mindspore as ms
import mindspore.nn as nn
from mindspore import Tensor
class AdaptiveMaxPool1dNet(nn.Cell):
"""AdaptiveMaxPool1d."""
def __init__(self, output_size, return_indices):
super(AdaptiveMaxPool1dNet, self).__init__()
self.adaptive_max_pool_1d = nn.AdaptiveMaxPool1d(output_size, return_indices)
def construct(self, x):
return self.adaptive_max_pool_1d(x)
@pytest.mark.level0
@pytest.mark.platform_x86_cpu
@pytest.mark.platform_arm_cpu
@pytest.mark.platform_x86_gpu_training
@pytest.mark.platform_arm_ascend_training
@pytest.mark.platform_x86_ascend_training
@pytest.mark.env_onecard
@pytest.mark.parametrize('mode', [ms.GRAPH_MODE, ms.PYNATIVE_MODE])
def test_nn_adaptivemaxpool1d_2d(mode):
"""
Feature: nn.AdaptiveMaxPool1d
Description: Verify the result of AdaptiveMaxPool1d
Expectation: success
"""
ms.set_context(mode=mode)
a = np.arange(12).reshape(3, 4).astype(np.float32)
x = Tensor(a)
expect_out_val = np.array([[1., 2., 3.],
[5., 6., 7.],
[9., 10., 11.]], dtype=np.float32)
expect_out_indices = np.array([[1, 2, 3],
[1, 2, 3],
[1, 2, 3]])
if ms.get_context("device_target") == "Ascend":
return_indices = False
else:
return_indices = True
net = AdaptiveMaxPool1dNet(3, return_indices)
out = net(x)
if return_indices:
assert np.allclose(out[0].asnumpy(), expect_out_val)
assert np.array_equal(out[1].asnumpy(), expect_out_indices)
else:
assert np.allclose(out.asnumpy(), expect_out_val)
@pytest.mark.level0
@pytest.mark.platform_x86_cpu
@pytest.mark.platform_arm_cpu
@pytest.mark.platform_x86_gpu_training
@pytest.mark.platform_arm_ascend_training
@pytest.mark.platform_x86_ascend_training
@pytest.mark.env_onecard
@pytest.mark.parametrize('mode', [ms.GRAPH_MODE, ms.PYNATIVE_MODE])
def test_nn_adaptivemaxpool1d_3d(mode):
"""
Feature: nn.AdaptiveMaxPool1d
Description: Verify the result of AdaptiveMaxPool1d
Expectation: success
"""
ms.set_context(mode=mode)
a = np.arange(16).reshape(2, 2, 4).astype(np.float32)
x = Tensor(a)
expect_out_val = np.array([[[1., 2., 3.],
[5., 6., 7.]],
[[9., 10., 11.],
[13., 14., 15.]]], dtype=np.float32)
expect_out_indices = np.array([[[1, 2, 3],
[1, 2, 3]],
[[1, 2, 3],
[1, 2, 3]]])
if ms.get_context("device_target") == "Ascend":
return_indices = False
else:
return_indices = True
net = AdaptiveMaxPool1dNet(3, return_indices)
out = net(x)
if return_indices:
assert np.allclose(out[0].asnumpy(), expect_out_val)
assert np.array_equal(out[1].asnumpy(), expect_out_indices)
else:
assert np.allclose(out.asnumpy(), expect_out_val)


@@ -0,0 +1,99 @@
# Copyright 2023 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
import pytest
import numpy as np
import mindspore as ms
import mindspore.nn as nn
from mindspore import Tensor, ops
class Net(nn.Cell):
def construct(self, x, output_size, return_indices):
return ops.adaptive_max_pool1d(x, output_size, return_indices)
@pytest.mark.level0
@pytest.mark.platform_x86_cpu
@pytest.mark.platform_arm_cpu
@pytest.mark.platform_x86_gpu_training
@pytest.mark.platform_arm_ascend_training
@pytest.mark.platform_x86_ascend_training
@pytest.mark.env_onecard
@pytest.mark.parametrize('mode', [ms.GRAPH_MODE, ms.PYNATIVE_MODE])
def test_ops_adaptive_max_pool1d_2d(mode):
"""
Feature: ops.adaptive_max_pool1d
Description: Verify the result of adaptive_max_pool1d of 2d
Expectation: success
"""
ms.set_context(mode=mode)
a = np.arange(12).reshape(3, 4).astype(np.float32)
x = Tensor(a)
expect_out_val = np.array([[1., 2., 3.],
[5., 6., 7.],
[9., 10., 11.]], dtype=np.float32)
expect_out_indices = np.array([[1, 2, 3],
[1, 2, 3],
[1, 2, 3]])
if ms.get_context("device_target") == "Ascend":
return_indices = False
else:
return_indices = True
net = Net()
out = net(x, 3, return_indices)
if return_indices:
assert np.allclose(out[0].asnumpy(), expect_out_val)
assert np.array_equal(out[1].asnumpy(), expect_out_indices)
else:
assert np.allclose(out.asnumpy(), expect_out_val)
@pytest.mark.level0
@pytest.mark.platform_x86_cpu
@pytest.mark.platform_arm_cpu
@pytest.mark.platform_x86_gpu_training
@pytest.mark.platform_arm_ascend_training
@pytest.mark.platform_x86_ascend_training
@pytest.mark.env_onecard
@pytest.mark.parametrize('mode', [ms.GRAPH_MODE, ms.PYNATIVE_MODE])
def test_ops_adaptive_max_pool1d_3d(mode):
"""
Feature: ops.adaptive_max_pool1d
Description: Verify the result of adaptive_max_pool1d of 3d
Expectation: success
"""
ms.set_context(mode=mode)
a = np.arange(16).reshape(2, 2, 4).astype(np.float32)
x = Tensor(a)
expect_out_val = np.array([[[1., 2., 3.],
[5., 6., 7.]],
[[9., 10., 11.],
[13., 14., 15.]]], dtype=np.float32)
expect_out_indices = np.array([[[1, 2, 3],
[1, 2, 3]],
[[1, 2, 3],
[1, 2, 3]]])
if ms.get_context("device_target") == "Ascend":
return_indices = False
else:
return_indices = True
net = Net()
out = net(x, 3, return_indices)
if return_indices:
assert np.allclose(out[0].asnumpy(), expect_out_val)
assert np.array_equal(out[1].asnumpy(), expect_out_indices)
else:
assert np.allclose(out.asnumpy(), expect_out_val)