forked from mindspore-Ecosystem/mindspore
add LPPool1d
This commit is contained in:
parent 5ac3e9e5de
commit c3d8bbc738
@@ -196,6 +196,8 @@ Dropout Layer

     mindspore.nn.AvgPool3d
     mindspore.nn.FractionalMaxPool2d
     mindspore.nn.FractionalMaxPool3d
+    mindspore.nn.LPPool1d
+    mindspore.nn.LPPool2d
     mindspore.nn.MaxPool1d
     mindspore.nn.MaxPool2d
     mindspore.nn.MaxPool3d
@@ -32,6 +32,8 @@ mindspore.ops.function

     mindspore.ops.dropout3d
     mindspore.ops.flatten
     mindspore.ops.interpolate
+    mindspore.ops.lp_pool1d
+    mindspore.ops.lp_pool2d
     mindspore.ops.lrn
     mindspore.ops.max_pool3d
     mindspore.ops.multi_margin_loss
@@ -0,0 +1,36 @@
mindspore.nn.LPPool1d
=====================

.. py:class:: mindspore.nn.LPPool1d(norm_type, kernel_size, stride=None, ceil_mode=False)

    Applies LP pooling to multi-dimensional input data over a 1D plane.

    Applies 1D LP pooling over an input Tensor, which can be regarded as a composition of 1D input planes.

    Typically the input has shape :math:`(N, C, L_{in})` or :math:`(C, L_{in})`, and the output has shape
    :math:`(N, C, L_{out})` or :math:`(C, L_{out})`, where
    :math:`L_{out} = \lfloor \frac{L_{in} - \text{kernel\_size}}{\text{stride}} \rfloor + 1`.
    The operation is defined as:

    .. math::
        f(X) = \sqrt[p]{\sum_{x \in X} x^{p}}

    Args:
        - **norm_type** (float) - Type of normalization, the :math:`p` in the formula.

          - If p = 1, the result is the sum of the elements within the pooling kernel (proportional to average pooling).
          - If p = :math:`\infty`, the result equals max pooling.

        - **kernel_size** (int) - Size of the pooling kernel.
        - **stride** (int) - Stride of the pooling operation, an int. If None, the default value `kernel_size` is used.
        - **ceil_mode** (bool) - If True, use ceil to compute the output shape; otherwise use floor. Default: False.

    Inputs:
        - **x** (Tensor) - Tensor of shape :math:`(N, C, L_{in})` or :math:`(C, L_{in})`.

    Outputs:
        - **output** - LPPool1d result, a Tensor of shape :math:`(N, C, L_{out})` or :math:`(C, L_{out})`, with the same data type as `x`.

    Raises:
        - **TypeError** - `x` is not a Tensor.
        - **TypeError** - `kernel_size` or `stride` is not an int.
        - **TypeError** - `ceil_mode` is not a bool.
        - **ValueError** - `kernel_size` or `stride` is less than 1.
        - **ValueError** - The length of the shape of `x` is not 2 or 3.
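As a quick check of the formula above, here is a framework-free numpy sketch (the helper name `lp_pool1d_ref` is ours for illustration, not MindSpore API) that reproduces LP pooling with p = 1, kernel_size = 3, stride = 1 on a 2×3×4 input:

```python
import numpy as np

def lp_pool1d_ref(x, norm_type, kernel_size, stride=None):
    # p-th root of the sum of p-th powers over each sliding window.
    stride = kernel_size if stride is None else stride
    l_out = (x.shape[-1] - kernel_size) // stride + 1
    cols = [np.sum(x[..., i * stride:i * stride + kernel_size] ** norm_type, axis=-1)
            for i in range(l_out)]
    return np.stack(cols, axis=-1) ** (1.0 / norm_type)

x = np.arange(2 * 3 * 4).reshape(2, 3, 4).astype(np.float32)
expected = np.array([[[3., 6.], [15., 18.], [27., 30.]],
                     [[39., 42.], [51., 54.], [63., 66.]]], dtype=np.float32)
# With p = 1 each output entry is just the window sum, matching the documented example.
assert np.allclose(lp_pool1d_ref(x, norm_type=1, kernel_size=3, stride=1), expected)
```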
@@ -0,0 +1,37 @@
mindspore.nn.LPPool2d
=====================

.. py:class:: mindspore.nn.LPPool2d(norm_type, kernel_size, stride=None, ceil_mode=False)

    Applies LP pooling to multi-dimensional input data over a 2D plane.

    Applies 2D LP pooling over an input Tensor, which can be regarded as a composition of 2D input planes.

    Typically the input has shape :math:`(N, C, H_{in}, W_{in})`, and the output has shape
    :math:`(N, C, H_{out}, W_{out})`, where
    :math:`H_{out} = \lfloor \frac{H_{in} - \text{kernel\_size}[0]}{\text{stride}[0]} \rfloor + 1` and
    :math:`W_{out} = \lfloor \frac{W_{in} - \text{kernel\_size}[1]}{\text{stride}[1]} \rfloor + 1`.
    The operation is defined as:

    .. math::
        f(X) = \sqrt[p]{\sum_{x \in X} x^{p}}

    Args:
        - **norm_type** (Union[int, float]) - Type of normalization, the :math:`p` in the formula.

          - If p = 1, the result is the sum of the elements within the pooling kernel (proportional to average pooling).
          - If p = :math:`\infty`, the result equals max pooling.

        - **kernel_size** (Union[int, tuple[int]]) - Size of the pooling kernel. If an int, it is both the height and width of the kernel. If a tuple, it must contain two ints that are the kernel height and width respectively.
        - **stride** (Union[int, tuple[int]]) - Stride of the pooling operation. If an int, it is both the height and width stride. If a tuple, it must contain two ints that are the height and width stride respectively. If None, the default value `kernel_size` is used.
        - **ceil_mode** (bool) - If True, use ceil mode to compute the output shape; otherwise use floor mode. Default: False.

    Inputs:
        - **x** (Tensor) - Tensor of shape :math:`(N, C, H_{in}, W_{in})`.

    Outputs:
        - **output** - LPPool2d result, a Tensor of shape :math:`(N, C, H_{out}, W_{out})`, with the same data type as `x`.

    Raises:
        - **TypeError** - `x` is not a Tensor.
        - **TypeError** - `kernel_size` or `stride` is neither an int nor a tuple.
        - **TypeError** - `ceil_mode` is not a bool.
        - **ValueError** - `kernel_size` or `stride` is less than 1.
        - **ValueError** - `kernel_size` or `stride` is a tuple whose length is not 2.
        - **ValueError** - The length of the shape of `x` is not 4.
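The 2D case applies the same recipe over both spatial axes. A numpy sketch (`lp_pool2d_ref` is an illustrative name, not MindSpore API) that reproduces the p = 1, 3×3, stride-1 example used later in this commit:

```python
import numpy as np

def lp_pool2d_ref(x, norm_type, kernel_size, stride=None):
    # p-th root of the sum of p-th powers over each kh x kw window.
    kh, kw = (kernel_size, kernel_size) if isinstance(kernel_size, int) else kernel_size
    sh, sw = (kh, kw) if stride is None else ((stride, stride) if isinstance(stride, int) else stride)
    h_out = (x.shape[-2] - kh) // sh + 1
    w_out = (x.shape[-1] - kw) // sw + 1
    out = np.empty(x.shape[:-2] + (h_out, w_out), dtype=x.dtype)
    for i in range(h_out):
        for j in range(w_out):
            win = x[..., i * sh:i * sh + kh, j * sw:j * sw + kw]
            out[..., i, j] = np.sum(win ** norm_type, axis=(-2, -1)) ** (1.0 / norm_type)
    return out

x = np.arange(2 * 3 * 4 * 5).reshape(2, 3, 4, 5).astype(np.float32)
out = lp_pool2d_ref(x, norm_type=1, kernel_size=3, stride=1)
# Matches the documented example: first window sums to 54, the one below it to 99.
assert out[0, 0, 0, 0] == 54.0 and out[0, 0, 1, 0] == 99.0
```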
@@ -0,0 +1,34 @@
mindspore.ops.lp_pool1d
=======================

.. py:function:: mindspore.ops.lp_pool1d(x, norm_type, kernel_size, stride=None, ceil_mode=False)

    Applies LP pooling to multi-dimensional input data over a 1D plane.

    Applies 1D LP pooling over an input Tensor, which can be regarded as a composition of 1D input planes.

    Typically the input has shape :math:`(N, C, L_{in})` or :math:`(C, L_{in})`, and the output has shape
    :math:`(N, C, L_{out})` or :math:`(C, L_{out})`, where
    :math:`L_{out} = \lfloor \frac{L_{in} - \text{kernel\_size}}{\text{stride}} \rfloor + 1`.
    The operation is defined as:

    .. math::
        f(X) = \sqrt[p]{\sum_{x \in X} x^{p}}

    Args:
        - **x** (Tensor) - Tensor of shape :math:`(N, C, L_{in})` or :math:`(C, L_{in})`.
        - **norm_type** (float) - Type of normalization, the :math:`p` in the formula.

          - If p = 1, the result is the sum of the elements within the pooling kernel (proportional to average pooling).
          - If p = :math:`\infty`, the result equals max pooling.

        - **kernel_size** (int) - Size of the pooling kernel.
        - **stride** (int) - Stride of the pooling operation, an int. If None, the default value `kernel_size` is used.
        - **ceil_mode** (bool) - If True, use ceil to compute the output shape; otherwise use floor. Default: False.

    Returns:
        - **output** - lp_pool1d result, a Tensor of shape :math:`(N, C, L_{out})` or :math:`(C, L_{out})`, with the same data type as `x`.

    Raises:
        - **TypeError** - `x` is not a Tensor.
        - **TypeError** - `kernel_size` or `stride` is not an int.
        - **TypeError** - `ceil_mode` is not a bool.
        - **ValueError** - `kernel_size` or `stride` is less than 1.
        - **ValueError** - The length of the shape of `x` is not 2 or 3.
@@ -0,0 +1,35 @@
mindspore.ops.lp_pool2d
=======================

.. py:function:: mindspore.ops.lp_pool2d(x, norm_type, kernel_size, stride=None, ceil_mode=False)

    Applies LP pooling to multi-dimensional input data over a 2D plane.

    Applies 2D LP pooling over an input Tensor, which can be regarded as a composition of 2D input planes.

    Typically the input has shape :math:`(N, C, H_{in}, W_{in})`, and the output has shape
    :math:`(N, C, H_{out}, W_{out})`, where
    :math:`H_{out} = \lfloor \frac{H_{in} - \text{kernel\_size}[0]}{\text{stride}[0]} \rfloor + 1` and
    :math:`W_{out} = \lfloor \frac{W_{in} - \text{kernel\_size}[1]}{\text{stride}[1]} \rfloor + 1`.
    The operation is defined as:

    .. math::
        f(X) = \sqrt[p]{\sum_{x \in X} x^{p}}

    Args:
        - **x** (Tensor) - Tensor of shape :math:`(N, C, H_{in}, W_{in})`.
        - **norm_type** (Union[int, float]) - Type of normalization, the :math:`p` in the formula.

          - If p = 1, the result is the sum of the elements within the pooling kernel (proportional to average pooling).
          - If p = :math:`\infty`, the result equals max pooling.

        - **kernel_size** (Union[int, tuple[int]]) - Size of the pooling kernel. If an int, it is both the height and width of the kernel. If a tuple, it must contain two ints that are the kernel height and width respectively.
        - **stride** (Union[int, tuple[int]]) - Stride of the pooling operation. If an int, it is both the height and width stride. If a tuple, it must contain two ints that are the height and width stride respectively. If None, the default value `kernel_size` is used.
        - **ceil_mode** (bool) - If True, use ceil mode to compute the output shape; otherwise use floor mode. Default: False.

    Returns:
        - **output** - lp_pool2d result, a Tensor of shape :math:`(N, C, H_{out}, W_{out})`, with the same data type as `x`.

    Raises:
        - **TypeError** - `x` is not a Tensor.
        - **TypeError** - `kernel_size` or `stride` is neither an int nor a tuple.
        - **TypeError** - `ceil_mode` is not a bool.
        - **ValueError** - `kernel_size` or `stride` is less than 1.
        - **ValueError** - `kernel_size` or `stride` is a tuple whose length is not 2.
        - **ValueError** - The length of the shape of `x` is not 4.
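`ceil_mode` only changes the rounding step in the output-size arithmetic above. A small sketch of that calculation, assuming the standard unpadded sliding-window formula (the helper name is illustrative):

```python
import math

def pool_out_len(l_in, kernel_size, stride, ceil_mode=False):
    # Number of window positions for an unpadded sliding window:
    # floor((l_in - kernel_size) / stride) + 1, or ceil(...) + 1 with ceil_mode.
    frac = (l_in - kernel_size) / stride
    return int((math.ceil(frac) if ceil_mode else math.floor(frac)) + 1)

print(pool_out_len(10, kernel_size=3, stride=2))                  # 4
print(pool_out_len(10, kernel_size=3, stride=2, ceil_mode=True))  # 5
```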
@@ -196,6 +196,8 @@ Pooling Layer

     mindspore.nn.AvgPool3d
     mindspore.nn.FractionalMaxPool2d
     mindspore.nn.FractionalMaxPool3d
+    mindspore.nn.LPPool1d
+    mindspore.nn.LPPool2d
     mindspore.nn.MaxPool1d
     mindspore.nn.MaxPool2d
     mindspore.nn.MaxPool3d
@@ -32,6 +32,8 @@ Neural Network

     mindspore.ops.dropout3d
     mindspore.ops.flatten
     mindspore.ops.interpolate
+    mindspore.ops.lp_pool1d
+    mindspore.ops.lp_pool2d
     mindspore.ops.lrn
     mindspore.ops.max_pool3d
     mindspore.ops.multi_margin_loss
@@ -17,6 +17,7 @@ from __future__ import absolute_import

 from mindspore.ops import operations as P
 from mindspore.ops import functional as F
+import mindspore.ops as ops
 from mindspore._checkparam import Rel, Validator as validator
 from mindspore.ops.primitive import constexpr
 from mindspore.common.tensor import Tensor
@@ -31,7 +32,8 @@ from mindspore.nn.cell import Cell

 __all__ = ['AvgPool3d', 'MaxPool3d', 'AvgPool2d', 'MaxPool2d', 'AvgPool1d', 'MaxPool1d', 'FractionalMaxPool2d',
            'FractionalMaxPool3d', 'AdaptiveAvgPool1d', 'AdaptiveMaxPool1d', 'AdaptiveMaxPool2d', 'AdaptiveMaxPool3d',
-           'AdaptiveAvgPool2d', 'AdaptiveAvgPool3d', 'MaxUnpool1d', 'MaxUnpool2d', 'MaxUnpool3d']
+           'AdaptiveAvgPool2d', 'AdaptiveAvgPool3d', 'MaxUnpool1d', 'MaxUnpool2d', 'MaxUnpool3d', 'LPPool1d',
+           'LPPool2d']


 class _PoolNd(Cell):
@@ -80,6 +82,155 @@ def _shape_check(in_shape, prim_name=None):
        raise ValueError(f"{msg_prefix} input must have 3 dim, but got {len(in_shape)}")


class LPPool1d(Cell):
    r"""
    LPPool1d pooling operation.

    Applies a 1D power LP pooling over an input signal composed of several input planes.

    Typically the input is of shape :math:`(N, C, L_{in})` or :math:`(C, L_{in})`, and the output is of shape
    :math:`(N, C, L_{out})` or :math:`(C, L_{out})`. The operation is as follows.

    .. math::
        f(X) = \sqrt[p]{\sum_{x \in X} x^{p}}

    Args:
        norm_type (Union[int, float]): Type of normalization, represents p in the formula,

            - if p = 1, one gets Sum Pooling (which is proportional to Average Pooling),
            - if p = :math:`\infty`, one gets Max Pooling.

        kernel_size (int): The size of the kernel window.
        stride (int): The distance the kernel moves, an int number. If the value is None,
            the default value `kernel_size` is used.
        ceil_mode (bool): Whether to use ceil or floor to calculate the output shape. Default: False.

    Inputs:
        - **x** (Tensor) - Tensor of shape :math:`(N, C, L_{in})` or :math:`(C, L_{in})`.

    Outputs:
        - **output** (Tensor) - LPPool1d result, with shape :math:`(N, C, L_{out})` or :math:`(C, L_{out})`,
          with the same data type as `x`.

    Raises:
        TypeError: If `x` is not a Tensor.
        TypeError: If `kernel_size` or `stride` is not an int.
        TypeError: If `ceil_mode` is not a bool.
        ValueError: If `kernel_size` or `stride` is less than 1.
        ValueError: If the length of the shape of `x` is not equal to 2 or 3.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> import mindspore as ms
        >>> import mindspore.nn as nn
        >>> from mindspore import Tensor
        >>> import numpy as np
        >>> a = Tensor(np.arange(2 * 3 * 4).reshape((2, 3, 4)), dtype=ms.float32)
        >>> net = nn.LPPool1d(norm_type=1, kernel_size=3, stride=1)
        >>> out = net(a)
        >>> print(out)
        [[[ 3.  6.]
          [15. 18.]
          [27. 30.]]
         [[39. 42.]
          [51. 54.]
          [63. 66.]]]
    """

    def __init__(self, norm_type, kernel_size, stride=None, ceil_mode=False):
        super(LPPool1d, self).__init__()
        self.norm_type = norm_type
        self.kernel_size = kernel_size
        self.stride = stride
        self.ceil_mode = ceil_mode

    def construct(self, x):
        return ops.lp_pool1d(x, float(self.norm_type), self.kernel_size,
                             self.stride, self.ceil_mode)


class LPPool2d(Cell):
    r"""
    LPPool2d pooling operation.

    Applies a 2D power LP pooling over an input signal composed of several input planes.

    Typically the input is of shape :math:`(N, C, H_{in}, W_{in})`, and the output is of shape
    :math:`(N, C, H_{out}, W_{out})`. The operation is as follows.

    .. math::
        f(X) = \sqrt[p]{\sum_{x \in X} x^{p}}

    Args:
        norm_type (Union[int, float]): Type of normalization, represents p in the formula,

            - if p = 1, one gets Sum Pooling (which is proportional to Average Pooling),
            - if p = :math:`\infty`, one gets Max Pooling.

        kernel_size (Union[int, tuple[int]]): The size of the kernel window. If an int,
            the value is both the height and width of the kernel; if a tuple of two ints,
            they represent the kernel height and width respectively.
        stride (Union[int, tuple[int]]): The distance the kernel moves. If an int, the value
            is both the height and width of the movement; if a tuple of two ints, they represent
            the height and width of the movement respectively. If the value is None,
            the default value `kernel_size` is used.
        ceil_mode (bool): Whether to use ceil or floor to calculate the output shape. Default: False.

    Inputs:
        - **x** (Tensor) - Tensor of shape :math:`(N, C, H_{in}, W_{in})`.

    Outputs:
        - **output** (Tensor) - LPPool2d result, with shape :math:`(N, C, H_{out}, W_{out})`,
          with the same data type as `x`.

    Raises:
        TypeError: If `x` is not a Tensor.
        TypeError: If `kernel_size` or `stride` is neither int nor tuple.
        TypeError: If `ceil_mode` is not a bool.
        ValueError: If `kernel_size` or `stride` is less than 1.
        ValueError: If `kernel_size` or `stride` is a tuple whose length is not equal to `2`.
        ValueError: If the length of the shape of `x` is not equal to 4.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> import mindspore as ms
        >>> import mindspore.nn as nn
        >>> from mindspore import Tensor
        >>> import numpy as np
        >>> a = Tensor(np.arange(2 * 3 * 4 * 5).reshape((2, 3, 4, 5)), dtype=ms.float32)
        >>> net = nn.LPPool2d(norm_type=1, kernel_size=3, stride=1)
        >>> out = net(a)
        >>> print(out)
        [[[[ 54. 63. 72.]
           [ 99. 108. 117.]]
          [[ 234. 243. 252.]
           [ 279. 288. 297.]]
          [[ 414. 423. 432.]
           [ 459. 468. 477.]]]
         [[[ 594. 603. 612.]
           [ 639. 648. 657.]]
          [[ 774. 783. 792.]
           [ 819. 828. 837.]]
          [[ 954. 963. 972.]
           [ 999. 1008. 1017.]]]]
    """

    def __init__(self, norm_type, kernel_size, stride=None, ceil_mode=False):
        super(LPPool2d, self).__init__()
        self.norm_type = norm_type
        self.kernel_size = kernel_size
        self.stride = stride
        self.ceil_mode = ceil_mode

    def construct(self, x):
        return ops.lp_pool2d(x, float(self.norm_type), self.kernel_size,
                             self.stride, self.ceil_mode)


class MaxPool3d(Cell):
    r"""
    3D max pooling operation.
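Both Cells are thin wrappers: they store the hyperparameters in `__init__` and forward them to the functional op in `construct`. A framework-free sketch of that pattern (class and function names here are illustrative, not MindSpore API):

```python
def lp_pool1d_fn(x, norm_type, kernel_size, stride):
    # Minimal stand-in for the functional op: p-th root of the
    # sum of p-th powers over each sliding window of a 1D list.
    l_out = (len(x) - kernel_size) // stride + 1
    return [sum(v ** norm_type for v in x[i * stride:i * stride + kernel_size]) ** (1.0 / norm_type)
            for i in range(l_out)]

class LPPool1dLike:
    """Mirror of the Cell pattern: store hyperparameters at construction,
    forward them to the functional op on call."""

    def __init__(self, norm_type, kernel_size, stride=None):
        self.norm_type = float(norm_type)
        self.kernel_size = kernel_size
        self.stride = kernel_size if stride is None else stride

    def __call__(self, x):
        # Delegate to the functional implementation, mirroring construct().
        return lp_pool1d_fn(x, self.norm_type, self.kernel_size, self.stride)

print(LPPool1dLike(norm_type=1, kernel_size=3, stride=1)([0.0, 1.0, 2.0, 3.0]))  # [3.0, 6.0]
```

Keeping the math in a functional op and the state in a Cell keeps both pieces reusable and independently testable, which is exactly how this commit structures `nn.LPPool1d` over `ops.lp_pool1d`.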
@@ -340,6 +340,8 @@ from .nn_func import (
     elu,
     gelu,
     hinge_embedding_loss,
+    lp_pool1d,
+    lp_pool2d,
 )
 from .linalg_func import (
     svd,
@@ -201,6 +201,23 @@ def adaptive_avg_pool3d(input_x, output_size):
    return adaptive_avg_pool3d_(input_x)


@constexpr
def _check_avgpool_1d_type_and_int(kernel_size, stride, ceil_mode, count_include_pad):
    """Check the types and values of avg_pool1d inputs."""
    validator.check_value_type('kernel_size', kernel_size, [int], 'avg_pool1d')
    validator.check_value_type('stride', stride, [int], 'avg_pool1d')
    validator.check_value_type('ceil_mode', ceil_mode, bool, 'avg_pool1d')
    validator.check_value_type('count_include_pad', count_include_pad, bool, 'avg_pool1d')
    validator.check_int(kernel_size, 1, Rel.GE, "kernel_size", 'avg_pool1d')
    validator.check_int(stride, 1, Rel.GE, "stride", 'avg_pool1d')


@constexpr
def check_non_negative_int(arg_value, arg_name=None, prim_name=None):
    """Check that the argument is a non-negative integer, i.e. arg_value >= 0."""
    validator.check_non_negative_int(arg_value, arg_name, prim_name)


def avg_pool1d(input_x, kernel_size=1, stride=1, padding=0, ceil_mode=False, count_include_pad=True):
    r"""
    1D average pooling for temporal data.
@@ -256,21 +273,15 @@ def avg_pool1d(input_x, kernel_size=1, stride=1, padding=0, ceil_mode=False, cou
     if len(input_x.shape) != 3:
         raise ValueError("For avg_pool1d, input must have 3 dim, but got {}.".format(len(input_x.shape)))
 
-    validator.check_value_type('kernel_size', kernel_size, [int], 'avg_pool1d')
-    validator.check_value_type('stride', stride, [int], 'avg_pool1d')
-    validator.check_value_type('ceil_mode', ceil_mode, bool, 'avg_pool1d')
-    validator.check_value_type('count_include_pad', count_include_pad, bool, 'avg_pool1d')
-    validator.check_int(kernel_size, 1, Rel.GE, "kernel_size", 'avg_pool1d')
-    validator.check_int(stride, 1, Rel.GE, "stride", 'avg_pool1d')
-
+    _check_avgpool_1d_type_and_int(kernel_size, stride, ceil_mode, count_include_pad)
     if isinstance(padding, int):
-        validator.check_non_negative_int(padding, 'padding', 'avg_pool1d')
+        check_non_negative_int(padding, 'padding', 'avg_pool1d')
         padding = (0, 0, 0, 0, padding, padding)
     elif isinstance(padding, tuple):
         if len(padding) != 2:
             raise ValueError("For avg_pool1d, padding should be int or tuple of length 2.")
         for item in padding:
-            validator.check_non_negative_int(item, 'padding', 'avg_pool1d')
+            check_non_negative_int(item, 'padding', 'avg_pool1d')
         padding = (0, 0, 0, 0, padding[0], padding[1])
     else:
         raise TypeError("For avg_pool1d, padding should be int or tuple of length 2.")
@@ -290,6 +301,65 @@ def avg_pool1d(input_x, kernel_size=1, stride=1, padding=0, ceil_mode=False, cou
    return input_x


@constexpr
def _check_avgpool_2d_kernel_size(kernel_size):
    """Check and canonicalize the avg_pool2d kernel_size."""
    if isinstance(kernel_size, int):
        validator.check_int(kernel_size, 1, Rel.GE, "kernel_size", 'avg_pool2d')
        kernel_size = (1, kernel_size, kernel_size)
    elif isinstance(kernel_size, tuple):
        if len(kernel_size) != 2:
            raise ValueError("For avg_pool2d, kernel_size should be int or tuple of length 2.")
        for item in kernel_size:
            validator.check_int(item, 1, Rel.GE, "kernel_size", 'avg_pool2d')
        kernel_size = (1, kernel_size[0], kernel_size[1])
    else:
        raise TypeError("For avg_pool2d, kernel_size should be int or tuple of length 2.")
    return kernel_size


@constexpr
def _check_avgpool_2d_stride(stride):
    """Check and canonicalize the avg_pool2d stride."""
    if isinstance(stride, int):
        validator.check_int(stride, 1, Rel.GE, "stride", 'avg_pool2d')
        stride = (1, stride, stride)
    elif isinstance(stride, tuple):
        if len(stride) != 2:
            raise ValueError("For avg_pool2d, stride should be int or tuple of length 2.")
        for item in stride:
            validator.check_int(item, 1, Rel.GE, "stride", 'avg_pool2d')
        stride = (1, stride[0], stride[1])
    else:
        raise TypeError("For avg_pool2d, stride should be int or tuple of length 2.")
    return stride


@constexpr
def _check_avgpool_2d_padding(padding):
    """Check and canonicalize the avg_pool2d padding."""
    if isinstance(padding, int):
        validator.check_non_negative_int(padding, 'padding', 'avg_pool2d')
        padding = (0, 0, padding, padding, padding, padding)
    elif isinstance(padding, tuple):
        if len(padding) != 4:
            raise ValueError("For avg_pool2d, padding should be int or tuple of length 4.")
        for item in padding:
            validator.check_non_negative_int(item, 'padding', 'avg_pool2d')
        padding = (0, 0, padding[0], padding[1], padding[2], padding[3])
    else:
        raise TypeError("For avg_pool2d, padding should be int or tuple of length 4.")
    return padding


@constexpr
def _check_avg_pool2d_type_and_value(ceil_mode, count_include_pad, divisor_override):
    """Check the types and values of avg_pool2d inputs."""
    validator.check_value_type('ceil_mode', ceil_mode, bool, 'avg_pool2d')
    validator.check_value_type('count_include_pad', count_include_pad, bool, 'avg_pool2d')
    validator.check_non_negative_int(divisor_override, 'divisor_override', 'avg_pool2d')


def avg_pool2d(input_x, kernel_size=1, stride=1, padding=0, ceil_mode=False, count_include_pad=True,
               divisor_override=0):
    r"""
@@ -358,45 +428,10 @@ def avg_pool2d(input_x, kernel_size=1, stride=1, padding=0, ceil_mode=False, cou
     if len(input_x.shape) != 4:
         raise ValueError("For avg_pool2d, input must have 4 dim, but got {}.".format(len(input_x.shape)))
 
-    if isinstance(kernel_size, int):
-        validator.check_int(kernel_size, 1, Rel.GE, "kernel_size", 'avg_pool2d')
-        kernel_size = (1, kernel_size, kernel_size)
-    elif isinstance(kernel_size, tuple):
-        if len(kernel_size) != 2:
-            raise ValueError("For avg_pool2d, kernel_size should be int or tuple of length 2.")
-        for item in kernel_size:
-            validator.check_int(item, 1, Rel.GE, "kernel_size", 'avg_pool2d')
-        kernel_size = (1, kernel_size[0], kernel_size[1])
-    else:
-        raise TypeError("For avg_pool2d, kernel_size should be int or tuple of length 2.")
-
-    if isinstance(stride, int):
-        validator.check_int(stride, 1, Rel.GE, "stride", 'avg_pool2d')
-        stride = (1, stride, stride)
-    elif isinstance(stride, tuple):
-        if len(stride) != 2:
-            raise ValueError("For avg_pool2d, stride should be int or tuple of length 2.")
-        for item in stride:
-            validator.check_int(item, 1, Rel.GE, "stride", 'avg_pool2d')
-        stride = (1, stride[0], stride[1])
-    else:
-        raise TypeError("For avg_pool2d, stride should be int or tuple of length 2.")
-
-    if isinstance(padding, int):
-        validator.check_non_negative_int(padding, 'padding', 'avg_pool2d')
-        padding = (0, 0, padding, padding, padding, padding)
-    elif isinstance(padding, tuple):
-        if len(padding) != 4:
-            raise ValueError("For avg_pool2d, padding should be int or tuple of length 4.")
-        for item in padding:
-            validator.check_non_negative_int(item, 'padding', 'avg_pool2d')
-        padding = (0, 0, padding[0], padding[1], padding[2], padding[3])
-    else:
-        raise TypeError("For avg_pool2d, padding should be int or tuple of length 4.")
-
-    validator.check_value_type('ceil_mode', ceil_mode, bool, 'avg_pool2d')
-    validator.check_value_type('count_include_pad', count_include_pad, bool, 'avg_pool2d')
-    validator.check_non_negative_int(divisor_override, 'divisor_override', 'avg_pool2d')
+    kernel_size = _check_avgpool_2d_kernel_size(kernel_size)
+    stride = _check_avgpool_2d_stride(stride)
+    padding = _check_avgpool_2d_padding(padding)
+    _check_avg_pool2d_type_and_value(ceil_mode, count_include_pad, divisor_override)
 
     expand_op = _get_cache_prim(P.ExpandDims)()
     squeeze_op = _get_cache_prim(P.Squeeze)(2)
@@ -3498,6 +3533,162 @@ def gelu(input_x, approximate='none'):
    return output


@constexpr
def _shape_check(in_shape, dim_list, prim_name=None):
    msg_prefix = f"For '{prim_name}', the" if prim_name else "The"
    if len(in_shape) not in dim_list:
        raise ValueError(f"{msg_prefix} input must have a dim in {dim_list}, but got {len(in_shape)}")


def lp_pool1d(x, norm_type, kernel_size, stride=None, ceil_mode=False):
    r"""
    LPPool1d pooling operation.

    Applies a 1D power LP pooling over an input signal composed of several input planes.

    Typically the input is of shape :math:`(N, C, L_{in})` or :math:`(C, L_{in})`, and the output is of shape
    :math:`(N, C, L_{out})` or :math:`(C, L_{out})`. The operation is as follows.

    .. math::
        f(X) = \sqrt[p]{\sum_{x \in X} x^{p}}

    Args:
        x (Tensor): Tensor of shape :math:`(N, C, L_{in})` or :math:`(C, L_{in})`.
        norm_type (Union[int, float]): Type of normalization, represents p in the formula,

            - if p = 1, one gets Sum Pooling (which is proportional to Average Pooling),
            - if p = :math:`\infty`, one gets Max Pooling.

        kernel_size (int): The size of the kernel window.
        stride (int): The distance the kernel moves, an int number. If the value is None,
            the default value `kernel_size` is used.
        ceil_mode (bool): Whether to use ceil or floor to calculate the output shape. Default: False.

    Returns:
        - **output** (Tensor) - LPPool1d result, with shape :math:`(N, C, L_{out})` or :math:`(C, L_{out})`,
          with the same data type as `x`.

    Raises:
        TypeError: If `x` is not a Tensor.
        TypeError: If `kernel_size` or `stride` is not an int.
        TypeError: If `ceil_mode` is not a bool.
        ValueError: If `kernel_size` or `stride` is less than 1.
        ValueError: If the length of the shape of `x` is not equal to 2 or 3.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> import mindspore as ms
        >>> import mindspore.ops as ops
        >>> from mindspore import Tensor
        >>> import numpy as np
        >>> x = Tensor(np.arange(2 * 3 * 4).reshape((2, 3, 4)), dtype=ms.float32)
        >>> out = ops.lp_pool1d(x, norm_type=1, kernel_size=3, stride=1, ceil_mode=False)
        >>> print(out)
        [[[ 3.  6.]
          [15. 18.]
          [27. 30.]]
         [[39. 42.]
          [51. 54.]
          [63. 66.]]]
    """
    _shape_check(x.shape, [2, 3], "lp_pool1d")
    sign = _get_cache_prim(ops.Sign)()
    squeeze = _get_cache_prim(ops.Squeeze)(0)
    expand_dims = _get_cache_prim(ops.ExpandDims)()
    _is_squeeze = False
    if len(x.shape) == 2:
        x = expand_dims(x, 0)
        _is_squeeze = True
    if stride is not None:
        out = ops.avg_pool1d(x.pow(norm_type), kernel_size=kernel_size, stride=stride, padding=0, ceil_mode=ceil_mode)
    else:
        out = ops.avg_pool1d(x.pow(norm_type), kernel_size=kernel_size, stride=kernel_size, padding=0,
                             ceil_mode=ceil_mode)
    if _is_squeeze:
        out = squeeze(out)
    return ((sign(out) * ops.relu(ops.abs(out))) * kernel_size).pow(1.0 / norm_type)
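The final expression `(sign(out) * relu(abs(out))) * kernel_size` reduces, for real inputs, to `out * kernel_size`, since sign(x)·|x| = x and relu(|x|) = |x|; multiplying by `kernel_size` converts the window average produced by `avg_pool1d` back into the window sum before the p-th root. A numpy check of that reduction (illustrative sketch, not MindSpore code):

```python
import numpy as np

out = np.array([-2.5, 0.0, 3.0, 7.5])

# sign(x) * relu(|x|) recomposes x exactly for any real x.
recomposed = np.sign(out) * np.maximum(np.abs(out), 0.0)
assert np.allclose(recomposed, out)

# With nonnegative pooled powers (even p, or nonnegative inputs), the last step
# is the p-th root of (window average * kernel_size), i.e. of the window sum.
avg, kernel_size, p = np.array([0.0, 1.0, 4.0, 9.0]), 3, 2.0
result = (avg * kernel_size) ** (1.0 / p)
assert np.allclose(result, np.sqrt(avg * 3.0))
```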
def lp_pool2d(x, norm_type, kernel_size, stride=None, ceil_mode=False):
    r"""
    LPPool2d pooling operation.

    Applies a 2D power LP pooling over an input signal composed of several input planes.

    Typically the input is of shape :math:`(N, C, H_{in}, W_{in})`, and the output is of shape
    :math:`(N, C, H_{out}, W_{out})`. The operation is as follows.

    .. math::
        f(X) = \sqrt[p]{\sum_{x \in X} x^{p}}

    Args:
        x (Tensor): Tensor of shape :math:`(N, C, H_{in}, W_{in})`.
        norm_type (Union[int, float]): Type of normalization, represents p in the formula,

            - if p = 1, one gets Sum Pooling (which is proportional to Average Pooling),
            - if p = :math:`\infty`, one gets Max Pooling.

        kernel_size (Union[int, tuple[int]]): The size of the kernel window. If an int,
            the value is both the height and width of the kernel; if a tuple of two ints,
            they represent the kernel height and width respectively.
        stride (Union[int, tuple[int]]): The distance the kernel moves. If an int, the value
            is both the height and width of the movement; if a tuple of two ints, they represent
            the height and width of the movement respectively. If the value is None,
            the default value `kernel_size` is used.
        ceil_mode (bool): Whether to use ceil or floor to calculate the output shape. Default: False.

    Returns:
        - **output** (Tensor) - LPPool2d result, with shape :math:`(N, C, H_{out}, W_{out})`,
          with the same data type as `x`.

    Raises:
        TypeError: If `x` is not a Tensor.
        TypeError: If `kernel_size` or `stride` is neither int nor tuple.
        TypeError: If `ceil_mode` is not a bool.
        ValueError: If `kernel_size` or `stride` is less than 1.
        ValueError: If `kernel_size` or `stride` is a tuple whose length is not equal to `2`.
        ValueError: If the length of the shape of `x` is not equal to 4.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> import mindspore as ms
        >>> import mindspore.ops as ops
        >>> from mindspore import Tensor
        >>> import numpy as np
        >>> x = Tensor(np.arange(2 * 3 * 4 * 5).reshape((2, 3, 4, 5)), dtype=ms.float32)
        >>> out = ops.lp_pool2d(x, norm_type=1, kernel_size=3, stride=1, ceil_mode=False)
        >>> print(out)
        [[[[ 54. 63. 72.]
           [ 99. 108. 117.]]
          [[ 234. 243. 252.]
           [ 279. 288. 297.]]
          [[ 414. 423. 432.]
           [ 459. 468. 477.]]]
         [[[ 594. 603. 612.]
           [ 639. 648. 657.]]
          [[ 774. 783. 792.]
           [ 819. 828. 837.]]
          [[ 954. 963. 972.]
           [ 999. 1008. 1017.]]]]
    """
    _shape_check(x.shape, [4], "lp_pool2d")
    sign = _get_cache_prim(ops.Sign)()
    if not isinstance(kernel_size, tuple):
        kernel_size = tuple((kernel_size, kernel_size))
    kh, kw = kernel_size
    if stride is not None:
        out = ops.avg_pool2d(x.pow(norm_type), kernel_size=kernel_size, stride=stride, padding=0, ceil_mode=ceil_mode)
    else:
        out = ops.avg_pool2d(x.pow(norm_type), kernel_size=kernel_size, stride=kernel_size, padding=0,
                             ceil_mode=ceil_mode)
    return ((sign(out) * ops.relu(ops.abs(out))) * (kh * kw)).pow(1.0 / norm_type)


__all__ = [
    'adaptive_avg_pool1d',
    'adaptive_avg_pool2d',
@@ -3551,6 +3742,8 @@ __all__ = [
     'multi_label_margin_loss',
     'elu',
     'gelu',
-    'hinge_embedding_loss'
+    'hinge_embedding_loss',
+    'lp_pool1d',
+    'lp_pool2d',
 ]
 __all__.sort()
@@ -0,0 +1,63 @@
# Copyright 2022 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================

import numpy as np
import pytest

import mindspore as ms
import mindspore.nn as nn


class Net(nn.Cell):
    def __init__(self):
        super(Net, self).__init__()
        self.pool = nn.LPPool1d(norm_type=1, kernel_size=3, stride=1)

    def construct(self, x):
        out = self.pool(x)
        return out


@pytest.mark.level0
@pytest.mark.platform_x86_cpu
@pytest.mark.platform_arm_cpu
@pytest.mark.platform_x86_gpu_training
@pytest.mark.platform_arm_ascend_training
@pytest.mark.platform_x86_ascend_training
@pytest.mark.env_onecard
@pytest.mark.parametrize('mode', [ms.GRAPH_MODE, ms.PYNATIVE_MODE])
def test_lppool1d_normal(mode):
    """
    Feature: LPPool1d
    Description: Verify the result of LPPool1d
    Expectation: success
    """
    ms.set_context(mode=mode)
    net = Net()
    x = ms.Tensor(np.arange(2 * 3 * 4).reshape((2, 3, 4)), dtype=ms.float32)
    y = ms.Tensor(np.arange(3 * 4).reshape((3, 4)), dtype=ms.float32)
    out = net(x)
    out2 = net(y)
    expect_out = np.array([[[3., 6.],
                            [15., 18.],
                            [27., 30.]],
                           [[39., 42.],
                            [51., 54.],
                            [63., 66.]]])
    expect_out2 = np.array([[3., 6.],
                            [15., 18.],
                            [27., 30.]])
    assert np.allclose(out.asnumpy(), expect_out)
    assert np.allclose(out2.asnumpy(), expect_out2)
@ -0,0 +1,63 @@
# Copyright 2022 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================

import numpy as np
import pytest

import mindspore as ms
import mindspore.nn as nn


class Net(nn.Cell):
    def __init__(self):
        super(Net, self).__init__()
        self.pool = nn.LPPool2d(norm_type=1, kernel_size=3, stride=1)

    def construct(self, x):
        out = self.pool(x)
        return out


@pytest.mark.level0
@pytest.mark.platform_x86_cpu
@pytest.mark.platform_arm_cpu
@pytest.mark.platform_x86_gpu_training
@pytest.mark.platform_arm_ascend_training
@pytest.mark.platform_x86_ascend_training
@pytest.mark.env_onecard
@pytest.mark.parametrize('mode', [ms.GRAPH_MODE, ms.PYNATIVE_MODE])
def test_lppool2d_normal(mode):
    """
    Feature: LPPool2d
    Description: Verify the result of LPPool2d
    Expectation: success
    """
    ms.set_context(mode=mode)
    net = Net()
    x = ms.Tensor(np.arange(2 * 3 * 4 * 5).reshape((2, 3, 4, 5)), dtype=ms.float32)
    out = net(x)
    expect_out = np.array([[[[54., 63., 72.],
                             [99., 108., 117.]],
                            [[234., 243., 252.],
                             [279., 288., 297.]],
                            [[414., 423., 432.],
                             [459., 468., 477.]]],
                           [[[594., 603., 612.],
                             [639., 648., 657.]],
                            [[774., 783., 792.],
                             [819., 828., 837.]],
                            [[954., 963., 972.],
                             [999., 1008., 1017.]]]])
    assert np.allclose(out.asnumpy(), expect_out)
@ -0,0 +1,60 @@
# Copyright 2022 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================

import numpy as np
import pytest

import mindspore as ms
import mindspore.nn as nn
import mindspore.ops as ops


class Net(nn.Cell):
    def construct(self, x):
        out = ops.lp_pool1d(x, norm_type=1, kernel_size=3, stride=1)
        return out


@pytest.mark.level0
@pytest.mark.platform_x86_cpu
@pytest.mark.platform_arm_cpu
@pytest.mark.platform_x86_gpu_training
@pytest.mark.platform_arm_ascend_training
@pytest.mark.platform_x86_ascend_training
@pytest.mark.env_onecard
@pytest.mark.parametrize('mode', [ms.GRAPH_MODE, ms.PYNATIVE_MODE])
def test_lppool1d_normal(mode):
    """
    Feature: lp_pool1d
    Description: Verify the result of lp_pool1d
    Expectation: success
    """
    ms.set_context(mode=mode)
    net = Net()
    x = ms.Tensor(np.arange(2 * 3 * 4).reshape((2, 3, 4)), dtype=ms.float32)
    y = ms.Tensor(np.arange(3 * 4).reshape((3, 4)), dtype=ms.float32)
    out = net(x)
    out2 = net(y)
    expect_out = np.array([[[3., 6.],
                            [15., 18.],
                            [27., 30.]],
                           [[39., 42.],
                            [51., 54.],
                            [63., 66.]]])
    expect_out2 = np.array([[3., 6.],
                            [15., 18.],
                            [27., 30.]])
    assert np.allclose(out.asnumpy(), expect_out)
    assert np.allclose(out2.asnumpy(), expect_out2)
@ -0,0 +1,60 @@
# Copyright 2022 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================

import numpy as np
import pytest

import mindspore as ms
import mindspore.nn as nn
import mindspore.ops as ops


class Net(nn.Cell):
    def construct(self, x):
        out = ops.lp_pool2d(x, norm_type=1, kernel_size=3, stride=1)
        return out


@pytest.mark.level0
@pytest.mark.platform_x86_cpu
@pytest.mark.platform_arm_cpu
@pytest.mark.platform_x86_gpu_training
@pytest.mark.platform_arm_ascend_training
@pytest.mark.platform_x86_ascend_training
@pytest.mark.env_onecard
@pytest.mark.parametrize('mode', [ms.GRAPH_MODE, ms.PYNATIVE_MODE])
def test_lppool2d_normal(mode):
    """
    Feature: lp_pool2d
    Description: Verify the result of lp_pool2d
    Expectation: success
    """
    ms.set_context(mode=mode)
    net = Net()
    x = ms.Tensor(np.arange(2 * 3 * 4 * 5).reshape((2, 3, 4, 5)), dtype=ms.float32)
    out = net(x)
    expect_out = np.array([[[[54., 63., 72.],
                             [99., 108., 117.]],
                            [[234., 243., 252.],
                             [279., 288., 297.]],
                            [[414., 423., 432.],
                             [459., 468., 477.]]],
                           [[[594., 603., 612.],
                             [639., 648., 657.]],
                            [[774., 783., 792.],
                             [819., 828., 837.]],
                            [[954., 963., 972.],
                             [999., 1008., 1017.]]]])
    assert np.allclose(out.asnumpy(), expect_out)
@ -64,6 +64,52 @@ def test_compile_avg():
    Description: Test the functionality of AvgPool3d
    Expectation: Success
    """
    net = AvgPoolNet()
    x = ms.Tensor(np.random.randint(0, 10, [1, 2, 4, 4, 5]), ms.float32)
    _cell_graph_executor.compile(net, x)


class LPPool1d(nn.Cell):
    """LPPool1d"""

    def __init__(self):
        super(LPPool1d, self).__init__()
        self.pool = nn.LPPool1d(norm_type=1, kernel_size=3, stride=1)

    def construct(self, x):
        output1 = self.pool(x)
        return output1


def test_compile_lppool1d():
    """
    Feature: Test LPPool1d
    Description: Test the functionality of LPPool1d
    Expectation: Success
    """
    net = LPPool1d()
    x = ms.Tensor(np.arange(2 * 3 * 4).reshape((2, 3, 4)), dtype=ms.float32)
    y = ms.Tensor(np.arange(3 * 4).reshape((3, 4)), dtype=ms.float32)
    _cell_graph_executor.compile(net, x)
    _cell_graph_executor.compile(net, y)


class LPPool2d(nn.Cell):
    """LPPool2d"""

    def __init__(self):
        super(LPPool2d, self).__init__()
        self.pool = nn.LPPool2d(norm_type=1, kernel_size=3, stride=1)

    def construct(self, x):
        out = self.pool(x)
        return out


def test_compile_lppool2d():
    """
    Feature: Test LPPool2d
    Description: Test the functionality of LPPool2d
    Expectation: Success
    """
    net = LPPool2d()
    x = ms.Tensor(np.arange(2 * 3 * 4 * 5).reshape((2, 3, 4, 5)), dtype=ms.float32)
    _cell_graph_executor.compile(net, x)
@ -0,0 +1,63 @@
# Copyright 2022 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""
test pooling api
"""
import numpy as np

import mindspore as ms
import mindspore.nn as nn
import mindspore.ops as ops
from mindspore.common.api import _cell_graph_executor


class LPPool1d(nn.Cell):
    """LPPool1d"""

    def construct(self, x):
        output = ops.lp_pool1d(x, norm_type=1, kernel_size=3, stride=1)
        return output


def test_compile_lppool1d():
    """
    Feature: Test lp_pool1d
    Description: Test the functionality of lp_pool1d
    Expectation: Success
    """
    net = LPPool1d()
    x = ms.Tensor(np.arange(2 * 3 * 4).reshape((2, 3, 4)), dtype=ms.float32)
    y = ms.Tensor(np.arange(3 * 4).reshape((3, 4)), dtype=ms.float32)
    _cell_graph_executor.compile(net, x)
    _cell_graph_executor.compile(net, y)


class LPPool2d(nn.Cell):
    """LPPool2d"""

    def construct(self, x):
        out = ops.lp_pool2d(x, norm_type=1, kernel_size=3, stride=1)
        return out


def test_compile_lppool2d():
    """
    Feature: Test lp_pool2d
    Description: Test the functionality of lp_pool2d
    Expectation: Success
    """
    net = LPPool2d()
    x = ms.Tensor(np.arange(2 * 3 * 4 * 5).reshape((2, 3, 4, 5)), dtype=ms.float32)
    _cell_graph_executor.compile(net, x)
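As a sanity check on the expected values used in these tests: with norm_type=1, LP pooling reduces to a sliding-window sum, so the expectations can be reproduced with plain NumPy, independent of MindSpore. This is a minimal sketch; the helper name `lp_pool1d_ref` is hypothetical and not part of any API.

```python
import numpy as np

def lp_pool1d_ref(x, norm_type, kernel_size, stride):
    # Reference LP pooling over the last axis:
    # (sum of x**p over each window) ** (1/p).
    windows = [x[..., i:i + kernel_size]
               for i in range(0, x.shape[-1] - kernel_size + 1, stride)]
    return np.stack([np.sum(w ** norm_type, axis=-1) ** (1.0 / norm_type)
                     for w in windows], axis=-1)

x = np.arange(2 * 3 * 4, dtype=np.float32).reshape(2, 3, 4)
out = lp_pool1d_ref(x, norm_type=1, kernel_size=3, stride=1)
# out[0] reproduces the first batch of expect_out in the tests:
# [[3., 6.], [15., 18.], [27., 30.]]
```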