Rectification of operator ease of use part 2

This commit is contained in:
dinglinhe 2021-06-08 14:05:30 +08:00
parent f98497ca09
commit b1b4375417
14 changed files with 657 additions and 326 deletions

View File

@ -77,9 +77,9 @@ class Softmax(Cell):
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> input_x = Tensor(np.array([-1, -2, 0, 2, 1]), mindspore.float16)
>>> x = Tensor(np.array([-1, -2, 0, 2, 1]), mindspore.float16)
>>> softmax = nn.Softmax()
>>> output = softmax(input_x)
>>> output = softmax(x)
>>> print(output)
[0.03168 0.01166 0.0861 0.636 0.2341 ]
"""
@ -127,9 +127,9 @@ class LogSoftmax(Cell):
``Ascend`` ``GPU``
Examples:
>>> input_x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> log_softmax = nn.LogSoftmax()
>>> output = log_softmax(input_x)
>>> output = log_softmax(x)
>>> print(output)
[[-5.00672150e+00 -6.72150636e-03 -1.20067215e+01]
[-7.00091219e+00 -1.40009127e+01 -9.12250078e-04]]
@ -165,23 +165,24 @@ class ELU(Cell):
alpha (float): The coefficient of negative factor whose type is float. Default: 1.0.
Inputs:
- **input_data** (Tensor) - The input of ELU with data type of float16 or float32.
- **x** (Tensor) - The input of ELU with data type of float16 or float32.
The shape is :math:`(N,*)` where :math:`*` means, any number of additional dimensions.
Outputs:
Tensor, with the same type and shape as the `input_data`.
Tensor, with the same type and shape as the `x`.
Raises:
TypeError: If `alpha` is not a float.
TypeError: If dtype of `input_data` is neither float16 nor float32.
TypeError: If dtype of `x` is neither float16 nor float32.
ValueError: If `alpha` is not equal to 1.0.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> input_x = Tensor(np.array([-1, -2, 0, 2, 1]), mindspore.float32)
>>> x = Tensor(np.array([-1, -2, 0, 2, 1]), mindspore.float32)
>>> elu = nn.ELU()
>>> result = elu(input_x)
>>> result = elu(x)
>>> print(result)
[-0.63212055 -0.86466473 0. 2. 1.]
"""
@ -212,21 +213,22 @@ class ReLU(Cell):
Activation_function#/media/File:Activation_rectified_linear.svg>`_.
Inputs:
- **input_data** (Tensor) - The input of ReLU.
- **x** (Tensor) - The input of ReLU. The data type is Number.
The shape is :math:`(N,*)` where :math:`*` means, any number of additional dimensions.
Outputs:
Tensor, with the same type and shape as the `input_data`.
Tensor, with the same type and shape as the `x`.
Raises:
TypeError: If dtype of `input_data` is not a number.
TypeError: If dtype of `x` is not a number.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> input_x = Tensor(np.array([-1, 2, -3, 2, -1]), mindspore.float16)
>>> x = Tensor(np.array([-1, 2, -3, 2, -1]), mindspore.float16)
>>> relu = nn.ReLU()
>>> output = relu(input_x)
>>> output = relu(x)
>>> print(output)
[0. 2. 0. 2. 0.]
"""
@ -255,21 +257,22 @@ class ReLU6(Cell):
The input is a Tensor of any valid shape.
Inputs:
- **input_data** (Tensor) - The input of ReLU6 with data type of float16 or float32.
- **x** (Tensor) - The input of ReLU6 with data type of float16 or float32.
The shape is :math:`(N,*)` where :math:`*` means, any number of additional dimensions.
Outputs:
Tensor, which has the same type as `input_data`.
Tensor, which has the same type as `x`.
Raises:
TypeError: If dtype of `input_data` is neither float16 nor float32.
TypeError: If dtype of `x` is neither float16 nor float32.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> input_x = Tensor(np.array([-1, -2, 0, 2, 1]), mindspore.float16)
>>> x = Tensor(np.array([-1, -2, 0, 2, 1]), mindspore.float16)
>>> relu6 = nn.ReLU6()
>>> output = relu6(input_x)
>>> output = relu6(x)
>>> print(output)
[0. 0. 0. 2. 1.]
"""
@ -300,10 +303,11 @@ class LeakyReLU(Cell):
alpha (Union[int, float]): Slope of the activation function at x < 0. Default: 0.2.
Inputs:
- **input_x** (Tensor) - The input of LeakyReLU.
- **x** (Tensor) - The input of LeakyReLU.
The shape is :math:`(N,*)` where :math:`*` means, any number of additional dimensions.
Outputs:
Tensor, has the same type and shape as the `input_x`.
Tensor, has the same type and shape as the `x`.
Raises:
TypeError: If `alpha` is not a float or an int.
@ -312,9 +316,9 @@ class LeakyReLU(Cell):
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> input_x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> leaky_relu = nn.LeakyReLU()
>>> output = leaky_relu(input_x)
>>> output = leaky_relu(x)
>>> print(output)
[[-0.2 4. -1.6]
[ 2. -1. 9. ]]
@ -352,21 +356,22 @@ class Tanh(Cell):
where :math:`x_i` is an element of the input Tensor.
Inputs:
- **input_data** (Tensor) - The input of Tanh with data type of float16 or float32.
- **x** (Tensor) - The input of Tanh with data type of float16 or float32.
The shape is :math:`(N,*)` where :math:`*` means, any number of additional dimensions.
Outputs:
Tensor, with the same type and shape as the `input_data`.
Tensor, with the same type and shape as the `x`.
Raises:
TypeError: If dtype of `input_data` is neither float16 nor float32.
TypeError: If dtype of `x` is neither float16 nor float32.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> input_x = Tensor(np.array([1, 2, 3, 2, 1]), mindspore.float16)
>>> x = Tensor(np.array([1, 2, 3, 2, 1]), mindspore.float16)
>>> tanh = nn.Tanh()
>>> output = tanh(input_x)
>>> output = tanh(x)
>>> print(output)
[0.7617 0.964 0.995 0.964 0.7617]
"""
@ -399,21 +404,22 @@ class GELU(Cell):
Activation_function#/media/File:Activation_gelu.png>`_.
Inputs:
- **input_data** (Tensor) - The input of GELU with data type of float16 or float32.
- **x** (Tensor) - The input of GELU with data type of float16 or float32.
The shape is :math:`(N,*)` where :math:`*` means, any number of additional dimensions.
Outputs:
Tensor, with the same type and shape as the `input_data`.
Tensor, with the same type and shape as the `x`.
Raises:
TypeError: If dtype of `input_data` is neither float16 nor float32.
TypeError: If dtype of `x` is neither float16 nor float32.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> input_x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> gelu = nn.GELU()
>>> output = gelu(input_x)
>>> output = gelu(x)
>>> print(output)
[[-1.5880802e-01 3.9999299e+00 -3.1077917e-21]
[ 1.9545976e+00 -2.2918017e-07 9.0000000e+00]]
@ -443,21 +449,22 @@ class FastGelu(Cell):
where :math:`x_i` is the element of the input.
Inputs:
- **input_data** (Tensor) - The input of FastGelu with data type of float16 or float32.
- **x** (Tensor) - The input of FastGelu with data type of float16 or float32.
The shape is :math:`(N,*)` where :math:`*` means, any number of additional dimensions.
Outputs:
Tensor, with the same type and shape as the `input_data`.
Tensor, with the same type and shape as the `x`.
Raises:
TypeError: If dtype of `input_data` is neither float16 nor float32.
TypeError: If dtype of `x` is neither float16 nor float32.
Supported Platforms:
``Ascend``
Examples:
>>> input_x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> fast_gelu = nn.FastGelu()
>>> output = fast_gelu(input_x)
>>> output = fast_gelu(x)
>>> print(output)
[[-1.5418735e-01 3.9921875e+00 -9.7473649e-06]
[ 1.9375000e+00 -1.0052517e-03 8.9824219e+00]]
@ -490,21 +497,22 @@ class Sigmoid(Cell):
Sigmoid_function#/media/File:Logistic-curve.svg>`_.
Inputs:
- **input_data** (Tensor) - The input of Sigmoid with data type of float16 or float32.
- **x** (Tensor) - The input of Sigmoid with data type of float16 or float32.
The shape is :math:`(N,*)` where :math:`*` means, any number of additional dimensions.
Outputs:
Tensor, with the same type and shape as the `input_data`.
Tensor, with the same type and shape as the `x`.
Raises:
TypeError: If dtype of `input_data` is neither float16 nor float32.
TypeError: If dtype of `x` is neither float16 nor float32.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> input_x = Tensor(np.array([-1, -2, 0, 2, 1]), mindspore.float16)
>>> x = Tensor(np.array([-1, -2, 0, 2, 1]), mindspore.float16)
>>> sigmoid = nn.Sigmoid()
>>> output = sigmoid(input_x)
>>> output = sigmoid(x)
>>> print(output)
[0.2688 0.11914 0.5 0.881 0.7305 ]
"""
@ -544,25 +552,26 @@ class PReLU(Cell):
w (Union[float, list, Tensor]): The initial value of w. Default: 0.25.
Inputs:
- **input_data** (Tensor) - The input of PReLU with data type of float16 or float32.
- **x** (Tensor) - The input of PReLU with data type of float16 or float32.
The shape is :math:`(N,*)` where :math:`*` means, any number of additional dimensions.
Outputs:
Tensor, with the same type and shape as the `input_data`.
Tensor, with the same type and shape as the `x`.
Raises:
TypeError: If `channel` is not an int.
TypeError: If `w` is not one of float, list, Tensor.
TypeError: If dtype of `input_data` is neither float16 nor float32.
TypeError: If dtype of `x` is neither float16 nor float32.
ValueError: If `channel` is less than 1.
ValueError: If length of shape of `input_data` is equal to 1.
ValueError: If length of shape of `x` is equal to 1.
Supported Platforms:
``Ascend`` ``GPU``
Examples:
>>> input_x = Tensor(np.array([[[[0.1, 0.6], [0.9, 0.9]]]]), mindspore.float32)
>>> x = Tensor(np.array([[[[0.1, 0.6], [0.9, 0.9]]]]), mindspore.float32)
>>> prelu = nn.PReLU()
>>> output = prelu(input_x)
>>> output = prelu(x)
>>> print(output)
[[[[0.1 0.6]
[0.9 0.9]]]]
@ -610,21 +619,22 @@ class HSwish(Cell):
where :math:`x_{i}` is the :math:`i`-th slice in the given dimension of the input Tensor.
Inputs:
- **input_data** (Tensor) - The input of HSwish, data type must be float16 or float32.
- **x** (Tensor) - The input of HSwish, data type must be float16 or float32.
The shape is :math:`(N,*)` where :math:`*` means, any number of additional dimensions.
Outputs:
Tensor, with the same type and shape as the `input_data`.
Tensor, with the same type and shape as the `x`.
Raises:
TypeError: If dtype of `input_data` is neither float16 nor float32.
TypeError: If dtype of `x` is neither float16 nor float32.
Supported Platforms:
``GPU`` ``CPU``
Examples:
>>> input_x = Tensor(np.array([-1, -2, 0, 2, 1]), mindspore.float16)
>>> x = Tensor(np.array([-1, -2, 0, 2, 1]), mindspore.float16)
>>> hswish = nn.HSwish()
>>> result = hswish(input_x)
>>> result = hswish(x)
>>> print(result)
[-0.3333 -0.3333 0 1.666 0.6665]
"""
@ -652,21 +662,22 @@ class HSigmoid(Cell):
where :math:`x_{i}` is the :math:`i`-th slice in the given dimension of the input Tensor.
Inputs:
- **input_data** (Tensor) - The input of HSigmoid, data type must be float16 or float32.
- **x** (Tensor) - The input of HSigmoid, data type must be float16 or float32.
The shape is :math:`(N,*)` where :math:`*` means, any number of additional dimensions.
Outputs:
Tensor, with the same type and shape as the `input_data`.
Tensor, with the same type and shape as the `x`.
Raises:
TypeError: If dtype of `input_data` is neither float16 nor float32.
TypeError: If dtype of `x` is neither float16 nor float32.
Supported Platforms:
``GPU`` ``CPU``
Examples:
>>> input_x = Tensor(np.array([-1, -2, 0, 2, 1]), mindspore.float16)
>>> x = Tensor(np.array([-1, -2, 0, 2, 1]), mindspore.float16)
>>> hsigmoid = nn.HSigmoid()
>>> result = hsigmoid(input_x)
>>> result = hsigmoid(x)
>>> print(result)
[0.3333 0.1666 0.5 0.833 0.6665]
"""
@ -694,21 +705,22 @@ class LogSigmoid(Cell):
where :math:`x_{i}` is the element of the input.
Inputs:
- **input_data** (Tensor) - The input of LogSigmoid with data type of float16 or float32.
- **x** (Tensor) - The input of LogSigmoid with data type of float16 or float32.
The shape is :math:`(N,*)` where :math:`*` means, any number of additional dimensions.
Outputs:
Tensor, with the same type and shape as the `input_data`.
Tensor, with the same type and shape as the `x`.
Raises:
TypeError: If dtype of `input_data` is neither float16 nor float32.
TypeError: If dtype of `x` is neither float16 nor float32.
Supported Platforms:
``Ascend`` ``GPU``
Examples:
>>> net = nn.LogSigmoid()
>>> input_x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> output = net(input_x)
>>> x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> output = net(x)
>>> print(output)
[-0.31326166 -0.12692806 -0.04858734]
"""

View File

@ -52,7 +52,8 @@ class L1Regularizer(Cell):
scale (int, float): l1 regularization factor, which must be greater than 0.
Inputs:
- **weights** (Tensor) - The input tensor
- **weights** (Tensor) - The input of L1Regularizer with data type of float16 or float32.
The shape is :math:`(N,*)` where :math:`*` means, any number of additional dimensions.
Outputs:
Tensor, whose dtype is the higher precision data type between mindspore.float32 and the weights dtype,
@ -102,7 +103,7 @@ class Dropout(Cell):
The outputs are scaled by a factor of :math:`\frac{1}{keep\_prob}` during training so
that the output layer remains at a similar scale. During inference, this
layer returns the same tensor as the input.
layer returns the same tensor as the `x`.
This technique is proposed in paper `Dropout: A Simple Way to Prevent Neural Networks from Overfitting
<http://www.cs.toronto.edu/~rsalakhu/papers/srivastava14a.pdf>`_ and proved to be effective to reduce
@ -116,19 +117,20 @@ class Dropout(Cell):
Args:
keep_prob (float): The keep rate, greater than 0 and less than or equal to 1. E.g. rate=0.9,
dropping out 10% of input units. Default: 0.5.
dtype (:class:`mindspore.dtype`): Data type of input. Default: mindspore.float32.
dtype (:class:`mindspore.dtype`): Data type of `x`. Default: mindspore.float32.
Inputs:
- **input** (Tensor) - The input of Dropout with data type of float16 or float32.
- **x** (Tensor) - The input of Dropout with data type of float16 or float32.
The shape is :math:`(N,*)` where :math:`*` means, any number of additional dimensions.
Outputs:
Tensor, output tensor with the same shape as the input.
Tensor, output tensor with the same shape as the `x`.
Raises:
TypeError: If `keep_prob` is not a float.
TypeError: If dtype of `input` is not neither float16 nor float32.
TypeError: If dtype of `x` is neither float16 nor float32.
ValueError: If `keep_prob` is not in range (0, 1].
ValueError: If length of shape of `input` is less than 1.
ValueError: If length of shape of `x` is less than 1.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
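A minimal usage sketch, not part of this commit, illustrating the 1/keep_prob scaling described above (the dropped positions are random, so exact outputs vary):
>>> import numpy as np
>>> import mindspore
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> net = nn.Dropout(keep_prob=0.8)
>>> net.set_train()
>>> x = Tensor(np.ones([2, 4]), mindspore.float32)
>>> output = net(x)
>>> # Each kept element is scaled to 1 / 0.8 = 1.25; roughly 20% of the elements become 0.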
@ -177,25 +179,31 @@ class Flatten(Cell):
Flattens a tensor without changing dimension of batch size on the 0-th axis.
Inputs:
- **input** (Tensor) - Tensor of shape :math:`(N, \ldots)` to be flattened.
- **x** (Tensor) - Tensor of shape :math:`(N, \ldots)` to be flattened. The data type is Number.
The shape is :math:`(N,*)` where :math:`*` means, any number of additional dimensions
and the shape can't be ().
Outputs:
Tensor, the shape of the output tensor is :math:`(N, X)`, where :math:`X` is
the product of the remaining dimensions.
Raises:
TypeError: If `input` is not a subclass of Tensor.
TypeError: If `x` is not a subclass of Tensor.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> input = Tensor(np.array([[[1.2, 1.2], [2.1, 2.1]], [[2.2, 2.2], [3.2, 3.2]]]), mindspore.float32)
>>> x = Tensor(np.array([[[1.2, 1.2], [2.1, 2.1]], [[2.2, 2.2], [3.2, 3.2]]]), mindspore.float32)
>>> net = nn.Flatten()
>>> output = net(input)
>>> output = net(x)
>>> print(output)
[[1.2 1.2 2.1 2.1]
[2.2 2.2 3.2 3.2]]
>>> print(f"before flatten the x shape is {x.shape}")
before flatten the x shape is (2, 2, 2)
>>> print(f"after flatten the output shape is {output.shape}")
after flatten the output shape is (2, 4)
"""
def __init__(self):
@ -220,26 +228,27 @@ class Dense(Cell):
Applies dense connected layer for the input. This layer implements the operation as:
.. math::
\text{outputs} = \text{activation}(\text{inputs} * \text{kernel} + \text{bias}),
\text{outputs} = \text{activation}(\text{X} * \text{kernel} + \text{bias}),
where :math:`\text{activation}` is the activation function passed as the activation
where :math:`X` is the input tensor, :math:`\text{activation}` is the activation function passed as the activation
argument (if passed in), :math:`\text{kernel}` is a weight matrix with the same
data type as the inputs created by the layer, and :math:`\text{bias}` is a bias vector
with the same data type as the inputs created by the layer (only if has_bias is True).
data type as the :math:`X` created by the layer, and :math:`\text{bias}` is a bias vector
with the same data type as the :math:`X` created by the layer (only if has_bias is True).
Args:
in_channels (int): The number of channels in the input space.
out_channels (int): The number of channels in the output space.
weight_init (Union[Tensor, str, Initializer, numbers.Number]): The trainable weight_init parameter. The dtype
is same as input x. The values of str refer to the function `initializer`. Default: 'normal'.
is same as `x`. The values of str refer to the function `initializer`. Default: 'normal'.
bias_init (Union[Tensor, str, Initializer, numbers.Number]): The trainable bias_init parameter. The dtype is
same as input x. The values of str refer to the function `initializer`. Default: 'zeros'.
same as `x`. The values of str refer to the function `initializer`. Default: 'zeros'.
has_bias (bool): Specifies whether the layer uses a bias vector. Default: True.
activation (Union[str, Cell, Primitive]): activation function applied to the output of the fully connected
layer, e.g. 'ReLU'. Default: None.
Inputs:
- **input** (Tensor) - Tensor of shape :math:`(*, in\_channels)`.
- **x** (Tensor) - Tensor of shape :math:`(*, in\_channels)`. The `in_channels` in `Args` should be equal
to :math:`in\_channels` in `Inputs`.
Outputs:
Tensor of shape :math:`(*, out\_channels)`.
@ -257,9 +266,9 @@ class Dense(Cell):
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> input = Tensor(np.array([[180, 234, 154], [244, 48, 247]]), mindspore.float32)
>>> x = Tensor(np.array([[180, 234, 154], [244, 48, 247]]), mindspore.float32)
>>> net = nn.Dense(3, 4)
>>> output = net(input)
>>> output = net(x)
>>> print(output.shape)
(2, 4)
"""
@ -368,25 +377,25 @@ class ClipByNorm(Cell):
Default: None, all dimensions to calculate.
Inputs:
- **input** (Tensor) - Tensor of shape N-D. The type must be float32 or float16.
- **x** (Tensor) - Tensor of shape N-D. The type must be float32 or float16.
- **clip_norm** (Tensor) - A scalar Tensor of shape :math:`()` or :math:`(1)`.
Or a tensor shape can be broadcast to input shape.
Outputs:
Tensor, clipped tensor with the same shape as the input, whose type is float32.
Tensor, clipped tensor with the same shape as the `x`, whose type is float32.
Raises:
TypeError: If `axis` is not one of None, int, tuple.
TypeError: If dtype of `input` is neither float32 nor float16.
TypeError: If dtype of `x` is neither float32 nor float16.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> net = nn.ClipByNorm()
>>> input = Tensor(np.random.randint(0, 10, [4, 16]), mindspore.float32)
>>> x = Tensor(np.random.randint(0, 10, [4, 16]), mindspore.float32)
>>> clip_norm = Tensor(np.array([100]).astype(np.float32))
>>> output = net(input, clip_norm)
>>> output = net(x, clip_norm)
>>> print(output.shape)
(4, 16)
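A small worked sketch, not part of this commit, showing the clipping effect itself (assuming the default of computing the L2 norm over all dimensions):
>>> net = nn.ClipByNorm()
>>> x = Tensor(np.array([[3.0, 4.0]]), mindspore.float32)
>>> clip_norm = Tensor(np.array([1.0]).astype(np.float32))
>>> output = net(x, clip_norm)
>>> # The L2 norm of x is 5.0 > 1.0, so x is rescaled by 1.0 / 5.0,
>>> # giving approximately [[0.6 0.8]].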
@ -450,11 +459,12 @@ class Norm(Cell):
the dimensions in `axis` are removed from the output shape. Default: False.
Inputs:
- **input** (Tensor) - Tensor which is not empty.
- **x** (Tensor) - Tensor which is not empty. The data type should be float16 or float32.
The shape is :math:`(N,*)` where :math:`*` means, any number of additional dimensions.
Outputs:
Tensor, output tensor with dimensions in 'axis' reduced to 1 will be returned if 'keep_dims' is True;
otherwise a Tensor with dimensions in 'axis' removed is returned.
otherwise a Tensor with dimensions in 'axis' removed is returned. The data type is the same as `x`.
Raises:
TypeError: If `axis` is neither an int nor tuple.
@ -465,10 +475,32 @@ class Norm(Cell):
Examples:
>>> net = nn.Norm(axis=0)
>>> input = Tensor(np.array([[4, 4, 9, 1], [2, 1, 3, 6]]), mindspore.float32)
>>> output = net(input)
>>> x = Tensor(np.array([[4, 4, 9, 1], [2, 1, 3, 6]]), mindspore.float32)
>>> print(x.shape)
(2, 4)
>>> output = net(x)
>>> print(output)
[4.472136 4.1231055 9.486833 6.0827627]
>>> print(output.shape)
(4,)
>>> net = nn.Norm(axis=0, keep_dims=True)
>>> x = Tensor(np.array([[4, 4, 9, 1], [2, 1, 3, 6]]), mindspore.float32)
>>> print(x.shape)
(2, 4)
>>> output = net(x)
>>> print(output)
[4.472136 4.1231055 9.486833 6.0827627]
>>> print(output.shape)
(1, 4)
>>> net = nn.Norm(axis=1)
>>> x = Tensor(np.array([[4, 4, 9, 1], [2, 1, 3, 6]]), mindspore.float32)
>>> print(x.shape)
(2, 4)
>>> output = net(x)
>>> print(output)
[10.677078 7.071068]
>>> print(output.shape)
(2,)
"""
def __init__(self, axis=(), keep_dims=False):
@ -535,11 +567,12 @@ class OneHot(Cell):
data type of indices. Default: mindspore.float32.
Inputs:
- **indices** (Tensor) - A tensor of indices with data type of int32 or int64 and arbitrary shape.
- **indices** (Tensor) - A tensor of indices with data type of int32 or int64.
The shape is :math:`(N,*)` where :math:`*` means, any number of additional dimensions.
Outputs:
Tensor, the one-hot tensor of data type `dtype` with dimension at `axis` expanded to `depth` and filled with
on_value and off_value.
on_value and off_value. The dimension of the output is equal to the dimension of `indices` plus one.
Raises:
TypeError: If `axis` or `depth` is not an int.
@ -551,6 +584,7 @@ class OneHot(Cell):
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> # 1st sample: add new coordinates at axis 1
>>> net = nn.OneHot(depth=4, axis=1)
>>> indices = Tensor([[1, 3], [0, 2]], dtype=mindspore.int32)
>>> output = net(indices)
@ -563,6 +597,46 @@ class OneHot(Cell):
[0. 0.]
[0. 1.]
[0. 0.]]]
>>> # The results are shown below:
>>> print(output.shape)
(2, 4, 2)
>>> # 2nd sample: add new coordinates at axis 0
>>> net = nn.OneHot(depth=4, axis=0)
>>> indices = Tensor([[1, 3], [0, 2]], dtype=mindspore.int32)
>>> output = net(indices)
>>> print(output)
[[[0. 0.]
[1. 0.]]
[[1. 0.]
[0. 0.]]
[[0. 0.]
[0. 1.]]
[[0. 1.]
[0. 0.]]]
>>> # The results are shown below:
>>> print(output.shape)
(4, 2, 2)
>>> # 3rd sample: add new coordinates at the last dimension.
>>> net = nn.OneHot(depth=4, axis=-1)
>>> indices = Tensor([[1, 3], [0, 2]], dtype=mindspore.int32)
>>> output = net(indices)
>>> # The results are shown below:
>>> print(output)
[[[0. 1. 0. 0.]
[0. 0. 0. 1.]]
[[1. 0. 0. 0.]
[0. 0. 1. 0.]]]
>>> print(output.shape)
(2, 2, 4)
>>> indices = Tensor([1, 3, 0, 2], dtype=mindspore.int32)
>>> output = net(indices)
>>> print(output)
[[0. 1. 0. 0.]
[0. 0. 0. 1.]
[1. 0. 0. 0.]
[0. 0. 1. 0.]]
>>> print(output.shape)
(4, 4)
"""
def __init__(self, axis=-1, depth=1, on_value=1.0, off_value=0.0, dtype=mstype.float32):
@ -584,28 +658,38 @@ class Pad(Cell):
Args:
paddings (tuple): The shape of parameter `paddings` is (N, 2). N is the rank of input data. All elements of
paddings are int type. For `D` th dimension of input, paddings[D, 0] indicates how many sizes to be
paddings are int type. For `D` th dimension of the `x`, paddings[D, 0] indicates how many sizes to be
extended ahead of the `D` th dimension of the input tensor, and paddings[D, 1] indicates how many sizes to
be extended behind of the `D` th dimension of the input tensor. The padded size of each dimension D of the
output is: :math:`paddings[D, 0] + input\_x.dim\_size(D) + paddings[D, 1]`
For example:
- mode = "CONSTANT".
- paddings = [[1,1], [2,2]].
- x = [[1,2,3], [4,5,6], [7,8,9]].
- From the above, the 1st dimension of x is 3 and the 2nd dimension of x is 3.
- Substituting into the formula gives:
- 1st dimension of output is paddings[0][0] + 3 + paddings[0][1] = 1 + 3 + 1 = 4.
- 2nd dimension of output is paddings[1][0] + 3 + paddings[1][1] = 2 + 3 + 2 = 7.
- So output.shape is (4, 7).
mode (str): Specifies padding mode. The optional values are "CONSTANT", "REFLECT", "SYMMETRIC".
Default: "CONSTANT".
Inputs:
- **input_x** (Tensor) - The input tensor.
- **x** (Tensor) - The input tensor.
Outputs:
Tensor, the tensor after padding.
- If `mode` is "CONSTANT", it fills the edge with 0, regardless of the values of the `input_x`.
If the `input_x` is [[1,2,3], [4,5,6], [7,8,9]] and `paddings` is [[1,1], [2,2]], then the
- If `mode` is "CONSTANT", it fills the edge with 0, regardless of the values of the `x`.
If the `x` is [[1,2,3], [4,5,6], [7,8,9]] and `paddings` is [[1,1], [2,2]], then the
Outputs is [[0,0,0,0,0,0,0], [0,0,1,2,3,0,0], [0,0,4,5,6,0,0], [0,0,7,8,9,0,0], [0,0,0,0,0,0,0]].
- If `mode` is "REFLECT", it uses a way of symmetrical copying through the axis of symmetry to fill in.
If the `input_x` is [[1,2,3], [4,5,6], [7,8,9]] and `paddings` is [[1,1], [2,2]], then the
If the `x` is [[1,2,3], [4,5,6], [7,8,9]] and `paddings` is [[1,1], [2,2]], then the
Outputs is [[6,5,4,5,6,5,4], [3,2,1,2,3,2,1], [6,5,4,5,6,5,4], [9,8,7,8,9,8,7], [6,5,4,5,6,5,4]].
- If `mode` is "SYMMETRIC", the filling method is similar to the "REFLECT". It is also copied
according to the symmetry axis, except that it includes the symmetry axis. If the `input_x`
according to the symmetry axis, except that it includes the symmetry axis. If the `x`
is [[1,2,3], [4,5,6], [7,8,9]] and `paddings` is [[1,1], [2,2]], then the Outputs is
[[2,1,1,2,3,3,2], [2,1,1,2,3,3,2], [5,4,4,5,6,6,5], [8,7,7,8,9,9,8], [8,7,7,8,9,9,8]].
@ -619,23 +703,86 @@ class Pad(Cell):
Examples:
>>> import numpy as np
>>> import mindspore
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> # If `mode` is "CONSTANT"
>>> class Net(nn.Cell):
... def __init__(self):
... super(Net, self).__init__()
... self.pad = nn.Pad(paddings=((1, 1), (2, 2)), mode="CONSTANT")
... def construct(self, x):
... return self.pad(x)
>>> x = np.array([[0.3, 0.5, 0.2], [0.5, 0.7, 0.3]], dtype=np.float32)
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6]]), mindspore.float32)
>>> pad = Net()
>>> output = pad(Tensor(x))
>>> output = pad(x)
>>> print(output)
[[0. 0. 0. 0. 0. 0. 0. ]
[0. 0. 0.3 0.5 0.2 0. 0. ]
[0. 0. 0.5 0.7 0.3 0. 0. ]
[0. 0. 0. 0. 0. 0. 0. ]]
[[0. 0. 0. 0. 0. 0. 0.]
[0. 0. 1. 2. 3. 0. 0.]
[0. 0. 4. 5. 6. 0. 0.]
[0. 0. 0. 0. 0. 0. 0.]]
>>> # Another way to call the same padding operation:
>>> pad = ops.Pad(paddings=((1, 1), (2, 2)))
>>> # From the above code, we can see following:
>>> # "paddings=((1, 1), (2, 2))",
>>> # paddings[0][0] = 1 indicates that one row of values is filled above the input data in the 1st dimension.
>>> # Shown as follows:
>>> # [[0. 0. 0.]
>>> # [1. 2. 3.]
>>> # [4. 5. 6.]]
>>> # paddings[0][1] = 1 indicates that one row of values is filled below the input data in the 1st dimension.
>>> # Shown as follows:
>>> # [[0. 0. 0.]
>>> # [1. 2. 3.]
>>> # [4. 5. 6.]
>>> # [0. 0. 0.]]
>>> # paddings[1][0] = 2 indicates that 2 columns of values are filled to the left of the input data in the 2nd dimension.
>>> # Shown as follows:
>>> # [[0. 0. 0. 0. 0.]
>>> # [0. 0. 1. 2. 3.]
>>> # [0. 0. 4. 5. 6.]
>>> # [0. 0. 0. 0. 0.]]
>>> # paddings[1][1] = 2 indicates that 2 columns of values are filled to the right of the input data in the 2nd dimension.
>>> # Shown as follows:
>>> # [[0. 0. 0. 0. 0. 0. 0.]
>>> # [0. 0. 1. 2. 3. 0. 0.]
>>> # [0. 0. 4. 5. 6. 0. 0.]
>>> # [0. 0. 0. 0. 0. 0. 0.]]
>>> output = pad(x)
>>> print(output)
[[0. 0. 0. 0. 0. 0. 0.]
[0. 0. 1. 2. 3. 0. 0.]
[0. 0. 4. 5. 6. 0. 0.]
[0. 0. 0. 0. 0. 0. 0.]]
>>> # if mode is "REFLECT"
>>> class Net(nn.Cell):
... def __init__(self):
... super(Net, self).__init__()
... self.pad = nn.Pad(paddings=((1, 1), (2, 2)), mode="REFLECT")
... def construct(self, x):
... return self.pad(x)
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6]]), mindspore.float32)
>>> pad = Net()
>>> output = pad(x)
>>> print(output)
[[6. 5. 4. 5. 6. 5. 4.]
[3. 2. 1. 2. 3. 2. 1.]
[6. 5. 4. 5. 6. 5. 4.]
[3. 2. 1. 2. 3. 2. 1.]]
>>> # if mode is "SYMMETRIC"
>>> class Net(nn.Cell):
... def __init__(self):
... super(Net, self).__init__()
... self.pad = nn.Pad(paddings=((1, 1), (2, 2)), mode="SYMMETRIC")
... def construct(self, x):
... return self.pad(x)
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6]]), mindspore.float32)
>>> pad = Net()
>>> output = pad(x)
>>> print(output)
[[2. 1. 1. 2. 3. 3. 2.]
[2. 1. 1. 2. 3. 3. 2.]
[5. 4. 4. 5. 6. 6. 5.]
[5. 4. 4. 5. 6. 6. 5.]]
"""
def __init__(self, paddings, mode="CONSTANT"):
@ -721,9 +868,18 @@ class ResizeBilinear(Cell):
``Ascend`` ``CPU``
Examples:
>>> tensor = Tensor([[[[1, 2, 3, 4], [5, 6, 7, 8]]]], mindspore.float32)
>>> x = Tensor([[[[1, 2, 3, 4], [5, 6, 7, 8]]]], mindspore.float32)
>>> resize_bilinear = nn.ResizeBilinear()
>>> result = resize_bilinear(tensor, size=(5,5))
>>> result = resize_bilinear(x, size=(5,5))
>>> print(x)
[[[[1. 2. 3. 4.]
[5. 6. 7. 8.]]]]
>>> print(result)
[[[[1. 1.8 2.6 3.4 4. ]
[2.6 3.4 4.2000003 5. 5.6000004]
[4.2 5.0000005 5.8 6.6 7.2 ]
[5. 5.8 6.6 7.4 8. ]
[5. 5.8 6.6 7.4000006 8. ]]]]
>>> print(result.shape)
(1, 1, 5, 5)
"""
@ -758,11 +914,11 @@ class Unfold(Cell):
- valid: Means that the taken patch area must be completely covered in the original image.
Inputs:
- **input_x** (Tensor) - A 4-D tensor whose shape is [in_batch, in_depth, in_row, in_col] and
- **x** (Tensor) - A 4-D tensor whose shape is [in_batch, in_depth, in_row, in_col] and
data type is number.
Outputs:
Tensor, a 4-D tensor whose data type is same as `input_x`,
Tensor, a 4-D tensor whose data type is same as `x`,
and the shape is [out_batch, out_depth, out_row, out_col] where `out_batch` is the same as the `in_batch`.
:math:`out\_depth = ksize\_row * ksize\_col * in\_depth`
@ -781,7 +937,15 @@ class Unfold(Cell):
Examples:
>>> net = nn.Unfold(ksizes=[1, 2, 2, 1], strides=[1, 2, 2, 1], rates=[1, 2, 2, 1])
>>> # As stated in the above code:
>>> # ksize_row = 2, ksize_col = 2, rate_row = 2, rate_col = 2, stride_row = 2, stride_col = 2.
>>> image = Tensor(np.ones([2, 3, 6, 6]), dtype=mstype.float16)
>>> # in_batch = 2, in_depth = 3, in_row = 6, in_col = 6.
>>> # Substituting the formula to get:
>>> # out_batch = in_batch = 2
>>> # out_depth = 2 * 2 * 3 = 12
>>> # out_row = (6 - (2 + (2 - 1) * (2 - 1))) // 2 + 1 = 2
>>> # out_col = (6 - (2 + (2 - 1) * (2 - 1))) // 2 + 1 = 2
>>> output = net(image)
>>> print(output.shape)
(2, 12, 2, 2)
@ -827,11 +991,12 @@ class Tril(Cell):
Returns a tensor with elements above the kth diagonal zeroed.
Inputs:
- **x** (Tensor) - The input tensor.
- **x** (Tensor) - The input tensor. The data type is Number.
The shape is :math:`(N,*)` where :math:`*` means, any number of additional dimensions.
- **k** (Int) - The index of diagonal. Default: 0
Outputs:
Tensor, has the same type as input `x`.
Tensor, has the same shape and type as input `x`.
Raises:
TypeError: If `k` is not an int.
@ -841,12 +1006,50 @@ class Tril(Cell):
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> x = Tensor(np.array([[1, 2], [3, 4]]))
>>> x = Tensor(np.array([[ 1, 2, 3, 4],
...                      [ 5, 6, 7, 8],
...                      [10, 11, 12, 13],
...                      [14, 15, 16, 17]]))
>>> tril = nn.Tril()
>>> result = tril(x)
>>> print(result)
[[1 0]
[3 4]]
[[ 1, 0, 0, 0],
 [ 5, 6, 0, 0],
 [10, 11, 12, 0],
 [14, 15, 16, 17]]
>>> x = Tensor(np.array([[ 1, 2, 3, 4],
...                      [ 5, 6, 7, 8],
...                      [10, 11, 12, 13],
...                      [14, 15, 16, 17]]))
>>> tril = nn.Tril()
>>> result = tril(x, 1)
>>> print(result)
[[ 1, 2, 0, 0],
 [ 5, 6, 7, 0],
 [10, 11, 12, 13],
 [14, 15, 16, 17]]
>>> x = Tensor(np.array([[ 1, 2, 3, 4],
...                      [ 5, 6, 7, 8],
...                      [10, 11, 12, 13],
...                      [14, 15, 16, 17]]))
>>> tril = nn.Tril()
>>> result = tril(x, 2)
>>> print(result)
[[ 1, 2, 3, 0],
 [ 5, 6, 7, 8],
 [10, 11, 12, 13],
 [14, 15, 16, 17]]
>>> x = Tensor(np.array([[ 1, 2, 3, 4],
...                      [ 5, 6, 7, 8],
...                      [10, 11, 12, 13],
...                      [14, 15, 16, 17]]))
>>> tril = nn.Tril()
>>> result = tril(x, -1)
>>> print(result)
[[ 0, 0, 0, 0],
 [ 5, 0, 0, 0],
 [10, 11, 0, 0],
 [14, 15, 16, 0]]
"""
def __init__(self):
@ -875,11 +1078,12 @@ class Triu(Cell):
Returns a tensor with elements below the kth diagonal zeroed.
Inputs:
- **x** (Tensor) - The input tensor.
- **x** (Tensor) - The input tensor. The data type is Number.
The shape is :math:`(N,*)` where :math:`*` means, any number of additional dimensions.
- **k** (Int) - The index of diagonal. Default: 0
Outputs:
Tensor, has the same type as input `x`.
Tensor, has the same type and shape as input `x`.
Raises:
TypeError: If `k` is not an int.
@ -889,12 +1093,50 @@ class Triu(Cell):
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> x = Tensor(np.array([[1, 2], [3, 4]]))
>>> x = Tensor(np.array([[ 1, 2, 3, 4],
...                      [ 5, 6, 7, 8],
...                      [10, 11, 12, 13],
...                      [14, 15, 16, 17]]))
>>> triu = nn.Triu()
>>> result = triu(x)
>>> print(result)
[[1 2]
[0 4]]
[[ 1, 2, 3, 4],
 [ 0, 6, 7, 8],
 [ 0, 0, 12, 13],
 [ 0, 0, 0, 17]]
>>> x = Tensor(np.array([[ 1, 2, 3, 4],
...                      [ 5, 6, 7, 8],
...                      [10, 11, 12, 13],
...                      [14, 15, 16, 17]]))
>>> triu = nn.Triu()
>>> result = triu(x, 1)
>>> print(result)
[[ 0, 2, 3, 4],
 [ 0, 0, 7, 8],
 [ 0, 0, 0, 13],
 [ 0, 0, 0, 0]]
>>> x = Tensor(np.array([[ 1, 2, 3, 4],
...                      [ 5, 6, 7, 8],
...                      [10, 11, 12, 13],
...                      [14, 15, 16, 17]]))
>>> triu = nn.Triu()
>>> result = triu(x, 2)
>>> print(result)
[[ 0, 0, 3, 4],
 [ 0, 0, 0, 8],
 [ 0, 0, 0, 0],
 [ 0, 0, 0, 0]]
>>> x = Tensor(np.array([[ 1, 2, 3, 4],
...                      [ 5, 6, 7, 8],
...                      [10, 11, 12, 13],
...                      [14, 15, 16, 17]]))
>>> triu = nn.Triu()
>>> result = triu(x, -1)
>>> print(result)
[[ 1, 2, 3, 4],
 [ 5, 6, 7, 8],
 [ 0, 11, 12, 13],
 [ 0, 0, 16, 17]]
"""
def __init__(self):
@ -937,6 +1179,7 @@ class MatrixDiag(Cell):
Inputs:
- **x** (Tensor) - The diagonal values. It can be one of the following data types:
float32, float16, int32, int8, and uint8.
The shape is :math:`(N,*)` where :math:`*` means, any number of additional dimensions.
Outputs:
Tensor, has the same type as input `x`. The shape must be x.shape + (x.shape[-1], ).
@ -951,9 +1194,39 @@ class MatrixDiag(Cell):
>>> x = Tensor(np.array([1, -1]), mindspore.float32)
>>> matrix_diag = nn.MatrixDiag()
>>> output = matrix_diag(x)
>>> print(x.shape)
(2,)
>>> print(output)
[[ 1. 0.]
[ 0. -1.]]
>>> print(output.shape)
(2, 2)
>>> x = Tensor(np.array([[1, -1], [1, -1]]), mindspore.float32)
>>> matrix_diag = nn.MatrixDiag()
>>> output = matrix_diag(x)
>>> print(x.shape)
(2, 2)
>>> print(output)
[[[ 1. 0.]
[ 0. -1.]]
[[ 1. 0.]
[ 0. -1.]]]
>>> print(output.shape)
(2, 2, 2)
>>> x = Tensor(np.array([[1, -1, 1], [1, -1, 1]]), mindspore.float32)
>>> matrix_diag = nn.MatrixDiag()
>>> output = matrix_diag(x)
>>> print(x.shape)
(2, 3)
>>> print(output)
[[[ 1. 0. 0.]
  [ 0. -1. 0.]
  [ 0. 0. 1.]]
 [[ 1. 0. 0.]
  [ 0. -1. 0.]
  [ 0. 0. 1.]]]
>>> print(output.shape)
(2, 3, 3)
"""
def __init__(self):
@ -992,13 +1265,23 @@ class MatrixDiagPart(Cell):
``Ascend``
Examples:
>>> x = Tensor([[[-1, 0], [0, 1]], [[-1, 0], [0, 1]], [[-1, 0], [0, 1]]], mindspore.float32)
>>> x = Tensor([[[-1, 0], [0, 1]],
... [[-1, 0], [0, 1]],
... [[-1, 0], [0, 1]]], mindspore.float32)
>>> matrix_diag_part = nn.MatrixDiagPart()
>>> output = matrix_diag_part(x)
>>> print(output)
[[-1. 1.]
[-1. 1.]
[-1. 1.]]
>>> x = Tensor([[-1, 0, 0, 1],
... [-1, 0, 0, 1],
... [-1, 0, 0, 1],
... [-1, 0, 0, 1]], mindspore.float32)
>>> matrix_diag_part = nn.MatrixDiagPart()
>>> output = matrix_diag_part(x)
>>> print(output)
[-1 0 0 1]
"""
def __init__(self):

View File

@ -42,12 +42,12 @@ class Conv2dBnAct(Cell):
the kernel. A tuple of 2 ints means the first value is for the height and the other is for the
width of the kernel.
stride (int): Specifies stride for all spatial dimensions with the same value. The value of stride must be
greater than or equal to 1 and lower than any one of the height and width of the input. Default: 1.
greater than or equal to 1 and lower than any one of the height and width of the `x`. Default: 1.
pad_mode (str): Specifies padding mode. The optional values are "same", "valid", "pad". Default: "same".
padding (int): Implicit paddings on both sides of the input. Default: 0.
padding (int): Implicit paddings on both sides of the `x`. Default: 0.
dilation (int): Specifies the dilation rate to use for dilated convolution. If set to be :math:`k > 1`,
there will be :math:`k - 1` pixels skipped for each sampling location. Its value must be greater than
or equal to 1 and lower than any one of the height and width of the input. Default: 1.
or equal to 1 and lower than any one of the height and width of the `x`. Default: 1.
group (int): Splits filter into groups, `in_channels` and `out_channels` must be
divisible by the number of groups. Default: 1.
has_bias (bool): Specifies whether the layer uses a bias vector. Default: False.
@ -71,10 +71,10 @@ class Conv2dBnAct(Cell):
after_fake(bool): Determines whether there must be a fake quantization operation after Conv2dBnAct.
Inputs:
- **input** (Tensor) - Tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})`.
- **x** (Tensor) - Tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})`. The data type is float32.
Outputs:
Tensor of shape :math:`(N, C_{out}, H_{out}, W_{out})`.
Tensor of shape :math:`(N, C_{out}, H_{out}, W_{out})`. The data type is float32.
Raises:
TypeError: If `in_channels`, `out_channels`, `stride`, `padding` or `dilation` is not an int.
@ -87,8 +87,8 @@ class Conv2dBnAct(Cell):
Examples:
>>> net = nn.Conv2dBnAct(120, 240, 4, has_bn=True, activation='relu')
>>> input = Tensor(np.ones([1, 120, 1024, 640]), mindspore.float32)
>>> result = net(input)
>>> x = Tensor(np.ones([1, 120, 1024, 640]), mindspore.float32)
>>> result = net(x)
>>> output = result.shape
>>> print(output)
(1, 240, 1024, 640)
@ -157,9 +157,9 @@ class DenseBnAct(Cell):
in_channels (int): The number of channels in the input space.
out_channels (int): The number of channels in the output space.
weight_init (Union[Tensor, str, Initializer, numbers.Number]): The trainable weight_init parameter. The dtype
is same as input. The values of str refer to the function `initializer`. Default: 'normal'.
is same as `x`. The values of str refer to the function `initializer`. Default: 'normal'.
bias_init (Union[Tensor, str, Initializer, numbers.Number]): The trainable bias_init parameter. The dtype is
same as input. The values of str refer to the function `initializer`. Default: 'zeros'.
same as `x`. The values of str refer to the function `initializer`. Default: 'zeros'.
has_bias (bool): Specifies whether the layer uses a bias vector. Default: True.
has_bn (bool): Specifies to use batchnorm or not. Default: False.
momentum (float): Momentum for moving average for batchnorm, must be [0, 1]. Default:0.9
@ -172,10 +172,10 @@ class DenseBnAct(Cell):
after_fake(bool): Determines whether there must be a fake quantization operation after DenseBnAct.
Inputs:
- **input** (Tensor) - Tensor of shape :math:`(N, in\_channels)`.
- **x** (Tensor) - Tensor of shape :math:`(N, in\_channels)`. The data type is float32.
Outputs:
Tensor of shape :math:`(N, out\_channels)`.
Tensor of shape :math:`(N, out\_channels)`. The data type is float32.
Raises:
TypeError: If `in_channels` or `out_channels` is not an int.
@ -188,8 +188,8 @@ class DenseBnAct(Cell):
Examples:
>>> net = nn.DenseBnAct(3, 4)
>>> input = Tensor(np.random.randint(0, 255, [2, 3]), mindspore.float32)
>>> result = net(input)
>>> x = Tensor(np.random.randint(0, 255, [2, 3]), mindspore.float32)
>>> result = net(x)
>>> output = result.shape
>>> print(output)
(2, 4)

View File

@ -110,10 +110,10 @@ class SequentialCell(Cell):
args (list, OrderedDict): List of subclass of Cell.
Inputs:
- **input** (Tensor) - Tensor with shape according to the first Cell in the sequence.
- **x** (Tensor) - Tensor with shape according to the first Cell in the sequence.
Outputs:
Tensor, the output Tensor with shape depending on the input and defined sequence of Cells.
Tensor, the output Tensor with shape depending on the input `x` and defined sequence of Cells.
Raises:
TypeError: If the type of the `args` is not list or OrderedDict.
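No example survives in this hunk, so here is a minimal sketch, not part of this commit (assuming the usual nn.Dense and nn.ReLU constructors), of how a sequence consumes `x`:
>>> import numpy as np
>>> import mindspore
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> seq = nn.SequentialCell([nn.Dense(3, 4), nn.ReLU()])
>>> x = Tensor(np.ones([2, 3]), mindspore.float32)
>>> output = seq(x)
>>> print(output.shape)
(2, 4)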

View File

@ -124,7 +124,7 @@ class Conv2d(_Conv):
where :math:`\text{kernel_size[0]}` and :math:`\text{kernel_size[1]}` are the height and width of
the convolution kernel. The full kernel has shape
:math:`(C_{out}, C_{in} // \text{group}, \text{kernel_size[0]}, \text{kernel_size[1]})`,
where group is the group number to split the input in the channel dimension.
where group is the group number to split the input `x` in the channel dimension.
If the 'pad_mode' is set to be "valid", the output height and width will be
:math:`\left \lfloor{1 + \frac{H_{in} + \text{padding[0]} + \text{padding[1]} - \text{kernel_size[0]} -
@ -149,7 +149,7 @@ class Conv2d(_Conv):
"same", "valid", "pad". Default: "same".
- same: Adopts the way of completion. The height and width of the output will be the same as
the input. The total number of padding will be calculated in horizontal and vertical
the input `x`. The total number of padding will be calculated in horizontal and vertical
directions and evenly distributed to top and bottom, left and right if possible. Otherwise, the
last extra padding will be done from the bottom and the right side. If this mode is set, `padding`
must be 0.
@ -158,10 +158,10 @@ class Conv2d(_Conv):
without padding. Extra pixels will be discarded. If this mode is set, `padding`
must be 0.
- pad: Implicit paddings on both sides of the input. The number of `padding` will be padded to the input
- pad: Implicit paddings on both sides of the input `x`. The number of `padding` will be padded to the input
Tensor borders. `padding` must be greater than or equal to 0.
padding (Union[int, tuple[int]]): Implicit paddings on both sides of the input. If `padding` is one integer,
padding (Union[int, tuple[int]]): Implicit paddings on both sides of the input `x`. If `padding` is one integer,
the paddings of top, bottom, left and right are the same, equal to padding. If `padding` is a tuple
with four integers, the paddings of top, bottom, left and right will be equal to padding[0],
padding[1], padding[2], and padding[3] accordingly. Default: 0.
@ -169,7 +169,7 @@ class Conv2d(_Conv):
to use for dilated convolution. If set to be :math:`k > 1`, there will
be :math:`k - 1` pixels skipped for each sampling location. Its value must
be greater or equal to 1 and bounded by the height and width of the
input. Default: 1.
input `x`. Default: 1.
group (int): Splits filter into groups, `in_channels` and `out_channels` must be
divisible by the number of groups. If the group is equal to `in_channels` and `out_channels`,
this 2D convolution layer also can be called 2D depthwise convolution layer. Default: 1.
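A shape-only sketch, not part of this commit: with the default pad_mode "same" and stride 1, the spatial dimensions of `x` are preserved:
>>> net = nn.Conv2d(120, 240, 4, has_bias=False)
>>> x = Tensor(np.ones([1, 120, 1024, 640]), mindspore.float32)
>>> output = net(x)
>>> print(output.shape)
(1, 240, 1024, 640)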
@ -187,7 +187,7 @@ class Conv2d(_Conv):
Default: 'NCHW'.
Inputs:
- **input** (Tensor) - Tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})` \
- **x** (Tensor) - Tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})` \
or :math:`(N, H_{in}, W_{in}, C_{in})`.
Outputs:
@ -306,7 +306,7 @@ class Conv1d(_Conv):
filter and :math:`out_{j}` corresponds to the :math:`j`-th channel of the output. :math:`W_{ij}` is a slice
of kernel and it has shape :math:`(\text{ks_w})`, where :math:`\text{ks_w}` is the width of the convolution kernel.
The full kernel has shape :math:`(C_{out}, C_{in} // \text{group}, \text{ks_w})`, where group is the group number
to split the input in the channel dimension.
to split the input `x` in the channel dimension.
If the 'pad_mode' is set to be "valid", the output width will be
:math:`\left \lfloor{1 + \frac{W_{in} + 2 \times \text{padding} - \text{ks_w} -
@ -325,7 +325,7 @@ class Conv1d(_Conv):
pad_mode (str): Specifies padding mode. The optional values are
"same", "valid", "pad". Default: "same".
- same: Adopts the way of completion. The output width will be the same as the input.
- same: Adopts the way of completion. The output width will be the same as the input `x`.
The total number of padding will be calculated in the horizontal
direction and evenly distributed to left and right if possible. Otherwise, the
last extra padding will be done from the bottom and the right side. If this mode is set, `padding`
@ -335,15 +335,15 @@ class Conv1d(_Conv):
without padding. Extra pixels will be discarded. If this mode is set, `padding`
must be 0.
- pad: Implicit paddings on both sides of the input. The number of `padding` will be padded to the input
- pad: Implicit paddings on both sides of the input `x`. The number of `padding` will be padded to the input
Tensor borders. `padding` must be greater than or equal to 0.
padding (int): Implicit paddings on both sides of the input. Default: 0.
padding (int): Implicit paddings on both sides of the input `x`. Default: 0.
dilation (int): The data type is int. Specifies the dilation rate
to use for dilated convolution. If set to be :math:`k > 1`, there will
be :math:`k - 1` pixels skipped for each sampling location. Its value must
be greater or equal to 1 and bounded by the height and width of the
input. Default: 1.
input `x`. Default: 1.
group (int): Splits filter into groups, `in_channels` and `out_channels` must be
divisible by the number of groups. Default: 1.
has_bias (bool): Specifies whether the layer uses a bias vector. Default: False.
@ -358,7 +358,7 @@ class Conv1d(_Conv):
Initializer for more details. Default: 'zeros'.
Inputs:
- **input** (Tensor) - Tensor of shape :math:`(N, C_{in}, W_{in})`.
- **x** (Tensor) - Tensor of shape :math:`(N, C_{in}, W_{in})`.
Outputs:
Tensor of shape :math:`(N, C_{out}, W_{out})`.
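A matching shape-only sketch, not part of this commit, again assuming the default pad_mode "same" with stride 1:
>>> net = nn.Conv1d(120, 240, 4, has_bias=False)
>>> x = Tensor(np.ones([1, 120, 640]), mindspore.float32)
>>> output = net(x)
>>> print(output.shape)
(1, 240, 640)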
@ -520,7 +520,7 @@ class Conv3d(_Conv):
"same", "valid", "pad". Default: "same".
- same: Adopts the way of completion. The depth, height and width of the output will be the same as
the input. The total number of padding will be calculated in depth, horizontal and vertical
the input `x`. The total number of padding will be calculated in depth, horizontal and vertical
directions and evenly distributed to head and tail, top and bottom, left and right if possible.
Otherwise, the last extra padding will be done from the tail, bottom and the right side.
If this mode is set, `padding` must be 0.
@ -529,10 +529,10 @@ class Conv3d(_Conv):
will be returned without padding. Extra pixels will be discarded. If this mode is set, `padding`
must be 0.
- pad: Implicit paddings on both sides of the input in depth, height, width. The number of `padding` will
be padded to the input Tensor borders. `padding` must be greater than or equal to 0.
- pad: Implicit paddings on both sides of the input `x` in depth, height, width. The number of `padding`
will be padded to the input Tensor borders. `padding` must be greater than or equal to 0.
padding (Union(int, tuple[int])): Implicit paddings on both sides of the input.
padding (Union(int, tuple[int])): Implicit paddings on both sides of the input `x`.
The data type is int or a tuple of 6 integers. Default: 0. If `padding` is an integer,
the paddings of head, tail, top, bottom, left and right are the same, equal to padding.
If `paddings` is a tuple of six integers, the padding of head, tail, top, bottom, left and right equal to
@ -541,7 +541,7 @@ class Conv3d(_Conv):
:math:`(dilation_d, dilation_h, dilation_w)`. Currently, dilation on depth only supports the case of 1.
Specifies the dilation rate to use for dilated convolution. If set to be :math:`k > 1`,
there will be :math:`k - 1` pixels skipped for each sampling location.
Its value must be greater or equal to 1 and bounded by the height and width of the input. Default: 1.
Its value must be greater or equal to 1 and bounded by the height and width of the input `x`. Default: 1.
group (int): Splits filter into groups, `in_channels` and `out_channels` must be
divisible by the number of groups. Default: 1. Only 1 is currently supported.
has_bias (bool): Specifies whether the layer uses a bias vector. Default: False.
@ -557,7 +557,7 @@ class Conv3d(_Conv):
data_format (str): The optional value for data format. Currently only support "NCDHW".
Inputs:
- **input** (Tensor) - Tensor of shape :math:`(N, C_{in}, D_{in}, H_{in}, W_{in})`.
- **x** (Tensor) - Tensor of shape :math:`(N, C_{in}, D_{in}, H_{in}, W_{in})`.
Currently, the input data type only supports float16 and float32.
Outputs:
@ -663,7 +663,7 @@ class Conv3dTranspose(_Conv):
Compute a 3D transposed convolution, which is also known as a deconvolution
(although it is not an actual deconvolution).
Input is typically of shape :math:`(N, C, D, H, W)`, where :math:`N` is batch size and :math:`C` is channel number.
`x` is typically of shape :math:`(N, C, D, H, W)`, where :math:`N` is batch size and :math:`C` is channel number.
If the 'pad_mode' is set to be "pad", the height and width of output are defined as:
@ -689,7 +689,7 @@ class Conv3dTranspose(_Conv):
"pad", "same", "valid". Default: "same".
- same: Adopts the way of completion. The depth, height and width of the output will be the same as
the input. The total number of padding will be calculated in depth, horizontal and vertical
the input `x`. The total number of padding will be calculated in depth, horizontal and vertical
directions and evenly distributed to head and tail, top and bottom, left and right if possible.
Otherwise, the last extra padding will be done from the tail, bottom and the right side.
If this mode is set, `padding` and `output_padding` must be 0.
@ -698,7 +698,7 @@ class Conv3dTranspose(_Conv):
will be returned without padding. Extra pixels will be discarded. If this mode is set, `padding`
and `output_padding` must be 0.
- pad: Implicit paddings on both sides of the input in depth, height, width. The number of `pad` will
- pad: Implicit paddings on both sides of the input `x` in depth, height, width. The number of `pad` will
be padded to the input Tensor borders. `padding` must be greater than or equal to 0.
padding (Union(int, tuple[int])): The pad value to be filled. Default: 0. If `padding` is an integer,
@ -709,7 +709,7 @@ class Conv3dTranspose(_Conv):
:math:`(dilation_d, dilation_h, dilation_w)`. Currently, dilation on depth only supports the case of 1.
Specifies the dilation rate to use for dilated convolution. If set to be :math:`k > 1`,
there will be :math:`k - 1` pixels skipped for each sampling location.
Its value must be greater or equal to 1 and bounded by the height and width of the input. Default: 1.
Its value must be greater or equal to 1 and bounded by the height and width of the input `x`. Default: 1.
group (int): Splits filter into groups, `in_channels` and `out_channels` must be
divisible by the number of groups. Default: 1. Only 1 is currently supported.
output_padding (Union(int, tuple[int])): Add extra size to each dimension of the output. Default: 0.
@ -727,7 +727,7 @@ class Conv3dTranspose(_Conv):
data_format (str): The optional value for data format. Currently only support 'NCDHW'.
Inputs:
- **input** (Tensor) - Tensor of shape :math:`(N, C_{in}, D_{in}, H_{in}, W_{in})`.
- **x** (Tensor) - Tensor of shape :math:`(N, C_{in}, D_{in}, H_{in}, W_{in})`.
Currently, the input data type only supports float16 and float32.
Outputs:
@ -857,7 +857,7 @@ class Conv2dTranspose(_Conv):
Compute a 2D transposed convolution, which is also known as a deconvolution
(although it is not an actual deconvolution).
Input is typically of shape :math:`(N, C, H, W)`, where :math:`N` is batch size and :math:`C` is channel number.
`x` is typically of shape :math:`(N, C, H, W)`, where :math:`N` is batch size and :math:`C` is channel number.
If the 'pad_mode' is set to be "pad", the height and width of output are defined as:
@ -886,12 +886,12 @@ class Conv2dTranspose(_Conv):
pad_mode (str): Select the mode of the pad. The optional values are
"pad", "same", "valid". Default: "same".
- pad: Implicit paddings on both sides of the input.
- pad: Implicit paddings on both sides of the input `x`.
- same: Adopted the way of completion.
- valid: Adopted the way of discarding.
padding (Union[int, tuple[int]]): Implicit paddings on both sides of the input. If `padding` is one integer,
padding (Union[int, tuple[int]]): Implicit paddings on both sides of the input `x`. If `padding` is one integer,
the paddings of top, bottom, left and right are the same, equal to padding. If `padding` is a tuple
with four integers, the paddings of top, bottom, left and right will be equal to padding[0],
padding[1], padding[2], and padding[3] accordingly. Default: 0.
@ -899,7 +899,7 @@ class Conv2dTranspose(_Conv):
to use for dilated convolution. If set to be :math:`k > 1`, there will
be :math:`k - 1` pixels skipped for each sampling location. Its value must
be greater than or equal to 1 and bounded by the height and width of the
input. Default: 1.
input `x`. Default: 1.
group (int): Splits filter into groups, `in_channels` and `out_channels` must be
divisible by the number of groups. This is not supported for Davinci devices when group > 1. Default: 1.
has_bias (bool): Specifies whether the layer uses a bias vector. Default: False.
@ -914,7 +914,7 @@ class Conv2dTranspose(_Conv):
Initializer for more details. Default: 'zeros'.
Inputs:
- **input** (Tensor) - Tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})`.
- **x** (Tensor) - Tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})`.
Outputs:
Tensor of shape :math:`(N, C_{out}, H_{out}, W_{out})`.
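A shape-only sketch, not part of this commit; with pad_mode "same" and stride 1 the transposed convolution also preserves the spatial size:
>>> net = nn.Conv2dTranspose(3, 64, 4, has_bias=False)
>>> x = Tensor(np.ones([1, 3, 16, 50]), mindspore.float32)
>>> output = net(x)
>>> print(output.shape)
(1, 64, 16, 50)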
@ -1042,7 +1042,7 @@ class Conv1dTranspose(_Conv):
Compute a 1D transposed convolution, which is also known as a deconvolution
(although it is not an actual deconvolution).
Input is typically of shape :math:`(N, C, W)`, where :math:`N` is batch size and :math:`C` is channel number.
`x` is typically of shape :math:`(N, C, W)`, where :math:`N` is batch size and :math:`C` is channel number.
If the 'pad_mode' is set to be "pad", the width of output is defined as:
@ -1062,17 +1062,17 @@ class Conv1dTranspose(_Conv):
pad_mode (str): Select the mode of the pad. The optional values are
"pad", "same", "valid". Default: "same".
- pad: Implicit paddings on both sides of the input.
- pad: Implicit paddings on both sides of the input `x`.
- same: Adopted the way of completion.
- valid: Adopted the way of discarding.
padding (int): Implicit paddings on both sides of the input. Default: 0.
padding (int): Implicit paddings on both sides of the input `x`. Default: 0.
dilation (int): The data type is int. Specifies the dilation rate
to use for dilated convolution. If set to be :math:`k > 1`, there will
be :math:`k - 1` pixels skipped for each sampling location. Its value must
be greater or equal to 1 and bounded by the width of the
input. Default: 1.
input `x`. Default: 1.
group (int): Splits filter into groups, `in_channels` and `out_channels` must be
divisible by the number of groups. This is not supported for Davinci devices when group > 1. Default: 1.
has_bias (bool): Specifies whether the layer uses a bias vector. Default: False.
@ -1087,7 +1087,7 @@ class Conv1dTranspose(_Conv):
Initializer for more details. Default: 'zeros'.
Inputs:
- **input** (Tensor) - Tensor of shape :math:`(N, C_{in}, W_{in})`.
- **x** (Tensor) - Tensor of shape :math:`(N, C_{in}, W_{in})`.
Outputs:
Tensor of shape :math:`(N, C_{out}, W_{out})`.
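And the 1D counterpart, a shape-only sketch not part of this commit, under the same pad_mode "same", stride 1 assumptions:
>>> net = nn.Conv1dTranspose(3, 64, 4, has_bias=False)
>>> x = Tensor(np.ones([1, 3, 50]), mindspore.float32)
>>> output = net(x)
>>> print(output.shape)
(1, 64, 50)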

View File

@ -57,7 +57,7 @@ class Embedding(Cell):
the corresponding word embeddings.
Note:
When 'use_one_hot' is set to True, the type of the input must be mindspore.int32.
When 'use_one_hot' is set to True, the type of the `x` must be mindspore.int32.
Args:
vocab_size (int): Size of the dictionary of embeddings.
@ -66,16 +66,16 @@ class Embedding(Cell):
embedding_table (Union[Tensor, str, Initializer, numbers.Number]): Initializer for the embedding_table.
Refer to class `initializer` for the values of string when a string
is specified. Default: 'normal'.
dtype (:class:`mindspore.dtype`): Data type of input. Default: mindspore.float32.
dtype (:class:`mindspore.dtype`): Data type of `x`. Default: mindspore.float32.
padding_idx (int, None): When the padding_idx encounters index, the output embedding vector of this index
will be initialized to zero. Default: None. The feature is inactivated.
Inputs:
- **input** (Tensor) - Tensor of shape :math:`(\text{batch_size}, \text{input_length})`. The elements of
- **x** (Tensor) - Tensor of shape :math:`(\text{batch_size}, \text{x_length})`. The elements of
the Tensor must be integer and not larger than vocab_size. Otherwise the corresponding embedding vector will
be zero.
be zero. The data type is int32 or int64.
Outputs:
Tensor of shape :math:`(\text{batch_size}, \text{input_length}, \text{embedding_size})`.
Tensor of shape :math:`(\text{batch_size}, \text{x_length}, \text{embedding_size})`.
Raises:
TypeError: If `vocab_size` or `embedding_size` is not an int.
@ -87,10 +87,9 @@ class Embedding(Cell):
Examples:
>>> net = nn.Embedding(20000, 768, True)
>>> input_data = Tensor(np.ones([8, 128]), mindspore.int32)
>>>
>>> x = Tensor(np.ones([8, 128]), mindspore.int32)
>>> # Maps the input word IDs to word embedding.
>>> output = net(input_data)
>>> output = net(x)
>>> result = output.shape
>>> print(result)
(8, 128, 768)

View File

@ -206,6 +206,11 @@ class SSIM(Cell):
assessment: from error visibility to structural similarity <https://ieeexplore.ieee.org/document/1284395>`_.
IEEE transactions on image processing.
SSIM is a measure of the similarity of two pictures.
Like PSNR, SSIM is often used to evaluate image quality. SSIM is a number between 0 and 1. The larger it is,
the smaller the gap between the output image and the undistorted image, that is, the better the image quality.
When the two images are exactly the same, SSIM = 1.
.. math::
l(x,y)&=\frac{2\mu_x\mu_y+C_1}{\mu_x^2+\mu_y^2+C_1}, C_1=(K_1L)^2.\\
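
To make the added range claim concrete, a small sketch (shapes illustrative): two identical images should yield SSIM = 1.

import numpy as np
from mindspore import Tensor, nn

net = nn.SSIM()
img1 = Tensor(np.ones([1, 3, 16, 16]).astype(np.float32))
img2 = Tensor(np.ones([1, 3, 16, 16]).astype(np.float32))
# Identical inputs -> maximum similarity.
output = net(img1, img2)
print(output)  # [1.]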

View File

@ -90,17 +90,17 @@ class LSTM(Cell):
hidden_size (int): Number of features of hidden layer.
num_layers (int): Number of layers of stacked LSTM. Default: 1.
has_bias (bool): Whether the cell has bias `b_ih` and `b_hh`. Default: True.
batch_first (bool): Specifies whether the first dimension of input is batch_size. Default: False.
batch_first (bool): Specifies whether the first dimension of input `x` is batch_size. Default: False.
dropout (float, int): If not 0, appends a `Dropout` layer on the outputs of each
LSTM layer except the last layer. Default: 0. The range of dropout is [0.0, 1.0].
bidirectional (bool): Specifies whether it is a bidirectional LSTM. Default: False.
Inputs:
- **input** (Tensor) - Tensor of shape (seq_len, batch_size, `input_size`) or
- **x** (Tensor) - Tensor of shape (seq_len, batch_size, `input_size`) or
(batch_size, seq_len, `input_size`).
- **hx** (tuple) - A tuple of two Tensors (h_0, c_0) both of data type mindspore.float32 or
mindspore.float16 and shape (num_directions * `num_layers`, batch_size, `hidden_size`).
Data type of `hx` must be the same as `input`.
Data type of `hx` must be the same as `x`.
Outputs:
Tuple, a tuple contains (`output`, (`h_n`, `c_n`)).
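
A minimal sketch of the renamed LSTM inputs (all sizes illustrative): with batch_first=True, `x` is (batch_size, seq_len, input_size) and `hx` is the tuple (h_0, c_0).

import numpy as np
from mindspore import Tensor, nn

net = nn.LSTM(10, 16, 2, has_bias=True, batch_first=True, bidirectional=False)
x = Tensor(np.ones([3, 5, 10]).astype(np.float32))
# num_directions * num_layers = 1 * 2 rows of hidden and cell state.
h0 = Tensor(np.ones([2, 3, 16]).astype(np.float32))
c0 = Tensor(np.ones([2, 3, 16]).astype(np.float32))
output, (hn, cn) = net(x, (h0, c0))
print(output.shape)  # (3, 5, 16)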
@ -318,25 +318,26 @@ class LSTMCell(Cell):
`Long Short-Term Memory Recurrent Neural Network Architectures for Large Scale Acoustic Modeling
<https://static.googleusercontent.com/media/research.google.com/zh-CN//pubs/archive/43905.pdf>`_.
LSTMCell is a single-layer RNN, you can achieve multi-layer RNN by stacking LSTMCell.
Note:
LSTMCell is a single-layer RNN; you can achieve a multi-layer RNN by stacking LSTMCells.
Args:
input_size (int): Number of features of input.
hidden_size (int): Number of features of hidden layer.
has_bias (bool): Whether the cell has bias `b_ih` and `b_hh`. Default: True.
batch_first (bool): Specifies whether the first dimension of input is batch_size. Default: False.
batch_first (bool): Specifies whether the first dimension of input `x` is batch_size. Default: False.
dropout (float, int): If not 0, appends a `Dropout` layer on the outputs of each
LSTM layer except the last layer. Default: 0. The range of dropout is [0.0, 1.0].
bidirectional (bool): Specifies whether this is a bidirectional LSTM. If set True,
number of directions will be 2 otherwise number of directions is 1. Default: False.
Inputs:
- **input** (Tensor) - Tensor of shape (seq_len, batch_size, `input_size`).
- **x** (Tensor) - Tensor of shape (seq_len, batch_size, `input_size`).
- **h** - data type mindspore.float32 or
mindspore.float16 and shape (num_directions, batch_size, `hidden_size`).
- **c** - data type mindspore.float32 or
mindspore.float16 and shape (num_directions, batch_size, `hidden_size`).
Data type of `h' and 'c' must be the same of `input`.
Data type of `h` and `c` must be the same as that of `x`.
- **w** - data type mindspore.float32 or
mindspore.float16 and shape (`weight_size`, 1, 1).
The value of `weight_size` depends on `input_size`, `hidden_size` and `bidirectional`
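
A sketch of how the renamed `x`, `h`, `c`, `w` inputs fit together (sizes illustrative; with input_size=10, hidden_size=12 and bias, weight_size works out to 4*12*(10+12) + 2*4*12 = 1152):

import numpy as np
from mindspore import Tensor, nn

net = nn.LSTMCell(10, 12, has_bias=True, batch_first=True, bidirectional=False)
x = Tensor(np.ones([3, 5, 10]).astype(np.float32))
h = Tensor(np.ones([1, 3, 12]).astype(np.float32))
c = Tensor(np.ones([1, 3, 12]).astype(np.float32))
w = Tensor(np.ones([1152, 1, 1]).astype(np.float32))
output, h, c, _, _ = net(x, h, c, w)
print(output.shape)  # (3, 5, 12)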

View File

@ -56,8 +56,6 @@ class ReduceLogSumExp(Cell):
Reduces a dimension of a tensor by calculating exponential for all elements in the dimension,
then calculates the logarithm of the sum.
The dtype of the tensor to be reduced is number.
.. math::
ReduceLogSumExp(x) = \log(\sum(e^x))
@ -91,9 +89,9 @@ class ReduceLogSumExp(Cell):
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> input_x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> op = nn.ReduceLogSumExp(1, keep_dims=True)
>>> output = op(input_x)
>>> output = op(x)
>>> print(output.shape)
(3, 1, 5, 6)
"""
@ -187,10 +185,11 @@ class LGamma(Cell):
lgamma(+/-inf) = +inf
Thus, the behaviour of LGamma follows:
when x > 0.5, return log(Gamma(x))
when x < 0.5 and is not an integer, return the real part of Log(Gamma(x)) where Log is the complex logarithm
when x is an integer less or equal to 0, return +inf
when x = +/- inf, return +inf
- when x > 0.5, return log(Gamma(x))
- when x < 0.5 and is not an integer, return the real part of Log(Gamma(x)) where Log is the complex logarithm
- when x is an integer less than or equal to 0, return +inf
- when x = +/- inf, return +inf
Inputs:
- **x** (Tensor) - The input tensor. Only float16, float32 are supported.
@ -205,9 +204,9 @@ class LGamma(Cell):
``Ascend`` ``GPU``
Examples:
>>> input_x = Tensor(np.array([2, 3, 4]).astype(np.float32))
>>> x = Tensor(np.array([2, 3, 4]).astype(np.float32))
>>> op = nn.LGamma()
>>> output = op(input_x)
>>> output = op(x)
>>> print(output)
[3.5762787e-07 6.9314754e-01 1.7917603e+00]
"""
@ -317,9 +316,9 @@ class DiGamma(Cell):
``Ascend`` ``GPU``
Examples:
>>> input_x = Tensor(np.array([2, 3, 4]).astype(np.float32))
>>> x = Tensor(np.array([2, 3, 4]).astype(np.float32))
>>> op = nn.DiGamma()
>>> output = op(input_x)
>>> output = op(x)
>>> print(output)
[0.42278463 0.92278427 1.2561178]
"""
@ -581,10 +580,10 @@ class IGamma(Cell):
``Ascend`` ``GPU``
Examples:
>>> input_a = Tensor(np.array([2.0, 4.0, 6.0, 8.0]).astype(np.float32))
>>> input_x = Tensor(np.array([2.0, 3.0, 4.0, 5.0]).astype(np.float32))
>>> a = Tensor(np.array([2.0, 4.0, 6.0, 8.0]).astype(np.float32))
>>> x = Tensor(np.array([2.0, 3.0, 4.0, 5.0]).astype(np.float32))
>>> igamma = nn.IGamma()
>>> output = igamma(input_a, input_x)
>>> output = igamma(a, x)
>>> print (output)
[0.593994 0.35276785 0.21486944 0.13337152]
"""
@ -670,10 +669,10 @@ class LBeta(Cell):
``Ascend`` ``GPU``
Examples:
>>> input_x = Tensor(np.array([2.0, 4.0, 6.0, 8.0]).astype(np.float32))
>>> input_y = Tensor(np.array([2.0, 3.0, 14.0, 15.0]).astype(np.float32))
>>> x = Tensor(np.array([2.0, 4.0, 6.0, 8.0]).astype(np.float32))
>>> y = Tensor(np.array([2.0, 3.0, 14.0, 15.0]).astype(np.float32))
>>> lbeta = nn.LBeta()
>>> output = lbeta(input_y, input_x)
>>> output = lbeta(y, x)
>>> print(output)
[-1.7917596 -4.094345 -12.000229 -14.754799]
"""
@ -820,15 +819,16 @@ class MatMul(Cell):
nn.MatMul will be deprecated in future versions. Please use ops.matmul instead.
- If both x1 and x2 are 1-dimensional, the dot product is returned.
- If the dimensions of x1 and x2 are all not greater than 2, the matrix-matrix product will be returned. Note if
one of 'x1' and 'x2' is 1-dimensional, the argument will first be expanded to 2 dimension. After the matrix
multiply, the expanded dimension will be removed.
- If at least one of x1 and x2 is N-dimensional (N>2), the none-matrix dimensions(batch) of inputs will be
broadcasted and must be broadcastable. Note if one of 'x1' and 'x2' is 1-dimensional, the argument will first be
expanded to 2 dimension and then the none-matrix dimensions will be broadcasted. After the matrix multiply, the
expanded dimension will be removed. For example, if `x1` is a :math:`(j \times 1 \times n \times m)` tensor and
`x2` is a :math:`(k \times m \times p)` tensor, the output will be a :math:`(j \times k \times n \times p)`
- If both `x1` and `x2` are 1-dimensional, the dot product is returned.
- If the dimensions of `x1` and `x2` are all not greater than 2, the matrix-matrix product will
be returned. Note if one of `x1` and `x2` is 1-dimensional, the argument will first be
expanded to 2 dimensions. After the matrix multiply, the expanded dimension will be removed.
- If at least one of `x1` and `x2` is N-dimensional (N>2), the non-matrix dimensions (batch) of inputs
will be broadcasted and must be broadcastable. Note if one of `x1` and `x2` is 1-dimensional,
the argument will first be expanded to 2 dimensions and then the non-matrix dimensions will be broadcasted.
After the matrix multiply, the expanded dimension will be removed. For example,
if `x1` is a :math:`(j \times 1 \times n \times m)` tensor and
`x2` is a :math:`(k \times m \times p)` tensor, the output will be a :math:`(j \times k \times n \times p)`
tensor.
Args:
@ -836,25 +836,25 @@ class MatMul(Cell):
transpose_x2 (bool): If true, `x2` is transposed before multiplication. Default: False.
Inputs:
- **input_x1** (Tensor) - The first tensor to be multiplied.
- **input_x2** (Tensor) - The second tensor to be multiplied.
- **x1** (Tensor) - The first tensor to be multiplied.
- **x2** (Tensor) - The second tensor to be multiplied.
Outputs:
Tensor, the shape of the output tensor depends on the dimension of input tensors.
Raises:
TypeError: If `transpose_x1` or `transpose_x2` is not a bool.
ValueError: If the column of matrix dimensions of `input_x1` is not equal to
the row of matrix dimensions of `input_x2`.
ValueError: If the column of matrix dimensions of `x1` is not equal to
the row of matrix dimensions of `x2`.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> net = nn.MatMul()
>>> input_x1 = Tensor(np.ones(shape=[3, 2, 3]), mindspore.float32)
>>> input_x2 = Tensor(np.ones(shape=[3, 4]), mindspore.float32)
>>> output = net(input_x1, input_x2)
>>> x1 = Tensor(np.ones(shape=[3, 2, 3]), mindspore.float32)
>>> x2 = Tensor(np.ones(shape=[3, 4]), mindspore.float32)
>>> output = net(x1, x2)
>>> print(output.shape)
(3, 2, 4)
"""
@ -918,24 +918,50 @@ class Moments(Cell):
If false, don't keep these dimensions. Default: None.
Inputs:
- **input_x** (Tensor) - The tensor to be calculated. Only float16 and float32 are supported.
- **x** (Tensor) - The tensor to be calculated. Only float16 and float32 are supported.
:math:`(N,*)` where :math:`*` means, any number of additional dimensions.
Outputs:
- **mean** (Tensor) - The mean of input x, with the same date type as input x.
- **variance** (Tensor) - The variance of input x, with the same date type as input x.
- **mean** (Tensor) - The mean of `x`, with the same data type as input `x`.
- **variance** (Tensor) - The variance of `x`, with the same data type as input `x`.
Raises:
TypeError: If `axis` is not one of int, tuple, None.
TypeError: If `keep_dims` is neither bool nor None.
TypeError: If dtype of `input_x` is neither float16 nor float32.
TypeError: If dtype of `x` is neither float16 nor float32.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> x = Tensor(np.array([[[[1, 2, 3, 4], [3, 4, 5, 6]]]]), mindspore.float32)
>>> net = nn.Moments(axis=0, keep_dims=True)
>>> output = net(x)
>>> print(output)
(Tensor(shape=[1, 1, 2, 4], dtype=Float32, value=
[[[[ 1.00000000e+00, 2.00000000e+00, 3.00000000e+00, 4.00000000e+00],
[ 3.00000000e+00, 4.00000000e+00, 5.00000000e+00, 6.00000000e+00]]]]),
Tensor(shape=[1, 1, 2, 4], dtype=Float32, value=
[[[[ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00]]]]))
>>> net = nn.Moments(axis=1, keep_dims=True)
>>> output = net(x)
>>> print(output)
(Tensor(shape=[1, 1, 2, 4], dtype=Float32, value=
[[[[ 1.00000000e+00, 2.00000000e+00, 3.00000000e+00, 4.00000000e+00],
[ 3.00000000e+00, 4.00000000e+00, 5.00000000e+00, 6.00000000e+00]]]]),
Tensor(shape=[1, 1, 2, 4], dtype=Float32, value=
[[[[ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00]]]]))
>>> net = nn.Moments(axis=2, keep_dims=True)
>>> output = net(x)
>>> print(output)
(Tensor(shape=[1, 1, 1, 4], dtype=Float32, value=
[[[[ 2.00000000e+00, 3.00000000e+00, 4.00000000e+00, 5.00000000e+00]]]]),
Tensor(shape=[1, 1, 1, 4], dtype=Float32, value=
[[[[ 1.00000000e+00, 1.00000000e+00, 1.00000000e+00, 1.00000000e+00]]]]))
>>> net = nn.Moments(axis=3, keep_dims=True)
>>> input_x = Tensor(np.array([[[[1, 2, 3, 4], [3, 4, 5, 6]]]]), mindspore.float32)
>>> output = net(input_x)
>>> output = net(x)
>>> print(output)
(Tensor(shape=[1, 1, 2, 1], dtype=Float32, value=
[[[[ 2.50000000e+00],
@ -983,22 +1009,22 @@ class MatInverse(Cell):
Calculates the inverse of a Positive-Definite Hermitian matrix using Cholesky decomposition.
Inputs:
- **a** (Tensor[Number]) - The input tensor. It must be a positive-definite matrix.
- **x** (Tensor[Number]) - The input tensor. It must be a positive-definite matrix.
With float16 or float32 data type.
Outputs:
Tensor, has the same dtype as the `a`.
Tensor, has the same dtype as the `x`.
Raises:
TypeError: If dtype of `a` is neither float16 nor float32.
TypeError: If dtype of `x` is neither float16 nor float32.
Supported Platforms:
``GPU``
Examples:
>>> input_a = Tensor(np.array([[4, 12, -16], [12, 37, -43], [-16, -43, 98]]).astype(np.float32))
>>> x = Tensor(np.array([[4, 12, -16], [12, 37, -43], [-16, -43, 98]]).astype(np.float32))
>>> op = nn.MatInverse()
>>> output = op(input_a)
>>> output = op(x)
>>> print(output)
[[49.36112 -13.555558 2.1111116]
[-13.555558 3.7777784 -0.5555557]
@ -1024,22 +1050,22 @@ class MatDet(Cell):
Calculates the determinant of a Positive-Definite Hermitian matrix using Cholesky decomposition.
Inputs:
- **a** (Tensor[Number]) - The input tensor. It must be a positive-definite matrix.
- **x** (Tensor[Number]) - The input tensor. It must be a positive-definite matrix.
With float16 or float32 data type.
Outputs:
Tensor, has the same dtype as the `a`.
Tensor, has the same dtype as the `x`.
Raises:
TypeError: If dtype of `a` is neither float16 nor float32.
TypeError: If dtype of `x` is neither float16 nor float32.
Supported Platforms:
``GPU``
Examples:
>>> input_a = Tensor(np.array([[4, 12, -16], [12, 37, -43], [-16, -43, 98]]).astype(np.float32))
>>> x = Tensor(np.array([[4, 12, -16], [12, 37, -43], [-16, -43, 98]]).astype(np.float32))
>>> op = nn.MatDet()
>>> output = op(input_a)
>>> output = op(x)
>>> print(output)
35.999996
"""

View File

@ -303,7 +303,7 @@ class BatchNorm1d(_BatchNorm):
the running mean and variance. Default: None.
Inputs:
- **input** (Tensor) - Tensor of shape :math:`(N, C_{in})`.
- **x** (Tensor) - Tensor of shape :math:`(N, C_{in})`.
Outputs:
Tensor, the normalized, scaled, offset tensor, of shape :math:`(N, C_{out})`.
@ -322,9 +322,9 @@ class BatchNorm1d(_BatchNorm):
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> net = nn.BatchNorm1d(num_features=4)
>>> input_tensor = Tensor(np.array([[0.7, 0.5, 0.5, 0.6],
... [0.5, 0.4, 0.6, 0.9]]).astype(np.float32))
>>> output = net(input_tensor)
>>> x = Tensor(np.array([[0.7, 0.5, 0.5, 0.6],
... [0.5, 0.4, 0.6, 0.9]]).astype(np.float32))
>>> output = net(x)
>>> print(output)
[[ 0.6999965 0.4999975 0.4999975 0.59999704 ]
[ 0.4999975 0.399998 0.59999704 0.89999545 ]]
@ -407,7 +407,7 @@ class BatchNorm2d(_BatchNorm):
Default: 'NCHW'.
Inputs:
- **input** (Tensor) - Tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})`.
- **x** (Tensor) - Tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})`.
Outputs:
Tensor, the normalized, scaled, offset tensor, of shape :math:`(N, C_{out}, H_{out}, W_{out})`.
@ -427,8 +427,8 @@ class BatchNorm2d(_BatchNorm):
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> net = nn.BatchNorm2d(num_features=3)
>>> input_tensor = Tensor(np.ones([1, 3, 2, 2]).astype(np.float32))
>>> output = net(input_tensor)
>>> x = Tensor(np.ones([1, 3, 2, 2]).astype(np.float32))
>>> output = net(x)
>>> print(output)
[[[[ 0.999995 0.999995 ]
[ 0.999995 0.999995 ]]
@ -512,7 +512,7 @@ class BatchNorm3d(Cell):
data_format (str): The optional value for data format is 'NCDHW'. Default: 'NCDHW'.
Inputs:
- **input** (Tensor) - Tensor of shape :math:`(N, C_{in}, D_{in}, H_{in}, W_{in})`.
- **x** (Tensor) - Tensor of shape :math:`(N, C_{in}, D_{in}, H_{in}, W_{in})`.
Outputs:
Tensor, the normalized, scaled, offset tensor, of shape :math:`(N, C_{out}, D_{out},H_{out}, W_{out})`.
@ -532,8 +532,8 @@ class BatchNorm3d(Cell):
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> net = nn.BatchNorm3d(num_features=3)
>>> input_tensor = Tensor(np.ones([16, 3, 10, 32, 32]).astype(np.float32))
>>> output = net(input_tensor)
>>> x = Tensor(np.ones([16, 3, 10, 32, 32]).astype(np.float32))
>>> output = net(x)
>>> print(output.shape)
(16, 3, 10, 32, 32)
"""
@ -614,7 +614,7 @@ class GlobalBatchNorm(_BatchNorm):
mean and variance. Default: None.
Inputs:
- **input** (Tensor) - Tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})`.
- **x** (Tensor) - Tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})`.
Outputs:
Tensor, the normalized, scaled, offset tensor, of shape :math:`(N, C_{out}, H_{out}, W_{out})`.
@ -645,8 +645,8 @@ class GlobalBatchNorm(_BatchNorm):
>>> context.reset_auto_parallel_context()
>>> context.set_auto_parallel_context(parallel_mode=ParallelMode.DATA_PARALLEL)
>>> global_bn_op = nn.GlobalBatchNorm(num_features=3, device_num_each_group=2)
>>> input_tensor = Tensor(np.ones([1, 3, 2, 2]).astype(np.float32))
>>> output = global_bn_op(input_tensor)
>>> x = Tensor(np.ones([1, 3, 2, 2]).astype(np.float32))
>>> output = global_bn_op(x)
>>> print(output)
[[[[ 0.999995 0.999995 ]
[ 0.999995 0.999995 ]]
@ -733,7 +733,7 @@ class SyncBatchNorm(_BatchNorm):
synchronization across all devices.
Inputs:
- **input** (Tensor) - Tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})`.
- **x** (Tensor) - Tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})`.
Outputs:
Tensor, the normalized, scaled, offset tensor, of shape :math:`(N, C_{out}, H_{out}, W_{out})`.
@ -765,8 +765,8 @@ class SyncBatchNorm(_BatchNorm):
>>> context.reset_auto_parallel_context()
>>> context.set_auto_parallel_context(parallel_mode=ParallelMode.DATA_PARALLEL)
>>> sync_bn_op = nn.SyncBatchNorm(num_features=3, process_groups=[[0, 1], [2, 3]])
>>> input_tensor = Tensor(np.ones([1, 3, 2, 2]), mstype.float32)
>>> output = sync_bn_op(input_tensor)
>>> x = Tensor(np.ones([1, 3, 2, 2]), mstype.float32)
>>> output = sync_bn_op(x)
>>> print(output)
[[[[ 0.999995 0.999995 ]
[ 0.999995 0.999995 ]]
@ -836,11 +836,11 @@ class LayerNorm(Cell):
epsilon (float): A value added to the denominator for numerical stability. Default: 1e-7.
Inputs:
- **input_x** (Tensor) - The shape of 'input_x' is :math:`(x_1, x_2, ..., x_R)`,
- **x** (Tensor) - The shape of `x` is :math:`(x_1, x_2, ..., x_R)`,
and `input_shape[begin_norm_axis:]` is equal to `normalized_shape`.
Outputs:
Tensor, the normalized and scaled offset tensor, has the same shape and data type as the `input_x`.
Tensor, the normalized and scaled offset tensor, has the same shape and data type as the `x`.
Raises:
TypeError: If `normalized_shape` is neither a list nor tuple.
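
A short sketch of the renamed `x` (shapes illustrative): `normalized_shape` must match `x.shape[begin_norm_axis:]`.

import numpy as np
import mindspore
from mindspore import Tensor, nn

x = Tensor(np.ones([20, 5, 10, 10]), mindspore.float32)
shape1 = x.shape[1:]  # (5, 10, 10) == x.shape[begin_norm_axis:]
m = nn.LayerNorm(shape1, begin_norm_axis=1, begin_params_axis=1)
output = m(x)
print(output.shape)  # (20, 5, 10, 10)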
@ -931,11 +931,11 @@ class InstanceNorm2d(Cell):
The values of str refer to the function `initializer` including 'zeros', 'ones', etc. Default: 'zeros'.
Inputs:
- **input** (Tensor) - Tensor of shape :math:`(N, C, H, W)`. Data type: float16 or float32.
- **x** (Tensor) - Tensor of shape :math:`(N, C, H, W)`. Data type: float16 or float32.
Outputs:
Tensor, the normalized, scaled, offset tensor, of shape :math:`(N, C, H, W)`. Same type and
shape as the `input_x`.
shape as the `x`.
Supported Platforms:
``GPU``
@ -958,8 +958,8 @@ class InstanceNorm2d(Cell):
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> net = nn.InstanceNorm2d(3)
>>> input_tensor = Tensor(np.ones([2, 3, 2, 2]), mindspore.float32)
>>> output = net(input_tensor)
>>> x = Tensor(np.ones([2, 3, 2, 2]), mindspore.float32)
>>> output = net(x)
>>> print(output.shape)
(2, 3, 2, 2)
"""
@ -1050,10 +1050,10 @@ class GroupNorm(Cell):
'he_uniform', etc. Default: 'zeros'. If beta_init is a Tensor, the shape must be [num_channels].
Inputs:
- **input_x** (Tensor) - The input feature with shape [N, C, H, W].
- **x** (Tensor) - The input feature with shape [N, C, H, W].
Outputs:
Tensor, the normalized and scaled offset tensor, has the same shape and data type as the `input_x`.
Tensor, the normalized and scaled offset tensor, has the same shape and data type as the `x`.
Raises:
TypeError: If `num_groups` or `num_channels` is not an int.
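
A minimal sketch (sizes illustrative): num_channels must be divisible by num_groups, and the output keeps the [N, C, H, W] shape of `x`.

import numpy as np
from mindspore import Tensor, nn

group_norm_op = nn.GroupNorm(2, 2)  # 2 groups over 2 channels
x = Tensor(np.ones([1, 2, 4, 4], np.float32))
output = group_norm_op(x)
print(output.shape)  # (1, 2, 4, 4)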

View File

@ -105,7 +105,7 @@ class MaxPool2d(_PoolNd):
Default: 'NCHW'.
Inputs:
- **input** (Tensor) - Tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})`.
- **x** (Tensor) - Tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})`.
Outputs:
Tensor of shape :math:`(N, C_{out}, H_{out}, W_{out})`.
@ -115,7 +115,7 @@ class MaxPool2d(_PoolNd):
ValueError: If `pad_mode` is neither 'valid' nor 'same' (case insensitive).
ValueError: If `data_format` is neither 'NCHW' nor 'NHWC'.
ValueError: If `kernel_size` or `strides` is less than 1.
ValueError: If length of shape of `input` is not equal to 4.
ValueError: If length of shape of `x` is not equal to 4.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
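
A minimal sketch of the renamed `x` (sizes illustrative): a 3x3 window with stride 1 over a 4x4 map leaves a 2x2 output.

import numpy as np
import mindspore
from mindspore import Tensor, nn

pool = nn.MaxPool2d(kernel_size=3, stride=1)
x = Tensor(np.random.randint(0, 10, [1, 2, 4, 4]), mindspore.float32)
output = pool(x)
print(output.shape)  # (1, 2, 2, 2)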
@ -173,7 +173,7 @@ class MaxPool1d(_PoolNd):
will be returned without padding. Extra pixels will be discarded.
Inputs:
- **input** (Tensor) - Tensor of shape :math:`(N, C, L_{in})`.
- **x** (Tensor) - Tensor of shape :math:`(N, C, L_{in})`.
Outputs:
Tensor of shape :math:`(N, C, L_{out}))`.
@ -183,7 +183,7 @@ class MaxPool1d(_PoolNd):
ValueError: If `pad_mode` is neither 'valid' nor 'same' (case insensitive).
ValueError: If `data_format` is neither 'NCHW' nor 'NHWC'.
ValueError: If `kernel_size` or `strides` is less than 1.
ValueError: If length of shape of `input` is not equal to 4.
ValueError: If length of shape of `x` is not equal to 3.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
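
Same idea in 1D (sizes illustrative): `x` is (N, C, L_in), and a kernel of 3 with stride 1 over length 4 gives L_out = 2.

import numpy as np
import mindspore
from mindspore import Tensor, nn

max_pool = nn.MaxPool1d(kernel_size=3, stride=1)
x = Tensor(np.random.randint(0, 10, [1, 2, 4]), mindspore.float32)
output = max_pool(x)
print(output.shape)  # (1, 2, 2)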
@ -263,7 +263,7 @@ class AvgPool2d(_PoolNd):
Inputs:
- **input** (Tensor) - Tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})`.
- **x** (Tensor) - Tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})`.
Outputs:
Tensor of shape :math:`(N, C_{out}, H_{out}, W_{out})`.
@ -273,7 +273,7 @@ class AvgPool2d(_PoolNd):
ValueError: If `pad_mode` is neither 'valid' nor 'same' (case insensitive).
ValueError: If `data_format` is neither 'NCHW' nor 'NHWC'.
ValueError: If `kernel_size` or `strides` is less than 1.
ValueError: If length of shape of `input` is not equal to 4.
ValueError: If length of shape of `x` is not equal to 4.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
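
The average-pooling counterpart, as a minimal sketch (sizes illustrative):

import numpy as np
import mindspore
from mindspore import Tensor, nn

pool = nn.AvgPool2d(kernel_size=3, stride=1)
x = Tensor(np.random.randint(0, 10, [1, 2, 4, 4]), mindspore.float32)
output = pool(x)
print(output.shape)  # (1, 2, 2, 2)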
@ -336,7 +336,7 @@ class AvgPool1d(_PoolNd):
Inputs:
- **input** (Tensor) - Tensor of shape :math:`(N, C_{in}, L_{in})`.
- **x** (Tensor) - Tensor of shape :math:`(N, C_{in}, L_{in})`.
Outputs:
Tensor of shape :math:`(N, C_{out}, L_{out})`.
@ -345,7 +345,7 @@ class AvgPool1d(_PoolNd):
TypeError: If `kernel_size` or `stride` is not an int.
ValueError: If `pad_mode` is neither 'same' nor 'valid' (case insensitive).
ValueError: If `kernel_size` or `strides` is less than 1.
ValueError: If length of shape of `input` is not equal to 3.
ValueError: If length of shape of `x` is not equal to 3.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
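
And in 1D (sizes illustrative): a kernel spanning the whole length 6 collapses it to L_out = 1.

import numpy as np
import mindspore
from mindspore import Tensor, nn

pool = nn.AvgPool1d(kernel_size=6, stride=1)
x = Tensor(np.random.randint(0, 10, [1, 3, 6]), mindspore.float32)
output = pool(x)
print(output.shape)  # (1, 3, 1)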

View File

@ -339,10 +339,10 @@ class FakeQuantWithMinMaxObserver(UniformQuantObserver):
mode (string): Optional quantization mode, currently only `DEFAULT`(QAT) and `LEARNED_SCALE` are supported.
Default: ("DEFAULT")
Inputs:
- **input** (Tensor) - The input of FakeQuantWithMinMaxObserver.
- **x** (Tensor) - The input of FakeQuantWithMinMaxObserver. The input dimension is preferably 2D or 4D.
Outputs:
Tensor, with the same type and shape as the `input`.
Tensor, with the same type and shape as the `x`.
Raises:
TypeError: If `min_init` or `max_init` is not int, float or list.
@ -360,8 +360,8 @@ class FakeQuantWithMinMaxObserver(UniformQuantObserver):
>>> import mindspore
>>> from mindspore import Tensor
>>> fake_quant = nn.FakeQuantWithMinMaxObserver()
>>> input_data = Tensor(np.array([[1, 2, 1], [-2, 0, -1]]), mindspore.float32)
>>> result = fake_quant(input_data)
>>> x = Tensor(np.array([[1, 2, 1], [-2, 0, -1]]), mindspore.float32)
>>> result = fake_quant(x)
>>> print(result)
[[ 0.9882355 1.9764705 0.9882355]
[-1.9764705 0. -0.9882355]]
@ -584,7 +584,7 @@ class Conv2dBnFoldQuantOneConv(Cell):
kernel_size (Union[int, tuple[int]]): Specifies the height and width of the 2D convolution window.
stride (Union[int, tuple[int]]): Specifies stride for all spatial dimensions with the same value. Default: 1.
pad_mode (str): Specifies padding mode. The optional values are "same", "valid", "pad". Default: "same".
padding (Union[int, tuple[int]]): Implicit paddings on both sides of the input. Default: 0.
padding (Union[int, tuple[int]]): Implicit paddings on both sides of the input `x`. Default: 0.
eps (float): Parameters for Batch Normalization. Default: 1e-5.
momentum (float): Parameters for Batch Normalization op. Default: 0.997.
dilation (Union[int, tuple[int]]): Specifies the dilation rate to use for dilated convolution. Default: 1.
@ -610,7 +610,7 @@ class Conv2dBnFoldQuantOneConv(Cell):
quant_dtype (QuantDtype): Specifies the FakeQuant datatype. Default: QuantDtype.INT8.
Inputs:
- **input** (Tensor) - Tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})`.
- **x** (Tensor) - Tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})`.
Outputs:
Tensor of shape :math:`(N, C_{out}, H_{out}, W_{out})`.
@ -634,8 +634,8 @@ class Conv2dBnFoldQuantOneConv(Cell):
>>> qconfig = quant.create_quant_config()
>>> conv2d_bnfold = nn.Conv2dBnFoldQuantOneConv(1, 1, kernel_size=(2, 2), stride=(1, 1), pad_mode="valid",
... weight_init="ones", quant_config=qconfig)
>>> input_data = Tensor(np.array([[[[1, 0, 3], [1, 4, 7], [2, 5, 2]]]]), mindspore.float32)
>>> result = conv2d_bnfold(input_data)
>>> x = Tensor(np.array([[[[1, 0, 3], [1, 4, 7], [2, 5, 2]]]]), mindspore.float32)
>>> result = conv2d_bnfold(x)
>>> print(result)
[[[[5.9296875 13.8359375]
[11.859375 17.78125]]]]
@ -822,7 +822,7 @@ class Conv2dBnFoldQuant(Cell):
kernel_size (Union[int, tuple[int]]): Specifies the height and width of the 2D convolution window.
stride (Union[int, tuple[int]]): Specifies stride for all spatial dimensions with the same value. Default: 1.
pad_mode (str): Specifies padding mode. The optional values are "same", "valid", "pad". Default: "same".
padding (Union[int, tuple[int]]): Implicit paddings on both sides of the input. Default: 0.
padding (Union[int, tuple[int]]): Implicit paddings on both sides of the input `x`. Default: 0.
eps (float): Parameters for Batch Normalization. Default: 1e-5.
momentum (float): Parameters for Batch Normalization op. Default: 0.997.
dilation (Union[int, tuple[int]]): Specifies the dilation rate to use for dilated convolution. Default: 1.
@ -850,7 +850,7 @@ class Conv2dBnFoldQuant(Cell):
Default: 100000.
Inputs:
- **input** (Tensor) - Tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})`.
- **x** (Tensor) - Tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})`.
Outputs:
Tensor of shape :math:`(N, C_{out}, H_{out}, W_{out})`.
@ -874,8 +874,8 @@ class Conv2dBnFoldQuant(Cell):
>>> qconfig = quant.create_quant_config()
>>> conv2d_bnfold = nn.Conv2dBnFoldQuant(1, 1, kernel_size=(2, 2), stride=(1, 1), pad_mode="valid",
... weight_init="ones", quant_config=qconfig)
>>> input_data = Tensor(np.array([[[[1, 0, 3], [1, 4, 7], [2, 5, 2]]]]), mindspore.float32)
>>> result = conv2d_bnfold(input_data)
>>> x = Tensor(np.array([[[[1, 0, 3], [1, 4, 7], [2, 5, 2]]]]), mindspore.float32)
>>> result = conv2d_bnfold(x)
>>> print(result)
[[[[5.9296875 13.8359375]
[11.859375 17.78125]]]]
@ -1049,7 +1049,7 @@ class Conv2dBnWithoutFoldQuant(Cell):
kernel_size (Union[int, tuple[int]]): Specifies the height and width of the 2D convolution window.
stride (Union[int, tuple[int]]): Specifies stride for all spatial dimensions with the same value. Default: 1.
pad_mode (str): Specifies padding mode. The optional values are "same", "valid", "pad". Default: "same".
padding (Union[int, tuple[int]]): Implicit paddings on both sides of the input. Default: 0.
padding (Union[int, tuple[int]]): Implicit paddings on both sides of the input `x`. Default: 0.
dilation (Union[int, tuple[int]]): Specifies the dilation rate to use for dilated convolution. Default: 1.
group (int): Splits filter into groups, `in_channels` and `out_channels` must be
divisible by the number of groups. Default: 1.
@ -1065,7 +1065,7 @@ class Conv2dBnWithoutFoldQuant(Cell):
quant_dtype (QuantDtype): Specifies the FakeQuant datatype. Default: QuantDtype.INT8.
Inputs:
- **input** (Tensor) - Tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})`.
- **x** (Tensor) - Tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})`.
Outputs:
Tensor of shape :math:`(N, C_{out}, H_{out}, W_{out})`.
@ -1088,8 +1088,8 @@ class Conv2dBnWithoutFoldQuant(Cell):
>>> qconfig = quant.create_quant_config()
>>> conv2d_no_bnfold = nn.Conv2dBnWithoutFoldQuant(1, 1, kernel_size=(2, 2), stride=(1, 1), pad_mode="valid",
... weight_init='ones', quant_config=qconfig)
>>> input_data = Tensor(np.array([[[[1, 0, 3], [1, 4, 7], [2, 5, 2]]]]), mindspore.float32)
>>> result = conv2d_no_bnfold(input_data)
>>> x = Tensor(np.array([[[[1, 0, 3], [1, 4, 7], [2, 5, 2]]]]), mindspore.float32)
>>> result = conv2d_no_bnfold(x)
>>> print(result)
[[[[5.929658 13.835868]
[11.859316 17.78116]]]]
@ -1194,7 +1194,7 @@ class Conv2dQuant(Cell):
kernel_size (Union[int, tuple[int]]): Specifies the height and width of the 2D convolution window.
stride (Union[int, tuple[int]]): Specifies stride for all spatial dimensions with the same value. Default: 1.
pad_mode (str): Specifies padding mode. The optional values are "same", "valid", "pad". Default: "same".
padding (Union[int, tuple[int]]): Implicit paddings on both sides of the input. Default: 0.
padding (Union[int, tuple[int]]): Implicit paddings on both sides of the input `x`. Default: 0.
dilation (Union[int, tuple[int]]): Specifies the dilation rate to use for dilated convolution. Default: 1.
group (int): Splits filter into groups, `in_channels` and `out_channels` must be
divisible by the number of groups. Default: 1.
@ -1208,7 +1208,8 @@ class Conv2dQuant(Cell):
quant_dtype (QuantDtype): Specifies the FakeQuant datatype. Default: QuantDtype.INT8.
Inputs:
- **input** (Tensor) - Tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})`.
- **x** (Tensor) - Tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})`.
The input dimension is preferably 2D or 4D.
Outputs:
Tensor of shape :math:`(N, C_{out}, H_{out}, W_{out})`.
@ -1231,8 +1232,8 @@ class Conv2dQuant(Cell):
>>> qconfig = quant.create_quant_config()
>>> conv2d_quant = nn.Conv2dQuant(1, 1, kernel_size=(2, 2), stride=(1, 1), pad_mode="valid",
... weight_init='ones', quant_config=qconfig)
>>> input_data = Tensor(np.array([[[[1, 0, 3], [1, 4, 7], [2, 5, 2]]]]), mindspore.float32)
>>> result = conv2d_quant(input_data)
>>> x = Tensor(np.array([[[[1, 0, 3], [1, 4, 7], [2, 5, 2]]]]), mindspore.float32)
>>> result = conv2d_quant(x)
>>> print(result)
[[[[5.9296875 13.8359375]
[11.859375 17.78125]]]]
@ -1332,9 +1333,9 @@ class DenseQuant(Cell):
in_channels (int): The dimension of the input space.
out_channels (int): The dimension of the output space.
weight_init (Union[Tensor, str, Initializer, numbers.Number]): The trainable weight_init parameter. The dtype
is same as input. The values of str refer to the function `initializer`. Default: 'normal'.
is the same as `x`. The values of str refer to the function `initializer`. Default: 'normal'.
bias_init (Union[Tensor, str, Initializer, numbers.Number]): The trainable bias_init parameter. The dtype is
same as input. The values of str refer to the function `initializer`. Default: 'zeros'.
the same as `x`. The values of str refer to the function `initializer`. Default: 'zeros'.
has_bias (bool): Specifies whether the layer uses a bias vector. Default: True.
activation (Union[str, Cell, Primitive]): The regularization function applied to the output of the layer,
eg. 'relu'. Default: None.
@ -1344,7 +1345,8 @@ class DenseQuant(Cell):
quant_dtype (QuantDtype): Specifies the FakeQuant datatype. Default: QuantDtype.INT8.
Inputs:
- **input** (Tensor) - Tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})`.
- **x** (Tensor) - Tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})`.
The input dimension is preferably 2D or 4D.
Outputs:
Tensor of shape :math:`(N, C_{out}, H_{out}, W_{out})`.
@ -1368,8 +1370,8 @@ class DenseQuant(Cell):
>>> from mindspore import Tensor
>>> qconfig = quant.create_quant_config()
>>> dense_quant = nn.DenseQuant(2, 1, weight_init='ones', quant_config=qconfig)
>>> input_data = Tensor(np.array([[1, 5], [3, 4]]), mindspore.float32)
>>> result = dense_quant(input_data)
>>> x = Tensor(np.array([[1, 5], [3, 4]]), mindspore.float32)
>>> result = dense_quant(x)
>>> print(result)
[[5.929413]
[6.9176483]]
@ -1468,10 +1470,10 @@ class ActQuant(_QuantActivation):
quant_dtype (QuantDtype): Specifies the FakeQuant datatype. Default: QuantDtype.INT8.
Inputs:
- **input** (Tensor) - The input of ActQuant.
- **x** (Tensor) - The input of ActQuant. The input dimension is preferably 2D or 4D.
Outputs:
Tensor, with the same type and shape as the `input`.
Tensor, with the same type and shape as the `x`.
Raises:
TypeError: If `activation` is not an instance of Cell.
@ -1486,8 +1488,8 @@ class ActQuant(_QuantActivation):
>>> from mindspore import Tensor
>>> qconfig = quant.create_quant_config()
>>> act_quant = nn.ActQuant(nn.ReLU(), quant_config=qconfig)
>>> input_data = Tensor(np.array([[1, 2, -1], [-2, 0, -1]]), mindspore.float32)
>>> result = act_quant(input_data)
>>> x = Tensor(np.array([[1, 2, -1], [-2, 0, -1]]), mindspore.float32)
>>> result = act_quant(x)
>>> print(result)
[[0.9882355 1.9764705 0. ]
[0. 0. 0. ]]
@ -1554,14 +1556,15 @@ class TensorAddQuant(Cell):
quant_dtype (QuantDtype): Specifies the FakeQuant datatype. Default: QuantDtype.INT8.
Inputs:
- **input_x1** (Tensor) - The first tensor of TensorAddQuant.
- **input_x2** (Tensor) - The second tensor of TensorAddQuant.
- **x1** (Tensor) - The first tensor of TensorAddQuant. The input dimension is preferably 2D or 4D.
- **x2** (Tensor) - The second tensor of TensorAddQuant. Has the same shape as `x1`.
Outputs:
Tensor, with the same type and shape as the `input_x1`.
Tensor, with the same type and shape as the `x1`.
Raises:
TypeError: If `ema_decay` is not a float.
ValueError: If the shape of `x2` is different from that of `x1`.
Supported Platforms:
``Ascend`` ``GPU``
@ -1572,9 +1575,9 @@ class TensorAddQuant(Cell):
>>> from mindspore import Tensor
>>> qconfig = quant.create_quant_config()
>>> add_quant = nn.TensorAddQuant(quant_config=qconfig)
>>> input_x1 = Tensor(np.array([[1, 2, 1], [-2, 0, -1]]), mindspore.float32)
>>> input_x2 = Tensor(np.ones((2, 3)), mindspore.float32)
>>> output = add_quant(input_x1, input_x2)
>>> x1 = Tensor(np.array([[1, 2, 1], [-2, 0, -1]]), mindspore.float32)
>>> x2 = Tensor(np.ones((2, 3)), mindspore.float32)
>>> output = add_quant(x1, x2)
>>> print(output)
[[ 1.9764705 3.011765 1.9764705]
[-0.9882355 0.9882355 0. ]]
@ -1615,14 +1618,15 @@ class MulQuant(Cell):
quant_dtype (QuantDtype): Specifies the FakeQuant datatype. Default: QuantDtype.INT8.
Inputs:
- **input_x1** (Tensor) - The first tensor of MulQuant.
- **input_x2** (Tensor) - The second tensor of MulQuant.
- **x1** (Tensor) - The first tensor of MulQuant. The input dimension is preferably 2D or 4D.
- **x2** (Tensor) - The second tensor of MulQuant. Has the same shape as `x1`.
Outputs:
Tensor, with the same type and shape as the `input_x1`.
Tensor, with the same type and shape as the `x1`.
Raises:
TypeError: If `ema_decay` is not a float.
ValueError: If the shape of `x2` is different from that of `x1`.
Supported Platforms:
``Ascend`` ``GPU``
@ -1633,9 +1637,9 @@ class MulQuant(Cell):
>>> from mindspore import Tensor
>>> qconfig = quant.create_quant_config()
>>> mul_quant = nn.MulQuant(quant_config=qconfig)
>>> input_x1 = Tensor(np.array([[1, 2, 1], [-2, 0, -1]]), mindspore.float32)
>>> input_x2 = Tensor(np.ones((2, 3)) * 2, mindspore.float32)
>>> output = mul_quant(input_x1, input_x2)
>>> x1 = Tensor(np.array([[1, 2, 1], [-2, 0, -1]]), mindspore.float32)
>>> x2 = Tensor(np.ones((2, 3)) * 2, mindspore.float32)
>>> output = mul_quant(x1, x2)
>>> print(output)
[[ 1.9764705 4.0000005 1.9764705]
[-4. 0. -1.9764705]]

View File

@ -48,15 +48,15 @@ class DenseThor(Cell):
in_channels (int): The number of the input channels.
out_channels (int): The number of the output channels.
weight_init (Union[Tensor, str, Initializer, numbers.Number]): The trainable weight_init parameter. The dtype
is same as input x. The values of str refer to the function `initializer`. Default: 'normal'.
is the same as `x`. The values of str refer to the function `initializer`. Default: 'normal'.
bias_init (Union[Tensor, str, Initializer, numbers.Number]): The trainable bias_init parameter. The dtype is
same as input x. The values of str refer to the function `initializer`. Default: 'zeros'.
the same as `x`. The values of str refer to the function `initializer`. Default: 'zeros'.
has_bias (bool): Specifies whether the layer uses a bias vector. Default: True.
activation (str): activation function applied to the output of the fully connected layer, e.g. 'ReLU'.
Default: None.
Inputs:
- **input** (Tensor) - Tensor of shape :math:`(N, in\_channels)`.
- **x** (Tensor) - Tensor of shape :math:`(N, in\_channels)`.
Outputs:
Tensor of shape :math:`(N, out\_channels)`.
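
A minimal sketch of the renamed `x` for DenseThor (channel sizes are illustrative assumptions, not taken from this commit):

import numpy as np
from mindspore import Tensor, nn

net = nn.DenseThor(3, 4)  # in_channels=3, out_channels=4 (illustrative)
x = Tensor(np.random.randn(2, 3).astype(np.float32))
output = net(x)
print(output.shape)  # (2, 4)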
@ -289,7 +289,7 @@ class Conv2dThor(_ConvThor):
of kernel and it has shape :math:`(\text{ks_h}, \text{ks_w})`, where :math:`\text{ks_h}` and
:math:`\text{ks_w}` are the height and width of the convolution kernel. The full kernel has shape
:math:`(C_{out}, C_{in} // \text{group}, \text{ks_h}, \text{ks_w})`, where group is the group number
to split the input in the channel dimension.
to split the input `x` in the channel dimension.
If the 'pad_mode' is set to be "valid", the output height and width will be
:math:`\left \lfloor{1 + \frac{H_{in} + 2 \times \text{padding} - \text{ks_h} -
@ -311,7 +311,7 @@ class Conv2dThor(_ConvThor):
"same", "valid", "pad". Default: "same".
- same: Adopts the way of completion. The shape of the output will be the same as
the input. The total number of padding will be calculated in horizontal and vertical
the input `x`. The total number of padding will be calculated in horizontal and vertical
directions and evenly distributed to top and bottom, left and right if possible. Otherwise, the
last extra padding will be done from the bottom and the right side. If this mode is set, `padding`
must be 0.
@ -319,17 +319,17 @@ class Conv2dThor(_ConvThor):
- valid: Adopts the way of discarding. The possible largest height and width of output will be returned
without padding. Extra pixels will be discarded. If this mode is set, `padding` must be 0.
- pad: Implicit paddings on both sides of the input. The number of `padding` will be padded to the input
- pad: Implicit paddings on both sides of the input `x`. The number of `padding` will be padded to the input
Tensor borders. `padding` must be greater than or equal to 0.
padding (Union[int, tuple[int]]): Implicit paddings on both sides of the input. If `padding` is an integer,
padding (Union[int, tuple[int]]): Implicit paddings on both sides of the input `x`. If `padding` is an integer,
the paddings of top, bottom, left and right are the same, equal to padding. If `padding` is a tuple
with four integers, the paddings of top, bottom, left and right will be equal to padding[0],
padding[1], padding[2], and padding[3] accordingly. Default: 0.
dilation (Union[int, tuple[int]]): The data type is int or a tuple of 2 integers. Specifies the dilation rate
to use for dilated convolution. If set to be :math:`k > 1`, there will
be :math:`k - 1` pixels skipped for each sampling location. Its value must
be greater or equal to 1 and bounded by the height and width of the input.
be greater than or equal to 1 and bounded by the height and width of the input `x`.
Default: 1.
group (int): Splits filter into groups, `in_channels` and `out_channels` must be
divisible by the number of groups. If the group is equal to `in_channels` and `out_channels`,
@ -346,7 +346,7 @@ class Conv2dThor(_ConvThor):
Initializer for more details. Default: 'zeros'.
Inputs:
- **input** (Tensor) - Tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})`.
- **x** (Tensor) - Tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})`.
Outputs:
Tensor of shape :math:`(N, C_{out}, H_{out}, W_{out})`.
@ -531,7 +531,7 @@ class EmbeddingThor(Cell):
needed for THOR, the detail can be seen in paper: https://www.aaai.org/AAAI21Papers/AAAI-6611.ChenM.pdf
Note:
When 'use_one_hot' is set to True, the type of the input must be mindspore.int32.
When 'use_one_hot' is set to True, the type of the input `x` must be mindspore.int32.
Args:
vocab_size (int): The size of the dictionary of embeddings.
@ -539,23 +539,23 @@ class EmbeddingThor(Cell):
use_one_hot (bool): Specifies whether to apply one_hot encoding form. Default: False.
embedding_table (Union[Tensor, str, Initializer, numbers.Number]): Initializes the embedding_table.
Refer to class `initializer` for the values of string when a string is specified. Default: 'normal'.
dtype (:class:`mindspore.dtype`): Data type of input. Default: mindspore.float32.
dtype (:class:`mindspore.dtype`): Data type of input `x`. Default: mindspore.float32.
padding_idx (int, None): When the padding_idx encounters index, the output embedding vector of this index
will be initialized to zero. Default: None. The feature is inactivated.
Inputs:
- **input** (Tensor) - Tensor of input shape :math:`(\text{batch_size}, \text{input_length})`. The elements of
- **x** (Tensor) - Tensor of input shape :math:`(\text{batch_size}, \text{x_length})`. The elements of
the Tensor must be integers and not larger than vocab_size. Otherwise the corresponding embedding vector will
be zero.
Outputs:
Tensor of output shape :math:`(\text{batch_size}, \text{input_length}, \text{embedding_size})`.
Tensor of output shape :math:`(\text{batch_size}, \text{x_length}, \text{embedding_size})`.
Examples:
>>> net = nn.EmbeddingThor(20000, 768, True)
>>> input_data = Tensor(np.ones([8, 128]), mindspore.int32)
>>> x = Tensor(np.ones([8, 128]), mindspore.int32)
>>>
>>> # Maps the input word IDs to word embedding.
>>> output = net(input_data)
>>> output = net(x)
>>> output.shape
(8, 128, 768)
"""

View File

@ -61,7 +61,7 @@ class TimeDistributed(Cell):
The time distributed layer.
Time distributed is a wrapper which allows a layer to be applied to every temporal slice of an input.
And the input should be at least 3D.
And the input `x` should be at least 3D.
There are two cases in the implementation.
When reshape_with_axis is provided, the reshape method will be chosen, which is more efficient;
otherwise, the method of dividing the inputs along the time axis will be used, which is more general.
@ -73,7 +73,8 @@ class TimeDistributed(Cell):
reshape_with_axis (int): The axis which will be reshaped with time_axis. Default: None.
Inputs:
- **input** (Tensor) - Tensor of shape :math:`(N, T, *)`.
- **x** (Tensor) - Tensor of shape :math:`(N, T, *)`,
where :math:`*` means any number of additional dimensions.
Outputs:
Tensor of shape :math:`(N, T, *)`
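
A minimal sketch of the renamed `x` (wrapped layer and sizes illustrative): the inner cell is applied to every temporal slice along time_axis.

import numpy as np
import mindspore
from mindspore import Tensor, nn

x = Tensor(np.random.random([32, 10, 3]), mindspore.float32)
dense = nn.Dense(3, 6)
net = nn.TimeDistributed(dense, time_axis=1, reshape_with_axis=0)
output = net(x)
print(output.shape)  # (32, 10, 6)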