add raises description for BCELoss, ReLU, BatchNorm1d, etc. operators.
parent d02f6f4d10
commit 30f99f2722
@@ -1,4 +1,4 @@
-# Copyright 2020 Huawei Technologies Co., Ltd
+# Copyright 2020-2021 Huawei Technologies Co., Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.

@@ -63,11 +63,17 @@ class Softmax(Cell):
         axis (Union[int, tuple[int]]): The axis to apply Softmax operation, -1 means the last dimension. Default: -1.
 
     Inputs:
-        - **x** (Tensor) - The input of Softmax.
+        - **x** (Tensor) - The input of Softmax with data type of float16 or float32.
 
     Outputs:
         Tensor, which has the same type and shape as `x`, with values in the range [0, 1].
 
+    Raises:
+        TypeError: If `axis` is neither an int nor a tuple.
+        TypeError: If dtype of `x` is neither float16 nor float32.
+        ValueError: If `axis` is a tuple whose length is less than 1.
+        ValueError: If `axis` is a tuple whose elements are not all in range [-len(x), len(x)).
+
     Supported Platforms:
         ``Ascend`` ``GPU`` ``CPU``

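For reference (not part of the patch): a minimal usage sketch of `nn.Softmax`, consistent with the float16/float32 constraint documented above; shapes and values are illustrative and assume the stable MindSpore 1.x API.

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> softmax = nn.Softmax(axis=-1)
>>> x = Tensor(np.array([[1.0, 2.0, 3.0]], dtype=np.float32))
>>> output = softmax(x)
>>> print(output.shape)
(1, 3)
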
@@ -107,11 +113,16 @@ class LogSoftmax(Cell):
         axis (int): The axis to apply LogSoftmax operation, -1 means the last dimension. Default: -1.
 
     Inputs:
-        - **x** (Tensor) - The input of LogSoftmax.
+        - **x** (Tensor) - The input of LogSoftmax, with float16 or float32 data type.
 
     Outputs:
         Tensor, which has the same type and shape as `x`, with values in the range [-inf, 0).
 
+    Raises:
+        TypeError: If `axis` is not an int.
+        TypeError: If dtype of `x` is neither float16 nor float32.
+        ValueError: If `axis` is not in range [-len(x), len(x)).
+
     Supported Platforms:
         ``Ascend`` ``GPU``

@@ -153,11 +164,16 @@ class ELU(Cell):
         alpha (float): The coefficient of negative factor whose type is float. Default: 1.0.
 
     Inputs:
-        - **input_data** (Tensor) - The input of ELU.
+        - **input_data** (Tensor) - The input of ELU with data type of float16 or float32.
 
     Outputs:
         Tensor, with the same type and shape as the `input_data`.
 
+    Raises:
+        TypeError: If `alpha` is not a float.
+        TypeError: If dtype of `input_data` is neither float16 nor float32.
+        ValueError: If `alpha` is not equal to 1.0.
+
     Supported Platforms:
         ``Ascend`` ``GPU``

@@ -199,6 +215,9 @@ class ReLU(Cell):
     Outputs:
         Tensor, with the same type and shape as the `input_data`.
 
+    Raises:
+        TypeError: If dtype of `input_data` is not a number.
+
     Supported Platforms:
         ``Ascend`` ``GPU`` ``CPU``

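A quick sketch of the `nn.ReLU` behavior documented above (illustrative, not part of the patch; same imports as the Softmax sketch):

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> relu = nn.ReLU()
>>> x = Tensor(np.array([-1.0, 2.0, -3.0], dtype=np.float32))
>>> print(relu(x).shape)
(3,)
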
@@ -233,11 +252,14 @@ class ReLU6(Cell):
     The input is a Tensor of any valid shape.
 
     Inputs:
-        - **input_data** (Tensor) - The input of ReLU6.
+        - **input_data** (Tensor) - The input of ReLU6 with data type of float16 or float32.
 
     Outputs:
         Tensor, which has the same type as `input_data`.
 
+    Raises:
+        TypeError: If dtype of `input_data` is neither float16 nor float32.
+
     Supported Platforms:
         ``Ascend`` ``GPU`` ``CPU``

@@ -279,6 +301,9 @@ class LeakyReLU(Cell):
     Outputs:
         Tensor, has the same type and shape as the `input_x`.
 
+    Raises:
+        TypeError: If `alpha` is not a float or an int.
+
     Supported Platforms:
         ``Ascend`` ``GPU``

@@ -322,11 +347,14 @@ class Tanh(Cell):
     where :math:`x_i` is an element of the input Tensor.
 
     Inputs:
-        - **input_data** (Tensor) - The input of Tanh.
+        - **input_data** (Tensor) - The input of Tanh with data type of float16 or float32.
 
     Outputs:
         Tensor, with the same type and shape as the `input_data`.
 
+    Raises:
+        TypeError: If dtype of `input_data` is neither float16 nor float32.
+
     Supported Platforms:
         ``Ascend`` ``GPU`` ``CPU``

@@ -365,11 +393,14 @@ class GELU(Cell):
     Activation_function#/media/File:Activation_gelu.png>`_.
 
     Inputs:
-        - **input_data** (Tensor) - The input of GELU.
+        - **input_data** (Tensor) - The input of GELU with data type of float16 or float32.
 
     Outputs:
         Tensor, with the same type and shape as the `input_data`.
 
+    Raises:
+        TypeError: If dtype of `input_data` is neither float16 nor float32.
+
     Supported Platforms:
         ``Ascend`` ``GPU``

@@ -410,6 +441,9 @@ class FastGelu(Cell):
     Outputs:
         Tensor, with the same type and shape as the `input_data`.
 
+    Raises:
+        TypeError: If dtype of `input_data` is neither float16 nor float32.
+
     Supported Platforms:
         ``Ascend``

@@ -448,11 +482,14 @@ class Sigmoid(Cell):
     Sigmoid_function#/media/File:Logistic-curve.svg>`_.
 
     Inputs:
-        - **input_data** (Tensor) - The input of Tanh.
+        - **input_data** (Tensor) - The input of Sigmoid with data type of float16 or float32.
 
     Outputs:
         Tensor, with the same type and shape as the `input_data`.
 
+    Raises:
+        TypeError: If dtype of `input_data` is neither float16 nor float32.
+
     Supported Platforms:
         ``Ascend`` ``GPU`` ``CPU``

@@ -495,14 +532,21 @@ class PReLU(Cell):
 
     Args:
         channel (int): The dimension of input. Default: 1.
-        w (float): The initial value of w. Default: 0.25.
+        w (Union[float, list, Tensor]): The initial value of w. Default: 0.25.
 
     Inputs:
-        - **input_data** (Tensor) - The input of PReLU.
+        - **input_data** (Tensor) - The input of PReLU with data type of float16 or float32.
 
     Outputs:
         Tensor, with the same type and shape as the `input_data`.
 
+    Raises:
+        TypeError: If `channel` is not an int.
+        TypeError: If `w` is not one of float, list, Tensor.
+        TypeError: If dtype of `input_data` is neither float16 nor float32.
+        ValueError: If `channel` is less than 1.
+        ValueError: If length of shape of `input_data` is equal to 1.
+
     Supported Platforms:
         ``Ascend``

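A minimal sketch of the `channel`/`w` contract documented above: a 4-D input whose second axis matches `channel` (illustrative; the rank-1 ValueError above rules out 1-D inputs):

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> prelu = nn.PReLU(channel=3, w=0.25)
>>> x = Tensor(np.random.rand(2, 3, 4, 4).astype(np.float32))
>>> print(prelu(x).shape)
(2, 3, 4, 4)
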
@@ -518,6 +562,7 @@ class PReLU(Cell):
     @cell_attr_register(attrs="")
     def __init__(self, channel=1, w=0.25):
         super(PReLU, self).__init__()
+        validator.check_positive_int(channel, 'channel', self.cls_name)
         if isinstance(w, (np.float32, float)):
            tmp = np.empty((channel,), dtype=np.float32)
            tmp.fill(w)

@@ -526,7 +571,7 @@ class PReLU(Cell):
             w = Tensor(w)
 
         if not isinstance(w, Tensor):
-            raise TypeError("w only support np.float32, float or Tensor type.")
+            raise TypeError("w only support np.float32, float, list or Tensor type.")
 
         self.w = Parameter(initializer(w, [channel]), name='a')
         self.prelu = P.PReLU()

@@ -555,11 +600,14 @@ class HSwish(Cell):
     where :math:`x_{i}` is the :math:`i`-th slice in the given dimension of the input Tensor.
 
     Inputs:
-        - **input_data** (Tensor) - The input of HSwish.
+        - **input_data** (Tensor) - The input of HSwish, data type must be float16 or float32.
 
     Outputs:
         Tensor, with the same type and shape as the `input_data`.
 
+    Raises:
+        TypeError: If dtype of `input_data` is neither float16 nor float32.
+
     Supported Platforms:
         ``GPU``

@@ -593,11 +641,14 @@ class HSigmoid(Cell):
     where :math:`x_{i}` is the :math:`i`-th slice in the given dimension of the input Tensor.
 
     Inputs:
-        - **input_data** (Tensor) - The input of HSigmoid.
+        - **input_data** (Tensor) - The input of HSigmoid, data type must be float16 or float32.
 
     Outputs:
         Tensor, with the same type and shape as the `input_data`.
 
+    Raises:
+        TypeError: If dtype of `input_data` is neither float16 nor float32.
+
     Supported Platforms:
         ``GPU``

@@ -631,11 +682,14 @@ class LogSigmoid(Cell):
     where :math:`x_{i}` is the element of the input.
 
     Inputs:
-        - **input_data** (Tensor) - The input of LogSigmoid.
+        - **input_data** (Tensor) - The input of LogSigmoid with data type of float16 or float32.
 
     Outputs:
         Tensor, with the same type and shape as the `input_data`.
 
+    Raises:
+        TypeError: If dtype of `input_data` is neither float16 nor float32.
+
     Supported Platforms:
         ``Ascend`` ``GPU``

@@ -1,4 +1,4 @@
-# Copyright 2020 Huawei Technologies Co., Ltd
+# Copyright 2020-2021 Huawei Technologies Co., Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.

@@ -51,10 +51,6 @@ class L1Regularizer(Cell):
     Args:
         scale (int, float): l1 regularization factor, which is greater than 0.
 
-    Raises:
-        ValueError: If `scale(regularization factor)` is not greater than 0.
-        If `scale(regularization factor)` is math.inf or math.nan.
-
     Inputs:
         - **weights** (Tensor) - The input tensor
 
@@ -62,6 +58,11 @@ class L1Regularizer(Cell):
         Tensor, whose dtype is the higher precision data type between mindspore.float32 and the weights dtype,
         and whose shape is ()
 
+    Raises:
+        TypeError: If `scale` is neither an int nor a float.
+        ValueError: If `scale` is not greater than 0.
+        ValueError: If `scale` is math.inf or math.nan.
+
     Supported Platforms:
         ``Ascend`` ``GPU`` ``CPU``

@@ -116,15 +117,18 @@ class Dropout(Cell):
             dropping out 10% of input units. Default: 0.5.
         dtype (:class:`mindspore.dtype`): Data type of input. Default: mindspore.float32.
 
-    Raises:
-        ValueError: If `keep_prob` is not in range (0, 1].
-
     Inputs:
-        - **input** (Tensor) - The input tensor.
+        - **input** (Tensor) - The input of Dropout with data type of float16 or float32.
 
     Outputs:
         Tensor, output tensor with the same shape as the input.
 
+    Raises:
+        TypeError: If `keep_prob` is not a float.
+        TypeError: If dtype of `input` is neither float16 nor float32.
+        ValueError: If `keep_prob` is not in range (0, 1].
+        ValueError: If length of shape of `input` is less than 1.
+
     Supported Platforms:
         ``Ascend`` ``GPU`` ``CPU``

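An illustrative `nn.Dropout` sketch (not part of the patch; dropout is a no-op outside training mode, hence the `set_train()` call, which returns the cell itself):

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> net = nn.Dropout(keep_prob=0.8)
>>> _ = net.set_train()
>>> x = Tensor(np.ones([2, 4], dtype=np.float32))
>>> print(net(x).shape)
(2, 4)
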
@@ -177,6 +181,9 @@ class Flatten(Cell):
         Tensor, the shape of the output tensor is :math:`(N, X)`, where :math:`X` is
         the product of the remaining dimensions.
 
+    Raises:
+        TypeError: If `input` is not a subclass of Tensor.
+
     Supported Platforms:
         ``Ascend`` ``GPU`` ``CPU``

@@ -220,15 +227,21 @@ class Dense(Cell):
         activation (Union[str, Cell, Primitive]): activate function applied to the output of the fully connected layer,
             e.g. 'ReLU'. Default: None.
 
-    Raises:
-        ValueError: If weight_init or bias_init shape is incorrect.
-
     Inputs:
         - **input** (Tensor) - Tensor of shape :math:`(*, in\_channels)`.
 
     Outputs:
         Tensor of shape :math:`(*, out\_channels)`.
 
+    Raises:
+        TypeError: If `in_channels` or `out_channels` is not an int.
+        TypeError: If `has_bias` is not a bool.
+        TypeError: If `activation` is not one of str, Cell, Primitive, None.
+        ValueError: If length of shape of `weight_init` is not equal to 2 or shape[0] of `weight_init`
+            is not equal to `out_channels` or shape[1] of `weight_init` is not equal to `in_channels`.
+        ValueError: If length of shape of `bias_init` is not equal to 1
+            or shape[0] of `bias_init` is not equal to `out_channels`.
+
     Supported Platforms:
         ``Ascend`` ``GPU`` ``CPU``

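A minimal `nn.Dense` sketch for the shape contract above (illustrative; last input axis must equal `in_channels`):

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> net = nn.Dense(3, 4)
>>> x = Tensor(np.ones([2, 3], dtype=np.float32))
>>> print(net(x).shape)
(2, 4)
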
@@ -350,6 +363,10 @@ class ClipByNorm(Cell):
     Outputs:
         Tensor, clipped tensor with the same shape as the input, whose type is float32.
 
+    Raises:
+        TypeError: If `axis` is not one of None, int, tuple.
+        TypeError: If dtype of `input` is neither float32 nor float16.
+
     Supported Platforms:
         ``Ascend`` ``GPU``

|
@ -426,6 +443,10 @@ class Norm(Cell):
|
|||
Tensor, output tensor with dimensions in 'axis' reduced to 1 will be returned if 'keep_dims' is True;
|
||||
otherwise a Tensor with dimensions in 'axis' removed is returned.
|
||||
|
||||
Raises:
|
||||
TypeError: If `axis` is neither an int nor tuple.
|
||||
TypeError: If `keep_dims` is not a bool.
|
||||
|
||||
Supported Platforms:
|
||||
``Ascend`` ``GPU``
|
||||
|
||||
|
@ -500,12 +521,18 @@ class OneHot(Cell):
|
|||
data type of indices. Default: mindspore.float32.
|
||||
|
||||
Inputs:
|
||||
- **indices** (Tensor) - A tensor of indices of data type mindspore.int32 and arbitrary shape.
|
||||
- **indices** (Tensor) - A tensor of indices with data type of int32 or int64 and arbitrary shape.
|
||||
|
||||
Outputs:
|
||||
Tensor, the one-hot tensor of data type `dtype` with dimension at `axis` expanded to `depth` and filled with
|
||||
on_value and off_value.
|
||||
|
||||
Raises:
|
||||
TypeError: If `axis` or `depth` is not an int.
|
||||
TypeError: If dtype of `indices` is neither int32 nor int64.
|
||||
ValueError: If `axis` is not in range [-1, len(indices_shape)].
|
||||
ValueError: If `depth` is less than 0.
|
||||
|
||||
Supported Platforms:
|
||||
``Ascend`` ``GPU`` ``CPU``
|
||||
|
||||
|
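An `nn.OneHot` sketch matching the int32/int64 indices constraint above (illustrative):

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> onehot = nn.OneHot(axis=-1, depth=3)
>>> indices = Tensor(np.array([0, 1, 2], dtype=np.int32))
>>> print(onehot(indices).shape)
(3, 3)
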
@@ -571,6 +598,11 @@ class Pad(Cell):
         is [[1,2,3], [4,5,6], [7,8,9]] and `paddings` is [[1,1], [2,2]], then the Outputs is
         [[2,1,1,2,3,3,2], [2,1,1,2,3,3,2], [5,4,4,5,6,6,5], [8,7,7,8,9,9,8], [8,7,7,8,9,9,8]].
 
+    Raises:
+        TypeError: If `paddings` is not a tuple.
+        ValueError: If length of `paddings` is more than 4 or its shape is not (n, 2).
+        ValueError: If `mode` is not one of 'CONSTANT', 'REFLECT', 'SYMMETRIC'.
+
     Supported Platforms:
         ``Ascend`` ``GPU``

|
@ -663,6 +695,16 @@ class ResizeBilinear(Cell):
|
|||
If scale is set, the result is 4-D tensor with shape:math:`(batch, channels, scale_factor * height,
|
||||
scale_factor * width)` in float32
|
||||
|
||||
Raises:
|
||||
TypeError: If `size` is not one of tuple, list, None.
|
||||
TypeError: If `scale_factor` is neither int nor None.
|
||||
TypeError: If `align_corners` is not a bool.
|
||||
TypeError: If dtype of `x` is neither float16 nor float32.
|
||||
ValueError: If `size` and `scale_factor` are both None or not None.
|
||||
ValueError: If length of shape of `x` is not equal to 4.
|
||||
ValueError: If `scale_factor` is an int which is less than 1.
|
||||
ValueError: If `size` is a list or tuple whose length is not equal to 2.
|
||||
|
||||
Supported Platforms:
|
||||
``Ascend``
|
||||
|
||||
|
@@ -718,6 +760,11 @@ class Unfold(Cell):
 
         out_col = (in_col - (ksize_col + (ksize_col - 1) * (rate_col - 1))) // stride_col + 1
 
+    Raises:
+        TypeError: If `ksizes`, `strides` or `rates` is neither a tuple nor a list.
+        ValueError: If shape of `ksizes`, `strides` or `rates` is not (1, x_row, x_col, 1).
+        ValueError: If the second and third elements of `ksizes`, `strides` or `rates` are less than 1.
+
     Supported Platforms:
         ``Ascend``

@@ -774,6 +821,10 @@ class Tril(Cell):
     Outputs:
         Tensor, has the same type as input `x`.
 
+    Raises:
+        TypeError: If `k` is not an int.
+        ValueError: If length of shape of `x` is less than 1.
+
     Supported Platforms:
         ``Ascend`` ``GPU`` ``CPU``

@@ -817,6 +868,10 @@ class Triu(Cell):
     Outputs:
         Tensor, has the same type as input `x`.
 
+    Raises:
+        TypeError: If `k` is not an int.
+        ValueError: If length of shape of `x` is less than 1.
+
     Supported Platforms:
         ``Ascend`` ``GPU`` ``CPU``

@@ -875,6 +930,9 @@ class MatrixDiag(Cell):
     Outputs:
         Tensor, has the same type as input `x`. The shape must be x.shape + (x.shape[-1], ).
 
+    Raises:
+        TypeError: If dtype of `x` is not one of float32, float16, int32, int8 or uint8.
+
     Supported Platforms:
         ``Ascend``

@@ -918,6 +976,9 @@ class MatrixDiagPart(Cell):
     Outputs:
         Tensor, has the same type as input `x`. The shape must be x.shape[:-2] + [min(x.shape[-2:])].
 
+    Raises:
+        TypeError: If dtype of `x` is not one of float32, float16, int32, int8 or uint8.
+
     Supported Platforms:
         ``Ascend``

@@ -966,6 +1027,12 @@ class MatrixSetDiag(Cell):
     Outputs:
         Tensor, has the same type and shape as input `x`.
 
+    Raises:
+        TypeError: If dtype of `x` or `diagonal` is not one of float32, float16, int32, int8 or uint8.
+        ValueError: If length of shape of `x` is less than 2.
+        ValueError: If x_shape[-2] < x_shape[-1] and x_shape[:-1] != diagonal_shape.
+        ValueError: If x_shape[-2] >= x_shape[-1] and x_shape[:-2] + x_shape[-1:] != diagonal_shape.
+
     Supported Platforms:
         ``Ascend``

@@ -1,4 +1,4 @@
-# Copyright 2020 Huawei Technologies Co., Ltd
+# Copyright 2020-2021 Huawei Technologies Co., Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.

@@ -71,15 +71,15 @@ class SequentialCell(Cell):
     Args:
         args (list, OrderedDict): List of subclass of Cell.
 
-    Raises:
-        TypeError: If the type of the argument is not list or OrderedDict.
-
     Inputs:
         - **input** (Tensor) - Tensor with shape according to the first Cell in the sequence.
 
     Outputs:
         Tensor, the output Tensor with shape depending on the input and defined sequence of Cells.
 
+    Raises:
+        TypeError: If the type of the `args` is not list or OrderedDict.
+
     Supported Platforms:
         ``Ascend`` ``GPU``

@@ -1,4 +1,4 @@
-# Copyright 2020 Huawei Technologies Co., Ltd
+# Copyright 2020-2021 Huawei Technologies Co., Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.

@@ -196,6 +196,16 @@ class Conv2d(_Conv):
     Outputs:
         Tensor of shape :math:`(N, C_{out}, H_{out}, W_{out})` or :math:`(N, H_{out}, W_{out}, C_{out})`.
 
+    Raises:
+        TypeError: If `in_channels`, `out_channels` or `group` is not an int.
+        TypeError: If `kernel_size`, `stride`, `padding` or `dilation` is neither an int nor a tuple.
+        ValueError: If `in_channels`, `out_channels`, `kernel_size`, `stride` or `dilation` is less than 1.
+        ValueError: If `padding` is less than 0.
+        ValueError: If `pad_mode` is not one of 'same', 'valid', 'pad'.
+        ValueError: If `padding` is a tuple whose length is not equal to 4.
+        ValueError: If `pad_mode` is not equal to 'pad' and `padding` is not equal to (0, 0, 0, 0).
+        ValueError: If `data_format` is neither 'NCHW' nor 'NHWC'.
+
     Supported Platforms:
         ``Ascend`` ``GPU`` ``CPU``

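A minimal `nn.Conv2d` sketch for the shape rules above (illustrative; `pad_mode` defaults to 'same', so height and width are preserved at stride 1):

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> net = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3)
>>> x = Tensor(np.ones([1, 3, 16, 16], dtype=np.float32))
>>> print(net(x).shape)
(1, 8, 16, 16)
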
@@ -355,6 +365,12 @@ class Conv1d(_Conv):
     Outputs:
         Tensor of shape :math:`(N, C_{out}, W_{out})`.
 
+    Raises:
+        TypeError: If `in_channels`, `out_channels`, `kernel_size`, `stride`, `padding` or `dilation` is not an int.
+        ValueError: If `in_channels`, `out_channels`, `kernel_size`, `stride` or `dilation` is less than 1.
+        ValueError: If `padding` is less than 0.
+        ValueError: If `pad_mode` is not one of 'same', 'valid', 'pad'.
+
     Supported Platforms:
         ``Ascend`` ``GPU``

@@ -529,6 +545,15 @@ class Conv2dTranspose(_Conv):
     Outputs:
         Tensor of shape :math:`(N, C_{out}, H_{out}, W_{out})`.
 
+    Raises:
+        TypeError: If `in_channels`, `out_channels` or `group` is not an int.
+        TypeError: If `kernel_size`, `stride`, `padding` or `dilation` is neither an int nor a tuple.
+        ValueError: If `in_channels`, `out_channels`, `kernel_size`, `stride` or `dilation` is less than 1.
+        ValueError: If `padding` is less than 0.
+        ValueError: If `pad_mode` is not one of 'same', 'valid', 'pad'.
+        ValueError: If `padding` is a tuple whose length is not equal to 4.
+        ValueError: If `pad_mode` is not equal to 'pad' and `padding` is not equal to (0, 0, 0, 0).
+
     Supported Platforms:
         ``Ascend`` ``GPU``

@@ -708,6 +733,12 @@ class Conv1dTranspose(_Conv):
     Outputs:
         Tensor of shape :math:`(N, C_{out}, W_{out})`.
 
+    Raises:
+        TypeError: If `in_channels`, `out_channels`, `kernel_size`, `stride`, `padding` or `dilation` is not an int.
+        ValueError: If `in_channels`, `out_channels`, `kernel_size`, `stride` or `dilation` is less than 1.
+        ValueError: If `padding` is less than 0.
+        ValueError: If `pad_mode` is not one of 'same', 'valid', 'pad'.
+
     Supported Platforms:
         ``Ascend`` ``GPU``

@@ -1,4 +1,4 @@
-# Copyright 2020 Huawei Technologies Co., Ltd
+# Copyright 2020-2021 Huawei Technologies Co., Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.

@@ -75,6 +75,11 @@ class Embedding(Cell):
     Outputs:
         Tensor of shape :math:`(\text{batch_size}, \text{input_length}, \text{embedding_size})`.
 
+    Raises:
+        TypeError: If `vocab_size` or `embedding_size` is not an int.
+        TypeError: If `use_one_hot` is not a bool.
+        ValueError: If `padding_idx` is an int which is not in range [0, `vocab_size`].
+
     Supported Platforms:
         ``Ascend`` ``GPU``

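An `nn.Embedding` sketch matching the output shape documented above (illustrative):

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> net = nn.Embedding(vocab_size=100, embedding_size=8)
>>> ids = Tensor(np.array([[1, 2, 3]], dtype=np.int32))
>>> print(net(ids).shape)
(1, 3, 8)
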
@@ -188,6 +193,17 @@ class EmbeddingLookup(Cell):
     Outputs:
         Tensor, the shape of tensor is :math:`(z_1, z_2, ..., z_N)`.
 
+    Raises:
+        TypeError: If `vocab_size` or `embedding_size` or `vocab_cache_size` is not an int.
+        TypeError: If `sparse` is not a bool or `manual_shapes` is not a tuple.
+        ValueError: If `vocab_size` or `embedding_size` is less than 1.
+        ValueError: If `vocab_cache_size` is less than 0.
+        ValueError: If `target` is neither 'CPU' nor 'DEVICE'.
+        ValueError: If `slice_mode` is not one of 'batch_slice', 'field_slice',
+            'table_row_slice' or 'table_column_slice'.
+        ValueError: If `sparse` is False and `target` is 'CPU'.
+        ValueError: If `slice_mode` is 'field_slice' and `manual_shapes` is None.
+
     Supported Platforms:
         ``Ascend`` ``CPU``

@@ -402,7 +418,7 @@ class MultiFieldEmbeddingLookup(EmbeddingLookup):
         max_norm (Union[float, None]): A maximum clipping value. The data type must be float16, float32
             or None. Default: None
         sparse (bool): Using sparse mode. When 'target' is set to 'CPU', 'sparse' has to be true. Default: True.
-        operator (string): The pooling method for the features in one field. Support 'SUM, 'MEAN' and 'MAX'
+        operator (str): The pooling method for the features in one field. Support 'SUM', 'MEAN' and 'MAX'
 
     Inputs:
         - **input_indices** (Tensor) - The shape of tensor is :math:`(batch\_size, seq\_length)`.
 
@@ -417,6 +433,16 @@ class MultiFieldEmbeddingLookup(EmbeddingLookup):
     Outputs:
         Tensor, the shape of tensor is :math:`(batch\_size, field\_size, embedding\_size)`. Type is Float32.
 
+    Raises:
+        TypeError: If `vocab_size` or `embedding_size` or `field_size` is not an int.
+        TypeError: If `sparse` is not a bool or `feature_num_list` is not a tuple.
+        ValueError: If `vocab_size` or `embedding_size` or `field_size` is less than 1.
+        ValueError: If `target` is neither 'CPU' nor 'DEVICE'.
+        ValueError: If `slice_mode` is not one of 'batch_slice', 'field_slice', 'table_row_slice', 'table_column_slice'.
+        ValueError: If `sparse` is False and `target` is 'CPU'.
+        ValueError: If `slice_mode` is 'field_slice' and `feature_num_list` is None.
+        ValueError: If `operator` is not one of 'SUM', 'MAX', 'MEAN'.
+
     Supported Platforms:
         ``Ascend`` ``GPU``

@@ -1,4 +1,4 @@
-# Copyright 2020 Huawei Technologies Co., Ltd
+# Copyright 2020-2021 Huawei Technologies Co., Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.

@@ -106,6 +106,12 @@ class LSTM(Cell):
         - **hx_n** (tuple) - A tuple of two Tensors (h_n, c_n) both of shape
           (num_directions * `num_layers`, batch_size, `hidden_size`).
 
+    Raises:
+        TypeError: If `input_size`, `hidden_size` or `num_layers` is not an int.
+        TypeError: If `has_bias`, `batch_first` or `bidirectional` is not a bool.
+        TypeError: If `dropout` is neither a float nor an int.
+        ValueError: If `dropout` is not in range [0.0, 1.0].
+
     Supported Platforms:
         ``Ascend`` ``GPU``

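A minimal `nn.LSTM` sketch for the (input, (h, c)) interface documented above (illustrative; batch size 3, sequence length 5):

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> net = nn.LSTM(input_size=10, hidden_size=16, num_layers=1, batch_first=True)
>>> x = Tensor(np.ones([3, 5, 10], dtype=np.float32))
>>> h0 = Tensor(np.zeros([1, 3, 16], dtype=np.float32))
>>> c0 = Tensor(np.zeros([1, 3, 16], dtype=np.float32))
>>> output, (hn, cn) = net(x, (h0, c0))
>>> print(output.shape)
(3, 5, 16)
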
@@ -334,6 +340,12 @@ class LSTMCell(Cell):
         - **reserve** - reserved
         - **state** - reserved
 
+    Raises:
+        TypeError: If `input_size` or `hidden_size` or `num_layers` is not an int.
+        TypeError: If `has_bias` or `batch_first` or `bidirectional` is not a bool.
+        TypeError: If `dropout` is neither a float nor an int.
+        ValueError: If `dropout` is not in range [0.0, 1.0].
+
     Supported Platforms:
         ``GPU`` ``CPU``

@@ -47,6 +47,7 @@ class _BatchNorm(Cell):
                  input_dims='2d',
                  data_format='NCHW'):
         super(_BatchNorm, self).__init__()
+        validator.check_value_type('num_features', num_features, [int], self.cls_name)
         if num_features < 1:
             raise ValueError("num_features must be at least 1")

@@ -270,6 +271,12 @@ class BatchNorm1d(_BatchNorm):
     Supported Platforms:
         ``Ascend`` ``GPU``
 
+    Raises:
+        TypeError: If `num_features` is not an int.
+        TypeError: If `eps` is not a float.
+        ValueError: If `num_features` is less than 1.
+        ValueError: If `momentum` is not in range [0, 1].
+
     Examples:
         >>> net = nn.BatchNorm1d(num_features=4)
         >>> np.random.seed(0)

@@ -359,6 +366,13 @@ class BatchNorm2d(_BatchNorm):
     Outputs:
         Tensor, the normalized, scaled, offset tensor, of shape :math:`(N, C_{out}, H_{out}, W_{out})`.
 
+    Raises:
+        TypeError: If `num_features` is not an int.
+        TypeError: If `eps` is not a float.
+        ValueError: If `num_features` is less than 1.
+        ValueError: If `momentum` is not in range [0, 1].
+        ValueError: If `data_format` is neither 'NHWC' nor 'NCHW'.
+
     Supported Platforms:
         ``Ascend`` ``GPU`` ``CPU``

@@ -544,6 +558,13 @@ class GlobalBatchNorm(_BatchNorm):
     Outputs:
         Tensor, the normalized, scaled, offset tensor, of shape :math:`(N, C_{out}, H_{out}, W_{out})`.
 
+    Raises:
+        TypeError: If `num_features` or `device_num_each_group` is not an int.
+        TypeError: If `eps` is not a float.
+        ValueError: If `num_features` is less than 1.
+        ValueError: If `momentum` is not in range [0, 1].
+        ValueError: If `device_num_each_group` is less than 2.
+
     Supported Platforms:
         ``Ascend``

@@ -642,6 +663,11 @@ class LayerNorm(Cell):
     Outputs:
         Tensor, the normalized and scaled offset tensor, has the same shape and data type as the `input_x`.
 
+    Raises:
+        TypeError: If `normalized_shape` is neither a list nor a tuple.
+        TypeError: If `begin_norm_axis` or `begin_params_axis` is not an int.
+        TypeError: If `epsilon` is not a float.
+
     Supported Platforms:
         ``Ascend`` ``GPU``

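An `nn.LayerNorm` sketch for the `normalized_shape`/axis parameters documented above (illustrative; normalizes over the last two axes):

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> x = Tensor(np.ones([2, 3, 4], dtype=np.float32))
>>> net = nn.LayerNorm(normalized_shape=(3, 4), begin_norm_axis=1, begin_params_axis=1)
>>> print(net(x).shape)
(2, 3, 4)
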
@@ -675,7 +701,8 @@ class LayerNorm(Cell):
         self.beta = Parameter(initializer(
             beta_init, normalized_shape), name="beta")
         self.layer_norm = _selected_ops.LayerNorm(begin_norm_axis=self.begin_norm_axis,
-                                                  begin_params_axis=self.begin_params_axis)
+                                                  begin_params_axis=self.begin_params_axis,
+                                                  epsilon=self.epsilon)
 
     def construct(self, input_x):
         y, _, _ = self.layer_norm(input_x, self.gamma, self.beta)

@@ -831,6 +858,13 @@ class GroupNorm(Cell):
     Outputs:
         Tensor, the normalized and scaled offset tensor, has the same shape and data type as the `input_x`.
 
+    Raises:
+        TypeError: If `num_groups` or `num_channels` is not an int.
+        TypeError: If `eps` is not a float.
+        TypeError: If `affine` is not a bool.
+        ValueError: If `num_groups` or `num_channels` is less than 1.
+        ValueError: If `num_channels` is not divisible by `num_groups`.
+
     Supported Platforms:
         ``Ascend`` ``GPU``

@@ -1,4 +1,4 @@
-# Copyright 2021 Huawei Technologies Co., Ltd
+# Copyright 2020-2021 Huawei Technologies Co., Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.

@@ -96,14 +96,15 @@ class L1Loss(_Loss):
         Default: "mean".
 
     Inputs:
-        - **input_data** (Tensor) - Tensor of shape :math:`(x_1, x_2, ..., x_R)`. The data type must be float16 or
-          float32.
-        - **target_data** (Tensor) - Tensor of shape :math:`(y_1, y_2, ..., y_S)`. The data type must be float16 or
-          float32.
+        - **input_data** (Tensor) - Tensor of shape :math:`(x_1, x_2, ..., x_R)`.
+        - **target_data** (Tensor) - Tensor of shape :math:`(y_1, y_2, ..., y_S)`.
 
     Outputs:
         Tensor, loss float tensor.
 
+    Raises:
+        ValueError: If `reduction` is not one of 'none', 'mean', 'sum'.
+
     Supported Platforms:
         ``Ascend`` ``GPU``

@@ -149,6 +150,9 @@ class MSELoss(_Loss):
     Outputs:
         Tensor, weighted loss float tensor.
 
+    Raises:
+        ValueError: If `reduction` is not one of 'none', 'mean', 'sum'.
+
     Supported Platforms:
         ``Ascend`` ``GPU``

@@ -193,12 +197,18 @@ class SmoothL1Loss(_Loss):
         quadratic to linear. Default: 1.0.
 
     Inputs:
-        - **input_data** (Tensor) - Tensor of shape :math:`(x_1, x_2, ..., x_R)`.
-        - **target_data** (Tensor) - Tensor of shape :math:`(y_1, y_2, ..., y_S)`.
+        - **input_data** (Tensor) - Tensor of shape :math:`(x_1, x_2, ..., x_R)`. Data type must be float16 or float32.
+        - **target_data** (Tensor) - Ground truth data, with the same type and shape as `input_data`.
 
     Outputs:
         Tensor, loss float tensor.
 
+    Raises:
+        TypeError: If `beta` is not a float.
+        TypeError: If dtype of `input_data` or `target_data` is neither float16 nor float32.
+        ValueError: If `beta` is less than or equal to 0.
+        ValueError: If shape of `input_data` is not the same as `target_data`.
+
     Supported Platforms:
         ``Ascend`` ``GPU`` ``CPU``

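An `nn.SmoothL1Loss` sketch (illustrative; the loss is element-wise, with no reduction applied by default):

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> loss = nn.SmoothL1Loss()
>>> logits = Tensor(np.array([1.0, 2.0, 3.0], dtype=np.float32))
>>> labels = Tensor(np.array([1.0, 2.0, 2.0], dtype=np.float32))
>>> print(loss(logits, labels).shape)
(3,)
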
@@ -249,14 +259,20 @@ class SoftmaxCrossEntropyWithLogits(_Loss):
         If "none", do not perform reduction. Default: "none".
 
     Inputs:
-        - **logits** (Tensor) - Tensor of shape (N, C).
+        - **logits** (Tensor) - Tensor of shape (N, C). Data type must be float16 or float32.
         - **labels** (Tensor) - Tensor of shape (N, ). If `sparse` is True, the type of
-          `labels` is mindspore.int32. If `sparse` is False, the type of `labels` is the same as the type of `logits`.
+          `labels` is int32 or int64. If `sparse` is False, the type of `labels` is the same as the type of `logits`.
 
     Outputs:
         Tensor, a tensor of the same shape as logits with the component-wise
         logistic losses.
 
+    Raises:
+        TypeError: If `sparse` is not a bool.
+        TypeError: If `sparse` is True and dtype of `labels` is neither int32 nor int64.
+        TypeError: If `sparse` is False and dtype of `labels` is neither float16 nor float32.
+        ValueError: If `reduction` is not one of 'none', 'mean', 'sum'.
+
     Supported Platforms:
         ``Ascend`` ``GPU`` ``CPU``

@@ -274,7 +290,7 @@ class SoftmaxCrossEntropyWithLogits(_Loss):
                  sparse=False,
                  reduction='none'):
         super(SoftmaxCrossEntropyWithLogits, self).__init__(reduction)
-        self.sparse = sparse
+        self.sparse = validator.check_bool(sparse, "sparse")
         self.reduction = reduction
         self.softmax_cross_entropy = _selected_ops.SoftmaxCrossEntropyWithLogits()
         self.one_hot = P.OneHot()

@@ -363,7 +379,7 @@ class SampledSoftmaxLoss(_Loss):
         num_sampled (int): The number of classes to randomly sample per batch.
         num_classes (int): The number of possible classes.
         num_true (int): The number of target classes per training example.
-        sampled_values (Tuple): Tuple of (`sampled_candidates`, `true_expected_count`,
+        sampled_values (Union[list, tuple]): List or tuple of (`sampled_candidates`, `true_expected_count`,
             `sampled_expected_count`) returned by a `*CandidateSampler` function.
             Default to None, `UniformCandidateSampler` is applied.
         remove_accidental_hits (bool): Whether to remove "accidental hits"

@@ -383,6 +399,13 @@ class SampledSoftmaxLoss(_Loss):
     Outputs:
         Tensor, a tensor of shape (N) with the per-example sampled softmax losses.
 
+    Raises:
+        TypeError: If `sampled_values` is not a list or tuple.
+        TypeError: If dtype of `labels` is neither int32 nor int64.
+        ValueError: If `reduction` is not one of 'none', 'mean', 'sum'.
+        ValueError: If `num_sampled` or `num_true` is greater than `num_classes`.
+        ValueError: If length of `sampled_values` is not equal to 3.
+
     Supported Platforms:
         ``GPU``

@@ -413,7 +436,7 @@ class SampledSoftmaxLoss(_Loss):
             raise ValueError(f"num_true {num_true} is greater than num_classes {num_classes}.")
         if sampled_values is not None:
             if not isinstance(sampled_values, (list, tuple)):
-                raise TypeError(f"sampled_values {sampled_values} is not a list.")
+                raise TypeError(f"sampled_values {sampled_values} is not a list or tuple.")
             if len(sampled_values) != 3:
                 raise ValueError(f"sampled_values size {len(sampled_values)} is not 3.")

@@ -605,6 +628,11 @@ class BCELoss(_Loss):
         Tensor or Scalar, if `reduction` is 'none', then output is a tensor and has the same shape as `inputs`.
         Otherwise, the output is a scalar.
 
+    Raises:
+        TypeError: If dtype of `inputs`, `labels` or `weight` (if given) is neither float16 nor float32.
+        ValueError: If shape of `inputs` is not the same as `labels` or `weight` (if given).
+        ValueError: If `reduction` is not one of 'none', 'mean', 'sum'.
+
     Supported Platforms:
         ``Ascend`` ``GPU``

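An `nn.BCELoss` sketch matching the constraints above (illustrative; inputs must be probabilities in [0, 1], and 'mean' reduces to a scalar):

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> loss = nn.BCELoss(reduction='mean')
>>> inputs = Tensor(np.array([[0.1, 0.9], [0.5, 0.5]], dtype=np.float32))
>>> labels = Tensor(np.array([[0.0, 1.0], [0.0, 1.0]], dtype=np.float32))
>>> print(loss(inputs, labels).shape)
()
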
@@ -640,6 +668,7 @@ class BCELoss(_Loss):
 def _check_reduced_shape_valid(ori_shape, reduced_shape, axis, cls_name):
     validator.check_reduce_shape(ori_shape, reduced_shape, axis, cls_name)
 
+
 class CosineEmbeddingLoss(_Loss):
     r"""
     Computes the similarity between two tensors using cosine distance.

@@ -667,13 +696,18 @@ class CosineEmbeddingLoss(_Loss):
         - **loss** (Tensor) - If `reduction` is "none", its shape is the same as `y`'s shape, otherwise a scalar value
           will be returned.
 
+    Raises:
+        TypeError: If `margin` is not a float.
+        ValueError: If `reduction` is not one of 'none', 'mean', 'sum'.
+        ValueError: If `margin` is not in range [-1, 1].
+
     Supported Platforms:
         ``Ascend`` ``GPU``
 
     Examples:
         >>> x1 = Tensor(np.array([[0.3, 0.8], [0.4, 0.3]]), mindspore.float32)
         >>> x2 = Tensor(np.array([[0.4, 1.2], [-0.4, -0.9]]), mindspore.float32)
-        >>> y = Tensor(np.array([1,-1]), mindspore.int32)
+        >>> y = Tensor(np.array([1, -1]), mindspore.int32)
         >>> cosine_embedding_loss = nn.CosineEmbeddingLoss()
         >>> output = cosine_embedding_loss(x1, x2, y)
         >>> print(output)

@@ -1,6 +1,6 @@
 # coding: utf-8
 
-# Copyright 2021 Huawei Technologies Co., Ltd
+# Copyright 2020-2021 Huawei Technologies Co., Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.

@@ -614,6 +614,10 @@ class Squeeze(PrimitiveWithInfer):
     Outputs:
         Tensor, the shape of tensor is :math:`(x_1, x_2, ..., x_S)`.
 
+    Raises:
+        TypeError: If `axis` is neither an int nor a tuple.
+        TypeError: If `axis` is a tuple whose elements are not all int.
+
     Supported Platforms:
         ``Ascend`` ``GPU``

@@ -88,6 +88,9 @@ class Flatten(PrimitiveWithInfer):
         Tensor, the shape of the output tensor is :math:`(N, X)`, where :math:`X` is
         the product of the remaining dimensions.
 
+    Raises:
+        ValueError: If length of shape of `input_x` is less than 1.
+
     Supported Platforms:
         ``Ascend`` ``GPU`` ``CPU``

@@ -135,6 +138,12 @@ class Softmax(PrimitiveWithInfer):
     Outputs:
         Tensor, with the same type and shape as the logits.
 
+    Raises:
+        TypeError: If `axis` is neither an int nor a tuple.
+        TypeError: If dtype of `logits` is neither float16 nor float32.
+        ValueError: If `axis` is a tuple whose length is less than 1.
+        ValueError: If `axis` is a tuple whose elements are not all in range [-len(logits), len(logits)).
+
     Supported Platforms:
         ``Ascend`` ``GPU`` ``CPU``

@@ -163,7 +172,7 @@ class Softmax(PrimitiveWithInfer):
         return logits
 
     def infer_dtype(self, logits):
-        validator.check_tensor_dtype_valid("logits", logits, mstype.float_type, self.name)
+        validator.check_tensor_dtype_valid("logits", logits, (mstype.float16, mstype.float32), self.name)
         return logits

@@ -189,6 +198,11 @@ class LogSoftmax(PrimitiveWithInfer):
     Outputs:
         Tensor, with the same type and shape as the logits.
 
+    Raises:
+        TypeError: If `axis` is not an int.
+        TypeError: If dtype of `logits` is neither float16 nor float32.
+        ValueError: If `axis` is not in range [-len(logits), len(logits)).
+
     Supported Platforms:
         ``Ascend`` ``GPU``

@@ -210,7 +224,7 @@ class LogSoftmax(PrimitiveWithInfer):
         return logits
 
     def infer_dtype(self, logits):
-        validator.check_tensor_dtype_valid("logits", logits, mstype.float_type, self.name)
+        validator.check_tensor_dtype_valid("logits", logits, (mstype.float16, mstype.float32), self.name)
         return logits

@@ -306,6 +320,9 @@ class ReLU(PrimitiveWithCheck):
     Outputs:
         Tensor, with the same type and shape as the `input_x`.
 
+    Raises:
+        TypeError: If dtype of `input_x` is not a number.
+
     Supported Platforms:
         ``Ascend`` ``GPU`` ``CPU``

@@ -439,6 +456,9 @@ class ReLU6(PrimitiveWithInfer):
     Outputs:
         Tensor, with the same type and shape as the `input_x`.
 
+    Raises:
+        TypeError: If dtype of `input_x` is neither float16 nor float32.
+
     Supported Platforms:
         ``Ascend`` ``GPU`` ``CPU``

@@ -551,11 +571,16 @@ class Elu(PrimitiveWithInfer):
         only support '1.0' currently. Default: 1.0.
 
     Inputs:
-        - **input_x** (Tensor) - The input tensor whose data type must be float.
+        - **input_x** (Tensor) - The input of Elu with data type of float16 or float32.
 
     Outputs:
         Tensor, has the same shape and data type as `input_x`.
 
+    Raises:
+        TypeError: If `alpha` is not a float.
+        TypeError: If dtype of `input_x` is neither float16 nor float32.
+        ValueError: If `alpha` is not equal to 1.0.
+
     Supported Platforms:
         ``Ascend`` ``GPU`` ``CPU``

@@ -578,7 +603,7 @@ class Elu(PrimitiveWithInfer):
         return input_x
 
     def infer_dtype(self, input_x):
-        validator.check_tensor_dtype_valid('input_x', input_x, mstype.float_type, self.name)
+        validator.check_tensor_dtype_valid('input_x', input_x, (mstype.float16, mstype.float32), self.name)
         return input_x

@@ -601,6 +626,9 @@ class HSwish(PrimitiveWithInfer):
     Outputs:
         Tensor, with the same type and shape as the `input_data`.
 
+    Raises:
+        TypeError: If dtype of `input_data` is neither float16 nor float32.
+
     Supported Platforms:
         ``GPU``

@@ -641,6 +669,9 @@ class Sigmoid(PrimitiveWithInfer):
     Outputs:
         Tensor, with the same type and shape as the `input_x`.
 
+    Raises:
+        TypeError: If dtype of `input_x` is neither float16 nor float32.
+
     Supported Platforms:
         ``Ascend`` ``GPU``

@@ -683,6 +714,9 @@ class HSigmoid(PrimitiveWithInfer):
     Outputs:
         Tensor, with the same type and shape as the `input_data`.
 
+    Raises:
+        TypeError: If dtype of `input_data` is neither float16 nor float32.
+
     Supported Platforms:
         ``GPU``

@@ -718,11 +752,14 @@ class Tanh(PrimitiveWithInfer):
     where :math:`x_i` is an element of the input Tensor.
 
     Inputs:
-        - **input_x** (Tensor) - The input of Tanh.
+        - **input_x** (Tensor) - The input of Tanh with data type of float16 or float32.
 
     Outputs:
         Tensor, with the same type and shape as the `input_x`.
 
+    Raises:
+        TypeError: If dtype of `input_x` is neither float16 nor float32.
+
     Supported Platforms:
         ``Ascend`` ``GPU``

@@ -742,7 +779,7 @@ class Tanh(PrimitiveWithInfer):
         return input_x
 
     def infer_dtype(self, input_x):
-        validator.check_subclass("input_x", input_x, mstype.tensor, self.name)
+        validator.check_tensor_dtype_valid("input_x", input_x, (mstype.float16, mstype.float32), self.name)
         return input_x

@@ -1355,6 +1392,15 @@ class Conv2D(PrimitiveWithCheck):
     Outputs:
         Tensor, the value that applied 2D convolution. The shape is :math:`(N, C_{out}, H_{out}, W_{out})`.
 
+    Raises:
+        TypeError: If `kernel_size`, `stride`, `pad` or `dilation` is neither an int nor a tuple.
+        TypeError: If `out_channel` or `group` is not an int.
+        ValueError: If `kernel_size`, `stride` or `dilation` is less than 1.
+        ValueError: If `pad_mode` is not one of 'same', 'valid', 'pad'.
+        ValueError: If `pad` is a tuple whose length is not equal to 4.
+        ValueError: If `pad_mode` is not equal to 'pad' and `pad` is not equal to (0, 0, 0, 0).
+        ValueError: If `data_format` is neither 'NCHW' nor 'NHWC'.
+
     Supported Platforms:
         ``Ascend`` ``GPU`` ``CPU``

@@ -1883,6 +1929,15 @@ class Conv2DBackpropInput(PrimitiveWithInfer):
     Outputs:
         Tensor, the gradients w.r.t the input of convolution. It has the same shape as the input.
 
+    Raises:
+        TypeError: If `kernel_size`, `stride`, `pad` or `dilation` is neither an int nor a tuple.
+        TypeError: If `out_channel` or `group` is not an int.
+        ValueError: If `kernel_size`, `stride` or `dilation` is less than 1.
+        ValueError: If `pad_mode` is not one of 'same', 'valid', 'pad'.
+        ValueError: If `pad` is a tuple whose length is not equal to 4.
+        ValueError: If `pad_mode` is not equal to 'pad' and `pad` is not equal to (0, 0, 0, 0).
+        ValueError: If `data_format` is neither 'NCHW' nor 'NHWC'.
+
     Supported Platforms:
         ``Ascend`` ``GPU``

@@ -2203,6 +2258,10 @@ class SoftmaxCrossEntropyWithLogits(PrimitiveWithInfer):
     Outputs:
         Tuple of 2 tensors, the `loss` shape is `(N,)`, and the `dlogits` with the same shape as `logits`.
 
+    Raises:
+        TypeError: If dtype of `logits` or `labels` is neither float16 nor float32.
+        ValueError: If shape of `logits` is not the same as `labels`.
+
     Supported Platforms:
         ``Ascend`` ``GPU`` ``CPU``

@@ -2262,6 +2321,12 @@ class SparseSoftmaxCrossEntropyWithLogits(PrimitiveWithInfer):
         Tensor, if `is_grad` is False, the output tensor is the value of loss which is a scalar tensor;
         if `is_grad` is True, the output tensor is the gradient of input with the same shape as `logits`.
 
+    Raises:
+        TypeError: If `is_grad` is not a bool.
+        TypeError: If dtype of `logits` is neither float16 nor float32.
+        TypeError: If dtype of `labels` is neither int32 nor int64.
+        ValueError: If logits_shape[0] != labels_shape[0].
+
     Supported Platforms:
         ``GPU`` ``CPU``

@@ -2394,6 +2459,12 @@ class SmoothL1Loss(PrimitiveWithInfer):
     Outputs:
         Tensor, with the same type and shape as `prediction`.
 
+    Raises:
+        TypeError: If `beta` is not a float.
+        TypeError: If dtype of `prediction` or `target` is neither float16 nor float32.
+        ValueError: If `beta` is less than or equal to 0.
+        ValueError: If shape of `prediction` is not the same as `target`.
+
     Supported Platforms:
         ``Ascend`` ``GPU``

@@ -2874,6 +2945,10 @@ class LayerNorm(Primitive):
         - **mean** (Tensor) - Tensor of shape :math:`(C,)`.
         - **variance** (Tensor) - Tensor of shape :math:`(C,)`.
 
+    Raises:
+        TypeError: If `begin_norm_axis` or `begin_params_axis` is not an int.
+        TypeError: If `epsilon` is not a float.
+
     Supported Platforms:
         ``Ascend`` ``GPU``

@@ -3073,6 +3148,12 @@ class ResizeBilinear(PrimitiveWithInfer):
     Outputs:
         Tensor, resized image. 4-D with shape [batch, channels, new_height, new_width] in `float32`.
 
+    Raises:
+        TypeError: If `size` is neither a tuple nor a list.
+        TypeError: If `align_corners` is not a bool.
+        TypeError: If dtype of `input` is neither float16 nor float32.
+        ValueError: If length of shape of `input` is not equal to 4.
+
     Supported Platforms:
         ``Ascend``

@@ -3091,6 +3172,7 @@ class ResizeBilinear(PrimitiveWithInfer):
     @prim_attr_register
     def __init__(self, size, align_corners=False):
         validator.check_value_type("size", size, [tuple, list], self.name)
         validator.check_value_type("align_corners", align_corners, [bool], self.name)
 
     def infer_shape(self, input_shape):
+        validator.check("input shape rank", len(input_shape), "", 4, Rel.EQ, self.name)

@@ -3135,6 +3217,12 @@ class OneHot(PrimitiveWithInfer):
     Supported Platforms:
         ``Ascend`` ``GPU`` ``CPU``
 
+    Raises:
+        TypeError: If `axis` or `depth` is not an int.
+        TypeError: If dtype of `indices` is neither int32 nor int64.
+        ValueError: If `axis` is not in range [-1, len(indices_shape)].
+        ValueError: If `depth` is less than 0.
+
     Examples:
         >>> indices = Tensor(np.array([0, 1, 2]), mindspore.int32)
         >>> depth, on_value, off_value = 3, Tensor(1.0, mindspore.float32), Tensor(0.0, mindspore.float32)

@@ -3192,6 +3280,9 @@ class Gelu(PrimitiveWithInfer):
     Outputs:
         Tensor, with the same type and shape as input.
 
+    Raises:
+        TypeError: If dtype of `input_x` is neither float16 nor float32.
+
     Supported Platforms:
         ``Ascend`` ``GPU``

@@ -3233,6 +3324,9 @@ class FastGelu(PrimitiveWithInfer):
     Outputs:
         Tensor, with the same type and shape as input.
 
+    Raises:
+        TypeError: If dtype of `input_x` is neither float16 nor float32.
+
     Supported Platforms:
         ``Ascend``

@@ -3337,6 +3431,11 @@ class PReLU(PrimitiveWithInfer):
 
     For detailed information, please refer to `nn.PReLU`.
 
+    Raises:
+        TypeError: If dtype of `input_x` or `weight` is neither float16 nor float32.
+        ValueError: If length of shape of `input_x` is equal to 1.
+        ValueError: If length of shape of `weight` is not equal to 1.
+
     Supported Platforms:
         ``Ascend``

@@ -3397,6 +3496,36 @@ class LSTM(PrimitiveWithInfer):
     For detailed information, please refer to `nn.LSTM
     <https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/nn/mindspore.nn.LSTM.html>`_.
 
+    Args:
+        input_size (int): Number of features of input.
+        hidden_size (int): Number of features of hidden layer.
+        num_layers (int): Number of layers of stacked LSTM. Default: 1.
+        has_bias (bool): Whether the cell has bias `b_ih` and `b_hh`. Default: True.
+        bidirectional (bool): Specifies whether it is a bidirectional LSTM. Default: False.
+        dropout (float): If not 0, append `Dropout` layer on the outputs of each
+            LSTM layer except the last layer. Default: 0. The range of dropout is [0.0, 1.0].
+
+    Inputs:
+        - **input** (Tensor) - Tensor of shape (seq_len, batch_size, `input_size`) or
+          (batch_size, seq_len, `input_size`).
+        - **h** (tuple) - Tensor of shape (num_directions * `num_layers`, batch_size, `hidden_size`).
+        - **c** (tuple) - Tensor of shape (num_directions * `num_layers`, batch_size, `hidden_size`).
+
+    Outputs:
+        Tuple, a tuple contains (`output`, `h_n`, `c_n`, `reserve`, `state`).
+
+        - **output** (Tensor) - Tensor of shape (seq_len, batch_size, num_directions * `hidden_size`).
+        - **h_n** (Tensor) - Tensor of shape (num_directions * `num_layers`, batch_size, `hidden_size`).
+        - **c_n** (Tensor) - Tensor of shape (num_directions * `num_layers`, batch_size, `hidden_size`).
+        - **reserve** (Tensor) - Tensor of shape (r, 1).
+        - **state** (Tensor) - Random number generator state and its shape is (s, 1).
+
+    Raises:
+        TypeError: If `input_size`, `hidden_size` or `num_layers` is not an int.
+        TypeError: If `has_bias` or `bidirectional` is not a bool.
+        TypeError: If `dropout` is not a float.
+        ValueError: If `dropout` is not in range [0.0, 1.0].
+
     Supported Platforms:
         ``GPU`` ``CPU``

@@ -3562,6 +3691,10 @@ class Pad(PrimitiveWithInfer):
     Outputs:
         Tensor, the tensor after padding.
 
+    Raises:
+        TypeError: If `paddings` is not a tuple.
+        ValueError: If shape of `paddings` is not (n, 2).
+
     Supported Platforms:
         ``Ascend`` ``GPU``

@@ -4619,6 +4752,11 @@ class BinaryCrossEntropy(PrimitiveWithInfer):
         Tensor or Scalar, if `reduction` is 'none', then output is a tensor and has the same shape as `input_x`.
         Otherwise, the output is a scalar.
 
+    Raises:
+        TypeError: If dtype of `input_x`, `input_y` or `weight` (if given) is neither float16 nor float32.
+        ValueError: If `reduction` is not one of 'none', 'mean', 'sum'.
+        ValueError: If shape of `input_y` is not the same as `input_x` or `weight` (if given).
+
     Supported Platforms:
         ``Ascend`` ``GPU`` ``CPU``

@@ -6347,14 +6485,26 @@ class Dropout(PrimitiveWithInfer):
     Args:
         keep_prob (float): The keep rate, between 0 and 1, e.g. keep_prob = 0.9,
             means dropping out 10% of input units.
+        Seed0 (int): Seed0 value for random generating. Default: 0.
+        Seed1 (int): Seed1 value for random generating. Default: 0.
 
     Inputs:
-        - **input** (Tensor) - The input tensor.
+        - **input** (Tensor) - The input of Dropout with data type of float16 or float32.
 
     Outputs:
         - **output** (Tensor) - With the same shape as the input tensor.
         - **mask** (Tensor) - With the same shape as the input tensor.
 
+    Raises:
+        TypeError: If `keep_prob` is not a float.
+        TypeError: If `Seed0` or `Seed1` is not an int.
+        TypeError: If dtype of `input` is neither float16 nor float32.
+        ValueError: If `keep_prob` is not in range (0, 1].
+        ValueError: If length of shape of `input` is less than 1.
+
     Supported Platforms:
         ``Ascend`` ``GPU`` ``CPU``
 
     Examples:
         >>> dropout = ops.Dropout(keep_prob=0.5)
         >>> x = Tensor((20, 16, 50, 50), mindspore.float32)

@@ -58,7 +58,7 @@ class LogSoftmaxNet(nn.Cell):
 @non_graph_engine
 def test_compile_logsoftmax():
     net = LogSoftmaxNet(0)
-    input_tensor = Tensor(np.array([[1.2, 2.1], [2.2, 3.2]]))
+    input_tensor = Tensor(np.array([[1.2, 2.1], [2.2, 3.2]], dtype=np.float32))
     net(input_tensor)

@@ -49,7 +49,7 @@ def test_activation_param():
 # test softmax
 def test_softmax_axis():
     layer = nn.Softmax(1)
-    x = Tensor(np.ones([3, 3]))
+    x = Tensor(np.ones([3, 3]).astype(np.float32))
     assert layer.softmax.axis == (1,)
     output = layer.construct(x)
     output_np = output.asnumpy()

@@ -58,7 +58,7 @@ def test_softmax_axis():
 
 def test_softmax_axis_none():
     layer = nn.Softmax()
-    x = Tensor(np.ones([3, 2]))
+    x = Tensor(np.ones([3, 2]).astype(np.float32))
     assert layer.softmax.axis == (-1,)
     output = layer.construct(x)
     output_np = output.asnumpy()