forked from mindspore-Ecosystem/mindspore
API comment for operations part1
This commit is contained in:
parent
cd5a1bf1f4
commit
76bbfdea75
@@ -33,7 +33,7 @@ class InplaceAssign(PrimitiveWithInfer):
  Inputs:
  - **variable** (Parameter) - The `Parameter`.
- - **value** (Tensor) - The value to assign.
+ - **value** (Tensor) - The value to be assigned.
  - **depend** (Tensor) - The dependent tensor to keep this op connected in graph.

  Outputs:

@@ -274,7 +274,7 @@ class EqualCount(GraphKernel):
  """
  Computes the number of the same elements of two tensors.

- The two input tensors should have same shape and data type.
+ The two input tensors should have the same shape and data type.

  Inputs:
  x (Tensor): the first input tensor.

@@ -1139,9 +1139,9 @@ class LambNextMV(GraphKernel):
  Outputs:
  Tuple of 2 Tensor.

- - **add3** (Tensor) - The shape is the same as the shape after broadcasting, and the data type is
+ - **add3** (Tensor) - the shape is the same as the one after broadcasting, and the data type is
  the one with high precision or high digits among the inputs.
- - **realdiv4** (Tensor) - The shape is the same as the shape after broadcasting, and the data type is
+ - **realdiv4** (Tensor) - the shape is the same as the one after broadcasting, and the data type is
  the one with high precision or high digits among the inputs.

  Examples:
@@ -194,7 +194,7 @@ class Adam(Optimizer):
  Default: 0.999.
  eps (float): Term added to the denominator to improve numerical stability. Should be greater than 0. Default:
  1e-8.
- use_locking (bool): Whether to enable a lock to protect updating variable tensors.
+ use_locking (bool): Whether to enable a lock to protect variable tensors from being updated.
  If true, updates of the var, m, and v tensors will be protected by a lock.
  If false, the result is unpredictable. Default: False.
  use_nesterov (bool): Whether to use Nesterov Accelerated Gradient (NAG) algorithm to update the gradients.

@@ -141,7 +141,7 @@ class LazyAdam(Optimizer):
  Default: 0.999.
  eps (float): Term added to the denominator to improve numerical stability. Should be greater than 0. Default:
  1e-8.
- use_locking (bool): Whether to enable a lock to protect updating variable tensors.
+ use_locking (bool): Whether to enable a lock to protect variable tensors from being updated.
  If true, updates of the var, m, and v tensors will be protected by a lock.
  If false, the result is unpredictable. Default: False.
  use_nesterov (bool): Whether to use Nesterov Accelerated Gradient (NAG) algorithm to update the gradients.
@@ -80,7 +80,7 @@ class RMSProp(Optimizer):
  .. math::
  w = w - m_{t}

- where, :math:`w` represents `params`, which will be updated.
+ where :math:`w` represents `params`, which will be updated.
  :math:`g_{t}` is mean gradients, :math:`g_{t-1}` is the last moment of :math:`g_{t}`.
  :math:`s_{t}` is the mean square gradients, :math:`s_{t-1}` is the last moment of :math:`s_{t}`,
  :math:`m_{t}` is moment, the delta of `w`, :math:`m_{t-1}` is the last moment of :math:`m_{t}`.
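For orientation, a minimal usage sketch of this optimizer (the `Dense` layer and hyperparameter values are illustrative assumptions, not part of this commit):

>>> import mindspore.nn as nn
>>> net = nn.Dense(10, 1)  # any Cell with trainable parameters works here
>>> # RMSProp applies w = w - m_t to each parameter w, per the formula above
>>> opt = nn.RMSProp(params=net.trainable_params(), learning_rate=0.01, decay=0.9, momentum=0.9)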
@@ -16,7 +16,7 @@
  """
  Primitive operator classes.

- A collection of operators to build nerual networks or computing functions.
+ A collection of operators to build neural networks or to compute functions.
  """

  from .image_ops import (CropAndResize)
@@ -220,11 +220,11 @@ class Cast(PrimitiveWithInfer):
  Inputs:
  - **input_x** (Union[Tensor, Number]) - The shape of tensor is :math:`(x_1, x_2, ..., x_R)`.
- The tensor to be casted.
+ The tensor to be cast.
  - **type** (dtype.Number) - The valid data type of the output tensor. Only constant value is allowed.

  Outputs:
- Tensor, the shape of tensor is :math:`(x_1, x_2, ..., x_R)`, same as `input_x`.
+ Tensor, the shape of tensor is the same as `input_x`, :math:`(x_1, x_2, ..., x_R)`.

  Examples:
  >>> input_np = np.random.randn(2, 3, 4, 5).astype(np.float32)
@@ -964,7 +964,7 @@ class TupleToArray(PrimitiveWithInfer):
  - **input_x** (tuple) - A tuple of numbers. These numbers have the same type. Only constant value is allowed.

  Outputs:
- Tensor, if the input tuple contain `N` numbers, then the output tensor shape is (N,).
+ Tensor, if the input tuple contains `N` numbers, then the shape of the output tensor is (N,).

  Examples:
  >>> type = P.TupleToArray()((1,2,3))
@@ -1129,11 +1129,11 @@ class Argmax(PrimitiveWithInfer):
  """
  Returns the indices of the max value of a tensor across the axis.

- If the shape of input tensor is :math:`(x_1, ..., x_N)`, the output tensor shape is
+ If the shape of input tensor is :math:`(x_1, ..., x_N)`, the shape of the output tensor will be
  :math:`(x_1, ..., x_{axis-1}, x_{axis+1}, ..., x_N)`.

  Args:
- axis (int): Axis on which Argmax operation applies. Default: -1.
+ axis (int): Axis to which the Argmax operation applies. Default: -1.
  output_type (:class:`mindspore.dtype`): An optional data type of `mindspore.dtype.int32`.
  Default: `mindspore.dtype.int32`.
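A NumPy analogy for the shape rule above (illustration only, not MindSpore code; `np.argmax` drops the reduced axis the same way):

>>> import numpy as np
>>> x = np.random.randn(3, 4, 5).astype(np.float32)
>>> # axis=1 is removed: (x_1, ..., x_{axis-1}, x_{axis+1}, ..., x_N)
>>> np.argmax(x, axis=1).shape
(3, 5)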
@@ -1176,11 +1176,11 @@ class Argmin(PrimitiveWithInfer):
  """
  Returns the indices of the min value of a tensor across the axis.

- If the shape of input tensor is :math:`(x_1, ..., x_N)`, the output tensor shape is
+ If the shape of input tensor is :math:`(x_1, ..., x_N)`, the shape of the output tensor is
  :math:`(x_1, ..., x_{axis-1}, x_{axis+1}, ..., x_N)`.

  Args:
- axis (int): Axis on which Argmin operation applies. Default: -1.
+ axis (int): Axis to which the Argmin operation applies. Default: -1.
  output_type (:class:`mindspore.dtype`): An optional data type of `mindspore.dtype.int32`.
  Default: `mindspore.dtype.int32`.
@@ -1222,16 +1222,17 @@ class Argmin(PrimitiveWithInfer):

  class ArgMaxWithValue(PrimitiveWithInfer):
  """
- Calculates maximum value with corresponding index.
+ Calculates the maximum value with the corresponding index.

- Calculates maximum value along with given axis for the input tensor. Returns the maximum values and indices.
+ Calculates the maximum value along with the given axis for the input tensor. It returns the maximum values and
+ indices.

  Note:
  In auto_parallel and semi_auto_parallel mode, the first output index can not be used.

  Args:
  axis (int): The dimension to reduce. Default: 0.
- keep_dims (bool): Whether to reduce dimension, if true the output will keep same dimension with the input,
+ keep_dims (bool): Whether to reduce dimension, if true, the output will keep the same dimension as the input,
  the output will reduce dimension if false. Default: False.

  Inputs:
@@ -1239,11 +1240,12 @@ class ArgMaxWithValue(PrimitiveWithInfer):
  :math:`(x_1, x_2, ..., x_N)`.

  Outputs:
- tuple(Tensor), tuple of 2 tensors, corresponding index and maximum value of input tensor.
- - index (Tensor) - The index for maximum value of input tensor. If `keep_dims` is true, the output tensors shape
- is :math:`(x_1, x_2, ..., x_{axis-1}, 1, x_{axis+1}, ..., x_N)`. Else, the shape is
+ tuple (Tensor), tuple of 2 tensors, containing the corresponding index and the maximum value of the input
+ tensor.
+ - index (Tensor) - The index for the maximum value of the input tensor. If `keep_dims` is true, the shape of
+ output tensors is :math:`(x_1, x_2, ..., x_{axis-1}, 1, x_{axis+1}, ..., x_N)`. Otherwise, the shape is
  :math:`(x_1, x_2, ..., x_{axis-1}, x_{axis+1}, ..., x_N)`.
- - output_x (Tensor) - The maximum value of input tensor, the shape same as index.
+ - output_x (Tensor) - The maximum value of input tensor, with the same shape as index.

  Examples:
  >>> input_x = Tensor(np.random.rand(5), mindspore.float32)
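The keep_dims shapes documented above, mimicked with NumPy (an analogy; the operator returns the (index, value) pair in a single call):

>>> import numpy as np
>>> x = np.random.rand(3, 4, 5)
>>> np.argmax(x, axis=1).shape, np.max(x, axis=1).shape   # keep_dims false: axis dropped
((3, 5), (3, 5))
>>> np.max(x, axis=1, keepdims=True).shape                # keep_dims true: axis kept as size 1
(3, 1, 5)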
@@ -1272,16 +1274,17 @@ class ArgMaxWithValue(PrimitiveWithInfer):

  class ArgMinWithValue(PrimitiveWithInfer):
  """
- Calculates minimum value with corresponding index, return indices and values.
+ Calculates the minimum value with corresponding index, return indices and values.

- Calculates minimum value along with given axis for the input tensor. Returns the minimum values and indices.
+ Calculates the minimum value along with the given axis for the input tensor. It returns the minimum values and
+ indices.

  Note:
  In auto_parallel and semi_auto_parallel mode, the first output index can not be used.

  Args:
  axis (int): The dimension to reduce. Default: 0.
- keep_dims (bool): Whether to reduce dimension, if true the output will keep same dimension as the input,
+ keep_dims (bool): Whether to reduce dimension, if true the output will keep the same dimension as the input,
  the output will reduce dimension if false. Default: False.

  Inputs:
@@ -1289,9 +1292,12 @@ class ArgMinWithValue(PrimitiveWithInfer):
  :math:`(x_1, x_2, ..., x_N)`.

  Outputs:
- Tensor, corresponding index and minimum value of input tensor. If `keep_dims` is true, the output tensors shape
- is :math:`(x_1, x_2, ..., x_{axis-1}, 1, x_{axis+1}, ..., x_N)`. Else, the shape is
+ tuple (Tensor), tuple of 2 tensors, containing the corresponding index and the minimum value of the input
+ tensor.
+ - index (Tensor) - The index for the minimum value of the input tensor. If `keep_dims` is true, the shape of
+ output tensors is :math:`(x_1, x_2, ..., x_{axis-1}, 1, x_{axis+1}, ..., x_N)`. Otherwise, the shape is
  :math:`(x_1, x_2, ..., x_{axis-1}, x_{axis+1}, ..., x_N)`.
+ - output_x (Tensor) - The minimum value of input tensor, with the same shape as index.

  Examples:
  >>> input_x = Tensor(np.random.rand(5))
@@ -1568,9 +1574,9 @@ class Concat(PrimitiveWithInfer):
  Note:
  The input data is a tuple of tensors. These tensors have the same rank `R`. Set the given axis as `m`, and
- :math:`0 \le m < N`. Set the number of input tensors as `N`. For the :math:`i`-th tensor :math:`t_i` has
- the shape :math:`(x_1, x_2, ..., x_{mi}, ..., x_R)`. :math:`x_{mi}` is the :math:`m`-th dimension of the
- :math:`i`-th tensor. Then, the output tensor shape is
+ :math:`0 \le m < N`. Set the number of input tensors as `N`. For the :math:`i`-th tensor :math:`t_i`, it has
+ the shape of :math:`(x_1, x_2, ..., x_{mi}, ..., x_R)`. :math:`x_{mi}` is the :math:`m`-th dimension of the
+ :math:`i`-th tensor. Then, the shape of the output tensor is

  .. math::
  (x_1, x_2, ..., \sum_{i=1}^Nx_{mi}, ..., x_R)
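A NumPy illustration of the summed dimension in the formula above (analogy only): concatenating N=2 tensors along axis m=0 adds their m-th dimensions.

>>> import numpy as np
>>> t0, t1 = np.ones((2, 4)), np.ones((3, 4))
>>> np.concatenate((t0, t1), axis=0).shape  # (2 + 3, 4)
(5, 4)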
@@ -1579,7 +1585,7 @@ class Concat(PrimitiveWithInfer):
  axis (int): The specified axis. Default: 0.

  Inputs:
- - **input_x** (tuple, list) - Tuple or list of input tensors.
+ - **input_x** (tuple, list) - A tuple or a list of input tensors.

  Outputs:
  Tensor, the shape is :math:`(x_1, x_2, ..., \sum_{i=1}^Nx_{mi}, ..., x_R)`.
@@ -1691,7 +1697,7 @@ class Pack(PrimitiveWithInfer):
  Packs the list of input tensors with the same rank `R`, output is a tensor of rank `(R+1)`.

  Given input tensors of shape :math:`(x_1, x_2, ..., x_R)`. Set the number of input tensors as `N`.
- If :math:`0 \le axis`, the output tensor shape is :math:`(x_1, x_2, ..., x_{axis}, N, x_{axis+1}, ..., x_R)`.
+ If :math:`0 \le axis`, the shape of the output tensor is :math:`(x_1, x_2, ..., x_{axis}, N, x_{axis+1}, ..., x_R)`.

  Args:
  axis (int): Dimension along which to pack. Default: 0.
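A NumPy analogy for the documented rank-(R+1) result (`np.stack` behaves like Pack here; illustration only):

>>> import numpy as np
>>> a, b = np.zeros((3, 4)), np.ones((3, 4))
>>> np.stack((a, b), axis=0).shape  # N=2 tensors of shape (3, 4) -> (2, 3, 4)
(2, 3, 4)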
@@ -2364,7 +2370,7 @@ class ScatterNd(PrimitiveWithInfer):
  Inputs:
  - **indices** (Tensor) - The index of scattering in the new tensor. With int32 data type.
  - **update** (Tensor) - The source Tensor to be scattered.
- - **shape** (tuple[int]) - Define the shape of the output tensor. Has the same type as indices.
+ - **shape** (tuple[int]) - Define the shape of the output tensor, has the same type as indices.

  Outputs:
  Tensor, the new tensor, has the same type as `update` and the same shape as `shape`.
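A NumPy sketch of the scatter semantics (an analogy, not the operator's implementation): each row of `indices` addresses one position in a zero tensor of the given shape, which `update` fills.

>>> import numpy as np
>>> indices = np.array([[0, 1], [1, 1]], dtype=np.int32)
>>> update = np.array([3.2, 1.1], dtype=np.float32)
>>> out = np.zeros((3, 3), dtype=update.dtype)
>>> out[tuple(indices.T)] = update
>>> # out now holds 3.2 at position (0, 1) and 1.1 at position (1, 1)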
@@ -3055,7 +3061,7 @@ class SpaceToBatch(PrimitiveWithInfer):
  of the input are zero padded according to paddings if necessary.

  Args:
- block_size (int): The block size of dividing block with value >= 2.
+ block_size (int): The block size of division, has the value not less than 2.
  paddings (list): The padding value for H and W dimension, containing 2 sub list, each containing 2 int value.
  All values must be >= 0. paddings[i] specifies the paddings for spatial dimension i, which corresponds to
  input dimension i+2. It is required that input_shape[i+2]+paddings[i][0]+paddings[i][1] is divisible
@@ -3066,7 +3072,7 @@ class SpaceToBatch(PrimitiveWithInfer):

  Outputs:
  Tensor, the output tensor with the same type as input. Assume input shape is :math:`(n, c, h, w)` with
- :math:`block\_size` and :math:`padddings`. The output tensor shape will be :math:`(n', c', h', w')`, where
+ :math:`block\_size` and :math:`paddings`. The shape of the output tensor will be :math:`(n', c', h', w')`, where

  :math:`n' = n*(block\_size*block\_size)`
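Shape bookkeeping for the documented output, in plain Python (the h' and w' divisions follow the usual SpaceToBatch rule and are an assumption here, since this hunk only shows n'):

>>> n, c, h, w, block_size = 1, 3, 4, 4, 2
>>> paddings = [[0, 0], [0, 0]]
>>> n * block_size * block_size  # n' = n*(block_size*block_size)
4
>>> (h + sum(paddings[0])) // block_size, (w + sum(paddings[1])) // block_size
(2, 2)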
@@ -3124,11 +3130,12 @@ class BatchToSpace(PrimitiveWithInfer):
  dimension and block_size with given amount to crop from dimension, respectively.

  Args:
- block_size (int): The block size of dividing block with value >= 2.
- crops (Union[list(int), tuple(int)]): The crop value for H and W dimension, containing 2 sub list,
- each containing 2 int value.
- All values must be >= 0. crops[i] specifies the crop values for spatial dimension i, which corresponds to
- input dimension i+2. It is required that input_shape[i+2]*block_size >= crops[i][0]+crops[i][1].
+ block_size (int): The block size of division, has the value not less than 2.
+ crops (Union[list(int), tuple(int)]): The crop value for H and W dimension, containing 2 sub lists.
+ Each list contains 2 integers.
+ All values must be not less than 0. crops[i] specifies the crop values for the spatial dimension i, which
+ corresponds to the input dimension i+2. It is required that
+ input_shape[i+2]*block_size >= crops[i][0]+crops[i][1].

  Inputs:
  - **input_x** (Tensor) - The input tensor. It must be a 4-D tensor, dimension 0 should be divisible by
@@ -3210,7 +3217,8 @@ class SpaceToBatchND(PrimitiveWithInfer):
  - **input_x** (Tensor) - The input tensor. It must be a 4-D tensor.
  Outputs:
  Tensor, the output tensor with the same type as input. Assume input shape is :math:`(n, c, h, w)` with
- :math:`block\_shape` and :math:`padddings`. The output tensor shape will be :math:`(n', c', h', w')`, where
+ :math:`block\_shape` and :math:`paddings`. The shape of the output tensor will be :math:`(n', c', h', w')`,
+ where

  :math:`n' = n*(block\_shape[0]*block\_shape[1])`
@@ -3276,11 +3284,11 @@ class SpaceToBatchND(PrimitiveWithInfer):

  class BatchToSpaceND(PrimitiveWithInfer):
  r"""
- Divide batch dimension with blocks and interleaves these blocks back into spatial dimensions.
+ Divide batch dimension with blocks and interleave these blocks back into spatial dimensions.

  This operation will divide batch dimension N into blocks with block_shape, the output tensor's N dimension
  is the corresponding number of blocks after division. The output tensor's H, W dimension is product of original H, W
  dimension and block_shape with given amount to crop from dimension, respectively.

  Args:
  block_shape (Union[list(int), tuple(int)]): The block shape of dividing block with all value >= 1.
@@ -47,17 +47,17 @@ class AllReduce(PrimitiveWithInfer):
  Note:
  The operation of AllReduce does not support "prod" currently.
- Tensor must have same shape and format in all processes participating in the collective.
+ The tensors must have the same shape and format in all processes of the collection.

  Args:
  op (str): Specifies an operation used for element-wise reductions,
- like sum, max, min. Default: ReduceOp.SUM.
+ like sum, max, and min. Default: ReduceOp.SUM.
  group (str): The communication group to work on. Default: "hccl_world_group".

  Raises:
- TypeError: If any of op and group is not a string
- or fusion is not a integer or the input's dtype is bool.
- ValueError: If op is "prod"
+ TypeError: If any of operation and group is not a string,
+ or fusion is not an integer, or the input's dtype is bool.
+ ValueError: If the operation is "prod".

  Inputs:
  - **input_x** (Tensor) - The shape of tensor is :math:`(x_1, x_2, ..., x_R)`.
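A local stand-in for what ReduceOp.SUM computes (illustration only; the real operator runs across the devices of the group):

>>> import numpy as np
>>> per_process = [np.array([1., 2.]), np.array([3., 4.]), np.array([5., 6.])]
>>> np.sum(per_process, axis=0)  # every process ends up holding this element-wise sum
array([ 9., 12.])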
@@ -113,7 +113,7 @@ class AllGather(PrimitiveWithInfer):
  Gathers tensors from the specified communication group.

  Note:
- Tensor must have the same shape and format in all processes participating in the collective.
+ The tensors must have the same shape and format in all processes of the collection.

  Args:
  group (str): The communication group to work on. Default: "hccl_world_group".
@@ -177,7 +177,7 @@ class _HostAllGather(PrimitiveWithInfer):
  Gathers tensors from the specified communication group on host.

  Note:
- Tensor must have the same shape and format in all processes participating in the collective.
+ The tensors must have the same shape and format in all processes of the collection.
  _HostAllGather is a host-side operator, it depends on OpenMPI and must use build option -M on
  to enable it. Using mpirun command to run it:
  mpirun -output-filename log -merge-stderr-to-stdout -np 3 python test_host_all_gather.py
@@ -227,8 +227,8 @@ class ReduceScatter(PrimitiveWithInfer):
  Reduces and scatters tensors from the specified communication group.

  Note:
- The back propagation of the op is not surported yet. Stay tuned for more.
- Tensor must have the same shape and format in all processes participating in the collective.
+ The back propagation of the op is not supported yet. Stay tuned for more.
+ The tensors must have the same shape and format in all processes of the collection.

  Args:
  op (str): Specifies an operation used for element-wise reductions,
@@ -236,7 +236,7 @@ class ReduceScatter(PrimitiveWithInfer):
  group (str): The communication group to work on. Default: "hccl_world_group".

  Raises:
- TypeError: If any of op and group is not a string
+ TypeError: If any of operation and group is not a string.
  ValueError: If the first dimension of input can not be divided by rank size.

  Examples:
@@ -288,7 +288,7 @@ class _HostReduceScatter(PrimitiveWithInfer):
  Reduces and scatters tensors from the specified communication group on host.

  Note:
- Tensor must have the same shape and format in all processes participating in the collective.
+ The tensors must have the same shape and format in all processes of the collection.
  _HostReduceScatter is a host-side operator, it depends on OpenMPI and must use build option
  -M on to enable it. Using mpirun command to run it:
  mpirun -output-filename log -merge-stderr-to-stdout -np 3 python test_host_reduce_scatter.py
@@ -337,7 +337,7 @@ class Broadcast(PrimitiveWithInfer):
  Broadcasts the tensor to the whole group.

  Note:
- Tensor must have the same shape and format in all processes participating in the collective.
+ The tensors must have the same shape and format in all processes of the collection.

  Args:
  root_rank (int): Source rank. Required in all processes except the one
@@ -402,7 +402,7 @@ class _AlltoAll(PrimitiveWithInfer):
  - The gather phase: Each process concatenates the received blocks along the concat_dimension.

  Note:
- Tensor must have the same shape and format in all processes participating in the collective.
+ The tensors must have the same shape and format in all processes of the collection.

  Args:
  split_count (int): On each process, divide blocks into split_count number.
@@ -133,7 +133,7 @@ class TensorAdd(_MathBinaryOp):
  a bool when the first input is a tensor or a tensor whose data type is number or bool.

  Outputs:
- Tensor, the shape is the same as the shape after broadcasting,
+ Tensor, the shape is the same as the one after broadcasting,
  and the data type is the one with high precision or high digits among the two inputs.

  Examples:
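The broadcasting and type-promotion behaviour described above, sketched with NumPy (an analogy for illustration):

>>> import numpy as np
>>> x = np.ones((2, 3), dtype=np.float16)
>>> y = np.ones((3,), dtype=np.float32)
>>> (x + y).shape, (x + y).dtype  # broadcast to (2, 3); the higher-precision dtype wins
((2, 3), dtype('float32'))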
@@ -689,12 +689,12 @@ class BatchMatMul(MatMul):

  `result[..., :, :] = tensor(a[..., :, :]) * tensor(b[..., :, :])`.

- The two input tensors must have same rank and the rank must be `3` at least.
+ The two input tensors must have the same rank and the rank must be not less than `3`.

  Args:
- transpose_a (bool): If True, `a` is transposed on the last two dimensions before multiplication.
+ transpose_a (bool): If True, the last two dimensions of `a` are transposed before multiplication.
  Default: False.
- transpose_b (bool): If True, `b` is transposed on the last two dimensions before multiplication.
+ transpose_b (bool): If True, the last two dimensions of `b` are transposed before multiplication.
  Default: False.

  Inputs:
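The rank-3 case described above, checked with NumPy's batched matmul (illustration only):

>>> import numpy as np
>>> a = np.ones((2, 3, 4), dtype=np.float32)
>>> b = np.ones((2, 4, 5), dtype=np.float32)
>>> (a @ b).shape  # batch-by-batch product of the trailing two dimensions
(2, 3, 5)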
@@ -860,11 +860,11 @@ class AccumulateNV2(PrimitiveWithInfer):
  """
  Computes accumulation of all input tensors element-wise.

- AccumulateNV2 is like AddN with a significant difference: AccumulateNV2 won't
- wait for all of its inputs to be ready before beginning to sum. That is to say,
- AccumulateNV2 will be able to save memory when inputs are ready at different
- times since minimum temporary storage is proportional to the output size rather
- than the inputs size.
+ AccumulateNV2 is similar to AddN, but there is a significant difference
+ between them: AccumulateNV2 will not wait for all of its inputs to be ready
+ before summing. That is to say, AccumulateNV2 is able to save
+ memory when inputs are ready at different times since the minimum temporary
+ storage is proportional to the output size rather than the input size.

  Inputs:
  - **input_x** (Union(tuple[Tensor], list[Tensor])) - The input tuple or list
@@ -1086,7 +1086,7 @@ class Sub(_MathBinaryOp):
  a bool when the first input is a tensor or a tensor whose data type is number or bool.

  Outputs:
- Tensor, the shape is the same as the shape after broadcasting,
+ Tensor, the shape is the same as the one after broadcasting,
  and the data type is the one with high precision or high digits among the two inputs.

  Examples:

@@ -1125,7 +1125,7 @@ class Mul(_MathBinaryOp):
  a bool when the first input is a tensor or a tensor whose data type is number or bool.

  Outputs:
- Tensor, the shape is the same as the shape after broadcasting,
+ Tensor, the shape is the same as the one after broadcasting,
  and the data type is the one with high precision or high digits among the two inputs.

  Examples:

@@ -1165,7 +1165,7 @@ class SquaredDifference(_MathBinaryOp):
  float16, float32, int32 or bool.

  Outputs:
- Tensor, the shape is the same as the shape after broadcasting,
+ Tensor, the shape is the same as the one after broadcasting,
  and the data type is the one with high precision or high digits among the two inputs.

  Examples:

@@ -1355,7 +1355,7 @@ class Pow(_MathBinaryOp):
  a bool when the first input is a tensor or a tensor whose data type is number or bool.

  Outputs:
- Tensor, the shape is the same as the shape after broadcasting,
+ Tensor, the shape is the same as the one after broadcasting,
  and the data type is the one with high precision or high digits among the two inputs.

  Examples:

@@ -1641,7 +1641,7 @@ class Minimum(_MathBinaryOp):
  a bool when the first input is a tensor or a tensor whose data type is number or bool.

  Outputs:
- Tensor, the shape is the same as the shape after broadcasting,
+ Tensor, the shape is the same as the one after broadcasting,
  and the data type is the one with high precision or high digits among the two inputs.

  Examples:

@@ -1680,7 +1680,7 @@ class Maximum(_MathBinaryOp):
  a bool when the first input is a tensor or a tensor whose data type is number or bool.

  Outputs:
- Tensor, the shape is the same as the shape after broadcasting,
+ Tensor, the shape is the same as the one after broadcasting,
  and the data type is the one with high precision or high digits among the two inputs.

  Examples:

@@ -1719,7 +1719,7 @@ class RealDiv(_MathBinaryOp):
  a bool when the first input is a tensor or a tensor whose data type is number or bool.

  Outputs:
- Tensor, the shape is the same as the shape after broadcasting,
+ Tensor, the shape is the same as the one after broadcasting,
  and the data type is the one with high precision or high digits among the two inputs.

  Examples:

@@ -1759,7 +1759,7 @@ class Div(_MathBinaryOp):
  is a number or a bool, the second input should be a tensor whose data type is number or bool.

  Outputs:
- Tensor, the shape is the same as the shape after broadcasting,
+ Tensor, the shape is the same as the one after broadcasting,
  and the data type is the one with high precision or high digits among the two inputs.

  Raises:

@@ -1799,7 +1799,7 @@ class DivNoNan(_MathBinaryOp):
  a bool when the first input is a tensor or a tensor whose data type is number or bool.

  Outputs:
- Tensor, the shape is the same as the shape after broadcasting,
+ Tensor, the shape is the same as the one after broadcasting,
  and the data type is the one with high precision or high digits among the two inputs.

  Raises:

@@ -1842,7 +1842,7 @@ class FloorDiv(_MathBinaryOp):
  a bool when the first input is a tensor or a tensor whose data type is number or bool.

  Outputs:
- Tensor, the shape is the same as the shape after broadcasting,
+ Tensor, the shape is the same as the one after broadcasting,
  and the data type is the one with high precision or high digits among the two inputs.

  Examples:

@@ -1873,7 +1873,7 @@ class TruncateDiv(_MathBinaryOp):
  a bool when the first input is a tensor or a tensor whose data type is number or bool.

  Outputs:
- Tensor, the shape is the same as the shape after broadcasting,
+ Tensor, the shape is the same as the one after broadcasting,
  and the data type is the one with high precision or high digits among the two inputs.

  Examples:

@@ -1903,7 +1903,7 @@ class TruncateMod(_MathBinaryOp):
  a bool when the first input is a tensor or a tensor whose data type is number or bool.

  Outputs:
- Tensor, the shape is the same as the shape after broadcasting,
+ Tensor, the shape is the same as the one after broadcasting,
  and the data type is the one with high precision or high digits among the two inputs.

  Examples:

@@ -1931,7 +1931,7 @@ class Mod(_MathBinaryOp):
  the second input should be a tensor whose data type is number.

  Outputs:
- Tensor, the shape is the same as the shape after broadcasting,
+ Tensor, the shape is the same as the one after broadcasting,
  and the data type is the one with high precision or high digits among the two inputs.

  Raises:

@@ -1999,7 +1999,7 @@ class FloorMod(_MathBinaryOp):
  a bool when the first input is a tensor or a tensor whose data type is number or bool.

  Outputs:
- Tensor, the shape is the same as the shape after broadcasting,
+ Tensor, the shape is the same as the one after broadcasting,
  and the data type is the one with high precision or high digits among the two inputs.

  Examples:

@@ -2058,7 +2058,7 @@ class Xdivy(_MathBinaryOp):
  a bool when the first input is a tensor or a tensor whose data type is float16, float32 or bool.

  Outputs:
- Tensor, the shape is the same as the shape after broadcasting,
+ Tensor, the shape is the same as the one after broadcasting,
  and the data type is the one with high precision or high digits among the two inputs.

  Examples:

@@ -2093,7 +2093,7 @@ class Xlogy(_MathBinaryOp):
  The value must be positive.

  Outputs:
- Tensor, the shape is the same as the shape after broadcasting,
+ Tensor, the shape is the same as the one after broadcasting,
  and the data type is the one with high precision or high digits among the two inputs.

  Examples:

@@ -2110,7 +2110,7 @@ class Xlogy(_MathBinaryOp):

  class Acosh(PrimitiveWithInfer):
  """
- Compute inverse hyperbolic cosine of x element-wise.
+ Compute inverse hyperbolic cosine of the input element-wise.

  Inputs:
  - **input_x** (Tensor) - The shape of tensor is :math:`(x_1, x_2, ..., x_R)`.

@@ -2167,7 +2167,7 @@ class Cosh(PrimitiveWithInfer):

  class Asinh(PrimitiveWithInfer):
  """
- Compute inverse hyperbolic sine of x element-wise.
+ Compute inverse hyperbolic sine of the input element-wise.

  Inputs:
  - **input_x** (Tensor) - The shape of tensor is :math:`(x_1, x_2, ..., x_R)`.

@@ -2254,7 +2254,7 @@ class Equal(_LogicBinaryOp):
  a bool when the first input is a tensor or a tensor whose data type is number or bool.

  Outputs:
- Tensor, the shape is the same as the shape after broadcasting,and the data type is bool.
+ Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

  Examples:
  >>> input_x = Tensor(np.array([1, 2, 3]), mindspore.float32)
@@ -2275,7 +2275,7 @@ class Equal(_LogicBinaryOp):

  class ApproximateEqual(_LogicBinaryOp):
  """
- Returns the truth value of abs(x1-x2) < tolerance element-wise.
+ Returns true if abs(x1-x2) is smaller than tolerance element-wise, otherwise false.

  Inputs of `x1` and `x2` comply with the implicit type conversion rules to make the data types consistent.
  If they have different data types, lower priority data type will be converted to
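The element-wise test described above, written out with NumPy (illustration only; the tolerance here is 0.5):

>>> import numpy as np
>>> x1 = np.array([1., 2., 3.], dtype=np.float32)
>>> x2 = np.array([2., 2.1, 3.], dtype=np.float32)
>>> np.abs(x1 - x2) < 0.5
array([False,  True,  True])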
@@ -2320,7 +2320,7 @@ class EqualCount(PrimitiveWithInfer):
  """
  Computes the number of the same elements of two tensors.

- The two input tensors should have same data type and shape.
+ The two input tensors should have the same data type and shape.

  Inputs:
  - **input_x** (Tensor) - The first input tensor.

@@ -2369,7 +2369,7 @@ class NotEqual(_LogicBinaryOp):
  a bool when the first input is a tensor or a tensor whose data type is number or bool.

  Outputs:
- Tensor, the shape is the same as the shape after broadcasting,and the data type is bool.
+ Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

  Examples:
  >>> input_x = Tensor(np.array([1, 2, 3]), mindspore.float32)

@@ -2406,7 +2406,7 @@ class Greater(_LogicBinaryOp):
  a bool when the first input is a tensor or a tensor whose data type is number or bool.

  Outputs:
- Tensor, the shape is the same as the shape after broadcasting,and the data type is bool.
+ Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

  Examples:
  >>> input_x = Tensor(np.array([1, 2, 3]), mindspore.int32)

@@ -2443,7 +2443,7 @@ class GreaterEqual(_LogicBinaryOp):
  a bool when the first input is a tensor or a tensor whose data type is number or bool.

  Outputs:
- Tensor, the shape is the same as the shape after broadcasting,and the data type is bool.
+ Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

  Examples:
  >>> input_x = Tensor(np.array([1, 2, 3]), mindspore.int32)

@@ -2480,7 +2480,7 @@ class Less(_LogicBinaryOp):
  a bool when the first input is a tensor or a tensor whose data type is number or bool.

  Outputs:
- Tensor, the shape is the same as the shape after broadcasting,and the data type is bool.
+ Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

  Examples:
  >>> input_x = Tensor(np.array([1, 2, 3]), mindspore.int32)

@@ -2517,7 +2517,7 @@ class LessEqual(_LogicBinaryOp):
  a bool when the first input is a tensor or a tensor whose data type is number or bool.

  Outputs:
- Tensor, the shape is the same as the shape after broadcasting,and the data type is bool.
+ Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

  Examples:
  >>> input_x = Tensor(np.array([1, 2, 3]), mindspore.int32)

@@ -2583,7 +2583,7 @@ class LogicalAnd(_LogicBinaryOp):
  a tensor whose data type is bool.

  Outputs:
- Tensor, the shape is the same as the shape after broadcasting, and the data type is bool.
+ Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

  Examples:
  >>> input_x = Tensor(np.array([True, False, True]), mindspore.bool_)

@@ -2614,7 +2614,7 @@ class LogicalOr(_LogicBinaryOp):
  a tensor whose data type is bool.

  Outputs:
- Tensor, the shape is the same as the shape after broadcasting,and the data type is bool.
+ Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

  Examples:
  >>> input_x = Tensor(np.array([True, False, True]), mindspore.bool_)
@@ -3163,13 +3163,13 @@ class Tan(PrimitiveWithInfer):

  class Atan(PrimitiveWithInfer):
  """
- Computes the trignometric inverse tangent of x element-wise.
+ Computes the trigonometric inverse tangent of the input element-wise.

  Inputs:
  - **input_x** (Tensor): The input tensor.

  Outputs:
- A Tensor. Has the same type as x.
+ A Tensor, has the same type as the input.

  Examples:
  >>> input_x = Tensor(np.array([1.047, 0.785]), mindspore.float32)
@@ -3194,13 +3194,13 @@ class Atan(PrimitiveWithInfer):

  class Atanh(PrimitiveWithInfer):
  """
- Computes inverse hyperbolic tangent of x element-wise.
+ Computes inverse hyperbolic tangent of the input element-wise.

  Inputs:
  - **input_x** (Tensor): The input tensor.

  Outputs:
- A Tensor. Has the same type as x.
+ A Tensor, has the same type as the input.

  Examples:
  >>> input_x = Tensor(np.array([1.047, 0.785]), mindspore.float32)
@@ -3238,7 +3238,7 @@ class Atan2(_MathBinaryOp):
  - **input_y** (Tensor) - The input tensor.

  Outputs:
- Tensor, the shape is the same as the shape after broadcasting,and the data type is same as `input_x`.
+ Tensor, the shape is the same as the one after broadcasting, and the data type is the same as `input_x`.

  Examples:
  >>> input_x = Tensor(np.array([[0, 1]]), mindspore.float32)
@@ -742,7 +742,7 @@ class BNTrainingReduce(PrimitiveWithInfer):

  class BNTrainingUpdate(PrimitiveWithInfer):
  """
- primitive operator of bn_training_update's register and info descriptor
+ The primitive operator of the register and info descriptor in bn_training_update.
  """
  @prim_attr_register
  def __init__(self, isRef=True, epsilon=1e-5, factor=0.1):
@@ -1513,9 +1513,9 @@ class BiasAdd(PrimitiveWithInfer):
  except for the channel axis.

  Inputs:
- - **input_x** (Tensor) - Input value. The input shape can be 2-4 dimensions.
- - **bias** (Tensor) - Bias value, with shape :math:`(C)`.
- The shape of `bias` must be the same as `input_x` in second dimension.
+ - **input_x** (Tensor) - The input tensor. The shape can be 2-4 dimensions.
+ - **bias** (Tensor) - The bias tensor, with shape :math:`(C)`.
+ The shape of `bias` must be the same as `input_x` in the second dimension.

  Outputs:
  Tensor, with the same shape and type as `input_x`.
@@ -1606,7 +1606,7 @@ class SoftmaxCrossEntropyWithLogits(PrimitiveWithInfer):

  Inputs:
  - **logits** (Tensor) - Input logits, with shape :math:`(N, C)`. Data type should be float16 or float32.
- - **labels** (Tensor) - Ground truth labels, with shape :math:`(N, C)`. Has the same data type with `logits`.
+ - **labels** (Tensor) - Ground truth labels, with shape :math:`(N, C)`, has the same data type as `logits`.

  Outputs:
  Tuple of 2 Tensor, the loss shape is `(N,)`, and the dlogits with the same shape as `logits`.
@@ -1820,7 +1820,7 @@ class L2Loss(PrimitiveWithInfer):
  - **input_x** (Tensor) - An input Tensor. Data type should be float16 or float32.

  Outputs:
- Tensor. Has the same dtype as `input_x`. The output tensor is the value of loss which is a scalar tensor.
+ Tensor, has the same dtype as `input_x`. The output tensor is the value of loss which is a scalar tensor.

  Examples:
  >>> input_x = Tensor(np.array([1, 2, 3]), mindspore.float16)
@@ -2027,7 +2027,7 @@ class ApplyRMSProp(PrimitiveWithInfer):
  .. math::
  w = w - m_{t}

- where, :math:`w` represents `var`, which will be updated.
+ where :math:`w` represents `var`, which will be updated.
  :math:`s_{t}` represents `mean_square`, :math:`s_{t-1}` is the last moment of :math:`s_{t}`,
  :math:`m_{t}` represents `moment`, :math:`m_{t-1}` is the last moment of :math:`m_{t}`.
  :math:`\\rho` represents `decay`. :math:`\\beta` is the momentum term, represents `momentum`.
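One scalar update step, assuming the standard RMSProp recurrences for `mean_square` and `moment` (the hunk above shows only the final `w = w - m_t`; the first two lines below are the usual definitions, stated here as an assumption):

>>> import math
>>> w, s, m = 1.0, 0.0, 0.0
>>> g, lr, decay, momentum, eps = 0.5, 0.1, 0.9, 0.9, 1e-10
>>> s = decay * s + (1 - decay) * g ** 2            # mean_square, s_t
>>> m = momentum * m + lr * g / math.sqrt(s + eps)  # moment, m_t
>>> w = w - m                                       # w = w - m_t
>>> round(w, 4)
0.6838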
@@ -2121,7 +2121,7 @@ class ApplyCenteredRMSProp(PrimitiveWithInfer):
  .. math::
  w = w - m_{t}

- where, :math:`w` represents `var`, which will be updated.
+ where :math:`w` represents `var`, which will be updated.
  :math:`g_{t}` represents `mean_gradient`, :math:`g_{t-1}` is the last moment of :math:`g_{t}`.
  :math:`s_{t}` represents `mean_square`, :math:`s_{t-1}` is the last moment of :math:`s_{t}`,
  :math:`m_{t}` represents `moment`, :math:`m_{t-1}` is the last moment of :math:`m_{t}`.
@@ -2989,7 +2989,7 @@ class Adam(PrimitiveWithInfer):
  `epsilon`.

  Args:
- use_locking (bool): Whether to enable a lock to protect updating variable tensors.
+ use_locking (bool): Whether to enable a lock to protect variable tensors from being updated.
  If true, updates of the var, m, and v tensors will be protected by a lock.
  If false, the result is unpredictable. Default: False.
  use_nesterov (bool): Whether to use Nesterov Accelerated Gradient (NAG) algorithm to update the gradients.
@@ -2998,16 +2998,16 @@ class Adam(PrimitiveWithInfer):

  Inputs:
  - **var** (Tensor) - Weights to be updated.
- - **m** (Tensor) - The 1st moment vector in the updating formula. Has the same type as `var`.
+ - **m** (Tensor) - The 1st moment vector in the updating formula, has the same type as `var`.
  - **v** (Tensor) - the 2nd moment vector in the updating formula.
- Mean square gradients, has the same type as `var`.
+ Mean square gradients with the same type as `var`.
  - **beta1_power** (float) - :math:`beta_1^t` in the updating formula.
  - **beta2_power** (float) - :math:`beta_2^t` in the updating formula.
  - **lr** (float) - :math:`l` in the updating formula.
  - **beta1** (float) - The exponential decay rate for the 1st moment estimations.
  - **beta2** (float) - The exponential decay rate for the 2nd moment estimations.
  - **epsilon** (float) - Term added to the denominator to improve numerical stability.
- - **gradient** (Tensor) - Gradients. Has the same type as `var`.
+ - **gradient** (Tensor) - Gradients, has the same type as `var`.

  Outputs:
  Tuple of 3 Tensor, the updated parameters.
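A scalar sketch of one step with the inputs listed above, following the standard Adam update (an assumption for illustration; the fused operator's exact arithmetic may differ):

>>> import math
>>> var, m, v, gradient = 1.0, 0.0, 0.0, 0.5
>>> beta1, beta2, lr, epsilon, t = 0.9, 0.999, 0.001, 1e-8, 1
>>> m = beta1 * m + (1 - beta1) * gradient            # 1st moment
>>> v = beta2 * v + (1 - beta2) * gradient ** 2       # 2nd moment (mean square gradients)
>>> lr_t = lr * math.sqrt(1 - beta2 ** t) / (1 - beta1 ** t)  # uses beta1_power, beta2_power
>>> var = var - lr_t * m / (math.sqrt(v) + epsilon)
>>> round(var, 6)
0.999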
@@ -3088,7 +3088,7 @@ class FusedSparseAdam(PrimitiveWithInfer):
  RuntimeError exception will be thrown when the data type conversion of Parameter is required.

  Args:
- use_locking (bool): Whether to enable a lock to protect updating variable tensors.
+ use_locking (bool): Whether to enable a lock to protect variable tensors from being updated.
  If true, updates of the var, m, and v tensors will be protected by a lock.
  If false, the result is unpredictable. Default: False.
  use_nesterov (bool): Whether to use Nesterov Accelerated Gradient (NAG) algorithm to update the gradients.

@@ -3097,10 +3097,10 @@ class FusedSparseAdam(PrimitiveWithInfer):

  Inputs:
  - **var** (Parameter) - Parameters to be updated. With float32 data type.
- - **m** (Parameter) - The 1st moment vector in the updating formula. Has the same type as `var`. With
+ - **m** (Parameter) - The 1st moment vector in the updating formula, has the same type as `var`. With
  float32 data type.
- - **v** (Parameter) - The 2nd moment vector in the updating formula. Mean square gradients,
- has the same type as `var`. With float32 data type.
+ - **v** (Parameter) - The 2nd moment vector in the updating formula. Mean square gradients
+ with the same type as `var`. With float32 data type.
  - **beta1_power** (Tensor) - :math:`beta_1^t` in the updating formula. With float32 data type.
  - **beta2_power** (Tensor) - :math:`beta_2^t` in the updating formula. With float32 data type.
  - **lr** (Tensor) - :math:`l` in the updating formula. With float32 data type.
@@ -3227,7 +3227,7 @@ class FusedSparseLazyAdam(PrimitiveWithInfer):
  RuntimeError exception will be thrown when the data type conversion of Parameter is required.

  Args:
- use_locking (bool): Whether to enable a lock to protect updating variable tensors.
+ use_locking (bool): Whether to enable a lock to protect variable tensors from being updated.
  If true, updates of the var, m, and v tensors will be protected by a lock.
  If false, the result is unpredictable. Default: False.
  use_nesterov (bool): Whether to use Nesterov Accelerated Gradient (NAG) algorithm to update the gradients.

@@ -3236,10 +3236,10 @@ class FusedSparseLazyAdam(PrimitiveWithInfer):

  Inputs:
  - **var** (Parameter) - Parameters to be updated. With float32 data type.
- - **m** (Parameter) - The 1st moment vector in the updating formula. Has the same type as `var`. With
+ - **m** (Parameter) - The 1st moment vector in the updating formula, has the same type as `var`. With
  float32 data type.
- - **v** (Parameter) - The 2nd moment vector in the updating formula. Mean square gradients,
- has the same type as `var`. With float32 data type.
+ - **v** (Parameter) - The 2nd moment vector in the updating formula. Mean square gradients
+ with the same type as `var`. With float32 data type.
  - **beta1_power** (Tensor) - :math:`beta_1^t` in the updating formula. With float32 data type.
  - **beta2_power** (Tensor) - :math:`beta_2^t` in the updating formula. With float32 data type.
  - **lr** (Tensor) - :math:`l` in the updating formula. With float32 data type.
@@ -3356,8 +3356,8 @@ class FusedSparseFtrl(PrimitiveWithInfer):

  Inputs:
  - **var** (Parameter) - The variable to be updated. The data type must be float32.
- - **accum** (Parameter) - The accum to be updated, must be same type and shape as `var`.
- - **linear** (Parameter) - The linear to be updated, must be same type and shape as `var`.
+ - **accum** (Parameter) - The accumulation to be updated, must be same type and shape as `var`.
+ - **linear** (Parameter) - The linear coefficient to be updated, must be same type and shape as `var`.
  - **grad** (Tensor) - A tensor of the same type as `var`, for the gradient.
  - **indices** (Tensor) - A vector of indices into the first dimension of `var` and `accum`. The shape
  of `indices` must be the same as `grad` in first dimension. The type must be int32.
@@ -3450,11 +3450,12 @@ class FusedSparseProximalAdagrad(PrimitiveWithInfer):
  RuntimeError exception will be thrown when the data type conversion of Parameter is required.

  Args:
- use_locking (bool): If true, updates of the var and accum tensors will be protected. Default: False.
+ use_locking (bool): If true, the var and accumulation tensors will be protected from being updated.
+ Default: False.

  Inputs:
  - **var** (Parameter) - Variable tensor to be updated. The data type must be float32.
- - **accum** (Parameter) - Variable tensor to be updated. Has the same dtype as `var`.
+ - **accum** (Parameter) - Variable tensor to be updated, has the same dtype as `var`.
  - **lr** (Tensor) - The learning rate value. The data type must be float32.
  - **l1** (Tensor) - l1 regularization strength. The data type must be float32.
  - **l2** (Tensor) - l2 regularization strength. The data type must be float32.
@@ -3611,9 +3612,9 @@ class BinaryCrossEntropy(PrimitiveWithInfer):

  .. math::
  \ell(x, y) = \begin{cases}
- L, & \text{if reduction} = \text{'none';}\\
- \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\
- \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.}
+ L, & \text{if reduction} = \text{`none';}\\
+ \operatorname{mean}(L), & \text{if reduction} = \text{`mean';}\\
+ \operatorname{sum}(L), & \text{if reduction} = \text{`sum'.}
  \end{cases}

  Args:
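The per-element loss L and the three documented reductions, written out with NumPy (illustration only; x must lie in (0, 1)):

>>> import numpy as np
>>> x = np.array([0.2, 0.7, 0.1])  # predictions
>>> y = np.array([0., 1., 0.])     # binary targets
>>> L = -(y * np.log(x) + (1 - y) * np.log(1 - x))
>>> L.shape                          # reduction='none' keeps the element-wise losses
(3,)
>>> mean_loss, sum_loss = L.mean(), L.sum()  # reduction='mean' / reduction='sum'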
@@ -3627,8 +3628,8 @@ class BinaryCrossEntropy(PrimitiveWithInfer):
  And it should have the same shape and data type as `input_x`. Default: None.

  Outputs:
- Tensor or Scalar, if `reduction` is 'none', then output is a tensor and same shape as `input_x`.
- Otherwise it is a scalar.
+ Tensor or Scalar, if `reduction` is 'none', then output is a tensor and has the same shape as `input_x`.
+ Otherwise, the output is a scalar.

  Examples:
  >>> import mindspore
@@ -3688,11 +3689,11 @@ class ApplyAdaMax(PrimitiveWithInfer):
  var = var - \frac{l}{1 - \beta_1^t} * \frac{m_{t}}{v_{t} + \epsilon}
  \end{array}

- :math:`t` represents updating step while, :math:`m` represents the 1st moment vector, :math:`m_{t-1}`
+ :math:`t` represents updating step while :math:`m` represents the 1st moment vector, :math:`m_{t-1}`
  is the last moment of :math:`m_{t}`, :math:`v` represents the 2nd moment vector, :math:`v_{t-1}`
  is the last moment of :math:`v_{t}`, :math:`l` represents scaling factor `lr`,
  :math:`g` represents `grad`, :math:`\beta_1, \beta_2` represent `beta1` and `beta2`,
- :math:`beta_1^t` represent `beta1_power`, :math:`var` represents Variable to be updated,
+ :math:`beta_1^t` represents `beta1_power`, :math:`var` represents the variable to be updated,
  :math:`\epsilon` represents `epsilon`.

  Inputs of `var`, `m`, `v` and `grad` comply with the implicit type conversion rules
@@ -3703,10 +3704,10 @@ class ApplyAdaMax(PrimitiveWithInfer):

  Inputs:
  - **var** (Parameter) - Variable to be updated. With float32 or float16 data type.
- - **m** (Parameter) - The 1st moment vector in the updating formula. Has the same shape and type as `var`.
+ - **m** (Parameter) - The 1st moment vector in the updating formula, has the same shape and type as `var`.
  With float32 or float16 data type.
- - **v** (Parameter) - The 2nd moment vector in the updating formula. Mean square gradients,
- has the same shape and type as `var`. With float32 or float16 data type.
+ - **v** (Parameter) - The 2nd moment vector in the updating formula. Mean square gradients
+ with the same shape and type as `var`. With float32 or float16 data type.
  - **beta1_power** (Union[Number, Tensor]) - :math:`beta_1^t` in the updating formula, should be scalar.
  With float32 or float16 data type.
  - **lr** (Union[Number, Tensor]) - Learning rate, :math:`l` in the updating formula, should be scalar.
@@ -3717,7 +3718,7 @@ class ApplyAdaMax(PrimitiveWithInfer):
  should be scalar. With float32 or float16 data type.
  - **epsilon** (Union[Number, Tensor]) - A small value added for numerical stability, should be scalar.
  With float32 or float16 data type.
- - **grad** (Tensor) - A tensor for gradient. Has the same shape and type as `var`.
+ - **grad** (Tensor) - A tensor for gradient, has the same shape and type as `var`.
  With float32 or float16 data type.

  Outputs:
@@ -3831,13 +3832,13 @@ class ApplyAdadelta(PrimitiveWithInfer):

  Inputs:
  - **var** (Parameter) - Weights to be updated. With float32 or float16 data type.
- - **accum** (Parameter) - Accum to be updated, has the same shape and type as `var`.
+ - **accum** (Parameter) - Accumulation to be updated, has the same shape and type as `var`.
  With float32 or float16 data type.
  - **accum_update** (Parameter) - Accum_update to be updated, has the same shape and type as `var`.
  With float32 or float16 data type.
- - **lr** (Union[Number, Tensor]) - Learning rate, must be scalar. With float32 or float16 data type.
- - **rho** (Union[Number, Tensor]) - Decay rate, must be scalar. With float32 or float16 data type.
- - **epsilon** (Union[Number, Tensor]) - A small value added for numerical stability, must be scalar.
+ - **lr** (Union[Number, Tensor]) - Learning rate, should be scalar. With float32 or float16 data type.
+ - **rho** (Union[Number, Tensor]) - Decay rate, should be scalar. With float32 or float16 data type.
+ - **epsilon** (Union[Number, Tensor]) - A small value added for numerical stability, should be scalar.
  With float32 or float16 data type.
  - **grad** (Tensor) - Gradients, has the same shape and type as `var`. With float32 or float16 data type.
@@ -3937,7 +3938,7 @@ class ApplyAdagrad(PrimitiveWithInfer):

  Inputs:
  - **var** (Parameter) - Variable to be updated. With float32 or float16 data type.
- - **accum** (Parameter) - Accum to be updated. The shape and dtype should be the same as `var`.
+ - **accum** (Parameter) - Accumulation to be updated. The shape and dtype should be the same as `var`.
  With float32 or float16 data type.
  - **lr** (Union[Number, Tensor]) - The learning rate value, should be scalar. With float32 or float16 data type.
  - **grad** (Tensor) - A tensor for gradient. The shape and dtype should be the same as `var`.
@@ -4019,7 +4020,7 @@ class ApplyAdagradV2(PrimitiveWithInfer):

  Inputs:
  - **var** (Parameter) - Variable to be updated. With float16 or float32 data type.
- - **accum** (Parameter) - Accum to be updated. The shape and dtype should be the same as `var`.
+ - **accum** (Parameter) - Accumulation to be updated. The shape and dtype should be the same as `var`.
  With float16 or float32 data type.
  - **lr** (Union[Number, Tensor]) - The learning rate value, should be a float number or
  a scalar tensor with float16 or float32 data type.
@@ -4099,11 +4100,12 @@ class SparseApplyAdagrad(PrimitiveWithInfer):
  Args:
  lr (float): Learning rate.
  update_slots (bool): If `True`, `accum` will be updated. Default: True.
- use_locking (bool): If true, updates of the var and accum tensors will be protected. Default: False.
+ use_locking (bool): If true, the var and accumulation tensors will be protected from being updated.
+ Default: False.

  Inputs:
  - **var** (Parameter) - Variable to be updated. The data type must be float16 or float32.
- - **accum** (Parameter) - Accum to be updated. The shape and dtype should be the same as `var`.
+ - **accum** (Parameter) - Accumulation to be updated. The shape and dtype should be the same as `var`.
  - **grad** (Tensor) - Gradient. The shape must be the same as `var`'s shape except first dimension.
  Has the same data type as `var`.
  - **indices** (Tensor) - A vector of indices into the first dimension of `var` and `accum`.
@@ -4184,12 +4186,13 @@ class SparseApplyAdagradV2(PrimitiveWithInfer):
  Args:
  lr (float): Learning rate.
  epsilon (float): A small value added for numerical stability.
- use_locking (bool): If `True`, updating of the var and accum tensors will be protected. Default: False.
+ use_locking (bool): If `True`, the var and accumulation tensors will be protected from being updated.
+ Default: False.
  update_slots (bool): If `True`, the computation logic will be different to `False`. Default: True.

  Inputs:
  - **var** (Parameter) - Variable to be updated. The data type must be float16 or float32.
- - **accum** (Parameter) - Accum to be updated. The shape and dtype should be the same as `var`.
+ - **accum** (Parameter) - Accumulation to be updated. The shape and dtype should be the same as `var`.
  - **grad** (Tensor) - Gradient. The shape must be the same as `var`'s shape except first dimension.
  Has the same data type as `var`.
  - **indices** (Tensor) - A vector of indices into the first dimension of `var` and `accum`.
@ -4271,18 +4274,19 @@ class ApplyProximalAdagrad(PrimitiveWithInfer):

A RuntimeError exception will be thrown when data type conversion of Parameter is required.

Args:
use_locking (bool): If true, updates of the var and accum tensors will be protected. Default: False.
use_locking (bool): If true, the var and accumulation tensors will be protected from being updated.
Default: False.

Inputs:
- **var** (Parameter) - Variable to be updated. The data type should be float16 or float32.
- **accum** (Parameter) - Accum to be updated. Must has the same shape and dtype as `var`.
- **accum** (Parameter) - Accumulation to be updated. Must have the same shape and dtype as `var`.
- **lr** (Union[Number, Tensor]) - The learning rate value, should be scalar. The data type should be
float16 or float32.
- **l1** (Union[Number, Tensor]) - l1 regularization strength, should be scalar. The data type should be
float16 or float32.
- **l2** (Union[Number, Tensor]) - l2 regularization strength, should be scalar. The data type should be
float16 or float32.
- **grad** (Tensor) - Gradient. Must has the same shape and dtype as `var`.
- **grad** (Tensor) - Gradient with the same shape and dtype as `var`.

Outputs:
Tuple of 2 Tensors, the updated parameters.
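A minimal sketch of calling this primitive, assuming `lr`, `l1` and `l2` may be passed as Python scalars per the Union[Number, Tensor] inputs above:

>>> import numpy as np
>>> import mindspore.nn as nn
>>> import mindspore.ops.operations as P
>>> from mindspore import Tensor, Parameter
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_proximal_adagrad = P.ApplyProximalAdagrad()
...         self.var = Parameter(Tensor(np.ones((2, 2), np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.ones((2, 2), np.float32)), name="accum")
...     def construct(self, grad):
...         # lr=0.01, l1=0.0, l2=0.0 passed as scalar inputs
...         return self.apply_proximal_adagrad(self.var, self.accum, 0.01, 0.0, 0.0, grad)
>>> net = Net()
>>> var_out, accum_out = net(Tensor(np.random.rand(2, 2).astype(np.float32)))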
@ -4373,11 +4377,12 @@ class SparseApplyProximalAdagrad(PrimitiveWithInfer):

A RuntimeError exception will be thrown when data type conversion of Parameter is required.

Args:
use_locking (bool): If true, updates of the var and accum tensors will be protected. Default: False.
use_locking (bool): If true, the var and accumulation tensors will be protected from being updated.
Default: False.

Inputs:
- **var** (Parameter) - Variable tensor to be updated. The data type must be float16 or float32.
- **accum** (Parameter) - Variable tensor to be updated. Has the same dtype as `var`.
- **accum** (Parameter) - Variable tensor to be updated, has the same dtype as `var`.
- **lr** (Union[Number, Tensor]) - The learning rate value. It should be a float number or
a scalar tensor with float16 or float32 data type.
- **l1** (Union[Number, Tensor]) - l1 regularization strength, should be a float number or
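A minimal sketch; the inputs not shown in this hunk (`l2`, `grad`, `indices`) are assumed to follow the same pattern as the dense ApplyProximalAdagrad above, with `indices` selecting rows of `var`:

>>> import numpy as np
>>> import mindspore.nn as nn
>>> import mindspore.ops.operations as P
>>> from mindspore import Tensor, Parameter
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.op = P.SparseApplyProximalAdagrad()
...         self.var = Parameter(Tensor(np.ones((3, 3), np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.ones((3, 3), np.float32)), name="accum")
...     def construct(self, grad, indices):
...         # lr=0.01, l1=0.0, l2=0.0 passed as scalar numbers
...         return self.op(self.var, self.accum, 0.01, 0.0, 0.0, grad, indices)
>>> net = Net()
>>> output = net(Tensor(np.random.rand(3, 3).astype(np.float32)),
...              Tensor(np.array([0, 1, 2], np.int32)))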
@ -4460,7 +4465,7 @@ class ApplyAddSign(PrimitiveWithInfer):

var = var - lr_{t} * \text{update}
\end{array}

:math:`t` represents updating step while, :math:`m` represents the 1st moment vector, :math:`m_{t-1}`
:math:`t` represents updating step while :math:`m` represents the 1st moment vector, :math:`m_{t-1}`
is the last moment of :math:`m_{t}`, :math:`lr` represents scaling factor `lr`, :math:`g` represents `grad`.

Inputs of `var`, `accum` and `grad` comply with the implicit type conversion rules
@ -4471,7 +4476,7 @@ class ApplyAddSign(PrimitiveWithInfer):

Inputs:
- **var** (Parameter) - Variable tensor to be updated. With float32 or float16 data type.
- **m** (Parameter) - Variable tensor to be updated. Has the same dtype as `var`.
- **m** (Parameter) - Variable tensor to be updated, has the same dtype as `var`.
- **lr** (Union[Number, Tensor]) - The learning rate value, should be a scalar.
With float32 or float16 data type.
- **alpha** (Union[Number, Tensor]) - Should be a scalar. With float32 or float16 data type.
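A minimal sketch; the trailing scalar inputs `sign_decay` and `beta` are not shown in this hunk and are assumed from the full operator interface:

>>> import numpy as np
>>> import mindspore.nn as nn
>>> import mindspore.ops.operations as P
>>> from mindspore import Tensor, Parameter
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_add_sign = P.ApplyAddSign()
...         self.var = Parameter(Tensor(np.ones((2, 2), np.float32)), name="var")
...         self.m = Parameter(Tensor(np.ones((2, 2), np.float32)), name="m")
...     def construct(self, grad):
...         # scalar inputs: lr, alpha, then sign_decay and beta (both assumed)
...         return self.apply_add_sign(self.var, self.m, 0.001, 1.0, 0.99, 0.9, grad)
>>> net = Net()
>>> output = net(Tensor(np.random.rand(2, 2).astype(np.float32)))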
@ -4567,7 +4572,7 @@ class ApplyPowerSign(PrimitiveWithInfer):

var = var - lr_{t} * \text{update}
\end{array}

:math:`t` represents updating step while, :math:`m` represents the 1st moment vector, :math:`m_{t-1}`
:math:`t` represents updating step while :math:`m` represents the 1st moment vector, :math:`m_{t-1}`
is the last moment of :math:`m_{t}`, :math:`lr` represents scaling factor `lr`, :math:`g` represents `grad`.

All of the inputs comply with the implicit type conversion rules to make the data types consistent.
@ -4580,7 +4585,7 @@ class ApplyPowerSign(PrimitiveWithInfer):

Inputs:
- **var** (Parameter) - Variable tensor to be updated. With float32 or float16 data type.
If data type of `var` is float16, all inputs must have the same data type as `var`.
- **m** (Parameter) - Variable tensor to be updated. Has the same dtype as `var`.
- **m** (Parameter) - Variable tensor to be updated, has the same dtype as `var`.
- **lr** (Union[Number, Tensor]) - The learning rate value, should be a scalar.
With float32 or float16 data type.
- **logbase** (Union[Number, Tensor]) - Should be a scalar. With float32 or float16 data type.
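A minimal sketch along the same lines as ApplyAddSign; the `sign_decay` and `beta` scalar inputs and the example values for `logbase` are assumptions, not taken from this hunk:

>>> import numpy as np
>>> import mindspore.nn as nn
>>> import mindspore.ops.operations as P
>>> from mindspore import Tensor, Parameter
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_power_sign = P.ApplyPowerSign()
...         self.var = Parameter(Tensor(np.ones((2, 2), np.float32)), name="var")
...         self.m = Parameter(Tensor(np.ones((2, 2), np.float32)), name="m")
...     def construct(self, grad):
...         # scalar inputs: lr, logbase, then sign_decay and beta (both assumed)
...         return self.apply_power_sign(self.var, self.m, 0.001, 2.0, 0.99, 0.9, grad)
>>> net = Net()
>>> output = net(Tensor(np.random.rand(2, 2).astype(np.float32)))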
@ -4681,10 +4686,10 @@ class ApplyGradientDescent(PrimitiveWithInfer):

Inputs:
- **var** (Parameter) - Variable tensor to be updated. With float32 or float16 data type.
- **alpha** (Union[Number, Tensor]) - Scaling factor, should be a scalar. With float32 or float16 data type.
- **delta** (Tensor) - A tensor for the change. Has the same type as `var`.
- **delta** (Tensor) - A tensor for the change, has the same type as `var`.

Outputs:
Tensor, representing the updated var.
Tensor, represents the updated `var`.

Examples:
>>> import numpy as np
@ -4752,10 +4757,10 @@ class ApplyProximalGradientDescent(PrimitiveWithInfer):

With float32 or float16 data type.
- **l2** (Union[Number, Tensor]) - l2 regularization strength, should be scalar.
With float32 or float16 data type.
- **delta** (Tensor) - A tensor for the change. Has the same type as `var`.
- **delta** (Tensor) - A tensor for the change, has the same type as `var`.

Outputs:
Tensor, representing the updated var.
Tensor, represents the updated `var`.

Examples:
>>> import numpy as np
@ -4834,7 +4839,7 @@ class LARSUpdate(PrimitiveWithInfer):

- **learning_rate** (Union[Number, Tensor]) - Learning rate. It should be a scalar tensor or number.

Outputs:
Tensor, representing the new gradient.
Tensor, represents the new gradient.

Examples:
>>> from mindspore import Tensor
@ -4901,8 +4906,8 @@ class ApplyFtrl(PrimitiveWithInfer):

Inputs:
- **var** (Parameter) - The variable to be updated. The data type should be float16 or float32.
- **accum** (Parameter) - The accum to be updated, must be same type and shape as `var`.
- **linear** (Parameter) - The linear to be updated, must be same type and shape as `var`.
- **accum** (Parameter) - The accumulation to be updated, must be the same type and shape as `var`.
- **linear** (Parameter) - The linear coefficient to be updated, must be the same type and shape as `var`.
- **grad** (Tensor) - Gradient. The data type should be float16 or float32.
- **lr** (Union[Number, Tensor]) - The learning rate value, must be positive. Default: 0.001.
It should be a float number or a scalar tensor with float16 or float32 data type.
@ -4915,7 +4920,7 @@ class ApplyFtrl(PrimitiveWithInfer):

Default: -0.5. It should be a float number or a scalar tensor with float16 or float32 data type.

Outputs:
Tensor, representing the updated var.
Tensor, represents the updated `var`.

Examples:
>>> import mindspore
@ -4997,8 +5002,8 @@ class SparseApplyFtrl(PrimitiveWithInfer):

Inputs:
- **var** (Parameter) - The variable to be updated. The data type must be float16 or float32.
- **accum** (Parameter) - The accum to be updated, must be same type and shape as `var`.
- **linear** (Parameter) - The linear to be updated, must be same type and shape as `var`.
- **accum** (Parameter) - The accumulation to be updated, must be the same type and shape as `var`.
- **linear** (Parameter) - The linear coefficient to be updated, must be the same type and shape as `var`.
- **grad** (Tensor) - A tensor of the same type as `var`, for the gradient.
- **indices** (Tensor) - A vector of indices into the first dimension of `var` and `accum`.
The shape of `indices` must be the same as `grad` in the first dimension. The type must be int32.
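A minimal sketch; the construction arguments (`lr`, `l1`, `l2`, `lr_power`) are not shown in this hunk and are assumed to match the FTRL arguments documented for the related primitives above:

>>> import numpy as np
>>> import mindspore.nn as nn
>>> import mindspore.ops.operations as P
>>> from mindspore import Tensor, Parameter
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.sparse_apply_ftrl = P.SparseApplyFtrl(lr=0.01, l1=0.0, l2=0.0, lr_power=-0.5)
...         self.var = Parameter(Tensor(np.random.rand(3, 3).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.random.rand(3, 3).astype(np.float32)), name="accum")
...         self.linear = Parameter(Tensor(np.random.rand(3, 3).astype(np.float32)), name="linear")
...     def construct(self, grad, indices):
...         return self.sparse_apply_ftrl(self.var, self.accum, self.linear, grad, indices)
>>> net = Net()
>>> output = net(Tensor(np.random.rand(3, 3).astype(np.float32)),
...              Tensor(np.array([0, 1, 2], np.int32)))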
@ -5086,12 +5091,13 @@ class SparseApplyFtrlV2(PrimitiveWithInfer):

l2_shrinkage (float): L2 shrinkage regularization.
lr_power (float): Learning rate power controls how the learning rate decreases during training,
must be less than or equal to zero. A fixed learning rate is used if `lr_power` is zero.
use_locking (bool): If `True`, updating of the var and accum tensors will be protected. Default: False.
use_locking (bool): If `True`, the var and accumulation tensors will be protected from being updated.
Default: False.

Inputs:
- **var** (Parameter) - The variable to be updated. The data type must be float16 or float32.
- **accum** (Parameter) - The accum to be updated, must be same type and shape as `var`.
- **linear** (Parameter) - The linear to be updated, must be same type and shape as `var`.
- **accum** (Parameter) - The accumulation to be updated, must be the same type and shape as `var`.
- **linear** (Parameter) - The linear coefficient to be updated, must be the same type and shape as `var`.
- **grad** (Tensor) - A tensor of the same type as `var`, for the gradient.
- **indices** (Tensor) - A vector of indices into the first dimension of `var` and `accum`.
The shape of `indices` must be the same as `grad` in the first dimension. The type must be int32.
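The V2 variant adds the `l2_shrinkage` argument described above; a minimal sketch, with the remaining construction arguments assumed as for SparseApplyFtrl:

>>> import numpy as np
>>> import mindspore.nn as nn
>>> import mindspore.ops.operations as P
>>> from mindspore import Tensor, Parameter
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.sparse_apply_ftrl_v2 = P.SparseApplyFtrlV2(lr=0.01, l1=0.0, l2=0.0,
...                                                         l2_shrinkage=0.0, lr_power=-0.5)
...         self.var = Parameter(Tensor(np.random.rand(3, 3).astype(np.float32)), name="var")
...         self.accum = Parameter(Tensor(np.random.rand(3, 3).astype(np.float32)), name="accum")
...         self.linear = Parameter(Tensor(np.random.rand(3, 3).astype(np.float32)), name="linear")
...     def construct(self, grad, indices):
...         return self.sparse_apply_ftrl_v2(self.var, self.accum, self.linear, grad, indices)
>>> net = Net()
>>> output = net(Tensor(np.random.rand(3, 3).astype(np.float32)),
...              Tensor(np.array([0, 1, 2], np.int32)))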
@ -5301,14 +5307,14 @@ class DropoutGrad(PrimitiveWithInfer):

class CTCLoss(PrimitiveWithInfer):
"""
Calculates the CTC(Connectionist Temporal Classification) loss. Also calculates the gradient.
Calculates the CTC (Connectionist Temporal Classification) loss and the gradient.

Args:
preprocess_collapse_repeated (bool): If true, repeated labels are collapsed prior to the CTC calculation.
preprocess_collapse_repeated (bool): If true, repeated labels will be collapsed prior to the CTC calculation.
Default: False.
ctc_merge_repeated (bool): If false, during CTC calculation, repeated non-blank labels will not be merged
and are interpreted as individual labels. This is a simplfied version of CTC.
Default: True.
and these labels will be interpreted as individual ones. This is a simplified
version of CTC. Default: True.
ignore_longer_outputs_than_inputs (bool): If True, sequences with longer outputs than inputs will be ignored.
Default: False.
@ -5319,15 +5325,15 @@ class CTCLoss(PrimitiveWithInfer):

Data type must be float32 or float64.
- **labels_indices** (Tensor) - The indices of labels. `labels_indices[i, :] == [b, t]` means `labels_values[i]`
stores the id for `(batch b, time t)`. The type must be int64 and rank must be 2.
- **labels_values** (Tensor) - A `1-D` input tensor. The values associated with the given batch and time. The
type must be int32. `labels_values[i]` must in the range of `[0, num_classes)`.
- **labels_values** (Tensor) - A `1-D` input tensor. The values are associated with the given batch and time.
The type must be int32. `labels_values[i]` must be in the range of `[0, num_classes)`.
- **sequence_length** (Tensor) - A tensor containing sequence lengths with the shape of :math:`(batch_size)`.
The type must be int32. Each value in the tensor should not greater than `max_time`.
The type must be int32. Each value in the tensor should not be greater than `max_time`.

Outputs:
- **loss** (Tensor) - A tensor containing log-probabilities, the shape is :math:`(batch_size)`. Has the same
type with `inputs`.
- **gradient** (Tensor) - The gradient of `loss`. Has the same type and shape with `inputs`.
- **loss** (Tensor) - A tensor containing log-probabilities, the shape is :math:`(batch_size)`. The tensor has
the same type as `inputs`.
- **gradient** (Tensor) - The gradient of `loss`, has the same type and shape as `inputs`.

Examples:
>>> inputs = Tensor(np.random.random((2, 2, 3)), mindspore.float32)
@ -5396,7 +5402,7 @@ class CTCGreedyDecoder(PrimitiveWithInfer):

- **decoded_shape** (Tensor) - The value of tensor is :math:`[batch_size, max_decoded_length]`.
Data type is int64.
- **log_probability** (Tensor) - A tensor with shape of :math:`(batch_size, 1)`,
containing sequence log-probability. Has the same type as `inputs`.
containing sequence log-probability, has the same type as `inputs`.

Examples:
>>> class CTCGreedyDecoderNet(nn.Cell):
@ -5441,7 +5447,7 @@ class CTCGreedyDecoder(PrimitiveWithInfer):

class BasicLSTMCell(PrimitiveWithInfer):
r"""
Performs the long short term memory(LSTM) on the input.
Applies the long short-term memory (LSTM) to the input.

.. math::
\begin{array}{ll} \\
@ -5464,10 +5470,10 @@ class BasicLSTMCell(PrimitiveWithInfer):

Args:
keep_prob (float): If not 1.0, append a `Dropout` layer on the outputs of each
LSTM layer except the last layer. Default: 1.0. The range of dropout is [0.0, 1.0].
forget_bias (float): Add forget bias to forget gate biases in order to decrease former scale. Default to 1.0.
state_is_tuple (bool): If true, state is tensor tuple, containing h and c; If false, one tensor,
need split first. Default to True.
activation (str): Activation. Default to "tanh". Only "tanh" is currently supported.
forget_bias (float): Add forget bias to forget gate biases in order to decrease former scale. Default: 1.0.
state_is_tuple (bool): If true, the state is a tuple of 2 tensors, containing h and c; If false, the state is
a tensor and it needs to be split first. Default: True.
activation (str): Activation. Default: "tanh". Only "tanh" is currently supported.

Inputs:
- **x** (Tensor) - Current words. Tensor of shape (`batch_size`, `input_size`).
@ -5494,7 +5500,7 @@ class BasicLSTMCell(PrimitiveWithInfer):

- **ot** (Tensor) - Forward :math:`o_t` cache at moment `t`. Tensor of shape (`batch_size`, `hidden_size`).
Has the same type as input `c`.
- **tanhct** (Tensor) - Forward :math:`tanh c_t` cache at moment `t`.
Tensor of shape (`batch_size`, `hidden_size`). Has the same type with input `c`.
Tensor of shape (`batch_size`, `hidden_size`), has the same type as input `c`.

Examples:
>>> x = Tensor(np.random.rand(1, 32).astype(np.float16))
@ -5634,7 +5640,7 @@ class LRN(PrimitiveWithInfer):

class CTCLossV2(PrimitiveWithInfer):
r"""
Calculates the CTC(Connectionist Temporal Classification) loss. Also calculates the gradient.
Calculates the CTC (Connectionist Temporal Classification) loss and the gradient.
Note:
- Cudnn uses label value 0 for the `blank`.
@ -5653,9 +5659,9 @@ class CTCLossV2(PrimitiveWithInfer):

The type must be int32. Each value in the tensor should not be greater than `max_time`.

Outputs:
- **loss** (Tensor) - A tensor containing log-probabilities, the shape is :math:`(batch_size)`. Has the same
- **loss** (Tensor) - A tensor containing log-probabilities, the shape is :math:`(batch_size)`, has the same
type as `inputs`.
- **gradient** (Tensor) - The gradient of `loss`. Has the same type and shape with `inputs`.
- **gradient** (Tensor) - The gradient of `loss`, has the same type and shape as `inputs`.

Examples:
>>> inputs = Tensor(np.random.random((2, 2, 3)), mindspore.float32)
@ -34,7 +34,7 @@ class Assign(Primitive):

Inputs:
- **variable** (Parameter) - The `Parameter`.
- **value** (Tensor) - The value to assign.
- **value** (Tensor) - The value to be assigned.

Outputs:
Tensor, has the same type as original `variable`.
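A minimal sketch of assigning a new value to a `Parameter` with this primitive:

>>> import mindspore
>>> import mindspore.nn as nn
>>> import mindspore.ops.operations as P
>>> from mindspore import Tensor, Parameter
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.assign = P.Assign()
...         self.y = Parameter(Tensor([1.0], mindspore.float32), name="y")
...     def construct(self, x):
...         return self.assign(self.y, x)
>>> net = Net()
>>> output = net(Tensor([2.0], mindspore.float32))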
@ -77,7 +77,7 @@ class BoundingBoxEncode(PrimitiveWithInfer):

Args:
means (tuple): Means for encoding bounding boxes calculation. Default: (0.0, 0.0, 0.0, 0.0).
stds (tuple): Stds for encoding bounding boxes calculation. Default: (1.0, 1.0, 1.0, 1.0).
stds (tuple): The standard deviations of deltas calculation. Default: (1.0, 1.0, 1.0, 1.0).

Inputs:
- **anchor_box** (Tensor) - Anchor boxes. The shape of anchor_box must be (n, 4).
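A minimal sketch; the second input (`groundtruth_box`) is not shown in this hunk and is assumed to take the same (n, 4) shape as `anchor_box`:

>>> import numpy as np
>>> import mindspore.ops.operations as P
>>> from mindspore import Tensor
>>> anchor_box = Tensor(np.array([[4, 1, 2, 1], [2, 2, 2, 3]], np.float32))
>>> groundtruth_box = Tensor(np.array([[3, 1, 2, 2], [1, 2, 1, 4]], np.float32))
>>> boundingbox_encode = P.BoundingBoxEncode(means=(0.0, 0.0, 0.0, 0.0), stds=(1.0, 1.0, 1.0, 1.0))
>>> delta_box = boundingbox_encode(anchor_box, groundtruth_box)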
@ -133,8 +133,8 @@ class BoundingBoxDecode(PrimitiveWithInfer):

wh_ratio_clip (float): The limit of width and height ratio for decoding box calculation. Default: 0.016.

Inputs:
- **anchor_box** (Tensor) - Anchor boxes. The shape of anchor_box must be (n, 4).
- **deltas** (Tensor) - Delta of boxes. Which has the same shape with anchor_box.
- **anchor_box** (Tensor) - Anchor boxes. The shape of `anchor_box` must be (n, 4).
- **deltas** (Tensor) - Delta of boxes, which has the same shape as `anchor_box`.

Outputs:
Tensor, decoded boxes.
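A minimal sketch; besides the `wh_ratio_clip` shown above, the other construction arguments (`max_shape`, `means`, `stds`) are assumed from the full operator interface, with illustrative values:

>>> import numpy as np
>>> import mindspore.ops.operations as P
>>> from mindspore import Tensor
>>> anchor_box = Tensor(np.array([[4, 1, 2, 1], [2, 2, 2, 3]], np.float32))
>>> deltas = Tensor(np.array([[3, 1, 2, 2], [1, 2, 1, 4]], np.float32))
>>> boundingbox_decode = P.BoundingBoxDecode(max_shape=(768, 1280), means=(0.0, 0.0, 0.0, 0.0),
...                                          stds=(1.0, 1.0, 1.0, 1.0), wh_ratio_clip=0.016)
>>> output = boundingbox_decode(anchor_box, deltas)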
@ -183,11 +183,11 @@ class CheckValid(PrimitiveWithInfer):

"""
Check bounding box.

Check whether the bounding box cross data and data border.
Check whether the bounding boxes are valid with respect to the data border.

Inputs:
- **bboxes** (Tensor) - Bounding boxes tensor with shape (N, 4). Data type should be float16 or float32.
- **img_metas** (Tensor) - Raw image size information, format (height, width, ratio).
- **img_metas** (Tensor) - Raw image size information with the format of (height, width, ratio).
Data type should be float16 or float32.

Outputs:
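A minimal sketch using the two inputs documented above; the output is a per-box validity mask:

>>> import numpy as np
>>> import mindspore
>>> import mindspore.ops.operations as P
>>> from mindspore import Tensor
>>> bboxes = Tensor(np.linspace(0, 6, 12).reshape(3, 4), mindspore.float16)
>>> img_metas = Tensor(np.array([2, 1, 3]), mindspore.float16)  # (height, width, ratio)
>>> check_valid = P.CheckValid()
>>> output = check_valid(bboxes, img_metas)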
@ -372,17 +372,17 @@ class Depend(Primitive):

class CheckBprop(PrimitiveWithInfer):
"""
Checks whether data type and shape of corresponding element from tuple x and y are the same.
Checks whether the data type and the shape of corresponding elements from tuples x and y are the same.

Raises:
TypeError: If not the same.
TypeError: If tuples x and y are not the same.

Inputs:
- **input_x** (tuple[Tensor]) - The input_x contains the outputs of bprop to be checked.
- **input_y** (tuple[Tensor]) - The input_y contains the inputs of bprop to check against.
- **input_x** (tuple[Tensor]) - The `input_x` contains the outputs of bprop to be checked.
- **input_y** (tuple[Tensor]) - The `input_y` contains the inputs of bprop to check against.

Outputs:
(tuple[Tensor]), the input_x,
(tuple[Tensor]), the `input_x`,
if the data types and shapes of corresponding elements from `input_x` and `input_y` are the same.

Examples: