commit 6eb72535ec
|
@ -5,7 +5,7 @@ mindspore.ops.Custom
|
||||||
|
|
||||||
The `Custom` operator is the unified interface for MindSpore custom operators. Users can use this interface to define operators that are not yet included in MindSpore's built-in operator library.
|
The `Custom` operator is the unified interface for MindSpore custom operators. Users can use this interface to define operators that are not yet included in MindSpore's built-in operator library.
|
||||||
Depending on the input function, you can create multiple custom operators and use them in a neural network.
|
Depending on the input function, you can create multiple custom operators and use them in a neural network.
|
||||||
For a detailed description of custom operators, including how to write their parameters correctly, see the programming guide at https://www.mindspore.cn/tutorials/experts/zh-CN/master/operation/op_custom.html .
|
For a detailed description of custom operators, including how to write their parameters correctly, see the `tutorial <https://www.mindspore.cn/tutorials/experts/zh-CN/master/operation/op_custom.html>`_ .
|
||||||
|
|
||||||
.. warning::
|
.. warning::
|
||||||
This is an experimental API that may be changed or removed in the future.
|
This is an experimental API that may be changed or removed in the future.
|
||||||
|
|
|
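As an illustrative sketch (not taken from the commit) of how the `Custom` interface above can be used, assuming the `pyfunc` function type and the `out_shape`/`out_dtype` arguments behave as in current MindSpore releases::

    import numpy as np
    import mindspore as ms
    from mindspore import ops

    # A plain Python function wrapped as a custom operator ("pyfunc" mode is assumed here).
    def add_one(x):
        return x + 1

    # The output shape and dtype are declared explicitly, since they cannot be inferred from a Python function.
    custom_add_one = ops.Custom(add_one, out_shape=(3,), out_dtype=ms.float32, func_type="pyfunc")
    x = ms.Tensor(np.array([1.0, 2.0, 3.0]), ms.float32)
    print(custom_add_one(x))   # expected: [2. 3. 4.]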
@ -17,7 +17,7 @@ mindspore.ops.scatter_div
|
||||||
- **indices** (Tensor) - Specifies the indices for the division operation. The data type must be mindspore.int32 or mindspore.int64.
|
- **indices** (Tensor) - Specifies the indices for the division operation. The data type must be mindspore.int32 or mindspore.int64.
|
||||||
- **updates** (Tensor) - The Tensor by which `input_x` is divided. Its data type is the same as `input_x`, and its shape is `indices.shape + input_x.shape[1:]` .
|
- **updates** (Tensor) - The Tensor by which `input_x` is divided. Its data type is the same as `input_x`, and its shape is `indices.shape + input_x.shape[1:]` .
|
||||||
|
|
||||||
Outputs:
|
Returns:
|
||||||
Tensor, the updated `input_x`, with the same shape and type as `input_x`.
|
Tensor, the updated `input_x`, with the same shape and type as `input_x`.
|
||||||
|
|
||||||
Raises:
|
Raises:
|
||||||
|
|
|
@ -18,7 +18,7 @@ mindspore.ops.scatter_max
|
||||||
- **indices** (Tensor) - Specifies the indices for the maximum operation. The data type must be mindspore.int32 or mindspore.int64.
|
- **indices** (Tensor) - Specifies the indices for the maximum operation. The data type must be mindspore.int32 or mindspore.int64.
|
||||||
- **updates** (Tensor) - The Tensor whose element-wise maximum with `input_x` is taken. Its data type is the same as `input_x`, and its shape is `indices.shape + input_x.shape[1:]` .
|
- **updates** (Tensor) - The Tensor whose element-wise maximum with `input_x` is taken. Its data type is the same as `input_x`, and its shape is `indices.shape + input_x.shape[1:]` .
|
||||||
|
|
||||||
Outputs:
|
Returns:
|
||||||
Tensor, the updated `input_x`, with the same shape and type as `input_x`.
|
Tensor, the updated `input_x`, with the same shape and type as `input_x`.
|
||||||
|
|
||||||
Raises:
|
Raises:
|
||||||
|
|
|
@ -18,7 +18,7 @@ mindspore.ops.scatter_min
|
||||||
- **indices** (Tensor) - Specifies the indices for the minimum operation. The data type must be mindspore.int32 or mindspore.int64.
|
- **indices** (Tensor) - Specifies the indices for the minimum operation. The data type must be mindspore.int32 or mindspore.int64.
|
||||||
- **updates** (Tensor) - The Tensor whose element-wise minimum with `input_x` is taken. Its data type is the same as `input_x`, and its shape is `indices.shape + input_x.shape[1:]` .
|
- **updates** (Tensor) - The Tensor whose element-wise minimum with `input_x` is taken. Its data type is the same as `input_x`, and its shape is `indices.shape + input_x.shape[1:]` .
|
||||||
|
|
||||||
Outputs:
|
Returns:
|
||||||
Tensor, the updated `input_x`, with the same shape and type as `input_x`.
|
Tensor, the updated `input_x`, with the same shape and type as `input_x`.
|
||||||
|
|
||||||
Raises:
|
Raises:
|
||||||
|
|
|
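A minimal usage sketch for the scatter operators documented above, assuming the functional `mindspore.ops.scatter_div` form that updates a `Parameter` in place::

    import numpy as np
    import mindspore as ms
    from mindspore import ops, Parameter

    # input_x is updated in place, so it is declared as a Parameter.
    input_x = Parameter(ms.Tensor(np.array([[6.0, 6.0, 6.0], [2.0, 2.0, 2.0]]), ms.float32), name="x")
    indices = ms.Tensor(np.array([0, 1]), ms.int32)
    # updates.shape == indices.shape + input_x.shape[1:] == (2, 3)
    updates = ms.Tensor(np.array([[2.0, 2.0, 2.0], [2.0, 2.0, 2.0]]), ms.float32)
    out = ops.scatter_div(input_x, indices, updates)
    print(out)   # expected: [[3. 3. 3.] [1. 1. 1.]]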
@ -65,7 +65,7 @@ def repeat_elements(x, rep, axis=0):
|
||||||
rep (int): The number of times to repeat, must be positive.
|
rep (int): The number of times to repeat, must be positive.
|
||||||
axis (int): The axis along which to repeat, default 0.
|
axis (int): The axis along which to repeat, default 0.
|
||||||
|
|
||||||
Outputs:
|
Returns:
|
||||||
One tensor with values repeated along the specified axis. If x has shape
|
One tensor with values repeated along the specified axis. If x has shape
|
||||||
(s1, s2, ..., sn) and axis is i, the output will have shape (s1, s2, ...,
|
(s1, s2, ..., sn) and axis is i, the output will have shape (s1, s2, ...,
|
||||||
si * rep, ..., sn). The output type will be the same as the type of `x`.
|
si * rep, ..., sn). The output type will be the same as the type of `x`.
|
||||||
|
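A short illustrative example of `repeat_elements`, assuming the functional form shown in the signature above::

    import numpy as np
    import mindspore as ms
    from mindspore import ops

    x = ms.Tensor(np.array([[0, 1, 2], [3, 4, 5]]), ms.int32)   # shape (2, 3)
    out = ops.repeat_elements(x, rep=2, axis=1)                 # shape (2, 6)
    print(out)   # expected: [[0 0 1 1 2 2] [3 3 4 4 5 5]]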
@ -142,7 +142,7 @@ def sequence_mask(lengths, maxlen=None):
|
||||||
maxlen (int): size of the last dimension of returned tensor. Must be positive and same
|
maxlen (int): size of the last dimension of returned tensor. Must be positive and same
|
||||||
type as elements in `lengths`. Default is None.
|
type as elements in `lengths`. Default is None.
|
||||||
|
|
||||||
Outputs:
|
Returns:
|
||||||
One mask tensor of shape `lengths.shape + (maxlen,)` .
|
One mask tensor of shape `lengths.shape + (maxlen,)` .
|
||||||
|
|
||||||
Raises:
|
Raises:
|
||||||
|
|
|
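An illustrative sketch of `sequence_mask`; the printed mask is the expected result under the documented semantics::

    import numpy as np
    import mindspore as ms
    from mindspore import ops

    lengths = ms.Tensor(np.array([1, 3]), ms.int32)
    mask = ops.sequence_mask(lengths, maxlen=4)   # shape lengths.shape + (maxlen,) == (2, 4), dtype bool
    print(mask)
    # expected:
    # [[ True False False False]
    #  [ True  True  True False]]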
@ -258,7 +258,7 @@ def tensor_dot(x1, x2, axes):
|
||||||
automatically picks up last N dims from `a` input shape and first N dims from `b` input shape in order
|
automatically picks up last N dims from `a` input shape and first N dims from `b` input shape in order
|
||||||
as axes for each respectively.
|
as axes for each respectively.
|
||||||
|
|
||||||
Outputs:
|
Returns:
|
||||||
Tensor, the shape of the output tensor is :math:`(N + M)`, where :math:`N` and :math:`M` are the free axes of the two inputs that are not
|
Tensor, the shape of the output tensor is :math:`(N + M)`, where :math:`N` and :math:`M` are the free axes of the two inputs that are not
|
||||||
contracted.
|
contracted.
|
||||||
|
|
||||||
|
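A sketch of `tensor_dot` with an integer `axes` argument, which is assumed to contract the last axis of the first input with the first axis of the second::

    import numpy as np
    import mindspore as ms
    from mindspore import ops

    x1 = ms.Tensor(np.ones((2, 3)), ms.float32)
    x2 = ms.Tensor(np.ones((3, 4)), ms.float32)
    out = ops.tensor_dot(x1, x2, 1)   # contract the size-3 axes, leaving the free axes
    print(out.shape)                  # expected: (2, 4)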
@ -346,7 +346,7 @@ def dot(x1, x2):
|
||||||
x2 (Tensor): Second tensor in Dot op with datatype float16 or float32,
|
x2 (Tensor): Second tensor in Dot op with datatype float16 or float32,
|
||||||
The rank must be greater than or equal to 2.
|
The rank must be greater than or equal to 2.
|
||||||
|
|
||||||
Outputs:
|
Returns:
|
||||||
Tensor, dot product of x1 and x2.
|
Tensor, dot product of x1 and x2.
|
||||||
|
|
||||||
Raises:
|
Raises:
|
||||||
|
@ -559,7 +559,7 @@ def batch_dot(x1, x2, axes=None):
|
||||||
`a` input shape and last N dimensions from `b` input shape in order as axes for each respectively.
|
`a` input shape and last N dimensions from `b` input shape in order as axes for each respectively.
|
||||||
Default: None.
|
Default: None.
|
||||||
|
|
||||||
Outputs:
|
Returns:
|
||||||
Tensor, batch dot product of `x1` and `x2`. For example, the shape of the output
|
Tensor, batch dot product of `x1` and `x2`. For example, the shape of the output
|
||||||
for `x1` with shape (batch, d1, axes, d2) and `x2` with shape (batch, d3, axes, d4) is (batch, d1, d2, d3, d4),
|
for `x1` with shape (batch, d1, axes, d2) and `x2` with shape (batch, d3, axes, d4) is (batch, d1, d2, d3, d4),
|
||||||
where d1 and d2 can be any number.
|
where d1 and d2 can be any number.
|
||||||
|
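An illustrative sketch contrasting `dot` and `batch_dot`; the default `axes=None` behaviour of `batch_dot` is assumed to contract the last axis of `x1` with the second-to-last axis of `x2`::

    import numpy as np
    import mindspore as ms
    from mindspore import ops

    # dot: both inputs must have rank >= 2 and dtype float16/float32.
    a = ms.Tensor(np.ones((2, 3)), ms.float32)
    b = ms.Tensor(np.ones((1, 3, 2)), ms.float32)
    print(ops.dot(a, b).shape)          # expected: (2, 1, 2)

    # batch_dot: the leading dimension is treated as a batch dimension.
    x1 = ms.Tensor(np.ones((2, 2, 3)), ms.float32)
    x2 = ms.Tensor(np.ones((2, 3, 4)), ms.float32)
    print(ops.batch_dot(x1, x2).shape)  # expected: (2, 2, 4) with the default axes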
@ -779,7 +779,7 @@ def resize_nearest_neighbor(input_x, size, align_corners=False):
|
||||||
align_corners (bool): Whether the centers of the 4 corner pixels of the input
|
align_corners (bool): Whether the centers of the 4 corner pixels of the input
|
||||||
and output tensors are aligned. Default: False.
|
and output tensors are aligned. Default: False.
|
||||||
|
|
||||||
Outputs:
|
Returns:
|
||||||
Tensor, the shape of the output tensor is :math:`(N, C, NEW\_H, NEW\_W)`.
|
Tensor, the shape of the output tensor is :math:`(N, C, NEW\_H, NEW\_W)`.
|
||||||
The data type is the same as the `input_x`.
|
The data type is the same as the `input_x`.
|
||||||
|
|
||||||
|
|
|
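A minimal sketch of `resize_nearest_neighbor` on an NCHW input, assuming the functional signature shown in the hunk above is exposed as `mindspore.ops.resize_nearest_neighbor`::

    import numpy as np
    import mindspore as ms
    from mindspore import ops

    x = ms.Tensor(np.arange(4, dtype=np.float32).reshape(1, 1, 2, 2))   # (N, C, H, W)
    out = ops.resize_nearest_neighbor(x, (4, 4))
    print(out.shape)   # expected: (1, 1, 4, 4)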
@ -1237,7 +1237,7 @@ def squeeze(input_x, axis=()):
|
||||||
all the dimensions of size 1 in the given axis parameter. If specified, it must be int32 or int64.
|
all the dimensions of size 1 in the given axis parameter. If specified, it must be int32 or int64.
|
||||||
Default: (), an empty tuple.
|
Default: (), an empty tuple.
|
||||||
|
|
||||||
Outputs:
|
Returns:
|
||||||
Tensor, the shape of tensor is :math:`(x_1, x_2, ..., x_S)`.
|
Tensor, the shape of tensor is :math:`(x_1, x_2, ..., x_S)`.
|
||||||
|
|
||||||
Raises:
|
Raises:
|
||||||
|
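A short sketch of `squeeze` with and without an explicit axis::

    import numpy as np
    import mindspore as ms
    from mindspore import ops

    x = ms.Tensor(np.ones((1, 2, 1, 3)), ms.float32)
    print(ops.squeeze(x).shape)      # all size-1 axes removed -> expected: (2, 3)
    print(ops.squeeze(x, 0).shape)   # only axis 0 removed     -> expected: (2, 1, 3)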
@ -1419,7 +1419,7 @@ def scatter_max(input_x, indices, updates):
|
||||||
updates (Tensor): The tensor doing the max operation with `input_x`,
|
updates (Tensor): The tensor doing the max operation with `input_x`,
|
||||||
the data type is the same as `input_x`, the shape is `indices.shape + input_x.shape[1:]`.
|
the data type is the same as `input_x`, the shape is `indices.shape + input_x.shape[1:]`.
|
||||||
|
|
||||||
Outputs:
|
Returns:
|
||||||
Tensor, the updated `input_x`, with the same type and shape as `input_x`.
|
Tensor, the updated `input_x`, with the same type and shape as `input_x`.
|
||||||
|
|
||||||
Raises:
|
Raises:
|
||||||
|
@ -1511,7 +1511,7 @@ def scatter_min(input_x, indices, updates):
|
||||||
updates (Tensor): The tensor doing the min operation with `input_x`,
|
updates (Tensor): The tensor doing the min operation with `input_x`,
|
||||||
the data type is the same as `input_x`, the shape is `indices.shape + input_x.shape[1:]`.
|
the data type is the same as `input_x`, the shape is `indices.shape + input_x.shape[1:]`.
|
||||||
|
|
||||||
Outputs:
|
Returns:
|
||||||
Tensor, the updated `input_x`, has the same shape and type as `input_x`.
|
Tensor, the updated `input_x`, has the same shape and type as `input_x`.
|
||||||
|
|
||||||
Raises:
|
Raises:
|
||||||
|
@ -1566,7 +1566,7 @@ def scatter_div(input_x, indices, updates):
|
||||||
updates (Tensor): The tensor doing the divide operation with `input_x`,
|
updates (Tensor): The tensor doing the divide operation with `input_x`,
|
||||||
the data type is the same as `input_x`, the shape is `indices.shape + input_x.shape[1:]`.
|
the data type is the same as `input_x`, the shape is `indices.shape + input_x.shape[1:]`.
|
||||||
|
|
||||||
Outputs:
|
Returns:
|
||||||
Tensor, the updated `input_x`, has the same shape and type as `input_x`.
|
Tensor, the updated `input_x`, has the same shape and type as `input_x`.
|
||||||
|
|
||||||
Raises:
|
Raises:
|
||||||
|
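An illustrative example of the functional `scatter_max` documented above (the `scatter_min` and `scatter_div` variants follow the same calling pattern)::

    import numpy as np
    import mindspore as ms
    from mindspore import ops, Parameter

    input_x = Parameter(ms.Tensor(np.array([[1.0, 2.0, 3.0]]), ms.float32), name="x")
    indices = ms.Tensor(np.array([0]), ms.int32)
    # updates.shape == indices.shape + input_x.shape[1:] == (1, 3)
    updates = ms.Tensor(np.array([[2.0, 2.0, 2.0]]), ms.float32)
    print(ops.scatter_max(input_x, indices, updates))   # expected: [[2. 2. 3.]]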
@ -3487,7 +3487,7 @@ def index_fill(x, dim, index, value):
|
||||||
a Tensor, it must be a 0-dimensional Tensor and has the same dtype as `x`. Otherwise,
|
a Tensor, it must be a 0-dimensional Tensor and has the same dtype as `x`. Otherwise,
|
||||||
the `value` will be cast to a 0-dimensional Tensor with the same data type as `x`.
|
the `value` will be cast to a 0-dimensional Tensor with the same data type as `x`.
|
||||||
|
|
||||||
Outputs:
|
Returns:
|
||||||
Tensor, has the same dtype and shape as input Tensor.
|
Tensor, has the same dtype and shape as input Tensor.
|
||||||
|
|
||||||
Raises:
|
Raises:
|
||||||
|
|
|
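A sketch of `index_fill` filling two columns of a matrix, assuming a scalar `value` is accepted and cast as described above::

    import numpy as np
    import mindspore as ms
    from mindspore import ops

    x = ms.Tensor(np.arange(9, dtype=np.float32).reshape(3, 3))
    index = ms.Tensor(np.array([0, 2]), ms.int32)
    out = ops.index_fill(x, 1, index, 5.0)   # fill columns 0 and 2 along dim 1 with 5.0
    print(out)
    # expected:
    # [[5. 1. 5.]
    #  [5. 4. 5.]
    #  [5. 7. 5.]]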
@ -2123,7 +2123,7 @@ def linspace(start, stop, num):
|
||||||
num (int): Number of ticks in the interval, inclusive of start and stop.
|
num (int): Number of ticks in the interval, inclusive of start and stop.
|
||||||
Must be positive int number.
|
Must be positive int number.
|
||||||
|
|
||||||
Outputs:
|
Returns:
|
||||||
Tensor, has the same dtype as `start`, and the shape of :math:`(num)`
|
Tensor, has the same dtype as `start`, and the shape of :math:`(num)`
|
||||||
|
|
||||||
Raises:
|
Raises:
|
||||||
|
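A minimal sketch of `linspace`; `start` and `stop` are assumed to be 0-dimensional Tensors, as implied by "has the same dtype as `start`" above::

    import mindspore as ms
    from mindspore import ops

    start = ms.Tensor(1.0, ms.float32)
    stop = ms.Tensor(10.0, ms.float32)
    print(ops.linspace(start, stop, 4))   # expected: [ 1.  4.  7. 10.]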
@ -2380,7 +2380,7 @@ def ldexp(x, other):
|
||||||
x (Tensor): The input tensor.
|
x (Tensor): The input tensor.
|
||||||
other (Tensor): A tensor of exponents, typically integers.
|
other (Tensor): A tensor of exponents, typically integers.
|
||||||
|
|
||||||
Outputs:
|
Returns:
|
||||||
Tensor, the result of multiplying `x` element-wise by 2 raised to the power of `other`.
|
Tensor, the result of multiplying `x` element-wise by 2 raised to the power of `other`.
|
||||||
|
|
||||||
Raises:
|
Raises:
|
||||||
|
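An illustrative sketch of `ldexp`, which computes `x * 2**other` element-wise with broadcasting::

    import numpy as np
    import mindspore as ms
    from mindspore import ops

    x = ms.Tensor(np.array([1.0]), ms.float32)
    other = ms.Tensor(np.array([1, 2, 3]), ms.int32)
    print(ops.ldexp(x, other))   # expected: [2. 4. 8.]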
@ -2600,12 +2600,15 @@ def ge(x, y):
|
||||||
r"""
|
r"""
|
||||||
Computes the boolean value of :math:`x >= y` element-wise.
|
Computes the boolean value of :math:`x >= y` element-wise.
|
||||||
|
|
||||||
Inputs of `x` and `y` comply with the implicit type conversion rules to make the data types consistent.
|
Note:
|
||||||
The inputs must be two tensors or one tensor and one scalar.
|
- Inputs of `x` and `y` comply with the implicit type conversion rules to make the data types consistent.
|
||||||
When the inputs are two tensors,
|
- The inputs must be two tensors or one tensor and one scalar.
|
||||||
dtypes of them cannot be bool at the same time, and the shapes of them could be broadcast.
|
- When the inputs are two tensors, dtypes of them cannot be bool at the same time,
|
||||||
When the inputs are one tensor and one scalar,
|
and the shapes of them can be broadcast.
|
||||||
the scalar could only be a constant.
|
- When the inputs are one tensor and one scalar, the scalar could only be a constant.
|
||||||
|
- Broadcasting is supported.
|
||||||
|
- If the input Tensors can be broadcast, the lower-dimensional input will be extended to the corresponding higher dimension
|
||||||
|
of the other input by copying the values along that dimension.
|
||||||
|
|
||||||
.. math::
|
.. math::
|
||||||
|
|
||||||
|
@ -2691,10 +2694,12 @@ def ne(x, y):
|
||||||
r"""
|
r"""
|
||||||
Computes the non-equivalence of two tensors element-wise.
|
Computes the non-equivalence of two tensors element-wise.
|
||||||
|
|
||||||
Inputs of `x` and `y` comply with the implicit type conversion rules to make the data types consistent.
|
Note:
|
||||||
The inputs must be two tensors or one tensor and one scalar.
|
- Inputs of `x` and `y` comply with the implicit type conversion rules to make the data types consistent.
|
||||||
When the inputs are two tensors, the shapes of them could be broadcast.
|
- The inputs must be two tensors or one tensor and one scalar.
|
||||||
When the inputs are one tensor and one scalar, the scalar could only be a constant.
|
- When the inputs are two tensors, the shapes of them could be broadcast.
|
||||||
|
- When the inputs are one tensor and one scalar, the scalar could only be a constant.
|
||||||
|
- Broadcasting is supported.
|
||||||
|
|
||||||
.. math::
|
.. math::
|
||||||
|
|
||||||
|
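A short sketch of the comparison functions `ge` and `ne` covered by the two hunks above, including the tensor-and-scalar form::

    import numpy as np
    import mindspore as ms
    from mindspore import ops

    x = ms.Tensor(np.array([1, 2, 3]), ms.int32)
    y = ms.Tensor(np.array([1, 1, 4]), ms.int32)
    print(ops.ge(x, y))   # expected: [ True  True False]
    print(ops.ne(x, y))   # expected: [False  True  True]
    print(ops.ge(x, 2))   # scalar form with broadcasting -> expected: [False  True  True]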
@ -2757,7 +2762,7 @@ def approximate_equal(x, y, tolerance=1e-5):
|
||||||
y (Tensor): A tensor of the same type and shape as `x`.
|
y (Tensor): A tensor of the same type and shape as `x`.
|
||||||
tolerance (float): The maximum deviation that two elements can be considered equal. Default: 1e-05.
|
tolerance (float): The maximum deviation that two elements can be considered equal. Default: 1e-05.
|
||||||
|
|
||||||
Outputs:
|
Returns:
|
||||||
Tensor, the shape is the same as the shape of `x`, and the data type is bool.
|
Tensor, the shape is the same as the shape of `x`, and the data type is bool.
|
||||||
|
|
||||||
Raises:
|
Raises:
|
||||||
|
@ -2964,13 +2969,15 @@ def maximum(x, y):
|
||||||
"""
|
"""
|
||||||
Computes the maximum of input tensors element-wise.
|
Computes the maximum of input tensors element-wise.
|
||||||
|
|
||||||
Inputs of `x` and `y` comply with the implicit type conversion rules to make the data types consistent.
|
Note:
|
||||||
The inputs must be two tensors or one tensor and one scalar.
|
- Inputs of `x` and `y` comply with the implicit type conversion rules to make the data types consistent.
|
||||||
When the inputs are two tensors,
|
- The inputs must be two tensors or one tensor and one scalar.
|
||||||
dtypes of them cannot be bool at the same time, and the shapes of them could be broadcast.
|
- When the inputs are two tensors,
|
||||||
When the inputs are one tensor and one scalar,
|
dtypes of them cannot be bool at the same time, and the shapes of them could be broadcast.
|
||||||
the scalar could only be a constant.
|
- When the inputs are one tensor and one scalar,
|
||||||
If one of the elements being compared is a NaN, then that element is returned.
|
the scalar could only be a constant.
|
||||||
|
- Broadcasting is supported.
|
||||||
|
- If one of the elements being compared is a NaN, then that element is returned.
|
||||||
|
|
||||||
.. math::
|
.. math::
|
||||||
output_i = max(x_i, y_i)
|
output_i = max(x_i, y_i)
|
||||||
|
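An illustrative example of `approximate_equal` and `maximum` from the two hunks above, using the default tolerance of 1e-5::

    import numpy as np
    import mindspore as ms
    from mindspore import ops

    x = ms.Tensor(np.array([1.0, 2.0, 3.0]), ms.float32)
    y = ms.Tensor(np.array([1.000001, 2.5, 3.0]), ms.float32)
    print(ops.approximate_equal(x, y))   # |x - y| < tolerance -> expected: [ True False  True]
    print(ops.maximum(x, y))             # element-wise maximum -> expected: [1.000001 2.5 3. ]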
@ -3416,7 +3423,7 @@ def mv(mat, vec):
|
||||||
mat (Tensor): Input matrix of the tensor. The shape of the tensor is :math:`(N, M)`.
|
mat (Tensor): Input matrix of the tensor. The shape of the tensor is :math:`(N, M)`.
|
||||||
vec (Tensor): Input vector of the tensor. The shape of the tensor is :math:`(M,)`.
|
vec (Tensor): Input vector of the tensor. The shape of the tensor is :math:`(M,)`.
|
||||||
|
|
||||||
Outputs:
|
Returns:
|
||||||
Tensor, the shape of the output tensor is :math:`(N,)`.
|
Tensor, the shape of the output tensor is :math:`(N,)`.
|
||||||
|
|
||||||
Raises:
|
Raises:
|
||||||
|
@ -3478,7 +3485,7 @@ def addmv(x, mat, vec, beta=1, alpha=1):
|
||||||
alpha (scalar[int, float, bool], optional): Multiplier for `mat` @ `vec` (α). The `alpha` must
|
alpha (scalar[int, float, bool], optional): Multiplier for `mat` @ `vec` (α). The `alpha` must
|
||||||
be int or float or bool, Default: 1.
|
be int or float or bool, Default: 1.
|
||||||
|
|
||||||
Outputs:
|
Returns:
|
||||||
Tensor, the shape of the output tensor is :math:`(N,)`, has the same dtype as `x`.
|
Tensor, the shape of the output tensor is :math:`(N,)`, has the same dtype as `x`.
|
||||||
|
|
||||||
Raises:
|
Raises:
|
||||||
|
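A sketch of `mv` and `addmv`; `addmv` is assumed to compute `beta * x + alpha * (mat @ vec)`, as its parameters suggest::

    import numpy as np
    import mindspore as ms
    from mindspore import ops

    mat = ms.Tensor(np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]), ms.float32)   # (N, M) = (3, 2)
    vec = ms.Tensor(np.array([1.0, 1.0]), ms.float32)                             # (M,)
    print(ops.mv(mat, vec))                           # (N,) -> expected: [ 3.  7. 11.]

    x = ms.Tensor(np.zeros(3), ms.float32)            # the vector added to mat @ vec
    print(ops.addmv(x, mat, vec, beta=1, alpha=2))    # expected: [ 6. 14. 22.]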
@ -3838,7 +3845,7 @@ def deg2rad(x):
|
||||||
x (Tensor[Number]): The input tensor of angles in degrees.
|
x (Tensor[Number]): The input tensor of angles in degrees.
|
||||||
With float16, float32 or float64 data type.
|
With float16, float32 or float64 data type.
|
||||||
|
|
||||||
Outputs:
|
Returns:
|
||||||
Tensor, has the same dtype as the `x`.
|
Tensor, has the same dtype as the `x`.
|
||||||
|
|
||||||
Raises:
|
Raises:
|
||||||
|
@ -3876,7 +3883,7 @@ def rad2deg(x):
|
||||||
Args:
|
Args:
|
||||||
x (Tensor): The input tensor.
|
x (Tensor): The input tensor.
|
||||||
|
|
||||||
Outputs:
|
Returns:
|
||||||
Tensor, has the same shape and dtype as the `x`.
|
Tensor, has the same shape and dtype as the `x`.
|
||||||
|
|
||||||
Raises:
|
Raises:
|
||||||
|
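A minimal round-trip example of `deg2rad` and `rad2deg`::

    import numpy as np
    import mindspore as ms
    from mindspore import ops

    x = ms.Tensor(np.array([0.0, 90.0, 180.0]), ms.float32)
    r = ops.deg2rad(x)      # expected: [0.        1.5707964 3.1415927]
    print(ops.rad2deg(r))   # expected: [  0.  90. 180.]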
@ -4127,7 +4134,7 @@ def logsumexp(x, axis, keep_dims=False):
|
||||||
If False, don't keep these dimensions.
|
If False, don't keep these dimensions.
|
||||||
Default : False.
|
Default : False.
|
||||||
|
|
||||||
Outputs:
|
Returns:
|
||||||
Tensor, has the same dtype as the `x`.
|
Tensor, has the same dtype as the `x`.
|
||||||
|
|
||||||
- If axis is (), and keep_dims is False,
|
- If axis is (), and keep_dims is False,
|
||||||
|
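A short sketch of `logsumexp` with `keep_dims=True`::

    import numpy as np
    import mindspore as ms
    from mindspore import ops

    x = ms.Tensor(np.array([[1.0, 2.0, 3.0], [1.0, 2.0, 3.0]]), ms.float32)
    out = ops.logsumexp(x, 1, keep_dims=True)   # log(sum(exp(x), axis=1)), reduced axis kept
    print(out.shape)                            # expected: (2, 1)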
@ -5020,7 +5027,7 @@ def log2(x):
|
||||||
Args:
|
Args:
|
||||||
x (Tensor): Input Tensor of any dimension. The value must be greater than 0.
|
x (Tensor): Input Tensor of any dimension. The value must be greater than 0.
|
||||||
|
|
||||||
Outputs:
|
Returns:
|
||||||
Tensor, has the same shape and dtype as the `x`.
|
Tensor, has the same shape and dtype as the `x`.
|
||||||
|
|
||||||
Raises:
|
Raises:
|
||||||
|
@ -5195,7 +5202,7 @@ def log10(x):
|
||||||
Args:
|
Args:
|
||||||
x (Tensor): Input Tensor of any dimension. The value must be greater than 0.
|
x (Tensor): Input Tensor of any dimension. The value must be greater than 0.
|
||||||
|
|
||||||
Outputs:
|
Returns:
|
||||||
Tensor, has the same shape and dtype as the `x`.
|
Tensor, has the same shape and dtype as the `x`.
|
||||||
|
|
||||||
Raises:
|
Raises:
|
||||||
|
@ -5474,9 +5481,9 @@ def remainder(x, y):
|
||||||
y (Union[Tensor, numbers.Number, bool]): When the first input is a tensor, the second input
|
y (Union[Tensor, numbers.Number, bool]): When the first input is a tensor, the second input
|
||||||
could be a number, a bool or a tensor whose data type is number.
|
could be a number, a bool or a tensor whose data type is number.
|
||||||
|
|
||||||
Outputs:
|
Returns:
|
||||||
Tensor, the shape is the same as the one after broadcasting,
|
Tensor, the shape is the same as the one after broadcasting,
|
||||||
and the data type is the one with higher precision or higher digits among the two inputs.
|
and the data type is the one with higher precision or higher digits among the two inputs.
|
||||||
|
|
||||||
Raises:
|
Raises:
|
||||||
TypeError: If neither `x` nor `y` is one of the following: Tensor, number, bool.
|
TypeError: If neither `x` nor `y` is one of the following: Tensor, number, bool.
|
||||||
|
|
|
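Illustrative examples of `log2`, `log10` and `remainder` from the hunks above; the tensor-and-number form of `remainder` is assumed to broadcast as documented::

    import numpy as np
    import mindspore as ms
    from mindspore import ops

    x = ms.Tensor(np.array([2.0, 8.0, 100.0]), ms.float32)
    print(ops.log2(x))             # expected: [1.       3.       6.643856]
    print(ops.log10(x))            # expected: [0.30103 0.90309 2.     ]
    print(ops.remainder(x, 3.0))   # element-wise remainder -> expected: [2. 2. 1.]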
@ -843,7 +843,7 @@ def softsign(x):
|
||||||
x (Tensor): Tensor of shape :math:`(N, *)`, where :math:`*` means, any number of
|
x (Tensor): Tensor of shape :math:`(N, *)`, where :math:`*` means, any number of
|
||||||
additional dimensions, with float16 or float32 data type.
|
additional dimensions, with float16 or float32 data type.
|
||||||
|
|
||||||
Outputs:
|
Returns:
|
||||||
Tensor, with the same type and shape as the `x`.
|
Tensor, with the same type and shape as the `x`.
|
||||||
|
|
||||||
Raises:
|
Raises:
|
||||||
|
@ -921,7 +921,7 @@ def soft_shrink(x, lambd=0.5):
|
||||||
x (Tensor): The input of soft shrink with data type of float16 or float32.
|
x (Tensor): The input of soft shrink with data type of float16 or float32.
|
||||||
lambd(float): The :math:`\lambda` must be no less than zero. Default: 0.5.
|
lambd(float): The :math:`\lambda` must be no less than zero. Default: 0.5.
|
||||||
|
|
||||||
Outputs:
|
Returns:
|
||||||
Tensor, has the same shape and data type as `x`.
|
Tensor, has the same shape and data type as `x`.
|
||||||
|
|
||||||
Raises:
|
Raises:
|
||||||
|
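A sketch of `softsign` and `soft_shrink`; `soft_shrink` is assumed to shift values toward zero by `lambd` and zero out values within `[-lambd, lambd]`::

    import numpy as np
    import mindspore as ms
    from mindspore import ops

    x = ms.Tensor(np.array([-1.0, 0.0, 2.0]), ms.float32)
    print(ops.softsign(x))                 # x / (1 + |x|) -> expected: [-0.5  0.  0.6666667]
    print(ops.soft_shrink(x, lambd=0.5))   # expected: [-0.5  0.  1.5]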
@ -1236,9 +1236,9 @@ def mirror_pad(input_x, paddings, mode):
|
||||||
Pads the input tensor according to the paddings and mode.
|
Pads the input tensor according to the paddings and mode.
|
||||||
|
|
||||||
Args:
|
Args:
|
||||||
**input_x** (Tensor) - Tensor of shape :math:`(N, *)`, where :math:`*` means, any number of
|
input_x (Tensor): Tensor of shape :math:`(N, *)`, where :math:`*` means, any number of
|
||||||
additional dimensions.
|
additional dimensions.
|
||||||
**paddings** (Tensor) - Paddings requires constant tensor. The value of `paddings` is a
|
paddings (Tensor): Paddings requires constant tensor. The value of `paddings` is a
|
||||||
matrix(list), and its shape is (N, 2). N is the rank of input data. All elements of paddings
|
matrix(list), and its shape is (N, 2). N is the rank of input data. All elements of paddings
|
||||||
are int type. For the input in the `D` th dimension, paddings[D, 0] indicates how many sizes
|
are int type. For the input in the `D` th dimension, paddings[D, 0] indicates how many sizes
|
||||||
to be extended ahead of the input tensor in the `D` th dimension, and paddings[D, 1]
|
to be extended ahead of the input tensor in the `D` th dimension, and paddings[D, 1]
|
||||||
|
@ -1248,8 +1248,7 @@ def mirror_pad(input_x, paddings, mode):
|
||||||
mode (str): Specifies the padding mode. The optional values are "REFLECT" and "SYMMETRIC".
|
mode (str): Specifies the padding mode. The optional values are "REFLECT" and "SYMMETRIC".
|
||||||
Default: "REFLECT".
|
Default: "REFLECT".
|
||||||
|
|
||||||
|
Returns:
|
||||||
Outputs:
|
|
||||||
Tensor, the tensor after padding.
|
Tensor, the tensor after padding.
|
||||||
|
|
||||||
- If `mode` is "REFLECT", it uses a way of symmetrical copying through the axis of symmetry to fill in.
|
- If `mode` is "REFLECT", it uses a way of symmetrical copying through the axis of symmetry to fill in.
|
||||||
|
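A minimal sketch of `mirror_pad` in "REFLECT" mode, assuming the functional signature shown above is exposed as `mindspore.ops.mirror_pad`::

    import numpy as np
    import mindspore as ms
    from mindspore import ops

    x = ms.Tensor(np.array([[1, 2, 3], [4, 5, 6]]), ms.float32)
    paddings = ms.Tensor(np.array([[1, 1], [1, 1]]), ms.int32)   # shape (rank, 2): pad 1 before/after each dim
    out = ops.mirror_pad(x, paddings, "REFLECT")
    print(out.shape)   # expected: (4, 5)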
@ -1366,7 +1365,6 @@ def cross_entropy(inputs, target, weight=None, ignore_index=-100, reduction='mea
|
||||||
``Ascend`` ``GPU`` ``CPU``
|
``Ascend`` ``GPU`` ``CPU``
|
||||||
|
|
||||||
Examples:
|
Examples:
|
||||||
|
|
||||||
>>> # Case 1: Indices labels
|
>>> # Case 1: Indices labels
|
||||||
>>> inputs = mindspore.Tensor(np.random.randn(3, 5), mindspore.float32)
|
>>> inputs = mindspore.Tensor(np.random.randn(3, 5), mindspore.float32)
|
||||||
>>> target = mindspore.Tensor(np.array([1, 0, 4]), mindspore.int32)
|
>>> target = mindspore.Tensor(np.array([1, 0, 4]), mindspore.int32)
|
||||||
|
@ -1447,7 +1445,7 @@ def nll_loss(inputs, target, weight=None, ignore_index=-100, reduction='mean', l
|
||||||
label_smoothing (float): Label smoothing values, a regularization tool used to prevent the model
|
label_smoothing (float): Label smoothing values, a regularization tool used to prevent the model
|
||||||
from overfitting when calculating Loss. The value range is [0.0, 1.0]. Default value: 0.0.
|
from overfitting when calculating Loss. The value range is [0.0, 1.0]. Default value: 0.0.
|
||||||
|
|
||||||
Outputs:
|
Returns:
|
||||||
Tensor, the computed loss value.
|
Tensor, the computed loss value.
|
||||||
|
|
||||||
Supported Platforms:
|
Supported Platforms:
|
||||||
|
@ -1638,7 +1636,7 @@ def log_softmax(logits, axis=-1):
|
||||||
additional dimensions, with float16 or float32 data type.
|
additional dimensions, with float16 or float32 data type.
|
||||||
axis (int): The axis to perform the Log softmax operation. Default: -1.
|
axis (int): The axis to perform the Log softmax operation. Default: -1.
|
||||||
|
|
||||||
Outputs:
|
Returns:
|
||||||
Tensor, with the same type and shape as the logits.
|
Tensor, with the same type and shape as the logits.
|
||||||
|
|
||||||
Raises:
|
Raises:
|
||||||
|
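An illustrative sketch tying together the three hunks above: `cross_entropy` on class indices, and the equivalent `log_softmax` plus `nll_loss` composition::

    import numpy as np
    import mindspore as ms
    from mindspore import ops

    inputs = ms.Tensor(np.random.randn(3, 5), ms.float32)   # unnormalized scores, shape (N, C)
    target = ms.Tensor(np.array([1, 0, 4]), ms.int32)       # class indices
    print(ops.cross_entropy(inputs, target))                # scalar loss, reduction='mean' by default

    log_probs = ops.log_softmax(inputs, axis=-1)            # log-probabilities over classes
    print(ops.nll_loss(log_probs, target))                  # expected to match the cross_entropy value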
@ -1845,7 +1843,7 @@ def grid_sample(input_x, grid, interpolation_mode='bilinear', padding_mode='zero
|
||||||
to the corner points of the input’s corner pixels, making the sampling more resolution agnostic. Default:
|
to the corner points of the input’s corner pixels, making the sampling more resolution agnostic. Default:
|
||||||
`False`.
|
`False`.
|
||||||
|
|
||||||
Outputs:
|
Returns:
|
||||||
Tensor, dtype is the same as `input_x` and whose shape is :math:`(N, C, H_{out}, W_{out})` (4-D) and
|
Tensor, dtype is the same as `input_x` and whose shape is :math:`(N, C, H_{out}, W_{out})` (4-D) and
|
||||||
:math:`(N, C, D_{out}, H_{out}, W_{out})` (5-D).
|
:math:`(N, C, D_{out}, H_{out}, W_{out})` (5-D).
|
||||||
|
|
||||||
|
|
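A minimal sketch of `grid_sample` on a 4-D input; a grid of zeros samples the centre of the input, and the output spatial shape follows the grid's spatial shape::

    import numpy as np
    import mindspore as ms
    from mindspore import ops

    input_x = ms.Tensor(np.arange(16, dtype=np.float32).reshape(1, 1, 4, 4))   # (N, C, H_in, W_in)
    grid = ms.Tensor(np.zeros((1, 2, 2, 2), dtype=np.float32))                 # (N, H_out, W_out, 2), values in [-1, 1]
    out = ops.grid_sample(input_x, grid, interpolation_mode='bilinear', padding_mode='zeros')
    print(out.shape)   # expected: (1, 1, 2, 2)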