!48383 fix API docs issues

Merge pull request !48383 from luojianing/code_docs_master
i-robot 2023-02-06 03:46:22 +00:00 committed by Gitee
commit 1b4b756e48
6 changed files with 16 additions and 16 deletions

View File

@ -3,9 +3,9 @@ mindspore.ops.clamp
.. py:function:: mindspore.ops.clamp(x, min=None, max=None)
Clamps the input Tensor values between the specified minimum and maximum values.
Clamps the input Tensor values between the specified minimum and maximum values.
Limits the range of :math:`x`, where the minimum value of :math:`x` is `min` and the maximum value is `max`
Limits the range of :math:`x`, whose minimum value is `min` and maximum value is `max`
.. math::
out_i= \left\{

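As a quick illustration of the clamp behavior documented above, a minimal sketch (the concrete tensor values are made up for illustration):

    import numpy as np
    import mindspore as ms
    from mindspore import Tensor, ops

    x = Tensor(np.array([-1.5, 0.3, 2.7]), ms.float32)
    # Entries below `min` are raised to min, entries above `max` are lowered to max.
    out = ops.clamp(x, min=0.0, max=1.0)   # expected: [0.0, 0.3, 1.0]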
View File

@ -3,7 +3,7 @@ mindspore.ops.full_like
.. py:function:: mindspore.ops.full_like(x, fill_value, *, dtype=None)
Returns a Tensor with the same size as the input, filled with `fill_value`. 'ops.full_like(x, fill_value)' is equivalent to 'ops.full(x.shape, fill_value, dtype=x.dtype)'
Returns a Tensor with the same size as the input, filled with `fill_value` . `ops.full_like(x, fill_value)` is equivalent to `ops.full(x.shape, fill_value, dtype=x.dtype)`
Args:
- **x** (Tensor) - The shape of `x` determines the shape of the output Tensor.

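A short sketch of the equivalence stated above (shape and fill value chosen arbitrarily for illustration):

    import numpy as np
    import mindspore as ms
    from mindspore import Tensor, ops

    x = Tensor(np.zeros((2, 3)), ms.float32)
    a = ops.full_like(x, 7)                      # 2x3 Tensor of 7s, dtype float32
    b = ops.full(x.shape, 7, dtype=x.dtype)      # the stated equivalent form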
View File

@ -717,8 +717,8 @@ def full(size, fill_value, *, dtype=None): # pylint: disable=redefined-outer-nam
def full_like(x, fill_value, *, dtype=None):
"""
Returns a Tensor with the same size as `x` filled with `fill_value`. 'ops.full_like(x, fill_value)' is
equivalent to 'ops.full(x.shape, fill_value, dtype=x.dtype)'.
Returns a Tensor with the same size as `x` filled with `fill_value`. `ops.full_like(x, fill_value)` is
equivalent to `ops.full(x.shape, fill_value, dtype=x.dtype)` .
Args:
x (Tensor): The shape of `x` will determine the shape of the output Tensor.
@ -5862,7 +5862,7 @@ def diagonal(input, offset=0, dim1=0, dim2=1):
Returns specified diagonals of `input`.
If `input` is 2-D, returns the diagonal of `input` with the given offset.
If `a` has more than two
If `input` has more than two
dimensions, then the axes specified by `dim1` and `dim2` are used to determine
the 2-D sub-array whose diagonal is returned. The shape of the resulting
array can be determined by removing `dim1` and `dim2` and appending an index

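A brief sketch of the 2-D case of `diagonal` described above (input values are illustrative):

    import numpy as np
    import mindspore as ms
    from mindspore import Tensor, ops

    x = Tensor(np.arange(9).reshape(3, 3), ms.float32)   # [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
    main_diag = ops.diagonal(x)                          # [0., 4., 8.]
    upper_diag = ops.diagonal(x, offset=1)               # [1., 5.]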
View File

@ -153,7 +153,7 @@ def clip_by_value(x, clip_value_min=None, clip_value_max=None):
def clamp(x, min=None, max=None):
r"""
Clamps tensor values to a specified min and max.
Clamps tensor values between the specified minimum value and maximum value.
Limits the value of :math:`x` to a range, whose lower limit is `min` and upper limit is `max` .
@ -181,7 +181,7 @@ def clamp(x, min=None, max=None):
max (Union(Tensor, float, int)): The maximum value. Default: None.
Returns:
(Union(Tensor, tuple[Tensor], list[Tensor])), a clipped Tensor or a tuple or a list of clipped Tensor.
Union(Tensor, tuple[Tensor], list[Tensor]), a clipped Tensor or a tuple or a list of clipped Tensor.
The data type and shape are the same as x.
Raises:

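The Returns annotation above also lists tuple[Tensor] and list[Tensor]; assuming that means `x` may be passed as a list of tensors (the corresponding Args line is not shown in this hunk), a sketch of that usage:

    import numpy as np
    import mindspore as ms
    from mindspore import Tensor, ops

    a = Tensor(np.array([-2.0, 0.4]), ms.float32)
    b = Tensor(np.array([0.8, 5.0]), ms.float32)
    # Assumption: each element of the list is clipped independently to [0.0, 1.0].
    outs = ops.clamp([a, b], min=0.0, max=1.0)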
View File

@ -1293,7 +1293,7 @@ def logdet(x):
Returns:
Tensor, the log determinant of `x`. If the matrix determinant is smaller than 0, nan will be returned. If the
matrix determinant is 0, -inf will be returned.
matrix determinant is 0, -inf will be returned.
Raises:
TypeError: If dtype of `x` is not float32, float64, Complex64 or Complex128.
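A minimal sketch of `logdet` as documented above (the matrix values are illustrative):

    import numpy as np
    import mindspore as ms
    from mindspore import Tensor, ops

    x = Tensor(np.array([[2.0, 0.0], [0.0, 3.0]]), ms.float32)
    out = ops.logdet(x)    # determinant is 6.0, so the result is log(6.0) ≈ 1.79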
@ -4503,11 +4503,11 @@ def heaviside(x, values):
Args:
x (Tensor): The input tensor. With real number data type.
values (Tensor): The values to use where x is zero. Values can be broadcast with x.
'x' should have the same dtype with 'values'.
values (Tensor): The values to use where `x` is zero. Values can be broadcast with `x` .
`x` should have the same dtype with `values` .
Returns:
Tensor, has the same type as 'x' and 'values'.
Tensor, has the same type as `x` and `values`.
Raises:
TypeError: If `x` or `values` is not Tensor.

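A short sketch of the `heaviside` behavior described above (input values are illustrative):

    import numpy as np
    import mindspore as ms
    from mindspore import Tensor, ops

    x = Tensor(np.array([-1.5, 0.0, 2.0]), ms.float32)
    values = Tensor(np.array([0.5, 0.5, 0.5]), ms.float32)
    # 0 where x < 0, `values` where x == 0, 1 where x > 0
    out = ops.heaviside(x, values)   # expected: [0.0, 0.5, 1.0]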
View File

@ -686,8 +686,8 @@ def max_unpool1d(x, indices, kernel_size, stride=None, padding=0, output_size=No
max_unpool1d takes the output of maxpool1d as input including the indices of the maximal values
and computes a partial inverse in which all non-maximal values are set to zero. Typically the input
is of shape :math:`(N, C, H_{in})` or :math:`(C, H_{in})`, and the output is of shape :math:`(N, C, H_{out}`
or :math:`(C, H_{out}`. The operation is as follows.
is of shape :math:`(N, C, H_{in})` or :math:`(C, H_{in})`, and the output is of shape :math:`(N, C, H_{out})`
or :math:`(C, H_{out})`. The operation is as follows.
.. math::
\begin{array}{ll} \\
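A minimal sketch of the shapes involved in `max_unpool1d` as described above; the index dtype (int64) and the concrete values are assumptions for illustration:

    import numpy as np
    import mindspore as ms
    from mindspore import Tensor, ops

    x = Tensor(np.array([[[2.0, 4.0, 6.0]]]), ms.float32)   # pooled maxima, shape (1, 1, 3)
    indices = Tensor(np.array([[[1, 3, 5]]]), ms.int64)     # original positions of those maxima
    out = ops.max_unpool1d(x, indices, kernel_size=2, stride=2)
    # out has shape (1, 1, 6): the maxima are scattered back to positions 1, 3 and 5,
    # and all other entries are zero.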
@ -4607,7 +4607,7 @@ def batch_norm(input_x, running_mean, running_var, weight, bias, training=False,
.. warning::
- If this operation is used for inferring and output "reserve_space_1" and "reserve_space_2" are usable,
then "reserve_space_1" and "reserve_space_2" have the same value as "mean" and "variance" respectively.
then "reserve_space_1" and "reserve_space_2" have the same value as "mean" and "variance" respectively.
- For Ascend 310, the result accuracy fails to reach 1 due to the square root instruction.
Note:
@ -5480,7 +5480,7 @@ def lp_pool2d(x, norm_type, kernel_size, stride=None, ceil_mode=False):
stride (Union[int, tuple[int]]): The distance of kernel moving, an int number that represents
the height and width of movement are both strides, or a tuple of two int numbers that
represent height and width of movement respectively, if the value is None,
the default value `kernel_size` is used;
the default value `kernel_size` is used.
ceil_mode (bool): Whether to use ceil or floor to calculate output shape. Default: False.
Returns:
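To close, a brief sketch of `lp_pool2d` with the stride default described above; the 4-D (N, C, H, W) input layout is an assumption not shown in this hunk:

    import numpy as np
    import mindspore as ms
    from mindspore import Tensor, ops

    x = Tensor(np.arange(16, dtype=np.float32).reshape(1, 1, 4, 4))
    # stride is None, so it falls back to kernel_size (2), giving a (1, 1, 2, 2) output.
    out = ops.lp_pool2d(x, norm_type=2, kernel_size=2)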