modify format1115

This commit is contained in:
huodagu 2022-11-15 10:14:45 +08:00
parent 4b98ca7361
commit 38bf0b73ba
13 changed files with 35 additions and 142 deletions

View File

@ -614,7 +614,7 @@ Parameter Operation Operators
Custom Operators
----------------
.. mscnautosummary::
.. mscnplatformautosummary::
:toctree: ops
:nosignatures:
:template: classtemplate.rst

View File

@ -3,7 +3,4 @@ mindspore.Tensor.negative
.. py:method:: mindspore.Tensor.negative()
Compute the negative of the current Tensor element-wise.
Returns:
Tensor, where each element is the negative of the corresponding element of the current Tensor.
For details, please refer to :func:`mindspore.ops.negative`.
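A minimal usage sketch, reusing the example from the tensor.py docstring removed later in this commit:

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor
>>> x = Tensor(np.array([1, 2, -1, 2, 0, -3.5]), mindspore.float32)
>>> output = x.negative()
>>> print(output)
[-1. -2. 1. -2. 0. 3.5]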

View File

@ -26,7 +26,7 @@ mindspore.nn.probability.distribution.HalfNormal
- **ValueError** - Elements of `sd` are not greater than 0.
- **TypeError** - `dtype` is not a subclass of float.
.. py:method:: log_prob(value, mean, sd)
.. py:method:: log_prob(value, mean=None, sd=None)
Compute the logarithm of the probability of the given value.
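A hedged sketch of the updated signature (the constructor arguments `mean`, `sd`, and `dtype` are assumed from the surrounding docs; values are illustrative):

>>> import mindspore
>>> from mindspore import Tensor
>>> import mindspore.nn.probability.distribution as msd
>>> hn = msd.HalfNormal(mean=0.0, sd=1.0, dtype=mindspore.float32)
>>> # mean and sd now default to None and fall back to the constructor's values
>>> log_p = hn.log_prob(Tensor([0.5, 1.0], mindspore.float32))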

View File

@ -26,7 +26,7 @@ mindspore.nn.probability.distribution.Laplace
- **ValueError** - Elements of `sd` are not greater than 0.
- **TypeError** - `dtype` is not a subclass of float.
.. py:method:: log_prob(value, mean, sd)
.. py:method:: log_prob(value, mean=None, sd=None)
Compute the logarithm of the probability of the given value.
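For contrast, a sketch passing `mean` and `sd` at call time, assuming (as with the other distributions here) that per-call values override construction-time values:

>>> import mindspore
>>> from mindspore import Tensor
>>> import mindspore.nn.probability.distribution as msd
>>> la = msd.Laplace(mean=0.0, sd=1.0, dtype=mindspore.float32)
>>> # per-call mean/sd take precedence over the constructor's values
>>> log_p = la.log_prob(Tensor([1.0]), Tensor([0.0]), Tensor([2.0]))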

View File

@ -30,7 +30,7 @@ mindspore.nn.probability.distribution.StudentT
- **ValueError** - Elements of `sd` are not greater than 0.
- **TypeError** - `dtype` is not a subclass of float.
.. py:method:: log_prob(value, df, mean, sd)
.. py:method:: log_prob(value, df=None, mean=None, sd=None)
Compute the logarithm of the probability of the given value.
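A sketch of the three now-optional arguments on StudentT (constructor signature assumed; values illustrative):

>>> import mindspore
>>> from mindspore import Tensor
>>> import mindspore.nn.probability.distribution as msd
>>> st = msd.StudentT(df=3.0, mean=0.0, sd=1.0, dtype=mindspore.float32)
>>> # df, mean, and sd all default to None, i.e. the constructor's values
>>> log_p = st.log_prob(Tensor([0.5], mindspore.float32))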

View File

@ -5,12 +5,12 @@ mindspore.ops.DiagPart
Return the diagonal part of the input.
If `input_x` has dimensions `[D_1,..., D_k, D_1,..., D_k]`, then the output is a rank-k Tensor with dimensions `[D_1,..., D_k]`, where:
If `input_x` has dimensions :math:`[D_1,..., D_k, D_1,..., D_k]`, then the output is a rank-k Tensor with dimensions :math:`[D_1,..., D_k]`, where:
`output[i_1,..., i_k] = input_x[i_1,..., i_k, i_1,..., i_k]`
:math:`output[i_1,..., i_k] = input_x[i_1,..., i_k, i_1,..., i_k]`
Inputs:
- **input_x** (Tensor) - The input Tensor. Its rank is `k(k > 0)`.
- **input_x** (Tensor) - The input Tensor. Its rank is 2k(k > 0).
Outputs:
Tensor, with the same data type as `input`.
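A minimal sketch for the k = 1 case (a square matrix, rank 2k = 2), where the output is simply the main diagonal:

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> input_x = Tensor(np.array([[1, 0, 0, 0],
...                            [0, 2, 0, 0],
...                            [0, 0, 3, 0],
...                            [0, 0, 0, 4]]))
>>> output = ops.DiagPart()(input_x)
>>> print(output)
[1 2 3 4]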

View File

@ -612,7 +612,7 @@ Operator Information Registration
Customizing Operator
--------------------
.. autosummary::
.. msplatformautosummary::
:toctree: ops
:nosignatures:
:template: classtemplate.rst

View File

@ -1,12 +1,12 @@
.. py:method:: log_prob(value, mean, sd)
.. py:method:: log_prob(value, mean=None, sd=None)
Compute the log value of the probability.
**Parameters**
- **value** (Tensor) - the value to compute.
- **mean** (Tensor) - the mean of the distribution. Default value: None.
- **sd** (Tensor) - the standard deviation of the distribution. Default value: None.
- **mean** (Tensor) - the mean of the distribution. Default: None.
- **sd** (Tensor) - the standard deviation of the distribution. Default: None.
**Returns**

View File

@ -1,12 +1,12 @@
.. py:method:: log_prob(value, mean, sd)
.. py:method:: log_prob(value, mean=None, sd=None)
Compute the log value of the probability.
**Parameters**
- **value** (Tensor) - the value to compute.
- **mean** (Tensor) - the mean of the distribution. Default value: None.
- **sd** (Tensor) - the standard deviation of the distribution. Default value: None.
- **mean** (Tensor) - the mean of the distribution. Default: None.
- **sd** (Tensor) - the standard deviation of the distribution. Default: None.
**Returns**

View File

@ -1,13 +1,13 @@
.. py:method:: log_prob(value, mean, sd)
.. py:method:: log_prob(value, df=None, mean=None, sd=None)
Compute the log value of the probability.
**Parameters**
- **value** (Tensor) - the value to compute.
- **df** (Tensor) - the degrees of freedom of the distribution. Default value: None.
- **mean** (Tensor) - the mean of the distribution. Default value: None.
- **sd** (Tensor) - the standard deviation of the distribution. Default value: None.
- **df** (Tensor) - the degrees of freedom of the distribution. Default: None.
- **mean** (Tensor) - the mean of the distribution. Default: None.
- **sd** (Tensor) - the standard deviation of the distribution. Default: None.
**Returns**

View File

@ -919,31 +919,7 @@ class Tensor(Tensor_):
def addcdiv(self, x1, x2, value):
r"""
Performs the element-wise division of tensor x1 by tensor x2,
multiplies the result by the scalar value, and adds it to input_data.
.. math::
y[i] = input\_data[i] + value[i] * (x1[i] / x2[i])
Args:
x1 (Tensor): The numerator tensor.
x2 (Tensor): The denominator tensor.
value (Tensor): The multiplier for tensor x1/x2.
Returns:
Tensor, has the same shape and dtype as x1/x2.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> x = Tensor(np.array([1, 1, 1, 1]), mindspore.float32)
>>> x1 = Tensor(np.array([1, 2, 3, 4]), mindspore.float32)
>>> x2 = Tensor(np.array([4, 3, 2, 1]), mindspore.float32)
>>> value = Tensor([1], mindspore.float32)
>>> y = x.addcdiv(x1, x2, value)
>>> print(y)
[1.25 1.6666667 2.5 5. ]
For details, please refer to :func:`mindspore.ops.addcdiv`.
"""
self._init_check()
@ -951,33 +927,7 @@ class Tensor(Tensor_):
def addcmul(self, x1, x2, value):
r"""
Performs the element-wise product of tensor x1 and tensor x2,
multiplies the result by the scalar value, and adds it to input_data.
.. math::
y[i] = input\_data[i] + value[i] * (x1[i] * x2[i])
Args:
x1 (Tensor): The tensor to be multiplied.
x2 (Tensor): The tensor to be multiplied.
value (Tensor): The multiplier for tensor x1*x2.
Returns:
Tensor, has the same shape and dtype as x1*x2.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> x = Tensor(np.array([1, 1, 1]), mindspore.float32)
>>> x1 = Tensor(np.array([[1], [2], [3]]), mindspore.float32)
>>> x2 = Tensor(np.array([[1, 2, 3]]), mindspore.float32)
>>> value = Tensor([1], mindspore.float32)
>>> y = x.addcmul(x1, x2, value)
>>> print(y)
[[ 2. 3. 4.]
[ 3. 5. 7.]
[ 4. 7. 10.]]
For details, please refer to :func:`mindspore.ops.addcmul`.
"""
self._init_check()
@ -1454,19 +1404,7 @@ class Tensor(Tensor_):
def negative(self):
r"""
Return a new tensor with the negative of the elements of input.
Returns:
Tensor, with the negative of the elements of the self Tensor.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> x = Tensor(np.array([1, 2, -1, 2, 0, -3.5]), mindspore.float32)
>>> output = x.negative()
>>> print(output)
[-1. -2. 1. -2. 0. 3.5]
For details, please refer to :func:`mindspore.ops.negative`.
"""
self._init_check()
return tensor_operator_registry.get("negative")(self)
@ -3524,31 +3462,7 @@ class Tensor(Tensor_):
def unbind(self, dim=0):
r"""
Removes a tensor dimension along the specified axis.
Unstacks a tensor of rank `R` along the axis dimension; each output tensor has rank `(R-1)`.
Given a tensor of shape :math:`(x_1, x_2, ..., x_R)`, if :math:`0 \le axis`,
the shape of each output tensor is :math:`(x_1, x_2, ..., x_{axis}, x_{axis+2}, ..., x_R)`.
Args:
dim (int): Dimension along which to unpack. Negative values wrap around. The range is [-R, R). Default: 0.
Returns:
A tuple of tensors, each with the same shape.
Raises:
ValueError: If `dim` is out of the range [-R, R).
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))
>>> output = x.unbind()
>>> print(output)
(Tensor(shape=[3], dtype=Int64, value=[1, 2, 3]), Tensor(shape=[3], dtype=Int64, value=[4, 5, 6]),
Tensor(shape=[3], dtype=Int64, value=[7, 8, 9]))
For details, please refer to :func:`mindspore.ops.unbind`.
"""
self._init_check()
return tensor_operator_registry.get('unbind')(dim)(self)
@ -3947,26 +3861,7 @@ class Tensor(Tensor_):
def erfinv(self):
r"""
Computes the inverse error function of input. The inverse error function is defined in the range `(-1, 1)` as:
.. math::
erfinv(erf(x)) = x
Returns:
Tensor, has the same shape and dtype as input tensor.
Raises:
TypeError: If dtype of input tensor is not float16, float32 or float64.
Supported Platforms:
``Ascend`` ``GPU``
Examples:
>>> x = Tensor(np.array([0, 0.5, -0.9]), mindspore.float32)
>>> output = x.erfinv()
>>> print(output)
[ 0. 0.47695306 -1.1630805 ]
For details, please refer to :func:`mindspore.ops.erfinv`.
"""
self._init_check()
return tensor_operator_registry.get('erfinv')(self)

View File

@ -3822,8 +3822,8 @@ def addbmm(x, batch1, batch2, *, beta=1, alpha=1):
batch2 (Tensor): The second batch of tensor to be multiplied.
Keyword Args:
beta (scalar[int, float], optional): Multiplier for `x`. Default: 1.
alpha (scalar[int, float], optional): Multiplier for `batch1` @ `batch2`. Default: 1.
beta (Union[int, float], optional): Multiplier for `x`. Default: 1.
alpha (Union[int, float], optional): Multiplier for `batch1` @ `batch2`. Default: 1.
Returns:
Tensor, has the same dtype as `x`.
@ -3849,8 +3849,8 @@ def addmm(x, mat1, mat2, *, beta=1, alpha=1):
mat2 (Tensor): The second tensor to be multiplied.
Keyword Args:
beta (scalar[int, float], optional): Multiplier for `x`. Default: 1.
alpha (scalar[int, float], optional): Multiplier for `mat1` @ `mat2`. Default: 1.
beta (Union[int, float], optional): Multiplier for `x`. Default: 1.
alpha (Union[int, float], optional): Multiplier for `mat1` @ `mat2`. Default: 1.
.. math::
output = \beta x + \alpha (mat1 @ mat2)
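A small numeric sketch of :func:`mindspore.ops.addmm` under this formula (values illustrative; with beta = alpha = 1, each output element is 1 + 2*3*3 = 19):

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.ones((2, 2)).astype(np.float32))
>>> mat1 = Tensor(np.full((2, 3), 2.0).astype(np.float32))
>>> mat2 = Tensor(np.full((3, 2), 3.0).astype(np.float32))
>>> # output = beta * x + alpha * (mat1 @ mat2)
>>> print(ops.addmm(x, mat1, mat2, beta=1, alpha=1))
[[19. 19.]
 [19. 19.]]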

View File

@ -3279,12 +3279,13 @@ def gaussian_nll_loss(x, target, var, full=False, eps=1e-6, reduction='mean'):
target (Tensor): Tensor of shape :math:`(N, *)` or :math:`(*)`, same shape as the x, or same shape
as the x but with one dimension equal to 1 (to allow broadcasting).
var (Tensor): Tensor of shape :math:`(N, *)` or :math:`(*)`, same shape as x, or same shape as the x
but with one dimension equal to 1, or same shape as the x but with one fewer dimension
(to allow for broadcasting).
full (bool): Include the constant term in the loss calculation. When :math:`full=True`, the constant term
`const.` will be :math:`0.5 * log(2\pi)`. Default: False.
eps (float): Used to improve the stability of the log function; must be greater than 0. Default: 1e-6.
reduction (str): Apply specific reduction method to the output: 'none', 'mean', or 'sum'. Default: 'mean'.
but with one dimension equal to 1, or same shape as the x but with one fewer dimension
(to allow for broadcasting).
full (bool, optional): Include the constant term in the loss calculation. When :math:`full=True`,
the constant term `const.` will be :math:`0.5 * log(2\pi)`. Default: False.
eps (float, optional): Used to improve the stability of the log function; must be greater than 0. Default: 1e-6.
reduction (str, optional): Apply specific reduction method to the output: 'none', 'mean', or 'sum'.
Default: 'mean'.
Returns:
Tensor or scalar Tensor, the computed loss, depending on `reduction`.
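A brief sketch of the call under these defaults (values illustrative; with var = 1 and full=False, the standard Gaussian NLL per element reduces to 0.5*(x - target)^2, so the mean here is (0.125 + 0 + 0.5)/3 ≈ 0.2083):

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([0.5, 1.0, 2.0]), mindspore.float32)
>>> target = Tensor(np.array([1.0, 1.0, 1.0]), mindspore.float32)
>>> var = Tensor(np.ones(3), mindspore.float32)
>>> # expected result ~0.2083 under the standard formula 0.5*(log(var) + (x-target)**2/var)
>>> loss = ops.gaussian_nll_loss(x, target, var, full=False, eps=1e-6, reduction='mean')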