forked from mindspore-Ecosystem/mindspore
!40703 modify the supported platforms in py files
Merge pull request !40703 from 宦晓玲/code_docs_0822
Commit: 5cd05b9203

@@ -11,7 +11,7 @@ mindspore.Tensor.atan2
 输入 `x` 和 `y` 会通过隐式数据类型转换使数据类型保持一致。如果数据类型不同,低精度的数据类型会被转换到高精度的数据类型。

 参数:
-- **y** (Tensor) - 输入Tensor,shape应能在广播后与 `x` 相同,或 `x` 的shape在广播后与 `y` 相同。
+- **y** (Tensor) - 输入Tensor。shape应能在广播后与 `x` 相同,或 `x` 的shape在广播后与 `y` 相同。

 返回:
 Tensor,与广播后的输入shape相同,和 `x` 数据类型相同。

@@ -10,15 +10,15 @@ mindspore.Tensor.bernoulli
 out_{i} \sim Bernoulli(p_{i})

 参数:
-- **p** (Union[Tensor, float], 可选) - shape需要可以被广播到当前Tensor。其数据类型为float32或float64。`p` 中每个值代表输出Tensor中对应广播位置为1的概率,数值范围在0到1之间。默认值:0.5。
+- **p** (Union[Tensor, float], 可选) - shape需要可以被广播到当前Tensor。其数据类型为float32或float64。`p` 中每个值代表输出Tensor中对应广播位置为1的概率。数值范围在0到1之间。默认值:0.5。
-- **seed** (int, 可选) - 随机种子,用于生成随机数,数值范围是-1或正整数。默认值:-1,代表取当前时间戳。
+- **seed** (int, 可选) - 随机种子,用于生成随机数。数值范围是-1或正整数。默认值:-1,代表取当前时间戳。

 返回:
 Tensor,shape和数据类型与当前Tensor相同。

 异常:
-- **TypeError** - 当前Tensor的数据类型不在int8, uint8, int16, int32, int64, bool, float32和float64中。
+- **TypeError** - 当前Tensor的数据类型不在int8、uint8、int16、int32、int64、bool、float32和float64中。
 - **TypeError** - `p` 的数据类型既不是float32也不是float64。
-- **TypeError** - `seed` 不是int。
+- **TypeError** - `seed` 的数据类型不是int。
-- **ValueError** - `seed` 是负数且不为-1。
+- **ValueError** - `p` 数值范围不在0到1之间。
-- **ValueError** - `p` 数值范围不在0到1之间。
+- **ValueError** - `seed` 是负数且不为-1。
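
For reviewers, a minimal usage sketch of the behaviour documented in the `bernoulli` hunk above, assuming the `Tensor.bernoulli(p, seed)` signature and defaults described there:

```python
import numpy as np
import mindspore as ms
from mindspore import Tensor

# Any of the dtypes listed above works for the input; float32 is used here.
x = Tensor(np.zeros((2, 3)), ms.float32)
# Each output element is drawn independently: 1 with probability p, else 0.
# The result has the same shape and dtype as x.
out = x.bernoulli(p=0.3, seed=1)
print(out.shape, out.dtype)
```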

@@ -3,7 +3,7 @@ mindspore.Tensor.inv

 .. py:method:: mindspore.Tensor.inv()

-计算当前Tensor的倒数。
+逐元素计算当前Tensor的倒数。

 .. math::
 out_i = \frac{1}{x_{i} }

@@ -3,7 +3,7 @@ mindspore.Tensor.invert

 .. py:method:: mindspore.Tensor.invert()

-按位翻转当前Tensor。
+逐元素按位翻转当前Tensor。

 .. math::
 out_i = \sim x_{i}

@@ -3,13 +3,13 @@ mindspore.Tensor.lerp

 .. py:method:: mindspore.Tensor.lerp(end, weight)

-基于某个浮点数Scalar或权重Tensor的值, 计算当前Tensor和 `end` Tensor之间的线性插值。
+基于某个浮点数Scalar或权重Tensor的值,计算当前Tensor和 `end` Tensor之间的线性插值。

 如果参数 `weight` 是一个Tensor,那么另两个输入的维度信息可以被广播到当前Tensor。
-如果参数 `weight` 是一个Scalar, 那么 `end` 的维度信息可以被广播到当前Tensor。
+如果参数 `weight` 是一个Scalar,那么 `end` 的维度信息可以被广播到当前Tensor。

 参数:
-- **end** (Tensor) - 进行线性插值的Tensor结束点,其数据类型必须为float16或者float32。
+- **end** (Tensor) - 进行线性插值的Tensor结束点。数据类型必须为float16或者float32。
 - **weight** (Union[float, Tensor]) - 线性插值公式的权重参数。当为Scalar时,其数据类型为float,当为Tensor时,其数据类型为float16或者float32。

 返回:
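
A short sketch of the interpolation described in the `lerp` hunk above, assuming `Tensor.lerp(end, weight)` with a scalar float weight:

```python
import numpy as np
import mindspore as ms
from mindspore import Tensor

start = Tensor(np.array([1., 2., 3., 4.]), ms.float32)
end = Tensor(np.array([10., 10., 10., 10.]), ms.float32)
# With a scalar weight w, each element is start_i + w * (end_i - start_i).
out = start.lerp(end, 0.5)
print(out)  # expected roughly [5.5 6.  6.5 7. ]
```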

@@ -5,6 +5,8 @@ mindspore.Tensor.log1p

 对当前Tensor逐元素加一后计算自然对数。

+其中 `x` 表示当前Tensor。
+
 .. math::
 out_i = {log_e}(x_i + 1)
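
A minimal sketch of the formula `out_i = log_e(x_i + 1)` from the hunk above, assuming `Tensor.log1p()` takes no arguments:

```python
import numpy as np
import mindspore as ms
from mindspore import Tensor

x = Tensor(np.array([0.0, 1.0, 2.0]), ms.float32)
# log1p(0) == 0, log1p(1) == ln(2), log1p(2) == ln(3)
out = x.log1p()
print(out)  # roughly [0.        0.6931472 1.0986123]
```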

@@ -3,7 +3,7 @@ mindspore.Tensor.logit

 .. py:method:: mindspore.Tensor.logit(eps=None)

-逐元素计算张量的logit值,当 eps 不是 None 时, `x` 中的元素被截断到范围[eps, 1-eps]内。
+逐元素计算张量的logit值。当 eps 不是 None 时, `x` 中的元素被截断到范围[eps, 1-eps]内。
 当 eps 为 None 时,输入 `x` 不进行数值截断。

 `x` 指的当前 Tensor。

@@ -6,7 +6,7 @@ mindspore.Tensor.std
 计算指定维度的标准差。
 标准差是方差的算术平方根,如::math:`std = sqrt(mean(abs(x - x.mean())**2))` 。

-返回标准差。默认情况下计算展开数组的标准差,否则在指定维度上计算。
+返回标准差,默认情况下计算展开数组的标准差,否则在指定维度上计算。

 .. note::
 不支持NumPy参数 `dtype` 、 `out` 和 `where` 。

@@ -8,8 +8,8 @@ mindspore.Tensor.svd
 更多参考详见 :func:`mindspore.ops.svd`。

 参数:
-- **full_matrices** (bool, optional) - 如果这个参数为True,则计算完整的 :math:`U` 和 :math:`V` 。否则 :math:`U` 和 :math:`V` 的shape和P有关,P是M和N的较小值, M和N是输入矩阵的行和列。默认值:False。
+- **full_matrices** (bool, optional) - 如果这个参数为True,则计算完整的 :math:`U` 和 :math:`V` 。否则 :math:`U` 和 :math:`V` 的shape和P有关。P是M和N的较小值。M和N是输入矩阵的行和列。默认值:False。
-- **compute_uv** (bool, optional) - 如果这个参数为True,则计算 :math:`U` 和 :math:`V` ,否则只计算 :math:`S` 。默认值:True。
+- **compute_uv** (bool, optional) - 如果这个参数为True,则计算 :math:`U` 和 :math:`V` 。如果为false,只计算 :math:`S` 。默认值:True。

 返回:
 - **s** (Tensor) - 奇异值。shape为 :math:`(*, P)`。
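
A sketch of the `full_matrices` / `compute_uv` options discussed in the `svd` hunk above; the `(s, u, v)` return order is an assumption, taken from :func:`mindspore.ops.svd` which the docstring references:

```python
import numpy as np
import mindspore as ms
from mindspore import Tensor

a = Tensor(np.array([[1., 2.], [3., 4.], [5., 6.]]), ms.float32)  # M=3, N=2, so P=2
s, u, v = a.svd(full_matrices=False, compute_uv=True)
print(s.shape)            # (2,): the P singular values
print(u.shape, v.shape)   # reduced factors because full_matrices=False
```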

@@ -10,7 +10,7 @@ mindspore.Tensor.tan
 out_i = tan(x_i)

 返回:
-Tensor。
+Tensor,和当前输入的shape一样。

 异常:
 - **TypeError** - 当前输入不是Tensor。

@@ -7,7 +7,7 @@ mindspore.Tensor.var

 方差是平均值的平方偏差的平均值,即::math:`var = mean(abs(x - x.mean())**2)` 。

-返回方差值。默认情况下计算展开Tensor的方差,否则在指定维度上计算。
+返回方差值,默认情况下计算展开Tensor的方差,否则在指定维度上计算。

 .. note::
 不支持NumPy参数 `dtype` 、 `out` 和 `where` 。

@@ -421,6 +421,7 @@ class Tensor(Tensor_):
 Convert numpy array to Tensor.
 If the data is not C contiguous, the data will be copied to C contiguous to construct the tensor.
 Otherwise, The tensor will be constructed using this numpy array without copy.

 Args:
 array (numpy.array): The input array.

@@ -696,7 +697,7 @@ class Tensor(Tensor_):
 r"""
 Returns arctangent of x/y element-wise.

-`x` refer to self tensor.
+`x` refers to self tensor.

 It returns :math:`\theta\ \in\ [-\pi, \pi]`
 such that :math:`x = r*\sin(\theta), y = r*\cos(\theta)`, where :math:`r = \sqrt{x^2 + y^2}`.

@@ -706,10 +707,11 @@ class Tensor(Tensor_):
 the relatively highest precision data type.

 Args:
-y (Tensor): The input tensor. It has the same shape with `x`.
+y (Tensor): The input tensor. It has the same shape with `x` after broadcasting,
+or the shape of `x` is the same as `y` after broadcasting.

 Returns:
-Tensor, the shape is the same as the one after broadcasting,and the data type is same as `x`.
+Tensor, the shape is the same as the one after broadcasting, and the data type is same as `x`.

 Raises:
 TypeError: If `x` or `y` is not a Tensor.
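
A small sketch of the broadcasting behaviour added to the `atan2` docstring above, assuming `x.atan2(y)` computes arctan(x/y) element-wise as documented:

```python
import numpy as np
import mindspore as ms
from mindspore import Tensor

x = Tensor(np.array([0., 1.]), ms.float32)
y = Tensor(np.array([1., 1.]), ms.float32)
# theta_i = arctan(x_i / y_i), returned in [-pi, pi]
out = x.atan2(y)
print(out)  # roughly [0.        0.7853982]  (0 and pi/4)
```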

@@ -1246,7 +1248,7 @@ class Tensor(Tensor_):
 """
 Does a linear interpolation of two tensors start and end based on a float or tensor weight.

-If `weight` is a tensor, the shapes of two inputs need to be broadcast;
+If `weight` is a tensor, the shapes of two inputs need to be broadcast.
 If `weight` is a float, the shapes of `end` need to be broadcast.

 Args:

@@ -1407,7 +1409,7 @@ class Tensor(Tensor_):
 r"""
 Computes the determinant of one or more square matrices.

-`x` refer to self tensor.
+`x` refers to self tensor.

 Returns:

@@ -1435,7 +1437,7 @@ class Tensor(Tensor_):
 r"""
 Returns the natural logarithm of one plus the input tensor element-wise.

-`x` refer to self tensor.
+`x` refers to self tensor.

 .. math::
 out_i = {log_e}(x_i + 1)

@@ -1465,7 +1467,7 @@ class Tensor(Tensor_):
 Calculate the logit of a tensor element-wise. When eps is not None, element in 'x' is clamped to [eps, 1-eps].
 When eps is None, input 'x' is not clamped.

-`x` refer to self tensor.
+`x` refers to self tensor.

 .. math::
 \begin{align}
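
A sketch of the clamping behaviour referenced in the `logit` hunk above, assuming `Tensor.logit(eps=None)` as documented:

```python
import numpy as np
import mindspore as ms
from mindspore import Tensor

x = Tensor(np.array([0.1, 0.5, 0.9]), ms.float32)
# logit(p) = ln(p / (1 - p)); with eps set, elements are first clamped to [eps, 1 - eps].
out = x.logit(eps=1e-5)
print(out)  # roughly [-2.1972246  0.         2.1972246]
```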

@@ -1508,7 +1510,7 @@ class Tensor(Tensor_):
 r"""
 Computes the sign and the log of the absolute value of the determinant of one or more square matrices.

-`x` refer to self tensor.
+`x` refers to self tensor.

 Returns:
 Tensor, The signs of the log determinants. The shape is :math:`x\_shape[:-2]`, the dtype is same as `x`.

@@ -1601,6 +1603,8 @@ class Tensor(Tensor_):
 .. math::
 out_i = \frac{1}{x_{i} }

+where `x` refers to self Tensor.
+
 Returns:
 Tensor, has the same type and shape as self Tensor.

@@ -1626,6 +1630,8 @@ class Tensor(Tensor_):
 .. math::
 out_i = \sim x_{i}

+where `x` refers to self Tensor.
+
 Returns:
 Tensor, has the same shape as as self Tensor.
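
The two element-wise formulas above (`1 / x_i` and `~x_i`) in a minimal sketch, assuming `Tensor.inv()` and `Tensor.invert()` as documented and an int16 input for the bitwise case:

```python
import numpy as np
import mindspore as ms
from mindspore import Tensor

x = Tensor(np.array([0.25, 0.4, 2.0]), ms.float32)
print(x.inv())     # element-wise 1 / x_i -> roughly [4.  2.5 0.5]

m = Tensor(np.array([0, 1, 255], dtype=np.int16))
print(m.invert())  # element-wise bitwise NOT (~x_i) -> [-1 -2 -256]
```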

@@ -2191,9 +2197,6 @@ class Tensor(Tensor_):
 Returns:
 Tensor, has the same data type as input.

-Supported Platforms:
-``Ascend`` ``GPU`` ``CPU``
-
 Raises:
 TypeError: If `order` is not string type.
 ValueError: If `order` is string type, but not 'C' or 'F'.

@@ -2203,6 +2206,9 @@ class Tensor(Tensor_):

 :func:`mindspore.Tensor.ravel`: Return a contiguous flattened tensor.

+Supported Platforms:
+``Ascend`` ``GPU`` ``CPU``
+
 Examples:
 >>> import numpy as np
 >>> from mindspore import Tensor

@@ -2582,12 +2588,12 @@ class Tensor(Tensor_):
 Returns:
 Tensor.

-Supported Platforms:
-``Ascend`` ``GPU`` ``CPU``
-
 See also:
 :func:`mindspore.Tensor.sum`: Return sum of tensor elements over a given axis.

+Supported Platforms:
+``Ascend`` ``GPU`` ``CPU``
+
 Examples:
 >>> import numpy as np
 >>> from mindspore import Tensor

@@ -3773,12 +3779,12 @@ class Tensor(Tensor_):
 Returns:
 Tensor, the merged result.

-Supported Platforms:
-``Ascend`` ``GPU`` ``CPU``
-
 Raises:
 ValueError: If the input tensor and any of the `choices` cannot be broadcast.

+Supported Platforms:
+``Ascend`` ``GPU`` ``CPU``
+
 Examples:
 >>> import numpy as np
 >>> from mindspore import Tensor

@@ -4572,7 +4578,7 @@ class Tensor(Tensor_):
 Only 2-D tensor is supported for now.

 Returns:
-COOTensor, a sparse representation of the original dense tensor, containing:
+COOTensor, a sparse representation of the original dense tensor, containing the following parts.

 - indices (Tensor): 2-D integer tensor, indicates the positions of `values` of the dense tensor.
 - values (Tensor): 1-D tensor, indicates the non-zero values of the dense tensor.
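
A sketch of the dense-to-COO conversion described above, assuming a 2-D input and a backend where `Tensor.to_coo()` is supported:

```python
import numpy as np
import mindspore as ms
from mindspore import Tensor

dense = Tensor(np.array([[0., 1.], [2., 0.]]), ms.float32)  # 2-D only, as noted above
coo = dense.to_coo()
print(coo.indices)  # positions of the non-zero entries, e.g. [[0 1] [1 0]]
print(coo.values)   # the non-zero values, e.g. [1. 2.]
```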

@@ -4606,7 +4612,7 @@ class Tensor(Tensor_):
 Only 2-D tensor is supported for now.

 Returns:
-CSRTensor, a sparse representation of the original dense tensor, containing:
+CSRTensor, a sparse representation of the original dense tensor, containing the following parts.

 - indptr (Tensor): 1-D integer tensor, indicates the start and end point for `values` in each row.
 - indices (Tensor): 1-D integer tensor, indicates the column positions of all non-zero values of the input.

@@ -4764,7 +4770,7 @@ class Tensor(Tensor_):
 RuntimeError: If `axis` is not in the range of :math:`[-ndim, ndim-1]`.

 Supported Platforms:
-``GPU``
+``Ascend`` ``GPU``

 Examples:
 >>> import numpy as np

@@ -4861,20 +4867,20 @@ class Tensor(Tensor_):
 return tensor_operator_registry.get('diag')()(self)

 def xdivy(self, y):
-"""
+r"""
 Divides self tensor by the input tensor element-wise. Returns zero when self is zero. The dtype of
 original Tensor must be one of float, complex or bool. For simplicity, denote the original Tensor by x.

 .. math::

-out_i = x_{i} / {y_{i}}
+out_i = x_{i}\y_{i}

 `x` and `y` comply with the implicit type conversion rules to make the data types consistent.
 'y' must be tensor or scalar, when y is tensor, dtypes of x and y cannot be bool at the same time,
 and the shapes of them could be broadcast. When y is scalar, the scalar can only be a constant.

 Args:
-- **y** (Union[Tensor, Number, bool]) - The second input y is a Number,
+y (Union[Tensor, number.Number, bool]): The second input y is a Number,
 or a bool when the first input x is a tensor, or a tensor whose data type is float16,
 float32, float64, complex64, complex128 or bool.
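
A sketch of the zero-preserving division described in the `xdivy` hunk above, assuming `Tensor.xdivy(y)` with float32 inputs:

```python
import numpy as np
import mindspore as ms
from mindspore import Tensor

x = Tensor(np.array([0., 4., 9.]), ms.float32)
y = Tensor(np.array([5., 2., 3.]), ms.float32)
# out_i = x_i / y_i, except the result is 0 wherever x_i is 0.
print(x.xdivy(y))  # roughly [0. 2. 3.]
```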

@@ -4964,7 +4970,7 @@ class Tensor(Tensor_):
 - On Ascend, the data type of `x` and `y` must be float16 or float32.

 Args:
-- **y** (Union[Tensor, number.Number, bool]) - The `y` input is a number.Number or
+y (Union[Tensor, number.Number, bool]): The `y` input is a number.Number or
 a bool or a tensor whose data type is number or bool.

 Returns:

@@ -5039,7 +5045,7 @@ class Tensor(Tensor_):
 .. warning::
 - If sorted is set to 'False', it will use the aicpu operator, the performance may be reduced.

-`input_x` refer to self tensor.
+`input_x` refers to self tensor.

 If the `input_x` is a one-dimensional Tensor, finds the `k` largest entries in the Tensor,
 and outputs its value and index as a Tensor. Therefore, values[`k`] is the `k` largest item in `input_x`,
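
The top-k behaviour described above, sketched with the `ops.TopK` primitive; treating the Tensor method as a wrapper around this operator is an assumption, since the method name itself is not shown in the hunk:

```python
import numpy as np
import mindspore as ms
from mindspore import Tensor, ops

input_x = Tensor(np.array([3., 1., 4., 1., 5.]), ms.float32)
# Find the 3 largest entries and their indices, sorted in descending order.
values, indices = ops.TopK(sorted=True)(input_x, 3)
print(values)   # roughly [5. 4. 3.]
print(indices)  # their positions in input_x: [4 2 0]
```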

@@ -90,7 +90,7 @@ class CELU(Cell):
 TypeError: If the dtype of 'input_x' is neither float16 nor float32.

 Supported Platforms:
-``Ascend``
+``Ascend`` ``GPU`` ``CPU``

 Examples:
 >>> x = Tensor(np.array([-2.0, -1.0, 1.0, 2.0]), mindspore.float32)

@@ -967,7 +967,7 @@ class Softsign(Cell):
 Refer to :func:`mindspore.ops.softsign` for more details.

 Supported Platforms:
-``Ascend`` ``CPU``
+``Ascend`` ``GPU`` ``CPU``

 Examples:
 >>> x = Tensor(np.array([0, -1, 2, 30, -30]), mindspore.float32)

@@ -1409,7 +1409,7 @@ class Mish(Cell):
 Refer to :func:`mindspore.ops.mish` for more details.

 Supported Platforms:
-``Ascend`` ``CPU``
+``Ascend`` ``GPU`` ``CPU``

 Examples:
 >>> x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)

@@ -839,9 +839,6 @@ class Conv3dTranspose(_Conv):
 \text{kernel_size[2]} - 1 }{\text{stride[2]}} + 1} \right \rfloor \\
 \end{array}

-Supported Platforms:
-``Ascend`` ``GPU``
-
 Raises:
 TypeError: If `in_channels`, `out_channels` or `group` is not an int.
 TypeError: If `kernel_size`, `stride`, `padding` , `dilation` or `output_padding`

@@ -854,6 +851,9 @@ class Conv3dTranspose(_Conv):
 ValueError: If `pad_mode` is not equal to 'pad' and `padding` is not equal to (0, 0, 0, 0, 0, 0).
 ValueError: If `data_format` is not 'NCDHW'.

+Supported Platforms:
+``Ascend`` ``GPU`` ``CPU``
+
 Examples:
 >>> x = Tensor(np.ones([32, 16, 10, 32, 32]), mindspore.float32)
 >>> conv3d_transpose = nn.Conv3dTranspose(in_channels=16, out_channels=3, kernel_size=(4, 6, 2),

@@ -409,7 +409,7 @@ def argmin(x, axis=-1):
 TypeError: If `axis` is not an int.

 Supported Platforms:
-``Ascend`` ``CPU``
+``Ascend`` ``GPU`` ``CPU``

 Examples:
 >>> input_x = Tensor(np.array([2.0, 3.1, 1.2]), mindspore.float32)

@@ -1998,7 +1998,7 @@ def bessel_i0e(x):
 TypeError: If dtype of `x` is not float16, float32 or float64.

 Supported Platforms:
-``CPU``
+``GPU`` ``CPU``

 Examples:
 >>> x = Tensor(np.array([-1, -0.5, 0.5, 1]), mindspore.float32)

@@ -2405,7 +2405,7 @@ def trunc(input_x):
 TypeError: If `input_x` is not a Tensor.

 Supported Platforms:
-``Ascend`` ``CPU``
+``CPU``

 Examples:
 >>> input_x = Tensor(np.array([3.4742, 0.5466, -0.8008, -3.9079]),mindspore.float32)
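
A sketch of `mindspore.ops.trunc` with the same sample input used in the docstring's Examples block above:

```python
import numpy as np
import mindspore as ms
from mindspore import Tensor, ops

input_x = Tensor(np.array([3.4742, 0.5466, -0.8008, -3.9079]), ms.float32)
# Fractional parts are dropped toward zero.
print(ops.trunc(input_x))  # roughly [ 3.  0. -0. -3.]
```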

@@ -851,7 +851,7 @@ def softsign(x):
 TypeError: If dtype of `x` is neither float16 nor float32.

 Supported Platforms:
-``Ascend`` ``CPU``
+``Ascend`` ``GPU`` ``CPU``

 Examples:
 >>> from mindspore.ops import functional as F

@@ -1729,7 +1729,7 @@ def mish(x):
 TypeError: If dtype of `x` is neither float16 nor float32.

 Supported Platforms:
-``Ascend`` ``CPU``
+``Ascend`` ``GPU`` ``CPU``

 Examples:
 >>> input_x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)

@@ -588,8 +588,8 @@ class Im2Col(Primitive):
 Default: 0.

 Inputs:
-- **x** (Tensor) : input tensor, only 4-D input tensors (batched image-like tensors) are supported.
+- **x** (Tensor) - input tensor, only 4-D input tensors (batched image-like tensors) are supported.
 support all real number data type.

 Outputs:
 Tensor, a 4-D Tensor with same type of input `x`.

@@ -3348,7 +3348,7 @@ class StridedSlice(PrimitiveWithInfer):
 `x[2:,...]` is equivalent to `x[2:5,:,:,:]`.

 If the ith bit of `new_axis_mask` is set, `begin`, `end` and `strides` are ignored and a new length 1
-dimension is added at the specified position in tthe output tensor.
+dimension is added at the specified position in the output tensor.

 As for a 5*6*7 tensor, `x[:2, newaxis, :6]` will produce a tensor with shape :math:`(2, 1, 7)` .

@@ -6460,7 +6460,7 @@ class SearchSorted(PrimitiveWithInfer):

 Inputs:
 - **sequence** (Tensor) - The shape of tensor is :math:`(x_1, x_2, ..., x_R-1, x_R)` or `(x_1)`.
 It must contain monitonically increasing sequence on the innermost dimension.
 - **values** (Tensor) - The shape of tensor is : math:`(x_1, x_2, ..., x_R-1, x_S)`.

 Outputs:

@@ -8120,6 +8120,9 @@ class PopulationCount(Primitive):
 Computes element-wise population count(a.k.a bitsum, bitcount).

 Refer to :func:`mindspore.ops.population_count` for more detail.
+
+Supported Platforms:
+``Ascend`` ``GPU`` ``CPU``
 """

 @prim_attr_register
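
A sketch of the operator documented above, assuming the functional form :func:`mindspore.ops.population_count` referenced in the docstring and an int16 input:

```python
import numpy as np
import mindspore as ms
from mindspore import Tensor, ops

x = Tensor(np.array([0, 1, 3, 255], dtype=np.int16))
# Counts the number of 1-bits in each element.
print(ops.population_count(x))  # [0 1 2 8]
```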

@@ -1859,6 +1859,9 @@ class InplaceAdd(PrimitiveWithInfer):

 Refer to :func:`mindspore.ops.inplace_add` for more detail.

+Supported Platforms:
+``Ascend`` ``CPU``
+
 Examples:
 >>> import numpy as np
 >>> import mindspore

@@ -1912,6 +1915,9 @@ class InplaceSub(PrimitiveWithInfer):

 Refer to :func:`mindspore.ops.inplace_sub` for more detail.

+Supported Platforms:
+``Ascend`` ``CPU``
+
 Examples:
 >>> import numpy as np
 >>> import mindspore

@@ -4923,7 +4929,7 @@ class BesselI0e(Primitive):
 TypeError: If dtype of `x` is not float16, float32 or float64.

 Supported Platforms:
-``Ascend`` ``CPU`` ``GPU``
+``GPU`` ``CPU``

 Examples:
 >>> bessel_i0e = ops.BesselI0e()

@@ -4963,7 +4969,7 @@ class BesselI1e(Primitive):
 TypeError: If dtype of `x` is not float16, float32 or float64.

 Supported Platforms:
-``Ascend`` ``CPU`` ``GPU``
+``CPU``

 Examples:
 >>> bessel_i1e = ops.BesselI1e()

@@ -5773,6 +5779,9 @@ class Trunc(Primitive):
 Returns a new tensor with the truncated integer values of the elements of input.

 Refer to :func:`mindspore.ops.trunc` for more detail.
+
+Supported Platforms:
+``CPU``
 """

 @prim_attr_register

@@ -7253,6 +7253,9 @@ class CTCGreedyDecoder(Primitive):
 Performs greedy decoding on the logits given in inputs.

 Refer to :func:`mindspore.ops.ctc_greedy_decoder` for more detail.
+
+Supported Platforms:
+``Ascend`` ``CPU``
 """

 @prim_attr_register

@@ -8585,7 +8588,7 @@ class Conv3DTranspose(Primitive):
 ValueError: If bias is not none. The rank of dout and weight is not 5.

 Supported Platforms:
-``Ascend`` ``GPU``
+``Ascend`` ``GPU`` ``CPU``

 Examples:
 >>> dout = Tensor(np.ones([32, 16, 10, 32, 32]), mindspore.float16)