!40703 modify the supported platforms in py files

Merge pull request !40703 from 宦晓玲/code_docs_0822
This commit is contained in:
i-robot 2022-08-23 09:19:37 +00:00 committed by Gitee
commit 5cd05b9203
No known key found for this signature in database
GPG Key ID: 173E9B9CA92EEF8F
19 changed files with 84 additions and 61 deletions

View File

@ -11,7 +11,7 @@ mindspore.Tensor.atan2
The inputs `x` and `y` follow implicit type conversion rules to keep the data types consistent. If the data types differ, the lower-precision data type is converted to the higher-precision data type.
Args:
- **y** (Tensor) - The input Tensor. Its shape should be the same as `x` after broadcasting, or the shape of `x` should be the same as `y` after broadcasting.
- **y** (Tensor) - The input Tensor. Its shape should be the same as `x` after broadcasting, or the shape of `x` should be the same as `y` after broadcasting.
Returns:
Tensor, with the same shape as the broadcast inputs and the same data type as `x`.
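
A minimal usage sketch, assuming the `atan2(y)` method documented above (results in the comments are approximate):
>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> x = Tensor(np.array([0.0, 1.0]), mindspore.float32)
>>> y = Tensor(np.array([1.0, 1.0]), mindspore.float32)
>>> out = x.atan2(y)  # element-wise arctangent of x/y
>>> print(out)  # approximately [0.  0.7853982], i.e. [0, pi/4]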

View File

@ -10,15 +10,15 @@ mindspore.Tensor.bernoulli
out_{i} \sim Bernoulli(p_{i})
Args:
- **p** (Union[Tensor, float], optional) - Its shape must be broadcastable to the current Tensor, and its data type is float32 or float64. Each value in `p` represents the probability that the corresponding broadcast position of the output Tensor is 1; the value range is 0 to 1. Default: 0.5.
- **seed** (int, optional) - The random seed used to generate random numbers; the value is -1 or a positive integer. Default: -1, which means the current timestamp is used.
- **p** (Union[Tensor, float], optional) - Its shape must be broadcastable to the current Tensor, and its data type is float32 or float64. Each value in `p` represents the probability that the corresponding broadcast position of the output Tensor is 1; the value range is 0 to 1. Default: 0.5.
- **seed** (int, optional) - The random seed used to generate random numbers; the value is -1 or a positive integer. Default: -1, which means the current timestamp is used.
Returns:
Tensor, with the same shape and data type as the current Tensor.
Raises:
- **TypeError** - The data type of the current Tensor is not one of int8, uint8, int16, int32, int64, bool, float32 and float64.
- **TypeError** - The data type of the current Tensor is not one of int8, uint8, int16, int32, int64, bool, float32 and float64.
- **TypeError** - The data type of `p` is neither float32 nor float64.
- **TypeError** - `seed` is not an int.
- **ValueError** - `seed` is negative and is not -1.
- **ValueError** - The value of `p` is not in the range 0 to 1.
- **TypeError** - The data type of `seed` is not int.
- **ValueError** - The value of `p` is not in the range 0 to 1.
- **ValueError** - `seed` is negative and is not -1.
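
A minimal usage sketch based on the parameters documented above; the result is random, so only its shape and data type are deterministic:
>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> x = Tensor(np.ones((2, 3)), mindspore.float32)
>>> out = x.bernoulli(p=0.5, seed=-1)  # each element is 1 with probability 0.5
>>> print(out.shape)  # (2, 3), same shape as the current Tensor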

View File

@ -3,7 +3,7 @@ mindspore.Tensor.inv
.. py:method:: mindspore.Tensor.inv()
Computes the reciprocal of the current Tensor.
Computes the reciprocal of the current Tensor element-wise.
.. math::
out_i = \frac{1}{x_{i} }
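
A short sketch of the element-wise reciprocal described above (results approximate):
>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> x = Tensor(np.array([0.25, 0.4, 5.0]), mindspore.float32)
>>> print(x.inv())  # approximately [4.  2.5 0.2]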

View File

@ -3,7 +3,7 @@ mindspore.Tensor.invert
.. py:method:: mindspore.Tensor.invert()
Flips the bits of the current Tensor.
Flips the bits of the current Tensor element-wise.
.. math::
out_i = \sim x_{i}
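
A short sketch of the element-wise bitwise flip described above, assuming a 16-bit integer input; for signed integers the result equals -x - 1:
>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> x = Tensor(np.array([25, 4, 13, 9]), mindspore.int16)
>>> print(x.invert())  # [-26  -5 -14 -10]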

View File

@ -3,13 +3,13 @@ mindspore.Tensor.lerp
.. py:method:: mindspore.Tensor.lerp(end, weight)
Computes the linear interpolation between the current Tensor and the `end` Tensor based on a float Scalar or a weight Tensor.
Computes the linear interpolation between the current Tensor and the `end` Tensor based on a float Scalar or a weight Tensor.
If the argument `weight` is a Tensor, the shapes of the other two inputs can be broadcast to the current Tensor.
If the argument `weight` is a Scalar, the shape of `end` can be broadcast to the current Tensor.
If the argument `weight` is a Scalar, the shape of `end` can be broadcast to the current Tensor.
Args:
- **end** (Tensor) - The end point Tensor of the linear interpolation; its data type must be float16 or float32.
- **end** (Tensor) - The end point Tensor of the linear interpolation. The data type must be float16 or float32.
- **weight** (Union[float, Tensor]) - The weight of the linear interpolation formula. When it is a Scalar, its data type is float; when it is a Tensor, its data type is float16 or float32.
Returns:
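
A short usage sketch of the linear interpolation described above, i.e. out = start + weight * (end - start):
>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> start = Tensor(np.array([1.0, 2.0, 3.0, 4.0]), mindspore.float32)
>>> end = Tensor(np.array([10.0, 10.0, 10.0, 10.0]), mindspore.float32)
>>> print(start.lerp(end, 0.5))  # approximately [5.5 6.  6.5 7. ]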

View File

@ -5,6 +5,8 @@ mindspore.Tensor.log1p
Computes the natural logarithm of one plus the current Tensor element-wise.
Here `x` refers to the current Tensor.
.. math::
out_i = {log_e}(x_i + 1)
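
A short sketch of the formula above (results approximate):
>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> x = Tensor(np.array([0.0, 1.0, 2.0]), mindspore.float32)
>>> print(x.log1p())  # approximately [0.        0.6931472 1.0986123]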

View File

@ -3,7 +3,7 @@ mindspore.Tensor.logit
.. py:method:: mindspore.Tensor.logit(eps=None)
Computes the logit of the tensor element-wise. When eps is not None, the elements of `x` are clamped to the range [eps, 1-eps].
Computes the logit of the tensor element-wise. When eps is not None, the elements of `x` are clamped to the range [eps, 1-eps].
When eps is None, the input `x` is not clamped.
`x` refers to the current Tensor.
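
A short sketch; logit(x) = log(x / (1 - x)), with the elements optionally clamped by `eps` as described above:
>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> x = Tensor(np.array([0.1, 0.2, 0.3]), mindspore.float32)
>>> print(x.logit(eps=1e-5))  # approximately [-2.1972246 -1.3862944 -0.8472978]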

View File

@ -6,7 +6,7 @@ mindspore.Tensor.std
Computes the standard deviation along the specified dimension.
The standard deviation is the arithmetic square root of the variance, i.e. :math:`std = sqrt(mean(abs(x - x.mean())**2))`.
Returns the standard deviation. By default, the standard deviation of the flattened array is computed; otherwise it is computed over the specified dimension.
Returns the standard deviation. By default, the standard deviation of the flattened array is computed; otherwise it is computed over the specified dimension.
.. note::
The NumPy arguments `dtype`, `out` and `where` are not supported.
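
A small numerical check of the formula above, written with NumPy purely for illustration:
>>> import numpy as np
>>> x = np.array([1.0, 2.0, 3.0, 4.0])
>>> std = np.sqrt(np.mean(np.abs(x - x.mean()) ** 2))  # std = sqrt(mean(abs(x - x.mean())**2))
>>> print(round(float(std), 6))  # 1.118034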

View File

@ -8,8 +8,8 @@ mindspore.Tensor.svd
For more details, see :func:`mindspore.ops.svd`.
Args:
- **full_matrices** (bool, optional) - If this parameter is True, compute the full :math:`U` and :math:`V`. Otherwise the shapes of :math:`U` and :math:`V` depend on P, where P is the smaller of M and N, and M and N are the numbers of rows and columns of the input matrix. Default: False.
- **compute_uv** (bool, optional) - If this parameter is True, compute :math:`U` and :math:`V`; otherwise only :math:`S` is computed. Default: True.
- **full_matrices** (bool, optional) - If this parameter is True, compute the full :math:`U` and :math:`V`. Otherwise the shapes of :math:`U` and :math:`V` depend on P. P is the smaller of M and N. M and N are the numbers of rows and columns of the input matrix. Default: False.
- **compute_uv** (bool, optional) - If this parameter is True, compute :math:`U` and :math:`V`. If False, only :math:`S` is computed. Default: True.
Returns:
- **s** (Tensor) - The singular values. The shape is :math:`(*, P)`.
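
A minimal sketch using the parameters documented above, assuming the singular values `s` are returned first when `compute_uv` is True:
>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> a = Tensor(np.array([[1.0, 2.0], [3.0, 4.0]]), mindspore.float32)
>>> s, u, v = a.svd(full_matrices=False, compute_uv=True)
>>> print(s.shape)  # (2,), since P = min(M, N) = 2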

View File

@ -10,7 +10,7 @@ mindspore.Tensor.tan
out_i = tan(x_i)
Returns:
Tensor.
Tensor, with the same shape as the current input.
Raises:
- **TypeError** - The current input is not a Tensor.
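
A short sketch of the element-wise tangent above (results approximate):
>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> x = Tensor(np.array([0.0, 0.7853982]), mindspore.float32)  # 0 and pi/4
>>> print(x.tan())  # approximately [0. 1.]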

View File

@ -7,7 +7,7 @@ mindspore.Tensor.var
The variance is the average of the squared deviations from the mean, i.e. :math:`var = mean(abs(x - x.mean())**2)`.
Returns the variance. By default, the variance of the flattened Tensor is computed; otherwise it is computed over the specified dimension.
Returns the variance. By default, the variance of the flattened Tensor is computed; otherwise it is computed over the specified dimension.
.. note::
The NumPy arguments `dtype`, `out` and `where` are not supported.
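
A small numerical check of the formula above, written with NumPy purely for illustration:
>>> import numpy as np
>>> x = np.array([1.0, 2.0, 3.0, 4.0])
>>> var = np.mean(np.abs(x - x.mean()) ** 2)  # var = mean(abs(x - x.mean())**2)
>>> print(float(var))  # 1.25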

View File

@ -421,6 +421,7 @@ class Tensor(Tensor_):
Convert numpy array to Tensor.
If the data is not C contiguous, the data will be copied to a C-contiguous array to construct the tensor.
Otherwise, the tensor will be constructed from this numpy array without copying.
Args:
array (numpy.array): The input array.
@ -696,7 +697,7 @@ class Tensor(Tensor_):
r"""
Returns arctangent of x/y element-wise.
`x` refer to self tensor.
`x` refers to self tensor.
It returns :math:`\theta\ \in\ [-\pi, \pi]`
such that :math:`x = r*\sin(\theta), y = r*\cos(\theta)`, where :math:`r = \sqrt{x^2 + y^2}`.
@ -706,10 +707,11 @@ class Tensor(Tensor_):
the relatively highest precision data type.
Args:
y (Tensor): The input tensor. It has the same shape with `x`.
y (Tensor): The input tensor. It has the same shape with `x` after broadcasting,
or the shape of `x` is the same as `y` after broadcasting.
Returns:
Tensor, the shape is the same as the one after broadcasting,and the data type is same as `x`.
Tensor, the shape is the same as the one after broadcasting, and the data type is the same as `x`.
Raises:
TypeError: If `x` or `y` is not a Tensor.
@ -1246,7 +1248,7 @@ class Tensor(Tensor_):
"""
Does a linear interpolation of two tensors start and end based on a float or tensor weight.
If `weight` is a tensor, the shapes of two inputs need to be broadcast;
If `weight` is a tensor, the shapes of two inputs need to be broadcast.
If `weight` is a float, the shape of `end` needs to be broadcast.
Args:
@ -1407,7 +1409,7 @@ class Tensor(Tensor_):
r"""
Computes the determinant of one or more square matrices.
`x` refer to self tensor.
`x` refers to self tensor.
Returns:
@ -1435,7 +1437,7 @@ class Tensor(Tensor_):
r"""
Returns the natural logarithm of one plus the input tensor element-wise.
`x` refer to self tensor.
`x` refers to self tensor.
.. math::
out_i = {log_e}(x_i + 1)
@ -1465,7 +1467,7 @@ class Tensor(Tensor_):
Calculate the logit of a tensor element-wise. When eps is not None, element in 'x' is clamped to [eps, 1-eps].
When eps is None, input 'x' is not clamped.
`x` refer to self tensor.
`x` refers to self tensor.
.. math::
\begin{align}
@ -1508,7 +1510,7 @@ class Tensor(Tensor_):
r"""
Computes the sign and the log of the absolute value of the determinant of one or more square matrices.
`x` refer to self tensor.
`x` refers to self tensor.
Returns:
Tensor, the signs of the log determinants. The shape is :math:`x\_shape[:-2]`, and the dtype is the same as `x`.
@ -1601,6 +1603,8 @@ class Tensor(Tensor_):
.. math::
out_i = \frac{1}{x_{i} }
where `x` refers to self Tensor.
Returns:
Tensor, has the same type and shape as self Tensor.
@ -1626,6 +1630,8 @@ class Tensor(Tensor_):
.. math::
out_i = \sim x_{i}
where `x` refers to self Tensor.
Returns:
Tensor, has the same shape as self Tensor.
@ -2191,9 +2197,6 @@ class Tensor(Tensor_):
Returns:
Tensor, has the same data type as input.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
Raises:
TypeError: If `order` is not string type.
ValueError: If `order` is string type, but not 'C' or 'F'.
@ -2203,6 +2206,9 @@ class Tensor(Tensor_):
:func:`mindspore.Tensor.ravel`: Return a contiguous flattened tensor.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> import numpy as np
>>> from mindspore import Tensor
@ -2582,12 +2588,12 @@ class Tensor(Tensor_):
Returns:
Tensor.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
See also:
:func:`mindspore.Tensor.sum`: Return sum of tensor elements over a given axis.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> import numpy as np
>>> from mindspore import Tensor
@ -3773,12 +3779,12 @@ class Tensor(Tensor_):
Returns:
Tensor, the merged result.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
Raises:
ValueError: If the input tensor and any of the `choices` cannot be broadcast.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> import numpy as np
>>> from mindspore import Tensor
@ -4572,7 +4578,7 @@ class Tensor(Tensor_):
Only 2-D tensor is supported for now.
Returns:
COOTensor, a sparse representation of the original dense tensor, containing:
COOTensor, a sparse representation of the original dense tensor, containing the following parts:
- indices (Tensor): 2-D integer tensor, indicates the positions of `values` of the dense tensor.
- values (Tensor): 1-D tensor, indicates the non-zero values of the dense tensor.
@ -4606,7 +4612,7 @@ class Tensor(Tensor_):
Only 2-D tensor is supported for now.
Returns:
CSRTensor, a sparse representation of the original dense tensor, containing:
CSRTensor, a sparse representation of the original dense tensor, containing the following parts:
- indptr (Tensor): 1-D integer tensor, indicates the start and end point for `values` in each row.
- indices (Tensor): 1-D integer tensor, indicates the column positions of all non-zero values of the input.
@ -4764,7 +4770,7 @@ class Tensor(Tensor_):
RuntimeError: If `axis` is not in the range of :math:`[-ndim, ndim-1]`.
Supported Platforms:
``GPU``
``Ascend`` ``GPU``
Examples:
>>> import numpy as np
@ -4861,20 +4867,20 @@ class Tensor(Tensor_):
return tensor_operator_registry.get('diag')()(self)
def xdivy(self, y):
"""
r"""
Divides self tensor by the input tensor element-wise. Returns zero when self is zero. The dtype of
original Tensor must be one of float, complex or bool. For simplicity, denote the original Tensor by x.
.. math::
out_i = x_{i} / {y_{i}}
out_i = x_{i} / y_{i}
`x` and `y` comply with the implicit type conversion rules to make the data types consistent.
`y` must be a tensor or a scalar. When `y` is a tensor, the dtypes of `x` and `y` cannot both be bool,
and their shapes can be broadcast. When `y` is a scalar, the scalar can only be a constant.
Args:
- **y** (Union[Tensor, Number, bool]) - The second input y is a Number,
y (Union[Tensor, number.Number, bool]): The second input y is a Number,
or a bool when the first input x is a tensor, or a tensor whose data type is float16,
float32, float64, complex64, complex128 or bool.
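
A short sketch of the zero-safe division described above, assuming the documented `xdivy(y)` method; a zero element in self yields zero even when the divisor is zero:
>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> x = Tensor(np.array([2.0, 4.0, 0.0]), mindspore.float32)
>>> y = Tensor(np.array([2.0, 2.0, 0.0]), mindspore.float32)
>>> print(x.xdivy(y))  # approximately [1. 2. 0.]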
@ -4964,7 +4970,7 @@ class Tensor(Tensor_):
- On Ascend, the data type of `x` and `y` must be float16 or float32.
Args:
- **y** (Union[Tensor, number.Number, bool]) - The `y` input is a number.Number or
y (Union[Tensor, number.Number, bool]): The `y` input is a number.Number or
a bool or a tensor whose data type is number or bool.
Returns:
@ -5039,7 +5045,7 @@ class Tensor(Tensor_):
.. warning::
- If sorted is set to 'False', it will use the aicpu operator, and performance may be reduced.
`input_x` refer to self tensor.
`input_x` refers to self tensor.
If the `input_x` is a one-dimensional Tensor, finds the `k` largest entries in the Tensor,
and outputs its value and index as a Tensor. Therefore, values[`k`] is the `k` largest item in `input_x`,

View File

@ -90,7 +90,7 @@ class CELU(Cell):
TypeError: If the dtype of 'input_x' is neither float16 nor float32.
Supported Platforms:
``Ascend``
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> x = Tensor(np.array([-2.0, -1.0, 1.0, 2.0]), mindspore.float32)
@ -967,7 +967,7 @@ class Softsign(Cell):
Refer to :func:`mindspore.ops.softsign` for more details.
Supported Platforms:
``Ascend`` ``CPU``
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> x = Tensor(np.array([0, -1, 2, 30, -30]), mindspore.float32)
@ -1409,7 +1409,7 @@ class Mish(Cell):
Refer to :func:`mindspore.ops.mish` for more details.
Supported Platforms:
``Ascend`` ``CPU``
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)

View File

@ -839,9 +839,6 @@ class Conv3dTranspose(_Conv):
\text{kernel_size[2]} - 1 }{\text{stride[2]}} + 1} \right \rfloor \\
\end{array}
Supported Platforms:
``Ascend`` ``GPU``
Raises:
TypeError: If `in_channels`, `out_channels` or `group` is not an int.
TypeError: If `kernel_size`, `stride`, `padding` , `dilation` or `output_padding`
@ -854,6 +851,9 @@ class Conv3dTranspose(_Conv):
ValueError: If `pad_mode` is not equal to 'pad' and `padding` is not equal to (0, 0, 0, 0, 0, 0).
ValueError: If `data_format` is not 'NCDHW'.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> x = Tensor(np.ones([32, 16, 10, 32, 32]), mindspore.float32)
>>> conv3d_transpose = nn.Conv3dTranspose(in_channels=16, out_channels=3, kernel_size=(4, 6, 2),

View File

@ -409,7 +409,7 @@ def argmin(x, axis=-1):
TypeError: If `axis` is not an int.
Supported Platforms:
``Ascend`` ``CPU``
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> input_x = Tensor(np.array([2.0, 3.1, 1.2]), mindspore.float32)
@ -1998,7 +1998,7 @@ def bessel_i0e(x):
TypeError: If dtype of `x` is not float16, float32 or float64.
Supported Platforms:
``CPU``
``GPU`` ``CPU``
Examples:
>>> x = Tensor(np.array([-1, -0.5, 0.5, 1]), mindspore.float32)
@ -2405,7 +2405,7 @@ def trunc(input_x):
TypeError: If `input_x` is not a Tensor.
Supported Platforms:
``Ascend`` ``CPU``
``CPU``
Examples:
>>> input_x = Tensor(np.array([3.4742, 0.5466, -0.8008, -3.9079]),mindspore.float32)

View File

@ -851,7 +851,7 @@ def softsign(x):
TypeError: If dtype of `x` is neither float16 nor float32.
Supported Platforms:
``Ascend`` ``CPU``
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> from mindspore.ops import functional as F
@ -1729,7 +1729,7 @@ def mish(x):
TypeError: If dtype of `x` is neither float16 nor float32.
Supported Platforms:
``Ascend`` ``CPU``
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> input_x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)

View File

@ -588,8 +588,8 @@ class Im2Col(Primitive):
Default: 0.
Inputs:
- **x** (Tensor) : input tensor, only 4-D input tensors (batched image-like tensors) are supported.
support all real number data type.
- **x** (Tensor) - The input tensor; only 4-D input tensors (batched image-like tensors) are supported.
All real number data types are supported.
Outputs:
Tensor, a 4-D Tensor with the same type as the input `x`.
@ -3348,7 +3348,7 @@ class StridedSlice(PrimitiveWithInfer):
`x[2:,...]` is equivalent to `x[2:5,:,:,:]`.
If the ith bit of `new_axis_mask` is set, `begin`, `end` and `strides` are ignored and a new length 1
dimension is added at the specified position in tthe output tensor.
dimension is added at the specified position in the output tensor.
As for a 5*6*7 tensor, `x[:2, newaxis, :6]` will produce a tensor with shape :math:`(2, 1, 7)` .
@ -6460,7 +6460,7 @@ class SearchSorted(PrimitiveWithInfer):
Inputs:
- **sequence** (Tensor) - The shape of tensor is :math:`(x_1, x_2, ..., x_R-1, x_R)` or `(x_1)`.
It must contain a monotonically increasing sequence on the innermost dimension.
It must contain a monotonically increasing sequence on the innermost dimension.
- **values** (Tensor) - The shape of tensor is :math:`(x_1, x_2, ..., x_R-1, x_S)`.
Outputs:
@ -8120,6 +8120,9 @@ class PopulationCount(Primitive):
Computes the element-wise population count (a.k.a. bitsum, bitcount).
Refer to :func:`mindspore.ops.population_count` for more detail.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
"""
@prim_attr_register

View File

@ -1859,6 +1859,9 @@ class InplaceAdd(PrimitiveWithInfer):
Refer to :func:`mindspore.ops.inplace_add` for more detail.
Supported Platforms:
``Ascend`` ``CPU``
Examples:
>>> import numpy as np
>>> import mindspore
@ -1912,6 +1915,9 @@ class InplaceSub(PrimitiveWithInfer):
Refer to :func:`mindspore.ops.inplace_sub` for more detail.
Supported Platforms:
``Ascend`` ``CPU``
Examples:
>>> import numpy as np
>>> import mindspore
@ -4923,7 +4929,7 @@ class BesselI0e(Primitive):
TypeError: If dtype of `x` is not float16, float32 or float64.
Supported Platforms:
``Ascend`` ``CPU`` ``GPU``
``GPU`` ``CPU``
Examples:
>>> bessel_i0e = ops.BesselI0e()
@ -4963,7 +4969,7 @@ class BesselI1e(Primitive):
TypeError: If dtype of `x` is not float16, float32 or float64.
Supported Platforms:
``Ascend`` ``CPU`` ``GPU``
``CPU``
Examples:
>>> bessel_i1e = ops.BesselI1e()
@ -5773,6 +5779,9 @@ class Trunc(Primitive):
Returns a new tensor with the truncated integer values of the elements of input.
Refer to :func:`mindspore.ops.trunc` for more detail.
Supported Platforms:
``CPU``
"""
@prim_attr_register

View File

@ -7253,6 +7253,9 @@ class CTCGreedyDecoder(Primitive):
Performs greedy decoding on the logits given in inputs.
Refer to :func:`mindspore.ops.ctc_greedy_decoder` for more detail.
Supported Platforms:
``Ascend`` ``CPU``
"""
@prim_attr_register
@ -8585,7 +8588,7 @@ class Conv3DTranspose(Primitive):
ValueError: If bias is not None, or the rank of dout and weight is not 5.
Supported Platforms:
``Ascend`` ``GPU``
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> dout = Tensor(np.ones([32, 16, 10, 32, 32]), mindspore.float16)