!48801 add_ops_std_var_and_mean

Merge pull request !48801 from yide12/tensor_std_master
i-robot 2023-02-27 10:01:05 +00:00 committed by Gitee
commit c2a1ad5b7a
No known key found for this signature in database
GPG Key ID: 173E9B9CA92EEF8F
13 changed files with 724 additions and 55 deletions

View File

@@ -329,6 +329,9 @@ Reduction Functions
mindspore.ops.norm
mindspore.ops.prod
mindspore.ops.std
mindspore.ops.std_mean
mindspore.ops.var
mindspore.ops.var_mean
Comparison Functions
^^^^^^^^^^^^^^^^^^^^

View File

@@ -1,25 +1,28 @@
mindspore.ops.std
==================
.. py:function:: mindspore.ops.std(input_x, axis=(), unbiased=True, keep_dims=False)
.. py:function:: mindspore.ops.std(input, axis=None, ddof=0, keepdims=False)
By default, outputs the standard deviation and mean over all dimensions of the Tensor; they can also be computed over specified dimensions. If `axis` is a list of dimensions, the standard deviation and mean are computed over the corresponding dimensions.
By default, outputs the standard deviation over all dimensions of the Tensor; it can also be computed over specified dimensions. If `axis` is a list of dimensions, the standard deviation is computed over the corresponding dimensions.
.. note::
If `ddof` is 0, 1, True or False, the supported platforms are only `Ascend` and `CPU`. In other cases, the supported platforms are `Ascend`, `GPU` and `CPU`.
Args:
- **input_x** (Tensor[Number]) - The input Tensor of numeric dtype. The shape is :math:`(N, *)`, where :math:`*` means any number of additional dimensions. Its rank should be less than 8.
- **axis** (Union[int, tuple(int), list(int)]) - The dimensions to reduce. Default: (), reduce all dimensions. Only constant values are allowed. Assuming the rank of `input_x` is r, the range is [-r, r).
- **unbiased** (bool) - If True, Bessel's correction is used; otherwise it is not. Default: True.
- **keep_dims** (bool) - If True, the reduced dimensions are kept with size 1; otherwise they are removed. Default: False.
- **input** (Tensor[Number]) - The input Tensor of numeric dtype. The shape is :math:`(N, *)`, where :math:`*` means any number of additional dimensions. Its rank should be less than 8.
- **axis** (Union[int, tuple(int), list(int)], optional) - The dimensions to reduce. Only constant values are allowed. Assuming the rank of `input` is r, the range is [-r, r). Default: None, reduce all dimensions.
- **ddof** (Union[int, bool], optional) - Delta degrees of freedom. If an integer, the divisor used in the calculation is :math:`N - ddof`, where :math:`N` is the number of elements. If True, Bessel's correction is used. If False, the biased estimate is used to compute the variance. Default: 0.
- **keepdims** (bool, optional) - Whether to keep the dimensions of the output Tensor. If True, the reduced dimensions are kept with size 1; otherwise they are removed. Default: False.
Returns:
Tuple, a tuple of two Tensors: the standard deviation and the mean.
Suppose the shape of the input `input_x` is :math:`(x_0, x_1, ..., x_R)`:
- If `axis` is () and `keep_dims` is False, a 0-D Tensor is returned, representing the standard deviation of all elements in the input Tensor `input_x`.
- If `axis` is an int, e.g. 1, and `keep_dims` is False, the shape of the output is :math:`(x_0, x_2, ..., x_R)`.
- If `axis` is a tuple(int) or list(int), e.g. (1, 2), and `keep_dims` is False, the shape of the output Tensor is :math:`(x_0, x_3, ..., x_R)`.
Tensor, the standard deviation.
Suppose the shape of the input `input` is :math:`(x_0, x_1, ..., x_R)`:
- If `axis` is () and `keepdims` is False, a 0-D Tensor is returned, representing the standard deviation of all elements in the input Tensor `input`.
- If `axis` is an int, e.g. 1, and `keepdims` is False, the shape of the output is :math:`(x_0, x_2, ..., x_R)`.
- If `axis` is a tuple(int) or list(int), e.g. (1, 2), and `keepdims` is False, the shape of the output Tensor is :math:`(x_0, x_3, ..., x_R)`.
Raises:
- **TypeError** - `input_x` is not a Tensor.
- **TypeError** - `axis` is not one of the following types: int, tuple or list.
- **TypeError** - `keep_dims` is not a bool.
- **TypeError** - `input` is not a Tensor.
- **TypeError** - `axis` is not one of the following types: None, int, tuple or list.
- **TypeError** - `keepdims` is not a bool.
- **ValueError** - `axis` is out of range.

View File

@@ -0,0 +1,28 @@
mindspore.ops.std_mean
======================
.. py:function:: mindspore.ops.std_mean(input, axis=None, ddof=0, keepdims=False)
By default, outputs the standard deviation and mean over all dimensions of the Tensor; they can also be computed over specified dimensions. If `axis` is a list of dimensions, the standard deviation and mean are computed over the corresponding dimensions.
.. note::
If `ddof` is 0, 1, True or False, the supported platforms are only `Ascend` and `CPU`. In other cases, the supported platforms are `Ascend`, `GPU` and `CPU`.
Args:
- **input** (Tensor[Number]) - The input Tensor of numeric dtype. The shape is :math:`(N, *)`, where :math:`*` means any number of additional dimensions. Its rank should be less than 8.
- **axis** (Union[int, tuple(int), list(int)], optional) - The dimensions to reduce. Only constant values are allowed. Assuming the rank of `input` is r, the range is [-r, r). Default: None, reduce all dimensions.
- **ddof** (Union[int, bool], optional) - Delta degrees of freedom. If an integer, the divisor used in the calculation is :math:`N - ddof`, where :math:`N` is the number of elements. If True, Bessel's correction is used. If False, the biased estimate is used to compute the variance. Default: 0.
- **keepdims** (bool, optional) - Whether to keep the dimensions of the output Tensor. If True, the reduced dimensions are kept with size 1; otherwise they are removed. Default: False.
Returns:
A tuple containing the standard deviation and mean.
Suppose the shape of the input `input` is :math:`(x_0, x_1, ..., x_R)`:
- If `axis` is () and `keepdims` is False, a 0-D Tensor is returned, representing the standard deviation of all elements in the input Tensor `input`.
- If `axis` is an int, e.g. 1, and `keepdims` is False, the shape of the output is :math:`(x_0, x_2, ..., x_R)`.
- If `axis` is a tuple(int) or list(int), e.g. (1, 2), and `keepdims` is False, the shape of the output Tensor is :math:`(x_0, x_3, ..., x_R)`.
Raises:
- **TypeError** - `input` is not a Tensor.
- **TypeError** - `axis` is not one of the following types: None, int, tuple or list.
- **TypeError** - `keepdims` is not a bool.
- **ValueError** - `axis` is out of range.

View File

@@ -0,0 +1,28 @@
mindspore.ops.var
==================
.. py:function:: mindspore.ops.var(input, axis=None, ddof=0, keepdims=False)
By default, outputs the variance over all dimensions of the Tensor; it can also be computed over specified dimensions. If `axis` is a list of dimensions, the variance is computed over the corresponding dimensions.
.. note::
If `ddof` is 0, 1, True or False, the supported platforms are only `Ascend` and `CPU`. In other cases, the supported platforms are `Ascend`, `GPU` and `CPU`.
Args:
- **input** (Tensor[Number]) - The input Tensor of numeric dtype. The shape is :math:`(N, *)`, where :math:`*` means any number of additional dimensions. Its rank should be less than 8.
- **axis** (Union[int, tuple(int), list(int)], optional) - The dimensions to reduce. Only constant values are allowed. Assuming the rank of `input` is r, the range is [-r, r). Default: None, reduce all dimensions.
- **ddof** (Union[int, bool], optional) - Delta degrees of freedom. If an integer, the divisor used in the calculation is :math:`N - ddof`, where :math:`N` is the number of elements. If True, Bessel's correction is used. If False, the biased estimate is used to compute the variance. Default: 0.
- **keepdims** (bool, optional) - Whether to keep the dimensions of the output Tensor. If True, the reduced dimensions are kept with size 1; otherwise they are removed. Default: False.
Returns:
Tensor, the variance.
Suppose the shape of the input `input` is :math:`(x_0, x_1, ..., x_R)`:
- If `axis` is () and `keepdims` is False, a 0-D Tensor is returned, representing the variance of all elements in the input Tensor `input`.
- If `axis` is an int, e.g. 1, and `keepdims` is False, the shape of the output is :math:`(x_0, x_2, ..., x_R)`.
- If `axis` is a tuple(int) or list(int), e.g. (1, 2), and `keepdims` is False, the shape of the output Tensor is :math:`(x_0, x_3, ..., x_R)`.
Raises:
- **TypeError** - `input` is not a Tensor.
- **TypeError** - `axis` is not one of the following types: None, int, tuple or list.
- **TypeError** - `keepdims` is not a bool.
- **ValueError** - `axis` is out of range.

View File

@@ -0,0 +1,28 @@
mindspore.ops.var_mean
======================
.. py:function:: mindspore.ops.var_mean(input, axis=None, ddof=0, keepdims=False)
By default, outputs the variance and mean over all dimensions of the Tensor; they can also be computed over specified dimensions. If `axis` is a list of dimensions, the variance and mean are computed over the corresponding dimensions.
.. note::
If `ddof` is 0, 1, True or False, the supported platforms are only `Ascend` and `CPU`. In other cases, the supported platforms are `Ascend`, `GPU` and `CPU`.
Args:
- **input** (Tensor[Number]) - The input Tensor of numeric dtype. The shape is :math:`(N, *)`, where :math:`*` means any number of additional dimensions. Its rank should be less than 8.
- **axis** (Union[int, tuple(int), list(int)], optional) - The dimensions to reduce. Only constant values are allowed. Assuming the rank of `input` is r, the range is [-r, r). Default: None, reduce all dimensions.
- **ddof** (Union[int, bool], optional) - Delta degrees of freedom. If an integer, the divisor used in the calculation is :math:`N - ddof`, where :math:`N` is the number of elements. If True, Bessel's correction is used. If False, the biased estimate is used to compute the variance. Default: 0.
- **keepdims** (bool, optional) - Whether to keep the dimensions of the output Tensor. If True, the reduced dimensions are kept with size 1; otherwise they are removed. Default: False.
Returns:
A tuple containing the variance and mean.
Suppose the shape of the input `input` is :math:`(x_0, x_1, ..., x_R)`:
- If `axis` is () and `keepdims` is False, a 0-D Tensor is returned, representing the variance of all elements in the input Tensor `input`.
- If `axis` is an int, e.g. 1, and `keepdims` is False, the shape of the output is :math:`(x_0, x_2, ..., x_R)`.
- If `axis` is a tuple(int) or list(int), e.g. (1, 2), and `keepdims` is False, the shape of the output Tensor is :math:`(x_0, x_3, ..., x_R)`.
Raises:
- **TypeError** - `input` is not a Tensor.
- **TypeError** - `axis` is not one of the following types: None, int, tuple or list.
- **TypeError** - `keepdims` is not a bool.
- **ValueError** - `axis` is out of range.
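To make the `ddof` semantics shared by these four APIs concrete: the divisor is :math:`N - ddof`, and `ddof=True` behaves like Bessel's correction (`ddof=1`). A quick NumPy sketch (illustrative only, not part of this patch):

import numpy as np

x = np.array([1., 2., 3., 4.])
n = x.size
for ddof in (0, 1, 2):
    manual = ((x - x.mean()) ** 2).sum() / (n - ddof)  # divisor is N - ddof
    assert np.isclose(manual, np.var(x, ddof=ddof))
print(np.var(x, ddof=0), np.var(x, ddof=1))  # 1.25 1.6666666666666667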

View File

@@ -329,6 +329,9 @@ Reduction Functions
mindspore.ops.norm
mindspore.ops.prod
mindspore.ops.std
mindspore.ops.std_mean
mindspore.ops.var
mindspore.ops.var_mean
Comparison Functions
^^^^^^^^^^^^^^^^^^^^

View File

@@ -225,6 +225,9 @@ from .math_func import (
tensor_exp,
einsum,
view_as_real,
var,
var_mean,
std_mean,
exp,
tensor_expm1,
expm1,

View File

@@ -26,7 +26,6 @@ from mindspore.ops import operations as P
from mindspore.ops import composite as C
from mindspore.ops.operations._inner_ops import Cummin, TileSize
from mindspore.ops.operations.math_ops import STFT
from mindspore.ops.operations.math_ops import ReduceStd
from mindspore.ops.operations.math_ops import Logit
from mindspore.ops.operations.math_ops import LuUnpack
from mindspore.ops.operations.math_ops import Roll
@@ -1440,7 +1439,7 @@ def inplace_add(x, v, indices):
return inplace_add_inner(x, v)
def inplace_index_add(var, indices, updates, axis):
def inplace_index_add(var, indices, updates, axis): # pylint: disable=redefined-outer-name
"""
Adds tensor `updates` to specified axis and indices of tensor `var`. The axis should be in [0, len(var.dim) - 1],
and indices should be in [0, the size of `var` - 1] at the axis dimension.
@@ -4652,57 +4651,262 @@ def logaddexp2(x1, x2):
return log_op(add_exp) / log_op(tensor_2)
def std(input_x, axis=(), unbiased=True, keep_dims=False):
"""
Returns the standard deviation and mean of each row of the input tensor by default,
or it can calculate them in the specified dimension `axis`.
If `axis` is a list of dimensions, reduce over all of them.
@constexpr
def _check_and_canonicalize_axes(axes, ndim):
"""Check whether the types and values of input axes are valid."""
return validator.check_and_canonicalize_axes(axes, ndim)
def _check_var_std_input(input, ddof, keepdims, axis, cls_name):
if not isinstance(input, Tensor):
raise TypeError(f"For {cls_name}, input should be Tensor, but got {type(input)}")
_check_attr_dtype("ddof", ddof, [int, bool], cls_name)
_check_attr_dtype("keepdims", keepdims, [bool], cls_name)
if axis is None:
axis = ()
else:
axis = _check_and_canonicalize_axes(axis, input.ndim)
return axis
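# A rough, hypothetical sketch of the axis normalization performed for
# _check_var_std_input (illustrative only, not part of this patch; the real
# validator.check_and_canonicalize_axes also rejects bad types and duplicates):
def _canonicalize_axes_sketch(axes, ndim):
    if isinstance(axes, int):
        axes = (axes,)  # a bare int is treated as a 1-element tuple
    return tuple(ax + ndim if ax < 0 else ax for ax in axes)  # wrap negatives into [0, ndim)
# _canonicalize_axes_sketch(-1, 3) -> (2,); _canonicalize_axes_sketch((0, -2), 3) -> (0, 1)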
def var(input, axis=None, ddof=0, keepdims=False): # pylint: disable=redefined-outer-name
r"""
Returns the variance of each row of the input Tensor by default, or it can calculate the variance
in the specified dimension `axis`. If `axis` is a list of dimensions, reduce over all of them.
Note:
If ddof is 0, 1, True or False, the supported devices are only Ascend and CPU. In other cases,
the supported devices are Ascend, GPU and CPU.
Args:
input_x (Tensor[Number]): Input tensor with a dtype of number.Number, its shape should be :math:`(N, *)`
where :math:`*` means any number of additional dims, its rank should be less than 8.
axis (Union[int, tuple(int), list(int)]): The dimensions to reduce. Default: (), reduce all dimensions.
Only constant value is allowed.
Must be in the range [-rank(`input_x`), rank(`input_x`)).
unbiased (bool): Whether to use Bessel's correction.
If true, will use the Bessel correction for an unbiased estimation.
If false, will use the biased estimation to calculate the standard deviation.
keep_dims (bool): Whether the output tensor has dim retained or not.
If true, keep these reduced dimensions and the length is 1.
If false, don't keep these dimensions.
input (Tensor[Number]): Input Tensor with a dtype of number.Number, its shape should be :math:`(N, *)`
where :math:`*` means any number of additional dims, its rank should be less than 8.
axis (Union[int, tuple(int), list(int)], optional): The dimensions to reduce. Only constant value is allowed.
Must be in the range [-rank(`input`), rank(`input`)). Default: None, reduce all dimensions.
ddof (Union[int, bool], optional): Means Delta Degrees of Freedom.
If ddof is an integer, the divisor used in calculations is :math:`N - ddof`,
where :math:`N` represents the number of elements.
If ddof is True, will use the Bessel correction for an unbiased estimation.
If ddof is False, will use the biased estimation to calculate the variance.
Default: 0.
keepdims (bool, optional): Whether the output Tensor has dim retained or not.
If true, keep these reduced dimensions and the length is 1.
If false, don't keep these dimensions. Default: False.
Returns:
A tuple of 2 Tensors (output_std, output_mean) containing the standard deviation and mean.
Suppose the shape of `input_x` is :math:`(x_0, x_1, ..., x_R)`:
Tensor, the variance.
Suppose the shape of `input` is :math:`(x_0, x_1, ..., x_R)`:
- If `axis` is () and `keep_dims` is set to False, returns a 0-D Tensor, indicating
the standard deviation of all elements in `input_x`.
- If `axis` is int 1 and `keep_dims` is set to False, then the returned Tensor
- If `axis` is () and `keepdims` is set to False, returns a 0-D Tensor, indicating
the variance of all elements in `input`.
- If `axis` is int 1 and `keepdims` is set to False, then the returned Tensor
has shape :math:`(x_0, x_2, ..., x_R)`.
- If `axis` is tuple(int) or list(int), e.g. (1, 2) and `keep_dims` is set to False,
- If `axis` is tuple(int) or list(int), e.g. (1, 2) and `keepdims` is set to False,
then the returned Tensor has shape :math:`(x_0, x_3, ..., x_R)`.
Raises:
TypeError: If `input_x` is not a Tensor.
TypeError: If `axis` is not one of the following: int, tuple or list.
TypeError: If `keep_dims` is not a bool.
TypeError: If `input` is not a Tensor.
TypeError: If `axis` is not one of the following: None, int, tuple or list.
TypeError: If `keepdims` is not a bool.
ValueError: If `axis` is out of range.
Supported Platforms:
``Ascend`` ``CPU``
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> input_x = Tensor(np.array([[1, 2, 3], [-1, 1, 4]]).astype(np.float32))
>>> output = ops.std(input_x, 1, True, False)
>>> output_std, output_mean = output[0], output[1]
>>> print(output_std)
[1. 2.5166116]
>>> input = Tensor(np.array([[1, 2, 3], [-1, 1, 4]]).astype(np.float32))
>>> output = ops.var(input, 1, True, False)
>>> print(output)
[1. 6.3333325]
"""
axis = _check_var_std_input(input, ddof, keepdims, axis, "var")
output = var_mean(input, axis, ddof, keepdims)
return output[0]
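# Sanity check of the docstring example against NumPy (illustrative, not part
# of the patch): ddof=True corresponds to NumPy's ddof=1 (Bessel's correction).
import numpy as np
x = np.array([[1, 2, 3], [-1, 1, 4]], dtype=np.float32)
print(np.var(x, axis=1, ddof=1))  # [1. 6.3333335] -- matches the documented output up to float32 rounding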
def var_mean(input, axis=None, ddof=0, keepdims=False):
r"""
Returns the variance and mean of each row of the input Tensor by default,
or it can calculate them in the specified dimension `axis`.
If `axis` is a list of dimensions, reduce over all of them.
Note:
If ddof is 0, 1, True or False, the supported devices are only Ascend and CPU. In other cases,
the supported devices are Ascend, GPU and CPU.
Args:
input (Tensor[Number]): Input Tensor with a dtype of number.Number, its shape should be :math:`(N, *)`
where :math:`*` means any number of additional dims, its rank should be less than 8.
axis (Union[int, tuple(int), list(int)], optional): The dimensions to reduce. Only constant value is allowed.
Must be in the range [-rank(`input`), rank(`input`)). Default: None, reduce all dimensions.
ddof (Union[int, bool], optional): Means Delta Degrees of Freedom.
If ddof is an integer, the divisor used in calculations is :math:`N - ddof`,
where :math:`N` represents the number of elements.
If ddof is True, will use the Bessel correction for an unbiased estimation.
If ddof is False, will use the biased estimation to calculate the variance and mean.
Default: 0.
keepdims (bool, optional): Whether the output Tensor has dim retained or not.
If true, keep these reduced dimensions and the length is 1.
If false, don't keep these dimensions. Default: False.
Returns:
A tuple containing the variance and mean.
Suppose the shape of `input` is :math:`(x_0, x_1, ..., x_R)`:
- If `axis` is () and `keepdims` is set to False, returns a 0-D Tensor, indicating
the variance of all elements in `input`.
- If `axis` is int 1 and `keepdims` is set to False, then the returned Tensor
has shape :math:`(x_0, x_2, ..., x_R)`.
- If `axis` is tuple(int) or list(int), e.g. (1, 2) and `keepdims` is set to False,
then the returned Tensor has shape :math:`(x_0, x_3, ..., x_R)`.
Raises:
TypeError: If `input` is not a Tensor.
TypeError: If `axis` is not one of the following: None, int, tuple or list.
TypeError: If `keepdims` is not a bool.
ValueError: If `axis` is out of range.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> input = Tensor(np.array([[1, 2, 3], [-1, 1, 4]]).astype(np.float32))
>>> output_var, output_mean = ops.var_mean(input, 1, True, False)
>>> print(output_var)
[1. 6.3333325]
>>> print(output_mean)
[2. 1.3333334]
"""
reduce_std_op = ReduceStd(axis=axis, unbiased=unbiased, keep_dims=keep_dims)
output = reduce_std_op(input_x)
return output
axis = _check_var_std_input(input, ddof, keepdims, axis, "var_mean")
if ddof in (0, 1):
output = _get_cache_prim(P.ReduceStd)(axis=axis, unbiased=bool(ddof), keep_dims=keepdims)(input)
return _get_cache_prim(P.Pow)()(output[0], 2), output[1]
x_mean = mean(input, axis, True)
x_sub = _get_cache_prim(P.Sub)()(input, x_mean)
x_pow = _get_cache_prim(P.Pow)()(x_sub, 2)
x_sum = sum(x_pow, axis, keepdims)
nums = 1
if axis == ():
nums = input.size
else:
for ax in axis:
nums *= input.shape[ax]
return true_divide(x_sum, nums - ddof), x_mean
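# NumPy reference for the integer-ddof fallback path above (illustrative
# sketch, assuming a tuple `axis`): mean -> subtract -> square -> sum, then
# divide by N - ddof.
import numpy as np
def _var_mean_ref(x, axis, ddof, keepdims=False):
    m = x.mean(axis=axis, keepdims=True)
    sq = ((x - m) ** 2).sum(axis=axis, keepdims=keepdims)
    n = np.prod([x.shape[a] for a in axis])  # number of reduced elements
    return sq / (n - ddof), x.mean(axis=axis, keepdims=keepdims)
# _var_mean_ref(np.array([[1., 2., 3.], [-1., 1., 4.]]), (1,), ddof=2)
# -> (array([2., 12.666...]), array([2., 1.333...])); divisor is 3 - 2 = 1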
def std(input, axis=None, ddof=0, keepdims=False):
r"""
Returns the standard deviation of each row of the input Tensor by default, or it can calculate it
in the specified dimension `axis`. If `axis` is a list of dimensions, reduce over all of them.
Note:
If ddof is 0, 1, True or False, the supported devices are only Ascend and CPU. In other cases,
the supported devices are Ascend, GPU and CPU.
Args:
input (Tensor[Number]): Input Tensor with a dtype of number.Number, its shape should be :math:`(N, *)`
where :math:`*` means any number of additional dims, its rank should be less than 8.
axis (Union[int, tuple(int), list(int)], optional): The dimensions to reduce. Only constant value is allowed.
Must be in the range [-rank(`input`), rank(`input`)). Default: None, reduce all dimensions.
ddof (Union[int, bool], optional): Means Delta Degrees of Freedom.
If ddof is an integer, the divisor used in calculations is :math:`N - ddof`,
where :math:`N` represents the number of elements.
If ddof is True, will use the Bessel correction for an unbiased estimation.
If ddof is False, will use the biased estimation to calculate the standard deviation.
Default: 0.
keepdims (bool, optional): Whether the output Tensor has dim retained or not.
If true, keep these reduced dimensions and the length is 1.
If false, don't keep these dimensions. Default: False.
Returns:
Tensor, the standard deviation.
Suppose the shape of `input` is :math:`(x_0, x_1, ..., x_R)`:
- If `axis` is () and `keepdims` is set to False, returns a 0-D Tensor, indicating
the standard deviation of all elements in `input`.
- If `axis` is int 1 and `keepdims` is set to False, then the returned Tensor
has shape :math:`(x_0, x_2, ..., x_R)`.
- If `axis` is tuple(int) or list(int), e.g. (1, 2) and `keepdims` is set to False,
then the returned Tensor has shape :math:`(x_0, x_3, ..., x_R)`.
Raises:
TypeError: If `input` is not a Tensor.
TypeError: If `axis` is not one of the following: None, int, tuple or list.
TypeError: If `keepdims` is not a bool.
ValueError: If `axis` is out of range.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> input = Tensor(np.array([[1, 2, 3], [-1, 1, 4]]).astype(np.float32))
>>> output = ops.std(input, 1, True, False)
>>> print(output)
[1. 2.5166113]
"""
axis = _check_var_std_input(input, ddof, keepdims, axis, "std")
output = std_mean(input, axis, ddof, keepdims)
return output[0]
def std_mean(input, axis=None, ddof=0, keepdims=False):
r"""
Returns the standard deviation and mean of each row of the input Tensor by default,
or it can calculate them in the specified dimension `axis`.
If `axis` is a list of dimensions, reduce over all of them.
Note:
If ddof is 0, 1, True or False, the supported devices are only Ascend and CPU. In other cases,
the supported devices are Ascend, GPU and CPU.
Args:
input (Tensor[Number]): Input Tensor with a dtype of number.Number, its shape should be :math:`(N, *)`
where :math:`*` means any number of additional dims, its rank should be less than 8.
axis (Union[int, tuple(int), list(int)], optional): The dimensions to reduce. Only constant value is allowed.
Must be in the range [-rank(`input`), rank(`input`)). Default: None, reduce all dimensions.
ddof (Union[int, bool], optional): Means Delta Degrees of Freedom.
If ddof is an integer, the divisor used in calculations is :math:`N - ddof`,
where :math:`N` represents the number of elements.
If ddof is True, will use the Bessel correction for an unbiased estimation.
If ddof is False, will use the biased estimation to calculate the standard deviation and mean.
Default: 0.
keepdims (bool, optional): Whether the output Tensor has dim retained or not.
If true, keep these reduced dimensions and the length is 1.
If false, don't keep these dimensions. Default: False.
Returns:
A tuple containing the standard deviation and mean.
Suppose the shape of `input` is :math:`(x_0, x_1, ..., x_R)`:
- If `axis` is () and `keepdims` is set to False, returns a 0-D Tensor, indicating
the standard deviation of all elements in `input`.
- If `axis` is int 1 and `keepdims` is set to False, then the returned Tensor
has shape :math:`(x_0, x_2, ..., x_R)`.
- If `axis` is tuple(int) or list(int), e.g. (1, 2) and `keepdims` is set to False,
then the returned Tensor has shape :math:`(x_0, x_3, ..., x_R)`.
Raises:
TypeError: If `input` is not a Tensor.
TypeError: If `axis` is not one of the following: None, int, tuple or list.
TypeError: If `keepdims` is not a bool.
ValueError: If `axis` is out of range.
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
Examples:
>>> input = Tensor(np.array([[1, 2, 3], [-1, 1, 4]]).astype(np.float32))
>>> output_std, output_mean = ops.std_mean(input, 1, True, False)
>>> print(output_std)
[1. 2.5166113]
>>> print(output_mean)
[2. 1.3333334]
"""
axis = _check_var_std_input(input, ddof, keepdims, axis, "std_mean")
if ddof in (0, 1):
return _get_cache_prim(P.ReduceStd)(axis=axis, unbiased=bool(ddof), keep_dims=keepdims)(input)
output = var_mean(input, axis, ddof, keepdims)
return _get_cache_prim(P.Pow)()(output[0], 0.5), output[1]
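# Quick NumPy check (illustrative) that the documented std_mean example is the
# square root of var_mean's variance, with ddof=True taken as ddof=1:
import numpy as np
x = np.array([[1, 2, 3], [-1, 1, 4]], dtype=np.float32)
print(np.sqrt(np.var(x, axis=1, ddof=1)), x.mean(axis=1))
# [1. 2.5166113] [2. 1.3333334] -- the documented (output_std, output_mean)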
def real(x):
@@ -9872,7 +10076,7 @@ def nansum(x, axis=None, keepdims=False, *, dtype=None):
raise TypeError(f"For nansum, input must be Tensor, but got {type(x)}.")
_check_repeat_in_axis(axis, x.ndim, "nansum")
if x.is_complex():
raise TypeError(f'For nansum, input are not supported complex type, but got {type(x)}.')
raise TypeError(f'For nansum, input are not supported complex type, but got {x.dtype}.')
if dtype is not None and dtype in mstype.complex_type:
raise TypeError(f'For nansum, dtype not supported complex type, but got {dtype}.')
if axis is None:
@@ -10311,6 +10515,9 @@ __all__ = [
'atleast_3d',
'view_as_real',
'vstack',
'var',
'var_mean',
'std_mean',
'combinations',
'dist',
'copysign',

View File

@@ -34,8 +34,8 @@ class NetReduceStd(nn.Cell):
@jit
def construct(self, indice):
if self._axis is None:
return F.std(indice, unbiased=False, keep_dims=self._keep_dims)
return F.std(indice, axis=self._axis, unbiased=False, keep_dims=self._keep_dims)
return F.std_mean(indice, ddof=False, keepdims=self._keep_dims)
return F.std_mean(indice, axis=self._axis, ddof=False, keepdims=self._keep_dims)
@pytest.mark.level0
@@ -73,7 +73,7 @@ class ReduceStdDynamicShapeNet(nn.Cell):
def construct(self, x):
x_unique, _ = self.unique(x)
x_unique = self.reshape(x_unique, (2, 5))
return F.std(x_unique, unbiased=False, keep_dims=False)
return F.std_mean(x_unique, ddof=False, keepdims=False)
@pytest.mark.level0

View File

@@ -0,0 +1,86 @@
# Copyright 2023 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
import pytest
import numpy as np
import mindspore as ms
import mindspore.nn as nn
from mindspore import Tensor, ops
class Net(nn.Cell):
def construct(self, x):
return ops.std(x, axis=0, ddof=True, keepdims=True)
class NetGpu(nn.Cell):
def construct(self, x):
return ops.std(x, axis=2, ddof=3, keepdims=True)
@pytest.mark.level0
@pytest.mark.platform_x86_cpu
@pytest.mark.platform_arm_cpu
@pytest.mark.platform_arm_ascend_training
@pytest.mark.platform_x86_ascend_training
@pytest.mark.env_onecard
@pytest.mark.parametrize('mode', [ms.GRAPH_MODE, ms.PYNATIVE_MODE])
def test_ops_std(mode):
"""
Feature: ops.std
Description: Verify the result of std on Ascend and CPU
Expectation: success
"""
ms.set_context(mode=mode)
x = Tensor([[[-4, -6, -5, 8],
[3, 2, -7, 0],
[7, -4, -3, 8]],
[[-7, -7, -4, -5],
[-6, -7, 6, -2],
[-2, -7, 8, -8.]]])
net = Net()
output = net(x)
expect_output = [[[2.12132025e+00, 7.07106769e-01, 7.07106769e-01, 9.19238853e+00],
[6.36396122e+00, 6.36396122e+00, 9.19238853e+00, 1.41421354e+00],
[6.36396122e+00, 2.12132025e+00, 7.77817440e+00, 1.13137083e+01]]]
assert np.allclose(output.asnumpy(), expect_output)
@pytest.mark.level0
@pytest.mark.platform_x86_gpu_training
@pytest.mark.env_onecard
@pytest.mark.parametrize('mode', [ms.GRAPH_MODE, ms.PYNATIVE_MODE])
def test_ops_std_gpu(mode):
"""
Feature: ops.std
Description: Verify the result of std on GPU
Expectation: success
"""
ms.set_context(mode=mode)
x = Tensor([[[-4, -6, -5, 8],
[3, 2, -7, 0],
[7, -4, -3, 8]],
[[-7, -7, -4, -5],
[-6, -7, 6, -2],
[-2, -7, 8, -8.]]])
net = NetGpu()
output = net(x)
expect_output = [[[1.13468056e+01],
[7.81024981e+00],
[1.10453606e+01]],
[[2.59807611e+00],
[1.02347450e+01],
[1.26787224e+01]]]
assert np.allclose(output.asnumpy(), expect_output)
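# Recomputing the GPU expectation above by hand (illustrative check, not part
# of the patch): with axis=2 and ddof=3, each reduced row has N=4 elements,
# so the divisor is N - ddof = 1.
x_ref = np.array([[[-4, -6, -5, 8], [3, 2, -7, 0], [7, -4, -3, 8]],
                  [[-7, -7, -4, -5], [-6, -7, 6, -2], [-2, -7, 8, -8.]]], dtype=np.float32)
m_ref = x_ref.mean(axis=2, keepdims=True)
std_ref = np.sqrt(((x_ref - m_ref) ** 2).sum(axis=2, keepdims=True) / (4 - 3))
print(std_ref[0, 0, 0])  # ~11.3468056, the first value in expect_output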

View File

@@ -0,0 +1,97 @@
# Copyright 2023 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
import pytest
import numpy as np
import mindspore as ms
import mindspore.nn as nn
from mindspore import Tensor, ops
class Net(nn.Cell):
def construct(self, x):
return ops.std_mean(x, axis=0, ddof=True, keepdims=True)
class NetGpu(nn.Cell):
def construct(self, x):
return ops.std_mean(x, axis=2, ddof=3, keepdims=True)
@pytest.mark.level0
@pytest.mark.platform_x86_cpu
@pytest.mark.platform_arm_cpu
@pytest.mark.platform_arm_ascend_training
@pytest.mark.platform_x86_ascend_training
@pytest.mark.env_onecard
@pytest.mark.parametrize('mode', [ms.GRAPH_MODE, ms.PYNATIVE_MODE])
def test_ops_std_mean(mode):
"""
Feature: ops.std_mean
Description: Verify the result of std_mean on Ascend and CPU
Expectation: success
"""
ms.set_context(mode=mode)
x = Tensor([[[-4, -6, -5, 8],
[3, 2, -7, 0],
[7, -4, -3, 8]],
[[-7, -7, -4, -5],
[-6, -7, 6, -2],
[-2, -7, 8, -8.]]])
net = Net()
output = net(x)
expect_output = [[[2.12132025e+00, 7.07106769e-01, 7.07106769e-01, 9.19238853e+00],
[6.36396122e+00, 6.36396122e+00, 9.19238853e+00, 1.41421354e+00],
[6.36396122e+00, 2.12132025e+00, 7.77817440e+00, 1.13137083e+01]]]
expect_output1 = [[[-5.50000000e+00, -6.50000000e+00, -4.50000000e+00, 1.50000000e+00],
[-1.50000000e+00, -2.50000000e+00, -5.00000000e-01, -1.00000000e+00],
[2.50000000e+00, -5.50000000e+00, 2.50000000e+00, 0.00000000e+00]]]
assert np.allclose(output[0].asnumpy(), expect_output)
assert np.allclose(output[1].asnumpy(), expect_output1)
@pytest.mark.level0
@pytest.mark.platform_x86_gpu_training
@pytest.mark.env_onecard
@pytest.mark.parametrize('mode', [ms.GRAPH_MODE, ms.PYNATIVE_MODE])
def test_ops_std_mean_gpu(mode):
"""
Feature: ops.std_mean
Description: Verify the result of std_mean on GPU
Expectation: success
"""
ms.set_context(mode=mode)
x = Tensor([[[-4, -6, -5, 8],
[3, 2, -7, 0],
[7, -4, -3, 8]],
[[-7, -7, -4, -5],
[-6, -7, 6, -2],
[-2, -7, 8, -8.]]])
net = NetGpu()
output = net(x)
expect_output = [[[1.13468056e+01],
[7.81024981e+00],
[1.10453606e+01]],
[[2.59807611e+00],
[1.02347450e+01],
[1.26787224e+01]]]
expect_output1 = [[[-1.75000000e+00],
[-5.00000000e-01],
[2.00000000e+00]],
[[-5.75000000e+00],
[-2.25000000e+00],
[-2.25000000e+00]]]
assert np.allclose(output[0].asnumpy(), expect_output)
assert np.allclose(output[1].asnumpy(), expect_output1)

View File

@@ -0,0 +1,86 @@
# Copyright 2023 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
import pytest
import numpy as np
import mindspore as ms
import mindspore.nn as nn
from mindspore import Tensor, ops
class Net(nn.Cell):
def construct(self, x):
return ops.var(x, axis=0, ddof=True, keepdims=True)
class NetGpu(nn.Cell):
def construct(self, x):
return ops.var(x, axis=2, ddof=3, keepdims=True)
@pytest.mark.level0
@pytest.mark.platform_x86_cpu
@pytest.mark.platform_arm_cpu
@pytest.mark.platform_arm_ascend_training
@pytest.mark.platform_x86_ascend_training
@pytest.mark.env_onecard
@pytest.mark.parametrize('mode', [ms.GRAPH_MODE, ms.PYNATIVE_MODE])
def test_ops_var(mode):
"""
Feature: ops.var
Description: Verify the result of var on Ascend and CPU
Expectation: success
"""
ms.set_context(mode=mode)
x = Tensor([[[-4, -6, -5, 8],
[3, 2, -7, 0],
[7, -4, -3, 8]],
[[-7, -7, -4, -5],
[-6, -7, 6, -2],
[-2, -7, 8, -8.]]])
net = Net()
output = net(x)
expect_output = [[[4.49999952e+00, 4.99999970e-01, 4.99999970e-01, 8.45000076e+01],
[4.05000038e+01, 4.05000038e+01, 8.45000076e+01, 1.99999988e+00],
[4.05000038e+01, 4.49999952e+00, 6.04999962e+01, 1.27999992e+02]]]
assert np.allclose(output.asnumpy(), expect_output)
@pytest.mark.level0
@pytest.mark.platform_x86_gpu_training
@pytest.mark.env_onecard
@pytest.mark.parametrize('mode', [ms.GRAPH_MODE, ms.PYNATIVE_MODE])
def test_ops_var_gpu(mode):
"""
Feature: ops.var
Description: Verify the result of var on GPU
Expectation: success
"""
ms.set_context(mode=mode)
x = Tensor([[[-4, -6, -5, 8],
[3, 2, -7, 0],
[7, -4, -3, 8]],
[[-7, -7, -4, -5],
[-6, -7, 6, -2],
[-2, -7, 8, -8.]]])
net = NetGpu()
output = net(x)
expect_output = [[[1.28750000e+02],
[6.10000000e+01],
[1.22000000e+02]],
[[6.75000000e+00],
[1.04750000e+02],
[1.60750000e+02]]]
assert np.allclose(output.asnumpy(), expect_output)
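# Consistency check (illustrative): under the same axis=2, ddof=3 settings,
# these variances are the squares of the std test's expected values above.
print(1.13468056e+01 ** 2)  # ~128.75, matching the first expected value here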

View File

@@ -0,0 +1,97 @@
# Copyright 2023 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
import pytest
import numpy as np
import mindspore as ms
import mindspore.nn as nn
from mindspore import Tensor, ops
class Net(nn.Cell):
def construct(self, x):
return ops.var_mean(x, axis=0, ddof=True, keepdims=True)
class NetGpu(nn.Cell):
def construct(self, x):
return ops.var_mean(x, axis=2, ddof=3, keepdims=True)
@pytest.mark.level0
@pytest.mark.platform_x86_cpu
@pytest.mark.platform_arm_cpu
@pytest.mark.platform_arm_ascend_training
@pytest.mark.platform_x86_ascend_training
@pytest.mark.env_onecard
@pytest.mark.parametrize('mode', [ms.GRAPH_MODE, ms.PYNATIVE_MODE])
def test_ops_var_mean(mode):
"""
Feature: ops.var_mean
Description: Verify the result of var_mean on Ascend and CPU
Expectation: success
"""
ms.set_context(mode=mode)
x = Tensor([[[-4, -6, -5, 8],
[3, 2, -7, 0],
[7, -4, -3, 8]],
[[-7, -7, -4, -5],
[-6, -7, 6, -2],
[-2, -7, 8, -8.]]])
net = Net()
output = net(x)
expect_output = [[[4.49999952e+00, 4.99999970e-01, 4.99999970e-01, 8.45000076e+01],
[4.05000038e+01, 4.05000038e+01, 8.45000076e+01, 1.99999988e+00],
[4.05000038e+01, 4.49999952e+00, 6.04999962e+01, 1.27999992e+02]]]
expect_output1 = [[[-5.50000000e+00, -6.50000000e+00, -4.50000000e+00, 1.50000000e+00],
[-1.50000000e+00, -2.50000000e+00, -5.00000000e-01, -1.00000000e+00],
[2.50000000e+00, -5.50000000e+00, 2.50000000e+00, 0.00000000e+00]]]
assert np.allclose(output[0].asnumpy(), expect_output)
assert np.allclose(output[1].asnumpy(), expect_output1)
@pytest.mark.level0
@pytest.mark.platform_x86_gpu_training
@pytest.mark.env_onecard
@pytest.mark.parametrize('mode', [ms.GRAPH_MODE, ms.PYNATIVE_MODE])
def test_ops_var_mean_gpu(mode):
"""
Feature: ops.var_mean
Description: Verify the result of var_mean on GPU
Expectation: success
"""
ms.set_context(mode=mode)
x = Tensor([[[-4, -6, -5, 8],
[3, 2, -7, 0],
[7, -4, -3, 8]],
[[-7, -7, -4, -5],
[-6, -7, 6, -2],
[-2, -7, 8, -8.]]])
net = NetGpu()
output = net(x)
expect_output = [[[1.28750000e+02],
[6.10000000e+01],
[1.22000000e+02]],
[[6.75000000e+00],
[1.04750000e+02],
[1.60750000e+02]]]
expect_output1 = [[[-1.75000000e+00],
[-5.00000000e-01],
[2.00000000e+00]],
[[-5.75000000e+00],
[-2.25000000e+00],
[-2.25000000e+00]]]
assert np.allclose(output[0].asnumpy(), expect_output)
assert np.allclose(output[1].asnumpy(), expect_output1)