forked from mindspore-Ecosystem/mindspore
!34582 Add tensor&functional interface for TensorScatterMin ops.
Merge pull request !34582 from liangzelang/tensor_scatter_min
commit 2734f2cca9
@@ -299,6 +299,7 @@ Array操作

mindspore.ops.size
mindspore.ops.space_to_batch_nd
mindspore.ops.tensor_scatter_add
mindspore.ops.tensor_scatter_min
mindspore.ops.tensor_scatter_div
mindspore.ops.tensor_scatter_mul
mindspore.ops.tensor_scatter_sub
@@ -1191,6 +1191,29 @@ mindspore.Tensor

    - **TypeError** - The data type of `indices` is neither int32 nor int64.
    - **ValueError** - The length of the shape of this Tensor is less than the last dimension of the shape of `indices`.

.. py:method:: tensor_scatter_min(indices, updates)

    According to the specified update values and input indices, the result of the minimum operation is assigned to the output Tensor.

    The last axis of `indices` is the depth of each index vector. For each index vector, there must be a corresponding value in `updates`. The shape of `updates` should be equal to the shape of `input_x[indices]`. For more details, see the example below.

    .. note::
        If some values of `indices` are out of range, the corresponding `updates` are not written back to `input_x`, instead of raising an index error.

    **Parameters:**

    - **indices** (Tensor) - The index of this Tensor, with data type int32 or int64. Its rank must be at least 2.
    - **updates** (Tensor) - The tensor whose values are compared with this Tensor by the minimum operation; its data type is the same as this Tensor. updates.shape should be equal to indices.shape[:-1] + self.shape[indices.shape[-1]:].

    **Returns:**

    Tensor, with the same shape and data type as the original Tensor.

    **Raises:**

    - **TypeError** - The data type of `indices` is neither int32 nor int64.
    - **ValueError** - The length of the shape of this Tensor is less than the last dimension of the shape of `indices`.
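The scatter-min semantics described in the method doc above can be mirrored in plain NumPy; the sketch below is illustrative only and not part of this change (the helper name tensor_scatter_min_ref is invented here):

.. code-block:: python

    import numpy as np

    def tensor_scatter_min_ref(input_x, indices, updates):
        """Keep the smaller of the original value and the update at each indexed position."""
        out = input_x.copy()
        depth = indices.shape[-1]                        # depth of each index vector
        flat_idx = indices.reshape(-1, depth)            # (N, depth) index vectors
        flat_upd = updates.reshape(-1, *input_x.shape[depth:])
        for idx, upd in zip(flat_idx, flat_upd):
            key = tuple(idx)
            out[key] = np.minimum(out[key], upd)         # min is order-independent
        return out

    x = np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]], dtype=np.float32)
    indices = np.array([[0, 0], [0, 0]], dtype=np.int32)
    updates = np.array([1.0, 2.2], dtype=np.float32)
    print(tensor_scatter_min_ref(x, indices, updates))
    # [[-0.1  0.3  3.6]
    #  [ 0.4  0.5 -3.2]]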
.. py:method:: tensor_scatter_div(indices, updates)

    According to the specified indices, computes the result by division and assigns it to the output Tensor.
@@ -3,25 +3,6 @@

.. py:class:: mindspore.ops.TensorScatterMin

    Updates the values of the input Tensor through the minimum operation, according to the specified update values and input indices.
    According to the specified update values and input indices, the result of the minimum operation is assigned to the output Tensor.

    The last axis of `indices` is the depth of each index vector. For each index vector, there must be a corresponding value in `updates`. The shape of `updates` should be equal to the shape of input_x[indices].
    For more details, see the use cases.

    .. note::
        If some values of `indices` are out of range, the corresponding `updates` are not written back to `input_x`, instead of raising an index error.

    **Inputs:**

    - **input_x** (Tensor) - The input Tensor. The dimension of `input_x` must be no less than indices.shape[-1].
    - **indices** (Tensor) - The index of the input Tensor, with data type int32 or int64. Its rank must be at least 2.
    - **updates** (Tensor) - The tensor whose values are compared with `input_x` by the minimum operation; its data type is the same as the input. updates.shape should be equal to indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

    **Outputs:**

    Tensor, with the same shape and data type as the input `input_x`.

    **Raises:**

    - **TypeError** - The data type of `indices` is neither int32 nor int64.
    - **ValueError** - The length of the shape of `input_x` is less than the last dimension of the shape of `indices`.
    For more reference, see :func:`mindspore.ops.tensor_scatter_min`.
@@ -0,0 +1,26 @@

mindspore.ops.tensor_scatter_min
================================

.. py:function:: mindspore.ops.tensor_scatter_min(input_x, indices, updates)

    According to the specified update values and input indices, the result of the minimum operation is assigned to the output Tensor.

    The last axis of `indices` is the depth of each index vector. For each index vector, there must be a corresponding value in `updates`. The shape of `updates` should be equal to the shape of `input_x[indices]`. For more details, see the example below.

    .. note::
        If some values of `indices` are out of range, the corresponding `updates` are not written back to `input_x`, instead of raising an index error.

    **Parameters:**

    - **input_x** (Tensor) - The input Tensor. The dimension of `input_x` must be no less than indices.shape[-1].
    - **indices** (Tensor) - The index of the input Tensor, with data type int32 or int64. Its rank must be at least 2.
    - **updates** (Tensor) - The tensor whose values are compared with `input_x` by the minimum operation; its data type is the same as the input. updates.shape should be equal to indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

    **Returns:**

    Tensor, with the same shape and data type as the input `input_x`.

    **Raises:**

    - **TypeError** - The data type of `indices` is neither int32 nor int64.
    - **ValueError** - The length of the shape of `input_x` is less than the last dimension of the shape of `indices`.
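A short usage sketch of the functional interface documented above, exercising the shape rule updates.shape == indices.shape[:-1] + input_x.shape[indices.shape[-1]:]; the concrete shapes and values are made up for illustration:

.. code-block:: python

    import numpy as np
    from mindspore import Tensor, ops

    # input_x has rank 3 and each index vector has depth 2, so every update is a
    # row of length input_x.shape[2]: (2,) + (4,) per the shape rule above.
    input_x = Tensor(np.full((2, 3, 4), 10.0, dtype=np.float32))
    indices = Tensor(np.array([[0, 1], [1, 2]], dtype=np.int32))     # shape (2, 2)
    updates = Tensor(np.arange(8, dtype=np.float32).reshape(2, 4))   # shape (2, 4)

    output = ops.tensor_scatter_min(input_x, indices, updates)
    print(output.shape)   # (2, 3, 4), same shape and dtype as input_x
    print(output[0][1])   # [0. 1. 2. 3.] -- element-wise minimum of 10.0 and row 0 of updates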
@@ -298,6 +298,7 @@ Array Operation

mindspore.ops.size
mindspore.ops.space_to_batch_nd
mindspore.ops.tensor_scatter_add
mindspore.ops.tensor_scatter_min
mindspore.ops.tensor_scatter_div
mindspore.ops.tensor_scatter_mul
mindspore.ops.tensor_scatter_sub
@@ -229,6 +229,7 @@ BuiltInTypeMap &GetMethodMap() {

{"tensor_scatter_add", std::string("tensor_scatter_add")}, // P.TensorScatterAdd()
{"tensor_scatter_mul", std::string("tensor_scatter_mul")}, // tensor_scatter_mul()
{"tensor_scatter_sub", std::string("tensor_scatter_sub")}, // P.TensorScatterSub()
{"tensor_scatter_min", std::string("tensor_scatter_min")}, // P.TensorScatterMin()
{"tensor_scatter_div", std::string("tensor_scatter_div")}, // P.TensorScatterDiv()
{"lp_norm", std::string("lp_norm")}, // lp_norm()
{"trace", std::string("trace")}, // P.Eye()
@@ -1839,6 +1839,14 @@ def tensor_sactter_div(input_x, indices, updates):

    return F.tensor_scatter_div(input_x, indices, updates)


def tensor_scatter_min(x, indices, updates):
    """
    By comparing the value at the position indicated by `indices` in `x` with the value in the `updates`,
    the value at the index will eventually be equal to the smallest one to create a new tensor.
    """
    return F.tensor_scatter_min(x, indices, updates)


def nonzero(x):
    """
    Return a tensor of the positions of all non-zero values.
@@ -2171,6 +2171,49 @@ class Tensor(Tensor_):

        self._init_check()
        return tensor_operator_registry.get('tensor_scatter_sub')()(self, indices, updates)

    def tensor_scatter_min(self, indices, updates):
        """
        By comparing the value at the position indicated by `indices` in self tensor with the value in the `updates`,
        the value at the index will eventually be equal to the smallest one to create a new tensor.

        The last axis of the index is the depth of each index vector. For each index vector,
        there must be a corresponding value in `updates`. The shape of `updates` should be
        equal to the shape of `input_x[indices]`. For more details, see case below.

        Note:
            If some values of the `indices` are out of range, instead of raising an index error,
            the corresponding `updates` will not be updated to `input_x`.

        Args:
            indices (Tensor): The index of input tensor whose data type is int32 or int64.
                The rank must be at least 2.
            updates (Tensor): The tensor to update the input tensor, has the same type as input,
                and updates.shape should be equal to indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

        Returns:
            Tensor, has the same shape and type as `input_x`.

        Raises:
            TypeError: If dtype of `indices` is neither int32 nor int64.
            ValueError: If length of shape of `input_x` is less than the last dimension of shape of `indices`.

        Supported Platforms:
            ``Ascend`` ``GPU`` ``CPU``

        Examples:
            >>> import numpy as np
            >>> from mindspore import Tensor
            >>> x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]).astype('float32'))
            >>> indices = Tensor(np.array([[0, 0], [0, 0]]).astype('int32'))
            >>> updates = Tensor(np.array([1.0, 2.2]).astype('float32'))
            >>> output = x.tensor_scatter_min(indices, updates)
            >>> print(output)
            [[-0.1  0.3  3.6]
             [ 0.4  0.5 -3.2]]
        """
        self._init_check()
        return tensor_operator_registry.get('tensor_scatter_min')()(self, indices, updates)

    def fill(self, value):
        """
        Fill the tensor with a scalar value.
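The method returns a new tensor rather than modifying `self` in place, which is what the docstring phrase "to create a new tensor" refers to; a small sketch under that reading, with values chosen arbitrarily:

.. code-block:: python

    import numpy as np
    from mindspore import Tensor

    x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]], dtype=np.float32))
    indices = Tensor(np.array([[1, 2]], dtype=np.int32))    # one index vector of depth 2
    updates = Tensor(np.array([-5.0], dtype=np.float32))    # shape (1,) = indices.shape[:-1]

    out = x.tensor_scatter_min(indices, updates)
    print(out[1][2])   # -5.0, the smaller of -3.2 and -5.0
    print(x[1][2])     # -3.2, x itself is left untouched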
@@ -67,6 +67,7 @@ from .array_func import (

tensor_scatter_mul,
unique_consecutive,
tensor_scatter_div,
tensor_scatter_min,
scatter_max,
scatter_min,
scatter_div,
@@ -52,6 +52,7 @@ tensor_scatter_add_ = P.TensorScatterAdd()

tensor_scatter_sub_ = P.TensorScatterSub()
tensor_scatter_mul_ = P.TensorScatterMul()
tensor_scatter_div_ = P.TensorScatterDiv()
tensor_scatter_min_ = P.TensorScatterMin()
scalar_to_array_ = P.ScalarToArray()
scalar_to_tensor_ = P.ScalarToTensor()
tuple_to_array_ = P.TupleToArray()
@@ -1869,6 +1870,49 @@ def tensor_scatter_sub(input_x, indices, updates):

    return tensor_scatter_sub_(input_x, indices, updates)


def tensor_scatter_min(input_x, indices, updates):
    """
    By comparing the value at the position indicated by `indices` in `input_x` with the value in the `updates`,
    the value at the index will eventually be equal to the smallest one to create a new tensor.

    The last axis of the index is the depth of each index vector. For each index vector,
    there must be a corresponding value in `updates`. The shape of `updates` should be
    equal to the shape of `input_x[indices]`. For more details, see case below.

    Note:
        If some values of the `indices` are out of range, instead of raising an index error,
        the corresponding `updates` will not be updated to `input_x`.

    Args:
        input_x (Tensor): The target tensor. The dimension of input_x must be no less than indices.shape[-1].
        indices (Tensor): The index of input tensor whose data type is int32 or int64.
            The rank must be at least 2.
        updates (Tensor): The tensor to update the input tensor, has the same type as input,
            and updates.shape should be equal to indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

    Returns:
        Tensor, has the same shape and type as `input_x`.

    Raises:
        TypeError: If dtype of `indices` is neither int32 nor int64.
        ValueError: If length of shape of `input_x` is less than the last dimension of shape of `indices`.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> import numpy as np
        >>> from mindspore import Tensor, ops
        >>> x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]).astype('float32'))
        >>> indices = Tensor(np.array([[0, 0], [0, 0]]).astype('int32'))
        >>> updates = Tensor(np.array([1.0, 2.2]).astype('float32'))
        >>> output = ops.tensor_scatter_min(x, indices, updates)
        >>> print(output)
        [[-0.1  0.3  3.6]
         [ 0.4  0.5 -3.2]]
    """
    return tensor_scatter_min_(input_x, indices, updates)


def space_to_batch_nd(input_x, block_size, paddings):
    r"""
    Divides a tensor's spatial dimensions into blocks and combines the block sizes with the original batch.
@@ -2576,14 +2620,15 @@ __all__ = [

'scatter_nd_min',
'tensor_scatter_add',
'tensor_scatter_sub',
'tensor_scatter_mul',
'tensor_scatter_div',
'tensor_scatter_min',
'gather',
'gather_d',
'gather_nd',
'one_hot',
'masked_fill',
'masked_select',
'tensor_scatter_mul',
'tensor_scatter_div',
'scatter_max',
'scatter_min',
'scatter_div',
@@ -70,7 +70,6 @@ scatter_nd_mul = P.ScatterNdMul()

scatter_nd_max = P.ScatterNdMax()
scatter_update = P.ScatterUpdate()
tensor_scatter_update = P.TensorScatterUpdate()
tensor_scatter_min = P.TensorScatterMin()
tensor_scatter_max = P.TensorScatterMax()
scatter_nd_update = P.ScatterNdUpdate()
stack = P.Stack()

@@ -965,6 +964,7 @@ tensor_operator_registry.register('zeros', zeros)

tensor_operator_registry.register('tensor_scatter_update', tensor_scatter_update)
tensor_operator_registry.register('tensor_scatter_mul', tensor_scatter_mul)
tensor_operator_registry.register('tensor_scatter_div', tensor_scatter_div)
tensor_operator_registry.register('tensor_scatter_min', P.TensorScatterMin)
tensor_operator_registry.register('tensor_scatter_sub', P.TensorScatterSub)
tensor_operator_registry.register('tensor_scatter_add', P.TensorScatterAdd)
tensor_operator_registry.register('bernoulli', bernoulli)
@@ -6753,34 +6753,13 @@ class TensorScatterMax(_TensorScatterOp):


class TensorScatterMin(_TensorScatterOp):
    """
    By comparing the value at the position indicated by the index in input_x with the value in the `updates`,
    By comparing the value at the position indicated by `indices` in `input_x` with the value in the `updates`,
    the value at the index will eventually be equal to the smallest one to create a new tensor.

    The last axis of the index is the depth of each index vector. For each index vector,
    there must be a corresponding value in `updates`. The shape of `updates` should be
    equal to the shape of input_x[indices].
    For more details, see use cases.

    Note:
        If some values of the `indices` are out of bound, instead of raising an index error,
        the corresponding `updates` will not be updated to `input_x`.

    Inputs:
        - **input_x** (Tensor) - The target tensor. The dimension of input_x must be no less than indices.shape[-1].
        - **indices** (Tensor) - The index of input tensor whose data type is int32 or int64.
          The rank must be at least 2.
        - **updates** (Tensor) - The tensor to update the input tensor, has the same type as input,
          and updates.shape should be equal to indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

    Outputs:
        Tensor, has the same shape and type as `input_x`.

    Raises:
        TypeError: If dtype of `indices` is neither int32 nor int64.
        ValueError: If length of shape of `input_x` is less than the last dimension of shape of `indices`.
    Refer to :func:`mindspore.ops.tensor_scatter_min` for more detail.

    Supported Platforms:
        ``GPU``
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
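After this change, the same computation is reachable through three equivalent entry points: the existing primitive, the new functional interface, and the new Tensor method. A sketch assuming a MindSpore build that includes this merge:

.. code-block:: python

    import numpy as np
    from mindspore import Tensor, ops

    x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]], dtype=np.float32))
    indices = Tensor(np.array([[0, 0], [0, 0]], dtype=np.int32))
    updates = Tensor(np.array([1.0, 2.2], dtype=np.float32))

    a = ops.TensorScatterMin()(x, indices, updates)   # primitive (already available)
    b = ops.tensor_scatter_min(x, indices, updates)   # functional interface added here
    c = x.tensor_scatter_min(indices, updates)        # Tensor method added here
    print(a)
    print(b)
    print(c)   # all three print [[-0.1  0.3  3.6] [ 0.4  0.5 -3.2]]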