!34078 Add tensor & functional interfaces for the TensorScatterSub op.

Merge pull request !34078 from liangzelang/dev_tensorscatterop_cpu
This commit is contained in:
i-robot 2022-05-18 01:15:18 +00:00 committed by Gitee
commit 434d2a82e4
No known key found for this signature in database
GPG Key ID: 173E9B9CA92EEF8F
12 changed files with 166 additions and 46 deletions


@@ -268,6 +268,7 @@ Array Operations
mindspore.ops.shape
mindspore.ops.size
mindspore.ops.tensor_scatter_add
mindspore.ops.tensor_scatter_sub
mindspore.ops.tensor_scatter_div
mindspore.ops.space_to_batch_nd
mindspore.ops.tile


@@ -890,6 +890,29 @@ mindspore.Tensor
    - **TypeError** - The dtype of `indices` is neither int32 nor int64.
    - **ValueError** - The length of the tensor's shape is less than the last dimension of the shape of `indices`.

    .. py:method:: tensor_scatter_sub(indices, updates)

        Creates a new tensor by subtracting the given update values at the positions specified by the input indices and returning the result as the operator output. When multiple values are provided for the same index, each of these values is subtracted from the result in turn. This operation is almost equivalent to using :class:`mindspore.ops.ScatterNdSub`, except that the updated result is returned through the operator output instead of updating the input in place.

        The last axis of `indices` is the depth of each index vector. For each index vector, there must be a corresponding value in `updates`. The shape of `updates` should equal the shape of `self[indices]`. For more details, see the use cases.

        .. note::
            If some values of `indices` are out of range, the corresponding `updates` are not applied to the tensor, instead of raising an index error.

        **Parameters:**

        - **indices** (Tensor) - The index of the tensor, with data type int32 or int64. Its rank must be at least 2.
        - **updates** (Tensor) - The tensor to subtract from this tensor, with the same data type as this tensor. updates.shape should equal indices.shape[:-1] + self.shape[indices.shape[-1]:].

        **Returns:**

        Tensor, with the same shape and data type as the original tensor.

        **Raises:**

        - **TypeError** - The dtype of `indices` is neither int32 nor int64.
        - **ValueError** - The length of the tensor's shape is less than the last dimension of the shape of `indices`.

    .. py:method:: tensor_scatter_div(indices, updates)

        Creates a new tensor by dividing according to the specified indices and assigning the result to the output tensor.


@@ -5,23 +5,5 @@
Updates the values of the input tensor by subtraction, according to the given update values and input indices. When multiple different values are provided for the same index, each of these values is subtracted from the result in turn. This operation is almost equivalent to using :class:`mindspore.ops.ScatterNdSub`, except that the updated result is returned through the operator output instead of updating the input in place.
The last axis of `indices` is the depth of each index vector. For each index vector, there must be a corresponding value in `updates`. The shape of `updates` should equal the shape of `input_x[indices]`. For more details, see the use cases.
For more details, see :func:`mindspore.ops.tensor_scatter_sub`.
.. note::
    If some values of `indices` are out of range, the corresponding `updates` are not applied to `input_x`, instead of raising an index error.
**Inputs:**
- **input_x** (Tensor) - The input tensor. The dimension of `input_x` must be no less than indices.shape[-1].
- **indices** (Tensor) - The index of the input tensor, with data type int32 or int64. Its rank must be at least 2.
- **updates** (Tensor) - The tensor to subtract from `input_x`, with the same data type as the input. updates.shape should equal indices.shape[:-1] + input_x.shape[indices.shape[-1]:].
**Outputs:**
Tensor, with the same shape and data type as `input_x`.
**Raises:**
- **TypeError** - The dtype of `indices` is neither int32 nor int64.
- **ValueError** - The length of the shape of `input_x` is less than the last dimension of the shape of `indices`.
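The updates.shape rule stated above can be checked with a short snippet. This is illustrative only and not part of the patch; the shapes below are made up:

```python
# Illustrative check of the documented shape contract (hypothetical shapes):
# updates.shape == indices.shape[:-1] + input_x.shape[indices.shape[-1]:]
input_x_shape = (4, 5, 6)
indices_shape = (3, 2)  # 3 index vectors, each of depth 2
updates_shape = indices_shape[:-1] + input_x_shape[indices_shape[-1]:]
print(updates_shape)  # each depth-2 index vector selects a length-6 slice
```

The rank-at-least-2 requirement on `indices` follows from the same contract: the last axis holds the index vector and the leading axes enumerate the updates.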


@@ -0,0 +1,26 @@
mindspore.ops.tensor_scatter_sub
================================

.. py:function:: mindspore.ops.tensor_scatter_sub(input_x, indices, updates)

    Creates a new tensor by subtracting the given update values at the positions specified by the input indices and returning the result as the operator output. When multiple values are provided for the same index, each of these values is subtracted from the result in turn. This operation is almost equivalent to using :class:`mindspore.ops.ScatterNdSub`, except that the updated result is returned through the operator output instead of updating the input in place.

    The last axis of `indices` is the depth of each index vector. For each index vector, there must be a corresponding value in `updates`. The shape of `updates` should equal the shape of `input_x[indices]`. For more details, see the use cases.

    .. note::
        If some values of `indices` are out of range, the corresponding `updates` are not applied to `input_x`, instead of raising an index error.

    **Parameters:**

    - **input_x** (Tensor) - The input tensor. The dimension of `input_x` must be no less than indices.shape[-1].
    - **indices** (Tensor) - The index of the input tensor, with data type int32 or int64. Its rank must be at least 2.
    - **updates** (Tensor) - The tensor to subtract from `input_x`, with the same data type as the input. updates.shape should equal indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

    **Returns:**

    Tensor, with the same shape and data type as `input_x`.

    **Raises:**

    - **TypeError** - The dtype of `indices` is neither int32 nor int64.
    - **ValueError** - The length of the shape of `input_x` is less than the last dimension of the shape of `indices`.
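As a sanity check on these semantics, here is a minimal NumPy sketch. It is not part of the patch; `tensor_scatter_sub_ref` is a hypothetical helper and assumes full-depth index vectors (indices.shape[-1] equals the rank of `x`):

```python
import numpy as np

def tensor_scatter_sub_ref(x, indices, updates):
    # Out-of-place scatter-subtract: duplicate index vectors accumulate,
    # so each duplicate's update is subtracted in turn.
    out = x.copy()
    np.subtract.at(out, tuple(indices.T), updates)
    return out

x = np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]], dtype=np.float32)
indices = np.array([[0, 0], [0, 0]], dtype=np.int32)  # same position twice
updates = np.array([1.0, 2.2], dtype=np.float32)
print(tensor_scatter_sub_ref(x, indices, updates))  # x[0, 0] becomes -0.1 - 1.0 - 2.2
```

`np.subtract.at` is used rather than `out[idx] -= updates` because fancy-index assignment would apply only the last duplicate, whereas the documented behavior accumulates all of them.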


@@ -267,6 +267,7 @@ Array Operation
mindspore.ops.shape
mindspore.ops.size
mindspore.ops.tensor_scatter_add
mindspore.ops.tensor_scatter_sub
mindspore.ops.tensor_scatter_div
mindspore.ops.space_to_batch_nd
mindspore.ops.tile


@@ -219,6 +219,7 @@ BuiltInTypeMap &GetMethodMap() {
{"scatter_nd_add", std::string("scatter_nd_add")},          // scatter_nd_add()
{"scatter_nd_sub", std::string("scatter_nd_sub")},          // scatter_nd_sub()
{"tensor_scatter_add", std::string("tensor_scatter_add")},  // P.TensorScatterAdd()
{"tensor_scatter_sub", std::string("tensor_scatter_sub")},  // P.TensorScatterSub()
{"tensor_scatter_div", std::string("tensor_scatter_div")},  // P.TensorScatterDiv()
{"lp_norm", std::string("lp_norm")},                        // lp_norm()
{"trace", std::string("trace")},                            // P.Eye()


@@ -1685,6 +1685,15 @@ def tensor_scatter_add(x, indices, updates):
    return F.tensor_scatter_add(x, indices, updates)


def tensor_scatter_sub(x, indices, updates):
    """
    Creates a new tensor by subtracting the values from the positions in `x` indicated by
    `indices`, with values from `updates`. When multiple values are given for the same
    index, each of these values is subtracted from the result in turn.
    """
    return F.tensor_scatter_sub(x, indices, updates)


def tensor_sactter_div(input_x, indices, updates):
    """
    Create a new tensor by dividing the values from the positions in `input_x` indicated by


@@ -1989,6 +1989,52 @@ class Tensor(Tensor_):
        self._init_check()
        return tensor_operator_registry.get("tensor_scatter_add")()(self, indices, updates)

    def tensor_scatter_sub(self, indices, updates):
        """
        Creates a new tensor by subtracting the values from the positions in the self tensor indicated by
        `indices`, with values from `updates`. When multiple values are provided for the same
        index, each of these values is subtracted from the result in turn. This operation is almost
        equivalent to using :class:`mindspore.ops.ScatterNdSub`, except that the updates are applied to
        the output `Tensor` instead of the input `Parameter`.

        The last axis of `indices` is the depth of each index vector. For each index vector,
        there must be a corresponding value in `updates`. The shape of `updates` should be
        equal to the shape of `self[indices]`. For more details, see use cases.

        Note:
            If some values of `indices` are out of bound, the corresponding `updates` are not
            applied to the tensor, instead of raising an index error.

        Args:
            indices (Tensor): The index of the input tensor whose data type is int32 or int64.
                The rank must be at least 2.
            updates (Tensor): The tensor to update the input tensor, has the same type as the input,
                and updates.shape should be equal to indices.shape[:-1] + self.shape[indices.shape[-1]:].

        Returns:
            Tensor, has the same shape and type as the self tensor.

        Raises:
            TypeError: If dtype of `indices` is neither int32 nor int64.
            ValueError: If the length of the shape of the self tensor is less than the last dimension
                of the shape of `indices`.

        Supported Platforms:
            ``Ascend`` ``GPU`` ``CPU``

        Examples:
            >>> import numpy as np
            >>> from mindspore import Tensor
            >>> x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]).astype('float32'))
            >>> indices = Tensor(np.array([[0, 0], [0, 0]]).astype('int32'))
            >>> updates = Tensor(np.array([1.0, 2.2]).astype('float32'))
            >>> output = x.tensor_scatter_sub(indices, updates)
            >>> print(output)
            [[-3.3000002  0.3        3.6      ]
             [ 0.4        0.5       -3.2      ]]
        """
        self._init_check()
        return tensor_operator_registry.get('tensor_scatter_sub')()(self, indices, updates)

    def fill(self, value):
        """
        Fill the tensor with a scalar value.


@@ -23,9 +23,9 @@ from . import array_func, parameter_func, math_func, nn_func
from .array_func import (unique, eye, matrix_band_part, fill, fill_, tile, size, ones, ones_like, shape, shape_, ger,
                         dyn_shape, rank, reshape, reshape_, tensor_slice, slice, scalar_to_array, scalar_to_tensor,
                         tuple_to_array, expand_dims, transpose, scatter_nd, scatter_nd_add, scatter_nd_sub, gather,
                         gather_d, gather_nd, scalar_cast, masked_fill, tensor_scatter_add, tensor_scatter_sub,
                         tensor_scatter_div, scatter_max, scatter_min, nonzero, space_to_batch_nd, range, select)
from .parameter_func import assign, assign_add, assign_sub, index_add
from .math_func import (addn, absolute, abs, tensor_add, add, neg_tensor, neg, tensor_lt, less, tensor_le, le, lerp,
                        lp_norm, round, tensor_gt, gt, tensor_ge, ge, tensor_sub, sub, tensor_mul, mul, tensor_div, div,


@@ -1398,6 +1398,57 @@ def tensor_scatter_add(input_x, indices, updates):
    return tensor_scatter_add_(input_x, indices, updates)


tensor_scatter_sub_ = P.TensorScatterSub()


def tensor_scatter_sub(input_x, indices, updates):
    """
    Creates a new tensor by subtracting the values from the positions in `input_x` indicated by
    `indices`, with values from `updates`. When multiple values are provided for the same
    index, each of these values is subtracted from the result in turn. This operation is almost
    equivalent to using :class:`mindspore.ops.ScatterNdSub`, except that the updates are applied to
    the output `Tensor` instead of the input `Parameter`.

    The last axis of `indices` is the depth of each index vector. For each index vector,
    there must be a corresponding value in `updates`. The shape of `updates` should be
    equal to the shape of `input_x[indices]`. For more details, see use cases.

    Note:
        If some values of `indices` are out of bound, the corresponding `updates` are not
        applied to `input_x`, instead of raising an index error.

    Args:
        input_x (Tensor): The target tensor. The dimension of `input_x` must be no less than indices.shape[-1].
        indices (Tensor): The index of the input tensor whose data type is int32 or int64.
            The rank must be at least 2.
        updates (Tensor): The tensor to update the input tensor, has the same type as the input,
            and updates.shape should be equal to indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

    Returns:
        Tensor, has the same shape and type as `input_x`.

    Raises:
        TypeError: If dtype of `indices` is neither int32 nor int64.
        ValueError: If the length of the shape of `input_x` is less than the last dimension of the shape of `indices`.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> import mindspore
        >>> import numpy as np
        >>> from mindspore import Tensor
        >>> from mindspore import ops
        >>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
        >>> indices = Tensor(np.array([[0, 0], [0, 0]]), mindspore.int32)
        >>> updates = Tensor(np.array([1.0, 2.2]), mindspore.float32)
        >>> output = ops.tensor_scatter_sub(input_x, indices, updates)
        >>> print(output)
        [[-3.3000002  0.3        3.6      ]
         [ 0.4        0.5       -3.2      ]]
    """
    return tensor_scatter_sub_(input_x, indices, updates)


def space_to_batch_nd(input_x, block_size, paddings):
    r"""
    Divides a tensor's spatial dimensions into blocks and combines the block sizes with the original batch.

@@ -1724,6 +1775,7 @@ __all__ = [
    'scatter_nd_add',
    'scatter_nd_sub',
    'tensor_scatter_add',
    'tensor_scatter_sub',
    'gather',
    'gather_d',
    'gather_nd',
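The Note in the docstring above says out-of-bound indices are silently skipped rather than raising. Under that reading, the filtering can be sketched in NumPy as follows; `tensor_scatter_sub_skip_oob` is a hypothetical helper (not part of the patch) and assumes full-depth index vectors:

```python
import numpy as np

def tensor_scatter_sub_skip_oob(x, indices, updates):
    # Drop index vectors with any component out of range, then
    # scatter-subtract the remaining updates out of place.
    in_range = np.all((indices >= 0) & (indices < np.array(x.shape)), axis=-1)
    out = x.copy()
    np.subtract.at(out, tuple(indices[in_range].T), updates[in_range])
    return out

x = np.zeros((2, 3), dtype=np.float32)
indices = np.array([[0, 1], [9, 9]], dtype=np.int32)  # second vector is out of range
updates = np.array([5.0, 7.0], dtype=np.float32)
print(tensor_scatter_sub_skip_oob(x, indices, updates))  # only x[0, 1] changes, to -5.0
```

Whether the real kernels skip or clamp invalid indices may differ per backend; this sketch only mirrors the behavior the documentation describes.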


@@ -70,7 +70,6 @@ scatter_update = P.ScatterUpdate()
tensor_scatter_update = P.TensorScatterUpdate()
tensor_scatter_min = P.TensorScatterMin()
tensor_scatter_max = P.TensorScatterMax()
tensor_scatter_sub = P.TensorScatterSub()
tensor_scatter_mul = P.TensorScatterMul()
scatter_nd_update = P.ScatterNdUpdate()
stack = P.Stack()
@@ -942,6 +941,7 @@ tensor_operator_registry.register('scatter_nd_add', scatter_nd_add)
tensor_operator_registry.register('scatter_nd_sub', scatter_nd_sub)
tensor_operator_registry.register('tensor_scatter_update', tensor_scatter_update)
tensor_operator_registry.register('tensor_scatter_div', tensor_scatter_div)
tensor_operator_registry.register('tensor_scatter_sub', P.TensorScatterSub)
tensor_operator_registry.register('tensor_scatter_add', P.TensorScatterAdd)
tensor_operator_registry.register('lp_norm', lp_norm)
__all__ = [name for name in dir() if name[0] != "_"]


@@ -6801,31 +6801,10 @@ class TensorScatterSub(_TensorScatterOp):
    index, each of these values is subtracted from the result in turn. This operation is almost
    equivalent to using :class:`mindspore.ops.ScatterNdSub`, except that the updates are applied to
    the output `Tensor` instead of the input `Parameter`.

    Refer to :func:`mindspore.ops.tensor_scatter_sub` for more detail.

    The last axis of `indices` is the depth of each index vector. For each index vector,
    there must be a corresponding value in `updates`. The shape of `updates` should be
    equal to the shape of `input_x[indices]`. For more details, see use cases.

    Note:
        If some values of `indices` are out of bound, the corresponding `updates` are not
        applied to `input_x`, instead of raising an index error.

    Inputs:
        - **input_x** (Tensor) - The target tensor. The dimension of `input_x` must be no less than
          indices.shape[-1].
        - **indices** (Tensor) - The index of the input tensor whose data type is int32 or int64.
          The rank must be at least 2.
        - **updates** (Tensor) - The tensor to update the input tensor, has the same type as the input,
          and updates.shape should be equal to indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

    Outputs:
        Tensor, has the same shape and type as `input_x`.

    Raises:
        TypeError: If dtype of `indices` is neither int32 nor int64.
        ValueError: If the length of the shape of `input_x` is less than the last dimension of the shape of `indices`.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)