q3 collective review part 4
This commit is contained in: parent b0b30f6447, commit 6e7fe8475d
@@ -3,12 +3,11 @@ mindspore.ops.MultitypeFuncGraph

 .. py:class:: mindspore.ops.MultitypeFuncGraph(name, read_value=False)

-    MultitypeFuncGraph is a class that generates overloaded functions which accept different types as inputs. Use `name` to initialize a MultitypeFuncGraph, and register input
-    types by decorating the function with the typed `register` decorator. The function can then be called with inputs of different types, and is generally used together with `HyperMap` and `Map`.
+    MultitypeFuncGraph is a class that generates overloaded functions which accept different types as inputs. Use `name` to initialize a MultitypeFuncGraph object, then register input types by decorating the function with the typed `register` decorator. The function can then be called with inputs of different types, and is generally used together with `HyperMap` and `Map`.

     Parameters:
         - **name** (str) - Operation name.
-        - **read_value** (bool, optional) - Set `read_value` to True if the registered function makes no changes to the input values, i.e. all inputs are passed by value. Defaults to: False.
+        - **read_value** (bool, optional) - Set `read_value` to True if the registered function makes no changes to the input values, i.e. all inputs are passed by value. Default: False.

     Raises:
         - **ValueError** - No function matches the given argument types.
@@ -3,9 +3,10 @@ mindspore.ops.NPUGetFloatStatus

 .. py:class:: mindspore.ops.NPUGetFloatStatus

-    Updates the flag, obtaining the latest overflow status by executing :class:`mindspore.ops.NPUAllocFloatStatus`.
+    After :class:`mindspore.ops.NPUAllocFloatStatus` is executed, :class:`mindspore.ops.NPUGetFloatStatus` obtains the latest overflow status and updates the flag.

-    The flag is a Tensor whose shape is :math:`(8,)` and dtype is `mindspore.dtype.float32`. If the sum of the flag equals 0, no overflow has occurred; if the sum is greater than 0, an overflow has occurred. In addition, there is a strict ordering requirement: before using the :class:`NPUGetFloatStatus` operator, make sure that :class:`NPUClearFloatStatus` and the computation to be checked have been executed. Use :class:`mindspore.ops.Depend` to guarantee the execution order.
+    .. note::
+        The flag is a Tensor whose shape is :math:`(8,)` and dtype is `mindspore.dtype.float32`. If the sum of the flag equals 0, no overflow has occurred; if the sum is greater than 0, an overflow has occurred. In addition, there is a strict ordering requirement: before using the :class:`NPUGetFloatStatus` operator, make sure that :class:`NPUClearFloatStatus` and the computation to be checked have been executed. Use :class:`mindspore.ops.Depend` to guarantee the execution order.

     Inputs:
         - **x** (Tensor) - The output Tensor of :class:`NPUAllocFloatStatus`. The dtype must be float16 or float32. The shape is :math:`(N,*)`, where :math:`*` means any number of additional dimensions; its rank should be less than 8.
@@ -20,8 +20,8 @@ mindspore.ops.NoRepeatNGram

     Raises:
         - **TypeError** - If `ngram_size` is not an int.
         - **TypeError** - If `state_seq` or `log_probs` is not a Tensor.
-        - **TypeError** - If the dtype of `state_seq` is not int.
-        - **TypeError** - If the dtype of `log_probs` is not float.
+        - **TypeError** - If the data type of `state_seq` is not int.
+        - **TypeError** - If the data type of `log_probs` is not float.
         - **ValueError** - If `ngram_size` is less than 0.
         - **ValueError** - If `ngram_size` is greater than m.
         - **ValueError** - If `state_seq` or `log_probs` is not a 3-D Tensor.
@@ -3,16 +3,16 @@ mindspore.ops.ParallelConcat

 .. py:class:: mindspore.ops.ParallelConcat

-    Concatenates Tensors along the first dimension.
+    Concatenates the input Tensors along the first dimension.

-    The difference between Concat and parallel Concat is that Concat requires all of the inputs to be computed before the operation starts, but does not require the input shapes to be known during graph construction.
+    The difference between Concat and ParallelConcat is that Concat requires all of the inputs to be computed before the operation starts, but does not require the input shapes to be known during graph construction.
     ParallelConcat copies each input slice into the output as it becomes available, which can provide a performance advantage in some cases.

     .. note::
         The input Tensors are required to have length 1 along the first dimension.

     Inputs:
-        - **values** (tuple, list) - A tuple or list of Tensors. Their elements must have the same dtype and shape. On CPU, numeric dtypes are supported; on Ascend, all numeric dtypes are supported except the three dtypes [float64, complex64, complex128].
+        - **values** (tuple, list) - A tuple or list of Tensors. Their elements must have the same dtype and shape, and the rank of each Tensor must not be less than 1. On CPU, numeric dtypes are supported; on Ascend, all numeric dtypes are supported except the three dtypes [float64, complex64, complex128].

     Outputs:
         Tensor, with the same dtype as `values`.
@@ -21,5 +21,5 @@ mindspore.ops.ParallelConcat

         - **TypeError** - If the input is not a Tensor.
         - **TypeError** - If the dtypes and shapes of the Tensors in `values` are not the same.
         - **ValueError** - If the length of the first dimension of any input Tensor's shape is not 1.
-        - **ValueError** - If the rank of `values` is less than 1.
+        - **ValueError** - If the rank of any Tensor in `values` is less than 1.
         - **ValueError** - If the shapes of the input Tensors are inconsistent.
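The ParallelConcat semantics above can be illustrated with a minimal pure-Python sketch (a list-based illustration with a hypothetical helper name, not the MindSpore implementation): every input must have first-dimension length 1, and each single slice is copied into the output.

```python
def parallel_concat(values):
    # Each input "tensor" (nested list) must have length 1 along the first dimension.
    for v in values:
        assert len(v) == 1, "each input must have first-dimension length 1"
    # Copy each input's single slice into the output as it becomes available.
    return [v[0] for v in values]

print(parallel_concat([[[1, 2]], [[3, 4]], [[5, 6]]]))  # → [[1, 2], [3, 4], [5, 6]]
```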
@@ -2,13 +2,11 @@ mindspore.ops.Partial
======================

 .. py:class:: mindspore.ops.Partial

     Partial function.

-    To generate an instance of a partial function, pass in the target function and its corresponding arguments.
+    Generates an instance of a partial function. A new function with specialized behavior is derived by supplying initial values for some of the arguments of a general function.

     Inputs:
         - **args** (Union[FunctionType, Tensor]) - The function to be wrapped and its corresponding arguments.

     Outputs:
-        Function type, the partial function with its bound arguments.
+        FunctionType, the partial function with its bound arguments.
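The "supply initial values for some arguments of a general function" behavior described here mirrors Python's standard `functools.partial`; a small plain-Python illustration (not the MindSpore operator itself):

```python
from functools import partial

def add(x, y, z):
    # A general three-argument function.
    return x + y + z

# Binding the first argument derives a new function with specialized behavior.
add_ten = partial(add, 10)
print(add_ten(2, 3))  # → 15
```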
@@ -12,14 +12,14 @@ mindspore.ops.ReduceOp

     - MIN: ReduceOp.MIN.
     - PROD: ReduceOp.PROD.

-    .. note::
-        For more information, see the examples. This needs to run in an environment with multiple accelerator cards.
-        Before running the following examples, users need to preset communication environment variables; see the official `MindSpore \
-        <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.ops.html#通信算子>`_ website.

     There are four operation options: "SUM", "MAX", "MIN" and "PROD".

     - SUM: sum.
     - MAX: maximum.
     - MIN: minimum.
     - PROD: product.

+    .. note::
+        For more information, see the examples. This needs to run in an environment with multiple accelerator cards.
+        Before running the following examples, users need to preset communication environment variables; see the official `MindSpore \
+        <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.ops.html#通信算子>`_ website.
@@ -6,7 +6,7 @@ mindspore.ops.Rint

     Computes the integer closest to the input data, element-wise.

     Inputs:
-        - **input_x** (Tensor) - The Tensor to compute. The data must be float16 or float32. The shape is :math:`(N,*)`, where :math:`*` means any number of additional dimensions.
+        - **input_x** (Tensor) - The Tensor to compute. The data must be float16, float32 or float64. The shape is :math:`(N,*)`, where :math:`*` means any number of additional dimensions.

     Outputs:
         Tensor, with the same shape and dtype as `input_x`.
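"Closest integer" for rint-style operators conventionally means round-half-to-even; assuming this operator follows that convention (an assumption worth verifying against the kernel), Python's built-in `round`, which also rounds ties to even, shows the behavior:

```python
# Round-half-to-even: ties like 0.5 and 2.5 go to the nearest even integer.
vals = [0.5, 1.5, 2.5, -0.6, 2.4]
print([round(v) for v in vals])  # → [0, 2, 2, -1, 2]
```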
@@ -9,7 +9,7 @@ mindspore.ops.Rsqrt

     out_{i} = \frac{1}{\sqrt{x_{i}}}

     Inputs:
-        - **x** (Tensor) - The input of Rsqrt. Its rank must be in [0, 7] inclusive, and each element must be non-negative.
+        - **x** (Tensor) - The input of Rsqrt. Its rank must be in [0, 7], and each element must be non-negative.

     Outputs:
         Tensor, with the same dtype and shape as `x`.
@@ -3,7 +3,7 @@ mindspore.ops.ScalarSummary

 .. py:class:: mindspore.ops.ScalarSummary

-    Outputs a scalar to a protocol buffer through a scalar summary operator.
+    Outputs a scalar to a protocol buffer through ScalarSummary.

     Inputs:
         - **name** (str) - The name of the input variable, which must not be an empty string.
@@ -12,3 +12,4 @@ mindspore.ops.ScalarSummary

     Raises:
         - **TypeError** - If `name` is not a str.
         - **TypeError** - If `value` is not a Tensor.
+        - **ValueError** - If the dimension of `value` is greater than 1.
@@ -24,4 +24,4 @@ mindspore.ops.Select

     Raises:
         - **TypeError** - If `x` or `y` is not a Tensor.
-        - **ValueError** - If the shape of `x` is not consistent with the shape of `y` or `condition`.
+        - **ValueError** - If the shapes of the three inputs are inconsistent.
@@ -12,14 +12,14 @@ mindspore.ops.SparseApplyFtrl

     - **l1** (float) - l1 regularization, which must be greater than or equal to zero.
     - **l2** (float) - l2 regularization, which must be greater than or equal to zero.
     - **lr_power** (float) - Controls decreasing the learning rate during training; must be less than or equal to zero. If lr_power is zero, a fixed learning rate is used.
-    - **use_locking** (bool) - Whether to protect the parameter update with a lock. Default: False.
+    - **use_locking** (bool, optional) - Whether to protect the parameter update with a lock. Default: False.

     Inputs:
         - **var** (Parameter) - The weights to be updated. The dtype must be float16 or float32. The shape is :math:`(N, *)`, where :math:`*` means any number of additional dimensions.
         - **accum** (Parameter) - The accumulated values to be updated; the shape and dtype must be the same as `var`.
         - **linear** (Parameter) - The linear coefficients to be updated; the shape and dtype must be the same as `var`.
-        - **grad** (Tensor) - The gradient, a Tensor. The dtype must be the same as `var`, and it must satisfy :math:`grad.shape[1:] = var.shape[1:] if var.shape > 1`.
-        - **indices** (Tensor) - The index vector along the first dimension of `var` and `accum`, with dtype int32 or int64, and it must satisfy :math:`indices.shape[0] = grad.shape[0]`.
+        - **grad** (Tensor) - The gradient, a Tensor. The dtype must be the same as `var`, and it must satisfy: if `var.shape > 1`, then :math:`grad.shape[1:] = var.shape[1:]` .
+        - **indices** (Tensor) - The index vector along the first dimension of `var` and `accum`, with dtype int32 or int64, and it must satisfy :math:`indices.shape[0] = grad.shape[0]` .

     Outputs:
         - **var** (Tensor) - The same shape and dtype as `var`.
@@ -21,9 +21,9 @@ mindspore.ops.SparseApplyProximalAdagrad

     Inputs:
         - **var** (Parameter) - The "var" in the formula. The dtype must be float16 or float32. The shape is :math:`(N,*)`, where :math:`*` means any number of additional dimensions.
         - **accum** (Parameter) - The "accum" in the formula, with the same shape and dtype as `var`.
-        - **lr** (Union[Number, Tensor]) - The learning rate, which must be a float or a Tensor with dtype float16 or float32.
-        - **l1** (Union[Number, Tensor]) - l1 regularization, which must be a float or a Tensor with dtype float16 or float32.
-        - **l2** (Union[Number, Tensor]) - l2 regularization, which must be a float or a Tensor with dtype float16 or float32.
+        - **lr** (Union[Number, Tensor]) - The learning rate, which must be a float or a Tensor with dtype float16 or float32. Must be greater than zero.
+        - **l1** (Union[Number, Tensor]) - l1 regularization, which must be a float or a Tensor with dtype float16 or float32. Must be greater than or equal to zero.
+        - **l2** (Union[Number, Tensor]) - l2 regularization, which must be a float or a Tensor with dtype float16 or float32. Must be greater than or equal to zero.
         - **grad** (Tensor) - The gradient, with the same dtype as `var`. If the shape of `var` is greater than 1, then :math:`grad.shape[1:] = var.shape[1:]` .
         - **indices** (Tensor) - The indices along the first dimension of `var` and `accum`. The result is undefined if there are duplicates in `indices`. The dtype must be int32 or int64, and :math:`indices.shape[0] = grad.shape[0]` .
@@ -37,4 +37,5 @@ mindspore.ops.SparseApplyProximalAdagrad

     - **TypeError** - If `use_locking` is not a bool.
     - **TypeError** - If the dtype of `var`, `accum`, `lr`, `l1`, `l2` or `grad` is neither float16 nor float32.
     - **TypeError** - If the dtype of `indices` is neither int32 nor int64.
+    - **ValueError** - If `lr` <= 0, or `l1` < 0, or `l2` < 0.
     - **RuntimeError** - If data type conversion between the parameters `var`, `accum` and `grad` is not supported.
@@ -3,16 +3,16 @@ mindspore.ops.SparseTensorDenseAdd

 .. py:class:: mindspore.ops.SparseTensorDenseAdd

-    Adds a sparse tensor and a dense tensor to produce a dense tensor.
+    Adds a sparse Tensor and a dense Tensor to produce a dense Tensor.

     Inputs:
-        - **x1_indices** (Tensor) - A 2-D Tensor representing the index positions of the sparse Tensor, with shape :math:`(n, 2)`.
-        - **x1_values** (Tensor) - A 1-D Tensor representing the values at the sparse Tensor's index positions, with shape :math:`(n,)`.
-        - **x1_shape** (tuple(int)) - The dense shape corresponding to the sparse Tensor, with shape :math:`(N, C)`.
-        - **x2** (Tensor) - The dense Tensor, with the same dtype as the sparse Tensor.
+        - **x1_indices** (Tensor) - A 2-D Tensor representing the index positions of the sparse Tensor, with shape :math:`(n, 2)`. The supported dtypes are int32 and int64, and its values must be non-negative.
+        - **x1_values** (Tensor) - A 1-D Tensor representing the values at the sparse Tensor's index positions, with shape :math:`(n,)`.
+        - **x1_shape** (tuple(int)) - The shape of the dense Tensor corresponding to the sparse Tensor: a tuple of length 2 containing no negative numbers, with shape :math:`(N, C)`.
+        - **x2** (Tensor) - The dense Tensor, with the same dtype as the sparse Tensor.

     Outputs:
-        A tensor whose shape is consistent with `x1_shape`.
+        Tensor, whose shape is consistent with `x1_shape`.

     Raises:
         - **TypeError** - If `x1_indices` or `x1_shape` is not int32 or int64.
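The sparse-plus-dense addition above can be sketched in plain Python (a list-based illustration with hypothetical names, not the MindSpore kernel): each sparse value is added into a copy of the dense input at its (row, col) index.

```python
def sparse_tensor_dense_add(x1_indices, x1_values, x1_shape, x2):
    # Start from a copy of the dense input; the output shape equals x1_shape.
    assert (len(x2), len(x2[0])) == tuple(x1_shape)
    out = [row[:] for row in x2]
    # Add each sparse value at its (row, col) index.
    for (r, c), v in zip(x1_indices, x1_values):
        out[r][c] += v
    return out

dense = [[1.0, 1.0], [1.0, 1.0]]
print(sparse_tensor_dense_add([(0, 0), (1, 1)], [5.0, 7.0], (2, 2), dense))
```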
@@ -11,7 +11,7 @@ mindspore.ops.SparseTensorDenseMatmul

     Inputs:
         - **indices** (Tensor) - A 2-D Tensor representing the positions of elements in the sparse Tensor. int32 and int64 are supported, and every element must be non-negative. The shape is :math:`(n,2)` .
-        - **values** (Tensor) - A 1-D Tensor representing the values corresponding to the positions in `indices`. float16, float32, float64, int32, int64, complex64 and complex128 are supported. The shape should be :math:`(n,)` .
+        - **values** (Tensor) - A 1-D Tensor representing the values corresponding to the positions in `indices`. float16, float32, float64, int32, int64, complex64 and complex128 are supported. The shape is :math:`(n,)` .
         - **sparse_shape** (tuple(int) or Tensor) - Specifies the shape of the sparse Tensor; it consists of two positive integers, indicating that the shape of the sparse Tensor is :math:`(N, C)` .
         - **dense** (Tensor) - A 2-D Tensor, with the same dtype as `values`.
@@ -10,7 +10,7 @@ mindspore.ops.SparseToDense

     - **values** (Tensor) - A 1-D Tensor representing the values corresponding to the positions in `indices`. The shape is :math:`(n,)` .
     - **sparse_shape** (tuple(int)) - Specifies the shape of the sparse Tensor; it consists of two positive integers, indicating that the shape of the sparse Tensor is :math:`(N, C)` .

-    Returns:
+    Outputs:
         Tensor, the computed Tensor. The dtype is the same as `values`, and the shape is specified by `sparse_shape`.

     Raises:
@@ -3,12 +3,14 @@ mindspore.ops.TensorSummary

 .. py:class:: mindspore.ops.TensorSummary

-    Outputs a tensor to a protocol buffer through a tensor summary operator.
+    Outputs a Tensor to a protocol buffer through TensorSummary.

     Inputs:
         - **name** (str) - The name of the input variable.
-        - **value** (Tensor) - The value of the Tensor, and the dimension of the Tensor must be greater than 0.
+        - **value** (Tensor) - The value of the Tensor; the dimension of the Tensor must be greater than 0.

+    Raises:
+        - **TypeError** - If `name` is not a str.
+        - **TypeError** - If `value` is not a Tensor.
+        - **ValueError** - If the dimension of `value` equals 0.
@@ -13,7 +13,7 @@ mindspore.ops.Unique

     - **input_x** (Tensor) - The input Tensor. The shape is :math:`(N,*)`, where :math:`*` means any number of additional dimensions.

     Outputs:
-        tuple, a pair of Tensor objects ( `y` , `idx` ). `y` has the same dtype as `input_x` and records the unique elements of `input_x`. `idx` is a Tensor that records the indices corresponding to the elements of the input `input_x`.
+        Tuple, a pair of Tensor objects ( `y` , `idx` ). `y` has the same dtype as `input_x` and records the unique elements of `input_x`. `idx` is a Tensor that records the indices corresponding to the elements of the input `input_x`.

     Raises:
         - **TypeError** - If `input_x` is not a Tensor.
@@ -3,7 +3,7 @@ mindspore.ops.pdist

 .. py:function:: mindspore.ops.pdist(x, p=2.0)

-    Computes the p-norm distance between every pair of row vectors in the input. If the input `x` has shape :math:`(N, M)`, the output is a Tensor of shape :math:`(N * (N - 1) / 2,)` .
+    Computes the p-norm distance between every pair of row vectors in the input. If the input Tensor `x` has shape :math:`(N, M)`, the output is a Tensor of shape :math:`(N * (N - 1) / 2,)` .
     If the input `x` has shape :math:`(*B, N, M)`, the output is a Tensor of shape :math:`(*B, N * (N - 1) / 2)` .

     .. math::
@@ -12,14 +12,14 @@ mindspore.ops.pdist

     where :math:`x_{i}, x_{j}` are two different row vectors in the input.

     Parameters:
-        - **x** (Tensor) - The input tensor x, with shape :math:`(*B, N, M)`, where :math:`*B` is the batch size, which may span multiple dimensions. Types: float16, float32 or float64.
+        - **x** (Tensor) - The input Tensor `x` , with shape :math:`(*B, N, M)`, where :math:`*B` is the batch size, which may span multiple dimensions. Types: float16, float32 or float64.
         - **p** (float) - The p value of the p-norm distance, :math:`p∈[0,∞]`. Default: 2.0.

     Returns:
         Tensor, with the same type as `x`.

     Raises:
-        - **TypeError** - `x` is not a tensor.
+        - **TypeError** - `x` is not a Tensor.
         - **TypeError** - The dtype of `x` is not float16, float32 or float64.
         - **TypeError** - `p` is not a float.
         - **ValueError** - `p` is negative.
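The pairwise-distance formula above can be sketched in pure Python for the unbatched :math:`(N, M)` case (a simplified illustration, not the MindSpore kernel); the `N * (N - 1) / 2` outputs are ordered as pairs (0,1), (0,2), ..., (N-2,N-1).

```python
def pdist(x, p=2.0):
    # p-norm distance between every pair of row vectors of a 2-D input.
    n = len(x)
    out = []
    for i in range(n):
        for j in range(i + 1, n):
            out.append(sum(abs(a - b) ** p for a, b in zip(x[i], x[j])) ** (1.0 / p))
    return out

print(pdist([[0.0, 0.0], [3.0, 4.0], [6.0, 8.0]]))  # → [5.0, 10.0, 5.0]
```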
@@ -18,7 +18,7 @@ mindspore.ops.prelu

     Parameters:
         - **x** (Tensor) - The input Tensor of the activation function. The dtype is float16 or float32. The shape is :math:`(N, C, *)`, where :math:`*` means any number of additional dimensions.
-        - **weight** (Tensor) - The weight Tensor. The dtype is float16 or float32. weight can only be a vector, and its length is the same as the number of channels C of the input x. On GPU devices, when the input is a scalar, the shape is 1.
+        - **weight** (Tensor) - The weight Tensor. The dtype is float16 or float32. `weight` can only be a vector, and its length is the same as the number of channels C of the input x. On GPU devices, when the input is a scalar, the shape is (1,).

     Returns:
         Tensor, with the same dtype as `x`.
@@ -3,7 +3,7 @@ mindspore.ops.random_categorical

 .. py:function:: mindspore.ops.random_categorical(logits, num_sample, seed=0, dtype=mstype.int64)

-    Draws samples from a categorical distribution.
+    Generates random samples from a categorical distribution.

     Parameters:
         - **logits** (Tensor) - The input Tensor, a 2-D Tensor with shape :math:`(batch\_size, num\_classes)` .
@@ -3,7 +3,7 @@ mindspore.ops.random_poisson

 .. py:function:: mindspore.ops.random_poisson(shape, rate, seed=None, dtype=mstype.float32)

-    Randomly samples random numbers of shape `shape` from Poisson distributions with the specified means.
+    Generates a random number Tensor with shape `shape` from a Poisson distribution with the specified mean `rate`.

     .. math::
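A Poisson sample with mean `rate` can be drawn with Knuth's classical algorithm, shown here as a stdlib-only sketch for small rates (an illustration of the distribution being sampled, not the MindSpore kernel): multiply uniform draws until the product falls below :math:`e^{-rate}`.

```python
import math
import random

def sample_poisson(rate, rng):
    # Knuth's algorithm: count how many uniform draws are needed before the
    # running product drops below exp(-rate). Suitable for small rate values.
    limit = math.exp(-rate)
    k, prod = 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k

rng = random.Random(0)
samples = [sample_poisson(5.0, rng) for _ in range(20000)]
print(sum(samples) / len(samples))  # close to the mean rate = 5.0
```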
@@ -3,9 +3,7 @@ mindspore.ops.rank

 .. py:function:: mindspore.ops.rank(input_x)

-    Returns the rank of the input Tensor.
-
-    Returns the rank of the input Tensor, a 0-dimensional value with dtype int32; the rank of a Tensor is the number of indices required to determine each element of the Tensor.
+    Returns the rank of the input Tensor, a 0-dimensional value with dtype int32. The rank of a Tensor is the number of indices required to determine each element of the Tensor.

     Parameters:
         - **input_x** (Tensor) - The shape of the Tensor is :math:`(x_1,x_2,...,x_R)` . The dtype is numeric.
@@ -3,7 +3,7 @@ mindspore.ops.relu

 .. py:function:: mindspore.ops.relu(x)

-    Rectified Linear Unit activation function.
+    Computes the Rectified Linear Unit activation value of the input Tensor element-wise.

     Returns :math:`\max(x,\ 0)` . The neurons with negative values are set to 0, and the neurons with positive values stay unchanged.
@@ -3,7 +3,7 @@ mindspore.ops.scatter_mul

 .. py:function:: mindspore.ops.scatter_mul(input_x, indices, updates)

-    Updates the values of the input data through multiplication, according to the specified update values and input indices.
+    Updates the values of the input data through multiplication, according to the specified update values and input indices. This operation outputs `input_x` after the update is done, which makes it convenient to use the updated values.

     For each `i, ..., j` in `indices.shape`:
@@ -13,8 +13,8 @@ mindspore.ops.scatter_mul

     The inputs `input_x` and `updates` follow the implicit type conversion rules to keep the dtypes consistent. If the dtypes differ, the lower-precision dtype is converted to the higher-precision one. A RuntimeError is raised when a Parameter's dtype needs to be converted.

     Parameters:
-        - **input_x** (Parameter) - The input of ScatterMul, a Parameter of any dimension.
-        - **indices** (Tensor) - The indices of the multiplication, whose dtype must be mindspore.int32.
+        - **input_x** (Parameter) - The Tensor to be updated, with data type Parameter. The shape is :math:`(N,*)`, where :math:`*` means any number of additional dimensions.
+        - **indices** (Tensor) - The indices of the multiplication, whose dtype must be int32 or int64.
         - **updates** (Tensor) - The Tensor to multiply with `input_x`, with the same dtype as `input_x` and shape `indices.shape + input_x.shape[1:]` .

     Returns:
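The scatter-multiply update can be sketched in plain Python for the 1-D index case (a list-based illustration, not the MindSpore kernel): each indexed row of `input_x` is multiplied in place by the corresponding row of `updates`, and the updated `input_x` itself is returned.

```python
def scatter_mul(input_x, indices, updates):
    # In-place multiply: input_x[indices[i]] *= updates[i] for each index,
    # then return the updated input_x so the new values are easy to use.
    for idx, upd in zip(indices, updates):
        input_x[idx] = [a * b for a, b in zip(input_x[idx], upd)]
    return input_x

x = [[1.0, 2.0], [3.0, 4.0]]
print(scatter_mul(x, [0, 1], [[2.0, 2.0], [10.0, 10.0]]))  # → [[2.0, 4.0], [30.0, 40.0]]
```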
@@ -3,7 +3,7 @@ mindspore.ops.shape

 .. py:function:: mindspore.ops.shape(input_x)

-    Returns the shape of the input Tensor. Used for static shape.
+    Returns the shape of the input Tensor. Used in static shape cases.

     Static shape: a shape that can be obtained without running the graph. It is an inherent property of the Tensor and may be unknown. The static shape information can be completed by manual setting. Whatever the inputs of the graph are, the static shape is not affected.
@@ -5,7 +5,7 @@ mindspore.ops.softmax

     Softmax function.

-    Applies the Softmax function for normalization on the specified axis. Assume there is a slice on the specified axis :math:`x`; then for each element :math:`x_i`, the Softmax function is as follows:
+    Applies the Softmax function for normalization on the input Tensor along the specified axis. Assume there is a slice on the specified axis :math:`x`; then for each element :math:`x_i`, the Softmax function is as follows:

     .. math::
         \text{output}(x_i) = \frac{exp(x_i)}{\sum_{j = 0}^{N-1}\exp(x_j)},
@@ -13,14 +13,14 @@ mindspore.ops.softmax

     where :math:`N` is the length of the Tensor.

     Parameters:
-        - **axis** (Int) - The axis to perform the Softmax operation on. Default: -1.
+        - **axis** (Union[int, tuple[int]]) - The axis to perform the Softmax operation on. Default: -1.
         - **x** (Tensor) - The input of Softmax, a Tensor of any dimension. Its dtype is float16 or float32.

     Returns:
         Tensor, with the same dtype and shape as `x`.

     Raises:
-        - **TypeError** - `axis` is not an int.
+        - **TypeError** - `axis` is not an int or a tuple.
         - **TypeError** - The dtype of `x` is neither float16 nor float32.
+        - **ValueError** - `axis` is a tuple whose length is less than 1.
+        - **ValueError** - `axis` is a tuple whose elements are not all in the range [-len(x.shape), len(x.shape)).
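The Softmax formula above can be sketched in pure Python for a single slice (an illustration of the math, not the MindSpore kernel); subtracting the maximum first keeps the exponentials numerically stable without changing the result.

```python
import math

def softmax(xs):
    # output(x_i) = exp(x_i) / sum_j exp(x_j), computed stably.
    m = max(xs)
    exps = [math.exp(v - m) for v in xs]
    s = sum(exps)
    return [e / s for e in exps]

out = softmax([1.0, 2.0, 3.0])
print(out, sum(out))  # probabilities that sum to 1
```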
@@ -3,7 +3,7 @@ mindspore.ops.sparse_add

 .. py:function:: mindspore.ops.sparse_add(x1: COOTensor, x2: COOTensor, thresh: Tensor)

-    Adds two COOTensors, returning a new COOTensor based on the sum and thresh.
+    Adds two COOTensors, returning a new COOTensor based on the sum and `thresh`.

     Parameters:
         - **x1** (COOTensor) - An operand, which is added with the other operand.
@@ -3,16 +3,17 @@ mindspore.ops.sqrt

 .. py:function:: mindspore.ops.sqrt(x)

-    Returns the square of the current Tensor element-wise.
+    Returns the square root of the current Tensor element-wise.

     .. math::
-        y_i = \sqrt(x_i)
+        out_{i} = \sqrt{x_{i}}

     Parameters:
-        - **x** (Tensor) - An input Tensor of any dimension. Its values must be greater than 0.
+        - **x** (Tensor) - The input Tensor with a dtype of number.Number; its rank must be in the range [0, 7].

     Returns:
         Tensor, with the same shape as `x`.

     Raises:
-        - **TypeError** - `x` is not a Tensor.
+        - **TypeError** - If `x` is not a Tensor.
@@ -3,23 +3,23 @@ mindspore.ops.std

 .. py:function:: mindspore.ops.std(input_x, axis=(), unbiased=True, keep_dims=False)

-    By default, outputs the standard deviation and mean over all dimensions of the Tensor; they can also be computed over specified dimensions. If `axis` is a list of dimensions, reduces over the corresponding dimensions.
+    By default, outputs the standard deviation and mean over all dimensions of the Tensor; they can also be computed over specified dimensions. If `axis` is a list of dimensions, the standard deviation and mean are computed over the corresponding dimensions.

     Parameters:
         - **input_x** (Tensor[Number]) - The input Tensor with a numeric dtype. Shape: :math:`(N, *)`, where :math:`*` means any number of additional dimensions. The rank should be less than 8.
-        - **axis** (Union[int, tuple(int), list(int)]) - The dimensions to reduce. Default: (), reduce all dimensions. Only constant values are allowed. Suppose the rank of `x` is r; the range is [-r, r).
-        - **unbiased** (bool) - If True, use Bessel's correction; otherwise do not. Default: True.
+        - **axis** (Union[int, tuple(int), list(int)]) - The dimensions to reduce. Default: (), reduce all dimensions. Only constant values are allowed. Suppose the rank of `input_x` is r; the range is [-r, r).
+        - **unbiased** (bool) - If True, use Bessel's correction; otherwise do not. Default: True.
         - **keep_dims** (bool) - If True, keep the reduced dimensions with size 1; otherwise remove them. Default: False.

     Returns:
-        Tensor.
-
-        - If `axis` is () and `keep_dims` is False, returns a 0-D Tensor representing the standard deviation of all elements in the input Tensor.
+        Tuple, a tuple of two Tensors: the standard deviation and the mean.
+        Suppose the shape of the input `input_x` is :math:`(x_0, x_1, ..., x_R)`:
+
+        - If `axis` is () and `keep_dims` is False, returns a 0-D Tensor representing the standard deviation of all elements in the input Tensor `input_x`.
+        - If `axis` is int 1 and `keep_dims` is False, the output shape is :math:`(x_0, x_2, ..., x_R)` .
+        - If `axis` is tuple(int) or list(int), e.g. (1, 2), and `keep_dims` is False, the output Tensor shape is :math:`(x_0, x_3, ..., x_R)` .

     Raises:
         - **TypeError** - `input_x` is not a Tensor.
-        - **TypeError** - `axis` is not one of the following types: int, Tuple or List.
+        - **TypeError** - `axis` is not one of the following types: int, tuple or list.
         - **TypeError** - `keep_dims` is not a bool.
         - **ValueError** - `axis` is out of range.
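The standard-deviation-and-mean pair with Bessel's correction can be sketched in pure Python for a flat sequence (an illustration of the statistics, not the MindSpore kernel): with `unbiased=True` the variance divides by n - 1 instead of n.

```python
import math

def std_mean(xs, unbiased=True):
    # Return (standard deviation, mean) of a flat sequence.
    n = len(xs)
    mean = sum(xs) / n
    div = n - 1 if unbiased else n  # Bessel's correction when unbiased
    var = sum((v - mean) ** 2 for v in xs) / div
    return math.sqrt(var), mean

print(std_mean([1.0, 2.0, 3.0, 4.0]))
```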
@@ -11,10 +11,9 @@ mindspore.ops.svd

     A=U*diag(S)*V^{T}

     Parameters:
-        - **a** (Tensor) - The matrices to be decomposed. The shape is :math:`(*, M, N)` .
+        - **a** (Tensor) - The matrices to be decomposed. The shape is :math:`(*, M, N)` ; the supported dtypes are float32 and float64.
         - **full_matrices** (bool, optional) - If True, compute the full-sized :math:`U` and :math:`V` . Otherwise compute only the leading P singular vectors, where P is the smaller of M and N, and M and N are the rows and columns of the input matrices. Default: False.
-        - **compute_uv** (bool, optional) - If True, compute :math:`U` and :math:`V` , otherwise compute only :math:`S` . Default: True.
+        - **compute_uv** (bool, optional) - If True, compute :math:`U` and :math:`V`, otherwise compute only :math:`S` . Default: True.

     Returns:
         - **s** (Tensor) - The singular values. The shape is :math:`(*, P)` .
         - **u** (Tensor) - The left singular vectors. Not returned if `compute_uv` is False. The shape is :math:`(*, M, P)` ; if `full_matrices` is True, the shape is :math:`(*, M, M)` .
@@ -5,7 +5,7 @@

     According to the specified update values and input indices, the result is output as a Tensor through a maximum operation.

-    The last axis of the index is the depth of each index vector. For each index vector, there must be a corresponding value in `updates`. The shape of `updates` should equal the shape of `input_x[indices]`. For more details, see the examples below.
+    The last axis of the index is the depth of each index vector. For each index vector, there must be a corresponding value in `updates`. The shape of `updates` should equal the shape of `input_x[indices]`.

     .. note::
         If some values of `indices` are out of range, the corresponding `updates` will not update `input_x`, and no index error is raised.
@@ -19,13 +19,13 @@

     Parameters:
         - **input_x** (Tensor) - The shape: :math:`(x_1, x_2, ..., x_R)` .
         - **segment_ids** (Tensor) - Set the shape to :math:`(x_1, x_2, ..., x_N)` , where 0 < N <= R.
         - **num_segments** (Union[int, Tensor], optional) - The number of segments :math:`z` , an int or a 0-dimensional Tensor.

     Returns:
         Tensor, the shape: :math:`(z, x_{N+1}, ..., x_R)` .

     Raises:
-        - **TypeError** - `num_segments` is not of type int.
+        - **TypeError** - `num_segments` is not of type int or a 0-dimensional Tensor.
         - **ValueError** - The dimension of `segment_ids` is not equal to 1.
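The segment reduction above (this hunk belongs to unsorted_segment_sum, per the matching English hunk later in the commit) can be sketched in plain Python for 1-D input (an illustration, not the MindSpore kernel): values are summed into `num_segments` buckets selected by `segment_ids`.

```python
def unsorted_segment_sum(input_x, segment_ids, num_segments):
    # Sum each value into the bucket named by its segment id;
    # ids outside [0, num_segments) contribute nothing.
    out = [0.0] * num_segments
    for v, sid in zip(input_x, segment_ids):
        if 0 <= sid < num_segments:
            out[sid] += v
    return out

print(unsorted_segment_sum([1.0, 2.0, 3.0, 4.0], [0, 2, 0, 2], 3))  # → [4.0, 0.0, 6.0]
```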
@@ -5,7 +5,7 @@

     Divides the first input Tensor by the second input Tensor. When `x` is zero, zero is returned.

-    The inputs `x` and `y` follow the implicit type conversion rules to keep the dtypes consistent. The inputs must be two Tensors, or one Tensor and one Scalar. When the inputs are two Tensors, their dtypes cannot both bool, and their shapes can be broadcast. When the inputs are one Tensor and one Scalar, the Scalar can only be a constant.
+    The inputs `x` and `y` follow the implicit type conversion rules to keep the dtypes consistent. The inputs must be two Tensors, or one Tensor and one Scalar. When the inputs are two Tensors, their dtypes cannot both be bool, and their shapes can be broadcast. When the inputs are one Tensor and one Scalar, the Scalar can only be a constant.

     Parameters:
         - **x** (Union[Tensor, Number, bool]) - A Tensor of float, complex or bool type.
@@ -793,7 +793,7 @@ def size(input_x):

 def shape(input_x):
     """
-    Returns the shape of the input tensor. And it used to be static shape.
+    Returns the shape of the input tensor. This operation is used in static shape cases.

     static shape: A shape that can be obtained without running the graph. It is an inherent property of tensor and
     may be unknown. The static shape information can be completed by artificial setting.
@@ -916,7 +916,7 @@ def reverse_sequence(x, seq_lengths, seq_dim, batch_dim=0):

     Args:
         x (Tensor): The input to reverse, supporting all number types including bool.
-        seq_lengths (Tensor): Must be a 1-D vector with int32 or int64 types.
+        seq_lengths (Tensor): Specified reversing length, must be a 1-D vector with int32 or int64 types.
         seq_dim (int): The dimension where reversal is performed. Required.
         batch_dim (int): The input is sliced in this dimension. Default: 0.
@@ -1651,8 +1651,6 @@ def transpose(input_x, input_perm):

 def scatter_mul(input_x, indices, updates):
     r"""
-    Updates the value of the input tensor through the multiply operation.
-
     Using given values to update tensor value through the mul operation, along with the input indices.
     This operation outputs the `input_x` after the update is done, which makes it convenient to use the updated value.
@@ -1668,10 +1666,10 @@ def scatter_mul(input_x, indices, updates):

     when the data types of parameters need to be converted.

     Args:
-        input_x (Parameter): The target tensor, with data type of Parameter.
-        indices (Tensor): The index to do min operation whose data type must be mindspore.int32.
-        updates (Tensor): The tensor doing the min operation with `input_x`,
+        input_x (Parameter): The target tensor to be updated, with data type of Parameter.
+            The shape is :math:`(N,*)` where :math:`*` means, any number of additional dimensions.
+        indices (Tensor): The index to do mul operation whose data type must be int32 or int64.
+        updates (Tensor): The tensor doing the mul operation with `input_x`,
             the data type is same as `input_x`, the shape is `indices.shape + input_x.shape[1:]`.

     Returns:
@@ -2855,13 +2853,12 @@ def tensor_scatter_sub(input_x, indices, updates):

 def tensor_scatter_max(input_x, indices, updates):
     """
-    By comparing the value at the position indicated by `indices` in `x` with the value in the `updates`,
+    By comparing the value at the position indicated by `indices` in `input_x` with the value in the `updates`,
     the value at the index will eventually be equal to the largest one to create a new tensor.

     The last axis of the index is the depth of each index vector. For each index vector,
     there must be a corresponding value in `updates`. The shape of `updates` should be
     equal to the shape of input_x[indices].
-    For more details, see use cases.

     Note:
         If some values of the `indices` are out of bound, instead of raising an index error,
@@ -4493,7 +4490,7 @@ def unsorted_segment_sum(input_x, segment_ids, num_segments):

     Tensor, the shape is :math:`(z, x_{N+1}, ..., x_R)`.

     Raises:
-        TypeError: If `num_segments` is not an int.
+        TypeError: If `num_segments` is not an int or 0-D Tensor.
         ValueError: If length of shape of `segment_ids` is less than 1.

     Supported Platforms:
|
@ -41,7 +41,8 @@ def svd(a, full_matrices=False, compute_uv=True):
|
|||
A=U*diag(S)*V^{T}
|
||||
|
||||
Args:
|
||||
a (Tensor): Tensor of the matrices to be decomposed. The shape should be :math:`(*, M, N)`.
|
||||
a (Tensor): Tensor of the matrices to be decomposed. The shape should be :math:`(*, M, N)`,
|
||||
the supported dtype are float32 and float64..
|
||||
full_matrices (bool, optional): If true, compute full-sized :math:`U` and :math:`V`. If false, compute
|
||||
only the leading P singular vectors, with P is the minimum of M and N.
|
||||
Default: False.
|
||||
|
@@ -58,7 +59,7 @@ def svd(a, full_matrices=False, compute_uv=True):

     Raises:
         TypeError: If `full_matrices` or `compute_uv` is not the type of bool.
        TypeError: If the rank of input is less than 2.
-        TypeError: If the type of input is not one of the following dtype: mstype.float32, mstype.float64.
+        TypeError: If the type of input is not one of the following dtype: float32, float64.

     Supported Platforms:
         ``GPU`` ``CPU``
@@ -2186,7 +2186,7 @@ def matrix_determinant(x):

     dimensions must be the same size. Data type must be float32, float64, complex64 or complex128.

     Returns:
-        Tensor. The shape is `x.shape[:-2]`, and the dtype is same as `x`.
+        Tensor. The shape is :math:`x.shape[:-2]`, and the dtype is same as `x`.

     Raises:
         TypeError: If `x` is not a Tensor.
@@ -2251,7 +2251,8 @@ def log_matrix_determinant(x):

     dimensions must be the same size. Data type must be float32, float64, complex64 or complex128.

     Returns:
-        Tensor. The signs of the log determinants. The shape is `x.shape[:-2]`, and the dtype is same as `x`.
+        Tensor. The signs of the log determinants. The shape is :math:`x.shape[:-2]`,
+            and the dtype is same as `x`.
         Tensor. The absolute values of the log determinants. The shape is :math:`x.shape[:-2]`, and
             the dtype is same as `x`.
@@ -2483,8 +2484,8 @@ def ldexp(x, other):

 def logit(x, eps=None):
     r"""
-    Calculate the logit of a tensor element-wise. When eps is not None, element in 'x' is clamped to [eps, 1-eps].
-    When eps is None, input 'x' is not clamped.
+    Calculate the logit of a tensor element-wise. When eps is not None, element in `x` is clamped to [eps, 1-eps].
+    When eps is None, input `x` is not clamped.

     .. math::
         \begin{align}
@@ -3395,11 +3396,13 @@ def logaddexp2(x1, x2):

 def std(input_x, axis=(), unbiased=True, keep_dims=False):
     """
-    Returns the standard-deviation and mean of each row of the input tensor in the dimension `axis`.
+    Returns the standard-deviation and mean of each row of the input tensor by default,
+        or it can calculate them in specified dimension `axis`.
     If `axis` is a list of dimensions, reduce over all of them.

     Args:
-        input_x (Tensor[Number]): Input tensor.
+        input_x (Tensor[Number]): Input tensor with a dtype of number.Number, its shape should be :math:`(N, *)`
+            where :math:`*` means any number of additional dims, its rank should be less than 8.
         axis (Union[int, tuple(int), list(int)]): The dimensions to reduce. Default: (), reduce all dimensions.
             Only constant value is allowed.
             Must be in the range [-rank(`input_x`), rank(`input_x`)).
@@ -3411,7 +3414,14 @@ def std(input_x, axis=(), unbiased=True, keep_dims=False):

         If false, don't keep these dimensions.

     Returns:
-        A tuple (output_std, output_mean) containing the standard deviation and mean.
+        A tuple of 2 Tensors (output_std, output_mean) containing the standard deviation and mean.
+        Suppose the shape of `input_x` is :math:`(x_0, x_1, ..., x_R)`:
+
+        - If `axis` is () and `keep_dims` is set to False, returns a 0-D Tensor, indicating
+            the standard deviation of all elements in `input_x`.
+        - If `axis` is int 1 and `keep_dims` is set to False, then the returned Tensor
+            has shape :math:`(x_0, x_2, ..., x_R)`.
+        - If `axis` is tuple(int) or list(int), e.g. (1, 2) and `keep_dims` is set to False,
+            then the returned Tensor has shape :math:`(x_0, x_3, ..., x_R)`.

     Raises:
         TypeError: If `input_x` is not a Tensor.
@@ -3446,8 +3456,7 @@ def sqrt(x):

     out_{i} = \\sqrt{x_{i}}

     Args:
-        x (Tensor): The input tensor with a dtype of Number, its rank must be in [0, 7] inclusive.
-
+        x (Tensor): The input tensor with a dtype of number.Number, its rank must be in [0, 7] inclusive.
     Returns:
         Tensor, has the same shape and dtype as the `x`.
@@ -1593,7 +1593,7 @@ def softmax(x, axis=-1):

     where :math:`N` is the length of the tensor.

     Args:
-        axis (Int): The axis to perform the Softmax operation. Default: -1.
+        axis (Union[int, tuple[int]], optional): The axis to perform the Softmax operation. Default: -1.
         x (Tensor): Tensor of shape :math:`(N, *)`, where :math:`*` means, any number of
             additional dimensions, with float16 or float32 data type.
@@ -1601,7 +1601,7 @@ def softmax(x, axis=-1):

     Tensor, with the same type and shape as the logits.

     Raises:
-        TypeError: If `axis` is nnot an int.
+        TypeError: If `axis` is not an int or a tuple.
         TypeError: If dtype of `x` is neither float16 nor float32.
         ValueError: If `axis` is a tuple whose length is less than 1.
         ValueError: If `axis` is a tuple whose elements are not all in range [-len(logits.shape), len(logits.shape))
@@ -2007,7 +2007,7 @@ def relu(x):

     r"""
     Computes ReLU (Rectified Linear Unit activation function) of input tensors element-wise.

-    It returns max(x, 0) element-wise. Specially, the neurons with the negative output
+    It returns :math:`\max(x,\ 0)` element-wise. Specially, the neurons with the negative output
         will be suppressed and the active neurons will stay the same.

     .. math::
@@ -2099,7 +2099,7 @@ def prelu(x, weight):

     The shape is :math:`(N, C, *)` where :math:`*` means, any number of additional dimensions.
     weight (Tensor): Weight Tensor. The data type is float16 or float32.
         The weight can only be a vector, and the length is the same as the number of channels C of the `input_x`.
-        On GPU devices, when the input is a scalar, the shape is 1.
+        On GPU devices, when the input is a scalar, the shape is (1,).

     Returns:
         Tensor, with the same dtype as `x`.
@@ -339,7 +339,8 @@ def uniform_candidate_sampler(true_classes,

 def random_poisson(shape, rate, seed=None, dtype=mstype.float32):
     r"""
-    Generates random numbers according to the Poisson random number distribution.
+    Generates random number Tensor with shape `shape` according to a Poisson distribution with mean `rate`.

     .. math::
@ -348,7 +349,7 @@ def random_poisson(shape, rate, seed=None, dtype=mstype.float32):
|
|||
Args:
|
||||
shape (Tensor): The shape of random tensor to be sampled from each poisson distribution, 1-D `Tensor` whose
|
||||
dtype is mindspore.dtype.int32 or mindspore.dtype.int64.
|
||||
rate (Tensor): The μ parameter the distribution was constructed with. It represents the mean of the distribution
|
||||
rate (Tensor): The μ parameter the distribution is constructed with. It represents the mean of the distribution
|
||||
and also the variance of the distribution. It should be a `Tensor` whose dtype is mindspore.dtype.int64,
|
||||
mindspore.dtype.int32, mindspore.dtype.float64, mindspore.dtype.float32 or mindspore.dtype.float16.
|
||||
seed (int): Seed is used as entropy source for the random number engines to generate pseudo-random numbers
|
||||
|
|
|
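For intuition about what a Poisson sampler with mean `rate` produces, here is a plain-Python version using Knuth's algorithm. It is an illustration only; real implementations switch to other algorithms for large rates:

```python
import math
import random

def sample_poisson(rate, rng=random):
    # Knuth's method: multiply uniform draws until the running product falls
    # below exp(-rate); the number of draws before that point is the sample.
    threshold = math.exp(-rate)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

random.seed(0)
samples = [sample_poisson(5.0) for _ in range(10_000)]
# For a Poisson distribution the mean equals the variance, both = rate.
mean = sum(samples) / len(samples)
```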
@ -560,7 +560,8 @@ def sparse_concat(sp_input, concat_dim=0):
|
|||
|
||||
def sparse_add(x1: COOTensor, x2: COOTensor, thresh: Tensor) -> COOTensor:
|
||||
"""
|
||||
Computes the sum of x1(COOTensor) and x2(COOTensor).
|
||||
Computes the sum of x1(COOTensor) and x2(COOTensor), and return a new COOTensor
|
||||
based on the computed result and `thresh`.
|
||||
|
||||
Args:
|
||||
x1 (COOTensor): the first COOTensor to sum.
|
||||
|
|
|
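The COO-plus-COO sum can be sketched with dictionaries keyed by index. My reading of `thresh` (that result values whose magnitude falls below it are dropped) is an assumption here; check the full docstring for the exact rule:

```python
def coo_add(indices1, values1, indices2, values2, thresh):
    # Merge the two sparse tensors by index, summing overlapping entries,
    # then drop results whose magnitude is below `thresh` (assumed semantics).
    acc = {}
    for idx, v in zip(indices1, values1):
        acc[idx] = acc.get(idx, 0.0) + v
    for idx, v in zip(indices2, values2):
        acc[idx] = acc.get(idx, 0.0) + v
    kept = sorted((idx, v) for idx, v in acc.items() if abs(v) >= thresh)
    return [idx for idx, _ in kept], [v for _, v in kept]

idx, vals = coo_add([(0, 0), (1, 1)], [1.0, -2.0],
                    [(0, 0), (2, 2)], [3.0, 0.5],
                    thresh=1.0)
# (0, 0) sums to 4.0 (kept), (1, 1) stays -2.0 (kept), (2, 2) is 0.5 (dropped)
```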
@ -2667,9 +2667,7 @@ class ConcatOffsetV1(Primitive):
|
|||
|
||||
class ParallelConcat(Primitive):
|
||||
r"""
|
||||
Concats tensor in the first dimension.
|
||||
|
||||
Concats input tensors along with the first dimension.
|
||||
Concats input tensors along the first dimension.
|
||||
|
||||
The difference between Concat and ParallelConcat is that Concat requires all of the inputs be computed
|
||||
before the operation will begin but doesn't require that the input shapes be known during graph construction.
|
||||
|
@ -2681,7 +2679,8 @@ class ParallelConcat(Primitive):
|
|||
|
||||
Inputs:
|
||||
- **values** (tuple, list) - A tuple or a list of input tensors. The data type and shape of these
|
||||
tensors must be the same. The supported date type is Number on CPU, the same for Ascend except
|
||||
tensors must be the same and their rank should not be less than 1.
|
||||
The supported date type is Number on CPU, the same for Ascend except
|
||||
[float64, complex64, complex128].
|
||||
|
||||
Outputs:
|
||||
|
@ -2691,7 +2690,7 @@ class ParallelConcat(Primitive):
|
|||
TypeError: If any type of the inputs is not a Tensor.
|
||||
TypeError: If the data type of these tensors are not the same.
|
||||
ValueError: If any tensor.shape[0] is not 1.
|
||||
ValueError: If length of shape of `values` is less than 1.
|
||||
ValueError: If rank of any Tensor in `values` is less than 1.
|
||||
ValueError: If the shape of these tensors are not the same.
|
||||
|
||||
Supported Platforms:
|
||||
|
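The constraint that every input has first dimension 1 means ParallelConcat effectively collects one "row" per input. A plain-Python sketch of that semantics, using nested lists for tensors:

```python
def parallel_concat(values):
    # Each input must have the same shape with first dimension 1 (per the
    # docstring), so the result simply stacks one slice per input.
    assert all(len(v) == 1 for v in values), "each tensor.shape[0] must be 1"
    return [row for v in values for row in v]

out = parallel_concat([[[1, 2]], [[3, 4]], [[5, 6]]])
# → [[1, 2], [3, 4], [5, 6]]
```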
@ -3081,11 +3080,12 @@ class ReverseV2(Primitive):
|
|||
|
||||
class Rint(Primitive):
|
||||
"""
|
||||
Returns an integer that is closest to x element-wise.
|
||||
Returns an integer that is closest to `input_x` element-wise.
|
||||
|
||||
Inputs:
|
||||
- **input_x** (Tensor) - The target tensor, which must be one of the following types:
|
||||
float16, float32. The shape is :math:`(N,*)` where :math:`*` means, any number of additional dimensions.
|
||||
float16, float32, float64. The shape is :math:`(N,*)` where :math:`*` means
|
||||
any number of additional dimensions.
|
||||
|
||||
Outputs:
|
||||
Tensor, has the same shape and type as `input_x`.
|
||||
|
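Rint rounds to the nearest integer; like the underlying C `rint`, ties are usually resolved toward the even integer, which is also what Python's built-in `round()` does. A sketch of that behavior:

```python
def rint(x):
    # Round each element to the nearest integer, ties-to-even
    # (round(0.5) → 0, round(2.5) → 2).
    return [float(round(v)) for v in x]

out = rint([-1.7, -0.5, 0.5, 1.2, 2.5])
# → [-2.0, 0.0, 0.0, 1.0, 2.0]
```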
@ -3141,7 +3141,7 @@ class Select(Primitive):
|
|||
|
||||
Raises:
|
||||
TypeError: If `x` or `y` is not a Tensor.
|
||||
ValueError: If shape of `x` is not equal to shape of `y` or shape of `condition`.
|
||||
ValueError: If shape of the three inputs are different.
|
||||
|
||||
Supported Platforms:
|
||||
``Ascend`` ``GPU`` ``CPU``
|
||||
|
|
|
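Select's element-wise choice between `x` and `y` under a boolean `condition` reduces to a one-liner in plain Python:

```python
def select(condition, x, y):
    # Take x[i] where condition[i] is True, otherwise y[i]; all three
    # inputs must have the same shape.
    return [a if c else b for c, a, b in zip(condition, x, y)]

out = select([True, False, True], [1, 2, 3], [10, 20, 30])
# → [1, 20, 3]
```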
@ -827,8 +827,8 @@ class NeighborExchangeV2(Primitive):
|
|||
"""
|
||||
NeighborExchangeV2 is a collective operation.
|
||||
|
||||
NeighborExchangeV2 sends data from the local rank to ranks in the send_rank_ids,
|
||||
as while receive data from recv_rank_ids.
|
||||
NeighborExchangeV2 sends data from the local rank to ranks in the `send_rank_ids`,
|
||||
as while receive data from `recv_rank_ids`.
|
||||
|
||||
Note:
|
||||
The user needs to preset
|
||||
|
|
|
@ -59,11 +59,12 @@ class ScalarSummary(Primitive):
|
|||
|
||||
Inputs:
|
||||
- **name** (str) - The name of the input variable, it must not be an empty string.
|
||||
- **value** (Tensor) - The value of scalar, and the dim of value must be 0 or 1.
|
||||
- **value** (Tensor) - The value of scalar, and the dim of `value` must be 0 or 1.
|
||||
|
||||
Raises:
|
||||
TypeError: If `name` is not a str.
|
||||
TypeError: If `value` is not a Tensor.
|
||||
TypeError: If dim of `value` is greater than 1.
|
||||
|
||||
Supported Platforms:
|
||||
``Ascend`` ``GPU`` ``CPU``
|
||||
|
@ -84,7 +85,9 @@ class ScalarSummary(Primitive):
|
|||
... self.summary(name, x)
|
||||
... x = self.add(x, y)
|
||||
... return x
|
||||
...
|
||||
>>> summary = SummaryDemo()(Tensor(3), Tensor(4))
|
||||
>>> print(summary)
|
||||
Tensor(shape=[], dtype=Int64, value=7)
|
||||
"""
|
||||
|
||||
@prim_attr_register
|
||||
|
@ -169,6 +172,7 @@ class TensorSummary(Primitive):
|
|||
Raises:
|
||||
TypeError: If `name` is not a str.
|
||||
TypeError: If `value` is not a Tensor.
|
||||
ValueError: If rank of `value` is 0.
|
||||
|
||||
Supported Platforms:
|
||||
``Ascend`` ``GPU`` ``CPU``
|
||||
|
@ -189,7 +193,9 @@ class TensorSummary(Primitive):
|
|||
... name = "x"
|
||||
... self.summary(name, x)
|
||||
... return x
|
||||
...
|
||||
>>> summary = SummaryDemo()(Tensor([[1]]), Tensor([[2]]))
|
||||
>>> print(summary)
|
||||
Tensor(shape=[1, 1], dtype=Int64, value=[[3]])
|
||||
"""
|
||||
|
||||
@prim_attr_register
|
||||
|
|
|
@ -107,7 +107,7 @@ class Randperm(Primitive):
|
|||
|
||||
class NoRepeatNGram(PrimitiveWithInfer):
|
||||
"""
|
||||
Updates log_probs with repeat n-grams.
|
||||
Updates the probability of occurrence of words with its corresponding n-grams.
|
||||
|
||||
During beam search, if consecutive `ngram_size` words exist in the generated word sequence,
|
||||
the consecutive `ngram_size` words will be avoided during subsequent prediction.
|
||||
|
@ -119,8 +119,9 @@ class NoRepeatNGram(PrimitiveWithInfer):
|
|||
ngram_size (int): Size of n-grams, must be greater than 0. Default: 1.
|
||||
|
||||
Inputs:
|
||||
- **state_seq** (Tensor) - A 3-D tensor with shape: (batch_size, beam_width, m).
|
||||
- **log_probs** (Tensor) - A 3-D tensor with shape: (batch_size, beam_width, vocab_size).
|
||||
- **state_seq** (Tensor) - n-gram word series, a 3-D tensor with shape: (batch_size, beam_width, m).
|
||||
- **log_probs** (Tensor) - Probability of occurrence of n-gram word series, a 3-D
|
||||
tensor with shape: (batch_size, beam_width, vocab_size).
|
||||
The value of log_probs will be replaced with -FLOAT_MAX when n-grams repeated.
|
||||
|
||||
Outputs:
|
||||
|
|
|
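The n-gram-blocking idea behind this operator can be sketched for a single beam in plain Python. The helper below bans any next token that would complete an n-gram already present in the sequence, using `-inf` where the operator uses `-FLOAT_MAX` (a simplification for illustration):

```python
NEG_INF = float("-inf")

def no_repeat_ngram(seq, log_probs, ngram_size):
    # If extending `seq` with token t would repeat an n-gram already seen
    # in `seq`, set t's log-probability to -inf so beam search avoids it.
    if len(seq) < ngram_size - 1:
        return log_probs
    prefix = tuple(seq[len(seq) - (ngram_size - 1):])
    banned = set()
    for i in range(len(seq) - ngram_size + 1):
        if tuple(seq[i:i + ngram_size - 1]) == prefix:
            banned.add(seq[i + ngram_size - 1])
    return [NEG_INF if token in banned else lp
            for token, lp in enumerate(log_probs)]

# With sequence 1,2,3,1,2 and ngram_size=3, generating token 3 next
# would repeat the trigram (1, 2, 3), so its log-prob is banned.
out = no_repeat_ngram([1, 2, 3, 1, 2], [0.0, -0.1, -0.2, -0.3, -0.4], 3)
```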
@ -4245,14 +4245,17 @@ class NPUAllocFloatStatus(Primitive):
|
|||
|
||||
class NPUGetFloatStatus(Primitive):
|
||||
"""
|
||||
Updates the flag which is the output tensor of `NPUAllocFloatStatus` with the latest overflow status.
|
||||
:class:`mindspore.ops.NPUGetFloatStatus` updates the flag which is
|
||||
the output tensor of :class:`mindspore.ops.NPUAllocFloatStatus` with the latest overflow status.
|
||||
|
||||
The flag is a tensor whose shape is `(8,)` and data type is `mindspore.dtype.float32`.
|
||||
If the sum of the flag equals to 0, there is no overflow happened. If the sum of the flag is bigger than 0, there
|
||||
is overflow happened.
|
||||
In addition, there are strict sequencing requirements for use, i.e., before using the NPUGetFloatStatus operator,
|
||||
need to ensure that the NPUClearFlotStatus and your compute has been executed.
|
||||
We use Depend on ensure the execution order.
|
||||
|
||||
Note:
|
||||
The flag is a tensor whose shape is `(8,)` and data type is `mindspore.dtype.float32`.
|
||||
If the sum of the flag equals to 0, there is no overflow happened. If the sum of the
|
||||
flag is bigger than 0, there is overflow happened.
|
||||
In addition, there are strict sequencing requirements for use, i.e., before
|
||||
using the NPUGetFloatStatus operator, need to ensure that the NPUClearFlotStatus
|
||||
and your compute has been executed. We use :class:`mindspore.ops.Depend` to ensure the execution order.
|
||||
|
||||
Inputs:
|
||||
- **x** (Tensor) - The output tensor of `NPUAllocFloatStatus`.
|
||||
|
|
|
@ -6273,11 +6273,11 @@ class SparseApplyProximalAdagrad(Primitive):
|
|||
The shape is :math:`(N, *)` where :math:`*` means, any number of additional dimensions.
|
||||
- **accum** (Parameter) - Variable tensor to be updated, has the same shape and dtype as `var`.
|
||||
- **lr** (Union[Number, Tensor]) - The learning rate value, must be a float number or
|
||||
a scalar tensor with float16 or float32 data type.
|
||||
a scalar tensor with float16 or float32 data type. It must be positive.
|
||||
- **l1** (Union[Number, Tensor]) - l1 regularization strength, must be a float number or
|
||||
a scalar tensor with float16 or float32 data type.
|
||||
a scalar tensor with float16 or float32 data type. It must be non-negative.
|
||||
- **l2** (Union[Number, Tensor]) - l2 regularization strength, must be a float number or
|
||||
a scalar tensor with float16 or float32 data type.
|
||||
a scalar tensor with float16 or float32 data type. It must be non-negative.
|
||||
- **grad** (Tensor) - A tensor of the same type as `var` and
|
||||
grad.shape[1:] = var.shape[1:] if var.shape > 1.
|
||||
- **indices** (Tensor) - A tensor of indices in the first dimension of `var` and `accum`.
|
||||
|
@ -6294,6 +6294,7 @@ class SparseApplyProximalAdagrad(Primitive):
|
|||
TypeError: If `use_locking` is not a bool.
|
||||
TypeError: If dtype of `var`, `accum`, `lr`, `l1`, `l2` or `grad` is neither float16 nor float32.
|
||||
TypeError: If dtype of `indices` is neither int32 nor int64.
|
||||
ValueError: If `lr` <= 0 or `l1` < 0 or `l2` < 0.
|
||||
RuntimeError: If the data type of `var`, `accum` and `grad` conversion of Parameter is not supported.
|
||||
|
||||
Supported Platforms:
|
||||
|
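The sparse update touches only the rows named in `indices`. The sketch below uses scalar rows for brevity and a common form of the proximal-Adagrad rule (accumulate squared gradient, take an Adagrad step, then apply l1 soft-thresholding and l2 shrinkage); it is my paraphrase of the update, not the operator's exact formula, so check the class docstring for the authoritative math:

```python
import math

def sparse_proximal_adagrad_step(var, accum, grad, indices, lr, l1, l2):
    # Only rows listed in `indices` are updated; all other rows are untouched.
    for g, i in zip(grad, indices):
        accum[i] += g * g                      # accumulate squared gradient
        lr_t = lr / math.sqrt(accum[i])        # per-row effective step size
        prox = var[i] - lr_t * g               # plain Adagrad step
        shrunk = max(abs(prox) - lr_t * l1, 0.0)   # l1 soft-threshold
        var[i] = math.copysign(shrunk, prox) / (1.0 + lr_t * l2)  # l2 shrink
    return var, accum

var = [1.0, 2.0, 3.0]
accum = [0.1, 0.1, 0.1]
var, accum = sparse_proximal_adagrad_step(var, accum,
                                          grad=[0.5], indices=[1],
                                          lr=0.01, l1=0.0, l2=0.0)
# only var[1] and accum[1] change
```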
@ -6924,7 +6925,7 @@ class SparseApplyFtrlV2(PrimitiveWithInfer):
|
|||
l2_shrinkage (float): L2 shrinkage regularization.
|
||||
lr_power (float): Learning rate power controls how the learning rate decreases during training,
|
||||
must be less than or equal to zero. Use fixed learning rate if `lr_power` is zero.
|
||||
use_locking (bool): If `True`, the var and accumulation tensors will be protected from being updated.
|
||||
use_locking (bool, optional): If `True`, the var and accumulation tensors will be protected from being updated.
|
||||
Default: False.
|
||||
|
||||
Inputs:
|
||||
|
|
|
@ -89,6 +89,13 @@ class Primitive(Primitive_):
|
|||
|
||||
Args:
|
||||
device_target (str): The target device to run, support "Ascend", "GPU", and "CPU".
|
||||
|
||||
Examples:
|
||||
>>> import mindspore.ops as ops
|
||||
>>> a = ops.Add()
|
||||
>>> a = a.set_device("GPU")
|
||||
>>> print(a.primitive_target)
|
||||
GPU
|
||||
"""
|
||||
return self.add_prim_attr("primitive_target", device_target)
|
||||
|
||||
|
|