add func sigmoid and modify docs
parent 7b46b59880
commit bf29fececa
@@ -90,6 +90,7 @@ mindspore.ops
     mindspore.ops.log_softmax
     mindspore.ops.mish
     mindspore.ops.selu
+    mindspore.ops.sigmoid
     mindspore.ops.soft_shrink
     mindspore.ops.softmax
     mindspore.ops.softsign
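For context, a minimal sketch of the functional interface this commit adds (`ops.sigmoid` applies the logistic function elementwise); the input values below are illustrative, not taken from the commit:

```python
import numpy as np
import mindspore as ms
from mindspore import Tensor, ops

x = Tensor(np.array([-1.0, 0.0, 1.0, 2.0]), ms.float32)
y = ops.sigmoid(x)  # elementwise 1 / (1 + exp(-x))
print(y)            # approximately [0.2689, 0.5, 0.7311, 0.8808]
```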
@@ -24,7 +24,7 @@ mindspore.ops.SparseApplyAdagradV2
 Inputs:
     - **var** (Parameter) - The variable to be updated. It can be of any dimension, and its data type is float16 or float32.
     - **accum** (Parameter) - The accumulation to be updated. Its shape and data type must be the same as `var`.
-    - **grad** (Tensor) - The gradient, a Tensor. Its shape and data type must be the same as `var`, and it must satisfy :math:`grad.shape[1:] = var.shape[1:] if var.shape > 1`.
+    - **grad** (Tensor) - The gradient, a Tensor. Its shape and data type must be the same as `var`, and when `var.shape > 1` it must satisfy :math:`grad.shape[1:] = var.shape[1:]`.
     - **indices** (Tensor) - A vector of indices into the first dimension of `var` and `accum`. Its data type is int32, and it must satisfy :math:`indices.shape[0] = grad.shape[0]`.

 Outputs:
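To make the reworded shape constraint concrete, here is a minimal sketch of invoking the operator with shapes that satisfy it; the `lr` and `epsilon` values are placeholders, not taken from this commit:

```python
import numpy as np
from mindspore import Tensor, Parameter, ops

sparse_adagrad = ops.SparseApplyAdagradV2(lr=0.01, epsilon=1e-6)

var = Parameter(Tensor(np.ones((3, 2), np.float32)), name="var")
accum = Parameter(Tensor(np.ones((3, 2), np.float32)), name="accum")
grad = Tensor(np.full((2, 2), 0.1, np.float32))  # grad.shape[1:] == var.shape[1:]
indices = Tensor(np.array([0, 2], np.int32))     # indices.shape[0] == grad.shape[0]

# Rows 0 and 2 of var/accum receive sparse Adagrad updates.
out_var, out_accum = sparse_adagrad(var, accum, grad, indices)
```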
@@ -19,7 +19,7 @@ mindspore.ops.SparseApplyFtrlV2
     - **var** (Parameter) - The weight to be updated. Its data type must be float16 or float32. The shape is :math:`(N, *)`, where :math:`*` means any number of additional dimensions.
     - **accum** (Parameter) - The accumulation to be updated. Its shape and data type must be the same as `var`.
     - **linear** (Parameter) - The linear coefficient to be updated. Its shape and data type must be the same as `var`.
-    - **grad** (Tensor) - The gradient, a Tensor. Its data type must be the same as `var`, and it must satisfy :math:`grad.shape[1:] = var.shape[1:] if var.shape > 1`.
+    - **grad** (Tensor) - The gradient, a Tensor. Its data type must be the same as `var`, and when `var.shape > 1` it must satisfy :math:`grad.shape[1:] = var.shape[1:]`.
     - **indices** (Tensor) - A vector of indices into the first dimension of `var` and `accum`. Its data type is int32, and it must satisfy :math:`indices.shape[0] = grad.shape[0]`.

 Outputs:
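The same shape contract applies here; a sketch with placeholder FTRL hyperparameters:

```python
import numpy as np
from mindspore import Tensor, Parameter, ops

sparse_ftrl = ops.SparseApplyFtrlV2(lr=0.01, l1=0.0, l2=0.0,
                                    l2_shrinkage=0.0, lr_power=-0.5)

var = Parameter(Tensor(np.ones((3, 2), np.float32)), name="var")
accum = Parameter(Tensor(np.ones((3, 2), np.float32)), name="accum")
linear = Parameter(Tensor(np.zeros((3, 2), np.float32)), name="linear")
grad = Tensor(np.full((2, 2), 0.1, np.float32))  # grad.shape[1:] == var.shape[1:]
indices = Tensor(np.array([0, 1], np.int32))     # indices.shape[0] == grad.shape[0]

out_var, out_accum, out_linear = sparse_ftrl(var, accum, linear, grad, indices)
```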
@@ -91,6 +91,7 @@ Activation Functions
     mindspore.ops.log_softmax
     mindspore.ops.mish
     mindspore.ops.selu
+    mindspore.ops.sigmoid
     mindspore.ops.softsign
     mindspore.ops.soft_shrink
     mindspore.ops.softmax
@@ -5965,7 +5965,7 @@ class SparseApplyAdagradV2(Primitive):
           The shape is :math:`(N, *)` where :math:`*` means any number of additional dimensions.
         - **accum** (Parameter) - Accumulation to be updated. The shape and data type must be the same as `var`.
         - **grad** (Tensor) - Gradient, which has the same data type as `var` and
-          grad.shape[1:] = var.shape[1:] if var.shape > 1.
+          :math:`grad.shape[1:] = var.shape[1:]` if var.shape > 1.
         - **indices** (Tensor) - A vector of indices into the first dimension of `var` and `accum`.
           The type must be int32 and indices.shape[0] = grad.shape[0].
@@ -6820,7 +6820,8 @@ class SparseApplyFtrlV2(PrimitiveWithInfer):
           The shape is :math:`(N, *)` where :math:`*` means any number of additional dimensions.
         - **accum** (Parameter) - The accumulation to be updated, must be the same data type and shape as `var`.
         - **linear** (Parameter) - The linear coefficient to be updated, must be the same data type and shape as `var`.
-        - **grad** (Tensor) - A tensor of the same type as `var` and grad.shape[1:] = var.shape[1:] if var.shape > 1.
+        - **grad** (Tensor) - A tensor of the same type as `var` and
+          :math:`grad.shape[1:] = var.shape[1:]` if var.shape > 1.
         - **indices** (Tensor) - A vector of indices in the first dimension of `var` and `accum`.
           The type must be int32 and indices.shape[0] = grad.shape[0].