fix docs issues
This commit is contained in:
luojianing 2023-01-12 14:52:56 +08:00
parent 90c3520b13
commit df93feae60
8 changed files with 12 additions and 12 deletions


@@ -20,9 +20,9 @@ mindspore.nn.GELU
 Parameters:
     - **approximate** (bool) - Whether to enable the approximation. Default: True. If approximate is True, the Gaussian error linear activation is
-      :math:`0.5 * x * (1 + tanh(sqrt(2 / pi) * (x + 0.044715 * x^3)))`
-      otherwise it is: :math:`x * P(X <= x) = 0.5 * x * (1 + erf(x / sqrt(2)))`, where P(X) ~ N(0, 1).
+      :math:`0.5 * x * (1 + tanh(\sqrt(2 / \pi) * (x + 0.044715 * x^3)))`
+      otherwise it is: :math:`x * P(X <= x) = 0.5 * x * (1 + erf(x / \sqrt(2)))`, where P(X) ~ N(0, 1).
 Inputs:
     - **x** (Tensor) - The Tensor for computing GELU, with data type float16 or float32. The shape is :math:`(N,*)`, where :math:`*` means any number of additional dimensions.
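The two GELU forms in the hunk above can be sketched in plain Python with the standard library's `math.erf` and `math.tanh` — a hypothetical stand-in, not the MindSpore kernel — to see how close the tanh approximation stays to the exact erf form:

```python
import math

def gelu_exact(x):
    # x * P(X <= x) with X ~ N(0, 1), computed via the error function
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_tanh(x):
    # 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi)
                                      * (x + 0.044715 * x ** 3)))

for v in (-2.0, -0.5, 0.5, 2.0):
    print(f"x={v:+.1f}  exact={gelu_exact(v):+.6f}  tanh={gelu_tanh(v):+.6f}")
```

Over this range the two forms agree to a few parts in 1e5, which is why the tanh variant is an acceptable fast path.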


@@ -42,9 +42,9 @@ mindspore.ops.FFTWithSize
     - **norm** (str, optional) - Normalization mode for this operation; one of ["backward", "forward", "ortho"]. Default: "backward".
-      - "backward": the forward transform is unscaled and the inverse transform is scaled by :math:`1/sqrt(n)`, where `n` is the number of elements in the input `x`.
-      - "ortho": both the forward and inverse transforms are scaled by :math:`1/sqrt(n)`.
-      - "forward": the forward transform is scaled by :math:`1/sqrt(n)` and the inverse transform is unscaled.
+      - "backward": the forward transform is unscaled and the inverse transform is scaled by :math:`1/n`, where `n` is the number of elements in the input `x`.
+      - "ortho": both the forward and inverse transforms are scaled by :math:`1/\sqrt(n)`.
+      - "forward": the forward transform is scaled by :math:`1/n` and the inverse transform is unscaled.
     - **onesided** (bool, optional) - Controls whether the input is halved to avoid redundancy. Default: True.
     - **signal_sizes** (list, optional) - Size of the original signal (the signal before the RFFT transform), excluding the batch dimension. Required only in IRFFT mode with `onesided=True`. Default: :math:`[]`.
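The corrected scaling factors can be checked with a minimal pure-Python DFT — a naive O(n²) sketch, not the MindSpore implementation — where a forward/inverse round trip with the same `norm` must recover the input in all three modes:

```python
import cmath
import math

def dft(x, inverse=False, norm="backward"):
    """Naive DFT illustrating the backward/ortho/forward normalization modes."""
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
               for k in range(n))
           for j in range(n)]
    if norm == "ortho":
        scale = 1 / math.sqrt(n)  # both directions: 1/sqrt(n)
    elif (norm == "backward" and inverse) or (norm == "forward" and not inverse):
        scale = 1 / n             # the 1/n side of the pair
    else:
        scale = 1.0               # the unscaled side of the pair
    return [v * scale for v in out]

signal = [1.0, 2.0, 3.0, 4.0]
for norm in ("backward", "ortho", "forward"):
    spectrum = dft([complex(v) for v in signal], norm=norm)
    restored = dft(spectrum, inverse=True, norm=norm)
    assert all(abs(r - s) < 1e-9 for r, s in zip(restored, signal))
```

The round trip works because the paired factors multiply to 1/n in every mode: 1 · 1/n, (1/√n)², and 1/n · 1, each cancelling the factor n introduced by summing over n terms twice.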


@@ -6,7 +6,7 @@ mindspore.ops.coo_sqrt
 Returns the square root of the given COOTensor element-wise.
 .. math::
-    out_{i} = \\sqrt{x_{i}}
+    out_{i} = \sqrt{x_{i}}
 Parameters:
     - **x** (COOTensor) - Input COOTensor with data type number.Number; its rank must be in the range [0, 7].


@@ -6,7 +6,7 @@ mindspore.ops.csr_sqrt
 Returns the square root of the given CSRTensor element-wise.
 .. math::
-    out_{i} = \\sqrt{x_{i}}
+    out_{i} = \sqrt{x_{i}}
 Parameters:
     - **x** (CSRTensor) - Input CSRTensor with data type number.Number; its rank must be in the range [0, 7].
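An element-wise op like `csr_sqrt` only touches the stored nonzero values and leaves the sparsity structure unchanged. A minimal sketch over a hypothetical `(indptr, indices, values)` triple — not the MindSpore CSRTensor API — makes that concrete:

```python
import math

def csr_sqrt(indptr, indices, values):
    # Element-wise sqrt on a CSR-style triple: indptr and indices (the
    # structure) pass through untouched; only the stored values change.
    return indptr, indices, [math.sqrt(v) for v in values]

# 2x3 matrix [[4, 0, 9], [0, 16, 0]] in CSR form:
indptr, indices, values = [0, 2, 3], [0, 2, 1], [4.0, 9.0, 16.0]
out_indptr, out_indices, out_values = csr_sqrt(indptr, indices, values)
print(out_values)  # [2.0, 3.0, 4.0]
```

Implicit zeros stay zero since sqrt(0) = 0, which is why the operation is well defined on the compressed representation alone.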


@@ -18,7 +18,7 @@ mindspore.ops.gelu
 When `approximate` is `tanh`, GELU is defined as:
 .. math::
-    GELU(x_i) = 0.5 * x_i * (1 + tanh[\sqrt{\\frac{2}{pi}}(x + 0.044715 * x_{i}^{3})] )
+    GELU(x_i) = 0.5 * x_i * (1 + tanh(\sqrt(2 / \pi) * (x_i + 0.044715 * x_i^3)))
 For the graph of GELU, see `GELU <https://en.wikipedia.org/wiki/Activation_function#/media/File:Activation_gelu.png>`_.


@@ -815,11 +815,11 @@ class GELU(Cell):
     If approximate is True, the Gaussian error linear activation is:
-    :math:`0.5 * x * (1 + tanh(sqrt(2 / pi) * (x + 0.044715 * x^3)))`
+    :math:`0.5 * x * (1 + tanh(\sqrt(2 / \pi) * (x + 0.044715 * x^3)))`
     else, it is:
-    :math:`x * P(X <= x) = 0.5 * x * (1 + erf(x / sqrt(2)))`, where P(X) ~ N(0, 1).
+    :math:`x * P(X <= x) = 0.5 * x * (1 + erf(x / \sqrt(2)))`, where P(X) ~ N(0, 1).
 Inputs:
     - **x** (Tensor) - The input of GELU with data type of float16 or float32.


@@ -5262,7 +5262,7 @@ def gelu(input_x, approximate='none'):
 When `approximate` argument is `tanh`, GeLU is estimated with:
 .. math::
-    GELU(x_i) = 0.5 * x_i * (1 + tanh[\sqrt{\\frac{2}{pi}}(x + 0.044715 * x_{i}^{3})] )
+    GELU(x_i) = 0.5 * x_i * (1 + tanh(\sqrt(2 / \pi) * (x_i + 0.044715 * x_i^3)))
 Args:
     input_x (Tensor): The input of the activation function GeLU, the data type is float16, float32 or float64.


@@ -7155,7 +7155,7 @@ class FFTWithSize(Primitive):
     - "backward" has the direct transforms unscaled and the inverse transforms scaled by 1/n,
       where n is the input x's element numbers.
-    - "ortho" has both direct and inverse transforms scaled by 1/sqrt(n).
+    - "ortho" has both direct and inverse transforms scaled by 1/\sqrt(n).
     - "forward" has the direct transforms scaled by 1/n and the inverse transforms unscaled.
     onesided (bool, optional): Controls whether the input is halved to avoid redundancy. Default: True.