!37540 fix dropout doc bugs
Merge pull request !37540 from zhuzhongrui/code_docs_branch1
This commit is contained in:
commit 9960a2fc27
@@ -834,6 +834,7 @@ mindspore.Tensor
     **Returns:**

     Tensor, the sign of the log of the absolute value of the determinant. The shape is `x_shape[:-2]` and the data type is the same as `x`.
+
     Tensor, the log of the absolute value of the determinant. The shape is `x_shape[:-2]` and the data type is the same as `x`.

     **Raises:**
@@ -4,7 +4,7 @@ mindspore.nn.Dropout2d
 .. py:class:: mindspore.nn.Dropout2d(p=0.5)

     During training, randomly zeroes some channels of the input Tensor with probability `p` from a Bernoulli distribution. (For a 4-dimensional Tensor with shape `NCHW`, a channel feature map is the 2-dimensional feature map of shape `HW` over the last two dimensions.)
-    For example, the `2D` Tensor `input[i, j]` of the :math:`i_th` batch and :math:`j_th` channel in the batched input is one piece of data to be processed.
+    For example, the `2D` Tensor `input[i, j]` of the :math:`i\_th` batch and :math:`j\_th` channel in the batched input is one piece of data to be processed.
     Whether each channel is zeroed is determined independently with Bernoulli probability `p`.

     `Dropout2d` can improve the independence between channel feature maps.
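A minimal usage sketch of the interface documented above. It mirrors the doctest fixed later in this patch (single return value); the imports and the input shape are assumptions for illustration.

    import numpy as np
    import mindspore
    from mindspore import Tensor, nn

    # 4D NCHW input; each HW feature map may be zeroed independently with probability p.
    x = Tensor(np.ones([2, 1, 2, 3]), mindspore.float32)
    dropout = nn.Dropout2d(p=0.5)
    output = dropout(x)
    print(output.shape)  # (2, 1, 2, 3), same shape as x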
@@ -4,8 +4,8 @@ mindspore.nn.Dropout3d
 .. py:class:: mindspore.nn.Dropout3d(p=0.5)

     During training, randomly zeroes some channels of the input Tensor with probability `p` from a Bernoulli distribution. (For a `5D` Tensor with shape `NCDHW`,
-    a channel feature map is the 3-dimensional feature map of shape `DHW` over the last two dimensions.)
-    For example, the `3D` Tensor `input[i, j]` of the :math:`i_th` batch and :math:`j_th` channel in the batched input is one piece of data to be processed.
+    a channel feature map is the 3-dimensional feature map of shape `DHW` over the last three dimensions.)
+    For example, the `3D` Tensor `input[i, j]` of the :math:`i\_th` batch and :math:`j\_th` channel in the batched input is one piece of data to be processed.
     Whether each channel is zeroed is determined independently with Bernoulli probability `p`.

     `Dropout3d` can improve the independence between channel feature maps.
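The same kind of sketch for the 5-dimensional case documented above; imports and the input shape are assumptions for illustration.

    import numpy as np
    import mindspore
    from mindspore import Tensor, nn

    # 5D NCDHW input; each DHW feature map may be zeroed independently with probability p.
    x = Tensor(np.ones([2, 1, 2, 1, 2]), mindspore.float32)
    dropout = nn.Dropout3d(p=0.5)
    output = dropout(x)
    print(output.shape)  # (2, 1, 2, 1, 2), same shape as x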
@@ -1,11 +1,9 @@
 mindspore.ops.Dropout3D
-========================
+=======================

 .. py:class:: mindspore.ops.Dropout3D(keep_prob=0.5)

-    During training, randomly zeroes some channels of the input Tensor with probability :math:`1-keep\_prob` from a Bernoulli distribution. (For a `5D` Tensor with shape `NCDHW`, a channel feature map is the 3-dimensional feature map of shape `DHW` over the last two dimensions.)
-    For example, the `3D` Tensor `input[i, j]` of the :math:`i_th` batch and :math:`j_th` channel in the batched input is one piece of data to be processed.
-    Whether each channel is zeroed is determined independently with Bernoulli probability :math:`1-keep\_prob`.
+    During training, randomly zeroes some channels of the input Tensor with probability :math:`1-keep\_prob` from a Bernoulli distribution. (For a `5D` Tensor with shape `NCDHW`, a channel feature map is the 3-dimensional feature map of shape `DHW` over the last three dimensions.)

     .. note::
         The keep probability :math:`keep\_prob` is equal to :math:`1 - p` in :func:`mindspore.ops.dropout3d`.
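A sketch of the primitive form described above, using keep_prob = 1 - p as stated in the note. The (output, mask) unpacking follows the current MindSpore documentation of this primitive and is an assumption for illustration, not part of this patch.

    import numpy as np
    import mindspore
    from mindspore import Tensor, ops

    # keep_prob is the probability of keeping a channel, i.e. 1 - p of ops.dropout3d.
    dropout3d_op = ops.Dropout3D(keep_prob=0.5)
    x = Tensor(np.ones([2, 1, 2, 1, 2]), mindspore.float32)   # NCDHW
    output, mask = dropout3d_op(x)
    print(output.shape, mask.shape)  # both (2, 1, 2, 1, 2); mask dtype is bool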
@@ -5,7 +5,7 @@ mindspore.ops.dropout2d

     During training, randomly zeroes some channels of the input Tensor with probability `p` from a Bernoulli distribution. (For a 4-dimensional Tensor with shape `NCHW`,
     a channel feature map is the 2-dimensional feature map of shape `HW` over the last two dimensions.)
-    For example, the `2D` Tensor `input[i, j]` of the :math:`i_th` batch and :math:`j_th` channel in the batched input is one piece of data to be processed.
+    For example, the `2D` Tensor `input[i, j]` of the :math:`i\_th` batch and :math:`j\_th` channel in the batched input is one piece of data to be processed.
     Whether each channel is zeroed is determined independently with Bernoulli probability `p`.
     The technique was proposed in the paper `Dropout: A Simple Way to Prevent Neural Networks from Overfitting <http://www.cs.toronto.edu/~rsalakhu/papers/srivastava14a.pdf>`_ and has been shown to effectively reduce overfitting and prevent neuron co-adaptation. For more details, see `Improving neural networks by preventing co-adaptation of feature detectors <https://arxiv.org/pdf/1207.0580.pdf>`_ .
@@ -13,19 +13,20 @@ mindspore.ops.dropout2d

     **Parameters:**

-    - **x** (tensor) - A `4D` Tensor of shape math:`(N, C, H, W)`, where N is the batch size, `C` is the the number of channels, `H` is the feature height and `W` is the feature width. Its data type should be int8, int16, int32, int64, float16, float32 or float64.
+    - **x** (tensor) - A `4D` Tensor of shape math:`(N, C, H, W)`, where N is the batch size, `C` is the number of channels, `H` is the feature height and `W` is the feature width. Its data type should be int8, int16, int32, int64, float16, float32 or float64.
     - **p** (float) - The dropping probability of a channel, between 0 and 1, e.g. `p` = 0.8, which means an 80% chance of being zeroed. Default: 0.5.

     **Returns:**

-    Tensor, with the same shape and data type as the input `x`.
-    Mask (Tensor), with the same shape as `x` and data type bool.
+    Tensor, the output, with the same shape and data type as the input `x`.
+
+    Tensor, the mask, with the same shape as `x` and data type bool.

     **Raises:**

     - **TypeError** - `x` is not a Tensor.
     - **TypeError** - The data type of `x` is not int8, int16, int32, int64, float16, float32 or float64.
     - **TypeError** - The data type of `p` is not float.
-    - **ValueError** - The value of `p` is not in the range `[0,1]`.
+    - **ValueError** - The value of `p` is not in the range `[0.0,1.0]`.
     - **ValueError** - The dimension of `x` is not equal to 4.
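A minimal sketch of the functional interface documented above, which returns both the output and the bool mask; imports and the input shape are assumptions for illustration.

    import numpy as np
    import mindspore
    from mindspore import Tensor, ops

    x = Tensor(np.ones([2, 1, 2, 3]), mindspore.float32)   # (N, C, H, W)
    # p is the dropping probability of a channel.
    output, mask = ops.dropout2d(x, p=0.5)
    print(output.shape, mask.shape)  # both (2, 1, 2, 3); mask dtype is bool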
@@ -4,27 +4,28 @@ mindspore.ops.dropout3d
 .. py:function:: mindspore.ops.dropout3d(x, p=0.5)

     During training, randomly zeroes some channels of the input Tensor with probability `p` from a Bernoulli distribution. (For a `5D` Tensor with shape `NCDHW`,
-    a channel feature map is the 3-dimensional feature map of shape `DHW` over the last two dimensions.)
-    For example, the `3D` Tensor `input[i, j]` of the :math:`i_th` batch and :math:`j_th` channel in the batched input is one piece of data to be processed.
+    a channel feature map is the 3-dimensional feature map of shape `DHW` over the last three dimensions.)
+    For example, the `3D` Tensor `input[i, j]` of the :math:`i\_th` batch and :math:`j\_th` channel in the batched input is one piece of data to be processed.
     Whether each channel is zeroed is determined independently with Bernoulli probability `p`.

     `dropout3d` can improve the independence between channel feature maps.

     **Parameters:**

-    - **x** (tensor) - A `5D` Tensor of shape math:`(N, C, H, D, W)`, where N is the batch size, `C` is the the number of channels, `D` is the feature depth, `H` is the feature height and `W` is the feature width. Its data type should be int8, int16, int32, int64, float16, float32 or float64.
+    - **x** (tensor) - A `5D` Tensor of shape math:`(N, C, D, H, W)`, where N is the batch size, `C` is the number of channels, `D` is the feature depth, `H` is the feature height and `W` is the feature width. Its data type should be int8, int16, int32, int64, float16, float32 or float64.
     - **p** (float) - The dropping probability of a channel, between 0 and 1, e.g. `p` = 0.8, which means an 80% chance of being zeroed. Default: 0.5.

     **Returns:**

-    Tensor, with the same shape and data type as the input `x`.
-    Mask (Tensor), with the same shape as `x` and data type bool.
+    Tensor, the output, with the same shape and data type as the input `x`.
+
+    Tensor, the mask, with the same shape as `x` and data type bool.

     **Raises:**

     - **TypeError** - `x` is not a Tensor.
     - **TypeError** - The data type of `x` is not int8, int16, int32, int64, float16, float32 or float64.
     - **TypeError** - The data type of `p` is not float.
-    - **ValueError** - The value of `p` is not in the range `[0,1]`.
+    - **ValueError** - The value of `p` is not in the range `[0.0,1.0]`.
     - **ValueError** - The dimension of `x` is not equal to 5.
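The corresponding sketch for the 5-dimensional functional interface above; imports and the input shape are assumptions for illustration.

    import numpy as np
    import mindspore
    from mindspore import Tensor, ops

    x = Tensor(np.ones([2, 1, 2, 1, 2]), mindspore.float32)   # (N, C, D, H, W)
    output, mask = ops.dropout3d(x, p=0.5)
    print(output.shape, mask.shape)  # both (2, 1, 2, 1, 2); mask dtype is bool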
@@ -12,6 +12,7 @@ mindspore.ops.log_matrix_determinant
     **Returns:**

     Tensor, the sign of the log of the absolute value of the determinant. The shape is `x_shape[:-2]` and the data type is the same as `x`.
+
     Tensor, the log of the absolute value of the determinant. The shape is `x_shape[:-2]` and the data type is the same as `x`.

     **Raises:**
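A small sketch of the two-output interface described above: the sign and the log of the absolute value of each determinant, both of shape `x_shape[:-2]`. The sample values are an assumption for illustration.

    import numpy as np
    import mindspore
    from mindspore import Tensor, ops

    # A batch of two 2x2 matrices; the last two dimensions must be square.
    x = Tensor(np.array([[[1.0, 0.0], [0.0, 2.0]],
                         [[3.0, 0.0], [0.0, 4.0]]]), mindspore.float32)
    sign, log_abs_det = ops.log_matrix_determinant(x)
    print(sign.shape, log_abs_det.shape)  # both x_shape[:-2] == (2,)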
@@ -1183,14 +1183,14 @@ class Tensor(Tensor_):
             ), tolerance)

     def matrix_determinant(self):
-        """
+        r"""
         Computes the determinant of one or more square matrices.

         `x` refer to self tensor.

         Returns:
-            Tensor, The shape is `x_shape[:-2]`, the dtype is same as 'x'.
+            Tensor, The shape is :math:`x\_shape[:-2]`, the dtype is same as 'x'.

         Raises:
             TypeError: If self tensor is not a Tensor.
@@ -1211,15 +1211,16 @@ class Tensor(Tensor_):
         return tensor_operator_registry.get('matrix_determinant')(self)

     def log_matrix_determinant(self):
-        """
+        r"""
         Computes the sign and the log of the absolute value of the determinant of one or more square matrices.

         `x` refer to self tensor.

         Returns:
-            Tensor, The signs of the log determinants. The shape is `x_shape[:-2]`, the dtype is same as `x`.
-            Tensor, The absolute values of the log determinants. The shape is `x_shape[:-2]`, the dtype is same as `x`.
+            Tensor, The signs of the log determinants. The shape is :math:`x\_shape[:-2]`, the dtype is same as `x`.\n
+
+            Tensor, The absolute values of the log determinants. The shape is :math:`x\_shape[:-2]`,
+            the dtype is same as `x`.

         Raises:
             TypeError: If self tensor is not a Tensor.
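A sketch of the Tensor-method forms touched above; the input values are an assumption for illustration.

    import numpy as np
    import mindspore
    from mindspore import Tensor

    x = Tensor(np.array([[[1.0, 2.0], [3.0, 4.0]]]), mindspore.float32)  # shape (1, 2, 2)
    det = x.matrix_determinant()                 # shape x_shape[:-2] == (1,)
    sign, log_abs_det = x.log_matrix_determinant()
    print(det.shape, sign.shape, log_abs_det.shape)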
@@ -1786,10 +1787,10 @@ class Tensor(Tensor_):
         perm = tuple(range(0, self.ndim))
         if axis2 + 1 < self.ndim:
             new_perm = perm[0:axis1] + perm[axis2:axis2 + 1] + \
-                perm[axis1 + 1:axis2] + perm[axis1:axis1 + 1] + perm[axis2 + 1:]
+                       perm[axis1 + 1:axis2] + perm[axis1:axis1 + 1] + perm[axis2 + 1:]
         else:
             new_perm = perm[0:axis1] + perm[axis2:axis2 + 1] + \
-                perm[axis1 + 1:axis2] + perm[axis1:axis1 + 1]
+                       perm[axis1 + 1:axis2] + perm[axis1:axis1 + 1]

         return tensor_operator_registry.get('transpose')()(self, new_perm)
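A small worked example of the permutation logic in the hunk above, run as plain Python outside MindSpore; the concrete values ndim=4, axis1=1, axis2=3 are assumptions for illustration.

    # Rebuild the permutation tuple so that axis1 and axis2 are swapped.
    ndim, axis1, axis2 = 4, 1, 3
    perm = tuple(range(0, ndim))                     # (0, 1, 2, 3)
    if axis2 + 1 < ndim:
        new_perm = perm[0:axis1] + perm[axis2:axis2 + 1] + \
                   perm[axis1 + 1:axis2] + perm[axis1:axis1 + 1] + perm[axis2 + 1:]
    else:
        new_perm = perm[0:axis1] + perm[axis2:axis2 + 1] + \
                   perm[axis1 + 1:axis2] + perm[axis1:axis1 + 1]
    print(new_perm)                                  # (0, 3, 2, 1): axes 1 and 3 swapped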
@@ -177,13 +177,12 @@ class Dropout(Cell):


 class Dropout2d(Cell):
-    """
+    r"""
     During training, randomly zeroes some channels of the input tensor with probability `p`
-    from a Bernoulli distribution(For a 4-dimensional tensor with a shape of :math: `NCHW`,
-    the channel feature map refers
-    to a 2-dimensional feature map with the shape of :math: `HW`).
+    from a Bernoulli distribution(For a 4-dimensional tensor with a shape of :math:`NCHW`,
+    the channel feature map refers to a 2-dimensional feature map with the shape of :math:`HW`).

-    For example, the :math:`j_th` channel of the :math:`i_th` sample in the batched input is a to-be-processed
+    For example, the :math:`j\_th` channel of the :math:`i\_th` sample in the batched input is a to-be-processed
     `2D` tensor input[i,j].
     Each channel will be zeroed out independently on every forward call with probability `p` using samples
     from a Bernoulli distribution.
@@ -205,7 +204,7 @@ class Dropout2d(Cell):
     Examples:
         >>> dropout = nn.Dropout2d(p=0.5)
        >>> x = Tensor(np.ones([2, 1, 2, 3]), mindspore.float32)
-        >>> output, mask = dropout(x)
+        >>> output = dropout(x)
         >>> print(output.shape)
         (2, 1, 2, 3)
     """
@@ -235,13 +234,13 @@ class Dropout2d(Cell):


 class Dropout3d(Cell):
-    """
+    r"""
     During training, randomly zeroes some channels of the input tensor
     with probability `p` from a Bernoulli distribution(For a 5-dimensional tensor with
-    a shape of :math: `NCDHW`,
-    the channel feature map refers to a 3-dimensional feature map with a shape of :math: 'DHW').
+    a shape of :math:`NCDHW`, the channel feature map refers to a 3-dimensional feature
+    map with a shape of :math:'DHW').

-    For example, the :math:`j_th` channel of the :math:`i_th` sample in the batched input is a to-be-processed
+    For example, the :math:`j\_th` channel of the :math:`i\_th` sample in the batched input is a to-be-processed
     `3D` tensor input[i,j].
     Each channel will be zeroed out independently on every forward call which based on Bernoulli distribution
     probability `p`.
@@ -263,7 +262,7 @@ class Dropout3d(Cell):
     Examples:
         >>> dropout = nn.Dropout3d(p=0.5)
         >>> x = Tensor(np.ones([2, 1, 2, 1, 2]), mindspore.float32)
-        >>> output, mask = dropout(x)
+        >>> output = dropout(x)
         >>> print(output.shape)
         (2, 1, 2, 1, 2)
     """
@@ -2032,7 +2032,7 @@ def linspace(start, stop, num):


 def matrix_determinant(x):
-    """
+    r"""
     Computes the determinant of one or more square matrices.

     Args:
@@ -2040,7 +2040,7 @@ def matrix_determinant(x):
             dimensions must be the same size. Data type must be float32, float64, complex64 or complex128.

     Returns:
-        Tensor, The shape is `x_shape[:-2]`, the dtype is same as `x`.
+        Tensor, The shape is :math:`x\_shape[:-2]`, the dtype is same as `x`.

     Raises:
         TypeError: If `x` is not a Tensor.
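A minimal sketch of the functional interface above, returning a determinant per trailing square matrix; the sample values are an assumption for illustration.

    import numpy as np
    import mindspore
    from mindspore import Tensor, ops

    x = Tensor(np.array([[[-4.5, -1.5], [7.0, 6.0]]]), mindspore.float32)  # shape (1, 2, 2)
    det = ops.matrix_determinant(x)
    print(det.shape)  # x_shape[:-2] == (1,)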
@@ -2061,7 +2061,7 @@ def matrix_determinant(x):


 def log_matrix_determinant(x):
-    """
+    r"""
     Computes the sign and the log of the absolute value of the determinant of one or more square matrices.

     Args:
@@ -2069,8 +2069,11 @@ def log_matrix_determinant(x):
             dimensions must be the same size. Data type must be float32, float64, complex64 or complex128.

     Returns:
-        Tensor, The signs of the log determinants. The shape is `x_shape[:-2]`, the dtype is same as `x`.
-        Tensor, The absolute values of the log determinants. The shape is `x_shape[:-2]`, the dtype is same as `x`.
+        Tensor, The signs of the log determinants. The shape is :math:`x\_shape[:-2]`, the dtype is same as `x`.\n
+
+        Tensor, The absolute values of the log determinants. The shape is :math:`x\_shape[:-2]`,
+        the dtype is same as `x`.

     Raises:
         TypeError: If `x` is not a Tensor.
@@ -280,13 +280,12 @@ def celu(x, alpha=1.0):


 def dropout2d(x, p=0.5):
-    """
+    r"""
     During training, randomly zeroes some channels of the input tensor with probability `p`
-    from a Bernoulli distribution(For a 4-dimensional tensor with a shape of :math: `NCHW`,
-    the channel feature map refers
-    to a 2-dimensional feature map with the shape of :math: `HW`).
+    from a Bernoulli distribution(For a 4-dimensional tensor with a shape of :math:`NCHW`,
+    the channel feature map refers to a 2-dimensional feature map with the shape of :math:`HW`).

-    For example, the :math:`j_th` channel of the :math:`i_th` sample in the batched input is a to-be-processed
+    For example, the :math:`j\_th` channel of the :math:`i\_th` sample in the batched input is a to-be-processed
     `2D` tensor input[i,j].
     Each channel will be zeroed out independently on every forward call which based on Bernoulli distribution
     probability `p`.
@@ -302,12 +301,12 @@ def dropout2d(x, p=0.5):
         x (Tensor): A `4D` tensor with shape :math:`(N, C, H, W)`, where `N` is the batch size, `C` is the number
             of channels, `H` is the feature height, and `W` is the feature width. The data type must be int8,
             int16, int32, int64, float16, float32 or float64.
-        p (float): The keeping probability of a channel, between 0 and 1, e.g. `p` = 0.8,
+        p (float): The dropping probability of a channel, between 0 and 1, e.g. `p` = 0.8,
             which means dropping out 80% of channels. Default: 0.5.

     Returns:
-        output (Tensor): With the same shape and data type as `x`.
-        mask (Tensor): With the same shape as `x` and the data type is bool.
+        Tensor, output, with the same shape and data type as `x`.\n
+        Tensor, mask, with the same shape as `x` and the data type is bool.

     Raises:
         TypeError: If `x` is not a Tensor.
@@ -330,13 +329,13 @@ def dropout2d(x, p=0.5):


 def dropout3d(x, p=0.5):
-    """
+    r"""
     During training, randomly zeroes some channels of the input tensor
     with probability `p` from a Bernoulli distribution(For a 5-dimensional tensor
-    with a shape of :math: `NCDHW`,
-    the channel feature map refers to a 3-dimensional feature map with a shape of :math: `DHW`).
+    with a shape of :math:`NCDHW`, the channel feature map refers to a 3-dimensional
+    feature map with a shape of :math:`DHW`).

-    For example, the :math:`j_th` channel of the :math:`i_th` sample in the batched input is a to-be-processed
+    For example, the :math:`j\_th` channel of the :math:`i\_th` sample in the batched input is a to-be-processed
     `3D` tensor input[i,j].
     Each channel will be zeroed out independently on every forward call which based on Bernoulli distribution
     probability `p`.
@@ -347,12 +346,12 @@ def dropout3d(x, p=0.5):
         x (Tensor): A `5D` tensor with shape :math:`(N, C, D, H, W)`, where `N` is the batch size, `C` is the number
             of channels, `D` is the feature depth, `H` is the feature height, and `W` is the feature width.
             The data type must be int8, int16, int32, int64, float16, float32 or float64.
-        p (float): The keeping probability of a channel, between 0 and 1, e.g. `p` = 0.8,
+        p (float): The dropping probability of a channel, between 0 and 1, e.g. `p` = 0.8,
             which means dropping out 80% of channels. Default: 0.5.

     Returns:
-        output (Tensor): With the same shape and data type as `x`.
-        mask (Tensor): With the same shape as `x` and the data type is bool.
+        Tensor, output, with the same shape and data type as `x`.\n
+        Tensor, mask, with the same shape as `x` and the data type is bool.

     Raises:
         TypeError: If `x` is not a Tensor.
@@ -6934,7 +6934,7 @@ class Dropout(PrimitiveWithCheck):

 class Dropout2D(PrimitiveWithInfer):
     r"""
-    During training, randomly zeroes some of the channels of the input tensor with probability 1-`keep_prob`
+    During training, randomly zeroes some channels of the input tensor with probability 1-`keep_prob`
     from a Bernoulli distribution(For a 4-dimensional tensor with a shape of NCHW, the channel feature map refers
     to a 2-dimensional feature map with the shape of HW).

@@ -6967,7 +6967,7 @@ class Dropout2D(PrimitiveWithInfer):

 class Dropout3D(PrimitiveWithInfer):
     r"""
-    During training, randomly zeroes some of the channels of the input tensor
+    During training, randomly zeroes some channels of the input tensor
     with probability 1-`keep_prob` from a Bernoulli distribution(For a 5-dimensional tensor with a shape of NCDHW,
     the channel feature map refers to a 3-dimensional feature map with a shape of DHW).

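A sketch of the primitive form whose wording is touched above. The keep_prob argument and the (output, mask) pair follow the current MindSpore docs for Dropout2D and are assumptions for illustration, not part of this patch.

    import numpy as np
    import mindspore
    from mindspore import Tensor, ops

    # keep_prob is the probability of keeping a channel (1 - p of the functional interfaces).
    op2d = ops.Dropout2D(keep_prob=0.5)
    x = Tensor(np.ones([2, 1, 2, 3]), mindspore.float32)   # NCHW
    output, mask = op2d(x)
    print(output.shape, mask.shape)  # both (2, 1, 2, 3); mask dtype is bool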