diff --git a/docs/api/api_python/mindspore/Tensor/mindspore.Tensor.any.rst b/docs/api/api_python/mindspore/Tensor/mindspore.Tensor.any.rst index 808f495772b..e0bda2d1e4f 100644 --- a/docs/api/api_python/mindspore/Tensor/mindspore.Tensor.any.rst +++ b/docs/api/api_python/mindspore/Tensor/mindspore.Tensor.any.rst @@ -10,4 +10,4 @@ mindspore.Tensor.any - **keep_dims** (bool) - 计算结果是否保留维度。默认值:False。 返回: - Tensor。如果在指定轴方向上所有Tensor元素都为True,则其值为True,否则其值为False。如果轴为None或空元组,则默认降维。 \ No newline at end of file + Tensor。如果在指定轴方向上存在任意Tensor元素为True,则其值为True,否则其值为False。如果轴为None或空元组,则默认降维。 \ No newline at end of file diff --git a/docs/api/api_python/mindspore/Tensor/mindspore.Tensor.sigmoid.rst b/docs/api/api_python/mindspore/Tensor/mindspore.Tensor.sigmoid.rst index 6a05b7b6dcf..52b911258b9 100644 --- a/docs/api/api_python/mindspore/Tensor/mindspore.Tensor.sigmoid.rst +++ b/docs/api/api_python/mindspore/Tensor/mindspore.Tensor.sigmoid.rst @@ -3,7 +3,7 @@ mindspore.Tensor.sigmoid .. py:method:: mindspore.Tensor.sigmoid - Sigmoid激活函数,按元素计算Sigmoid激活函数。 + 逐元素计算Sigmoid激活函数。 Sigmoid函数定义为: diff --git a/docs/api/api_python/nn/mindspore.nn.AdaptiveAvgPool1d.rst b/docs/api/api_python/nn/mindspore.nn.AdaptiveAvgPool1d.rst index 69c0f223aff..2d89a8abed4 100644 --- a/docs/api/api_python/nn/mindspore.nn.AdaptiveAvgPool1d.rst +++ b/docs/api/api_python/nn/mindspore.nn.AdaptiveAvgPool1d.rst @@ -3,9 +3,7 @@ mindspore.nn.AdaptiveAvgPool1d .. py:class:: mindspore.nn.AdaptiveAvgPool1d(output_size) - 对输入的多维数据进行一维平面上的自适应平均池化运算。 - - 在一个输入Tensor上应用1D adaptive average pooling,可视为组成一个1D输入平面。 + 在一个输入Tensor上应用1D自适应平均池化运算,可视为组成一个1D输入平面。 通常,输入的shape为 :math:`(N_{in}, C_{in}, L_{in})` ,AdaptiveAvgPool1d在 :math:`L_{in}` 维度上计算区域平均值。 输出的shape为 :math:`(N_{in}, C_{in}, L_{out})` ,其中, :math:`L_{out}` 为 `output_size`。 diff --git a/docs/api/api_python/nn/mindspore.nn.AdaptiveAvgPool2d.rst b/docs/api/api_python/nn/mindspore.nn.AdaptiveAvgPool2d.rst index ebaa63ab2f1..de7817ebbf3 100644 --- a/docs/api/api_python/nn/mindspore.nn.AdaptiveAvgPool2d.rst +++ b/docs/api/api_python/nn/mindspore.nn.AdaptiveAvgPool2d.rst @@ -3,8 +3,6 @@ mindspore.nn.AdaptiveAvgPool2d .. py:class:: mindspore.nn.AdaptiveAvgPool2d(output_size) - 二维自适应平均池化。 - 对输入Tensor,提供二维的自适应平均池化操作。也就是说,对于输入任何尺寸,指定输出的尺寸都为H * W。但是输入和输出特征的数目不会变化。 输入和输出数据格式可以是"NCHW"和"CHW"。N是批处理大小,C是通道数,H是特征高度,W是特征宽度。运算如下: diff --git a/docs/api/api_python/nn/mindspore.nn.AdaptiveAvgPool3d.rst b/docs/api/api_python/nn/mindspore.nn.AdaptiveAvgPool3d.rst index ae8155b0ac5..ab77e6fccf5 100644 --- a/docs/api/api_python/nn/mindspore.nn.AdaptiveAvgPool3d.rst +++ b/docs/api/api_python/nn/mindspore.nn.AdaptiveAvgPool3d.rst @@ -3,8 +3,6 @@ mindspore.nn.AdaptiveAvgPool3d .. py:class:: mindspore.nn.AdaptiveAvgPool3d(output_size) - 三维自适应平均池化。 - 对输入Tensor,提供三维的自适应平均池化操作。也就是说对于输入任何尺寸,指定输出的尺寸都为 :math:`(D, H, W)`。但是输入和输出特征的数目不会变化。 假设输入 `x` 最后三维大小分别为 :math:`(inD, inH, inW)`,则输出的最后三维大小分别为 :math:`(outD, outH, outW)`。运算如下: diff --git a/docs/api/api_python/nn/mindspore.nn.AdaptiveMaxPool1d.rst b/docs/api/api_python/nn/mindspore.nn.AdaptiveMaxPool1d.rst index 9e71638509e..5abff457ac3 100644 --- a/docs/api/api_python/nn/mindspore.nn.AdaptiveMaxPool1d.rst +++ b/docs/api/api_python/nn/mindspore.nn.AdaptiveMaxPool1d.rst @@ -3,9 +3,7 @@ mindspore.nn.AdaptiveMaxPool1d .. 
py:class:: mindspore.nn.AdaptiveMaxPool1d(output_size) - 对输入的多维数据进行一维平面上的自适应最大池化运算。 - - 在一个输入Tensor上应用1D adaptive maximum pooling,可被视为组成一个1D输入平面。 + 在一个输入Tensor上应用1D自适应最大池化运算,可被视为组成一个1D输入平面。 通常,输入的shape为 :math:`(N_{in}, C_{in}, L_{in})` ,AdaptiveMaxPool1d在 :math:`L_{in}` 维度上计算区域最大值。 输出的shape为 :math:`(N_{in}, C_{in}, L_{out})` ,其中, :math:`L_{out}` 为 `output_size`。 diff --git a/docs/api/api_python/nn/mindspore.nn.AdaptiveMaxPool2d.rst b/docs/api/api_python/nn/mindspore.nn.AdaptiveMaxPool2d.rst index df3e01e7690..8b454d3ee90 100644 --- a/docs/api/api_python/nn/mindspore.nn.AdaptiveMaxPool2d.rst +++ b/docs/api/api_python/nn/mindspore.nn.AdaptiveMaxPool2d.rst @@ -3,8 +3,6 @@ mindspore.nn.AdaptiveMaxPool2d .. py:class:: mindspore.nn.AdaptiveMaxPool2d(output_size, return_indices=False) - 二维自适应最大池化运算。 - 对输入Tensor,提供二维自适应最大池化操作。对于输入任何格式,指定输出的格式都为H * W。但是输入和输出特征的数目不会变化。 输入和输出数据格式可以是"NCHW"和"CHW"。N是批处理大小,C是通道数,H是特征高度,W是特征宽度。运算如下: diff --git a/docs/api/api_python/nn/mindspore.nn.AdaptiveMaxPool3d.rst b/docs/api/api_python/nn/mindspore.nn.AdaptiveMaxPool3d.rst index 5a4985d6ce3..7565b412344 100644 --- a/docs/api/api_python/nn/mindspore.nn.AdaptiveMaxPool3d.rst +++ b/docs/api/api_python/nn/mindspore.nn.AdaptiveMaxPool3d.rst @@ -3,9 +3,7 @@ mindspore.nn.AdaptiveMaxPool3d .. py:class:: mindspore.nn.AdaptiveMaxPool3d(output_size, return_indices=False) - 三维自适应最大值池化。 - - 对于任何输入尺寸,输出的大小为 :math:`(D, H, W)` 。输出特征的数量与输入特征的数量相同。 + 对输入Tensor,提供三维自适应最大池化操作。对于任何输入尺寸,输出的大小为 :math:`(D, H, W)` 。输出特征的数量与输入特征的数量相同。 参数: - **output_size** (Union[int, tuple]) - 表示输出特征图的尺寸,输入可以是tuple :math:`(D, H, W)`,也可以是一个int值D来表示输出尺寸为 :math:`(D, D, D)` 。:math:`D` , :math:`H` 和 :math:`W` 可以是int型整数或者None,其中None表示输出大小与对应的输入的大小相同。 diff --git a/docs/api/api_python/nn/mindspore.nn.AvgPool1d.rst b/docs/api/api_python/nn/mindspore.nn.AvgPool1d.rst index a4f1b345462..b4acc0e2859 100644 --- a/docs/api/api_python/nn/mindspore.nn.AvgPool1d.rst +++ b/docs/api/api_python/nn/mindspore.nn.AvgPool1d.rst @@ -3,9 +3,7 @@ mindspore.nn.AvgPool1d .. py:class:: mindspore.nn.AvgPool1d(kernel_size=1, stride=1, pad_mode='valid') - 对输入的多维数据进行一维平面上的平均池化运算。 - - 在一个输入Tensor上应用1D average pooling,可被视为组成一个1D输入平面。 + 在一个输入Tensor上应用1D平均池化运算,可被视为组成一个1D输入平面。 通常,输入的shape为 :math:`(N_{in}, C_{in}, L_{in})` ,AvgPool1d在 :math:`(L_{in})` 维度上输出区域平均值。 给定 `kernel_size` 为 :math:`k` 和 `stride` ,公式定义如下: diff --git a/docs/api/api_python/nn/mindspore.nn.AvgPool2d.rst b/docs/api/api_python/nn/mindspore.nn.AvgPool2d.rst index eab8eac87c9..6eaff48e34c 100644 --- a/docs/api/api_python/nn/mindspore.nn.AvgPool2d.rst +++ b/docs/api/api_python/nn/mindspore.nn.AvgPool2d.rst @@ -3,9 +3,7 @@ mindspore.nn.AvgPool2d .. py:class:: mindspore.nn.AvgPool2d(kernel_size=1, stride=1, pad_mode='valid', data_format='NCHW') - 对输入的多维数据进行二维的平均池化运算。 - - 在输入张量上应用2D average pooling,可视为二维输入平面的组合。 + 在输入Tensor上应用2D平均池化运算,可视为二维输入平面的组合。 通常,输入的shape为 :math:`(N_{in},C_{in},H_{in},W_{in})` ,AvgPool2d的输出为 :math:`(H_{in},W_{in})` 维度的区域平均值。给定 `kernel_size` 为 :math:`(kH,kW)` 和 `stride` ,公式定义如下: diff --git a/docs/api/api_python/nn/mindspore.nn.AvgPool3d.rst b/docs/api/api_python/nn/mindspore.nn.AvgPool3d.rst index b5387affc4f..5500a68e38f 100644 --- a/docs/api/api_python/nn/mindspore.nn.AvgPool3d.rst +++ b/docs/api/api_python/nn/mindspore.nn.AvgPool3d.rst @@ -3,9 +3,7 @@ mindspore.nn.AvgPool3d .. 
py:class:: mindspore.nn.AvgPool3d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None) - 对输入的多维数据进行三维的平均池化运算。 - - 在一个输入Tensor上应用3D max pooling,输入Tensor可看成是由一系列3D平面组成的。 + 在一个输入Tensor上应用3D平均池化运算,输入Tensor可看成是由一系列3D平面组成的。 通常,输入的shape为 :math:`(N_{in}, C_{in}, D_{in}, H_{in}, W_{in})` ,AvgPool3D输出 :math:`(D_{in}, H_{in}, W_{in})` 维度的区域平均值。给定 `kernel_size` 为 :math:`ks = (d_{ker}, h_{ker}, w_{ker})` 和 `stride` 为 :math:`s = (s_0, s_1, s_2)`,公式如下。 diff --git a/docs/api/api_python/nn/mindspore.nn.BatchNorm1d.rst b/docs/api/api_python/nn/mindspore.nn.BatchNorm1d.rst index 879498bb9e4..398bdd632ca 100644 --- a/docs/api/api_python/nn/mindspore.nn.BatchNorm1d.rst +++ b/docs/api/api_python/nn/mindspore.nn.BatchNorm1d.rst @@ -3,9 +3,7 @@ mindspore.nn.BatchNorm1d .. py:class:: mindspore.nn.BatchNorm1d(num_features, eps=1e-5, momentum=0.9, affine=True, gamma_init='ones', beta_init='zeros', moving_mean_init='zeros', moving_var_init='ones', use_batch_statistics=None, data_format='NCHW') - 对输入的二维数据进行批归一化(Batch Normalization Layer)。 - - 在二维输入(mini-batch 一维输入)上应用批归一化,避免内部协变量偏移。归一化在卷积网络中被广泛的应用。请见论文 `Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift <https://arxiv.org/abs/1502.03167>`_ 。 + 在二维输入(mini-batch 一维输入)上应用批归一化(Batch Normalization Layer),避免内部协变量偏移。归一化在卷积网络中被广泛的应用。请见论文 `Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift <https://arxiv.org/abs/1502.03167>`_ 。 使用mini-batch数据和学习参数进行训练,计算公式如下。 diff --git a/docs/api/api_python/nn/mindspore.nn.BatchNorm2d.rst b/docs/api/api_python/nn/mindspore.nn.BatchNorm2d.rst index efeae93915e..90add4fa271 100644 --- a/docs/api/api_python/nn/mindspore.nn.BatchNorm2d.rst +++ b/docs/api/api_python/nn/mindspore.nn.BatchNorm2d.rst @@ -3,9 +3,7 @@ mindspore.nn.BatchNorm2d .. py:class:: mindspore.nn.BatchNorm2d(num_features, eps=1e-5, momentum=0.9, affine=True, gamma_init='ones', beta_init='zeros', moving_mean_init='zeros', moving_var_init='ones', use_batch_statistics=None, data_format='NCHW') - 对输入的四维数据进行批归一化(Batch Normalization Layer)。 - - 在四维输入(具有额外通道维度的小批量二维输入)上应用批归一化处理,以避免内部协变量偏移。批归一化广泛应用于卷积网络中。请见论文 `Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift <https://arxiv.org/abs/1502.03167>`_ 。使用mini-batch数据和学习参数进行训练,这些参数见以下公式: + 在四维输入(具有额外通道维度的小批量二维输入)上应用批归一化处理(Batch Normalization Layer),以避免内部协变量偏移。批归一化广泛应用于卷积网络中。请见论文 `Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift <https://arxiv.org/abs/1502.03167>`_ 。使用mini-batch数据和学习参数进行训练,这些参数见以下公式: .. math:: y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta diff --git a/docs/api/api_python/nn/mindspore.nn.BatchNorm3d.rst b/docs/api/api_python/nn/mindspore.nn.BatchNorm3d.rst index 44a99e1b227..c632f642f64 100644 --- a/docs/api/api_python/nn/mindspore.nn.BatchNorm3d.rst +++ b/docs/api/api_python/nn/mindspore.nn.BatchNorm3d.rst @@ -5,7 +5,7 @@ mindspore.nn.BatchNorm3d 对输入的五维数据进行批归一化(Batch Normalization Layer)。 - 在五维输入(带有附加通道维度的mini-batch 三维输入)上应用批归一化,避免内部协变量偏移。归一化在卷积网络中得到了广泛的应用。 + 在五维输入(带有附加通道维度的mini-batch 三维输入)上应用批归一化(Batch Normalization Layer),避免内部协变量偏移。归一化在卷积网络中得到了广泛的应用。 .. math:: y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta diff --git a/docs/api/api_python/nn/mindspore.nn.Conv1d.rst b/docs/api/api_python/nn/mindspore.nn.Conv1d.rst index c7e68c07291..1afbf221ed0 100644 --- a/docs/api/api_python/nn/mindspore.nn.Conv1d.rst +++ b/docs/api/api_python/nn/mindspore.nn.Conv1d.rst @@ -3,9 +3,7 @@ mindspore.nn.Conv1d ..
py:class:: mindspore.nn.Conv1d(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, dilation=1, group=1, has_bias=False, weight_init='normal', bias_init='zeros') - 一维卷积层。 - - 对输入Tensor计算一维卷积,该Tensor的shape通常为 :math:`(N, C_{in}, L_{in})` ,其中 :math:`N` 是batch size, :math:`C_{in}` 是空间维度,:math:`L_{in}` 是序列的长度。 + 对输入Tensor计算一维卷积。该Tensor的shape通常为 :math:`(N, C_{in}, L_{in})` ,其中 :math:`N` 是batch size, :math:`C_{in}` 是空间维度,:math:`L_{in}` 是序列的长度。 对于每个batch中的Tensor,其shape为 :math:`(C_{in}, L_{in})` ,公式定义如下: .. math:: diff --git a/docs/api/api_python/nn/mindspore.nn.Conv1dTranspose.rst b/docs/api/api_python/nn/mindspore.nn.Conv1dTranspose.rst index cdf505f2dd2..576ef813400 100644 --- a/docs/api/api_python/nn/mindspore.nn.Conv1dTranspose.rst +++ b/docs/api/api_python/nn/mindspore.nn.Conv1dTranspose.rst @@ -3,8 +3,6 @@ mindspore.nn.Conv1dTranspose .. py:class:: mindspore.nn.Conv1dTranspose(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, dilation=1, group=1, has_bias=False, weight_init='normal', bias_init='zeros') - 一维转置卷积层。 - 计算一维转置卷积,可以视为Conv1d对输入求梯度,也称为反卷积(实际不是真正的反卷积)。 输入的shape通常是 :math:`(N, C_{in}, L_{in})` ,其中 :math:`N` 是batch size, :math:`C_{in}` 是空间维度, :math:`L_{in}` 是序列的长度。 diff --git a/docs/api/api_python/nn/mindspore.nn.Conv2d.rst b/docs/api/api_python/nn/mindspore.nn.Conv2d.rst index e4035bed0bd..b3c2dfd0890 100644 --- a/docs/api/api_python/nn/mindspore.nn.Conv2d.rst +++ b/docs/api/api_python/nn/mindspore.nn.Conv2d.rst @@ -3,9 +3,7 @@ mindspore.nn.Conv2d .. py:class:: mindspore.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, pad_mode="same", padding=0, dilation=1, group=1, has_bias=False, weight_init="normal", bias_init="zeros", data_format="NCHW") - 二维卷积层。 - - 对输入Tensor计算二维卷积,该Tensor的常见shape为 :math:`(N, C_{in}, H_{in}, W_{in})`,其中 :math:`N` 为batch size,:math:`C_{in}` 为空间维度,:math:`H_{in}, W_{in}` 分别为特征层的高度和宽度。对于每个batch中的Tensor,其shape为 :math:`(C_{in}, H_{in}, W_{in})` ,公式定义如下: + 对输入Tensor计算二维卷积。该Tensor的常见shape为 :math:`(N, C_{in}, H_{in}, W_{in})`,其中 :math:`N` 为batch size,:math:`C_{in}` 为空间维度,:math:`H_{in}, W_{in}` 分别为特征层的高度和宽度。对于每个batch中的Tensor,其shape为 :math:`(C_{in}, H_{in}, W_{in})` ,公式定义如下: .. math:: \text{out}(N_i, C_{\text{out}_j}) = \text{bias}(C_{\text{out}_j}) + diff --git a/docs/api/api_python/nn/mindspore.nn.Conv2dTranspose.rst b/docs/api/api_python/nn/mindspore.nn.Conv2dTranspose.rst index 5977a360d39..1636518bbe0 100644 --- a/docs/api/api_python/nn/mindspore.nn.Conv2dTranspose.rst +++ b/docs/api/api_python/nn/mindspore.nn.Conv2dTranspose.rst @@ -3,8 +3,6 @@ mindspore.nn.Conv2dTranspose .. py:class:: mindspore.nn.Conv2dTranspose(in_channels, out_channels, kernel_size, stride=1, pad_mode="same", padding=0, dilation=1, group=1, has_bias=False, weight_init="normal", bias_init="zeros") - 二维转置卷积层。 - 计算二维转置卷积,可以视为Conv2d对输入求梯度,也称为反卷积(实际不是真正的反卷积)。 输入的shape通常为 :math:`(N, C_{in}, H_{in}, W_{in})` ,其中 :math:`N` 是batch size,:math:`C_{in}` 是空间维度, :math:`H_{in}, W_{in}` 分别为特征层的高度和宽度。 diff --git a/docs/api/api_python/nn/mindspore.nn.Conv3d.rst b/docs/api/api_python/nn/mindspore.nn.Conv3d.rst index d88d44b6903..8d4a68ef02f 100644 --- a/docs/api/api_python/nn/mindspore.nn.Conv3d.rst +++ b/docs/api/api_python/nn/mindspore.nn.Conv3d.rst @@ -3,9 +3,7 @@ mindspore.nn.Conv3d .. 
py:class:: mindspore.nn.Conv3d(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, dilation=1, group=1, has_bias=False, weight_init='normal', bias_init='zeros', data_format='NCDHW') - 三维卷积层。 - - 对输入Tensor计算三维卷积,该Tensor的shape通常为 :math:`(N, C_{in}, D_{in}, H_{in}, W_{in})` ,其中 :math:`N` 为batch size, :math:`C_{in}` 是空间维度。:math:`D_{in}, H_{in}, W_{in}` 分别为特征层的深度、高度和宽度。对于每个batch中的Tensor,其shape为 :math:`(C_{in}, D_{in}, H_{in}, W_{in})` ,公式定义如下: + 对输入Tensor计算三维卷积。该Tensor的shape通常为 :math:`(N, C_{in}, D_{in}, H_{in}, W_{in})` ,其中 :math:`N` 为batch size, :math:`C_{in}` 是空间维度。:math:`D_{in}, H_{in}, W_{in}` 分别为特征层的深度、高度和宽度。对于每个batch中的Tensor,其shape为 :math:`(C_{in}, D_{in}, H_{in}, W_{in})` ,公式定义如下: .. math:: \text{out}(N_i, C_{\text{out}_j}) = \text{bias}(C_{\text{out}_j}) + diff --git a/docs/api/api_python/nn/mindspore.nn.Conv3dTranspose.rst b/docs/api/api_python/nn/mindspore.nn.Conv3dTranspose.rst index 1e96bf421a4..ff75081f36b 100644 --- a/docs/api/api_python/nn/mindspore.nn.Conv3dTranspose.rst +++ b/docs/api/api_python/nn/mindspore.nn.Conv3dTranspose.rst @@ -3,8 +3,6 @@ mindspore.nn.Conv3dTranspose .. py:class:: mindspore.nn.Conv3dTranspose(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, dilation=1, group=1, output_padding=0, has_bias=False, weight_init='normal', bias_init='zeros', data_format='NCDHW') - 三维转置卷积层。 - 计算三维转置卷积,可以视为Conv3d对输入求梯度,也称为反卷积(实际不是真正的反卷积)。 输入的shape通常为 :math:`(N, C_{in}, D_{in}, H_{in}, W_{in})` ,其中 :math:`N` 为batch size, :math:`C_{in}` 是空间维度。:math:`D_{in}, H_{in}, W_{in}` 分别为特征层的深度、高度和宽度。 diff --git a/docs/api/api_python/nn/mindspore.nn.FractionalMaxPool2d.rst b/docs/api/api_python/nn/mindspore.nn.FractionalMaxPool2d.rst index c9f8fe9a663..8684f138f5d 100644 --- a/docs/api/api_python/nn/mindspore.nn.FractionalMaxPool2d.rst +++ b/docs/api/api_python/nn/mindspore.nn.FractionalMaxPool2d.rst @@ -3,9 +3,7 @@ mindspore.nn.FractionalMaxPool2d .. py:class:: mindspore.nn.FractionalMaxPool2d(kernel_size, output_size=None, output_ratio=None, return_indices=False, _random_samples=None) - 对输入的多维数据进行二维的分数最大池化运算。 - - 对多个输入平面组成的输入上应用2D分数最大池化。在 :math:`(kH_{in}, kW_{in})` 区域上应用最大池化操作,由输出shape决定随机步长。对于任何输入shape,指定输出shape为 :math:`(H, W)` 。输出特征的数量等于输入平面的数量。 + 对多个输入平面组成的输入上应用2D分数最大池化。在 :math:`(kH_{in}, kW_{in})` 区域上应用最大池化操作,由输出shape决定随机步长。对于任何输入shape,指定输出shape为 :math:`(H, W)` 。输出特征的数量等于输入平面的数量。 在一个输入Tensor上应用2D fractional max pooling,可被视为组成一个2D平面。 分数最大池化的详细描述在 `Fractional Max-Pooling <https://arxiv.org/abs/1412.6071>`_ 。 diff --git a/docs/api/api_python/nn/mindspore.nn.FractionalMaxPool3d.rst b/docs/api/api_python/nn/mindspore.nn.FractionalMaxPool3d.rst index 8fd293f2096..a60e43dd428 100644 --- a/docs/api/api_python/nn/mindspore.nn.FractionalMaxPool3d.rst +++ b/docs/api/api_python/nn/mindspore.nn.FractionalMaxPool3d.rst @@ -3,9 +3,7 @@ mindspore.nn.FractionalMaxPool3d ..
py:class:: mindspore.nn.FractionalMaxPool3d(kernel_size, output_size=None, output_ratio=None, return_indices=False, _random_samples=None) - 对输入的多维数据进行三维的分数最大池化运算。 - - 对多个输入平面组成的输入上应用3D分数最大池化。在 :math:`(kD_{in}, kH_{in}, kW_{in})` 区域上应用最大池化操作,由输出shape决定随机步长。输出特征的数量等于输入平面的数量。 + 对多个输入平面组成的输入上应用3D分数最大池化。在 :math:`(kD_{in}, kH_{in}, kW_{in})` 区域上应用最大池化操作,由输出shape决定随机步长。输出特征的数量等于输入平面的数量。 分数最大池化的详细描述在 `Fractional MaxPooling by Ben Graham <https://arxiv.org/abs/1412.6071>`_ 。 diff --git a/docs/api/api_python/nn/mindspore.nn.HSwish.rst b/docs/api/api_python/nn/mindspore.nn.HSwish.rst index ef4f11b7f39..75a881f8e63 100644 --- a/docs/api/api_python/nn/mindspore.nn.HSwish.rst +++ b/docs/api/api_python/nn/mindspore.nn.HSwish.rst @@ -3,8 +3,6 @@ mindspore.nn.HSwish .. py:class:: mindspore.nn.HSwish - Hard Swish激活函数。 - 对输入的每个元素计算Hard Swish。input是具有任何有效形状的张量。 Hard Swish定义如下: diff --git a/docs/api/api_python/nn/mindspore.nn.Hardtanh.rst b/docs/api/api_python/nn/mindspore.nn.Hardtanh.rst index 87a559fcc10..fa6a042de5a 100644 --- a/docs/api/api_python/nn/mindspore.nn.Hardtanh.rst +++ b/docs/api/api_python/nn/mindspore.nn.Hardtanh.rst @@ -3,8 +3,6 @@ mindspore.nn.Hardtanh .. py:class:: mindspore.nn.Hardtanh(min_val=-1.0, max_val=1.0) - Hardtanh激活函数。 - 按元素计算Hardtanh函数。Hardtanh函数定义为: .. math:: diff --git a/docs/api/api_python/nn/mindspore.nn.InstanceNorm1d.rst b/docs/api/api_python/nn/mindspore.nn.InstanceNorm1d.rst index 42a3b2beef8..845be3a8a5f 100644 --- a/docs/api/api_python/nn/mindspore.nn.InstanceNorm1d.rst +++ b/docs/api/api_python/nn/mindspore.nn.InstanceNorm1d.rst @@ -3,9 +3,7 @@ mindspore.nn.InstanceNorm1d .. py:class:: mindspore.nn.InstanceNorm1d(num_features, eps=1e-5, momentum=0.1, affine=True, gamma_init='ones', beta_init='zeros') - 对三维输入实现实例归一化(Instance Normalization Layer)。 - - 该层在三维输入(带有额外通道维度的mini-batch一维输入)上应用实例归一化,详见论文 `Instance Normalization: + 该层在三维输入(带有额外通道维度的mini-batch一维输入)上应用实例归一化。详见论文 `Instance Normalization: The Missing Ingredient for Fast Stylization <https://arxiv.org/abs/1607.08022>`_ 。 使用mini-batch数据和学习参数进行训练,参数见如下公式。 diff --git a/docs/api/api_python/nn/mindspore.nn.InstanceNorm2d.rst b/docs/api/api_python/nn/mindspore.nn.InstanceNorm2d.rst index 853a3d9d3fb..dabf9e18657 100644 --- a/docs/api/api_python/nn/mindspore.nn.InstanceNorm2d.rst +++ b/docs/api/api_python/nn/mindspore.nn.InstanceNorm2d.rst @@ -3,9 +3,7 @@ mindspore.nn.InstanceNorm2d .. py:class:: mindspore.nn.InstanceNorm2d(num_features, eps=1e-5, momentum=0.1, affine=True, gamma_init='ones', beta_init='zeros') - 对四维输入实现实例归一化(Instance Normalization Layer)。 - - 该层在四维输入(带有额外通道维度的mini-batch二维输入)上应用实例归一化,详见论文 `Instance Normalization: + 该层在四维输入(带有额外通道维度的mini-batch二维输入)上应用实例归一化。详见论文 `Instance Normalization: The Missing Ingredient for Fast Stylization <https://arxiv.org/abs/1607.08022>`_ 。 使用mini-batch数据和学习参数进行训练,参数见如下公式。 diff --git a/docs/api/api_python/nn/mindspore.nn.InstanceNorm3d.rst b/docs/api/api_python/nn/mindspore.nn.InstanceNorm3d.rst index 50174fa3632..70a26666004 100644 --- a/docs/api/api_python/nn/mindspore.nn.InstanceNorm3d.rst +++ b/docs/api/api_python/nn/mindspore.nn.InstanceNorm3d.rst @@ -3,9 +3,7 @@ mindspore.nn.InstanceNorm3d ..
py:class:: mindspore.nn.InstanceNorm3d(num_features, eps=1e-5, momentum=0.1, affine=True, gamma_init='ones', beta_init='zeros') - 对五维输入实现实例归一化(Instance Normalization Layer)。 - - 该层在五维输入(带有额外通道维度的mini-batch三维输入)上应用实例归一化,详见论文 `Instance Normalization: + 该层在五维输入(带有额外通道维度的mini-batch三维输入)上应用实例归一化。详见论文 `Instance Normalization: The Missing Ingredient for Fast Stylization <https://arxiv.org/abs/1607.08022>`_ 。 使用mini-batch数据和学习参数进行训练,参数见如下公式。 diff --git a/docs/api/api_python/nn/mindspore.nn.LPPool1d.rst b/docs/api/api_python/nn/mindspore.nn.LPPool1d.rst index fd08ed7b15e..5b7f1284743 100644 --- a/docs/api/api_python/nn/mindspore.nn.LPPool1d.rst +++ b/docs/api/api_python/nn/mindspore.nn.LPPool1d.rst @@ -3,9 +3,7 @@ mindspore.nn.LPPool1d .. py:class:: mindspore.nn.LPPool1d(norm_type, kernel_size, stride=None, ceil_mode=False) - 对输入的多维数据进行一维平面上的LP池化运算。 - - 在一个输入Tensor上应用1D LP pooling,可被视为组成一个1D输入平面。 + 在一个输入Tensor上应用1D LP池化运算,可被视为组成一个1D输入平面。 通常,输入的shape为 :math:`(N_{in}, C_{in}, L_{in})` 或 :math:`(C, L_{in})`,输出的shape为 :math:`(N_{in}, C_{in}, L_{in})` 或 :math:`(C, L_{in})`,输出与输入的shape一致,公式如下: diff --git a/docs/api/api_python/nn/mindspore.nn.LPPool2d.rst b/docs/api/api_python/nn/mindspore.nn.LPPool2d.rst index 7dd1b653618..9d746a9543f 100644 --- a/docs/api/api_python/nn/mindspore.nn.LPPool2d.rst +++ b/docs/api/api_python/nn/mindspore.nn.LPPool2d.rst @@ -3,9 +3,7 @@ mindspore.nn.LPPool2d .. py:class:: mindspore.nn.LPPool2d(norm_type, kernel_size, stride=None, ceil_mode=False) - 对输入的多维数据进行二维平面上的LP池化运算。 - - 在一个输入Tensor上应用2D LP pooling,可被视为组成一个2D输入平面。 + 在一个输入Tensor上应用2D LP池化运算,可被视为组成一个2D输入平面。 通常,输入的shape为 :math:`(N, C, H_{in}, W_{in})`,输出的shape为 :math:`(N, C, H_{in}, W_{in})`,输出与输入的shape一致,公式如下: diff --git a/docs/api/api_python/nn/mindspore.nn.LogSigmoid.rst b/docs/api/api_python/nn/mindspore.nn.LogSigmoid.rst index fddb2fc1934..be245f3c4c7 100644 --- a/docs/api/api_python/nn/mindspore.nn.LogSigmoid.rst +++ b/docs/api/api_python/nn/mindspore.nn.LogSigmoid.rst @@ -3,8 +3,6 @@ mindspore.nn.LogSigmoid .. py:class:: mindspore.nn.LogSigmoid - Log Sigmoid激活函数。 - 按元素计算Log Sigmoid激活函数。输入是任意格式的Tensor。 Log Sigmoid定义为: diff --git a/docs/api/api_python/nn/mindspore.nn.LogSoftmax.rst b/docs/api/api_python/nn/mindspore.nn.LogSoftmax.rst index 41709375580..21e9a084890 100644 --- a/docs/api/api_python/nn/mindspore.nn.LogSoftmax.rst +++ b/docs/api/api_python/nn/mindspore.nn.LogSoftmax.rst @@ -3,8 +3,6 @@ mindspore.nn.LogSoftmax .. py:class:: mindspore.nn.LogSoftmax(axis=-1) - Log Softmax激活函数。 - 按元素计算Log Softmax激活函数。 输入经Softmax函数、Log函数转换后,值的范围在[-inf,0)。 diff --git a/docs/api/api_python/nn/mindspore.nn.MaxPool1d.rst b/docs/api/api_python/nn/mindspore.nn.MaxPool1d.rst index 471103a131e..0527533cbac 100644 --- a/docs/api/api_python/nn/mindspore.nn.MaxPool1d.rst +++ b/docs/api/api_python/nn/mindspore.nn.MaxPool1d.rst @@ -3,9 +3,7 @@ mindspore.nn.MaxPool1d .. py:class:: mindspore.nn.MaxPool1d(kernel_size=1, stride=1, pad_mode='valid') - 对时间数据进行最大池化运算。 - - 在一个输入张量上应用1D max pooling,该张量可被视为一维平面的组合。 + 在一个输入Tensor上应用1D最大池化运算,该Tensor可被视为一维平面的组合。 通常,输入的shape为 :math:`(N_{in}, C_{in}, L_{in})` ,MaxPool1d输出 :math:`(L_{in})` 维度区域最大值。 给定 `kernel_size` 和 `stride` ,公式如下: diff --git a/docs/api/api_python/nn/mindspore.nn.MaxPool2d.rst b/docs/api/api_python/nn/mindspore.nn.MaxPool2d.rst index 39330ca404e..05e4a40fe5f 100644 --- a/docs/api/api_python/nn/mindspore.nn.MaxPool2d.rst +++ b/docs/api/api_python/nn/mindspore.nn.MaxPool2d.rst @@ -3,9 +3,7 @@ mindspore.nn.MaxPool2d ..
py:class:: mindspore.nn.MaxPool2d(kernel_size=1, stride=1, pad_mode='valid', data_format='NCHW') - 对输入的多维数据进行二维的最大池化运算。 - - 在一个输入Tensor上应用2D max pooling,可被视为组成一个2D平面。 + 在一个输入Tensor上应用2D最大池化运算,可被视为组成一个2D平面。 通常,输入的形状为 :math:`(N_{in}, C_{in}, H_{in}, W_{in})` ,MaxPool2d输出 :math:`(H_{in}, W_{in})` 维度区域最大值。给定 `kernel_size` 为 :math:`(kH,kW)` 和 `stride` ,公式如下。 diff --git a/docs/api/api_python/nn/mindspore.nn.MaxPool3d.rst b/docs/api/api_python/nn/mindspore.nn.MaxPool3d.rst index b7ef225a66f..76c11a0083e 100644 --- a/docs/api/api_python/nn/mindspore.nn.MaxPool3d.rst +++ b/docs/api/api_python/nn/mindspore.nn.MaxPool3d.rst @@ -3,9 +3,7 @@ mindspore.nn.MaxPool3d .. py:class:: mindspore.nn.MaxPool3d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False) - 对输入的多维数据进行三维的最大池化运算。 - - 在一个输入Tensor上应用3D max pooling,输入Tensor可看成是由一系列3D平面组成的。 + 在一个输入Tensor上应用3D最大池化运算,输入Tensor可看成是由一系列3D平面组成的。 通常,输入的shape为 :math:`(N_{in}, C_{in}, D_{in}, H_{in}, W_{in})` ,MaxPool3d输出 :math:`(D_{in}, H_{in}, W_{in})` 维度区域最大值。给定 `kernel_size` 为 :math:`ks = (d_{ker}, h_{ker}, w_{ker})` 和 `stride` 为 :math:`s = (s_0, s_1, s_2)`,公式如下。 diff --git a/docs/api/api_python/nn/mindspore.nn.PixelShuffle.rst b/docs/api/api_python/nn/mindspore.nn.PixelShuffle.rst index 06b3bf649f0..675637d3fa6 100644 --- a/docs/api/api_python/nn/mindspore.nn.PixelShuffle.rst +++ b/docs/api/api_python/nn/mindspore.nn.PixelShuffle.rst @@ -3,8 +3,6 @@ mindspore.nn.PixelShuffle .. py:class:: mindspore.nn.PixelShuffle(upscale_factor) - PixelShuffle函数。 - 在多个输入平面组成的输入上面应用PixelShuffle算法。在平面上应用高效亚像素卷积,步长为 :math:`1/r` 。关于PixelShuffle算法详细介绍,请参考 `Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network <https://arxiv.org/abs/1609.05158>`_ 。 通常情况下,输入shape :math:`(*, C \times r^2, H, W)` ,输出shape :math:`(*, C, H \times r, W \times r)` 。`r` 是缩小因子。 `*` 是大于等于0的维度。 diff --git a/docs/api/api_python/nn/mindspore.nn.PixelUnshuffle.rst b/docs/api/api_python/nn/mindspore.nn.PixelUnshuffle.rst index 5bd9bce9fda..1a85cbe129f 100644 --- a/docs/api/api_python/nn/mindspore.nn.PixelUnshuffle.rst +++ b/docs/api/api_python/nn/mindspore.nn.PixelUnshuffle.rst @@ -3,8 +3,6 @@ mindspore.nn.PixelUnshuffle .. py:class:: mindspore.nn.PixelUnshuffle(downscale_factor) - PixelUnshuffle函数。 - 在多个输入平面组成的输入上面应用PixelUnshuffle算法。关于PixelUnshuffle算法详细介绍,请参考 `Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network <https://arxiv.org/abs/1609.05158>`_ 。 通常情况下,输入shape :math:`(*, C, H \times r, W \times r)` ,输出shape :math:`(*, C \times r^2, H, W)` 。`r` 是缩小因子。 `*` 是大于等于0的维度。 diff --git a/docs/api/api_python/nn/mindspore.nn.Softmax.rst b/docs/api/api_python/nn/mindspore.nn.Softmax.rst index 9e99f82ddfa..68076ce6f9f 100644 --- a/docs/api/api_python/nn/mindspore.nn.Softmax.rst +++ b/docs/api/api_python/nn/mindspore.nn.Softmax.rst @@ -3,7 +3,7 @@ mindspore.nn.Softmax .. py:class:: mindspore.nn.Softmax(axis=-1) - Softmax函数,它是二分类函数 :class:`mindspore.nn.Sigmoid` 在多分类上的推广,目的是将多分类的结果以概率的形式展现出来。 + Softmax激活函数,它是二分类函数 :class:`mindspore.nn.Sigmoid` 在多分类上的推广,目的是将多分类的结果以概率的形式展现出来。 对输入Tensor在轴 `axis` 上的元素计算其指数函数值,然后归一化到[0, 1]范围,总和为1。 diff --git a/docs/api/api_python/nn/mindspore.nn.Tanh.rst b/docs/api/api_python/nn/mindspore.nn.Tanh.rst index 04f2807e2cc..2f083faddae 100644 --- a/docs/api/api_python/nn/mindspore.nn.Tanh.rst +++ b/docs/api/api_python/nn/mindspore.nn.Tanh.rst @@ -3,9 +3,7 @@ mindspore.nn.Tanh ..
py:class:: mindspore.nn.Tanh - Tanh激活函数。 - - 按元素计算Tanh函数,返回一个新的Tensor,该Tensor是输入元素的双曲正切值。 + 逐元素计算Tanh函数,返回一个新的Tensor,该Tensor是输入元素的双曲正切值。 Tanh函数定义为: diff --git a/docs/api/api_python/ops/mindspore.ops.ArgMaxWithValue.rst b/docs/api/api_python/ops/mindspore.ops.ArgMaxWithValue.rst index 010b55d24a8..d1684dfed96 100644 --- a/docs/api/api_python/ops/mindspore.ops.ArgMaxWithValue.rst +++ b/docs/api/api_python/ops/mindspore.ops.ArgMaxWithValue.rst @@ -3,9 +3,7 @@ .. py:class:: mindspore.ops.ArgMaxWithValue(axis=0, keep_dims=False) - 根据指定的索引计算最大值,并返回索引和值。 - - 在给定轴上计算输入Tensor的最大值。并且返回最大值和索引。 + 在给定轴上计算输入Tensor的最大值,并且返回最大值和索引。 .. note:: 在auto_parallel和semi_auto_parallel模式下,不能使用第一个输出索引。 diff --git a/docs/api/api_python/ops/mindspore.ops.ArgMinWithValue.rst b/docs/api/api_python/ops/mindspore.ops.ArgMinWithValue.rst index da1e987f1cf..af66663e6e7 100644 --- a/docs/api/api_python/ops/mindspore.ops.ArgMinWithValue.rst +++ b/docs/api/api_python/ops/mindspore.ops.ArgMinWithValue.rst @@ -3,8 +3,6 @@ .. py:class:: mindspore.ops.ArgMinWithValue(axis=0, keep_dims=False) - 根据指定的索引计算最小值,并返回索引和值。 - 在给定轴上计算输入Tensor的最小值,并且返回最小值和索引。 .. note:: diff --git a/docs/api/api_python/ops/mindspore.ops.AvgPool3D.rst b/docs/api/api_python/ops/mindspore.ops.AvgPool3D.rst index 384ad39f7bd..7aa10f9d778 100644 --- a/docs/api/api_python/ops/mindspore.ops.AvgPool3D.rst +++ b/docs/api/api_python/ops/mindspore.ops.AvgPool3D.rst @@ -5,8 +5,6 @@ 对输入的多维数据进行三维的平均池化运算。 - 在输入Tensor上应用3D average pooling,可被视为3D输入平面。 - 一般,输入shape为 :math:`(N, C, D_{in}, H_{in}, W_{in})` ,AvgPool3D在 :math:`(D_{in}, H_{in}, W_{in})` 维度上输出区域平均值。给定 `kernel_size` 为 :math:`(kD,kH,kW)` 和 `stride` ,运算如下: .. warning:: diff --git a/docs/api/api_python/ops/mindspore.ops.Cdist.rst b/docs/api/api_python/ops/mindspore.ops.Cdist.rst index cb5bf46e0b2..de6b9415e21 100644 --- a/docs/api/api_python/ops/mindspore.ops.Cdist.rst +++ b/docs/api/api_python/ops/mindspore.ops.Cdist.rst @@ -3,6 +3,6 @@ mindspore.ops.Cdist .. py:class:: mindspore.ops.Cdist - 计算两个tensor的p-范数距离。 + 计算两个Tensor的p-范数距离。 更多参考详见 :func:`mindspore.ops.cdist`。 diff --git a/docs/api/api_python/ops/mindspore.ops.CumulativeLogsumexp.rst b/docs/api/api_python/ops/mindspore.ops.CumulativeLogsumexp.rst index 469355d8552..ff4f9b83299 100644 --- a/docs/api/api_python/ops/mindspore.ops.CumulativeLogsumexp.rst +++ b/docs/api/api_python/ops/mindspore.ops.CumulativeLogsumexp.rst @@ -3,7 +3,7 @@ .. py:class:: mindspore.ops.CumulativeLogsumexp(exclusive=False, reverse=False) - 计算输入 `x` 沿轴 `axis` 的累积LogSumExp函数值,即:若输入 `x` 为[a, b, c],则输出为[a, log(exp(a) + exp(b)), log(exp(a) + exp(b) + exp(c))]。 + 计算输入 `x` 沿轴 `axis` 的累积LogSumExp函数值。即:若输入 `x` 为[a, b, c],则输出为[a, log(exp(a) + exp(b)), log(exp(a) + exp(b) + exp(c))]。 参数: - **exclusive** (bool, 可选) - 如果为True,将在计算时跳过最后一个元素,此时输出为:[-inf, a, log(exp(a) * exp(b))],其中-inf在输出时出于性能原因将以一个极小负数的形式呈现。默认值:False。 diff --git a/docs/api/api_python/ops/mindspore.ops.FractionalMaxPool3DWithFixedKsize.rst b/docs/api/api_python/ops/mindspore.ops.FractionalMaxPool3DWithFixedKsize.rst index b49ae75c3b5..b23b9ca7bad 100644 --- a/docs/api/api_python/ops/mindspore.ops.FractionalMaxPool3DWithFixedKsize.rst +++ b/docs/api/api_python/ops/mindspore.ops.FractionalMaxPool3DWithFixedKsize.rst @@ -3,8 +3,6 @@ mindspore.ops.FractionalMaxPool3DWithFixedKsize .. 
py:class:: mindspore.ops.FractionalMaxPool3DWithFixedKsize(ksize, output_shape, data_format="NCDHW") - 3D分数最大池化操作。 - 此运算对由多个输入平面组成的输入信号进行3D分数最大池化。最大池化操作通过由目标输出大小确定的随机步长在 kD x kH x kW 区域中进行。 输出特征的数量等于输入平面的数量。 diff --git a/docs/api/api_python/ops/mindspore.ops.FractionalMaxPoolWithFixedKsize.rst b/docs/api/api_python/ops/mindspore.ops.FractionalMaxPoolWithFixedKsize.rst index 512bb265e49..cb87d4c3d87 100644 --- a/docs/api/api_python/ops/mindspore.ops.FractionalMaxPoolWithFixedKsize.rst +++ b/docs/api/api_python/ops/mindspore.ops.FractionalMaxPoolWithFixedKsize.rst @@ -3,8 +3,6 @@ mindspore.ops.FractionalMaxPoolWithFixedKsize .. py:class:: mindspore.ops.FractionalMaxPoolWithFixedKsize(ksize, output_shape, data_format="NCHW") - 进行分数最大池化操作。 - 此运算对由多个输入平面组成的输入信号进行2D分数最大池化。最大池化操作通过由目标输出大小确定的随机步长在 `kH x kW` 区域中进行。 对于任何输入大小,指定输出的大小为 `H x W` 。输出特征的数量等于输入平面的数量。 diff --git a/docs/api/api_python/ops/mindspore.ops.Tanh.rst b/docs/api/api_python/ops/mindspore.ops.Tanh.rst index af2d1877198..39fb58c5cbb 100644 --- a/docs/api/api_python/ops/mindspore.ops.Tanh.rst +++ b/docs/api/api_python/ops/mindspore.ops.Tanh.rst @@ -3,8 +3,6 @@ .. py:class:: mindspore.ops.Tanh - Tanh激活函数。 - - 按元素计算输入元素的双曲正切。 + 逐元素计算输入元素的双曲正切。 更多参考详见 :func:`mindspore.ops.tanh`。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_absolute.rst b/docs/api/api_python/ops/mindspore.ops.func_absolute.rst index 2a59a37b385..dbdf0c4f7a3 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_absolute.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_absolute.rst @@ -3,6 +3,4 @@ mindspore.ops.absolute .. py:function:: mindspore.ops.absolute(x) - ops.abs()的别名。 - - 详情请查看 :func:`mindspore.ops.abs`。 + :func:`mindspore.ops.abs` 的别名。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_adaptive_avg_pool1d.rst b/docs/api/api_python/ops/mindspore.ops.func_adaptive_avg_pool1d.rst index de3446c1872..8206e29715b 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_adaptive_avg_pool1d.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_adaptive_avg_pool1d.rst @@ -3,8 +3,6 @@ mindspore.ops.adaptive_avg_pool1d .. py:function:: mindspore.ops.adaptive_avg_pool1d(input_x, output_size) - 一维自适应平均池化。 - 对可以看作是由一系列1D平面组成的输入Tensor,应用一维自适应平均池化操作。 通常,输入的shape为 :math:`(N_{in}, C_{in}, L_{in})`,adaptive_avg_pool1d输出区域平均值在 :math:`L_{in}` 区间。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_adaptive_avg_pool2d.rst b/docs/api/api_python/ops/mindspore.ops.func_adaptive_avg_pool2d.rst index 95f88a4429c..9b6e7f49476 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_adaptive_avg_pool2d.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_adaptive_avg_pool2d.rst @@ -3,8 +3,6 @@ mindspore.ops.adaptive_avg_pool2d .. py:function:: mindspore.ops.adaptive_avg_pool2d(input_x, output_size) - 二维自适应平均池化。 - 对输入Tensor,提供二维的自适应平均池化操作。也就是说,对于输入任何尺寸,指定输出的尺寸都为H * W。但是输入和输出特征的数目不会变化。 输入和输出数据格式可以是"NCHW"和"CHW"。N是批处理大小,C是通道数,H是特征高度,W是特征宽度。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_adaptive_avg_pool3d.rst b/docs/api/api_python/ops/mindspore.ops.func_adaptive_avg_pool3d.rst index 23fbdb68d7e..1bb34e14498 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_adaptive_avg_pool3d.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_adaptive_avg_pool3d.rst @@ -3,8 +3,6 @@ mindspore.ops.adaptive_avg_pool3d .. 
py:function:: mindspore.ops.adaptive_avg_pool3d(input_x, output_size) - 三维自适应平均池化。 - 对由多个平面组成的的输入Tensor,进行三维的自适应平均池化操作。对于任何输入尺寸,指定输出的尺寸都为 :math:`(D, H, W)`,但是输入和输出特征的数目不会变化。 假设输入 `input_x` 最后三维大小分别为 :math:`(inD, inH, inW)`,则输出的最后三维大小分别为 :math:`(outD, outH, outW)`,运算如下: diff --git a/docs/api/api_python/ops/mindspore.ops.func_adaptive_max_pool1d.rst b/docs/api/api_python/ops/mindspore.ops.func_adaptive_max_pool1d.rst index 9c2b68491a1..351c3e312e2 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_adaptive_max_pool1d.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_adaptive_max_pool1d.rst @@ -3,8 +3,6 @@ mindspore.ops.adaptive_max_pool1d .. py:function:: mindspore.ops.adaptive_max_pool1d(input_x, output_size) - 一维自适应最大池化。 - 对可以看作是由一系列1D平面组成的输入Tensor,应用一维自适应最大池化操作。 通常,输入的shape为 :math:`(N_{in}, C_{in}, L_{in})`,adaptive_max_pool1d输出区域最大值在 :math:`L_{in}` 区间。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_adaptive_max_pool2d.rst b/docs/api/api_python/ops/mindspore.ops.func_adaptive_max_pool2d.rst index 8ab8a13d94e..239de9be201 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_adaptive_max_pool2d.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_adaptive_max_pool2d.rst @@ -3,8 +3,6 @@ mindspore.ops.adaptive_max_pool2d .. py:function:: mindspore.ops.adaptive_max_pool2d(input_x, output_size, return_indices=False) - 二维自适应最大池化运算。 - 对输入Tensor,提供二维自适应最大池化操作。即对于输入任何尺寸,指定输出的尺寸都为H * W。但是输入和输出特征的数目不会变化。 输入和输出数据格式可以是"NCHW"和"CHW"。N是批处理大小,C是通道数,H是特征高度,W是特征宽度。运算如下: diff --git a/docs/api/api_python/ops/mindspore.ops.func_adjoint.rst b/docs/api/api_python/ops/mindspore.ops.func_adjoint.rst index ce2590063c0..564fd9eb412 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_adjoint.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_adjoint.rst @@ -3,16 +3,16 @@ .. py:function:: mindspore.ops.adjoint(x) - 计算张量的共轭,并转置最后两个维度。 + 计算Tensor的共轭,并转置最后两个维度。 .. note:: Ascend尚未支持Complex类型Tensor的输入。 参数: - - **x** (Tensor) - 参与计算的tensor。 + - **x** (Tensor) - 参与计算的Tensor。 返回: Tensor,和 `x` 具有相同的dtype和shape。 异常: - - **TypeError**:`x` 不是tensor。 \ No newline at end of file + - **TypeError**:`x` 不是Tensor。 \ No newline at end of file diff --git a/docs/api/api_python/ops/mindspore.ops.func_amin.rst b/docs/api/api_python/ops/mindspore.ops.func_amin.rst index fd148cfd752..77ce0ce5d37 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_amin.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_amin.rst @@ -3,7 +3,7 @@ mindspore.ops.amin .. py:function:: mindspore.ops.amin(x, axis=(), keep_dims=False) - 默认情况下,移除输入所有维度,返回 `x` 中的最大值。也可仅缩小指定维度 `axis` 大小至1。 `keep_dims` 控制输出和输入的维度是否相同。 + 默认情况下,移除输入所有维度,返回 `x` 中的最小值。也可仅缩小指定维度 `axis` 大小至1。 `keep_dims` 控制输出和输入的维度是否相同。 参数: - **x** (Tensor[Number]) - 输入Tensor,其数据类型为数值型。shape: :math:`(N, *)` ,其中 :math:`*` 表示任意数量的附加维度。秩应小于8。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_arccos.rst b/docs/api/api_python/ops/mindspore.ops.func_arccos.rst index 42692706ad8..d2e4bf74259 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_arccos.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_arccos.rst @@ -3,6 +3,4 @@ mindspore.ops.arccos .. 
py:function:: mindspore.ops.arccos(x) - ops.acos()的别名。 - - 详情请查看 :func:`mindspore.ops.acos`。 + :func:`mindspore.ops.acos` 的别名。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_clip.rst b/docs/api/api_python/ops/mindspore.ops.func_clip.rst index 2da469a6d63..7c8464c82c1 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_clip.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_clip.rst @@ -3,5 +3,4 @@ mindspore.ops.clip .. py:function:: mindspore.ops.clip(x, min=None, max=None) - ops.clamp()的别名。 - 详情请参考 :func:`mindspore.ops.clamp`。 + :func:`mindspore.ops.clamp` 的别名。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_col2im.rst b/docs/api/api_python/ops/mindspore.ops.func_col2im.rst index 29eba129827..f3566f1be8e 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_col2im.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_col2im.rst @@ -3,7 +3,7 @@ mindspore.ops.col2im .. py:function:: mindspore.ops.col2im(input_x, output_size, kernel_size, dilation, padding_value, stride) - 将一组滑动局部块组合成一个大的张量。 + 将一组滑动局部块组合成一个大的Tensor。 参数: - **input_x** (Tensor) - 四维Tensor,输入的批量的滑动局部块,数据类型支持float16和float32。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_conv2d.rst b/docs/api/api_python/ops/mindspore.ops.func_conv2d.rst index 832ee1c16ed..96c7e4a6dbc 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_conv2d.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_conv2d.rst @@ -3,9 +3,7 @@ mindspore.ops.conv2d .. py:function:: mindspore.ops.conv2d(inputs, weight, pad_mode="valid", padding=0, stride=1, dilation=1, group=1) - 二维卷积层。 - - 对输入Tensor计算二维卷积,该Tensor的常见shape为 :math:`(N, C_{in}, H_{in}, W_{in})` ,其中 :math:`N` 为batch size,:math:`C_{in}` 为通道数, :math:`H_{in}, W_{in}` 分别为特征层的高度和宽度, :math:`X_i` 为 :math:`i^{th}` 输入值, :math:`b_i` 为 :math:`i^{th}` 输入值的偏置项。对于每个batch中的Tensor,其shape为 :math:`(C_{in}, H_{in}, W_{in})` ,公式定义如下: + 对输入Tensor计算二维卷积。该Tensor的常见shape为 :math:`(N, C_{in}, H_{in}, W_{in})` ,其中 :math:`N` 为batch size,:math:`C_{in}` 为通道数, :math:`H_{in}, W_{in}` 分别为特征层的高度和宽度, :math:`X_i` 为 :math:`i^{th}` 输入值, :math:`b_i` 为 :math:`i^{th}` 输入值的偏置项。对于每个batch中的Tensor,其shape为 :math:`(C_{in}, H_{in}, W_{in})` ,公式定义如下: .. math:: out_j = \sum_{i=0}^{C_{in} - 1} ccor(W_{ij}, X_i) + b_j, diff --git a/docs/api/api_python/ops/mindspore.ops.func_conv3d.rst b/docs/api/api_python/ops/mindspore.ops.func_conv3d.rst index 11fc48cb8ae..78b0524f99c 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_conv3d.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_conv3d.rst @@ -3,9 +3,7 @@ mindspore.ops.conv3d .. py:function:: mindspore.ops.conv3d(inputs, weight, pad_mode="valid", padding=0, stride=1, dilation=1, group=1) - 三维卷积层。 - - 对输入Tensor计算三维卷积,该Tensor的常见shape为 :math:`(N, C_{in}, D_{in}, H_{in}, W_{in})` ,其中 :math:`N` 为batch size,:math:`C_{in}` 为通道数,:math:`D` 为深度, :math:`H_{in}, W_{in}` 分别为特征层的高度和宽度。 :math:`X_i` 为 :math:`i^{th}` 输入值, :math:`b_i` 为 :math:`i^{th}` 输入值的偏置项。对于每个batch中的Tensor,其shape为 :math:`(C_{in}, D_{in}, H_{in}, W_{in})` ,公式定义如下: + 对输入Tensor计算三维卷积。该Tensor的常见shape为 :math:`(N, C_{in}, D_{in}, H_{in}, W_{in})` ,其中 :math:`N` 为batch size,:math:`C_{in}` 为通道数,:math:`D` 为深度, :math:`H_{in}, W_{in}` 分别为特征层的高度和宽度。 :math:`X_i` 为 :math:`i^{th}` 输入值, :math:`b_i` 为 :math:`i^{th}` 输入值的偏置项。对于每个batch中的Tensor,其shape为 :math:`(C_{in}, D_{in}, H_{in}, W_{in})` ,公式定义如下: .. 
math:: \operatorname{out}\left(N_{i}, C_{\text {out}_j}\right)=\operatorname{bias}\left(C_{\text {out}_j}\right)+ diff --git a/docs/api/api_python/ops/mindspore.ops.func_coo_tanh.rst b/docs/api/api_python/ops/mindspore.ops.func_coo_tanh.rst index 5ef8c18f9c0..974c7ba3413 100755 --- a/docs/api/api_python/ops/mindspore.ops.func_coo_tanh.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_coo_tanh.rst @@ -3,8 +3,6 @@ .. py:function:: mindspore.ops.coo_tanh(x: COOTensor) - Tanh激活函数。 - 按元素计算COOTensor输入元素的双曲正切。Tanh函数定义为: .. math:: diff --git a/docs/api/api_python/ops/mindspore.ops.func_csr_tanh.rst b/docs/api/api_python/ops/mindspore.ops.func_csr_tanh.rst index b1343e06d50..fbbb649063d 100755 --- a/docs/api/api_python/ops/mindspore.ops.func_csr_tanh.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_csr_tanh.rst @@ -3,9 +3,7 @@ .. py:function:: mindspore.ops.csr_tanh(x: CSRTensor) - Tanh激活函数。 - - 按元素计算CSRTensor输入元素的双曲正切。Tanh函数定义为: + 逐元素计算CSRTensor输入元素的双曲正切。Tanh函数定义为: .. math:: tanh(x_i) = \frac{\exp(x_i) - \exp(-x_i)}{\exp(x_i) + \exp(-x_i)} = \frac{\exp(2x_i) - 1}{\exp(2x_i) + 1}, diff --git a/docs/api/api_python/ops/mindspore.ops.func_cummax.rst b/docs/api/api_python/ops/mindspore.ops.func_cummax.rst index 444829116a5..b4126f2ab3b 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_cummax.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_cummax.rst @@ -3,7 +3,7 @@ mindspore.ops.cummax .. py:function:: mindspore.ops.cummax(x, axis) - 返回一个元组(最值、索引),其中最值是输入张量 `x` 沿维度 `axis` 的累积最大值,索引是每个最大值的索引位置。 + 返回一个元组(最值、索引),其中最值是输入Tensor `x` 沿维度 `axis` 的累积最大值,索引是每个最大值的索引位置。 .. math:: \begin{array}{ll} \\ diff --git a/docs/api/api_python/ops/mindspore.ops.func_cummin.rst b/docs/api/api_python/ops/mindspore.ops.func_cummin.rst index 64146df3b0d..0ccc975eaf9 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_cummin.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_cummin.rst @@ -3,7 +3,7 @@ mindspore.ops.cummin .. py:function:: mindspore.ops.cummin(x, axis) - 返回一个元组(最值、索引),其中最值是输入张量 `x` 沿维度 `axis` 的累积最小值,索引是每个最小值的索引位置。 + 返回一个元组(最值、索引),其中最值是输入Tensor `x` 沿维度 `axis` 的累积最小值,索引是每个最小值的索引位置。 .. math:: \begin{array}{ll} \\ diff --git a/docs/api/api_python/ops/mindspore.ops.func_diag.rst b/docs/api/api_python/ops/mindspore.ops.func_diag.rst index 39cef00cff6..f167a2b9602 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_diag.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_diag.rst @@ -3,7 +3,7 @@ mindspore.ops.diag .. py:function:: mindspore.ops.diag(input_x) - 用给定的对角线值构造对角线张量。 + 用给定的对角线值构造对角线Tensor。 假设输入Tensor维度为 :math:`[D_1,... D_k]` ,则输出是一个rank为2k的tensor,其维度为 :math:`[D_1,..., D_k, D_1,..., D_k]` ,其中 :math:`output[i_1,..., i_k, i_1,..., i_k] = input_x[i_1,..., i_k]` 并且其他位置的值为0。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_divide.rst b/docs/api/api_python/ops/mindspore.ops.func_divide.rst index 8f5252e05b7..55015bec63f 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_divide.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_divide.rst @@ -3,5 +3,4 @@ mindspore.ops.divide .. py:function:: mindspore.ops.divide(x, other, *, rounding_mode=None) - ops.div()的别名。 - 详情请参考 :func:`mindspore.ops.div`。 + :func:`mindspore.ops.div` 的别名。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_expand.rst b/docs/api/api_python/ops/mindspore.ops.func_expand.rst index 71555f1b16f..4585e20b7a9 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_expand.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_expand.rst @@ -3,7 +3,7 @@ mindspore.ops.expand .. 
py:function:: mindspore.ops.expand(input_x, size) - 返回一个当前张量的新视图,其中单维度扩展到更大的尺寸。 + 返回一个当前Tensor的新视图,其中单维度扩展到更大的尺寸。 .. note:: 将 `-1` 作为维度的 `size` 意味着不更改该维度的大小。张量也可以扩展到更大的维度,新的维度会附加在前面。对于新的维度,`size` 不能设置为-1。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_fractional_max_pool2d.rst b/docs/api/api_python/ops/mindspore.ops.func_fractional_max_pool2d.rst index a7707ca3387..86e7cd50609 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_fractional_max_pool2d.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_fractional_max_pool2d.rst @@ -3,8 +3,6 @@ mindspore.ops.fractional_max_pool2d .. py:function:: mindspore.ops.fractional_max_pool2d(input_x, kernel_size, output_size=None, output_ratio=None, return_indices=False, _random_samples=None) - 对输入的多维数据进行二维的分数最大池化运算。 - 对多个输入平面组成的输入上应用2D分数最大池化。在 :math:`(kH_{in}, kW_{in})` 区域上应用最大池化操作,由输出shape决定随机步长。对于任何输入shape,指定输出shape为 :math:`(H, W)` 。输出特征的数量等于输入平面的数量。 在一个输入Tensor上应用2D fractional max pooling,可被视为组成一个2D平面。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_fractional_max_pool3d.rst b/docs/api/api_python/ops/mindspore.ops.func_fractional_max_pool3d.rst index de912d7160f..1d833135d11 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_fractional_max_pool3d.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_fractional_max_pool3d.rst @@ -3,8 +3,6 @@ mindspore.ops.fractional_max_pool3d .. py:function:: mindspore.ops.fractional_max_pool3d(input_x, kernel_size, output_size=None, output_ratio=None, return_indices=False, _random_samples=None) - 对输入的多维数据进行三维的分数最大池化运算。 - 对多个输入平面组成的输入上应用3D分数最大池化。在 :math:`(kD_{in}, kH_{in}, kW_{in})` 区域上应用最大池化操作,由输出shape决定随机步长。输出特征的数量等于输入平面的数量。 分数最大池化的详细描述在 `Fractional MaxPooling by Ben Graham <https://arxiv.org/abs/1412.6071>`_ 。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_hardswish.rst b/docs/api/api_python/ops/mindspore.ops.func_hardswish.rst index 7b7248f5916..c057c9d46b7 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_hardswish.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_hardswish.rst @@ -3,9 +3,7 @@ mindspore.ops.hardswish .. py:function:: mindspore.ops.hardswish(x) - Hard Swish激活函数。 - - 对输入的每个元素计算Hard Swish。输入是一个张量,具有任何有效的shape。 + 对输入的每个元素计算Hard Swish。输入是一个Tensor,具有任何有效的shape。 Hard Swish定义如下: diff --git a/docs/api/api_python/ops/mindspore.ops.func_hinge_embedding_loss.rst b/docs/api/api_python/ops/mindspore.ops.func_hinge_embedding_loss.rst index 7444701325c..e6574f7407d 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_hinge_embedding_loss.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_hinge_embedding_loss.rst @@ -3,7 +3,7 @@ mindspore.ops.hinge_embedding_loss .. py:function:: mindspore.ops.hinge_embedding_loss(inputs, targets, margin=1.0, reduction="mean") - Hinge Embedding 损失函数。按输入元素计算输出。衡量输入张量x和标签y(包含1或-1)之间的损失值。通常被用来衡量两个输入之间的相似度。 + Hinge Embedding 损失函数。按输入元素计算输出。衡量输入x和标签y(包含1或-1)之间的损失值。通常被用来衡量两个输入之间的相似度。 mini-batch中的第n个样例的损失函数为: diff --git a/docs/api/api_python/ops/mindspore.ops.func_i0.rst b/docs/api/api_python/ops/mindspore.ops.func_i0.rst index 36083c1b507..88837e361a1 100755 --- a/docs/api/api_python/ops/mindspore.ops.func_i0.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_i0.rst @@ -3,6 +3,4 @@ mindspore.ops.i0 ..
py:function:: mindspore.ops.i0(x) - ops.bessel_i0()的别名。 - - 详情请查看 :func:`mindspore.ops.bessel_i0`。 + :func:`mindspore.ops.bessel_i0` 的别名。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_kl_div.rst b/docs/api/api_python/ops/mindspore.ops.func_kl_div.rst index 4df0370c7d7..c9c223dd54d 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_kl_div.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_kl_div.rst @@ -5,7 +5,7 @@ mindspore.ops.kl_div 计算输入 `logits` 和 `labels` 的KL散度。 - 对于相同形状的张量 :math:`x` 和 :math:`target` ,kl_div的计算公式如下: + 对于相同shape的Tensor :math:`x` 和 :math:`target` ,kl_div的计算公式如下: .. math:: L(x, target) = target \cdot (\log target - x) diff --git a/docs/api/api_python/ops/mindspore.ops.func_log_softmax.rst b/docs/api/api_python/ops/mindspore.ops.func_log_softmax.rst index fd6a0414967..4f6113f4060 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_log_softmax.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_log_softmax.rst @@ -3,8 +3,6 @@ mindspore.ops.log_softmax .. py:function:: mindspore.ops.log_softmax(logits, axis=-1) - LogSoftmax激活函数。 - 在指定轴上对输入Tensor应用LogSoftmax函数。假设在指定轴上, :math:`x` 对应每个元素 :math:`x_i` ,则LogSoftmax函数如下所示: .. math:: diff --git a/docs/api/api_python/ops/mindspore.ops.func_logsigmoid.rst b/docs/api/api_python/ops/mindspore.ops.func_logsigmoid.rst index bc17ed3de85..005c8f8032e 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_logsigmoid.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_logsigmoid.rst @@ -3,8 +3,6 @@ mindspore.ops.logsigmoid .. py:function:: mindspore.ops.logsigmoid(x) - logsigmoid激活函数。 - 按元素计算logsigmoid激活函数。输入是任意格式的Tensor。 logsigmoid定义为: diff --git a/docs/api/api_python/ops/mindspore.ops.func_lp_pool1d.rst b/docs/api/api_python/ops/mindspore.ops.func_lp_pool1d.rst index 0fce60f1f08..105124ac6a5 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_lp_pool1d.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_lp_pool1d.rst @@ -3,9 +3,7 @@ mindspore.ops.lp_pool1d .. py:function:: mindspore.ops.lp_pool1d(x, norm_type, kernel_size, stride=None, ceil_mode=False) - 对输入的多维数据进行一维平面上的LP池化运算。 - - 在一个输入Tensor上应用1D LP pooling,可被视为组成一个1D输入平面。 + 在输入Tensor上应用1D LP池化运算,可被视为组成一个1D输入平面。 通常,输入的shape为 :math:`(N, C, L_{in})` 或 :math:`(C, L_{in})`,输出的shape为 :math:`(N, C, L_{out})` 或 :math:`(C, L_{out})`。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_lp_pool2d.rst b/docs/api/api_python/ops/mindspore.ops.func_lp_pool2d.rst index ba175404e10..6690a2cf3ef 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_lp_pool2d.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_lp_pool2d.rst @@ -3,9 +3,7 @@ mindspore.ops.lp_pool2d .. py:function:: mindspore.ops.lp_pool2d(x, norm_type, kernel_size, stride=None, ceil_mode=False) - 对输入的多维数据进行二维平面上的LP池化运算。 - - 在一个输入Tensor上应用2D LP pooling,可被视为组成一个2D输入平面。 + 在输入Tensor上应用2D LP池化运算,可被视为组成一个2D输入平面。 通常,输入的shape为 :math:`(N, C, H_{in}, W_{in})`,输出的shape为 :math:`(N, C, H_{in}, W_{in})`,输出与输入的shape一致,公式如下: diff --git a/docs/api/api_python/ops/mindspore.ops.func_masked_select.rst b/docs/api/api_python/ops/mindspore.ops.func_masked_select.rst index 60eb5de3877..6f0c511d090 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_masked_select.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_masked_select.rst @@ -3,7 +3,7 @@ mindspore.ops.masked_select .. 
py:function:: mindspore.ops.masked_select(x, mask) - 返回一个一维张量,其中的内容是 `x` 张量中对应于 `mask` 张量中True位置的值。`mask` 的shape与 `x` 的shape不需要一样,但必须符合广播规则。 + 返回一个一维Tensor,其中的内容是 `x` 中对应于 `mask` 中True位置的值。`mask` 的shape与 `x` 的shape不需要一样,但必须符合广播规则。 参数: - **x** (Tensor) - 它的shape是 :math:`(x_1, x_2, ..., x_R)`。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_matrix_set_diag.rst b/docs/api/api_python/ops/mindspore.ops.func_matrix_set_diag.rst index 2d42263dee0..7626f3b1e69 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_matrix_set_diag.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_matrix_set_diag.rst @@ -3,7 +3,7 @@ mindspore.ops.matrix_set_diag .. py:function:: mindspore.ops.matrix_set_diag(x, diagonal, k=0, align="RIGHT_LEFT") - 返回具有新的对角线值的批处理矩阵张量。 + 返回具有新的对角线值的批处理矩阵Tensor。 给定输入 `x` 和对角线 `diagonal` ,此操作返回与 `x` 具有相同形状和值的张量,但返回的张量除开最内层矩阵的对角线。这些值将被对角线中的值覆盖。如果某些对角线比 `max_diag_len` 短,则需要被填充。 其中 `max_diag_len` 指的是对角线的最长长度。 `diagonal` 的维度 :math:`shape[-2]` 必须等于对角线个数 `num_diags` :math:`k[1] - k[0] + 1`, `diagonal` 的维度 :math:`shape[-1]` 必须 diff --git a/docs/api/api_python/ops/mindspore.ops.func_max.rst b/docs/api/api_python/ops/mindspore.ops.func_max.rst index 22f4b000c26..6e1c9f443a9 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_max.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_max.rst @@ -3,8 +3,6 @@ mindspore.ops.max .. py:function:: mindspore.ops.max(x, axis=0, keep_dims=False) - 根据指定的索引计算最大值,并返回索引和值。 - 在给定轴上计算输入Tensor的最大值。并且返回最大值和索引。 .. note:: diff --git a/docs/api/api_python/ops/mindspore.ops.func_min.rst b/docs/api/api_python/ops/mindspore.ops.func_min.rst index db2edcf1e58..7e2c94f67eb 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_min.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_min.rst @@ -3,8 +3,6 @@ mindspore.ops.min .. py:function:: mindspore.ops.min(x, axis=0, keep_dims=False) - 根据指定的索引计算最小值,并返回索引和值。 - 在给定轴上计算输入Tensor的最小值。并且返回最小值和索引。 .. note:: diff --git a/docs/api/api_python/ops/mindspore.ops.func_negative.rst b/docs/api/api_python/ops/mindspore.ops.func_negative.rst index 35839663c41..7ff7701b565 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_negative.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_negative.rst @@ -3,6 +3,4 @@ mindspore.ops.negative .. py:function:: mindspore.ops.negative(x) - ops.neg()的别名。 - - 详情请参考 :func:`mindspore.ops.neg`。 + :func:`mindspore.ops.neg` 的别名。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_padding.rst b/docs/api/api_python/ops/mindspore.ops.func_padding.rst index 063202403bf..682239a4631 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_padding.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_padding.rst @@ -3,8 +3,7 @@ mindspore.ops.padding .. py:function:: mindspore.ops.padding(x, pad_dim_size=8) - 通过填充0,将输入张量的最后一个维度从1扩展到指定大小。 - + 通过填充0,将输入Tensor的最后一个维度从1扩展到指定大小。 参数: - **x** (Tensor) - `x` 的shape为 :math:`(x_1, x_2, ..., x_R)`,秩至少为2,它的最后一个维度必须为1。其数据类型为数值型。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_pixel_shuffle.rst b/docs/api/api_python/ops/mindspore.ops.func_pixel_shuffle.rst index ce117c17b43..f5513943cb0 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_pixel_shuffle.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_pixel_shuffle.rst @@ -3,8 +3,6 @@ mindspore.ops.pixel_shuffle .. 
py:function:: mindspore.ops.pixel_shuffle(x, upscale_factor) - pixel_shuffle函数。 - 在多个输入平面组成的输入上面应用pixel_shuffle算法。在平面上应用高效亚像素卷积,步长为 :math:`1/r` 。关于pixel_shuffle算法详细介绍,请参考 `Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network <https://arxiv.org/abs/1609.05158>`_ 。 通常情况下,`x` shape :math:`(*, C \times r^2, H, W)` ,输出shape :math:`(*, C, H \times r, W \times r)` 。`r` 是缩小因子。 `*` 是大于等于0的维度。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_pixel_unshuffle.rst b/docs/api/api_python/ops/mindspore.ops.func_pixel_unshuffle.rst index 057f5316965..48ee38e5470 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_pixel_unshuffle.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_pixel_unshuffle.rst @@ -3,8 +3,6 @@ mindspore.ops.pixel_unshuffle .. py:function:: mindspore.ops.pixel_unshuffle(x, downscale_factor) - pixel_unshuffle函数。 - 在多个输入平面组成的输入上面应用pixel_unshuffle算法。关于pixel_unshuffle算法详细介绍,请参考 `Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network <https://arxiv.org/abs/1609.05158>`_ 。 通常情况下,`x` shape :math:`(*, C, H \times r, W \times r)` ,输出shape :math:`(*, C \times r^2, H, W)` 。`r` 是缩小因子。 `*` 是大于等于0的维度。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_scalar_to_tensor.rst b/docs/api/api_python/ops/mindspore.ops.func_scalar_to_tensor.rst index 2b9cca444bf..663d68cc0e3 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_scalar_to_tensor.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_scalar_to_tensor.rst @@ -3,7 +3,7 @@ mindspore.ops.scalar_to_tensor .. py:function:: mindspore.ops.scalar_to_tensor(input_x, dtype=mstype.float32) - 将Scalar转换为指定数据类型的 `Tensor` 。 + 将Scalar转换为指定数据类型的Tensor。 参数: - **input_x** (Union[int, float]) - 输入是Scalar。只能是常量值。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_scatter_div.rst b/docs/api/api_python/ops/mindspore.ops.func_scatter_div.rst index e2563d58659..c8d8e7c16af 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_scatter_div.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_scatter_div.rst @@ -3,8 +3,6 @@ mindspore.ops.scatter_div .. py:function:: mindspore.ops.scatter_div(input_x, indices, updates) - 通过除法操作更新输入张量的值。 - 根据指定更新值和输入索引通过除法操作更新输入数据的值。 该操作在更新完成后输出 `input_x` ,这样方便使用更新后的值。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_scatter_min.rst b/docs/api/api_python/ops/mindspore.ops.func_scatter_min.rst index 7d3e41ea2aa..e1694472f29 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_scatter_min.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_scatter_min.rst @@ -3,8 +3,6 @@ mindspore.ops.scatter_min .. py:function:: mindspore.ops.scatter_min(input_x, indices, updates) - 通过最小操作更新输入张量的值。 - 根据指定更新值和输入索引通过最小值操作更新输入数据的值。 该操作在更新完成后输出 `input_x` ,这样方便使用更新后的值。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_scatter_nd_add.rst b/docs/api/api_python/ops/mindspore.ops.func_scatter_nd_add.rst index 84de1efe8d1..7ad8b83af9f 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_scatter_nd_add.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_scatter_nd_add.rst @@ -3,7 +3,7 @@ mindspore.ops.scatter_nd_add ..
py:function:: mindspore.ops.scatter_nd_add(input_x, indices, updates, use_locking=False) - 将sparse addition应用于张量中的单个值或切片。 + 将sparse addition应用于Tensor中的单个值或切片。 使用给定值通过加法运算和输入索引更新Tensor值。在更新完成后输出 `input_x` ,这有利于更加方便地使用更新后的值。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_scatter_nd_div.rst b/docs/api/api_python/ops/mindspore.ops.func_scatter_nd_div.rst index 4dd1d5473eb..19dfb5384f5 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_scatter_nd_div.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_scatter_nd_div.rst @@ -3,7 +3,7 @@ mindspore.ops.scatter_nd_div .. py:function:: mindspore.ops.scatter_nd_div(input_x, indices, updates, use_locking=False) - 将sparse division应用于张量中的单个值或切片。 + 将sparse division应用于Tensor中的单个值或切片。 使用给定值通过除法运算和输入索引更新 `input_x` 的值。为便于使用,函数返回 `input_x` 的复制。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_scatter_nd_max.rst b/docs/api/api_python/ops/mindspore.ops.func_scatter_nd_max.rst index 1e93495c5a9..660fce686cc 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_scatter_nd_max.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_scatter_nd_max.rst @@ -3,7 +3,7 @@ mindspore.ops.scatter_nd_max .. py:function:: mindspore.ops.scatter_nd_max(input_x, indices, updates, use_locking=False) - 对张量中的单个值或切片应用sparse maximum。 + 对Tensor中的单个值或切片应用sparse maximum。 使用给定值通过最大值运算和输入索引更新Parameter值。在更新完成后输出 `input_x` ,这有利于更加方便地使用更新后的值。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_scatter_nd_min.rst b/docs/api/api_python/ops/mindspore.ops.func_scatter_nd_min.rst index 45204f5a8c0..ef2f422d0eb 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_scatter_nd_min.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_scatter_nd_min.rst @@ -3,7 +3,7 @@ mindspore.ops.scatter_nd_min .. py:function:: mindspore.ops.scatter_nd_min(input_x, indices, updates, use_locking=False) - 对张量中的单个值或切片应用sparse minimum。 + 对Tensor中的单个值或切片应用sparse minimum。 使用给定值通过最小值运算和输入索引更新 `input_x` 的值。为便于使用,函数返回 `input_x` 的复制。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_select.rst b/docs/api/api_python/ops/mindspore.ops.func_select.rst index 3833345b0ba..b16c9489119 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_select.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_select.rst @@ -3,7 +3,7 @@ mindspore.ops.select .. py:function:: mindspore.ops.select(cond, x, y) - 根据条件判断Tensor中的元素的值来,决定输出中的相应元素是从 `x` (如果元素值为True)还是从 `y` (如果元素值为False)中选择。 + 根据条件判断Tensor中的元素的值,来决定输出中的相应元素是从 `x` (如果元素值为True)还是从 `y` (如果元素值为False)中选择。 该算法可以被定义为: diff --git a/docs/api/api_python/ops/mindspore.ops.func_sgn.rst b/docs/api/api_python/ops/mindspore.ops.func_sgn.rst index c50686cf89a..f7b421ba674 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_sgn.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_sgn.rst @@ -3,7 +3,7 @@ mindspore.ops.sgn .. py:function:: mindspore.ops.sgn(x) - 此方法为 :func:`mindspore.ops.sign` 在复数张量上的扩展。 + 此方法为 :func:`mindspore.ops.sign` 在复数Tensor上的扩展。 .. math:: \text{out}_{i} = \begin{cases} diff --git a/docs/api/api_python/ops/mindspore.ops.func_sigmoid.rst b/docs/api/api_python/ops/mindspore.ops.func_sigmoid.rst index 55e4965b2b6..d1eaca8a611 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_sigmoid.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_sigmoid.rst @@ -3,7 +3,7 @@ mindspore.ops.sigmoid .. py:function:: mindspore.ops.sigmoid(input_x) - Sigmoid激活函数,逐元素计算Sigmoid激活函数。Sigmoid函数定义为: + 逐元素计算Sigmoid激活函数。Sigmoid函数定义为: .. 
math:: diff --git a/docs/api/api_python/ops/mindspore.ops.func_softmax.rst b/docs/api/api_python/ops/mindspore.ops.func_softmax.rst index 1fa527aabd6..f850cc16174 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_softmax.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_softmax.rst @@ -3,9 +3,7 @@ mindspore.ops.softmax .. py:function:: mindspore.ops.softmax(x, axis=-1) - Softmax函数。 - - 在指定轴上对输入Tensor执行Softmax函数做归一化操作。假设指定轴 :math:`x` 上有切片,那么每个元素 :math:`x_i` 所对应的Softmax函数如下所示: + 在指定轴上对输入Tensor执行Softmax激活函数做归一化操作。假设指定轴 :math:`x` 上有切片,那么每个元素 :math:`x_i` 所对应的Softmax函数如下所示: .. math:: \text{output}(x_i) = \frac{\exp(x_i)}{\sum_{j = 0}^{N-1}\exp(x_j)}, diff --git a/docs/api/api_python/ops/mindspore.ops.func_space_to_batch_nd.rst b/docs/api/api_python/ops/mindspore.ops.func_space_to_batch_nd.rst index 06d9b7cde96..f4548ea6a44 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_space_to_batch_nd.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_space_to_batch_nd.rst @@ -3,7 +3,7 @@ mindspore.ops.space_to_batch_nd .. py:function:: mindspore.ops.space_to_batch_nd(input_x, block_size, paddings) - 将空间维度划分为对应大小的块,然后在批次维度重排张量。 + 将空间维度划分为对应大小的块,然后在批次维度重排Tensor。 此函数将输入的空间维度 [1, ..., M] 划分为形状为 `block_size` 的块网格,并将这些块在批次维度上(默认是第零维)中交错排列。 输出的张量在空间维度上的截面是输入在对应空间维度上截面的一个网格,而输出的批次维度的大小为空间维度分解成块网格的数量乘以输入的批次维度的大小。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_tanh.rst b/docs/api/api_python/ops/mindspore.ops.func_tanh.rst index a583c4ed5ce..d4e6e4f2fb1 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_tanh.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_tanh.rst @@ -3,9 +3,7 @@ .. py:function:: mindspore.ops.tanh(input_x) - Tanh激活函数。 - - 按元素计算输入元素的双曲正切。Tanh函数定义为: + 逐元素计算输入元素的双曲正切。Tanh函数定义为: .. math:: tanh(x_i) = \frac{\exp(x_i) - \exp(-x_i)}{\exp(x_i) + \exp(-x_i)} = \frac{\exp(2x_i) - 1}{\exp(2x_i) + 1}, diff --git a/docs/api/api_python/ops/mindspore.ops.func_tensor_split.rst b/docs/api/api_python/ops/mindspore.ops.func_tensor_split.rst index 4572994763b..6c5d831796b 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_tensor_split.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_tensor_split.rst @@ -3,7 +3,7 @@ mindspore.ops.tensor_split .. py:function:: mindspore.ops.tensor_split(x, indices_or_sections, axis=0) - 根据指定的轴将输入Tensor进行分割成多个子tensor。 + 根据指定的轴将输入Tensor分割成多个子Tensor。 参数: - **x** (Tensor) - 待分割的Tensor。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_true_divide.rst b/docs/api/api_python/ops/mindspore.ops.func_true_divide.rst index e9889be2e61..56f4bba34ab 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_true_divide.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_true_divide.rst @@ -3,6 +3,4 @@ .. py:function:: mindspore.ops.true_divide(dividend, divisor) - ops.div()在 :math:`rounding\_mode=None` 时的别名。 - - 获取更多详情请查看: :func:`mindspore.ops.div` 。 + :func:`mindspore.ops.div` 在 :math:`rounding\_mode=None` 时的别名。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_trunc.rst b/docs/api/api_python/ops/mindspore.ops.func_trunc.rst index b03b73805d3..a1551e55876 100755 --- a/docs/api/api_python/ops/mindspore.ops.func_trunc.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_trunc.rst @@ -3,7 +3,7 @@ mindspore.ops.trunc ..
py:function:: mindspore.ops.trunc(input) - 返回一个新的张量,该张量具有输入元素的截断整数值。 + 返回一个新的Tensor,该Tensor具有输入元素的截断整数值。 参数: - **input** (Tensor) - 任意维度的Tensor。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_unique_consecutive.rst b/docs/api/api_python/ops/mindspore.ops.func_unique_consecutive.rst index 899910ab305..d4709f54b45 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_unique_consecutive.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_unique_consecutive.rst @@ -3,7 +3,7 @@ mindspore.ops.unique_consecutive .. py:function:: mindspore.ops.unique_consecutive(x, return_idx=False, return_counts=False, axis=None) - 对输入张量中连续且重复的元素去重。 + 对输入Tensor中连续且重复的元素去重。 参数: - **x** (Tensor) - 输入Tensor。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_unique_with_pad.rst b/docs/api/api_python/ops/mindspore.ops.func_unique_with_pad.rst index 4e84d886790..51e9c159fd9 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_unique_with_pad.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_unique_with_pad.rst @@ -3,11 +3,11 @@ mindspore.ops.unique_with_pad .. py:function:: mindspore.ops.unique_with_pad(x, pad_num) - 对输入一维张量中元素去重,返回一维张量中的唯一元素(使用pad_num填充)和相对索引。 + 对输入一维Tensor中元素去重,返回一维Tensor中的唯一元素(使用pad_num填充)和相对索引。 基本操作与unique相同,但unique_with_pad多了pad操作。 - unique运算符对张量处理后所返回的元组( `y` , `idx` ), `y` 与 `idx` 的shape通常会有差别。因此,为了解决上述情况, - unique_with_pad操作符将用用户指定的 `pad_num` 填充 `y` 张量,使其具有与张量 `idx` 相同的形状。 + unique运算符对Tensor处理后所返回的元组( `y` , `idx` ), `y` 与 `idx` 的shape通常会有差别。因此,为了解决上述情况, + unique_with_pad操作符将用用户指定的 `pad_num` 填充 `y` ,使其具有与 `idx` 相同的形状。 参数: - **x** (Tensor) - 需要被去重的Tensor。必须是类型为int32或int64的一维向量。 @@ -18,4 +18,4 @@ mindspore.ops.unique_with_pad 异常: - **TypeError** - `x` 的数据类型既不是int32也不是int64。 - - **ValueError** - `x` 不是一维张量。 + - **ValueError** - `x` 不是一维Tensor。 diff --git a/docs/api/api_python/ops/mindspore.ops.func_unsorted_segment_min.rst b/docs/api/api_python/ops/mindspore.ops.func_unsorted_segment_min.rst index 2f915128890..598e39edf57 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_unsorted_segment_min.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_unsorted_segment_min.rst @@ -5,7 +5,7 @@ mindspore.ops.unsorted_segment_min 沿分段计算输入Tensor的最小值。 - UnsortedSegmentMin的计算过程如下图所示: + unsorted_segment_min的计算过程如下图所示: .. image:: UnsortedSegmentMin.png diff --git a/docs/api/api_python/ops/mindspore.ops.func_unsorted_segment_prod.rst b/docs/api/api_python/ops/mindspore.ops.func_unsorted_segment_prod.rst index 88068c95360..64aec6ea438 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_unsorted_segment_prod.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_unsorted_segment_prod.rst @@ -5,7 +5,7 @@ mindspore.ops.unsorted_segment_prod 沿分段计算输入Tensor元素的乘积。 - UnsortedSegmentProd的计算过程如下图所示: + unsorted_segment_prod的计算过程如下图所示: .. image:: UnsortedSegmentProd.png diff --git a/docs/api/api_python/ops/mindspore.ops.func_unsorted_segment_sum.rst b/docs/api/api_python/ops/mindspore.ops.func_unsorted_segment_sum.rst index c61544401bf..396c4ae30d6 100644 --- a/docs/api/api_python/ops/mindspore.ops.func_unsorted_segment_sum.rst +++ b/docs/api/api_python/ops/mindspore.ops.func_unsorted_segment_sum.rst @@ -7,7 +7,7 @@ 计算输出Tensor :math:`\text{output}[i] = \sum_{segment\_ids[j] == i} \text{data}[j, \ldots]` ,其中 :math:`j,...` 是代表元素索引的Tuple。 `segment_ids` 确定输入Tensor元素的分段。 `segment_ids` 不需要排序,也不需要覆盖 `num_segments` 范围内的所有值。 - UnsortedSegmentSum的计算过程如下图所示: + unsorted_segment_sum的计算过程如下图所示: .. 
image:: UnsortedSegmentSum.png diff --git a/mindspore/python/mindspore/nn/layer/activation.py b/mindspore/python/mindspore/nn/layer/activation.py index 7ec3bc840fa..a9b564ac3bb 100644 --- a/mindspore/python/mindspore/nn/layer/activation.py +++ b/mindspore/python/mindspore/nn/layer/activation.py @@ -265,8 +265,6 @@ class Softmax(Cell): class LogSoftmax(Cell): r""" - LogSoftmax activation function. - Applies the LogSoftmax function to n-dimensional input tensor. The input is transformed by the Softmax function and then by the log function to lie in range[-inf,0). @@ -661,8 +659,6 @@ class SiLU(Cell): class Tanh(Cell): r""" - Tanh activation function. - Applies the Tanh function element-wise, returns a new tensor with the hyperbolic tangent of the elements of input, The input is a Tensor with any valid shape. @@ -759,8 +755,6 @@ def _dtype_check(x_dtype, prim_name): class Hardtanh(Cell): r""" - Hardtanh activation function. - Applies the Hardtanh function element-wise. The activation function is defined as: .. math:: @@ -1131,8 +1125,6 @@ class PReLU(Cell): class HSwish(Cell): r""" - Hard swish activation function. - Applies hswish-type activation element-wise. The input is a Tensor with any valid shape. Hard swish is defined as: @@ -1214,8 +1206,6 @@ class HSigmoid(Cell): class LogSigmoid(Cell): r""" - Logsigmoid activation function. - Applies logsigmoid activation element-wise. The input is a Tensor with any valid shape. Logsigmoid is defined as: diff --git a/mindspore/python/mindspore/nn/layer/conv.py b/mindspore/python/mindspore/nn/layer/conv.py index 68effe8f76b..90d8f2f0ae7 100644 --- a/mindspore/python/mindspore/nn/layer/conv.py +++ b/mindspore/python/mindspore/nn/layer/conv.py @@ -130,9 +130,8 @@ class _Conv(Cell): class Conv2d(_Conv): r""" - 2D convolution layer. - - Calculates the 2D convolution on the input tensor which is typically of shape :math:`(N, C_{in}, H_{in}, W_{in})`, + Calculates the 2D convolution on the input tensor. + The input is typically of shape :math:`(N, C_{in}, H_{in}, W_{in})`, where :math:`N` is batch size, :math:`C_{in}` is a number of channels, :math:`H_{in}, W_{in}` are the height and width of the feature layer respectively. For the tensor of each batch, its shape is :math:`(C_{in}, H_{in}, W_{in})`, the formula is defined as: @@ -324,9 +323,7 @@ def _check_input_3d(input_shape, op_name): class Conv1d(_Conv): r""" - 1D convolution layer. - - Calculates the 1D convolution on the input tensor which is typically of shape :math:`(N, C_{in}, L_{in})`, + Calculates the 1D convolution on the input tensor. The input is typically of shape :math:`(N, C_{in}, L_{in})`, where :math:`N` is batch size, :math:`C_{in}` is a number of channels and :math:`L_{in}` is a length of sequence. For the tensor of each batch, its shape is :math:`(C_{in}, L_{in})`, and the formula is defined as: @@ -507,9 +504,7 @@ def _check_input_5dims(input_shape, op_name): class Conv3d(_Conv): r""" - 3D convolution layer. - - Calculates the 3D convolution on the input tensor which is typically of shape + Calculates the 3D convolution on the input tensor. The input is typically of shape :math:`(N, C_{in}, D_{in}, H_{in}, W_{in})`, where :math:`N` is batch size, :math:`C_{in}` is a number of channels, :math:`D_{in}, H_{in}, W_{in}` are the depth, height and width of the feature layer respectively. @@ -726,8 +721,6 @@ class Conv3d(_Conv): class Conv3dTranspose(_Conv): r""" - 3D transposed convolution layer. 
- Calculates a 3D transposed convolution, which can be regarded as Conv3d for the gradient of the input. It is also called deconvolution (although it is not an actual deconvolution). @@ -938,8 +931,6 @@ def _deconv_output_length(is_valid, is_same, is_pad, input_length, filter_size, class Conv2dTranspose(_Conv): r""" - 2D transposed convolution layer. - Calculates a 2D transposed convolution, which can be regarded as Conv2d for the gradient of the input, also called deconvolution (although it is not an actual deconvolution). @@ -1134,8 +1125,6 @@ class Conv2dTranspose(_Conv): class Conv1dTranspose(_Conv): r""" - 1D transposed convolution layer. - Calculates a 1D transposed convolution, which can be regarded as Conv1d for the gradient of the input, also called deconvolution (although it is not an actual deconvolution). diff --git a/mindspore/python/mindspore/nn/layer/image.py b/mindspore/python/mindspore/nn/layer/image.py index 95bbe780afe..1a51b7cbe42 100644 --- a/mindspore/python/mindspore/nn/layer/image.py +++ b/mindspore/python/mindspore/nn/layer/image.py @@ -561,8 +561,6 @@ class CentralCrop(Cell): class PixelShuffle(Cell): r""" - PixelShuffle operatrion. - Applies a pixelshuffle operation over an input signal composed of several input planes. This is useful for implementing efficient sub-pixel convolution with a stride of :math:`1/r`. For more details, refer to `Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network @@ -607,8 +605,6 @@ class PixelUnshuffle(Cell): r""" - PixelUnshuffle operatrion. - Applies a pixelunshuffle operation over an input signal composed of several input planes. For more details, refer to `Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network <https://arxiv.org/abs/1609.05158>`_ . diff --git a/mindspore/python/mindspore/nn/layer/normalization.py b/mindspore/python/mindspore/nn/layer/normalization.py index df5ffb6c05e..049e8695f2b 100644 --- a/mindspore/python/mindspore/nn/layer/normalization.py +++ b/mindspore/python/mindspore/nn/layer/normalization.py @@ -163,8 +163,6 @@ class _BatchNorm(Cell): class BatchNorm1d(_BatchNorm): r""" - Batch Normalization layer over a 2D input. - This layer applies Batch Normalization over a 2D input (a mini-batch of 1D inputs) to reduce internal covariate shift. Batch Normalization is widely used in convolutional networks. @@ -239,8 +237,6 @@ class BatchNorm1d(_BatchNorm): class BatchNorm2d(_BatchNorm): r""" - Batch Normalization layer over a 4D input. - Batch Normalization is widely used in convolutional networks. This layer applies Batch Normalization over a 4D input (a mini-batch of 2D inputs with additional channel dimension) to avoid internal covariate shift as described @@ -333,8 +329,6 @@ class BatchNorm2d(_BatchNorm): class BatchNorm3d(Cell): r""" - Batch Normalization layer over a 5D input. - Batch Normalization is widely used in convolutional networks. This layer applies Batch Normalization over a 5D input (a mini-batch of 3D inputs with additional channel dimension) to avoid internal covariate shift. @@ -759,10 +753,8 @@ class _InstanceNorm(Cell): class InstanceNorm1d(_InstanceNorm): r""" - Instance Normalization layer over a 3D input. - This layer applies Instance Normalization over a 3D input (a mini-batch of 1D inputs with - additional channel dimension) as described in the paper `Instance Normalization: The Missing Ingredient for + additional channel dimension).
Refer to the paper `Instance Normalization: The Missing Ingredient for Fast Stylization <https://arxiv.org/abs/1607.08022>`_. It rescales and recenters the feature using a mini-batch of data and the learned parameters which can be described in the following formula. @@ -840,10 +832,8 @@ class InstanceNorm1d(_InstanceNorm): class InstanceNorm2d(_InstanceNorm): r""" - Instance Normalization layer over a 4D input. - This layer applies Instance Normalization over a 4D input (a mini-batch of 2D inputs with - additional channel dimension) as described in the paper `Instance Normalization: The Missing Ingredient for + additional channel dimension). Refer to the paper `Instance Normalization: The Missing Ingredient for Fast Stylization <https://arxiv.org/abs/1607.08022>`_. It rescales and recenters the feature using a mini-batch of data and the learned parameters which can be described in the following formula. @@ -921,10 +911,8 @@ class InstanceNorm2d(_InstanceNorm): class InstanceNorm3d(_InstanceNorm): r""" - Instance Normalization layer over a 5D input. - This layer applies Instance Normalization over a 5D input (a mini-batch of 3D inputs with - additional channel dimension) as described in the paper `Instance Normalization: The Missing Ingredient for + additional channel dimension). Refer to the paper `Instance Normalization: The Missing Ingredient for Fast Stylization <https://arxiv.org/abs/1607.08022>`_. It rescales and recenters the feature using a mini-batch of data and the learned parameters which can be described in the following formula. diff --git a/mindspore/python/mindspore/nn/layer/pooling.py b/mindspore/python/mindspore/nn/layer/pooling.py index 0a02968fdcc..683530e8b38 100644 --- a/mindspore/python/mindspore/nn/layer/pooling.py +++ b/mindspore/python/mindspore/nn/layer/pooling.py @@ -82,8 +82,6 @@ def _shape_check(in_shape, prim_name=None): class LPPool1d(Cell): r""" - LPPool1d pooling operation. - Applies a 1D power lp pooling over an input signal composed of several input planes. Typically the input is of shape :math:`(N, C, L_{in})` or :math:`(C, L_{in})`, the output is of shape @@ -153,8 +151,6 @@ class LPPool1d(Cell): class LPPool2d(Cell): r""" - LPPool2d pooling operation. - Applies a 2D power lp pooling over an input signal composed of several input planes. Typically the input is of shape :math:`(N, C, H_{in}, W_{in})`, the output is of shape @@ -351,8 +347,6 @@ class MaxPool3d(Cell): class MaxPool2d(_PoolNd): r""" - 2D max pooling operation for temporal data. - Applies a 2D max pooling over an input Tensor which can be regarded as a composition of 2D planes. Typically the input is of shape :math:`(N_{in}, C_{in}, H_{in}, W_{in})`, MaxPool2d outputs @@ -423,8 +417,6 @@ class MaxPool2d(_PoolNd): class MaxPool1d(_PoolNd): r""" - 1D max pooling operation for temporal data. - Applies a 1D max pooling over an input Tensor which can be regarded as a composition of 1D planes. Typically the input is of shape :math:`(N_{in}, C_{in}, L_{in})`, MaxPool1d outputs @@ -506,8 +498,6 @@ class MaxPool1d(_PoolNd): class AvgPool3d(Cell): r""" - 3D average pooling operation. - Applies a 3D average pooling over an input Tensor which can be regarded as a composition of 3D input planes. Typically the input is of shape :math:`(N, C, D_{in}, H_{in}, W_{in})`, and AvgPool3D outputs regional average in the :math:`(D_{in}, H_{in}, W_{in})`-dimension. Given kernel size @@ -599,8 +589,6 @@ class AvgPool3d(Cell): class AvgPool2d(_PoolNd): r""" - 2D average pooling for temporal data. - Applies a 2D average pooling over an input Tensor which can be regarded as a composition of 2D input planes.
Typically the input is of shape :math:`(N_{in}, C_{in}, H_{in}, W_{in})`, AvgPool2d outputs @@ -678,8 +666,6 @@ class AvgPool2d(_PoolNd): class AvgPool1d(_PoolNd): r""" - 1D average pooling for temporal data. - Applies a 1D average pooling over an input Tensor which can be regarded as a composition of 1D input planes. Typically the input is of shape :math:`(N_{in}, C_{in}, L_{in})`, AvgPool1d outputs @@ -795,8 +781,6 @@ def _adaptive_dtype_check(x_dtype, prim_name): class AdaptiveAvgPool1d(Cell): r""" - 1D adaptive average pooling for temporal data. - Applies a 1D adaptive average pooling over an input Tensor which can be regarded as a composition of 1D input planes. @@ -873,8 +857,6 @@ class AdaptiveAvgPool1d(Cell): class AdaptiveAvgPool2d(Cell): r""" - 2D adaptive average pooling for temporal data. - This operator applies a 2D adaptive average pooling to an input signal composed of multiple input planes. That is, for any input size, the size of the specified output is H x W. The number of output features is equal to the number of input features. @@ -935,8 +917,6 @@ class AdaptiveAvgPool2d(Cell): class AdaptiveAvgPool3d(Cell): r""" - 3D adaptive average pooling for temporal data. - This operator applies a 3D adaptive average pooling to an input signal composed of multiple input planes. That is, for any input size, the size of the specified output is :math:`(D, H, W)`. The number of output features is equal to the number of input planes. @@ -1017,8 +997,6 @@ class AdaptiveAvgPool3d(Cell): class AdaptiveMaxPool1d(Cell): r""" - 1D adaptive maximum pooling for temporal data. - Applies a 1D adaptive maximum pooling over an input Tensor which can be regarded as a composition of 1D input planes. @@ -1094,8 +1072,6 @@ class AdaptiveMaxPool1d(Cell): class AdaptiveMaxPool2d(Cell): r""" - AdaptiveMaxPool2d operation. - This operator applies a 2D adaptive max pooling to an input signal composed of multiple input planes. That is, for any input size, the size of the specified output is H x W. The number of output features is equal to the number of input planes. @@ -1257,8 +1233,6 @@ class AdaptiveMaxPool3d(Cell): class FractionalMaxPool2d(Cell): r""" - 2D fractional max pooling operation for temporal data. - Applies a 2D fractional max pooling to an input signal composed of multiple input planes. The max-pooling operation is applied in kH × kW regions by a stochastic step size determined by the target output size. For any input size, the size of the specified output is H x W. The number @@ -1362,8 +1336,6 @@ class FractionalMaxPool2d(Cell): class FractionalMaxPool3d(Cell): r""" - 3D fractional max pooling operation for temporal data. - This operator applies a 3D fractional max pooling over an input signal composed of several input planes. The max-pooling operation is applied in kD x kH x kW regions by a stochastic step size determined by the target output size. The number of output features is equal to the number of input planes. diff --git a/mindspore/python/mindspore/ops/function/array_func.py b/mindspore/python/mindspore/ops/function/array_func.py index b0d194b5915..ea2a96976ef 100644 --- a/mindspore/python/mindspore/ops/function/array_func.py +++ b/mindspore/python/mindspore/ops/function/array_func.py @@ -2270,8 +2270,6 @@ def scatter_add(input_x, indices, updates): def scatter_min(input_x, indices, updates): r""" - Updates the value of the input tensor through the minimum operation. - Using given values to update tensor value through the min operation, along with the input indices.
This operation outputs the `input_x` after the update is done, which makes it convenient to use the updated value. @@ -2326,8 +2324,6 @@ def scatter_min(input_x, indices, updates): def scatter_div(input_x, indices, updates): r""" - Updates the value of the input tensor through the divide operation. - Using given values to update tensor value through the div operation, along with the input indices. This operation outputs the `input_x` after the update is done, which makes it convenient to use the updated value. @@ -4234,7 +4230,7 @@ def unsorted_segment_prod(x, segment_ids, num_segments): r""" Computes the product of a tensor along segments. - The following figure shows the calculation process of UnsortedSegmentProd: + The following figure shows the calculation process of unsorted_segment_prod: .. image:: UnsortedSegmentProd.png @@ -5130,8 +5126,6 @@ def dsplit(x, indices_or_sections): def max(x, axis=0, keep_dims=False): """ - Calculates the maximum value with the corresponding index. - Calculates the maximum value along with the given axis for the input tensor. It returns the maximum values and indices. @@ -5225,8 +5219,6 @@ def argmax(x, axis=None, keepdims=False): def min(x, axis=0, keep_dims=False): """ - Calculates the minimum value with corresponding index, and returns indices and values. - Calculates the minimum value along with the given axis for the input tensor. It returns the minimum values and indices. @@ -5380,7 +5372,7 @@ def unsorted_segment_sum(input_x, segment_ids, num_segments): up. Segment_ids does not need to be sorted, and it does not need to cover all values in the entire valid value range. - The following figure shows the calculation process of UnsortedSegmentSum: + The following figure shows the calculation process of unsorted_segment_sum: .. image:: UnsortedSegmentSum.png diff --git a/mindspore/python/mindspore/ops/function/clip_func.py b/mindspore/python/mindspore/ops/function/clip_func.py index 5e9854c546e..eeee2612389 100644 --- a/mindspore/python/mindspore/ops/function/clip_func.py +++ b/mindspore/python/mindspore/ops/function/clip_func.py @@ -207,8 +207,7 @@ def clamp(x, min=None, max=None): def clip(x, min=None, max=None): r""" - Alias for ops.clamp. - For details, please refer to :func:`mindspore.ops.clamp`. + Alias for :func:`mindspore.ops.clamp` . Supported Platforms: ``Ascend`` ``GPU`` ``CPU`` diff --git a/mindspore/python/mindspore/ops/function/math_func.py b/mindspore/python/mindspore/ops/function/math_func.py index 15a5bb557cc..30cd78e38eb 100644 --- a/mindspore/python/mindspore/ops/function/math_func.py +++ b/mindspore/python/mindspore/ops/function/math_func.py @@ -247,8 +247,7 @@ def abs(x): def absolute(x): """ - Alias for ops.abs(). - For details, please refer to :func:`mindspore.ops.abs`. + Alias for :func:`mindspore.ops.abs` . """ return abs(x) @@ -583,8 +582,7 @@ def neg(x): def negative(x): r""" - Alias for ops.neg(). - For details, please refer to :func:`mindspore.ops.neg`. + Alias for :func:`mindspore.ops.neg` . """ return neg_tensor(x) @@ -816,8 +814,7 @@ def subtract(x, other, *, alpha=1): def true_divide(dividend, divisor): r""" - Alias for Tensor.div() with :math:`rounding\_mode=None`. - For details, please refer to :func:`mindspore.ops.div`. + Alias for :func:`mindspore.ops.div` with :math:`rounding\_mode=None`. """ return div(dividend, divisor, rounding_mode=None) @@ -933,8 +930,7 @@ def div(input, other, rounding_mode=None): def divide(x, other, *, rounding_mode=None): """ - Alias for ops.div(). 
- For details, please refer to :func:`mindspore.ops.div`. + Alias for :func:`mindspore.ops.div` . """ return div(x, other, rounding_mode) @@ -1333,8 +1329,7 @@ def floor(x): def i0(x): r""" - Alias for ops.bessel_i0. - For details, please refer to :func:`mindspore.ops.bessel_i0`. + Alias for :func:`mindspore.ops.bessel_i0` . """ return bessel_i0(x) @@ -2126,8 +2121,7 @@ def acos(x): def arccos(x): """ - Alias for ops.acos(). - For details, please refer to :func:`mindspore.ops.acos`. + Alias for :func:`mindspore.ops.acos` . """ return acos(x) @@ -2227,8 +2221,6 @@ def cosh(x): def tanh(input_x): r""" - Tanh activation function. - Computes hyperbolic tangent of input element-wise. The Tanh function is defined as: .. math:: diff --git a/mindspore/python/mindspore/ops/function/nn_func.py b/mindspore/python/mindspore/ops/function/nn_func.py index 5cbf2c9b41c..36be6912e22 100644 --- a/mindspore/python/mindspore/ops/function/nn_func.py +++ b/mindspore/python/mindspore/ops/function/nn_func.py @@ -49,8 +49,6 @@ sigmoid_ = NN_OPS.Sigmoid() def adaptive_avg_pool2d(input_x, output_size): r""" - 2D adaptive average pooling for temporal data. - This operator applies a 2D adaptive average pooling to an input signal composed of multiple input planes. That is, for any input size, the size of the specified output is H x W. The number of output features is equal to the number of input features. @@ -139,8 +137,6 @@ def adaptive_avg_pool2d(input_x, output_size): def adaptive_avg_pool3d(input_x, output_size): r""" - 3D adaptive average pooling for temporal data. - This operator applies a 3D adaptive average pooling to an input signal composed of multiple input planes. That is, for any input size, the size of the specified output is :math:`(D, H, W)`. The number of output features is equal to the number of input planes. @@ -550,8 +546,6 @@ def avg_pool3d(input_x, kernel_size=1, stride=1, padding=0, ceil_mode=False, cou def adaptive_max_pool2d(input_x, output_size, return_indices=False): r""" - adaptive_max_pool2d operation. - This operator applies a 2D adaptive max pooling to an input signal composed of multiple input planes. That is, for any input size, the size of the specified output is H x W. The number of output features is equal to the number of input planes. @@ -1351,9 +1345,8 @@ def _check_float_range_inc_right(arg_value, lower_limit, upper_limit, arg_name=N def fractional_max_pool2d(input_x, kernel_size, output_size=None, output_ratio=None, return_indices=False, _random_samples=None): r""" - 2D fractional max pooling operation for temporal data. - - Applies a 2D fractional max pooling to an input signal composed of multiple input planes. + Applies a 2D fractional max pooling to an input signal. + The input is composed of multiple input planes. The max-pooling operation is applied in kH × kW regions by a stochastic step size determined by the target output size. For any input size, the size of the specified output is H x W. The number of output features is equal to the number of input planes. @@ -1453,9 +1446,8 @@ def fractional_max_pool2d(input_x, kernel_size, output_size=None, output_ratio=N def fractional_max_pool3d(input_x, kernel_size, output_size=None, output_ratio=None, return_indices=False, _random_samples=None): r""" - 3D fractional max pooling operation for temporal data. - - This operator applies a 3D fractional max pooling over an input signal composed of several input planes. + This operator applies a 3D fractional max pooling over an input signal. 
+ The input is composed of several input planes. The max-pooling operation is applied in kD x kH x kW regions by a stochastic step size determined by the target output size. The number of output features is equal to the number of input planes. @@ -1868,8 +1860,6 @@ def is_floating_point(x): def hardswish(x): r""" - Hard swish activation function. - Applies hswish-type activation element-wise. The input is a Tensor with any valid shape. Hard swish is defined as: @@ -2094,8 +2084,6 @@ def softsign(x): def softmax(x, axis=-1): r""" - Softmax operation. - Applies the Softmax operation to the input tensor on the specified axis. Suppose a slice in the given axis :math:`x`, then for each element :math:`x_i`, the Softmax function is shown as follows: @@ -2260,8 +2248,6 @@ def selu(input_x): def sigmoid(input_x): r""" - Sigmoid activation function. - Computes Sigmoid of input element-wise. The Sigmoid function is defined as: .. math:: @@ -2294,8 +2280,6 @@ def sigmoid(input_x): def logsigmoid(x): r""" - Logsigmoid activation function. - Applies logsigmoid activation element-wise. The input is a Tensor with any valid shape. Logsigmoid is defined as: @@ -3330,8 +3314,6 @@ def intopk(x1, x2, k): def log_softmax(logits, axis=-1): r""" - Log Softmax activation function. - Applies the Log Softmax function to the input tensor on the specified axis. Supposes a slice in the given axis, :math:`x` for each element :math:`x_i`, the Log Softmax function is shown as follows: @@ -4048,9 +4030,8 @@ def conv3d_transpose(inputs, weight, pad_mode='valid', padding=0, stride=1, dila def conv2d(inputs, weight, pad_mode="valid", padding=0, stride=1, dilation=1, group=1): r""" - 2D convolution layer. - - Applies a 2D convolution over an input tensor which is typically of shape :math:`(N, C_{in}, H_{in}, W_{in})`, + Applies a 2D convolution over an input tensor. + The input tensor is typically of shape :math:`(N, C_{in}, H_{in}, W_{in})`, where :math:`N` is batch size, :math:`C` is channel number, :math:`H` is height, :math:`W` is width, :math:`X_i` is the :math:`i^{th}` input value and :math:`b_i` indicates the deviation value of the :math:`i^{th}` input value. For each batch of shape :math:`(C_{in}, H_{in}, W_{in})`, the formula is defined as: @@ -4187,8 +4168,6 @@ def hardsigmoid(input_x): def adaptive_avg_pool1d(input_x, output_size): r""" - 1D adaptive average pooling for temporal data. - Applies a 1D adaptive average pooling over an input Tensor which can be regarded as a composition of 1D input planes. @@ -4272,8 +4251,6 @@ def _check_adaptive_max_pool1d_output_size(output_size): def adaptive_max_pool1d(input_x, output_size): r""" - 1D adaptive maximum pooling for temporal data. - Applies a 1D adaptive maximum pooling over an input Tensor which can be regarded as a composition of 1D input planes. @@ -4509,9 +4486,7 @@ def binary_cross_entropy(logits, labels, weight=None, reduction='mean'): def conv3d(inputs, weight, pad_mode="valid", padding=0, stride=1, dilation=1, group=1): r""" - 3D convolution layer. - - Applies a 3D convolution over an input tensor which is typically of shape + Applies a 3D convolution over an input tensor. The input tensor is typically of shape :math:`(N, C_{in}, D_{in}, H_{in}, W_{in})` and output shape :math:`(N, C_{out}, D_{out}, H_{out}, W_{out})`. Where :math:`N` is batch size, :math:`C` is channel number, :math:`D` is depth, :math:`H` is height, :math:`W` is width.
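As a quick shape check for the reworded conv2d/conv3d docstrings above, a minimal doctest-style sketch (the format MindSpore docstring examples use); it assumes only the functional signature shown in this diff, ops.conv2d(inputs, weight, pad_mode="valid", ...), and the concrete sizes (1, 3, 8, 8) and (4, 3, 3, 3) are illustrative:

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore import Tensor, ops
>>> # Input (N, C_in, H_in, W_in) and weight (C_out, C_in, kH, kW),
>>> # matching the shape conventions in the docstring.
>>> x = Tensor(np.ones((1, 3, 8, 8)), ms.float32)
>>> w = Tensor(np.ones((4, 3, 3, 3)), ms.float32)
>>> # 'valid' mode with stride 1: H_out = H_in - kH + 1 = 6, likewise W_out = 6.
>>> y = ops.conv2d(x, w, pad_mode="valid")
>>> print(y.shape)
(1, 4, 6, 6)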
@@ -4647,8 +4622,6 @@ def _check_positive_int(arg_value, arg_name=None, prim_name=None): def pixel_shuffle(x, upscale_factor): r""" - pixel_shuffle operatrion. - Applies a pixel_shuffle operation over an input signal composed of several input planes. This is useful for implementing efficient sub-pixel convolution with a stride of :math:`1/r`. For more details, refer to `Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network @@ -4705,8 +4678,6 @@ def pixel_unshuffle(x, downscale_factor): r""" - pixel_unshuffle operatrion. - Applies a pixel_unshuffle operation over an input signal composed of several input planes. For more details, refer to `Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network <https://arxiv.org/abs/1609.05158>`_ . @@ -5147,8 +5118,6 @@ def channel_shuffle(x, groups): def lp_pool1d(x, norm_type, kernel_size, stride=None, ceil_mode=False): r""" - LPPool1d pooling operation. - Applies a 1D power lp pooling over an input signal composed of several input planes. Typically the input is of shape :math:`(N, C, L_{in})` or :math:`(C, L_{in})`, the output is of shape @@ -5230,8 +5199,6 @@ def lp_pool1d(x, norm_type, kernel_size, stride=None, ceil_mode=False): def lp_pool2d(x, norm_type, kernel_size, stride=None, ceil_mode=False): r""" - LPPool2d pooling operation. - Applies a 2D power lp pooling over an input signal composed of several input planes. Typically the input is of shape :math:`(N, C, H_{in}, W_{in})`, the output is of shape diff --git a/mindspore/python/mindspore/ops/function/sparse_unary_func.py b/mindspore/python/mindspore/ops/function/sparse_unary_func.py index d3eeecae124..6f226fdcb9d 100755 --- a/mindspore/python/mindspore/ops/function/sparse_unary_func.py +++ b/mindspore/python/mindspore/ops/function/sparse_unary_func.py @@ -1622,8 +1622,6 @@ def coo_round(x: COOTensor) -> COOTensor: def csr_tanh(x: CSRTensor) -> CSRTensor: r""" - Tanh activation function. - Computes hyperbolic tangent of input element-wise. The Tanh function is defined as: .. math:: @@ -1662,8 +1660,6 @@ def csr_tanh(x: CSRTensor) -> CSRTensor: def coo_tanh(x: COOTensor) -> COOTensor: r""" - Tanh activation function. - Computes hyperbolic tangent of input element-wise. The Tanh function is defined as: .. math:: diff --git a/mindspore/python/mindspore/ops/operations/array_ops.py b/mindspore/python/mindspore/ops/operations/array_ops.py index dc124c104d4..b199d20d016 100755 --- a/mindspore/python/mindspore/ops/operations/array_ops.py +++ b/mindspore/python/mindspore/ops/operations/array_ops.py @@ -1951,9 +1951,7 @@ class ArgminV2(Primitive): class ArgMaxWithValue(Primitive): """ - Calculates the maximum value with the corresponding index. - - Calculates the maximum value along with the given axis for the input tensor. It returns the maximum values and + Calculates the maximum value along with the given axis for the input tensor, and returns the maximum values and indices. Note: @@ -2014,8 +2012,6 @@ class ArgMaxWithValue(Primitive): class ArgMinWithValue(Primitive): """ - Calculates the minimum value with corresponding index, and returns indices and values. - Calculates the minimum value along with the given axis for the input tensor, and returns the minimum values and indices.
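The ArgMaxWithValue/ArgMinWithValue rewording above is easiest to verify with a small doctest-style sketch; the (index, value) output order below follows the primitive's published example, but treat it as an assumption to check against the target MindSpore version:

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]), ms.float32)
>>> # Maximum along axis 0: the index of the largest element first, then its
>>> # value (ordering assumed from the primitive's documented example).
>>> index, value = ops.ArgMaxWithValue(axis=0)(x)
>>> print(index, value)
3 0.7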
diff --git a/mindspore/python/mindspore/ops/operations/nn_ops.py b/mindspore/python/mindspore/ops/operations/nn_ops.py index d43c39902a4..e78675c4dcb 100644 --- a/mindspore/python/mindspore/ops/operations/nn_ops.py +++ b/mindspore/python/mindspore/ops/operations/nn_ops.py @@ -924,8 +924,6 @@ class HSigmoid(Primitive): class Tanh(Primitive): r""" - Tanh activation function. - Computes hyperbolic tangent of input element-wise. Refer to :func:`mindspore.ops.tanh` for more details. @@ -1893,8 +1891,6 @@ class MaxPoolWithArgmax(Primitive): class MaxPool3D(Primitive): r""" - 3D max pooling operation. - Applies a 3D max pooling over an input Tensor which can be regarded as a composition of 3D planes. Typically the input is of shape :math:`(N_{in}, C_{in}, D_{in}, H_{in}, W_{in})`, MaxPool outputs @@ -7529,7 +7525,6 @@ class AvgPool3D(Primitive): r""" 3D Average pooling operation. - Applies a 3D average pooling over an input Tensor which can be regarded as a composition of 3D input planes. Typically the input is of shape :math:`(N, C, D_{in}, H_{in}, W_{in})`, AvgPool3D outputs regional average in the :math:`(D_{in}, H_{in}, W_{in})`-dimension. Given kernel size :math:`ks = (d_{ker}, h_{ker}, w_{ker})` and stride :math:`s = (s_0, s_1, s_2)`, the operation is as follows. @@ -9108,8 +9103,6 @@ class FractionalMaxPool(Primitive): class FractionalMaxPool3DWithFixedKsize(Primitive): r""" - 3D fractional max pooling operation. - This operator applies a 3D fractional max pooling over an input signal composed of several input planes. The max-pooling operation is applied in kD x kH x kW regions by a stochastic step size determined by the target output size. @@ -10172,8 +10165,6 @@ class GLU(Primitive): class FractionalMaxPoolWithFixedKsize(Primitive): r""" - Fractional max pooling operation. - Applies a 2D fractional max pooling to an input signal composed of multiple input planes. The max-pooling operation is applied in kH × kW regions by a stochastic step size determined by the target output size. For any input size, the size of the specified output is H x W. The number