forked from mindspore-Ecosystem/mindspore
!42842 modify the files on webpage in 1.9
Merge pull request !42842 from 宦晓玲/code_docs_00924
commit 779278dbbf
@ -1,38 +1,6 @@
mindspore.ops.functional
=============================

The functional operators are initialized Primitives and can be used directly as functions. An example of using a functional operator is as follows:

.. code-block:: python

    from mindspore import Tensor, ops
    from mindspore import dtype as mstype

    input_x = Tensor(-1, mstype.int32)
    input_dict = {'x': 1, 'y': 2}

    result_abs = ops.absolute(input_x)
    print(result_abs)

    result_in_dict = ops.in_dict('x', input_dict)
    print(result_in_dict)

    result_not_in_dict = ops.not_in_dict('x', input_dict)
    print(result_not_in_dict)

    result_isconstant = ops.isconstant(input_x)
    print(result_isconstant)

    result_typeof = ops.typeof(input_x)
    print(result_typeof)

    # outputs:
    # 1
    # True
    # False
    # True
    # Tensor[Int32]

Neural Network Layer Functions
------------------------------
@ -210,45 +178,6 @@ The functional operators are initialized Primitives and can be used directly as functions
    mindspore.ops.xdivy
    mindspore.ops.xlogy

.. list-table::
    :widths: 50 50
    :header-rows: 1

    * - functional
      - Description
    * - mindspore.ops.absolute
      - `absolute` will be deprecated in the future. Please use `mindspore.ops.abs` instead.
    * - mindspore.ops.floordiv
      - `floordiv` will be deprecated in the future. Please use `mindspore.ops.floor_div` instead.
    * - mindspore.ops.floormod
      - `floormod` will be deprecated in the future. Please use `mindspore.ops.floor_mod` instead.
    * - mindspore.ops.neg_tensor
      - `neg_tensor` will be deprecated in the future. Please use `mindspore.ops.neg` instead.
    * - mindspore.ops.pows
      - `pows` will be deprecated in the future. Please use `mindspore.ops.pow` instead.
    * - mindspore.ops.sqrt
      - Refer to :class:`mindspore.ops.Sqrt`.
    * - mindspore.ops.square
      - Refer to :class:`mindspore.ops.Square`.
    * - mindspore.ops.tensor_add
      - `tensor_add` will be deprecated in the future. Please use `mindspore.ops.add` instead.
    * - mindspore.ops.tensor_div
      - `tensor_div` will be deprecated in the future. Please use `mindspore.ops.div` instead.
    * - mindspore.ops.tensor_exp
      - `tensor_exp` will be deprecated in the future. Please use `mindspore.ops.exp` instead.
    * - mindspore.ops.tensor_expm1
      - `tensor_expm1` will be deprecated in the future. Please use `mindspore.ops.expm1` instead.
    * - mindspore.ops.tensor_floordiv
      - `tensor_floordiv` will be deprecated in the future. Please use `mindspore.ops.floor_div` instead.
    * - mindspore.ops.tensor_mod
      - `tensor_mod` will be deprecated in the future. Please use `mindspore.ops.floor_mod` instead.
    * - mindspore.ops.tensor_mul
      - `tensor_mul` will be deprecated in the future. Please use `mindspore.ops.mul` instead.
    * - mindspore.ops.tensor_pow
      - `tensor_pow` will be deprecated in the future. Please use `mindspore.ops.pow` instead.
    * - mindspore.ops.tensor_sub
      - `tensor_sub` will be deprecated in the future. Please use `mindspore.ops.sub` instead.

Reduction Functions
^^^^^^^^^^^^^^^^^^^
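The deprecation pattern listed in the table above (an old functional alias that forwards to its preferred replacement) can be sketched in plain Python. `absolute` and `abs_impl` below are illustrative stand-ins, not MindSpore's actual implementation:

```python
import warnings

def abs_impl(x):
    """Stand-in for the preferred function (e.g. mindspore.ops.abs)."""
    return x if x >= 0 else -x

def absolute(x):
    """Deprecated alias: warns once per call, then forwards to the replacement."""
    warnings.warn("`absolute` will be deprecated in the future. "
                  "Please use `abs` instead.", DeprecationWarning)
    return abs_impl(x)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = absolute(-1)

print(result)                                     # 1
print(caught[0].category is DeprecationWarning)   # True
```

The alias keeps old call sites working while steering users toward the new name; removing the alias later is then a one-line change.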
@ -316,16 +245,6 @@ Reduction Functions
      - Refer to :class:`mindspore.ops.IsInstance`.
    * - mindspore.ops.issubclass\_
      - Refer to :class:`mindspore.ops.IsSubClass`.
    * - mindspore.ops.not_equal
      - `not_equal` will be deprecated in the future. Please use `mindspore.ops.ne` instead.
    * - mindspore.ops.tensor_ge
      - `tensor_ge` will be deprecated in the future. Please use `mindspore.ops.ge` instead.
    * - mindspore.ops.tensor_gt
      - `tensor_gt` will be deprecated in the future. Please use `mindspore.ops.gt` instead.
    * - mindspore.ops.tensor_le
      - `tensor_le` will be deprecated in the future. Please use `mindspore.ops.le` instead.
    * - mindspore.ops.tensor_lt
      - `tensor_lt` will be deprecated in the future. Please use `mindspore.ops.less` instead.

Linear Algebraic Functions
^^^^^^^^^^^^^^^^^^^^^^^^^^
@ -397,6 +316,7 @@ Array Operation
    mindspore.ops.broadcast_to
    mindspore.ops.col2im
    mindspore.ops.concat
    mindspore.ops.count_nonzero
    mindspore.ops.diag
    mindspore.ops.dyn_shape
    mindspore.ops.expand_dims
@ -474,8 +394,6 @@ Array Operation
      - Refer to :class:`mindspore.ops.StridedSlice`.
    * - mindspore.ops.tensor_scatter_update
      - Refer to :class:`mindspore.ops.TensorScatterUpdate`.
    * - mindspore.ops.tensor_slice
      - `tensor_slice` will be deprecated in the future. Please use `mindspore.ops.slice` instead.

Type Conversion
^^^^^^^^^^^^^^^
@ -566,6 +484,15 @@ Parameter Operation Functions
    mindspore.ops.print_

Image Functions
---------------

.. msplatformautosummary::
    :toctree: ops
    :nosignatures:
    :template: classtemplate.rst

    mindspore.ops.iou

Other Functions
---------------
@ -646,8 +573,4 @@ Other Functions
    :nosignatures:
    :template: classtemplate.rst

    mindspore.ops.arange
    mindspore.ops.core
    mindspore.ops.count_nonzero
    mindspore.ops.iou
@ -48,20 +48,15 @@ Compared with the previous version, the added, deleted, and support-changed `mindspore.ops` interfaces in MindSpore
    mindspore.ops.AvgPool
    mindspore.ops.AvgPool3D
    mindspore.ops.BasicLSTMCell
    mindspore.ops.BatchNorm
    mindspore.ops.Conv2D
    mindspore.ops.Conv2DBackpropInput
    mindspore.ops.Conv2DTranspose
    mindspore.ops.Conv3D
    mindspore.ops.Conv3DTranspose
    mindspore.ops.CTCGreedyDecoder
    mindspore.ops.DepthwiseConv2dNative
    mindspore.ops.Dropout
    mindspore.ops.Dropout2D
    mindspore.ops.Dropout3D
    mindspore.ops.DropoutDoMask
    mindspore.ops.DropoutGenMask
    mindspore.ops.DynamicGRUV2
    mindspore.ops.DynamicRNN
    mindspore.ops.Flatten
@ -118,7 +113,6 @@ Compared with the previous version, the added, deleted, and support-changed `mindspore.ops` interfaces in MindSpore
    mindspore.ops.PReLU
    mindspore.ops.ReLU
    mindspore.ops.ReLU6
    mindspore.ops.ReLUV2
    mindspore.ops.SeLU
    mindspore.ops.Sigmoid
    mindspore.ops.Softmax
@ -136,7 +130,6 @@ Compared with the previous version, the added, deleted, and support-changed `mindspore.ops` interfaces in MindSpore
    :template: classtemplate.rst

    mindspore.ops.Adam
    mindspore.ops.AdamNoUpdateParam
    mindspore.ops.AdamWeightDecay
    mindspore.ops.AdaptiveAvgPool2D
    mindspore.ops.ApplyAdadelta
@ -153,10 +146,6 @@ Compared with the previous version, the added, deleted, and support-changed `mindspore.ops` interfaces in MindSpore
    mindspore.ops.ApplyProximalAdagrad
    mindspore.ops.ApplyProximalGradientDescent
    mindspore.ops.ApplyRMSProp
    mindspore.ops.FusedSparseAdam
    mindspore.ops.FusedSparseFtrl
    mindspore.ops.FusedSparseLazyAdam
    mindspore.ops.FusedSparseProximalAdagrad
    mindspore.ops.LARSUpdate
    mindspore.ops.SparseApplyAdagrad
    mindspore.ops.SparseApplyAdagradV2
@ -390,7 +379,6 @@ Tensor Creation
    mindspore.ops.Gamma
    mindspore.ops.Multinomial
    mindspore.ops.Poisson
    mindspore.ops.RandomCategorical
    mindspore.ops.RandomChoiceWithMask
    mindspore.ops.Randperm
@ -417,7 +405,6 @@ Array Operation
    mindspore.ops.DataFormatDimMap
    mindspore.ops.DepthToSpace
    mindspore.ops.DType
    mindspore.ops.DynamicShape
    mindspore.ops.ExpandDims
    mindspore.ops.FloatStatus
    mindspore.ops.Gather
@ -446,12 +433,10 @@ Array Operation
    mindspore.ops.Size
    mindspore.ops.Slice
    mindspore.ops.Sort
    mindspore.ops.SpaceToBatch
    mindspore.ops.SpaceToBatchND
    mindspore.ops.SpaceToDepth
    mindspore.ops.SparseGatherV2
    mindspore.ops.Split
    mindspore.ops.SplitV
    mindspore.ops.Squeeze
    mindspore.ops.Stack
    mindspore.ops.StridedSlice
@ -572,7 +557,7 @@ Parameter Operation Operators
    mindspore.ops.SparseTensorDenseMatmul
    mindspore.ops.SparseToDense

Other Operators
Frame Operators
---------------

.. mscnplatformautosummary::
@ -3,7 +3,7 @@ mindspore.nn.AdaptiveAvgPool2d

.. py:class:: mindspore.nn.AdaptiveAvgPool2d(output_size)

    2-D adaptive average pooling.
    Two-dimensional adaptive average pooling.

    Applies 2D adaptive average pooling to an input Tensor. That is, for any input size, the specified output size is H * W, while the number of input and output features does not change.
@ -3,7 +3,7 @@ mindspore.nn.AdaptiveAvgPool3d

.. py:class:: mindspore.nn.AdaptiveAvgPool3d(output_size)

    3-D adaptive average pooling.
    Three-dimensional adaptive average pooling.

    Applies 3D adaptive average pooling to an input Tensor. That is, for any input size, the specified output size is :math:`(D, H, W)`, while the number of input and output features does not change.
@ -3,7 +3,7 @@ mindspore.nn.AdaptiveMaxPool3d

.. py:class:: mindspore.nn.AdaptiveMaxPool3d(output_size, return_indices=False)

    3-D adaptive max pooling.
    Three-dimensional adaptive max pooling.

    For any input size, the output size is :math:`(D, H, W)`. The number of output features is equal to the number of input features.
@ -3,6 +3,6 @@

.. py:class:: mindspore.ops.AdaptiveAvgPool2D(output_size)

    2-D adaptive average pooling.
    Two-dimensional adaptive average pooling.

    For more details, see :func:`mindspore.ops.adaptive_avg_pool2d`.
@ -3,6 +3,6 @@ mindspore.ops.AdaptiveMaxPool3D

.. py:class:: mindspore.ops.AdaptiveMaxPool3D(output_size)

    3-D adaptive max pooling.
    Three-dimensional adaptive max pooling.

    For more details, see :func:`mindspore.ops.adaptive_max_pool3d`.
@ -3,7 +3,7 @@ mindspore.ops.Conv3D

.. py:class:: mindspore.ops.Conv3D(out_channel, kernel_size, mode=1, stride=1, pad_mode='valid', pad=0, dilation=1, group=1, data_format='NCDHW')

    3-D convolution.
    Three-dimensional convolution.

    Applies 3D convolution to an input Tensor. The input Tensor typically has shape :math:`(N, C_{in}, D_{in}, H_{in}, W_{in})` and the output has shape :math:`(N, C_{out}, D_{out}, H_{out}, W_{out})`, where :math:`N` is the batch size, :math:`C` is the number of channels, and :math:`D`, :math:`H`, :math:`W` are the depth, height, and width of the feature maps, respectively. The formula is defined as follows:
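The convolution formula described above can be illustrated in one dimension. This plain-Python sketch computes a "valid" cross-correlation (the deep-learning convention for convolution); it is an illustrative toy, not MindSpore's actual kernel:

```python
def conv1d_valid(x, w):
    """out[i] = sum_j x[i + j] * w[j]  (cross-correlation, 'valid' padding)."""
    k = len(w)
    return [sum(x[i + j] * w[j] for j in range(k))
            for i in range(len(x) - k + 1)]

# A difference filter [1, 0, -1] applied to [1, 2, 3, 4]:
print(conv1d_valid([1, 2, 3, 4], [1, 0, -1]))  # [-2, -2]
```

The 3D case is the same sum, taken jointly over the depth, height, width, and input-channel axes for each output position.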
@ -3,7 +3,7 @@ mindspore.ops.MaxPool3DWithArgmax

.. py:class:: mindspore.ops.MaxPool3DWithArgmax(ksize, strides, pads, dilation=(1, 1, 1), ceil_mode=False, data_format="NCDHW", argmax_type=mstype.int64)

    3-D max pooling that returns both the maximum values and their indices.
    Three-dimensional max pooling that returns both the maximum values and their indices.

    The input is a Tensor of shape :math:`(N_{in}, C_{in}, D_{in}, H_{in}, W_{in})`, and the output is the maximum over the :math:`(D_{in}, H_{in}, W_{in})` dimensions. Given `ksize` :math:`ks = (d_{ker}, h_{ker}, w_{ker})` and `strides` :math:`s = (s_0, s_1, s_2)`, the operation is as follows:
@ -3,4 +3,6 @@ mindspore.ops.Round

.. py:class:: mindspore.ops.Round

    Returns the integer value closest to each element of the input Tensor, element-wise.

    For more details, see :func:`mindspore.ops.round`.
@ -3,7 +3,7 @@ mindspore.ops.adaptive_avg_pool2d

.. py:function:: mindspore.ops.adaptive_avg_pool2d(input_x, output_size)

    2-D adaptive average pooling.
    Two-dimensional adaptive average pooling.

    Applies 2D adaptive average pooling to an input Tensor. That is, for any input size, the specified output size is H * W, while the number of input and output features does not change.
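The adaptive behavior described above (any input size, fixed output size) can be sketched in one dimension in plain Python. The floor/ceil bin-boundary rule below is the common convention for adaptive pooling and is used here as an illustrative assumption, not as MindSpore's exact kernel:

```python
import math

def adaptive_avg_pool1d(values, output_size):
    """Average over output_size bins that together cover the input exactly."""
    n = len(values)
    out = []
    for i in range(output_size):
        start = (i * n) // output_size            # floor(i * n / output_size)
        end = math.ceil((i + 1) * n / output_size)
        window = values[start:end]
        out.append(sum(window) / len(window))
    return out

print(adaptive_avg_pool1d([1, 2, 3, 4, 5, 6], 3))  # [1.5, 3.5, 5.5]
print(adaptive_avg_pool1d([1, 2, 3, 4, 5], 2))     # [2.0, 4.0]
```

Note that the output length is always `output_size` regardless of the input length, which is the defining property of the adaptive variant; the 2D case applies the same rule independently along H and W.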
@ -3,7 +3,7 @@ mindspore.ops.adaptive_avg_pool3d

.. py:function:: mindspore.ops.adaptive_avg_pool3d(input_x, output_size)

    3-D adaptive average pooling.
    Three-dimensional adaptive average pooling.

    Applies 3D adaptive average pooling to an input Tensor, i.e., for any input size, the specified output size is :math:`(D, H, W)`, while the number of input and output features does not change.
@ -3,7 +3,7 @@ mindspore.ops.adaptive_max_pool3d

.. py:function:: mindspore.ops.adaptive_max_pool3d(x, output_size, return_indices=False)

    3-D adaptive max pooling.
    Three-dimensional adaptive max pooling.

    For any input size, the output size is :math:`(D, H, W)`, where the number of output features is equal to the number of input features.
@ -3,7 +3,7 @@ mindspore.ops.isclose

.. py:function:: mindspore.ops.isclose(x1, x2, rtol=1e-05, atol=1e-08, equal_nan=False)

    Returns a boolean Tensor indicating whether each element of `x1` is "close" to the corresponding element of `x2` within the given tolerance, where "close" is defined by the formula:
    Returns a boolean Tensor indicating whether each element of `x1` is "close" to the corresponding element of `x2` within the given tolerance. Here "close" is defined by the formula:

    .. math::
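As a plain-Python illustration of the closeness test above (not the MindSpore implementation), a pair of elements counts as "close" under the usual rule |x1 - x2| &lt;= atol + rtol * |x2|, with NaN handled separately via `equal_nan`:

```python
def is_close(a, b, rtol=1e-05, atol=1e-08, equal_nan=False):
    """Scalar 'closeness' test: |a - b| <= atol + rtol * |b|."""
    if a != a or b != b:                      # x != x is True only for NaN
        return equal_nan and a != a and b != b
    return abs(a - b) <= atol + rtol * abs(b)

print(is_close(1.0, 1.0 + 1e-9))                              # True
print(is_close(1.0, 1.1))                                     # False
print(is_close(float("nan"), float("nan"), equal_nan=True))   # True
```

Note the asymmetry: the relative term scales with |x2|, so `is_close(a, b)` and `is_close(b, a)` can differ for large tolerances.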
@ -3,7 +3,7 @@ mindspore.ops.max_pool3d

.. py:function:: mindspore.ops.max_pool3d(x, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False)

    3-D max pooling.
    Three-dimensional max pooling.

    The input is a Tensor of shape :math:`(N_{in}, C_{in}, D_{in}, H_{in}, W_{in})`, and the output is the maximum over the :math:`(D_{in}, H_{in}, W_{in})` dimensions. Given `kernel_size` :math:`ks = (d_{ker}, h_{ker}, w_{ker})` and `stride` :math:`s = (s_0, s_1, s_2)`, the operation is as follows:
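The kernel/stride rule above can be sketched in one dimension in plain Python; this toy also returns the argmax indices, as `return_indices=True` does. It is an illustration of the sliding-window idea, not MindSpore's kernel:

```python
def max_pool1d_with_argmax(values, kernel_size, stride):
    """For each window, return (max value, index of that max in the input)."""
    maxima, indices = [], []
    for start in range(0, len(values) - kernel_size + 1, stride):
        window = values[start:start + kernel_size]
        m = max(window)
        maxima.append(m)
        indices.append(start + window.index(m))
    return maxima, indices

vals, idx = max_pool1d_with_argmax([1, 3, 2, 5, 4, 0], kernel_size=2, stride=2)
print(vals)  # [3, 5, 4]
print(idx)   # [1, 3, 4]
```

Window `w` starts at input position `w * stride`, matching the :math:`s` term in the formula; the 3D case does the same over depth, height, and width jointly.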
@ -1,38 +1,6 @@
mindspore.ops.functional
=============================

The functional operators are initialized Primitives and can be used directly as functions. An example of using a functional operator is as follows:

.. code-block:: python

    from mindspore import Tensor, ops
    from mindspore import dtype as mstype

    input_x = Tensor(-1, mstype.int32)
    input_dict = {'x': 1, 'y': 2}

    result_abs = ops.absolute(input_x)
    print(result_abs)

    result_in_dict = ops.in_dict('x', input_dict)
    print(result_in_dict)

    result_not_in_dict = ops.not_in_dict('x', input_dict)
    print(result_not_in_dict)

    result_isconstant = ops.isconstant(input_x)
    print(result_isconstant)

    result_typeof = ops.typeof(input_x)
    print(result_typeof)

    # outputs:
    # 1
    # True
    # False
    # True
    # Tensor[Int32]

Neural Network Layer Functions
------------------------------
@ -211,45 +179,6 @@ Element-by-Element Operations
    mindspore.ops.xdivy
    mindspore.ops.xlogy

.. list-table::
    :widths: 50 50
    :header-rows: 1

    * - functional
      - Description
    * - mindspore.ops.absolute
      - `absolute` will be deprecated in the future. Please use `mindspore.ops.abs` instead.
    * - mindspore.ops.floordiv
      - `floordiv` will be deprecated in the future. Please use `mindspore.ops.floor_div` instead.
    * - mindspore.ops.floormod
      - `floormod` will be deprecated in the future. Please use `mindspore.ops.floor_mod` instead.
    * - mindspore.ops.neg_tensor
      - `neg_tensor` will be deprecated in the future. Please use `mindspore.ops.neg` instead.
    * - mindspore.ops.pows
      - `pows` will be deprecated in the future. Please use `mindspore.ops.pow` instead.
    * - mindspore.ops.sqrt
      - Refer to :class:`mindspore.ops.Sqrt`.
    * - mindspore.ops.square
      - Refer to :class:`mindspore.ops.Square`.
    * - mindspore.ops.tensor_add
      - `tensor_add` will be deprecated in the future. Please use `mindspore.ops.add` instead.
    * - mindspore.ops.tensor_div
      - `tensor_div` will be deprecated in the future. Please use `mindspore.ops.div` instead.
    * - mindspore.ops.tensor_exp
      - `tensor_exp` will be deprecated in the future. Please use `mindspore.ops.exp` instead.
    * - mindspore.ops.tensor_expm1
      - `tensor_expm1` will be deprecated in the future. Please use `mindspore.ops.expm1` instead.
    * - mindspore.ops.tensor_floordiv
      - `tensor_floordiv` will be deprecated in the future. Please use `mindspore.ops.floor_div` instead.
    * - mindspore.ops.tensor_mod
      - `tensor_mod` will be deprecated in the future. Please use `mindspore.ops.floor_mod` instead.
    * - mindspore.ops.tensor_mul
      - `tensor_mul` will be deprecated in the future. Please use `mindspore.ops.mul` instead.
    * - mindspore.ops.tensor_pow
      - `tensor_pow` will be deprecated in the future. Please use `mindspore.ops.pow` instead.
    * - mindspore.ops.tensor_sub
      - `tensor_sub` will be deprecated in the future. Please use `mindspore.ops.sub` instead.

Reduction Functions
^^^^^^^^^^^^^^^^^^^

.. msplatformautosummary::
@ -316,16 +245,6 @@ Comparison Functions
      - Refer to :class:`mindspore.ops.IsInstance`.
    * - mindspore.ops.issubclass\_
      - Refer to :class:`mindspore.ops.IsSubClass`.
    * - mindspore.ops.not_equal
      - `not_equal` will be deprecated in the future. Please use `mindspore.ops.ne` instead.
    * - mindspore.ops.tensor_ge
      - `tensor_ge` will be deprecated in the future. Please use `mindspore.ops.ge` instead.
    * - mindspore.ops.tensor_gt
      - `tensor_gt` will be deprecated in the future. Please use `mindspore.ops.gt` instead.
    * - mindspore.ops.tensor_le
      - `tensor_le` will be deprecated in the future. Please use `mindspore.ops.le` instead.
    * - mindspore.ops.tensor_lt
      - `tensor_lt` will be deprecated in the future. Please use `mindspore.ops.less` instead.

Linear Algebraic Functions
^^^^^^^^^^^^^^^^^^^^^^^^^^
@ -397,6 +316,7 @@ Array Operation
    mindspore.ops.broadcast_to
    mindspore.ops.col2im
    mindspore.ops.concat
    mindspore.ops.count_nonzero
    mindspore.ops.diag
    mindspore.ops.dyn_shape
    mindspore.ops.expand_dims
@ -474,8 +394,6 @@ Array Operation
      - Refer to :class:`mindspore.ops.StridedSlice`.
    * - mindspore.ops.tensor_scatter_update
      - Refer to :class:`mindspore.ops.TensorScatterUpdate`.
    * - mindspore.ops.tensor_slice
      - `tensor_slice` will be deprecated in the future. Please use `mindspore.ops.slice` instead.

Type Conversion
^^^^^^^^^^^^^^^
@ -566,6 +484,16 @@ Debugging Functions
    mindspore.ops.print_

Image Functions
---------------

.. msplatformautosummary::
    :toctree: ops
    :nosignatures:
    :template: classtemplate.rst

    mindspore.ops.iou

Other Functions
---------------

.. list-table::
@ -644,7 +572,4 @@ Other Functions
    :nosignatures:
    :template: classtemplate.rst

    mindspore.ops.arange
    mindspore.ops.core
    mindspore.ops.count_nonzero
    mindspore.ops.iou
@ -48,20 +48,15 @@ Neural Network
    mindspore.ops.AvgPool
    mindspore.ops.AvgPool3D
    mindspore.ops.BasicLSTMCell
    mindspore.ops.BatchNorm
    mindspore.ops.Conv2D
    mindspore.ops.Conv2DBackpropInput
    mindspore.ops.Conv2DTranspose
    mindspore.ops.Conv3D
    mindspore.ops.Conv3DTranspose
    mindspore.ops.CTCGreedyDecoder
    mindspore.ops.DepthwiseConv2dNative
    mindspore.ops.Dropout
    mindspore.ops.Dropout2D
    mindspore.ops.Dropout3D
    mindspore.ops.DropoutDoMask
    mindspore.ops.DropoutGenMask
    mindspore.ops.DynamicGRUV2
    mindspore.ops.DynamicRNN
    mindspore.ops.Flatten
@ -118,7 +113,6 @@ Activation Function
    mindspore.ops.PReLU
    mindspore.ops.ReLU
    mindspore.ops.ReLU6
    mindspore.ops.ReLUV2
    mindspore.ops.SeLU
    mindspore.ops.Sigmoid
    mindspore.ops.Softmax
@ -136,7 +130,6 @@ Optimizer
    :template: classtemplate.rst

    mindspore.ops.Adam
    mindspore.ops.AdamNoUpdateParam
    mindspore.ops.AdamWeightDecay
    mindspore.ops.AdaptiveAvgPool2D
    mindspore.ops.ApplyAdadelta
@ -153,10 +146,6 @@ Optimizer
    mindspore.ops.ApplyProximalAdagrad
    mindspore.ops.ApplyProximalGradientDescent
    mindspore.ops.ApplyRMSProp
    mindspore.ops.FusedSparseAdam
    mindspore.ops.FusedSparseFtrl
    mindspore.ops.FusedSparseLazyAdam
    mindspore.ops.FusedSparseProximalAdagrad
    mindspore.ops.LARSUpdate
    mindspore.ops.SparseApplyAdagrad
    mindspore.ops.SparseApplyAdagradV2
@ -390,7 +379,6 @@ Random Generation Operator
    mindspore.ops.Gamma
    mindspore.ops.Multinomial
    mindspore.ops.Poisson
    mindspore.ops.RandomCategorical
    mindspore.ops.RandomChoiceWithMask
    mindspore.ops.Randperm
@ -417,7 +405,6 @@ Array Operation
    mindspore.ops.DataFormatDimMap
    mindspore.ops.DepthToSpace
    mindspore.ops.DType
    mindspore.ops.DynamicShape
    mindspore.ops.ExpandDims
    mindspore.ops.FloatStatus
    mindspore.ops.Gather
@ -446,12 +433,10 @@ Array Operation
    mindspore.ops.Size
    mindspore.ops.Slice
    mindspore.ops.Sort
    mindspore.ops.SpaceToBatch
    mindspore.ops.SpaceToBatchND
    mindspore.ops.SpaceToDepth
    mindspore.ops.SparseGatherV2
    mindspore.ops.Split
    mindspore.ops.SplitV
    mindspore.ops.Squeeze
    mindspore.ops.Stack
    mindspore.ops.StridedSlice
@ -570,7 +555,7 @@ Sparse Operator
    mindspore.ops.SparseTensorDenseMatmul
    mindspore.ops.SparseToDense

Other Operators
Frame Operators
---------------

.. msplatformautosummary::