!47163 modify format

Merge pull request !47163 from 俞涵/code_docs_hm1222

Commit 9f6e5709fa
@@ -67,6 +67,7 @@ MindSpore中 `mindspore.ops` 接口与上一版本相比,新增、删除和支
     mindspore.ops.FractionalMaxPool
     mindspore.ops.FractionalMaxPoolWithFixedKsize
     mindspore.ops.FractionalMaxPool3DWithFixedKsize
+    mindspore.ops.GridSampler2D
     mindspore.ops.GridSampler3D
     mindspore.ops.LayerNorm
     mindspore.ops.LRN

@@ -196,7 +197,6 @@ MindSpore中 `mindspore.ops` 接口与上一版本相比,新增、删除和支
     :template: classtemplate.rst

     mindspore.ops.ComputeAccidentalHits
-    mindspore.ops.GridSampler2D
     mindspore.ops.LogUniformCandidateSampler
     mindspore.ops.UniformCandidateSampler
     mindspore.ops.UpsampleNearest3D
@@ -13,8 +13,10 @@ mindspore.load
     - **dec_key** (bytes) - 用于解密的字节类型密钥。有效长度为 16、24 或 32。
     - **dec_mode** (Union[str, function]) - 指定解密模式,设置dec_key时生效。可选项:'AES-GCM' | 'SM4-CBC' | 'AES-CBC' | 自定义解密函数。默认值:"AES-GCM"。
+
+      - 关于使用自定义解密加载的详情,请查看 `教程 <https://www.mindspore.cn/mindarmour/docs/zh-CN/master/model_encrypt_protection.html>`_。
-    - **obf_func** (function) - 导入混淆模型所需要的函数,可以参考 `obfuscate_model() <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore/mindspore.obfuscate_model.html>` 了解详情。
+    - **obf_func** (function) - 导入混淆模型所需要的函数,可以参考 `obfuscate_model() <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore/mindspore.obfuscate_model.html>`_ 了解详情。

 返回:
     GraphCell,一个可以由 `GraphCell` 构成的可执行的编译图。
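Most of the reStructuredText fixes in this commit follow one pattern: an external link written as `` `text <url>` `` is only interpreted text, while the trailing underscore in `` `text <url>`_ `` is what turns it into a hyperlink reference. A minimal before/after sketch of the fix, using the URL from the hunk above:

```rst
.. broken: renders as literal interpreted text, no hyperlink
`obfuscate_model() <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore/mindspore.obfuscate_model.html>`

.. fixed: the trailing underscore makes it an embedded-URI hyperlink
`obfuscate_model() <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore/mindspore.obfuscate_model.html>`_
```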
@@ -3,7 +3,7 @@ mindspore.load_distributed_checkpoint

 .. py:function:: mindspore.load_distributed_checkpoint(network, checkpoint_filenames, predict_strategy=None, train_strategy_filename=None, strict_load=False, dec_key=None, dec_mode='AES-GCM')

-    给分布式预测加载checkpoint文件到网络。用于分布式推理。关于分布式推理的细节,请参考:https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/distributed_inference.html。
+    给分布式预测加载checkpoint文件到网络。用于分布式推理。关于分布式推理的细节,请参考: `分布式推理 <https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/distributed_inference.html>`_ 。

 参数:
     - **network** (Cell) - 分布式预测网络。

@@ -3,4 +3,4 @@ mindspore.reset_ps_context

 .. py:function:: mindspore.reset_ps_context()

-    将参数服务器训练模式上下文中的属性重置为默认值。各字段的含义及其默认值见'set_ps_context'接口。
+    将参数服务器训练模式上下文中的属性重置为默认值。各字段的含义及其默认值见 :func:`mindspore.set_ps_context` 接口。

@@ -3,7 +3,7 @@ mindspore.nn.GetNextSingleOp

 .. py:class:: mindspore.nn.GetNextSingleOp(dataset_types, dataset_shapes, queue_name)

-    用于获取下一条数据的Cell。更详细的信息请参考 `mindspore.ops.GetNext` 。
+    用于获取下一条数据的Cell。更详细的信息请参考 :class:`mindspore.ops.GetNext` 。

 参数:
     - **dataset_types** (list[:class:`mindspore.dtype`]) - 数据集类型。

@@ -10,7 +10,7 @@ mindspore.nn.GraphCell

 参数:
     - **graph** (FuncGraph) - 从MindIR加载的编译图。
     - **params_init** (dict) - 需要在图中初始化的参数。key为参数名称,类型为字符串,value为 Tensor 或 Parameter。如果参数名在图中已经存在,则更新其值;如果不存在,则忽略。默认值:None。
-    - **obf_password** (int) - 用于动态混淆保护的password。动态混淆是一种模型保护方法,可以参考 :func:`mindspore.train.serialization.obfuscate_model` 。如果导入的 `graph` 是一个经过混淆的模型,那么 `obf_password` 应该要提供。 `obf_password` 的取值范围是(0, 9223372036854775807]。默认值:None。
+    - **obf_password** (int) - 用于动态混淆保护的password。动态混淆是一种模型保护方法,可以参考 :func:`mindspore.obfuscate_model` 。如果导入的 `graph` 是一个经过混淆的模型,那么 `obf_password` 应该要提供。 `obf_password` 的取值范围是(0, 9223372036854775807]。默认值:None。

 异常:
     - **TypeError** - 如果图不是FuncGraph类型。

@@ -28,7 +28,7 @@ mindspore.nn.HingeEmbeddingLoss
     - **reduction** (str) - 指定应用于输出结果的计算方式,'none'、'mean'、'sum',默认值:'mean'。

 输入:
-    - **logits** (Tensor) - 预测值,公式中表示为 :math:`x`,shape为:math:`(*)`。`*` 代表着任意数量的维度。
+    - **logits** (Tensor) - 预测值,公式中表示为 :math:`x`,shape为 :math:`(*)`。`*` 代表着任意数量的维度。
     - **labels** (Tensor) - 标签值,公式中表示为 :math:`y`,和 `logits` 具有相同shape,包含1或-1。

 返回:
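For reference, the element-wise loss that `HingeEmbeddingLoss` computes over predictions x and labels y ∈ {1, -1} is the standard hinge-embedding form (written here from the usual definition; the margin Δ defaults to 1.0 in common implementations, which is an assumption, not something stated in this hunk):

```latex
l_n =
\begin{cases}
  x_n, & \text{if } y_n = 1 \\
  \max\{0,\ \Delta - x_n\}, & \text{if } y_n = -1
\end{cases}
```

The `reduction` argument then takes the mean or sum over the vector of l_n values, or returns it unreduced for 'none'.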
@@ -6,7 +6,7 @@ mindspore.nn.ReflectionPad1d
 根据 `padding` 对输入 `x` 进行填充。

 参数:
-    - **padding** (union[int, tuple]) - 填充大小,如果输入为int,则对所有边界进行相同大小的填充;如果是tuple,则为(pad_left, pad_right)。
+    - **padding** (union[int, tuple]) - 填充大小,如果输入为int,则对所有边界进行相同大小的填充;如果是tuple,则为 :math:`(pad\_left, pad\_right)`。

 输入:
     - **x** (Tensor) - 输入Tensor, 2D或3D。shape为 :math:`(C, W_{in})` 或 :math:`(N, C, W_{in})` 。

@@ -6,13 +6,13 @@ mindspore.nn.ReflectionPad2d
 根据 `padding` 对输入 `x` 进行填充。

 参数:
-    - **padding** (union[int, tuple]) - 填充大小,如果输入为int,则对所有边界进行相同大小的填充;如果是tuple,则顺序为 :math:`(pad_{left}, pad_{right}, pad_{up}, pad_{down})`。
+    - **padding** (union[int, tuple]) - 填充大小,如果输入为int,则对所有边界进行相同大小的填充;如果是tuple,则顺序为 :math:`(pad\_left, pad\_right, pad\_up, pad\_down)`。

 输入:
     - **x** (Tensor) - 输入Tensor, shape为 :math:`(C, H_{in}, W_{in})` 或 :math:`(N, C, H_{in}, W_{in})` 。

 输出:
-    Tensor,填充后的Tensor, shape为 :math:`(C, H_{out}, W_{out})` 或 :math:`(N, C, H_{out}, W_{out})`。其中 :math:`H_{out} = H_{in} + pad_{up} + pad_{down}`, :math:`W_{out} = W_{in} + pad_{left} + pad_{right}` 。
+    Tensor,填充后的Tensor, shape为 :math:`(C, H_{out}, W_{out})` 或 :math:`(N, C, H_{out}, W_{out})`。其中 :math:`H_{out} = H_{in} + pad\_up + pad\_down`, :math:`W_{out} = W_{in} + pad\_left + pad\_right` 。

 异常:
     - **TypeError** - `padding` 不是tuple或int。
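The output-shape formulas being corrected above are plain arithmetic. A minimal sketch in plain Python (function name is illustrative) that mirrors the documented rule for `ReflectionPad2d`:

```python
def reflection_pad2d_out_shape(h_in, w_in, padding):
    """Documented rule: H_out = H_in + pad_up + pad_down,
    W_out = W_in + pad_left + pad_right."""
    if isinstance(padding, int):
        pad_left = pad_right = pad_up = pad_down = padding
    else:
        pad_left, pad_right, pad_up, pad_down = padding
    return h_in + pad_up + pad_down, w_in + pad_left + pad_right

# (H_in, W_in) = (3, 3) with padding (1, 1, 2, 0) -> (5, 5)
assert reflection_pad2d_out_shape(3, 3, (1, 1, 2, 0)) == (5, 5)
```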
@@ -6,7 +6,7 @@ mindspore.nn.ReflectionPad3d
 根据 `padding` 对输入 `x` 进行填充。

 参数:
-    - **padding** (union[int, tuple]) - 填充大小,如果输入为int,则对所有边界进行相同大小的填充;如果是tuple,则顺序为 :math:`(pad_{left}, pad_{right}, pad_{up}, pad_{down}, pad_{front}, pad_{back})`。
+    - **padding** (union[int, tuple]) - 填充大小,如果输入为int,则对所有边界进行相同大小的填充;如果是tuple,则顺序为 :math:`(pad\_left, pad\_right, pad\_up, pad\_down, pad\_front, pad\_back)`。

 .. note::
     ReflectionPad3d尚不支持5D Tensor输入。

@@ -15,7 +15,7 @@ mindspore.nn.ReflectionPad3d
     - **x** (Tensor) - 4D Tensor, shape为 :math:`(N, D_{in}, H_{in}, W_{in})` 。

 输出:
-    Tensor,填充后的Tensor, shape为 :math:`(N, D_{out}, H_{out}, W_{out})`。其中 :math:`H_{out} = H_{in} + pad_{up} + pad_{down}`, :math:`W_{out} = W_{in} + pad_{left} + pad_{right}`, :math:`D_{out} = D_{in} + pad_{front} + pad_{back}` 。
+    Tensor,填充后的Tensor, shape为 :math:`(N, D_{out}, H_{out}, W_{out})`。其中 :math:`H_{out} = H_{in} + pad\_up + pad\_down`, :math:`W_{out} = W_{in} + pad\_left + pad\_right`, :math:`D_{out} = D_{in} + pad\_front + pad\_back` 。

 异常:
     - **TypeError** - `padding` 不是tuple或int。

@@ -3,7 +3,7 @@ mindspore.ops.GridSampler2D

 .. py:class:: mindspore.ops.GridSampler2D(interpolation_mode='bilinear', padding_mode='zeros', align_corners=False)

-    此操作使用基于流场网格的插值对 2D input_x进行采样,该插值通常由 `affine_grid` 生成。
+    此操作使用基于流场网格的插值对 2D input_x进行采样,该插值通常由 :func:`mindspore.ops.affine_grid` 生成。

 参数:
     - **interpolation_mode** (str,可选) - 指定插值方法的可选字符串。可选值为:"bilinear"、"nearest",默认为:"bilinear"。
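A minimal usage sketch of the `affine_grid`/`GridSampler2D` pairing referenced above. The constructor signature and `affine_grid(theta, output_size, align_corners=False)` come from hunks in this commit; the tensor shapes follow the usual grid-sampling convention (`input_x` is (N, C, H, W), `grid` is (N, H_out, W_out, 2) with coordinates normalized to [-1, 1]) and should be treated as assumptions here:

```python
import numpy as np
import mindspore
from mindspore import Tensor, ops

# Identity affine transform: sampling should roughly reproduce the input.
theta = Tensor(np.array([[[1., 0., 0.], [0., 1., 0.]]]), mindspore.float32)  # (N, 2, 3)
grid = ops.affine_grid(theta, (1, 1, 2, 2), align_corners=False)            # (N, H, W, 2)

input_x = Tensor(np.arange(4, dtype=np.float32).reshape(1, 1, 2, 2))
sampler = ops.GridSampler2D(interpolation_mode='bilinear', padding_mode='zeros',
                            align_corners=False)
print(sampler(input_x, grid))
```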
@@ -10,7 +10,7 @@ mindspore.ops.Zeta

 .. math::

-    \\zeta \\left ( x,q \\right )= \\textstyle \\sum_{n=0} ^ {\\infty} \\left ( q+n\\right )^{-x}
+    \zeta \left ( x,q \right )= \textstyle \sum_{n=0} ^ {\infty} \left ( q+n\right )^{-x}

 输入:
     - **x** (Tensor) - Tensor,数据类型为:float32、float64。
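The corrected formula is the Hurwitz zeta function. A plain-Python partial-sum sketch makes the definition concrete; for q = 1 it reduces to the Riemann zeta, e.g. ζ(2, 1) = π²/6 ≈ 1.6449:

```python
import math

def hurwitz_zeta(x, q, terms=200000):
    """Partial sum of zeta(x, q) = sum_{n>=0} (q + n)^(-x); converges for x > 1."""
    return sum((q + n) ** (-x) for n in range(terms))

print(hurwitz_zeta(2.0, 1.0))   # ~1.64492, close to pi^2 / 6
print(math.pi ** 2 / 6)         # 1.6449340668...
```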
@@ -1,6 +1,6 @@
 mindspore.ops.arcsinh
 ======================

-.. py:function:: mindspore.ops.arcsinh()
+.. py:function:: mindspore.ops.arcsinh(x)

 :func:`mindspore.ops.asinh` 的别名。

@@ -1,6 +1,6 @@
 mindspore.ops.arctanh
 ======================

-.. py:function:: mindspore.ops.arctanh()
+.. py:function:: mindspore.ops.arctanh(x)

 :func:`mindspore.ops.atanh` 的别名。

@@ -3,9 +3,9 @@ mindspore.ops.broadcast_to

 .. py:function:: mindspore.ops.broadcast_to(x, shape)

-    将输入shape广播到目标shape。输入shape维度必须小于等于目标shape维度,设输入shape为 :math: `(x1, x2, ..., xm)`,目标shape为 :math:`(*, y_1, y_2, ..., y_m)`,其中 :math:`*` 为任意额外的维度。广播规则如下:
+    将输入shape广播到目标shape。输入shape维度必须小于等于目标shape维度,设输入shape为 :math:`(x_1, x_2, ..., x_m)`,目标shape为 :math:`(*, y_1, y_2, ..., y_m)`,其中 :math:`*` 为任意额外的维度。广播规则如下:

-    依次比较 `x_m` 与 `y_m` 、 `x_{m-1}` 与 `y_{m-1}` 、...、 `x_1` 与 `y_1` 的值确定是否可以广播以及广播后输出shape对应维的值。
+    依次比较 :math:`x_m` 与 :math:`y_m` 、 :math:`x_{m-1}` 与 :math:`y_{m-1}` 、...、 :math:`x_1` 与 :math:`y_1` 的值确定是否可以广播以及广播后输出shape对应维的值。

     - 如果相等,则这个值即为目标shape该维的值。比如说输入shape为 :math:`(2, 3)` ,目标shape为 :math:`(2, 3)` ,则输出shape为 :math:`(2, 3)`。
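A short example of the broadcast rule described above, using the `mindspore.ops.broadcast_to(x, shape)` signature from the hunk: dimensions are compared right to left, equal values pass through, a 1 stretches, and the leading `*` dimensions are filled from the target shape.

```python
import numpy as np
import mindspore
from mindspore import Tensor, ops

x = Tensor(np.array([[1.], [2.]]), mindspore.float32)  # shape (2, 1)
# Trailing dims: 2 == 2 passes; 1 stretches to 4; leading * becomes 3.
y = ops.broadcast_to(x, (3, 2, 4))
print(y.shape)   # (3, 2, 4)
```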
@@ -6,10 +6,8 @@
 通过权重梯度总和的比率来裁剪多个Tensor的值。

 .. note::
-    输入 `x` 应为Tensor的tuple或list。否则,将引发错误。
-
-.. note::
-    在半自动并行模式或自动并行模式下,如果输入是梯度,那么将会自动汇聚所有设备上的梯度的平方和。
+    - 输入 `x` 应为Tensor的tuple或list。否则,将引发错误。
+    - 在半自动并行模式或自动并行模式下,如果输入是梯度,那么将会自动汇聚所有设备上的梯度的平方和。

 参数:
     - **x** (Union(tuple[Tensor], list[Tensor])) - 由Tensor组成的tuple,其每个元素为任意维度的Tensor。
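For intuition: global-norm clipping scales every tensor in `x` by the same factor `clip_norm / max(global_norm, clip_norm)`, where `global_norm` is the L2 norm taken over all elements of all tensors. A plain-NumPy sketch of that rule (a simplification for illustration, not the MindSpore implementation):

```python
import numpy as np

def clip_by_global_norm(tensors, clip_norm=1.0):
    """Scale all tensors jointly so the global L2 norm is at most clip_norm."""
    global_norm = np.sqrt(sum(np.sum(np.square(t)) for t in tensors))
    scale = clip_norm / max(global_norm, clip_norm)
    return [t * scale for t in tensors]

grads = [np.array([3.0, 4.0]), np.array([0.0])]   # global norm = 5.0
clipped = clip_by_global_norm(grads, clip_norm=1.0)
print(np.sqrt(sum(np.sum(np.square(t)) for t in clipped)))   # ~1.0
```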
@@ -7,7 +7,7 @@ mindspore.ops.cummax

 .. math::
     \begin{array}{ll} \\
-        y{i} = max(x{1}, x{2}, ... , x{i})
+        y_{i} = max(x_{1}, x_{2}, ... , x_{i})
     \end{array}

 参数:

@@ -7,7 +7,7 @@ mindspore.ops.cummin

 .. math::
     \begin{array}{ll} \\
-        y{i} = min(x{1}, x{2}, ... , x{i})
+        y_{i} = min(x_{1}, x_{2}, ... , x_{i})
     \end{array}

 参数:
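The corrected subscripts spell out a simple running reduction, y_i = max(x_1, ..., x_i) (min for `cummin`). In plain Python:

```python
def cummax(xs):
    """Running maximum: y[i] = max(x[0], ..., x[i])."""
    out, best = [], float('-inf')
    for v in xs:
        best = max(best, v)
        out.append(best)
    return out

print(cummax([3, 1, 4, 1, 5]))   # [3, 3, 4, 4, 5]
```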
@@ -9,7 +9,7 @@ mindspore.ops.floor
     out_i = \lfloor x_i \rfloor

 参数:
-    - **x** (Tensor) - Floor的输入,任意维度的Tensor,秩应小于8。其数据类型必须为float16、float32。
+    - **x** (Tensor) - floor的输入,任意维度的Tensor,秩应小于8。其数据类型必须为float16、float32。

 返回:
     Tensor,shape与 `x` 相同。

@@ -1,7 +1,7 @@
 mindspore.ops.l1_loss
 =====================

-.. py:function:: mindspore.ops.l1_loss(x, target, reduction='mean'):
+.. py:function:: mindspore.ops.l1_loss(x, target, reduction='mean')

 l1_loss用于计算预测值和目标值之间的平均绝对误差。

@@ -3,8 +3,8 @@ mindspore.ops.max_unpool1d

 .. py:function:: mindspore.ops.max_unpool1d(x, indices, kernel_size, stride=None, padding=0, output_size=None)

-    `Maxpool1d` 的部分逆过程。 `Maxpool1d` 不是完全可逆的,因为非最大值丢失。
-    `max_unpool1d` 以 `MaxPool1d` 的输出为输入,包括最大值的索引。在计算 `Maxpool1d` 部分逆的过程中,非最大值设置为零。
+    `maxpool1d` 的部分逆过程。 `maxpool1d` 不是完全可逆的,因为非最大值丢失。
+    `max_unpool1d` 以 `maxpool1d` 的输出为输入,包括最大值的索引。在计算 `maxpool1d` 部分逆的过程中,非最大值设置为零。
     支持的输入数据格式为 :math:`(N, C, H_{in})` 或 :math:`(C, H_{in})` ,输出数据的格式为 :math:`(N, C, H_{out})`
     或 :math:`(C, H_{out})` ,计算公式如下:
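The "partial inverse" wording has a simple meaning: unpooling scatters each pooled maximum back to the position recorded in `indices` and fills everything else with zeros. A plain-Python sketch (illustrative 1-D case, no batch/channel dimensions):

```python
def max_unpool1d(values, indices, output_len):
    """Scatter pooled maxima back to their recorded positions; zeros elsewhere."""
    out = [0.0] * output_len
    for v, i in zip(values, indices):
        out[i] = v
    return out

# maxpool1d([1, 3, 2, 5], kernel=2, stride=2) -> values [3, 5] at indices [1, 3]
print(max_unpool1d([3.0, 5.0], [1, 3], 4))   # [0.0, 3.0, 0.0, 5.0]
```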
@@ -6,7 +6,7 @@ mindspore.ops.sqrt
 逐元素返回当前Tensor的平方根。

 .. math::
-    out_{i} = \\sqrt{x_{i}}
+    out_{i} = \sqrt{x_{i}}

 参数:
     - **x** (Tensor) - 输入Tensor,数据类型为number.Number,其rank需要在[0, 7]范围内.

@@ -66,6 +66,7 @@ Neural Network
     mindspore.ops.FractionalMaxPool
     mindspore.ops.FractionalMaxPoolWithFixedKsize
     mindspore.ops.FractionalMaxPool3DWithFixedKsize
+    mindspore.ops.GridSampler2D
     mindspore.ops.GridSampler3D
     mindspore.ops.LayerNorm
     mindspore.ops.LRN

@@ -196,7 +197,6 @@ Sampling Operator
     :template: classtemplate.rst

     mindspore.ops.ComputeAccidentalHits
-    mindspore.ops.GridSampler2D
     mindspore.ops.LogUniformCandidateSampler
     mindspore.ops.UniformCandidateSampler
@@ -507,7 +507,7 @@ class COOTensor(COOTensor_):
     TypeError: If (self/other)'s value's type is not matched with thresh's type

     Supported Platforms:
-        ``CPU`` ``GPU``
+        ``GPU`` ``CPU``

     Examples:
         >>> from mindspore import Tensor, COOTensor

@@ -804,7 +804,7 @@ class CSRTensor(CSRTensor_):
     Tensor or CSRTensor.

     Supported Platforms:
-        ``CPU`` ``GPU``
+        ``GPU`` ``CPU``

     Examples:
         >>> from mindspore import Tensor, CSRTensor

@@ -4347,7 +4347,7 @@ class Tensor(Tensor_):
     ValueError: If all elements of input tensor are not greater than (p-1)/2.

     Supported Platforms:
-        ``CPU`` ``GPU``
+        ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.array([[3, 4, 5], [4, 2, 6]]), mindspore.float32)

@@ -4384,7 +4384,7 @@ class Tensor(Tensor_):
     ValueError: If the shape of input tensor and `tensor2` could not broadcast together.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.arange(2*3*4).reshape(2, 3, 4), mindspore.float32)

@@ -4439,7 +4439,7 @@ class Tensor(Tensor_):
     ValueError: If `input` and `other` are not the same shape.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.array([1.0, 5.0, 3.0]), mindspore.float32)

@@ -4492,7 +4492,7 @@ class Tensor(Tensor_):
     ValueError: If input tensor and `value` are not the same shape.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)

@@ -4522,7 +4522,7 @@ class Tensor(Tensor_):
     Tensor, has the same shape and dtype as input.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.array([1, 2, -1, 2, 0, -3.5]), mindspore.float32)

@@ -4557,7 +4557,7 @@ class Tensor(Tensor_):
     TypeError: If neither input tensor and `other` is a Tensor.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.array([1, 2, 3]), mindspore.float32)

@@ -4593,7 +4593,7 @@ class Tensor(Tensor_):
     TypeError: If `size` is not an int, list or tuple of integers.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.array([1, 2, 3]), mindspore.float32)

@@ -4627,7 +4627,7 @@ class Tensor(Tensor_):
     TypeError: If `size` is not an int, list or tuple of integers.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.array([1, 2, 3]), mindspore.float32)

@@ -4690,7 +4690,7 @@ class Tensor(Tensor_):
     Tensor, has the same shape as input tensor.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.array([0.62, 0.28, 0.43, 0.62]), mindspore.float32)

@@ -4721,7 +4721,7 @@ class Tensor(Tensor_):
     ValueError: If `dim` is not in range of [-len(x.shape), len(x.shape)).

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.array([[8, 2, 1], [5, 9, 3], [4, 6, 7]]), mindspore.float16)

@@ -4780,7 +4780,7 @@ class Tensor(Tensor_):
     Tensor, the shape is the same as the input tensor.

     Supported Platforms:
-        ``CPU`` ``GPU``
+        ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.asarray(np.complex(1.3 + 0.4j)), mindspore.complex64)
@@ -1154,6 +1154,8 @@ def reset_ps_context():
     Reset parameter server training mode context attributes to the default values:
+
+    - enable_ps: False.

     Meaning of each field and its default value refer to :func:`mindspore.set_ps_context`.
     """
     _reset_ps_context()
@@ -142,7 +142,7 @@ class TypeCast(TensorOperation):
     TypeError: If `data_type` is not of type bool, int, float or string.

     Supported Platforms:
-        ``CPU`` ``Ascend`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> import numpy as np

@@ -896,7 +896,7 @@ class TypeCast(TensorOperation):
     TypeError: If `data_type` is not of MindSpore data type bool, int, float, string or type :class:`numpy.dtype` .

     Supported Platforms:
-        ``CPU`` ``Ascend`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> import numpy as np

@@ -716,7 +716,7 @@ class HWC2CHW(ImageTensorOperation):
     RuntimeError: If given tensor shape is not <H, W> or <H, W, C>.

     Supported Platforms:
-        ``CPU`` ``Ascend`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> transforms_list = [c_vision.Decode(),

@@ -822,7 +822,7 @@ class Normalize(ImageTensorOperation):
     RuntimeError: If given tensor shape is not <H, W> or <...,H, W, C>.

     Supported Platforms:
-        ``CPU`` ``Ascend`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> decode_op = c_vision.Decode()

@@ -1214,7 +1214,7 @@ class RandomColorAdjust(ImageTensorOperation):
     RuntimeError: If given tensor shape is not <H, W, C>.

     Supported Platforms:
-        ``CPU`` ``Ascend`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> decode_op = c_vision.Decode()

@@ -1561,7 +1561,7 @@ class RandomHorizontalFlip(ImageTensorOperation):
     RuntimeError: If given tensor shape is not <H, W> or <H, W, C>.

     Supported Platforms:
-        ``CPU`` ``Ascend`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> transforms_list = [c_vision.Decode(), c_vision.RandomHorizontalFlip(0.75)]

@@ -2070,7 +2070,7 @@ class RandomSharpness(ImageTensorOperation):
     ValueError: If `degrees` is in (max, min) format instead of (min, max).

     Supported Platforms:
-        ``CPU`` ``Ascend`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> transforms_list = [c_vision.Decode(), c_vision.RandomSharpness(degrees=(0.2, 1.9))]

@@ -2136,7 +2136,7 @@ class RandomVerticalFlip(ImageTensorOperation):
     RuntimeError: If given tensor shape is not <H, W> or <H, W, C>.

     Supported Platforms:
-        ``CPU`` ``Ascend`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> transforms_list = [c_vision.Decode(), c_vision.RandomVerticalFlip(0.25)]

@@ -2200,7 +2200,7 @@ class Rescale(ImageTensorOperation):
     TypeError: If `shift` is not of type float.

     Supported Platforms:
-        ``CPU`` ``Ascend`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> transforms_list = [c_vision.Decode(), c_vision.Rescale(1.0 / 255.0, -1.0)]
@@ -3465,7 +3465,7 @@ class Rescale(ImageTensorOperation):
     TypeError: If `shift` is not of type float.

     Supported Platforms:
-        ``CPU`` ``Ascend`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> transforms_list = [vision.Decode(), vision.Rescale(1.0 / 255.0, -1.0)]

@@ -4074,7 +4074,7 @@ class ToType(TypeCast):
     TypeError: If `data_type` is not of type :class:`mindspore.dtype` or :class:`numpy.dtype` .

     Supported Platforms:
-        ``CPU`` ``Ascend`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> import numpy as np

@@ -2193,7 +2193,7 @@ class GraphCell(Cell):
     If the parameter exists in the graph according to the name, update it's value.
     If the parameter does not exist, ignore it. Default: None.
     obf_password (int): The password used for dynamic obfuscation. "dynamic obfuscation" is used for model
-        protection, which can refer to `mindspore.train.serialization.obfuscate_model()`. If the input 'graph' is a
+        protection, which can refer to :func:`mindspore.obfuscate_model`. If the input 'graph' is a
         func_graph loaded from a mindir file obfuscated in password mode, then obf_password should be provided.
         obf_password should be larger than zero and less or equal than int_64 (9223372036854775807). default: None.
@@ -1322,7 +1322,7 @@ class SoftShrink(Cell):
     ValueError: If lambd is less than 0.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> input_x = Tensor(np.array([[ 0.5297, 0.7871, 1.1754], [ 0.7836, 0.6218, -1.1542]]), mstype.float16)

@@ -1370,7 +1370,7 @@ class HShrink(Cell):
     TypeError: If dtype of `input_x` is neither float16 nor float32.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> import mindspore

@@ -1419,7 +1419,7 @@ class Threshold(Cell):
     TypeError: If `value` is not a float or an int.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> import mindspore

@@ -644,7 +644,7 @@ class Identity(Cell):
     TypeError: If `x` is not a Tensor.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.array([1, 2, 3, 4]), mindspore.int64)

@@ -1298,7 +1298,7 @@ class ResizeBilinear(Cell):
     ValueError: If `size` is a list or tuple whose length is not equal to 2.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor([[[[1, 2, 3, 4], [5, 6, 7, 8]]]], mindspore.float32)
@@ -441,14 +441,14 @@ class ReflectionPad1d(_ReflectionPadNd):
     Args:
         padding (union[int, tuple]): The padding size to pad the last dimension of input tensor.
             If padding is an integer: all directions will be padded with the same size.
-            If padding is a tuple: uses :math:`(pad_{left}, pad_{right})` to pad.
+            If padding is a tuple: uses :math:`(pad\_left, pad\_right)` to pad.

     Inputs:
         - **x** (Tensor) - 2D or 3D, shape: :math:`(C, W_{in})` or :math:`(N, C, W_{in})`.

     Outputs:
         Tensor, after padding. Shape: :math:`(C, W_{out})` or :math:`(N, C, W_{out})`,
-        where :math:`W_{out} = W_{in} + pad_{left} + pad_{right}`.
+        where :math:`W_{out} = W_{in} + pad\_left + pad\_right`.

     Raises:
         TypeError: If 'padding' is not a tuple or int.

@@ -490,14 +490,14 @@ class ReflectionPad2d(_ReflectionPadNd):
     Args:
         padding (union[int, tuple]): The padding size to pad the input tensor.
            If padding is an integer: all directions will be padded with the same size.
-           If padding is a tuple: uses :math:`(pad_{left}, pad_{right}, pad_{up}, pad_{down})` to pad.
+           If padding is a tuple: uses :math:`(pad\_left, pad\_right, pad\_up, pad\_down)` to pad.

     Inputs:
         - **x** (Tensor) - 3D or 4D, shape: :math:`(C, H_{in}, W_{out})` or :math:`(N, C, H_{out}, W_{out})`.

     Outputs:
         Tensor, after padding. Shape: :math:`(C, H_{out}, W_{out})` or :math:`(N, C, H_{out}, W_{out})`,
-        where :math:`H_{out} = H_{in} + pad_{up} + pad_{down}`, :math:`W_{out} = W_{in} + pad_{left} + pad_{right}`.
+        where :math:`H_{out} = H_{in} + pad\_up + pad\_down`, :math:`W_{out} = W_{in} + pad\_left + pad\_right`.

     Raises:
         TypeError: If 'padding' is not a tuple or int.

@@ -545,17 +545,17 @@ class ReflectionPad3d(_ReflectionPadNd):

     Args:
         padding (union[int, tuple]): The padding size to pad the input tensor.
-            If padding is an integer: all directions will be padded with the same size.
-            If padding is a tuple: uses :math:`(pad_{left}, pad_{right}, pad_{up}, pad_{down},
-            pad_{front}, pad_{back})` to pad.
+            If padding is an integer: all directions will be padded with the same size.
+            If padding is a tuple: uses :math:`(pad\_left, pad\_right, pad\_up, pad\_down,
+            pad\_front, pad\_back)` to pad.

     Inputs:
         - **x** (Tensor) - 4D Tensor, shape: :math:`(N, D_{in}, H_{in}, W_{out})`.

     Outputs:
         Tensor, after padding. Shape: :math:`(N, D_{out}, H_{out}, W_{out})`,
-        where :math:`D_{out} = D_{in} + pad_{front} + pad_{back}`, :math:`H_{out} = H_{in} + pad_{up} + pad_{down}`
-        :math:`W_{out} = W_{in} + pad_{left} + pad_{right}`.
+        where :math:`D_{out} = D_{in} + pad\_front + pad\_back`, :math:`H_{out} = H_{in} + pad\_up + pad\_down`
+        :math:`W_{out} = W_{in} + pad\_left + pad\_right`.

     Raises:
         TypeError: If 'padding' is not a tuple or int.

@@ -586,11 +586,9 @@ class ReflectionPad3d(_ReflectionPadNd):
         [[[[3. 2. 3. 2.]
            [1. 0. 1. 0.]
            [3. 2. 3. 2.]]
-
           [[7. 6. 7. 6.]
            [5. 4. 5. 4.]
            [7. 6. 7. 6.]]
-
           [[3. 2. 3. 2.]
            [1. 0. 1. 0.]
            [3. 2. 3. 2.]]]]
@@ -159,7 +159,7 @@ class Adagrad(Optimizer):
     ValueError: If `accum` or `weight_decay` is less than 0.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> import mindspore as ms

@@ -395,7 +395,7 @@ class GetNextSingleOp(Cell):
     """
     Cell to run for getting the next operation.

-    For detailed information, refer to `mindspore.ops.GetNext`.
+    For detailed information, refer to :class:`mindspore.ops.GetNext`.

     Args:
         dataset_types (list[:class:`mindspore.dtype`]): The types of dataset.

@@ -96,12 +96,10 @@ def clip_by_global_norm(x, clip_norm=1.0, use_norm=None):
     Clips tensor values by the ratio of the sum of their norms.

     Note:
-        Input `x` should be a tuple or list of tensors. Otherwise, it will raise an error.
-
-    Note:
-        On the SEMI_AUTO_PARALLEL mode or AUTO_PARALLEL mode, if the input `x` is the gradient,
-        the gradient norm values on all devices will be automatically aggregated by allreduce inserted after the local
-        square sum of the gradients.
+        - Input `x` should be a tuple or list of tensors. Otherwise, it will raise an error.
+        - On the SEMI_AUTO_PARALLEL mode or AUTO_PARALLEL mode, if the input `x` is the gradient,
+          the gradient norm values on all devices will be automatically aggregated by allreduce inserted after
+          the local square sum of the gradients.

     Args:
         x (Union(tuple[Tensor], list[Tensor])): Input data to clip.
@@ -2107,7 +2107,7 @@ def scatter_max(input_x, indices, updates):
     and `updates` is greater than 8 dimensions.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> input_x = Parameter(Tensor(np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]), mindspore.float32), name="input_x")

@@ -3841,7 +3841,7 @@ def meshgrid(inputs, indexing='xy'):
     ValueError: If `indexing` is neither 'xy' nor 'ij'.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> import numpy as np

@@ -3947,11 +3947,12 @@ def affine_grid(theta, output_size, align_corners=False):
 def broadcast_to(x, shape):
     """
     Broadcasts input tensor to a given shape. The dim of input shape must be smaller
-    than or equal to that of target shape. Suppose input shape is :math:`(x1, x2, ..., xm)`,
+    than or equal to that of target shape. Suppose input shape is :math:`(x_1, x_2, ..., x_m)`,
     target shape is :math:`(*, y_1, y_2, ..., y_m)`, where :math:`*` means any additional dimension.
     The broadcast rules are as follows:

-    Compare the value of `x_m` and `y_m`, `x_{m-1}` and `y_{m-1}`, ..., `x_1` and `y_1` consecutively and
+    Compare the value of :math:`x_m` and :math:`y_m`, :math:`x_{m-1}` and :math:`y_{m-1}`, ...,
+    :math:`x_1` and :math:`y_1` consecutively and
     decide whether these shapes are broadcastable and what the broadcast result is.

     If the value pairs at a specific dim are equal, then that value goes right into that dim of output shape.

@@ -5464,7 +5465,7 @@ def fold(input, output_size, kernel_size, dilation=1, padding=0, stride=1):
     ValueError: If `input.shape[3]` does not match the calculated number of sliding blocks.

     Supported Platforms:
-        ``CPU`` ``GPU``
+        ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(input_data=np.random.rand(16, 16, 4, 25), dtype=mstype.float32)

@@ -5728,7 +5729,7 @@ def mvlgamma(input, p):
     ValueError: If not all elements of `input` are greater than :math:`(p - 1) / 2`.

     Supported Platforms:
-        ``CPU`` ``GPU``
+        ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.array([[3, 4, 5], [4, 2, 6]]), mindspore.float32)
@@ -953,7 +953,7 @@ def div(input, other, rounding_mode=None):
     ValueError: If `rounding_mode` value is not None, "floor" or "trunc".

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)

@@ -1978,7 +1978,7 @@ def tan(x):
     TypeError: If `x` is not a Tensor.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.array([-1.0, 0.0, 1.0]), mindspore.float32)

@@ -2442,7 +2442,7 @@ def atan2(x, y):
     when data type conversion of Parameter is not supported.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.array([0, 1]), mindspore.float32)

@@ -3297,7 +3297,7 @@ def truncate_div(x, y):
     TypeError: If `x` and `y` is not one of the following: Tensor, Number, bool.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.array([2, 4, -1]), mindspore.int32)

@@ -3343,7 +3343,7 @@ def truncate_mod(x, y):
     ValueError: If the shape `x` and `y` cannot be broadcasted to each other.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.array([2, 4, -1]), mindspore.int32)

@@ -5488,7 +5488,7 @@ def cummin(x, axis):

     .. math::
         \begin{array}{ll} \\
-            y{i} = min(x{1}, x{2}, ... , x{i})
+            y_{i} = min(x_{1}, x_{2}, ... , x_{i})
         \end{array}

     Args:

@@ -5540,7 +5540,7 @@ def cummax(x, axis):

     .. math::
         \begin{array}{ll} \\
-            y{i} = max(x{1}, x{2}, ... , x{i})
+            y_{i} = max(x_{1}, x_{2}, ... , x_{i})
         \end{array}

     Args:

@@ -7131,7 +7131,7 @@ def renorm(input_x, p, dim, maxnorm):
     ValueError: If the value of `p` less than 1.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3]]), mindspore.float32)

@@ -8163,7 +8163,7 @@ def kron(x, y):
     TypeError: If `y` is not a Tensor.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> import mindspore

@@ -8894,7 +8894,7 @@ def erfinv(input):
     TypeError: If dtype of `input` is not float16, float32 or float64.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.array([0, 0.5, -0.9]), mindspore.float32)

@@ -8976,7 +8976,7 @@ def cumprod(input, dim, dtype=None):
     ValueError: If `dim` is None.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.array([1, 2, 3], np.float32))

@@ -9008,7 +9008,7 @@ def greater(input, other):
     Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)

@@ -9038,7 +9038,7 @@ def greater_equal(input, other):
     Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.array([1, 2, 3]), mindspore.int32)

@@ -9087,7 +9087,7 @@ def igamma(input, other):
     ValueError: If `input` could not be broadcast to a tensor with shape of `other`.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> a = Tensor(np.array([2.0, 4.0, 6.0, 8.0]).astype(np.float32))

@@ -9136,7 +9136,7 @@ def igammac(input, other):
     ValueError: If `input` could not be broadcast to a tensor with shape of `other`.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> a = Tensor(np.array([2.0, 4.0, 6.0, 8.0]).astype(np.float32))

@@ -9285,7 +9285,7 @@ def isinf(input):
     TypeError: If `input` is not a Tensor.

     Supported Platforms:
-        ``CPU`` ``GPU``
+        ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.array([np.log(-1), 1, np.log(0)]), mindspore.float32)

@@ -9406,7 +9406,7 @@ def imag(input):
     TypeError: If `input` is not a Tensor.

     Supported Platforms:
-        ``CPU`` ``GPU``
+        ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.asarray(np.complex(1.3 + 0.4j)), mindspore.complex64)
@@ -686,11 +686,11 @@ def adaptive_max_pool3d(x, output_size, return_indices=False):
 def max_unpool1d(x, indices, kernel_size, stride=None, padding=0, output_size=None):
     r"""
-    Computes a partial inverse of MaxPool1d.
+    Computes a partial inverse of maxpool1d.

-    MaxPool1d is not fully invertible, since the non-maximal values are lost.
+    maxpool1d is not fully invertible, since the non-maximal values are lost.

-    max_unpool1d takes the output of MaxPool1d as input including the indices of the maximal values
+    max_unpool1d takes the output of maxpool1d as input including the indices of the maximal values
     and computes a partial inverse in which all non-maximal values are set to zero. Typically the input
     is of shape :math:`(N, C, H_{in})` or :math:`(C, H_{in})`, and the output is of shape :math:`(N, C, H_{out}`
     or :math:`(C, H_{out}`. The operation is as follows.

@@ -1134,7 +1134,7 @@ def celu(x, alpha=1.0):
     ValueError: If `alpha` has the value of 0.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.array([-2.0, -1.0, 1.0, 2.0]), mindspore.float32)

@@ -2014,7 +2014,7 @@ def interpolate(x, roi=None, scales=None, sizes=None, coordinate_transformation_
     ValueError: If `mode` is not in the support list.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> # case 1: linear mode

@@ -2161,7 +2161,7 @@ def soft_shrink(x, lambd=0.5):
     ValueError: If lambd is less than 0.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> from mindspore import Tensor

@@ -4168,7 +4168,7 @@ def batch_norm(input_x, running_mean, running_var, weight, bias, training=False,
     TypeError: If dtype of `input_x`, `weight` is neither float16 nor float32.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> input_x = Tensor(np.ones([2, 2]), mindspore.float32)

@@ -4789,7 +4789,7 @@ def gelu(input_x, approximate='none'):
     ValueError: If `approximate` value is neither `none` or `tanh`.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor([1.0, 2.0, 3.0], mindspore.float32)

@@ -5040,7 +5040,7 @@ def mse_loss(input_x, target, reduction='mean'):
     ValueError: If `input_x` and `target` have different shapes and cannot be broadcasted.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> logits = Tensor(np.array([1, 2, 3]), mindspore.float32)

@@ -5097,7 +5097,7 @@ def msort(x):
     TypeError: If dtype of `x` is neither float16 nor float32.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> import mindspore as ms
@@ -676,7 +676,7 @@ def coo_add(x1: COOTensor, x2: COOTensor, thresh: Tensor) -> COOTensor:
     TypeError: If (x1/x2)'s value's type is not matched with thresh's type.

     Supported Platforms:
-        ``CPU`` ``GPU``
+        ``GPU`` ``CPU``

     Examples:
         >>> from mindspore import Tensor, COOTensor

@@ -115,7 +115,7 @@ def csr_tan(x: CSRTensor) -> CSRTensor:
     TypeError: If `x` is not a CSRTensor.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)

@@ -150,7 +150,7 @@ def coo_tan(x: COOTensor) -> COOTensor:
     TypeError: If `x` is not a COOTensor.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)

@@ -1938,7 +1938,7 @@ def csr_isinf(x: CSRTensor) -> CSRTensor:
     TypeError: If `x` is not a CSRTensor.

     Supported Platforms:
-        ``CPU`` ``GPU``
+        ``GPU`` ``CPU``

     Examples:
         >>> indptr = Tensor([0, 1, 2, 2], dtype=mstype.int32)

@@ -1978,7 +1978,7 @@ def coo_isinf(x: COOTensor) -> COOTensor:
     TypeError: If `x` is not a COOTensor.

     Supported Platforms:
-        ``CPU`` ``GPU``
+        ``GPU`` ``CPU``

     Examples:
         >>> indices = Tensor([[0, 1], [1, 2]], dtype=mstype.int64)
@@ -1444,7 +1444,7 @@ class LayerNormGradGrad(Primitive):
     ValueError: If gamma, d_dg, d_db don't have the same shape.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``
     """

     @prim_attr_register

@@ -1936,7 +1936,7 @@ class UpsampleNearest3DGrad(Primitive):
     ValueError: If shape of `x` is not 5D.

     Supported Platforms:
-        ``GPU`` ``Ascend`` ``CPU``
+        ``Ascend`` ``GPU`` ``CPU``
     """
     @prim_attr_register
     def __init__(self, input_size, output_size=None, scales=None):

@@ -2541,7 +2541,7 @@ class PdistGrad(Primitive):
     ValueError: If dimension of `x` is not 2.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``
     """

     @prim_attr_register

@@ -2616,7 +2616,7 @@ class HShrinkGrad(Primitive):
     TypeError: If dtype of `gradients` or `features` is neither float16 nor float32.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``
     """

     @prim_attr_register

@@ -2990,7 +2990,7 @@ class UpsampleTrilinear3DGrad(Primitive):
     ValueError: If elements number of `input_size` is not 5.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``
     """
     @prim_attr_register
     def __init__(self, input_size, output_size=None, scales=None, align_corners=False):

@@ -3240,7 +3240,7 @@ class TraceGrad(Primitive):
     ValueError: If length of shape of `x_shape` is not equal to 2.

     Support Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``
     """

     @prim_attr_register

@@ -3553,7 +3553,7 @@ class ResizeBicubicGrad(Primitive):
     ValueError: If `size` dim is not 4.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``
     """
     @prim_attr_register
     def __init__(self, align_corners=False, half_pixel_centers=False):
@@ -78,19 +78,25 @@ class ExtractImagePatches(Primitive):
     Tensor, a 4-D tensor whose data type is same as 'input_x',
     and the shape is [out_batch, out_depth, out_row, out_col], Where the out_batch is the same as the in_batch
     and
+
+    .. math::
         out\_depth=ksize\_row * ksize\_col * in\_depth
+
     and
     if 'padding' is "valid":
+
+    .. math::
         out\_row=floor((in\_row - (ksize\_row + (ksize\_row - 1) * (rate\_row - 1))) / stride\_row) + 1
         out\_col=floor((in\_col - (ksize\_col + (ksize\_col - 1) * (rate\_col - 1))) / stride\_col) + 1
+
     if 'padding' is "same":
+
+    .. math::
         out\_row=floor((in\_row - 1) / stride\_row) + 1
         out\_col=floor((in\_col - 1) / stride\_col) + 1

-    Supported Platforms:
-        ``GPU`` ``Ascend``
+    Supported Platforms:
+        ``Ascend`` ``GPU``
     """

     @prim_attr_register
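To make the "valid" formula above concrete, a small worked computation in plain Python (the numbers are illustrative):

```python
import math

def extract_patches_out_rows(in_row, ksize_row, rate_row, stride_row):
    """'valid' padding: effective kernel = ksize + (ksize - 1) * (rate - 1)."""
    effective_k = ksize_row + (ksize_row - 1) * (rate_row - 1)
    return math.floor((in_row - effective_k) / stride_row) + 1

# in_row=32, ksize_row=3, rate_row=1, stride_row=2 -> floor((32 - 3) / 2) + 1 = 15
print(extract_patches_out_rows(32, 3, 1, 2))   # 15
```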
@@ -194,12 +200,14 @@ class Lamb(PrimitiveWithInfer):
             Default: 0.0.
         - **global_step** (Tensor) - Tensor to record current global step.
         - **gradient** (Tensor) - Gradient, has the same shape and data type as `var`.

     Outputs:
-        Tensor, the updated parameters.
+        - **var** (Tensor) - The same shape and data type as `var`.
+

     Supported Platforms:
-        ``GPU`` ``Ascend``
+        ``Ascend`` ``GPU``
     """

     @prim_attr_register
@@ -3595,7 +3595,7 @@ class Mvlgamma(Primitive):
     Refer to :func:`mindspore.ops.mvlgamma` for more details.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.array([[3, 4, 5], [4, 2, 6]]), mindspore.float32)

@@ -4039,7 +4039,7 @@ class ScatterMax(_ScatterOpDynamic):
     and `updates` is greater than 8 dimensions.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> input_x = Parameter(Tensor(np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]), mindspore.float32),

@@ -4272,7 +4272,7 @@ class ScatterSub(Primitive):
     is required when data type conversion of Parameter is not supported.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> input_x = Parameter(Tensor(np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]]), mindspore.float32), name="x")

@@ -5465,7 +5465,7 @@ class Meshgrid(PrimitiveWithInfer):
     Refer to :func:`mindspore.ops.meshgrid` for more details.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.array([1, 2, 3, 4]).astype(np.int32))

@@ -5808,7 +5808,7 @@ class EmbeddingLookup(PrimitiveWithCheck):
     ValueError: If length of shape of `input_params` is greater than 2.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> input_params = Tensor(np.array([[8, 9], [10, 11], [12, 13], [14, 15]]), mindspore.float32)

@@ -5883,7 +5883,7 @@ class Identity(Primitive):
     TypeError: If `x` is not a Tensor.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.array([1, 2, 3, 4]), mindspore.int64)

@@ -6689,7 +6689,7 @@ class TensorScatterElements(Primitive):
     value of that position in the output will be nondeterministic.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> op = ops.TensorScatterElements(0, "none")

@@ -6964,7 +6964,7 @@ class LowerBound(Primitive):
     ValueError: If the first dimension of the shape of `sorted_x` is not equal to that of `values`.

     Supported Platforms:
-        ``CPU`` ``GPU``
+        ``GPU`` ``CPU``

     Examples:
         >>> import mindspore

@@ -7017,7 +7017,7 @@ class UpperBound(Primitive):
     ValueError: If the number of rows of `sorted_x` is not consistent with that of `values`.

     Supported Platforms:
-        ``CPU`` ``GPU``
+        ``GPU`` ``CPU``

     Examples:
         >>> import mindspore

@@ -7618,7 +7618,7 @@ class HammingWindow(Primitive):
     ValueError: If data of `length` is negative.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> # case 1: periodic=True.
@@ -462,7 +462,7 @@ class NonMaxSuppressionWithOverlaps(Primitive):
     ValueError: If the shape of `scores` is not equal to the shape of the dim0 or dim1 of `overlaps`.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> overlaps = Tensor(np.array([[0.6964692, 0.28613934, 0.22685145, 0.5513148],

@@ -662,7 +662,7 @@ class ResizeLinear1D(Primitive):
     TypeError: If `coordinate_transformation_mode` is not in the support list.

     Supported Platforms:
-        ``CPU`` ``GPU``
+        ``GPU`` ``CPU``

     Examples:
         >>> input = Tensor([[[1, 2, 3], [4, 5, 6]]], mindspore.float32)

@@ -718,7 +718,7 @@ class ResizeBilinearV2(Primitive):
     ValueError: If `size` contains other than 2 elements.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor([[[[1, 2, 3, 4, 5], [1, 2, 3, 4, 5]]]], mindspore.float32)

@@ -780,7 +780,7 @@ class ResizeBicubic(Primitive):

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> class NetResizeBicubic(nn.Cell):

@@ -883,7 +883,7 @@ class ResizeArea(Primitive):
     ValueError: If any value of `size` is not positive.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> images = Tensor([[[[2], [4], [6], [8]], [[10], [12], [14], [16]]]], mindspore.float16)
@@ -3278,7 +3278,7 @@ class TruncateDiv(Primitive):
     TypeError: If `x` and `y` is not one of the following: Tensor, Number, bool.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.array([2, 4, -1]), mindspore.int32)

@@ -3331,7 +3331,7 @@ class TruncateMod(Primitive):
     ValueError: If the shape `x` and `y` cannot be broadcasted to each other.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.array([2, 4, -1]), mindspore.int32)

@@ -4698,7 +4698,7 @@ class Sign(Primitive):
     TypeError: If `x` is not a Tensor.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.array([[2.0, 0.0, -1.0]]), mindspore.float32)

@@ -4743,7 +4743,7 @@ class Tan(Primitive):
     Refer to :func:`mindspore.ops.tan` for more details.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> tan = ops.Tan()

@@ -4815,7 +4815,7 @@ class Atan2(_MathBinaryOp):
     Refer to :func:`mindspore.ops.atan2` for more details.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.array([0, 1]), mindspore.float32)

@@ -4888,7 +4888,7 @@ class BitwiseAnd(_BitwiseBinaryOp):
     Refer to :func:`mindspore.ops.bitwise_and` for more details.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.array([0, 0, 1, -1, 1, 1, 1]), mindspore.int16)

@@ -4907,7 +4907,7 @@ class BitwiseOr(_BitwiseBinaryOp):
     Refer to :func:`mindspore.ops.bitwise_or` for more details.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.array([0, 0, 1, -1, 1, 1, 1]), mindspore.int16)

@@ -4926,7 +4926,7 @@ class BitwiseXor(_BitwiseBinaryOp):
     Refer to :func:`mindspore.ops.bitwise_xor` for more details.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.array([0, 0, 1, -1, 1, 1, 1]), mindspore.int16)

@@ -5795,7 +5795,7 @@ class ComplexAbs(Primitive):
     TypeError: If the input type is not complex64 or complex128.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.asarray(np.complex(3+4j)), mindspore.complex64)

@@ -5858,7 +5858,7 @@ class Complex(Primitive):
     TypeError: If the dtypes of two inputs are not same.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> real = Tensor(np.array([1]), mindspore.float32)

@@ -6363,7 +6363,7 @@ class LuSolve(Primitive):
     ValueError: If `x` dimension less than 2, `lu_data` dimension less than 2 or `lu_pivots` dimension less than 1.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.array([[1], [3], [3]]), mindspore.float32)

@@ -6853,7 +6853,7 @@ class Zeta(Primitive):
     ValueError: If shape of `x` is not same as the `q`.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.array([10.]), mindspore.float32)

@@ -6960,7 +6960,7 @@ class Renorm(Primitive):
     Refer to :func:`mindspore.ops.renorm` for more details.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor(np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3]]), mindspore.float32)

@@ -7298,7 +7298,7 @@ class NextAfter(Primitive):
     ValueError: If `x1`'s shape is not the same as `x2`.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> nextafter = ops.NextAfter()
@@ -1273,7 +1273,7 @@ class BatchNorm(PrimitiveWithInfer):
     TypeError: If dtype of `input_x`, `scale` is neither float16 nor float32.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> input_x = Tensor(np.ones([2, 2]), mindspore.float32)

@@ -3664,7 +3664,7 @@ class ResizeBilinear(PrimitiveWithInfer):
     ValueError: If length of shape of `x` is not equal to 4.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> x = Tensor([[[[1, 2, 3, 4, 5], [1, 2, 3, 4, 5]]]], mindspore.float32)

@@ -3752,7 +3752,7 @@ class UpsampleTrilinear3D(Primitive):
     ValueError: If size of `output_size` is not equal 3 when `output_size` is specified.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> ops = ops.UpsampleTrilinear3D(output_size=[4, 64, 48])

@@ -5689,7 +5689,7 @@ class ApplyAdadelta(Primitive):
     is not supported.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> import numpy as np

@@ -5784,7 +5784,7 @@ class ApplyAdagrad(Primitive):
     RuntimeError: If the data type of `var`, `accum` and `grad` conversion of Parameter is not supported.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> class Net(nn.Cell):

@@ -5980,7 +5980,7 @@ class SparseApplyAdagradV2(Primitive):
     RuntimeError: If the data type of `var`, `accum` and `grad` conversion of Parameter is not supported.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> class Net(nn.Cell):

@@ -8363,7 +8363,7 @@ class SoftShrink(Primitive):
     Refer to :func:`mindspore.ops.soft_shrink` for more details.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> import mindspore

@@ -8391,7 +8391,7 @@ class HShrink(Primitive):
     Refer to :func:`mindspore.ops.hardshrink` for more details.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> import mindspore as ms

@@ -8596,7 +8596,7 @@ class SparseApplyRMSProp(Primitive):
     RuntimeError: If the data type of `var`, `ms`, `mom` and `grad` conversion of Parameter is not supported.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> class SparseApplyRMSPropNet(nn.Cell):

@@ -8938,7 +8938,7 @@ class ApplyAdamWithAmsgrad(Primitive):
     ValueError: If the shape of `beta1_power`, `beta2_power`, `lr` is not 0.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> class ApplyAdamWithAmsgradNet(nn.Cell):

@@ -9480,7 +9480,7 @@ class TripletMarginLoss(Primitive):
     ValueError: If `reduction` is not one of 'none', 'mean', 'sum'.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> loss = ops.TripletMarginLoss()

@@ -9510,7 +9510,7 @@ class DeformableOffsets(Primitive):
     Refer to :func:`mindspore.ops.deformable_conv2d` for more details.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``
     """

     @prim_attr_register

@@ -9559,8 +9559,7 @@ class DeformableOffsets(Primitive):
 class GridSampler2D(Primitive):
     """
     This operation samples 2d input_x by using interpolation based on flow field grid,
-    which is usually gennerated by
-    affine_grid.
+    which is usually gennerated by :func:`mindspore.ops.affine_grid`.

     Args:
         interpolation_mode (str, optional): An optional string specifying the interpolation method.

@@ -9656,7 +9655,7 @@ class Pdist(Primitive):
     ValueError: If dimension of `x` is not 2.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> from mindspore import Tensor
@@ -223,7 +223,7 @@ class SparseSlice(Primitive):
     ValueError: If the shape of `shape` is not corresponding to `size`.

     Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
+        ``Ascend`` ``GPU`` ``CPU``

     Examples:
         >>> indices = Tensor(np.array([[0, 1], [1, 2], [1, 3], [2, 2]]).astype(np.int64))

@@ -1527,7 +1527,7 @@ class SparseAdd(Primitive):
     TypeError: If (x1_values/x2_values)'s type is not matched with thresh's type.

     Supported Platforms:
-        ``CPU`` ``GPU``
+        ``GPU`` ``CPU``

     Examples:
         >>> from mindspore import Tensor
@@ -56,7 +56,7 @@ def block_diag(*arrs):
     ValueError: If there are Tensors with dimensions higher than 2 in all arguments.

     Supported Platforms:
-        ``CPU`` ``GPU``
+        ``GPU`` ``CPU``

     Examples:
         >>> import numpy as onp

@@ -155,7 +155,7 @@ def solve_triangular(a, b, trans=0, lower=False, unit_diagonal=False,
     ValueError: If `a` is singular matrix.

     Supported Platforms:
-        ``CPU`` ``GPU``
+        ``GPU`` ``CPU``

     Examples:
         Solve the lower triangular system :math:`a x = b`, where::

@@ -224,7 +224,7 @@ def inv(a, overwrite_a=False, check_finite=True):
     ValueError: If :math:`a` is not square, or not 2D.

     Supported Platforms:
-        ``CPU`` ``GPU``
+        ``GPU`` ``CPU``

     Examples:
         >>> import numpy as onp

@@ -289,7 +289,7 @@ def cho_factor(a, lower=False, overwrite_a=False, check_finite=True):
     ValueError: If input a tensor is not a square matrix or it's dims not equal to 2D.

     Supported Platforms:
-        ``CPU`` ``GPU``
+        ``GPU`` ``CPU``

     Examples:
         >>> import numpy as onp

@@ -350,7 +350,7 @@ def cholesky(a, lower=False, overwrite_a=False, check_finite=True):
     ValueError: If input a tensor is not a square matrix or it's dims not equal to 2D.

     Supported Platforms:
-        ``CPU`` ``GPU``
+        ``GPU`` ``CPU``

     Examples:
         >>> import numpy as onp

@@ -402,7 +402,7 @@ def cho_solve(c_and_lower, b, overwrite_b=False, check_finite=True):
     Tensor, the solution to the system a x = b

     Supported Platforms:
-        ``CPU`` ``GPU``
+        ``GPU`` ``CPU``

     Examples:
         >>> import numpy as onp

@@ -512,7 +512,7 @@ def eigh(a, b=None, lower=True, eigvals_only=False, overwrite_a=False,
     ValueError: If `eigvals` is not None.

     Supported Platforms:
-        ``CPU`` ``GPU``
+        ``GPU`` ``CPU``

     Examples:
         >>> import numpy as onp

@@ -603,7 +603,7 @@ def lu_factor(a, overwrite_a=False, check_finite=True):
     ValueError: If :math:`a` is not square.

     Supported Platforms:
-        ``CPU`` ``GPU``
+        ``GPU`` ``CPU``

     Examples:
         >>> import numpy as onp

@@ -672,7 +672,7 @@ def lu(a, permute_l=False, overwrite_a=False, check_finite=True):
     - Tensor, :math:`(K, N)` upper triangular or trapezoidal matrix.

     Supported Platforms:
-        ``CPU`` ``GPU``
+        ``GPU`` ``CPU``

     Examples:
         >>> import numpy as onp

@@ -750,7 +750,7 @@ def lu_solve(lu_and_piv, b, trans=0, overwrite_b=False, check_finite=True):
     Tensor, solution to the system

     Supported Platforms:
-        ``CPU`` ``GPU``
+        ``GPU`` ``CPU``

     Examples:
         >>> import numpy as onp
@@ -327,7 +327,7 @@ def line_search(f, xk, pk, jac=None, gfk=None, old_fval=None, old_old_fval=None,
     LineSearchResults, results of line search results.

     Supported Platforms:
-        ``CPU`` ``GPU``
+        ``GPU`` ``CPU``

     Examples:
         >>> import numpy as onp

@@ -96,7 +96,7 @@ def minimize(func, x0, args=(), method=None, jac=None, hess=None, hessp=None, bo
     OptimizeResults, object holding optimization results.

     Supported Platforms:
-        ``CPU`` ``GPU``
+        ``GPU`` ``CPU``

     Examples:
         >>> import numpy as onp

@@ -318,7 +318,7 @@ def gmres(A, b, x0=None, *, tol=1e-5, restart=20, maxiter=None,
     >0 : convergence to tolerance not achieved, number of iterations. <0 : illegal input or breakdown.

     Supported Platforms:
-        ``CPU`` ``GPU``
+        ``GPU`` ``CPU``

     Examples:
         >>> import numpy as onp

@@ -493,7 +493,7 @@ def cg(A, b, x0=None, *, tol=1e-5, atol=0.0, maxiter=None, M=None, callback=None
     TypeError: If `A` and `b` don't have the same data types.

     Supported Platforms:
-        ``CPU`` ``GPU``
+        ``GPU`` ``CPU``

     Examples:
         >>> import numpy as onp

@@ -617,7 +617,7 @@ def bicgstab(A, b, x0=None, *, tol=1e-5, atol=0.0, maxiter=None, M=None):
     TypeError: If `A`, `x0` and `b` don't have the same float types(`mstype.float32` or `mstype.float64`).

     Supported Platforms:
-        ``CPU`` ``GPU``
+        ``GPU`` ``CPU``

     Examples:
         >>> import numpy as onp
@@ -1884,7 +1884,8 @@ def load_distributed_checkpoint(network, checkpoint_filenames, predict_strategy=
     """
     Load checkpoint into net for distributed predication. Used in the case of distributed inference.
     For details of distributed inference, please check:
-    `<https://www.mindspore.cn/tutorials/experts/en/master/parallel/distributed_inference.html>`_.
+    `Distributed Inference
+    <https://www.mindspore.cn/tutorials/experts/en/master/parallel/distributed_inference.html>`_ .

     Args:
         network (Cell): Network for distributed predication.