commit a1d5aebc72
@@ -67,6 +67,7 @@ APIs added, removed, or with support changes in `mindspore.ops` compared with the previous version
 mindspore.ops.FractionalMaxPool
 mindspore.ops.FractionalMaxPoolWithFixedKsize
 mindspore.ops.FractionalMaxPool3DWithFixedKsize
+mindspore.ops.GridSampler2D
 mindspore.ops.GridSampler3D
 mindspore.ops.LayerNorm
 mindspore.ops.LRN

@@ -196,7 +197,6 @@ APIs added, removed, or with support changes in `mindspore.ops` compared with the previous version
 :template: classtemplate.rst

 mindspore.ops.ComputeAccidentalHits
-mindspore.ops.GridSampler2D
 mindspore.ops.LogUniformCandidateSampler
 mindspore.ops.UniformCandidateSampler
 mindspore.ops.UpsampleNearest3D

@@ -13,8 +13,10 @@ mindspore.load

     - **dec_key** (bytes) - The byte-type key used for decryption. The valid length is 16, 24, or 32.
     - **dec_mode** (Union[str, function]) - Specifies the decryption mode, effective only when `dec_key` is set. Options: 'AES-GCM' | 'SM4-CBC' | 'AES-CBC' | a customized decryption function. Default: "AES-GCM".

+      - For details on loading with customized decryption, see the `tutorial <https://www.mindspore.cn/mindarmour/docs/zh-CN/r2.0.0-alpha/model_encrypt_protection.html>`_.
-    - **obf_func** (function) - The function required for importing an obfuscated model; see `obfuscate_model() <https://www.mindspore.cn/docs/zh-CN/r2.0.0-alpha/api_python/mindspore/mindspore.obfuscate_model.html>` for details.
+    - **obf_func** (function) - The function required for importing an obfuscated model; see `obfuscate_model() <https://www.mindspore.cn/docs/zh-CN/r2.0.0-alpha/api_python/mindspore/mindspore.obfuscate_model.html>`_ for details.

 Returns:
     GraphCell, an executable compiled graph constructed from `GraphCell`.

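For reference, a minimal sketch of loading an encrypted MindIR with the decryption parameters documented above (the file name, key bytes, and input shape are hypothetical; the file must have been exported with the same key and mode)::

    import numpy as np
    import mindspore as ms
    import mindspore.nn as nn

    # Hypothetical 16-byte key; valid dec_key lengths are 16, 24, or 32 bytes.
    graph = ms.load("encrypted_net.mindir", dec_key=b"0123456789ABCDEF", dec_mode="AES-GCM")
    net = nn.GraphCell(graph)  # wrap the loaded graph so it can be executed
    out = net(ms.Tensor(np.ones((1, 3, 224, 224), np.float32)))
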
@@ -3,7 +3,7 @@ mindspore.load_distributed_checkpoint

 .. py:function:: mindspore.load_distributed_checkpoint(network, checkpoint_filenames, predict_strategy=None, train_strategy_filename=None, strict_load=False, dec_key=None, dec_mode='AES-GCM')

-Load checkpoint into the network for distributed prediction. Used in the case of distributed inference. For details of distributed inference, see: https://www.mindspore.cn/tutorials/experts/zh-CN/r2.0.0-alpha/parallel/distributed_inference.html.
+Load checkpoint into the network for distributed prediction. Used in the case of distributed inference. For details of distributed inference, see: `Distributed Inference <https://www.mindspore.cn/tutorials/experts/zh-CN/r2.0.0-alpha/parallel/distributed_inference.html>`_ .

 Parameters:
     - **network** (Cell) - The network for distributed prediction.

@@ -3,4 +3,4 @@ mindspore.reset_ps_context

 .. py:function:: mindspore.reset_ps_context()

-Reset the attributes in the parameter server training mode context to their default values. For the meaning of each field and its default value, see the 'set_ps_context' API.
+Reset the attributes in the parameter server training mode context to their default values. For the meaning of each field and its default value, see the :func:`mindspore.set_ps_context` API.

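As a small usage sketch of the reset semantics described above (a hedged illustration; actually training in parameter server mode requires additional environment setup not shown here)::

    import mindspore as ms

    ms.set_ps_context(enable_ps=True)  # switch on parameter server training mode
    ms.reset_ps_context()              # every field falls back to its default, e.g. enable_ps=False
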
@@ -3,7 +3,7 @@ mindspore.nn.GetNextSingleOp

 .. py:class:: mindspore.nn.GetNextSingleOp(dataset_types, dataset_shapes, queue_name)

-A Cell used to fetch the next piece of data. For more detailed information, refer to `mindspore.ops.GetNext`.
+A Cell used to fetch the next piece of data. For more detailed information, refer to :class:`mindspore.ops.GetNext`.

 Parameters:
     - **dataset_types** (list[:class:`mindspore.dtype`]) - The types of the dataset.

@@ -10,7 +10,7 @@ mindspore.nn.GraphCell

 Parameters:
     - **graph** (FuncGraph) - The compiled graph loaded from MindIR.
     - **params_init** (dict) - The parameters to initialize in the graph. The key is the parameter name, of type string, and the value is a Tensor or Parameter. If the parameter name already exists in the graph, its value is updated; if it does not exist, it is ignored. Default: None.
-    - **obf_password** (int) - The password used for dynamic obfuscation. Dynamic obfuscation is a model protection method; see :func:`mindspore.train.serialization.obfuscate_model`. If the imported `graph` is an obfuscated model, `obf_password` should be provided. The valid range of `obf_password` is (0, 9223372036854775807]. Default: None.
+    - **obf_password** (int) - The password used for dynamic obfuscation. Dynamic obfuscation is a model protection method; see :func:`mindspore.obfuscate_model`. If the imported `graph` is an obfuscated model, `obf_password` should be provided. The valid range of `obf_password` is (0, 9223372036854775807]. Default: None.

 Raises:
     - **TypeError** - If the graph is not of type FuncGraph.

@@ -28,7 +28,7 @@ mindspore.nn.HingeEmbeddingLoss

     - **reduction** (str) - Specifies the reduction applied to the output: 'none', 'mean', or 'sum'. Default: 'mean'.

 Inputs:
-    - **logits** (Tensor) - The predicted value, denoted as :math:`x` in the formula, with shape:math:`(*)`, where `*` means any number of dimensions.
+    - **logits** (Tensor) - The predicted value, denoted as :math:`x` in the formula, with shape :math:`(*)`, where `*` means any number of dimensions.
     - **labels** (Tensor) - The label value, denoted as :math:`y` in the formula, with the same shape as `logits`, containing 1 or -1.

 Returns:

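To make the reduction concrete, a hedged numeric sketch: element-wise the loss is :math:`x_n` when :math:`y_n = 1` and :math:`\max(0, margin - x_n)` when :math:`y_n = -1`, then averaged under 'mean'::

    import numpy as np
    import mindspore as ms
    import mindspore.nn as nn

    logits = ms.Tensor(np.array([0.9, 0.2, -0.5]), ms.float32)
    labels = ms.Tensor(np.array([1., -1., -1.]), ms.float32)
    loss = nn.HingeEmbeddingLoss(margin=1.0, reduction="mean")
    # Per element: 0.9, max(0, 1-0.2)=0.8, max(0, 1+0.5)=1.5 -> mean = 3.2/3 ≈ 1.0667
    print(loss(logits, labels))
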
@@ -4,7 +4,7 @@ mindspore.nn.MaxUnpool1d

 .. py:class:: mindspore.nn.MaxUnpool1d(kernel_size, stride=None, padding=0)

 A partial inverse of `Maxpool1d`. `Maxpool1d` is not fully invertible, since the non-maximal values are lost.
-`MaxUnpool1d` takes the output of `MaxPool1d` as input, including the indices of the maximal values. In computing the partial inverse of `maxpool1d`, the non-maximal values are set to zero.
+`MaxUnpool1d` takes the output of `MaxPool1d` as input, including the indices of the maximal values. In computing the partial inverse of `Maxpool1d`, the non-maximal values are set to zero.
 The supported input data format is :math:`(N, C, H_{in})` or :math:`(C, H_{in})`, and the output data format is :math:`(N, C, H_{out})`
 or :math:`(C, H_{out})`. The formula is as follows:

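A brief sketch of the inversion; the pooled values and their indices are written out by hand here rather than produced by a pooling layer::

    import numpy as np
    import mindspore as ms
    import mindspore.nn as nn

    x = ms.Tensor(np.array([[[2., 4.]]]), ms.float32)    # pooled maxima, shape (1, 1, 2)
    indices = ms.Tensor(np.array([[[1, 3]]]), ms.int64)  # flat positions of those maxima
    unpool = nn.MaxUnpool1d(kernel_size=2, stride=2)
    print(unpool(x, indices))  # [[[0. 2. 0. 4.]]]: non-maximal positions become zero
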
@@ -6,7 +6,7 @@ mindspore.nn.ReflectionPad1d

 Pads the input `x` according to `padding`.

 Parameters:
-    - **padding** (union[int, tuple]) - The padding size. If the input is an int, all boundaries are padded with the same size; if it is a tuple, the order is (pad_left, pad_right).
+    - **padding** (union[int, tuple]) - The padding size. If the input is an int, all boundaries are padded with the same size; if it is a tuple, the order is :math:`(pad\_left, pad\_right)`.

 Inputs:
     - **x** (Tensor) - The input Tensor, 2D or 3D, with shape :math:`(C, W_{in})` or :math:`(N, C, W_{in})`.

@@ -6,13 +6,13 @@ mindspore.nn.ReflectionPad2d

 Pads the input `x` according to `padding`.

 Parameters:
-    - **padding** (union[int, tuple]) - The padding size. If the input is an int, all boundaries are padded with the same size; if it is a tuple, the order is :math:`(pad_{left}, pad_{right}, pad_{up}, pad_{down})`.
+    - **padding** (union[int, tuple]) - The padding size. If the input is an int, all boundaries are padded with the same size; if it is a tuple, the order is :math:`(pad\_left, pad\_right, pad\_up, pad\_down)`.

 Inputs:
     - **x** (Tensor) - The input Tensor, with shape :math:`(C, H_{in}, W_{in})` or :math:`(N, C, H_{in}, W_{in})`.

 Outputs:
-    Tensor, the padded Tensor, with shape :math:`(C, H_{out}, W_{out})` or :math:`(N, C, H_{out}, W_{out})`, where :math:`H_{out} = H_{in} + pad_{up} + pad_{down}` and :math:`W_{out} = W_{in} + pad_{left} + pad_{right}`.
+    Tensor, the padded Tensor, with shape :math:`(C, H_{out}, W_{out})` or :math:`(N, C, H_{out}, W_{out})`, where :math:`H_{out} = H_{in} + pad\_up + pad\_down` and :math:`W_{out} = W_{in} + pad\_left + pad\_right`.

 Raises:
     - **TypeError** - If `padding` is not a tuple or an int.

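A short sketch of how the tuple order maps to the output shape (note reflection padding requires each pad to be smaller than the corresponding input dimension)::

    import numpy as np
    import mindspore as ms
    import mindspore.nn as nn

    x = ms.Tensor(np.arange(9, dtype=np.float32).reshape(1, 1, 3, 3))
    pad = nn.ReflectionPad2d((1, 1, 2, 0))  # (pad_left, pad_right, pad_up, pad_down)
    y = pad(x)
    print(y.shape)  # (1, 1, 5, 5): H_out = 3 + 2 + 0, W_out = 3 + 1 + 1
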
@@ -3,7 +3,7 @@ mindspore.ops.GridSampler2D

 .. py:class:: mindspore.ops.GridSampler2D(interpolation_mode='bilinear', padding_mode='zeros', align_corners=False)

-This operation samples the 2D input_x using interpolation based on a flow field grid, which is usually generated by `affine_grid`.
+This operation samples the 2D input_x using interpolation based on a flow field grid, which is usually generated by :func:`mindspore.ops.affine_grid`.

 Parameters:
     - **interpolation_mode** (str, optional) - An optional string specifying the interpolation method. Optional values: "bilinear", "nearest". Default: "bilinear".

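An identity-grid sketch, under the assumption that grid values are (x, y) sampling coordinates in [-1, 1]: with `align_corners=True`, a uniform grid should approximately reproduce the input::

    import numpy as np
    import mindspore as ms
    from mindspore import ops

    input_x = ms.Tensor(np.arange(16, dtype=np.float32).reshape(1, 1, 4, 4))
    lin = np.linspace(-1, 1, 4, dtype=np.float32)
    gy, gx = np.meshgrid(lin, lin, indexing="ij")
    grid = ms.Tensor(np.stack([gx, gy], axis=-1)[None])  # shape (N, H_out, W_out, 2)
    sampler = ops.GridSampler2D(interpolation_mode="bilinear",
                                padding_mode="zeros", align_corners=True)
    print(sampler(input_x, grid))  # ≈ input_x
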
@@ -10,7 +10,7 @@ mindspore.ops.Zeta

 .. math::

-    \\zeta \\left ( x,q \\right )= \\textstyle \\sum_{n=0} ^ {\\infty} \\left ( q+n\\right )^{-x}
+    \zeta \left ( x,q \right )= \textstyle \sum_{n=0} ^ {\infty} \left ( q+n\right )^{-x}

 Inputs:
     - **x** (Tensor) - Tensor, with data type float32 or float64.

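A one-value numeric check of the formula above: since :math:`\zeta(x, q) = \sum_{n=0}^{\infty} (q+n)^{-x}`, we have :math:`\zeta(2, 1) = \pi^2/6 \approx 1.6449`::

    import numpy as np
    import mindspore as ms
    from mindspore import ops

    x = ms.Tensor(np.array([2.0]), ms.float32)
    q = ms.Tensor(np.array([1.0]), ms.float32)
    print(ops.Zeta()(x, q))  # ≈ 1.6449, i.e. pi**2 / 6
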
@@ -3,9 +3,9 @@ mindspore.ops.broadcast_to

 .. py:function:: mindspore.ops.broadcast_to(x, shape)

-Broadcasts the input shape to the target shape. The dimension of the input shape must be smaller than or equal to that of the target shape. Suppose the input shape is :math: `(x1, x2, ..., xm)` and the target shape is :math:`(*, y_1, y_2, ..., y_m)`, where :math:`*` means any additional dimensions. The broadcast rules are as follows:
+Broadcasts the input shape to the target shape. The dimension of the input shape must be smaller than or equal to that of the target shape. Suppose the input shape is :math:`(x_1, x_2, ..., x_m)` and the target shape is :math:`(*, y_1, y_2, ..., y_m)`, where :math:`*` means any additional dimensions. The broadcast rules are as follows:

-Compare the values of `x_m` and `y_m`, `x_{m-1}` and `y_{m-1}`, ..., `x_1` and `y_1` consecutively to decide whether these shapes are broadcastable and what the corresponding dimension of the output shape is.
+Compare the values of :math:`x_m` and :math:`y_m`, :math:`x_{m-1}` and :math:`y_{m-1}`, ..., :math:`x_1` and :math:`y_1` consecutively to decide whether these shapes are broadcastable and what the corresponding dimension of the output shape is.

 - If they are equal, that value is the value of that dimension of the target shape. For example, if the input shape is :math:`(2, 3)` and the target shape is :math:`(2, 3)`, the output shape is :math:`(2, 3)`.

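A sketch of the rules above: aligning from the right, an input shape (2, 1) against target (3, 2, 4) matches 2 with 2, stretches the 1 to 4, and fills the extra leading dimension::

    import numpy as np
    import mindspore as ms
    from mindspore import ops

    x = ms.Tensor(np.array([[1.], [2.]]), ms.float32)  # shape (2, 1)
    y = ops.broadcast_to(x, (3, 2, 4))
    print(y.shape)  # (3, 2, 4)
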
@@ -6,10 +6,8 @@

 Clips the values of multiple Tensors by the ratio of the sum of their norms.

 .. note::
-    The input `x` should be a tuple or list of Tensors. Otherwise, an error will be raised.
-
-.. note::
-    In semi-auto parallel mode or auto parallel mode, if the input is a gradient, the squared sum of the gradients on all devices will be automatically aggregated.
+    - The input `x` should be a tuple or list of Tensors. Otherwise, an error will be raised.
+    - In semi-auto parallel mode or auto parallel mode, if the input is a gradient, the squared sum of the gradients on all devices will be automatically aggregated.

 Parameters:
     - **x** (Union(tuple[Tensor], list[Tensor])) - A tuple composed of Tensors, each element of which is a Tensor of any dimension.

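A numeric sketch of the ratio-based clipping: the global norm here is sqrt(3² + 4² + 12²) = 13, so with `clip_norm=1.0` every tensor is scaled by 1/13::

    import numpy as np
    import mindspore as ms
    from mindspore import ops

    g1 = ms.Tensor(np.array([3., 4.]), ms.float32)
    g2 = ms.Tensor(np.array([12.]), ms.float32)
    clipped = ops.clip_by_global_norm([g1, g2], clip_norm=1.0)
    print(clipped[0], clipped[1])  # [3/13, 4/13] and [12/13]
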
@@ -7,7 +7,7 @@ mindspore.ops.cummax

 .. math::
     \begin{array}{ll} \\
-        y{i} = max(x{1}, x{2}, ... , x{i})
+        y_{i} = max(x_{1}, x_{2}, ... , x_{i})
     \end{array}

 Parameters:

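A short sketch of the running maximum defined by the formula above (`cummin` in the next hunk is symmetric)::

    import numpy as np
    import mindspore as ms
    from mindspore import ops

    x = ms.Tensor(np.array([1., 4., 3., 5., 2.]), ms.float32)
    values, indices = ops.cummax(x, axis=0)
    print(values)   # [1. 4. 4. 5. 5.]  y_i = max(x_1, ..., x_i)
    print(indices)  # [0 1 1 3 3]       positions of the running maxima
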
@@ -7,7 +7,7 @@ mindspore.ops.cummin

 .. math::
     \begin{array}{ll} \\
-        y{i} = min(x{1}, x{2}, ... , x{i})
+        y_{i} = min(x_{1}, x_{2}, ... , x_{i})
     \end{array}

 Parameters:

@@ -9,7 +9,7 @@ mindspore.ops.floor

     out_i = \lfloor x_i \rfloor

 Parameters:
-    - **x** (Tensor) - The input of Floor, a Tensor of any dimension whose rank should be less than 8. Its data type must be float16 or float32.
+    - **x** (Tensor) - The input of floor, a Tensor of any dimension whose rank should be less than 8. Its data type must be float16 or float32.

 Returns:
     Tensor, with the same shape as `x`.

@@ -3,8 +3,8 @@ mindspore.ops.max_unpool1d

 .. py:function:: mindspore.ops.max_unpool1d(x, indices, kernel_size, stride=None, padding=0, output_size=None)

-A partial inverse of `Maxpool1d`. `Maxpool1d` is not fully invertible, since the non-maximal values are lost.
-`max_unpool1d` takes the output of `MaxPool1d` as input, including the indices of the maximal values. In computing the partial inverse of `Maxpool1d`, the non-maximal values are set to zero.
+A partial inverse of `maxpool1d`. `maxpool1d` is not fully invertible, since the non-maximal values are lost.
+`max_unpool1d` takes the output of `maxpool1d` as input, including the indices of the maximal values. In computing the partial inverse of `maxpool1d`, the non-maximal values are set to zero.
 The supported input data format is :math:`(N, C, H_{in})` or :math:`(C, H_{in})`, and the output data format is :math:`(N, C, H_{out})`
 or :math:`(C, H_{out})`. The formula is as follows:

@@ -6,7 +6,7 @@ mindspore.ops.sqrt

 Returns the square root of the input Tensor element-wise.

 .. math::
-    out_{i} = \\sqrt{x_{i}}
+    out_{i} = \sqrt{x_{i}}

 Parameters:
     - **x** (Tensor) - The input Tensor, with data type number.Number, whose rank should be in the range [0, 7].

@@ -66,6 +66,7 @@ Neural Network
 mindspore.ops.FractionalMaxPool
 mindspore.ops.FractionalMaxPoolWithFixedKsize
 mindspore.ops.FractionalMaxPool3DWithFixedKsize
+mindspore.ops.GridSampler2D
 mindspore.ops.GridSampler3D
 mindspore.ops.LayerNorm
 mindspore.ops.LRN

@@ -196,7 +197,6 @@ Sampling Operator
 :template: classtemplate.rst

 mindspore.ops.ComputeAccidentalHits
-mindspore.ops.GridSampler2D
 mindspore.ops.LogUniformCandidateSampler
 mindspore.ops.UniformCandidateSampler

@@ -1153,6 +1153,8 @@ def reset_ps_context():
     Reset parameter server training mode context attributes to the default values:

     - enable_ps: False.
+
+    Meaning of each field and its default value refer to :func:`mindspore.set_ps_context`.
     """
    _reset_ps_context()

@@ -2190,7 +2190,7 @@ class GraphCell(Cell):
            If the parameter exists in the graph according to the name, update its value.
            If the parameter does not exist, ignore it. Default: None.
        obf_password (int): The password used for dynamic obfuscation. "dynamic obfuscation" is used for model
-            protection, which can refer to `mindspore.train.serialization.obfuscate_model()`. If the input 'graph' is a
+            protection, which can refer to :func:`mindspore.obfuscate_model`. If the input 'graph' is a
            func_graph loaded from a mindir file obfuscated in password mode, then obf_password should be provided.
            obf_password should be larger than zero and less than or equal to int_64 (9223372036854775807). default: None.

@@ -439,14 +439,14 @@ class ReflectionPad1d(_ReflectionPadNd):
    Args:
        padding (union[int, tuple]): The padding size to pad the last dimension of input tensor.
            If padding is an integer: all directions will be padded with the same size.
-            If padding is a tuple: uses :math:`(pad_{left}, pad_{right})` to pad.
+            If padding is a tuple: uses :math:`(pad\_left, pad\_right)` to pad.

    Inputs:
        - **x** (Tensor) - 2D or 3D, shape: :math:`(C, W_{in})` or :math:`(N, C, W_{in})`.

    Outputs:
        Tensor, after padding. Shape: :math:`(C, W_{out})` or :math:`(N, C, W_{out})`,
-        where :math:`W_{out} = W_{in} + pad_{left} + pad_{right}`.
+        where :math:`W_{out} = W_{in} + pad\_left + pad\_right`.

    Raises:
        TypeError: If 'padding' is not a tuple or int.

@@ -488,14 +488,14 @@ class ReflectionPad2d(_ReflectionPadNd):
    Args:
        padding (union[int, tuple]): The padding size to pad the input tensor.
            If padding is an integer: all directions will be padded with the same size.
-            If padding is a tuple: uses :math:`(pad_{left}, pad_{right}, pad_{up}, pad_{down})` to pad.
+            If padding is a tuple: uses :math:`(pad\_left, pad\_right, pad\_up, pad\_down)` to pad.

    Inputs:
        - **x** (Tensor) - 3D or 4D, shape: :math:`(C, H_{in}, W_{in})` or :math:`(N, C, H_{in}, W_{in})`.

    Outputs:
        Tensor, after padding. Shape: :math:`(C, H_{out}, W_{out})` or :math:`(N, C, H_{out}, W_{out})`,
-        where :math:`H_{out} = H_{in} + pad_{up} + pad_{down}`, :math:`W_{out} = W_{in} + pad_{left} + pad_{right}`.
+        where :math:`H_{out} = H_{in} + pad\_up + pad\_down`, :math:`W_{out} = W_{in} + pad\_left + pad\_right`.

    Raises:
        TypeError: If 'padding' is not a tuple or int.

@@ -389,7 +389,7 @@ class GetNextSingleOp(Cell):
    """
    Cell to run for getting the next operation.

-    For detailed information, refer to `mindspore.ops.GetNext`.
+    For detailed information, refer to :class:`mindspore.ops.GetNext`.

    Args:
        dataset_types (list[:class:`mindspore.dtype`]): The types of dataset.

@@ -95,12 +95,10 @@ def clip_by_global_norm(x, clip_norm=1.0, use_norm=None):
    Clips tensor values by the ratio of the sum of their norms.

    Note:
-        Input `x` should be a tuple or list of tensors. Otherwise, it will raise an error.
-
-    Note:
-        On the SEMI_AUTO_PARALLEL mode or AUTO_PARALLEL mode, if the input `x` is the gradient,
-        the gradient norm values on all devices will be automatically aggregated by allreduce inserted after the local
-        square sum of the gradients.
+        - Input `x` should be a tuple or list of tensors. Otherwise, it will raise an error.
+        - On the SEMI_AUTO_PARALLEL mode or AUTO_PARALLEL mode, if the input `x` is the gradient,
+          the gradient norm values on all devices will be automatically aggregated by allreduce inserted
+          after the local square sum of the gradients.

    Args:
        x (Union(tuple[Tensor], list[Tensor])): Input data to clip.

@@ -3746,11 +3746,12 @@ def affine_grid(theta, output_size, align_corners=False):
 def broadcast_to(x, shape):
    """
    Broadcasts input tensor to a given shape. The dim of input shape must be smaller
-    than or equal to that of target shape. Suppose input shape is :math:`(x1, x2, ..., xm)`,
+    than or equal to that of target shape. Suppose input shape is :math:`(x_1, x_2, ..., x_m)`,
    target shape is :math:`(*, y_1, y_2, ..., y_m)`, where :math:`*` means any additional dimension.
    The broadcast rules are as follows:

-    Compare the value of `x_m` and `y_m`, `x_{m-1}` and `y_{m-1}`, ..., `x_1` and `y_1` consecutively and
+    Compare the value of :math:`x_m` and :math:`y_m`, :math:`x_{m-1}` and :math:`y_{m-1}`, ...,
+    :math:`x_1` and :math:`y_1` consecutively and
    decide whether these shapes are broadcastable and what the broadcast result is.

    If the value pairs at a specific dim are equal, then that value goes right into that dim of output shape.

@@ -4592,7 +4592,7 @@ def cummin(x, axis):

    .. math::
        \begin{array}{ll} \\
-            y{i} = min(x{1}, x{2}, ... , x{i})
+            y_{i} = min(x_{1}, x_{2}, ... , x_{i})
        \end{array}

    Args:

@@ -4644,7 +4644,7 @@ def cummax(x, axis):

    .. math::
        \begin{array}{ll} \\
-            y{i} = max(x{1}, x{2}, ... , x{i})
+            y_{i} = max(x_{1}, x_{2}, ... , x_{i})
        \end{array}

    Args:

@@ -675,11 +675,11 @@ def adaptive_max_pool3d(x, output_size, return_indices=False):

 def max_unpool1d(x, indices, kernel_size, stride=None, padding=0, output_size=None):
    r"""
-    Computes a partial inverse of MaxPool1d.
+    Computes a partial inverse of maxpool1d.

-    MaxPool1d is not fully invertible, since the non-maximal values are lost.
+    maxpool1d is not fully invertible, since the non-maximal values are lost.

-    max_unpool1d takes the output of MaxPool1d as input including the indices of the maximal values
+    max_unpool1d takes the output of maxpool1d as input including the indices of the maximal values
    and computes a partial inverse in which all non-maximal values are set to zero. Typically the input
    is of shape :math:`(N, C, H_{in})` or :math:`(C, H_{in})`, and the output is of shape :math:`(N, C, H_{out})`
    or :math:`(C, H_{out})`. The operation is as follows.

@@ -9602,7 +9602,7 @@ class DeformableOffsets(Primitive):
 class GridSampler2D(Primitive):
    """
    This operation samples 2d input_x by using interpolation based on flow field grid, which is usually generated by
-    affine_grid.
+    :func:`mindspore.ops.affine_grid`.

    Args:
        interpolation_mode (str): An optional string specifying the interpolation method. The optional values are

@@ -1880,7 +1880,8 @@ def load_distributed_checkpoint(network, checkpoint_filenames, predict_strategy=
    """
    Load checkpoint into net for distributed prediction. Used in the case of distributed inference.
    For details of distributed inference, please check:
-    `<https://www.mindspore.cn/tutorials/experts/en/r2.0.0-alpha/parallel/distributed_inference.html>`_.
+    `Distributed Inference <https://www.mindspore.cn/tutorials/experts/en/r2.0.0-alpha/parallel/
+    distributed_inference.html>`_ .

    Args:
        network (Cell): Network for distributed prediction.