modify the format and order of API files
Parent: 4a00560a8a
Commit: cf3550e306
@@ -53,7 +53,7 @@
 - **create_data_info_queue** (bool, optional) - Whether to create a queue that stores the data type and shape of each piece of data. Default: False, no queue is created.

 .. note::
-    If the device type is Ascend, the size of data transferred each time is limited to 256 MB.
+    If the device type is Ascend, the features of the data will be transferred one by one. The size of data transferred each time is limited to 256 MB.

 Returns:
     Dataset, a dataset object that helps send data to the device.
@@ -115,7 +115,7 @@
 - **create_data_info_queue** (bool, optional) - Whether to create a queue that stores the data type and shape. Default: False.

 .. note::
-    This interface will be removed or hidden in the future, it is recommended to use the `device_queue` interface.
+    This interface will be removed or hidden in the future. It is recommended to use the `device_queue` interface.
     If the device is Ascend, data is transferred piece by piece. Each data transfer is limited to 256 MB.

 Returns:
@@ -5,7 +5,7 @@ mindspore.dataset.audio.Angle
 Compute the angle of a sequence of complex numbers.

-.. note:: The dimension of the audio waveform to be processed needs to be (..., complex=2), where the 0th dimension represents the real part and the 1st dimension represents the imaginary part.
+.. note:: The dimension of the audio waveform to be processed needs to be (..., complex=2). The 0th dimension represents the real part and the 1st dimension represents the imaginary part.

 Raises:
     - **RuntimeError** - If the shape of the input audio is not <..., complex=2>.
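The note above only pins down the expected (..., complex=2) layout; as a rough illustration of the described semantics (not the MindSpore implementation), the angle of each (real, imag) pair can be sketched with numpy:

    import numpy as np

    # Toy waveform in the documented (..., complex=2) layout: last axis holds (real, imag).
    waveform = np.array([[1.0, 1.0], [0.0, 2.0], [-1.0, 0.0]])  # shape (3, 2)

    real, imag = waveform[..., 0], waveform[..., 1]
    angle = np.arctan2(imag, real)  # element-wise phase angle in radians, shape (3,)
    print(angle)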
@@ -5,7 +5,7 @@ mindspore.dataset.audio.ComplexNorm
 Compute the norm of a sequence of complex numbers.

-.. note:: The dimension of the audio waveform to be processed needs to be (..., complex=2), where the 0th dimension represents the real part and the 1st dimension represents the imaginary part.
+.. note:: The dimension of the audio waveform to be processed needs to be (..., complex=2). The 0th dimension represents the real part and the 1st dimension represents the imaginary part.

 Parameters:
     - **power** (float, optional) - Power of the norm, which must be non-negative. Default: 1.0.
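For reference, the norm with the documented `power` parameter reduces to a simple element-wise formula; a numpy sketch of that behaviour (illustrative only, not the dataset transform itself):

    import numpy as np

    def complex_norm(waveform, power=1.0):
        # Sketch of the described behaviour: norm = (real^2 + imag^2) ** (power / 2).
        real, imag = waveform[..., 0], waveform[..., 1]
        return (real ** 2 + imag ** 2) ** (power / 2)

    x = np.array([[3.0, 4.0], [0.0, 2.0]])    # two complex samples: 3+4j and 0+2j
    print(complex_norm(x))                    # [5. 2.]
    print(complex_norm(x, power=2.0))         # [25. 4.]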
@@ -5,7 +5,7 @@ mindspore.dataset.audio.TimeStretch
 Stretch the Short Time Fourier Transform (STFT) spectrogram of an audio waveform in time at a given rate, without changing the pitch.

-.. note:: The dimension of the audio waveform to be processed needs to be (..., freq, time, complex=2), where the 0th dimension represents the real part and the 1st dimension represents the imaginary part.
+.. note:: The dimension of the audio waveform to be processed needs to be (..., freq, time, complex=2). The 0th dimension represents the real part and the 1st dimension represents the imaginary part.

 Parameters:
     - **hop_length** (int, optional) - Length of hop between STFT windows, i.e. the number of samples between consecutive frames. Default: None, which means `n_freq - 1` is used.
@@ -1,5 +1,5 @@
 mindspore.dataset.text.BasicTokenizer
-=====================================
+======================================

 .. py:class:: mindspore.dataset.text.BasicTokenizer(lower_case=False, keep_whitespace=False, normalization_form=NormalizeForm.NONE, preserve_unused_token=True, with_offsets=False)
@@ -43,7 +43,7 @@
 .. warning:: This is an experimental interface that may be removed or changed in the future.

-.. note:: If this interface is called in advance to build the computational graph, `Model.train` will execute the graph directly. Pre-building the graph currently only supports GRAPH_MODE and Ascend devices, and only supports data sinking mode.
+.. note:: If this interface is called in advance to build the computational graph, `Model.train` will execute the graph directly. Pre-building the graph currently only supports GRAPH_MODE and Ascend devices. Only data sinking mode is supported.

 Parameters:
     - **train_dataset** (Dataset) - A training dataset iterator. If `train_dataset` is defined, the training computational graph will be built. Default: None.
@@ -58,8 +58,8 @@
 When PyNative mode or a CPU device is used, the model evaluation process runs in non-sink mode.

 .. note::
-    If `dataset_sink_mode` is set to True, the data will be sent to the device. In this case the dataset is bound to the model, and the dataset can only be used by the current model. If the device is Ascend, the data features will be transferred one by one, and each data transfer is limited to 256 MB.
-    This interface builds and executes the computational graph; if `Model.build` has been executed beforehand, it executes the graph directly without building it.
+    If `dataset_sink_mode` is set to True, the data will be sent to the device. In this case the dataset is bound to the model, and the dataset can only be used by the current model. If the device is Ascend, the data features will be transferred one by one. Each data transfer is limited to 256 MB.
+    This interface builds and executes the computational graph. If `Model.build` has been executed beforehand, it executes the graph directly without building it.

 Parameters:
     - **valid_dataset** (Dataset) - Dataset for evaluating the model.
@@ -8,7 +8,7 @@ mindspore.SparseTensor
 `SparseTensor` can only be used in the constructor of `Cell`.

 .. note::
-    This interface is deprecated since version 1.7 and is planned to be removed in the future, please use `COOTensor` instead.
+    This interface is deprecated since version 1.7 and is planned to be removed in the future. Please use `COOTensor` instead.

 For a dense tensor, `SparseTensor(indices, values, shape)` satisfies `dense[indices[i]] = values[i]`.
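The relation `dense[indices[i]] = values[i]` quoted above can be illustrated with a tiny numpy densification loop; this only sketches the COO convention and is not MindSpore code:

    import numpy as np

    indices = np.array([[0, 1], [1, 2]])   # positions of the non-zero entries
    values = np.array([1.0, 2.0])
    shape = (3, 4)

    dense = np.zeros(shape)
    for idx, val in zip(indices, values):
        dense[tuple(idx)] = val            # dense[indices[i]] = values[i]
    print(dense)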
@@ -460,7 +460,7 @@
 Recursively set the parallel strategy of all operators in this Cell to data parallel.

-.. note:: Takes effect only in graph mode with full automatic parallel (AUTO_PARALLEL) mode.
+.. note:: Takes effect only in graph mode with auto_parallel_context = ParallelMode.AUTO_PARALLEL.

 .. py:method:: set_grad(requires_grad=True)
@@ -527,7 +527,7 @@
 Each element specifies the tensor distribution strategy of the corresponding input/output; see the description of `mindspore.ops.Primitive.shard`. It can also be set to None, in which case data parallel is used by default.
 The parallel strategies of the remaining operators are derived from the strategies specified for the inputs and outputs.

-.. note:: Requires PyNative mode with full automatic parallel (AUTO_PARALLEL), and the search mode in `set_auto_parallel_context` must be set to "sharding_propagation".
+.. note:: Requires PyNative mode with ParallelMode.AUTO_PARALLEL, and the search mode in `set_auto_parallel_context` must be set to "sharding_propagation".

 Parameters:
     - **in_strategy** (tuple) - The sharding strategy of each input. Each element of the tuple can be a tuple or None; a tuple specifies the sharding strategy of each dimension of that input, while None means data parallel by default.
@@ -8,7 +8,7 @@ mindspore.nn.OneHot
 The positions given by the input `indices` take the value on_value, and all other positions take the value off_value.

 .. note::
-    If indices is an n-rank Tensor, the returned one-hot Tensor is an (n+1)-rank Tensor, with a new `axis` dimension added.
+    If indices is an n-rank Tensor, the returned one-hot Tensor is an (n+1)-rank Tensor. A new `axis` dimension is added.

 If `indices` is a Scalar, the output shape will be a vector of length `depth`.
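As a quick illustration of the rank-(n+1) behaviour described in the note, here is a plain numpy sketch (not the nn.OneHot implementation); `depth`, `on_value` and `off_value` mirror the documented parameters:

    import numpy as np

    def one_hot(indices, depth, on_value=1.0, off_value=0.0):
        # Rank n indices -> rank n+1 output, new axis appended at the end.
        out = np.full(indices.shape + (depth,), off_value)
        np.put_along_axis(out, indices[..., None], on_value, axis=-1)
        return out

    idx = np.array([[0, 2], [1, 1]])        # rank-2 indices
    print(one_hot(idx, depth=3).shape)      # (2, 2, 3): one extra axis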
@@ -1,2 +1,2 @@
-If the forward network uses operators such as SparseGatherV2, the optimizer performs sparse computation. By setting `target` to CPU, the sparse computation can be performed on the host.
+If the forward network uses operators such as SparseGatherV2, the optimizer performs sparse computation, and by setting `target` to CPU, the sparse computation can be performed on the host.
 The sparse feature is under continuous development.
@@ -2,4 +2,4 @@
 When parameters are grouped, the weight decay strategy can be adjusted per group.

-When grouping, each group of network parameters can be configured with its own `weight_decay`; if it is not configured, the `weight_decay` configured in the optimizer is used for that group.
+When grouping, each group of network parameters can be configured with its own `weight_decay`. If it is not configured, the `weight_decay` configured in the optimizer is used for that group.
@@ -19,11 +19,11 @@ mindspore.nn.probability.bijector.Bijector
 However, all parameters should have the same float type, otherwise a TypeError will be raised.

-Specifically, the parameter type follows the data type of the input value, i.e. when `dtype` is None, the parameters of the Bijector will be cast to the same type as the input value.
+Specifically, the parameter type follows the data type of the input value. That is, when `dtype` is None, the parameters of the Bijector will be cast to the same type as the input value.

 When `dtype` is specified, the `dtype` of the parameters and of the input value must be the same.

-When the parameter type or the input value type differs from `dtype`, a TypeError will be raised. Only mindspore float data types can be used to specify the Bijector's `dtype`.
+When the parameter type or the input value type differs from `dtype`, a TypeError will be raised. Only mindspore.float_type data types can be used to specify the Bijector's `dtype`.

 .. py:method:: cast_param_by_value(value, para)
@@ -20,6 +20,7 @@ mindspore.nn.probability.distribution.Beta
 .. note::
     - The elements of `concentration1` and `concentration0` must be greater than zero.
+    - `dist_spec_args` are `concentration1` and `concentration0`.
     - `dtype` must be a float type because the Beta distribution is continuous.

 Raises:
@@ -15,7 +15,7 @@ mindspore.nn.probability.distribution.Geometric
 - **name** (str) - The name of the distribution. Default: 'Geometric'.

 .. note::
-    `probs` must be a proper probability (0 < p < 1).
+    `probs` must be a proper probability (0 < p < 1). `dist_spec_args` is `probs`.

 Raises:
@@ -18,7 +18,7 @@ mindspore.nn.probability.distribution.Poisson
 - **name** (str) - The name of the distribution. Default: 'Poisson'.

 .. note::
-    `rate` must be greater than 0.
+    `rate` must be greater than 0. `dist_spec_args` is `rate`.

 Raises:
     - **ValueError** - An element of `rate` is less than 0.
@@ -6,7 +6,7 @@
 Reduces the Tensor data across all devices in the communication group with the specified operation, and all devices obtain the same result.

 .. note::
-    The AllReduce operation does not support "prod" yet. The Tensors of all processes in the collection must have the same shape and format. Users need to set the environment variables before running the following example. For details, please visit the official website `MindSpore <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.ops.html#通信算子>`_ .
+    The AllReduce operation does not support "prod" yet. The Tensors of all processes in the collection must have the same shape and format. Users need to set the environment variables before running the following example; for details, please visit the official website `MindSpore <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.ops.html#通信算子>`_ .

 Parameters:
     - **op** (str) - The specific reduction operation, such as "sum", "max" and "min". Default: ReduceOp.SUM.
@@ -6,7 +6,8 @@
 Broadcasts the input data to the whole group.

 .. note::
-    The Tensors of all processes in the collection have the same shape and data format.
+    The Tensors of all processes in the collection have the same shape and data format. Before running the following example, users need to preset the communication environment variables; see the details on the `MindSpore \
+    <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.ops.html#通信算子>`_ official website.

 Parameters:
     - **root_rank** (int) - The rank of the process that sends the data. Required in all processes except the one that is sending the data.
@@ -9,7 +9,7 @@ mindspore.ops.Print
 .. note::
     In PyNative mode, please use the Python print function. In Graph mode, bool, int and float will be converted into Tensor for printing, while str remains unchanged.
-    This method is used for code debugging, and when a large amount of data is printed at the same time, some data may be lost in order not to affect the main process; in that case the `Summary` function is recommended, see
+    This method is used for code debugging. When a large amount of data is printed at the same time, some data may be lost in order not to affect the main process; in that case the `Summary` function is recommended. See
     `Summary <https://www.mindspore.cn/mindinsight/docs/zh-CN/master/summary_record.html?highlight=summary#>`_.

 Inputs:
@@ -4,6 +4,6 @@ mindspore.ops.Range
 .. py:class:: mindspore.ops.Range(maxlen=1000000)

 Returns a sequence that starts at `start`, increases by `delta`, and does not exceed `limit` (exclusive).
-The length of the sequence cannot exceed`maxlen`. The default value of`maxlen`is 1000000.
+The length of the sequence cannot exceed `maxlen`. The default value of `maxlen` is 1000000.

 For more details, see :func:`mindspore.ops.range`.
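A rough numpy sketch of the described start/delta/limit semantics with a `maxlen` cap; this is for illustration only and is not the mindspore.ops.Range implementation:

    import numpy as np

    def bounded_range(start, limit, delta, maxlen=1000000):
        seq = np.arange(start, limit, delta)      # limit itself is excluded
        if seq.size > maxlen:
            raise ValueError("sequence length exceeds maxlen")
        return seq

    print(bounded_range(0, 10, 3))   # [0 3 6 9]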
@@ -24,11 +24,11 @@ mindspore.ops.UniformInt
 - **maxval** (Tensor) - Distribution parameter :math:`b`.
   It determines the upper bound of the generated random numbers, with data type int32. It must be a scalar.

+Outputs:
+    Tensor. The shape is the input `shape`, and the supported data type is int32.
+
 Raises:
     - **TypeError** - `seed` or `seed2` is not an int.
     - **TypeError** - `shape` is not a Tuple.
     - **TypeError** - `minval` or `maxval` is not a Tensor.
     - **ValueError** - `shape` is not a constant value.
-
-Outputs:
-    Tensor. The shape is the input `shape`, and the supported data type is int32.
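To make the parameter roles concrete, here is a hedged numpy sketch of what a uniform-integer sampler over [minval, maxval) with a given shape does; it is not the mindspore.ops.UniformInt operator itself:

    import numpy as np

    rng = np.random.default_rng(seed=1)          # `seed`/`seed2` roughly play this role
    shape = (2, 3)
    minval, maxval = 1, 5                        # scalars, as required above
    out = rng.integers(minval, maxval, size=shape, dtype=np.int32)  # values in [1, 5)
    print(out.shape, out.dtype)                  # (2, 3) int32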
@@ -17,33 +17,31 @@ mindspore.ops.conv2d
 Please refer to the paper `Gradient Based Learning Applied to Document Recognition <http://vision.stanford.edu/cs598_spring07/papers/Lecun98.pdf>`_ . For a more detailed introduction, see: http://cs231n.github.io/convolutional-networks/.

-**Parameters:**
+Parameters:
     - **x** (Tensor): Tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})`.
     - **weight** (Tensor) - With the convolution kernel size set to :math:`(\text{kernel_size[0]}, \text{kernel_size[1]})`, the shape is :math:`(C_{out}, C_{in}, \text{kernel_size[0]}, \text{kernel_size[1]})`.
     - **kernel_size** (Union[int, tuple[int]]) - The data type is int or a tuple of 2 ints. Specifies the height and width of the 2D convolution kernel. A single integer means the value applies to both the height and the width of the kernel. A tuple of 2 integers means the first value is used for the height and the other for the width of the kernel.
     - **mode** (int) - Specifies different convolution modes. This value is currently not used. Default: 1.
     - **pad_mode** (str) - Specifies the padding mode. The value is "same", "valid" or "pad". Default: "valid".

       - **same**: The output height and width are equal to the input divided by `stride`, respectively. Padding is added evenly to both sides of the height and width, and any remaining padding is added to the end of the dimension. If this mode is set, the value of `pad_val` must be 0.
       - **valid**: Returns the output obtained by valid computation without padding. Extra pixels that do not satisfy the computation are discarded. If this mode is set, the value of `pad_val` must be 0.
       - **pad**: Pads the input `x`. Zeros of size `pad_val` are padded in the height and width directions of the input. If this mode is set, `pad_val` must be greater than or equal to 0.

     - **pad_val** (Union(int, tuple[int])) - The amount of padding in the height and width directions of the input `x`. The data type is int or a tuple of 4 ints. If `pad_val` is an int, the paddings of top, bottom, left and right are all equal to `pad_val`. If `pad_val` is a tuple of 4 ints, the paddings of top, bottom, left and right are equal to `pad_val[0]`, `pad_val[1]`, `pad_val[2]` and `pad_val[3]` respectively. The value should be greater than or equal to 0. Default: 0.
     - **stride** (Union(int, tuple[int])) - The stride of the convolution kernel. The data type is int or a tuple of two ints. One int means the stride in both the height and width directions is this value. A tuple of two ints gives the strides in the height and width directions respectively. Default: 1.
     - **dilation** (Union(int, tuple[int])) - The dilation size of the convolution kernel. The data type is int or a tuple of 2 ints. If :math:`k > 1`, the kernel samples every `k` elements. The value of `k` in the vertical and horizontal directions ranges over [1, H] and [1, W] respectively. Default: 1.
     - **group** (int) - Splits the filter into groups. Default: 1.
     - **data_format** (str) - The optional values of the data format are "NHWC" and "NCHW". Default: "NCHW".

-**Returns:**
+Returns:
     Tensor, the value after convolution. The shape is :math:`(N, C_{out}, H_{out}, W_{out})`.

-**Raises:**
+Raises:
     - **TypeError** - `kernel_size`, `stride`, `pad_val` or `dilation` is neither an int nor a tuple.
     - **TypeError** - `out_channel` or `group` is not an int.
     - **ValueError** - `kernel_size`, `stride` or `dilation` is less than 1.
     - **ValueError** - `pad_mode` is not "same", "valid" or "pad".
     - **ValueError** - `pad_val` is a tuple whose length is not 4.
     - **ValueError** - `pad_mode` is not equal to "pad" while `pad_val` is not equal to (0, 0, 0, 0).
     - **ValueError** - `data_format` is neither "NCHW" nor "NHWC".
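The three `pad_mode` options above mostly differ in how the output height and width are computed. A small arithmetic sketch of the usual formulas (written under the stated assumptions about the conventions; not taken from the MindSpore source):

    import math

    def conv2d_out_dim(in_dim, kernel, stride, dilation=1, pad_mode="valid", pad=0):
        eff_k = dilation * (kernel - 1) + 1            # effective kernel size with dilation
        if pad_mode == "same":                         # output = ceil(input / stride)
            return math.ceil(in_dim / stride)
        if pad_mode == "valid":                        # no padding, leftover pixels are dropped
            return math.floor((in_dim - eff_k) / stride) + 1
        # "pad": explicit zero padding of `pad` on both sides
        return math.floor((in_dim + 2 * pad - eff_k) / stride) + 1

    print(conv2d_out_dim(32, kernel=3, stride=2, pad_mode="same"))        # 16
    print(conv2d_out_dim(32, kernel=3, stride=2, pad_mode="valid"))       # 15
    print(conv2d_out_dim(32, kernel=3, stride=2, pad_mode="pad", pad=1))  # 16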
@@ -21,6 +21,7 @@ mindspore.ops.scatter_mul
 Tensor, the updated `input_x`, with the same shape and type as `input_x`.

 Raises:
     - **TypeError** - `use_locking` is not a bool.
     - **TypeError** - `indices` is not int32.
     - **ValueError** - The shape of `updates` is not equal to `indices.shape + x.shape[1:]`.
+    - **RuntimeError** - When a type conversion is required because the types of `input_x` and `updates` are different, an error is raised if `updates` cannot be converted to the data type required by the parameter `input_x`.
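The shape constraint `indices.shape + x.shape[1:]` above is easiest to see in a tiny numpy sketch of a scatter-multiply update (illustrative only; the real operator updates the parameter in place):

    import numpy as np

    input_x = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
    indices = np.array([0, 1])                       # shape (2,)
    updates = np.array([[10.0, 10.0, 10.0],          # shape (2, 3) = indices.shape + x.shape[1:]
                        [0.5, 0.5, 0.5]])

    for i, row in enumerate(indices):
        input_x[row] *= updates[i]                   # multiply the rows selected by `indices`
    print(input_x)                                   # [[10. 20. 30.] [ 2.  2.5 3.]]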
@@ -30,6 +30,15 @@ def rearrange_inputs(func):
    This decorator is currently applied on the `update` of :class:`mindspore.nn.Metric`.

+    Args:
+        func (Callable): A candidate function to be wrapped whose input will be rearranged.
+
+    Returns:
+        Callable, used to exchange metadata between functions.
+
+    Supported Platforms:
+        ``Ascend`` ``GPU`` ``CPU``
+
    Examples:
        >>> from mindspore.nn import rearrange_inputs
        >>> class RearrangeInputsExample:
@@ -52,15 +61,6 @@ def rearrange_inputs(func):
        >>> outs = rearrange_inputs_example.update(5, 9)
        >>> print(outs)
        (9, 5)
-
-    Args:
-        func (Callable): A candidate function to be wrapped whose input will be rearranged.
-
-    Returns:
-        Callable, used to exchange metadata between functions.
-
-    Supported Platforms:
-        ``Ascend`` ``GPU`` ``CPU``
    """
    @functools.wraps(func)
    def wrapper(self, *inputs):
@@ -60,6 +60,12 @@ class SGD(Optimizer):
    Here : where p, v and u denote the parameters, accum, and momentum respectively.

+    Note:
+        If parameters are not grouped, the `weight_decay` in optimizer will be applied on the network parameters without
+        'beta' or 'gamma' in their names. Users can group parameters to change the strategy of decaying weight. When
+        parameters are grouped, each group can set `weight_decay`. If not, the `weight_decay` in optimizer will be
+        applied.
+
    Args:
        params (Union[list[Parameter], list[dict]]): Must be list of `Parameter` or list of `dict`. When the
            `params` is a list of `dict`, the string "params", "lr", "grad_centralization" and
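The grouped `weight_decay` behaviour described in the added Note is normally exercised by passing a list of dicts as `params`. A hedged sketch of that pattern (the tiny `nn.Dense` network and the split by parameter name are illustrative; check the SGD docstring for the authoritative list of supported group keys):

    import mindspore.nn as nn

    net = nn.Dense(3, 2)   # a tiny network just to obtain some parameters

    # Decay the weight matrix only; the bias group falls back to the optimizer-level
    # weight_decay (0.0 here), as described in the Note above.
    decayed = [p for p in net.trainable_params() if 'bias' not in p.name]
    no_decay = [p for p in net.trainable_params() if 'bias' in p.name]

    group_params = [
        {'params': decayed, 'weight_decay': 0.01},
        {'params': no_decay},
    ]
    optimizer = nn.SGD(group_params, learning_rate=0.1, momentum=0.9, weight_decay=0.0)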
@@ -38,9 +38,6 @@ class Bijector(Cell):
        dtype (mindspore.dtype): The type of the distributions that the Bijector can operate on. Default: None.
        param (dict): The parameters used to initialize the Bijector. Default: None.

-    Supported Platforms:
-        ``Ascend`` ``GPU``
-
    Note:
        `dtype` of bijector represents the type of the distributions that the bijector could operate on.
        When `dtype` is None, there is no enforcement on the type of input value except that the input value
@@ -51,6 +48,9 @@ class Bijector(Cell):
        When `dtype` is specified, it is forcing the parameters and input value to be the same dtype as `dtype`.
        When the type of parameters or the type of the input value is not the same as `dtype`, a TypeError will be
        raised. Only subtype of mindspore.float_type can be used to specify bijector's `dtype`.
+
+    Supported Platforms:
+        ``Ascend`` ``GPU``
    """

    def __init__(self,
@@ -44,9 +44,9 @@ class Beta(Distribution):
        name (str): The name of the distribution. Default: 'Beta'.

    Note:
-        `concentration1` and `concentration0` must be greater than zero.
-        `dist_spec_args` are `concentration1` and `concentration0`.
-        `dtype` must be a float type because Beta distributions are continuous.
+        - `concentration1` and `concentration0` must be greater than zero.
+        - `dist_spec_args` are `concentration1` and `concentration0`.
+        - `dtype` must be a float type because Beta distributions are continuous.

    Raises:
        ValueError: When concentration1 <= 0 or concentration0 >=1.
@@ -39,15 +39,15 @@ class Categorical(Distribution):
        dtype (mindspore.dtype): The type of the event samples. Default: mstype.int32.
        name (str): The name of the distribution. Default: Categorical.

-    Supported Platforms:
-        ``Ascend`` ``GPU``
-
    Note:
        `probs` must have rank at least 1, values are proper probabilities and sum to 1.

    Raises:
        ValueError: When the sum of all elements in `probs` is not 1.

+    Supported Platforms:
+        ``Ascend`` ``GPU``
+
    Examples:
        >>> import mindspore
        >>> import mindspore.nn as nn
@@ -41,9 +41,6 @@ class Cauchy(Distribution):
        dtype (mindspore.dtype): The type of the event samples. Default: mstype.float32.
        name (str): The name of the distribution. Default: 'Cauchy'.

-    Supported Platforms:
-        ``Ascend``
-
    Note:
        `scale` must be greater than zero.
        `dist_spec_args` are `loc` and `scale`.
@@ -54,6 +51,9 @@ class Cauchy(Distribution):
        ValueError: When scale <= 0.
        TypeError: When the input `dtype` is not a subclass of float.

+    Supported Platforms:
+        ``Ascend``
+
    Examples:
        >>> import mindspore
        >>> import mindspore.nn as nn
@@ -40,9 +40,6 @@ class Exponential(Distribution):
        dtype (mindspore.dtype): The type of the event samples. Default: mstype.float32.
        name (str): The name of the distribution. Default: 'Exponential'.

-    Supported Platforms:
-        ``Ascend`` ``GPU``
-
    Note:
        `rate` must be strictly greater than 0.
        `dist_spec_args` is `rate`.
@@ -52,6 +49,9 @@ class Exponential(Distribution):
        ValueError: When rate <= 0.
        TypeError: When the input `dtype` is not a subclass of float.

+    Supported Platforms:
+        ``Ascend`` ``GPU``
+
    Examples:
        >>> import mindspore
        >>> import mindspore.nn as nn
@@ -45,9 +45,6 @@ class Gamma(Distribution):
        dtype (mindspore.dtype): The type of the event samples. Default: mstype.float32.
        name (str): The name of the distribution. Default: 'Gamma'.

-    Supported Platforms:
-        ``Ascend``
-
    Note:
        `concentration` and `rate` must be greater than zero.
        `dist_spec_args` are `concentration` and `rate`.
@@ -57,6 +54,9 @@ class Gamma(Distribution):
        ValueError: When concentration <= 0 or rate <= 0.
        TypeError: When the input `dtype` is not a subclass of float.

+    Supported Platforms:
+        ``Ascend``
+
    Examples:
        >>> import mindspore
        >>> import mindspore.nn as nn
@@ -37,9 +37,6 @@ class Geometric(Distribution):
        dtype (mindspore.dtype): The type of the event samples. Default: mstype.int32.
        name (str): The name of the distribution. Default: 'Geometric'.

-    Supported Platforms:
-        ``Ascend`` ``GPU``
-
    Note:
        `probs` must be a proper probability (0 < p < 1).
        `dist_spec_args` is `probs`.
@@ -47,6 +44,9 @@ class Geometric(Distribution):
    Raises:
        ValueError: When p <= 0 or p >= 1.

+    Supported Platforms:
+        ``Ascend`` ``GPU``
+
    Examples:
        >>> import mindspore
        >>> import mindspore.nn as nn
@@ -43,9 +43,6 @@ class Gumbel(TransformedDistribution):
        dtype (mindspore.dtype): type of the distribution. Default: mstype.float32.
        name (str): the name of the distribution. Default: 'Gumbel'.

-    Supported Platforms:
-        ``Ascend`` ``GPU``
-
    Note:
        `scale` must be greater than zero.
        `dist_spec_args` are `loc` and `scale`.
@@ -55,6 +52,9 @@ class Gumbel(TransformedDistribution):
        ValueError: When scale <= 0.
        TypeError: When the input `dtype` is not a subclass of float.

+    Supported Platforms:
+        ``Ascend`` ``GPU``
+
    Examples:
        >>> import mindspore
        >>> import numpy as np
@@ -44,9 +44,6 @@ class LogNormal(msd.TransformedDistribution):
        dtype (mindspore.dtype): type of the distribution. Default: mstype.float32.
        name (str): the name of the distribution. Default: 'LogNormal'.

-    Supported Platforms:
-        ``Ascend`` ``GPU``
-
    Note:
        `scale` must be greater than zero.
        `dist_spec_args` are `loc` and `scale`.
@@ -56,6 +53,9 @@ class LogNormal(msd.TransformedDistribution):
        ValueError: When scale <= 0.
        TypeError: When the input `dtype` is not a subclass of float.

+    Supported Platforms:
+        ``Ascend`` ``GPU``
+
    Examples:
        >>> import numpy as np
        >>> import mindspore
@@ -41,9 +41,6 @@ class Logistic(Distribution):
        dtype (mindspore.dtype): The type of the event samples. Default: mstype.float32.
        name (str): The name of the distribution. Default: 'Logistic'.

-    Supported Platforms:
-        ``Ascend`` ``GPU``
-
    Note:
        `scale` must be greater than zero.
        `dist_spec_args` are `loc` and `scale`.
@@ -53,6 +50,9 @@ class Logistic(Distribution):
        ValueError: When scale <= 0.
        TypeError: When the input `dtype` is not a subclass of float.

+    Supported Platforms:
+        ``Ascend`` ``GPU``
+
    Examples:
        >>> import mindspore
        >>> import mindspore.nn as nn
@@ -43,9 +43,6 @@ class Normal(Distribution):
        dtype (mindspore.dtype): The type of the event samples. Default: mstype.float32.
        name (str): The name of the distribution. Default: 'Normal'.

-    Supported Platforms:
-        ``Ascend`` ``GPU``
-
    Note:
        `sd` must be greater than zero.
        `dist_spec_args` are `mean` and `sd`.
@@ -55,6 +52,9 @@ class Normal(Distribution):
        ValueError: When sd <= 0.
        TypeError: When the input `dtype` is not a subclass of float.

+    Supported Platforms:
+        ``Ascend`` ``GPU``
+
    Examples:
        >>> import mindspore
        >>> import mindspore.nn as nn
@@ -37,9 +37,6 @@ class Poisson(Distribution):
        dtype (mindspore.dtype): The type of the event samples. Default: mstype.float32.
        name (str): The name of the distribution. Default: 'Poisson'.

-    Supported Platforms:
-        ``Ascend``
-
    Note:
        `rate` must be strictly greater than 0.
        `dist_spec_args` is `rate`.
@@ -47,6 +44,9 @@ class Poisson(Distribution):
    Raises:
        ValueError: When rate <= 0.

+    Supported Platforms:
+        ``Ascend``
+
    Examples:
        >>> import mindspore
        >>> import mindspore.nn as nn
@@ -40,13 +40,6 @@ class TransformedDistribution(Distribution):
            will use this seed; elsewise, the underlying distribution's seed will be used.
        name (str): The name of the transformed distribution. Default: 'transformed_distribution'.

-    Supported Platforms:
-        ``Ascend`` ``GPU`` ``CPU``
-
-    Raises:
-        TypeError: When the input `bijector` is not a Bijector instance.
-        TypeError: When the input `distribution` is not a Distribution instance.
-
    Note:
        The arguments used to initialize the original distribution cannot be None.
        For example, mynormal = msd.Normal(dtype=mindspore.float32) cannot be used to initialized a
@@ -58,6 +51,13 @@ class TransformedDistribution(Distribution):
        distribution. Derived class can overwrite `default_parameters` and `parameter_names` by calling
        `reset_parameters` followed by `add_parameter`.

+    Raises:
+        TypeError: When the input `bijector` is not a Bijector instance.
+        TypeError: When the input `distribution` is not a Distribution instance.
+
+    Supported Platforms:
+        ``Ascend`` ``GPU`` ``CPU``
+
    Examples:
        >>> import numpy as np
        >>> import mindspore
@@ -41,9 +41,6 @@ class Uniform(Distribution):
        dtype (mindspore.dtype): The type of the event samples. Default: mstype.float32.
        name (str): The name of the distribution. Default: 'Uniform'.

-    Supported Platforms:
-        ``Ascend`` ``GPU``
-
    Note:
        `low` must be strictly less than `high`.
        `dist_spec_args` are `high` and `low`.
@@ -53,6 +50,8 @@ class Uniform(Distribution):
        ValueError: When high <= low.
        TypeError: When the input `dtype` is not a subclass of float.

+    Supported Platforms:
+        ``Ascend`` ``GPU``

    Examples:
        >>> import mindspore
@@ -560,7 +560,7 @@ def batch_dot(x1, x2, axes=None):
            Default: None.

    Returns:
-        Tensor, batch dot product of `x1` and `x2`. For example: The Shape of output
+        Tensor, batch dot product of `x1` and `x2`. For example, the Shape of output
        for input `x1` shapes (batch, d1, axes, d2) and `x2` shapes (batch, d3, axes, d4) is (batch, d1, d2, d3, d4),
        where d1 and d2 means any number.
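The shape rule quoted in the Returns block can be sanity-checked with a small numpy einsum sketch; this only mirrors the documented shape behaviour and is not the ops.batch_dot implementation:

    import numpy as np

    batch, d1, d2, d3, d4, axes = 2, 3, 4, 5, 6, 7
    x1 = np.ones((batch, d1, axes, d2))
    x2 = np.ones((batch, d3, axes, d4))

    # Contract the shared `axes` dimension per batch element.
    out = np.einsum('baxc,bdxe->bacde', x1, x2)
    print(out.shape)   # (2, 3, 4, 5, 6) = (batch, d1, d2, d3, d4)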
@@ -1075,7 +1075,7 @@ def slice(input_x, begin, size):
        size (Union[tuple, list]): The size of the slice. Only constant value is allowed.

    Returns:
-        Tensor, the shape is : input `size`, the data type is the same as `input_x`.
+        Tensor, the shape is input `size`, the data type is the same as `input_x`.

    Raises:
        TypeError: If `begin` or `size` is neither tuple nor list.
@@ -2239,7 +2239,7 @@ def matrix_solve(matrix, rhs, adjoint=False):
        adjoint(bool): Indicating whether to solve with matrix or its (block-wise) adjoint. Default: False.

    Returns:
-        x (Tensor): The dtype and shape is the same as 'rhs'.
+        x (Tensor), The dtype and shape is the same as 'rhs'.

    Raises:
        TypeError: If adjoint is not the type of bool.
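For intuition, solving `matrix @ x = rhs` (the operation described here) looks like this in plain numpy; the shapes mirror the documented behaviour, and this is not the MindSpore kernel:

    import numpy as np

    matrix = np.array([[3.0, 1.0],
                       [1.0, 2.0]])
    rhs = np.array([[9.0],
                    [8.0]])

    x = np.linalg.solve(matrix, rhs)    # x has the same dtype and shape as rhs
    print(x)                            # [[2.] [3.]]
    print(np.allclose(matrix @ x, rhs))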
@@ -381,15 +381,15 @@ def celu(x, alpha=1.0):
    Returns:
        Tensor, has the same data type and shape as the input.

-    Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
-
    Raises:
        TypeError: If `alpha` is not a float.
        ValueError: If `alpha` has the value of 0.
        TypeError: If `x` is not a Tensor.
        TypeError: If dtype of `x` is neither float16 nor float32.

+    Supported Platforms:
+        ``Ascend`` ``CPU`` ``GPU``
+
    Examples:
        >>> x = Tensor(np.array([-2.0, -1.0, 1.0, 2.0]), mindspore.float32)
        >>> output = ops.celu(x, alpha=1.0)
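CELU itself is a simple element-wise formula, celu(x) = max(0, x) + min(0, alpha * (exp(x / alpha) - 1)); a numpy sketch for reference (not the ops.celu kernel):

    import numpy as np

    def celu(x, alpha=1.0):
        # Continuously differentiable ELU: identity for x >= 0, scaled exponential below 0.
        return np.maximum(0.0, x) + np.minimum(0.0, alpha * (np.exp(x / alpha) - 1.0))

    x = np.array([-2.0, -1.0, 1.0, 2.0])
    print(celu(x))   # approx [-0.865, -0.632, 1.0, 2.0]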
@@ -563,14 +563,14 @@ def kl_div(logits, labels, reduction='mean'):
        Tensor or Scalar, if `reduction` is 'none', then output is a tensor and has the same shape as `logits`.
        Otherwise it is a scalar.

-    Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
-
    Raises:
        TypeError: If `reduction` is not a str.
        TypeError: If neither `logits` nor `labels` is a Tensor.
        TypeError: If dtype of `logits` or `labels` is not float32.

+    Supported Platforms:
+        ``Ascend`` ``CPU`` ``GPU``
+
    Examples:
        >>> class Net(nn.Cell):
        ...     def __init__(self):
@@ -616,14 +616,14 @@ def hardshrink(x, lambd=0.5):
    Returns:
        Tensor, has the same data type and shape as the input `x`.

-    Supported Platforms:
-        ``Ascend`` ``GPU`` ``CPU``
-
    Raises:
        TypeError: If `lambd` is not a float.
        TypeError: If `x` is not a tensor.
        TypeError: If dtype of `x` is neither float16 nor float32.

+    Supported Platforms:
+        ``Ascend`` ``GPU`` ``CPU``
+
    Examples:
        >>> x = Tensor(np.array([[ 0.5, 1, 2.0], [0.0533,0.0776,-2.1233]]), mindspore.float32)
        >>> output = ops.hardshrink(x)
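Hard Shrink zeroes out values whose magnitude does not exceed `lambd` and passes everything else through; a numpy sketch of that element-wise rule (illustrative, not the ops.hardshrink kernel):

    import numpy as np

    def hardshrink(x, lambd=0.5):
        return np.where(np.abs(x) > lambd, x, 0.0)   # keep |x| > lambd, zero the rest

    x = np.array([[0.5, 1.0, 2.0], [0.0533, 0.0776, -2.1233]])
    print(hardshrink(x))   # [[ 0.  1.  2.] [ 0.  0. -2.1233]]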
@@ -772,9 +772,6 @@ def interpolate(x, roi=None, scales=None, sizes=None, coordinate_transformation_
    Returns:
        Resized tensor, with the same data type as input `x`.

-    Supported Platforms:
-        ``Ascend`` ``CPU`` ``GPU``
-
    Raises:
        TypeError: If `x` is not a Tensor.
        TypeError: If the data type of `x` is not supported.
@@ -787,6 +784,9 @@ def interpolate(x, roi=None, scales=None, sizes=None, coordinate_transformation_
        TypeError: If `mode` is not a string.
        ValueError: If `mode` is not in the support list.

+    Supported Platforms:
+        ``Ascend`` ``CPU`` ``GPU``
+
    Examples:
        >>> # case 1: linear mode
        >>> x = Tensor([[[1, 2, 3], [4, 5, 6]]], mindspore.float32)
@@ -972,12 +972,12 @@ def selu(input_x):
    Returns:
        Tensor, with the same type and shape as the `input_x`.

-    Supported Platforms:
-        ``Ascend`` ``GPU`` ``CPU``
-
    Raises:
        TypeError: If dtype of `input_x` is neither float16 nor float32.

+    Supported Platforms:
+        ``Ascend`` ``GPU`` ``CPU``
+
    Examples:
        >>> input_x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
        >>> output = ops.selu(input_x)
@@ -1046,9 +1046,6 @@ def deformable_conv2d(x, weight, offsets, kernel_size, strides, padding, bias=No
        \text{dilations[3]} - 1 }{\text{stride[1]}} + 1} \right \rfloor \\
        \end{array}

-    Supported Platforms:
-        ``Ascend`` ``GPU`` ``CPU``
-
    Raises:
        TypeError: If `strides`, `padding`, `kernel_size` or `dilations` is not a tuple with integer elements.
        TypeError: If `modulated` is not a bool.
@@ -1066,6 +1063,9 @@ def deformable_conv2d(x, weight, offsets, kernel_size, strides, padding, bias=No
            with "numpy.ones()".
        - `kernel_size` should meet the requirement::math:`3 * kernel\_size[0] * kernel\_size[1] > 8`.

+    Supported Platforms:
+        ``Ascend`` ``GPU`` ``CPU``
+
    Examples:
        >>> x = Tensor(np.ones((4, 3, 10, 10)), mstype.float32)
        >>> kh, kw = 3, 3
@@ -1725,12 +1725,12 @@ def mish(x):
    Returns:
        Tensor, with the same type and shape as the `x`.

-    Supported Platforms:
-        ``Ascend`` ``CPU``
-
    Raises:
        TypeError: If dtype of `x` is neither float16 nor float32.

+    Supported Platforms:
+        ``Ascend`` ``CPU``
+
    Examples:
        >>> input_x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
        >>> output = ops.mish(input_x)
@@ -320,7 +320,7 @@ def dense_to_sparse_coo(tensor):
        tensor: A dense tensor, must be 2-D.

    Returns:
-        COOTensor, a sparse representation of the original dense tensor, containing:
+        COOTensor, a sparse representation of the original dense tensor, containing the following parts.

        - indices (Tensor): 2-D integer tensor, indicates the positions of `values` of the dense tensor.
        - values (Tensor): 1-D tensor, indicates the non-zero values of the dense tensor.
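The indices/values pair returned here corresponds to the standard COO extraction; a numpy sketch of what that conversion computes for a 2-D input (not the MindSpore implementation):

    import numpy as np

    dense = np.array([[0.0, 1.0, 0.0],
                      [2.0, 0.0, 3.0]])

    indices = np.argwhere(dense != 0)        # 2-D positions of the non-zero entries
    values = dense[dense != 0]               # 1-D non-zero values, row-major order
    print(indices.tolist())                  # [[0, 1], [1, 0], [1, 2]]
    print(values.tolist())                   # [1.0, 2.0, 3.0]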
@@ -366,7 +366,7 @@ def dense_to_sparse_csr(tensor):
        tensor: A dense tensor, must be 2-D.

    Returns:
-        CSRTensor, a sparse representation of the original dense tensor, containing:
+        CSRTensor, a sparse representation of the original dense tensor, containing the following parts.

        - indptr (Tensor): 1-D integer tensor, indicates the start and end point for `values` in each row.
        - indices (Tensor): 1-D integer tensor, indicates the column positions of all non-zero values of the input.
@@ -454,8 +454,8 @@ def sparse_concat(sp_input, concat_dim=0):
    Outputs:
        - **output** (COOtensor) - the result of concatenates the input SparseTensor along the
          specified dimension. OutShape: OutShape[non concat_dim] is equal to InShape[non concat_dim] and
          OutShape[concat_dim] is all input concat_dim axis shape accumulate.

    Raises:
        ValueError: If only one sparse tensor input.
@@ -188,7 +188,7 @@ class AllGather(PrimitiveWithInfer):
    Note:
        The tensors must have the same shape and format in all processes of the collection. The user needs to preset
-        communication environment variables before running the following example, please check the details on the
+        communication environment variables before running the following example. Please check the details on the
        official website of `MindSpore \
        <https://www.mindspore.cn/docs/en/master/api_python/mindspore.ops.html#communication-operator>`_.
@@ -433,7 +433,7 @@ class Print(PrimitiveWithInfer):
        str remains unchanged.
        This function is used for debugging. When too much data is printed at the same time,
        in order not to affect the main process, the framework may discard some data. At this time,
-        if you need to record the data completely, you can recommended to use the `Summary` function. Please check
+        if you need to record the data completely, you can recommended to use the `Summary` function, and can check
        `Summary <https://www.mindspore.cn/mindinsight/docs/zh-CN/master/summary_record.html?highlight=summary#>`_.

    Inputs:
@@ -3074,13 +3074,12 @@ class MulNoNan(_MathBinaryOp):
        Tensor, the shape is the same as the shape after broadcasting,
        and the data type is the one with higher precision among the two inputs.

+    Raises:
+        TypeError: If neither `x` nor `y` is a Tensor.
+
    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

-    Raises:
-        TypeError: If neither `x` nor `y` is a Tensor.
-
    Examples:
        >>> # case 1 : same data type and shape of two inputs, there are some 0 in y.
        >>> x = Tensor(np.array([[-1.0, 6.0, np.inf], [np.nan, -7.0, 4.0]]), mindspore.float32)
@@ -5375,7 +5374,7 @@ class MatrixInverse(Primitive):
        result may be returned.

    Note:
-        The parameter 'adjoint' is only supporting False right now. Because complex number is not supported at present.
+        The parameter 'adjoint' is only supporting False right now, because complex number is not supported at present.

    Args:
        adjoint (bool) : An optional bool. Default: False.
@@ -6254,23 +6253,23 @@ class RaggedRange(Primitive):
    Inputs:
        - **starts** (Tensor) - The starts of each range, whose type is int32, int64, float32 or float64,
          and shape is 0D or 1D.
        - **limits** (Tensor) - The limits of each range, whose type and shape should be same as input `starts`.
        - **deltas** (Tensor) - The deltas of each range, whose type and shape should be same as input `starts`,
          and each element in the tensor should not be equal to 0.
    Outputs:
        - **rt_nested_splits** (Tensor) - The nested splits of the return `RaggedTensor`,
          and type of the tensor is `Tsplits`,
          shape of the tensor is equal to shape of input `starts` plus 1.
        - **rt_dense_values** (Tensor) - The dense values of the return `RaggedTensor`,
          and type of the tensor should be same as input `starts`.
          Let size of input `starts`, input `limits` and input `deltas` are i,
          if type of the input `starts`, input `limits` and input `deltas`
          are int32 or int64, shape of the output `rt_dense_values` is equal to
          sum(abs(limits[i] - starts[i]) + abs(deltas[i]) - 1) / abs(deltas[i])),
          if type of the input `starts`, input `limits` and input `deltas`
          are float32 or float64, shape of the output `rt_dense_values` is equal to
          sum(ceil(abs((limits[i] - starts[i]) / deltas[i]))).
    Raises:
        TypeError: If any input is not Tensor.
        TypeError: If the type of `starts` is not one of the following dtype: int32, int64, float32, float64.
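The two length formulas quoted above (integer vs. float inputs) can be checked with a small sketch; this just evaluates the documented expressions and is not the RaggedRange kernel:

    import numpy as np

    starts = np.array([0, 5, 8], dtype=np.int32)
    limits = np.array([8, 5, 1], dtype=np.int32)
    deltas = np.array([2, 2, -1], dtype=np.int32)

    # Integer inputs: per-range length = (|limit - start| + |delta| - 1) // |delta|
    lengths = (np.abs(limits - starts) + np.abs(deltas) - 1) // np.abs(deltas)
    print(lengths.tolist())          # [4, 0, 7]
    print(int(lengths.sum()))        # total size of rt_dense_values
    # rt_nested_splits is the running prefix of the lengths: [0, 4, 4, 11]
    print(np.concatenate(([0], np.cumsum(lengths))).tolist())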
@@ -695,12 +695,12 @@ class Mish(PrimitiveWithInfer):
    Outputs:
        Tensor, with the same type and shape as the `x`.

-    Supported Platforms:
-        ``Ascend`` ``GPU`` ``CPU``
-
    Raises:
        TypeError: If dtype of `x` is neither float16 nor float32.

+    Supported Platforms:
+        ``Ascend`` ``GPU`` ``CPU``
+
    Examples:
        >>> x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
        >>> mish = ops.Mish()
@@ -743,12 +743,12 @@ class SeLU(Primitive):
    Outputs:
        Tensor, with the same type and shape as the `input_x`.

-    Supported Platforms:
-        ``Ascend`` ``GPU`` ``CPU``
-
    Raises:
        TypeError: If dtype of `input_x` is not int8, int32, float16, float32, float64.

+    Supported Platforms:
+        ``Ascend`` ``GPU`` ``CPU``
+
    Examples:
        >>> from mindspore.ops.operations.nn_ops import SeLU
        >>> input_x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
@@ -7104,7 +7104,7 @@ class Dropout2D(PrimitiveWithInfer):
    Dropout2D can improve the independence between channel feature maps.

    Note:
-        The keep probability :math:`keep\_prob` is equal to 'ops.dropout2d' input '1-p'.
+        The keep probability :math:`keep\_prob` is equal to :math:`1 - p` in :func:`mindspore.ops.dropout2d`.

    Refer to :func:`mindspore.ops.dropout2d` for more detail.
@@ -7138,7 +7138,7 @@ class Dropout3D(PrimitiveWithInfer):

    Note:
-        The keep probability :math:`keep\_prob` is equal to 'ops.dropout3d' input '1-p'.
+        The keep probability :math:`keep\_prob` is equal to :math:`1 - p` in :func:`mindspore.ops.dropout2d`.

    Refer to :func:`mindspore.ops.dropout3d` for more detail.
@@ -8571,9 +8571,6 @@ class Conv3DTranspose(Primitive):
        Tensor of shape :math:`(N, C_{out}//group, D_{out}, H_{out}, W_{out})`,
        where :math:`group` is the Args parameter.

-    Supported Platforms:
-        ``Ascend`` ``GPU``
-
    Raises:
        TypeError: If `in_channel`, `out_channel` or `group` is not an int.
        TypeError: If `kernel_size`, `stride`, `pad` , `dilation` or `output_padding` is neither an int not a tuple.
@@ -8586,6 +8583,9 @@ class Conv3DTranspose(Primitive):
        TypeError: If data type of dout and weight is not float16.
        ValueError: If bias is not none. The rank of dout and weight is not 5.

+    Supported Platforms:
+        ``Ascend`` ``GPU``
+
    Examples:
        >>> dout = Tensor(np.ones([32, 16, 10, 32, 32]), mindspore.float16)
        >>> weight = Tensor(np.ones([16, 3, 4, 6, 2]), mindspore.float16)
@@ -566,15 +566,15 @@ class UniformInt(Primitive):
        - **maxval** (Tensor) - The distribution parameter, b.
          It defines the maximum possibly generated value, with int32 data type. Only one number is supported.

+    Outputs:
+        Tensor. The shape is the same as the input 'shape', and the data type is int32.
+
    Raises:
        TypeError: If neither `seed` nor `seed2` is an int.
        TypeError: If `shape` is not a tuple.
        TypeError: If neither `minval` nor `maxval` is a Tensor.
        ValueError: If `shape` is not a constant value.

-    Outputs:
-        Tensor. The shape is the same as the input 'shape', and the data type is int32.
-
    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``