forked from mindspore-Ecosystem/mindspore
commit 578cfe732c
@@ -18,5 +18,4 @@
        Dict,包含序列化数据集图的字典。

    异常:
        - **ValueError** - 不支持用户定义的Python函数的序列化。
        - **OSError** - 无法打开文件。
@@ -465,7 +465,7 @@ mindspore.rewrite
    PatternEngine实现了如何通过PattenNode修改SymbolTree。

    参数:
-        - **pattern** (Union[PatternNode,List]) - PatternNode的实例或用于构造 `Pattent` 的Cell类型列表。
+        - **pattern** (Union[PatternNode, List]) - PatternNode的实例或用于构造 `Pattent` 的Cell类型列表。
        - **replacement** (callable) - 生成新节点的接口实现,如果为None则不进行任何匹配操作。

.. py:method:: mindspore.rewrite.PatternEngine.apply(stree: SymbolTree)
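
For orientation only, a minimal sketch of wiring up the pattern/replacement pair documented above. The `Net` cell is made up, the import location of `PatternEngine` is assumed, and `replacement=None` simply leaves matched nodes untouched, as the parameter description states.

    from mindspore import nn
    from mindspore.rewrite import SymbolTree, PatternEngine

    class Net(nn.Cell):
        def __init__(self):
            super().__init__()
            self.conv = nn.Conv2d(3, 8, 3)
            self.bn = nn.BatchNorm2d(8)

        def construct(self, x):
            return self.bn(self.conv(x))

    stree = SymbolTree.create(Net())
    # Pattern given as a list of Cell types; with replacement=None no rewrite is performed.
    engine = PatternEngine([nn.Conv2d, nn.BatchNorm2d], replacement=None)
    engine.apply(stree)
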
@@ -505,7 +505,7 @@ mindspore.rewrite
    为当前节点添加一个输入。

    参数:
-        - **node** (PattenNode) - 新增的输入节点。
+        - **node** (PatternNode) - 新增的输入节点。

    异常:
        - **TypeError** - 如果参数 `node` 不是PattenNode类型。
@@ -76,9 +76,10 @@ mindspore.nn.Adam
.. include:: mindspore.nn.optim_arg_loss_scale.rst

-    - **kwargs** -
-      - use_lazy (bool):是否使用Lazy Adam算法。默认值:False。如果为True,使用lazy Adam,反之使用普通Adam算法。
-      - use_offload (bool):是否在主机CPU上运行Adam优化算法。默认值:False。如果为True,使用offload方法,反之使用普通Adam算法。
+    - **kwargs** -
+
+      - use_lazy (bool):是否使用Lazy Adam算法。默认值:False。如果为True,使用lazy Adam,反之使用普通Adam算法。
+      - use_offload (bool):是否在主机CPU上运行Adam优化算法。默认值:False。如果为True,使用offload方法,反之使用普通Adam算法。

输入:
    - **gradients** (tuple[Tensor]) - `params` 的梯度,形状(shape)与 `params` 相同。
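
As a quick orientation (not part of the patch itself), the two keyword switches documented above are passed straight to the optimizer constructor. A minimal sketch, assuming a trivial `nn.Dense` layer as the trainable cell:

    import mindspore as ms
    from mindspore import nn

    net = nn.Dense(4, 2)
    # use_lazy / use_offload are the optional kwargs described above; both default to False.
    optimizer = nn.Adam(net.trainable_params(), learning_rate=1e-3, use_lazy=True)
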
@@ -14,7 +14,7 @@ mindspore.ops.ReduceOp
.. note::
    有关更多信息,请参阅示例。这需要在具有多个加速卡的环境中运行。
-    在运行以下示例之前,用户需要预设环境变量。请参考官方网站 `MindSpore \
+    在运行以下示例之前,用户需要预设环境变量,请参考官方网站 `MindSpore \
    <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.ops.html#通信算子>`_ 。

有四种操作选项,"SUM"、"MAX"、"MIN"和"PROD"。
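
To make the four options concrete, a small NumPy illustration (not MindSpore code) of what each reduction computes across values contributed by three simulated devices:

    import numpy as np

    # One row per simulated device; ReduceOp combines them elementwise.
    rank_values = np.array([[1.0, 2.0],
                            [3.0, 4.0],
                            [5.0, 6.0]])
    print(rank_values.sum(axis=0))   # "SUM"  -> [ 9. 12.]
    print(rank_values.max(axis=0))   # "MAX"  -> [5. 6.]
    print(rank_values.min(axis=0))   # "MIN"  -> [1. 2.]
    print(rank_values.prod(axis=0))  # "PROD" -> [15. 48.]
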
@@ -7,7 +7,7 @@ mindspore.ops.ReduceScatter
.. note::
    在集合的所有过程中,Tensor必须具有相同的shape和格式。
-    在运行以下示例之前,用户需要预设环境变量。请参考
+    在运行以下示例之前,用户需要预设环境变量,请参考
    `MindSpore官方网站 <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.ops.html#%E9%80%9A%E4%BF%A1%E7%AE%97%E5%AD%90>`_。

参数:
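
ReduceScatter itself only runs in a multi-device job, as the note says, so here is a NumPy sketch of its semantics on two simulated ranks: the per-rank tensors are reduced elementwise and the result is split so each rank keeps one slice.

    import numpy as np

    rank0 = np.arange(8, dtype=np.float32).reshape(4, 2)
    rank1 = np.ones((4, 2), dtype=np.float32)

    reduced = rank0 + rank1                # ReduceOp.SUM across the two ranks
    chunks = np.split(reduced, 2, axis=0)  # scatter along dim 0: one (2, 2) slice per rank
    print(chunks[0])                       # what rank 0 would receive
    print(chunks[1])                       # what rank 1 would receive
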
@@ -1,7 +1,7 @@
mindspore.ops.approximate_equal
===============================

-.. py:function:: mindspore.ops.approximate_equal(x, y, tolerance=1e-05)
+.. py:function:: mindspore.ops.approximate_equal(x, y, tolerance=1e-5)

    逐元素计算abs(x-y),如果小于tolerance则为True,否则为False。
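
A minimal usage sketch based on the signature above; the sample values are illustrative only.

    import numpy as np
    import mindspore as ms
    from mindspore import Tensor, ops

    x = Tensor(np.array([1.0, 2.0, 3.0]), ms.float32)
    y = Tensor(np.array([1.000001, 2.5, 3.0]), ms.float32)
    out = ops.approximate_equal(x, y, tolerance=1e-5)  # elementwise abs(x - y) < tolerance
    print(out)  # expected: [ True False  True]
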
@@ -19,7 +19,7 @@ mindspore.ops.approximate_equal
    - **y** (Tensor) - 输入Tensor,shape与数据类型与 `x` 相同。
    - **tolerance** (float) - 两元素可被视为相等的最大偏差。默认值:1e-05。

-输出:
+返回:
    Tensor,shape与 `x` 相同,bool类型。

异常:
@@ -14,8 +14,8 @@ mindspore.ops.crop_and_resize
    - **boxes** (Tensor) - shape为 :math:`(num_boxes, 4)` 的2维Tensor。其中,第 :math:`i` 行指定对第 :math:`\text{box_indices[i]}` 张图像裁剪时的归一化坐标 :math:`[y1, x1, y2, x2]`,那么通过归一化的 :math:`y` 坐标值可映射到的图像坐标为 :math:`y * (image\_height - 1)`,因此,归一化的图像高度 :math:`[0, 1]` 间隔映射到的图像高度间隔为 :math:`[0, image\_height - 1]`。我们也允许 :math:`y1 > y2`,这种情况下,就是对图像进行的上下翻转,宽度方向与此类似。同时,我们也允许归一化的坐标值超出 :math:`[0, 1]` 的区间,这种情况下,采用 :math:`\text{extrapolation_value}` 进行填充。数据类型:float32。
    - **box_indices** (Tensor) - shape为 :math:`(num_boxes)` 的1维Tensor,其中,每一个元素必须是 :math:`[0, batch)` 区间内的值。:math:`\test{box_indices[i]}` 指定 :math:`\test{boxes[i, :]}` 所指向的图像索引。数据类型:int32。
    - **crop_size** (Tuple[int]) - 2元组 :math:`(crop_height, crop_width)`,该输入必须为常量并且均为正值。指定对裁剪出的图像进行调整时的输出大小,纵横比可与原图不一致。数据类型:int32。
-    - **method** (str): 指定调整大小时的采样方法,取值为"bilinear"、 "nearest"或"bilinear_v2",其中,"bilinear"是标准的线性插值算法,而在某些情况下,"bilinear_v2"可能会得到更优的效果。默认值:"bilinear"。
-    - **extrapolation_value** (float): 指定外插时的浮点值。默认值: 0.0。
+    - **method** (str) - 指定调整大小时的采样方法,取值为"bilinear"、 "nearest"或"bilinear_v2",其中,"bilinear"是标准的线性插值算法,而在某些情况下,"bilinear_v2"可能会得到更优的效果。默认值:"bilinear"。
+    - **extrapolation_value** (float) - 指定外插时的浮点值。默认值: 0.0。

返回:
    Tensor,shape为 :math:`(num_boxes, crop_height, crop_width, depth)`,数据类型:float32 。
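
A usage sketch based on the shapes described above; the channels-last image layout and the random data are assumptions for illustration.

    import numpy as np
    import mindspore as ms
    from mindspore import Tensor, ops

    image = Tensor(np.random.rand(2, 32, 32, 3).astype(np.float32))     # (batch, height, width, depth)
    boxes = Tensor(np.array([[0.1, 0.1, 0.6, 0.6]], dtype=np.float32))  # normalized [y1, x1, y2, x2]
    box_indices = Tensor(np.array([0], dtype=np.int32))                 # crop from image 0
    out = ops.crop_and_resize(image, boxes, box_indices, crop_size=(24, 24))
    print(out.shape)  # (num_boxes, crop_height, crop_width, depth) -> (1, 24, 24, 3)
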
@@ -7,7 +7,7 @@ mindspore.ops.jacfwd
参数:
    - **fn** (Union[Function, Cell]) - 待求导的函数或网络。以Tensor为入参,返回Tensor或Tensor数组。
-    - **grad_position** (Union[NoneType, int, tuple[int]]) - 指定求导输入位置的索引。若为int类型,表示对单个输入求导;若为tuple类型,表示对tuple内索引的位置求导,其中索引从0开始。默认值:0。
+    - **grad_position** (Union[int, tuple[int]]) - 指定求导输入位置的索引。若为int类型,表示对单个输入求导;若为tuple类型,表示对tuple内索引的位置求导,其中索引从0开始。默认值:0。
    - **has_aux** (bool) - 若 `has_aux` 为True,只有 `fn` 的第一个输出参与 `fn` 的求导,其他输出将直接返回。此时, `fn` 的输出数量必须超过一个。默认值:False。

返回:
@@ -7,7 +7,7 @@ mindspore.ops.jacrev
参数:
    - **fn** (Union[Function, Cell]) - 待求导的函数或网络。以Tensor为入参,返回Tensor或Tensor数组。
-    - **grad_position** (Union[NoneType, int, tuple[int]]) - 指定求导输入位置的索引。若为int类型,表示对单个输入求导;若为tuple类型,表示对tuple内索引的位置求导,其中索引从0开始。默认值:0。
+    - **grad_position** (Union[int, tuple[int]]) - 指定求导输入位置的索引。若为int类型,表示对单个输入求导;若为tuple类型,表示对tuple内索引的位置求导,其中索引从0开始。默认值:0。
    - **has_aux** (bool) - 若 `has_aux` 为True,只有 `fn` 的第一个输出参与 `fn` 的求导,其他输出将直接返回。此时, `fn` 的输出数量必须超过一个。默认值:False。

返回:
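
A small sketch of how `grad_position` is used with both transforms; the element-wise function `f` is a made-up example, and both calls should yield the same Jacobian.

    import numpy as np
    import mindspore as ms
    from mindspore import Tensor, ops

    def f(x, y):
        return x ** 3 + y  # element-wise, so the Jacobian w.r.t. x is diagonal with 3 * x**2

    x = Tensor(np.array([1.0, 2.0]), ms.float32)
    y = Tensor(np.array([3.0, 4.0]), ms.float32)

    jac_fwd = ops.jacfwd(f, grad_position=0)(x, y)  # forward-mode Jacobian w.r.t. the first input
    jac_rev = ops.jacrev(f, grad_position=0)(x, y)  # reverse-mode, same result
    print(jac_fwd.shape, jac_rev.shape)             # (2, 2) (2, 2)
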
@@ -36,7 +36,6 @@ def serialize(dataset, json_filepath=""):
    a related warning message is reported and the obtained JSON file cannot be deserialized
    into a usable data pipeline.

    Args:
        dataset (Dataset): The starting node.
        json_filepath (str): The filepath where a serialized JSON file will be generated (default="").
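
A minimal sketch of calling `serialize` on a small pipeline; the `NumpySlicesDataset` source and the output path are illustrative assumptions.

    import mindspore.dataset as ds

    data = ds.NumpySlicesDataset({"col1": [1, 2, 3, 4]}, shuffle=False)
    data = data.batch(2)

    graph = ds.serialize(data)                         # returns a dict describing the pipeline
    ds.serialize(data, json_filepath="pipeline.json")  # or write the serialized JSON to disk
    print(type(graph))
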
@@ -1131,7 +1131,7 @@ def strided_slice(input_x,
    If the ith bit of `shrink_axis_mask` is 1, `begin`, `end` and `strides`
    are ignored and dimension i will be shrunk to 0. For a 5*6*7 Tensor `input_x`,
-    if `shrink_axis_mask` is ob010`, it is equivalent to slice x[:, 5, :]`
+    if `shrink_axis_mask` is ob010, it is equivalent to slice `x[:, 5, :]`
    and results in an output shape of :math:`(5, 7)`.

    Note:
@@ -1144,9 +1144,9 @@ def strided_slice(input_x,
            Only non-negative int is allowed.
        end (tuple[int]): A tuple or which represents the maximum location where to end.
            Only non-negative int is allowed.
-        strides (tuple[int]): - A tuple which represents the strides is continuously added
-            before reaching the maximum location. Only int is allowed, it can be negative
-            which results in reversed slicing.
+        strides (tuple[int]): A tuple which represents the strides is continuously added
+            before reaching the maximum location. Only int is allowed, it can be negative
+            which results in reversed slicing.
        begin_mask (int, optional): Starting index of the slice. Default: 0.
        end_mask (int, optional): Ending index of the slice. Default: 0.
        ellipsis_mask (int, optional): An int mask, ignore slicing operation when set to 1. Default: 0.
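
Tying this to the shrink_axis_mask example above, a short sketch (assuming the functional form exposes `shrink_axis_mask` as a keyword, alongside the mask arguments listed here):

    import numpy as np
    import mindspore as ms
    from mindspore import Tensor, ops

    x = Tensor(np.arange(5 * 6 * 7).reshape(5, 6, 7), ms.float32)

    # shrink_axis_mask=0b010 collapses dimension 1, i.e. the same as x[:, 5, :]
    out = ops.strided_slice(x, (0, 5, 0), (5, 6, 7), (1, 1, 1), shrink_axis_mask=0b010)
    print(out.shape)  # (5, 7)
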
@@ -940,7 +940,7 @@ def jacfwd(fn, grad_position=0, has_aux=False):
    reverse mode to get better performance.

    Args:
-        fn (Union[Cell, function]): Function to do GradOperation.
+        fn (Union[Cell, Function]): Function to do GradOperation.
        grad_position (Union[int, tuple[int]]): If int, get the gradient with respect to single input.
            If tuple, get the gradients with respect to selected inputs. 'grad_position' begins with 0. Default: 0.
        has_aux (bool): If True, only the first output of `fn` contributes the gradient of `fn`, while the other outputs
@@ -1132,7 +1132,7 @@ def jacrev(fn, grad_position=0, has_aux=False):
    forward mode to get better performance.

    Args:
-        fn (Union[Cell, function]): Function to do GradOperation.
+        fn (Union[Cell, Function]): Function to do GradOperation.
        grad_position (Union[int, tuple[int]]): If int, get the gradient with respect to single input.
            If tuple, get the gradients with respect to selected inputs. 'grad_position' begins with 0. Default: 0.
        has_aux (bool): If True, only the first output of `fn` contributes the gradient of `fn`, while the other outputs
@@ -168,7 +168,7 @@ def crop_and_resize(image, boxes, box_indices, crop_size, method="bilinear", ext
        extrapolation_value (float): An optional float value used extrapolation, if applicable. Default: 0.0.

    Returns:
-        A 4-D tensor of shape [num_boxes, crop_height, crop_width, depth] with type: float32.
+        A 4-D tensor of shape [num_boxes, crop_height, crop_width, depth] with type(float32).

    Raises:
        TypeError: If `image` or `boxes` or `box_indices` is not a Tensor.
@@ -3045,10 +3045,12 @@ def conv3d(inputs, weight, pad_mode="valid", padding=0, stride=1, dilation=1, gr
    :math:`(N, C_{out}, D_{out}, H_{out}, W_{out})`. Where :math:`N` is batch size, :math:`C` is channel number,
    :math:`D` is depth, :math:`H` is height, :math:`W` is width.
    the formula is defined as:

    .. math::
        \operatorname{out}\left(N_{i}, C_{\text {out}_j}\right)=\operatorname{bias}\left(C_{\text {out}_j}\right)+
        \sum_{k=0}^{C_{in}-1} ccor(\text {weight}\left(C_{\text {out}_j}, k\right),
        \operatorname{input}\left(N_{i}, k\right))

    where :math:`k` is kernel, :math:`ccor` is the cross-correlation operator.
    If the 'pad_mode' is set to be "valid", the output depth, height and width will be
    :math:`\left \lfloor{1 + \frac{D_{in} + 2 \times \text{padding} - \text{ks_d} -
@@ -3059,6 +3061,7 @@ def conv3d(inputs, weight, pad_mode="valid", padding=0, stride=1, dilation=1, gr
    (\text{ks_w} - 1) \times (\text{dilation} - 1) }{\text{stride}}} \right \rfloor` respectively. Where
    :math:`dilation` is Spacing between kernel elements, :math:`stride` is The step length of each step,
    :math:`padding` is zero-padding added to both sides of the input.

    Args:
        inputs (Tensor): Tensor of shape :math:`(N, C_{in}, D_{in}, H_{in}, W_{in})`.
        weight (Tensor): Set size of kernel is :math:`(\text{kernel_size[0]}, \text{kernel_size[1]},
@@ -3094,8 +3097,10 @@ def conv3d(inputs, weight, pad_mode="valid", padding=0, stride=1, dilation=1, gr
            Its value must be greater than or equal to 1 and bounded by the height and width of the input. Default: 1.
        group (int, optional): Splits filter into groups, `in_channels` and `out_channels` must be
            divisible by the number of groups. Default: 1. Only 1 is currently supported.

    Returns:
        Tensor, the value that applied 3D convolution. The shape is :math:`(N, C_{out}, D_{out}, H_{out}, W_{out})`.

    Raises:
        TypeError: If `out_channel` or `group` is not an int.
        TypeError: If `stride`, `padding` or `dilation` is neither an int nor a tuple.
@@ -3103,8 +3108,10 @@ def conv3d(inputs, weight, pad_mode="valid", padding=0, stride=1, dilation=1, gr
        ValueError: If `pad_mode` is not one of 'same', 'valid' or 'pad'.
        ValueError: If `padding` is a tuple whose length is not equal to 4.
        ValueError: If `pad_mode` is not equal to 'pad' and `pad` is not equal to (0, 0, 0, 0, 0, 0).

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> x = Tensor(np.ones([16, 3, 10, 32, 32]), mindspore.float16)
        >>> weight = Tensor(np.ones([32, 3, 4, 3, 3]), mindspore.float16)
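
To connect the output-size formula above with the Examples block, a small stand-alone helper that evaluates the 'valid'-mode expression for those shapes (plain arithmetic, not MindSpore code):

    import math

    def conv3d_out_dim(in_dim, kernel, stride=1, dilation=1, padding=0):
        # floor(1 + (in + 2*padding - k - (k - 1)*(dilation - 1)) / stride), as in the formula above
        return math.floor(1 + (in_dim + 2 * padding - kernel - (kernel - 1) * (dilation - 1)) / stride)

    # inputs (16, 3, 10, 32, 32) with weight (32, 3, 4, 3, 3) and pad_mode="valid":
    d_out = conv3d_out_dim(10, 4)
    h_out = conv3d_out_dim(32, 3)
    w_out = conv3d_out_dim(32, 3)
    print(d_out, h_out, w_out)  # 7 30 30 -> output shape (16, 32, 7, 30, 30)
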
@@ -3167,27 +3174,29 @@ def multi_margin_loss(inputs, target, p=1, margin=1, weight=None, reduction='mea
    :math:`0 \leq y \leq \text{x.size}(1)-1`):
    For each mini-batch sample, the loss in terms of the 1D input :math:`x` and scalar
    output :math:`y` is:

    .. math::
        \text{loss}(x, y) = \frac{\sum_i \max(0, w[y] * (\text{margin} - x[y] + x[i]))^p)}{\text{x.size}(0)}

    where :math:`x \in \left\{0, \; \cdots , \; \text{x.size}(0) - 1\right\}`
    and :math:`i \neq y`.
    Optionally, you can give non-equal weighting on the classes by passing
    a 1D input `weight` tensor into the constructor.

    Args:
-        - **inputs** (Tensor) - Input , with shape :math:`(N, C)`. Data type only support float32, float16 or float64.
-        - **target** (Tensor) - Ground truth labels, with shape :math:`(N,)`. Data type only support int64. The
-          value of target should be non-negative, less than C.
-        - **p** (int, optional) - The norm degree for pairwise distance. Should be 1 or 2. Default: 1.
-        - **margin** (int, optional) - A parameter to change pairwise distance. Default: 1.
-        - **weight** (Tensor, optional) - The rescaling weight to each class with shape :math:`(C,)`. Data type only
-          support float16, float32 or float64. Default: None.
-        - **reduction** (str, optional) - Apply specific reduction method to the output: 'none', 'mean',
-          'sum'. Default: 'mean'.
+        inputs (Tensor): Input , with shape :math:`(N, C)`. Data type only support float32, float16 or float64.
+        target (Tensor): Ground truth labels, with shape :math:`(N,)`. Data type only support int64. The
+            value of target should be non-negative, less than C.
+        p (int, optional): The norm degree for pairwise distance. Should be 1 or 2. Default: 1.
+        margin (int, optional): A parameter to change pairwise distance. Default: 1.
+        weight (Tensor, optional): The rescaling weight to each class with shape :math:`(C,)`. Data type only
+            support float16, float32 or float64. Default: None.
+        reduction** (str, optional): Apply specific reduction method to the output: 'none', 'mean',
+            'sum'. Default: 'mean'.

-          - 'none': no reduction will be applied.
-          - 'mean': the sum of the output will be divided by the number of elements in the output.
-          - 'sum': the output will be summed.
+            - 'none': no reduction will be applied.
+            - 'mean': the sum of the output will be divided by the number of elements in the output.
+            - 'sum': the output will be summed.

    Returns:
        Tensor, When `reduction` is 'none', the shape is :math:`(N,)`.
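
As a plain-NumPy restatement of the loss formula above (a reference sketch, not the MindSpore kernel), with made-up scores and labels:

    import numpy as np

    def multi_margin_loss_ref(x, target, p=1, margin=1.0, weight=None, reduction="mean"):
        # x: (N, C) scores, target: (N,) class indices
        n, c = x.shape
        per_sample = np.zeros(n)
        for k in range(n):
            y = int(target[k])
            w = 1.0 if weight is None else weight[y]
            terms = np.maximum(0.0, w * (margin - x[k, y] + x[k])) ** p
            terms[y] = 0.0                   # the sum runs over i != y
            per_sample[k] = terms.sum() / c  # divided by x.size(0) of the 1D per-sample input
        if reduction == "none":
            return per_sample
        return per_sample.sum() if reduction == "sum" else per_sample.mean()

    scores = np.array([[0.3, 0.7, 0.2], [0.5, 0.1, 0.9]], dtype=np.float32)
    labels = np.array([1, 2])
    print(multi_margin_loss_ref(scores, labels))
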
@@ -3228,12 +3237,15 @@ def multi_margin_loss(inputs, target, p=1, margin=1, weight=None, reduction='mea
def multi_label_margin_loss(inputs, target, reduction='mean'):
    r"""
    MultilabelMarginLoss operation.

    Creates a criterion that optimizes a multi-class multi-classification
    hinge loss (margin-based loss) between input :math:`x` (a 2D mini-batch `Tensor`)
    and output :math:`y` (which is a 2D `Tensor` of target class indices).
    For each sample in the mini-batch:

    .. math::
        \text{loss}(x, y) = \sum_{ij}\frac{\max(0, 1 - (x[y[j]] - x[i]))}{\text{x.size}(0)}

    where :math:`x \in \left\{0, \; \cdots , \; \text{x.size}(0) - 1\right\}`, \
    :math:`y \in \left\{0, \; \cdots , \; \text{y.size}(0) - 1\right\}`, \
    :math:`0 \leq y[j] \leq \text{x.size}(0)-1`, \
@@ -3244,16 +3256,16 @@ def multi_label_margin_loss(inputs, target, reduction='mean'):
    This allows for different samples to have variable amounts of target classes.

    Args:
-        - **inputs** (Tensor) - Predict data. Tensor of shape :math:`(C)` or :math:`(N, C)`, where :math:`N`
-          is the batch size and :math:`C` is the number of classes. Data type must be float16 or float32.
-        - **target** (Tensor) - Ground truth data, with the same shape as `x`, data type must be int32 and
-          label targets padded by -1.
-        - **reduction** (str, optional) - Apply specific reduction method to the output: 'none', 'mean',
-          'sum'. Default: 'mean'.
+        inputs (Tensor): Predict data. Tensor of shape :math:`(C)` or :math:`(N, C)`, where :math:`N`
+            is the batch size and :math:`C` is the number of classes. Data type must be float16 or float32.
+        target (Tensor): Ground truth data, with the same shape as `x`, data type must be int32 and
+            label targets padded by -1.
+        reduction (str, optional): Apply specific reduction method to the output: 'none', 'mean',
+            'sum'. Default: 'mean'.

-          - 'none': no reduction will be applied.
-          - 'mean': the sum of the output will be divided by the number of elements in the output.
-          - 'sum': the output will be summed.
+            - 'none': no reduction will be applied.
+            - 'mean': the sum of the output will be divided by the number of elements in the output.
+            - 'sum': the output will be summed.

    Returns:
        - **outputs** (Union[Tensor, Scalar]) - The loss of MultilabelMarginLoss. If `reduction` is "none", its shape
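
Analogously to the sketch for `multi_margin_loss` above, a NumPy restatement of this formula (reference only; target rows are padded with -1 as described):

    import numpy as np

    def multi_label_margin_loss_ref(x, target, reduction="mean"):
        # x: (N, C) scores; target: (N, C) int class indices, padded with -1 after the valid labels
        n, c = x.shape
        per_sample = np.zeros(n)
        for k in range(n):
            labels = [int(t) for t in target[k] if t >= 0]
            others = [i for i in range(c) if i not in labels]
            s = 0.0
            for j in labels:
                for i in others:
                    s += max(0.0, 1.0 - (x[k, j] - x[k, i]))
            per_sample[k] = s / c
        if reduction == "none":
            return per_sample
        return per_sample.sum() if reduction == "sum" else per_sample.mean()

    x = np.array([[0.1, 0.2, 0.4, 0.8]], dtype=np.float32)
    target = np.array([[3, 0, -1, -1]], dtype=np.int32)
    print(multi_label_margin_loss_ref(x, target))  # 0.85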