!45708 modify format

Merge pull request !45708 from 俞涵/code_docs_1110
This commit is contained in:
i-robot 2022-11-21 01:20:55 +00:00 committed by Gitee
commit 9c4d3694d6
No known key found for this signature in database
GPG Key ID: 173E9B9CA92EEF8F
27 changed files with 101 additions and 109 deletions

View File

@@ -8,7 +8,7 @@ mindspore.dataset.vision.read_image
Parameters:
- **filename** (str) - Path of the image file to be read.
- - **mode** (ImageReadMode, optional) - The image reading mode. It can be any one of [ImageReadMode.UNCHANGED, ImageReadMode.GRAYSCALE, ImageReadMode.COLOR].
+ - **mode** (int, optional) - The image reading mode. It can be any one of [ImageReadMode.UNCHANGED, ImageReadMode.GRAYSCALE, ImageReadMode.COLOR].
Default: ImageReadMode.UNCHANGED.
- **ImageReadMode.UNCHANGED** - Read the image in its original format.
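
For reference, a minimal usage sketch of the API documented above, assuming the mindspore.dataset.vision module of this release (the file path is hypothetical):

>>> from mindspore.dataset import vision
>>> from mindspore.dataset.vision import ImageReadMode
>>> # Read in the original format; ImageReadMode.GRAYSCALE or ImageReadMode.COLOR force a layout.
>>> img = vision.read_image("/path/to/image.jpg", mode=ImageReadMode.UNCHANGED)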

View File

@@ -721,7 +721,7 @@ Parameter Operators
Spectral Operators
------------------
- .. mscnautosummary::
+ .. mscnplatformautosummary::
:toctree: ops
:nosignatures:
:template: classtemplate.rst

View File

@@ -3,4 +3,4 @@ mindspore.Tensor.log10
.. py:method:: mindspore.Tensor.log10()
- For more details, see :func:`mindspore.ops.log10`.
+ For details, refer to :func:`mindspore.ops.log10`.

View File

@@ -3,4 +3,4 @@ mindspore.Tensor.log2
.. py:method:: mindspore.Tensor.log2()
- For more details, see :func:`mindspore.ops.log2`.
+ For details, refer to :func:`mindspore.ops.log2`.

View File

@@ -21,11 +21,11 @@ mindspore.set_ps_context
Parameters:
- **enable_ps** (bool) - Specifies whether to enable parameter server training mode. The environment variables take effect only after enable_ps is set to True. Default: False.
- - **config_file_path** (string) - Path of the configuration file, used for disaster recovery and similar purposes. Currently, parameter server training mode supports only Server disaster recovery. Default:''.
+ - **config_file_path** (string) - Path of the configuration file, used for disaster recovery and similar purposes. Currently, parameter server training mode supports only Server disaster recovery. Default: ''.
- **scheduler_manage_port** (int) - HTTP port of the scheduler, exposed to receive and handle user requests such as scale-out/scale-in. Default: 11202.
- **enable_ssl** (bool) - Specifies whether to enable SSL authentication. Default: True.
- - **client_password** (str) - Password used to decrypt the client certificate key. Default:''.
- - **server_password** (str) - Password used to decrypt the server certificate key. Default:''.
+ - **client_password** (str) - Password used to decrypt the client certificate key. Default: ''.
+ - **server_password** (str) - Password used to decrypt the server certificate key. Default: ''.
Raises:
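
A short sketch of how these options are set and read back, assuming the top-level mindspore.set_ps_context/get_ps_context pair (all values are illustrative):

>>> import mindspore as ms
>>> # Enable parameter-server mode; the related environment variables only take effect after this.
>>> ms.set_ps_context(enable_ps=True, enable_ssl=False, scheduler_manage_port=11202)
>>> ms.get_ps_context("enable_ps")
True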

View File

@@ -1,7 +1,7 @@
mindspore.ops.AdaptiveMaxPool3D
===============================
- .. py:class:: mindspore.ops.AdaptiveMaxPool3D(output_size)
+ .. py:class:: mindspore.ops.AdaptiveMaxPool3D()
3D adaptive max pooling.
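
A sketch of the new zero-argument form, under the assumption that output_size is now supplied at call time as a Tensor and that the primitive returns the pooled values together with their indices:

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.random.rand(1, 2, 4, 6, 8).astype(np.float32))
>>> output_size = Tensor([2, 3, 4], mindspore.int32)
>>> y, argmax = ops.AdaptiveMaxPool3D()(x, output_size)
>>> print(y.shape)
(1, 2, 2, 3, 4)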

View File

@@ -1,7 +1,7 @@
mindspore.ops.Cross
====================
- .. py:class:: mindspore.ops.Cross(dim=None)
+ .. py:class:: mindspore.ops.Cross(dim=-65530)
Computes the cross product of two input vectors or arrays of vectors.
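
A small sketch of the operator; the explicit dim, and reading the sentinel default dim=-65530 as "use the first axis of length 3", are assumptions:

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> cross = ops.Cross(dim=1)
>>> x1 = Tensor(np.array([[1, 0, 0]]), mindspore.float32)
>>> x2 = Tensor(np.array([[0, 1, 0]]), mindspore.float32)
>>> print(cross(x1, x2))
[[0. 0. 1.]]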

View File

@@ -2,7 +2,7 @@ mindspore.ops.Partial
======================
.. py:class:: mindspore.ops.Partial
Generates an instance of a partial function. A new function with specific behavior is derived from a general function by supplying initial values for some of its parameters.
Inputs:

View File

@@ -718,7 +718,7 @@ Customizing Operator
Spectral Operator
-----------------
- .. autosummary::
+ .. msplatformautosummary::
:toctree: ops
:nosignatures:
:template: classtemplate.rst

View File

@@ -94,7 +94,7 @@ class Converter:
model named model.prototxt.ms in /home/user/.
weight_file (str, optional): Set the path of input model weight file. Required only when fmk_type is
FmkType.CAFFE. The Caffe model is generally divided into two files: 'model.prototxt' is model structure,
- corresponding to 'model_file` parameter; `model.Caffemodel' is model weight value file, corresponding to
+ corresponding to `model_file` parameter; 'model.Caffemodel' is model weight value file, corresponding to
`weight_file` parameter. For example, "/home/user/model.caffemodel". Default: "".
config_file (str, optional): Set the path of the configuration file of Converter can be used to post-training,
offline split op to parallel, disable op fusion ability and set plugin so path. `config_file' uses the
@@ -115,45 +115,45 @@ class Converter:
parameter. For example, {"inTensor1": [1, 32, 32, 32], "inTensor2": [1, 1, 32, 32]}. Default: None, None is
equivalent to {}.
- - Usage 1:The input of the model to be converted is dynamic shape, but prepare to use fixed shape for
- inference, then set the parameter to fixed shape. After setting, when inferring on the converted
- model, the default input shape is the same as the parameter setting, no need to resize.
- - Usage 2: No matter whether the original input of the model to be converted is dynamic shape or not,
- but prepare to use fixed shape for inference, and the performance of the model is
- expected to be optimized as much as possible, then set the parameter to fixed shape. After
- setting, the model structure will be further optimized, but the converted model may lose the
- characteristics of dynamic shape(some operators strongly related to shape will be merged).
- - Usage 3: When using the converter function to generate code for Micro inference execution, it is
- recommended to set the parameter to reduce the probability of errors during deployment.
- When the model contains a Shape ops or the input of the model to be converted is a dynamic
- shape, you must set the parameter to fixed shape to support the relevant shape optimization and
- code generation.
+ - Usage 1:The input of the model to be converted is dynamic shape, but prepare to use fixed shape for
+ inference, then set the parameter to fixed shape. After setting, when inferring on the converted
+ model, the default input shape is the same as the parameter setting, no need to resize.
+ - Usage 2: No matter whether the original input of the model to be converted is dynamic shape or not,
+ but prepare to use fixed shape for inference, and the performance of the model is
+ expected to be optimized as much as possible, then set the parameter to fixed shape. After
+ setting, the model structure will be further optimized, but the converted model may lose the
+ characteristics of dynamic shape(some operators strongly related to shape will be merged).
+ - Usage 3: When using the converter function to generate code for Micro inference execution, it is
+ recommended to set the parameter to reduce the probability of errors during deployment.
+ When the model contains a Shape ops or the input of the model to be converted is a dynamic
+ shape, you must set the parameter to fixed shape to support the relevant shape optimization and
+ code generation.
input_format (Format, optional): Set the input format of exported model. Only Valid for 4-dimensional input. The
following 2 input formats are supported: Format.NCHW | Format.NHWC. Default: Format.NHWC.
- - Format.NCHW: Store tensor data in the order of batch N, channel C, height H and width W.
- - Format.NHWC: Store tensor data in the order of batch N, height H, width W and channel C.
+ - Format.NCHW: Store tensor data in the order of batch N, channel C, height H and width W.
+ - Format.NHWC: Store tensor data in the order of batch N, height H, width W and channel C.
input_data_type (DataType, optional): Set the data type of the quantization model input Tensor. It is only valid
when the quantization parameters ( `scale` and `zero point` ) of the model input tensor are available.
The following 4 DataTypes are supported: DataType.FLOAT32 | DataType.INT8 | DataType.UINT8 |
DataType.UNKNOWN. Default: DataType.FLOAT32.
- - DataType.FLOAT32: 32-bit floating-point number.
- - DataType.INT8: 8-bit integer.
- - DataType.UINT8: unsigned 8-bit integer.
- - DataType.UNKNOWN: Set the Same DataType as the model input Tensor.
+ - DataType.FLOAT32: 32-bit floating-point number.
+ - DataType.INT8: 8-bit integer.
+ - DataType.UINT8: unsigned 8-bit integer.
+ - DataType.UNKNOWN: Set the Same DataType as the model input Tensor.
output_data_type (DataType, optional): Set the data type of the quantization model output Tensor. It is only
valid when the quantization parameters ( `scale` and `zero point` ) of the model output tensor are
available. The following 4 DataTypes are supported: DataType.FLOAT32 | DataType.INT8 | DataType.UINT8 |
DataType.UNKNOWN. Default: DataType.FLOAT32.
- - DataType.FLOAT32: 32-bit floating-point number.
- - DataType.INT8: 8-bit integer.
- - DataType.UINT8: unsigned 8-bit integer.
- - DataType.UNKNOWN: Set the Same DataType as the model output Tensor.
+ - DataType.FLOAT32: 32-bit floating-point number.
+ - DataType.INT8: 8-bit integer.
+ - DataType.UINT8: unsigned 8-bit integer.
+ - DataType.UNKNOWN: Set the Same DataType as the model output Tensor.
export_mindir (ModelType, optional): Set the model type needs to be export. Options: ModelType.MINDIR |
ModelType.MINDIR_LITE. Default: ModelType.MINDIR_LITE. For details, see
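
A minimal end-to-end sketch of the Caffe case described above, assuming the mindspore_lite Python API of this release (all paths are hypothetical):

>>> import mindspore_lite as mslite
>>> converter = mslite.Converter(fmk_type=mslite.FmkType.CAFFE,
...                              model_file="/home/user/model.prototxt",
...                              output_file="/home/user/model",
...                              weight_file="/home/user/model.caffemodel",
...                              input_shape={"inTensor1": [1, 32, 32, 32]})
>>> converter.converter()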

View File

@@ -83,7 +83,6 @@ def mutable(input_data):
Examples:
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
- >>> from mindspore.ops.composite import GradOperation
>>> from mindspore.common import mutable
>>> from mindspore.common import dtype as mstype
>>> from mindspore import Tensor
@@ -102,7 +101,7 @@ def mutable(input_data):
... def __init__(self, net):
... super(GradNetWrtX, self).__init__()
... self.net = net
- ... self.grad_op = GradOperation()
+ ... self.grad_op = ops.GradOperation()
...
... def construct(self, z):
... gradient_function = self.grad_op(self.net)

View File

@@ -456,7 +456,7 @@ def create_group(group, rank_ids):
Examples:
>>> from mindspore import set_context
- >>> from mindspore.ops import operations as ops
+ >>> import mindspore.ops as ops
>>> from mindspore.communication.management import init, create_group
>>> set_context(device_target="Ascend")
>>> init()

View File

@@ -501,13 +501,13 @@ class MarginRankingLoss(LossBase):
Examples:
>>> import mindspore as ms
>>> import mindspore.nn as nn
- >>> import mindspore.ops as P
+ >>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> import numpy as np
>>> loss1 = nn.MarginRankingLoss(reduction='none')
>>> loss2 = nn.MarginRankingLoss(reduction='mean')
>>> loss3 = nn.MarginRankingLoss(reduction='sum')
- >>> sign = P.Sign()
+ >>> sign = ops.Sign()
>>> input1 = Tensor(np.array([0.3864, -2.4093, -1.4076]), ms.float32)
>>> input2 = Tensor(np.array([-0.6012, -1.6681, 1.2928]), ms.float32)
>>> target = sign(Tensor(np.array([-2, -2, 3]), ms.float32))

View File

@@ -211,12 +211,11 @@ class GradOperation(GradOperation_):
Examples:
>>> from mindspore import ParameterTuple
- >>> from mindspore.ops.composite import GradOperation
- >>> from mindspore.ops import operations as P
+ >>> import mindspore.ops as ops
>>> class Net(nn.Cell):
... def __init__(self):
... super(Net, self).__init__()
- ... self.matmul = P.MatMul()
+ ... self.matmul = ops.MatMul()
... self.z = Parameter(Tensor(np.array([1.0], np.float32)), name='z')
... def construct(self, x, y):
... x = x * self.z
@@ -227,7 +226,7 @@ class GradOperation(GradOperation_):
... def __init__(self, net):
... super(GradNetWrtX, self).__init__()
... self.net = net
- ... self.grad_op = GradOperation()
+ ... self.grad_op = ops.GradOperation()
... def construct(self, x, y):
... gradient_function = self.grad_op(self.net)
... return gradient_function(x, y)
@@ -243,7 +242,7 @@ class GradOperation(GradOperation_):
... def __init__(self, net):
... super(GradNetWrtXY, self).__init__()
... self.net = net
- ... self.grad_op = GradOperation(get_all=True)
+ ... self.grad_op = ops.GradOperation(get_all=True)
... def construct(self, x, y):
... gradient_function = self.grad_op(self.net)
... return gradient_function(x, y)
@@ -263,7 +262,7 @@ class GradOperation(GradOperation_):
... def __init__(self, net):
... super(GradNetWrtXYWithSensParam, self).__init__()
... self.net = net
- ... self.grad_op = GradOperation(get_all=True, sens_param=True)
+ ... self.grad_op = ops.GradOperation(get_all=True, sens_param=True)
... self.grad_wrt_output = Tensor([[0.1, 0.6, 0.2], [0.8, 1.3, 1.1]], dtype=mstype.float32)
... def construct(self, x, y):
... gradient_function = self.grad_op(self.net)
@@ -285,7 +284,7 @@ class GradOperation(GradOperation_):
... super(GradNetWithWrtParams, self).__init__()
... self.net = net
... self.params = ParameterTuple(net.trainable_params())
- ... self.grad_op = GradOperation(get_by_list=True)
+ ... self.grad_op = ops.GradOperation(get_by_list=True)
... def construct(self, x, y):
... gradient_function = self.grad_op(self.net, self.params)
... return gradient_function(x, y)
@@ -301,7 +300,7 @@ class GradOperation(GradOperation_):
... super(GradNetWrtInputsAndParams, self).__init__()
... self.net = net
... self.params = ParameterTuple(net.trainable_params())
- ... self.grad_op = GradOperation(get_all=True, get_by_list=True)
+ ... self.grad_op = ops.GradOperation(get_all=True, get_by_list=True)
... def construct(self, x, y):
... gradient_function = self.grad_op(self.net, self.params)
... return gradient_function(x, y)
@@ -625,10 +624,10 @@ class MultitypeFuncGraph(MultitypeFuncGraph_):
>>> from mindspore import Tensor
>>> from mindspore import ops
>>> from mindspore import dtype as mstype
- >>> from mindspore.ops.composite import MultitypeFuncGraph
+ >>> import mindspore.ops as ops
>>>
>>> tensor_add = ops.Add()
- >>> add = MultitypeFuncGraph('add')
+ >>> add = ops.MultitypeFuncGraph('add')
>>> @add.register("Number", "Number")
... def add_scala(x, y):
... return x + y
@@ -731,23 +730,22 @@ class HyperMap(HyperMap_):
Examples:
>>> from mindspore import Tensor, ops
- >>> from mindspore.ops.composite.base import MultitypeFuncGraph, HyperMap
>>> from mindspore import dtype as mstype
>>> nest_tensor_list = ((Tensor(1, mstype.float32), Tensor(2, mstype.float32)),
... (Tensor(3, mstype.float32), Tensor(4, mstype.float32)))
>>> # square all the tensor in the nested list
>>>
- >>> square = MultitypeFuncGraph('square')
+ >>> square = ops.MultitypeFuncGraph('square')
>>> @square.register("Tensor")
... def square_tensor(x):
... return ops.square(x)
>>>
- >>> common_map = HyperMap()
+ >>> common_map = ops.HyperMap()
>>> output = common_map(square, nest_tensor_list)
>>> print(output)
((Tensor(shape=[], dtype=Float32, value= 1), Tensor(shape=[], dtype=Float32, value= 4)),
(Tensor(shape=[], dtype=Float32, value= 9), Tensor(shape=[], dtype=Float32, value= 16)))
- >>> square_map = HyperMap(square, False)
+ >>> square_map = ops.HyperMap(square, False)
>>> output = square_map(nest_tensor_list)
>>> print(output)
((Tensor(shape=[], dtype=Float32, value= 1), Tensor(shape=[], dtype=Float32, value= 4)),

View File

@@ -3692,17 +3692,17 @@ def broadcast_to(x, shape):
``Ascend`` ``GPU`` ``CPU``
Examples:
- >>> from mindspore.ops.function import broadcast_to
+ >>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> shape = (2, 3)
>>> x = Tensor(np.array([1, 2, 3]).astype(np.float32))
- >>> output = broadcast_to(x, shape)
+ >>> output = ops.broadcast_to(x, shape)
>>> print(output)
[[1. 2. 3.]
[1. 2. 3.]]
>>> shape = (-1, 2)
>>> x = Tensor(np.array([[1], [2]]).astype(np.float32))
- >>> output = broadcast_to(x, shape)
+ >>> output = ops.broadcast_to(x, shape)
>>> print(output)
[[1. 1.]
[2. 2.]]

View File

@@ -405,15 +405,14 @@ def jet(fn, primals, series):
>>> import numpy as np
>>> import mindspore.nn as nn
>>> import mindspore as ms
- >>> import mindspore.ops as P
+ >>> import mindspore.ops as ops
>>> from mindspore import Tensor
- >>> from mindspore.ops.functional import jet
>>> ms.set_context(mode=ms.GRAPH_MODE)
>>> class Net(nn.Cell):
... def __init__(self):
... super().__init__()
- ... self.sin = P.Sin()
- ... self.exp = P.Exp()
+ ... self.sin = ops.Sin()
+ ... self.exp = ops.Exp()
... def construct(self, x):
... out1 = self.sin(x)
... out2 = self.exp(out1)
@@ -421,7 +420,7 @@ def jet(fn, primals, series):
>>> primals = Tensor(np.array([[1, 2], [3, 4]]).astype(np.float32))
>>> series = Tensor(np.array([[[1, 1], [1, 1]], [[0, 0], [0, 0]], [[0, 0], [0, 0]]]).astype(np.float32))
>>> net = Net()
- >>> out_primals, out_series = jet(net, primals, series)
+ >>> out_primals, out_series = ops.jet(net, primals, series)
>>> print(out_primals, out_series)
[[2.319777 2.4825778]
[1.1515628 0.4691642]] [[[ 1.2533808 -1.0331168 ]
@@ -516,15 +515,14 @@ def derivative(fn, primals, order):
>>> import numpy as np
>>> import mindspore as ms
>>> import mindspore.nn as nn
- >>> import mindspore.ops as P
+ >>> import mindspore.ops as ops
>>> from mindspore import Tensor
- >>> from mindspore.ops.functional import derivative
>>> ms.set_context(mode=ms.GRAPH_MODE)
>>> class Net(nn.Cell):
... def __init__(self):
... super().__init__()
- ... self.sin = P.Sin()
- ... self.exp = P.Exp()
+ ... self.sin = ops.Sin()
+ ... self.exp = ops.Exp()
... def construct(self, x):
... out1 = self.sin(x)
... out2 = self.exp(out1)
@@ -532,7 +530,7 @@ def derivative(fn, primals, order):
>>> primals = Tensor(np.array([[1, 2], [3, 4]]).astype(np.float32))
>>> order = 3
>>> net = Net()
- >>> out_primals, out_series = derivative(net, primals, order)
+ >>> out_primals, out_series = ops.derivative(net, primals, order)
>>> print(out_primals, out_series)
[[2.319777 2.4825778]
[1.1515628 0.4691642]] [[-4.0515366 3.6724353 ]

View File

@@ -416,8 +416,8 @@ def argmin(x, axis=-1, keepdims=False):
Args:
x (Tensor): Input tensor. The shape is :math:`(N,*)` where :math:`*` means, any number of additional dimensions.
- axis (int): Axis where the Argmin operation applies to. Default: -1.
- keepdims (boolean, optional): Whether the output tensor retains the specified
+ axis (Union[int, None], optional): Axis where the Argmin operation applies to. Default: None.
+ keepdims (bool, optional): Whether the output tensor retains the specified
dimension. Ignored if `axis` is None. Default: False.
Returns:
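
A quick sketch of the updated axis semantics, assuming axis=None flattens the input before the argmin is taken:

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([[3.0, 1.0, 2.0], [0.5, 4.0, 6.0]]).astype(np.float32))
>>> print(ops.argmin(x, axis=1))
[1 0]
>>> print(ops.argmin(x, axis=None))
3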
@@ -3103,11 +3103,11 @@ def approximate_equal(x, y, tolerance=1e-5):
``Ascend`` ``GPU`` ``CPU``
Examples:
- >>> from mindspore.ops.function.math_func import approximate_equal
+ >>> import mindspore.ops as ops
>>> tol = 1.5
>>> x = Tensor(np.array([1, 2, 3]), mstype.float32)
>>> y = Tensor(np.array([2, 4, 6]), mstype.float32)
- >>> output = approximate_equal(Tensor(x), Tensor(y), tol)
+ >>> output = ops.approximate_equal(Tensor(x), Tensor(y), tol)
>>> print(output)
[ True False False]
"""
@@ -3708,9 +3708,8 @@ def std(input_x, axis=(), unbiased=True, keep_dims=False):
``Ascend`` ``CPU``
Examples:
- >>> from mindspore.ops import functional as F
>>> input_x = Tensor(np.array([[1, 2, 3], [-1, 1, 4]]).astype(np.float32))
- >>> output = F.std(input_x, 1, True, False)
+ >>> output = ops.std(input_x, 1, True, False)
>>> output_std, output_mean = output[0], output[1]
>>> print(output_std)
[1. 2.5166116]

View File

@@ -2410,16 +2410,16 @@ def pad(input_x, padding, mode='constant', value=None):
Example: to pad only the last dimension of the input tensor, then
:attr:`padding` has the form
- :math:`(\text{padding\_left}, \text{padding\_right})`;
+ :math:`(\text{padding_left}, \text{padding_right})`;
Example: to pad the last 2 dimensions of the input tensor, then use
- :math:`(\text{padding\_left}, \text{padding\_right},`
- :math:`\text{padding\_top}, \text{padding\_bottom})`;
+ :math:`(\text{padding_left}, \text{padding_right}`,
+ :math:`\text{padding_top}, \text{padding_bottom})`;
Example: to pad the last 3 dimensions, use
- :math:`(\text{padding\_left}, \text{padding\_right},`
- :math:`\text{padding\_top}, \text{padding\_bottom}`
- :math:`\text{padding\_front}, \text{padding\_back})` and so on.
+ :math:`(\text{padding_left}, \text{padding_right}`,
+ :math:`\text{padding_top}, \text{padding_bottom}`,
+ :math:`\text{padding_front}, \text{padding_back})` and so on.
mode (str, optional): Pad filling mode, "constant", "reflect" or "replicate". Default: "constant".
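
A minimal sketch of the (padding_left, padding_right) form, using the functional signature documented in this hunk:

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.ones((1, 1, 2, 3)).astype(np.float32))
>>> out = ops.pad(x, (1, 2), mode="constant", value=0.0)  # pads only the last dimension
>>> print(out.shape)
(1, 1, 2, 6)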
@@ -2457,7 +2457,6 @@ def pad(input_x, padding, mode='constant', value=None):
[[[[6. 0. 1.]
[6. 2. 3.]
[6. 6. 6.]]
[[6. 4. 5.]
[6. 6. 7.]
[6. 6. 6.]]]]
@@ -2466,7 +2465,6 @@ def pad(input_x, padding, mode='constant', value=None):
[[[[1. 0. 1.]
[3. 2. 3.]
[1. 0. 1.]]
[[5. 4. 5.]
[7. 6. 7.]
[5. 4. 5.]]]]
@@ -2476,7 +2474,6 @@ def pad(input_x, padding, mode='constant', value=None):
[0. 0. 1. 1.]
[2. 2. 3. 3.]
[2. 2. 3. 3.]]
[[4. 4. 5. 5.]
[4. 4. 5. 5.]
[4. 4. 5. 5.]

View File

@@ -256,7 +256,7 @@ def csr_mm(a: CSRTensor, b: CSRTensor, trans_a: bool = False, trans_b: bool = Fa
Examples:
>>> from mindspore import Tensor, CSRTensor
>>> from mindspore import dtype as mstype
- >>> from mindspore.ops.function import csr_mm
+ >>> import mindspore.ops as ops
>>> a_shape = (4, 5)
>>> a_indptr = Tensor([0, 1, 1, 3, 4], dtype=mstype.int32)
>>> a_indices = Tensor([0, 3, 4, 0],dtype=mstype.int32)
@@ -267,7 +267,7 @@ def csr_mm(a: CSRTensor, b: CSRTensor, trans_a: bool = False, trans_b: bool = Fa
>>> b_values = Tensor([2.0, 7.0, 8.0], dtype=mstype.float32)
>>> a = CSRTensor(a_indptr, a_indices, a_values, a_shape)
>>> b = CSRTensor(b_indptr, b_indices, b_values, b_shape)
- >>> c = csr_mm(a, b)
+ >>> c = ops.csr_mm(a, b)
>>> print(c.shape)
(4, 3)
>>> print(c.values)
@@ -727,15 +727,15 @@ def csr_softmax(logits: CSRTensor, dtype: mstype):
Examples:
>>> import mindspore as ms
+ >>> import mindspore.ops as ops
>>> import mindspore.common.dtype as mstype
>>> from mindspore import Tensor, CSRTensor
- >>> from mindspore.ops.function import csr_softmax
>>> logits_indptr = Tensor([0, 4, 6], dtype=mstype.int32)
>>> logits_indices = Tensor([0, 2, 3, 4, 3, 4], dtype=mstype.int32)
>>> logits_values = Tensor([1, 2, 3, 4, 1, 2], dtype=mstype.float32)
>>> shape = (2, 6)
>>> logits = CSRTensor(logits_indptr, logits_indices, logits_values, shape)
- >>> out = csr_softmax(logits, dtype=mstype.float32)
+ >>> out = ops.csr_softmax(logits, dtype=mstype.float32)
>>> print(out)
CSRTensor(shape=[2, 6], dtype=Float32, indptr=Tensor(shape=[3], dtype=Int32, value=[0 4 6]),
indices=Tensor(shape=[6], dtype=Int32, value=[0 2 3 4 3 4]),
@@ -783,7 +783,7 @@ def csr_add(a: CSRTensor, b: CSRTensor, alpha: Tensor, beta: Tensor) -> CSRTenso
Examples:
>>> import mindspore.common.dtype as mstype
>>> from mindspore import Tensor, CSRTensor
- >>> from mindspore.ops.functional import csr_add
+ >>> import mindspore.ops as ops
>>> a_indptr = Tensor([0, 1, 2], dtype=mstype.int32)
>>> a_indices = Tensor([0, 1], dtype=mstype.int32)
>>> a_values = Tensor([1, 2], dtype=mstype.float32)
@@ -795,7 +795,7 @@ def csr_add(a: CSRTensor, b: CSRTensor, alpha: Tensor, beta: Tensor) -> CSRTenso
>>> beta = Tensor(1, mstype.float32)
>>> csra = CSRTensor(a_indptr, a_indices, a_values, shape)
>>> csrb = CSRTensor(b_indptr, b_indices, b_values, shape)
- >>> out = csr_add(csra, csrb, alpha, beta)
+ >>> out = ops.csr_add(csra, csrb, alpha, beta)
>>> print(out)
CSRTensor(shape=[2,6], dtype=Float32,
indptr=Tensor(shape=[3], dtype=Int32, value = [0, 1, 2]),

View File

@@ -4449,11 +4449,11 @@ class ScatterMul(_ScatterOpDynamic):
``Ascend`` ``GPU`` ``CPU``
Examples:
- >>> from mindspore.ops import operations as op
+ >>> import mindspore.ops as ops
>>> input_x = Parameter(Tensor(np.array([[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]), mstype.float32), name="x")
>>> indices = Tensor(np.array([0, 1]), mstype.int32)
>>> updates = Tensor(np.array([[2.0, 2.0, 2.0], [2.0, 2.0, 2.0]]), mstype.float32)
- >>> scatter_mul = op.ScatterMul()
+ >>> scatter_mul = ops.ScatterMul()
>>> output = scatter_mul(input_x, indices, updates)
>>> print(output)
[[2. 2. 2.]
@@ -4470,7 +4470,7 @@ class ScatterMul(_ScatterOpDynamic):
>>> indices = Tensor(np.array([[0, 1], [1, 1]]), mstype.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
... [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mstype.float32)
- >>> scatter_mul = op.ScatterMul()
+ >>> scatter_mul = ops.ScatterMul()
>>> output = scatter_mul(input_x, indices, updates)
>>> print(output)
[[ 1. 1. 1.]
@@ -4487,7 +4487,7 @@ class ScatterMul(_ScatterOpDynamic):
>>> indices = Tensor(np.array([[1, 0], [1, 1]]), mstype.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
... [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mstype.float32)
- >>> scatter_mul = op.ScatterMul()
+ >>> scatter_mul = ops.ScatterMul()
>>> output = scatter_mul(input_x, indices, updates)
>>> print(output)
[[ 3. 3. 3.]
@@ -4504,7 +4504,7 @@ class ScatterMul(_ScatterOpDynamic):
>>> indices = Tensor(np.array([[0, 1], [0, 1]]), mstype.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
... [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mstype.float32)
- >>> scatter_mul = op.ScatterMul()
+ >>> scatter_mul = ops.ScatterMul()
>>> output = scatter_mul(input_x, indices, updates)
>>> print(output)
[[ 7. 7. 7.]
@@ -4557,11 +4557,11 @@ class ScatterDiv(_ScatterOpDynamic):
``Ascend`` ``GPU`` ``CPU``
Examples:
- >>> from mindspore.ops import operations as op
+ >>> import mindspore.ops as ops
>>> input_x = Parameter(Tensor(np.array([[6.0, 6.0, 6.0], [2.0, 2.0, 2.0]]), mstype.float32), name="x")
>>> indices = Tensor(np.array([0, 1]), mstype.int32)
>>> updates = Tensor(np.array([[2.0, 2.0, 2.0], [2.0, 2.0, 2.0]]), mstype.float32)
- >>> scatter_div = op.ScatterDiv()
+ >>> scatter_div = ops.ScatterDiv()
>>> output = scatter_div(input_x, indices, updates)
>>> print(output)
[[3. 3. 3.]
@@ -4579,7 +4579,7 @@ class ScatterDiv(_ScatterOpDynamic):
>>> indices = Tensor(np.array([[0, 1], [1, 1]]), mstype.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
... [[5.0, 5.0, 5.0], [7.0, 7.0, 7.0]]]), mstype.float32)
- >>> scatter_div = op.ScatterDiv()
+ >>> scatter_div = ops.ScatterDiv()
>>> output = scatter_div(input_x, indices, updates)
>>> print(output)
[[105. 105. 105.]
@@ -4597,7 +4597,7 @@ class ScatterDiv(_ScatterOpDynamic):
>>> indices = Tensor(np.array([[1, 0], [1, 1]]), mstype.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
... [[5.0, 5.0, 5.0], [7.0, 7.0, 7.0]]]), mstype.float32)
- >>> scatter_div = op.ScatterDiv()
+ >>> scatter_div = ops.ScatterDiv()
>>> output = scatter_div(input_x, indices, updates)
>>> print(output)
[[35. 35. 35.]
@@ -4615,7 +4615,7 @@ class ScatterDiv(_ScatterOpDynamic):
>>> indices = Tensor(np.array([[0, 1], [0, 1]]), mstype.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
... [[5.0, 5.0, 5.0], [7.0, 7.0, 7.0]]]), mstype.float32)
- >>> scatter_div = op.ScatterDiv()
+ >>> scatter_div = ops.ScatterDiv()
>>> output = scatter_div(input_x, indices, updates)
>>> print(output)
[[21. 21. 21.]
@@ -7819,7 +7819,7 @@ class Bincount(Primitive):
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
- Example:
+ Examples:
>>> array = Tensor(np.array([1, 2, 2, 3, 3, 3, 4, 4, 4, 4]), mindspore.int32)
>>> size = Tensor(5, mindspore.int32)
>>> weights = Tensor(np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1]), mindspore.float32)

View File

@@ -34,7 +34,7 @@ class AdjustSaturation(Primitive):
Inputs:
- **image** (Tensor): Images to adjust. Must be one of the following types: float16, float32.
- At least 3-D.The last dimension is interpreted as channels, and must be three.
+ At least 3-D.The last dimension is interpreted as channels, and must be three.
- **scale** (Tensor): A float scale to add to the saturation. A Tensor of type float32. Must be 0-D.
Outputs:
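
A small sketch, assuming the call order shown above (image first, then scale) and that the output keeps the input shape:

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> image = Tensor(np.random.rand(2, 2, 3).astype(np.float32))
>>> scale = Tensor(0.5, mindspore.float32)
>>> output = ops.AdjustSaturation()(image, scale)
>>> print(output.shape)
(2, 2, 3)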

View File

@@ -1632,7 +1632,7 @@ class Betainc(Primitive):
Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
- Example:
+ Examples:
>>> a = Tensor(np.array([1, 1, 1]), mindspore.float32)
>>> b = Tensor(np.array([1, 1, 1]), mindspore.float32)
>>> x = Tensor(np.array([1, 1,1 ]), mindspore.float32)

View File

@@ -2581,7 +2581,7 @@ class Conv2DTranspose(Conv2DBackpropInput):
out_channel (int): The dimensionality of the output space.
kernel_size (Union[int, tuple[int]]): The size of the convolution window.
pad_mode (str): Modes to fill padding. It could be "valid", "same", or "pad". Default: "valid".
- Please refer to :class:`mindspore.nn.Conv2DTranspose` for more specifications about `pad_mode`.
+ Please refer to :class:`mindspore.nn.Conv2dTranspose` for more specifications about `pad_mode`.
pad (Union[int, tuple[int]]): The pad value to be filled. Default: 0. If `pad` is an integer, the paddings of
top, bottom, left and right are the same, equal to pad. If `pad` is a tuple of four integers, the
padding of top, bottom, left and right equal to pad[0], pad[1], pad[2], and pad[3] correspondingly.
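
A sketch following the Conv2DBackpropInput calling pattern this class inherits; the shapes are illustrative and the three-argument call (dout, weight, input_size) is an assumption:

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> dout = Tensor(np.ones([10, 32, 30, 30]), mindspore.float32)
>>> weight = Tensor(np.ones([32, 32, 3, 3]), mindspore.float32)
>>> conv2d_transpose = ops.Conv2DTranspose(out_channel=32, kernel_size=3)
>>> output = conv2d_transpose(dout, weight, (10, 32, 32, 32))
>>> print(output.shape)
(10, 32, 32, 32)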

View File

@@ -450,10 +450,10 @@ class Partial(Primitive):
Examples:
>>> from mindspore import Tensor
- >>> from mindspore.ops import operations as P
+ >>> import mindspore.ops as ops
>>> def show_input(x, y, z):
... return x, y, z
- >>> partial = P.Partial()
+ >>> partial = ops.Partial()
>>> partial_show_input = partial(show_input, Tensor(1))
>>> output1 = partial_show_input(Tensor(2), Tensor(3))
>>> print(output1)

View File

@@ -567,7 +567,7 @@ class SparseTensorDenseAdd(Primitive):
Examples:
>>> from mindspore import Tensor
- >>> from mindspore.ops import operations as ops
+ >>> import mindspore.ops as ops
>>> from mindspore.common import dtype as mstype
>>> x1_indices = Tensor([[0, 0], [0, 1]], dtype=mstype.int64)
>>> x1_values = Tensor([1, 1], dtype=mstype.float32)

View File

@@ -94,7 +94,7 @@ class SymbolTree:
An instance of `Node`.
Raises:
- TypeError: If `func` is not FuntionType.
+ TypeError: If `func` is not FunctionType.
TypeError: If `targets` is not `list`.
TypeError: If the type of `targets` is not str.
TypeError: If arg in `args` is not ParamType.

View File

@@ -675,9 +675,10 @@ def load_checkpoint(ckpt_file_name, net=None, strict_load=False, filter_prefix=N
Returns:
Dict, key is parameter name, value is a Parameter or string. When the `append_dict` parameter of
- :func:`mindspore.save_checkpoint` and the `append_info` parameter of :class:`CheckpointConfig` are used to
- save the checkpoint, `append_dict` and `append_info` are dict types, and their value are string, then the
- return value obtained by loading checkpoint is string, and in other cases the return value is Parameter.
+ :func:`mindspore.save_checkpoint` and the `append_info` parameter of :class:`mindspore.train.CheckpointConfig`
+ are used to save the checkpoint, `append_dict` and `append_info` are dict types, and their value are string,
+ then the return value obtained by loading checkpoint is string, and in other cases the return value is
+ Parameter.
Raises:
ValueError: Checkpoint file's format is incorrect.
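
A short sketch of the return value described above, assuming `net` is an already-constructed Cell (the checkpoint path is hypothetical):

>>> import mindspore as ms
>>> param_dict = ms.load_checkpoint("./checkpoint/net.ckpt")
>>> # Values are Parameters, or strings for entries saved via append_dict/append_info.
>>> not_loaded = ms.load_param_into_net(net, param_dict)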