modify format

This commit is contained in:
huodagu 2022-07-19 11:16:19 +08:00
parent a22dbcf9a3
commit 17e78004ae
12 changed files with 55 additions and 51 deletions

View File

@@ -8,12 +8,12 @@ mindspore.dataset.Graph
This API supports initializing a graph from NumPy arrays that represent the nodes, the edges and their features. If `working_mode` is the default `local` mode, there is no need to specify input parameters such as `working_mode`, `hostname`, `port`, `num_client` and `auto_shutdown`.
Parameters:
- **edges**(Union[list, numpy.ndarray]): Edges in COO format, with shape [2, num_edges].
- **node_feat**(dict, optional): Features of the nodes. The input data format should be a dict, where the key is the feature type, represented by a string such as 'weight', and the value should be a NumPy array of shape [num_nodes, num_node_features].
- **edge_feat**(dict, optional): Features of the edges. The input data format should be a dict, where the key is the feature type, represented by a string such as 'weight', and the value should be a NumPy array of shape [num_edges, num_edge_features].
- **graph_feat**(dict, optional): Additional features that cannot be assigned to `node_feat` or `edge_feat`. The input data format should be a dict, where the key is the feature type, represented by a string; the value should be a NumPy array whose shape is not restricted.
- **node_type**(Union[list, numpy.ndarray], optional): Types of the nodes. Each element is a string that represents the type of one node. If not provided, the default type of each node is '0'.
- **edge_type**(Union[list, numpy.ndarray], optional): Types of the edges. Each element is a string that represents the type of one edge. If not provided, the default type of each edge is '0'.
- **edges** (Union[list, numpy.ndarray]) - Edges in COO format, with shape [2, num_edges].
- **node_feat** (dict, optional) - Features of the nodes. The input data format should be a dict, where the key is the feature type, represented by a string such as 'weight', and the value should be a NumPy array of shape [num_nodes, num_node_features].
- **edge_feat** (dict, optional) - Features of the edges. The input data format should be a dict, where the key is the feature type, represented by a string such as 'weight', and the value should be a NumPy array of shape [num_edges, num_edge_features].
- **graph_feat** (dict, optional) - Additional features that cannot be assigned to `node_feat` or `edge_feat`. The input data format should be a dict, where the key is the feature type, represented by a string; the value should be a NumPy array whose shape is not restricted.
- **node_type** (Union[list, numpy.ndarray], optional) - Types of the nodes. Each element is a string that represents the type of one node. If not provided, the default type of each node is '0'.
- **edge_type** (Union[list, numpy.ndarray], optional) - Types of the edges. Each element is a string that represents the type of one edge. If not provided, the default type of each edge is '0'.
- **num_parallel_workers** (int, optional) - Number of worker threads used to read the data. Default: None, which uses the number of threads configured in mindspore.dataset.config.
- **working_mode** (str, optional) - Sets the working mode. Currently 'local'/'client'/'server' are supported. Default: 'local'.
@@ -38,7 +38,7 @@ mindspore.dataset.Graph
Get all edges of the graph.
Parameters:
- **edge_type** (str) - Specifies the type of edge. The default value is '0' when`edge_type`is not specified at Graph initialization. For details, see `Loading Graph Dataset <https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset/augment_graph_data.html>`_.
- **edge_type** (str) - Specifies the type of edge. The default value is '0' when `edge_type` is not specified at Graph initialization. For details, see `Loading Graph Dataset <https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset/augment_graph_data.html>`_.
Returns:
numpy.ndarray, an array containing the edges.
@@ -151,7 +151,7 @@ mindspore.dataset.Graph
Get all nodes in the graph.
Parameters:
- **node_type** (str) - Specifies the type of node. The default value is '0' when`edge_type`is not specified at Graph initialization. For details, see `Loading Graph Dataset <https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset/augment_graph_data.html>`_.
- **node_type** (str) - Specifies the type of node. The default value is '0' when `edge_type` is not specified at Graph initialization. For details, see `Loading Graph Dataset <https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset/augment_graph_data.html>`_.
Returns:
numpy.ndarray, an array containing the nodes.
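Based only on the signature and methods documented above, a minimal usage sketch might look as follows; the feature name 'weight' and the concrete array values are illustrative assumptions, not part of the API:

.. code-block:: python

    import numpy as np
    from mindspore.dataset import Graph

    # Two directed edges (0 -> 1, 1 -> 2) in COO format, shape [2, num_edges].
    edges = np.array([[0, 1], [1, 2]], dtype=np.int32)

    # One 'weight' feature per node, shape [num_nodes, num_node_features].
    node_feat = {"weight": np.array([[1.0], [2.0], [3.0]], dtype=np.float32)}

    # Default 'local' working mode, so hostname/port/num_client/auto_shutdown are omitted.
    graph = Graph(edges=edges, node_feat=node_feat)

    print(graph.get_all_nodes(node_type="0"))  # nodes of the default type '0'
    print(graph.get_all_edges(edge_type="0"))  # edges of the default type '0'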

View File

@@ -25,7 +25,7 @@ mindspore.nn.AdaptiveAvgPool2d
**Inputs:**
- **input_x** (Tensor) - The input of AdaptiveAvgPool2d, a 3-D or 4-D Tensor, with data type of float16, float32 or float64.
- **x** (Tensor) - The input of AdaptiveAvgPool2d, a 3-D or 4-D Tensor, with data type of float16, float32 or float64.
**Outputs:**
@@ -34,6 +34,6 @@ mindspore.nn.AdaptiveAvgPool2d
**Raises:**
- **ValueError** - If `output_size` is a tuple and the length of `output_size` is not 2.
- **TypeError** - If `input_x` is not a Tensor.
- **TypeError** - If the data type of `input_x` is not float16, float32 or float64.
- **ValueError** - If the dimension of `input_x` is less than or equal to the dimension of `output_size`.
- **TypeError** - If `x` is not a Tensor.
- **TypeError** - If the data type of `x` is not float16, float32 or float64.
- **ValueError** - If the dimension of `x` is less than or equal to the dimension of `output_size`.
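A short sketch of the renamed input in use, assuming the nn.AdaptiveAvgPool2d interface described above (the Python docstring further down lists ``GPU`` as the supported platform); the shapes and values are illustrative:

.. code-block:: python

    import numpy as np
    import mindspore as ms
    from mindspore import nn, Tensor

    # 4-D input (N, C, H, W) with a supported data type (float32 here).
    x = Tensor(np.random.rand(1, 3, 8, 8), ms.float32)

    # Adaptively average-pool the 8x8 spatial dimensions down to 2x2.
    pool = nn.AdaptiveAvgPool2d(output_size=(2, 2))
    y = pool(x)
    print(y.shape)  # (1, 3, 2, 2)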

View File

@@ -7,7 +7,7 @@ mindspore.nn.ReflectionPad2d
**Parameters:**
- **padding** (union[int, tuple]) - Padding size. If the input is an int, padding of the same size is applied to all boundaries; if it is a tuple, the order is:math:`(pad_{left}, pad_{right}, pad_{up}, pad_{down})`.
- **padding** (union[int, tuple]) - Padding size. If the input is an int, padding of the same size is applied to all boundaries; if it is a tuple, the order is :math:`(pad_{left}, pad_{right}, pad_{up}, pad_{down})`.
**Inputs:**
@@ -15,7 +15,7 @@ mindspore.nn.ReflectionPad2d
**Outputs:**
Tensor, the padded Tensor, with shape :math:`(C, H_{out}, W_{out})` or :math:`(N, C, H_{out}, W_{out})`, where :math:`H_{out} = H_{in} + pad_{up} + pad_{down}`,:math:`W_{out} = W_{in} + pad\_left + pad\_right`.
Tensor, the padded Tensor, with shape :math:`(C, H_{out}, W_{out})` or :math:`(N, C, H_{out}, W_{out})`, where :math:`H_{out} = H_{in} + pad_{up} + pad_{down}`, :math:`W_{out} = W_{in} + pad_{left} + pad_{right}`.
**Raises:**
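A minimal sketch of the padding order and output shape described above, assuming the nn.ReflectionPad2d interface; the concrete sizes are illustrative:

.. code-block:: python

    import numpy as np
    import mindspore as ms
    from mindspore import nn, Tensor

    # Input of shape (N, C, H_in, W_in) = (1, 1, 3, 3).
    x = Tensor(np.arange(9, dtype=np.float32).reshape(1, 1, 3, 3))

    # Padding order: (pad_left, pad_right, pad_up, pad_down).
    pad = nn.ReflectionPad2d((1, 1, 1, 1))
    y = pad(x)

    # H_out = 3 + 1 + 1 and W_out = 3 + 1 + 1, so the result is (1, 1, 5, 5).
    print(y.shape)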

View File

@@ -1,7 +1,7 @@
mindspore.ops.BesselI0e
========================
.. py:class:: mindspore.ops.BesselI0e(x)
.. py:class:: mindspore.ops.BesselI0e()
Computes the BesselI0e function value of the input data element-wise.
@@ -12,11 +12,11 @@ mindspore.ops.BesselI0e
where bessel_i0 is the zeroth-order Bessel function of the first kind.
**Parameters**
**Inputs**
- **x** (Tensor) - Tensor of arbitrary dimensions. The data type should be float16, float32 or float64.
**Returns**
**Outputs**
Tensor, with the same shape and data type as `x`.
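A usage sketch reflecting the corrected no-argument constructor above; the input values are illustrative:

.. code-block:: python

    import numpy as np
    import mindspore as ms
    from mindspore import ops, Tensor

    x = Tensor(np.array([0.24, 0.83, 0.31, 0.09]), ms.float32)

    # The operator is constructed without arguments and then applied to the input,
    # matching the corrected signature above.
    bessel_i0e = ops.BesselI0e()
    print(bessel_i0e(x))  # element-wise BesselI0e values, same shape and dtype as x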

View File

@@ -1,7 +1,7 @@
mindspore.ops.BesselI1e
========================
.. py:class:: mindspore.ops.BesselI1e(x)
.. py:class:: mindspore.ops.BesselI1e()
Computes the BesselI1e function value of the input data element-wise.
@@ -12,11 +12,11 @@ mindspore.ops.BesselI1e
where bessel_i1 is the first-order Bessel function of the first kind.
**Parameters**
**Inputs**
- **x** (Tensor) - Tensor of arbitrary dimensions. The data type should be float16, float32 or float64.
**Returns**
**Outputs**
Tensor, with the same shape and data type as `x`.

View File

@@ -1,7 +1,7 @@
mindspore.ops.stack
====================
.. py:function:: mindspore.ops.stack(input_x, axis)
.. py:function:: mindspore.ops.stack(input_x, axis=0)
Stacks a sequence of input Tensors along the specified axis.
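A small sketch of the default axis=0 behaviour documented above; the values are illustrative:

.. code-block:: python

    import numpy as np
    import mindspore as ms
    from mindspore import ops, Tensor

    a = Tensor(np.array([1, 2, 3]), ms.float32)
    b = Tensor(np.array([4, 5, 6]), ms.float32)

    # With the default axis=0, two tensors of shape (3,) are stacked into shape (2, 3).
    out = ops.stack([a, b])
    print(out.shape)  # (2, 3)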

View File

@@ -1,7 +1,7 @@
maxdspore.ops.tensor_scatter_max
mindspore.ops.tensor_scatter_max
================================
.. py:function:: maxdspore.ops.tensor_scatter_max(input_x, indices, updates)
.. py:function:: mindspore.ops.tensor_scatter_max(input_x, indices, updates)
Computes the output according to the specified update values and input indices by taking the maximum value, and returns the result as a Tensor.
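A sketch of the maximum-update behaviour under the corrected module name; the concrete values are illustrative:

.. code-block:: python

    import numpy as np
    import mindspore as ms
    from mindspore import ops, Tensor

    input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), ms.float32)
    indices = Tensor(np.array([[0, 0], [0, 0]]), ms.int32)
    updates = Tensor(np.array([1.0, 2.2]), ms.float32)

    # Both updates target position [0, 0]; the maximum of the original value and
    # the candidates, max(-0.1, 1.0, 2.2) = 2.2, is written back.
    out = ops.tensor_scatter_max(input_x, indices, updates)
    print(out)  # [[2.2, 0.3, 3.6], [0.4, 0.5, -3.2]]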

View File

@@ -1,7 +1,7 @@
mindspore.ops.unstack
=======================
.. py:function:: mindspore.ops.unstack(axis=0)
.. py:function:: mindspore.ops.unstack(input_x, axis=0)
Unstacks the input matrix along the specified axis.
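The inverse of ops.stack; a sketch matching the corrected signature above, with illustrative values:

.. code-block:: python

    import numpy as np
    import mindspore as ms
    from mindspore import ops, Tensor

    x = Tensor(np.array([[1, 1, 1, 1], [2, 2, 2, 2]]), ms.float32)

    # Splitting along axis 0 yields a tuple of two tensors of shape (4,).
    outputs = ops.unstack(x, axis=0)
    print(len(outputs), outputs[0].shape)  # 2 (4,)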

View File

@@ -541,9 +541,9 @@ class AdaptiveAvgPool2d(Cell):
Raises:
ValueError: If `output_size` is a tuple and the length of `output_size` is not 2.
TypeError: If `input_x` is not a Tensor.
TypeError: If dtype of `input_x` is not float16, float32 or float64.
ValueError: If the dimension of `input_x` is less than or equal to the dimension of `output_size`.
TypeError: If `x` is not a Tensor.
TypeError: If dtype of `x` is not float16, float32 or float64.
ValueError: If the dimension of `x` is less than or equal to the dimension of `output_size`.
Supported Platforms:
``GPU``

View File

@@ -345,17 +345,17 @@ class FeedForward(Cell):
where the :math:`W_1, W_2, b_1` and :math:`b_2` are trainable parameters.
Args:
hidden_size: (int): The dimension of the inputs.
ffn_hidden_size: (int): The intermediate hidden size.
dropout_rate: (float): The dropout rate for the second linear's output.
hidden_act: (str): The activation of the internal feedforward layer. Supports 'relu',
hidden_size (int): The dimension of the inputs.
ffn_hidden_size (int): The intermediate hidden size.
dropout_rate (float): The dropout rate for the second linear's output.
hidden_act (str): The activation of the internal feedforward layer. Supports 'relu',
'relu6', 'tanh', 'gelu', 'fast_gelu', 'elu', 'sigmoid', 'prelu', 'leakyrelu', 'hswish',
'hsigmoid', 'logsigmoid' and so on. Default: gelu.
expert_num: (int): The number of experts used in Linear. For the case expert_num > 1, BatchMatMul is used
expert_num (int): The number of experts used in Linear. For the case expert_num > 1, BatchMatMul is used
and the first dimension in BatchMatMul indicate expert_num. Default: 1.
expert_group_size (int): The number of tokens in each data parallel group. Default: None. This parameter is
effective only when in AUTO_PARALLEL mode, and NOT SHARDING_PROPAGATION.
param_init_type: (dtype.Number): The parameter initialization type. Should be mstype.float32 or
param_init_type (dtype.Number): The parameter initialization type. Should be mstype.float32 or
mstype.float16. Default: mstype.float32.
parallel_config (OpParallelConfig, MoEParallelConfig): The config of parallel setting, see
`OpParallelConfig` or `MoEParallelConfig`. When MoE is applied, MoEParallelConfig is effective,
@@ -626,11 +626,11 @@ class VocabEmbedding(Cell):
as the shard is designed for 2d inputs.
Args:
vocab_size: (int): Size of the dictionary of embeddings.
embedding_size: (int): The size of each embedding vector.
parallel_config(EmbeddingOpParallelConfig): The parallel config of network. Default
vocab_size (int): Size of the dictionary of embeddings.
embedding_size (int): The size of each embedding vector.
parallel_config (EmbeddingOpParallelConfig): The parallel config of network. Default
`default_embedding_parallel_config`, an instance of `EmbeddingOpParallelConfig` with default args.
param_init: (Union[Tensor, str, Initializer, numbers.Number]): Initializer for the embedding_table.
param_init (Union[Tensor, str, Initializer, numbers.Number]): Initializer for the embedding_table.
Refer to class `initializer` for the values of string when a string
is specified. Default: 'normal'.
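A sketch of the two cells whose argument descriptions were reformatted above; the import path mindspore.nn.transformer and the concrete sizes are assumptions for illustration only:

.. code-block:: python

    import numpy as np
    import mindspore as ms
    from mindspore import Tensor
    # Assumed module path for FeedForward and VocabEmbedding.
    from mindspore.nn.transformer import FeedForward, VocabEmbedding

    # Embed 2-D token ids, then run the position-wise feedforward block.
    embedding = VocabEmbedding(vocab_size=30, embedding_size=16)
    ffn = FeedForward(hidden_size=16, ffn_hidden_size=64, dropout_rate=0.1)

    ids = Tensor(np.random.randint(0, 30, size=(2, 8)), ms.int32)
    emb, table = embedding(ids)  # embeddings plus the embedding table itself
    out = ffn(emb)               # same hidden size as the input: (2, 8, 16)
    print(emb.shape, out.shape)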

View File

@@ -593,7 +593,7 @@ def unique_with_pad(x, pad_num):
- y (Tensor) - The unique elements filled with pad_num, the shape and data type same as `x`.
- idx (Tensor) - The index of each value of `x` in the unique output `y`, the shape and data type same as `x`.
Raises:x
Raises:
TypeError: If dtype of `x` is neither int32 nor int64.
ValueError: If length of shape of `x` is not equal to 1.
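A sketch of the padded-unique behaviour described above, with illustrative values:

.. code-block:: python

    import numpy as np
    import mindspore as ms
    from mindspore import ops, Tensor

    # Duplicates are removed and the result is padded with pad_num so that
    # `y` and `idx` keep the same 1-D shape as `x`.
    x = Tensor(np.array([1, 1, 2, 2, 3, 3, 4, 5]), ms.int32)
    y, idx = ops.unique_with_pad(x, pad_num=0)
    print(y)    # [1 2 3 4 5 0 0 0]
    print(idx)  # [0 0 1 1 2 2 3 4]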
@@ -1151,7 +1151,7 @@ def unstack(input_x, axis=0):
This is the opposite of pack.
Args:
input_x (Tensor) - The shape is :math:`(x_1, x_2, ..., x_R)`.
input_x (Tensor): The shape is :math:`(x_1, x_2, ..., x_R)`.
A tensor to be unstacked and the rank of the tensor must be greater than 0.
axis (int): Dimension along which to unpack. Default: 0.
Negative values wrap around. The range is [-R, R).

View File

@@ -1754,8 +1754,9 @@ def ctc_loss(log_probs, targets, input_lengths, target_lengths, blank=0, reducti
zero_infinity (bool): Whether to set infinite loss and correlation gradient to zero. Default: False.
Returns:
neg_log_likelihood (Tensor): A loss value which is differentiable with respect to each input node.
log_alpha (Tensor): The probability of possible trace of input to target.
neg_log_likelihood (Tensor), A loss value which is differentiable with respect to each input node.
log_alpha (Tensor), The probability of possible trace of input to target.
Raises:
TypeError: If `zero_infinity` is not a bool or `reduction` is not a string.
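Building only on the signature and return values listed above, a sketch of a forward call; the shapes and random values are illustrative:

.. code-block:: python

    import numpy as np
    import mindspore as ms
    from mindspore import ops, Tensor

    T, N, C, S = 5, 2, 4, 3  # input length, batch size, classes (incl. blank), target length

    # log_probs of shape (T, N, C), normalized to log-probabilities along the class axis.
    probs = np.random.rand(T, N, C).astype(np.float32)
    log_probs = Tensor(np.log(probs / probs.sum(axis=-1, keepdims=True)))

    targets = Tensor(np.random.randint(1, C, size=(N, S)), ms.int32)
    input_lengths = Tensor(np.full(N, T), ms.int32)
    target_lengths = Tensor(np.full(N, S), ms.int32)

    neg_log_likelihood, log_alpha = ops.ctc_loss(
        log_probs, targets, input_lengths, target_lengths,
        blank=0, reduction='mean', zero_infinity=False)
    print(neg_log_likelihood)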
@@ -1806,23 +1807,26 @@ def ctc_greedy_decoder(inputs, sequence_length, merge_repeated=True):
Performs greedy decoding on the logits given in inputs.
Args:
inputs (Tensor) - The input Tensor must be a 3-D tensor whose shape is
inputs (Tensor): The input Tensor must be a 3-D tensor whose shape is
:math:`(max\_time, batch\_size, num\_classes)`. `num_classes` must be `num_labels + 1` classes,
`num_labels` indicates the number of actual labels. Blank labels are reserved.
Default blank label is `num_classes - 1`. Data type must be float32 or float64.
sequence_length (Tensor) - A tensor containing sequence lengths with the shape of :math:`(batch\_size, )`.
sequence_length (Tensor): A tensor containing sequence lengths with the shape of :math:`(batch\_size, )`.
The type must be int32. Each value in the tensor must be equal to or less than `max_time`.
merge_repeated (bool): If true, merge repeated classes in output. Default: True.
Returns:
decoded_indices (Tensor): A tensor with shape of :math:`(total\_decoded\_outputs, 2)`.
Data type is int64.
decoded_values (Tensor): A tensor with shape of :math:`(total\_decoded\_outputs, )`,
it stores the decoded classes. Data type is int64.
decoded_shape (Tensor): A tensor with shape of :math:`(batch\_size, max\_decoded\_length)`.
Data type is int64.
log_probability (Tensor): A tensor with shape of :math:`(batch\_size, 1)`,
containing sequence log-probability, has the same type as `inputs`.
decoded_indices (Tensor), A tensor with shape of :math:`(total\_decoded\_outputs, 2)`.
Data type is int64.
decoded_values (Tensor), A tensor with shape of :math:`(total\_decoded\_outputs, )`,
it stores the decoded classes. Data type is int64.
decoded_shape (Tensor), A tensor with shape of :math:`(batch\_size, max\_decoded\_length)`.
Data type is int64.
log_probability (Tensor), A tensor with shape of :math:`(batch\_size, 1)`,
containing sequence log-probability, has the same type as `inputs`.
Raises:
TypeError: If `merge_repeated` is not a bool.
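A sketch of the four return values described above; the logits are illustrative:

.. code-block:: python

    import numpy as np
    import mindspore as ms
    from mindspore import ops, Tensor

    # inputs: (max_time, batch_size, num_classes); the last class index is the blank label.
    inputs = Tensor(np.array([[[0.6, 0.4, 0.2], [0.8, 0.6, 0.3]],
                              [[0.0, 0.6, 0.0], [0.5, 0.4, 0.5]]]), ms.float32)
    sequence_length = Tensor(np.array([2, 2]), ms.int32)

    decoded_indices, decoded_values, decoded_shape, log_probability = ops.ctc_greedy_decoder(
        inputs, sequence_length, merge_repeated=True)
    print(decoded_indices)   # sparse indices of the decoded classes, int64
    print(decoded_values)    # decoded class ids, int64
    print(decoded_shape)     # (batch_size, max_decoded_length), int64
    print(log_probability)   # (batch_size, 1), same dtype as inputs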