Merge pull request !44128 from xumengjuan1/code_docs_x12
i-robot 2022-10-20 09:22:09 +00:00 committed by Gitee
commit acbdcd1722
No known key found for this signature in database
GPG Key ID: 173E9B9CA92EEF8F
7 changed files with 7 additions and 17 deletions


@@ -44,7 +44,7 @@ mindspore.dataset.Graph
Get all edges in the graph.

Parameters:
-    - **edge_type** (str) - Type of the edges. If `edge_type` is not specified when the Graph is initialized, the default is '0'. For details, see `Loading Graph Dataset <https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset/augment_graph_data.html>`_.
+    - **edge_type** (str) - Type of the edges. If `edge_type` is not specified when the Graph is initialized, the default is '0'.

Returns:
    numpy.ndarray, array containing the edges.
@@ -157,7 +157,7 @@ mindspore.dataset.Graph
Get all nodes in the graph.

Parameters:
-    - **node_type** (str) - Type of the nodes. If `node_type` is not specified when the Graph is initialized, the default is '0'. For details, see `Loading Graph Dataset <https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset/augment_graph_data.html>`_.
+    - **node_type** (str) - Type of the nodes. If `node_type` is not specified when the Graph is initialized, the default is '0'.

Returns:
    numpy.ndarray, array containing the nodes.
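A minimal usage sketch of the two methods documented above, assuming the in-memory `Graph` construction this file describes; the tiny edge array is invented for illustration::

    import numpy as np
    import mindspore.dataset as ds

    # Hypothetical 3-node graph; edges are given as a [2, num_edges] array.
    edges = np.array([[0, 1, 2], [1, 2, 0]], dtype=np.int32)
    g = ds.Graph(edges)  # no edge_type/node_type given, so type '0' applies

    print(g.get_all_edges(edge_type='0'))  # numpy.ndarray of edge ids
    print(g.get_all_nodes(node_type='0'))  # numpy.ndarray of node ids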


@@ -6,9 +6,6 @@ mindspore.dataset.GraphData
Reads the graph dataset used for GNN training from a shared file or database.
Supports reading the graph datasets Cora, Citeseer, and PubMed.
-
-For details about how to load the raw graph dataset into MindSpore, see `Loading Graph Dataset <https://www.mindspore.cn/tutorials/zh-CN/
-master/advanced/dataset/augment_graph_data.html>`_.

Parameters:
    - **dataset_file** (str) - Path of the dataset file.
    - **num_parallel_workers** (int, optional) - Number of worker threads used to read the data. Default: None, which uses the number of threads set in mindspore.dataset.config.
@@ -36,7 +33,7 @@ mindspore.dataset.GraphData
Get all edges in the graph.

Parameters:
-    - **edge_type** (int) - Type of the edges. When the dataset is converted to MindRecord format, the value of `edge_type` needs to be specified and then used accordingly in this API. For details, see `Loading Graph Dataset <https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset/augment_graph_data.html>`_.
+    - **edge_type** (int) - Type of the edges. When the dataset is converted to MindRecord format, the value of `edge_type` needs to be specified and then used accordingly in this API.

Returns:
    numpy.ndarray, array containing the edges.
@@ -149,7 +146,7 @@ mindspore.dataset.GraphData
Get all nodes in the graph.

Parameters:
-    - **node_type** (int) - Type of the nodes. When the dataset is converted to MindRecord format, the value of `node_type` needs to be specified and then used accordingly in this API. For details, see `Loading Graph Dataset <https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset/augment_graph_data.html>`_.
+    - **node_type** (int) - Type of the nodes. When the dataset is converted to MindRecord format, the value of `node_type` needs to be specified and then used accordingly in this API.

Returns:
    numpy.ndarray, array containing the nodes.
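The same calls on `GraphData`, as a sketch; the MindRecord file name and the type value 0 are placeholders that depend on how the source dataset was converted::

    import mindspore.dataset as ds

    # "cora.mindrecord" is a placeholder path to an already-converted dataset.
    graph = ds.GraphData("cora.mindrecord", num_parallel_workers=4)
    edges = graph.get_all_edges(edge_type=0)  # int type id fixed at conversion time
    nodes = graph.get_all_nodes(node_type=0)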


@@ -2,4 +2,4 @@
Since this optimizer has no `loss_scale` argument, `loss_scale` needs to be handled by other means.

-For details on how to handle `loss_scale` correctly, see `LossScale <https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html>`_.
+For details on how to handle `loss_scale` correctly, see `LossScale <https://www.mindspore.cn/tutorials/zh-CN/master/advanced/mixed_precision.html>`_.
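One way to satisfy this, as a sketch with a placeholder network and loss: keep the scaling inside `FixedLossScaleManager` by setting `drop_overflow_update=True`, so nothing has to be delegated to an optimizer that cannot accept a `loss_scale`::

    import mindspore as ms
    from mindspore import nn

    net = nn.Dense(16, 1)   # placeholder network
    loss_fn = nn.MSELoss()  # placeholder loss
    opt = nn.AdamWeightDecay(net.trainable_params(), learning_rate=1e-3)

    # AdamWeightDecay has no `loss_scale` argument, so the manager owns the
    # scaling: with drop_overflow_update=True the parameter update is skipped
    # on overflow instead of asking the optimizer to unscale gradients.
    manager = ms.FixedLossScaleManager(loss_scale=1024.0, drop_overflow_update=True)
    model = ms.Model(net, loss_fn, opt, loss_scale_manager=manager)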


@@ -15,9 +15,6 @@
"""
This file contains basic classes that help users do flexible dataset loading.
You can define your own dataset loading class, and use GeneratorDataset to help load data.
-You can refer to the
-`tutorial <https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset/custom.html>`_
-to help define your dataset loading.
After declaring the dataset object, you can further apply dataset operations
(e.g. filter, skip, concat, map, batch) on it.
"""


@@ -78,10 +78,6 @@ class GraphData:
Reads the graph dataset used for GNN training from a shared file or database.
Supports reading graph datasets like Cora, Citeseer and PubMed.
-
-About how to load raw graph dataset into MindSpore please
-refer to `Loading Graph Dataset <https://www.mindspore.cn/tutorials/zh-CN/
-master/advanced/dataset/augment_graph_data.html>`_.

Args:
    dataset_file (str): One of the file names in the dataset.
    num_parallel_workers (int, optional): Number of workers to process the dataset in parallel


@@ -853,7 +853,7 @@ class AdamWeightDecay(Optimizer):
There is usually no connection between an optimizer and mixed precision. But when `FixedLossScaleManager` is used
and `drop_overflow_update` in `FixedLossScaleManager` is set to False, the optimizer needs to set the `loss_scale`.
As this optimizer has no argument of `loss_scale`, `loss_scale` needs to be processed by other means; refer to the
-document `LossScale <https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html>`_ to
+document `LossScale <https://www.mindspore.cn/tutorials/zh-CN/master/advanced/mixed_precision.html>`_ to
process `loss_scale` correctly.
If parameters are not grouped, the `weight_decay` in optimizer will be applied on the network parameters without


@@ -133,7 +133,7 @@ class Lamb(Optimizer):
There is usually no connection between an optimizer and mixed precision. But when `FixedLossScaleManager` is used
and `drop_overflow_update` in `FixedLossScaleManager` is set to False, the optimizer needs to set the `loss_scale`.
As this optimizer has no argument of `loss_scale`, `loss_scale` needs to be processed by other means. Refer to the
-document `LossScale <https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html>`_ to
+document `LossScale <https://www.mindspore.cn/tutorials/zh-CN/master/advanced/mixed_precision.html>`_ to
process `loss_scale` correctly.
If parameters are not grouped, the `weight_decay` in optimizer will be applied on the network parameters without