forked from mindspore-Ecosystem/mindspore
!44128 modify doc
Merge pull request !44128 from xumengjuan1/code_docs_x12
commit acbdcd1722
@@ -44,7 +44,7 @@ mindspore.dataset.Graph

     Get all edges of the graph.

     Parameters:
-        - **edge_type** (str) - Type of the edges. When `edge_type` is not specified at Graph initialization, the default value is '0'. See `Loading Graph Dataset <https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset/augment_graph_data.html>`_ for details.
+        - **edge_type** (str) - Type of the edges. When `edge_type` is not specified at Graph initialization, the default value is '0'.

     Returns:
         numpy.ndarray, array containing the edges.
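A minimal usage sketch (not part of the diff): the in-memory ``Graph`` construction and the two-row edge array below are assumptions for illustration; check the API reference for the exact constructor signature. ::

    import numpy as np
    import mindspore.dataset as ds

    # Each column of `edges` is one (src, dst) pair of node ids.
    edges = np.array([[0, 1, 2],
                      [1, 2, 0]], dtype=np.int32)
    graph = ds.Graph(edges)

    # edge_type was not specified at Graph initialization, so '0' applies.
    all_edges = graph.get_all_edges(edge_type='0')  # numpy.ndarray of edge ids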
@@ -157,7 +157,7 @@ mindspore.dataset.Graph

     Get all nodes in the graph.

     Parameters:
-        - **node_type** (str) - Type of the nodes. When `node_type` is not specified at Graph initialization, the default value is '0'. See `Loading Graph Dataset <https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset/augment_graph_data.html>`_ for details.
+        - **node_type** (str) - Type of the nodes. When `node_type` is not specified at Graph initialization, the default value is '0'.

     Returns:
         numpy.ndarray, array containing the nodes.
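A companion sketch for ``get_all_nodes``, reusing the graph above; the ``node_feat`` argument name and the ``get_node_feature`` follow-up are assumptions based on the surrounding API, not verified against this commit. ::

    # One feature row per node; "feat" is a hypothetical feature name.
    node_feat = {"feat": np.array([[1.0], [2.0], [3.0]], dtype=np.float32)}
    graph = ds.Graph(edges, node_feat=node_feat)

    nodes = graph.get_all_nodes(node_type='0')       # default type '0' again
    feats = graph.get_node_feature(nodes, ["feat"])  # look up features by node id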
@@ -6,9 +6,6 @@ mindspore.dataset.GraphData

     Read the graph dataset used for GNN training from a shared file or database.
-    Supports reading the graph datasets Cora, Citeseer and PubMed.
-
-    For how to load a source dataset into MindSpore, see `Loading Graph Dataset <https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset/augment_graph_data.html>`_.

     Parameters:
         - **dataset_file** (str) - Path of the dataset file.
         - **num_parallel_workers** (int, optional) - Number of worker threads that read the data. Default: None, which uses the number of threads configured in mindspore.dataset.config.
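A hedged construction sketch; the file name below is a placeholder, since ``GraphData`` expects a dataset that has already been converted to MindRecord format. ::

    import mindspore.dataset as ds

    # "cora.mindrecord" is hypothetical; point this at your converted dataset.
    data = ds.GraphData("cora.mindrecord", num_parallel_workers=4)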
@@ -36,7 +33,7 @@ mindspore.dataset.GraphData

     Get all edges of the graph.

     Parameters:
-        - **edge_type** (int) - Type of the edges. The value of `edge_type` must be assigned when the dataset is converted to MindRecord format, and is then used accordingly in this API. See `Loading Graph Dataset <https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset/augment_graph_data.html>`_ for details.
+        - **edge_type** (int) - Type of the edges. The value of `edge_type` must be assigned when the dataset is converted to MindRecord format, and is then used accordingly in this API.

     Returns:
         numpy.ndarray, array containing the edges.
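Continuing the construction sketch above; the edge type value 0 is an assumed placeholder, since the real value is fixed at MindRecord conversion time. ::

    # `edge_type` is an int for GraphData and must match the type assigned
    # during MindRecord conversion (0 here is only a placeholder).
    edges = data.get_all_edges(edge_type=0)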
@@ -149,7 +146,7 @@ mindspore.dataset.GraphData

     Get all nodes in the graph.

     Parameters:
-        - **node_type** (int) - Type of the nodes. The value of `node_type` must be assigned when the dataset is converted to MindRecord format, and is then used accordingly in this API. See `Loading Graph Dataset <https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset/augment_graph_data.html>`_ for details.
+        - **node_type** (int) - Type of the nodes. The value of `node_type` must be assigned when the dataset is converted to MindRecord format, and is then used accordingly in this API.

     Returns:
         numpy.ndarray, array containing the nodes.
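The node-side counterpart, again continuing the sketch; the node type 1 and the ``get_all_neighbors`` follow-up are illustrative assumptions. ::

    nodes = data.get_all_nodes(node_type=1)
    # A typical next step in GNN sampling: fetch neighbors of those nodes by type.
    neighbors = data.get_all_neighbors(nodes, neighbor_type=1)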
@@ -2,4 +2,4 @@

     Since this optimizer has no `loss_scale` argument, `loss_scale` must be handled by other means.
-    For how to handle `loss_scale` correctly, see `LossScale <https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html>`_.
+    For how to handle `loss_scale` correctly, see `LossScale <https://www.mindspore.cn/tutorials/zh-CN/master/advanced/mixed_precision.html>`_.
@@ -15,9 +15,6 @@
 """
 This file contains basic classes that help users do flexible dataset loading.
 You can define your own dataset loading class, and use GeneratorDataset to help load data.
-You can refer to the
-`tutorial <https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset/custom.html>`_
-to help define your dataset loading.
 After declaring the dataset object, you can further apply dataset operations
 (e.g. filter, skip, concat, map, batch) on it.
 """
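A runnable sketch of the pattern this docstring describes; the class, sizes, and column names are illustrative only. ::

    import numpy as np
    import mindspore.dataset as ds

    class MyDataset:
        """A user-defined random-access source: __getitem__/__len__ suffice."""
        def __init__(self, n=10):
            self._data = np.random.randn(n, 3).astype(np.float32)
            self._label = np.random.randint(0, 2, (n,)).astype(np.int32)

        def __getitem__(self, index):
            return self._data[index], self._label[index]

        def __len__(self):
            return len(self._data)

    dataset = ds.GeneratorDataset(MyDataset(), column_names=["data", "label"])
    # Further dataset operations can then be applied, e.g. map/batch:
    dataset = dataset.batch(4)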
@@ -78,10 +78,6 @@ class GraphData:
     Reads the graph dataset used for GNN training from the shared file and database.
-    Support reading graph datasets like Cora, Citeseer and PubMed.
-
-    About how to load raw graph dataset into MindSpore please
-    refer to `Loading Graph Dataset <https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset/augment_graph_data.html>`_.

     Args:
         dataset_file (str): One of file names in the dataset.
         num_parallel_workers (int, optional): Number of workers to process the dataset in parallel
@@ -853,7 +853,7 @@ class AdamWeightDecay(Optimizer):
     There is usually no connection between an optimizer and mixed precision. But when `FixedLossScaleManager` is used
     and `drop_overflow_update` in `FixedLossScaleManager` is set to False, the optimizer needs to set the `loss_scale`.
     As this optimizer has no argument of `loss_scale`, `loss_scale` needs to be processed by other means; refer to the
-    document `LossScale <https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html>`_ to
+    document `LossScale <https://www.mindspore.cn/tutorials/zh-CN/master/advanced/mixed_precision.html>`_ to
     process `loss_scale` correctly.

     If parameters are not grouped, the `weight_decay` in the optimizer will be applied to the network parameters without
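A hedged sketch of one way to wire this up for an optimizer such as ``AdamWeightDecay`` (the toy network is an assumption, not a definitive recipe): since the optimizer cannot unscale gradients itself, keep overflow handling in the loss-scale manager by setting ``drop_overflow_update=True``. ::

    from mindspore import nn, Model, FixedLossScaleManager

    net = nn.Dense(16, 10)  # toy network for illustration
    loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True)
    # AdamWeightDecay has no `loss_scale` argument...
    optimizer = nn.AdamWeightDecay(net.trainable_params(), learning_rate=1e-3)

    # ...so let the manager keep overflow handling: with drop_overflow_update=True
    # an overflowed step is skipped rather than relying on the optimizer to unscale.
    manager = FixedLossScaleManager(loss_scale=1024.0, drop_overflow_update=True)
    model = Model(net, loss_fn=loss_fn, optimizer=optimizer,
                  amp_level="O2", loss_scale_manager=manager)
    # model.train(...) would then run mixed-precision training with this wiring.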
@@ -133,7 +133,7 @@ class Lamb(Optimizer):
     There is usually no connection between an optimizer and mixed precision. But when `FixedLossScaleManager` is used
     and `drop_overflow_update` in `FixedLossScaleManager` is set to False, the optimizer needs to set the `loss_scale`.
     As this optimizer has no argument of `loss_scale`, `loss_scale` needs to be processed by other means. Refer to the
-    document `LossScale <https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html>`_ to
+    document `LossScale <https://www.mindspore.cn/tutorials/zh-CN/master/advanced/mixed_precision.html>`_ to
     process `loss_scale` correctly.

     If parameters are not grouped, the `weight_decay` in the optimizer will be applied to the network parameters without