forked from mindspore-Ecosystem/mindspore

!32811 modify link to fit new repo docs structure

Merge pull request !32811 from lvmingfu/code_docs_master11

This commit is contained in: commit b183b17b41

@@ -45,4 +45,4 @@ https://raw.githubusercontent.com/dmlc/web-data/master/tensorflow/models/object_
 https://github.com/google/googletest
 cd
 https://github.com/dmlc/dgl/blob/master/examples/pytorch/gcn/train.py
 from
 https://github.com/FrozenGene/tflite/releases/download/v1.13.1/tflite-1.13.1-py3-none-any.whl
 pip3
 https://github.com/siju-samuel/darknet/blob/master/
 https://github.com/siju-samuel/darknet/blob/master/
@@ -44,7 +44,7 @@ enrichment of the AI software/hardware application ecosystem.

 <img src="https://gitee.com/mindspore/mindspore/raw/master/docs/MindSpore-architecture.png" alt="MindSpore Architecture"/>

-For more details please check out our [Architecture Guide](https://www.mindspore.cn/docs/programming_guide/en/master/architecture.html).
+For more details please check out our [Architecture Guide](https://www.mindspore.cn/tutorials/en/master/beginner/introduction.html).

 ### Automatic Differentiation
@@ -41,7 +41,7 @@ MindSpore provides friendly design and efficient execution, aiming to improve data science…

 <img src="https://gitee.com/mindspore/mindspore/raw/master/docs/MindSpore-architecture-zh.png" alt="MindSpore Architecture"/>

-For more details, please check out our [Overall Architecture](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/architecture.html).
+For more details, please check out our [Overall Architecture](https://www.mindspore.cn/tutorials/zh-CN/master/beginner/introduction.html).

 ### Automatic Differentiation
@@ -23,7 +23,7 @@ mindspore.dataset.CLUEDataset

 - **num_shards** (int, optional) - Number of shards the dataset will be divided into for distributed training. Default: None. When this argument is specified, `num_samples` reflects the maximum number of samples per shard.
 - **shard_id** (int, optional) - The shard ID used for distributed training. Default: None. This argument can only be specified when `num_shards` is also specified.
-- **cache** (DatasetCache, optional) - Single-node data cache service used to speed up dataset processing; see `Single-Node Data Cache <https://www.mindspore.cn/docs/programming_guide/zh-CN/master/cache.html>`_ for details. Default: None, no cache is used.
+- **cache** (DatasetCache, optional) - Single-node data cache service used to speed up dataset processing; see `Single-Node Data Cache <https://www.mindspore.cn/tutorials/experts/zh-CN/master/data_engine/cache.html>`_ for details. Default: None, no cache is used.

 Depending on the given `task` and `usage` configuration, the dataset generates different output columns:
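The `num_shards`/`shard_id` semantics described in these parameters can be illustrated with a minimal pure-Python sketch. This is an assumption for illustration only (MindSpore's actual shard assignment is internal): each shard reads a disjoint subset of sample indices, and `num_samples` then caps the count per shard.

```python
def shard_indices(total, num_shards, shard_id, num_samples=None):
    """Illustrative round-robin sharding: sample i is assigned to shard i % num_shards.

    When num_shards is in effect, num_samples caps the per-shard count, not the total.
    """
    if not 0 <= shard_id < num_shards:
        raise ValueError("shard_id must satisfy 0 <= shard_id < num_shards")
    picked = [i for i in range(total) if i % num_shards == shard_id]
    return picked if num_samples is None else picked[:num_samples]

# 10 samples split over 4 shards: shard 0 reads indices 0, 4, 8
print(shard_indices(10, 4, 0))     # [0, 4, 8]
print(shard_indices(10, 4, 0, 2))  # [0, 4]
```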
@@ -22,7 +22,7 @@

 - **num_shards** (int, optional) - Number of shards the dataset will be divided into for distributed training. Default: None. When this argument is specified, `num_samples` reflects the maximum number of samples per shard.
 - **shard_id** (int, optional) - The shard ID used for distributed training. Default: None. This argument can only be specified when `num_shards` is also specified.
-- **cache** (DatasetCache, optional) - Single-node data cache service used to speed up dataset processing; see `Single-Node Data Cache <https://www.mindspore.cn/docs/programming_guide/zh-CN/master/cache.html>`_ for details. Default: None, no cache is used.
+- **cache** (DatasetCache, optional) - Single-node data cache service used to speed up dataset processing; see `Single-Node Data Cache <https://www.mindspore.cn/tutorials/experts/zh-CN/master/data_engine/cache.html>`_ for details. Default: None, no cache is used.

 **Raises:**
@@ -17,7 +17,7 @@ mindspore.dataset.Caltech256Dataset

 - **sampler** (Sampler, optional) - Sampler used to choose samples from the dataset. Default: None; the table below shows the expected behavior of different configurations.
 - **num_shards** (int, optional) - Number of shards the dataset will be divided into for distributed training. Default: None. When this argument is specified, `num_samples` reflects the maximum number of samples per shard.
 - **shard_id** (int, optional) - The shard ID used for distributed training. Default: None. This argument can only be specified when `num_shards` is also specified.
-- **cache** (DatasetCache, optional) - Single-node data cache service used to speed up dataset processing; see `Single-Node Data Cache <https://www.mindspore.cn/docs/programming_guide/zh-CN/master/cache.html>`_ for details. Default: None, no cache is used.
+- **cache** (DatasetCache, optional) - Single-node data cache service used to speed up dataset processing; see `Single-Node Data Cache <https://www.mindspore.cn/tutorials/experts/zh-CN/master/data_engine/cache.html>`_ for details. Default: None, no cache is used.

 **Raises:**
@@ -19,7 +19,7 @@ mindspore.dataset.CelebADataset

 - **num_samples** (int, optional) - Number of samples to read from the dataset; can be smaller than the total dataset size. Default: None, read all sample images.
 - **num_shards** (int, optional) - Number of shards the dataset will be divided into for distributed training. Default: None. When this argument is specified, `num_samples` reflects the maximum number of samples per shard.
 - **shard_id** (int, optional) - The shard ID used for distributed training. Default: None. This argument can only be specified when `num_shards` is also specified.
-- **cache** (DatasetCache, optional) - Single-node data cache service used to speed up dataset processing; see `Single-Node Data Cache <https://www.mindspore.cn/docs/programming_guide/zh-CN/master/cache.html>`_ for details. Default: None, no cache is used.
+- **cache** (DatasetCache, optional) - Single-node data cache service used to speed up dataset processing; see `Single-Node Data Cache <https://www.mindspore.cn/tutorials/experts/zh-CN/master/data_engine/cache.html>`_ for details. Default: None, no cache is used.

 **Raises:**
@@ -18,7 +18,7 @@ mindspore.dataset.Cifar100Dataset

 - **sampler** (Sampler, optional) - Sampler used to choose samples from the dataset. Default: None; the table below shows the expected behavior of different configurations.
 - **num_shards** (int, optional) - Number of shards the dataset will be divided into for distributed training. Default: None. When this argument is specified, `num_samples` reflects the maximum number of samples per shard.
 - **shard_id** (int, optional) - The shard ID used for distributed training. Default: None. This argument can only be specified when `num_shards` is also specified.
-- **cache** (DatasetCache, optional) - Single-node data cache service used to speed up dataset processing; see `Single-Node Data Cache <https://www.mindspore.cn/docs/programming_guide/zh-CN/master/cache.html>`_ for details. Default: None, no cache is used.
+- **cache** (DatasetCache, optional) - Single-node data cache service used to speed up dataset processing; see `Single-Node Data Cache <https://www.mindspore.cn/tutorials/experts/zh-CN/master/data_engine/cache.html>`_ for details. Default: None, no cache is used.

 **Raises:**
@@ -18,7 +18,7 @@ mindspore.dataset.Cifar10Dataset

 - **sampler** (Sampler, optional) - Sampler used to choose samples from the dataset. Default: None; the table below shows the expected behavior of different configurations.
 - **num_shards** (int, optional) - Number of shards the dataset will be divided into for distributed training. Default: None. When this argument is specified, `num_samples` reflects the maximum number of samples per shard.
 - **shard_id** (int, optional) - The shard ID used for distributed training. Default: None. This argument can only be specified when `num_shards` is also specified.
-- **cache** (DatasetCache, optional) - Single-node data cache service used to speed up dataset processing; see `Single-Node Data Cache <https://www.mindspore.cn/docs/programming_guide/zh-CN/master/cache.html>`_ for details. Default: None, no cache is used.
+- **cache** (DatasetCache, optional) - Single-node data cache service used to speed up dataset processing; see `Single-Node Data Cache <https://www.mindspore.cn/tutorials/experts/zh-CN/master/data_engine/cache.html>`_ for details. Default: None, no cache is used.

 **Raises:**
@@ -22,7 +22,7 @@ mindspore.dataset.CityscapesDataset

 - **sampler** (Sampler, optional) - Sampler used to choose samples from the dataset. Default: None; the table below shows the expected behavior of different configurations.
 - **num_shards** (int, optional) - Number of shards the dataset will be divided into for distributed training. Default: None. When this argument is specified, `num_samples` reflects the maximum number of samples per shard.
 - **shard_id** (int, optional) - The shard ID used for distributed training. Default: None. This argument can only be specified when `num_shards` is also specified.
-- **cache** (DatasetCache, optional) - Single-node data cache service used to speed up dataset processing; see `Single-Node Data Cache <https://www.mindspore.cn/docs/programming_guide/zh-CN/master/cache.html>`_ for details. Default: None, no cache is used.
+- **cache** (DatasetCache, optional) - Single-node data cache service used to speed up dataset processing; see `Single-Node Data Cache <https://www.mindspore.cn/tutorials/experts/zh-CN/master/data_engine/cache.html>`_ for details. Default: None, no cache is used.

 **Raises:**
@@ -17,7 +17,7 @@

 - **sampler** (Sampler, optional) - Sampler used to choose samples from the dataset. Default: None; Table 2 shows the expected behavior of different configurations.
 - **num_shards** (int, optional) - Number of shards the dataset will be divided into for distributed training. Default: None. When this argument is specified, `num_samples` reflects the maximum number of samples per shard.
 - **shard_id** (int, optional) - The shard ID used for distributed training. Default: None. This argument can only be specified when `num_shards` is also specified.
-- **cache** (DatasetCache, optional) - Single-node data cache service used to speed up dataset processing; see `Single-Node Data Cache <https://www.mindspore.cn/docs/programming_guide/zh-CN/master/cache.html>`_ for details. Default: None, no cache is used.
+- **cache** (DatasetCache, optional) - Single-node data cache service used to speed up dataset processing; see `Single-Node Data Cache <https://www.mindspore.cn/tutorials/experts/zh-CN/master/data_engine/cache.html>`_ for details. Default: None, no cache is used.
 - **extra_metadata** (bool, optional) - Whether to output an extra data column carrying image metadata. If True, an extra column named `[_meta-filename, dtype=string]` is output. Default: False.

 [Table 1] Depending on the `task` setting, the generated dataset has different output columns:
@@ -21,7 +21,7 @@ mindspore.dataset.DIV2KDataset

 - **sampler** (Sampler, optional) - Sampler used to choose samples from the dataset. Default: None; the table below shows the expected behavior of different configurations.
 - **num_shards** (int, optional) - Number of shards the dataset will be divided into for distributed training. Default: None. When this argument is specified, `num_samples` reflects the maximum number of samples per shard.
 - **shard_id** (int, optional) - The shard ID used for distributed training. Default: None. This argument can only be specified when `num_shards` is also specified.
-- **cache** (DatasetCache, optional) - Single-node data cache service used to speed up dataset processing; see `Single-Node Data Cache <https://www.mindspore.cn/docs/programming_guide/zh-CN/master/cache.html>`_ for details. Default: None, no cache is used.
+- **cache** (DatasetCache, optional) - Single-node data cache service used to speed up dataset processing; see `Single-Node Data Cache <https://www.mindspore.cn/tutorials/experts/zh-CN/master/data_engine/cache.html>`_ for details. Default: None, no cache is used.

 **Raises:**
@@ -174,10 +174,10 @@

 - **column_order** (Union[str, list[str]], optional) - Order of the data columns passed to the next dataset operation. This argument must be specified when the length of `input_columns` is not equal to that of `output_columns`. Note: the column names are not limited to those given in `input_columns` and `output_columns`; they may also be unprocessed columns output by the previous operation. Default: None, keep the original input order.
 - **num_parallel_workers** (int, optional) - Number of parallel processes/threads used by the map operation to speed up processing. Default: None, use the value set by `set_num_parallel_workers`.
 - **python_multiprocessing** (bool, optional) - Enable Python multiprocessing to accelerate the map operation. This may help when the given `operations` are computation-heavy. Default: False.
-- **cache** (DatasetCache, optional) - Single-node data cache service used to speed up dataset processing; see `Single-Node Data Cache <https://www.mindspore.cn/docs/programming_guide/zh-CN/master/cache.html>`_ for details. Default: None, no cache is used.
+- **cache** (DatasetCache, optional) - Single-node data cache service used to speed up dataset processing; see `Single-Node Data Cache <https://www.mindspore.cn/tutorials/experts/zh-CN/master/data_engine/cache.html>`_ for details. Default: None, no cache is used.
 - **callbacks** (DSCallback, list[DSCallback], optional) - List of Dataset callbacks to be called. Default: None.
 - **max_rowsize** (int, optional) - Maximum shared-memory size allocated when copying data between processes; effective only when `python_multiprocessing` is True. Default: 16 (MB).
-- **offload** (bool, optional) - Whether to use heterogeneous hardware acceleration for data preparation; see `Dataset Offload <https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_dataset_offload.html>`_ for details. Default: None.
+- **offload** (bool, optional) - Whether to use heterogeneous hardware acceleration for data preparation; see `Dataset Offload <https://www.mindspore.cn/docs/zh-CN/master/design/dataset_offload.html>`_ for details. Default: None.

 .. note::
     - The `operations` argument mainly accepts c_transforms and py_transforms operators from the `mindspore.dataset` module, as well as user-defined Python functions (PyFuncs).
@@ -5,8 +5,7 @@ mindspore.dataset.DatasetCache

 Create a data cache client instance.

-For the usage of single-node data caching, see the `Single-Node Data Cache Tutorial <https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_cache.html>`_ and
-the `Single-Node Data Cache Programming Guide <https://www.mindspore.cn/docs/programming_guide/zh-CN/master/cache.html>`_.
+For the usage of single-node data caching, see the `Single-Node Data Cache Tutorial <https://www.mindspore.cn/tutorials/experts/zh-CN/master/data_engine/enable_cache.html>`_.

 **Parameters:**
@@ -19,7 +19,7 @@ mindspore.dataset.ImageFolderDataset

 - **decode** (bool, optional) - Whether to decode the loaded images. Default: False, do not decode.
 - **num_shards** (int, optional) - Number of shards the dataset will be divided into for distributed training. Default: None. When this argument is specified, `num_samples` reflects the maximum number of samples per shard.
 - **shard_id** (int, optional) - The shard ID used for distributed training. Default: None. This argument can only be specified when `num_shards` is also specified.
-- **cache** (DatasetCache, optional) - Single-node data cache service used to speed up dataset processing; see `Single-Node Data Cache <https://www.mindspore.cn/docs/programming_guide/zh-CN/master/cache.html>`_ for details. Default: None, no cache is used.
+- **cache** (DatasetCache, optional) - Single-node data cache service used to speed up dataset processing; see `Single-Node Data Cache <https://www.mindspore.cn/tutorials/experts/zh-CN/master/data_engine/cache.html>`_ for details. Default: None, no cache is used.

 **Raises:**
@@ -19,7 +19,7 @@

 - **decode** (bool, optional) - Whether to decode the loaded images. Default: False, do not decode.
 - **num_shards** (int, optional) - Number of shards the dataset will be divided into for distributed training. Default: None. When this argument is specified, `num_samples` reflects the maximum number of samples per shard.
 - **shard_id** (int, optional) - The shard ID used for distributed training. Default: None. This argument can only be specified when `num_shards` is also specified.
-- **cache** (DatasetCache, optional) - Single-node data cache service used to speed up dataset processing; see `Single-Node Data Cache <https://www.mindspore.cn/docs/programming_guide/zh-CN/master/cache.html>`_ for details. Default: None, no cache is used.
+- **cache** (DatasetCache, optional) - Single-node data cache service used to speed up dataset processing; see `Single-Node Data Cache <https://www.mindspore.cn/tutorials/experts/zh-CN/master/data_engine/cache.html>`_ for details. Default: None, no cache is used.

 **Raises:**
@@ -24,7 +24,7 @@

 - **padded_sample** (dict, optional): Extra samples added to the dataset, which can be used to pad shard data in distributed training; note that the dict keys must match the column names given by `column_list`. Default: None, no samples added. Must be used together with `num_padded`.
 - **num_padded** (int, optional) - Number of extra samples to add. In distributed training, this can pad the dataset so that the total number of samples is divisible by `num_shards`. Default: None, no samples added. Must be used together with `padded_sample`.
 - **num_samples** (int, optional) - Number of samples to read from the dataset. Default: None, read all samples.
-- **cache** (DatasetCache, optional) - Single-node data cache service used to speed up dataset processing; see `Single-Node Data Cache <https://www.mindspore.cn/docs/programming_guide/zh-CN/master/cache.html>`_ for details. Default: None, no cache is used.
+- **cache** (DatasetCache, optional) - Single-node data cache service used to speed up dataset processing; see `Single-Node Data Cache <https://www.mindspore.cn/tutorials/experts/zh-CN/master/data_engine/cache.html>`_ for details. Default: None, no cache is used.

 **Raises:**
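The relationship between `num_padded` and `num_shards` described above (padding until the total row count divides evenly) can be sketched in plain Python. This helper is illustrative only and is not part of the MindSpore API:

```python
def num_padded_for(num_rows, num_shards):
    """Smallest number of padded samples making num_rows + padding divisible by num_shards."""
    return (-num_rows) % num_shards

print(num_padded_for(10, 4))  # 2: 12 rows split evenly into 4 shards of 3
print(num_padded_for(12, 4))  # 0: already divisible
```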
@@ -18,7 +18,7 @@ mindspore.dataset.MnistDataset

 - **sampler** (Sampler, optional) - Sampler used to choose samples from the dataset. Default: None; the table below shows the expected behavior of different configurations.
 - **num_shards** (int, optional) - Number of shards the dataset will be divided into for distributed training. Default: None. When this argument is specified, `num_samples` reflects the maximum number of samples per shard.
 - **shard_id** (int, optional) - The shard ID used for distributed training. Default: None. This argument can only be specified when `num_shards` is also specified.
-- **cache** (DatasetCache, optional) - Single-node data cache service used to speed up dataset processing; see `Single-Node Data Cache <https://www.mindspore.cn/docs/programming_guide/zh-CN/master/cache.html>`_ for details. Default: None, no cache is used.
+- **cache** (DatasetCache, optional) - Single-node data cache service used to speed up dataset processing; see `Single-Node Data Cache <https://www.mindspore.cn/tutorials/experts/zh-CN/master/data_engine/cache.html>`_ for details. Default: None, no cache is used.

 **Raises:**
@@ -27,7 +27,7 @@ mindspore.dataset.TFRecordDataset

 - **num_shards** (int, optional) - Number of shards the dataset will be divided into for distributed training. Default: None. When this argument is specified, `num_samples` reflects the maximum number of samples per shard.
 - **shard_id** (int, optional) - The shard ID used for distributed training. Default: None. This argument can only be specified when `num_shards` is also specified.
 - **shard_equal_rows** (bool, optional) - Whether all shards get an equal number of rows in distributed training. Default: False. If `shard_equal_rows` is False, the shards may end up with unequal numbers of rows, which can cause distributed training to fail; it is therefore recommended to set this argument to True when the TFRecord files contain unequal numbers of samples. Note that this argument can only be specified when `num_shards` is also specified.
-- **cache** (DatasetCache, optional) - Single-node data cache service used to speed up dataset processing; see `Single-Node Data Cache <https://www.mindspore.cn/docs/programming_guide/zh-CN/master/cache.html>`_ for details. Default: None, no cache is used.
+- **cache** (DatasetCache, optional) - Single-node data cache service used to speed up dataset processing; see `Single-Node Data Cache <https://www.mindspore.cn/tutorials/experts/zh-CN/master/data_engine/cache.html>`_ for details. Default: None, no cache is used.

 **Raises:**
@@ -19,7 +19,7 @@

 - **num_shards** (int, optional) - Number of shards the dataset will be divided into for distributed training. Default: None. When this argument is specified, `num_samples` reflects the maximum number of samples per shard.
 - **shard_id** (int, optional) - The shard ID used for distributed training. Default: None. This argument can only be specified when `num_shards` is also specified.
-- **cache** (DatasetCache, optional) - Single-node data cache service used to speed up dataset processing; see `Single-Node Data Cache <https://www.mindspore.cn/docs/programming_guide/zh-CN/master/cache.html>`_ for details. Default: None, no cache is used.
+- **cache** (DatasetCache, optional) - Single-node data cache service used to speed up dataset processing; see `Single-Node Data Cache <https://www.mindspore.cn/tutorials/experts/zh-CN/master/data_engine/cache.html>`_ for details. Default: None, no cache is used.

 **Raises:**
@@ -22,7 +22,7 @@ mindspore.dataset.VOCDataset

 - **sampler** (Sampler, optional) - Sampler used to choose samples from the dataset. Default: None; the table below shows the expected behavior of different configurations.
 - **num_shards** (int, optional) - Number of shards the dataset will be divided into for distributed training. Default: None. When this argument is specified, `num_samples` reflects the maximum number of samples per shard.
 - **shard_id** (int, optional) - The shard ID used for distributed training. Default: None. This argument can only be specified when `num_shards` is also specified.
-- **cache** (DatasetCache, optional) - Single-node data cache service used to speed up dataset processing; see `Single-Node Data Cache <https://www.mindspore.cn/docs/programming_guide/zh-CN/master/cache.html>`_ for details. Default: None, no cache is used.
+- **cache** (DatasetCache, optional) - Single-node data cache service used to speed up dataset processing; see `Single-Node Data Cache <https://www.mindspore.cn/tutorials/experts/zh-CN/master/data_engine/cache.html>`_ for details. Default: None, no cache is used.
 - **extra_metadata** (bool, optional) - Whether to output an extra data column carrying image metadata. If True, an extra column named `[_meta-filename, dtype=string]` is output. Default: False.

 Depending on the given `task` configuration, the generated dataset has different output columns:
@@ -3,11 +3,11 @@ mindspore.dataset.WaitedDSCallback

 .. py:class:: mindspore.dataset.WaitedDSCallback(step_size=1)

-Abstract base class for blocking data-processing callbacks, used to synchronize with the training callback class `mindspore.train.callback <https://mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.train.html#mindspore.train.callback.Callback>`_.
+Abstract base class for blocking data-processing callbacks, used to synchronize with the training callback class `mindspore.train.callback <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.train.html#mindspore.train.callback.Callback>`_.

 It can be used to run custom callback methods before each step or epoch starts, for example updating the parameter configuration of augmentation operators in automatic data augmentation based on the loss of the previous epoch.

-Through `train_run_context`, users can obtain training information such as `network`, `train_network`, `epoch_num`, `batch_num`, `loss_fn`, `optimizer`, `parallel_mode`, `device_number`, `list_callback`, `cur_epoch_num`, `cur_step_num`, `dataset_sink_mode`, and `net_outputs`; see `mindspore.train.callback <https://mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.train.html#mindspore.train.callback.Callback>`_ for details.
+Through `train_run_context`, users can obtain training information such as `network`, `train_network`, `epoch_num`, `batch_num`, `loss_fn`, `optimizer`, `parallel_mode`, `device_number`, `list_callback`, `cur_epoch_num`, `cur_step_num`, `dataset_sink_mode`, and `net_outputs`; see `mindspore.train.callback <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.train.html#mindspore.train.callback.Callback>`_ for details.

 Through `ds_run_context`, users can obtain data pipeline information, including `cur_epoch_num` (current epoch), `cur_step_num_in_epoch` (current step within the epoch), and `cur_step_num` (current overall step).
@@ -99,7 +99,7 @@ MindSpore context, used to configure the current execution environment, including the execution mode, execution…

 Memory reuse:

 - **mem_Reuse**: whether the memory-reuse feature is enabled. When set to True, memory reuse is turned on; when set to False, it is turned off.
-For details about the running data recorder and memory-reuse configuration, see `Configuring RDR and Memory Reuse <https://www.mindspore.cn/docs/programming_guide/zh-CN/master/custom_debugging_info.html>`_.
+For details about the running data recorder and memory-reuse configuration, see `Configuring RDR and Memory Reuse <https://www.mindspore.cn/tutorials/experts/zh-CN/master/debug/custom_debugging_info.html>`_.

 - **precompile_only** (bool) - Whether to only pre-compile the network. Default: False. When set to True, the network is compiled but not executed.
@@ -111,7 +111,7 @@ MindSpore context, used to configure the current execution environment, including the execution mode, execution…

 - **pynative_synchronize** (bool) - Whether to enable synchronous execution on the device in PyNative mode. Default: False. When False, operators execute asynchronously on the device, and when an operator fails, the location of the failing script code cannot be identified. When True, operators execute synchronously, which reduces execution performance but lets the error location be found from the call stack when an operator fails.
 - **mode** (int) - Whether to run in GRAPH_MODE(0) or PYNATIVE_MODE(1). Default: GRAPH_MODE(0). GRAPH_MODE or PYNATIVE_MODE can be set via the `mode` attribute, and both modes support all backends.
-- **enable_graph_kernel** (bool) - Whether to enable graph-kernel fusion to optimize network execution performance. Default: False. If set to True, acceleration can be enabled. For details about graph-kernel fusion, see `Enabling Graph-Kernel Fusion <https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_graph_kernel_fusion.html>`_.
+- **enable_graph_kernel** (bool) - Whether to enable graph-kernel fusion to optimize network execution performance. Default: False. If set to True, acceleration can be enabled. For details about graph-kernel fusion, see `Enabling Graph-Kernel Fusion <https://www.mindspore.cn/docs/zh-CN/master/design/enable_graph_kernel_fusion.html>`_.
 - **graph_kernel_flags** (str) - Optimization options for graph-kernel fusion; they take precedence over `enable_graph_kernel` when the two conflict. Intended only for experienced users. For example, context.set_context(graph_kernel_flags="--opt_level=2 --dump_as_text"). Some common options:

     - **opt_level**: sets the optimization level. Default: 2. Graph-kernel fusion is enabled when opt_level is greater than 0. Available values include:
@@ -133,11 +133,11 @@ MindSpore context, used to configure the current execution environment, including the execution mode, execution…

 - RL, GA: when both RL and GA optimization are enabled, the tool automatically chooses RL or GA according to the operator types in the network model; the order of RL and GA does not matter (automatic selection).

-For more information about enabling the operator tuning tool, see `Enabling the Operator Tuning Tool <https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_auto_tune.html>`_.
+For more information about enabling the operator tuning tool, see `Enabling the Operator Tuning Tool <https://www.mindspore.cn/tutorials/experts/zh-CN/master/debug/auto_tune.html>`_.

 - **check_bprop** (bool) - Whether to check back-propagation nodes, ensuring that the shape and data type of their outputs match the input parameters. Default: False.
 - **max_call_depth** (int) - Maximum depth of function calls; must be a positive integer. Default: 1000. `max_call_depth` needs to be adjusted when Cells are nested too deeply or the number of subgraphs is too large. The system's maximum stack depth should be raised along with `max_call_depth`; otherwise a "core dumped" exception may be raised due to system stack overflow.
-- **enable_sparse** (bool) - Whether to enable sparse features. Default: False. For details about sparse features and sparse tensors, see `Sparse Tensor <https://www.mindspore.cn/docs/programming_guide/zh-CN/master/tensor.html#sparse-tensor>`_.
+- **enable_sparse** (bool) - Whether to enable sparse features. Default: False. For details about sparse features and sparse tensors, see `Sparse Tensor <https://www.mindspore.cn/tutorials/zh-CN/master/beginner/tensor.html#sparse-tensor>`_.
 - **grad_for_scalar** (bool): Whether to compute gradients for scalar inputs. Default: False. When `grad_for_scalar` is True, scalar inputs of a function can be differentiated. Because the backend does not currently support scaling operations, this interface only supports simple operations that can be deduced on the front end.
 - **enable_compile_cache** (bool) - Whether to load or save the graphs compiled by the front end. When `enable_compile_cache` is True, a hardware-independent compilation cache is generated and exported to a MINDIR file during the first execution; when the network is executed again, if `enable_compile_cache` is still True and the network script has not changed, the cache is loaded. Note that only limited automatic detection of Python script changes is currently supported, so there is a correctness risk. Default: False. This is an experimental feature that may be changed or removed.
 - **compile_cache_path** (str) - Path for saving the front-end graph compilation cache. Default: ".". The directory is created automatically if it does not exist. The cache is saved under `compile_cache_path/rank_${rank_id}/`, where `rank_id` is the ID of the current device in the cluster.
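As a rough analogy for the `max_call_depth` note above (deep nesting requires raising both the framework limit and the system stack limit), Python's own recursion limit behaves similarly. This is an illustration of the concept only, not MindSpore code:

```python
import sys

def nested_depth(n):
    """Recurse n levels; raises RecursionError if n exceeds the interpreter's limit."""
    return 0 if n == 0 else 1 + nested_depth(n - 1)

sys.setrecursionlimit(10000)  # analogous to raising max_call_depth together with the stack limit
print(nested_depth(2000))     # 2000
```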
@@ -8,7 +8,7 @@ mindspore.dataset

 Most datasets can enable the cache service by specifying the `cache` argument, to improve overall data-processing efficiency.
 Note that the cache service is not yet supported on Windows, so do not use it when loading and processing data on Windows. For more information and limitations,
-see `Single-Node Tensor Cache <https://www.mindspore.cn/docs/programming_guide/zh-CN/master/cache.html>`_.
+see `Single-Node Tensor Cache <https://www.mindspore.cn/tutorials/experts/zh-CN/master/data_engine/cache.html>`_.

 In the API examples, commonly used modules are imported as follows:
@@ -3,7 +3,7 @@ mindspore.build_searched_strategy

 .. py:class:: mindspore.build_searched_strategy(strategy_filename)

-Build the strategy of every parameter in the network, used for distributed inference. For usage details, see: `Saving and Loading Models (HyBrid Parallel Mode) <https://www.mindspore.cn/docs/programming_guide/zh-CN/master/save_load_model_hybrid_parallel.html>`_.
+Build the strategy of every parameter in the network, used for distributed inference. For usage details, see: `Saving and Loading Models (HyBrid Parallel Mode) <https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/save_load_model_hybrid_parallel.html>`_.

 **Parameters:**
@@ -3,7 +3,7 @@ mindspore.load_distributed_checkpoint

 .. py:method:: mindspore.load_distributed_checkpoint(network, checkpoint_filenames, predict_strategy=None, train_strategy_filename=None, strict_load=False, dec_key=None, dec_mode='AES-GCM')

-Load checkpoint files into the network for distributed prediction, used in distributed inference. For details about distributed inference, see: https://www.mindspore.cn/docs/programming_guide/zh-CN/master/distributed_inference.html .
+Load checkpoint files into the network for distributed prediction, used in distributed inference. For details about distributed inference, see: https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/distributed_inference.html .

 **Parameters:**
@@ -3,7 +3,7 @@ mindspore.merge_sliced_parameter

 .. py:method:: mindspore.merge_sliced_parameter(sliced_parameters, strategy=None)

-Merge parameter slices into one complete parameter, used for distributed inference. For details, see: `Saving and Loading Models (HyBrid Parallel Mode) <https://www.mindspore.cn/docs/programming_guide/zh-CN/master/save_load_model_hybrid_parallel.html>`_.
+Merge parameter slices into one complete parameter, used for distributed inference. For details, see: `Saving and Loading Models (HyBrid Parallel Mode) <https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/save_load_model_hybrid_parallel.html>`_.

 **Parameters:**
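Conceptually, merging sliced parameters is a concatenation along the sliced dimension. A minimal sketch with Python lists standing in for tensor slices (illustrative only; the real API additionally consults the parallel strategy):

```python
def merge_slices(sliced_parameters):
    """Concatenate slices of one parameter back into the complete parameter."""
    merged = []
    for part in sliced_parameters:
        merged.extend(part)
    return merged

print(merge_slices([[1, 2], [3, 4], [5, 6]]))  # [1, 2, 3, 4, 5, 6]
```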
@@ -5,7 +5,7 @@ mindspore.set_dump

 Enable or disable the data-dump feature for `target` and its child nodes.

-`target` is an instance of :class:`mindspore.nn.Cell` or :class:`mindspore.ops.Primitive`. Note that this API takes effect only when asynchronous dump is enabled and the `dump_mode` field in the dump configuration file is "2". For details, see the `Dump documentation <https://mindspore.cn/docs/programming_guide/zh-CN/master/dump_in_graph_mode.html>`_. By default, data dump is not enabled for :class:`mindspore.nn.Cell` and :class:`mindspore.ops.Primitive` instances.
+`target` is an instance of :class:`mindspore.nn.Cell` or :class:`mindspore.ops.Primitive`. Note that this API takes effect only when asynchronous dump is enabled and the `dump_mode` field in the dump configuration file is "2". For details, see the `Dump documentation <https://www.mindspore.cn/tutorials/experts/zh-CN/master/debug/dump_in_graph_mode.html>`_. By default, data dump is not enabled for :class:`mindspore.nn.Cell` and :class:`mindspore.ops.Primitive` instances.

 .. Warning::
     All APIs in this class are experimental and may be changed or removed in the future.
@@ -3,7 +3,7 @@ mindspore.nn.CellList

 .. py:class:: mindspore.nn.CellList(*args, **kwargs)

-Construct a list of Cells. For an introduction to Cell, see `Cell <https://www.mindspore.cn/docs/api/zh-CN/master/api_python/nn/mindspore.nn.Cell.html#mindspore.nn.Cell>`_.
+Construct a list of Cells. For an introduction to Cell, see `Cell <https://www.mindspore.cn/docs/zh-CN/master/api_python/nn/mindspore.nn.Cell.html#mindspore.nn.Cell>`_.

 A CellList can be used like an ordinary Python list; the Cells it contains are all initialized.
@@ -7,7 +7,7 @@ mindspore.nn.Flatten

 **Inputs:**

-- **x** (Tensor) - The input Tensor to be flattened, with shape :math:`(N, *)`, where :math:`*` means any number of additional dimensions. The data type is `number <https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_.
+- **x** (Tensor) - The input Tensor to be flattened, with shape :math:`(N, *)`, where :math:`*` means any number of additional dimensions. The data type is `number <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_.

 **Outputs:**
@@ -15,7 +15,7 @@ mindspore.nn.ReLU

 **Inputs:**

-- **x** (Tensor) - Tensor of any dimension used to compute ReLU. The data type is `number <https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_.
+- **x** (Tensor) - Tensor of any dimension used to compute ReLU. The data type is `number <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_.

 **Outputs:**
@@ -3,7 +3,7 @@ mindspore.nn.SequentialCell

 .. py:class:: mindspore.nn.SequentialCell(*args)

-Construct a sequential container of Cells. For an introduction to Cell, see `<https://www.mindspore.cn/docs/api/zh-CN/master/api_python/nn/mindspore.nn.Cell.html#mindspore.nn.Cell>`_.
+Construct a sequential container of Cells. For an introduction to Cell, see `<https://www.mindspore.cn/docs/zh-CN/master/api_python/nn/mindspore.nn.Cell.html#mindspore.nn.Cell>`_.

 SequentialCell adds Cells in the order of the given list; an OrderedDict can also be passed to the constructor.
@@ -11,7 +11,7 @@ mindspore.nn.Tril

 **Inputs:**

-- **x** (Tensor): The input Tensor. The data type is `number <https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_ .
+- **x** (Tensor): The input Tensor. The data type is `number <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_ .
 - **k** (int): Index of the diagonal. Default: 0. If the input matrix has dimensions d1 and d2, k should be in the range [-min(d1, d2)+1, min(d1, d2)-1]; outside this range, the output is identical to the input `x`.

 **Outputs:**
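The behavior of `k` described for Tril can be sketched with nested lists (illustrative only, not the operator's implementation): element (i, j) is kept when j - i <= k, and everything above that diagonal is zeroed.

```python
def tril(matrix, k=0):
    """Keep the lower triangle relative to diagonal k; zero out the rest."""
    return [[v if j - i <= k else 0 for j, v in enumerate(row)]
            for i, row in enumerate(matrix)]

print(tril([[1, 2, 3], [4, 5, 6]]))       # [[1, 0, 0], [4, 5, 0]]
print(tril([[1, 2, 3], [4, 5, 6]], k=1))  # [[1, 2, 0], [4, 5, 6]]
```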
@@ -2,4 +2,4 @@

 Since this optimizer has no `loss_scale` argument, `loss_scale` needs to be handled in another way.

-For how to handle `loss_scale` correctly, see `LossScale <https://www.mindspore.cn/docs/programming_guide/zh-CN/master/lossscale.html>`_.
+For how to handle `loss_scale` correctly, see `LossScale <https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html>`_.
@@ -17,7 +17,7 @@ mindspore.ops.Add

 **Inputs:**

-- **x** (Union[Tensor, number.Number, bool]) - The first input, which is a number.Number, a bool, or a Tensor whose data type is `number <https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_ or `bool_ <https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_.
+- **x** (Union[Tensor, number.Number, bool]) - The first input, which is a number.Number, a bool, or a Tensor whose data type is `number <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_ or `bool_ <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_.
 - **y** (Union[Tensor, number.Number, bool]) - The second input. When the first input is a Tensor, the second input should be a number.Number, a bool, or a Tensor of dtype number or bool_. When the first input is a Scalar, the second input must be a Tensor of dtype number or bool_.

 **Outputs:**
@@ -9,7 +9,7 @@ mindspore.ops.AddN

 **Inputs:**

-- **x** (Union(tuple[Tensor], list[Tensor])) - A tuple or list of Tensors whose dtype is `bool_ <https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_ or `number <https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_ .
+- **x** (Union(tuple[Tensor], list[Tensor])) - A tuple or list of Tensors whose dtype is `bool_ <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_ or `number <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_ .

 **Outputs:**
@@ -6,7 +6,7 @@

 Gathers Tensors across the specified communication group.

 .. note::
-    The Tensors of all processes in the group must have the same shape and format. Users need to set environment variables before use and run the example below; for details, visit the official `MindSpore <https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.ops.html#communication-operators>`_ website.
+    The Tensors of all processes in the group must have the same shape and format. Users need to set environment variables before use and run the example below; for details, visit the official `MindSpore <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.ops.html#communication-operators>`_ website.

 **Parameters:**
@@ -6,7 +6,7 @@

 Reduces the Tensor data across all devices in the communication group in the specified way, so that all devices get the same result.

 .. note::
-    AllReduce does not support "prod" yet. The Tensors of all processes in the group must have the same shape and format. Users need to set environment variables before use and run the example below; for details, visit the official `MindSpore <https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.ops.html#communication-operators>`_ website.
+    AllReduce does not support "prod" yet. The Tensors of all processes in the group must have the same shape and format. Users need to set environment variables before use and run the example below; for details, visit the official `MindSpore <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.ops.html#communication-operators>`_ website.

 **Parameters:**
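The reduction semantics above (every device ends up with the same result) can be simulated in plain Python. This sketch sums element-wise across per-device tensors; it is a conceptual model only, not how the collective is implemented:

```python
def all_reduce_sum(per_device_tensors):
    """Every 'device' receives the elementwise sum across all devices' tensors."""
    reduced = [sum(vals) for vals in zip(*per_device_tensors)]
    return [list(reduced) for _ in per_device_tensors]

print(all_reduce_sum([[1, 2], [3, 4], [5, 6]]))  # [[9, 12], [9, 12], [9, 12]]
```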
@@ -17,7 +17,7 @@ mindspore.ops.Div

 **Inputs:**

-- **x** (Union[Tensor, number.Number, bool]) - The first input, which is a number.Number, a bool, or a Tensor whose data type is `number <https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_ or `bool_ <https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_.
+- **x** (Union[Tensor, number.Number, bool]) - The first input, which is a number.Number, a bool, or a Tensor whose data type is `number <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_ or `bool_ <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_.
 - **y** (Union[Tensor, number.Number, bool]) - The second input. When the first input is a Tensor, the second input should be a number.Number, a bool, or a Tensor of dtype number or bool_. When the first input is a Scalar, the second input must be a Tensor of dtype number or bool_.

 **Outputs:**
@@ -12,7 +12,7 @@ mindspore.ops.Eye

 - **n** (int) - Number of rows of the returned Tensor. Only constant values are supported.
 - **m** (int) - Number of columns of the returned Tensor. Only constant values are supported.
-- **t** (mindspore.dtype) - Data type of the returned Tensor. It must be `bool_ <https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_ or `number <https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_ .
+- **t** (mindspore.dtype) - Data type of the returned Tensor. It must be `bool_ <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_ or `number <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_ .

 **Outputs:**
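The `n`/`m` semantics of Eye can be sketched with nested lists (illustrative only): ones on the main diagonal, zeros elsewhere, in an n-by-m layout.

```python
def eye(n, m):
    """n x m matrix with ones on the main diagonal and zeros elsewhere."""
    return [[1 if i == j else 0 for j in range(m)] for i in range(n)]

print(eye(2, 3))  # [[1, 0, 0], [0, 1, 0]]
```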
@@ -7,7 +7,7 @@ mindspore.ops.Fill

 **Inputs:**

-- **type** (mindspore.dtype) - Data type of the output Tensor. Only `bool_ <https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_ and `number <https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_ are supported.
+- **type** (mindspore.dtype) - Data type of the output Tensor. Only `bool_ <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_ and `number <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_ are supported.
 - **shape** (tuple[int]) - Shape of the output Tensor.
 - **value** (Union(number.Number, bool)) - The value used to fill the output Tensor.
@@ -13,7 +13,7 @@ mindspore.ops.Gather

 .. note::
     1. The values of input_indices must be in the range `[0, input_param.shape[axis])`; the result is undefined outside this range.
-    2. On the Ascend platform, the data type of input_params currently cannot be `bool_ <https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_ .
+    2. On the Ascend platform, the data type of input_params currently cannot be `bool_ <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_ .

 **Inputs:**
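The index-range note above can be sketched along axis 0 with plain lists (illustrative only): each output element is `input_params[index]`, and indices outside `[0, len(input_params))` are undefined behavior for the operator.

```python
def gather_axis0(input_params, input_indices):
    """Pick elements of input_params at the given indices (axis 0)."""
    return [input_params[i] for i in input_indices]

print(gather_axis0([10, 20, 30, 40], [3, 0, 2]))  # [40, 10, 30]
```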
@@ -21,7 +21,7 @@ mindspore.ops.Greater

 **Inputs:**

-- **x** (Union[Tensor, number.Number, bool]) - The first input, which is a number.Number, a bool, or a Tensor whose data type is `number <https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_ or `bool_ <https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_.
+- **x** (Union[Tensor, number.Number, bool]) - The first input, which is a number.Number, a bool, or a Tensor whose data type is `number <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_ or `bool_ <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_.
 - **y** (Union[Tensor, number.Number, bool]) - The second input. When the first input is a Tensor, the second input should be a number.Number, a bool, or a Tensor of dtype number or bool_. When the first input is a Scalar, the second input must be a Tensor of dtype number or bool_.

 **Outputs:**
@@ -20,7 +20,7 @@ mindspore.ops.LessEqual

 **Inputs:**

-- **x** (Union[Tensor, number.Number, bool]) - The first input, which is a number.Number, a bool, or a Tensor whose data type is `number <https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_ or `bool_ <https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_.
+- **x** (Union[Tensor, number.Number, bool]) - The first input, which is a number.Number, a bool, or a Tensor whose data type is `number <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_ or `bool_ <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_.
 - **y** (Union[Tensor, number.Number, bool]) - The second input. When the first input is a Tensor, the second input should be a number.Number, a bool, or a Tensor of dtype number or bool_. When the first input is a Scalar, the second input must be a Tensor of dtype number or bool_.

 **Outputs:**
@@ -17,7 +17,7 @@ mindspore.ops.Mul
 **输入：**
-- **x** (Union[Tensor, number.Number, bool]) - 第一个输入，是一个number.Number、bool值或数据类型为 `number <https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_ 或 `bool_ <https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_ 的Tensor。
+- **x** (Union[Tensor, number.Number, bool]) - 第一个输入，是一个number.Number、bool值或数据类型为 `number <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_ 或 `bool_ <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_ 的Tensor。
 - **y** (Union[Tensor, number.Number, bool]) - 第二个输入，当第一个输入是Tensor时，第二个输入应该是一个number.Number或bool值，或数据类型为number或bool_的Tensor。当第一个输入是Scalar时，第二个输入必须是数据类型为number或bool_的Tensor。
 **输出：**
@@ -17,7 +17,7 @@ mindspore.ops.Pow
 **输入：**
-- **x** (Union[Tensor, number.Number, bool]) - 第一个输入，是一个number.Number、bool值或数据类型为 `number <https://www.mindspore.cn/docs/api/en/master/api_python/mindspore.html#mindspore.dtype>`_ 或 `bool_ <https://www.mindspore.cn/docs/api/en/master/api_python/mindspore.html#mindspore.dtype>`_ 的Tensor。
+- **x** (Union[Tensor, number.Number, bool]) - 第一个输入，是一个number.Number、bool值或数据类型为 `number <https://www.mindspore.cn/docs/en/master/api_python/mindspore.html#mindspore.dtype>`_ 或 `bool_ <https://www.mindspore.cn/docs/en/master/api_python/mindspore.html#mindspore.dtype>`_ 的Tensor。
 - **y** (Union[Tensor, number.Number, bool]) - 第二个输入，当第一个输入是Tensor时，第二个输入应该是一个number.Number或bool值，或数据类型为number或bool_的Tensor。当第一个输入是Scalar时，第二个输入必须是数据类型为number或bool_的Tensor。
 **输出：**
@@ -7,7 +7,7 @@ mindspore.ops.Size
 **输入：**
-- **input_x** (Tensor) - 输入参数，shape为 :math:`(x_1, x_2, ..., x_R)` 。数据类型为 `number <https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_ 。
+- **input_x** (Tensor) - 输入参数，shape为 :math:`(x_1, x_2, ..., x_R)` 。数据类型为 `number <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_ 。
 **输出：**
@@ -17,7 +17,7 @@ mindspore.ops.Sub
 **输入：**
-- **x** (Union[Tensor, number.Number, bool]) - 第一个输入，是一个number.Number、bool值或数据类型为 `number <https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_ 或 `bool_ <https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_ 的Tensor。
+- **x** (Union[Tensor, number.Number, bool]) - 第一个输入，是一个number.Number、bool值或数据类型为 `number <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_ 或 `bool_ <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.html#mindspore.dtype>`_ 的Tensor。
 - **y** (Union[Tensor, number.Number, bool]) - 第二个输入，当第一个输入是Tensor时，第二个输入应该是一个number.Number或bool值，或数据类型为number或bool_的Tensor。当第一个输入是Scalar时，第二个输入必须是数据类型为number或bool_的Tensor。
 **输出：**
@@ -5,7 +5,7 @@
 Callback函数可以在step或epoch开始前或结束后执行一些操作。
 要创建自定义Callback，需要继承Callback基类并重载它相应的方法，有关自定义Callback的详细信息，请查看
-`Callback <https://www.mindspore.cn/docs/programming_guide/zh-CN/master/custom_debugging_info.html>`_。
+`Callback <https://www.mindspore.cn/tutorials/experts/zh-CN/master/debug/custom_debugging_info.html>`_。
 .. py:method:: begin(run_context)
@@ -102,7 +102,7 @@
 **异常：**
-- **TypeError:** `step` 不为整型，或 `train_network` 的类型不为 `mindspore.nn.Cell <https://www.mindspore.cn/docs/api/zh-CN/master/api_python/nn/mindspore.nn.Cell.html?highlight=MindSpore.nn.cell#mindspore-nn-cell>`_ 。
+- **TypeError:** `step` 不为整型，或 `train_network` 的类型不为 `mindspore.nn.Cell <https://www.mindspore.cn/docs/zh-CN/master/api_python/nn/mindspore.nn.Cell.html?highlight=MindSpore.nn.cell#mindspore-nn-cell>`_ 。
 .. py:method:: set_mode(mode)
@@ -25,9 +25,9 @@ def set_dump(target, enabled=True):
     `target` should be an instance of :class:`mindspore.nn.Cell` or :class:`mindspore.ops.Primitive` .
     Please note that this API takes effect only when Asynchronous Dump is enabled and the `dump_mode`
-    field in dump config file is "2". See the `dump document <https://mindspore.cn/docs/programming_guide/
-    en/master/dump_in_graph_mode.html>`_ for details.
-    The default enabled status for a :class:`mindspore.nn.Cell` or :class:`mindspore.ops.Primitive` is False.
+    field in dump config file is "2". See the `dump document <https://www.mindspore.cn/tutorials/
+    experts/en/master/debug/dump_in_graph_mode.html>`_ for details. The default enabled status for
+    a :class:`mindspore.nn.Cell` or :class:`mindspore.ops.Primitive` is False.
     .. warning::
         This is an experimental prototype that is subject to change or deletion.
@@ -797,7 +797,7 @@ def set_context(**kwargs):
     If enable_graph_kernel is set to True, acceleration can be enabled.
     For details of graph kernel fusion, please check
     `Enabling Graph Kernel Fusion
-    <https://www.mindspore.cn/docs/programming_guide/en/master/enable_graph_kernel_fusion.html>`_.
+    <https://www.mindspore.cn/docs/en/master/design/enable_graph_kernel_fusion.html>`_.
     graph_kernel_flags (str) –
     Optimization options of graph kernel fusion, and the priority is higher when it conflicts
     with enable_graph_kernel. Only for experienced users.
@@ -832,7 +832,7 @@ def set_context(**kwargs):
     For more information about the enable operator tuning tool settings, please check
     `Enable the operator optimization tool
-    <https://www.mindspore.cn/docs/programming_guide/en/master/enable_auto_tune.html>`_.
+    <https://www.mindspore.cn/tutorials/experts/en/master/debug/auto_tune.html>`_.
     check_bprop (bool): Whether to check back propagation nodes. The checking ensures that the shape and dtype
     of back propagation node outputs is the same as input parameters. Default: False.
     max_call_depth (int): Specify the maximum depth of function call. Must be positive integer. Default: 1000.
@@ -841,7 +841,7 @@ def set_context(**kwargs):
     set larger too, otherwise a `core dumped` exception may be raised because of system stack overflow.
     enable_sparse (bool): Whether to enable sparsity feature. Default: False.
     For details of sparsity and sparse tensor, please check
-    `sparse tensor <https://www.mindspore.cn/docs/programming_guide/en/master/tensor.html#sparse-tensor>`_.
+    `sparse tensor <https://www.mindspore.cn/tutorials/en/master/beginner/tensor.html#sparse-tensor>`_.
     grad_for_scalar (bool): Whether to get gradient for scalar. Default: False.
     When grad_for_scalar is set to True, the function's scalar input can be derived.
     The default value is False. Because the back-end does not support scaling operations currently,
@@ -21,7 +21,7 @@ Besides, this module provides APIs to sample data while loading.
 We can enable cache in most of the dataset with its key arguments 'cache'. Please notice that cache is not supported
 on Windows platform yet. Do not use it while loading and processing data on Windows. More introductions and limitations
-can refer `Single-Node Tensor Cache <https://www.mindspore.cn/docs/programming_guide/en/master/cache.html>`_.
+can refer `Single-Node Tensor Cache <https://www.mindspore.cn/tutorials/experts/en/master/data_engine/cache.html>`_.
 Common imported modules in corresponding API examples are as follows:
@@ -130,7 +130,7 @@ class WaitedDSCallback(Callback, DSCallback):
     r"""
     Abstract base class used to build dataset callback classes that are synchronized with the training callback class
     `mindspore.train.callback \
-    <https://mindspore.cn/docs/api/en/master/api_python/mindspore.train.html#mindspore.train.callback.Callback>`_.
+    <https://www.mindspore.cn/docs/en/master/api_python/mindspore.train.html#mindspore.train.callback.Callback>`_.
     It can be used to execute a custom callback method before a step or an epoch, such as
     updating the parameters of operators according to the loss of the previous training epoch in auto augmentation.
@@ -140,7 +140,7 @@ class WaitedDSCallback(Callback, DSCallback):
     `device_number`, `list_callback`, `cur_epoch_num`, `cur_step_num`, `dataset_sink_mode`,
     `net_outputs`, etc., see
     `mindspore.train.callback \
-    <https://mindspore.cn/docs/api/en/master/api_python/mindspore.train.html#mindspore.train.callback.Callback>`_.
+    <https://www.mindspore.cn/docs/en/master/api_python/mindspore.train.html#mindspore.train.callback.Callback>`_.
     Users can obtain the dataset pipeline context through `ds_run_context`, including
     `cur_epoch_num`, `cur_step_num_in_epoch` and `cur_step_num`.
@@ -26,8 +26,8 @@ class DatasetCache:
     """
     A client to interface with tensor caching service.
-    For details, please check `Tutorial <https://www.mindspore.cn/docs/programming_guide/en/master/enable_cache.html>`_,
-    `Programming guide <https://www.mindspore.cn/docs/programming_guide/en/master/cache.html>`_.
+    For details, please check `Tutorial <https://www.mindspore.cn/
+    tutorials/experts/en/master/data_engine/enable_cache.html>`_.
     Args:
         session_id (int): A user assigned session id for the current pipeline.
@@ -16,7 +16,7 @@
 This file contains contains basic classes that help users do flexible dataset loading.
 You can define your own dataset loading class, and use GeneratorDataset to help load data.
 You can refer to the
-`tutorial <https://www.mindspore.cn/docs/programming_guide/en/master/dataset_loading.html#loading-user-defined-dataset>`
+`tutorial <https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset/custom.html#loading-user-defined-dataset>`
 to help define your dataset loading.
 After declaring the dataset object, you can further apply dataset operations
 (e.g. filter, skip, concat, map, batch) on it.
@@ -75,7 +75,8 @@ class GraphData:
     Support reading graph datasets like Cora, Citeseer and PubMed.
     About how to load raw graph dataset into MindSpore please
-    refer to `Loading Graph Dataset <https://mindspore.cn/docs/programming_guide/zh-CN/master/load_dataset_gnn.html>`_.
+    refer to `Loading Graph Dataset <https://www.mindspore.cn/tutorials/zh-CN/
+    master/advanced/dataset/enhanced_graph_data.html>`_.
     Args:
         dataset_file (str): One of file names in the dataset.
@@ -269,7 +269,8 @@ class ReLU(Cell):
     Activation_function#/media/File:Activation_rectified_linear.svg>`_ .
     Inputs:
-        - **x** (Tensor) - The input of ReLU is a Tensor of any dimension. The data type is `number <https://www.mindspore.cn/docs/api/en/master/api_python/mindspore.html#mindspore.dtype>`_ .
+        - **x** (Tensor) - The input of ReLU is a Tensor of any dimension. The data type is `number <https://www.mind
+          spore.cn/docs/en/master/api_python/mindspore.html#mindspore.dtype>`_ .
     Outputs:
         Tensor, with the same type and shape as the `x`.
@@ -181,7 +181,7 @@ class Flatten(Cell):
     Inputs:
         - **x** (Tensor) - The input Tensor to be flattened. The data type is
-          `number <https://www.mindspore.cn/docs/api/en/master/api_python/mindspore.html#mindspore.dtype>`_ .
+          `number <https://www.mindspore.cn/docs/en/master/api_python/mindspore.html#mindspore.dtype>`_ .
           The shape is :math:`(N, *)` , where :math:`*` means any number of additional dimensions
           and the shape can't be ().
@@ -1027,7 +1027,7 @@ class Tril(Cell):
     Inputs:
         - **x** (Tensor) - The input tensor. The data type is
-          `number <https://www.mindspore.cn/docs/api/en/master/api_python/mindspore.html#mindspore.dtype>`_.
+          `number <https://www.mindspore.cn/docs/en/master/api_python/mindspore.html#mindspore.dtype>`_.
         - **k** (Int) - The index of diagonal. Default: 0. If the dimensions of the input matrix are d1 and d2,
           the range of k should be in [-min(d1, d2)+1, min(d1, d2)-1], and the output value will be the same as the
           input `x` when `k` is out of range.
@@ -106,7 +106,7 @@ class _CellListBase:
 class SequentialCell(Cell):
     """
     Sequential Cell container. For more details about Cell, please refer to
-    `Cell <https://www.mindspore.cn/docs/api/en/master/api_python/nn/mindspore.nn.Cell.html#mindspore.nn.Cell>`_.
+    `Cell <https://www.mindspore.cn/docs/en/master/api_python/nn/mindspore.nn.Cell.html#mindspore.nn.Cell>`_.
     A list of Cells will be added to it in the order they are passed in the constructor.
     Alternatively, an ordered dict of cells can also be passed in.
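The SequentialCell container described in the hunk above applies its cells in registration order, each consuming the previous output. A minimal plain-Python sketch of that idea, with ordinary callables standing in for `mindspore.nn.Cell` instances (the class and variable names here are illustrative, not the MindSpore API):

```python
class SequentialSketch:
    """Applies callables in the order they were passed, like a sequential container."""

    def __init__(self, *layers):
        self.layers = list(layers)

    def __call__(self, x):
        # Each layer consumes the previous layer's output.
        for layer in self.layers:
            x = layer(x)
        return x


double = lambda v: v * 2
shift = lambda v: v + 1
net = SequentialSketch(double, shift)
```

Calling `net(3)` runs `double` first and `shift` second, mirroring constructor order.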
@@ -280,7 +280,7 @@ class SequentialCell(Cell):
 class CellList(_CellListBase, Cell):
     """
     Holds Cells in a list. For more details about Cell, please refer to
-    `Cell <https://www.mindspore.cn/docs/api/en/master/api_python/nn/mindspore.nn.Cell.html#mindspore.nn.Cell>`_.
+    `Cell <https://www.mindspore.cn/docs/en/master/api_python/nn/mindspore.nn.Cell.html#mindspore.nn.Cell>`_.
     CellList can be used like a regular Python list, the Cells it contains have been initialized.
@@ -463,8 +463,8 @@ class AdamWeightDecay(Optimizer):
     There is usually no connection between a optimizer and mixed precision. But when `FixedLossScaleManager` is used
     and `drop_overflow_update` in `FixedLossScaleManager` is set to False, optimizer needs to set the 'loss_scale'.
     As this optimizer has no argument of `loss_scale`, so `loss_scale` needs to be processed by other means, refer
-    document `LossScale <https://www.mindspore.cn/docs/programming_guide/zh-CN/master/lossscale.html>`_ to process
-    `loss_scale` correctly.
+    document `LossScale <https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html>`_ to
+    process `loss_scale` correctly.
     If parameters are not grouped, the `weight_decay` in optimizer will be applied on the network parameters without
     'beta' or 'gamma' in their names. Users can group parameters to change the strategy of decaying weight. When
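The docstring above notes that AdamWeightDecay has no `loss_scale` argument, so when `FixedLossScaleManager` is used with `drop_overflow_update=False` the scaling must be handled outside the optimizer. A minimal sketch of that general loss-scaling pattern (function names are illustrative, not MindSpore's API):

```python
def scale_loss(loss, loss_scale):
    # Scale the loss up before backpropagation so small fp16 gradients
    # do not underflow to zero.
    return loss * loss_scale


def unscale_grads(grads, loss_scale):
    # Divide the gradients by the same factor before the optimizer applies
    # them, recovering their true magnitude.
    return [g / loss_scale for g in grads]
```

With a scale of 128, a loss of 2.0 is backpropagated as 256.0 and the resulting gradients are divided by 128 before the update.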
@@ -227,8 +227,8 @@ class Lamb(Optimizer):
     There is usually no connection between a optimizer and mixed precision. But when `FixedLossScaleManager` is used
     and `drop_overflow_update` in `FixedLossScaleManager` is set to False, optimizer needs to set the 'loss_scale'.
     As this optimizer has no argument of `loss_scale`, so `loss_scale` needs to be processed by other means, refer
-    document `LossScale <https://www.mindspore.cn/docs/programming_guide/zh-CN/master/lossscale.html>`_ to process
-    `loss_scale` correctly.
+    document `LossScale <https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html>`_ to
+    process `loss_scale` correctly.
     If parameters are not grouped, the `weight_decay` in optimizer will be applied on the network parameters without
     'beta' or 'gamma' in their names. Users can group parameters to change the strategy of decaying weight. When
@@ -892,7 +892,7 @@ class Gather(Primitive):
         out of range.
         2.The data type of input_params cannot be
-        `bool_ <https://www.mindspore.cn/docs/api/en/master/api_python/mindspore.html#mindspore.dtype>`_ on Ascend
+        `bool_ <https://www.mindspore.cn/docs/en/master/api_python/mindspore.html#mindspore.dtype>`_ on Ascend
         platform currently.
     Inputs:
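The Gather note above says indices must lie in `[0, input_param.shape[axis])`, with out-of-range behavior undefined. A plain-Python sketch of that selection rule for axis 0, with an explicit bounds check instead of undefined behavior (the function name is illustrative, not the MindSpore operator):

```python
def gather_sketch(input_params, input_indices):
    """Select rows of input_params at the given indices (axis 0 only)."""
    for i in input_indices:
        # The real operator leaves out-of-range indices undefined; here we
        # make the constraint explicit by raising.
        if not 0 <= i < len(input_params):
            raise IndexError(f"index {i} out of range [0, {len(input_params)})")
    return [input_params[i] for i in input_indices]
```

Indices may repeat and appear in any order; the output has one entry per index.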
@@ -1289,7 +1289,7 @@ class Size(PrimitiveWithInfer):
     Inputs:
         - **input_x** (Tensor) - Input parameters, the shape of tensor is :math:`(x_1, x_2, ..., x_R)`. The data type is
-          `number <https://www.mindspore.cn/docs/api/en/master/api_python/mindspore.html#mindspore.dtype>`_.
+          `number <https://www.mindspore.cn/docs/en/master/api_python/mindspore.html#mindspore.dtype>`_.
     Outputs:
         int. A scalar representing the elements' size of `input_x`, tensor is the number of elements
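Size, as documented above, returns the total element count of a tensor of shape :math:`(x_1, x_2, ..., x_R)`, i.e. the product of the dimensions. Sketched over a plain shape tuple (illustrative, not the MindSpore primitive):

```python
def size_sketch(shape):
    """Total number of elements for a tensor of the given shape tuple."""
    total = 1
    for dim in shape:
        total *= dim
    return total
```

An empty shape tuple (a scalar tensor) yields 1, matching the usual convention.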
@@ -1333,8 +1333,8 @@ class Fill(PrimitiveWithInfer):
     Inputs:
         - **type** (mindspore.dtype) - The specified type of output tensor. The data type only supports
-          `bool_ <https://www.mindspore.cn/docs/api/en/master/api_python/mindspore.html#mindspore.dtype>`_ and
-          `number <https://www.mindspore.cn/docs/api/en/master/api_python/mindspore.html#mindspore.dtype>`_ .
+          `bool_ <https://www.mindspore.cn/docs/en/master/api_python/mindspore.html#mindspore.dtype>`_ and
+          `number <https://www.mindspore.cn/docs/en/master/api_python/mindspore.html#mindspore.dtype>`_ .
         - **shape** (tuple[int]) - The specified shape of output tensor.
         - **value** (Union(number.Number, bool)) - Value to fill the returned tensor.
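Fill, per the docstring above, builds a tensor of a given shape with every element equal to `value`. A recursive nested-list sketch of that semantics, ignoring dtype handling (illustrative, not the MindSpore primitive):

```python
def fill_sketch(shape, value):
    """Build a nested list of the given shape with every element set to value."""
    if not shape:
        # Empty shape: a scalar, which is just the value itself.
        return value
    return [fill_sketch(shape[1:], value) for _ in range(shape[0])]
```

For example, `fill_sketch((2, 3), 0.5)` produces a 2x3 grid of 0.5 values.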
@@ -49,7 +49,7 @@ class ReduceOp:
     The user needs to preset
     communication environment variables before running the following example, please check the details on the
     official website of `MindSpore \
-    <https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.ops.html#communication-operators>`_.
+    <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.ops.html#communication-operators>`_.
     Supported Platforms:
         ``Ascend`` ``GPU``
@@ -104,7 +104,7 @@ class AllReduce(PrimitiveWithInfer):
     The tensors must have the same shape and format in all processes of the collection. The user needs to preset
     communication environment variables before running the following example, please check the details on the
     official website of `MindSpore \
-    <https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.ops.html#communication-operators>`_.
+    <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.ops.html#communication-operators>`_.
     Args:
         op (str): Specifies an operation used for element-wise reductions,
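AllReduce, as documented above, reduces same-shaped tensors from every process element-wise and hands every process the same result. A single-process simulation of a sum all-reduce, with lists standing in for per-rank tensors (the real operator needs the communication environment described in the hunk; this sketch is illustrative only):

```python
def all_reduce_sum_sketch(per_rank_tensors):
    """Simulate a sum all-reduce over flat per-rank tensors of equal length."""
    # Element-wise sum across ranks.
    reduced = [sum(vals) for vals in zip(*per_rank_tensors)]
    # Every rank receives its own copy of the same reduced tensor.
    return [list(reduced) for _ in per_rank_tensors]
```

With two ranks holding `[1, 2]` and `[3, 4]`, both ranks end up with `[4, 6]`.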
@@ -183,7 +183,7 @@ class AllGather(PrimitiveWithInfer):
     The tensors must have the same shape and format in all processes of the collection. The user needs to preset
     communication environment variables before running the following example, please check the details on the
     official website of `MindSpore \
-    <https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.ops.html#communication-operators>`_.
+    <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.ops.html#communication-operators>`_.
     Args:
         group (str): The communication group to work on. Default: "GlobalComm.WORLD_COMM_GROUP".
@@ -389,7 +389,7 @@ class ReduceScatter(PrimitiveWithInfer):
     The tensors must have the same shape and format in all processes of the collection. The user needs to preset
     communication environment variables before running the following example, please check the details on the
     official website of `MindSpore \
-    <https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.ops.html#communication-operators>`_.
+    <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.ops.html#communication-operators>`_.
     Args:
         op (str): Specifies an operation used for element-wise reductions,
@@ -523,7 +523,7 @@ class Broadcast(PrimitiveWithInfer):
     The tensors must have the same shape and format in all processes of the collection. The user needs to preset
     communication environment variables before running the following example, please check the details on the
     official website of `MindSpore \
-    <https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.ops.html#communication-operators>`_.
+    <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.ops.html#communication-operators>`_.
     Args:
         root_rank (int): Source rank. Required in all processes except the one
@@ -660,11 +660,11 @@ class NeighborExchange(Primitive):
     The user needs to preset
     communication environment variables before running the following example, please check the details on the
     official website of `MindSpore \
-    <https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.ops.html#communication-operators>`_.
+    <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.ops.html#communication-operators>`_.
     This operator requires a full-mesh network topology, each device has the same vlan id, and the ip & mask are
     in the same subnet, please check the `details \
-    <https://www.mindspore.cn/docs/programming_guide/zh-CN/master/distributed_training_ops.html#id2>`_.
+    <https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/distributed_training_ops.html#id2>`_.
     Args:
         send_rank_ids (list(int)): Ranks which the data is sent to.
@@ -736,11 +736,11 @@ class AlltoAll(PrimitiveWithInfer):
     The tensors must have the same shape and format in all processes of the collection. The user needs to preset
     communication environment variables before running the following example, please check the details on the
     official website of `MindSpore \
-    <https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.ops.html#communication-operators>`_.
+    <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.ops.html#communication-operators>`_.
     This operator requires a full-mesh network topology, each device has the same vlan id, and the ip & mask are
     in the same subnet, please check the `details \
-    <https://www.mindspore.cn/docs/programming_guide/zh-CN/master/distributed_training_ops.html#id2>`_.
+    <https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/distributed_training_ops.html#id2>`_.
     Args:
         split_count (int): On each process, divide blocks into split_count number.
@@ -827,11 +827,11 @@ class NeighborExchangeV2(Primitive):
     The user needs to preset
     communication environment variables before running the following example, please check the details on the
     official website of `MindSpore \
-    <https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.ops.html#communication-operators>`_.
+    <https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.ops.html#communication-operators>`_.
     This operator requires a full-mesh network topology, each device has the same vlan id, and the ip & mask are
     in the same subnet, please check the `details \
-    <https://www.mindspore.cn/docs/programming_guide/zh-CN/master/distributed_training_ops.html#id2>`_.
+    <https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/distributed_training_ops.html#id2>`_.
     Args:
         send_rank_ids (list(int)): Ranks which the data is sent to. 8 rank_ids represents 8 directions, if one
@@ -199,8 +199,8 @@ class Add(_MathBinaryOp):
     Inputs:
         - **x** (Union[Tensor, number.Number, bool]) - The first input is a number.Number or
           a bool or a tensor whose data type is
-          `number <https://www.mindspore.cn/docs/api/en/master/api_python/mindspore.html#mindspore.dtype>`_ or
-          `bool_ <https://www.mindspore.cn/docs/api/en/master/api_python/mindspore.html#mindspore.dtype>`_.
+          `number <https://www.mindspore.cn/docs/en/master/api_python/mindspore.html#mindspore.dtype>`_ or
+          `bool_ <https://www.mindspore.cn/docs/en/master/api_python/mindspore.html#mindspore.dtype>`_.
         - **y** (Union[Tensor, number.Number, bool]) - The second input, when the first input is a Tensor,
           the second input should be a number.Number or bool value, or a Tensor whose data type is number or bool\_.
          When the first input is Scalar, the second input must be a Tensor whose data type is number or bool\_.
@@ -1610,8 +1610,8 @@ class AddN(Primitive):
     Inputs:
         - **x** (Union(tuple[Tensor], list[Tensor])) - A tuple or list composed of Tensor, the data type is
-          `bool_ <https://www.mindspore.cn/docs/api/en/master/api_python/mindspore.html#mindspore.dtype>`_ or
-          `number <https://www.mindspore.cn/docs/api/en/master/api_python/mindspore.html#mindspore.dtype>`_ .
+          `bool_ <https://www.mindspore.cn/docs/en/master/api_python/mindspore.html#mindspore.dtype>`_ or
+          `number <https://www.mindspore.cn/docs/en/master/api_python/mindspore.html#mindspore.dtype>`_ .
     Outputs:
         Tensor, has the same shape and dtype as each Tensor of `x`.
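AddN, per the docstring above, sums a tuple or list of same-shaped tensors element-wise, producing one tensor of the same shape. Sketched over flat Python lists (illustrative, not the MindSpore primitive):

```python
def addn_sketch(tensors):
    """Element-wise sum of a list of equal-length flat tensors."""
    return [sum(elems) for elems in zip(*tensors)]
```

Three inputs `[1, 2]`, `[3, 4]`, `[5, 6]` reduce to the single tensor `[9, 12]`.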
@@ -1906,8 +1906,8 @@ class Sub(_MathBinaryOp):
     Inputs:
         - **x** (Union[Tensor, number.Number, bool]) - The first input is a number.Number or
           a bool or a tensor whose data type is
-          `number <https://www.mindspore.cn/docs/api/en/master/api_python/mindspore.html#mindspore.dtype>`_ or
-          `bool_ <https://www.mindspore.cn/docs/api/en/master/api_python/mindspore.html#mindspore.dtype>`_.
+          `number <https://www.mindspore.cn/docs/en/master/api_python/mindspore.html#mindspore.dtype>`_ or
+          `bool_ <https://www.mindspore.cn/docs/en/master/api_python/mindspore.html#mindspore.dtype>`_.
         - **y** (Union[Tensor, number.Number, bool]) - The second input, when the first input is a Tensor,
           the second input should be a number.Number or bool value, or a Tensor whose data type is number or bool\_.
           When the first input is Scalar, the second input must be a Tensor whose data type is number or bool\_.
@@ -1960,8 +1960,8 @@ class Mul(_MathBinaryOp):
     Inputs:
         - **x** (Union[Tensor, number.Number, bool]) - The first input is a number.Number or
           a bool or a tensor whose data type is
-          `number <https://www.mindspore.cn/docs/api/en/master/api_python/mindspore.html#mindspore.dtype>`_ or
-          `bool_ <https://www.mindspore.cn/docs/api/en/master/api_python/mindspore.html#mindspore.dtype>`_.
+          `number <https://www.mindspore.cn/docs/en/master/api_python/mindspore.html#mindspore.dtype>`_ or
+          `bool_ <https://www.mindspore.cn/docs/en/master/api_python/mindspore.html#mindspore.dtype>`_.
         - **y** (Union[Tensor, number.Number, bool]) - The second input, when the first input is a Tensor,
           the second input should be a number.Number or bool value, or a Tensor whose data type is number or bool\_.
           When the first input is Scalar, the second input must be a Tensor whose data type is number or bool\_.
@@ -2262,8 +2262,8 @@ class Pow(Primitive):
     Inputs:
         - **x** (Union[Tensor, number.Number, bool]) - The first input is a number.Number or
           a bool or a tensor whose data type is
-          `number <https://www.mindspore.cn/docs/api/en/master/api_python/mindspore.html#mindspore.dtype>`_ or
-          `bool_ <https://www.mindspore.cn/docs/api/en/master/api_python/mindspore.html#mindspore.dtype>`_.
+          `number <https://www.mindspore.cn/docs/en/master/api_python/mindspore.html#mindspore.dtype>`_ or
+          `bool_ <https://www.mindspore.cn/docs/en/master/api_python/mindspore.html#mindspore.dtype>`_.
         - **y** (Union[Tensor, number.Number, bool]) - The second input, when the first input is a Tensor,
           the second input should be a number.Number or bool value, or a Tensor whose data type is number or bool\_.
           When the first input is Scalar, the second input must be a Tensor whose data type is number or bool\_.
@@ -2889,8 +2889,8 @@ class Div(_MathBinaryOp):
     Inputs:
         - **x** (Union[Tensor, number.Number, bool]) - The first input is a number.Number or
           a bool or a tensor whose data type is
-          `number <https://www.mindspore.cn/docs/api/en/master/api_python/mindspore.html#mindspore.dtype>`_ or
-          `bool_ <https://www.mindspore.cn/docs/api/en/master/api_python/mindspore.html#mindspore.dtype>`_.
+          `number <https://www.mindspore.cn/docs/en/master/api_python/mindspore.html#mindspore.dtype>`_ or
+          `bool_ <https://www.mindspore.cn/docs/en/master/api_python/mindspore.html#mindspore.dtype>`_.
         - **y** (Union[Tensor, number.Number, bool]) - The second input, when the first input is a Tensor,
           the second input should be a number.Number or bool value, or a Tensor whose data type is number or bool\_.
           When the first input is Scalar, the second input must be a Tensor whose data type is number or bool\_.
@@ -2976,8 +2976,8 @@ class DivNoNan(_MathBinaryOp):
     Inputs:
         - **x** (Union[Tensor, number.Number, bool]) - The first input is a number.Number or
           a bool or a tensor whose data type is
-          `number <https://www.mindspore.cn/docs/api/en/master/api_python/mindspore.html#mindspore.dtype>`_ or
-          `bool_ <https://www.mindspore.cn/docs/api/en/master/api_python/mindspore.html#mindspore.dtype>`_.
+          `number <https://www.mindspore.cn/docs/en/master/api_python/mindspore.html#mindspore.dtype>`_ or
+          `bool_ <https://www.mindspore.cn/docs/en/master/api_python/mindspore.html#mindspore.dtype>`_.
         - **y** (Union[Tensor, number.Number, bool]) - The second input is a number.Number or
           a bool when the first input is a tensor or a tensor whose data type is number or bool\_.
           When the first input is Scalar, the second input must be a Tensor whose data type is number or bool\_.
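DivNoNan divides element-wise but, unlike plain Div, returns 0 wherever the divisor is 0 instead of producing inf or nan. A flat-list sketch of that rule (illustrative, not the MindSpore primitive):

```python
def div_no_nan_sketch(x, y):
    """Element-wise division that yields 0.0 where the divisor is zero."""
    return [0.0 if b == 0 else a / b for a, b in zip(x, y)]
```

Dividing `[2.0, 4.0, 1.0]` by `[1.0, 0.0, 2.0]` gives `[2.0, 0.0, 0.5]`: the middle entry is zeroed rather than becoming inf.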
@@ -3476,8 +3476,8 @@ class Xlogy(Primitive):
     Inputs:
         - **x** (Union[Tensor, number.Number, bool]) - The first input is a number.Number or
           a bool or a tensor whose data type is
-          `number <https://www.mindspore.cn/docs/api/en/master/api_python/mindspore.html#mindspore.dtype>`_ or
-          `bool_ <https://www.mindspore.cn/docs/api/en/master/api_python/mindspore.html#mindspore.dtype>`_.
+          `number <https://www.mindspore.cn/docs/en/master/api_python/mindspore.html#mindspore.dtype>`_ or
+          `bool_ <https://www.mindspore.cn/docs/en/master/api_python/mindspore.html#mindspore.dtype>`_.
         - **y** (Union[Tensor, number.Number, bool]) - The second input is a number.Number or
           a bool when the first input is a tensor or a tensor whose data type is number or bool\_.
           When the first input is Scalar, the second input must be a Tensor whose data type is number or bool\_.
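Xlogy computes `x * log(y)` element-wise, with the usual xlogy convention that the result is 0 wherever `x` is 0, even where `log(y)` would diverge. A flat-list sketch of that behavior (illustrative, not the MindSpore primitive):

```python
import math

def xlogy_sketch(x, y):
    """Element-wise x * log(y), returning 0.0 where x == 0 by convention."""
    return [0.0 if a == 0 else a * math.log(b) for a, b in zip(x, y)]
```

So `xlogy_sketch([0.0], [0.0])` is `[0.0]` rather than nan, while nonzero `x` entries use the ordinary product.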
@@ -3904,8 +3904,8 @@ class Greater(_LogicBinaryOp):
     Inputs:
         - **x** (Union[Tensor, number.Number, bool]) - The first input is a number.Number or
           a bool or a tensor whose data type is
-          `number <https://www.mindspore.cn/docs/api/en/master/api_python/mindspore.html#mindspore.dtype>`_ or
-          `bool_ <https://www.mindspore.cn/docs/api/en/master/api_python/mindspore.html#mindspore.dtype>`_ .
+          `number <https://www.mindspore.cn/docs/en/master/api_python/mindspore.html#mindspore.dtype>`_ or
+          `bool_ <https://www.mindspore.cn/docs/en/master/api_python/mindspore.html#mindspore.dtype>`_ .
         - **y** (Union[Tensor, number.Number, bool]) - The second input, when the first input is a Tensor,
           the second input should be a number.Number or bool value, or a Tensor whose data type is number or bool\_.
           When the first input is Scalar, the second input must be a Tensor whose data type is number or bool\_.
@@ -4102,8 +4102,8 @@ class LessEqual(_LogicBinaryOp):
     Inputs:
         - **x** (Union[Tensor, number.Number, bool]) - The first input is a number.Number or
           a bool or a tensor whose data type is
-          `number <https://www.mindspore.cn/docs/api/en/master/api_python/mindspore.html#mindspore.dtype>`_ or
-          `bool_ <https://www.mindspore.cn/docs/api/en/master/api_python/mindspore.html#mindspore.dtype>`_.
+          `number <https://www.mindspore.cn/docs/en/master/api_python/mindspore.html#mindspore.dtype>`_ or
+          `bool_ <https://www.mindspore.cn/docs/en/master/api_python/mindspore.html#mindspore.dtype>`_.
         - **y** (Union[Tensor, number.Number, bool]) - The second input, when the first input is a Tensor,
           the second input should be a number.Number or bool value, or a Tensor whose data type is number or bool\_.
           When the first input is Scalar, the second input must be a Tensor whose data type is number or bool\_.
@@ -481,7 +481,7 @@ class ReLU(Primitive):
     Inputs:
         - **input_x** (Tensor) - Tensor of shape :math:`(N, *)`, where :math:`*` means, any number of
           additional dimensions, data type is
-          `number <https://www.mindspore.cn/docs/api/en/master/api_python/mindspore.html#mindspore.dtype>`_.
+          `number <https://www.mindspore.cn/docs/en/master/api_python/mindspore.html#mindspore.dtype>`_.

     Outputs:
         Tensor of shape :math:`(N, *)`, with the same type and shape as the `input_x`.
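The `ReLU` docstring in the hunk above notes the output has the same shape and type as `input_x`; the operator itself applies `max(0, x)` element-wise. A plain-Python illustration of that behavior (not the MindSpore operator, which works on tensors):

```python
def relu(values):
    """Element-wise rectified linear unit: negative entries become 0,
    non-negative entries pass through unchanged."""
    return [max(0.0, v) for v in values]
```

Note the output list has the same length as the input, mirroring the shape-preserving contract in the docstring.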
@@ -82,7 +82,7 @@ class Callback:
     Callback function can perform some operations before and after step or epoch.
     To create a custom callback, subclass Callback and override the method associated
     with the stage of interest. For details of Callback fusion, please check
-    `Callback <https://www.mindspore.cn/docs/programming_guide/zh-CN/master/custom_debugging_info.html>`_.
+    `Callback <https://www.mindspore.cn/tutorials/experts/en/master/debug/custom_debugging_info.html>`_.

     Examples:
         >>> import numpy as np
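The `Callback` docstring above describes the pattern: subclass `Callback` and override only the hook for the stage you care about. A simplified plain-Python sketch of that pattern (the hook names mirror MindSpore's, but the toy `train` loop here is hypothetical, not the real trainer):

```python
class Callback:
    """Base class: every stage hook is a no-op unless a subclass overrides it."""
    def step_end(self, run_context):
        pass
    def epoch_end(self, run_context):
        pass

class StepCounter(Callback):
    """Custom callback overriding only the stage of interest (end of step)."""
    def __init__(self):
        self.steps = 0
    def step_end(self, run_context):
        self.steps += 1

def train(num_steps, callbacks):
    """Toy loop that fires the hooks the way a trainer would."""
    for _ in range(num_steps):
        for cb in callbacks:
            cb.step_end(run_context=None)
    for cb in callbacks:
        cb.epoch_end(run_context=None)

counter = StepCounter()
train(3, [counter])
```

Because the base class provides no-op defaults, `StepCounter` only has to implement `step_end`; the unoverridden `epoch_end` hook is still called safely.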
@@ -249,7 +249,7 @@ class RunContext:
     Callback objects can stop the loop by calling request_stop() of run_context.
     This class needs to be used with :class:`mindspore.train.callback.Callback`.
     For details of Callback fusion, please check
-    `Callback <https://www.mindspore.cn/docs/programming_guide/zh-CN/master/custom_debugging_info.html>`_.
+    `Callback <https://www.mindspore.cn/tutorials/experts/en/master/debug/custom_debugging_info.html>`_.

     Args:
         original_args (dict): Holding the related information of model.
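As the `RunContext` docstring above says, a callback ends training early by calling `request_stop()`. A minimal sketch of the mechanism (a hypothetical simplified `RunContext` and loop, not the actual `mindspore.train` implementation):

```python
class RunContext:
    """Carries loop state; callbacks may ask the loop to stop via request_stop()."""
    def __init__(self):
        self._stop_requested = False
    def request_stop(self):
        self._stop_requested = True
    def get_stop_requested(self):
        return self._stop_requested

def run_loop(max_steps, stop_at):
    """Toy training loop that honors a stop request after each step."""
    ctx = RunContext()
    steps_done = 0
    for step in range(max_steps):
        steps_done += 1
        if step + 1 == stop_at:        # stand-in for a callback's decision
            ctx.request_stop()
        if ctx.get_stop_requested():   # the trainer checks and breaks out
            break
    return steps_done
```

The key design point is that the callback never breaks the loop itself; it only sets a flag on the shared context, and the loop decides when to honor it.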
@@ -1396,7 +1396,7 @@ def build_searched_strategy(strategy_filename):
     """
     Build strategy of every parameter in network. Used in the case of distributed inference.
     For details of it, please check:
-    `<https://www.mindspore.cn/docs/programming_guide/en/master/save_load_model_hybrid_parallel.html>`_.
+    `<https://www.mindspore.cn/tutorials/experts/en/master/parallel/save_load_model_hybrid_parallel.html>`_.

     Args:
         strategy_filename (str): Name of strategy file.
@@ -1447,7 +1447,7 @@ def merge_sliced_parameter(sliced_parameters, strategy=None):
     """
     Merge parameter slices into one parameter. Used in the case of distributed inference.
     For details of it, please check:
-    `<https://www.mindspore.cn/docs/programming_guide/en/master/save_load_model_hybrid_parallel.html>`_.
+    `<https://www.mindspore.cn/tutorials/experts/en/master/parallel/save_load_model_hybrid_parallel.html>`_.

     Args:
         sliced_parameters (list[Parameter]): Parameter slices in order of rank id.
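Conceptually, `merge_sliced_parameter` above reassembles one logical parameter from per-rank slices ordered by rank id. A sketch of that idea over plain lists (the real API operates on `mindspore.Parameter` objects and, with a `strategy`, on multi-dimensional layouts; this illustration assumes a simple split along the first axis):

```python
def merge_slices(sliced_parameters, strategy=None):
    """Concatenate per-rank slices into one flat parameter.

    sliced_parameters[i] is assumed to be the slice produced by rank i,
    so concatenating in list order reconstructs the original parameter."""
    merged = []
    for rank_slice in sliced_parameters:
        merged.extend(rank_slice)
    return merged
```

This is why the docstring stresses that the slices must be "in order of rank id": concatenation order determines where each slice lands in the merged parameter.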
@@ -1541,7 +1541,7 @@ def load_distributed_checkpoint(network, checkpoint_filenames, predict_strategy=
     """
     Load checkpoint into net for distributed predication. Used in the case of distributed inference.
     For details of distributed inference, please check:
-    `<https://www.mindspore.cn/docs/programming_guide/en/master/distributed_inference.html>`_.
+    `<https://www.mindspore.cn/tutorials/experts/en/master/parallel/distributed_inference.html>`_.

     Args:
         network (Cell): Network for distributed predication.
@@ -346,7 +346,7 @@ class SummaryRecord:

     Raises:
         TypeError: `step` is not int,or `train_network` is not `mindspore.nn.Cell \
-            <https://www.mindspore.cn/docs/api/en/master/api_python/nn/mindspore.nn.Cell.html#mindspore-nn-cell>`_ 。
+            <https://www.mindspore.cn/docs/en/master/api_python/nn/mindspore.nn.Cell.html#mindspore-nn-cell>`_ 。

     Examples:
         >>> from mindspore.train.summary import SummaryRecord